Sample records for multi-frame blind deconvolution

  1. Post-processing of adaptive optics images based on frame selection and multi-frame blind deconvolution

    NASA Astrophysics Data System (ADS)

    Tian, Yu; Rao, Changhui; Wei, Kai

    2008-07-01

    Adaptive optics can only partially compensate for image blur caused by atmospheric turbulence, owing to observing conditions and hardware restrictions. A post-processing method based on frame selection and multi-frame blind deconvolution is proposed to improve images partially corrected by adaptive optics. Frames suitable for blind deconvolution are selected from the recorded AO closed-loop frame series by a frame-selection technique, and multi-frame blind deconvolution is then performed. No a priori knowledge is required except for a positivity constraint, and the use of multiple frames improves the stability and convergence of the blind deconvolution algorithm. The method was applied to the restoration of images of celestial bodies observed with the 1.2 m telescope equipped with the 61-element adaptive optics system at Yunnan Observatory. The results show that the method can effectively improve images partially corrected by adaptive optics.
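
    A minimal sketch of the frame-selection step, assuming a normalized-variance sharpness score (the metric, keep_fraction, and function names are illustrative, not taken from the paper):

      import numpy as np

      def select_frames(frames, keep_fraction=0.3):
          # Rank short-exposure AO frames by a simple sharpness proxy
          # (variance normalized by mean intensity) and keep the best.
          scores = np.array([f.var() / (f.mean() ** 2 + 1e-12) for f in frames])
          order = np.argsort(scores)[::-1]
          n_keep = max(1, int(len(frames) * keep_fraction))
          return [frames[i] for i in order[:n_keep]]

    The selected frames would then feed a multi-frame blind deconvolution in which positivity is the only prior, e.g. by clipping each object update to non-negative values.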

  2. Multi-frame partially saturated images blind deconvolution

    NASA Astrophysics Data System (ADS)

    Ye, Pengzhao; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting

    2016-12-01

    When blurred images have saturated or over-exposed pixels, conventional blind deconvolution approaches often fail to estimate an accurate point spread function (PSF) and will introduce local ringing artifacts. In this paper, we propose a method to deal with this problem under a modified multi-frame blind deconvolution framework. First, in the kernel estimation step, a light-streak detection scheme using multi-frame blurred images is incorporated into the regularization constraint. Second, we deal with image regions affected by saturated pixels separately, by modeling a weighting matrix during each multi-frame deconvolution iteration. Both synthetic and real-world examples show that more accurate PSFs can be estimated, and the restored images have richer details and fewer artifacts compared to state-of-the-art methods.
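
    A rough sketch of a saturation-aware update, assuming a simple binary weight and a Richardson-Lucy-style iteration (the paper re-estimates a weighting matrix at each iteration; the threshold and function names here are illustrative):

      import numpy as np
      from scipy.signal import fftconvolve

      def weighted_rl_step(estimate, blurred, psf, sat_level=0.98):
          # Down-weight saturated pixels so they do not bias the update:
          # w = 0 at saturated pixels, 1 elsewhere.
          w = (blurred < sat_level).astype(float)
          model = fftconvolve(estimate, psf, mode="same")
          # Use a neutral ratio of 1.0 wherever pixels are saturated.
          ratio = w * blurred / (model + 1e-12) + (1.0 - w)
          return estimate * fftconvolve(ratio, psf[::-1, ::-1], mode="same")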

  3. Adaptive Optics Image Restoration Based on Frame Selection and Multi-frame Blind Deconvolution

    NASA Astrophysics Data System (ADS)

    Tian, Yu; Rao, Chang-hui; Wei, Kai

    Restricted by the observational conditions and the hardware, adaptive optics can only make a partial correction of optical images blurred by atmospheric turbulence. A post-processing method based on frame selection and multi-frame blind deconvolution is proposed for the restoration of high-resolution adaptive optics images. By frame selection we mean that the degraded (blurred) images taking part in the iterative blind deconvolution calculation are selected first; the deconvolution itself needs no a priori knowledge other than a positivity constraint. This method has been applied to the restoration of stellar images observed by the 61-element adaptive optics system installed on the Yunnan Observatory 1.2 m telescope. The experimental results indicate that this method can effectively compensate for the residual errors of the adaptive optics system in the image, and the restored image can reach diffraction-limited quality.

  4. Adaptive optics image restoration based on frame selection and multi-frame blind deconvolution

    NASA Astrophysics Data System (ADS)

    Tian, Y.; Rao, C. H.; Wei, K.

    2008-10-01

    Adaptive optics can only partially compensate for image blur caused by atmospheric turbulence, owing to observing conditions and hardware restrictions. A post-processing method based on frame selection and multi-frame blind deconvolution is proposed to improve images partially corrected by adaptive optics. The appropriate frames, picked out by the frame-selection technique, are deconvolved. No a priori knowledge is required except a positivity constraint. The method has been applied to the restoration of images of celestial bodies observed by the 1.2 m telescope equipped with the 61-element adaptive optics system at Yunnan Observatory. The results showed that the method can effectively improve images partially corrected by adaptive optics.

  5. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics.

    PubMed

    Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan

    2017-04-06

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
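
    For independent frames under Poisson statistics, the joint log-likelihood referred to above has the standard form (our notation: d_k the K observed frames, h_k the per-frame PSFs, o the object):

      \ln L(o) = \sum_{k=1}^{K} \sum_{x} \Big( d_k(x)\,\ln\big[(h_k * o)(x)\big] - (h_k * o)(x) - \ln d_k(x)! \Big)

    Maximizing this over o, plus a regularization term, leads to Richardson-Lucy-type multiplicative updates.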

  6. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics

    PubMed Central

    Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan

    2017-01-01

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods. PMID:28383503

  7. New Physical Constraints for Multi-Frame Blind Deconvolution

    DTIC Science & Technology

    2014-12-10

    Report-form metadata fragments only: investigators include Dr. Julian Christou (Large Binocular Telescope Observatory); performing organization listed as the Real Academia de Ciencias y Artes de Barcelona, Rambla de los Estudios 115, Barcelona 08002, Spain.

  8. Parallelization of a blind deconvolution algorithm

    NASA Astrophysics Data System (ADS)

    Matson, Charles L.; Borelli, Kathy J.

    2006-09-01

    Often it is of interest to deblur imagery in order to obtain higher-resolution images. Deblurring requires knowledge of the blurring function - information that is often not available separately from the blurred imagery. Blind deconvolution algorithms overcome this problem by jointly estimating both the high-resolution image and the blurring function from the blurred imagery. Because blind deconvolution algorithms are iterative in nature, they can take minutes to days to deblur an image, depending on how many frames of data are used for the deblurring and the platforms on which the algorithms are executed. Here we present our progress in parallelizing a blind deconvolution algorithm to increase its execution speed. This progress includes sub-frame parallelization and a code structure that is not specialized to a specific computer hardware architecture.
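
    The abstract describes sub-frame parallelization; a coarser frame-level sketch in Python shows the shape of the idea (deblur_frame is a stand-in work unit, not the authors' code):

      import numpy as np
      from multiprocessing import Pool
      from scipy.signal import fftconvolve

      def deblur_frame(args):
          # Stand-in per-frame work unit: a few Richardson-Lucy iterations
          # against a shared PSF guess.
          frame, psf, n_iter = args
          est = np.full_like(frame, frame.mean())
          for _ in range(n_iter):
              ratio = frame / (fftconvolve(est, psf, mode="same") + 1e-12)
              est *= fftconvolve(ratio, psf[::-1, ::-1], mode="same")
          return est

      def deblur_frames_parallel(frames, psf, n_iter=10, workers=4):
          # Distribute frames across worker processes (run under
          # `if __name__ == "__main__":` on Windows). Sub-frame
          # parallelization would additionally split each frame's FFTs.
          with Pool(workers) as pool:
              return pool.map(deblur_frame, [(f, psf, n_iter) for f in frames])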

  9. Optimal Dictionaries for Sparse Solutions of Multi-frame Blind Deconvolution

    DTIC Science & Technology

    2014-09-01

    Report excerpt (fragments): "...object is the Hubble Space Telescope (HST). As stated above, the dictionary training used the first 100 of the total of the simulated PSFs. The second set... diffraction-limited Hubble image and HubbleRE is the reconstructed image from the 100 simulated atmospheric-turbulence-degraded images of the HST."

  10. Deconvolution of astronomical images using SOR with adaptive relaxation.

    PubMed

    Vorontsov, S V; Strakhov, V N; Jefferies, S M; Borelli, K J

    2011-07-04

    We address the potential performance of the successive overrelaxation technique (SOR) in image deconvolution, focusing our attention on the restoration of astronomical images distorted by atmospheric turbulence. SOR is the classical Gauss-Seidel iteration, supplemented with relaxation. As indicated by earlier work, the convergence properties of SOR, and its ultimate performance in the deconvolution of blurred and noisy images, can be made competitive to other iterative techniques, including conjugate gradients, by a proper choice of the relaxation parameter. The question of how to choose the relaxation parameter, however, remained open, and in practical work one had to rely on experimentation. In this paper, using constructive (rather than exact) arguments, we suggest a simple strategy for choosing the relaxation parameter and for updating its value in consecutive iterations to optimize the performance of the SOR algorithm (and its positivity-constrained version, +SOR) at finite iteration counts. We suggest an extension of the algorithm to the notoriously difficult problem of "blind" deconvolution, where both the true object and the point-spread function have to be recovered from the blurred image. We report the results of numerical inversions with artificial and real data, where the algorithm is compared with techniques based on conjugate gradients. In all of our experiments +SOR provides the highest quality results. In addition +SOR is found to be able to detect moderately small changes in the true object between separate data frames: an important quality for multi-frame blind deconvolution, where stationarity of the object is a necessity.
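
    A minimal dense-matrix sketch of positivity-constrained SOR with a crude residual-based relaxation update (the adaptive strategy actually proposed in the paper is derived differently; names and constants here are illustrative):

      import numpy as np

      def plus_sor(A, b, omega=1.5, n_iter=200):
          # Gauss-Seidel sweeps with relaxation (SOR) and a positivity
          # projection (+SOR) for the linear system A x = b.
          x = np.zeros_like(b)
          d = np.diag(A)
          prev_res = np.inf
          for _ in range(n_iter):
              for i in range(len(b)):
                  sigma = A[i] @ x - d[i] * x[i]
                  x[i] = max(0.0, (1 - omega) * x[i] + omega * (b[i] - sigma) / d[i])
              res = np.linalg.norm(A @ x - b)
              if res > prev_res:
                  omega = max(1.0, 0.9 * omega)  # heuristic: relax less if diverging
              prev_res = res
          return x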

  11. Research on adaptive optics image restoration algorithm based on improved joint maximum a posteriori method

    NASA Astrophysics Data System (ADS)

    Zhang, Lijuan; Li, Yang; Wang, Junnan; Liu, Ying

    2018-03-01

    In this paper, we propose a point spread function (PSF) reconstruction method and a joint maximum a posteriori (JMAP) estimation method for adaptive optics image restoration. Using the JMAP method as the basic principle, we establish the joint log-likelihood function of multi-frame adaptive optics (AO) images based on Gaussian noise models for the images. To begin with, combining the observing conditions and AO system characteristics, a predicted PSF model for the wavefront phase effect is developed; then, we build up iterative solution formulas for the AO image based on our proposed algorithm, addressing the implementation of the multi-frame AO image joint deconvolution method. We conduct a series of experiments on simulated and real degraded AO images to evaluate our proposed algorithm. Compared with the Wiener iterative blind deconvolution (Wiener-IBD) algorithm and the Richardson-Lucy IBD algorithm, our algorithm has better restoration effects, including higher peak signal-to-noise ratio (PSNR) and Laplacian sum (LS) values. The research results have certain application value for actual AO image restoration.

  12. Incorporating LWIR Data into Multi-Frame Blind Deconvolution of Visible Imagery

    DTIC Science & Technology

    2015-10-18

    Report excerpt (flattened results table and figure caption; column headings not recoverable): the table lists LEO satellites, each with three percentage figures: Delta 1 Rocket Body, Fermi Gamma-ray Space Telescope (GLAST), Hubble Space Telescope (HST, Nights 1-3), and Iridium 82. Fig. 3 caption: (a) LWIR image of HST, (b) LWIR image converted...

  13. Distributed capillary adiabatic tissue homogeneity model in parametric multi-channel blind AIF estimation using DCE-MRI.

    PubMed

    Kratochvíla, Jiří; Jiřík, Radovan; Bartoš, Michal; Standara, Michal; Starčuk, Zenon; Taxt, Torfinn

    2016-03-01

    One of the main challenges in quantitative dynamic contrast-enhanced (DCE) MRI is estimation of the arterial input function (AIF). Usually, the signal from a single artery (ignoring contrast dispersion, partial volume effects and flow artifacts) or a population average of such signals (also ignoring variability between patients) is used. Multi-channel blind deconvolution is an alternative approach avoiding most of these problems. The AIF is estimated directly from the measured tracer concentration curves in several tissues. This contribution extends the published methods of multi-channel blind deconvolution by applying a more realistic model of the impulse residue function, the distributed capillary adiabatic tissue homogeneity model (DCATH). In addition, an alternative AIF model is used and several AIF-scaling methods are tested. The proposed method is evaluated on synthetic data with respect to the number of tissue regions and to the signal-to-noise ratio. Evaluation on clinical data (renal cell carcinoma patients before and after the beginning of the treatment) gave consistent results. An initial evaluation on clinical data indicates more reliable and less noise sensitive perfusion parameter estimates. Blind multi-channel deconvolution using the DCATH model might be a method of choice for AIF estimation in a clinical setup.

  14. Multi-limit unsymmetrical MLIBD image restoration algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Cheng, Yiping; Chen, Zai-wang; Bo, Chen

    2012-11-01

    A novel multi-limit unsymmetrical iterative blind deconvolution (MLIBD) algorithm is presented to enhance the performance of adaptive optics image restoration. The algorithm improves the reliability of iterative blind deconvolution by introducing a bandwidth limit into the frequency domain of the point spread function (PSF), and adopts dynamic estimation of the PSF support region to improve the convergence speed. The unsymmetrical factor is computed automatically to improve adaptivity. Image deconvolution experiments comparing Richardson-Lucy IBD with MLIBD were carried out; the results indicate that the iteration number is reduced by 22.4% and the peak signal-to-noise ratio is improved by 10.18 dB with the MLIBD method. The performance of the MLIBD algorithm is outstanding in the restoration of the FK5-857 adaptive optics images and the double-star adaptive optics images.
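
    A sketch of the bandwidth limit on the PSF estimate (cutoff in cycles per pixel; for a real system it would follow from the aperture, wavelength and focal length; function name is ours):

      import numpy as np

      def bandlimit_psf(psf, cutoff):
          # Zero all Fourier components of the PSF beyond the optical
          # cutoff frequency, then restore positivity and unit energy.
          H = np.fft.fft2(psf)
          fy = np.fft.fftfreq(psf.shape[0])[:, None]
          fx = np.fft.fftfreq(psf.shape[1])[None, :]
          H[np.hypot(fx, fy) > cutoff] = 0.0
          out = np.clip(np.real(np.fft.ifft2(H)), 0.0, None)
          return out / out.sum()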

  15. AIDA: an adaptive image deconvolution algorithm with application to multi-frame and three-dimensional data

    PubMed Central

    Hom, Erik F. Y.; Marchis, Franck; Lee, Timothy K.; Haase, Sebastian; Agard, David A.; Sedat, John W.

    2011-01-01

    We describe an adaptive image deconvolution algorithm (AIDA) for myopic deconvolution of multi-frame and three-dimensional data acquired through astronomical and microscopic imaging. AIDA is a reimplementation and extension of the MISTRAL method developed by Mugnier and co-workers and shown to yield object reconstructions with excellent edge preservation and photometric precision [J. Opt. Soc. Am. A 21, 1841 (2004)]. Written in Numerical Python with calls to a robust constrained conjugate gradient method, AIDA has significantly improved run times over the original MISTRAL implementation. Included in AIDA is a scheme to automatically balance maximum-likelihood estimation and object regularization, which significantly decreases the amount of time and effort needed to generate satisfactory reconstructions. We validated AIDA using synthetic data spanning a broad range of signal-to-noise ratios and image types and demonstrated the algorithm to be effective for experimental data from adaptive optics–equipped telescope systems and wide-field microscopy. PMID:17491626

  16. Motion correction of PET brain images through deconvolution: I. Theoretical development and analysis in software simulations

    NASA Astrophysics Data System (ADS)

    Faber, T. L.; Raghunath, N.; Tudorascu, D.; Votaw, J. R.

    2009-02-01

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. Existing correction methods that use known patient motion obtained from tracking devices either require multi-frame acquisitions, detailed knowledge of the scanner, or specialized reconstruction algorithms. A deconvolution algorithm has been developed that alleviates these drawbacks by using the reconstructed image to estimate the original non-blurred image using maximum-likelihood expectation-maximization (MLEM) techniques. A high-resolution digital phantom was created by shape-based interpolation of the digital Hoffman brain phantom. Three different sets of 20 movements were applied to the phantom. For each frame of the motion, sinograms with attenuation and three levels of noise were simulated and then reconstructed using filtered backprojection. The average of the 20 frames was considered the motion-blurred image, which was restored with the deconvolution algorithm. After correction, contrast increased from a mean of 2.0, 1.8 and 1.4 in the motion-blurred images, for the three increasing amounts of movement, to a mean of 2.5, 2.4 and 2.2. Mean error was reduced by an average of 55% with motion correction. In conclusion, deconvolution can be used for correction of motion blur when subject motion is known.
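
    A sketch of the deconvolution step, assuming the motion-blur kernel has already been built from the tracked motion (e.g. a normalized histogram of displacements); the multiplicative update mirrors Richardson-Lucy:

      import numpy as np
      from scipy.signal import fftconvolve

      def mlem_motion_correct(blurred, kernel, n_iter=50):
          # MLEM-style restoration of the motion-averaged reconstruction
          # with a known blur kernel (works for 2D or 3D arrays).
          est = np.full_like(blurred, max(blurred.mean(), 1e-6))
          k_flip = np.flip(kernel)  # adjoint of convolution
          for _ in range(n_iter):
              ratio = blurred / (fftconvolve(est, kernel, mode="same") + 1e-12)
              est *= fftconvolve(ratio, k_flip, mode="same")
          return est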

  17. Constrained maximum consistency multi-path mitigation

    NASA Astrophysics Data System (ADS)

    Smith, George B.

    2003-10-01

    Blind deconvolution algorithms can be useful as pre-processors for signal classification algorithms in shallow water. These algorithms remove the distortion of the signal caused by multipath propagation when no knowledge of the environment is available. A framework has been presented in which filters produce signal estimates from each data channel that are as consistent with each other as possible in a least-squares sense [Smith, J. Acoust. Soc. Am. 107 (2000)]. This framework provides a solution to the blind deconvolution problem. One implementation of this framework yields the cross-relation on which EVAM [Gurelli and Nikias, IEEE Trans. Signal Process. 43 (1995)] and Rietsch [Rietsch, Geophysics 62(6) (1997)] processing are based. In this presentation, partially blind implementations that have good noise stability properties are compared using Classification Operating Characteristics (CLOC) analysis. [Work supported by ONR under Program Element 62747N and NRL, Stennis Space Center, MS.]

  18. Minimum entropy deconvolution and blind equalisation

    NASA Technical Reports Server (NTRS)

    Satorius, E. H.; Mulligan, J. J.

    1992-01-01

    Relationships between minimum entropy deconvolution, developed primarily for geophysics applications, and blind equalization are pointed out. It is seen that a large class of existing blind equalization algorithms are directly related to the scale-invariant cost functions used in minimum entropy deconvolution. Thus the extensive analyses of these cost functions can be directly applied to blind equalization, including the important asymptotic results of Donoho.
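
    The classic scale-invariant cost at the heart of this connection is Wiggins' varimax (minimum-entropy) norm; a one-liner makes the scale invariance explicit (function name is ours):

      import numpy as np

      def varimax_norm(y):
          # Wiggins' minimum-entropy objective: maximized by spiky outputs,
          # unchanged under y -> c*y (numerator and denominator both scale
          # as c**4).
          y = np.asarray(y, dtype=float)
          return np.sum(y ** 4) / np.sum(y ** 2) ** 2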

  19. Space Imagery Enhancement Investigations; Software for Processing Middle Atmosphere Data

    DTIC Science & Technology

    2011-12-19

    Report excerpt (form fields and table of contents; only fragments are recoverable). Abstract: this report summarizes work related to optical superresolution for the ideal incoherent 1D spread function... Subject terms: optical superresolution, incoherent image eigensystem, image registration, multi-frame image reconstruction, deconvolution. Contents include "Superresolution-Related Investigations" and "Eigensystem Formulations".

  20. Partial Deconvolution with Inaccurate Blur Kernel.

    PubMed

    Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei

    2017-10-17

    Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of estimated blur kernel. And partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternatively. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.
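
    A minimal stand-in for the partial map: trust only Fourier entries of the estimated kernel whose magnitude is significant (the paper's detection rule is more elaborate than this simple threshold test; names are ours):

      import numpy as np

      def reliable_fourier_mask(kernel, rel_threshold=0.05):
          # 1 where the estimated kernel's Fourier coefficient is large
          # enough to be trusted, 0 where estimation error likely dominates.
          K = np.fft.fft2(kernel)
          return (np.abs(K) > rel_threshold * np.abs(K).max()).astype(float)

    The deconvolution would then be driven only through the masked (reliable) frequencies, with the rest handled by the image prior.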

  1. The research of multi-frame target recognition based on laser active imaging

    NASA Astrophysics Data System (ADS)

    Wang, Can-jin; Sun, Tao; Wang, Tin-feng; Chen, Juan

    2013-09-01

    Laser active imaging is suited to conditions such as no temperature difference between target and background, pitch-black night, and bad visibility. It can also be used to detect a faint target at long range or a small target in deep space, offering high definition and good contrast; in short, it is largely immune to the environment. However, because of the long distance, limited laser energy and atmospheric backscatter, it is impossible to illuminate the whole scene at once: the target in any single frame is unevenly or only partly illuminated, which makes recognition more difficult. At the same time, the speckle noise common in laser active imaging blurs the images. In this paper we study laser active imaging and propose a new target recognition method based on multi-frame images. First, multiple laser pulses are used to obtain sub-images of different parts of the scene. A denoising method combining a homomorphic filter with wavelet-domain SURE is used to suppress speckle noise, and blind deconvolution is introduced to obtain low-noise, clear sub-images. These sub-images are then registered and stitched to form a completely and uniformly illuminated scene image. After that, a new target recognition method based on contour moments is applied: the Canny operator is used to extract contours, and for each contour the seven invariant Hu moments are calculated to generate feature vectors. Finally, the feature vectors are fed into a double-hidden-layer BP neural network for classification. Experimental results indicate that the proposed algorithm achieves a high recognition rate and satisfactory real-time performance for laser active imaging.
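
    A sketch of the contour-moment feature stage using OpenCV (the Canny thresholds and log scaling are illustrative choices, not from the paper):

      import cv2
      import numpy as np

      def contour_hu_features(gray_image):
          # Extract the seven Hu invariant moments for each contour found
          # in the edge map; these vectors would feed the BP network.
          edges = cv2.Canny(gray_image, 50, 150)
          contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)
          feats = []
          for cnt in contours:
              hu = cv2.HuMoments(cv2.moments(cnt)).ravel()
              # Log-scale for dynamic range, keeping the sign.
              feats.append(-np.sign(hu) * np.log10(np.abs(hu) + 1e-30))
          return feats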

  2. A neural network approach for the blind deconvolution of turbulent flows

    NASA Astrophysics Data System (ADS)

    Maulik, R.; San, O.

    2017-11-01

    We present a single-layer feedforward artificial neural network architecture trained through a supervised learning approach for the deconvolution of flow variables from their coarse grained computations such as those encountered in large eddy simulations. We stress that the deconvolution procedure proposed in this investigation is blind, i.e. the deconvolved field is computed without any pre-existing information about the filtering procedure or kernel. This may be conceptually contrasted to the celebrated approximate deconvolution approaches where a filter shape is predefined for an iterative deconvolution process. We demonstrate that the proposed blind deconvolution network performs exceptionally well in the a-priori testing of both two-dimensional Kraichnan and three-dimensional Kolmogorov turbulence and shows promise in forming the backbone of a physics-augmented data-driven closure for the Navier-Stokes equations.
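
    A toy version of such a network with one hidden layer, mapping a patch of the filtered (coarse-grained) field to the unfiltered value at its center; sizes, learning rate and activation are our illustrative choices:

      import numpy as np

      def train_deconv_net(coarse, fine, patch=3, hidden=40,
                           lr=1e-2, epochs=500, seed=0):
          # Build (patch -> center value) training pairs from 2D fields.
          rng = np.random.default_rng(seed)
          r = patch // 2
          X, y = [], []
          for i in range(r, coarse.shape[0] - r):
              for j in range(r, coarse.shape[1] - r):
                  X.append(coarse[i - r:i + r + 1, j - r:j + r + 1].ravel())
                  y.append(fine[i, j])
          X, y = np.array(X), np.array(y)
          W1 = rng.normal(0.0, 0.1, (X.shape[1], hidden)); b1 = np.zeros(hidden)
          W2 = rng.normal(0.0, 0.1, hidden); b2 = 0.0
          for _ in range(epochs):  # plain gradient descent on squared error
              h = np.tanh(X @ W1 + b1)
              err = (h @ W2 + b2) - y
              gh = np.outer(err, W2) * (1 - h ** 2)
              W2 -= lr * h.T @ err / len(y); b2 -= lr * err.mean()
              W1 -= lr * X.T @ gh / len(y); b1 -= lr * gh.mean(axis=0)
          return W1, b1, W2, b2

    Because no filter kernel appears anywhere in the training loop, the deconvolution is blind in the sense the abstract describes.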

  3. Real-time blind image deconvolution based on coordinated framework of FPGA and DSP

    NASA Astrophysics Data System (ADS)

    Wang, Ze; Li, Hang; Zhou, Hua; Liu, Hongjun

    2015-10-01

    Image restoration takes a crucial place in several important application domains. With computation requirements increasing as the algorithms become much more complex, there has been a significant rise in the need for accelerated implementations. In this paper, we focus on an efficient real-time image processing system for an iterative blind deconvolution method based on the Richardson-Lucy (R-L) algorithm. We study the characteristics of the algorithm, and an image restoration processing system based on the coordinated framework of FPGA and DSP (CoFD) is presented. Single-precision floating-point processing units with small-scale cascades and dedicated FFT/IFFT processing modules are adopted to guarantee the accuracy of the processing. Finally, comparative experiments are carried out. The system can process a blurred image of 128×128 pixels within 32 milliseconds, and is up to three or four times faster than traditional multi-DSP systems.

  4. Further optimization of SeDDaRA blind image deconvolution algorithm and its DSP implementation

    NASA Astrophysics Data System (ADS)

    Wen, Bo; Zhang, Qiheng; Zhang, Jianlin

    2011-11-01

    An efficient algorithm for blind image deconvolution and its high-speed implementation are of great practical value. A further optimization of SeDDaRA is developed, from the algorithm structure to the numerical calculation methods. The main optimizations are: modularization of the structure for good implementation feasibility, reduction of the data computation and dependency of the 2D FFT/IFFT, and acceleration of the power operation by a segmented look-up table. The Fast SeDDaRA is then proposed, specialized for low complexity. As the final implementation, a hardware image restoration system is built using multi-DSP parallel processing. Experimental results show that the processing time and memory demand of Fast SeDDaRA decrease by at least 50%, and the data throughput of the image restoration system is over 7.8 Msps. The optimization is proved efficient and feasible, and Fast SeDDaRA is able to support real-time applications.

  5. Compensating Atmospheric Turbulence Effects at High Zenith Angles with Adaptive Optics Using Advanced Phase Reconstructors

    NASA Astrophysics Data System (ADS)

    Roggemann, M.; Soehnel, G.; Archer, G.

    Atmospheric turbulence degrades the resolution of images of space objects far beyond that predicted by diffraction alone. Adaptive optics telescopes have been widely used for compensating these effects, but as users seek to extend the envelopes of operation of adaptive optics telescopes to more demanding conditions, such as daylight operation, and operation at low elevation angles, the level of compensation provided will degrade. We have been investigating the use of advanced wave front reconstructors and post detection image reconstruction to overcome the effects of turbulence on imaging systems in these more demanding scenarios. In this paper we show results comparing the optical performance of the exponential reconstructor, the least squares reconstructor, and two versions of a reconstructor based on the stochastic parallel gradient descent algorithm in a closed loop adaptive optics system using a conventional continuous facesheet deformable mirror and a Hartmann sensor. The performance of these reconstructors has been evaluated under a range of source visual magnitudes and zenith angles ranging up to 70 degrees. We have also simulated satellite images, and applied speckle imaging, multi-frame blind deconvolution algorithms, and deconvolution algorithms that presume the average point spread function is known to compute object estimates. Our work thus far indicates that the combination of adaptive optics and post detection image processing will extend the useful envelope of the current generation of adaptive optics telescopes.

  6. New regularization scheme for blind color image deconvolution

    NASA Astrophysics Data System (ADS)

    Chen, Li; He, Yu; Yap, Kim-Hui

    2011-01-01

    This paper proposes a new regularization scheme to address blind color image deconvolution. Color images generally have a significant correlation among the red, green, and blue channels. Conventional blind monochromatic deconvolution algorithms handle each color channel independently, thereby ignoring the interchannel correlation present in color images. In view of this, a unified regularization scheme for the image is developed to recover edges of color images and reduce color artifacts. In addition, by using the color image properties, a spectral-based regularization operator is adopted to impose constraints on the blurs. Further, this paper proposes a reinforcement regularization framework that integrates a soft parametric learning term in addressing blind color image deconvolution. A blur modeling scheme is developed to evaluate the relevance of manifold parametric blur structures, and the information is integrated into the deconvolution scheme. An optimization procedure called alternating minimization is then employed to iteratively minimize the image- and blur-domain cost functions. Experimental results show that the method is able to achieve satisfactory restored color images under different blurring conditions.

  7. Parallel Implementation of a Frozen Flow Based Wavefront Reconstructor

    NASA Astrophysics Data System (ADS)

    Nagy, J.; Kelly, K.

    2013-09-01

    Obtaining high resolution images of space objects from ground based telescopes is challenging, often requiring the use of a multi-frame blind deconvolution (MFBD) algorithm to remove blur caused by atmospheric turbulence. In order for an MFBD algorithm to be effective, it is necessary to obtain a good initial estimate of the wavefront phase. Although wavefront sensors work well in low turbulence situations, they are less effective in high turbulence, such as when imaging in daylight, or when imaging objects that are close to the Earth's horizon. One promising approach, which has been shown to work very well in high turbulence settings, uses a frozen flow assumption on the atmosphere to capture the inherent temporal correlations present in consecutive frames of wavefront data. Exploiting these correlations can lead to more accurate estimation of the wavefront phase, and the associated PSF, which leads to more effective MFBD algorithms. However, with the current serial implementation, the approach can be prohibitively expensive in situations when it is necessary to use a large number of frames. In this poster we describe a parallel implementation that overcomes this constraint. The parallel implementation exploits sparse matrix computations, and uses the Trilinos package developed at Sandia National Laboratories. Trilinos provides a variety of core mathematical software for parallel architectures that has been designed using high quality software engineering practices. The package is open source, and portable to a variety of high-performance computing architectures.

  8. Simulation Study of Effects of the Blind Deconvolution on Ultrasound Image

    NASA Astrophysics Data System (ADS)

    He, Xingwu; You, Junchen

    2018-03-01

    Ultrasonic image restoration is an essential topic in medical ultrasound imaging. However, without sufficient and precise system knowledge, traditional image restoration methods based on prior knowledge of the system often fail to improve image quality. In this paper, we use simulated ultrasound images to examine the effectiveness of the blind deconvolution method for ultrasound image restoration. Experimental results demonstrate that blind deconvolution can be applied to ultrasound image restoration and achieves satisfactory results without precise prior knowledge, compared with the traditional image restoration method. Even with an inaccurate small initial PSF, the results show that blind deconvolution improves the overall quality of ultrasound images, giving much better SNR and image resolution; the time consumption of these methods shows no significant increase on a GPU platform.

  9. iSAP: Interactive Sparse Astronomical Data Analysis Packages

    NASA Astrophysics Data System (ADS)

    Fourt, O.; Starck, J.-L.; Sureau, F.; Bobin, J.; Moudden, Y.; Abrial, P.; Schmitt, J.

    2013-03-01

    iSAP consists of three programs, written in IDL, which together are useful for spherical data analysis. MR/S (MultiResolution on the Sphere) contains routines for wavelet, ridgelet and curvelet transforms on the sphere, and applications such as denoising on the sphere using wavelets and/or curvelets, Gaussianity tests and Independent Component Analysis on the sphere. MR/S has been designed for the PLANCK project, but can be used for many other applications. SparsePol (Polarized Spherical Wavelets and Curvelets) has routines for polarized wavelet, polarized ridgelet and polarized curvelet transforms on the sphere, and applications such as denoising on the sphere using wavelets and/or curvelets, Gaussianity tests and blind source separation on the sphere. SparsePol has been designed for the PLANCK project. MS-VSTS (Multi-Scale Variance Stabilizing Transform on the Sphere), designed initially for the FERMI project, is useful for spherical mono-channel and multi-channel data analysis when the data are contaminated by Poisson noise. It contains routines for wavelet/curvelet denoising, wavelet deconvolution, multichannel wavelet denoising and deconvolution.

  10. A frequency-domain seismic blind deconvolution based on Gini correlations

    NASA Astrophysics Data System (ADS)

    Wang, Zhiguo; Zhang, Bing; Gao, Jinghuai; Huo Liu, Qing

    2018-02-01

    In reflection seismic processing, seismic blind deconvolution is a challenging problem, especially when the signal-to-noise ratio (SNR) of the seismic record is low and the length of the seismic record is short. As a solution to this ill-posed inverse problem, we assume that the reflectivity sequence is independent and identically distributed (i.i.d.). To infer the i.i.d. relationships from seismic data, we first introduce the Gini correlations (GCs) to construct a new criterion for seismic blind deconvolution in the frequency domain. Owing to a unique feature, the GCs are robust, with higher tolerance of low-SNR data and less dependence on record length. Applications of the seismic blind deconvolution based on the GCs show its capacity to estimate the unknown seismic wavelet and the reflectivity sequence, whether on synthetic traces or field data, even with low SNR and short records.
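
    For reference, the (asymmetric) Gini correlation of random variables X and Y with marginal CDFs F_X, F_Y is conventionally defined as below; the criterion in the paper is built on sample versions of this quantity (notation ours):

      \mathrm{GC}(X, Y) = \frac{\operatorname{cov}\!\big(X,\, F_Y(Y)\big)}{\operatorname{cov}\!\big(X,\, F_X(X)\big)}

    Mixing one variable's values with the other's ranks is what gives the criterion its robustness to heavy noise relative to the ordinary Pearson correlation.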

  11. A blind deconvolution method based on L1/L2 regularization prior in the gradient space

    NASA Astrophysics Data System (ADS)

    Cai, Ying; Shi, Yu; Hua, Xia

    2018-02-01

    In the process of image restoration, the result can differ greatly from the real image because of noise. In order to solve this ill-posed problem, a blind deconvolution method based on an L1/L2 regularization prior in the gradient domain is proposed. The method first adds a function to the prior knowledge, namely the ratio of the L1 norm to the L2 norm, and takes this function as the penalty term in the high-frequency domain of the image. The function is then updated iteratively, and the iterative shrinkage-thresholding algorithm is applied to solve for the high-frequency image. We consider that information in the gradient domain is better suited to blur-kernel estimation, so the blur kernel is estimated in the gradient domain; this problem can be implemented quickly in the frequency domain by the fast Fourier transform. In addition, to improve the effectiveness of the algorithm, a multi-scale iterative optimization method is added. The proposed blind deconvolution method based on L1/L2 regularization priors in the gradient space obtains a unique and stable solution in the image restoration process, which not only keeps the edges and details of the image but also ensures the accuracy of the results.
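
    A generic form of the normalized-sparsity objective described above (following Krishnan et al.'s L1/L2 measure; the exact functional and weights in this paper may differ), with u the latent image, k the kernel, and b the blurred image:

      \min_{u,\,k}\ \frac{1}{2}\,\| k * u - b \|_2^2 \;+\; \lambda\,\frac{\|\nabla u\|_1}{\|\nabla u\|_2} \;+\; \gamma\,\|k\|_1

    Dividing the L1 norm by the L2 norm makes the penalty scale-invariant, so it genuinely favors sparse gradients rather than merely dim images.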

  12. Source Pulse Estimation of Mine Shock by Blind Deconvolution

    NASA Astrophysics Data System (ADS)

    Makowski, R.

    The objective of seismic signal deconvolution is to extract from the signal information concerning the rockmass or the signal in the source of the shock. In the case of blind deconvolution, we have to extract information regarding both quantities. Many methods of deconvolution made use of in prospective seismology were found to be of minor utility when applied to shock-induced signals recorded in the mines of the Lubin Copper District. The lack of effectiveness should be attributed to the inadequacy of the model on which the methods are based, with respect to the propagation conditions for that type of signal. Each of the blind deconvolution methods involves a number of assumptions; hence, only if these assumptions are fulfilled may we expect reliable results. Consequently, we had to formulate a different model for the signals recorded in the copper mines of the Lubin District. The model is based on the following assumptions: (1) The signal emitted by the shock source is a short-term signal. (2) The signal transmitting system (rockmass) constitutes a parallel connection of elementary systems. (3) The elementary systems are of resonant type. Such a model seems to be justified by the geological structure as well as by the positions of the shock foci and seismometers. The results of time-frequency transformation also support the dominance of resonant-type propagation. Making use of the model, a new method for the blind deconvolution of seismic signals has been proposed. The adequacy of the new model, as well as the efficiency of the proposed method, has been confirmed by the results of blind deconvolution. The slight approximation errors obtained with a small number of approximating elements additionally corroborate the adequacy of the model.

  13. Toward Overcoming the Local Minimum Trap in MFBD

    DTIC Science & Technology

    2015-07-14

    Report excerpt (publication-list fragments): publications during the first two years of this grant include A. Cornelio, E. Loli Piccolomini, and J. G. Nagy, "Constrained Variable Projection Method for Blind Deconvolution...", and A. Cornelio, E. Loli Piccolomini, and J. G. Nagy, "Constrained Numerical Optimization Methods for Blind Deconvolution," Numerical Algorithms, volume 65, issue 1...

  14. Blind deconvolution post-processing of images corrected by adaptive optics

    NASA Astrophysics Data System (ADS)

    Christou, Julian C.

    1995-08-01

    Experience with the adaptive optics system at the Starfire Optical Range has shown that the point spread function is non-uniform, varies both spatially and temporally, and is object dependent. Because of this, standard linear and non-linear deconvolution algorithms have difficulty deconvolving out the point spread function. In this paper we demonstrate the application of a blind deconvolution algorithm to adaptive optics compensated data, where a separate point spread function is not needed.

  15. Designing a stable feedback control system for blind image deconvolution.

    PubMed

    Cheng, Shichao; Liu, Risheng; Fan, Xin; Luo, Zhongxuan

    2018-05-01

    Blind image deconvolution is one of the main low-level vision problems with wide applications. Many previous works manually design regularization to simultaneously estimate the latent sharp image and the blur kernel under a maximum a posteriori framework. However, it has been demonstrated that such joint estimation strategies may lead to the undesired trivial solution. In this paper, we present a novel perspective, using a stable feedback control system, to simulate the latent sharp image propagation. The controller of our system consists of regularization and guidance, which decide the sparsity and sharp features of the latent image, respectively. Furthermore, the formation model of the blurred image is introduced into the feedback process to avoid the image restoration deviating from the stable point. The stability analysis of the system indicates that the latent image propagation in the blind deconvolution task can be efficiently estimated and controlled by cues and priors, so the kernel estimation used for image restoration becomes more precise. Experimental results show that our system is effective for image propagation, and can perform favorably against the state-of-the-art blind image deconvolution methods on different benchmark image sets and special blurred images.

  16. Limb Spicules from the Ground and from Space

    NASA Astrophysics Data System (ADS)

    Pasachoff, Jay M.; Jacobson, William A.; Sterling, Alphonse C.

    2009-11-01

    We amassed statistics for quiet-sun chromosphere spicules at the limb using ground-based observations from the Swedish 1-m Solar Telescope on La Palma and simultaneously from NASA’s Transition Region and Coronal Explorer (TRACE) spacecraft. The observations were obtained in July 2006. With the 0.2 arcsecond resolution obtained after maximizing the ground-based resolution with the Multi-Object Multi-Frame Blind Deconvolution (MOMFBD) program, we obtained specific statistics for sizes and motions of over two dozen individual spicules, based on movies compiled at 50-second cadence for the series of five wavelengths observed in a very narrow band at Hα, on-band and at ± 0.035 nm and ± 0.070 nm (10 s at each wavelength) using the SOUP filter, and had simultaneous observations in the 160 nm EUV continuum from TRACE. The MOMFBD restoration also automatically aligned the images, facilitating the making of Dopplergrams at each off-band pair. We studied 40 Hα spicules, and 14 EUV spicules that overlapped Hα spicules; we found that their dynamical and morphological properties fit into the framework of several previous studies. From a preliminary comparison with spicule theories, our observations are consistent with a reconnection mechanism for spicule generation, and with UV spicules being a sheath region surrounding the Hα spicules.

  17. A novel SURE-based criterion for parametric PSF estimation.

    PubMed

    Xue, Feng; Blu, Thierry

    2015-02-01

    We propose an unbiased estimate of a filtered version of the mean squared error--the blur-SURE (Stein's unbiased risk estimate)--as a novel criterion for estimating an unknown point spread function (PSF) from the degraded image only. The PSF is obtained by minimizing this new objective functional over a family of Wiener processings. Based on this estimated blur kernel, we then perform nonblind deconvolution using our recently developed algorithm. The SURE-based framework is exemplified with a number of parametric PSFs, involving a scaling factor that controls the blur size. A typical example of such a parametrization is the Gaussian kernel. The experimental results demonstrate that minimizing the blur-SURE yields highly accurate estimates of the PSF parameters, which also result in a restoration quality very similar to that obtained with the exact PSF, when plugged into our recent multi-Wiener SURE-LET deconvolution algorithm. The highly competitive results obtained outline the great potential of developing more powerful blind deconvolution algorithms based on SURE-like estimates.

  18. Strehl-constrained iterative blind deconvolution for post-adaptive-optics data

    NASA Astrophysics Data System (ADS)

    Desiderà, G.; Carbillet, M.

    2009-12-01

    Aims: We aim to improve blind deconvolution applied to post-adaptive-optics (AO) data by taking into account one of their basic characteristics, resulting from the necessarily partial AO correction: the Strehl ratio. Methods: We apply a Strehl constraint in the framework of iterative blind deconvolution (IBD) of post-AO near-infrared images simulated in a detailed end-to-end manner and considering a case that is as realistic as possible. Results: The results obtained clearly show the advantage of using such a constraint, from the point of view of both performance and stability, especially for poorly AO-corrected data. The proposed algorithm has been implemented in the freely-distributed and CAOS-based Software Package AIRY.
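
    A rough sketch of how a Strehl constraint might be imposed between IBD iterations (the blending rule below is our illustration, assuming co-located PSF peaks; the paper implements the constraint within the IBD framework itself):

      import numpy as np

      def strehl_ratio(psf, psf_dl):
          # Strehl ratio of a unit-energy PSF estimate relative to the
          # diffraction-limited PSF of the same aperture.
          return psf.max() / psf_dl.max()

      def nudge_toward_strehl(psf, psf_dl, target, tol=0.05):
          # If the estimate's Strehl is too low, blend in some of the
          # diffraction-limited core; solve (1-a)*s + a = target for a.
          s = strehl_ratio(psf, psf_dl)
          if s >= (1.0 - tol) * target:
              return psf
          alpha = np.clip((target - s) / (1.0 - s + 1e-12), 0.0, 1.0)
          out = (1.0 - alpha) * psf + alpha * psf_dl
          return out / out.sum()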

  19. Blind Deconvolution for Distributed Parameter Systems with Unbounded Input and Output and Determining Blood Alcohol Concentration from Transdermal Biosensor Data.

    PubMed

    Rosen, I G; Luczak, Susan E; Weiss, Jordan

    2014-03-15

    We develop a blind deconvolution scheme for input-output systems described by distributed parameter systems with boundary input and output. An abstract functional analytic theory based on results for the linear quadratic control of infinite dimensional systems with unbounded input and output operators is presented. The blind deconvolution problem is then reformulated as a series of constrained linear and nonlinear optimization problems involving infinite dimensional dynamical systems. A finite dimensional approximation and convergence theory is developed. The theory is applied to the problem of estimating blood or breath alcohol concentration (respectively, BAC or BrAC) from biosensor-measured transdermal alcohol concentration (TAC) in the field. A distributed parameter model with boundary input and output is proposed for the transdermal transport of ethanol from the blood through the skin to the sensor. The problem of estimating BAC or BrAC from the TAC data is formulated as a blind deconvolution problem. A scheme to identify distinct drinking episodes in TAC data based on a Hodrick-Prescott filter is discussed. Numerical results involving actual patient data are presented.

  20. Convex blind image deconvolution with inverse filtering

    NASA Astrophysics Data System (ADS)

    Lv, Xiao-Guang; Li, Fang; Zeng, Tieyong

    2018-03-01

    Blind image deconvolution is the process of estimating both the original image and the blur kernel from the degraded image with only partial or no information about degradation and the imaging system. It is a bilinear ill-posed inverse problem corresponding to the direct problem of convolution. Regularization methods are used to handle the ill-posedness of blind deconvolution and get meaningful solutions. In this paper, we investigate a convex regularized inverse filtering method for blind deconvolution of images. We assume that the support region of the blur object is known, as has been done in a few existing works. By studying the inverse filters of signal and image restoration problems, we observe the oscillation structure of the inverse filters. Inspired by the oscillation structure of the inverse filters, we propose to use the star norm to regularize the inverse filter. Meanwhile, we use the total variation to regularize the resulting image obtained by convolving the inverse filter with the degraded image. The proposed minimization model is shown to be convex. We employ the first-order primal-dual method for the solution of the proposed minimization model. Numerical examples for blind image restoration are given to show that the proposed method outperforms some existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual quality and time consumption.

  1. Hybrid sparse blind deconvolution: an implementation of SOOT algorithm to real data

    NASA Astrophysics Data System (ADS)

    Pakmanesh, Parvaneh; Goudarzi, Alireza; Kourki, Meisam

    2018-06-01

    Extracting information from seismic data depends on deconvolution as an important processing step; it provides the reflectivity series by signal compression, obtained by removing the wavelet effects from the traces. Recently, blind deconvolution has provided reliable performance for sparse signal recovery. In this study, two deconvolution methods are implemented on seismic data; their combination provides a robust spiking deconvolution approach. This hybrid deconvolution is applied using sparse deconvolution (the MM algorithm) and the smoothed-one-over-two (SOOT) algorithm in a chain. The MM algorithm is based on minimization of a cost function defined by the l1 and l2 norms. After applying the two algorithms to the seismic data, the SOOT algorithm provided well-compressed data with a higher resolution than the MM algorithm. The SOOT algorithm requires initial values when applied to real data, such as the wavelet coefficients and reflectivity series, which can be obtained through the MM algorithm. The computational cost of the hybrid method is high, and it should be implemented on post-stack or pre-stack seismic data from regions of complex structure.

  2. A MAP blind image deconvolution algorithm with bandwidth over-constrained

    NASA Astrophysics Data System (ADS)

    Ren, Zhilei; Liu, Jin; Liang, Yonghui; He, Yulong

    2018-03-01

    We demonstrate a maximum a posteriori (MAP) blind image deconvolution algorithm with a bandwidth over-constraint and total variation (TV) regularization to recover a clear image from AO-corrected images. The point spread functions (PSFs) are estimated with their bandwidth limited to less than the cutoff frequency of the optical system. Our algorithm performs well in avoiding noise amplification. The performance is demonstrated on simulated data.

  3. Active illuminated space object imaging and tracking simulation

    NASA Astrophysics Data System (ADS)

    Yue, Yufang; Xie, Xiaogang; Luo, Wen; Zhang, Feizhou; An, Jianzhu

    2016-10-01

    Optical earth-imaging simulation of a space target in orbit, and its extraction under laser illumination, are discussed. Based on the orbit and corresponding attitude of a satellite, a 3D imaging rendering was built. A general simulation platform was developed, adaptable to different 3D satellite models and to the relative position between satellite and earth detector system. A unified parallel projection technology is proposed in this paper. Furthermore, we note that the random optical distribution under laser active illumination is a challenge for object discrimination, the great randomness of laser active illumination speckles being the primary factor. The combined effects of multi-frame accumulation and tracking methods such as Meanshift tracking, contour poid, and filter deconvolution were simulated. Comparison of the results illustrates that the union of multi-frame accumulation and contour poid is recommendable for laser active illuminated images, offering high tracking precision and stability for multiple object attitudes.

  4. Scientific Visualization Made Easy for the Scientist

    NASA Astrophysics Data System (ADS)

    Westerhoff, M.; Henderson, B.

    2002-12-01

    amira is an application program used in creating 3D visualizations and geometric models of 3D image data sets from various application areas, e.g. medicine, biology, biochemistry, chemistry, physics, and engineering. It has demonstrated significant adoption in the market place since becoming commercially available in 2000. The rapid adoption has expanded the features being requested by the user base and broadened the scope of the amira product offering. The amira product offering includes amira Standard, amiraDev, used to extend the product capabilities by users, amiraMol, used for molecular visualization, amiraDeconv, used to improve quality of image data, and amiraVR, used in immersive VR environments. amira allows the user to construct a visualization tailored to his or her needs without requiring any programming knowledge. It also allows 3D objects to be represented as grids suitable for numerical simulations, notably as triangular surfaces and volumetric tetrahedral grids. The amira application also provides methods to generate such grids from voxel data representing an image volume, and it includes a general-purpose interactive 3D viewer. amiraDev provides an application-programming interface (API) that allows the user to add new components by C++ programming. amira supports many import formats including a 'raw' format allowing immediate access to your native uniform data sets. amira uses the power and speed of the OpenGL and Open Inventor graphics libraries and 3D graphics accelerators to allow you to access over 145 modules, enabling you to process, probe, analyze and visualize your data. The amiraMol extension adds powerful tools for molecular visualization to the existing amira platform. amiraMol contains support for standard molecular file formats, tools for visualization and analysis of static molecules as well as molecular trajectories (time series). amiraDeconv adds tools for the deconvolution of 3D microscopic images. Deconvolution is the process of increasing image quality and resolution by computationally compensating artifacts of the recording process. amiraDeconv supports 3D wide field microscopy as well as 3D confocal microscopy. It offers both non-blind and blind image deconvolution algorithms. Non-blind deconvolution uses an individually measured point spread function, while blind algorithms work on the basis of only a few recording parameters (like numerical aperture or zoom factor). amiraVR is a specialized and extended version of the amira visualization system which is dedicated for use in immersive installations, such as large-screen stereoscopic projections, CAVE or Holobench systems. Among others, it supports multi-threaded multi-pipe rendering, head-tracking, advanced 3D interaction concepts, and 3D menus allowing interaction with any amira object in the same way as on the desktop. With its unique set of features, amiraVR represents both a VR (Virtual Reality) ready application for scientific and medical visualization in immersive environments, and a development platform that allows building VR applications.

  5. Semi-blind sparse image reconstruction with application to MRFM.

    PubMed

    Park, Se Un; Dobigeon, Nicolas; Hero, Alfred O

    2012-09-01

    We propose a solution to the image deconvolution problem where the convolution kernel or point spread function (PSF) is assumed to be only partially known. Small perturbations generated from the model are exploited to produce a few principal components explaining the PSF uncertainty in a high-dimensional space. Unlike recent developments on blind deconvolution of natural images, we assume the image is sparse in the pixel basis, a natural sparsity arising in magnetic resonance force microscopy (MRFM). Our approach adopts a Bayesian Metropolis-within-Gibbs sampling framework. The performance of our Bayesian semi-blind algorithm for sparse images is superior to previously proposed semi-blind algorithms such as the alternating minimization algorithm and blind algorithms developed for natural images. We illustrate our myopic algorithm on real MRFM tobacco virus data.
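
    The low-dimensional PSF-uncertainty basis described above can be illustrated compactly. Below is a minimal sketch (not the authors' code), assuming a set of perturbed PSFs has already been generated from the nominal imaging model; the hypothetical `psf_basis` helper returns the mean PSF plus the first few principal components spanning the uncertainty.

    ```python
    import numpy as np

    def psf_basis(psf_samples, n_pc=4):
        # Stack perturbed PSFs as rows, remove the mean (nominal) PSF, and
        # keep the leading right singular vectors as the uncertainty basis.
        X = np.stack([p.ravel() for p in psf_samples])
        mean = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
        return mean, Vt[:n_pc]
    ```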

  6. Improving lateral resolution and image quality of optical coherence tomography by the multi-frame superresolution technique for 3D tissue imaging.

    PubMed

    Shen, Kai; Lu, Hui; Baig, Sarfaraz; Wang, Michael R

    2017-11-01

    The multi-frame superresolution technique is introduced to significantly improve the lateral resolution and image quality of spectral domain optical coherence tomography (SD-OCT). Using several sets of low-resolution C-scan 3D images with lateral sub-spot-spacing shifts between sets, multi-frame superresolution processing at each depth layer reconstructs a lateral image of higher resolution and quality. Layer-by-layer processing yields an overall high-lateral-resolution, high-quality 3D image. In theory, the superresolution processing, including deconvolution, can jointly address the diffraction limit, lateral scan density, and background noise. In experiments, a roughly threefold improvement in lateral resolution, reaching 7.81 µm and 2.19 µm with sample-arm optics of 0.015 and 0.05 numerical aperture respectively, as well as a doubling of image quality, was confirmed by imaging a known resolution test target. Improved lateral resolution on in vitro skin C-scan images has been demonstrated. For in vivo 3D SD-OCT imaging of human skin, fingerprints, and retinal layers, we used a multi-modal volume registration method to effectively estimate the lateral image shifts among different C-scans caused by random, minor, unintended live-body motion. Further processing of these images generated high-lateral-resolution 3D images as well as high-quality B-scan images of these in vivo tissues.
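
    The shift-and-add core of such multi-frame processing is compact enough to sketch. The following is a minimal illustration (not the paper's implementation), assuming the per-frame sub-pixel shifts are already known from registration: frames are upsampled, aligned, averaged, and then sharpened with a few Richardson-Lucy iterations.

    ```python
    import numpy as np
    from scipy.ndimage import shift, zoom
    from scipy.signal import fftconvolve

    def shift_and_add(frames, shifts, factor=2):
        # frames: list of 2D float arrays; shifts: per-frame (dy, dx) in LR pixels.
        acc = None
        for f, (dy, dx) in zip(frames, shifts):
            hr = zoom(f, factor, order=3)               # upsample to the fine grid
            hr = shift(hr, (dy * factor, dx * factor))  # undo the sub-pixel shift
            acc = hr if acc is None else acc + hr
        return acc / len(frames)

    def richardson_lucy(img, psf, n_iter=10):
        est = np.full_like(img, img.mean())
        psf_m = psf[::-1, ::-1]                         # mirrored PSF (adjoint)
        for _ in range(n_iter):
            ratio = img / (fftconvolve(est, psf, mode="same") + 1e-12)
            est = est * fftconvolve(ratio, psf_m, mode="same")
        return est
    ```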

  7. Improving Range Estimation of a 3-Dimensional Flash Ladar via Blind Deconvolution

    DTIC Science & Technology

    2010-09-01

    Excerpt (table of contents and text fragments): 2.1.4 Optical Imaging as a Linear and Nonlinear System; 2.1.5 Coherence Theory and Laser Light Statistics; 2.2 Deconvolution. "... rather than deconvolution. 2.1.5 Coherence Theory and Laser Light Statistics. Using [24] and [25], this section serves as background on coherence theory ... the laser light incident on the detector surface. The image intensity related to different types of coherence is governed by the laser light's spatial ..."

  8. Improving lateral resolution and image quality of optical coherence tomography by the multi-frame superresolution technique for 3D tissue imaging

    PubMed Central

    Shen, Kai; Lu, Hui; Baig, Sarfaraz; Wang, Michael R.

    2017-01-01

    The multi-frame superresolution technique is introduced to significantly improve the lateral resolution and image quality of spectral domain optical coherence tomography (SD-OCT). Using several sets of low-resolution C-scan 3D images with lateral sub-spot-spacing shifts between sets, multi-frame superresolution processing at each depth layer reconstructs a lateral image of higher resolution and quality. Layer-by-layer processing yields an overall high-lateral-resolution, high-quality 3D image. In theory, the superresolution processing, including deconvolution, can jointly address the diffraction limit, lateral scan density, and background noise. In experiments, a roughly threefold improvement in lateral resolution, reaching 7.81 µm and 2.19 µm with sample-arm optics of 0.015 and 0.05 numerical aperture respectively, as well as a doubling of image quality, was confirmed by imaging a known resolution test target. Improved lateral resolution on in vitro skin C-scan images has been demonstrated. For in vivo 3D SD-OCT imaging of human skin, fingerprints, and retinal layers, we used a multi-modal volume registration method to effectively estimate the lateral image shifts among different C-scans caused by random, minor, unintended live-body motion. Further processing of these images generated high-lateral-resolution 3D images as well as high-quality B-scan images of these in vivo tissues. PMID:29188089

  9. Supersampling multiframe blind deconvolution resolution enhancement of adaptive-optics-compensated imagery of LEO satellites

    NASA Astrophysics Data System (ADS)

    Gerwe, David R.; Lee, David J.; Barchers, Jeffrey D.

    2000-10-01

    A post-processing methodology for reconstructing undersampled image sequences with randomly varying blur is described which can provide image enhancement beyond the sampling resolution of the sensor. This method is demonstrated on simulated imagery and on adaptive optics compensated imagery taken by the Starfire Optical Range 3.5 meter telescope that has been artificially undersampled. Also shown are the results of multiframe blind deconvolution of some of the highest quality optical imagery of low earth orbit satellites collected with a ground based telescope to date. The algorithm used is a generalization of multiframe blind deconvolution techniques which includes a representation of spatial sampling by the focal plane array elements in the forward stochastic model of the imaging system. This generalization enables the random shifts and shape of the adaptively compensated PSF to be used to partially eliminate the aliasing effects associated with sub-Nyquist sampling of the image by the focal plane array. The method could be used to reduce the resolution loss which occurs when imaging in wide-FOV modes.
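
    The distinctive modelling step here is the detector-sampling operator in the forward model. A minimal sketch of one frame of such a model, assuming a high-resolution scene and PSF and simple q × q detector-pixel integration (illustrative only, not the authors' code):

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def forward_frame(x_hr, psf_hr, q=2):
        # Blur on the fine grid, then integrate over q x q blocks to model
        # the (possibly sub-Nyquist) focal plane array sampling.
        b = fftconvolve(x_hr, psf_hr, mode="same")
        h, w = b.shape
        b = b[:h - h % q, :w - w % q]
        return b.reshape(b.shape[0] // q, q, b.shape[1] // q, q).sum(axis=(1, 3))
    ```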

  10. Supersampling multiframe blind deconvolution resolution enhancement of adaptive optics compensated imagery of low earth orbit satellites

    NASA Astrophysics Data System (ADS)

    Gerwe, David R.; Lee, David J.; Barchers, Jeffrey D.

    2002-09-01

    We describe a postprocessing methodology for reconstructing undersampled image sequences with randomly varying blur that can provide image enhancement beyond the sampling resolution of the sensor. This method is demonstrated on simulated imagery and on adaptive-optics-(AO)-compensated imagery taken by the Starfire Optical Range 3.5-m telescope that has been artificially undersampled. Also shown are the results of multiframe blind deconvolution of some of the highest quality optical imagery of low earth orbit satellites collected with a ground-based telescope to date. The algorithm used is a generalization of multiframe blind deconvolution techniques that include a representation of spatial sampling by the focal plane array elements based on a forward stochastic model. This generalization enables the random shifts and shape of the AO-compensated point spread function (PSF) to be used to partially eliminate the aliasing effects associated with sub-Nyquist sampling of the image by the focal plane array. The method could be used to reduce resolution loss that occurs when imaging in wide-field-of-view (FOV) modes.

  11. Blind image deconvolution using the Fields of Experts prior

    NASA Astrophysics Data System (ADS)

    Dong, Wende; Feng, Huajun; Xu, Zhihai; Li, Qi

    2012-11-01

    In this paper, we present a method for single image blind deconvolution. To mitigate its ill-posedness, we formulate the problem under a Bayesian probabilistic framework and use a prior named Fields of Experts (FoE), learnt from natural images, to regularize the latent image. Furthermore, owing to the sparse distribution of the point spread function (PSF), we adopt a Student-t prior to regularize it. An improved alternating minimization (AM) approach is proposed to solve the resulting optimization problem. Experiments on both synthetic and real-world blurred images show that the proposed method achieves results of high quality.
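
    The alternating-minimization structure can be sketched independently of the particular priors. In the skeleton below, simple quadratic penalties stand in for the paper's FoE image prior and Student-t PSF prior; the step size, penalty weights, and the odd `psf_size` are illustrative assumptions, not the authors' settings.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def am_blind_deconv(y, psf_size=15, n_outer=30, lam=1e-3, gam=1e-2, lr=0.5):
        x = y.astype(float).copy()
        k = np.zeros((psf_size, psf_size))
        k[psf_size // 2, psf_size // 2] = 1.0          # start from a delta PSF
        half = psf_size // 2
        for _ in range(n_outer):
            # x-step: one gradient step on 0.5||k*x - y||^2 + 0.5*lam*||x||^2
            r = fftconvolve(x, k, mode="same") - y
            x -= lr * (fftconvolve(r, k[::-1, ::-1], mode="same") + lam * x)
            # k-step: gradient restricted to the PSF support, then projection
            r = fftconvolve(x, k, mode="same") - y
            g = fftconvolve(r, x[::-1, ::-1], mode="same")
            cy, cx = g.shape[0] // 2, g.shape[1] // 2
            g = g[cy - half:cy + half + 1, cx - half:cx + half + 1]
            k -= (lr / (np.sum(x ** 2) + 1e-12)) * (g + gam * k)
            k = np.clip(k, 0.0, None)
            k /= k.sum() + 1e-12                       # non-negative, unit-mass PSF
        return x, k
    ```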

  12. Improving the blind restoration of retinal images by means of point-spread-function estimation assessment

    NASA Astrophysics Data System (ADS)

    Marrugo, Andrés G.; Millán, María S.; Šorel, Michal; Kotera, Jan; Šroubek, Filip

    2015-01-01

    Retinal images often suffer from blurring, which hinders disease diagnosis and progression assessment. Restoration of the images is carried out by means of blind deconvolution, but its success depends on the correct estimation of the point-spread function (PSF) that blurred the image. The restoration can be space-invariant or space-variant. Because a retinal image has regions without texture or sharp edges, blind PSF estimation may fail. In this paper we propose a strategy for the correct assessment of PSF estimation in retinal images for restoration by means of space-invariant or space-variant blind deconvolution. Our method is based on a decomposition of the estimated PSFs into Zernike coefficients to identify valid PSFs. This significantly improves the quality of the image restoration, as revealed by the increased visibility of small details, such as small blood vessels, and by the absence of restoration artifacts.

  13. Optimized Deconvolution for Maximum Axial Resolution in Three-Dimensional Aberration-Corrected Scanning Transmission Electron Microscopy

    PubMed Central

    Ramachandra, Ranjan; de Jonge, Niels

    2012-01-01

    Three-dimensional (3D) data sets were recorded of gold nanoparticles placed on both sides of silicon nitride membranes using focal series aberration-corrected scanning transmission electron microscopy (STEM). The deconvolution of the 3D datasets was optimized to obtain the highest possible axial resolution. The deconvolution involved two different point spread functions (PSFs), each calculated iteratively via blind deconvolution. Supporting membranes of different thicknesses were tested to study the effect of beam broadening on the deconvolution. Several iterations of deconvolution were found to be effective in reducing the imaging noise. With an increasing number of iterations, the axial resolution increased and most of the structural information was preserved. Additional iterations improved the axial resolution by a factor of 4 to 6 at most, depending on the particular dataset, reaching 8 nm at best, but at the cost of a reduction in the lateral size of the nanoparticles in the image. Thus, the deconvolution procedure optimized for the highest axial resolution is best suited to applications where one is interested only in the 3D locations of nanoparticles. PMID:22152090

  14. Effective Alternating Direction Optimization Methods for Sparsity-Constrained Blind Image Deblurring.

    PubMed

    Xiong, Naixue; Liu, Ryan Wen; Liang, Maohan; Wu, Di; Liu, Zhao; Wu, Huisi

    2017-01-18

    Single-image blind deblurring for imaging sensors in the Internet of Things (IoT) is a challenging ill-conditioned inverse problem that requires regularization techniques to stabilize the image restoration process. The purpose is to recover the underlying blur kernel and latent sharp image from only one blurred image. Under many degraded imaging conditions, the blur kernel can be considered not only spatially sparse but also piecewise smooth, with the support of a continuous curve. By taking advantage of these hybrid sparse properties of the blur kernel, a hybrid regularization method is proposed in this paper to robustly and accurately estimate the blur kernel. The effectiveness of the proposed blur kernel estimation method is enhanced by incorporating both the L1-norm of the kernel intensity and the squared L2-norm of the intensity derivative. Once an accurate estimate of the blur kernel is obtained, the original blind deblurring simplifies to direct deconvolution of the blurred image. To guarantee robust non-blind deconvolution, a variational image restoration model is presented based on an L1-norm data-fidelity term and the second-order total generalized variation (TGV) regularizer. All non-smooth optimization problems related to blur kernel estimation and non-blind deconvolution are effectively handled using alternating direction method of multipliers (ADMM)-based numerical methods. Comprehensive experiments on both synthetic and realistic datasets have been carried out to compare the proposed method with several state-of-the-art methods. The experimental comparisons illustrate the satisfactory imaging performance of the proposed method in both quantitative and qualitative evaluations.
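
    As an illustration of the ADMM splitting used for such non-smooth problems, here is a minimal sketch for the simpler problem min_x 0.5||h*x − y||² + λ||x||₁ under periodic boundary conditions (the paper couples L1/L2 terms for the kernel with a TGV term for the image; this stripped-down variant shows only the mechanics):

    ```python
    import numpy as np

    def admm_l1_deconv(y, psf, lam=1e-2, rho=1.0, n_iter=50):
        # psf: image-sized array with its centre at pixel (0, 0), i.e. ifftshifted.
        H = np.fft.fft2(psf)
        HtY = np.conj(H) * np.fft.fft2(y)
        denom = np.abs(H) ** 2 + rho
        z = np.zeros_like(y, dtype=float)
        u = np.zeros_like(y, dtype=float)
        for _ in range(n_iter):
            # x-update: quadratic subproblem, diagonal in the Fourier domain
            x = np.real(np.fft.ifft2((HtY + rho * np.fft.fft2(z - u)) / denom))
            # z-update: soft thresholding (proximal operator of the L1 term)
            v = x + u
            z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
            # dual update
            u += x - z
        return x
    ```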

  15. Successive Over-Relaxation Technique for High-Performance Blind Image Deconvolution

    DTIC Science & Technology

    2015-06-08

    Keywords: deconvolution, space surveillance, Gauss-Seidel iteration. Excerpt: "... sensible approximate solutions to the ill-posed nonlinear inverse problem. These solutions are addressed as fixed points of the iteration, which consists in alternating approximations (AA) for the object and for the PSF, performed with a prescribed number of inner iterative descents from trivial (zero ..."

  16. Blind channel estimation and deconvolution in colored noise using higher-order cumulants

    NASA Astrophysics Data System (ADS)

    Tugnait, Jitendra K.; Gummadavelli, Uma

    1994-10-01

    Existing approaches to blind channel estimation and deconvolution (equalization) focus exclusively on channel or inverse-channel impulse response estimation. It is well known that the quality of the deconvolved output also depends crucially upon the noise statistics. Typically it is assumed that the noise is white and that the signal-to-noise ratio is known. In this paper we remove these restrictions. Both the channel impulse response and the noise model are estimated from the higher-order (e.g., fourth-order) cumulant function and the (second-order) correlation function of the received data via a least-squares cumulant/correlation matching criterion. It is assumed that the higher-order cumulant function of the noise vanishes (e.g., Gaussian noise, as is the case in digital communications). Consistency of the proposed approach is established under certain mild sufficient conditions. The approach is illustrated via simulation examples involving blind equalization of digital communications signals.

  17. Plenoptic Image Motion Deblurring.

    PubMed

    Chandramouli, Paramanand; Jin, Meiguang; Perrone, Daniele; Favaro, Paolo

    2018-04-01

    We propose a method to remove motion blur in a single light field captured with a moving plenoptic camera. Since motion is unknown, we resort to a blind deconvolution formulation, where one aims to identify both the blur point spread function and the latent sharp image. Even in the absence of motion, light field images captured by a plenoptic camera are affected by a non-trivial combination of both aliasing and defocus, which depends on the 3D geometry of the scene. Therefore, motion deblurring algorithms designed for standard cameras are not directly applicable. Moreover, many state-of-the-art blind deconvolution algorithms are based on iterative schemes, where blurry images are synthesized through the imaging model. However, current imaging models for plenoptic images are impractical due to their high dimensionality. We observe that plenoptic cameras introduce periodic patterns that can be exploited to obtain highly parallelizable numerical schemes to synthesize images. These schemes allow extremely efficient GPU implementations that enable the use of iterative methods. We can then cast blind deconvolution of a blurry light field image as a regularized energy minimization to recover a sharp high-resolution scene texture and the camera motion. Furthermore, the proposed formulation can handle non-uniform motion blur due to camera shake as demonstrated on both synthetic and real light field data.

  18. Nonlinear spatio-temporal filtering of dynamic PET data using a four-dimensional Gaussian filter and expectation-maximization deconvolution

    NASA Astrophysics Data System (ADS)

    Floberg, J. M.; Holden, J. E.

    2013-02-01

    We introduce a method for denoising dynamic PET data, spatio-temporal expectation-maximization (STEM) filtering, that combines four-dimensional Gaussian filtering with EM deconvolution. The initial Gaussian filter suppresses noise at a broad range of spatial and temporal frequencies and EM deconvolution quickly restores the frequencies most important to the signal. We aim to demonstrate that STEM filtering can improve variance in both individual time frames and in parametric images without introducing significant bias. We evaluate STEM filtering with a dynamic phantom study, and with simulated and human dynamic PET studies of a tracer with reversible binding behaviour, [C-11]raclopride, and a tracer with irreversible binding behaviour, [F-18]FDOPA. STEM filtering is compared to a number of established three and four-dimensional denoising methods. STEM filtering provides substantial improvements in variance in both individual time frames and in parametric images generated with a number of kinetic analysis techniques while introducing little bias. STEM filtering does bias early frames, but this does not affect quantitative parameter estimates. STEM filtering is shown to be superior to the other simple denoising methods studied. STEM filtering is a simple and effective denoising method that could be valuable for a wide range of dynamic PET applications.
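
    A minimal sketch of the two-stage idea, assuming a (t, z, y, x) array and, as a simplification of the paper's method, treating the same Gaussian as both the smoothing filter and the deconvolution PSF:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def stem_filter(data, sigma=(1.0, 1.5, 1.5, 1.5), n_em=5):
        # Stage 1: 4D Gaussian smoothing over (t, z, y, x).
        smoothed = gaussian_filter(data, sigma)
        # Stage 2: a few EM (Richardson-Lucy) iterations restore the signal
        # frequencies suppressed by the filter; the Gaussian kernel is
        # symmetric, so it serves as its own adjoint.
        est = smoothed.copy()
        for _ in range(n_em):
            ratio = smoothed / (gaussian_filter(est, sigma) + 1e-12)
            est = est * gaussian_filter(ratio, sigma)
        return est
    ```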

  19. Statistical Deconvolution for Superresolution Fluorescence Microscopy

    PubMed Central

    Mukamel, Eran A.; Babcock, Hazen; Zhuang, Xiaowei

    2012-01-01

    Superresolution microscopy techniques based on the sequential activation of fluorophores can achieve image resolution of ∼10 nm but require a sparse distribution of simultaneously activated fluorophores in the field of view. Image analysis procedures for this approach typically discard data from crowded molecules with overlapping images, wasting valuable image information that is only partly degraded by overlap. A data analysis method that exploits all available fluorescence data, regardless of overlap, could increase the number of molecules processed per frame and thereby accelerate superresolution imaging speed, enabling the study of fast, dynamic biological processes. Here, we present a computational method, referred to as deconvolution-STORM (deconSTORM), which uses iterative image deconvolution in place of single- or multiemitter localization to estimate the sample. DeconSTORM approximates the maximum likelihood sample estimate under a realistic statistical model of fluorescence microscopy movies comprising numerous frames. The model incorporates Poisson-distributed photon-detection noise, the sparse spatial distribution of activated fluorophores, and temporal correlations between consecutive movie frames arising from intermittent fluorophore activation. We first quantitatively validated this approach with simulated fluorescence data and showed that deconSTORM accurately estimates superresolution images even at high densities of activated fluorophores where analysis by single- or multiemitter localization methods fails. We then applied the method to experimental data of cellular structures and demonstrated that deconSTORM enables an approximately fivefold or greater increase in imaging speed by allowing a higher density of activated fluorophores/frame. PMID:22677393

  20. Real-time blind deconvolution of retinal images in adaptive optics scanning laser ophthalmoscopy

    NASA Astrophysics Data System (ADS)

    Li, Hao; Lu, Jing; Shi, Guohua; Zhang, Yudong

    2011-06-01

    With the use of adaptive optics (AO), ocular aberrations can be compensated to obtain high-resolution images of the living human retina. However, the wavefront correction is not perfect due to wavefront measurement error and hardware restrictions. Thus, it is necessary to use a deconvolution algorithm to recover the retinal images. In this paper, a blind deconvolution technique called the Incremental Wiener filter is used to restore adaptive optics confocal scanning laser ophthalmoscope (AOSLO) images. The point-spread function (PSF) measured by the wavefront sensor is only used as an initial value for our algorithm. We also implement the Incremental Wiener filter on a graphics processing unit (GPU) in real time. When the image size is 512 × 480 pixels, six iterations of our algorithm take only about 10 ms. Retinal blood vessels as well as cells in retinal images are restored by our algorithm, and the PSFs are also refined. Retinal images with and without adaptive optics are both restored. The results show that the Incremental Wiener filter reduces noise and improves image quality.
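
    For reference, the classical Wiener step that such a filter builds on can be written in a few lines. This sketch performs a single pass with a fixed PSF (the paper's Incremental Wiener filter additionally revises the PSF across iterations); the `nsr` noise-to-signal power ratio is an assumed tuning parameter.

    ```python
    import numpy as np

    def wiener_restore(img, psf, nsr=1e-2):
        # psf: image-sized array with its centre at pixel (0, 0), i.e. ifftshifted.
        H = np.fft.fft2(psf)
        G = np.conj(H) / (np.abs(H) ** 2 + nsr)   # Wiener inverse filter
        return np.real(np.fft.ifft2(G * np.fft.fft2(img)))
    ```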

  1. Spectral identification of a 90Sr source in the presence of masking nuclides using Maximum-Likelihood deconvolution

    NASA Astrophysics Data System (ADS)

    Neuer, Marcus J.

    2013-11-01

    A technique for the spectral identification of strontium-90 is shown, utilising a Maximum-Likelihood deconvolution. Different deconvolution approaches are discussed and summarised. Based on the intensity distribution of the beta emission and Geant4 simulations, a combined response matrix is derived, tailored to the β- detection process in sodium iodide detectors. It includes scattering effects and attenuation by applying a base material decomposition extracted from Geant4 simulations with a CAD model for a realistic detector system. Inversion results of measurements show the agreement between deconvolution and reconstruction. A detailed investigation with additional masking sources like 40K, 226Ra and 131I shows that a contamination of strontium can be found in the presence of these nuisance sources. Identification algorithms for strontium are presented based on the derived technique. For the implementation of blind identification, an exemplary masking ratio is calculated.
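
    Maximum-Likelihood deconvolution of a measured spectrum with a known response matrix is commonly performed with the multiplicative EM update for Poisson counts. A minimal sketch, not tied to the paper's Geant4-derived response:

    ```python
    import numpy as np

    def ml_deconvolve(counts, R, n_iter=200):
        # counts: measured channel counts; R: (channels x source bins) response.
        act = np.full(R.shape[1], counts.sum() / R.shape[1])   # flat start
        norm = R.sum(axis=0) + 1e-12
        for _ in range(n_iter):
            pred = R @ act + 1e-12                        # forward-projected spectrum
            act = act * (R.T @ (counts / pred)) / norm    # EM multiplicative update
        return act
    ```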

  2. Dense deconvolution net: Multi path fusion and dense deconvolution for high resolution skin lesion segmentation.

    PubMed

    He, Xinzi; Yu, Zhen; Wang, Tianfu; Lei, Baiying; Shi, Yiyan

    2018-01-01

    Dermoscopy imaging has become a routine examination approach for skin lesion diagnosis, and accurate segmentation is the first step in automatic dermoscopy image assessment. The main challenges for skin lesion segmentation are the numerous variations in the viewpoint and scale of the lesion region. To handle these challenges, we propose a novel skin lesion segmentation network based on a very deep dense deconvolution network for dermoscopic images. Specifically, deep dense layers and the generic multi-path Deep RefineNet are combined to improve segmentation performance. The deep representations of all available layers are aggregated to form global feature maps using skip connections, and dense deconvolution layers are leveraged to capture diverse appearance features via contextual information. Finally, we apply a dense deconvolution layer to smooth the segmentation maps and obtain the final high-resolution output. Our proposed method shows superiority over state-of-the-art approaches on the publicly available 2016 and 2017 skin lesion challenge datasets, achieving accuracies of 96.0% and 93.9%, increases of 6.0% and 1.2% over the traditional method, respectively. Using the dense deconvolution net, the average time for processing one test image with our framework was 0.253 s.
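
    The "deconvolution" in such networks is a learned transposed convolution used for upsampling. A minimal PyTorch sketch with illustrative channel counts and sizes (not the paper's architecture):

    ```python
    import torch
    import torch.nn as nn

    # A transposed convolution with kernel 4, stride 2, padding 1 exactly
    # doubles the spatial resolution of the feature maps.
    upsample = nn.ConvTranspose2d(in_channels=64, out_channels=2,
                                  kernel_size=4, stride=2, padding=1)
    feat = torch.randn(1, 64, 112, 112)   # coarse features (N, C, H, W)
    logits = upsample(feat)               # -> (1, 2, 224, 224) per-class scores
    ```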

  3. Sequential deconvolution from wave-front sensing using bivariate simplex splines

    NASA Astrophysics Data System (ADS)

    Guo, Shiping; Zhang, Rongzhi; Li, Jisheng; Zou, Jianhua; Xu, Rong; Liu, Changhai

    2015-05-01

    Deconvolution from wave-front sensing (DWFS) is an imaging compensation technique for turbulence-degraded images based on the simultaneous recording of short-exposure images and wave-front sensor data. This paper employs the multivariate splines method for sequential DWFS: a bivariate simplex-splines-based average-slope measurement model is first built for the Shack-Hartmann wave-front sensor; next, a well-conditioned least-squares estimator for the spline coefficients is constructed using multiple Shack-Hartmann measurements; the distorted wave-front is then uniquely determined by the estimated spline coefficients; the object image is finally obtained by non-blind deconvolution. Simulated experiments under different turbulence strengths show that our method delivers superior image restoration and noise rejection, especially when extracting multidirectional phase derivatives.

  4. Blind source deconvolution for deep Earth seismology

    NASA Astrophysics Data System (ADS)

    Stefan, W.; Renaut, R.; Garnero, E. J.; Lay, T.

    2007-12-01

    We present an approach to automatically estimate an empirical source characterization of deep earthquakes recorded teleseismically, and to subsequently remove the source from the recordings by applying regularized deconvolution. A principal goal of this work is to effectively deblur the seismograms, resulting in more impulsive and narrower pulses and permitting better constraints in high-resolution waveform analyses. Our method consists of two stages: (1) we first estimate the empirical source by automatically registering traces to their first principal component, with a weighting scheme based on their deviation from this shape, and then use this shape as an estimate of the earthquake source; (2) we compare different deconvolution techniques to remove the source characteristic from the trace. In particular, total variation (TV) regularized deconvolution is used, which exploits the fact that most natural signals have an underlying sparseness in an appropriate basis, in this case the impulsive onsets of seismic arrivals. We show several examples of deep-focus Fiji-Tonga region earthquakes for the phases S and ScS, comparing source responses for the separate phases. TV deconvolution is compared to water-level deconvolution, Tikhonov deconvolution, and L1-norm deconvolution, for both data and synthetics. This approach significantly improves our ability to study subtle waveform features that are commonly masked by either noise or the earthquake source. Eliminating source complexities improves our ability to resolve deep mantle triplications and waveform complexities associated with possible double crossings of the post-perovskite phase transition, as well as increasing the stability of waveform analyses used for deep mantle anisotropy measurements.
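
    A minimal 1D sketch of TV-regularized deconvolution by smoothed gradient descent, assuming a known, centred source wavelet (parameters are illustrative, not the authors' implementation):

    ```python
    import numpy as np

    def tv_deconvolve(trace, wavelet, lam=1e-2, lr=1e-3, n_iter=500, eps=1e-6):
        # Minimize 0.5*||w * x - d||^2 + lam * TV(x), with TV smoothed by eps,
        # favouring impulsive, piecewise-constant reflectivity estimates.
        x = np.zeros_like(trace)
        w_rev = wavelet[::-1]
        for _ in range(n_iter):
            r = np.convolve(x, wavelet, mode="same") - trace
            g_fit = np.convolve(r, w_rev, mode="same")   # data-fit gradient
            dx = np.diff(x, append=x[-1])
            p = dx / np.sqrt(dx ** 2 + eps)
            g_tv = -np.diff(p, prepend=0.0)              # smoothed TV gradient
            x -= lr * (g_fit + lam * g_tv)
        return x
    ```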

  5. Retinal image restoration by means of blind deconvolution

    NASA Astrophysics Data System (ADS)

    Marrugo, Andrés G.; Šorel, Michal; Šroubek, Filip; Millán, María S.

    2011-11-01

    Retinal imaging plays a key role in the diagnosis and management of ophthalmologic disorders, such as diabetic retinopathy, glaucoma, and age-related macular degeneration. Because of the acquisition process, retinal images often suffer from blurring and uneven illumination. This problem may seriously affect disease diagnosis and progression assessment. Here we present a method for color retinal image restoration by means of multichannel blind deconvolution. The method is applied to a pair of retinal images acquired within a lapse of time, ranging from several minutes to months. It consists of a series of preprocessing steps to adjust the images so they comply with the considered degradation model, followed by the estimation of the point-spread function and, ultimately, image deconvolution. The preprocessing is mainly composed of image registration, uneven illumination compensation, and segmentation of areas with structural changes. In addition, we have developed a procedure for the detection and visualization of structural changes. This enables the identification of subtle developments in the retina not caused by variation in illumination or blur. The method was tested on synthetic and real images. Encouraging experimental results show that the method is capable of significant restoration of degraded retinal images.

  6. Improving space debris detection in GEO ring using image deconvolution

    NASA Astrophysics Data System (ADS)

    Núñez, Jorge; Núñez, Anna; Montojo, Francisco Javier; Condominas, Marta

    2015-07-01

    In this paper we present a method based on image deconvolution to improve the detection of space debris, mainly in the geostationary ring. Among the deconvolution methods we chose the iterative Richardson-Lucy (R-L) algorithm, as the method that achieves the best results with a reasonable amount of computation. For this work, we used two sets of real 4096 × 4096 pixel test images obtained with the Telescope Fabra-ROA at Montsec (TFRM). Using the first set of data, we establish the optimal number of iterations at 7; applying the R-L method with 7 iterations to the images, we show that the astrometric accuracy does not vary significantly, while the limiting magnitude of the deconvolved images increases significantly compared to the original ones. The increase is on average about 1.0 magnitude, which means that objects up to 2.5 times fainter can be detected after deconvolution. The application of the method to the second set of test images, which includes several faint objects, shows that, after deconvolution, up to four previously undetected faint objects are detected in a single frame. Finally, we carried out a study of some economic aspects of applying the deconvolution method, showing that an important economic impact can be envisaged.

  7. Novel Blind Recognition Algorithm of Frame Synchronization Words Based on Soft-Decision in Digital Communication Systems.

    PubMed

    Qin, Jiangyi; Huang, Zhiping; Liu, Chunwu; Su, Shaojing; Zhou, Jing

    2015-01-01

    A novel blind recognition algorithm for frame synchronization words is proposed to recognize the frame synchronization word parameters in digital communication systems. In this paper, a blind recognition method for frame synchronization words based on hard decisions is derived in detail, and the criteria for parameter recognition are given. Compared with blind recognition based on hard decisions, using soft decisions can improve the recognition accuracy. Therefore, combining the characteristics of the Quadrature Phase Shift Keying (QPSK) signal, an improved blind recognition algorithm based on soft decisions is proposed; the improved algorithm can also be extended to other modulation formats. The complete blind recognition steps of the hard-decision and soft-decision algorithms are then given in detail. Finally, simulation results show that both algorithms can blindly recognize the parameters of frame synchronization words, and that the improved algorithm clearly enhances the recognition accuracy.
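
    The hard-decision principle is that, at the correct frame length, the bit positions occupied by a fixed synchronization word stay (nearly) constant from frame to frame. A minimal sketch of that stability measure, assuming a demodulated 0/1 bitstream and a candidate frame length:

    ```python
    import numpy as np

    def sync_column_stability(bits, frame_len):
        # Fold the stream into frames and measure per-position bit stability;
        # positions holding a fixed sync word score close to 1.0.
        n = (len(bits) // frame_len) * frame_len
        frames = np.asarray(bits[:n]).reshape(-1, frame_len)
        p1 = frames.mean(axis=0)            # fraction of ones at each position
        return np.maximum(p1, 1.0 - p1)
    ```

    Scanning candidate frame lengths and looking for a contiguous run of stable positions recovers both the frame length and the synchronization word; the soft-decision variant replaces the 0/1 bits with demodulator soft values.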

  8. Non-stationary blind deconvolution of medical ultrasound scans

    NASA Astrophysics Data System (ADS)

    Michailovich, Oleg V.

    2017-03-01

    In linear approximation, the formation of a radio-frequency (RF) ultrasound image can be described based on a standard convolution model in which the image is obtained as a result of convolution of the point spread function (PSF) of the ultrasound scanner in use with a tissue reflectivity function (TRF). Due to the band-limited nature of the PSF, the RF images can only be acquired at a finite spatial resolution, which is often insufficient for proper representation of the diagnostic information contained in the TRF. One particular way to alleviate this problem is by means of image deconvolution, which is usually performed in a "blind" mode, when both PSF and TRF are estimated at the same time. Despite its proven effectiveness, blind deconvolution (BD) still suffers from a number of drawbacks, chief among which stems from its dependence on a stationary convolution model, which is incapable of accounting for the spatial variability of the PSF. As a result, virtually all existing BD algorithms are applied to localized segments of RF images. In this work, we introduce a novel method for non-stationary BD, which is capable of recovering the TRF concurrently with the spatially variable PSF. Particularly, our approach is based on semigroup theory which allows one to describe the effect of such a PSF in terms of the action of a properly defined linear semigroup. The approach leads to a tractable optimization problem, which can be solved using standard numerical methods. The effectiveness of the proposed solution is supported by experiments with in vivo ultrasound data.

  9. Blind deconvolution of astronomical images with band limitation determined by optical system parameters

    NASA Astrophysics Data System (ADS)

    Luo, L.; Fan, M.; Shen, M. Z.

    2007-07-01

    Atmospheric turbulence severely limits the spatial resolution of astronomical images acquired by large ground-based telescopes. The recorded image is modelled as the convolution of the object function with the point spread function (PSF). The statistical relationship between the measured image data, the estimated object, and the PSF follows the Bayesian conditional probability distribution, from which a maximum-likelihood formulation is derived. A blind deconvolution approach based on maximum-likelihood estimation with a band-limit constraint determined by the real optical system is presented for removing the effect of atmospheric turbulence on this class of images, minimizing the convolution error function by means of a conjugate-gradient optimization algorithm. As a result, the object function and the PSF can be estimated simultaneously from a few recorded images. Following the principles of Fourier optics, the relationship between the telescope's optical system parameters and the image band constraint in the frequency domain is formulated for the transformations between the spatial and frequency domains. The convergence of the algorithm is improved by constraining the estimated object and PSF to be non-negative and the PSF to be band-limited. To avoid losing Fourier components beyond the cutoff frequency during these transformations (when the sampled image data, spatial domain, and frequency domain have matching sizes), the detector element (e.g., a pixel of the CCD) should be smaller than a quarter of the diffraction speckle diameter of the telescope when acquiring images at the focal plane. Because no object support constraint is used, the proposed method can easily be applied to the restoration of wide-field turbulence-degraded images. The validity of the method is examined through computer simulation and the restoration of real Alpha Psc astronomical image data. The results suggest that blind deconvolution with the real optical band constraint can remove the effect of atmospheric turbulence on the observed images, and that the spatial resolution of the object image can reach or exceed the diffraction-limited level.
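
    The band-limit constraint itself is a projection in the Fourier domain: all PSF frequency components beyond the telescope's incoherent cutoff f_c = D/(λf) are set to zero. A minimal sketch with illustrative parameters (SI units and square images assumed):

    ```python
    import numpy as np

    def band_limit(psf_est, D, wavelength, focal_len, pixel_pitch):
        # Zero Fourier components beyond the incoherent cutoff f_c = D/(lambda*f),
        # then restore non-negativity of the PSF estimate.
        n = psf_est.shape[0]
        f = np.fft.fftfreq(n, d=pixel_pitch)   # cycles per metre at the detector
        fx, fy = np.meshgrid(f, f)
        mask = np.hypot(fx, fy) <= D / (wavelength * focal_len)
        out = np.real(np.fft.ifft2(np.fft.fft2(psf_est) * mask))
        return np.clip(out, 0.0, None)
    ```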

  10. Using deconvolution to improve the metrological performance of the grid method

    NASA Astrophysics Data System (ADS)

    Grédiac, Michel; Sur, Frédéric; Badulescu, Claudiu; Mathias, Jean-Denis

    2013-06-01

    The use of various deconvolution techniques to enhance strain maps obtained with the grid method is addressed in this study. Since phase derivative maps obtained with the grid method can be approximated by their actual counterparts convolved by the envelope of the kernel used to extract phases and phase derivatives, non-blind restoration techniques can be used to perform deconvolution. Six deconvolution techniques are presented and employed to restore a synthetic phase derivative map, namely direct deconvolution, regularized deconvolution, the Richardson-Lucy algorithm and Wiener filtering, the last two with two variants concerning their practical implementations. Obtained results show that the noise that corrupts the grid images must be thoroughly taken into account to limit its effect on the deconvolved strain maps. The difficulty here is that the noise on the grid image yields a spatially correlated noise on the strain maps. In particular, numerical experiments on synthetic data show that direct and regularized deconvolutions are unstable when noisy data are processed. The same remark holds when Wiener filtering is employed without taking into account noise autocorrelation. On the other hand, the Richardson-Lucy algorithm and Wiener filtering with noise autocorrelation provide deconvolved maps where the impact of noise remains controlled within a certain limit. It is also observed that the last technique outperforms the Richardson-Lucy algorithm. Two short examples of actual strain fields restoration are finally shown. They deal with asphalt and shape memory alloy specimens. The benefits and limitations of deconvolution are presented and discussed in these two cases. The main conclusion is that strain maps are correctly deconvolved when the signal-to-noise ratio is high and that actual noise in the actual strain maps must be more specifically characterized than in the current study to address higher noise levels with Wiener filtering.

  11. Depth from Optical Turbulence

    DTIC Science & Technology

    2012-01-01

    Excerpt (cited references): "... Dagobert, and C. Franchis. Atmospheric turbulence restoration by diffeomorphic image registration and blind deconvolution. In ACIVS, 2008." ... "V. Tatarskii. Wave Propagation in a Turbulent Medium. McGraw-Hill Books, 1961." ... "Y. Tian and S. Narasimhan. A globally optimal data-driven ..."

  12. Sheet-scanned dual-axis confocal microscopy using Richardson-Lucy deconvolution.

    PubMed

    Wang, D; Meza, D; Wang, Y; Gao, L; Liu, J T C

    2014-09-15

    We have previously developed a line-scanned dual-axis confocal (LS-DAC) microscope with subcellular resolution suitable for high-frame-rate diagnostic imaging at shallow depths. Due to the loss of confocality along one dimension, the contrast (signal-to-background ratio) of an LS-DAC microscope is degraded compared to that of a point-scanned DAC microscope. However, by using an sCMOS camera for detection, a short oblique light sheet is imaged at each scanned position. Therefore, by scanning the light sheet in only one dimension, a thin 3D volume is imaged. Both sequential two-dimensional deconvolution and three-dimensional deconvolution are performed on the thin image volume to improve the resolution and contrast of one en face confocal image section at the center of the volume, a technique we call sheet-scanned dual-axis confocal (SS-DAC) microscopy.

  13. A hybrid method for synthetic aperture ladar phase-error compensation

    NASA Astrophysics Data System (ADS)

    Hua, Zhili; Li, Hongping; Gu, Yongjian

    2009-07-01

    As data from a high-resolution imaging sensor, synthetic aperture ladar (SAL) data contain phase errors whose sources include uncompensated platform motion and atmospheric turbulence distortion. Two previously devised methods, the rank-one phase-error estimation (ROPE) algorithm and iterative blind deconvolution (IBD), are reexamined, and a hybrid method is built from them that can recover both the images and the PSFs without any a priori information on the PSF, speeding up convergence through the choice of initialization. When integrated into a spotlight-mode SAL imaging model, all three methods effectively reduce the phase-error distortion. For each approach, the signal-to-noise ratio, root mean square error, and CPU time are computed, from which we can see that the convergence rate of the hybrid method is improved because of the more efficient initialization of the blind deconvolution. Moreover, further examination of the hybrid method shows that the weighting between ROPE and IBD is an important factor affecting the final result of the whole compensation process.

  14. Perfect blind restoration of images blurred by multiple filters: theory and efficient algorithms.

    PubMed

    Harikumar, G; Bresler, Y

    1999-01-01

    We address the problem of restoring an image from its noisy convolutions with two or more unknown finite impulse response (FIR) filters. We develop theoretical results about the existence and uniqueness of solutions, and show that under some generically true assumptions, both the filters and the image can be determined exactly in the absence of noise, and stably estimated in its presence. We present efficient algorithms to estimate the blur functions and their sizes. These algorithms are of two types, subspace-based and likelihood-based, and are extensions of techniques proposed for the solution of the multichannel blind deconvolution problem in one dimension. We present memory and computation-efficient techniques to handle the very large matrices arising in the two-dimensional (2-D) case. Once the blur functions are determined, they are used in a multichannel deconvolution step to reconstruct the unknown image. The theoretical and practical implications of edge effects, and "weakly exciting" images are examined. Finally, the algorithms are demonstrated on synthetic and real data.
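
    In one dimension the subspace idea reduces to the classical cross-relation method: since h2*(h1*x) = h1*(h2*x), the stacked filters lie in the null space of a data matrix built from the two observations. A minimal noise-free sketch (the paper's 2D, memory-efficient algorithms are considerably more involved):

    ```python
    import numpy as np
    from scipy.linalg import svd, toeplitz

    def conv_matrix(y, L):
        # (len(y)+L-1) x L matrix M such that M @ h == np.convolve(y, h).
        col = np.concatenate([y, np.zeros(L - 1)])
        row = np.concatenate([[y[0]], np.zeros(L - 1)])
        return toeplitz(col, row)

    def cross_relation_1d(y1, y2, L):
        # Cross relation: conv(y1, h2) - conv(y2, h1) = 0, so the stacked
        # vector [h2; h1] spans the null space of A (up to a common scale).
        A = np.hstack([conv_matrix(y1, L), -conv_matrix(y2, L)])
        _, _, Vt = svd(A, full_matrices=False)
        h = Vt[-1]             # right singular vector of the smallest sigma
        return h[L:], h[:L]    # (h1, h2); identifiable if the channels are coprime
    ```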

  15. Closed-form expressions of some stochastic adapting equations for nonlinear adaptive activation function neurons.

    PubMed

    Fiori, Simone

    2003-12-01

    In recent work, we introduced nonlinear adaptive activation function (FAN) artificial neuron models, which learn their activation functions in an unsupervised way by information-theoretic adapting rules. We also applied networks of these neurons to some blind signal processing problems, such as independent component analysis and blind deconvolution. The aim of this letter is to study some fundamental aspects of FAN units' learning by investigating the properties of the associated learning differential equation systems.

  16. The Second Flight of the Sunrise Balloon-borne Solar Observatory: Overview of Instrument Updates, the Flight, the Data, and First Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Solanki, S. K.; Riethmüller, T. L.; Barthol, P.

    The Sunrise balloon-borne solar observatory, consisting of a 1 m aperture telescope that provides a stabilized image to a UV filter imager and an imaging vector polarimeter, carried out its second science flight in 2013 June. It provided observations of parts of active regions at high spatial resolution, including the first high-resolution images in the Mg ii k line. The obtained data are of very high quality, with the best UV images reaching the diffraction limit of the telescope at 3000 Å after Multi-Frame Blind Deconvolution reconstruction accounting for phase-diversity information. Here a brief update is given of the instruments and the data reduction techniques, which includes an inversion of the polarimetric data. Mainly those aspects that evolved compared with the first flight are described. A tabular overview of the observations is given. In addition, an example time series of a part of the emerging active region NOAA AR 11768 observed relatively close to disk center is described and discussed in some detail. The observations cover the pores in the trailing polarity of the active region, as well as the polarity inversion line where flux emergence was ongoing and a small flare-like brightening occurred in the course of the time series. The pores are found to contain magnetic field strengths ranging up to 2500 G, and while large pores are clearly darker and cooler than the quiet Sun in all layers of the photosphere, the temperature and brightness of small pores approach or even exceed those of the quiet Sun in the upper photosphere.

  17. VizieR Online Data Catalog: Spectra of 13 lensed quasars (Sluse+, 2012)

    NASA Astrophysics Data System (ADS)

    Sluse, D.; Hutsemekers, D.; Courbin, F.; Meylan, G.; Wambsganss, J.

    2012-05-01

    Extracted flux-calibrated spectra of 13 lensed quasars following the methodology described in Sect. 2.1 of the paper. The data were obtained with the FORS spectrograph at the VLT in multi-object spectroscopy mode. The typical wavelength coverage is from 4200 to 8200Å. The data concern the following objects: HE0047-1756 (HE0047), Q0142-100 (Q0142), SDSSJ0246-0825 (SDSS0246), HE0435-1223 (HE0435), SDSSJ0806+2006 (SDSS0806), FBQ0951+2635 (FBQ0951), BRI0952-0115 (BRI0952), SDSSJ1138+0314 (J1138), J1226-0006 (J1226), SDSSJ1335+0118 (J1335), Q1355-2257 (Q1355), WFI2033-4723 (WFI2033), and HE2149-2745 (HE2149). For each object, we provide the 1D flux-calibrated spectrum of the two individual images in the slit. In addition, we also provide the 2D reduced spectrum and the corresponding 1σ error frame (the corresponding files are named "objectnamedata" and "objectnameerr"), and the 2D processed spectra associated with the deconvolution, as shown in Fig. 1 of the paper. These processed 2D spectra are the deconvolved frame ("dec"), the extended component of the flux emission ("ext"), and the residual frame in σ units ("_res"), corresponding to panels (b), (c), and (d) of Fig. 1. A PDF file similar to Fig. 1 is also provided for each object. (4 data files).

  18. Texas two-step: a framework for optimal multi-input single-output deconvolution.

    PubMed

    Neelamani, Ramesh; Deffenbaugh, Max; Baraniuk, Richard G

    2007-11-01

    Multi-input single-output deconvolution (MISO-D) aims to extract a deblurred estimate of a target signal from several blurred and noisy observations. This paper develops a new two-step framework--Texas Two-Step--to solve MISO-D problems with known blurs. Texas Two-Step first reduces the MISO-D problem to a related single-input single-output deconvolution (SISO-D) problem by invoking the concept of sufficient statistics (SSs) and then solves the simpler SISO-D problem using an appropriate technique. The two-step framework enables new MISO-D techniques (both optimal and suboptimal) based on the rich suite of existing SISO-D techniques. In fact, the properties of SSs imply that a MISO-D algorithm is mean-squared-error optimal if and only if it can be rearranged to conform to the Texas Two-Step framework. Using this insight, we construct new wavelet- and curvelet-based MISO-D algorithms with asymptotically optimal performance. Simulated and real data experiments verify that the framework is indeed effective.
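
    The reduction step can be sketched for 1D signals under white Gaussian noise: matched-filtering each observation with its (known) blur and summing the results yields a sufficient statistic that behaves like a single observation blurred by the sum of the blur autocorrelations. This is a minimal illustration, not the paper's wavelet- or curvelet-based machinery:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def miso_to_siso(observations, blurs):
        # Step 1 of the two-step idea: collapse several blurred observations
        # into one sufficient statistic plus an equivalent SISO blur.
        s = sum(fftconvolve(y, h[::-1], mode="same")
                for y, h in zip(observations, blurs))
        h_eff = sum(fftconvolve(h, h[::-1], mode="full") for h in blurs)
        return s, h_eff   # feed (s, h_eff) to any SISO deconvolution method
    ```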

  19. Image deblurring by motion estimation for remote sensing

    NASA Astrophysics Data System (ADS)

    Chen, Yueting; Wu, Jiagu; Xu, Zhihai; Li, Qi; Feng, Huajun

    2010-08-01

    The imaging resolution of remote sensing systems is often limited by image degradation resulting from unwanted motion disturbances of the platform during image exposure. Since the platform vibration can take an arbitrary form, the lack of a priori knowledge about the motion function (the PSF) suggests blind restoration approaches. This paper proposes a deblurring method that combines motion estimation and image deconvolution for both area-array and TDI remote sensing. Image motion estimation is accomplished with an auxiliary high-speed detector and a sub-pixel correlation algorithm; the PSF is then reconstructed from the estimated image motion vectors. Finally, a clear image is recovered from the blurred image of the prime camera with the constructed PSF using the Richardson-Lucy (RL) iterative deconvolution algorithm. Image deconvolution for the area-array detector is direct, while for the TDI CCD detector an integral distortion-compensation step and a row-by-row deconvolution scheme are applied. Theoretical analyses and experimental results show that the performance of the proposed concept is convincing: blurred and distorted images can be properly recovered, not only for visual inspection but also with significant gains in objective quality measures.

  20. A method of PSF generation for 3D brightfield deconvolution.

    PubMed

    Tadrous, P J

    2010-02-01

    This paper addresses the problem of 3D deconvolution of through focus widefield microscope datasets (Z-stacks). One of the most difficult stages in brightfield deconvolution is finding the point spread function. A theoretically calculated point spread function (called a 'synthetic PSF' in this paper) requires foreknowledge of many system parameters and still gives only approximate results. A point spread function measured from a sub-resolution bead suffers from low signal-to-noise ratio, compounded in the brightfield setting (by contrast to fluorescence) by absorptive, refractive and dispersal effects. This paper describes a method of point spread function estimation based on measurements of a Z-stack through a thin sample. This Z-stack is deconvolved by an idealized point spread function derived from the same Z-stack to yield a point spread function of high signal-to-noise ratio that is also inherently tailored to the imaging system. The theory is validated by a practical experiment comparing the non-blind 3D deconvolution of the yeast Saccharomyces cerevisiae with the point spread function generated using the method presented in this paper (called the 'extracted PSF') to a synthetic point spread function. Restoration of both high- and low-contrast brightfield structures is achieved with fewer artefacts using the extracted point spread function obtained with this method. Furthermore the deconvolution progresses further (more iterations are allowed before the error function reaches its nadir) with the extracted point spread function compared to the synthetic point spread function indicating that the extracted point spread function is a better fit to the brightfield deconvolution model than the synthetic point spread function.

  1. Bilinear Inverse Problems: Theory, Algorithms, and Applications

    NASA Astrophysics Data System (ADS)

    Ling, Shuyang

    We will discuss how several important real-world signal processing problems, such as self-calibration and blind deconvolution, can be modeled as bilinear inverse problems and solved by convex and nonconvex optimization approaches. In Chapter 2, we bring together three seemingly unrelated concepts, self-calibration, compressive sensing and biconvex optimization. We show how several self-calibration problems can be treated efficiently within the framework of biconvex compressive sensing via a new method called SparseLift. More specifically, we consider a linear system of equations y = DAx, where the diagonal matrix D (which models the calibration error) is unknown and x is an unknown sparse signal. By "lifting" this biconvex inverse problem and exploiting sparsity in this model, we derive explicit theoretical guarantees under which both x and D can be recovered exactly, robustly, and numerically efficiently. In Chapter 3, we study the question of the joint blind deconvolution and blind demixing, i.e., extracting a sequence of functions [special characters omitted] from observing only the sum of their convolutions [special characters omitted]. In particular, for the special case s = 1, it becomes the well-known blind deconvolution problem. We present a non-convex algorithm which guarantees exact recovery under conditions that are competitive with convex optimization methods, with the additional advantage of being computationally much more efficient. We discuss several applications of the proposed framework in image processing and wireless communications in connection with the Internet-of-Things. In Chapter 4, we consider three different self-calibration models of practical relevance. We show how their corresponding bilinear inverse problems can be solved by both the simple linear least squares approach and the SVD-based approach. As a consequence, the proposed algorithms are numerically extremely efficient, thus allowing for real-time deployment. Explicit theoretical guarantees and stability theory are derived and the required sampling complexity is nearly optimal (up to a poly-log factor). Applications in imaging sciences and signal processing are discussed and numerical simulations are presented to demonstrate the effectiveness and efficiency of our approach.

  2. A new approach to blind deconvolution of astronomical images

    NASA Astrophysics Data System (ADS)

    Vorontsov, S. V.; Jefferies, S. M.

    2017-05-01

    We readdress the strategy of finding approximate regularized solutions to the blind deconvolution problem when both the object and the point-spread function (PSF) have finite support. Our approach consists in addressing fixed points of an iteration in which both the object x and the PSF y are approximated in an alternating manner, discarding the previous approximation for x when updating x (and similarly for y), and considering the resulting fixed points as candidates for a sensible solution. The alternating approximations are performed by truncated iterative least-squares descents. The numbers of descents in the object space and in the PSF space play the role of two regularization parameters. Selection of appropriate fixed points (which may not be unique) is performed by relaxing the regularization gradually, using the previous fixed point as an initial guess for finding the next one, which yields an approximation of better spatial resolution. We report the results of artificial experiments with noise-free data, targeted at examining the potential capability of the technique to deconvolve images of high complexity. We also show the results obtained with two sets of satellite images acquired using ground-based telescopes with and without adaptive optics compensation. The new approach brings much better results when compared with an alternating minimization technique based on positivity-constrained conjugate gradients, whose iterations stagnate when addressing data of high complexity. In the alternating-approximation step, we examine the performance of three different non-blind iterative deconvolution algorithms. The best results are provided by the non-negativity-constrained successive over-relaxation technique (+SOR) supplemented with adaptive scheduling of the relaxation parameter. Results of comparable quality are obtained with steepest descents modified by imposing the non-negativity constraint, at the expense of higher numerical cost. The Richardson-Lucy (or expectation-maximization) algorithm fails to locate stable fixed points in our experiments, apparently due to inappropriate regularization properties.

  3. A deconvolution extraction method for 2D multi-object fibre spectroscopy based on the regularized least-squares QR-factorization algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Jian; Yin, Qian; Guo, Ping; Luo, A.-li

    2014-09-01

    This paper presents an efficient method for the extraction of astronomical spectra from two-dimensional (2D) multifibre spectrographs based on the regularized least-squares QR-factorization (LSQR) algorithm. We address two issues: we propose a modified Gaussian point spread function (PSF) for modelling the 2D PSF from multi-emission-line gas-discharge lamp images (arc images), and we develop an efficient deconvolution method to extract spectra in real circumstances. The proposed modified 2D Gaussian PSF model can fit various types of 2D PSFs, including different radial distortion angles and ellipticities. We adopt the regularized LSQR algorithm to solve the sparse linear equations constructed from the sparse convolution matrix, which we designate the deconvolution spectrum extraction method. Furthermore, we implement a parallelized LSQR algorithm based on graphics processing unit programming in the Compute Unified Device Architecture to accelerate the computational processing. Experimental results illustrate that the proposed extraction method can greatly reduce the computational cost and memory use of the deconvolution method and, consequently, increase its efficiency and practicability. In addition, the proposed extraction method has a stronger noise tolerance than other methods, such as the boxcar (aperture) extraction and profile extraction methods. Finally, we present an analysis of the sensitivity of the extraction results to the radius and full width at half-maximum of the 2D PSF.
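
    The deconvolution extraction step amounts to a large, sparse, damped least-squares problem. A minimal sketch using SciPy's LSQR, assuming the sparse system matrix A (flattened 2D PSF profiles as columns) has already been built from the fitted PSF model:

    ```python
    import numpy as np
    from scipy import sparse
    from scipy.sparse.linalg import lsqr

    def extract_spectra(A, image, damp=1e-3):
        # Solve min ||A f - b||^2 + damp^2 ||f||^2 for the flux vector f,
        # where b is the flattened 2D detector image.
        b = np.asarray(image).ravel()
        result = lsqr(sparse.csr_matrix(A), b, damp=damp)
        return result[0]
    ```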

  4. Blind Bayesian restoration of adaptive optics telescope images using generalized Gaussian Markov random field models

    NASA Astrophysics Data System (ADS)

    Jeffs, Brian D.; Christou, Julian C.

    1998-09-01

    This paper addresses post processing for resolution enhancement of sequences of short exposure adaptive optics (AO) images of space objects. The unknown residual blur is removed using Bayesian maximum a posteriori blind image restoration techniques. In the problem formulation, both the true image and the unknown blur psf's are represented by the flexible generalized Gaussian Markov random field (GGMRF) model. The GGMRF probability density function provides a natural mechanism for expressing available prior information about the image and blur. Incorporating such prior knowledge in the deconvolution optimization is crucial for the success of blind restoration algorithms. For example, space objects often contain sharp edge boundaries and geometric structures, while the residual blur psf in the corresponding partially corrected AO image is spectrally band limited, and exhibits smoothed, random, texture-like features on a peaked central core. By properly choosing parameters, GGMRF models can accurately represent both the blur psf and the object, and serve to regularize the deconvolution problem. These two GGMRF models also serve as discriminator functions to separate blur and object in the solution. Algorithm performance is demonstrated with examples from synthetic AO images. Results indicate significant resolution enhancement when applied to partially corrected AO images. An efficient computational algorithm is described.
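
    The GGMRF prior mentioned above has a simple functional form; the following sketch evaluates its negative log-density over nearest-neighbour pixel pairs (the shape parameter p and scale sigma are illustrative, not the paper's values):

        import numpy as np

        def ggmrf_neglogprior(img, p=1.2, sigma=1.0):
            # Generalized Gaussian MRF negative log-prior, up to a constant:
            # sum over neighbouring pixel pairs of |x_i - x_j|^p / (p sigma^p).
            # p = 2 gives a smooth Gaussian prior; p near 1 preserves edges.
            dx = np.abs(np.diff(img, axis=0)) ** p
            dy = np.abs(np.diff(img, axis=1)) ** p
            return (dx.sum() + dy.sum()) / (p * sigma ** p)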

  5. High-cadence observations of spicular-type events on the Sun

    NASA Astrophysics Data System (ADS)

    Shetye, J.; Doyle, J. G.; Scullion, E.; Nelson, C. J.; Kuridze, D.; Henriques, V.; Woeger, F.; Ray, T.

    2016-05-01

    Context. Chromospheric observations taken at high cadence and high spatial resolution show a range of spicule-like features, including Type-I, Type-II (as well as rapid blue-shifted excursions (RBEs) and rapid red-shifted excursions (RREs), which are thought to be on-disk counterparts of Type-II spicules) and those which seem to appear within a few seconds, which if interpreted as flows would imply mass flow velocities in excess of 1000 km s-1. Aims: This article seeks to quantify and study rapidly appearing spicular-type events. We also compare the multi-object multi-frame blind deconvolution (MOMFBD) and speckle reconstruction techniques to understand if these spicules are more favourably observed using a particular technique. Methods: We use spectral imaging observations taken with the CRisp Imaging SpectroPolarimeter (CRISP) on the Swedish 1-m Solar Telescope. Data was sampled at multiple positions within the Hα line profile for both an on-disk and limb location. Results: The data is host to numerous rapidly appearing features which are observed at different locations within the Hα line profile. The features' durations vary between 10 and 20 s, with lengths of around 3500 km. Sometimes a time delay of 3-5 s in their appearance between the blue and red wings is evident, whereas at other times they are near simultaneous. In some instances, features are observed to fade and then re-emerge at the same location several tens of seconds later. Conclusions: We provide the first statistical analysis of these spicules and suggest that these observations can be interpreted as the line-of-sight (LOS) movement of highly dynamic spicules moving in and out of the narrow 60 mÅ transmission filter that is used to observe in different parts of the Hα line profile. The LOS velocity component of the observed fast chromospheric features, manifested as Doppler shifts, is responsible for their appearance in the red and blue wings of the Hα line. Additional work involving data at other wavelengths is required to investigate the nature of their possible wave-like activity.

  6. High-resolution, high-sensitivity, ground-based solar spectropolarimetry with a new fast imaging polarimeter. I. Prototype characterization

    NASA Astrophysics Data System (ADS)

    Iglesias, F. A.; Feller, A.; Nagaraju, K.; Solanki, S. K.

    2016-05-01

    Context. Remote sensing of weak and small-scale solar magnetic fields is of utmost relevance when attempting to respond to a number of important open questions in solar physics. This requires the acquisition of spectropolarimetric data with high spatial resolution (~10-1 arcsec) and low noise (10-3 to 10-5 of the continuum intensity). The main limitations to obtaining these measurements from the ground are the degradation of image resolution produced by atmospheric seeing and the seeing-induced crosstalk (SIC). Aims: We introduce the prototype of the Fast Solar Polarimeter (FSP), a new ground-based, high-cadence polarimeter that tackles the above-mentioned limitations by producing data that are optimally suited for the application of post-facto image restoration, and by operating at a modulation frequency of 100 Hz to reduce SIC. Methods: We describe the instrument in depth, including the fast pnCCD camera employed, the achromatic modulator package, the main calibration steps, the effects of the modulation frequency on the levels of seeing-induced spurious signals, and the effect of the camera properties on the image restoration quality. Results: The pnCCD camera reaches 400 fps while keeping a high duty cycle (98.6%) and very low noise (4.94 e- rms). The modulator is optimized to have high (>80%) total polarimetric efficiency in the visible spectral range. This allows FSP to acquire 100 photon-noise-limited, full-Stokes measurements per second. We found that the seeing-induced signals that are present in narrow-band, non-modulated, quiet-Sun measurements are (a) lower than the noise (7 × 10-5) after integrating 7.66 min, (b) lower than the noise (2.3 × 10-4) after integrating 1.16 min and (c) slightly above the noise (4 × 10-3) after restoring case (b) by means of a multi-object multi-frame blind deconvolution. In addition, we demonstrate that by using only narrow-band images (with a low S/N of 13.9) of an active region, we can obtain one complete set of high-quality restored measurements about every 2 s.

  7. Fruit fly optimization based least square support vector regression for blind image restoration

    NASA Astrophysics Data System (ADS)

    Zhang, Jiao; Wang, Rui; Li, Junshan; Yang, Yawei

    2014-11-01

    The goal of image restoration is to reconstruct the original scene from a degraded observation. It is a critical and challenging task in image processing. Classical restorations require explicit knowledge of the point spread function and a description of the noise as priors. However, such priors are not available in many real image-processing tasks, so the recovery must be treated as blind image restoration. Since blind deconvolution is an ill-posed problem, many blind restoration methods need to make additional assumptions to construct restrictions. Because PSFs and noise energies differ, blurred images can vary widely, and it is difficult to achieve a good balance between proper assumptions and high restoration quality in blind deconvolution. Recently, machine learning techniques have been applied to blind image restoration. Least square support vector regression (LSSVR) has been proven to offer strong potential in estimation and forecasting problems. Therefore, this paper proposes an LSSVR-based image restoration method. However, selecting the optimal parameters of the support vector machine is essential to the training result. As a novel meta-heuristic algorithm, the fruit fly optimization algorithm (FOA) can be used to handle optimization problems, and has the advantage of fast convergence to the global optimal solution. In the proposed method, the training samples are created from a neighborhood in the degraded image to the central pixel in the original image. The mapping between the degraded image and the original image is learned by training the LSSVR. The two parameters of the LSSVR are optimized through FOA, with the fitness function calculated from the restoration error. With the acquired mapping, the degraded image can be recovered. Experimental results show the proposed method obtains a satisfactory restoration effect. Compared with BP neural network regression, the SVR method and the Lucy-Richardson algorithm, it speeds up the restoration and performs better. Both objective and subjective restoration performances are studied in the comparison experiments.
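
    A sketch of the parameter search, with scikit-learn's KernelRidge used as a stand-in for LSSVR (the two are closely related) and a deliberately simplified fruit fly loop; the swarm size, search radius and cross-validated fitness are assumptions, not the paper's settings:

        import numpy as np
        from sklearn.kernel_ridge import KernelRidge
        from sklearn.model_selection import cross_val_score

        def foa_tune(X, y, n_flies=20, n_iter=30, seed=0):
            # Fruit fly search over (log10 alpha, log10 gamma) of an RBF
            # kernel regressor; fitness plays the role of smell concentration.
            rng = np.random.default_rng(seed)
            center = np.zeros(2)                 # swarm center in log space
            best, best_fit = center, -np.inf
            for _ in range(n_iter):
                flies = center + rng.normal(scale=0.5, size=(n_flies, 2))
                for f in flies:
                    model = KernelRidge(kernel='rbf', alpha=10**f[0], gamma=10**f[1])
                    fit = cross_val_score(model, X, y, cv=3,
                                          scoring='neg_mean_squared_error').mean()
                    if fit > best_fit:
                        best, best_fit = f, fit
                center = best                    # the swarm flies toward the best smell
            return 10**best[0], 10**best[1]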

  8. Library Optimization in EDXRF Spectral Deconvolution for Multi-element Analysis of Ambient Aerosols

    EPA Science Inventory

    In multi-element analysis of atmospheric aerosols, attempts are made to fit overlapping elemental spectral lines for many elements that may be undetectable in samples due to low concentrations. Fitting with many library reference spectra has the unwanted effect of raising the an...

  9. Restoration of solar and star images with phase diversity-based blind deconvolution

    NASA Astrophysics Data System (ADS)

    Li, Qiang; Liao, Sheng; Wei, Honggang; Shen, Mangzuo

    2007-04-01

    The images recorded by a ground-based telescope are often degraded by atmospheric turbulence and the aberrations of the optical system. Phase diversity-based blind deconvolution is an effective post-processing method that can be used to overcome the turbulence-induced degradation. The method uses an ensemble of short-exposure images obtained simultaneously from multiple cameras to jointly estimate the object and the wavefront distribution on the pupil. Based on signal estimation theory and optimization theory, we derive the cost function and solve the large-scale optimization problem using a limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method. We apply the method to computer-generated turbulence-degraded images, solar images acquired with the Swedish Vacuum Solar Telescope (SVST, 0.475 m) on La Palma, and star images collected with the 1.2 m telescope at Yunnan Observatory. In order to avoid edge effects in the restoration of the solar images, a modified Hanning apodizing window is adopted. The star image can still be restored even when the defocus distance is measured inaccurately. The restored results demonstrate that the method is efficient for removing the effect of turbulence and reconstructing point-like or extended objects.
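
    A toy phase-diversity sketch showing how such a joint criterion can be handed to SciPy's L-BFGS implementation; the point-source object, the hypothetical three-mode aberration basis and the defocus amplitude are all assumptions for brevity, and real codes eliminate the object analytically and add apodization:

        import numpy as np
        from scipy.optimize import minimize

        n = 32
        yy, xx = np.mgrid[-1:1:n*1j, -1:1:n*1j]
        pupil = (xx**2 + yy**2) <= 1.0
        defocus = 2.0 * (2*(xx**2 + yy**2) - 1) * pupil   # known diversity phase

        def psf(phase):
            # Incoherent PSF from the pupil function with phase aberration.
            return np.abs(np.fft.fft2(pupil * np.exp(1j * phase)))**2

        def cost(coeffs, img_focus, img_div, basis):
            # Sum of squared residuals over the focused and diversity channels.
            phase = np.tensordot(coeffs, basis, axes=1)
            return (np.sum((psf(phase) - img_focus)**2) +
                    np.sum((psf(phase + defocus) - img_div)**2))

        basis = np.stack([xx*pupil, yy*pupil, (xx**2 - yy**2)*pupil])  # illustrative modes
        true = np.array([0.5, -0.3, 0.8])
        img_focus = psf(np.tensordot(true, basis, axes=1))
        img_div = psf(np.tensordot(true, basis, axes=1) + defocus)

        res = minimize(cost, np.zeros(3), args=(img_focus, img_div, basis),
                       method='L-BFGS-B')       # gradients by finite differences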

  10. Blind deconvolution of 2-D and 3-D fluorescent micrographs

    NASA Astrophysics Data System (ADS)

    Krishnamurthi, Vijaykumar; Liu, Yi-Hwa; Holmes, Timothy J.; Roysam, Badrinath; Turner, James N.

    1992-06-01

    This paper presents recent results of our reconstructions of 3-D data from Drosophila chromosomes as well as our simulations with a refined version of the algorithm used in the former. It is well known that the calibration of the point spread function (PSF) of a fluorescence microscope is a tedious process and involves esoteric techniques in most cases. This problem is further compounded in the case of confocal microscopy where the measured intensities are usually low. A number of techniques have been developed to solve this problem, all of which are methods in blind deconvolution. These are so called because the measured PSF is not required in the deconvolution of degraded images from any optical system. Our own efforts in this area involved the maximum likelihood (ML) method, the numerical solution to which is obtained by the expectation maximization (EM) algorithm. Based on the reasonable early results obtained during our simulations with 2-D phantoms, we carried out experiments with real 3-D data. We found that the blind deconvolution method using the ML approach gave reasonable reconstructions. Next we tried to perform the reconstructions using some 2-D data, but we found that the results were not encouraging. We surmised that the poor reconstructions were primarily due to the large values of dark current in the input data. This, coupled with the fact that we are likely to have similar data with considerable dark current from a confocal microscope, prompted us to look into ways of constraining the solution of the PSF. We observed that in the 2-D case, the reconstructed PSF has a tendency to retain values larger than those of the theoretical PSF in regions away from the center (outside of what we considered to be its region of support). This observation motivated us to apply an upper bound constraint on the PSF in these regions. Furthermore, we constrain the solution of the PSF to be a bandlimited function, as is the case in the true situation. We have derived two separate approaches for implementing the constraint. One approach involves the mathematical rigors of Lagrange multipliers; this approach is discussed in another paper. The second approach involves an adaptation of the Gerchberg-Saxton algorithm, which ensures bandlimitedness and non-negativity of the PSF. Although the latter approach is mathematically less rigorous than the former, we currently favor it because it has a simpler implementation on a computer and has smaller memory requirements. The next section describes briefly the theory and derivation of these constraint equations using Lagrange multipliers.
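
    The Gerchberg-Saxton-style constraint described above can be written as a pair of alternating projections; this sketch (the cutoff fraction is an assumed placeholder for the true optical band limit) enforces band-limitedness in Fourier space, then non-negativity and unit flux in object space:

        import numpy as np

        def project_psf(h, cutoff_frac=0.25):
            # One projection cycle for a PSF estimate (square array assumed).
            n = h.shape[0]
            H = np.fft.fftshift(np.fft.fft2(h))
            yy, xx = np.mgrid[-n//2:n//2, -n//2:n//2]
            H[np.hypot(xx, yy) > cutoff_frac * n] = 0.0    # band limit
            h = np.real(np.fft.ifft2(np.fft.ifftshift(H)))
            h = np.clip(h, 0.0, None)                      # non-negativity
            return h / h.sum()                             # unit flux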

  11. Strehl-constrained reconstruction of post-adaptive optics data and the Software Package AIRY, v. 6.1

    NASA Astrophysics Data System (ADS)

    Carbillet, Marcel; La Camera, Andrea; Deguignet, Jérémy; Prato, Marco; Bertero, Mario; Aristidi, Éric; Boccacci, Patrizia

    2014-08-01

    We first briefly present the last version of the Software Package AIRY, version 6.1, a CAOS-based tool which includes various deconvolution methods, accelerations, regularizations, super-resolution, boundary effects reduction, point-spread function extraction/extrapolation, stopping rules, and constraints in the case of iterative blind deconvolution (IBD). Then, we focus on a new formulation of our Strehl-constrained IBD, here quantitatively compared to the original formulation for simulated near-infrared data of an 8-m class telescope equipped with adaptive optics (AO), showing their equivalence. Next, we extend the application of the original method to the visible domain with simulated data of an AO-equipped 1.5-m telescope, testing also the robustness of the method with respect to the Strehl ratio estimation.

  12. A Geophysical Inversion Model Enhancement Technique Based on the Blind Deconvolution

    NASA Astrophysics Data System (ADS)

    Zuo, B.; Hu, X.; Li, H.

    2011-12-01

    A model-enhancement technique is proposed to enhance the edges and details of geophysical inversion models without introducing any additional information. First, the theoretical correctness of the proposed geophysical inversion model-enhancement technique is discussed. An inversion MRM (model resolution matrix) convolution approximating the PSF (point spread function) is designed to demonstrate the correctness of the deconvolution model-enhancement method. Then, a total-variation regularized blind deconvolution algorithm for geophysical inversion model enhancement is proposed. In previous research, Oldenburg et al. demonstrated the connection between the PSF and the geophysical inverse solution. Alumbaugh et al. proposed that more information could be provided by the PSF if we return to the idea of it behaving as an averaging or low-pass filter. We consider the PSF as a low-pass filter to enhance the inversion model, based on the theory of the PSF convolution approximation. Both 1D linear and 2D magnetotelluric inversion examples are used to analyze the validity of the theory and the algorithm. To prove the proposed PSF convolution approximation theory, a 1D linear inversion problem is considered; the convolution approximation error ratio is only 0.15%. A 2D synthetic model-enhancement experiment is also presented. After the deconvolution enhancement, the edges of the conductive prism and the resistive host become sharper, and the enhancement result is closer to the actual model than the original inversion model according to the numerical statistical analysis. Moreover, the artifacts in the inversion model are suppressed. The overall precision of the model increases by 75%. All of the experiments show that the structural details and the numerical precision of the inversion model are significantly improved, especially in the anomalous region. The correlation coefficient between the enhanced inversion model and the actual model is shown in Fig. 1. The figure illustrates that more information and detailed structure of the actual model are recovered by the proposed enhancement algorithm. Using the proposed enhancement method can help us gain a clearer insight into the results of the inversions and help make better-informed decisions.

  13. The Last Meter: Blind Visual Guidance to a Target.

    PubMed

    Manduchi, Roberto; Coughlan, James M

    2014-01-01

    Smartphone apps can use object recognition software to provide information to blind or low vision users about objects in the visual environment. A crucial challenge for these users is aiming the camera properly to take a well-framed picture of the desired target object. We investigate the effects of two fundamental constraints of object recognition - frame rate and camera field of view - on a blind person's ability to use an object recognition smartphone app. The app was used by 18 blind participants to find visual targets beyond arm's reach and approach them to within 30 cm. While we expected that a faster frame rate or wider camera field of view should always improve search performance, our experimental results show that in many cases increasing the field of view does not help, and may even hurt, performance. These results have important implications for the design of object recognition systems for blind users.

  14. Binary Classification of an Unknown Object through Atmospheric Turbulence Using a Polarimetric Blind-Deconvolution Algorithm Augmented with Adaptive Degree of Linear Polarization Priors

    DTIC Science & Technology

    2012-03-01

    geometry of reflection from a smooth (or mirror-like) surface [27]. In passive polarimetry, the angle of polarization (AoP) provides information about... polarimetry for remote sensing applications”. Appl. Opt., 45(22):5453-5469, Aug 2006. URL http://ao.osa.org/abstract.cfm?URI=ao-45-22-5453. 27

  15. Multichannel blind deconvolution of spatially misaligned images.

    PubMed

    Sroubek, Filip; Flusser, Jan

    2005-07-01

    Existing multichannel blind restoration techniques assume perfect spatial alignment of channels, correct estimation of blur size, and are prone to noise. We developed an alternating minimization scheme based on a maximum a posteriori estimation with a priori distribution of blurs derived from the multichannel framework and a priori distribution of original images defined by the variational integral. This stochastic approach enables us to recover the blurs and the original image from channels severely corrupted by noise. We observe that the exact knowledge of the blur size is not necessary, and we prove that translation misregistration up to a certain extent can be automatically removed in the restoration process.

  16. LES-Modeling of a Partially Premixed Flame using a Deconvolution Turbulence Closure

    NASA Astrophysics Data System (ADS)

    Wang, Qing; Wu, Hao; Ihme, Matthias

    2015-11-01

    The modeling of the turbulence/chemistry interaction in partially premixed and multi-stream combustion remains an outstanding issue. By extending a recently developed constrained minimum mean-square error deconvolution (CMMSED) method, the objective of this work is to develop a source-term closure for turbulent multi-stream combustion. In this method, the chemical source term is obtained from a three-stream flamelet model, and CMMSED is used as the closure model, thereby eliminating the need for presumed-PDF modeling. The model is applied to LES of a piloted turbulent jet flame with inhomogeneous inlets, and simulation results are compared with experiments. Comparisons with presumed-PDF methods are performed, and issues regarding resolution and conservation of the CMMSED method are examined. The author would like to acknowledge the support of funding from a Stanford Graduate Fellowship.

  17. Multi-images deconvolution improves signal-to-noise ratio on gated stimulated emission depletion microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castello, Marco; DIBRIS, University of Genoa, Via Opera Pia 13, Genoa 16145; Diaspro, Alberto

    2014-12-08

    Time-gated detection, namely, only collecting the fluorescence photons after a time-delay from the excitation events, reduces complexity, cost, and illumination intensity of a stimulated emission depletion (STED) microscope. In the gated continuous-wave (CW-) STED implementation, the spatial resolution improves with increased time-delay, but the signal-to-noise ratio (SNR) reduces. Thus, in sub-optimal conditions, such as a low photon-budget regime, the SNR reduction can cancel out the expected gain in resolution. Here, we propose a method which does not discard photons, but instead collects all the photons in different time-gates and recombines them through a multi-image deconvolution. Our results, obtained on simulated and experimental data, show that the SNR of the restored image improves relative to the gated image, thereby improving the effective resolution.

  18. Streaming Multiframe Deconvolutions on GPUs

    NASA Astrophysics Data System (ADS)

    Lee, M. A.; Budavári, T.

    2015-09-01

    Atmospheric turbulence distorts all ground-based observations, which is especially detrimental to faint detections. The point spread function (PSF) defining this blur is unknown for each exposure and varies significantly over time, making image analysis difficult. Lucky imaging and traditional co-adding throw away lots of information. We developed blind deconvolution algorithms that can simultaneously obtain robust solutions for the background image and all the PSFs. This is done in a streaming setting, which makes it practical for large numbers of big images. We implemented a new tool that runs on GPUs and achieves exceptional running times that can scale to the new time-domain surveys. Our code can quickly and effectively recover high-resolution images exceeding the quality of traditional co-adds. We demonstrate the power of the method on the repeated exposures in the Sloan Digital Sky Survey's Stripe 82.

  19. Seismic interferometry by crosscorrelation and by multidimensional deconvolution: a systematic comparison

    NASA Astrophysics Data System (ADS)

    Wapenaar, Kees; van der Neut, Joost; Ruigrok, Elmer; Draganov, Deyan; Hunziker, Juerg; Slob, Evert; Thorbecke, Jan; Snieder, Roel

    2010-05-01

    In recent years, seismic interferometry (or Green's function retrieval) has led to many applications in seismology (exploration, regional and global), underwater acoustics and ultrasonics. One of the explanations for this broad interest lies in the simplicity of the methodology. In passive data applications a simple crosscorrelation of responses at two receivers gives the impulse response (Green's function) at one receiver as if there were a source at the position of the other. In controlled-source applications the procedure is similar, except that it involves in addition a summation along the sources. It has also been recognized that the simple crosscorrelation approach has its limitations. From the various theoretical models it follows that there are a number of underlying assumptions for retrieving the Green's function by crosscorrelation. The most important assumptions are that the medium is lossless and that the waves are equipartitioned. In heuristic terms the latter condition means that the receivers are illuminated isotropically from all directions, which is for example achieved when the sources are regularly distributed along a closed surface, the sources are mutually uncorrelated and their power spectra are identical. Despite the fact that in practical situations these conditions are at most only partly fulfilled, the results of seismic interferometry are generally quite robust, but the retrieved amplitudes are unreliable and the results are often blurred by artifacts. Several researchers have proposed to address some of the shortcomings by replacing the correlation process by deconvolution. In most cases the employed deconvolution procedure is essentially 1-D (i.e., trace-by-trace deconvolution). This compensates for the anelastic losses, but it does not account for the anisotropic illumination of the receivers. To obtain more accurate results, seismic interferometry by deconvolution should acknowledge the 3-D nature of the seismic wave field. Hence, from a theoretical point of view, the trace-by-trace process should be replaced by a full 3-D wave field deconvolution process. Interferometry by multidimensional deconvolution is more accurate than the trace-by-trace correlation and deconvolution approaches, but the processing is more involved. In the presentation we will give a systematic analysis of seismic interferometry by crosscorrelation versus multidimensional deconvolution and discuss applications of both approaches.
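
    A bare-bones sketch of the crosscorrelation step (trace arrays and the sampling interval are assumed inputs); the multidimensional-deconvolution variant discussed above replaces this simple sum over sources by inverting a full matrix of responses:

        import numpy as np

        def greens_by_xcorr(resp_a, resp_b, dt):
            # resp_a[s], resp_b[s]: traces recorded at receivers A and B for
            # source s. Summing crosscorrelations over sources approximates
            # the Green's function between A and B plus its time reverse.
            n = resp_a.shape[1]
            acc = np.zeros(2 * n - 1)
            for a, b in zip(resp_a, resp_b):
                acc += np.correlate(b, a, mode='full')
            lags = (np.arange(2 * n - 1) - (n - 1)) * dt
            return lags, acc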

  20. High Resolution Imaging Using Phase Retrieval. Volume 2

    DTIC Science & Technology

    1991-10-01

    aberrations of the telescope. It will also correct aberrations due to atmospheric turbulence for a ground-based telescope, and can be used with several other...retrieval algorithm, based on the Ayers/Dainty blind deconvolution algorithm, was also developed. A new methodology for exploring the uniqueness of phase...Simulation Experiments ... 42; 3.3.1 Initial Simulations with Noisy Modulus Data ... 45; 3.3.2 Simulations of a Space-Based Amplitude

  1. Polarimeter Blind Deconvolution Using Image Diversity

    DTIC Science & Technology

    2007-09-01

    significant presence when imaging through turbulence and its ease of production in the laboratory. An innovative algorithm for detection and estimation...1.2.2.2 Atmospheric Turbulence. Atmospheric turbulence spatially distorts the wavefront as light passes through it and causes blurring of images in an...intensity image. Various values of β are used in the experiments. The optimal β value varied with the input and the algorithm. The hybrid seemed to

  2. Three-dimensional FLASH Laser Radar Range Estimation via Blind Deconvolution

    DTIC Science & Technology

    2009-10-01

    scene can result in errors due to several factors including the optical spatial impulse response, detector blurring, photon noise, timing jitter, and...estimation error include spatial blur, detector blurring, noise, timing jitter, and inter-sample targets. Unlike previous research, this paper accounts...for pixel coupling by defining the range image mathematical model as a 2D convolution between the system spatial impulse response and the object (target

  3. Blind Deconvolution of Astronomical Images with a Constraint on Bandwidth Determined by the Parameters of the Optical System

    NASA Astrophysics Data System (ADS)

    Luo, Lin; Fan, Min; Shen, Mang-zuo

    2008-01-01

    Atmospheric turbulence severely restricts the spatial resolution of astronomical images obtained with a large ground-based telescope. In order to reduce this effect effectively, we propose a blind deconvolution method with a bandwidth constraint determined by the parameters of the telescope's optical system, based on the principle of maximum likelihood estimation, in which the convolution error function is minimized using the conjugate gradient algorithm. A relation between the parameters of the telescope optical system and the image's frequency-domain bandwidth is established, and the speed of convergence of the algorithm is improved by imposing a positivity constraint on the variables and a limited-bandwidth constraint on the point spread function. To keep the effective Fourier frequencies from exceeding the cut-off frequency, each single image element (e.g., a pixel of the CCD) in the sampling focal plane should be smaller than one fourth of the diameter of the diffraction spot. No object-centering constraint is used in the algorithm, so the proposed method is suitable for restoring a whole field of objects. Computer simulations and the restoration of an observed image of α Piscium demonstrate the effectiveness of the proposed method.
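
    The sampling rule quoted above is easy to check numerically; a sketch with illustrative telescope parameters (the numbers are placeholders, not those of the paper):

        def sampling_ok(D, wavelength, focal_length, pixel_size):
            # Optical cut-off frequency in the focal plane: f_c = D / (lambda f).
            # The diffraction spot diameter is ~2.44 lambda f / D, and the rule
            # asks each pixel to be smaller than a quarter of that diameter.
            f_c = D / (wavelength * focal_length)
            spot = 2.44 * wavelength * focal_length / D
            return pixel_size < spot / 4.0, f_c

        ok, f_c = sampling_ok(D=1.2, wavelength=550e-9, focal_length=12.0,
                              pixel_size=1.5e-6)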

  4. Blind deconvolution with principal components analysis for wide-field and small-aperture telescopes

    NASA Astrophysics Data System (ADS)

    Jia, Peng; Sun, Rongyu; Wang, Weinan; Cai, Dongmei; Liu, Huigen

    2017-09-01

    Telescopes with a wide field of view (greater than 1°) and small apertures (less than 2 m) are workhorses for observations such as sky surveys and fast-moving object detection, and play an important role in time-domain astronomy. However, images captured by these telescopes are contaminated by optical system aberrations, atmospheric turbulence, tracking errors and wind shear. To increase the quality of images and maximize their scientific output, we propose a new blind deconvolution algorithm based on statistical properties of the point spread functions (PSFs) of these telescopes. In this new algorithm, we first construct the PSF feature space through principal component analysis, and then classify PSFs from different positions and times using a self-organizing map. According to the classification results, we group images by PSF type and select the corresponding PSFs to construct a prior PSF. The prior PSF is then used to restore those images. To investigate the improvement that this algorithm provides for data reduction, we process images of space debris captured by our small-aperture wide-field telescopes. Comparing the reduced results of the original images and of the images processed with the standard Richardson-Lucy method, our method shows a promising improvement in astrometry accuracy.
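
    A sketch of the first step, constructing the PSF feature space with PCA; the array shapes, normalization and component count are assumptions, and the paper's SOM classification would operate on the returned coordinates:

        import numpy as np
        from sklearn.decomposition import PCA

        def psf_feature_space(psf_stack, n_components=10):
            # psf_stack: (n_psfs, h, w) PSFs measured at different field
            # positions and times. Returns their PCA coordinates, ready for
            # clustering or classification.
            n, h, w = psf_stack.shape
            flat = psf_stack.reshape(n, h * w)
            flat = flat / flat.sum(axis=1, keepdims=True)   # normalize flux
            pca = PCA(n_components=n_components)
            return pca.fit_transform(flat), pca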

  5. Windprofiler optimization using digital deconvolution procedures

    NASA Astrophysics Data System (ADS)

    Hocking, W. K.; Hocking, A.; Hocking, D. G.; Garbanzo-Salas, M.

    2014-10-01

    Digital improvements to the data acquisition procedures used for windprofiler radars have the potential to improve height coverage at optimum resolution and to permit improved height resolution. A few newer systems already use this capability. Real-time deconvolution procedures offer even further optimization, and this has not been effectively employed in recent years. In this paper we demonstrate the advantages of combining these features, with particular emphasis on the advantages of real-time deconvolution. Using several multi-core CPUs, we have been able to achieve speeds of up to 40 GHz from a standard commercial motherboard, allowing data to be digitized and processed without the need for any hardware except a transmitter (and associated drivers), a receiver and a digitizer. No Digital Signal Processor chips are needed, allowing great flexibility with analysis algorithms. By using deconvolution procedures, we have been able not only to optimize height resolution, but also to make advances in dealing with spectral contaminants like ground echoes and other near-zero-Hz spectral contamination. Our results also demonstrate the ability to produce fine-resolution measurements, revealing small-scale structures within the backscattered echoes that were previously not possible to see. Resolutions of 30 m are possible for VHF radars. Furthermore, our deconvolution technique allows the removal of range-aliasing effects in real time, a major bonus in many instances. Results are shown using new radars in Canada and Costa Rica.

  6. A Comparative Study of Different Deblurring Methods Using Filters

    NASA Astrophysics Data System (ADS)

    Srimani, P. K.; Kavitha, S.

    2011-12-01

    This paper studies the restoration of Gaussian-blurred images using four deblurring techniques, viz. the Wiener filter, the regularized filter, the Lucy-Richardson deconvolution algorithm and the blind deconvolution algorithm, given information about the Point Spread Function (PSF) of the corrupted blurred image. These are applied to a scanned image of a seven-month fetus in the womb and compared with one another, so as to choose the best technique for restoring the image. The paper also studies the restoration of the blurred image with the regularized filter (RF) and no information about the PSF, applying the same four techniques after estimating a guess of the PSF; the number of iterations and the weight threshold are determined so as to choose the best guesses for the restored image.
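
    Three of the four methods are available off the shelf; a sketch using recent scikit-image (the num_iter argument was called iterations in older releases), with unsupervised Wiener standing in for the blind step since scikit-image ships no blind Lucy-Richardson variant, and the image assumed to be a float array scaled to [0, 1]:

        import numpy as np
        from scipy.signal import convolve2d
        from skimage import restoration

        def compare_deblurring(image, psf, noise_sigma=0.01, seed=0):
            # Blur, add noise, then restore with three of the named methods.
            rng = np.random.default_rng(seed)
            blurred = convolve2d(image, psf, mode='same')
            blurred += noise_sigma * rng.standard_normal(image.shape)
            results = {
                'wiener': restoration.wiener(blurred, psf, balance=0.1),
                'richardson_lucy': restoration.richardson_lucy(blurred, psf,
                                                               num_iter=30),
            }
            deconv, _ = restoration.unsupervised_wiener(blurred, psf)
            results['unsupervised_wiener'] = deconv
            return results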

  7. Acoustic Blind Deconvolution and Frequency-Difference Beamforming in Shallow Ocean Environments

    DTIC Science & Technology

    2012-01-01

    acoustic field experiment (FAF06) conducted in July 2006 off the west coast of Italy. Dr. Heechun Song of the Scripps Institution of Oceanography...from seismic surveying and whale calls recorded on a vertical array with 12 elements. The whale call frequencies range from 100 to 500 Hz and the water...underway. Together Ms. Abadi and Dr. Thode had considerable success simulating the experimental environment, deconvolving whale calls, ranging the

  8. Blind Deconvolution Method of Image Deblurring Using Convergence of Variance

    DTIC Science & Technology

    2011-03-24

    random variable x is [9]: f_X(x) = (1/(√(2π) σ)) exp(−(x−m)²/(2σ²)), −∞ < x < ∞, σ > 0 (6), where m is the mean and σ² is the variance. [Figure 1: Gaussian distribution] ...of the MAP Estimation algorithm when N was set to 50. The APEX method is not without its own difficulties when dealing with astronomical data

  9. Unsupervised Blind Deconvolution

    DTIC Science & Technology

    2013-09-01

    is: I(u) = O(u)H(u) (4), where u is a spatial frequency vector in the Fourier plane and I(u), O(u) and H(u) stand for...exposures is given by: H_L(u) = H_0(u)H_LE(u) (6), H_S(u) = H_0(u)H_SE(u) (7), where H_LE(u) represents

  10. An improved robust blind motion de-blurring algorithm for remote sensing images

    NASA Astrophysics Data System (ADS)

    He, Yulong; Liu, Jin; Liang, Yonghui

    2016-10-01

    Shift-invariant motion blur can be modeled as a convolution of the true latent image and the blur kernel, with additive noise. Blind motion de-blurring estimates a sharp image from a motion-blurred image without knowledge of the blur kernel. This paper proposes an improved edge-specific motion de-blurring algorithm that proves well suited to processing remote sensing images. We find that an inaccurate blur kernel is the main cause of low-quality restored images. To improve image quality, we make the following contributions. For robust kernel estimation, first, we adapt a multi-scale scheme to make sure that the edge map is constructed accurately; second, an effective salient-edge selection method based on RTV (Relative Total Variation) is used to extract salient structure from texture; third, an alternating iterative method is introduced to perform kernel optimization; in this step, we adopt the l1 and l0 norms as priors to remove noise and ensure the continuity of the blur kernel. For the final latent image reconstruction, an improved adaptive deconvolution algorithm based on the TV-l2 model is used to recover the latent image; we control the regularization weight adaptively in different regions according to the local image characteristics in order to preserve tiny details and eliminate noise and ringing artifacts. Some synthetic remote sensing images are used to test the proposed algorithm, and results demonstrate that it obtains an accurate blur kernel and achieves better de-blurring results.
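
    The final TV-l2 reconstruction step admits a compact sketch (the fixed step size, smoothed TV gradient and iteration count are assumptions; the paper additionally adapts the regularization weight per region):

        import numpy as np
        from scipy.signal import fftconvolve

        def tv_l2_deconv(blurred, psf, lam=0.01, lr=0.2, n_iter=200, eps=1e-3):
            # Gradient descent on ||k * x - b||^2 + lam * TV(x), with the TV
            # term smoothed by eps so its gradient is defined everywhere.
            x = blurred.copy()
            for _ in range(n_iter):
                r = fftconvolve(x, psf, mode='same') - blurred
                data_grad = fftconvolve(r, psf[::-1, ::-1], mode='same')
                gx = np.gradient(x, axis=1)
                gy = np.gradient(x, axis=0)
                mag = np.sqrt(gx**2 + gy**2 + eps)
                tv_grad = -(np.gradient(gx / mag, axis=1) +
                            np.gradient(gy / mag, axis=0))   # -div(grad/|grad|)
                x = np.clip(x - lr * (data_grad + lam * tv_grad), 0.0, None)
            return x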

  11. Deconvolution of Images from BLAST 2005: Insight into the K3-50 and IC 5146 Star-forming Regions

    NASA Astrophysics Data System (ADS)

    Roy, Arabindo; Ade, Peter A. R.; Bock, James J.; Brunt, Christopher M.; Chapin, Edward L.; Devlin, Mark J.; Dicker, Simon R.; France, Kevin; Gibb, Andrew G.; Griffin, Matthew; Gundersen, Joshua O.; Halpern, Mark; Hargrave, Peter C.; Hughes, David H.; Klein, Jeff; Marsden, Gaelen; Martin, Peter G.; Mauskopf, Philip; Netterfield, Calvin B.; Olmi, Luca; Patanchon, Guillaume; Rex, Marie; Scott, Douglas; Semisch, Christopher; Truch, Matthew D. P.; Tucker, Carole; Tucker, Gregory S.; Viero, Marco P.; Wiebe, Donald V.

    2011-04-01

    We present an implementation of the iterative flux-conserving Lucy-Richardson (L-R) deconvolution method of image restoration for maps produced by the Balloon-borne Large Aperture Submillimeter Telescope (BLAST). Compared to the direct Fourier transform method of deconvolution, the L-R operation restores images with better-controlled background noise and increases source detectability. Intermediate iterated images are useful for studying extended diffuse structures, while the later iterations truly enhance point sources to near the designed diffraction limit of the telescope. The L-R method of deconvolution is efficient in resolving compact sources in crowded regions while simultaneously conserving their respective flux densities. We have analyzed its performance and convergence extensively through simulations and cross-correlations of the deconvolved images with available high-resolution maps. We present new science results from two BLAST surveys, in the Galactic regions K3-50 and IC 5146, further demonstrating the benefits of performing this deconvolution. We have resolved three clumps within a radius of 4.5 arcmin inside the star-forming molecular cloud containing K3-50. Combining the well-resolved dust emission map with available multi-wavelength data, we have constrained the spectral energy distributions (SEDs) of five clumps to obtain masses (M), bolometric luminosities (L), and dust temperatures (T). The L-M diagram has been used as a diagnostic tool to estimate the evolutionary stages of the clumps. There are close relationships between dust continuum emission and both 21 cm radio continuum and 12CO molecular line emission. The restored extended large-scale structures in the Northern Streamer of IC 5146 have a strong spatial correlation with both SCUBA and high-resolution extinction images. A dust temperature of 12 K has been obtained for the central filament. We report physical properties of ten compact sources, including six associated protostars, by fitting SEDs to multi-wavelength data. All of these compact sources are still quite cold (typical temperature below ~16 K) and are above the critical Bonnor-Ebert mass. They have associated low-power young stellar objects. Further evidence for starless clumps has also been found in the IC 5146 region.

  12. Quasi-Speckle Measurements of Close Double Stars With a CCD Camera

    NASA Astrophysics Data System (ADS)

    Harshaw, Richard

    2017-01-01

    CCD measurements of visual double stars have been an active area of amateur observing for several years now. However, most CCD measurements rely on “lucky imaging” (selecting a very small percentage of the best frames of a larger frame set so as to get the best “frozen” atmosphere for the image), a technique that has limitations with regard to how close the stars can be and still be cleanly resolved in the lucky image. In this paper, the author reports how using deconvolution stars in the analysis of close double stars can greatly enhance the quality of the autocorrelogram, leading to a more precise solution using speckle reduction software rather than lucky imaging.

  13. High Resolution Optical Imaging through the Atmosphere

    DTIC Science & Technology

    1989-12-28

    “Iterative Blind Deconvolution Method and its Applications”, Opt. Lett., 13, p. 547. Fienup, J.R. 1978, Opt. Lett., 3, 27. Karovska, M., Nisenson, P., and...Noyes, R. (1987), “High Angular Resolution Speckle Imaging of Alpha Ori”, BAAS, Vol. 19, No. 2. Karovska, M., Koechlin, L., Nisenson, P., Papaliolios...Publishers. Karovska, M., Nisenson, P., Papaliolios, C., Stendley, C. (1989), “High Angular Speckle Observations of SN1987A. Days 40-580.”, BAAS, Vol

  14. Acoustic Blind Deconvolution and Unconventional Nonlinear Beamforming in Shallow Ocean Environments

    DTIC Science & Technology

    2013-09-30

    this year’s work, contains natural bowhead whale calls recorded with a 12-element vertical array in the Arctic Ocean off the north coast of Alaska...This data set was collected and shared with this research project by Dr. Aaron Thode of Scripps Institution of Oceanography. The whale call frequencies...performance of STR and conventional mode filtering for ranging the recorded whale calls. Figure 1. Arctic ocean sound channel used for simulations of

  15. Frequency-Difference Source Localization and Blind Deconvolution in Shallow Ocean Environments

    DTIC Science & Technology

    2014-09-30

    investigations were recorded as part of the KAM11 experiment and were provided for this research effort by Dr. Heechun Song of Scripps Institution of...kHz ≤ f ≤ 20 kHz, could not. Based on this simulation success, suitable broadband experimental measurements were sought, and Dr. Song of SIO...PROJECTS This project currently uses acoustic array recordings of sounds that propagated through the ocean. In FY14, Dr. Heechun Song of SIO

  16. A comparison of deconvolution and the Rutland-Patlak plot in parenchymal renal uptake rate.

    PubMed

    Al-Shakhrah, Issa A

    2012-07-01

    Deconvolution and the Rutland-Patlak (R-P) plot are two of the most commonly used methods for analyzing dynamic radionuclide renography. Both methods allow estimation of the absolute and relative renal uptake of the radiopharmaceutical and of its rate of transit through the kidney. Seventeen patients (32 kidneys) were referred for further evaluation by renal scanning. All patients were positioned supine with their backs to the scintillation gamma camera, so that the kidneys and the heart were both in the field of view. Approximately 5-7 mCi of (99m)Tc-DTPA (diethylenetriamine penta-acetic acid) in about 0.5 ml of saline was injected intravenously and sequential 20 s frames were acquired; the study of each patient lasted approximately 20 min. The time-activity curves of the parenchymal region of interest of each kidney, as well as of the heart, were obtained for analysis. The data were then analyzed with deconvolution and the R-P plot. A strong positive association (n = 32; r = 0.83; R(2) = 0.68) was found between the values obtained by the two methods. Bland-Altman statistical analysis demonstrated that 97% of the values in the study (31 of 32 cases) were within the limits of agreement (mean ± 1.96 standard deviations). We believe that the R-P analysis method is likely to be more reproducible than the iterative deconvolution method, because the deconvolution technique (the iterative method) relies heavily on the accuracy of the first point analyzed, as any errors are carried forward into the calculations of all the subsequent points, whereas the R-P technique is based on an initial analysis of the data by means of the R-P plot, and it can be considered an alternative technique to find and calculate the renal uptake rate.
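
    The R-P computation itself reduces to a linear regression once the curves are in hand; a sketch (the fitted window is an assumed placeholder, and blood(t) must be non-zero over it):

        import numpy as np
        from scipy.integrate import cumulative_trapezoid

        def rutland_patlak_slope(t, kidney, blood, window=slice(3, None)):
            # Plot kidney(t)/blood(t) against integral_0^t blood dt / blood(t);
            # the slope of the linear portion estimates the parenchymal
            # uptake rate constant.
            cum = cumulative_trapezoid(blood, t, initial=0.0)
            xs = (cum / blood)[window]
            ys = (kidney / blood)[window]
            slope, _ = np.polyfit(xs, ys, 1)
            return slope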

  17. The algorithm of motion blur image restoration based on PSF half-blind estimation

    NASA Astrophysics Data System (ADS)

    Chen, Da-Ke; Lin, Zhe

    2011-08-01

    A novel algorithm of motion blur image restoration based on PSF half-blind estimation with the Hough transform is introduced, building on a full analysis of the operating principle of the TDICCD camera, to address the problem that using a vertical uniform linear motion estimate as the initial PSF value in the IBD algorithm leads to distorted restorations. First, the mathematical model of image degradation is established using the prior information of multi-frame images, and the two parameters that crucially influence the PSF estimate (motion blur length and angle) are set accordingly. Finally, the restored image is obtained through multiple iterations in the Fourier domain starting from the initial PSF estimate gained by the above method. Experimental results show that the proposed algorithm not only effectively solves the image distortion caused by relative motion between the TDICCD camera and moving objects, but also clearly restores the detailed characteristics of the original image.

  18. Single frequency thermal wave radar: A next-generation dynamic thermography for quantitative non-destructive imaging over wide modulation frequency ranges.

    PubMed

    Melnikov, Alexander; Chen, Liangjie; Ramirez Venegas, Diego; Sivagurunathan, Koneswaran; Sun, Qiming; Mandelis, Andreas; Rodriguez, Ignacio Rojas

    2018-04-01

    Single-Frequency Thermal Wave Radar Imaging (SF-TWRI) was introduced and used to obtain quantitative thickness images of coatings on an aluminum block and on polyetherketone, and to image blind subsurface holes in a steel block. In SF-TWR, the starting and ending frequencies of a linear frequency modulation sweep are chosen to coincide. Using the highest available camera frame rate, SF-TWRI leads to a higher number of sampled points along the modulation waveform than conventional lock-in thermography imaging because it is not limited by conventional undersampling at high frequencies due to camera frame-rate limitations. This property leads to a large reduction in measurement time, better image quality, and a higher signal-to-noise ratio across wide frequency ranges. For quantitative thin-coating imaging applications, a two-layer photothermal model with lumped parameters was used to reconstruct the layer thickness from multi-frequency SF-TWR images. SF-TWRI represents a next-generation thermography method with superior features for imaging important classes of thin layers, materials, and components that require high-frequency thermal-wave probing well above today's available infrared camera technology frame rates.

  1. Triaxial ellipsoid dimensions and rotational poles of seven asteroids from Lick Observatory adaptive optics images, and of Ceres

    NASA Astrophysics Data System (ADS)

    Drummond, Jack; Christou, Julian

    2008-10-01

    Seven main belt asteroids, 2 Pallas, 3 Juno, 4 Vesta, 16 Psyche, 87 Sylvia, 324 Bamberga, and 707 Interamnia, were imaged with the adaptive optics system on the 3 m Shane telescope at Lick Observatory in the near infrared, and their triaxial ellipsoid dimensions and rotational poles have been determined with parametric blind deconvolution. In addition, the dimensions and pole for 1 Ceres are derived from resolved images at multiple epochs, even though it is an oblate spheroid.

  2. Tracking Virus Particles in Fluorescence Microscopy Images Using Multi-Scale Detection and Multi-Frame Association.

    PubMed

    Jaiswal, Astha; Godinez, William J; Eils, Roland; Lehmann, Maik Jorg; Rohr, Karl

    2015-11-01

    Automatic fluorescent particle tracking is an essential task to study the dynamics of a large number of biological structures at a sub-cellular level. We have developed a probabilistic particle tracking approach based on multi-scale detection and two-step multi-frame association. The multi-scale detection scheme allows coping with particles in close proximity. For finding associations, we have developed a two-step multi-frame algorithm, which is based on a temporally semiglobal formulation as well as spatially local and global optimization. In the first step, reliable associations are determined for each particle individually in local neighborhoods. In the second step, the global spatial information over multiple frames is exploited jointly to determine optimal associations. The multi-scale detection scheme and the multi-frame association finding algorithm have been combined with a probabilistic tracking approach based on the Kalman filter. We have successfully applied our probabilistic tracking approach to synthetic as well as real microscopy image sequences of virus particles and quantified the performance. We found that the proposed approach outperforms previous approaches.
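
    The Kalman-filter backbone of such trackers reduces to one predict/update cycle per particle per frame; a generic sketch in which all matrices are assumed inputs (e.g. a constant-velocity motion model), not the paper's specific settings:

        import numpy as np

        def kalman_step(x, P, z, F, H, Q, R):
            # x, P: state (position, velocity) and its covariance; z: measured
            # position after detection and association; F, H, Q, R: motion
            # model, measurement model and their noise covariances.
            x = F @ x                       # predict state
            P = F @ P @ F.T + Q             # predict covariance
            S = H @ P @ H.T + R             # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
            x = x + K @ (z - H @ x)         # update with the measurement
            P = (np.eye(len(x)) - K @ H) @ P
            return x, P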

  3. A Quantum Multi-proxy Blind Signature Scheme Based on Genuine Four-Qubit Entangled State

    NASA Astrophysics Data System (ADS)

    Tian, Juan-Hong; Zhang, Jian-Zhong; Li, Yan-Ping

    2016-02-01

    In this paper, we propose a multi-proxy blind signature scheme based on controlled teleportation. Genuine four-qubit entangled state functions as quantum channel. The scheme uses the physical characteristics of quantum mechanics to implement delegation, signature and verification. The security analysis shows the scheme satisfies the security features of multi-proxy signature, unforgeability, undeniability, blindness and unconditional security.

  4. Optimisation of chromatographic resolution using objective functions including both time and spectral information.

    PubMed

    Torres-Lapasió, J R; Pous-Torres, S; Ortiz-Bolsico, C; García-Alvarez-Coque, M C

    2015-01-16

    The optimisation of the resolution in high-performance liquid chromatography is traditionally performed attending only to the time information. However, even under optimal conditions, some peak pairs may remain unresolved. The resolution of such incompletely resolved peaks can still be accomplished by deconvolution, which can be carried out with more guarantees of success by including spectral information. In this work, two-way chromatographic objective functions (COFs) that incorporate both time and spectral information were tested, based on the peak purity (analyte peak fraction free of overlapping) and the multivariate selectivity (figure of merit derived from the net analyte signal) concepts. These COFs are sensitive to situations where the components that coelute in a mixture show some spectral differences. Therefore, they are useful to find experimental conditions where the spectrochromatograms can be recovered by deconvolution. Two-way multivariate selectivity yielded the best performance and was applied to the separation, using diode-array detection, of a mixture of 25 phenolic compounds, which remained unresolved using linear and multi-linear gradients of acetonitrile-water. Peak deconvolution was carried out using the combination of the orthogonal projection approach and alternating least squares.

  5. Optical Diagnostics of Multi-Gap Gas Switches for Linear Transformer Drivers

    NASA Astrophysics Data System (ADS)

    Sheng, Liang; Li, Yang; Sun, Tieping; Cong, Peitian; Zhang, Mei; Peng, Bodong; Zhao, Jizhen; Yue, Zhiqin; Wei, Fuli; Yuan, Yuan

    2014-07-01

    The trigger characteristics of a multi-gap gas switch with double insulating layers, a square-groove electrode supporter and a UV pre-ionizing structure are investigated aided by a high sensitivity fiber-bundle array detector, a UV fiber detector, and a framing camera, in addition to standard electrical diagnostics. The fiber-bundle-array detector is used to track the turn-on sequence of each electrode gap at a timing precision of 0.6 ns. Each fiber bundle, including five fibers with different azimuth angles, aims at the whole emitting area of each electrode gap and is fed to a photomultiplier tube. The UV fiber detector with a spectrum response of 260-320 nm, including a fused-quartz fiber of 200 μm in diameter and a solar-blinded photomultiplier tube, is adopted to study the effect of UV pre-ionizing on trigger characteristics. The framing camera, with a capacity of 4 frames per shot and an exposure time of 5 ns, is employed to capture the evolution of channel arcs. Based on the turn-on light signal of each electrode gap, the breakdown delay is divided into statistical delay and formative delay. A decrease in both of them, a smaller switch jitter and more channel arcs are observed with lower gas pressure. An increase in trigger voltage can reduce the statistical delay and its jitter, while higher trigger voltage has a relatively small influence on the formative delay and the number of channel arcs. With the UV pre-ionizing structure at 0.24 MPa gas pressure and 60 kV trigger voltage, the statistical delay and its jitter can be reduced by 1.8 ns and 0.67 ns, while the formative delay and its jitter can only be reduced by 0.5 ns and 0.25 ns.

  6. Blind identification of the kinetic parameters in three-compartment models

    NASA Astrophysics Data System (ADS)

    Riabkov, Dmitri Y.; Di Bella, Edward V. R.

    2004-03-01

    Quantified knowledge of tissue kinetic parameters in regions of the brain and other organs can offer information useful in clinical and research applications. Dynamic medical imaging with injection of a radioactive or paramagnetic tracer can be used for this measurement. The kinetics of some widely used tracers, such as [18F]2-fluoro-2-deoxy-D-glucose, can be described by a three-compartment physiological model, and the kinetic parameters of the tissue can be estimated from dynamically acquired images. The feasibility of estimation by blind identification, which does not require knowledge of the blood input, is considered analytically and numerically in this work for the three-compartment type of tissue response. The non-uniqueness of the two-region case for blind identification of kinetic parameters in the three-compartment model is shown; at least three regions are needed for the blind identification to be unique. Numerical results for the accuracy of these blind identification methods under different conditions were obtained. Both a separable-variables least-squares (SLS) approach and an eigenvector-based multichannel blind deconvolution algorithm were used; the latter showed poor accuracy. Modifications for non-uniform time sampling were also developed, and a further method that uses a model for the blood input was compared. Results for the macroparameter K, which reflects the metabolic rate of glucose usage, using three regions with noise showed comparable accuracy for the separable-variables least-squares method and for the input-model-based method, and slightly worse accuracy for SLS with the non-uniform sampling modification.
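
    A minimal simulation of the three-compartment (two-tissue) model may help fix ideas. The sketch below integrates standard FDG-type rate equations for an assumed blood input and reports the macroparameter K = K1·k3/(k2 + k3); the gamma-variate input function and all rate constants are hypothetical values, and no blind identification is performed here.

    ```python
    import numpy as np
    from scipy.integrate import odeint

    def two_tissue(C, t, Cp, K1, k2, k3, k4):
        """Three-compartment (two-tissue) kinetics driven by blood input Cp(t)."""
        C1, C2 = C
        dC1 = K1 * Cp(t) - (k2 + k3) * C1 + k4 * C2   # free tracer in tissue
        dC2 = k3 * C1 - k4 * C2                       # trapped (metabolized) tracer
        return [dC1, dC2]

    Cp = lambda t: t * np.exp(-t / 0.5)               # hypothetical bolus input
    t = np.linspace(0.0, 60.0, 600)                   # minutes
    K1, k2, k3, k4 = 0.10, 0.15, 0.08, 0.01
    C = odeint(two_tissue, [0.0, 0.0], t, args=(Cp, K1, k2, k3, k4))
    tissue_tac = C.sum(axis=1)                        # measured tissue activity curve

    K = K1 * k3 / (k2 + k3)                           # macroparameter (glucose usage)
    print(f"K = {K:.4f} per minute")
    ```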

  7. In Vivo Neuromechanics: Decoding Causal Motor Neuron Behavior with Resulting Musculoskeletal Function.

    PubMed

    Sartori, Massimo; Yavuz, Utku Ş; Farina, Dario

    2017-10-18

    Human motor function emerges from the interaction between the neuromuscular and the musculoskeletal systems. Despite our knowledge of the mechanisms underlying neural and mechanical functions, the neuro-mechanical interplay in the neuro-musculo-skeletal system remains poorly understood, and this currently represents a major challenge to the understanding of human movement. We address this challenge by proposing a paradigm for investigating the spinal motor neuron contribution to skeletal joint mechanical function in the intact human in vivo. We employ multi-muscle spatial sampling and deconvolution of high-density fiber electrical activity to decode accurate α-motor neuron discharges across five lumbosacral segments in the human spinal cord. We use complete α-motor neuron discharge series to drive forward subject-specific models of the musculoskeletal system in open loop with no corrective feedback. We perform validation tests in which mechanical moments are estimated, with no knowledge of reference data, over unseen conditions. This enables accurate blinded estimation of ankle function purely from motor neuron information and, remarkably, allows observing causal associations between spinal motor neuron activity and joint moment control. We provide a new class of neural data-driven musculoskeletal modeling formulations for bridging between the neural and mechanical levels of movement in vivo, with implications for understanding motor physiology, pathology, and recovery.

  8. Soil Characterization and Site Response of Marine and Continental Environments

    NASA Astrophysics Data System (ADS)

    Contreras-Porras, R. S.; Huerta-Lopez, C. I.; Martinez-Cruzado, J. A.; Gaherty, J. B.; Collins, J. A.

    2009-05-01

    An in situ soil properties study was conducted to characterize both the site and the shallow layer sediments under marine and continental environments. Data from the SCoOBA (Sea of Cortez Ocean Bottom Array) seismic experiment and on-land ambient vibration measurements in the urban areas of Tijuana, B. C., and Ensenada, B. C., Mexico were used in the analysis. The goal of this investigation is to identify and analyze the effect of the physical/geotechnical properties of the ground on the site response under seismic excitation in both marine and continental environments. The time series comprised earthquakes and background noise recorded between 10/2005 and 10/2006 in the Gulf of California (GoC) with very-broadband Ocean Bottom Seismographs (OBS), together with ambient vibration measurements collected during different time periods in the Tijuana and Ensenada urban areas. The data processing and analysis were conducted by means of the H/V Spectral Ratios (HVSPR) of multi-component data, the Random Decrement Method (RDM), and Blind Deconvolution (BD). This study presents ongoing results of a long-term project to characterize the local site response of soil layers under dynamic excitation using digital signal processing algorithms on time series, as well as a comparison of the results provided by these methodologies.
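
    For readers unfamiliar with the technique, the sketch below computes a basic H/V spectral ratio from three-component records as the geometric mean of the horizontal amplitude spectra divided by the vertical one. Windowing, smoothing, and averaging conventions differ between implementations, and the random arrays here merely stand in for real records.

    ```python
    import numpy as np

    def hv_spectral_ratio(ns, ew, ud, fs, nfft=4096):
        """H/V ratio: geometric mean of horizontal spectra over the vertical."""
        freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
        def amp(x):
            return np.abs(np.fft.rfft(x * np.hanning(len(x)), nfft))
        h = np.sqrt(amp(ns) * amp(ew))        # geometric mean of horizontals
        v = np.maximum(amp(ud), 1e-12)        # guard against division by zero
        return freqs, h / v

    fs, n = 100.0, 4096                       # hypothetical 100 Hz recordings
    rng = np.random.default_rng(0)
    ns, ew, ud = (rng.standard_normal(n) for _ in range(3))
    freqs, hv = hv_spectral_ratio(ns, ew, ud, fs)
    f0 = freqs[1:][np.argmax(hv[1:])]         # candidate fundamental site frequency
    print(f"peak H/V near {f0:.2f} Hz")
    ```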

  9. Image restoration for civil engineering structure monitoring using imaging system embedded on UAV

    NASA Astrophysics Data System (ADS)

    Vozel, Benoit; Dumoulin, Jean; Chehdi, Kacem

    2013-04-01

    Nowadays, civil engineering structures are periodically surveyed by qualified technicians (i.e., alpinists) performing visual inspection from heavy mechanical pods. This method is far from safe, not only for the monitoring staff but also for users. Due to the unceasing increase in traffic, making diversions or closing lanes on bridges becomes more and more difficult. New inspection methods have to be found. One of the most promising techniques is to develop inspection methods using images acquired by a dedicated monitoring system operating around the civil engineering structures without disturbing the traffic. In that context, the use of images acquired with a UAV flying around the structures is of particular interest. The UAV can be equipped with different vision systems (digital camera, infrared sensor, video, etc.). Nonetheless, the detection of small distresses in images (like cracks of 1 mm or less) depends on image quality, which is sensitive to internal parameters of the UAV (vibration modes, video exposure times, etc.) and to external parameters (turbulence, bad illumination of the scene, etc.). Though progress has been made at the UAV level and at the sensor level (i.e., optics), image deterioration is still an open problem. These deteriorations mainly take the form of motion blur, possibly coupled with out-of-focus blur and observation noise in the acquired images. In practice, the deteriorations are unknown if no a priori information is available or no dedicated additional instrumentation is set up on the UAV. Image restoration processing is therefore required. This is a difficult problem [1-3] which has been intensively studied over the last decades [4-12]. Image restoration can be addressed by following either a blind approach or a myopic one. In both cases, it includes two processing steps that can be implemented in sequential or alternating mode: the first step carries out the identification of the blur impulse response, and the second makes use of this estimated blur kernel to perform the deconvolution of the acquired image. In the present work, different regularization methods, mainly based on the aforementioned Total Variation pseudo-norm, are studied and analysed. The key points of their respective implementations, their properties, and their limits are investigated in this particular applicative context.
    References: [1] J. Hadamard, Lectures on Cauchy's Problem in Linear Partial Differential Equations, Yale University Press, 1923. [2] A. N. Tihonov, "On the resolution of incorrectly posed problems and the regularisation method" (in Russian), Doklady AN SSSR, 151(3), 1963. [3] C. R. Vogel, Computational Methods for Inverse Problems, SIAM, 2002. [4] A. K. Katsaggelos, J. Biemond, R. W. Schafer, and R. M. Mersereau, "A regularized iterative image restoration algorithm," IEEE Transactions on Signal Processing, vol. 39, no. 4, pp. 914-929, 1991. [5] J. Biemond, R. L. Lagendijk, and R. M. Mersereau, "Iterative methods for image deblurring," Proceedings of the IEEE, vol. 78, no. 5, pp. 856-883, 1990. [6] D. Kundur and D. Hatzinakos, "Blind image deconvolution," IEEE Signal Processing Magazine, vol. 13, no. 3, pp. 43-64, 1996. [7] Y. L. You and M. Kaveh, "A regularization approach to joint blur identification and image restoration," IEEE Transactions on Image Processing, vol. 5, no. 3, pp. 416-428, 1996. [8] T. F. Chan and C. K. Wong, "Total variation blind deconvolution," IEEE Transactions on Image Processing, vol. 7, no. 3, pp. 370-375, 1998. [9] S. Chardon, B. Vozel, and K. Chehdi, "Parametric blur estimation using the GCV criterion and a smoothness constraint on the image," Multidimensional Systems and Signal Processing, vol. 10, pp. 395-414, 1999. [10] B. Vozel, K. Chehdi, and J. Dumoulin, "Myopic image restoration for civil structures inspection using UAV" (in French), in GRETSI, 2005. [11] L. Bar, N. Sochen, and N. Kiryati, "Semi-blind image restoration via Mumford-Shah regularization," IEEE Transactions on Image Processing, vol. 15, no. 2, 2006. [12] J. H. Money and S. H. Kang, "Total variation minimizing blind deconvolution with shock filter reference," Image and Vision Computing, vol. 26, no. 2, pp. 302-314, 2008.
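
    To make the blind/myopic two-step structure concrete, the sketch below performs only the second, non-blind step: given an already identified motion-blur kernel, it deconvolves the frame with a Wiener filter. This is a generic baseline, not one of the Total-Variation-regularized methods studied in the paper, and the 9-pixel kernel and noise-to-signal ratio are assumptions.

    ```python
    import numpy as np

    def wiener_deconvolve(blurred, psf, nsr=0.01):
        """Second step of the pipeline: invert an estimated blur kernel."""
        H = np.fft.fft2(psf, s=blurred.shape)
        W = np.conj(H) / (np.abs(H) ** 2 + nsr)    # nsr regularizes the inversion
        return np.real(np.fft.ifft2(W * np.fft.fft2(blurred)))

    psf = np.full((1, 9), 1.0 / 9.0)               # assumed horizontal motion blur
    img = np.zeros((128, 128)); img[40:90, 60:63] = 1.0   # thin crack-like feature
    blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, s=img.shape)))
    restored = wiener_deconvolve(blurred, psf)
    print(f"mean restoration error: {np.abs(restored - img).mean():.5f}")
    ```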

  10. Methods and apparatus for analysis of chromatographic migration patterns

    DOEpatents

    Stockham, Thomas G.; Ives, Jeffrey T.

    1993-01-01

    A method and apparatus for sharpening signal peaks in a signal representing the distribution of biological or chemical components of a mixture separated by a chromatographic technique such as, but not limited to, electrophoresis. A key step in the method is the use of a blind deconvolution technique, presently embodied as homomorphic filtering, to reduce the contribution of a blurring function to the signal encoding the peaks of the distribution. The invention further includes steps and apparatus directed to determination of a nucleotide sequence from a set of four such signals representing DNA sequence data derived by electrophoretic means.
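
    The sketch below illustrates the homomorphic idea on a synthetic trace: a broad blur contributes a smooth component to the log-magnitude spectrum, which can be estimated by smoothing and subtracted out before transforming back. The smoothing width and the plain subtraction rule are illustrative assumptions and do not reproduce the filtering details of the patented embodiment.

    ```python
    import numpy as np

    def homomorphic_sharpen(signal, smooth_width=15):
        """Suppress a slowly varying blur by liftering the log-magnitude spectrum."""
        S = np.fft.rfft(signal)
        log_mag = np.log(np.abs(S) + 1e-12)
        kernel = np.ones(smooth_width) / smooth_width
        blur_est = np.convolve(log_mag, kernel, mode="same")  # smooth ~ blur part
        S_sharp = np.exp(log_mag - blur_est) * np.exp(1j * np.angle(S))
        return np.fft.irfft(S_sharp, n=len(signal))

    # Demo: stick-like peaks smeared by a Gaussian blurring function
    sticks = np.zeros(512); sticks[[100, 130, 300]] = [1.0, 0.6, 0.8]
    blur = np.exp(-0.5 * ((np.arange(512) - 256) / 12.0) ** 2)
    observed = np.convolve(sticks, blur / blur.sum(), mode="same")
    sharpened = homomorphic_sharpen(observed)
    print(int(np.argmax(sharpened)))
    ```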

  11. Effects of Three Types of Digital Camera Sensors on Dental Specialists' Perception of Smile Esthetics: A Preliminary Double-Blind Clinical Trial.

    PubMed

    Sajjadi, Seyed Hadi; Khosravanifard, Behnam; Moazzami, Fatemeh; Rakhshan, Vahid; Esmaeilpour, Mozhgan

    2016-12-01

    The effect of image quality or dental specialties on the subjective judgment of facial beauty has not been evaluated in any study. This study assessed the effect of digital sensors and specialties on the perception of smile beauty. In the first phase of this double-blind clinical trial, 40 female smile photographs (taken from dental students) were evaluated by a panel of three prosthodontists, six orthodontists, and three specialists in restorative dentistry to select the most beautiful smiles. In the second phase, the 20 students having the most appealing smiles were again photographed in standard conditions, but this time with three different digital sensors: full-frame 21.1-megapixel, half-frame 18.0-megapixel, and compact 10.4-megapixel. The same panel judged smile beauty on a visual analog scale. The referees were blinded to the type of sensors, and the images were all coded. The data were analyzed using two-way ANOVA, Kruskal-Wallis, and Mann-Whitney U tests (α = 0.05 and 0.0167). The mean scores for full-frame, half-frame, and compact sensors were 6.70 ± 1.30, 4.56 ± 1.29, and 4.40 ± 1.39 [out of 10], respectively (Kruskal-Wallis p < 0.0001). The differences between the full-frame and the other sensors were statistically significant (Mann-Whitney p < 0.01); however, the difference between the half-frame and compact sensors was not statistically significant (p > 0.1). Sensors (ANOVA p < 0.00001) but not specialties (p = 0.687) affected the perception of beauty. According to the results of this study, image quality affected the perception of smile beauty. The full-frame sensor produced consistently better results and was recommended over half-frame and compact sensors. Dentists of different specialties might have similar standards of smile beauty, although this needs further assessment. © 2015 by the American College of Prosthodontists.

  12. Wavespace-Based Coherent Deconvolution

    NASA Technical Reports Server (NTRS)

    Bahr, Christopher J.; Cattafesta, Louis N., III

    2012-01-01

    Array deconvolution is commonly used in aeroacoustic analysis to remove the influence of a microphone array's point spread function from a conventional beamforming map. Unfortunately, the majority of deconvolution algorithms assume that the acoustic sources in a measurement are incoherent, which can be problematic for some aeroacoustic phenomena with coherent, spatially-distributed characteristics. While several algorithms have been proposed to handle coherent sources, some are computationally intractable for many problems while others require restrictive assumptions about the source field. Newer generalized inverse techniques hold promise, but are still under investigation for general use. An alternate coherent deconvolution method is proposed based on a wavespace transformation of the array data. Wavespace analysis offers advantages over curved-wave array processing, such as providing an explicit shift-invariance in the convolution of the array sampling function with the acoustic wave field. However, usage of the wavespace transformation assumes the acoustic wave field is accurately approximated as a superposition of plane wave fields, regardless of true wavefront curvature. The wavespace technique leverages Fourier transforms to quickly evaluate a shift-invariant convolution. The method is derived for and applied to ideal incoherent and coherent plane wave fields to demonstrate its ability to determine magnitude and relative phase of multiple coherent sources. Multi-scale processing is explored as a means of accelerating solution convergence. A case with a spherical wave front is evaluated. Finally, a trailing edge noise experiment case is considered. Results show the method successfully deconvolves incoherent, partially-coherent, and coherent plane wave fields to a degree necessary for quantitative evaluation. Curved wave front cases warrant further investigation. A potential extension to nearfield beamforming is proposed.
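
    The computational point, that a shift-invariant convolution can be evaluated cheaply with FFTs, is easy to demonstrate. The sketch below convolves a toy plane-wave source map with a smooth array sampling function in wavespace; the Gaussian sampling function and grid size are assumptions, and the method's actual deconvolution iterations are omitted.

    ```python
    import numpy as np

    def wavespace_convolve(source_map, sampling):
        """Shift-invariant (circular) convolution evaluated with 2D FFTs."""
        return np.real(np.fft.ifft2(np.fft.fft2(source_map) * np.fft.fft2(sampling)))

    n = 64
    source = np.zeros((n, n))
    source[20, 20], source[20, 28] = 1.0, 0.5      # two coherent plane-wave sources
    kx = np.fft.fftfreq(n)[:, None]; ky = np.fft.fftfreq(n)[None, :]
    sampling = np.exp(-(kx**2 + ky**2) / 0.002)    # origin-centered sampling function
    dirty_map = wavespace_convolve(source, sampling)
    print(dirty_map[20, 20], dirty_map[20, 28])    # peaks smeared by the array response
    ```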

  13. A feasibility and optimization study to determine cooling time and burnup of advanced test reactor fuels using a nondestructive technique

    NASA Astrophysics Data System (ADS)

    Navarro, Jorge

    The goal of the study presented here is to determine the best available nondestructive technique for collecting validation data and for determining the burnup and cooling time of fuel elements on-site at the Advanced Test Reactor (ATR) canal. This study makes a recommendation on the viability of implementing a permanent fuel scanning system at the ATR canal and leads to the full design of a permanent fuel scan system. The study consisted, at first, in determining whether it was possible, and which equipment was necessary, to collect useful spectra from ATR fuel elements at the canal adjacent to the reactor. Once it was established that useful spectra could be obtained at the ATR canal, the next step was to determine which detector and which configuration were better suited to predict the burnup and cooling time of fuel elements nondestructively. Three different detectors, High Purity Germanium (HPGe), Lanthanum Bromide (LaBr3), and High Pressure Xenon (HPXe), in two system configurations, above and below the water pool, were used during the study. The data collected and analyzed were used to create burnup and cooling time calibration prediction curves for ATR fuel. The next stage of the study was to determine which of the three detectors tested was better suited for the permanent system. From the spectra taken and the calibration curves obtained, it was determined that although the HPGe detector yielded better results, a detector that could better withstand the harsh environment of the ATR canal was needed. The in-situ nature of the measurements required a rugged, low-maintenance, easy-to-control fuel scanning system. Based on the ATR canal feasibility measurements and calibration results, it was determined that the LaBr3 detector was the best alternative for canal in-situ measurements; however, in order to enhance the quality of the spectra collected using this scintillator, a deconvolution method was developed. Following the development of the deconvolution method for ATR applications, the technique was tested using one-isotope, multi-isotope, and simulated fuel sources. Burnup calibrations were performed using convoluted and deconvoluted data, and the calibration results showed that burnup prediction by this method improves with deconvolution. The final stage of the deconvolution method development was to perform an irradiation experiment in order to create a surrogate fuel source for testing the deconvolution method with experimental data. A conceptual design of the fuel scan system, using the rugged LaBr3 detector in an above-the-water configuration together with deconvolution algorithms, is proposed as the path forward.
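
    The paper develops its own deconvolution method for the LaBr3 spectra; as a generic stand-in, the sketch below applies Richardson-Lucy iterations to remove an assumed Gaussian detector broadening from a synthetic two-photopeak pulse-height spectrum.

    ```python
    import numpy as np

    def richardson_lucy(measured, kernel, iters=200):
        """Iteratively deconvolve a normalized detector-response kernel."""
        r = kernel / kernel.sum()
        estimate = np.full_like(measured, measured.mean())
        for _ in range(iters):
            conv = np.convolve(estimate, r, mode="same")
            estimate *= np.convolve(measured / np.maximum(conv, 1e-12),
                                    r[::-1], mode="same")
        return estimate

    # Synthetic spectrum: two photopeaks broadened by an assumed resolution kernel
    true = np.zeros(1024); true[300], true[520] = 1000.0, 400.0
    kern = np.exp(-0.5 * (np.arange(-40, 41) / 8.0) ** 2)
    measured = np.convolve(true, kern / kern.sum(), mode="same")
    sharpened = richardson_lucy(measured, kern)
    print(int(np.argmax(sharpened)))               # recovers a peak near channel 300
    ```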

  14. Joint de-blurring and nonuniformity correction method for infrared microscopy imaging

    NASA Astrophysics Data System (ADS)

    Jara, Anselmo; Torres, Sergio; Machuca, Guillermo; Ramírez, Wagner; Gutiérrez, Pablo A.; Viafora, Laura A.; Godoy, Sebastián E.; Vera, Esteban

    2018-05-01

    In this work, we present a new technique to simultaneously reduce two major degradation artifacts found in mid-wavelength infrared microscopy imagery, namely the inherent focal-plane-array nonuniformity noise and the scene defocus introduced by the point spread function of the infrared microscope. We correct both nuisances using a novel, recursive method that combines the constant-range nonuniformity correction algorithm with a frame-by-frame deconvolution approach. The ability of the method to jointly compensate for both nonuniformity noise and blur is demonstrated using two different real mid-wavelength infrared microscopic video sequences, which were captured from two microscopic living organisms using a Janos-Sofradir mid-wavelength infrared microscopy setup. The performance of the proposed method is assessed on real and simulated infrared data by computing the root-mean-square error and the roughness-Laplacian pattern index, which was developed specifically for the present work.
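
    As a rough sketch of the nonuniformity half of the problem, the code below implements a constant-range-style correction: each pixel's running minimum and maximum define a linear map onto a common output range, which cancels fixed-pattern gain and offset once the scene has exercised the full range. The running min/max estimator and the uniform-scene demo are simplifying assumptions; the paper couples such a correction with frame-by-frame deconvolution.

    ```python
    import numpy as np

    def constant_range_nuc(frames):
        """Per-pixel linear correction driven by the running per-pixel range."""
        lo = np.full(frames[0].shape, np.inf)
        hi = np.full(frames[0].shape, -np.inf)
        out = []
        for f in frames:
            lo, hi = np.minimum(lo, f), np.maximum(hi, f)
            out.append((f - lo) / np.maximum(hi - lo, 1e-6))
        return out

    rng = np.random.default_rng(6)
    gain = rng.uniform(0.8, 1.2, (32, 32))         # hypothetical fixed-pattern gain
    offset = rng.uniform(-5.0, 5.0, (32, 32))      # hypothetical fixed-pattern offset
    frames = [gain * rng.uniform(0, 100) + offset for _ in range(50)]
    corrected = constant_range_nuc(frames)
    print(f"spatial std: raw {frames[-1].std():.2f}, corrected {corrected[-1].std():.2e}")
    ```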

  15. Non-Negative Spherical Deconvolution (NNSD) for estimation of fiber Orientation Distribution Function in single-/multi-shell diffusion MRI.

    PubMed

    Cheng, Jian; Deriche, Rachid; Jiang, Tianzi; Shen, Dinggang; Yap, Pew-Thian

    2014-11-01

    Spherical Deconvolution (SD) is commonly used for estimating fiber Orientation Distribution Functions (fODFs) from diffusion-weighted signals. Existing SD methods can be classified into two categories: 1) Continuous Representation based SD (CR-SD), where typically a Spherical Harmonic (SH) representation is used for convenient analytical solutions, and 2) Discrete Representation based SD (DR-SD), where the signal profile is represented by a discrete set of basis functions uniformly oriented on the unit sphere. A feasible fODF should be non-negative and should integrate to unity over the unit sphere S(2). However, to our knowledge, most existing SH-based SD methods enforce non-negativity only on discretized points and not on the whole continuum of S(2). Maximum Entropy SD (MESD) and Cartesian Tensor Fiber Orientation Distributions (CT-FOD) are the only SD methods that ensure non-negativity throughout the unit sphere. They are, however, computationally intensive and susceptible to errors caused by numerical spherical integration. Existing SD methods are also known to overestimate the number of fiber directions, especially in regions with low anisotropy, and DR-SD introduces additional error in peak detection owing to the angular discretization of the unit sphere. This paper proposes a SD framework, called Non-Negative SD (NNSD), to overcome all the limitations above. NNSD is significantly less susceptible to false-positive peaks, uses an SH representation for efficient analytical spherical deconvolution, and allows accurate peak detection throughout the whole unit sphere. We further show that NNSD and most existing SD methods can be extended to work on multi-shell data by introducing a three-dimensional fiber response function. We evaluated NNSD in comparison with Constrained SD (CSD), a quadratic programming variant of CSD, MESD, and an L1-norm regularized non-negative least-squares DR-SD. Experiments on synthetic and real single-/multi-shell data indicate that NNSD improves estimation performance in terms of mean difference of angles, peak detection consistency, and anisotropy contrast between isotropic and anisotropic regions. Copyright © 2014 Elsevier Inc. All rights reserved.
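
    The discrete-representation baseline that the paper compares against can be written in a few lines: stack rotated single-fiber responses into a dictionary and solve a non-negative least-squares problem for the fODF weights. The tensor response model, b-value, diffusivities, and random direction grids below are assumptions; NNSD itself instead enforces non-negativity of a spherical-harmonic representation over the entire sphere.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def response(b, d_par, d_perp, grads, u):
        """Tensor-model single-fiber signal for gradient directions `grads`."""
        cos2 = (grads @ u) ** 2
        return np.exp(-b * (d_perp + (d_par - d_perp) * cos2))

    rng = np.random.default_rng(1)
    grads = rng.standard_normal((64, 3)); grads /= np.linalg.norm(grads, axis=1, keepdims=True)
    grid = rng.standard_normal((100, 3)); grid /= np.linalg.norm(grid, axis=1, keepdims=True)

    b, d_par, d_perp = 3000.0, 1.7e-3, 0.2e-3      # assumed acquisition and tissue values
    A = np.stack([response(b, d_par, d_perp, grads, u) for u in grid], axis=1)

    signal = response(b, d_par, d_perp, grads, np.array([1.0, 0.0, 0.0]))
    fodf, _ = nnls(A, signal)                      # DR-SD with a non-negativity constraint
    print(grid[np.argmax(fodf)])                   # grid direction closest to the x-axis
    ```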

  16. A practical deconvolution algorithm in multi-fiber spectra extraction

    NASA Astrophysics Data System (ADS)

    Zhang, Haotong; Li, Guangwei; Bai, Zhongrui

    2015-08-01

    Deconvolution is a very promising approach in multi-fiber spectroscopy data reduction: it can extract spectra to the photon noise level as well as improve the spectral resolution. However, as mentioned in Bolton & Schlegel (2010), it is limited by its huge computational requirements and thus cannot be implemented directly in actual data reduction. We develop a practical algorithm to solve this computation problem. The new algorithm can deconvolve a 2D fiber spectral image of any size with actual PSFs, which may vary with position. We further consider the influence of noise, an intrinsic ill-posed aspect of deconvolution algorithms, and modify our method with a Tikhonov regularization term to suppress the method-induced noise. A series of simulations based on LAMOST data are carried out to test our method under realistic situations with Poisson noise and extreme cross talk, i.e., where the fiber-to-fiber distance is comparable to the FWHM of the fiber profile. Compared with the results of traditional extraction methods, i.e., the Aperture Extraction Method and the Profile Fitting Method, our method shows both higher S/N and higher spectral resolution. The computation time for a noise-added image with 250 fibers and 4k pixels in the wavelength direction is about 2 hours when the fiber cross talk is not extreme, and 3.5 hours in the extreme cross-talk case. We finally apply our method to real LAMOST data and find that the 1D spectra extracted by our method have both higher S/N and higher resolution than those of the traditional methods, though some suspicious weak features, possibly caused by the noise sensitivity of the method, remain around the strong emission lines. How to further attenuate the noise influence will be the topic of our future work. As we have demonstrated, multi-fiber spectra extracted by our method have higher resolution and signal-to-noise ratio and will thus provide more accurate information (such as higher radial-velocity and metallicity measurement accuracy in stellar physics) to astronomers than traditional methods.
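
    The core linear-algebra step can be illustrated on a toy cross-dispersion cut: build the fiber-profile matrix A and solve the Tikhonov-damped normal equations (AᵀA + λI)f = Aᵀd. The Gaussian profiles, 6-pixel fiber spacing, and λ below are assumptions; the paper's method operates on full 2D images with position-varying PSFs.

    ```python
    import numpy as np
    from scipy import sparse
    from scipy.sparse.linalg import spsolve

    pix = np.arange(40)                            # one cross-dispersion cut
    centers, sigma = [14.0, 20.0], 5.0 / 2.355     # two fibers, FWHM ~ 5 px
    A = sparse.csr_matrix(
        np.stack([np.exp(-0.5 * ((pix - c) / sigma) ** 2) for c in centers], axis=1))

    true_flux = np.array([100.0, 60.0])
    data = A @ true_flux + np.random.default_rng(2).normal(0.0, 1.0, size=40)

    lam = 1e-2                                     # Tikhonov term damps noise blow-up
    flux = spsolve((A.T @ A + lam * sparse.eye(2)).tocsc(), A.T @ data)
    print(flux)                                    # ~[100, 60] despite heavy cross-talk
    ```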

  17. A Third-Party E-payment Protocol Based on Quantum Multi-proxy Blind Signature

    NASA Astrophysics Data System (ADS)

    Niu, Xu-Feng; Zhang, Jian-Zhong; Xie, Shu-Cui; Chen, Bu-Qing

    2018-05-01

    A third-party E-payment protocol based on a quantum multi-proxy blind signature is presented in this paper. Adopting the techniques of quantum key distribution, one-time pad and quantum multi-proxy blind signature, our third-party E-payment system protects the user's anonymity, as traditional E-payment systems do, and also offers the unconditional security that classical E-payment systems cannot provide. Furthermore, compared with existing quantum E-payment systems, the proposed system supports E-payment through third-party platforms.

  18. Enhancing Image Processing Performance for PCID in a Heterogeneous Network of Multi-core Processors

    NASA Astrophysics Data System (ADS)

    Linderman, R.; Spetka, S.; Fitzgerald, D.; Emeny, S.

    The Physically-Constrained Iterative Deconvolution (PCID) image deblurring code is being ported to heterogeneous networks of multi-core systems, including Intel Xeons and IBM Cell Broadband Engines. This paper reports results from experiments using the JAWS supercomputer at MHPCC (60 TFLOPS of dual-dual Xeon nodes linked with Infiniband) and the Cell Cluster at AFRL in Rome, NY. The Cell Cluster has 52 TFLOPS of Playstation 3 (PS3) nodes with IBM Cell Broadband Engine multi-cores and 15 dual-quad Xeon head nodes. The interconnect fabric includes Infiniband, 10 Gigabit Ethernet and 1 Gigabit Ethernet to each of the 336 PS3s. The results compare approaches to parallelizing FFT executions across the Xeons and the Cell's Synergistic Processing Elements (SPEs) for frame-level image processing. The experiments included Intel's Performance Primitives and Math Kernel Library, FFTW 3.2, and Carnegie Mellon's SPIRAL. Optimization of the FFTs in the PCID code led to a decrease in their relative processing time. Profiling PCID version 6.2, about one year ago, showed that the 13 functions accounting for the highest percentage of processing were all FFT functions; they accounted for over 88% of the processing time in one run on Xeons. The FFT optimizations led to improvement in the current PCID version 8.0: a recent profile showed that only two of the 19 functions with the highest processing time were FFT functions, and timing measurements showed that FFT processing in PCID version 8.0 has been reduced to less than 19% of the overall processing time. We are working toward a goal of scaling to 200-400 cores per job (1-2 imagery frames per core). Running a pair of cores on each set of frames reduces latency by implementing parallel FFT processing, and our current results show scaling well out to 100 pairs of cores. These results support the next higher level of parallelism in PCID, where groups of several hundred frames, each producing one resolved image, are sent to cliques of several hundred cores in round-robin fashion. Current efforts toward further performance enhancement for PCID are shifting toward using the Playstations in conjunction with the Xeons to take advantage of their outstanding price/performance as well as their Flops/Watt cost advantage. We are fine-tuning the PCID parallelization strategy to balance processing over Xeons and Cell BEs and find an optimal partitioning of PCID over the heterogeneous processors. A high-performance information management system that exploits native Infiniband multicast is used to improve latency among the head nodes. Using a publication/subscription-oriented information management system to implement a unified communications platform makes runs on large HPCs with thousands of intercommunicating cores more flexible and more fault tolerant; it features a loose coupling of publishers to subscribers through intervening brokers. We are also working on enhancing performance for both Xeons and Cell BEs by moving selected operations to single precision. Techniques for adapting the code to single precision and performance results are reported.
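
    The single- versus double-precision trade-off mentioned at the end is easy to probe with a toy harness; the sketch below times batches of frame-level 2D FFTs at both precisions. This is plain Python rather than the PCID code, which runs compiled kernels on Xeons and Cell SPEs; scipy.fft is used here because it preserves single-precision inputs.

    ```python
    import time
    import numpy as np
    from scipy import fft

    def time_ffts(dtype, n=512, frames=50):
        """Wall-clock a batch of frame-level 2D FFTs at the given precision."""
        data = np.random.rand(frames, n, n).astype(dtype)
        t0 = time.perf_counter()
        for frame in data:
            fft.fft2(frame)                        # float32 in -> complex64 out
        return time.perf_counter() - t0

    for dt in (np.float32, np.float64):
        print(np.dtype(dt).name, f"{time_ffts(dt):.3f} s")
    ```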

  19. Using spectral methods to obtain particle size information from optical data: applications to measurements from CARES 2010

    NASA Astrophysics Data System (ADS)

    Atkinson, Dean B.; Pekour, Mikhail; Chand, Duli; Radney, James G.; Kolesar, Katheryn R.; Zhang, Qi; Setyan, Ari; O'Neill, Norman T.; Cappa, Christopher D.

    2018-04-01

    Multi-wavelength in situ aerosol extinction, absorption and scattering measurements made at two ground sites during the 2010 Carbonaceous Aerosols and Radiative Effects Study (CARES) are analyzed using a spectral deconvolution method that allows extraction of particle-size-related information, including the fraction of extinction produced by the fine-mode particles and the effective radius of the fine mode. The spectral deconvolution method is typically applied to analysis of remote sensing measurements. Here, its application to in situ measurements allows for comparison with more direct measurement methods and validation of the retrieval approach. Overall, the retrieved fine-mode fraction and effective radius compare well with other in situ measurements, including size distribution measurements and scattering and absorption measurements made separately for PM1 and PM10, although there were some periods during which the different methods yielded different results. One key contributor to differences between the results obtained is the alternative, spectrally based definitions of fine and coarse modes from the optical methods, relative to instruments that use a physically defined cut point. These results indicate that for campaigns where size, composition and multi-wavelength optical property measurements are made, comparison of the results can result in closure or can identify unusual circumstances. The comparison here also demonstrates that in situ multi-wavelength optical property measurements can be used to determine information about particle size distributions in situations where direct size distribution measurements are not available.
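
    A bare-bones cousin of such a retrieval is two-component unmixing of multi-wavelength extinction, sketched below with fixed, assumed Angstrom exponents for the fine and coarse modes. The operational spectral deconvolution algorithm exploits spectral curvature rather than fixed exponents, so this is only a toy illustration of how a fine-mode fraction can fall out of multi-wavelength optical data.

    ```python
    import numpy as np

    wavelengths = np.array([450.0, 530.0, 660.0])          # nm, hypothetical channels
    alpha_fine, alpha_coarse = 2.0, 0.3                    # assumed Angstrom exponents

    basis = np.stack([(wavelengths / 500.0) ** -alpha_fine,
                      (wavelengths / 500.0) ** -alpha_coarse], axis=1)

    measured = basis @ np.array([0.08, 0.04])              # synthetic total extinction
    amps, *_ = np.linalg.lstsq(basis, measured, rcond=None)
    fmf = amps[0] / amps.sum()                             # fine-mode fraction at 500 nm
    print(f"fine-mode fraction ~ {fmf:.2f}")
    ```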

  3. Multi-kernel deconvolution for contrast improvement in a full field imaging system with engineered PSFs using conical diffraction

    NASA Astrophysics Data System (ADS)

    Enguita, Jose M.; Álvarez, Ignacio; González, Rafael C.; Cancelas, Jose A.

    2018-01-01

    The problem of restoring a high-resolution image from several degraded versions of the same scene (deconvolution) has been receiving attention in recent years in fields such as optics and computer vision. Deconvolution methods are usually based on sets of images taken with small (sub-pixel) displacements or slightly different focus. Techniques based on sets of images obtained with different point spread functions (PSFs) engineered by an optical system are less popular and mostly restricted to microscopic systems, where a spot of light is projected onto the sample under investigation, which is then scanned point by point. In this paper, we use the effect of conical diffraction to shape the PSFs in a full-field macroscopic imaging system. We describe a series of simulations and real experiments that help to evaluate the possibilities of the system, showing the enhancement in image contrast even at frequencies that are strongly filtered by the lens transfer function or when sampling near the Nyquist frequency. Although the results are preliminary and there is room to optimize the prototype, the idea shows promise for overcoming the limitations of image sensor technology in many fields, such as forensics, medical, satellite, or scientific imaging.
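
    A standard way to fuse several frames of one scene blurred by different engineered PSFs is the multi-kernel Wiener combination, sketched below; this generic estimator and the separable Hanning-window PSFs are assumptions and not the paper's specific restoration pipeline.

    ```python
    import numpy as np

    def multi_kernel_wiener(frames, psfs, nsr=1e-3):
        """Joint Wiener estimate: x = sum(conj(Hk) Yk) / (sum(|Hk|^2) + nsr)."""
        shape = frames[0].shape
        num = np.zeros(shape, dtype=complex)
        den = np.full(shape, nsr)
        for y, h in zip(frames, psfs):
            H = np.fft.fft2(h, s=shape)
            num += np.conj(H) * np.fft.fft2(y)
            den += np.abs(H) ** 2
        return np.real(np.fft.ifft2(num / den))

    scene = np.zeros((64, 64)); scene[30, 30] = 1.0
    psfs = [np.outer(np.hanning(7), np.hanning(k)) for k in (3, 7, 11)]
    psfs = [p / p.sum() for p in psfs]
    frames = [np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(p, s=scene.shape)))
              for p in psfs]
    restored = multi_kernel_wiener(frames, psfs)
    print(f"max restoration error: {np.abs(restored - scene).max():.4f}")
    ```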

  4. Developmental vision determines the reference frame for the multisensory control of action.

    PubMed

    Röder, Brigitte; Kusmierek, Anna; Spence, Charles; Schicke, Tobias

    2007-03-13

    Both animal and human studies suggest that action goals are defined in external coordinates regardless of their sensory modality. The present study used an auditory-manual task to test whether the default use of such an external reference frame is innately determined or instead acquired during development because of the increasing dominance of vision over manual control. In Experiment I, congenitally blind, late blind, and age-matched sighted adults had to press a left or right response key depending on the bandwidth of pink noise bursts presented from either the left or right loudspeaker. Although the spatial location of the sounds was entirely task-irrelevant, all groups responded more efficiently with uncrossed hands when the sound was presented from the same side as the responding hand ("Simon effect"). This effect reversed with crossed hands only in the congenitally blind: They responded faster with the hand that was located contralateral to the sound source. In Experiment II, the instruction to the participants was changed: They now had to respond with the hand located next to the sound source. In contrast to Experiment I ("Simon-task"), this task required an explicit matching of the sound's location with the position of the responding hand. In Experiment II, the congenitally blind participants showed a significantly larger crossing deficit than both the sighted and late blind adults. This pattern of results implies that developmental vision induces the default use of an external coordinate frame for multisensory action control; this facilitates not only visual but also auditory-manual control.

  5. Multichannel myopic deconvolution in underwater acoustic channels via low-rank recovery

    PubMed Central

    Tian, Ning; Byun, Sung-Hoon; Sabra, Karim; Romberg, Justin

    2017-01-01

    This paper presents a technique for solving the multichannel blind deconvolution problem. The authors observe the convolution of a single (unknown) source with K different (unknown) channel responses; from these channel outputs, the authors want to estimate both the source and the channel responses. The authors show how this classical signal processing problem can be viewed as solving a system of bilinear equations, and in turn can be recast as recovering a rank-1 matrix from a set of linear observations. Results of prior studies in the area of low-rank matrix recovery have identified effective convex relaxations for problems of this type and efficient, scalable heuristic solvers that enable these techniques to work with thousands of unknown variables. The authors show how a priori information about the channels can be used to build a linear model for the channels, which in turn makes solving these systems of equations well-posed. This study demonstrates the robustness of this methodology to measurement noises and parametrization errors of the channel impulse responses with several stylized and shallow water acoustic channel simulations. The performance of this methodology is also verified experimentally using shipping noise recorded on short bottom-mounted vertical line arrays. PMID:28599565
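
    The lifting trick is easy to reproduce on a toy problem. Below, a circular convolution with subspace models h = Bu and x = Cv is rewritten so that every Fourier sample of the output is a linear functional of the rank-one matrix W = u vᵀ; with enough samples the lifted system is solved by plain least squares and W is factored with an SVD. The convex nuclear-norm machinery of the paper, needed when the system is underdetermined or noisy, is deliberately omitted, and all dimensions are arbitrary choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    L, K, N = 128, 4, 6                      # signal length, channel dof, source dof
    B = rng.standard_normal((L, K))          # known linear model for the channel
    C = rng.standard_normal((L, N))          # known subspace for the source
    u, v = rng.standard_normal(K), rng.standard_normal(N)
    h, x = B @ u, C @ v                      # unknown channel and source

    y_hat = np.fft.fft(h) * np.fft.fft(x)    # circular convolution, Fourier domain

    # Lifting: y_hat[f] = Bf[f] @ (u v^T) @ Cf[f], which is linear in W = u v^T.
    Bf, Cf = np.fft.fft(B, axis=0), np.fft.fft(C, axis=0)
    A = np.stack([np.outer(Bf[f], Cf[f]).ravel() for f in range(L)])
    W = np.linalg.lstsq(A, y_hat, rcond=None)[0].reshape(K, N)

    # Rank-one factorization recovers u and v up to an unavoidable scaling.
    U, s, Vt = np.linalg.svd(np.real(W))
    u_est, v_est = U[:, 0] * np.sqrt(s[0]), Vt[0] * np.sqrt(s[0])
    print(np.allclose(np.outer(u_est, v_est), np.outer(u, v), atol=1e-8))
    ```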

  6. Multi-server blind quantum computation over collective-noise channels

    NASA Astrophysics Data System (ADS)

    Xiao, Min; Liu, Lin; Song, Xiuli

    2018-03-01

    Blind quantum computation (BQC) enables ordinary clients to securely outsource their computation tasks to costly quantum servers. Besides two essential properties, namely correctness and blindness, practical BQC protocols should also make clients as classical as possible and tolerate faults from nonideal quantum channels. In this paper, using logical Bell states as the quantum resource, we propose multi-server BQC protocols over a collective-dephasing noise channel and a collective-rotation noise channel, respectively. The proposed protocols permit a completely or almost completely classical client, meet the correctness and blindness requirements of a BQC protocol, and are typically practical BQC protocols.

  7. Predicting Teacher Emotional Labour Based on Multi-Frame Leadership Orientations: A Case from Turkey

    ERIC Educational Resources Information Center

    Özdemir, Murat; Koçak, Seval

    2018-01-01

    Human behaviours in organisations are closely associated with leadership styles. The main purpose of this study is to find out the relationship between teachers' perception about multi-frame leadership orientations of principals and teachers' emotional labour. The study is based on Bolman and Deal's Four Frames Model, and, therefore, the…

  8. Recovering the fine structures in solar images

    NASA Technical Reports Server (NTRS)

    Karovska, Margarita; Habbal, S. R.; Golub, L.; Deluca, E.; Hudson, Hugh S.

    1994-01-01

    Several examples are presented of the capability of the blind iterative deconvolution (BID) technique to recover the real point spread function when limited a priori information is available about its characteristics. To demonstrate the potential of image post-processing for probing the fine-scale and temporal variability of the solar atmosphere, the BID technique is applied to different samples of solar observations from space. The BID technique was originally proposed for correcting the effects of atmospheric turbulence on optical images. The processed images provide a detailed view of the spatial structure of the solar atmosphere at different heights, in regions with different large-scale magnetic field structures.

  10. Video Super-Resolution via Bidirectional Recurrent Convolutional Networks.

    PubMed

    Huang, Yan; Wang, Wei; Wang, Liang

    2018-04-01

    Super-resolving a low-resolution video, namely video super-resolution (SR), is usually handled by either single-image SR or multi-frame SR. Single-image SR deals with each video frame independently and ignores the intrinsic temporal dependency of video frames, which actually plays a very important role in video SR. Multi-frame SR generally extracts motion information, e.g., optical flow, to model the temporal dependency, but often at high computational cost. Considering that recurrent neural networks (RNNs) can model long-term temporal dependencies of video sequences well, we propose a fully convolutional RNN named the bidirectional recurrent convolutional network for efficient multi-frame SR. Unlike vanilla RNNs, 1) the commonly used full feedforward and recurrent connections are replaced with weight-sharing convolutional connections, which greatly reduce the number of network parameters and model the temporal dependency at a finer level, i.e., patch-based rather than frame-based; and 2) connections from input layers at previous timesteps to the current hidden layer are added by 3D feedforward convolutions, which aim to capture discriminative spatio-temporal patterns for short-term fast-varying motions in local adjacent frames. Owing to the cheap convolutional operations, our model has low computational complexity and runs orders of magnitude faster than other multi-frame SR methods. With its powerful temporal dependency modeling, our model can super-resolve videos with complex motions and achieves good performance.
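
    A minimal PyTorch-style sketch of the bidirectional recurrence is given below: weight-sharing convolutions replace the full recurrent connections, and forward and backward passes over the clip are averaged. The layer sizes are arbitrary, the 3D feedforward connections from previous-timestep inputs are omitted, and the refinement operates at input resolution, so this illustrates the recurrence pattern rather than reproducing the paper's network.

    ```python
    import torch
    import torch.nn as nn

    class RecurrentConvCell(nn.Module):
        """One direction of the recurrence with weight-sharing convolutions."""
        def __init__(self, ch=16):
            super().__init__()
            self.inp = nn.Conv2d(1, ch, 3, padding=1)    # input -> hidden
            self.rec = nn.Conv2d(ch, ch, 3, padding=1)   # hidden -> hidden
            self.out = nn.Conv2d(ch, 1, 3, padding=1)    # hidden -> residual

        def forward(self, frame, hidden):
            hidden = torch.relu(self.inp(frame) + self.rec(hidden))
            return hidden, frame + self.out(hidden)      # residual refinement

    def super_resolve(frames, fwd, bwd, ch=16):
        b, t, c, h, w = frames.shape
        hf = frames.new_zeros(b, ch, h, w); hb = frames.new_zeros(b, ch, h, w)
        outs_f, outs_b = [], []
        for i in range(t):
            hf, o = fwd(frames[:, i], hf); outs_f.append(o)
        for i in reversed(range(t)):
            hb, o = bwd(frames[:, i], hb); outs_b.insert(0, o)
        return torch.stack([(a + z) / 2 for a, z in zip(outs_f, outs_b)], dim=1)

    clip = torch.rand(2, 5, 1, 32, 32)                   # 5-frame grayscale clips
    print(super_resolve(clip, RecurrentConvCell(), RecurrentConvCell()).shape)
    ```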

  11. Multichannel blind iterative image restoration.

    PubMed

    Sroubek, Filip; Flusser, Jan

    2003-01-01

    Blind image deconvolution is required in many applications of microscopy imaging, remote sensing, and astronomical imaging. Unfortunately, in a single-channel framework, serious conceptual and numerical problems are often encountered. Very recently, an eigenvector-based method (EVAM) was proposed for the multichannel framework, which perfectly determines the convolution masks in a noise-free environment if a channel disparity condition, called co-primeness, is satisfied. We propose a novel iterative algorithm based on recent anisotropic denoising techniques of total variation and a Mumford-Shah functional, with the EVAM restoration condition included. A linearization scheme of half-quadratic regularization, together with a cell-centered finite-difference discretization scheme, is used in the algorithm and provides a unified approach to the solution of the total variation or Mumford-Shah formulations. The algorithm performs well even on very noisy images and does not require an exact estimation of mask orders. We demonstrate the capabilities of the algorithm on synthetic data. Finally, the algorithm is applied to defocused images taken with a digital camera and to data from astronomical ground-based observations of the Sun.
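
    The eigenvector idea behind EVAM can be demonstrated in the noise-free two-channel case through the cross-relation y1 ⊛ h2 = y2 ⊛ h1: stacked as a linear system, the true kernels span its one-dimensional null space, recovered below from the smallest singular vector. The kernel length and random data are assumptions, and none of the paper's total variation or Mumford-Shah machinery is included.

    ```python
    import numpy as np
    from scipy.linalg import toeplitz

    def conv_matrix(y, k):
        """Matrix T(y) with T(y) @ h == np.convolve(y, h) for len(h) == k."""
        col = np.concatenate([y, np.zeros(k - 1)])
        row = np.zeros(k); row[0] = y[0]
        return toeplitz(col, row)

    rng = np.random.default_rng(4)
    x = rng.standard_normal(200)                       # unknown common source
    h1, h2 = rng.standard_normal(8), rng.standard_normal(8)   # unknown channels
    y1, y2 = np.convolve(x, h1), np.convolve(x, h2)    # observed channel outputs

    # Cross-relation: T(y1) h2 - T(y2) h1 = 0, so [h2; h1] is a null vector.
    A = np.hstack([conv_matrix(y1, 8), -conv_matrix(y2, 8)])
    null = np.linalg.svd(A)[2][-1]
    h2_est, h1_est = null[:8], null[8:]
    scale = (h1 @ h1_est) / (h1_est @ h1_est)          # resolve the global scale
    print(np.allclose(h1_est * scale, h1, atol=1e-6),
          np.allclose(h2_est * scale, h2, atol=1e-6))
    ```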

  12. Condition sensor system and method

    NASA Technical Reports Server (NTRS)

    Polhemus, J. T.; Morgan, J. E.; Mandell, A. (Inventor)

    1978-01-01

    The condition sensor system comprises a condition detector which produces a pulse when a parameter of the monitored condition exceeds a desired threshold. A resettable condition counter counts each pulse. A resettable timer is preset to produce a particular time frame. The counter produces a condition signal when the accumulated number of pulses within the time frame is equal to or greater than a preset count. Control means responsive to the incoming pulses and to the condition signal produce control signals that control utilization devices. After a suitable delay, the last detected pulse simultaneously resets the pulse counter and the timer, and prepares them for sensing another condition occurrence within the time frame. The invention has particular utility in the process of detecting rocking motions of blind people. A controlled, audible, bio-feedback signal is provided which constitutes a warning to the blind person that he is rocking.
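
    The counting logic described above fits in a few lines. In the sketch below the condition signal is raised when a threshold number of pulses falls inside the time frame, and a pulse arriving after the frame has lapsed resets both the counter and the timer; the thresholds and the exact reset timing (the patent resets after a delay following the last pulse) are simplified assumptions.

    ```python
    import time

    class ConditionSensor:
        """Raise a condition signal on >= count_threshold pulses per time frame."""
        def __init__(self, count_threshold=5, time_frame=2.0):
            self.count_threshold = count_threshold
            self.time_frame = time_frame
            self.window_start = None
            self.count = 0

        def pulse(self, t=None):
            t = time.monotonic() if t is None else t
            if self.window_start is None or t - self.window_start > self.time_frame:
                self.window_start, self.count = t, 0   # reset counter and timer
            self.count += 1
            return self.count >= self.count_threshold  # condition signal

    sensor = ConditionSensor(count_threshold=3, time_frame=1.0)
    print([sensor.pulse(t) for t in (0.0, 0.2, 0.4, 2.0)])  # [False, False, True, False]
    ```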

  13. The Blind Leading the Blind: Goalball as Engaged Scholarship

    ERIC Educational Resources Information Center

    Van Rheenen, Derek

    2016-01-01

    The paper describes an engaged scholarship course at a large public research university on the west coast of the United States. The pilot course introduces students to the scholarship on disability framed within the cultural studies of sport. Participants engage with existing literature while actively participating in goalball, a sport designed…

  14. Inequality Frames: How Teachers Inhabit Color-Blind Ideology

    ERIC Educational Resources Information Center

    Cobb, Jessica S.

    2017-01-01

    This paper examines how public school teachers take up, modify, or resist the dominant ideology of color-blind racism. This examination is based on in-depth interviews with 60 teachers at three segregated schools: one was race/class privileged and two were disadvantaged. Inductive coding revealed that teachers at each school articulated a shared…

  15. A note on the blind deconvolution of multiple sparse signals from unknown subspaces

    NASA Astrophysics Data System (ADS)

    Cosse, Augustin

    2017-08-01

    This note studies the recovery of multiple sparse signals, xn ∈ ℝ^L, n = 1, ..., N, from the knowledge of their convolution with an unknown point spread function h ∈ ℝ^L. When the point spread function is known to be nonzero, |h[k]| > 0, this blind deconvolution problem can be relaxed into a linear, ill-posed inverse problem in the vector concatenating the unknown inputs xn together with the inverse of the filter, d ∈ ℝ^L, where d[k] := 1/h[k]. When prior information is given on the input subspaces, the resulting overdetermined linear system can be solved efficiently via least squares (see Ling et al. 2016¹). When no information is given on those subspaces, and the inputs are only known to be sparse, it still remains possible to recover these inputs along with the filter by considering an additional l1 penalty. This note certifies exact recovery of both the unknown PSF and the unknown sparse inputs, from the knowledge of their convolutions, as soon as the number of inputs N and the dimension of each input, L, satisfy L ≳ N and N ≳ Tmax², up to log factors. Here Tmax = maxn Tn, and Tn, n = 1, ..., N, denote the support sizes of the inputs xn. Our proof system combines the recent results on blind deconvolution via least squares, to certify invertibility of the linear map encoding the convolutions, with the construction of a dual certificate following the structure first suggested in Candès et al. 2007.² Unlike in these papers, however, it is not possible to rely on the norm ||(A_T* A_T)^(-1)|| to certify recovery. We instead use a combination of the Schur complement and Neumann series to compute an expression for the inverse (A_T* A_T)^(-1). Given this expression, it is possible to show that the poorly scaled blocks in (A_T* A_T)^(-1) are multiplied by the better scaled ones, or vanish, in the construction of the certificate. Recovery is certified with high probability on the choice of the supports and the distribution of the signs of each input xn on its support. The paper follows the line of previous work by Wang et al. 2016,³ where the authors guarantee recovery for subgaussian × Bernoulli inputs satisfying 𝔼|xn[k]| ∈ [1/10, 1] as soon as N ≳ L. Examples of applications include seismic imaging with unknown source or marine seismic data deghosting, magnetic resonance autocalibration, and multiple-channel estimation in communication. Numerical experiments are provided along with a discussion on the sample complexity tightness.
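
    Under the reading that the convolutions are diagonalised by the DFT, the linearised program described above can plausibly be written as follows; the functional w and the affine normalisation, used to exclude the trivial zero solution, are assumptions rather than the note's exact choices.

    ```latex
    % F denotes the DFT and d the entrywise inverse of the filter's DFT,
    % so diag(\hat{y}_n) d = F x_n restates \hat{y}_n = \hat{h} \odot \hat{x}_n.
    \min_{x_1,\dots,x_N,\; d}\ \sum_{n=1}^{N} \lVert x_n \rVert_1
    \quad \text{s.t.} \quad
    \operatorname{diag}(\widehat{y}_n)\, d = F x_n,\ \ n = 1,\dots,N,
    \qquad \langle w, d \rangle = 1.
    ```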

  17. Congenital blindness limits allocentric to egocentric switching ability.

    PubMed

    Ruggiero, Gennaro; Ruotolo, Francesco; Iachini, Tina

    2018-03-01

    Many everyday spatial activities require the cooperation or switching between egocentric (subject-to-object) and allocentric (object-to-object) spatial representations. The literature on blind people has reported that the lack of vision (congenital blindness) may limit the capacity to represent allocentric spatial information. However, research has mainly focused on the selective involvement of egocentric or allocentric representations, not the switching between them. Here we investigated the effect of visual deprivation on the ability to switch between spatial frames of reference. To this aim, congenitally blind (long-term visual deprivation), blindfolded sighted (temporary visual deprivation) and sighted (full visual availability) participants were compared on the Ego-Allo switching task. This task assessed the capacity to verbally judge the relative distances between memorized stimuli in switching (from egocentric-to-allocentric: Ego-Allo; from allocentric-to-egocentric: Allo-Ego) and non-switching (only-egocentric: Ego-Ego; only-allocentric: Allo-Allo) conditions. Results showed a difficulty in congenitally blind participants when switching from allocentric to egocentric representations, not when the first anchor point was egocentric. In line with previous results, a deficit in processing allocentric representations in non-switching conditions also emerged. These findings suggest that the allocentric deficit in congenital blindness may determine a difficulty in simultaneously maintaining and combining different spatial representations. This deficit alters the capacity to switch between reference frames specifically when the first anchor point is external and not body-centered.

  18. Multiframe video coding for improved performance over wireless channels.

    PubMed

    Budagavi, M; Gibson, J D

    2001-01-01

    We propose and evaluate a multi-frame extension to block motion compensation (BMC) coding of videoconferencing-type video signals for wireless channels. The multi-frame BMC (MF-BMC) coder makes use of the redundancy that exists across multiple frames in typical videoconferencing sequences to achieve additional compression over that obtained with the single-frame BMC (SF-BMC) approach, such as in the base-level H.263 codec. The MF-BMC approach also has an inherent ability to overcome some transmission errors and is thus more robust than the SF-BMC approach. We model the error propagation process in MF-BMC coding as a multiple Markov chain and use Markov chain analysis to infer that the use of multiple frames in motion compensation increases robustness. The Markov chain analysis is also used to devise a simple scheme which randomizes the selection of the frame (amongst the multiple previous frames) used in BMC to achieve additional robustness. The MF-BMC coders proposed are a multi-frame extension of the base-level H.263 coder and are found to be more robust than the base-level H.263 coder when subjected to simulated errors commonly encountered on wireless channels.
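
    The randomized selection scheme suggested by the Markov-chain analysis can be sketched in a few lines; the window size and the uniform choice below are assumptions, since the abstract does not specify the distribution used.

    ```python
    import random

    def choose_reference_frame(decoded_frames, window=3, rng=random.Random(0)):
        """Randomize which recent decoded frame anchors motion compensation,
        so a single corrupted frame cannot poison every successor."""
        k = min(window, len(decoded_frames))
        return rng.choice(decoded_frames[-k:])

    frames = ["F0", "F1", "F2", "F3", "F4"]
    print(choose_reference_frame(frames))   # one of F2, F3, F4
    ```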

  19. Speckle imaging through turbulent atmosphere based on adaptable pupil segmentation

    NASA Astrophysics Data System (ADS)

    Loktev, Mikhail; Soloviev, Oleg; Savenko, Svyatoslav; Vdovin, Gleb

    2011-07-01

    We report on the first results to our knowledge obtained with adaptable multiaperture imaging through turbulence on a horizontal atmospheric path. We show that the resolution can be improved by adaptively matching the size of the subaperture to the characteristic size of the turbulence. Further improvement is achieved by the deconvolution of a number of subimages registered simultaneously through multiple subapertures. Different implementations of multiaperture geometry, including pupil multiplication, pupil image sampling, and a plenoptic telescope, are considered. Resolution improvement has been demonstrated on a ~550 m horizontal turbulent path, using a combination of aperture sampling, speckle image processing, and, optionally, frame selection.

  20. Broadband Studies of Seismic Sources at Regional and Teleseismic Distances Using Advanced Time Series Analysis Methods. Volume 1.

    DTIC Science & Technology

    1991-03-21

    discussion of spectral factorability and motivations for broadband analysis, the report is subdivided into four main sections. In Section 1.0, we...estimates. The motivation for developing our multi-channel deconvolution method was to gain information about seismic sources, most notably, nuclear...with complex constraints for estimating the rupture history. Such methods (applied mostly to data sets that also include strong motion data), were

  1. Visualizing photosynthesis through processing of chlorophyll fluorescence images

    NASA Astrophysics Data System (ADS)

    Daley, Paul F.; Ball, J. Timothy; Berry, Joseph A.; Patzke, Juergen; Raschke, Klaus E.

    1990-05-01

    Measurements of terrestrial plant photosynthesis frequently exploit sensing of gas exchange from leaves enclosed in gas-tight, climate controlled chambers. These methods are typically slow, and do not resolve variation in photosynthesis below the whole leaf level. A photosynthesis visualization technique is presented that uses images of leaves employing light from chlorophyll (Chl) fluorescence. Images of Chl fluorescence from whole leaves undergoing steady-state photosynthesis, photosynthesis induction, or response to stress agents were digitized during light flashes that saturated photochemical reactions. Use of saturating flashes permitted deconvolution of photochemical energy use from biochemical quenching mechanisms (qN) that dissipate excess excitation energy, otherwise damaging to the light harvesting apparatus. Combination of the digital image frames of variable fluorescence with reference frames obtained from the same leaves when dark-adapted permitted derivation of frames in which grey scale represented the magnitude of qN. Simultaneous measurements with gas-exchange apparatus provided data for non-linear calibration filters for subsequent rendering of grey-scale "images" of photosynthesis. In several experiments significant non-homogeneity of photosynthetic activity was observed following treatment with growth hormones, or shifts in light or humidity, and following infection by virus. The technique provides a rapid, non-invasive probe for stress physiology and plant disease detection.

  2. SLO blind data set inversion and classification using physically complete models

    NASA Astrophysics Data System (ADS)

    Shamatava, I.; Shubitidze, F.; Fernández, J. P.; Barrowes, B. E.; O'Neill, K.; Grzegorczyk, T. M.; Bijamov, A.

    2010-04-01

    Discrimination studies carried out on TEMTADS and Metal Mapper blind data sets collected at the San Luis Obispo UXO site are presented. The data sets included four types of targets of interest (TOI): 2.36" rockets, 60-mm mortar shells, 81-mm projectiles, and 4.2" mortar items. The total parameterized normalized magnetic source (NSMS) amplitudes were used to discriminate TOI from metallic clutter and among the different hazardous UXO. First, in the object's frame coordinates, the total NSMS was determined for each TOI along three orthogonal axes from the training data provided by the Strategic Environmental Research and Development Program (SERDP) along with the blind data sets. The inverted total NSMS was then used to extract the time-decay classification features. Once our inversion and classification algorithms had been tested on the calibration data sets, the same procedure was applied to all blind data sets. A combined NSMS and differential evolution (NSMS-DE) algorithm was used to determine the NSMS strengths for each cell. The obtained total NSMS time-decay curves were used to extract the discrimination features and perform classification using the training data as reference. In addition, for cross validation, the inverted locations and orientations from the NSMS-DE algorithm were compared against those obtained via the magnetic field, vector and scalar potentials (HAP) method and a combined dipole and Gauss-Newton approach. We examined the entire time-decay history of the total NSMS case by case for classification purposes. Different multi-class statistical classification algorithms were also used to separate dangerous objects from non-hazardous items. The inverted targets were ranked by target ID and submitted to SERDP for independent scoring. The independent scoring results are presented.

  3. Standing up in multiple sclerosis (SUMS): protocol for a multi-centre randomised controlled trial evaluating the clinical and cost effectiveness of a home-based self-management standing frame programme in people with progressive multiple sclerosis.

    PubMed

    Freeman, J A; Hendrie, W; Creanor, S; Jarrett, L; Barton, A; Green, C; Marsden, J; Rogers, E; Zajicek, J

    2016-05-05

    Multiple sclerosis (MS) is an incurable, unpredictable but typically progressive neurological condition. It is the most common cause of neurological disability in young adults. Within 15 years of diagnosis, approximately 50 % of affected people are unable to walk unaided, and over time an estimated 25 % depend on a wheelchair. Typically, people with such limited mobility are excluded from clinical trials. Severely impaired people with MS spend much of their day sitting, often with limited ability to change position. As a result, secondary complications can occur, including: muscle wasting, pain, reduced skin integrity, spasms, limb stiffness, constipation, and associated psychosocial problems such as depression and lowered self-esteem. Effective self-management strategies, which can be implemented relatively easily and cheaply within people's homes, are needed to improve or maintain mobility and reduce sedentary behaviour. However, this is challenging, particularly in the latter stages of disease. Regular supported standing using standing frames is one potential option. SUMS is a pragmatic multi-centre randomised controlled trial evaluating use of Oswestry standing frames with blinded outcome assessment and full economic evaluation. Participants will be randomly allocated (1:1) to either a home-based, self-management standing programme (with advice and support) along with their usual care or to usual care alone. Those in the intervention group will be asked to stand for a minimum of 30 min three times weekly over 20 weeks. Each participant will be followed up at 20 and 36 weeks post baseline. The primary clinical outcome is motor function, assessed using the Amended Motor Club Assessment. The primary economic endpoint is quality-adjusted life years. The secondary outcomes include measures of explanatory physical impairments, key clinical outcomes, and health-related quality of life. An embedded qualitative component will explore participants' and carers' experiences of the standing programme. This is the first large-scale multi-centre trial to assess the clinical and cost-effectiveness of a home-based standing frame programme for people who are severely impaired by MS. If demonstrated to be effective and cost-effective, we will use this evidence to develop recommendations for a health service delivery model which could be implemented across the United Kingdom. ISRCTN69614598 DATE OF REGISTRATION: 3.2.16 (retrospectively registered).

  4. The fabrication of a multi-spectral lens array and its application in assisting color blindness

    NASA Astrophysics Data System (ADS)

    Di, Si; Jin, Jian; Tang, Guanrong; Chen, Xianshuai; Du, Ruxu

    2016-01-01

    This article presents a compact multi-spectral lens array and describes its application in assisting people with color blindness. The lens array consists of 9 microlenses, each coated with a different color filter. It can therefore capture different light bands: red, orange, yellow, green, cyan, blue, violet, near-infrared, and the entire visible band. First, the fabrication process is described in detail. Second, an imaging system is set up and a color blindness testing card is selected as the sample. With this system, the images seen by people with normal vision and by people with color blindness can be captured simultaneously. Based on the imaging results, the system could potentially be used to help people with color blindness recover normal color perception.

  5. 78 FR 48656 - Procurement List; Proposed Additions

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-09

    ..., Synthetic Mesh, 24x36, Locking Drawstring NSN: 3510-00-NIB-0013--Heavy Duty, \\3/16\\'' Hole Size. NSN: 3510-00-NIB-0014--Medium Duty, \\1/16\\'' Hole Size. NPA: Bestwork Industries for the Blind, Inc., Runnemede... NSN: 7510-01-462-1383--View Framed, Navy Blue, \\1/2\\''. NSN: 7510-01-462-1384--View Framed, Black, \\1...

  6. Application of an NLME-Stochastic Deconvolution Approach to Level A IVIVC Modeling.

    PubMed

    Kakhi, Maziar; Suarez-Sharp, Sandra; Shepard, Terry; Chittenden, Jason

    2017-07-01

    Stochastic deconvolution is a parameter estimation method that calculates drug absorption using a nonlinear mixed-effects model in which the random effects associated with absorption represent a Wiener process. The present work compares (1) stochastic deconvolution and (2) numerical deconvolution, using clinical pharmacokinetic (PK) data generated for an in vitro-in vivo correlation (IVIVC) study of extended release (ER) formulations of a Biopharmaceutics Classification System class III drug substance. The preliminary analysis found that numerical and stochastic deconvolution yielded superimposable fraction absorbed (Fabs) versus time profiles when supplied with exactly the same externally determined unit impulse response parameters. In a separate analysis, a full population-PK/stochastic deconvolution was applied to the clinical PK data. Scenarios were considered in which immediate release (IR) data were either retained or excluded to inform parameter estimation. The resulting Fabs profiles were then used to model level A IVIVCs. All the considered stochastic deconvolution scenarios, and numerical deconvolution, yielded on average similar results with respect to the IVIVC validation. These results could be achieved with stochastic deconvolution without recourse to IR data. Unlike numerical deconvolution, this also implies that in crossover studies where certain individuals do not receive an IR treatment, their ER data alone can still be included as part of the IVIVC analysis. Published by Elsevier Inc.
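
    For readers unfamiliar with the numerical deconvolution that stochastic deconvolution is compared against, a minimal discrete version can be written as a nonnegative least-squares problem. The sketch below assumes the unit impulse response is already known on the same time grid; it is a generic textbook formulation, not the study's implementation.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def numerical_deconvolution(t, conc, uir):
        """Recover the fraction absorbed vs. time by discrete deconvolution.

        conc : observed ER plasma concentrations on the time grid t
        uir  : unit impulse response on the same grid (assumed known, e.g.
               externally fitted from IV or solution data)
        """
        n = len(t)
        dt = np.gradient(t)
        A = np.zeros((n, n))
        for j in range(n):
            A[j:, j] = uir[: n - j] * dt[j]   # column j: response to input at t[j]
        rate, _ = nnls(A, conc)               # absorption rate, constrained >= 0
        amount = np.cumsum(rate * dt)
        return amount / amount[-1]            # normalized fraction absorbed
    ```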

  7. High Resolution Imaging of the Sun with CORONAS-1

    NASA Technical Reports Server (NTRS)

    Karovska, Margarita

    1998-01-01

    We applied several image restoration and enhancement techniques to CORONAS-I images. We carried out the characterization of the Point Spread Function (PSF) using the unique capability of the Blind Iterative Deconvolution (BID) technique, which recovers the real PSF at a given location and time of observation when limited a priori information is available on its characteristics. We also applied an image enhancement technique to extract the small-scale structure embedded in bright large-scale structures on the disk and on the limb. The results demonstrate the capability of image post-processing to substantially increase the yield of space observations by improving the resolution and reducing noise in the images.

  8. No-reference image quality assessment for horizontal-path imaging scenarios

    NASA Astrophysics Data System (ADS)

    Rios, Carlos; Gladysz, Szymon

    2013-05-01

    There exist several image-enhancement algorithms and tasks associated with imaging through turbulence that depend on defining the quality of an image. Examples include: "lucky imaging", choosing the width of the inverse filter for image reconstruction, or stopping iterative deconvolution. We collected a number of image quality metrics found in the literature. Particularly interesting are the blind, "no-reference" metrics. We discuss ways of evaluating the usefulness of these metrics, even when a fully objective comparison is impossible because of the lack of a reference image. Metrics are tested on simulated and real data. Field data comes from experiments performed by the NATO SET 165 research group over a 7 km distance in Dayton, Ohio.

  9. Sizing up Asteroids at Lick Observatory with Adaptive Optics

    NASA Astrophysics Data System (ADS)

    Drummond, Jack D.; Christou, J.

    2006-12-01

    Using the Shane 3 meter telescope with adaptive optics at Lick Observatory, we have determined the triaxial dimensions and rotational poles of five asteroids, 3 Juno, 4 Vesta, 16 Psyche, 87 Sylvia, and 324 Bamberga. Parametric blind deconvolution was applied to images obtained mostly at 2.5 microns in 2004 and 2006. This is the first time Bamberga’s pole has been determined, and the results for the other four asteroids are in agreement with the analysis of decades of lightcurves by others. The techniques developed here to find sizes, shapes, and poles, in only one or two nights, can be applied to smaller asteroids that are resolved with larger telescopes.

  10. Dangerous gas detection based on infrared video

    NASA Astrophysics Data System (ADS)

    Ding, Kang; Hong, Hanyu; Huang, Likun

    2018-03-01

    Gas leak infrared imaging detection technology has the significant advantages of high efficiency and remote detection. In order to enhance the detail perception of observers and thereby improve the detection limit, we propose a new gas leak infrared image detection method that combines a background difference method with a multi-frame interval difference method. Compared with traditional frame difference methods, the proposed multi-frame interval difference method can extract a more complete target image. By fusing the background difference image and the multi-frame interval difference image, we accumulate information about the infrared target image of the gas leak from multiple aspects. The experiments demonstrate that the completeness of the gas leakage trace information is significantly enhanced and that real-time detection can be achieved.

  11. A Model-Based Approach for Microvasculature Structure Distortion Correction in Two-Photon Fluorescence Microscopy Images

    PubMed Central

    Dao, Lam; Glancy, Brian; Lucotte, Bertrand; Chang, Lin-Ching; Balaban, Robert S; Hsu, Li-Yueh

    2015-01-01

    This paper investigates a post-processing approach to correct spatial distortion in two-photon fluorescence microscopy images for vascular network reconstruction. It is aimed at in vivo imaging in large field-of-view, deep-tissue studies of vascular structures. Based on simple geometric modeling of the object of interest, a distortion function is estimated directly from the image volume by deconvolution analysis. This distortion function is then applied to subvolumes of the image stack to adaptively adjust for spatially varying distortion and reduce image blurring through blind deconvolution. The proposed technique was first evaluated in phantom imaging of fluorescent microspheres that are comparable in size to the underlying capillary vascular structures. The effectiveness of restoring the three-dimensional spherical geometry of the microspheres using the estimated distortion function was compared with an empirically measured point-spread function. Next, the proposed approach was applied to in vivo vascular imaging of mouse skeletal muscle to reduce the image distortion of the capillary structures. We show that the proposed method effectively improves image quality and reduces the spatially varying distortion that occurs in large field-of-view, deep-tissue vascular datasets. The proposed method will help in the qualitative interpretation and quantitative analysis of vascular structures from fluorescence microscopy images. PMID:26224257

  12. Multiple-frame IR photo-recorder KIT-3M

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roos, E; Wilkins, P; Nebeker, N

    2006-05-15

    This paper reports the experimental results of a high-speed multi-frame infrared camera which has been developed in Sarov at VNIIEF. Earlier [1] we discussed the possibility of creating a multi-frame infrared radiation photo-recorder with a framing frequency of about 1 MHz. The basis of the photo-recorder is a semiconductor ionization camera [2, 3], which converts IR radiation in the spectral range 1-10 micrometers into a visible image. Several sequential thermal images are registered by using the IR converter in conjunction with a multi-frame electron-optical camera. In the present report we discuss the performance characteristics of a prototype commercial 9-frame high-speed IR photo-recorder. The image converter records infrared images of thermal fields corresponding to temperatures ranging from 300 °C to 2000 °C with an exposure time of 1-20 µs at a frame frequency of up to 500 kHz. The IR photo-recorder camera is useful for recording the time evolution of thermal fields in fast processes such as gas dynamics, ballistics, pulsed welding, thermal processing, the automotive industry, aircraft construction, and pulsed-power electric experiments, and for the measurement of spatial mode characteristics of IR laser radiation.

  13. The 2008 National Child Count of Children and Youth Who Are Deaf-Blind

    ERIC Educational Resources Information Center

    National Consortium on Deaf-Blindness, 2009

    2009-01-01

    The "National Child Count of Children and Youth who are Deaf-Blind" is the first and longest running registry and knowledge base of children who are deaf-blind in the world. It represents a 25 year collaborative effort between the National Consortium on Deaf-Blindness (NCDB), its predecessors and each state/multi-state deaf-blind project…

  14. Gender-Blind Sexism and Rape Myth Acceptance.

    PubMed

    Stoll, Laurie Cooper; Lilley, Terry Glenn; Pinter, Kelly

    2017-01-01

    The purpose of this article is to explore whether gender-blind sexism, as an extension of Bonilla-Silva's racialized social system theory, is an appropriate theoretical framework for understanding the creation and continued prevalence of rape myth acceptance. Specifically, we hypothesize that individuals who hold attitudes consistent with the frames of gender-blind sexism are more likely to accept common rape myths. Data for this article come from an online survey administered to the entire undergraduate student body at a large Midwestern institution (N = 1,401). Regression analysis showed strong support for the effects of gender-blind sexism on rape myth acceptance. © The Author(s) 2016.

  15. Reduction of speckle noise from optical coherence tomography images using multi-frame weighted nuclear norm minimization method

    NASA Astrophysics Data System (ADS)

    Thapa, Damber; Raahemifar, Kaamran; Lakshminarayanan, Vasudevan

    2015-12-01

    In this paper, we propose a speckle noise reduction method for spectral-domain optical coherence tomography (SD-OCT) images called multi-frame weighted nuclear norm minimization (MWNNM). This method is a direct extension of weighted nuclear norm minimization (WNNM) to the multi-frame framework, since an adequately denoised image cannot be achieved with single-frame denoising methods. The MWNNM method exploits multiple B-scans collected from a small area of an SD-OCT volumetric image, and then denoises and averages them together to obtain a high signal-to-noise-ratio B-scan. The results show that the image quality metrics obtained by denoising and averaging only five nearby B-scans with the MWNNM method are considerably better than those of the average image obtained by registering and averaging 40 azimuthally repeated B-scans.
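
    The core of WNNM-style denoising is a weighted singular-value shrinkage applied to a matrix of similar patches. The sketch below shows one such step under the common weighting heuristic (larger singular values shrunk less); the weight rule and constant are illustrative assumptions, not the paper's tuning.

    ```python
    import numpy as np

    def weighted_svt(Y, noise_sigma, c=2.8, eps=1e-8):
        """One weighted nuclear norm minimization step on a patch matrix Y
        whose columns are similar patches gathered from nearby B-scans.

        Larger singular values receive smaller weights and are shrunk less,
        preserving anatomical structure while suppressing speckle.
        """
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        w = c * np.sqrt(Y.shape[1]) * noise_sigma**2 / (s + eps)
        s_new = np.maximum(s - w, 0.0)      # weighted soft-thresholding
        return (U * s_new) @ Vt
    ```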

  16. The effect of vertical and horizontal symmetry on memory for tactile patterns in late blind individuals.

    PubMed

    Cattaneo, Zaira; Vecchi, Tomaso; Fantino, Micaela; Herbert, Andrew M; Merabet, Lotfi B

    2013-02-01

    Visual stimuli that exhibit vertical symmetry are easier to remember than stimuli symmetric along other axes, an advantage that extends to the haptic modality as well. Critically, the vertical symmetry memory advantage has not been found in early blind individuals, despite their overall superior memory, as compared with sighted individuals, and the presence of an overall advantage for identifying symmetric over asymmetric patterns. The absence of the vertical axis memory advantage in the early blind may depend on their total lack of visual experience or on the effect of prolonged visual deprivation. To disentangle this issue, in this study, we measured the ability of late blind individuals to remember tactile spatial patterns that were either vertically or horizontally symmetric or asymmetric. Late blind participants showed better memory performance for symmetric patterns. An additional advantage for the vertical axis of symmetry over the horizontal one was reported, but only for patterns presented in the frontal plane. In the horizontal plane, no difference was observed between vertical and horizontal symmetric patterns, due to the latter being recalled particularly well. These results are discussed in terms of the influence of the spatial reference frame adopted during exploration. Overall, our data suggest that prior visual experience is sufficient to drive the vertical symmetry memory advantage, at least when an external reference frame based on geocentric cues (i.e., gravity) is adopted.

  17. 4D PET iterative deconvolution with spatiotemporal regularization for quantitative dynamic PET imaging.

    PubMed

    Reilhac, Anthonin; Charil, Arnaud; Wimberley, Catriona; Angelis, Georgios; Hamze, Hasar; Callaghan, Paul; Garcia, Marie-Paule; Boisson, Frederic; Ryder, Will; Meikle, Steven R; Gregoire, Marie-Claude

    2015-09-01

    Quantitative measurements in dynamic PET imaging are usually limited by the poor counting statistics, particularly in short dynamic frames, and by the low spatial resolution of the detection system, resulting in partial volume effects (PVEs). In this work, we present a fast and easy-to-implement method for the restoration of dynamic PET images that have suffered from both PVE and noise degradation. It is based on a weighted least-squares iterative deconvolution approach applied to the dynamic PET image with spatial and temporal regularization. Using simulated dynamic [11C]Raclopride PET data with controlled biological variations in the striata between scans, we showed that the restoration method provides images which exhibit less noise and better contrast between emitting structures than the original images. In addition, the method is able to recover the true time activity curve in the striata region with an error below 3%, whereas it was underestimated by more than 20% without correction. As a result, the method improves the accuracy and reduces the variability of the kinetic parameter estimates calculated from the corrected images. More importantly, it increases the accuracy (from less than 66% to more than 95%) of measured biological variations as well as their statistical detectivity. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.
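
    A weighted least-squares iterative deconvolution with a temporal smoothness penalty can be sketched as simple gradient descent. The version below assumes a Gaussian PSF and per-frame inverse-variance weights; it illustrates the structure of such a restoration, not the authors' exact update.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def pve_restore(frames, weights, psf_sigma=1.5, lam=0.1, n_iter=50, step=0.5):
        """Gradient descent on a weighted least-squares deconvolution
        objective with a temporal smoothness penalty (illustrative sketch).

        frames  : array (T, ny, nx) of reconstructed dynamic frames
        weights : array (T,) ~ inverse noise variance per frame; psf_sigma,
                  lam, n_iter and step are assumed tuning values.
        """
        x = frames.astype(float)
        for _ in range(n_iter):
            # data-term gradient: H^T W (H x - y), H a symmetric Gaussian blur
            grad = np.stack([
                gaussian_filter(w * (gaussian_filter(xi, psf_sigma) - yi),
                                psf_sigma)
                for w, xi, yi in zip(weights, x, frames)])
            # temporal regularizer gradient: negative second time difference
            d2t = np.zeros_like(x)
            d2t[1:-1] = x[2:] - 2.0 * x[1:-1] + x[:-2]
            x -= step * (grad - lam * d2t)
        return x
    ```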

  18. Effect of random phase mask on input plane in photorefractive authentic memory with two-wave encryption method

    NASA Astrophysics Data System (ADS)

    Mita, Akifumi; Okamoto, Atsushi; Funakoshi, Hisatoshi

    2004-06-01

    We have proposed an all-optical authentic memory based on a two-wave encryption method. In the recording process, the image data are encrypted into white noise by random phase masks added to the input beam carrying the image data and to the reference beam. Only a reading beam with the phase-conjugate distribution of the reference beam can decrypt the encrypted data. If the encrypted data are read out with an incorrect phase distribution, the output data are transformed into white noise. Moreover, during readout, reconstructions of the encrypted data interfere destructively, resulting in zero intensity. Our memory therefore has the merit that unauthorized access can be detected easily by measuring the output beam intensity. In our encryption method, the random phase mask on the input plane plays an important role in transforming the input image into white noise and in preventing the white noise from being decrypted back to the input image by blind deconvolution. Without this mask, when unauthorized users observe the output beam with a CCD during readout with a plane wave, they obtain exactly the same intensity distribution as the Fourier transform of the input image; the encrypted image could then be decrypted easily by blind deconvolution. With the mask, even if unauthorized users observe the output beam in the same way, the encrypted image cannot be decrypted, because the observed intensity distribution is randomly dispersed by the mask. The mask thus increases the robustness of the memory. In this report, we compare the correlation coefficient between the output and input images, which quantifies how close the output image is to white noise, with and without the mask. We show that using the mask improves the correlation coefficient from 0.3 to 0.1, increasing the robustness of the encryption method.

  19. Application of an improved minimum entropy deconvolution method for railway rolling element bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Cheng, Yao; Zhou, Ning; Zhang, Weihua; Wang, Zhiwei

    2018-07-01

    Minimum entropy deconvolution is a widely used tool in machinery fault diagnosis because it enhances the impulse components of the signal. The filter coefficients that largely determine the performance of minimum entropy deconvolution are calculated by an iterative procedure. This paper proposes an improved deconvolution method for the fault detection of rolling element bearings. The proposed method solves for the filter coefficients with the standard particle swarm optimization algorithm, assisted by a generalized spherical coordinate transformation. When the filter is optimized to enhance the impulses produced by faulty rolling element bearings, the proposed method outperforms the classical minimum entropy deconvolution method. The proposed method was validated on simulated and experimental signals from railway bearings. In both the simulation and experimental studies, the proposed method delivered better deconvolution performance than the classical minimum entropy deconvolution method, especially at low signal-to-noise ratios.
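
    Minimum entropy deconvolution amounts to choosing FIR filter coefficients that maximize the kurtosis of the filtered signal. The sketch below uses scipy's differential evolution as a stand-in for the paper's particle swarm optimization with spherical coordinate transformation; the tap count and bounds are assumptions.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution
    from scipy.signal import lfilter

    def med_filter(x, n_taps=16, seed=0):
        """Search for FIR filter coefficients that maximize the kurtosis of
        the filtered vibration signal, the objective behind minimum entropy
        deconvolution.
        """
        def neg_kurtosis(h):
            y = lfilter(h, [1.0], x)
            y = y - y.mean()
            m2 = np.mean(y**2) + 1e-12
            return -np.mean(y**4) / m2**2   # minimize negative kurtosis

        res = differential_evolution(neg_kurtosis, [(-1.0, 1.0)] * n_taps,
                                     seed=seed, maxiter=100, tol=1e-6)
        return res.x, lfilter(res.x, [1.0], x)
    ```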

  20. Connected Component Model for Multi-Object Tracking.

    PubMed

    He, Zhenyu; Li, Xin; You, Xinge; Tao, Dacheng; Tang, Yuan Yan

    2016-08-01

    In multi-object tracking, it is critical to explore the data associations by exploiting the temporal information from a sequence of frames rather than the information from the adjacent two frames. Since straightforwardly obtaining data associations from multi-frames is an NP-hard multi-dimensional assignment (MDA) problem, most existing methods solve this MDA problem by either developing complicated approximate algorithms, or simplifying MDA as a 2D assignment problem based upon the information extracted only from adjacent frames. In this paper, we show that the relation between associations of two observations is the equivalence relation in the data association problem, based on the spatial-temporal constraint that the trajectories of different objects must be disjoint. Therefore, the MDA problem can be equivalently divided into independent subproblems by equivalence partitioning. In contrast to existing works for solving the MDA problem, we develop a connected component model (CCM) by exploiting the constraints of the data association and the equivalence relation on the constraints. Based upon CCM, we can efficiently obtain the global solution of the MDA problem for multi-object tracking by optimizing a sequence of independent data association subproblems. Experiments on challenging public data sets demonstrate that our algorithm outperforms the state-of-the-art approaches.
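
    Because association-sharing is an equivalence relation under the disjoint-trajectory constraint, the MDA problem splits into connected components. A minimal union-find sketch of that partitioning step is shown below; it is an illustrative reading of the idea, not the authors' implementation.

    ```python
    class UnionFind:
        """Minimal union-find for grouping observations that share any
        feasible association."""
        def __init__(self, n):
            self.parent = list(range(n))

        def find(self, i):
            while self.parent[i] != i:
                self.parent[i] = self.parent[self.parent[i]]  # path halving
                i = self.parent[i]
            return i

        def union(self, i, j):
            self.parent[self.find(i)] = self.find(j)

    def partition_associations(n_obs, candidate_pairs):
        """Split the multi-dimensional assignment problem into independent
        subproblems: observations connected by any candidate association
        land in the same component and are solved together."""
        uf = UnionFind(n_obs)
        for i, j in candidate_pairs:
            uf.union(i, j)
        groups = {}
        for k in range(n_obs):
            groups.setdefault(uf.find(k), []).append(k)
        return list(groups.values())
    ```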

  1. Speckle imaging through turbulent atmosphere based on adaptable pupil segmentation.

    PubMed

    Loktev, Mikhail; Soloviev, Oleg; Savenko, Svyatoslav; Vdovin, Gleb

    2011-07-15

    We report on the first results to our knowledge obtained with adaptable multiaperture imaging through turbulence on a horizontal atmospheric path. We show that the resolution can be improved by adaptively matching the size of the subaperture to the characteristic size of the turbulence. Further improvement is achieved by the deconvolution of a number of subimages registered simultaneously through multiple subapertures. Different implementations of multiaperture geometry, including pupil multiplication, pupil image sampling, and a plenoptic telescope, are considered. Resolution improvement has been demonstrated on a ∼550 m horizontal turbulent path, using a combination of aperture sampling, speckle image processing, and, optionally, frame selection. © 2011 Optical Society of America

  2. Dissecting key components of the Ca2+ homeostasis game by multifunctional fluorescence imaging

    NASA Astrophysics Data System (ADS)

    Bastianello, Stefano; Ciubotaru, Catalin D.; Beltramello, Martina; Mammano, Fabio

    2004-07-01

    Different sub-cellular compartments and organelles, such as cytosol, endoplasmic reticulum and mitochondria, are known to be differentially involved in Ca2+ homeostasis. It is thus of primary concern to develop imaging paradigms that permit these diverse components to be distinguished. To this end, we have constructed a complete system that performs multi-functional imaging under software control. The main hardware components of this system are a piezoelectric actuator, used to set objective lens position, a fast-switching monochromator, used to select excitation wavelength, a beam splitter, used to separate emission wavelengths, and an I/O interface to control the hardware. For these demonstrative experiments, cultured HeLa cells were transfected with a Ca2+-sensitive fluorescent biosensor (cameleon) targeted to the mitochondria (mtCam), and also loaded with cytosolic Fura2. The main system clock was provided by the frame-valid signal (FVAL) of a cooled CCD camera that captured wide-field fluorescence images of the two probes. Excitation wavelength and objective lens position were rapidly set during silent periods between successive exposures, with a minimum inter-frame interval of 2 ms. Triplets of images were acquired at 340, 380 and 430 nm excitation wavelengths at each one of three adjacent focal planes, separated by 250 nm. Optical sectioning was enhanced off-line by applying a nearest-neighbor deconvolution algorithm based on a directly estimated point-spread function (PSF). To measure the PSF, image stacks of sub-resolution fluorescent beads, incorporated in the cell cytoplasm by electroporation, were acquired under identical imaging conditions. The different dynamics of cytosolic and mitochondrial Ca2+ signals evoked by histamine could be distinguished clearly, with sub-micron resolution. Other FRET-based probes capable of sensing different chemical modifications of the cellular environment can be integrated in this approach, which is intrinsically suitable for the analysis of the interactions and cross-talk between different signaling pathways (e.g. Ca2+ and cAMP).

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Webb, Aaron P.; Carlson, Charles T.

    A multi-part mask has a pattern plate, which includes a planar portion that has the desired aperture pattern to be used during workpiece processing. The multi-part mask also has a mounting frame, which is used to hold the pattern plate. Prior to assembly, the pattern plate has an aligning portion, which has one or more holes through which reusable alignment pins are inserted. These alignment pins enter kinematic joints disposed on the mounting frame, which serve to precisely align the pattern plate to the mounting frame. After the pattern plate has been secured to the mounting frame, the aligning portion can be detached from the pattern plate. The alignment pins can be reused at a later time. In some embodiments, the pattern plate can later be removed from the mounting frame, so that the mounting frame may be reused.

  4. A method for environmental acoustic analysis improvement based on individual evaluation of common sources in urban areas.

    PubMed

    López-Pacheco, María G; Sánchez-Fernández, Luis P; Molina-Lozano, Herón

    2014-01-15

    Noise levels of common sources such as vehicles, whistles, sirens, car horns and crowd sounds are mixed in urban soundscapes. Nowadays, environmental acoustic analysis is performed on mixture signals recorded by monitoring systems. These mixed signals hamper the individual analysis that is useful for taking actions to reduce and control environmental noise. This paper aims to separate the individual noise sources from the recorded mixtures in order to evaluate the noise level of each estimated source. A method based on blind deconvolution and blind source separation in the wavelet domain is proposed. This approach provides a basis for improving the results obtained in the monitoring and analysis of common noise sources in urban areas. The method is validated through experiments based on knowledge of the predominant noise sources in urban soundscapes. Actual recordings of common noise sources are used to acquire mixture signals with a microphone array in semi-controlled environments. The developed method demonstrated substantial performance improvements in the identification, analysis and evaluation of common urban sources. © 2013 Elsevier B.V. All rights reserved.

  5. “Is a cure in my sight?” Multi-stakeholder perspectives on phase I choroideremia gene transfer clinical trials

    PubMed Central

    Benjaminy, Shelly; MacDonald, Ian; Bubela, Tania

    2014-01-01

    Purpose: Ocular gene transfer clinical trials are raising patient hopes for the treatment of choroideremia – a blinding degenerative retinopathy. Phase I choroideremia gene transfer trials necessitate communicating about the risks of harm and potential benefits with patients while avoiding the sensationalism that has historically undermined this field of translational medicine. Methods: We conducted interviews between June 2011 and June 2012 with 6 choroideremia patient advocates, 20 patients, and 15 clinicians about their hopes for benefits, perceived risks of harm, and hopes for the time frame of clinical implementation of choroideremia gene transfer. Results: Despite the safety focus of phase I trials, participants hoped for direct visual benefits with evident discrepancies between stakeholder perspectives about the degree of visual benefit. Clinicians and patient advocates were concerned by limited patient attention to risks of harm. Interviews revealed confusion about the time frames for the clinical implementation of choroideremia gene transfer and patient urgency to access gene transfer within a limited therapeutic window. Conclusion: Differences in stakeholder perspectives about choroideremia gene transfer necessitate strategies that promote responsible communications about choroideremia gene transfer and aid in its translation. Strategies should counter historical sensationalism associated with gene transfer, promote informed consent, and honor patient hope while grounding communications in current clinical realities. PMID:24071795

  6. Deconvolution method for accurate determination of overlapping peak areas in chromatograms.

    PubMed

    Nelson, T J

    1991-12-20

    A method is described for deconvoluting chromatograms which contain overlapping peaks. Parameters can be selected to ensure that attenuation of peak areas is uniform over any desired range of peak widths. A simple extension of the method greatly reduces the negative overshoot frequently encountered with deconvolutions. The deconvoluted chromatograms are suitable for integration by conventional methods.
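
    A simple way to realize such a deconvolution is partial inversion of an assumed Gaussian broadening function in the Fourier domain, with the inverse gain capped to control noise and negative overshoot clipped. The sketch below is generic; the partial-inversion exponent and gain cap are assumed tuning values, not the paper's parameters.

    ```python
    import numpy as np

    def sharpen_chromatogram(signal, peak_sigma, alpha=0.7, gain_cap=100.0):
        """Partially invert an assumed Gaussian broadening function of
        width peak_sigma (in samples). alpha < 1 trades resolution for
        roughly uniform peak-area attenuation across peak widths; clipping
        suppresses negative overshoot.
        """
        n = len(signal)
        f = np.fft.rfftfreq(n)
        H = np.maximum(np.exp(-2.0 * (np.pi * f * peak_sigma) ** 2), 1e-12)
        inv_gain = np.minimum(H ** (-alpha), gain_cap)  # capped partial inverse
        out = np.fft.irfft(np.fft.rfft(signal) * inv_gain, n)
        return np.maximum(out, 0.0)
    ```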

  7. The Ways of the Hand: A Study of Hand Function among Blind, Visually Impaired and Visually Impaired Multi-Handicapped Children and Adolescents.

    ERIC Educational Resources Information Center

    Rogow, Sally M.

    1987-01-01

    The manual development of 148 blind, visually impaired, and visually impaired multi-handicapped students, aged 3-19, was studied. Results indicated a significant relationship between object manipulation and speech, and an inverse relationship between object manipulation and stereotypic hand mannerisms. Optimal development of manual functions and…

  8. Enhancing Learning Management Systems Utility for Blind Students: A Task-Oriented, User-Centered, Multi-Method Evaluation Technique

    ERIC Educational Resources Information Center

    Babu, Rakesh; Singh, Rahul

    2013-01-01

    This paper presents a novel task-oriented, user-centered, multi-method evaluation (TUME) technique and shows how it is useful in providing a more complete, practical and solution-oriented assessment of the accessibility and usability of Learning Management Systems (LMS) for blind and visually impaired (BVI) students. Novel components of TUME…

  9. Georgia Deaf-Blind Project. Final Report, 1992-1995. State and Multi-State Projects for Children with Deaf-Blindness.

    ERIC Educational Resources Information Center

    Georgia State Dept. of Education, Atlanta.

    This final report describes activities and accomplishments of the Georgia Deaf-Blind Project, a 3-year federally supported project encompassing 159 counties and providing technical assistance to 237 infants, children, and youth with deaf-blindness along with their families and their service providers. Project accomplishments included: (1) more…

  10. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras

    PubMed Central

    Bi, Sheng; Zeng, Xiao; Tang, Xin; Qin, Shujia; Lai, King Wai Chiu

    2016-01-01

    Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed in CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%. PMID:26950127

  11. Improved optical flow motion estimation for digital image stabilization

    NASA Astrophysics Data System (ADS)

    Lai, Lijun; Xu, Zhiyong; Zhang, Xuyao

    2015-11-01

    Optical flow is the instantaneous motion vector at each pixel of the image frame at a given time instant. The gradient-based approach to optical flow computation does not work well when the inter-frame motion is too large. To alleviate this problem, we incorporate the algorithm into a pyramidal multi-resolution coarse-to-fine search strategy: a pyramid is used to obtain multi-resolution images; an iterative relationship from the highest level to the lowest level yields the inter-frame affine parameters; and subsequent frames are compensated back to the first frame to obtain the stabilized sequence. The experimental results demonstrate that the proposed method performs well in global motion estimation.
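
    A coarse-to-fine pyramid search of this kind is available off the shelf in OpenCV's pyramidal Lucas-Kanade tracker. The sketch below estimates an affine warp between two frames and compensates the current frame back, in the spirit of the method described above; parameter values are illustrative.

    ```python
    import cv2

    def stabilize_pair(prev_gray, curr_gray):
        """Align curr_gray to prev_gray using pyramidal Lucas-Kanade optical
        flow (coarse-to-fine, 4 pyramid levels) and a partial affine model.
        """
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400,
                                      qualityLevel=0.01, minDistance=8)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                  pts, None,
                                                  winSize=(21, 21), maxLevel=3)
        good = status.ravel() == 1
        # affine map taking current-frame points back to the reference frame
        M, _ = cv2.estimateAffinePartial2D(nxt[good], pts[good])
        h, w = curr_gray.shape
        return cv2.warpAffine(curr_gray, M, (w, h))
    ```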

  12. Multi-frame image processing with panning cameras and moving subjects

    NASA Astrophysics Data System (ADS)

    Paolini, Aaron; Humphrey, John; Curt, Petersen; Kelmelis, Eric

    2014-06-01

    Imaging scenarios commonly involve erratic, unpredictable camera behavior or subjects that are prone to movement, complicating multi-frame image processing techniques. To address these issues, we developed three techniques that can be applied to multi-frame image processing algorithms in order to mitigate the adverse effects observed when cameras are panning or subjects within the scene are moving. We provide a detailed overview of the techniques and discuss the applicability of each to various movement types. In addition to this, we evaluated algorithm efficacy with demonstrated benefits using field test video, which has been processed using our commercially available surveillance product. Our results show that algorithm efficacy is significantly improved in common scenarios, expanding our software's operational scope. Our methods introduce little computational burden, enabling their use in real-time and low-power solutions, and are appropriate for long observation periods. Our test cases focus on imaging through turbulence, a common use case for multi-frame techniques. We present results of a field study designed to test the efficacy of these techniques under expanded use cases.

  13. Deconvolution of continuous paleomagnetic data from pass-through magnetometer: A new algorithm to restore geomagnetic and environmental information based on realistic optimization

    NASA Astrophysics Data System (ADS)

    Oda, Hirokuni; Xuan, Chuang

    2014-10-01

    The development of pass-through superconducting rock magnetometers (SRM) has greatly promoted the collection of paleomagnetic data from continuous long-core samples. The output of a pass-through measurement is smoothed and distorted due to convolution of the magnetization with the magnetometer sensor response. Although several studies have restored high-resolution paleomagnetic signals through deconvolution of pass-through measurements, difficulties in accurately measuring the magnetometer sensor response have hindered the application of deconvolution. We acquired a reliable sensor response for an SRM at Oregon State University based on repeated measurements of a precisely fabricated magnetic point source. In addition, we present an improved deconvolution algorithm based on Akaike's Bayesian Information Criterion (ABIC) minimization, incorporating new parameters to account for errors in sample measurement position and length. The new algorithm was tested using synthetic data constructed by convolving a "true" paleomagnetic signal containing an "excursion" with the sensor response. Realistic noise was added to the synthetic measurement using a Monte Carlo method based on the measurement noise distribution acquired from 200 repeated measurements of a u-channel sample. Deconvolution of 1000 synthetic measurements with realistic noise closely resembled the "true" magnetization and successfully restored fine-scale magnetization variations, including the "excursion." Our analyses show that inaccuracy in sample measurement position and length significantly affects the deconvolution estimate and can be resolved using the new deconvolution algorithm. Optimized deconvolution of 20 repeated measurements of a u-channel sample yielded highly consistent deconvolution results and estimates of the errors in sample measurement position and length, demonstrating the reliability of the new deconvolution algorithm for real pass-through measurements.

  14. UDECON: deconvolution optimization software for restoring high-resolution records from pass-through paleomagnetic measurements

    NASA Astrophysics Data System (ADS)

    Xuan, Chuang; Oda, Hirokuni

    2015-11-01

    The rapid accumulation of continuous paleomagnetic and rock magnetic records acquired from pass-through measurements on superconducting rock magnetometers (SRM) has greatly contributed to our understanding of the paleomagnetic field and paleo-environment. Pass-through measurements are inevitably smoothed and altered by the convolution effect of the SRM sensor response, and deconvolution is needed to restore high-resolution paleomagnetic and environmental signals. Although various deconvolution algorithms have been developed, the lack of easy-to-use software has hindered the practical application of deconvolution. Here, we present the standalone graphical software UDECON as a convenient tool to perform optimized deconvolution for pass-through paleomagnetic measurements using the algorithm recently developed by Oda and Xuan (Geochem Geophys Geosyst 15:3907-3924, 2014). After preparation of a format file, UDECON can directly read pass-through paleomagnetic measurement files collected at different laboratories. Once the SRM sensor response is determined and loaded into the software, optimized deconvolution can be conducted using two different approaches (i.e., "Grid search" and "Simplex method") with adjustable initial values or ranges for smoothness, corrections of sample length, and shifts in measurement position. UDECON provides a suite of tools to conveniently view and check various types of original measurement and deconvolution data. Multiple steps of measurement and/or deconvolution data can be compared simultaneously to check consistency and to guide further deconvolution optimization. Deconvolved data, together with the loaded original measurement and SRM sensor response data, can be saved and reloaded for further treatment in UDECON. Users can also export the optimized deconvolution data to a text file for analysis in other software.
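
    The optimization UDECON performs can be illustrated with a generic Tikhonov-regularized deconvolution: build the convolution matrix of the sensor response, add a second-difference smoothness penalty, and solve in closed form; the smoothness value could then be chosen by a coarse grid search, analogous to the software's "Grid search" mode. The sketch below omits UDECON's ABIC criterion and position/length corrections.

    ```python
    import numpy as np

    def deconvolve_passthrough(measured, response, smoothness):
        """Tikhonov-regularized deconvolution of a pass-through measurement:
        solve (G^T G + s D^T D) m = G^T d, where G convolves with the sensor
        response and D is a second-difference operator.
        """
        n = len(measured)
        half = len(response) // 2
        G = np.zeros((n, n))
        for j in range(n):               # column j: shifted sensor response
            lo, hi = max(0, j - half), min(n, j - half + len(response))
            G[lo:hi, j] = response[lo - (j - half): hi - (j - half)]
        D = np.diff(np.eye(n), n=2, axis=0)
        lhs = G.T @ G + smoothness * (D.T @ D)
        return np.linalg.solve(lhs, G.T @ measured)
    ```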

  15. Enhancing multi-spot structured illumination microscopy with fluorescence difference

    NASA Astrophysics Data System (ADS)

    Ward, Edward N.; Torkelsen, Frida H.; Pal, Robert

    2018-03-01

    Structured illumination microscopy is a super-resolution technique used extensively in biological research. However, this technique is limited in the maximum possible resolution increase. Here we report the results of simulations of a novel enhanced multi-spot structured illumination technique. This method combines the super-resolution technique of difference microscopy with structured illumination deconvolution. Initial results give at minimum a 1.4-fold increase in resolution over conventional structured illumination in a low-noise environment. This new technique also has the potential to be expanded to further enhance axial resolution with three-dimensional difference microscopy. The requirement for precise pattern determination in this technique also led to the development of a new pattern estimation algorithm which proved more efficient and reliable than other methods tested.

  16. Image analysis in modern ophthalmology: from acquisition to computer assisted diagnosis and telemedicine

    NASA Astrophysics Data System (ADS)

    Marrugo, Andrés G.; Millán, María S.; Cristóbal, Gabriel; Gabarda, Salvador; Sorel, Michal; Sroubek, Filip

    2012-06-01

    Medical digital imaging has become a key element of modern health care procedures. It provides visual documentation and a permanent record for the patients, and most important the ability to extract information about many diseases. Modern ophthalmology thrives and develops on the advances in digital imaging and computing power. In this work we present an overview of recent image processing techniques proposed by the authors in the area of digital eye fundus photography. Our applications range from retinal image quality assessment to image restoration via blind deconvolution and visualization of structural changes in time between patient visits. All proposed within a framework for improving and assisting the medical practice and the forthcoming scenario of the information chain in telemedicine.

  17. High-resolution imaging spectroscopy of two micro-pores and an arch filament system in a small emerging-flux region

    NASA Astrophysics Data System (ADS)

    González Manrique, S. J.; Bello González, N.; Denker, C.

    2017-04-01

    Context. Emerging flux regions mark the first stage in the accumulation of magnetic flux eventually leading to pores, sunspots, and (complex) active regions. These flux regions are highly dynamic, show a variety of fine structure, and in many cases live only for a short time (less than a day) before dissolving quickly into the ubiquitous quiet-Sun magnetic field. Aims: The purpose of this investigation is to characterize the temporal evolution of a minute emerging flux region, the associated photospheric and chromospheric flow fields, and the properties of the accompanying arch filament system. We aim to explore flux emergence and decay processes and investigate whether they scale with structure size and magnetic flux contents. Methods: This study is based on imaging spectroscopy with the Göttingen Fabry-Pérot Interferometer at the Vacuum Tower Telescope, Observatorio del Teide, Tenerife, Spain on 2008 August 7. Photospheric horizontal proper motions were measured with local correlation tracking using broadband images restored with multi-object multi-frame blind deconvolution. Cloud model (CM) inversions of line scans in the strong chromospheric absorption Hα λ656.28 nm line yielded CM parameters (Doppler velocity, Doppler width, optical thickness, and source function), which describe the cool plasma contained in the arch filament system. Results: The high-resolution observations cover the decay and convergence of two micro-pores with diameters of less than one arcsecond and provide decay rates for intensity and area. The photospheric horizontal flow speed is suppressed near the two micro-pores indicating that the magnetic field is already sufficiently strong to affect the convective energy transport. The micro-pores are accompanied by a small arch filament system as seen in Hα, where small-scale loops connect two regions with Hα line-core brightenings containing an emerging flux region with opposite polarities. The Doppler width, optical thickness, and source function reach the largest values near the Hα line-core brightenings. The chromospheric velocity of the cloud material is predominantly directed downwards near the footpoints of the loops with velocities of up to 12 km s-1, whereas loop tops show upward motions of about 3 km s-1. Some of the loops exhibit signs of twisting motions along the loop axis. Conclusions: Micro-pores are the smallest magnetic field concentrations leaving a photometric signature in the photosphere. In the observed case, they are accompanied by a miniature arch filament system indicative of newly emerging flux in the form of Ω-loops. Flux emergence and decay take place on a time-scale of about two days, whereas the photometric decay of the micro-pores is much more rapid (a few hours), which is consistent with the incipient submergence of Ω-loops. Considering lifetime and evolution timescales, impact on the surrounding photospheric proper motions, and flow speed of the chromospheric plasma at the loop tops and footpoints, the results are representative for the smallest emerging flux regions still recognizable as such.

  18. Crowded field photometry with deconvolved images.

    NASA Astrophysics Data System (ADS)

    Linde, P.; Spännare, S.

    A local implementation of the Lucy-Richardson algorithm has been used to deconvolve a set of crowded stellar field images. The effects of deconvolution on detection limits as well as on photometric and astrometric properties have been investigated as a function of the number of deconvolution iterations. Results show that deconvolution improves detection of faint stars, although artifacts are also found. Deconvolution provides more stars measurable without significant degradation of positional accuracy. The photometric precision is affected by deconvolution in several ways. Errors due to unresolved images are notably reduced, while flux redistribution between stars and background increases the errors.

  19. Improving ground-penetrating radar data in sedimentary rocks using deterministic deconvolution

    USGS Publications Warehouse

    Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.; Byrnes, A.P.

    2003-01-01

    Resolution is key to confidently identifying unique geologic features using ground-penetrating radar (GPR) data. Source wavelet "ringing" (related to bandwidth) in a GPR section limits resolution because of wavelet interference, and can smear reflections in time and/or space. The resultant potential for misinterpretation limits the usefulness of GPR. Deconvolution offers the ability to compress the source wavelet and improve temporal resolution. Unlike statistical deconvolution, deterministic deconvolution is mathematically simple and stable while providing the highest possible resolution because it uses the source wavelet unique to the specific radar equipment. Source wavelets generated in, transmitted through and acquired from air allow successful application of deterministic approaches to wavelet suppression. We demonstrate the validity of using a source wavelet acquired in air as the operator for deterministic deconvolution in a field application using "400-MHz" antennas at a quarry site characterized by interbedded carbonates with shale partings. We collected GPR data on a bench adjacent to cleanly exposed quarry faces in which we placed conductive rods to provide conclusive ground truth for this approach to deconvolution. The best deconvolution results, which are confirmed by the conductive rods for the 400-MHz antenna tests, were observed for wavelets acquired when the transmitter and receiver were separated by 0.3 m. Applying deterministic deconvolution to GPR data collected in sedimentary strata at our study site resulted in an improvement in resolution (50%) and improved spatial location (0.10-0.15 m) of geologic features compared to the same data processed without deterministic deconvolution. The effectiveness of deterministic deconvolution for increased resolution and spatial accuracy of specific geologic features is further demonstrated by comparing results of deconvolved data with nondeconvolved data acquired along a 30-m transect immediately adjacent to a fresh quarry face. The results at this site support using deterministic deconvolution, which incorporates the GPR instrument's unique source wavelet, as a standard part of routine GPR data processing. © 2003 Elsevier B.V. All rights reserved.
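
    Deterministic deconvolution with a wavelet recorded in air is typically implemented as frequency-domain spectral division with water-level stabilization. The sketch below is the standard textbook formulation, offered as illustration; the water-level fraction is an assumed tuning value.

    ```python
    import numpy as np

    def deterministic_deconv(trace, wavelet, water_level=0.01):
        """Spectral division of a GPR trace by a source wavelet recorded in
        air, stabilized with a water level on the wavelet power spectrum.
        """
        n = len(trace)
        W = np.fft.rfft(wavelet, n)
        T = np.fft.rfft(trace)
        P = np.abs(W) ** 2
        P = np.maximum(P, water_level * P.max())  # floor weak spectral bands
        return np.fft.irfft(T * np.conj(W) / P, n)
    ```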

  20. Deconvolution of Stark broadened spectra for multi-point density measurements in a flow Z-pinch

    DOE PAGES

    Vogman, G. V.; Shumlak, U.

    2011-10-13

    Stark broadened emission spectra, once separated from other broadening effects, provide a convenient non-perturbing means of making plasma density measurements. A deconvolution technique has been developed to measure plasma densities in the ZaP flow Z-pinch experiment. The ZaP experiment uses sheared flow to mitigate MHD instabilities. The pinches exhibit Stark broadened emission spectra, which are captured at 20 locations using a multi-chord spectroscopic system. Spectra that are time- and chord-integrated are well approximated by a Voigt function. The proposed method simultaneously resolves plasma electron density and ion temperature by deconvolving the spectral Voigt profile into constituent functions: a Gaussian function associated with instrument effects and Doppler broadening by temperature; and a Lorentzian function associated with Stark broadening by electron density. The method uses analytic Fourier transforms of the constituent functions to fit the Voigt profile in the Fourier domain. The method is discussed and compared to a basic least-squares fit. The Fourier transform fitting routine requires fewer fitting parameters and shows promise in being less susceptible to instrumental noise and to contamination from neighboring spectral lines. The method is evaluated and tested using simulated lines and is applied to experimental data for the 229.69 nm C III line from multiple chords to determine plasma density and temperature across the diameter of the pinch. As a result, these measurements are used to gain a better understanding of Z-pinch equilibria.
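
    The separation of Gaussian and Lorentzian widths from a Voigt profile can be illustrated with a direct wavelength-domain fit using scipy's voigt_profile, rather than the Fourier-domain fit the authors developed. All starting values below are assumptions.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import voigt_profile

    def fit_voigt(x, y, x0_guess, sigma0=0.05, gamma0=0.05):
        """Fit a Voigt line shape and return the Gaussian width (instrument
        plus Doppler) and the Lorentzian width (Stark) separately.
        """
        def model(xx, amp, x0, sigma, gamma, base):
            return amp * voigt_profile(xx - x0, sigma, gamma) + base

        amp0 = (y.max() - y.min()) / voigt_profile(0.0, sigma0, gamma0)
        p0 = [amp0, x0_guess, sigma0, gamma0, y.min()]
        popt, _ = curve_fit(model, x, y, p0=p0)
        return popt[2], popt[3]   # (Gaussian sigma, Lorentzian gamma)
    ```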

  2. Design optimization of the S-frame to improve crashworthiness

    NASA Astrophysics Data System (ADS)

    Liu, Shu-Tian; Tong, Ze-Qi; Tang, Zhi-Liang; Zhang, Zong-Hua

    2014-08-01

    In this paper, the S-frames, the front side rail structures of automobiles, were investigated for crashworthiness. Various cross-sections including regular polygon, non-convex polygon and multi-cell with inner stiffener sections were investigated in terms of energy absorption of S-frames. It was determined through extensive numerical simulation that a multi-cell S-frame with double vertical internal stiffeners can absorb more energy than the other configurations. Shape optimization was also carried out to improve energy absorption of the S-frame with a rectangular section. The central composite design of experiments and the sequential response surface method (SRSM) were adopted to construct the approximate design sub-problem, which was then solved by the feasible direction method. An innovative double S-frame was obtained from the optimal result. The optimum configuration of the S-frame was crushed numerically and more plastic hinges as well as shear zones were observed during the crush process. The energy absorption efficiency of the structure with the optimal configuration was improved compared to the initial configuration.

  3. Correction for frequency-dependent hydrophone response to nonlinear pressure waves using complex deconvolution and rarefactional filtering: application with fiber optic hydrophones.

    PubMed

    Wear, Keith; Liu, Yunbo; Gammell, Paul M; Maruvada, Subha; Harris, Gerald R

    2015-01-01

    Nonlinear acoustic signals contain significant energy at many harmonic frequencies. For many applications, the sensitivity (frequency response) of a hydrophone will not be uniform over such a broad spectrum. In a continuation of a previous investigation involving deconvolution methodology, deconvolution (implemented in the frequency domain as an inverse filter computed from frequency-dependent hydrophone sensitivity) was investigated for improvement of accuracy and precision of nonlinear acoustic output measurements. Time-delay spectrometry was used to measure complex sensitivities for 6 fiber-optic hydrophones. The hydrophones were then used to measure a pressure wave with rich harmonic content. Spectral asymmetry between compressional and rarefactional segments was exploited to design filters used in conjunction with deconvolution. Complex deconvolution reduced mean bias (for 6 fiber-optic hydrophones) from 163% to 24% for peak compressional pressure (p+), from 113% to 15% for peak rarefactional pressure (p-), and from 126% to 29% for pulse intensity integral (PII). Complex deconvolution reduced mean coefficient of variation (COV) (for 6 fiber-optic hydrophones) from 18% to 11% (p+), 53% to 11% (p-), and 20% to 16% (PII). Deconvolution based on sensitivity magnitude or the minimum phase model also resulted in significant reductions in mean bias and COV of acoustic output parameters but was less effective than direct complex deconvolution for p+ and p-. Therefore, deconvolution with appropriate filtering facilitates reliable nonlinear acoustic output measurements using hydrophones with frequency-dependent sensitivity.
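
    A minimal sketch of the frequency-domain inverse filtering described above: divide the measured voltage spectrum by the complex hydrophone sensitivity, with regularization so the division does not amplify noise where the sensitivity is small. The sample rate, toy waveform, sensitivity model, and epsilon are assumptions for illustration, not the authors' values.

    ```python
    import numpy as np

    fs = 250e6                                    # sample rate [Hz], assumed
    t = np.arange(2048) / fs
    voltage = np.sin(2*np.pi*2e6*t) * np.exp(-((t - 4e-6)/1e-6)**2)  # toy record

    f = np.fft.rfftfreq(t.size, d=1/fs)
    V = np.fft.rfft(voltage)

    # Assumed complex sensitivity: magnitude roll-off plus a linear phase term.
    M = (1.0 / (1.0 + (f/20e6)**2)) * np.exp(-1j*2*np.pi*f*5e-9)

    eps = 1e-3 * np.max(np.abs(M))                # stabilizes the division
    P = V * np.conj(M) / (np.abs(M)**2 + eps**2)  # regularized inverse filter
    pressure = np.fft.irfft(P, n=t.size)          # deconvolved pressure waveform
    ```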

  4. An Approach to Co-Channel Talker Interference Suppression Using a Sinusoidal Model for Speech

    DTIC Science & Technology

    1988-02-05

    Massachusetts Institute of Technology, with the support of the Department of the Air Force under Contract F19628-85-C-0002. [Figure titles extracted from the report:] ...Extracted from Summed Vocalic Waveforms; Failure of the Least Squares Solution with Closely-Spaced Frequencies: (a) Crossing Frequency Tracks, (b) Crossing Pitch Contours; Multi-Frame Interpolation; Different Forms of Multi-Frame Interpolation; Recovery of Missing Lobe with Multi...

  5. Fast analytical spectral filtering methods for magnetic resonance perfusion quantification.

    PubMed

    Reddy, Kasireddy V; Mitra, Abhishek; Yalavarthy, Phaneendra K

    2016-08-01

    The deconvolution in perfusion weighted imaging (PWI) plays an important role in quantifying the MR perfusion parameters. The application of PWI to stroke and brain tumor studies has become standard clinical practice. The standard approaches for this deconvolution are oscillatory-limited singular value decomposition (oSVD) and frequency domain deconvolution (FDD). The FDD is widely recognized as the fastest approach currently available for deconvolution of MR perfusion data. In this work, two fast deconvolution methods (namely analytical Fourier filtering and analytical Showalter spectral filtering) are proposed. Through systematic evaluation, the proposed methods are shown to be computationally efficient and quantitatively accurate compared to FDD and oSVD.
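
    The SVD baseline these methods are compared against can be sketched in a few lines: build the arterial input function (AIF) convolution matrix, truncate small singular values, and recover the residue function. The gamma-variate AIF, exponential residue function, and 20% singular-value threshold below are common illustrative choices, not values from the paper.

    ```python
    import numpy as np
    from scipy.linalg import toeplitz

    dt = 1.0                                          # sampling interval [s]
    t = np.arange(40) * dt
    aif = (t/4)**3 * np.exp(-t/1.5); aif /= aif.max() # toy arterial input
    true_r = np.exp(-t / 6.0)                         # toy residue function
    tissue = dt * np.convolve(aif, true_r)[:t.size]   # tissue concentration

    A = dt * toeplitz(aif, np.zeros(t.size))          # lower-triangular conv. matrix
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > 0.2 * s.max(), 1.0/s, 0.0)   # truncated pseudo-inverse
    r_est = Vt.T @ (s_inv * (U.T @ tissue))           # deconvolved residue function
    cbf = r_est.max()                                 # flow scales the residue peak
    ```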

  6. Differences between early-blind, late-blind, and blindfolded-sighted people in haptic spatial-configuration learning and resulting memory traces.

    PubMed

    Postma, Albert; Zuidhoek, Sander; Noordzij, Matthijs L; Kappers, Astrid M L

    2007-01-01

    The roles of visual and haptic experience in different aspects of haptic processing of objects in peripersonal space are examined. In three trials, early-blind, late-blind, and blindfolded-sighted individuals had to match ten shapes haptically to the cut-outs in a board as fast as possible. Both blind groups were much faster than the sighted in all three trials. All three groups improved considerably from trial to trial. In particular, the sighted group showed a strong improvement from the first to the second trial. While superiority of the blind remained for speeded matching after rotation of the stimulus frame, coordinate positional-memory scores in a non-speeded free-recall trial showed no significant differences between the groups. Moreover, when assessed with a verbal response, categorical spatial-memory appeared strongest in the late-blind group. The role of haptic and visual experience thus appears to depend on the task aspect tested.

  7. Color Comprehension and Color Categories among Blind Students: A Multi-Sensory Approach in Implementing Concrete Language to Include All Students in Advanced Writing Classes

    ERIC Educational Resources Information Center

    Antarasena, Salinee

    2009-01-01

    This study investigates teaching methods regarding color comprehension and color categorization among blind students, as compared to their non-blind peers and whether they understand and represent the same color comprehension and color categories. Then after digit codes for color comprehension teaching and assistive technology for the blind had…

  8. System identification based on deconvolution and cross correlation: An application to a 20‐story instrumented building in Anchorage, Alaska

    USGS Publications Warehouse

    Wen, Weiping; Kalkan, Erol

    2017-01-01

    Deconvolution and cross-correlation techniques are used for system identification of a 20-story steel, moment-resisting frame building in downtown Anchorage, Alaska. This regular-plan midrise structure is instrumented with a 32-channel accelerometer array at 10 levels. The impulse response functions (IRFs) and correlation functions (CFs) are computed based on waveforms recorded from ambient vibrations and five local and regional earthquakes. The earthquakes occurred from 2005 to 2014 with moment magnitudes between 4.7 and 6.2 over a range of azimuths at epicenter distances of 13.3-183 km. The building's fundamental frequencies and mode shapes are determined using a complex mode indicator function based on singular value decomposition of multiple reference frequency-response functions. The traveling waves, identified in IRFs with a virtual source at the roof, and CFs are used to estimate the intrinsic attenuation associated with the fundamental modes and shear-wave velocity in the building. Although the cross correlation of the waveforms at various levels with the corresponding waveform at the first floor provides more complicated wave propagation than that from the deconvolution with virtual source at the roof, the shear-wave velocities identified by both techniques are consistent; the largest difference in average values is within 8%. The median shear-wave velocity from the IRFs of five earthquakes is 191 m/s for the east-west (E-W), 205 m/s for the north-south (N-S), and 176 m/s for the torsional responses. The building's average intrinsic-damping ratio is estimated to be 3.7% and 3.4% in the 0.2-1 Hz frequency band for the E-W and N-S directions, respectively. These results are intended to serve as reference for the undamaged condition of the building, which may be used for tracking changes in structural integrity during and after future earthquakes.
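
    A sketch of the deconvolution step with a virtual source at the roof: each floor record is spectrally divided by the roof record, with water-level regularization, and travel times picked from the resulting IRFs between sensor levels give the shear-wave velocity. The function and variable names and the regularization level are assumptions, not the authors' code.

    ```python
    import numpy as np

    def impulse_response(floor_rec, roof_rec, water_level=0.01):
        """IRF of a floor with respect to a virtual source at the roof."""
        n = len(roof_rec)
        F = np.fft.rfft(floor_rec, n)
        R = np.fft.rfft(roof_rec, n)
        denom = np.abs(R)**2 + water_level * np.max(np.abs(R))**2
        H = F * np.conj(R) / denom           # regularized spectral division
        return np.fft.irfft(H, n)

    # Shear-wave velocity then follows from the IRF travel time between
    # two instrumented levels:  v_s = floor_separation / picked_delay.
    ```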

  9. Safety and Efficacy of the BrainPort V100 Device in Individuals Blinded by Traumatic Injury

    DTIC Science & Technology

    2015-10-01

    ...within the next quarter. SUBJECT TERMS: BrainPort, V100, V200, blindness, visual impairment, assistive device, assistive technology, visual aid, non... [Table-of-contents fragment omitted.] ...design were finalized during the 4th quarter. The headset frame design (plastic and silicone components) was completed and device hardware...

  10. Timbre Analysis of the Sonicguide

    ERIC Educational Resources Information Center

    Welch, James

    1977-01-01

    The timbre of various Sonicguide (an ultrasonic binaural device mounted in a spectacle frame) signals was measured and analyzed by a "Sona-Graph" (sound spectral analyzer) to aid mobility training of blind Ss. (Author/MH)

  11. A multi-frame soft x-ray pinhole imaging diagnostic for single-shot applications

    NASA Astrophysics Data System (ADS)

    Wurden, G. A.; Coffey, S. K.

    2012-10-01

    For high energy density magnetized target fusion experiments at the Air Force Research Laboratory FRCHX machine, obtaining multi-frame soft x-ray images of the field reversed configuration (FRC) plasma as it is being compressed will provide useful dynamics and symmetry information. However, vacuum hardware will be destroyed during the implosion. We have designed a simple in-vacuum pinhole nosecone attachment, fitting onto a Conflat window, coated with 3.2 mg/cm² of P-47 phosphor, and covered with a thin 50-nm aluminum reflective overcoat, lens-coupled to a multi-frame Hadland Ultra intensified digital camera. We compare visible and soft x-ray axial images of translating (~200 eV) plasmas in the FRX-L and FRCHX machines in Los Alamos and Albuquerque.

  12. Dynamic configuration management of a multi-standard and multi-mode reconfigurable multi-ASIP architecture for turbo decoding

    NASA Astrophysics Data System (ADS)

    Lapotre, Vianney; Gogniat, Guy; Baghdadi, Amer; Diguet, Jean-Philippe

    2017-12-01

    The multiplication of connected devices goes along with a large variety of applications and traffic types with diverse requirements. Accompanying this connectivity evolution, recent years have seen considerable evolution of wireless communication standards in the domains of mobile telephone networks, local/wide wireless area networks, and Digital Video Broadcasting (DVB). In this context, intensive research has been conducted to provide flexible turbo decoders targeting high throughput, multi-mode and multi-standard operation, and power efficiency. However, flexible turbo decoder implementations have rarely considered dynamic reconfiguration in this context, which requires high-speed configuration switching. Starting from this assessment, this paper proposes the first solution that allows frame-by-frame run-time configuration management of a multi-processor turbo decoder without compromising decoding performance.

  13. The Cortically Blind Infant: Educational Guidelines and Suggestions.

    ERIC Educational Resources Information Center

    Silverrain, Ann

    Cortical blindness is defined and its diagnosis is explained. Guidelines and sample activities are presented for use in a cognitive/visual/multi-sensory stimulation program to produce progress in cortically blind infants. The importance of using the eyes from birth through early development in order to form the nerve pathways responsible for…

  14. Pacific Basin Deaf-Blind Project. State & Multi State Projects for Children with Deaf-Blindness. Final Report, 1992-1995.

    ERIC Educational Resources Information Center

    Kelly, Dotty; Guerrero, Vincent Leon

    This final report describes activities and accomplishments of the Pacific Basin Deaf-Blind Project, a 3-year federally funded project to provide technical assistance to public and private agencies, institutions, and organizations providing early intervention, educational, transitional, vocational, early identification, and related services to…

  15. Enhancing multi-spot structured illumination microscopy with fluorescence difference

    PubMed Central

    Torkelsen, Frida H.

    2018-01-01

    Structured illumination microscopy is a super-resolution technique used extensively in biological research. However, this technique is limited in the maximum possible resolution increase. Here we report the results of simulations of a novel enhanced multi-spot structured illumination technique. This method combines the super-resolution technique of difference microscopy with structured illumination deconvolution. Initial results give at minimum a 1.4-fold increase in resolution over conventional structured illumination in a low-noise environment. This new technique also has the potential to be expanded to further enhance axial resolution with three-dimensional difference microscopy. The requirement for precise pattern determination in this technique also led to the development of a new pattern estimation algorithm which proved more efficient and reliable than other methods tested. PMID:29657751

  16. Kinetic analysis of non-isothermal solid-state reactions: multi-stage modeling without assumptions in the reaction mechanism.

    PubMed

    Pomerantsev, Alexey L; Kutsenova, Alla V; Rodionova, Oxana Ye

    2017-02-01

    A novel non-linear regression method for modeling non-isothermal thermogravimetric data is proposed. Experiments for several heating rates are analyzed simultaneously. The method is applicable to complex multi-stage processes when the number of stages is unknown. Prior knowledge of the type of kinetics is not required. The main idea is a consequent estimation of parameters when the overall model is successively changed from one level of modeling to another. At the first level, the Avrami-Erofeev functions are used. At the second level, the Sestak-Berggren functions are employed with the goal to broaden the overall model. The method is tested using both simulated and real-world data. A comparison of the proposed method with a recently published 'model-free' deconvolution method is presented.

  17. Analysis of blind identification methods for estimation of kinetic parameters in dynamic medical imaging

    NASA Astrophysics Data System (ADS)

    Riabkov, Dmitri

    Compartment modeling of dynamic medical image data implies that the concentration of the tracer over time in a particular region of the organ of interest is well-modeled as a convolution of the tissue response with the tracer concentration in the blood stream. The tissue response is different for different tissues while the blood input is assumed to be the same for different tissues. The kinetic parameters characterizing the tissue responses can be estimated by blind identification methods. These algorithms use the simultaneous measurements of concentration in separate regions of the organ; if the regions have different responses, the measurement of the blood input function may not be required. In this work it is shown that the blind identification problem has a unique solution for two-compartment model tissue response. For two-compartment model tissue responses in dynamic cardiac MRI imaging conditions with gadolinium-DTPA contrast agent, three blind identification algorithms are analyzed here to assess their utility: Eigenvector-based Algorithm for Multichannel Blind Deconvolution (EVAM), Cross Relations (CR), and Iterative Quadratic Maximum Likelihood (IQML). Comparisons of accuracy with conventional (not blind) identification techniques where the blood input is known are made as well. The statistical accuracies of estimation for the three methods are evaluated and compared for multiple parameter sets. The results show that the IQML method gives more accurate estimates than the other two blind identification methods. A proof is presented here that three-compartment model blind identification is not unique in the case of only two regions. It is shown that it is likely unique for the case of more than two regions, but this has not been proved analytically. For the three-compartment model the tissue responses in dynamic FDG PET imaging conditions are analyzed with the blind identification algorithms EVAM and Separable variables Least Squares (SLS). A method of identification that assumes that FDG blood input in the brain can be modeled as a function of time and several parameters (IFM) is analyzed also. Nonuniform sampling SLS (NSLS) is developed due to the rapid change of the FDG concentration in the blood during the early postinjection stage. Comparisons of accuracy of EVAM, SLS, NSLS and IFM identification techniques are made.
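
    The Cross Relations (CR) method evaluated above can be sketched directly: two regions sharing one unknown input satisfy y1*h2 = y2*h1, so the stacked convolution matrix has the channel pair in its null space, up to a common scale factor. The FIR length and toy kernels below are assumptions for illustration.

    ```python
    import numpy as np
    from scipy.linalg import convolution_matrix, svd

    rng = np.random.default_rng(0)
    x = rng.standard_normal(200)              # shared (unknown) blood input
    h1 = np.array([1.0, 0.5, 0.25])           # toy tissue responses
    h2 = np.array([0.8, -0.3, 0.1])
    y1, y2 = np.convolve(x, h1), np.convolve(x, h2)

    L = 3                                     # assumed FIR response length
    A = np.hstack([convolution_matrix(y1, L),   # A @ [h2; h1] = 0
                   -convolution_matrix(y2, L)])
    _, _, Vt = svd(A, full_matrices=False)
    h_est = Vt[-1]                            # null vector (up to scale/sign)
    h2_est, h1_est = h_est[:L], h_est[L:]
    ```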

  18. Deblending of simultaneous-source data using iterative seislet frame thresholding based on a robust slope estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Yatong; Han, Chunying; Chi, Yue

    2018-06-01

    In a simultaneous-source survey, no limitation is placed on the shot scheduling of nearby sources, so acquisition efficiency increases greatly, but the recorded seismic data are contaminated by strong blending interference. In this paper, we propose a multi-dip seislet frame based sparse inversion algorithm to iteratively separate simultaneous sources. We overcome two inherent drawbacks of the traditional seislet transform. For the multi-dip problem, we propose to apply a multi-dip seislet frame thresholding strategy instead of the traditional seislet transform for deblending simultaneous-source data that contain multiple dips, e.g., multiple reflections. The multi-dip seislet frame strategy solves the conflicting-dip problem that degrades the performance of the traditional seislet transform. For the noise issue, we propose a robust dip estimation algorithm based on velocity-slope transformation. Instead of calculating the local slope directly using the plane-wave destruction (PWD) based method, we first apply NMO-based velocity analysis and obtain NMO velocities for multi-dip components that correspond to multiples of different orders; a fairly accurate slope estimation can then be obtained using the velocity-slope conversion equation. An iterative deblending framework is given and validated through comprehensive analysis of both synthetic and field data examples.
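
    The velocity-slope conversion can be made concrete: assuming hyperbolic moveout t^2 = t0^2 + x^2 / v^2, the local slope of an event at offset x and time t is p = dt/dx = x / (t v^2). A one-function sketch with illustrative values follows.

    ```python
    import numpy as np

    def nmo_slope(x, t0, v_nmo):
        """Local slope [s/m] of a reflection with NMO velocity v_nmo [m/s]."""
        t = np.sqrt(t0**2 + (x / v_nmo)**2)   # hyperbolic traveltime
        return x / (t * v_nmo**2)             # dt/dx from the moveout equation

    print(nmo_slope(x=1000.0, t0=1.2, v_nmo=2000.0))  # slope at 1 km offset
    ```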

  19. SU-F-T-478: Effect of Deconvolution in Analysis of Mega Voltage Photon Beam Profiles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muthukumaran, M; Manigandan, D; Murali, V

    2016-06-15

    Purpose: To study and compare the penumbra of 6 MV and 15 MV photon beam profiles after deconvolving the response of ionization chambers of different volumes. Methods: A 0.125 cc Semi-Flex chamber, a Markus chamber, and a PTW Farmer chamber were used to measure the in-plane and cross-plane profiles at 5 cm depth for 6 MV and 15 MV photons. The profiles were measured for various field sizes from 2×2 cm to 30×30 cm. PTW TBA scan software was used for the measurements, and the "deconvolution" functionality in the software was used to remove the volume-averaging effect due to the finite volume of the chamber along the lateral and longitudinal directions for all the ionization chambers. The predicted true profile was compared and the change in penumbra before and after deconvolution was studied. Results: After deconvolution, the penumbra decreased by 1 mm for field sizes from 2×2 cm to 20×20 cm, along both the lateral and longitudinal directions. However, for field sizes from 20×20 cm to 30×30 cm the difference in penumbra was around 1.2 to 1.8 mm. This was observed for both 6 MV and 15 MV photon beams. The penumbra was always smaller in the deconvolved profiles for all the ionization chambers in the study. The difference in penumbral values between the deconvolved lateral and longitudinal profiles was on the order of 0.1 to 0.3 mm for all chambers. Deconvolution of the profiles along the longitudinal direction for the Farmer chamber was poor and is not comparable with the other deconvolved profiles. Conclusion: The deconvolved profiles for the 0.125 cc and Markus chambers were comparable, and the deconvolution functionality can be used to overcome the volume-averaging effect.

  20. An adaptive sparse deconvolution method for distinguishing the overlapping echoes of ultrasonic guided waves for pipeline crack inspection

    NASA Astrophysics Data System (ADS)

    Chang, Yong; Zi, Yanyang; Zhao, Jiyuan; Yang, Zhe; He, Wangpeng; Sun, Hailiang

    2017-03-01

    In guided wave pipeline inspection, echoes reflected from closely spaced reflectors generally overlap, meaning useful information is lost. To solve the overlapping problem, sparse deconvolution methods have been developed in the past decade. However, conventional sparse deconvolution methods have limitations in handling guided wave signals, because the input signal is directly used as the prototype of the convolution matrix, without considering the waveform change caused by the dispersion properties of the guided wave. In this paper, an adaptive sparse deconvolution (ASD) method is proposed to overcome these limitations. First, the Gaussian echo model is employed to adaptively estimate the column prototype of the convolution matrix instead of directly using the input signal as the prototype. Then, the convolution matrix is constructed upon the estimated results. Third, the split augmented Lagrangian shrinkage algorithm (SALSA) is introduced to solve the deconvolution problem with high computational efficiency. To verify the effectiveness of the proposed method, guided wave signals obtained from pipeline inspection are investigated numerically and experimentally. Compared to conventional sparse deconvolution methods, e.g. the l1-norm deconvolution method, the proposed method shows better performance in handling the echo overlap problem in the guided wave signal.
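
    The paper solves the sparse deconvolution with SALSA (an ADMM-type solver); as a simpler stand-in for the same l1-regularized objective, min_x 0.5*||Hx - y||^2 + lam*||x||_1, the ISTA sketch below shows the structure. The echo prototype h, the step size rule, and the parameters are assumptions, not the authors' implementation.

    ```python
    import numpy as np
    from scipy.linalg import convolution_matrix

    def ista_deconv(y, h, n, lam=0.1, n_iter=500):
        """Sparse deconvolution of y (length len(h)+n-1) against prototype h."""
        H = convolution_matrix(h, n)               # y ~ H x, with x sparse
        step = 1.0 / np.linalg.norm(H, 2) ** 2     # 1/L, L = Lipschitz constant
        x = np.zeros(n)
        for _ in range(n_iter):
            grad = H.T @ (H @ x - y)               # gradient of the data term
            z = x - step * grad
            x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
        return x
    ```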

  1. Removing flicker based on sparse color correspondences in old film restoration

    NASA Astrophysics Data System (ADS)

    Huang, Xi; Ding, Youdong; Yu, Bing; Xia, Tianran

    2018-04-01

    Archived film is an indispensable part of the long history of human civilization, and digitally repairing damaged film is now a mainstream practice. In this paper, we propose a technique based on sparse color correspondences to remove fading flicker from old films. Our approach combines multiple frames to establish a simple correction model and includes three key steps. First, we recover sparse color correspondences in the input frames to build a matrix with many missing entries. Second, we present a low-rank matrix factorization approach to estimate the unknown parameters of this model. Finally, we adopt a two-step strategy that divides the estimated parameters into reference-frame parameters for color recovery correction and other-frame parameters for color consistency correction to remove flicker. By combining multiple frames, our method takes the continuity of the input sequence into account, and the experimental results show that it can remove fading flicker efficiently.

  2. Telemetry Standards, Part 1

    DTIC Science & Technology

    2015-07-01

    IMAGE FRAME RATE (R-x\\ IFR -n) PRE-TRIGGER FRAMES (R-x\\PTG-n) TOTAL FRAMES (R-x\\TOTF-n) EXPOSURE TIME (R-x\\EXP-n) SENSOR ROTATION (R-x...0” (Single frame). “1” (Multi-frame). “2” (Continuous). Allowed when: When R\\CDT is “IMGIN” IMAGE FRAME RATE R-x\\ IFR -n R/R Ch 10 Status: RO...the settings that the user wishes to modify. Return Value The impact : A partial IHAL <configuration> element containing only the new settings for

  3. Mission Capability Gains from Multi-Mode Propulsion Thrust Variations on a Variety Spacecraft Orbital Maneuvers

    DTIC Science & Technology

    2011-03-01

    [Figure titles extracted from the report:] Geocentric-Equatorial Reference Frame; Figure 8: Perifocal and Geocentric...; Figure 25: Mission 3 Geocentric Equatorial Reference Frame; Figure 26: Mission 3... The coordinate system, the Geocentric-Equatorial Reference Frame, and the reference frame depicted on one another are shown below. The following figures are from...

  4. What do you gain from deconvolution? - Observing faint galaxies with the Hubble Space Telescope Wide Field Camera

    NASA Technical Reports Server (NTRS)

    Schade, David J.; Elson, Rebecca A. W.

    1993-01-01

    We describe experiments with deconvolutions of simulations of deep HST Wide Field Camera images containing faint, compact galaxies to determine under what circumstances there is a quantitative advantage to image deconvolution, and explore whether it is (1) helpful for distinguishing between stars and compact galaxies, or between spiral and elliptical galaxies, and whether it (2) improves the accuracy with which characteristic radii and integrated magnitudes may be determined. The Maximum Entropy and Richardson-Lucy deconvolution algorithms give the same results. For medium and low S/N images, deconvolution does not significantly improve our ability to distinguish between faint stars and compact galaxies, nor between spiral and elliptical galaxies. Measurements from both raw and deconvolved images are biased and must be corrected; it is easier to quantify and remove the biases for cases that have not been deconvolved. We find no benefit from deconvolution for measuring luminosity profiles, but these results are limited to low S/N images of very compact (often undersampled) galaxies.

  5. Touch HDR: photograph enhancement by user controlled wide dynamic range adaptation

    NASA Astrophysics Data System (ADS)

    Verrall, Steve; Siddiqui, Hasib; Atanassov, Kalin; Goma, Sergio; Ramachandra, Vikas

    2013-03-01

    High Dynamic Range (HDR) technology enables photographers to capture a greater range of tonal detail. HDR is typically used to bring out detail in a dark foreground object set against a bright background. HDR technologies include multi-frame HDR and single-frame HDR. Multi-frame HDR requires the combination of a sequence of images taken at different exposures. Single-frame HDR requires histogram equalization post-processing of a single image, a technique referred to as local tone mapping (LTM). Images generated using HDR technology can look less natural than their non-HDR counterparts. Sometimes it is only desired to enhance small regions of an original image. For example, it may be desired to enhance the tonal detail of one subject's face while preserving the original background. The Touch HDR technique described in this paper achieves these goals by enabling selective blending of HDR and non-HDR versions of the same image to create a hybrid image. The HDR version of the image can be generated by either multi-frame or single-frame HDR. Selective blending can be performed as a post-processing step, for example, as a feature of a photo editor application, at any time after the image has been captured. HDR and non-HDR blending is controlled by a weighting surface, which is configured by the user through a sequence of touches on a touchscreen.
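
    The blending step itself is a per-pixel convex combination. A minimal sketch, assuming a user-painted weight surface W in [0,1] (derived from touchscreen strokes) and floating-point RGB images; the names are illustrative.

    ```python
    import numpy as np

    def touch_blend(hdr_img, base_img, weight):
        """hybrid = W*HDR + (1-W)*original, with W in [0,1] per pixel."""
        w = np.clip(weight, 0.0, 1.0)[..., None]   # broadcast over color channels
        return w * hdr_img + (1.0 - w) * base_img
    ```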

  6. Non-rigid multi-frame registration of cell nuclei in live cell fluorescence microscopy image data.

    PubMed

    Tektonidis, Marco; Kim, Il-Han; Chen, Yi-Chun M; Eils, Roland; Spector, David L; Rohr, Karl

    2015-01-01

    The analysis of the motion of subcellular particles in live cell microscopy images is essential for understanding biological processes within cells. For accurate quantification of the particle motion, compensation of the motion and deformation of the cell nucleus is required. We introduce a non-rigid multi-frame registration approach for live cell fluorescence microscopy image data. Compared to existing approaches using pairwise registration, our approach exploits information from multiple consecutive images simultaneously to improve the registration accuracy. We present three intensity-based variants of the multi-frame registration approach and we investigate two different temporal weighting schemes. The approach has been successfully applied to synthetic and live cell microscopy image sequences, and an experimental comparison with non-rigid pairwise registration has been carried out. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Pointing at targets by children with congenital and transient blindness.

    PubMed

    Gaunet, Florence; Ittyerah, Miriam; Rossetti, Yves

    2007-04-01

    The study investigated pointing at memorized targets in reachable space in congenitally blind (CB) and blindfolded sighted (BS) children (6, 8, 10 and 12 years; ten children in each group). The target locations were presented on a sagittal plane by passive positioning of the left index finger. A go signal for matching the target location with the right index finger was provided 0 or 4 s after demonstration. An age effect was found only for absolute distance errors and the surface area of pointing was smaller for the CB children. Results indicate that early visual experience and age are not predictive factors for pointing in children. The delay was an important factor at all ages and for both groups, indicating distinct spatial representations such as egocentric and allocentric frames of reference, for immediate and delayed pointing, respectively. Therefore, the CB like the BS children are able to use both ego- and allocentric frames of reference.

  9. SeaTouch: A Haptic and Auditory Maritime Environment for Non Visual Cognitive Mapping of Blind Sailors

    NASA Astrophysics Data System (ADS)

    Simonnet, Mathieu; Jacobson, Dan; Vieilledent, Stephane; Tisseau, Jacques

    Navigating consists of coordinating egocentric and allocentric spatial frames of reference. Virtual environments have provided researchers in the spatial community with tools to investigate the learning of space. The issue of the transfer between virtual and real situations is not trivial. A central question is the role of frames of reference in mediating spatial knowledge transfer to external surroundings, as is the effect of different sensory modalities accessed in simulated and real worlds. This challenges the capacity of blind people to use virtual reality to explore a scene without graphics. The present experiment involves a haptic and auditory maritime virtual environment. In triangulation tasks, we measure systematic errors, and preliminary results show an ability to learn configurational knowledge and to navigate through it without vision. Subjects appeared to take advantage of getting lost in an egocentric "haptic" view in the virtual environment to improve performance in the real environment.

  10. An Embodied Multi-Sensor Fusion Approach to Visual Motion Estimation Using Unsupervised Deep Networks.

    PubMed

    Shamwell, E Jared; Nothwang, William D; Perlis, Donald

    2018-05-04

    Aimed at improving size, weight, and power (SWaP)-constrained robotic vision-aided state estimation, we describe our unsupervised, deep convolutional-deconvolutional sensor fusion network, Multi-Hypothesis DeepEfference (MHDE). MHDE learns to intelligently combine noisy heterogeneous sensor data to predict several probable hypotheses for the dense, pixel-level correspondence between a source image and an unseen target image. We show how our multi-hypothesis formulation provides increased robustness against dynamic, heteroscedastic sensor and motion noise by computing hypothesis image mappings and predictions at 76-357 Hz depending on the number of hypotheses being generated. MHDE fuses noisy, heterogeneous sensory inputs using two parallel, inter-connected architectural pathways and n (1-20 in this work) multi-hypothesis generating sub-pathways to produce n global correspondence estimates between a source and a target image. We evaluated MHDE on the KITTI Odometry dataset and benchmarked it against the vision-only DeepMatching and Deformable Spatial Pyramids algorithms and were able to demonstrate a significant runtime decrease and a performance increase compared to the next-best performing method.

  11. Liquid chromatography with diode array detection combined with spectral deconvolution for the analysis of some diterpene esters in Arabica coffee brew.

    PubMed

    Erny, Guillaume L; Moeenfard, Marzieh; Alves, Arminda

    2015-02-01

    In this manuscript, the separation of kahweol and cafestol esters from Arabica coffee brews was investigated using liquid chromatography with a diode array detector. When detected in conjunction, cafestol and kahweol esters were eluted together, but, after optimization, the kahweol esters could be selectively detected by setting the wavelength at 290 nm to allow their quantification. Such an approach was not possible for the cafestol esters, and spectral deconvolution was used to obtain deconvoluted chromatograms. In each of those chromatograms, the four esters were baseline separated, allowing for the quantification of the eight targeted compounds. Because kahweol esters could be quantified either using the chromatogram obtained by setting the wavelength at 290 nm or using the deconvoluted chromatogram, those compounds were used to compare the analytical performances. Slightly better limits of detection were obtained using the deconvoluted chromatogram. Identical concentrations were found in a real sample with both approaches. The peak areas in the deconvoluted chromatograms were repeatable (intraday repeatability of 0.8%, interday repeatability of 1.0%). This work demonstrates the accuracy of spectral deconvolution when using liquid chromatography to mathematically separate coeluting compounds using the full spectra recorded by a diode array detector. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
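
    Mathematically, this kind of spectral deconvolution amounts to unmixing each time point's diode-array spectrum against reference spectra of the coeluting compounds by least squares, yielding one deconvoluted chromatogram per component. A sketch; the matrix shapes and names are assumptions.

    ```python
    import numpy as np

    def deconvolved_chromatograms(D, S):
        """D: (time, wavelength) DAD data; S: (wavelength, k) pure-component spectra.
        Returns C: (time, k), one deconvoluted elution profile per component."""
        C, *_ = np.linalg.lstsq(S, D.T, rcond=None)   # solve S @ C.T = D.T
        return C.T
    ```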

  12. Photon-efficient super-resolution laser radar

    NASA Astrophysics Data System (ADS)

    Shin, Dongeek; Shapiro, Jeffrey H.; Goyal, Vivek K.

    2017-08-01

    The resolution achieved in photon-efficient active optical range imaging systems can be low due to non-idealities such as propagation through a diffuse scattering medium. We propose a constrained optimization-based framework to address extremes in scarcity of photons and blurring by a forward imaging kernel. We provide two algorithms for the resulting inverse problem: a greedy algorithm, inspired by sparse pursuit algorithms; and a convex optimization heuristic that incorporates image total variation regularization. We demonstrate that our framework outperforms existing deconvolution imaging techniques in terms of peak signal-to-noise ratio. Since our proposed method is able to super-resolve depth features using small numbers of photon counts, it can be useful for observing fine-scale phenomena in remote sensing through a scattering medium and through-the-skin biomedical imaging applications.

  13. Locating and Quantifying Broadband Fan Sources Using In-Duct Microphones

    NASA Technical Reports Server (NTRS)

    Dougherty, Robert P.; Walker, Bruce E.; Sutliff, Daniel L.

    2010-01-01

    In-duct beamforming techniques have been developed for locating broadband noise sources on a low-speed fan and quantifying the acoustic power in the inlet and aft fan ducts. The NASA Glenn Research Center's Advanced Noise Control Fan was used as a test bed. Several of the blades were modified to provide a broadband source to evaluate the efficacy of the in-duct beamforming technique. Phased arrays consisting of rings and line arrays of microphones were employed. For the imaging, the data were mathematically resampled in the frame of reference of the rotating fan. For both the imaging and power measurement steps, array steering vectors were computed using annular duct modal expansions, selected subsets of the cross spectral matrix elements were used, and the DAMAS and CLEAN-SC deconvolution algorithms were applied.

  14. Pre-Placement Program for Severely Multi-Handicapped Deaf-Blind Children, 1980-1981. Final Evaluation Report.

    ERIC Educational Resources Information Center

    Tobias, Robert; And Others

    Evaluation of the sixth and final year of operation for a preplacement program for 13 severely multiply handicapped deaf blind children, located in the Industrial Home for the Blind, is reported. The program is explained to prepare students for entrance into their existing special education programs. Qualitative findings on the physical setting,…

  15. The 2013 National Child Count of Children and Youth Who Are Deaf-Blind

    ERIC Educational Resources Information Center

    National Center on Deaf-Blindness, 2014

    2014-01-01

    The National Child Count of Children and Youth who are Deaf-Blind is the first and longest running registry and knowledge base of children who are deaf-blind in the world. It has been collaboratively designed, implemented and revised to serve as the common vehicle to meet federal grant requirements for both the State/Multi-State and National…

  16. Calibration of Wide-Field Deconvolution Microscopy for Quantitative Fluorescence Imaging

    PubMed Central

    Lee, Ji-Sook; Wee, Tse-Luen (Erika); Brown, Claire M.

    2014-01-01

    Deconvolution enhances contrast in fluorescence microscopy images, especially in low-contrast, high-background wide-field microscope images, improving characterization of features within the sample. Deconvolution can also be combined with other imaging modalities, such as confocal microscopy, and most software programs seek to improve resolution as well as contrast. Quantitative image analyses require instrument calibration and with deconvolution, necessitate that this process itself preserves the relative quantitative relationships between fluorescence intensities. To ensure that the quantitative nature of the data remains unaltered, deconvolution algorithms need to be tested thoroughly. This study investigated whether the deconvolution algorithms in AutoQuant X3 preserve relative quantitative intensity data. InSpeck Green calibration microspheres were prepared for imaging, z-stacks were collected using a wide-field microscope, and the images were deconvolved using the iterative deconvolution algorithms with default settings. Afterwards, the mean intensities and volumes of microspheres in the original and the deconvolved images were measured. Deconvolved data sets showed higher average microsphere intensities and smaller volumes than the original wide-field data sets. In original and deconvolved data sets, intensity means showed linear relationships with the relative microsphere intensities given by the manufacturer. Importantly, upon normalization, the trend lines were found to have similar slopes. In original and deconvolved images, the volumes of the microspheres were quite uniform for all relative microsphere intensities. We were able to show that AutoQuant X3 deconvolution software data are quantitative. In general, the protocol presented can be used to calibrate any fluorescence microscope or image processing and analysis procedure. PMID:24688321

  17. Application of an improved maximum correlated kurtosis deconvolution method for fault diagnosis of rolling element bearings

    NASA Astrophysics Data System (ADS)

    Miao, Yonghao; Zhao, Ming; Lin, Jing; Lei, Yaguo

    2017-08-01

    The extraction of periodic impulses, which are important indicators of rolling bearing faults, from vibration signals is of considerable significance for fault diagnosis. Maximum correlated kurtosis deconvolution (MCKD), developed from minimum entropy deconvolution (MED), has been proven an efficient tool for enhancing the periodic impulses in the diagnosis of rolling element bearings and gearboxes. However, challenges still exist when MCKD is applied to bearings operating under harsh working conditions. The difficulties mainly come from the rigorous requirements on the multiple input parameters and the complicated resampling process. To overcome these limitations, an improved MCKD (IMCKD) is presented in this paper. The new method estimates the iterative period by calculating the autocorrelation of the envelope signal rather than relying on a provided prior period. Moreover, the iterative period gradually approaches the true fault period by being updated after every iteration. Since IMCKD is unaffected by impulse signals with high kurtosis values, the new method selects the maximum-kurtosis filtered signal as the final choice from all candidates within the assigned iteration count. Compared with MCKD, IMCKD has three advantages. First, without requiring a prior period or the choice of the shift order, IMCKD is more efficient and more robust. Second, the resampling process is not necessary for IMCKD, which greatly simplifies subsequent frequency spectrum and envelope spectrum analysis without resetting the sampling rate. Third, IMCKD has a significant performance advantage in diagnosing compound bearing faults, which expands its application range. Finally, the effectiveness and superiority of IMCKD are validated on a number of simulated bearing fault signals and by application to compound-fault and single-fault diagnosis of a locomotive bearing.
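
    The envelope-autocorrelation period estimate at the heart of IMCKD can be sketched in a few lines: take the Hilbert envelope of the filtered signal and read the period off the first dominant autocorrelation peak. The minimum-lag guard against the zero-lag peak is an assumed detail, not taken from the paper.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def estimate_period(sig, min_lag=10):
        """Fault period estimate (in samples) from the envelope autocorrelation."""
        env = np.abs(hilbert(sig))                   # Hilbert envelope
        env = env - env.mean()
        ac = np.correlate(env, env, mode="full")[env.size - 1:]  # lags >= 0
        ac /= ac[0]                                  # normalize by zero lag
        return min_lag + int(np.argmax(ac[min_lag:]))
    ```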

  18. Solid-state framing camera with multiple time frames

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, K. L.; Stewart, R. E.; Steele, P. T.

    2013-10-07

    A high speed solid-state framing camera has been developed which can operate over a wide range of photon energies. This camera measures the two-dimensional spatial profile of the flux incident on a cadmium selenide semiconductor at multiple times. This multi-frame camera has been tested at 3.1 eV and 4.5 keV. The framing camera currently records two frames with a temporal separation of 5 ps, but this separation can be varied from hundreds of femtoseconds up to nanoseconds, and the number of frames can be increased by angularly multiplexing the probe beam onto the cadmium selenide semiconductor.

  19. Gamma-Ray Simulated Spectrum Deconvolution of a LaBr₃ 1-in. x 1-in. Scintillator for Nondestructive ATR Fuel Burnup On-Site Predictions

    DOE PAGES

    Navarro, Jorge; Ring, Terry A.; Nigg, David W.

    2015-03-01

    A deconvolution method for a LaBr₃ 1-in. x 1-in. detector for nondestructive Advanced Test Reactor (ATR) fuel burnup applications was developed. The method consisted of obtaining the detector response function, applying a deconvolution algorithm to 1-in. x 1-in. LaBr₃ simulated data, and evaluating the effects that deconvolution has on nondestructively determining ATR fuel burnup. The simulated response function of the detector was obtained using MCNPX as well as with experimental data. The Maximum-Likelihood Expectation Maximization (MLEM) deconvolution algorithm was selected to enhance one-isotope source-simulated and fuel-simulated spectra. The final evaluation of the study consisted of measuring the performance of the fuel burnup calibration curve for the convolved and deconvolved cases. The methodology was developed to help design a reliable, high-resolution, rugged, and robust detection system for the ATR fuel canal capable of collecting high-performance data for model validation, along with a system that can calculate burnup using experimental scintillator detector data.
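
    The MLEM update itself is compact: multiply the current estimate by the back-projected ratio of measured to modeled counts. A sketch follows; the response matrix R here is a stand-in for the MCNPX-derived response function, and the flat initialization is an assumption.

    ```python
    import numpy as np

    def mlem(y, R, n_iter=100):
        """MLEM spectrum deconvolution.
        y: measured spectrum (nonnegative counts); R: (channels, energy bins)."""
        x = np.full(R.shape[1], y.sum() / R.shape[1])  # flat start, right scale
        sens = R.sum(axis=0)                           # sensitivity, R^T 1
        for _ in range(n_iter):
            model = R @ x                              # forward-folded spectrum
            ratio = np.where(model > 0, y / model, 0.0)
            x *= (R.T @ ratio) / np.maximum(sens, 1e-12)
        return x
    ```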

  20. Seismic interferometry by multidimensional deconvolution as a means to compensate for anisotropic illumination

    NASA Astrophysics Data System (ADS)

    Wapenaar, K.; van der Neut, J.; Ruigrok, E.; Draganov, D.; Hunziker, J.; Slob, E.; Thorbecke, J.; Snieder, R.

    2008-12-01

    It is well-known that under specific conditions the crosscorrelation of wavefields observed at two receivers yields the impulse response between these receivers. This principle is known as 'Green's function retrieval' or 'seismic interferometry'. Recently it has been recognized that in many situations it can be advantageous to replace the correlation process by deconvolution. One of the advantages is that deconvolution compensates for the waveform emitted by the source; another advantage is that it is not necessary to assume that the medium is lossless. The approaches that have been developed to date employ a 1D deconvolution process. We propose a method for seismic interferometry by multidimensional deconvolution and show that under specific circumstances the method compensates for irregularities in the source distribution. This is an important difference with crosscorrelation methods, which rely on the condition that waves are equipartitioned. This condition is for example fulfilled when the sources are regularly distributed along a closed surface and the power spectra of the sources are identical. The proposed multidimensional deconvolution method compensates for anisotropic illumination, without requiring knowledge about the positions and the spectra of the sources.
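
    The contrast between the two estimates can be sketched for a single receiver pair: crosscorrelation leaves the source power spectrum in the result, while regularized spectral division (the 1D deconvolution case mentioned above) removes it. The stabilization level eps is an assumption.

    ```python
    import numpy as np

    def interferometry(u_a, u_b, eps=0.01):
        """Green's-function estimates between receivers A and B."""
        A, B = np.fft.rfft(u_a), np.fft.rfft(u_b)
        g_corr = np.fft.irfft(B * np.conj(A), n=u_a.size)   # ~ G * |S(f)|^2
        denom = np.abs(A)**2 + eps * np.mean(np.abs(A)**2)  # stabilized |A|^2
        g_dec = np.fft.irfft(B * np.conj(A) / denom, n=u_a.size)  # ~ G
        return g_corr, g_dec
    ```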

  1. Composite Characterization Using Laser Doppler Vibrometry and Multi-Frequency Wavenumber Analysis

    NASA Technical Reports Server (NTRS)

    Juarez, Peter; Leckey, Cara

    2015-01-01

    NASA has recognized the need for better characterization of composite materials to support advances in aeronautics and the next generation of space exploration vehicles. An area of related research is the evaluation of impact-induced delaminations. Presented is a non-contact method of measuring the ply depth of impact delamination damage in a composite through use of a Scanning Laser Doppler Vibrometer (SLDV), multi-frequency wavenumber analysis, and a wavenumber-ply correlation algorithm. A single acquisition of a chirp-excited Lamb wavefield in an impacted composite is post-processed into numerous single-frequency excitation wavefields through a deconvolution process. A spatially windowed wavenumber analysis then extracts local wavenumbers from the wavefield, which are correlated to theoretical dispersion curves for ply-depth determination. SLDV-based methods to characterize as-manufactured composite variation using wavefield analysis will also be discussed.
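
    The chirp-to-single-frequency post-processing step can be sketched as an FFT along time followed by spectral division by the chirp excitation, leaving one complex 2-D wavefield per frequency bin. The array shapes and regularization are assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def single_frequency_wavefields(wavefield, chirp, fs, eps=1e-3):
        """wavefield: (ny, nx, nt) SLDV time records; chirp: (nt,) excitation."""
        W = np.fft.rfft(wavefield, axis=-1)          # per-point spectra
        C = np.fft.rfft(chirp)                       # excitation spectrum
        H = W * np.conj(C) / (np.abs(C)**2 + eps * np.max(np.abs(C))**2)
        freqs = np.fft.rfftfreq(wavefield.shape[-1], d=1/fs)
        return freqs, H        # H[..., i] is the wavefield at freqs[i]
    ```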

  2. Iterative and function-continuation Fourier deconvolution methods for enhancing mass spectrometer resolution

    NASA Technical Reports Server (NTRS)

    Ioup, J. W.; Ioup, G. E.; Rayborn, G. H., Jr.; Wood, G. M., Jr.; Upchurch, B. T.

    1984-01-01

    Mass spectrometer data in the form of ion current versus mass-to-charge ratio often include overlapping mass peaks, especially in low- and medium-resolution instruments. Numerical deconvolution of such data effectively enhances the resolution by decreasing the overlap of mass peaks. In this paper two approaches to deconvolution are presented: a function-domain iterative technique and a Fourier transform method which uses transform-domain function-continuation. Both techniques include data smoothing to reduce the sensitivity of the deconvolution to noise. The efficacy of these methods is demonstrated through application to representative mass spectrometer data and the deconvolved results are discussed and compared to data obtained from a spectrometer with sufficient resolution to achieve separation of the mass peaks studied. A case for which the deconvolution is seriously affected by Gibbs oscillations is analyzed.
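
    The abstract does not spell out the function-domain iteration; the classic Van Cittert scheme below is a representative sketch of that family, repeatedly adding back the residual between the data and the re-blurred estimate. The relaxation factor alpha and iteration count are illustrative choices.

    ```python
    import numpy as np

    def van_cittert(y, psf, n_iter=50, alpha=0.5):
        """Iterative deconvolution: x <- x + alpha * (y - psf * x)."""
        x = y.copy()
        for _ in range(n_iter):
            reblur = np.convolve(x, psf, mode="same")  # re-apply the peak shape
            x = x + alpha * (y - reblur)               # correct by the residual
        return x
    ```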

  3. Mission Capability Gains from Multi-Mode Propulsion Thrust Profile Variations for a Plane Change Maneuver

    DTIC Science & Technology

    2010-12-29

    [Nomenclature extracted from the report:] propellant mass [kg]; msc = mass of the spacecraft [kg]; MMP = multi-mode propulsion; position in the Geocentric Equatorial Reference... thrust burn time [s]; Tsc = thrust of the spacecraft [N]; vector between current and final velocity vector; velocity vector in the Geocentric Equatorial Reference Frame of spacecraft in intended orbit [km/s]; velocity vector in the Geocentric Equatorial Reference Frame of spacecraft in...

  4. Ultra-fast high-resolution hybrid and monolithic CMOS imagers in multi-frame radiography

    NASA Astrophysics Data System (ADS)

    Kwiatkowski, Kris; Douence, Vincent; Bai, Yibin; Nedrow, Paul; Mariam, Fesseha; Merrill, Frank; Morris, Christopher L.; Saunders, Andy

    2014-09-01

    A new burst-mode, 10-frame, hybrid Si-sensor/CMOS-ROIC FPA chip has been recently fabricated at Teledyne Imaging Sensors. The intended primary use of the sensor is in multi-frame 800 MeV proton radiography at LANL. The basic part of the hybrid is a large (48×49 mm²) stitched CMOS chip of 1100×1100 pixel count, with a minimum shutter speed of 50 ns. The performance parameters of this chip are compared to the first-generation 3-frame 0.5-Mpixel custom hybrid imager. The 3-frame cameras have been in continuous use for many years in a variety of static and dynamic experiments at LANSCE. The cameras can operate with a per-frame adjustable integration time of ~120 ns to 1 s, and an inter-frame time of 250 ns to 2 s. Given the 80 ms total readout time, the original and the new imagers can be externally synchronized to 0.1-to-5 Hz, 50-ns-wide proton beam pulses, and record up to ~1000-frame radiographic movies, typically of 3- to 30-minute duration. The performance of the global electronic shutter is discussed and compared to that of a high-resolution commercial front-illuminated monolithic CMOS imager.

  5. Multi-volumetric registration and mosaicking using swept-source spectrally encoded scanning laser ophthalmoscopy and optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Bozic, Ivan; El-Haddad, Mohamed T.; Malone, Joseph D.; Joos, Karen M.; Patel, Shriji N.; Tao, Yuankai K.

    2017-02-01

    Ophthalmic diagnostic imaging using optical coherence tomography (OCT) is limited by bulk eye motions and a fundamental trade-off between field-of-view (FOV) and sampling density. Here, we introduced a novel multi-volumetric registration and mosaicking method using our previously described multimodal swept-source spectrally encoded scanning laser ophthalmoscopy and OCT (SS-SESLO-OCT) system. Our SS-SESLO-OCT acquires an entire en face fundus SESLO image simultaneously with every OCT cross-section at 200 frames-per-second. In vivo human retinal imaging was performed in a healthy volunteer, and three volumetric datasets were acquired with the volunteer moving freely and refixating between each acquisition. In post-processing, SESLO frames were used to estimate en face rotational and translational motions by registering every frame in all three volumetric datasets to the first frame in the first volume. OCT cross-sections were contrast-normalized and registered axially and rotationally across all volumes. Rotational and translational motions calculated from SESLO frames were applied to corresponding OCT B-scans to compensate for inter- and intra-B-scan bulk motions, and the three registered volumes were combined into a single interpolated multi-volumetric mosaic. Using complementary information from SESLO and OCT over serially acquired volumes, we demonstrated multi-volumetric registration and mosaicking to recover regions of missing data resulting from blinks, saccades, and ocular drifts. We believe our registration method can be directly applied for multi-volumetric motion compensation, averaging, widefield mosaicking, and vascular mapping with potential applications in ophthalmic clinical diagnostics, handheld imaging, and intraoperative guidance.

  6. Rapidly converging multigrid reconstruction of cone-beam tomographic data

    NASA Astrophysics Data System (ADS)

    Myers, Glenn R.; Kingston, Andrew M.; Latham, Shane J.; Recur, Benoit; Li, Thomas; Turner, Michael L.; Beeching, Levi; Sheppard, Adrian P.

    2016-10-01

    In the context of large-angle cone-beam tomography (CBCT), we present a practical iterative reconstruction (IR) scheme designed for rapid convergence as required for large datasets. The robustness of the reconstruction is provided by the "space-filling" source trajectory along which the experimental data is collected. The speed of convergence is achieved by leveraging the highly isotropic nature of this trajectory to design an approximate deconvolution filter that serves as a pre-conditioner in a multi-grid scheme. We demonstrate this IR scheme for CBCT and compare convergence to that of more traditional techniques.

  7. Deconvolution Methods for Multi-Detectors

    DTIC Science & Technology

    1989-08-30

    in [7]. We will sometimes say that the family of distributions μ1, ..., μm is strongly coprime. It might be useful to explain why (4) is called a ...form g in the variable ..., given by (11) ... Given a family of m entire holomorphic functions f1, ..., fm, its zero set Z is defined ... Recall that the coefficients gi are holomorphic in both z and t. Let F be the vector-valued holomorphic function F = (f1, ..., fm) ...

  8. Parsimonious Charge Deconvolution for Native Mass Spectrometry

    PubMed Central

    2018-01-01

    Charge deconvolution infers the mass from mass over charge (m/z) measurements in electrospray ionization mass spectra. When applied over a wide input m/z or broad target mass range, charge-deconvolution algorithms can produce artifacts, such as false masses at one-half or one-third of the correct mass. Indeed, a maximum entropy term in the objective function of MaxEnt, the most commonly used charge deconvolution algorithm, favors a deconvolved spectrum with many peaks over one with fewer peaks. Here we describe a new “parsimonious” charge deconvolution algorithm that produces fewer artifacts. The algorithm is especially well-suited to high-resolution native mass spectrometry of intact glycoproteins and protein complexes. Deconvolution of native mass spectra poses special challenges due to salt and small molecule adducts, multimers, wide mass ranges, and fewer and lower charge states. We demonstrate the performance of the new deconvolution algorithm on a range of samples. On the heavily glycosylated plasma properdin glycoprotein, the new algorithm could deconvolve monomer and dimer simultaneously and, when focused on the m/z range of the monomer, gave accurate and interpretable masses for glycoforms that had previously been analyzed manually using m/z peaks rather than deconvolved masses. On therapeutic antibodies, the new algorithm facilitated the analysis of extensions, truncations, and Fab glycosylation. The algorithm facilitates the use of native mass spectrometry for the qualitative and quantitative analysis of protein and protein assemblies. PMID:29376659
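
    The arithmetic underlying charge deconvolution is compact enough to sketch; the Python snippet below is an illustration of the basic relations, not the parsimonious algorithm itself. It shows how neutral-mass candidates are enumerated from an m/z peak and why a mis-assigned charge produces exactly the half- or third-mass artifacts described above:

    ```python
    import numpy as np

    PROTON = 1.007276  # Da

    def candidate_masses(mz, max_charge=50):
        """Neutral-mass candidates for one peak: M = z * (m/z - m_proton).
        Halving the assigned charge z halves the inferred mass M, which is
        the origin of the false masses at M/2 or M/3 mentioned above."""
        return {z: z * (mz - PROTON) for z in range(1, max_charge + 1)}

    def charge_from_adjacent_peaks(mz_hi, mz_lo):
        """Two adjacent peaks of one species (charges z and z+1) pin down z:
        z*(mz_hi - p) = (z+1)*(mz_lo - p)  =>  z = (mz_lo - p)/(mz_hi - mz_lo)."""
        return round((mz_lo - PROTON) / (mz_hi - mz_lo))
    ```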

  9. Broadband ion mobility deconvolution for rapid analysis of complex mixtures.

    PubMed

    Pettit, Michael E; Brantley, Matthew R; Donnarumma, Fabrizio; Murray, Kermit K; Solouki, Touradj

    2018-05-04

    High resolving power ion mobility (IM) allows for accurate characterization of complex mixtures in high-throughput IM mass spectrometry (IM-MS) experiments. We previously demonstrated that pure component IM-MS data can be extracted from IM unresolved post-IM/collision-induced dissociation (CID) MS data using automated ion mobility deconvolution (AIMD) software [Matthew Brantley, Behrooz Zekavat, Brett Harper, Rachel Mason, and Touradj Solouki, J. Am. Soc. Mass Spectrom., 2014, 25, 1810-1819]. In our previous reports, we utilized a quadrupole ion filter for m/z-isolation of IM unresolved monoisotopic species prior to post-IM/CID MS. Here, we utilize a broadband IM-MS deconvolution strategy to remove the m/z-isolation requirement for successful deconvolution of IM unresolved peaks. Broadband data collection has throughput and multiplexing advantages; hence, elimination of the ion isolation step reduces experimental run times and thus expands the applicability of AIMD to high-throughput bottom-up proteomics. We demonstrate broadband IM-MS deconvolution of two separate and unrelated pairs of IM unresolved isomers (viz., a pair of isomeric hexapeptides and a pair of isomeric trisaccharides) in a simulated complex mixture. Moreover, we show that broadband IM-MS deconvolution improves high-throughput bottom-up characterization of a proteolytic digest of rat brain tissue. To our knowledge, this manuscript is the first to report successful deconvolution of pure component IM and MS data from an IM-assisted data-independent analysis (DIA) or HDMSE dataset.

  10. Oscillatory activity reflects differential use of spatial reference frames by sighted and blind individuals in tactile attention.

    PubMed

    Schubert, Jonathan T W; Buchholz, Verena N; Föcker, Julia; Engel, Andreas K; Röder, Brigitte; Heed, Tobias

    2015-08-15

    Touch can be localized either on the skin in anatomical coordinates, or, after integration with posture, in external space. Sighted individuals are thought to encode touch in both coordinate systems concurrently, whereas congenitally blind individuals exhibit a strong bias for using anatomical coordinates. We investigated the neural correlates of this differential dominance in the use of anatomical and external reference frames by assessing oscillatory brain activity during a tactile spatial attention task. The EEG was recorded while sighted and congenitally blind adults received tactile stimulation to uncrossed and crossed hands while detecting rare tactile targets at one cued hand only. In the sighted group, oscillatory alpha-band activity (8-12 Hz) in the cue-target interval was reduced contralaterally and enhanced ipsilaterally with uncrossed hands. Hand crossing attenuated the degree of posterior parietal alpha-band lateralization, indicating that attention deployment was affected by external spatial coordinates. Beamforming suggested that this posture effect originated in the posterior parietal cortex. In contrast, cue-related lateralization of central alpha-band as well as of beta-band activity (16-24 Hz) were unaffected by hand crossing, suggesting that these oscillations exclusively encode anatomical coordinates. In the blind group, central alpha-band activity was lateralized, but did not change across postures. The pattern of beta-band activity was indistinguishable between groups. Because the neural mechanisms for posterior alpha-band generation seem to be linked to developmental vision, we speculate that the lack of this neural mechanism in blind individuals is related to their preferred use of anatomical over external spatial codes in sensory processing. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. Multi-layer laminate structure and manufacturing method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keenihan, James R; Cleereman, Robert J; Eurich, Gerald

    2012-04-24

    The present invention is premised upon a multi-layer laminate structure and method of manufacture, more particularly to a method of constructing the multi-layer laminate structure utilizing a laminate frame and at least one energy activated flowable polymer.

  12. Multi-layer laminate structure and manufacturing method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keenihan, James R.; Cleereman, Robert J.; Eurich, Gerald

    2013-01-29

    The present invention is premised upon a multi-layer laminate structure and method of manufacture, more particularly to a method of constructing the multi-layer laminate structure utilizing a laminate frame and at least one energy activated flowable polymer.

  13. UAV remote sensing atmospheric degradation image restoration based on multiple scattering APSF estimation

    NASA Astrophysics Data System (ADS)

    Qiu, Xiang; Dai, Ming; Yin, Chuan-li

    2017-09-01

    Unmanned aerial vehicle (UAV) remote sensing imagery is degraded by bad weather, and the resulting images suffer from low contrast, complex texture, and blurring. In this paper, we propose a blind deconvolution model based on multiple-scattering atmosphere point spread function (APSF) estimation to recover the remote sensing image. Following Narasimhan's analytical theory, a new multiple-scattering restoration model is established based on the improved dichromatic model. The APSF blur kernel is then estimated using L0-norm sparse priors on the image gradient and dark channel, and the original clear image is recovered by Wiener filtering via the fast Fourier transform. Compared with other state-of-the-art methods, the proposed method correctly estimates the blur kernel, effectively removes atmospheric degradation, preserves image detail, and improves quantitative quality metrics.
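
    The final recovery step, Wiener filtering in the Fourier domain, can be sketched generically in Python (a sketch under the assumption that the estimated APSF kernel has been padded to the image size and centered; the noise-to-signal ratio `nsr` is a tuning assumption, not a value from the paper):

    ```python
    import numpy as np

    def wiener_deconvolve(blurred, psf, nsr=1e-3):
        """Wiener filter: X = conj(H) * Y / (|H|^2 + nsr) in Fourier space.
        `psf` must have the same shape as `blurred` with its peak at the
        array centre; ifftshift moves the kernel origin to pixel (0, 0)."""
        H = np.fft.fft2(np.fft.ifftshift(psf))
        Y = np.fft.fft2(blurred)
        X = np.conj(H) * Y / (np.abs(H) ** 2 + nsr)
        return np.fft.ifft2(X).real
    ```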

  14. Statistics of intensity in adaptive-optics images and their usefulness for detection and photometry of exoplanets.

    PubMed

    Gladysz, Szymon; Yaitskova, Natalia; Christou, Julian C

    2010-11-01

    This paper is an introduction to the problem of modeling the probability density function of adaptive-optics speckle. We show that with the modified Rician distribution one cannot describe the statistics of light on axis. A dual solution is proposed: the modified Rician distribution for off-axis speckle and gamma-based distribution for the core of the point spread function. From these two distributions we derive optimal statistical discriminators between real sources and quasi-static speckles. In the second part of the paper the morphological difference between the two probability density functions is used to constrain a one-dimensional, "blind," iterative deconvolution at the position of an exoplanet. Separation of the probability density functions of signal and speckle yields accurate differential photometry in our simulations of the SPHERE planet finder instrument.
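
    For reference, a sketch of the modified Rician density as it is usually written in the adaptive-optics speckle literature (our reading of the standard form, with Ic the deterministic and Is the speckle contribution; the exponentially scaled Bessel function is used for numerical stability):

    ```python
    import numpy as np
    from scipy.special import i0e  # exp(-|x|) * I0(x), overflow-safe

    def modified_rician_pdf(I, Ic, Is):
        """p(I) = (1/Is) * exp(-(I + Ic)/Is) * I0(2*sqrt(I*Ic)/Is)."""
        x = 2.0 * np.sqrt(I * Ic) / Is
        return np.exp(x - (I + Ic) / Is) * i0e(x) / Is
    ```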

  15. Least-squares (LS) deconvolution of a series of overlapping cortical auditory evoked potentials: a simulation and experimental study

    NASA Astrophysics Data System (ADS)

    Bardy, Fabrice; Van Dun, Bram; Dillon, Harvey; Cowan, Robert

    2014-08-01

    Objective. To evaluate the viability of disentangling a series of overlapping ‘cortical auditory evoked potentials’ (CAEPs) elicited by different stimuli using least-squares (LS) deconvolution, and to assess the adaptation of CAEPs for different stimulus onset-asynchronies (SOAs). Approach. Optimal aperiodic stimulus sequences were designed by controlling the condition number of matrices associated with the LS deconvolution technique. First, theoretical considerations of LS deconvolution were assessed in simulations in which multiple artificial overlapping responses were recovered. Second, biological CAEPs were recorded in response to continuously repeated stimulus trains containing six different tone-bursts with frequencies of 8, 4, 2, 1, 0.5 and 0.25 kHz, separated by SOAs jittered around 150 (120-185), 250 (220-285) and 650 (620-685) ms. The control condition had a fixed SOA of 1175 ms. In a second condition, using the same SOAs, trains of six stimuli were separated by a silence gap of 1600 ms. Twenty-four adults with normal hearing (<20 dB HL) were assessed. Main results. Results showed disentangling of a series of overlapping responses using LS deconvolution on simulated waveforms as well as on real EEG data. The use of rapid presentation and LS deconvolution did not, however, allow the recovered CAEPs to have a higher signal-to-noise ratio than for slowly presented stimuli. The LS deconvolution technique enables the analysis of a series of overlapping responses in EEG. Significance. LS deconvolution is a useful technique for the study of adaptation mechanisms of CAEPs for closely spaced stimuli whose characteristics change from stimulus to stimulus. High-rate presentation is necessary to develop an understanding of how the auditory system encodes natural speech or other intrinsically high-rate stimuli.
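
    The core of the LS technique can be sketched as an ordinary least-squares solve against a binary convolution matrix built from the stimulus onsets; controlling the condition number of this matrix is what the sequence optimization achieves. The Python sketch below handles a single response class (the study recovers six, which would block-concatenate additional columns); all names are illustrative:

    ```python
    import numpy as np

    def design_matrix(onsets, response_len, record_len):
        """A such that record = A @ response for one repeated response."""
        A = np.zeros((record_len, response_len))
        for t0 in onsets:
            j_max = max(0, min(response_len, record_len - t0))
            A[t0:t0 + j_max, :j_max] += np.eye(j_max)
        return A

    def ls_deconvolve(record, onsets, response_len):
        A = design_matrix(onsets, response_len, len(record))
        response, *_ = np.linalg.lstsq(A, record, rcond=None)
        # cond(A) bounds how much noise the recovery amplifies
        return response, np.linalg.cond(A)
    ```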

  16. New deconvolution method for microscopic images based on the continuous Gaussian radial basis function interpolation model.

    PubMed

    Chen, Zhaoxue; Chen, Hao

    2014-01-01

    A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed in the same continuous GRBF model, so image degradation reduces to the convolution of two continuous Gaussian functions, and image deconvolution reduces to calculating the weighted coefficients of the two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of the restored images. To overcome the drawback of long computation times, GPU multithreading or an increased spacing between control points is adopted to speed up the implementation of the GRBF method. The experiments show that image deconvolution based on the continuous GRBF model can be implemented efficiently, and the method also provides a useful reference for the study of three-dimensional microscopic image deconvolution.
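
    The simplification rests on the closure of Gaussians under convolution, which is worth stating explicitly:

    ```latex
    \[
      G_{\sigma_1} * G_{\sigma_2} = G_{\sqrt{\sigma_1^{2} + \sigma_2^{2}}},
      \qquad
      G_{\sigma}(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\,
                      \exp\!\left(-\frac{x^{2}}{2\sigma^{2}}\right),
    \]
    ```

    so convolving a GRBF image model with a Gaussian PSF yields another GRBF model whose kernel widths are known in closed form.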

  17. Scalar flux modeling in turbulent flames using iterative deconvolution

    NASA Astrophysics Data System (ADS)

    Nikolaou, Z. M.; Cant, R. S.; Vervisch, L.

    2018-04-01

    In the context of large eddy simulations, deconvolution is an attractive alternative for modeling the unclosed terms appearing in the filtered governing equations. Such methods have been used in a number of studies for non-reacting and incompressible flows; however, their application in reacting flows is limited in comparison. Deconvolution methods originate from clearly defined operations, and in theory they can be used in order to model any unclosed term in the filtered equations including the scalar flux. In this study, an iterative deconvolution algorithm is used in order to provide a closure for the scalar flux term in a turbulent premixed flame by explicitly filtering the deconvoluted fields. The assessment of the method is conducted a priori using a three-dimensional direct numerical simulation database of a turbulent freely propagating premixed flame in a canonical configuration. In contrast to most classical a priori studies, the assessment is more stringent as it is performed on a much coarser mesh which is constructed using the filtered fields as obtained from the direct simulations. For the conditions tested in this study, deconvolution is found to provide good estimates both of the scalar flux and of its divergence.

  18. Motion correction of PET brain images through deconvolution: II. Practical implementation and algorithm optimization

    NASA Astrophysics Data System (ADS)

    Raghunath, N.; Faber, T. L.; Suryanarayanan, S.; Votaw, J. R.

    2009-02-01

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique was studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies to demonstrate marked improvement. Thus, the deconvolution technique presented here appears promising as a valid alternative to existing motion correction methods for PET. It has the potential for deblurring an image from any modality if the causative motion is known and its effect can be represented in a system matrix.

  19. Rapid perfusion quantification using Welch-Satterthwaite approximation and analytical spectral filtering

    NASA Astrophysics Data System (ADS)

    Krishnan, Karthik; Reddy, Kasireddy V.; Ajani, Bhavya; Yalavarthy, Phaneendra K.

    2017-02-01

    CT and MR perfusion weighted imaging (PWI) enable quantification of perfusion parameters in stroke studies. These parameters are calculated from the residual impulse response function (IRF) based on a physiological model for tissue perfusion. The standard approach for estimating the IRF is deconvolution using oscillatory-limited singular value decomposition (oSVD) or Frequency Domain Deconvolution (FDD). FDD is widely recognized as the fastest approach currently available for deconvolution of CT perfusion/MR PWI. In this work, three faster methods are proposed. The first is a direct (model-based) crude approximation to the final perfusion quantities (blood flow, blood volume, mean transit time and delay) using the Welch-Satterthwaite approximation for gamma-fitted concentration time curves (CTC). The second is a fast, accurate deconvolution method, which we call Analytical Fourier Filtering (AFF). The third is another fast, accurate deconvolution technique using Showalter's method, which we call Analytical Showalter's Spectral Filtering (ASSF). Through systematic evaluation on phantom and clinical data, the proposed methods are shown to be computationally more than twice as fast as FDD. The two deconvolution-based methods, AFF and ASSF, are also shown to be quantitatively accurate compared to FDD and oSVD.
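
    A bare-bones frequency-domain deconvolution of a tissue curve by the arterial input function looks as follows (a generic Python sketch, not the AFF/ASSF filters of the paper; the hard spectral cutoff `thresh` stands in for their analytical filters):

    ```python
    import numpy as np

    def fdd(ctc, aif, dt, thresh=0.1):
        """Return BF*R(t) = IFFT[ FFT(ctc) / FFT(aif) * W ] / dt, with W a
        crude filter zeroing bins where |FFT(aif)| is small (noise control)."""
        C, A = np.fft.fft(ctc), np.fft.fft(aif)
        keep = np.abs(A) > thresh * np.abs(A).max()
        R = np.zeros_like(C)
        R[keep] = C[keep] / A[keep]
        r = np.fft.ifft(R).real / dt
        bf, bv = r.max(), r.sum() * dt     # blood flow, blood volume
        return r, bf, bv, bv / bf          # mean transit time = BV / BF
    ```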

  20. Portable lensless wide-field microscopy imaging platform based on digital inline holography and multi-frame pixel super-resolution

    PubMed Central

    Sobieranski, Antonio C; Inci, Fatih; Tekin, H Cumhur; Yuksekkaya, Mehmet; Comunello, Eros; Cobra, Daniel; von Wangenheim, Aldo; Demirci, Utkan

    2017-01-01

    In this paper, an irregular displacement-based lensless wide-field microscopy imaging platform is presented, combining digital in-line holography and computational pixel super-resolution using multi-frame processing. The samples are illuminated by a nearly coherent illumination system, and the hologram shadows are projected onto a complementary metal-oxide-semiconductor (CMOS) imaging sensor. To increase the resolution, a multi-frame pixel super-resolution approach is employed to produce a single holographic image from multiple frame observations of the scene with small planar displacements. Displacements are resolved by a hybrid approach: (i) alignment of the low-resolution (LR) images by a fast feature-based registration method, and (ii) fine adjustment of the sub-pixel information using a continuous optimization approach designed to find the globally optimal solution. A numerical phase-retrieval method is applied to decode the signal and reconstruct the morphological details of the analyzed sample. The presented approach was evaluated with various biological samples, including sperm and platelets, whose dimensions are on the order of a few microns. The obtained results demonstrate a spatial resolution of 1.55 µm over a field-of-view of ≈30 mm2. PMID:29657866

  1. Multi-Handicapped Blind Persons Can Work.

    ERIC Educational Resources Information Center

    Rusalem, Herbert; Richterman, Harold

    The demonstration project assessed an innovative approach to the provision of remunerative work for evaluation, training, and employment purposes in sheltered workshops for 291 blind individuals who also were limited by vocationally significant intellectual, physical, emotional, and/or social disabilities. The multiply handicapped subgroup of the…

  2. ANTONIA perfusion and stroke. A software tool for the multi-purpose analysis of MR perfusion-weighted datasets and quantitative ischemic stroke assessment.

    PubMed

    Forkert, N D; Cheng, B; Kemmling, A; Thomalla, G; Fiehler, J

    2014-01-01

    The objective of this work is to present the software tool ANTONIA, which has been developed to facilitate a quantitative analysis of perfusion-weighted MRI (PWI) datasets in general as well as the subsequent multi-parametric analysis of additional datasets for the specific purpose of acute ischemic stroke patient dataset evaluation. Three different methods for the analysis of DSC or DCE PWI datasets are currently implemented in ANTONIA, which can be case-specifically selected based on the study protocol. These methods comprise a curve fitting method as well as a deconvolution-based and deconvolution-free method integrating a previously defined arterial input function. The perfusion analysis is extended for the purpose of acute ischemic stroke analysis by additional methods that enable an automatic atlas-based selection of the arterial input function, an analysis of the perfusion-diffusion and DWI-FLAIR mismatch as well as segmentation-based volumetric analyses. For reliability evaluation, the described software tool was used by two observers for quantitative analysis of 15 datasets from acute ischemic stroke patients to extract the acute lesion core volume, FLAIR ratio, perfusion-diffusion mismatch volume with manually as well as automatically selected arterial input functions, and follow-up lesion volume. The results of this evaluation revealed that the described software tool leads to highly reproducible results for all parameters if the automatic arterial input function selection method is used. Due to the broad selection of processing methods that are available in the software tool, ANTONIA is especially helpful to support image-based perfusion and acute ischemic stroke research projects.

  3. TIME-DEPENDENT MULTI-GROUP MULTI-DIMENSIONAL RELATIVISTIC RADIATIVE TRANSFER CODE BASED ON SPHERICAL HARMONIC DISCRETE ORDINATE METHOD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tominaga, Nozomu; Shibata, Sanshiro; Blinnikov, Sergei I., E-mail: tominaga@konan-u.ac.jp, E-mail: sshibata@post.kek.jp, E-mail: Sergei.Blinnikov@itep.ru

    We develop a time-dependent, multi-group, multi-dimensional relativistic radiative transfer code, which is required to numerically investigate radiation from relativistic fluids that are involved in, e.g., gamma-ray bursts and active galactic nuclei. The code is based on the spherical harmonic discrete ordinate method (SHDOM) which evaluates a source function including anisotropic scattering in spherical harmonics and implicitly solves the static radiative transfer equation with ray tracing in discrete ordinates. We implement treatments of time dependence, multi-frequency bins, Lorentz transformation, and elastic Thomson and inelastic Compton scattering to the publicly available SHDOM code. Our code adopts a mixed-frame approach; the source function is evaluated in the comoving frame, whereas the radiative transfer equation is solved in the laboratory frame. This implementation is validated using various test problems and comparisons with the results from a relativistic Monte Carlo code. These validations confirm that the code correctly calculates the intensity and its evolution in the computational domain. The code enables us to obtain an Eddington tensor that relates the first and third moments of intensity (energy density and radiation pressure) and is frequently used as a closure relation in radiation hydrodynamics calculations.

  4. Multi-model stereo restitution

    USGS Publications Warehouse

    Dueholm, K.S.

    1990-01-01

    Methods are described that permit simultaneous orientation of many small-frame photogrammetric models in an analytical plotter. The multi-model software program enables the operator to move freely between the oriented models during interpretation and mapping. Models change automatically when the measuring mark is moved from one frame to another, moving to the same ground coordinates in the neighboring model. Thus, data collection and plotting can be performed continuously across model boundaries. The orientation of the models is accomplished by a bundle block adjustment. -from Author

  5. A Quantum Multi-Proxy Weak Blind Signature Scheme Based on Entanglement Swapping

    NASA Astrophysics Data System (ADS)

    Yan, LiLi; Chang, Yan; Zhang, ShiBin; Han, GuiHua; Sheng, ZhiWei

    2017-02-01

    In this paper, we present a multi-proxy weak blind signature scheme based on quantum entanglement swapping of Bell states. In the scheme, proxy signers can complete the signature on behalf of the original signer with his/her authority. It can be applied to electronic voting systems, electronic payment systems, etc. The scheme uses the physical characteristics of quantum mechanics to implement delegation, signing and verification. It guarantees not only unconditional security but also the anonymity of the message owner. The security analysis shows that the scheme satisfies the security requirements of a multi-proxy weak signature: signers cannot disavow their signatures, the signature cannot be forged by others, and the message owner can be traced.

  6. The multi-line slope method for measuring the effective magnetic field of cool stars: an application to the solar-like cycle of ɛ Eri

    NASA Astrophysics Data System (ADS)

    Scalia, C.; Leone, F.; Gangi, M.; Giarrusso, M.; Stift, M. J.

    2017-12-01

    One method for the determination of integrated longitudinal stellar fields from low-resolution spectra is the so-called slope method, which is based on the regression of the Stokes V signal against the first derivative of Stokes I. Here we investigate the possibility of extending this technique to measure the magnetic fields of cool stars from high-resolution spectra. For this purpose we developed a multi-line modification to the slope method, called the multi-line slope method. We tested this technique by analysing synthetic spectra computed with the COSSAM code and real observations obtained with the high-resolution spectropolarimeters Narval, HARPSpol and the Catania Astrophysical Observatory Spectropolarimeter (CAOS). We show that the multi-line slope method is a fast alternative to the least squares deconvolution technique for the measurement of the effective magnetic fields of cool stars. Using a Fourier transform on the effective magnetic field variations of the star ε Eri, we find that the long-term periodicity of the field corresponds to the 2.95-yr period of the stellar dynamo, revealed by the variation of the activity index.

  7. An optimized algorithm for multiscale wideband deconvolution of radio astronomical images

    NASA Astrophysics Data System (ADS)

    Offringa, A. R.; Smirnov, O.

    2017-10-01

    We describe a new multiscale deconvolution algorithm that can also be used in a multifrequency mode. The algorithm only affects the minor clean loop. In single-frequency mode, the minor loop of our improved multiscale algorithm is over an order of magnitude faster than the casa multiscale algorithm, and produces results of similar quality. For multifrequency deconvolution, a technique named joined-channel cleaning is used. In this mode, the minor loop of our algorithm is two to three orders of magnitude faster than casa msmfs. We extend the multiscale mode with automated scale-dependent masking, which allows structures to be cleaned below the noise. We describe a new scale-bias function for use in multiscale cleaning. We test a second deconvolution method that is a variant of the moresane deconvolution technique, and uses a convex optimization technique with isotropic undecimated wavelets as dictionary. On simple well-calibrated data, the convex optimization algorithm produces visually more representative models. On complex or imperfect data, the convex optimization algorithm has stability issues.

  8. Far-Infrared Based Pedestrian Detection for Driver-Assistance Systems Based on Candidate Filters, Gradient-Based Feature and Multi-Frame Approval Matching

    PubMed Central

    Wang, Guohua; Liu, Qiong

    2015-01-01

    Far-infrared pedestrian detection approaches for advanced driver-assistance systems based on high-dimensional features fail to simultaneously achieve robust and real-time detection. We propose a robust and real-time pedestrian detection system characterized by novel candidate filters, novel pedestrian features and multi-frame approval matching in a coarse-to-fine fashion. Firstly, we design two filters based on the pedestrians’ head and the road to select the candidates after applying a pedestrian segmentation algorithm to reduce false alarms. Secondly, we propose a novel feature encapsulating both the relationship of oriented gradient distribution and the code of oriented gradient to deal with the enormous variance in pedestrians’ size and appearance. Thirdly, we introduce a multi-frame approval matching approach utilizing the spatiotemporal continuity of pedestrians to increase the detection rate. Large-scale experiments indicate that the system works in real time and the accuracy has improved about 9% compared with approaches based on high-dimensional features only. PMID:26703611

  9. Far-Infrared Based Pedestrian Detection for Driver-Assistance Systems Based on Candidate Filters, Gradient-Based Feature and Multi-Frame Approval Matching.

    PubMed

    Wang, Guohua; Liu, Qiong

    2015-12-21

    Far-infrared pedestrian detection approaches for advanced driver-assistance systems based on high-dimensional features fail to simultaneously achieve robust and real-time detection. We propose a robust and real-time pedestrian detection system characterized by novel candidate filters, novel pedestrian features and multi-frame approval matching in a coarse-to-fine fashion. Firstly, we design two filters based on the pedestrians' head and the road to select the candidates after applying a pedestrian segmentation algorithm to reduce false alarms. Secondly, we propose a novel feature encapsulating both the relationship of oriented gradient distribution and the code of oriented gradient to deal with the enormous variance in pedestrians' size and appearance. Thirdly, we introduce a multi-frame approval matching approach utilizing the spatiotemporal continuity of pedestrians to increase the detection rate. Large-scale experiments indicate that the system works in real time and the accuracy has improved about 9% compared with approaches based on high-dimensional features only.

  10. Methods and Apparatus for Reducing Multipath Signal Error Using Deconvolution

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra (Inventor); Lau, Kenneth H. (Inventor)

    1999-01-01

    A deconvolution approach to adaptive signal processing has been applied to the elimination of signal multipath errors, as embodied in one preferred embodiment in a global positioning system receiver. The method and receiver of the present invention estimate and then compensate for multipath effects in a comprehensive manner. Application of deconvolution, along with other adaptive identification and estimation techniques, results in a completely novel GPS (Global Positioning System) receiver architecture.

  11. Richardson-Lucy deconvolution as a general tool for combining images with complementary strengths.

    PubMed

    Ingaramo, Maria; York, Andrew G; Hoogendoorn, Eelco; Postma, Marten; Shroff, Hari; Patterson, George H

    2014-03-17

    We use Richardson-Lucy (RL) deconvolution to combine multiple images of a simulated object into a single image in the context of modern fluorescence microscopy techniques. RL deconvolution can merge images with very different point-spread functions, such as in multiview light-sheet microscopes [1, 2], while preserving the best resolution information present in each image. We show that RL deconvolution is also easily applied to merge high-resolution, high-noise images with low-resolution, low-noise images, relevant when complementing conventional microscopy with localization microscopy. We also use RL deconvolution to merge images produced by different simulated illumination patterns, relevant to structured illumination microscopy (SIM) [3, 4] and image scanning microscopy (ISM). The quality of our ISM reconstructions is at least as good as reconstructions using standard inversion algorithms for ISM data, but our method follows a simpler recipe that requires no mathematical insight. Finally, we apply RL deconvolution to merge a series of ten images with varying signal and resolution levels. This combination is relevant to gated stimulated-emission depletion (STED) microscopy, and shows that merges of high-quality images are possible even in cases for which a non-iterative inversion algorithm is unknown. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
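
    The multi-image RL update exploited here is a small generalization of the single-image recipe: each iteration multiplies the current estimate by the average of the per-view correction ratios. A minimal Python sketch, under the assumptions of co-registered float-valued inputs and normalized PSFs (not the authors' implementation):

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def multiview_rl(images, psfs, n_iter=50):
        """x <- x * mean_i[ flip(psf_i) (*) ( y_i / (psf_i (*) x) ) ]."""
        x = np.full_like(images[0], float(images[0].mean()))
        for _ in range(n_iter):
            correction = np.zeros_like(x)
            for y, psf in zip(images, psfs):
                blurred = fftconvolve(x, psf, mode="same")
                ratio = y / np.maximum(blurred, 1e-12)
                # flipped PSF plays the role of the adjoint operator
                correction += fftconvolve(ratio, psf[::-1, ::-1], mode="same")
            x *= correction / len(images)
        return x
    ```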

  12. Least-squares deconvolution of evoked potentials and sequence optimization for multiple stimuli under low-jitter conditions.

    PubMed

    Bardy, Fabrice; Dillon, Harvey; Van Dun, Bram

    2014-04-01

    Rapid presentation of stimuli in an evoked response paradigm can lead to overlap of multiple responses and consequently to difficulties interpreting waveform morphology. This paper presents a deconvolution method allowing multiple overlapping responses to be disentangled. The deconvolution technique uses a least-squares error approach. A methodology is proposed to optimize the stimulus sequence associated with the deconvolution technique under low-jitter conditions. It controls the condition number of the matrices involved in recovering the responses. Simulations were performed using the proposed deconvolution technique. Multiple overlapping responses can be recovered perfectly in noiseless conditions. In the presence of noise, the amount of error introduced by the technique can be controlled a priori by the condition number of the matrix associated with the stimulus sequence used. The simulation results indicate the need for a minimum amount of jitter, as well as a sufficient number of overlap combinations, to obtain optimum results. An aperiodic model is recommended to improve reconstruction. We propose a deconvolution technique allowing multiple overlapping responses to be extracted and a method of choosing the stimulus sequence optimal for response recovery. This technique may allow audiologists, psychologists, and electrophysiologists to optimize their experimental designs involving rapidly presented stimuli, and to recover evoked overlapping responses. Copyright © 2013 International Federation of Clinical Neurophysiology. All rights reserved.

  13. Change of reference frame for tactile localization during child development.

    PubMed

    Pagel, Birthe; Heed, Tobias; Röder, Brigitte

    2009-11-01

    Temporal order judgements (TOJ) for two tactile stimuli, one presented to the left and one to the right hand, are less precise when the hands are crossed over the midline than when the hands are uncrossed. This 'crossed hand' effect has been considered as evidence for a remapping of tactile input into an external reference frame. Since late, but not early, blind individuals show such remapping, it has been hypothesized that the use of an external reference frame develops during childhood. Five- to 10-year-old children were therefore tested with the tactile TOJ task, both with uncrossed and crossed hands. Overall performance in the TOJ task improved with age. While children older than 5 1/2 years displayed a crossed hand effect, younger children did not. Therefore, the use of an external reference frame for tactile, and possibly multisensory, localization seems to be acquired at about age 5 1/2.

  14. Development of a fast multi-line x-ray CT detector for NDT

    NASA Astrophysics Data System (ADS)

    Hofmann, T.; Nachtrab, F.; Schlechter, T.; Neubauer, H.; Mühlbauer, J.; Schröpfer, S.; Ernst, J.; Firsching, M.; Schweiger, T.; Oberst, M.; Meyer, A.; Uhlmann, N.

    2015-04-01

    Typical X-ray detectors for non-destructive testing (NDT) are line detectors or area detectors, such as flat panel detectors. Multi-line detectors are currently only available in medical Computed Tomography (CT) scanners. Compared to flat panel detectors, line and multi-line detectors can achieve much higher frame rates. This allows time-resolved 3D CT scans of an object under investigation. Also, an improved image quality can be achieved due to reduced scattered radiation from the object and the detector themselves. Another benefit of line and multi-line detectors is that very wide detectors can be assembled easily, while flat panel detectors are usually limited to an imaging field with a size of approx. 40 × 40 cm2 at maximum. The big disadvantage of line detectors is the limited number of object slices that can be scanned simultaneously, which leads to long scan times for large objects. Volume scans with a multi-line detector are much faster, with almost similar image quality. Due to these promising properties, the application of multi-line detectors outside of medical CT would also be very interesting for NDT. However, medical CT multi-line detectors are optimized for the scanning of human bodies; many non-medical applications require higher spatial resolutions and/or higher X-ray energies. For those non-medical applications we are developing a fast multi-line X-ray detector. In the scope of this work, we present the current state of the development of the novel detector, which includes several outstanding properties such as an adjustable curved design for variable focus-detector distances, preserving nearly uniform perpendicular irradiation over the entire detector width. The basis of the detector is a specifically designed, radiation-hard CMOS imaging sensor with a pixel pitch of 200 μm. Each pixel has an automatic in-pixel gain adjustment, which allows for both a very high sensitivity and a wide dynamic range. The final detector is planned to have 256 lines of pixels. By using a modular assembly of the detector, the width can be chosen as multiples of 512 pixels. With a frame rate of up to 300 frames/s (full resolution) or 1200 frames/s (analog binning to 400 μm pixel pitch), time-resolved 3D CT applications become possible. Two versions of the detector are in development: one with a high-resolution scintillator and one with a thick, structured and very efficient scintillator (pitch 400 μm). This way the detector can even work with X-ray energies up to 450 kVp.

  15. An accelerated non-Gaussianity based multichannel predictive deconvolution method with the limited supporting region of filters

    NASA Astrophysics Data System (ADS)

    Li, Zhong-xiao; Li, Zhen-chun

    2016-09-01

    Multichannel predictive deconvolution can be conducted in overlapping temporal and spatial data windows by solving a 2D predictive filter for multiple removal. Generally, the 2D predictive filter removes multiples better than the 1D predictive filter, at the cost of more computation time. In this paper, we first use a cross-correlation strategy to determine the limited supporting region of the filter, i.e. the region of the filter coefficient space whose coefficients play the major role in multiple removal. To solve for the 2D predictive filter, the traditional multichannel predictive deconvolution uses the least squares (LS) algorithm, which requires that primaries and multiples be orthogonal. To relax the orthogonality assumption, the iterative reweighted least squares (IRLS) algorithm and the fast iterative shrinkage thresholding (FIST) algorithm have been used to solve the 2D predictive filter in multichannel predictive deconvolution with a non-Gaussian maximization (L1-norm minimization) constraint on the primaries. The FIST algorithm has been demonstrated to be a faster alternative to the IRLS algorithm. In this paper we introduce the FIST algorithm to solve for the filter coefficients in the limited supporting region of the filter. Compared with FIST-based multichannel predictive deconvolution without the limited supporting region, the proposed method reduces the computational burden effectively while achieving similar accuracy. Additionally, the proposed method balances multiple removal and primary preservation better than the traditional LS-based multichannel predictive deconvolution and FIST-based single-channel predictive deconvolution. Synthetic and field data sets demonstrate the effectiveness of the proposed method.
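
    For readers unfamiliar with FIST(A)-type solvers, the generic machinery (gradient step, soft-thresholding, momentum) is sketched below in Python on the standard L1-regularized least-squares problem. Note that the paper's actual objective places the L1 norm on the primaries rather than on the filter, so this illustrates the solver, not their exact cost function:

    ```python
    import numpy as np

    def soft_threshold(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def fista(A, b, lam, n_iter=200):
        """min_f 0.5*||A f - b||^2 + lam*||f||_1 via FISTA."""
        L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of gradient
        f = np.zeros(A.shape[1]); z = f.copy(); t = 1.0
        for _ in range(n_iter):
            f_next = soft_threshold(z - A.T @ (A @ z - b) / L, lam / L)
            t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            z = f_next + ((t - 1.0) / t_next) * (f_next - f)  # momentum
            f, t = f_next, t_next
        return f
    ```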

  16. Single-event transient imaging with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor.

    PubMed

    Mochizuki, Futa; Kagawa, Keiichiro; Okihara, Shin-ichiro; Seo, Min-Woong; Zhang, Bo; Takasawa, Taishi; Yasutomi, Keita; Kawahito, Shoji

    2016-02-22

    In the work described in this paper, an image reproduction scheme with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor was demonstrated. The sensor captures an object by compressing a sequence of images with focal-plane temporally random-coded shutters, followed by reconstruction of time-resolved images. Because signals are modulated pixel-by-pixel during capturing, the maximum frame rate is defined only by the charge transfer speed and can thus be higher than those of conventional ultra-high-speed cameras. The frame rate and optical efficiency of the multi-aperture scheme are discussed. To demonstrate the proposed imaging method, a 5×3 multi-aperture image sensor was fabricated. The average rising and falling times of the shutters were 1.53 ns and 1.69 ns, respectively. The maximum skew among the shutters was 3 ns. The sensor observed plasma emission by compressing it to 15 frames, and a series of 32 images at 200 Mfps was reconstructed. In the experiment, by correcting disparities and considering temporal pixel responses, artifacts in the reconstructed images were reduced. An improvement in PSNR from 25.8 dB to 30.8 dB was confirmed in simulations.

  17. Studying Regional Wave Source Time Functions Using A Massive Automated EGF Deconvolution Procedure

    NASA Astrophysics Data System (ADS)

    Xie, J.; Schaff, D. P.

    2010-12-01

    Reliably estimated source time functions (STF) from high-frequency regional waveforms, such as Lg, Pn and Pg, provide important input for seismic source studies, explosion detection, and the minimization of parameter trade-offs in attenuation studies. The empirical Green’s function (EGF) method can be used to estimate the STF, but it imposes strict recording conditions: waveforms from pairs of events that are similar in focal mechanism but different in magnitude must be recorded on-scale at the same stations for the method to work. Searching for such waveforms can be very time consuming, particularly for regional waves that contain complex path effects and have reduced S/N ratios due to attenuation. We have developed a massive, automated procedure to conduct inter-event waveform deconvolution calculations from many candidate event pairs. The procedure automatically evaluates the “spikiness” of the deconvolutions by calculating their “sdc”, which is defined as the peak divided by the background value. The background value is calculated as the mean absolute value of the deconvolution, excluding 10 s around the source time function. When the sdc values are about 10 or higher, the deconvolutions are found to be sufficiently spiky (pulse-like), indicating similar path Green’s functions and good estimates of the STF. We have applied this automated procedure to Lg waves and full regional wavetrains from 989 M ≥ 5 events in and around China, calculating about a million deconvolutions. Of these we found about 2700 deconvolutions with sdc greater than 9, which, if they have a sufficiently broad frequency band, can be used to estimate the STFs of the larger events. We are currently refining our procedure, as well as the estimated STFs. We will infer the source scaling using the STFs. We will also explore the possibility that the deconvolution procedure could complement cross-correlation in a real-time event-screening process.
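
    The "sdc" spikiness measure as defined in the abstract is straightforward to compute; the Python sketch below is our reading of that definition (the sampling rate and window handling are assumptions):

    ```python
    import numpy as np

    def sdc(deconv, peak_index, fs, exclude_s=10.0):
        """Peak of the deconvolution divided by the mean absolute value
        of the background, excluding 10 s around the source time function."""
        half = int(0.5 * exclude_s * fs)
        mask = np.ones(deconv.size, dtype=bool)
        mask[max(0, peak_index - half):peak_index + half] = False
        return np.abs(deconv).max() / np.mean(np.abs(deconv[mask]))
    ```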

  18. A novel partial volume effects correction technique integrating deconvolution associated with denoising within an iterative PET image reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merlin, Thibaut, E-mail: thibaut.merlin@telecom-bretagne.eu; Visvikis, Dimitris; Fernandez, Philippe

    2015-02-15

    Purpose: Partial volume effect (PVE) plays an important role in both qualitative and quantitative PET image accuracy, especially for small structures. A previously proposed voxelwise PVE correction method applied on PET reconstructed images involves the use of Lucy–Richardson deconvolution incorporating wavelet-based denoising to limit the associated propagation of noise. The aim of this study is to incorporate the deconvolution, coupled with the denoising step, directly inside the iterative reconstruction process to further improve PVE correction. Methods: The list-mode ordered subset expectation maximization (OSEM) algorithm has been modified accordingly with the application of the Lucy–Richardson deconvolution algorithm to the current estimation of the image, at each reconstruction iteration. Acquisitions of the NEMA NU2-2001 IQ phantom were performed on a GE DRX PET/CT system to study the impact of incorporating the deconvolution inside the reconstruction [with and without the point spread function (PSF) model] in comparison to its application postreconstruction and to standard iterative reconstruction incorporating the PSF model. The impact of the denoising step was also evaluated. Images were semiquantitatively assessed by studying the trade-off between the intensity recovery and the noise level in the background estimated as relative standard deviation. Qualitative assessments of the developed methods were additionally performed on clinical cases. Results: Incorporating the deconvolution without denoising within the reconstruction achieved superior intensity recovery in comparison to both standard OSEM reconstruction integrating a PSF model and application of the deconvolution algorithm in a postreconstruction process. The addition of the denoising step permitted to limit the SNR degradation while preserving the intensity recovery. Conclusions: This study demonstrates the feasibility of incorporating the Lucy–Richardson deconvolution associated with a wavelet-based denoising in the reconstruction process to better correct for PVE. Future work includes further evaluations of the proposed method on clinical datasets and the use of improved PSF models.

  19. Is There a Direct Correlation Between Microvascular Wall Structure and k-Trans Values Obtained From Perfusion CT Measurements in Lymphomas?

    PubMed

    Horger, Marius; Fallier-Becker, Petra; Thaiss, Wolfgang M; Sauter, Alexander; Bösmüller, Hans; Martella, Manuela; Preibsch, Heike; Fritz, Jan; Nikolaou, Konstantin; Kloth, Christopher

    2018-05-03

    This study aimed to test the hypothesis that ultrastructural wall abnormalities of lymphoma vessels correlate with perfusion computed tomography (PCT) kinetics. Our local institutional review board approved this prospective study. Between February 2013 and June 2016, we included 23 consecutive subjects with newly diagnosed lymphoma, who were referred for computed tomography-guided biopsy (6 women, 17 men; mean age, 60.61 ± 12.43 years; range, 28-74 years) and additionally agreed to undergo PCT of the target lymphoma tissues. PCT was obtained for 40 seconds using 80 kV, 120 mAs, 64 × 0.6-mm collimation, 6.9-cm z-axis coverage, and 26 volume measurements. Mean and maximum k-trans (mL/100 mL/min), blood flow (BF; mL/100 mL/min) and blood volume (BV) were quantified using the deconvolution and the maximum slope + Patlak calculation models. Immunohistochemical staining was performed for microvessel density quantification (vessels/m2), and electron microscopy was used to determine the presence or absence of tight junctions, endothelial fenestration, basement membrane, and pericytes, and to measure extracellular matrix thickness. Extracellular matrix thickness as well as the presence or absence of tight junctions, basal lamina, and pericytes did not correlate with computed tomography perfusion parameters. Endothelial fenestrations correlated significantly with mean BF deconvolution (P = .047, r = 0.418) and additionally were significantly associated with higher mean BV deconvolution (P < .005). Mean k-trans Patlak correlated strongly with mean k-trans deconvolution (r = 0.939, P = .001), and both correlated with mean BF deconvolution (P = .001, r = 0.748), max BF deconvolution (P = .028, r = 0.564), mean BV deconvolution (P = .001, r = 0.752), and max BV deconvolution (P = .001, r = 0.771). Microvessel density correlated with max k-trans deconvolution (r = 0.564, P = .023). Vascular endothelial growth factor receptor-3 expression (receptor specific for lymphatics) correlated significantly with max k-trans Patlak (P = .041, r = 0.686) and mean BF deconvolution (P = .038, r = 0.695). k-Trans values of PCT do not correlate with ultrastructural microvessel features, whereas endothelial fenestrations correlate with increased intra-tumoral BVs. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  20. Extended output phasor representation of multi-spectral fluorescence lifetime imaging microscopy

    PubMed Central

    Campos-Delgado, Daniel U.; Navarro, O. Gutiérrez; Arce-Santana, E. R.; Jo, Javier A.

    2015-01-01

    In this paper, we investigate novel low-dimensional and model-free representations for multi-spectral fluorescence lifetime imaging microscopy (m-FLIM) data. We depart from the classical definition of the phasor in the complex plane to propose the extended output phasor (EOP) and extended phasor (EP) for multi-spectral information. The frequency domain properties of the EOP and EP are analytically studied based on a multiexponential model for the impulse response of the imaged tissue. For practical implementations, the EOP is more appealing since there is no need to perform deconvolution of the instrument response from the measured m-FLIM data, as in the case of EP. Our synthetic and experimental evaluations with m-FLIM datasets of human coronary atherosclerotic plaques show that low frequency indexes have to be employed for a distinctive representation of the EOP and EP, and to reduce noise distortion. The tissue classification of the m-FLIM datasets by EOP and EP also improves with low frequency indexes, and does not present significant differences by using either phasor. PMID:26114031
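
    For orientation, the classical single-frequency phasor that the authors generalize is computed as follows (a textbook Python sketch; the EOP/EP extensions of the paper additionally integrate over the spectral dimension):

    ```python
    import numpy as np

    def phasor(decay, dt, omega):
        """Classical phasor of a decay I(t):
        g = sum I(t) cos(w t) / sum I(t),  s = sum I(t) sin(w t) / sum I(t)."""
        t = np.arange(decay.size) * dt
        total = decay.sum()
        g = (decay * np.cos(omega * t)).sum() / total
        s = (decay * np.sin(omega * t)).sum() / total
        return g, s
    ```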

  1. Development of a compact and cost effective multi-input digital signal processing system

    NASA Astrophysics Data System (ADS)

    Darvish-Molla, Sahar; Chin, Kenrick; Prestwich, William V.; Byun, Soo Hyun

    2018-01-01

    A prototype digital signal processing (DSP) system was developed using a microcontroller interfaced with a 12-bit sampling ADC, which offers a considerably inexpensive solution for processing multiple detectors with high throughput. After digitization of the incoming pulses, a simple algorithm is employed for pulse height analysis in order to maximize the output counting rate. Moreover, an algorithm aiming at real-time pulse pile-up deconvolution was implemented. The system was tested using a NaI(Tl) detector, in comparison with a traditional analogue system and a commercial digital system, for a variety of count rates. The performance of the prototype system was consistently superior to the analogue and commercial digital systems up to an input count rate of 61 kcps, while being slightly inferior to the commercial digital system but still superior to the analogue system at higher input rates. Considering overall cost, size and flexibility, this custom-made multi-input digital signal processing system (MMI-DSP) was the best reliable choice for 2D microdosimetric data collection, or for any measurement in which simultaneous multi-input data collection is required.
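
    A toy version of the pulse-height step (threshold crossing plus local maximum, with closely spaced crossings flagged as pile-up candidates for the deconvolution stage) gives the flavour of such firmware; all thresholds and names in this Python sketch are illustrative, not taken from the paper:

    ```python
    import numpy as np

    def pulse_heights(samples, baseline, threshold, min_gap):
        """Return pulse heights above baseline and indices of pulses that
        start within `min_gap` samples of the previous one (pile-up)."""
        above = samples > baseline + threshold
        starts = np.flatnonzero(above[1:] & ~above[:-1]) + 1
        heights, pileups = [], []
        for i, s in enumerate(starts):
            e = s
            while e < samples.size and above[e]:
                e += 1                       # walk to end of the pulse
            heights.append(samples[s:e].max() - baseline)
            if i and s - starts[i - 1] < min_gap:
                pileups.append(i)            # candidate for deconvolution
        return np.asarray(heights), pileups
    ```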

  2. A novel optimised and validated method for analysis of multi-residues of pesticides in fruits and vegetables by microwave-assisted extraction (MAE)-dispersive solid-phase extraction (d-SPE)-retention time locked (RTL)-gas chromatography-mass spectrometry with Deconvolution reporting software (DRS).

    PubMed

    Satpathy, Gouri; Tyagi, Yogesh Kumar; Gupta, Rajinder Kumar

    2011-08-01

    A rapid, effective and eco-friendly method for the sensitive screening and quantification of 72 pesticide residues in fruits and vegetables, using microwave-assisted extraction (MAE) followed by dispersive solid-phase extraction (d-SPE) and retention-time-locked (RTL) capillary gas-chromatographic separation with mass spectrometric determination in trace ion mode, has been validated as per ISO/IEC 17025:2005. Identification and reporting with total and extracted ion chromatograms were greatly facilitated by Deconvolution Reporting Software (DRS). For all compounds, LODs were 0.002-0.02 mg/kg and LOQs were 0.025-0.100 mg/kg. Correlation coefficients of the calibration curves in the range of 0.025-0.50 mg/kg were >0.993. To validate matrix effects, repeatability, reproducibility, recovery and overall uncertainty were calculated for the 35 matrices at 0.025, 0.050 and 0.100 mg/kg. Recovery ranged between 72% and 114% with RSDs of <20% for repeatability and intermediate precision. The reproducibility of the method was evaluated by inter-laboratory participation, and the Z scores obtained were within ±2. Copyright © 2011 Elsevier Ltd. All rights reserved.

  3. Data enhancement and analysis through mathematical deconvolution of signals from scientific measuring instruments

    NASA Technical Reports Server (NTRS)

    Wood, G. M.; Rayborn, G. H.; Ioup, J. W.; Ioup, G. E.; Upchurch, B. T.; Howard, S. J.

    1981-01-01

    Mathematical deconvolution of digitized analog signals from scientific measuring instruments is shown to be a means of extracting important information which is otherwise hidden due to time-constant and other broadening or distortion effects caused by the experiment. Three different approaches to deconvolution and their subsequent application to recorded data from three analytical instruments are considered. To demonstrate the efficacy of deconvolution, the use of these approaches to solve the convolution integral for the gas chromatograph, magnetic mass spectrometer, and the time-of-flight mass spectrometer are described. Other possible applications of these types of numerical treatment of data to yield superior results from analog signals of the physical parameters normally measured in aerospace simulation facilities are suggested and briefly discussed.
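
    All three approaches start from the same convolution integral relating the recorded signal y to the true signal x through the instrument response h:

    ```latex
    \[
      y(t) \;=\; \int_{-\infty}^{\infty} h(t - \tau)\,x(\tau)\,d\tau \;+\; n(t),
    \]
    ```

    with n(t) the measurement noise; deconvolution is any strategy for estimating x given y and a model of h.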

  4. Improved deconvolution of very weak confocal signals.

    PubMed

    Day, Kasey J; La Rivière, Patrick J; Chandler, Talon; Bindokas, Vytas P; Ferrier, Nicola J; Glick, Benjamin S

    2017-01-01

    Deconvolution is typically used to sharpen fluorescence images, but when the signal-to-noise ratio is low, the primary benefit is reduced noise and a smoother appearance of the fluorescent structures. 3D time-lapse (4D) confocal image sets can be improved by deconvolution. However, when the confocal signals are very weak, the popular Huygens deconvolution software erases fluorescent structures that are clearly visible in the raw data. We find that this problem can be avoided by prefiltering the optical sections with a Gaussian blur. Analysis of real and simulated data indicates that the Gaussian blur prefilter preserves meaningful signals while enabling removal of background noise. This approach is very simple, and it allows Huygens to be used with 4D imaging conditions that minimize photodamage.
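
    The prefilter itself is essentially a one-liner; a Python sketch under the assumption of a 3D stack of optical sections (the sigma value is a placeholder, not a recommendation from the paper):

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def prefilter_sections(stack, sigma_px=1.0):
        """Blur each optical section in-plane before deconvolution so weak
        structures are smoothed above the noise floor rather than erased."""
        return gaussian_filter(stack, sigma=(0.0, sigma_px, sigma_px))
    ```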

  5. Blind Quantum Signature with Blind Quantum Computation

    NASA Astrophysics Data System (ADS)

    Li, Wei; Shi, Ronghua; Guo, Ying

    2017-04-01

    Blind quantum computation allows a client without quantum abilities to interact with a quantum server to perform an unconditionally secure computing protocol, while protecting the client's privacy. Motivated by the confidentiality of blind quantum computation, a blind quantum signature scheme is designed with a laconic structure. Different from traditional signature schemes, the signing and verifying operations are performed through measurement-based quantum computation. Inputs of blind quantum computation are securely controlled with multi-qubit entangled states. The unique signature of the transmitted message is generated by the signer without leaking information in imperfect channels, whereas the receiver can verify the validity of the signature using the quantum matching algorithm. The security is guaranteed by the entanglement of the quantum system for blind quantum computation. It provides a potential practical application for e-commerce in cloud computing and first-generation quantum computation.

  6. Electroencephalography: Subdural Multi-Electrode Brain Chip.

    DTIC Science & Technology

    1995-12-01

    [Excerpt garbled in the source scan. The recoverable fragments describe a blind subject in Dobelle's experiment, blind for 10 years, who was able to read Braille at 30 letters a minute when Braille patterns were delivered to his visual cortex by stimulating appropriate sets of a 64-electrode array. Cited fragments: Evans, "'Braille' Reading by a Blind Volunteer by Visual Cortex Stimulation," Nature, 259: 111-112 (January 1976); A.K. Engel, et al., "Temporal Coding...".]

  7. Septal penetration correction in I-131 imaging following thyroid cancer treatment

    NASA Astrophysics Data System (ADS)

    Barrack, Fiona; Scuffham, James; McQuaid, Sarah

    2018-04-01

    Whole body gamma camera images acquired after I-131 treatment for thyroid cancer can suffer from collimator septal penetration artefacts because of the high energy of the gamma photons. This results in the appearance of ‘spoke’ artefacts, emanating from regions of high activity concentration, caused by the non-isotropic attenuation of the collimator. Deconvolution has the potential to reduce such artefacts by taking into account the non-Gaussian point-spread-function (PSF) of the system. A Richardson–Lucy deconvolution algorithm, with and without prior scatter-correction, was tested as a method of reducing septal penetration in planar gamma camera images. Phantom images (hot spheres within a warm background) were acquired and deconvolution using a measured PSF was applied. The results were evaluated through region-of-interest and line profile analysis to determine the success of artefact reduction and the optimal number of deconvolution iterations and damping parameter (λ). Without scatter-correction, the optimal results were obtained with 15 iterations and λ = 0.01, with the counts in the spokes reduced to 20% of the original value, indicating a substantial decrease in their prominence. When a triple-energy-window scatter-correction was applied prior to deconvolution, the optimal results were obtained with six iterations and λ = 0.02, which reduced the spoke counts to 3% of the original value. The prior application of scatter-correction therefore produced the best results, with a marked change in the appearance of the images. The optimal settings were then applied to six patient datasets to demonstrate utility in the clinical setting. In all datasets, spoke artefacts were substantially reduced after the application of scatter-correction and deconvolution, with the mean spoke count being reduced to 10% of the original value. This indicates that deconvolution is a promising technique for septal penetration artefact reduction that could potentially improve the diagnostic accuracy of I-131 imaging. Novelty and significance: this work has demonstrated that scatter correction combined with deconvolution can substantially reduce the appearance of septal penetration artefacts in I-131 phantom and patient planar gamma camera images, enabling improved visualisation of the I-131 distribution. Deconvolution with a symmetric PSF has previously been used to reduce artefacts in gamma camera images; however, this work details the novel use of an asymmetric PSF to remove the angularly dependent septal penetration artefacts.

  8. Multipoint Optimal Minimum Entropy Deconvolution and Convolution Fix: Application to vibration fault detection

    NASA Astrophysics Data System (ADS)

    McDonald, Geoff L.; Zhao, Qing

    2017-01-01

    Minimum Entropy Deconvolution (MED) has been applied successfully to rotating machine fault detection from vibration data; however, this method has limitations. A convolution adjustment to the MED definition and solution is proposed in this paper to address the discontinuity at the start of the signal - in some cases causing spurious impulses to be erroneously deconvolved. A problem with the MED solution is that it is an iterative selection process, and will not necessarily design an optimal filter for the posed problem. Additionally, the problem goal in MED prefers to deconvolve a single impulse, while in rotating machine faults we expect one impulse-like vibration source per rotational period of the faulty element. Maximum Correlated Kurtosis Deconvolution was proposed to address some of these problems, and although it solves the target goal of multiple periodic impulses, it is still an iterative non-optimal solution to the posed problem and only solves for a limited set of impulses in a row. Ideally, the problem goal should target an impulse train as the output, and should directly solve for the optimal filter in a non-iterative manner. To meet these goals, we propose a non-iterative deconvolution approach called Multipoint Optimal Minimum Entropy Deconvolution Adjusted (MOMEDA). MOMEDA poses a deconvolution problem with an infinite impulse train as the goal, for which the optimal filter can be solved directly. From experimental data on a gearbox with and without a gear tooth chip, we show that MOMEDA and its deconvolution spectra, computed according to the period between the impulses, can be used to detect faults and study the health of rotating machine elements effectively.
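
    Stripped of the windowing and Multipoint D-norm normalisation details of the published method, the non-iterative solve reduces to one linear system. The sketch below is a simplified illustration (variable names and the filter length are assumptions): it builds a delayed-signal matrix, a periodic impulse-train target, and solves for the FIR filter directly.

        import numpy as np

        def momeda_filter(x, period, filt_len=30):
            """Direct (non-iterative) solve for a deconvolution filter whose
            output is pushed towards an impulse train with the given period.
            Simplified sketch: the published norm/weighting is omitted."""
            x = np.asarray(x, float)
            N = len(x) - filt_len + 1
            # Rows are progressively delayed copies of the measured signal.
            X0 = np.stack([x[filt_len - 1 - k : filt_len - 1 - k + N]
                           for k in range(filt_len)])
            t = np.zeros(N)
            t[::period] = 1.0          # target: one impulse per fault period
            f = np.linalg.solve(X0 @ X0.T, X0 @ t)
            return f, f @ X0           # filter coefficients and filtered output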

  9. Evaluation of deconvolution modelling applied to numerical combustion

    NASA Astrophysics Data System (ADS)

    Mehl, Cédric; Idier, Jérôme; Fiorina, Benoît

    2018-01-01

    A possible modelling approach in the large eddy simulation (LES) of reactive flows is to deconvolve resolved scalars. Indeed, by inverting the LES filter, scalars such as mass fractions are reconstructed. This information can be used to close budget terms of filtered species balance equations, such as the filtered reaction rate. Being ill-posed in the mathematical sense, the problem is very sensitive to any numerical perturbation. The objective of the present study is to assess the ability of this kind of methodology to capture the chemical structure of premixed flames. For that purpose, three deconvolution methods are tested on a one-dimensional filtered laminar premixed flame configuration: the approximate deconvolution method based on Van Cittert iterative deconvolution, a Taylor decomposition-based method, and the regularised deconvolution method based on the minimisation of a quadratic criterion. These methods are then extended to the reconstruction of subgrid scale profiles. Two methodologies are proposed: the first relies on subgrid scale interpolation of deconvolved profiles and the second uses parametric functions to describe small scales. The tests analyse the ability of each method to capture the filtered flame chemical structure and front propagation speed. Results show that the deconvolution model should include information about small scales in order to regularise the filter inversion. A priori and a posteriori tests showed that the filtered flame propagation speed and structure cannot be captured if the filter size is too large.
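
    The Van Cittert building block mentioned above is compact enough to state directly. This is a one-dimensional, hedged sketch of the iteration only (the filter kernel and relaxation factor are placeholders), not the paper's full approximate-deconvolution model:

        import numpy as np

        def van_cittert(filtered, kernel, n_iter=5, beta=1.0):
            """Iteratively invert a known low-pass filter:
            phi_{k+1} = phi_k + beta * (filtered - kernel * phi_k).
            Few iterations are used in practice because the scheme
            amplifies high-wavenumber noise, hence the need for
            regularisation noted in the abstract."""
            phi = np.asarray(filtered, float).copy()
            for _ in range(n_iter):
                phi = phi + beta * (filtered - np.convolve(phi, kernel, mode="same"))
            return phi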

  10. Faceting for direction-dependent spectral deconvolution

    NASA Astrophysics Data System (ADS)

    Tasse, C.; Hugo, B.; Mirmont, M.; Smirnov, O.; Atemkeng, M.; Bester, L.; Hardcastle, M. J.; Lakhoo, R.; Perkins, S.; Shimwell, T.

    2018-04-01

    The new generation of radio interferometers is characterized by high sensitivity, wide fields of view and large fractional bandwidth. To synthesize the deepest images enabled by the high dynamic range of these instruments requires us to take into account the direction-dependent Jones matrices, while estimating the spectral properties of the sky in the imaging and deconvolution algorithms. In this paper we discuss and implement a wideband wide-field spectral deconvolution framework (DDFacet) based on image plane faceting, that takes into account generic direction-dependent effects. Specifically, we present a wide-field co-planar faceting scheme, and discuss the various effects that need to be taken into account to solve the deconvolution problem (image plane normalization, position-dependent Point Spread Function, etc.). We discuss two wideband spectral deconvolution algorithms based on hybrid matching pursuit and sub-space optimisation, respectively. A few interesting technical features incorporated in our imager are discussed, including baseline-dependent averaging, which has the effect of improving computing efficiency. The version of DDFacet presented here can account for any externally defined Jones matrices and/or beam patterns.

  11. Intrinsic fluorescence spectroscopy of glutamate dehydrogenase: Integrated behavior and deconvolution analysis

    NASA Astrophysics Data System (ADS)

    Pompa, P. P.; Cingolani, R.; Rinaldi, R.

    2003-07-01

    In this paper, we present a deconvolution method aimed at spectrally resolving the broad fluorescence spectra of proteins, namely, of the enzyme bovine liver glutamate dehydrogenase (GDH). The analytical procedure is based on the deconvolution of the emission spectra into three distinct Gaussian fluorescing bands Gj. The relative changes of the Gj parameters are directly related to the conformational changes of the enzyme, and provide interesting information about the fluorescence dynamics of the individual emitting contributions. Our deconvolution method results in an excellent fit of all the spectra obtained with GDH in a number of experimental conditions (various conformational states of the protein) and describes very well the dynamics of a variety of phenomena, such as the dependence of hexamer association on protein concentration, the dynamics of thermal denaturation, and the interaction process between the enzyme and external quenchers. The investigation was carried out by means of different optical experiments, i.e., native enzyme fluorescence, thermal-induced unfolding, and fluorescence quenching studies, utilizing both the analysis of the “average” behavior of the enzyme and the proposed deconvolution approach.
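
    As a rough illustration of this kind of spectral decomposition, an emission spectrum can be fitted with a sum of three Gaussian bands using a standard least-squares routine. The band centres and widths below are invented starting guesses, not the GDH values from the paper, and the "measured" spectrum is synthetic:

        import numpy as np
        from scipy.optimize import curve_fit

        def three_gaussians(wl, a1, c1, s1, a2, c2, s2, a3, c3, s3):
            """Sum of three Gaussian fluorescing bands G1 + G2 + G3."""
            g = lambda a, c, s: a * np.exp(-0.5 * ((wl - c) / s) ** 2)
            return g(a1, c1, s1) + g(a2, c2, s2) + g(a3, c3, s3)

        # Hypothetical initial guesses: (amplitude, centre/nm, width/nm) x 3.
        p0 = [1.0, 325, 12, 0.8, 340, 15, 0.5, 355, 18]
        wl = np.linspace(300, 420, 241)
        spectrum = three_gaussians(wl, *p0)          # stand-in for measured data
        spectrum += 0.01 * np.random.default_rng(0).normal(size=wl.size)
        params, _ = curve_fit(three_gaussians, wl, spectrum, p0=p0)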

  12. The role of iconic memory in change-detection tasks.

    PubMed

    Becker, M W; Pashler, H; Anstis, S M

    2000-01-01

    In three experiments, subjects attempted to detect the change of a single item in a visually presented array of items. Subjects' ability to detect a change was greatly reduced if a blank interstimulus interval (ISI) was inserted between the original array and an array in which one item had changed ('change blindness'). However, change detection improved when the location of the change was cued during the blank ISI. This indicates that people represent more information about a scene than change blindness might suggest. We test two possible hypotheses for why, in the absence of a cue, this representation fails to produce good change detection. The first claims that the intervening events employed to create change blindness result in multiple neural transients which co-occur with the to-be-detected change. Poor detection rates occur because a serial search of all the transient locations is required to detect the change, during which time the representation of the original scene fades. The second claims that the occurrence of the second frame overwrites the representation of the first frame, unless that information is insulated against overwriting by attention. The results support the second hypothesis. We conclude that people may have a fairly rich visual representation of a scene while the scene is present, but fail to detect changes because they lack the ability to simultaneously represent two complete visual representations.

  13. 4Pi microscopy deconvolution with a variable point-spread function.

    PubMed

    Baddeley, David; Carl, Christian; Cremer, Christoph

    2006-09-20

    To remove the axial sidelobes from 4Pi images, deconvolution forms an integral part of 4Pi microscopy. As a result of its high axial resolution, the 4Pi point spread function (PSF) is particularly susceptible to imperfect optical conditions within the sample. This is typically observed as a shift in the position of the maxima under the PSF envelope. A significantly varying phase shift renders deconvolution procedures based on a spatially invariant PSF essentially useless. We present a technique for computing the forward transformation in the case of a varying phase at a computational expense of the same order of magnitude as that of the shift invariant case, a method for the estimation of PSF phase from an acquired image, and a deconvolution procedure built on these techniques.

  14. Improved deconvolution of very weak confocal signals

    PubMed Central

    Day, Kasey J.; La Rivière, Patrick J.; Chandler, Talon; Bindokas, Vytas P.; Ferrier, Nicola J.; Glick, Benjamin S.

    2017-01-01

    Deconvolution is typically used to sharpen fluorescence images, but when the signal-to-noise ratio is low, the primary benefit is reduced noise and a smoother appearance of the fluorescent structures. 3D time-lapse (4D) confocal image sets can be improved by deconvolution. However, when the confocal signals are very weak, the popular Huygens deconvolution software erases fluorescent structures that are clearly visible in the raw data. We find that this problem can be avoided by prefiltering the optical sections with a Gaussian blur. Analysis of real and simulated data indicates that the Gaussian blur prefilter preserves meaningful signals while enabling removal of background noise. This approach is very simple, and it allows Huygens to be used with 4D imaging conditions that minimize photodamage. PMID:28868135
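
    A minimal sketch of the prefilter idea, assuming SciPy and scikit-image as stand-ins for the proprietary Huygens pipeline (the blur width sigma and the iteration count are tunable assumptions, not the paper's settings):

        import numpy as np
        from scipy.ndimage import gaussian_filter
        from skimage.restoration import richardson_lucy

        def prefiltered_deconvolution(stack, psf, sigma=1.0, n_iter=20):
            """Gaussian-blur each optical section, then deconvolve.
            `stack` is a (z, y, x) confocal volume and `psf` a matching
            3-D PSF; the in-plane blur suppresses shot noise that would
            otherwise be mistaken for structure and erased later."""
            smoothed = gaussian_filter(stack.astype(float),
                                       sigma=(0, sigma, sigma))  # blur within sections only
            smoothed /= smoothed.max()              # RL expects data scaled to [0, 1]
            return richardson_lucy(smoothed, psf, n_iter)  # 3rd arg: iteration count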

  15. Improved deconvolution of very weak confocal signals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Day, Kasey J.; La Riviere, Patrick J.; Chandler, Talon

    Deconvolution is typically used to sharpen fluorescence images, but when the signal-to-noise ratio is low, the primary benefit is reduced noise and a smoother appearance of the fluorescent structures. 3D time-lapse (4D) confocal image sets can be improved by deconvolution. However, when the confocal signals are very weak, the popular Huygens deconvolution software erases fluorescent structures that are clearly visible in the raw data. We find that this problem can be avoided by prefiltering the optical sections with a Gaussian blur. Analysis of real and simulated data indicates that the Gaussian blur prefilter preserves meaningful signals while enabling removal of background noise. This approach is very simple, and it allows Huygens to be used with 4D imaging conditions that minimize photodamage.

  16. Improved deconvolution of very weak confocal signals

    DOE PAGES

    Day, Kasey J.; La Riviere, Patrick J.; Chandler, Talon; ...

    2017-06-06

    Deconvolution is typically used to sharpen fluorescence images, but when the signal-to-noise ratio is low, the primary benefit is reduced noise and a smoother appearance of the fluorescent structures. 3D time-lapse (4D) confocal image sets can be improved by deconvolution. However, when the confocal signals are very weak, the popular Huygens deconvolution software erases fluorescent structures that are clearly visible in the raw data. We find that this problem can be avoided by prefiltering the optical sections with a Gaussian blur. Analysis of real and simulated data indicates that the Gaussian blur prefilter preserves meaningful signals while enabling removal of background noise. This approach is very simple, and it allows Huygens to be used with 4D imaging conditions that minimize photodamage.

  17. "One-Stop Shop": Free-Breathing Dynamic Contrast-Enhanced Magnetic Resonance Imaging of the Kidney Using Iterative Reconstruction and Continuous Golden-Angle Radial Sampling.

    PubMed

    Riffel, Philipp; Zoellner, Frank G; Budjan, Johannes; Grimm, Robert; Block, Tobias K; Schoenberg, Stefan O; Hausmann, Daniel

    2016-11-01

    The purpose of the present study was to evaluate a recently introduced technique for free-breathing dynamic contrast-enhanced renal magnetic resonance imaging (MRI) applying a combination of radial k-space sampling, parallel imaging, and compressed sensing. The technique allows retrospective reconstruction of 2 motion-suppressed sets of images from the same acquisition: one with lower temporal resolution but improved image quality for subjective image analysis, and one with high temporal resolution for quantitative perfusion analysis. In this study, 25 patients underwent a kidney examination, including a prototypical fat-suppressed, golden-angle radial stack-of-stars T1-weighted 3-dimensional spoiled gradient-echo examination (GRASP) performed after contrast agent administration during free breathing. Images were reconstructed at temporal resolutions of 55 spokes per frame (6.2 seconds) and 13 spokes per frame (1.5 seconds). The GRASP images were evaluated by 2 blinded radiologists. First, the reconstructions with low temporal resolution underwent subjective image analysis: the radiologists assessed the best arterial phase and the best renal phase and rated the image quality score for each patient on a 5-point Likert-type scale. In addition, the diagnostic confidence was rated according to a 3-point Likert-type scale. Similarly, respiratory motion artifacts and streak artifacts were rated according to a 3-point Likert-type scale. Then, the reconstructions with high temporal resolution were analyzed with a voxel-by-voxel deconvolution approach to determine the renal plasma flow, and the results were compared with values reported in previous literature. Reader 1 and reader 2 rated the overall image quality for the best arterial phase and the best renal phase with a median image quality score of 4 (good image quality) for both phases, respectively. A high diagnostic confidence (median score of 3) was observed. There were no respiratory motion artifacts in any of the patients. Streak artifacts were present in all of the patients, but did not compromise diagnostic image quality. The estimated renal plasma flow was slightly higher (295 ± 78 mL/100 mL per minute) than reported in previous MRI-based studies, but also closer to the physiologically expected value. Dynamic, motion-suppressed contrast-enhanced renal MRI can be performed in high diagnostic quality during free breathing using a combination of golden-angle radial sampling, parallel imaging, and compressed sensing. Both morphologic and quantitative functional information can be acquired within a single acquisition.

  18. Touch the Invisible Sky: A Multi-Wavelength Braille Book Featuring Tactile NASA Images

    NASA Astrophysics Data System (ADS)

    Grice, N.; Steel, S.; Daou, D.

    2008-06-01

    According to the American Foundation for the Blind and the National Federation of the Blind, there are approximately 10 million blind and visually impaired people in the United States. Because astronomy is often visually based, many people assume that it cannot be made accessible. A new astronomy book, Touch the Invisible Sky, makes wavelengths not visible to human eyes accessible to all audiences through text in print and Braille and with pictures that are touchable and in color.

  19. Computerised curve deconvolution of TL/OSL curves using a popular spreadsheet program.

    PubMed

    Afouxenidis, D; Polymeris, G S; Tsirliganis, N C; Kitis, G

    2012-05-01

    This paper explores the possibility of using commercial software for thermoluminescence and optically stimulated luminescence curve deconvolution analysis. The widely used software package Microsoft Excel, with its Solver utility, has been used to perform deconvolution analysis on both experimental and reference glow curves resulting from the GLOw Curve ANalysis INtercomparison project. The simple interface of this programme, combined with the powerful Solver utility, allows complex stimulated luminescence curves to be resolved into their components and the associated luminescence parameters to be evaluated.

  20. Deconvolution of noisy transient signals: a Kalman filtering application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candy, J.V.; Zicker, J.E.

    The deconvolution of transient signals from noisy measurements is a common problem occurring in various tests at Lawrence Livermore National Laboratory. The transient deconvolution problem places atypical constraints on algorithms presently available. The Schmidt-Kalman filter, a time-varying, tunable predictor, is designed using a piecewise-constant model of the transient input signal. A simulation is developed to test the algorithm for various input signal bandwidths and different signal-to-noise ratios for the input and output sequences. The algorithm performance is reasonable.
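
    The report's Schmidt-Kalman design is not reproduced in this record, but the underlying idea - treat the unknown input as a piecewise-constant state and let a Kalman filter deconvolve it from noisy output data - can be sketched generically. All model parameters below (FIR response h, noise variances q and r) are illustrative assumptions:

        import numpy as np

        def kalman_deconvolve(y, h, q=1e-2, r=1e-1):
            """Estimate a piecewise-constant input u from noisy measurements
            y[k] = (h * u)[k] + v[k], with h a known FIR response.
            The state vector holds the last len(h) input samples."""
            L = len(h)
            A = np.eye(L, k=-1)      # shift the input window one sample back
            A[0, 0] = 1.0            # newest input follows a random walk
            H = np.asarray(h, float)[None, :]
            Q = np.zeros((L, L)); Q[0, 0] = q   # process noise drives the input only
            x, P = np.zeros(L), np.eye(L)
            u_hat = np.empty(len(y))
            for k, yk in enumerate(y):
                x, P = A @ x, A @ P @ A.T + Q            # predict
                S = float(H @ P @ H.T) + r               # innovation variance
                K = (P @ H.T) / S                        # Kalman gain, shape (L, 1)
                x = x + K[:, 0] * (yk - float(H @ x))    # measurement update
                P = (np.eye(L) - K @ H) @ P
                u_hat[k] = x[0]
            return u_hat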

  1. Quadratic Blind Linear Unmixing: A Graphical User Interface for Tissue Characterization

    PubMed Central

    Gutierrez-Navarro, O.; Campos-Delgado, D.U.; Arce-Santana, E. R.; Jo, Javier A.

    2016-01-01

    Spectral unmixing is the process of breaking down data from a sample into its basic components and their abundances. Previous work has been focused on blind unmixing of multi-spectral fluorescence lifetime imaging microscopy (m-FLIM) datasets under a linear mixture model and quadratic approximations. This method provides a fast linear decomposition and can work without a limitation in the maximum number of components or end-members. Hence, this work presents an interactive software package which implements our blind end-member and abundance extraction (BEAE) and quadratic blind linear unmixing (QBLU) algorithms in Matlab. The options and capabilities of our proposed software are described in detail. When the number of components is known, our software can estimate the constitutive end-members and their abundances. When no prior knowledge is available, the software can provide a completely blind solution to estimate the number of components, the end-members and their abundances. The characterization of three case studies validates the performance of the new software: ex-vivo human coronary arteries, human breast cancer cell samples, and in-vivo hamster oral mucosa. The software is freely available on a webpage hosted by one of the developing institutions, and offers the user a quick, easy-to-use and efficient tool for multi/hyper-spectral data decomposition. PMID:26589467

  2. Quadratic blind linear unmixing: A graphical user interface for tissue characterization.

    PubMed

    Gutierrez-Navarro, O; Campos-Delgado, D U; Arce-Santana, E R; Jo, Javier A

    2016-02-01

    Spectral unmixing is the process of breaking down data from a sample into its basic components and their abundances. Previous work has been focused on blind unmixing of multi-spectral fluorescence lifetime imaging microscopy (m-FLIM) datasets under a linear mixture model and quadratic approximations. This method provides a fast linear decomposition and can work without a limitation in the maximum number of components or end-members. Hence, this work presents an interactive software package which implements our blind end-member and abundance extraction (BEAE) and quadratic blind linear unmixing (QBLU) algorithms in Matlab. The options and capabilities of our proposed software are described in detail. When the number of components is known, our software can estimate the constitutive end-members and their abundances. When no prior knowledge is available, the software can provide a completely blind solution to estimate the number of components, the end-members and their abundances. The characterization of three case studies validates the performance of the new software: ex-vivo human coronary arteries, human breast cancer cell samples, and in-vivo hamster oral mucosa. The software is freely available on a webpage hosted by one of the developing institutions, and offers the user a quick, easy-to-use and efficient tool for multi/hyper-spectral data decomposition. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  3. Multi-frame X-ray Phase Contrast Imaging (MPCI) for Dynamic Experiments

    NASA Astrophysics Data System (ADS)

    Iverson, Adam; Carlson, Carl; Sanchez, Nathaniel; Jensen, Brian

    2017-06-01

    Recent advances in coupling synchrotron X-ray diagnostics to dynamic experiments are providing new information about the response of materials at extremes. For example, propagation based X-ray Phase Contrast Imaging (PCI) which is sensitive to differences in density has been successfully used to study a wide range of phenomena, e.g. jet-formation, compression of additive manufactured (AM) materials, and detonator dynamics. In this talk, we describe the current multi-frame X-ray phase contrast imaging (MPCI) system which allows up to eight frames per experiment, remote optimization, and an improved optical design that increases optical efficiency and accommodates dual-magnification during a dynamic event. Data will be presented that used the dual-magnification feature to obtain multiple images of an exploding foil initiator. In addition, results from static testing will be presented that used a multiple scintillator configuration required to extend the density retrieval to multi-constituent, or heterogeneous systems. The continued development of this diagnostic is fundamentally important to capabilities at the APS including IMPULSE and the Dynamic Compression Sector (DCS), and will benefit future facilities such as MaRIE at Los Alamos National Laboratory.

  4. X-ray Diffraction and Multi-Frame Phase Contrast Imaging Diagnostics for IMPULSE at the Advanced Photon Source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iverson, Adam; Carlson, Carl; Young, Jason

    2013-07-08

    The diagnostic needs of any dynamic loading platform present unique technical challenges that must be addressed in order to accurately measure in situ material properties in an extreme environment. The IMPULSE platform (IMPact system for Ultrafast Synchrotron Experiments) at the Advanced Photon Source (APS) is no exception and, in fact, may be more challenging, as the imaging diagnostics must be synchronized to both the experiment and the 60 ps wide x-ray bunches produced at APS. The technical challenges of time-resolved x-ray diffraction imaging and high-resolution multi-frame phase contrast imaging (PCI) are described in this paper. Example data from recent IMPULSE experiments are shown to illustrate the advances and evolution of these diagnostics with a focus on comparing the performance of two intensified CCD cameras and their suitability for multi-frame PCI. The continued development of these diagnostics is fundamentally important to IMPULSE and many other loading platforms and will benefit future facilities such as the Dynamic Compression Sector at APS and MaRIE at Los Alamos National Laboratory.

  5. Multi-material size optimization of a ladder frame chassis

    NASA Astrophysics Data System (ADS)

    Baker, Michael

    The Corporate Average Fuel Economy (CAFE) is an American fuel standard that sets regulations on fuel economy in vehicles. This law ultimately shapes development and design research for automakers. Reducing the weight of conventional cars offers a way to improve fuel efficiency. This research investigated the optimality of an automobile's ladder frame chassis (LFC) by conducting multi-objective optimization on the LFC in order to reduce the weight of the chassis. The focus of the design and optimization was a ladder frame chassis commonly used for mass-production light motor vehicles with an open-top rear cargo area. This thesis comprises two major sections. The first performed thickness optimization on the outer walls of the ladder frame. In the second, several multi-material distributions, including steel and aluminium varieties, were investigated. A simplified model was used for an initial hand-calculation analysis of the problem, providing a baseline against which to validate the modeling. A CAD model of the LFC was designed; from the CAD model, a finite element model was extracted and joined using weld and bolt connectors. Following this, a linear static analysis was performed to examine displacement and stresses under loading conditions that simulate harsh driving. The analysis showed significant stress and displacement at the ends of the rails, suggesting improvements could be made elsewhere. For an all-steel frame, an optimization scheme found an optimal thickness distribution, providing a 13% weight reduction over the initial model. To push the weight savings further, a multi-material approach was used. Several material distributions were analyzed; the lightest used aluminium in all but the most heavily stressed components. This enabled a weight reduction of 15% over the initial model, equivalent to approximately 1 mile per gallon (MPG) in fuel economy.

  6. Range resolution improvement in passive bistatic radars using nested FM channels and least squares approach

    NASA Astrophysics Data System (ADS)

    Arslan, Musa T.; Tofighi, Mohammad; Sevimli, Rasim A.; Çetin, Ahmet E.

    2015-05-01

    One of the main disadvantages of using commercial broadcasts in a Passive Bistatic Radar (PBR) system is the limited range resolution. Using multiple broadcast channels to improve radar performance has been offered as a solution to this problem; however, detection performance suffers from the side-lobes that the matched filter creates when multiple channels are used. In this article, we introduce a deconvolution algorithm to suppress the side-lobes. The two-dimensional matched filter output of a PBR is further analyzed as a deconvolution problem. The deconvolution algorithm is based on making successive projections onto the hyperplanes representing the time delay of a target. The resulting iterative deconvolution algorithm is globally convergent because all constraint sets are closed and convex. Simulation results in an FM-based PBR system are presented.
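
    The projection step itself is the classical one for affine constraint sets. A generic sketch of cyclic projections onto hyperplanes {x : a_i @ x = b_i} is given below; how the constraint rows are constructed from the matched-filter output is specific to the paper and not reproduced here:

        import numpy as np

        def project_onto_hyperplanes(A, b, n_sweeps=50, x0=None):
            """Successive orthogonal projections onto the hyperplanes
            defined by the rows of A. The iteration converges because
            each constraint set is closed and convex."""
            A = np.asarray(A, float)
            x = np.zeros(A.shape[1]) if x0 is None else np.asarray(x0, float)
            for _ in range(n_sweeps):
                for a, bi in zip(A, b):
                    x += (bi - a @ x) / (a @ a) * a   # project onto one hyperplane
            return x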

  7. Imaging resolution and properties analysis of super resolution microscopy with parallel detection under different noise, detector and image restoration conditions

    NASA Astrophysics Data System (ADS)

    Yu, Zhongzhi; Liu, Shaocong; Sun, Shiyi; Kuang, Cuifang; Liu, Xu

    2018-06-01

    Parallel detection, which can exploit the additional information of a pinhole-plane image taken at every excitation scan position, is an efficient method to enhance the resolution of a confocal laser scanning microscope. In this paper, we discuss images obtained under different conditions and with different image restoration methods applied to parallel detection, in order to quantitatively compare imaging quality. The conditions include different noise levels and different detector array settings. The image restoration methods include linear deconvolution, pixel reassignment with Richardson-Lucy deconvolution, and pixel reassignment with maximum-likelihood estimation deconvolution. The results show that linear deconvolution combines high efficiency with the best performance under all the different conditions, and is therefore expected to be of use for routine biomedical research.

  8. Astronomy with the Color Blind

    ERIC Educational Resources Information Center

    Smith, Donald A.; Melrose, Justyn

    2014-01-01

    The standard method to create dramatic color images in astrophotography is to record multiple black and white images, each with a different color filter in the optical path, and then tint each frame with a color appropriate to the corresponding filter. When combined, the resulting image conveys information about the sources of emission in the…

  9. Tracking quasi-stationary flow of weak fluorescent signals by adaptive multi-frame correlation.

    PubMed

    Ji, L; Danuser, G

    2005-12-01

    We have developed a novel cross-correlation technique to probe quasi-stationary flow of fluorescent signals in live cells at a spatial resolution that is close to single particle tracking. By correlating image blocks between pairs of consecutive frames and integrating their correlation scores over multiple frame pairs, uncertainty in identifying a globally significant maximum in the correlation score function has been greatly reduced as compared with conventional correlation-based tracking using the signal of only two consecutive frames. This approach proves robust and very effective in analysing images with a weak, noise-perturbed signal contrast where texture characteristics cannot be matched between only a pair of frames. It can also be applied to images that lack prominent features that could be utilized for particle tracking or feature-based template matching. Furthermore, owing to the integration of correlation scores over multiple frames, the method can handle signals with substantial frame-to-frame intensity variation where conventional correlation-based tracking fails. We tested the performance of the method by tracking polymer flow in actin and microtubule cytoskeleton structures labelled at various fluorophore densities providing imagery with a broad range of signal modulation and noise. In applications to fluorescent speckle microscopy (FSM), where the fluorophore density is sufficiently low to reveal patterns of discrete fluorescent marks referred to as speckles, we combined the multi-frame correlation approach proposed above with particle tracking. This hybrid approach allowed us to follow single speckles robustly in areas of high speckle density and fast flow, where previously published FSM analysis methods were unsuccessful. Thus, we can now probe cytoskeleton polymer dynamics in living cells at an entirely new level of complexity and with unprecedented detail.
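
    In outline, the integration step amounts to summing normalized cross-correlation score maps for one image block over several consecutive frame pairs before choosing a displacement. A toy two-dimensional version follows; block size, search radius, and the assumption that the search window stays inside the frames are illustrative choices, not the published parameters:

        import numpy as np

        def integrated_block_match(frames, top, left, block=16, search=5):
            """Accumulate normalized cross-correlation scores of one block
            over all consecutive frame pairs, then pick the displacement.
            Caller must keep (top, left) at least `search` pixels from the
            frame borders."""
            n_shifts = 2 * search + 1
            score = np.zeros((n_shifts, n_shifts))
            for f0, f1 in zip(frames[:-1], frames[1:]):
                tpl = f0[top:top + block, left:left + block].astype(float)
                tpl = tpl - tpl.mean()
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        win = f1[top + dy:top + dy + block,
                                 left + dx:left + dx + block].astype(float)
                        win = win - win.mean()
                        denom = np.sqrt((tpl ** 2).sum() * (win ** 2).sum()) + 1e-12
                        score[dy + search, dx + search] += (tpl * win).sum() / denom
            dy, dx = np.unravel_index(score.argmax(), score.shape)
            return dy - search, dx - search   # flow vector for this block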

  10. What makes African American health disparities newsworthy? An experiment among journalists about story framing.

    PubMed

    Hinnant, Amanda; Oh, Hyun Jee; Caburnay, Charlene A; Kreuter, Matthew W

    2011-12-01

    News stories reporting race-specific health information commonly emphasize disparities between racial groups. But recent research suggests this focus on disparities has unintended effects on African American audiences, generating negative emotions and less interest in preventive behaviors (Nicholson RA, Kreuter MW, Lapka C et al. Unintended effects of emphasizing disparities in cancer communication to African-Americans. Cancer Epidemiol Biomarkers Prev 2008; 17: 2946-52). They found that black adults are more interested in cancer screening after reading about the progress African Americans have made in fighting cancer than after reading stories emphasizing disparities between blacks and whites. This study builds on past findings by (i) examining how health journalists judge the newsworthiness of stories that report race-specific health information by emphasizing disparities versus progress and (ii) determining whether these judgments can be changed by informing journalists of audience reactions to disparity versus progress framing. In a double-blind, randomized experiment, 175 health journalists read either a disparity- or progress-framed story on colon cancer, preceded by either an inoculation about audience effects of such framing or unrelated (i.e. control) information stimuli. Journalists rated the disparity-frame story more favorably than the progress-frame story in every category of news values. However, the inoculation significantly increased positive reactions to the progress-frame story. Informing journalists of audience reactions to race-specific health information could influence how health news stories are framed.

  11. Application of deconvolution interferometry with both Hi-net and KiK-net data

    NASA Astrophysics Data System (ADS)

    Nakata, N.

    2013-12-01

    Application of deconvolution interferometry to wavefields observed by KiK-net, a strong-motion recording network in Japan, is useful for estimating wave velocities and S-wave splitting in the near surface. Using this technique, for example, Nakata and Snieder (2011, 2012) found changes in velocities caused by the Tohoku-Oki earthquake in Japan. At the location of the borehole accelerometer of each KiK-net station, a velocity sensor is also installed as part of a high-sensitivity seismograph network (Hi-net). I present a technique that uses both Hi-net and KiK-net records for computing deconvolution interferometry. The deconvolved waveform obtained from the combination of Hi-net and KiK-net data is similar to the waveform computed from KiK-net data only, which indicates that one can use Hi-net wavefields for deconvolution interferometry. Because Hi-net records have a high signal-to-noise ratio (S/N) and high dynamic resolution, the S/N and the quality of amplitude and phase of deconvolved waveforms can be improved with Hi-net data. These advantages are especially important for short-time moving-window seismic interferometry and deconvolution interferometry using later coda waves.
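
    In its simplest form, deconvolution interferometry divides the spectrum of one record by that of the other with some regularization, yielding the propagation response between the two sensors. The water-level fraction below is an illustrative choice, not a value from the study:

        import numpy as np

        def deconvolution_interferometry(u_surface, u_borehole, eps=0.01):
            """Deconvolve the surface record by the borehole record in the
            frequency domain with water-level regularization."""
            U = np.fft.rfft(u_surface)
            D = np.fft.rfft(u_borehole)
            level = eps * np.mean(np.abs(D) ** 2)          # water level
            G = U * np.conj(D) / (np.abs(D) ** 2 + level)  # regularized division
            return np.fft.irfft(G, n=len(u_surface))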

  12. The Small-scale Structure of Photospheric Convection Retrieved by a Deconvolution Technique Applied to Hinode/SP Data

    NASA Astrophysics Data System (ADS)

    Oba, T.; Riethmüller, T. L.; Solanki, S. K.; Iida, Y.; Quintero Noda, C.; Shimizu, T.

    2017-11-01

    Solar granules are bright patterns surrounded by dark channels, called intergranular lanes, in the solar photosphere and are a manifestation of overshooting convection. Observational studies generally find stronger upflows in granules and weaker downflows in intergranular lanes. This trend is, however, inconsistent with the results of numerical simulations in which downflows are stronger than upflows through the joint action of gravitational acceleration/deceleration and pressure gradients. One cause of this discrepancy is the image degradation caused by optical distortion and light diffraction and scattering that takes place in an imaging instrument. We apply a deconvolution technique to Hinode/SP data in an attempt to recover the original solar scene. Our results show a significant enhancement in both the convective upflows and downflows, but particularly for the latter. After deconvolution, the up- and downflows reach maximum amplitudes of -3.0 km s⁻¹ and +3.0 km s⁻¹ at an average geometrical height of roughly 50 km, respectively. We found that the velocity distributions after deconvolution match those derived from numerical simulations. After deconvolution, the net LOS velocity averaged over the whole field of view lies close to zero, as expected in a rough sense from mass balance.

  13. Application of deterministic deconvolution of ground-penetrating radar data in a study of carbonate strata

    USGS Publications Warehouse

    Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.

    2004-01-01

    We successfully applied deterministic deconvolution to real ground-penetrating radar (GPR) data by using the source wavelet that was generated in and transmitted through air as the operator. The GPR data were collected with 400-MHz antennas on a bench adjacent to a cleanly exposed quarry face. The quarry site is characterized by horizontally bedded carbonate strata with shale partings. In order to provide ground truth for this deconvolution approach, 23 conductive rods were drilled into the quarry face at key locations. The steel rods provided critical information for: (1) correlation between reflections on GPR data and geologic features exposed in the quarry face, (2) GPR resolution limits, (3) accuracy of velocities calculated from common midpoint data and (4) identifying any multiples. Comparing the results of deconvolved data with non-deconvolved data demonstrates the effectiveness of deterministic deconvolution in low dielectric-loss media for increased accuracy of velocity models (improved at least 10-15% in our study after deterministic deconvolution), increased vertical and horizontal resolution of specific geologic features and more accurate representation of geologic features as confirmed from detailed study of the adjacent quarry wall. © 2004 Elsevier B.V. All rights reserved.

  14. Peptide de novo sequencing of mixture tandem mass spectra

    PubMed Central

    Hotta, Stéphanie Yuki Kolbeck; Verano‐Braga, Thiago; Kjeldsen, Frank

    2016-01-01

    The impact of mixture spectra deconvolution on the performance of four popular de novo sequencing programs was tested using artificially constructed mixture spectra as well as experimental proteomics data. Mixture fragmentation spectra are recognized as a limitation in proteomics because they decrease the identification performance using database search engines. De novo sequencing approaches are expected to be even more sensitive to the reduction in mass spectrum quality resulting from peptide precursor co‐isolation and thus prone to false identifications. The deconvolution approach matched complementary b‐, y‐ions to each precursor peptide mass, which allowed the creation of virtual spectra containing sequence specific fragment ions of each co‐isolated peptide. Deconvolution processing resulted in equally efficient identification rates but increased the absolute number of correctly sequenced peptides. The improvement was in the range of 20–35% additional peptide identifications for a HeLa lysate sample. Some correct sequences were identified only using unprocessed spectra; however, the number of these was lower than those where improvement was obtained by mass spectral deconvolution. Tight candidate peptide score distribution and high sensitivity to small changes in the mass spectrum introduced by the employed deconvolution method could explain some of the missing peptide identifications. PMID:27329701

  15. Deconvolution single shot multibox detector for supermarket commodity detection and classification

    NASA Astrophysics Data System (ADS)

    Li, Dejian; Li, Jian; Nie, Binling; Sun, Shouqian

    2017-07-01

    This paper proposes an image detection model to detect and classify commodities on supermarket shelves. Based on the principle that feature quality directly affects the accuracy of the final classification, feature maps are constructed that combine high-level features with bottom-level features. Fixed anchors are then set on those feature maps, and finally the label and position of each commodity are generated by box regression and classification. We propose a model named Deconvolution Single Shot MultiBox Detector and evaluate it using 300 images photographed from real supermarket shelves. Following the same protocol as other recent methods, the results showed that our model outperformed the baseline methods.

  16. Deconvolution using a neural network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehman, S.K.

    1990-11-15

    Viewing one-dimensional deconvolution as a matrix inversion problem, we compare a neural network backpropagation matrix inverse with LMS and pseudo-inverse solutions. This is largely an exercise in understanding how our neural network code works. 1 ref.
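
    The matrix view of the problem is easy to make concrete. This sketch shows only the pseudo-inverse baseline (the neural-network and LMS variants compared in the report are not reproduced); the kernel and sizes are arbitrary examples:

        import numpy as np
        from scipy.linalg import toeplitz, pinv

        # 1-D deconvolution as matrix inversion: y = H x, with H the
        # (lower-triangular Toeplitz) convolution matrix of the kernel h.
        h = np.array([0.25, 0.5, 0.25])                  # example blur kernel
        n = 64
        col = np.r_[h, np.zeros(n - len(h))]
        H = toeplitz(col, np.r_[h[0], np.zeros(n - 1)])  # convolution matrix

        x = np.zeros(n); x[[10, 30, 31]] = 1.0           # sparse test signal
        y = H @ x                                        # blurred observation
        x_hat = pinv(H) @ y                              # pseudo-inverse solution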

  17. Deconvolution of gas chromatographic data

    NASA Technical Reports Server (NTRS)

    Howard, S.; Rayborn, G. H.

    1980-01-01

    The use of deconvolution methods on gas chromatographic data to obtain an accurate determination of the relative amounts of each material present by mathematically separating the merged peaks is discussed. Data were obtained on a gas chromatograph with a flame ionization detector. Chromatograms of five xylenes with differing degrees of separation were generated by varying the column temperature at selected rates. The merged peaks were then successfully separated by deconvolution. The concept of function continuation in the frequency domain was introduced in striving to reach the theoretical limit of accuracy, but proved to be only partially successful.

  18. Detailed interpretation of aeromagnetic data from the Patagonia Mountains area, southeastern Arizona

    USGS Publications Warehouse

    Bultman, Mark W.

    2015-01-01

    Euler deconvolution depth estimates derived from aeromagnetic data with a structural index of 0 show that mapped faults on the northern margin of the Patagonia Mountains generally agree with the depth estimates in the new geologic model. The deconvolution depth estimates also show that the concealed Patagonia Fault southwest of the Patagonia Mountains is more complex than recent geologic mapping represents. Additionally, Euler deconvolution depth estimates with a structural index of 2 locate many potential intrusive bodies that might be associated with known and unknown mineralization.
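
    For reference, the Euler deconvolution estimate in a single moving data window reduces to a small least-squares problem. This sketch assumes gridded field values with precomputed gradients; the variable names are illustrative, not from the report:

        import numpy as np

        def euler_window_solve(x, y, z, T, Tx, Ty, Tz, N):
            """Solve Euler's homogeneity equation in one window:
            (x-x0)Tx + (y-y0)Ty + (z-z0)Tz = N(B - T),
            rearranged into G m = d with m = (x0, y0, z0, B).
            N is the structural index (0 for contacts/faults, 2 for
            pipe-like intrusive bodies, as used in the abstract)."""
            G = np.column_stack([Tx, Ty, Tz, N * np.ones_like(T)])
            d = x * Tx + y * Ty + z * Tz + N * T
            m, *_ = np.linalg.lstsq(G, d, rcond=None)
            return m   # source position (x0, y0, z0) and background B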

  19. Background suppression of infrared small target image based on inter-frame registration

    NASA Astrophysics Data System (ADS)

    Ye, Xiubo; Xue, Bindang

    2018-04-01

    We propose a multi-frame background suppression method for remote infrared small target detection. Inter-frame information is necessary when heavy background clutter makes it difficult to distinguish real targets from false alarms. A registration procedure based on point matching in image patches is used to compensate for local deformation of the background. The target can then be separated by background subtraction. Experiments show our method serves as an effective preliminary step for target detection.
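
    As a stand-in for the patch-based point matching (which the abstract does not detail), a simple per-patch translation estimate via phase correlation, followed by subtraction, illustrates the register-then-subtract idea:

        import numpy as np

        def phase_correlation_shift(ref, img):
            """Estimate the integer translation that maps `img` onto `ref`."""
            F = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
            corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
            dy, dx = np.unravel_index(corr.argmax(), corr.shape)
            # Unwrap shifts larger than half the patch size.
            if dy > ref.shape[0] // 2: dy -= ref.shape[0]
            if dx > ref.shape[1] // 2: dx -= ref.shape[1]
            return dy, dx

        def background_suppress(prev, curr):
            """Shift-compensate the previous frame, then subtract it."""
            dy, dx = phase_correlation_shift(curr, prev)
            aligned = np.roll(prev, (dy, dx), axis=(0, 1))
            return curr.astype(float) - aligned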

  20. Specialized CCDs for high-frame-rate visible imaging and UV imaging applications

    NASA Astrophysics Data System (ADS)

    Levine, Peter A.; Taylor, Gordon C.; Shallcross, Frank V.; Tower, John R.; Lawler, William B.; Harrison, Lorna J.; Socker, Dennis G.; Marchywka, Mike

    1993-11-01

    This paper reports recent progress by the authors in two distinct charge coupled device (CCD) technology areas. The first technology area is high frame rate, multi-port, frame transfer imagers. A 16-port, 512 X 512, split frame transfer imager and a 32-port, 1024 X 1024, split frame transfer imager are described. The thinned, backside illuminated devices feature on-chip correlated double sampling, buried blooming drains, and a room temperature dark current of less than 50 pA/cm2, without surface accumulation. The second technology area is vacuum ultraviolet (UV) frame transfer imagers. A developmental 1024 X 640 frame transfer imager with 20% quantum efficiency at 140 nm is described. The device is fabricated in a p-channel CCD process, thinned for backside illumination, and utilizes special packaging to achieve stable UV response.

  1. Inverting Monotonic Nonlinearities by Entropy Maximization

    PubMed Central

    López-de-Ipiña Pena, Karmele; Caiafa, Cesar F.

    2016-01-01

    This paper proposes a new method for blind inversion of a monotonic nonlinear map applied to a sum of random variables. Such mixtures of random variables are found, for example, in source separation and Wiener system inversion problems. The importance of our proposed method rests on the fact that it decouples the estimation of the nonlinear part (nonlinear compensation) from the estimation of the linear one (source separation matrix or deconvolution filter), which can then be solved by any convenient linear algorithm. Our new nonlinear compensation algorithm, the MaxEnt algorithm, generalizes the idea of Gaussianization of the observation by maximizing its entropy instead. We developed two versions of our algorithm, based either on a polynomial or on a neural network parameterization of the nonlinear function. We provide a sufficient condition on the nonlinear function and the probability distribution that guarantees that the MaxEnt method succeeds in compensating the distortion. Through an extensive set of simulations, MaxEnt is compared with existing algorithms for blind approximation of nonlinear maps. Experiments show that MaxEnt is able to successfully compensate monotonic distortions, outperforming other methods in terms of the obtained signal-to-noise ratio in many important cases, for example when the number of variables in a mixture is small. Besides its ability to compensate nonlinearities, MaxEnt is very robust, i.e., it shows small variability in the results. PMID:27780261

  2. Inverting Monotonic Nonlinearities by Entropy Maximization.

    PubMed

    Solé-Casals, Jordi; López-de-Ipiña Pena, Karmele; Caiafa, Cesar F

    2016-01-01

    This paper proposes a new method for blind inversion of a monotonic nonlinear map applied to a sum of random variables. Such mixtures of random variables are found, for example, in source separation and Wiener system inversion problems. The importance of our proposed method rests on the fact that it decouples the estimation of the nonlinear part (nonlinear compensation) from the estimation of the linear one (source separation matrix or deconvolution filter), which can then be solved by any convenient linear algorithm. Our new nonlinear compensation algorithm, the MaxEnt algorithm, generalizes the idea of Gaussianization of the observation by maximizing its entropy instead. We developed two versions of our algorithm, based either on a polynomial or on a neural network parameterization of the nonlinear function. We provide a sufficient condition on the nonlinear function and the probability distribution that guarantees that the MaxEnt method succeeds in compensating the distortion. Through an extensive set of simulations, MaxEnt is compared with existing algorithms for blind approximation of nonlinear maps. Experiments show that MaxEnt is able to successfully compensate monotonic distortions, outperforming other methods in terms of the obtained signal-to-noise ratio in many important cases, for example when the number of variables in a mixture is small. Besides its ability to compensate nonlinearities, MaxEnt is very robust, i.e., it shows small variability in the results.
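
    The Gaussianization baseline that MaxEnt generalizes takes only a few lines: map the empirical ranks of the distorted observation through the inverse Gaussian CDF. This sketch shows that classical baseline only; the paper's entropy-maximizing polynomial and neural-network parameterizations are not reproduced:

        import numpy as np
        from scipy.stats import norm, rankdata

        def gaussianize(s):
            """Blindly undo a monotonic distortion of an (approximately
            Gaussian) mixture by forcing the sample to a Gaussian shape.
            The map is monotonic, so ordering is preserved."""
            u = rankdata(s) / (len(s) + 1.0)   # ranks mapped into (0, 1)
            return norm.ppf(u)                  # inverse Gaussian CDF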

  3. SU-G-IeP3-08: Image Reconstruction for Scanning Imaging System Based On Shape-Modulated Point Spreading Function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Ruixing; Yang, LV; Xu, Kele

    Purpose: Deconvolution is a widely used tool in image reconstruction when a linear imaging system has been blurred by an imperfect system transfer function. However, because of the Gaussian-like distribution of the point spread function (PSF), image components with coherent high frequency are hard to restore in most previous scanning imaging systems, even when a relatively accurate PSF is available. We propose a novel method for deconvolution of images obtained with a shape-modulated PSF. Methods: We use two different types of PSF - Gaussian shape and donut shape - to convolve the original image in order to simulate the scanning imaging process. By deconvolving the two images with corresponding given priors, the quality of the deblurred images is compared. We then find the critical size at which the donut shape gives deconvolution results similar to those of the Gaussian shape. Calculation of the tight-focusing process using a radially polarized beam shows that a donut of this size is achievable under the same conditions. Results: The effects of different relative sizes of the donut and Gaussian shapes are investigated. When the full width at half maximum (FWHM) ratio of the donut and Gaussian shapes is set to about 1.83, similar resolution results are obtained with our deconvolution method. Decreasing the size of the donut favors the deconvolution method. A mask with both amplitude and phase modulation is used to create a donut-shaped PSF, in contrast to the non-modulated Gaussian PSF, and a donut smaller than our critical value is obtained. Conclusion: The donut-shaped PSF is shown to be useful and achievable in imaging and deconvolution processing, with potential practical applications in high-resolution imaging of biological samples.

  4. Effective properties of a fly ash geopolymer: Synergistic application of X-ray synchrotron tomography, nanoindentation, and homogenization models

    DOE PAGES

    Das, Sumanta; Yang, Pu; Singh, Sudhanshu S.; ...

    2015-09-02

    Microstructural and micromechanical investigation of a fly ash-based geopolymer using: (i) synchrotron x-ray tomography (XRT) to determine the volume fraction and tortuosity of pores that are influential in fluid transport, (ii) mercury intrusion porosimetry (MIP) to capture the volume fraction of smaller pores, (iii) scanning electron microscopy (SEM) combined with multi-label thresholding to identify and characterize the solid phases in the microstructure, and (iv) nanoindentation to determine the component phase elastic properties using statistical deconvolution, is reported in this paper. The phase volume fractions and elastic properties are used in multi-step mean field homogenization (Mori-Tanaka and double inclusion) models to determine the homogenized macroscale elastic modulus of the composite. The homogenized elastic moduli are in good agreement with the flexural elastic modulus determined on macroscale paste beams. As a result, the combined use of microstructural and micromechanical characterization tools at multiple scales provides valuable information towards the material design of fly ash geopolymers.

  5. A feasibility study for the application of seismic interferometry by multidimensional deconvolution for lithospheric-scale imaging

    NASA Astrophysics Data System (ADS)

    Ruigrok, Elmer; van der Neut, Joost; Djikpesse, Hugues; Chen, Chin-Wu; Wapenaar, Kees

    2010-05-01

    Active-source surveys are widely used for the delineation of hydrocarbon accumulations. Most source and receiver configurations are designed to illuminate the first 5 km of the earth. For a deep understanding of the evolution of the crust, much larger depths need to be illuminated. The use of large-scale active surveys is feasible, but rather costly. As an alternative, we use passive acquisition configurations, aiming at detecting responses from distant earthquakes, in combination with seismic interferometry (SI). SI refers to the principle of generating new seismic responses by combining seismic observations at different receiver locations. We apply SI to the earthquake responses to obtain responses as if there were a source at each receiver position in the array. These responses are subsequently migrated to obtain an image of the lithosphere. Conventionally, SI is applied by crosscorrelation of responses. Recently, an alternative implementation was proposed: SI by multidimensional deconvolution (MDD) (Wapenaar et al. 2008). SI by MDD compensates for both source-sampling and source-wavelet irregularities. Another advantage is that the MDD relation also holds for media with severe anelastic losses. A severe restriction for the implementation of MDD, though, was the need to estimate responses without free-surface interaction from the earthquake responses. To mitigate this restriction, van Groenestijn and Verschuur (2009) proposed to introduce the incident wavefield as an additional unknown in the inversion process. As an alternative solution, van der Neut et al. (2010) showed that the required wavefield separation may be implemented after a crosscorrelation step. These last two approaches facilitate the application of MDD for lithospheric-scale imaging. In this work, we study the feasibility of implementing MDD for teleseismic wavefields. We address problems specific to teleseismic wavefields, such as long and complicated source wavelets, source-side reverberations and illumination gaps. We exemplify the feasibility of SI by MDD on synthetic data, based on field data from the Laramie and POLARIS-MIT arrays. References: van Groenestijn, G.J.A. & Verschuur, D.J., 2009. Estimation of primaries by sparse inversion from passive seismic data, Expanded Abstracts, 1597-1601, SEG. van der Neut, J., Ruigrok, E.N., Draganov, D.S. & Wapenaar, K., 2010. Retrieving the earth's reflection response by multi-dimensional deconvolution of ambient seismic noise, Extended Abstracts, submitted, EAGE. Wapenaar, K., van der Neut, J. & Ruigrok, E., 2008. Passive seismic interferometry by multidimensional deconvolution, Geophysics, 75, A51-A56.

  6. Radar Sensing for Intelligent Vehicles in Urban Environments

    PubMed Central

    Reina, Giulio; Johnson, David; Underwood, James

    2015-01-01

    Radar overcomes the shortcomings of laser, stereovision, and sonar because it can operate successfully in dusty, foggy, blizzard-blinding, and poorly lit scenarios. This paper presents a novel method for ground and obstacle segmentation based on radar sensing. The algorithm operates directly in the sensor frame, without the need for a separate synchronised navigation source, calibration parameters describing the location of the radar in the vehicle frame, or the geometric restrictions made in the previous main method in the field. Experimental results are presented in various urban scenarios to validate this approach, showing its potential applicability for advanced driving assistance systems and autonomous vehicle operations. PMID:26102493

  7. Radar Sensing for Intelligent Vehicles in Urban Environments.

    PubMed

    Reina, Giulio; Johnson, David; Underwood, James

    2015-06-19

    Radar overcomes the shortcomings of laser, stereovision, and sonar because it can operate successfully in dusty, foggy, blizzard-blinding, and poorly lit scenarios. This paper presents a novel method for ground and obstacle segmentation based on radar sensing. The algorithm operates directly in the sensor frame, without the need for a separate synchronised navigation source, calibration parameters describing the location of the radar in the vehicle frame, or the geometric restrictions made in the previous main method in the field. Experimental results are presented in various urban scenarios to validate this approach, showing its potential applicability for advanced driving assistance systems and autonomous vehicle operations.

  8. Processing strategy for water-gun seismic data from the Gulf of Mexico

    USGS Publications Warehouse

    Lee, Myung W.; Hart, Patrick E.; Agena, Warren F.

    2000-01-01

    In order to study the regional distribution of gas hydrates and their potential relationship to large-scale sea-floor failures, more than 1,300 km of near-vertical-incidence seismic profiles were acquired using a 15-in³ water gun across the upper- and middle-continental slope in the Garden Banks and Green Canyon regions of the Gulf of Mexico. Because of the highly mixed-phase water-gun signature, caused mainly by a precursor of the source arriving about 18 ms ahead of the main pulse, a conventional processing scheme based on the minimum-phase assumption is not suitable for this data set. A conventional processing scheme suppresses the reverberations and compresses the main pulse, but the failure to suppress precursors results in complex interference between the precursors and primary reflections, thus obscuring true reflections. To clearly image the subsurface without interference from the precursors, a wavelet deconvolution based on the mixed-phase assumption using a variable norm is attempted. This non-minimum-phase wavelet deconvolution compresses a long-wave-train water-gun signature into a simple zero-phase wavelet. A second-zero-crossing predictive deconvolution followed by a wavelet deconvolution suppressed variable ghost arrivals attributed to the variable depths of receivers. The processing strategy of using wavelet deconvolution followed by a second-zero-crossing deconvolution resulted in a sharp and simple wavelet and a better definition of the polarity of reflections. Also, the application of dip moveout correction enhanced the lateral resolution of reflections and substantially suppressed coherent noise.
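
    Conceptually, the wavelet deconvolution step amounts to designing a shaping filter that maps the known mixed-phase signature onto a simple zero-phase pulse. A least-squares (Wiener) shaping-filter sketch with prewhitening is shown below; the variable-norm aspect of the actual method is not reproduced, and the filter length and prewhitening fraction are illustrative assumptions:

        import numpy as np
        from scipy.linalg import toeplitz

        def shaping_filter(wavelet, desired, filt_len=64, eps=1e-3):
            """Least-squares filter f such that f * wavelet ~= desired."""
            # Autocorrelation of the wavelet at non-negative lags.
            r = np.correlate(wavelet, wavelet, "full")[len(wavelet) - 1:]
            r = np.r_[r, np.zeros(max(0, filt_len - len(r)))][:filt_len]
            R = toeplitz(r) + eps * r[0] * np.eye(filt_len)  # prewhitened normal matrix
            # Cross-correlation of the desired output with the wavelet.
            g_full = np.correlate(desired, wavelet, "full")
            g = np.r_[g_full[len(wavelet) - 1:], np.zeros(filt_len)][:filt_len]
            return np.linalg.solve(R, g)
            # Apply to a whole trace with np.convolve(trace, f, mode="same").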

  9. Dereplication of Natural Products Using GC-TOF Mass Spectrometry: Improved Metabolite Identification by Spectral Deconvolution Ratio Analysis.

    PubMed

    Carnevale Neto, Fausto; Pilon, Alan C; Selegato, Denise M; Freire, Rafael T; Gu, Haiwei; Raftery, Daniel; Lopes, Norberto P; Castro-Gamboa, Ian

    2016-01-01

    Dereplication based on hyphenated techniques has been extensively applied in plant metabolomics, thereby avoiding re-isolation of known natural products. However, due to the complex nature of biological samples and their large concentration range, dereplication requires the use of chemometric tools to comprehensively extract information from the acquired data. In this work we developed a reliable GC-MS-based method for the identification of non-targeted plant metabolites by combining the Ratio Analysis of Mass Spectrometry deconvolution tool (RAMSY) with Automated Mass Spectral Deconvolution and Identification System software (AMDIS). Plant species from Solanaceae, Chrysobalanaceae and Euphorbiaceae were selected as model systems due to their molecular diversity, ethnopharmacological potential, and economic value. The samples were analyzed by GC-MS after methoximation and silylation reactions. Dereplication was initiated with the use of a factorial design of experiments to determine the best AMDIS configuration for each sample, considering linear retention indices and mass spectral data. A heuristic factor (CDF, compound detection factor) was developed and applied to the AMDIS results in order to decrease the false-positive rates. Despite the enhancement in deconvolution and peak identification, the empirical AMDIS method was not able to fully deconvolute all GC-peaks, leading to low MF values and/or missing metabolites. RAMSY was applied as a complementary deconvolution method to AMDIS to peaks exhibiting substantial overlap, resulting in recovery of low-intensity co-eluted ions. The results from this combination of optimized AMDIS with RAMSY attested to the ability of this approach as an improved dereplication method for complex biological samples such as plant extracts.

  10. Dereplication of Natural Products Using GC-TOF Mass Spectrometry: Improved Metabolite Identification by Spectral Deconvolution Ratio Analysis

    PubMed Central

    Carnevale Neto, Fausto; Pilon, Alan C.; Selegato, Denise M.; Freire, Rafael T.; Gu, Haiwei; Raftery, Daniel; Lopes, Norberto P.; Castro-Gamboa, Ian

    2016-01-01

    Dereplication based on hyphenated techniques has been extensively applied in plant metabolomics, thereby avoiding re-isolation of known natural products. However, due to the complex nature of biological samples and their large concentration range, dereplication requires the use of chemometric tools to comprehensively extract information from the acquired data. In this work we developed a reliable GC-MS-based method for the identification of non-targeted plant metabolites by combining the Ratio Analysis of Mass Spectrometry deconvolution tool (RAMSY) with Automated Mass Spectral Deconvolution and Identification System software (AMDIS). Plant species from Solanaceae, Chrysobalanaceae and Euphorbiaceae were selected as model systems due to their molecular diversity, ethnopharmacological potential, and economic value. The samples were analyzed by GC-MS after methoximation and silylation reactions. Dereplication was initiated with the use of a factorial design of experiments to determine the best AMDIS configuration for each sample, considering linear retention indices and mass spectral data. A heuristic factor (CDF, compound detection factor) was developed and applied to the AMDIS results in order to decrease the false-positive rates. Despite the enhancement in deconvolution and peak identification, the empirical AMDIS method was not able to fully deconvolute all GC-peaks, leading to low MF values and/or missing metabolites. RAMSY was applied as a complementary deconvolution method to AMDIS to peaks exhibiting substantial overlap, resulting in recovery of low-intensity co-eluted ions. The results from this combination of optimized AMDIS with RAMSY attested to the ability of this approach as an improved dereplication method for complex biological samples such as plant extracts. PMID:27747213

  11. Multi-GPU maximum entropy image synthesis for radio astronomy

    NASA Astrophysics Data System (ADS)

    Cárcamo, M.; Román, P. E.; Casassus, S.; Moral, V.; Rannou, F. R.

    2018-01-01

    The maximum entropy method (MEM) is a well-known deconvolution technique in radio interferometry. This method solves a non-linear optimization problem with an entropy regularization term. Other heuristics such as CLEAN are faster but highly user dependent. Nevertheless, MEM has the following advantages: it is unsupervised, it has a statistical basis, and it gives better resolution and better image quality under certain conditions. This work presents a high-performance GPU version of non-gridding MEM, which is tested using real and simulated data. We propose a single-GPU and a multi-GPU implementation for single and multi-spectral data, respectively. We also make use of the Peer-to-Peer and Unified Virtual Addressing features of newer GPUs, which allow multiple GPUs to be exploited transparently and efficiently. Several ALMA data sets are used to demonstrate the effectiveness in imaging and to evaluate GPU performance. The results show that a speedup of 1000 to 5000 times over a sequential version can be achieved, depending on data and image size. This allows the HD142527 CO(6-5) short-baseline data set to be reconstructed in 2.1 min, instead of the 2.5 days required by a sequential CPU version.
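
    For a concrete picture of the objective, the toy Python sketch below minimizes a chi-squared misfit plus a negative-entropy regularizer by projected gradient descent on a 1-D positive image. It is a hypothetical illustration only, not the paper's non-gridded GPU implementation; `psf_matrix`, the reference level `M`, and the step size are assumed placeholders.

        import numpy as np

        def mem_deconvolve(dirty, psf_matrix, lam=0.01, M=1.0, n_iter=500, lr=1e-3):
            """Minimize ||A x - d||^2 + lam * sum(x log(x / M)) with x > 0."""
            x = np.full(psf_matrix.shape[1], M)
            for _ in range(n_iter):
                grad = 2.0 * psf_matrix.T @ (psf_matrix @ x - dirty) \
                       + lam * (np.log(x / M) + 1.0)     # entropy-term gradient
                x = np.clip(x - lr * grad, 1e-12, None)  # enforce positivity
            return x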

  12. An Indoor SLAM Method Based on Kinect and Multi-Feature Extended Information Filter

    NASA Astrophysics Data System (ADS)

    Chang, M.; Kang, Z.

    2017-09-01

    Based on the ORB-SLAM framework, in this paper the transformation parameters between adjacent Kinect image frames are computed using ORB keypoints, from which the a priori information matrix and information vector are calculated, realizing the motion update of the multi-feature extended information filter. From the point cloud formed by the depth image, an ICP algorithm was used to extract point features of the scene and build an observation model, while the a posteriori information matrix and information vector were calculated, weakening the influence of error accumulation in the positioning process. Furthermore, this paper applied the ORB-SLAM framework to realize real-time autonomous positioning in unknown indoor environments. In the end, Lidar was used to collect scene data in order to evaluate the positioning accuracy of the method put forward in this paper.

  13. The first rapid assessment of avoidable blindness (RAAB) in Thailand.

    PubMed

    Isipradit, Saichin; Sirimaharaj, Maytinee; Charukamnoetkanok, Puwat; Thonginnetra, Oraorn; Wongsawad, Warapat; Sathornsumetee, Busaba; Somboonthanakij, Sudawadee; Soomsawasdi, Piriya; Jitawatanarat, Umapond; Taweebanjongsin, Wongsiri; Arayangkoon, Eakkachai; Arame, Punyawee; Kobkoonthon, Chinsuchee; Pangputhipong, Pannet

    2014-01-01

    The majority of vision loss is preventable or treatable. Population surveys are crucial for planning, implementation, and monitoring policies and interventions to eliminate avoidable blindness and visual impairments. This is the first rapid assessment of avoidable blindness (RAAB) study in Thailand. A cross-sectional study of a population in Thailand aged 50 years or over aimed to assess the prevalence and causes of blindness and visual impairments. Using the Thailand National Census 2010 as the sampling frame, a stratified four-stage cluster sampling based on a probability proportional to size was conducted in 176 enumeration areas from 11 provinces. Participants received comprehensive eye examination by ophthalmologists. The age- and sex-adjusted prevalence of blindness (presenting visual acuity (VA) <20/400), severe visual impairment (VA <20/200 but ≥20/400), and moderate visual impairment (VA <20/70 but ≥20/200) were 0.6% (95% CI: 0.5-0.8), 1.3% (95% CI: 1.0-1.6), and 12.6% (95% CI: 10.8-14.5), respectively. There was no significant difference among the four regions of Thailand. Cataract was the main cause of vision loss, accounting for 69.7% of blindness. Cataract surgical coverage in persons was 95.1% for a cut-off VA of 20/400. Refractive errors, diabetic retinopathy, glaucoma, and corneal opacities were responsible for 6.0%, 5.1%, 4.0%, and 2.0% of blindness, respectively. Thailand is on track to achieve the goal of VISION 2020. However, there is still much room for improvement. Policy refinements and innovative interventions are recommended to alleviate blindness and visual impairments, especially regarding the backlog of blinding cataract, management of non-communicable, chronic, age-related eye diseases such as glaucoma, age-related macular degeneration, and diabetic retinopathy, prevention of childhood blindness, and establishment of a robust eye health information system.

  14. Exploring Value-Added Options - Opportunities in Mouldings and Millwork

    Treesearch

    Bob Smith; Philip A. Araman

    1997-01-01

    The millwork industry, which includes manufacture of doors, windows, stair parts, blinds, mouldings, picture frame material, and assorted trim, can be a lucrative value-added opportunity for sawmills. Those entering the value-added millwork market often find that it is a great opportunity to generate greater profits from upper grades and utility species, such as yellow...

  15. Locally Learning Biomedical Data Using Diffusion Frames

    DTIC Science & Technology

    2012-01-01

    age-related macular degeneration (AMD) patients. All eye-related data were collected by our collaborators at the... in Table 2. 6.2. Age-related macular degeneration: Age-related macular degeneration is the most common cause of blindness among the elderly population... maculopathy and age-related macular degeneration. The international ARM epidemiological study group. Surv. Ophthalmol. 39, 367–374.

  16. Media Selection for the Development of a Hypermedia/Multimedia Instructional Model for the Navy’s 76mm Gun

    DTIC Science & Technology

    1993-09-01

    placed on the timeline when called. The timeline editor is also used when creating a hypertext/hypermedia document. a. FrameMaker: FrameMaker is a... FrameMaker's multi-platform capability permits document sharing and allows for easy importing of text and graphics from other application software. FrameMaker... Technology Corporation, FrameMaker Reference, September 1990. Glaser, R., "Education and Thinking: The Role of Knowledge", American Psychologist, Vol. 39, No

  17. Multiple Sensor Camera for Enhanced Video Capturing

    NASA Astrophysics Data System (ADS)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

    Camera resolution has been drastically improved in response to the demand for high-quality digital images. For example, digital still cameras now have several megapixels. Although video cameras have higher frame rates, their resolution is lower than that of still cameras. Thus, high resolution and high frame rate are incompatible in ordinary cameras on the market. This problem is difficult to solve with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors of different spatio-temporal resolution in a single camera cabinet to capture higher-resolution and higher-frame-rate information separately. We built a prototype camera which can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500, 90 fps) videos. We also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.

  18. Detecting Signage and Doors for Blind Navigation and Wayfinding

    PubMed Central

    Wang, Shuihua; Yang, Xiaodong; Tian, Yingli

    2013-01-01

    Signage plays a very important role in finding destinations in applications of navigation and wayfinding. In this paper, we propose a novel framework to detect doors and signage to help blind people access unfamiliar indoor environments. In order to eliminate interference and improve the accuracy of signage detection, we first extract the attended areas by using a saliency map. Then the signage is detected in the attended areas by using bipartite graph matching. The proposed method can handle multiple signage detection. Furthermore, in order to provide more information for blind users to access the area associated with the detected signage, we develop a robust method to detect doors based on a geometric door frame model which is independent of door appearance. Experimental results on our collected datasets of indoor signage and doors demonstrate the effectiveness and efficiency of our proposed method. PMID:23914345

  19. Detecting Signage and Doors for Blind Navigation and Wayfinding.

    PubMed

    Wang, Shuihua; Yang, Xiaodong; Tian, Yingli

    2013-07-01

    Signage plays a very important role in finding destinations in applications of navigation and wayfinding. In this paper, we propose a novel framework to detect doors and signage to help blind people access unfamiliar indoor environments. In order to eliminate interference and improve the accuracy of signage detection, we first extract the attended areas by using a saliency map. Then the signage is detected in the attended areas by using bipartite graph matching. The proposed method can handle multiple signage detection. Furthermore, in order to provide more information for blind users to access the area associated with the detected signage, we develop a robust method to detect doors based on a geometric door frame model which is independent of door appearance. Experimental results on our collected datasets of indoor signage and doors demonstrate the effectiveness and efficiency of our proposed method.

  20. Transporter-Enzyme Interplay: Deconvoluting Effects of Hepatic Transporters and Enzymes on Drug Disposition Using Static and Dynamic Mechanistic Models.

    PubMed

    Varma, Manthena V; El-Kattan, Ayman F

    2016-07-01

    A large body of evidence suggests hepatic uptake transporters, organic anion-transporting polypeptides (OATPs), are of high clinical relevance in determining the pharmacokinetics of substrate drugs, based on which recent regulatory guidances to industry recommend appropriate assessment of investigational drugs for potential drug interactions. We recently proposed an extended clearance classification system (ECCS) framework in which the systemic clearance of class 1B and 3B drugs is likely determined by hepatic uptake. The ECCS framework therefore predicts the possibility of drug-drug interactions (DDIs) involving OATPs and the effects of genetic variants of SLCO1B1 early in discovery and facilitates decision making in candidate selection and progression. Although OATP-mediated uptake is often the rate-determining process in the hepatic clearance of substrate drugs, metabolic and/or biliary components also contribute to the overall hepatic disposition and, more importantly, to liver exposure. Clinical evidence suggests that alteration in biliary efflux transport or metabolic enzymes associated with genetic polymorphism leads to changes in the pharmacodynamic response of statins, for which the pharmacological target resides in the liver. Perpetrator drugs may show inhibitory and/or induction effects on transporters and enzymes simultaneously. It is therefore important to adopt models that frame these multiple processes in a mechanistic sense for quantitative DDI predictions and to deconvolute the effects of individual processes on the plasma and hepatic exposure. In vitro data-informed mechanistic static and physiologically based pharmacokinetic models have proven useful in rationalizing and predicting transporter-mediated DDIs and the complex DDIs involving transporter-enzyme interplay. © 2016, The American College of Clinical Pharmacology.
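
    As a deliberately simplified illustration of the static end of that modeling spectrum, the sketch below computes the basic R-value commonly cited in regulatory guidance for reversible inhibition of hepatic uptake; it is not the authors' mechanistic model, and the parameter values and the cut-off shown are assumptions for illustration only.

        def static_r_value(imax_u_uM: float, ki_uM: float) -> float:
            """Basic static model: R = 1 + Imax,u / Ki for a reversible
            inhibitor of hepatic uptake (OATP)."""
            return 1.0 + imax_u_uM / ki_uM

        # Hypothetical perpetrator: unbound Cmax 0.5 uM, in vitro Ki 1.0 uM
        r = static_r_value(0.5, 1.0)   # R = 1.5
        flagged = r >= 1.1             # assumed screening cut-off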

  1. Multi-frame acquisition scheme for efficient energy-dispersive X-ray magnetic circular dichroism in pulsed high magnetic fields at the Fe K-edge

    PubMed Central

    Strohm, Cornelius; Perrin, Florian; Dominguez, Marie-Christine; Headspith, Jon; van der Linden, Peter; Mathon, Olivier

    2011-01-01

    Using a fast silicon strip detector, a multi-frame acquisition scheme was implemented to perform energy-dispersive X-ray magnetic circular dichroism at the iron K-edge in pulsed high magnetic fields. The acquisition scheme makes use of the entire field pulse. The quality of the signal obtained from samples of ferrimagnetic erbium iron garnet allows for quantitative evaluation of the signal amplitude. Below the compensation point, two successive field-induced phase transitions and the reversal of the net magnetization of the iron sublattices in the intermediate phase were observed. PMID:21335909

  2. GPU-based multi-volume ray casting within VTK for medical applications.

    PubMed

    Bozorgi, Mohammadmehdi; Lindseth, Frank

    2015-03-01

    Multi-volume visualization is important for displaying relevant information in multimodal or multitemporal medical imaging studies. The main objective with the current study was to develop an efficient GPU-based multi-volume ray caster (MVRC) and validate the proposed visualization system in the context of image-guided surgical navigation. Ray casting can produce high-quality 2D images from 3D volume data but the method is computationally demanding, especially when multiple volumes are involved, so a parallel GPU version has been implemented. In the proposed MVRC, imaginary rays are sent through the volumes (one ray for each pixel in the view), and at equal and short intervals along the rays, samples are collected from each volume. Samples from all the volumes are composited using front-to-back α-blending. Since all the rays can be processed simultaneously, the MVRC was implemented in parallel on the GPU to achieve acceptable interactive frame rates. The method is fully integrated within the visualization toolkit (VTK) pipeline with the ability to apply different operations (e.g., transformations, clipping, and cropping) on each volume separately. The implemented method is cross-platform (Windows, Linux and Mac OSX) and runs on different graphics cards (NVIDIA and AMD). The speed of the MVRC was tested with one to five volumes of varying sizes: 128³, 256³, and 512³. A Tesla C2070 GPU was used, and the output image size was 600 × 600 pixels. The original VTK single-volume ray caster and the MVRC were compared when rendering only one volume. The multi-volume rendering system achieved an interactive frame rate (>15 fps) when rendering five small volumes (128³ voxels), four medium-sized volumes (256³ voxels), and two large volumes (512³ voxels). When rendering single volumes, the frame rate of the MVRC was comparable to the original VTK ray caster for small and medium-sized datasets but was approximately 3 frames per second slower for large datasets. The MVRC was successfully integrated in an existing surgical navigation system and was shown to be clinically useful during an ultrasound-guided neurosurgical tumor resection. A GPU-based MVRC for VTK is a useful tool in medical visualization. The proposed multi-volume GPU-based ray caster for VTK provided high-quality images at reasonable frame rates. The MVRC was effective when used in a neurosurgical navigation application.
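
    The compositing step at the heart of such a ray caster can be sketched in a few lines. The hypothetical Python fragment below mixes co-located samples from several volumes at each step along one ray and accumulates them front to back with early ray termination; the per-step mixing rule is one simple choice, not necessarily the one used in the MVRC.

        import numpy as np

        def composite_ray(samples_per_volume, opacities_per_volume):
            """Each argument: list of per-step 1-D arrays, one per volume."""
            color_acc, alpha_acc = 0.0, 0.0
            for i in range(len(samples_per_volume[0])):
                # Mix the co-located samples from all volumes at this step
                a = np.clip(sum(op[i] for op in opacities_per_volume), 0.0, 1.0)
                c = sum(s[i] * op[i] for s, op in
                        zip(samples_per_volume, opacities_per_volume))
                # Front-to-back alpha blending: later samples are attenuated
                color_acc += (1.0 - alpha_acc) * c
                alpha_acc += (1.0 - alpha_acc) * a
                if alpha_acc >= 0.99:   # early ray termination
                    break
            return color_acc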

  3. A digital algorithm for spectral deconvolution with noise filtering and peak picking: NOFIPP-DECON

    NASA Technical Reports Server (NTRS)

    Edwards, T. R.; Settle, G. L.; Knight, R. D.

    1975-01-01

    Noise-filtering, peak-picking deconvolution software incorporates multiple convoluted convolute integers and multiparameter optimization pattern search. The two theories are described and three aspects of the software package are discussed in detail. Noise-filtering deconvolution was applied to a number of experimental cases ranging from noisy, nondispersive X-ray analyzer data to very noisy photoelectric polarimeter data. Comparisons were made with published infrared data, and a man-machine interactive language has evolved for assisting in very difficult cases. A modified version of the program is being used for routine preprocessing of mass spectral and gas chromatographic data.

  4. Sometimes doing the right thing sucks: frame combinations and multi-fetal pregnancy reduction decision difficulty.

    PubMed

    Britt, David W; Evans, Mark I

    2007-12-01

    Data are analyzed for 54 women who made an appointment with a North American Center specializing in multifetal pregnancy reduction (MFPR) to be counseled and possibly have a reduction. The impact on decision difficulty of combinations of three frames through which patients may understand and consider their options and justify their decisions is examined: a conceptional frame marked by a belief that life begins at conception; a medical frame marked by a belief in the statistics regarding risk and risk prevention through selective reduction; and a lifestyle frame marked by a belief that a balance of children and career has normative value. All data were gathered through semi-structured interviews and observation during the visit to the center, over an average 2.5-hour period. Decision difficulty was indicated by self-assessed decision difficulty and by residual emotional turmoil surrounding the decision. Qualitative comparative analysis was used to analyze the impact of combinations of frames on decision difficulty. Separate analyses were conducted for women reducing only to three fetuses (or deciding not to reduce) and women who chose to reduce below three fetuses. Results indicated that for those with a non-intense conceptional frame, the decision was comparatively easy no matter whether the patients had high or low values of medical and lifestyle frames. For those with an intense conceptional frame, the decision was almost uniformly difficult, with the exception of those who chose to reduce only to three fetuses. Simplifying the results to their most parsimonious scenarios oversimplifies the results and precludes an understanding of how women can feel pulled in different directions by the dictates of the frames they hold. Variations in the characterization of intense medical frames, for example, can both pull toward reduction to two fetuses and neutralize shame and guilt by seeming to remove personal responsibility for the decision. We conclude that the examination of frame combinations is an important tool for understanding the way women carrying multiple fetuses negotiate their way through multi-fetal pregnancies, and that it may have more general relevance for understanding pregnancy decisions in context.

  5. The Small-scale Structure of Photospheric Convection Retrieved by a Deconvolution Technique Applied to Hinode/SP Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oba, T.; Riethmüller, T. L.; Solanki, S. K.

    Solar granules are bright patterns surrounded by dark channels, called intergranular lanes, in the solar photosphere and are a manifestation of overshooting convection. Observational studies generally find stronger upflows in granules and weaker downflows in intergranular lanes. This trend is, however, inconsistent with the results of numerical simulations in which downflows are stronger than upflows through the joint action of gravitational acceleration/deceleration and pressure gradients. One cause of this discrepancy is the image degradation caused by optical distortion and light diffraction and scattering that takes place in an imaging instrument. We apply a deconvolution technique to Hinode/SP data in an attempt to recover the original solar scene. Our results show a significant enhancement in both the convective upflows and downflows but particularly for the latter. After deconvolution, the up- and downflows reach maximum amplitudes of −3.0 km s⁻¹ and +3.0 km s⁻¹ at an average geometrical height of roughly 50 km, respectively. We found that the velocity distributions after deconvolution match those derived from numerical simulations. After deconvolution, the net LOS velocity averaged over the whole field of view lies close to zero as expected in a rough sense from mass balance.

  6. Toxoplasma Modulates Signature Pathways of Human Epilepsy, Neurodegeneration & Cancer.

    PubMed

    Ngô, Huân M; Zhou, Ying; Lorenzi, Hernan; Wang, Kai; Kim, Taek-Kyun; Zhou, Yong; El Bissati, Kamal; Mui, Ernest; Fraczek, Laura; Rajagopala, Seesandra V; Roberts, Craig W; Henriquez, Fiona L; Montpetit, Alexandre; Blackwell, Jenefer M; Jamieson, Sarra E; Wheeler, Kelsey; Begeman, Ian J; Naranjo-Galvis, Carlos; Alliey-Rodriguez, Ney; Davis, Roderick G; Soroceanu, Liliana; Cobbs, Charles; Steindler, Dennis A; Boyer, Kenneth; Noble, A Gwendolyn; Swisher, Charles N; Heydemann, Peter T; Rabiah, Peter; Withers, Shawn; Soteropoulos, Patricia; Hood, Leroy; McLeod, Rima

    2017-09-13

    One third of humans are infected lifelong with the brain-dwelling, protozoan parasite, Toxoplasma gondii. Approximately fifteen million of these have congenital toxoplasmosis. Although neurobehavioral disease is associated with seropositivity, causality is unproven. To better understand what this parasite does to human brains, we performed a comprehensive systems analysis of the infected brain: We identified susceptibility genes for congenital toxoplasmosis in our cohort of infected humans and found these genes are expressed in human brain. Transcriptomic and quantitative proteomic analyses of infected human, primary, neuronal stem and monocytic cells revealed effects on neurodevelopment and plasticity in neural, immune, and endocrine networks. These findings were supported by identification of protein and miRNA biomarkers in sera of ill children reflecting brain damage and T. gondii infection. These data were deconvoluted using three systems biology approaches: "Orbital-deconvolution" elucidated upstream, regulatory pathways interconnecting human susceptibility genes, biomarkers, proteomes, and transcriptomes. "Cluster-deconvolution" revealed visual protein-protein interaction clusters involved in processes affecting brain functions and circuitry, including lipid metabolism, leukocyte migration and olfaction. Finally, "disease-deconvolution" identified associations between the parasite-brain interactions and epilepsy, movement disorders, Alzheimer's disease, and cancer. This "reconstruction-deconvolution" logic provides templates of progenitor cells' potentiating effects, and components affecting human brain parasitism and diseases.

  7. Peptide de novo sequencing of mixture tandem mass spectra.

    PubMed

    Gorshkov, Vladimir; Hotta, Stéphanie Yuki Kolbeck; Verano-Braga, Thiago; Kjeldsen, Frank

    2016-09-01

    The impact of mixture spectra deconvolution on the performance of four popular de novo sequencing programs was tested using artificially constructed mixture spectra as well as experimental proteomics data. Mixture fragmentation spectra are recognized as a limitation in proteomics because they decrease the identification performance using database search engines. De novo sequencing approaches are expected to be even more sensitive to the reduction in mass spectrum quality resulting from peptide precursor co-isolation and thus prone to false identifications. The deconvolution approach matched complementary b-, y-ions to each precursor peptide mass, which allowed the creation of virtual spectra containing sequence specific fragment ions of each co-isolated peptide. Deconvolution processing resulted in equally efficient identification rates but increased the absolute number of correctly sequenced peptides. The improvement was in the range of 20-35% additional peptide identifications for a HeLa lysate sample. Some correct sequences were identified only using unprocessed spectra; however, the number of these was lower than those where improvement was obtained by mass spectral deconvolution. Tight candidate peptide score distribution and high sensitivity to small changes in the mass spectrum introduced by the employed deconvolution method could explain some of the missing peptide identifications. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
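
    The complementary-ion test that drives the deconvolution can be written down directly: for singly charged b- and y-ions of the same peptide, the two fragment m/z values sum to the neutral peptide mass plus two proton masses. The sketch below assumes singly charged fragments and an illustrative mass tolerance.

        PROTON = 1.00728  # Da

        def complementary_pairs(fragment_mzs, neutral_mass, tol=0.02):
            """Return (b, y) m/z pairs consistent with one precursor mass."""
            target = neutral_mass + 2 * PROTON   # m(b) + m(y) for 1+ ions
            pairs = []
            for i, b in enumerate(fragment_mzs):
                for y in fragment_mzs[i + 1:]:
                    if abs((b + y) - target) <= tol:
                        pairs.append((b, y))
            return pairs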

  8. What makes African American health disparities newsworthy? An experiment among journalists about story framing

    PubMed Central

    Hinnant, Amanda; Oh, Hyun Jee; Caburnay, Charlene A.; Kreuter, Matthew W.

    2011-01-01

    News stories reporting race-specific health information commonly emphasize disparities between racial groups. But recent research suggests this focus on disparities has unintended effects on African American audiences, generating negative emotions and less interest in preventive behaviors (Nicholson RA, Kreuter MW, Lapka C et al. Unintended effects of emphasizing disparities in cancer communication to African-Americans. Cancer Epidemiol Biomarkers Prev 2008; 17: 2946–52). They found that black adults are more interested in cancer screening after reading about the progress African Americans have made in fighting cancer than after reading stories emphasizing disparities between blacks and whites. This study builds on past findings by (i) examining how health journalists judge the newsworthiness of stories that report race-specific health information by emphasizing disparities versus progress and (ii) determining whether these judgments can be changed by informing journalists of audience reactions to disparity versus progress framing. In a double-blind-randomized experiment, 175 health journalists read either a disparity- or progress-framed story on colon cancer, preceded by either an inoculation about audience effects of such framing or an unrelated (i.e. control) information stimuli. Journalists rated the disparity-frame story more favorably than the progress-frame story in every category of news values. However, the inoculation significantly increased positive reactions to the progress-frame story. Informing journalists of audience reactions to race-specific health information could influence how health news stories are framed. PMID:21911844

  9. A motion-tolerant approach for monitoring SpO2 and heart rate using photoplethysmography signal with dual frame length processing and multi-classifier fusion.

    PubMed

    Fan, Feiyi; Yan, Yuepeng; Tang, Yongzhong; Zhang, Hao

    2017-12-01

    Monitoring pulse oxygen saturation (SpO2) and heart rate (HR) using photoplethysmography (PPG) signal contaminated by a motion artifact (MA) remains a difficult problem, especially when the oximeter is not equipped with a 3-axis accelerometer for adaptive noise cancellation. In this paper, we report a pioneering investigation on the impact of altering the frame length of Molgedey and Schuster independent component analysis (ICAMS) on performance, design a multi-classifier fusion strategy for selecting the PPG-correlated signal component, and propose a novel approach to extract SpO2 and HR readings from PPG signal contaminated by strong MA interference. The algorithm comprises multiple stages, including dual frame length ICAMS, a multi-classifier-based PPG-correlated component selector, line spectral analysis, tree-based HR monitoring, and post-processing. Our approach is evaluated by multi-subject tests. The root mean square error (RMSE) is calculated for each trial. Three statistical metrics are selected as performance evaluation criteria: mean RMSE, median RMSE and the standard deviation (SD) of RMSE. The experimental results demonstrate that a shorter ICAMS analysis window probably results in better performance in SpO2 estimation. Notably, the designed multi-classifier signal component selector achieved satisfactory performance. The subject tests indicate that our algorithm outperforms other baseline methods regarding accuracy under most criteria. The proposed work can contribute to improving the performance of current pulse oximetry and personal wearable monitoring devices. Copyright © 2017 Elsevier Ltd. All rights reserved.
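
    As a rough illustration of the windowed decomposition stage, the sketch below runs ICA over frames of a two-channel PPG recording at a chosen frame length. scikit-learn's FastICA stands in for ICAMS (which it does not implement), and the dual frame lengths shown in the usage comment are hypothetical values.

        from sklearn.decomposition import FastICA

        def windowed_ica_components(ppg, frame_len, step=None):
            """ppg: array of shape (n_samples, 2), e.g. red/IR channels."""
            step = step or frame_len
            comps = []
            for start in range(0, ppg.shape[0] - frame_len + 1, step):
                ica = FastICA(n_components=2, max_iter=500)
                comps.append(ica.fit_transform(ppg[start:start + frame_len]))
            return comps   # candidate source signals, one set per frame

        # Dual frame lengths, mirroring the paper's design (values hypothetical):
        # short = windowed_ica_components(ppg, frame_len=256)
        # long  = windowed_ica_components(ppg, frame_len=1024)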

  10. A data set for evaluating the performance of multi-class multi-object video tracking

    NASA Astrophysics Data System (ADS)

    Chakraborty, Avishek; Stamatescu, Victor; Wong, Sebastien C.; Wigley, Grant; Kearney, David

    2017-05-01

    One of the challenges in evaluating multi-object video detection, tracking and classification systems is having publicly available data sets with which to compare different systems. However, the measures of performance for tracking and classification are different. Data sets that are suitable for evaluating tracking systems may not be appropriate for classification. Tracking video data sets typically only have ground truth track IDs, while classification video data sets only have ground truth class-label IDs. The former identifies the same object over multiple frames, while the latter identifies the type of object in individual frames. This paper describes an advancement of the ground truth meta-data for the DARPA Neovision2 Tower data set to allow both the evaluation of tracking and classification. The ground truth data sets presented in this paper contain unique object IDs across 5 different classes of object (Car, Bus, Truck, Person, Cyclist) for 24 videos of 871 image frames each. In addition to the object IDs and class labels, the ground truth data also contains the original bounding box coordinates together with new bounding boxes in instances where un-annotated objects were present. The unique IDs are maintained during occlusions between multiple objects or when objects re-enter the field of view. This will provide: a solid foundation for evaluating the performance of multi-object tracking of different types of objects, a straightforward comparison of tracking system performance using the standard Multi Object Tracking (MOT) framework, and classification performance using the Neovision2 metrics. These data have been hosted publicly.

  11. Block matching and Wiener filtering approach to optical turbulence mitigation and its application to simulated and real imagery with quantitative error analysis

    NASA Astrophysics Data System (ADS)

    Hardie, Russell C.; Rucci, Michael A.; Dapore, Alexander J.; Karch, Barry K.

    2017-07-01

    We present a block-matching and Wiener filtering approach to atmospheric turbulence mitigation for long-range imaging of extended scenes. We evaluate the proposed method, along with some benchmark methods, using simulated and real-image sequences. The simulated data are generated with a simulation tool developed by one of the authors. These data provide objective truth and allow for quantitative error analysis. The proposed turbulence mitigation method takes a sequence of short-exposure frames of a static scene and outputs a single restored image. A block-matching registration algorithm is used to provide geometric correction for each of the individual input frames. The registered frames are then averaged, and the average image is processed with a Wiener filter to provide deconvolution. An important aspect of the proposed method lies in how we model the degradation point spread function (PSF) for the purposes of Wiener filtering. We use a parametric model that takes into account the level of geometric correction achieved during image registration. This is unlike any method we are aware of in the literature. By matching the PSF to the level of registration in this way, the Wiener filter is able to fully exploit the reduced blurring achieved by registration. We also describe a method for estimating the atmospheric coherence diameter (or Fried parameter) from the estimated motion vectors. We provide a detailed performance analysis that illustrates how the key tuning parameters impact system performance. The proposed method is relatively simple computationally, yet it has excellent performance in comparison with state-of-the-art benchmark methods in our study.
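
    A condensed sketch of the pipeline follows: register each short-exposure frame to a reference, average, and Wiener-filter the average. Global phase correlation stands in here for the paper's per-block matching, and the caller-supplied PSF stands in for the registration-aware parametric PSF model described above; all parameter values are illustrative.

        import numpy as np
        from scipy.ndimage import shift as nd_shift
        from skimage.registration import phase_cross_correlation

        def restore(frames, psf, nsr=0.01):
            """frames: list of 2-D arrays; psf: centred kernel, same shape."""
            ref, registered = frames[0], [frames[0]]
            for f in frames[1:]:
                shift_yx, _, _ = phase_cross_correlation(ref, f)
                registered.append(nd_shift(f, shift_yx))  # geometric correction
            avg = np.mean(registered, axis=0)             # temporal average
            H = np.fft.fft2(np.fft.ifftshift(psf))
            W = np.conj(H) / (np.abs(H) ** 2 + nsr)       # Wiener filter
            return np.real(np.fft.ifft2(np.fft.fft2(avg) * W))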

  12. Enhancing Community Knowledge and Health Behaviors to Eliminate Blinding Trachoma in Mali Using Radio Messaging as a Strategy

    ERIC Educational Resources Information Center

    Bamani, Sanoussi; Toubali, Emily; Diarra, Sadio; Goita, Seydou; Berte, Zana; Coulibaly, Famolo; Sangare, Hama; Tuinsma, Marjon; Zhang, Yaobi; Dembele, Benoit; Melvin, Palesa; MacArthur, Chad

    2013-01-01

    The National Blindness Prevention Program in Mali has broadcast messages on the radio about trachoma as part of the country's trachoma elimination strategy since 2008. In 2011, a radio impact survey using multi-stage cluster sampling was conducted in the regions of Kayes and Segou to assess radio listening habits, coverage of the broadcasts,…

  13. Response of high-rise and base-isolated buildings to a hypothetical Mw 7.0 blind thrust earthquake

    USGS Publications Warehouse

    Heaton, T.H.; Hall, J.F.; Wald, D.J.; Halling, M.W.

    1995-01-01

    High-rise flexible-frame buildings are commonly considered to be resistant to shaking from the largest earthquakes. In addition, base isolation has become increasingly popular for critical buildings that should still function after an earthquake. How will these two types of buildings perform if a large earthquake occurs beneath a metropolitan area? To answer this question, we simulated the near-source ground motions of an Mw 7.0 thrust earthquake and then mathematically modeled the response of a 20-story steel-frame building and a 3-story base-isolated building. The synthesized ground motions were characterized by large displacement pulses (up to 2 meters) and large ground velocities. These ground motions caused large deformation and possible collapse of the frame building, and they required exceptional measures in the design of the base-isolated building if it was to remain functional.

  14. BERKELEY LAB WINDOW

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curcija, Dragan Charlie; Zhu, Ling; Czarnecki, Stephen

    WINDOW features include: - Microsoft Windows™ interface - algorithms for the calculation of total fenestration product U-values and Solar Heat Gain Coefficient consistent with ASHRAE SPC 142, ISO 15099, and the National Fenestration Rating Council - a Condensation Resistance Index in accordance with the NFRC 500 Standard - an integrated database of properties - imports data from other LBNL window analysis software: - Import THERM files into the Frame Library - Import records from IGDB and Optics5 into the Glass Library for the optical properties of coated and uncoated glazings, laminates, and applied films. Program Capabilities: WINDOW 7.2 offers the following features: The ability to analyze products made from any combination of glazing layers, gas layers, frames, spacers, and dividers under any environmental conditions and at any tilt; the ability to model complex glazing systems such as venetian blinds and roller shades; directly accessible libraries of window system components (glazing systems, glazing layers, gas fills, frame and divider elements) and environmental conditions; the choice of working in English (IP) or Systeme International (SI) units; the ability to specify the dimensions and thermal properties of each frame element (header, sills, jamb, mullion) in a window; a multi-band (wavelength-by-wavelength) spectral model; a Glass Library which can access spectral data files for many common glazing materials from the Optics5 database; a night-sky radiative model; a link with the DOE-2.1E and EnergyPlus building energy analysis programs. Performance Indices and Other Results: For a user-defined fenestration system and user-defined environmental conditions, WINDOW calculates: the U-value, solar heat gain coefficient, shading coefficient, and visible transmittance for the complete window system; the U-value, solar heat gain coefficient, shading coefficient, and visible transmittance for the glazing system (center-of-glass values); the U-values of the frame and divider elements and corresponding edge-of-glass areas (based on generic correlations); the total solar and visible transmittance and reflectances of the glazing system; color properties, i.e. L*, a*, and b* color coordinates, dominant wavelength, and purity for transmitted and reflected (outdoor) solar radiation; the damage-weighted transmittance of the glazing system between 0.3 and 0.38 microns; the angular dependence of the solar and visible transmittances, solar and visible reflectances, solar absorptance, and solar heat gain coefficient of the glazing system; the percent relative humidity of the inside and outside air for which condensation will occur on the interior and exterior glazing surfaces respectively; and the center-of-glass temperature distribution.

  15. Applications of two-photon fluorescence microscopy in deep-tissue imaging

    NASA Astrophysics Data System (ADS)

    Dong, Chen-Yuan; Yu, Betty; Hsu, Lily L.; Kaplan, Peter D.; Blankschstein, D.; Langer, Robert; So, Peter T. C.

    2000-07-01

    Based on the non-linear excitation of fluorescent molecules, two-photon fluorescence microscopy has become a significant new tool for biological imaging. The point-like excitation characteristic of this technique enhances image quality by the virtual elimination of off-focal fluorescence. Furthermore, sample photodamage is greatly reduced because fluorescence excitation is limited to the focal region. For deep-tissue imaging, two-photon microscopy has the additional benefit of greatly improved imaging depth penetration. Since the near-infrared laser sources used in two-photon microscopy scatter less than their UV/blue-green counterparts, in-depth imaging of highly scattering specimens can be greatly improved. In this work, we will present data characterizing both the imaging characteristics (point spread functions) of the system and images of tissue samples (skin) acquired using this novel technology. In particular, we will demonstrate how blind deconvolution can be used to further improve two-photon image quality and how this technique can be used to study mechanisms of chemically enhanced transdermal drug delivery.

  16. Array invariant-based ranging of a source of opportunity.

    PubMed

    Byun, Gihoon; Kim, J S; Cho, Chomgun; Song, H C; Byun, Sung-Hoon

    2017-09-01

    The feasibility of tracking a ship radiating random and anisotropic noise is investigated using ray-based blind deconvolution (RBD) and array invariant (AI) with a vertical array in shallow water. This work is motivated by a recent report [Byun, Verlinden, and Sabra, J. Acoust. Soc. Am. 141, 797-807 (2017)] that RBD can be applied to ships of opportunity to estimate the Green's function. Subsequently, the AI developed for robust source-range estimation in shallow water can be applied to the estimated Green's function via RBD, exploiting multipath arrivals separated in beam angle and travel time. In this letter, a combination of the RBD and AI is demonstrated to localize and track a ship of opportunity (200-900 Hz) to within a 5% standard deviation of the relative range error along a track at ranges of 1.8-3.4 km, using a 16-element, 56-m long vertical array in approximately 100-m deep shallow water.

  17. Mr-Moose: An advanced SED-fitting tool for heterogeneous multi-wavelength datasets

    NASA Astrophysics Data System (ADS)

    Drouart, G.; Falkendal, T.

    2018-04-01

    We present the public release of Mr-Moose, a fitting procedure that is able to perform multi-wavelength and multi-object spectral energy distribution (SED) fitting in a Bayesian framework. This procedure is able to handle a large variety of cases, from an isolated source to blended multi-component sources from a heterogeneous dataset (i.e. a range of observation sensitivities and spectral/spatial resolutions). Furthermore, Mr-Moose handles upper limits during the fitting process in a continuous way, allowing models to be gradually less probable as upper limits are approached. The aim is to propose a simple-to-use, yet highly versatile fitting tool for handling increasing source complexity when combining multi-wavelength datasets with fully customisable filter/model databases. The complete control of the user is one advantage, which avoids the traditional problems related to the "black box" effect, where parameter or model tunings are impossible and can lead to overfitting and/or over-interpretation of the results. Also, while a basic knowledge of Python and statistics is required, the code aims to be sufficiently user-friendly for non-experts. We demonstrate the procedure on three cases: two artificially-generated datasets and a previous result from the literature. In particular, the most complex case (inspired by a real source, combining Herschel, ALMA and VLA data) in the context of extragalactic SED fitting makes Mr-Moose a particularly attractive SED fitting tool when dealing with partially blended sources, without the need for data deconvolution.

  18. MR-MOOSE: an advanced SED-fitting tool for heterogeneous multi-wavelength data sets

    NASA Astrophysics Data System (ADS)

    Drouart, G.; Falkendal, T.

    2018-07-01

    We present the public release of MR-MOOSE, a fitting procedure that is able to perform multi-wavelength and multi-object spectral energy distribution (SED) fitting in a Bayesian framework. This procedure is able to handle a large variety of cases, from an isolated source to blended multi-component sources from a heterogeneous data set (i.e. a range of observation sensitivities and spectral/spatial resolutions). Furthermore, MR-MOOSE handles upper limits during the fitting process in a continuous way allowing models to be gradually less probable as upper limits are approached. The aim is to propose a simple-to-use, yet highly versatile fitting tool for handling increasing source complexity when combining multi-wavelength data sets with fully customisable filter/model data bases. The complete control of the user is one advantage, which avoids the traditional problems related to the `black box' effect, where parameter or model tunings are impossible and can lead to overfitting and/or over-interpretation of the results. Also, while a basic knowledge of PYTHON and statistics is required, the code aims to be sufficiently user-friendly for non-experts. We demonstrate the procedure on three cases: two artificially generated data sets and a previous result from the literature. In particular, the most complex case (inspired by a real source, combining Herschel, ALMA, and VLA data) in the context of extragalactic SED fitting makes MR-MOOSE a particularly attractive SED fitting tool when dealing with partially blended sources, without the need for data deconvolution.

  19. Towards Mobile OCR: How To Take a Good Picture of a Document Without Sight.

    PubMed

    Cutter, Michael; Manduchi, Roberto

    The advent of mobile OCR (optical character recognition) applications on regular smartphones holds great promise for enabling blind people to access printed information. Unfortunately, these systems suffer from a problem: in order for OCR output to be meaningful, a well-framed image of the document needs to be taken, something that is difficult to do without sight. This contribution presents an experimental investigation of how blind people position and orient a camera phone while acquiring document images. We developed experimental software to investigate if verbal guidance aids in the acquisition of OCR-readable images without sight. We report on our participant's feedback and performance before and after assistance from our software.

  20. Towards Mobile OCR: How To Take a Good Picture of a Document Without Sight

    PubMed Central

    Cutter, Michael; Manduchi, Roberto

    2015-01-01

    The advent of mobile OCR (optical character recognition) applications on regular smartphones holds great promise for enabling blind people to access printed information. Unfortunately, these systems suffer from a problem: in order for OCR output to be meaningful, a well-framed image of the document needs to be taken, something that is difficult to do without sight. This contribution presents an experimental investigation of how blind people position and orient a camera phone while acquiring document images. We developed experimental software to investigate if verbal guidance aids in the acquisition of OCR-readable images without sight. We report on our participant's feedback and performance before and after assistance from our software. PMID:26677461

  1. Shape Perception and Navigation in Blind Adults

    PubMed Central

    Gori, Monica; Cappagli, Giulia; Baud-Bovy, Gabriel; Finocchietti, Sara

    2017-01-01

    Different sensory systems interact to generate a representation of space and to navigate. Vision plays a critical role in the development of spatial representation. During navigation, vision is integrated with auditory and mobility cues. In blind individuals, visual experience is not available and navigation therefore lacks this important sensory signal. In blind individuals, compensatory mechanisms can be adopted to improve spatial and navigation skills. On the other hand, the limitations of these compensatory mechanisms are not completely clear. Both enhanced and impaired reliance on auditory cues in blind individuals have been reported. Here, we develop a new paradigm to test both auditory perception and navigation skills in blind and sighted individuals and to investigate the effect that visual experience has on the ability to reproduce simple and complex paths. During the navigation task, early blind, late blind and sighted individuals were required first to listen to an audio shape and then to recognize and reproduce it by walking. After each audio shape was presented, a static sound was played and the participants were asked to reach it. Movements were recorded with a motion tracking system. Our results show three main impairments specific to early blind individuals: first, a tendency to compress the shapes reproduced during navigation; second, difficulty in recognizing complex audio stimuli; and third, difficulty in reproducing the desired shape: early blind participants occasionally reported perceiving a square but actually reproduced a circle during the navigation task. We discuss these results in terms of compromised spatial reference frames due to lack of visual input during the early period of development. PMID:28144226

  2. Towards robust deconvolution of low-dose perfusion CT: sparse perfusion deconvolution using online dictionary learning.

    PubMed

    Fang, Ruogu; Chen, Tsuhan; Sanelli, Pina C

    2013-05-01

    Computed tomography perfusion (CTP) is an important functional imaging modality in the evaluation of cerebrovascular diseases, particularly in acute stroke and vasospasm. However, the post-processed parametric maps of blood flow tend to be noisy, especially in low-dose CTP, due to the noisy contrast enhancement profile and the oscillatory nature of the results generated by the current computational methods. In this paper, we propose a robust sparse perfusion deconvolution method (SPD) to estimate cerebral blood flow in CTP performed at low radiation dose. We first build a dictionary from high-dose perfusion maps using online dictionary learning and then perform deconvolution-based hemodynamic parameter estimation on the low-dose CTP data. Our method is validated on clinical data of patients with normal and pathological CBF maps. The results show that we achieve superior performance compared with existing methods, potentially improving the differentiation between normal and ischemic tissue in the brain. Copyright © 2013 Elsevier B.V. All rights reserved.
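
    The two learning stages map onto scikit-learn primitives, as in the sketch below: learn a patch dictionary from high-dose maps, then sparse-code low-dose patches over it. Patch extraction/reassembly and the coupling to the deconvolution step are omitted, and all names and parameters are illustrative rather than the authors' settings.

        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

        def learn_dictionary(high_dose_patches, n_atoms=256, alpha=1.0):
            """high_dose_patches: array of shape (n_patches, patch_dim)."""
            dl = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=alpha)
            return dl.fit(high_dose_patches).components_   # (n_atoms, patch_dim)

        def reconstruct_patches(low_dose_patches, dictionary, alpha=1.0):
            codes = sparse_encode(low_dose_patches, dictionary, alpha=alpha)
            return codes @ dictionary   # denoised patch estimates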

  3. Deconvolution of azimuthal mode detection measurements

    NASA Astrophysics Data System (ADS)

    Sijtsma, Pieter; Brouwer, Harry

    2018-05-01

    Unequally spaced transducer rings make it possible to extend the range of detectable azimuthal modes. The disadvantage is that the response of the mode detection algorithm to a single mode is distributed over all detectable modes, similarly to the Point Spread Function of Conventional Beamforming with microphone arrays. With multiple modes the response patterns interfere, leading to a relatively high "noise floor" of spurious modes in the detected mode spectrum, in other words, to a low dynamic range. In this paper a deconvolution strategy is proposed for increasing this dynamic range. It starts with separating the measured sound into shaft tones and broadband noise. For broadband noise modes, a standard Non-Negative Least Squares solver appeared to be a perfect deconvolution tool. For shaft tones a Matching Pursuit approach is proposed, taking advantage of the sparsity of dominant modes. The deconvolution methods were applied to mode detection measurements in a fan rig. An increase in dynamic range of typically 10-15 dB was found.
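
    The broadband branch reduces to a textbook non-negative least-squares problem, for which SciPy's solver applies directly; in the sketch below the per-mode response matrix `A` and the measured spectrum `b` are assumed to be given.

        import numpy as np
        from scipy.optimize import nnls

        def deconvolve_modes(A, b):
            """A: (n_detected, n_modes) response patterns; b: measured spectrum."""
            x, residual_norm = nnls(A, b)   # x >= 0 elementwise
            return x   # mode powers with the spurious-mode floor suppressed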

  4. Determination of design and operation parameters for upper atmospheric research instrumentation to yield optimum resolution with deconvolution

    NASA Technical Reports Server (NTRS)

    Ioup, George E.; Ioup, Juliette W.

    1991-01-01

    The final report for work on the determination of design and operation parameters for upper atmospheric research instrumentation to yield optimum resolution with deconvolution is presented. Papers and theses prepared during the reporting period are included. Of particular note among the reported results is a methodology developed to determine design and operation parameters for error minimization when deconvolution is included in data analysis. An error surface is plotted versus the signal-to-noise ratio (SNR) and all parameters of interest. Instrumental characteristics determine a curve in this space. The SNR and parameter values at the projection from the curve onto the surface corresponding to the smallest error are the optimum values. These values are constrained by the curve and so will not necessarily correspond to an absolute minimum of the error surface.
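
    Paraphrased in code, the methodology tabulates the error over an (SNR, parameter) grid, restricts attention to the curve realizable by the instrument, and takes the minimizer; `error_model` and `instrument_curve` in the sketch below are placeholders for the mission-specific pieces.

        import numpy as np

        def optimal_design(snr_grid, param_bounds, error_model, instrument_curve):
            """instrument_curve(snr) -> achievable parameter value at that SNR."""
            best = (np.inf, None, None)
            for snr in snr_grid:
                p = instrument_curve(snr)          # constrained to the curve
                if param_bounds[0] <= p <= param_bounds[1]:
                    e = error_model(snr, p)        # point on the error surface
                    if e < best[0]:
                        best = (e, snr, p)
            return best                            # (min error, SNR*, parameter*)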

  5. Monitoring of Time-Dependent System Profiles by Multiplex Gas Chromatography with Maximum Entropy Demodulation

    NASA Technical Reports Server (NTRS)

    Becker, Joseph F.; Valentin, Jose

    1996-01-01

    The maximum entropy technique was successfully applied to the deconvolution of overlapped chromatographic peaks. An algorithm was written in which the chromatogram was represented as a vector of sample concentrations multiplied by a peak-shape matrix. Simulation results demonstrated that there is a trade-off between detector noise and peak resolution, in the sense that an increase in the noise level reduced the peak separation that could be recovered by the maximum entropy method. Real data originating from a sample storage column were also deconvolved using maximum entropy. Deconvolution is useful in this type of system because the conservation of time-dependent profiles depends on the band-spreading processes in the chromatographic column, which might smooth out the finer details in the concentration profile. The method was also applied to the deconvolution of previously interpreted Pioneer Venus chromatograms. It was found in this case that the correct choice of peak-shape function was critical to the sensitivity of maximum entropy in the reconstruction of these chromatograms.
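
    The forward model stated here is linear, so the setup can be sketched directly: build a matrix whose columns are Gaussian peak shapes and invert it. The plain least-squares solve below is a stand-in for the maximum entropy solver actually used; the peak width is an assumed input.

        import numpy as np

        def peak_shape_matrix(n_samples, width):
            t = np.arange(n_samples)
            # Column j is a Gaussian peak centred at sample j
            return np.exp(-0.5 * ((t[:, None] - t[None, :]) / width) ** 2)

        def deconvolve_chromatogram(chromatogram, width):
            A = peak_shape_matrix(len(chromatogram), width)
            conc, *_ = np.linalg.lstsq(A, chromatogram, rcond=None)
            return conc   # estimated concentration profile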

  6. Joint deconvolution and classification with applications to passive acoustic underwater multipath.

    PubMed

    Anderson, Hyrum S; Gupta, Maya R

    2008-11-01

    This paper addresses the problem of classifying signals that have been corrupted by noise and unknown linear time-invariant (LTI) filtering such as multipath, given labeled uncorrupted training signals. A maximum a posteriori approach to the deconvolution and classification is considered, which produces estimates of the desired signal, the unknown channel, and the class label. For cases in which only a class label is needed, the classification accuracy can be improved by not committing to an estimate of the channel or signal. A variant of the quadratic discriminant analysis (QDA) classifier is proposed that probabilistically accounts for the unknown LTI filtering, and which avoids deconvolution. The proposed QDA classifier can work either directly on the signal or on features whose transformation by LTI filtering can be analyzed; as an example a classifier for subband-power features is derived. Results on simulated data and real Bowhead whale vocalizations show that jointly considering deconvolution with classification can dramatically improve classification performance over traditional methods over a range of signal-to-noise ratios.

  7. Application of Fourier-wavelet regularized deconvolution for improving image quality of free space propagation x-ray phase contrast imaging.

    PubMed

    Zhou, Zhongxing; Gao, Feng; Zhao, Huijuan; Zhang, Lixin

    2012-11-21

    New x-ray phase contrast imaging techniques that do not use synchrotron radiation confront a common problem: the negative effects of finite source size and limited spatial resolution. These effects swamp the fine phase contrast fringes and make them almost undetectable. To alleviate this problem, deconvolution procedures should be applied to the blurred x-ray phase contrast images. In this study, three deconvolution techniques, namely Wiener filtering, Tikhonov regularization, and Fourier-wavelet regularized deconvolution (ForWaRD), were applied to simulated and experimental free space propagation x-ray phase contrast images of simple geometric phantoms. The algorithms were evaluated in terms of phase contrast improvement and signal-to-noise ratio. The results demonstrate that the ForWaRD algorithm is the most appropriate of these methods for phase contrast image restoration; it can effectively restore the lost information of the phase contrast fringes while reducing the noise amplified during Fourier regularization.
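
    Of the three methods compared, Wiener filtering is the simplest to write down. A minimal frequency-domain sketch follows, with a scalar noise-to-signal ratio assumed; ForWaRD adds a wavelet-domain shrinkage stage on top of such a Fourier step.

      import numpy as np

      def wiener_deconvolve(blurred, psf, nsr=1e-3):
          # psf: same shape as the image and centered; nsr: assumed
          # noise-to-signal power ratio (a scalar here for simplicity).
          H = np.fft.fft2(np.fft.ifftshift(psf))
          G = np.fft.fft2(blurred)
          W = np.conj(H) / (np.abs(H) ** 2 + nsr)  # Wiener filter
          return np.real(np.fft.ifft2(W * G))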

  8. A new scoring function for top-down spectral deconvolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kou, Qiang; Wu, Si; Liu, Xiaowen

    2014-12-18

    Background: Top-down mass spectrometry plays an important role in intact protein identification and characterization. Top-down mass spectra are more complex than bottom-up mass spectra because they often contain many isotopomer envelopes from highly charged ions, which may overlap with one another. As a result, spectral deconvolution, which converts a complex top-down mass spectrum into a monoisotopic mass list, is a key step in top-down spectral interpretation. Results: In this paper, we propose a new scoring function, L-score, for evaluating isotopomer envelopes. By combining L-score with MS-Deconv, a new software tool, MS-Deconv+, was developed for top-down spectral deconvolution. Experimental results showed that MS-Deconv+ outperformed existing software tools in top-down spectral deconvolution. Conclusions: L-score shows high discriminative ability in identification of isotopomer envelopes. Using L-score, MS-Deconv+ reports many correct monoisotopic masses missed by other software tools, which are valuable for proteoform identification and characterization.

  9. Bayesian Deconvolution for Angular Super-Resolution in Forward-Looking Scanning Radar

    PubMed Central

    Zha, Yuebo; Huang, Yulin; Sun, Zhichao; Wang, Yue; Yang, Jianyu

    2015-01-01

    Scanning radar is of notable importance for ground surveillance, terrain mapping, and disaster rescue. However, the angular resolution of a scanning radar image is poor compared to the achievable range resolution. This paper presents a Bayesian deconvolution algorithm for angular super-resolution in scanning radar, in which super-resolution is achieved by solving the corresponding deconvolution problem under the maximum a posteriori (MAP) criterion. The algorithm models the noise as two mutually independent parts: a Gaussian signal-independent component and a Poisson signal-dependent component. In addition, the Laplace distribution is used to represent the prior information about the targets, under the assumption that the radar image of interest can be represented by the dominant scatterers in the scene. Experimental results demonstrate that the proposed deconvolution algorithm achieves higher precision for angular super-resolution than conventional algorithms such as the Tikhonov regularization algorithm, the Wiener filter, and the Richardson–Lucy algorithm. PMID:25806871
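
    A Laplace prior under Gaussian noise reduces MAP estimation to L1-regularized least squares. A minimal sketch solved by the iterative shrinkage-thresholding algorithm (ISTA) follows; the paper's full Gaussian-plus-Poisson noise model is not reproduced, and A stands for an assumed antenna-pattern convolution matrix.

      import numpy as np

      def ista_deconvolve(y, A, lam=0.05, n_iter=500):
          # Minimize 0.5*||A x - y||^2 + lam*||x||_1 (Gaussian likelihood,
          # Laplace prior) by iterative shrinkage-thresholding (ISTA).
          L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of gradient
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              z = x - A.T @ (A @ x - y) / L    # gradient step
              x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrink
          return x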

  10. Towards robust deconvolution of low-dose perfusion CT: Sparse perfusion deconvolution using online dictionary learning

    PubMed Central

    Fang, Ruogu; Chen, Tsuhan; Sanelli, Pina C.

    2014-01-01

    Computed tomography perfusion (CTP) is an important functional imaging modality in the evaluation of cerebrovascular diseases, particularly in acute stroke and vasospasm. However, the post-processed parametric maps of blood flow tend to be noisy, especially in low-dose CTP, due to the noisy contrast enhancement profile and the oscillatory nature of the results generated by current computational methods. In this paper, we propose a robust sparse perfusion deconvolution method (SPD) to estimate cerebral blood flow in CTP performed at low radiation dose. We first build a dictionary from high-dose perfusion maps using online dictionary learning and then perform deconvolution-based hemodynamic parameter estimation on the low-dose CTP data. Our method is validated on clinical data of patients with normal and pathological CBF maps. The results show that it achieves superior performance compared to existing methods and can potentially improve the differentiation between normal and ischemic tissue in the brain. PMID:23542422
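
    The learn-from-high-dose, code-the-low-dose pattern can be sketched with scikit-learn's online dictionary learner. The patch data below are random placeholders, and the paper's deconvolution step on the sparse codes is omitted.

      import numpy as np
      from sklearn.decomposition import MiniBatchDictionaryLearning

      rng = np.random.default_rng(0)
      high_dose_patches = rng.standard_normal((500, 64))   # placeholder data

      dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0,
                                         batch_size=64, random_state=0)
      dico.fit(high_dose_patches)                          # learn dictionary

      low_dose = high_dose_patches[:10] + 0.5 * rng.standard_normal((10, 64))
      codes = dico.transform(low_dose)                     # sparse coefficients
      denoised = codes @ dico.components_                  # reconstruction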

  11. Waveform LiDAR processing: comparison of classic approaches and optimized Gold deconvolution to characterize vegetation structure and terrain elevation

    NASA Astrophysics Data System (ADS)

    Zhou, T.; Popescu, S. C.; Krause, K.

    2016-12-01

    Waveform Light Detection and Ranging (LiDAR) data have advantages over discrete-return LiDAR data in accurately characterizing vegetation structure. However, we lack a comprehensive understanding of waveform data processing approaches under different topography and vegetation conditions. The objective of this paper is to highlight a novel deconvolution algorithm, the Gold algorithm, for processing waveform LiDAR data with optimal deconvolution parameters. Further, we present a comparative study of waveform processing methods to provide insight into selecting an approach for a given combination of vegetation and terrain characteristics. We employed two waveform processing methods: 1) direct decomposition; 2) deconvolution and decomposition. In the second method, we utilized two deconvolution algorithms, the Richardson-Lucy (RL) algorithm and the Gold algorithm. Comprehensive quantitative comparisons were conducted in terms of the number of detected echoes, position accuracy, the bias of the end products (such as the digital terrain model (DTM) and canopy height model (CHM)) relative to discrete LiDAR data, and the parameter uncertainty of these end products for the different methods. The study was conducted at three sites spanning diverse ecological regions and vegetation and elevation gradients. Results demonstrate that both deconvolution algorithms are sensitive to the pre-processing of the input data. The deconvolution and decomposition method is more capable of detecting hidden echoes with a lower false echo detection rate, especially with the Gold algorithm. Compared to the reference data, all approaches generate satisfactory accuracy assessment results with small mean spatial differences (<1.22 m for DTMs, <0.77 m for CHMs) and root mean square error (RMSE) (<1.26 m for DTMs, <1.93 m for CHMs). More specifically, the Gold algorithm is superior to the others, with a smaller RMSE (<1.01 m), while the direct decomposition approach works better in terms of the percentage of spatial difference within 0.5 and 1 m. The parameter uncertainty analysis demonstrates that the Gold algorithm outperforms the other approaches in dense vegetation areas, with the smallest RMSE, while the RL algorithm performs better in sparse vegetation areas in terms of RMSE.
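
    The Gold algorithm itself is a short multiplicative iteration on the positive-definite form of the deconvolution system. A minimal 1-D sketch for a recorded waveform and an assumed causal, non-negative system response no longer than the waveform:

      import numpy as np
      from scipy.linalg import toeplitz

      def gold_deconvolution(y, psf, n_iter=1000):
          # Gold ratio iteration: x <- x * (A^T y) / (A^T A x), which keeps
          # x non-negative when y and the response psf are non-negative.
          n = len(y)
          A = toeplitz(np.pad(psf, (0, n - len(psf))), np.zeros(n))
          Aty, AtA = A.T @ y, A.T @ A
          x = np.full(n, max(float(y.mean()), 1e-6))
          for _ in range(n_iter):
              x *= Aty / (AtA @ x + 1e-12)
          return x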

  12. [Application of AOTF in spectral analysis. 1. Hardware and software designs for the self-constructed visible AOTF spectrophotometer].

    PubMed

    He, Jia-yao; Peng, Rong-fei; Zhang, Zhan-xia

    2002-02-01

    A self-constructed visible spectrophotometer using an acousto-optic tunable filter (AOTF) as the dispersing element is described. Two different AOTFs (one from The Institute for Silicate (Shanghai, China) and the other from Brimrose (USA)) are tested. The software, written in Visual C++ and run on a Windows 98 platform, is an application program with a dual database and multiple windows. Four independent windows, namely scanning, quantitative, calibration, and result, are incorporated. The Fourier self-deconvolution algorithm is also incorporated to improve the spectral resolution. The wavelengths are calibrated using the polynomial curve-fitting method. The spectra and calibration curves of soluble aniline blue and phenol red are presented to show the feasibility of the constructed spectrophotometer.
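
    Fourier self-deconvolution narrows overlapped bands by dividing the spectrum's Fourier transform by the decay of an assumed line shape and re-apodizing. A minimal sketch of the textbook procedure (not the instrument's actual code), assuming a Lorentzian line of half-width gamma and an illustrative narrowing factor:

      import numpy as np

      def fourier_self_deconvolution(spectrum, gamma, dx=1.0, narrowing=2.0):
          n = len(spectrum)
          t = np.fft.rfftfreq(n, d=dx)         # conjugate-domain coordinate
          interferogram = np.fft.rfft(spectrum)
          boost = np.exp(2 * np.pi * gamma * t)                  # undo decay
          taper = np.exp(-2 * np.pi * (gamma / narrowing) * t)   # re-apodize
          return np.fft.irfft(interferogram * boost * taper, n=n)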

  13. GASPACHO: a generic automatic solver using proximal algorithms for convex huge optimization problems

    NASA Astrophysics Data System (ADS)

    Goossens, Bart; Luong, Hiêp; Philips, Wilfried

    2017-08-01

    Many inverse problems (e.g., demosaicking, deblurring, denoising, image fusion, HDR synthesis) share various similarities: degradation operators are often modeled by a specific data fitting function while image prior knowledge (e.g., sparsity) is incorporated by additional regularization terms. In this paper, we investigate automatic algorithmic techniques for evaluating proximal operators. These algorithmic techniques also enable efficient calculation of adjoints from linear operators in a general matrix-free setting. In particular, we study the simultaneous-direction method of multipliers (SDMM) and the parallel proximal algorithm (PPXA) solvers and show that the automatically derived implementations are well suited for both single-GPU and multi-GPU processing. We demonstrate this approach for an Electron Microscopy (EM) deconvolution problem.
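
    The building blocks such solvers manipulate are proximal operators. Two standard examples (generic textbook formulas, not GASPACHO's actual interface):

      import numpy as np

      def prox_l1(v, t):
          # prox of t*||x||_1: soft-thresholding, the usual sparsity prior.
          return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

      def prox_quadratic(v, t, y):
          # prox of (t/2)*||x - y||^2: closed form for a quadratic data fit.
          return (v + t * y) / (1.0 + t)

    Splitting methods such as SDMM and PPXA alternate steps of this kind across all terms of the objective, which is what makes the matrix-free, automatically derived implementations described in the paper possible.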

  14. Improved surface-wave retrieval from ambient seismic noise by multi-dimensional deconvolution

    NASA Astrophysics Data System (ADS)

    Wapenaar, Kees; Ruigrok, Elmer; van der Neut, Joost; Draganov, Deyan

    2011-01-01

    The methodology of surface-wave retrieval from ambient seismic noise by crosscorrelation relies on the assumption that the noise field is equipartitioned. Deviations from equipartitioning degrade the accuracy of the retrieved surface-wave Green's function. A point-spread function, derived from the same ambient noise field, quantifies the smearing in space and time of the virtual source of the Green's function. By multidimensionally deconvolving the retrieved Green's function by the point-spread function, the virtual source becomes better focused in space and time, and hence the accuracy of the retrieved surface-wave Green's function may improve significantly. We illustrate this with a numerical example and discuss the advantages and limitations of the new methodology.
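
    In matrix form, the retrieved Green's function equals the desired one multiplied by the point-spread function, so the correction is a per-frequency damped matrix inversion. A minimal sketch, where the array shapes and the damping scheme are illustrative assumptions:

      import numpy as np

      def mdd(G_retr, psf, eps=1e-2):
          # G_retr: (n_freq, n_rec, n_src) retrieved Green's functions;
          # psf:    (n_freq, n_src, n_src) point-spread functions.
          # Solve G_retr = G_dec @ psf by damped least squares per frequency.
          G_dec = np.empty_like(G_retr)
          for k in range(G_retr.shape[0]):
              P = psf[k]
              damp = eps * np.trace(P @ P.conj().T).real / P.shape[0]
              inv = np.linalg.inv(P @ P.conj().T + damp * np.eye(P.shape[0]))
              G_dec[k] = G_retr[k] @ P.conj().T @ inv
          return G_dec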

  15. Community, Diversity, and Conflict among Schoolteachers: The Ties That Blind. Advances in Contemporary Educational Thought Series.

    ERIC Educational Resources Information Center

    Achinstein, Betty

    This book explores how teacher communities differ dramatically in how they deal with conflict, collaborate, think about the purposes of schooling in relation to issues of conflict, frame and seek solutions, and utilize mechanisms to manage their differences. It reflects on current social theories of community and democracy to promote values that…

  16. Colour-Blind Praxis in Havana: Interrogating Cuban Teacher Discourses of Race and Racelessness

    ERIC Educational Resources Information Center

    Kempf, Arlo

    2013-01-01

    Despite massive gains in racial equality over the past 50 years, racism persists in twenty-first century Cuba. One of the key tools for the preservation and maintenance of racism is the discourse of racelessness through which the relevance of race is denied and silenced. Paradoxically, the racelessness frame has also been a guiding anti-racist…

  17. Evaluating Audio Books as Supported Course Materials in Distance Education: The Experiences of the Blind Learners

    ERIC Educational Resources Information Center

    Ozgur, Aydin Ziya; Kiray, Huseyin Selcuk

    2007-01-01

    Anadolu University has a technical infrastructure, well-qualified faculty, and operates in an innovative and flexible frame. It takes an initiative role to meet the needs of higher education in Turkey by providing equal opportunity not only to satisfy those who value the principle of lifelong education but also seeks new information via distance…

  18. Steps Toward Effective Production of Speech (STEPS): No. 7--How to Take Care of Glasses.

    ERIC Educational Resources Information Center

    Sheeley, Eugene C.; McQuiddy, Doris

    This guide, one of a series of booklets developed by Project STEPS (Steps Toward Effective Production of Speech), presents guidelines for parents of deaf-blind children regarding the care of eyeglasses. Basic concerns with glasses and contact lenses are noted and parents are advised to perform the following daily tasks: checking the frames,…

  19. Safe trajectory estimation at a pedestrian crossing to assist visually impaired people.

    PubMed

    Alghamdi, Saleh; van Schyndel, Ron; Khalil, Ibrahim

    2012-01-01

    The aim of this paper is to present a service that assists blind people and people with low vision in crossing the street independently. The presented approach provides the user with significant information, such as detection of the pedestrian crossing signal from any point of view, notification when the signal light is green, detection of dynamic and fixed obstacles, prediction of the movement of fellow pedestrians, and information on objects that may intersect the user's path. Our approach is based on capturing multiple frames using a depth camera attached to the user's headgear. Currently, a testbed system is built on a helmet and connected to a laptop in the user's backpack. We discuss the efficiency of using the Speeded-Up Robust Features (SURF) algorithm for object recognition for the purpose of assisting blind people. The system predicts the movement of objects of interest to provide the user with information on the safest path to navigate and on the surrounding area. Evaluation of this approach on real video-frame sequences yields 90% human detection and more than 80% recognition of other related objects.

  20. Processing of single channel air and water gun data for imaging an impact structure at the Chesapeake Bay

    USGS Publications Warehouse

    Lee, Myung W.

    1999-01-01

    Processing of 20 seismic profiles acquired in the Chesapeake Bay area aided in analysis of the details of an impact structure and allowed more accurate mapping of the depression caused by a bolide impact. Particular emphasis was placed on enhancement of seismic reflections from the basement. Application of wavelet deconvolution after a second zero-crossing predictive deconvolution improved the resolution of shallow reflections, and application of a match filter enhanced the basement reflections. The use of deconvolution and match filtering with a two-dimensional signal enhancement technique (F-X filtering) significantly improved the interpretability of seismic sections.
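
    Predictive (gapped) deconvolution of the kind applied here can be sketched in a few lines: design a Wiener prediction filter from the trace autocorrelation and keep only the unpredictable part. The gap would be set at the autocorrelation's second zero crossing, as the abstract describes; the filter length and prewhitening below are illustrative choices, and the trace is assumed longer than gap + n_filt samples.

      import numpy as np
      from scipy.linalg import solve_toeplitz

      def predictive_deconvolution(trace, n_filt=40, gap=8, eps=1e-3):
          # Predict the trace 'gap' samples ahead from its own past and
          # subtract the predictable (reverberatory) part.
          n = len(trace)
          r = np.correlate(trace, trace, mode="full")[n - 1:]  # autocorrelation
          r0 = r[:n_filt].copy()
          r0[0] *= 1.0 + eps                   # prewhitening for stability
          g = r[gap:gap + n_filt]              # desired prediction lags
          f = solve_toeplitz(r0, g)            # Wiener prediction filter
          pred = np.convolve(trace, f)[:n]
          out = trace.copy()
          out[gap:] -= pred[:n - gap]          # prediction-error trace
          return out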

  1. Handling of computational in vitro/in vivo correlation problems by Microsoft Excel: III. Convolution and deconvolution.

    PubMed

    Langenbucher, Frieder

    2003-11-01

    Convolution and deconvolution are the classical in-vitro-in-vivo correlation tools used to describe the relationship between input and weighting/response in a linear system, where the input represents the drug release in vitro and the weighting/response any body response in vivo. While functional treatment, e.g., in terms of a polyexponential or Weibull distribution, is more appropriate for general survey or prediction, numerical algorithms are useful for treating actual experimental data. Deconvolution is not considered an algorithm in its own right, but rather the inversion of a corresponding convolution. MS Excel is shown to be a useful tool for all these applications.
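
    Because the discrete convolution matrix is lower triangular, the deconvolution really is just the inversion of the corresponding convolution, computable by forward substitution. A minimal sketch, assuming equal-length, equally spaced series as in the spreadsheet setting:

      import numpy as np

      def convolve(input_rate, weights):
          # r[t] = sum_k input_rate[k] * weights[t-k]
          return np.convolve(input_rate, weights)[:len(input_rate)]

      def deconvolve(response, weights):
          # Exact inverse of convolve() by forward substitution
          # (requires weights[0] != 0). Noisy data need smoothing first,
          # since deconvolution amplifies noise.
          n = len(response)
          x = np.zeros(n)
          for t in range(n):
              past = np.dot(x[:t], weights[t:0:-1])
              x[t] = (response[t] - past) / weights[0]
          return x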

  2. Reconnaissance blind multi-chess: an experimentation platform for ISR sensor fusion and resource management

    NASA Astrophysics Data System (ADS)

    Newman, Andrew J.; Richardson, Casey L.; Kain, Sean M.; Stankiewicz, Paul G.; Guseman, Paul R.; Schreurs, Blake A.; Dunne, Jeffrey A.

    2016-05-01

    This paper introduces the game of reconnaissance blind multi-chess (RBMC) as a paradigm and test bed for understanding and experimenting with autonomous decision making under uncertainty and in particular managing a network of heterogeneous Intelligence, Surveillance and Reconnaissance (ISR) sensors to maintain situational awareness informing tactical and strategic decision making. The intent is for RBMC to serve as a common reference or challenge problem in fusion and resource management of heterogeneous sensor ensembles across diverse mission areas. We have defined a basic rule set and a framework for creating more complex versions, developed a web-based software realization to serve as an experimentation platform, and developed some initial machine intelligence approaches to playing it.

  3. Examining the Use of Adaptive Technologies to Increase the Hands-On Participation of Students with Blindness or Low Vision in Secondary-School Chemistry and Physics

    ERIC Educational Resources Information Center

    Supalo, Cary A.; Humphrey, Jennifer R.; Mallouk, Thomas E.; Wohlers, H. David; Carlsen, William S.

    2016-01-01

    To determine whether a suite of audible adaptive technologies would increase the hands-on participation of high school students with blindness or low vision in chemistry and physics courses, data were examined from a multi-year field study conducted with students in mainstream classrooms at secondary schools across the United States. The students…

  4. High quality image-pair-based deblurring method using edge mask and improved residual deconvolution

    NASA Astrophysics Data System (ADS)

    Cui, Guangmang; Zhao, Jufeng; Gao, Xiumin; Feng, Huajun; Chen, Yueting

    2017-04-01

    Image deconvolution is a challenging task in the field of image processing. Using an image pair can yield a better restored image than deblurring from a single blurred image. In this paper, a high-quality image-pair-based deblurring method is presented using an improved RL algorithm and a gain-controlled residual deconvolution technique. The input image pair comprises a non-blurred noisy image and a blurred image captured of the same scene. With the estimated blur kernel, an improved RL deblurring method based on an edge mask is introduced to obtain a preliminary deblurring result with effective ringing suppression and detail preservation. The preliminary deblurring result then serves as the basic latent image, and gain-controlled residual deconvolution is utilized to recover the residual image. A saliency weight map is computed as the gain map to further control ringing effects around edge areas in the residual deconvolution process. The final deblurring result is obtained by adding the recovered residual image to the preliminary deblurring result. An optical experimental vibration platform is set up to verify the applicability and performance of the proposed algorithm. Experimental results demonstrate that the proposed deblurring framework achieves superior performance in both subjective and objective assessments and is applicable to many image deblurring problems.

  5. Quantitative image fusion in infrared radiometry

    NASA Astrophysics Data System (ADS)

    Romm, Iliya; Cukurel, Beni

    2018-05-01

    Towards high-accuracy infrared radiance estimates, measurement practices and processing techniques aimed to achieve quantitative image fusion using a set of multi-exposure images of a static scene are reviewed. The conventional non-uniformity correction technique is extended, as the original is incompatible with quantitative fusion. Recognizing the inherent limitations of even the extended non-uniformity correction, an alternative measurement methodology, which relies on estimates of the detector bias using self-calibration, is developed. Combining data from multi-exposure images, two novel image fusion techniques that ultimately provide high tonal fidelity of a photoquantity are considered: ‘subtract-then-fuse’, which conducts image subtraction in the camera output domain and partially negates the bias frame contribution common to both the dark and scene frames; and ‘fuse-then-subtract’, which reconstructs the bias frame explicitly and conducts image fusion independently for the dark and the scene frames, followed by subtraction in the photoquantity domain. The performances of the different techniques are evaluated for various synthetic and experimental data, identifying the factors contributing to potential degradation of the image quality. The findings reflect the superiority of the ‘fuse-then-subtract’ approach, conducting image fusion via per-pixel nonlinear weighted least squares optimization.

  6. ALLTEM System User’s Manual, Munitions Management Projects, ALLTEM Multi-Axis Electromagnetic Induction System Demonstration and Validation, Version 1.0

    DTIC Science & Technology

    2012-03-05

    [Fragmentary indexing excerpts from the user's manual; no abstract is available. The recoverable content concerns the data-acquisition interface (Tractor Guidance and Acquisition Error buttons in frames beneath the GPS frame), handling of GPS and Attitude Heading Reference System (AHRS) data, Section 5.2 on the data-acquisition simulator software, and acquisition timing for one transmitter polarity (33 ms dead band for relay switching plus 33 ms of waveforms).]

  7. Prevalence and causes of blindness, visual impairment, and cataract surgery in Timor-Leste

    PubMed Central

    Correia, Marcelino; Das, Taraprasad; Magno, Julia; Pereira, Bernadette M; Andrade, Valerio; Limburg, Hans; Trevelyan, John; Keeffe, Jill; Verma, Nitin; Sapkota, Yuddha

    2017-01-01

    Purpose: To estimate the prevalence and causes of blindness and visual impairment, cataract surgical coverage (CSC), the visual outcome of cataract surgery, and barriers to the uptake of cataract surgery in Timor-Leste. Method: In a nationwide rapid assessment of avoidable blindness (RAAB), the latest population (1,066,409) and household data were used to create a sampling frame consisting of 2,227 population units (study clusters) from all 13 districts, with populations of 450–900 per unit. The sample size of 3,350 was calculated with an assumed prevalence of blindness of 4.5% among people aged ≥50 years, a 20% tolerable error, 95% CI, and a 90% response rate. The team was trained in the survey methodology, and inter-observer variation was measured. Door-to-door visits, led by an ophthalmologist, were made in the preselected study clusters, and data were collected in line with the RAAB5 survey protocol. An Android smartphone with mRAAB software installed was used for data collection. Result: The age- and gender-standardized prevalences of blindness, severe visual impairment, and visual impairment were 2.8% (1.8–3.8), 1.7% (1.7–2.3), and 8.1% (6.6–9.6), respectively. Cataract was the leading cause of blindness (79.4%). Blindness was more prevalent in the older age group and in women. CSC was 41.5% in cataract-blind eyes and 48.6% in cataract-blind people. Good visual outcome in the cataract-operated eyes was 62% (presenting) and 75.2% (best corrected). Two important barriers to using available cataract surgical services were accessibility (45.5%) and the lack of an attendant to accompany the patient (24.8%). Conclusion: The prevalence of blindness and visual impairment in Timor-Leste remains high. CSC is unacceptably low, and gender inequity in blindness and CSC exists. Lack of access is the prominent barrier to cataract surgery. PMID:29238161

  8. Multimedia-modeling integration development environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pelton, Mitchell A.; Hoopes, Bonnie L.

    2002-09-02

    There are many framework systems available; however, the purpose of the framework presented here is to capitalize on the successes of the Framework for Risk Analysis in Multimedia Environmental Systems (FRAMES) and the Multi-media Multi-pathway Multi-receptor Risk Assessment (3MRA) methodology as applied to the Hazardous Waste Identification Rule (HWIR), while focusing on the development of software tools to simplify the module developer's effort of integrating a module into the framework.

  9. Damage to urban buildings in zones of intensities VIII and VII during the Wenchuan earthquake and discussion on some typical damages

    NASA Astrophysics Data System (ADS)

    Sun, Jingjiang; Tang, Yuhong; Zheng, Chao; Shi, Hongbin; Lin, Lin; Sun, Zhongxian

    2009-04-01

    The outline and typical characteristics of damage to buildings in Jiangyou city and Anxian county (intensity VIII) and in Mianyang city and Deyang city (intensity VII) are introduced in this paper. Damage ratios, based on sample statistics of multi-story brick buildings and of multi-story brick buildings with an RC frame at the first story (BBF), are presented. Some typical damage patterns are then discussed, such as horizontal cracks in brick masonry buildings, X-shaped cracks on the walls under windows, damage to the columns, beams, and infill walls of frame buildings, and damage to half-circle-shaped masonry walls.

  10. A Novel Piggyback Selection Scheme in IEEE 802.11e HCCA

    NASA Astrophysics Data System (ADS)

    Lee, Hyun-Jin; Kim, Jae-Hyun

    A control frame can be piggybacked onto a data frame to increase channel efficiency in wireless communication. However, if a control frame carrying global control information is piggybacked, the delay of data frames from an access point will increase even if there is only one station with a low physical transmission rate. This is similar to the anomaly phenomenon in a network that supports multi-rate transmission. In this letter, we define this phenomenon as “the piggyback problem at low physical transmission rate” and evaluate its effect with respect to physical transmission rate and normalized traffic load. We then propose a delay-based piggyback scheme. Simulations show that the proposed scheme reduces the average frame transmission delay and improves channel utilization by about 24% and 25%, respectively.

  11. The effect of hand movements on numerical bisection judgments in early blind and sighted individuals.

    PubMed

    Rinaldi, Luca; Vecchi, Tomaso; Fantino, Micaela; Merabet, Lotfi B; Cattaneo, Zaira

    2015-10-01

    Recent evidence suggests that in representing numbers blind individuals might be affected differently by proprioceptive cues (e.g., hand positions, head turns) than are sighted individuals. In this study, we asked a group of early blind and sighted individuals to perform a numerical bisection task while executing hand movements in left or right peripersonal space and with either hand. We found that in bisecting ascending numerical intervals, the hemi-space in which the hand was moved (but not the moved hand itself) influenced the bisection bias similarly in both early blind and sighted participants. However, when numerical intervals were presented in descending order, the moved hand (and not the hemi-space in which it was moved) affected the bisection bias in all participants. Overall, our data show that the operation to be performed on the mental number line affects the activated spatial reference frame, regardless of participants' previous visual experience. In particular, both sighted and early blind individuals' representation of numerical magnitude is mainly rooted in world-centered coordinates when numerical information is given in canonical orientation (i.e., from small to large), whereas hand-centered coordinates become more relevant when the scanning of the mental number line proceeds in non-canonical direction.

  13. A robust motion estimation system for minimal invasive laparoscopy

    NASA Astrophysics Data System (ADS)

    Marcinczak, Jan Marek; von Öhsen, Udo; Grigat, Rolf-Rainer

    2012-02-01

    Laparoscopy is a reliable imaging method for examining the liver. However, due to the limited field of view, considerable experience is required from the surgeon to interpret the observed anatomy. Reconstruction of organ surfaces provides valuable additional information to the surgeon for a reliable diagnosis. Without an additional external tracking system, the structure can be recovered from feature correspondences between different frames. In laparoscopic images, blurred frames, specular reflections, and inhomogeneous illumination make feature tracking a challenging task. We propose an ego-motion estimation system for minimally invasive laparoscopy that can cope with specular reflections, inhomogeneous illumination, and blurred frames. To obtain robust feature correspondences, the approach combines SIFT and specular reflection segmentation with a multi-frame tracking scheme. The calibrated five-point algorithm is used with the MSAC robust estimator to compute the motion of the endoscope from multi-frame correspondences. The algorithm is evaluated using endoscopic videos of a phantom. The small incisions and the rigid endoscope limit the motion in minimally invasive laparoscopy. These limitations are considered in our evaluation and are used to analyze the accuracy of pose estimation that can be achieved by our approach. The endoscope is moved by a robotic system, and the ground-truth motion is recorded. The evaluation on typical endoscopic motion gives precise results and demonstrates the practicability of the proposed pose estimation system.
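
    The core estimation chain can be sketched with OpenCV. OpenCV's robust loop offers RANSAC/LMedS rather than MSAC, so RANSAC stands in here; grayscale frames and a calibrated camera matrix K are assumed, and the paper's specular-reflection segmentation and multi-frame tracking are omitted.

      import cv2
      import numpy as np

      def relative_pose(img1, img2, K):
          # SIFT correspondences with a ratio test, then the five-point
          # algorithm inside a robust-estimation loop.
          sift = cv2.SIFT_create()
          k1, d1 = sift.detectAndCompute(img1, None)
          k2, d2 = sift.detectAndCompute(img2, None)
          matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
          good = [m for m, n in matches if m.distance < 0.75 * n.distance]
          p1 = np.float32([k1[m.queryIdx].pt for m in good])
          p2 = np.float32([k2[m.trainIdx].pt for m in good])
          E, inliers = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC,
                                            prob=0.999, threshold=1.0)
          _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=inliers)
          return R, t   # rotation and unit-scale translation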

  14. Correction of beam-beam effects in luminosity measurement in the forward region at CLIC

    NASA Astrophysics Data System (ADS)

    Lukić, S.; Božović-Jelisavčić, I.; Pandurović, M.; Smiljanić, I.

    2013-05-01

    Procedures for correcting the beam-beam effects in luminosity measurements at CLIC at 3 TeV center-of-mass energy are described and tested using Monte Carlo simulations. The angular counting loss due to the combined Beamstrahlung and initial-state radiation effects is corrected on the basis of the reconstructed velocity of the collision frame of the Bhabha scattering. The distortion of the luminosity spectrum due to initial-state radiation is corrected by deconvolution. Finally, the counting bias due to the finite calorimeter energy resolution is numerically corrected. To test the procedures, the BHLUMI Bhabha event generator and the Guinea-Pig beam-beam simulation were used to generate the outgoing momenta of Bhabha particles in bunch collisions at CLIC. The systematic effects of the beam-beam interaction on the luminosity measurement are corrected with a precision of 1.4 per mille in the upper 5% of the energy, and 2.7 per mille in the range between 80 and 90% of the nominal center-of-mass energy.

  15. A higher-speed compressive sensing camera through multi-diode design

    NASA Astrophysics Data System (ADS)

    Herman, Matthew A.; Tidman, James; Hewitt, Donna; Weston, Tyler; McMackin, Lenore

    2013-05-01

    Obtaining high frame rates is a challenge with compressive sensing (CS) systems that gather measurements in a sequential manner, such as the single-pixel CS camera. One strategy for increasing the frame rate is to divide the FOV into smaller areas that are sampled and reconstructed in parallel. Following this strategy, InView has developed a multi-aperture CS camera using an 8×4 array of photodiodes that essentially act as 32 individual simultaneously operating single-pixel cameras. Images reconstructed from each of the photodiode measurements are stitched together to form the full FOV. To account for crosstalk between the sub-apertures, novel modulation patterns have been developed to allow neighboring sub-apertures to share energy. Regions of overlap not only account for crosstalk energy that would otherwise be reconstructed as noise, but they also allow for tolerance in the alignment of the DMD to the lenslet array. Currently, the multi-aperture camera is built into a computational imaging workstation configuration useful for research and development purposes. In this configuration, modulation patterns are generated in a CPU and sent to the DMD via PCI express, which allows the operator to develop and change the patterns used in the data acquisition step. The sensor data is collected and then streamed to the workstation via an Ethernet or USB connection for the reconstruction step. Depending on the amount of data taken and the amount of overlap between sub-apertures, frame rates of 2-5 frames per second can be achieved. In a stand-alone camera platform, currently in development, pattern generation and reconstruction will be implemented on-board.

  16. Toward a Critical Multiracial Theory in Education

    ERIC Educational Resources Information Center

    Harris, Jessica C.

    2016-01-01

    This manuscript lays the foundation for a critical multiracial theory (MultiCrit) in education. The author uses extant literature and their own research that focused on multiraciality on the college campus to explore how CRT can move toward MultiCrit, which is well-positioned to frame multiracial students' experiences with race in education.

  17. Multiple Hypothesis Tracking (MHT) for Space Surveillance: Results and Simulation Studies

    NASA Astrophysics Data System (ADS)

    Singh, N.; Poore, A.; Sheaff, C.; Aristoff, J.; Jah, M.

    2013-09-01

    With the anticipated installation of more accurate sensors and the increased probability of future collisions between space objects, the potential number of observable space objects is likely to increase by an order of magnitude within the next decade, thereby placing an ever-increasing burden on current operational systems. Moreover, the need to track closely-spaced objects due, for example, to breakups as illustrated by the recent Chinese ASAT test or the Iridium-Kosmos collision, requires new, robust, and autonomous methods for space surveillance to enable the development and maintenance of the present and future space catalog and to support the overall space surveillance mission. The problem of correctly associating a stream of uncorrelated tracks (UCTs) and uncorrelated optical observations (UCOs) into common objects is critical to mitigating the number of UCTs and is a prerequisite to subsequent space catalog maintenance. Presently, such association operations are mainly performed using non-statistical simple fixed-gate association logic. In this paper, we report on the salient features and the performance of a newly-developed statistically-robust system-level multiple hypothesis tracking (MHT) system for advanced space surveillance. The multiple-frame assignment (MFA) formulation of MHT, together with supporting astrodynamics algorithms, provides a new joint capability for space catalog maintenance, UCT/UCO resolution, and initial orbit determination. The MFA-MHT framework incorporates multiple hypotheses for report to system track data association and uses a multi-arc construction to accommodate recently developed algorithms for multiple hypothesis filtering (e.g., AEGIS, CAR-MHF, UMAP, and MMAE). This MHT framework allows us to evaluate the benefits of many different algorithms ranging from single- and multiple-frame data association to filtering and uncertainty quantification. In this paper, it will be shown that the MHT system can provide superior tracking performance compared to existing methods at a lower computational cost, especially for closely-spaced objects, in realistic multi-sensor multi-object tracking scenarios over multiple regimes of space. Specifically, we demonstrate that the prototype MHT system can accurately and efficiently process tens of thousands of UCTs and angles-only UCOs emanating from thousands of objects in LEO, GEO, MEO and HELO, many of which are closely-spaced, in real-time on a single laptop computer, thereby making it well-suited for large-scale breakup and tracking scenarios. This is possible in part because complexity reduction techniques are used to control the runtime of MHT without sacrificing accuracy. We assess the performance of MHT in relation to other tracking methods in multi-target, multi-sensor scenarios ranging from easy to difficult (i.e., widely-spaced objects to closely-spaced objects), using realistic physics and probabilities of detection less than one. In LEO, it is shown that the MHT system is able to address the challenges of processing breakups by analyzing multiple frames of data simultaneously in order to improve association decisions, reduce cross-tagging, and reduce unassociated UCTs. As a result, the multi-frame MHT system can establish orbits up to ten times faster than single-frame methods. Finally, it is shown that in GEO, MEO and HELO, the MHT system is able to address the challenges of processing angles-only optical observations by providing a unified multi-frame framework.

  18. Receiver function stacks: initial steps for seismic imaging of Cotopaxi volcano, Ecuador

    NASA Astrophysics Data System (ADS)

    Bishop, J. W.; Lees, J. M.; Ruiz, M. C.

    2017-12-01

    Cotopaxi volcano is a large andesitic stratovolcano located within 50 km of the Ecuadorean capital of Quito. In August 2015, Cotopaxi erupted for the first time in 73 years. This eruptive cycle (VEI = 1) featured phreatic explosions and the ejection of an ash column 9 km above the volcano edifice. Following this event, ash covered approximately 500 km² of the surrounding area. Analysis of Multi-GAS data suggests that this eruption was fed from a shallow source; however, stratigraphic evidence surveying the last 800 years of Cotopaxi's activity suggests that there may be a deep magmatic source. To establish a geophysical framework for Cotopaxi's activity, receiver functions were calculated from well-recorded earthquakes detected from April 2015 to December 2015 at 9 permanent broadband seismic stations around the volcano. These events were located, and phase arrivals were manually picked. Radial teleseismic receiver functions were then calculated using an iterative deconvolution technique with a Gaussian width of 2.5. A maximum of 200 iterations was allowed in each deconvolution; iterations were stopped when either the maximum iteration number was reached or the percent change fell beneath a pre-determined tolerance. Receiver functions with anomalous pulses before the initial P arrival, or with later peaks larger than the initial P-correlated pulse, were discarded after visual inspection. Using these data, initial crustal thickness and slab depth estimates beneath the volcano were obtained. Estimates of the crustal Vp/Vs ratio for the region were also calculated.
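
    The iterative time-domain deconvolution referred to here (in the style of Ligorria and Ammon) builds the receiver function as a growing spike train. A compact sketch, assuming equal-length, equally sampled radial and vertical components and applying the Gaussian once at the end rather than inside the loop:

      import numpy as np

      def iterdecon(radial, vertical, n_iter=200, gauss_a=2.5, dt=0.05):
          n = len(radial)
          rf, resid = np.zeros(n), radial.copy()
          zz = vertical @ vertical                     # zero-lag power
          for _ in range(n_iter):
              xc = np.correlate(resid, vertical, mode="full")[n - 1:]
              i = int(np.argmax(np.abs(xc)))           # best spike position
              rf[i] += xc[i] / zz                      # add/adjust the spike
              resid = radial - np.convolve(vertical, rf)[:n]
          f = np.fft.rfftfreq(n, d=dt)
          gauss = np.exp(-(np.pi * f / gauss_a) ** 2)  # width parameter 2.5
          return np.fft.irfft(np.fft.rfft(rf) * gauss, n=n)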

  19. Calibration of a polarimetric imaging SAR

    NASA Technical Reports Server (NTRS)

    Sarabandi, K.; Pierce, L. E.; Ulaby, F. T.

    1991-01-01

    Calibration of polarimetric imaging Synthetic Aperture Radars (SARs) using point calibration targets is discussed. The four-port network calibration technique is used to describe the radar error model. The polarimetric ambiguity function of the SAR is then found using a single point target, namely a trihedral corner reflector. Based on this, an estimate of the backscattering coefficient of the terrain is found by a deconvolution process. A radar image taken by the JPL Airborne SAR (AIRSAR) is used for verification of the deconvolution calibration method. The calibrated responses of point targets in the image are compared both with theory and with the POLCAL technique. Also, the responses of a distributed target are compared using the deconvolution and POLCAL techniques.

  20. Volterra series based blind equalization for nonlinear distortions in short reach optical CAP system

    NASA Astrophysics Data System (ADS)

    Tao, Li; Tan, Hui; Fang, Chonghua; Chi, Nan

    2016-12-01

    In this paper, we propose a low-complexity blind Volterra-series-based nonlinear equalizer (VNLE) for mitigating nonlinear distortion in short-reach optical carrierless amplitude and phase (CAP) modulation systems. The principle of the blind VNLE is presented, and the performance of its blind adaptive algorithms, including the modified cascaded multi-modulus algorithm (MCMMA) and direct-detection LMS (DD-LMS), is investigated experimentally. Compared to a conventional VNLE that uses training symbols before demodulation, it is performed after matched filtering and downsampling, so a shorter memory length is required while a similar performance improvement is observed. About 1 dB of improvement is observed at a BER of 3.8×10⁻³ for a 40 Gb/s CAP32 signal over 40 km of standard single-mode fiber.
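
    The structure of such an equalizer is a linear filter augmented with second-order products of the received samples, adapted tap by tap. A minimal sketch with an LMS update; the paper's blind error terms (MCMMA, DD-LMS) are replaced here by a known training signal purely to keep the example short:

      import numpy as np

      def volterra_lms(rx, desired, mem=5, mu=1e-3):
          # Linear taps plus unique second-order products of the last 'mem'
          # received samples; weights adapted sample-by-sample with LMS.
          iu = np.triu_indices(mem)
          w = np.zeros(mem + len(iu[0]))
          out = np.zeros(len(rx))
          for k in range(mem, len(rx)):
              x1 = rx[k - mem:k][::-1]
              x = np.concatenate([x1, np.outer(x1, x1)[iu]])
              out[k] = w @ x
              w += mu * (desired[k] - out[k]) * x      # LMS update
          return out, w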

  1. Rubber Hands Feel Touch, but Not in Blind Individuals

    PubMed Central

    Ehrsson, H. Henrik

    2012-01-01

    Psychology and neuroscience have a long-standing tradition of studying blind individuals to investigate how visual experience shapes perception of the external world. Here, we study how blind people experience their own body by exposing them to a multisensory body illusion: the somatic rubber hand illusion. In this illusion, healthy blindfolded participants experience that they are touching their own right hand with their left index finger, when in fact they are touching a rubber hand with their left index finger while the experimenter touches their right hand in a synchronized manner (Ehrsson et al. 2005). We compared the strength of this illusion in a group of blind individuals (n = 10), all of whom had experienced severe visual impairment or complete blindness from birth, and a group of age-matched blindfolded sighted participants (n = 12). The illusion was quantified subjectively using questionnaires and behaviorally by asking participants to point to the felt location of the right hand. The results showed that the sighted participants experienced a strong illusion, whereas the blind participants experienced no illusion at all, a difference that was evident in both tests employed. A further experiment testing the participants' basic ability to localize the right hand in space without vision (proprioception) revealed no difference between the two groups. Taken together, these results suggest that blind individuals with impaired visual development have a more veridical percept of self-touch and a less flexible and dynamic representation of their own body in space compared to sighted individuals. We speculate that the multisensory brain systems that re-map somatosensory signals onto external reference frames are less developed in blind individuals and therefore do not allow efficient fusion of tactile and proprioceptive signals from the two upper limbs into a single illusory experience of self-touch as in sighted individuals. PMID:22558268

  3. EGF Search for Compound Source Time Functions in Microearthquakes

    NASA Astrophysics Data System (ADS)

    Ampuero, J.; Rubin, A. M.

    2003-12-01

    Numerical simulations of stopping ruptures on bimaterial interfaces seem to indicate a pronounced asymmetry in the time it takes to reach the peak Coulomb stress shortly beyond the rupture ends. For the rupture front moving in the direction of slip of the stiffer medium, the timescale is controlled by the arrival of stopping phases from the opposite side of the crack, but for the opposite rupture front this timescale is controlled by the much shorter-duration tensile stress pulse that moves in front of the crack tip as it slows down. This behavior may have implications for rupture complexity on bimaterial interfaces. In addition to observing an asymmetry in aftershock occurrence on the San Andreas fault, Rubin and Gillard (2000) noted that for all 5 of the compound earthquakes they observed in a cluster of 72 events, the second subevent occurred to the NW of the first (that is, near the rupture front moving in the direction of slip of the stiffer medium). They suggested that these 5 "second events" were simply examples of "early aftershocks", which also occur preferentially to the NW; however, the fact that these 5 earthquakes could not be recognized as compound at stations located to the SE indicates that the second event actually occurred on the timescale of the passage of the dynamic stress waves. Thus, observations of asymmetry in rupture complexity may form an independent dataset, complementary to observations of aftershock asymmetry, for constraining models of rupture on bimaterial interfaces. Microseismicity recorded on dense seismological networks has proved interesting for earthquake physics because the high number of events allows one to gain statistical insight into the observed source properties. However, microearthquakes are usually so small that the range of methods that can be applied to their analysis is limited and of low resolution. To address the questions raised above, we would like to characterize the source time functions (STF) of a large number of microearthquakes, in particular the statistics of compound events and the possible asymmetry of their spatial distribution. We will show results of the systematic application of empirical Green's function deconvolution methods to a large dataset from the Parkfield HRSN. On the methodological side, the performance and robustness of various deconvolution schemes are tested, ranging from trivially stabilized spectral division to projected Landweber and blind deconvolution. Use is also made of the redundancy available in highly clustered seismicity with many similar seismograms. The observations will be interpreted in the light of recent numerical simulations of dynamic rupture on bimaterial interfaces (see abstract of Rubin and Ampuero).
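
    The simplest of the deconvolution schemes mentioned, stabilized spectral division, fits in a few lines. A sketch with an assumed water-level fraction and a gentle Gaussian low-pass; the projected-Landweber and blind variants used in the study are not shown:

      import numpy as np

      def egf_deconvolve(main, egf, wl=0.01, dt=0.01, fc=10.0):
          # Water-level spectral division of the mainshock record by the
          # empirical Green's function (EGF) record.
          n = max(len(main), len(egf))
          M, G = np.fft.rfft(main, n), np.fft.rfft(egf, n)
          denom = np.abs(G) ** 2
          denom = np.maximum(denom, wl * denom.max())  # water-level clip
          stf = M * np.conj(G) / denom                 # relative STF spectrum
          f = np.fft.rfftfreq(n, d=dt)
          return np.fft.irfft(stf * np.exp(-(f / fc) ** 2), n)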

  4. Histogram deconvolution - An aid to automated classifiers

    NASA Technical Reports Server (NTRS)

    Lorre, J. J.

    1983-01-01

    It is shown that N-dimensional histograms are convolved by the addition of noise in the picture domain. Three methods are described which provide the ability to deconvolve such noise-affected histograms. The purpose of the deconvolution is to provide automated classifiers with a higher quality N-dimensional histogram from which to obtain classification statistics.

  5. Study of one- and two-dimensional filtering and deconvolution algorithms for a streaming array computer

    NASA Technical Reports Server (NTRS)

    Ioup, G. E.

    1985-01-01

    Appendix 5 of the Study of One- and Two-Dimensional Filtering and Deconvolution Algorithms for a Streaming Array Computer includes a resume of the professional background of the Principal Investigator on the project, lists of publications and research papers, graduate theses supervised, and grants received.

  6. The First Rapid Assessment of Avoidable Blindness (RAAB) in Thailand

    PubMed Central

    Isipradit, Saichin; Sirimaharaj, Maytinee; Charukamnoetkanok, Puwat; Thonginnetra, Oraorn; Wongsawad, Warapat; Sathornsumetee, Busaba; Somboonthanakij, Sudawadee; Soomsawasdi, Piriya; Jitawatanarat, Umapond; Taweebanjongsin, Wongsiri; Arayangkoon, Eakkachai; Arame, Punyawee; Kobkoonthon, Chinsuchee; Pangputhipong, Pannet

    2014-01-01

    Background: The majority of vision loss is preventable or treatable. Population surveys are crucial for planning, implementing, and monitoring policies and interventions to eliminate avoidable blindness and visual impairment. This is the first rapid assessment of avoidable blindness (RAAB) study in Thailand. Methods: A cross-sectional study of a population in Thailand aged 50 years or over aimed to assess the prevalence and causes of blindness and visual impairment. Using the Thailand National Census 2010 as the sampling frame, a stratified four-stage cluster sampling based on probability proportional to size was conducted in 176 enumeration areas from 11 provinces. Participants received comprehensive eye examinations by ophthalmologists. Results: The age- and sex-adjusted prevalences of blindness (presenting visual acuity (VA) <20/400), severe visual impairment (VA <20/200 but ≥20/400), and moderate visual impairment (VA <20/70 but ≥20/200) were 0.6% (95% CI: 0.5–0.8), 1.3% (95% CI: 1.0–1.6), and 12.6% (95% CI: 10.8–14.5), respectively. There was no significant difference among the four regions of Thailand. Cataract was the main cause of vision loss, accounting for 69.7% of blindness. Cataract surgical coverage in persons was 95.1% for a cutoff VA of 20/400. Refractive errors, diabetic retinopathy, glaucoma, and corneal opacities were responsible for 6.0%, 5.1%, 4.0%, and 2.0% of blindness, respectively. Conclusion: Thailand is on track to achieve the goal of VISION 2020. However, there is still much room for improvement. Policy refinements and innovative interventions are recommended to alleviate blindness and visual impairment, especially regarding the backlog of blinding cataract, the management of non-communicable, chronic, age-related eye diseases such as glaucoma, age-related macular degeneration, and diabetic retinopathy, the prevention of childhood blindness, and the establishment of a robust eye health information system. PMID:25502762

  7. Providing data science support for systems pharmacology and its implications to drug discovery.

    PubMed

    Hart, Thomas; Xie, Lei

    2016-01-01

    The conventional one-drug-one-target-one-disease drug discovery process has been less successful in tackling multi-genic, multi-faceted complex diseases. Systems pharmacology has emerged as a new discipline to address the current challenges in drug discovery. The goal of systems pharmacology is to transform huge, heterogeneous, and dynamic biological and clinical data into interpretable and actionable mechanistic models for decision making in drug discovery and patient treatment. Thus, big data technology and data science will play an essential role in systems pharmacology. This paper critically reviews the impact of three fundamental concepts of data science on systems pharmacology: similarity inference, overfitting avoidance, and disentangling causality from correlation. The authors then discuss recent advances and future directions in applying these three concepts to drug discovery, with a focus on proteome-wide context-specific quantitative drug target deconvolution and personalized adverse drug reaction prediction. Data science will facilitate reducing the complexity of systems pharmacology modeling, detecting hidden correlations between complex data sets, and distinguishing causation from correlation. The power of data science can only be fully realized when integrated with mechanism-based multi-scale modeling that explicitly takes into account the hierarchical organization of biological systems, from nucleic acids to proteins, to molecular interaction networks, to cells, to tissues, to patients, and to populations.

  8. Calculation of the static in-flight telescope-detector response by deconvolution applied to point-spread function for the geostationary earth radiation budget experiment.

    PubMed

    Matthews, Grant

    2004-12-01

    The Geostationary Earth Radiation Budget (GERB) experiment is a broadband satellite radiometer instrument program intended to resolve remaining uncertainties surrounding the effect of cloud radiative feedback on future climate change. Using a custom-designed diffraction-aberration telescope model, the GERB detector spatial response is recovered by deconvolution applied to the ground-calibration point-spread function (PSF) measurements. Combining an ensemble of randomly generated white-noise test scenes with the measured telescope transfer function significantly reduces the effect of noise on the deconvolution. With the recovered detector response as a base, the same model is applied to construct the predicted in-flight field-of-view response of each GERB pixel to both short- and long-wave Earth radiance. The results of this study can now be used to simulate and investigate the instantaneous sampling errors incurred by GERB. The developed deconvolution method may also be highly applicable to enhancing images or PSF data for any telescope system for which a wave-front error measurement is available.

  9. Point spread functions and deconvolution of ultrasonic images.

    PubMed

    Dalitz, Christoph; Pohle-Fröhlich, Regina; Michalk, Thorsten

    2015-03-01

    This article investigates the restoration of ultrasonic pulse-echo C-scan images by means of deconvolution with a point spread function (PSF). The deconvolution concept from linear system theory (LST) is linked to the wave-equation formulation of the imaging process, and an analytic formula for the PSF of planar transducers is derived. For this analytic expression, different numerical and analytic approximation schemes for evaluating the PSF are presented. By comparing simulated images with measured C-scan images, we demonstrate that the assumptions of LST, in combination with our formula for the PSF, are a good model for the pulse-echo imaging process. To reconstruct the object from a C-scan image, we compare different deconvolution schemes: the Wiener filter, the ForWaRD algorithm, and the Richardson-Lucy algorithm. The best results are obtained with the Richardson-Lucy algorithm with total variation regularization. For distances greater than or equal to twice the near-field distance, our experiments show that the numerically computed PSF can be replaced with a simple closed analytic term based on a far-field approximation.
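
    For reference, the core Richardson-Lucy iteration used as the best performer here is only a few lines (shown without the total-variation term the authors add; the PSF is assumed non-negative and normalized to unit sum):

      import numpy as np
      from scipy.signal import fftconvolve

      def richardson_lucy(image, psf, n_iter=50):
          est = np.full(image.shape, image.mean(), dtype=float)
          psf_flip = psf[::-1, ::-1]               # adjoint of the blur
          for _ in range(n_iter):
              conv = fftconvolve(est, psf, mode="same") + 1e-12
              est *= fftconvolve(image / conv, psf_flip, mode="same")
          return est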

  10. Nimbus 7 earth radiation budget wide field of view climate data set improvement. I - The earth albedo from deconvolution of shortwave measurements

    NASA Technical Reports Server (NTRS)

    Hucek, Richard R.; Ardanuy, Philip E.; Kyle, H. Lee

    1987-01-01

    A deconvolution method for extracting the top of the atmosphere (TOA) mean, daily albedo field from a set of wide-FOV (WFOV) shortwave radiometer measurements is proposed. The method is based on constructing a synthetic measurement for each satellite observation. The albedo field is represented as a truncated series of spherical harmonic functions, and these linear equations are presented. Simulation studies were conducted to determine the sensitivity of the method. It is observed that a maximum of about 289 pieces of data can be extracted from a set of Nimbus 7 WFOV satellite measurements. The albedos derived using the deconvolution method are compared with albedos derived using the WFOV archival method; the developed albedo field achieved a 20 percent reduction in the global rms regional reflected flux density errors. The deconvolution method is applied to estimate the mean, daily average TOA albedo field for January 1983. A strong and extensive albedo maximum (0.42), which corresponds to the El Nino/Southern Oscillation event of 1982-1983, is detected over the south central Pacific Ocean.
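
    As a hedged illustration of the linear system behind this deconvolution, the sketch below fits a truncated spherical-harmonic series to scattered measurements by least squares; a truncation degree of 16 yields (16 + 1)**2 = 289 coefficients, consistent with the figure quoted above. The sampling geometry and measurements (`colat`, `lon`, `y`) are synthetic stand-ins, not Nimbus 7 data.

        import numpy as np
        from scipy.special import sph_harm

        def sh_design_matrix(lon, colat, lmax):
            # real-valued spherical-harmonic basis evaluated at each observation point
            cols = []
            for l in range(lmax + 1):
                for m in range(-l, l + 1):
                    Y = sph_harm(abs(m), l, lon, colat)  # scipy order: (m, l, azimuth, colatitude)
                    cols.append(Y.real if m >= 0 else Y.imag)
            return np.column_stack(cols)

        rng = np.random.default_rng(0)
        colat = np.arccos(rng.uniform(-1, 1, 2000))  # colatitude of observation points
        lon = rng.uniform(0, 2 * np.pi, 2000)        # longitude
        A = sh_design_matrix(lon, colat, lmax=16)    # (16 + 1)**2 = 289 unknowns
        y = rng.normal(size=2000)                    # stand-in for synthetic measurements
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)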

  11. Hierarchical video summarization

    NASA Astrophysics Data System (ADS)

    Ratakonda, Krishna; Sezan, M. Ibrahim; Crinon, Regis J.

    1998-12-01

    We address the problem of key-frame summarization of video in the absence of any a priori information about its content. This is a common problem that is encountered in home videos. We propose a hierarchical key-frame summarization algorithm where a coarse-to-fine key-frame summary is generated. A hierarchical key-frame summary facilitates multi-level browsing where the user can quickly discover the content of the video by accessing its coarsest but most compact summary and then view a desired segment of the video with increasingly more detail. At the finest level, the summary is generated on the basis of color features of video frames, using an extension of a recently proposed key-frame extraction algorithm. The finest-level key-frames are recursively clustered using a novel pairwise K-means clustering approach with a temporal consecutiveness constraint. We also address summarization of MPEG-2 compressed video without fully decoding the bitstream. In addition, we propose efficient mechanisms that facilitate decoding the video when the hierarchical summary is utilized in browsing and playback of video segments starting at selected key-frames.
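
    The sketch below is a loose, hypothetical approximation of hierarchical key-frame selection on frame color histograms: the finest level is picked by a histogram-difference threshold, and coarser levels are built by repeatedly dropping the key-frame most similar to its temporal predecessor, one simple way to respect temporal consecutiveness. The paper's pairwise K-means variant and its MPEG-2 handling are not reproduced.

        import numpy as np

        def histograms(frames, bins=16):
            # frames: sequence of 2-D grayscale arrays with values in [0, 255]
            return np.stack([np.histogram(f, bins=bins, range=(0, 255), density=True)[0]
                             for f in frames])

        def finest_keyframes(hists, threshold=0.05):
            keys = [0]
            for i in range(1, len(hists)):
                if np.abs(hists[i] - hists[keys[-1]]).sum() > threshold:
                    keys.append(i)
            return keys

        def coarser_level(hists, keys):
            # drop the key-frame whose histogram is closest to its temporal predecessor
            assert len(keys) > 1
            d = [np.abs(hists[keys[i]] - hists[keys[i - 1]]).sum()
                 for i in range(1, len(keys))]
            return [k for j, k in enumerate(keys) if j != int(np.argmin(d)) + 1]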

  12. Gaussian and linear deconvolution of LC-MS/MS chromatograms of the eight aminobutyric acid isomers

    PubMed Central

    Vemula, Harika; Kitase, Yukiko; Ayon, Navid J.; Bonewald, Lynda; Gutheil, William G.

    2016-01-01

    Isomeric molecules present a challenge for analytical resolution and quantification, even with MS-based detection. The eight aminobutyric acid (ABA) isomers are of interest for their various biological activities, particularly γ-aminobutyric acid (GABA) and the d- and l-isomers of β-aminoisobutyric acid (β-AIBA; BAIBA). This study aimed to investigate LC-MS/MS-based resolution of these ABA isomers as their Marfey's (Mar) reagent derivatives. HPLC was able to completely separate three Mar-ABA isomers, l-β-ABA (l-BABA) and l- and d-α-ABA (AABA), leaving three isomers (GABA and d/l-BAIBA) in one chromatographic cluster and two isomers (α-AIBA (AAIBA) and d-BABA) in a second cluster. Partially separated cluster components were deconvoluted using Gaussian peak fitting, except for GABA and d-BAIBA. MS/MS detection of the Marfey's-derivatized ABA isomers provided six MS/MS fragments, with substantially different intensity profiles between structural isomers. This allowed linear deconvolution of ABA isomer peaks. Combining HPLC separation with linear and Gaussian deconvolution allowed resolution of all eight ABA isomers. Application to human serum found a substantial level of l-AABA (13 μM), an intermediate level of l-BAIBA (0.8 μM), and low but detectable levels (<0.2 μM) of GABA, l-BABA, AAIBA, d-BAIBA, and d-AABA. This approach should be useful for LC-MS/MS deconvolution of other challenging groups of isomeric molecules. PMID:27771391
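
    A minimal sketch of the Gaussian-deconvolution step for one two-peak cluster is shown below using scipy; the retention times, widths, and noise level are invented for illustration, not the published values.

        import numpy as np
        from scipy.optimize import curve_fit

        def two_gaussians(t, a1, mu1, s1, a2, mu2, s2):
            g = lambda a, mu, s: a * np.exp(-0.5 * ((t - mu) / s) ** 2)
            return g(a1, mu1, s1) + g(a2, mu2, s2)

        t = np.linspace(0, 10, 400)                            # retention time axis
        y = two_gaussians(t, 1.0, 4.8, 0.3, 0.6, 5.4, 0.3)     # overlapped cluster
        y += np.random.default_rng(1).normal(0, 0.01, t.size)  # detector noise
        popt, _ = curve_fit(two_gaussians, t, y, p0=[1, 4.5, 0.5, 0.5, 5.5, 0.5])
        areas = popt[0] * popt[2], popt[3] * popt[5]           # per-peak areas, up to a sqrt(2*pi) factor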

  13. Deconvolution of ferredoxin, plastocyanin, and P700 transmittance changes in intact leaves with a new type of kinetic LED array spectrophotometer.

    PubMed

    Klughammer, Christof; Schreiber, Ulrich

    2016-05-01

    A newly developed compact measuring system for assessment of transmittance changes in the near-infrared spectral region is described; it allows deconvolution of redox changes due to ferredoxin (Fd), P700, and plastocyanin (PC) in intact leaves. In addition, it can also simultaneously measure chlorophyll fluorescence. The major opto-electronic components as well as the principles of data acquisition and signal deconvolution are outlined. Four original pulse-modulated dual-wavelength difference signals are measured (785-840 nm, 810-870 nm, 870-970 nm, and 795-970 nm). Deconvolution is based on specific spectral information presented graphically in the form of 'Differential Model Plots' (DMP) of Fd, P700, and PC that are derived empirically from selective changes of these three components under appropriately chosen physiological conditions. Whereas information on maximal changes of Fd is obtained upon illumination after dark-acclimation, maximal changes of P700 and PC can be readily induced by saturating light pulses in the presence of far-red light. Using the information of DMP and maximal changes, the new measuring system enables on-line deconvolution of Fd, P700, and PC. The performance of the new device is demonstrated by some examples of practical applications, including fast measurements of flash relaxation kinetics and of the Fd, P700, and PC changes paralleling the polyphasic fluorescence rise upon application of a 300-ms pulse of saturating light.
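
    The on-line deconvolution step amounts to linear unmixing: each set of four dual-wavelength difference signals is modeled as a linear combination of the Fd, P700, and PC model responses and solved by least squares. The sketch below uses an invented 4x3 model matrix purely for illustration; it is not the instrument's published calibration.

        import numpy as np

        # rows: 785-840, 810-870, 870-970, 795-970 nm difference signals
        # columns: Fd, P700, PC model responses (hypothetical values)
        M = np.array([[0.9, 0.2, 0.1],
                      [0.4, 0.8, 0.3],
                      [0.1, 0.5, 0.9],
                      [0.3, 0.9, 0.6]])
        signal = np.array([0.35, 0.61, 0.55, 0.70])  # one time point, arbitrary units
        redox, *_ = np.linalg.lstsq(M, signal, rcond=None)
        fd, p700, pc = redox                         # deconvolved redox changes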

  14. Astronomy with the color blind

    NASA Astrophysics Data System (ADS)

    Smith, Donald A.; Melrose, Justyn

    2014-12-01

    The standard method to create dramatic color images in astrophotography is to record multiple black and white images, each with a different color filter in the optical path, and then tint each frame with a color appropriate to the corresponding filter. When combined, the resulting image conveys information about the sources of emission in the field, although one should be cautious in assuming that such an image shows what the subject would "really look like" if a person could see it without the aid of a telescope. The details of how the eye processes light have a significant impact on how such images should be understood, and the step from perception to interpretation is even more problematic when the viewer is color blind. We report here on an approach to manipulating stacked tricolor images that, while abandoning attempts to portray the color distribution "realistically," does enable those suffering from deuteranomaly (the most common form of color blindness) to perceive color distinctions they would otherwise not be able to see.
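
    As a purely hypothetical illustration of the general idea (not the authors' specific transformation), the sketch below maps three co-aligned filter frames onto a blue-yellow opponent scheme that remains distinguishable under deuteranomaly, instead of tinting them red, green, and blue.

        import numpy as np

        def stack_for_deuteranomaly(f1, f2, f3):
            # f1..f3: co-aligned grayscale frames scaled to [0, 1]
            rgb = np.zeros(f1.shape + (3,))
            rgb[..., 0] = f1            # drive red and green together -> perceived yellow
            rgb[..., 1] = f1
            rgb[..., 2] = f2            # second band carried on the blue axis
            rgb += 0.5 * f3[..., None]  # third band as overall luminance
            return np.clip(rgb, 0.0, 1.0)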

  15. Taking Race Off the Table: Agenda Setting and Support for Color-Blind Public Policy.

    PubMed

    Chow, Rosalind M; Knowles, Eric D

    2016-01-01

    Whites are theorized to support color-blind policies as an act of racial agenda setting, an attempt to defend the existing hierarchy by excluding race from public and institutional discourse. The present analysis leverages work distinguishing between two forms of social dominance orientation (SDO): passive opposition to equality (SDO-E) and active desire for dominance (SDO-D). We hypothesized that agenda setting, as a subtle hierarchy-maintenance strategy, would be uniquely tied to high levels of SDO-E. When made to believe that the hierarchy was under threat, Whites high in SDO-E increased their endorsement of color-blind policy (Study 1), particularly when the racial hierarchy was framed as ingroup advantage (Study 2), and became less willing to include race as a topic in a hypothetical presidential debate (Study 3). Across studies, Whites high in SDO-D showed no affinity for agenda setting as a hierarchy-maintenance strategy. © 2015 by the Society for Personality and Social Psychology, Inc.

  16. Tectonic Inversion Along the Algerian and Ligurian Margins: On the Insight Provided By Latest Seismic Processing Techniques Applied to Recent and Vintage 2D Offshore Multichannel Seismic Data

    NASA Astrophysics Data System (ADS)

    Schenini, L.; Beslier, M. O.; Sage, F.; Badji, R.; Galibert, P. Y.; Lepretre, A.; Dessa, J. X.; Aidi, C.; Watremez, L.

    2014-12-01

    Recent studies on the Algerian and the North-Ligurian margins in the Western Mediterranean have evidenced inversion-related superficial structures, such as folds and asymmetric sedimentary perched basins whose geometry hints at deep compressive structures dipping towards the continent. Deep seismic imaging of these margins is difficult due to steep slope and superficial multiples, and, in the Mediterranean context, to the highly diffractive Messinian evaporitic series in the basin. During the Algerian-French SPIRAL survey (2009, R/V Atalante), 2D marine multi-channel seismic (MCS) reflection data were collected along the Algerian Margin using a 4.5 km, 360 channel digital streamer and a 3040 cu. in. air-gun array. An advanced processing workflow has been laid out using Geocluster CGG software, which includes noise attenuation, 2D SRME multiple attenuation, surface consistent deconvolution, Kirchhoff pre-stack time migration. This processing produces satisfactory seismic images of the whole sedimentary cover, and of southward dipping reflectors in the acoustic basement along the central part of the margin offshore Great Kabylia, that are interpreted as inversion-related blind thrusts as part of flat-ramp systems. We applied this successful processing workflow to old 2D marine MCS data acquired on the North-Ligurian Margin (Malis survey, 1995, R/V Le Nadir), using a 2.5 km, 96 channel streamer and a 1140 cu. in. air-gun array. Particular attention was paid to multiple attenuation in adapting our workflow. The resulting reprocessed seismic images, interpreted with a coincident velocity model obtained by wide-angle data tomography, provide (1) enhanced imaging of the sedimentary cover down to the top of the acoustic basement, including the base of the Messinian evaporites and the sub-salt Miocene series, which appear to be tectonized as far as in the mid-basin, and (2) new evidence of deep crustal structures in the margin which the initial processing had failed to reveal.

  17. SU-E-I-08: Investigation of Deconvolution Methods for Blocker-Based CBCT Scatter Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, C; Jin, M; Ouyang, L

    2015-06-15

    Purpose: To investigate whether deconvolution methods can improve the scatter estimation under different blurring and noise conditions for blocker-based scatter correction methods for cone-beam X-ray computed tomography (CBCT). Methods: An “ideal” projection image with scatter was first simulated for blocker-based CBCT data acquisition by assuming no blurring effect and no noise. The ideal image was then convolved with long-tail point spread functions (PSF) of different widths to mimic the blurring effect from the finite focal spot and detector response. Different levels of noise were also added. Three deconvolution methods, 1) inverse filtering, 2) Wiener, and 3) Richardson-Lucy, were used to recover the scatter signal in the blocked region. The root mean square error (RMSE) of estimated scatter serves as a quantitative measure for the performance of different methods under different blurring and noise conditions. Results: Due to the blurring effect, the scatter signal in the blocked region is contaminated by the primary signal in the unblocked region. The direct use of the signal in the blocked region to estimate scatter (“direct method”) leads to large RMSE values, which increase with the increased width of the PSF and increased noise. The inverse filtering is very sensitive to noise and practically useless. The Wiener and Richardson-Lucy deconvolution methods significantly improve scatter estimation compared to the direct method. For a typical medium-PSF, medium-noise condition, both methods (∼20 RMSE) achieve a 4-fold improvement over the direct method (∼80 RMSE). The Wiener method deals better with large noise, and Richardson-Lucy works better on wide PSFs. Conclusion: We investigated several deconvolution methods to recover the scatter signal in the blocked region for blocker-based scatter correction for CBCT. Our simulation results demonstrate that Wiener and Richardson-Lucy deconvolution can significantly improve the scatter estimation compared to the direct method.
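
    For reference, a minimal frequency-domain Wiener deconvolution sketch is given below, assuming a known PSF of the same shape as the image (peak at the center) and a scalar noise-to-signal ratio `nsr`; it is not the authors' exact implementation.

        import numpy as np

        def wiener_deconvolve(blurred, psf, nsr=1e-2):
            H = np.fft.fft2(np.fft.ifftshift(psf))   # shift the PSF peak to the origin
            G = np.conj(H) / (np.abs(H) ** 2 + nsr)  # Wiener filter, scalar NSR model
            return np.real(np.fft.ifft2(G * np.fft.fft2(blurred)))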

  18. Gold - A novel deconvolution algorithm with optimization for waveform LiDAR processing

    NASA Astrophysics Data System (ADS)

    Zhou, Tan; Popescu, Sorin C.; Krause, Keith; Sheridan, Ryan D.; Putman, Eric

    2017-07-01

    Waveform Light Detection and Ranging (LiDAR) data have advantages over discrete-return LiDAR data in accurately characterizing vegetation structure. However, we lack a comprehensive understanding of waveform data processing approaches under different topography and vegetation conditions. The objective of this paper is to highlight a novel deconvolution algorithm, the Gold algorithm, for processing waveform LiDAR data with optimal deconvolution parameters. Further, we present a comparative study of waveform processing methods to provide insight into selecting an approach for a given combination of vegetation and terrain characteristics. We employed two waveform processing methods: (1) direct decomposition, and (2) deconvolution and decomposition. In method two, we utilized two deconvolution algorithms - the Richardson-Lucy (RL) algorithm and the Gold algorithm. The comprehensive and quantitative comparisons were conducted in terms of the number of detected echoes, position accuracy, the bias of the end products (such as digital terrain model (DTM) and canopy height model (CHM)) from the corresponding reference data, along with parameter uncertainty for these end products obtained from different methods. This study was conducted at three study sites that include diverse ecological regions, vegetation and elevation gradients. Results demonstrate that the two deconvolution algorithms are sensitive to the pre-processing steps of input data. The deconvolution and decomposition method is more capable of detecting hidden echoes with a lower false echo detection rate, especially for the Gold algorithm. Compared to the reference data, all approaches generate satisfactory accuracy assessment results with small mean spatial difference (<1.22 m for DTMs, <0.77 m for CHMs) and root mean square error (RMSE) (<1.26 m for DTMs, <1.93 m for CHMs). More specifically, the Gold algorithm is superior to the others, with a smaller RMSE (<1.01 m), while the direct decomposition approach works better in terms of the percentage of spatial difference within 0.5 and 1 m. The parameter uncertainty analysis demonstrates that the Gold algorithm outperforms other approaches in dense vegetation areas, with the smallest RMSE, and the RL algorithm performs better in sparse vegetation areas in terms of RMSE. Additionally, higher uncertainty occurs mainly in areas with steep slopes and dense vegetation. This study provides an alternative and innovative approach for waveform processing that will benefit high fidelity processing of waveform LiDAR data to characterize vegetation structures.
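
    A hedged numpy sketch of the Gold iteration for a 1-D waveform follows: a multiplicative, positivity-preserving update x <- x * (H'y) / (H'Hx), with H the convolution matrix of the system impulse response and y a non-negative recorded waveform. The paper's pre-processing steps and parameter optimization are omitted.

        import numpy as np
        from scipy.linalg import toeplitz

        def gold_deconvolution(y, h, iterations=500, eps=1e-12):
            n = y.size
            first_col = np.r_[h, np.zeros(n - h.size)]
            H = toeplitz(first_col, np.r_[h[0], np.zeros(n - 1)])  # causal convolution matrix
            HTy, HTH = H.T @ y, H.T @ H
            x = np.clip(y, eps, None).astype(float)  # positive initial estimate
            for _ in range(iterations):
                x *= HTy / (HTH @ x + eps)           # multiplicative Gold update
            return x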

  19. Improving cell mixture deconvolution by identifying optimal DNA methylation libraries (IDOL).

    PubMed

    Koestler, Devin C; Jones, Meaghan J; Usset, Joseph; Christensen, Brock C; Butler, Rondi A; Kobor, Michael S; Wiencke, John K; Kelsey, Karl T

    2016-03-08

    Confounding due to cellular heterogeneity represents one of the foremost challenges currently facing Epigenome-Wide Association Studies (EWAS). Statistical methods leveraging the tissue-specificity of DNA methylation for deconvoluting the cellular mixture of heterogeneous biospecimens offer a promising solution; however, the performance of such methods depends entirely on the library of methylation markers being used for deconvolution. Here, we introduce a novel algorithm for Identifying Optimal Libraries (IDOL) that dynamically scans a candidate set of cell-specific methylation markers to find libraries that optimize the accuracy of cell fraction estimates obtained from cell mixture deconvolution. Application of IDOL to a training set consisting of samples with both whole-blood DNA methylation data (Illumina HumanMethylation450 BeadArray (HM450)) and flow cytometry measurements of cell composition revealed an optimized library comprised of 300 CpG sites. When compared to existing libraries, the library identified by IDOL demonstrated significantly better overall discrimination of the entire immune cell landscape (p = 0.038), and resulted in improved discrimination of 14 out of the 15 pairs of leukocyte subtypes. Estimates of cell composition across the samples in the training set using the IDOL library were highly correlated with their respective flow cytometry measurements, with all cell-specific R² > 0.99 and root mean square errors (RMSEs) ranging from 0.97% to 1.33% across leukocyte subtypes. Independent validation of the optimized IDOL library using two additional HM450 data sets showed similarly strong prediction performance, with all cell-specific R² > 0.90 and RMSE < 4.00%. In simulation studies, adjustments for cell composition using the IDOL library resulted in uniformly lower false positive rates compared to competing libraries, while also demonstrating an improved capacity to explain epigenome-wide variation in DNA methylation within two large publicly available HM450 data sets. Despite consisting of half as many CpGs as existing libraries for whole-blood mixture deconvolution, the optimized IDOL library identified herein resulted in outstanding prediction performance across all considered data sets and demonstrated potential to improve the operating characteristics of EWAS involving adjustments for cell distribution. In addition to providing the EWAS community with an optimized library for whole-blood mixture deconvolution, our work establishes a systematic and generalizable framework for the assembly of libraries that improve the accuracy of cell mixture deconvolution.
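
    The downstream deconvolution step (IDOL itself is the search for the marker set) can be sketched as reference-based constrained regression: given a library matrix of mean methylation values for the selected CpGs and a sample's methylation vector, estimate non-negative cell fractions. All values below are simulated.

        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(7)
        library = rng.uniform(0, 1, size=(300, 6))  # 300 CpGs x 6 leukocyte subtypes
        true_frac = np.array([0.55, 0.25, 0.10, 0.05, 0.03, 0.02])
        sample = library @ true_frac + rng.normal(0, 0.01, 300)
        frac, _ = nnls(library, sample)             # non-negative least squares
        frac /= frac.sum()                          # renormalize fractions to sum to one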

  20. SU-C-9A-03: Simultaneous Deconvolution and Segmentation for PET Tumor Delineation Using a Variational Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, L; Tan, S; Lu, W

    2014-06-01

    Purpose: To implement a new method that integrates deconvolution with segmentation under the variational framework for PET tumor delineation. Methods: Deconvolution and segmentation are both challenging problems in image processing. The partial volume effect (PVE) makes tumor boundaries in PET images blurred, which affects the accuracy of tumor segmentation. Deconvolution aims to obtain a PVE-free image, which can help to improve the segmentation accuracy. Conversely, a correct localization of the object boundaries is helpful to estimate the blur kernel, and thus assists in the deconvolution. In this study, we proposed to solve the two problems simultaneously using a variational method so that they can benefit each other. The energy functional consists of a fidelity term and a regularization term, and the blur kernel was limited to be the isotropic Gaussian kernel. We minimized the energy functional by solving the associated Euler-Lagrange equations and taking the derivative with respect to the parameters of the kernel function. An alternate minimization method was used to iterate between segmentation, deconvolution and blur-kernel recovery. The performance of the proposed method was tested on clinical PET images of patients with non-Hodgkin's lymphoma, and compared with seven other segmentation methods using the Dice similarity index (DSI) and volume error (VE). Results: Among all segmentation methods, the proposed one (DSI=0.81, VE=0.05) has the highest accuracy, followed by the active contours without edges (DSI=0.81, VE=0.25), while other methods including the Graph Cut and the Mumford-Shah (MS) method have lower accuracy. A visual inspection shows that the proposed method localizes the real tumor contour very well. Conclusion: The results showed that deconvolution and segmentation can contribute to each other. The proposed variational method solves the two problems simultaneously and leads to high performance for tumor segmentation in PET. This work was supported in part by National Natural Science Foundation of China (NNSFC), under Grant Nos. 60971112 and 61375018, and Fundamental Research Funds for the Central Universities, under Grant No. 2012QN086. Wei Lu was supported in part by the National Institutes of Health (NIH) Grant No. R01 CA172638.

  1. Blind subjects construct conscious mental images of visual scenes encoded in musical form.

    PubMed Central

    Cronly-Dillon, J; Persaud, K C; Blore, R

    2000-01-01

    Blind (previously sighted) subjects are able to analyse, describe and graphically represent a number of high-contrast visual images translated into musical form de novo. We presented musical transforms of a random assortment of photographic images of objects and urban scenes to such subjects, a few of which depicted architectural and other landmarks that may be useful in navigating a route to a particular destination. Our blind subjects were able to use the sound representation to construct a conscious mental image that was revealed by their ability to depict a visual target by drawing it. We noted the similarity between the way the visual system integrates information from successive fixations to form a representation that is stable across eye movements and the way a succession of image frames (encoded in sound) which depict different portions of the image are integrated to form a seamless mental image. Finally, we discuss the profound resemblance between the way a professional musician carries out a structural analysis of a musical composition in order to relate its structure to the perception of musical form and the strategies used by our blind subjects in isolating structural features that collectively reveal the identity of visual form. PMID:11413637

  2. Detonator Performance Characterization using Multi-Frame Laser Schlieren Imaging

    NASA Astrophysics Data System (ADS)

    Clarke, Steven; Landon, Colin; Murphy, Michael; Martinez, Michael; Mason, Thomas; Thomas, Keith

    2009-06-01

    Multi-frame Laser Schlieren Imaging of shock waves produced by detonators in transparent witness materials can be used to evaluate detonator performance. We use inverse calculations of the 2D propagation of shock waves in the EPIC finite element model computer code to calculate a temporal-spatial-pressure profile on the surface of the detonator that is consistent with the experimental shock waves from the schlieren imaging. Examples are shown of calculated 2D temporal-spatial-pressure profiles from a range of detonator types (EFI, exploding foil initiator; DOI, direct optical initiation; EBW, exploding bridge wire; hotwire), detonator HE materials (PETN, HMX, etc.), and HE densities. Pressure profiles from the interaction of multiple shock waves are also shown. LA-UR-09-00909.

  3. High-speed multi-exposure laser speckle contrast imaging with a single-photon counting camera

    PubMed Central

    Dragojević, Tanja; Bronzi, Danilo; Varma, Hari M.; Valdes, Claudia P.; Castellvi, Clara; Villa, Federica; Tosi, Alberto; Justicia, Carles; Zappa, Franco; Durduran, Turgut

    2015-01-01

    Laser speckle contrast imaging (LSCI) has emerged as a valuable tool for cerebral blood flow (CBF) imaging. We present a multi-exposure laser speckle imaging (MESI) method which uses a high-frame rate acquisition with a negligible inter-frame dead time to mimic multiple exposures in a single-shot acquisition series. Our approach takes advantage of the noise-free readout and high-sensitivity of a complementary metal-oxide-semiconductor (CMOS) single-photon avalanche diode (SPAD) array to provide real-time speckle contrast measurement with high temporal resolution and accuracy. To demonstrate its feasibility, we provide comparisons between in vivo measurements with both the standard and the new approach performed on a mouse brain, in identical conditions. PMID:26309751

  4. Hyperspectral stimulated emission depletion microscopy and methods of use thereof

    DOEpatents

    Timlin, Jerilyn A; Aaron, Jesse S

    2014-04-01

    A hyperspectral stimulated emission depletion ("STED") microscope system for high-resolution imaging of samples labeled with multiple fluorophores (e.g., two to ten fluorophores). The hyperspectral STED microscope includes a light source, optical systems configured for generating an excitation light beam and a depletion light beam, optical systems configured for focusing the excitation and depletion light beams on a sample, and systems for collecting and processing data generated by interaction of the excitation and depletion light beams with the sample. Hyperspectral STED data may be analyzed using multivariate curve resolution analysis techniques to deconvolute emission from the multiple fluorophores. The hyperspectral STED microscope described herein can be used for multi-color, subdiffraction imaging of samples (e.g., materials and biological materials) and for analyzing a tissue by Forster Resonance Energy Transfer ("FRET").

  5. Spectrophotometric Determination of the Dissociation Constant of an Acid-Base Indicator Using a Mathematical Deconvolution Technique

    ERIC Educational Resources Information Center

    Alter, Krystyn P.; Molloy, John L.; Niemeyer, Emily D.

    2005-01-01

    A laboratory experiment reinforces the concept of acid-base equilibria while introducing a common application of spectrophotometry and can easily be completed within a standard four-hour laboratory period. It provides students with an opportunity to use advanced data analysis techniques like data smoothing and spectral deconvolution to…

  6. Deconvolution of Energy Spectra in the ATIC Experiment

    NASA Technical Reports Server (NTRS)

    Batkov, K. E.; Panov, A. D.; Adams, J. H.; Ahn, H. S.; Bashindzhagyan, G. L.; Chang, J.; Christl, M.; Fazley, A. R.; Ganel, O.; Gunasigha, R. M.; hide

    2005-01-01

    The Advanced Thin Ionization Calorimeter (ATIC) balloon-borne experiment is designed to perform cosmic-ray elemental spectra measurements from below 100 GeV up to tens of TeV for nuclei from hydrogen to iron. The instrument is composed of a silicon matrix detector followed by a carbon target, interleaved with scintillator tracking layers, and a segmented BGO calorimeter composed of 320 individual crystals totalling 18 radiation lengths, used to determine the particle energy. The technique for deconvolution of the energy spectra measured in the thin calorimeter is based on detailed simulations of the response of the ATIC instrument to different cosmic ray nuclei over a wide energy range. The method of deconvolution is described and the energy spectrum of carbon obtained with this technique is presented.

  7. SOURCE PULSE ENHANCEMENT BY DECONVOLUTION OF AN EMPIRICAL GREEN'S FUNCTION.

    USGS Publications Warehouse

    Mueller, Charles S.

    1985-01-01

    Observations of the earthquake source-time function are enhanced if path, recording-site, and instrument complexities can be removed from seismograms. Assuming that a small earthquake has a simple source, its seismogram can be treated as an empirical Green's function and deconvolved from the seismogram of a larger and/or more complex earthquake by spectral division. When the deconvolution is well posed, the quotient spectrum represents the apparent source-time function of the larger event. This study shows that with high-quality locally recorded earthquake data it is feasible to Fourier transform the quotient and obtain a useful result in the time domain. In practice, the deconvolution can be stabilized by one of several simple techniques. An application of the method is given.
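
    A hedged sketch of stabilized spectral division is shown below: the large event's spectrum is divided by the small event's, with a "water level" floor keeping the quotient well posed, and the inverse transform gives the apparent source-time function. The stabilization constant is illustrative, one of several possible simple techniques.

        import numpy as np

        def egf_deconvolve(large, small, water_level=0.01):
            n = 2 * max(large.size, small.size)  # zero-pad to avoid wrap-around
            L, S = np.fft.rfft(large, n), np.fft.rfft(small, n)
            floor = water_level * np.abs(S).max()
            S_stab = np.where(np.abs(S) < floor,
                              floor * np.exp(1j * np.angle(S)), S)
            return np.fft.irfft(L / S_stab, n)   # apparent source-time function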

  8. Deconvolution of time series in the laboratory

    NASA Astrophysics Data System (ADS)

    John, Thomas; Pietschmann, Dirk; Becker, Volker; Wagner, Christian

    2016-10-01

    In this study, we present two practical applications of the deconvolution of time series in Fourier space. First, we reconstruct a filtered input signal of sound cards that has been heavily distorted by a built-in high-pass filter using a software approach. Using deconvolution, we can partially bypass the filter and extend the dynamic frequency range by two orders of magnitude. Second, we construct required input signals for a mechanical shaker in order to obtain arbitrary acceleration waveforms, referred to as feedforward control. For both situations, experimental and theoretical approaches are discussed to determine the system-dependent frequency response. Moreover, for the shaker, we propose a simple feedback loop as an extension to the feedforward control in order to handle nonlinearities of the system.
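
    A minimal sketch of the feedforward idea: given the measured complex frequency response of the system sampled on the rfft bins, the required input is the desired output divided by the response in Fourier space. Stabilizing near-zero response values (cf. the water-level division above) is assumed to be handled separately.

        import numpy as np

        def feedforward_input(desired, response):
            # response: complex H(f) sampled on np.fft.rfftfreq(desired.size) bins
            return np.fft.irfft(np.fft.rfft(desired) / response, desired.size)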

  9. Multi-Frame Convolutional Neural Networks for Object Detection in Temporal Data

    DTIC Science & Technology

    2017-03-01

    Given the problem of detecting objects in video, existing neural-network solutions rely on a post-processing step to combine information across frames and strengthen conclusions. This technique has been successful for videos with simple, dominant objects, but it cannot detect objects ...

  10. Framing of feedback impacts student's satisfaction, self-efficacy and performance.

    PubMed

    van de Ridder, J M Monica; Peters, Claudia M M; Stokking, Karel M; de Ru, J Alexander; Ten Cate, Olle Th J

    2015-08-01

    Feedback is considered important to acquire clinical skills. Research evidence shows that feedback does not always improve learning and its effects may be small. In many studies, a variety of variables involved in feedback provision may mask either one of their effects. For example, there is reason to believe that the way oral feedback is framed may influence its effect if other variables are held constant. In a randomised controlled trial we investigated the effect of positively and negatively framed feedback messages on satisfaction, self-efficacy, and performance. A single-blind randomised controlled between-subject design was used, with framing of the feedback message (positively-negatively) as the independent variable and examination of hearing abilities as the task. First-year medical students' (n = 59) satisfaction, self-efficacy, and performance were the dependent variables and were measured both directly after the intervention and after a 2-week delay. Students in the positively framed feedback condition were significantly more satisfied and showed significantly higher self-efficacy measured directly after the performance. Effect sizes found were large, i.e., partial η² = 0.43 and η² = 0.32, respectively. They showed a better performance throughout the whole study. Significant performance differences were found both at the initial performance and when measured 2 weeks after the intervention: effects were of medium size (r = -.31 and r = -.32, respectively). Over time, performance and self-efficacy decreased in both conditions. Framing the feedback message in either a positive or negative manner affects students' satisfaction and self-efficacy directly after the intervention, although these effects seem to fade out over time. Performance may be enhanced by positive framing, but additional studies need to confirm this. We recommend using a positive frame when giving feedback on clinical skills.

  11. SU-F-J-96: Comparison of Frame-Based and Mutual Information Registration Techniques for CT and MR Image Sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Popple, R; Bredel, M; Brezovich, I

    Purpose: To compare the accuracy of CT-MR registration using a mutual information method with registration using a frame-based localizer box. Methods: Scans of ten patients wearing the Leksell head frame, acquired with a modality-specific localizer box, were imported into the treatment planning system. The fiducial rods of the localizer box were contoured on both the MR and CT scans. The skull was contoured on the CT images. The MR and CT images were registered by two methods. The frame-based method used the transformation that minimized the mean square distance of the centroids of the contours of the fiducial rods from a mathematical model of the localizer. The mutual information method used automated image registration tools in the TPS and was restricted to a volume-of-interest defined by the skull contours with a 5 mm margin. For each case, the two registrations were adjusted by two evaluation teams, each comprised of an experienced radiation oncologist and neurosurgeon, to optimize alignment in the region of the brainstem. The teams were blinded to the registration method. Results: The mean adjustment was 0.4 mm (range 0 to 2 mm) and 0.2 mm (range 0 to 1 mm) for the frame and mutual information methods, respectively. The median difference between the frame and mutual information registrations was 0.3 mm, but was not statistically significant using the Wilcoxon signed rank test (p=0.37). Conclusion: The difference between frame and mutual information registration techniques was neither statistically significant nor, for most applications, clinically important. These results suggest that mutual information is equivalent to frame-based image registration for radiosurgery. Work is ongoing to add additional evaluators and to assess the differences between evaluators.
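
    For readers unfamiliar with the metric, a brief numpy sketch of mutual information between two images, the quantity maximized by intensity-based registration, is given below; `img_a` and `img_b` are placeholder co-registered arrays.

        import numpy as np

        def mutual_information(img_a, img_b, bins=64):
            joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
            pxy = joint / joint.sum()                  # joint intensity distribution
            px, py = pxy.sum(axis=1), pxy.sum(axis=0)  # marginals
            nz = pxy > 0
            return np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))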

  12. Index finger somatosensory evoked potentials in blind Braille readers.

    PubMed

    Giriyappa, Dayananda; Subrahmanyam, Roopakala Mysore; Rangashetty, Srinivasa; Sharma, Rajeev

    2009-01-01

    Traditionally, vision has been considered the dominant modality in our multi-sensory perception of the surrounding world. Sensory input via non-visual tracts becomes of greater behavioural relevance in totally blind individuals to enable effective interaction with the world around them. These include audition and tactile perceptions, leading to an augmentation in these perceptions when compared with normal sighted individuals. The objective of the present work was to study the index finger somatosensory evoked potentials (SEPs) in totally blind and normal sighted individuals. SEPs were recorded in 15 Braille reading totally blind females and compared with 15 age-matched normal sighted females. Latency and amplitudes of somatosensory evoked potential waveforms (N9, N13, and N20) were measured. Amplitude of N20 SEP (a cortical somatosensory evoked potential) was significantly larger in the totally blind than in normal sighted individuals (p < 0.05). The amplitudes of N9 and N13 SEP and the latencies of all recorded SEPs showed no significant differences. Blindness has a profound effect on the Braille reading right index finger. Totally blind Braille readers have larger N20 amplitude, suggestive of greater somatosensory cortical representation of the Braille reading index finger.

  13. Contrast discrimination, non-uniform patterns and change blindness.

    PubMed Central

    Scott-Brown, K C; Orbach, H S

    1998-01-01

    Change blindness (our inability to detect large changes in natural scenes when saccades, blinks and other transients interrupt visual input) seems to contradict psychophysical evidence for our exquisite sensitivity to contrast changes. Can the type of effects described as 'change blindness' be observed with simple, multi-element stimuli, amenable to psychophysical analysis? Such stimuli, composed of five mixed contrast elements, elicited a striking increase in contrast increment thresholds compared to those for an isolated element. Cue presentation prior to the stimulus substantially reduced thresholds, as for change blindness with natural scenes. On one hand, explanations for change blindness based on abstract and sketchy representations in short-term visual memory seem inappropriate for this low-level image property of contrast where there is ample evidence for exquisite performance on memory tasks. On the other hand, the highly increased thresholds for mixed contrast elements, and the decreased thresholds when a cue is present, argue against any simple early attentional or sensory explanation for change blindness. Thus, psychophysical results for very simple patterns cannot straightforwardly predict results even for the slightly more complicated patterns studied here. PMID:9872004

  14. Sparse and redundant representations for inverse problems and recognition

    NASA Astrophysics Data System (ADS)

    Patel, Vishal M.

    Sparse and redundant representation of data enables the description of signals as linear combinations of a few atoms from a dictionary. In this dissertation, we study applications of sparse and redundant representations in inverse problems and object recognition. Furthermore, we propose two novel imaging modalities based on the recently introduced theory of Compressed Sensing (CS). This dissertation consists of four major parts. In the first part of the dissertation, we study a new type of deconvolution algorithm that is based on estimating the image from a shearlet decomposition. Shearlets provide a multi-directional and multi-scale decomposition that has been mathematically shown to represent distributed discontinuities such as edges better than traditional wavelets. We develop a deconvolution algorithm that allows the approximate inversion operator to be controlled on a multi-scale and multi-directional basis. Furthermore, we develop a method for the automatic determination of the threshold values for the noise shrinkage for each scale and direction, without explicit knowledge of the noise variance, using a generalized cross validation method. In the second part of the dissertation, we study a reconstruction method that recovers highly undersampled images assumed to have a sparse representation in a gradient domain by using partial measurement samples that are collected in the Fourier domain. Our method makes use of a robust generalized Poisson solver that greatly aids in achieving significantly improved performance over similar proposed methods. We demonstrate by experiments that this new technique is more flexible than its competitors in working with either random or restricted sampling scenarios. In the third part of the dissertation, we introduce a novel Synthetic Aperture Radar (SAR) imaging modality which can provide a high resolution map of the spatial distribution of targets and terrain using a significantly reduced number of needed transmitted and/or received electromagnetic waveforms. We demonstrate that this new imaging scheme requires no new hardware components and allows the aperture to be compressed. Also, it presents many new applications and advantages, which include strong resistance to countermeasures and interception, imaging of much wider swaths, and reduced on-board storage requirements. The last part of the dissertation deals with object recognition based on learning dictionaries for simultaneous sparse signal approximations and feature extraction. A dictionary is learned for each object class based on given training examples which minimize the representation error with a sparseness constraint. A novel test image is then projected onto the span of the atoms in each learned dictionary. The residual vectors along with the coefficients are then used for recognition. Applications to illumination robust face recognition and automatic target recognition are presented.
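
    A compact sketch of the sparse-recovery machinery underlying several of these chapters is the iterative shrinkage-thresholding algorithm (ISTA) for min ||Ax - y||² + λ||x||₁; the random sensing matrix below stands in for the shearlet or Fourier operators of the dissertation.

        import numpy as np

        def ista(A, y, lam=0.1, iterations=200):
            L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(iterations):
                g = x + A.T @ (y - A @ x) / L  # gradient step on the data term
                x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
            return x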

  15. Wide-field Fourier ptychographic microscopy using laser illumination source

    PubMed Central

    Chung, Jaebum; Lu, Hangwen; Ou, Xiaoze; Zhou, Haojiang; Yang, Changhuei

    2016-01-01

    Fourier ptychographic (FP) microscopy is a coherent imaging method that can synthesize an image with a higher bandwidth using multiple low-bandwidth images captured at different spatial frequency regions. The method’s demand for multiple images drives the need for a brighter illumination scheme and a high-frame-rate camera for a faster acquisition. We report the use of a guided laser beam as an illumination source for an FP microscope. It uses a mirror array and a 2-dimensional scanning Galvo mirror system to provide a sample with plane-wave illuminations at diverse incidence angles. The use of a laser presents speckles in the image capturing process due to reflections between glass surfaces in the system. They appear as slowly varying background fluctuations in the final reconstructed image. We are able to mitigate these artifacts by including a phase image obtained by differential phase contrast (DPC) deconvolution in the FP algorithm. We use a 1-Watt laser configured to provide a collimated beam with 150 mW of power and beam diameter of 1 cm to allow for the total capturing time of 0.96 seconds for 96 raw FPM input images in our system, with the camera sensor’s frame rate being the bottleneck for speed. We demonstrate a factor of 4 resolution improvement using a 0.1 NA objective lens over the full camera field-of-view of 2.7 mm by 1.5 mm. PMID:27896016

  16. MODELING MULTI-WAVELENGTH STELLAR ASTROMETRY. I. SIM LITE OBSERVATIONS OF INTERACTING BINARIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coughlin, Jeffrey L.; Harrison, Thomas E.; Gelino, Dawn M.

    Interacting binaries (IBs) consist of a secondary star that fills or is very close to filling its Roche lobe, resulting in accretion onto the primary star, which is often, but not always, a compact object. In many cases, the primary star, secondary star, and the accretion disk can all be significant sources of luminosity. SIM Lite will only measure the photocenter of an astrometric target, and thus determining the true astrometric orbits of such systems will be difficult. We have modified the Eclipsing Light Curve code to allow us to model the flux-weighted reflex motions of IBs, in a code we call REFLUX. This code gives us sufficient flexibility to investigate nearly every configuration of IB. We find that SIM Lite will be able to determine astrometric orbits for all sufficiently bright IBs where the primary or secondary star dominates the luminosity. For systems where there are multiple components that comprise the spectrum in the optical bandpass accessible to SIM Lite, we find it is possible to obtain absolute masses for both components, although multi-wavelength photometry will be required to disentangle the multiple components. In all cases, SIM Lite will at least yield accurate inclinations and provide valuable information that will allow us to begin to understand the complex evolution of mass-transferring binaries. It is critical that SIM Lite maintains a multi-wavelength capability to allow for the proper deconvolution of the astrometric orbits in multi-component systems.

  17. Coincidence ion imaging with a fast frame camera

    NASA Astrophysics Data System (ADS)

    Lee, Suk Kyoung; Cudry, Fadia; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander H.; Fan, Lin; Li, Wen

    2014-12-01

    A new time- and position-sensitive particle detection system based on a fast frame CMOS (complementary metal-oxide-semiconductor) camera is developed for coincidence ion imaging. The system is composed of four major components: a conventional microchannel plate/phosphor screen ion imager, a fast frame CMOS camera, a single-anode photomultiplier tube (PMT), and a high-speed digitizer. The system collects the positional information of ions from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the PMT processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum of the PMT. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched pair of co-fragments (methyl and iodine cations) produced from strong-field dissociative double ionization of methyl iodide.
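
    A hedged sketch of the centroiding step follows: threshold a frame, label connected ion spots, and return sub-pixel centroids with integrated intensities (the quantity correlated against PMT peak heights); `frame` and `threshold` are placeholders.

        import numpy as np
        from scipy import ndimage

        def centroid_frame(frame, threshold):
            labels, n = ndimage.label(frame > threshold)
            idx = range(1, n + 1)
            centroids = ndimage.center_of_mass(frame, labels, idx)  # sub-pixel positions
            intensities = ndimage.sum(frame, labels, idx)           # integrated spot intensities
            return np.array(centroids), np.asarray(intensities)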

  18. Proximity correction of high-dosed frame with PROXECCO

    NASA Astrophysics Data System (ADS)

    Eisenmann, Hans; Waas, Thomas; Hartmann, Hans

    1994-05-01

    The usefulness of electron beam lithography is strongly related to the efficiency and quality of the methods used for proximity correction. This paper addresses the above issue by proposing an extension to the new proximity correction program PROXECCO. The combination of a framing step with PROXECCO produces a pattern with very high edge accuracy and still allows use of the fast correction procedure. Making a frame with a higher dose imitates a fine-resolution correction in which the coarse part is disregarded. After the high-resolution effect is handled by framing, an additional coarse correction is still needed. Higher doses make a higher contribution to the proximity effect. This additional proximity effect is taken into account with the help of the multi-dose input of PROXECCO. The dose of the frame is variable, depending on the energy deposited by backscattering from the surrounding pattern. Simulation confirms the very high edge accuracy of the applied method.

  19. Engaging blind and partially sighted stakeholders in transformational change.

    PubMed

    Pearson, Victoria

    2016-09-01

    For non-profit organizations in the disability sector, engaging stakeholders with disabilities on matters of strategic planning is both a responsibility and an expectation. As part of our current strategic plan, which calls for organizational and systemic transformation, the Canadian National Institute for the Blind (CNIB) has engaged blind and partially sighted stakeholders alongside other interest groups to build and advocate for a more holistic model of vision healthcare and rehabilitation. This article describes the CNIB's multi-year process, including early-stage consultations, collaborative strategy development, and political advocacy and shares our organization's key success factors and learnings in creating meaningful, mutually beneficial engagement. © 2016 The Canadian College of Health Leaders.

  20. Blind speech separation system for humanoid robot with FastICA for audio filtering and separation

    NASA Astrophysics Data System (ADS)

    Budiharto, Widodo; Santoso Gunawan, Alexander Agung

    2016-07-01

    Nowadays, there are many developments in building intelligent humanoid robots, mainly in order to handle voice and image. In this research, we propose a blind speech separation system using FastICA for audio filtering and separation that can be used in education or entertainment. Our main problem is to separate multiple speech sources and to filter out irrelevant noise. After the speech separation step, the results will be integrated with our previous speech and face recognition system, which is based on the Bioloid GP robot and a Raspberry Pi 2 as controller. The experimental results show that the accuracy of our blind speech separation system is about 88% in command and query recognition cases.
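
    A minimal FastICA separation sketch with scikit-learn is shown below; the two synthetic waveforms stand in for speech sources, and instantaneous mixing is assumed, which simplifies the real room acoustics faced by the robot.

        import numpy as np
        from sklearn.decomposition import FastICA

        t = np.linspace(0, 1, 8000)
        sources = np.c_[np.sin(40 * t), np.sign(np.sin(23 * t))]  # stand-ins for two voices
        mixing = np.array([[1.0, 0.6], [0.4, 1.0]])
        mics = sources @ mixing.T                                 # two "microphone" channels
        recovered = FastICA(n_components=2, random_state=0).fit_transform(mics)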

  1. Rovibrational spectroscopy using a kinetic energy operator in Eckart frame and the multi-configuration time-dependent Hartree (MCTDH) approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sadri, Keyvan, E-mail: keyvan.sadri@pci.uni-heidelberg.de; Meyer, Hans-Dieter, E-mail: hans-dieter.meyer@pci.uni-heidelberg.de; Lauvergnat, David, E-mail: david.lauvergnat@u-psud.fr

    2014-09-21

    For computational rovibrational spectroscopy the choice of the frame is critical for an approximate separation of overall rotation from internal motions. To minimize the coupling between internal coordinates and rotation, Eckart proposed a condition [“Some studies concerning rotating axes and polyatomic molecules,” Phys. Rev. 47, 552–558 (1935)] and a frame that fulfills this condition is hence called an Eckart frame. A method is developed to introduce in a systematic way the Eckart frame for the expression of the kinetic energy operator (KEO) in the polyspherical approach. The computed energy levels of a water molecule are compared with those obtained using a KEO in the standard definition of the Body-fixed frame of the polyspherical approach. The KEO in the Eckart frame leads to a faster convergence especially for large J states and vibrationally excited states. To provide an example with more degrees of freedom, rotational states of the vibrational ground state of the trans nitrous acid (HONO) are also investigated.
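
    Since the Eckart conditions are equivalent to a mass-weighted least-squares superposition onto the reference geometry, one way to construct the Eckart-frame rotation for a single geometry is a weighted Kabsch fit, sketched below with scipy; the water coordinates are illustrative, and both geometries are assumed already shifted to their centers of mass.

        import numpy as np
        from scipy.spatial.transform import Rotation

        masses = np.array([15.999, 1.008, 1.008])       # O, H, H
        ref = np.array([[0.0, 0.0, 0.066],              # reference geometry (angstrom)
                        [0.0, 0.757, -0.527],
                        [0.0, -0.757, -0.527]])
        ref -= np.average(ref, axis=0, weights=masses)  # center of mass at the origin
        geom = ref @ Rotation.from_euler('z', 25, degrees=True).as_matrix().T  # displaced copy
        rot, rssd = Rotation.align_vectors(ref, geom, weights=masses)  # weighted Kabsch fit
        eckart_geom = rot.apply(geom)                   # geometry expressed in the Eckart frame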

  2. Efficient volumetric estimation from plenoptic data

    NASA Astrophysics Data System (ADS)

    Anglin, Paul; Reeves, Stanley J.; Thurow, Brian S.

    2013-03-01

    The commercial release of the Lytro camera, and greater availability of plenoptic imaging systems in general, have given the image processing community cost-effective tools for light-field imaging. While this data is most commonly used to generate planar images at arbitrary focal depths, reconstruction of volumetric fields is also possible. Similarly, deconvolution is a technique that is conventionally used in planar image reconstruction, or deblurring, algorithms. However, when leveraged with the ability of a light-field camera to quickly reproduce multiple focal planes within an imaged volume, deconvolution offers a computationally efficient method of volumetric reconstruction. Related research has shown that light-field imaging systems in conjunction with tomographic reconstruction techniques are also capable of estimating the imaged volume and have been successfully applied to particle image velocimetry (PIV). However, while tomographic volumetric estimation through algorithms such as the multiplicative algebraic reconstruction technique (MART) has proven to be highly accurate, it is computationally intensive. In this paper, the reconstruction problem is shown to be solvable by deconvolution. Deconvolution offers significant improvement in computational efficiency through the use of fast Fourier transforms (FFTs) when compared to other tomographic methods. This work describes a deconvolution algorithm designed to reconstruct a 3-D particle field from simulated plenoptic data. A 3-D extension of existing 2-D FFT-based refocusing techniques is presented to further improve efficiency when computing object focal stacks and system point spread functions (PSF). Reconstruction artifacts are identified; their underlying source and methods of mitigation are explored where possible, and reconstructions of simulated particle fields are provided.

  3. Fast Fourier-based deconvolution for three-dimensional acoustic source identification with solid spherical arrays

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Chu, Zhigang; Shen, Linbang; Ping, Guoli; Xu, Zhongming

    2018-07-01

    Fourier-based deconvolution, which can clarify acoustic source identification results quickly, has been studied and applied widely for delay-and-sum (DAS) beamforming with two-dimensional (2D) planar arrays. It has, however, not yet been developed for spherical harmonics beamforming (SHB) with three-dimensional (3D) solid spherical arrays. This paper addresses that gap. First, the premise of deconvolution, a shift-invariant point spread function (PSF), is analyzed with simulations in order to determine the effective identification region. For the premise to be satisfied approximately, the opening angle in the elevation dimension of the surface of interest should be small, while no restriction is imposed on the azimuth dimension. Then, two kinds of deconvolution theories are built for SHB using the zero and the periodic boundary conditions, respectively. Both simulations and experiments demonstrate that the periodic boundary condition is superior to the zero one and better fits 3D acoustic source identification with solid spherical arrays. Finally, four deconvolution methods based on the periodic boundary condition are formulated, and their performance is assessed both with simulations and experimentally. All four methods offer enhanced spatial resolution and reduced sidelobe contamination over SHB. The recovered source strength approximates the exact one multiplied by a coefficient, namely the square of the focus distance divided by the distance from the source to the array center, while the recovered pressure contribution is scarcely affected by the focus distance and always approximates the exact one.

  4. Detection of increased vasa vasorum in artery walls: improving CT number accuracy using image deconvolution

    NASA Astrophysics Data System (ADS)

    Rajendran, Kishore; Leng, Shuai; Jorgensen, Steven M.; Abdurakhimova, Dilbar; Ritman, Erik L.; McCollough, Cynthia H.

    2017-03-01

    Changes in arterial wall perfusion are an indicator of early atherosclerosis. This is characterized by an increased spatial density of vasa vasorum (VV), the micro-vessels that supply oxygen and nutrients to the arterial wall. Detection of increased VV during contrast-enhanced computed tomography (CT) imaging is limited due to contamination from blooming effect from the contrast-enhanced lumen. We report the application of an image deconvolution technique using a measured system point-spread function, on CT data obtained from a photon-counting CT system, to reduce blooming and to improve the CT number accuracy of the arterial wall, which enhances detection of increased VV. A phantom study was performed to assess the accuracy of the deconvolution technique. A porcine model was created with enhanced VV in one carotid artery; the other carotid artery served as a control. CT images at an energy range of 25-120 keV were reconstructed. CT numbers were measured for multiple locations in the carotid walls and for multiple time points, pre and post contrast injection. The mean CT number in the carotid wall was compared between the left (increased VV) and right (control) carotid arteries. Prior to deconvolution, results showed similar mean CT numbers in the left and right carotid wall due to the contamination from blooming effect, limiting the detection of increased VV in the left carotid artery. After deconvolution, the mean CT number difference between the left and right carotid arteries was substantially increased at all the time points, enabling detection of the increased VV in the artery wall.

  5. VizieR Online Data Catalog: Spatial deconvolution code (Quintero Noda+, 2015)

    NASA Astrophysics Data System (ADS)

    Quintero Noda, C.; Asensio Ramos, A.; Orozco Suarez, D.; Ruiz Cobo, B.

    2015-05-01

    This deconvolution method follows the scheme presented in Ruiz Cobo & Asensio Ramos (2013A&A...549L...4R). The Stokes parameters are projected onto a few spectral eigenvectors and the ensuing maps of coefficients are deconvolved using a standard Lucy-Richardson algorithm. This introduces a stabilization because the PCA filtering reduces the amount of noise. (1 data file).

  6. 3D image restoration for confocal microscopy: toward a wavelet deconvolution for the study of complex biological structures

    NASA Astrophysics Data System (ADS)

    Boutet de Monvel, Jacques; Le Calvez, Sophie; Ulfendahl, Mats

    2000-05-01

    Image restoration algorithms provide efficient tools for recovering part of the information lost in the imaging process of a microscope. We describe recent progress in the application of deconvolution to confocal microscopy. The point spread function of a Biorad-MRC1024 confocal microscope was measured under various imaging conditions, and used to process 3D-confocal images acquired in an intact preparation of the inner ear developed at Karolinska Institutet. Using these experiments we investigate the application of denoising methods based on wavelet analysis as a natural regularization of the deconvolution process. Within the Bayesian approach to image restoration, we compare wavelet denoising with the use of a maximum entropy constraint as another natural regularization method. Numerical experiments performed with test images show a clear advantage of the wavelet denoising approach, allowing one to 'cool down' the image with respect to the signal, while suppressing much of the fine-scale artifacts appearing during deconvolution due to the presence of noise, incomplete knowledge of the point spread function, or undersampling problems. We further describe a natural development of this approach, which consists of performing the Bayesian inference directly in the wavelet domain.
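
    In its simplest form, the wavelet regularization can be reduced to soft-thresholding the detail coefficients of the current estimate between deconvolution iterations; the PyWavelets sketch below uses a universal threshold with an assumed noise scale `sigma`, not the paper's Bayesian formulation.

        import numpy as np
        import pywt

        def wavelet_denoise(img, wavelet='db4', level=3, sigma=0.01):
            coeffs = pywt.wavedec2(img, wavelet, level=level)
            thr = sigma * np.sqrt(2 * np.log(img.size))  # universal threshold
            denoised = [coeffs[0]] + [
                tuple(pywt.threshold(c, thr, mode='soft') for c in detail)
                for detail in coeffs[1:]
            ]
            return pywt.waverec2(denoised, wavelet)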

  7. A method to measure the presampling MTF in digital radiography using Wiener deconvolution

    NASA Astrophysics Data System (ADS)

    Zhou, Zhongxing; Zhu, Qingzhen; Gao, Feng; Zhao, Huijuan; Zhang, Lixin; Li, Guohui

    2013-03-01

    We developed a novel method for determining the presampling modulation transfer function (MTF) of digital radiography systems from slanted-edge images based on Wiener deconvolution. The degraded supersampled edge spread function (ESF) was obtained from simulated slanted-edge images with known MTF in the presence of Poisson noise, and its corresponding ideal ESF without degradation was constructed according to its central edge position. To meet the absolute-integrability condition of the Fourier transform, the original ESFs were mirrored to construct symmetric ESF patterns. Based on the Wiener deconvolution technique, the supersampled line spread function (LSF) could then be acquired from the symmetric pattern of degraded supersampled ESFs, given the ideal symmetric ESFs and the system noise. The MTF is then the normalized magnitude of the Fourier transform of the LSF. The determined MTF showed strong agreement with the theoretical true MTF when an appropriate Wiener parameter was chosen. The effects of the Wiener parameter value and the width of the square-like wave peak in the symmetric ESFs are illustrated and discussed. In conclusion, an accurate and simple method to measure the presampling MTF was established using the Wiener deconvolution technique applied to slanted-edge images.
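
    The pipeline reduces to a few lines of NumPy. This sketch assumes equally sampled degraded and ideal supersampled ESFs of the same length; the Wiener parameter `k` is the tuning choice the abstract discusses.

    ```python
    # Sketch: Wiener deconvolution of a mirrored (symmetric) ESF pair to get
    # the LSF, then the presampling MTF as the normalized |FFT| of the LSF.
    import numpy as np

    def mtf_from_esf(esf_degraded, esf_ideal, k=1e-3):
        sym_d = np.concatenate([esf_degraded, esf_degraded[::-1]])  # mirror
        sym_i = np.concatenate([esf_ideal, esf_ideal[::-1]])
        D, I = np.fft.fft(sym_d), np.fft.fft(sym_i)
        # Wiener deconvolution: LSF spectrum = D * conj(I) / (|I|^2 + k)
        lsf = np.real(np.fft.ifft(D * np.conj(I) / (np.abs(I) ** 2 + k)))
        mtf = np.abs(np.fft.fft(lsf))
        return mtf / mtf[0]                       # normalize so MTF(0) = 1
    ```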

  8. Deconvolution of interferometric data using interior point iterative algorithms

    NASA Astrophysics Data System (ADS)

    Theys, C.; Lantéri, H.; Aime, C.

    2016-09-01

    We address the problem of deconvolution of astronomical images that could be obtained with future large interferometers in space. The presentation is made in two complementary parts. The first part gives an introduction to image deconvolution with linear and nonlinear algorithms. The emphasis is placed on nonlinear iterative algorithms that verify the constraints of non-negativity and constant flux. The Richardson-Lucy algorithm appears there as a special case for photon counting conditions. More generally, the algorithm published recently by Lantéri et al. (2015) is based on scale-invariant divergences without assumptions on the statistical model of the data. The two proposed algorithms are interior-point algorithms, the latter being more efficient in terms of speed of calculation. These algorithms are applied to the deconvolution of simulated images corresponding to an interferometric system of 16 diluted telescopes in space. Two non-redundant configurations, one disposed around a circle and the other on a hexagonal lattice, are compared for their effectiveness on a simple astronomical object. The comparison is made in the direct and Fourier spaces. Raw "dirty" images have many artifacts due to replicas of the original object. Linear methods cannot remove these replicas, while iterative methods clearly show their efficacy in these examples.

  9. Single-Ion Deconvolution of Mass Peak Overlaps for Atom Probe Microscopy.

    PubMed

    London, Andrew J; Haley, Daniel; Moody, Michael P

    2017-04-01

    Due to the intrinsic evaporation properties of the material studied, insufficient mass-resolving power and lack of knowledge of the kinetic energy of incident ions, peaks in the atom probe mass-to-charge spectrum can overlap and result in incorrect composition measurements. Contributions to these peak overlaps can be deconvoluted globally, by simply examining adjacent peaks combined with knowledge of natural isotopic abundances. However, this strategy does not account for the fact that the relative contributions to this convoluted signal can often vary significantly in different regions of the analysis volume; e.g., across interfaces and within clusters. Some progress has been made with spatially localized deconvolution in cases where the discrete microstructural regions can be easily identified within the reconstruction, but this means no further point cloud analyses are possible. Hence, we present an ion-by-ion methodology where the identity of each ion, normally obscured by peak overlap, is resolved by examining the isotopic abundance of their immediate surroundings. The resulting peak-deconvoluted data are a point cloud and can be analyzed with any existing tools. We present two detailed case studies and discussion of the limitations of this new technique.

  10. Comparison of active-set method deconvolution and matched-filtering for derivation of an ultrasound transit time spectrum.

    PubMed

    Wille, M-L; Zapf, M; Ruiter, N V; Gemmeke, H; Langton, C M

    2015-06-21

    The quality of ultrasound computed tomography imaging is primarily determined by the accuracy of ultrasound transit time measurement. A major problem in analysis is the overlap of signals, which makes it difficult to detect the correct transit time. The current standard is to apply a matched-filtering approach to the input and output signals. This study compares the matched-filtering technique with active-set deconvolution to derive a transit time spectrum from a coded excitation chirp signal and the measured output signal. The ultrasound wave travels in a direct and a reflected path to the receiver, resulting in an overlap in the recorded output signal. The matched-filtering and deconvolution techniques were applied to determine the transit times associated with the two signal paths. Both techniques were able to detect the two different transit times; while matched-filtering has a better accuracy (0.13 μs versus 0.18 μs standard deviations), deconvolution has a 3.5 times improved side-lobe to main-lobe ratio. A higher side-lobe suppression is important to further improve image fidelity. These results suggest that a future combination of both techniques would provide improved signal detection and hence improved image fidelity.
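
    The two approaches contrast neatly in code. In the sketch below, the matched filter is a cross-correlation with the transmitted chirp, and the active-set deconvolution is stood in for by SciPy's Lawson-Hanson NNLS solver applied to a convolution matrix whose columns are time-shifted chirps; the sampling rate, chirp parameters, and signal lengths are assumptions of the sketch, not the study's settings.

    ```python
    # Sketch: matched filtering vs. non-negative (active-set) deconvolution
    # for a transit time spectrum. Peaks in the NNLS solution x give the
    # spectrum; the matched-filter output peaks at the lag of each arrival.
    import numpy as np
    from scipy.signal import correlate, chirp
    from scipy.linalg import toeplitz
    from scipy.optimize import nnls

    fs = 10e6                                     # 10 MHz sampling (assumed)
    t = np.arange(0, 20e-6, 1 / fs)
    tx = chirp(t, f0=1e6, t1=t[-1], f1=3e6)       # coded excitation chirp

    def matched_filter(rx):
        return correlate(rx, tx, mode='full')     # peak lag -> transit time

    def transit_time_spectrum(rx):
        # Convolution model: rx ~ A @ x, columns of A are shifted chirps
        col = np.concatenate([tx, np.zeros(len(rx) - len(tx))])
        A = toeplitz(col, np.zeros(len(rx)))
        x, _ = nnls(A, rx)                        # Lawson-Hanson active set
        return x
    ```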

  11. Chemometric Data Analysis for Deconvolution of Overlapped Ion Mobility Profiles

    NASA Astrophysics Data System (ADS)

    Zekavat, Behrooz; Solouki, Touradj

    2012-11-01

    We present the details of a data analysis approach for deconvolution of ion mobility (IM)-overlapped or unresolved species. This approach takes advantage of the ion fragmentation variations as a function of the IM arrival time. The data analysis involves the use of an in-house developed data preprocessing platform for the conversion of the original post-IM/collision-induced dissociation mass spectrometry (post-IM/CID MS) data to a Matlab compatible format for chemometric analysis. We show that principal component analysis (PCA) can be used to examine the post-IM/CID MS profiles for the presence of mobility-overlapped species. Subsequently, using an interactive self-modeling mixture analysis technique, we show how to calculate the total IM spectrum (TIMS) and CID mass spectrum for each component of the IM overlapped mixtures. Moreover, we show that PCA and IM deconvolution techniques provide complementary results to evaluate the validity of the calculated TIMS profiles. We use two binary mixtures with overlapping IM profiles, including (1) a mixture of two non-isobaric peptides (neurotensin (RRPYIL) and a hexapeptide (WHWLQL)), and (2) an isobaric sugar isomer mixture of raffinose and maltotriose, to demonstrate the applicability of the IM deconvolution.
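
    The PCA screening step can be sketched with scikit-learn; the matrix layout and variance cutoff below are assumptions for illustration, not the paper's exact procedure.

    ```python
    # Sketch: PCA on post-IM/CID data (arrival-time bins x fragment m/z bins)
    # to estimate how many co-drifting components are present.
    import numpy as np
    from sklearn.decomposition import PCA

    def count_overlapped_species(scans, var_cutoff=0.95):
        """scans: 2-D array, rows = IM arrival-time bins, cols = m/z bins."""
        pca = PCA().fit(scans)
        cum = np.cumsum(pca.explained_variance_ratio_)
        return int(np.searchsorted(cum, var_cutoff)) + 1  # >1 suggests overlap
    ```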

  12. Evaluation of uncertainty for regularized deconvolution: A case study in hydrophone measurements.

    PubMed

    Eichstädt, S; Wilkens, V

    2017-06-01

    An estimation of the measurand in dynamic metrology usually requires a deconvolution based on a dynamic calibration of the measuring system. Since deconvolution is, mathematically speaking, an ill-posed inverse problem, some kind of regularization is required to render the problem stable and obtain usable results. Many approaches to regularized deconvolution exist in the literature, but the corresponding evaluation of measurement uncertainties is, in general, an unsolved issue. In particular, the uncertainty contribution of the regularization itself is a topic of great importance, because it has a significant impact on the estimation result. Here, a versatile approach is proposed to express prior knowledge about the measurand based on a flexible, low-dimensional modeling of an upper bound on the magnitude spectrum of the measurand. This upper bound allows the derivation of an uncertainty associated with the regularization method in line with the guidelines in metrology. As a case study for the proposed method, hydrophone measurements in medical ultrasound with an acoustic working frequency of up to 7.5 MHz are considered, but the approach is applicable for all kinds of estimation methods in dynamic metrology, where regularization is required and which can be expressed as a multiplication in the frequency domain.

  13. VLSI design of lossless frame recompression using multi-orientation prediction

    NASA Astrophysics Data System (ADS)

    Lee, Yu-Hsuan; You, Yi-Lun; Chen, Yi-Guo

    2016-01-01

    Pursuing an experience of high-end visual quality drives demand for higher display resolutions and higher frame rates. Hence, many powerful coding tools are aggregated in emerging video coding standards to improve coding efficiency. This also makes video coding standards suffer from two design challenges: heavy computation and tremendous memory bandwidth. The first issue can be properly solved by a careful hardware architecture design with advanced semiconductor processes. Nevertheless, the second one becomes a critical design bottleneck for a modern video coding system. In this article, a lossless frame recompression technique using multi-orientation prediction is proposed to overcome this bottleneck. This work is realised as a silicon chip in a TSMC 0.18 µm CMOS process. Its encoding capability can reach full-HD (1920 × 1080) @ 48 fps. The chip power consumption is 17.31 mW @ 100 MHz. Core area and chip area are 0.83 × 0.83 mm² and 1.20 × 1.20 mm², respectively. Experimental results demonstrate that this work exhibits outstanding performance on lossless compression ratio with competitive hardware performance.

  14. A Multi-Resolution Mode CMOS Image Sensor with a Novel Two-Step Single-Slope ADC for Intelligent Surveillance Systems.

    PubMed

    Kim, Daehyeok; Song, Minkyu; Choe, Byeongseong; Kim, Soo Youn

    2017-06-25

    In this paper, we present a multi-resolution mode CMOS image sensor (CIS) for intelligent surveillance system (ISS) applications. A low column fixed-pattern noise (CFPN) comparator is proposed for the 8-bit two-step single-slope analog-to-digital converter (TSSS ADC) of the CIS, which supports normal, 1/2, 1/4, 1/8, 1/16, 1/32, and 1/64 modes of pixel resolution. We show that the scaled-resolution images enable the CIS to reduce total power consumption while the scene holds steady without events. A prototype sensor of 176 × 144 pixels was fabricated in a 0.18 μm 1-poly 4-metal CMOS process. The area of the 4-shared 4T-active pixel sensor (APS) is 4.4 μm × 4.4 μm and the total chip size is 2.35 mm × 2.35 mm. The maximum power consumption is 10 mW (at full resolution) with supply voltages of 3.3 V (analog) and 1.8 V (digital), at a frame rate of 14 frames/s.

  15. Joint Multi-Leaf Segmentation, Alignment, and Tracking for Fluorescence Plant Videos.

    PubMed

    Yin, Xi; Liu, Xiaoming; Chen, Jin; Kramer, David M

    2018-06-01

    This paper proposes a novel framework for fluorescence plant video processing. The plant research community is interested in the leaf-level photosynthetic analysis within a plant. A prerequisite for such analysis is to segment all leaves, estimate their structures, and track them over time. We identify this as a joint multi-leaf segmentation, alignment, and tracking problem. First, leaf segmentation and alignment are applied on the last frame of a plant video to find a number of well-aligned leaf candidates. Second, leaf tracking is applied on the remaining frames with leaf candidate transformation from the previous frame. We form two optimization problems with shared terms in their objective functions for leaf alignment and tracking respectively. A quantitative evaluation framework is formulated to evaluate the performance of our algorithm with four metrics. Two models are learned to predict the alignment accuracy and detect tracking failure respectively in order to provide guidance for subsequent plant biology analysis. The limitation of our algorithm is also studied. Experimental results show the effectiveness, efficiency, and robustness of the proposed method.

  16. Uncued Low SNR Detection with Likelihood from Image Multi Bernoulli Filter

    NASA Astrophysics Data System (ADS)

    Murphy, T.; Holzinger, M.

    2016-09-01

    Both SSA and SDA necessitate uncued, partially informed detection and orbit determination efforts for small space objects which often produce only low-strength electro-optical signatures. General frame-to-frame detection and tracking of objects includes methods such as moving target indicator, multiple hypothesis testing, direct track-before-detect methods, and random finite set based multi-object tracking. This paper applies the multi-Bernoulli filter to low signal-to-noise ratio (SNR), uncued detection of space objects for space domain awareness applications. The primary novel contribution of this paper is a detailed analysis of the existing state-of-the-art likelihood functions and of a likelihood function, based on a binary hypothesis, previously proposed by the authors. The algorithm is tested on electro-optical imagery obtained from a variety of sensors at Georgia Tech, including the GT-SORT 0.5 m Raven-class telescope and a twenty-degree field-of-view, high-frame-rate CMOS sensor. In particular, a data set from an extended pass of the Hitomi (Astro-H) satellite, approximately 3 days after loss of communication and potential break-up, is examined.

  17. Distributed Fair Auto Rate Medium Access Control for IEEE 802.11 Based WLANs

    NASA Astrophysics Data System (ADS)

    Zhu, Yanfeng; Niu, Zhisheng

    Much research has shown that a carefully designed auto-rate medium access control can utilize the underlying physical multi-rate capability to exploit the time variation of the channel. In this paper, we develop a simple analytical model to elucidate the rule that maximizes the throughput of RTS/CTS-based multi-rate wireless local area networks. Based on the discovered rule, we propose two distributed fair auto-rate medium access control schemes, called FARM and FARM+, from the viewpoints of throughput fairness and time-share fairness, respectively. With the proposed schemes, after receiving an RTS frame, the receiver selectively returns the CTS frame to inform the transmitter of the maximum feasible rate probed by the signal-to-noise ratio of the received RTS frame. The key feature of the proposed schemes is that they are capable of maintaining throughput/time-share fairness in asymmetric situations where the SNR distribution varies across stations. Extensive simulation results show that the proposed schemes outperform existing throughput/time-share fair auto-rate schemes in time-varying channel conditions.

  18. Steganalysis feature improvement using expectation maximization

    NASA Astrophysics Data System (ADS)

    Rodriguez, Benjamin M.; Peterson, Gilbert L.; Agaian, Sos S.

    2007-04-01

    Images and data files provide an excellent opportunity for concealing illegal or clandestine material. Currently, there are over 250 different tools which embed data into an image without causing noticeable changes to the image. From a forensics perspective, when a system is confiscated or an image of a system is generated, the investigator needs a tool that can scan and accurately identify files suspected of containing malicious information. The identification process is termed the steganalysis problem, which encompasses both blind identification, in which only normal images are available for training, and multi-class identification, in which both clean and stego images at several embedding rates are available for training. In this paper, a clustering and classification technique (Expectation Maximization with mixture models) is investigated to determine whether a digital image contains hidden information. The steganalysis problem is treated as both anomaly detection and multi-class detection. The various clusters represent clean images and stego images with between 1% and 10% embedding percentage. Based on the results, it is concluded that the EM classification technique is highly suitable for both blind detection and the multi-class problem.

  19. High precision gas hydrate imaging of small-scale and high-resolution marine sparker multichannel seismic data

    NASA Astrophysics Data System (ADS)

    Luo, D.; Cai, F.

    2017-12-01

    Small-scale, high-resolution marine sparker multi-channel seismic surveys using large-energy sparkers are characterized by a high dominant source frequency, wide bandwidth, and high resolution. The technology, with its high resolution and high detection precision, was designed to improve the imaging quality of shallow sediments. In this study, a 20 kJ sparker and a 24-channel streamer cable with a 6.25 m group interval were used as the seismic source and receiver system, respectively. Key factors for seismic imaging of gas hydrate are enhancement of the S/N ratio, amplitude compensation, and detailed velocity analysis. However, the data in this study have the following characteristics: 1. Small maximum offsets, which are adverse to velocity analysis and multiple attenuation. 2. Lack of low-frequency information; that is, information below 100 Hz is invisible. 3. Low S/N ratio, owing to the low fold (only 12). These characteristics make it difficult to reach the targets of seismic imaging. In this study, targeted processing methods are used to improve the seismic imaging quality of gas hydrate. First, several noise-suppression technologies are applied in combination to the pre-stack seismic data to suppress seismic noise and improve the S/N ratio, including a spectrum-sharing noise elimination method, median filtering, and an exogenous-interference suppression method. Second, a combination of three technologies, SRME, τ-p deconvolution, and high-precision Radon transformation, is used to remove multiples. Third, an accurate velocity field is used in amplitude energy compensation to highlight the Bottom Simulating Reflector (BSR, the indicator of gas hydrates) and gas migration pathways (such as gas chimneys and hot spots). Fourth, fine velocity analysis is used to improve the accuracy of the velocity field. Fifth, pre-stack deconvolution is used to compensate for low-frequency energy and to suppress the ghost, thereby highlighting formation reflection characteristics. The results show that small-scale, high-resolution marine sparker multi-channel seismic surveys are much more effective in improving the resolution and quality of gas hydrate imaging than conventional seismic acquisition.

  20. Boosting Contextual Information for Deep Neural Network Based Voice Activity Detection

    DTIC Science & Technology

    2015-02-01

    multi-resolution stacking (MRS), which is a stack of ensemble classifiers. Each classifier in a building block inputs the concatenation of the predictions ...a base classifier in MRS, named boosted deep neural network (bDNN). bDNN first generates multiple base predictions from different contexts of a single...frame by only one DNN and then aggregates the base predictions for a better prediction of the frame, and it is different from computationally

  1. Indoor magnetic navigation for the blind.

    PubMed

    Riehle, Timothy H; Anderson, Shane M; Lichter, Patrick A; Giudice, Nicholas A; Sheikh, Suneel I; Knuesel, Robert J; Kollmann, Daniel T; Hedin, Daniel S

    2012-01-01

    Indoor navigation technology is needed to support seamless mobility for the visually impaired. This paper describes the construction and evaluation of a navigation system that infers the user's location using only magnetic sensing. It is well known that the environments within steel-frame structures are subject to significant magnetic distortions. Many of these distortions are persistent and have sufficient strength and spatial characteristics to allow their use as the basis for a location technology. This paper describes the development and evaluation of a prototype magnetic navigation system consisting of a wireless magnetometer, placed at the user's hip, streaming magnetic readings to a smartphone running the location algorithms. Human trials were conducted to assess the efficacy of the system by studying route-following performance with blind and sighted subjects using the navigation system for real-time guidance.

  2. Social movement heterogeneity in public policy framing: A multi-stakeholder analysis of the Keystone XL pipeline

    NASA Astrophysics Data System (ADS)

    Wesley, David T. A.

    In 2011, stakeholders with differing objectives formed an alliance to oppose the Keystone XL heavy oil pipeline. The alliance, which came to be known as "Tar Sands Action," implemented various strategies, some of which were more successful than others. Tar Sands Action was a largely heterogeneous alliance that included indigenous tribes, environmentalists, ranchers, landowners, and trade unions, making it one of the more diverse social movement organizations in history. Each of these stakeholder categories had distinct demographic structures, representing an array of racial, ethnic, educational, occupational, and political backgrounds. Participants also had differing policy objectives that included combating climate change and protecting jobs, agricultural interests, water resources, wildlife, and human health. The current dissertation examines the Tar Sands Action movement to understand how heterogeneous social movement organizations mobilize supporters, maintain alliances, and create effective frames to achieve policy objectives. A multi-stakeholder analysis of the development, evolution and communication of frames concerning the Keystone XL controversy provides insight into the role of alliances, direct action, and the news media in challenging hegemonic frames. Previous research has ignored the potential value that SMO heterogeneity provides by treating social movements as culturally homogenous. However, diversity has been shown to affect performance in business organizations. The current study demonstrates that under some circumstances, diversity can also improve policy outcomes. Moreover, policy frames are shown to be more effective in sustaining news media and public interest through a process the author calls dynamic frame sequencing (DFS). DFS refers to a process implementing different stakeholder frames at strategically opportune moments. Finally, Tar Sands Action was one of the first SMOs to rely heavily on social media to build alliances, disseminate information, and mobilize support. This study adds to a growing body of research that considers the emerging role of social media in protest movements.

  3. Thrust-isolating mounting. [characteristics of support for loads mounted in spacecraft

    NASA Technical Reports Server (NTRS)

    Wetzler, D. G. (Inventor)

    1974-01-01

    A supporting frame for a load, such as one or more telescopes, is isolated from all multi-gravitational forces, which will be developed within that load as that load is propelled into space, by using a shroud to fully and solidly hold that load until that load has been propelled into space. Thereafter, that shroud will be jettisoned; and then supports which are on, and which are movable with, that load will have surfaces thereon moved into supporting engagement with complementary surfaces on that supporting frame to enable that supporting frame and those supports to fully and solidly hold that load.

  4. High-speed bioimaging with frequency-division-multiplexed fluorescence confocal microscopy

    NASA Astrophysics Data System (ADS)

    Mikami, Hideharu; Harmon, Jeffrey; Ozeki, Yasuyuki; Goda, Keisuke

    2017-04-01

    We present methods of fluorescence confocal microscopy that enable an unprecedentedly high frame rate of >10,000 fps. The methods are based on a frequency-division multiplexing technique originally developed in the field of communication engineering. Specifically, we achieved a broad bandwidth (~400 MHz) of detection signals using a dual-AOD method and overcame the frame-rate limitation imposed by the scanning device by using a multi-line focusing method, resulting in a significant increase in frame rate. The methods have potential biomedical applications such as observation of sub-millisecond dynamics in biological tissues, in-vivo three-dimensional imaging, and fluorescence imaging flow cytometry.

  5. Standardized UXO Technology Demonstration Site Blind Grid Scoring Record No. 764

    DTIC Science & Technology

    2006-04-01

    Attainable accuracy of depth (z): ±0.3 meter. Detection performance for ferrous and nonferrous metals: will detect ammunition components 20-mm... TECHNOLOGY TYPE/PLATFORM: Multi Channel Detector System (AMOS)/Towed. PREPARED BY: U.S...

  6. The ability of multi-site, multi-depth sacral lateral branch blocks to anesthetize the sacroiliac joint complex.

    PubMed

    Dreyfuss, Paul; Henning, Troy; Malladi, Niriksha; Goldstein, Barry; Bogduk, Nikolai

    2009-01-01

    To determine the physiologic effectiveness of multi-site, multi-depth sacral lateral branch injections. Double-blind, randomized, placebo-controlled study. Outpatient pain management center. Twenty asymptomatic volunteers. The dorsal innervation to the sacroiliac joint (SIJ) is from the L5 dorsal ramus and the S1-3 lateral branches. Multi-site, multi-depth lateral branch blocks were developed to compensate for the complex regional anatomy that limited the effectiveness of single-site, single-depth lateral branch injections. Bilateral multi-site, multi-depth lateral branch green-dye injections and subsequent dissection in two cadavers revealed a 91% accuracy with this technique. Session 1: 20 asymptomatic subjects had a 25-gauge spinal needle probe their interosseous (IO) and dorsal sacroiliac (DSI) ligaments. The inferior dorsal SIJ was entered and capsular distension with contrast medium was performed. Discomfort had to occur with each provocation maneuver, and a contained arthrogram was necessary to continue in the study. Session 2: one week later, computer-randomized, double-blind multi-site, multi-depth lateral branch block injections were performed. Ten subjects received active (bupivacaine 0.75%) and ten subjects received sham (normal saline) multi-site, multi-depth lateral branch injections. Thirty minutes later, provocation testing was repeated with methodology identical to session 1. Outcome measure: presence or absence of pain on ligamentous probing and SIJ capsular distension. Seventy percent of the active group had insensate IO and DSI ligaments and an insensate inferior dorsal SIJ, versus 0-10% of the sham group. Twenty percent of the active versus 10% of the sham group did not feel repeat capsular distension. Six of seven subjects (86%) retained the ability to feel repeat capsular distension despite an insensate dorsal SIJ complex. Multi-site, multi-depth lateral branch blocks are physiologically effective at a rate of 70%. They do not effectively block the intra-articular portion of the SIJ. There is physiological evidence that the intra-articular portion of the SIJ is innervated from both ventral and dorsal sources. Comparative multi-site, multi-depth lateral branch blocks should be considered a potentially valuable tool to diagnose extra-articular SIJ pain and to determine whether lateral branch radiofrequency neurotomy may assist patients with SIJ pain.

  7. Investigation of radio astronomy image processing techniques for use in the passive millimetre-wave security screening environment

    NASA Astrophysics Data System (ADS)

    Taylor, Christopher T.; Hutchinson, Simon; Salmon, Neil A.; Wilkinson, Peter N.; Cameron, Colin D.

    2014-06-01

    Image processing techniques can be used to improve the cost-effectiveness of future interferometric Passive MilliMetre Wave (PMMW) imagers. The implementation of such techniques will allow for a reduction in the number of collecting elements whilst ensuring adequate image fidelity is maintained. Various techniques have been developed by the radio astronomy community to enhance the imaging capability of sparse interferometric arrays. The most prominent are Multi-Frequency Synthesis (MFS) and non-linear deconvolution algorithms, such as the Maximum Entropy Method (MEM) and variations of the CLEAN algorithm. This investigation focuses on the implementation of these methods in the de facto standard for radio astronomy image processing, the Common Astronomy Software Applications (CASA) package, building upon the discussion presented in Taylor et al., SPIE 8362-0F. We describe the image conversion process into a CASA suitable format, followed by a series of simulations that exploit the highlighted deconvolution and MFS algorithms assuming far-field imagery. The primary target application used for this investigation is an outdoor security scanner for soft-sided Heavy Goods Vehicles. A quantitative analysis of the effectiveness of the aforementioned image processing techniques is presented, with thoughts on the potential cost-savings such an approach could yield. Consideration is also given to how the implementation of these techniques in CASA might be adapted to operate in a near-field target environment. This may enable a much wider usability by the imaging community outside of radio astronomy and thus would be directly relevant to portal screening security systems in the microwave and millimetre wave bands.
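
    Of the algorithms named above, Högbom-style CLEAN is the easiest to illustrate outside CASA. The loop below is a generic, illustrative version (the gain, threshold, iteration cap, and wrap-around np.roll shift are simplifications), not the CASA implementation.

    ```python
    # Sketch: Högbom CLEAN. Repeatedly find the residual peak, record a scaled
    # point component, and subtract a shifted copy of the dirty beam.
    import numpy as np

    def hogbom_clean(dirty, beam, gain=0.1, thresh=1e-3, max_iter=500):
        """beam: dirty beam (PSF) with its peak at (cy, cx)."""
        res = dirty.copy()
        comps = np.zeros_like(dirty)
        cy, cx = np.unravel_index(np.argmax(beam), beam.shape)
        for _ in range(max_iter):
            y, x = np.unravel_index(np.argmax(np.abs(res)), res.shape)
            peak = res[y, x]
            if abs(peak) < thresh:
                break
            comps[y, x] += gain * peak
            # np.roll wraps at the edges; real code handles borders explicitly
            shifted = np.roll(np.roll(beam, y - cy, axis=0), x - cx, axis=1)
            res -= gain * peak * shifted
        return comps, res  # restore by convolving comps with a clean beam
    ```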

  8. Digital enhancement of haematoxylin- and eosin-stained histological images for red-green colour-blind observers.

    PubMed

    Landini, G; Perryer, G

    2009-06-01

    Individuals with red-green colour-blindness (CB) commonly experience great difficulty differentiating between certain histological stain pairs, notably haematoxylin-eosin (H&E). The prevalence of red-green CB is high (6-10% of males), including among medical and laboratory personnel, and raises two major concerns: first, accessibility and equity issues during the education and training of individuals with this disability, and second, the likelihood of errors in critical tasks such as interpreting histological images. Here we show two methods to enhance images of H&E-stained samples so the differently stained tissues can be well discriminated by red-green CBs while remaining usable by people with normal vision. Method 1 involves rotating and stretching the range of H&E hues in the image to span the perceptual range of the CB observers. Method 2 digitally unmixes the original dyes using colour deconvolution into two separate images and repositions the information into hues that are more distinctly perceived. The benefits of these methods were tested in 36 volunteers with normal vision and 11 with red-green CB using a variety of H&E stained tissue sections paired with their enhanced versions. CB subjects reported they could better perceive the different stains using the enhanced images for 85% of preparations (method 1: 90%, method 2: 73%), compared to the H&E-stained original images. Many subjects with normal vision also preferred the enhanced images to the original H&E. The results suggest that these colour manipulations confer considerable advantage for those with red-green colour vision deficiency while not disadvantaging people with normal colour vision.
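
    A rough analogue of method 2 can be built on scikit-image, whose rgb2hed routine performs H&E(+DAB) colour deconvolution; the replacement hues below are illustrative choices aimed at the blue-yellow axis that red-green CB observers distinguish well, not the paper's mapping.

    ```python
    # Sketch: unmix H&E stains, then rebuild the image with the two stains
    # re-coloured into blue and yellow (a CB-safe opponent pair).
    import numpy as np
    from skimage.color import rgb2hed

    def recolor_he(rgb):
        """rgb: float image in [0, 1]. Returns a CB-friendlier recolouring."""
        hed = rgb2hed(rgb)                        # stain space: H, E, DAB
        h = np.clip(hed[..., 0], 0, None)
        e = np.clip(hed[..., 1], 0, None)
        h /= h.max() + 1e-12
        e /= e.max() + 1e-12
        out = np.ones(rgb.shape)                  # start from white
        out -= h[..., None] * np.array([1.0, 1.0, 0.0])  # haematoxylin -> blue
        out -= e[..., None] * np.array([0.0, 0.0, 1.0])  # eosin -> yellow
        return np.clip(out, 0.0, 1.0)
    ```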

  9. Deconvolution for three-dimensional acoustic source identification based on spherical harmonics beamforming

    NASA Astrophysics Data System (ADS)

    Chu, Zhigang; Yang, Yang; He, Yansong

    2015-05-01

    Spherical Harmonics Beamforming (SHB) with solid spherical arrays has become a particularly attractive tool for acoustic source identification in cabin environments. However, it presents some intrinsic limitations, specifically poor spatial resolution and severe sidelobe contamination. This paper focuses on overcoming these limitations effectively by deconvolution. First and foremost, a new formulation is proposed, which expresses SHB's output as a convolution of the true source strength distribution and the point spread function (PSF), defined as SHB's response to a unit-strength point source. Additionally, the typical deconvolution methods initially suggested for planar arrays, the deconvolution approach for the mapping of acoustic sources (DAMAS), nonnegative least-squares (NNLS), Richardson-Lucy (RL), and CLEAN, are successfully adapted to SHB and are capable of producing highly resolved, deblurred maps. Finally, the merits of the deconvolution methods are validated, and the relationships of source strength and pressure contribution reconstructed by the deconvolution methods vs. focus distance are explored both with computer simulations and experimentally. Several interesting results have emerged from this study: (1) compared with SHB, DAMAS, NNLS, RL, and CLEAN can all not only dramatically improve the spatial resolution but also reduce or even eliminate the sidelobes, allowing clear and unambiguous identification of a single source or incoherent sources. (2) RL is the most applicable to coherent sources, followed by DAMAS and NNLS; CLEAN is the least applicable due to its failure to suppress sidelobes. (3) Whether or not the real distance from the source to the array center equals the assumed one (referred to as the focus distance), the previous two results hold. (4) The true source strength can be recovered by dividing the reconstructed one by a coefficient equal to the square of the focus distance divided by the real distance from the source to the array center. (5) The reconstructed pressure contribution is almost unaffected by the focus distance, always approximating the true one. This study will be of great significance to the accurate localization and quantification of acoustic sources in cabin environments.
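
    The deconvolution problem formulated above, the beamformer output modelled as the source map convolved with the PSF, can be written as b = As and solved DAMAS-style by Gauss-Seidel sweeps with a non-negativity clamp, as in this sketch (the PSF matrix A and sweep count are assumptions of the sketch):

    ```python
    # Sketch: DAMAS-style Gauss-Seidel deconvolution of a beamforming map.
    import numpy as np

    def damas(b, A, n_sweeps=100):
        """b: flattened SHB output; A[:, j]: flattened PSF of a source at j."""
        n = len(b)
        s = np.zeros(n)
        for _ in range(n_sweeps):
            for j in range(n):
                r = b[j] - A[j, :] @ s + A[j, j] * s[j]   # exclude own term
                s[j] = max(r / A[j, j], 0.0)              # non-negativity
        return s
    ```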

  10. Point Cloud Refinement with a Target-Free Intrinsic Calibration of a Mobile Multi-Beam LIDAR System

    NASA Astrophysics Data System (ADS)

    Nouira, H.; Deschaud, J. E.; Goulette, F.

    2016-06-01

    LIDAR sensors are widely used in mobile mapping systems. Mobile mapping platforms allow fast acquisition, in cities for example, which would take much longer with static mapping systems. LIDAR sensors provide reliable and precise 3D information that can be used in various applications: mapping of the environment, localization of objects, and detection of changes. With recent developments, multi-beam LIDAR sensors have appeared and are able to provide a large amount of data with a high level of detail. A mono-beam LIDAR sensor mounted on a mobile platform requires an extrinsic calibration, so that the data acquired and registered in the sensor reference frame can be represented in the body reference frame modeling the mobile system. For a multi-beam LIDAR sensor, the calibration can be separated into two distinct parts: on one hand, an extrinsic calibration, in common with mono-beam LIDAR sensors, which gives the transformation between the sensor Cartesian reference frame and the body reference frame; on the other hand, an intrinsic calibration, which gives the relations between the beams of the multi-beam sensor. The intrinsic calibration depends on a model given by the manufacturer, but this model can be suboptimal, which introduces errors and noise into the acquired point clouds. In the literature, some optimizations of the calibration parameters have been proposed, but they need a specific routine or environment, which can be constraining and time-consuming. In this article, we present an automatic method for improving the intrinsic calibration of a multi-beam LIDAR sensor, the Velodyne HDL-32E. The proposed approach does not need any calibration target and only uses information from the acquired point clouds, which makes it simple and fast to use. A corrected model for the Velodyne sensor is also proposed. An energy function that penalizes points far from local planar surfaces is used to optimize the proposed parameters of the corrected model, and we are able to give a confidence value for the calibration parameters found. Optimization results on both synthetic and real data are presented.

  11. Unmixing the Materials and Mechanics Contributions in Non-resolved Object Signatures

    DTIC Science & Technology

    2008-09-01

    abundances from hyperspectral or multi-spectral time-resolved signatures. A Fourier analysis of temporal variation of material abundance provides...factorization technique to extract the temporal variation of material abundances from hyperspectral or multi-spectral time-resolved signatures. A Fourier...approximately one hundred wavelengths in the visible spectrum. The frame rate for the instrument was not large enough to collect time-resolved data. However

  12. Determination of ion mobility collision cross sections for unresolved isomeric mixtures using tandem mass spectrometry and chemometric deconvolution.

    PubMed

    Harper, Brett; Neumann, Elizabeth K; Stow, Sarah M; May, Jody C; McLean, John A; Solouki, Touradj

    2016-10-05

    Ion mobility (IM) is an important analytical technique for determining ion collision cross section (CCS) values in the gas phase and gaining insight into molecular structures and conformations. However, limited instrument resolving powers for IM may restrict adequate characterization of conformationally similar ions, such as structural isomers, and reduce the accuracy of IM-based CCS calculations. Recently, we introduced an automated technique for extracting "pure" IM and collision-induced dissociation (CID) mass spectra of IM overlapping species using chemometric deconvolution of post-IM/CID mass spectrometry (MS) data [J. Am. Soc. Mass Spectrom., 2014, 25, 1810-1819]. Here we extend those capabilities to demonstrate how extracted IM profiles can be used to calculate accurate CCS values of peptide isomer ions which are not fully resolved by IM. We show that CCS values obtained from deconvoluted IM spectra match CCS values measured from the individually analyzed corresponding peptides on uniform-field IM instrumentation. We introduce an approach that utilizes experimentally determined IM arrival time (AT) "shift factors" to compensate for ion acceleration variations during post-IM/CID and significantly improve the accuracy of the calculated CCS values. Also, we discuss details of this IM deconvolution approach and compare empirical CCS values from traveling wave (TW)IM-MS and drift tube (DT)IM-MS with theoretically calculated CCS values using the projected superposition approximation (PSA). For example, experimentally measured deconvoluted TWIM-MS mean CCS values for doubly-protonated RYGGFM, RMFGYG, MFRYGG, and FRMYGG peptide isomers were 288.8 Ų, 295.1 Ų, 296.8 Ų, and 300.1 Ų; all four of these CCS values were within 1.5% of independently measured DTIM-MS values. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Image restoration and superresolution as probes of small scale far-IR structure in star forming regions

    NASA Technical Reports Server (NTRS)

    Lester, D. F.; Harvey, P. M.; Joy, M.; Ellis, H. B., Jr.

    1986-01-01

    Far-infrared continuum studies from the Kuiper Airborne Observatory are described that are designed to fully exploit the small-scale spatial information that this facility can provide. This work gives the clearest picture to date of the structure of galactic and extragalactic star forming regions in the far infrared. Work is presently being done with slit scans taken simultaneously at 50 and 100 microns, yielding one-dimensional data. Scans of sources in different directions have been used to obtain some information on two-dimensional structure. Planned work with linear arrays will allow us to generalize our techniques to two-dimensional image restoration. For faint sources, spatial information at the diffraction limit of the telescope is obtained, while for brighter sources, nonlinear deconvolution techniques have allowed us to improve over the diffraction limit by as much as a factor of four. Information on the details of the color temperature distribution is derived as well. This is made possible by the accuracy with which the instrumental point-source profile (PSP) is determined at both wavelengths. While these two PSPs are different, data at different wavelengths can be compared by proper spatial filtering. Considerable effort has been devoted to implementing deconvolution algorithms. Nonlinear deconvolution methods offer the potential of superresolution, that is, inference of power at spatial frequencies that exceed D/λ. This potential arises from the algorithm's implicit assumption of positivity of the deconvolved data, a universally justifiable constraint for photon processes. We have tested two nonlinear deconvolution algorithms on our data: the Richardson-Lucy (R-L) method and the Maximum Entropy Method (MEM). The limits of image deconvolution techniques for achieving spatial resolution are addressed.

  14. Marker-less multi-frame motion tracking and compensation in PET-brain imaging

    NASA Astrophysics Data System (ADS)

    Lindsay, C.; Mukherjee, J. M.; Johnson, K.; Olivier, P.; Song, X.; Shao, L.; King, M. A.

    2015-03-01

    In PET brain imaging, patient motion can contribute significantly to the degradation of image quality, potentially leading to diagnostic and therapeutic problems. To mitigate the image artifacts resulting from patient motion, motion must be detected and tracked, then provided to a motion correction algorithm. Existing techniques to track patient motion fall into one of two categories: 1) image-derived approaches and 2) external motion tracking (EMT). Typical EMT requires patients to have markers in a known pattern on a rigid tool attached to their head, which are then tracked by expensive and bulky motion tracking camera systems or stereo cameras. This has made marker-based EMT unattractive for routine clinical application. Our main contribution is the development of a marker-less motion tracking system that uses low-cost, small depth-sensing cameras which can be installed in the bore of the imaging system. Our motion tracking system does not require anything to be attached to the patient and can track the rigid transformation (6 degrees of freedom) of the patient's head at a rate of 60 Hz. We show that our method can not only be used with Multi-frame Acquisition (MAF) PET motion correction, but that precise timing can be employed to determine only the frames needed for correction. This can speed up reconstruction by eliminating the unnecessary subdivision of frames.

  15. Keyhole imaging method for dynamic objects behind the occlusion area

    NASA Astrophysics Data System (ADS)

    Hao, Conghui; Chen, Xi; Dong, Liquan; Zhao, Yuejin; Liu, Ming; Kong, Lingqin; Hui, Mei; Liu, Xiaohua; Wu, Hong

    2018-01-01

    A method of keyhole imaging based on a camera array is realized to obtain video imagery from behind a keyhole in a shielded space at a relatively long distance. We obtain multi-angle video images by using a 2×2 CCD camera array to image the scene behind the keyhole from four directions. The multi-angle video images are saved as frame sequences. This paper presents a method of video frame alignment. In order to remove the non-target area outside the aperture, we use the Canny operator and morphological methods to perform edge detection and fill the images. The stitching of the four images is accomplished on the basis of a two-image stitching algorithm. In the two-image stitching algorithm, the SIFT method is adopted for the initial matching of images, and the RANSAC algorithm is then applied to eliminate wrong matching points and to obtain a homography matrix. A method of optimizing the transformation matrix is proposed in this paper. Finally, a video image with a larger field of view behind the keyhole can be synthesized from the image frame sequences in which every single frame is stitched. The results show that the video is clear and natural and the brightness transition is smooth. There are no obvious artificial stitching marks in the video, and the method can be applied in different engineering environments.
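
    The two-image stitching core maps directly onto OpenCV primitives (SIFT is in the main opencv-python package from version 4.4); the ratio-test and RANSAC thresholds below are commonly used defaults, not necessarily the paper's values.

    ```python
    # Sketch: SIFT keypoints, ratio-test matching, and RANSAC homography
    # estimation between two overlapping views.
    import cv2
    import numpy as np

    def pair_homography(img1, img2):
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(img1, None)
        k2, d2 = sift.detectAndCompute(img2, None)
        matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]
        src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # drop outliers
        return H
    ```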

  16. Wayfinding the Live 5-2-1-0 Initiative-At the Intersection between Systems Thinking and Community-Based Childhood Obesity Prevention.

    PubMed

    Amed, Shazhan; Shea, Stephanie; Pinkney, Susan; Wharf Higgins, Joan; Naylor, Patti-Jean

    2016-06-21

    Childhood obesity is complex and requires a 'systems approach' that collectively engages across multiple community settings. Sustainable Childhood Obesity Prevention through Community Engagement (SCOPE) has implemented Live 5-2-1-0-a multi-sector, multi-component childhood obesity prevention initiative informed by systems thinking and participatory research via an innovative knowledge translation (KT) model (RE-FRAME). This paper describes the protocol for implementing and evaluating RE-FRAME in two 'existing' (>2 years of implementation) and two 'new' Live 5-2-1-0 communities to understand how to facilitate and sustain systems/community-level change. In this mixed-methods study, RE-FRAME was implemented via online resources, webinars, a backbone organization (SCOPE) coordinating the initiative, and a linking system supporting KT. Qualitative and quantitative data were collected using surveys and stakeholder interviews, analyzed using thematic analysis and descriptive statistics, respectively. Existing communities described the consistency of Live 5-2-1-0 and extensive local partnerships/champions as catalysts for synergistic community-wide action; new communities felt that the simplicity of the message combined with the transfer of experiential learning would inform their own strategies and policies/programs to broadly disseminate Live 5-2-1-0. RE-FRAME effectively guided the refinement of the initiative and provided a framework upon which evaluation results described how to implement a community-based systems approach to childhood obesity prevention.

  17. Detecting Blind Fault with Fractal and Roughness Factors from High Resolution LiDAR DEM at Taiwan

    NASA Astrophysics Data System (ADS)

    Cheng, Y. S.; Yu, T. T.

    2014-12-01

    There is no obvious fault scarp associated with a blind fault. The traditional method of mapping this unrevealed geological structure relies on clusters of seismicity, but neither individual seismic events nor complete clusters can be captured by a network quickly enough to chart the locations of all possibly active blind faults within a short period of time. High-resolution DEMs gathered by LiDAR denote actual terrain information despite the existence of plantation cover. A 1-meter-interval DEM of a mountain region in Taiwan is analyzed with fractal, entropy, and roughness calculations implemented in MATLAB code. By combining these measures, regions of non-sediment deposit are charted automatically. The possible blind fault associated with the Chia-Sen earthquake in southern Taiwan serves as the testing ground. GIS layers help remove the differences between geological formations; a multi-resolution fractal index is then computed around the target region. The type of fault movement controls the distribution of the fractal index, and the scale of the blind fault governs the degree of change in the index. Landslides induced by rainfall and/or earthquakes produce larger geomorphological alterations than blind faults, so special treatment is required to remove these phenomena. The highly weathered conditions in Taiwan should erase from the DEM any trace of a blind-fault rupture whose recurrence interval exceeds hundreds of years. This is one of the obstacles in finding possible blind faults in Taiwan.
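
    The per-window terrain measures can be illustrated independently of the study's MATLAB code. The sketch below computes roughness as the RMS residual from a fitted plane and entropy from the elevation histogram; the window size and bin count are arbitrary choices, and a box-counting fractal index would follow the same windowed pattern.

    ```python
    # Sketch: roughness and entropy for one DEM window.
    import numpy as np

    def window_measures(dem_window, bins=32):
        ny, nx = dem_window.shape
        yy, xx = np.mgrid[0:ny, 0:nx]
        A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(nx * ny)])
        coef, *_ = np.linalg.lstsq(A, dem_window.ravel(), rcond=None)
        resid = dem_window.ravel() - A @ coef
        roughness = np.sqrt(np.mean(resid ** 2))   # RMS plane residual
        counts, _ = np.histogram(dem_window, bins=bins)
        p = counts[counts > 0] / counts.sum()
        entropy = -np.sum(p * np.log2(p))          # Shannon entropy (bits)
        return roughness, entropy
    ```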

  18. Gene Expression-Based Survival Prediction in Lung Adenocarcinoma: A Multi-Site, Blinded Validation Study

    PubMed Central

    Shedden, Kerby; Taylor, Jeremy M.G.; Enkemann, Steve A.; Tsao, Ming S.; Yeatman, Timothy J.; Gerald, William L.; Eschrich, Steve; Jurisica, Igor; Venkatraman, Seshan E.; Meyerson, Matthew; Kuick, Rork; Dobbin, Kevin K.; Lively, Tracy; Jacobson, James W.; Beer, David G.; Giordano, Thomas J.; Misek, David E.; Chang, Andrew C.; Zhu, Chang Qi; Strumpf, Dan; Hanash, Samir; Shepherd, Francis A.; Ding, Kuyue; Seymour, Lesley; Naoki, Katsuhiko; Pennell, Nathan; Weir, Barbara; Verhaak, Roel; Ladd-Acosta, Christine; Golub, Todd; Gruidl, Mike; Szoke, Janos; Zakowski, Maureen; Rusch, Valerie; Kris, Mark; Viale, Agnes; Motoi, Noriko; Travis, William; Sharma, Anupama

    2009-01-01

    Although prognostic gene expression signatures for survival in early stage lung cancer have been proposed, for clinical application it is critical to establish their performance across different subject populations and in different laboratories. Here we report a large, training-testing, multi-site blinded validation study to characterize the performance of several prognostic models based on gene expression for 442 lung adenocarcinomas. The hypotheses proposed examined whether microarray measurements of gene expression either alone or combined with basic clinical covariates (stage, age, sex) can be used to predict overall survival in lung cancer subjects. Several models examined produced risk scores that substantially correlated with actual subject outcome. Most methods performed better with clinical data, supporting the combined use of clinical and molecular information when building prognostic models for early stage lung cancer. This study also provides the largest available set of microarray data with extensive pathological and clinical annotation for lung adenocarcinomas. PMID:18641660

  19. Joint channel estimation and multi-user detection for multipath fading channels in DS-CDMA systems

    NASA Astrophysics Data System (ADS)

    Wu, Sau-Hsuan; Kuo, C.-C. Jay

    2002-11-01

    The technique of joint blind channel estimation and multiple access interference (MAI) suppression for an asynchronous code-division multiple-access (CDMA) system is investigated in this research. To identify and track dispersive time-varying fading channels and to avoid the phase ambiguity that comes with second-order-statistics approaches, a sliding-window scheme using the expectation maximization (EM) algorithm is proposed. The complexity of joint channel equalization and symbol detection for all users increases exponentially with system loading and channel memory. The situation is exacerbated if strong inter-symbol interference (ISI) exists. To reduce the complexity and the number of samples required for channel estimation, a blind multiuser detector is developed. Together with multi-stage interference cancellation using soft outputs provided by this detector, our algorithm can track fading channels with no phase ambiguity even when channel gains attenuate close to zero.

  20. Effects of Four-Week Supplementation with a Multi-Vitamin/Mineral Preparation on Mood and Blood Biomarkers in Young Adults: A Randomised, Double-Blind, Placebo-Controlled Trial.

    PubMed

    White, David J; Cox, Katherine H M; Peters, Riccarda; Pipingas, Andrew; Scholey, Andrew B

    2015-10-30

    This study explored the effects of four-week multi-vitamin and mineral (MVM) supplementation on mood and neurocognitive function in healthy, young adults. Fifty-eight healthy adults, 18-40 years of age (M = 25.82 years, SD = 4.87) participated in this randomised, double-blind, placebo-controlled trial, in which mood and blood biomarkers were assessed at baseline and after four weeks of supplementation. Compared to placebo, MVM supplementation was associated with significantly lowered homocysteine and increased blood B-vitamin levels (p < 0.01). MVM treatment was also associated with significantly improved mood, as measured by reduced scores on the "depression-dejection" subscale of the Profile of Mood States (p = 0.018). These findings suggest that the four weeks of MVM supplementation may have beneficial effects on mood, underpinned by elevated B-vitamins and lowered homocysteine in healthy young adults.

  1. Restoring defect structures in 3C-SiC/Si (001) from spherical aberration-corrected high-resolution transmission electron microscope images by means of deconvolution processing.

    PubMed

    Wen, C; Wan, W; Li, F H; Tang, D

    2015-04-01

    The [110] cross-sectional samples of 3C-SiC/Si (001) were observed with a spherical aberration-corrected 300 kV high-resolution transmission electron microscope. Two images taken not close to the Scherzer focus condition and not representing the projected structures intuitively were utilized for performing the deconvolution. The principle and procedure of image deconvolution and atomic sort recognition are summarized. The defect structure restoration together with the recognition of Si and C atoms from the experimental images has been illustrated. The structure maps of an intrinsic stacking fault in the area of SiC, and of Lomer and 60° shuffle dislocations at the interface have been obtained at atomic level. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Computerized glow curve deconvolution of thermoluminescent emission from polyminerals of Jamaica Mexican flower

    NASA Astrophysics Data System (ADS)

    Favalli, A.; Furetta, C.; Zaragoza, E. Cruz; Reyes, A.

    The aim of this work is to study the main thermoluminescence (TL) characteristics of the inorganic polyminerals extracted from dehydrated Jamaica flower, or roselle (Hibiscus sabdariffa L.), belonging to the Malvaceae family, of Mexican origin. The TL emission properties of the polymineral fraction in powder form were studied using the initial rise (IR) method. The complex structure and kinetic parameters of the glow curves were analysed using computerized glow curve deconvolution (CGCD), assuming an exponential distribution of trapping levels. The extension of the IR method to the case of a continuous, exponential trap distribution is reported, as is the derivation of the TL glow curve deconvolution functions for a continuous trap distribution. CGCD is performed both for a frequency factor s independent of temperature and for s as a function of temperature.
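
    As a building block for such CGCD fits, a first-order (Randall-Wilkins) glow peak can be evaluated numerically and fitted with SciPy. The continuous exponential trap distribution treated in the paper is a refinement beyond this single-discrete-trap sketch, and all parameter names and values here are illustrative.

    ```python
    # Sketch: first-order TL glow peak and a multi-peak CGCD fit.
    import numpy as np
    from scipy.integrate import cumulative_trapezoid
    from scipy.optimize import curve_fit

    K_B = 8.617e-5                       # Boltzmann constant, eV/K

    def rw_peak(T, n0, E, s, beta=2.0):
        """Randall-Wilkins: I(T) = n0*s*exp(-E/kT)*exp(-(s/beta)*Integral)."""
        arr = np.exp(-E / (K_B * T))
        integral = cumulative_trapezoid(arr, T, initial=0.0)
        return n0 * s * arr * np.exp(-(s / beta) * integral)

    def glow_curve(T, *params):          # sum of independent first-order peaks
        return sum(rw_peak(T, *params[i:i + 3])
                   for i in range(0, len(params), 3))

    # e.g. two-peak fit, p0 = [n0_1, E_1, s_1, n0_2, E_2, s_2]:
    # popt, _ = curve_fit(glow_curve, T_data, I_data, p0=p0, maxfev=20000)
    ```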

  3. Punch stretching process monitoring using acoustic emission signal analysis. II - Application of frequency domain deconvolution

    NASA Technical Reports Server (NTRS)

    Liang, Steven Y.; Dornfeld, David A.; Nickerson, Jackson A.

    1987-01-01

    The coloring effect on the acoustic emission (AE) signal due to the frequency response of the data acquisition/processing instrumentation may bias the interpretation of AE signal characteristics. In this paper, a frequency-domain deconvolution technique, which involves the identification of the instrumentation transfer functions and multiplication of the AE signal spectrum by the inverse of these system functions, has been carried out. In this way, a change in AE signal characteristics can be better interpreted as the result of a change in the states of the process alone. The punch stretching process was used as an example to demonstrate the application of the technique. Results showed that, through the deconvolution, the frequency characteristics of the AE signals generated during stretching became more distinctive and can be used more effectively as tools for process monitoring.
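
    The deconvolution described above amounts to a spectral division by the measured instrumentation transfer function, stabilized here with a simple "water level" so the inversion stays bounded where the system response is weak; the regularization fraction is a choice of this sketch, not of the paper.

    ```python
    # Sketch: frequency-domain deconvolution ("decoloring") of an AE signal.
    import numpy as np

    def decolor_ae(signal, system_response, water=0.05):
        """system_response: impulse response of the acquisition chain."""
        n = len(signal)
        S = np.fft.rfft(signal, n)
        H = np.fft.rfft(system_response, n)
        floor = water * np.abs(H).max()
        # Clamp the magnitude of H from below while preserving its phase
        H_reg = np.where(np.abs(H) < floor,
                         floor * np.exp(1j * np.angle(H)), H)
        return np.fft.irfft(S / H_reg, n)
    ```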

  4. Improving the Ability of Image Sensors to Detect Faint Stars and Moving Objects Using Image Deconvolution Techniques

    PubMed Central

    Fors, Octavi; Núñez, Jorge; Otazu, Xavier; Prades, Albert; Cardinal, Robert D.

    2010-01-01

    In this paper we show how the techniques of image deconvolution can increase the ability of image sensors as, for example, CCD imagers, to detect faint stars or faint orbital objects (small satellites and space debris). In the case of faint stars, we show that this benefit is equivalent to double the quantum efficiency of the used image sensor or to increase the effective telescope aperture by more than 30% without decreasing the astrometric precision or introducing artificial bias. In the case of orbital objects, the deconvolution technique can double the signal-to-noise ratio of the image, which helps to discover and control dangerous objects as space debris or lost satellites. The benefits obtained using CCD detectors can be extrapolated to any kind of image sensors. PMID:22294896

  5. Improving the ability of image sensors to detect faint stars and moving objects using image deconvolution techniques.

    PubMed

    Fors, Octavi; Núñez, Jorge; Otazu, Xavier; Prades, Albert; Cardinal, Robert D

    2010-01-01

    In this paper we show how image deconvolution techniques can increase the ability of image sensors, such as CCD imagers, to detect faint stars or faint orbital objects (small satellites and space debris). In the case of faint stars, we show that this benefit is equivalent to doubling the quantum efficiency of the image sensor, or to increasing the effective telescope aperture by more than 30%, without degrading the astrometric precision or introducing artificial bias. In the case of orbital objects, deconvolution can double the signal-to-noise ratio of the image, which helps to discover and track dangerous objects such as space debris or lost satellites. The benefits obtained with CCD detectors can be extrapolated to any kind of image sensor.

  6. Regression-assisted deconvolution.

    PubMed

    McIntyre, Julie; Stefanski, Leonard A

    2011-06-30

    We present a semi-parametric deconvolution estimator for the density function of a random variable X that is measured with error, a common challenge in many epidemiological studies. Traditional deconvolution estimators rely only on assumptions about the distribution of X and of the error in its measurement, and ignore information available in auxiliary variables. Our method assumes the availability of a covariate vector statistically related to X by a mean-variance function regression model, where the regression errors are normally distributed and independent of the measurement errors. Simulations suggest that the estimator achieves a much lower integrated squared error than the observed-data kernel density estimator when the models are correctly specified and the assumption of normal regression errors is met. We illustrate the method using anthropometric measurements of newborns to estimate the density function of newborn length. Copyright © 2011 John Wiley & Sons, Ltd.
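
    For context, the traditional deconvolution estimator that this method improves upon can be sketched as follows, assuming normally distributed measurement error with known standard deviation and a kernel whose Fourier transform has compact support; the integration grid and bandwidth h are illustrative choices.

```python
import numpy as np

def deconv_kde(w, x_grid, sigma, h):
    """Classical deconvoluting kernel density estimator for X, where the
    observations are W = X + eps with eps ~ N(0, sigma^2)."""
    t = np.linspace(-1.0 / h, 1.0 / h, 2001)            # frequency grid
    phi_K = (1.0 - (h * t) ** 2) ** 3                    # kernel FT, support |ht|<=1
    phi_W = np.exp(1j * np.outer(t, w)).mean(axis=1)     # empirical char. function
    phi_eps = np.exp(-0.5 * (sigma * t) ** 2)            # normal error char. function
    integrand = phi_K * phi_W / phi_eps
    # inverse Fourier transform evaluated on the output grid
    f = np.array([np.trapz(np.exp(-1j * t * x) * integrand, t).real
                  for x in x_grid]) / (2.0 * np.pi)
    return np.clip(f, 0.0, None)

# usage: density of X from 300 error-contaminated observations
w = np.random.randn(300) + np.random.normal(0.0, 0.3, 300)
f_hat = deconv_kde(w, np.linspace(-4, 4, 200), sigma=0.3, h=0.4)
```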

  7. XDGMM: eXtreme Deconvolution Gaussian Mixture Modeling

    NASA Astrophysics Data System (ADS)

    Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.

    2017-08-01

    XDGMM uses Gaussian mixtures for density estimation of noisy, heterogeneous, and incomplete data via extreme deconvolution (XD) algorithms, in a way that is compatible with scikit-learn's machine learning methods. It implements both the astroML and the Bovy et al. (2011) algorithms, and extends scikit-learn's BaseEstimator class so that cross-validation methods work. It also allows the user to produce a conditioned model when the values of some parameters are known.
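
    A hypothetical usage sketch, assuming the package is importable as `xdgmm` and follows the scikit-learn-style interface described above; the import path, argument names, and array shapes are assumptions rather than documented API.

```python
import numpy as np
from xdgmm import XDGMM  # import path is an assumption

X = np.random.randn(500, 2)                     # noisy observations
Xerr = np.tile(0.1 * np.eye(2), (500, 1, 1))    # per-point error covariances

model = XDGMM(n_components=3)
model.fit(X, Xerr)           # deconvolves the per-point noise while fitting
samples = model.sample(1000) # draw from the noise-free density estimate
```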

  8. Two-Dimensional Signal Processing and Storage and Theory and Applications of Electromagnetic Measurements.

    DTIC Science & Technology

    1983-06-01

    The system provides a convenient, low-noise, fully parallel method of improving contrast and enhancing structural detail in an image prior to further processing. The research is directed towards problems in deconvolution, reconstruction from projections, bandlimited extrapolation, and shift-varying deblurring of images. A deconvolution algorithm has been studied with promising results for simulated motion blurs; future work will focus on noise effects and extensions of the method.

  9. Chemometric Deconvolution of Continuous Electrokinetic Injection Micellar Electrokinetic Chromatography Data for the Quantitation of Trinitrotoluene in Mixtures of Other Nitroaromatic Compounds

    DTIC Science & Technology

    2014-02-24


  10. Enhanced Seismic Imaging of Turbidite Deposits in Chicontepec Basin, Mexico

    NASA Astrophysics Data System (ADS)

    Chavez-Perez, S.; Vargas-Meleza, L.

    2007-05-01

    We test, as post-processing tools, a combination of migration deconvolution and geometric attributes to attack the complex problems of reflector resolution and detection in migrated seismic volumes. Migration deconvolution has been shown empirically to be an effective approach for enhancing the illumination of migrated images, which are blurred versions of the subsurface reflectivity distribution, by decreasing imaging artifacts, improving spatial resolution, and alleviating acquisition-footprint problems. We utilize migration deconvolution to improve the quality and resolution of 3D prestack time-migrated results from the Chicontepec basin, Mexico, a very relevant portion of the producing onshore sector of Pemex, the Mexican petroleum company. The seismic data cover the Agua Fria, Coapechaca, and Tajin fields and exhibit acquisition-footprint problems, migration artifacts, and a severe lack of resolution in the target area, where turbidite deposits need to be characterized between major erosional surfaces. Vertical resolution is about 35 m, while the main hydrocarbon plays are turbidite beds no more than 60 m thick. We also employ geometric attributes (e.g., coherent energy and curvature), computed after migration deconvolution, to detect and map depositional features and to help design development wells in the area. Results of this workflow show imaging enhancement and allow us to identify meandering channels and individual sand bodies that were previously indistinguishable in the original migrated seismic images.
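
    As a small example of the geometric-attribute step, the sketch below computes mean curvature on a mapped horizon grid; this is one common curvature attribute used to highlight channels and flexures, not necessarily the exact attribute set used in the study.

```python
import numpy as np

def mean_curvature(z, dx=1.0):
    """Mean curvature of a mapped horizon z(x, y) on a regular grid with
    spacing dx; strong curvature anomalies often trace channel edges."""
    zy, zx = np.gradient(z, dx)        # first derivatives (axis 0 = y)
    zyy, zyx = np.gradient(zy, dx)     # second derivatives
    zxy, zxx = np.gradient(zx, dx)
    num = (1 + zx**2) * zyy - 2 * zx * zy * zxy + (1 + zy**2) * zxx
    den = 2.0 * (1 + zx**2 + zy**2) ** 1.5
    return num / den
```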

  11. Dependence of quantitative accuracy of CT perfusion imaging on system parameters

    NASA Astrophysics Data System (ADS)

    Li, Ke; Chen, Guang-Hong

    2017-03-01

    Deconvolution is a popular method for calculating parametric perfusion maps from four-dimensional CT perfusion (CTP) source images. During the deconvolution process, the four-dimensional space is collapsed into three dimensions by removing the temporal dimension, and prior knowledge is often used to suppress the noise associated with the process. These additional complexities confound the understanding of the deconvolution-based CTP imaging system and of how its quantitative accuracy depends on the parameters and sub-operations involved in the image formation process. Meanwhile, there is a strong clinical need to answer this question, as physicians often rely heavily on the quantitative values of perfusion parameters to make diagnostic decisions, particularly in emergent clinical situations (e.g. diagnosis of acute ischemic stroke). The purpose of this work was to develop a theoretical framework that quantitatively relates the quantification accuracy of parametric perfusion parameters to the CTP acquisition and post-processing parameters. This goal was achieved with the help of a cascaded systems analysis of deconvolution-based CTP imaging. Based on this analysis, the quantitative relationship between regularization strength, source image noise, the arterial input function, and the quantification accuracy of the perfusion parameters was established. The theory could potentially be used to guide the development of CTP imaging technology toward better quantification accuracy and lower radiation dose.
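
    A minimal sketch of the kind of deconvolution operation analyzed here, using truncated SVD of the convolution matrix built from the arterial input function; the truncation fraction `lam` plays the role of the regularization strength discussed above, and all names and values are illustrative.

```python
import numpy as np

def ctp_deconvolve(aif, tissue, dt, lam=0.1):
    """Truncated-SVD deconvolution of one tissue curve by the arterial
    input function (AIF), recovering the flow-scaled residue function."""
    n = len(aif)
    # lower-triangular convolution matrix built from the AIF
    A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                       for i in range(n)])
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.zeros_like(s)
    keep = s > lam * s[0]              # regularization: drop small singular values
    s_inv[keep] = 1.0 / s[keep]
    k = Vt.T @ (s_inv * (U.T @ tissue))  # flow-scaled residue function
    cbf = k.max()                         # perfusion (CBF) estimate
    return k, cbf
```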

  12. Data Dependent Peak Model Based Spectrum Deconvolution for Analysis of High Resolution LC-MS Data

    PubMed Central

    2015-01-01

    A data-dependent peak model (DDPM) based spectrum deconvolution method was developed for the analysis of high-resolution LC-MS data. To construct the extracted ion chromatograms (XICs), a clustering method, density-based spatial clustering of applications with noise (DBSCAN), is applied to all m/z values of an LC-MS data set to group the m/z values into XICs. DBSCAN constructs the XICs without the need for a user-defined m/z variation window. After XIC construction, the peaks of molecular ions in each XIC are detected using both first- and second-derivative tests, followed by an optimized chromatographic peak model selection method for peak deconvolution. A total of six chromatographic peak models are considered: Gaussian, log-normal, Poisson, gamma, exponentially modified Gaussian, and a hybrid of exponential and Gaussian models. The abundant non-overlapping peaks are chosen to find optimal peak models that are both data- and retention-time-dependent. Analysis of 18 spiked-in LC-MS data sets demonstrates that the proposed DDPM spectrum deconvolution method outperforms the traditional method: on average, the DDPM approach not only detected 58 more chromatographic peaks in each of the test LC-MS data sets, but also improved the retention time and peak area estimates by 3% and 6%, respectively. PMID:24533635
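
    A minimal sketch of the XIC-construction step, assuming scikit-learn's DBSCAN and illustrative `eps`/`min_samples` settings; the paper's actual parameterization is not given in the abstract.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def build_xics(mz, rt, intensity, eps_mz=0.01, min_samples=5):
    """Group the m/z values of an LC-MS run into extracted ion
    chromatograms (XICs) with DBSCAN, so no fixed m/z window is needed."""
    labels = DBSCAN(eps=eps_mz, min_samples=min_samples).fit_predict(
        mz.reshape(-1, 1))
    xics = {}
    for lbl in set(labels) - {-1}:       # label -1 marks noise points
        sel = labels == lbl
        order = np.argsort(rt[sel])      # sort each XIC by retention time
        xics[lbl] = (rt[sel][order], intensity[sel][order])
    return xics
```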

  13. Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng

    2017-01-01

    Most previous regularization methods for the inverse problem of force reconstruction minimize the l2-norm of the desired force. However, such traditional regularization methods, for example Tikhonov regularization and truncated singular value decomposition, commonly fail to solve large-scale ill-posed inverse problems at moderate computational cost. In this paper, taking into account the sparse character of impact forces, the idea of sparse deconvolution is introduced to the field of impact force reconstruction for the first time, and a general sparse deconvolution model of the impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve this large-scale sparse deconvolution model, with minimization of the l2-norm replaced by minimization of the l1-norm. Meanwhile, a preconditioned conjugate gradient algorithm is used to compute the PDIPM search direction with high computational efficiency. Finally, two experiments, covering small- to medium-scale single impact force reconstruction and relatively large-scale consecutive impact force reconstruction, are conducted on a composite wind turbine blade and a shell structure to illustrate the advantages of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate, and robust, both for single and for consecutive impact force reconstruction.
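
    The paper's PDIPM solver is involved; as a sketch of the underlying l1-regularized deconvolution model, the snippet below solves the same objective with ISTA, a much simpler (and slower) proximal-gradient method, purely for illustration.

```python
import numpy as np

def sparse_deconvolve(H, y, lam=0.1, n_iter=500):
    """Solve min_f 0.5*||H f - y||_2^2 + lam*||f||_1 with ISTA.

    H: transfer (convolution) matrix of the structure
    y: measured response
    Returns a sparse estimate of the impact force history f.
    """
    L = np.linalg.norm(H, 2) ** 2          # Lipschitz constant of the gradient
    f = np.zeros(H.shape[1])
    for _ in range(n_iter):
        g = H.T @ (H @ f - y)              # gradient of the data-fit term
        z = f - g / L
        # soft-thresholding enforces sparsity (the l1 proximal step)
        f = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return f
```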

  14. Partitioning of nitroxides in dispersed systems investigated by ultrafiltration, EPR and NMR spectroscopy.

    PubMed

    Krudopp, Heimke; Sönnichsen, Frank D; Steffen-Heins, Anja

    2015-08-15

    The partitioning behavior of paramagnetic nitroxides in dispersed systems can be determined by deconvolution of electron paramagnetic resonance (EPR) spectra, giving results equivalent to those of the validated methods of ultrafiltration (UF) and pulsed-field-gradient nuclear magnetic resonance spectroscopy (PFG-NMR). The partitioning behavior of nitroxides of increasing lipophilicity was investigated in anionic, cationic, and nonionic micellar systems and in 10 wt% o/w emulsions. Apart from EPR spectrum deconvolution, PFG-NMR was used in micellar solutions as a non-destructive approach, while UF is based on separating a very small volume of the aqueous phase. For all emulsifiers used, the proportion of nitroxide solubilized in the micellar or emulsion interface increased with increasing nitroxide lipophilicity, depending on the substituent. Comparing the different approaches, EPR deconvolution and UF revealed comparable proportions of nitroxide solubilized in the interfaces; these proportions were higher than those found with PFG-NMR. For the PFG-NMR self-diffusion experiments the reduced nitroxides were used, revealing highly dynamic behavior of the hydroxylamines and emulsifiers. Deconvolution of EPR spectra turned out to be the preferred method for measuring the partitioning behavior of paramagnetic molecules, as it can distinguish between several populations at their individual solubilization sites. Copyright © 2015 Elsevier Inc. All rights reserved.
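
    A minimal sketch of deconvolving a spectrum into two populations, assuming reference spectra for the aqueous and interface-bound probe are available; real EPR analyses typically fit simulated lineshapes rather than fixed references, so this is illustrative only.

```python
import numpy as np
from scipy.optimize import nnls

def partition_from_epr(spectrum, ref_aqueous, ref_interface):
    """Estimate the fractions of a nitroxide probe in the aqueous phase
    and in the micellar/emulsion interface by non-negative least-squares
    unmixing of two reference EPR spectra."""
    A = np.column_stack([ref_aqueous, ref_interface])
    coef, _ = nnls(A, spectrum)        # non-negative population weights
    return coef / coef.sum()           # [aqueous fraction, interface fraction]
```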

  15. Extraction of near-surface properties for a lossy layered medium using the propagator matrix

    USGS Publications Warehouse

    Mehta, K.; Snieder, R.; Graizer, V.

    2007-01-01

    Near-surface properties play an important role in advancing earthquake hazard assessment. Other areas where near-surface properties are crucial include civil engineering and the detection and delineation of potable groundwater. From an exploration point of view, near-surface properties are needed for wavefield separation and for correcting for the local near-receiver structure. It has been shown that these properties can be estimated for a lossless homogeneous medium using the propagator matrix. To estimate the near-surface properties, we apply deconvolution to passive borehole recordings of waves excited by an earthquake. Deconvolving the incoherent waveforms recorded by sensors at different depths in the borehole with the recording at the surface yields waves that propagate upwards and downwards along the array. These waves, obtained by deconvolution, can be used to estimate the P- and S-wave velocities near the surface. As opposed to waves obtained by cross-correlation, which represent a filtered version of the sum of the causal and acausal Green's functions between two receivers, the waves obtained by deconvolution represent the elements of the propagator matrix. Finally, we show analytically how the propagator matrix analysis extends to a lossy layered medium for the special case of normal incidence. © 2007 The Authors. Journal compilation © 2007 RAS.
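
    A minimal sketch of the deconvolution step, implemented as a water-level-stabilized spectral division of the depth recording by the surface recording; the stabilization constant is an added assumption.

```python
import numpy as np

def borehole_deconvolve(u_depth, u_surface, water_level=0.01):
    """Deconvolve a borehole record at depth by the surface record to
    isolate the up- and down-going waves between the two sensors (the
    propagator-matrix elements described above)."""
    n = len(u_depth)
    D = np.fft.rfft(u_depth, n)
    S = np.fft.rfft(u_surface, n)
    # water level prevents division by near-zero spectral amplitudes
    denom = np.maximum(np.abs(S) ** 2, water_level * np.max(np.abs(S)) ** 2)
    return np.fft.irfft(D * np.conj(S) / denom, n)
```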

  16. Resolving complex fibre architecture by means of sparse spherical deconvolution in the presence of isotropic diffusion

    NASA Astrophysics Data System (ADS)

    Zhou, Q.; Michailovich, O.; Rathi, Y.

    2014-03-01

    High angular resolution diffusion imaging (HARDI) improves upon more traditional diffusion tensor imaging (DTI) in its ability to resolve the orientations of crossing and branching neural fibre tracts. The HARDI signals are measured over a spherical shell in q-space and are usually used as input to q-ball imaging (QBI), which allows estimation of the diffusion orientation distribution functions (ODFs) associated with a given region of interest. Unfortunately, the partial nature of single-shell sampling limits the estimation accuracy; as a result, the recovered ODFs may not possess sufficient resolution to reveal the orientations of fibre tracts that cross each other at acute angles. A possible solution to the limited resolution of QBI is provided by spherical deconvolution, a particular instance of which is sparse deconvolution. However, while capable of yielding high-resolution reconstructions at spatial locations corresponding to white matter, such methods tend to become unstable when applied to anatomical regions with a substantial content of isotropic diffusion. To resolve this problem, a new deconvolution approach is proposed in this paper. Apart from being uniformly stable across the whole brain, the proposed method allows one to quantify the isotropic component of cerebral diffusion, which is itself known to be a useful diagnostic measure.
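
    As a toy illustration of the spherical-deconvolution idea (not the method proposed here), the sketch below expresses the measured signal as a non-negative mixture of single-fibre responses over discrete candidate directions; real implementations work in the spherical harmonic domain and, as this paper proposes, model the isotropic compartment explicitly.

```python
import numpy as np
from scipy.optimize import nnls

def fod_nnls(signal, bvecs, fib_dirs, response):
    """Toy discrete-orientation spherical deconvolution.

    signal:   HARDI measurements, one per gradient direction
    bvecs:    unit gradient directions, shape (n_meas, 3)
    fib_dirs: candidate fibre directions, shape (n_dirs, 3)
    response: single-fibre kernel as a function of |cos(angle)|
    Returns non-negative fibre-orientation weights.
    """
    A = np.array([[response(abs(g @ d)) for d in fib_dirs] for g in bvecs])
    w, _ = nnls(A, signal)             # sparsity-friendly non-negative fit
    return w / max(w.sum(), 1e-12)
```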

  17. Model-free quantification of dynamic PET data using nonparametric deconvolution

    PubMed Central

    Zanderigo, Francesca; Parsey, Ramin V; Todd Ogden, R

    2015-01-01

    Dynamic positron emission tomography (PET) data are usually quantified using compartment models (CMs) or derived graphical approaches. Often, however, CMs either do not properly describe the tracer kinetics or are not identifiable, leading to nonphysiological estimates of tracer binding. The PET data are modeled as the convolution of the metabolite-corrected input function with the tracer impulse response function (IRF) in the tissue. Using nonparametric deconvolution methods, it is possible to obtain model-free estimates of the IRF, from which functionals related to the tracer volume of distribution and binding may be computed, but this approach has rarely been applied in PET. Here, we apply nonparametric deconvolution using singular value decomposition to simulated and test-retest clinical PET data for four reversible tracers well characterized by CMs ([11C]CUMI-101, [11C]DASB, [11C]PE2I, and [11C]WAY-100635), and systematically compare the reproducibility, reliability, and identifiability of various IRF-derived functionals with those of traditional CM outcomes. Results show that nonparametric deconvolution, completely free of model assumptions, yields estimates of tracer volume of distribution and binding that are very close to those obtained with CMs and, in some cases, show better test-retest performance than the CM outcomes. PMID:25873427
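
    A minimal sketch of SVD-based nonparametric deconvolution for one time-activity curve, with a volume-of-distribution functional computed as the integral of the estimated IRF; the regularization scheme and threshold are assumptions.

```python
import numpy as np

def irf_from_pet(input_fn, tac, dt, lam=0.05):
    """Model-free estimate of the tissue impulse response function (IRF)
    by regularized SVD deconvolution of a time-activity curve (TAC) with
    the metabolite-corrected input function."""
    n = len(input_fn)
    A = dt * np.array([[input_fn[i - j] if i >= j else 0.0
                        for j in range(n)] for i in range(n)])
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.zeros_like(s)
    keep = s > lam * s[0]                 # truncate small singular values
    s_inv[keep] = 1.0 / s[keep]
    irf = Vt.T @ (s_inv * (U.T @ tac))
    v_t = np.trapz(irf, dx=dt)            # volume-of-distribution functional
    return irf, v_t
```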

  18. Coincidence ion imaging with a fast frame camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Suk Kyoung; Cudry, Fadia; Lin, Yun Fei

    2014-12-15

    A new time- and position-sensitive particle detection system based on a fast-frame CMOS (complementary metal-oxide-semiconductor) camera has been developed for coincidence ion imaging. The system is composed of four major components: a conventional microchannel plate/phosphor screen ion imager, a fast-frame CMOS camera, a single-anode photomultiplier tube (PMT), and a high-speed digitizer. The system collects the positional information of ions from the fast-frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the PMT processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of ion spots on each camera frame with the peak heights in the corresponding time-of-flight spectrum from the PMT. Efficient computer algorithms have been developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched pair of co-fragments (methyl and iodine cations) produced by strong-field dissociative double ionization of methyl iodide.
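
    A minimal sketch of the per-frame centroiding step using connected-component labeling; the threshold choice and data layout are assumptions. The returned spot intensities are what would later be correlated with the PMT time-of-flight peak heights for multi-hit assignment.

```python
import numpy as np
from scipy import ndimage

def centroid_frame(frame, threshold):
    """Find ion spots on one camera frame: threshold, label connected
    regions, and return each spot's centroid and integrated intensity."""
    mask = frame > threshold
    labels, n = ndimage.label(mask)
    if n == 0:
        return [], []
    idx = range(1, n + 1)
    centroids = ndimage.center_of_mass(frame, labels, idx)   # (row, col) pairs
    intensities = ndimage.sum(frame, labels, idx)            # per-spot intensity
    return centroids, intensities
```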

  19. Kinetics of liquid-mediated crystallization of amorphous Ge from multi-frame dynamic transmission electron microscopy

    DOE PAGES

    Santala, M. K.; Raoux, S.; Campbell, G. H.

    2015-12-24

    The kinetics of laser-induced, liquid-mediated crystallization of amorphous Ge thin films were studied using multi-frame dynamic transmission electron microscopy (DTEM), a nanosecond-scale photo-emission transmission electron microscopy technique. In these experiments, high temperature gradients are established in thin amorphous Ge films with a 12-ns laser pulse with a Gaussian spatial profile. The hottest region at the center of the laser spot crystallizes in ~100 ns and becomes nano-crystalline. Over the next several hundred nanoseconds, crystallization continues radially outward from the nano-crystalline region, forming elongated grains, some many microns long. The growth rate during the formation of these radial grains is measured with time-resolved imaging experiments. Crystal growth rates exceed 10 m/s, consistent with crystallization mediated by a very thin, undercooled transient liquid layer rather than a purely solid-state transformation mechanism. The kinetics of this growth mode have been studied in detail under steady-state conditions, but here we provide a detailed study of liquid-mediated growth in high temperature gradients. Unexpectedly, the propagation rate of the crystallization front was observed to remain constant during this growth mode even when passing through large local temperature gradients, in stark contrast to other similar studies that suggested the growth rate changed dramatically. As a result, the high throughput of multi-frame DTEM gives a more complete picture of the role of temperature and temperature gradients in laser crystallization than previous DTEM experiments.
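
    The growth-rate measurement amounts to tracking the crystallization front across successive DTEM frames and fitting front position against frame delay time; a sketch with made-up, illustrative numbers follows.

```python
import numpy as np

# front positions digitized from successive DTEM frames (values illustrative)
t_ns = np.array([200.0, 400.0, 600.0, 800.0])   # frame delay times, ns
r_um = np.array([2.1, 4.0, 6.2, 8.1])           # front radius, micrometers

v_um_per_ns, r0 = np.polyfit(t_ns, r_um, 1)     # slope = growth velocity
print(f"front velocity ~ {v_um_per_ns * 1e3:.1f} m/s")  # 1 um/ns = 1000 m/s
```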

  20. Kinetics of liquid-mediated crystallization of amorphous Ge from multi-frame dynamic transmission electron microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santala, M. K., E-mail: melissa.santala@oregonstate.edu; Campbell, G. H.; Raoux, S.

    2015-12-21

    The kinetics of laser-induced, liquid-mediated crystallization of amorphous Ge thin films were studied using multi-frame dynamic transmission electron microscopy (DTEM), a nanosecond-scale photo-emission transmission electron microscopy technique. In these experiments, high temperature gradients are established in thin amorphous Ge films with a 12-ns laser pulse with a Gaussian spatial profile. The hottest region at the center of the laser spot crystallizes in ∼100 ns and becomes nano-crystalline. Over the next several hundred nanoseconds, crystallization continues radially outward from the nano-crystalline region, forming elongated grains, some many microns long. The growth rate during the formation of these radial grains is measured with time-resolved imaging experiments. Crystal growth rates exceed 10 m/s, consistent with crystallization mediated by a very thin, undercooled transient liquid layer rather than a purely solid-state transformation mechanism. The kinetics of this growth mode have been studied in detail under steady-state conditions, but here we provide a detailed study of liquid-mediated growth in high temperature gradients. Unexpectedly, the propagation rate of the crystallization front was observed to remain constant during this growth mode even when passing through large local temperature gradients, in stark contrast to other similar studies that suggested the growth rate changed dramatically. The high throughput of multi-frame DTEM gives a more complete picture of the role of temperature and temperature gradients in laser crystallization than previous DTEM experiments.
