Sample records for constrained spherical deconvolution

  1. The Olfactory System Revealed: Non-Invasive Mapping by using Constrained Spherical Deconvolution Tractography in Healthy Humans

    PubMed Central

    Milardi, Demetrio; Cacciola, Alberto; Calamuneri, Alessandro; Ghilardi, Maria F.; Caminiti, Fabrizia; Cascio, Filippo; Andronaco, Veronica; Anastasi, Giuseppe; Mormina, Enricomaria; Arrigo, Alessandro; Bruschetta, Daniele; Quartarone, Angelo

    2017-01-01

    Although the olfactory sense has always received less attention than the visual, auditory, or somatic senses, it does play a major role in our ordinary life, with important implications in dangerous situations and in social and emotional behaviors. The traditional diffusion tensor signal model and related tractography have been used in past years to reconstruct the cranial nerves, including the olfactory nerve (ON). However, they have provided no supplementary information on the pathways of the olfactory network. Here, by using the more advanced Constrained Spherical Deconvolution (CSD) diffusion model, we show for the first time, in vivo and non-invasively, that in healthy humans the olfactory system has a widely distributed anatomical network reaching several cortical regions as well as many subcortical structures. Although the present study focuses on a healthy sample, a similar approach could be applied in the near future to gain important insights into the early involvement of olfaction in several neurodegenerative disorders. PMID:28443000

  2. Sparse Solution of Fiber Orientation Distribution Function by Diffusion Decomposition

    PubMed Central

    Yeh, Fang-Cheng; Tseng, Wen-Yih Isaac

    2013-01-01

    Fiber orientation is the key information in diffusion tractography. Several deconvolution methods have been proposed to obtain fiber orientations by estimating a fiber orientation distribution function (ODF). However, the L2 regularization used in deconvolution often leads to false fibers that compromise the specificity of the results. To address this problem, we propose a method called diffusion decomposition, which obtains a sparse solution of the fiber ODF by decomposing the diffusion ODF obtained from q-ball imaging (QBI), diffusion spectrum imaging (DSI), or generalized q-sampling imaging (GQI). A simulation study, a phantom study, and an in-vivo study were conducted to examine the performance of diffusion decomposition. The simulation study showed that diffusion decomposition was more accurate than both the constrained spherical deconvolution and ball-and-sticks models. The phantom study showed that the angular error of diffusion decomposition was significantly lower than that of constrained spherical deconvolution at a 30° crossing and of the ball-and-sticks model at a 60° crossing. The in-vivo study showed that diffusion decomposition can be applied to QBI, DSI, or GQI, and the resolved fiber orientations were consistent regardless of the diffusion sampling schemes and diffusion reconstruction methods. The performance of diffusion decomposition was further demonstrated by resolving crossing fibers on a 30-direction QBI dataset and a 40-direction DSI dataset. In conclusion, diffusion decomposition can improve angular resolution and resolve crossing fibers in datasets with low SNR and a substantially reduced number of diffusion encoding directions. These advantages may be valuable for human connectome studies and clinical research. PMID:24146772
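
    The sparse-recovery idea underlying this record can be sketched in a few lines: build a dictionary of single-fiber responses over candidate orientations and solve a non-negative least-squares problem, whose solution is naturally sparse. The response kernel, orientation set, and noise level below are all invented for illustration; this is not the authors' diffusion-decomposition code.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Candidate fiber orientations: random unit vectors standing in for a
# uniform spherical point set.
dirs = rng.normal(size=(100, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

# Orientations at which the diffusion ODF is sampled.
pts = rng.normal(size=(200, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

def response(fiber, points, kappa=10.0):
    # Assumed single-fiber kernel: antipodally symmetric, peaked on the axis.
    return np.exp(kappa * (points @ fiber) ** 2)

# Dictionary with one column per candidate orientation.
A = np.column_stack([response(d, pts) for d in dirs])

# Synthetic two-fiber voxel: two dictionary atoms plus a little noise.
true_idx = [3, 47]
odf = A[:, true_idx].sum(axis=1) + 0.01 * rng.normal(size=len(pts))

# Non-negative least squares yields a sparse, non-negative weight vector.
w, _ = nnls(A, odf)
peaks = np.argsort(w)[-2:]

# Worst-case angle between recovered and true orientations (antipodal
# pairs count as the same fiber).
ang = max(np.degrees(np.arccos(np.clip(np.abs(dirs[peaks] @ dirs[t]).max(), -1, 1)))
          for t in true_idx)
resid = np.linalg.norm(A @ w - odf) / np.linalg.norm(odf)
```

    In the L2-regularized deconvolution criticized in the abstract, the penalty spreads weight over many directions; here it is the non-negativity constraint that keeps the solution sparse.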

  3. Non-Negative Spherical Deconvolution (NNSD) for estimation of fiber Orientation Distribution Function in single-/multi-shell diffusion MRI.

    PubMed

    Cheng, Jian; Deriche, Rachid; Jiang, Tianzi; Shen, Dinggang; Yap, Pew-Thian

    2014-11-01

    Spherical Deconvolution (SD) is commonly used for estimating fiber Orientation Distribution Functions (fODFs) from diffusion-weighted signals. Existing SD methods can be classified into two categories: 1) Continuous Representation based SD (CR-SD), where typically a Spherical Harmonic (SH) representation is used for convenient analytical solutions, and 2) Discrete Representation based SD (DR-SD), where the signal profile is represented by a discrete set of basis functions uniformly oriented on the unit sphere. A feasible fODF should be non-negative and should integrate to unity over the unit sphere S². However, to our knowledge, most existing SH-based SD methods enforce non-negativity only on discretized points and not on the whole continuum of S². Maximum Entropy SD (MESD) and Cartesian Tensor Fiber Orientation Distributions (CT-FOD) are the only SD methods that ensure non-negativity throughout the unit sphere. They are however computationally intensive and are susceptible to errors caused by numerical spherical integration. Existing SD methods are also known to overestimate the number of fiber directions, especially in regions with low anisotropy. DR-SD introduces additional error in peak detection owing to the angular discretization of the unit sphere. This paper proposes an SD framework, called Non-Negative SD (NNSD), to overcome all the limitations above. NNSD is significantly less susceptible to false-positive peaks, uses the SH representation for efficient analytical spherical deconvolution, and allows accurate peak detection throughout the whole unit sphere. We further show that NNSD and most existing SD methods can be extended to work on multi-shell data by introducing a three-dimensional fiber response function. We evaluated NNSD in comparison with Constrained SD (CSD), a quadratic programming variant of CSD, MESD, and an L1-norm regularized non-negative least-squares DR-SD. Experiments on synthetic and real single-/multi-shell data indicate that NNSD improves estimation performance in terms of mean difference of angles, peak detection consistency, and anisotropy contrast between isotropic and anisotropic regions. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. Isotropic non-white matter partial volume effects in constrained spherical deconvolution.

    PubMed

    Roine, Timo; Jeurissen, Ben; Perrone, Daniele; Aelterman, Jan; Leemans, Alexander; Philips, Wilfried; Sijbers, Jan

    2014-01-01

    Diffusion-weighted (DW) magnetic resonance imaging (MRI) is a non-invasive imaging method, which can be used to investigate neural tracts in the white matter (WM) of the brain. Significant partial volume effects (PVEs) are present in the DW signal due to relatively large voxel sizes. These PVEs can be caused by both non-WM tissue, such as gray matter (GM) and cerebrospinal fluid (CSF), and by multiple non-parallel WM fiber populations. High angular resolution diffusion imaging (HARDI) methods have been developed to correctly characterize complex WM fiber configurations, but to date, many of the HARDI methods do not account for non-WM PVEs. In this work, we investigated the isotropic PVEs caused by non-WM tissue in WM voxels on fiber orientations extracted with constrained spherical deconvolution (CSD). Experiments were performed on simulated and real DW-MRI data. In particular, simulations were performed to demonstrate the effects of varying the diffusion weightings, signal-to-noise ratios (SNRs), fiber configurations, and tissue fractions. Our results show that the presence of non-WM tissue signal causes a decrease in the precision of the detected fiber orientations and an increase in the detection of false peaks in CSD. We estimated 35-50% of WM voxels to be affected by non-WM PVEs. For HARDI sequences, which typically have a relatively high degree of diffusion weighting, these adverse effects are most pronounced in voxels with GM PVEs. The non-WM PVEs become severe with 50% GM volume for maximum spherical harmonics orders of 8 and below, and already with 25% GM volume for higher orders. In addition, a low diffusion weighting or SNR increases the effects. The non-WM PVEs may cause problems in connectomics, where reliable fiber tracking at the WM-GM interface is especially important. 
    We suggest acquiring data with a high diffusion weighting of 2500-3000 s/mm², a reasonable SNR (~30), and using lower SH orders in GM-contaminated regions to minimize the non-WM PVEs in CSD.
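
    The loss of angular contrast that drives these findings can be reproduced with a minimal mixing simulation. The diffusivities and tensor eigenvalues below are typical literature values, not the paper's experimental parameters: the voxel signal is a volume-fraction-weighted sum of a white-matter tensor signal and an isotropic gray-matter-like signal, which flattens the angular contrast that spherical deconvolution relies on.

```python
import numpy as np

b = 3000.0                      # diffusion weighting, s/mm^2
d_iso = 0.8e-3                  # assumed isotropic diffusivity, mm^2/s
d_par, d_perp = 1.7e-3, 0.2e-3  # assumed WM tensor eigenvalues, mm^2/s

# Gradient directions in a plane containing the fiber (fiber along x).
theta = np.linspace(0, np.pi, 64)
g = np.column_stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)])

def wm_signal(g):
    # Axially symmetric tensor with its principal axis along x.
    d_eff = d_perp + (d_par - d_perp) * g[:, 0] ** 2
    return np.exp(-b * d_eff)

def voxel_signal(f_gm):
    # Partial-volume mixture of isotropic GM and anisotropic WM signal.
    return f_gm * np.exp(-b * d_iso) + (1 - f_gm) * wm_signal(g)

# Angular contrast (max/min over directions) drops as the GM fraction grows.
contrast = {f: voxel_signal(f).max() / voxel_signal(f).min() for f in (0.0, 0.25, 0.5)}
print(contrast)
```

    The shrinking max/min ratio with growing GM fraction is the mechanism behind the decreased precision and false peaks reported above.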

  5. A reliability assessment of constrained spherical deconvolution-based diffusion-weighted magnetic resonance imaging in individuals with chronic stroke.

    PubMed

    Snow, Nicholas J; Peters, Sue; Borich, Michael R; Shirzad, Navid; Auriat, Angela M; Hayward, Kathryn S; Boyd, Lara A

    2016-01-15

    Diffusion-weighted magnetic resonance imaging (DW-MRI) is commonly used to assess white matter properties after stroke. Recent work has utilized constrained spherical deconvolution (CSD) to estimate complex intra-voxel fiber architecture unaccounted for by tensor-based fiber tractography. However, the reliability of CSD-based tractography has not been established in people with chronic stroke. Our objective was to establish the reliability of CSD-based DW-MRI in chronic stroke. High-resolution DW-MRI was performed in ten adults with chronic stroke during two separate sessions. Deterministic region of interest-based fiber tractography using CSD was performed by two raters. Mean fractional anisotropy (FA), apparent diffusion coefficient (ADC), tract number, and tract volume were extracted from reconstructed fiber pathways in the corticospinal tract (CST) and superior longitudinal fasciculus (SLF). Callosal fiber pathways connecting the primary motor cortices were also evaluated. Inter-rater and test-retest reliability were determined by intra-class correlation coefficients (ICCs). ICCs revealed excellent reliability for FA and ADC in the ipsilesional (0.86-1.00; p<0.05) and contralesional hemispheres (0.94-1.00; p<0.0001) for CST and SLF fibers, and excellent reliability for all metrics in callosal fibers (0.85-1.00; p<0.05). ICCs ranged from poor to excellent for tract number and tract volume in the ipsilesional (-0.11 to 0.92; p≤0.57) and contralesional hemispheres (-0.27 to 0.93; p≤0.64) for CST and SLF fibers. Like other established DW-MRI approaches, CSD-based tractography is a reliable means of evaluating FA and ADC in major white matter pathways in chronic stroke. Future work should address the reproducibility and utility of CSD-based metrics of tract number and tract volume. Copyright © 2015 Elsevier B.V. All rights reserved.
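
    The reliability metric used here, the intra-class correlation coefficient, is straightforward to compute directly. Below is a minimal sketch of the two-way random-effects, absolute-agreement, single-measurement form ICC(2,1), applied to invented test-retest FA values; the paper's raw data are not published, and the specific ICC variant used there may differ.

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1) for a (n subjects, k raters/sessions) matrix."""
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)          # per-subject means
    col_means = data.mean(axis=0)          # per-session means
    ssr = k * ((row_means - grand) ** 2).sum()   # between-subject SS
    ssc = n * ((col_means - grand) ** 2).sum()   # between-session SS
    sse = ((data - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical FA values for 10 subjects measured in 2 sessions: a stable
# per-subject value plus small session-to-session noise.
rng = np.random.default_rng(1)
subject_fa = rng.uniform(0.3, 0.6, size=10)
sessions = np.column_stack([subject_fa + rng.normal(0, 0.01, 10),
                            subject_fa + rng.normal(0, 0.01, 10)])
print(round(icc_2_1(sessions), 3))  # near 1 when sessions are consistent
```

    Because between-subject variance dominates the session noise here, the ICC lands in the "excellent" range, mirroring the FA and ADC results above.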

  6. Toward Overcoming the Local Minimum Trap in MFBD

    DTIC Science & Technology

    2015-07-14

    Publications during the first two years of this grant include: A. Cornelio, E. Loli Piccolomini, and J. G. Nagy, "Constrained Variable Projection Method for Blind Deconvolution"; and A. Cornelio, E. Loli Piccolomini, and J. G. Nagy, "Constrained Numerical Optimization Methods for Blind Deconvolution," Numerical Algorithms, volume 65, issue 1.

  7. Fast Fourier-based deconvolution for three-dimensional acoustic source identification with solid spherical arrays

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Chu, Zhigang; Shen, Linbang; Ping, Guoli; Xu, Zhongming

    2018-07-01

    Because it can rapidly sharpen acoustic source identification results, Fourier-based deconvolution has been widely studied and applied to delay-and-sum (DAS) beamforming with two-dimensional (2D) planar arrays. It has, however, not yet been developed for spherical harmonics beamforming (SHB) with three-dimensional (3D) solid spherical arrays; this paper addresses that gap. First, the premise of deconvolution, a shift-invariant point spread function (PSF), is analyzed with simulations in order to determine the effective identification region. For the premise to hold approximately, the opening angle of the surface of interest in the elevation dimension should be small, while no restriction is imposed on the azimuth dimension. Then, two deconvolution formulations are built for SHB, using the zero and the periodic boundary conditions respectively. Both simulations and experiments demonstrate that the periodic boundary condition is superior to the zero one and better fits 3D acoustic source identification with solid spherical arrays. Finally, four deconvolution methods based on the periodic boundary condition are formulated, and their performance is assessed both in simulations and experimentally. All four methods offer enhanced spatial resolution and reduced sidelobe contamination over SHB. The recovered source strength approximates the exact one multiplied by a coefficient equal to the square of the focus distance divided by the distance from the source to the array center, while the recovered pressure contribution is scarcely affected by the focus distance, always approximating the exact one.

  8. Resolving complex fibre architecture by means of sparse spherical deconvolution in the presence of isotropic diffusion

    NASA Astrophysics Data System (ADS)

    Zhou, Q.; Michailovich, O.; Rathi, Y.

    2014-03-01

    High angular resolution diffusion imaging (HARDI) improves upon more traditional diffusion tensor imaging (DTI) in its ability to resolve the orientations of crossing and branching neural fibre tracts. The HARDI signals are measured over a spherical shell in q-space and are usually used as an input to q-ball imaging (QBI), which allows estimation of the diffusion orientation distribution functions (ODFs) associated with a given region of interest. Unfortunately, the partial nature of single-shell sampling imposes limits on the estimation accuracy. As a result, the recovered ODFs may not possess sufficient resolution to reveal the orientations of fibre tracts which cross each other at acute angles. A possible solution to the problem of limited resolution of QBI is provided by means of spherical deconvolution, a particular instance of which is sparse deconvolution. However, while capable of yielding high-resolution reconstructions over spatial locations corresponding to white matter, such methods tend to become unstable when applied to anatomical regions with a substantial content of isotropic diffusion. To resolve this problem, a new deconvolution approach is proposed in this paper. Apart from being uniformly stable across the whole brain, the proposed method allows one to quantify the isotropic component of cerebral diffusion, which is known to be a useful diagnostic measure by itself.

  9. A MAP blind image deconvolution algorithm with bandwidth over-constrained

    NASA Astrophysics Data System (ADS)

    Ren, Zhilei; Liu, Jin; Liang, Yonghui; He, Yulong

    2018-03-01

    We demonstrate a maximum a posteriori (MAP) blind image deconvolution algorithm with an over-constrained bandwidth and total variation (TV) regularization to recover a clear image from AO-corrected images. The point spread functions (PSFs) are estimated with their bandwidth constrained to less than the cutoff frequency of the optical system. Our algorithm performs well in avoiding noise magnification. The performance is demonstrated on simulated data.

  10. Restoring defect structures in 3C-SiC/Si (001) from spherical aberration-corrected high-resolution transmission electron microscope images by means of deconvolution processing.

    PubMed

    Wen, C; Wan, W; Li, F H; Tang, D

    2015-04-01

    The [110] cross-sectional samples of 3C-SiC/Si (001) were observed with a spherical aberration-corrected 300 kV high-resolution transmission electron microscope. Two images, taken away from the Scherzer focus condition and therefore not intuitively representing the projected structures, were utilized for the deconvolution. The principle and procedure of image deconvolution and atomic sort recognition are summarized. The restoration of defect structures, together with the recognition of Si and C atoms, from the experimental images is illustrated. Structure maps of an intrinsic stacking fault in the SiC area, and of Lomer and 60° shuffle dislocations at the interface, have been obtained at the atomic level. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Application of an improved minimum entropy deconvolution method for railway rolling element bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Cheng, Yao; Zhou, Ning; Zhang, Weihua; Wang, Zhiwei

    2018-07-01

    Minimum entropy deconvolution is a widely used tool in machinery fault diagnosis because it enhances the impulsive component of the signal. The filter coefficients, which greatly influence the performance of minimum entropy deconvolution, are calculated by an iterative procedure. This paper proposes an improved deconvolution method for the fault detection of rolling element bearings. The proposed method solves for the filter coefficients using the standard particle swarm optimization algorithm, assisted by a generalized spherical coordinate transformation. When optimizing the filter's ability to enhance the impulses caused by faulty rolling element bearings, the proposed method outperformed the classical minimum entropy deconvolution method. The proposed method was validated on simulated and experimental signals from railway bearings. In both the simulation and experimental studies, it delivered better deconvolution performance than the classical minimum entropy deconvolution method, especially at low signal-to-noise ratios.
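
    For reference, the classical minimum entropy deconvolution that this paper improves upon can be written as a short fixed-point iteration (Wiggins' formulation): the FIR filter that maximizes the kurtosis of its output is updated by solving a Toeplitz system on each pass. The impulsive test signal and smearing filter below are invented, and this plain iteration is only a stand-in for the PSO-optimized variant described in the abstract.

```python
import numpy as np
from scipy.linalg import toeplitz, solve
from scipy.signal import lfilter

def med_filter(x, L=30, n_iter=30):
    """Classical Wiggins MED: kurtosis-maximizing FIR filter by fixed point."""
    n = len(x)
    # Autocorrelation (Toeplitz) matrix of the input, lags 0..L-1.
    r = np.correlate(x, x, mode="full")[n - 1 : n - 1 + L]
    R = toeplitz(r)
    f = np.zeros(L)
    f[L // 2] = 1.0                       # delayed-impulse initial filter
    for _ in range(n_iter):
        y = lfilter(f, [1.0], x)
        # Cross-correlation of y^3 with x drives the update toward spikiness.
        b = np.correlate(y**3, x, mode="full")[n - 1 : n - 1 + L]
        f = solve(R, b)
        f /= np.linalg.norm(f)
    return lfilter(f, [1.0], x)

def kurtosis(s):
    s = s - s.mean()
    return (s**4).mean() / (s**2).mean() ** 2

# Invented bearing-like signal: sparse impulses smeared by an assumed
# exponentially decaying transmission-path response, plus noise.
rng = np.random.default_rng(2)
spikes = np.zeros(1000)
spikes[rng.choice(1000, 8, replace=False)] = rng.choice([-1, 1], 8) * rng.uniform(2, 4, 8)
smear = np.exp(-np.arange(40) / 8.0)
x = lfilter(smear, [1.0], spikes) + 0.05 * rng.normal(size=1000)

y = med_filter(x)
print(kurtosis(x), kurtosis(y))
```

    The deconvolved output is spikier (higher kurtosis) than the input, which is exactly the criterion that both the classical and the PSO-based methods optimize.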

  12. Deconvolution for three-dimensional acoustic source identification based on spherical harmonics beamforming

    NASA Astrophysics Data System (ADS)

    Chu, Zhigang; Yang, Yang; He, Yansong

    2015-05-01

    Spherical harmonics beamforming (SHB) with solid spherical arrays has become a particularly attractive tool for acoustic source identification in cabin environments. However, it has some intrinsic limitations, specifically poor spatial resolution and severe sidelobe contamination. This paper focuses on overcoming these limitations by deconvolution. First and foremost, a new formulation is proposed which expresses SHB's output as a convolution of the true source strength distribution and the point spread function (PSF), defined as SHB's response to a unit-strength point source. Additionally, the typical deconvolution methods initially suggested for planar arrays, namely the deconvolution approach for the mapping of acoustic sources (DAMAS), non-negative least squares (NNLS), Richardson-Lucy (RL), and CLEAN, are successfully adapted to SHB and are capable of producing highly resolved and deblurred maps. Finally, the merits of the deconvolution methods are validated, and the relationships between the reconstructed source strength and pressure contribution and the focus distance are explored, both with computer simulations and experimentally. Several interesting results have emerged from this study: (1) compared with SHB, DAMAS, NNLS, RL and CLEAN can not only improve the spatial resolution dramatically but also reduce or even eliminate the sidelobes, allowing clear and unambiguous identification of a single source or incoherent sources. (2) RL is the most suitable for coherent sources, followed by DAMAS and NNLS; CLEAN is the least suitable, owing to its failure to suppress sidelobes. (3) The previous two results hold whether or not the real distance from the source to the array center equals the assumed distance, referred to as the focus distance. (4) The true source strength can be recovered by dividing the reconstructed one by a coefficient equal to the square of the focus distance divided by the real distance from the source to the array center. (5) The reconstructed pressure contribution is almost unaffected by the focus distance, always approximating the true one. This study will be of great significance to the accurate localization and quantification of acoustic sources in cabin environments.
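
    Of the deconvolution methods adapted above, Richardson-Lucy has a particularly compact form. The sketch below is a generic 1-D FFT implementation (with the periodic boundary condition implicit in the FFT) applied to a made-up two-source map; it is not the spherical-harmonics version used in the paper, and the PSF here is an assumed Gaussian blur standing in for SHB's response.

```python
import numpy as np

def richardson_lucy(dirty, psf, n_iter=200):
    # Deconvolve dirty = psf (*) source by multiplicative RL updates.
    # Using FFTs makes every convolution periodic automatically.
    psf_f = np.fft.fft(psf)
    conv = lambda s: np.real(np.fft.ifft(np.fft.fft(s) * psf_f))
    corr = lambda s: np.real(np.fft.ifft(np.fft.fft(s) * np.conj(psf_f)))
    est = np.full_like(dirty, dirty.mean())   # flat positive start
    for _ in range(n_iter):
        est = est * corr(dirty / np.maximum(conv(est), 1e-12))
    return est

n = 128
x = np.arange(n)
psf = np.exp(-0.5 * (np.minimum(x, n - x) / 4.0) ** 2)  # blur, peak at bin 0
psf /= psf.sum()                                        # unit total power

source = np.zeros(n)
source[30], source[40] = 1.0, 0.6                       # two point sources
dirty = np.maximum(np.real(np.fft.ifft(np.fft.fft(source) * np.fft.fft(psf))), 0)

clean = richardson_lucy(dirty, psf)
```

    With a normalized PSF each RL update preserves the total map power, and the two blurred peaks re-sharpen near their true bins, which is the resolution enhancement reported above.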

  13. iSAP: Interactive Sparse Astronomical Data Analysis Packages

    NASA Astrophysics Data System (ADS)

    Fourt, O.; Starck, J.-L.; Sureau, F.; Bobin, J.; Moudden, Y.; Abrial, P.; Schmitt, J.

    2013-03-01

    iSAP consists of three programs, written in IDL, which together are useful for spherical data analysis. MR/S (MultiResolution on the Sphere) contains routines for wavelet, ridgelet and curvelet transforms on the sphere, and applications such as denoising on the sphere using wavelets and/or curvelets, Gaussianity tests and Independent Component Analysis on the sphere. MR/S was designed for the PLANCK project, but can be used for many other applications. SparsePol (Polarized Spherical Wavelets and Curvelets) has routines for polarized wavelet, polarized ridgelet and polarized curvelet transforms on the sphere, and applications such as denoising on the sphere using wavelets and/or curvelets, Gaussianity tests and blind source separation on the sphere. SparsePol was also designed for the PLANCK project. MS-VSTS (Multi-Scale Variance Stabilizing Transform on the Sphere), designed initially for the FERMI project, is useful for spherical mono-channel and multi-channel data analysis when the data are contaminated by Poisson noise. It contains routines for wavelet/curvelet denoising, wavelet deconvolution, and multichannel wavelet denoising and deconvolution.

  14. Strehl-constrained iterative blind deconvolution for post-adaptive-optics data

    NASA Astrophysics Data System (ADS)

    Desiderà, G.; Carbillet, M.

    2009-12-01

    Aims: We aim to improve blind deconvolution applied to post-adaptive-optics (AO) data by taking into account one of their basic characteristics, resulting from the necessarily partial AO correction: the Strehl ratio. Methods: We apply a Strehl constraint in the framework of iterative blind deconvolution (IBD) of post-AO near-infrared images simulated in a detailed end-to-end manner and considering a case that is as realistic as possible. Results: The results obtained clearly show the advantage of using such a constraint, from the point of view of both performance and stability, especially for poorly AO-corrected data. The proposed algorithm has been implemented in the freely-distributed and CAOS-based Software Package AIRY.

  15. Strehl-constrained reconstruction of post-adaptive optics data and the Software Package AIRY, v. 6.1

    NASA Astrophysics Data System (ADS)

    Carbillet, Marcel; La Camera, Andrea; Deguignet, Jérémy; Prato, Marco; Bertero, Mario; Aristidi, Éric; Boccacci, Patrizia

    2014-08-01

    We first briefly present the last version of the Software Package AIRY, version 6.1, a CAOS-based tool which includes various deconvolution methods, accelerations, regularizations, super-resolution, boundary effects reduction, point-spread function extraction/extrapolation, stopping rules, and constraints in the case of iterative blind deconvolution (IBD). Then, we focus on a new formulation of our Strehl-constrained IBD, here quantitatively compared to the original formulation for simulated near-infrared data of an 8-m class telescope equipped with adaptive optics (AO), showing their equivalence. Next, we extend the application of the original method to the visible domain with simulated data of an AO-equipped 1.5-m telescope, testing also the robustness of the method with respect to the Strehl ratio estimation.

  16. Spherical Deconvolution of Multichannel Diffusion MRI Data with Non-Gaussian Noise Models and Spatial Regularization

    PubMed Central

    Canales-Rodríguez, Erick J.; Caruyer, Emmanuel; Aja-Fernández, Santiago; Radua, Joaquim; Yurramendi Mendizabal, Jesús M.; Iturria-Medina, Yasser; Melie-García, Lester; Alemán-Gómez, Yasser; Thiran, Jean-Philippe; Sarró, Salvador; Pomarol-Clotet, Edith; Salvador, Raymond

    2015-01-01

    Spherical deconvolution (SD) methods are widely used to estimate the intra-voxel white-matter fiber orientations from diffusion MRI data. However, while some of these methods assume a zero-mean Gaussian distribution for the underlying noise, its real distribution is known to be non-Gaussian and to depend on many factors such as the number of coils and the methodology used to combine multichannel MRI signals. Indeed, the two prevailing methods for multichannel signal combination lead to noise patterns better described by Rician and noncentral Chi distributions. Here we develop a Robust and Unbiased Model-BAsed Spherical Deconvolution (RUMBA-SD) technique, intended to deal with realistic MRI noise, based on a Richardson-Lucy (RL) algorithm adapted to Rician and noncentral Chi likelihood models. To quantify the benefits of using proper noise models, RUMBA-SD was compared with dRL-SD, a well-established method based on the RL algorithm for Gaussian noise. Another aim of the study was to quantify the impact of including a total variation (TV) spatial regularization term in the estimation framework. To do this, we developed TV spatially-regularized versions of both RUMBA-SD and dRL-SD algorithms. The evaluation was performed by comparing various quality metrics on 132 three-dimensional synthetic phantoms involving different inter-fiber angles and volume fractions, which were contaminated with noise mimicking patterns generated by data processing in multichannel scanners. The results demonstrate that the inclusion of proper likelihood models leads to an increased ability to resolve fiber crossings with smaller inter-fiber angles and to better detect non-dominant fibers. The inclusion of TV regularization dramatically improved the resolution power of both techniques. The above findings were also verified in human brain data. PMID:26470024

  17. Deconvolution and analysis of wide-angle longwave radiation data from Nimbus 6 Earth radiation budget experiment for the first year

    NASA Technical Reports Server (NTRS)

    Bess, T. D.; Green, R. N.; Smith, G. L.

    1980-01-01

    One year of longwave radiation data from July 1975 through June 1976 from the Nimbus 6 satellite Earth radiation budget experiment is analyzed by representing the radiation field by a spherical harmonic expansion. The data are from the wide field of view instrument. Contour maps of the longwave radiation field and spherical harmonic coefficients to degree 12 and order 12 are presented for a 12 month data period.
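
    Representing a field on the sphere by a truncated spherical harmonic expansion, as done here to degree and order 12, amounts to a linear least-squares fit. The numpy-only sketch below limits the expansion to degree 2 and writes the real harmonics in their Cartesian polynomial forms to stay self-contained; the "radiation field" being fit and its coefficient magnitudes are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Scattered sample points on the unit sphere (stand-ins for satellite
# measurement locations).
p = rng.normal(size=(500, 3))
p /= np.linalg.norm(p, axis=1, keepdims=True)
x, y, z = p.T

# Real spherical harmonics up to degree 2, unnormalized Cartesian forms:
# degree 0: 1; degree 1: x, y, z; degree 2: xy, xz, yz, x^2-y^2, 3z^2-1.
basis = np.column_stack([np.ones_like(x), x, y, z,
                         x * y, x * z, y * z, x**2 - y**2, 3 * z**2 - 1])

# Synthetic field: constant + zonal dipole + zonal quadrupole terms,
# with W/m^2-style magnitudes, plus measurement noise.
true_coef = np.zeros(9)
true_coef[[0, 3, 8]] = [240.0, 15.0, -20.0]
field = basis @ true_coef + rng.normal(0, 1.0, size=len(x))

# Least-squares fit recovers the expansion coefficients.
coef, *_ = np.linalg.lstsq(basis, field, rcond=None)
print(np.round(coef[[0, 3, 8]], 1))
```

    A degree-12 expansion works the same way, just with (12+1)^2 = 169 basis columns, which is consistent with the few hundred resolvable pieces of information mentioned in the next record.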

  18. Nimbus 7 earth radiation budget wide field of view climate data set improvement. I - The earth albedo from deconvolution of shortwave measurements

    NASA Technical Reports Server (NTRS)

    Hucek, Richard R.; Ardanuy, Philip E.; Kyle, H. Lee

    1987-01-01

    A deconvolution method for extracting the top of the atmosphere (TOA) mean, daily albedo field from a set of wide-FOV (WFOV) shortwave radiometer measurements is proposed. The method is based on constructing a synthetic measurement for each satellite observation. The albedo field is represented as a truncated series of spherical harmonic functions, and these linear equations are presented. Simulation studies were conducted to determine the sensitivity of the method. It is observed that a maximum of about 289 pieces of data can be extracted from a set of Nimbus 7 WFOV satellite measurements. The albedos derived using the deconvolution method are compared with albedos derived using the WFOV archival method; the developed albedo field achieved a 20 percent reduction in the global rms regional reflected flux density errors. The deconvolution method is applied to estimate the mean, daily average TOA albedo field for January 1983. A strong and extensive albedo maximum (0.42), which corresponds to the El Nino/Southern Oscillation event of 1982-1983, is detected over the south central Pacific Ocean.

  19. A spherical harmonic approach for the determination of HCP texture from ultrasound: A solution to the inverse problem

    NASA Astrophysics Data System (ADS)

    Lan, Bo; Lowe, Michael J. S.; Dunne, Fionn P. E.

    2015-10-01

    A new spherical convolution approach is presented which couples the HCP single-crystal wave speed (the kernel function) with the polycrystal c-axis pole distribution function to give the resultant polycrystal wave speed response. The three functions are expressed as spherical harmonic expansions, enabling application of the deconvolution technique so that any one of the three can be determined from knowledge of the other two. Hence, the forward problem of determining polycrystal wave speed from knowledge of the single-crystal wave speed response and the polycrystal pole distribution has been solved for a broad range of experimentally representative HCP polycrystal textures. The technique provides a near-perfect representation of the sensitivity of wave speed to polycrystal texture as well as quantitative prediction of polycrystal wave speed. More importantly, a solution to the inverse problem is presented in which texture, as a c-axis distribution function, is determined from knowledge of the kernel function and the polycrystal wave speed response. It is also explained why it has been widely reported in the literature that only texture coefficients up to 4th degree may be obtained from ultrasonic measurements. Finally, the deconvolution approach presented provides the potential for the measurement of polycrystal texture from ultrasonic wave speed measurements.
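
    The deconvolution step described here has a simple structure in coefficient space: for an axially symmetric kernel, the Funk-Hecke theorem turns spherical convolution into per-degree multiplication of harmonic coefficients, so the texture coefficients follow by division. All numbers below are invented toy values, not the paper's measured kernel or textures.

```python
import numpy as np

max_degree = 4
# Assumed kernel coefficients k_l (single-crystal response), one per degree;
# odd degrees vanish by the antipodal symmetry of wave propagation.
k = np.array([1.0, 0.0, 0.45, 0.0, 0.12])

# Hypothetical texture (c-axis distribution) coefficients f_{lm}, stored as
# a ragged list: degree l carries 2l+1 orders m.
f = [np.array([1.0]),
     np.zeros(3),
     np.array([0.1, 0.0, 0.3, 0.0, -0.2]),
     np.zeros(7),
     np.array([0.05, 0, 0, 0.02, 0, 0, -0.01, 0, 0.04])]

# Forward problem: measured wave-speed coefficients c_{lm} = k_l * f_{lm}.
c = [k[l] * f[l] for l in range(max_degree + 1)]

# Inverse problem: recover the texture wherever the kernel has support.
f_rec = [c[l] / k[l] if k[l] != 0 else np.zeros(2 * l + 1)
         for l in range(max_degree + 1)]
```

    Degrees at which the kernel coefficient vanishes are invisible to the measurement, which is one way to see why only texture coefficients up to the 4th degree can be recovered from ultrasonic wave speed.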

  20. Determination of design and operation parameters for upper atmospheric research instrumentation to yield optimum resolution with deconvolution

    NASA Technical Reports Server (NTRS)

    Ioup, George E.; Ioup, Juliette W.

    1991-01-01

    The final report for work on the determination of design and operation parameters for upper atmospheric research instrumentation to yield optimum resolution with deconvolution is presented. Papers and theses prepared during the reporting period are included. Among the research results, a methodology was developed to determine design and operation parameters that minimize error when deconvolution is included in the data analysis. An error surface is plotted versus the signal-to-noise ratio (SNR) and all parameters of interest. Instrumental characteristics determine a curve in this space. The SNR and parameter values giving the projection from the curve to the surface that corresponds to the smallest error are the optimum values. These values are constrained by the curve and so will not necessarily correspond to an absolute minimum in the error surface.

  1. Application of constrained deconvolution technique for reconstruction of electron bunch profile with strongly non-Gaussian shape

    NASA Astrophysics Data System (ADS)

    Geloni, G.; Saldin, E. L.; Schneidmiller, E. A.; Yurkov, M. V.

    2004-08-01

    An effective and practical technique based on detection of the coherent synchrotron radiation (CSR) spectrum can be used to characterize the profile function of ultra-short bunches. The CSR spectrum measurement has an important limitation: no spectral phase information is available, and in general the complete profile function cannot be obtained. In this paper we propose a constrained deconvolution method for bunch profile reconstruction based on a priori information about the formation of the electron bunch. Application of the method is illustrated with a practically important example: a bunch formed in a single bunch compressor. Downstream of the bunch compressor, the bunch charge distribution is strongly non-Gaussian, with a narrow leading peak and a long tail. The longitudinal bunch distribution is derived by measuring the bunch tail constant with a streak camera and by using a priori available information about the profile function.

  2. Simultaneous Denoising, Deconvolution, and Demixing of Calcium Imaging Data

    PubMed Central

    Pnevmatikakis, Eftychios A.; Soudry, Daniel; Gao, Yuanjun; Machado, Timothy A.; Merel, Josh; Pfau, David; Reardon, Thomas; Mu, Yu; Lacefield, Clay; Yang, Weijian; Ahrens, Misha; Bruno, Randy; Jessell, Thomas M.; Peterka, Darcy S.; Yuste, Rafael; Paninski, Liam

    2016-01-01

    SUMMARY We present a modular approach for analyzing calcium imaging recordings of large neuronal ensembles. Our goal is to simultaneously identify the locations of the neurons, demix spatially overlapping components, and denoise and deconvolve the spiking activity from the slow dynamics of the calcium indicator. Our approach relies on a constrained nonnegative matrix factorization that expresses the spatiotemporal fluorescence activity as the product of a spatial matrix that encodes the spatial footprint of each neuron in the optical field and a temporal matrix that characterizes the calcium concentration of each neuron over time. This framework is combined with a novel constrained deconvolution approach that extracts estimates of neural activity from fluorescence traces, to create a spatiotemporal processing algorithm that requires minimal parameter tuning. We demonstrate the general applicability of our method by applying it to in vitro and in vivo multineuronal imaging data, whole-brain light-sheet imaging data, and dendritic imaging data. PMID:26774160
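
    The factorization at the heart of this approach can be illustrated with plain nonnegative matrix factorization (Lee-Seung multiplicative updates) on synthetic data. The paper's actual method adds spatial, temporal and deconvolution constraints that this sketch omits, and all sizes below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic movie: Y (pixels x frames) generated as nonnegative spatial
# footprints A_true times nonnegative temporal traces C_true, plus noise.
pixels, frames, neurons = 100, 200, 3
A_true = rng.random((pixels, neurons)) * (rng.random((pixels, neurons)) > 0.7)
C_true = np.abs(rng.standard_normal((neurons, frames)))
Y = A_true @ C_true + 0.01 * rng.random((pixels, frames))

# Plain NMF via Lee-Seung multiplicative updates: nonnegativity is
# preserved automatically because the factors are only ever rescaled.
A = rng.random((pixels, neurons)) + 0.1
C = rng.random((neurons, frames)) + 0.1
for _ in range(200):
    C *= (A.T @ Y) / (A.T @ A @ C + 1e-12)
    A *= (Y @ C.T) / (A @ C @ C.T + 1e-12)

residual = np.linalg.norm(Y - A @ C) / np.linalg.norm(Y)
print(residual)
```

    In the paper, the temporal factor C is further tied to spike estimates through a constrained deconvolution of the calcium indicator dynamics, which this sketch does not attempt.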

  3. Multichannel Poisson denoising and deconvolution on the sphere: application to the Fermi Gamma-ray Space Telescope

    NASA Astrophysics Data System (ADS)

    Schmitt, J.; Starck, J. L.; Casandjian, J. M.; Fadili, J.; Grenier, I.

    2012-10-01

    A multiscale representation-based denoising method for spherical data contaminated with Poisson noise, the multiscale variance stabilizing transform on the sphere (MS-VSTS), has been previously proposed. This paper first extends MS-VSTS to spherical two-plus-one-dimensional (2D-1D) data, where the first two dimensions are longitude and latitude and the third is a meaningful physical index such as energy or time. We then introduce a novel multichannel deconvolution built upon the 2D-1D MS-VSTS, which removes both the noise and the blur introduced by the point spread function (PSF) in each energy (or time) band. The method is applied to simulated data from the Large Area Telescope (LAT), the main instrument of the Fermi Gamma-ray Space Telescope, which detects high-energy gamma-rays over a very wide energy range (from 20 MeV to more than 300 GeV) and whose PSF is strongly energy-dependent (from about 3.5 at 100 MeV to less than 0.1 at 10 GeV).
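
    A variance-stabilizing transform is the first step of the pipeline above. The classical Anscombe transform, the simplest VST for Poisson data (not the multiscale MS-VSTS itself), shows the principle: after the transform, the variance is approximately 1 regardless of intensity, so Gaussian denoisers become applicable.

```python
import numpy as np

rng = np.random.default_rng(42)

# Anscombe transform: maps Poisson counts to values with approximately
# unit, intensity-independent variance.
def anscombe(x):
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

# Raw Poisson variance grows with the mean; the stabilized variance stays
# close to 1 across intensities.
stabilized = {}
for lam in (5.0, 20.0, 100.0):
    counts = rng.poisson(lam, size=200_000)
    stabilized[lam] = anscombe(counts).var()
    print(lam, counts.var(), stabilized[lam])
```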

  4. Comment on ‘A novel method for fast and robust estimation of fluorescence decay dynamics using constrained least-square deconvolution with Laguerre expansion’

    NASA Astrophysics Data System (ADS)

    Zhang, Yongliang; Day-Uei Li, David

    2017-02-01

    This comment is to clarify that Poisson noise instead of Gaussian noise shall be included to assess the performances of least-squares deconvolution with Laguerre expansion (LSD-LE) for analysing fluorescence lifetime imaging data obtained from time-resolved systems. Moreover, we also corrected an equation in the paper. As the LSD-LE method is rapid and has the potential to be widely applied not only for diagnostic but for wider bioimaging applications, it is desirable to have precise noise models and equations.

  5. Dipy, a library for the analysis of diffusion MRI data.

    PubMed

    Garyfallidis, Eleftherios; Brett, Matthew; Amirbekian, Bagrat; Rokem, Ariel; van der Walt, Stefan; Descoteaux, Maxime; Nimmo-Smith, Ian

    2014-01-01

    Diffusion Imaging in Python (Dipy) is a free and open source software project for the analysis of data from diffusion magnetic resonance imaging (dMRI) experiments. dMRI is an application of MRI that can be used to measure structural features of brain white matter. Many methods have been developed to use dMRI data to model the local configuration of white matter nerve fiber bundles and infer the trajectory of bundles connecting different parts of the brain. Dipy gathers implementations of many different methods in dMRI, including: diffusion signal pre-processing; reconstruction of diffusion distributions in individual voxels; fiber tractography and fiber track post-processing, analysis and visualization. Dipy aims to provide transparent implementations for all the different steps of dMRI analysis with a uniform programming interface. We have implemented classical signal reconstruction techniques, such as the diffusion tensor model and deterministic fiber tractography. In addition, cutting edge novel reconstruction techniques are implemented, such as constrained spherical deconvolution and diffusion spectrum imaging (DSI) with deconvolution, as well as methods for probabilistic tracking and original methods for tractography clustering. Many additional utility functions are provided to calculate various statistics, informative visualizations, as well as file-handling routines to assist in the development and use of novel techniques. In contrast to many other scientific software projects, Dipy is not being developed by a single research group. Rather, it is an open project that encourages contributions from any scientist/developer through GitHub and open discussions on the project mailing list. Consequently, Dipy today has an international team of contributors, spanning seven different academic institutions in five countries and three continents, which is still growing.
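
    The idea behind constrained spherical deconvolution can be sketched independently of Dipy's API: model the measured signal as a nonnegative combination of single-fiber responses over candidate orientations, so that crossing fibers appear as separate peaks in the weights. Everything below (response model, b-value, diffusivities, direction counts) is a hypothetical toy, not Dipy's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Single-fiber response along unit vector v for gradient directions g,
# using a toy axially symmetric tensor (b in ms/um^2, diffusivities in
# um^2/ms; values are illustrative).
def tensor_signal(g, v, b=3.0, lam_par=1.7, lam_perp=0.2):
    c = g @ v
    return np.exp(-b * (lam_perp + (lam_par - lam_perp) * c**2))

# Random gradient directions and candidate fiber orientations.
g = rng.standard_normal((64, 3))
g /= np.linalg.norm(g, axis=1, keepdims=True)
cand = rng.standard_normal((200, 3))
cand /= np.linalg.norm(cand, axis=1, keepdims=True)

# Simulated 90-degree crossing of two equal fiber populations.
f1, f2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
signal = 0.5 * tensor_signal(g, f1) + 0.5 * tensor_signal(g, f2)

# Deconvolution as nonnegative least squares over the candidate responses,
# solved by projected gradient descent. The nonnegativity constraint plays
# the role of CSD's suppression of spurious negative FOD lobes.
D = np.stack([tensor_signal(g, v) for v in cand], axis=1)
w = np.zeros(len(cand))
step = 1.0 / np.linalg.norm(D.T @ D, 2)
for _ in range(5000):
    w = np.clip(w - step * (D.T @ (D @ w - signal)), 0.0, None)

best = cand[np.argmax(w)]
print(abs(best @ f1), abs(best @ f2))
```

    The largest weight lands on a candidate orientation close to one of the two true fibers, which is exactly the behavior tractography algorithms rely on when following peaks of the fiber ODF.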

  6. Dipy, a library for the analysis of diffusion MRI data

    PubMed Central

    Garyfallidis, Eleftherios; Brett, Matthew; Amirbekian, Bagrat; Rokem, Ariel; van der Walt, Stefan; Descoteaux, Maxime; Nimmo-Smith, Ian

    2014-01-01

    Diffusion Imaging in Python (Dipy) is a free and open source software project for the analysis of data from diffusion magnetic resonance imaging (dMRI) experiments. dMRI is an application of MRI that can be used to measure structural features of brain white matter. Many methods have been developed to use dMRI data to model the local configuration of white matter nerve fiber bundles and infer the trajectory of bundles connecting different parts of the brain. Dipy gathers implementations of many different methods in dMRI, including: diffusion signal pre-processing; reconstruction of diffusion distributions in individual voxels; fiber tractography and fiber track post-processing, analysis and visualization. Dipy aims to provide transparent implementations for all the different steps of dMRI analysis with a uniform programming interface. We have implemented classical signal reconstruction techniques, such as the diffusion tensor model and deterministic fiber tractography. In addition, cutting edge novel reconstruction techniques are implemented, such as constrained spherical deconvolution and diffusion spectrum imaging (DSI) with deconvolution, as well as methods for probabilistic tracking and original methods for tractography clustering. Many additional utility functions are provided to calculate various statistics, informative visualizations, as well as file-handling routines to assist in the development and use of novel techniques. In contrast to many other scientific software projects, Dipy is not being developed by a single research group. Rather, it is an open project that encourages contributions from any scientist/developer through GitHub and open discussions on the project mailing list. Consequently, Dipy today has an international team of contributors, spanning seven different academic institutions in five countries and three continents, which is still growing. PMID:24600385

  7. Blind Deconvolution for Distributed Parameter Systems with Unbounded Input and Output and Determining Blood Alcohol Concentration from Transdermal Biosensor Data.

    PubMed

    Rosen, I G; Luczak, Susan E; Weiss, Jordan

    2014-03-15

    We develop a blind deconvolution scheme for input-output systems described by distributed parameter systems with boundary input and output. An abstract functional analytic theory based on results for the linear quadratic control of infinite dimensional systems with unbounded input and output operators is presented. The blind deconvolution problem is then reformulated as a series of constrained linear and nonlinear optimization problems involving infinite dimensional dynamical systems. A finite dimensional approximation and convergence theory is developed. The theory is applied to the problem of estimating blood or breath alcohol concentration (respectively, BAC or BrAC) from biosensor-measured transdermal alcohol concentration (TAC) in the field. A distributed parameter model with boundary input and output is proposed for the transdermal transport of ethanol from the blood through the skin to the sensor. The problem of estimating BAC or BrAC from the TAC data is formulated as a blind deconvolution problem. A scheme to identify distinct drinking episodes in TAC data based on a Hodrick-Prescott filter is discussed. Numerical results involving actual patient data are presented.
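
    A much-simplified, non-blind analogue of the estimation step can be sketched as Tikhonov-regularized deconvolution: given a known transdermal kernel (hypothetical here, a simple first-order decay), recover the input concentration from a noisy measured output. The paper's actual scheme also estimates the kernel itself and works with infinite-dimensional boundary-input systems.

```python
import numpy as np

# Hypothetical first-order transdermal kernel: the measured output (e.g.
# TAC) is a causal convolution of the input (e.g. BrAC) with this kernel.
n, dt = 200, 0.05
t = np.arange(n) * dt
kernel = np.exp(-t / 1.5)
kernel /= kernel.sum()

# Lower-triangular Toeplitz forward operator: output = K @ input.
K = np.array([[kernel[i - j] if i >= j else 0.0 for j in range(n)]
              for i in range(n)])

# Synthetic "drinking episode" input and noisy measured output.
rng = np.random.default_rng(0)
u_true = np.exp(-0.5 * ((t - 3.0) / 0.7) ** 2)
y = K @ u_true + 0.005 * rng.standard_normal(n)

# Tikhonov-regularized deconvolution: solve (K^T K + alpha I) u = K^T y.
alpha = 1e-3
u_hat = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ y)

rel_err = np.linalg.norm(u_hat - u_true) / np.linalg.norm(u_true)
print(rel_err)
```

    The regularization parameter alpha trades noise amplification against bias; in the blind setting, the kernel entries become unknowns constrained by the optimization as well.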

  8. Noise deconvolution based on the L1-metric and decomposition of discrete distributions of postsynaptic responses.

    PubMed

    Astrelin, A V; Sokolov, M V; Behnisch, T; Reymann, K G; Voronin, L L

    1997-04-25

    A statistical approach to analysis of amplitude fluctuations of postsynaptic responses is described. This includes (1) using an L1-metric in the space of distribution functions for minimisation, with application of linear programming methods, to decompose amplitude distributions into a convolution of Gaussian and discrete distributions; (2) deconvolution of the resulting discrete distribution with determination of the release probabilities and the quantal amplitude for cases with a small number (< 5) of discrete components. The methods were tested against simulated data over a range of sample sizes and signal-to-noise ratios which mimicked those observed in physiological experiments. In computer simulation experiments, comparisons were made with other methods of 'unconstrained' (generalized) and constrained reconstruction of discrete components from convolutions. The simulation results provided additional criteria for improving the solutions to overcome 'over-fitting phenomena' and to constrain the number of components with small probabilities. Application of the programme to recordings from hippocampal neurones demonstrated its usefulness for the analysis of amplitude distributions of postsynaptic responses.

  9. Parameter estimation applied to Nimbus 6 wide-angle longwave radiation measurements

    NASA Technical Reports Server (NTRS)

    Green, R. N.; Smith, G. L.

    1978-01-01

    A parameter estimation technique was used to analyze the August 1975 Nimbus 6 Earth radiation budget data to demonstrate the concept of deconvolution. The longwave radiation field at the top of the atmosphere is defined from satellite data by a fifth degree and fifth order spherical harmonic representation. The variations of the major features of the radiation field are defined by analyzing the data separately for each two-day duty cycle. A table of coefficient values for each spherical harmonic representation is given along with global mean, gradients, degree variances, and contour plots. In addition, the entire data set is analyzed to define the monthly average radiation field.

  10. Distinct contributions of the fornix and inferior longitudinal fasciculus to episodic and semantic autobiographical memory.

    PubMed

    Hodgetts, Carl J; Postans, Mark; Warne, Naomi; Varnava, Alice; Lawrence, Andrew D; Graham, Kim S

    2017-09-01

    Autobiographical memory (AM) is multifaceted, incorporating the vivid retrieval of contextual detail (episodic AM), together with semantic knowledge that infuses meaning and coherence into past events (semantic AM). While neuropsychological evidence highlights a role for the hippocampus and anterior temporal lobe (ATL) in episodic and semantic AM, respectively, it is unclear whether these constitute dissociable large-scale AM networks. We used high angular resolution diffusion-weighted imaging and constrained spherical deconvolution-based tractography to assess white matter microstructure in 27 healthy young adult participants who were asked to recall past experiences using word cues. Inter-individual variation in the microstructure of the fornix (the main hippocampal input/output pathway) related to the amount of episodic, but not semantic, detail in AMs - independent of memory age. Conversely, microstructure of the inferior longitudinal fasciculus, linking occipitotemporal regions with ATL, correlated with semantic, but not episodic, AMs. Further, these significant correlations remained when controlling for hippocampal and ATL grey matter volume, respectively. This striking correlational double dissociation supports the view that distinct, large-scale distributed brain circuits underpin context and concepts in AM. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  11. AIDA: an adaptive image deconvolution algorithm with application to multi-frame and three-dimensional data

    PubMed Central

    Hom, Erik F. Y.; Marchis, Franck; Lee, Timothy K.; Haase, Sebastian; Agard, David A.; Sedat, John W.

    2011-01-01

    We describe an adaptive image deconvolution algorithm (AIDA) for myopic deconvolution of multi-frame and three-dimensional data acquired through astronomical and microscopic imaging. AIDA is a reimplementation and extension of the MISTRAL method developed by Mugnier and co-workers and shown to yield object reconstructions with excellent edge preservation and photometric precision [J. Opt. Soc. Am. A 21, 1841 (2004)]. Written in Numerical Python with calls to a robust constrained conjugate gradient method, AIDA has significantly improved run times over the original MISTRAL implementation. Included in AIDA is a scheme to automatically balance maximum-likelihood estimation and object regularization, which significantly decreases the amount of time and effort needed to generate satisfactory reconstructions. We validated AIDA using synthetic data spanning a broad range of signal-to-noise ratios and image types and demonstrated the algorithm to be effective for experimental data from adaptive optics–equipped telescope systems and wide-field microscopy. PMID:17491626

  12. Deconvoluting complex structural histories archived in brittle fault zones

    NASA Astrophysics Data System (ADS)

    Viola, G.; Scheiber, T.; Fredin, O.; Zwingmann, H.; Margreth, A.; Knies, J.

    2016-11-01

    Brittle deformation can saturate the Earth's crust with faults and fractures in an apparently chaotic fashion. The details of brittle deformational histories, and their implications for, for example, seismotectonics and landscape evolution, can thus be difficult to untangle. Fortunately, brittle faults archive subtle details of the stress and physical/chemical conditions at the time of initial strain localization and eventual subsequent slip(s). Hence, reading those archives offers the possibility to deconvolute protracted brittle deformation. Here we report K-Ar isotopic dating of synkinematic/authigenic illite coupled with structural analysis to illustrate an innovative approach to the high-resolution deconvolution of brittle faulting and fluid-driven alteration of a reactivated fault in western Norway. Permian extension preceded coaxial reactivation in the Jurassic and Early Cretaceous fluid-related alteration with pervasive clay authigenesis. This approach represents important progress towards time-constrained structural models, where illite characterization and K-Ar analysis are a fundamental tool to date faulting and alteration in crystalline rocks.

  13. Deconvolution of astronomical images using SOR with adaptive relaxation.

    PubMed

    Vorontsov, S V; Strakhov, V N; Jefferies, S M; Borelli, K J

    2011-07-04

    We address the potential performance of the successive overrelaxation technique (SOR) in image deconvolution, focusing our attention on the restoration of astronomical images distorted by atmospheric turbulence. SOR is the classical Gauss-Seidel iteration, supplemented with relaxation. As indicated by earlier work, the convergence properties of SOR, and its ultimate performance in the deconvolution of blurred and noisy images, can be made competitive with other iterative techniques, including conjugate gradients, by a proper choice of the relaxation parameter. The question of how to choose the relaxation parameter, however, remained open, and in practical work one had to rely on experimentation. In this paper, using constructive (rather than exact) arguments, we suggest a simple strategy for choosing the relaxation parameter and for updating its value in consecutive iterations to optimize the performance of the SOR algorithm (and its positivity-constrained version, +SOR) at finite iteration counts. We suggest an extension of the algorithm to the notoriously difficult problem of "blind" deconvolution, where both the true object and the point-spread function have to be recovered from the blurred image. We report the results of numerical inversions with artificial and real data, where the algorithm is compared with techniques based on conjugate gradients. In all of our experiments +SOR provides the highest quality results. In addition +SOR is found to be able to detect moderately small changes in the true object between separate data frames: an important quality for multi-frame blind deconvolution where stationarity of the object is a necessity.
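
    The underlying iteration is classical SOR: Gauss-Seidel sweeps with a relaxation parameter omega. The sketch below applies it to a small symmetric positive-definite system, a 1D Laplacian standing in for a deblurring normal matrix; the paper's adaptive omega-update strategy is not reproduced here.

```python
import numpy as np

# Classical SOR: Gauss-Seidel with relaxation parameter omega. For
# 1 < omega < 2 (over-relaxation), convergence on suitable systems is much
# faster than plain Gauss-Seidel (omega = 1).
def sor(A, b, omega, iters):
    x = np.zeros_like(b)
    n = len(b)
    for _ in range(iters):
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - s) / A[i, i]
    return x

# 1D Laplacian test system (a toy stand-in for deblurring normal equations).
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = sor(A, b, omega=1.9, iters=500)
print(np.linalg.norm(A @ x - b))
```

    For this matrix the near-optimal omega is about 1.88; the paper's contribution is a practical rule for choosing and updating omega when, unlike here, the optimum is not known in advance.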

  14. Early development of structural networks and the impact of prematurity on brain connectivity.

    PubMed

    Batalle, Dafnis; Hughes, Emer J; Zhang, Hui; Tournier, J-Donald; Tusor, Nora; Aljabar, Paul; Wali, Luqman; Alexander, Daniel C; Hajnal, Joseph V; Nosarti, Chiara; Edwards, A David; Counsell, Serena J

    2017-04-01

    Preterm infants are at high risk of neurodevelopmental impairment, which may be due to altered development of brain connectivity. We aimed to (i) assess structural brain development from 25 to 45 weeks gestational age (GA) using graph theoretical approaches and (ii) test the hypothesis that preterm birth results in altered white matter network topology. Sixty-five infants underwent MRI between 25+3 and 45+6 weeks GA. Structural networks were constructed using constrained spherical deconvolution tractography and were weighted by measures of white matter microstructure (fractional anisotropy, neurite density and orientation dispersion index). We observed regional differences in brain maturation, with connections to and from deep grey matter showing the most rapid developmental changes during this period. Intra-frontal, frontal to cingulate, frontal to caudate and inter-hemispheric connections matured more slowly. We demonstrated a core of key connections that was not affected by GA at birth. However, local connectivity involving thalamus, cerebellum, superior frontal lobe, cingulate gyrus and short range cortico-cortical connections was related to the degree of prematurity and contributed to altered global topology of the structural brain network. The relative preservation of core connections at the expense of local connections may support more effective use of impaired white matter reserve following preterm birth. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
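
    Global topology measures like those used above can be computed directly from a weighted adjacency matrix. The sketch below computes global efficiency (mean inverse shortest-path length) with Floyd-Warshall on a hypothetical 4-node toy connectome; real connectome pipelines use far larger matrices and dedicated toolboxes.

```python
import numpy as np

# Global efficiency of a weighted graph: the mean inverse shortest-path
# length over all node pairs, with connection weights converted to
# distances (stronger connection = shorter path).
def global_efficiency(weights):
    dist = np.full(weights.shape, np.inf)
    nz = weights > 0
    dist[nz] = 1.0 / weights[nz]
    np.fill_diagonal(dist, 0.0)
    n = len(dist)
    for k in range(n):                       # Floyd-Warshall relaxation
        dist = np.minimum(dist, dist[:, [k]] + dist[[k], :])
    off_diag = ~np.eye(n, dtype=bool)
    return (1.0 / dist[off_diag]).sum() / (n * (n - 1))

# Hypothetical 4-node chain connectome with arbitrary weights.
W = np.array([[0.0, 2.0, 0.0, 0.0],
              [2.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 4.0],
              [0.0, 0.0, 4.0, 0.0]])
print(global_efficiency(W))
```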

  15. Faces of Pluto

    NASA Image and Video Library

    2015-06-11

    These images, taken by NASA's New Horizons' Long Range Reconnaissance Imager (LORRI), show four different "faces" of Pluto as it rotates about its axis with a period of 6.4 days. All the images have been rotated to align Pluto's rotational axis with the vertical direction (up-down) on the figure, as depicted schematically in the upper left. From left to right, the images were taken when Pluto's central longitude was 17, 63, 130, and 243 degrees, respectively. The date of each image, the distance of the New Horizons spacecraft from Pluto, and the number of days until Pluto closest approach are all indicated in the figure. These images show dramatic variations in Pluto's surface features as it rotates. When a very large, dark region near Pluto's equator appears near the limb, it gives Pluto a distinct, but false, non-spherical appearance. Pluto is known to be almost perfectly spherical from previous data. These images are displayed at four times the native LORRI image size, and have been processed using a method called deconvolution, which sharpens the original images to enhance features on Pluto. Deconvolution can occasionally introduce "false" details, so the finest details in these pictures will need to be confirmed by images taken from closer range in the next few weeks. All of the images are displayed using the same brightness scale. http://photojournal.jpl.nasa.gov/catalog/PIA19686

  16. Atlasing the frontal lobe connections and their variability due to age and education: a spherical deconvolution tractography study.

    PubMed

    Rojkova, K; Volle, E; Urbanski, M; Humbert, F; Dell'Acqua, F; Thiebaut de Schotten, M

    2016-04-01

    In neuroscience, there is a growing consensus that higher cognitive functions may be supported by distributed networks involving different cerebral regions, rather than by single brain areas. Communication within these networks is mediated by white matter tracts and is particularly prominent in the frontal lobes for the control and integration of information. However, the detailed mapping of frontal connections remains incomplete, albeit crucial to an increased understanding of these cognitive functions. Based on 47 high-resolution diffusion-weighted imaging datasets (age range 22-71 years), we built a statistical normative atlas of the frontal lobe connections in stereotaxic space, using state-of-the-art spherical deconvolution tractography. We dissected 55 tracts including U-shaped fibers. We further characterized these tracts by measuring their correlation with age and education level. We reported age-related differences in the microstructural organization of several, specific frontal fiber tracts, but found no correlation with education level. Future voxel-based analyses, such as voxel-based morphometry or tract-based spatial statistics studies, may benefit from our atlas by identifying the tracts and networks involved in frontal functions. Our atlas will also build the capacity of clinicians to further understand the mechanisms involved in brain recovery and plasticity, as well as assist clinicians in the diagnosis of disconnection or abnormality within specific tracts of individual patients with various brain diseases.

  17. Wavespace-Based Coherent Deconvolution

    NASA Technical Reports Server (NTRS)

    Bahr, Christopher J.; Cattafesta, Louis N., III

    2012-01-01

    Array deconvolution is commonly used in aeroacoustic analysis to remove the influence of a microphone array's point spread function from a conventional beamforming map. Unfortunately, the majority of deconvolution algorithms assume that the acoustic sources in a measurement are incoherent, which can be problematic for some aeroacoustic phenomena with coherent, spatially-distributed characteristics. While several algorithms have been proposed to handle coherent sources, some are computationally intractable for many problems while others require restrictive assumptions about the source field. Newer generalized inverse techniques hold promise, but are still under investigation for general use. An alternate coherent deconvolution method is proposed based on a wavespace transformation of the array data. Wavespace analysis offers advantages over curved-wave array processing, such as providing an explicit shift-invariance in the convolution of the array sampling function with the acoustic wave field. However, usage of the wavespace transformation assumes the acoustic wave field is accurately approximated as a superposition of plane wave fields, regardless of true wavefront curvature. The wavespace technique leverages Fourier transforms to quickly evaluate a shift-invariant convolution. The method is derived for and applied to ideal incoherent and coherent plane wave fields to demonstrate its ability to determine magnitude and relative phase of multiple coherent sources. Multi-scale processing is explored as a means of accelerating solution convergence. A case with a spherical wave front is evaluated. Finally, a trailing edge noise experiment case is considered. Results show the method successfully deconvolves incoherent, partially-coherent, and coherent plane wave fields to a degree necessary for quantitative evaluation. Curved wave front cases warrant further investigation. A potential extension to nearfield beamforming is proposed.
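
    The computational advantage cited above rests on the convolution theorem: a shift-invariant convolution can be evaluated quickly with Fourier transforms. A minimal 1D check, with arbitrary toy data standing in for the array sampling function and the wave field:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy sequences standing in for the array sampling function and the
# plane-wave field coefficients (values are arbitrary random data).
a = rng.standard_normal(128)
b = rng.standard_normal(128)

# Convolution theorem: zero-pad to avoid circular wrap-around, multiply
# spectra, and invert. The result equals direct linear convolution.
nfft = 256
fast = np.fft.irfft(np.fft.rfft(a, nfft) * np.fft.rfft(b, nfft), nfft)[:255]
direct = np.convolve(a, b)          # full linear convolution, length 255

print(np.max(np.abs(fast - direct)))
```

    The FFT route costs O(n log n) per convolution instead of O(n^2), which is what makes the repeated shift-invariant convolutions of a wavespace deconvolution loop affordable.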

  18. Fractal scaling laws of black carbon aerosol and their influence on spectral radiative properties

    NASA Astrophysics Data System (ADS)

    Tiwari, S.; Chakrabarty, R. K.; Heinson, W.

    2016-12-01

    Current estimates of the direct radiative forcing for black carbon (BC) aerosol span a poorly constrained range between 0.2 and 1 W m^-2. To improve this large uncertainty, tighter constraints need to be placed on BC's key wavelength-dependent optical properties, namely the absorption (MAC) and scattering (MSC) cross-sections per unit mass and the hemispherical upscatter fraction (β, a dimensionless scattering directionality parameter). These parameters are very sensitive to changes in particle morphology and complex refractive index n. Their interplay determines the magnitude of net positive or negative radiative forcing efficiencies. The current approach among climate modelers for estimating MAC and MSC values of BC is to calculate optical cross-sections assuming spherical particle morphology with a homogeneous, constant-valued refractive index in the visible solar spectrum. The β values are typically assumed to be constant across this spectrum. This approach, while computationally inexpensive and convenient, ignores the inherent fractal morphology of BC, its scaling behaviors, and the resulting optical properties. In this talk, I will present recent results from my laboratory on the determination of the fractal scaling laws of BC aggregate packing density and complex refractive index for sizes spanning three orders of magnitude, and their effects on the spectral (visible-infrared) scaling of MAC, MSC, and β values. Our experiments synergistically combined novel BC generation techniques, aggregation models, contact-free multi-wavelength optical measurements, and electron microscopy analysis. The scale dependence of n on aggregate size followed power-law exponents of -1.4 and -0.5 for sub- and super-micron size aggregates, respectively. The spherical Rayleigh-optics approximation limits, used by climate models for spectral extrapolation of BC optical cross-sections and deconvolution of multi-species mixing ratios, are redefined using the concept of the phase shift parameter. I will highlight the importance of size-dependent β values and their role in offsetting the strong light-absorbing nature of BC. Finally, the errors introduced in forcing-efficiency calculations of BC by assuming spherical homogeneous morphology will be evaluated.

  19. LES-Modeling of a Partially Premixed Flame using a Deconvolution Turbulence Closure

    NASA Astrophysics Data System (ADS)

    Wang, Qing; Wu, Hao; Ihme, Matthias

    2015-11-01

    The modeling of the turbulence/chemistry interaction in partially premixed and multi-stream combustion remains an outstanding issue. By extending a recently developed constrained minimum mean-square error deconvolution (CMMSED) method, the objective of this work is to develop a source-term closure for turbulent multi-stream combustion. In this method, the chemical source term is obtained from a three-stream flamelet model, and CMMSED is used as the closure model, thereby eliminating the need for presumed PDF-modeling. The model is applied to LES of a piloted turbulent jet flame with inhomogeneous inlets, and simulation results are compared with experiments. Comparisons with presumed PDF-methods are performed, and issues regarding resolution and conservation of the CMMSED method are examined. The authors acknowledge funding support from a Stanford Graduate Fellowship.

  20. Motor network efficiency and disability in multiple sclerosis

    PubMed Central

    Yaldizli, Özgür; Sethi, Varun; Muhlert, Nils; Liu, Zheng; Samson, Rebecca S.; Altmann, Daniel R.; Ron, Maria A.; Wheeler-Kingshott, Claudia A.M.; Miller, David H.; Chard, Declan T.

    2015-01-01

    Objective: To develop a composite MRI-based measure of motor network integrity, and determine if it explains disability better than conventional MRI measures in patients with multiple sclerosis (MS). Methods: Tract density imaging and constrained spherical deconvolution tractography were used to identify motor network connections in 22 controls. Fractional anisotropy (FA), magnetization transfer ratio (MTR), and normalized volume were computed in each tract in 71 people with relapse onset MS. Principal component analysis was used to distill the FA, MTR, and tract volume data into a single metric for each tract, which in turn was used to compute a composite measure of motor network efficiency (composite NE) using graph theory. Associations were investigated between the Expanded Disability Status Scale (EDSS) and the following MRI measures: composite motor NE, NE calculated using FA alone, FA averaged in the combined motor network tracts, brain T2 lesion volume, brain parenchymal fraction, normal-appearing white matter MTR, and cervical cord cross-sectional area. Results: In univariable analysis, composite motor NE explained 58% of the variation in EDSS in the whole MS group, more than twice that of the other MRI measures investigated. In a multivariable regression model, only composite NE and disease duration were independently associated with EDSS. Conclusions: A composite MRI measure of motor NE was able to predict disability substantially better than conventional non-network-based MRI measures. PMID:26320199

  1. The Gini coefficient: a methodological pilot study to assess fetal brain development employing postmortem diffusion MRI.

    PubMed

    Viehweger, Adrian; Riffert, Till; Dhital, Bibek; Knösche, Thomas R; Anwander, Alfred; Stepan, Holger; Sorge, Ina; Hirsch, Wolfgang

    2014-10-01

    Diffusion-weighted imaging (DWI) is important in the assessment of fetal brain development. However, it is clinically challenging and time-consuming to prepare neuromorphological examinations to assess real brain age and to detect abnormalities. To demonstrate that the Gini coefficient can be a simple, intuitive parameter for modelling fetal brain development. Postmortem fetal specimens (n = 28) were evaluated by DWI on a 3-T MRI scanner using 60 directions, 0.7-mm isotropic voxels and b-values of 0, 150 and 1,600 s/mm². Constrained spherical deconvolution (CSD) was used as the local diffusion model. Fractional anisotropy (FA), apparent diffusion coefficient (ADC) and complexity (CX) maps were generated. CX was defined as a novel diffusion metric. On the basis of these three parameters, the Gini coefficient was calculated. Study of fetal brain development in postmortem specimens was feasible using DWI. The Gini coefficient could be calculated for the combination of the three diffusion parameters. This multidimensional Gini coefficient correlated well with age (adjusted R² = 0.59) between the ages of 17 and 26 gestational weeks. We propose a new method that uses an economics concept, the Gini coefficient, to describe the whole brain with one simple and intuitive measure, which can be used to assess the brain's developmental state.
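
    The Gini coefficient itself is straightforward to compute from sorted values. A minimal sketch with toy inputs (not the study's diffusion data), using the standard sorted-cumulative-share formula:

```python
import numpy as np

# Gini coefficient of a nonnegative sample: 0 for perfectly equal values,
# approaching 1 as the distribution concentrates in a few entries.
# Computed via G = (n + 1 - 2 * sum_i(cum_i / cum_n)) / n on sorted data.
def gini(x):
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    cum = np.cumsum(x)
    return (n + 1 - 2 * (cum / cum[-1]).sum()) / n

print(gini([1, 1, 1, 1]))    # perfectly equal sample
print(gini([0, 0, 0, 10]))   # maximally unequal sample for n = 4
```

    In the study, the same statistic is evaluated over voxelwise diffusion metrics (FA, ADC, CX) to summarize how heterogeneously each quantity is distributed across the developing brain.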

  2. A localized Richardson-Lucy algorithm for fiber orientation estimation in high angular resolution diffusion imaging.

    PubMed

    Liu, Xiaozheng; Yuan, Zhenming; Guo, Zhongwei; Xu, Dongrong

    2015-05-01

    Diffusion tensor imaging is widely used for studying neural fiber trajectories in white matter and for quantifying changes in tissue using diffusion properties at each voxel in the brain. To better model the nature of crossing fibers within complex architectures, rather than using a simplified tensor model that assumes only a single fiber direction at each image voxel, a model mixing multiple diffusion tensors is used to profile diffusion signals from high angular resolution diffusion imaging (HARDI) data. Based on the HARDI signal and a multiple-tensor model, spherical deconvolution methods have been developed to overcome the limitations of the diffusion tensor model when resolving crossing fibers. The Richardson-Lucy algorithm is a popular spherical deconvolution method used in previous work. However, it assumes Gaussian-distributed noise, whereas HARDI data are typically very noisy and follow a Rician distribution. The current work presents a novel solution to address these issues. By simultaneously considering both the Rician bias and neighborhood correlation in HARDI data, the authors propose a localized Richardson-Lucy (LRL) algorithm to estimate fiber orientations from HARDI data. The proposed method can simultaneously reduce noise and correct the Rician bias. The mean angular error (MAE) between the estimated fiber orientation distribution (FOD) field and the reference FOD field was computed to examine whether the proposed LRL algorithm offered any advantage over the conventional RL algorithm at various noise levels. The normalized mean squared error (NMSE) was also computed to measure the similarity between the true FOD field and the estimated FOD field. For MAE comparisons, the proposed LRL approach obtained the best results in most cases at different levels of SNR and b-values. For NMSE comparisons, the proposed LRL approach obtained the best results in most cases at b-value = 3000 s/mm(2), which is the recommended scheme for HARDI data acquisition. In addition, the FOD fields estimated by the proposed LRL approach in fiber-crossing regions of real data sets showed fiber structures consistent with the known anatomy of these regions. This novel spherical deconvolution method for improved accuracy in investigating crossing fibers can simultaneously reduce noise and correct Rician bias. With the noise smoothed and the bias corrected, the algorithm is especially suitable for estimating fiber orientations in HARDI data. Experimental results using both synthetic and real imaging data demonstrated the success and effectiveness of the proposed LRL algorithm.
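The conventional Richardson-Lucy iteration that the LRL method extends can be sketched in one dimension. This is the standard multiplicative update (estimate times the back-projected ratio of data to re-blurred estimate), not the paper's localized, Rician-corrected variant:

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=100):
    # classic RL update: x <- x * conv(observed / conv(x, psf), psf_flipped);
    # preserves non-negativity and (approximately) total flux
    psf = psf / psf.sum()
    psf_flip = psf[::-1]
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_flip, mode="same")
    return estimate

# toy demo: a blurred spike should sharpen back toward its true location
truth = np.zeros(32)
truth[10] = 1.0
psf = np.array([0.25, 0.5, 0.25])
blurred = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf)
```

The multiplicative form explains why RL is sensitive to the noise model: the ratio term is only statistically well-behaved under the Gaussian/Poisson assumptions, which motivates the Rician correction proposed in the paper.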

  3. The constrained inversion of Nimbus-7 wide field-of-view radiometer measurements for the Earth Radiation Budget

    NASA Technical Reports Server (NTRS)

    Hucek, Richard R.; Ardanuy, Philip; Kyle, H. Lee

    1990-01-01

    The results of a constrained, wide field-of-view radiometer measurement deconvolution are presented and compared against higher resolution results obtained from the Earth Radiation Budget instrument on the Nimbus-7 satellite and from the Earth Radiation Budget Experiment. The method is applicable to both longwave and shortwave observations and is specifically designed to treat the problem of anisotropic reflection and emission at the top of the atmosphere as well as low signal-to-noise ratios that arise regionally within a field. The procedure is reviewed, and the improvements in resolution obtained are examined. Some minor improvements in the albedo algorithm are also described.

  4. Constrained maximum consistency multi-path mitigation

    NASA Astrophysics Data System (ADS)

    Smith, George B.

    2003-10-01

    Blind deconvolution algorithms can be useful as pre-processors for signal classification algorithms in shallow water. These algorithms remove the distortion of the signal caused by multipath propagation when no knowledge of the environment is available. A framework has been presented in which filters produce signal estimates from each data channel that are as consistent with each other as possible in a least-squares sense [Smith, J. Acoust. Soc. Am. 107 (2000)]. This framework provides a solution to the blind deconvolution problem. One implementation of this framework yields the cross-relation on which EVAM [Gurelli and Nikias, IEEE Trans. Signal Process. 43 (1995)] and Rietsch [Rietsch, Geophysics 62(6) (1997)] processing are based. In this presentation, partially blind implementations that have good noise stability properties are compared using Classification Operating Characteristics (CLOC) analysis. [Work supported by ONR under Program Element 62747N and NRL, Stennis Space Center, MS.]
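The cross-relation underlying EVAM-style processing can be verified numerically: for two channels y_i = h_i * s driven by the same source, convolving each output with the other channel's filter gives the same signal, since y1*h2 = s*h1*h2 = y2*h1. A minimal check with made-up channel responses:

```python
import numpy as np

# two hypothetical channel impulse responses (illustrative values only)
rng = np.random.default_rng(0)
s = rng.standard_normal(64)          # unknown source signal
h1 = np.array([1.0, 0.5, 0.25])
h2 = np.array([1.0, -0.3, 0.1])
y1 = np.convolve(s, h1)              # channel 1 output
y2 = np.convolve(s, h2)              # channel 2 output
# cross-relation: both sides equal s * h1 * h2
lhs = np.convolve(y1, h2)
rhs = np.convolve(y2, h1)
```

In blind deconvolution this identity is turned around: the channel filters are the unknowns, estimated by making the two sides as consistent as possible in a least-squares sense, exactly the consistency criterion described in the abstract.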

  5. Contact lens design with slope-constrained Q-type aspheres for myopia correction

    NASA Astrophysics Data System (ADS)

    Peng, Wei-Jei; Cheng, Yuan-Chieh; Hsu, Wei-Yao; Yu, Zong-Ru; Ho, Cheng-Fang; Abou-El-Hossein, Khaled

    2017-08-01

    The design of a rigid contact lens (CL) with slope-constrained Q-type aspheres for myopia correction is presented in this paper. The spherical CL is the most common type for myopia correction; however, the spherical aberration (SA) arising from pupil dilation in the dark degrades visual acuity and cannot be corrected by a spherical surface. Spherical and aspheric CLs are each designed based on Liou's schematic eye model, with the criterion being the modulation transfer function (MTF) at a frequency of 100 line pairs per mm, which corresponds to normal vision of one arc-minute. After optimization, the MTF of the aspheric design is superior to that of the spherical design, because the aspheric surface corrects the SA and thereby improves visual acuity in the dark. To avoid the scratches caused by a contact profilometer, the aspheric surface is designed to be measurable with an interferometer. The Q-type aspheric surface is employed to directly constrain the root-mean-square (rms) slope of the departure from a best-fit sphere, because the fringe density resolvable by the interferometer is limited. The maximum sag departure from a best-fit sphere is also controlled according to the measurability of the aspheric stitching interferometer (ASI). Inflection points are removed during optimization for measurability and appearance. In this study, the aspheric CL is successfully designed with Q-type aspheres within the measurability limits of the interferometer. It not only corrects the myopia but also eliminates the SA, improving visual acuity in the dark based on the schematic eye model.
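The constrained quantity, the rms slope of the departure from a best-fit sphere, can be computed numerically for any rotationally symmetric surface. The sketch below uses a standard conic-plus-polynomial sag equation with made-up parameters, not the paper's Q-type basis (which expresses the departure terms directly) and not a real lens prescription:

```python
import numpy as np

def sag(r, c, k=0.0, a4=0.0):
    # conic sag plus one even polynomial term (illustrative surface model)
    return c * r**2 / (1 + np.sqrt(1 - (1 + k) * c**2 * r**2)) + a4 * r**4

r = np.linspace(0.0, 4.0, 400)            # semi-aperture in mm (assumed)
z = sag(r, c=1 / 7.8, k=-0.25, a4=1e-5)   # hypothetical aspheric surface

# best-fit sphere: brute-force least-squares search over curvature
cands = np.linspace(0.01, 0.2, 400)
cf = cands[np.argmin([np.sum((z - sag(r, cc)) ** 2) for cc in cands])]
departure = z - sag(r, cf)                # what the interferometer must resolve
rms_slope = np.sqrt(np.mean(np.gradient(departure, r) ** 2))
```

Constraining `rms_slope` during optimization keeps the interferometric fringe density within the instrument's limit, which is the measurability argument made in the abstract.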

  6. Sparse Poisson noisy image deblurring.

    PubMed

    Carlavan, Mikael; Blanc-Féraud, Laure

    2012-04-01

    Deblurring noisy Poisson images has recently been the subject of an increasing amount of work in many areas such as astronomy and biological imaging. In this paper, we focus on confocal microscopy, which is a very popular technique for 3-D imaging of living biological specimens that gives images with very good resolution (several hundred nanometers), although degraded by both blur and Poisson noise. Deconvolution methods have been proposed to reduce these degradations, and in this paper we focus on techniques that introduce an explicit prior on the solution. One difficulty of these techniques is setting the value of the parameter that weights the tradeoff between the data term and the regularizing term. Few works have been devoted to the automatic selection of this regularizing parameter when considering Poisson noise; therefore, it is often set manually to give the best visual results. We present here two recent methods to estimate this regularizing parameter, and we first propose an improvement of these estimators that takes advantage of confocal images. Building on these estimators, we then propose to express the deconvolution of Poisson noisy images as the minimization of a new constrained problem. The proposed constrained formulation is well suited to this application domain since it is directly expressed using the anti-log-likelihood of the Poisson distribution and therefore does not require any approximation. We show how to solve the unconstrained and constrained problems using the recent alternating-direction technique, and we present results on synthetic and real data using well-known priors, such as total variation and wavelet transforms. Among these wavelet transforms, we focus especially on the dual-tree complex wavelet transform and on the dictionary composed of curvelets and an undecimated wavelet transform.
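The Poisson anti-log-likelihood data term the formulation is built on is easy to write down and check: dropping terms constant in the estimate, it is sum(Hx - y*log(Hx)), which is minimized exactly when the blurred estimate Hx matches the observed counts y (a generic sketch of the data term only, not the paper's full constrained solver):

```python
import numpy as np

def poisson_data_term(hx, y, eps=1e-12):
    # anti-log-likelihood of Poisson noise, up to a constant in the estimate;
    # no Gaussian approximation is needed, matching the abstract's claim
    hx = np.maximum(hx, eps)
    return np.sum(hx - y * np.log(hx))

y = np.array([4.0, 9.0, 1.0, 6.0])  # toy photon counts
```

Each component h - y*log(h) is strictly convex with its minimum at h = y, so any mismatch between the re-blurred estimate and the data strictly increases the term, which is what the regularization parameter trades off against the prior.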

  7. A new approach to blind deconvolution of astronomical images

    NASA Astrophysics Data System (ADS)

    Vorontsov, S. V.; Jefferies, S. M.

    2017-05-01

    We readdress the strategy of finding approximate regularized solutions to the blind deconvolution problem when both the object and the point-spread function (PSF) have finite support. Our approach consists of addressing fixed points of an iteration in which both the object x and the PSF y are approximated in an alternating manner, discarding the previous approximation for x when updating x (similarly for y), and considering the resultant fixed points as candidates for a sensible solution. Alternating approximations are performed by truncated iterative least-squares descents. The numbers of descents in the object- and PSF-spaces play the role of two regularization parameters. Selection of appropriate fixed points (which may not be unique) is performed by relaxing the regularization gradually, using the previous fixed point as an initial guess for finding the next one, which brings an approximation of better spatial resolution. We report the results of artificial experiments with noise-free data, targeted at examining the potential capability of the technique to deconvolve images of high complexity. We also show the results obtained with two sets of satellite images acquired using ground-based telescopes with and without adaptive optics compensation. The new approach yields much better results than an alternating minimization technique based on positivity-constrained conjugate gradients, whose iterations stagnate when addressing data of high complexity. In the alternating-approximation step, we examine the performance of three different non-blind iterative deconvolution algorithms. The best results are provided by the non-negativity-constrained successive over-relaxation technique (+SOR) supplemented with an adaptive scheduling of the relaxation parameter. Results of comparable quality are obtained with steepest descents modified by imposing the non-negativity constraint, at the expense of higher numerical costs. The Richardson-Lucy (or expectation-maximization) algorithm fails to locate stable fixed points in our experiments, apparently due to inappropriate regularization properties.
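One of the non-blind building blocks the abstract compares, steepest descent with an imposed non-negativity constraint, amounts to projected gradient descent on the least-squares objective. A minimal 1-D sketch (a plain fixed-step scheme, not the paper's +SOR method with adaptive relaxation):

```python
import numpy as np

def nonneg_deconv(y, h, n_iter=300):
    # projected steepest descent on 0.5*||Hx - y||^2 subject to x >= 0,
    # where H is the full-convolution matrix of the known PSF h
    m, p = len(y), len(h)
    n = m - p + 1
    H = np.zeros((m, n))
    for j in range(n):                      # columns are shifted copies of h
        H[j:j + p, j] = h
    step = 1.0 / np.linalg.norm(H, 2) ** 2  # 1 / Lipschitz constant of the gradient
    x = np.zeros(n)
    for _ in range(n_iter):
        x = np.maximum(x - step * H.T @ (H @ x - y), 0.0)  # descend, then project
    return x

# toy demo: recover a sparse non-negative object blurred by a short PSF
x_true = np.array([0.0, 2.0, 0.0, 0.0, 1.0, 0.0, 0.0, 3.0, 0.0, 0.5])
h = np.array([1.0, 0.5])
y = np.convolve(x_true, h)
x_hat = nonneg_deconv(y, h)
```

Because the projection onto the non-negative orthant is non-expansive, the iteration inherits the convergence of plain gradient descent, at the cost of the extra iterations the abstract attributes to this variant.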

  8. Robust MR-based approaches to quantifying white matter structure and structure/function alterations in Huntington's disease

    PubMed Central

    Steventon, Jessica J.; Trueman, Rebecca C.; Rosser, Anne E.; Jones, Derek K.

    2016-01-01

    Background Huge advances have been made in understanding and addressing confounds in diffusion MRI data to quantify white matter microstructure. However, there has been a lag in applying these advances in clinical research. Some confounds are more pronounced in Huntington's disease (HD), which impedes data quality and the interpretability of patient-control differences. This study presents an optimised analysis pipeline and addresses specific confounds in an HD patient cohort. Method 15 HD gene-positive and 13 matched control participants were scanned on a 3T MRI system with two diffusion MRI sequences. An optimised post-processing pipeline included motion, eddy current and EPI correction, rotation of the B-matrix, free water elimination (FWE) and tractography analysis using an algorithm capable of reconstructing crossing fibres. The corpus callosum was examined using both a region-of-interest and a deterministic tractography approach, with both conventional diffusion tensor imaging (DTI)-based and spherical deconvolution analyses. Results Correcting for CSF contamination significantly altered microstructural metrics and the detection of group differences. Reconstructing the corpus callosum using spherical deconvolution produced a more complete reconstruction with greater sensitivity to group differences, compared to DTI-based tractography. Tissue volume fraction (TVF) was reduced in HD participants and was more sensitive to disease burden than DTI metrics. Conclusion Addressing confounds in diffusion MR data results in more valid, anatomically faithful white matter tract reconstructions with reduced within-group variance. TVF is recommended as a complementary metric, providing insight into the relationship with clinical symptoms in HD not fully captured by conventional DTI metrics. PMID:26335798
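The pipeline's "rotation of the B matrix" step can be illustrated with a minimal sketch: once motion correction estimates a rotation for each volume, the same rotation must be applied to the diffusion gradient directions, or the directions no longer match the reoriented image (function name and toy values are illustrative, not from the paper):

```python
import numpy as np

def rotate_bvecs(bvecs, R):
    # apply the motion-correction rotation R to each gradient direction
    # (row vectors); skipping this step biases the fitted diffusion model
    return bvecs @ R.T

# toy motion estimate: a 90-degree rotation about the z-axis
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
bvecs = np.array([[1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0]])
rotated = rotate_bvecs(bvecs, R)
```

Rotation preserves the unit length of each direction, so only the orientations, not the b-values, change.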

  9. Robust MR-based approaches to quantifying white matter structure and structure/function alterations in Huntington's disease.

    PubMed

    Steventon, Jessica J; Trueman, Rebecca C; Rosser, Anne E; Jones, Derek K

    2016-05-30

    Huge advances have been made in understanding and addressing confounds in diffusion MRI data to quantify white matter microstructure. However, there has been a lag in applying these advances in clinical research. Some confounds are more pronounced in Huntington's disease (HD), which impedes data quality and the interpretability of patient-control differences. This study presents an optimised analysis pipeline and addresses specific confounds in an HD patient cohort. 15 HD gene-positive and 13 matched control participants were scanned on a 3T MRI system with two diffusion MRI sequences. An optimised post-processing pipeline included motion, eddy current and EPI correction, rotation of the B-matrix, free water elimination (FWE) and tractography analysis using an algorithm capable of reconstructing crossing fibres. The corpus callosum was examined using both a region-of-interest and a deterministic tractography approach, with both conventional diffusion tensor imaging (DTI)-based and spherical deconvolution analyses. Correcting for CSF contamination significantly altered microstructural metrics and the detection of group differences. Reconstructing the corpus callosum using spherical deconvolution produced a more complete reconstruction with greater sensitivity to group differences, compared to DTI-based tractography. Tissue volume fraction (TVF) was reduced in HD participants and was more sensitive to disease burden than DTI metrics. Addressing confounds in diffusion MR data results in more valid, anatomically faithful white matter tract reconstructions with reduced within-group variance. TVF is recommended as a complementary metric, providing insight into the relationship with clinical symptoms in HD not fully captured by conventional DTI metrics.

  10. Structural Connectivity Relates to Perinatal Factors and Functional Impairment at 7 Years in Children Born Very Preterm

    PubMed Central

    Thompson, Deanne K.; Chen, Jian; Beare, Richard; Adamson, Christopher L.; Ellis, Rachel; Ahmadzai, Zohra M.; Kelly, Claire E.; Lee, Katherine J.; Zalesky, Andrew; Yang, Joseph Y.M.; Hunt, Rodney W.; Cheong, Jeanie L.Y.; Inder, Terrie E.; Doyle, Lex W.; Seal, Marc L.; Anderson, Peter J.

    2016-01-01

    Objective To use structural connectivity to (1) compare brain networks between typically and atypically developing (very preterm) children, (2) explore associations between potential perinatal developmental disturbances and brain networks, and (3) describe associations between brain networks and functional impairments in very preterm children. Methods 26 full-term and 107 very preterm 7-year-old children (born <30 weeks’ gestational age and/or <1250 g) underwent T1- and diffusion-weighted imaging. Global white matter fiber networks were produced using 80 cortical and subcortical nodes, and edges created using constrained spherical deconvolution-based tractography. Global graph theory metrics were analysed, and regional networks were identified using network-based statistics. Cognitive and motor function were assessed at 7 years of age. Results Compared with full-term children, very preterm children had reduced density, lower global efficiency and higher local efficiency. Those with lower gestational age at birth, infection or higher neonatal brain abnormality score had reduced connectivity. Reduced connectivity within a widespread network was predictive of impaired IQ, while reduced connectivity within the right parietal and temporal lobes was associated with motor impairment in very preterm children. Conclusions This study utilized an innovative structural connectivity pipeline to reveal that children born very preterm have less connected and less complex brain networks compared with typically developing term-born children. Adverse perinatal factors led to disturbances in white matter connectivity, which in turn are associated with impaired functional outcomes, highlighting novel structure-function relationships. PMID:27046108
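The graph metrics reported in this entry (global and local efficiency) have simple definitions: global efficiency is the mean of inverse shortest-path lengths over all node pairs. A minimal sketch for a binary undirected network (real connectome pipelines use weighted edges from tractography, so this is an illustration of the metric, not the study's method):

```python
import numpy as np

def global_efficiency(A):
    # A: binary, undirected adjacency matrix (no self-loops);
    # efficiency = mean over ordered node pairs of 1 / shortest-path length
    n = len(A)
    D = np.where(A > 0, 1.0, np.inf)
    np.fill_diagonal(D, 0.0)
    for k in range(n):                      # Floyd-Warshall all-pairs paths
        D = np.minimum(D, D[:, [k]] + D[[k], :])
    inv = 1.0 / D[~np.eye(n, dtype=bool)]   # off-diagonal inverse distances
    return inv.mean()
```

Disconnected pairs contribute zero (1/inf), which is why the reduced network density reported in the preterm group pulls global efficiency down.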

  11. Visual System Involvement in Patients with Newly Diagnosed Parkinson Disease.

    PubMed

    Arrigo, Alessandro; Calamuneri, Alessandro; Milardi, Demetrio; Mormina, Enricomaria; Rania, Laura; Postorino, Elisa; Marino, Silvia; Di Lorenzo, Giuseppe; Anastasi, Giuseppe Pio; Ghilardi, Maria Felice; Aragona, Pasquale; Quartarone, Angelo; Gaeta, Michele

    2017-12-01

    Purpose To assess intracranial visual system changes in newly diagnosed, drug-naïve patients with Parkinson disease. Materials and Methods Twenty patients with newly diagnosed Parkinson disease and 20 age-matched control subjects were recruited. Magnetic resonance (MR) imaging (T1-weighted and diffusion-weighted imaging) was performed with a 3-T MR imager. White matter changes were assessed by exploring the white matter diffusion profile by means of diffusion-tensor imaging-based parameters and constrained spherical deconvolution-based connectivity analysis, and by means of white matter voxel-based morphometry (VBM). Alterations in occipital gray matter were investigated by means of gray matter VBM. Morphologic analysis of the optic chiasm was based on manual measurement of regions of interest. Statistical testing included analysis of variance, t tests, and permutation tests. Results In the patients with Parkinson disease, significant alterations were found in optic radiation connectivity distribution, with decreased lateral geniculate nucleus-V2 density (F, -8.28; P < .05), a significant increase in optic radiation mean diffusivity (F, 7.5; P = .014), and a significant reduction in white matter concentration. VBM analysis also showed a significant reduction in visual cortical volumes (P < .05). Moreover, the chiasmatic area and volume were significantly reduced (P < .05). Conclusion These findings show that visual system alterations can be detected in early stages of Parkinson disease and that the entire intracranial visual system can be involved. Online supplemental material is available for this article.

  12. Bayesian uncertainty quantification in linear models for diffusion MRI.

    PubMed

    Sjölund, Jens; Eklund, Anders; Özarslan, Evren; Herberthson, Magnus; Bånkestad, Maria; Knutsson, Hans

    2018-03-29

    Diffusion MRI (dMRI) is a valuable tool in the assessment of tissue microstructure. By fitting a model to the dMRI signal it is possible to derive various quantitative features. Several of the most popular dMRI signal models are expansions in an appropriately chosen basis, where the coefficients are determined using some variation of least-squares. However, such approaches lack any notion of uncertainty, which could be valuable in e.g. group analyses. In this work, we use a probabilistic interpretation of linear least-squares methods to recast popular dMRI models as Bayesian ones. This makes it possible to quantify the uncertainty of any derived quantity. In particular, for quantities that are affine functions of the coefficients, the posterior distribution can be expressed in closed-form. We simulated measurements from single- and double-tensor models where the correct values of several quantities are known, to validate that the theoretically derived quantiles agree with those observed empirically. We included results from residual bootstrap for comparison and found good agreement. The validation employed several different models: Diffusion Tensor Imaging (DTI), Mean Apparent Propagator MRI (MAP-MRI) and Constrained Spherical Deconvolution (CSD). We also used in vivo data to visualize maps of quantitative features and corresponding uncertainties, and to show how our approach can be used in a group analysis to downweight subjects with high uncertainty. In summary, we convert successful linear models for dMRI signal estimation to probabilistic models, capable of accurate uncertainty quantification.
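The closed-form posterior the abstract relies on is the standard conjugate result for linear-Gaussian models. A minimal sketch with a zero-mean isotropic Gaussian prior (the prior form, variances, and toy data here are assumptions for illustration, not the paper's exact choices):

```python
import numpy as np

def bayes_linear_posterior(X, y, sigma2=1.0, tau2=10.0):
    # conjugate model: prior w ~ N(0, tau2*I), likelihood y ~ N(Xw, sigma2*I);
    # the posterior over coefficients is N(m, S) in closed form
    S = np.linalg.inv(X.T @ X / sigma2 + np.eye(X.shape[1]) / tau2)
    m = S @ X.T @ y / sigma2
    return m, S

# toy regression: recover known coefficients from noisy observations
rng = np.random.default_rng(1)
w_true = np.array([1.0, -2.0, 0.5])
X = rng.standard_normal((200, 3))
y = X @ w_true + rng.standard_normal(200)
m, S = bayes_linear_posterior(X, y)

# any affine quantity a @ w then has posterior N(a @ m, a @ S @ a),
# which is the closed-form uncertainty used for derived dMRI features
a = np.array([1.0, 0.0, 0.0])
```

The scalar a @ S @ a is exactly the kind of per-subject uncertainty the authors propose for downweighting subjects in group analyses.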

  13. Blind deconvolution of 2-D and 3-D fluorescent micrographs

    NASA Astrophysics Data System (ADS)

    Krishnamurthi, Vijaykumar; Liu, Yi-Hwa; Holmes, Timothy J.; Roysam, Badrinath; Turner, James N.

    1992-06-01

    This paper presents recent results of our reconstructions of 3-D data from Drosophila chromosomes, as well as our simulations with a refined version of the algorithm used in those reconstructions. It is well known that the calibration of the point spread function (PSF) of a fluorescence microscope is a tedious process that involves esoteric techniques in most cases. This problem is further compounded in the case of confocal microscopy, where the measured intensities are usually low. A number of techniques have been developed to solve this problem, all of which are methods in blind deconvolution. These are so called because the measured PSF is not required in the deconvolution of degraded images from any optical system. Our own efforts in this area involved the maximum likelihood (ML) method, the numerical solution to which is obtained by the expectation maximization (EM) algorithm. Based on the reasonable early results obtained during our simulations with 2-D phantoms, we carried out experiments with real 3-D data and found that the blind deconvolution method using the ML approach gave reasonable reconstructions. Next we tried to perform the reconstructions using some 2-D data, but found that the results were not encouraging. We surmised that the poor reconstructions were primarily due to the large values of dark current in the input data. This, coupled with the fact that we are likely to have similar data with considerable dark current from a confocal microscope, prompted us to look into ways of constraining the solution of the PSF. We observed that in the 2-D case, the reconstructed PSF has a tendency to retain values larger than those of the theoretical PSF in regions away from the center (outside of what we considered to be its region of support). This observation motivated us to apply an upper-bound constraint on the PSF in these regions. Furthermore, we constrain the solution of the PSF to be a bandlimited function, as is the case in the true situation.
We have derived two separate approaches for implementing the constraint. One approach involves the mathematical rigors of Lagrange multipliers and is discussed in another paper. The second approach involves an adaptation of the Gerchberg-Saxton algorithm, which ensures bandlimitedness and non-negativity of the PSF. Although the latter approach is mathematically less rigorous than the former, we currently favor it because it is simpler to implement on a computer and has smaller memory requirements. The next section briefly describes the theory and derivation of these constraint equations using Lagrange multipliers.
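The Gerchberg-Saxton-style constraint amounts to alternating projections: enforce the band limit in the Fourier domain, then enforce non-negativity in the spatial domain. A 1-D sketch (the band-limit fraction and iteration count are assumed values, not calibrated to any microscope):

```python
import numpy as np

def constrain_psf(psf, keep_frac=0.25, n_iter=20):
    # alternate two projections in the spirit of Gerchberg-Saxton:
    # (1) zero Fourier coefficients beyond an assumed band limit,
    # (2) clip the negative values that step (1) can introduce
    n = len(psf)
    cutoff = max(1, int(n * keep_frac))
    for _ in range(n_iter):
        F = np.fft.rfft(psf)
        F[cutoff:] = 0.0                      # band-limit projection
        psf = np.maximum(np.fft.irfft(F, n), 0.0)  # non-negativity projection
    return psf / psf.sum()                    # renormalize to unit energy

rng = np.random.default_rng(2)
raw = rng.random(64)          # stand-in for a noisy reconstructed PSF
psf = constrain_psf(raw)
```

Neither projection alone suffices, since band-limiting creates negative ringing and clipping reintroduces high frequencies, which is why the two are iterated.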

  14. Light-scattering flow cytometry for identification and characterization of blood microparticles

    NASA Astrophysics Data System (ADS)

    Konokhova, Anastasiya I.; Yurkin, Maxim A.; Moskalensky, Alexander E.; Chernyshev, Andrei V.; Tsvetovskaya, Galina A.; Chikova, Elena D.; Maltsev, Valeri P.

    2012-05-01

    We describe a novel approach to studying blood microparticles using the scanning flow cytometer, which measures light-scattering patterns (LSPs) of individual particles. Starting from platelet-rich plasma, we separated spherical microparticles from non-spherical plasma constituents, such as platelets and cell debris, based on the similarity of their LSP to that of a sphere. This provides a label-free method for identification (detection) of microparticles, including those larger than 1 μm. Next, we rigorously characterized each measured particle, determining its size and refractive index, including the errors of these estimates. Finally, we employed a deconvolution algorithm to determine the size and refractive index distributions of the whole population of microparticles, accounting for the largely different reliability of individual measurements. The developed methods were tested on a blood sample from a healthy donor, resulting in good agreement with literature data. The only limitation of this approach is the size detection limit, currently about 0.5 μm due to the laser wavelength of 0.66 μm used.

  15. DECONVOLUTION OF IMAGES FROM BLAST 2005: INSIGHT INTO THE K3-50 AND IC 5146 STAR-FORMING REGIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roy, Arabindo; Netterfield, Calvin B.; Ade, Peter A. R.

    2011-04-01

    We present an implementation of the iterative flux-conserving Lucy-Richardson (L-R) deconvolution method of image restoration for maps produced by the Balloon-borne Large Aperture Submillimeter Telescope (BLAST). Compared to the direct Fourier transform method of deconvolution, the L-R operation restores images with better-controlled background noise and increases source detectability. Intermediate iterated images are useful for studying extended diffuse structures, while the later iterations truly enhance point sources to near the designed diffraction limit of the telescope. The L-R method of deconvolution is efficient in resolving compact sources in crowded regions while simultaneously conserving their respective flux densities. We have analyzed its performance and convergence extensively through simulations and cross-correlations of the deconvolved images with available high-resolution maps. We present new science results from two BLAST surveys, in the Galactic regions K3-50 and IC 5146, further demonstrating the benefits of performing this deconvolution. We have resolved three clumps within a radius of 4.5 arcmin inside the star-forming molecular cloud containing K3-50. Combining the well-resolved dust emission map with available multi-wavelength data, we have constrained the spectral energy distributions (SEDs) of five clumps to obtain masses (M), bolometric luminosities (L), and dust temperatures (T). The L-M diagram has been used as a diagnostic tool to estimate the evolutionary stages of the clumps. There are close relationships between dust continuum emission and both 21 cm radio continuum and 12CO molecular line emission. The restored extended large-scale structures in the Northern Streamer of IC 5146 have a strong spatial correlation with both SCUBA and high-resolution extinction images. A dust temperature of 12 K has been obtained for the central filament. 
We report physical properties of ten compact sources, including six associated protostars, by fitting SEDs to multi-wavelength data. All of these compact sources are still quite cold (typical temperature below ~16 K) and are above the critical Bonnor-Ebert mass. They have associated low-power young stellar objects. Further evidence for starless clumps has also been found in the IC 5146 region.

  16. Deconvolution of Images from BLAST 2005: Insight into the K3-50 and IC 5146 Star-forming Regions

    NASA Astrophysics Data System (ADS)

    Roy, Arabindo; Ade, Peter A. R.; Bock, James J.; Brunt, Christopher M.; Chapin, Edward L.; Devlin, Mark J.; Dicker, Simon R.; France, Kevin; Gibb, Andrew G.; Griffin, Matthew; Gundersen, Joshua O.; Halpern, Mark; Hargrave, Peter C.; Hughes, David H.; Klein, Jeff; Marsden, Gaelen; Martin, Peter G.; Mauskopf, Philip; Netterfield, Calvin B.; Olmi, Luca; Patanchon, Guillaume; Rex, Marie; Scott, Douglas; Semisch, Christopher; Truch, Matthew D. P.; Tucker, Carole; Tucker, Gregory S.; Viero, Marco P.; Wiebe, Donald V.

    2011-04-01

    We present an implementation of the iterative flux-conserving Lucy-Richardson (L-R) deconvolution method of image restoration for maps produced by the Balloon-borne Large Aperture Submillimeter Telescope (BLAST). Compared to the direct Fourier transform method of deconvolution, the L-R operation restores images with better-controlled background noise and increases source detectability. Intermediate iterated images are useful for studying extended diffuse structures, while the later iterations truly enhance point sources to near the designed diffraction limit of the telescope. The L-R method of deconvolution is efficient in resolving compact sources in crowded regions while simultaneously conserving their respective flux densities. We have analyzed its performance and convergence extensively through simulations and cross-correlations of the deconvolved images with available high-resolution maps. We present new science results from two BLAST surveys, in the Galactic regions K3-50 and IC 5146, further demonstrating the benefits of performing this deconvolution. We have resolved three clumps within a radius of 4.5 arcmin inside the star-forming molecular cloud containing K3-50. Combining the well-resolved dust emission map with available multi-wavelength data, we have constrained the spectral energy distributions (SEDs) of five clumps to obtain masses (M), bolometric luminosities (L), and dust temperatures (T). The L-M diagram has been used as a diagnostic tool to estimate the evolutionary stages of the clumps. There are close relationships between dust continuum emission and both 21 cm radio continuum and 12CO molecular line emission. The restored extended large-scale structures in the Northern Streamer of IC 5146 have a strong spatial correlation with both SCUBA and high-resolution extinction images. A dust temperature of 12 K has been obtained for the central filament. 
We report physical properties of ten compact sources, including six associated protostars, by fitting SEDs to multi-wavelength data. All of these compact sources are still quite cold (typical temperature below ~16 K) and are above the critical Bonnor-Ebert mass. They have associated low-power young stellar objects. Further evidence for starless clumps has also been found in the IC 5146 region.

  17. Geopotential Field Anomaly Continuation with Multi-Altitude Observations

    NASA Technical Reports Server (NTRS)

    Kim, Jeong Woo; Kim, Hyung Rae; von Frese, Ralph; Taylor, Patrick; Rangelova, Elena

    2012-01-01

    Conventional gravity and magnetic anomaly continuation invokes the standard Poisson boundary condition of a zero anomaly at an infinite vertical distance from the observation surface. This simple continuation is limited, however, where multiple altitude slices of the anomaly field have been observed. Increasingly, areas are becoming available constrained by multiple boundary conditions from surface, airborne, and satellite surveys. This paper describes the implementation of continuation with multi-altitude boundary conditions in Cartesian and spherical coordinates and investigates the advantages and limitations of these applications. Continuations by EPS (Equivalent Point Source) inversion and the FT (Fourier Transform), as well as by SCHA (Spherical Cap Harmonic Analysis) are considered. These methods were selected because they are especially well suited for analyzing multi-altitude data over finite patches of the earth such as covered by the ADMAP database. In general, continuations constrained by multi-altitude data surfaces are invariably superior to those constrained by a single altitude data surface due to anomaly measurement errors and the non-uniqueness of continuation.
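The conventional single-surface continuation that the multi-altitude approach generalizes has a compact Fourier form: each wavenumber component of the anomaly is attenuated by exp(-|k| dz) when continued upward by dz. A 1-D profile sketch of this classical operation (not the EPS, multi-boundary, or SCHA methods described in the abstract):

```python
import numpy as np

def upward_continue(anomaly, dx, dz):
    # classical FFT upward continuation of a 1-D anomaly profile:
    # attenuate each wavenumber by exp(-|k|*dz), dz > 0 meaning upward
    k = np.abs(2.0 * np.pi * np.fft.fftfreq(len(anomaly), d=dx))
    return np.fft.ifft(np.fft.fft(anomaly) * np.exp(-k * dz)).real

# synthetic anomaly profile continued 5 km upward
x = np.linspace(0.0, 100.0, 256)
anomaly = np.sin(2.0 * np.pi * x / 20.0)
up = upward_continue(anomaly, dx=x[1] - x[0], dz=5.0)
```

The DC component (k = 0) passes unchanged while short wavelengths decay exponentially, which is also why downward continuation (dz < 0) amplifies measurement noise and benefits from the multi-altitude constraints the paper advocates.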

  19. Improved images of crustal structures in the Bergslagen, central Sweden, through seismic reprocessing of BABEL lines 1, 6 and 7

    NASA Astrophysics Data System (ADS)

    Buntin, Sebastian; Malehmir, Alireza; Malinowski, Michał; Högdahl, Karin; Juhlin, Christopher; Buske, Stefan

    2017-04-01

    In a joint effort through the BABEL project, geoscientists from five countries acquired 2268 km of marine seismic data in the Baltic Sea in 1989. These consisted of near-vertical reflection and wide-angle refraction seismic data, providing insights into the subsurface down to the Moho and suggesting that plate tectonics operated already during the Paleoproterozoic. The seismic data were acquired using a receiver group interval of 50 m and a total cable length of 3 km. In total, 60 groups of 64 hydrophones at 15 m depth were used. An airgun array consisting of six equal subarrays towed at 7.5 m depth was used to generate the seismic signal. The shot interval and the corresponding record lengths differed among the lines: lines 1 and 7 were recorded with a 25 s record length and 75 m shot spacing, and line 6 with 23 s and 62.5 m. The sampling rate was 4 ms for all three profiles. Lines 1, 6 and 7 are located at the boundary of the world-class, historical Bergslagen mineral district and are revisited in this study. Improved images can be used to refine previous interpretations, particularly at shallower depths (< 5 km). About 27 years after acquisition, these data have been reprocessed in our study. In addition to the original processing steps, such as spherical divergence correction, deconvolution and NMO corrections, further steps such as DMO corrections, pre- and post-stack deconvolution and coherency enhancement were applied. The reprocessing revealed reflections in the shallow parts of the profiles, likely from major multi-phase deformation zones extending down to the lower crust, which were not present in the previous images. The images of reflections in the deeper parts, including a few sub-Moho reflections, are also markedly improved. 
The three reprocessed profiles help constrain the nature of the northern boundary of Bergslagen and associated crustal structures. Furthermore, they should assist in the planning of an onshore refraction and reflection profile, to be acquired in 2017, crossing the northern boundary of the Bergslagen district. Acknowledgments: This work is supported by the Swedish Research Council (VR) grant number 2015-05177, for which we are grateful. S. Buntin's PhD work is supported by the grant.
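    One of the processing steps named in the abstract, normal moveout (NMO) correction, maps each reflection from its hyperbolic travel time back to zero-offset time. The single-trace sketch below illustrates the idea; the velocity, offset and sample interval are hypothetical, not the BABEL acquisition parameters.

```python
import numpy as np

def nmo_correct(trace, offset, velocity, dt):
    """Map each zero-offset time t0 to the hyperbolic moveout time
    t = sqrt(t0^2 + (offset / velocity)^2) and resample the trace there."""
    n = len(trace)
    t0 = np.arange(n) * dt
    t = np.sqrt(t0 ** 2 + (offset / velocity) ** 2)
    return np.interp(t / dt, np.arange(n), trace, right=0.0)

# A reflection from t0 = 0.8 s recorded at 1500 m offset with v = 2500 m/s
# arrives at t = sqrt(0.8^2 + (1500/2500)^2) = 1.0 s (sample 250 at dt = 4 ms)
dt = 0.004
trace = np.zeros(400)
trace[250] = 1.0
corrected = nmo_correct(trace, offset=1500.0, velocity=2500.0, dt=dt)
```

    After correction the event sits at its zero-offset sample (0.8 s, sample 200), so traces from different offsets stack coherently.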

  20. Myocardial perfusion quantification using simultaneously acquired 13NH3-ammonia PET and dynamic contrast-enhanced MRI in patients at rest and stress.

    PubMed

    Kunze, Karl P; Nekolla, Stephan G; Rischpler, Christoph; Zhang, Shelley HuaLei; Hayes, Carmel; Langwieser, Nicolas; Ibrahim, Tareq; Laugwitz, Karl-Ludwig; Schwaiger, Markus

    2018-04-19

    Systematic differences with respect to myocardial perfusion quantification exist between DCE-MRI and PET. Using the potential of integrated PET/MRI, this study was conceived to compare perfusion quantification on the basis of simultaneously acquired 13NH3-ammonia PET and DCE-MRI data in patients at rest and stress. Twenty-nine patients were examined on a 3T PET/MRI scanner. DCE-MRI was implemented in a dual-sequence design with additional T1 mapping for signal normalization. Four different deconvolution methods, including a modified version of the Fermi technique, were compared against 13NH3-ammonia results. Cohort-average flow comparison yielded higher resting flows for DCE-MRI than for PET and, therefore, significantly lower DCE-MRI perfusion ratios under the common assumption of equal arterial and tissue hematocrit. Absolute flow values were strongly correlated in both slice-average (R² = 0.82) and regional (R² = 0.7) evaluations. The different DCE-MRI deconvolution methods yielded similar flow results, with the exception of an unconstrained Fermi method exhibiting outliers at high flows when compared with PET. Thresholds for ischemia classification may not be directly transferable between PET and MRI flow values. Differences in perfusion ratios between PET and DCE-MRI may be lifted by using stress/rest-specific hematocrit conversion. Proper physiological constraints are advised in model-constrained deconvolution. © 2018 International Society for Magnetic Resonance in Medicine.
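    Perfusion deconvolution recovers F·R(t), the flow-scaled residue function, from the tissue curve and the arterial input function (AIF). The sketch below uses truncated-SVD inversion of the AIF convolution matrix, a common alternative formulation, with a synthetic exponential AIF and residue; it is not the paper's Fermi-model implementation, and all curves and units are made up.

```python
import numpy as np

def svd_deconvolve(aif, tissue, dt, tol=1e-6):
    """Recover F * R(t) from tissue = dt * (AIF convolved with F * R) by
    inverting the lower-triangular convolution matrix of the AIF via SVD;
    singular values below tol * s_max are zeroed to stabilise the inverse."""
    n = len(aif)
    A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                       for i in range(n)])
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s >= tol * s[0], 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ tissue))

# Synthetic noiseless data: exponential AIF, flow F = 0.01 /s, MTT = 4 s
dt, n = 1.0, 60
t = np.arange(n) * dt
aif = np.exp(-t / 5.0)
F, mtt = 0.01, 4.0
residue = F * np.exp(-t / mtt)                 # F * R(t); its maximum is the flow
tissue = dt * np.convolve(aif, residue)[:n]    # forward model
flow_estimate = svd_deconvolve(aif, tissue, dt).max()
```

    With noisy clinical data a much larger truncation tolerance would be used, which is one way "proper physiological constraints" enter the deconvolution in practice.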

  1. Parotid gland tumours: MR tractography to assess contact with the facial nerve.

    PubMed

    Attyé, Arnaud; Karkas, Alexandre; Troprès, Irène; Roustit, Matthieu; Kastler, Adrian; Bettega, Georges; Lamalle, Laurent; Renard, Félix; Righini, Christian; Krainik, Alexandre

    2016-07-01

    To assess the feasibility of intraparotid facial nerve (VIIn) tractographic reconstruction in estimating the presence of contact between the VIIn and the tumour, in patients requiring surgical resection of parotid tumours. Patients underwent MR scans with VIIn tractography calculated with the constrained spherical deconvolution model. The parameters of the diffusion sequence were: b-value of 1000 s/mm²; 32 directions; voxel size: 2 mm isotropic; scan time: 9 min 31 s. The potential contacts between VIIn branches and tumours were estimated with different initial fractional anisotropy (iFA) cut-offs and compared to surgical data. Surgeons were blinded to the tractography reconstructions and identified both nerves and contact with tumours using nerve stimulation and reference photographs. Twenty-six patients were included in this study; the mean patient age was 55.2 years. Direct surgical assessment of the VIIn identified 0.1 as the iFA threshold with the best sensitivity for detecting tumour contact. In all patients with successful VIIn identification by tractography, surgeons confirmed nerve courses as well as lesion location in the parotid glands. Mean VIIn branch FA values were significantly lower in cases with tumour contact (t-test; p ≤ 0.01). This study showed the feasibility of intraparotid VIIn tractography to identify nerve contact with parotid tumours. • Diffusion imaging is an efficient method for highlighting the intraparotid VIIn. • Visualization of the VIIn may help to better manage patients before surgery. • We bring new insights to future trials for patients with VIIn dysfunction. • We aimed to provide radio-anatomical references for further studies.
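    The FA values thresholded and compared above come from the standard diffusion-tensor formula. A minimal sketch of that computation from the three tensor eigenvalues (the example eigenvalues are illustrative only):

```python
import numpy as np

def fractional_anisotropy(l1, l2, l3):
    """FA from the diffusion-tensor eigenvalues: 0 for isotropic diffusion,
    approaching 1 when one direction dominates (e.g. along a nerve)."""
    md = (l1 + l2 + l3) / 3.0                       # mean diffusivity
    num = (l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    return np.sqrt(1.5 * num / den)

fa = fractional_anisotropy(1.7, 0.4, 0.3)   # hypothetical nerve-like tensor
```

    A low iFA cut-off such as the 0.1 reported above keeps streamlines seeded even in voxels where tumour compression has reduced the anisotropy.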

  2. Constrained field theories on spherically symmetric spacetimes with horizons

    NASA Astrophysics Data System (ADS)

    Fernandes, Karan; Lahiri, Amitabha; Ghosh, Suman

    2017-02-01

    We apply the Dirac-Bergmann algorithm for the analysis of constraints to gauge theories defined on spherically symmetric black hole backgrounds. We find that the constraints for a given theory are modified on such spacetimes through the presence of additional contributions from the horizon. As a concrete example, we consider the Maxwell field on a black hole background, and determine the role of the horizon contributions on the dynamics of the theory.

  3. Spherical cows in the sky with fab four

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaloper, Nemanja; Sandora, McCullen, E-mail: kaloper@physics.ucdavis.edu, E-mail: mesandora@ucdavis.edu

    2014-05-01

    We explore spherically symmetric static solutions in a subclass of unitary scalar-tensor theories of gravity, called the 'Fab Four' models. The weak field large distance solutions may be phenomenologically viable, but only if the Gauss-Bonnet term is negligible. Only in this limit will the Vainshtein mechanism work consistently. Further, classical constraints and unitarity bounds constrain the models quite tightly. Nevertheless, in the limits where the terms dominating at large scales are, respectively, Kinetic Braiding, Horndeski, and Gauss-Bonnet, horizon-scale effects may occur while the theory satisfies Solar system constraints and, marginally, unitarity bounds. On the other hand, bringing the cutoff down to below a millimeter constrains all the coupling scales such that 'Fab Fours' can't be heard outside of the Solar system.

  4. Structure of neutron star crusts from new Skyrme effective interactions constrained by chiral effective field theory

    NASA Astrophysics Data System (ADS)

    Lim, Yeunhwan; Holt, Jeremy W.

    2017-06-01

    We investigate the structure of neutron star crusts, including the crust-core boundary, based on new Skyrme mean field models constrained by the bulk-matter equation of state from chiral effective field theory and the ground-state energies of doubly-magic nuclei. Nuclear pasta phases are studied using both the liquid drop model as well as the Thomas-Fermi approximation. We compare the energy per nucleon for each geometry (spherical nuclei, cylindrical nuclei, nuclear slabs, cylindrical holes, and spherical holes) to obtain the ground state phase as a function of density. We find that the size of the Wigner-Seitz cell depends strongly on the model parameters, especially the coefficients of the density gradient interaction terms. We also employ the thermodynamic instability method to check the validity of the numerical solutions based on energy comparisons.

  5. Effective Alternating Direction Optimization Methods for Sparsity-Constrained Blind Image Deblurring.

    PubMed

    Xiong, Naixue; Liu, Ryan Wen; Liang, Maohan; Wu, Di; Liu, Zhao; Wu, Huisi

    2017-01-18

    Single-image blind deblurring for imaging sensors in the Internet of Things (IoT) is a challenging ill-conditioned inverse problem, which requires regularization techniques to stabilize the image restoration process. The purpose is to recover the underlying blur kernel and latent sharp image from only one blurred image. Under many degraded imaging conditions, the blur kernel could be considered not only spatially sparse, but also piecewise smooth with the support of a continuous curve. By taking advantage of the hybrid sparse properties of the blur kernel, a hybrid regularization method is proposed in this paper to robustly and accurately estimate the blur kernel. The effectiveness of the proposed blur kernel estimation method is enhanced by incorporating both the L1-norm of the kernel intensity and the squared L2-norm of the intensity derivative. Once an accurate estimate of the blur kernel is obtained, the original blind deblurring can be simplified to the direct deconvolution of blurred images. To guarantee robust non-blind deconvolution, a variational image restoration model is presented based on an L1-norm data-fidelity term and a second-order total generalized variation (TGV) regularizer. All non-smooth optimization problems related to blur kernel estimation and non-blind deconvolution are effectively handled by using the alternating direction method of multipliers (ADMM)-based numerical methods. Comprehensive experiments on both synthetic and realistic datasets have been implemented to compare the proposed method with several state-of-the-art methods. The experimental comparisons have illustrated the satisfactory imaging performance of the proposed method in terms of quantitative and qualitative evaluations.
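    ADMM handles the non-smooth L1 terms above by splitting them off into a soft-thresholding (proximal) update. The sketch below shows the pattern on a generic L1-regularized least-squares (lasso) problem, which is a simplification of, not the paper's, full kernel/TGV model; `admm_lasso` and its parameters are illustrative names.

```python
import numpy as np

def soft_threshold(v, lam):
    """Proximal operator of lam * ||.||_1: the sparsity-enforcing z-update."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def admm_lasso(A, b, lam, rho=1.0, n_iter=300):
    """Minimise 0.5 * ||A x - b||^2 + lam * ||x||_1 with ADMM splitting x = z."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    P = np.linalg.inv(A.T @ A + rho * np.eye(n))   # cached smooth-subproblem solve
    Atb = A.T @ b
    for _ in range(n_iter):
        x = P @ (Atb + rho * (z - u))   # quadratic (data-fidelity) update
        z = soft_threshold(x + u, lam / rho)   # non-smooth (sparsity) update
        u += x - z                      # dual ascent on the constraint x = z
    return z

# With A = I the lasso solution is soft_threshold(b, lam) in closed form
b = np.array([3.0, 0.5, -2.0, 0.1])
x_hat = admm_lasso(np.eye(4), b, lam=1.0)
```

    The same alternation (smooth solve, proximal step, dual update) carries over when the L1 term acts on the blur kernel and the TGV term on the image.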

  6. A Model-Based Approach for Microvasculature Structure Distortion Correction in Two-Photon Fluorescence Microscopy Images

    PubMed Central

    Dao, Lam; Glancy, Brian; Lucotte, Bertrand; Chang, Lin-Ching; Balaban, Robert S; Hsu, Li-Yueh

    2015-01-01

    This paper investigates a post-processing approach to correct spatial distortion in two-photon fluorescence microscopy images for vascular network reconstruction. It is aimed at in vivo imaging in large field-of-view, deep-tissue studies of vascular structures. Based on simple geometric modeling of the object of interest, a distortion function is directly estimated from the image volume by deconvolution analysis. This distortion function is then applied to sub-volumes of the image stack to adaptively adjust for spatially varying distortion and reduce image blurring through blind deconvolution. The proposed technique was first evaluated in phantom imaging of fluorescent microspheres that are comparable in size to the underlying capillary vascular structures. The effectiveness of restoring the three-dimensional spherical geometry of the microspheres using the estimated distortion function was compared with that of an empirically measured point-spread function. Next, the proposed approach was applied to in vivo vascular imaging of mouse skeletal muscle to reduce the image distortion of the capillary structures. We show that the proposed method effectively improves image quality and reduces the spatially varying distortion that occurs in large field-of-view, deep-tissue vascular datasets. The proposed method will help in qualitative interpretation and quantitative analysis of vascular structures from fluorescence microscopy images. PMID:26224257

  7. Ejecta distribution patterns at Meteor Crater, Arizona: On the applicability of lithologic end-member deconvolution for spaceborne thermal infrared data of Earth and Mars

    NASA Astrophysics Data System (ADS)

    Ramsey, Michael S.

    2002-08-01

    A spectral deconvolution using a constrained least squares approach was applied to airborne thermal infrared multispectral scanner (TIMS) data of Meteor Crater, Arizona. The three principal sedimentary units sampled by the impact were chosen as end-members, and their spectra were derived from the emissivity images. To validate previous estimates of the erosion of the near-rim ejecta, the model was used to identify the areal extent of the reworked material. The outputs of the algorithm reveal subtle mixing patterns in the ejecta, identify larger ejecta blocks, and further constrain the volume of Coconino Sandstone present in the vicinity of the crater. The availability of the multialtitude data set also provided a means to examine the effects of resolution degradation and quantify the subsequent errors on the model. These data served as a test case for the use of image-derived lithologic end-members at various scales, which is critical for examining thermal infrared data of planetary surfaces. The model results indicate that the reworked Coconino Sandstone ejecta is detectable more than 3 km from the crater. This was confirmed by field sampling within the primary ejecta field and wind streak. The areal distribution patterns of this unit imply past erosion and subsequent sediment transport that was low to moderate compared with early studies and therefore places further constraints on the ejecta degradation of Meteor Crater. It also provides an important example of the analysis that can be performed on thermal infrared data currently being returned from Earth orbit and expected from Mars in 2002.
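    Linear spectral unmixing treats each pixel's emissivity spectrum as a mixture of end-member spectra with abundances that sum to one. The sketch below is one simple constrained least-squares formulation (sum-to-one appended as a heavily weighted equation, negatives clipped); the six-band end-member spectra are hypothetical numbers, not the TIMS end-members.

```python
import numpy as np

def unmix(endmembers, spectrum, weight=100.0):
    """Least-squares abundance estimate with the sum-to-one constraint
    appended as a heavily weighted extra equation; any small negative
    abundances are clipped and the result renormalised."""
    m, k = endmembers.shape
    A = np.vstack([endmembers, weight * np.ones((1, k))])
    b = np.concatenate([spectrum, [weight]])
    f = np.linalg.lstsq(A, b, rcond=None)[0]
    f = np.maximum(f, 0.0)
    return f / f.sum()

# Hypothetical 6-band emissivity spectra for three end-member units
E = np.array([[0.90, 0.10, 0.30],
              [0.20, 0.80, 0.40],
              [0.50, 0.50, 0.90],
              [0.70, 0.30, 0.20],
              [0.10, 0.60, 0.80],
              [0.40, 0.20, 0.60]])
true_f = np.array([0.5, 0.3, 0.2])
mixed = E @ true_f            # linear mixture observed by the sensor
abundances = unmix(E, mixed)
```

    Mapping the abundance of one end-member (e.g. the sandstone unit) across the scene is what allows the areal extent of reworked material to be traced.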

  8. A simple method for correcting spatially resolved solar intensity oscillation observations for variations in scattered light

    NASA Technical Reports Server (NTRS)

    Jefferies, S. M.; Duvall, T. L., Jr.

    1991-01-01

    A measurement of the intensity distribution in an image of the solar disk will be corrupted by a spatial redistribution of the light that is caused by the earth's atmosphere and the observing instrument. A simple correction method is introduced here that is applicable for solar p-mode intensity observations obtained over a period of time in which there is a significant change in the scattering component of the point spread function. The method circumvents the problems incurred with an accurate determination of the spatial point spread function and its subsequent deconvolution from the observations. The method only corrects the spherical harmonic coefficients that represent the spatial frequencies present in the image and does not correct the image itself.

  9. Improved Phased Array Imaging of a Model Jet

    NASA Technical Reports Server (NTRS)

    Dougherty, Robert P.; Podboy, Gary G.

    2010-01-01

    An advanced phased array system, OptiNav Array 48, and a new deconvolution algorithm, TIDY, have been used to make octave band images of supersonic and subsonic jet noise produced by the NASA Glenn Small Hot Jet Acoustic Rig (SHJAR). The results are much more detailed than previous jet noise images. Shock cell structures and the production of screech in an underexpanded supersonic jet are observed directly. Some trends are similar to observations using spherical and elliptic mirrors that partially informed the two-source model of jet noise, but the radial distribution of high frequency noise near the nozzle appears to differ from expectations of this model. The beamforming approach has been validated by agreement between the integrated image results and the conventional microphone data.
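    Phased-array imaging of this kind starts from delay-and-sum beamforming, with deconvolution (such as TIDY) then removing the array's point-spread function. Below is a minimal free-field delay-and-sum sketch; the microphone geometry, sample rate and sound speed are toy values chosen so the delays are whole samples, and this is not the OptiNav implementation.

```python
import numpy as np

def delay_and_sum(signals, mics, focus, c, fs):
    """Advance each channel by its propagation delay from the focus point and
    average; the sum is coherent only when the focus matches the true source."""
    n = signals.shape[1]
    idx = np.arange(n)
    out = np.zeros(n)
    for sig, mic in zip(signals, mics):
        delay_samples = np.linalg.norm(np.subtract(focus, mic)) / c * fs
        out += np.interp(idx + delay_samples, idx, sig, left=0.0, right=0.0)
    return out / len(signals)

# Source at the origin; three microphones at increasing range, so a pulse
# emitted at sample 40 arrives with delays of 10, 20 and 30 samples
fs, c = 1000.0, 340.0
mics = [(3.4, 0.0), (6.8, 0.0), (10.2, 0.0)]
signals = np.zeros((3, 100))
for i, d in enumerate([10, 20, 30]):
    signals[i, 40 + d] = 1.0
focused = delay_and_sum(signals, mics, focus=(0.0, 0.0), c=c, fs=fs)
missed = delay_and_sum(signals, mics, focus=(50.0, 0.0), c=c, fs=fs)
```

    Scanning the focus point over a grid and recording the output power yields the raw beamform map that deconvolution then sharpens.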

  10. Periventricular Nodular Heterotopia: Detection of Abnormal Microanatomic Fiber Structures with Whole-Brain Diffusion MR Imaging Tractography.

    PubMed

    Farquharson, Shawna; Tournier, J-Donald; Calamante, Fernando; Mandelstam, Simone; Burgess, Rosemary; Schneider, Michal E; Berkovic, Samuel F; Scheffer, Ingrid E; Jackson, Graeme D; Connelly, Alan

    2016-12-01

    Purpose To investigate whether it is possible in patients with periventricular nodular heterotopia (PVNH) to detect abnormal fiber projections that have only previously been reported in the histopathology literature. Materials and Methods Whole-brain diffusion-weighted (DW) imaging data from 14 patients with bilateral PVNH and 14 age- and sex-matched healthy control subjects were prospectively acquired by using 3.0-T magnetic resonance (MR) imaging between August 1, 2008, and December 5, 2012. All participants provided written informed consent. The DW imaging data were processed to generate whole-brain constrained spherical deconvolution (CSD)-based tractography data and super-resolution track-density imaging (TDI) maps. The tractography data were overlaid on coregistered three-dimensional T1-weighted images to visually assess regions of heterotopia. A panel of MR imaging researchers independently assessed each case and indicated numerically (no = 1, yes = 2) as to the presence of abnormal fiber tracks in nodular tissue. The Fleiss κ statistical measure was applied to assess the reader agreement. Results Abnormal fiber tracks emanating from one or more regions of heterotopia were reported by all four readers in all 14 patients with PVNH (Fleiss κ = 1). These abnormal structures were not visible on the tractography data from any of the control subjects and were not discernable on the conventional T1-weighted images of the patients with PVNH. Conclusion Whole-brain CSD-based fiber tractography and super-resolution TDI mapping reveals abnormal fiber projections in nodular tissue suggestive of abnormal organization of white matter (with abnormal fibers both within nodules and projecting to the surrounding white matter) in patients with bilateral PVNH. © RSNA, 2016.

  11. New insights in the homotopic and heterotopic connectivity of the frontal portion of the human corpus callosum revealed by microdissection and diffusion tractography.

    PubMed

    De Benedictis, Alessandro; Petit, Laurent; Descoteaux, Maxime; Marras, Carlo Efisio; Barbareschi, Mattia; Corsini, Francesco; Dallabona, Monica; Chioffi, Franco; Sarubbo, Silvio

    2016-12-01

    Extensive studies revealed that the human corpus callosum (CC) plays a crucial role in providing large-scale bi-hemispheric integration of sensory, motor and cognitive processing, especially within the frontal lobe. However, the literature lacks conclusive data regarding the structural macroscopic connectivity of the frontal CC. In this study, a novel microdissection approach was adopted to expose the frontal fibers of the CC from the dorsum to the lateral cortex in eight hemispheres and in one entire brain. Post-mortem results were then combined with data from advanced constrained spherical deconvolution in 130 healthy subjects. We demonstrated that the frontal CC provides dense inter-hemispheric connections. In particular, we found three types of fronto-callosal fibers, having a dorso-ventral organization. First, the dorso-medial CC fibers subserve homotopic connections between the homologous medial cortices of the superior frontal gyrus. Second, the ventro-lateral CC fibers subserve homotopic connections between lateral frontal cortices, including both the middle frontal gyrus and the inferior frontal gyrus, as well as heterotopic connections between the medial and lateral frontal cortices. Third, the ventro-striatal CC fibers connect the medial and lateral frontal cortices with the contralateral putamen and caudate nucleus. We also highlighted an intricate crossing of CC fibers with the main association pathways terminating in the lateral regions of the frontal lobes. This combined approach of ex vivo microdissection and in vivo diffusion tractography demonstrated a previously unappreciated three-dimensional architecture of the anterior frontal CC, thus clarifying the functional role of the CC in mediating inter-hemispheric connectivity. Hum Brain Mapp 37:4718-4735, 2016. © 2016 Wiley Periodicals, Inc.

  12. Advanced fiber tracking in early acquired brain injury causing cerebral palsy.

    PubMed

    Lennartsson, F; Holmström, L; Eliasson, A-C; Flodmark, O; Forssberg, H; Tournier, J-D; Vollmer, B

    2015-01-01

    Diffusion-weighted MR imaging and fiber tractography can be used to investigate alterations in white matter tracts in patients with early acquired brain lesions and cerebral palsy. Most existing studies have used diffusion tensor tractography, which is limited in areas of complex fiber structures or pathologic processes. We explored a combined normalization and probabilistic fiber-tracking method for more realistic fiber tractography in this patient group. This cross-sectional study included 17 children with unilateral cerebral palsy and 24 typically developing controls. DWI data were collected at 1.5T (45 directions, b = 1000 s/mm²). Regions of interest were defined on a study-specific fractional anisotropy template and mapped onto subjects for fiber tracking. Probabilistic fiber tracking of the corticospinal tract and thalamic projections to the somatosensory cortex was performed by using constrained spherical deconvolution. Tracts were qualitatively assessed, and DTI parameters were extracted close to and distant from lesions and compared between groups. The corticospinal tract and thalamic projections to the somatosensory cortex were realistically reconstructed in both groups. Structural changes to tracts were seen in the cerebral palsy group and included splits, dislocations, compaction of the tracts, or failure to delineate the tract and were associated with underlying pathology seen on conventional MR imaging. Comparisons of DTI parameters indicated primary and secondary neurodegeneration along the corticospinal tract. Corticospinal tract and thalamic projections to the somatosensory cortex showed dissimilarities in both structural changes and DTI parameters. Our proposed method offers a sensitive means to explore alterations in WM tracts to further understand pathophysiologic changes following early acquired brain injury. © 2015 by American Journal of Neuroradiology.

  13. Revisiting the human uncinate fasciculus, its subcomponents and asymmetries with stem-based tractography and microdissection validation.

    PubMed

    Hau, Janice; Sarubbo, Silvio; Houde, Jean Christophe; Corsini, Francesco; Girard, Gabriel; Deledalle, Charles; Crivello, Fabrice; Zago, Laure; Mellet, Emmanuel; Jobard, Gaël; Joliot, Marc; Mazoyer, Bernard; Tzourio-Mazoyer, Nathalie; Descoteaux, Maxime; Petit, Laurent

    2017-05-01

    Despite its significant functional and clinical interest, the anatomy of the uncinate fasciculus (UF) has received little attention. It is known as a 'hook-shaped' fascicle connecting the frontal and anterior temporal lobes and is believed to consist of multiple subcomponents. However, knowledge of its precise connectional anatomy in humans is lacking, and its subcomponent divisions are unclear. In the present study, we evaluate the anatomy of the UF and provide its detailed normative description in 30 healthy subjects, using advanced particle-filtering tractography with anatomical priors and robustness to crossing fibers through constrained spherical deconvolution. We extracted the UF by defining its stem as encompassing all streamlines that converge into a compact bundle, which consisted not only of the classic hook-shaped fibers but also of straight, horizontally oriented ones. We applied an automatic clustering method to subdivide the UF bundle and revealed five subcomponents in each hemisphere with distinct connectivity profiles, including different asymmetries. A layer-by-layer microdissection of the ventral part of the external and extreme capsules using Klingler's preparation also demonstrated five types of uncinate fibers that, according to their pattern, depth, and cortical terminations, were consistent with the diffusion-based UF subcomponents. The present results shed new light on the UF cortical terminations and its multicomponent internal organization, with extended cortical connections within the frontal and temporal cortices. The different lateralization patterns we report within the UF subcomponents reconcile the conflicting asymmetry findings of the literature. Such results clarifying the UF structural anatomy lay the groundwork for more targeted investigations of its functional role, especially in semantic language processing.

  14. Abnormal fronto-parietal white matter organisation in the superior longitudinal fasciculus branches in autism spectrum disorders.

    PubMed

    Fitzgerald, Jacqueline; Leemans, Alexander; Kehoe, Elizabeth; O'Hanlon, Erik; Gallagher, Louise; McGrath, Jane

    2018-03-01

    Core features of autism spectrum disorder (ASD) may be underpinned by disrupted functional and structural neural connectivity. Abnormal fronto-parietal functional connectivity has been widely reported in the literature; this may be underpinned by disrupted microstructural organisation of white matter. The superior longitudinal fasciculus (SLF) is a major fronto-parietal white matter tract, the structure of which has been little studied in ASD. The fronto-parietal projections of this tract (SLF I, II and III) are thought to play an important role in a number of cognitive functions including attention and visuospatial processing. To date, the isolation of the fronto-parietal branches of the SLF has been hampered by limitations of traditional tractography approaches. Constrained spherical deconvolution (CSD)-based tractography is an advanced approach that allows valid isolation of the fronto-parietal branches of the SLF. Diffusion MRI data were acquired from 45 participants with ASD and 45 age- and IQ-matched controls. The SLF I, II and III branches were isolated using CSD-based tractography in ExploreDTI. Significantly greater fractional anisotropy (FA) was observed in the right SLF II in the ASD group relative to controls. The ASD group also showed a greater linear diffusion coefficient in the left SLF I and the right SLF II. In the SLF II, the ASD group had significantly greater right lateralisation of FA in comparison with the control group. The clinical and functional implications of increased FA in white matter are poorly understood; however, it is possible that this increased white matter organisation in the SLF in ASD may contribute to relative processing advantages in the condition. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
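    Rightward lateralisation of a measure such as FA is typically quantified with a simple normalised index. The convention below is the common one; the FA values in the example are made up for illustration.

```python
def lateralisation_index(right, left):
    """Positive values indicate rightward asymmetry of the measure;
    negative values, leftward; 0 is symmetric."""
    return (right - left) / (right + left)

li = lateralisation_index(0.55, 0.45)   # e.g. right vs left SLF II FA
```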

  15. Abnormal functional connectivity during visuospatial processing is associated with disrupted organisation of white matter in autism

    PubMed Central

    McGrath, Jane; Johnson, Katherine; O'Hanlon, Erik; Garavan, Hugh; Leemans, Alexander; Gallagher, Louise

    2013-01-01

    Disruption of structural and functional neural connectivity has been widely reported in Autism Spectrum Disorder (ASD) but there is a striking lack of research attempting to integrate analysis of functional and structural connectivity in the same study population, an approach that may provide key insights into the specific neurobiological underpinnings of altered functional connectivity in autism. The aims of this study were (1) to determine whether functional connectivity abnormalities were associated with structural abnormalities of white matter (WM) in ASD and (2) to examine the relationships between aberrant neural connectivity and behavior in ASD. Twenty-two individuals with ASD and 22 age- and IQ-matched controls completed a high-angular-resolution diffusion MRI scan. Structural connectivity was analysed using constrained spherical deconvolution (CSD) based tractography. Regions for tractography were generated from the results of a previous study, in which 10 pairs of brain regions showed abnormal functional connectivity during visuospatial processing in ASD. WM tracts directly connected 5 of the 10 region pairs that showed abnormal functional connectivity; linking a region in the left occipital lobe (left BA19) and five paired regions: left caudate head, left caudate body, left uncus, left thalamus, and left cuneus. Measures of WM microstructural organization were extracted from these tracts. Fractional anisotropy (FA) reductions in the ASD group relative to controls were significant for WM connecting left BA19 to left caudate head and left BA19 to left thalamus. Using a multimodal imaging approach, this study has revealed aberrant WM microstructure in tracts that directly connect brain regions that are abnormally functionally connected in ASD. These results provide novel evidence to suggest that structural brain pathology may contribute (1) to abnormal functional connectivity and (2) to atypical visuospatial processing in ASD. PMID:24133425

  16. Multidimensional deconvolution of optical microscope and ultrasound imaging using adaptive least-mean-square (LMS) inverse filtering

    NASA Astrophysics Data System (ADS)

    Sapia, Mark Angelo

    2000-11-01

    Three-dimensional microscope images typically suffer from reduced resolution due to the effects of convolution, optical aberrations and out-of-focus blurring. Two- dimensional ultrasound images are also degraded by convolutional bluffing and various sources of noise. Speckle noise is a major problem in ultrasound images. In microscopy and ultrasound, various methods of digital filtering have been used to improve image quality. Several methods of deconvolution filtering have been used to improve resolution by reversing the convolutional effects, many of which are based on regularization techniques and non-linear constraints. The technique discussed here is a unique linear filter for deconvolving 3D fluorescence microscopy or 2D ultrasound images. The process is to solve for the filter completely in the spatial-domain using an adaptive algorithm to converge to an optimum solution for de-blurring and resolution improvement. There are two key advantages of using an adaptive solution: (1)it efficiently solves for the filter coefficients by taking into account all sources of noise and degraded resolution at the same time, and (2)achieves near-perfect convergence to the ideal linear deconvolution filter. This linear adaptive technique has other advantages such as avoiding artifacts of frequency-domain transformations and concurrent adaptation to suppress noise. Ultimately, this approach results in better signal-to-noise characteristics with virtually no edge-ringing. Many researchers have not adopted linear techniques because of poor convergence, noise instability and negative valued data in the results. The methods presented here overcome many of these well-documented disadvantages and provide results that clearly out-perform other linear methods and may also out-perform regularization and constrained algorithms. In particular, the adaptive solution is most responsible for overcoming the poor performance associated with linear techniques. 
This linear adaptive approach to deconvolution is demonstrated with results of restoring blurred phantoms for both microscopy and ultrasound and restoring 3D microscope images of biological cells and 2D ultrasound images of human subjects (courtesy of General Electric and Diasonics, Inc.).
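As an illustration only (not the author's implementation), the following is a minimal 1-D sketch of a spatial-domain adaptive deconvolution filter trained with the LMS rule; the tap count, step size, blur kernel and use of a known reference signal are all illustrative assumptions:

```python
import numpy as np

def lms_deconvolver(blurred, reference, n_taps=15, mu=0.05, epochs=20):
    """Adapt FIR taps in the spatial domain so that filtering `blurred`
    approximates a (delayed) `reference` signal, via the LMS update."""
    delay = n_taps // 2                      # allow a non-causal inverse
    w = np.zeros(n_taps)
    for _ in range(epochs):
        for n in range(n_taps, len(blurred)):
            x = blurred[n - n_taps:n][::-1]  # newest sample first
            e = reference[n - delay] - w @ x # error against delayed reference
            w += mu * e * x                  # LMS coefficient update
    return w

def apply_deconvolver(blurred, w):
    n_taps = len(w)
    delay = n_taps // 2
    out = blurred.copy()                     # fall back to input at the edges
    for n in range(n_taps, len(blurred)):
        out[n - delay] = w @ blurred[n - n_taps:n][::-1]
    return out

rng = np.random.default_rng(0)
truth = rng.standard_normal(400)
blurred = np.convolve(truth, [0.2, 0.6, 0.2], mode="same")  # assumed blur
w = lms_deconvolver(blurred, truth)
restored = apply_deconvolver(blurred, w)
```

After adaptation, the restored signal should sit closer to the reference than the blurred input, illustrating the convergence behavior the abstract emphasizes.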

  17. Background field removal technique using regularization enabled sophisticated harmonic artifact reduction for phase data with varying kernel sizes.

    PubMed

    Kan, Hirohito; Kasai, Harumasa; Arai, Nobuyuki; Kunitomo, Hiroshi; Hirose, Yasujiro; Shibamoto, Yuta

    2016-09-01

An effective background field removal technique is desired for more accurate quantitative susceptibility mapping (QSM) prior to dipole inversion. The aim of this study was to evaluate the accuracy of the regularization enabled sophisticated harmonic artifact reduction for phase data with varying spherical kernel sizes (REV-SHARP) method using a three-dimensional head phantom and human brain data. The proposed REV-SHARP method used the spherical mean value operation and Tikhonov regularization in the deconvolution process, with kernel sizes varying from 2 to 14 mm. The kernel sizes were gradually reduced, similar to the SHARP with varying spherical kernel (VSHARP) method. We determined the relative errors and relationships between the true local field and estimated local field in REV-SHARP, VSHARP, projection onto dipole fields (PDF), and regularization enabled SHARP (RESHARP). A human experiment was also conducted using REV-SHARP, VSHARP, PDF, and RESHARP. The relative errors in the numerical phantom study were 0.386, 0.448, 0.838, and 0.452 for REV-SHARP, VSHARP, PDF, and RESHARP, respectively. The REV-SHARP result exhibited the highest correlation between the true local field and estimated local field. The linear regression slopes were 1.005, 1.124, 0.988, and 0.536 for REV-SHARP, VSHARP, PDF, and RESHARP in regions of interest on the three-dimensional head phantom. In the human experiments, no obvious errors due to artifacts were present in REV-SHARP. The proposed REV-SHARP is a new method that combines variable spherical kernel sizes with Tikhonov regularization. This technique may enable more accurate background field removal and help achieve better QSM accuracy. Copyright © 2016 Elsevier Inc. All rights reserved.
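A generic 1-D sketch of Tikhonov-regularized deconvolution of a mean-value kernel, in the spirit of (but not reproducing) the REV-SHARP deconvolution step; the kernel length, regularization weight and test field are illustrative assumptions:

```python
import numpy as np

def tikhonov_deconvolve(signal, kernel, lam=1e-3):
    # Fourier-domain Tikhonov-regularized inverse:
    #   X = conj(K) * S / (|K|^2 + lam)
    n = len(signal)
    K = np.fft.fft(kernel, n)
    S = np.fft.fft(signal)
    return np.real(np.fft.ifft(np.conj(K) * S / (np.abs(K) ** 2 + lam)))

t = np.linspace(0.0, 1.0, 256, endpoint=False)
field = np.sin(2 * np.pi * 8 * t)          # stand-in "local field"
kernel = np.zeros(256)
kernel[:17] = 1.0 / 17.0                   # 17-point mean: 1-D analog of an SMV kernel
blurred = np.real(np.fft.ifft(np.fft.fft(field) * np.fft.fft(kernel)))
restored = tikhonov_deconvolve(blurred, kernel)
```

The regularization weight trades noise amplification against residual smoothing, which is the tuning the varying-kernel-size scheme above is designed to manage.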

  18. Application of an NLME-Stochastic Deconvolution Approach to Level A IVIVC Modeling.

    PubMed

    Kakhi, Maziar; Suarez-Sharp, Sandra; Shepard, Terry; Chittenden, Jason

    2017-07-01

Stochastic deconvolution is a parameter estimation method that calculates drug absorption using a nonlinear mixed-effects model in which the random effects associated with absorption represent a Wiener process. The present work compares (1) stochastic deconvolution and (2) numerical deconvolution, using clinical pharmacokinetic (PK) data generated for an in vitro-in vivo correlation (IVIVC) study of extended release (ER) formulations of a Biopharmaceutics Classification System class III drug substance. The preliminary analysis found that numerical and stochastic deconvolution yielded superimposable fraction absorbed (Fabs) versus time profiles when supplied with exactly the same externally determined unit impulse response parameters. In a separate analysis, a full population-PK/stochastic deconvolution was applied to the clinical PK data. Scenarios were considered in which immediate release (IR) data were either retained or excluded to inform parameter estimation. The resulting Fabs profiles were then used to model level A IVIVCs. All the considered stochastic deconvolution scenarios, and numerical deconvolution, yielded on average similar results with respect to the IVIVC validation. These results could be achieved with stochastic deconvolution without recourse to IR data. Unlike numerical deconvolution, this also implies that in crossover studies where certain individuals do not receive an IR treatment, their ER data alone can still be included as part of the IVIVC analysis. Published by Elsevier Inc.
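For the comparator method, a hedged sketch of plain numerical deconvolution: discretizing the convolution of the input (absorption) rate with the unit impulse response gives a lower-triangular linear system that can be solved directly. Function names and the first-order kinetics are illustrative assumptions, not the study's model:

```python
import numpy as np

def numerical_deconvolution(conc, uir, dt):
    """Solve the discrete convolution C = (R * UIR) * dt for the input
    rate R, then integrate to a normalized fraction-absorbed profile."""
    n = len(conc)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, :i + 1] = uir[i::-1] * dt       # lower-triangular Toeplitz matrix
    rate = np.linalg.solve(A, conc)          # input (absorption) rate
    fabs = np.cumsum(rate) * dt              # cumulative amount absorbed
    return rate, fabs / fabs[-1]             # fraction absorbed, normalized

dt = 0.1
t = np.arange(60) * dt
uir = np.exp(-0.5 * t)                       # assumed unit impulse response
ka = 1.0
true_rate = ka * np.exp(-ka * t)             # assumed first-order absorption
conc = dt * np.array([np.sum(true_rate[:i + 1] * uir[i::-1]) for i in range(60)])
rate, fabs = numerical_deconvolution(conc, uir, dt)
```

On noiseless synthetic data this recovers the input rate exactly; in practice the direct solve is noise-sensitive, which motivates the smoothed stochastic-deconvolution alternative described above.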

  19. Electrons on a spherical surface: Physical properties and hollow spherical clusters

    NASA Astrophysics Data System (ADS)

    Cricchio, Dario; Fiordilino, Emilio; Persico, Franco

    2012-07-01

    We discuss the physical properties of a noninteracting electron gas constrained to a spherical surface. In particular we consider its chemical potentials, its ionization potential, and its electric static polarizability. All these properties are discussed analytically as functions of the number N of electrons. The trends obtained with increasing N are compared with those of the corresponding properties experimentally measured or theoretically evaluated for quasispherical hollow atomic and molecular clusters. Most of the properties investigated display similar trends, characterized by a prominence of shell effects. This leads to the definition of a scale-invariant distribution of magic numbers which follows a power law with critical exponent -0.5. We conclude that our completely mechanistic and analytically tractable model can be useful for the analysis of self-assembling complex systems.

  20. Experimental and modeling studies of small molecule chemistry in expanding spherical flames

    NASA Astrophysics Data System (ADS)

    Santner, Jeffrey

    Accurate models of flame chemistry are required in order to predict emissions and flame properties, such that clean, efficient engines can be designed more easily. There are three primary methods used to improve such combustion chemistry models - theoretical reaction rate calculations, elementary reaction rate experiments, and combustion system experiments. This work contributes to model improvement through the third method - measurements and analysis of the laminar burning velocity at constraining conditions. Modern combustion systems operate at high pressure with strong exhaust gas dilution in order to improve efficiency and reduce emissions. Additionally, flames under these conditions are sensitized to elementary reaction rates such that measurements constrain modeling efforts. Measurement conditions of the present work operate within this intersection between applications and fundamental science. Experiments utilize a new pressure-release, heated spherical combustion chamber with a variety of fuels (high hydrogen content fuels, formaldehyde (via 1,3,5-trioxane), and C2 fuels) at pressures from 0.5--25 atm, often with dilution by water vapor or carbon dioxide to flame temperatures below 2000 K. The constraining ability of these measurements depends on their uncertainty. Thus, the present work includes a novel analytical estimate of the effects of thermal radiative heat loss on burning velocity measurements in spherical flames. For 1,3,5-trioxane experiments, global measurements are sufficiently sensitive to elementary reaction rates that optimization techniques are employed to indirectly measure the reaction rates of HCO consumption. Besides the influence of flame chemistry on propagation, this work also explores the chemistry involved in production of nitric oxide, a harmful pollutant, within flames. We find significant differences among available chemistry models, both in mechanistic structure and quantitative reaction rates. 
There is a lack of well-defined measurements of nitric oxide formation at high temperatures, contributing to disagreement between chemical models. This work accomplishes several goals. It identifies disagreements in pollutant formation chemistry. It creates a novel database of burning velocity measurements at relevant, sensitive conditions. It presents a simple, conservative estimate of radiation-induced measurement uncertainty in spherical flames. Finally, it utilizes systems-level flame experiments to indirectly measure elementary reaction rates.

  1. Partial Deconvolution with Inaccurate Blur Kernel.

    PubMed

    Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei

    2017-10-17

Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to an inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with an inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of the estimated blur kernel. Partial deconvolution is then applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an EM algorithm is developed for estimating the partial map and recovering the latent sharp image alternately. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by an inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.
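A simplified 1-D sketch of the underlying idea of inverting only reliable Fourier entries; this is a crude stand-in for the paper's partial map combined with wavelet/learning-based priors, and the threshold, kernel perturbation and noise level are all assumptions:

```python
import numpy as np

def partial_inverse(blurred, est_kernel, tau=0.02):
    # Invert only the Fourier entries where the estimated kernel is deemed
    # reliable (|K| > tau); keep the observed spectrum elsewhere.
    n = len(blurred)
    K = np.fft.fft(est_kernel, n)
    B = np.fft.fft(blurred)
    mask = np.abs(K) > tau                    # crude stand-in for a partial map
    X = np.where(mask, B / np.where(mask, K, 1.0), B)
    return np.real(np.fft.ifft(X))

rng = np.random.default_rng(0)
n = 256
truth = rng.standard_normal(n)
kernel = np.zeros(n)
kernel[:9] = 1.0 / 9.0                        # true blur: 9-point mean
blurred = np.real(np.fft.ifft(np.fft.fft(truth) * np.fft.fft(kernel)))
blurred += 0.01 * rng.standard_normal(n)      # measurement noise
est = kernel.copy()
est[4] *= 1.01                                # slightly wrong kernel estimate
full = partial_inverse(blurred, est, tau=0.0) # naive full inverse filter
part = partial_inverse(blurred, est, tau=0.02)
```

Near spectral nulls of the kernel, the naive inverse amplifies both noise and kernel-estimation error; masking those entries is what keeps the partial restoration stable.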

  2. An RBF-based compression method for image-based relighting.

    PubMed

    Leung, Chi-Sing; Wong, Tien-Tsin; Lam, Ping-Man; Choy, Kwok-Hung

    2006-04-01

    In image-based relighting, a pixel is associated with a number of sampled radiance values. This paper presents a two-level compression method. In the first level, the plenoptic property of a pixel is approximated by a spherical radial basis function (SRBF) network. That means that the spherical plenoptic function of each pixel is represented by a number of SRBF weights. In the second level, we apply a wavelet-based method to compress these SRBF weights. To reduce the visual artifact due to quantization noise, we develop a constrained method for estimating the SRBF weights. Our proposed approach is superior to JPEG, JPEG2000, and MPEG. Compared with the spherical harmonics approach, our approach has a lower complexity, while the visual quality is comparable. The real-time rendering method for our SRBF representation is also discussed.

  3. Towards anti-causal Green's function for three-dimensional sub-diffraction focusing

    NASA Astrophysics Data System (ADS)

    Ma, Guancong; Fan, Xiying; Ma, Fuyin; de Rosny, Julien; Sheng, Ping; Fink, Mathias

    2018-06-01

    In causal physics, the causal Green's function describes the radiation of a point source. Its counterpart, the anti-causal Green's function, depicts a spherically converging wave. However, in free space, any converging wave must be followed by a diverging one. Their interference gives rise to the diffraction limit that constrains the smallest possible dimension of a wave's focal spot in free space, which is half the wavelength. Here, we show with three-dimensional acoustic experiments that we can realize a stand-alone anti-causal Green's function in a large portion of space up to a subwavelength distance from the focus point by introducing a near-perfect absorber for spherical waves at the focus. We build this subwavelength absorber based on membrane-type acoustic metamaterial, and experimentally demonstrate focusing of spherical waves beyond the diffraction limit.

  4. Deconvolution method for accurate determination of overlapping peak areas in chromatograms.

    PubMed

    Nelson, T J

    1991-12-20

    A method is described for deconvoluting chromatograms which contain overlapping peaks. Parameters can be selected to ensure that attenuation of peak areas is uniform over any desired range of peak widths. A simple extension of the method greatly reduces the negative overshoot frequently encountered with deconvolutions. The deconvoluted chromatograms are suitable for integration by conventional methods.
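A hedged sketch of one way to sharpen overlapping Gaussian peaks by Fourier-domain deconvolution with unit gain at zero frequency, so that integrated peak areas are preserved; this is a generic construction, not the paper's specific parameter-selection scheme:

```python
import numpy as np

def sharpen_peaks(chrom, sigma_remove, dt=1.0, lam=1e-6):
    # Deconvolve a Gaussian line-shape in the Fourier domain. Since G(0) = 1
    # and lam is small, the gain at zero frequency is ~1, so the total
    # integrated area of the chromatogram is left essentially unchanged.
    n = len(chrom)
    f = np.fft.fftfreq(n, d=dt)
    G = np.exp(-2.0 * (np.pi * f * sigma_remove) ** 2)   # FT of a Gaussian
    C = np.fft.fft(chrom)
    return np.real(np.fft.ifft(C * G / (G ** 2 + lam)))  # regularized inverse

t = np.arange(200.0)
# Two overlapping Gaussian peaks (sigma = 4) centered at t = 50 and t = 60
chrom = np.exp(-(t - 50) ** 2 / 32.0) + 0.8 * np.exp(-(t - 60) ** 2 / 32.0)
sharp = sharpen_peaks(chrom, sigma_remove=3.0)
```

Removing a sigma = 3 Gaussian component from sigma = 4 peaks narrows them to width sqrt(16 - 9), making the pair easier to integrate while the total area is conserved.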

  5. Deconvolution of continuous paleomagnetic data from pass-through magnetometer: A new algorithm to restore geomagnetic and environmental information based on realistic optimization

    NASA Astrophysics Data System (ADS)

    Oda, Hirokuni; Xuan, Chuang

    2014-10-01

The development of pass-through superconducting rock magnetometers (SRM) has greatly promoted collection of paleomagnetic data from continuous long-core samples. The output of pass-through measurement is smoothed and distorted due to convolution of magnetization with the magnetometer sensor response. Although several studies could restore high-resolution paleomagnetic signal through deconvolution of pass-through measurement, difficulties in accurately measuring the magnetometer sensor response have hindered the application of deconvolution. We acquired reliable sensor response of an SRM at the Oregon State University based on repeated measurements of a precisely fabricated magnetic point source. In addition, we present an improved deconvolution algorithm based on Akaike's Bayesian Information Criterion (ABIC) minimization, incorporating new parameters to account for errors in sample measurement position and length. The new algorithm was tested using synthetic data constructed by convolving "true" paleomagnetic signal containing an "excursion" with the sensor response. Realistic noise was added to the synthetic measurement using a Monte Carlo method based on measurement noise distribution acquired from 200 repeated measurements of a u-channel sample. Deconvolution of 1000 synthetic measurements with realistic noise closely resembles the "true" magnetization, and successfully restored fine-scale magnetization variations including the "excursion." Our analyses show that inaccuracy in sample measurement position and length significantly affects deconvolution estimation, and can be resolved using the new deconvolution algorithm. Optimized deconvolution of 20 repeated measurements of a u-channel sample yielded highly consistent deconvolution results and estimates of error in sample measurement position and length, demonstrating the reliability of the new deconvolution algorithm for real pass-through measurements.

  6. UDECON: deconvolution optimization software for restoring high-resolution records from pass-through paleomagnetic measurements

    NASA Astrophysics Data System (ADS)

    Xuan, Chuang; Oda, Hirokuni

    2015-11-01

The rapid accumulation of continuous paleomagnetic and rock magnetic records acquired from pass-through measurements on superconducting rock magnetometers (SRM) has greatly contributed to our understanding of the paleomagnetic field and paleo-environment. Pass-through measurements are inevitably smoothed and altered by the convolution effect of SRM sensor response, and deconvolution is needed to restore high-resolution paleomagnetic and environmental signals. Although various deconvolution algorithms have been developed, the lack of easy-to-use software has hindered the practical application of deconvolution. Here, we present standalone graphical software UDECON as a convenient tool to perform optimized deconvolution for pass-through paleomagnetic measurements using the algorithm recently developed by Oda and Xuan (Geochem Geophys Geosyst 15:3907-3924, 2014). With the preparation of a format file, UDECON can directly read pass-through paleomagnetic measurement files collected at different laboratories. After the SRM sensor response is determined and loaded to the software, optimized deconvolution can be conducted using two different approaches (i.e., "Grid search" and "Simplex method") with adjustable initial values or ranges for smoothness, corrections of sample length, and shifts in measurement position. UDECON provides a suite of tools to conveniently view and check various types of original measurement and deconvolution data. Multiple steps of measurement and/or deconvolution data can be compared simultaneously to check the consistency and to guide further deconvolution optimization. Deconvolved data together with the loaded original measurement and SRM sensor response data can be saved and reloaded for further treatment in UDECON. Users can also export the optimized deconvolution data to a text file for analysis in other software.

  7. A neural network approach for the blind deconvolution of turbulent flows

    NASA Astrophysics Data System (ADS)

    Maulik, R.; San, O.

    2017-11-01

    We present a single-layer feedforward artificial neural network architecture trained through a supervised learning approach for the deconvolution of flow variables from their coarse grained computations such as those encountered in large eddy simulations. We stress that the deconvolution procedure proposed in this investigation is blind, i.e. the deconvolved field is computed without any pre-existing information about the filtering procedure or kernel. This may be conceptually contrasted to the celebrated approximate deconvolution approaches where a filter shape is predefined for an iterative deconvolution process. We demonstrate that the proposed blind deconvolution network performs exceptionally well in the a-priori testing of both two-dimensional Kraichnan and three-dimensional Kolmogorov turbulence and shows promise in forming the backbone of a physics-augmented data-driven closure for the Navier-Stokes equations.
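A toy sketch of the idea, assuming a tiny numpy network and a synthetic 1-D field in place of Kraichnan/Kolmogorov turbulence; the low-pass filter, stencil width and training settings are illustrative, and the network is trained purely on (filtered, unfiltered) pairs with no knowledge of the filter kernel:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 512
# Synthetic periodic field with energy at many wavenumbers (a crude 1-D
# stand-in for a turbulent field)
x = np.arange(n)
u = sum(rng.normal() * np.cos(2 * np.pi * k * x / n + rng.uniform(0, 2 * np.pi))
        for k in range(4, 61, 4))
# Coarse-graining filter (unknown to the network): 7-point circular mean
ubar = np.convolve(np.r_[u[-3:], u, u[:3]], np.ones(7) / 7, mode="valid")

s = u.std()
# Features: 5-point stencil of filtered values -> target: unfiltered center value
X = np.stack([np.roll(ubar, d) for d in (-2, -1, 0, 1, 2)], axis=1) / s
y = u / s

# Single-hidden-layer feedforward network, full-batch gradient descent
h, lr = 8, 0.1
W1 = 0.5 * rng.standard_normal((5, h))
b1 = np.zeros(h)
W2 = 0.5 * rng.standard_normal(h)
b2 = 0.0
for _ in range(4000):
    a = np.tanh(X @ W1 + b1)                 # hidden activations
    err = a @ W2 + b2 - y                    # prediction error
    da = np.outer(err, W2) * (1.0 - a * a)   # backprop through tanh
    W1 -= lr * (X.T @ da / n)
    b1 -= lr * da.mean(axis=0)
    W2 -= lr * (a.T @ err / n)
    b2 -= lr * err.mean()
deconv = np.tanh(X @ W1 + b1) @ W2 + b2      # "deconvolved" field estimate
```

The test of merit, as in a-priori analysis, is simply whether the network's output lies closer to the unfiltered field than the filtered input itself.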

  8. Caging Mechanism for a drag-free satellite position sensor

    NASA Technical Reports Server (NTRS)

    Hacker, R.; Mathiesen, J.; Debra, D. B.

    1976-01-01

    A disturbance compensation system for satellites based on the drag-free concept was mechanized and flown, using a spherical proof mass and a cam-guided caging mechanism. The caging mechanism controls the location of the proof mass for testing and constrains it during launch. Design requirements, design details, and hardware are described.

  9. Crowded field photometry with deconvolved images.

    NASA Astrophysics Data System (ADS)

    Linde, P.; Spännare, S.

A local implementation of the Lucy-Richardson algorithm has been used to deconvolve a set of crowded stellar field images. The effects of deconvolution on detection limits as well as on photometric and astrometric properties have been investigated as a function of the number of deconvolution iterations. Results show that deconvolution improves detection of faint stars, although artifacts are also found. Deconvolution makes more stars measurable without significant degradation of positional accuracy. The photometric precision is affected by deconvolution in several ways. Errors due to unresolved images are notably reduced, while flux redistribution between stars and background increases the errors.
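A minimal 1-D sketch of Richardson-Lucy deconvolution as it is typically implemented (not the authors' local implementation); the PSF, "star" field and iteration count are illustrative assumptions:

```python
import numpy as np

def richardson_lucy(data, psf, n_iter=30):
    # Multiplicative Richardson-Lucy updates; assumes nonnegative data
    # and a normalized, shift-invariant PSF.
    psf = psf / psf.sum()
    mirror = psf[::-1]
    est = np.full_like(data, data.mean())    # flat nonnegative initial guess
    for _ in range(n_iter):
        conv = np.convolve(est, psf, mode="same")
        ratio = data / np.maximum(conv, 1e-12)
        est = est * np.convolve(ratio, mirror, mode="same")
    return est

x = np.arange(15.0)
psf = np.exp(-(x - 7) ** 2 / (2 * 1.5 ** 2))  # Gaussian PSF
truth = np.full(100, 0.1)                     # sky background
truth[30] += 5.0                              # two close "stars"
truth[34] += 3.0
blurred = np.convolve(truth, psf / psf.sum(), mode="same")
restored = richardson_lucy(blurred, psf, n_iter=50)
```

The multiplicative update keeps the estimate nonnegative by construction, one reason the algorithm is popular for crowded-field photometry.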

  10. Probing Hotspot Conditions in Spherically Shock Compressed Matter

    NASA Astrophysics Data System (ADS)

    Bachmann, Benjamin; Nilsen, J.; Kritcher, A. L.; Swift, D.; Rygg, J. R.; Collins, G. W.; Divol, L.; Falcone, R. W.; Gaffney, J.; Glenzer, S. H.; Hatarik, R.; Hawreliak, J.; Khan, S.; Kraus, D.; Landen, O. L.; Masters, N.; Nagel, S. R.; Pardini, T.; Zimmerman, G.; Doeppner, T.

    2015-11-01

We present results of an approach to experimentally determine the conditions in the center of a CD2 sphere that has been compressed to petapascal pressures by spherically converging shocks. By measuring the hotspot size using penumbral imaging, hotspot temperature using two-color spectroscopy, the neutron yield from DD nuclear reactions and the x-ray burn width, we infer average hotspot densities of 43 g/cm3 at 1.6 keV temperature. These conditions correspond to pressures of 4.4 petapascal (44 Gbar) in an ideal gas and 3.5 petapascal from independently performed radiation-hydrodynamics simulations. The experimentally determined neutron yield, temperature and density constrain the EOS in a regime that exceeds previously reported pressures obtained in carbon EOS measurements by three orders of magnitude. The results show a path for constraining the EOS of matter at conditions that have been inaccessible with state-of-the-art experimental EOS techniques. This work was performed under the auspices of the U.S. DOE by LLNL under Contract DE-AC52-07NA27344 and LDRD Grant 13-ERD-073

  11. Improving ground-penetrating radar data in sedimentary rocks using deterministic deconvolution

    USGS Publications Warehouse

    Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.; Byrnes, A.P.

    2003-01-01

    Resolution is key to confidently identifying unique geologic features using ground-penetrating radar (GPR) data. Source wavelet "ringing" (related to bandwidth) in a GPR section limits resolution because of wavelet interference, and can smear reflections in time and/or space. The resultant potential for misinterpretation limits the usefulness of GPR. Deconvolution offers the ability to compress the source wavelet and improve temporal resolution. Unlike statistical deconvolution, deterministic deconvolution is mathematically simple and stable while providing the highest possible resolution because it uses the source wavelet unique to the specific radar equipment. Source wavelets generated in, transmitted through and acquired from air allow successful application of deterministic approaches to wavelet suppression. We demonstrate the validity of using a source wavelet acquired in air as the operator for deterministic deconvolution in a field application using "400-MHz" antennas at a quarry site characterized by interbedded carbonates with shale partings. We collected GPR data on a bench adjacent to cleanly exposed quarry faces in which we placed conductive rods to provide conclusive groundtruth for this approach to deconvolution. The best deconvolution results, which are confirmed by the conductive rods for the 400-MHz antenna tests, were observed for wavelets acquired when the transmitter and receiver were separated by 0.3 m. Applying deterministic deconvolution to GPR data collected in sedimentary strata at our study site resulted in an improvement in resolution (50%) and improved spatial location (0.10-0.15 m) of geologic features compared to the same data processed without deterministic deconvolution. 
The effectiveness of deterministic deconvolution for increased resolution and spatial accuracy of specific geologic features is further demonstrated by comparing results of deconvolved data with nondeconvolved data acquired along a 30-m transect immediately adjacent to a fresh quarry face. The results at this site support using deterministic deconvolution, which incorporates the GPR instrument's unique source wavelet, as a standard part of routine GPR data processing. © 2003 Elsevier B.V. All rights reserved.

  12. Correction for frequency-dependent hydrophone response to nonlinear pressure waves using complex deconvolution and rarefactional filtering: application with fiber optic hydrophones.

    PubMed

    Wear, Keith; Liu, Yunbo; Gammell, Paul M; Maruvada, Subha; Harris, Gerald R

    2015-01-01

Nonlinear acoustic signals contain significant energy at many harmonic frequencies. For many applications, the sensitivity (frequency response) of a hydrophone will not be uniform over such a broad spectrum. In a continuation of a previous investigation involving deconvolution methodology, deconvolution (implemented in the frequency domain as an inverse filter computed from frequency-dependent hydrophone sensitivity) was investigated for improvement of accuracy and precision of nonlinear acoustic output measurements. Time-delay spectrometry was used to measure complex sensitivities for 6 fiber-optic hydrophones. The hydrophones were then used to measure a pressure wave with rich harmonic content. Spectral asymmetry between compressional and rarefactional segments was exploited to design filters used in conjunction with deconvolution. Complex deconvolution reduced mean bias (for 6 fiber-optic hydrophones) from 163% to 24% for peak compressional pressure (p+), from 113% to 15% for peak rarefactional pressure (p-), and from 126% to 29% for pulse intensity integral (PII). Complex deconvolution reduced mean coefficient of variation (COV) (for 6 fiber-optic hydrophones) from 18% to 11% (p+), 53% to 11% (p-), and 20% to 16% (PII). Deconvolution based on sensitivity magnitude or the minimum phase model also resulted in significant reductions in mean bias and COV of acoustic output parameters but was less effective than direct complex deconvolution for p+ and p-. Therefore, deconvolution with appropriate filtering facilitates reliable nonlinear acoustic output measurements using hydrophones with frequency-dependent sensitivity.
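A hedged sketch of complex deconvolution as a frequency-domain inverse filter, using a synthetic complex sensitivity; the band mask, sensitivity model (amplitude ripple plus a pure delay) and pulse shape are assumptions, not measured hydrophone data:

```python
import numpy as np

def voltage_to_pressure(voltage, sensitivity, band):
    # Complex deconvolution: P(f) = V(f) / M(f) on trusted frequencies,
    # zero elsewhere to avoid dividing by poorly characterized sensitivity.
    V = np.fft.fft(voltage)
    P = np.where(band, V / np.where(band, sensitivity, 1.0), 0.0)
    return np.real(np.fft.ifft(P))

n = 256
t = np.arange(n)
pressure = np.exp(-(t - 128) ** 2 / (2 * 8.0 ** 2)) * np.sin(0.5 * t)  # pulse
k = np.fft.fftfreq(n)                           # cycles/sample, Hermitian grid
# Synthetic complex sensitivity: amplitude ripple and a 5-sample delay
M = (1.0 + 0.5 * np.cos(2 * np.pi * k)) * np.exp(-2j * np.pi * k * 5.0)
voltage = np.real(np.fft.ifft(np.fft.fft(pressure) * M))  # what is recorded
band = np.abs(M) > 0.1                          # trusted calibration band
recovered = voltage_to_pressure(voltage, M, band)
```

Because both magnitude and phase of the sensitivity are divided out, the recovered waveform restores the true compressional and rarefactional peaks that a magnitude-only correction would misplace.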

  13. Fast analytical spectral filtering methods for magnetic resonance perfusion quantification.

    PubMed

    Reddy, Kasireddy V; Mitra, Abhishek; Yalavarthy, Phaneendra K

    2016-08-01

The deconvolution in perfusion weighted imaging (PWI) plays an important role in quantifying the MR perfusion parameters. The PWI application to stroke and brain tumor studies has become a standard clinical practice. The standard approaches for this deconvolution are oscillatory-limited singular value decomposition (oSVD) and frequency domain deconvolution (FDD). The FDD is widely recognized as the fastest approach currently available for deconvolution of MR perfusion data. In this work, two fast deconvolution methods (namely analytical Fourier filtering and analytical Showalter spectral filtering) are proposed. Through systematic evaluation, the proposed methods are shown to be computationally efficient and quantitatively accurate compared to FDD and oSVD.

  14. Optimized Deconvolution for Maximum Axial Resolution in Three-Dimensional Aberration-Corrected Scanning Transmission Electron Microscopy

    PubMed Central

    Ramachandra, Ranjan; de Jonge, Niels

    2012-01-01

Three-dimensional (3D) data sets were recorded of gold nanoparticles placed on both sides of silicon nitride membranes using focal series aberration-corrected scanning transmission electron microscopy (STEM). The deconvolution of the 3D datasets was optimized to obtain the highest possible axial resolution. The deconvolution involved two different point spread functions (PSFs), each calculated iteratively via blind deconvolution. Supporting membranes of different thicknesses were tested to study the effect of beam broadening on the deconvolution. It was found that several iterations of deconvolution were efficient in reducing the imaging noise. With an increasing number of iterations, the axial resolution was increased, and most of the structural information was preserved. Additional iterations improved the axial resolution by up to a factor of 4 to 6, depending on the particular dataset, to at best 8 nm, but at the cost of a reduction of the lateral size of the nanoparticles in the image. Thus, the deconvolution procedure optimized for the highest axial resolution is best suited for applications where one is interested in the 3D locations of nanoparticles only. PMID:22152090

  15. Quantitative imaging of aggregated emulsions.

    PubMed

    Penfold, Robert; Watson, Andrew D; Mackie, Alan R; Hibberd, David J

    2006-02-28

    Noise reduction, restoration, and segmentation methods are developed for the quantitative structural analysis in three dimensions of aggregated oil-in-water emulsion systems imaged by fluorescence confocal laser scanning microscopy. Mindful of typical industrial formulations, the methods are demonstrated for concentrated (30% volume fraction) and polydisperse emulsions. Following a regularized deconvolution step using an analytic optical transfer function and appropriate binary thresholding, novel application of the Euclidean distance map provides effective discrimination of closely clustered emulsion droplets with size variation over at least 1 order of magnitude. The a priori assumption of spherical nonintersecting objects provides crucial information to combat the ill-posed inverse problem presented by locating individual particles. Position coordinates and size estimates are recovered with sufficient precision to permit quantitative study of static geometrical features. In particular, aggregate morphology is characterized by a novel void distribution measure based on the generalized Apollonius problem. This is also compared with conventional Voronoi/Delauney analysis.

  16. Monkey to human comparative anatomy of the frontal lobe association tracts.

    PubMed

    Thiebaut de Schotten, Michel; Dell'Acqua, Flavio; Valabregue, Romain; Catani, Marco

    2012-01-01

The greater expansion of the frontal lobes along the phylogeny scale has been interpreted as the signature of evolutionary changes underlying higher cognitive functions in humans. However, it is unknown how an increase in the number of gyri, sulci and cortical areas in the frontal lobe has coincided with a parallel increase in connectivity. Here, using advanced tractography based on spherical deconvolution, we produced an atlas of human frontal association connections that we compared with axonal tracing studies of the monkey brain. We report several similarities between human and monkey in the cingulum, uncinate, superior longitudinal fasciculus, frontal aslant tract and orbito-polar tract. These similarities suggest preserved functions across anthropoids. In addition, we found major differences in the arcuate fasciculus and the inferior fronto-occipital fasciculus. These differences indicate possible evolutionary changes in the connectional anatomy of the frontal lobes underlying unique human abilities. Copyright © 2011 Elsevier Srl. All rights reserved.

  17. Trust in Testimony: How Children Learn about Science and Religion

    ERIC Educational Resources Information Center

    Harris, Paul L.; Koenig, Melissa A.

    2006-01-01

    Many adult beliefs are based on the testimony provided by other people rather than on firsthand observation. Children also learn from other people's testimony. For example, they learn that mental processes depend on the brain, that the earth is spherical, and that hidden bodily organs constrain life and death. Such learning might indicate that…

  18. Effects of the oceans on polar motion: Extended investigations

    NASA Technical Reports Server (NTRS)

    Dickman, Steven R.

    1986-01-01

    A method was found for expressing the tide current velocities in terms of the tide height (with all variables expanded in spherical harmonics). All time equations were then combined into a single, nondifferential matrix equation involving only the unknown tide height. The pole tide was constrained so that no tidewater flows across continental boundaries. The constraint was derived for the case of turbulent oceans; with the tide velocities expressed in terms of the tide height. The two matrix equations were combined. Simple matrix inversion then yielded the constrained solution. Programs to construct and invert the matrix equations were written. Preliminary results were obtained and are discussed.

  19. SU-F-T-478: Effect of Deconvolution in Analysis of Mega Voltage Photon Beam Profiles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muthukumaran, M; Manigandan, D; Murali, V

    2016-06-15

Purpose: To study and compare the penumbra of 6 MV and 15 MV photon beam profiles after deconvolving the responses of different volume ionization chambers. Methods: A 0.125 cc Semi-Flex chamber, a Markus chamber and a PTW Farmer chamber were used to measure the in-plane and cross-plane profiles at 5 cm depth for 6 MV and 15 MV photons. The profiles were measured for field sizes from 2 × 2 cm up to 30 × 30 cm. PTW TBA scan software was used for the measurements, and the "deconvolution" functionality in the software was used to remove the volume averaging effect due to the finite volume of the chamber along the lateral and longitudinal directions for all the ionization chambers. The predicted true profile was compared and the change in penumbra before and after deconvolution was studied. Results: After deconvolution, the penumbra decreased by 1 mm for field sizes from 2 × 2 cm up to 20 × 20 cm. This was observed along both lateral and longitudinal directions. For field sizes from 20 × 20 cm up to 30 × 30 cm, however, the difference in penumbra was around 1.2 to 1.8 mm. This was observed for both 6 MV and 15 MV photon beams. The penumbra was always smaller in the deconvoluted profiles for all the ionization chambers in the study. The difference in penumbral values between the deconvoluted profiles along the lateral and longitudinal directions was on the order of 0.1 to 0.3 mm for all the chambers under study. Deconvolution of the profiles along the longitudinal direction for the Farmer chamber was poor and not comparable with the other deconvoluted profiles. Conclusion: The results of the deconvoluted profiles for the 0.125 cc and Markus chambers were comparable, and the deconvolution functionality can be used to overcome the volume averaging effect.

  20. Reconstructing matter profiles of spherically compensated cosmic regions in ΛCDM cosmology

    NASA Astrophysics Data System (ADS)

    de Fromont, Paul; Alimi, Jean-Michel

    2018-02-01

    The absence of a physically motivated model for large-scale profiles of cosmic voids limits our ability to extract valuable cosmological information from their study. In this paper, we address this problem by introducing the spherically compensated cosmic regions, named CoSpheres. Such cosmic regions are identified around local extrema in the density field and admit a unique compensation radius R1 where the internal spherical mass is exactly compensated. Their origin is studied by extending the standard peak model and implementing the compensation condition. Since the compensation radius evolves as the Universe itself, R1(t) ∝ a(t), CoSpheres behave as bubble Universes with fixed comoving volume. Using the spherical collapse model, we reconstruct their profiles with a very high accuracy until z = 0 in N-body simulations. CoSpheres are symmetrically defined and reconstructed for both central maximum (seeding haloes and galaxies) and minimum (identified with cosmic voids). We show that the full non-linear dynamics can be solved analytically around this particular compensation radius, providing useful predictions for cosmology. This formalism highlights original correlations between local extremum and their large-scale cosmic environment. The statistical properties of these spherically compensated cosmic regions and the possibilities to constrain efficiently both cosmology and gravity will be investigated in companion papers.

  1. An adaptive sparse deconvolution method for distinguishing the overlapping echoes of ultrasonic guided waves for pipeline crack inspection

    NASA Astrophysics Data System (ADS)

    Chang, Yong; Zi, Yanyang; Zhao, Jiyuan; Yang, Zhe; He, Wangpeng; Sun, Hailiang

    2017-03-01

In guided wave pipeline inspection, echoes reflected from closely spaced reflectors generally overlap, meaning useful information is lost. To solve the overlapping problem, sparse deconvolution methods have been developed in the past decade. However, conventional sparse deconvolution methods have limitations in handling guided wave signals, because the input signal is directly used as the prototype of the convolution matrix, without considering the waveform change caused by the dispersion properties of the guided wave. In this paper, an adaptive sparse deconvolution (ASD) method is proposed to overcome these limitations. First, the Gaussian echo model is employed to adaptively estimate the column prototype of the convolution matrix instead of directly using the input signal as the prototype. Then, the convolution matrix is constructed from the estimated results. Finally, the split augmented Lagrangian shrinkage (SALSA) algorithm is introduced to solve the deconvolution problem with high computational efficiency. To verify the effectiveness of the proposed method, guided wave signals obtained from pipeline inspection are investigated numerically and experimentally. Compared to conventional sparse deconvolution methods, e.g. the l1-norm deconvolution method, the proposed method shows better performance in handling the echo overlap problem in the guided wave signal.
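    The sparse-deconvolution step can be illustrated with a minimal sketch. For clarity this uses a plain ISTA iteration rather than SALSA, an envelope-only Gaussian echo prototype (the paper's Gaussian echo model also carries an oscillatory component), and nonnegative amplitudes; all parameters here are illustrative, not from the paper.

    ```python
    import numpy as np

    def echo_prototype(t, tau, width=10.0):
        """Envelope-only Gaussian echo (a simplified stand-in for the
        Gaussian echo model used to build the convolution matrix)."""
        return np.exp(-((t - tau) / width) ** 2)

    def sparse_deconv_ista(y, H, lam=0.3, n_iter=3000):
        """Nonnegative l1-regularised deconvolution,
        min 0.5*||Hx - y||^2 + lam*||x||_1, solved with plain ISTA
        (SALSA solves the same problem with faster convergence)."""
        L = np.linalg.norm(H, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(H.shape[1])
        for _ in range(n_iter):
            g = x + H.T @ (y - H @ x) / L      # gradient step on the data term
            x = np.maximum(g - lam / L, 0.0)   # nonnegative soft threshold
        return x

    # Convolution matrix whose columns are time-shifted, unit-norm prototypes.
    t = np.arange(200.0)
    H = np.stack([echo_prototype(t, tau) for tau in t], axis=1)
    H /= np.linalg.norm(H, axis=0)

    # Two echoes close enough that their waveforms overlap in the signal.
    amps = np.zeros(t.size)
    amps[60], amps[80] = 4.0, 3.2
    y = H @ amps + 0.01 * np.random.default_rng(0).standard_normal(t.size)

    x_hat = sparse_deconv_ista(y, H)
    support = np.where(x_hat > 0.1 * x_hat.max())[0]
    ```

    The recovered support clusters around the two true arrival times even though the raw echoes overlap; swapping the prototype per-signal, as the ASD method does, is what adapts the matrix to dispersion.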

  2. Hybrid sparse blind deconvolution: an implementation of SOOT algorithm to real data

    NASA Astrophysics Data System (ADS)

    Pakmanesh, Parvaneh; Goudarzi, Alireza; Kourki, Meisam

    2018-06-01

Extracting information from seismic data depends on deconvolution as an important processing step; it provides the reflectivity series by signal compression. This compression can be obtained by removing the wavelet effects on the traces. Recently, blind deconvolution has provided reliable performance for sparse signal recovery. In this study, two deconvolution methods have been applied to the seismic data; their combination provides a robust spiking deconvolution approach. This hybrid deconvolution is applied using the sparse deconvolution (MM algorithm) and the Smoothed One-Over-Two (SOOT) algorithm in a chain. The MM algorithm is based on the minimization of a cost function defined by the l1 and l2 norms. After applying the two algorithms to the seismic data, the SOOT algorithm provided well-compressed data with a higher resolution than the MM algorithm. The SOOT algorithm requires initial values when applied to real data, such as the wavelet coefficients and reflectivity series, which can be obtained through the MM algorithm. The computational cost of the hybrid method is high, and it must be implemented on post-stack or pre-stack seismic data from regions of complex structure.

  3. Photon-efficient super-resolution laser radar

    NASA Astrophysics Data System (ADS)

    Shin, Dongeek; Shapiro, Jeffrey H.; Goyal, Vivek K.

    2017-08-01

    The resolution achieved in photon-efficient active optical range imaging systems can be low due to non-idealities such as propagation through a diffuse scattering medium. We propose a constrained optimization-based framework to address extremes in scarcity of photons and blurring by a forward imaging kernel. We provide two algorithms for the resulting inverse problem: a greedy algorithm, inspired by sparse pursuit algorithms; and a convex optimization heuristic that incorporates image total variation regularization. We demonstrate that our framework outperforms existing deconvolution imaging techniques in terms of peak signal-to-noise ratio. Since our proposed method is able to super-resolve depth features using small numbers of photon counts, it can be useful for observing fine-scale phenomena in remote sensing through a scattering medium and through-the-skin biomedical imaging applications.

  4. High Performance Computing Software Applications for Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Giuliano, C.; Schumacher, P.; Matson, C.; Chun, F.; Duncan, B.; Borelli, K.; Desonia, R.; Gusciora, G.; Roe, K.

    The High Performance Computing Software Applications Institute for Space Situational Awareness (HSAI-SSA) has completed its first full year of applications development. The emphasis of our work in this first year was on improving space surveillance sensor models and image enhancement software. These applications are the Space Surveillance Network Analysis Model (SSNAM), the Air Force Space Fence simulation (SimFence), and the physically constrained iterative deconvolution (PCID) image enhancement software tool. Specifically, we have demonstrated an order-of-magnitude speed-up in those codes running on the latest Cray XD-1 Linux supercomputer (Hoku) at the Maui High Performance Computing Center. The software application improvements that HSAI-SSA has made have had a significant impact on the warfighter and have fundamentally changed the role of high performance computing in SSA.

  5. Effect of 100 MeV swift Si8+ ions on structural and thermoluminescence properties of Y2O3:Dy3+nanophosphor

    NASA Astrophysics Data System (ADS)

    Shivaramu, N. J.; Lakshminarasappa, B. N.; Nagabhushana, K. R.; Singh, Fouran

    2016-05-01

    Nanoparticles of Y2O3:Dy3+ were prepared by the solution combustion method. The X-ray diffraction pattern of the 900°C annealed sample shows a cubic structure, and the average crystallite size was found to be 31.49 nm. The field emission scanning electron microscopy image of the 900°C annealed sample shows well-separated spherical particles with an average particle size of around 40 nm. Pellets of Y2O3:Dy3+ were irradiated with 100 MeV swift Si8+ ions in the fluence range of 3 × 10^11 to 3 × 10^13 ions cm-2. Pristine Y2O3:Dy3+ shows seven Raman modes with peaks at 129, 160, 330, 376, 434, 467 and 590 cm-1. The intensity of these modes decreases with increasing ion fluence. Well-resolved thermoluminescence glow peaks at ∼414 K (Tm1) and ∼614 K (Tm2) were observed in Si8+ ion-irradiated samples. The glow peak intensity at 414 K increases with dopant concentration up to 0.6 mol% and then decreases with further increase in dopant concentration. The intensity of the high-temperature glow peak (614 K) increases linearly with ion fluence. The broad TL glow curves were deconvoluted using the glow curve deconvolution method, and kinetic parameters were calculated using the general order kinetics equation.

  6. What do you gain from deconvolution? - Observing faint galaxies with the Hubble Space Telescope Wide Field Camera

    NASA Technical Reports Server (NTRS)

    Schade, David J.; Elson, Rebecca A. W.

    1993-01-01

    We describe experiments with deconvolutions of simulations of deep HST Wide Field Camera images containing faint, compact galaxies to determine under what circumstances there is a quantitative advantage to image deconvolution, and explore whether it is (1) helpful for distinguishing between stars and compact galaxies, or between spiral and elliptical galaxies, and whether it (2) improves the accuracy with which characteristic radii and integrated magnitudes may be determined. The Maximum Entropy and Richardson-Lucy deconvolution algorithms give the same results. For medium and low S/N images, deconvolution does not significantly improve our ability to distinguish between faint stars and compact galaxies, nor between spiral and elliptical galaxies. Measurements from both raw and deconvolved images are biased and must be corrected; it is easier to quantify and remove the biases for cases that have not been deconvolved. We find no benefit from deconvolution for measuring luminosity profiles, but these results are limited to low S/N images of very compact (often undersampled) galaxies.
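    The Richardson-Lucy algorithm mentioned above admits a compact sketch: a multiplicative update that keeps the estimate nonnegative and approximately preserves total flux. The 1-D PSF, peak placement and iteration count below are illustrative, not from the paper.

    ```python
    import numpy as np

    def richardson_lucy(observed, psf, n_iter=50):
        """Richardson-Lucy deconvolution: multiplicative, flux-preserving,
        nonnegativity-preserving update (psf must be normalised to sum to 1)."""
        psf_flip = psf[::-1]
        x = np.full_like(observed, observed.mean())
        for _ in range(n_iter):
            blurred = np.convolve(x, psf, mode="same")
            ratio = observed / np.maximum(blurred, 1e-12)
            x = x * np.convolve(ratio, psf_flip, mode="same")
        return x

    # A compact "galaxy" next to a fainter companion, blurred by a Gaussian PSF.
    n = 100
    k = np.arange(-12, 13)
    psf = np.exp(-(k / 3.0) ** 2)
    psf /= psf.sum()

    truth = np.zeros(n)
    truth[45], truth[52] = 1.0, 0.7
    observed = np.convolve(truth, psf, mode="same")

    restored = richardson_lucy(observed, psf)
    ```

    The restored profile is sharper than the observed one while the integrated flux stays essentially unchanged, which is why measurements from deconvolved images remain biased in shape rather than in total magnitude.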

  7. Post-processing of adaptive optics images based on frame selection and multi-frame blind deconvolution

    NASA Astrophysics Data System (ADS)

    Tian, Yu; Rao, Changhui; Wei, Kai

    2008-07-01

    Adaptive optics can only partially compensate images blurred by atmospheric turbulence, owing to observing conditions and hardware restrictions. A post-processing method based on frame selection and multi-frame blind deconvolution is proposed to improve images partially corrected by adaptive optics. Frames suitable for blind deconvolution are selected from the recorded AO closed-loop frame series by the frame selection technique, and multi-frame blind deconvolution is then performed. No prior knowledge is required except for the positivity constraint in the blind deconvolution. The use of multi-frame images benefits the stability and convergence of the blind deconvolution algorithm. The method has been applied to the restoration of images of celestial bodies observed with the 1.2 m telescope equipped with the 61-element adaptive optical system at Yunnan Observatory. The results show that the method can effectively improve images partially corrected by adaptive optics.

  8. Blind source deconvolution for deep Earth seismology

    NASA Astrophysics Data System (ADS)

    Stefan, W.; Renaut, R.; Garnero, E. J.; Lay, T.

    2007-12-01

    We present an approach to automatically estimate an empirical source characterization of deep earthquakes recorded teleseismically and subsequently remove the source from the recordings by applying regularized deconvolution. A principal goal in this work is to effectively deblur the seismograms, resulting in more impulsive and narrower pulses and permitting better constraints in high-resolution waveform analyses. Our method consists of two stages: (1) we first estimate the empirical source by automatically registering traces to their 1st principal component with a weighting scheme based on their deviation from this shape; we then use this shape as an estimate of the earthquake source. (2) We compare different deconvolution techniques to remove the source characteristic from the trace. In particular, Total Variation (TV) regularized deconvolution is used, which exploits the fact that most natural signals have an underlying sparseness in an appropriate basis, in this case, impulsive onsets of seismic arrivals. We show several examples of deep focus Fiji-Tonga region earthquakes for the phases S and ScS, comparing source responses for the separate phases. TV deconvolution is compared to water-level deconvolution, Tikhonov deconvolution, and L1-norm deconvolution, for both data and synthetics. This approach significantly improves our ability to study subtle waveform features that are commonly masked by either noise or the earthquake source. Eliminating source complexities improves our ability to resolve deep mantle triplications, waveform complexities associated with possible double crossings of the post-perovskite phase transition, as well as increasing stability in waveform analyses used for deep mantle anisotropy measurements.
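    Of the techniques compared, water-level deconvolution is the easiest to sketch: divide the trace spectrum by the source spectrum, clipping the divisor from below so near-zero frequencies do not blow up. The zero-phase Ricker wavelet and the spike positions below are illustrative stand-ins for an estimated earthquake source and the reflectivity.

    ```python
    import numpy as np

    def water_level_deconv(trace, wavelet, level=0.01):
        """Spectral division trace/wavelet, with |W(f)| clipped from below at
        level * max|W(f)| (the 'water level') to stabilise near-zero divisors."""
        T = np.fft.rfft(trace)
        W = np.fft.rfft(wavelet)
        mag = np.abs(W)
        floor = level * mag.max()
        W_stab = np.where(mag < floor, floor * np.exp(1j * np.angle(W)), W)
        return np.fft.irfft(T / W_stab, len(trace))

    n = 256
    t = np.arange(n)
    # Zero-phase Ricker wavelet, wrapped so its peak sits at sample 0.
    tau = np.minimum(t, n - t)
    arg = (np.pi * 0.15 * tau) ** 2
    wavelet = (1 - 2 * arg) * np.exp(-arg)

    reflectivity = np.zeros(n)
    reflectivity[80], reflectivity[95] = 1.0, -0.6

    # Synthetic seismogram: circular convolution of reflectivity and wavelet.
    trace = np.fft.irfft(np.fft.rfft(reflectivity) * np.fft.rfft(wavelet), n)

    estimate = water_level_deconv(trace, wavelet)
    ```

    The recovered trace is impulsive at the two arrival times; TV-regularized deconvolution targets the same sparsity directly in the optimization rather than through a spectral floor.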

  9. Resolution and quantification accuracy enhancement of functional delay and sum beamforming for three-dimensional acoustic source identification with solid spherical arrays

    NASA Astrophysics Data System (ADS)

    Chu, Zhigang; Yang, Yang; Shen, Linbang

    2017-05-01

    Functional delay and sum (FDAS) is a novel beamforming algorithm introduced for the three-dimensional (3D) acoustic source identification with solid spherical microphone arrays. Capable of offering significantly attenuated sidelobes at high speed, the algorithm promises to play an important role in interior acoustic source identification. However, it presents some intrinsic imperfections, specifically poor spatial resolution and low quantification accuracy. This paper focuses on conquering these imperfections by ridge detection (RD) and deconvolution approach for the mapping of acoustic sources (DAMAS). The suggested methods are referred to as FDAS+RD and FDAS+RD+DAMAS. Both computer simulations and experiments are utilized to validate their effects. Several interesting conclusions have emerged: (1) FDAS+RD and FDAS+RD+DAMAS both can dramatically ameliorate FDAS's spatial resolution and at the same time inherit its advantages. (2) Compared to the conventional DAMAS, FDAS+RD+DAMAS enjoys the same super spatial resolution, stronger sidelobe attenuation capability and more than two hundred times faster speed. (3) FDAS+RD+DAMAS can effectively conquer FDAS's low quantification accuracy. Whether the focus distance is equal to the distance from the source to the array center or not, it can quantify the source average pressure contribution accurately. This study will be of great significance to the accurate and quick localization and quantification of acoustic sources in cabin environments.
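    The DAMAS step solves the linear system relating the beamforming map to the true source strengths under a nonnegativity constraint. A minimal 1-D sketch with a synthetic point-spread matrix follows (the real algorithm operates on a 3-D grid using the array's actual PSF; the matrix here is illustrative only).

    ```python
    import numpy as np

    def damas(beam_map, A, n_iter=500):
        """DAMAS-style deconvolution: solve beam_map = A @ x for nonnegative
        source strengths x using Gauss-Seidel sweeps clamped at zero."""
        x = np.zeros_like(beam_map)
        for _ in range(n_iter):
            for i in range(len(x)):
                r = beam_map[i] - A[i] @ x + A[i, i] * x[i]
                x[i] = max(r / A[i, i], 0.0)
        return x

    # Synthetic PSF matrix: each point source leaks into neighbouring grid
    # points with exponentially decaying sidelobes.
    n = 40
    idx = np.arange(n)
    A = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 2.0)

    x_true = np.zeros(n)
    x_true[12], x_true[25] = 2.0, 1.0
    beam_map = A @ x_true          # blurred beamforming output

    x_rec = damas(beam_map, A)
    ```

    Because the PSF matrix here is symmetric positive definite, the clamped Gauss-Seidel sweeps converge and recover the two point sources exactly; the iteration count is the main cost, which is why fast front ends such as FDAS+RD are attractive before DAMAS.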

  10. Liquid chromatography with diode array detection combined with spectral deconvolution for the analysis of some diterpene esters in Arabica coffee brew.

    PubMed

    Erny, Guillaume L; Moeenfard, Marzieh; Alves, Arminda

    2015-02-01

    In this manuscript, the separation of kahweol and cafestol esters from Arabica coffee brews was investigated using liquid chromatography with a diode array detector. When detected in conjunction, cafestol and kahweol esters eluted together, but, after optimization, the kahweol esters could be selectively detected by setting the wavelength at 290 nm to allow their quantification. Such an approach was not possible for the cafestol esters, and spectral deconvolution was used to obtain deconvoluted chromatograms. In each of those chromatograms, the four esters were baseline separated, allowing for the quantification of the eight targeted compounds. Because kahweol esters could be quantified either using the chromatogram obtained by setting the wavelength at 290 nm or using the deconvoluted chromatogram, those compounds were used to compare the analytical performances. Slightly better limits of detection were obtained using the deconvoluted chromatogram. Identical concentrations were found in a real sample with both approaches. The peak areas in the deconvoluted chromatograms were repeatable (intraday repeatability of 0.8%, interday repeatability of 1.0%). This work demonstrates the accuracy of spectral deconvolution when using liquid chromatography to mathematically separate coeluting compounds using the full spectra recorded by a diode array detector. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
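    The core idea of mathematically separating coeluting compounds from full DAD spectra can be sketched as a least-squares fit of known component spectra at each retention time. The Gaussian "spectra", retention-time profiles and concentrations below are entirely hypothetical; the paper's actual deconvolution procedure may differ.

    ```python
    import numpy as np

    def gauss(x, mu, sig):
        return np.exp(-((x - mu) / sig) ** 2)

    wavelengths = np.linspace(220, 320, 101)

    # Hypothetical reference spectra for two coeluting analytes.
    s_kahweol = gauss(wavelengths, 290, 12)        # absorbs near 290 nm
    s_cafestol = gauss(wavelengths, 225, 10)       # absorbs only in the far UV
    S = np.stack([s_kahweol, s_cafestol], axis=1)  # (n_wavelengths, n_components)

    # Simulated DAD data: two coeluting peaks (time x wavelength matrix).
    t = np.arange(200.0)
    c_true = np.stack([0.8 * gauss(t, 100, 8), 0.5 * gauss(t, 106, 8)], axis=1)
    M = c_true @ S.T
    M += 1e-4 * np.random.default_rng(1).standard_normal(M.shape)

    # Deconvoluted chromatograms: least-squares fit of the spectra per time point.
    C = np.linalg.lstsq(S, M.T, rcond=None)[0].T   # (n_times, n_components)
    ```

    Each column of `C` is a deconvoluted chromatogram for one analyte, with the coeluting peaks cleanly separated even though the raw signal at any single wavelength mixes both.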

  11. Seismic Imaging of the Lesser Antilles Subduction Zone Using S-to-P Receiver Functions: Insights From VoiLA

    NASA Astrophysics Data System (ADS)

    Chichester, B.; Rychert, C.; Harmon, N.; Rietbrock, A.; Collier, J.; Henstock, T.; Goes, S. D. B.; Kendall, J. M.; Krueger, F.

    2017-12-01

    In the Lesser Antilles subduction zone, Atlantic oceanic lithosphere, expected to be highly hydrated, is being subducted beneath the Caribbean plate. Water and other volatiles from the down-going plate are released and cause the overlying mantle to melt, feeding volcanoes with magma and hence forming the volcanic island arc. However, the depths and pathways of volatiles and melt within the mantle wedge are not well known. Here, we use S-to-P receiver functions to image seismic velocity contrasts with depth within the subduction zone in order to constrain the release of volatiles and the presence of melt in the mantle wedge, as well as slab structure and arc-lithosphere structure. We use data from 55-80° epicentral distances recorded by 32 recovered broadband ocean-bottom seismometers that were deployed during the 2016-2017 Volatiles in the Lesser Antilles (VoiLA) project for 15 months on the back- and fore-arc. The S-to-P receiver functions are calculated using two methods: extended time multi-taper deconvolution followed by migration to depth to constrain 3-D discontinuity structure of the subduction zone; and simultaneous deconvolution to determine structure beneath single stations. In the south of the island arc, we image a velocity increase with depth associated with the Moho at depths of 32-40 ± 4 km on the fore- and back-arc, consistent with various previous studies. At depths of 65-80 ± 4 km beneath the fore-arc we image a strong velocity decrease with depth that is west-dipping. At 96-120 ± 5 km beneath the fore-arc, we image a velocity increase with depth that is also west-dipping. The dipping negative-positive phase could represent velocity contrasts related to the top of the down-going plate, a feature commonly imaged in subduction zone receiver function studies. The negative phase is strong, so there may also be contributions to the negative velocity discontinuity from slab dehydration and/or mantle wedge serpentinization in the fore-arc.

  12. Calibration of Wide-Field Deconvolution Microscopy for Quantitative Fluorescence Imaging

    PubMed Central

    Lee, Ji-Sook; Wee, Tse-Luen (Erika); Brown, Claire M.

    2014-01-01

    Deconvolution enhances contrast in fluorescence microscopy images, especially in low-contrast, high-background wide-field microscope images, improving characterization of features within the sample. Deconvolution can also be combined with other imaging modalities, such as confocal microscopy, and most software programs seek to improve resolution as well as contrast. Quantitative image analyses require instrument calibration and with deconvolution, necessitate that this process itself preserves the relative quantitative relationships between fluorescence intensities. To ensure that the quantitative nature of the data remains unaltered, deconvolution algorithms need to be tested thoroughly. This study investigated whether the deconvolution algorithms in AutoQuant X3 preserve relative quantitative intensity data. InSpeck Green calibration microspheres were prepared for imaging, z-stacks were collected using a wide-field microscope, and the images were deconvolved using the iterative deconvolution algorithms with default settings. Afterwards, the mean intensities and volumes of microspheres in the original and the deconvolved images were measured. Deconvolved data sets showed higher average microsphere intensities and smaller volumes than the original wide-field data sets. In original and deconvolved data sets, intensity means showed linear relationships with the relative microsphere intensities given by the manufacturer. Importantly, upon normalization, the trend lines were found to have similar slopes. In original and deconvolved images, the volumes of the microspheres were quite uniform for all relative microsphere intensities. We were able to show that AutoQuant X3 deconvolution software data are quantitative. In general, the protocol presented can be used to calibrate any fluorescence microscope or image processing and analysis procedure. PMID:24688321

  13. Constrained evolution in numerical relativity

    NASA Astrophysics Data System (ADS)

    Anderson, Matthew William

    The strongest potential source of gravitational radiation for current and future detectors is the merger of binary black holes. Full numerical simulation of such mergers can provide realistic signal predictions and enhance the probability of detection. Numerical simulation of the Einstein equations, however, is fraught with difficulty. Stability even in static test cases of single black holes has proven elusive. Common to unstable simulations is the growth of constraint violations. This work examines the effect of controlling the growth of constraint violations by solving the constraints periodically during a simulation, an approach called constrained evolution. The effects of constrained evolution are contrasted with the results of unconstrained evolution, evolution where the constraints are not solved during the course of a simulation. Two different formulations of the Einstein equations are examined: the standard ADM formulation and the generalized Frittelli-Reula formulation. In most cases constrained evolution vastly improves the stability of a simulation at minimal computational cost when compared with unconstrained evolution. However, in the more demanding test cases examined, constrained evolution fails to produce simulations with long-term stability in spite of producing improvements in simulation lifetime when compared with unconstrained evolution. Constrained evolution is also examined in conjunction with a wide variety of promising numerical techniques, including mesh refinement and overlapping Cartesian and spherical computational grids. Constrained evolution in boosted black hole spacetimes is investigated using overlapping grids. Constrained evolution proves to be central to the host of innovations required in carrying out such intensive simulations.

  14. Harmony: EEG/MEG Linear Inverse Source Reconstruction in the Anatomical Basis of Spherical Harmonics

    PubMed Central

    Petrov, Yury

    2012-01-01

    EEG/MEG source localization based on a “distributed solution” is severely underdetermined, because the number of sources is much larger than the number of measurements. In particular, this makes the solution strongly affected by sensor noise. A new way to constrain the problem is presented. By using the anatomical basis of spherical harmonics (or spherical splines) instead of single dipoles, the dimensionality of the inverse solution is greatly reduced without sacrificing the quality of the data fit. The smoothness of the resulting solution reduces the surface bias and scatter of the sources (incoherency) compared to the popular minimum-norm algorithms where a single-dipole basis is used (MNE, depth-weighted MNE, dSPM, sLORETA, LORETA, IBF) and allows the effect of sensor noise to be reduced efficiently. This approach, termed Harmony, performed well when applied to experimental data (two exemplars of early evoked potentials) and showed better localization precision and solution coherence than the other tested algorithms when applied to realistically simulated data. PMID:23071497
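    The dimensionality-reduction idea can be sketched generically: instead of solving for thousands of per-dipole amplitudes, solve a regularized least-squares problem for a few coefficients of a smooth spatial basis. Random matrices stand in below for the lead field and the spherical-harmonic basis; all sizes, names and the ridge penalty are illustrative, not Harmony's actual implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_sensors, n_sources, n_harmonics = 32, 500, 25

    L_fwd = rng.standard_normal((n_sensors, n_sources))  # lead field (stand-in)
    B = rng.standard_normal((n_sources, n_harmonics))    # smooth basis (stand-in)

    # Ground-truth source distribution lying in the span of the basis.
    c_true = rng.standard_normal(n_harmonics)
    s_true = B @ c_true
    y = L_fwd @ s_true + 0.01 * rng.standard_normal(n_sensors)

    # Reduced forward operator: only n_harmonics unknowns instead of n_sources.
    A = L_fwd @ B
    lam = 1e-2
    c_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_harmonics), A.T @ y)
    s_hat = B @ c_hat                                    # source estimate
    ```

    With fewer unknowns than sensors, the reduced problem is overdetermined and the coefficient estimate is stable against sensor noise, which is the mechanism behind the reduced scatter reported above.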

  15. Probing crustal structures from neutron star compactness

    NASA Astrophysics Data System (ADS)

    Sotani, Hajime; Iida, Kei; Oyamatsu, Kazuhiro

    2017-10-01

    With various sets of the parameters that characterize the equation of state (EOS) of nuclear matter, we systematically examine the thickness of a neutron star crust and of the pasta phases contained therein. Then, with respect to the thickness of the phase of spherical nuclei, the thickness of the cylindrical phase and the crust thickness, we successfully derive fitting formulas that express the ratio of each thickness to the star's radius as a function of the star's compactness, the incompressibility of symmetric nuclear matter and the density dependence of the symmetry energy. In particular, we find that the thickness of the phase of spherical nuclei depends as strongly on the stellar compactness as the crust thickness does, while both show a much weaker dependence on the EOS parameters. Thus, once the compactness is determined, the thickness of the phase of spherical nuclei, as well as the crust thickness, can be constrained reasonably, even if the EOS parameters are not yet well determined.

  16. Gamma-Ray Simulated Spectrum Deconvolution of a LaBr₃ 1-in. x 1-in. Scintillator for Nondestructive ATR Fuel Burnup On-Site Predictions

    DOE PAGES

    Navarro, Jorge; Ring, Terry A.; Nigg, David W.

    2015-03-01

    A deconvolution method for a 1 in. x 1 in. LaBr₃ detector for nondestructive Advanced Test Reactor (ATR) fuel burnup applications was developed. The method consisted of obtaining the detector response function, applying a deconvolution algorithm to simulated 1 in. x 1 in. LaBr₃ data, and evaluating the effect that deconvolution has on nondestructively determining ATR fuel burnup. The simulated response function of the detector was obtained using MCNPX as well as experimental data. The Maximum-Likelihood Expectation Maximization (MLEM) deconvolution algorithm was selected to enhance single-isotope source-simulated and fuel-simulated spectra. The final evaluation of the study consisted of measuring the performance of the fuel burnup calibration curve for the convoluted and deconvoluted cases. The methodology was developed to help design a reliable, high-resolution, rugged and robust detection system for the ATR fuel canal, capable of collecting high-performance data for model validation and of calculating burnup using experimental scintillator detector data.
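    The MLEM update named above can be written in a few lines. The sketch below uses a 1-D Gaussian smearing matrix standing in for the simulated LaBr₃ response function; the bin count, resolution and photopeak positions are illustrative.

    ```python
    import numpy as np

    def mlem(measured, R, n_iter=300):
        """Maximum-Likelihood Expectation Maximization deconvolution.

        R[i, j] = probability that a photon emitted in source bin j is
        detected in measured bin i (the detector response matrix)."""
        x = np.ones(R.shape[1])
        sens = R.sum(axis=0)                 # detection sensitivity per source bin
        for _ in range(n_iter):
            proj = R @ x
            x *= (R.T @ (measured / np.maximum(proj, 1e-12))) / sens
        return x

    # Response matrix: Gaussian energy smearing, columns normalised to sum to 1.
    n = 100
    i = np.arange(n)
    R = np.exp(-((i[:, None] - i[None, :]) / 3.0) ** 2)
    R /= R.sum(axis=0)

    source = np.zeros(n)
    source[30], source[60] = 100.0, 50.0     # two photopeaks
    measured = R @ source                    # smeared pulse-height spectrum

    restored = mlem(measured, R)
    ```

    The multiplicative update keeps the spectrum nonnegative and, for a column-normalised response, conserves total counts, which matters when the deconvolved peak areas feed a burnup calibration curve.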

  17. Seismic interferometry by multidimensional deconvolution as a means to compensate for anisotropic illumination

    NASA Astrophysics Data System (ADS)

    Wapenaar, K.; van der Neut, J.; Ruigrok, E.; Draganov, D.; Hunziker, J.; Slob, E.; Thorbecke, J.; Snieder, R.

    2008-12-01

    It is well-known that under specific conditions the crosscorrelation of wavefields observed at two receivers yields the impulse response between these receivers. This principle is known as 'Green's function retrieval' or 'seismic interferometry'. Recently it has been recognized that in many situations it can be advantageous to replace the correlation process by deconvolution. One of the advantages is that deconvolution compensates for the waveform emitted by the source; another advantage is that it is not necessary to assume that the medium is lossless. The approaches that have been developed to date employ a 1D deconvolution process. We propose a method for seismic interferometry by multidimensional deconvolution and show that under specific circumstances the method compensates for irregularities in the source distribution. This is an important difference with crosscorrelation methods, which rely on the condition that waves are equipartitioned. This condition is for example fulfilled when the sources are regularly distributed along a closed surface and the power spectra of the sources are identical. The proposed multidimensional deconvolution method compensates for anisotropic illumination, without requiring knowledge about the positions and the spectra of the sources.

  18. Iterative and function-continuation Fourier deconvolution methods for enhancing mass spectrometer resolution

    NASA Technical Reports Server (NTRS)

    Ioup, J. W.; Ioup, G. E.; Rayborn, G. H., Jr.; Wood, G. M., Jr.; Upchurch, B. T.

    1984-01-01

    Mass spectrometer data in the form of ion current versus mass-to-charge ratio often include overlapping mass peaks, especially in low- and medium-resolution instruments. Numerical deconvolution of such data effectively enhances the resolution by decreasing the overlap of mass peaks. In this paper two approaches to deconvolution are presented: a function-domain iterative technique and a Fourier transform method which uses transform-domain function-continuation. Both techniques include data smoothing to reduce the sensitivity of the deconvolution to noise. The efficacy of these methods is demonstrated through application to representative mass spectrometer data and the deconvolved results are discussed and compared to data obtained from a spectrometer with sufficient resolution to achieve separation of the mass peaks studied. A case for which the deconvolution is seriously affected by Gibbs oscillations is analyzed.
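    A function-domain iterative deconvolution can be sketched in Van Cittert form; the paper's variant additionally applies data smoothing to control noise sensitivity, which is omitted here for brevity. The peak widths and positions are illustrative.

    ```python
    import numpy as np

    def van_cittert(signal, psf, n_iter=200, relax=1.0):
        """Function-domain iterative deconvolution:
        x_{k+1} = x_k + relax * (signal - psf * x_k).
        Converges where the PSF's transfer function is positive, as it is
        for a normalised Gaussian; practical versions add data smoothing."""
        x = signal.copy()
        for _ in range(n_iter):
            x = x + relax * (signal - np.convolve(x, psf, mode="same"))
        return x

    # Two overlapping "mass peaks" blurred by the instrument response.
    n = 200
    k = np.arange(-20, 21)
    psf = np.exp(-(k / 4.0) ** 2)
    psf /= psf.sum()

    m = np.arange(n)
    true_peaks = np.exp(-((m - 90) / 2.0) ** 2) + 0.8 * np.exp(-((m - 104) / 2.0) ** 2)
    observed = np.convolve(true_peaks, psf, mode="same")

    sharpened = van_cittert(observed, psf)
    ```

    After deconvolution the valley between the two peaks deepens markedly, which is the resolution enhancement the abstract describes; on noisy data the same iteration amplifies high frequencies, hence the smoothing step in the published methods.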

  19. Parsimonious Charge Deconvolution for Native Mass Spectrometry

    PubMed Central

    2018-01-01

    Charge deconvolution infers the mass from mass over charge (m/z) measurements in electrospray ionization mass spectra. When applied over a wide input m/z or broad target mass range, charge-deconvolution algorithms can produce artifacts, such as false masses at one-half or one-third of the correct mass. Indeed, a maximum entropy term in the objective function of MaxEnt, the most commonly used charge deconvolution algorithm, favors a deconvolved spectrum with many peaks over one with fewer peaks. Here we describe a new “parsimonious” charge deconvolution algorithm that produces fewer artifacts. The algorithm is especially well-suited to high-resolution native mass spectrometry of intact glycoproteins and protein complexes. Deconvolution of native mass spectra poses special challenges due to salt and small molecule adducts, multimers, wide mass ranges, and fewer and lower charge states. We demonstrate the performance of the new deconvolution algorithm on a range of samples. On the heavily glycosylated plasma properdin glycoprotein, the new algorithm could deconvolve monomer and dimer simultaneously and, when focused on the m/z range of the monomer, gave accurate and interpretable masses for glycoforms that had previously been analyzed manually using m/z peaks rather than deconvolved masses. On therapeutic antibodies, the new algorithm facilitated the analysis of extensions, truncations, and Fab glycosylation. The algorithm facilitates the use of native mass spectrometry for the qualitative and quantitative analysis of protein and protein assemblies. PMID:29376659
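    The mass inference itself is simple arithmetic: a peak at m/z observed with assumed charge z implies a neutral mass M = z(m/z − m_proton). The toy scorer below picks the candidate mass that explains the most observed peaks, and also shows where half-mass artifacts come from: M/2 "explains" every even charge state. It is an illustration of the problem, not the published parsimonious algorithm.

    ```python
    PROTON = 1.007276  # mass of a proton, Da

    def charge_deconvolve(peaks_mz, tol=0.5, z_range=range(1, 31)):
        """Toy charge deconvolution: each (peak, assumed z) pair implies a
        neutral mass M = z * (m/z - PROTON); keep the mass explaining the
        most observed peaks across charge states."""
        best_mass, best_count = None, -1
        for mz in peaks_mz:
            for z in z_range:
                mass = z * (mz - PROTON)
                count = sum(
                    any(abs((mass + q * PROTON) / q - p) < tol for p in peaks_mz)
                    for q in z_range
                )
                if count > best_count:
                    best_mass, best_count = mass, count
        return best_mass, best_count

    # Synthetic ESI spectrum of a hypothetical 20 kDa protein, charges 10-15.
    true_mass = 20000.0
    peaks = [(true_mass + z * PROTON) / z for z in range(10, 16)]
    mass, count = charge_deconvolve(peaks)
    ```

    Here the correct mass explains all six peaks while the half-mass candidate explains only the three even charge states; an entropy-style objective that merely rewards explained peaks can still prefer artifact-rich solutions on noisy data, which is the failure mode the parsimonious algorithm targets.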

  20. Broadband ion mobility deconvolution for rapid analysis of complex mixtures.

    PubMed

    Pettit, Michael E; Brantley, Matthew R; Donnarumma, Fabrizio; Murray, Kermit K; Solouki, Touradj

    2018-05-04

    High resolving power ion mobility (IM) allows for accurate characterization of complex mixtures in high-throughput IM mass spectrometry (IM-MS) experiments. We previously demonstrated that pure component IM-MS data can be extracted from IM unresolved post-IM/collision-induced dissociation (CID) MS data using automated ion mobility deconvolution (AIMD) software [Matthew Brantley, Behrooz Zekavat, Brett Harper, Rachel Mason, and Touradj Solouki, J. Am. Soc. Mass Spectrom., 2014, 25, 1810-1819]. In our previous reports, we utilized a quadrupole ion filter for m/z-isolation of IM unresolved monoisotopic species prior to post-IM/CID MS. Here, we utilize a broadband IM-MS deconvolution strategy to remove the m/z-isolation requirement for successful deconvolution of IM unresolved peaks. Broadband data collection has throughput and multiplexing advantages; hence, elimination of the ion isolation step reduces experimental run times and thus expands the applicability of AIMD to high-throughput bottom-up proteomics. We demonstrate broadband IM-MS deconvolution of two separate and unrelated pairs of IM unresolved isomers (viz., a pair of isomeric hexapeptides and a pair of isomeric trisaccharides) in a simulated complex mixture. Moreover, we show that broadband IM-MS deconvolution improves high-throughput bottom-up characterization of a proteolytic digest of rat brain tissue. To our knowledge, this manuscript is the first to report successful deconvolution of pure component IM and MS data from an IM-assisted data-independent analysis (DIA) or HDMSE dataset.

  1. Gravitational Wakes Sizes from Multiple Cassini Radio Occultations of Saturn's Rings

    NASA Astrophysics Data System (ADS)

    Marouf, E. A.; Wong, K. K.; French, R. G.; Rappaport, N. J.; McGhee, C. A.; Anabtawi, A.

    2016-12-01

    Voyager and Cassini radio occultation extinction and forward-scattering observations of Saturn's C-Ring and Cassini Division imply power-law particle size distributions extending from a few millimeters to several meters, with a power-law index in the 2.8 to 3.2 range depending on the specific ring feature. We extend size determination to the elongated and canted particle clusters (gravitational wakes) known to permeate Saturn's A- and B-Rings. We use multiple Cassini radio occultation observations over a range of ring opening angle B and wake viewing angle α to constrain the mean wake width W, thickness/height H, and average ring-area coverage fraction. The rings are modeled as a randomly blocked diffraction screen in the plane normal to the incidence direction. Collective particle shadows define the blocked area; the screen's transmittance is binary, blocked or unblocked. Wakes are modeled as a thin layer of elliptical cylinders populated by random but uniformly distributed spherical particles; the cylinders can be immersed in a "classical" layer of spatially uniformly distributed particles. Numerical simulations of model diffraction patterns reveal two distinct components, cylindrical and spherical. The first dominates at small scattering angles and originates from specific locations within the footprint of the spacecraft antenna on the rings; the second dominates at large scattering angles and originates from the full footprint. We interpret Cassini extinction and scattering observations in the light of the simulation results. We compute and remove the contribution of the spherical component to the observed scattered-signal spectra, assuming a known particle size distribution. A large residual spectral component is interpreted as the contribution of cylindrical (wake) diffraction. Its angular width determines a cylindrical shadow width that depends on the wake parameters (W, H) and the viewing geometry (α, B). Its strength constrains the mean fractional area covered (optical depth), and hence the mean wake spacing. Self-consistent (W, H) are estimated using least-squares fits to results from multiple occultations. Example results for observed scattering by several inner A-Ring features suggest particle clusters (wakes) that are a few tens of meters wide and several meters thick.

  2. Statistics of intensity in adaptive-optics images and their usefulness for detection and photometry of exoplanets.

    PubMed

    Gladysz, Szymon; Yaitskova, Natalia; Christou, Julian C

    2010-11-01

    This paper is an introduction to the problem of modeling the probability density function of adaptive-optics speckle. We show that with the modified Rician distribution one cannot describe the statistics of light on axis. A dual solution is proposed: the modified Rician distribution for off-axis speckle and gamma-based distribution for the core of the point spread function. From these two distributions we derive optimal statistical discriminators between real sources and quasi-static speckles. In the second part of the paper the morphological difference between the two probability density functions is used to constrain a one-dimensional, "blind," iterative deconvolution at the position of an exoplanet. Separation of the probability density functions of signal and speckle yields accurate differential photometry in our simulations of the SPHERE planet finder instrument.
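    The modified Rician density referred to above has a standard closed form, p(I) = (1/Ic)·exp(−(I + Is)/Ic)·I0(2√(I·Is)/Ic), where Is is the deterministic (point spread function) intensity, Ic is the speckle intensity, and I0 is the modified Bessel function of order zero. A minimal numerical check of its two defining properties (unit area and mean Is + Ic); the parameter values are arbitrary, not taken from the paper:

```python
import numpy as np

def modified_rician(I, Is, Ic):
    """Modified Rician PDF of speckle intensity I, for deterministic
    intensity Is and speckle intensity Ic."""
    return (1.0 / Ic) * np.exp(-(I + Is) / Ic) * np.i0(2.0 * np.sqrt(I * Is) / Ic)

I = np.linspace(0.0, 60.0, 60001)
p = modified_rician(I, Is=2.0, Ic=1.0)
dI = I[1] - I[0]
total = float(np.sum(p) * dI)        # should be ~1: p is a proper density
mean = float(np.sum(I * p) * dI)     # should be ~Is + Ic = 3.0
```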

  3. Using deconvolution to improve the metrological performance of the grid method

    NASA Astrophysics Data System (ADS)

    Grédiac, Michel; Sur, Frédéric; Badulescu, Claudiu; Mathias, Jean-Denis

    2013-06-01

    The use of various deconvolution techniques to enhance strain maps obtained with the grid method is addressed in this study. Since phase-derivative maps obtained with the grid method can be approximated by their actual counterparts convolved with the envelope of the kernel used to extract phases and phase derivatives, non-blind restoration techniques can be used to perform deconvolution. Six deconvolution techniques are presented and employed to restore a synthetic phase-derivative map, namely direct deconvolution, regularized deconvolution, the Richardson-Lucy algorithm and Wiener filtering, the last two with two variants each concerning their practical implementation. The results show that the noise that corrupts the grid images must be thoroughly taken into account to limit its effect on the deconvolved strain maps. The difficulty here is that noise on the grid image yields spatially correlated noise on the strain maps. In particular, numerical experiments on synthetic data show that direct and regularized deconvolution are unstable when noisy data are processed. The same remark holds when Wiener filtering is employed without taking noise autocorrelation into account. On the other hand, the Richardson-Lucy algorithm and Wiener filtering with noise autocorrelation provide deconvolved maps in which the impact of noise remains controlled within a certain limit. It is also observed that the latter technique outperforms the Richardson-Lucy algorithm. Two short examples of actual strain-field restoration are finally shown, dealing with asphalt and shape-memory-alloy specimens. The benefits and limitations of deconvolution are presented and discussed in these two cases. The main conclusion is that strain maps are correctly deconvolved when the signal-to-noise ratio is high, and that the actual noise in real strain maps must be characterized more specifically than in the current study in order to address higher noise levels with Wiener filtering.
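    Of the techniques listed, the Richardson-Lucy iteration is easy to state compactly: at each step the estimate is multiplied by the back-blurred ratio of the data to the re-blurred estimate. A minimal 1-D sketch with circular boundaries (illustrative only; the authors work with 2-D strain maps and a specific kernel envelope):

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=100):
    """1-D Richardson-Lucy deconvolution (circular boundaries via FFT).
    `psf` must be non-negative and centred at sample 0."""
    eps = 1e-12
    n = len(observed)
    otf = np.fft.rfft(psf / psf.sum(), n)
    u = np.full(n, observed.mean())          # positive initial estimate
    for _ in range(n_iter):
        reblurred = np.fft.irfft(np.fft.rfft(u) * otf, n)
        ratio = observed / np.maximum(reblurred, eps)
        # multiply by the correlation of the ratio with the PSF
        u *= np.fft.irfft(np.fft.rfft(ratio) * np.conj(otf), n)
    return u

n = 256
x = np.arange(n)
psf = np.exp(-0.5 * ((x - n // 2) / 5.0) ** 2)
psf = np.roll(psf, -(n // 2))
psf /= psf.sum()
truth = np.zeros(n)
truth[60], truth[90] = 1.0, 0.6
blurred = np.fft.irfft(np.fft.rfft(truth) * np.fft.rfft(psf), n)
sharp = richardson_lucy(blurred, psf)
```

    The multiplicative update is what keeps the estimate non-negative, one reason the algorithm tolerates noise better than direct spectral division.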

  4. Least-squares (LS) deconvolution of a series of overlapping cortical auditory evoked potentials: a simulation and experimental study

    NASA Astrophysics Data System (ADS)

    Bardy, Fabrice; Van Dun, Bram; Dillon, Harvey; Cowan, Robert

    2014-08-01

    Objective. To evaluate the viability of disentangling a series of overlapping ‘cortical auditory evoked potentials’ (CAEPs) elicited by different stimuli using least-squares (LS) deconvolution, and to assess the adaptation of CAEPs for different stimulus onset-asynchronies (SOAs). Approach. Optimal aperiodic stimulus sequences were designed by controlling the condition number of matrices associated with the LS deconvolution technique. First, theoretical considerations of LS deconvolution were assessed in simulations in which multiple artificial overlapping responses were recovered. Second, biological CAEPs were recorded in response to continuously repeated stimulus trains containing six different tone-bursts with frequencies 8, 4, 2, 1, 0.5, 0.25 kHz separated by SOAs jittered around 150 (120-185), 250 (220-285) and 650 (620-685) ms. The control condition had a fixed SOA of 1175 ms. In a second condition, using the same SOAs, trains of six stimuli were separated by a silence gap of 1600 ms. Twenty-four adults with normal hearing (<20 dB HL) were assessed. Main results. Results showed disentangling of a series of overlapping responses using LS deconvolution on simulated waveforms as well as on real EEG data. The use of rapid presentation and LS deconvolution did not, however, allow the recovered CAEPs to have a higher signal-to-noise ratio than for slowly presented stimuli. The LS deconvolution technique enables the analysis of a series of overlapping responses in EEG. Significance. LS deconvolution is a useful technique for the study of adaptation mechanisms of CAEPs for closely spaced stimuli whose characteristics change from stimulus to stimulus. High-rate presentation is necessary to develop an understanding of how the auditory system encodes natural speech or other intrinsically high-rate stimuli.
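    The LS-deconvolution model is linear: the recording is the superposition of each stimulus onset train convolved with that stimulus's unknown response, so stacking shifted onset indicators into a design matrix A turns recovery into solving y = Ax by least squares, with cond(A) measuring how well the chosen SOA jitter disentangles the overlap. A self-contained sketch; onset times, SOAs and response shapes below are invented, not the study's:

```python
import numpy as np

def ls_deconvolve(y, onset_lists, resp_len):
    """Least-squares recovery of overlapping evoked responses.
    onset_lists holds one list of onset samples per stimulus type;
    the condition number of the design matrix reflects how well the
    SOA jitter makes the problem well-posed."""
    n = len(y)
    cols = []
    for onsets in onset_lists:
        for lag in range(resp_len):
            col = np.zeros(n)
            for t0 in onsets:
                if t0 + lag < n:
                    col[t0 + lag] += 1.0
            cols.append(col)
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs.reshape(len(onset_lists), resp_len), np.linalg.cond(A)

rng = np.random.default_rng(0)
resp_len, n = 50, 800
t = np.arange(resp_len)
truth = np.vstack([np.sin(np.pi * t / resp_len),            # response to stimulus 1
                   np.exp(-t / 10.0) - np.exp(-t / 5.0)])   # response to stimulus 2
onsets = [[40 + 70 * k + int(rng.integers(0, 20)) for k in range(10)],
          [75 + 70 * k + int(rng.integers(0, 20)) for k in range(10)]]
y = np.zeros(n)
for resp, ons in zip(truth, onsets):
    for t0 in ons:
        y[t0:t0 + resp_len] += resp          # responses overlap in the record
est, cond_A = ls_deconvolve(y, onsets, resp_len)
```

    With a fixed SOA the shifted columns become (nearly) linearly dependent and cond(A) blows up; jittering the SOAs is what keeps the inversion stable, which is the design criterion the abstract describes.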

  5. Large seismic source imaging from old analogue seismograms

    NASA Astrophysics Data System (ADS)

    Caldeira, Bento; Buforn, Elisa; Borges, José; Bezzeghoud, Mourad

    2017-04-01

    In this work we present a procedure to recover ground motions, in a proper digital form, from old seismograms on analogue physical supports (paper or microfilm), in order to study the source rupture process with modern finite-source inversion tools. Regardless of the quality of the analogue data and of the available digitizing technologies, recovering ground motions with accurate metrics from old seismograms is often an intricate task. Frequently the general parameters of the analogue instrument response that allow the shape of the ground motions to be recovered (free periods and damping) are known, but the magnification that sets their metric is dubious. It is in these situations that the procedure applies. The procedure is based on assigning the moment magnitude value to the integral of the apparent source time function (STF), which is estimated by deconvolving a synthetic elementary seismogram from the corresponding observed seismogram corrected with an instrument response affected by an improper magnification. Two delicate issues in the process are (1) the computation of the synthetic elementary seismograms, which must include later phases when applied to large earthquakes (the signal portions should be 3 or 4 times longer than the rupture time), and (2) the deconvolution used to calculate the apparent STF. In the present version of the procedure, the Direct Solution Method was used to compute the elementary seismograms, and the deconvolution was performed in the time domain by an iterative algorithm that constrains the STF to remain positive and time limited. The method was tested on synthetic data to assess its accuracy and robustness. Finally, a set of 17 old analogue seismograms from the 1939 Santa Maria (Azores) earthquake (Mw = 7.1) was used to recover the waveforms in the required digital form, from which the finite-source rupture model (slip distribution) can be computed by inversion.
    Acknowledgements: This work is co-financed by the European Union through the European Regional Development Fund under COMPETE 2020 (Operational Program for Competitiveness and Internationalization) through the ICT project (UID/GEO/04683/2013) under the reference POCI-01-0145-FEDER-007690.
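    An iterative time-domain deconvolution constrained to positive, time-limited source time functions can be sketched as a projected Landweber iteration: update the STF estimate along the data-misfit gradient, then project onto the constraint set. This is an illustrative stand-in, not necessarily the authors' exact algorithm, and the elementary seismogram below is a toy decaying exponential:

```python
import numpy as np

def constrained_stf_deconvolution(obs, elem, stf_len, n_iter=2000):
    """Estimate a source time function s from obs ~ elem * s (circular
    convolution), keeping s non-negative and limited to stf_len samples."""
    n = len(obs)
    E = np.fft.rfft(elem, n)
    step = 1.0 / np.max(np.abs(E)) ** 2          # Landweber step size
    s = np.zeros(n)
    for _ in range(n_iter):
        pred = np.fft.irfft(np.fft.rfft(s) * E, n)
        grad = np.fft.irfft(np.fft.rfft(obs - pred) * np.conj(E), n)
        s = np.maximum(s + step * grad, 0.0)     # positivity projection
        s[stf_len:] = 0.0                        # finite-duration projection
    return s[:stf_len]

n = 256
t = np.arange(n)
elem = np.exp(-t / 10.0)                         # toy elementary seismogram
stf_true = np.concatenate([np.linspace(0, 1, 10), np.linspace(1, 0, 10)])
obs = np.fft.irfft(np.fft.rfft(np.pad(stf_true, (0, n - 20))) * np.fft.rfft(elem), n)
stf_rec = constrained_stf_deconvolution(obs, elem, stf_len=20)
```

    The two projections are exactly the constraints named in the abstract: positivity and limited duration; they stabilize the otherwise ill-posed deconvolution.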

  6. New deconvolution method for microscopic images based on the continuous Gaussian radial basis function interpolation model.

    PubMed

    Chen, Zhaoxue; Chen, Hao

    2014-01-01

    A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed in the same continuous GRBF model, so that image degradation reduces to the convolution of two continuous Gaussian functions, and image deconvolution is converted into computing the weighted coefficients of the two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of the restored images. To overcome the drawback of long computation times, graphics-processing-unit multithreading or an increased spacing of the control points is adopted, respectively, to speed up the implementation of the GRBF method. The experiments show that, based on the continuous GRBF model, image deconvolution can be efficiently implemented by the method, which also has considerable reference value for the study of three-dimensional microscopic image deconvolution.
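    The simplification this method relies on is the Gaussian closure property: a Gaussian of width σ1 convolved with a Gaussian PSF of width σ2 is again a Gaussian, of width √(σ1² + σ2²). An image written as a weighted sum of Gaussian basis functions therefore stays in the same family after blurring, and deconvolution reduces to a linear solve for the control-point weights. A 1-D illustration with arbitrary parameter values (amplitude bookkeeping is simplified by defining the blur directly in the broadened basis):

```python
import numpy as np

def gaussian(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2)

x = np.linspace(0.0, 10.0, 401)
centers = np.linspace(0.0, 10.0, 21)          # control points
s_basis, s_psf = 0.25, 0.35
s_blur = np.hypot(s_basis, s_psf)             # Gaussian closure property

rng = np.random.default_rng(1)
w_true = rng.random(centers.size)

# the blurred image lies exactly in the broadened-Gaussian basis
blurred = sum(w * gaussian(x, c, s_blur) for w, c in zip(w_true, centers))

# deconvolution = a linear solve for the control-point weights
B = np.column_stack([gaussian(x, c, s_blur) for c in centers])
w_est, *_ = np.linalg.lstsq(B, blurred, rcond=None)
restored = sum(w * gaussian(x, c, s_basis) for w, c in zip(w_est, centers))
```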

  7. Minimum entropy deconvolution and blind equalisation

    NASA Technical Reports Server (NTRS)

    Satorius, E. H.; Mulligan, J. J.

    1992-01-01

    Relationships between minimum entropy deconvolution, developed primarily for geophysics applications, and blind equalization are pointed out. It is seen that a large class of existing blind equalization algorithms are directly related to the scale-invariant cost functions used in minimum entropy deconvolution. Thus the extensive analyses of these cost functions can be directly applied to blind equalization, including the important asymptotic results of Donoho.
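    The scale-invariant cost functions in question are Wiggins-style varimax norms: normalized fourth-moment measures of output "spikiness" that are unchanged by rescaling, which is exactly the property blind equalizers need since the channel gain is unknown. A minimal illustration of the measure itself (not a full MED filter design):

```python
import numpy as np

def varimax(y):
    """Wiggins varimax norm: a scale-invariant spikiness measure,
    maximized in minimum entropy deconvolution filter design."""
    y = np.asarray(y, dtype=float)
    return float((y ** 4).sum() / ((y ** 2).sum() ** 2))

# a sparse spike train versus a smoothed version of it
spiky = np.zeros(100)
spiky[10], spiky[60] = 1.0, -0.7
smooth = np.convolve(spiky, np.ones(15) / 15.0, mode="same")
```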

  8. Scalar flux modeling in turbulent flames using iterative deconvolution

    NASA Astrophysics Data System (ADS)

    Nikolaou, Z. M.; Cant, R. S.; Vervisch, L.

    2018-04-01

    In the context of large eddy simulations, deconvolution is an attractive alternative for modeling the unclosed terms appearing in the filtered governing equations. Such methods have been used in a number of studies for non-reacting and incompressible flows; however, their application in reacting flows is limited in comparison. Deconvolution methods originate from clearly defined operations, and in theory they can be used in order to model any unclosed term in the filtered equations including the scalar flux. In this study, an iterative deconvolution algorithm is used in order to provide a closure for the scalar flux term in a turbulent premixed flame by explicitly filtering the deconvoluted fields. The assessment of the method is conducted a priori using a three-dimensional direct numerical simulation database of a turbulent freely propagating premixed flame in a canonical configuration. In contrast to most classical a priori studies, the assessment is more stringent as it is performed on a much coarser mesh which is constructed using the filtered fields as obtained from the direct simulations. For the conditions tested in this study, deconvolution is found to provide good estimates both of the scalar flux and of its divergence.
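    Although the paper's specific algorithm is not reproduced here, the canonical iterative deconvolution in the LES literature is the van Cittert iteration: repeatedly add back the difference between the filtered field and the re-filtered current estimate. A 1-D periodic sketch with a Gaussian filter (illustrative assumptions throughout):

```python
import numpy as np

def van_cittert(filtered, G_hat, n_iter=10):
    """van Cittert iterative deconvolution (spectral form):
    phi <- phi + (filtered - G * phi), convolutions done via FFT."""
    n = len(filtered)
    phi = filtered.copy()
    for _ in range(n_iter):
        refiltered = np.fft.irfft(np.fft.rfft(phi) * G_hat, n)
        phi = phi + (filtered - refiltered)
    return phi

n = 128
x = np.arange(n)
truth = np.sin(2 * np.pi * 3 * x / n) + 0.5 * np.sin(2 * np.pi * 7 * x / n)
g = np.exp(-0.5 * ((x - n // 2) / 2.0) ** 2)     # Gaussian LES-type filter
g = np.roll(g, -(n // 2))
g /= g.sum()
G_hat = np.fft.rfft(g)                            # filter transfer function
filtered = np.fft.irfft(np.fft.rfft(truth) * G_hat, n)
deconv = van_cittert(filtered, G_hat, n_iter=10)
```

    Each retained Fourier mode converges at rate (1 − Ĝ)ᵏ, so well-resolved scales are recovered quickly while scales the filter removed entirely stay unrecoverable, which is why such closures are assessed a priori against filtered DNS fields.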

  9. Motion correction of PET brain images through deconvolution: II. Practical implementation and algorithm optimization

    NASA Astrophysics Data System (ADS)

    Raghunath, N.; Faber, T. L.; Suryanarayanan, S.; Votaw, J. R.

    2009-02-01

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique was studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies to demonstrate marked improvement. Thus, the deconvolution technique presented here appears promising as a valid alternative to existing motion correction methods for PET. It has the potential for deblurring an image from any modality if the causative motion is known and its effect can be represented in a system matrix.

  10. Rapid perfusion quantification using Welch-Satterthwaite approximation and analytical spectral filtering

    NASA Astrophysics Data System (ADS)

    Krishnan, Karthik; Reddy, Kasireddy V.; Ajani, Bhavya; Yalavarthy, Phaneendra K.

    2017-02-01

    CT and MR perfusion weighted imaging (PWI) enable quantification of perfusion parameters in stroke studies. These parameters are calculated from the residual impulse response function (IRF) based on a physiological model for tissue perfusion. The standard approach for estimating the IRF is deconvolution using oscillatory-limited singular value decomposition (oSVD) or Frequency Domain Deconvolution (FDD). FDD is widely recognized as the fastest approach currently available for deconvolution of CT Perfusion/MR PWI. In this work, three faster methods are proposed. The first is a direct (model based) crude approximation to the final perfusion quantities (Blood flow, Blood volume, Mean Transit Time and Delay) using the Welch-Satterthwaite approximation for gamma fitted concentration time curves (CTC). The second method is a fast accurate deconvolution method, we call Analytical Fourier Filtering (AFF). The third is another fast accurate deconvolution technique using Showalter's method, we call Analytical Showalter's Spectral Filtering (ASSF). Through systematic evaluation on phantom and clinical data, the proposed methods are shown to be computationally more than twice as fast as FDD. The two deconvolution based methods, AFF and ASSF, are also shown to be quantitatively accurate compared to FDD and oSVD.
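    The deconvolution at the heart of all these methods is the tracer-kinetics relation C_tissue(t) = (AIF ⊛ k)(t), where the flow-scaled residue function k(t) = CBF·R(t) is recovered by inverting the convolution with the arterial input function. The sketch below uses thresholded Fourier division as a generic stand-in for oSVD/FDD; all curves and parameter values are synthetic:

```python
import numpy as np

def fourier_division_irf(ctc, aif, reg=0.05):
    """Deconvolve a tissue concentration curve by the AIF using
    thresholded (regularized) Fourier division."""
    n = len(ctc)
    A, C = np.fft.rfft(aif), np.fft.rfft(ctc)
    keep = np.abs(A) > reg * np.abs(A).max()   # suppress ill-conditioned bins
    H = np.zeros_like(C)
    H[keep] = C[keep] / A[keep]
    return np.fft.irfft(H, n)

n = 128
t = np.arange(n, dtype=float)
aif = t ** 3 * np.exp(-t / 1.5)                # gamma-variate arterial input
cbf, mtt = 0.6, 4.0
irf_true = cbf * np.exp(-t / mtt)              # exponential residue model
ctc = np.fft.irfft(np.fft.rfft(aif) * np.fft.rfft(irf_true), n)
irf_est = fourier_division_irf(ctc, aif)
```

    The perfusion summary parameters then follow from the recovered curve: CBF from its peak, CBV from its area, and MTT from their ratio (central volume theorem).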

  11. A biomechanical study of artificial cervical discs using computer simulation.

    PubMed

    Ahn, Hyung Soo; DiAngelo, Denis J

    2008-04-15

    A virtual simulation model of the subaxial cervical spine was used to study the biomechanical effects of various disc prosthesis designs. The objective was to study the biomechanics of different design features of cervical disc arthroplasty devices. Disc arthroplasty is an alternative approach to cervical fusion surgery for restoring and maintaining motion at a diseased spinal segment. Different types of cervical disc arthroplasty devices exist and vary based on their placement and the degrees of motion offered. A virtual dynamic model of the subaxial cervical spine was used to study 3 different prosthetic disc designs (PDD): (1) PDD-I: the center of rotation of a spherical joint located at the mid C5-C6 disc, (2) PDD-II: the center of rotation of a spherical joint located 6.5 mm below the mid C5-C6 disc, and (3) PDD-III: the center of rotation of a spherical joint in a plane located at the C5-C6 disc level. A constrained spherical joint placed at the disc level (PDD-I) significantly increased facet loads during extension. Lowering the rotational axis of the spherical joint towards the subjacent body (PDD-II) caused a marginal increase in facet loading during flexion, extension, and lateral bending. Lastly, unconstraining the spherical joint to move freely in a plane (PDD-III) minimized facet load build-up during all loading modes. The simulation model showed the impact that simple design changes may have on cervical disc dynamics. The predicted facet loads calculated from the computer model have to be validated experimentally.

  12. Flux amplification in helicity injected spherical tori

    NASA Astrophysics Data System (ADS)

    Tang, X. Z.; Boozer, A. H.

    2005-04-01

    An important measure of the effective current drive by helicity injection into spheromaks and spherical tori is provided by the flux amplification factor, defined as the ratio between the closed poloidal flux in the relaxed mean field and the initial injector vacuum poloidal flux. Flux amplification in magnetic helicity injection is governed by a resonant behavior for Taylor-relaxed plasmas satisfying j = kB. Under the finite net toroidal flux constraint in a spherical torus (ST), the constrained linear resonance k1c is upshifted substantially from the primary Jensen-Chu resonance k1 that was known to be responsible for flux amplification in spheromak formation. Standard coaxial helicity injection into a ST operates at large M, with M the characteristic dimensionless parameter defined as the ratio between the toroidal flux in the discharge chamber and the injector poloidal flux. Meaningful flux amplification for ST plasmas is limited by a critical kr at which the edge toroidal field reverses its direction. The kr is downshifted from k1 by a small amount inversely proportional to M. The maximum flux amplification factor Ar ≡ A(k = kr) scales linearly with M. At the other end of k, substantial flux amplification A(k = ko) ~ 1 becomes available for ko that scales inversely proportional to M, a significant departure from that in spheromak formation. These important parameters follow the inequality ko

  13. How to break the density-anisotropy degeneracy in spherical stellar systems

    NASA Astrophysics Data System (ADS)

    Read, J. I.; Steger, P.

    2017-11-01

    We present a new non-parametric Jeans code, GravSphere, that recovers the density ρ(r) and velocity anisotropy β(r) of spherical stellar systems, assuming only that they are in a steady state. Using a large suite of mock data, we confirm that with only line-of-sight velocity data, GravSphere provides a good estimate of the density at the projected stellar half-mass radius, ρ(R1/2), but is not able to measure ρ(r) or β(r), even with 10 000 tracer stars. We then test three popular methods for breaking this ρ-β degeneracy: using multiple populations with different R1/2; using higher order `virial shape parameters' (VSPs); and including proper motion data. We find that two populations provide an excellent recovery of ρ(r) in-between their respective R1/2. However, even with a total of ~7000 tracers, we are not able to constrain β(r) well for either population. By contrast, using 1000 tracers with higher order VSPs we are able to measure ρ(r) over the range 0.5 < r/R1/2 < 2 and broadly constrain β(r). Including proper motion data for all stars gives an even better performance, with ρ and β well measured over the range 0.25 < r/R1/2 < 4. Finally, we test GravSphere on a triaxial mock galaxy that has axis ratios typical of a merger remnant [1 : 0.8 : 0.6]. In this case, GravSphere can become slightly biased. However, we find that when this occurs the data are poorly fit, allowing us to detect when such departures from spherical symmetry become problematic.

  14. The Hollow Spheres of the Orgueil Meteorite: A Re-Examination

    NASA Technical Reports Server (NTRS)

    Hoover, Richard B.; Jerman, Gregory; Rossignold-Strick, Maritine

    2005-01-01

    In 1971, Rossignol-Strick and Barghoorn provided images and a description of a number of spherical hollow microstructures showing well-defined walls in an acid-macerated extract of the Orgueil CI carbonaceous meteorite. Other forms, such as membranes and spiral-shaped structures, were also reported. The carbon-rich (kerogen) hollow spheres were found to be in a narrowly constrained distribution of sizes (mainly 7 to 10 microns in diameter). Electron microprobe analysis revealed that these spheres contained carbon and possibly P, N, and K. It was established that these forms could not be attributed to pollen or other recent terrestrial contaminants. It was concluded that they most probably represented organic coatings on globules of glass, olivine or magnetite in the meteorite. However, recent studies of the Orgueil meteorite have been carried out at the NASA/Marshall Space Flight Center with the S-4000 Hitachi Field Emission Scanning Electron Microscope (FESEM). These investigations have revealed the presence of numerous carbon-encrusted spherical magnetite platelets and spherical and ovoidal bodies of elemental iron in situ in freshly fractured interior surfaces of the meteorite. Their size range is also very narrowly constrained (typically approximately 6 to 12 microns in diameter). High-resolution images reveal that these bodies are also encrusted with a thin carbonaceous sheath and are surrounded by short nanofibrils that are shown by EDAX elemental analysis to be composed of high-purity iron. We present secondary- and backscatter-electron FESEM images and associated EDAX elemental analyses and 2D X-ray maps of these forms as we re-examine the hollow spheres of Orgueil and attempt to determine whether they are representatives of the same population of indigenous microstructures.

  15. Seismic interferometry by crosscorrelation and by multidimensional deconvolution: a systematic comparison

    NASA Astrophysics Data System (ADS)

    Wapenaar, Kees; van der Neut, Joost; Ruigrok, Elmer; Draganov, Deyan; Hunziker, Juerg; Slob, Evert; Thorbecke, Jan; Snieder, Roel

    2010-05-01

    In recent years, seismic interferometry (or Green's function retrieval) has led to many applications in seismology (exploration, regional and global), underwater acoustics and ultrasonics. One of the explanations for this broad interest lies in the simplicity of the methodology. In passive data applications a simple crosscorrelation of responses at two receivers gives the impulse response (Green's function) at one receiver as if there were a source at the position of the other. In controlled-source applications the procedure is similar, except that it involves in addition a summation along the sources. It has also been recognized that the simple crosscorrelation approach has its limitations. From the various theoretical models it follows that there are a number of underlying assumptions for retrieving the Green's function by crosscorrelation. The most important assumptions are that the medium is lossless and that the waves are equipartitioned. In heuristic terms the latter condition means that the receivers are illuminated isotropically from all directions, which is for example achieved when the sources are regularly distributed along a closed surface, the sources are mutually uncorrelated and their power spectra are identical. Despite the fact that in practical situations these conditions are at most only partly fulfilled, the results of seismic interferometry are generally quite robust, but the retrieved amplitudes are unreliable and the results are often blurred by artifacts. Several researchers have proposed to address some of the shortcomings by replacing the correlation process by deconvolution. In most cases the employed deconvolution procedure is essentially 1-D (i.e., trace-by-trace deconvolution). This compensates the anelastic losses, but it does not account for the anisotropic illumination of the receivers. To obtain more accurate results, seismic interferometry by deconvolution should acknowledge the 3-D nature of the seismic wave field. 
    Hence, from a theoretical point of view, the trace-by-trace process should be replaced by a full 3-D wave-field deconvolution process. Interferometry by multidimensional deconvolution is more accurate than the trace-by-trace correlation and deconvolution approaches, but the processing is more involved. In the presentation we will give a systematic analysis of seismic interferometry by crosscorrelation versus multidimensional deconvolution and discuss applications of both approaches.
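    The crosscorrelation principle in the first paragraph fits in a few lines: two receivers recording the same noise wavefield with different traveltimes yield a crosscorrelation whose peak lag equals the inter-receiver traveltime, i.e. the Green's-function arrival; the trace-by-trace deconvolution variant replaces the correlation by a regularized spectral division. Delays and amplitudes below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096
src = rng.normal(size=n)                   # uncorrelated noise source
rec_a = np.roll(src, 30)                   # traveltime source -> A: 30 samples
rec_b = np.roll(src, 50)                   # traveltime source -> B: 50 samples

A, B = np.fft.rfft(rec_a), np.fft.rfft(rec_b)

# interferometry by crosscorrelation: peak lag = A -> B traveltime (20 samples)
xcorr = np.fft.irfft(B * np.conj(A), n)

# trace-by-trace deconvolution variant (regularized spectral division)
eps = 1e-3 * np.mean(np.abs(A) ** 2)
xdecon = np.fft.irfft(B * np.conj(A) / (np.abs(A) ** 2 + eps), n)
```

    In this lossless 1-D toy both variants peak at the same lag; the division version additionally whitens the source spectrum, which is the mechanism by which deconvolution interferometry compensates anelastic losses.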

  16. Time reversal focusing of high amplitude sound in a reverberation chamber.

    PubMed

    Willardson, Matthew L; Anderson, Brian E; Young, Sarah M; Denison, Michael H; Patchett, Brian D

    2018-02-01

    Time reversal (TR) is a signal processing technique that can be used for intentional sound focusing. While it has been studied in room acoustics, the application of TR to produce a high amplitude focus of sound in a room has not yet been explored. The purpose of this study is to create a virtual source of spherical waves with TR that are of sufficient intensity to study nonlinear acoustic propagation. A parameterization study of deconvolution, one-bit, clipping, and decay compensation TR methods is performed to optimize high amplitude focusing and temporal signal focus quality. Of all TR methods studied, clipping is shown to produce the highest amplitude focal signal. An experiment utilizing eight horn loudspeakers in a reverberation chamber is done with the clipping TR method. A peak focal amplitude of 9.05 kPa (173.1 dB peak re 20 μPa) is achieved. Results from this experiment indicate that this high amplitude focusing is a nonlinear process.
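    The physics of TR focusing, and why clipping can raise the focal amplitude, fits in a short sketch: re-emitting the time-reversed room response focuses energy because the signal received at the focus is the response's autocorrelation, which peaks at the focal time; under a fixed peak-amplitude (amplifier-limited) constraint, a one-bit (fully clipped) emission delivers at least as large a focus, since Σ|h| ≥ Σh²/max|h|. The impulse response here is a toy exponentially decaying noise tail, not a measured room response:

```python
import numpy as np

rng = np.random.default_rng(3)
L = 512
h = rng.normal(size=L) * np.exp(-np.arange(L) / 100.0)  # toy reverberant response

# standard TR: emit the reversed response, normalized to unit peak amplitude
emit_tr = h[::-1] / np.abs(h).max()
focus_tr = np.convolve(emit_tr, h)          # field at the focal point

# one-bit (fully clipped) emission with the same unit peak amplitude
emit_ob = np.sign(h[::-1])
focus_ob = np.convolve(emit_ob, h)
```

    The amplitude gain of clipping comes at the price of higher temporal sidelobes, which is why the paper trades off focal amplitude against temporal focus quality across the TR variants.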

  17. Physicochemical properties and micro-structural characteristics in starch from kudzu root as affected by cross-linking.

    PubMed

    Chen, Boru; Dang, Leping; Zhang, Xiao; Fang, Wenzhi; Hou, Mengna; Liu, Tiankuo; Wang, Zhanzhong

    2017-03-15

    Kudzu starch was cross-linked with sodium trimetaphosphate (STMP) at different temperatures, reaction times and STMP concentrations in this work. The cross-linked starches (CLSs) were further fractionated into cross-linked amylose and amylopectin in order to compare the effect of cross-linking on the microstructure. Under the scanning electron microscope (SEM), the CLSs displayed spherical and polygonal shapes resembling those of native starch (NS). X-ray diffraction (XRD) revealed that amylose of native starch (A), NS and CLS displayed a combination of A-type and B-type structure, which was not found in the amylose of cross-linked starch (CLA). Deconvoluted Fourier transform infrared (FT-IR) spectra indicated that the crystalline structure of kudzu starch was progressively lost as the cross-linking reaction proceeded. The CLSs exhibited higher retrogradation and freeze-thaw stability than NS, accompanied by a significant decrease in sedimentation, transparency, swelling power and solubility. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Evaluation of Isoprene Chain Extension from PEO Macromolecular Chain Transfer Agents for the Preparation of Dual, Invertible Block Copolymer Nanoassemblies.

    PubMed

    Bartels, Jeremy W; Cauët, Solène I; Billings, Peter L; Lin, Lily Yun; Zhu, Jiahua; Fidge, Christopher; Pochan, Darrin J; Wooley, Karen L

    2010-09-14

    Two RAFT-capable PEO macro-CTAs, 2 and 5 kDa, were prepared and used for the polymerization of isoprene, which yielded well-defined block copolymers of varied lengths and compositions. GPC analysis of the PEO macro-CTAs and block copolymers showed residual unreacted PEO macro-CTA. Mathematical deconvolution of the GPC chromatograms allowed for the estimation of the blocking efficiency: about 50% for the 5 kDa PEO macro-CTA and 64% for the 2 kDa CTA. Self-assembly of the block copolymers in both water and decane was investigated, and the resulting regular and inverse assemblies, respectively, were analyzed with DLS, AFM, and TEM to ascertain their dimensions and properties. Assembly of PEO-b-PIp block copolymers in aqueous solution resulted in well-defined micelles of varying sizes, while assembly in hydrophobic organic solvent resulted in the formation of different morphologies, including large aggregates and well-defined cylindrical and spherical structures.
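    Mathematical deconvolution of a GPC trace of this sort can be sketched as fitting the chromatogram as a sum of assumed component peak shapes and reading the blocking efficiency off the area fractions. Everything below (elution positions, widths, the 64% area fraction) is invented for illustration, not the paper's data:

```python
import numpy as np

def gauss(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2)

x = np.linspace(10.0, 20.0, 500)                       # elution volume axis
# assumed component shapes: the copolymer elutes earlier (higher molar mass)
widths = np.array([0.5, 0.4])
shapes = np.column_stack([gauss(x, 13.0, widths[0]),   # PEO-b-PIp block copolymer
                          gauss(x, 15.5, widths[1])])  # unreacted PEO macro-CTA

# synthetic overlapping chromatogram with a 64% copolymer area fraction
true_areas = np.array([0.64, 0.36])
signal = shapes @ (true_areas / widths)

# deconvolution: solve for component amplitudes, convert to areas
amps, *_ = np.linalg.lstsq(shapes, signal, rcond=None)
areas = amps * widths                                  # Gaussian area ∝ amplitude × width
blocking_efficiency = float(areas[0] / areas.sum())
```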

  19. Constraining some Horndeski gravity theories

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Sourav; Chakraborty, Sumanta

    2017-02-01

    We discuss two spherically symmetric solutions admitted by the Horndeski (or scalar-tensor) theory in the context of Solar System and astrophysical scenarios. One of these solutions is derived for Einstein-Gauss-Bonnet gravity, while the other originates from the coupling of the Gauss-Bonnet invariant with a scalar field. Specifically, we discuss the perihelion precession and the bending angle of light for these two different spherically symmetric spacetimes derived in Maeda and Dadhich [Phys. Rev. D 75, 044007 (2007), 10.1103/PhysRevD.75.044007] and Sotiriou and Zhou [Phys. Rev. D 90, 124063 (2014), 10.1103/PhysRevD.90.124063], respectively. The latter, in particular, applies only to black-hole spacetimes. We further delineate the numerical bounds on the relevant parameters of these theories obtained from such computations.

  20. The energy-release rate and “self-force” of dynamically expanding spherical and plane inclusion boundaries with dilatational eigenstrain

    NASA Astrophysics Data System (ADS)

    Markenscoff, Xanthippi; Ni, Luqun

    2010-01-01

    In the context of the linear theory of elasticity with eigenstrains, the radiated field, including inertia effects, of a radially expanding spherical inclusion with dilatational eigenstrain is obtained on the basis of the dynamic Green's function. The field of a half-space inclusion boundary (with dilatational eigenstrain) moving from rest in general subsonic motion is obtained by a limiting process from the spherically expanding inclusion as the radius tends to infinity while the eigenstrain remains constrained; this is the minimum-energy solution. The global energy-release rate required to move the plane inclusion boundary and to create an incremental region of eigenstrain is defined analogously to the one for moving cracks and dislocations, and represents the mechanical rate of work that must be provided for the expansion of the inclusion. The calculated value, which is the "self-force" of the expanding inclusion, has a static component plus a dynamic one that depends only on the current value of the velocity. In the case of the spherical boundary, there is an additional contribution accounting for the jump in the strain at the farthest part at the back of the inclusion having had time to reach the front boundary, thus making the dynamic "self-force" history dependent.

  1. An optimized algorithm for multiscale wideband deconvolution of radio astronomical images

    NASA Astrophysics Data System (ADS)

    Offringa, A. R.; Smirnov, O.

    2017-10-01

    We describe a new multiscale deconvolution algorithm that can also be used in a multifrequency mode. The algorithm only affects the minor clean loop. In single-frequency mode, the minor loop of our improved multiscale algorithm is over an order of magnitude faster than the casa multiscale algorithm, and produces results of similar quality. For multifrequency deconvolution, a technique named joined-channel cleaning is used. In this mode, the minor loop of our algorithm is two to three orders of magnitude faster than casa msmfs. We extend the multiscale mode with automated scale-dependent masking, which allows structures to be cleaned below the noise. We describe a new scale-bias function for use in multiscale cleaning. We test a second deconvolution method that is a variant of the moresane deconvolution technique, and uses a convex optimization technique with isotropic undecimated wavelets as dictionary. On simple well-calibrated data, the convex optimization algorithm produces visually more representative models. On complex or imperfect data, the convex optimization algorithm has stability issues.
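    The minor loop that the abstract refers to can be illustrated with the classic single-scale Högbom CLEAN: repeatedly locate the peak of the residual, subtract a scaled copy of the PSF there, and record a model component. This is only the textbook baseline, not the authors' optimized multiscale algorithm:

```python
import numpy as np

def clean_minor_loop(dirty, psf, gain=0.1, niter=200, threshold=0.0):
    """Single-scale Hogbom CLEAN minor loop.

    dirty: 2D dirty image; psf: square, odd-sized PSF with peak 1 at centre.
    Returns the component model and the final residual image.
    """
    residual = dirty.copy()
    model = np.zeros_like(dirty)
    half = psf.shape[0] // 2
    for _ in range(niter):
        iy, ix = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
        peak = residual[iy, ix]
        if abs(peak) <= threshold:
            break
        model[iy, ix] += gain * peak
        # Subtract the scaled PSF centred on the peak (clipped at image edges)
        y0, y1 = max(0, iy - half), min(residual.shape[0], iy + half + 1)
        x0, x1 = max(0, ix - half), min(residual.shape[1], ix + half + 1)
        py0, px0 = y0 - (iy - half), x0 - (ix - half)
        residual[y0:y1, x0:x1] -= gain * peak * psf[py0:py0 + (y1 - y0),
                                                    px0:px0 + (x1 - x0)]
    return model, residual
```

    Multiscale CLEAN generalizes this by searching over a set of extended basis functions rather than delta components, which is where the paper's optimizations apply.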

  2. New regularization scheme for blind color image deconvolution

    NASA Astrophysics Data System (ADS)

    Chen, Li; He, Yu; Yap, Kim-Hui

    2011-01-01

    This paper proposes a new regularization scheme to address blind color image deconvolution. Color images generally have a significant correlation among the red, green, and blue channels. Conventional blind monochromatic deconvolution algorithms handle each color channel independently, thereby ignoring the interchannel correlation present in color images. In view of this, a unified regularization scheme for images is developed to recover edges of color images and reduce color artifacts. In addition, by using the color image properties, a spectral-based regularization operator is adopted to impose constraints on the blurs. Further, this paper proposes a reinforcement regularization framework that integrates a soft parametric learning term in addressing blind color image deconvolution. A blur modeling scheme is developed to evaluate the relevance of various parametric blur structures, and the information is integrated into the deconvolution scheme. An optimization procedure called alternating minimization is then employed to iteratively minimize the image- and blur-domain cost functions. Experimental results show that the method is able to achieve satisfactory restored color images under different blurring conditions.
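    The alternating-minimization idea can be sketched in its barest form: with the blur fixed, solve a regularized least-squares problem for the image in the Fourier domain; with the image fixed, solve the symmetric problem for the blur. This is a bare-bones 1D sketch with simple quadratic regularizers, standing in for the paper's edge- and blur-domain regularization schemes; all parameter values are illustrative:

```python
import numpy as np

def alternating_blind_deconv(blurred, kernel_size, iterations=30, reg=1e-2):
    """Alternate regularized Fourier-domain least-squares updates for the
    image (kernel fixed) and the kernel (image fixed), with a support,
    nonnegativity, and unit-sum constraint on the kernel."""
    n = len(blurred)
    B = np.fft.rfft(blurred)
    k = np.zeros(n)
    k[:kernel_size] = 1.0 / kernel_size        # flat initial blur guess
    for _ in range(iterations):
        K = np.fft.rfft(k)
        X = B * np.conj(K) / (np.abs(K) ** 2 + reg)   # image step
        K = B * np.conj(X) / (np.abs(X) ** 2 + reg)   # kernel step
        k = np.fft.irfft(K, n)
        k[kernel_size:] = 0.0                  # enforce known support
        k = np.maximum(k, 0.0)                 # nonnegativity
        k[:kernel_size] += 1e-9                # avoid an all-zero kernel
        k /= k.sum()                           # unit DC gain
    return np.fft.irfft(X, n), k
```

    Real blind deconvolution needs far stronger priors than this to avoid the trivial (identity-blur) solution; the constraints on `k` here are the minimal stand-ins for those priors.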

  3. Methods and Apparatus for Reducing Multipath Signal Error Using Deconvolution

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra (Inventor); Lau, Kenneth H. (Inventor)

    1999-01-01

    A deconvolution approach to adaptive signal processing has been applied to the elimination of signal multipath errors, as embodied in one preferred embodiment in a global positioning system receiver. The method and receiver of the present invention estimate and then compensate for multipath effects in a comprehensive manner. Application of deconvolution, along with other adaptive identification and estimation techniques, results in a completely novel GPS (Global Positioning System) receiver architecture.

  4. Improving space debris detection in GEO ring using image deconvolution

    NASA Astrophysics Data System (ADS)

    Núñez, Jorge; Núñez, Anna; Montojo, Francisco Javier; Condominas, Marta

    2015-07-01

    In this paper we present a method based on image deconvolution to improve the detection of space debris, mainly in the geostationary ring. Among the deconvolution methods we chose the iterative Richardson-Lucy (R-L) method, as the one that achieves the best results with a reasonable amount of computation. For this work, we used two sets of real 4096 × 4096 pixel test images obtained with the Telescope Fabra-ROA at Montsec (TFRM). Using the first set of data, we establish the optimal number of iterations at 7, and applying the R-L method with 7 iterations to the images, we show that the astrometric accuracy does not vary significantly, while the limiting magnitude of the deconvolved images increases significantly compared to the original ones. The increase is on average about 1.0 magnitude, which means that objects up to 2.5 times fainter can be detected after deconvolution. The application of the method to the second set of test images, which includes several faint objects, shows that, after deconvolution, up to four previously undetected faint objects are detected in a single frame. Finally, we carried out a study of some economic aspects of applying the deconvolution method, showing that an important economic impact can be envisaged.
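    A minimal Richardson-Lucy implementation, fixed at the 7 iterations the authors found optimal for their data (that number is specific to their images and need not transfer to other data sets):

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, iterations=7):
    """Richardson-Lucy deconvolution of a nonnegative image.

    Multiplicative updates: blur the current estimate, compare with the data,
    and back-project the ratio through the flipped PSF.
    """
    psf = psf / psf.sum()
    psf_flip = psf[::-1, ::-1]
    estimate = np.full_like(image, image.mean())  # flat initial guess
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)
        estimate *= fftconvolve(ratio, psf_flip, mode="same")
    return estimate
```

    The iteration count trades sharpening against noise amplification, which is why the paper calibrates it on a reference image set before applying it for detection.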

  5. Richardson-Lucy deconvolution as a general tool for combining images with complementary strengths.

    PubMed

    Ingaramo, Maria; York, Andrew G; Hoogendoorn, Eelco; Postma, Marten; Shroff, Hari; Patterson, George H

    2014-03-17

    We use Richardson-Lucy (RL) deconvolution to combine multiple images of a simulated object into a single image in the context of modern fluorescence microscopy techniques. RL deconvolution can merge images with very different point-spread functions, such as in multiview light-sheet microscopes, while preserving the best resolution information present in each image. We show that RL deconvolution is also easily applied to merge high-resolution, high-noise images with low-resolution, low-noise images, relevant when complementing conventional microscopy with localization microscopy. We also use RL deconvolution to merge images produced by different simulated illumination patterns, relevant to structured illumination microscopy (SIM) and image scanning microscopy (ISM). The quality of our ISM reconstructions is at least as good as reconstructions using standard inversion algorithms for ISM data, but our method follows a simpler recipe that requires no mathematical insight. Finally, we apply RL deconvolution to merge a series of ten images with varying signal and resolution levels. This combination is relevant to gated stimulated-emission depletion (STED) microscopy, and shows that merges of high-quality images are possible even in cases for which a non-iterative inversion algorithm is unknown. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
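    The multi-image merging idea can be sketched by cycling RL updates over views that share one object but have different PSFs. This is a common multiview scheme and a plausible reading of the abstract, not necessarily the exact update rule the authors use:

```python
import numpy as np
from scipy.signal import fftconvolve

def multiview_rl(images, psfs, iterations=20):
    """Cycle Richardson-Lucy updates over several views of the same object,
    each view contributing its own PSF, so the estimate inherits the best
    resolution direction of every view."""
    est = np.full_like(images[0], images[0].mean())
    for _ in range(iterations):
        for img, psf in zip(images, psfs):
            k = psf / psf.sum()
            blurred = fftconvolve(est, k, mode="same")
            ratio = img / np.maximum(blurred, 1e-12)
            est *= fftconvolve(ratio, k[::-1, ::-1], mode="same")
    return est
```

    With two orthogonally elongated PSFs (as in dual-view light-sheet imaging), each update sharpens along the axis where its view is good, so the merged estimate beats either single view.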

  6. Least-squares deconvolution of evoked potentials and sequence optimization for multiple stimuli under low-jitter conditions.

    PubMed

    Bardy, Fabrice; Dillon, Harvey; Van Dun, Bram

    2014-04-01

    Rapid presentation of stimuli in an evoked response paradigm can lead to overlap of multiple responses and consequently difficulties interpreting waveform morphology. This paper presents a deconvolution method allowing overlapping multiple responses to be disentangled. The deconvolution technique uses a least-squares error approach. A methodology is proposed to optimize the stimulus sequence associated with the deconvolution technique under low-jitter conditions. It controls the condition number of the matrices involved in recovering the responses. Simulations were performed using the proposed deconvolution technique. Multiple overlapping responses can be recovered perfectly in noiseless conditions. In the presence of noise, the amount of error introduced by the technique can be controlled a priori by the condition number of the matrix associated with the used stimulus sequence. The simulation results indicate the need for a minimum amount of jitter, as well as a sufficient number of overlap combinations, to obtain optimum results. An aperiodic model is recommended to improve reconstruction. We propose a deconvolution technique allowing multiple overlapping responses to be extracted and a method of choosing the stimulus sequence optimal for response recovery. This technique may allow audiologists, psychologists, and electrophysiologists to optimize their experimental designs involving rapidly presented stimuli, and to recover evoked overlapping responses. Copyright © 2013 International Federation of Clinical Neurophysiology. All rights reserved.
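    Least-squares recovery of overlapping responses, and the role of the design matrix's condition number, can be shown on synthetic data. The onset times, response shape, and noise level below are invented for illustration:

```python
import numpy as np

def design_matrix(onsets, n_samples, resp_len):
    """Design matrix for overlapping evoked responses: column j of the
    response template appears at each stimulus onset, so overlapping
    responses sum linearly in the recording."""
    A = np.zeros((n_samples, resp_len))
    for t in onsets:
        for j in range(resp_len):
            if t + j < n_samples:
                A[t + j, j] += 1.0
    return A

# Jittered stimulus onsets; inter-stimulus intervals shorter than the
# response, so individual responses overlap in the recording
onsets = [0, 7, 18, 26, 40, 49, 63, 70]
resp_len, n = 15, 90
true_resp = np.sin(np.linspace(0, np.pi, resp_len))
A = design_matrix(onsets, n, resp_len)
eeg = A @ true_resp + np.random.default_rng(1).normal(0, 0.01, n)

# The condition number bounds the noise amplification of the recovery,
# which is what the proposed sequence optimization controls a priori
cond = np.linalg.cond(A)
recovered, *_ = np.linalg.lstsq(A, eeg, rcond=None)
```

    A strictly periodic sequence would make many columns of `A` identical (infinite condition number); jitter is what makes the system invertible, matching the paper's observation that a minimum amount of jitter is needed.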

  7. Dense deconvolution net: Multi path fusion and dense deconvolution for high resolution skin lesion segmentation.

    PubMed

    He, Xinzi; Yu, Zhen; Wang, Tianfu; Lei, Baiying; Shi, Yiyan

    2018-01-01

    Dermoscopy imaging has been a routine examination approach for skin lesion diagnosis. Accurate segmentation is the first step for automatic dermoscopy image assessment. The main challenges for skin lesion segmentation are the numerous variations in viewpoint and scale of the skin lesion region. To handle these challenges, we propose a novel skin lesion segmentation network via a very deep dense deconvolution network based on dermoscopic images. Specifically, the deep dense layer and the generic multi-path Deep RefineNet are combined to improve the segmentation performance. The deep representation of all available layers is aggregated to form the global feature maps using skip connections. Also, the dense deconvolution layer is leveraged to capture diverse appearance features via contextual information. Finally, we apply the dense deconvolution layer to smooth the segmentation maps and obtain the final high-resolution output. Our proposed method shows superiority over state-of-the-art approaches on the publicly available 2016 and 2017 skin lesion challenge datasets, achieving accuracies of 96.0% and 93.9%, a 6.0% and 1.2% increase over the traditional method, respectively. Using the Dense Deconvolution Net, the average time for processing one test image with our proposed framework was 0.253 s.

  8. An accelerated non-Gaussianity based multichannel predictive deconvolution method with the limited supporting region of filters

    NASA Astrophysics Data System (ADS)

    Li, Zhong-xiao; Li, Zhen-chun

    2016-09-01

    The multichannel predictive deconvolution can be conducted in overlapping temporal and spatial data windows to solve the 2D predictive filter for multiple removal. Generally, the 2D predictive filter can better remove multiples at the cost of more computation time compared with the 1D predictive filter. In this paper we first use a cross-correlation strategy to determine the limited supporting region of filters, i.e., the region of the filter coefficient space where the coefficients play a major role in multiple removal. To solve the 2D predictive filter, the traditional multichannel predictive deconvolution uses the least squares (LS) algorithm, which requires that primaries and multiples be orthogonal. To relax the orthogonality assumption, the iterative reweighted least squares (IRLS) algorithm and the fast iterative shrinkage thresholding (FIST) algorithm have been used to solve the 2D predictive filter in the multichannel predictive deconvolution with the non-Gaussian maximization (L1 norm minimization) constraint on primaries. The FIST algorithm has been demonstrated to be a faster alternative to the IRLS algorithm. In this paper we introduce the FIST algorithm to solve the filter coefficients in the limited supporting region of filters. Compared with the FIST based multichannel predictive deconvolution without the limited supporting region of filters, the proposed method can reduce the computation burden effectively while achieving a similar accuracy. Additionally, the proposed method can better balance multiple removal and primary preservation than the traditional LS based multichannel predictive deconvolution and the FIST based single channel predictive deconvolution. Synthetic and field data sets demonstrate the effectiveness of the proposed method.
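    The FIST(A)-type solver can be sketched generically as minimizing 0.5||Ax − b||² + λ||x||₁ by proximal gradient steps with Nesterov acceleration. This is the standard FISTA recipe, not the paper's specific multichannel prediction-filter setup:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the L1 norm (elementwise shrinkage)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, b, lam, iterations=200):
    """FISTA for min_x 0.5*||A x - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(iterations):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)   # Nesterov momentum
        x, t = x_new, t_new
    return x
```

    Restricting the solve to a limited supporting region, as the paper proposes, corresponds to simply dropping the columns of `A` outside that region before calling the solver, which is where the computational saving comes from.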

  9. Studying Regional Wave Source Time Functions Using A Massive Automated EGF Deconvolution Procedure

    NASA Astrophysics Data System (ADS)

    Xie, J.; Schaff, D. P.

    2010-12-01

    Reliably estimated source time functions (STF) from high-frequency regional waveforms, such as Lg, Pn and Pg, provide important input for seismic source studies, explosion detection, and minimization of parameter trade-off in attenuation studies. The empirical Green’s function (EGF) method can be used for estimating STF, but it requires a strict recording condition. Waveforms from pairs of events that are similar in focal mechanism, but different in magnitude, must be recorded on-scale at the same stations for the method to work. Searching for such waveforms can be very time consuming, particularly for regional waves that contain complex path effects and have reduced S/N ratios due to attenuation. We have developed a massive, automated procedure to conduct inter-event waveform deconvolution calculations from many candidate event pairs. The procedure automatically evaluates the “spikiness” of the deconvolutions by calculating their “sdc”, which is defined as the peak divided by the background value. The background value is calculated as the mean absolute value of the deconvolution, excluding 10 s around the source time function. When the sdc values are about 10 or higher, the deconvolutions are found to be sufficiently spiky (pulse-like), indicating similar path Green’s functions and good estimates of the STF. We have applied this automated procedure to Lg waves and full regional wavetrains from 989 M ≥ 5 events in and around China, calculating about a million deconvolutions. Of these we found about 2700 deconvolutions with sdc greater than 9, which, provided they have a sufficiently broad frequency band, can be used to estimate the STF of the larger events. We are currently refining our procedure, as well as the estimated STFs. We will infer the source scaling using the STFs. We will also explore the possibility that the deconvolution procedure could complement cross-correlation in a real time event-screening process.
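    The "sdc" spikiness measure described above is straightforward to compute from a deconvolved trace; centring the excluded window on the trace's peak is an assumption beyond what the abstract specifies:

```python
import numpy as np

def sdc(deconvolution, dt, stf_window=10.0):
    """Spikiness of a deconvolution: peak amplitude divided by the mean
    absolute background, where the background excludes `stf_window` seconds
    around the peak (the paper excludes 10 s around the source time function).
    Values of ~10 or more indicate a pulse-like, usable deconvolution."""
    trace = np.asarray(deconvolution, dtype=float)
    ipeak = int(np.argmax(np.abs(trace)))
    half = int(round(stf_window / 2.0 / dt))
    mask = np.ones(trace.size, dtype=bool)
    mask[max(0, ipeak - half):ipeak + half + 1] = False
    background = np.mean(np.abs(trace[mask]))
    return np.abs(trace[ipeak]) / background
```

    A threshold on this ratio is what lets the procedure screen about a million deconvolutions automatically rather than by visual inspection.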

  10. A novel partial volume effects correction technique integrating deconvolution associated with denoising within an iterative PET image reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merlin, Thibaut, E-mail: thibaut.merlin@telecom-bretagne.eu; Visvikis, Dimitris; Fernandez, Philippe

    2015-02-15

    Purpose: Partial volume effect (PVE) plays an important role in both qualitative and quantitative PET image accuracy, especially for small structures. A previously proposed voxelwise PVE correction method applied on PET reconstructed images involves the use of Lucy–Richardson deconvolution incorporating wavelet-based denoising to limit the associated propagation of noise. The aim of this study is to incorporate the deconvolution, coupled with the denoising step, directly inside the iterative reconstruction process to further improve PVE correction. Methods: The list-mode ordered subset expectation maximization (OSEM) algorithm has been modified accordingly with the application of the Lucy–Richardson deconvolution algorithm to the current estimation of the image, at each reconstruction iteration. Acquisitions of the NEMA NU2-2001 IQ phantom were performed on a GE DRX PET/CT system to study the impact of incorporating the deconvolution inside the reconstruction [with and without the point spread function (PSF) model] in comparison to its application postreconstruction and to standard iterative reconstruction incorporating the PSF model. The impact of the denoising step was also evaluated. Images were semiquantitatively assessed by studying the trade-off between the intensity recovery and the noise level in the background estimated as relative standard deviation. Qualitative assessments of the developed methods were additionally performed on clinical cases. Results: Incorporating the deconvolution without denoising within the reconstruction achieved superior intensity recovery in comparison to both standard OSEM reconstruction integrating a PSF model and application of the deconvolution algorithm in a postreconstruction process. The addition of the denoising step permitted to limit the SNR degradation while preserving the intensity recovery. 
Conclusions: This study demonstrates the feasibility of incorporating the Lucy–Richardson deconvolution associated with a wavelet-based denoising in the reconstruction process to better correct for PVE. Future work includes further evaluations of the proposed method on clinical datasets and the use of improved PSF models.

  11. Coulomb energy of uniformly charged spheroidal shell systems.

    PubMed

    Jadhao, Vikram; Yao, Zhenwei; Thomas, Creighton K; de la Cruz, Monica Olvera

    2015-03-01

    We provide exact expressions for the electrostatic energy of uniformly charged prolate and oblate spheroidal shells. We find that uniformly charged prolate spheroids of eccentricity greater than 0.9 have lower Coulomb energy than a sphere of the same area. For the volume-constrained case, we find that a sphere has the highest Coulomb energy among all spheroidal shells. Further, we derive the change in the Coulomb energy of a uniformly charged shell due to small, area-conserving perturbations on the spherical shape. Our perturbation calculations show that buckling-type deformations on a sphere can lower the Coulomb energy. Finally, we consider the possibility of counterion condensation on the spheroidal shell surface. We employ a Manning-Oosawa two-state model approximation to evaluate the renormalized charge and analyze the behavior of the equilibrium free energy as a function of the shell's aspect ratio for both area-constrained and volume-constrained cases. Counterion condensation is seen to favor the formation of spheroidal structures over a sphere of equal area for high values of shell volume fractions.

  12. Is There a Direct Correlation Between Microvascular Wall Structure and k-Trans Values Obtained From Perfusion CT Measurements in Lymphomas?

    PubMed

    Horger, Marius; Fallier-Becker, Petra; Thaiss, Wolfgang M; Sauter, Alexander; Bösmüller, Hans; Martella, Manuela; Preibsch, Heike; Fritz, Jan; Nikolaou, Konstantin; Kloth, Christopher

    2018-05-03

    This study aimed to test the hypothesis that ultrastructural wall abnormalities of lymphoma vessels correlate with perfusion computed tomography (PCT) kinetics. Our local institutional review board approved this prospective study. Between February 2013 and June 2016, we included 23 consecutive subjects with newly diagnosed lymphoma, who were referred for computed tomography-guided biopsy (6 women, 17 men; mean age, 60.61 ± 12.43 years; range, 28-74 years) and additionally agreed to undergo PCT of the target lymphoma tissues. PCT was obtained for 40 seconds using 80 kV, 120 mAs, 64 × 0.6-mm collimation, 6.9-cm z-axis coverage, and 26 volume measurements. Mean and maximum k-trans (mL/100 mL/min), blood flow (BF; mL/100 mL/min) and blood volume (BV) were quantified using the deconvolution and the maximum slope + Patlak calculation models. Immunohistochemical staining was performed for microvessel density quantification (vessels/m²), and electron microscopy was used to determine the presence or absence of tight junctions, endothelial fenestration, basement membrane, and pericytes, and to measure extracellular matrix thickness. Extracellular matrix thickness as well as the presence or absence of tight junctions, basal lamina, and pericytes did not correlate with computed tomography perfusion parameters. Endothelial fenestrations correlated significantly with mean BF deconvolution (P = .047, r = 0.418) and additionally were significantly associated with higher mean BV deconvolution (P < .005). Mean k-trans Patlak correlated strongly with mean k-trans deconvolution (r = 0.939, P = .001), and both correlated with mean BF deconvolution (P = .001, r = 0.748), max BF deconvolution (P = .028, r = 0.564), mean BV deconvolution (P = .001, r = 0.752), and max BV deconvolution (P = .001, r = 0.771). Microvessel density correlated with max k-trans deconvolution (r = 0.564, P = .023). 
Vascular endothelial growth factor receptor-3 expression (receptor specific for lymphatics) correlated significantly with max k-trans Patlak (P = .041, r = 0.686) and mean BF deconvolution (P = .038, r = 0.695). k-Trans values of PCT do not correlate with ultrastructural microvessel features, whereas endothelial fenestrations correlate with increased intra-tumoral BVs. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  13. A curved ultrasonic actuator optimized for spherical motors: design and experiments.

    PubMed

    Leroy, Edouard; Lozada, José; Hafez, Moustapha

    2014-08-01

    Multi-degree-of-freedom angular actuators are commonly used in numerous mechatronic areas such as omnidirectional robots, robot articulations, or inertially stabilized platforms. The conventional method of designing these devices consists in placing multiple actuators in parallel or in series using gimbals, which are bulky and difficult to miniaturize. Motors using a spherical rotor are interesting for miniature multi-degree-of-freedom actuators. In this paper, a new actuator is proposed. It is based on a curved piezoelectric element whose inner contact surface is adapted to the diameter of the rotor. This adaptation makes it possible to build spherical motors with a fully constrained rotor and without the need for an additional guiding system. The work presents a design methodology based on modal finite element analysis. A methodology for mode selection is proposed, and a sensitivity analysis of the final geometry to uncertainties and added masses is discussed. Finally, experimental results that validate the actuator concept on a single-degree-of-freedom ultrasonic motor set-up are presented. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Using Global Plate Velocity Boundary Conditions for Embedded Regional Geodynamic Models

    NASA Astrophysics Data System (ADS)

    Taramon Gomez, Jorge; Morgan, Jason; Perez-Gussinye, Marta

    2015-04-01

    The treatment of far-field boundary conditions is one of the most poorly resolved issues for regional modeling of geodynamic processes. In viscous flow, the choice of far-field boundary conditions often strongly shapes the large-scale structure of a geosimulation. The mantle velocity field along the sidewalls and base of a modeling region is typically much more poorly known than the geometry of past global motions of the surface plates as constrained by global plate motion reconstructions. For regional rifting models it has become routine to apply highly simplified 'plate spreading' or 'uniform rifting' boundary conditions to a 3-D model, which limits its ability to simulate the geodynamic evolution of a specific rifted margin. One way researchers are exploring the sensitivity of regional models to uncertain boundary conditions is to use a nested modeling approach in which a global model is used to determine a large-scale flow pattern that is imposed as a constraint along the boundaries of the region to be modeled. Here we explore the utility of a different approach that takes advantage of the ability of finite element models to use unstructured meshes that can embed much higher resolution sub-regions within a spherical global mesh. In our initial project to validate this approach, we create a global spherical mesh in which a higher resolution sub-region is created around the nascent South Atlantic Rifting Margin. Global plate motion BCs and plate boundaries are applied for the time of the onset of rifting, continuing through several 10s of Ma of rifting. Thermal, compositional, and melt-related buoyancy forces are only non-zero within the high-resolution subregion; elsewhere, motions are constrained by surface plate-motion constraints. The total number of unknowns needed to solve an embedded regional model with this approach is less than 1/3 larger than that needed for a structured-mesh solution on a Cartesian or spherical cap sub-regional mesh. 
Here we illustrate the initial steps within this workflow for creating time-varying surface boundary conditions (using GPlates), and a time-variable unstructured 3-D spherical mesh.

  15. Supershear rupture in the 24 May 2013 Mw 6.7 Okhotsk deep earthquake: Additional evidence from regional seismic stations

    NASA Astrophysics Data System (ADS)

    Zhan, Zhongwen; Shearer, Peter M.; Kanamori, Hiroo

    2015-10-01

    Zhan et al. (2014a) reported supershear rupture during the Mw 6.7 aftershock of the 2013 Mw 8.3 Sea of Okhotsk deep earthquake, relying heavily on the regional station PET, which played a critical role in constraining the vertical rupture dimension and rupture speed. Here we include five more regional stations and find that the durations of the source time functions derived from these stations are consistent with Zhan et al.'s supershear rupture model. Furthermore, to reduce the nonuniqueness of deconvolution and combine the bandwidths of different stations, we conduct a joint inversion of the six regional stations for a single broadband moment-rate function (MRF). The best fitting MRF, which explains all the regional waveforms well, has a smooth shape without any temporal gaps. The Mw 6.7 Okhotsk deep earthquake is more likely a continuous supershear rupture than a dynamically triggered doublet.

  16. Data enhancement and analysis through mathematical deconvolution of signals from scientific measuring instruments

    NASA Technical Reports Server (NTRS)

    Wood, G. M.; Rayborn, G. H.; Ioup, J. W.; Ioup, G. E.; Upchurch, B. T.; Howard, S. J.

    1981-01-01

    Mathematical deconvolution of digitized analog signals from scientific measuring instruments is shown to be a means of extracting important information which is otherwise hidden due to time-constant and other broadening or distortion effects caused by the experiment. Three different approaches to deconvolution and their subsequent application to recorded data from three analytical instruments are considered. To demonstrate the efficacy of deconvolution, the use of these approaches to solve the convolution integral for the gas chromatograph, magnetic mass spectrometer, and the time-of-flight mass spectrometer are described. Other possible applications of these types of numerical treatment of data to yield superior results from analog signals of the physical parameters normally measured in aerospace simulation facilities are suggested and briefly discussed.
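    Solving the convolution integral in the frequency domain with Wiener-style regularization is one standard approach of the kind surveyed above. A minimal sketch on a synthetic signal broadened by an exponential time-constant response (time constant and peak parameters are invented for illustration):

```python
import numpy as np

def wiener_deconvolve(signal, kernel, noise_power=1e-3):
    """Frequency-domain deconvolution with Wiener regularization: divide by
    the kernel spectrum where it is strong, and damp the division where it is
    weak to avoid amplifying noise."""
    n = len(signal)
    K = np.fft.rfft(kernel, n)
    S = np.fft.rfft(signal, n)
    est = S * np.conj(K) / (np.abs(K) ** 2 + noise_power)
    return np.fft.irfft(est, n)

# Two narrow peaks broadened by an exponential time-constant response,
# mimicking instrument broadening of a chromatograph-like signal
t = np.arange(512) * 0.01
true = 1.0 * np.exp(-0.5 * ((t - 1.0) / 0.02) ** 2) \
     + 0.6 * np.exp(-0.5 * ((t - 1.2) / 0.02) ** 2)
tau = 0.08
kernel = np.exp(-t / tau)
kernel /= kernel.sum()
measured = np.fft.irfft(np.fft.rfft(true) * np.fft.rfft(kernel), len(t))
recovered = wiener_deconvolve(measured, kernel, noise_power=1e-6)
```

    The `noise_power` term sets the trade-off between resolution recovery and noise amplification; with real noisy data it would be raised well above the value used here.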

  17. Multi-frame partially saturated images blind deconvolution

    NASA Astrophysics Data System (ADS)

    Ye, Pengzhao; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting

    2016-12-01

    When blurred images have saturated or over-exposed pixels, conventional blind deconvolution approaches often fail to estimate an accurate point spread function (PSF) and introduce local ringing artifacts. In this paper, we propose a method to deal with this problem under a modified multi-frame blind deconvolution framework. First, in the kernel estimation step, a light streak detection scheme using multi-frame blurred images is incorporated into the regularization constraint. Second, we deal with image regions affected by the saturated pixels separately, by modeling a weighted matrix during each multi-frame deconvolution iteration. Both synthetic and real-world examples show that more accurate PSFs can be estimated and that the restored images have richer details and fewer negative effects compared to state-of-the-art methods.

  18. Parallelization of a blind deconvolution algorithm

    NASA Astrophysics Data System (ADS)

    Matson, Charles L.; Borelli, Kathy J.

    2006-09-01

    Often it is of interest to deblur imagery in order to obtain higher-resolution images. Deblurring requires knowledge of the blurring function, information that is often not available separately from the blurred imagery. Blind deconvolution algorithms overcome this problem by jointly estimating both the high-resolution image and the blurring function from the blurred imagery. Because blind deconvolution algorithms are iterative in nature, they can take minutes to days to deblur an image, depending on how many frames of data are used for the deblurring and the platforms on which the algorithms are executed. Here we present our progress in parallelizing a blind deconvolution algorithm to increase its execution speed. This progress includes sub-frame parallelization and a code structure that is not specialized to a specific computer hardware architecture.

  19. Improved deconvolution of very weak confocal signals.

    PubMed

    Day, Kasey J; La Rivière, Patrick J; Chandler, Talon; Bindokas, Vytas P; Ferrier, Nicola J; Glick, Benjamin S

    2017-01-01

    Deconvolution is typically used to sharpen fluorescence images, but when the signal-to-noise ratio is low, the primary benefit is reduced noise and a smoother appearance of the fluorescent structures. 3D time-lapse (4D) confocal image sets can be improved by deconvolution. However, when the confocal signals are very weak, the popular Huygens deconvolution software erases fluorescent structures that are clearly visible in the raw data. We find that this problem can be avoided by prefiltering the optical sections with a Gaussian blur. Analysis of real and simulated data indicates that the Gaussian blur prefilter preserves meaningful signals while enabling removal of background noise. This approach is very simple, and it allows Huygens to be used with 4D imaging conditions that minimize photodamage.
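    The benefit of a Gaussian-blur prefilter on very weak signals can be demonstrated directly: smoothing averages down pixel noise much faster than it erodes a diffraction-limited spot. The spot size, noise level, and σ below are illustrative values, not the paper's:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(2)

# A weak, diffraction-limited spot buried in strong pixel noise
image = rng.normal(0.0, 1.0, (64, 64))
yy, xx = np.mgrid[:64, :64]
spot = 2.0 * np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / (2 * 2.0 ** 2))
image += spot

def peak_snr(img):
    """Peak value at the known spot centre over the image standard deviation."""
    return img[32, 32] / img.std()

# Gaussian prefilter: uncorrelated noise averages down strongly, while the
# spot (wider than one pixel) is only moderately attenuated
filtered = gaussian_filter(image, sigma=2.0)
snr_gain = peak_snr(filtered) / peak_snr(image)
```

    This is why the prefilter keeps weak structures above the threshold at which aggressive deconvolution software would otherwise discard them as noise.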

  20. The capture and recreation of 3D auditory scenes

    NASA Astrophysics Data System (ADS)

    Li, Zhiyun

The main goal of this research is to develop the theory and implement practical tools (in both software and hardware) for the capture and recreation of 3D auditory scenes. Our research is expected to have applications in virtual reality, telepresence, film, music, video games, auditory user interfaces, and sound-based surveillance. The first part of our research is concerned with sound capture via a spherical microphone array. The advantage of this array is that it can be steered digitally in any 3D direction with the same beampattern. We develop design methodologies to achieve flexible microphone layouts, optimal beampattern approximation, and robustness constraints. We also design novel hemispherical and circular microphone array layouts for more spatially constrained auditory scenes. Using the captured audio, we then propose a unified and simple approach for recreating the scenes by exploiting the reciprocity principle that holds between the two processes. Our approach makes the system easy to build and practical. Using this approach, we can capture the 3D sound field with a spherical microphone array, recreate it with a spherical loudspeaker array, and ensure that the recreated sound field matches the recorded field up to a high order of spherical harmonics. For some regular or semi-regular microphone layouts, we design an efficient parallel implementation of the multi-directional spherical beamformer by using the rotational symmetries of the beampattern and of the spherical microphone array. This can be implemented in either software or hardware and easily adapted to other regular or semi-regular microphone layouts. In addition, we extend this approach to headphone-based systems. Design examples and simulation results are presented to verify our algorithms. Prototypes are built and tested in real-world auditory scenes.

  1. Septal penetration correction in I-131 imaging following thyroid cancer treatment

    NASA Astrophysics Data System (ADS)

    Barrack, Fiona; Scuffham, James; McQuaid, Sarah

    2018-04-01

    Whole body gamma camera images acquired after I-131 treatment for thyroid cancer can suffer from collimator septal penetration artefacts because of the high energy of the gamma photons. This results in the appearance of ‘spoke’ artefacts, emanating from regions of high activity concentration, caused by the non-isotropic attenuation of the collimator. Deconvolution has the potential to reduce such artefacts, by taking into account the non-Gaussian point-spread-function (PSF) of the system. A Richardson–Lucy deconvolution algorithm, with and without prior scatter-correction was tested as a method of reducing septal penetration in planar gamma camera images. Phantom images (hot spheres within a warm background) were acquired and deconvolution using a measured PSF was applied. The results were evaluated through region-of-interest and line profile analysis to determine the success of artefact reduction and the optimal number of deconvolution iterations and damping parameter (λ). Without scatter-correction, the optimal results were obtained with 15 iterations and λ  =  0.01, with the counts in the spokes reduced to 20% of the original value, indicating a substantial decrease in their prominence. When a triple-energy-window scatter-correction was applied prior to deconvolution, the optimal results were obtained with six iterations and λ  =  0.02, which reduced the spoke counts to 3% of the original value. The prior application of scatter-correction therefore produced the best results, with a marked change in the appearance of the images. The optimal settings were then applied to six patient datasets, to demonstrate its utility in the clinical setting. In all datasets, spoke artefacts were substantially reduced after the application of scatter-correction and deconvolution, with the mean spoke count being reduced to 10% of the original value. 
This indicates that deconvolution is a promising technique for septal penetration artefact reduction that could potentially improve the diagnostic accuracy of I-131 imaging. Novelty and significance: this work has demonstrated that scatter correction combined with deconvolution can substantially reduce the appearance of septal penetration artefacts in I-131 phantom and patient gamma camera planar images, enabling improved visualisation of the I-131 distribution. Deconvolution with a symmetric PSF has previously been used to reduce artefacts in gamma camera images; however, this work details the novel use of an asymmetric PSF to remove the angularly dependent septal penetration artefacts.
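The triple-energy-window scatter estimate applied before deconvolution has a standard closed form: counts in two narrow windows flanking the photopeak define a trapezoid approximating the scatter under the peak. A minimal sketch (function name and window widths are illustrative, not from the paper):

```python
import numpy as np

def tew_scatter_correct(photopeak, lower, upper, w_peak, w_lower, w_upper):
    """Triple-energy-window scatter correction (trapezoid estimate).

    `photopeak`, `lower`, `upper` are count images from the photopeak and
    the two flanking windows; w_* are the window widths in keV. The scatter
    estimate is subtracted pixel-by-pixel and clipped at zero.
    """
    scatter = (lower / w_lower + upper / w_upper) * w_peak / 2.0
    return np.maximum(photopeak - scatter, 0.0)
```

The corrected image would then be passed to the damped Richardson-Lucy step described in the abstract.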

  2. Source Pulse Estimation of Mine Shock by Blind Deconvolution

    NASA Astrophysics Data System (ADS)

    Makowski, R.

The objective of seismic signal deconvolution is to extract from the signal information concerning the rockmass or the signal at the source of the shock. In the case of blind deconvolution, we have to extract information regarding both quantities. Many deconvolution methods used in prospecting seismology were found to be of minor utility when applied to shock-induced signals recorded in the mines of the Lubin Copper District. The lack of effectiveness should be attributed to the inadequacy of the model on which the methods are based with respect to the propagation conditions for that type of signal. Each of the blind deconvolution methods involves a number of assumptions; hence, reliable results may be expected only if these assumptions are fulfilled. Consequently, we had to formulate a different model for the signals recorded in the copper mines of the Lubin District. The model is based on the following assumptions: (1) the signal emitted by the shock source is a short-term signal; (2) the signal transmitting system (rockmass) constitutes a parallel connection of elementary systems; (3) the elementary systems are of resonant type. Such a model seems to be justified by the geological structure as well as by the positions of the shock foci and seismometers. The results of time-frequency transformation also support the dominance of resonant-type propagation. Making use of the model, a new method for the blind deconvolution of seismic signals has been proposed. The adequacy of the new model, as well as the efficiency of the proposed method, has been confirmed by the results of blind deconvolution. The slight approximation errors obtained with a small number of approximating elements additionally corroborate the adequacy of the model.

  3. Multipoint Optimal Minimum Entropy Deconvolution and Convolution Fix: Application to vibration fault detection

    NASA Astrophysics Data System (ADS)

    McDonald, Geoff L.; Zhao, Qing

    2017-01-01

Minimum Entropy Deconvolution (MED) has been applied successfully to rotating machine fault detection from vibration data; however, the method has limitations. A convolution adjustment to the MED definition and solution is proposed in this paper to address the discontinuity at the start of the signal, which in some cases causes spurious impulses to be erroneously deconvolved. A problem with the MED solution is that it is an iterative selection process and will not necessarily design an optimal filter for the posed problem. Additionally, the MED objective favours deconvolving a single impulse, while in rotating machine faults we expect one impulse-like vibration source per rotational period of the faulty element. Maximum Correlated Kurtosis Deconvolution was proposed to address some of these problems, and although it targets multiple periodic impulses, it is still an iterative, non-optimal solution to the posed problem and only solves for a limited run of impulses. Ideally, the problem should target an impulse train as the output and solve directly for the optimal filter in a non-iterative manner. To meet these goals, we propose a non-iterative deconvolution approach called Multipoint Optimal Minimum Entropy Deconvolution Adjusted (MOMEDA). MOMEDA poses a deconvolution problem with an infinite impulse train as the goal, for which the optimal filter can be solved directly. From experimental data on a gearbox with and without a gear tooth chip, we show that MOMEDA and its deconvolution spectra, taken over the period between the impulses, can be used to detect faults and study the health of rotating machine elements effectively.
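The non-iterative solve can be sketched as a direct least-squares problem: build a target vector `t` that is an impulse train at the fault period and solve for the FIR filter in one linear solve. This is a simplified MOMEDA-style sketch (function name, parameters, and the plain least-squares objective are assumptions, not the paper's exact normalized criterion):

```python
import numpy as np

def momeda_filter(x, period, filt_len=30):
    """Direct (non-iterative) filter design toward a periodic impulse train.

    Builds X whose rows are delayed copies of the input and solves
    f = (X X^T)^{-1} X t, where t is the target impulse train.
    """
    n = len(x) - filt_len + 1
    # X[k, i] = x[i + filt_len - 1 - k]: row k is the signal delayed by k samples
    X = np.array([x[filt_len - 1 - k : filt_len - 1 - k + n]
                  for k in range(filt_len)])
    t = np.zeros(n)
    t[::period] = 1.0                     # target: one impulse per period
    f = np.linalg.solve(X @ X.T, X @ t)   # one direct least-squares solve
    y = f @ X                             # deconvolved output
    return f, y
```

Sweeping `period` and scoring the output against the impulse train gives the "deconvolution spectrum" the abstract uses for fault detection.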

  4. Improving Range Estimation of a 3-Dimensional Flash Ladar via Blind Deconvolution

    DTIC Science & Technology

    2010-09-01

Excerpt from the report's table of contents and text: 2.1.4 Optical Imaging as a Linear and Nonlinear System; 2.1.5 Coherence Theory and Laser Light Statistics; 2.2 Deconvolution. Drawing on [24] and [25], Section 2.1.5 serves as background on coherence theory and the laser light incident on the detector surface; the image intensity associated with different types of coherence is governed by the laser light's spatial coherence.

  5. Evaluation of deconvolution modelling applied to numerical combustion

    NASA Astrophysics Data System (ADS)

    Mehl, Cédric; Idier, Jérôme; Fiorina, Benoît

    2018-01-01

A possible modelling approach in the large eddy simulation (LES) of reactive flows is to deconvolve resolved scalars. Indeed, by inverting the LES filter, scalars such as mass fractions are reconstructed. This information can be used to close budget terms of filtered species balance equations, such as the filtered reaction rate. Being ill-posed in the mathematical sense, the problem is very sensitive to any numerical perturbation. The objective of the present study is to assess the ability of this kind of methodology to capture the chemical structure of premixed flames. For that purpose, three deconvolution methods are tested on a one-dimensional filtered laminar premixed flame configuration: the approximate deconvolution method based on Van Cittert iterative deconvolution, a Taylor decomposition-based method, and the regularised deconvolution method based on the minimisation of a quadratic criterion. These methods are then extended to the reconstruction of subgrid scale profiles. Two methodologies are proposed: the first relies on subgrid scale interpolation of deconvolved profiles and the second uses parametric functions to describe small scales. The tests conducted analyse the ability of the methods to capture the chemical filtered flame structure and front propagation speed. Results show that the deconvolution model should include information about small scales in order to regularise the filter inversion. A priori and a posteriori tests showed that the filtered flame propagation speed and structure cannot be captured if the filter size is too large.
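Van Cittert deconvolution, the core of the first method above, is a simple fixed-point iteration: repeatedly add back the residual between the filtered field and the re-filtered estimate. A minimal 1-D sketch (the function name, explicit filter kernel, and relaxation factor `beta` are illustrative assumptions):

```python
import numpy as np

def van_cittert(filtered, kernel, beta=1.0, n_iter=50):
    """Van Cittert iterative deconvolution of a filtered 1-D profile.

    Fixed-point iteration phi_{k+1} = phi_k + beta * (filtered - G*phi_k),
    where G* denotes convolution with the LES-type filter kernel. It
    converges when the filter transfer function lies in the stable range.
    """
    phi = filtered.copy()                 # initial guess: the filtered field itself
    for _ in range(n_iter):
        residual = filtered - np.convolve(phi, kernel, mode="same")
        phi = phi + beta * residual
    return phi
```

The iteration amplifies poorly filtered (small-scale) components slowly, which is exactly why the abstract notes that subgrid-scale information is needed to regularise the inversion for large filter sizes.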

  6. Nonlinear activity of acoustically driven gas bubble near a rigid boundary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maksimov, Alexey

    2015-10-28

The presence of a boundary can produce considerable changes in the oscillation amplitude of the bubble and its scattered echo. The present study fills a gap in the literature, in that it is concerned theoretically with the bubble activity at relatively small distances from the rigid boundary. It was shown that the bi-spherical coordinates provide separation of variables and are more suitable for analysis of the dynamics of these constrained bubbles. Explicit formulas have been derived which describe the dependence of the bubble emission near a rigid wall on its size and the separation distance between the bubble and the boundary. As applications, the time reversal technique for gas leakage detection and the radiation forces that are induced by an acoustic wave on a constrained bubble were analyzed.

  7. Faceting for direction-dependent spectral deconvolution

    NASA Astrophysics Data System (ADS)

    Tasse, C.; Hugo, B.; Mirmont, M.; Smirnov, O.; Atemkeng, M.; Bester, L.; Hardcastle, M. J.; Lakhoo, R.; Perkins, S.; Shimwell, T.

    2018-04-01

The new generation of radio interferometers is characterized by high sensitivity, wide fields of view and large fractional bandwidth. To synthesize the deepest images enabled by the high dynamic range of these instruments requires us to take into account the direction-dependent Jones matrices, while estimating the spectral properties of the sky in the imaging and deconvolution algorithms. In this paper we discuss and implement a wideband wide-field spectral deconvolution framework (DDFacet) based on image plane faceting, which takes into account generic direction-dependent effects. Specifically, we present a wide-field co-planar faceting scheme, and discuss the various effects that need to be taken into account to solve the deconvolution problem (image plane normalization, position-dependent Point Spread Function, etc.). We discuss two wideband spectral deconvolution algorithms based on hybrid matching pursuit and sub-space optimisation, respectively. A few interesting technical features incorporated in our imager are discussed, including baseline-dependent averaging, which has the effect of improving computing efficiency. The version of DDFacet presented here can account for any externally defined Jones matrices and/or beam patterns.

  8. Intrinsic fluorescence spectroscopy of glutamate dehydrogenase: Integrated behavior and deconvolution analysis

    NASA Astrophysics Data System (ADS)

    Pompa, P. P.; Cingolani, R.; Rinaldi, R.

    2003-07-01

    In this paper, we present a deconvolution method aimed at spectrally resolving the broad fluorescence spectra of proteins, namely, of the enzyme bovine liver glutamate dehydrogenase (GDH). The analytical procedure is based on the deconvolution of the emission spectra into three distinct Gaussian fluorescing bands Gj. The relative changes of the Gj parameters are directly related to the conformational changes of the enzyme, and provide interesting information about the fluorescence dynamics of the individual emitting contributions. Our deconvolution method results in an excellent fitting of all the spectra obtained with GDH in a number of experimental conditions (various conformational states of the protein) and describes very well the dynamics of a variety of phenomena, such as the dependence of hexamers association on protein concentration, the dynamics of thermal denaturation, and the interaction process between the enzyme and external quenchers. The investigation was carried out by means of different optical experiments, i.e., native enzyme fluorescence, thermal-induced unfolding, and fluorescence quenching studies, utilizing both the analysis of the “average” behavior of the enzyme and the proposed deconvolution approach.
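Decomposing a broad emission spectrum into three Gaussian bands is a standard nonlinear least-squares fit. A sketch with SciPy (the band parameters below are synthetic placeholders, not GDH values from the paper):

```python
import numpy as np
from scipy.optimize import curve_fit

def three_gaussians(x, a1, m1, s1, a2, m2, s2, a3, m3, s3):
    """Sum of three Gaussian emission bands G1 + G2 + G3."""
    g = lambda a, m, s: a * np.exp(-0.5 * ((x - m) / s) ** 2)
    return g(a1, m1, s1) + g(a2, m2, s2) + g(a3, m3, s3)

# Synthetic fluorescence spectrum built from three hypothetical bands (nm).
wavelengths = np.linspace(300, 420, 200)
spectrum = three_gaussians(wavelengths, 1.0, 330, 12, 0.8, 350, 15, 0.4, 380, 20)

# Rough initial guesses; the fit returns amplitude, centre, width per band.
p0 = [1, 325, 10, 1, 355, 10, 0.5, 385, 15]
popt, _ = curve_fit(three_gaussians, wavelengths, spectrum, p0=p0)
```

Tracking how the fitted `Gj` centres and widths shift between experimental conditions is what links the decomposition to the enzyme's conformational changes.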

  9. A new scheme for stigmatic x-ray imaging with large magnification.

    PubMed

    Bitter, M; Hill, K W; Delgado-Aparicio, L F; Pablant, N A; Scott, S; Jones, F; Beiersdorfer, P; Wang, E; del Rio, M Sanchez; Caughey, T A; Brunner, J

    2012-10-01

    This paper describes a new x-ray scheme for stigmatic imaging. The scheme consists of one convex spherically bent crystal and one concave spherically bent crystal. The radii of curvature and Bragg reflecting lattice planes of the two crystals are properly matched to eliminate the astigmatism, so that the conditions for stigmatic imaging are met for a particular wavelength. The magnification is adjustable and solely a function of the two Bragg angles or angles of incidence. Although the choice of Bragg angles is constrained by the availability of crystals, this is not a severe limitation for the imaging of plasmas, since a particular wavelength can be selected from the bremsstrahlung continuum. The working principle of this imaging scheme has been verified with visible light. Further tests with x rays are planned for the near future.

  10. 4Pi microscopy deconvolution with a variable point-spread function.

    PubMed

    Baddeley, David; Carl, Christian; Cremer, Christoph

    2006-09-20

    To remove the axial sidelobes from 4Pi images, deconvolution forms an integral part of 4Pi microscopy. As a result of its high axial resolution, the 4Pi point spread function (PSF) is particularly susceptible to imperfect optical conditions within the sample. This is typically observed as a shift in the position of the maxima under the PSF envelope. A significantly varying phase shift renders deconvolution procedures based on a spatially invariant PSF essentially useless. We present a technique for computing the forward transformation in the case of a varying phase at a computational expense of the same order of magnitude as that of the shift invariant case, a method for the estimation of PSF phase from an acquired image, and a deconvolution procedure built on these techniques.

  11. Improved deconvolution of very weak confocal signals

    PubMed Central

    Day, Kasey J.; La Rivière, Patrick J.; Chandler, Talon; Bindokas, Vytas P.; Ferrier, Nicola J.; Glick, Benjamin S.

    2017-01-01

    Deconvolution is typically used to sharpen fluorescence images, but when the signal-to-noise ratio is low, the primary benefit is reduced noise and a smoother appearance of the fluorescent structures. 3D time-lapse (4D) confocal image sets can be improved by deconvolution. However, when the confocal signals are very weak, the popular Huygens deconvolution software erases fluorescent structures that are clearly visible in the raw data. We find that this problem can be avoided by prefiltering the optical sections with a Gaussian blur. Analysis of real and simulated data indicates that the Gaussian blur prefilter preserves meaningful signals while enabling removal of background noise. This approach is very simple, and it allows Huygens to be used with 4D imaging conditions that minimize photodamage. PMID:28868135

  12. Improved deconvolution of very weak confocal signals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Day, Kasey J.; La Riviere, Patrick J.; Chandler, Talon

Deconvolution is typically used to sharpen fluorescence images, but when the signal-to-noise ratio is low, the primary benefit is reduced noise and a smoother appearance of the fluorescent structures. 3D time-lapse (4D) confocal image sets can be improved by deconvolution. However, when the confocal signals are very weak, the popular Huygens deconvolution software erases fluorescent structures that are clearly visible in the raw data. We find that this problem can be avoided by prefiltering the optical sections with a Gaussian blur. Analysis of real and simulated data indicates that the Gaussian blur prefilter preserves meaningful signals while enabling removal of background noise. This approach is very simple, and it allows Huygens to be used with 4D imaging conditions that minimize photodamage.

  13. Improved deconvolution of very weak confocal signals

    DOE PAGES

    Day, Kasey J.; La Riviere, Patrick J.; Chandler, Talon; ...

    2017-06-06

Deconvolution is typically used to sharpen fluorescence images, but when the signal-to-noise ratio is low, the primary benefit is reduced noise and a smoother appearance of the fluorescent structures. 3D time-lapse (4D) confocal image sets can be improved by deconvolution. However, when the confocal signals are very weak, the popular Huygens deconvolution software erases fluorescent structures that are clearly visible in the raw data. We find that this problem can be avoided by prefiltering the optical sections with a Gaussian blur. Analysis of real and simulated data indicates that the Gaussian blur prefilter preserves meaningful signals while enabling removal of background noise. This approach is very simple, and it allows Huygens to be used with 4D imaging conditions that minimize photodamage.

  14. Blind deconvolution post-processing of images corrected by adaptive optics

    NASA Astrophysics Data System (ADS)

    Christou, Julian C.

    1995-08-01

Experience with the adaptive optics system at the Starfire Optical Range has shown that the point spread function is non-uniform, varies both spatially and temporally, and is object dependent. Because of this, standard linear and non-linear deconvolution algorithms have difficulty deconvolving out the point spread function. In this paper we demonstrate the application of a blind deconvolution algorithm to adaptive-optics-compensated data, for which a separately measured point spread function is not needed.

  15. Computerised curve deconvolution of TL/OSL curves using a popular spreadsheet program.

    PubMed

    Afouxenidis, D; Polymeris, G S; Tsirliganis, N C; Kitis, G

    2012-05-01

This paper exploits the possibility of using commercial software for thermoluminescence and optically stimulated luminescence curve deconvolution analysis. The widely used software package Microsoft Excel, with its Solver utility, has been used to perform deconvolution analysis on both experimental and reference glow curves resulting from the GLOw Curve ANalysis INtercomparison project. The simple interface of this programme, combined with the powerful Solver utility, allows complex stimulated luminescence curves to be analysed into their components and the associated luminescence parameters to be evaluated.

  16. Deconvolution of noisy transient signals: a Kalman filtering application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candy, J.V.; Zicker, J.E.

The deconvolution of transient signals from noisy measurements is a common problem occurring in various tests at Lawrence Livermore National Laboratory. The transient deconvolution problem places atypical constraints on presently available algorithms. The Schmidt-Kalman filter, a time-varying, tunable predictor, is designed using a piecewise constant model of the transient input signal. A simulation is developed to test the algorithm for various input signal bandwidths and different signal-to-noise ratios for the input and output sequences. The algorithm performance is reasonable.
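The piecewise-constant input model can be sketched with an ordinary Kalman filter (not the Schmidt-Kalman variant of the report): the state holds the last few input samples of a known FIR system, and the newest input follows a random walk. All names, the FIR system model, and the noise variances are illustrative assumptions.

```python
import numpy as np

def kalman_deconvolve(y, h, q=1e-2, r=1e-2):
    """Kalman-filter deconvolution sketch: estimate a slowly varying input
    driving a known FIR system h from its noisy output y.

    State x = [u_k, u_{k-1}, ..., u_{k-m+1}]; the newest input is modelled
    as a random walk with variance q, measurement noise variance is r.
    """
    m = len(h)
    F = np.eye(m, k=-1)                  # shift older inputs down one slot
    F[0, 0] = 1.0                        # u_k = u_{k-1} + w_k (random walk)
    Q = np.zeros((m, m)); Q[0, 0] = q
    H = np.asarray(h, float).reshape(1, m)
    x = np.zeros((m, 1)); P = np.eye(m)
    est = []
    for yk in y:
        x = F @ x; P = F @ P @ F.T + Q                   # predict
        S = H @ P @ H.T + r
        K = P @ H.T / S                                  # Kalman gain
        x = x + K * (yk - H @ x)                         # measurement update
        P = (np.eye(m) - K @ H) @ P
        est.append(x[0, 0])                              # current input estimate
    return np.array(est)
```

Tuning `q` against `r` trades tracking speed for noise rejection, which mirrors the "tunable predictor" framing in the abstract.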

  17. Dynamics of Compressible Convection and Thermochemical Mantle Convection

    NASA Astrophysics Data System (ADS)

    Liu, Xi

The Earth's long-wavelength geoid anomalies have long been used to constrain the dynamics and viscosity structure of the mantle in an isochemical, whole-mantle convection model. However, there is strong evidence that the seismically observed large low shear velocity provinces (LLSVPs) in the lowermost mantle are chemically distinct and denser than the ambient mantle. In this thesis, I investigated how chemically distinct and dense piles influence the geoid. I formulated dynamically self-consistent 3D spherical convection models with realistic mantle viscosity structure which reproduce Earth's dominantly spherical harmonic degree-2 convection. The models revealed a compensation effect of the chemically dense LLSVPs. Next, I formulated instantaneous flow models based on seismic tomography to compute the geoid and constrain mantle viscosity assuming thermochemical convection with the compensation effect. Thermochemical models reconcile the geoid observations. The viscosity structure inverted for thermochemical models is nearly identical to that of whole-mantle models, and both prefer a weak transition zone. Our results have implications for mineral physics, seismic tomographic studies, and mantle convection modelling. Another part of this thesis describes analyses of the influence of mantle compressibility on thermal convection in an isoviscous and compressible fluid with infinite Prandtl number. A new formulation of the propagator matrix method is implemented to compute the critical Rayleigh number and the corresponding eigenfunctions for compressible convection. Heat flux and thermal boundary layer properties are quantified in numerical models and scaling laws are developed.

  18. Range resolution improvement in passive bistatic radars using nested FM channels and least squares approach

    NASA Astrophysics Data System (ADS)

Arslan, Musa T.; Tofighi, Mohammad; Sevimli, Rasim A.; Çetin, Ahmet E.

    2015-05-01

One of the main disadvantages of using commercial broadcasts in a Passive Bistatic Radar (PBR) system is the limited range resolution. Using multiple broadcast channels to improve radar performance has been offered as a solution to this problem; however, detection performance suffers from the side-lobes that the matched filter creates when multiple channels are used. In this article, we introduce a deconvolution algorithm to suppress the side-lobes. The two-dimensional matched filter output of a PBR is further analyzed as a deconvolution problem. The deconvolution algorithm is based on making successive projections onto the hyperplanes representing the time delay of a target. The resulting iterative deconvolution algorithm is globally convergent because all constraint sets are closed and convex. Simulation results for an FM-based PBR system are presented.
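Successive projection onto hyperplanes is the classical Kaczmarz scheme; a minimal sketch of the idea (generic linear constraints rather than the paper's PBR-specific time-delay hyperplanes):

```python
import numpy as np

def kaczmarz(A, b, n_sweeps=500):
    """Successive projections onto the hyperplanes {x : a_i . x = b_i}.

    Each row of A defines one hyperplane; cycling through the rows and
    orthogonally projecting the iterate onto each converges when the
    constraint sets are consistent (all closed and convex).
    """
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for a, bi in zip(A, b):
            x = x + (bi - a @ x) / (a @ a) * a   # orthogonal projection onto row i
    return x
```

Global convergence for closed convex sets is exactly the property the abstract invokes for its iterative deconvolution.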

  19. Simulation Study of Effects of the Blind Deconvolution on Ultrasound Image

    NASA Astrophysics Data System (ADS)

    He, Xingwu; You, Junchen

    2018-03-01

Ultrasonic image restoration is an essential subject in medical ultrasound imaging. However, without sufficient and precise system knowledge, traditional image restoration methods based on prior knowledge of the system often fail to improve image quality. In this paper, we use simulated ultrasound images to assess the effectiveness of blind deconvolution for ultrasound image restoration. Experimental results demonstrate that, compared with traditional image restoration methods, blind deconvolution can be applied to ultrasound image restoration and achieves satisfactory results without precise prior knowledge. Even with an inaccurate, small initial PSF, the results show that blind deconvolution improves the overall quality of ultrasound images, yielding much better SNR and resolution; the time consumption of these methods shows no significant increase on a GPU platform.

  20. Imaging resolution and properties analysis of super resolution microscopy with parallel detection under different noise, detector and image restoration conditions

    NASA Astrophysics Data System (ADS)

    Yu, Zhongzhi; Liu, Shaocong; Sun, Shiyi; Kuang, Cuifang; Liu, Xu

    2018-06-01

Parallel detection, which uses the additional information of a pinhole-plane image taken at every excitation scan position, can be an efficient method to enhance the resolution of a confocal laser scanning microscope. In this paper, we discuss images obtained under different conditions and using different image restoration methods with parallel detection to quantitatively compare the imaging quality. The conditions include different noise levels and different detector array settings. The image restoration methods include linear deconvolution and pixel reassignment with Richardson-Lucy deconvolution and with maximum-likelihood estimation deconvolution. The results show that linear deconvolution is highly efficient and gives the best performance under all conditions tested, and is therefore expected to be of use for future biomedical routine research.

  1. Microprobe monazite geochronology: new techniques for dating deformation and metamorphism

    NASA Astrophysics Data System (ADS)

    Williams, M.; Jercinovic, M.; Goncalves, P.; Mahan, K.

    2003-04-01

    High-resolution compositional mapping, age mapping, and precise dating of monazite on the electron microprobe are powerful additions to microstructural and petrologic analysis and important tools for tectonic studies. The in-situ nature and high spatial resolution of the technique offer an entirely new level of structurally and texturally specific geochronologic data that can be used to put absolute time constraints on P-T-D paths, constrain the rates of sedimentary, metamorphic, and deformational processes, and provide new links between metamorphism and deformation. New analytical techniques (including background modeling, sample preparation, and interference analysis) have significantly improved the precision and accuracy of the technique and new mapping and image analysis techniques have increased the efficiency and strengthened the correlation with fabrics and textures. Microprobe geochronology is particularly applicable to three persistent microstructural-microtextural problem areas: (1) constraining the chronology of metamorphic assemblages; (2) constraining the timing of deformational fabrics; and (3) interpreting other geochronological results. In addition, authigenic monazite can be used to date sedimentary basins, and detrital monazite can fingerprint sedimentary source areas, both critical for tectonic analysis. Although some monazite generations can be directly tied to metamorphism or deformation, at present, the most common constraints rely on monazite inclusion relations in porphyroblasts that, in turn, can be tied to the deformation and/or metamorphic history. Examples will be presented from deep-crustal rocks of northern Saskatchewan and from mid-crustal rocks from the southwestern USA. Microprobe monazite geochronology has been used in both regions to deconvolute overprinting deformation and metamorphic events and to clarify the interpretation of other geochronologic data. 
Microprobe mapping and dating are powerful companions to mass spectrometric dating techniques. They allow geochronology to be incorporated into the microstructural analytical process, resulting in a new level of integration of time (t) into P-T-D histories.

  2. Application of deconvolution interferometry with both Hi-net and KiK-net data

    NASA Astrophysics Data System (ADS)

    Nakata, N.

    2013-12-01

Application of deconvolution interferometry to wavefields observed by KiK-net, a strong-motion recording network in Japan, is useful for estimating wave velocities and S-wave splitting in the near surface. Using this technique, for example, Nakata and Snieder (2011, 2012) found changes in velocities caused by the Tohoku-Oki earthquake in Japan. At the location of the borehole accelerometer of each KiK-net station, a velocity sensor is also installed as part of a high-sensitivity seismograph network (Hi-net). I present a technique that uses both Hi-net and KiK-net records for computing deconvolution interferometry. The deconvolved waveform obtained from the combination of Hi-net and KiK-net data is similar to the waveform computed from KiK-net data only, which indicates that one can use Hi-net wavefields for deconvolution interferometry. Because Hi-net records have a high signal-to-noise ratio (S/N) and high dynamic resolution, the S/N and the quality of amplitude and phase of deconvolved waveforms can be improved with Hi-net data. These advantages are especially important for short-time moving-window seismic interferometry and deconvolution interferometry using later coda waves.
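Deconvolution interferometry between a surface and a borehole record is commonly computed as a regularized spectral division. A minimal sketch (the water-level regularization and `eps_frac` are illustrative choices, not necessarily those used in the study):

```python
import numpy as np

def deconvolve_interferometry(u_surface, u_borehole, eps_frac=0.01):
    """Regularized spectral division for deconvolution interferometry.

    Computes D(w) = U_s(w) conj(U_b(w)) / (|U_b(w)|^2 + eps), where eps is
    a water level that stabilizes notches in the borehole spectrum, then
    returns the time-domain deconvolved waveform.
    """
    Us = np.fft.rfft(u_surface)
    Ub = np.fft.rfft(u_borehole)
    power = np.abs(Ub) ** 2
    eps = eps_frac * power.mean()          # water-level regularization
    D = Us * np.conj(Ub) / (power + eps)
    return np.fft.irfft(D, n=len(u_surface))
```

In the mixed-network case described above, the borehole trace would come from Hi-net (high S/N) and the surface trace from KiK-net.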

  3. The Small-scale Structure of Photospheric Convection Retrieved by a Deconvolution Technique Applied to Hinode/SP Data

    NASA Astrophysics Data System (ADS)

    Oba, T.; Riethmüller, T. L.; Solanki, S. K.; Iida, Y.; Quintero Noda, C.; Shimizu, T.

    2017-11-01

    Solar granules are bright patterns surrounded by dark channels, called intergranular lanes, in the solar photosphere and are a manifestation of overshooting convection. Observational studies generally find stronger upflows in granules and weaker downflows in intergranular lanes. This trend is, however, inconsistent with the results of numerical simulations in which downflows are stronger than upflows through the joint action of gravitational acceleration/deceleration and pressure gradients. One cause of this discrepancy is the image degradation caused by optical distortion and light diffraction and scattering that takes place in an imaging instrument. We apply a deconvolution technique to Hinode/SP data in an attempt to recover the original solar scene. Our results show a significant enhancement in both the convective upflows and downflows but particularly for the latter. After deconvolution, the up- and downflows reach maximum amplitudes of -3.0 km s-1 and +3.0 km s-1 at an average geometrical height of roughly 50 km, respectively. We found that the velocity distributions after deconvolution match those derived from numerical simulations. After deconvolution, the net LOS velocity averaged over the whole field of view lies close to zero as expected in a rough sense from mass balance.

  4. Application of deterministic deconvolution of ground-penetrating radar data in a study of carbonate strata

    USGS Publications Warehouse

    Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.

    2004-01-01

    We successfully applied deterministic deconvolution to real ground-penetrating radar (GPR) data by using the source wavelet that was generated in and transmitted through air as the operator. The GPR data were collected with 400-MHz antennas on a bench adjacent to a cleanly exposed quarry face. The quarry site is characterized by horizontally bedded carbonate strata with shale partings. In order to provide ground truth for this deconvolution approach, 23 conductive rods were drilled into the quarry face at key locations. The steel rods provided critical information for: (1) correlation between reflections on GPR data and geologic features exposed in the quarry face, (2) GPR resolution limits, (3) accuracy of velocities calculated from common midpoint data and (4) identification of any multiples. Comparing the results of deconvolved data with non-deconvolved data demonstrates the effectiveness of deterministic deconvolution in low dielectric-loss media for increased accuracy of velocity models (improved at least 10-15% in our study after deterministic deconvolution), increased vertical and horizontal resolution of specific geologic features and more accurate representation of geologic features as confirmed from detailed study of the adjacent quarry wall. © 2004 Elsevier B.V. All rights reserved.

  5. Peptide de novo sequencing of mixture tandem mass spectra

    PubMed Central

    Hotta, Stéphanie Yuki Kolbeck; Verano‐Braga, Thiago; Kjeldsen, Frank

    2016-01-01

    The impact of mixture spectra deconvolution on the performance of four popular de novo sequencing programs was tested using artificially constructed mixture spectra as well as experimental proteomics data. Mixture fragmentation spectra are recognized as a limitation in proteomics because they decrease the identification performance using database search engines. De novo sequencing approaches are expected to be even more sensitive to the reduction in mass spectrum quality resulting from peptide precursor co‐isolation and thus prone to false identifications. The deconvolution approach matched complementary b‐, y‐ions to each precursor peptide mass, which allowed the creation of virtual spectra containing sequence specific fragment ions of each co‐isolated peptide. Deconvolution processing resulted in equally efficient identification rates but increased the absolute number of correctly sequenced peptides. The improvement was in the range of 20–35% additional peptide identifications for a HeLa lysate sample. Some correct sequences were identified only using unprocessed spectra; however, the number of these was lower than those where improvement was obtained by mass spectral deconvolution. Tight candidate peptide score distribution and high sensitivity to small changes in the mass spectrum introduced by the employed deconvolution method could explain some of the missing peptide identifications. PMID:27329701
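
    The complementarity test underlying the deconvolution can be sketched in a few lines (all masses and the tolerance below are hypothetical; the relation used is that singly protonated b- and y-ions of one peptide sum to its neutral monoisotopic mass plus two proton masses):

```python
# for singly protonated b- and y-ions of one peptide:
#   m(b_i) + m(y_{n-i}) = M_neutral + 2 * m_proton
PROTON = 1.007276

def pair_fragments(peaks, precursor_neutral, tol=0.02):
    """Return peak pairs consistent with one co-isolated precursor mass."""
    target = precursor_neutral + 2 * PROTON
    pairs = []
    for i, p1 in enumerate(peaks):
        for p2 in peaks[i + 1:]:
            if abs((p1 + p2) - target) <= tol:
                pairs.append((p1, p2))
    return pairs

# mixed fragment list from two co-isolated peptides (neutral masses 800, 900);
# pairing against mass 800 selects only that peptide's complementary ions
peaks = [300.0, 352.014552, 410.0, 450.0, 492.014552, 502.014552]
print(pair_fragments(peaks, 800.0))   # the two pairs belonging to mass 800
```

    Grouping matched pairs per precursor is what allows "virtual" single-peptide spectra to be assembled from a mixture spectrum.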

  6. Deconvolution using a neural network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehman, S.K.

    1990-11-15

    Viewing one-dimensional deconvolution as a matrix inversion problem, we compare a neural network backpropagation matrix inverse with LMS and pseudo-inverse methods. This is largely an exercise in understanding how our neural network code works. 1 ref.
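
    The matrix-inversion framing can be made concrete in a few lines (a toy smoothing kernel and noiseless data, assumed for illustration; the LMS and backpropagation variants compared in the report are not reproduced here):

```python
import numpy as np

n = 64
kernel = np.array([0.25, 0.5, 0.25])       # toy blurring kernel
# build the convolution (banded Toeplitz) matrix H so that y = H @ x
H = np.zeros((n, n))
for i in range(n):
    for j, k in enumerate(kernel):
        col = i + j - 1
        if 0 <= col < n:
            H[i, col] = k

rng = np.random.default_rng(0)
x = rng.random(n)                           # unknown input signal
y = H @ x                                   # observed (blurred) signal
x_hat = np.linalg.pinv(H) @ y               # pseudo-inverse deconvolution
print(np.allclose(x, x_hat))                # True in this noiseless case
```

    With noisy data the pseudo-inverse amplifies noise, which is the usual motivation for regularized or learned inverses such as the network described above.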

  7. Deconvolution of gas chromatographic data

    NASA Technical Reports Server (NTRS)

    Howard, S.; Rayborn, G. H.

    1980-01-01

    The use of deconvolution methods on gas chromatographic data to obtain an accurate determination of the relative amounts of each material present by mathematically separating the merged peaks is discussed. Data were obtained on a gas chromatograph with a flame ionization detector. Chromatograms of five xylenes with differing degrees of separation were generated by varying the column temperature at selected rates. The merged peaks were then successfully separated by deconvolution. The concept of function continuation in the frequency domain was introduced in striving to reach the theoretical limit of accuracy, but proved to be only partially successful.
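
    The paper's frequency-domain deconvolution is more elaborate, but its goal, recovering relative amounts from merged peaks, can be illustrated by linear unmixing with assumed Gaussian peak shapes at hypothetical retention times:

```python
import numpy as np

t = np.linspace(0, 10, 500)

def peak(center, width=0.5):
    """Hypothetical Gaussian elution profile."""
    return np.exp(-0.5 * ((t - center) / width) ** 2)

centers = [3.0, 3.6, 4.4]                    # assumed retention times
true_amounts = np.array([1.0, 0.5, 0.8])
chromatogram = sum(a * peak(c) for a, c in zip(true_amounts, centers))

# one design-matrix column per component; amounts follow by least squares
G = np.column_stack([peak(c) for c in centers])
amounts, *_ = np.linalg.lstsq(G, chromatogram, rcond=None)
print(np.round(amounts, 6))                  # ≈ [1.0, 0.5, 0.8]
```

    Even though the three peaks overlap strongly, the relative amounts are recovered exactly in this noiseless setting; noise and unknown peak shapes are what make the real problem hard.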

  8. Detailed interpretation of aeromagnetic data from the Patagonia Mountains area, southeastern Arizona

    USGS Publications Warehouse

    Bultman, Mark W.

    2015-01-01

    Euler deconvolution depth estimates derived from aeromagnetic data with a structural index of 0 show that mapped faults on the northern margin of the Patagonia Mountains generally agree with the depth estimates in the new geologic model. The deconvolution depth estimates also show that the concealed Patagonia Fault southwest of the Patagonia Mountains is more complex than recent geologic mapping represents. Additionally, Euler deconvolution depth estimates with a structural index of 2 locate many potential intrusive bodies that might be associated with known and unknown mineralization.
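
    Euler deconvolution solves the homogeneity equation (x−x₀)∂T/∂x + (y−y₀)∂T/∂y + (z−z₀)∂T/∂z = −N·T for the source position, given a structural index N. A minimal noiseless sketch with a synthetic 1/r "pole" field follows (N = 1 here purely for illustration; the report uses N = 0 for faults and N = 2 for intrusive bodies):

```python
import numpy as np

x0, y0, z0 = 10.0, -5.0, 8.0           # true source position (z positive down)
N_SI = 1.0                              # structural index of a 1/r field
gx, gy = np.meshgrid(np.arange(-32.0, 32.0), np.arange(-32.0, 32.0))
r = np.sqrt((gx - x0) ** 2 + (gy - y0) ** 2 + z0 ** 2)
T = 1.0 / r                             # anomaly observed on the z = 0 plane
# analytic gradients at z = 0 (in practice derived from the gridded data)
dTdx = -(gx - x0) / r ** 3
dTdy = -(gy - y0) / r ** 3
dTdz = z0 / r ** 3

# Euler's homogeneity equation rearranged as A @ [x0, y0, z0] = rhs,
# solved by least squares over all observation points
A = np.column_stack([dTdx.ravel(), dTdy.ravel(), dTdz.ravel()])
rhs = gx.ravel() * dTdx.ravel() + gy.ravel() * dTdy.ravel() + N_SI * T.ravel()
sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
print(np.round(sol, 3))                 # ≈ [10, -5, 8]: position and depth
```

    Real implementations solve this in small moving windows over the grid and keep only well-conditioned solutions, which yields the clusters of depth estimates described above.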

  9. Dynamics and universal scaling law in geometrically-controlled sessile drop evaporation

    PubMed Central

    Sáenz, P. J.; Wray, A. W.; Che, Z.; Matar, O. K.; Valluri, P.; Kim, J.; Sefiane, K.

    2017-01-01

    The evaporation of a liquid drop on a solid substrate is a remarkably common phenomenon. Yet, the complexity of the underlying mechanisms has constrained previous studies to spherically symmetric configurations. Here we investigate well-defined, non-spherical evaporating drops of pure liquids and binary mixtures. We deduce a universal scaling law for the evaporation rate valid for any shape and demonstrate that more curved regions lead to preferential localized depositions in particle-laden drops. Furthermore, geometry induces well-defined flow structures within the drop that change according to the driving mechanism. In the case of binary mixtures, geometry dictates the spatial segregation of the more volatile component as it is depleted. Our results suggest that the drop geometry can be exploited to prescribe the particle deposition and evaporative dynamics of pure drops and the mixing characteristics of multicomponent drops, which may be of interest to a wide range of industrial and scientific applications. PMID:28294114

  10. Self-assembled fibre optoelectronics with discrete translational symmetry

    PubMed Central

    Rein, Michael; Levy, Etgar; Gumennik, Alexander; Abouraddy, Ayman F.; Joannopoulos, John; Fink, Yoel

    2016-01-01

    Fibres with electronic and photonic properties are essential building blocks for functional fabrics with system-level attributes. The scalability of the thermal fibre drawing approach offers access to large device quantities, while constraining the devices to be translationally symmetric. Lifting this symmetry to create discrete devices in fibres will increase their utility. Here, we draw, from a macroscopic preform, fibres that have three parallel internal non-contacting continuous domains; a semiconducting glass between two conductors. We then heat the fibre and generate a capillary fluid instability, resulting in the selective transformation of the cylindrical semiconducting domain into discrete spheres while keeping the conductive domains unchanged. The cylindrical-to-spherical expansion bridges the continuous conducting domains to create ∼10⁴ self-assembled, electrically contacted and entirely packaged discrete spherical devices per metre of fibre. The photodetection and Mie resonance dependent response are measured by illuminating the fibre while connecting its ends to an electrical readout. PMID:27698454

  11. Spherically symmetric charged black holes in f(R) gravitational theories

    NASA Astrophysics Data System (ADS)

    Nashed, G. G. L.

    2018-01-01

    In this study, we have derived electric and magnetic spherically symmetric black holes for the class f(R) = R + ζR² without assuming any restrictions on the Ricci scalar. These black holes asymptotically behave as the de Sitter spacetime under certain constraints. We have shown that the magnetic charge contributes to the spacetime metric similarly to the electric charge. The most interesting feature of some of these black holes is the fact that the Cauchy horizon is not identical to the event horizon. We have calculated the Ricci and Kretschmann scalar invariants to investigate the nature of the singularities of such black holes. We have also calculated the conserved quantities to match the constants of integration with the physical quantities. Finally, the thermodynamical quantities, such as the Hawking temperature and entropy, have been evaluated and the validity of the first law of thermodynamics has been verified.

  12. A NEW THREE-DIMENSIONAL SOLAR WIND MODEL IN SPHERICAL COORDINATES WITH A SIX-COMPONENT GRID

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, Xueshang; Zhang, Man; Zhou, Yufen, E-mail: fengx@spaceweather.ac.cn

    In this paper, we introduce a new three-dimensional magnetohydrodynamics numerical model to simulate the steady-state ambient solar wind from the solar surface to 215 R_s or beyond, and the model adopts a splitting finite-volume scheme based on a six-component grid system in spherical coordinates. By splitting the magnetohydrodynamics equations into a fluid part and a magnetic part, a finite-volume method can be used for the fluid part and a constrained-transport method, able to maintain the divergence-free constraint on the magnetic field, can be used for the magnetic induction part. This new model, second order in space and time, is validated by modeling the large-scale structure of the solar wind. The numerical results for Carrington rotation 2064 show its ability to produce structured solar wind in agreement with observations.

  13. Self-assembled fibre optoelectronics with discrete translational symmetry.

    PubMed

    Rein, Michael; Levy, Etgar; Gumennik, Alexander; Abouraddy, Ayman F; Joannopoulos, John; Fink, Yoel

    2016-10-04

    Fibres with electronic and photonic properties are essential building blocks for functional fabrics with system-level attributes. The scalability of the thermal fibre drawing approach offers access to large device quantities, while constraining the devices to be translationally symmetric. Lifting this symmetry to create discrete devices in fibres will increase their utility. Here, we draw, from a macroscopic preform, fibres that have three parallel internal non-contacting continuous domains; a semiconducting glass between two conductors. We then heat the fibre and generate a capillary fluid instability, resulting in the selective transformation of the cylindrical semiconducting domain into discrete spheres while keeping the conductive domains unchanged. The cylindrical-to-spherical expansion bridges the continuous conducting domains to create ∼10⁴ self-assembled, electrically contacted and entirely packaged discrete spherical devices per metre of fibre. The photodetection and Mie resonance dependent response are measured by illuminating the fibre while connecting its ends to an electrical readout.

  14. The Gross–Pitaevskii equations of a static and spherically symmetric condensate of gravitons

    NASA Astrophysics Data System (ADS)

    Cunillera, Francesc; Germani, Cristiano

    2018-05-01

    In this paper we consider the Dvali and Gómez assumption that the end state of a gravitational collapse is a Bose–Einstein condensate of gravitons. We then construct the two Gross–Pitaevskii equations for a static and spherically symmetric configuration of the condensate. These two equations correspond to the constrained minimisation of the gravitational Hamiltonian with respect to the redshift and the Newtonian potential, for a given number of gravitons. We find that the effective geometry of the condensate is that of a gravastar (a de Sitter star) with a sub-Planckian cosmological constant, for masses larger than the Planck scale. Thus, a condensate corresponding to a semiclassical black hole is always quantum and weakly coupled. Finally, we obtain that the boundary of our gravastar, although it is not the location of a horizon, corresponds to the Schwarzschild radius.

  15. Dynamics and universal scaling law in geometrically-controlled sessile drop evaporation.

    PubMed

    Sáenz, P J; Wray, A W; Che, Z; Matar, O K; Valluri, P; Kim, J; Sefiane, K

    2017-03-15

    The evaporation of a liquid drop on a solid substrate is a remarkably common phenomenon. Yet, the complexity of the underlying mechanisms has constrained previous studies to spherically symmetric configurations. Here we investigate well-defined, non-spherical evaporating drops of pure liquids and binary mixtures. We deduce a universal scaling law for the evaporation rate valid for any shape and demonstrate that more curved regions lead to preferential localized depositions in particle-laden drops. Furthermore, geometry induces well-defined flow structures within the drop that change according to the driving mechanism. In the case of binary mixtures, geometry dictates the spatial segregation of the more volatile component as it is depleted. Our results suggest that the drop geometry can be exploited to prescribe the particle deposition and evaporative dynamics of pure drops and the mixing characteristics of multicomponent drops, which may be of interest to a wide range of industrial and scientific applications.

  16. Encircling the dark: constraining dark energy via cosmic density in spheres

    NASA Astrophysics Data System (ADS)

    Codis, S.; Pichon, C.; Bernardeau, F.; Uhlemann, C.; Prunet, S.

    2016-08-01

    The recently published analytic probability density function for the mildly non-linear cosmic density field within spherical cells is used to build a simple but accurate maximum likelihood estimate for the redshift evolution of the variance of the density, which, as expected, is shown to have smaller relative error than the sample variance. This estimator provides a competitive probe for the equation of state of dark energy, reaching a few per cent accuracy on w_p and w_a for a Euclid-like survey. The corresponding likelihood function can take into account the configuration of the cells via their relative separations. A code to compute one-cell-density probability density functions for arbitrary initial power spectrum, top-hat smoothing and various spherical-collapse dynamics is made available online, so as to provide straightforward means of testing the effect of alternative dark energy models and initial power spectra on the low-redshift matter distribution.

  17. SU-G-IeP3-08: Image Reconstruction for Scanning Imaging System Based On Shape-Modulated Point Spreading Function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Ruixing; Yang, LV; Xu, Kele

    Purpose: Deconvolution is a widely used tool in image reconstruction when a linear imaging system has been blurred by an imperfect system transfer function. However, because the point spread function (PSF) typically has a Gaussian-like distribution, image components with coherent high frequency are hard to restore in most previous scanning imaging systems, even when a relatively accurate PSF is available. We propose a novel method for deconvolution of images obtained with a shape-modulated PSF. Methods: We use two different types of PSF - Gaussian shape and donut shape - to convolve the original image in order to simulate the process of scanning imaging. By performing deconvolution of the two images with corresponding given priors, the image quality of the deblurred images is compared. We then find the critical size of the donut shape that, compared with the Gaussian shape, yields similar deconvolution results. Calculation of the tight focusing process using a radially polarized beam shows that a donut of this size is achievable under the same conditions. Results: The effects of different relative sizes of the donut and Gaussian shapes are investigated. When the full width at half maximum (FWHM) ratio of the donut and Gaussian shapes is set to about 1.83, similar resolution results are obtained with our deconvolution method. Decreasing the size of the donut favors the deconvolution method. A mask with both amplitude and phase modulation is used to create a donut-shaped PSF, compared with the non-modulated Gaussian PSF. A donut with size smaller than our critical value is obtained. Conclusion: The donut-shaped PSF is shown to be useful and achievable in imaging and deconvolution processing, and is expected to have practical applications in high-resolution imaging of biological samples.

  18. Precision calculations of the cosmic shear power spectrum projection

    NASA Astrophysics Data System (ADS)

    Kilbinger, Martin; Heymans, Catherine; Asgari, Marika; Joudaki, Shahab; Schneider, Peter; Simon, Patrick; Van Waerbeke, Ludovic; Harnois-Déraps, Joachim; Hildebrandt, Hendrik; Köhlinger, Fabian; Kuijken, Konrad; Viola, Massimo

    2017-12-01

    We compute the spherical-sky weak-lensing power spectrum of the shear and convergence. We discuss various approximations, such as flat-sky, and first- and second-order Limber equations for the projection. We find that the impact of adopting these approximations is negligible when constraining cosmological parameters from current weak-lensing surveys. This is demonstrated using data from the Canada-France-Hawaii Telescope Lensing Survey. We find that the reported tension with Planck cosmic microwave background temperature anisotropy results cannot be alleviated. For future large-scale surveys with unprecedented precision, we show that the spherical second-order Limber approximation will provide sufficient accuracy. In this case, the cosmic-shear power spectrum is shown to be in agreement with the full projection at the sub-percent level for ℓ > 3, with the corresponding errors an order of magnitude below cosmic variance for all ℓ. When computing the two-point shear correlation function, we show that the flat-sky fast Hankel transformation results in errors below two percent compared to the full spherical transformation. In the spirit of reproducible research, our numerical implementation of all approximations and the full projection are publicly available within the package NICAEA at http://www.cosmostat.org/software/nicaea.

  19. Spectral identification of a 90Sr source in the presence of masking nuclides using Maximum-Likelihood deconvolution

    NASA Astrophysics Data System (ADS)

    Neuer, Marcus J.

    2013-11-01

    A technique for the spectral identification of strontium-90 is shown, utilising a Maximum-Likelihood deconvolution. Different deconvolution approaches are discussed and summarised. Based on the intensity distribution of the beta emission and Geant4 simulations, a combined response matrix is derived, tailored to the β- detection process in sodium iodide detectors. It includes scattering effects and attenuation by applying a base material decomposition extracted from Geant4 simulations with a CAD model for a realistic detector system. Inversion results of measurements show the agreement between deconvolution and reconstruction. A detailed investigation with additional masking sources like 40K, 226Ra and 131I shows that a contamination of strontium can be found in the presence of these nuisance sources. Identification algorithms for strontium are presented based on the derived technique. For the implementation of blind identification, an exemplary masking ratio is calculated.
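
    Maximum-Likelihood deconvolution with a known response matrix is typically implemented as an ML-EM iteration; below is a generic sketch with a toy Gaussian response (an assumption for illustration; the actual response matrix in the paper comes from Geant4 simulations of the NaI detector):

```python
import numpy as np

def mlem(measured, R, n_iter=200):
    """Maximum-Likelihood EM deconvolution under a Poisson noise model.
    R[i, j] = probability that an emission in source bin j is detected
    in channel i (here toy Gaussian smearing, not a simulated response)."""
    s = np.ones(R.shape[1])
    sens = R.sum(axis=0)                       # per-bin detection efficiency
    for _ in range(n_iter):
        proj = R @ s
        s *= (R.T @ (measured / np.maximum(proj, 1e-12))) / sens
    return s

nchan = 64
chans = np.arange(nchan)
R = np.exp(-0.5 * ((chans[:, None] - chans[None, :]) / 2.0) ** 2)
R /= R.sum(axis=0)                             # normalize each column
true = np.zeros(nchan)
true[20], true[40] = 100.0, 50.0               # two emission lines
measured = R @ true
est = mlem(measured, R)
print(round(est.sum(), 6))                     # 150.0: total counts preserved
```

    The multiplicative update keeps the estimate non-negative and conserves total counts, which is why EM-type inversions are popular for spectra with masking nuclides.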

  20. A frequency-domain seismic blind deconvolution based on Gini correlations

    NASA Astrophysics Data System (ADS)

    Wang, Zhiguo; Zhang, Bing; Gao, Jinghuai; Huo Liu, Qing

    2018-02-01

    In reflection seismic processing, seismic blind deconvolution is a challenging problem, especially when the signal-to-noise ratio (SNR) of the seismic record is low and the record is short. As a solution to this ill-posed inverse problem, we assume that the reflectivity sequence is independent and identically distributed (i.i.d.). To infer the i.i.d. relationships from seismic data, we first introduce the Gini correlations (GCs) to construct a new criterion for seismic blind deconvolution in the frequency domain. The GCs are more tolerant of low-SNR data and less dependent on record length. Applications of the seismic blind deconvolution based on the GCs show its capacity to estimate the unknown seismic wavelet and the reflectivity sequence, for both synthetic traces and field data, even with low SNR and short records.
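
    A Gini correlation mixes the values of one variable with only the ranks of the other, which is the source of the robustness exploited here. One common sample definition is sketched below (an assumption; the paper's exact estimator and criterion may differ):

```python
import numpy as np

def gini_correlation(x, y):
    """Sample Gini correlation: cov(x, rank(y)) / cov(x, rank(x)).
    Using only the ranks of y makes the statistic less sensitive to
    noise and outliers than the Pearson correlation."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    cov = lambda a, b: np.mean((a - a.mean()) * (b - b.mean()))
    return cov(x, ry) / cov(x, rx)

rng = np.random.default_rng(1)
x = rng.normal(size=1000)
print(round(gini_correlation(x, 2.0 * x + 1.0), 6))   # 1.0: monotone relation
```

    Note the asymmetry: gini_correlation(x, y) and gini_correlation(y, x) generally differ, which the frequency-domain criterion has to account for.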

  1. Processing strategy for water-gun seismic data from the Gulf of Mexico

    USGS Publications Warehouse

    Lee, Myung W.; Hart, Patrick E.; Agena, Warren F.

    2000-01-01

    In order to study the regional distribution of gas hydrates and their potential relationship to large-scale sea-floor failures, more than 1,300 km of near-vertical-incidence seismic profiles were acquired using a 15-in³ water gun across the upper- and middle-continental slope in the Garden Banks and Green Canyon regions of the Gulf of Mexico. Because of the highly mixed-phase water-gun signature, caused mainly by a precursor of the source arriving about 18 ms ahead of the main pulse, a conventional processing scheme based on the minimum-phase assumption is not suitable for this data set. A conventional processing scheme suppresses the reverberations and compresses the main pulse, but the failure to suppress precursors results in complex interference between the precursors and primary reflections, thus obscuring true reflections. To clearly image the subsurface without interference from the precursors, a wavelet deconvolution based on the mixed-phase assumption using a variable norm is attempted. This non-minimum-phase wavelet deconvolution compresses the long-wave-train water-gun signature into a simple zero-phase wavelet. A second-zero-crossing predictive deconvolution followed by a wavelet deconvolution suppressed variable ghost arrivals attributed to the variable depths of receivers. The processing strategy of using wavelet deconvolution followed by a second-zero-crossing deconvolution resulted in a sharp and simple wavelet and a better definition of the polarity of reflections. Also, the application of dip-moveout correction enhanced the lateral resolution of reflections and substantially suppressed coherent noise.

  2. Dereplication of Natural Products Using GC-TOF Mass Spectrometry: Improved Metabolite Identification by Spectral Deconvolution Ratio Analysis.

    PubMed

    Carnevale Neto, Fausto; Pilon, Alan C; Selegato, Denise M; Freire, Rafael T; Gu, Haiwei; Raftery, Daniel; Lopes, Norberto P; Castro-Gamboa, Ian

    2016-01-01

    Dereplication based on hyphenated techniques has been extensively applied in plant metabolomics, thereby avoiding re-isolation of known natural products. However, due to the complex nature of biological samples and their large concentration range, dereplication requires the use of chemometric tools to comprehensively extract information from the acquired data. In this work we developed a reliable GC-MS-based method for the identification of non-targeted plant metabolites by combining the Ratio Analysis of Mass Spectrometry deconvolution tool (RAMSY) with the Automated Mass Spectral Deconvolution and Identification System software (AMDIS). Plant species from the Solanaceae, Chrysobalanaceae and Euphorbiaceae were selected as model systems due to their molecular diversity, ethnopharmacological potential, and economic value. The samples were analyzed by GC-MS after methoximation and silylation reactions. Dereplication was initiated with the use of a factorial design of experiments to determine the best AMDIS configuration for each sample, considering linear retention indices and mass spectral data. A heuristic factor (CDF, compound detection factor) was developed and applied to the AMDIS results in order to decrease the false-positive rates. Despite the enhancement in deconvolution and peak identification, the empirical AMDIS method was not able to fully deconvolute all GC peaks, leading to low MF values and/or missing metabolites. RAMSY was applied as a complementary deconvolution method to AMDIS for peaks exhibiting substantial overlap, resulting in recovery of low-intensity co-eluted ions. The results from this combination of optimized AMDIS with RAMSY attested to the ability of this approach as an improved dereplication method for complex biological samples such as plant extracts.

  3. Dereplication of Natural Products Using GC-TOF Mass Spectrometry: Improved Metabolite Identification by Spectral Deconvolution Ratio Analysis

    PubMed Central

    Carnevale Neto, Fausto; Pilon, Alan C.; Selegato, Denise M.; Freire, Rafael T.; Gu, Haiwei; Raftery, Daniel; Lopes, Norberto P.; Castro-Gamboa, Ian

    2016-01-01

    Dereplication based on hyphenated techniques has been extensively applied in plant metabolomics, thereby avoiding re-isolation of known natural products. However, due to the complex nature of biological samples and their large concentration range, dereplication requires the use of chemometric tools to comprehensively extract information from the acquired data. In this work we developed a reliable GC-MS-based method for the identification of non-targeted plant metabolites by combining the Ratio Analysis of Mass Spectrometry deconvolution tool (RAMSY) with the Automated Mass Spectral Deconvolution and Identification System software (AMDIS). Plant species from the Solanaceae, Chrysobalanaceae and Euphorbiaceae were selected as model systems due to their molecular diversity, ethnopharmacological potential, and economic value. The samples were analyzed by GC-MS after methoximation and silylation reactions. Dereplication was initiated with the use of a factorial design of experiments to determine the best AMDIS configuration for each sample, considering linear retention indices and mass spectral data. A heuristic factor (CDF, compound detection factor) was developed and applied to the AMDIS results in order to decrease the false-positive rates. Despite the enhancement in deconvolution and peak identification, the empirical AMDIS method was not able to fully deconvolute all GC peaks, leading to low MF values and/or missing metabolites. RAMSY was applied as a complementary deconvolution method to AMDIS for peaks exhibiting substantial overlap, resulting in recovery of low-intensity co-eluted ions. The results from this combination of optimized AMDIS with RAMSY attested to the ability of this approach as an improved dereplication method for complex biological samples such as plant extracts. PMID:27747213

  4. A method of PSF generation for 3D brightfield deconvolution.

    PubMed

    Tadrous, P J

    2010-02-01

    This paper addresses the problem of 3D deconvolution of through focus widefield microscope datasets (Z-stacks). One of the most difficult stages in brightfield deconvolution is finding the point spread function. A theoretically calculated point spread function (called a 'synthetic PSF' in this paper) requires foreknowledge of many system parameters and still gives only approximate results. A point spread function measured from a sub-resolution bead suffers from low signal-to-noise ratio, compounded in the brightfield setting (by contrast to fluorescence) by absorptive, refractive and dispersal effects. This paper describes a method of point spread function estimation based on measurements of a Z-stack through a thin sample. This Z-stack is deconvolved by an idealized point spread function derived from the same Z-stack to yield a point spread function of high signal-to-noise ratio that is also inherently tailored to the imaging system. The theory is validated by a practical experiment comparing the non-blind 3D deconvolution of the yeast Saccharomyces cerevisiae with the point spread function generated using the method presented in this paper (called the 'extracted PSF') to a synthetic point spread function. Restoration of both high- and low-contrast brightfield structures is achieved with fewer artefacts using the extracted point spread function obtained with this method. Furthermore the deconvolution progresses further (more iterations are allowed before the error function reaches its nadir) with the extracted point spread function compared to the synthetic point spread function indicating that the extracted point spread function is a better fit to the brightfield deconvolution model than the synthetic point spread function.

  5. Tractometer: towards validation of tractography pipelines.

    PubMed

    Côté, Marc-Alexandre; Girard, Gabriel; Boré, Arnaud; Garyfallidis, Eleftherios; Houde, Jean-Christophe; Descoteaux, Maxime

    2013-10-01

    We have developed the Tractometer: an online evaluation and validation system for tractography processing pipelines. One can now evaluate the results of more than 57,000 fiber tracking outputs using different acquisition settings (b-value, averaging), different local estimation techniques (tensor, q-ball, spherical deconvolution) and different tracking parameters (masking, seeding, maximum curvature, step size). At this stage, the system is solely based on a revised FiberCup analysis, but we hope that the community will get involved and provide us with new phantoms, new algorithms, third-party libraries and new geometrical metrics, to name a few. We believe that the new connectivity analysis and tractography characteristics proposed can highlight limits of the algorithms and contribute to solving open questions in fiber tracking: from raw data to connectivity analysis. Overall, we show that (i) averaging improves the quality of tractography, (ii) sharp angular ODF profiles help tractography, (iii) seeding and multi-seeding have a large impact on tractography outputs and must be used with care, and (iv) deterministic tractography produces fewer invalid tracts, which leads to better connectivity results than probabilistic tractography. Copyright © 2013 Elsevier B.V. All rights reserved.

  6. Ferromagnetic resonance studies of granular materials (abstract)

    NASA Astrophysics Data System (ADS)

    Rubinstein, Mark; Das, Badri; Chrisey, D. B.; Horwitz, J.; Koon, N. C.

    1994-05-01

    We have investigated the ferromagnetic resonance (FMR) spectra of several granular alloys displaying giant magnetoresistance (GMR). For this task, we have produced melt-spun ribbons of Fe5Co15Cu80 and Co20Cu80 by rapid quenching and thin films of Co80Cu20 by pulsed laser deposition. The salient feature of the FMR spectra is the increase of the resonance linewidth as a function of increasing annealing temperature. We have deconvoluted the FMR spectra into a single-domain powder pattern and a multidomain powder pattern. As a function of annealing temperature, the GMR of these samples attains a maximum value. Near the peak of the GMR curve, the FMR spectrum reveals that the ferromagnetic particles are half mono- and half multidomain. Since the maximum size of a single-domain particle is known, this enables us to estimate the spin diffusion length of the Cu conduction electrons. We have also demonstrated, theoretically and experimentally, that the appropriate demagnetizing field to apply to the ensemble of spherical magnetic particles that comprise our granular thin film is simply the field corresponding to the average magnetization.

  7. A digital algorithm for spectral deconvolution with noise filtering and peak picking: NOFIPP-DECON

    NASA Technical Reports Server (NTRS)

    Edwards, T. R.; Settle, G. L.; Knight, R. D.

    1975-01-01

    Noise-filtering, peak-picking deconvolution software incorporates multiple convoluted convolute integers and multiparameter optimization pattern search. The two theories are described and three aspects of the software package are discussed in detail. Noise-filtering deconvolution was applied to a number of experimental cases ranging from noisy, nondispersive X-ray analyzer data to very noisy photoelectric polarimeter data. Comparisons were made with published infrared data, and a man-machine interactive language has evolved for assisting in very difficult cases. A modified version of the program is being used for routine preprocessing of mass spectral and gas chromatographic data.
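    The "convolute integers" referenced above are the Savitzky-Golay family of least-squares smoothing filters. As a rough modern sketch of the noise-filter-then-peak-pick idea (not the NOFIPP-DECON code itself; the peak positions, noise level and filter settings are invented):

```python
import numpy as np
from scipy.signal import savgol_filter, find_peaks

# Synthetic noisy spectrum with two peaks (invented positions 3.0 and 6.0)
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 500)
clean = np.exp(-((x - 3.0) ** 2) / 0.1) + 0.8 * np.exp(-((x - 6.0) ** 2) / 0.1)
noisy = clean + 0.05 * rng.standard_normal(x.size)

smoothed = savgol_filter(noisy, window_length=21, polyorder=3)  # noise filtering
peaks, _ = find_peaks(smoothed, height=0.3, distance=50)        # peak picking
centres = x[peaks]                                              # should lie near 3 and 6
```

Window length, polynomial order and the peak-picking thresholds would be tuned to the instrument's line width and noise floor, much as the original package optimized its parameters by pattern search.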

  8. The Small-scale Structure of Photospheric Convection Retrieved by a Deconvolution Technique Applied to Hinode/SP Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oba, T.; Riethmüller, T. L.; Solanki, S. K.

    Solar granules are bright patterns surrounded by dark channels, called intergranular lanes, in the solar photosphere and are a manifestation of overshooting convection. Observational studies generally find stronger upflows in granules and weaker downflows in intergranular lanes. This trend is, however, inconsistent with the results of numerical simulations in which downflows are stronger than upflows through the joint action of gravitational acceleration/deceleration and pressure gradients. One cause of this discrepancy is the image degradation caused by optical distortion and light diffraction and scattering that takes place in an imaging instrument. We apply a deconvolution technique to Hinode/SP data in an attempt to recover the original solar scene. Our results show a significant enhancement in both the convective upflows and downflows but particularly for the latter. After deconvolution, the up- and downflows reach maximum amplitudes of −3.0 km s−1 and +3.0 km s−1 at an average geometrical height of roughly 50 km, respectively. We found that the velocity distributions after deconvolution match those derived from numerical simulations. After deconvolution, the net LOS velocity averaged over the whole field of view lies close to zero, as expected in a rough sense from mass balance.

  9. Toxoplasma Modulates Signature Pathways of Human Epilepsy, Neurodegeneration & Cancer.

    PubMed

    Ngô, Huân M; Zhou, Ying; Lorenzi, Hernan; Wang, Kai; Kim, Taek-Kyun; Zhou, Yong; El Bissati, Kamal; Mui, Ernest; Fraczek, Laura; Rajagopala, Seesandra V; Roberts, Craig W; Henriquez, Fiona L; Montpetit, Alexandre; Blackwell, Jenefer M; Jamieson, Sarra E; Wheeler, Kelsey; Begeman, Ian J; Naranjo-Galvis, Carlos; Alliey-Rodriguez, Ney; Davis, Roderick G; Soroceanu, Liliana; Cobbs, Charles; Steindler, Dennis A; Boyer, Kenneth; Noble, A Gwendolyn; Swisher, Charles N; Heydemann, Peter T; Rabiah, Peter; Withers, Shawn; Soteropoulos, Patricia; Hood, Leroy; McLeod, Rima

    2017-09-13

    One third of humans are infected lifelong with the brain-dwelling, protozoan parasite, Toxoplasma gondii. Approximately fifteen million of these have congenital toxoplasmosis. Although neurobehavioral disease is associated with seropositivity, causality is unproven. To better understand what this parasite does to human brains, we performed a comprehensive systems analysis of the infected brain: We identified susceptibility genes for congenital toxoplasmosis in our cohort of infected humans and found these genes are expressed in human brain. Transcriptomic and quantitative proteomic analyses of infected human, primary, neuronal stem and monocytic cells revealed effects on neurodevelopment and plasticity in neural, immune, and endocrine networks. These findings were supported by identification of protein and miRNA biomarkers in sera of ill children reflecting brain damage and T. gondii infection. These data were deconvoluted using three systems biology approaches: "Orbital-deconvolution" elucidated upstream, regulatory pathways interconnecting human susceptibility genes, biomarkers, proteomes, and transcriptomes. "Cluster-deconvolution" revealed visual protein-protein interaction clusters involved in processes affecting brain functions and circuitry, including lipid metabolism, leukocyte migration and olfaction. Finally, "disease-deconvolution" identified associations between the parasite-brain interactions and epilepsy, movement disorders, Alzheimer's disease, and cancer. This "reconstruction-deconvolution" logic provides templates of progenitor cells' potentiating effects, and components affecting human brain parasitism and diseases.

  10. Peptide de novo sequencing of mixture tandem mass spectra.

    PubMed

    Gorshkov, Vladimir; Hotta, Stéphanie Yuki Kolbeck; Verano-Braga, Thiago; Kjeldsen, Frank

    2016-09-01

    The impact of mixture spectra deconvolution on the performance of four popular de novo sequencing programs was tested using artificially constructed mixture spectra as well as experimental proteomics data. Mixture fragmentation spectra are recognized as a limitation in proteomics because they decrease the identification performance using database search engines. De novo sequencing approaches are expected to be even more sensitive to the reduction in mass spectrum quality resulting from peptide precursor co-isolation and thus prone to false identifications. The deconvolution approach matched complementary b-, y-ions to each precursor peptide mass, which allowed the creation of virtual spectra containing sequence specific fragment ions of each co-isolated peptide. Deconvolution processing resulted in equally efficient identification rates but increased the absolute number of correctly sequenced peptides. The improvement was in the range of 20-35% additional peptide identifications for a HeLa lysate sample. Some correct sequences were identified only using unprocessed spectra; however, the number of these was lower than those where improvement was obtained by mass spectral deconvolution. Tight candidate peptide score distribution and high sensitivity to small changes in the mass spectrum introduced by the employed deconvolution method could explain some of the missing peptide identifications. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
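    The complementary-ion matching described here relies on the fact that singly protonated b- and y-ions from the same peptide sum to that peptide's neutral mass plus two proton masses. A minimal sketch (the peak list, precursor masses and helper name are invented; this is not the authors' implementation):

```python
PROTON = 1.00728  # proton mass, Da

def complementary_pairs(peaks, neutral_mass, tol=0.02):
    """Pairs of singly charged fragments whose masses sum to M + 2 protons."""
    target = neutral_mass + 2 * PROTON
    return [(a, b) for i, a in enumerate(peaks) for b in peaks[i + 1:]
            if abs((a + b) - target) < tol]

# Invented mixture spectrum from two co-isolated peptides (M = 800 and 900 Da)
peaks = [200.0, 350.0, 602.01456, 452.01456, 250.0, 652.01456]
first = complementary_pairs(peaks, 800.0)   # b/y pairs of the first peptide
second = complementary_pairs(peaks, 900.0)  # b/y pairs of the second peptide
```

Grouping the paired fragments per precursor mass is what allows the "virtual spectra" of each co-isolated peptide to be assembled.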

  11. Redesigning existing transcranial magnetic stimulation coils to reduce energy: application to low field magnetic stimulation

    NASA Astrophysics Data System (ADS)

    Wang, Boshuo; Shen, Michael R.; Deng, Zhi-De; Smith, J. Evan; Tharayil, Joseph J.; Gurrey, Clement J.; Gomez, Luis J.; Peterchev, Angel V.

    2018-06-01

    Objective. To present a systematic framework and exemplar for the development of a compact and energy-efficient coil that replicates the electric field (E-field) distribution induced by an existing transcranial magnetic stimulation coil. Approach. The E-field generated by a conventional low field magnetic stimulation (LFMS) coil was measured for a spherical head model and simulated in both spherical and realistic head models. Then, using a spherical head model and spatial harmonic decomposition, a spherical-shaped cap coil was synthesized such that its windings conformed to a spherical surface and replicated the E-field on the cortical surface while requiring less energy. A prototype coil was built and electrically characterized. The effect of constraining the windings to the upper half of the head was also explored via an alternative coil design. Main results. The LFMS E-field distribution resembled that of a large double-cone coil, with a peak field strength around 350 mV m‑1 in the cortex. The E-field distributions of the cap coil designs were validated against the original coil, with mean errors of 1%–3%. The cap coil required as little as 2% of the original coil energy and was significantly smaller in size. Significance. The redesigned LFMS coil is substantially smaller and more energy-efficient than the original, improving cost, power consumption, and portability. These improvements could facilitate deployment of LFMS in the clinic and potentially at home. This coil redesign approach can also be applied to other magnetic stimulation paradigms. Finally, the anatomically-accurate E-field simulation of LFMS can be used to interpret clinical LFMS data.

  12. New Constraints on the Geometry and Kinematics of Matter Surrounding the Accretion Flow in X-Ray Binaries from Chandra High-Energy Transmission Grating X-Ray Spectroscopy

    NASA Technical Reports Server (NTRS)

    Tzanavaris, P.; Yaqoob, T.

    2018-01-01

    The narrow, neutral Fe Kα fluorescence emission line in X-ray binaries (XRBs) is a powerful probe of the geometry, kinematics, and Fe abundance of matter around the accretion flow. In a recent study it has been claimed, using Chandra High-Energy Transmission Grating (HETG) spectra for a sample of XRBs, that the circumnuclear material is consistent with a solar-abundance, uniform, spherical distribution. It was also claimed that the Fe Kα line was unresolved in all cases by the HETG. However, these conclusions were based on ad hoc models that did not attempt to relate the global column density to the Fe Kα line emission. We revisit the sample and test a self-consistent model of a uniform, spherical X-ray reprocessor against HETG spectra from 56 observations of 14 Galactic XRBs. We find that the model is ruled out in 13/14 sources because a variable Fe abundance is required. In two sources a spherical distribution is viable, but with nonsolar Fe abundance. We also applied a solar-abundance Compton-thick reflection model, which can account for the spectra that are inconsistent with a spherical model, but spectra with a broader bandpass are required to better constrain model parameters. We also robustly measured the velocity width of the Fe Kα line and found FWHM values of up to approx. 5000 km/s. Only in some spectra was the Fe Kα line unresolved by the HETG.

  13. Untangling the Diverse Interior and Multiple Exterior Guest Interactions of a Supramolecular Host by the Simultaneous Analysis of Complementary Observables.

    PubMed

    Sgarlata, Carmelo; Raymond, Kenneth N

    2016-07-05

    The entropic and enthalpic driving forces for encapsulation versus sequential exterior guest binding to the [Ga4L6](12-) supramolecular host in solution are very different, which significantly complicates the determination of these thermodynamic parameters. The simultaneous use of complementary techniques, such as NMR, UV-vis, and isothermal titration calorimetry, enables the disentanglement of such multiple host-guest interactions. Indeed, data collected by each technique measure different components of the host-guest equilibria and together provide a complete picture of the solution thermodynamics. Unfortunately, commercially available programs do not allow for global analysis of different physical observables. We thus resorted to a novel procedure for the simultaneous refinement of multiple parameters (ΔG°, ΔH°, and ΔS°) by treating different observables through a weighted nonlinear least-squares analysis of a constrained model. The refinement procedure is discussed for the multiple binding of the Et4N(+) guest, but it is broadly applicable to the deconvolution of other intricate host-guest equilibria.
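    The simultaneous refinement idea, stacking weighted residuals from different observables that share the same thermodynamic parameters, can be sketched as follows (a toy 1:1 binding model with invented data, weights and parameter values, not the authors' program):

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic "data" from two techniques sharing one binding constant K:
# a calorimetric heat series (scaled by dH) and a spectroscopic series
# (scaled by an effective epsilon). All numbers are illustrative.
x = np.linspace(0.1, 5.0, 20)            # guest/host ratio
K_true, dH_true, eps_true = 2.0, -8.0, 0.5
frac = K_true * x / (1 + K_true * x)     # 1:1 bound fraction
heat = dH_true * frac
absorb = eps_true * frac

def residuals(p):
    K, dH, eps = p
    f = K * x / (1 + K * x)
    # Weight each technique by its assumed measurement uncertainty
    return np.concatenate([(dH * f - heat) / 0.1,
                           (eps * f - absorb) / 0.01])

fit = least_squares(residuals, x0=[1.0, -5.0, 1.0])  # global weighted fit
```

Because both residual blocks depend on the same K, neither data set alone needs to constrain all parameters; the stacked fit recovers them jointly.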

  14. Automation of a Wave-Optics Simulation and Image Post-Processing Package on Riptide

    NASA Astrophysics Data System (ADS)

    Werth, M.; Lucas, J.; Thompson, D.; Abercrombie, M.; Holmes, R.; Roggemann, M.

    Detailed wave-optics simulations and image post-processing algorithms are computationally expensive and benefit from the massively parallel hardware available at supercomputing facilities. We created an automated system that interfaces with the Maui High Performance Computing Center (MHPCC) Distributed MATLAB® Portal interface to submit massively parallel wave-optics simulations to the IBM iDataPlex (Riptide) supercomputer. This system subsequently post-processes the output images with an improved version of physically constrained iterative deconvolution (PCID) and analyzes the results using a series of modular algorithms written in Python. With this architecture, a single person can simulate thousands of unique scenarios and produce analyzed, archived, and briefing-compatible output products with very little effort. This research was developed with funding from the Defense Advanced Research Projects Agency (DARPA). The views, opinions, and/or findings expressed are those of the author(s) and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.

  15. Dynamics of non-holonomic systems with stochastic transport

    NASA Astrophysics Data System (ADS)

    Holm, D. D.; Putkaradze, V.

    2018-01-01

    This paper formulates a variational approach for treating observational uncertainty and/or computational model errors as stochastic transport in dynamical systems governed by action principles under non-holonomic constraints. For this purpose, we derive, analyse and numerically study the example of an unbalanced spherical ball rolling under gravity along a stochastic path. Our approach uses the Hamilton-Pontryagin variational principle, constrained by a stochastic rolling condition, which we show is equivalent to the corresponding stochastic Lagrange-d'Alembert principle. In the example of the rolling ball, the stochasticity represents uncertainty in the observation and/or error in the computational simulation of the angular velocity of rolling. The influence of the stochasticity on the deterministically conserved quantities is investigated both analytically and numerically. Our approach applies to a wide variety of stochastic, non-holonomically constrained systems, because it preserves the mathematical properties inherited from the variational principle.

  16. Reversible patterning of spherical shells through constrained buckling

    NASA Astrophysics Data System (ADS)

    Marthelot, J.; Brun, P.-T.; Jiménez, F. López; Reis, P. M.

    2017-07-01

    Recent advances in active soft structures envision the large deformations resulting from mechanical instabilities as routes for functional shape morphing. Numerous such examples exist for filamentary and plate systems. However, examples with double-curved shells are rarer, with progress hampered by challenges in fabrication and the complexities involved in analyzing their underlying geometrical nonlinearities. We show that on-demand patterning of hemispherical shells can be achieved through constrained buckling. Their postbuckling response is stabilized by an inner rigid mandrel. Through a combination of experiments, simulations, and scaling analyses, our investigation focuses on the nucleation and evolution of the buckling patterns into a reticulated network of sharp ridges. The geometry of the system, namely, the shell radius and the gap between the shell and the mandrel, is found to be the primary ingredient to set the surface morphology. This prominence of geometry suggests a robust, scalable, and tunable mechanism for reversible shape morphing of elastic shells.

  17. Towards robust deconvolution of low-dose perfusion CT: sparse perfusion deconvolution using online dictionary learning.

    PubMed

    Fang, Ruogu; Chen, Tsuhan; Sanelli, Pina C

    2013-05-01

    Computed tomography perfusion (CTP) is an important functional imaging modality in the evaluation of cerebrovascular diseases, particularly in acute stroke and vasospasm. However, the post-processed parametric maps of blood flow tend to be noisy, especially in low-dose CTP, due to the noisy contrast enhancement profile and the oscillatory nature of the results generated by the current computational methods. In this paper, we propose a robust sparse perfusion deconvolution method (SPD) to estimate cerebral blood flow in CTP performed at low radiation dose. We first build a dictionary from high-dose perfusion maps using online dictionary learning and then perform deconvolution-based hemodynamic parameter estimation on the low-dose CTP data. Our method is validated on clinical data of patients with normal and pathological CBF maps. The results show that our method achieves superior performance compared with existing methods and can potentially improve the differentiation between normal and ischemic tissue in the brain. Copyright © 2013 Elsevier B.V. All rights reserved.
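    The deconvolution-based estimation step that SPD builds on is conventionally done by regularized (often truncated-SVD) deconvolution of the tissue curve with the arterial input function. A minimal noiseless sketch, with synthetic curves and an assumed truncation threshold:

```python
import numpy as np
from scipy.linalg import toeplitz

# Synthetic curves: tissue curve = convolution of the arterial input
# function (AIF) with a flow-scaled residue function. All values invented.
n = 40
t = np.arange(n, dtype=float)
aif = np.exp(-t / 2.0)                         # assumed AIF
residue = 0.8 * np.exp(-t / 3.0)               # flow-scaled residue, CBF ~ 0.8
A = toeplitz(aif, np.zeros(n))                 # causal convolution matrix
tissue = A @ residue                           # forward model

# Truncated-SVD deconvolution (threshold is an assumed choice)
U, s, Vt = np.linalg.svd(A)
s_inv = np.where(s > 1e-3 * s[0], 1.0 / s, 0.0)
residue_hat = Vt.T @ (s_inv * (U.T @ tissue))
cbf_hat = residue_hat.max()                    # peak of residue ~ blood flow
```

With low-dose noise added to `tissue`, this inversion becomes oscillatory, which is the failure mode the dictionary-learning prior in the abstract is designed to suppress.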

  18. Deconvolution of azimuthal mode detection measurements

    NASA Astrophysics Data System (ADS)

    Sijtsma, Pieter; Brouwer, Harry

    2018-05-01

    Unequally spaced transducer rings make it possible to extend the range of detectable azimuthal modes. The disadvantage is that the response of the mode detection algorithm to a single mode is distributed over all detectable modes, similarly to the Point Spread Function of Conventional Beamforming with microphone arrays. With multiple modes the response patterns interfere, leading to a relatively high "noise floor" of spurious modes in the detected mode spectrum, in other words, to a low dynamic range. In this paper a deconvolution strategy is proposed for increasing this dynamic range. It starts with separating the measured sound into shaft tones and broadband noise. For broadband noise modes, a standard Non-Negative Least Squares solver appeared to be a perfect deconvolution tool. For shaft tones a Matching Pursuit approach is proposed, taking advantage of the sparsity of dominant modes. The deconvolution methods were applied to mode detection measurements in a fan rig. An increase in dynamic range of typically 10-15 dB was found.
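    The broadband-noise step described above, deconvolving the detected mode spectrum with a Non-Negative Least Squares solver, can be sketched as follows (the response matrix, mode count and noise level are invented for illustration):

```python
import numpy as np
from scipy.optimize import nnls

# Detected spectrum b modelled as a known response matrix A (the "point
# spread" of the unequally spaced ring) acting on true mode powers x >= 0.
rng = np.random.default_rng(1)
n_modes = 16
A = np.eye(n_modes) + 0.1 * rng.random((n_modes, n_modes))  # invented leakage
x_true = np.zeros(n_modes)
x_true[[3, 7]] = [1.0, 0.5]                                 # two dominant modes
b = A @ x_true + 1e-3 * rng.random(n_modes)                 # detected spectrum

x_hat, _ = nnls(A, b)                                       # non-negative deconvolution
```

The non-negativity constraint is what suppresses the spurious-mode "noise floor": leakage that would otherwise be fit by small negative powers is forced to zero.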

  19. Monitoring of Time-Dependent System Profiles by Multiplex Gas Chromatography with Maximum Entropy Demodulation

    NASA Technical Reports Server (NTRS)

    Becker, Joseph F.; Valentin, Jose

    1996-01-01

    The maximum entropy technique was successfully applied to the deconvolution of overlapped chromatographic peaks. An algorithm was written in which the chromatogram was represented as a vector of sample concentrations multiplied by a peak shape matrix. Simulation results demonstrated that there is a trade-off between the detector noise and peak resolution, in the sense that an increase in the noise level reduces the peak separation that can be recovered by the maximum entropy method. Real data originating from a sample storage column were also deconvoluted using maximum entropy. Deconvolution is useful in this type of system because the conservation of time-dependent profiles depends on the band spreading processes in the chromatographic column, which might smooth out the finer details in the concentration profile. The method was also applied to the deconvolution of previously interpreted Pioneer Venus chromatograms. It was found in this case that the correct choice of peak shape function was critical to the sensitivity of maximum entropy in the reconstruction of these chromatograms.
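    The forward model described above, a chromatogram equal to a peak-shape matrix times a concentration vector, can be sketched together with a toy entropy-regularized fit (a simplified stand-in for the full maximum entropy solver; the peak shape, regularization weight and peak positions are all assumed):

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.optimize import minimize

n = 60
t = np.arange(n)
shape = np.exp(-0.5 * (t / 2.0) ** 2)          # assumed Gaussian peak shape
A = toeplitz(shape, np.zeros(n))               # peak-shape (smearing) matrix
c_true = np.zeros(n)
c_true[[20, 26]] = [1.0, 0.7]                  # two overlapped components
y = A @ c_true                                 # simulated chromatogram

alpha = 1e-4                                   # assumed entropy weight

def objective(c):
    c = np.clip(c, 1e-9, None)
    # chi-squared misfit minus alpha * entropy (entropy = -sum c log c)
    return np.sum((A @ c - y) ** 2) + alpha * np.sum(c * np.log(c))

def gradient(c):
    c = np.clip(c, 1e-9, None)
    return 2.0 * A.T @ (A @ c - y) + alpha * (np.log(c) + 1.0)

res = minimize(objective, np.full(n, 0.1), jac=gradient,
               bounds=[(1e-9, None)] * n)
c_hat = res.x                                  # concentrates near indices 20 and 26
```

The entropy term plays the role described in the abstract: it keeps the solution positive and penalizes structure not demanded by the data, with the noise level setting how much detail can be recovered.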

  20. Joint deconvolution and classification with applications to passive acoustic underwater multipath.

    PubMed

    Anderson, Hyrum S; Gupta, Maya R

    2008-11-01

    This paper addresses the problem of classifying signals that have been corrupted by noise and unknown linear time-invariant (LTI) filtering such as multipath, given labeled uncorrupted training signals. A maximum a posteriori approach to the deconvolution and classification is considered, which produces estimates of the desired signal, the unknown channel, and the class label. For cases in which only a class label is needed, the classification accuracy can be improved by not committing to an estimate of the channel or signal. A variant of the quadratic discriminant analysis (QDA) classifier is proposed that probabilistically accounts for the unknown LTI filtering, and which avoids deconvolution. The proposed QDA classifier can work either directly on the signal or on features whose transformation by LTI filtering can be analyzed; as an example a classifier for subband-power features is derived. Results on simulated data and real Bowhead whale vocalizations show that jointly considering deconvolution with classification can dramatically improve classification performance over traditional methods across a range of signal-to-noise ratios.
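    For reference, plain QDA, the base classifier the proposed variant extends, fits one Gaussian per class and picks the class with the highest log-likelihood. A minimal sketch with synthetic two-class data (the multipath-aware terms of the paper's variant are omitted):

```python
import numpy as np

def fit_qda(X, y):
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        mu = Xc.mean(axis=0)
        cov = np.cov(Xc.T) + 1e-6 * np.eye(X.shape[1])  # regularised covariance
        params[c] = (mu, cov, len(Xc) / len(X))
    return params

def predict_qda(params, x):
    best, best_ll = None, -np.inf
    for c, (mu, cov, prior) in params.items():
        d = x - mu
        ll = (-0.5 * d @ np.linalg.solve(cov, d)
              - 0.5 * np.linalg.slogdet(cov)[1]
              + np.log(prior))
        if ll > best_ll:
            best, best_ll = c, ll
    return best

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 1.0, (200, 2)),   # class 0: broad cluster at origin
               rng.normal(3.0, 0.5, (200, 2))])  # class 1: tight cluster at (3, 3)
y = np.array([0] * 200 + [1] * 200)
params = fit_qda(X, y)
```

The paper's contribution, roughly, is to replace the fixed class covariances with ones that account for the distribution of possible LTI channels, so no explicit channel estimate is needed.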

  1. Application of Fourier-wavelet regularized deconvolution for improving image quality of free space propagation x-ray phase contrast imaging.

    PubMed

    Zhou, Zhongxing; Gao, Feng; Zhao, Huijuan; Zhang, Lixin

    2012-11-21

    New x-ray phase contrast imaging techniques without using synchrotron radiation confront a common problem from the negative effects of finite source size and limited spatial resolution. These negative effects swamp the fine phase contrast fringes and make them almost undetectable. In order to alleviate this problem, deconvolution procedures should be applied to the blurred x-ray phase contrast images. In this study, three different deconvolution techniques, including Wiener filtering, Tikhonov regularization and Fourier-wavelet regularized deconvolution (ForWaRD), were applied to the simulated and experimental free space propagation x-ray phase contrast images of simple geometric phantoms. These algorithms were evaluated in terms of phase contrast improvement and signal-to-noise ratio. The results demonstrate that the ForWaRD algorithm is the most appropriate for phase contrast image restoration among the above-mentioned methods; it can effectively restore the lost information of the phase contrast fringes while reducing the noise amplified during Fourier regularization.
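    Of the three methods compared, Wiener filtering is the simplest to sketch: divide by the blur transfer function where it is strong and roll off where noise dominates. A 1-D illustration with an invented kernel and noise-to-signal ratio:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 256
signal = np.zeros(n)
signal[100:110] = 1.0                              # sharp fringe-like feature
kernel = np.exp(-0.5 * ((np.arange(n) - n // 2) / 3.0) ** 2)
kernel /= kernel.sum()                             # assumed Gaussian source blur

H = np.fft.fft(np.fft.ifftshift(kernel))           # zero-phase transfer function
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * H))
noisy = blurred + 1e-3 * rng.standard_normal(n)

nsr = 1e-4                                         # assumed noise-to-signal power ratio
wiener = np.conj(H) / (np.abs(H) ** 2 + nsr)       # Wiener deconvolution filter
restored = np.real(np.fft.ifft(np.fft.fft(noisy) * wiener))
```

ForWaRD's advantage, per the abstract, is that it follows this Fourier-domain step with wavelet-domain shrinkage of the residual amplified noise.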

  2. A new scoring function for top-down spectral deconvolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kou, Qiang; Wu, Si; Liu, Xiaowen

    2014-12-18

    Background: Top-down mass spectrometry plays an important role in intact protein identification and characterization. Top-down mass spectra are more complex than bottom-up mass spectra because they often contain many isotopomer envelopes from highly charged ions, which may overlap with one another. As a result, spectral deconvolution, which converts a complex top-down mass spectrum into a monoisotopic mass list, is a key step in top-down spectral interpretation. Results: In this paper, we propose a new scoring function, L-score, for evaluating isotopomer envelopes. By combining L-score with MS-Deconv, a new software tool, MS-Deconv+, was developed for top-down spectral deconvolution. Experimental results showed that MS-Deconv+ outperformed existing software tools in top-down spectral deconvolution. Conclusions: L-score shows high discriminative ability in identification of isotopomer envelopes. Using L-score, MS-Deconv+ reports many correct monoisotopic masses missed by other software tools, which are valuable for proteoform identification and characterization.

  3. Bayesian Deconvolution for Angular Super-Resolution in Forward-Looking Scanning Radar

    PubMed Central

    Zha, Yuebo; Huang, Yulin; Sun, Zhichao; Wang, Yue; Yang, Jianyu

    2015-01-01

    Scanning radar is of notable importance for ground surveillance, terrain mapping and disaster rescue. However, the angular resolution of a scanning radar image is poor compared to the achievable range resolution. This paper presents a deconvolution algorithm for angular super-resolution in scanning radar based on Bayesian theory, which states that the angular super-resolution can be realized by solving the corresponding deconvolution problem with the maximum a posteriori (MAP) criterion. The algorithm considers that the noise is composed of two mutually independent parts, i.e., a Gaussian signal-independent component and a Poisson signal-dependent component. In addition, the Laplace distribution is used to represent the prior information about the targets under the assumption that the radar image of interest can be represented by the dominant scatters in the scene. Experimental results demonstrate that the proposed deconvolution algorithm has higher precision for angular super-resolution compared with the conventional algorithms, such as the Tikhonov regularization algorithm, the Wiener filter and the Richardson–Lucy algorithm. PMID:25806871
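    The Richardson–Lucy algorithm used here as a conventional baseline is a multiplicative update that preserves non-negativity. A compact 1-D sketch (kernel, iteration count and test signal are invented):

```python
import numpy as np

def richardson_lucy(observed, kernel, n_iter=50):
    kernel = kernel / kernel.sum()
    mirror = kernel[::-1]                          # flipped kernel for the correction step
    estimate = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        predicted = np.convolve(estimate, kernel, mode="same")
        ratio = observed / np.maximum(predicted, 1e-12)
        estimate = estimate * np.convolve(ratio, mirror, mode="same")
    return estimate

kernel = np.exp(-0.5 * ((np.arange(21) - 10) / 2.0) ** 2)  # assumed antenna pattern
truth = np.zeros(128)
truth[[40, 46]] = [1.0, 0.6]                       # two close point scatterers
observed = np.convolve(truth, kernel / kernel.sum(), mode="same")
estimate = richardson_lucy(observed, kernel)
```

The MAP approach in the abstract differs by building the Gaussian-plus-Poisson noise model and a Laplace (sparsity-promoting) prior into the update, rather than assuming pure Poisson noise as RL does.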

  4. Towards robust deconvolution of low-dose perfusion CT: Sparse perfusion deconvolution using online dictionary learning

    PubMed Central

    Fang, Ruogu; Chen, Tsuhan; Sanelli, Pina C.

    2014-01-01

    Computed tomography perfusion (CTP) is an important functional imaging modality in the evaluation of cerebrovascular diseases, particularly in acute stroke and vasospasm. However, the post-processed parametric maps of blood flow tend to be noisy, especially in low-dose CTP, due to the noisy contrast enhancement profile and the oscillatory nature of the results generated by the current computational methods. In this paper, we propose a robust sparse perfusion deconvolution method (SPD) to estimate cerebral blood flow in CTP performed at low radiation dose. We first build a dictionary from high-dose perfusion maps using online dictionary learning and then perform deconvolution-based hemodynamic parameter estimation on the low-dose CTP data. Our method is validated on clinical data of patients with normal and pathological CBF maps. The results show that our method achieves superior performance compared with existing methods and can potentially improve the differentiation between normal and ischemic tissue in the brain. PMID:23542422

  5. Waveform LiDAR processing: comparison of classic approaches and optimized Gold deconvolution to characterize vegetation structure and terrain elevation

    NASA Astrophysics Data System (ADS)

    Zhou, T.; Popescu, S. C.; Krause, K.

    2016-12-01

    Waveform Light Detection and Ranging (LiDAR) data have advantages over discrete-return LiDAR data in accurately characterizing vegetation structure. However, we lack a comprehensive understanding of waveform data processing approaches under different topography and vegetation conditions. The objective of this paper is to highlight a novel deconvolution algorithm, the Gold algorithm, for processing waveform LiDAR data with optimal deconvolution parameters. Further, we present a comparative study of waveform processing methods to provide insight into selecting an approach for a given combination of vegetation and terrain characteristics. We employed two waveform processing methods: 1) direct decomposition, 2) deconvolution and decomposition. In method two, we utilized two deconvolution algorithms - the Richardson–Lucy (RL) algorithm and the Gold algorithm. The comprehensive and quantitative comparisons were conducted in terms of the number of detected echoes, position accuracy, the bias of the end products (such as the digital terrain model (DTM) and canopy height model (CHM)) from discrete LiDAR data, along with parameter uncertainty for these end products obtained from different methods. This study was conducted at three study sites that include diverse ecological regions, vegetation and elevation gradients. Results demonstrate that both deconvolution algorithms are sensitive to the pre-processing steps of the input data. The deconvolution and decomposition method is more capable of detecting hidden echoes with a lower false echo detection rate, especially for the Gold algorithm. Compared to the reference data, all approaches generate satisfactory accuracy assessment results with small mean spatial differences (<1.22 m for DTMs, <0.77 m for CHMs) and root mean square error (RMSE) (<1.26 m for DTMs, <1.93 m for CHMs).
    More specifically, the Gold algorithm is superior to the others, with a smaller RMSE (<1.01 m), while the direct decomposition approach works better in terms of the percentage of spatial difference within 0.5 and 1 m. The parameter uncertainty analysis demonstrates that the Gold algorithm outperforms the other approaches in dense vegetation areas, with the smallest RMSE, while the RL algorithm performs better in sparse vegetation areas.
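    The Gold algorithm highlighted above is, in its common form (e.g. as implemented in ROOT's TSpectrum), a multiplicative iteration on the normal equations that keeps the estimate non-negative. A toy 1-D sketch with an assumed Gaussian system response and iteration count:

```python
import numpy as np

def gold_deconvolution(y, A, n_iter=500):
    AtA = A.T @ A
    Aty = A.T @ y
    x = np.full(y.size, y.mean())                  # strictly positive start
    for _ in range(n_iter):
        x = x * Aty / np.maximum(AtA @ x, 1e-12)   # multiplicative, stays non-negative
    return x

n = 80
t = np.arange(n)
A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 2.0) ** 2)
A /= A.sum(axis=0)                                 # column-normalised system response
x_true = np.zeros(n)
x_true[[30, 38]] = [1.0, 0.5]                      # two hidden echoes in the waveform
y = A @ x_true
x_hat = gold_deconvolution(y, A)
```

The non-negativity of the update is what lets hidden echoes emerge as separate spikes rather than being cancelled by ringing, which matches the lower false-echo rate the abstract reports.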

  6. Processing of single channel air and water gun data for imaging an impact structure at the Chesapeake Bay

    USGS Publications Warehouse

    Lee, Myung W.

    1999-01-01

    Processing of 20 seismic profiles acquired in the Chesapeake Bay area aided in analysis of the details of an impact structure and allowed more accurate mapping of the depression caused by a bolide impact. Particular emphasis was placed on enhancement of seismic reflections from the basement. Application of wavelet deconvolution after a second zero-crossing predictive deconvolution improved the resolution of shallow reflections, and application of a match filter enhanced the basement reflections. The use of deconvolution and match filtering with a two-dimensional signal enhancement technique (F-X filtering) significantly improved the interpretability of seismic sections.

  7. Handling of computational in vitro/in vivo correlation problems by Microsoft Excel: III. Convolution and deconvolution.

    PubMed

    Langenbucher, Frieder

    2003-11-01

    Convolution and deconvolution are the classical in-vitro-in-vivo correlation tools to describe the relationship between input and weighting/response in a linear system, where input represents the drug release in vitro and weighting/response any body response in vivo. While functional treatment, e.g. in terms of a polyexponential or Weibull distribution, is more appropriate for general surveys or prediction, numerical algorithms are useful for treating actual experimental data. Deconvolution is not considered an algorithm on its own, but the inversion of a corresponding convolution. MS Excel is shown to be a useful tool for all these applications.
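    Numerically, the convolution step reduces to a lower-triangular (causal) matrix product, and deconvolution to inverting that same system, which is the sense in which deconvolution is "the inversion of a corresponding convolution". A small sketch with invented release and weighting curves:

```python
import numpy as np
from scipy.linalg import toeplitz

n = 12
u = np.diff(1.0 - np.exp(-0.4 * np.arange(n + 1)))  # in vitro release per interval (invented)
w = 0.6 ** np.arange(n)                             # unit-impulse weighting (invented)
W = toeplitz(w, np.zeros(n))                        # causal (lower-triangular) convolution matrix
r = W @ u                                           # convolution: predicted in vivo response
u_back = np.linalg.solve(W, r)                      # deconvolution: invert the same system
```

On noiseless data the inversion recovers the input exactly; with real data the triangular solve amplifies noise, which is why the article treats deconvolution with more care than convolution.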

  8. Accounting for optical errors in microtensiometry.

    PubMed

    Hinton, Zachary R; Alvarez, Nicolas J

    2018-09-15

    Drop shape analysis (DSA) techniques measure interfacial tension subject to error in image analysis and the optical system. While considerable efforts have been made to minimize image analysis errors, very little work has treated optical errors. There are two main sources of error when considering the optical system: the angle of misalignment and the choice of focal plane. Due to the convoluted nature of these sources, small angles of misalignment can lead to large errors in measured curvature. We demonstrate using microtensiometry the contributions of these sources to measured errors in radius and, more importantly, deconvolute the effects of misalignment and focal plane. Our findings are expected to have broad implications on all optical techniques measuring interfacial curvature. A geometric model is developed to analytically determine the contributions of misalignment angle and choice of focal plane on measurement error for spherical cap interfaces. This work utilizes a microtensiometer to validate the geometric model and to quantify the effect of both sources of error. For the case of a microtensiometer, an empirical calibration is demonstrated that corrects for optical errors and drastically simplifies implementation. The combination of geometric modeling and experimental results reveals a convoluted relationship between the true and measured interfacial radius as a function of the misalignment angle and choice of focal plane. The validated geometric model produces a full operating window that is strongly dependent on the capillary radius and spherical cap height. In all cases, the contribution of optical errors is minimized when the height of the spherical cap is equivalent to the capillary radius, i.e. a hemispherical interface. The understanding of these errors allows for correct measurement of interfacial curvature and interfacial tension regardless of experimental setup.
    For the case of microtensiometry, this greatly decreases the time for experimental setup and increases experimental accuracy. In a broad sense, this work outlines the importance of optical errors in all DSA techniques. More specifically, these results have important implications for all microscale and microfluidic measurements of interface curvature. Copyright © 2018 Elsevier Inc. All rights reserved.

  9. High quality image-pair-based deblurring method using edge mask and improved residual deconvolution

    NASA Astrophysics Data System (ADS)

    Cui, Guangmang; Zhao, Jufeng; Gao, Xiumin; Feng, Huajun; Chen, Yueting

    2017-04-01

    Image deconvolution is a challenging task in the field of image processing. Using an image pair can yield a better restored image than deblurring from a single blurred image. In this paper, a high quality image-pair-based deblurring method is presented using an improved Richardson-Lucy (RL) algorithm and a gain-controlled residual deconvolution technique. The input image pair consists of a non-blurred noisy image and a blurred image captured of the same scene. With the estimated blur kernel, an improved RL deblurring method based on an edge mask is introduced to obtain a preliminary deblurring result with effective ringing suppression and detail preservation. The preliminary deblurring result then serves as the base latent image, and gain-controlled residual deconvolution is utilized to recover the residual image. A saliency weight map is computed as the gain map to further suppress ringing effects around edge areas during residual deconvolution. The final deblurring result is obtained by adding the recovered residual image to the preliminary deblurring result. An optical experimental vibration platform is set up to verify the applicability and performance of the proposed algorithm. Experimental results demonstrate that the proposed deblurring framework achieves superior performance in both subjective and objective assessments and has wide application in many image deblurring fields.
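
    The improved RL variant described above builds on the standard Richardson-Lucy iteration. As a point of reference, here is a minimal 1-D sketch of the plain RL update (the paper's edge-mask and gain-controlled extensions are not included, and the spike-train scene is synthetic):

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=50):
    """Basic Richardson-Lucy iteration (1-D): multiplicative updates that
    preserve non-negativity and approximately conserve total intensity."""
    psf_mirror = psf[::-1]
    estimate = np.full_like(blurred, blurred.mean())
    for _ in range(n_iter):
        conv = np.convolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(conv, 1e-12)   # avoid division by zero
        estimate *= np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy demo: blur a spike train with a Gaussian PSF, then restore it.
x = np.zeros(64); x[20] = 1.0; x[40] = 0.5
t = np.arange(-8, 9)
psf = np.exp(-t**2 / 8.0); psf /= psf.sum()
y = np.convolve(x, psf, mode="same")
restored = richardson_lucy(y, psf, n_iter=200)
```

    On noiseless data the iteration progressively re-concentrates the blurred flux back into the original spikes; in practice the iteration count trades sharpness against noise amplification, which is what motivates the paper's ringing-suppression additions.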

  10. Windprofiler optimization using digital deconvolution procedures

    NASA Astrophysics Data System (ADS)

    Hocking, W. K.; Hocking, A.; Hocking, D. G.; Garbanzo-Salas, M.

    2014-10-01

    Digital improvements to data acquisition procedures used for windprofiler radars have the potential to improve the height coverage at optimum resolution and to permit improved height resolution. A few newer systems already use this capability. Real-time deconvolution procedures offer even further optimization, and this has not been effectively employed in recent years. In this paper we demonstrate the advantages of combining these features, with particular emphasis on the advantages of real-time deconvolution. Using several multi-core CPUs, we have been able to achieve speeds of up to 40 GHz from a standard commercial motherboard, allowing data to be digitized and processed without the need for any specialized hardware except for a transmitter (and associated drivers), a receiver and a digitizer. No Digital Signal Processor chips are needed, allowing great flexibility with analysis algorithms. By using deconvolution procedures, we have been able not only to optimize height resolution but also to make advances in dealing with spectral contaminants like ground echoes and other near-zero-Hz spectral contamination. Our results also demonstrate the ability to produce fine-resolution measurements, revealing small-scale structures within the backscattered echoes that were previously not possible to see. Resolutions of 30 m are possible for VHF radars. Furthermore, our deconvolution technique allows the removal of range-aliasing effects in real time, a major bonus in many instances. Results are shown using new radars in Canada and Costa Rica.
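
    The kind of range deconvolution described here amounts, in its simplest form, to a stabilized spectral division of the received series by the transmitted pulse. A minimal sketch (the regularization constant and the boxcar pulse are illustrative assumptions, not the authors' configuration):

```python
import numpy as np

def deconvolve_spectral(received, pulse, eps=1e-3):
    """Frequency-domain deconvolution of a transmitted pulse from the
    received series, stabilized by diagonal loading of the denominator."""
    n = len(received)
    R = np.fft.rfft(received, n)
    P = np.fft.rfft(pulse, n)
    ps = np.abs(P) ** 2
    # Regularized division: P* R / (|P|^2 + eps * max|P|^2)
    return np.fft.irfft(np.conj(P) * R / (ps + eps * ps.max()), n)

# Toy demo: two scatterers smeared by an 8-gate boxcar pulse.
pulse = np.ones(8)
profile = np.zeros(128); profile[30] = 1.0; profile[60] = 0.7
rx = np.convolve(profile, pulse)[:128]
est = deconvolve_spectral(rx, pulse)
```

    The loading term keeps the division finite at the spectral nulls of the pulse; tightening or loosening it trades resolution against noise amplification.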

  11. Modelling the CO emission in southern Bok globules

    NASA Astrophysics Data System (ADS)

    Cecchi-Pestellini, Cesare; Casu, Silvia; Scappini, Flavio

    2001-10-01

    The analysis of the sample of southern globules investigated by Scappini et al. in the CO (4-3) transition has been extended using a statistical equilibrium-radiative transfer model, making use of the results of Bourke et al. and Henning & Launardt for those globules that are common to these samples. CO column densities and excitation temperatures have been calculated, and the results compared with a chemical model representative of the chemistry of a spherical dark cloud. In a number of cases the gas kinetic temperatures have been constrained.

  12. Cross-correlating 2D and 3D galaxy surveys

    DOE PAGES

    Passaglia, Samuel; Manzotti, Alessandro; Dodelson, Scott

    2017-06-08

    Galaxy surveys probe both structure formation and the expansion rate, making them promising avenues for understanding the dark universe. Photometric surveys accurately map the 2D distribution of galaxy positions and shapes in a given redshift range, while spectroscopic surveys provide sparser 3D maps of the galaxy distribution. We present a way to analyse overlapping 2D and 3D maps jointly and without loss of information. We represent 3D maps using spherical Fourier-Bessel (sFB) modes, which preserve radial coverage while accounting for the spherical sky geometry, and we decompose 2D maps in a spherical harmonic basis. In these bases, a simple expression exists for the cross-correlation of the two fields. One very powerful application is the ability to simultaneously constrain the redshift distribution of the photometric sample, the sample biases, and cosmological parameters. We use our framework to show that combined analysis of DESI and LSST can improve cosmological constraints by factors of ∼1.2 to ∼1.8 on the region where they overlap relative to identically sized disjoint regions. We also show that in the overlap of DES and SDSS-III in Stripe 82, cross-correlating improves photo-z parameter constraints by factors of ∼2 to ∼12 over internal photo-z reconstructions.

  14. Constraining Atmospheric Particle Size in Gale Crater Using REMS UV Measurements and Mastcam Observations at 440 and 880 nm

    NASA Astrophysics Data System (ADS)

    Mason, E. L.; Lemmon, M. T.; de la Torre-Juárez, M.; Vicente-Retortillo, A.; Martinez, G.

    2015-12-01

    Optical depth measured in Gale crater has been shown to vary seasonally, and this variation is potentially linked to a change in the dust size visible from the surface. The Mast Camera (Mastcam) on the Mars Science Laboratory (MSL) has performed cross-sky brightness surveys similar to those obtained at the Phoenix Lander site. Since particle size can be constrained by observing airborne dust across multiple wavelengths and angles, surveys at 440 and 880 nm can be used to characterize atmospheric dust within and above the crater. In addition, the Rover Environmental Monitoring Station (REMS) on MSL provides the downward radiation flux from 250 nm (UVD) to 340 nm (UVA), which further constrains aerosol properties. The dust, which is not spherical and likely contains irregular particles, can be modeled using randomly oriented triaxial ellipsoids with predetermined microphysical optical properties and fit to the sky survey observations to retrieve an effective radius. This work discusses the constraints on particle size distribution and particle shape in Gale crater obtained from REMS measurements in comparison with Mastcam observations at the specified wavelengths.

  15. Calibration of a polarimetric imaging SAR

    NASA Technical Reports Server (NTRS)

    Sarabandi, K.; Pierce, L. E.; Ulaby, F. T.

    1991-01-01

    Calibration of polarimetric imaging Synthetic Aperture Radars (SARs) using point calibration targets is discussed. The four-port network calibration technique is used to describe the radar error model. The polarimetric ambiguity function of the SAR is then found using a single point target, namely a trihedral corner reflector. Based on this, an estimate for the backscattering coefficient of the terrain is found by a deconvolution process. A radar image taken by the JPL Airborne SAR (AIRSAR) is used for verification of the deconvolution calibration method. The calibrated responses of point targets in the image are compared both with theory and with the POLCAL technique. The responses of a distributed target are also compared using the deconvolution and POLCAL techniques.

  16. EGF Search for Compound Source Time Functions in Microearthquakes

    NASA Astrophysics Data System (ADS)

    Ampuero, J.; Rubin, A. M.

    2003-12-01

    Numerical simulations of stopping ruptures on bimaterial interfaces seem to indicate a pronounced asymmetry in the time it takes to reach the peak Coulomb stress shortly beyond the rupture ends. For the rupture front moving in the direction of slip of the stiffer medium, the timescale is controlled by the arrival of stopping phases from the opposite side of the crack, but for the opposite rupture front this timescale is controlled by the much shorter-duration tensile stress pulse that moves in front of the crack tip as it slows down. This behavior may have implications for rupture complexity on bimaterial interfaces. In addition to observing an asymmetry in aftershock occurrence on the San Andreas fault, Rubin and Gillard (2000) noted that for all 5 of the compound earthquakes they observed in a cluster of 72 events, the second subevent occurred to the NW of the first (that is, near the rupture front moving in the direction of slip of the stiffer medium). They suggested that these 5 ``second events'' were simply examples of ``early aftershocks'', which also occur preferentially to the NW; however, the fact that these 5 earthquakes could not be recognized as compound at stations located to the SE indicates that the second event actually occurred on the timescale of the passage of the dynamic stress waves. Thus, observations of asymmetry in rupture complexity may form an independent dataset, complementary to observations of aftershock asymmetry, for constraining models of rupture on bimaterial interfaces. Microseismicity recorded on dense seismological networks has proved interesting for earthquake physics because the high number of events allows one to gain statistical insight into the observed source properties. However, microearthquakes are usually so small that the range of methods that can be applied to their analysis is limited and of low resolution. To address the questions raised above, we would like to characterize the source time functions (STFs) of a large number of microearthquakes, in particular the statistics of compound events and the possible asymmetry of their spatial distribution. We will show results of the systematic application of empirical Green's function deconvolution methods to a large dataset from the Parkfield HRSN. On the methodological side, the performance and robustness of various deconvolution schemes is tested. These range from trivially stabilized spectral division to projected Landweber and blind deconvolution. Use is also made of the redundancy available in highly clustered seismicity with many similar seismograms. The observations will be interpreted in the light of recent numerical simulations of dynamic rupture on bimaterial interfaces (see abstract of Rubin and Ampuero).
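
    One of the schemes named above, projected Landweber deconvolution, can be sketched compactly: gradient steps on the data misfit followed by projection onto non-negative source time functions. The Gaussian "EGF" and step size below are illustrative, not tied to the Parkfield data:

```python
import numpy as np

def projected_landweber(y, g, n_iter=500):
    """Projected Landweber deconvolution: gradient descent on ||g*x - y||^2
    with projection onto non-negative solutions after every step, a standard
    way to impose positivity on a recovered source time function."""
    n = len(y)
    m = 2 * n                              # zero-padding to avoid wraparound
    G = np.fft.rfft(g, m)
    Y = np.fft.rfft(y, m)
    tau = 0.9 / np.max(np.abs(G)) ** 2     # step size below 2 / ||A||^2
    x = np.zeros(n)
    for _ in range(n_iter):
        R = Y - G * np.fft.rfft(x, m)      # residual in the frequency domain
        grad = np.fft.irfft(np.conj(G) * R, m)[:n]
        x = np.maximum(x + tau * grad, 0.0)  # positivity projection
    return x

# Synthetic target record: two sub-events convolved with a Gaussian "EGF".
g = np.exp(-((np.arange(32) - 10) / 3.0) ** 2); g /= g.sum()
stf = np.zeros(100); stf[20] = 1.0; stf[45] = 0.6
y = np.convolve(stf, g)[:100]
est = projected_landweber(y, g)
```

    The positivity constraint is what distinguishes this from plain spectral division; it suppresses the oscillatory side lobes that an unconstrained inverse produces around each sub-event.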

  17. Modes of mantle convection and the removal of heat from the earth's interior

    NASA Technical Reports Server (NTRS)

    Spohn, T.; Schubert, G.

    1982-01-01

    Thermal histories for two-layer and whole-mantle convection models are calculated and presented, based on a parameterization of convective heat transport. The model is composed of two concentric spherical shells surrounding a spherical core. The models were constrained to yield the observed present-day surface heat flow and mantle viscosity in order to determine parameters, which were then varied to determine their effects on the results. The studies show that whole-mantle convection removes three times more primordial heat from the Earth's interior, and six times more from the core, than does two-layer convection (in 4.5 billion years). Mantle volumetric heat generation rates for both models are comparable to that of a potassium-depleted chondrite, and thus the surface heat-flux balance does not require potassium in the core. The differences between whole-mantle and two-layer convection are primarily due to thermal insulation by the lower mantle and the lower heat-removal efficiency of the upper mantle as compared with that of the whole mantle.

  18. Collective degrees of freedom involved in absorption and desorption of surfactant molecules in spherical non-ionic micelles

    NASA Astrophysics Data System (ADS)

    Ahn, Yong Nam; Mohan, Gunjan; Kopelevich, Dmitry I.

    2012-10-01

    Dynamics of absorption and desorption of a surfactant monomer into and out of a spherical non-ionic micelle is investigated by coarse-grained molecular dynamics (MD) simulations. It is shown that these processes involve a complex interplay between the micellar structure and the monomer configuration. A quantitative model for collective dynamics of these degrees of freedom is developed. This is accomplished by reconstructing a multi-dimensional free energy landscape of the surfactant-micelle system using constrained MD simulations in which the distance between the micellar and monomer centers of mass is held constant. Results of this analysis are verified by direct (unconstrained) MD simulations of surfactant absorption in the micelle. It is demonstrated that the system dynamics is likely to deviate from the minimum energy path on the energy landscape. These deviations create an energy barrier for the monomer absorption and increase an existing barrier for the monomer desorption. A reduced Fokker-Planck equation is proposed to model these effects.

  19. Histogram deconvolution - An aid to automated classifiers

    NASA Technical Reports Server (NTRS)

    Lorre, J. J.

    1983-01-01

    It is shown that N-dimensional histograms are convolved by the addition of noise in the picture domain. Three methods are described which provide the ability to deconvolve such noise-affected histograms. The purpose of the deconvolution is to provide automated classifiers with a higher quality N-dimensional histogram from which to obtain classification statistics.

  20. Study of one- and two-dimensional filtering and deconvolution algorithms for a streaming array computer

    NASA Technical Reports Server (NTRS)

    Ioup, G. E.

    1985-01-01

    Appendix 5 of the Study of One- and Two-Dimensional Filtering and Deconvolution Algorithms for a Streaming Array Computer includes a resume of the professional background of the Principal Investigator on the project, lists of his publications and research papers, graduate theses supervised, and grants received.

  1. New Constraints on the Geometry and Kinematics of Matter Surrounding the Accretion Flow in X-Ray Binaries from Chandra High-energy Transmission Grating X-Ray Spectroscopy

    NASA Astrophysics Data System (ADS)

    Tzanavaris, P.; Yaqoob, T.

    2018-03-01

    The narrow, neutral Fe Kα fluorescence emission line in X-ray binaries (XRBs) is a powerful probe of the geometry, kinematics, and Fe abundance of matter around the accretion flow. In a recent study it has been claimed, using Chandra High-Energy Transmission Grating (HETG) spectra for a sample of XRBs, that the circumnuclear material is consistent with a solar-abundance, uniform, spherical distribution. It was also claimed that the Fe Kα line was unresolved in all cases by the HETG. However, these conclusions were based on ad hoc models that did not attempt to relate the global column density to the Fe Kα line emission. We revisit the sample and test a self-consistent model of a uniform, spherical X-ray reprocessor against HETG spectra from 56 observations of 14 Galactic XRBs. We find that the model is ruled out in 13/14 sources because a variable Fe abundance is required. In two sources a spherical distribution is viable, but with nonsolar Fe abundance. We also applied a solar-abundance Compton-thick reflection model, which can account for the spectra that are inconsistent with a spherical model, but spectra with a broader bandpass are required to better constrain model parameters. We also robustly measured the velocity width of the Fe Kα line and found FWHM values of up to ∼5000 km s⁻¹. Only in some spectra was the Fe Kα line unresolved by the HETG.

  2. The effect of non-sphericity on mass and anisotropy measurements in dSph galaxies with Schwarzschild method

    NASA Astrophysics Data System (ADS)

    Kowalczyk, Klaudia; Łokas, Ewa L.; Valluri, Monica

    2018-05-01

    In our previous work we confirmed the reliability of the spherically symmetric Schwarzschild orbit-superposition method to recover the mass and velocity anisotropy profiles of spherical dwarf galaxies. Here, we investigate the effect of its application to intrinsically non-spherical objects. For this purpose we use a model of a dwarf spheroidal galaxy formed in a numerical simulation of a major merger of two discy dwarfs. The shape of the stellar component of the merger remnant is axisymmetric and prolate, which allows us to identify and measure the bias caused by observing the spheroidal galaxy along different directions, especially the longest and shortest principal axis. The modelling is based on mock data generated from the remnant that are observationally available for dwarfs: projected positions and line-of-sight velocities of the stars. In order to obtain a reliable tool while keeping the number of parameters low, we parametrize the total mass distribution as a radius-dependent mass-to-light ratio with just two free parameters that we aim to constrain. Our study shows that if the total density profile is known, the true, radially increasing anisotropy profile can be well recovered for observations along the longest axis, whereas data along the shortest axis lead to the inference of an incorrect, isotropic model. On the other hand, if the density profile is also derived from the method, the anisotropy is always underestimated, but the total mass profile is well recovered for data along the shortest axis, whereas for the longest axis the mass content is overestimated.

  3. Turning Around along the Cosmic Web

    NASA Astrophysics Data System (ADS)

    Lee, Jounghun; Yepes, Gustavo

    2016-12-01

    A bound violation designates a case in which the turnaround radius of a bound object exceeds the upper limit imposed by the spherical collapse model based on the standard ΛCDM paradigm. Given that the turnaround radius of a bound object is a stochastic quantity and that the spherical model overly simplifies the true gravitational collapse, which actually proceeds anisotropically along the cosmic web, the rarity of the occurrence of a bound violation may depend on the web environment. Assuming a Planck cosmology, we numerically construct the bound-zone peculiar velocity profiles along the cosmic web (filaments and sheets) around the isolated groups with virial mass M_v ≥ 3 × 10¹³ h⁻¹ M_⊙ identified in the Small MultiDark Planck simulations and determine the radial distances at which their peculiar velocities equal the Hubble expansion speed as the turnaround radii of the groups. It is found that although the average turnaround radii of the isolated groups are well below the spherical bound limit on all mass scales, bound violations are not forbidden for individual groups, and the cosmic web has the effect of reducing the rarity of the occurrence of a bound violation. Explaining that the spherical bound limit on the turnaround radius in fact represents the threshold distance up to which the intervention of the external gravitational field in the bound-zone peculiar velocity profiles around the nonisolated groups stays negligible, we discuss the possibility of using the threshold distance scale to constrain locally the equation of state of dark energy.
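
    The spherical bound limit referred to above is usually written as R_max = (3GM/Λc²)^(1/3), which with Λc² = 3Ω_Λ H₀² reduces to (GM/(Ω_Λ H₀²))^(1/3). A quick numerical sketch for the sample's minimum group mass (the cosmological parameter values below are assumed for illustration, not taken from the paper):

```python
import numpy as np

# Physical constants (SI)
G = 6.674e-11            # m^3 kg^-1 s^-2
Msun = 1.989e30          # kg
Mpc = 3.086e22           # m

# Planck-like parameters (assumed values for illustration)
h, Omega_L = 0.68, 0.69
H0 = h * 100 * 1e3 / Mpc                    # Hubble constant in s^-1

def r_turnaround_max(M_solar):
    """Spherical-collapse upper bound on the turnaround radius in LCDM:
    R_max = (3 G M / (Lambda c^2))^(1/3) = (G M / (Omega_L H0^2))^(1/3)."""
    M = M_solar * Msun
    return (G * M / (Omega_L * H0 ** 2)) ** (1.0 / 3.0) / Mpc   # in Mpc

# Bound limit for the smallest groups in the sample, M_v = 3e13 h^-1 Msun
r_max = r_turnaround_max(3e13 / h)
```

    For these parameters the limit comes out at a few Mpc, which sets the scale against which the measured turnaround radii of individual groups are compared.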

  4. Calculation of the static in-flight telescope-detector response by deconvolution applied to point-spread function for the geostationary earth radiation budget experiment.

    PubMed

    Matthews, Grant

    2004-12-01

    The Geostationary Earth Radiation Budget (GERB) experiment is a broadband satellite radiometer instrument program intended to resolve remaining uncertainties surrounding the effect of cloud radiative feedback on future climate change. By use of a custom-designed diffraction-aberration telescope model, the GERB detector spatial response is recovered by deconvolution applied to the ground calibration point-spread function (PSF) measurements. An ensemble of randomly generated white-noise test scenes, combined with the measured telescope transfer function, significantly reduces the effect of noise on the deconvolution. With the recovered detector response as a base, the same model is applied in construction of the predicted in-flight field-of-view response of each GERB pixel to both short- and long-wave Earth radiance. The results of this study can now be used to simulate and investigate the instantaneous sampling errors incurred by GERB. Also, the developed deconvolution method may be highly applicable in enhancing images or PSF data for any telescope system for which a wave-front error measurement is available.

  5. Point spread functions and deconvolution of ultrasonic images.

    PubMed

    Dalitz, Christoph; Pohle-Fröhlich, Regina; Michalk, Thorsten

    2015-03-01

    This article investigates the restoration of ultrasonic pulse-echo C-scan images by means of deconvolution with a point spread function (PSF). The deconvolution concept from linear system theory (LST) is linked to the wave equation formulation of the imaging process, and an analytic formula for the PSF of planar transducers is derived. For this analytic expression, different numerical and analytic approximation schemes for evaluating the PSF are presented. By comparing simulated images with measured C-scan images, we demonstrate that the assumptions of LST in combination with our formula for the PSF are a good model for the pulse-echo imaging process. To reconstruct the object from a C-scan image, we compare different deconvolution schemes: the Wiener filter, the ForWaRD algorithm, and the Richardson-Lucy algorithm. The best results are obtained with the Richardson-Lucy algorithm with total variation regularization. For distances greater than or equal to twice the near-field distance, our experiments show that the numerically computed PSF can be replaced with a simple closed analytic term based on a far-field approximation.
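
    Of the schemes compared above, the Wiener filter has the simplest closed form, X = H*Y / (|H|² + NSR). A minimal 2-D sketch with a synthetic point reflector (the Gaussian PSF and noise level are illustrative assumptions, not the article's measured transducer PSF):

```python
import numpy as np

def wiener_deconvolve(image, psf, nsr=0.01):
    """Wiener deconvolution in 2-D: X = conj(H) Y / (|H|^2 + NSR), with NSR
    an assumed constant noise-to-signal power ratio."""
    H = np.fft.fft2(psf, image.shape)
    Y = np.fft.fft2(image)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(X))

# Toy C-scan: a point reflector blurred by a Gaussian PSF plus weak noise.
rng = np.random.default_rng(0)
obj = np.zeros((64, 64)); obj[32, 32] = 1.0
yy, xx = np.mgrid[-8:9, -8:9]
psf = np.exp(-(xx**2 + yy**2) / 8.0); psf /= psf.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(psf, obj.shape)))
noisy = blurred + 0.001 * rng.standard_normal(obj.shape)
restored = wiener_deconvolve(noisy, psf)
```

    The NSR term plays the role that total variation regularization plays for Richardson-Lucy in the article: it limits how strongly weak, noise-dominated frequencies are amplified.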

  6. Gaussian and linear deconvolution of LC-MS/MS chromatograms of the eight aminobutyric acid isomers

    PubMed Central

    Vemula, Harika; Kitase, Yukiko; Ayon, Navid J.; Bonewald, Lynda; Gutheil, William G.

    2016-01-01

    Isomeric molecules present a challenge for analytical resolution and quantification, even with MS-based detection. The eight aminobutyric acid (ABA) isomers are of interest for their various biological activities, particularly γ-aminobutyric acid (GABA) and the d- and l-isomers of β-aminoisobutyric acid (β-AIBA; BAIBA). This study aimed to investigate LC-MS/MS-based resolution of these ABA isomers as their Marfey's (Mar) reagent derivatives. HPLC was able to completely separate three Mar-ABA isomers, l-β-ABA (l-BABA) and l- and d-α-ABA (AABA), leaving three isomers (GABA and d/l-BAIBA) in one chromatographic cluster and two isomers (α-AIBA (AAIBA) and d-BABA) in a second cluster. Partially separated cluster components were deconvoluted using Gaussian peak fitting, except for GABA and d-BAIBA. MS/MS detection of the Marfey's-derivatized ABA isomers provided six MS/MS fragments, with substantially different intensity profiles between structural isomers. This allowed linear deconvolution of the ABA isomer peaks. Combining HPLC separation with linear and Gaussian deconvolution allowed resolution of all eight ABA isomers. Application to human serum found a substantial level of l-AABA (13 μM), an intermediate level of l-BAIBA (0.8 μM), and low but detectable levels (<0.2 μM) of GABA, l-BABA, AAIBA, d-BAIBA, and d-AABA. This approach should be useful for LC-MS/MS deconvolution of other challenging groups of isomeric molecules. PMID:27771391
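
    The Gaussian peak fitting used to deconvolute partially separated clusters can be sketched as a nonlinear least-squares fit of overlapping Gaussians. The two-peak model, retention times, and shared width below are synthetic, not the study's chromatograms:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(t, a1, t1, a2, t2, w):
    """Two Gaussian peaks with a shared width, as for co-eluting isomers."""
    return (a1 * np.exp(-((t - t1) / w) ** 2)
            + a2 * np.exp(-((t - t2) / w) ** 2))

# Synthetic cluster: two partially separated peaks plus detector noise.
t = np.linspace(0, 10, 400)
true = two_gaussians(t, 1.0, 4.0, 0.6, 5.2, 0.5)
rng = np.random.default_rng(1)
y = true + 0.01 * rng.standard_normal(t.size)

# Deconvolution = fitting the peak parameters from the overlapped trace.
popt, _ = curve_fit(two_gaussians, t, y, p0=[1.0, 3.8, 0.5, 5.5, 0.6])
a1, t1, a2, t2, w = popt
```

    Once the individual peak areas are recovered this way, each component can be quantified even though the raw chromatogram never fully resolves them.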

  7. Deconvolution of ferredoxin, plastocyanin, and P700 transmittance changes in intact leaves with a new type of kinetic LED array spectrophotometer.

    PubMed

    Klughammer, Christof; Schreiber, Ulrich

    2016-05-01

    A newly developed compact measuring system for assessment of transmittance changes in the near-infrared spectral region is described; it allows deconvolution of redox changes due to ferredoxin (Fd), P700, and plastocyanin (PC) in intact leaves. In addition, it can also simultaneously measure chlorophyll fluorescence. The major opto-electronic components as well as the principles of data acquisition and signal deconvolution are outlined. Four original pulse-modulated dual-wavelength difference signals are measured (785-840 nm, 810-870 nm, 870-970 nm, and 795-970 nm). Deconvolution is based on specific spectral information presented graphically in the form of 'Differential Model Plots' (DMP) of Fd, P700, and PC that are derived empirically from selective changes of these three components under appropriately chosen physiological conditions. Whereas information on maximal changes of Fd is obtained upon illumination after dark-acclimation, maximal changes of P700 and PC can be readily induced by saturating light pulses in the presence of far-red light. Using the information of DMP and maximal changes, the new measuring system enables on-line deconvolution of Fd, P700, and PC. The performance of the new device is demonstrated by some examples of practical applications, including fast measurements of flash relaxation kinetics and of the Fd, P700, and PC changes paralleling the polyphasic fluorescence rise upon application of a 300-ms pulse of saturating light.
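
    Deconvolving the three redox components from the four dual-wavelength difference signals is, at its core, a linear unmixing problem: an overdetermined system solved in the least-squares sense. A minimal sketch with a hypothetical response matrix (the numbers are illustrative, not the instrument's Differential Model Plots):

```python
import numpy as np

# Hypothetical differential responses: each row is one dual-wavelength
# channel, each column the response to a unit redox change of Fd, P700, PC.
# Values are illustrative only.
D = np.array([[0.8, 0.3, 0.1],
              [0.2, 0.9, 0.3],
              [0.1, 0.4, 0.8],
              [0.5, 0.6, 0.6]])

true_redox = np.array([0.4, 1.0, 0.7])      # Fd, P700, PC changes
signals = D @ true_redox                    # the four measured signals

# On-line deconvolution = least-squares inversion of the 4x3 system.
redox, *_ = np.linalg.lstsq(D, signals, rcond=None)
```

    With four channels for three unknowns the system is overdetermined, which is what makes the deconvolution robust to noise in any single channel.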

  8. SU-E-I-08: Investigation of Deconvolution Methods for Blocker-Based CBCT Scatter Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, C; Jin, M; Ouyang, L

    2015-06-15

    Purpose: To investigate whether deconvolution methods can improve scatter estimation under different blurring and noise conditions for blocker-based scatter correction methods for cone-beam X-ray computed tomography (CBCT). Methods: An "ideal" projection image with scatter was first simulated for blocker-based CBCT data acquisition by assuming no blurring effect and no noise. The ideal image was then convolved with long-tail point spread functions (PSFs) of different widths to mimic the blurring effect of the finite focal spot and detector response. Different levels of noise were also added. Three deconvolution methods, (1) inverse filtering, (2) Wiener, and (3) Richardson-Lucy, were used to recover the scatter signal in the blocked region. The root mean square error (RMSE) of the estimated scatter serves as a quantitative measure of the performance of the different methods under different blurring and noise conditions. Results: Due to the blurring effect, the scatter signal in the blocked region is contaminated by the primary signal in the unblocked region. The direct use of the signal in the blocked region to estimate scatter (the "direct method") leads to large RMSE values, which increase with the width of the PSF and the level of noise. Inverse filtering is very sensitive to noise and practically useless. The Wiener and Richardson-Lucy deconvolution methods significantly improve scatter estimation compared to the direct method. For a typical medium-PSF, medium-noise condition, both methods (∼20 RMSE) achieve a 4-fold improvement over the direct method (∼80 RMSE). The Wiener method deals better with large noise and Richardson-Lucy works better on wide PSFs. Conclusion: We investigated several deconvolution methods to recover the scatter signal in the blocked region for blocker-based scatter correction for CBCT. Our simulation results demonstrate that Wiener and Richardson-Lucy deconvolution can significantly improve the scatter estimation compared to the direct method.

  9. Gold - A novel deconvolution algorithm with optimization for waveform LiDAR processing

    NASA Astrophysics Data System (ADS)

    Zhou, Tan; Popescu, Sorin C.; Krause, Keith; Sheridan, Ryan D.; Putman, Eric

    2017-07-01

    Waveform Light Detection and Ranging (LiDAR) data have advantages over discrete-return LiDAR data in accurately characterizing vegetation structure. However, we lack a comprehensive understanding of waveform data processing approaches under different topography and vegetation conditions. The objective of this paper is to highlight a novel deconvolution algorithm, the Gold algorithm, for processing waveform LiDAR data with optimal deconvolution parameters. Further, we present a comparative study of waveform processing methods to provide insight into selecting an approach for a given combination of vegetation and terrain characteristics. We employed two waveform processing methods: (1) direct decomposition, and (2) deconvolution followed by decomposition. In the second method, we utilized two deconvolution algorithms, the Richardson-Lucy (RL) algorithm and the Gold algorithm. Comprehensive, quantitative comparisons were conducted in terms of the number of detected echoes, position accuracy, the bias of the end products (such as the digital terrain model (DTM) and canopy height model (CHM)) from the corresponding reference data, and parameter uncertainty for these end products obtained from the different methods. This study was conducted at three study sites that span diverse ecological regions and vegetation and elevation gradients. Results demonstrate that both deconvolution algorithms are sensitive to the pre-processing steps applied to the input data. The deconvolution and decomposition method is more capable of detecting hidden echoes with a lower false echo detection rate, especially for the Gold algorithm. Compared to the reference data, all approaches generate satisfactory accuracy assessment results with small mean spatial difference (<1.22 m for DTMs, <0.77 m for CHMs) and root mean square error (RMSE) (<1.26 m for DTMs, <1.93 m for CHMs). More specifically, the Gold algorithm is superior to the others with smaller RMSE (<1.01 m), while the direct decomposition approach works better in terms of the percentage of spatial difference within 0.5 and 1 m. The parameter uncertainty analysis demonstrates that the Gold algorithm outperforms the other approaches in dense vegetation areas, with the smallest RMSE, and the RL algorithm performs better in sparse vegetation areas in terms of RMSE. Additionally, higher uncertainty occurs in areas with steep slopes and dense vegetation. This study provides an alternative and innovative approach for waveform processing that will benefit high fidelity processing of waveform LiDAR data to characterize vegetation structures.
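
    For reference, the Gold algorithm named above is a multiplicative, non-negativity-preserving iteration of the form x ← x · (Hᵀy)/(HᵀHx). A 1-D sketch with a synthetic two-echo waveform (the Gaussian system response is illustrative, not the LiDAR instrument's):

```python
import numpy as np

def gold_deconvolution(y, h, n_iter=1000):
    """Gold's ratio iteration x <- x * (H^T y) / (H^T H x): a multiplicative
    deconvolution that keeps the estimate non-negative, sketched here with an
    explicit circulant system matrix for clarity rather than speed."""
    n = len(y)
    h_pad = np.pad(h, (0, n - len(h)))
    A = np.stack([np.roll(h_pad, k) for k in range(n)], axis=1)
    Aty = A.T @ y
    x = np.ones(n)
    for _ in range(n_iter):
        x = x * Aty / np.maximum(A.T @ (A @ x), 1e-12)
    return x

# Synthetic waveform: two echoes convolved with a Gaussian system response.
h = np.exp(-((np.arange(16) - 8) / 2.0) ** 2)
true_profile = np.zeros(100); true_profile[40] = 1.0; true_profile[60] = 0.6
y = np.convolve(true_profile, h)[:100]
est = gold_deconvolution(y, h)
```

    Because every update is a positive ratio, the iteration sharpens overlapping returns without producing the negative ringing of inverse filtering, which is why it suits echo detection in waveforms.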

  10. Improving cell mixture deconvolution by identifying optimal DNA methylation libraries (IDOL).

    PubMed

    Koestler, Devin C; Jones, Meaghan J; Usset, Joseph; Christensen, Brock C; Butler, Rondi A; Kobor, Michael S; Wiencke, John K; Kelsey, Karl T

    2016-03-08

    Confounding due to cellular heterogeneity represents one of the foremost challenges currently facing Epigenome-Wide Association Studies (EWAS). Statistical methods that leverage the tissue-specificity of DNA methylation to deconvolute the cellular mixture of heterogeneous biospecimens offer a promising solution; however, the performance of such methods depends entirely on the library of methylation markers used for deconvolution. Here, we introduce a novel algorithm for Identifying Optimal Libraries (IDOL) that dynamically scans a candidate set of cell-specific methylation markers to find libraries that optimize the accuracy of cell fraction estimates obtained from cell mixture deconvolution. Application of IDOL to a training set consisting of samples with both whole-blood DNA methylation data (Illumina HumanMethylation450 BeadArray (HM450)) and flow cytometry measurements of cell composition revealed an optimized library comprising 300 CpG sites. Compared to existing libraries, the library identified by IDOL demonstrated significantly better overall discrimination of the entire immune cell landscape (p = 0.038) and improved discrimination of 14 out of the 15 pairs of leukocyte subtypes. Estimates of cell composition across the samples in the training set using the IDOL library were highly correlated with their respective flow cytometry measurements, with all cell-specific R^2 > 0.99 and root mean square errors (RMSEs) ranging from 0.97% to 1.33% across leukocyte subtypes. Independent validation of the optimized IDOL library using two additional HM450 data sets showed similarly strong prediction performance, with all cell-specific R^2 > 0.90 and RMSE < 4.00%.
In simulation studies, adjustments for cell composition using the IDOL library resulted in uniformly lower false positive rates compared to competing libraries, while also demonstrating an improved capacity to explain epigenome-wide variation in DNA methylation within two large publicly available HM450 data sets. Despite consisting of half as many CpGs compared to existing libraries for whole blood mixture deconvolution, the optimized IDOL library identified herein resulted in outstanding prediction performance across all considered data sets and demonstrated potential to improve the operating characteristics of EWAS involving adjustments for cell distribution. In addition to providing the EWAS community with an optimized library for whole blood mixture deconvolution, our work establishes a systematic and generalizable framework for the assembly of libraries that improve the accuracy of cell mixture deconvolution.
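The core deconvolution step, projecting a sample's methylation profile onto a library of cell-type reference profiles, can be sketched as follows. This is a simplified least-squares version with clipping and renormalization, not the constrained estimator used in the paper, and the reference matrix and cell fractions are synthetic:

```python
import numpy as np

def estimate_cell_fractions(m, B):
    """Estimate cell-type fractions w from a mixture profile m ~ B @ w.

    m : methylation beta values at the library CpGs for one sample
    B : reference matrix (library CpGs x cell types)
    Negative solutions are clipped and the fractions renormalized to sum to 1.
    """
    w, *_ = np.linalg.lstsq(B, m, rcond=None)
    w = np.clip(w, 0.0, None)
    return w / w.sum()

rng = np.random.default_rng(0)
B = rng.uniform(0.0, 1.0, size=(300, 6))    # 300 library CpGs x 6 leukocyte subtypes
w_true = np.array([0.55, 0.20, 0.10, 0.08, 0.05, 0.02])
m = B @ w_true                              # noise-free synthetic mixture
w_hat = estimate_cell_fractions(m, B)
```

With a well-chosen library the columns of B are well separated, which is exactly what IDOL optimizes: better-discriminating CpGs make this inverse problem better conditioned.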

  11. SU-C-9A-03: Simultaneous Deconvolution and Segmentation for PET Tumor Delineation Using a Variational Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, L; Tan, S; Lu, W

    2014-06-01

    Purpose: To implement a new method that integrates deconvolution with segmentation under a variational framework for PET tumor delineation. Methods: Deconvolution and segmentation are both challenging problems in image processing. The partial volume effect (PVE) blurs tumor boundaries in PET images, which affects the accuracy of tumor segmentation. Deconvolution aims to obtain a PVE-free image, which can help to improve the segmentation accuracy. Conversely, correct localization of the object boundaries helps to estimate the blur kernel, and thus assists the deconvolution. In this study, we proposed to solve the two problems simultaneously using a variational method so that they can benefit each other. The energy functional consists of a fidelity term and a regularization term, and the blur kernel was limited to an isotropic Gaussian kernel. We minimized the energy functional by solving the associated Euler-Lagrange equations and taking the derivative with respect to the parameters of the kernel function. An alternate minimization method was used to iterate between segmentation, deconvolution, and blur-kernel recovery. The performance of the proposed method was tested on clinical PET images of patients with non-Hodgkin's lymphoma and compared with seven other segmentation methods using the dice similarity index (DSI) and volume error (VE). Results: Among all segmentation methods, the proposed one (DSI=0.81, VE=0.05) has the highest accuracy, followed by active contours without edges (DSI=0.81, VE=0.25), while other methods, including Graph Cut and the Mumford-Shah (MS) method, have lower accuracy. A visual inspection shows that the proposed method localizes the real tumor contour very well. Conclusion: The results showed that deconvolution and segmentation can contribute to each other. The proposed variational method solves the two problems simultaneously and leads to high performance for tumor segmentation in PET.
This work was supported in part by National Natural Science Foundation of China (NNSFC), under Grant Nos. 60971112 and 61375018, and Fundamental Research Funds for the Central Universities, under Grant No. 2012QN086. Wei Lu was supported in part by the National Institutes of Health (NIH) Grant No. R01 CA172638.

  12. Spectrophotometric Determination of the Dissociation Constant of an Acid-Base Indicator Using a Mathematical Deconvolution Technique

    ERIC Educational Resources Information Center

    Alter, Krystyn P.; Molloy, John L.; Niemeyer, Emily D.

    2005-01-01

    A laboratory experiment reinforces the concept of acid-base equilibria while introducing a common application of spectrophotometry and can easily be completed within a standard four-hour laboratory period. It provides students with an opportunity to use advanced data analysis techniques like data smoothing and spectral deconvolution to…

  13. Deconvolution of Energy Spectra in the ATIC Experiment

    NASA Technical Reports Server (NTRS)

    Batkov, K. E.; Panov, A. D.; Adams, J. H.; Ahn, H. S.; Bashindzhagyan, G. L.; Chang, J.; Christl, M.; Fazley, A. R.; Ganel, O.; Gunasigha, R. M.; et al.

    2005-01-01

    The Advanced Thin Ionization Calorimeter (ATIC) balloon-borne experiment is designed to measure cosmic-ray elemental spectra from below 100 GeV up to tens of TeV for nuclei from hydrogen to iron. The instrument is composed of a silicon matrix detector followed by a carbon target interleaved with scintillator tracking layers, and a segmented BGO calorimeter composed of 320 individual crystals totalling 18 radiation lengths, used to determine the particle energy. The technique for deconvolution of the energy spectra measured in the thin calorimeter is based on detailed simulations of the response of the ATIC instrument to different cosmic-ray nuclei over a wide energy range. The method of deconvolution is described, and the energy spectrum of carbon obtained with this technique is presented.

  14. Sequential deconvolution from wave-front sensing using bivariate simplex splines

    NASA Astrophysics Data System (ADS)

    Guo, Shiping; Zhang, Rongzhi; Li, Jisheng; Zou, Jianhua; Xu, Rong; Liu, Changhai

    2015-05-01

    Deconvolution from wave-front sensing (DWFS) is an imaging compensation technique for turbulence-degraded images based on simultaneous recording of short-exposure images and wave-front sensor data. This paper employs a multivariate splines method for sequential DWFS: first, a bivariate simplex-spline-based average-slope measurement model is built for the Shack-Hartmann wave-front sensor; next, a well-conditioned least-squares estimator for the spline coefficients is constructed using multiple Shack-Hartmann measurements; the distorted wave-front is then uniquely determined by the estimated spline coefficients; the object image is finally obtained by non-blind deconvolution. Simulated experiments at different turbulence strengths show that our method yields superior image restoration and noise rejection, especially when extracting multidirectional phase derivatives.

  15. SOURCE PULSE ENHANCEMENT BY DECONVOLUTION OF AN EMPIRICAL GREEN'S FUNCTION.

    USGS Publications Warehouse

    Mueller, Charles S.

    1985-01-01

    Observations of the earthquake source-time function are enhanced if path, recording-site, and instrument complexities can be removed from seismograms. Assuming that a small earthquake has a simple source, its seismogram can be treated as an empirical Green's function and deconvolved from the seismogram of a larger and/or more complex earthquake by spectral division. When the deconvolution is well posed, the quotient spectrum represents the apparent source-time function of the larger event. This study shows that with high-quality locally recorded earthquake data it is feasible to Fourier transform the quotient and obtain a useful result in the time domain. In practice, the deconvolution can be stabilized by one of several simple techniques. Application of the method is given.
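Spectral division stabilized with a water-level floor on the denominator is one simple technique of the kind alluded to above. The sketch below uses a synthetic Green's function and a boxcar source-time function, both hypothetical:

```python
import numpy as np

def waterlevel_deconvolve(big, small, level=0.01):
    """Deconvolve an empirical Green's function (small-event record) from a
    larger event by spectral division, stabilized with a water level."""
    n = len(big)
    B = np.fft.rfft(big, n)
    S = np.fft.rfft(small, n)
    denom = np.abs(S) ** 2
    denom = np.maximum(denom, level * denom.max())  # water-level floor
    Q = B * np.conj(S) / denom                      # stabilized quotient spectrum
    return np.fft.irfft(Q, n)                       # apparent source-time function

n = 256
t = np.arange(n)
small = np.exp(-0.5 * ((t - 8) / 2.0) ** 2)         # synthetic Green's function
stf = np.where((t >= 50) & (t < 60), 1.0, 0.0)      # true boxcar source-time function
big = np.fft.irfft(np.fft.rfft(stf) * np.fft.rfft(small), n)   # larger event (circular model)
recovered = waterlevel_deconvolve(big, small, level=1e-6)
```

Where the Green's-function spectrum is strong, the quotient is exact; where it is weak, the floor attenuates the quotient instead of amplifying noise, which is what makes the inverse Fourier transform of the quotient usable in the time domain.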

  16. Deconvolution of time series in the laboratory

    NASA Astrophysics Data System (ADS)

    John, Thomas; Pietschmann, Dirk; Becker, Volker; Wagner, Christian

    2016-10-01

    In this study, we present two practical applications of the deconvolution of time series in Fourier space. First, we reconstruct a filtered input signal of sound cards that has been heavily distorted by a built-in high-pass filter using a software approach. Using deconvolution, we can partially bypass the filter and extend the dynamic frequency range by two orders of magnitude. Second, we construct required input signals for a mechanical shaker in order to obtain arbitrary acceleration waveforms, referred to as feedforward control. For both situations, experimental and theoretical approaches are discussed to determine the system-dependent frequency response. Moreover, for the shaker, we propose a simple feedback loop as an extension to the feedforward control in order to handle nonlinearities of the system.
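The feedforward step, dividing the desired output spectrum by the measured frequency response, can be sketched as follows. The impulse response here is a hypothetical stand-in for a measured sound-card or shaker response, with a circular-convolution system model:

```python
import numpy as np

def feedforward_input(desired, h):
    """Compute the drive signal whose output through a system with impulse
    response h (circular model) equals `desired`, by spectral division."""
    n = len(desired)
    H = np.fft.rfft(h, n)
    return np.fft.irfft(np.fft.rfft(desired) / H, n)

# hypothetical high-pass-like system response (never zero in frequency)
h = np.zeros(256)
h[0], h[1] = 1.0, -0.5
target = np.sin(2 * np.pi * 5 * np.arange(256) / 256)   # desired output waveform
drive = feedforward_input(target, h)
achieved = np.fft.irfft(np.fft.rfft(drive) * np.fft.rfft(h, 256), 256)
```

The division is only well posed where the frequency response is nonzero; in practice, as the abstract notes, a feedback loop on top of this feedforward control absorbs nonlinearities that a fixed frequency response cannot capture.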

  17. Sparse Bayesian Inference of White Matter Fiber Orientations from Compressed Multi-resolution Diffusion MRI

    PubMed Central

    Pisharady, Pramod Kumar; Duarte-Carvajalino, Julio M; Sotiropoulos, Stamatios N; Sapiro, Guillermo; Lenglet, Christophe

    2017-01-01

    The RubiX [1] algorithm combines the high SNR characteristics of low-resolution data with the high spatial specificity of high-resolution data to extract microstructural tissue parameters from diffusion MRI. In this paper, we focus on estimating crossing fiber orientations and introduce sparsity to the RubiX algorithm, making it suitable for reconstruction from compressed (under-sampled) data. We propose a sparse Bayesian algorithm for estimation of fiber orientations and volume fractions from compressed diffusion MRI. The data at high resolution are modeled using a parametric spherical deconvolution approach and represented using a dictionary created with the exponential decay components along different possible directions. Volume fractions of fibers along these orientations define the dictionary weights. The data at low resolution are modeled using a spatial partial volume representation. The proposed dictionary representation and sparsity priors account for the dependence between fiber orientations and the spatial redundancy in the data representation. Our method exploits the sparsity of fiber orientations, thereby facilitating inference from under-sampled data. Experimental results show improved accuracy and decreased uncertainty in fiber orientation estimates. For under-sampled data, the proposed method is also shown to produce more robust estimates of fiber orientations. PMID:28845484

  18. Sparse Bayesian Inference of White Matter Fiber Orientations from Compressed Multi-resolution Diffusion MRI.

    PubMed

    Pisharady, Pramod Kumar; Duarte-Carvajalino, Julio M; Sotiropoulos, Stamatios N; Sapiro, Guillermo; Lenglet, Christophe

    2015-10-01

    The RubiX [1] algorithm combines the high SNR characteristics of low-resolution data with the high spatial specificity of high-resolution data to extract microstructural tissue parameters from diffusion MRI. In this paper, we focus on estimating crossing fiber orientations and introduce sparsity to the RubiX algorithm, making it suitable for reconstruction from compressed (under-sampled) data. We propose a sparse Bayesian algorithm for estimation of fiber orientations and volume fractions from compressed diffusion MRI. The data at high resolution are modeled using a parametric spherical deconvolution approach and represented using a dictionary created with the exponential decay components along different possible directions. Volume fractions of fibers along these orientations define the dictionary weights. The data at low resolution are modeled using a spatial partial volume representation. The proposed dictionary representation and sparsity priors account for the dependence between fiber orientations and the spatial redundancy in the data representation. Our method exploits the sparsity of fiber orientations, thereby facilitating inference from under-sampled data. Experimental results show improved accuracy and decreased uncertainty in fiber orientation estimates. For under-sampled data, the proposed method is also shown to produce more robust estimates of fiber orientations.

  19. Optical aberration correction for simple lenses via sparse representation

    NASA Astrophysics Data System (ADS)

    Cui, Jinlin; Huang, Wei

    2018-04-01

    Simple lenses with spherical surfaces are lightweight, inexpensive, highly flexible, and easy to manufacture. However, they suffer from optical aberrations that limit high-quality photography. In this study, we propose a set of computational photography techniques based on sparse signal representation to remove optical aberrations, thereby allowing the recovery of images captured through a single-lens camera. The primary advantage of the proposed method is that many prior point spread functions calibrated at different depths are successfully used to restore images in a short time; the approach can be applied to nonblind deconvolution methods in general to address the excessive processing time caused by the large number of point spread functions. The optical design software CODE V is used to examine the reliability of the proposed method by simulation. The simulation results reveal that the suggested method outperforms traditional methods, and the performance of a single-lens camera is significantly enhanced both qualitatively and perceptually. In particular, the prior information obtained with CODE V can be used to process real images from a single-lens camera, which provides a convenient and accurate way to obtain the point spread functions of single-lens cameras.

  20. Renormalizable Quantum Field Theories in the Large-N Limit

    NASA Astrophysics Data System (ADS)

    Guruswamy, Sathya

    1995-01-01

    In this thesis, we study two examples of renormalizable quantum field theories in the large-N limit. Chapter one is a general introduction describing physical motivations for studying such theories. In chapter two, we describe the large-N method in field theory and discuss the pioneering work of 't Hooft in large-N two-dimensional Quantum Chromodynamics (QCD). In chapter three we study a spherically symmetric approximation to four-dimensional QCD ('spherical QCD'). We recast spherical QCD into a bilocal (constrained) theory of hadrons which in the large-N limit is equivalent to large-N spherical QCD for all energy scales. The linear approximation to this theory gives an eigenvalue equation which is the analogue of the well-known 't Hooft's integral equation in two dimensions. This eigenvalue equation is a scale invariant one and therefore leads to divergences in the theory. We give a non-perturbative renormalization prescription to cure this and obtain a beta function which shows that large-N spherical QCD is asymptotically free. In chapter four, we review the essentials of conformal field theories in two and higher dimensions, particularly in the context of critical phenomena. In chapter five, we study the O(N) non-linear sigma model on three-dimensional curved spaces in the large-N limit and show that there is a non-trivial ultraviolet stable critical point at which it becomes conformally invariant. We study this model at this critical point on examples of spaces of constant curvature and compute the mass gap in the theory, the free energy density (which turns out to be a universal function of the information contained in the geometry of the manifold) and the two-point correlation functions. The results we get give an indication that this model is an example of a three-dimensional analogue of a rational conformal field theory. A conclusion with a brief summary and remarks follows at the end.

  1. Color normalization of histology slides using graph regularized sparse NMF

    NASA Astrophysics Data System (ADS)

    Sha, Lingdao; Schonfeld, Dan; Sethi, Amit

    2017-03-01

    Computer-based automatic medical image processing and quantification are becoming popular in digital pathology. However, the preparation of histology slides can vary widely due to differences in staining equipment, procedures, and reagents, which can reduce the accuracy of algorithms that analyze their color and texture information. To reduce the unwanted color variations, various supervised and unsupervised color normalization methods have been proposed. Compared with supervised color normalization methods, unsupervised methods have the advantages of time and cost efficiency and universal applicability. Most unsupervised color normalization methods for histology are based on stain separation. Given that stain concentrations cannot be negative and different parts of the tissue absorb different stains, nonnegative matrix factorization (NMF), and in particular its sparse version (SNMF), are good candidates for stain separation. However, most existing unsupervised color normalization methods, such as PCA, ICA, NMF, and SNMF, fail to consider the sparse manifolds that the pixels occupy, which can result in a loss of texture information during color normalization. Manifold learning methods like the Graph Laplacian have proven very effective in interpreting high-dimensional data. In this paper, we propose a novel unsupervised stain separation method called graph regularized sparse nonnegative matrix factorization (GSNMF). By considering the sparse prior on stain concentration together with manifold information from the high-dimensional image data, our method shows better performance in stain color deconvolution than existing unsupervised color deconvolution methods, especially in preserving connected texture information. To utilize the texture information, we construct a nearest-neighbor graph between pixels within a spatial area of an image based on their distances, using a heat kernel in lαβ space.
The representation of a pixel in the stain density space is constrained to follow the feature distances of the pixel to the pixels in the neighborhood graph. Using a color matrix transfer method with the stain concentrations found by our GSNMF method, the color normalization performance was also better than that of existing methods.
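The NMF building block underlying stain separation (without the sparsity term or the graph regularizer that GSNMF adds) can be sketched with standard Lee-Seung multiplicative updates; the 3-channel "image" and the two "stains" below are synthetic:

```python
import numpy as np

def nmf(V, r, n_iter=1000, seed=0, eps=1e-9):
    """Lee-Seung multiplicative updates for V ~ W @ H under the Frobenius loss.
    The updates keep W and H nonnegative, matching the physical constraint
    that stain concentrations cannot be negative."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.uniform(0.1, 1.0, (n, r))
    H = rng.uniform(0.1, 1.0, (r, m))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# synthetic optical-density data: 3 color channels x 1000 pixels, mixing 2 stains
rng = np.random.default_rng(1)
W_true = rng.uniform(0.0, 1.0, (3, 2))      # stain color vectors
H_true = rng.uniform(0.0, 1.0, (2, 1000))   # stain concentrations per pixel
V = W_true @ H_true
W, H = nmf(V, 2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

In the paper's setting, H would be the stain concentration maps fed into the color-matrix transfer step; GSNMF additionally penalizes H against the pixel neighborhood graph.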

  2. Efficient volumetric estimation from plenoptic data

    NASA Astrophysics Data System (ADS)

    Anglin, Paul; Reeves, Stanley J.; Thurow, Brian S.

    2013-03-01

    The commercial release of the Lytro camera, and greater availability of plenoptic imaging systems in general, have given the image processing community cost-effective tools for light-field imaging. While these data are most commonly used to generate planar images at arbitrary focal depths, reconstruction of volumetric fields is also possible. Similarly, deconvolution is a technique that is conventionally used in planar image reconstruction, or deblurring, algorithms. However, when leveraged with the ability of a light-field camera to quickly reproduce multiple focal planes within an imaged volume, deconvolution offers a computationally efficient method of volumetric reconstruction. Related research has shown that light-field imaging systems in conjunction with tomographic reconstruction techniques are also capable of estimating the imaged volume and have been successfully applied to particle image velocimetry (PIV). However, while tomographic volumetric estimation through algorithms such as multiplicative algebraic reconstruction techniques (MART) has proven to be highly accurate, it is computationally intensive. In this paper, the reconstruction problem is shown to be solvable by deconvolution. Deconvolution offers a significant improvement in computational efficiency through the use of fast Fourier transforms (FFTs) compared to other tomographic methods. This work describes a deconvolution algorithm designed to reconstruct a 3-D particle field from simulated plenoptic data. A 3-D extension of existing 2-D FFT-based refocusing techniques is presented to further improve efficiency when computing object focal stacks and system point spread functions (PSF). Reconstruction artifacts are identified; their underlying sources and methods of mitigation are explored where possible, and reconstructions of simulated particle fields are provided.

  3. Detection of increased vasa vasorum in artery walls: improving CT number accuracy using image deconvolution

    NASA Astrophysics Data System (ADS)

    Rajendran, Kishore; Leng, Shuai; Jorgensen, Steven M.; Abdurakhimova, Dilbar; Ritman, Erik L.; McCollough, Cynthia H.

    2017-03-01

    Changes in arterial wall perfusion are an indicator of early atherosclerosis. This is characterized by an increased spatial density of vasa vasorum (VV), the micro-vessels that supply oxygen and nutrients to the arterial wall. Detection of increased VV during contrast-enhanced computed tomography (CT) imaging is limited by contamination from the blooming effect of the contrast-enhanced lumen. We report the application of an image deconvolution technique, using a measured system point-spread function, to CT data obtained from a photon-counting CT system to reduce blooming and improve the CT number accuracy of the arterial wall, which enhances the detection of increased VV. A phantom study was performed to assess the accuracy of the deconvolution technique. A porcine model was created with enhanced VV in one carotid artery; the other carotid artery served as a control. CT images over an energy range of 25-120 keV were reconstructed. CT numbers were measured at multiple locations in the carotid walls and at multiple time points, pre and post contrast injection. The mean CT number in the carotid wall was compared between the left (increased VV) and right (control) carotid arteries. Prior to deconvolution, the mean CT numbers in the left and right carotid walls were similar due to contamination from the blooming effect, limiting the detection of increased VV in the left carotid artery. After deconvolution, the mean CT number difference between the left and right carotid arteries was substantially increased at all time points, enabling detection of the increased VV in the artery wall.

  4. VizieR Online Data Catalog: Spatial deconvolution code (Quintero Noda+, 2015)

    NASA Astrophysics Data System (ADS)

    Quintero Noda, C.; Asensio Ramos, A.; Orozco Suarez, D.; Ruiz Cobo, B.

    2015-05-01

    This deconvolution method follows the scheme presented in Ruiz Cobo & Asensio Ramos (2013A&A...549L...4R). The Stokes parameters are projected onto a few spectral eigenvectors, and the ensuing maps of coefficients are deconvolved using a standard Lucy-Richardson algorithm. This introduces a stabilization because the PCA filtering reduces the amount of noise. (1 data file).

  5. 3D image restoration for confocal microscopy: toward a wavelet deconvolution for the study of complex biological structures

    NASA Astrophysics Data System (ADS)

    Boutet de Monvel, Jacques; Le Calvez, Sophie; Ulfendahl, Mats

    2000-05-01

    Image restoration algorithms provide efficient tools for recovering part of the information lost in the imaging process of a microscope. We describe recent progress in the application of deconvolution to confocal microscopy. The point spread function of a Biorad-MRC1024 confocal microscope was measured under various imaging conditions and used to process 3D-confocal images acquired in an intact preparation of the inner ear developed at Karolinska Institutet. Using these experiments, we investigate the application of denoising methods based on wavelet analysis as a natural regularization of the deconvolution process. Within the Bayesian approach to image restoration, we compare wavelet denoising with the use of a maximum entropy constraint as another natural regularization method. Numerical experiments performed with test images show a clear advantage of the wavelet denoising approach, which makes it possible to 'cool down' the image with respect to the signal while suppressing much of the fine-scale artifacts that appear during deconvolution due to the presence of noise, incomplete knowledge of the point spread function, or undersampling problems. We further describe a natural development of this approach, which consists of performing the Bayesian inference directly in the wavelet domain.

  6. A method to measure the presampling MTF in digital radiography using Wiener deconvolution

    NASA Astrophysics Data System (ADS)

    Zhou, Zhongxing; Zhu, Qingzhen; Gao, Feng; Zhao, Huijuan; Zhang, Lixin; Li, Guohui

    2013-03-01

    We developed a novel method for determining the presampling modulation transfer function (MTF) of digital radiography systems from slanted-edge images based on Wiener deconvolution. The degraded supersampled edge spread function (ESF) was obtained from simulated slanted-edge images with known MTF in the presence of Poisson noise, and its corresponding ideal ESF without degradation was constructed according to its central edge position. To meet the absolute-integrability condition of the Fourier transform, the original ESFs were mirrored to construct a symmetric pattern of ESFs. Then, based on the Wiener deconvolution technique, the supersampled line spread function (LSF) could be acquired from the symmetric pattern of degraded supersampled ESFs given the ideal symmetric ESFs and the system noise. The MTF is then the normalized magnitude of the Fourier transform of the LSF. The determined MTF showed strong agreement with the theoretical true MTF when an appropriate Wiener parameter was chosen. The effects of the Wiener parameter value and the width of the square-like wave peak in the symmetric ESFs are illustrated and discussed. In conclusion, an accurate and simple method to measure the presampling MTF from slanted-edge images was established using the Wiener deconvolution technique.
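The Wiener deconvolution at the heart of this method can be sketched in 1-D; the step edge and Gaussian line spread function below are synthetic stand-ins for the simulated ESF/LSF pair, and the regularization constant k plays the role of the Wiener parameter discussed above:

```python
import numpy as np

def wiener_deconvolve(degraded, kernel, k=1e-3):
    """1-D Wiener deconvolution: X = Y * conj(H) / (|H|^2 + k).
    k regularizes frequencies where the kernel response |H| is small."""
    n = len(degraded)
    Y = np.fft.rfft(degraded, n)
    H = np.fft.rfft(kernel, n)
    X = Y * np.conj(H) / (np.abs(H) ** 2 + k)
    return np.fft.irfft(X, n)

# synthetic edge-spread setup: ideal step edge blurred by a Gaussian LSF
n = 256
x = np.arange(n)
step = np.where(x >= n // 2, 1.0, 0.0)          # ideal ESF
d = np.minimum(x, n - x)                        # LSF centered at index 0 (wrapped)
lsf = np.exp(-0.5 * (d / 2.0) ** 2)
lsf /= lsf.sum()
degraded = np.fft.irfft(np.fft.rfft(step) * np.fft.rfft(lsf), n)
recovered = wiener_deconvolve(degraded, lsf, k=1e-6)
```

Too small a k amplifies noise at frequencies where |H| is weak; too large a k over-smooths the recovered LSF, which is why the abstract emphasizes choosing the parameter appropriately.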

  7. Deconvolution of interferometric data using interior point iterative algorithms

    NASA Astrophysics Data System (ADS)

    Theys, C.; Lantéri, H.; Aime, C.

    2016-09-01

    We address the problem of deconvolution of astronomical images that could be obtained with future large interferometers in space. The presentation is made in two complementary parts. The first part gives an introduction to image deconvolution with linear and nonlinear algorithms. The emphasis is on nonlinear iterative algorithms that enforce the constraints of non-negativity and constant flux. The Richardson-Lucy algorithm appears there as a special case for photon-counting conditions. More generally, the algorithm published recently by Lanteri et al. (2015) is based on scale-invariant divergences without assumptions on the statistical model of the data. The two proposed algorithms are interior-point algorithms, the latter being more efficient in terms of computation speed. These algorithms are applied to the deconvolution of simulated images corresponding to an interferometric system of 16 diluted telescopes in space. Two non-redundant configurations, one arranged around a circle and the other on a hexagonal lattice, are compared for their effectiveness on a simple astronomical object. The comparison is made in the direct and Fourier spaces. Raw "dirty" images have many artifacts due to replicas of the original object. Linear methods cannot remove these replicas, while the iterative methods clearly show their efficacy in these examples.

  8. Single-Ion Deconvolution of Mass Peak Overlaps for Atom Probe Microscopy.

    PubMed

    London, Andrew J; Haley, Daniel; Moody, Michael P

    2017-04-01

    Due to the intrinsic evaporation properties of the material studied, insufficient mass-resolving power and lack of knowledge of the kinetic energy of incident ions, peaks in the atom probe mass-to-charge spectrum can overlap and result in incorrect composition measurements. Contributions to these peak overlaps can be deconvoluted globally, by simply examining adjacent peaks combined with knowledge of natural isotopic abundances. However, this strategy does not account for the fact that the relative contributions to this convoluted signal can often vary significantly in different regions of the analysis volume; e.g., across interfaces and within clusters. Some progress has been made with spatially localized deconvolution in cases where the discrete microstructural regions can be easily identified within the reconstruction, but this means no further point cloud analyses are possible. Hence, we present an ion-by-ion methodology where the identity of each ion, normally obscured by peak overlap, is resolved by examining the isotopic abundance of their immediate surroundings. The resulting peak-deconvoluted data are a point cloud and can be analyzed with any existing tools. We present two detailed case studies and discussion of the limitations of this new technique.

  9. Image deblurring by motion estimation for remote sensing

    NASA Astrophysics Data System (ADS)

    Chen, Yueting; Wu, Jiagu; Xu, Zhihai; Li, Qi; Feng, Huajun

    2010-08-01

    The imaging resolution of remote sensing systems is often limited by image degradation resulting from unwanted motion disturbances of the platform during image exposure. Since the form of the platform vibration can be arbitrary, the lack of a priori knowledge about the motion function (the PSF) suggests blind restoration approaches. This paper proposes a deblurring method that combines motion estimation and image deconvolution for both area-array and TDI remote sensing. The image motion estimation is accomplished by an auxiliary high-speed detector and a sub-pixel correlation algorithm. The PSF is then reconstructed from the estimated image motion vectors. Finally, the clear image is recovered from the blurred image of the prime camera by the Richardson-Lucy (RL) iterative deconvolution algorithm with the constructed PSF. For the area-array detector, the image deconvolution is applied directly, while for the TDICCD detector an integral distortion compensation step and a row-by-row deconvolution scheme are applied. Theoretical analyses and experimental results show that the performance of the proposed approach is convincing: blurred and distorted images can be properly recovered, not only for visual observation but also with a significant increase in objective evaluation metrics.

  10. Comparison of active-set method deconvolution and matched-filtering for derivation of an ultrasound transit time spectrum.

    PubMed

    Wille, M-L; Zapf, M; Ruiter, N V; Gemmeke, H; Langton, C M

    2015-06-21

    The quality of ultrasound computed tomography imaging is primarily determined by the accuracy of ultrasound transit time measurement. A major problem in the analysis is the overlap of signals, which makes it difficult to detect the correct transit time. The current standard is to apply a matched-filtering approach to the input and output signals. This study compares the matched-filtering technique with active-set deconvolution to derive a transit time spectrum from a coded excitation chirp signal and the measured output signal. The ultrasound wave travels in a direct and a reflected path to the receiver, resulting in an overlap in the recorded output signal. The matched-filtering and deconvolution techniques were applied to determine the transit times associated with the two signal paths. Both techniques were able to detect the two different transit times; while matched filtering has better accuracy (standard deviations of 0.13 μs versus 0.18 μs), deconvolution has a 3.5 times improved side-lobe to main-lobe ratio. Higher side-lobe suppression is important to further improve image fidelity. These results suggest that a future combination of both techniques would provide improved signal detection and hence improved image fidelity.
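    The two compared approaches can be sketched on a synthetic two-path signal. This is not the paper's setup: the chirp, delays, and amplitudes are invented, and `scipy.optimize.nnls` (a Lawson-Hanson active-set non-negative least-squares solver) stands in for "active-set deconvolution":

    ```python
    import numpy as np
    from scipy.linalg import toeplitz
    from scipy.optimize import nnls

    # Illustrative chirp excitation and an output built from two overlapping
    # paths: a direct arrival at sample 50 and a reflected one at sample 60.
    n = 200
    t = np.arange(80)
    chirp = np.cos(2 * np.pi * (0.02 + 0.002 * t) * t)
    transit = np.zeros(n)
    transit[50] = 1.0          # direct path
    transit[60] = 0.6          # reflected path
    output = np.convolve(transit, chirp)[:n]

    # Matched filtering: cross-correlate the output with the known excitation.
    mf = np.correlate(output, chirp, mode="full")[len(chirp) - 1:]

    # Active-set deconvolution: solve min ||C x - output|| with x >= 0, where C
    # is the (lower-triangular Toeplitz) convolution matrix of the chirp.
    col = np.zeros(n)
    col[:len(chirp)] = chirp
    C = toeplitz(col, np.zeros(n))
    x, _ = nnls(C, output)
    ```

    Here `x` is the recovered transit time spectrum: on noiseless data the NNLS solution reproduces the two spikes essentially exactly, while the matched-filter output peaks at the same lags but with the chirp's autocorrelation side lobes attached.
    
    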

  11. Chemometric Data Analysis for Deconvolution of Overlapped Ion Mobility Profiles

    NASA Astrophysics Data System (ADS)

    Zekavat, Behrooz; Solouki, Touradj

    2012-11-01

    We present the details of a data analysis approach for deconvolution of ion mobility (IM)-overlapped or unresolved species. This approach takes advantage of the variation of ion fragmentation as a function of the IM arrival time. The data analysis involves the use of an in-house developed data preprocessing platform for the conversion of the original post-IM/collision-induced dissociation mass spectrometry (post-IM/CID MS) data to a MATLAB-compatible format for chemometric analysis. We show that principal component analysis (PCA) can be used to examine the post-IM/CID MS profiles for the presence of mobility-overlapped species. Subsequently, using an interactive self-modeling mixture analysis technique, we show how to calculate the total IM spectrum (TIMS) and CID mass spectrum for each component of the IM-overlapped mixtures. Moreover, we show that PCA and IM deconvolution techniques provide complementary results to evaluate the validity of the calculated TIMS profiles. We use two binary mixtures with overlapping IM profiles, including (1) a mixture of two non-isobaric peptides (neurotensin (RRPYIL) and a hexapeptide (WHWLQL)), and (2) an isobaric sugar isomer mixture of raffinose and maltotriose, to demonstrate the applicability of the IM deconvolution.
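    The PCA screening step can be sketched with synthetic data: a drift-time-by-m/z matrix built from two species whose IM profiles overlap into one apparent peak but whose CID fragment spectra differ. All axes, widths, and spectra below are invented for illustration; PCA is done via SVD of the mean-centered matrix:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    drift = np.linspace(0, 10, 120)       # arrival-time axis (arbitrary units)

    # Two species with overlapping IM profiles but distinct CID fragment spectra.
    im1 = np.exp(-0.5 * ((drift - 4.6) / 0.8) ** 2)
    im2 = np.exp(-0.5 * ((drift - 5.4) / 0.8) ** 2)
    spec1 = rng.random(60)                # fragment spectrum of species 1
    spec2 = rng.random(60)                # fragment spectrum of species 2
    data = (np.outer(im1, spec1) + np.outer(im2, spec2)
            + 1e-3 * rng.random((120, 60)))   # post-IM/CID matrix + small noise

    # PCA via SVD of the mean-centered matrix: two dominant singular values
    # indicate two mobility-overlapped species under one unresolved IM peak.
    centered = data - data.mean(axis=0)
    s = np.linalg.svd(centered, compute_uv=False)
    n_components = int((s > 0.05 * s[0]).sum())
    ```

    A self-modeling mixture analysis would then rotate these abstract components into non-negative IM profiles and fragment spectra; the SVD rank estimate above is only the screening stage.
    
    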

  12. Evaluation of uncertainty for regularized deconvolution: A case study in hydrophone measurements.

    PubMed

    Eichstädt, S; Wilkens, V

    2017-06-01

    An estimation of the measurand in dynamic metrology usually requires a deconvolution based on a dynamic calibration of the measuring system. Since deconvolution is, mathematically speaking, an ill-posed inverse problem, some kind of regularization is required to render the problem stable and obtain usable results. Many approaches to regularized deconvolution exist in the literature, but the corresponding evaluation of measurement uncertainties is, in general, an unsolved issue. In particular, the uncertainty contribution of the regularization itself is a topic of great importance, because it has a significant impact on the estimation result. Here, a versatile approach is proposed to express prior knowledge about the measurand based on a flexible, low-dimensional modeling of an upper bound on the magnitude spectrum of the measurand. This upper bound allows the derivation of an uncertainty associated with the regularization method in line with the guidelines in metrology. As a case study for the proposed method, hydrophone measurements in medical ultrasound with an acoustic working frequency of up to 7.5 MHz are considered, but the approach is applicable to all kinds of estimation methods in dynamic metrology where regularization is required and which can be expressed as a multiplication in the frequency domain.
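    The regularized-deconvolution core that such uncertainty analyses sit on top of can be sketched as a Tikhonov-regularized inverse filter in the frequency domain. This shows only the estimation step, not the paper's upper-bound modeling or uncertainty evaluation; the exponential system response and the value of `lam` are arbitrary illustrative choices:

    ```python
    import numpy as np

    def regularized_deconvolve(measured, impulse_response, lam):
        """Frequency-domain Tikhonov-regularized deconvolution (illustrative)."""
        n = len(measured)
        H = np.fft.rfft(impulse_response, n)
        Y = np.fft.rfft(measured, n)
        X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)   # regularized inverse filter
        return np.fft.irfft(X, n)

    # Toy hydrophone-style example: a short pulse smoothed by the measuring system.
    t = np.linspace(0, 1, 256, endpoint=False)
    true_signal = np.exp(-0.5 * ((t - 0.3) / 0.02) ** 2)
    h = np.exp(-t / 0.05)
    h /= h.sum()                                      # system impulse response
    measured = np.fft.irfft(np.fft.rfft(true_signal) * np.fft.rfft(h), 256)
    estimate = regularized_deconvolve(measured, h, lam=1e-4)
    ```

    The bias introduced by `lam` is exactly the regularization contribution the abstract discusses: larger `lam` suppresses noise amplification but attenuates the measurand's spectrum, which is why that term needs its own uncertainty budget.
    
    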

  13. Designing a stable feedback control system for blind image deconvolution.

    PubMed

    Cheng, Shichao; Liu, Risheng; Fan, Xin; Luo, Zhongxuan

    2018-05-01

    Blind image deconvolution is one of the main low-level vision problems, with wide applications. Many previous works manually design regularization to simultaneously estimate the latent sharp image and the blur kernel under a maximum a posteriori framework. However, it has been demonstrated that such joint estimation strategies may lead to undesired trivial solutions. In this paper, we present a novel perspective, using a stable feedback control system, to simulate the latent sharp image propagation. The controller of our system consists of regularization and guidance, which decide the sparsity and sharp features of the latent image, respectively. Furthermore, the formation model of the blurred image is introduced into the feedback process to prevent the image restoration from deviating from the stable point. The stability analysis of the system indicates that the latent image propagation in the blind deconvolution task can be efficiently estimated and controlled by cues and priors; the kernel estimate used for image restoration thus becomes more precise. Experimental results show that our system is effective for image propagation and performs favorably against state-of-the-art blind image deconvolution methods on different benchmark image sets and on specially blurred images. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. Systematic search for spherical crystal X-ray microscopes matching 1–25 keV spectral line sources

    DOE PAGES

    Schollmeier, Marius S.; Loisel, Guillaume P.

    2016-12-29

    Spherical-crystal microscopes are used as high-resolution imaging devices for monochromatic x-ray radiography or for imaging the source itself. Crystals and Miller indices (hkl) have to be matched such that the resulting lattice spacing d is close to half the spectral wavelength used for imaging, to fulfill the Bragg equation with a Bragg angle near 90° which reduces astigmatism. Only a few suitable crystal and spectral-line combinations have been identified for applications in the literature, suggesting that x-ray imaging using spherical crystals is constrained to a few chance matches. In this paper, after performing a systematic, automated search over more than 9 × 10^6 possible combinations for x-ray energies between 1 and 25 keV, for six crystals with arbitrary Miller-index combinations hkl between 0 and 20, we show that a matching, efficient crystal and spectral-line pair can be found for almost every He α or K α x-ray source for the elements Ne to Sn. Finally, using the data presented here it should be possible to find a suitable imaging combination using an x-ray source that is specifically selected for a particular purpose, instead of relying on the limited number of existing crystal imaging systems that have been identified to date.
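    The matching condition can be sketched for the simplest case of a cubic crystal, where d(hkl) = a/sqrt(h² + k² + l²) and the Bragg condition λ = 2 d sin θ must be met with θ near 90°. This toy search uses the silicon lattice constant, ignores structure-factor extinction rules and reflectivity (which the actual study would have to apply), and enumerates sorted index triples only:

    ```python
    import numpy as np

    A_SI = 5.431  # silicon lattice constant, angstroms (cubic example only)

    def bragg_angle(energy_kev, hkl, a=A_SI):
        """Bragg angle (deg) for a cubic crystal, or None if the reflection
        cannot reach the Bragg condition at this photon energy."""
        lam = 12.398 / energy_kev                 # keV -> angstrom conversion
        d = a / np.sqrt(sum(i * i for i in hkl))  # cubic lattice spacing
        s = lam / (2.0 * d)
        return None if s > 1.0 else float(np.degrees(np.arcsin(s)))

    def best_match(energy_kev, max_index=20, min_angle=80.0):
        """Search sorted Miller-index triples (h >= k >= l) for the reflection
        whose Bragg angle is closest to 90 deg, i.e. nearest normal incidence."""
        best = None
        for h in range(1, max_index + 1):
            for k in range(h + 1):
                for l in range(k + 1):
                    theta = bragg_angle(energy_kev, (h, k, l))
                    if theta is not None and theta >= min_angle:
                        if best is None or theta > best[0]:
                            best = (theta, (h, k, l))
        return best

    # Example: Cu K-alpha at 8.048 keV against silicon.
    match = best_match(8.048)
    ```

    For Cu Kα on Si this picks a reflection with h² + k² + l² = 49, reaching a Bragg angle of about 83°; the real search additionally scores crystal efficiency and covers six crystals rather than one.
    
    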

  15. A musculoskeletal shoulder model based on pseudo-inverse and null-space optimization.

    PubMed

    Terrier, Alexandre; Aeberhard, Martin; Michellod, Yvan; Mullhaupt, Philippe; Gillet, Denis; Farron, Alain; Pioletti, Dominique P

    2010-11-01

    The goal of the present work was to assess the feasibility of using a pseudo-inverse and null-space optimization approach in the modeling of shoulder biomechanics. The method was applied to a simplified musculoskeletal shoulder model. The mechanical system consisted of the arm, and the external forces were the arm weight, 6 scapulo-humeral muscles, and the reaction at the glenohumeral joint, which was considered as a spherical joint. Muscle wrapping around the humeral head, assumed spherical, was considered. The dynamical equations were solved in a Lagrangian approach. The mathematical redundancy of the mechanical system was resolved in two steps: a pseudo-inverse optimization to minimize the square of the muscle stress, and a null-space optimization to restrict the muscle forces to physiological limits. Several movements were simulated. The mathematical and numerical aspects of the constrained redundancy problem were efficiently solved by the proposed method. The predicted muscle moment arms were consistent with cadaveric measurements, and the joint reaction force was consistent with in vivo measurements. This preliminary work demonstrated that the developed algorithm has great potential for more complex musculoskeletal modeling of the shoulder joint. In particular, it could be further applied to a non-spherical joint model, allowing for the natural translation of the humeral head in the glenoid fossa. Copyright © 2010 IPEM. Published by Elsevier Ltd. All rights reserved.
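    The two-step scheme can be sketched with linear algebra alone. The moment-arm matrix, torque vector, and the bound (simple non-negativity of muscle forces) below are hypothetical stand-ins, not the paper's shoulder geometry; the point is that null-space corrections change the force distribution without changing the joint torque:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    # Illustrative moment-arm matrix: 3 joint-torque components, 6 muscles
    # (values are hypothetical, not the paper's shoulder geometry).
    M = rng.uniform(-0.03, 0.03, size=(3, 6))
    tau = np.array([1.2, -0.4, 0.8])        # required joint torque (N m)

    # Step 1: the pseudo-inverse gives the minimum-norm ("minimum muscle
    # stress") force distribution satisfying M @ f = tau.
    pinvM = np.linalg.pinv(M)
    f_min = pinvM @ tau

    # Step 2: null-space optimization: repeatedly clip forces toward the
    # physiological bound (here simply f >= 0), then re-project so the joint
    # torque M @ f is preserved exactly at every iteration.
    N = np.eye(6) - pinvM @ M               # projector onto the null space of M
    f = f_min
    for _ in range(200):
        f = f_min + N @ np.maximum(f, 0.0)  # null-space correction only
    ```

    Because every correction lies in the null space of `M`, the torque balance holds throughout; whether the bounds can be met exactly depends on whether a feasible force distribution exists for the given geometry.
    
    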

  16. Structure of the Sumatra-Andaman subduction zone

    NASA Astrophysics Data System (ADS)

    Pesicek, Jeremy Dale

    We have conducted studies of the Sumatra-Andaman subduction zone using newly available teleseismic data resulting from the aftershock sequences of the 2004, 2005, and 2007 great earthquakes that occurred offshore of the island of Sumatra. In order to better exploit the new data, existing methodologies have been adapted and advanced in several ways to obtain results at a level of precision not previously possible from teleseismic data. Seismic tomography studies of the mantle were conducted using an improved iterative technique that accounts for fine-scale three-dimensional (3-D) velocity variations inside the study region and coarser global velocity variations outside the region. More precise earthquake locations were determined using a double-difference technique that has been extended to teleseismic distances using spherical ray tracing through the nested 3-D regional-global velocity models. Earthquake relocation was included in the iterative tomography scheme and was found to significantly enhance the recovery of slab velocity anomalies. Finally, because crustal structure is poorly constrained by the teleseismic data, 3-D density modeling of the crust was conducted using newly available satellite gravity data and a spherical prism gravity algorithm. The results of these studies better constrain the structure of the Sumatra-Andaman subduction zone, including the geometry of the mantle slab, position of the megathrust, and structural features of the downgoing plate. Tomography results reveal continuous upper mantle slab anomalies with significant variations in dip throughout the region. Broad curvature of the fast anomalies beneath northern Sumatra, similar to curvature of the trench and volcanic arc at the surface, is interpreted as folding of the upper mantle slab. Earthquake relocations show systematic shifts of the hypocenters to the northeast and to shallower depths, each with average changes of 5 km. 
Reduced scatter in the relocations better constrains the megathrust plate boundary and the regions of coseismic slip during the 2004 and 2005 great earthquakes. In addition, the relocations reveal discrete seismic features on the downgoing plate not previously visible in teleseismic catalogs. The new velocity model and earthquake locations provide the most comprehensive view of the deep structure of the Sumatra-Andaman subduction zone yet available.

  17. Multi-limit unsymmetrical MLIBD image restoration algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Cheng, Yiping; Chen, Zai-wang; Bo, Chen

    2012-11-01

    A novel multi-limit unsymmetrical iterative blind deconvolution (MLIBD) algorithm was presented to enhance the performance of adaptive optics image restoration. The algorithm improves the reliability of iterative blind deconvolution by introducing a bandwidth limit into the frequency domain of the point spread function (PSF), and adopts dynamic estimation of the PSF support region to improve convergence speed. The unsymmetrical factor is computed automatically to improve adaptivity. Image deconvolution experiments comparing Richardson-Lucy IBD and MLIBD were performed, and the results indicate that with the MLIBD method the iteration number is reduced by 22.4% and the peak signal-to-noise ratio is improved by 10.18 dB. The MLIBD algorithm also performs well in restoring the FK5-857 adaptive optics image and the double-star adaptive optics image.

  18. Model and algorithm based on accurate realization of dwell time in magnetorheological finishing.

    PubMed

    Song, Ci; Dai, Yifan; Peng, Xiaoqiang

    2010-07-01

    Classically, a dwell-time map is created with a method such as deconvolution or numerical optimization, with the input being a surface error map and an influence function. This dwell-time map is the numerical optimum for minimizing residual form error, but it takes no account of machine dynamics limitations. The map is then reinterpreted as machine speeds and accelerations or decelerations in a separate operation. In this paper we consider combining the two methods in a single optimization through a constrained nonlinear optimization model, which takes both the two-norm of the residual surface error and the dwell-time gradient as the objective function. This enables machine dynamics limitations to be properly considered within the scope of the optimization, reducing both residual surface error and polishing times. Simulations are presented to demonstrate the feasibility of the model, and the velocity map reinterpreted from the dwell time meets the velocity requirements and the acceleration/deceleration limits. The model and algorithm can also be applied to other computer-controlled subaperture methods.
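    A 1-D version of the combined objective can be sketched as a stacked least-squares problem: minimize ||E - A d||² + λ||G d||², where A convolves the dwell time d with the influence function and G is a first-difference operator penalizing the dwell-time gradient (a proxy for machine acceleration limits). The surface error, influence function, and λ below are illustrative, and the paper's constrained nonlinear formulation is reduced here to its quadratic core:

    ```python
    import numpy as np

    n = 100
    x = np.arange(n)
    # Gaussian removal footprint (influence function), illustrative width.
    influence = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)

    # A maps a dwell-time map d to removed material: column j is the footprint
    # centered at position j (truncated at the part edges).
    A = np.zeros((n, n))
    for j in range(n):
        for k, c in enumerate(influence):
            i = j + k - 10
            if 0 <= i < n:
                A[i, j] = c

    E = 1.0 + 0.5 * np.sin(2 * np.pi * x / n)   # surface error map (arbitrary units)
    G = np.diff(np.eye(n), axis=0)              # first-difference (gradient) operator

    # Solve min ||E - A d||^2 + lam * ||G d||^2 via one stacked least squares.
    lam = 0.5
    stacked = np.vstack([A, np.sqrt(lam) * G])
    target = np.concatenate([E, np.zeros(n - 1)])
    d, *_ = np.linalg.lstsq(stacked, target, rcond=None)
    residual = E - A @ d
    ```

    Raising `lam` smooths the dwell-time map (gentler speed changes for the machine) at the cost of a larger residual surface error, which is exactly the trade-off the single-optimization formulation makes explicit.
    
    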

  19. Empirical transfer functions: Application to determination of outermost core velocity structure using SmKS phases

    NASA Astrophysics Data System (ADS)

    Alexandrakis, Catherine; Eaton, David W.

    2007-11-01

    SmKS waves provide good resolution of outer-core velocity structure, but are affected by heterogeneity in the D'' region. We have developed an Empirical Transfer Function (ETF) technique that transforms a reference pulse (here, SmKS) into a target waveform (SKKS) by: (1) time-windowing the respective pulses, (2) applying Wiener deconvolution, and (3) convolving the output with a Gaussian waveform. Common source and path effects are implicitly removed by this process. We combine ETFs from 446 broadband seismograms to produce a global stack, from which S3KS-SKKS differential time can be measured accurately. As a result of stacking, the scatter in our measurements (0.43 s) is much less than the 1.29 s scatter in previous compilations. Although our data do not uniquely constrain outermost core velocities, we show that the fit of most standard models can be improved by perturbing the outermost core velocity. Our best-fitting model is formed using IASP91 with PREM-like velocity at the top of the core.
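    Steps (2) and (3) of the ETF construction can be sketched on synthetic pulses; step (1), time-windowing real seismograms, is omitted. The pulse shapes, delay, noise floor, and Gaussian width below are all illustrative:

    ```python
    import numpy as np

    def empirical_transfer_function(reference, target, noise=1e-3, sigma=2.0):
        """ETF sketch: Wiener-deconvolve target by reference, then convolve
        with a Gaussian to suppress deconvolution ringing (illustrative)."""
        n = len(reference)
        R = np.fft.rfft(reference)
        T = np.fft.rfft(target)
        W = np.conj(R) * T / (np.abs(R) ** 2 + noise)   # Wiener deconvolution
        etf = np.fft.irfft(W, n)
        k = np.exp(-0.5 * (np.arange(n) - n // 2) ** 2 / sigma ** 2)
        k /= k.sum()
        K = np.fft.rfft(np.roll(k, -(n // 2)))          # zero-centered Gaussian
        return np.fft.irfft(np.fft.rfft(etf) * K, n)

    # Toy example: the "target" phase is the reference delayed by 12 samples
    # and scaled by 0.7; common source/path effects divide out in the ETF.
    n = 128
    t = np.arange(n)
    reference = np.exp(-0.5 * ((t - 30) / 3.0) ** 2)
    target = 0.7 * np.roll(reference, 12)
    etf = empirical_transfer_function(reference, target)
    ```

    The ETF peaks at the differential delay (here 12 samples) with area equal to the relative amplitude, which is why stacking many ETFs sharpens a differential-time measurement such as S3KS-SKKS.
    
    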

  20. An Embodied Multi-Sensor Fusion Approach to Visual Motion Estimation Using Unsupervised Deep Networks.

    PubMed

    Shamwell, E Jared; Nothwang, William D; Perlis, Donald

    2018-05-04

    Aimed at improving size, weight, and power (SWaP)-constrained robotic vision-aided state estimation, we describe our unsupervised, deep convolutional-deconvolutional sensor fusion network, Multi-Hypothesis DeepEfference (MHDE). MHDE learns to intelligently combine noisy heterogeneous sensor data to predict several probable hypotheses for the dense, pixel-level correspondence between a source image and an unseen target image. We show how our multi-hypothesis formulation provides increased robustness against dynamic, heteroscedastic sensor and motion noise by computing hypothesis image mappings and predictions at 76-357 Hz depending on the number of hypotheses being generated. MHDE fuses noisy, heterogeneous sensory inputs using two parallel, inter-connected architectural pathways and n (1-20 in this work) multi-hypothesis generating sub-pathways to produce n global correspondence estimates between a source and a target image. We evaluated MHDE on the KITTI Odometry dataset and benchmarked it against the vision-only DeepMatching and Deformable Spatial Pyramids algorithms and were able to demonstrate a significant runtime decrease and a performance increase compared to the next-best performing method.

  1. Protostellar Outflows Mapped with ALMA and Techniques to Include Short Spacings

    NASA Astrophysics Data System (ADS)

    Plunkett, Adele

    2018-01-01

    Protostellar outflows are early signs of star formation, yet in cluster environments - common sites of star formation - their role and interaction with surrounding gas are complicated. Protostellar outflows are interesting and complex because they connect protostars (scales 10s au) to the surrounding gas environment (few pc), and their morphology constrains launching and/or accretion modes. A complete outflow study must use observing methods that recover several orders of magnitude of spatial scales, ideally with sub-arcsecond resolution and mapping over a few parsecs. ALMA provides high-resolution observations of outflows, and in some cases outflows have been mapped in clusters. Combining with observations using the Total Power array is possible, but challenging, and a large single dish telescope providing more overlap in uv space is advantageous. In this presentation I show protostellar outflows observed with ALMA using 12m, 7m, and Total Power arrays. With a new CASA tool TP2VIS we create total power "visibility" data and perform joint imaging and deconvolution of interferometry and single dish data. TP2VIS will ultimately provide synergy between ALMA and AtLAST data.

  2. A feasibility and optimization study to determine cooling time and burnup of advanced test reactor fuels using a nondestructive technique

    NASA Astrophysics Data System (ADS)

    Navarro, Jorge

    The goal of the study presented here is to determine the best available nondestructive technique for collecting validation data and for determining the burnup and cooling time of fuel elements on-site at the Advanced Test Reactor (ATR) canal. The study makes a recommendation on the viability of implementing a permanent fuel scanning system at the ATR canal and leads to the full design of a permanent fuel scan system. The first step was to determine whether useful spectra could be collected from ATR fuel elements at the canal adjacent to the reactor, and with which equipment. Once it was established that useful spectra can be obtained at the ATR canal, the next step was to determine which detector and which configuration were best suited to predicting the burnup and cooling time of fuel elements nondestructively. Three detectors, High Purity Germanium (HPGe), Lanthanum Bromide (LaBr3), and High Pressure Xenon (HPXe), were used in two system configurations, above and below the water pool. The data collected and analyzed were used to create burnup and cooling time calibration prediction curves for ATR fuel. The next stage of the study was to determine which of the three detectors tested was best suited for the permanent system. From the spectra taken and the calibration curves obtained, it was determined that although the HPGe detector yielded better results, a detector that could better withstand the harsh environment of the ATR canal was needed. The in-situ nature of the measurements required a rugged, low-maintenance, easy-to-control fuel scanning system. Based on the ATR canal feasibility measurements and calibration results, it was determined that the LaBr3 detector was the best alternative for in-situ canal measurements; however, to enhance the quality of the spectra collected with this scintillator, a deconvolution method was developed. 
Following the development of the deconvolution method for ATR applications, the technique was tested using one-isotope, multi-isotope, and simulated fuel sources. Burnup calibrations were performed using convoluted and deconvoluted data, and the calibration results showed that burnup prediction improves with deconvolution. The final stage of the deconvolution method development was an irradiation experiment to create a surrogate fuel source for testing the deconvolution method with experimental data. The path forward is a conceptual design of the fuel scan system using the rugged LaBr3 detector in an above-the-water configuration together with deconvolution algorithms.

  3. Successive Over-Relaxation Technique for High-Performance Blind Image Deconvolution

    DTIC Science & Technology

    2015-06-08

    Keywords: deconvolution, space surveillance, Gauss-Seidel iteration. ...sensible approximate solutions to the ill-posed nonlinear inverse problem. These solutions are addressed as fixed points of the iteration, which consists in alternating approximations (AA) for the object and for the PSF, performed with a prescribed number of inner iterative descents from trivial (zero

  4. Constraints on Ceres' Internal Structure and Evolution From Its Shape and Gravity Measured by the Dawn Spacecraft

    NASA Astrophysics Data System (ADS)

    Ermakov, A. I.; Fu, R. R.; Castillo-Rogez, J. C.; Raymond, C. A.; Park, R. S.; Preusker, F.; Russell, C. T.; Smith, D. E.; Zuber, M. T.

    2017-11-01

    Ceres is the largest body in the asteroid belt with a radius of approximately 470 km. In part due to its large mass, Ceres more closely approaches hydrostatic equilibrium than major asteroids. Pre-Dawn mission shape observations of Ceres revealed a shape consistent with a hydrostatic ellipsoid of revolution. The Dawn spacecraft Framing Camera has been imaging Ceres since March 2015, which has led to high-resolution shape models of the dwarf planet, while the gravity field has been globally determined to a spherical harmonic degree 14 (equivalent to a spatial wavelength of 211 km) and locally to 18 (a wavelength of 164 km). We use these shape and gravity models to constrain Ceres' internal structure. We find a negative correlation and admittance between topography and gravity at degree 2 and order 2. Low admittances between spherical harmonic degrees 3 and 16 are well explained by an Airy isostatic compensation mechanism. Different models of isostasy give crustal densities between 1,200 and 1,400 kg/m3, with our preferred model giving a crustal density of 1,287 (+70/-87) kg/m3. The mantle density is constrained to be 2,434 (+5/-8) kg/m3. We compute the isostatic gravity anomaly and find evidence for mascon-like structures in the two biggest basins. The topographic power spectrum of Ceres and its latitude dependence suggest that viscous relaxation occurred at the long wavelengths (>246 km). Our density constraints combined with finite element modeling of viscous relaxation suggest that the rheology and density of the shallow surface are most consistent with a rock, ice, salt and clathrate mixture.

  5. A photometric study of Enceladus

    NASA Technical Reports Server (NTRS)

    Verbiscer, Anne J.; Veverka, Joseph

    1994-01-01

    We have supplemented Voyager imaging data from Enceladus (limited to phase angles of 13 deg-43 deg) with recent Earth-based CCD observations to obtain an improved determination of the Bond albedo, to construct an albedo map of the satellite, and to constrain parameters in Hapke's (1986) photometric equation. A major result is evidence of regional variations in the physical properties of Enceladus' surface. The average global photometric properties are described by single scattering albedo omega(sub 0) average = 0.998 +/- 0.001, macroscopic roughness parameter theta average = 6 +/- 1 deg, and Henyey-Greenstein asymmetry parameter g = -0.399 +/- 0.005. The value of theta average is smaller than the 14 deg found by fitting whole-disk data, which include all terrains on Enceladus. The opposition surge amplitude B(sub 0) = 0.21 +/- 0.07 and regolith compaction parameter h = 0.014 +/- 0.02 are loosely constrained by the scarcity of and uncertainty in near-opposition observations. From the solar phase curve we determine the geometric albedo of Enceladus p(sub v) = 0.99 +/- 0.06 and phase integral q = 0.92 +/- 0.05, which corresponds to a spherical albedo A = p(sub v)q = 0.91 +/- 0.1. Since the spectrum of Enceladus is fairly flat, we can approximate the Bond albedo A(sub B) with the spherical albedo. Our photometric analysis is summarized in terms of an albedo map which generally reproduces the satellite's observed lightcurve and indicates that normal reflectances range from 0.9 on the leading hemisphere to 1.4 on the trailing one. The albedo map also reveals an albedo variation of 15% from longitudes 170 deg to 200 deg, corresponding to the boundary between the leading and trailing hemispheres.

  6. SHERMAN - A shape-based thermophysical model II. Application to 8567 (1996 HW1)

    NASA Astrophysics Data System (ADS)

    Howell, E. S.; Magri, C.; Vervack, R. J.; Nolan, M. C.; Taylor, P. A.; Fernández, Y. R.; Hicks, M. D.; Somers, J. M.; Lawrence, K. J.; Rivkin, A. S.; Marshall, S. E.; Crowell, J. L.

    2018-03-01

    We apply a new shape-based thermophysical model, SHERMAN, to the near-Earth asteroid (NEA) 8567 (1996 HW1) to derive surface properties. We use the detailed shape model of Magri et al. (2011) for this contact binary NEA to analyze spectral observations (2-4.1 microns) obtained at the NASA IRTF on several different dates to find thermal parameters that match all the data. Visible and near-infrared (0.8-2.5 microns) spectral observations are also utilized in a self-consistent way. We find that an average visible albedo of 0.33, a thermal inertia of 70 (SI units), and a surface roughness of 50% closely match the observations. The shape and orientation of the asteroid are very important for constraining the thermal parameters to be consistent with all the observations. Multiple viewing geometries are equally important to achieve a robust solution for small, non-spherical NEAs. We separate the infrared beaming effects of shape, viewing geometry and surface roughness for this asteroid and show how their effects combine. We compare the diameter and albedo that would be derived from the thermal observations assuming a spherical shape with those from the shape-based model. We also discuss how observations from limited viewing geometries compare to the solution from multiple observations. The size that would be derived from the individual observation dates varies by 20% from the best-fit solution, and can be either larger or smaller. If the surface properties are not homogeneous, many solutions are possible, but the average properties derived here are very tightly constrained by the multiple observations, and give important insights into the nature of small NEAs.

  7. Determination of ion mobility collision cross sections for unresolved isomeric mixtures using tandem mass spectrometry and chemometric deconvolution.

    PubMed

    Harper, Brett; Neumann, Elizabeth K; Stow, Sarah M; May, Jody C; McLean, John A; Solouki, Touradj

    2016-10-05

    Ion mobility (IM) is an important analytical technique for determining ion collision cross section (CCS) values in the gas-phase and gaining insight into molecular structures and conformations. However, limited instrument resolving powers for IM may restrict adequate characterization of conformationally similar ions, such as structural isomers, and reduce the accuracy of IM-based CCS calculations. Recently, we introduced an automated technique for extracting "pure" IM and collision-induced dissociation (CID) mass spectra of IM overlapping species using chemometric deconvolution of post-IM/CID mass spectrometry (MS) data [J. Am. Soc. Mass Spectrom., 2014, 25, 1810-1819]. Here we extend those capabilities to demonstrate how extracted IM profiles can be used to calculate accurate CCS values of peptide isomer ions which are not fully resolved by IM. We show that CCS values obtained from deconvoluted IM spectra match with CCS values measured from the individually analyzed corresponding peptides on uniform field IM instrumentation. We introduce an approach that utilizes experimentally determined IM arrival time (AT) "shift factors" to compensate for ion acceleration variations during post-IM/CID and significantly improve the accuracy of the calculated CCS values. Also, we discuss details of this IM deconvolution approach and compare empirical CCS values from traveling wave (TW)IM-MS and drift tube (DT)IM-MS with theoretically calculated CCS values using the projected superposition approximation (PSA). For example, experimentally measured deconvoluted TWIM-MS mean CCS values for doubly-protonated RYGGFM, RMFGYG, MFRYGG, and FRMYGG peptide isomers were 288.8 Å(2), 295.1 Å(2), 296.8 Å(2), and 300.1 Å(2); all four of these CCS values were within 1.5% of independently measured DTIM-MS values. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Image restoration and superresolution as probes of small scale far-IR structure in star forming regions

    NASA Technical Reports Server (NTRS)

    Lester, D. F.; Harvey, P. M.; Joy, M.; Ellis, H. B., Jr.

    1986-01-01

    Far-infrared continuum studies from the Kuiper Airborne Observatory are described that are designed to fully exploit the small-scale spatial information that this facility can provide. This work gives the clearest picture to date of the structure of galactic and extragalactic star forming regions in the far infrared. Work is presently being done with slit scans taken simultaneously at 50 and 100 microns, yielding one-dimensional data. Scans of sources in different directions have been used to obtain some information on two-dimensional structure. Planned work with linear arrays will allow us to generalize our techniques to two-dimensional image restoration. For faint sources, spatial information at the diffraction limit of the telescope is obtained, while for brighter sources, nonlinear deconvolution techniques have allowed us to improve on the diffraction limit by as much as a factor of four. Information on the details of the color temperature distribution is derived as well. This is made possible by the accuracy with which the instrumental point-source profile (PSP) is determined at both wavelengths. While these two PSPs are different, data at different wavelengths can be compared by proper spatial filtering. Considerable effort has been devoted to implementing deconvolution algorithms. Nonlinear deconvolution methods offer the potential of superresolution -- that is, inference of power at spatial frequencies that exceed the diffraction cutoff D/lambda. This potential is made possible by the algorithm's implicit assumption of positivity of the deconvolved data, a universally justifiable constraint for photon processes. We have tested two nonlinear deconvolution algorithms on our data: the Richardson-Lucy (R-L) method and the Maximum Entropy Method (MEM). The limits of image deconvolution techniques for achieving spatial resolution are addressed.

  9. Sheet-scanned dual-axis confocal microscopy using Richardson-Lucy deconvolution.

    PubMed

    Wang, D; Meza, D; Wang, Y; Gao, L; Liu, J T C

    2014-09-15

    We have previously developed a line-scanned dual-axis confocal (LS-DAC) microscope with subcellular resolution suitable for high-frame-rate diagnostic imaging at shallow depths. Due to the loss of confocality along one dimension, the contrast (signal-to-background ratio) of an LS-DAC microscope is degraded compared to that of a point-scanned DAC microscope. However, by using an sCMOS camera for detection, a short oblique light sheet is imaged at each scanned position. Therefore, by scanning the light sheet in only one dimension, a thin 3D volume is imaged. Both sequential two-dimensional deconvolution and three-dimensional deconvolution are performed on the thin image volume to improve the resolution and contrast of one en face confocal image section at the center of the volume, a technique we call sheet-scanned dual-axis confocal (SS-DAC) microscopy.

  10. Computerized glow curve deconvolution of thermoluminescent emission from polyminerals of Jamaica Mexican flower

    NASA Astrophysics Data System (ADS)

    Favalli, A.; Furetta, C.; Zaragoza, E. Cruz; Reyes, A.

    The aim of this work is to study the main thermoluminescence (TL) characteristics of the inorganic polyminerals extracted from dehydrated Jamaica flower or roselle (Hibiscus sabdariffa L.) belonging to the Malvaceae family of Mexican origin. TL emission properties of the polymineral fraction in powder were studied using the initial rise (IR) method. The complex structure and kinetic parameters of the glow curves have been analysed accurately using computerized glow curve deconvolution (CGCD) assuming an exponential distribution of trapping levels. The extension of the IR method to the case of a continuous and exponential distribution of traps is reported, as well as the derivation of the TL glow curve deconvolution functions for a continuous trap distribution. CGCD is performed both for a temperature-independent frequency factor, s, and for the case in which s is a function of temperature.
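    For a single discrete trap, the initial rise method exploits the fact that the low-temperature tail of a glow peak follows I(T) ∝ exp(-E/kT), so the slope of ln I versus 1/T yields the activation energy E. A toy sketch of that fit (synthetic data for a hypothetical 1.0 eV trap; the paper's actual extension to continuous trap distributions is more involved):

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant in eV/K

def initial_rise_energy(temps, intensity):
    """Estimate trap activation energy E (eV) from the low-temperature
    tail of a glow peak, where I(T) ~ exp(-E / kT)."""
    slope, _ = np.polyfit(1.0 / temps, np.log(intensity), 1)
    return -slope * K_B

# Synthetic initial-rise region for a single trap with E = 1.0 eV
E_true = 1.0
T = np.linspace(350.0, 390.0, 40)          # kelvin; low-temperature tail only
I = 1e12 * np.exp(-E_true / (K_B * T))
E_est = initial_rise_energy(T, I)
```

    The fit is valid only over the initial rise, where the trapped-charge population is essentially undepleted; using the full peak would bias the slope.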

  11. Punch stretching process monitoring using acoustic emission signal analysis. II - Application of frequency domain deconvolution

    NASA Technical Reports Server (NTRS)

    Liang, Steven Y.; Dornfeld, David A.; Nickerson, Jackson A.

    1987-01-01

    The coloring effect on the acoustic emission signal due to the frequency response of the data acquisition/processing instrumentation may bias the interpretation of AE signal characteristics. In this paper, a frequency domain deconvolution technique, which involves the identification of the instrumentation transfer functions and multiplication of the AE signal spectrum by the inverse of these system functions, has been carried out. In this way, a change in AE signal characteristics can be better interpreted as the result of a change in the state of the process alone. The punch stretching process was used as an example to demonstrate the application of the technique. Results showed that, through the deconvolution, the frequency characteristics of AE signals generated during stretching become more distinctive and can be used more effectively as tools for process monitoring.
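    The operation described above, dividing the signal spectrum by the instrumentation transfer function, can be sketched as follows (the impulse response, signal, and regularization constant are invented for illustration; real AE instrumentation responses must be identified experimentally, as the paper does):

```python
import numpy as np

def deconvolve_response(signal, impulse_response, eps=1e-3):
    """Remove instrumentation coloring by dividing the signal spectrum by
    the system transfer function, clamping near-zero spectral values to
    avoid blow-up at frequencies the instrument barely transmits."""
    n = len(signal)
    S = np.fft.rfft(signal, n)
    H = np.fft.rfft(impulse_response, n)
    H_reg = np.where(np.abs(H) < eps, eps, H)   # simple regularization
    return np.fft.irfft(S / H_reg, n)

# Toy check: color a source with a known response, then deconvolve it
rng = np.random.default_rng(0)
source = rng.standard_normal(256)
h = np.zeros(256)
h[0], h[1], h[2] = 1.0, 0.5, 0.25              # hypothetical system response
colored = np.fft.irfft(np.fft.rfft(source) * np.fft.rfft(h), 256)
recovered = deconvolve_response(colored, h)
```

    With a well-conditioned response like this one the clamp never triggers and recovery is exact; for a real transfer function with deep spectral nulls, `eps` controls the trade-off between restoration and noise amplification.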

  12. Improving the Ability of Image Sensors to Detect Faint Stars and Moving Objects Using Image Deconvolution Techniques

    PubMed Central

    Fors, Octavi; Núñez, Jorge; Otazu, Xavier; Prades, Albert; Cardinal, Robert D.

    2010-01-01

    In this paper we show how the techniques of image deconvolution can increase the ability of image sensors, such as CCD imagers, to detect faint stars or faint orbital objects (small satellites and space debris). In the case of faint stars, we show that this benefit is equivalent to doubling the quantum efficiency of the image sensor, or to increasing the effective telescope aperture by more than 30%, without decreasing the astrometric precision or introducing artificial bias. In the case of orbital objects, the deconvolution technique can double the signal-to-noise ratio of the image, which helps to discover and track dangerous objects such as space debris or lost satellites. The benefits obtained using CCD detectors can be extrapolated to any kind of image sensor. PMID:22294896

  14. Regression-assisted deconvolution.

    PubMed

    McIntyre, Julie; Stefanski, Leonard A

    2011-06-30

    We present a semi-parametric deconvolution estimator for the density function of a random variable X that is measured with error, a common challenge in many epidemiological studies. Traditional deconvolution estimators rely only on assumptions about the distribution of X and the error in its measurement, and ignore information available in auxiliary variables. Our method assumes the availability of a covariate vector statistically related to X by a mean-variance function regression model, where regression errors are normally distributed and independent of the measurement errors. Simulations suggest that the estimator achieves a much lower integrated squared error than the observed-data kernel density estimator when models are correctly specified and the assumption of normal regression errors is met. We illustrate the method using anthropometric measurements of newborns to estimate the density function of newborn length. Copyright © 2011 John Wiley & Sons, Ltd.

  15. Subsurface failure in spherical bodies. A formation scenario for linear troughs on Vesta’s surface

    DOE PAGES

    Stickle, Angela M.; Schultz, P. H.; Crawford, D. A.

    2014-10-13

    Many asteroids in the Solar System exhibit unusual, linear features on their surface. The Dawn mission recently observed two sets of linear features on the surface of the asteroid 4 Vesta. Geologic observations indicate that these features are related to the two large impact basins at the south pole of Vesta, though no specific mechanism of origin has been determined. Furthermore, the orientation of the features is offset from the center of the basins. Experimental and numerical results reveal that the offset angle is a natural consequence of oblique impacts into a spherical target. We demonstrate that a set of shear planes develops in the subsurface of the body opposite to the point of first contact. Moreover, these subsurface failure zones then propagate to the surface under combined tensile-shear stress fields after the impact to create sets of approximately linear faults on the surface. Comparison between the orientation of damage structures in the laboratory and failure regions within Vesta can be used to constrain impact parameters (e.g., the approximate impact point and likely impact trajectory).

  16. XDGMM: eXtreme Deconvolution Gaussian Mixture Modeling

    NASA Astrophysics Data System (ADS)

    Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.

    2017-08-01

    XDGMM uses Gaussian mixtures to perform density estimation of noisy, heterogeneous, and incomplete data using extreme deconvolution (XD) algorithms, and is compatible with scikit-learn machine learning methods. It implements both the astroML and Bovy et al. (2011) algorithms, and extends the BaseEstimator class from scikit-learn so that cross-validation methods work. It allows the user to produce a conditioned model if the values of some parameters are known.

  17. Two-Dimensional Signal Processing and Storage and Theory and Applications of Electromagnetic Measurements.

    DTIC Science & Technology

    1983-06-01

    system, provides a convenient, low-noise, fully parallel method of improving contrast and enhancing structural detail in an image prior to input to a...directed towards problems in deconvolution, reconstruction from projections, bandlimited extrapolation, and shift varying deblurring of images...deconvolution algorithm has been studied with promising results [1] for simulated motion blurs. Future work will focus on noise effects and the extension

  18. Chemometric Deconvolution of Continuous Electrokinetic Injection Micellar Electrokinetic Chromatography Data for the Quantitation of Trinitrotoluene in Mixtures of Other Nitroaromatic Compounds

    DTIC Science & Technology

    2014-02-24

    Suite 600 Washington, DC 20036 NRL/MR/6110--14-9521 Approved for public release; distribution is unlimited. Science & Engineering Apprenticeship...Naval Research Laboratory Washington, DC 20375-5320 NRL/MR/6110--14-9521 Chemometric Deconvolution of Continuous Electrokinetic Injection Micellar... Engineering Apprenticeship Program American Society for Engineering Education Washington, DC Kevin Johnson Navy Technology Center for Safety and

  19. Enhanced Seismic Imaging of Turbidite Deposits in Chicontepec Basin, Mexico

    NASA Astrophysics Data System (ADS)

    Chavez-Perez, S.; Vargas-Meleza, L.

    2007-05-01

    We test, as postprocessing tools, a combination of migration deconvolution and geometric attributes to address the complex problems of reflector resolution and detection in migrated seismic volumes. Migration deconvolution has been empirically shown to be an effective approach for enhancing the illumination of migrated images, which are blurred versions of the subsurface reflectivity distribution, by decreasing imaging artifacts, improving spatial resolution, and alleviating acquisition footprint problems. We utilize migration deconvolution as a means to improve the quality and resolution of 3D prestack time migrated results from the Chicontepec basin, Mexico, a very relevant portion of the producing onshore sector of Pemex, the Mexican petroleum company. The seismic data cover the Agua Fria, Coapechaca, and Tajin fields, and exhibit acquisition footprint problems, migration artifacts and a severe lack of resolution in the target area, where turbidite deposits need to be characterized between major erosional surfaces. Vertical resolution is about 35 m and the main hydrocarbon plays are turbidite beds no more than 60 m thick. We also employ geometric attributes (e.g., coherent energy and curvature), computed after migration deconvolution, to detect and map out depositional features, and to help design development wells in the area. Results of this workflow show imaging enhancement and allow us to identify meandering channels and individual sand bodies, previously indistinguishable in the original migrated seismic images.

  20. Dependence of quantitative accuracy of CT perfusion imaging on system parameters

    NASA Astrophysics Data System (ADS)

    Li, Ke; Chen, Guang-Hong

    2017-03-01

    Deconvolution is a popular method to calculate parametric perfusion parameters from four-dimensional CT perfusion (CTP) source images. During the deconvolution process, the four-dimensional space is squeezed into three-dimensional space by removing the temporal dimension, and prior knowledge is often used to suppress noise associated with the process. These additional complexities confound the understanding of deconvolution-based CTP imaging systems and of how their quantitative accuracy depends on the parameters and sub-operations involved in the image formation process. Meanwhile, there has been a strong clinical need for answering this question, as physicians often rely heavily on the quantitative values of perfusion parameters to make diagnostic decisions, particularly during an emergent clinical situation (e.g. diagnosis of acute ischemic stroke). The purpose of this work was to develop a theoretical framework that quantitatively relates the quantification accuracy of parametric perfusion parameters to CTP acquisition and post-processing parameters. This goal was achieved with the help of a cascaded systems analysis for deconvolution-based CTP imaging systems. Based on the cascaded systems analysis, the quantitative relationship between regularization strength, source image noise, arterial input function, and the quantification accuracy of perfusion parameters was established. The theory could potentially be used to guide developments of CTP imaging technology for better quantification accuracy and lower radiation dose.

  1. Data Dependent Peak Model Based Spectrum Deconvolution for Analysis of High Resolution LC-MS Data

    PubMed Central

    2015-01-01

    A data dependent peak model (DDPM) based spectrum deconvolution method was developed for analysis of high resolution LC-MS data. To construct the selected ion chromatogram (XIC), a clustering method, density based spatial clustering of applications with noise (DBSCAN), is applied to all m/z values of an LC-MS data set to group the m/z values into XICs. DBSCAN constructs XICs without the need for a user-defined m/z variation window. After the XIC construction, the peaks of molecular ions in each XIC are detected using both the first and the second derivative tests, followed by an optimized chromatographic peak model selection method for peak deconvolution. A total of six chromatographic peak models are considered, including Gaussian, log-normal, Poisson, gamma, exponentially modified Gaussian, and a hybrid of exponential and Gaussian models. The abundant nonoverlapping peaks are chosen to find the optimal peak models, which are both data- and retention-time-dependent. Analysis of 18 spiked-in LC-MS datasets demonstrates that the proposed DDPM spectrum deconvolution method outperforms the traditional method. On average, the DDPM approach not only detected 58 more chromatographic peaks from each of the test LC-MS datasets but also improved retention time and peak area estimates by 3% and 6%, respectively. PMID:24533635
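    The first step above, grouping raw m/z readings into XICs, can be sketched with a deliberately simplified 1-D stand-in for DBSCAN (fixed `eps` tolerance and `min_samples` noise rule; the paper's density-based formulation avoids the user-defined window, and all readings below are invented):

```python
import numpy as np

def group_mz(mz_values, eps=0.01, min_samples=3):
    """Greedy 1-D clustering of sorted m/z values: readings closer than
    `eps` join the current group; groups smaller than `min_samples` are
    discarded as noise (mimicking DBSCAN's core-point rule)."""
    mz = np.sort(np.asarray(mz_values, dtype=float))
    groups, current = [], [mz[0]]
    for v in mz[1:]:
        if v - current[-1] <= eps:
            current.append(v)
        else:
            if len(current) >= min_samples:
                groups.append(current)
            current = [v]
    if len(current) >= min_samples:
        groups.append(current)
    return groups

# Two hypothetical ion traces plus one isolated noise reading
readings = [200.001, 200.003, 200.002, 350.500, 350.502, 350.501, 421.9]
xics = group_mz(readings, eps=0.01, min_samples=3)
```

    Each surviving group would then be turned into an XIC by summing intensities per scan, after which the peak detection and model selection steps operate.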

  2. Motion correction of PET brain images through deconvolution: I. Theoretical development and analysis in software simulations

    NASA Astrophysics Data System (ADS)

    Faber, T. L.; Raghunath, N.; Tudorascu, D.; Votaw, J. R.

    2009-02-01

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. Existing correction methods that use known patient motion obtained from tracking devices either require multi-frame acquisitions, detailed knowledge of the scanner, or specialized reconstruction algorithms. A deconvolution algorithm has been developed that alleviates these drawbacks by using the reconstructed image to estimate the original non-blurred image with maximum likelihood expectation maximization (MLEM) techniques. A high-resolution digital phantom was created by shape-based interpolation of the digital Hoffman brain phantom. Three different sets of 20 movements were applied to the phantom. For each frame of the motion, sinograms with attenuation and three levels of noise were simulated and then reconstructed using filtered backprojection. The average of the 20 frames was considered the motion-blurred image, which was restored with the deconvolution algorithm. After correction, contrast increased from a mean of 2.0, 1.8 and 1.4 in the motion-blurred images, for the three increasing amounts of movement, to a mean of 2.5, 2.4 and 2.2. Mean error was reduced by an average of 55% with motion correction. In conclusion, deconvolution can be used for correction of motion blur when subject motion is known.
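    The premise that makes a single deconvolution sufficient is that averaging frames shifted by known motion equals convolving the true image with a kernel built from the motion trajectory. A 1-D numerical check of that identity (image, shifts, and sizes are all invented; circular shifts stand in for rigid motion):

```python
import numpy as np

# Averaging shifted frames == convolving with the trajectory histogram.
rng = np.random.default_rng(1)
image = rng.random(128)
shifts = [0, 1, 1, 2, 3, 3, 3, 4]          # hypothetical per-frame motion (pixels)

# Motion-blurred image: mean of circularly shifted frames
blurred = np.mean([np.roll(image, s) for s in shifts], axis=0)

# Equivalent blur kernel: normalized histogram of the shifts
kernel = np.zeros(128)
for s in shifts:
    kernel[s] += 1.0 / len(shifts)
blurred_via_kernel = np.fft.irfft(np.fft.rfft(image) * np.fft.rfft(kernel), 128)
```

    Since the two constructions agree, deblurring reduces to deconvolving `blurred` by `kernel`, which is what the MLEM iteration in the paper does in 3-D.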

  3. Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng

    2017-01-01

    Most previous regularization methods for solving the inverse problem of force reconstruction minimize the l2-norm of the desired force. However, these traditional regularization methods, such as Tikhonov regularization and truncated singular value decomposition, commonly fail to solve the large-scale ill-posed inverse problem at moderate computational cost. In this paper, taking into account the sparse character of impact forces, the idea of sparse deconvolution is first introduced to the field of impact force reconstruction and a general sparse deconvolution model of impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve such a large-scale sparse deconvolution model, where minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, the preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments, including small- or medium-scale single impact force reconstruction and relatively large-scale consecutive impact force reconstruction, are conducted on a composite wind turbine blade and a shell structure to illustrate the advantage of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate and robust in both single and consecutive impact force reconstruction.
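    The l1-regularized model itself can be illustrated with a much simpler solver than the paper's PDIPM: ISTA (iterative soft thresholding) minimizes the same objective, though far less efficiently at scale. A toy sketch with two invented impacts and an invented decaying pulse response:

```python
import numpy as np

def ista_deconvolve(y, h, lam=0.1, iterations=500):
    """Sparse deconvolution min 0.5*||h*x - y||^2 + lam*||x||_1 via ISTA
    (a simple stand-in for the paper's primal-dual interior point method,
    minimizing the same l1-regularized model; circular convolution)."""
    n = len(y)
    H = np.fft.rfft(h, n)
    step = 1.0 / np.max(np.abs(H)) ** 2       # 1 / Lipschitz constant
    x = np.zeros(n)
    for _ in range(iterations):
        Hx = np.fft.irfft(np.fft.rfft(x) * H, n)
        grad = np.fft.irfft(np.fft.rfft(Hx - y) * np.conj(H), n)
        x = x - step * grad
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # soft threshold
    return x

# Two impacts convolved with a hypothetical decaying pulse response
n = 128
truth = np.zeros(n); truth[20], truth[70] = 1.0, 0.7
h = np.zeros(n); h[:5] = np.exp(-np.arange(5) / 2.0)
y = np.fft.irfft(np.fft.rfft(truth) * np.fft.rfft(h), n)
x_hat = ista_deconvolve(y, h, lam=0.01, iterations=2000)
```

    The soft-threshold step is what drives most coefficients exactly to zero, recovering the sparse impact history that an l2 penalty would smear out.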

  4. Partitioning of nitroxides in dispersed systems investigated by ultrafiltration, EPR and NMR spectroscopy.

    PubMed

    Krudopp, Heimke; Sönnichsen, Frank D; Steffen-Heins, Anja

    2015-08-15

    The partitioning behavior of paramagnetic nitroxides in dispersed systems can be determined by deconvolution of electron paramagnetic resonance (EPR) spectra, giving results equivalent to those of the validated methods of ultrafiltration (UF) and pulsed-field gradient nuclear magnetic resonance spectroscopy (PFG-NMR). The partitioning behavior of nitroxides with increasing lipophilicity was investigated in anionic, cationic and nonionic micellar systems and 10 wt% o/w emulsions. Apart from EPR spectra deconvolution, PFG-NMR was used in micellar solutions as a non-destructive approach, while UF is based on separation of a very small volume of the aqueous phase. As a function of their substituent and lipophilicity, the proportions of nitroxides solubilized in the micellar or emulsion interface increased with increasing nitroxide lipophilicity for all emulsifiers used. Comparing the different approaches, EPR deconvolution and UF revealed comparable nitroxide proportions solubilized in the interfaces. Those proportions were higher than found with PFG-NMR. For the PFG-NMR self-diffusion experiments the reduced nitroxides were used, revealing high dynamics of the hydroxylamines and emulsifiers. Deconvolution of EPR spectra turned out to be the preferred method for measuring the partitioning behavior of paramagnetic molecules, as it enables distinguishing between several populations at their individual solubilization sites. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Extraction of near-surface properties for a lossy layered medium using the propagator matrix

    USGS Publications Warehouse

    Mehta, K.; Snieder, R.; Graizer, V.

    2007-01-01

    Near-surface properties play an important role in advancing earthquake hazard assessment. Other areas where near-surface properties are crucial include civil engineering and the detection and delineation of potable groundwater. From an exploration point of view, near-surface properties are needed for wavefield separation and for correcting for the local near-receiver structure. It has been shown that these properties can be estimated for a lossless homogeneous medium using the propagator matrix. To estimate the near-surface properties, we apply deconvolution to passive borehole recordings of waves excited by an earthquake. Deconvolution of these incoherent waveforms recorded by the sensors at different depths in the borehole with the recording at the surface results in waves that propagate upwards and downwards along the array. These waves, obtained by deconvolution, can be used to estimate the P- and S-wave velocities near the surface. As opposed to waves obtained by cross-correlation, which represent a filtered version of the sum of the causal and acausal Green's functions between the two receivers, the waves obtained by deconvolution represent the elements of the propagator matrix. Finally, we show analytically the extension of the propagator matrix analysis to a lossy layered medium for the special case of normal incidence. © 2007 The Authors. Journal compilation © 2007 RAS.
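    The deconvolution of a borehole trace by the surface trace can be sketched with a water-level-stabilized spectral division; the lag of the resulting pulse gives the travel time between sensors, and hence a velocity. Everything below (traces, depth, sampling rate, water level) is an invented toy setup, not the paper's data:

```python
import numpy as np

def trace_deconvolve(d, s, water=1e-2):
    """Deconvolve borehole recording `d` by surface recording `s` using a
    water-level stabilized spectral division."""
    D, S = np.fft.rfft(d), np.fft.rfft(s)
    denom = np.maximum(np.abs(S) ** 2, water * np.max(np.abs(S)) ** 2)
    return np.fft.irfft(D * np.conj(S) / denom, len(d))

# Hypothetical setup: a sensor 200 m below the surface records the same
# incoherent wavefield delayed by 0.1 s (100 Hz sampling -> 10 samples)
rng = np.random.default_rng(2)
surface = rng.standard_normal(512)
borehole = np.roll(surface, 10)
dt, depth = 0.01, 200.0
g = trace_deconvolve(borehole, surface)
lag = int(np.argmax(np.abs(g)))            # travel time in samples
velocity = depth / (lag * dt)              # apparent interval velocity, m/s
```

    Even though each individual trace is incoherent noise, the deconvolved trace collapses to a sharp pulse at the inter-sensor travel time, which is the property the propagator-matrix interpretation rests on.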

  6. Convex blind image deconvolution with inverse filtering

    NASA Astrophysics Data System (ADS)

    Lv, Xiao-Guang; Li, Fang; Zeng, Tieyong

    2018-03-01

    Blind image deconvolution is the process of estimating both the original image and the blur kernel from the degraded image with only partial or no information about degradation and the imaging system. It is a bilinear ill-posed inverse problem corresponding to the direct problem of convolution. Regularization methods are used to handle the ill-posedness of blind deconvolution and get meaningful solutions. In this paper, we investigate a convex regularized inverse filtering method for blind deconvolution of images. We assume that the support region of the blur object is known, as has been done in a few existing works. By studying the inverse filters of signal and image restoration problems, we observe the oscillation structure of the inverse filters. Inspired by the oscillation structure of the inverse filters, we propose to use the star norm to regularize the inverse filter. Meanwhile, we use the total variation to regularize the resulting image obtained by convolving the inverse filter with the degraded image. The proposed minimization model is shown to be convex. We employ the first-order primal-dual method for the solution of the proposed minimization model. Numerical examples for blind image restoration are given to show that the proposed method outperforms some existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual quality and time consumption.

  7. Model-free quantification of dynamic PET data using nonparametric deconvolution

    PubMed Central

    Zanderigo, Francesca; Parsey, Ramin V; Todd Ogden, R

    2015-01-01

    Dynamic positron emission tomography (PET) data are usually quantified using compartment models (CMs) or derived graphical approaches. Often, however, CMs either do not properly describe the tracer kinetics, or are not identifiable, leading to nonphysiologic estimates of the tracer binding. The PET data are modeled as the convolution of the metabolite-corrected input function and the tracer impulse response function (IRF) in the tissue. Using nonparametric deconvolution methods, it is possible to obtain model-free estimates of the IRF, from which functionals related to tracer volume of distribution and binding may be computed, but this approach has rarely been applied in PET. Here, we apply nonparametric deconvolution using singular value decomposition to simulated and test–retest clinical PET data with four reversible tracers well characterized by CMs ([11C]CUMI-101, [11C]DASB, [11C]PE2I, and [11C]WAY-100635), and systematically compare reproducibility, reliability, and identifiability of various IRF-derived functionals with that of traditional CMs outcomes. Results show that nonparametric deconvolution, completely free of any model assumptions, allows for estimates of tracer volume of distribution and binding that are very close to the estimates obtained with CMs and, in some cases, show better test–retest performance than CMs outcomes. PMID:25873427
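    The convolution model above can be inverted with SVD by writing the tissue curve as a lower-triangular convolution matrix acting on the IRF and truncating small singular values. A toy sketch on synthetic curves (the input function, IRF, sampling, and cutoff below are invented, not real tracer data):

```python
import numpy as np

def svd_deconvolve(tac, aif, dt, rel_cutoff=1e-3):
    """Model-free IRF estimate: tac = dt * (A @ irf), where A is the
    lower-triangular convolution matrix of the input function; singular
    values below rel_cutoff * s_max are truncated to stabilize the inverse."""
    n = len(tac)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, : i + 1] = aif[i::-1]         # A[i, j] = aif[i - j]
    A *= dt
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > rel_cutoff * s[0], 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ tac))

# Synthetic data: mono-exponential IRF convolved with an exponential input
dt = 1.0
t = np.arange(60) * dt
aif = np.exp(-t / 4.0)                     # hypothetical input function
irf_true = 0.8 * np.exp(-0.05 * t)
tac = dt * np.convolve(aif, irf_true)[:60]
irf_est = svd_deconvolve(tac, aif, dt)
```

    Functionals such as the volume of distribution then follow from the estimated IRF (e.g. as its integral), without any compartment-model assumption.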

  8. High-resolution speckle masking interferometry and radiative transfer modeling of the oxygen-rich AGB star AFGL 2290

    NASA Astrophysics Data System (ADS)

    Gauger, A.; Balega, Y. Y.; Irrgang, P.; Osterbart, R.; Weigelt, G.

    1999-06-01

    We present the first diffraction-limited speckle masking observations of the oxygen-rich AGB star AFGL 2290. The speckle interferograms were recorded with the Russian 6 m SAO telescope. At the wavelength 2.11 microns a resolution of 75 milli-arcsec (mas) was obtained. The reconstructed diffraction-limited image reveals that the circumstellar dust shell (CDS) of AFGL 2290 is at least slightly non-spherical. The visibility function shows that the stellar contribution to the total 2.11 microns flux is less than ~ 40%, indicating a rather large optical depth of the circumstellar dust shell. The 2-dimensional Gaussian visibility fit yields a diameter of AFGL 2290 at 2.11 microns of 43 mas × 51 mas, which corresponds to a diameter of 42 AU × 50 AU for an adopted distance of 0.98 kpc. Our new observational results provide additional constraints on the CDS of AFGL 2290, which supplement the information from the spectral energy distribution (SED). To determine the structure and the properties of the CDS we have performed radiative transfer calculations for spherically symmetric dust shell models. The observed SED approximately at phase 0.2 can be well reproduced at all wavelengths by a model with T_eff = 2000 K, a dust temperature of 800 K at the inner boundary r1, an optical depth tau_V = 100 and a radius for the single-sized grains of a_gr = 0.1 microns. However, the 2.11 microns visibility of the model does not match the observation. Exploring the parameter space, we found that grain size is the key parameter in achieving a fit of the observed visibility while retaining the match of the SED, at least partially. Both the slope and the curvature of the visibility strongly constrain the possible grain radii. On the other hand, the SED at longer wavelengths, the silicate feature in particular, determines the dust mass loss rate and, thereby, restricts the possible optical depths of the model.
With a larger grain size of 0.16 microns and a higher tau_V = 150, the observed visibility can be reproduced while preserving the match of the SED at longer wavelengths. Nevertheless, the model shows a deficiency of flux at short wavelengths, which is attributed to the model assumption of a spherically symmetric dust distribution, whereas the actual structure of the CDS around AFGL 2290 is in fact non-spherical. Our study demonstrates the possible limitations of dust shell models constrained solely by the spectral energy distribution, and emphasizes the importance of high spatial resolution observations for the determination of the structure and the properties of circumstellar dust shells around evolved stars. Based on data collected at the 6 m telescope of the Special Astrophysical Observatory in Russia.

  9. Variations in the fine-structure constant constraining gravity theories

    NASA Astrophysics Data System (ADS)

    Bezerra, V. B.; Cunha, M. S.; Muniz, C. R.; Tahim, M. O.; Vieira, H. S.

    2016-08-01

    In this paper, we investigate how the fine-structure constant, α, locally varies in the presence of a static and spherically symmetric gravitational source. The procedure consists of calculating the solution and the energy eigenvalues of a massive scalar field around that source, considering the weak-field regime. From this result, we obtain expressions for a spatially variable fine-structure constant by considering suitable modifications of the involved parameters admitting some scenarios of semi-classical and quantum gravity. Constraints on the free parameters of the approached theories are calculated from astrophysical observations of the emission spectra of a white dwarf. Such constraints are finally compared with those obtained in the literature.

  10. Spatially resolved spectrophotometry of Comet P/Stephan-Oterma

    NASA Technical Reports Server (NTRS)

    Cochran, A. L.; Barker, E. S.

    1985-01-01

    Observations of Comet P/Stephan-Oterma were made with an Intensified Dissector Scanner spectrograph on the McDonald Observatory 2.7-m telescope during the period from July 1980 to February 1981. These spectra cover a range of heliocentric distances from 2.3 AU preperihelion to 1.8 AU postperihelion. A small aperture was used to map the spatial distributions of the gases in the coma. Column densities of the observed cometary emissions (CN, C3, CH, and C2) were calculated, and it is shown that Stephan-Oterma appeared nearly spherically symmetric. These data are used by Cochran (1985) to constrain chemical models of Stephan-Oterma.

  11. Image restoration using aberration taken by a Hartmann wavefront sensor on extended object, towards real-time deconvolution

    NASA Astrophysics Data System (ADS)

    Darudi, Ahmad; Bakhshi, Hadi; Asgari, Reza

    2015-05-01

    In this paper we present the results of image restoration using data taken by a Hartmann sensor. The aberration is measured by a Hartmann sensor in which the object itself is used as the reference. The Point Spread Function (PSF) is then simulated and used for image reconstruction with the Lucy-Richardson technique. A technique is also presented for quantitative evaluation of the Lucy-Richardson deconvolution.

  12. Novel Image Quality Control Systems(Add-On). Innovative Computational Methods for Inverse Problems in Optical and SAR Imaging

    DTIC Science & Technology

    2007-02-28

    Iterative Ultrasonic Signal and Image Deconvolution for Estimation of the Complex Medium Response, International Journal of Imaging Systems and...1767-1782, 2006. 31. Z. Mu, R. Plemmons, and P. Santago. Iterative Ultrasonic Signal and Image Deconvolution for Estimation of the Complex...rigorous mathematical and computational research on inverse problems in optical imaging of direct interest to the Army and also the intelligence agencies

  13. Adaptive Optics Image Restoration Based on Frame Selection and Multi-frame Blind Deconvolution

    NASA Astrophysics Data System (ADS)

    Tian, Yu; Rao, Chang-hui; Wei, Kai

    Restricted by the observational conditions and the hardware, adaptive optics can only partially correct optical images blurred by atmospheric turbulence. A postprocessing method based on frame selection and multi-frame blind deconvolution is proposed for the restoration of high-resolution adaptive optics images. By frame selection we mean that the degraded (blurred) images are first screened, and only the selected frames participate in the iterative blind deconvolution calculation, which requires no a priori knowledge beyond a positivity constraint. This method has been applied to the restoration of stellar images observed with the 61-element adaptive optics system installed on the Yunnan Observatory 1.2 m telescope. The experimental results indicate that this method can effectively compensate for the residual errors of the adaptive optics system, and the restored image can reach diffraction-limited quality.
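    The frame-selection step can be illustrated with a simple sharpness proxy; the metric (normalized intensity variance) and the toy frame stack below are invented for illustration, not taken from the paper:

```python
import numpy as np

def select_frames(frames, keep=0.1):
    """Rank short-exposure frames by a simple sharpness proxy
    (normalized intensity variance) and keep the best fraction."""
    scores = [np.var(f) / (np.mean(f) ** 2 + 1e-12) for f in frames]
    order = np.argsort(scores)[::-1]
    k = max(1, int(len(frames) * keep))
    return [frames[i] for i in order[:k]]

# Toy stack: one "sharp" frame (concentrated flux) among nine flat ones
rng = np.random.default_rng(3)
flat = [np.full((8, 8), 1.0) + 0.01 * rng.standard_normal((8, 8))
        for _ in range(9)]
sharp = np.zeros((8, 8))
sharp[4, 4] = 64.0                         # same total flux, concentrated
best = select_frames(flat + [sharp], keep=0.1)
```

    Only the selected frames would then be fed to the multi-frame blind deconvolution, which improves convergence by discarding the most turbulence-degraded exposures.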

  14. Forward Looking Radar Imaging by Truncated Singular Value Decomposition and Its Application for Adverse Weather Aircraft Landing.

    PubMed

    Huang, Yulin; Zha, Yuebo; Wang, Yue; Yang, Jianyu

    2015-06-18

    The forward looking radar imaging task is a practical and challenging problem for adverse-weather aircraft landing. Deconvolution methods can realize forward looking imaging but often lead to noise amplification in the radar image. In this paper, a forward looking radar imaging method based on deconvolution is presented for adverse-weather aircraft landing. We first present the theoretical background of the forward looking radar imaging task and its application to aircraft landing. Then, we convert the forward looking radar imaging task into a corresponding deconvolution problem, which is solved in the framework of algebraic theory using the truncated singular value decomposition (TSVD) method. The key issue of selecting the truncation parameter is addressed using the generalized cross validation (GCV) approach. Simulation and experimental results demonstrate that the proposed method is effective in achieving angular resolution enhancement while suppressing noise amplification in forward looking radar imaging.
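    The TSVD-with-GCV idea can be sketched as follows: expand the data in the singular basis, score each truncation level k by the GCV function ||Ax_k - b||² / (m - k)², and keep only the first k singular components. The beam matrix, scene, and noise level below are invented toy values, not the paper's radar model:

```python
import numpy as np

def tsvd_gcv(A, b):
    """TSVD solution with the truncation level k chosen by generalized
    cross validation: minimize ||A x_k - b||^2 / (m - k)^2 over k."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    m = len(b)
    scores = [np.sum(beta[k:] ** 2) / (m - k) ** 2 for k in range(1, len(s))]
    k = 1 + int(np.argmin(scores))
    x = Vt[:k].T @ (beta[:k] / s[:k])
    return x, k

# Toy angular-superresolution problem: Gaussian beam smearing plus noise
rng = np.random.default_rng(4)
n = 40
i = np.arange(n)
A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 2.0) ** 2)  # beam pattern matrix
x_true = np.exp(-0.5 * ((i - 20) / 5.0) ** 2)              # smooth scene
b = A @ x_true + 1e-3 * rng.standard_normal(n)
x_tsvd, k = tsvd_gcv(A, b)
x_naive = np.linalg.solve(A, b)                            # noise-amplified
```

    Because the smallest singular values of the beam matrix are tiny, the unregularized solve amplifies the noise enormously, while truncating at the GCV-selected k discards exactly those noise-dominated components.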

  15. Towards real-time image deconvolution: application to confocal and STED microscopy

    PubMed Central

    Zanella, R.; Zanghirati, G.; Cavicchioli, R.; Zanni, L.; Boccacci, P.; Bertero, M.; Vicidomini, G.

    2013-01-01

    Although deconvolution can improve the quality of images from any type of microscope, the high computational time required has so far limited its widespread adoption. Here we demonstrate the ability of the scaled-gradient-projection (SGP) method to provide accelerated versions of the algorithms most used in microscopy. To achieve further increases in efficiency, we also consider implementations on graphics processing units (GPUs). We test the proposed algorithms on both synthetic and real data from confocal and STED microscopy. Combining the SGP method with the GPU implementation, we achieve speed-up factors from about 25 to 690 with respect to the conventional algorithms. The excellent results obtained on STED microscopy images demonstrate the synergy between super-resolution techniques and image deconvolution. Further, the real-time processing preserves one of the most important properties of STED microscopy, i.e., the ability to provide fast sub-diffraction resolution recordings. PMID:23982127

  16. Removing the echoes from terahertz pulse reflection system and sample

    NASA Astrophysics Data System (ADS)

    Liu, Haishun; Zhang, Zhenwei; Zhang, Cunlin

    2018-01-01

    Due to echoes from both the terahertz (THz) pulse reflection system and the sample, the primary THz pulse is distorted. The system echoes are of two types: one, preceding the main peak, is probably caused by the ultrafast laser pulse, while the other, following the primary pulse, is caused by the Fabry-Perot (F-P) etalon effect of the detector. We attempt to remove the corresponding echoes by using two kinds of deconvolution. A 400 μm Si wafer was selected as the test sample. First, the double Gaussian filter (DGF) deconvolution method was used to remove the systematic echoes, and then another deconvolution technique was employed to eliminate the two obvious echoes of the sample. The results indicated that although the combination of the two deconvolution techniques could not entirely remove the echoes of the sample and system, the echoes were largely reduced.

  17. Determination of uronic acids in isolated hemicelluloses from kenaf using diffuse reflectance infrared fourier transform spectroscopy (DRIFTS) and the curve-fitting deconvolution method.

    PubMed

    Batsoulis, A N; Nacos, M K; Pappas, C S; Tarantilis, P A; Mavromoustakos, T; Polissiou, M G

    2004-02-01

    Hemicellulose samples were isolated from kenaf (Hibiscus cannabinus L.). Hemicellulosic fractions usually contain a variable percentage of uronic acids. The uronic acid content (expressed as polygalacturonic acid) of the isolated hemicelluloses was determined by diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS) and the curve-fitting deconvolution method. A linear relationship between uronic acid content and the sum of the peak areas at 1745, 1715, and 1600 cm⁻¹ was established with a high correlation coefficient (0.98). The deconvolution analysis using the curve-fitting method allowed the elimination of spectral interferences from other cell wall components. The above method was compared with an established spectrophotometric method and was found equivalent in accuracy and repeatability (t-test, F-test). The method is applicable to the analysis of natural or synthetic mixtures and/or crude substances. The proposed method is simple, rapid, and nondestructive for the samples.
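The curve-fitting idea can be illustrated with a minimal sketch: if the band centers are fixed at the three wavenumbers named above and a common width is assumed, fitting the overlapping Gaussian bands reduces to linear least squares for the amplitudes, from which peak areas follow analytically. The width and amplitudes below are made up.

```python
import numpy as np

# Band centers from the abstract; the common width is an assumed value.
centers = np.array([1745.0, 1715.0, 1600.0])   # cm^-1
width = 15.0                                   # cm^-1 (assumed)
wavenumbers = np.linspace(1500.0, 1850.0, 701)

def band(center):
    """Unit-amplitude Gaussian band profile."""
    return np.exp(-(wavenumbers - center) ** 2 / (2 * width ** 2))

# Synthetic overlapping spectrum standing in for the DRIFTS data
true_amps = np.array([0.6, 0.9, 0.4])
spectrum = sum(a * band(c) for a, c in zip(true_amps, centers))

# With centers and widths fixed, "curve fitting" is linear in the
# amplitudes; solve by least squares and get peak areas analytically.
G = np.column_stack([band(c) for c in centers])
amps, *_ = np.linalg.lstsq(G, spectrum, rcond=None)
areas = amps * width * np.sqrt(2.0 * np.pi)
total_area = areas.sum()
```

Deconvolving the overlapped bands this way is what removes the spectral interference: each fitted component contributes its own area, even where the raw bands overlap heavily.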

  18. A feasibility and optimization study to determine cooling time and burnup of advanced test reactor fuels using a nondestructive technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Navarro, Jorge

    2013-12-01

    The goal of the study presented here is to determine the best available non-destructive technique for collecting validation data and for determining the burnup and cooling time of fuel elements onsite at the Advanced Test Reactor (ATR) canal. The study makes a recommendation on the viability of implementing a permanent fuel scanning system at the ATR canal and leads to the full design of a permanent fuel scan system. The study consisted first in determining whether it was possible, and which equipment was necessary, to collect useful spectra from ATR fuel elements at the canal adjacent to the reactor. Once it was established that useful spectra can be obtained at the ATR canal, the next step was to determine which detector and which configuration was better suited to predict burnup and cooling time of fuel elements non-destructively. Three different detectors of High Purity Germanium (HPGe), Lanthanum Bromide (LaBr3), and High Pressure Xenon (HPXe) were used in two system configurations, above and below the water pool. The data collected and analyzed were used to create burnup and cooling time calibration prediction curves for ATR fuel. The next stage of the study was to determine which of the three detectors tested was better suited for the permanent system. From the spectra taken and the calibration curves obtained, it was determined that although the HPGe detector yielded better results, a detector that could better withstand the harsh environment of the ATR canal was needed. The in-situ nature of the measurements required a rugged, low-maintenance, and easy-to-control fuel scanning system. Based on the ATR canal feasibility measurements and calibration results, it was determined that the LaBr3 detector was the best alternative for canal in-situ measurements; however, to enhance the quality of the spectra collected using this scintillator, a deconvolution method was developed.
Following the development of the deconvolution method for ATR applications, the technique was tested using one-isotope, multi-isotope, and simulated fuel sources. Burnup calibrations were performed using convoluted and deconvoluted data. The calibration results showed that burnup prediction by this method improves with deconvolution. The final stage of the deconvolution method development was to perform an irradiation experiment to create a surrogate fuel source for testing the deconvolution method with experimental data. The path forward is a conceptual design of the fuel scan system using the rugged LaBr3 detector in an above-the-water configuration together with deconvolution algorithms.

  19. Space Telescope Imaging Spectrograph Spectroscopy of the Central 14 pc of NGC 3998: Evidence for an Inflow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Devereux, Nick

    Prior imaging of the lenticular galaxy, NGC 3998, with the Hubble Space Telescope revealed a small, highly inclined, nuclear ionized gas disk, the kinematics of which indicate the presence of a 270 million solar mass black hole. Plausible kinematic models are used to constrain the size of the broad emission line region (BELR) in NGC 3998 by modeling the shape of the broad Hα, Hβ, and Hγ emission line profiles. The analysis indicates that the BELR is large with an outer radius ~7 pc, regardless of whether the kinematic model is represented by an accretion disk or a spherically symmetric inflow. The electron temperature in the BELR is ≤28,800 K, consistent with photoionization by the active galactic nucleus (AGN). Indeed, the AGN is able to sustain the ionization of the BELR, albeit with a high covering factor ranging between 20% and 100% depending on the spectral energy distribution adopted for the AGN. The high covering factor favors a spherical distribution for the gas as opposed to a thin disk. If the gas density is ≥7 × 10³ cm⁻³ as indicated by the broad forbidden [S II] emission line ratio, then interpreting the broad Hα emission line in terms of a steady state spherically symmetric inflow leads to a rate ≤6.5 × 10⁻² M☉ yr⁻¹, which exceeds the inflow requirement to explain the X-ray luminosity in terms of a radiatively inefficient inflow by a factor of ≤18.

  20. Space Telescope Imaging Spectrograph Spectroscopy of the Central 14 pc OF NGC 3998: Evidence for an Inflow

    NASA Astrophysics Data System (ADS)

    Devereux, Nick

    2011-02-01

    Prior imaging of the lenticular galaxy, NGC 3998, with the Hubble Space Telescope revealed a small, highly inclined, nuclear ionized gas disk, the kinematics of which indicate the presence of a 270 million solar mass black hole. Plausible kinematic models are used to constrain the size of the broad emission line region (BELR) in NGC 3998 by modeling the shape of the broad Hα, Hβ, and Hγ emission line profiles. The analysis indicates that the BELR is large with an outer radius ~7 pc, regardless of whether the kinematic model is represented by an accretion disk or a spherically symmetric inflow. The electron temperature in the BELR is ≤28,800 K, consistent with photoionization by the active galactic nucleus (AGN). Indeed, the AGN is able to sustain the ionization of the BELR, albeit with a high covering factor ranging between 20% and 100% depending on the spectral energy distribution adopted for the AGN. The high covering factor favors a spherical distribution for the gas as opposed to a thin disk. If the gas density is ≥7 × 10³ cm⁻³ as indicated by the broad forbidden [S II] emission line ratio, then interpreting the broad Hα emission line in terms of a steady state spherically symmetric inflow leads to a rate ≤6.5 × 10⁻² M☉ yr⁻¹, which exceeds the inflow requirement to explain the X-ray luminosity in terms of a radiatively inefficient inflow by a factor of ≤18.

  1. Quantitative fluorescence microscopy and image deconvolution.

    PubMed

    Swedlow, Jason R

    2013-01-01

    Quantitative imaging and image deconvolution have become standard techniques for the modern cell biologist because they can form the basis of an increasing number of assays for molecular function in a cellular context. There are two major types of deconvolution approaches--deblurring and restoration algorithms. Deblurring algorithms remove blur but treat a series of optical sections as individual two-dimensional entities and therefore sometimes mishandle blurred light. Restoration algorithms determine an object that, when convolved with the point-spread function of the microscope, could produce the image data. The advantages and disadvantages of these methods are discussed in this chapter. Image deconvolution in fluorescence microscopy has usually been applied to high-resolution imaging to improve contrast and thus detect small, dim objects that might otherwise be obscured. Their proper use demands some consideration of the imaging hardware, the acquisition process, fundamental aspects of photon detection, and image processing. This can prove daunting for some cell biologists, but the power of these techniques has been proven many times in the works cited in the chapter and elsewhere. Their usage is now well defined, so they can be incorporated into the capabilities of most laboratories. A major application of fluorescence microscopy is the quantitative measurement of the localization, dynamics, and interactions of cellular factors. The introduction of green fluorescent protein and its spectral variants has led to a significant increase in the use of fluorescence microscopy as a quantitative assay system. For quantitative imaging assays, it is critical to consider the nature of the image-acquisition system and to validate its response to known standards. Any image-processing algorithms used before quantitative analysis should preserve the relative signal levels in different parts of the image. 
Copyright © 1998 Elsevier Inc. All rights reserved.

  2. A Robust Deconvolution Method based on Transdimensional Hierarchical Bayesian Inference

    NASA Astrophysics Data System (ADS)

    Kolb, J.; Lekic, V.

    2012-12-01

    Analysis of P-S and S-P conversions allows us to map receiver-side crustal and lithospheric structure. This analysis often involves deconvolution of the parent wave field from the scattered wave field as a means of suppressing source-side complexity. A variety of deconvolution techniques exist, including damped spectral division, Wiener filtering, iterative time-domain deconvolution, and the multitaper method. All of these techniques require estimates of noise characteristics as input parameters. We present a deconvolution method based on transdimensional hierarchical Bayesian inference in which both noise magnitude and noise correlation are used as parameters in calculating the likelihood probability distribution. Because the noise for P-S and S-P conversion analysis in terms of receiver functions is a combination of background noise, which is relatively easy to characterize, and signal-generated noise, which is much more difficult to quantify, we treat measurement errors as an unknown quantity, characterized by a probability density function whose mean and variance are model parameters. This transdimensional hierarchical Bayesian approach has been successfully used previously in the inversion of receiver functions in terms of shear and compressional wave speeds of an unknown number of layers [1]. In our method, we use a Markov chain Monte Carlo (MCMC) algorithm to find the receiver function that best fits the data while accurately assessing the noise parameters. To parameterize the receiver function, we model it as an unknown number of Gaussians of unknown amplitude and width. The algorithm takes multiple steps before calculating the acceptance probability of a new model, in order to avoid getting trapped in local misfit minima.
Using both observed and synthetic data, we show that the MCMC deconvolution method can accurately obtain a receiver function as well as an estimate of the noise parameters given the parent and daughter components. Furthermore, we demonstrate that this new approach is far less susceptible to generating spurious features even at high noise levels. Finally, the method yields not only the most-likely receiver function, but also quantifies its full uncertainty. [1] Bodin, T., M. Sambridge, H. Tkalčić, P. Arroucau, K. Gallagher, and N. Rawlinson (2012), Transdimensional inversion of receiver functions and surface wave dispersion, J. Geophys. Res., 117, B02301
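A minimal fixed-dimension analog of such a sampler can be sketched as follows, with the noise level treated as an unknown sampled jointly with the signal parameters (the hierarchical idea). The transdimensional birth/death moves and the noise-correlation parameter of the paper are omitted, and all settings below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0.0, 10.0, 200)

def gauss_pulse(amp, loc, wid):
    return amp * np.exp(-(t - loc) ** 2 / (2 * wid ** 2))

# Synthetic "receiver function" data: one arrival plus noise whose
# level the sampler must infer rather than assume.
true_sigma = 0.05
data = gauss_pulse(1.0, 4.0, 0.3) + rng.normal(0.0, true_sigma, t.size)

def log_posterior(theta):
    amp, loc, wid, log_sigma = theta
    if not (0.0 < amp < 5.0 and 0.0 < loc < 10.0 and 0.05 < wid < 2.0):
        return -np.inf
    sigma = np.exp(log_sigma)
    resid = data - gauss_pulse(amp, loc, wid)
    # Gaussian likelihood with sigma as a free (hierarchical) parameter
    return -0.5 * np.sum(resid**2) / sigma**2 - t.size * log_sigma

theta = np.array([0.8, 4.3, 0.4, np.log(0.2)])
lp = log_posterior(theta)
chain = []
for _ in range(30000):
    prop = theta + rng.normal(0.0, [0.05, 0.03, 0.02, 0.05])
    lp_prop = log_posterior(prop)
    if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
posterior = np.array(chain[10000:])
loc_mean = posterior[:, 1].mean()
sigma_mean = np.exp(posterior[:, 3]).mean()
```

Because sigma is sampled rather than fixed, the posterior automatically widens when the data are noisy, which is what makes the approach robust when signal-generated noise cannot be characterized in advance.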

  3. A mathematical deconvolution formulation for superficial dose distribution measurement by Cerenkov light dosimetry.

    PubMed

    Brost, Eric Edward; Watanabe, Yoichi

    2018-06-01

    Cerenkov photons are created by high-energy radiation beams used for radiation therapy. In this study, we developed a Cerenkov light dosimetry technique to obtain a two-dimensional dose distribution in a superficial region of medium from the images of Cerenkov photons by using a deconvolution method. An integral equation was derived to represent the Cerenkov photon image acquired by a camera for a given incident high-energy photon beam by using convolution kernels. Subsequently, an equation relating the planar dose at a depth to a Cerenkov photon image using the well-known relationship between the incident beam fluence and the dose distribution in a medium was obtained. The final equation contained a convolution kernel called the Cerenkov dose scatter function (CDSF). The CDSF function was obtained by deconvolving the Cerenkov scatter function (CSF) with the dose scatter function (DSF). The GAMOS (Geant4-based Architecture for Medicine-Oriented Simulations) Monte Carlo particle simulation software was used to obtain the CSF and DSF. The dose distribution was calculated from the Cerenkov photon intensity data using an iterative deconvolution method with the CDSF. The theoretical formulation was experimentally evaluated by using an optical phantom irradiated by high-energy photon beams. The intensity of the deconvolved Cerenkov photon image showed linear dependence on the dose rate and the photon beam energy. The relative intensity showed a field size dependence similar to the beam output factor. Deconvolved Cerenkov images showed improvement in dose profiles compared with the raw image data. In particular, the deconvolution significantly improved the agreement in the high dose gradient region, such as in the penumbra. Deconvolution with a single iteration was found to provide the most accurate solution of the dose. 
Two-dimensional dose distributions of the deconvolved Cerenkov images agreed well with the reference distributions for both square fields and a multileaf collimator (MLC)-defined, irregularly shaped field. The proposed technique improved the accuracy of the Cerenkov photon dosimetry in the penumbra region. The results of this study showed initial validation of the deconvolution method for beam profile measurements in a homogeneous medium. The new formulation accounted for the physical processes of Cerenkov photon transport in the medium more accurately than previously published methods. © 2018 American Association of Physicists in Medicine.
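As an illustration of kernel-based iterative deconvolution run for a single iteration, here is the classical Van Cittert scheme applied to a toy beam profile. This is a stand-in, not the paper's CDSF-based formulation, and the kernel values are invented.

```python
import numpy as np

def van_cittert(measured, kernel, n_iter=1):
    """x_{k+1} = x_k + (b - kernel * x_k), starting from x_0 = b.
    One iteration already undoes much of a mild blur."""
    est = measured.copy()
    for _ in range(n_iter):
        est = est + (measured - np.convolve(est, kernel, mode="same"))
    return est

# Mild scatter kernel standing in for the dose-scatter convolution
kernel = np.array([0.1, 0.8, 0.1])
truth = np.zeros(50); truth[20:30] = 1.0   # a "beam profile" with sharp edges
measured = np.convolve(truth, kernel, mode="same")
sharper = van_cittert(measured, kernel, n_iter=1)
```

The single update sharpens the penumbra (the edge samples move back toward the true step) without the noise amplification that many iterations would bring, mirroring the paper's observation that one iteration was the most accurate.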

  4. Constraining torsion with Gravity Probe B

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mao Yi; Guth, Alan H.; Cabi, Serkan

    2007-11-15

    It is well-entrenched folklore that all torsion gravity theories predict observationally negligible torsion in the solar system, since torsion (if it exists) couples only to the intrinsic spin of elementary particles, not to rotational angular momentum. We argue that this assumption has a logical loophole which can and should be tested experimentally, and consider nonstandard torsion theories in which torsion can be generated by macroscopic rotating objects. In the spirit of action=reaction, if a rotating mass like a planet can generate torsion, then a gyroscope would be expected to feel torsion. An experiment with a gyroscope (without nuclear spin) such as Gravity Probe B (GPB) can test theories where this is the case. Using symmetry arguments, we show that to lowest order, any torsion field around a uniformly rotating spherical mass is determined by seven dimensionless parameters. These parameters effectively generalize the parametrized post-Newtonian formalism and provide a concrete framework for further testing Einstein's general theory of relativity (GR). We construct a parametrized Lagrangian that includes both standard torsion-free GR and Hayashi-Shirafuji maximal torsion gravity as special cases. We demonstrate that classic solar system tests rule out the latter and constrain two observable parameters. We show that Gravity Probe B is an ideal experiment for further constraining nonstandard torsion theories, and work out the most general torsion-induced precession of its gyroscope in terms of our torsion parameters.

  5. Spherical Harmonics Reveal Standing EEG Waves and Long-Range Neural Synchronization during Non-REM Sleep.

    PubMed

    Sivakumar, Siddharth S; Namath, Amalia G; Galán, Roberto F

    2016-01-01

    Previous work from our lab has demonstrated how the connectivity of brain circuits constrains the repertoire of activity patterns that those circuits can display. Specifically, we have shown that the principal components of spontaneous neural activity are uniquely determined by the underlying circuit connections, and that although the principal components do not uniquely resolve the circuit structure, they do reveal important features about it. Expanding upon this framework on a larger scale of neural dynamics, we have analyzed EEG data recorded with the standard 10-20 electrode system from 41 neurologically normal children and adolescents during stage 2, non-REM sleep. We show that the principal components of EEG spindles, or sigma waves (10-16 Hz), reveal non-propagating, standing waves in the form of spherical harmonics. We mathematically demonstrate that standing EEG waves exist when the spatial covariance and the Laplacian operator on the head's surface commute. This in turn implies that the covariance between two EEG channels decreases as the inverse of their relative distance; a relationship that we corroborate with empirical data. Using volume conduction theory, we then demonstrate that superficial current sources are more synchronized at larger distances, and determine the characteristic length of large-scale neural synchronization as 1.31 times the head radius, on average. Moreover, consistent with the hypothesis that EEG spindles are driven by thalamo-cortical rather than cortico-cortical loops, we also show that 8 additional patients with hypoplasia or complete agenesis of the corpus callosum, i.e., with deficient or no connectivity between cortical hemispheres, similarly exhibit standing EEG waves in the form of spherical harmonics. We conclude that spherical harmonics are a hallmark of spontaneous, large-scale synchronization of neural activity in the brain, which are associated with unconscious, light sleep. 
The analogy with spherical harmonics in quantum mechanics suggests that the variances (eigenvalues) of the principal components follow a Boltzmann distribution, or equivalently, that standing waves are in a sort of "thermodynamic" equilibrium during non-REM sleep. By extension, we speculate that consciousness emerges as the brain dynamics deviate from such equilibrium.
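The commutation argument can be checked numerically in a simplified setting: channels on a ring with covariance falling off as the inverse of distance. Both matrices are then circulant, hence commute, and the principal components are standing waves (the ring analog of spherical harmonics). The sizes and constants below are illustrative.

```python
import numpy as np

n = 32
i = np.arange(n)
# Circular (ring) distance between "electrode" pairs
dist = np.abs(i[:, None] - i[None, :])
dist = np.minimum(dist, n - dist)

# Covariance decaying as the inverse of distance, as in the abstract
C = 1.0 / np.maximum(dist, 1.0)
np.fill_diagonal(C, 2.0)

# Graph Laplacian of the ring (discrete Laplace operator)
L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
L[0, -1] = L[-1, 0] = 1.0

# Circulant matrices share an eigenbasis, so C and L commute
commutator = C @ L - L @ C

# A principal component of C is therefore an eigenvector of L too:
# a non-propagating standing wave.
w, V = np.linalg.eigh(C)
v = V[:, -1]
lam = v @ L @ v
```

The vanishing commutator is the discrete counterpart of the claim that the spatial covariance and the Laplacian on the head's surface commute, which forces the principal components to be eigenfunctions of the Laplacian, i.e. standing waves.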

  6. Spherical Harmonics Reveal Standing EEG Waves and Long-Range Neural Synchronization during Non-REM Sleep

    PubMed Central

    Sivakumar, Siddharth S.; Namath, Amalia G.; Galán, Roberto F.

    2016-01-01

    Previous work from our lab has demonstrated how the connectivity of brain circuits constrains the repertoire of activity patterns that those circuits can display. Specifically, we have shown that the principal components of spontaneous neural activity are uniquely determined by the underlying circuit connections, and that although the principal components do not uniquely resolve the circuit structure, they do reveal important features about it. Expanding upon this framework on a larger scale of neural dynamics, we have analyzed EEG data recorded with the standard 10–20 electrode system from 41 neurologically normal children and adolescents during stage 2, non-REM sleep. We show that the principal components of EEG spindles, or sigma waves (10–16 Hz), reveal non-propagating, standing waves in the form of spherical harmonics. We mathematically demonstrate that standing EEG waves exist when the spatial covariance and the Laplacian operator on the head's surface commute. This in turn implies that the covariance between two EEG channels decreases as the inverse of their relative distance; a relationship that we corroborate with empirical data. Using volume conduction theory, we then demonstrate that superficial current sources are more synchronized at larger distances, and determine the characteristic length of large-scale neural synchronization as 1.31 times the head radius, on average. Moreover, consistent with the hypothesis that EEG spindles are driven by thalamo-cortical rather than cortico-cortical loops, we also show that 8 additional patients with hypoplasia or complete agenesis of the corpus callosum, i.e., with deficient or no connectivity between cortical hemispheres, similarly exhibit standing EEG waves in the form of spherical harmonics. We conclude that spherical harmonics are a hallmark of spontaneous, large-scale synchronization of neural activity in the brain, which are associated with unconscious, light sleep. 
The analogy with spherical harmonics in quantum mechanics suggests that the variances (eigenvalues) of the principal components follow a Boltzmann distribution, or equivalently, that standing waves are in a sort of “thermodynamic” equilibrium during non-REM sleep. By extension, we speculate that consciousness emerges as the brain dynamics deviate from such equilibrium. PMID:27445777

  7. Robust dynamic myocardial perfusion CT deconvolution for accurate residue function estimation via adaptive-weighted tensor total variation regularization: a preclinical study.

    PubMed

    Zeng, Dong; Gong, Changfei; Bian, Zhaoying; Huang, Jing; Zhang, Xinyu; Zhang, Hua; Lu, Lijun; Niu, Shanzhou; Zhang, Zhang; Liang, Zhengrong; Feng, Qianjin; Chen, Wufan; Ma, Jianhua

    2016-11-21

    Dynamic myocardial perfusion computed tomography (MPCT) is a promising technique for quick diagnosis and risk stratification of coronary artery disease. However, one major drawback of dynamic MPCT imaging is the heavy radiation dose to patients due to its dynamic image acquisition protocol. In this work, to address this issue, we present a robust dynamic MPCT deconvolution algorithm via adaptive-weighted tensor total variation (AwTTV) regularization for accurate residue function estimation with low-mAs data acquisitions. For simplicity, the presented method is termed 'MPD-AwTTV'. More specifically, the gains of the AwTTV regularization over the original tensor total variation regularization are from the anisotropic edge property of the sequential MPCT images. To minimize the associative objective function we propose an efficient iterative optimization strategy with fast convergence rate in the framework of an iterative shrinkage/thresholding algorithm. We validate and evaluate the presented algorithm using both digital XCAT phantom and preclinical porcine data. The preliminary experimental results have demonstrated that the presented MPD-AwTTV deconvolution algorithm can achieve remarkable gains in noise-induced artifact suppression, edge detail preservation, and accurate flow-scaled residue function and MPHM estimation as compared with the other existing deconvolution algorithms in digital phantom studies, and similar gains can be obtained in the porcine data experiment.
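The iterative shrinkage/thresholding framework mentioned above can be sketched in its simplest form, with a plain L1 penalty standing in for the paper's AwTTV regularizer; the problem sizes and penalty weight below are arbitrary.

```python
import numpy as np

def ista(A, b, lam=1e-3, n_iter=1000):
    """Iterative shrinkage/thresholding for
    min_x 0.5*||A x - b||^2 + lam*||x||_1 :
    a gradient step followed by the soft-threshold (shrinkage) operator."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - step * (A.T @ (A @ x - b))                        # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # shrinkage
    return x

# Demo on a small sparse recovery problem (sizes and seed arbitrary)
rng = np.random.default_rng(0)
A = rng.normal(size=(30, 20))
x_true = np.zeros(20); x_true[[3, 7, 15]] = [1.5, -1.0, 0.5]
b = A @ x_true
x_hat = ista(A, b)
```

In the paper the shrinkage step acts on (adaptively weighted) tensor total-variation coefficients rather than on x itself, but the alternation of a data-fidelity gradient step with a thresholding step is the same.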

  8. Robust dynamic myocardial perfusion CT deconvolution for accurate residue function estimation via adaptive-weighted tensor total variation regularization: a preclinical study

    NASA Astrophysics Data System (ADS)

    Zeng, Dong; Gong, Changfei; Bian, Zhaoying; Huang, Jing; Zhang, Xinyu; Zhang, Hua; Lu, Lijun; Niu, Shanzhou; Zhang, Zhang; Liang, Zhengrong; Feng, Qianjin; Chen, Wufan; Ma, Jianhua

    2016-11-01

    Dynamic myocardial perfusion computed tomography (MPCT) is a promising technique for quick diagnosis and risk stratification of coronary artery disease. However, one major drawback of dynamic MPCT imaging is the heavy radiation dose to patients due to its dynamic image acquisition protocol. In this work, to address this issue, we present a robust dynamic MPCT deconvolution algorithm via adaptive-weighted tensor total variation (AwTTV) regularization for accurate residue function estimation with low-mAs data acquisitions. For simplicity, the presented method is termed ‘MPD-AwTTV’. More specifically, the gains of the AwTTV regularization over the original tensor total variation regularization are from the anisotropic edge property of the sequential MPCT images. To minimize the associative objective function we propose an efficient iterative optimization strategy with fast convergence rate in the framework of an iterative shrinkage/thresholding algorithm. We validate and evaluate the presented algorithm using both digital XCAT phantom and preclinical porcine data. The preliminary experimental results have demonstrated that the presented MPD-AwTTV deconvolution algorithm can achieve remarkable gains in noise-induced artifact suppression, edge detail preservation, and accurate flow-scaled residue function and MPHM estimation as compared with the other existing deconvolution algorithms in digital phantom studies, and similar gains can be obtained in the porcine data experiment.

  9. A deconvolution extraction method for 2D multi-object fibre spectroscopy based on the regularized least-squares QR-factorization algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Jian; Yin, Qian; Guo, Ping; Luo, A.-li

    2014-09-01

    This paper presents an efficient method for the extraction of astronomical spectra from two-dimensional (2D) multifibre spectrographs based on the regularized least-squares QR-factorization (LSQR) algorithm. We address two issues: we propose a modified Gaussian point spread function (PSF) for modelling the 2D PSF from multi-emission-line gas-discharge lamp images (arc images), and we develop an efficient deconvolution method to extract spectra in real circumstances. The proposed modified 2D Gaussian PSF model can fit various types of 2D PSFs, including different radial distortion angles and ellipticities. We adopt the regularized LSQR algorithm to solve the sparse linear equations constructed from the sparse convolution matrix, which we designate the deconvolution spectrum extraction method. Furthermore, we implement a parallelized LSQR algorithm based on graphics processing unit programming in the Compute Unified Device Architecture to accelerate the computational processing. Experimental results illustrate that the proposed extraction method can greatly reduce the computational cost and memory use of the deconvolution method and, consequently, increase its efficiency and practicability. In addition, the proposed extraction method has a stronger noise tolerance than other methods, such as the boxcar (aperture) extraction and profile extraction methods. Finally, we present an analysis of the sensitivity of the extraction results to the radius and full width at half-maximum of the 2D PSF.
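A much-simplified 1D analog of the sparse-matrix extraction step can be sketched with SciPy's `lsqr`, whose `damp` argument plays the role of the regularization term. The PSF profile, sizes, and noise level are invented, and the real problem uses a 2D PSF with per-fibre profiles.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import lsqr

n = 100
offsets = list(range(-3, 4))
profile = np.exp(-np.arange(-3, 4) ** 2 / 2.0)
profile /= profile.sum()

# Sparse banded "convolution matrix": each column spreads one spectral
# element according to the (here 1D, Gaussian) PSF profile.
A = sparse.diags(list(profile), offsets, shape=(n, n), format="csr")

x_true = np.zeros(n); x_true[[20, 50, 80]] = [1.0, 2.0, 1.5]
rng = np.random.default_rng(1)
b = A @ x_true + 1e-3 * rng.normal(size=n)

# Regularized LSQR: damp adds a Tikhonov term damp*||x|| to the problem
x_rec = lsqr(A, b, damp=1e-2)[0]
```

Because the matrix is stored sparsely and LSQR only needs matrix-vector products, the memory and compute cost stay modest, which is the practical point the abstract makes about the deconvolution extraction method.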

  10. Unsupervised Learning for Monaural Source Separation Using Maximization–Minimization Algorithm with Time–Frequency Deconvolution †

    PubMed Central

    Bouridane, Ahmed; Ling, Bingo Wing-Kuen

    2018-01-01

    This paper presents an unsupervised learning algorithm for sparse nonnegative matrix factor time–frequency deconvolution with optimized fractional β-divergence. The β-divergence is a group of cost functions parametrized by a single parameter β. The Itakura–Saito divergence, Kullback–Leibler divergence and Least Square distance are special cases that correspond to β=0, 1, 2, respectively. This paper presents a generalized algorithm that uses a flexible range of β that includes fractional values. It describes a maximization–minimization (MM) algorithm leading to the development of a fast convergence multiplicative update algorithm with guaranteed convergence. The proposed model operates in the time–frequency domain and decomposes an information-bearing matrix into two-dimensional deconvolution of factor matrices that represent the spectral dictionary and temporal codes. The deconvolution process has been optimized to yield sparse temporal codes through maximizing the likelihood of the observations. The paper also presents a method to estimate the fractional β value. The method is demonstrated on separating audio mixtures recorded from a single channel. The paper shows that the extraction of the spectral dictionary and temporal codes is significantly more efficient by using the proposed algorithm and subsequently leads to better source separation performance. Experimental tests and comparisons with other factorization methods have been conducted to verify its efficacy. PMID:29702629
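The beta-divergence multiplicative updates can be sketched for plain (non-convolutive) NMF; the time-frequency deconvolution and sparsity terms of the paper are omitted, and a fractional beta is used as the abstract describes. The sizes and seeds are arbitrary.

```python
import numpy as np

def nmf_beta(V, rank, beta=0.5, n_iter=1000, seed=0):
    """Multiplicative updates for NMF under the beta-divergence.
    beta=0: Itakura-Saito, beta=1: Kullback-Leibler, beta=2: least
    squares; fractional values such as 0.5 are allowed."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + 0.1
    H = rng.random((rank, n)) + 0.1
    eps = 1e-9
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (WH ** (beta - 2) * V)) / (W.T @ WH ** (beta - 1) + eps)
        WH = W @ H + eps
        W *= ((WH ** (beta - 2) * V) @ H.T) / (WH ** (beta - 1) @ H.T + eps)
    return W, H

# Demo: recover an exactly rank-2 nonnegative "spectrogram"
rng = np.random.default_rng(1)
V = rng.random((20, 2)) @ rng.random((2, 30))
W, H = nmf_beta(V, rank=2)
rel_err = np.abs(W @ H - V).mean() / V.mean()
```

The updates multiply by nonnegative ratios, so W (the spectral dictionary) and H (the temporal codes) stay nonnegative throughout, and the same update form works for any beta, integer or fractional.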

  11. MetaUniDec: High-Throughput Deconvolution of Native Mass Spectra

    NASA Astrophysics Data System (ADS)

    Reid, Deseree J.; Diesing, Jessica M.; Miller, Matthew A.; Perry, Scott M.; Wales, Jessica A.; Montfort, William R.; Marty, Michael T.

    2018-04-01

    The expansion of native mass spectrometry (MS) methods for both academic and industrial applications has created a substantial need for analysis of large native MS datasets. Existing software tools are poorly suited for high-throughput deconvolution of native electrospray mass spectra from intact proteins and protein complexes. The UniDec Bayesian deconvolution algorithm is uniquely well suited for high-throughput analysis due to its speed and robustness but was previously tailored towards individual spectra. Here, we optimized UniDec for deconvolution, analysis, and visualization of large data sets. This new module, MetaUniDec, centers around the hierarchical data format 5 (HDF5) file format for storing datasets, which significantly improves speed, portability, and file size. It also includes code optimizations to improve speed and a new graphical user interface for visualization, interaction, and analysis of data. To demonstrate the utility of MetaUniDec, we applied the software to analyze automated collision voltage ramps with a small bacterial heme protein and large lipoprotein nanodiscs. Upon increasing collisional activation, bacterial heme-nitric oxide/oxygen binding (H-NOX) protein shows a discrete loss of bound heme, and nanodiscs show a continuous loss of lipids and charge. By using MetaUniDec to track changes in peak area or mass as a function of collision voltage, we explore the energetic profile of collisional activation in an ultra-high mass range Orbitrap mass spectrometer.

  12. Testing theoretical models of subdwarf B stars using multicolor photometry

    NASA Astrophysics Data System (ADS)

    Reed, Mike; Baran, Andrzej; Ostensen, Roy; O'Toole, Simon

    2012-08-01

    Pulsating stars allow a direct investigation of their structure and evolutionary history through the evaluation of pulsation modes. However, the observed pulsation frequencies must first be identified with spherical harmonics (modes). For subdwarf B (sdB) stars, such identifications using white-light photometry currently have significant limitations. We intend to use multicolor photometry to identify pulsation modes and constrain structure models. We propose to observe the pulsating sdB star PG0154+182 (BI Ari) with our multicolor instrument GT Cam. Our observations will be compared with perturbative atmospheric models (BRUCE/KYLIE) to identify the pulsation modes. This is part of our NSF grant to obtain seismic tools to test structure and evolution models, constraining stellar parameters including total mass, envelope mass, internal composition discontinuities and internal rotation. During winter/spring 2012, we were allocated three runs on the 2.1 m to collect multicolor data on other promising pulsating subdwarf B stars as part of this work. Those runs were very successful, prompting our continued proposals. In addition, we will obtain 3-color data using MAIA on the Mercator Telescope (using guaranteed institutional time).

  13. Scalar-tensor theories and modified gravity in the wake of GW170817

    NASA Astrophysics Data System (ADS)

    Langlois, David; Saito, Ryo; Yamauchi, Daisuke; Noui, Karim

    2018-03-01

    Theories of dark energy and modified gravity can be strongly constrained by astrophysical or cosmological observations, as illustrated by the recent observation of the gravitational wave event GW170817 and of its electromagnetic counterpart GRB 170817A, which showed that the speed of gravitational waves, cg, is the same as the speed of light to within deviations of order 10^-15. This observation implies severe restrictions on scalar-tensor theories, in particular theories whose action depends on second derivatives of a scalar field. Working in the very general framework of degenerate higher-order scalar-tensor (DHOST) theories, which encompass Horndeski and beyond-Horndeski theories, we present the DHOST theories that satisfy cg = c. We then examine, for these theories, the screening mechanism that suppresses scalar interactions on small scales, namely the Vainshtein mechanism, and compute the corresponding gravitational laws for a nonrelativistic spherical body. We show that it can lead to a deviation from standard gravity inside matter, parametrized by three coefficients which satisfy a consistency relation and can be constrained by present and future astrophysical observations.

  14. Empirical Green's function analysis: Taking the next step

    USGS Publications Warehouse

    Hough, S.E.

    1997-01-01

    An extension of the empirical Green's function (EGF) method is presented that involves determination of source parameters using standard EGF deconvolution, followed by inversion for a common attenuation parameter for a set of colocated events. Recordings of three or more colocated events can thus be used to constrain a single path attenuation estimate. I apply this method to recordings from the 1995-1996 Ridgecrest, California, earthquake sequence; I analyze four clusters consisting of 13 total events with magnitudes between 2.6 and 4.9. I first obtain corner frequencies, which are used to infer Brune stress drop estimates. I obtain stress drop values of 0.3-53 MPa (with all but one between 0.3 and 11 MPa), with no resolved increase of stress drop with moment. With the corner frequencies constrained, the inferred attenuation parameters are very consistent; they imply an average shear wave quality factor of approximately 20-25 for alluvial sediments within the Indian Wells Valley. Although the resultant spectral fitting (using corner frequency and κ) is good, the residuals are consistent among the clusters analyzed. Their spectral shape is similar to the theoretical one-dimensional response of a layered low-velocity structure in the valley (an absolute site response cannot be determined by this method because of an ambiguity between absolute response and source spectral amplitudes). I show that even this subtle site response can significantly bias estimates of corner frequency and κ if it is ignored in an inversion for only source and path effects. The multiple-EGF method presented in this paper is analogous to a joint inversion for source, path, and site effects; the use of colocated sets of earthquakes appears to offer significant advantages in improving resolution of all three estimates, especially if data are from a single site or from sites with similar site response.
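    The EGF deconvolution step described above, removing a small colocated event's record from a larger event's record to isolate the relative source term, is commonly implemented as water-level spectral division. The following is a minimal numpy sketch of that generic operation (the function name and the self-deconvolution sanity check are ours, not from the paper):

```python
import numpy as np

def egf_deconvolve(mainshock, egf, water_level=0.01):
    """Water-level spectral division: deconvolve a small colocated event
    (the empirical Green's function) from a larger event's record.
    `water_level` is a fraction of the peak EGF spectral power used to
    floor the denominator and stabilize the division."""
    n = len(mainshock)
    M = np.fft.rfft(mainshock)
    E = np.fft.rfft(egf)
    power = np.abs(E) ** 2
    floor = water_level * power.max()
    ratio = M * np.conj(E) / np.maximum(power, floor)
    return np.fft.irfft(ratio, n)

# Sanity check: deconvolving a record from itself yields a spike at lag 0.
rng = np.random.default_rng(0)
rec = rng.standard_normal(256)
rstf = egf_deconvolve(rec, rec)
print(np.argmax(rstf))  # 0
```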

  15. A Model Based Deconvolution Approach for Creating Surface Composition Maps of Irregularly Shaped Bodies from Limited Orbiting Nuclear Spectrometer Measurements

    NASA Astrophysics Data System (ADS)

    Dallmann, N. A.; Carlsten, B. E.; Stonehill, L. C.

    2017-12-01

    Orbiting nuclear spectrometers have contributed significantly to our understanding of the composition of solar system bodies. Gamma rays and neutrons are produced within the surfaces of bodies by impacting galactic cosmic rays (GCR) and by intrinsic radionuclide decay. Measuring the flux and energy spectrum of these products at one point in an orbit elucidates the elemental content of the area in view. Deconvolution of measurements from many spatially registered orbit points can produce detailed maps of elemental abundances. In applying these well-established techniques to small and irregularly shaped bodies like Phobos, one encounters unique challenges beyond those of a large spheroid. Polar mapping orbits are not possible for Phobos, and quasistatic orbits will realize only modest inclinations, unavoidably limiting surface coverage and creating North-South ambiguities in deconvolution. The irregular shape causes self-shadowing of the body, both with respect to the spectrometer and with respect to the incoming GCR. The view angle to the surface normal, as well as the distance between the surface and the spectrometer, is highly irregular. These characteristics can be synthesized into a complicated and continuously changing measurement system point spread function. We have begun to explore different model-based, statistically rigorous, iterative deconvolution methods to produce elemental abundance maps for a proposed future investigation of Phobos. By incorporating the satellite orbit, the existing high-accuracy shape models of Phobos, and the spectrometer response function, a detailed and accurate system model can be constructed. Many aspects of this model formation are particularly well suited to modern graphics processing techniques and parallel processing. We will present the current status and preliminary visualizations of the Phobos measurement system model. We will also discuss different deconvolution strategies and their relative merits in statistical rigor, stability, achievable resolution, and exploitation of the irregular shape to partially resolve ambiguities. The general applicability of these new approaches to existing data sets from Mars, Mercury, and Lunar investigations will be noted.
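    The abstract does not say which iterative scheme will be adopted; as one example of the model-based, statistically rigorous family it mentions, here is a minimal numpy sketch of Richardson-Lucy (expectation-maximization) deconvolution for Poisson count data with a generic system matrix. The 3-facet toy system is our own illustration, not mission data.

```python
import numpy as np

def richardson_lucy(A, counts, n_iter=500):
    """Richardson-Lucy (EM) deconvolution for Poisson count data.
    A[i, j]: response of measurement i to unit abundance on surface facet j
    (this is where orbit geometry, shape-model shadowing, and the detector
    response would be encoded). Returns a nonnegative abundance per facet."""
    x = np.ones(A.shape[1])
    sensitivity = A.sum(axis=0)          # total sensitivity to each facet
    for _ in range(n_iter):
        pred = A @ x                      # forward-model the counts
        x *= (A.T @ (counts / pred)) / sensitivity
    return x

# Toy system: 3 overlapping measurement footprints over 3 surface facets.
A = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])
truth = np.array([1.0, 3.0, 2.0])
x_hat = richardson_lucy(A, A @ truth)    # noiseless counts for the check
print(np.round(x_hat, 3))
```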

  16. A comparison of deconvolution and the Rutland-Patlak plot in parenchymal renal uptake rate.

    PubMed

    Al-Shakhrah, Issa A

    2012-07-01

    Deconvolution and the Rutland-Patlak (R-P) plot are two of the most commonly used methods for analyzing dynamic radionuclide renography. Both methods allow estimation of the absolute and relative renal uptake of a radiopharmaceutical and of its rate of transit through the kidney. Seventeen patients (32 kidneys) were referred for further evaluation by renal scanning. All patients were positioned supine with their backs to the scintillation gamma camera, so that the kidneys and the heart were both in the field of view. Approximately 5-7 mCi of (99m)Tc-DTPA (diethylenetriamine penta-acetic acid) in about 0.5 ml of saline was injected intravenously, and sequential 20 s frames were acquired; each study lasted approximately 20 min. Time-activity curves of the parenchymal region of interest of each kidney, as well as of the heart, were obtained for analysis. The data were then analyzed with deconvolution and the R-P plot. A strong positive association (n = 32; r = 0.83; R² = 0.68) was found between the values obtained by the two methods. Bland-Altman analysis demonstrated that 97% of the values (31 of 32 cases) were within the limits of agreement (mean ± 1.96 standard deviations). We believe the R-P method is likely to be more reproducible than iterative deconvolution, because the iterative deconvolution technique relies heavily on the accuracy of the first point analyzed, as any errors are carried forward into the calculations of all subsequent points, whereas the R-P technique is based on an initial analysis of the data by means of the R-P plot, and it can be considered an alternative technique for calculating the renal uptake rate.
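    The R-P analysis itself reduces to a straight-line fit: kidney counts divided by blood counts are regressed against the cumulative integral of the blood curve divided by blood counts, and the slope estimates the uptake rate. A minimal numpy sketch with synthetic curves (all names and parameter values are ours, not patient data):

```python
import numpy as np

def patlak_uptake(t, blood, kidney):
    """Rutland-Patlak plot: regress kidney(t)/blood(t) against
    integral(blood)/blood(t). The slope estimates the parenchymal uptake
    rate constant; the intercept estimates the vascular fraction."""
    cumint = np.concatenate(
        ([0.0], np.cumsum(0.5 * (blood[1:] + blood[:-1]) * np.diff(t))))
    x = cumint / blood
    y = kidney / blood
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept

# Synthetic check with a known uptake rate K = 0.3 and vascular fraction 0.1.
t = np.linspace(0.0, 5.0, 200)
blood = np.exp(-0.5 * t) + 0.2
cumint = np.concatenate(
    ([0.0], np.cumsum(0.5 * (blood[1:] + blood[:-1]) * np.diff(t))))
kidney = 0.3 * cumint + 0.1 * blood
K, V = patlak_uptake(t, blood, kidney)
print(round(K, 3), round(V, 3))  # 0.3 0.1
```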

  17. Interpretation of high resolution airborne magnetic data (HRAMD) of Ilesha and its environs, Southwest Nigeria, using Euler deconvolution method

    NASA Astrophysics Data System (ADS)

    Olurin, Oluwaseun Tolutope

    2017-12-01

    Interpretation of high resolution aeromagnetic data of Ilesha and its environs, within the basement complex of Southwestern Nigeria, was carried out in this study. The study area is delimited by geographic latitudes 7°30'-8°00'N and longitudes 4°30'-5°00'E. The investigation applied Euler deconvolution to filtered digitised total magnetic data (Sheet Number 243) to delineate geological structures within the area under consideration. The digitised airborne magnetic data, acquired in 2009, were obtained from the archives of the Nigeria Geological Survey Agency (NGSA). The airborne magnetic data were filtered, processed and enhanced; the resultant data were subjected to qualitative and quantitative magnetic interpretation, geometry and depth-weighting analyses across the study area using an Euler deconvolution filter control file in Oasis montaj software. Total magnetic intensity in the field ranged from -77.7 to 139.7 nT. The total magnetic field intensities reveal both high-amplitude and low-amplitude magnetic anomalies in the area under consideration. The study area is characterised by high intensities that correlate with lithological variation in the basement; the sharp contrast reflects the difference in magnetic susceptibility between the crystalline and sedimentary rocks. The reduced-to-equator (RTE) map is characterised by high-frequency, short-wavelength, small, weak, sharp, low-amplitude and nearly irregularly shaped anomalies, which may be due to near-surface sources such as shallow geologic units and cultural features. The Euler deconvolution solutions indicate a generally undulating basement, with depths ranging from -500 to 1000 m. The Euler deconvolution results show that the basement relief is generally gentle and flat, lying within the basement terrain.
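    In each window, Euler deconvolution solves the homogeneity equation (x-x0)·dT/dx + (z-z0)·dT/dz = N(B-T) by linear least squares for the source position (x0, z0) and background field B, given a structural index N. A minimal numpy sketch for a single-window 2D profile with analytic gradients (the synthetic point source and N = 3 are our assumptions, not values from the study):

```python
import numpy as np

def euler_solve(x, z, f, fx, fz, N):
    """Solve Euler's homogeneity equation
        (x - x0)*fx + (z - z0)*fz = -N*(f - B)
    by linear least squares for source position (x0, z0) and background B.
    Rearranged:  x0*fx + z0*fz + N*B = x*fx + z*fz + N*f."""
    G = np.column_stack([fx, fz, N * np.ones_like(f)])
    d = x * fx + z * fz + N * f
    (x0, z0, B), *_ = np.linalg.lstsq(G, d, rcond=None)
    return x0, z0, B

# Synthetic field f = r^-3 (homogeneous of degree -3, i.e. N = 3) from a
# source at x0 = 10, depth z0 = 5, measured along a profile at z = 0.
xs = np.linspace(0.0, 20.0, 41)
zs = np.zeros_like(xs)
r2 = (xs - 10.0) ** 2 + (zs - 5.0) ** 2
f = r2 ** -1.5
fx = -3.0 * (xs - 10.0) * r2 ** -2.5
fz = -3.0 * (zs - 5.0) * r2 ** -2.5
x0, z0, B = euler_solve(xs, zs, f, fx, fz, N=3)
print(round(x0, 2), round(z0, 2))  # 10.0 5.0
```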

  18. Estimating Fluctuating Pressures From Distorted Measurements

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A.; Leondes, Cornelius T.

    1994-01-01

    Two algorithms extract estimates of time-dependent input (upstream) pressures from the outputs of pressure sensors located at the downstream ends of pneumatic tubes. They effect deconvolutions that account for the distorting effects of the tube upon the pressure signal. Distortion of pressure measurements by pneumatic tubes is also discussed in "Distortion of Pressure Signals in Pneumatic Tubes" (ARC-12868). The varying input pressure is estimated from the measured time-varying output pressure by one of two deconvolution algorithms that take account of measurement noise. The algorithms are based on minimum-covariance (Kalman filtering) theory.

  19. A Division-Dependent Compartmental Model for Computing Cell Numbers in CFSE-based Lymphocyte Proliferation Assays

    DTIC Science & Technology

    2012-02-12

    is the total number of data points, is an approximately unbiased estimate of the “expected relative Kullback–Leibler distance” (information loss...possible models). Thus, after each model from Table 2 is fit to a data set, we can compute the Akaike weights for the set of candidate models and use...computed from the OLS best-fit model solution (top), from a deconvolution of the data using normal curves (middle) and from a deconvolution of the data

  20. Fourier Deconvolution Methods for Resolution Enhancement in Continuous-Wave EPR Spectroscopy.

    PubMed

    Reed, George H; Poyner, Russell R

    2015-01-01

    An overview of resolution enhancement of conventional, field-swept, continuous-wave electron paramagnetic resonance spectra using Fourier transform-based deconvolution methods is presented. Basic steps that are involved in resolution enhancement of calculated spectra using an implementation based on complex discrete Fourier transform algorithms are illustrated. Advantages and limitations of the method are discussed. An application to an experimentally obtained spectrum is provided to illustrate the power of the method for resolving overlapped transitions. © 2015 Elsevier Inc. All rights reserved.
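    The basic enhancement step can be sketched generically: transform the spectrum, divide out the transform of an assumed broadening line shape, and re-apodize with a narrower window before transforming back. A minimal numpy sketch, not the chapter's implementation; the Lorentzian/Gaussian choices and widths below are our own assumptions:

```python
import numpy as np

def fourier_enhance(spectrum, lorentz_fwhm, gauss_fwhm, dx=1.0):
    """Resolution enhancement by Fourier deconvolution: divide the
    transformed spectrum by the decay of an assumed Lorentzian broadening
    (the FT of a Lorentzian of FWHM w decays as exp(-pi*w*|k|)) and
    re-apodize with a Gaussian window to limit noise amplification."""
    n = len(spectrum)
    k = np.fft.rfftfreq(n, d=dx)
    F = np.fft.rfft(spectrum)
    F = F / np.exp(-np.pi * lorentz_fwhm * k)        # undo broadening
    F = F * np.exp(-(np.pi * gauss_fwhm * k) ** 2)   # narrower re-apodization
    return np.fft.irfft(F, n)

# Removing half the width of a Lorentzian line leaves a taller, narrower peak.
x = np.arange(256.0)
w = 8.0
line = (w / 2) ** 2 / ((x - 128.0) ** 2 + (w / 2) ** 2)  # Lorentzian, FWHM w
sharp = fourier_enhance(line, lorentz_fwhm=0.5 * w, gauss_fwhm=1.0)
print(sharp.max() > line.max())  # True
```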

  1. Least-Squares Deconvolution of Compton Telescope Data with the Positivity Constraint

    NASA Technical Reports Server (NTRS)

    Wheaton, William A.; Dixon, David D.; Tumer, O. Tumay; Zych, Allen D.

    1993-01-01

    We describe a Direct Linear Algebraic Deconvolution (DLAD) approach to imaging of data from Compton gamma-ray telescopes. Imposition of the additional physical constraint, that all components of the model be non-negative, has been found to have a powerful effect in stabilizing the results, giving spatial resolution at or near the instrumental limit. A companion paper (Dixon et al. 1993) presents preliminary images of the Crab Nebula region using data from COMPTEL on the Compton Gamma-Ray Observatory.
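    The stabilizing effect of the non-negativity constraint can be illustrated with any constrained least-squares solver; below is a minimal numpy sketch using projected gradient descent on a toy 2x2 system of our own devising (not COMPTEL data), where the constraint clamps a component that the unconstrained inverse would drive negative:

```python
import numpy as np

def nnls_pg(A, b, n_iter=5000):
    """Nonnegative least squares by projected gradient descent:
    minimize 0.5*||Ax - b||^2 subject to x >= 0. The projection onto the
    nonnegative orthant enforces the physical positivity constraint."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = np.maximum(0.0, x - (A.T @ (A @ x - b)) / L)
    return x

# The unconstrained solution of this system is (4/3, -2/3); the constrained
# solution clamps the second component to zero and adjusts the first.
A = np.array([[1.0, 0.5],
              [0.5, 1.0]])
b = np.array([1.0, 0.0])
x = nnls_pg(A, b)
print(np.round(x, 3))  # [0.8 0. ]
```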

  2. An l1-TV Algorithm for Deconvolution with Salt and Pepper Noise

    DTIC Science & Technology

    2009-04-01

    deblurring in the presence of impulsive noise ,” Int. J. Comput. Vision, vol. 70, no. 3, pp. 279–298, Dec. 2006. [13] A. E. Beaton and J. W. Tukey, “The...AN 1-TV ALGORITHM FOR DECONVOLUTIONWITH SALT AND PEPPER NOISE Brendt Wohlberg∗ T-7 Mathematical Modeling and Analysis Los Alamos National Laboratory...and pepper noise , but the extension of this formulation to more general prob- lems, such as deconvolution, has received little attention. We consider

  3. Modeling of the Inner Coma of Comet 67P/Churyumov-Gerasimenko Constrained by VIRTIS and ROSINA Observations

    NASA Astrophysics Data System (ADS)

    Fougere, N.; Combi, M. R.; Tenishev, V.; Bieler, A. M.; Migliorini, A.; Bockelée-Morvan, D.; Toth, G.; Huang, Z.; Gombosi, T. I.; Hansen, K. C.; Capaccioni, F.; Filacchione, G.; Piccioni, G.; Debout, V.; Erard, S.; Leyrat, C.; Fink, U.; Rubin, M.; Altwegg, K.; Tzou, C. Y.; Le Roy, L.; Calmonte, U.; Berthelier, J. J.; Rème, H.; Hässig, M.; Fuselier, S. A.; Fiethe, B.; De Keyser, J.

    2015-12-01

    As it orbits around comet 67P/Churyumov-Gerasimenko (CG), the Rosetta spacecraft acquires more information about its main target. The numerous observations made at various geometries and at different times enable good spatial and temporal coverage of the evolution of CG's cometary coma. However, the question of the link between the coma measurements and the nucleus activity remains relatively open, notably due to gas expansion and strong kinetic effects in the comet's rarefied atmosphere. In this work, we use coma observations made by the ROSINA-DFMS instrument to constrain the activity at the surface of the nucleus. The distribution of the H2O and CO2 outgassing is described with the use of spherical harmonics. The coordinates in the orthogonal system represented by the spherical harmonics are computed using a least-squares method, minimizing the sum of the squared residuals between an analytical coma model and the DFMS data. Then, the deduced activity distributions are used in a Direct Simulation Monte Carlo (DSMC) model to compute a full description of the H2O and CO2 coma of comet CG from the nucleus' surface up to several hundreds of kilometers. The DSMC outputs are used to create synthetic images, which can be directly compared with VIRTIS measurements. The good agreement between the VIRTIS observations and the DSMC model, itself constrained with ROSINA data, provides a compelling cross-validation of the measurements from these two instruments. Acknowledgements Work at UofM was supported by contracts JPL#1266313, JPL#1266314 and NASA grant NNX09AB59G. Work at UoB was funded by the State of Bern, the Swiss National Science Foundation and by the ESA PRODEX Program. Work at Southwest Research Institute was supported by subcontract #1496541 from the JPL. Work at BIRA-IASB was supported by the Belgian Science Policy Office via PRODEX/ROSINA PEA 90020. The authors would like to thank ASI, CNES, DLR and NASA for supporting this research. VIRTIS was built by a consortium formed by Italy, France and Germany, under the scientific responsibility of the IAPS of INAF, which also guides the scientific operations. The consortium also includes the LESIA of the Observatoire de Paris and the Institut für Planetenforschung of DLR. The authors wish to thank the RSGS and the RMOC for their continuous support.
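    The surface-activity fit described above is, at its core, a linear least-squares problem once a spherical-harmonic basis is chosen. A minimal numpy sketch using only the l <= 1 real harmonics, written as {1, z, x, y} on the unit sphere (the synthetic coefficients are ours; the paper fits H2O and CO2 outgassing at higher order):

```python
import numpy as np

def fit_activity(theta, phi, flux):
    """Least-squares fit of a surface activity distribution in a real
    spherical-harmonic basis truncated at l = 1 (constant + dipole terms),
    minimizing the sum of squared residuals against the sampled flux."""
    sx = np.sin(theta) * np.cos(phi)
    sy = np.sin(theta) * np.sin(phi)
    sz = np.cos(theta)
    G = np.column_stack([np.ones_like(theta), sz, sx, sy])
    coeffs, *_ = np.linalg.lstsq(G, flux, rcond=None)
    return coeffs

# Synthetic flux samples from known coefficients, then recover them.
rng = np.random.default_rng(1)
theta = np.arccos(rng.uniform(-1.0, 1.0, 500))   # uniform points on the sphere
phi = rng.uniform(0.0, 2.0 * np.pi, 500)
true_c = np.array([1.0, 0.6, -0.3, 0.1])          # e.g. north-enhanced activity
flux = (true_c[0] + true_c[1] * np.cos(theta)
        + true_c[2] * np.sin(theta) * np.cos(phi)
        + true_c[3] * np.sin(theta) * np.sin(phi))
coeffs = fit_activity(theta, phi, flux)
print(np.round(coeffs, 3))
```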

  4. Constraining the Global, Cloud-Free Reflected Solar Radiation Flux (RSRF) with Earth Observing System (EOS) Instruments

    NASA Technical Reports Server (NTRS)

    Kahn, Ralph

    1999-01-01

    Variations in the top-of-atmosphere reflected solar radiation flux, and in the factors that determine its value, are among the most important diagnostic indicators of changes in Earth's energy balance. Data from the MISR (Multi-angle Imaging SpectroRadiometer), MODIS (Moderate-resolution Imaging Spectroradiometer), SAGE-3 (Stratospheric Aerosol and Gas Experiment), and CERES (Clouds and Earth's Radiant Energy System), all of which are spacecraft instruments scheduled for launch in 1999, will each constrain pieces of the RSRF budget. Prior to launch, we are performing studies to determine the sensitivity of these instruments to key factors that influence the cloud-free RSRF: aerosol optical depth, aerosol scattering properties, and surface visible bidirectional reflectance distribution function (BRDF). We are also assessing the ability of the aggregate of instruments to constrain the overall RSRF budget under natural conditions over the globe. Consider the MISR retrieval of aerosols: according to simulations over cloud-free, calm ocean, for pure particles with natural ranges of optical depth, particle size, and indices of refraction, MISR can retrieve column aerosol optical depth for all but the darkest particles, to an uncertainty of at most 0.05 or 20%, whichever is larger, even if the particle properties are poorly known. For one common particle type, soot, constraints on the optical depth over dark ocean are very poor. The simulated measurements also allow us to distinguish spherical from non-spherical particles, to separate two to four compositional groups based on indices of refraction, and to identify three to four distinct size groups between 0.1 and 2.0 microns characteristic radius at most latitudes. Based on these results, we expect to distinguish air masses containing different aerosol types, routinely and globally, with multiangle remote sensing data. 
Such results far exceed current satellite aerosol retrieval capabilities, which provide only total optical depth for assumed particle properties; the new information will complement in situ data, which give details about aerosol size and composition locally. In addition, our team is using climatologies that reflect the constraints each instrument is expected to provide, along with ERBE (Earth Radiation Budget Experiment) data and a radiative transfer code, to study overall sensitivity to RSRF, helping us prepare for similar studies with new data from the EOS-era instruments.

  5. Texas two-step: a framework for optimal multi-input single-output deconvolution.

    PubMed

    Neelamani, Ramesh; Deffenbaugh, Max; Baraniuk, Richard G

    2007-11-01

    Multi-input single-output deconvolution (MISO-D) aims to extract a deblurred estimate of a target signal from several blurred and noisy observations. This paper develops a new two-step framework--Texas Two-Step--to solve MISO-D problems with known blurs. Texas Two-Step first reduces the MISO-D problem to a related single-input single-output deconvolution (SISO-D) problem by invoking the concept of sufficient statistics (SSs) and then solves the simpler SISO-D problem using an appropriate technique. The two-step framework enables new MISO-D techniques (both optimal and suboptimal) based on the rich suite of existing SISO-D techniques. In fact, the properties of SSs imply that a MISO-D algorithm is mean-squared-error optimal if and only if it can be rearranged to conform to the Texas Two-Step framework. Using this insight, we construct new wavelet- and curvelet-based MISO-D algorithms with asymptotically optimal performance. Simulated and real data experiments verify that the framework is indeed effective.
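    The reduction step can be made concrete in the Fourier domain: for observations y_i = h_i * s + white noise with known blurs, the matched-filter combination Z = sum_i conj(H_i)·Y_i is a sufficient statistic, and the equivalent SISO blur spectrum is G = sum_i |H_i|^2. A minimal numpy sketch with circular convolution and two noiseless observations (our toy blurs and a plain regularized inverse stand in for the paper's wavelet/curvelet SISO solvers):

```python
import numpy as np

def miso_to_siso(ys, hs, n):
    """Collapse several blurred observations of one signal into a single
    sufficient statistic Z and an equivalent SISO blur spectrum G
    (white-noise assumption, circular convolution of length n)."""
    Z = np.zeros(n // 2 + 1, dtype=complex)
    G = np.zeros(n // 2 + 1)
    for y, h in zip(ys, hs):
        H = np.fft.rfft(h, n)
        Z += np.conj(H) * np.fft.rfft(y, n)
        G += np.abs(H) ** 2
    return Z, G

n = 64
rng = np.random.default_rng(2)
s = rng.standard_normal(n)
h1 = np.array([0.5, 0.3, 0.2])
h2 = np.array([0.2, 0.6, 0.2])
# Two circularly blurred, noiseless observations of the same signal.
ys = [np.fft.irfft(np.fft.rfft(h, n) * np.fft.rfft(s), n) for h in (h1, h2)]
Z, G = miso_to_siso(ys, [h1, h2], n)
s_hat = np.fft.irfft(Z / (G + 1e-9), n)   # step 2: regularized SISO inverse
print(np.allclose(s_hat, s, atol=1e-4))   # True
```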

  6. Voigt deconvolution method and its applications to pure oxygen absorption spectrum at 1270 nm band.

    PubMed

    Al-Jalali, Muhammad A; Aljghami, Issam F; Mahzia, Yahia M

    2016-03-15

    Experimental spectral lines of pure oxygen at the 1270 nm band were analyzed by a Voigt deconvolution method. The method gave a total Voigt profile arising from two overlapping bands. Deconvolution of the total Voigt profile leads to two Voigt profiles: the first from the O2 dimol envelope at the 1264 nm band, and the second from the O2 monomer envelope at the 1268 nm band. In addition, the Voigt profile itself is the convolution of Lorentzian and Gaussian distributions. Competition between thermal and collisional effects was clearly observed through the competition between the Gaussian and Lorentzian widths of each band envelope. The Voigt full width at half maximum (Voigt FWHM) for each line, and the ratio of the Lorentzian to the Gaussian width (Γ_L/Γ_G), were investigated. Applied pressures were 1, 2, 3, 4, 5, and 8 bar; temperatures were 298 K, 323 K, 348 K, and 373 K. Copyright © 2015 Elsevier B.V. All rights reserved.
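    Width bookkeeping in this kind of analysis often relies on the standard closed-form approximation for the Voigt FWHM in terms of its Lorentzian and Gaussian component widths (the Olivero-Longbothum formula); the check values below are our own, not the paper's data:

```python
def voigt_fwhm(fl, fg):
    """Approximate FWHM of a Voigt profile (accurate to ~0.02%) from the
    FWHMs of its Lorentzian (fl) and Gaussian (fg) components:
        f_V ~= 0.5346*fl + sqrt(0.2166*fl**2 + fg**2)
    """
    return 0.5346 * fl + (0.2166 * fl ** 2 + fg ** 2) ** 0.5

# Limiting cases: a pure Gaussian width is recovered exactly, a pure
# Lorentzian width to within the formula's stated accuracy.
print(voigt_fwhm(0.0, 3.0))  # 3.0
print(voigt_fwhm(2.0, 0.0))
```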

  7. A novel SURE-based criterion for parametric PSF estimation.

    PubMed

    Xue, Feng; Blu, Thierry

    2015-02-01

    We propose an unbiased estimate of a filtered version of the mean squared error--the blur-SURE (Stein's unbiased risk estimate)--as a novel criterion for estimating an unknown point spread function (PSF) from the degraded image only. The PSF is obtained by minimizing this new objective functional over a family of Wiener processings. Based on this estimated blur kernel, we then perform nonblind deconvolution using our recently developed algorithm. The SURE-based framework is exemplified with a number of parametric PSFs, involving a scaling factor that controls the blur size. A typical example of such a parametrization is the Gaussian kernel. The experimental results demonstrate that minimizing the blur-SURE yields highly accurate estimates of the PSF parameters, which also result in a restoration quality very similar to the one obtained with the exact PSF when plugged into our recent multi-Wiener SURE-LET deconvolution algorithm. The highly competitive results obtained outline the great potential of developing more powerful blind deconvolution algorithms based on SURE-like estimates.

  8. Palladium-based Mass-Tag Cell Barcoding with a Doublet-Filtering Scheme and Single Cell Deconvolution Algorithm

    PubMed Central

    Zunder, Eli R.; Finck, Rachel; Behbehani, Gregory K.; Amir, El-ad D.; Krishnaswamy, Smita; Gonzalez, Veronica D.; Lorang, Cynthia G.; Bjornson, Zach; Spitzer, Matthew H.; Bodenmiller, Bernd; Fantl, Wendy J.; Pe’er, Dana; Nolan, Garry P.

    2015-01-01

    SUMMARY Mass-tag cell barcoding (MCB) labels individual cell samples with unique combinatorial barcodes, after which they are pooled for processing and measurement as a single multiplexed sample. The MCB method eliminates variability between samples in antibody staining and instrument sensitivity, reduces antibody consumption, and shortens instrument measurement time. Here, we present an optimized MCB protocol with several improvements over previously described methods. The use of palladium-based labeling reagents expands the number of measurement channels available for mass cytometry and reduces interference with lanthanide-based antibody measurement. An error-detecting combinatorial barcoding scheme allows cell doublets to be identified and removed from the analysis. A debarcoding algorithm that is single cell-based rather than population-based improves the accuracy and efficiency of sample deconvolution. This debarcoding algorithm has been packaged into software that allows rapid and unbiased sample deconvolution. The MCB procedure takes 3–4 h, not including sample acquisition time of ~1 h per million cells. PMID:25612231

  9. Automated processing for proton spectroscopic imaging using water reference deconvolution.

    PubMed

    Maudsley, A A; Wu, Z; Meyerhoff, D J; Weiner, M W

    1994-06-01

    Automated formation of MR spectroscopic images (MRSI) is necessary before routine application of these methods is possible for in vivo studies; however, this task is complicated by the presence of spatially dependent instrumental distortions and the complex nature of the MR spectrum. A data processing method is presented for completely automated formation of in vivo proton spectroscopic images, and applied for analysis of human brain metabolites. This procedure uses the water reference deconvolution method (G. A. Morris, J. Magn. Reson. 80, 547(1988)) to correct for line shape distortions caused by instrumental and sample characteristics, followed by parametric spectral analysis. Results for automated image formation were found to compare favorably with operator dependent spectral integration methods. While the water reference deconvolution processing was found to provide good correction of spatially dependent resonance frequency shifts, it was found to be susceptible to errors for correction of line shape distortions. These occur due to differences between the water reference and the metabolite distributions.

  10. DECONV-TOOL: An IDL based deconvolution software package

    NASA Technical Reports Server (NTRS)

    Varosi, F.; Landsman, W. B.

    1992-01-01

    There are a variety of algorithms for deconvolution of blurred images, each having its own criteria or statistic to be optimized in order to estimate the original image data. Using the Interactive Data Language (IDL), we have implemented the Maximum Likelihood, Maximum Entropy, Maximum Residual Likelihood, and sigma-CLEAN algorithms in a unified environment called DeConv_Tool. Most of the algorithms have as their goal the optimization of statistics such as standard deviation and mean of residuals. Shannon entropy, log-likelihood, and chi-square of the residual auto-correlation are computed by DeConv_Tool for the purpose of determining the performance and convergence of any particular method and comparisons between methods. DeConv_Tool allows interactive monitoring of the statistics and the deconvolved image during computation. The final results, and optionally, the intermediate results, are stored in a structure convenient for comparison between methods and review of the deconvolution computation. The routines comprising DeConv_Tool are available via anonymous FTP through the IDL Astronomy User's Library.

  11. Deconvolutions based on singular value decomposition and the pseudoinverse: a guide for beginners.

    PubMed

    Hendler, R W; Shrager, R I

    1994-01-01

    Singular value decomposition (SVD) is deeply rooted in the theory of linear algebra, and because of this is not readily understood by a large group of researchers who could profit from its application. In this paper, we discuss the subject on a level that should be understandable to scientists who are not well versed in linear algebra. However, because it is necessary that certain key concepts in linear algebra be appreciated in order to comprehend what is accomplished by SVD, we present the section, 'Bare basics of linear algebra'. This is followed by a discussion of the theory of SVD. Next we present step-by-step examples to illustrate how SVD is applied to deconvolute a titration involving a mixture of three pH indicators. One noiseless case is presented as well as two cases where either a fixed or varying noise level is present. Finally, we discuss additional deconvolutions of mixed spectra based on the use of the pseudoinverse.
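    The paper's worked examples amount to applying an SVD-based (truncated) pseudoinverse to a mixture measurement. A minimal numpy sketch, with random columns standing in for the three pH indicator spectra discussed:

```python
import numpy as np

def pinv_deconvolute(A, b, tol=1e-10):
    """Recover component amounts from a measured mixture b, where the
    columns of A hold the pure-component spectra. Singular values below
    tol * s_max are truncated, which is what keeps the pseudoinverse
    stable when the basis spectra are noisy or nearly degenerate."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > tol * s[0]
    return Vt.T[:, keep] @ ((U.T[keep] @ b) / s[keep])

# Three pure "indicator" spectra (columns) and a known mixture of them.
rng = np.random.default_rng(3)
A = rng.random((50, 3))
conc = np.array([0.2, 0.5, 0.3])
c_hat = pinv_deconvolute(A, A @ conc)
print(np.round(c_hat, 3))  # [0.2 0.5 0.3]
```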

  12. Robust dynamic myocardial perfusion CT deconvolution using adaptive-weighted tensor total variation regularization

    NASA Astrophysics Data System (ADS)

    Gong, Changfei; Zeng, Dong; Bian, Zhaoying; Huang, Jing; Zhang, Xinyu; Zhang, Hua; Lu, Lijun; Feng, Qianjin; Liang, Zhengrong; Ma, Jianhua

    2016-03-01

    Dynamic myocardial perfusion computed tomography (MPCT) is a promising technique for diagnosis and risk stratification of coronary artery disease by assessing the myocardial perfusion hemodynamic maps (MPHM). Meanwhile, repeated scanning of the same region potentially results in a relatively large radiation dose to patients. In this work, we present a robust MPCT deconvolution algorithm with adaptive-weighted tensor total variation regularization, termed `MPD-AwTTV', to estimate the residue function accurately in the low-dose context. More specifically, the AwTTV regularization takes into account the anisotropic edge property of the MPCT images, which mitigates the drawbacks of conventional total variation (TV) regularization. Subsequently, an effective iterative algorithm was adopted to minimize the associated objective function. Experimental results on a modified XCAT phantom demonstrated that the present MPD-AwTTV algorithm outperforms other existing deconvolution algorithms in terms of noise-induced artifact suppression, edge detail preservation, and accurate MPHM estimation.

  13. Order and Jamming on Curved Surfaces

    NASA Astrophysics Data System (ADS)

    Burke, Christopher J.

    Geometric frustration occurs when a physical system's preferred ordering (e.g. spherical particles packing in a hexagonal lattice) is incompatible with the system's geometry. An example of this occurs in arrested relaxation in Pickering emulsions. Pickering emulsions are emulsions (e.g. mixtures of oil and water) with colloidal particles mixed in. The particles tend to lie at an oil-water interface, and can coat the surface of droplets within the emulsion (e.g. an oil droplet surrounded by water). If a droplet is deformed from its spherical ground state, more particles adsorb at the surface, and the droplet is allowed to relax, then the particles on the surface can become close packed and prevent further relaxation, arresting the droplet in a non-spherical shape. The resulting structures tend to be relatively well ordered, with regions of highly hexagonal packing; however, the curvature of the surface prevents perfect ordering, and defects in the packing are required. These defects may influence the stability of these structures, making it important to understand how to predict and control them for applications in the food, cosmetic, oil, and medical industries. In this work, we use simulations to study the ordering and stability of sphere packings on arrested emulsion droplets. We first isolate the role of surface geometry by creating packings on a static ellipsoidal surface. Next we perform simulations that include dynamic effects present in the experimental Pickering emulsion system. Packings are created by evolving an ellipsoidal surface towards a spherical shape at fixed volume; the effects of relaxation rate, interparticle attraction, and gravity are determined. Finally, we study jamming on curved surfaces. Packings of hard particles are used to study marginally stable packings and the role curvature plays in constraining them. 
We also study packings of soft particles, compressed beyond marginal stability, and find that geometric frustration plays an important role in determining their mechanical properties.

  14. Fast and accurate reference-free alignment of subtomograms.

    PubMed

    Chen, Yuxiang; Pfeffer, Stefan; Hrabe, Thomas; Schuller, Jan Michael; Förster, Friedrich

    2013-06-01

    In cryoelectron tomography, alignment and averaging of subtomograms, each depicting the same macromolecule, improve the resolution compared to the individual subtomograms. Major challenges of subtomogram alignment are noise enhancement due to overfitting, the bias of an initial reference in the iterative alignment process, and the computational cost of processing increasingly large amounts of data. Here, we propose an efficient and accurate alignment algorithm via a generalized convolution theorem, which allows computation of a constrained correlation function using spherical harmonics. This formulation dramatically increases the computational speed of rotational matching compared to rotation search in Cartesian space and, in contrast to other spherical-harmonic-based approaches, does so without sacrificing accuracy. Using this sampling method, a reference-free alignment procedure is proposed to tackle reference bias and overfitting, which also includes contrast transfer function correction by Wiener filtering. Application of the method to simulated data allowed us to obtain resolutions near the ground truth. For two experimental datasets, ribosomes from yeast lysate and purified 20S proteasomes, we achieved reconstructions of approximately 20 Å and 16 Å, respectively. The software is ready-to-use and made public to the community. Copyright © 2013 Elsevier Inc. All rights reserved.

  15. Sequence analysis of ORF IV RTBV isolated from tungro infected Oryza sativa L. cv Ciherang

    NASA Astrophysics Data System (ADS)

    Hastilestari, Bernadetta Rina; Astuti, Dwi; Estiati, Amy; Nugroho, Satya

    2015-09-01

    The effort to increase rice production is often constrained by pests and diseases such as tungro. The tungro disease is caused by joint infection with two dissimilar viruses, a bacilliform DNA virus, the Rice tungro bacilliform virus (RTBV), and a spherical RNA virus, the Rice tungro spherical virus (RTSV), and is transmitted by the green leafhopper (Nephotettix virescens). The symptoms of the disease are caused by the presence of RTBV. The genome of RTBV consists of four open reading frames (ORFs) which encode functional proteins. Of the four, ORF IV is unique because it exists only in RTBV. The most efficient method of generating disease-resistant plants is to look for natural sources of resistance genes in wild relatives or germplasm and then transfer the gene and the accompanying resistance into cultivated crop varieties. The aim of this study is, therefore, to isolate and analyze the 1170 bp ORF IV gene of the tungro virus isolated from an Indonesian rice cultivar, Ciherang (Oryza sativa L. cv Indica). DNA sequencing analysis using BLAST showed 94% similarity with the GenBank reference sequence (Acc. M65026.1). The comparison and mutation analysis of the DNA sequences are discussed in this research.

  16. Efficient color mixing through étendue conservation using freeform optics

    NASA Astrophysics Data System (ADS)

    Sorgato, Simone; Mohedano, Rubén; Chaves, Julio; Cvetkovic, Aleksandra; Hernández, Maikel; Benitez, Pablo; Miñano, Juan C.; Thienpont, Hugo; Duerr, Fabian

    2015-08-01

    Today's SSL illumination market shows a clear trend to high-flux packages with higher efficiency and higher CRI, realized by means of multiple color chips and phosphors. Such light sources require the optics to provide both near- and far-field color mixing. This design problem is particularly challenging for collimated luminaires, since traditional diffusers cannot be employed without enlarging the exit aperture and reducing brightness. Furthermore, diffusers compromise the light output ratio (efficiency) of the lamps to which they are applied. A solution, based on Köhler integration, consisting of a spherical cap comprising spherical microlenses on both its interior and exterior sides, was presented in 2012. The diameter of this so-called Shell-Mixer was 3 times that of the chip array footprint. In this work we present a new version of the Shell-Mixer, based on the Edge Ray Principle and conservation of étendue, in which neither the outer shape of the cap nor the surfaces of the lenses are constrained to spheres or 2D Cartesian ovals. The new shell is freeform, only twice as large as the original chip array, and equals the original model in terms of color uniformity, brightness, and efficiency.

  17. Rings in Evolved Stars: Fingerprints of Their Mass-Loss History

    NASA Astrophysics Data System (ADS)

    Ramos-Larios, Gerardo; Santamaria, Edgar; Sabin, Laurence; Guerrero, Martin; Marquez-Lugo, Alejandro

    2015-08-01

    The majority of intermediate-mass evolved stars, i.e. asymptotic giant branch (AGB) stars, post-AGB stars, and pre-planetary nebulae (PPN), are well known for being characterized by external structures such as knots, arcs, ansae, jets, haloes, shells, and even annular enhancements in intensity, features which are commonly referred to as rings. These are well described either as spherical bubbles of periodic isotropic nuclear mass pulsations (Balick, Wilson & Hajian 2001) or as projections of spherical shells onto the plane of the sky (Kwok 2001). These interesting structures are part of the AGB wind, suggesting that this wind comes in a series of semi-periodic lapses, indicating that the outflow has quasi-periodic oscillations. After an extensive analysis of the Hubble Space Telescope (HST) archives, we found new ring-like structures in several evolved stars. Following the image analysis procedure described by Corradi et al. (2004), and using unsharp masking techniques, it was possible to enhance the ring structures and to obtain an effective removal of the underlying halo emission. Our new findings will help first to constrain the physical processes responsible for the creation of the rings and then to better understand the mass-loss activity in these evolved stars.
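The unsharp-masking step mentioned here (subtracting a blurred copy of the image to suppress the smooth halo and bring out narrow rings) can be sketched as follows. The image, blur width, and amplification are all made up for illustration; the authors' actual pipeline follows Corradi et al. (2004).

```python
import numpy as np

def gaussian_kernel(sigma):
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def unsharp_mask(img, sigma=8.0, amount=2.0):
    """Enhance narrow features (e.g. faint rings) by subtracting a
    Gaussian-blurred copy of the image: out = img + amount*(img - blurred)."""
    k = gaussian_kernel(sigma)
    # separable 2-D Gaussian blur: filter rows, then columns
    blurred = np.apply_along_axis(lambda row: np.convolve(row, k, mode='same'), 1, img)
    blurred = np.apply_along_axis(lambda col: np.convolve(col, k, mode='same'), 0, blurred)
    return img + amount * (img - blurred)

# Toy image: a smooth stellar halo plus a faint narrow ring at radius 30 px
y, x = np.mgrid[-64:64, -64:64]
r = np.hypot(x, y)
img = np.exp(-r / 200.0) + 0.05 * np.exp(-0.5 * ((r - 30) / 1.5) ** 2)
enhanced = unsharp_mask(img, sigma=8.0, amount=2.0)
```

Because the blur removes only low spatial frequencies, the smooth halo largely cancels in `img - blurred` while the narrow ring survives, so the ring's contrast against its local background increases.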

  18. Profile measurements in the plasma edge of mega amp spherical tokamak using a ball pen probe

    NASA Astrophysics Data System (ADS)

    Walkden, N. R.; Adamek, J.; Allan, S.; Dudson, B. D.; Elmore, S.; Fishpool, G.; Harrison, J.; Kirk, A.; Komm, M.

    2015-02-01

    The ball pen probe (BPP) technique is used successfully to make profile measurements of plasma potential, electron temperature, and radial electric field on the Mega Amp Spherical Tokamak. The potential profile measured by the BPP is shown to differ significantly from the floating potential, both in polarity and in profile shape. By combining the BPP potential and the floating potential, the electron temperature can be measured, which is compared with the Thomson scattering (TS) diagnostic. Excellent agreement between the two diagnostics is obtained when secondary electron emission is accounted for in the floating potential. From the BPP profile, an estimate of the radial electric field is extracted, which is shown to be of the order of ~1 kV/m and increases with plasma current. Corrections to the BPP measurement, constrained by the TS comparison, introduce uncertainty into the radial electric field measurements. The uncertainty is most significant in the electric field well inside the separatrix. The electric field is used to estimate toroidal and poloidal rotation velocities from E × B motion. This paper further demonstrates the ability of the ball pen probe to make valuable and important measurements in the boundary plasma of a tokamak.
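The combination described here, electron temperature from the BPP potential and the floating potential, is commonly written as T_e = (phi_BPP - V_fl)/alpha. A minimal sketch follows; the coefficient alpha ~ 2.2 is a typical sheath value for a deuterium plasma (the paper's exact coefficient, including the secondary-emission correction, may differ), and the profile numbers are invented.

```python
import numpy as np

def electron_temperature(phi_bpp, v_float, alpha=2.2):
    """Electron temperature (eV) from ball-pen-probe and floating potentials,
    T_e = (phi_BPP - V_fl) / alpha.  alpha ~ 2.2 is a typical sheath
    coefficient for deuterium; its exact value depends on the sheath model
    and on secondary electron emission."""
    return (np.asarray(phi_bpp) - np.asarray(v_float)) / alpha

# Illustrative (made-up) radial profiles in volts, points spaced 1 cm apart
phi_bpp = np.array([40.0, 30.0, 18.0, 8.0])
v_float = np.array([-15.0, -14.0, -10.0, -5.0])
te = electron_temperature(phi_bpp, v_float)   # eV
er = -np.gradient(phi_bpp, 0.01)              # radial electric field E_r = -d(phi)/dr, V/m
```

With these toy numbers the derived field is of order 1 kV/m, the same order reported in the abstract; the E x B rotation estimates then follow from dividing E_r by the magnetic field.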

  19. Scalar field dark energy with a minimal coupling in a spherically symmetric background

    NASA Astrophysics Data System (ADS)

    Matsumoto, Jiro

    Dark energy models and modified gravity theories have been actively studied, and for some of these models the behavior in the solar system has also been carefully investigated. However, the isotropic solutions of the field equations in the simple models of dark energy, e.g. the quintessence model without matter coupling, have not been well investigated. One of the reasons may be the nonlinearity of the field equations. In this paper, a method to evaluate the solution of the field equations is constructed, and it is shown that there is a model that can easily pass the solar system tests, whereas there is also a model that is constrained by the solar system tests.

  20. A Simple Model for Immature Retrovirus Capsid Assembly

    NASA Astrophysics Data System (ADS)

    Paquay, Stefan; van der Schoot, Paul; Dragnea, Bogdan

    In this talk I will present simulations of a simple model for capsomeres in immature virus capsids, consisting of only point particles with a tunable range of attraction constrained to a spherical surface. We find that, at sufficiently low density, a short interaction range is sufficient to suppress five-fold defects in the packing, causing instead larger tears and scars in the capsid. These findings agree both qualitatively and quantitatively with experiments on immature retrovirus capsids, implying that the structure of the retroviral protein lattice can, in large part, be explained simply by the effective interaction between the capsomeres. We thank the HFSP for funding under Grant RGP0017/2012.

  1. Combining the modified Skyrme-like model and the local density approximation to determine the symmetry energy of nuclear matter

    NASA Astrophysics Data System (ADS)

    Liu, Jian; Ren, Zhongzhou; Xu, Chang

    2018-07-01

    Combining the modified Skyrme-like model and the local density approximation model, the slope parameter L of the symmetry energy is extracted from the properties of finite nuclei with an improved iterative method. The calculations of the iterative method are performed within the framework of spherical symmetry. By choosing 200 neutron-rich nuclei on 25 isotopic chains as candidates, the slope parameter is constrained to be 50 MeV < L < 62 MeV. The validity of this method is examined against the properties of finite nuclei. Results show that reasonable descriptions of the properties of finite nuclei and nuclear matter can be obtained together.

  2. Dark Candles of the Universe: Black Hole Observations

    NASA Astrophysics Data System (ADS)

    Aykutalp, Aycin

    2016-03-01

    In 1916, when Karl Schwarzschild solved the Einstein field equations of general relativity for a spherically symmetric, non-rotating mass, no one anticipated the impact black holes would have on astrophysics. I will review the main formation channels for black hole seeds and their evolution through cosmic time. In this, emphasis will be placed on the observational diagnostics of astrophysical black holes and their role in galaxy formation and evolution. I then review how these observations constrain theories of seed black hole formation. Finally, I present an outlook on how future observations can shed light on our understanding of black holes. This work is supported by NSF Grant AST-1333360.

  3. X-ray Reverberation Mapping of Ci Cam

    NASA Astrophysics Data System (ADS)

    Bartlett, Elizabeth; Garcia, M.

    2009-01-01

    We have analyzed the X-ray lightcurve of the star CI Cam, the optical counterpart of the X-ray transient XTE J0421+56, using data from XMM-Newton. Our motivation is based on evidence from ground-based optical interferometry from the Keck and IOTA observatories which suggests that the dust surrounding CI Cam has a torus morphology rather than a spherical distribution as previously hypothesized. By using a technique known as reverberation mapping, we have constrained the time delay between the continuum of CI Cam and the Fe-K fluorescence line, corresponding to the reflection of the continuum off the dusty torus. The time delay yields information on the size of the torus.
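The core of reverberation mapping, estimating the delay between the driving continuum and the reprocessed line emission, is often done by locating the peak of the cross-correlation function. A minimal sketch on synthetic light curves follows (the flare shapes and delay are invented; the paper's analysis of the actual XMM-Newton lightcurve is more involved).

```python
import numpy as np

def lag_of_peak_correlation(continuum, line, dt):
    """Delay estimate: the lag (in time units) that maximizes the
    cross-correlation between the continuum and line light curves.
    A positive result means the line lags the continuum."""
    c = continuum - continuum.mean()
    l = line - line.mean()
    corr = np.correlate(l, c, mode='full')       # lags -(n-1) .. (n-1)
    lags = np.arange(-len(c) + 1, len(c)) * dt
    return lags[np.argmax(corr)]

# Synthetic curves: a continuum flare echoed by the line 5 samples later
t = np.arange(200.0)
cont = np.exp(-0.5 * ((t - 60) / 4.0) ** 2)
line = 0.3 * np.exp(-0.5 * ((t - 65) / 4.0) ** 2)   # delayed, dimmed echo
tau = lag_of_peak_correlation(cont, line, dt=1.0)
```

The recovered lag, multiplied by the speed of light, then gives a light-travel-time scale for the reflecting structure, which is how the delay constrains the torus size.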

  4. Optimal application of Morrison's iterative noise removal for deconvolution. Appendices

    NASA Technical Reports Server (NTRS)

    Ioup, George E.; Ioup, Juliette W.

    1987-01-01

    Morrison's iterative method of noise removal, or Morrison's smoothing, is applied in a simulation to noise-added data sets of various noise levels to determine its optimum use. Morrison's smoothing is applied for noise removal alone, and for noise removal prior to deconvolution. For the latter, an accurate method is analyzed to provide confidence in the optimization. The method consists of convolving the data with an inverse filter calculated by taking the inverse discrete Fourier transform of the reciprocal of the transform of the response of the system. Filters of various lengths are calculated for the narrow and wide Gaussian response functions used. Deconvolution of non-noisy data is performed, and the error in each deconvolution is calculated. Plots are produced of error versus filter length, and from these plots the most accurate filter lengths are determined. The statistical methodologies employed in the optimizations of Morrison's method are similar. A typical peak-type input is selected and convolved with the two response functions to produce the data sets to be analyzed. Both constant and ordinate-dependent Gaussian distributed noise are added to the data, where the noise levels of the data are characterized by their signal-to-noise ratios. The error measures employed in the optimizations are the L1 and L2 norms. Results of the optimizations for both Gaussians, both noise types, and both norms include figures of optimum iteration number and error improvement versus signal-to-noise ratio, and tables of results. The statistical variation of all quantities considered is also given.
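The inverse-filter deconvolution described here (inverse DFT of the reciprocal of the response's transform, applied to the data) is equivalent to frequency-domain division. A minimal sketch with a synthetic peak-type input and a narrow Gaussian response, in the spirit of the simulation, assuming circular convolution and noise-free data:

```python
import numpy as np

def inverse_filter_deconvolve(data, response):
    """Deconvolve by dividing the data spectrum by the response spectrum.
    Exact for noise-free circular convolution, but it amplifies noise at
    frequencies where the response spectrum is small -- which is why noise
    removal (e.g. Morrison's smoothing) is applied first in the paper."""
    H = np.fft.fft(response, n=len(data))
    return np.real(np.fft.ifft(np.fft.fft(data) / H))

# Narrow Gaussian response and a peak-type input (synthetic)
n = 128
x = np.arange(n)
response = np.exp(-0.5 * ((x - 10) / 2.0) ** 2)
truth = np.exp(-0.5 * ((x - 64) / 5.0) ** 2)
data = np.real(np.fft.ifft(np.fft.fft(truth) * np.fft.fft(response)))  # circular blur
recovered = inverse_filter_deconvolve(data, response)
```

With noise present, the division blows up wherever the Gaussian response spectrum is tiny; truncating the filter length, as studied in the abstract, is one way to tame this.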

  5. Total variation based image deconvolution for extended depth-of-field microscopy images

    NASA Astrophysics Data System (ADS)

    Hausser, F.; Beckers, I.; Gierlak, M.; Kahraman, O.

    2015-03-01

    One approach for a detailed understanding of dynamical cellular processes during drug delivery is the use of functionalized biocompatible nanoparticles and fluorescent markers. An appropriate imaging system has to detect these moving particles, as well as whole cell volumes, in real time with a lateral resolution in the range of a few 100 nm. In a previous study, extended depth-of-field microscopy (EDF-microscopy) was applied to fluorescent beads and Tradescantia stamen hair cells, and the concept of real-time imaging was proved in different microscopic modes. In principle, a phase retardation system such as a programmable spatial light modulator or a static waveplate is incorporated in the light path and modulates the wavefront of light. Hence the focal ellipsoid is smeared out and images appear blurred in a first step. An image restoration by deconvolution using the known point-spread function (PSF) of the optical system is necessary to achieve sharp microscopic images of an extended depth of field. This work is focused on the investigation and optimization of deconvolution algorithms to solve this restoration problem satisfactorily. This inverse problem is challenging due to the presence of Poisson-distributed noise and Gaussian noise, and since the PSF used for deconvolution exactly fits in just one plane within the object. We use non-linear total-variation-based image restoration techniques, in which different types of noise can be treated properly. Various algorithms are evaluated for artificially generated 3D images as well as for fluorescence measurements of BPAE cells.
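A total-variation-regularized deconvolution of the kind discussed here can be illustrated in 1-D with a smoothed TV penalty minimized by gradient descent. This is a toy sketch under simplifying assumptions (circular convolution, Gaussian noise only, hand-picked regularization weight); the paper's algorithms for mixed Poisson-Gaussian noise in 3-D are substantially more sophisticated.

```python
import numpy as np

def tv_deconvolve(y, psf, lam=0.02, step=0.2, n_iter=800, eps=1e-4):
    """Gradient descent on ||psf (*) x - y||^2 + lam * sum sqrt(dx^2 + eps),
    a smoothed total-variation deconvolution ((*) = circular convolution,
    evaluated with FFTs; psf must have unit sum and the same length as y)."""
    H = np.fft.fft(psf)
    x = y.copy()
    for _ in range(n_iter):
        r = np.real(np.fft.ifft(H * np.fft.fft(x))) - y
        grad_fid = 2.0 * np.real(np.fft.ifft(np.conj(H) * np.fft.fft(r)))
        dx = np.diff(x)
        w = dx / np.sqrt(dx ** 2 + eps)          # derivative of the smoothed |.|
        grad_tv = np.concatenate(([-w[0]], w[:-1] - w[1:], [w[-1]]))
        x = x - step * (grad_fid + lam * grad_tv)
    return x

# Piecewise-constant object blurred by a unit-sum Gaussian PSF, plus noise
n = 100
x_true = np.zeros(n); x_true[30:60] = 1.0
g = np.exp(-0.5 * ((np.arange(n) - n // 2) / 2.0) ** 2)
psf = np.roll(g / g.sum(), -n // 2)              # PSF centered at index 0
y = np.real(np.fft.ifft(np.fft.fft(x_true) * np.fft.fft(psf)))
rng = np.random.default_rng(1)
y_noisy = y + 0.01 * rng.normal(size=n)
x_hat = tv_deconvolve(y_noisy, psf)
```

The TV term flattens small noise fluctuations while leaving large jumps nearly untouched, which is why it restores sharp edges without the ringing of plain inverse filtering.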

  6. Deconvolution of the vestibular evoked myogenic potential.

    PubMed

    Lütkenhöner, Bernd; Basel, Türker

    2012-02-07

    The vestibular evoked myogenic potential (VEMP) and the associated variance modulation can be understood by a convolution model. Two functions of time are incorporated into the model: the motor unit action potential (MUAP) of an average motor unit, and the temporal modulation of the MUAP rate of all contributing motor units, briefly called rate modulation. The latter is the function of interest, whereas the MUAP acts as a filter that distorts the information contained in the measured data. Here, it is shown how to recover the rate modulation by undoing the filtering using a deconvolution approach. The key aspects of our deconvolution algorithm are as follows: (1) the rate modulation is described in terms of just a few parameters; (2) the MUAP is calculated by Wiener deconvolution of the VEMP with the rate modulation; (3) the model parameters are optimized using a figure-of-merit function where the most important term quantifies the difference between measured and model-predicted variance modulation. The effectiveness of the algorithm is demonstrated with simulated data. An analysis of real data confirms the view that there are basically two components, which roughly correspond to the waves p13-n23 and n34-p44 of the VEMP. The rate modulation corresponding to the first, inhibitory component is much stronger than that corresponding to the second, excitatory component. But the latter is more extended so that the two modulations have almost the same equivalent rectangular duration. Copyright © 2011 Elsevier Ltd. All rights reserved.
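Step (2) of the algorithm, Wiener deconvolution of the VEMP with the rate modulation to obtain the MUAP, can be sketched with the standard frequency-domain Wiener formula. The waveforms and the noise-to-signal parameter below are synthetic stand-ins, not the paper's data or parameterization.

```python
import numpy as np

def wiener_deconvolve(y, h, nsr=1e-6):
    """Wiener deconvolution: X = conj(H) Y / (|H|^2 + nsr), where nsr is an
    assumed constant noise-to-signal power ratio (nsr -> 0 recovers the
    plain inverse filter)."""
    H = np.fft.fft(h, n=len(y))
    X = np.conj(H) * np.fft.fft(y) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft(X))

# Toy model of the convolution structure: VEMP = rate modulation (*) MUAP
t = np.arange(256)
# inhibitory lobe followed by a weaker, broader excitatory lobe
rate = -np.exp(-0.5 * ((t - 40) / 4.0) ** 2) + 0.4 * np.exp(-0.5 * ((t - 90) / 8.0) ** 2)
muap = np.exp(-0.5 * ((t - 12) / 2.0) ** 2) - np.exp(-0.5 * ((t - 20) / 2.0) ** 2)
vemp = np.real(np.fft.ifft(np.fft.fft(rate) * np.fft.fft(muap)))   # circular convolution
muap_est = wiener_deconvolve(vemp, rate, nsr=1e-6)   # recover MUAP given the rate
reblurred = np.real(np.fft.ifft(np.fft.fft(rate) * np.fft.fft(muap_est)))
```

In the paper this step sits inside an outer optimization: the few parameters describing the rate modulation are adjusted until the model-predicted variance modulation matches the measurement.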

  7. Isotope pattern deconvolution as a tool to study iron metabolism in plants.

    PubMed

    Rodríguez-Castrillón, José Angel; Moldovan, Mariella; García Alonso, J Ignacio; Lucena, Juan José; García-Tomé, Maria Luisa; Hernández-Apaolaza, Lourdes

    2008-01-01

    Isotope pattern deconvolution is a mathematical technique for isolating distinct isotope signatures from mixtures of natural abundance and enriched tracers. In iron metabolism studies measurement of all four isotopes of the element by high-resolution multicollector or collision cell ICP-MS allows the determination of the tracer/tracee ratio with simultaneous internal mass bias correction and lower uncertainties. This technique was applied here for the first time to study iron uptake by cucumber plants using 57Fe-enriched iron chelates of the o,o and o,p isomers of ethylenediaminedi(o-hydroxyphenylacetic) acid (EDDHA) and ethylenediamine tetraacetic acid (EDTA). Samples of root, stem, leaves, and xylem sap, after exposure of the cucumber plants to the mentioned 57Fe chelates, were collected, dried, and digested using nitric acid. The isotopic composition of iron in the samples was measured by ICP-MS using a high-resolution multicollector instrument. Mass bias correction was computed using both a natural abundance iron standard and by internal correction using isotope pattern deconvolution. It was observed that, for plants with low 57Fe enrichment, isotope pattern deconvolution provided lower tracer/tracee ratio uncertainties than the traditional method applying external mass bias correction. The total amount of the element in the plants was determined by isotope dilution analysis, using a collision cell quadrupole ICP-MS instrument, after addition of 57Fe or natural abundance Fe in a known amount which depended on the isotopic composition of the sample.
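The mathematical core of isotope pattern deconvolution, expressing a measured abundance pattern as a least-squares mixture of a natural pattern and an enriched-tracer pattern, can be sketched as follows. The natural Fe abundances are standard IUPAC values; the tracer pattern and mixing fractions are hypothetical illustrations, not the study's measured values.

```python
import numpy as np

# Abundance patterns for (54Fe, 56Fe, 57Fe, 58Fe)
nat = np.array([0.05845, 0.91754, 0.02119, 0.00282])     # natural iron
tracer = np.array([0.001, 0.020, 0.965, 0.014])          # hypothetical 57Fe-enriched tracer

def deconvolve_pattern(measured, patterns):
    """Least-squares isotope pattern deconvolution: find the molar fractions
    of each source pattern that best reproduce the measured abundances."""
    coeffs, *_ = np.linalg.lstsq(patterns.T, measured, rcond=None)
    return coeffs / coeffs.sum()        # normalize to molar fractions

# Synthetic sample: 80% natural iron (tracee) + 20% enriched tracer
measured = 0.8 * nat + 0.2 * tracer
fractions = deconvolve_pattern(measured, np.vstack([nat, tracer]))
tracer_tracee_ratio = fractions[1] / fractions[0]
```

Because all four isotopes enter the fit simultaneously, the overdetermined system absorbs a common multiplicative error, which is the intuition behind the internal mass-bias correction the abstract describes.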

  8. Comment on the paper "Thermoluminescence glow-curve deconvolution functions for mixed order of kinetics and continuous trap distribution by G. Kitis, J.M. Gomez-Ros, Nuclear Instruments and Methods in Physics Research A 440, 2000, pp 224-231"

    NASA Astrophysics Data System (ADS)

    Kazakis, Nikolaos A.

    2018-01-01

    The present comment concerns the correct presentation of an algorithm proposed in the above paper for glow-curve deconvolution in the case of a continuous distribution of trapping states. Since most researchers would use the proposed algorithm directly as published, they should be notified of its correct formulation for the fitting of TL glow curves of materials with a continuous trap distribution using this equation.

  9. An l1-TV algorithm for deconvolution with salt and pepper noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wohlberg, Brendt; Rodriguez, Paul

    2008-01-01

    There has recently been considerable interest in applying Total Variation with an l1 data fidelity term to the denoising of images subject to salt and pepper noise, but the extension of this formulation to more general problems, such as deconvolution, has received little attention, most probably because most efficient algorithms for l1-TV denoising cannot handle more general inverse problems. We apply the Iteratively Reweighted Norm algorithm to this problem, and compare performance with an alternative algorithm based on the Mumford-Shah functional.

  10. Joint Far-field and Near-field GPS Observations to Modify the Fault Slip Models of the 2011 Tohoku-Oki Earthquake (Mw 9.0)

    NASA Astrophysics Data System (ADS)

    Yang, J.; Yi, S.; Sun, W.

    2016-12-01

    Significant displacements caused by the 2011 Tohoku-Oki earthquake (Mw 9.0) can be detected by GPS observations in the north and northeast of the Asian continent, which come from the Crustal Movement Observation Network of China (CMONOC). Clear horizontal displacements, reaching almost 2-3 cm at many GPS stations, mostly point eastward toward the epicenter of this earthquake. These data can be acquired rapidly from CMONOC after the earthquake. Here, we discuss how to calculate the seismic moment from these far-field GPS observations. The far-field displacements can constrain the pattern of the finite slip model and the seismic moment using a spherically stratified Earth model (PREM). We give a general rule of thumb showing how far-field GPS observations are affected by the earthquake parameters. Worldwide, there have been 27 large earthquakes (magnitude greater than Mw 8.0) since 1990, most of which are subduction-type events with low rake angles. Their far-field GPS observations are mainly controlled by the Y22 component. Far-field GPS observations thus have the potential to constrain one or two components of the focal mechanism. By jointly inverting far-field and near-field GPS data for the 2011 Tohoku-Oki earthquake, we obtain a more accurate finite slip model. This article presents a new method that uses far-field GPS data to constrain the fault slip model.

  11. Gabor Deconvolution as Preliminary Method to Reduce Pitfall in Deeper Target Seismic Data

    NASA Astrophysics Data System (ADS)

    Oktariena, M.; Triyoso, W.

    2018-03-01

    The anelastic attenuation process during seismic wave propagation is the trigger of the non-stationary character of seismic data. Absorption and scattering of energy cause seismic energy loss with increasing depth. A series of thin reservoir layers found in the study area is located within the Talang Akar Fm. level, showing an indication of interpretation pitfalls due to the attenuation effect commonly occurring in deeper-level seismic data. The attenuation effect greatly influences the seismic images of the deeper target level, creating pitfalls in several respects. Seismic amplitude in the deeper target level often cannot represent the real subsurface character due to low amplitude values or chaotic events near the Basement. Frequency-wise, the decay can be seen as diminishing frequency content in the deeper target. Meanwhile, seismic amplitude is the simple tool to point out a Direct Hydrocarbon Indicator (DHI) in a preliminary geophysical study before a further advanced interpretation method is applied. A quick look at the post-stack seismic data shows the reservoir associated with a bright-spot DHI, while another, bigger bright-spot body is detected in the northeast area near the field edge. A horizon slice confirms the possibility that the other bright-spot zone has a smaller delineation; an interpretation pitfall that commonly occurs at deeper levels of seismic data. We evaluate this pitfall by applying Gabor Deconvolution to address the attenuation problem. Gabor Deconvolution forms a partition of unity to factorize the trace into smaller convolution windows that can be processed as stationary packets. Gabor Deconvolution estimates both the magnitude of the source signature and its attenuation function. The enhanced seismic data show better imaging in the pitfall area that was previously detected as a vast bright-spot zone. When the enhanced seismic data are used for further advanced reprocessing, the seismic impedance and Vp/Vs ratio slices show a better reservoir delineation, in which the pitfall area is reduced and parts of it are reassigned to background lithology. Gabor Deconvolution removes the attenuation by performing a spectral division in the Gabor domain, which in turn also reduces interpretation pitfalls in deeper-target seismic data.

  12. Image processing tools dedicated to quantification in 3D fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Dieterlen, A.; De Meyer, A.; Colicchio, B.; Le Calvez, S.; Haeberlé, O.; Jacquey, S.

    2006-05-01

    3-D optical fluorescence microscopy has now become an efficient tool for the volume investigation of living biological samples. Developments in instrumentation have made it possible to beat the conventional Abbe limit. In any case, the recorded image can be described by the convolution equation between the original object and the point spread function (PSF) of the acquisition system. Due to the finite resolution of the instrument, the original object is recorded with distortions and blurring, and contaminated by noise. As a consequence, relevant biological information cannot be extracted directly from the raw data stacks. If the goal is 3-D quantitative analysis, system characterization is mandatory in order to assess the optimal performance of the instrument and to ensure the reproducibility of data acquisition. The PSF represents the properties of the image acquisition system; we have proposed the use of statistical tools and Zernike moments to describe a 3-D PSF and to quantify its variation. This first step toward standardization is helpful to define an acquisition protocol optimizing the exploitation of the microscope depending on the biological sample under study. Before the extraction of geometrical information and/or the quantification of intensities, data restoration is mandatory. The reduction of out-of-focus light is carried out computationally by a deconvolution process. But other phenomena occur during acquisition, like fluorescence photodegradation, named "bleaching", inducing an alteration of the information needed for restoration. Therefore, we have developed a protocol to pre-process data before the application of deconvolution algorithms. A large number of deconvolution methods have been described and are now available in commercial packages. One major difficulty in using this software is that the user must supply the "best" regularization parameters. We have pointed out that automating the choice of the regularization level greatly improves the reliability of the measurements while also facilitating use. Furthermore, to increase the quality and the repeatability of quantitative measurements, pre-filtering of the images improves the stability of the deconvolution process. In the same way, pre-filtering the PSF stabilizes the deconvolution process. We have shown that Zernike polynomials can be used to reconstruct experimental PSFs, preserving system characteristics while removing the noise contained in the PSF.

  13. An improved method for polarimetric image restoration in interferometry

    NASA Astrophysics Data System (ADS)

    Pratley, Luke; Johnston-Hollitt, Melanie

    2016-11-01

    Interferometric radio astronomy data require the effects of limited coverage in the Fourier plane to be accounted for via a deconvolution process. For the last 40 years this process, known as `cleaning', has been performed almost exclusively on all Stokes parameters individually as if they were independent scalar images. However, here we demonstrate that, for the case of the linear polarization P, this approach fails to properly account for the complex vector nature, resulting in a process which is dependent on the axes under which the deconvolution is performed. We present here an improved method, `Generalized Complex CLEAN', which properly accounts for the complex vector nature of polarized emission and is invariant under rotations of the deconvolution axes. We use two Australia Telescope Compact Array data sets to test standard and complex CLEAN versions of the Högbom and SDI (Steer-Dewdney-Ito) CLEAN algorithms. We show that in general the complex CLEAN version of each algorithm produces more accurate clean components with fewer spurious detections and lower computation cost due to reduced iterations than the current methods. In particular, we find that the complex SDI CLEAN produces the best results for diffuse polarized sources as compared with standard CLEAN algorithms and other complex CLEAN algorithms. Given the move to wide-field, high-resolution polarimetric imaging with future telescopes such as the Square Kilometre Array, we suggest that Generalized Complex CLEAN should be adopted as the deconvolution method for all future polarimetric surveys and in particular that the complex version of an SDI CLEAN should be used.
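The key idea, cleaning P = Q + iU as one complex image so the result is invariant under rotations of the (Q, U) axes, can be illustrated with a minimal 1-D Högbom-style loop. This is a toy sketch (synthetic beam and sources, no gridding or wide-field effects), not the paper's Generalized Complex CLEAN implementation.

```python
import numpy as np

def complex_hogbom_clean(dirty, beam, gain=0.2, n_iter=2000, threshold=1e-4):
    """Minimal 1-D Hogbom CLEAN on a complex polarization image P = Q + iU.
    The peak is located on |P| (invariant under Q/U axis rotations) and the
    full complex value is subtracted, instead of cleaning Q and U separately."""
    res = dirty.astype(complex).copy()
    comps = np.zeros_like(res)
    c = len(beam) // 2                       # beam is centered at this index
    for _ in range(n_iter):
        p = int(np.argmax(np.abs(res)))
        if np.abs(res[p]) < threshold:
            break
        val = gain * res[p]
        comps[p] += val
        # subtract the shifted, scaled dirty beam from the residual
        lo, hi = max(0, p - c), min(len(res), p - c + len(beam))
        res[lo:hi] -= val * beam[lo - (p - c):hi - (p - c)]
    return comps, res

# Synthetic dirty image: two polarized point sources convolved with the beam
n = 128
beam = np.exp(-0.5 * ((np.arange(31) - 15) / 2.0) ** 2)  # peak value 1 at center
true = np.zeros(n, complex)
true[40] = 1.0 * np.exp(1j * 0.3)                        # |P| = 1, some pol. angle
true[80] = 0.5 * np.exp(1j * 2.0)
dirty = np.zeros(n, complex)
for p in np.nonzero(true)[0]:
    dirty[p - 15:p + 16] += true[p] * beam
comps, res = complex_hogbom_clean(dirty, beam)
```

Rotating the polarization axes multiplies P by a constant phase, which changes neither |P| nor the subtraction, so the component list transforms consistently, which is exactly what per-Stokes cleaning of Q and U fails to guarantee.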

  14. A Robust Gold Deconvolution Approach for LiDAR Waveform Data Processing to Characterize Vegetation Structure

    NASA Astrophysics Data System (ADS)

    Zhou, T.; Popescu, S. C.; Krause, K.; Sheridan, R.; Ku, N. W.

    2014-12-01

    Increasing attention has been paid in the remote sensing community to next-generation light detection and ranging (lidar) waveform data systems for extracting information on topography and the vertical structure of vegetation. However, processing waveform lidar data raises some challenges compared to analyzing discrete-return data. The overall goal of this study was to present a robust deconvolution algorithm, the Gold algorithm, used to deconvolve waveforms in a lidar dataset acquired within a 60 x 60 m study area located in the Harvard Forest in Massachusetts. The waveform lidar data were collected by the National Ecological Observatory Network (NEON). Specific objectives were to: (1) explore advantages and limitations of various waveform processing techniques to derive topography and canopy height information; (2) develop and implement a novel deconvolution algorithm, the Gold algorithm, to extract elevation and canopy metrics; and (3) compare results and assess accuracy. We modeled lidar waveforms with a mixture of Gaussian functions using the nonlinear least squares (NLS) algorithm implemented in R and derived a digital terrain model (DTM) and canopy height. We compared our waveform-derived topography and canopy height measurements using the Gold deconvolution algorithm to results using the Richardson-Lucy algorithm. Our findings show that the Gold algorithm performed better than the Richardson-Lucy algorithm in terms of recovering hidden echoes and detecting false echoes for generating a DTM, which indicates that the Gold algorithm could potentially be applied to the processing of waveform lidar data to derive information on terrain elevation and canopy characteristics.
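The Gold algorithm is commonly written as the multiplicative ratio update x <- x * (H^T y) / (H^T H x), which keeps the estimate non-negative, a natural fit for lidar return energy. A minimal sketch on a synthetic waveform with two overlapping echoes follows (toy pulse shape and iteration count; not NEON data or the study's implementation).

```python
import numpy as np

def gold_deconvolution(y, H, n_iter=3000):
    """Gold's iterative ratio deconvolution: a multiplicative update that
    preserves non-negativity, x <- x * (H^T y) / (H^T H x)."""
    x = np.full(H.shape[1], y.mean())
    Hty = H.T @ y
    for _ in range(n_iter):
        denom = H.T @ (H @ x)
        x = x * Hty / np.maximum(denom, 1e-12)   # guard against division by zero
    return x

# System response: a Gaussian transmit pulse, assembled into a convolution matrix
n = 120
t = np.arange(n)
pulse = np.exp(-0.5 * ((np.arange(21) - 10) / 3.0) ** 2)
H = np.zeros((n, n))
for i in range(n):
    for j in range(max(0, i - 10), min(n, i + 11)):
        H[i, j] = pulse[i - j + 10]

# Two overlapping echoes (e.g. canopy top and ground) blurred by the pulse
truth = np.exp(-0.5 * ((t - 40) / 2.0) ** 2) + 0.6 * np.exp(-0.5 * ((t - 52) / 2.0) ** 2)
y = H @ truth
x_rec = gold_deconvolution(y, H)
```

The ratio update drives the forward model H x toward y while never producing negative energy, which is why Gold-type deconvolution is good at pulling apart "hidden" echoes that merge in the raw waveform.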

  15. Water Residence Time estimation by 1D deconvolution in the form of an l2-regularized inverse problem with smoothness, positivity and causality constraints

    NASA Astrophysics Data System (ADS)

    Meresescu, Alina G.; Kowalski, Matthieu; Schmidt, Frédéric; Landais, François

    2018-06-01

    The Water Residence Time distribution is the equivalent of the impulse response of a linear system allowing the propagation of water through a medium, e.g. the propagation of rain water from the top of the mountain towards the aquifers. We consider the output aquifer levels as the convolution between the input rain levels and the Water Residence Time, starting with an initial aquifer base level. The estimation of Water Residence Time is important for a better understanding of hydro-bio-geochemical processes and mixing properties of wetlands used as filters in ecological applications, as well as protecting fresh water sources for wells from pollutants. Common methods of estimating the Water Residence Time focus on cross-correlation, parameter fitting and non-parametric deconvolution methods. Here we propose a 1D full-deconvolution, regularized, non-parametric inverse problem algorithm that enforces smoothness and uses constraints of causality and positivity to estimate the Water Residence Time curve. Compared to Bayesian non-parametric deconvolution approaches, it has a fast runtime per test case; compared to the popular and fast cross-correlation method, it produces a more precise Water Residence Time curve even in the case of noisy measurements. The algorithm needs only one regularization parameter to balance between smoothness of the Water Residence Time and accuracy of the reconstruction. We propose an approach on how to automatically find a suitable value of the regularization parameter from the input data only. Tests on real data illustrate the potential of this method to analyze hydrological datasets.
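The regularized inverse problem described here can be sketched as projected gradient descent on a least-squares term plus an l2 smoothness penalty, with causality built into the convolution matrix and positivity enforced by projection. All data below are synthetic and the regularization weight is illustrative; the paper additionally proposes an automatic choice of that parameter.

```python
import numpy as np

def estimate_wrt(rain, level, base, m, lam=0.1, n_iter=3000):
    """Estimate a causal, non-negative water-residence-time kernel h of
    length m by projected gradient descent on
        ||A h - (level - base)||^2 + lam * ||D h||^2,
    where A is the causal convolution matrix built from the rain series and
    D is a first-difference operator enforcing smoothness."""
    n = len(rain)
    A = np.zeros((n, m))
    for j in range(m):
        A[j:, j] = rain[:n - j]          # causality: output depends on past rain only
    y = level - base
    D = np.diff(np.eye(m), axis=0)       # (m-1) x m first-difference matrix
    M = A.T @ A + lam * D.T @ D
    step = 1.0 / np.linalg.norm(M, 2)    # stable step size from the spectral norm
    Aty = A.T @ y
    h = np.zeros(m)
    for _ in range(n_iter):
        h = h - step * (M @ h - Aty)
        h = np.maximum(h, 0.0)           # positivity projection
    return h

# Synthetic experiment: intermittent rain, exponential residence-time kernel
rng = np.random.default_rng(2)
n, m = 300, 40
rain = rng.exponential(1.0, size=n) * (rng.random(n) < 0.3)
h_true = np.exp(-np.arange(m) / 8.0); h_true /= h_true.sum()
base = 10.0
level = base + np.convolve(rain, h_true)[:n]
h_est = estimate_wrt(rain, level, base, m)
```

Raising `lam` trades reconstruction fidelity for a smoother kernel, which is exactly the single-parameter balance the abstract describes.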

  16. Pulse-Inversion Subharmonic Ultrafast Active Cavitation Imaging in Tissue Using Fast Eigenspace-Based Adaptive Beamforming and Cavitation Deconvolution.

    PubMed

    Bai, Chen; Xu, Shanshan; Duan, Junbo; Jing, Bowen; Yang, Miao; Wan, Mingxi

    2017-08-01

    Pulse-inversion subharmonic (PISH) imaging can display information relating to pure cavitation bubbles while excluding that of tissue. Although plane-wave-based ultrafast active cavitation imaging (UACI) can monitor the transient activities of cavitation bubbles, its resolution and cavitation-to-tissue ratio (CTR) are barely satisfactory but can be significantly improved by introducing eigenspace-based (ESB) adaptive beamforming. PISH and UACI are a natural combination for imaging of pure cavitation activity in tissue; however, it raises two problems: 1) the ESB beamforming is hard to implement in real time due to the enormous amount of computation associated with the covariance matrix inversion and eigendecomposition and 2) the narrowband characteristic of the subharmonic filter will incur a drastic degradation in resolution. Thus, in order to jointly address these two problems, we propose a new PISH-UACI method using novel fast ESB (F-ESB) beamforming and cavitation deconvolution for nonlinear signals. This method greatly reduces the computational complexity by using F-ESB beamforming through dimensionality reduction based on principal component analysis, while maintaining the high quality of ESB beamforming. The degraded resolution is recovered using cavitation deconvolution through a modified convolution model and compressive deconvolution. Both simulations and in vitro experiments were performed to verify the effectiveness of the proposed method. Compared with the ESB-based PISH-UACI, the entire computation of our proposed approach was reduced by 99%, while the axial resolution gain and CTR were increased by 3 times and 2 dB, respectively, confirming that satisfactory performance can be obtained for monitoring pure cavitation bubbles in tissue erosion.

  17. Waveform inversion of mantle Love waves: The born seismogram approach

    NASA Technical Reports Server (NTRS)

    Tanimoto, T.

    1983-01-01

    Normal mode theory, extended to the slightly laterally heterogeneous Earth by the first-order Born approximation, is applied to the waveform inversion of mantle Love waves (200-500 sec) for the Earth's lateral heterogeneity at l=2 and a spherically symmetric anelasticity (Q sub mu) structure. The data are from the Global Digital Seismograph Network (GDSN). The l=2 pattern is very similar to the results of other studies that used either different methods, such as phase velocity measurements and multiplet location measurements, or a different data set, such as mantle Rayleigh waves from different instruments. The results are carefully analyzed for variance reduction and are most naturally explained by heterogeneity in the upper 420 km. Because of the poor resolution of the data set for the deep interior, however, a fairly large heterogeneity in the transition zones, of the order of up to 3.5% in shear wave velocity, is allowed. It is noteworthy that Love waves of this period range cannot constrain the structure below 420 km, and thus any models of the structure below this depth presented by similar studies are likely to be constrained by Rayleigh waves (spheroidal modes) only.

  18. Waveform inversion of mantle Love waves - The Born seismogram approach

    NASA Technical Reports Server (NTRS)

    Tanimoto, T.

    1984-01-01

    Normal mode theory, extended to the slightly laterally heterogeneous earth by the first-order Born approximation, is applied to the waveform inversion of mantle Love waves (200-500 sec) for the earth's lateral heterogeneity at l = 2 and a spherically symmetric anelasticity (Q sub mu) structure. The data are from the Global Digital Seismograph Network (GDSN). The l = 2 pattern is very similar to the results of other studies that used either different methods, such as phase velocity measurements and multiplet location measurements, or a different data set, such as mantle Rayleigh waves from different instruments. The results are carefully analyzed for variance reduction and are most naturally explained by heterogeneity in the upper 420 km. Because of the poor resolution of the data set for the deep interior, however, a fairly large heterogeneity in the transition zones, of the order of up to 3.5 percent in shear wave velocity, is allowed. It is noteworthy that Love waves of this period range cannot constrain the structure below 420 km, and thus any models of the structure below this depth presented by similar studies are likely to be constrained by Rayleigh waves (spheroidal modes) only.

  19. Extracting the building response using seismic interferometry: Theory and application to the Millikan Library in Pasadena, California

    USGS Publications Warehouse

    Snieder, R.; Safak, E.

    2006-01-01

    The motion of a building depends on the excitation, the coupling of the building to the ground, and the mechanical properties of the building. We separate the building response from the excitation and the ground coupling by deconvolving the motion recorded at different levels in the building and apply this to recordings of the motion in the Robert A. Millikan Library in Pasadena, California. This deconvolution allows for the separation of intrinsic attenuation and radiation damping. The waveforms obtained from deconvolution with the motion in the top floor show a superposition of one upgoing and one downgoing wave. The waveforms obtained by deconvolution with the motion in the basement can be formulated either as a sum of upgoing and downgoing waves, or as a sum over normal modes. Because these deconvolved waves for late time have a monochromatic character, they are most easily analyzed with normal-mode theory. For this building we estimate a shear velocity c = 322 m/sec and a quality factor Q = 20. These values explain both the propagating waves and the normal modes.
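
    The core operation, deconvolving one receiver's record by another's, can be sketched as a regularized spectral division on toy signals; the water-level constant `eps` and the two-spike propagation response are illustrative assumptions, not building data.

```python
# Deconvolution interferometry sketch: estimate the transfer function
# between two records by water-level-regularized spectral division.
import numpy as np

rng = np.random.default_rng(2)
n = 1024
src = rng.normal(0, 1, n)                      # unknown excitation
g = np.zeros(n); g[5] = 1.0; g[40] = 0.5       # toy propagation response
rec_top = src                                  # "reference" record
rec_base = np.real(np.fft.ifft(np.fft.fft(src) * np.fft.fft(g)))

# Regularized division: R(f) = U(f) V*(f) / (|V(f)|^2 + eps)
U, V = np.fft.fft(rec_base), np.fft.fft(rec_top)
eps = 1e-3 * np.mean(np.abs(V) ** 2)           # water level
r = np.real(np.fft.ifft(U * np.conj(V) / (np.abs(V) ** 2 + eps)))
lag = int(np.argmax(r))                        # dominant travel-time lag
```

    The deconvolved trace `r` recovers the propagation response between the two records independently of the excitation, which is the property exploited to isolate the building response.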

  20. Real-time blind deconvolution of retinal images in adaptive optics scanning laser ophthalmoscopy

    NASA Astrophysics Data System (ADS)

    Li, Hao; Lu, Jing; Shi, Guohua; Zhang, Yudong

    2011-06-01

    With the use of adaptive optics (AO), ocular aberrations can be compensated to obtain high-resolution images of the living human retina. However, the wavefront correction is not perfect due to wavefront measurement error and hardware restrictions. Thus, it is necessary to use a deconvolution algorithm to recover the retinal images. In this paper, a blind deconvolution technique called the Incremental Wiener filter is used to restore adaptive optics confocal scanning laser ophthalmoscope (AOSLO) images. The point-spread function (PSF) measured by the wavefront sensor is only used as an initial value of our algorithm. We also implement the Incremental Wiener filter on a graphics processing unit (GPU) in real time. When the image size is 512 × 480 pixels, six iterations of our algorithm take only about 10 ms. Retinal blood vessels as well as cells in retinal images are restored by our algorithm, and the PSFs are also revised. Retinal images with and without adaptive optics are both restored. The results show that the Incremental Wiener filter reduces noise and improves image quality.
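
    A single-pass frequency-domain Wiener filter illustrates the core of such restoration; the incremental, GPU-based variant in the paper additionally refines the PSF between iterations, which this sketch omits. The Gaussian PSF width and noise-to-signal ratio below are assumptions.

```python
# Minimal 1D Wiener deconvolution under a circular convolution model.
import numpy as np

def wiener_deconv(blurred, psf, nsr=1e-6):
    """Wiener filter: F = H* G / (|H|^2 + NSR)."""
    H = np.fft.fft(psf, blurred.size)
    G = np.fft.fft(blurred)
    return np.real(np.fft.ifft(np.conj(H) * G / (np.abs(H) ** 2 + nsr)))

n = 256
x = np.zeros(n); x[60] = 1.0; x[130] = 0.7     # sparse "vessel" features
k = np.arange(n)
d = np.minimum(k, n - k)                       # circular distance from 0
psf = np.exp(-0.5 * (d / 1.5) ** 2); psf /= psf.sum()
blurred = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(psf)))
restored = wiener_deconv(blurred, psf)
```

    The blind/incremental variant would alternate updates of `psf` and `restored` from a wavefront-sensor starting estimate instead of assuming the PSF is known.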

  1. Supersampling multiframe blind deconvolution resolution enhancement of adaptive-optics-compensated imagery of LEO satellites

    NASA Astrophysics Data System (ADS)

    Gerwe, David R.; Lee, David J.; Barchers, Jeffrey D.

    2000-10-01

    A post-processing methodology for reconstructing undersampled image sequences with randomly varying blur is described which can provide image enhancement beyond the sampling resolution of the sensor. This method is demonstrated on simulated imagery and on adaptive optics compensated imagery taken by the Starfire Optical Range 3.5 meter telescope that has been artificially undersampled. Also shown are the results of multiframe blind deconvolution of some of the highest quality optical imagery of low earth orbit satellites collected with a ground-based telescope to date. The algorithm used is a generalization of multiframe blind deconvolution techniques which includes a representation of spatial sampling by the focal plane array elements in the forward stochastic model of the imaging system. This generalization enables the random shifts and shape of the adaptively compensated PSF to be used to partially eliminate the aliasing effects associated with sub-Nyquist sampling of the image by the focal plane array. The method could be used to reduce the resolution loss which occurs when imaging in wide FOV modes.

  2. Supersampling multiframe blind deconvolution resolution enhancement of adaptive optics compensated imagery of low earth orbit satellites

    NASA Astrophysics Data System (ADS)

    Gerwe, David R.; Lee, David J.; Barchers, Jeffrey D.

    2002-09-01

    We describe a postprocessing methodology for reconstructing undersampled image sequences with randomly varying blur that can provide image enhancement beyond the sampling resolution of the sensor. This method is demonstrated on simulated imagery and on adaptive-optics-(AO)-compensated imagery taken by the Starfire Optical Range 3.5-m telescope that has been artificially undersampled. Also shown are the results of multiframe blind deconvolution of some of the highest quality optical imagery of low earth orbit satellites collected with a ground-based telescope to date. The algorithm used is a generalization of multiframe blind deconvolution techniques that includes a representation of spatial sampling by the focal plane array elements based on a forward stochastic model. This generalization enables the random shifts and shape of the AO-compensated point spread function (PSF) to be used to partially eliminate the aliasing effects associated with sub-Nyquist sampling of the image by the focal plane array. The method could be used to reduce the resolution loss that occurs when imaging in wide-field-of-view (FOV) modes.

  3. Pulse analysis of acoustic emission signals

    NASA Technical Reports Server (NTRS)

    Houghton, J. R.; Packman, P. F.

    1977-01-01

    A method for the signature analysis of pulses in the frequency domain and the time domain is presented. Fourier spectrum, Fourier transfer function, shock spectrum and shock spectrum ratio were examined in the frequency domain analysis and pulse shape deconvolution was developed for use in the time domain analysis. Comparisons of the relative performance of each analysis technique are made for the characterization of acoustic emission pulses recorded by a measuring system. To demonstrate the relative sensitivity of each of the methods to small changes in the pulse shape, signatures of computer modeled systems with analytical pulses are presented. Optimization techniques are developed and used to indicate the best design parameter values for deconvolution of the pulse shape. Several experiments are presented that test the pulse signature analysis methods on different acoustic emission sources. These include acoustic emission associated with (a) crack propagation, (b) ball dropping on a plate, (c) spark discharge, and (d) defective and good ball bearings. Deconvolution of the first few microseconds of the pulse train is shown to be the region in which the significant signatures of the acoustic emission event are to be found.

  4. Scanning two-photon microscopy with upconverting lanthanide nanoparticles via Richardson-Lucy deconvolution.

    PubMed

    Gainer, Christian F; Utzinger, Urs; Romanowski, Marek

    2012-07-01

    The use of upconverting lanthanide nanoparticles in fast-scanning microscopy is hindered by a long luminescence decay time, which greatly blurs images acquired in a nondescanned mode. We demonstrate herein an image processing method based on Richardson-Lucy deconvolution that mitigates the detrimental effects of their luminescence lifetime. This technique generates images with lateral resolution on par with the system's performance, ∼1.2  μm, while maintaining an axial resolution of 5 μm or better at a scan rate comparable with traditional two-photon microscopy. Remarkably, this can be accomplished with near infrared excitation power densities of 850 W/cm(2), several orders of magnitude below those used in two-photon imaging with molecular fluorophores. By way of illustration, we introduce the use of lipids to coat and functionalize these nanoparticles, rendering them water dispersible and readily conjugated to biologically relevant ligands, in this case epidermal growth factor receptor antibody. This deconvolution technique combined with the functionalized nanoparticles will enable three-dimensional functional tissue imaging at exceptionally low excitation power densities.
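
    The Richardson-Lucy scheme itself is compact. In the 1D sketch below, a slow exponential "luminescence decay" PSF stands in for the real system response; the decay constant and emitter positions are illustrative assumptions.

```python
# Bare-bones Richardson-Lucy deconvolution (1D, circular model).
import numpy as np

def richardson_lucy(observed, psf, n_iter=200):
    """Multiplicative RL updates; psf must be normalized to sum 1."""
    H = np.fft.fft(psf)
    est = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        conv = np.real(np.fft.ifft(np.fft.fft(est) * H))
        ratio = observed / np.maximum(conv, 1e-12)
        # Correlate the ratio with the PSF (conj in frequency domain)
        est = est * np.real(np.fft.ifft(np.fft.fft(ratio) * np.conj(H)))
    return est

n, tau = 200, 8.0
k = np.arange(n)
psf = np.where(k < 60, np.exp(-k / tau), 0.0)  # slow luminescence decay
psf /= psf.sum()
x = np.zeros(n); x[50] = 1.0; x[90] = 0.6      # two point emitters
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(psf)))
restored = richardson_lucy(y, psf)
```

    The multiplicative update keeps the estimate non-negative and conserves total flux at every iteration, which is why RL suits photon-counting imagery.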

  5. Spatial studies of planetary nebulae with IRAS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hawkins, G.W.; Zuckerman, B.

    1991-06-01

    The infrared sizes at the four IRAS wavelengths of 57 planetaries, most with 20-60 arcsec optical size, are derived from spatial deconvolution of one-dimensional survey mode scans. Survey observations from multiple detectors and hours-confirmed (HCON) observations are combined to increase the sampling to a rate that is sufficient for successful deconvolution. The Richardson-Lucy deconvolution algorithm is used to obtain an increase in resolution of a factor of about 2 or 3 from the normal IRAS detector sizes of 45, 45, 90, and 180 arcsec at wavelengths 12, 25, 60, and 100 microns. Most of the planetaries deconvolve at 12 and 25 microns to sizes equal to or smaller than the optical size. Some of the planetaries with optical rings 60 arcsec or more in diameter show double-peaked IRAS profiles. Many, such as NGC 6720 and NGC 6543, show all infrared sizes equal to the optical size, while others indicate increasing infrared size with wavelength. Deconvolved IRAS profiles are presented for the 57 planetaries at nearly all wavelengths where IRAS flux densities are 1-2 Jy or higher. 60 refs.

  6. A spatially-variant deconvolution method based on total variation for optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Almasganj, Mohammad; Adabi, Saba; Fatemizadeh, Emad; Xu, Qiuyun; Sadeghi, Hamid; Daveluy, Steven; Nasiriavanaki, Mohammadreza

    2017-03-01

    Optical Coherence Tomography (OCT) has great potential to elicit clinically useful information from tissues due to its high axial and transverse resolution. In practice, an OCT setup cannot reach its theoretical resolution due to imperfections in its components, which blur its images. The blur differs across regions of the image and thus cannot be modeled by a single point spread function (PSF). In this paper, we investigate the use of solid phantoms to estimate the PSF of each sub-region of the imaging system. We then utilize Lucy-Richardson, Hybr, and total variation (TV) based iterative deconvolution methods to mitigate the spatially variant blur. It is shown that the TV based method suppresses the so-called speckle noise in OCT images better than the other two approaches. The performance of the proposed algorithm is tested on various samples, including several skin tissues as well as a test image blurred with a synthetic PSF map, demonstrating qualitatively and quantitatively the advantage of the TV based deconvolution method using a spatially-variant PSF for enhancing image quality.
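
    The PSF-map idea, deconvolving each sub-region with its own locally estimated PSF, can be illustrated in 1D with per-region Wiener filters; the TV regularization of the paper is omitted here, and the two Gaussian PSFs are assumed rather than measured from phantoms.

```python
# Spatially-variant deconvolution sketch: split the field of view into
# sub-regions and deconvolve each with its own PSF.
import numpy as np

def wiener(y, psf, nsr=1e-6):
    """Per-region Wiener deconvolution (circular model within region)."""
    H = np.fft.fft(psf, y.size)
    return np.real(np.fft.ifft(np.conj(H) * np.fft.fft(y)
                               / (np.abs(H) ** 2 + nsr)))

n = 256
x = np.zeros(n); x[40] = 1.0; x[200] = 1.0     # one feature per region
k = np.arange(n // 2)
d = np.minimum(k, n // 2 - k)                  # circular distance, per half
psf_a = np.exp(-0.5 * (d / 1.0) ** 2)          # sharp PSF, left region
psf_b = np.exp(-0.5 * (d / 3.0) ** 2)          # blurrier PSF, right region
psf_a /= psf_a.sum(); psf_b /= psf_b.sum()

# Forward model: each half is blurred by its own PSF
halves = [x[:n // 2], x[n // 2:]]
blur = [np.real(np.fft.ifft(np.fft.fft(hv) * np.fft.fft(p)))
        for hv, p in zip(halves, [psf_a, psf_b])]
# Restoration: deconvolve each region with the matching PSF
rec = np.concatenate([wiener(bv, p) for bv, p in zip(blur, [psf_a, psf_b])])
```

    Using the matching PSF per region recovers both features; a single global PSF would either under-correct the blurry region or over-sharpen the sharp one.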

  7. A stopping criterion to halt iterations at the Richardson-Lucy deconvolution of radiographic images

    NASA Astrophysics Data System (ADS)

    Almeida, G. L.; Silvani, M. I.; Souza, E. S.; Lopes, R. T.

    2015-07-01

    Radiographic images, as any experimentally acquired ones, are affected by spoiling agents which degrade their final quality. The degradation caused by agents of systematic character can be reduced by some kind of treatment such as an iterative deconvolution. This approach requires two parameters, namely the system resolution and the best number of iterations, in order to achieve the best final image. This work proposes a novel procedure to estimate the best number of iterations, which replaces cumbersome visual inspection with a comparison of numbers. These numbers are deduced from the image histograms, taking into account the global difference G between them for two successive iterations. The developed algorithm, including a Richardson-Lucy deconvolution procedure, has been embodied in a Fortran program capable of plotting the 1st derivative of G as the processing progresses and of stopping it automatically when this derivative - within the data dispersion - reaches zero. The radiograph of a specially chosen object, acquired with thermal neutrons from the Argonauta research reactor at the Instituto de Engenharia Nuclear - CNEN, Rio de Janeiro, Brazil, has undergone this treatment with fair results.
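
    The stopping rule can be sketched independently of the deconvolution itself: track G between successive iterates and halt when its first derivative flattens. The "iterates" below are a synthetic sequence converging to a target image, standing in for Richardson-Lucy steps, and the 1% flatness threshold is an assumed proxy for the data dispersion.

```python
# Histogram-based stopping criterion sketch (synthetic iterates).
import numpy as np

rng = np.random.default_rng(3)
target = rng.uniform(0, 1, (64, 64))           # "fully restored" image
start = np.full((64, 64), target.mean())       # featureless starting point

def global_diff(a, b, bins=32):
    """G: summed absolute difference between two image histograms."""
    ha, _ = np.histogram(a, bins=bins, range=(0.0, 1.0))
    hb, _ = np.histogram(b, bins=bins, range=(0.0, 1.0))
    return float(np.abs(ha - hb).sum())

G, prev = [], start
for k in range(1, 40):                         # mock iterates -> target
    cur = target + (start - target) * np.exp(-k / 5.0)
    G.append(global_diff(cur, prev))
    prev = cur

dG = np.diff(G)                                # 1st derivative of G
flat = np.abs(dG) < 0.01 * G[0]                # "zero within dispersion"
stop_at = int(np.argmax(flat)) + 2             # iteration at which to halt
```

    Replacing the visual inspection of successive images with this single scalar test is what lets the procedure run unattended.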

  8. Fast online deconvolution of calcium imaging data

    PubMed Central

    Zhou, Pengcheng; Paninski, Liam

    2017-01-01

    Fluorescent calcium indicators are a popular means for observing the spiking activity of large neuronal populations, but extracting the activity of each neuron from raw fluorescence calcium imaging data is a nontrivial problem. We present a fast online active set method to solve this sparse non-negative deconvolution problem. Importantly, the algorithm progresses through each time series sequentially from beginning to end, thus enabling real-time online estimation of neural activity during the imaging session. Our algorithm is a generalization of the pool adjacent violators algorithm (PAVA) for isotonic regression and inherits its linear-time computational complexity. We gain remarkable increases in processing speed: more than one order of magnitude compared to currently employed state-of-the-art convex solvers relying on interior point methods. Unlike these approaches, our method can exploit warm starts; therefore optimizing model hyperparameters only requires a handful of passes through the data. A minor modification can further improve the quality of activity inference by imposing a constraint on the minimum spike size. The algorithm enables real-time simultaneous deconvolution of O(10^5) traces of whole-brain larval zebrafish imaging data on a laptop. PMID:28291787
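
    The generative model here is AR(1) calcium dynamics, c_t = g·c_{t-1} + s_t, observed as y_t = c_t + noise. The naive inversion below (inverse filter plus clipping at zero) is not the paper's active-set method; it only illustrates, on synthetic data with assumed g and noise level, the sparse non-negative deconvolution problem that method solves with guarantees and in linear time.

```python
# Toy AR(1) calcium model and a naive clipped inverse-filter deconvolution.
import numpy as np

rng = np.random.default_rng(4)
T, g = 500, 0.95
s_true = (rng.uniform(0, 1, T) < 0.02).astype(float)  # sparse spikes
s_true[0] = 0.0
c = np.zeros(T)
for t in range(1, T):                       # AR(1) calcium dynamics
    c[t] = g * c[t - 1] + s_true[t]
y = c + rng.normal(0, 0.05, T)              # noisy fluorescence trace

# Naive inversion of the AR(1) model, then clipping at zero
s_hat = np.clip(np.concatenate([[y[0]], y[1:] - g * y[:-1]]), 0.0, None)
detected = s_hat > 0.5                      # crude spike call
```

    The clipped inverse filter works at this noise level but degrades quickly as noise grows; the PAVA-based active-set method instead solves the constrained optimization exactly, online and with warm starts.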

  9. [Deconvolution of overlapped peaks in total ion chromatogram of essential oil from citri reticulatae pericarpium viride by automated mass spectral deconvolution & identification system].

    PubMed

    Wang, Jian; Chen, Hong-Ping; Liu, You-Ping; Wei, Zheng; Liu, Rong; Fan, Dan-Qing

    2013-05-01

    This experiment shows how to use the automated mass spectral deconvolution & identification system (AMDIS) to deconvolve the overlapped peaks in the total ion chromatogram (TIC) of volatile oil from Chinese materia medica (CMM). The essential oil was obtained by steam distillation. Its TIC was obtained by GC-MS, and the superimposed peaks in the TIC were deconvolved by AMDIS. First, AMDIS can detect the number of components in the TIC through the run function. Then, by analyzing the extracted spectrum of the corresponding scan point of a detected component and the original spectrum of this scan point, together with their counterparts' spectra in the referred MS library, researchers can ascertain the component's structure accurately or rule out compounds that do not exist in nature. Furthermore, by examining the variability of the characteristic fragment ion peaks of identified compounds, the previous outcome can be confirmed. The results demonstrated that AMDIS could efficiently deconvolve the overlapped peaks in the TIC by extracting the spectrum of the matching scan point of a discerned component, which led to exact identification of the component's structure.

  10. Thorium concentrations in the lunar surface. V - Deconvolution of the central highlands region

    NASA Technical Reports Server (NTRS)

    Metzger, A. E.; Etchegaray-Ramirez, M. I.; Haines, E. L.

    1982-01-01

    The distribution of thorium in the lunar central highlands measured from orbit by the Apollo 16 gamma-ray spectrometer is subjected to a deconvolution analysis to yield improved spatial resolution and contrast. Use of two overlapping data fields for complete coverage also provides a demonstration of the technique's ability to model concentrations several degrees beyond the data track. Deconvolution reveals an association between Th concentration and the Kant Plateau, Descartes Mountain and Cayley plains surface formations. The Kant Plateau and Descartes Mountains model with Th less than 1 part per million, which is typical of farside highlands but is infrequently seen over any other nearside highland portions of the Apollo 15 and 16 ground tracks. It is noted that, if the Cayley plains are the result of basin-forming impact ejecta, the distribution of Th concentration with longitude supports an origin from the Imbrium basin rather than the Nectaris or Orientale basins. Nectaris basin materials are found to have a Th concentration similar to that of the Descartes Mountains, evidence that the latter may have been emplaced as Nectaris basin impact deposits.

  11. An Optimal Deconvolution Method for Reconstructing Pneumatically Distorted Near-Field Sonic Boom Pressure Measurements

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A.; Haering, Edward A., Jr.; Ehernberger, L. J.

    1996-01-01

    In-flight measurements of the SR-71 near-field sonic boom were obtained by an F-16XL airplane at flightpath separation distances from 40 to 740 ft. Twenty-two signatures were obtained from Mach 1.60 to Mach 1.84 and altitudes from 47,600 to 49,150 ft. The shock wave signatures were measured by the total and static pressure sensors on the F-16XL noseboom. These near-field signature measurements were distorted by pneumatic attenuation in the pitot-static sensors; these effects were accounted for using optimal deconvolution. Measurement system magnitude and phase characteristics were determined from ground-based step-response tests and extrapolated to flight conditions using analytical models. Deconvolution was implemented using Fourier transform methods. Comparisons of the shock wave signatures reconstructed from the total and static pressure data are presented. The good agreement achieved gives confidence in the quality of the reconstruction analysis. Although originally developed to reconstruct the sonic boom signatures from SR-71 sonic boom flight tests, the methods presented here apply generally to other types of highly attenuated or distorted pneumatic measurements.

  12. Two-dimensional imaging of two types of radicals by the CW-EPR method

    NASA Astrophysics Data System (ADS)

    Czechowski, Tomasz; Krzyminiewski, Ryszard; Jurga, Jan; Chlewicki, Wojciech

    2008-01-01

    The CW-EPR method of image reconstruction is based on sample rotation in a magnetic field with a constant gradient (50 G/cm). In order to obtain a projection (radical density distribution) along a given direction, the EPR spectra are recorded with and without the gradient. Deconvolution then gives the distribution of the spin density. Projections at 36 different angles give the information necessary for reconstruction of the radical distribution. The problem becomes more complex when there are at least two types of radicals in the sample, because the deconvolution procedure does not give satisfactory results. We propose a method to calculate the projections for each radical, based on iterative procedures. The images of density distribution for each radical obtained by our procedure have proved that the method of deconvolution, in combination with iterative fitting, provides correct results. The test was performed on a sample of polymer PPS Br 111 (p-phenylene sulphide) with glass fibres and minerals. The results indicated a heterogeneous distribution of radicals in the sample volume. The images obtained were in agreement with the known shape of the sample.

  13. Variation of High-Intensity Therapeutic Ultrasound (HITU) Pressure Field Characterization: Effects of Hydrophone Choice, Nonlinearity, Spatial Averaging and Complex Deconvolution.

    PubMed

    Liu, Yunbo; Wear, Keith A; Harris, Gerald R

    2017-10-01

    Reliable acoustic characterization is fundamental for patient safety and clinical efficacy during high-intensity therapeutic ultrasound (HITU) treatment. Technical challenges, such as measurement variation and signal analysis, still exist for HITU exposimetry using ultrasound hydrophones. In this work, four hydrophones were compared for pressure measurement: a robust needle hydrophone, a small polyvinylidene fluoride capsule hydrophone and two fiberoptic hydrophones. The focal waveform and beam distribution of a single-element HITU transducer (1.05 MHz and 3.3 MHz) were evaluated. Complex deconvolution between the hydrophone voltage signal and the frequency-dependent complex sensitivity was performed to obtain pressure waveforms. Compressional pressure (p+), rarefactional pressure (p-) and focal beam distribution were compared up to 10.6/-6.0 MPa (p+/p-) (1.05 MHz) and 20.65/-7.20 MPa (3.3 MHz). The effects of spatial averaging, local non-linear distortion, complex deconvolution and hydrophone damage thresholds were investigated. This study showed a variation of no better than 10%-15% among hydrophones during HITU pressure characterization. Published by Elsevier Inc.
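
    The complex-deconvolution step amounts to dividing the voltage spectrum by the complex sensitivity M(f). In this sketch the flat-magnitude, linear-phase M(f), the sample rate and the tone-burst waveform are all stand-ins for a measured calibration curve and real hydrophone data.

```python
# Voltage-to-pressure conversion by complex deconvolution: P(f) = V(f)/M(f).
import numpy as np

fs = 100e6                                   # sample rate, Hz (assumed)
t = np.arange(2048) / fs
p_true = np.sin(2 * np.pi * 1.05e6 * t) * np.hanning(t.size)  # toy burst

# Forward model: sensitivity with magnitude 0.5 V/MPa and a small delay
f = np.fft.rfftfreq(t.size, 1 / fs)
delay = 2e-7
M = 0.5 * np.exp(-2j * np.pi * f * delay)    # complex sensitivity M(f)
v = np.fft.irfft(np.fft.rfft(p_true) * M, t.size)   # "measured" voltage

# Deconvolution, guarding against vanishing |M(f)|
M_safe = np.where(np.abs(M) > 1e-6, M, 1e-6)
p_rec = np.fft.irfft(np.fft.rfft(v) / M_safe, t.size)
```

    Dividing by the complex (magnitude and phase) sensitivity, rather than scaling by a single end-of-cable value, is what preserves the asymmetric p+/p- shape of nonlinearly distorted HITU waveforms.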

  14. Pulse analysis of acoustic emission signals

    NASA Technical Reports Server (NTRS)

    Houghton, J. R.; Packman, P. F.

    1977-01-01

    A method for the signature analysis of pulses in the frequency domain and the time domain is presented. Fourier spectrum, Fourier transfer function, shock spectrum and shock spectrum ratio were examined in the frequency domain analysis, and pulse shape deconvolution was developed for use in the time domain analysis. Comparisons of the relative performance of each analysis technique are made for the characterization of acoustic emission pulses recorded by a measuring system. To demonstrate the relative sensitivity of each of the methods to small changes in the pulse shape, signatures of computer modeled systems with analytical pulses are presented. Optimization techniques are developed and used to indicate the best design parameter values for deconvolution of the pulse shape. Several experiments are presented that test the pulse signature analysis methods on different acoustic emission sources. These include acoustic emissions associated with: (1) crack propagation, (2) ball dropping on a plate, (3) spark discharge and (4) defective and good ball bearings. Deconvolution of the first few microseconds of the pulse train is shown to be the region in which the significant signatures of the acoustic emission event are to be found.

  15. Minimum entropy deconvolution optimized sinusoidal synthesis and its application to vibration based fault detection

    NASA Astrophysics Data System (ADS)

    Li, Gang; Zhao, Qing

    2017-03-01

    In this paper, a minimum entropy deconvolution based sinusoidal synthesis (MEDSS) filter is proposed to improve the fault detection performance of the regular sinusoidal synthesis (SS) method. The SS filter is an efficient linear predictor that exploits the frequency properties during model construction. The phase information of the harmonic components is not used in the regular SS filter. However, the phase relationships are important in differentiating noise from characteristic impulsive fault signatures. Therefore, in this work, the minimum entropy deconvolution (MED) technique is used to optimize the SS filter during the model construction process. A time-weighted-error Kalman filter is used to estimate the MEDSS model parameters adaptively. Three simulation examples and a practical application case study are provided to illustrate the effectiveness of the proposed method. The regular SS method and the autoregressive MED (ARMED) method are also implemented for comparison. The MEDSS model has demonstrated superior performance compared to the regular SS method and it also shows comparable or better performance with much less computational intensity than the ARMED method.
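
    A compact Wiggins-style MED loop conveys the idea: iteratively re-solve a Toeplitz normal-equation system so that the FIR filter output maximizes kurtosis, sharpening impulsive signatures. The impulse train, exponential path and filter length below are illustrative, and this is plain MED rather than the paper's MEDSS/Kalman variant.

```python
# Minimum entropy deconvolution (Wiggins iteration) on a toy fault signal.
import numpy as np
from scipy.linalg import solve_toeplitz

rng = np.random.default_rng(5)
T, L = 2000, 30                            # samples, FIR filter length
s = np.zeros(T); s[::100] = 1.0            # impulsive "fault" excitation
h = np.exp(-np.arange(40) / 10.0)          # smearing transmission path
x = np.convolve(s, h)[:T] + rng.normal(0, 0.02, T)

def kurt(v):
    """Normalized kurtosis (varimax-style impulsiveness measure)."""
    return np.mean(v ** 4) / np.mean(v ** 2) ** 2

# First column of the Toeplitz autocorrelation matrix R of x
r = np.correlate(x, x, "full")[T - 1:T - 1 + L]

f = np.zeros(L); f[L // 2] = 1.0           # delta-filter initialization
for _ in range(30):
    y = np.convolve(x, f)[:T]
    u = y ** 3                             # MED right-hand side uses y^3
    b = np.array([np.dot(u[j:], x[:T - j]) for j in range(L)])
    f = solve_toeplitz(r, b)               # solve R f = b
    f /= np.linalg.norm(f)                 # fix the filter scale

y = np.convolve(x, f)[:T]                  # final sharpened output
```

    Each pass re-weights the normal equations toward the largest output samples, so the filter converges to one that concentrates energy into spikes; the MEDSS method uses this same MED criterion to optimize a sinusoidal-synthesis predictor instead of a free FIR filter.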

  16. A Novel Richardson-Lucy Model with Dictionary Basis and Spatial Regularization for Isolating Isotropic Signals.

    PubMed

    Xu, Tiantian; Feng, Yuanjing; Wu, Ye; Zeng, Qingrun; Zhang, Jun; He, Jianzhong; Zhuge, Qichuan

    2017-01-01

    Diffusion-weighted magnetic resonance imaging is a non-invasive imaging method that has been increasingly used in neuroscience imaging over the last decade. Partial volume effects (PVEs) exist in the sampled signal for many physical and practical reasons, and they lead to inaccurate fiber imaging. We overcome the influence of PVEs by separating the isotropic signal from the diffusion-weighted signal, which provides a more accurate estimation of fiber orientations. In this work, we use a novel response function (RF) and the corresponding fiber orientation distribution function (fODF) to construct different signal models, in which the fODF is represented using a dictionary basis function. We then put forward a new index, Piso, which is a part of the fODF, to quantify white and gray matter. The classic Richardson-Lucy (RL) model is often used in the field of digital image processing to solve the spherical deconvolution problem, whose least-squares formulation is highly ill-posed. Here, we propose a model integrating the RL model with spatial regularization to solve the proposed double models, which improves noise resistance and imaging accuracy. Experimental results on simulated and real data show that the proposed method, which we call iRL, robustly reconstructs a more accurate fODF, and the quantitative index Piso performs better than fractional anisotropy and generalized fractional anisotropy.

  17. A Novel Richardson-Lucy Model with Dictionary Basis and Spatial Regularization for Isolating Isotropic Signals

    PubMed Central

    Feng, Yuanjing; Wu, Ye; Zeng, Qingrun; Zhang, Jun; He, Jianzhong; Zhuge, Qichuan

    2017-01-01

    Diffusion-weighted magnetic resonance imaging is a non-invasive imaging method that has been increasingly used in neuroscience imaging over the last decade. Partial volume effects (PVEs) exist in the sampled signal for many physical and practical reasons, and they lead to inaccurate fiber imaging. We overcome the influence of PVEs by separating the isotropic signal from the diffusion-weighted signal, which provides a more accurate estimation of fiber orientations. In this work, we use a novel response function (RF) and the corresponding fiber orientation distribution function (fODF) to construct different signal models, in which the fODF is represented using a dictionary basis function. We then put forward a new index, Piso, which is a part of the fODF, to quantify white and gray matter. The classic Richardson-Lucy (RL) model is often used in the field of digital image processing to solve the spherical deconvolution problem, whose least-squares formulation is highly ill-posed. Here, we propose a model integrating the RL model with spatial regularization to solve the proposed double models, which improves noise resistance and imaging accuracy. Experimental results on simulated and real data show that the proposed method, which we call iRL, robustly reconstructs a more accurate fODF, and the quantitative index Piso performs better than fractional anisotropy and generalized fractional anisotropy. PMID:28081561

  18. Hubble Space Telescope photometry of the central regions of Virgo cluster elliptical galaxies. 3: Brightness profiles

    NASA Technical Reports Server (NTRS)

    Ferrarese, Laura; Bosch, Frank C. Van Den; Ford, Holland C.; Jaffe, Walter; O'Connell, Robert W.

    1994-01-01

    We have used the Planetary Camera on the Hubble Space Telescope (HST) to study the morphology and surface brightness parameters of a luminosity-limited sample of fourteen elliptical galaxies in the Virgo cluster. The total apparent blue magnitudes of the galaxies range between 9.4 and 13.4. In this paper, the core brightness profiles are presented, while the overall morphology and the isophotal shapes are discussed in two companion papers (Jaffe et al. (1994); van den Bosch et al. (1994)). We show that, in spite of the spherical aberration affecting the HST primary mirror, deconvolution techniques allow recovery of the brightness profile down to 0.2 arcsec from the center of the galaxies. We find that none of the galaxies has an isothermal core. On the basis of their morphological and photometric properties, the galaxies can be divided into two physically distinct groups, referred to as Type I and Type II. All of the Type I galaxies are classified as E1 to E3 in the Revised Shapley-Ames Catalog (Sandage & Tammann 1981), while Type II galaxies are classified as E5 to E7. The characteristics of Type II galaxies are explained by the presence of disk components on both the 1 arcsec and the 10 arcsec scales, while Type I galaxies correspond to the classical disk-free ellipticals.

  19. Vertical detachment energy of hydrated electron based on a modified form of solvent reorganization energy.

    PubMed

    Wang, Xing-Jian; Zhu, Quan; Li, Yun-Kui; Cheng, Xue-Min; Li, Xiang-Yuan; Fu, Ke-Xiang; He, Fu-Cheng

    2010-02-18

    In this work, the constrained equilibrium principle is introduced and applied to the derivations of the nonequilibrium solvation free energy and solvent reorganization energy in the process of removing the hydrated electron. Within the framework of the continuum model, a modified expression of the vertical detachment energy (VDE) of a hydrated electron in water is formulated. Making use of the spherical-cavity and point-charge approximations, the variation of the VDE with increasing water-cluster size has been inspected. We compare the present form of the VDE with the traditional one, and discuss how treating the cavity radius as either fixed or varying influences the VDE.

  20. Blind image deconvolution using the Fields of Experts prior

    NASA Astrophysics Data System (ADS)

    Dong, Wende; Feng, Huajun; Xu, Zhihai; Li, Qi

    2012-11-01

    In this paper, we present a method for single-image blind deconvolution. To mitigate its ill-posedness, we formulate the problem in a Bayesian probabilistic framework and use a prior named Fields of Experts (FoE), learnt from natural images, to regularize the latent image. Furthermore, due to the sparse distribution of the point spread function (PSF), we adopt a Student-t prior to regularize it. An improved alternating minimization (AM) approach is proposed to solve the resulting optimization problem. Experiments on both synthetic and real-world blurred images show that the proposed method can achieve results of high quality.

  1. Application of the Lucy–Richardson Deconvolution Procedure to High Resolution Photoemission Spectra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rameau, J.; Yang, H.-B.; Johnson, P.D.

    2010-07-01

    Angle-resolved photoemission has developed into one of the leading probes of the electronic structure and associated dynamics of condensed matter systems. As with any experimental technique, the ability to resolve features in the spectra is ultimately limited by the resolution of the instrumentation used in the measurement. Previously developed for sharpening astronomical images, the Lucy-Richardson deconvolution technique proves to be a useful tool for improving the photoemission spectra obtained in modern hemispherical electron spectrometers, where the photoelectron spectrum is displayed as a 2D image in energy and momentum space.

  2. Absolute Hugoniot measurements from a spherically convergent shock using x-ray radiography

    NASA Astrophysics Data System (ADS)

    Swift, Damian C.; Kritcher, Andrea L.; Hawreliak, James A.; Lazicki, Amy; MacPhee, Andrew; Bachmann, Benjamin; Döppner, Tilo; Nilsen, Joseph; Collins, Gilbert W.; Glenzer, Siegfried; Rothman, Stephen D.; Kraus, Dominik; Falcone, Roger W.

    2018-05-01

    The canonical high pressure equation of state measurement is to induce a shock wave in the sample material and measure two mechanical properties of the shocked material or shock wave. For accurate measurements, the experiment is normally designed to generate a planar shock which is as steady as possible in space and time, and a single state is measured. A converging shock strengthens as it propagates, so a range of shock pressures is induced in a single experiment. However, equation of state measurements must then account for spatial and temporal gradients. We have used x-ray radiography of spherically converging shocks to determine states along the shock Hugoniot. The radius-time history of the shock, and thus its speed, was measured by radiographing the position of the shock front as a function of time using an x-ray streak camera. The density profile of the shock was then inferred from the x-ray transmission at each instant of time. Simultaneous measurement of the density at the shock front and the shock speed determines an absolute mechanical Hugoniot state. The density profile was reconstructed using the known, unshocked density which strongly constrains the density jump at the shock front. The radiographic configuration and streak camera behavior were treated in detail to reduce systematic errors. Measurements were performed on the Omega and National Ignition Facility lasers, using a hohlraum to induce a spatially uniform drive over the outside of a solid, spherical sample and a laser-heated thermal plasma as an x-ray source for radiography. Absolute shock Hugoniot measurements were demonstrated for carbon-containing samples of different composition and initial density, up to temperatures at which K-shell ionization reduced the opacity behind the shock. Here we present the experimental method using measurements of polystyrene as an example.
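
    The "absolute mechanical Hugoniot state" mentioned above follows from the Rankine-Hugoniot jump conditions once the shock speed and the density at the shock front are measured. A minimal sketch with hypothetical, polystyrene-like numbers (not values from the paper):

```python
def hugoniot_state(rho0, rho1, us, p0=0.0):
    """Pressure and particle velocity behind a shock from the
    Rankine-Hugoniot jump conditions, given the initial density rho0,
    the shocked density rho1, and the shock speed us (SI units)."""
    up = us * (1.0 - rho0 / rho1)   # mass conservation across the front
    p = p0 + rho0 * us * up         # momentum conservation
    return p, up

# Hypothetical polystyrene-like numbers (illustrative, not measured):
# rho0 = 1050 kg/m^3, compressed 3.5x by a shock moving at 30 km/s.
p, up = hugoniot_state(1050.0, 3675.0, 30e3)
```

Measuring the density jump and the shock speed simultaneously thus pins down the pressure without reference to any other standard, which is what makes the measurement "absolute".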

  3. Chandra X-ray Spectroscopy of the Focused Wind In the Cygnus X-1 System I. The Non-Dip Spectrum in the Low/Hard State

    NASA Technical Reports Server (NTRS)

    Hanke, Manfred; Wilms, Jorn; Nowak, Michael A.; Pottschmidt, Katja; Schultz, Norbert S.; Lee, Julia C.

    2008-01-01

    We present analyses of a 50 ks observation of the supergiant X-ray binary system Cygnus X-1/HDE 226868 taken with the Chandra High Energy Transmission Grating Spectrometer (HETGS). Cyg X-1 was in its spectrally hard state, and the observation was performed during superior conjunction of the black hole, allowing for the spectroscopic analysis of the accreted stellar wind along the line of sight. A significant part of the observation covers X-ray dips as commonly observed for Cyg X-1 at this orbital phase; here, however, we analyze only the high count rate non-dip spectrum. The full 0.5-10 keV continuum can be described by a single model consisting of a disk, a narrow and a relativistically broadened Fe K line, and a power law component, which is consistent with simultaneous RXTE broad band data. We detect absorption edges from overabundant neutral O, Ne and Fe, and absorption line series from highly ionized ions, and infer column densities and Doppler shifts. With emission lines of He-like Mg XI, we detect two plasma components with velocities and densities consistent with the base of the spherical wind and a focused wind. A simple simulation of the photoionization zone suggests that large parts of the spherical wind outside of the focused stream are completely ionized, which is consistent with the low velocities (<200 km/s) observed in the absorption lines, as the position of absorbers in a spherical wind at low projected velocity is well constrained. Our observations provide input for models that couple the wind activity of HDE 226868 to the properties of the accretion flow onto the black hole.

  4. MAGNETO-FRICTIONAL MODELING OF CORONAL NONLINEAR FORCE-FREE FIELDS. I. TESTING WITH ANALYTIC SOLUTIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Y.; Keppens, R.; Xia, C.

    2016-09-10

    We report our implementation of the magneto-frictional method in the Message Passing Interface Adaptive Mesh Refinement Versatile Advection Code (MPI-AMRVAC). The method aims at applications where local adaptive mesh refinement (AMR) is essential to make follow-up dynamical modeling affordable. We quantify its performance in both domain-decomposed uniform grids and block-adaptive AMR computations, using all frequently employed force-free, divergence-free, and other vector comparison metrics. As test cases, we revisit the semi-analytic solution of Low and Lou in both Cartesian and spherical geometries, along with the topologically challenging Titov–Démoulin model. We compare different combinations of spatial and temporal discretizations, and find that the fourth-order central difference with a local Lax–Friedrichs dissipation term in a single-step marching scheme is an optimal combination. The initial condition is provided by the potential field, which is the potential field source surface model in spherical geometry. Various boundary conditions are adopted, ranging from fully prescribed cases where all boundaries are assigned with the semi-analytic models, to solar-like cases where only the magnetic field at the bottom is known. Our results demonstrate that all the metrics compare favorably to previous works in both Cartesian and spherical coordinates. Cases with several AMR levels perform in accordance with their effective resolutions. The magneto-frictional method in MPI-AMRVAC allows us to model a region of interest with high spatial resolution and large field of view simultaneously, as required by observation-constrained extrapolations using vector data provided with modern instruments. The applications of the magneto-frictional method to observations are shown in an accompanying paper.

  5. The Gravity field of Comet 67 P/Churyumov-Gerasimenko Expressed in Bispherical Harmonics

    NASA Astrophysics Data System (ADS)

    Andert, T.; Barriot, J. P.; Paetzold, M.; Sichoix, L.; Tellmann, S.; Häusler, B.

    2015-12-01

    On 6 August 2014, after a ten-year cruise, the ESA Rosetta spacecraft arrived at comet 67P/Churyumov-Gerasimenko. At that time the spacecraft was commanded to drift along with the comet at distances between 100 km and 50 km; the distance was then successfully lowered to 30 km in September 2014 and to 10 km in November 2014, and bound orbits could be achieved. Based on Doppler tracking data, the Rosetta radio science investigation (RSI) was able to determine the mass of the nucleus and its gravity field as a spherical harmonics series, in order to constrain the density and internal structure of the nucleus. The shape of the comet is complex; representing the gravity field as belonging to one single body, in either spherical or ellipsoidal harmonics series, gives more weight to the shape of the body than to its internal structure. The observed shape of the nucleus, however, offers the opportunity to interpret it as consisting of two different bodies, namely the "head" and the "feet" sections of 67P/Churyumov-Gerasimenko, both having a nearly ellipsoidal shape. In this new approach, the bispherical harmonics expansion, the comet nucleus is approximated by two independent lobes, each represented by its own spherical harmonics expansion. As a result of the bispherical harmonics representation, it is anticipated that the gravity field will gain higher accuracy and will be less dominated by the complex shape of the comet. We have derived analytical expressions for the gravity potential of a body and its derivatives in bispherical coordinates, and have applied this concept to comet 67P/Churyumov-Gerasimenko. The paper presents the bispherical harmonics representation of the gravity field and first results derived from this new concept.
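
    At zeroth order, the two-lobe idea can be illustrated by superposing the potentials of two point masses, one per lobe; the actual bispherical harmonics expansion carries far more structure than this. All masses and positions below are illustrative round numbers, not fitted 67P values.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def two_lobe_potential(point, lobe1, lobe2):
    """Gravitational potential of two point masses, a zeroth-order
    stand-in for a two-lobe (bispherical) representation of a contact
    binary. Each lobe is ((x, y, z) in meters, mass in kg)."""
    def term(lobe):
        center, mass = lobe
        r = math.dist(point, center)
        return -G * mass / r
    return term(lobe1) + term(lobe2)

# Roughly comet-like total mass of 1e13 kg split between two lobes
# about 2 km apart (illustrative numbers only).
head = ((0.0, 0.0, 0.0), 3.0e12)
body = ((2000.0, 0.0, 0.0), 7.0e12)
v = two_lobe_potential((1000.0, 3000.0, 0.0), head, body)
```

Expanding each lobe's potential in its own spherical harmonics series, instead of using a single expansion about one center, is what lets the bispherical representation track the dumbbell shape with low-degree terms.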

  6. The Signature of the Central Engine in the Weakest Relativistic Explosions: GRB 100316D

    NASA Astrophysics Data System (ADS)

    Margutti, R.; Soderberg, A. M.; Wieringa, M. H.; Edwards, P. G.; Chevalier, R. A.; Morsony, B. J.; Barniol Duran, R.; Sironi, L.; Zauderer, B. A.; Milisavljevic, D.; Kamble, A.; Pian, E.

    2013-11-01

    We present late-time radio and X-ray observations of the nearby sub-energetic gamma-ray burst (GRB) 100316D associated with supernova (SN) 2010bh. Our broad-band analysis constrains the explosion properties of GRB 100316D to be intermediate between highly relativistic, collimated GRBs and the spherical, ordinary hydrogen-stripped SNe. We find that ~10^49 erg is coupled to mildly relativistic (Γ = 1.5-2), quasi-spherical ejecta, expanding into a medium previously shaped by the progenitor mass loss at a rate of Ṁ ~ 10^-5 M_⊙ yr^-1 (for an assumed wind density profile and wind velocity v_w = 1000 km s^-1). The kinetic energy profile of the ejecta argues for the presence of a central engine and identifies GRB 100316D as one of the weakest central-engine-driven explosions detected to date. Emission from the central engine is responsible for an excess of soft X-ray radiation that dominates over the standard afterglow at late times (t > 10 days). We connect this phenomenology with the birth of the most rapidly rotating magnetars. Alternatively, accretion onto a newly formed black hole might explain the excess of radiation. However, significant departure from the standard fall-back scenario is required.

  7. Regional inversion of GRACE data for continental water mass time-variations. Comparison with global hydrology models, classical spherical harmonics and "mascons" solutions

    NASA Astrophysics Data System (ADS)

    Seoane, L.; Ramillien, G.; Frappart, F.; Biancale, R.; Gratton, S.; Bourgogne, S.

    2010-12-01

    Time series of 2°-by-2° constrained/unconstrained GRACE geoid solutions have been computed with a 10-day resolution by using a new regional method recently implemented at GRGS (Toulouse, France). This approach uses dynamical orbit analysis of GRACE Level-1 measurements, and especially accurate along-track KBRR residuals, to estimate continental water mass changes over large geographical regions. For validation, our GRACE-derived regional maps are compared to: (1) global hydrological model outputs (WGHM, LaD, NOAH), (2) the NASA "mascons" solutions based on spherical harmonics, and (3) the global solutions produced by GRGS and by CSR, GFZ, and JPL, filtered with different methodologies (Gaussian, destriped and smoothed, ICA). In this study, we focus on the annual time scale of water mass redistributions occurring in drainage basins such as the Amazon or the Congo. Each 2°-averaged surface element is characterized by its seasonal amplitude and phase. Although all sources are expected to provide quite comparable results for the continental water cycle, we attribute the residual differences to smoothing effects of the spatial constraints included in the "mascons" solutions and to the underestimation of seasonal amplitudes by global hydrological models.
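
    The per-cell seasonal amplitude and phase referred to above can be obtained by least-squares fitting a single annual harmonic to each cell's time series. A minimal stdlib-only sketch on synthetic data (the real processing operates on 10-day regional GRACE solutions):

```python
import math

def seasonal_fit(times, values, period=365.25):
    """Least-squares fit of y(t) = a*cos(wt) + b*sin(wt) + c, returning
    the annual amplitude sqrt(a^2 + b^2) and phase atan2(b, a).
    Minimal sketch of how per-cell amplitude/phase maps could be built
    from a series of 10-day solutions (synthetic data)."""
    w = 2.0 * math.pi / period
    rows = [[math.cos(w * t), math.sin(w * t), 1.0] for t in times]
    # Normal equations (X^T X) beta = X^T y, solved by Gaussian elimination.
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    xty = [sum(r[i] * v for r, v in zip(rows, values)) for i in range(3)]
    for i in range(3):                      # forward elimination
        for k in range(i + 1, 3):
            f = xtx[k][i] / xtx[i][i]
            xtx[k] = [a - f * b for a, b in zip(xtx[k], xtx[i])]
            xty[k] -= f * xty[i]
    beta = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                     # back substitution
        beta[i] = (xty[i] - sum(xtx[i][j] * beta[j] for j in range(i + 1, 3))) / xtx[i][i]
    a, b, _ = beta
    return math.hypot(a, b), math.atan2(b, a)

# Synthetic water-storage series: amplitude 3 cm, phase 1.0 rad, offset 5 cm.
ts = [10.0 * k for k in range(37)]          # about one year of 10-day solutions
ys = [3.0 * math.cos(2 * math.pi * t / 365.25 - 1.0) + 5.0 for t in ts]
amp, phase = seasonal_fit(ts, ys)
```

Comparing these fitted amplitudes and phases, cell by cell, against the same quantities fitted to model output is exactly the kind of validation described in the abstract.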

  8. Statistics of the geomagnetic secular variation for the past 5Ma

    NASA Technical Reports Server (NTRS)

    Constable, C. G.; Parker, R. L.

    1986-01-01

    A new statistical model is proposed for the geomagnetic secular variation over the past 5 Ma. Unlike previous models, the model makes use of statistical characteristics of the present-day geomagnetic field. The spatial power spectrum of the non-dipole field is consistent with a white source near the core-mantle boundary with a Gaussian distribution. After a suitable scaling, the spherical harmonic coefficients may be regarded as statistical samples from a single giant Gaussian process; this is the model of the non-dipole field. The model can be combined with an arbitrary statistical description of the dipole, and probability density functions and cumulative distribution functions can be computed for the declination and inclination that would be observed at any site on the Earth's surface. Global paleomagnetic data spanning the past 5 Ma are used to constrain the statistics of the dipole part of the field. A simple model is found to be consistent with the available data. An advantage of specifying the model in terms of the spherical harmonic coefficients is that it is a complete statistical description of the geomagnetic field, enabling us to test specific properties for a general description. Both intensity and directional data distributions may be tested to see if they satisfy the expected model distributions.
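
    The "giant Gaussian process" idea can be sketched by drawing each non-dipole spherical harmonic coefficient from a zero-mean Gaussian whose variance depends only on degree. The degree scaling used below (standard deviation proportional to (c/a)^l) is an illustrative stand-in for the published model's exact scaling, which carries additional degree-dependent factors.

```python
import random

def sample_nondipole_field(l_max=8, amplitude=1.0, seed=42):
    """Draw Gauss coefficients g_l^m, h_l^m for the non-dipole field
    (degrees l >= 2) as independent zero-mean Gaussians. The variance
    shrinks with degree following an illustrative white-at-the-core-
    mantle-boundary scaling; 'amplitude' is an arbitrary scale."""
    c_over_a = 3480.0 / 6371.0      # core radius over Earth radius
    rng = random.Random(seed)       # seeded for reproducibility
    coeffs = {}
    for l in range(2, l_max + 1):
        sigma = amplitude * c_over_a ** l
        for m in range(0, l + 1):
            coeffs[("g", l, m)] = rng.gauss(0.0, sigma)
            if m > 0:               # h_l^0 does not exist
                coeffs[("h", l, m)] = rng.gauss(0.0, sigma)
    return coeffs

coeffs = sample_nondipole_field()
```

From such an ensemble one can build the predicted declination/inclination distributions at any site and compare them with paleomagnetic data, which is the test the abstract describes.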

  9. Modeling the Gravitational Potential of a Cosmological Dark Matter Halo with Stellar Streams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanderson, Robyn E.; Hartke, Johanna; Helmi, Amina, E-mail: robyn@astro.columbia.edu

    2017-02-20

    Stellar streams result from the tidal disruption of satellites and star clusters as they orbit a host galaxy, and can be very sensitive probes of the gravitational potential of the host system. We select and study narrow stellar streams formed in a Milky-Way-like dark matter halo of the Aquarius suite of cosmological simulations, to determine if these streams can be used to constrain the present-day characteristic parameters of the halo's gravitational potential. We find that orbits integrated in both spherical and triaxial static Navarro–Frenk–White potentials reproduce the locations and kinematics of the various streams reasonably well. To quantify this further, we determine the best-fit potential parameters by maximizing the amount of clustering of the stream stars in the space of their actions. We show that using our set of Aquarius streams, we recover a mass profile that is consistent with the spherically averaged dark matter profile of the host halo, although we ignored both triaxiality and time evolution in the fit. This gives us confidence that such methods can be applied to the many streams that will be discovered by the Gaia mission to determine the gravitational potential of our Galaxy.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schollmeier, Marius S.; Loisel, Guillaume P.

    Spherical-crystal microscopes are used as high-resolution imaging devices for monochromatic x-ray radiography or for imaging the source itself. Crystals and Miller indices (hkl) have to be matched such that the resulting lattice spacing d is close to half the spectral wavelength used for imaging, to fulfill the Bragg equation with a Bragg angle near 90°, which reduces astigmatism. Only a few suitable crystal and spectral-line combinations have been identified for applications in the literature, suggesting that x-ray imaging using spherical crystals is constrained to a few chance matches. In this paper, after performing a systematic, automated search over more than 9 × 10^6 possible combinations for x-ray energies between 1 and 25 keV, for six crystals with arbitrary Miller-index combinations hkl between 0 and 20, we show that a matching, efficient crystal and spectral-line pair can be found for almost every He-α or K-α x-ray source for the elements Ne to Sn. Finally, using the data presented here, it should be possible to find a suitable imaging combination using an x-ray source that is specifically selected for a particular purpose, instead of relying on the limited number of existing crystal imaging systems that have been identified to date.
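
    The matching criterion can be sketched for cubic crystals, where the lattice spacing is d = a/√(h²+k²+l²) and Bragg's law λ = 2d·sinθ gives the Bragg angle directly. This simplified search ignores the structure-factor (extinction) rules that real crystals impose, which the paper's search must respect.

```python
import math

def bragg_matches(energy_kev, lattice_a, hkl_max=10, min_angle_deg=80.0):
    """Find cubic-crystal reflections whose Bragg angle for a given
    photon energy lies near 90 degrees (near-normal incidence, low
    astigmatism). Simplified sketch: extinction rules are ignored.
    lattice_a is the cubic lattice constant in Angstroms."""
    lam = 12.39842 / energy_kev           # photon wavelength in Angstroms
    matches = []
    for h in range(hkl_max + 1):
        for k in range(hkl_max + 1):
            for l in range(hkl_max + 1):
                if h == k == l == 0:
                    continue
                d = lattice_a / math.sqrt(h * h + k * k + l * l)
                s = lam / (2.0 * d)       # sin(theta) from Bragg's law
                if s <= 1.0:
                    theta = math.degrees(math.asin(s))
                    if theta >= min_angle_deg:
                        matches.append(((h, k, l), round(theta, 2)))
    return matches

# Example: a silicon-like cubic lattice (a = 5.431 A) viewing the
# Cu K-alpha line at 8.048 keV.
hits = bragg_matches(8.048, 5.431)
```

Looping the same test over many crystals and characteristic lines is, in essence, the automated search the abstract describes, just without the reflectivity and extinction bookkeeping.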

  11. Internal transport barriers in the National Spherical Torus Experiment

    NASA Astrophysics Data System (ADS)

    Yuh, H. Y.; Levinton, F. M.; Bell, R. E.; Hosea, J. C.; Kaye, S. M.; LeBlanc, B. P.; Mazzucato, E.; Peterson, J. L.; Smith, D. R.; Candy, J.; Waltz, R. E.; Domier, C. W.; Luhmann, N. C.; Lee, W.; Park, H. K.

    2009-05-01

    In the National Spherical Torus Experiment [M. Ono et al., Nucl. Fusion 41, 1435 (2001)], internal transport barriers (ITBs) are observed in reversed (negative) shear discharges where diffusivities for electron and ion thermal channels and momentum are reduced. While neutral beam heating can produce ITBs in both electron and ion channels, high harmonic fast wave heating can also produce electron ITBs (e-ITBs) under reversed magnetic shear conditions without momentum input. Interestingly, the location of the e-ITB does not necessarily match that of the ion ITB (i-ITB). The e-ITB location correlates best with the magnetic shear minima location determined by motional Stark effect constrained equilibria, whereas the i-ITB location better correlates with the location of maximum E ×B shearing rate. Measured electron temperature gradients in the e-ITB can exceed critical gradients for the onset of electron thermal gradient microinstabilities calculated by linear gyrokinetic codes. A high-k microwave scattering diagnostic shows locally reduced density fluctuations at wave numbers characteristic of electron turbulence for discharges with strongly negative magnetic shear versus weakly negative or positive magnetic shear. Reductions in fluctuation amplitude are found to be correlated with the local value of magnetic shear. These results are consistent with nonlinear gyrokinetic simulations predicting a reduction in electron turbulence under negative magnetic shear conditions despite exceeding critical gradients.

  12. Statistics of the geomagnetic secular variation for the past 5 m.y

    NASA Technical Reports Server (NTRS)

    Constable, C. G.; Parker, R. L.

    1988-01-01

    A new statistical model is proposed for the geomagnetic secular variation over the past 5 Ma. Unlike previous models, the model makes use of statistical characteristics of the present-day geomagnetic field. The spatial power spectrum of the non-dipole field is consistent with a white source near the core-mantle boundary with a Gaussian distribution. After a suitable scaling, the spherical harmonic coefficients may be regarded as statistical samples from a single giant Gaussian process; this is the model of the non-dipole field. The model can be combined with an arbitrary statistical description of the dipole, and probability density functions and cumulative distribution functions can be computed for the declination and inclination that would be observed at any site on the Earth's surface. Global paleomagnetic data spanning the past 5 Ma are used to constrain the statistics of the dipole part of the field. A simple model is found to be consistent with the available data. An advantage of specifying the model in terms of the spherical harmonic coefficients is that it is a complete statistical description of the geomagnetic field, enabling us to test specific properties for a general description. Both intensity and directional data distributions may be tested to see if they satisfy the expected model distributions.

  13. The Southampton-York Natural Scenes (SYNS) dataset: Statistics of surface attitude

    PubMed Central

    Adams, Wendy J.; Elder, James H.; Graf, Erich W.; Leyland, Julian; Lugtigheid, Arthur J.; Muryy, Alexander

    2016-01-01

    Recovering 3D scenes from 2D images is an under-constrained task; optimal estimation depends upon knowledge of the underlying scene statistics. Here we introduce the Southampton-York Natural Scenes dataset (SYNS: https://syns.soton.ac.uk), which provides comprehensive scene statistics useful for understanding biological vision and for improving machine vision systems. In order to capture the diversity of environments that humans encounter, scenes were surveyed at random locations within 25 indoor and outdoor categories. Each survey includes (i) spherical LiDAR range data, (ii) high-dynamic-range spherical imagery, and (iii) a panorama of stereo image pairs. We envisage many uses for the dataset and present one example: an analysis of surface attitude statistics, conditioned on scene category and viewing elevation. Surface normals were estimated using a novel adaptive scale selection algorithm. Across categories, surface attitude below the horizon is dominated by the ground plane (0° tilt). Near the horizon, probability density is elevated at 90°/270° tilt due to vertical surfaces (trees, walls). Above the horizon, probability density is elevated near 0° slant due to overhead structure such as ceilings and leaf canopies. These structural regularities represent potentially useful prior assumptions for human and machine observers, and may predict human biases in perceived surface attitude. PMID:27782103
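
    Surface attitude is commonly parameterized by the slant and tilt of the estimated surface normal. A minimal sketch of that conversion; the conventions chosen here (viewer along +z, tilt measured from the +x axis) are illustrative choices, not necessarily those of the SYNS analysis.

```python
import math

def surface_attitude(normal):
    """Slant (angle between the surface normal and the viewing
    direction, taken as +z) and tilt (orientation of the projected
    normal in the image plane), both in degrees, for a 3-vector."""
    x, y, z = normal
    n = math.sqrt(x * x + y * y + z * z)
    x, y, z = x / n, y / n, z / n
    slant = math.degrees(math.acos(z))
    tilt = math.degrees(math.atan2(y, x)) % 360.0
    return slant, tilt

# A surface facing the viewer head-on has zero slant (tilt undefined,
# returned as 0 by atan2's convention).
assert surface_attitude((0.0, 0.0, 1.0)) == (0.0, 0.0)
```

Histogramming slant/tilt pairs over all estimated normals, split by scene category and elevation band, yields exactly the attitude statistics summarized in the abstract.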

  14. Supercontinent cycles, true polar wander, and very long-wavelength mantle convection

    NASA Astrophysics Data System (ADS)

    Zhong, Shijie; Zhang, Nan; Li, Zheng-Xiang; Roberts, James H.

    2007-09-01

    We show in this paper that mobile-lid mantle convection in a three-dimensional spherical shell with observationally constrained mantle viscosity structure, and realistic convective vigor and internal heating rate is characterized by either a spherical harmonic degree-1 planform with a major upwelling in one hemisphere and a major downwelling in the other hemisphere when continents are absent, or a degree-2 planform with two antipodal major upwellings when a supercontinent is present. We propose that due to modulation of continents, these two modes of mantle convection alternate within the Earth's mantle, causing the cyclic processes of assembly and breakup of supercontinents including Rodinia and Pangea in the last 1 Ga. Our model suggests that the largely degree-2 structure for the present-day mantle with the Africa and Pacific antipodal superplumes, is a natural consequence of this dynamic process of very long-wavelength mantle convection interacting with supercontinent Pangea. Our model explains the basic features of true polar wander (TPW) events for Rodinia and Pangea including their equatorial locations and large variability of TPW inferred from paleomagnetic studies. Our model also suggests that TPW is expected to be more variable and large during supercontinent assembly, but small after a supercontinent acquires its equatorial location and during its subsequent dispersal.

  15. Profile measurements in the plasma edge of mega amp spherical tokamak using a ball pen probe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walkden, N. R., E-mail: nrw504@york.ac.uk; Department of Physics, York Plasma Institute, University of York, Heslington, York YO10 5DD; Adamek, J.

    The ball pen probe (BPP) technique is used successfully to make profile measurements of plasma potential, electron temperature, and radial electric field on the Mega Amp Spherical Tokamak. The potential profile measured by the BPP is shown to differ significantly from the floating potential, both in polarity and in profile shape. By combining the BPP potential and the floating potential, the electron temperature can be measured and compared with the Thomson scattering (TS) diagnostic. Excellent agreement between the two diagnostics is obtained when secondary electron emission is accounted for in the floating potential. From the BPP profile, an estimate of the radial electric field is extracted, which is shown to be of the order of ∼1 kV/m and to increase with plasma current. Corrections to the BPP measurement, constrained by the TS comparison, introduce uncertainty into the E_R measurements. The uncertainty is most significant in the electric field well inside the separatrix. The electric field is used to estimate toroidal and poloidal rotation velocities from E × B motion. This paper further demonstrates the ability of the ball pen probe to make valuable and important measurements in the boundary plasma of a tokamak.
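
    The temperature measurement rests on combining the two potentials: the floating potential sits below the plasma potential by a multiple of the electron temperature, and the BPP reading approximates the plasma potential. A sketch with an illustrative sheath coefficient; the actual coefficient used on MAST (including the secondary-electron-emission correction mentioned above) differs and is species- and machine-dependent.

```python
def electron_temperature(phi_bpp, v_float, alpha=2.5):
    """Electron temperature in eV from the relation
    V_float = phi_plasma - alpha * T_e, taking the ball-pen-probe
    potential as phi_plasma. alpha = 2.5 is an illustrative sheath
    coefficient, not the value used in the paper."""
    return (phi_bpp - v_float) / alpha

# Example: BPP reads +15 V where the floating potential is -20 V.
te = electron_temperature(15.0, -20.0)   # -> 14.0 eV
```

Because the BPP and floating pins can be swept together along the probe's reciprocation, this single subtraction yields a radial T_e profile in one pass.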

  16. Impact of sensor's point spread function on land cover characterization: Assessment and deconvolution

    USGS Publications Warehouse

    Huang, C.; Townshend, J.R.G.; Liang, S.; Kalluri, S.N.V.; DeFries, R.S.

    2002-01-01

    Measured and modeled point spread functions (PSF) of sensor systems indicate that a significant portion of the recorded signal of each pixel of a satellite image originates from outside the area represented by that pixel. This hinders the ability to derive surface information from satellite images on a per-pixel basis. In this study, the impact of the PSF of the Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m bands was assessed using four images representing different landscapes. Experimental results showed that though differences between pixels derived with and without PSF effects were small on the average, the PSF generally brightened dark objects and darkened bright objects. This impact of the PSF lowered the performance of a support vector machine (SVM) classifier by 5.4% in overall accuracy and increased the overall root mean square error (RMSE) by 2.4% in estimating subpixel percent land cover. An inversion method based on the known PSF model reduced the signals originating from surrounding areas by as much as 53%. This method differs from traditional PSF inversion deconvolution methods in that the PSF was adjusted with lower weighting factors for signals originating from neighboring pixels than those specified by the PSF model. By using this deconvolution method, the lost classification accuracy due to residual impact of PSF effects was reduced to only 1.66% in overall accuracy. The increase in the RMSE of estimated subpixel land cover proportions due to the residual impact of PSF effects was reduced to 0.64%. Spatial aggregation also effectively reduced the errors in estimated land cover proportion images. About 50% of the estimation errors were removed after applying the deconvolution method and aggregating derived proportion images to twice their dimensional pixel size. © 2002 Elsevier Science Inc. All rights reserved.
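
    The adjacency effect described above (dark objects brightened, bright objects darkened) is just convolution with a normalized PSF. A toy 1-D stand-in for the 2-D MODIS PSF, with made-up reflectance values:

```python
def apply_psf(signal, psf):
    """Convolve a 1-D 'scene' with a normalized point spread function,
    clamping indices at the edges, to illustrate the adjacency effect:
    each pixel collects signal from its neighbors, so dark targets
    brighten and bright targets darken. Toy 1-D stand-in for the
    2-D sensor PSF discussed in the abstract."""
    n, k = len(signal), len(psf)
    half = k // 2
    out = []
    for i in range(n):
        acc = 0.0
        for j in range(k):
            idx = min(max(i + j - half, 0), n - 1)  # edge replication
            acc += signal[idx] * psf[j]
        out.append(acc)
    return out

# Dark water (0.05) next to bright sand (0.60); the PSF leaks 25% of
# the signal in from each neighboring pixel.
scene = [0.05, 0.05, 0.05, 0.60, 0.60, 0.60]
blurred = apply_psf(scene, [0.25, 0.5, 0.25])
```

Running this forward model in reverse, with down-weighted neighbor contributions, is the spirit of the modified deconvolution the study proposes.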

  17. A gene profiling deconvolution approach to estimating immune cell composition from complex tissues.

    PubMed

    Chen, Shu-Hwa; Kuo, Wen-Yu; Su, Sheng-Yao; Chung, Wei-Chun; Ho, Jen-Ming; Lu, Henry Horng-Shing; Lin, Chung-Yen

    2018-05-08

    A newly emerged class of cancer treatments utilizes the intrinsic immune surveillance mechanism that is silenced by malignant cells. Hence, studies of tumor-infiltrating lymphocyte (TIL) populations are key to the success of advanced treatments. In addition to laboratory methods such as immunohistochemistry and flow cytometry, in silico gene expression deconvolution methods are available for analyses of the relative proportions of immune cell types. Herein, we used microarray data from the public domain to profile the gene expression patterns of twenty-two immune cell types. Initially, outliers were detected based on the consistency of gene profiling clustering results with the original cell phenotype annotation. Subsequently, we filtered out genes that are expressed in non-hematopoietic normal tissues and cancer cells. For every pair of immune cell types, we ran t-tests for each gene and defined differentially expressed genes (DEGs) from this comparison. Equal numbers of DEGs were then collected as candidate lists, and the numbers of conditions and minimal values for building signature matrices were calculated. Finally, we used ν-support vector regression to construct a deconvolution model. The performance of our system was evaluated using blood biopsies from 20 adults, in which 9 immune cell types were identified using flow cytometry. The present computations performed better than current state-of-the-art deconvolution methods. Finally, we implemented the proposed method in R and tested its extensibility and usability on Windows, MacOS, and Linux operating systems. The method, MySort, is wrapped as a pluggable tool for the Galaxy platform, and usage details are available at https://testtoolshed.g2.bx.psu.edu/view/moneycat/mysort/e3afe097e80a .
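
    The deconvolution step solves mixture ≈ signature × fractions for non-negative cell fractions. The paper uses ν-support vector regression; the sketch below swaps in projected gradient descent on the squared residual, which captures the same idea with stdlib-only code. All numbers are hypothetical.

```python
def deconvolve_fractions(signature, mixture, steps=5000, lr=0.002):
    """Estimate cell-type proportions from a bulk expression profile
    by deconvolving against a signature matrix (genes x cell types).
    Stand-in for the paper's nu-SVR: projected gradient descent on
    ||S f - m||^2 with f >= 0, normalized to sum to 1 at the end."""
    n_genes, n_cells = len(signature), len(signature[0])
    f = [1.0 / n_cells] * n_cells
    for _ in range(steps):
        resid = [sum(signature[g][c] * f[c] for c in range(n_cells)) - mixture[g]
                 for g in range(n_genes)]
        for c in range(n_cells):
            grad = 2.0 * sum(signature[g][c] * resid[g] for g in range(n_genes))
            f[c] = max(0.0, f[c] - lr * grad)   # project onto f >= 0
    total = sum(f) or 1.0
    return [x / total for x in f]

# Two hypothetical cell types, three marker genes; the true mix is 30/70,
# so the bulk profile is S @ [0.3, 0.7] = [3.7, 7.3, 5.0].
S = [[10.0, 1.0], [1.0, 10.0], [5.0, 5.0]]
m = [3.7, 7.3, 5.0]
fractions = deconvolve_fractions(S, m)
```

The real problem has tens of cell types and hundreds of signature genes, which is where the robustness of ν-SVR over plain least squares matters.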

  18. Combining a Deconvolution and a Universal Library Search Algorithm for the Nontarget Analysis of Data-Independent Acquisition Mode Liquid Chromatography-High-Resolution Mass Spectrometry Results.

    PubMed

    Samanipour, Saer; Reid, Malcolm J; Bæk, Kine; Thomas, Kevin V

    2018-04-17

    Nontarget analysis is considered one of the most comprehensive tools for the identification of unknown compounds in a complex sample analyzed via liquid chromatography coupled to high-resolution mass spectrometry (LC-HRMS). Due to the complexity of the data generated via LC-HRMS, the data-dependent acquisition mode, which produces the MS2 spectra of a limited number of precursor ions, has been one of the most common approaches used during nontarget screening. However, the data-independent acquisition mode produces highly complex spectra that require proper deconvolution and library search algorithms. We have developed a deconvolution algorithm and a universal library search algorithm (ULSA) for the analysis of complex spectra generated via data-independent acquisition. These algorithms were validated and tested using both semisynthetic and real environmental data. A total of 6000 randomly selected spectra from MassBank were introduced across the total ion chromatograms of 15 sludge extracts at three levels of background complexity for the validation of the algorithms via semisynthetic data. The deconvolution algorithm successfully extracted more than 60% of the added ions in the analytical signal for 95% of the processed spectra (i.e., 3 complexity levels multiplied by 6000 spectra). The ULSA ranked the correct spectra among the top three candidates for more than 95% of cases. We further tested the algorithms with 5 wastewater effluent extracts for 59 artificial unknown analytes (i.e., their presence or absence was confirmed via target analysis). The algorithms did not produce any false identifications while correctly identifying ∼70% of the total inquiries. The implications, capabilities, and limitations of both algorithms are discussed.
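
    The library-ranking step of a search algorithm like the ULSA can be illustrated with a dot-product (cosine) similarity score; the ULSA's actual scoring details are not given here, so this is a generic sketch with made-up spectra on a common m/z grid.

```python
import numpy as np

def cosine_score(query, ref):
    """Dot-product (cosine) similarity between two aligned spectra."""
    q = query / np.linalg.norm(query)
    r = ref / np.linalg.norm(ref)
    return float(q @ r)

# Toy library of three reference spectra (intensities at 5 shared m/z bins).
library = {
    "caffeine":  np.array([0.0, 5.0, 1.0, 0.0, 9.0]),
    "atrazine":  np.array([3.0, 0.0, 7.0, 1.0, 0.0]),
    "ibuprofen": np.array([1.0, 1.0, 0.0, 8.0, 2.0]),
}

# Deconvoluted query: a caffeine-like spectrum with mild background.
query = np.array([0.2, 4.8, 1.1, 0.1, 8.7])

ranked = sorted(library, key=lambda k: cosine_score(query, library[k]),
                reverse=True)
print(ranked[0])   # caffeine
```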

  19. Transformation of chlorinated paraffins to olefins during metal work and thermal exposure - Deconvolution of mass spectra and kinetics.

    PubMed

    Schinkel, Lena; Lehner, Sandro; Knobloch, Marco; Lienemann, Peter; Bogdal, Christian; McNeill, Kristopher; Heeb, Norbert V

    2018-03-01

    Chlorinated paraffins (CPs) are high-production-volume chemicals widely used as additives in metal working fluids, in which CPs are exposed to hot metal surfaces that may induce degradation processes. We hypothesized that the elimination of hydrochloric acid would transform CPs into chlorinated olefins (COs). Mass spectrometry is widely used to detect CPs, mostly in the selected ion monitoring (SIM) mode, evaluating 2-3 ions at mass resolutions R < 20,000. This approach is not suited to detect COs, because their mass spectra strongly overlap with those of CPs. We applied a mathematical deconvolution method based on full-scan MS data to separate interfered CP/CO spectra. Metal drilling indeed induced HCl losses, and CO proportions in exposed mixtures of chlorotridecanes increased. Thermal exposure of chlorotridecanes at 160, 180, 200 and 220 °C also induced dehydrohalogenation reactions, and CO proportions again increased. Deconvolution of the respective mass spectra is needed to study the CP transformation kinetics without bias from CO interferences. Apparent first-order rate constants (kapp) increased up to 0.17, 0.29 and 0.46 h⁻¹ for penta-, hexa- and heptachloro-tridecanes exposed at 220 °C. The respective half-lives (τ1/2) decreased from 4.0 to 2.4 and 1.5 h. Thus, higher chlorinated paraffins degrade faster than lower chlorinated ones. In conclusion, exposure of CPs during metal drilling and thermal treatment induced HCl losses and CO formation, and it is expected that CPs and COs are co-released from such processes. Full-scan mass spectra and subsequent deconvolution of interfered signals are a promising approach to tackle the CP/CO problem when mass resolution is insufficient.
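
    The reported half-lives follow directly from the apparent first-order rate constants via τ1/2 = ln 2 / kapp; a quick check reproduces the abstract's values to within rounding of the rate constants:

```python
import numpy as np

# Apparent first-order rate constants at 220 degC (from the abstract), h^-1.
k_app = {"pentachloro": 0.17, "hexachloro": 0.29, "heptachloro": 0.46}

# First-order kinetics: half-life t_1/2 = ln(2) / k_app.
for name, k in k_app.items():
    print(f"{name}-tridecane: t1/2 = {np.log(2) / k:.1f} h")
```

    This yields approximately 4.1, 2.4 and 1.5 h, matching the reported 4.0, 2.4 and 1.5 h once the rounding of kapp is taken into account.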

  20. Raman Spectra of Crystalline Double Calcium Orthovanadates Ca10M(VO4)7 (M = Li, K, Na) and Their Interpretation Based on Deconvolution Into Voigt Profiles

    NASA Astrophysics Data System (ADS)

    Khodasevich, I. A.; Voitikov, S. V.; Orlovich, V. A.; Kosmyna, M. B.; Shekhovtsov, A. N.

    2016-09-01

    Unpolarized spontaneous Raman spectra of crystalline double calcium orthovanadates Ca10M(VO4)7 (M = Li, K, Na) in the range 150-1600 cm⁻¹ were measured. Two vibrational bands with full-widths at half-maximum (FWHM) of 37-50 cm⁻¹ were found in the regions 150-500 and 700-1000 cm⁻¹. The band shapes were approximated well by deconvolution into Voigt profiles. The band at 700-1000 cm⁻¹ was stronger and was deconvoluted into eight Voigt profiles. The frequencies of the two strong lines were ~848 and ~862 cm⁻¹ for Ca10Li(VO4)7; ~850 and ~866 cm⁻¹ for Ca10Na(VO4)7; and ~844 and ~866 cm⁻¹ for Ca10K(VO4)7. The Lorentzian width parameters of these lines in the Voigt profiles were ~5 times greater than the Gaussian width parameters. The FWHM of the Voigt profiles were ~18-42 cm⁻¹; the two strongest lines had widths of 21-25 cm⁻¹. The vibrational band at 300-500 cm⁻¹ was ~5-6 times weaker than that at 700-1000 cm⁻¹ and was deconvoluted into four lines with widths of 25-40 cm⁻¹. The large FWHM of the Raman lines indicated that the crystal structures were disordered. These crystals could be of interest for Raman conversion of pico- and femtosecond laser pulses because of the intense vibrations with large FWHM in the Raman spectra.
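
    Band deconvolution of this kind fits each line with a Voigt profile (the convolution of a Gaussian and a Lorentzian). A common numerical stand-in is the pseudo-Voigt, a linear mix of the two shapes with a shared FWHM; the sketch below models the two strong Ca10Li(VO4)7 lines with illustrative amplitudes and an assumed mixing parameter eta:

```python
import numpy as np

def pseudo_voigt(x, x0, fwhm, eta):
    """Pseudo-Voigt: linear mix of a Gaussian and a Lorentzian with equal
    FWHM -- a common approximation to the true Voigt convolution."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    gamma = fwhm / 2.0
    gauss = np.exp(-0.5 * ((x - x0) / sigma) ** 2)
    lorentz = 1.0 / (1.0 + ((x - x0) / gamma) ** 2)
    return eta * lorentz + (1.0 - eta) * gauss

# Model the two strong Raman lines of Ca10Li(VO4)7 (positions from the
# abstract; amplitudes, widths here and eta are illustrative only).
x = np.linspace(800.0, 900.0, 2001)          # Raman shift, cm^-1
band = (1.0 * pseudo_voigt(x, 848.0, 23.0, 0.8)
        + 0.9 * pseudo_voigt(x, 862.0, 23.0, 0.8))
print(x[np.argmax(band)])   # band maximum lies between the two line centres
```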

  1. Acceleration of image-based resolution modelling reconstruction using an expectation maximization nested algorithm.

    PubMed

    Angelis, G I; Reader, A J; Markiewicz, P J; Kotasidis, F A; Lionheart, W R; Matthews, J C

    2013-08-07

    Recent studies have demonstrated the benefits of a resolution model within iterative reconstruction algorithms in an attempt to account for effects that degrade the spatial resolution of the reconstructed images. However, these algorithms suffer from slower convergence rates, compared to algorithms where no resolution model is used, due to the additional need to solve an image deconvolution problem. In this paper, a recently proposed algorithm, which decouples the tomographic and image deconvolution problems within an image-based expectation maximization (EM) framework, was evaluated. This separation is convenient, because more computational effort can be placed on the image deconvolution problem and therefore accelerate convergence. Since the computational cost of solving the image deconvolution problem is relatively small, multiple image-based EM iterations do not significantly increase the overall reconstruction time. The proposed algorithm was evaluated using 2D simulations, as well as measured 3D data acquired on the high-resolution research tomograph. Results showed that bias reduction can be accelerated by interleaving multiple iterations of the image-based EM algorithm solving the resolution model problem, with a single EM iteration solving the tomographic problem. Significant improvements were observed particularly for voxels that were located on the boundaries between regions of high contrast within the object being imaged and for small regions of interest, where resolution recovery is usually more challenging. Minor differences were observed using the proposed nested algorithm, compared to the single iteration normally performed, when an optimal number of iterations are performed for each algorithm. However, using the proposed nested approach convergence is significantly accelerated enabling reconstruction using far fewer tomographic iterations (up to 70% fewer iterations for small regions). 
    Nevertheless, the optimal number of nested image-based EM iterations is difficult to define and should be selected according to the given application.
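
    The image-space EM update at the heart of such nested schemes has the familiar Richardson-Lucy multiplicative form. A minimal 1-D sketch (hypothetical Gaussian PSF, noiseless data) shows why spending extra iterations on the cheap image-space deconvolution problem sharpens the estimate:

```python
import numpy as np

def em_deconv(blurred, psf, n_iter):
    """Image-space EM (Richardson-Lucy) update:
    x <- x * [psf_flipped (*) (y / (psf (*) x))], (*) = convolution."""
    x = np.full_like(blurred, blurred.mean())   # flat positive start
    psf_flip = psf[::-1]
    for _ in range(n_iter):
        est = np.convolve(x, psf, mode="same")
        ratio = blurred / np.maximum(est, 1e-12)
        x = x * np.convolve(ratio, psf_flip, mode="same")
    return x

# Hypothetical 1-D "image": two point sources blurred by a Gaussian PSF.
truth = np.zeros(64); truth[20] = 1.0; truth[30] = 0.6
psf = np.exp(-0.5 * (np.arange(-6, 7) / 2.0) ** 2); psf /= psf.sum()
blurred = np.convolve(truth, psf, mode="same")

# More inner EM iterations give a sharper estimate (smaller residual).
few  = em_deconv(blurred, psf, n_iter=5)
many = em_deconv(blurred, psf, n_iter=50)
err = lambda x: np.linalg.norm(x - truth)
print(err(few) > err(many))   # True: extra image-space iterations help
```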

  2. Continuous monitoring of high-rise buildings using seismic interferometry

    NASA Astrophysics Data System (ADS)

    Mordret, A.; Sun, H.; Prieto, G. A.; Toksoz, M. N.; Buyukozturk, O.

    2016-12-01

    The linear seismic response of a building is commonly extracted from ambient vibration measurements. Seismic deconvolution interferometry performed on ambient vibration measurements can also be used to estimate the dynamic characteristics of a building, such as the velocity of shear-waves travelling inside the building as well as a damping parameter depending on the intrinsic attenuation of the building and the soil-structure coupling. The continuous nature of the ambient vibrations allows us to measure these parameters repeatedly and to observe their temporal variations. We used 2 weeks of ambient vibration recorded by 36 accelerometers installed in the Green Building on the Massachusetts Institute of Technology campus (Cambridge, MA) to continuously monitor the shear-wave speed and the attenuation factor of the building. Due to the low strain of the ambient vibrations, the observed changes are totally reversible. The relative velocity changes between a reference deconvolution function and the current deconvolution functions are measured with two different methods: 1) the Moving Window Cross-Spectral technique and 2) the stretching technique. Both methods show similar results. We show that measuring the stretching coefficient for the deconvolution functions filtered around the fundamental mode frequency is equivalent to measuring the wandering of the fundamental frequency in the raw ambient vibration data. By comparing these results with local weather parameters, we show that the relative air humidity is the factor dominating the relative seismic velocity variations in the Green Building, as well as the wandering of the fundamental mode. The one-day periodic variations are affected by both the temperature and the humidity. The attenuation factor, measured as the exponential decay of the fundamental mode waveforms, shows a more complex behaviour with respect to the weather measurements.
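
    The stretching technique can be sketched as a grid search over a stretch factor applied to the reference deconvolution function; the correlation maximum gives the relative velocity change dv/v = -ε. The waveform and parameters below are synthetic, chosen only for illustration:

```python
import numpy as np

def stretching_dvv(reference, current, t, eps_grid):
    """Grid-search the stretch factor eps: current(t) ~ reference(t*(1+eps)).
    The relative velocity change is dv/v = -eps at the correlation maximum."""
    best_eps, best_cc = 0.0, -np.inf
    for eps in eps_grid:
        stretched = np.interp(t * (1.0 + eps), t, reference)
        cc = np.corrcoef(stretched, current)[0, 1]
        if cc > best_cc:
            best_eps, best_cc = eps, cc
    return best_eps, best_cc

# Synthetic deconvolution function: decaying oscillation around a
# hypothetical fundamental mode (illustrative frequency and damping).
t = np.linspace(0.0, 10.0, 2001)
ref = np.exp(-0.3 * t) * np.sin(2.0 * np.pi * 0.7 * t)

true_eps = 0.004                               # 0.4 % change
cur = np.interp(t * (1.0 + true_eps), t, ref)  # stretched copy of reference

eps_grid = np.linspace(-0.01, 0.01, 201)       # 1e-4 resolution
eps, cc = stretching_dvv(ref, cur, t, eps_grid)
print(eps)   # ~0.004, i.e. dv/v = -0.4 %
```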

  3. Constraining Data Mining with Physical Models: Voltage- and Oxygen Pressure-Dependent Transport in Multiferroic Nanostructures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strelcov, Evgheni; Belianinov, Alexei; Hsieh, Ying-Hui

    Development of new generation electronic devices requires understanding and controlling the electronic transport in ferroic, magnetic, and optical materials, which is hampered by two factors. First, the complications of working at the nanoscale, where interfaces, grain boundaries, defects, and so forth dictate the macroscopic characteristics. Second, the convolution of the response signals, stemming from the fact that several physical processes may be activated simultaneously. Here, we present a method of solving these challenges via a combination of atomic force microscopy and data mining analysis techniques. Rational selection of the latter allows application of physical constraints and enables direct interpretation of the statistically significant behaviors in the framework of the chosen physical model, thus distilling physical meaning out of raw data. We demonstrate our approach with an example of deconvolution of complex transport behavior in a bismuth ferrite-cobalt ferrite nanocomposite in ambient and ultrahigh vacuum environments. The measured signal is apportioned into four electronic transport patterns, showing different dependence on partial oxygen and water vapor pressure. These patterns are described in terms of Ohmic conductance and Schottky emission models in the light of surface electrochemistry. Finally, deep data analysis allows extraction of local dopant concentrations and barrier heights, empowering our understanding of the underlying dynamic mechanisms of resistive switching.

  4. Receiver functions from west Antarctica; crust and mantle properties from POLENET

    NASA Astrophysics Data System (ADS)

    Aster, R. C.; Chaput, J. A.; Hansen, S. E.; Nyblade, A.; Wiens, D. A.; Huerta, A. D.; Wilson, T. J.; Anandakrishnan, S.

    2011-12-01

    We use receiver functions to extract crustal thickness and mantle transition zone depths across a wide extent of West Antarctica and the Transantarctic mountains using POLENET data, including recently recovered data from a 14-station West Antarctic Rift Zone transect. An adaptive approach for generating and analyzing P-receiver functions over ice sheets and sedimentary basins (similar to Winberry and Anandakrishnan, 2004) is applied using an extended time multitaper deconvolution algorithm and forward modeling synthetic seismogram fitting. We model P-S receiver functions via a layer stripping methodology (beginning with the ice sheet, if present), and fit increasingly longer sections of synthetic receiver functions to model the multiples observed in the data derived receiver functions. We additionally calculate S-P receiver functions, which provide complementary structural constraints, to generate consistent common conversion point stacks to image crustal and upper mantle discontinuities under West Antarctica. Crust throughout West Antarctica is generally thin (23-29 km; comparable to the U.S. Basin and Range) with relative thickening under the Marie Byrd Land volcanic province (to 32 km) and the Transantarctic Mountains. All constrained west Antarctic crust is substantially thicker than that in the vicinity of Ross Island, where crust as thin as 17 km is inferred in the Terror Rift region.

  5. Determination of Earth outgoing radiation using a constellation of satellites

    NASA Astrophysics Data System (ADS)

    Gristey, Jake; Chiu, Christine; Gurney, Robert; Han, Shin-Chan; Morcrette, Cyril

    2017-04-01

    The outgoing radiation fluxes at the top of the atmosphere, referred to as Earth outgoing radiation (EOR), constitute a vital component of the Earth's energy budget. This EOR exhibits strong diurnal signatures and is inherently connected to the rapidly evolving scene from which the radiation originates, so our ability to accurately monitor EOR with sufficient temporal resolution and spatial coverage is crucial for weather and climate studies. Despite vast improvements in satellite observations in recent decades, achieving these criteria remains challenging from current measurements. A technology revolution in small satellites and sensor miniaturisation has created a new and exciting opportunity for a novel, viable and sustainable observation strategy from a constellation of satellites, capable of providing both global coverage and high temporal resolution simultaneously. To explore the potential of a constellation approach for observing EOR we perform a series of theoretical simulation experiments. Using the results from these simulation experiments, we will demonstrate a baseline constellation configuration capable of accurately monitoring global EOR at unprecedented temporal resolution. We will also show whether it is possible to reveal synoptic scale, fast evolving phenomena by applying a deconvolution technique to the simulated measurements. The ability to observe and understand the relationship between these phenomena and changes in EOR is of fundamental importance in constraining future warming of our climate system.

  6. Constraining Data Mining with Physical Models: Voltage- and Oxygen Pressure-Dependent Transport in Multiferroic Nanostructures

    DOE PAGES

    Strelcov, Evgheni; Belianinov, Alexei; Hsieh, Ying-Hui; ...

    2015-08-27

    Development of new generation electronic devices requires understanding and controlling the electronic transport in ferroic, magnetic, and optical materials, which is hampered by two factors. First, the complications of working at the nanoscale, where interfaces, grain boundaries, defects, and so forth dictate the macroscopic characteristics. Second, the convolution of the response signals, stemming from the fact that several physical processes may be activated simultaneously. Here, we present a method of solving these challenges via a combination of atomic force microscopy and data mining analysis techniques. Rational selection of the latter allows application of physical constraints and enables direct interpretation of the statistically significant behaviors in the framework of the chosen physical model, thus distilling physical meaning out of raw data. We demonstrate our approach with an example of deconvolution of complex transport behavior in a bismuth ferrite-cobalt ferrite nanocomposite in ambient and ultrahigh vacuum environments. The measured signal is apportioned into four electronic transport patterns, showing different dependence on partial oxygen and water vapor pressure. These patterns are described in terms of Ohmic conductance and Schottky emission models in the light of surface electrochemistry. Finally, deep data analysis allows extraction of local dopant concentrations and barrier heights, empowering our understanding of the underlying dynamic mechanisms of resistive switching.

  7. Traversable geometric dark energy wormholes constrained by astrophysical observations

    NASA Astrophysics Data System (ADS)

    Wang, Deng; Meng, Xin-he

    2016-09-01

    In this paper, we introduce astrophysical observations into wormhole research. We investigate the evolution of the dark energy equation of state parameter ω by constraining the dark energy model, so that we can determine in which stage of the universe wormholes can exist by using the condition ω < -1. As a concrete instance, we study Ricci dark energy (RDE) traversable wormholes constrained by astrophysical observations. In particular, we find from Fig. 5 of this work that when the effective equation of state parameter ω_X < -1 (or z < 0.109), i.e., when the null energy condition (NEC) is clearly violated, the wormholes will exist (open). Subsequently, six specific solutions of statically and spherically symmetric traversable wormholes supported by the RDE fluids are obtained. Except for the case of a constant redshift function, where the solution is not only asymptotically flat but also traversable, the five remaining solutions are all non-asymptotically flat; therefore, the exotic matter from the RDE fluids is spatially distributed in the vicinity of the throat. Furthermore, we analyze the physical characteristics and properties of the RDE traversable wormholes. It is worth noting that, using the astrophysical observations, we obtain constraints on the parameters of the RDE model, explore the types of exotic RDE fluids in different stages of the universe, limit the number of available models for wormhole research, theoretically reduce the number of wormholes corresponding to different parameters of the RDE model, and provide a clearer picture for wormhole investigations from the new perspective of observational cosmology.

  8. Thermoluminescence of nanocrystalline CaSO4:Dy for gamma dosimetry and calculation of trapping parameters using deconvolution method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandlik, Nandkumar, E-mail: ntmandlik@gmail.com; Patil, B. J.; Bhoraskar, V. N.

    2014-04-24

    Nanorods of CaSO4:Dy with diameters of 20 nm and lengths of 200 nm have been synthesized by the chemical coprecipitation method. The samples were irradiated with gamma radiation at doses varying from 0.1 Gy to 50 kGy, and their TL characteristics have been studied. The TL dose response shows a linear behavior up to 5 kGy and saturates with further increase in dose. A Computerized Glow Curve Deconvolution (CGCD) program was used for the analysis of the TL glow curves, and trapping parameters for the various peaks have been calculated using this program.
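
    CGCD analysis fits each glow peak with an analytical first-order expression; a widely used choice is the Kitis (1998) approximation, parameterized by peak height Im, activation energy E, and peak temperature Tm. The sketch below evaluates a single hypothetical peak (E and Tm are illustrative, not the fitted values from this study):

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

def first_order_peak(T, Im, E, Tm):
    """Kitis analytical first-order TL glow peak, as used in CGCD fitting:
    Im = peak height, E = activation energy (eV), Tm = peak temperature (K)."""
    arg = E * (T - Tm) / (K_B * T * Tm)
    return Im * np.exp(1.0 + arg
                       - (T / Tm) ** 2 * np.exp(arg) * (1.0 - 2.0 * K_B * T / E)
                       - 2.0 * K_B * Tm / E)

# Hypothetical single glow peak on a 300-600 K readout ramp.
T = np.linspace(300.0, 600.0, 3001)
glow = first_order_peak(T, Im=1.0, E=1.1, Tm=450.0)
print(T[np.argmax(glow)])   # peak sits at ~450 K with height ~Im
```

    A CGCD fit sums several such peaks and adjusts (Im, E, Tm) per peak to match the measured glow curve.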

  9. Thermoluminescence of nanocrystalline CaSO4: Dy for gamma dosimetry and calculation of trapping parameters using deconvolution method

    NASA Astrophysics Data System (ADS)

    Mandlik, Nandkumar; Patil, B. J.; Bhoraskar, V. N.; Sahare, P. D.; Dhole, S. D.

    2014-04-01

    Nanorods of CaSO4:Dy with diameters of 20 nm and lengths of 200 nm have been synthesized by the chemical coprecipitation method. The samples were irradiated with gamma radiation at doses varying from 0.1 Gy to 50 kGy, and their TL characteristics have been studied. The TL dose response shows a linear behavior up to 5 kGy and saturates with further increase in dose. A Computerized Glow Curve Deconvolution (CGCD) program was used for the analysis of the TL glow curves, and trapping parameters for the various peaks have been calculated using this program.

  10. Iterative Transform Phase Diversity: An Image-Based Object and Wavefront Recovery

    NASA Technical Reports Server (NTRS)

    Smith, Jeffrey

    2012-01-01

    The Iterative Transform Phase Diversity algorithm is designed to solve the problem of recovering the wavefront in the exit pupil of an optical system and the object being imaged. This algorithm builds upon the robust convergence capability of Variable Sampling Mapping (VSM), in combination with the known success of various deconvolution algorithms. VSM is an alternative method for enforcing the amplitude constraints of a Misell-Gerchberg-Saxton (MGS) algorithm. When provided the object and additional optical parameters, VSM can accurately recover the exit pupil wavefront. By combining VSM and deconvolution, one is able to simultaneously recover the wavefront and the object.

  11. Deconvolution Method on OSL Curves from ZrO2 Irradiated by Beta and UV Radiations

    NASA Astrophysics Data System (ADS)

    Rivera, T.; Kitis, G.; Azorín, J.; Furetta, C.

    This paper reports the optically stimulated luminescence (OSL) response of ZrO2 to beta and ultraviolet radiation in order to investigate the potential use of this material as a radiation dosimeter. The experimentally obtained OSL decay curves were analyzed using the computerized curve deconvolution (CCD) method. It was found that the OSL curve structure, for the short (practical) illumination time used, consists of three first-order components. The individual OSL dose response behavior of each component was found. The values of the time at the OSL peak maximum and the decay constant of each component were also estimated.
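
    A CCD analysis of this kind models the OSL decay as a sum of first-order (exponential) components; the area under each component recovers its trapped-charge population. A toy model with illustrative parameters:

```python
import numpy as np

# OSL decay modelled as the sum of three first-order components:
# I(t) = sum_i n_i * lam_i * exp(-lam_i * t). Parameters are
# illustrative, not the fitted values from the paper.
n   = np.array([100.0, 60.0, 20.0])   # trapped-charge populations
lam = np.array([5.0, 0.8, 0.1])       # decay constants, s^-1

t = np.linspace(0.0, 100.0, 200001)
dt = t[1] - t[0]
components = n[:, None] * lam[:, None] * np.exp(-lam[:, None] * t[None, :])
total = components.sum(axis=0)

# The area under each first-order component recovers its population n_i.
areas = components.sum(axis=1) * dt   # rectangle-rule integration
print(np.round(areas))                # ~[100. 60. 20.]
```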

  12. Pooling across cells to normalize single-cell RNA sequencing data with many zero counts.

    PubMed

    Lun, Aaron T L; Bach, Karsten; Marioni, John C

    2016-04-27

    Normalization of single-cell RNA sequencing data is necessary to eliminate cell-specific biases prior to downstream analyses. However, this is not straightforward for noisy single-cell data where many counts are zero. We present a novel approach where expression values are summed across pools of cells, and the summed values are used for normalization. Pool-based size factors are then deconvolved to yield cell-based factors. Our deconvolution approach outperforms existing methods for accurate normalization of cell-specific biases in simulated data. Similar behavior is observed in real data, where deconvolution improves the relevance of results of downstream analyses.
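
    The deconvolution step can be sketched as a small linear system: each pool's size factor is the sum of its members' cell-specific factors, so solving the system recovers the per-cell factors. This noise-free toy (ring-ordered pools of three cells) omits the median-ratio robustness of the real method:

```python
import numpy as np

# True cell-specific size factors for 8 cells (to be recovered).
true_sf = np.array([0.5, 0.8, 1.0, 1.2, 0.6, 1.5, 0.9, 1.1])
n = len(true_sf)

# Overlapping pools: each pool sums three consecutive cells (ring order).
# Summing across cells stabilises estimates when many counts are zero.
A = np.zeros((n, n))
for i in range(n):
    A[i, [i, (i + 1) % n, (i + 2) % n]] = 1.0

pool_sf = A @ true_sf          # observable pool-level size factors

# Deconvolve the pool-level factors back into cell-level factors.
est_sf = np.linalg.solve(A, pool_sf)
print(np.round(est_sf, 3))     # recovers true_sf
```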

  13. Correction Factor for Gaussian Deconvolution of Optically Thick Linewidths in Homogeneous Sources

    NASA Technical Reports Server (NTRS)

    Kastner, S. O.; Bhatia, A. K.

    1999-01-01

    Profiles of optically thick, non-Gaussian emission lines convoluted with Gaussian instrumental profiles are constructed and then deconvoluted on the usual Gaussian basis to examine the resulting departure from accuracy in "measured" linewidths. It is found that "measured" linewidths underestimate the true linewidths of optically thick lines by a factor which depends on the resolution factor r ≅ Doppler width/instrumental width and on the optical thickness τ0. An approximating expression is obtained for this factor, applicable in the range of at least 0 ≤ τ0 ≤ 10, which can provide estimates of the true linewidth and optical thickness.

  14. Application of Deconvolution Algorithm of Point Spread Function in Improving Image Quality: An Observer Preference Study on Chest Radiography.

    PubMed

    Chae, Kum Ju; Goo, Jin Mo; Ahn, Su Yeon; Yoo, Jin Young; Yoon, Soon Ho

    2018-01-01

    To evaluate observer preference for the image quality of chest radiography processed with a deconvolution algorithm of the point spread function (PSF) (TRUVIEW ART algorithm, DRTECH Corp.) compared with original chest radiography for the visualization of anatomic regions of the chest. Fifty pairs of posteroanterior chest radiographs, prospectively collected with the standard protocol and with the additional TRUVIEW ART algorithm, were compared by four chest radiologists. This algorithm corrects scattered signals generated by a scintillator. Readers independently evaluated the visibility of 10 anatomical regions and the overall image quality on a 5-point preference scale. The significance of the differences in readers' preferences was tested with a Wilcoxon signed-rank test. All four readers preferred the images processed with the algorithm to those without it for all 10 anatomical regions (mean, 3.6; range, 3.2-4.0; p < 0.001) and for overall image quality (mean, 3.8; range, 3.3-4.0; p < 0.001). The most preferred anatomical regions were the azygoesophageal recess, thoracic spine, and unobscured lung. The visibility of chest anatomical structures with the deconvolution algorithm of the PSF was superior to that of the original chest radiography.

  15. Time-Domain Receiver Function Deconvolution using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Moreira, L. P.

    2017-12-01

    Receiver functions (RF) are a well-known method for crust modelling using passive seismological signals. Many different techniques have been developed to calculate RF traces by applying a deconvolution calculation to the radial and vertical seismogram components. A popular method uses a spectral division of both components, which requires human intervention to apply the water-level procedure to avoid instabilities from division by small numbers. One of the most used methods is an iterative procedure that estimates the RF peaks, applies a convolution with the vertical-component seismogram, and compares the result with the radial component. This method is suitable for automatic processing; however, several RF traces are invalid due to peak estimation failure. In this work we propose a deconvolution algorithm that uses a genetic algorithm (GA) to estimate the RF peaks. This method operates entirely in the time domain, avoiding time-to-frequency calculations (and vice versa), and is fully suitable for automatic processing. Estimated peaks can be used to generate RF traces in a seismogram format for visualization. The RF trace quality is similar for high-magnitude events, but there are fewer failures in the RF calculation for smaller events, increasing the overall performance for stations with a high number of events.
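
    A stylized version of GA-based peak estimation: here the chromosome is a single integer delay gene, the best-fitting amplitude for each delay is solved analytically (a simplification of the paper's scheme), and truncation selection with ±1 mutation refines the population. All signals and parameters are synthetic:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic vertical-component wavelet, and a radial component built
# from a single converted phase: radial = amp * shift(vertical, delay).
n = 128
vertical = np.zeros(n); vertical[10] = 1.0; vertical[11] = -0.5; vertical[12] = 0.2
true_delay, true_amp = 7, 0.6
radial = true_amp * np.roll(vertical, true_delay)

def misfit(delay):
    """Best-fitting amplitude for a given delay is solved analytically,
    so the GA only has to search the (integer) delay gene."""
    s = np.roll(vertical, delay)
    amp = (radial @ s) / (s @ s)
    return np.sum((radial - amp * s) ** 2)

# Minimal GA: the initial population covers all candidate delays, then
# truncation selection plus +/-1 mutation refine the fittest individuals.
pop = list(range(20))                          # candidate delays 0..19
for _ in range(10):                            # generations
    pop.sort(key=misfit)
    parents = pop[:5]                          # truncation selection
    children = [(d + int(rng.integers(-1, 2))) % 20
                for d in parents for _ in range(3)]
    pop = parents + children                   # elitism keeps the best

best = min(pop, key=misfit)
print(best)   # 7, the true delay
```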

  16. Ultrasonic inspection of studs (bolts) using dynamic predictive deconvolution and wave shaping.

    PubMed

    Suh, D M; Kim, W W; Chung, J G

    1999-01-01

    Bolt degradation has become a major issue in the nuclear industry since the 1980s. If small cracks in stud bolts are not detected early enough, they grow rapidly and cause catastrophic disasters. Their detection, despite its importance, is known to be a very difficult problem due to the complicated structures of the stud bolts. This paper presents a method of detecting and sizing a small crack in the root between two adjacent crests in the threads. The key idea comes from the fact that the mode-converted Rayleigh wave travels slowly down the face of the crack and returns from the intersection of the crack and the root of the thread to the transducer. Thus, when a crack exists, a small delayed pulse due to the Rayleigh wave is detected between the large, regularly spaced pulses from the thread. The delay time equals the propagation delay of the slow Rayleigh wave and is proportional to the size of the crack. To efficiently detect the slow Rayleigh wave, three methods based on digital signal processing are proposed: wave shaping, dynamic predictive deconvolution, and dynamic predictive deconvolution combined with wave shaping.

  17. Retinal image restoration by means of blind deconvolution

    NASA Astrophysics Data System (ADS)

    Marrugo, Andrés G.; Šorel, Michal; Šroubek, Filip; Millán, María S.

    2011-11-01

    Retinal imaging plays a key role in the diagnosis and management of ophthalmologic disorders, such as diabetic retinopathy, glaucoma, and age-related macular degeneration. Because of the acquisition process, retinal images often suffer from blurring and uneven illumination. This problem may seriously affect disease diagnosis and progression assessment. Here we present a method for color retinal image restoration by means of multichannel blind deconvolution. The method is applied to a pair of retinal images acquired within a lapse of time, ranging from several minutes to months. It consists of a series of preprocessing steps to adjust the images so they comply with the considered degradation model, followed by the estimation of the point-spread function and, ultimately, image deconvolution. The preprocessing is mainly composed of image registration, uneven illumination compensation, and segmentation of areas with structural changes. In addition, we have developed a procedure for the detection and visualization of structural changes. This enables the identification of subtle developments in the retina not caused by variation in illumination or blur. The method was tested on synthetic and real images. Encouraging experimental results show that the method is capable of significant restoration of degraded retinal images.

  18. Laboratory for Engineering Man/Machine Systems (LEMS): System identification, model reduction and deconvolution filtering using Fourier based modulating signals and high order statistics

    NASA Technical Reports Server (NTRS)

    Pan, Jianqiang

    1992-01-01

    This work addresses several important problems in the fields of signal processing and model identification: system structure identification, frequency response determination, high-order model reduction, high-resolution frequency analysis, and deconvolution filtering. Each of these topics involves a wide range of applications and has received considerable attention. Using Fourier-based sinusoidal modulating signals, it is shown that a discrete autoregressive model can be constructed for the least squares identification of continuous systems. Identification algorithms are presented for frequency response determination of both SISO and MIMO systems using only transient data. Several new schemes for model reduction were also developed. Based upon complex sinusoidal modulating signals, a parametric least squares algorithm for high-resolution frequency estimation is proposed; numerical examples show that the proposed algorithm outperforms the usual methods. Finally, the problem of deconvolution and parameter identification of a general noncausal, nonminimum-phase ARMA system driven by non-Gaussian stationary random processes was studied, and algorithms are introduced for inverse cumulant estimation, both in the frequency domain via FFT algorithms and in the time domain via the least squares algorithm.

  19. Optimisation of chromatographic resolution using objective functions including both time and spectral information.

    PubMed

    Torres-Lapasió, J R; Pous-Torres, S; Ortiz-Bolsico, C; García-Alvarez-Coque, M C

    2015-01-16

    The optimisation of resolution in high-performance liquid chromatography is traditionally performed attending only to time information. However, even under the optimal conditions, some peak pairs may remain unresolved. Such incomplete resolution can still be remedied by deconvolution, which can be carried out with more guarantees of success by including spectral information. In this work, two-way chromatographic objective functions (COFs) that incorporate both time and spectral information were tested, based on the concepts of peak purity (the analyte peak fraction free of overlapping) and multivariate selectivity (a figure of merit derived from the net analyte signal). These COFs are sensitive to situations where the components that coelute in a mixture show some spectral differences; therefore, they are useful for finding experimental conditions where the spectrochromatograms can be recovered by deconvolution. Two-way multivariate selectivity yielded the best performance and was applied to the separation, using diode-array detection, of a mixture of 25 phenolic compounds that remained chromatographically unresolved using linear and multi-linear gradients of acetonitrile-water. Peak deconvolution was carried out using the combination of the orthogonal projection approach and alternating least squares.
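
    Multivariate selectivity builds on the net analyte signal (NAS): the part of the analyte spectrum orthogonal to the subspace spanned by the coeluting interferent spectra. A minimal sketch with hypothetical spectra:

```python
import numpy as np

def net_analyte_signal(s_k, S_int):
    """Project the analyte spectrum s_k orthogonally to the space spanned
    by the interferent spectra (columns of S_int); the residual is the
    net analyte signal usable for quantitation despite coelution."""
    P = S_int @ np.linalg.pinv(S_int)   # projector onto interferent space
    return s_k - P @ s_k

# Hypothetical UV spectra sampled at 6 wavelengths.
s_k   = np.array([0.9, 0.7, 0.4, 0.2, 0.1, 0.05])     # analyte
S_int = np.array([[0.1, 0.3, 0.6, 0.8, 0.5, 0.2]]).T  # one interferent

nas = net_analyte_signal(s_k, S_int)

# The NAS is orthogonal to every interferent; its norm relative to
# ||s_k|| indicates how much selectivity survives the overlap.
print(float(S_int[:, 0] @ nas))   # ~0: orthogonal to the interferent
```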

  20. Generative adversarial networks recover features in astrophysical images of galaxies beyond the deconvolution limit

    NASA Astrophysics Data System (ADS)

    Schawinski, Kevin; Zhang, Ce; Zhang, Hantian; Fowler, Lucas; Santhanam, Gokula Krishnan

    2017-05-01

    Observations of astrophysical objects such as galaxies are limited by various sources of random and systematic noise from the sky background, the optical system of the telescope and the detector used to record the data. Conventional deconvolution techniques are limited in their ability to recover features in imaging data by the Shannon-Nyquist sampling theorem. Here, we train a generative adversarial network (GAN) on a sample of 4550 images of nearby galaxies at 0.01 < z < 0.02 from the Sloan Digital Sky Survey and conduct 10× cross-validation to evaluate the results. We present a method using a GAN trained on galaxy images that can recover features from artificially degraded images with worse seeing and higher noise than the original with a performance that far exceeds simple deconvolution. The ability to better recover detailed features such as galaxy morphology from low signal-to-noise and low angular resolution imaging data significantly increases our ability to study existing data sets of astrophysical objects as well as future observations with observatories such as the Large Synoptic Survey Telescope (LSST) and the Hubble and James Webb space telescopes.

  1. Imaging samples in silica aerogel using an experimental point spread function.

    PubMed

    White, Amanda J; Ebel, Denton S

    2015-02-01

    Light microscopy is a powerful tool that allows for many types of samples to be examined in a rapid, easy, and nondestructive manner. Subsequent image analysis, however, is compromised by distortion of signal by instrument optics. Deconvolution of images prior to analysis allows for the recovery of lost information by procedures that utilize either a theoretically or experimentally calculated point spread function (PSF). Using a laser scanning confocal microscope (LSCM), we have imaged whole impact tracks of comet particles captured in silica aerogel, a low density, porous SiO2 solid, by the NASA Stardust mission. In order to understand the dynamical interactions between the particles and the aerogel, precise grain location and track volume measurement are required. We report a method for measuring an experimental PSF suitable for three-dimensional deconvolution of imaged particles in aerogel. Using fluorescent beads manufactured into Stardust flight-grade aerogel, we have applied a deconvolution technique standard in the biological sciences to confocal images of whole Stardust tracks. The incorporation of an experimentally measured PSF allows for better quantitative measurements of the size and location of single grains in aerogel and more accurate measurements of track morphology.
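    The abstract names "a deconvolution technique standard in the biological sciences" without specifying which; one such standard is Richardson-Lucy iteration with a measured PSF. The sketch below is purely illustrative (1-D, with an invented signal and PSF; real confocal data are 3-D and would be handled by a dedicated package):

    ```python
    # Illustrative 1-D Richardson-Lucy deconvolution with a measured PSF.
    # All signals and the PSF here are hypothetical toy values.

    def convolve(signal, psf):
        """'Same'-length convolution with a centred, odd-length PSF."""
        half = len(psf) // 2
        n = len(signal)
        out = [0.0] * n
        for i in range(n):
            acc = 0.0
            for k, p in enumerate(psf):
                j = i + k - half
                if 0 <= j < n:
                    acc += signal[j] * p
            out[i] = acc
        return out

    def richardson_lucy(observed, psf, iterations=200):
        """Iteratively sharpen 'observed' given the point spread function."""
        psf_mirror = psf[::-1]
        estimate = [1.0] * len(observed)  # flat, non-negative starting guess
        for _ in range(iterations):
            blurred = convolve(estimate, psf)
            # Ratio of observed to predicted data, guarded against division by ~0.
            ratio = [o / b if b > 1e-12 else 0.0 for o, b in zip(observed, blurred)]
            correction = convolve(ratio, psf_mirror)
            estimate = [e * c for e, c in zip(estimate, correction)]
        return estimate

    # A point source blurred by a normalised 3-tap PSF is progressively re-sharpened:
    psf = [0.25, 0.5, 0.25]
    truth = [0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0]
    blurred = convolve(truth, psf)
    restored = richardson_lucy(blurred, psf)
    ```

    The multiplicative update (estimate times the back-projected observed/predicted ratio) keeps the estimate non-negative throughout, which is one reason this family of methods suits intensity data such as confocal images.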

  2. Wavefield iterative deconvolution to remove multiples and produce phase specific Ps receiver functions

    NASA Astrophysics Data System (ADS)

    Ainiwaer, A.; Gurrola, H.

    2018-03-01

    Common conversion point stacking or migration of receiver functions (RFs) and H-k (H is depth and k is Vp/Vs) stacking of RFs has become a common method to study the crust and upper mantle beneath broad-band three-component seismic stations. However, it can be difficult to interpret Pds RFs due to interference between the Pds, PPds and PSds phases, especially in the mantle portion of the lithosphere. We propose a phase separation method to isolate the prominent phases of the RFs and produce separate Pds, PPds and PSds `phase specific' receiver functions (referred to as PdsRFs, PPdsRFs and PSdsRFs, respectively) by deconvolution of the wavefield rather than single seismograms. One of the most important products of this deconvolution method is to produce Ps receiver functions (PdsRFs) that are free of crustal multiples. This is accomplished by using H-k analysis to identify specific phases in the wavefield from all seismograms recorded at a station which enables development of an iterative deconvolution procedure to produce the above-mentioned phase specific RFs. We refer to this method as wavefield iterative deconvolution (WID). The WID method differentiates and isolates different RF phases by exploiting their differences in moveout curves across the entire wave front. We tested the WID by applying it to synthetic seismograms produced using a modified version of the PREM velocity model. The WID effectively separates phases from each stacked RF in synthetic data. We also applied this technique to produce RFs from seismograms recorded at ARU (a broad-band station in Arti, Russia). The phase specific RFs produced using WID are easier to interpret than traditional RFs. The PdsRFs computed using WID are the most improved, owing to the distinct shape of its moveout curves as compared to the moveout curves for the PPds and PSds phases. The importance of this WID method is most significant in reducing interference between phases for depths of less than 300 km. 
Phases from deeper layers (i.e. P660s as compared to PP220s) are less likely to be misinterpreted because the large amount of moveout causes the appropriate phases to stack coherently if there is sufficient distribution in ray parameter. WID is most effective in producing clean PdsRFs that are relatively free of reverberations whereas PPdsRFs and PSdsRFs retain contamination from reverberations.
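    A minimal single-trace analogue of the iterative deconvolution that WID extends to whole wavefields can be sketched as follows: spikes are peeled off one at a time at the lag of maximum cross-correlation with the source wavelet, in the spirit of standard iterative time-domain receiver-function deconvolution. The wavelet and two-spike model below are invented for illustration:

    ```python
    # Simplified single-trace iterative time-domain deconvolution (toy data).

    def xcorr_lag(num, den):
        """Return lag and normalised amplitude of the peak cross-correlation."""
        best_lag, best_val = 0, 0.0
        den_energy = sum(d * d for d in den)
        for lag in range(len(num)):
            acc = 0.0
            for i, d in enumerate(den):
                if lag + i < len(num):
                    acc += num[lag + i] * d
            if abs(acc) > abs(best_val):
                best_lag, best_val = lag, acc
        return best_lag, best_val / den_energy

    def shift_scale(sig, lag, amp, n):
        """Place a scaled copy of 'sig' at the given lag in a length-n trace."""
        out = [0.0] * n
        for i, s in enumerate(sig):
            if lag + i < n:
                out[lag + i] += amp * s
        return out

    def iterative_deconv(response, source, n_spikes=3):
        """Build a spike train that, convolved with the source, fits the response."""
        residual = list(response)
        spikes = [0.0] * len(response)
        for _ in range(n_spikes):
            lag, amp = xcorr_lag(residual, source)
            spikes[lag] += amp
            residual = [r - f for r, f in
                        zip(residual, shift_scale(source, lag, amp, len(response)))]
        return spikes

    source = [1.0, 0.5]                 # synthetic source wavelet
    true_spikes = {0: 1.0, 4: 0.5}      # direct arrival plus one converted phase
    response = [0.0] * 8
    for lag, amp in true_spikes.items():
        for i, s in enumerate(source):
            response[lag + i] += amp * s

    rf = iterative_deconv(response, source, n_spikes=2)
    ```

    WID's contribution, per the abstract, is to apply this spike-by-spike peeling across all seismograms at a station simultaneously, using H-k-identified moveout to assign each spike to a Pds, PPds or PSds phase; that bookkeeping is omitted here.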

  3. Constraints on geomagnetic secular variation modeling from electromagnetism and fluid dynamics of the Earth's core

    NASA Technical Reports Server (NTRS)

    Benton, E. R.

    1986-01-01

    A spherical harmonic representation of the geomagnetic field and its secular variation for epoch 1980, designated GSFC(9/84), is derived and evaluated. At three epochs (1977.5, 1980.0, 1982.5) this model incorporates conservation of magnetic flux through five selected patches of area on the core/mantle boundary bounded by the zero contours of vertical magnetic field. These fifteen nonlinear constraints are included like data in an iterative least squares parameter estimation procedure that starts with the recently derived unconstrained field model GSFC (12/83). Convergence is approached within three iterations. The constrained model is evaluated by comparing its predictive capability outside the time span of its data, in terms of residuals at magnetic observatories, with that for the unconstrained model.

  4. ASTROPHYSICS. Atom-interferometry constraints on dark energy.

    PubMed

    Hamilton, P; Jaffe, M; Haslinger, P; Simmons, Q; Müller, H; Khoury, J

    2015-08-21

    If dark energy, which drives the accelerated expansion of the universe, consists of a light scalar field, it might be detectable as a "fifth force" between normal-matter objects, in potential conflict with precision tests of gravity. Chameleon fields and other theories with screening mechanisms, however, can evade these tests by suppressing the forces in regions of high density, such as the laboratory. Using a cesium matter-wave interferometer near a spherical mass in an ultrahigh-vacuum chamber, we reduced the screening mechanism by probing the field with individual atoms rather than with bulk matter. We thereby constrained a wide class of dark energy theories, including a range of chameleon and other theories that reproduce the observed cosmic acceleration. Copyright © 2015, American Association for the Advancement of Science.

  5. Two-dimensional and 3-D images of thick tissue using time-constrained times-of-flight and absorbance spectrophotometry

    NASA Astrophysics Data System (ADS)

    Benaron, David A.; Lennox, M.; Stevenson, David K.

    1992-05-01

    Reconstructing deep-tissue images in real time using spectrophotometric data from optically diffusing thick tissues has been problematic. Continuous-wave applications (e.g., pulse oximetry, regional cerebral saturation) ignore both the multiple paths traveled by the photons through the tissue and the effects of scattering, allowing scalar measurements but only under limited conditions; interferometry works poorly in thick, highly scattering media; frequency-modulated approaches may not allow full deconvolution of scattering and absorbance; and pulsed-light techniques preserve information about the multiple paths taken by light through the tissue, but reconstruction is both computation intensive and limited by the relative surface area available for detection of photons. We have developed a picosecond times-of-flight and absorbance (TOFA) optical system, time-constrained to measure only photons with a narrow range of path lengths arriving within a narrow angle of the emitter-detector axis. The delay until arrival of the earliest-arriving photons is a function of both the scattering and absorbance of the tissues in a direct line between the emitter and detector, reducing the influence of surrounding tissues. Measurements using a variety of emitter and detector locations produce spatial information which can be analyzed on a standard 2-D grid, or subjected to computer reconstruction to produce tomographic images representing 3-D structure. Using such a technique, we have been able to demonstrate the principles of tc-TOFA, detect and localize diffusive and/or absorptive objects suspended in highly scattering media (such as blood admixed with yeast), and perform simple 3-D reconstructions using phantom objects. We are now attempting to obtain images in vivo. Potential future applications include use as a research tool and as a continuous, noninvasive, nondestructive monitor in diagnostic imaging, fetal monitoring, and neurologic and cardiac assessment. The technique may lead to real-time optical imaging and quantitation of tissue oxygen delivery.

  6. A Test of Carbon and Oxygen Stable Isotope Ratio Process Models in Tree Rings.

    NASA Astrophysics Data System (ADS)

    Roden, J. S.; Farquhar, G. D.

    2008-12-01

    Stable isotope ratios of carbon and oxygen in tree-ring cellulose have been used to infer environmental change. Process-based models have been developed to clarify the potential of historic tree-ring records for meaningful paleoclimatic reconstructions. However, isotopic variation can be influenced by multiple environmental factors, making simplistic interpretations problematic. Recently, the dual-isotope approach, where the variation in one stable isotope ratio (e.g. oxygen) is used to constrain the interpretation of variation in another (e.g. carbon), has been shown to have the potential to de-convolute isotopic analyses. However, this approach requires further testing to determine its applicability for paleo-reconstructions using tree-ring time series. We present a study where the information needed to parameterize mechanistic models for both carbon and oxygen stable isotope ratios was collected in controlled-environment chambers for two species (Pinus radiata and Eucalyptus globulus). The seedlings were exposed to treatments designed to modify leaf temperature, transpiration rates, stomatal conductance and photosynthetic capacity. Both species were grown for over 100 days under two humidity regimes that differed by 20%. Stomatal conductance was significantly different between species and for seedlings under drought conditions, but not between other treatments or humidity regimes. The treatments produced large differences in transpiration rate and photosynthesis. Treatments that affected photosynthetic rates but not stomatal conductance influenced carbon isotope discrimination more than those that influenced primarily conductance. The various treatments produced a range in oxygen isotope ratios of 7‰. Process models predicted greater oxygen isotope enrichment in tree-ring cellulose than observed. The oxygen isotope ratios of bulk leaf water were reasonably well predicted by current steady-state models. However, the fractional difference between models that predict bulk leaf water versus the site of evaporation did not increase with transpiration rates. In conclusion, although the dual-isotope approach may better constrain interpretation of isotopic variation, more work is required before its predictive power can be applied to tree-ring archives.

  7. Fiber estimation and tractography in diffusion MRI: Development of simulated brain images and comparison of multi-fiber analysis methods at clinical b-values

    PubMed Central

    Wilkins, Bryce; Lee, Namgyun; Gajawelli, Niharika; Law, Meng; Leporé, Natasha

    2015-01-01

    Advances in diffusion-weighted magnetic resonance imaging (DW-MRI) have led to many alternative diffusion sampling strategies and analysis methodologies. A common objective among methods is estimation of white matter fiber orientations within each voxel, as doing so permits in-vivo fiber-tracking and the ability to study brain connectivity and networks. Knowledge of how DW-MRI sampling schemes affect fiber estimation accuracy, and consequently tractography and the ability to recover complex white-matter pathways, as well as differences between results due to choice of analysis method and which method(s) perform optimally for specific data sets, all remain important problems, especially as tractography-based studies become common. In this work we begin to address these concerns by developing sets of simulated diffusion-weighted brain images which we then use to quantitatively evaluate the performance of six DW-MRI analysis methods in terms of estimated fiber orientation accuracy, false-positive (spurious) and false-negative (missing) fiber rates, and fiber-tracking. The analysis methods studied are: 1) a two-compartment “ball and stick” model (BSM) (Behrens et al., 2003); 2) a non-negativity constrained spherical deconvolution (CSD) approach (Tournier et al., 2007); 3) analytical q-ball imaging (QBI) (Descoteaux et al., 2007); 4) q-ball imaging with Funk-Radon and Cosine Transform (FRACT) (Haldar and Leahy, 2013); 5) q-ball imaging within constant solid angle (CSA) (Aganj et al., 2010); and 6) a generalized Fourier transform approach known as generalized q-sampling imaging (GQI) (Yeh et al., 2010). We investigate these methods using 20, 30, 40, 60, 90 and 120 evenly distributed q-space samples of a single shell, and focus on a signal-to-noise ratio (SNR = 18) and diffusion-weighting (b = 1000 s/mm^2) common to clinical studies.
We found the BSM and CSD methods consistently yielded the least fiber orientation error and simultaneously greatest detection rate of fibers. Fiber detection rate was found to be the most distinguishing characteristic between the methods, and a significant factor for complete recovery of tractography through complex white-matter pathways. For example, while all methods recovered similar tractography of prominent white matter pathways of limited fiber crossing, CSD (which had the highest fiber detection rate, especially for voxels containing three fibers) recovered the greatest number of fibers and largest fraction of correct tractography for a complex three-fiber crossing region. The synthetic data sets, ground-truth, and tools for quantitative evaluation are publicly available on the NITRC website as the project “Simulated DW-MRI Brain Data Sets for Quantitative Evaluation of Estimated Fiber Orientations” at http://www.nitrc.org/projects/sim_dwi_brain PMID:25555998

  8. Fiber estimation and tractography in diffusion MRI: development of simulated brain images and comparison of multi-fiber analysis methods at clinical b-values.

    PubMed

    Wilkins, Bryce; Lee, Namgyun; Gajawelli, Niharika; Law, Meng; Leporé, Natasha

    2015-04-01

    Advances in diffusion-weighted magnetic resonance imaging (DW-MRI) have led to many alternative diffusion sampling strategies and analysis methodologies. A common objective among methods is estimation of white matter fiber orientations within each voxel, as doing so permits in-vivo fiber-tracking and the ability to study brain connectivity and networks. Knowledge of how DW-MRI sampling schemes affect fiber estimation accuracy, tractography and the ability to recover complex white-matter pathways, differences between results due to choice of analysis method, and which method(s) perform optimally for specific data sets, all remain important problems, especially as tractography-based studies become common. In this work, we begin to address these concerns by developing sets of simulated diffusion-weighted brain images which we then use to quantitatively evaluate the performance of six DW-MRI analysis methods in terms of estimated fiber orientation accuracy, false-positive (spurious) and false-negative (missing) fiber rates, and fiber-tracking. The analysis methods studied are: 1) a two-compartment "ball and stick" model (BSM) (Behrens et al., 2003); 2) a non-negativity constrained spherical deconvolution (CSD) approach (Tournier et al., 2007); 3) analytical q-ball imaging (QBI) (Descoteaux et al., 2007); 4) q-ball imaging with Funk-Radon and Cosine Transform (FRACT) (Haldar and Leahy, 2013); 5) q-ball imaging within constant solid angle (CSA) (Aganj et al., 2010); and 6) a generalized Fourier transform approach known as generalized q-sampling imaging (GQI) (Yeh et al., 2010). We investigate these methods using 20, 30, 40, 60, 90 and 120 evenly distributed q-space samples of a single shell, and focus on a signal-to-noise ratio (SNR = 18) and diffusion-weighting (b = 1000 s/mm^2) common to clinical studies. We found that the BSM and CSD methods consistently yielded the least fiber orientation error and simultaneously greatest detection rate of fibers.
Fiber detection rate was found to be the most distinguishing characteristic between the methods, and a significant factor for complete recovery of tractography through complex white-matter pathways. For example, while all methods recovered similar tractography of prominent white matter pathways of limited fiber crossing, CSD (which had the highest fiber detection rate, especially for voxels containing three fibers) recovered the greatest number of fibers and largest fraction of correct tractography for complex three-fiber crossing regions. The synthetic data sets, ground-truth, and tools for quantitative evaluation are publicly available on the NITRC website as the project "Simulated DW-MRI Brain Data Sets for Quantitative Evaluation of Estimated Fiber Orientations" at http://www.nitrc.org/projects/sim_dwi_brain. Copyright © 2014 Elsevier Inc. All rights reserved.
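    At its core, spherical deconvolution poses a linear inverse problem S = Rf for non-negative fibre weights f. The toy sketch below illustrates only that non-negativity constraint, using a made-up 3×2 "response matrix" and projected gradient descent; it is not the actual CSD algorithm of Tournier et al., which works with spherical harmonic bases and iterative regularisation:

    ```python
    # Toy non-negative linear inverse: solve S = R @ f subject to f >= 0.
    # R, S, the step size and iteration count are invented illustrative numbers.

    def matvec(M, v):
        return [sum(m * x for m, x in zip(row, v)) for row in M]

    def transpose(M):
        return [list(col) for col in zip(*M)]

    def projected_gradient(R, S, steps=2000, lr=0.1):
        Rt = transpose(R)
        f = [0.0] * len(R[0])
        for _ in range(steps):
            resid = [a - b for a, b in zip(matvec(R, f), S)]
            grad = matvec(Rt, resid)
            # Gradient step on the least-squares cost, then project onto f >= 0.
            f = [max(0.0, x - lr * g) for x, g in zip(f, grad)]
        return f

    # Two candidate fibre directions; the signal was generated by the first only,
    # so a correct solver should assign zero weight to the second.
    R = [[1.0, 0.2],
         [0.2, 1.0],
         [0.5, 0.5]]
    f_true = [0.8, 0.0]
    S = matvec(R, f_true)
    f_est = projected_gradient(R, S)
    ```

    The point of the constraint, as in CSD proper, is that unconstrained least squares would happily return small negative weights under noise, which have no physical meaning as fibre densities and produce the spurious lobes the abstract calls false-positive fibers.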

  9. The shape of galaxy dark matter halos in massive galaxy clusters: Insights from strong gravitational lensing

    NASA Astrophysics Data System (ADS)

    Jauzac, Mathilde; Harvey, David; Massey, Richard

    2018-04-01

    We assess how much unused strong lensing information is available in the deep Hubble Space Telescope imaging and VLT/MUSE spectroscopy of the Frontier Field clusters. As a pilot study, we analyse galaxy cluster MACS J0416.1-2403 (z=0.397, M(R < 200 kpc)=1.6×10^14 M⊙), which has 141 multiple images with spectroscopic redshifts. We find that many additional parameters in a cluster mass model can be constrained, and that adding even small amounts of extra freedom to a model can dramatically improve its figures of merit. We use this information to constrain the distribution of dark matter around cluster member galaxies, simultaneously with the cluster's large-scale mass distribution. We find tentative evidence that some galaxies' dark matter has surprisingly similar ellipticity to their stars (unlike in the field, where it is more spherical), but that its orientation is often misaligned. When non-coincident dark matter and stellar halos are allowed, the model improves by 35%. This technique may provide a new way to investigate the processes and timescales on which dark matter is stripped from galaxies as they fall into a massive cluster. Our preliminary conclusions will be made more robust by analysing the remaining five Frontier Field clusters.

  10. The shape of galaxy dark matter haloes in massive galaxy clusters: insights from strong gravitational lensing

    NASA Astrophysics Data System (ADS)

    Jauzac, Mathilde; Harvey, David; Massey, Richard

    2018-07-01

    We assess how much unused strong lensing information is available in the deep Hubble Space Telescope imaging and Very Large Telescope/Multi Unit Spectroscopic Explorer spectroscopy of the Frontier Field clusters. As a pilot study, we analyse galaxy cluster MACS J0416.1-2403 (z = 0.397, M(R < 200 kpc) = 1.6 × 10^14 M⊙), which has 141 multiple images with spectroscopic redshifts. We find that many additional parameters in a cluster mass model can be constrained, and that adding even small amounts of extra freedom to a model can dramatically improve its figures of merit. We use this information to constrain the distribution of dark matter around cluster member galaxies, simultaneously with the cluster's large-scale mass distribution. We find tentative evidence that some galaxies' dark matter has surprisingly similar ellipticity to their stars (unlike in the field, where it is more spherical), but that its orientation is often misaligned. When non-coincident dark matter and stellar haloes are allowed, the model improves by 35 per cent. This technique may provide a new way to investigate the processes and time-scales on which dark matter is stripped from galaxies as they fall into a massive cluster. Our preliminary conclusions will be made more robust by analysing the remaining five Frontier Field clusters.

  11. Adaptive optics image restoration based on frame selection and multi-frame blind deconvolution

    NASA Astrophysics Data System (ADS)

    Tian, Y.; Rao, C. H.; Wei, K.

    2008-10-01

    Adaptive optics can only partially compensate for image blur caused by atmospheric turbulence, owing to observing conditions and hardware restrictions. A post-processing method based on frame selection and multi-frame blind deconvolution is proposed to improve images partially corrected by adaptive optics. The appropriate frames, picked out by the frame selection technique, are then deconvolved, with no prior knowledge assumed except the positivity constraint. The method has been applied to the restoration of images of celestial bodies observed with the 1.2 m telescope equipped with the 61-element adaptive optical system at Yunnan Observatory. The results show that the method can effectively improve images partially corrected by adaptive optics.

  12. Digital sorting of complex tissues for cell type-specific gene expression profiles.

    PubMed

    Zhong, Yi; Wan, Ying-Wooi; Pang, Kaifang; Chow, Lionel M L; Liu, Zhandong

    2013-03-07

    Cellular heterogeneity is present in almost all gene expression profiles. However, transcriptome analysis of tissue specimens often ignores the cellular heterogeneity present in these samples. Standard deconvolution algorithms require prior knowledge of the cell type frequencies within a tissue or their in vitro expression profiles. Furthermore, these algorithms tend to report biased estimations. Here, we describe a Digital Sorting Algorithm (DSA) for extracting cell type-specific gene expression profiles from mixed tissue samples that is unbiased and does not require prior knowledge of cell type frequencies. The results suggest that DSA is a specific and sensitive algorithm for gene expression profile deconvolution and will be useful in studying individual cell types of complex tissues.
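    The mixing model underlying any such deconvolution is linear: each mixed sample is a fraction-weighted sum of cell-type profiles. The sketch below, with invented numbers, solves only the easy half of the problem (per-gene profiles from *known* fractions, via a hand-rolled 2×2 least-squares solve); DSA's contribution is precisely that it avoids needing those fractions as input:

    ```python
    # Expression deconvolution's linear mixing model, with hypothetical data.

    def solve_2x2(A, b):
        """Solve the 2x2 system A x = b by Cramer's rule."""
        det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
        x0 = (b[0] * A[1][1] - b[1] * A[0][1]) / det
        x1 = (A[0][0] * b[1] - A[1][0] * b[0]) / det
        return [x0, x1]

    # Rows: mixed tissue samples; columns: fractions of cell types A and B.
    fractions = [[0.7, 0.3],
                 [0.2, 0.8]]

    # True per-cell-type expression for one illustrative gene.
    pure = {"geneX": [10.0, 2.0]}

    # Observed mixed expression = fractions @ pure profile, per gene.
    mixed = {g: [f[0] * v[0] + f[1] * v[1] for f in fractions]
             for g, v in pure.items()}

    # Inverting the mixing recovers the cell-type-specific profiles.
    estimated = {g: solve_2x2(fractions, m) for g, m in mixed.items()}
    ```

    With more samples than cell types the same inversion becomes an overdetermined least-squares fit per gene, which is the setting the biased "standard" algorithms the abstract mentions operate in.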

  13. Memory-effect based deconvolution microscopy for super-resolution imaging through scattering media

    NASA Astrophysics Data System (ADS)

    Edrei, Eitan; Scarcelli, Giuliano

    2016-09-01

    High-resolution imaging through turbid media is a fundamental challenge of optical sciences that has attracted a lot of attention in recent years for its wide range of potential applications. Here, we demonstrate that the resolution of imaging systems looking behind a highly scattering medium can be improved below the diffraction limit. To achieve this, we demonstrate a novel microscopy technique, enabled by the optical memory effect, that uses deconvolution image processing and thus does not require iterative focusing, scanning or phase retrieval procedures. We show that this newly established ability of direct imaging through turbid media provides fundamental and practical advantages such as three-dimensional refocusing and unambiguous object reconstruction.

  14. Memory-effect based deconvolution microscopy for super-resolution imaging through scattering media.

    PubMed

    Edrei, Eitan; Scarcelli, Giuliano

    2016-09-16

    High-resolution imaging through turbid media is a fundamental challenge of optical sciences that has attracted a lot of attention in recent years for its wide range of potential applications. Here, we demonstrate that the resolution of imaging systems looking behind a highly scattering medium can be improved below the diffraction limit. To achieve this, we demonstrate a novel microscopy technique, enabled by the optical memory effect, that uses deconvolution image processing and thus does not require iterative focusing, scanning or phase retrieval procedures. We show that this newly established ability of direct imaging through turbid media provides fundamental and practical advantages such as three-dimensional refocusing and unambiguous object reconstruction.

  15. Deconvolution of acoustic emissions for source localization using time reverse modeling

    NASA Astrophysics Data System (ADS)

    Kocur, Georg Karl

    2017-01-01

    Impact experiments on small-scale slabs made of concrete and aluminum were carried out. Wave motion radiated from the epicenter of the impact was recorded as voltage signals by resonant piezoelectric transducers. Numerical simulations of the elastic wave propagation are performed to simulate the physical experiments. The Hertz theory of contact is applied to estimate the force impulse, which is subsequently used for the numerical simulation. Displacements at the transducer positions are calculated numerically. A deconvolution function is obtained by comparing the physical (voltage signal) and the numerical (calculated displacement) experiments. Acoustic emission signals due to pencil-lead breaks are recorded, deconvolved and applied for localization using time reverse modeling.
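    One common way to build and apply a deconvolution (correction) function from a measured signal and a simulated reference is spectral division stabilised by a water level, sketched here with a naive DFT and synthetic signals; the paper's actual transducer calibration is not reproduced, and all waveforms below are invented:

    ```python
    # Water-level frequency-domain deconvolution (toy 8-sample signals).
    import cmath

    def dft(x):
        n = len(x)
        return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
                for k in range(n)]

    def idft(X):
        n = len(X)
        return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
                for t in range(n)]

    def waterlevel_deconv(measured, reference, level=0.01):
        """Divide spectra, clamping small denominators to avoid noise blow-up."""
        M, R = dft(measured), dft(reference)
        peak = max(abs(r) for r in R)
        out = []
        for m, r in zip(M, R):
            denom = r if abs(r) > level * peak else level * peak
            out.append(m / denom)
        return idft(out)

    # 'measured' is 'reference' delayed by two samples, so deconvolution
    # should recover an impulse at lag 2.
    reference = [1.0, 0.5, 0.25, 0.0, 0.0, 0.0, 0.0, 0.0]
    measured = [0.0, 0.0, 1.0, 0.5, 0.25, 0.0, 0.0, 0.0]
    impulse = waterlevel_deconv(measured, reference)
    ```

    The water level trades bias for stability: frequency bins where the reference has little energy are clamped rather than divided, which is what makes the deconvolved signal usable for the subsequent time-reverse localization step.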

  16. Reconstruction of the mass distribution of galaxy clusters from the inversion of the thermal Sunyaev-Zel'dovich effect

    NASA Astrophysics Data System (ADS)

    Majer, C. L.; Meyer, S.; Konrad, S.; Sarli, E.; Bartelmann, M.

    2016-07-01

    This paper continues a series in which we intend to show how all observables of galaxy clusters can be combined to recover the two-dimensional, projected gravitational potential of individual clusters. Our goal is to develop a non-parametric algorithm for joint cluster reconstruction taking all cluster observables into account. For this reason we focus on the line-of-sight projected gravitational potential, proportional to the lensing potential, in order to extend existing reconstruction algorithms. In this paper, we begin with the relation between the Compton-y parameter and the Newtonian gravitational potential, assuming hydrostatic equilibrium and a polytropic stratification of the intracluster gas. Extending our first publication we now consider a spheroidal rather than a spherical cluster symmetry. We show how a Richardson-Lucy deconvolution can be used to convert the intensity change of the CMB due to the thermal Sunyaev-Zel'dovich effect into an estimate for the two-dimensional gravitational potential. We apply our reconstruction method to a cluster based on an N-body/hydrodynamical simulation processed with the characteristics (resolution and noise) of the ALMA interferometer for which we achieve a relative error of ≲20 per cent for a large fraction of the virial radius. We further apply our method to an observation of the galaxy cluster RXJ1347 for which we can reconstruct the potential with a relative error of ≲20 per cent for the observable cluster range.

  17. Mixed poloidal-toroidal magnetic configuration and surface abundance distributions of the Bp star 36 Lyn

    NASA Astrophysics Data System (ADS)

    Oksala, M. E.; Silvester, J.; Kochukhov, O.; Neiner, C.; Wade, G. A.; the MiMeS Collaboration

    2018-01-01

    Previous studies of the chemically peculiar Bp star 36 Lyn revealed a moderately strong magnetic field, circumstellar material and inhomogeneous surface abundance distributions of certain elements. We present in this paper an analysis of 33 high signal-to-noise ratio, high-resolution Stokes IV observations of 36 Lyn obtained with the Narval spectropolarimeter at the Bernard Lyot Telescope at Pic du Midi Observatory. From these data, we compute new measurements of the mean longitudinal magnetic field, Bℓ, using the multiline least-squares deconvolution (LSD) technique. A rotationally phased Bℓ curve reveals a strong magnetic field, with indications for deviation from a pure dipole field. We derive magnetic maps and chemical abundance distributions from the LSD profiles, produced using the Zeeman-Doppler imaging code INVERSLSD. Using a spherical harmonic expansion to characterize the magnetic field, we find that the harmonic energy is concentrated predominantly in the dipole mode (ℓ = 1), with significant contribution from both the poloidal and toroidal components. This toroidal field component is predicted theoretically, but not typically observed for Ap/Bp stars. Chemical abundance maps reveal a helium enhancement in a distinct region where the radial magnetic field is strong. Silicon enhancements are located in two regions, also where the radial field is stronger. Titanium and iron enhancements are slightly offset from the helium enhancements, and are located in areas where the radial field is weak, close to the magnetic equator.

  18. Anatomic Connections of the Subgenual Cingulate Region.

    PubMed

    Vergani, Francesco; Martino, Juan; Morris, Christopher; Attems, Johannes; Ashkan, Keyoumars; DellʼAcqua, Flavio

    2016-09-01

    The subgenual cingulate gyrus (SCG) has been proposed as a target for deep brain stimulation (DBS) in neuropsychiatric disorders, mainly major depression. Despite promising clinical results, the mechanism of action of DBS in this region is poorly understood. Knowledge of the connections of the SCG can elucidate the network involved by DBS in this area and can help refine the targeting for DBS electrode placement. To investigate the anatomic connections of the SCG region. An anatomic study of the connections of the SCG was performed on postmortem specimens and in vivo with MR diffusion imaging tractography. Postmortem dissections were performed according to the Klingler technique. Specimens were fixed in 10% formalin and frozen at -15°C for 2 weeks. After thawing, dissection was performed with blunt dissectors. Whole brain tractography was performed using spherical deconvolution tractography. Four main connections were found: (1) fibers of the cingulum, originating at the level of the SCG and terminating at the medial aspect of the temporal lobe (parahippocampal gyrus); (2) fibers running toward the base of the frontal lobe, connecting the SCG with frontopolar areas; (3) fibers running more laterally, converging onto the ventral striatum (nucleus accumbens); (4) fibers of the uncinate fasciculus, connecting the orbitofrontal with the anterior temporal region. The SCG shows a wide range of white matter connections with limbic, prefrontal, and mesiotemporal areas. These findings can help to explain the role of the SCG in DBS for psychiatric disorders. Abbreviations: DBS, deep brain stimulation; SCG, subgenual cingulate gyrus.

  19. Apparent Fibre Density: a novel measure for the analysis of diffusion-weighted magnetic resonance images.

    PubMed

    Raffelt, David; Tournier, J-Donald; Rose, Stephen; Ridgway, Gerard R; Henderson, Robert; Crozier, Stuart; Salvado, Olivier; Connelly, Alan

    2012-02-15

    This article proposes a new measure called Apparent Fibre Density (AFD) for the analysis of high angular resolution diffusion-weighted images using higher-order information provided by fibre orientation distributions (FODs) computed using spherical deconvolution. AFD has the potential to provide specific information regarding differences between populations by identifying not only the location, but also the orientations along which differences exist. In this work, analytical and numerical Monte-Carlo simulations are used to support the use of the FOD amplitude as a quantitative measure (i.e. AFD) for population and longitudinal analysis. To perform robust voxel-based analysis of AFD, we present and evaluate a novel method to modulate the FOD to account for changes in fibre bundle cross-sectional area that occur during spatial normalisation. We then describe a novel approach for statistical analysis of AFD that uses cluster-based inference of differences extended throughout space and orientation. Finally, we demonstrate the capability of the proposed method by performing voxel-based AFD comparisons between a group of Motor Neurone Disease patients and healthy control subjects. A significant decrease in AFD was detected along voxels and orientations corresponding to both the corticospinal tract and corpus callosal fibres that connect the primary motor cortices. In addition to corroborating previous findings in MND, this study demonstrates the clear advantage of using this type of analysis by identifying differences along single fibre bundles in regions containing multiple fibre populations. Copyright © 2011 Elsevier Inc. All rights reserved.

  20. CLUMP-3D: Three-dimensional Shape and Structure of 20 CLASH Galaxy Clusters from Combined Weak and Strong Lensing

    NASA Astrophysics Data System (ADS)

    Chiu, I.-Non; Umetsu, Keiichi; Sereno, Mauro; Ettori, Stefano; Meneghetti, Massimo; Merten, Julian; Sayers, Jack; Zitrin, Adi

    2018-06-01

    We perform a three-dimensional triaxial analysis of 16 X-ray regular and 4 high-magnification galaxy clusters selected from the CLASH survey by combining two-dimensional weak-lensing and central strong-lensing constraints. In a Bayesian framework, we constrain the intrinsic structure and geometry of each individual cluster assuming a triaxial Navarro–Frenk–White halo with arbitrary orientations, characterized by the mass M_200c, halo concentration c_200c, and triaxial axis ratios (q_a ≤ q_b), and investigate scaling relations between these halo structural parameters. From triaxial modeling of the X-ray-selected subsample, we find that the halo concentration decreases with increasing cluster mass, with a mean concentration of c_200c = 4.82 ± 0.30 at the pivot mass M_200c = 10^15 M_⊙ h^-1. This is consistent with the result from spherical modeling, c_200c = 4.51 ± 0.14. Independently of the priors, the minor-to-major axis ratio q_a of our full sample exhibits a clear deviation from the spherical configuration (q_a = 0.52 ± 0.04 at 10^15 M_⊙ h^-1 with uniform priors), with a weak dependence on the cluster mass. Combining all 20 clusters, we obtain a joint ensemble constraint on the minor-to-major axis ratio of q_a = 0.652 (+0.162/-0.078) and a lower bound on the intermediate-to-major axis ratio of q_b > 0.63 at the 2σ level from an analysis with uniform priors. Assuming priors on the axis ratios derived from numerical simulations, we constrain the degree of triaxiality for the full sample to be T = 0.79 ± 0.03 at 10^15 M_⊙ h^-1, indicating a preference for a prolate geometry of cluster halos. We find no statistical evidence for an orientation bias (f_geo = 0.93 ± 0.07), which is insensitive to the priors and in agreement with the theoretical expectation for the CLASH clusters.
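
    The triaxiality constraint quoted above can be unpacked with the definition commonly used in this literature (a sketch assuming the standard convention for ellipsoid axes a ≥ b ≥ c): T = (a² − b²)/(a² − c²) = (1 − q_b²)/(1 − q_a²), so T near 1 indicates a prolate halo and T near 0 an oblate one. The input values below are illustrative, not the paper's posterior constraints.

```python
# Standard triaxiality parameter for an ellipsoid with axes a >= b >= c,
# written in terms of the axis ratios q_a = c/a (minor-to-major) and
# q_b = b/a (intermediate-to-major):
#   T = (a^2 - b^2) / (a^2 - c^2) = (1 - q_b^2) / (1 - q_a^2).
# T near 1 is prolate, T near 0 is oblate. Inputs below are illustrative.

def triaxiality(q_a: float, q_b: float) -> float:
    """Triaxiality from minor- and intermediate-to-major axis ratios."""
    if not 0.0 < q_a <= q_b <= 1.0:
        raise ValueError("require 0 < q_a <= q_b <= 1")
    if q_a == 1.0:            # perfect sphere: T is undefined
        raise ValueError("triaxiality undefined for a spherical halo")
    return (1.0 - q_b ** 2) / (1.0 - q_a ** 2)

print(round(triaxiality(0.52, 0.70), 3))   # strongly flattened halo
```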

  1. Probing the shape and internal structure of dark matter haloes with the halo-shear-shear three-point correlation function

    NASA Astrophysics Data System (ADS)

    Shirasaki, Masato; Yoshida, Naoki

    2018-04-01

    Weak lensing three-point statistics are powerful probes of the structure of dark matter haloes. We propose to use the correlation of the positions of galaxies with the shapes of background galaxy pairs, known as the halo-shear-shear correlation (HSSC), to measure the mean halo ellipticity and the abundance of subhaloes in a statistical manner. We run high-resolution cosmological N-body simulations and use the outputs to measure the HSSC for galaxy haloes and cluster haloes. Non-spherical haloes cause a characteristic azimuthal variation of the HSSC, and massive subhaloes in the outer region near the virial radius contribute ~10 per cent of the HSSC amplitude. Using the HSSC and its covariance estimated from our N-body simulations, we make forecasts for constraining the internal structure of dark matter haloes with future galaxy surveys. With 1000 galaxy groups with mass greater than 10^13.5 h^-1 M⊙, the average halo ellipticity can be measured with an accuracy of 10 per cent. A spherical, smooth mass distribution can be ruled out at a ~5σ significance level. The existence of subhaloes whose masses are 1-10 per cent of the main halo mass can be detected with ~10^4 galaxies/clusters. We conclude that the HSSC provides valuable information on the structure of dark haloes and hence on the nature of dark matter.

  2. Nonlinear spatio-temporal filtering of dynamic PET data using a four-dimensional Gaussian filter and expectation-maximization deconvolution

    NASA Astrophysics Data System (ADS)

    Floberg, J. M.; Holden, J. E.

    2013-02-01

    We introduce a method for denoising dynamic PET data, spatio-temporal expectation-maximization (STEM) filtering, that combines four-dimensional Gaussian filtering with EM deconvolution. The initial Gaussian filter suppresses noise at a broad range of spatial and temporal frequencies and EM deconvolution quickly restores the frequencies most important to the signal. We aim to demonstrate that STEM filtering can improve variance in both individual time frames and in parametric images without introducing significant bias. We evaluate STEM filtering with a dynamic phantom study, and with simulated and human dynamic PET studies of a tracer with reversible binding behaviour, [C-11]raclopride, and a tracer with irreversible binding behaviour, [F-18]FDOPA. STEM filtering is compared to a number of established three- and four-dimensional denoising methods. STEM filtering provides substantial improvements in variance in both individual time frames and in parametric images generated with a number of kinetic analysis techniques, while introducing little bias. STEM filtering does bias early frames, but this does not affect quantitative parameter estimates. STEM filtering is shown to be superior to the other simple denoising methods studied. STEM filtering is a simple and effective denoising method that could be valuable for a wide range of dynamic PET applications.
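
    The two-stage structure (Gaussian smoothing followed by EM deconvolution) can be sketched in one dimension, using the Richardson-Lucy form of the EM update with the same Gaussian as the blur kernel. The test signal (a decay curve plus a narrow peak, loosely evoking a time-activity curve) and all parameter values are illustrative, not those of the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# 1-D sketch of the STEM idea: a Gaussian filter first suppresses noise
# across the board, then iterations of EM (Richardson-Lucy) deconvolution
# with the same Gaussian as the kernel restore the attenuated frequencies.
# Signal and parameters are invented for illustration.

rng = np.random.default_rng(1)
n = 500
x = np.arange(n)
clean = np.exp(-x / 150.0) + 0.5 * np.exp(-((x - 200) / 3.0) ** 2)
noisy = clean + 0.05 * rng.standard_normal(n)

sigma = 5.0                                  # filter width in samples
smoothed = gaussian_filter1d(noisy, sigma)   # stage 1: denoise

# Stage 2: EM deconvolution (Richardson-Lucy, symmetric kernel).
blur = lambda f: gaussian_filter1d(f, sigma)
estimate = np.clip(smoothed, 1e-6, None)
for _ in range(30):
    ratio = smoothed / np.clip(blur(estimate), 1e-6, None)
    estimate *= blur(ratio)                  # multiplicative EM update

# The narrow peak, flattened by the Gaussian filter, is partly restored.
print(float(smoothed[200]), float(estimate[200]))
```

    The multiplicative update keeps the estimate non-negative, which is one reason the EM form suits count-like PET data.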

  3. Photoacoustic imaging optimization with raw signal deconvolution and empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Guo, Chengwen; Wang, Jing; Qin, Yu; Zhan, Hongchen; Yuan, Jie; Cheng, Qian; Wang, Xueding

    2018-02-01

    The photoacoustic (PA) signal of an ideal optical absorber is a single N-shaped wave, and PA signals from complex biological tissue can be considered a combination of individual N-shaped waves. However, the N-shaped wave basis not only complicates subsequent processing, but also results in aliasing between adjacent micro-structures, which deteriorates the quality of the final PA images. In this paper, we propose a method to improve PA image quality by processing the raw signals directly, using deconvolution and empirical mode decomposition (EMD). During the deconvolution procedure, the raw PA signals are deconvolved with a system-dependent point spread function (PSF) measured in advance. Then, EMD is adopted to adaptively re-shape the PA signals under two constraints, positive polarity and spectral consistency. With our proposed method, the reconstructed PA images yield more detailed structural information; micro-structures are clearly separated and revealed. To validate the effectiveness of this method, we present numerical simulations and phantom studies consisting of a densely distributed point-source model and a blood vessel model. In the future, our approach might hold potential for clinical PA imaging, as it can help to distinguish micro-structures in the optimized images and even measure the size of objects from deconvolved signals.

  4. Multi-kernel deconvolution for contrast improvement in a full field imaging system with engineered PSFs using conical diffraction

    NASA Astrophysics Data System (ADS)

    Enguita, Jose M.; Álvarez, Ignacio; González, Rafael C.; Cancelas, Jose A.

    2018-01-01

    The problem of restoring a high-resolution image from several degraded versions of the same scene (deconvolution) has been receiving attention in recent years in fields such as optics and computer vision. Deconvolution methods are usually based on sets of images taken with small (sub-pixel) displacements or slightly different focus. Techniques based on sets of images obtained with different point-spread functions (PSFs) engineered by an optical system are less common and mostly restricted to microscopic systems, where a spot of light is projected onto the sample under investigation, which is then scanned point by point. In this paper, we use the effect of conical diffraction to shape the PSFs in a full-field macroscopic imaging system. We describe a series of simulations and real experiments that help to evaluate the possibilities of the system, showing the enhancement in image contrast even at frequencies that are strongly filtered by the lens transfer function or when sampling near the Nyquist frequency. Although the results are preliminary and there is room to optimize the prototype, the idea shows promise for overcoming the limitations of image sensor technology in many fields, such as forensic, medical, satellite, or scientific imaging.

  5. Charge reconstruction in large-area photomultipliers

    NASA Astrophysics Data System (ADS)

    Grassi, M.; Montuschi, M.; Baldoncini, M.; Mantovani, F.; Ricci, B.; Andronico, G.; Antonelli, V.; Bellato, M.; Bernieri, E.; Brigatti, A.; Brugnera, R.; Budano, A.; Buscemi, M.; Bussino, S.; Caruso, R.; Chiesa, D.; Corti, D.; Dal Corso, F.; Ding, X. F.; Dusini, S.; Fabbri, A.; Fiorentini, G.; Ford, R.; Formozov, A.; Galet, G.; Garfagnini, A.; Giammarchi, M.; Giaz, A.; Insolia, A.; Isocrate, R.; Lippi, I.; Longhitano, F.; Lo Presti, D.; Lombardi, P.; Marini, F.; Mari, S. M.; Martellini, C.; Meroni, E.; Mezzetto, M.; Miramonti, L.; Monforte, S.; Nastasi, M.; Ortica, F.; Paoloni, A.; Parmeggiano, S.; Pedretti, D.; Pelliccia, N.; Pompilio, R.; Previtali, E.; Ranucci, G.; Re, A. C.; Romani, A.; Saggese, P.; Salamanna, G.; Sawy, F. H.; Settanta, G.; Sisti, M.; Sirignano, C.; Spinetti, M.; Stanco, L.; Strati, V.; Verde, G.; Votano, L.

    2018-02-01

    Large-area PhotoMultiplier Tubes (PMTs) make it possible to efficiently instrument Liquid Scintillator (LS) neutrino detectors, where large target masses are pivotal to compensate for neutrinos' extremely elusive nature. Depending on the detector light yield, several scintillation photons stemming from the same neutrino interaction are likely to hit a single PMT in a few tens/hundreds of nanoseconds, resulting in several photoelectrons (PEs) piling up at the PMT anode. In such a scenario, the signal generated by each PE is entangled with the others, and an accurate PMT charge reconstruction becomes challenging. This manuscript describes an experimental method able to address the PMT charge reconstruction in the case of large PE pile-up, providing an unbiased charge estimator at the permille level up to 15 detected PEs. The method is based on a signal filtering technique (Wiener filter) which suppresses the noise due to both PMT and readout electronics, and on a Fourier-based deconvolution able to minimize the influence of signal distortions, such as an overshoot. The analysis of simulated PMT waveforms shows that the slope of a linear regression modeling the relation between reconstructed and true charge values improves from 0.769 ± 0.001 (without deconvolution) to 0.989 ± 0.001 (with deconvolution), where unitary slope implies perfect reconstruction. A C++ implementation of the charge reconstruction algorithm is available online at [1].
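
    The Fourier-based deconvolution step can be illustrated with a toy example: a waveform is modeled as a PE arrival train convolved with a single-PE pulse template, and a Wiener-style regularized inverse filter recovers the train, whose integral gives the charge. The pulse shape, noise level and regularization constant below are invented for illustration and are unrelated to the actual detector electronics.

```python
import numpy as np

# Toy Fourier deconvolution of PE pile-up: waveform = train (*) template
# + noise; applying the regularized inverse H* / (|H|^2 + eps) recovers
# the PE train, and integrating it yields the charge (number of PEs).
# Template, noise and eps are synthetic stand-ins.

rng = np.random.default_rng(2)
n = 1024
t = np.arange(n)

# Single-PE template: fast rise, exponential decay, unit area ("charge" 1).
template = np.where(t < 200, (1 - np.exp(-t / 2.0)) * np.exp(-t / 20.0), 0.0)
template /= template.sum()

# True PE train: 5 photoelectrons piling up within a few tens of samples.
train = np.zeros(n)
for pos in (300, 310, 318, 325, 340):
    train[pos] += 1.0

waveform = np.real(np.fft.ifft(np.fft.fft(train) * np.fft.fft(template)))
waveform += 0.001 * rng.standard_normal(n)

# Wiener-style deconvolution: damp the inverse where the template has
# little power, instead of dividing by H outright.
H = np.fft.fft(template)
eps = 1e-4                          # noise-to-signal regularization
G = np.conj(H) / (np.abs(H) ** 2 + eps)
recovered = np.real(np.fft.ifft(np.fft.fft(waveform) * G))

# Reconstructed charge = integral of the deconvolved train near the hit.
charge = recovered[250:450].sum()
print(charge)
```

    Without the eps term the division would blow up the noise at frequencies where the template spectrum is small, which is the failure mode the Wiener filter in the paper is there to prevent.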

  6. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics.

    PubMed

    Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan

    2017-04-06

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.

  7. Facilitating high resolution mass spectrometry data processing for screening of environmental water samples: An evaluation of two deconvolution tools.

    PubMed

    Bade, Richard; Causanilles, Ana; Emke, Erik; Bijlsma, Lubertus; Sancho, Juan V; Hernandez, Felix; de Voogt, Pim

    2016-11-01

    A screening approach was applied to influent and effluent wastewater samples. After injection in an LC-LTQ-Orbitrap, data analysis was performed using two deconvolution tools, MsXelerator (modules MPeaks and MS Compare) and Sieve 2.1. The outputs were searched against an in-house database of >200 pharmaceuticals and illicit drugs or against ChemSpider. This hidden target screening approach led to the detection of numerous compounds, including the illicit drug cocaine and its metabolite benzoylecgonine and the pharmaceuticals carbamazepine, gemfibrozil and losartan. The compounds found using both approaches were combined, and isotopic pattern and retention time prediction were used to filter out false positives. The remaining potential positives were reanalysed in MS/MS mode and their product ions were compared with literature and/or mass spectral libraries. The inclusion of the chemical database ChemSpider led to the tentative identification of several metabolites, including paraxanthine, theobromine, theophylline and carboxylosartan, as well as the pharmaceutical phenazone. The first three of these compounds are isomers, and they were subsequently distinguished based on their product ions and predicted retention times. This work has shown that the use of deconvolution tools facilitates non-target screening and enables the identification of a higher number of compounds. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Pre-processing liquid chromatography/high-resolution mass spectrometry data: extracting pure mass spectra by deconvolution from the invariance of isotopic distribution.

    PubMed

    Krishnan, Shaji; Verheij, Elwin E R; Bas, Richard C; Hendriks, Margriet W B; Hankemeier, Thomas; Thissen, Uwe; Coulier, Leon

    2013-05-15

    Mass spectra obtained by deconvolution of liquid chromatography/high-resolution mass spectrometry (LC/HRMS) data can be impaired by non-informative mass-to-charge (m/z) channels. This impairment of mass spectra can have a significant negative influence on further post-processing, such as quantification and identification. A metric derived from the knowledge of errors in isotopic distribution patterns, and from the quality of the signal within a pre-defined mass chromatogram block, has been developed to pre-select all informative m/z channels. This procedure results in the clean-up of deconvoluted mass spectra by maintaining the intensity counts from m/z channels that originate from a specific compound/molecular ion (for example, the molecular ion, adducts, 13C isotopes and multiply charged ions) and removing all m/z channels that are not related to the specific peak. The methodology has been successfully demonstrated for two sets of high-resolution LC/MS data. The approach described is therefore thought to be a useful tool in the automatic processing of LC/HRMS data. It clearly shows advantages compared to other approaches, such as peak picking and de-isotoping, in the sense that all information is retained while non-informative data are removed automatically. Copyright © 2013 John Wiley & Sons, Ltd.

  9. Plenoptic Image Motion Deblurring.

    PubMed

    Chandramouli, Paramanand; Jin, Meiguang; Perrone, Daniele; Favaro, Paolo

    2018-04-01

    We propose a method to remove motion blur in a single light field captured with a moving plenoptic camera. Since motion is unknown, we resort to a blind deconvolution formulation, where one aims to identify both the blur point spread function and the latent sharp image. Even in the absence of motion, light field images captured by a plenoptic camera are affected by a non-trivial combination of both aliasing and defocus, which depends on the 3D geometry of the scene. Therefore, motion deblurring algorithms designed for standard cameras are not directly applicable. Moreover, many state-of-the-art blind deconvolution algorithms are based on iterative schemes, where blurry images are synthesized through the imaging model. However, current imaging models for plenoptic images are impractical due to their high dimensionality. We observe that plenoptic cameras introduce periodic patterns that can be exploited to obtain highly parallelizable numerical schemes to synthesize images. These schemes allow extremely efficient GPU implementations that enable the use of iterative methods. We can then cast blind deconvolution of a blurry light field image as a regularized energy minimization to recover a sharp high-resolution scene texture and the camera motion. Furthermore, the proposed formulation can handle non-uniform motion blur due to camera shake, as demonstrated on both synthetic and real light field data.

  10. Distributed capillary adiabatic tissue homogeneity model in parametric multi-channel blind AIF estimation using DCE-MRI.

    PubMed

    Kratochvíla, Jiří; Jiřík, Radovan; Bartoš, Michal; Standara, Michal; Starčuk, Zenon; Taxt, Torfinn

    2016-03-01

    One of the main challenges in quantitative dynamic contrast-enhanced (DCE) MRI is estimation of the arterial input function (AIF). Usually, the signal from a single artery (ignoring contrast dispersion, partial volume effects and flow artifacts) or a population average of such signals (also ignoring variability between patients) is used. Multi-channel blind deconvolution is an alternative approach avoiding most of these problems: the AIF is estimated directly from the measured tracer concentration curves in several tissues. This contribution extends the published methods of multi-channel blind deconvolution by applying a more realistic model of the impulse residue function, the distributed capillary adiabatic tissue homogeneity (DCATH) model. In addition, an alternative AIF model is used and several AIF-scaling methods are tested. The proposed method is evaluated on synthetic data with respect to the number of tissue regions and to the signal-to-noise ratio. An initial evaluation on clinical data (renal cell carcinoma patients before and after the start of treatment) gave consistent results and indicates more reliable and less noise-sensitive perfusion parameter estimates. Blind multi-channel deconvolution using the DCATH model might be a method of choice for AIF estimation in a clinical setup. © 2015 Wiley Periodicals, Inc.

  11. Computationally efficient video restoration for Nyquist sampled imaging sensors combining an affine-motion-based temporal Kalman filter and adaptive Wiener filter.

    PubMed

    Rucci, Michael; Hardie, Russell C; Barnard, Kenneth J

    2014-05-01

    In this paper, we present a computationally efficient video restoration algorithm to address both blur and noise for a Nyquist sampled imaging system. The proposed method utilizes a temporal Kalman filter followed by a correlation-model based spatial adaptive Wiener filter (AWF). The Kalman filter employs an affine background motion model and novel process-noise variance estimate. We also propose and demonstrate a new multidelay temporal Kalman filter designed to more robustly treat local motion. The AWF is a spatial operation that performs deconvolution and adapts to the spatially varying residual noise left in the Kalman filter stage. In image areas where the temporal Kalman filter is able to provide significant noise reduction, the AWF can be aggressive in its deconvolution. In other areas, where less noise reduction is achieved with the Kalman filter, the AWF balances the deconvolution with spatial noise reduction. In this way, the Kalman filter and AWF work together effectively, but without the computational burden of full joint spatiotemporal processing. We also propose a novel hybrid system that combines a temporal Kalman filter and BM3D processing. To illustrate the efficacy of the proposed methods, we test the algorithms on both simulated imagery and video collected with a visible camera.
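
    The temporal stage can be sketched per pixel as a scalar Kalman filter with a static-scene (random-walk) state model; the spatial adaptive Wiener stage would then operate on each filtered frame. The noise variances, frame count and pixel intensity below are invented for illustration and do not reflect the paper's affine-motion model or parameter choices.

```python
import numpy as np

# Toy per-pixel temporal Kalman filter: one pixel's intensity is tracked
# across frames under a random-walk (static scene) state model. Process
# variance q, measurement variance r and the intensity are illustrative.

rng = np.random.default_rng(4)
n_frames, truth = 100, 50.0
obs = truth + 4.0 * rng.standard_normal(n_frames)   # noisy pixel samples

q, r = 0.01, 16.0          # process- and measurement-noise variances
x, p = obs[0], r           # initial state estimate and its covariance
for z in obs[1:]:
    p += q                 # predict: random-walk state transition
    k = p / (p + r)        # Kalman gain
    x += k * (z - x)       # update with the innovation
    p *= (1 - k)           # posterior covariance

print(round(x, 2))         # temporally denoised pixel estimate
```

    The residual noise left by this stage varies across the image (e.g. where local motion defeats the temporal model), which is why the paper's spatial Wiener filter adapts its deconvolution strength to that residual.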

  12. THE EFFECT OF BACKGROUND SIGNAL AND ITS REPRESENTATION IN DECONVOLUTION OF EPR SPECTRA ON ACCURACY OF EPR DOSIMETRY IN BONE.

    PubMed

    Ciesielski, Bartlomiej; Marciniak, Agnieszka; Zientek, Agnieszka; Krefft, Karolina; Cieszyński, Mateusz; Boguś, Piotr; Prawdzik-Dampc, Anita

    2016-12-01

    This study examines the accuracy of EPR dosimetry in bone based on deconvolution of the experimental spectra into background (BG) and radiation-induced signal (RIS) components. The model RISs were represented by EPR spectra from irradiated enamel or bone powder; the model BG signals by EPR spectra of unirradiated bone samples or by simulated spectra. Samples of compact and trabecular bone were irradiated in the 30-270 Gy range and the intensities of their RISs were calculated using various combinations of those benchmark spectra. The relationships between the dose and the RIS were linear (R² > 0.995), with practically no difference between results obtained when using signals from irradiated enamel or bone as the model RIS. Use of different experimental spectra for the model BG resulted in variations in intercepts of the dose-RIS calibration lines, leading to systematic errors in reconstructed doses, in particular for high-BG samples of trabecular bone. These errors were reduced when simulated spectra instead of the experimental ones were used as the benchmark BG signal in the applied deconvolution procedures. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
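
    The deconvolution into BG and RIS components amounts to a linear least-squares fit of two benchmark spectra to the measurement, with the fitted RIS amplitude feeding the dose calibration line. A minimal sketch with synthetic spectra (the line shapes and amplitudes below are invented, not EPR data):

```python
import numpy as np

# Sketch of spectrum deconvolution by linear least squares: the measured
# spectrum is modeled as a * BG + b * RIS, where BG and RIS are benchmark
# component spectra, and b is the dose-proportional RIS intensity.
# All spectra here are synthetic illustrations.

rng = np.random.default_rng(3)
x = np.linspace(-1, 1, 200)                     # field axis (arb. units)

bg = np.exp(-(x / 0.6) ** 2)                    # broad background shape
ris = -np.gradient(np.exp(-(x / 0.1) ** 2), x)  # narrow derivative-like RIS

a_true, b_true = 1.3, 0.4                       # BG and RIS amplitudes
measured = a_true * bg + b_true * ris + 0.02 * rng.standard_normal(x.size)

# Solve measured ~ A @ [a, b] in the least-squares sense.
A = np.column_stack([bg, ris])
(a_fit, b_fit), *_ = np.linalg.lstsq(A, measured, rcond=None)

print(round(a_fit, 3), round(b_fit, 3))
```

    The study's point maps onto this sketch directly: a benchmark BG whose shape mismatches the sample's true background biases b, which shifts the intercept of the dose-RIS calibration line.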

  13. Approximate deconvolution model for the simulation of turbulent gas-solid flows: An a priori analysis

    NASA Astrophysics Data System (ADS)

    Schneiderbauer, Simon; Saeedipour, Mahdi

    2018-02-01

    Highly resolved two-fluid model (TFM) simulations of gas-solid flows in vertical periodic channels have been performed to study closures for the filtered drag force and the Reynolds-stress-like contribution stemming from the convective terms. An approximate deconvolution model (ADM) for the large-eddy simulation of turbulent gas-solid suspensions is detailed and subsequently used to reconstruct those unresolved contributions in an a priori manner. With such an approach, an approximation of the unfiltered solution is obtained by repeated filtering allowing the determination of the unclosed terms of the filtered equations directly. A priori filtering shows that predictions of the ADM model yield fairly good agreement with the fine grid TFM simulations for various filter sizes and different particle sizes. In particular, strong positive correlation (ρ > 0.98) is observed at intermediate filter sizes for all sub-grid terms. Additionally, our study reveals that the ADM results moderately depend on the choice of the filters, such as box and Gaussian filter, as well as the deconvolution order. The a priori test finally reveals that ADM is superior compared to isotropic functional closures proposed recently [S. Schneiderbauer, "A spatially-averaged two-fluid model for dense large-scale gas-solid flows," AIChE J. 63, 3544-3562 (2017)].
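
    The reconstruction-by-repeated-filtering idea can be sketched in one dimension: approximate deconvolution truncates the van Cittert series u* = Σ_{k=0}^{N} (I − G)^k ū, so each extra term re-applies the filter to recover more of the attenuated content. The test field and Gaussian filter below are simple stand-ins for the TFM fields and box/Gaussian filters of the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# 1-D sketch of approximate deconvolution (truncated van Cittert series):
# given a filtered field u_bar = G(u), approximate the unfiltered field by
#   u* = sum_{k=0}^{N} (I - G)^k applied to u_bar,
# i.e. by repeated application of the filter G itself.

def adm_deconvolve(u_bar, filt, order=5):
    """Approximate inverse of `filt` applied to u_bar (van Cittert)."""
    u_star = np.zeros_like(u_bar)
    term = u_bar.copy()
    for _ in range(order + 1):
        u_star += term
        term = term - filt(term)      # next power of (I - G)
    return u_star

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
u = np.sin(x) + 0.3 * np.sin(5 * x)   # "resolved + sub-filter" content
G = lambda f: gaussian_filter1d(f, sigma=2.0, mode="wrap")

u_bar = G(u)                          # explicitly filtered field
u_star = adm_deconvolve(u_bar, G, order=5)

err_filtered = np.max(np.abs(u_bar - u))
err_adm = np.max(np.abs(u_star - u))
print(err_filtered, err_adm)
```

    The recovered u* is then what allows the unclosed sub-filter terms to be evaluated directly in the a priori test, rather than being modeled by a functional closure.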

  14. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics

    PubMed Central

    Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan

    2017-01-01

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods. PMID:28383503

  15. An integrated analysis-synthesis array system for spatial sound fields.

    PubMed

    Bai, Mingsian R; Hua, Yi-Hsin; Kuo, Chia-Hao; Hsieh, Yu-Hao

    2015-03-01

    An integrated recording and reproduction array system for spatial audio is presented within a generic framework akin to the analysis-synthesis filterbanks in discrete time signal processing. In the analysis stage, a microphone array "encodes" the sound field by using the plane-wave decomposition. Direction of arrival of plane-wave components that comprise the sound field of interest are estimated by multiple signal classification. Next, the source signals are extracted by using a deconvolution procedure. In the synthesis stage, a loudspeaker array "decodes" the sound field by reconstructing the plane-wave components obtained in the analysis stage. This synthesis stage is carried out by pressure matching in the interior domain of the loudspeaker array. The deconvolution problem is solved by truncated singular value decomposition or convex optimization algorithms. For high-frequency reproduction that suffers from the spatial aliasing problem, vector panning is utilized. Listening tests are undertaken to evaluate the deconvolution method, vector panning, and a hybrid approach that combines both methods to cover frequency ranges below and above the spatial aliasing frequency. Localization and timbral attributes are considered in the subjective evaluation. The results show that the hybrid approach performs the best in overall preference. In addition, there is a trade-off between reproduction performance and the external radiation.

  16. Investigation of the lithosphere of the Texas Gulf Coast using phase-specific Ps receiver functions produced by wavefield iterative deconvolution

    NASA Astrophysics Data System (ADS)

    Gurrola, H.; Berdine, A.; Pulliam, J.

    2017-12-01

    Interference between Ps phases and reverberations (PPs and PSs phases and reverberations thereof) makes it difficult to use Ps receiver functions (RFs) in regions with thick sediments. Crustal reverberations typically interfere with Ps phases from the lithosphere-asthenosphere boundary (LAB). We have developed a method to separate Ps phases from reverberations by deconvolution of all the data recorded at a seismic station, removing phases from a single wavefront at each iteration of the deconvolution (wavefield iterative deconvolution, or WID). We applied WID to data collected in the Gulf Coast and Llano Front regions of Texas by the EarthScope Transportable Array and by a temporary deployment of 23 broadband seismometers operated by Texas Tech and Baylor Universities. The 23-station temporary deployment was 300 km long, crossing from Matagorda Island onto the Llano uplift. 3-D imaging using these data shows that the deepest part of the sedimentary basin may be inboard of the coastline. The Moho beneath the Gulf Coast plain does not appear in many of the images. This could be due to interference from reverberations from shallower layers, or it may indicate the lack of a strong velocity contrast at the Moho, perhaps due to serpentinization of the uppermost mantle. The Moho appears to be flat, at about 40 km, beneath most of the Llano uplift but may thicken to the south and thin beneath the coastal plain. After application of WID, we were able to identify a negatively polarized Ps phase consistent with LAB depths identified in Sp RF images. The LAB appears to be 80-100 km deep beneath most of the coast but is 100-120 km deep beneath the Llano uplift. There are other negatively polarized phases between 160 and 200 km depth beneath the Gulf Coast and the Llano uplift. These deeper phases may indicate that, in this region, the LAB is transitional in nature rather than a discrete boundary.

  17. SU-E-T-236: Deconvolution of the Total Nuclear Cross-Sections of Therapeutic Protons and the Characterization of the Reaction Channels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ulmer, W.

    2015-06-15

Purpose: The knowledge of the total nuclear cross-section Qtot(E) of therapeutic protons provides important information in advanced radiotherapy with protons, such as the decrease of the fluence of primary protons, the release of secondary particles (neutrons, protons, deuterons, etc.), and the production of nuclear fragments (heavy recoils), which usually undergo β+/− decay with emission of γ-quanta. The determination of Qtot(E) is therefore an important tool for sophisticated calculation algorithms of dose distributions. This cross-section can be determined by a linear combination of shifted Gaussian kernels and an error function. The resonances resulting from deconvolutions in the energy space can be associated with typical nuclear reactions. Methods: The described method for determining Qtot(E) results from an extension of the Breit-Wigner formula and a rather extended version of nuclear shell theory that includes nuclear correlation effects, clusters, and highly excited/virtually excited nuclear states. The elastic energy transfer of protons to nucleons (the quantum numbers of the target nucleus remain constant) can be removed by the mentioned deconvolution. Results: The deconvolution of the error-function term of the type c_erf·erf((E − E_Th)/σ_erf) is the main contribution to obtaining the various nuclear reactions as resonances, since the elastic part of the energy transfer is removed. The nuclear products of various elements of therapeutic interest, such as oxygen and calcium, are classified and calculated. Conclusions: The release of neutrons is commonly underrated, in particular for low-energy protons. The transport of secondary particles, e.g. cluster formation by deuterium, tritium and α-particles, makes an essential contribution to the secondary-particle yield, and the heavy recoils, which create γ-quanta by decay reactions, lead to broadening of the scatter profiles. These contributions cannot be accounted for by a single Gaussian kernel in the description of lateral scatter.
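The decomposition described here, an error-function threshold term plus shifted Gaussian resonance kernels, can be sketched as a least-squares fit. The functional form follows the abstract; all numerical values below (energies, widths, amplitudes) are illustrative assumptions, not the paper's data.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def q_tot(E, c_erf, E_th, s_erf, a1, mu1, s1):
    # error-function threshold term + one shifted Gaussian resonance
    # (the paper uses a linear combination of several Gaussians)
    return c_erf * erf((E - E_th) / s_erf) + a1 * np.exp(-((E - mu1) / s1) ** 2)

# synthetic "measured" cross-section with hypothetical parameter values
E = np.linspace(5.0, 250.0, 400)                  # proton energy, MeV
rng = np.random.default_rng(0)
data = q_tot(E, 0.5, 20.0, 15.0, 0.2, 45.0, 10.0) + rng.normal(0.0, 0.005, E.size)

popt, _ = curve_fit(q_tot, E, data, p0=(0.4, 15.0, 10.0, 0.1, 40.0, 8.0))
# removing the fitted elastic (erf) part isolates the resonance structure
resonance = data - popt[0] * erf((E - popt[1]) / popt[2])
```

Subtracting the fitted error-function term plays the role of the "deconvolution" that strips the elastic part and leaves the resonances.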

  18. A restricted proof that the weak equivalence principle implies the Einstein equivalence principle

    NASA Technical Reports Server (NTRS)

    Lightman, A. P.; Lee, D. L.

    1973-01-01

Schiff has conjectured that the weak equivalence principle (WEP) implies the Einstein equivalence principle (EEP). A proof is presented of Schiff's conjecture, restricted to: (1) test bodies made of electromagnetically interacting point particles, that fall from rest in a static, spherically symmetric gravitational field; (2) theories of gravity within a certain broad class - a class that includes almost all complete relativistic theories that have been found in the literature, but with each theory truncated to contain only point particles plus electromagnetic and gravitational fields. The proof shows that every nonmetric theory in the class (every theory that violates EEP) must violate WEP. A formula is derived for the magnitude of the violation. It is shown that WEP is a powerful theoretical and experimental tool for constraining the manner in which gravity couples to electromagnetism in gravitation theories.

  19. Origin and thermal evolution of Mars

    NASA Technical Reports Server (NTRS)

    Schubert, Gerald; Soloman, S. C.; Turcotte, D. L.; Drake, M. J.; Sleep, N. H.

    1990-01-01

The thermal evolution of Mars is governed by subsolidus mantle convection beneath a thick lithosphere. Models of the interior evolution are developed by parameterizing mantle convective heat transport in terms of mantle viscosity, the superadiabatic temperature rise across the mantle, and mantle heat production. Geological, geophysical, and geochemical observations of the composition and structure of the interior and of the timing of major events in Martian evolution are used to constrain the model computations. Such evolutionary events include global differentiation, atmospheric outgassing, and the formation of the hemispherical dichotomy and Tharsis. Numerical calculations of fully three-dimensional, spherical convection in a shell the size of the Martian mantle are performed to explore plausible patterns of Martian mantle convection and to relate convective features, such as plumes, to surface features, such as Tharsis. The results from the model calculations are presented.
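Parameterized convection of this kind can be illustrated with a toy thermal-history integration, in which the convective heat flux is scaled through a Nusselt-Rayleigh power law and mantle viscosity depends strongly on temperature. Every constant below is an assumed round number for illustration, not a value from the study.

```python
import numpy as np

# Toy parameterized thermal history: mantle heat balance with Nu ~ Ra^beta
# scaling and decaying radiogenic heating. All values are illustrative.
sec_yr = 3.156e7
beta, Ra_crit = 0.3, 1.1e3
rho, cp, alpha, g, d, k_th, kappa = 3.5e3, 1.2e3, 3e-5, 3.7, 1.7e6, 4.0, 1e-6

def viscosity(T):
    # strongly temperature-dependent (Arrhenius-like), assumed form
    return 1e21 * np.exp(3e4 / T - 3e4 / 1800.0)

T, dt = 2000.0, 1e7 * sec_yr                      # initial temp (K), 10 Myr steps
for step in range(450):                           # ~4.5 Gyr
    t = step * dt
    H = 5e-12 * np.exp(-t / (2.5e9 * sec_yr))     # radiogenic heating, W/kg
    Ra = rho * g * alpha * T * d ** 3 / (kappa * viscosity(T))
    Nu = (Ra / Ra_crit) ** beta                   # convective heat-transport scaling
    q_out = k_th * T / d * Nu                     # surface heat flux, W/m^2
    T += dt * (H - q_out * 3.0 / (rho * d)) / cp  # 3/d ~ area/volume factor
```

The feedback that makes such models work is visible in the loop: cooling raises viscosity, which lowers Ra and Nu, which throttles further cooling.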

  20. Seismic Constraints on Interior Solar Convection

    NASA Technical Reports Server (NTRS)

    Hanasoge, Shravan M.; Duvall, Thomas L.; DeRosa, Marc L.

    2010-01-01

We constrain the velocity spectral distribution of global-scale solar convective cells at depth using techniques of local helioseismology. We calibrate the sensitivity of helioseismic waves to large-scale convective cells in the interior by analyzing simulations of waves propagating through a velocity snapshot of global solar convection via methods of time-distance helioseismology. Applying identical analysis techniques to observations of the Sun, we are able to bound from above the magnitudes of solar convective cells as a function of spatial convective scale. We find that convection at a depth of r/R(solar) = 0.95 with spatial extent l < 30, where l is the spherical harmonic degree, comprises weak flow systems, on the order of 15 m/s or less. Convective features deeper than r/R(solar) = 0.95 are more difficult to image due to the rapidly decreasing sensitivity of helioseismic waves.

  1. Assembly of micro/nanomaterials into complex, three-dimensional architectures by compressive buckling

    NASA Astrophysics Data System (ADS)

    Xu, Sheng; Yan, Zheng; Jang, Kyung-In; Huang, Wen; Fu, Haoran; Kim, Jeonghyun; Wei, Zijun; Flavin, Matthew; McCracken, Joselle; Wang, Renhan; Badea, Adina; Liu, Yuhao; Xiao, Dongqing; Zhou, Guoyan; Lee, Jungwoo; Chung, Ha Uk; Cheng, Huanyu; Ren, Wen; Banks, Anthony; Li, Xiuling; Paik, Ungyu; Nuzzo, Ralph G.; Huang, Yonggang; Zhang, Yihui; Rogers, John A.

    2015-01-01

    Complex three-dimensional (3D) structures in biology (e.g., cytoskeletal webs, neural circuits, and vasculature networks) form naturally to provide essential functions in even the most basic forms of life. Compelling opportunities exist for analogous 3D architectures in human-made devices, but design options are constrained by existing capabilities in materials growth and assembly. We report routes to previously inaccessible classes of 3D constructs in advanced materials, including device-grade silicon. The schemes involve geometric transformation of 2D micro/nanostructures into extended 3D layouts by compressive buckling. Demonstrations include experimental and theoretical studies of more than 40 representative geometries, from single and multiple helices, toroids, and conical spirals to structures that resemble spherical baskets, cuboid cages, starbursts, flowers, scaffolds, fences, and frameworks, each with single- and/or multiple-level configurations.

  2. Optimal 2D-SIM reconstruction by two filtering steps with Richardson-Lucy deconvolution.

    PubMed

    Perez, Victor; Chang, Bo-Jui; Stelzer, Ernst Hans Karl

    2016-11-16

    Structured illumination microscopy relies on reconstruction algorithms to yield super-resolution images. Artifacts can arise in the reconstruction and affect the image quality. Current reconstruction methods involve a parametrized apodization function and a Wiener filter. Empirically tuning the parameters in these functions can minimize artifacts, but such an approach is subjective and produces volatile results. We present a robust and objective method that yields optimal results by two straightforward filtering steps with Richardson-Lucy-based deconvolutions. We provide a resource to identify artifacts in 2D-SIM images by analyzing two main reasons for artifacts, out-of-focus background and a fluctuating reconstruction spectrum. We show how the filtering steps improve images of test specimens, microtubules, yeast and mammalian cells.
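The Richardson-Lucy deconvolutions underlying both filtering steps follow the classic multiplicative update; a minimal FFT-based sketch (not the authors' implementation), with a synthetic two-point example:

```python
import numpy as np

def richardson_lucy(image, psf, n_iter=50):
    """Plain Richardson-Lucy iteration via FFT convolution (1D or 2D)."""
    image = np.asarray(image, dtype=float)
    shape = image.shape
    psf_ft = np.fft.rfftn(np.fft.ifftshift(psf), shape)
    conv = lambda x, ft: np.fft.irfftn(np.fft.rfftn(x, shape) * ft, shape)
    estimate = np.full(shape, image.mean())
    for _ in range(n_iter):
        ratio = image / np.maximum(conv(estimate, psf_ft), 1e-12)
        estimate = estimate * conv(ratio, np.conj(psf_ft))  # correlate with PSF
    return estimate

# usage: two blurred point sources, sigma-2 Gaussian PSF (synthetic example)
x = np.zeros(64); x[20] = 1.0; x[40] = 0.5
psf = np.exp(-0.5 * ((np.arange(64) - 32) / 2.0) ** 2); psf /= psf.sum()
blurred = np.fft.irfft(np.fft.rfft(x) * np.fft.rfft(np.fft.ifftshift(psf)), 64)
restored = richardson_lucy(blurred, psf, n_iter=100)
```

The multiplicative update keeps the estimate nonnegative for nonnegative data, which is one reason RL is preferred over direct Wiener inversion in this setting.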

  3. A joint Richardson-Lucy deconvolution algorithm for the reconstruction of multifocal structured illumination microscopy data.

    PubMed

    Ströhl, Florian; Kaminski, Clemens F

    2015-01-16

    We demonstrate the reconstruction of images obtained by multifocal structured illumination microscopy, MSIM, using a joint Richardson-Lucy, jRL-MSIM, deconvolution algorithm, which is based on an underlying widefield image-formation model. The method is efficient in the suppression of out-of-focus light and greatly improves image contrast and resolution. Furthermore, it is particularly well suited for the processing of noise corrupted data. The principle is verified on simulated as well as experimental data and a comparison of the jRL-MSIM approach with the standard reconstruction procedure, which is based on image scanning microscopy, ISM, is made. Our algorithm is efficient and freely available in a user friendly software package.

  4. A joint Richardson—Lucy deconvolution algorithm for the reconstruction of multifocal structured illumination microscopy data

    NASA Astrophysics Data System (ADS)

    Ströhl, Florian; Kaminski, Clemens F.

    2015-03-01

    We demonstrate the reconstruction of images obtained by multifocal structured illumination microscopy, MSIM, using a joint Richardson-Lucy, jRL-MSIM, deconvolution algorithm, which is based on an underlying widefield image-formation model. The method is efficient in the suppression of out-of-focus light and greatly improves image contrast and resolution. Furthermore, it is particularly well suited for the processing of noise corrupted data. The principle is verified on simulated as well as experimental data and a comparison of the jRL-MSIM approach with the standard reconstruction procedure, which is based on image scanning microscopy, ISM, is made. Our algorithm is efficient and freely available in a user friendly software package.

  5. Optimal 2D-SIM reconstruction by two filtering steps with Richardson-Lucy deconvolution

    NASA Astrophysics Data System (ADS)

    Perez, Victor; Chang, Bo-Jui; Stelzer, Ernst Hans Karl

    2016-11-01

    Structured illumination microscopy relies on reconstruction algorithms to yield super-resolution images. Artifacts can arise in the reconstruction and affect the image quality. Current reconstruction methods involve a parametrized apodization function and a Wiener filter. Empirically tuning the parameters in these functions can minimize artifacts, but such an approach is subjective and produces volatile results. We present a robust and objective method that yields optimal results by two straightforward filtering steps with Richardson-Lucy-based deconvolutions. We provide a resource to identify artifacts in 2D-SIM images by analyzing two main reasons for artifacts, out-of-focus background and a fluctuating reconstruction spectrum. We show how the filtering steps improve images of test specimens, microtubules, yeast and mammalian cells.

  6. Data matching for free-surface multiple attenuation by multidimensional deconvolution

    NASA Astrophysics Data System (ADS)

    van der Neut, Joost; Frijlink, Martijn; van Borselen, Roald

    2012-09-01

    A common strategy for surface-related multiple elimination of seismic data is to predict multiples by a convolutional model and subtract these adaptively from the input gathers. Problems can be posed by interfering multiples and primaries. Removing multiples by multidimensional deconvolution (MDD) (inversion) does not suffer from these problems. However, this approach requires data to be consistent, which is often not the case, especially not at interpolated near-offsets. A novel method is proposed to improve data consistency prior to inversion. This is done by backpropagating first-order multiples with a time-gated reference primary event and matching these with early primaries in the input gather. After data matching, multiple elimination by MDD can be applied with a deterministic inversion scheme.
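The predict-and-subtract strategy that MDD is contrasted with can be sketched as a least-squares shaping filter applied to a single trace; the wavelet, lags and amplitudes below are invented for illustration.

```python
import numpy as np

def adaptive_subtract(data, predicted, filt_len=11):
    """Least-squares adaptive subtraction sketch: find a short shaping
    filter f so that data - (f * predicted) has minimum energy, then
    subtract the shaped multiple prediction (single-trace toy version)."""
    n = len(data)
    # columns are delayed copies of the predicted multiples
    A = np.stack([np.concatenate([np.zeros(k), predicted])[:n]
                  for k in range(filt_len)], axis=1)
    f, *_ = np.linalg.lstsq(A, data, rcond=None)
    return data - A @ f

# synthetic gather: primary spike plus a multiple; the prediction is
# mis-scaled and mis-timed, as convolutional predictions typically are
wav = np.exp(-0.5 * (np.arange(7) - 3.0) ** 2)
data = np.zeros(128); data[30] = 1.0           # primary
data[60:67] += 0.8 * wav                       # multiple
pred = np.zeros(128); pred[58:65] += 0.5 * wav # predicted multiple
out = adaptive_subtract(data, pred)
```

When primary and multiple interfere in time, the shaping filter starts attacking the primary as well, which is exactly the failure mode that motivates the MDD (inversion) alternative.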

  7. Reply to Comment: 'A novel method for fast and robust estimation of fluorescence decay dynamics using constrained least-square deconvolution with Laguerre expansion'.

    PubMed

    Ma, Dinglong; Liu, Jing; Qi, Jinyi; Marcu, Laura

    2017-02-21

    In this response we underscore that the instrumentation described in the original publication (Liu et al 2012 Phys. Med. Biol. 57 843-65) was based on pulse-sampling technique, while the comment by Zhang et al is based on the assumption that a time-correlated single photon counting (TCSPC) instrumentation was used. Therefore the arguments made in the comment are not applicable to the noise model reported by Liu et al. As reported in the literature (Lakowicz 2006 Principles of Fluorescence Spectroscopy (New York: Springer)), while in the TCSPC the experimental noise can be estimated from Poisson statistics, such an assumption is not valid for pulse-sampling (transient recording) techniques. To further clarify this aspect, we present here a comprehensive noise model describing the signal and noise propagation of the pulse sampling time-resolved fluorescence detection. Experimental data recorded in various conditions are analyzed as a case study to demonstrate the noise model of our instrumental system. In addition, regarding the statement of correcting equation (3) in Liu et al (2012 Phys. Med. Biol. 57 843-65), the notation of discrete time Laguerre function in the original publication was clear and consistent with literature conventions (Marmarelis 1993 Ann. Biomed. Eng. 21 573-89, Westwick and Kearney 2003 Identification of Nonlinear Physiological Systems (Hoboken, NJ: Wiley)). Thus, it does not require revision.

  8. Reply to Comment: ‘A novel method for fast and robust estimation of fluorescence decay dynamics using constrained least-square deconvolution with Laguerre expansion’

    NASA Astrophysics Data System (ADS)

    Ma, Dinglong; Liu, Jing; Qi, Jinyi; Marcu, Laura

    2017-02-01

    In this response we underscore that the instrumentation described in the original publication (Liu et al 2012 Phys. Med. Biol. 57 843-65) was based on pulse-sampling technique, while the comment by Zhang et al is based on the assumption that a time-correlated single photon counting (TCSPC) instrumentation was used. Therefore the arguments made in the comment are not applicable to the noise model reported by Liu et al. As reported in the literature (Lakowicz 2006 Principles of Fluorescence Spectroscopy (New York: Springer)), while in the TCSPC the experimental noise can be estimated from Poisson statistics, such an assumption is not valid for pulse-sampling (transient recording) techniques. To further clarify this aspect, we present here a comprehensive noise model describing the signal and noise propagation of the pulse sampling time-resolved fluorescence detection. Experimental data recorded in various conditions are analyzed as a case study to demonstrate the noise model of our instrumental system. In addition, regarding the statement of correcting equation (3) in Liu et al (2012 Phys. Med. Biol. 57 843-65), the notation of discrete time Laguerre function in the original publication was clear and consistent with literature conventions (Marmarelis 1993 Ann. Biomed. Eng. 21 573-89, Westwick and Kearney 2003 Identification of Nonlinear Physiological Systems (Hoboken, NJ: Wiley)). Thus, it does not require revision.

  9. Discovery of a New Companion and Evidence of a Circumprimary Disk: Adaptive Optics Imaging of the Young Multiple System VW Chamaeleon

    NASA Astrophysics Data System (ADS)

    Brandeker, Alexis; Liseau, René; Artymowicz, Pawel; Jayawardhana, Ray

    2001-11-01

Since a majority of young low-mass stars are members of multiple systems, the study of their stellar and disk configurations is crucial to our understanding of both star and planet formation processes. Here we present near-infrared adaptive optics observations of the young multiple star system VW Chamaeleon. The previously known 0.7" binary is already clearly resolved in our raw J- and K-band images. We report the discovery of a new faint companion to the secondary, at an apparent separation of only 0.1", or 16 AU. Our high-resolution photometric observations also make it possible to measure the J-K colors of each of the three components individually. We detect an infrared excess in the primary, consistent with theoretical models of a circumprimary disk. Analytical and numerical calculations of orbital stability show that VW Cha may be a stable triple system. Using models for the age and total mass of the secondary pair, we estimate the orbital period to be 74 yr. Thus, follow-up astrometric observations might yield direct dynamical masses within a few years and constrain evolutionary models of low-mass stars. Our results demonstrate that adaptive optics imaging in conjunction with deconvolution techniques is a powerful tool for probing close multiple systems. Based on observations collected at the European Southern Observatory, Chile.

  10. Age mapping and dating of monazite on the electron microprobe: Deconvoluting multistage tectonic histories

    NASA Astrophysics Data System (ADS)

    Williams, Michael L.; Jercinovic, Michael J.; Terry, Michael P.

    1999-11-01

    High-resolution X-ray mapping and dating of monazite on the electron microprobe are powerful geochronological tools for structural, metamorphic, and tectonic analysis. X-ray maps commonly show complex Th, U, and Pb zoning that reflects monazite growth and overgrowth events. Age maps constructed from the X-ray maps simplify the zoning and highlight age domains. Microprobe dating offers a rapid, in situ method for estimating ages of mapped domains. Application of these techniques has placed new constraints on the tectonic history of three areas. In western Canada, age mapping has revealed multiphase monazite, with older cores and younger rims, included in syntectonic garnet. Microprobe ages show that tectonism occurred ca. 1.9 Ga, 700 m.y. later than mylonitization in the adjacent Snowbird tectonic zone. In New Mexico, age mapping and dating show that the dominant fabric and triple-point metamorphism occurred during a 1.4 Ga reactivation, not during the 1.7 Ga Yavapai-Mazatzal orogeny. In Norway, monazite inclusions in garnet constrain high-pressure metamorphism to ca. 405 Ma, and older cores indicate a previously unrecognized component of ca. 1.0 Ga monazite. In all three areas, microprobe dating and age mapping have provided a critical textural context for geochronologic data and a better understanding of the complex age spectra of these multistage orogenic belts.
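Microprobe ("chemical") monazite dating rests on solving the standard total-Pb growth equation for age, given the measured Th, U and Pb concentrations of a mapped domain. A minimal sketch of that inversion, with hypothetical concentrations chosen to echo the ca. 1.9 Ga example:

```python
import math
from scipy.optimize import brentq

# decay constants, 1/yr
L232, L235, L238 = 4.9475e-11, 9.8485e-10, 1.55125e-10

def radiogenic_pb(t, th_ppm, u_ppm):
    """Pb (ppm) grown in t years from Th and U (ppm); standard isotopic
    abundances and masses for 232Th, 238U, 235U and their Pb daughters."""
    return (th_ppm / 232.04 * (math.exp(L232 * t) - 1.0) * 208.0
            + u_ppm * 0.9928 / 238.03 * (math.exp(L238 * t) - 1.0) * 206.0
            + u_ppm * 0.0072 / 238.03 * (math.exp(L235 * t) - 1.0) * 207.0)

def chemical_age(th_ppm, u_ppm, pb_ppm):
    """Solve radiogenic_pb(t) = measured Pb for t (Pb grows monotonically)."""
    return brentq(lambda t: radiogenic_pb(t, th_ppm, u_ppm) - pb_ppm, 1e3, 4.5e9)

# hypothetical core domain that grew at ~1.9 Ga
pb = radiogenic_pb(1.9e9, 45000.0, 3000.0)
age = chemical_age(45000.0, 3000.0, pb)
```

Because each mapped domain is dated independently, cores and rims of a single grain yield separate ages, which is what lets the X-ray maps deconvolute multistage histories.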

  11. Dynamic gamma knife radiosurgery

    NASA Astrophysics Data System (ADS)

    Luan, Shuang; Swanson, Nathan; Chen, Zhe; Ma, Lijun

    2009-03-01

    Gamma knife has been the treatment of choice for various brain tumors and functional disorders. Current gamma knife radiosurgery is planned in a 'ball-packing' approach and delivered in a 'step-and-shoot' manner, i.e. it aims to 'pack' the different sized spherical high-dose volumes (called 'shots') into a tumor volume. We have developed a dynamic scheme for gamma knife radiosurgery based on the concept of 'dose-painting' to take advantage of the new robotic patient positioning system on the latest Gamma Knife C™ and Perfexion™ units. In our scheme, the spherical high dose volume created by the gamma knife unit will be viewed as a 3D spherical 'paintbrush', and treatment planning reduces to finding the best route of this 'paintbrush' to 'paint' a 3D tumor volume. Under our dose-painting concept, gamma knife radiosurgery becomes dynamic, where the patient moves continuously under the robotic positioning system. We have implemented a fully automatic dynamic gamma knife radiosurgery treatment planning system, where the inverse planning problem is solved as a traveling salesman problem combined with constrained least-square optimizations. We have also carried out experimental studies of dynamic gamma knife radiosurgery and showed the following. (1) Dynamic gamma knife radiosurgery is ideally suited for fully automatic inverse planning, where high quality radiosurgery plans can be obtained in minutes of computation. (2) Dynamic radiosurgery plans are more conformal than step-and-shoot plans and can maintain a steep dose gradient (around 13% per mm) between the target tumor volume and the surrounding critical structures. (3) It is possible to prescribe multiple isodose lines with dynamic gamma knife radiosurgery, so that the treatment can cover the periphery of the target volume while escalating the dose for high tumor burden regions. 
(4) With dynamic gamma knife radiosurgery, one can obtain a family of plans representing a tradeoff between the delivery time and the dose distributions, thus giving the clinician one more dimension of flexibility of choosing a plan based on the clinical situations.
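The constrained least-squares part of such inverse planning can be sketched in one dimension: given dose kernels for a spherical "paintbrush" at candidate stops along a route, nonnegative least squares assigns dwell times. The geometry, kernel width and prescription below are invented for illustration (the paper additionally solves a traveling-salesman problem for the route itself).

```python
import numpy as np
from scipy.optimize import nnls

# Toy 1D "dose painting": Gaussian paintbrush kernels at candidate stop
# points; nonnegative least squares picks dwell times (hypothetical setup).
grid = np.linspace(0.0, 30.0, 121)                     # mm
stops = np.linspace(5.0, 25.0, 15)                     # brush positions on route
A = np.exp(-0.5 * ((grid[:, None] - stops[None, :]) / 3.0) ** 2)  # dose per dwell
target = ((grid > 8.0) & (grid < 22.0)).astype(float)  # prescription
dwell, resid = nnls(A, target)                         # dwell times >= 0
dose = A @ dwell
```

The nonnegativity constraint is what makes the plan physically deliverable: dwell times cannot be negative, so the optimizer trades conformity at the target edges against spill into surrounding tissue.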

  12. Detection of high-risk atherosclerotic lesions by time-resolved fluorescence spectroscopy based on the Laguerre deconvolution technique

    NASA Astrophysics Data System (ADS)

    Jo, J. A.; Fang, Q.; Papaioannou, T.; Qiao, J. H.; Fishbein, M. C.; Beseth, B.; Dorafshar, A. H.; Reil, T.; Baker, D.; Freischlag, J.; Marcu, L.

    2006-02-01

This study introduces new methods of time-resolved laser-induced fluorescence spectroscopy (TR-LIFS) data analysis for tissue characterization. These analytical methods were applied for the detection of atherosclerotic vulnerable plaques. Upon pulsed nitrogen laser (337 nm, 1 ns) excitation, TR-LIFS measurements were obtained from carotid atherosclerotic plaque specimens (57 endarterectomy patients) at 492 distinct areas. The emission was both spectrally- (360-600 nm range at 5 nm intervals) and temporally- (0.3 ns resolution) resolved using a prototype clinically compatible fiber-optic catheter TR-LIFS apparatus. The TR-LIFS measurements were subsequently analyzed using a standard multiexponential deconvolution and a recently introduced Laguerre deconvolution technique. Based on their histopathology, the lesions were classified as early (thin intima), fibrotic (collagen-rich intima), and high-risk (thin cap over necrotic core and/or inflamed intima). Stepwise linear discriminant analysis (SLDA) was applied for lesion classification. Normalized spectral intensity values and Laguerre expansion coefficients (LEC) at discrete emission wavelengths (390, 450, 500 and 550 nm) were used as features for classification. The Laguerre-based SLDA classifier provided discrimination of high-risk lesions with high sensitivity (SE>81%) and specificity (SP>95%). Based on these findings, we believe that TR-LIFS information derived from the Laguerre expansion coefficients can provide a valuable additional dimension for the diagnosis of high-risk vulnerable atherosclerotic plaques.
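Laguerre deconvolution expands the fluorescence impulse response on a discrete Laguerre basis and fits the expansion coefficients by least squares against the convolved measurement. A sketch under assumed values of the Laguerre parameter, basis order, decay constant and instrument response (none taken from the paper):

```python
import numpy as np
from scipy.signal import lfilter

def laguerre_basis(n_points, order, alpha):
    """Discrete-time Laguerre functions from their z-domain cascade:
    B0(z) = sqrt(1-a)/(1 - sqrt(a) z^-1), then repeated all-pass
    sections (z^-1 - sqrt(a))/(1 - sqrt(a) z^-1)."""
    sa = np.sqrt(alpha)
    imp = np.zeros(n_points); imp[0] = 1.0
    b = lfilter([np.sqrt(1.0 - alpha)], [1.0, -sa], imp)
    basis = [b]
    for _ in range(1, order):
        b = lfilter([-sa, 1.0], [1.0, -sa], b)
        basis.append(b)
    return np.array(basis)

def laguerre_fit(measured, instrument, basis):
    """LS Laguerre coefficients of the decay; measured = instrument (*) decay."""
    V = np.stack([np.convolve(instrument, b)[:len(measured)] for b in basis], axis=1)
    coeffs, *_ = np.linalg.lstsq(V, measured, rcond=None)
    return coeffs, V @ coeffs

# synthetic example: single-exponential decay seen through a Gaussian IRF
n = 200; t = np.arange(n)
decay = np.exp(-t / 20.0)
irf = np.exp(-0.5 * ((t - 10.0) / 2.0) ** 2)
measured = np.convolve(irf, decay)[:n]
coeffs, fit = laguerre_fit(measured, irf, laguerre_basis(n, 6, 0.85))
```

The expansion coefficients themselves (rather than fitted lifetimes) are what serve as classification features in the study, which avoids the instability of multiexponential fitting.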

  13. Deconvolution enhanced direction of arrival estimation using one- and three-component seismic arrays applied to ocean induced microseisms

    NASA Astrophysics Data System (ADS)

    Gal, M.; Reading, A. M.; Ellingsen, S. P.; Koper, K. D.; Burlacu, R.; Gibbons, S. J.

    2016-07-01

Microseisms in the period range of 2-10 s are generated in deep oceans and near coastal regions. It is common for microseisms from multiple sources to arrive at the same time at a given seismometer. It is therefore desirable to be able to measure multiple slowness vectors accurately. Popular ways to estimate the direction of arrival of ocean induced microseisms are the conventional (fk) or adaptive (Capon) beamformer. These techniques give robust estimates, but are limited in their resolution capabilities and hence do not always detect all arrivals. One of the limiting factors in determining direction of arrival with seismic arrays is the array response, which can strongly influence the estimation of weaker sources. In this work, we aim to improve the resolution for weaker sources and evaluate the performance of two deconvolution algorithms, Richardson-Lucy deconvolution and a new implementation of CLEAN-PSF. The algorithms are tested with three arrays of different aperture (ASAR, WRA and NORSAR) using 1 month of real data each and compared with the conventional approaches. We find an improvement over conventional methods from both algorithms and the best performance with CLEAN-PSF. We then extend the CLEAN-PSF framework to three components (3C) and evaluate 1 yr of data from the Pilbara Seismic Array in northwest Australia. The 3C CLEAN-PSF analysis is capable of resolving a previously undetected Sn phase.
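The CLEAN-PSF idea, iteratively subtracting the array response at the current spectral peak, can be sketched in one dimension with a shift-invariant beam (a simplification; the real array response varies across the slowness plane):

```python
import numpy as np

def clean_1d(dirty, beam, gain=0.1, n_iter=500, threshold=1e-3):
    """CLEAN sketch: repeatedly subtract a fraction (gain) of the beam,
    centered at the current peak of the dirty spectrum, accumulating the
    removed amplitudes in a point-source model."""
    residual = np.asarray(dirty, dtype=float).copy()
    model = np.zeros_like(residual)
    center = int(np.argmax(beam))
    for _ in range(n_iter):
        p = int(np.argmax(residual))
        if residual[p] < threshold:
            break
        amp = gain * residual[p]
        model[p] += amp
        residual -= amp * np.roll(beam, p - center)
    return model, residual

# synthetic dirty spectrum: two sources blurred by a Gaussian "beam"
nbins = 64
beam = np.exp(-0.5 * ((np.arange(nbins) - 32) / 3.0) ** 2)
dirty = 1.0 * np.roll(beam, 20 - 32) + 0.5 * np.roll(beam, 40 - 32)
model, residual = clean_1d(dirty, beam)
```

The small gain is the usual CLEAN safeguard: it keeps overlapping beam sidelobes from being mistaken for additional arrivals.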

  14. Microseismic source locations with deconvolution migration

    NASA Astrophysics Data System (ADS)

    Wu, Shaojiang; Wang, Yibo; Zheng, Yikang; Chang, Xu

    2018-03-01

Identifying and locating microseismic events are critical problems in hydraulic fracturing monitoring for unconventional resources exploration. In contrast to active seismic data, microseismic data are usually recorded with unknown source excitation time and source location. In this study, we introduce deconvolution migration by combining deconvolution interferometry with interferometric cross-correlation migration (CCM). This method avoids the need for the source excitation time and enhances both the spatial resolution and robustness by eliminating the square term of the source wavelets from CCM. The proposed algorithm is divided into the following three steps: (1) generate the virtual gathers by deconvolving the master trace with all other traces in the microseismic gather to remove the unknown excitation time; (2) migrate the virtual gather to obtain a single image of the source location and (3) stack all of these images together to get the final estimate of the source location. We test the proposed method on complex synthetic data and a field data set from surface hydraulic fracturing monitoring, and compare the results with those obtained by interferometric CCM. The results demonstrate that the proposed method can obtain a 50 per cent higher spatial resolution image of the source location, and a more robust estimate with smaller localization errors, especially in the presence of velocity model errors. This method is also beneficial for source mechanism inversion and global seismology applications.
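Step (1), deconvolving the master trace from the other traces, can be sketched as regularized (water-level) spectral division; the wavelet shape and arrival lags below are synthetic assumptions.

```python
import numpy as np

def deconvolve_trace(trace, master, eps=0.01):
    """Deconvolution interferometry sketch: frequency-domain division of a
    receiver trace by the master trace, with water-level regularization.
    The common source wavelet and the unknown excitation time cancel,
    leaving a virtual trace peaked at the differential traveltime."""
    n = len(trace)
    T = np.fft.rfft(trace, 2 * n)       # zero-pad to avoid wraparound
    M = np.fft.rfft(master, 2 * n)
    denom = np.abs(M) ** 2
    denom = np.maximum(denom, eps * denom.max())
    virt = np.fft.irfft(T * np.conj(M) / denom, 2 * n)
    return virt[:n]

# synthetic example: same wavelet arriving 20 samples later at the receiver
n = 256
w = np.exp(-0.5 * ((np.arange(n) - 8.0) / 2.0) ** 2)
master = np.roll(w, 50)
trace = np.roll(w, 70)
virt = deconvolve_trace(trace, master)
```

The deconvolution squeezes the wavelet energy into a band-limited spike at the lag, which is what removes the squared-wavelet term that blurs cross-correlation migration.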

  15. Temporal and spatial binning of TCSPC data to improve signal-to-noise ratio and imaging speed

    NASA Astrophysics Data System (ADS)

    Walsh, Alex J.; Beier, Hope T.

    2016-03-01

Time-correlated single photon counting (TCSPC) is the most robust method for fluorescence lifetime imaging with laser scanning microscopes. However, TCSPC is inherently slow and poorly suited to capturing rapid events: at most one photon is recorded per laser pulse, which imposes long acquisition times, and the detected fluorescence rate must be kept low to avoid biasing measurements toward short lifetimes. Furthermore, thousands of photons per pixel are required for traditional instrument-response deconvolution and fluorescence-lifetime exponential-decay estimation. Instrument-response deconvolution and fluorescence exponential-decay estimation can be performed in several ways, including iterative least-squares minimization and Laguerre deconvolution. This paper compares the limitations and accuracy of these fluorescence-decay analysis techniques in estimating double-exponential decays across many data characteristics, including various lifetime values, lifetime component weights, signal-to-noise ratios, and numbers of photons detected. Furthermore, techniques to improve data fitting, including binning data temporally and spatially, are evaluated as methods to improve decay fits and reduce image acquisition time. Simulation results demonstrate that binning temporally to 36 or 42 time bins improves the accuracy of fits for low-photon-count data. Such a technique reduces the number of photons required for accurate component estimation when lifetime values are known, as for commercial fluorescent dyes and FRET experiments, and improves imaging speed 10-fold.
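The effect of temporal binning before a double-exponential fit can be sketched as follows; the lifetimes, weights, count levels and the 32-bin choice are illustrative (the simulations above favor 36-42 bins; 32 simply keeps the reshape exact).

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, A, a, tau1, tau2):
    """Double-exponential fluorescence decay model."""
    return A * (a * np.exp(-t / tau1) + (1.0 - a) * np.exp(-t / tau2))

rng = np.random.default_rng(1)
t_full = np.linspace(0.0, 12.5, 256)                # ns; 256 raw TCSPC time bins
counts = rng.poisson(400.0 * biexp(t_full, 1.0, 0.6, 0.4, 2.5))  # Poisson photons

# temporal binning 256 -> 32 bins raises the per-bin SNR before fitting
t_binned = t_full.reshape(32, 8).mean(axis=1)
c_binned = counts.reshape(32, 8).mean(axis=1)
popt, _ = curve_fit(biexp, t_binned, c_binned / c_binned.max(),
                    p0=(1.0, 0.5, 0.5, 2.0),
                    bounds=([0.1, 0.0, 0.05, 0.5], [10.0, 1.0, 1.5, 8.0]))
```

Binning trades time resolution for counts per bin; the short-lifetime component is the first casualty when the binned interval approaches its lifetime, which is why the optimal bin count is finite.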

  16. Wellskins and slug tests: where's the bias?

    NASA Astrophysics Data System (ADS)

    Rovey, C. W.; Niemann, W. L.

    2001-03-01

Pumping tests in an outwash sand at the Camp Dodge Site give hydraulic conductivities (K) approximately seven times greater than conventional slug tests in the same wells. To determine if this difference is caused by skin bias, we slug tested three sets of wells, each in a progressively greater stage of development. Results were analyzed with both the conventional Bouwer-Rice method and the deconvolution method, which quantifies the skin and eliminates its effects. In 12 undeveloped wells the average skin is +4.0, causing underestimation of conventional slug-test K (Bouwer-Rice method) by approximately a factor of 2 relative to the deconvolution method. In seven nominally developed wells the skin averages just +0.34, and the Bouwer-Rice method gives K within 10% of that calculated with the deconvolution method. The Bouwer-Rice K in this group is also within 5% of that measured by natural-gradient tracer tests at the same site. In 12 intensely developed wells the average skin is <-0.82, consistent with an average skin of -1.7 measured during single-well pumping tests. At this site the maximum possible skin bias is much smaller than the difference between slug and pumping-test Ks. Moreover, the difference in K persists even in intensely developed wells with negative skins. Therefore, positive wellskins do not cause the difference in K between pumping and slug tests at this site.
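The Bouwer-Rice estimate extracts K from the log-linear head recovery, K = rc^2 ln(Re/rw) / (2 Le t) * ln(y0/yt). A sketch that reads the slope of ln(y) versus t from synthetic recovery data; the well dimensions are hypothetical, and ln(Re/rw) is normally taken from the Bouwer-Rice charts rather than passed in directly.

```python
import numpy as np

def bouwer_rice_K(t, y, rc, Le, ln_Re_rw):
    """Bouwer & Rice (1976) slug-test conductivity, evaluated from the
    slope of ln(head displacement) vs time:
        K = rc^2 * ln(Re/rw) / (2 * Le) * (1/t) * ln(y0/yt)
    rc: casing radius, Le: screen length, ln_Re_rw: from the B&R charts."""
    slope = np.polyfit(t, np.log(y), 1)[0]   # slope = -(1/t) ln(y0/yt)
    return rc ** 2 * ln_Re_rw * (-slope) / (2.0 * Le)

# synthetic recovery: y = y0 * exp(-0.04 t) corresponds to K = 1e-4 m/s
# for the hypothetical geometry below
t = np.linspace(0.0, 100.0, 50)              # s
y = 0.3 * np.exp(-0.04 * t)                  # head displacement, m
K = bouwer_rice_K(t, y, rc=0.05, Le=1.0, ln_Re_rw=2.0)
```

A positive skin steepens the apparent early-time recovery locally rather than uniformly, which is why the skin-aware deconvolution method and the plain slope fit can disagree by the factors reported above.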

  17. Time-domain separation of interfering waves in cancellous bone using bandlimited deconvolution: simulation and phantom study.

    PubMed

    Wear, Keith A

    2014-04-01

    In through-transmission interrogation of cancellous bone, two longitudinal pulses ("fast" and "slow" waves) may be generated. Fast and slow wave properties convey information about material and micro-architectural characteristics of bone. However, these properties can be difficult to assess when fast and slow wave pulses overlap in time and frequency domains. In this paper, two methods are applied to decompose signals into fast and slow waves: bandlimited deconvolution and modified least-squares Prony's method with curve-fitting (MLSP + CF). The methods were tested in plastic and Zerdine(®) samples that provided fast and slow wave velocities commensurate with velocities for cancellous bone. Phase velocity estimates were accurate to within 6 m/s (0.4%) (slow wave with both methods and fast wave with MLSP + CF) and 26 m/s (1.2%) (fast wave with bandlimited deconvolution). Midband signal loss estimates were accurate to within 0.2 dB (1.7%) (fast wave with both methods), and 1.0 dB (3.7%) (slow wave with both methods). Similar accuracies were found for simulations based on fast and slow wave parameter values published for cancellous bone. These methods provide sufficient accuracy and precision for many applications in cancellous bone such that experimental error is likely to be a greater limiting factor than estimation error.
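The Prony step models the signal as a sum of damped exponentials and recovers their poles from linear-prediction coefficients. A sketch of the textbook least-squares version (the paper's modified method adds curve fitting on top), on noiseless synthetic data:

```python
import numpy as np

def prony_poles(x, p):
    """Classic least-squares Prony: model x[n] = sum_k A_k z_k^n and
    recover the p poles z_k from the linear-prediction coefficients."""
    N = len(x)
    # linear prediction: x[n] = -(a1 x[n-1] + ... + ap x[n-p])
    A = np.column_stack([x[p - m:N - m] for m in range(1, p + 1)])
    a, *_ = np.linalg.lstsq(A, -x[p:], rcond=None)
    return np.roots(np.concatenate(([1.0], a)))

# two interfering damped exponentials, standing in for overlapping
# "fast" and "slow" wave contributions (synthetic, noiseless)
n = np.arange(60)
x = 2.0 * 0.95 ** n + 1.0 * 0.80 ** n
z = np.sort(prony_poles(x, 2).real)
```

Once the poles are known, the amplitudes follow from a second linear solve against a Vandermonde matrix, which is how the two overlapping components are finally separated.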

  18. Dissipative dark matter halos: The steady state solution

    NASA Astrophysics Data System (ADS)

    Foot, R.

    2018-02-01

    Dissipative dark matter, where dark matter particle properties closely resemble familiar baryonic matter, is considered. Mirror dark matter, which arises from an isomorphic hidden sector, is a specific and theoretically constrained scenario. Other possibilities include models with more generic hidden sectors that contain massless dark photons [unbroken U (1 ) gauge interactions]. Such dark matter not only features dissipative cooling processes but also is assumed to have nontrivial heating sourced by ordinary supernovae (facilitated by the kinetic mixing interaction). The dynamics of dissipative dark matter halos around rotationally supported galaxies, influenced by heating as well as cooling processes, can be modeled by fluid equations. For a sufficiently isolated galaxy with a stable star formation rate, the dissipative dark matter halos are expected to evolve to a steady state configuration which is in hydrostatic equilibrium and where heating and cooling rates locally balance. Here, we take into account the major cooling and heating processes, and numerically solve for the steady state solution under the assumptions of spherical symmetry, negligible dark magnetic fields, and that supernova sourced energy is transported to the halo via dark radiation. For the parameters considered, and assumptions made, we were unable to find a physically realistic solution for the constrained case of mirror dark matter halos. Halo cooling generally exceeds heating at realistic halo mass densities. This problem can be rectified in more generic dissipative dark matter models, and we discuss a specific example in some detail.
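The steady state described above solves coupled fluid equations with local heating-cooling balance. As a much-reduced illustration of the hydrostatic part only, an isothermal, self-gravitating gas can be integrated radially with Euler steps; all values are invented, and the fixed temperature stands in for the paper's heating-cooling balance.

```python
import numpy as np

# Toy hydrostatic halo: isothermal gas in its own gravity (illustrative only)
G, kB, m = 6.674e-11, 1.381e-23, 1.67e-27    # SI units
T = 3e5                                       # assumed halo temperature, K
cs2 = kB * T / m                              # isothermal sound speed squared

r = np.linspace(3e19, 3e21, 20000)            # ~1 kpc to ~100 kpc, m
dr = r[1] - r[0]
rho = np.empty_like(r)
rho[0] = 1e-22                                # assumed inner density, kg/m^3
M_enc = 4.0 / 3.0 * np.pi * r[0] ** 3 * rho[0]
for i in range(len(r) - 1):
    g = G * M_enc / r[i] ** 2                 # inward gravity
    rho[i + 1] = rho[i] * (1.0 - g * dr / cs2)  # d(ln rho)/dr = -g/cs2
    M_enc += 4.0 * np.pi * r[i] ** 2 * rho[i] * dr
```

In the full problem the temperature profile is not prescribed: it is whatever makes supernova-sourced heating balance dissipative cooling at each radius, which is the balance the paper reports cannot be achieved for mirror dark matter parameters.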

  19. New Sediment Data to Constrain Southern Atlantic Holocene Secular Variation

    NASA Astrophysics Data System (ADS)

    Korte, M. C.; Frank, U.; Nowaczyk, N. R.; Frederichs, T.; Brown, M. C.

    2014-12-01

The present day geomagnetic field shows a notable weak zone stretching from South America to southern Africa. This is known as the South Atlantic Anomaly, caused by a growing patch of reversed magnetic flux at the core-mantle boundary. The investigation of the existence and evolution of similar features over the past millennia using global spherical harmonic models is hampered by the fact that at present only very few paleomagnetic data from equatorial and many southern hemisphere regions are available to constrain models well in these regions. Here, we present the results of paleomagnetic investigations of sediment cores from four locations at low latitudes. ODP 1078 and 1079 lie off the coast of Angola, GeoB6517-2 and ODP 1076D are located in the Congo Fan, and M35003-4 is situated southeast of Grenada in the Tobago Basin. In addition to the paleomagnetic work, all cores were subjected to a comprehensive set of rock magnetic measurements. Detailed age models based on radiocarbon dating are available for all locations, since the sites have already been studied in various climatic contexts. We include these new records, together with previously presented data from two Ethiopian locations, in millennial-scale global models of the CALSxk type. Agreement of the new data with previous models and modifications of the models due to the additional data are discussed, focussing in particular on magnetic field structures resembling the present-day South Atlantic Anomaly.

  20. Constraining the phantom braneworld model from cosmic structure sizes

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Sourav; Kousvos, Stefanos R.

    2017-11-01

We consider the phantom braneworld model in the context of the maximum turnaround radius, RTA,max, of a stable, spherical cosmic structure with a given mass. The maximum turnaround radius is the point where the attraction due to the central inhomogeneity is balanced by the repulsion of the ambient dark energy, beyond which a structure cannot hold any mass, thereby giving the maximum upper bound on the size of a stable structure. In this work we derive an analytical expression for RTA,max for this model using cosmological scalar perturbation theory. Using this we numerically constrain the parameter space, including a bulk cosmological constant and the Weyl fluid, from the mass versus observed size data for some nearby, nonvirial cosmic structures. We use different values of the matter density parameter Ωm, both larger and smaller than that of the Λ cold dark matter model, as input in our analysis. We show in particular that (a) with a vanishing bulk cosmological constant the predicted upper bound is always greater than what is actually observed, and a similar conclusion holds if the bulk cosmological constant is negative; (b) if it is positive, the predicted maximum size can go considerably below what is actually observed, and owing to the involved nature of the field equations, this leads to interesting constraints not only on the bulk cosmological constant itself but on the whole parameter space of the theory.
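In the general-relativistic ΛCDM limit the maximum turnaround radius has the closed form RTA,max = (3GM/Λc²)^(1/3), the baseline that the braneworld corrections perturb. A quick evaluation with standard cosmological parameters (the specific H0 and ΩΛ values below are round illustrative choices):

```python
import math

G = 6.674e-11            # m^3 kg^-1 s^-2
c = 2.998e8              # m/s
H0 = 2.2e-18             # s^-1 (~68 km/s/Mpc)
Omega_L = 0.7
Lam = 3.0 * Omega_L * H0 ** 2 / c ** 2      # cosmological constant, m^-2

def r_ta_max(mass_kg):
    """GR/LambdaCDM maximum turnaround radius (3 G M / (Lambda c^2))^(1/3):
    the radius where central attraction G M / R^2 equals the dark-energy
    repulsion (Lambda c^2 / 3) R."""
    return (3.0 * G * mass_kg / (Lam * c ** 2)) ** (1.0 / 3.0)

M_sun, mpc = 1.989e30, 3.086e22
r = r_ta_max(1e15 * M_sun) / mpc            # ~10^15 solar-mass cluster, in Mpc
```

Comparing this bound against observed sizes of nonvirial structures is exactly the test the abstract applies to the braneworld-corrected expression.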
