NASA Astrophysics Data System (ADS)
Krishnan, Karthik; Reddy, Kasireddy V.; Ajani, Bhavya; Yalavarthy, Phaneendra K.
2017-02-01
CT and MR perfusion weighted imaging (PWI) enable quantification of perfusion parameters in stroke studies. These parameters are calculated from the residual impulse response function (IRF) based on a physiological model for tissue perfusion. The standard approach for estimating the IRF is deconvolution using oscillatory-limited singular value decomposition (oSVD) or Frequency Domain Deconvolution (FDD). FDD is widely recognized as the fastest approach currently available for deconvolution of CT Perfusion/MR PWI. In this work, three faster methods are proposed. The first is a direct (model based) crude approximation to the final perfusion quantities (Blood flow, Blood volume, Mean Transit Time and Delay) using the Welch-Satterthwaite approximation for gamma fitted concentration time curves (CTC). The second method is a fast accurate deconvolution method, we call Analytical Fourier Filtering (AFF). The third is another fast accurate deconvolution technique using Showalter's method, we call Analytical Showalter's Spectral Filtering (ASSF). Through systematic evaluation on phantom and clinical data, the proposed methods are shown to be computationally more than twice as fast as FDD. The two deconvolution based methods, AFF and ASSF, are also shown to be quantitatively accurate compared to FDD and oSVD.
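The frequency-domain route to the IRF can be sketched generically: divide the Fourier transform of the tissue concentration-time curve by that of the arterial input, with a regularization term suppressing noise-dominated frequencies. The sketch below is illustrative only; the function names, the Tikhonov-style constant lam, and the perfusion-parameter formulas are assumptions for illustration, not the analytical AFF/ASSF filters of the paper.

```python
import numpy as np

def fourier_deconvolve_irf(ctc, aif, dt, lam=1e-2):
    """Estimate the impulse response function (IRF) from a tissue
    concentration-time curve (ctc) and an arterial input function (aif)
    by regularized division in the Fourier domain (illustrative sketch).
    """
    n = len(ctc)
    C = np.fft.rfft(ctc, n)
    A = np.fft.rfft(aif, n)
    # Regularized inverse filter: suppresses frequencies where |A| is small.
    K = C * np.conj(A) / (np.abs(A) ** 2 + lam * np.max(np.abs(A)) ** 2)
    irf = np.fft.irfft(K, n) / dt          # undo the dt scaling of discrete convolution
    cbf = irf.max()                        # peak of the flow-scaled residue function
    cbv = np.trapz(ctc, dx=dt) / np.trapz(aif, dx=dt)
    mtt = cbv / cbf if cbf > 0 else np.nan # central volume theorem
    return irf, cbf, cbv, mtt
```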
Source Pulse Estimation of Mine Shock by Blind Deconvolution
NASA Astrophysics Data System (ADS)
Makowski, R.
The objective of seismic signal deconvolution is to extract from the signal information concerning the rockmass or the signal at the source of the shock. In the case of blind deconvolution, we have to extract information regarding both quantities. Many deconvolution methods used in prospective seismology were found to be of minor utility when applied to shock-induced signals recorded in the mines of the Lubin Copper District. The lack of effectiveness should be attributed to the inadequacy of the model on which the methods are based with respect to the propagation conditions for that type of signal. Each of the blind deconvolution methods involves a number of assumptions; hence, we may expect reliable results only if these assumptions are fulfilled. Consequently, we had to formulate a different model for the signals recorded in the copper mines of the Lubin District. The model is based on the following assumptions: (1) the signal emitted by the shock source is a short-term signal; (2) the signal-transmitting system (rockmass) constitutes a parallel connection of elementary systems; (3) the elementary systems are of resonant type. Such a model seems to be justified by the geological structure as well as by the positions of the shock foci and seismometers. The results of time-frequency transformation also support the dominance of resonant-type propagation. Making use of the model, a new method for the blind deconvolution of seismic signals has been proposed. The adequacy of the new model, as well as the efficiency of the proposed method, has been confirmed by the results of blind deconvolution. The slight approximation errors obtained with a small number of approximating elements additionally corroborate the adequacy of the model.
2012-02-12
is the total number of data points, is an approximately unbiased estimate of the “expected relative Kullback-Leibler distance” (information loss...possible models). Thus, after each model from Table 2 is fit to a data set, we can compute the Akaike weights for the set of candidate models and use ...computed from the OLS best-fit model solution (top), from a deconvolution of the data using normal curves (middle) and from a deconvolution of the data
NASA Astrophysics Data System (ADS)
Schneiderbauer, Simon; Saeedipour, Mahdi
2018-02-01
Highly resolved two-fluid model (TFM) simulations of gas-solid flows in vertical periodic channels have been performed to study closures for the filtered drag force and the Reynolds-stress-like contribution stemming from the convective terms. An approximate deconvolution model (ADM) for the large-eddy simulation of turbulent gas-solid suspensions is detailed and subsequently used to reconstruct those unresolved contributions in an a priori manner. With such an approach, an approximation of the unfiltered solution is obtained by repeated filtering allowing the determination of the unclosed terms of the filtered equations directly. A priori filtering shows that predictions of the ADM model yield fairly good agreement with the fine grid TFM simulations for various filter sizes and different particle sizes. In particular, strong positive correlation (ρ > 0.98) is observed at intermediate filter sizes for all sub-grid terms. Additionally, our study reveals that the ADM results moderately depend on the choice of the filters, such as box and Gaussian filter, as well as the deconvolution order. The a priori test finally reveals that ADM is superior compared to isotropic functional closures proposed recently [S. Schneiderbauer, "A spatially-averaged two-fluid model for dense large-scale gas-solid flows," AIChE J. 63, 3544-3562 (2017)].
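The core ADM idea, approximating the unfiltered field by repeatedly applying the known filter to the filtered field, can be illustrated with a minimal one-dimensional sketch. The box filter, periodic boundaries, and deconvolution order below are assumptions chosen for illustration, not the authors' setup.

```python
import numpy as np

def box_filter(f, width):
    """Top-hat (box) filter of odd window `width` with periodic boundaries."""
    kernel = np.ones(width) / width
    n = len(f)
    pad = width // 2
    fp = np.concatenate([f[-pad:], f, f[:pad]])
    return np.convolve(fp, kernel, mode="valid")[:n]

def approximate_deconvolution(f_bar, width, order=5):
    """Van Cittert-type approximate deconvolution:
    f* = sum_{k=0..N} (I - G)^k f_bar, where G is the (box) filter
    and `order` is the deconvolution order N.
    """
    correction = f_bar.copy()
    f_star = f_bar.copy()
    for _ in range(order):
        correction = correction - box_filter(correction, width)
        f_star = f_star + correction
    return f_star

# A priori use: reconstruct u* and v*, then estimate an unclosed term such as
# bar(uv) - bar(u)bar(v) by explicitly re-filtering the product u* v*.
```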
A Geophysical Inversion Model Enhancement Technique Based on the Blind Deconvolution
NASA Astrophysics Data System (ADS)
Zuo, B.; Hu, X.; Li, H.
2011-12-01
A model-enhancement technique is proposed to enhance the edges and details of geophysical inversion models without introducing any additional information. Firstly, the theoretical correctness of the proposed geophysical inversion model-enhancement technique is discussed. An inversion MRM (model resolution matrix) convolution method approximating the PSF (point spread function) is designed to demonstrate the correctness of the deconvolution model-enhancement method. Then, a total-variation regularized blind deconvolution algorithm for geophysical inversion model enhancement is proposed. In previous research, Oldenburg et al. demonstrated the connection between the PSF and the geophysical inverse solution. Alumbaugh et al. proposed that more information could be provided by the PSF if we return to the idea of it behaving as an averaging or low-pass filter. We consider the PSF as a low-pass filter to enhance the inversion model, based on the theory of the PSF convolution approximation. Both 1D linear and 2D magnetotelluric inversion examples are used to analyze the validity of the theory and the algorithm. To prove the proposed PSF convolution approximation theory, the 1D linear inversion problem is considered; it shows that the convolution approximation error ratio is only 0.15%. A 2D synthetic model enhancement experiment is also presented. After the deconvolution enhancement, the edges of the conductive prism and the resistive host become sharper, and the enhanced result is closer to the actual model than the original inversion model according to the numerical statistical analysis. Moreover, artifacts in the inversion model are suppressed, and the overall precision of the model increases by 75%. All of the experiments show that the structural details and the numerical precision of the inversion model are significantly improved, especially in the anomalous region. The correlation coefficient between the enhanced inversion model and the actual model is shown in Fig. 1, which illustrates that more information and detailed structure of the actual model are recovered through the proposed enhancement algorithm. Using the proposed enhancement method can help us gain clearer insight into inversion results and make better-informed decisions.
Point spread functions and deconvolution of ultrasonic images.
Dalitz, Christoph; Pohle-Fröhlich, Regina; Michalk, Thorsten
2015-03-01
This article investigates the restoration of ultrasonic pulse-echo C-scan images by means of deconvolution with a point spread function (PSF). The deconvolution concept from linear system theory (LST) is linked to the wave equation formulation of the imaging process, and an analytic formula for the PSF of planar transducers is derived. For this analytic expression, different numerical and analytic approximation schemes for evaluating the PSF are presented. By comparing simulated images with measured C-scan images, we demonstrate that the assumptions of LST in combination with our formula for the PSF are a good model for the pulse-echo imaging process. To reconstruct the object from a C-scan image, we compare different deconvolution schemes: the Wiener filter, the ForWaRD algorithm, and the Richardson-Lucy algorithm. The best results are obtained with the Richardson-Lucy algorithm with total variation regularization. For distances greater than or equal to twice the near-field distance, our experiments show that the numerically computed PSF can be replaced with a simple closed analytic term based on a far-field approximation.
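A minimal sketch of plain Richardson-Lucy deconvolution (without the total-variation term used for the best results above) might look as follows; the iteration count and the use of SciPy's FFT convolution are illustrative choices, and skimage.restoration provides a ready-made richardson_lucy implementation as an alternative.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, num_iter=30, eps=1e-12):
    """Plain Richardson-Lucy deconvolution of a 2D, non-negative image
    (e.g. a C-scan) with a known PSF; no total-variation regularization."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(image, image.mean(), dtype=float)
    for _ in range(num_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / (blurred + eps)          # data / current reblurred estimate
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```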
Evaluation of deconvolution modelling applied to numerical combustion
NASA Astrophysics Data System (ADS)
Mehl, Cédric; Idier, Jérôme; Fiorina, Benoît
2018-01-01
A possible modelling approach in the large eddy simulation (LES) of reactive flows is to deconvolve resolved scalars. Indeed, by inverting the LES filter, scalars such as mass fractions are reconstructed. This information can be used to close budget terms of filtered species balance equations, such as the filtered reaction rate. Being ill-posed in the mathematical sense, the problem is very sensitive to any numerical perturbation. The objective of the present study is to assess the ability of this kind of methodology to capture the chemical structure of premixed flames. For that purpose, three deconvolution methods are tested on a one-dimensional filtered laminar premixed flame configuration: the approximate deconvolution method based on Van Cittert iterative deconvolution, a Taylor decomposition-based method, and the regularised deconvolution method based on the minimisation of a quadratic criterion. These methods are then extended to the reconstruction of subgrid scale profiles. Two methodologies are proposed: the first one relies on subgrid scale interpolation of deconvolved profiles and the second uses parametric functions to describe small scales. Conducted tests analyse the ability of the method to capture the chemical filtered flame structure and front propagation speed. Results show that the deconvolution model should include information about small scales in order to regularise the filter inversion. a priori and a posteriori tests showed that the filtered flame propagation speed and structure cannot be captured if the filter size is too large.
Successive Over-Relaxation Technique for High-Performance Blind Image Deconvolution
2015-06-08
Keywords: deconvolution, space surveillance, Gauss-Seidel iteration. ...sensible approximate solutions to the ill-posed nonlinear inverse problem. These solutions are addressed as fixed points of the iteration, which consists in...alternating approximations (AA) for the object and for the PSF performed with a prescribed number of inner iterative descents from trivial (zero
A neural network approach for the blind deconvolution of turbulent flows
NASA Astrophysics Data System (ADS)
Maulik, R.; San, O.
2017-11-01
We present a single-layer feedforward artificial neural network architecture trained through a supervised learning approach for the deconvolution of flow variables from their coarse grained computations such as those encountered in large eddy simulations. We stress that the deconvolution procedure proposed in this investigation is blind, i.e. the deconvolved field is computed without any pre-existing information about the filtering procedure or kernel. This may be conceptually contrasted to the celebrated approximate deconvolution approaches where a filter shape is predefined for an iterative deconvolution process. We demonstrate that the proposed blind deconvolution network performs exceptionally well in the a-priori testing of both two-dimensional Kraichnan and three-dimensional Kolmogorov turbulence and shows promise in forming the backbone of a physics-augmented data-driven closure for the Navier-Stokes equations.
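The general idea, mapping a local stencil of coarse-grained values to the unfiltered value at the stencil centre with a shallow network, can be sketched as follows. The stencil size, hidden-layer width, and use of scikit-learn's MLPRegressor are illustrative assumptions rather than the architecture of the paper, and the random arrays merely stand in for real filtered/DNS training data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_training_set(u_filtered, u_dns, stencil=3):
    """Pair each point's neighborhood of filtered values (input) with the
    corresponding unfiltered/DNS value (target); 1D periodic example."""
    n = len(u_filtered)
    half = stencil // 2
    X = np.array([[u_filtered[(i + j) % n] for j in range(-half, half + 1)]
                  for i in range(n)])
    return X, u_dns[:n]

# Single hidden layer, as in a shallow feedforward architecture (sizes illustrative).
u_filtered = np.random.rand(512)             # placeholder for coarse-grained data
u_dns = np.random.rand(512)                  # placeholder for resolved reference data
X, y = make_training_set(u_filtered, u_dns)
model = MLPRegressor(hidden_layer_sizes=(40,), activation="tanh", max_iter=2000)
model.fit(X, y)
u_deconvolved = model.predict(X)             # "blind": no filter kernel is supplied
```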
Rosen, I G; Luczak, Susan E; Weiss, Jordan
2014-03-15
We develop a blind deconvolution scheme for input-output systems described by distributed parameter systems with boundary input and output. An abstract functional analytic theory based on results for the linear quadratic control of infinite dimensional systems with unbounded input and output operators is presented. The blind deconvolution problem is then reformulated as a series of constrained linear and nonlinear optimization problems involving infinite dimensional dynamical systems. A finite dimensional approximation and convergence theory is developed. The theory is applied to the problem of estimating blood or breath alcohol concentration (respectively, BAC or BrAC) from biosensor-measured transdermal alcohol concentration (TAC) in the field. A distributed parameter model with boundary input and output is proposed for the transdermal transport of ethanol from the blood through the skin to the sensor. The problem of estimating BAC or BrAC from the TAC data is formulated as a blind deconvolution problem. A scheme to identify distinct drinking episodes in TAC data based on a Hodrick-Prescott filter is discussed. Numerical results involving actual patient data are presented.
Statistical Deconvolution for Superresolution Fluorescence Microscopy
Mukamel, Eran A.; Babcock, Hazen; Zhuang, Xiaowei
2012-01-01
Superresolution microscopy techniques based on the sequential activation of fluorophores can achieve image resolution of ∼10 nm but require a sparse distribution of simultaneously activated fluorophores in the field of view. Image analysis procedures for this approach typically discard data from crowded molecules with overlapping images, wasting valuable image information that is only partly degraded by overlap. A data analysis method that exploits all available fluorescence data, regardless of overlap, could increase the number of molecules processed per frame and thereby accelerate superresolution imaging speed, enabling the study of fast, dynamic biological processes. Here, we present a computational method, referred to as deconvolution-STORM (deconSTORM), which uses iterative image deconvolution in place of single- or multiemitter localization to estimate the sample. DeconSTORM approximates the maximum likelihood sample estimate under a realistic statistical model of fluorescence microscopy movies comprising numerous frames. The model incorporates Poisson-distributed photon-detection noise, the sparse spatial distribution of activated fluorophores, and temporal correlations between consecutive movie frames arising from intermittent fluorophore activation. We first quantitatively validated this approach with simulated fluorescence data and showed that deconSTORM accurately estimates superresolution images even at high densities of activated fluorophores where analysis by single- or multiemitter localization methods fails. We then applied the method to experimental data of cellular structures and demonstrated that deconSTORM enables an approximately fivefold or greater increase in imaging speed by allowing a higher density of activated fluorophores/frame. PMID:22677393
A method of PSF generation for 3D brightfield deconvolution.
Tadrous, P J
2010-02-01
This paper addresses the problem of 3D deconvolution of through focus widefield microscope datasets (Z-stacks). One of the most difficult stages in brightfield deconvolution is finding the point spread function. A theoretically calculated point spread function (called a 'synthetic PSF' in this paper) requires foreknowledge of many system parameters and still gives only approximate results. A point spread function measured from a sub-resolution bead suffers from low signal-to-noise ratio, compounded in the brightfield setting (by contrast to fluorescence) by absorptive, refractive and dispersal effects. This paper describes a method of point spread function estimation based on measurements of a Z-stack through a thin sample. This Z-stack is deconvolved by an idealized point spread function derived from the same Z-stack to yield a point spread function of high signal-to-noise ratio that is also inherently tailored to the imaging system. The theory is validated by a practical experiment comparing the non-blind 3D deconvolution of the yeast Saccharomyces cerevisiae with the point spread function generated using the method presented in this paper (called the 'extracted PSF') to a synthetic point spread function. Restoration of both high- and low-contrast brightfield structures is achieved with fewer artefacts using the extracted point spread function obtained with this method. Furthermore the deconvolution progresses further (more iterations are allowed before the error function reaches its nadir) with the extracted point spread function compared to the synthetic point spread function indicating that the extracted point spread function is a better fit to the brightfield deconvolution model than the synthetic point spread function.
NASA Astrophysics Data System (ADS)
Yang, Yang; Chu, Zhigang; Shen, Linbang; Ping, Guoli; Xu, Zhongming
2018-07-01
Fourier-based deconvolution, which can rapidly clarify acoustic source identification results, has been studied and applied widely for delay-and-sum (DAS) beamforming with two-dimensional (2D) planar arrays. It has, however, not yet been developed for spherical harmonics beamforming (SHB) with three-dimensional (3D) solid spherical arrays. This paper is motivated to settle that problem. Firstly, to determine the effective identification region, the premise of deconvolution, a shift-invariant point spread function (PSF), is analyzed with simulations. For the premise to be satisfied approximately, the opening angle in the elevation dimension of the surface of interest should be small, while no restriction is imposed on the azimuth dimension. Then, two kinds of deconvolution theories are built for SHB using the zero and the periodic boundary conditions, respectively. Both simulations and experiments demonstrate that the periodic boundary condition is superior to the zero one and better fits 3D acoustic source identification with solid spherical arrays. Finally, four deconvolution methods based on the periodic boundary condition are formulated, and their performance is assessed both with simulations and experimentally. All four methods offer enhanced spatial resolution and reduced sidelobe contamination over SHB. The recovered source strength approximates the exact one multiplied by a coefficient that is the square of the focus distance divided by the distance from the source to the array center, while the recovered pressure contribution is scarcely affected by the focus distance, always approximating the exact one.
Partial Deconvolution with Inaccurate Blur Kernel.
Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei
2017-10-17
Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of estimated blur kernel. And partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternatively. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.
A Nonlinear Interactions Approximation Model for Large-Eddy Simulation
NASA Astrophysics Data System (ADS)
Haliloglu, Mehmet U.; Akhavan, Rayhaneh
2003-11-01
A new approach to LES modelling is proposed based on direct approximation of the nonlinear terms $\overline{u_i u_j}$ in the filtered Navier-Stokes equations, instead of the subgrid-scale stress, τ_ij. The proposed model, which we call the Nonlinear Interactions Approximation (NIA) model, uses graded filters and deconvolution to parameterize the local interactions across the LES cutoff, and a Smagorinsky eddy viscosity term to parameterize the distant interactions. A dynamic procedure is used to determine the unknown eddy viscosity coefficient, rendering the model free of adjustable parameters. The proposed NIA model has been applied to LES of turbulent channel flows at Re_τ ≈ 210 and Re_τ ≈ 570. The results show good agreement with DNS not only for the mean and resolved second-order turbulence statistics but also for the full (resolved plus subgrid) Reynolds stress and turbulence intensities.
Application of an NLME-Stochastic Deconvolution Approach to Level A IVIVC Modeling.
Kakhi, Maziar; Suarez-Sharp, Sandra; Shepard, Terry; Chittenden, Jason
2017-07-01
Stochastic deconvolution is a parameter estimation method that calculates drug absorption using a nonlinear mixed-effects model in which the random effects associated with absorption represent a Wiener process. The present work compares (1) stochastic deconvolution and (2) numerical deconvolution, using clinical pharmacokinetic (PK) data generated for an in vitro-in vivo correlation (IVIVC) study of extended release (ER) formulations of a Biopharmaceutics Classification System class III drug substance. The preliminary analysis found that numerical and stochastic deconvolution yielded superimposable fraction absorbed (Fabs) versus time profiles when supplied with exactly the same externally determined unit impulse response parameters. In a separate analysis, a full population-PK/stochastic deconvolution was applied to the clinical PK data. Scenarios were considered in which immediate release (IR) data were either retained or excluded to inform parameter estimation. The resulting Fabs profiles were then used to model level A IVIVCs. All the considered stochastic deconvolution scenarios, and numerical deconvolution, yielded on average similar results with respect to the IVIVC validation. These results could be achieved with stochastic deconvolution without recourse to IR data. Unlike numerical deconvolution, this also implies that in crossover studies where certain individuals do not receive an IR treatment, their ER data alone can still be included as part of the IVIVC analysis. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Li, Feng-Chen; Wang, Lu; Cai, Wei-Hua
2015-07-01
A mixed subgrid-scale (SGS) model based on coherent structures and temporal approximate deconvolution (MCT) is proposed for turbulent drag-reducing flows of viscoelastic fluids. The main idea of the MCT SGS model is to perform spatial filtering for the momentum equation and temporal filtering for the conformation tensor transport equation of turbulent flow of viscoelastic fluid, respectively. The MCT model is suitable for large eddy simulation (LES) of turbulent drag-reducing flows of viscoelastic fluids in engineering applications since the model parameters can be easily obtained. The LES of forced homogeneous isotropic turbulence (FHIT) with polymer additives and turbulent channel flow with surfactant additives based on the MCT SGS model shows excellent agreement with direct numerical simulation (DNS) results. Compared with the LES results obtained using the temporal approximate deconvolution model (TADM) for FHIT with polymer additives, the mixed MCT SGS model behaves better as computational parameters such as the Reynolds number are increased. For scientific and engineering research, turbulent flows at high Reynolds numbers are expected, so the MCT model can be a more suitable model for the LES of turbulent drag-reducing flows of viscoelastic fluid with polymer or surfactant additives. Project supported by the China Postdoctoral Science Foundation (Grant No. 2011M500652), the National Natural Science Foundation of China (Grant Nos. 51276046 and 51206033), and the Specialized Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20112302110020).
Chen, Zhaoxue; Chen, Hao
2014-01-01
A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed with the same continuous GRBF model; image degradation is thus simplified to the convolution of two continuous Gaussian functions, and image deconvolution is converted to calculating the weighted coefficients of two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of restored images. To overcome its drawback of long computation times, graphics processing unit multithreading or an increased spacing of control points is adopted to speed up the implementation of the GRBF method. The experiments show that, based on the continuous GRBF model, image deconvolution can be implemented efficiently by the method, which also provides a useful reference for the study of three-dimensional microscopic image deconvolution.
Using deconvolution to improve the metrological performance of the grid method
NASA Astrophysics Data System (ADS)
Grédiac, Michel; Sur, Frédéric; Badulescu, Claudiu; Mathias, Jean-Denis
2013-06-01
The use of various deconvolution techniques to enhance strain maps obtained with the grid method is addressed in this study. Since phase derivative maps obtained with the grid method can be approximated by their actual counterparts convolved by the envelope of the kernel used to extract phases and phase derivatives, non-blind restoration techniques can be used to perform deconvolution. Six deconvolution techniques are presented and employed to restore a synthetic phase derivative map, namely direct deconvolution, regularized deconvolution, the Richardson-Lucy algorithm and Wiener filtering, the last two with two variants concerning their practical implementations. Obtained results show that the noise that corrupts the grid images must be thoroughly taken into account to limit its effect on the deconvolved strain maps. The difficulty here is that the noise on the grid image yields a spatially correlated noise on the strain maps. In particular, numerical experiments on synthetic data show that direct and regularized deconvolutions are unstable when noisy data are processed. The same remark holds when Wiener filtering is employed without taking into account noise autocorrelation. On the other hand, the Richardson-Lucy algorithm and Wiener filtering with noise autocorrelation provide deconvolved maps where the impact of noise remains controlled within a certain limit. It is also observed that the last technique outperforms the Richardson-Lucy algorithm. Two short examples of actual strain fields restoration are finally shown. They deal with asphalt and shape memory alloy specimens. The benefits and limitations of deconvolution are presented and discussed in these two cases. The main conclusion is that strain maps are correctly deconvolved when the signal-to-noise ratio is high and that actual noise in the actual strain maps must be more specifically characterized than in the current study to address higher noise levels with Wiener filtering.
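As a point of reference, a generic frequency-domain Wiener deconvolution of a map blurred by a known kernel can be written in a few lines. The constant noise-to-signal ratio nsr below is a simplification; the noise-autocorrelation variant that performs best in the study would replace it with a frequency-dependent noise power estimate.

```python
import numpy as np

def wiener_deconvolve(measured, kernel, nsr=1e-2):
    """Generic Wiener deconvolution of a 2D map blurred by a known kernel.
    `nsr` is a constant noise-to-signal power ratio (illustrative); the kernel
    is assumed centred at the origin (apply np.fft.ifftshift first if needed).
    """
    shape = measured.shape
    H = np.fft.fft2(kernel, shape)                # kernel zero-padded to image size
    G = np.fft.fft2(measured)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)       # Wiener inverse filter
    return np.real(np.fft.ifft2(W * G))
```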
Quantitation of small intestinal permeability during normal human drug absorption
2013-01-01
Background: Understanding the quantitative relationship between a drug’s physical chemical properties and its rate of intestinal absorption (QSAR) is critical for selecting candidate drugs. Because of limited experimental human small intestinal permeability data, approximate surrogates such as the fraction absorbed or Caco-2 permeability are used, both of which have limitations. Methods: Given the blood concentration following an oral and intravenous dose, the time course of intestinal absorption in humans was determined by deconvolution and related to the intestinal permeability by the use of a new 3 parameter model function (“Averaged Model” (AM)). The theoretical validity of this AM model was evaluated by comparing it to the standard diffusion-convection model (DC). This analysis was applied to 90 drugs using previously published data. Only drugs that were administered in oral solution form to fasting subjects were considered so that the rate of gastric emptying was approximately known. All the calculations are carried out using the freely available routine PKQuest Java (http://www.pkquest.com) which has an easy-to-use, simple interface. Results: Theoretically, the AM permeability provides an accurate estimate of the intestinal DC permeability for solutes whose absorption ranges from 1% to 99%. The experimental human AM permeabilities determined by deconvolution are similar to those determined by direct human jejunal perfusion. The small intestinal pH varies with position and the results are interpreted in terms of the pH dependent octanol partition. The permeability versus partition relations are presented separately for the uncharged, basic, acidic and charged solutes. The small uncharged solutes caffeine, acetaminophen and antipyrine have very high permeabilities (about 20 × 10^-4 cm/sec) corresponding to an unstirred layer of only 45 μm. The weak acid aspirin also has a large AM permeability despite its low octanol partition at pH 7.4, suggesting that it is nearly completely absorbed in the first part of the intestine where the pH is about 5.4. Conclusions: The AM deconvolution method provides an accurate estimate of the human intestinal permeability. The results for these 90 drugs should provide a useful benchmark for evaluating QSAR models. PMID:23800230
Scalar flux modeling in turbulent flames using iterative deconvolution
NASA Astrophysics Data System (ADS)
Nikolaou, Z. M.; Cant, R. S.; Vervisch, L.
2018-04-01
In the context of large eddy simulations, deconvolution is an attractive alternative for modeling the unclosed terms appearing in the filtered governing equations. Such methods have been used in a number of studies for non-reacting and incompressible flows; however, their application in reacting flows is limited in comparison. Deconvolution methods originate from clearly defined operations, and in theory they can be used in order to model any unclosed term in the filtered equations including the scalar flux. In this study, an iterative deconvolution algorithm is used in order to provide a closure for the scalar flux term in a turbulent premixed flame by explicitly filtering the deconvoluted fields. The assessment of the method is conducted a priori using a three-dimensional direct numerical simulation database of a turbulent freely propagating premixed flame in a canonical configuration. In contrast to most classical a priori studies, the assessment is more stringent as it is performed on a much coarser mesh which is constructed using the filtered fields as obtained from the direct simulations. For the conditions tested in this study, deconvolution is found to provide good estimates both of the scalar flux and of its divergence.
Toxoplasma Modulates Signature Pathways of Human Epilepsy, Neurodegeneration & Cancer.
Ngô, Huân M; Zhou, Ying; Lorenzi, Hernan; Wang, Kai; Kim, Taek-Kyun; Zhou, Yong; El Bissati, Kamal; Mui, Ernest; Fraczek, Laura; Rajagopala, Seesandra V; Roberts, Craig W; Henriquez, Fiona L; Montpetit, Alexandre; Blackwell, Jenefer M; Jamieson, Sarra E; Wheeler, Kelsey; Begeman, Ian J; Naranjo-Galvis, Carlos; Alliey-Rodriguez, Ney; Davis, Roderick G; Soroceanu, Liliana; Cobbs, Charles; Steindler, Dennis A; Boyer, Kenneth; Noble, A Gwendolyn; Swisher, Charles N; Heydemann, Peter T; Rabiah, Peter; Withers, Shawn; Soteropoulos, Patricia; Hood, Leroy; McLeod, Rima
2017-09-13
One third of humans are infected lifelong with the brain-dwelling, protozoan parasite, Toxoplasma gondii. Approximately fifteen million of these have congenital toxoplasmosis. Although neurobehavioral disease is associated with seropositivity, causality is unproven. To better understand what this parasite does to human brains, we performed a comprehensive systems analysis of the infected brain: We identified susceptibility genes for congenital toxoplasmosis in our cohort of infected humans and found these genes are expressed in human brain. Transcriptomic and quantitative proteomic analyses of infected human, primary, neuronal stem and monocytic cells revealed effects on neurodevelopment and plasticity in neural, immune, and endocrine networks. These findings were supported by identification of protein and miRNA biomarkers in sera of ill children reflecting brain damage and T. gondii infection. These data were deconvoluted using three systems biology approaches: "Orbital-deconvolution" elucidated upstream, regulatory pathways interconnecting human susceptibility genes, biomarkers, proteomes, and transcriptomes. "Cluster-deconvolution" revealed visual protein-protein interaction clusters involved in processes affecting brain functions and circuitry, including lipid metabolism, leukocyte migration and olfaction. Finally, "disease-deconvolution" identified associations between the parasite-brain interactions and epilepsy, movement disorders, Alzheimer's disease, and cancer. This "reconstruction-deconvolution" logic provides templates of progenitor cells' potentiating effects, and components affecting human brain parasitism and diseases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merlin, Thibaut, E-mail: thibaut.merlin@telecom-bretagne.eu; Visvikis, Dimitris; Fernandez, Philippe
2015-02-15
Purpose: Partial volume effect (PVE) plays an important role in both qualitative and quantitative PET image accuracy, especially for small structures. A previously proposed voxelwise PVE correction method applied on PET reconstructed images involves the use of Lucy–Richardson deconvolution incorporating wavelet-based denoising to limit the associated propagation of noise. The aim of this study is to incorporate the deconvolution, coupled with the denoising step, directly inside the iterative reconstruction process to further improve PVE correction. Methods: The list-mode ordered subset expectation maximization (OSEM) algorithm has been modified accordingly with the application of the Lucy–Richardson deconvolution algorithm to the current estimation of the image, at each reconstruction iteration. Acquisitions of the NEMA NU2-2001 IQ phantom were performed on a GE DRX PET/CT system to study the impact of incorporating the deconvolution inside the reconstruction [with and without the point spread function (PSF) model] in comparison to its application postreconstruction and to standard iterative reconstruction incorporating the PSF model. The impact of the denoising step was also evaluated. Images were semiquantitatively assessed by studying the trade-off between the intensity recovery and the noise level in the background estimated as relative standard deviation. Qualitative assessments of the developed methods were additionally performed on clinical cases. Results: Incorporating the deconvolution without denoising within the reconstruction achieved superior intensity recovery in comparison to both standard OSEM reconstruction integrating a PSF model and application of the deconvolution algorithm in a postreconstruction process. The addition of the denoising step made it possible to limit the SNR degradation while preserving the intensity recovery. Conclusions: This study demonstrates the feasibility of incorporating the Lucy–Richardson deconvolution associated with a wavelet-based denoising in the reconstruction process to better correct for PVE. Future work includes further evaluations of the proposed method on clinical datasets and the use of improved PSF models.
A new approach to blind deconvolution of astronomical images
NASA Astrophysics Data System (ADS)
Vorontsov, S. V.; Jefferies, S. M.
2017-05-01
We readdress the strategy of finding approximate regularized solutions to the blind deconvolution problem, when both the object and the point-spread function (PSF) have finite support. Our approach consists in addressing fixed points of an iteration in which both the object x and the PSF y are approximated in an alternating manner, discarding the previous approximation for x when updating x (similarly for y), and considering the resultant fixed points as candidates for a sensible solution. Alternating approximations are performed by truncated iterative least-squares descents. The number of descents in the object- and in the PSF-space play a role of two regularization parameters. Selection of appropriate fixed points (which may not be unique) is performed by relaxing the regularization gradually, using the previous fixed point as an initial guess for finding the next one, which brings an approximation of better spatial resolution. We report the results of artificial experiments with noise-free data, targeted at examining the potential capability of the technique to deconvolve images of high complexity. We also show the results obtained with two sets of satellite images acquired using ground-based telescopes with and without adaptive optics compensation. The new approach brings much better results when compared with an alternating minimization technique based on positivity-constrained conjugate gradients, where the iterations stagnate when addressing data of high complexity. In the alternating-approximation step, we examine the performance of three different non-blind iterative deconvolution algorithms. The best results are provided by the non-negativity-constrained successive over-relaxation technique (+SOR) supplemented with an adaptive scheduling of the relaxation parameter. Results of comparable quality are obtained with steepest descents modified by imposing the non-negativity constraint, at the expense of higher numerical costs. The Richardson-Lucy (or expectation-maximization) algorithm fails to locate stable fixed points in our experiments, due apparently to inappropriate regularization properties.
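A schematic version of the alternating-approximation loop is sketched below, with projected steepest descent standing in for the +SOR inner solver; the step size, inner and outer iteration counts, and the restart-from-initial-guess detail are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import fftconvolve

def nonneg_descent(data, fixed, estimate, n_steps=5, lr=1e-3):
    """A few projected steepest-descent steps on ||data - fixed * estimate||^2
    with a non-negativity constraint; stands in for the +SOR inner solver.
    The step size lr must be tuned to the data scaling."""
    for _ in range(n_steps):
        residual = fftconvolve(fixed, estimate, mode="same") - data
        grad = fftconvolve(residual, fixed[::-1, ::-1], mode="same")
        estimate = np.maximum(estimate - lr * grad, 0.0)
    return estimate

def alternating_blind_deconvolution(data, x0, y0, n_outer=20, nx=5, ny=5):
    """Alternate truncated approximations of the object x and the PSF y.
    The inner descent counts nx and ny act as the two regularization parameters."""
    x, y = x0.copy(), y0.copy()
    for _ in range(n_outer):
        x = nonneg_descent(data, y, x0.copy(), n_steps=nx)   # discard previous x
        y = nonneg_descent(data, x, y0.copy(), n_steps=ny)   # discard previous y
    return x, y
```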
A comparison of deconvolution and the Rutland-Patlak plot in parenchymal renal uptake rate.
Al-Shakhrah, Issa A
2012-07-01
Deconvolution and the Rutland-Patlak (R-P) plot are two of the most commonly used methods for analyzing dynamic radionuclide renography. Both methods allow estimation of absolute and relative renal uptake of radiopharmaceutical and of its rate of transit through the kidney. Seventeen patients (32 kidneys) were referred for further evaluation by renal scanning. All patients were positioned supine with their backs to the scintillation gamma camera, so that the kidneys and the heart were both in the field of view. Approximately 5-7 mCi of (99m)Tc-DTPA (diethylenetriamine penta-acetic acid) in about 0.5 ml of saline was injected intravenously and sequential 20-s frames were acquired; the study on each patient lasted approximately 20 min. The time-activity curves of the parenchymal region of interest of each kidney, as well as of the heart, were obtained for analysis. The data were then analyzed with deconvolution and the R-P plot. A strong positive association (n = 32; r = 0.83; R² = 0.68) was found between the values obtained by applying the two methods. Bland-Altman statistical analysis demonstrated that ninety-seven percent of the values in the study (31 of 32 cases) were within the limits of agreement (mean ± 1.96 standard deviations). We believe the R-P analysis method is expected to be more reproducible than the iterative deconvolution method, because the deconvolution technique relies heavily on the accuracy of the first point analyzed, as any errors are carried forward into the calculations of all subsequent points; the R-P technique, based on an initial analysis of the data by means of the R-P plot, can be considered an alternative technique for finding and calculating the renal uptake rate.
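A minimal sketch of the Rutland-Patlak analysis, assuming the heart region-of-interest curve is used as the plasma input as described above, is given below; in practice only the early uptake segment of the plot is fitted, and the function and variable names are illustrative.

```python
import numpy as np

def rutland_patlak_uptake(t, kidney, heart):
    """Rutland-Patlak analysis of a renogram (illustrative sketch).
    Regresses kidney(t)/heart(t) against cumulative-integral(heart)/heart(t);
    the slope estimates the parenchymal uptake rate constant and the intercept
    the vascular fraction. Assumes heart(t) > 0 and numpy-array inputs.
    """
    x = np.array([np.trapz(heart[: i + 1], t[: i + 1]) for i in range(len(t))]) / heart
    y = kidney / heart
    slope, intercept = np.polyfit(x, y, 1)   # straight-line fit of the R-P plot
    return slope, intercept
```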
Non-stationary blind deconvolution of medical ultrasound scans
NASA Astrophysics Data System (ADS)
Michailovich, Oleg V.
2017-03-01
In linear approximation, the formation of a radio-frequency (RF) ultrasound image can be described based on a standard convolution model in which the image is obtained as a result of convolution of the point spread function (PSF) of the ultrasound scanner in use with a tissue reflectivity function (TRF). Due to the band-limited nature of the PSF, the RF images can only be acquired at a finite spatial resolution, which is often insufficient for proper representation of the diagnostic information contained in the TRF. One particular way to alleviate this problem is by means of image deconvolution, which is usually performed in a "blind" mode, when both PSF and TRF are estimated at the same time. Despite its proven effectiveness, blind deconvolution (BD) still suffers from a number of drawbacks, chief among which is its dependence on a stationary convolution model, which is incapable of accounting for the spatial variability of the PSF. As a result, virtually all existing BD algorithms are applied to localized segments of RF images. In this work, we introduce a novel method for non-stationary BD, which is capable of recovering the TRF concurrently with the spatially variable PSF. Particularly, our approach is based on semigroup theory which allows one to describe the effect of such a PSF in terms of the action of a properly defined linear semigroup. The approach leads to a tractable optimization problem, which can be solved using standard numerical methods. The effectiveness of the proposed solution is supported by experiments with in vivo ultrasound data.
Wang, Xiyin; Guo, Hui; Wang, Jinpeng; Lei, Tianyu; Liu, Tao; Wang, Zhenyi; Li, Yuxian; Lee, Tae-Ho; Li, Jingping; Tang, Haibao; Jin, Dianchuan; Paterson, Andrew H
2016-02-01
The 'apparently' simple genomes of many angiosperms mask complex evolutionary histories. The reference genome sequence for cotton (Gossypium spp.) revealed a ploidy change of a complexity unprecedented to date, indeed that could not be distinguished as to its exact dosage. Herein, by developing several comparative, computational and statistical approaches, we revealed a 5× multiplication in the cotton lineage of an ancestral genome common to cotton and cacao, and proposed evolutionary models to show how such a decaploid ancestor formed. The c. 70% gene loss necessary to bring the ancestral decaploid to its current gene count appears to fit an approximate geometrical model; that is, although many genes may be lost by single-gene deletion events, some may be lost in groups of consecutive genes. Gene loss following cotton decaploidy has largely just reduced gene copy numbers of some homologous groups. We designed a novel approach to deconvolute layers of chromosome homology, providing definitive information on gene orthology and paralogy across broad evolutionary distances, both of fundamental value and serving as an important platform to support further studies in and beyond cotton and genomics communities. No claim to original US government works. New Phytologist © 2015 New Phytologist Trust.
Correction Factor for Gaussian Deconvolution of Optically Thick Linewidths in Homogeneous Sources
NASA Technical Reports Server (NTRS)
Kastner, S. O.; Bhatia, A. K.
1999-01-01
Profiles of optically thick, non-Gaussian emission line profiles convoluted with Gaussian instrumental profiles are constructed, and are deconvoluted on the usual Gaussian basis to examine the departure from accuracy thereby caused in "measured" linewidths. It is found that "measured" linewidths underestimate the true linewidths of optically thick lines, by a factor which depends on the resolution factor r ≅ (Doppler width)/(instrumental width) and on the optical thickness τ₀. An approximating expression is obtained for this factor, applicable in the range of at least 0 ≤ τ₀ ≤ 10, which can provide estimates of the true linewidth and optical thickness.
Christian, Eric L; Anderson, Vernon E.; Harris, Michael E
2011-01-01
Quantitative analysis of metal ion-phosphodiester interactions is a significant experimental challenge due to the complexities introduced by inner-sphere, outer-sphere (H-bonding with coordinated water), and electrostatic interactions that are difficult to isolate in solution studies. Here, we provide evidence that inner-sphere, H-bonding and electrostatic interactions between ions and dimethyl phosphate can be deconvoluted through peak fitting in the region of the Raman spectrum for the symmetric stretch of non-bridging phosphate oxygens (νsPO2-). An approximation of the change in vibrational spectra due to different interaction modes is achieved using ions capable of all or a subset of the three forms of metal ion interaction. Contribution of electrostatic interactions to ion-induced changes to the Raman νsPO2- signal could be modeled by monitoring attenuation of νsPO2- in the presence of tetramethylammonium, while contribution of H-bonding and inner-sphere coordination could be approximated from the intensities of altered νsPO2- vibrational modes created by an interaction with ammonia, monovalent or divalent ions. A model is proposed in which discrete spectroscopic signals for inner-sphere, H-bonding, and electrostatic interactions are sufficient to account for the total observed change in νsPO2- signal due to interaction with a specific ion capable of all three modes of interaction. Importantly, the quantitative results are consistent with relative levels of coordination predicted from absolute electronegativity and absolute hardness of alkali and alkaline earth metals. PMID:21334281
Wear, Keith; Liu, Yunbo; Gammell, Paul M; Maruvada, Subha; Harris, Gerald R
2015-01-01
Nonlinear acoustic signals contain significant energy at many harmonic frequencies. For many applications, the sensitivity (frequency response) of a hydrophone will not be uniform over such a broad spectrum. In a continuation of a previous investigation involving deconvolution methodology, deconvolution (implemented in the frequency domain as an inverse filter computed from frequency-dependent hydrophone sensitivity) was investigated for improvement of accuracy and precision of nonlinear acoustic output measurements. Time-delay spectrometry was used to measure complex sensitivities for 6 fiber-optic hydrophones. The hydrophones were then used to measure a pressure wave with rich harmonic content. Spectral asymmetry between compressional and rarefactional segments was exploited to design filters used in conjunction with deconvolution. Complex deconvolution reduced mean bias (for 6 fiber-optic hydrophones) from 163% to 24% for peak compressional pressure (p+), from 113% to 15% for peak rarefactional pressure (p-), and from 126% to 29% for pulse intensity integral (PII). Complex deconvolution reduced mean coefficient of variation (COV) (for 6 fiber-optic hydrophones) from 18% to 11% (p+), 53% to 11% (p-), and 20% to 16% (PII). Deconvolution based on sensitivity magnitude or the minimum phase model also resulted in significant reductions in mean bias and COV of acoustic output parameters but was less effective than direct complex deconvolution for p+ and p-. Therefore, deconvolution with appropriate filtering facilitates reliable nonlinear acoustic output measurements using hydrophones with frequency-dependent sensitivity.
NASA Astrophysics Data System (ADS)
Chang, Yong; Zi, Yanyang; Zhao, Jiyuan; Yang, Zhe; He, Wangpeng; Sun, Hailiang
2017-03-01
In guided wave pipeline inspection, echoes reflected from closely spaced reflectors generally overlap, meaning useful information is lost. To solve the overlapping problem, sparse deconvolution methods have been developed in the past decade. However, conventional sparse deconvolution methods have limitations in handling guided wave signals, because the input signal is directly used as the prototype of the convolution matrix, without considering the waveform change caused by the dispersion properties of the guided wave. In this paper, an adaptive sparse deconvolution (ASD) method is proposed to overcome these limitations. First, the Gaussian echo model is employed to adaptively estimate the column prototype of the convolution matrix instead of directly using the input signal as the prototype. Then, the convolution matrix is constructed upon the estimated results. Third, the split augmented Lagrangian shrinkage (SALSA) algorithm is introduced to solve the deconvolution problem with high computational efficiency. To verify the effectiveness of the proposed method, guided wave signals obtained from pipeline inspection are investigated numerically and experimentally. Compared to conventional sparse deconvolution methods, e.g. the l1-norm deconvolution method, the proposed method shows better performance in handling the echo overlap problem in the guided wave signal.
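The underlying sparse-deconvolution objective can be illustrated with a simple ISTA solver in place of SALSA; the Gaussian echo prototype, regularization weight, and iteration count below are illustrative assumptions rather than the adaptive estimation procedure of the paper.

```python
import numpy as np

def gaussian_echo(n, center, width, freq):
    """Gaussian-windowed tone burst used as the column prototype (illustrative)."""
    t = np.arange(n)
    return np.exp(-((t - center) ** 2) / (2 * width ** 2)) * np.cos(2 * np.pi * freq * (t - center))

def ista_sparse_deconvolution(y, prototype, lam=0.1, n_iter=200):
    """l1-norm (sparse) deconvolution: min ||y - H x||^2 + lam*||x||_1, solved by ISTA.
    H is a circulant convolution matrix whose k-th column is the prototype shifted by k.
    The cited method uses SALSA; ISTA is a simpler stand-in with the same objective."""
    n = len(y)
    H = np.column_stack([np.roll(prototype, k) for k in range(n)])
    L = np.linalg.norm(H, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(n)
    for _ in range(n_iter):
        grad = H.T @ (H @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

# Illustrative use: two overlapping echoes shifted by 40 and 48 samples.
n = 128
prototype = gaussian_echo(n, n // 2, 4.0, 0.12)
y = np.roll(prototype, 40) + 0.8 * np.roll(prototype, 48)
x_sparse = ista_sparse_deconvolution(y, prototype, lam=0.05)   # spikes near 40 and 48
```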
Wellskins and slug tests: where's the bias?
NASA Astrophysics Data System (ADS)
Rovey, C. W.; Niemann, W. L.
2001-03-01
Pumping tests in an outwash sand at the Camp Dodge Site give hydraulic conductivities (K) approximately seven times greater than conventional slug tests in the same wells. To determine if this difference is caused by skin bias, we slug tested three sets of wells, each in a progressively greater stage of development. Results were analyzed with both the conventional Bouwer-Rice method and the deconvolution method, which quantifies the skin and eliminates its effects. In 12 undeveloped wells the average skin is +4.0, causing underestimation of conventional slug-test K (Bouwer-Rice method) by approximately a factor of 2 relative to the deconvolution method. In seven nominally developed wells the skin averages just +0.34, and the Bouwer-Rice method gives K within 10% of that calculated with the deconvolution method. The Bouwer-Rice K in this group is also within 5% of that measured by natural-gradient tracer tests at the same site. In 12 intensely developed wells the average skin is <-0.82, consistent with an average skin of -1.7 measured during single-well pumping tests. At this site the maximum possible skin bias is much smaller than the difference between slug and pumping-test Ks. Moreover, the difference in K persists even in intensely developed wells with negative skins. Therefore, positive wellskins do not cause the difference in K between pumping and slug tests at this site.
Data Dependent Peak Model Based Spectrum Deconvolution for Analysis of High Resolution LC-MS Data
2015-01-01
A data dependent peak model (DDPM) based spectrum deconvolution method was developed for analysis of high resolution LC-MS data. To construct the selected ion chromatogram (XIC), a clustering method, the density based spatial clustering of applications with noise (DBSCAN), is applied to all m/z values of an LC-MS data set to group the m/z values into each XIC. The DBSCAN constructs XICs without the need for a user-defined m/z variation window. After the XIC construction, the peaks of molecular ions in each XIC are detected using both the first and the second derivative tests, followed by an optimized chromatographic peak model selection method for peak deconvolution. A total of six chromatographic peak models are considered, including Gaussian, log-normal, Poisson, gamma, exponentially modified Gaussian, and hybrid of exponential and Gaussian models. The abundant nonoverlapping peaks are chosen to find the optimal peak models that are both data- and retention-time-dependent. Analysis of 18 spiked-in LC-MS data demonstrates that the proposed DDPM spectrum deconvolution method outperforms the traditional method. On average, the DDPM approach not only detected 58 more chromatographic peaks from each of the testing LC-MS data but also improved the retention time and peak area by 3% and 6%, respectively. PMID:24533635
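A minimal sketch of the XIC-construction step, clustering m/z values with DBSCAN so that no user-defined m/z window is needed, might look as follows; the ppm-scaled eps and the min_samples value are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def build_xics(mz, intensity, rt, eps_ppm=10.0):
    """Group the m/z values of an LC-MS run (1D numpy arrays) into extracted ion
    chromatograms (XICs) with DBSCAN, avoiding a fixed m/z tolerance window.
    eps is expressed here in ppm of the median m/z (an illustrative choice)."""
    eps = np.median(mz) * eps_ppm * 1e-6
    labels = DBSCAN(eps=eps, min_samples=5).fit_predict(mz.reshape(-1, 1))
    xics = {}
    for label in set(labels) - {-1}:              # -1 marks DBSCAN noise points
        idx = labels == label
        xics[label] = (rt[idx], intensity[idx])   # chromatogram of one ion species
    return xics
```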
Navarro, Jorge; Ring, Terry A.; Nigg, David W.
2015-03-01
A deconvolution method for a 1"x1" LaBr₃ detector for nondestructive Advanced Test Reactor (ATR) fuel burnup applications was developed. The method consisted of obtaining the detector response function, applying a deconvolution algorithm to simulated 1"x1" LaBr₃ data, and evaluating the effects that deconvolution has on nondestructively determining ATR fuel burnup. The simulated response function of the detector was obtained using MCNPX as well as from experimental data. The Maximum-Likelihood Expectation Maximization (MLEM) deconvolution algorithm was selected to enhance one-isotope source-simulated and fuel-simulated spectra. The final evaluation of the study consisted of measuring the performance of the fuel burnup calibration curve for the convoluted and deconvoluted cases. The methodology was developed to help design a reliable, high-resolution, rugged and robust detection system for the ATR fuel canal capable of collecting high-performance data for model validation, along with a system that can calculate burnup using experimental scintillator detector data.
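A generic MLEM unfolding step for a gamma-ray spectrum, given a detector response matrix such as one computed with MCNPX, can be sketched as follows; the iteration count, starting spectrum, and normalization details are illustrative assumptions, not the implementation used in the study.

```python
import numpy as np

def mlem_deconvolution(measured, response, n_iter=100):
    """Maximum-Likelihood Expectation Maximization (MLEM) unfolding of a spectrum.
    `response` is the detector response matrix R, where R[i, j] is the probability
    that an emission in true-energy bin j is recorded in measured bin i.
    """
    estimate = np.ones(response.shape[1])          # flat starting spectrum
    sensitivity = response.sum(axis=0)             # normalization term R^T 1
    for _ in range(n_iter):
        forward = response @ estimate              # forward-project current estimate
        ratio = measured / np.maximum(forward, 1e-12)
        estimate *= (response.T @ ratio) / np.maximum(sensitivity, 1e-12)
    return estimate
```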
Deconvolution of noisy transient signals: a Kalman filtering application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Candy, J.V.; Zicker, J.E.
The deconvolution of transient signals from noisy measurements is a common problem occurring in various tests at Lawrence Livermore National Laboratory. The transient deconvolution problem places atypical constraints on algorithms presently available. The Schmidt-Kalman filter, a time-varying, tunable predictor, is designed using a piecewise constant model of the transient input signal. A simulation is developed to test the algorithm for various input signal bandwidths and different signal-to-noise ratios for the input and output sequences. The algorithm performance is reasonable.
Automated deconvolution of structured mixtures from heterogeneous tumor genomic data
Roman, Theodore; Xie, Lu
2017-01-01
With increasing appreciation for the extent and importance of intratumor heterogeneity, much attention in cancer research has focused on profiling heterogeneity on a single patient level. Although true single-cell genomic technologies are rapidly improving, they remain too noisy and costly at present for population-level studies. Bulk sequencing remains the standard for population-scale tumor genomics, creating a need for computational tools to separate contributions of multiple tumor clones and assorted stromal and infiltrating cell populations to pooled genomic data. All such methods are limited to coarse approximations of only a few cell subpopulations, however. In prior work, we demonstrated the feasibility of improving cell type deconvolution by taking advantage of substructure in genomic mixtures via a strategy called simplicial complex unmixing. We improve on past work by introducing enhancements to automate learning of substructured genomic mixtures, with specific emphasis on genome-wide copy number variation (CNV) data, as well as the ability to process quantitative RNA expression data, and heterogeneous combinations of RNA and CNV data. We introduce methods for dimensionality estimation to better decompose mixture model substructure; fuzzy clustering to better identify substructure in sparse, noisy data; and automated model inference methods for other key model parameters. We further demonstrate their effectiveness in identifying mixture substructure in true breast cancer CNV data from the Cancer Genome Atlas (TCGA). Source code is available at https://github.com/tedroman/WSCUnmix PMID:29059177
Multi-frame partially saturated images blind deconvolution
NASA Astrophysics Data System (ADS)
Ye, Pengzhao; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting
2016-12-01
When blurred images have saturated or over-exposed pixels, conventional blind deconvolution approaches often fail to estimate an accurate point spread function (PSF) and will introduce local ringing artifacts. In this paper, we propose a method to deal with the problem under a modified multi-frame blind deconvolution framework. First, in the kernel estimation step, a light streak detection scheme using multi-frame blurred images is incorporated into the regularization constraint. Second, we deal with image regions affected by the saturated pixels separately by modeling a weighted matrix during each multi-frame deconvolution iteration. Both synthetic and real-world examples show that more accurate PSFs can be estimated and restored images have richer details and fewer negative effects compared to state-of-the-art methods.
Model-free quantification of dynamic PET data using nonparametric deconvolution
Zanderigo, Francesca; Parsey, Ramin V; Todd Ogden, R
2015-01-01
Dynamic positron emission tomography (PET) data are usually quantified using compartment models (CMs) or derived graphical approaches. Often, however, CMs either do not properly describe the tracer kinetics, or are not identifiable, leading to nonphysiologic estimates of the tracer binding. The PET data are modeled as the convolution of the metabolite-corrected input function and the tracer impulse response function (IRF) in the tissue. Using nonparametric deconvolution methods, it is possible to obtain model-free estimates of the IRF, from which functionals related to tracer volume of distribution and binding may be computed, but this approach has rarely been applied in PET. Here, we apply nonparametric deconvolution using singular value decomposition to simulated and test–retest clinical PET data with four reversible tracers well characterized by CMs ([11C]CUMI-101, [11C]DASB, [11C]PE2I, and [11C]WAY-100635), and systematically compare reproducibility, reliability, and identifiability of various IRF-derived functionals with that of traditional CMs outcomes. Results show that nonparametric deconvolution, completely free of any model assumptions, allows for estimates of tracer volume of distribution and binding that are very close to the estimates obtained with CMs and, in some cases, show better test–retest performance than CMs outcomes. PMID:25873427
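A minimal sketch of the SVD-based deconvolution idea, assuming uniformly sampled data: build the lower-triangular convolution matrix from the metabolite-corrected input function and solve for the IRF with a truncated SVD. The truncation threshold and names are illustrative, not the authors' settings.

```python
import numpy as np
from scipy.linalg import toeplitz

def irf_by_tsvd(tac, input_func, dt, rel_threshold=0.1):
    """Estimate the tissue impulse response function (IRF) from a time-activity
    curve (tac) modeled as tac = dt * conv(input_func, irf), via truncated SVD."""
    n = len(tac)
    A = dt * toeplitz(input_func, np.zeros(n))   # lower-triangular convolution matrix
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > rel_threshold * s[0], 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ tac))

# One common IRF-derived functional is the integral of the IRF
# (related to the volume of distribution), e.g.:
# vt = np.trapz(irf_by_tsvd(tac, cp, dt), dx=dt)
```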
Biomathematical modeling of pulsatile hormone secretion: a historical perspective.
Evans, William S; Farhy, Leon S; Johnson, Michael L
2009-01-01
Shortly after the recognition of the profound physiological significance of the pulsatile nature of hormone secretion, computer-based modeling techniques were introduced for the identification and characterization of such pulses. Whereas these earlier approaches defined perturbations in hormone concentration-time series, deconvolution procedures were subsequently employed to separate such pulses into their secretion event and clearance components. Stochastic differential equation modeling was also used to define basal and pulsatile hormone secretion. To assess the regulation of individual components within a hormone network, a method that quantitated approximate entropy within hormone concentration-time series was described. To define relationships within coupled hormone systems, methods including cross-correlation and cross-approximate entropy were utilized. To address some of the inherent limitations of these methods, modeling techniques with which to appraise the strength of feedback signaling between and among hormone-secreting components of a network have been developed. Techniques such as dynamic modeling have been utilized to reconstruct dose-response interactions between hormones within coupled systems. A logical extension of these advances will require the development of mathematical methods with which to approximate endocrine networks exhibiting multiple feedback interactions and subsequently reconstruct their parameters based on experimental data for the purpose of testing regulatory hypotheses and estimating alterations in hormone release control mechanisms.
Detailed interpretation of aeromagnetic data from the Patagonia Mountains area, southeastern Arizona
Bultman, Mark W.
2015-01-01
Euler deconvolution depth estimates derived from aeromagnetic data with a structural index of 0 show that mapped faults on the northern margin of the Patagonia Mountains generally agree with the depth estimates in the new geologic model. The deconvolution depth estimates also show that the concealed Patagonia Fault southwest of the Patagonia Mountains is more complex than recent geologic mapping represents. Additionally, Euler deconvolution depth estimates with a structural index of 2 locate many potential intrusive bodies that might be associated with known and unknown mineralization.
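For reference, the Euler homogeneity equation that underlies these depth estimates, in the form commonly attributed to Reid et al. (1990): T is the observed total field, B its regional background, (x_0, y_0, z_0) the source position solved for in each data window, and N the structural index (0 for contacts/faults, higher values for more compact sources such as pipes).

```latex
(x - x_0)\,\frac{\partial T}{\partial x}
+ (y - y_0)\,\frac{\partial T}{\partial y}
+ (z - z_0)\,\frac{\partial T}{\partial z}
= N\,(B - T)
```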
An optimized algorithm for multiscale wideband deconvolution of radio astronomical images
NASA Astrophysics Data System (ADS)
Offringa, A. R.; Smirnov, O.
2017-10-01
We describe a new multiscale deconvolution algorithm that can also be used in a multifrequency mode. The algorithm only affects the minor clean loop. In single-frequency mode, the minor loop of our improved multiscale algorithm is over an order of magnitude faster than the casa multiscale algorithm, and produces results of similar quality. For multifrequency deconvolution, a technique named joined-channel cleaning is used. In this mode, the minor loop of our algorithm is two to three orders of magnitude faster than casa msmfs. We extend the multiscale mode with automated scale-dependent masking, which allows structures to be cleaned below the noise. We describe a new scale-bias function for use in multiscale cleaning. We test a second deconvolution method that is a variant of the moresane deconvolution technique, and uses a convex optimization technique with isotropic undecimated wavelets as dictionary. On simple well-calibrated data, the convex optimization algorithm produces visually more representative models. On complex or imperfect data, the convex optimization algorithm has stability issues.
New regularization scheme for blind color image deconvolution
NASA Astrophysics Data System (ADS)
Chen, Li; He, Yu; Yap, Kim-Hui
2011-01-01
This paper proposes a new regularization scheme to address blind color image deconvolution. Color images generally have a significant correlation among the red, green, and blue channels. Conventional blind monochromatic deconvolution algorithms handle each color image channel independently, thereby ignoring the interchannel correlation present in the color images. In view of this, a unified regularization scheme for images is developed to recover edges of color images and reduce color artifacts. In addition, by using the color image properties, a spectral-based regularization operator is adopted to impose constraints on the blurs. Further, this paper proposes a reinforcement regularization framework that integrates a soft parametric learning term in addressing blind color image deconvolution. A blur modeling scheme is developed to evaluate the relevance of manifold parametric blur structures, and the information is integrated into the deconvolution scheme. An optimization procedure called alternating minimization is then employed to iteratively minimize the image- and blur-domain cost functions. Experimental results show that the method is able to achieve satisfactory restored color images under different blurring conditions.
Langton, Christian M; Wille, Marie-Luise; Flegg, Mark B
2014-04-01
The acceptance of broadband ultrasound attenuation for the assessment of osteoporosis suffers from a limited understanding of ultrasound wave propagation through cancellous bone. It has recently been proposed that the ultrasound wave propagation can be described by a concept of parallel sonic rays. This concept approximates the detected transmission signal to be the superposition of all sonic rays that travel directly from transmitting to receiving transducer. The transit time of each ray is defined by the proportion of bone and marrow propagated. An ultrasound transit time spectrum describes the proportion of sonic rays having a particular transit time, effectively describing lateral inhomogeneity of transit times over the surface of the receiving ultrasound transducer. The aim of this study was to provide a proof of concept that a transit time spectrum may be derived from digital deconvolution of input and output ultrasound signals. We have applied the active-set method deconvolution algorithm to determine the ultrasound transit time spectra in the three orthogonal directions of four cancellous bone replica samples and have compared experimental data with the prediction from the computer simulation. The agreement between experimental and predicted ultrasound transit time spectrum analyses derived from Bland-Altman analysis ranged from 92% to 99%, thereby supporting the concept of parallel sonic rays for ultrasound propagation in cancellous bone. In addition to further validation of the parallel sonic ray concept, this technique offers the opportunity to consider quantitative characterisation of the material and structural properties of cancellous bone, not previously available utilising ultrasound.
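A minimal sketch of the deconvolution step, assuming input and output signals sampled on the same grid: the transit time spectrum is obtained by a non-negative least-squares solve of output ≈ conv(input, spectrum). SciPy's nnls is a Lawson-Hanson active-set solver, used here as a stand-in for the paper's active-set method.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.optimize import nnls

def transit_time_spectrum(input_sig, output_sig):
    """Solve output ≈ conv(input, spectrum) for a non-negative transit-time
    spectrum, using an active-set non-negative least-squares solver."""
    n = len(output_sig)
    A = toeplitz(input_sig, np.zeros(n))   # lower-triangular convolution matrix
    spectrum, residual_norm = nnls(A, output_sig)
    return spectrum
```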
XDGMM: eXtreme Deconvolution Gaussian Mixture Modeling
NASA Astrophysics Data System (ADS)
Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.
2017-08-01
XDGMM uses Gaussian mixtures to perform density estimation of noisy, heterogeneous, and incomplete data using extreme deconvolution (XD) algorithms, and is compatible with the scikit-learn machine learning methods. It implements both the astroML and Bovy et al. (2011) algorithms, and extends the BaseEstimator class from scikit-learn so that cross-validation methods work. It allows the user to produce a conditioned model if values of some parameters are known.
Harper, Brett; Neumann, Elizabeth K; Stow, Sarah M; May, Jody C; McLean, John A; Solouki, Touradj
2016-10-05
Ion mobility (IM) is an important analytical technique for determining ion collision cross section (CCS) values in the gas-phase and gaining insight into molecular structures and conformations. However, limited instrument resolving powers for IM may restrict adequate characterization of conformationally similar ions, such as structural isomers, and reduce the accuracy of IM-based CCS calculations. Recently, we introduced an automated technique for extracting "pure" IM and collision-induced dissociation (CID) mass spectra of IM overlapping species using chemometric deconvolution of post-IM/CID mass spectrometry (MS) data [J. Am. Soc. Mass Spectrom., 2014, 25, 1810-1819]. Here we extend those capabilities to demonstrate how extracted IM profiles can be used to calculate accurate CCS values of peptide isomer ions which are not fully resolved by IM. We show that CCS values obtained from deconvoluted IM spectra match with CCS values measured from the individually analyzed corresponding peptides on uniform field IM instrumentation. We introduce an approach that utilizes experimentally determined IM arrival time (AT) "shift factors" to compensate for ion acceleration variations during post-IM/CID and significantly improve the accuracy of the calculated CCS values. Also, we discuss details of this IM deconvolution approach and compare empirical CCS values from traveling wave (TW)IM-MS and drift tube (DT)IM-MS with theoretically calculated CCS values using the projected superposition approximation (PSA). For example, experimentally measured deconvoluted TWIM-MS mean CCS values for doubly-protonated RYGGFM, RMFGYG, MFRYGG, and FRMYGG peptide isomers were 288.8 Å(2), 295.1 Å(2), 296.8 Å(2), and 300.1 Å(2); all four of these CCS values were within 1.5% of independently measured DTIM-MS values. Copyright © 2016 Elsevier B.V. All rights reserved.
A distance-driven deconvolution method for CT image-resolution improvement
NASA Astrophysics Data System (ADS)
Han, Seokmin; Choi, Kihwan; Yoo, Sang Wook; Yi, Jonghyon
2016-12-01
The purpose of this research is to achieve high spatial resolution in CT (computed tomography) images without hardware modification. The main idea is to consider a geometric optics model, which can provide the approximate blurring PSF (point spread function) kernel, which varies according to the distance from the X-ray tube to each point. The FOV (field of view) is divided into several band regions based on the distance from the X-ray source, and each region is deconvolved with a different deconvolution kernel. As the number of subbands increases, the overshoot of the MTF (modulation transfer function) curve increases first. After that, the overshoot begins to decrease while still showing a larger MTF than the normal FBP (filtered backprojection). The case of five subbands seems to show balanced performance between MTF boost and overshoot minimization. It can also be seen that, as the number of subbands increases, the noise (STD) tends to decrease. The results show that spatial resolution in CT images can be improved without using high-resolution detectors or focal spot wobbling. The proposed algorithm shows promising results in improving spatial resolution while avoiding excessive noise boost.
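A toy sketch of the banded idea under simplifying assumptions (Gaussian PSFs, Wiener deconvolution, a precomputed per-pixel source-distance map): each distance band is deconvolved with its own kernel and the results are composited. The band edges, kernel widths, and regularization constant are illustrative, not the paper's parameters.

```python
import numpy as np

def gaussian_psf(shape, sigma):
    y, x = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2.0, (shape[1] - 1) / 2.0
    psf = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def wiener_deconv(img, psf, k=1e-2):
    """Frequency-domain Wiener deconvolution with constant noise-to-signal ratio k."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=img.shape)
    G = np.fft.fft2(img)
    return np.real(np.fft.ifft2(np.conj(H) / (np.abs(H) ** 2 + k) * G))

def banded_deconv(img, distances, band_edges, sigmas, k=1e-2):
    """Deconvolve each distance band with its own PSF and composite the results."""
    out = np.zeros_like(img, dtype=float)
    for lo, hi, sigma in zip(band_edges[:-1], band_edges[1:], sigmas):
        mask = (distances >= lo) & (distances < hi)
        out[mask] = wiener_deconv(img, gaussian_psf(img.shape, sigma), k)[mask]
    return out
```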
NASA Astrophysics Data System (ADS)
Ikelle, Luc T.; Osen, Are; Amundsen, Lasse; Shen, Yunqing
2004-12-01
The classical linear solutions to the problem of multiple attenuation, like predictive deconvolution, τ-p filtering, or F-K filtering, are generally fast, stable, and robust compared to non-linear solutions, which are generally either iterative or in the form of a series with an infinite number of terms. These qualities have made the linear solutions more attractive to seismic data-processing practitioners. However, most linear solutions, including predictive deconvolution or F-K filtering, contain severe assumptions about the model of the subsurface and the class of free-surface multiples they can attenuate. These assumptions limit their usefulness. In a recent paper, we described an exception to this assertion for OBS data. We showed in that paper that a linear and non-iterative solution to the problem of attenuating free-surface multiples which is as accurate as iterative non-linear solutions can be constructed for OBS data. We here present a similar linear and non-iterative solution for attenuating free-surface multiples in towed-streamer data. For most practical purposes, this linear solution is as accurate as the non-linear ones.
Matthews, Grant
2004-12-01
The Geostationary Earth Radiation Budget (GERB) experiment is a broadband satellite radiometer instrument program intended to resolve remaining uncertainties surrounding the effect of cloud radiative feedback on future climate change. By use of a custom-designed diffraction-aberration telescope model, the GERB detector spatial response is recovered by deconvolution applied to the ground calibration point-spread function (PSF) measurements. An ensemble of randomly generated white-noise test scenes, combined with the measured telescope transfer function results in the effect of noise on the deconvolution being significantly reduced. With the recovered detector response as a base, the same model is applied in construction of the predicted in-flight field-of-view response of each GERB pixel to both short- and long-wave Earth radiance. The results of this study can now be used to simulate and investigate the instantaneous sampling errors incurred by GERB. Also, the developed deconvolution method may be highly applicable in enhancing images or PSF data for any telescope system for which a wave-front error measurement is available.
Bardy, Fabrice; Dillon, Harvey; Van Dun, Bram
2014-04-01
Rapid presentation of stimuli in an evoked response paradigm can lead to overlap of multiple responses and consequently difficulties interpreting waveform morphology. This paper presents a deconvolution method allowing overlapping multiple responses to be disentangled. The deconvolution technique uses a least-squared error approach. A methodology is proposed to optimize the stimulus sequence associated with the deconvolution technique under low-jitter conditions. It controls the condition number of the matrices involved in recovering the responses. Simulations were performed using the proposed deconvolution technique. Multiple overlapping responses can be recovered perfectly in noiseless conditions. In the presence of noise, the amount of error introduced by the technique can be controlled a priori by the condition number of the matrix associated with the used stimulus sequence. The simulation results indicate the need for a minimum amount of jitter, as well as a sufficient number of overlap combinations to obtain optimum results. An aperiodic model is recommended to improve reconstruction. We propose a deconvolution technique allowing multiple overlapping responses to be extracted and a method of choosing the stimulus sequence optimal for response recovery. This technique may allow audiologists, psychologists, and electrophysiologists to optimize their experimental designs involving rapidly presented stimuli, and to recover evoked overlapping responses. Copyright © 2013 International Federation of Clinical Neurophysiology. All rights reserved.
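A minimal sketch of the recovery and conditioning check described above, with illustrative onset times: the recording is modeled as a stimulus (convolution) matrix times the overlapping response, the condition number of that matrix indicates how much noise the least-squares recovery will amplify, and the response is recovered by least squares.

```python
import numpy as np

def stimulus_matrix(onsets, n_samples, resp_len):
    """Binary convolution matrix S such that recording ≈ S @ response."""
    S = np.zeros((n_samples, resp_len))
    for t0 in onsets:
        for j in range(resp_len):
            if t0 + j < n_samples:
                S[t0 + j, j] += 1.0
    return S

onsets = [0, 37, 81, 118, 160]                  # jittered stimulus onsets (samples)
S = stimulus_matrix(onsets, n_samples=300, resp_len=60)
print("condition number:", np.linalg.cond(S))   # smaller -> less noise amplification
# Least-squares recovery of the overlapping response from a recording y:
# response_hat = np.linalg.lstsq(S, y, rcond=None)[0]
```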
NASA Astrophysics Data System (ADS)
Dallmann, N. A.; Carlsten, B. E.; Stonehill, L. C.
2017-12-01
Orbiting nuclear spectrometers have contributed significantly to our understanding of the composition of solar system bodies. Gamma rays and neutrons are produced within the surfaces of bodies by impacting galactic cosmic rays (GCR) and by intrinsic radionuclide decay. Measuring the flux and energy spectrum of these products at one point in an orbit elucidates the elemental content of the area in view. Deconvolution of measurements from many spatially registered orbit points can produce detailed maps of elemental abundances. In applying these well-established techniques to small and irregularly shaped bodies like Phobos, one encounters unique challenges beyond those of a large spheroid. Polar mapping orbits are not possible for Phobos and quasistatic orbits will realize only modest inclinations, unavoidably limiting surface coverage and creating North-South ambiguities in deconvolution. The irregular shape causes self-shadowing of the body both with respect to the spectrometer and with respect to the incoming GCR. The view angle to the surface normal as well as the distance between the surface and the spectrometer is highly irregular. These characteristics can be synthesized into a complicated and continuously changing measurement system point spread function. We have begun to explore different model-based, statistically rigorous, iterative deconvolution methods to produce elemental abundance maps for a proposed future investigation of Phobos. By incorporating the satellite orbit, the existing high accuracy shape-models of Phobos, and the spectrometer response function, a detailed and accurate system model can be constructed. Many aspects of this model formation are particularly well suited to modern graphics processing techniques and parallel processing. We will present the current status and preliminary visualizations of the Phobos measurement system model. We will also discuss different deconvolution strategies and their relative merit in statistical rigor, stability, achievable resolution, and exploitation of the irregular shape to partially resolve ambiguities. The general applicability of these new approaches to existing data sets from Mars, Mercury, and Lunar investigations will be noted.
Approximate Deconvolution and Explicit Filtering For LES of a Premixed Turbulent Jet Flame
2014-09-19
from laminar flamelets computed with the GRI mechanism for methane-air combustion (Smith et al. 1999) and the progress variable Yc is defined as in... gri-mech/. Subramanian, V., P. Domingo, and L. Vervisch (2010). Large-Eddy Simulation of forced ignition of an annular bluff-body burner. Combust
FTIR of binary lead borate glass: Structural investigation
NASA Astrophysics Data System (ADS)
Othman, H. A.; Elkholy, H. S.; Hager, I. Z.
2016-02-01
The glass samples were prepared according to the following formula: (100-x) B2O3 - x PbO, where x = 20-80 mol% by melt quenching method. The density of the prepared samples was measured and molar volume was calculated. IR spectra were measured for the prepared samples to investigate the glass structure. The IR spectra were deconvoluted using curves of Gaussian shape at approximately the same frequencies. The deconvoluted data were used to study the effect of PbO content on all the structural borate groups. Some structural parameters such as density, packing density, bond length and bond force constant were theoretically calculated and were compared to the obtained experimental results. Deviation between the experimental and theoretically calculated parameters reflects the dual role of PbO content on the network of borate glass.
NASA Astrophysics Data System (ADS)
Neuer, Marcus J.
2013-11-01
A technique for the spectral identification of strontium-90 is shown, utilising a Maximum-Likelihood deconvolution. Different deconvolution approaches are discussed and summarised. Based on the intensity distribution of the beta emission and Geant4 simulations, a combined response matrix is derived, tailored to the β- detection process in sodium iodide detectors. It includes scattering effects and attenuation by applying a base material decomposition extracted from Geant4 simulations with a CAD model for a realistic detector system. Inversion results of measurements show the agreement between deconvolution and reconstruction. A detailed investigation with additional masking sources like 40K, 226Ra and 131I shows that a contamination of strontium can be found in the presence of these nuisance sources. Identification algorithms for strontium are presented based on the derived technique. For the implementation of blind identification, an exemplary masking ratio is calculated.
NASA Astrophysics Data System (ADS)
Chu, Zhigang; Yang, Yang; He, Yansong
2015-05-01
Spherical Harmonics Beamforming (SHB) with solid spherical arrays has become a particularly attractive tool for acoustic source identification in cabin environments. However, it presents some intrinsic limitations, specifically poor spatial resolution and severe sidelobe contaminations. This paper focuses on overcoming these limitations effectively by deconvolution. First and foremost, a new formulation is proposed, which expresses SHB's output as a convolution of the true source strength distribution and the point spread function (PSF) defined as SHB's response to a unit-strength point source. Additionally, the typical deconvolution methods initially suggested for planar arrays, deconvolution approach for the mapping of acoustic sources (DAMAS), nonnegative least-squares (NNLS), Richardson-Lucy (RL) and CLEAN, are adapted to SHB successfully, which are capable of giving rise to highly resolved and deblurred maps. Finally, the merits of the deconvolution methods are validated and the relationships of source strength and pressure contribution reconstructed by the deconvolution methods vs. focus distance are explored both with computer simulations and experimentally. Several interesting results have emerged from this study: (1) compared with SHB, DAMAS, NNLS, RL and CLEAN all can not only improve the spatial resolution dramatically but also reduce or even eliminate the sidelobes effectively, allowing clear and unambiguous identification of single source or incoherent sources. (2) The availability of RL for coherent sources is highest, then DAMAS and NNLS, and that of CLEAN is lowest due to its failure in suppressing sidelobes. (3) Whether or not the real distance from the source to the array center equals the assumed one that is referred to as focus distance, the previous two results hold. (4) The true source strength can be recovered by dividing the reconstructed one by a coefficient that is the square of the focus distance divided by the real distance from the source to the array center. (5) The reconstructed pressure contribution is almost not affected by the focus distance, always approximating the true one. This study will be of great significance to the accurate localization and quantification of acoustic sources in cabin environments.
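For orientation, a minimal sketch of a generic DAMAS-style Gauss-Seidel iteration on the discrete system b = A q (beamformer map = PSF matrix times source strengths), with clipping to non-negative values. This is the planar-array formulation commonly cited for DAMAS, not the SHB-specific adaptation developed in the paper.

```python
import numpy as np

def damas(A, b, n_sweeps=100):
    """Gauss-Seidel sweeps with non-negativity clipping for b = A q."""
    q = np.zeros_like(b, dtype=float)
    for _ in range(n_sweeps):
        for n in range(len(b)):
            # residual excluding the n-th unknown's own contribution
            resid = b[n] - A[n] @ q + A[n, n] * q[n]
            q[n] = max(0.0, resid / A[n, n])
    return q
```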
Regression-assisted deconvolution.
McIntyre, Julie; Stefanski, Leonard A
2011-06-30
We present a semi-parametric deconvolution estimator for the density function of a random variable X that is measured with error, a common challenge in many epidemiological studies. Traditional deconvolution estimators rely only on assumptions about the distribution of X and the error in its measurement, and ignore information available in auxiliary variables. Our method assumes the availability of a covariate vector statistically related to X by a mean-variance function regression model, where regression errors are normally distributed and independent of the measurement errors. Simulations suggest that the estimator achieves a much lower integrated squared error than the observed-data kernel density estimator when models are correctly specified and the assumption of normal regression errors is met. We illustrate the method using anthropometric measurements of newborns to estimate the density function of newborn length. Copyright © 2011 John Wiley & Sons, Ltd.
Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.
2004-01-01
We successfully applied deterministic deconvolution to real ground-penetrating radar (GPR) data by using the source wavelet that was generated in and transmitted through air as the operator. The GPR data were collected with 400-MHz antennas on a bench adjacent to a cleanly exposed quarry face. The quarry site is characterized by horizontally bedded carbonate strata with shale partings. In order to provide ground truth for this deconvolution approach, 23 conductive rods were drilled into the quarry face at key locations. The steel rods provided critical information for: (1) correlation between reflections on GPR data and geologic features exposed in the quarry face, (2) GPR resolution limits, (3) accuracy of velocities calculated from common midpoint data and (4) identifying any multiples. Comparing the results of deconvolved data with non-deconvolved data demonstrates the effectiveness of deterministic deconvolution in low dielectric-loss media for increased accuracy of velocity models (improved at least 10-15% in our study after deterministic deconvolution), increased vertical and horizontal resolution of specific geologic features and more accurate representation of geologic features as confirmed from detailed study of the adjacent quarry wall. © 2004 Elsevier B.V. All rights reserved.
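A minimal sketch of deterministic (known-wavelet) deconvolution in the frequency domain with a simple water-level stabilizer; here the wavelet would be the air-transmitted source pulse, and the water-level fraction is an illustrative choice rather than the authors' parameterization.

```python
import numpy as np

def deterministic_deconv(trace, wavelet, water_level=0.01):
    """Frequency-domain deconvolution of a GPR trace by a measured source wavelet."""
    n = len(trace)
    W = np.fft.rfft(wavelet, n)
    D = np.fft.rfft(trace, n)
    stab = water_level * np.max(np.abs(W)) ** 2      # water-level stabilization
    R = D * np.conj(W) / (np.abs(W) ** 2 + stab)
    return np.fft.irfft(R, n)
```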
NASA Astrophysics Data System (ADS)
Zhou, T.; Popescu, S. C.; Krause, K.
2016-12-01
Waveform Light Detection and Ranging (LiDAR) data have advantages over discrete-return LiDAR data in accurately characterizing vegetation structure. However, we lack a comprehensive understanding of waveform data processing approaches under different topography and vegetation conditions. The objective of this paper is to highlight a novel deconvolution algorithm, the Gold algorithm, for processing waveform LiDAR data with optimal deconvolution parameters. Further, we present a comparative study of waveform processing methods to provide insight into selecting an approach for a given combination of vegetation and terrain characteristics. We employed two waveform processing methods: 1) direct decomposition, 2) deconvolution and decomposition. In method two, we utilized two deconvolution algorithms - the Richardson Lucy (RL) algorithm and the Gold algorithm. The comprehensive and quantitative comparisons were conducted in terms of the number of detected echoes, position accuracy, the bias of the end products (such as digital terrain model (DTM) and canopy height model (CHM)) from discrete LiDAR data, along with parameter uncertainty for these end products obtained from different methods. This study was conducted at three study sites that include diverse ecological regions, vegetation and elevation gradients. Results demonstrate that two deconvolution algorithms are sensitive to the pre-processing steps of input data. The deconvolution and decomposition method is more capable of detecting hidden echoes with a lower false echo detection rate, especially for the Gold algorithm. Compared to the reference data, all approaches generate satisfactory accuracy assessment results with small mean spatial difference (<1.22 m for DTMs, < 0.77 m for CHMs) and root mean square error (RMSE) (<1.26 m for DTMs, < 1.93 m for CHMs). More specifically, the Gold algorithm is superior to others with smaller root mean square error (RMSE) (< 1.01m), while the direct decomposition approach works better in terms of the percentage of spatial difference within 0.5 and 1 m. The parameter uncertainty analysis demonstrates that the Gold algorithm outperforms other approaches in dense vegetation areas, with the smallest RMSE, and the RL algorithm performs better in sparse vegetation areas in terms of RMSE.
NASA Astrophysics Data System (ADS)
Wapenaar, Kees; van der Neut, Joost; Ruigrok, Elmer; Draganov, Deyan; Hunziker, Juerg; Slob, Evert; Thorbecke, Jan; Snieder, Roel
2010-05-01
In recent years, seismic interferometry (or Green's function retrieval) has led to many applications in seismology (exploration, regional and global), underwater acoustics and ultrasonics. One of the explanations for this broad interest lies in the simplicity of the methodology. In passive data applications a simple crosscorrelation of responses at two receivers gives the impulse response (Green's function) at one receiver as if there were a source at the position of the other. In controlled-source applications the procedure is similar, except that it involves in addition a summation along the sources. It has also been recognized that the simple crosscorrelation approach has its limitations. From the various theoretical models it follows that there are a number of underlying assumptions for retrieving the Green's function by crosscorrelation. The most important assumptions are that the medium is lossless and that the waves are equipartitioned. In heuristic terms the latter condition means that the receivers are illuminated isotropically from all directions, which is for example achieved when the sources are regularly distributed along a closed surface, the sources are mutually uncorrelated and their power spectra are identical. Despite the fact that in practical situations these conditions are at most only partly fulfilled, the results of seismic interferometry are generally quite robust, but the retrieved amplitudes are unreliable and the results are often blurred by artifacts. Several researchers have proposed to address some of the shortcomings by replacing the correlation process by deconvolution. In most cases the employed deconvolution procedure is essentially 1-D (i.e., trace-by-trace deconvolution). This compensates the anelastic losses, but it does not account for the anisotropic illumination of the receivers. To obtain more accurate results, seismic interferometry by deconvolution should acknowledge the 3-D nature of the seismic wave field. Hence, from a theoretical point of view, the trace-by-trace process should be replaced by a full 3-D wave field deconvolution process. Interferometry by multidimensional deconvolution is more accurate than the trace-by-trace correlation and deconvolution approaches but the processing is more involved. In the presentation we will give a systematic analysis of seismic interferometry by crosscorrelation versus multi-dimensional deconvolution and discuss applications of both approaches.
Deconvolution of the vestibular evoked myogenic potential.
Lütkenhöner, Bernd; Basel, Türker
2012-02-07
The vestibular evoked myogenic potential (VEMP) and the associated variance modulation can be understood by a convolution model. Two functions of time are incorporated into the model: the motor unit action potential (MUAP) of an average motor unit, and the temporal modulation of the MUAP rate of all contributing motor units, briefly called rate modulation. The latter is the function of interest, whereas the MUAP acts as a filter that distorts the information contained in the measured data. Here, it is shown how to recover the rate modulation by undoing the filtering using a deconvolution approach. The key aspects of our deconvolution algorithm are as follows: (1) the rate modulation is described in terms of just a few parameters; (2) the MUAP is calculated by Wiener deconvolution of the VEMP with the rate modulation; (3) the model parameters are optimized using a figure-of-merit function where the most important term quantifies the difference between measured and model-predicted variance modulation. The effectiveness of the algorithm is demonstrated with simulated data. An analysis of real data confirms the view that there are basically two components, which roughly correspond to the waves p13-n23 and n34-p44 of the VEMP. The rate modulation corresponding to the first, inhibitory component is much stronger than that corresponding to the second, excitatory component. But the latter is more extended so that the two modulations have almost the same equivalent rectangular duration. Copyright © 2011 Elsevier Ltd. All rights reserved.
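A minimal statement of the Wiener deconvolution step described above, assuming V(f) is the Fourier transform of the measured VEMP, R(f) that of the modeled rate modulation, and λ a regularization constant standing in for the noise-to-signal power ratio (symbols here are illustrative, not the authors' notation); the inverse transform of M̂(f) gives the MUAP estimate.

```latex
\hat{M}(f) = \frac{R^{*}(f)\,V(f)}{\lvert R(f)\rvert^{2} + \lambda}
```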
NASA Astrophysics Data System (ADS)
Li, Gang; Zhao, Qing
2017-03-01
In this paper, a minimum entropy deconvolution based sinusoidal synthesis (MEDSS) filter is proposed to improve the fault detection performance of the regular sinusoidal synthesis (SS) method. The SS filter is an efficient linear predictor that exploits the frequency properties during model construction. The phase information of the harmonic components is not used in the regular SS filter. However, the phase relationships are important in differentiating noise from characteristic impulsive fault signatures. Therefore, in this work, the minimum entropy deconvolution (MED) technique is used to optimize the SS filter during the model construction process. A time-weighted-error Kalman filter is used to estimate the MEDSS model parameters adaptively. Three simulation examples and a practical application case study are provided to illustrate the effectiveness of the proposed method. The regular SS method and the autoregressive MED (ARMED) method are also implemented for comparison. The MEDSS model has demonstrated superior performance compared to the regular SS method and it also shows comparable or better performance with much less computational intensity than the ARMED method.
Calibration of a polarimetric imaging SAR
NASA Technical Reports Server (NTRS)
Sarabandi, K.; Pierce, L. E.; Ulaby, F. T.
1991-01-01
Calibration of polarimetric imaging Synthetic Aperture Radars (SAR's) using point calibration targets is discussed. The four-port network calibration technique is used to describe the radar error model. The polarimetric ambiguity function of the SAR is then found using a single point target, namely a trihedral corner reflector. Based on this, an estimate for the backscattering coefficient of the terrain is found by a deconvolution process. A radar image taken by the JPL Airborne SAR (AIRSAR) is used for verification of the deconvolution calibration method. The calibrated responses of point targets in the image are compared both with theory and the POLCAL technique. Also, the responses of a distributed target are compared using the deconvolution and POLCAL techniques.
Wavespace-Based Coherent Deconvolution
NASA Technical Reports Server (NTRS)
Bahr, Christopher J.; Cattafesta, Louis N., III
2012-01-01
Array deconvolution is commonly used in aeroacoustic analysis to remove the influence of a microphone array's point spread function from a conventional beamforming map. Unfortunately, the majority of deconvolution algorithms assume that the acoustic sources in a measurement are incoherent, which can be problematic for some aeroacoustic phenomena with coherent, spatially-distributed characteristics. While several algorithms have been proposed to handle coherent sources, some are computationally intractable for many problems while others require restrictive assumptions about the source field. Newer generalized inverse techniques hold promise, but are still under investigation for general use. An alternate coherent deconvolution method is proposed based on a wavespace transformation of the array data. Wavespace analysis offers advantages over curved-wave array processing, such as providing an explicit shift-invariance in the convolution of the array sampling function with the acoustic wave field. However, usage of the wavespace transformation assumes the acoustic wave field is accurately approximated as a superposition of plane wave fields, regardless of true wavefront curvature. The wavespace technique leverages Fourier transforms to quickly evaluate a shift-invariant convolution. The method is derived for and applied to ideal incoherent and coherent plane wave fields to demonstrate its ability to determine magnitude and relative phase of multiple coherent sources. Multi-scale processing is explored as a means of accelerating solution convergence. A case with a spherical wave front is evaluated. Finally, a trailing edge noise experiment case is considered. Results show the method successfully deconvolves incoherent, partially-coherent, and coherent plane wave fields to a degree necessary for quantitative evaluation. Curved wave front cases warrant further investigation. A potential extension to nearfield beamforming is proposed.
Large-eddy simulations of a forced homogeneous isotropic turbulence with polymer additives
NASA Astrophysics Data System (ADS)
Wang, Lu; Cai, Wei-Hua; Li, Feng-Chen
2014-03-01
Large-eddy simulations (LES) based on the temporal approximate deconvolution model were performed for a forced homogeneous isotropic turbulence (FHIT) with polymer additives at moderate Taylor Reynolds number. Finitely extensible nonlinear elastic in the Peterlin approximation model was adopted as the constitutive equation for the filtered conformation tensor of the polymer molecules. The LES results were verified through comparisons with the direct numerical simulation results. Using the LES database of the FHIT in the Newtonian fluid and the polymer solution flows, the polymer effects on some important parameters such as strain, vorticity, drag reduction, and so forth were studied. By extracting the vortex structures and exploring the flatness factor through a high-order correlation function of velocity derivative and wavelet analysis, it can be found that the small-scale vortex structures and small-scale intermittency in the FHIT are all inhibited due to the existence of the polymers. The extended self-similarity scaling law in the polymer solution flow shows no apparent difference from that in the Newtonian fluid flow at the currently simulated ranges of Reynolds and Weissenberg numbers.
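For orientation, the generic approximate deconvolution idea behind this family of LES models, written here in its standard spatial-filter form (the temporal variant used above applies the same truncated van Cittert series to a temporal filter): the unfiltered field is approximated from the filtered field ū by truncating the formal inverse of the filter G at order N.

```latex
u^{*} \;=\; Q_N\,\bar{u} \;=\; \sum_{k=0}^{N} (I - G)^{k}\,\bar{u} \;\approx\; G^{-1}\bar{u}
```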
NASA Technical Reports Server (NTRS)
Pan, Jianqiang
1992-01-01
Several important problems in the fields of signal processing and model identification are studied, including system structure identification, frequency response determination, high-order model reduction, high-resolution frequency analysis, and deconvolution filtering. Each of these topics involves a wide range of applications and has received considerable attention. Using the Fourier-based sinusoidal modulating signals, it is shown that a discrete autoregressive model can be constructed for the least squares identification of continuous systems. Some identification algorithms are presented for both SISO and MIMO systems frequency response determination using only transient data. Also, several new schemes for model reduction were developed. Based upon the complex sinusoidal modulating signals, a parametric least squares algorithm for high-resolution frequency estimation is proposed. Numerical examples show that the proposed algorithm gives better performance than the usual methods. Also, the problem of deconvolution and parameter identification of a general noncausal, nonminimum-phase ARMA system driven by non-Gaussian stationary random processes was studied. Algorithms are introduced for inverse cumulant estimation, both in the frequency domain via FFT algorithms and in the time domain via the least squares algorithm.
NASA Technical Reports Server (NTRS)
Windhorst, R. A.; Schmidtke, P. C.; Pascarelle, S. M.; Gordon, J. M.; Griffiths, R. E.; Ratnatunga, K. U.; Neuschaefer, L. W.; Ellis, R. S.; Gilmore, G.; Glazebrook, K.
1994-01-01
We present isophotal profiles of six faint field galaxies from some of the first deep images taken for the Hubble Space Telescope (HST) Medium Deep Survey (MDS). These have redshifts in the range z = 0.126 to 0.402. The images were taken with the Wide Field Camera (WFC) in `parallel mode' and deconvolved with the Lucy method using as the point-spread function nearby stars in the image stack. The WFC deconvolutions have a dynamic range of 16 to 20 dB (4 to 5 mag) and an effective resolution approximately less than 0.2 sec (FWHM). The multiorbit HST images allow us to trace the morphology, light profiles, and color gradients of faint field galaxies down to V approximately equal to 22 to 23 mag at sub-kpc resolution, since the redshift range covered is z = 0.1 to 0.4. The goals of the MDS are to study the sub-kpc scale morphology, light profiles, and color gradients for a large sample of faint field galaxies down to V approximately equal to 23 mag, and to trace the fraction of early to late-type galaxies as a function of cosmic time. In this paper we study the brighter MDS galaxies in the 13 hour + 43 deg MDS field in detail, and investigate to what extent model fits with pure exponential disks or r(exp 1/4) bulges are justified at V approximately less than 22 mag. Four of the six field galaxies have light profiles that indicate (small) inner bulges following r(exp 1/4) laws down to 0.2 sec resolution, plus a dominant surrounding exponential disk with little or no color gradients. Two occur in a group at z = 0.401, two are barred spiral galaxies at z = 0.179 and z = 0.302, and two are rather subluminous (and edge-on) disk galaxies at z = 0.126 and z = 0.179. Our deep MDS images can detect galaxies down to V, I approximately less than 25 to 26 mag, and demonstrate the impressive potential of HST--even with its pre-refurbished optics--to resolve morphological details in galaxies at cosmologically significant distances (V approximately less than 23 mag). Since the median redshift of these galaxies is approximately less than 0.4, the HST resolution allows us to study sub-kpc size scales at the galaxy, which cannot be done with stable images over wide fields from the best ground-based sites.
Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction
NASA Astrophysics Data System (ADS)
Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng
2017-01-01
Most previous regularization methods for solving the inverse problem of force reconstruction are to minimize the l2-norm of the desired force. However, these traditional regularization methods such as Tikhonov regularization and truncated singular value decomposition, commonly fail to solve the large-scale ill-posed inverse problem in moderate computational cost. In this paper, taking into account the sparse characteristic of impact force, the idea of sparse deconvolution is first introduced to the field of impact force reconstruction and a general sparse deconvolution model of impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve such a large-scale sparse deconvolution model, where minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, the preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments including the small-scale or medium-scale single impact force reconstruction and the relatively large-scale consecutive impact force reconstruction are conducted on a composite wind turbine blade and a shell structure to illustrate the advantage of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate and robust whether in the single impact force reconstruction or in the consecutive impact force reconstruction.
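A minimal sketch of the sparse deconvolution model itself, min_f 0.5||Hf − y||² + λ||f||₁, solved here with a simple proximal-gradient (ISTA) loop rather than the paper's primal-dual interior point method; H stands for the discretized transfer (convolution) matrix, and the step size and λ are illustrative.

```python
import numpy as np

def ista_sparse_deconv(H, y, lam=0.1, n_iter=500):
    """Iterative soft-thresholding for min_f 0.5*||H f - y||^2 + lam*||f||_1."""
    L = np.linalg.norm(H, 2) ** 2            # Lipschitz constant of the gradient
    f = np.zeros(H.shape[1])
    for _ in range(n_iter):
        grad = H.T @ (H @ f - y)
        z = f - grad / L
        f = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return f
```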
Convex blind image deconvolution with inverse filtering
NASA Astrophysics Data System (ADS)
Lv, Xiao-Guang; Li, Fang; Zeng, Tieyong
2018-03-01
Blind image deconvolution is the process of estimating both the original image and the blur kernel from the degraded image with only partial or no information about degradation and the imaging system. It is a bilinear ill-posed inverse problem corresponding to the direct problem of convolution. Regularization methods are used to handle the ill-posedness of blind deconvolution and get meaningful solutions. In this paper, we investigate a convex regularized inverse filtering method for blind deconvolution of images. We assume that the support region of the blur object is known, as has been done in a few existing works. By studying the inverse filters of signal and image restoration problems, we observe the oscillation structure of the inverse filters. Inspired by the oscillation structure of the inverse filters, we propose to use the star norm to regularize the inverse filter. Meanwhile, we use the total variation to regularize the resulting image obtained by convolving the inverse filter with the degraded image. The proposed minimization model is shown to be convex. We employ the first-order primal-dual method for the solution of the proposed minimization model. Numerical examples for blind image restoration are given to show that the proposed method outperforms some existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual quality and time consumption.
Sequential deconvolution from wave-front sensing using bivariate simplex splines
NASA Astrophysics Data System (ADS)
Guo, Shiping; Zhang, Rongzhi; Li, Jisheng; Zou, Jianhua; Xu, Rong; Liu, Changhai
2015-05-01
Deconvolution from wave-front sensing (DWFS) is an imaging compensation technique for turbulence-degraded images based on simultaneous recording of short exposure images and wave-front sensor data. This paper employs the multivariate splines method for sequential DWFS: a bivariate simplex splines based average slopes measurement model is first built for the Shack-Hartmann wave-front sensor; next, a well-conditioned least squares estimator for the spline coefficients is constructed using multiple Shack-Hartmann measurements; then, the distorted wave-front is uniquely determined by the estimated spline coefficients; the object image is finally obtained by non-blind deconvolution processing. Simulated experiments at different turbulence strengths show that our method yields superior image restoration results and noise rejection capability, especially when extracting the multidirectional phase derivatives.
Horger, Marius; Fallier-Becker, Petra; Thaiss, Wolfgang M; Sauter, Alexander; Bösmüller, Hans; Martella, Manuela; Preibsch, Heike; Fritz, Jan; Nikolaou, Konstantin; Kloth, Christopher
2018-05-03
This study aimed to test the hypothesis that ultrastructural wall abnormalities of lymphoma vessels correlate with perfusion computed tomography (PCT) kinetics. Our local institutional review board approved this prospective study. Between February 2013 and June 2016, we included 23 consecutive subjects with newly diagnosed lymphoma, who were referred for computed tomography-guided biopsy (6 women, 17 men; mean age, 60.61 ± 12.43 years; range, 28-74 years) and additionally agreed to undergo PCT of the target lymphoma tissues. PCT was obtained for 40 seconds using 80 kV, 120 mAs, 64 × 0.6-mm collimation, 6.9-cm z-axis coverage, and 26 volume measurements. Mean and maximum k-trans (mL/100 mL/min), blood flow (BF; mL/100 mL/min) and blood volume (BV) were quantified using the deconvolution and the maximum slope + Patlak calculation models. Immunohistochemical staining was performed for microvessel density quantification (vessels/m2), and electron microscopy was used to determine the presence or absence of tight junctions, endothelial fenestration, basement membrane, and pericytes, and to measure extracellular matrix thickness. Extracellular matrix thickness as well as the presence or absence of tight junctions, basal lamina, and pericytes did not correlate with computed tomography perfusion parameters. Endothelial fenestrations correlated significantly with mean BF deconvolution (P = .047, r = 0.418) and additionally were significantly associated with higher mean BV deconvolution (P < .005). Mean k-trans Patlak correlated strongly with mean k-trans deconvolution (r = 0.939, P = .001), and both correlated with mean BF deconvolution (P = .001, r = 0.748), max BF deconvolution (P = .028, r = 0.564), mean BV deconvolution (P = .001, r = 0.752), and max BV deconvolution (P = .001, r = 0.771). Microvessel density correlated with max k-trans deconvolution (r = 0.564, P = .023). Vascular endothelial growth factor receptor-3 expression (receptor specific for lymphatics) correlated significantly with max k-trans Patlak (P = .041, r = 0.686) and mean BF deconvolution (P = .038, r = 0.695). k-Trans values of PCT do not correlate with ultrastructural microvessel features, whereas endothelial fenestrations correlate with increased intra-tumoral BVs. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
Carnevale Neto, Fausto; Pilon, Alan C.; Selegato, Denise M.; Freire, Rafael T.; Gu, Haiwei; Raftery, Daniel; Lopes, Norberto P.; Castro-Gamboa, Ian
2016-01-01
Dereplication based on hyphenated techniques has been extensively applied in plant metabolomics, thereby avoiding re-isolation of known natural products. However, due to the complex nature of biological samples and their large concentration range, dereplication requires the use of chemometric tools to comprehensively extract information from the acquired data. In this work we developed a reliable GC-MS-based method for the identification of non-targeted plant metabolites by combining the Ratio Analysis of Mass Spectrometry deconvolution tool (RAMSY) with Automated Mass Spectral Deconvolution and Identification System software (AMDIS). Plants species from Solanaceae, Chrysobalanaceae and Euphorbiaceae were selected as model systems due to their molecular diversity, ethnopharmacological potential, and economical value. The samples were analyzed by GC-MS after methoximation and silylation reactions. Dereplication was initiated with the use of a factorial design of experiments to determine the best AMDIS configuration for each sample, considering linear retention indices and mass spectral data. A heuristic factor (CDF, compound detection factor) was developed and applied to the AMDIS results in order to decrease the false-positive rates. Despite the enhancement in deconvolution and peak identification, the empirical AMDIS method was not able to fully deconvolute all GC-peaks, leading to low MF values and/or missing metabolites. RAMSY was applied as a complementary deconvolution method to AMDIS to peaks exhibiting substantial overlap, resulting in recovery of low-intensity co-eluted ions. The results from this combination of optimized AMDIS with RAMSY attested to the ability of this approach as an improved dereplication method for complex biological samples such as plant extracts. PMID:27747213
Gold - A novel deconvolution algorithm with optimization for waveform LiDAR processing
NASA Astrophysics Data System (ADS)
Zhou, Tan; Popescu, Sorin C.; Krause, Keith; Sheridan, Ryan D.; Putman, Eric
2017-07-01
Waveform Light Detection and Ranging (LiDAR) data have advantages over discrete-return LiDAR data in accurately characterizing vegetation structure. However, we lack a comprehensive understanding of waveform data processing approaches under different topography and vegetation conditions. The objective of this paper is to highlight a novel deconvolution algorithm, the Gold algorithm, for processing waveform LiDAR data with optimal deconvolution parameters. Further, we present a comparative study of waveform processing methods to provide insight into selecting an approach for a given combination of vegetation and terrain characteristics. We employed two waveform processing methods: (1) direct decomposition, (2) deconvolution and decomposition. In method two, we utilized two deconvolution algorithms - the Richardson-Lucy (RL) algorithm and the Gold algorithm. The comprehensive and quantitative comparisons were conducted in terms of the number of detected echoes, position accuracy, the bias of the end products (such as digital terrain model (DTM) and canopy height model (CHM)) from the corresponding reference data, along with parameter uncertainty for these end products obtained from different methods. This study was conducted at three study sites that include diverse ecological regions, vegetation and elevation gradients. Results demonstrate that two deconvolution algorithms are sensitive to the pre-processing steps of input data. The deconvolution and decomposition method is more capable of detecting hidden echoes with a lower false echo detection rate, especially for the Gold algorithm. Compared to the reference data, all approaches generate satisfactory accuracy assessment results with small mean spatial difference (<1.22 m for DTMs, <0.77 m for CHMs) and root mean square error (RMSE) (<1.26 m for DTMs, <1.93 m for CHMs). More specifically, the Gold algorithm is superior to others with smaller root mean square error (RMSE) (<1.01 m), while the direct decomposition approach works better in terms of the percentage of spatial difference within 0.5 and 1 m. The parameter uncertainty analysis demonstrates that the Gold algorithm outperforms other approaches in dense vegetation areas, with the smallest RMSE, and the RL algorithm performs better in sparse vegetation areas in terms of RMSE. Additionally, the high level of uncertainty occurs more on areas with high slope and high vegetation. This study provides an alternative and innovative approach for waveform processing that will benefit high fidelity processing of waveform LiDAR data to characterize vegetation structures.
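For orientation, one commonly cited multiplicative form of the Gold iteration for a non-negative deconvolution y ≈ H x; the paper's optimized parameters and pre-processing are not reproduced here, and the exact update used there may differ.

```python
import numpy as np

def gold_deconvolution(y, H, n_iter=1000):
    """Multiplicative Gold-style iteration for y ≈ H x with x >= 0."""
    HtH = H.T @ H
    Hty = H.T @ y
    x = np.full(H.shape[1], y.sum() / H.shape[1])
    for _ in range(n_iter):
        denom = HtH @ x
        ratio = np.divide(Hty, denom, out=np.zeros_like(Hty, dtype=float),
                          where=denom > 0)
        x *= ratio
    return x
```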
NASA Astrophysics Data System (ADS)
Reischmann, Elizabeth; Yang, Xiao; Rial, José
2014-05-01
Deconvolution is widely used across scientific fields, including seismology, as a tool to recover the real input from a system's impulse response and output. Our research uses spectral division deconvolution in the context of studying the impulse response of the possible relationship between the nonlinear climates of the Polar Regions by using select δ18O ice cores from both poles. This is feasible in spite of the fact that the records may be the result of nonlinear processes, because the two polar climates are synchronized for the period studied, forming a Hilbert transform pair. In order to perform this analysis, the age models of three Greenland and four Antarctica records have been matched using a Monte Carlo method with the methane-matched pair GRIP and BYRD as a basis of calculations. For all of the twelve resulting pairs, various deconvolution schemes (Wiener, Damped Least Squares, Tikhonov, Truncated Singular Value Decomposition) give consistent, quasi-periodic impulse responses of the system. Multitaper analysis then demonstrates strong, millennial-scale, quasi-periodic oscillations in these system responses with a range of 2,500 to 1,000 years. However, these results are directionally dependent, with the transfer function from north to south differing from that of south to north. High-amplitude power peaks at 5,000 to 1,700 years characterize the former, while the latter contains peaks at 2,500 to 1,700 years. These predominant periodicities are also found in the data, some of which have been identified as solar forcing, but others of which may indicate internal oscillations of the climate system (1.6-1.4 ky). The approximately 1,500 year period transfer function, which does not have a corresponding solar forcing, may indicate one of these internal periodicities of the system, perhaps even indicating the long-term presence of the Deep Water circulation, also known as the thermohaline circulation (THC). Simplified models of the polar climate fluctuations are shown to support these findings.
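A minimal sketch of the damped-least-squares flavor of spectral division named above, treating one record as the system input and the other as its output; the damping constant is an assumption, and the Monte Carlo age-model matching is not addressed here.

```python
import numpy as np

def spectral_division(output, impulse_input, damp=0.01):
    """Estimate a transfer function by regularized division in the frequency domain."""
    n = len(output)
    Y = np.fft.rfft(output, n)
    G = np.fft.rfft(impulse_input, n)
    # additive damping proportional to the peak spectral power stabilizes small denominators
    denom = np.abs(G) ** 2 + damp * np.max(np.abs(G) ** 2)
    H = Y * np.conj(G) / denom
    return np.fft.irfft(H, n)
```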
NASA Astrophysics Data System (ADS)
Khodasevich, I. A.; Voitikov, S. V.; Orlovich, V. A.; Kosmyna, M. B.; Shekhovtsov, A. N.
2016-09-01
Unpolarized spontaneous Raman spectra of crystalline double calcium orthovanadates Ca10M(VO4)7 (M = Li, K, Na) in the range 150-1600 cm-1 were measured. Two vibrational bands with full-width at half-maximum (FWHM) of 37-50 cm-1 were found in the regions 150-500 and 700-1000 cm-1. The band shapes were approximated well by deconvolution into Voigt profiles. The band at 700-1000 cm-1 was stronger and deconvoluted into eight Voigt profiles. The frequencies of two strong lines were ~848 and ~862 cm-1 for Ca10Li(VO4)7; ~850 and ~866 cm-1 for Ca10Na(VO4)7; and ~844 and ~866 cm-1 for Ca10K(VO4)7. The Lorentzian width parameters of these lines in the Voigt profiles were ~5 times greater than those of the Gaussian width parameters. The FWHM of the Voigt profiles were ~18-42 cm-1. The two strongest lines had widths of 21-25 cm-1. The vibrational band at 300-500 cm-1 was ~5-6 times weaker than that at 700-1000 cm-1 and was deconvoluted into four lines with widths of 25-40 cm-1. The large FWHM of the Raman lines indicated that the crystal structures were disordered. These crystals could be of interest for Raman conversion of pico- and femtosecond laser pulses because of the intense vibrations with large FWHM in the Raman spectra.
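A hedged sketch of the band deconvolution step, fitting a sum of Voigt profiles by nonlinear least squares; the synthetic wavenumber grid, number of lines and initial guesses are illustrative assumptions, not the measured spectra.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import voigt_profile

def voigt_sum(x, *params):
    """Sum of Voigt lines; params = (area, center, sigma, gamma) repeated per line."""
    y = np.zeros_like(x)
    for a, c, s, g in np.reshape(params, (-1, 4)):
        y += a * voigt_profile(x - c, s, g)
    return y

# two-line toy example on a synthetic 700-1000 cm-1 band
x = np.linspace(700, 1000, 600)
truth = [40.0, 848, 4, 8,   30.0, 862, 4, 8]        # area, center, sigma, gamma per line
y = voigt_sum(x, *truth) + 0.01 * np.random.randn(x.size)
guess = [30.0, 845, 5, 5,   30.0, 865, 5, 5]
popt, pcov = curve_fit(voigt_sum, x, y, p0=guess)
```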
A Robust Deconvolution Method based on Transdimensional Hierarchical Bayesian Inference
NASA Astrophysics Data System (ADS)
Kolb, J.; Lekic, V.
2012-12-01
Analysis of P-S and S-P conversions allows us to map receiver side crustal and lithospheric structure. This analysis often involves deconvolution of the parent wave field from the scattered wave field as a means of suppressing source-side complexity. A variety of deconvolution techniques exist including damped spectral division, Wiener filtering, iterative time-domain deconvolution, and the multitaper method. All of these techniques require estimates of noise characteristics as input parameters. We present a deconvolution method based on transdimensional Hierarchical Bayesian inference in which both noise magnitude and noise correlation are used as parameters in calculating the likelihood probability distribution. Because the noise for P-S and S-P conversion analysis in terms of receiver functions is a combination of both background noise - which is relatively easy to characterize - and signal-generated noise - which is much more difficult to quantify - we treat measurement errors as an unknown quantity, characterized by a probability density function whose mean and variance are model parameters. This transdimensional Hierarchical Bayesian approach has been successfully used previously in the inversion of receiver functions in terms of shear and compressional wave speeds of an unknown number of layers [1]. In our method we used a Markov chain Monte Carlo (MCMC) algorithm to find the receiver function that best fits the data while accurately assessing the noise parameters. To parameterize the receiver function, we model it as an unknown number of Gaussians of unknown amplitude and width. The algorithm takes multiple steps before calculating the acceptance probability of a new model, in order to avoid getting trapped in local misfit minima. Using both observed and synthetic data, we show that the MCMC deconvolution method can accurately obtain a receiver function as well as an estimate of the noise parameters given the parent and daughter components. Furthermore, we demonstrate that this new approach is far less susceptible to generating spurious features even at high noise levels. Finally, the method yields not only the most-likely receiver function, but also quantifies its full uncertainty. [1] Bodin, T., M. Sambridge, H. Tkalčić, P. Arroucau, K. Gallagher, and N. Rawlinson (2012), Transdimensional inversion of receiver functions and surface wave dispersion, J. Geophys. Res., 117, B02301
Kratochvíla, Jiří; Jiřík, Radovan; Bartoš, Michal; Standara, Michal; Starčuk, Zenon; Taxt, Torfinn
2016-03-01
One of the main challenges in quantitative dynamic contrast-enhanced (DCE) MRI is estimation of the arterial input function (AIF). Usually, the signal from a single artery (ignoring contrast dispersion, partial volume effects and flow artifacts) or a population average of such signals (also ignoring variability between patients) is used. Multi-channel blind deconvolution is an alternative approach avoiding most of these problems. The AIF is estimated directly from the measured tracer concentration curves in several tissues. This contribution extends the published methods of multi-channel blind deconvolution by applying a more realistic model of the impulse residue function, the distributed capillary adiabatic tissue homogeneity model (DCATH). In addition, an alternative AIF model is used and several AIF-scaling methods are tested. The proposed method is evaluated on synthetic data with respect to the number of tissue regions and to the signal-to-noise ratio. An initial evaluation on clinical data (renal cell carcinoma patients before and after the beginning of treatment) gave consistent results and indicates more reliable and less noise-sensitive perfusion parameter estimates. Blind multi-channel deconvolution using the DCATH model might be a method of choice for AIF estimation in a clinical setup. © 2015 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Yu, Jian; Yin, Qian; Guo, Ping; Luo, A.-li
2014-09-01
This paper presents an efficient method for the extraction of astronomical spectra from two-dimensional (2D) multifibre spectrographs based on the regularized least-squares QR-factorization (LSQR) algorithm. We address two issues: we propose a modified Gaussian point spread function (PSF) for modelling the 2D PSF from multi-emission-line gas-discharge lamp images (arc images), and we develop an efficient deconvolution method to extract spectra in real circumstances. The proposed modified 2D Gaussian PSF model can fit various types of 2D PSFs, including different radial distortion angles and ellipticities. We adopt the regularized LSQR algorithm to solve the sparse linear equations constructed from the sparse convolution matrix, which we designate the deconvolution spectrum extraction method. Furthermore, we implement a parallelized LSQR algorithm based on graphics processing unit programming in the Compute Unified Device Architecture to accelerate the computational processing. Experimental results illustrate that the proposed extraction method can greatly reduce the computational cost and memory use of the deconvolution method and, consequently, increase its efficiency and practicability. In addition, the proposed extraction method has a stronger noise tolerance than other methods, such as the boxcar (aperture) extraction and profile extraction methods. Finally, we present an analysis of the sensitivity of the extraction results to the radius and full width at half-maximum of the 2D PSF.
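The core numerical step can be sketched with SciPy's LSQR solver applied to a sparse system A f ≈ b, where A encodes the 2-D PSF and f is the spectrum to extract. The placeholder matrix below stands in for the real PSF-based convolution matrix, and the GPU/CUDA parallelization described in the abstract is not shown.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsqr

n_pix, n_flux = 2000, 400
A = sparse_random(n_pix, n_flux, density=0.01, format="csr")   # stand-in for the PSF matrix
f_true = np.random.rand(n_flux)
b = A @ f_true + 0.01 * np.random.randn(n_pix)                 # simulated detector pixels

f_est = lsqr(A, b, damp=0.1)[0]    # damp > 0 gives the regularized (damped least-squares) solution
```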
Angelis, G I; Reader, A J; Markiewicz, P J; Kotasidis, F A; Lionheart, W R; Matthews, J C
2013-08-07
Recent studies have demonstrated the benefits of a resolution model within iterative reconstruction algorithms in an attempt to account for effects that degrade the spatial resolution of the reconstructed images. However, these algorithms suffer from slower convergence rates, compared to algorithms where no resolution model is used, due to the additional need to solve an image deconvolution problem. In this paper, a recently proposed algorithm, which decouples the tomographic and image deconvolution problems within an image-based expectation maximization (EM) framework, was evaluated. This separation is convenient, because more computational effort can be placed on the image deconvolution problem and therefore accelerate convergence. Since the computational cost of solving the image deconvolution problem is relatively small, multiple image-based EM iterations do not significantly increase the overall reconstruction time. The proposed algorithm was evaluated using 2D simulations, as well as measured 3D data acquired on the high-resolution research tomograph. Results showed that bias reduction can be accelerated by interleaving multiple iterations of the image-based EM algorithm solving the resolution model problem, with a single EM iteration solving the tomographic problem. Significant improvements were observed particularly for voxels that were located on the boundaries between regions of high contrast within the object being imaged and for small regions of interest, where resolution recovery is usually more challenging. Minor differences were observed using the proposed nested algorithm, compared to the single iteration normally performed, when an optimal number of iterations is performed for each algorithm. However, using the proposed nested approach, convergence is significantly accelerated, enabling reconstruction using far fewer tomographic iterations (up to 70% fewer iterations for small regions). Nevertheless, the optimal number of nested image-based EM iterations is hard to define and should be selected according to the given application.
A time reversal algorithm in acoustic media with Dirac measure approximations
NASA Astrophysics Data System (ADS)
Bretin, Élie; Lucas, Carine; Privat, Yannick
2018-04-01
This article is devoted to the study of a photoacoustic tomography model, where one is led to consider the solution of the acoustic wave equation with a source term written as a separated-variables function of time and space, whose temporal component is in some sense close to the derivative of the Dirac distribution at t = 0. This models a continuous wave laser illumination performed during a short interval of time. We introduce an algorithm for reconstructing the space component of the source term from measurements of the solution recorded by sensors during a time T along the boundary of a connected bounded domain. It is based on the introduction of an auxiliary equivalent Cauchy problem, which allows an explicit reconstruction formula to be derived, followed by the use of a deconvolution procedure. Numerical simulations illustrate our approach. Finally, this algorithm is also extended to elasticity wave systems.
Free energy calculations: an efficient adaptive biasing potential method.
Dickson, Bradley M; Legoll, Frédéric; Lelièvre, Tony; Stoltz, Gabriel; Fleurat-Lessard, Paul
2010-05-06
We develop an efficient sampling and free energy calculation technique within the adaptive biasing potential (ABP) framework. By mollifying the density of states we obtain an approximate free energy and an adaptive bias potential that is computed directly from the population along the coordinates of the free energy. Because of the mollifier, the bias potential is "nonlocal", and its gradient admits a simple analytic expression. A single observation of the reaction coordinate can thus be used to update the approximate free energy at every point within a neighborhood of the observation. This greatly reduces the equilibration time of the adaptive bias potential. This approximation introduces two parameters: strength of mollification and the zero of energy of the bias potential. While we observe that the approximate free energy is a very good estimate of the actual free energy for a large range of mollification strength, we demonstrate that the errors associated with the mollification may be removed via deconvolution. The zero of energy of the bias potential, which is easy to choose, influences the speed of convergence but not the limiting accuracy. This method is simple to apply to free energy or mean force computation in multiple dimensions and does not involve second derivatives of the reaction coordinates, matrix manipulations nor on-the-fly adaptation of parameters. For the alanine dipeptide test case, the new method is found to gain as much as a factor of 10 in efficiency as compared to two basic implementations of the adaptive biasing force methods, and it is shown to be as efficient as well-tempered metadynamics with the postprocess deconvolution giving a clear advantage to the mollified density of states method.
LES-Modeling of a Partially Premixed Flame using a Deconvolution Turbulence Closure
NASA Astrophysics Data System (ADS)
Wang, Qing; Wu, Hao; Ihme, Matthias
2015-11-01
The modeling of the turbulence/chemistry interaction in partially premixed and multi-stream combustion remains an outstanding issue. By extending a recently developed constrained minimum mean-square error deconvolution (CMMSED) method, the objective of this work is to develop a source-term closure for turbulent multi-stream combustion. In this method, the chemical source term is obtained from a three-stream flamelet model, and CMMSED is used as the closure model, thereby eliminating the need for presumed PDF-modeling. The model is applied to LES of a piloted turbulent jet flame with inhomogeneous inlets, and simulation results are compared with experiments. Comparisons with presumed PDF-methods are performed, and issues regarding resolution and conservation of the CMMSED method are examined. The author would like to acknowledge the support of funding from the Stanford Graduate Fellowship.
Least-Squares Deconvolution of Compton Telescope Data with the Positivity Constraint
NASA Technical Reports Server (NTRS)
Wheaton, William A.; Dixon, David D.; Tumer, O. Tumay; Zych, Allen D.
1993-01-01
We describe a Direct Linear Algebraic Deconvolution (DLAD) approach to imaging of data from Compton gamma-ray telescopes. Imposition of the additional physical constraint, that all components of the model be non-negative, has been found to have a powerful effect in stabilizing the results, giving spatial resolution at or near the instrumental limit. A companion paper (Dixon et al. 1993) presents preliminary images of the Crab Nebula region using data from COMPTEL on the Compton Gamma-Ray Observatory.
An l1-TV Algorithm for Deconvolution with Salt and Pepper Noise
2009-04-01
Wohlberg, Brendt (T-7 Mathematical Modeling and Analysis, Los Alamos National Laboratory)
...salt and pepper noise, but the extension of this formulation to more general problems, such as deconvolution, has received little attention. We consider...
Ciesielski, Bartlomiej; Marciniak, Agnieszka; Zientek, Agnieszka; Krefft, Karolina; Cieszyński, Mateusz; Boguś, Piotr; Prawdzik-Dampc, Anita
2016-12-01
This study is about the accuracy of EPR dosimetry in bones based on deconvolution of the experimental spectra into the background (BG) and the radiation-induced signal (RIS) components. The model RIS's were represented by EPR spectra from irradiated enamel or bone powder; the model BG signals by EPR spectra of unirradiated bone samples or by simulated spectra. Samples of compact and trabecular bones were irradiated in the 30-270 Gy range and the intensities of their RIS's were calculated using various combinations of those benchmark spectra. The relationships between the dose and the RIS were linear (R2 > 0.995), with practically no difference between results obtained when using signals from irradiated enamel or bone as the model RIS. Use of different experimental spectra for the model BG resulted in variations in intercepts of the dose-RIS calibration lines, leading to systematic errors in reconstructed doses, in particular for high-BG samples of trabecular bone. These errors were reduced when simulated spectra instead of the experimental ones were used as the benchmark BG signal in the applied deconvolution procedures. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Cheng, Yao; Zhou, Ning; Zhang, Weihua; Wang, Zhiwei
2018-07-01
Minimum entropy deconvolution is a widely-used tool in machinery fault diagnosis, because it enhances the impulse component of the signal. The filter coefficients that greatly influence the performance of the minimum entropy deconvolution are calculated by an iterative procedure. This paper proposes an improved deconvolution method for the fault detection of rolling element bearings. The proposed method solves for the filter coefficients using the standard particle swarm optimization algorithm, assisted by a generalized spherical coordinate transformation. When the filter's performance is optimized for enhancing the impulses that indicate faults (namely, faulty rolling element bearings), the proposed method outperformed the classical minimum entropy deconvolution method. The proposed method was validated on simulated and experimental signals from railway bearings. In both simulation and experimental studies, the proposed method delivered better deconvolution performance than the classical minimum entropy deconvolution method, especially in the case of low signal-to-noise ratio.
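For context, the classical minimum entropy deconvolution baseline iterates a filter that maximizes the kurtosis (impulsiveness) of its output by repeatedly solving the associated normal equations; a compact sketch is given below, with the filter length and iteration count as assumptions and circular end effects ignored. The particle-swarm variant described above replaces this iteration with a direct search over spherically parameterized filter coefficients, which is not reproduced here.

```python
import numpy as np

def med_filter(x, L=30, n_iter=30):
    """Classical (Wiggins-style) MED sketch: return filter coefficients and filtered signal."""
    x = np.asarray(x, dtype=float)
    X = np.column_stack([np.roll(x, k) for k in range(L)])   # lagged data matrix, so y = X @ f
    R = X.T @ X                                               # autocorrelation (normal-equation) matrix
    f = np.zeros(L)
    f[0] = 1.0                                                # spike as the initial filter
    for _ in range(n_iter):
        y = X @ f
        b = X.T @ (y ** 3)                                    # cross-correlation with the cubed output
        f = np.linalg.solve(R, b)
        f /= np.linalg.norm(f)                                # keep the filter normalized
    return f, X @ f
```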
Thorium concentrations in the lunar surface. V - Deconvolution of the central highlands region
NASA Technical Reports Server (NTRS)
Metzger, A. E.; Etchegaray-Ramirez, M. I.; Haines, E. L.
1982-01-01
The distribution of thorium in the lunar central highlands measured from orbit by the Apollo 16 gamma-ray spectrometer is subjected to a deconvolution analysis to yield improved spatial resolution and contrast. Use of two overlapping data fields for complete coverage also provides a demonstration of the technique's ability to model concentrations several degrees beyond the data track. Deconvolution reveals an association between Th concentration and the Kant Plateau, Descartes Mountain and Cayley plains surface formations. The Kant Plateau and Descartes Mountains model with Th less than 1 part per million, which is typical of farside highlands but is infrequently seen over any other nearside highland portions of the Apollo 15 and 16 ground tracks. It is noted that, if the Cayley plains are the result of basin-forming impact ejecta, the distribution of Th concentration with longitude supports an origin from the Imbrium basin rather than the Nectaris or Orientale basins. Nectaris basin materials are found to have a Th concentration similar to that of the Descartes Mountains, evidence that the latter may have been emplaced as Nectaris basin impact deposits.
Klughammer, Christof; Schreiber, Ulrich
2016-05-01
A newly developed compact measuring system for assessment of transmittance changes in the near-infrared spectral region is described; it allows deconvolution of redox changes due to ferredoxin (Fd), P700, and plastocyanin (PC) in intact leaves. In addition, it can also simultaneously measure chlorophyll fluorescence. The major opto-electronic components as well as the principles of data acquisition and signal deconvolution are outlined. Four original pulse-modulated dual-wavelength difference signals are measured (785-840 nm, 810-870 nm, 870-970 nm, and 795-970 nm). Deconvolution is based on specific spectral information presented graphically in the form of 'Differential Model Plots' (DMP) of Fd, P700, and PC that are derived empirically from selective changes of these three components under appropriately chosen physiological conditions. Whereas information on maximal changes of Fd is obtained upon illumination after dark-acclimation, maximal changes of P700 and PC can be readily induced by saturating light pulses in the presence of far-red light. Using the information of DMP and maximal changes, the new measuring system enables on-line deconvolution of Fd, P700, and PC. The performance of the new device is demonstrated by some examples of practical applications, including fast measurements of flash relaxation kinetics and of the Fd, P700, and PC changes paralleling the polyphasic fluorescence rise upon application of a 300-ms pulse of saturating light.
Deconvolution method for accurate determination of overlapping peak areas in chromatograms.
Nelson, T J
1991-12-20
A method is described for deconvoluting chromatograms which contain overlapping peaks. Parameters can be selected to ensure that attenuation of peak areas is uniform over any desired range of peak widths. A simple extension of the method greatly reduces the negative overshoot frequently encountered with deconvolutions. The deconvoluted chromatograms are suitable for integration by conventional methods.
Deconvolution of interferometric data using interior point iterative algorithms
NASA Astrophysics Data System (ADS)
Theys, C.; Lantéri, H.; Aime, C.
2016-09-01
We address the problem of deconvolution of astronomical images that could be obtained with future large interferometers in space. The presentation is made in two complementary parts. The first part gives an introduction to image deconvolution with linear and nonlinear algorithms. The emphasis is on nonlinear iterative algorithms that verify the constraints of non-negativity and constant flux. The Richardson-Lucy algorithm appears there as a special case for photon counting conditions. More generally, the algorithm published recently by Lanteri et al. (2015) is based on scale-invariant divergences without assumptions on the statistical model of the data. The two proposed algorithms are interior-point algorithms, the latter being more efficient in terms of speed of calculation. These algorithms are applied to the deconvolution of simulated images corresponding to an interferometric system of 16 diluted telescopes in space. Two non-redundant configurations, one disposed around a circle and the other on a hexagonal lattice, are compared for their effectiveness on a simple astronomical object. The comparison is made in the direct and Fourier spaces. Raw "dirty" images have many artifacts due to replicas of the original object. Linear methods cannot remove these replicas, while iterative methods clearly show their efficacy in these examples.
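As a point of reference, the Richardson-Lucy special case mentioned above can be written in a few lines. The flux renormalization step added here is a simple way to respect the constant-flux constraint and is not the scale-invariant-divergence algorithm of Lanteri et al. (2015); the iteration count is an assumption.

```python
import numpy as np
from scipy.signal import fftconvolve

def rl_deconvolve(image, psf, n_iter=100, eps=1e-12):
    """Richardson-Lucy deconvolution of a 2-D image with non-negativity and constant flux."""
    image = np.asarray(image, dtype=float)
    psf = np.asarray(psf, dtype=float)
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    x = np.full_like(image, image.mean())              # positive initial estimate
    flux = image.sum()
    for _ in range(n_iter):
        ratio = image / (fftconvolve(x, psf, mode="same") + eps)
        x *= fftconvolve(ratio, psf_mirror, mode="same")
        x *= flux / x.sum()                             # enforce constant total flux
    return x
```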
Chemometric Data Analysis for Deconvolution of Overlapped Ion Mobility Profiles
NASA Astrophysics Data System (ADS)
Zekavat, Behrooz; Solouki, Touradj
2012-11-01
We present the details of a data analysis approach for deconvolution of ion mobility (IM) overlapped or unresolved species. This approach takes advantage of the ion fragmentation variations as a function of the IM arrival time. The data analysis involves the use of an in-house developed data preprocessing platform for the conversion of the original post-IM/collision-induced dissociation mass spectrometry (post-IM/CID MS) data to a Matlab compatible format for chemometric analysis. We show that principal component analysis (PCA) can be used to examine the post-IM/CID MS profiles for the presence of mobility-overlapped species. Subsequently, using an interactive self-modeling mixture analysis technique, we show how to calculate the total IM spectrum (TIMS) and CID mass spectrum for each component of the IM overlapped mixtures. Moreover, we show that PCA and IM deconvolution techniques provide complementary results to evaluate the validity of the calculated TIMS profiles. We use two binary mixtures with overlapping IM profiles, including (1) a mixture of two non-isobaric peptides (neurotensin (RRPYIL) and a hexapeptide (WHWLQL)), and (2) an isobaric sugar isomer mixture of raffinose and maltotriose, to demonstrate the applicability of the IM deconvolution.
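The PCA screening step can be sketched as follows, treating the post-IM/CID MS data as a matrix of arrival-time bins by fragment m/z channels; the random matrix is only a placeholder, and the subsequent self-modeling mixture analysis step is not shown.

```python
import numpy as np
from sklearn.decomposition import PCA

# placeholder data matrix: 50 arrival-time bins x 300 fragment m/z channels
X = np.random.rand(50, 300)

pca = PCA(n_components=5)
pca.fit(X)
# more than one significant component across an IM peak hints at mobility-overlapped species
print(pca.explained_variance_ratio_)
```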
Evaluation of uncertainty for regularized deconvolution: A case study in hydrophone measurements.
Eichstädt, S; Wilkens, V
2017-06-01
An estimation of the measurand in dynamic metrology usually requires a deconvolution based on a dynamic calibration of the measuring system. Since deconvolution is, mathematically speaking, an ill-posed inverse problem, some kind of regularization is required to render the problem stable and obtain usable results. Many approaches to regularized deconvolution exist in the literature, but the corresponding evaluation of measurement uncertainties is, in general, an unsolved issue. In particular, the uncertainty contribution of the regularization itself is a topic of great importance, because it has a significant impact on the estimation result. Here, a versatile approach is proposed to express prior knowledge about the measurand based on a flexible, low-dimensional modeling of an upper bound on the magnitude spectrum of the measurand. This upper bound allows the derivation of an uncertainty associated with the regularization method in line with the guidelines in metrology. As a case study for the proposed method, hydrophone measurements in medical ultrasound with an acoustic working frequency of up to 7.5 MHz are considered, but the approach is applicable for all kinds of estimation methods in dynamic metrology, where regularization is required and which can be expressed as a multiplication in the frequency domain.
Designing a stable feedback control system for blind image deconvolution.
Cheng, Shichao; Liu, Risheng; Fan, Xin; Luo, Zhongxuan
2018-05-01
Blind image deconvolution is one of the main low-level vision problems with wide applications. Many previous works manually design regularization to simultaneously estimate the latent sharp image and the blur kernel under a maximum a posteriori framework. However, it has been demonstrated that such joint estimation strategies may lead to an undesired trivial solution. In this paper, we present a novel perspective, using a stable feedback control system, to simulate the latent sharp image propagation. The controller of our system consists of regularization and guidance, which decide the sparsity and sharp features of the latent image, respectively. Furthermore, the formation model of the blurred image is introduced into the feedback process to keep the image restoration from deviating from the stable point. The stability analysis of the system indicates that the latent image propagation in the blind deconvolution task can be efficiently estimated and controlled by cues and priors. Thus the kernel estimate used for image restoration becomes more precise. Experimental results show that our system is effective on image propagation, and performs favorably against state-of-the-art blind image deconvolution methods on different benchmark image sets and special blurred images. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Oda, Hirokuni; Xuan, Chuang
2014-10-01
The development of pass-through superconducting rock magnetometers (SRM) has greatly promoted the collection of paleomagnetic data from continuous long-core samples. The output of pass-through measurement is smoothed and distorted due to convolution of magnetization with the magnetometer sensor response. Although several studies could restore high-resolution paleomagnetic signal through deconvolution of pass-through measurements, difficulties in accurately measuring the magnetometer sensor response have hindered the application of deconvolution. We acquired a reliable sensor response of an SRM at the Oregon State University based on repeated measurements of a precisely fabricated magnetic point source. In addition, we present an improved deconvolution algorithm based on Akaike's Bayesian Information Criterion (ABIC) minimization, incorporating new parameters to account for errors in sample measurement position and length. The new algorithm was tested using synthetic data constructed by convolving a "true" paleomagnetic signal containing an "excursion" with the sensor response. Realistic noise was added to the synthetic measurement using a Monte Carlo method based on the measurement noise distribution acquired from 200 repeated measurements of a u-channel sample. Deconvolution of 1000 synthetic measurements with realistic noise closely resembles the "true" magnetization, and successfully restored fine-scale magnetization variations including the "excursion." Our analyses show that inaccuracy in sample measurement position and length significantly affects the deconvolution estimation, and can be resolved using the new deconvolution algorithm. Optimized deconvolution of 20 repeated measurements of a u-channel sample yielded highly consistent deconvolution results and estimates of error in sample measurement position and length, demonstrating the reliability of the new deconvolution algorithm for real pass-through measurements.
NASA Astrophysics Data System (ADS)
Xuan, Chuang; Oda, Hirokuni
2015-11-01
The rapid accumulation of continuous paleomagnetic and rock magnetic records acquired from pass-through measurements on superconducting rock magnetometers (SRM) has greatly contributed to our understanding of the paleomagnetic field and paleo-environment. Pass-through measurements are inevitably smoothed and altered by the convolution effect of SRM sensor response, and deconvolution is needed to restore high-resolution paleomagnetic and environmental signals. Although various deconvolution algorithms have been developed, the lack of easy-to-use software has hindered the practical application of deconvolution. Here, we present standalone graphical software UDECON as a convenient tool to perform optimized deconvolution for pass-through paleomagnetic measurements using the algorithm recently developed by Oda and Xuan (Geochem Geophys Geosyst 15:3907-3924, 2014). With the preparation of a format file, UDECON can directly read pass-through paleomagnetic measurement files collected at different laboratories. After the SRM sensor response is determined and loaded to the software, optimized deconvolution can be conducted using two different approaches (i.e., "Grid search" and "Simplex method") with adjustable initial values or ranges for smoothness, corrections of sample length, and shifts in measurement position. UDECON provides a suite of tools to view conveniently and check various types of original measurement and deconvolution data. Multiple steps of measurement and/or deconvolution data can be compared simultaneously to check the consistency and to guide further deconvolution optimization. Deconvolved data together with the loaded original measurement and SRM sensor response data can be saved and reloaded for further treatment in UDECON. Users can also export the optimized deconvolution data to a text file for analysis in other software.
Huang, C.; Townshend, J.R.G.; Liang, S.; Kalluri, S.N.V.; DeFries, R.S.
2002-01-01
Measured and modeled point spread functions (PSF) of sensor systems indicate that a significant portion of the recorded signal of each pixel of a satellite image originates from outside the area represented by that pixel. This hinders the ability to derive surface information from satellite images on a per-pixel basis. In this study, the impact of the PSF of the Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m bands was assessed using four images representing different landscapes. Experimental results showed that though differences between pixels derived with and without PSF effects were small on the average, the PSF generally brightened dark objects and darkened bright objects. This impact of the PSF lowered the performance of a support vector machine (SVM) classifier by 5.4% in overall accuracy and increased the overall root mean square error (RMSE) by 2.4% in estimating subpixel percent land cover. An inversion method based on the known PSF model reduced the signals originating from surrounding areas by as much as 53%. This method differs from traditional PSF inversion deconvolution methods in that the PSF was adjusted with lower weighting factors for signals originating from neighboring pixels than those specified by the PSF model. By using this deconvolution method, the lost classification accuracy due to residual impact of PSF effects was reduced to only 1.66% in overall accuracy. The increase in the RMSE of estimated subpixel land cover proportions due to the residual impact of PSF effects was reduced to 0.64%. Spatial aggregation also effectively reduced the errors in estimated land cover proportion images. About 50% of the estimation errors were removed after applying the deconvolution method and aggregating derived proportion images to twice their dimensional pixel size. © 2002 Elsevier Science Inc. All rights reserved.
Deconvolution of acoustic emissions for source localization using time reverse modeling
NASA Astrophysics Data System (ADS)
Kocur, Georg Karl
2017-01-01
Impact experiments on small-scale slabs made of concrete and aluminum were carried out. Wave motion radiated from the epicenter of the impact was recorded as voltage signals by resonant piezoelectric transducers. Numerical simulations of the elastic wave propagation are performed to simulate the physical experiments. The Hertz theory of contact is applied to estimate the force impulse, which is subsequently used for the numerical simulation. Displacements at the transducer positions are calculated numerically. A deconvolution function is obtained by comparing the physical (voltage signal) and the numerical (calculated displacement) experiments. Acoustic emission signals due to pencil-lead breaks are recorded, deconvolved and applied for localization using time reverse modeling.
Crowded field photometry with deconvolved images.
NASA Astrophysics Data System (ADS)
Linde, P.; Spännare, S.
A local implementation of the Lucy-Richardson algorithm has been used to deconvolve a set of crowded stellar field images. The effects of deconvolution on detection limits as well as on photometric and astrometric properties have been investigated as a function of the number of deconvolution iterations. Results show that deconvolution improves detection of faint stars, although artifacts are also found. Deconvolution provides more stars measurable without significant degradation of positional accuracy. The photometric precision is affected by deconvolution in several ways. Errors due to unresolved images are notably reduced, while flux redistribution between stars and background increases the errors.
Improving ground-penetrating radar data in sedimentary rocks using deterministic deconvolution
Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.; Byrnes, A.P.
2003-01-01
Resolution is key to confidently identifying unique geologic features using ground-penetrating radar (GPR) data. Source wavelet "ringing" (related to bandwidth) in a GPR section limits resolution because of wavelet interference, and can smear reflections in time and/or space. The resultant potential for misinterpretation limits the usefulness of GPR. Deconvolution offers the ability to compress the source wavelet and improve temporal resolution. Unlike statistical deconvolution, deterministic deconvolution is mathematically simple and stable while providing the highest possible resolution because it uses the source wavelet unique to the specific radar equipment. Source wavelets generated in, transmitted through and acquired from air allow successful application of deterministic approaches to wavelet suppression. We demonstrate the validity of using a source wavelet acquired in air as the operator for deterministic deconvolution in a field application using "400-MHz" antennas at a quarry site characterized by interbedded carbonates with shale partings. We collected GPR data on a bench adjacent to cleanly exposed quarry faces in which we placed conductive rods to provide conclusive groundtruth for this approach to deconvolution. The best deconvolution results, which are confirmed by the conductive rods for the 400-MHz antenna tests, were observed for wavelets acquired when the transmitter and receiver were separated by 0.3 m. Applying deterministic deconvolution to GPR data collected in sedimentary strata at our study site resulted in an improvement in resolution (50%) and improved spatial location (0.10-0.15 m) of geologic features compared to the same data processed without deterministic deconvolution. The effectiveness of deterministic deconvolution for increased resolution and spatial accuracy of specific geologic features is further demonstrated by comparing results of deconvolved data with nondeconvolved data acquired along a 30-m transect immediately adjacent to a fresh quarry face. The results at this site support using deterministic deconvolution, which incorporates the GPR instrument's unique source wavelet, as a standard part of routine GPR data processing. © 2003 Elsevier B.V. All rights reserved.
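A minimal sketch of deterministic deconvolution with a source wavelet measured in air, applied trace by trace across a GPR section; the water-level constant and array shapes are assumptions, and the acquisition-specific processing described above is omitted.

```python
import numpy as np

def deconvolve_section(section, wavelet, water_level=0.01):
    """section: (n_traces, n_samples) GPR data; wavelet: source wavelet measured in air."""
    section = np.asarray(section, dtype=float)
    n = section.shape[1]
    W = np.fft.rfft(wavelet, n)
    power = np.abs(W) ** 2
    power = np.maximum(power, water_level * power.max())   # water level stabilizes small spectral values
    D = np.fft.rfft(section, n, axis=1)
    return np.fft.irfft(D * np.conj(W) / power, n, axis=1)
```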
Fast analytical spectral filtering methods for magnetic resonance perfusion quantification.
Reddy, Kasireddy V; Mitra, Abhishek; Yalavarthy, Phaneendra K
2016-08-01
The deconvolution in the perfusion weighted imaging (PWI) plays an important role in quantifying the MR perfusion parameters. The PWI application to stroke and brain tumor studies has become a standard clinical practice. The standard approach for this deconvolution is oscillatory-limited singular value decomposition (oSVD) and frequency domain deconvolution (FDD). The FDD is widely recognized as the fastest approach currently available for deconvolution of MR perfusion data. In this work, two fast deconvolution methods (namely, analytical Fourier filtering and analytical Showalter spectral filtering) are proposed. Through systematic evaluation, the proposed methods are shown to be computationally efficient and quantitatively accurate compared to FDD and oSVD.
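For illustration, a generic regularized frequency-domain perfusion deconvolution is sketched below, recovering k(t) = CBF·R(t) from the tissue curve and AIF and taking CBF as its peak. A Tikhonov-style filter is used here as a stand-in, not the analytical Fourier or Showalter filters proposed in the paper; the time step and damping factor are assumptions.

```python
import numpy as np

def residue_function(tissue, aif, dt=1.0, lam=0.1):
    """Return the scaled residue function k(t) = CBF * R(t) and a CBF estimate."""
    n = len(tissue)
    C = np.fft.rfft(tissue, 2 * n)              # zero-pad to limit circular wrap-around
    A = np.fft.rfft(aif, 2 * n) * dt            # dt approximates the convolution integral
    K = C * np.conj(A) / (np.abs(A) ** 2 + lam * np.max(np.abs(A) ** 2))
    k = np.fft.irfft(K, 2 * n)[:n]
    return k, k.max()                           # peak of k(t) serves as the CBF estimate
```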
NASA Astrophysics Data System (ADS)
Rajendran, Kishore; Leng, Shuai; Jorgensen, Steven M.; Abdurakhimova, Dilbar; Ritman, Erik L.; McCollough, Cynthia H.
2017-03-01
Changes in arterial wall perfusion are an indicator of early atherosclerosis. This is characterized by an increased spatial density of vasa vasorum (VV), the micro-vessels that supply oxygen and nutrients to the arterial wall. Detection of increased VV during contrast-enhanced computed tomography (CT) imaging is limited due to contamination from blooming effect from the contrast-enhanced lumen. We report the application of an image deconvolution technique using a measured system point-spread function, on CT data obtained from a photon-counting CT system to reduce blooming and to improve the CT number accuracy of arterial wall, which enhances detection of increased VV. A phantom study was performed to assess the accuracy of the deconvolution technique. A porcine model was created with enhanced VV in one carotid artery; the other carotid artery served as a control. CT images at an energy range of 25-120 keV were reconstructed. CT numbers were measured for multiple locations in the carotid walls and for multiple time points, pre and post contrast injection. The mean CT number in the carotid wall was compared between the left (increased VV) and right (control) carotid arteries. Prior to deconvolution, results showed similar mean CT numbers in the left and right carotid wall due to the contamination from blooming effect, limiting the detection of increased VV in the left carotid artery. After deconvolution, the mean CT number difference between the left and right carotid arteries was substantially increased at all the time points, enabling detection of the increased VV in the artery wall.
Fortuna Tessera, Venus - Evidence of horizontal convergence and crustal thickening
NASA Technical Reports Server (NTRS)
Vorder Bruegge, R. W.; Head, J. W.
1989-01-01
Structural and tectonic patterns mapped in Fortuna Tessera are interpreted to reflect a change in the style and intensity of deformation from east to west, beginning with simple tessera terrain at relatively low topographic elevations in the east and progressing through increasingly complex deformation patterns and higher topography to Maxwell Montes in the West. These morphologic and topographic patterns are consistent with east-to-west convergence and compression and the increasing elevations are interpreted to be due to crustal thickening processes associated with the convergent deformational environment. Using an Airy isostatic model, crustal thicknesses of approximately 35 km for the initial tessera terrain, and crustal thicknesses of over 100 km for the Maxwell Montes region are predicted. Detailed mapping with Magellan data will permit the deconvolution of individual components and structures in this terrain.
Ramachandra, Ranjan; de Jonge, Niels
2012-01-01
Three-dimensional (3D) data sets were recorded of gold nanoparticles placed on both sides of silicon nitride membranes using focal series aberration-corrected scanning transmission electron microscopy (STEM). The deconvolution of the 3D datasets was optimized to obtain the highest possible axial resolution. The deconvolution involved two different point spread functions (PSF), each calculated iteratively via blind deconvolution. Supporting membranes of different thicknesses were tested to study the effect of beam broadening on the deconvolution. It was found that several iterations of deconvolution were efficient in reducing the imaging noise. With an increasing number of iterations, the axial resolution was increased, and most of the structural information was preserved. Additional iterations improved the axial resolution by up to a factor of 4 to 6, depending on the particular dataset, and to 8 nm at best, but at the cost of a reduction of the lateral size of the nanoparticles in the image. Thus, the deconvolution procedure optimized for highest axial resolution is best suited for applications where one is interested in the 3D locations of nanoparticles only. PMID:22152090
NASA Astrophysics Data System (ADS)
Zhou, T.; Popescu, S. C.; Krause, K.; Sheridan, R.; Ku, N. W.
2014-12-01
Increasing attention has been paid in the remote sensing community to the next generation Light Detection and Ranging (lidar) waveform data systems for extracting information on topography and the vertical structure of vegetation. However, processing waveform lidar data raises some challenges compared to analyzing discrete return data. The overall goal of this study was to present a robust deconvolution algorithm, the Gold algorithm, used to deconvolve waveforms in a lidar dataset acquired within a 60 x 60 m study area located in the Harvard Forest in Massachusetts. The waveform lidar data were collected by the National Ecological Observatory Network (NEON). Specific objectives were to: (1) explore advantages and limitations of various waveform processing techniques to derive topography and canopy height information; (2) develop and implement a novel deconvolution algorithm, the Gold algorithm, to extract elevation and canopy metrics; and (3) compare results and assess accuracy. We modeled lidar waveforms with a mixture of Gaussian functions using a nonlinear least-squares (NLS) algorithm implemented in R and derived a Digital Terrain Model (DTM) and canopy height. We compared our waveform-derived topography and canopy height measurements using the Gold deconvolution algorithm to results using the Richardson-Lucy algorithm. Our findings show that the Gold algorithm performed better than the Richardson-Lucy algorithm in terms of recovering the hidden echoes and detecting false echoes for generating a DTM, which indicates that the Gold algorithm could potentially be applied to processing of waveform lidar data to derive information on terrain elevation and canopy characteristics.
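The decomposition step, fitting a (possibly deconvolved) waveform with a mixture of Gaussians by nonlinear least squares, can be sketched as below; the synthetic two-echo waveform and initial guesses are illustrative assumptions. This complements the Gold-iteration sketch given earlier in this section.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_mix(t, *params):
    """Sum of Gaussian echoes; params = (amplitude, center, sigma) repeated per echo."""
    y = np.zeros_like(t)
    for a, mu, sigma in np.reshape(params, (-1, 3)):
        y += a * np.exp(-0.5 * ((t - mu) / sigma) ** 2)
    return y

t = np.arange(0, 100.0)                               # time bins of one return waveform
truth = [1.0, 30, 3.0,   0.6, 45, 4.0]                # two echoes, e.g. canopy and ground
w = gaussian_mix(t, *truth) + 0.02 * np.random.randn(t.size)
guess = [0.8, 28, 2.0,   0.5, 47, 3.0]
popt, pcov = curve_fit(gaussian_mix, t, w, p0=guess)  # fitted echo amplitudes, positions, widths
```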
Sparse Solution of Fiber Orientation Distribution Function by Diffusion Decomposition
Yeh, Fang-Cheng; Tseng, Wen-Yih Isaac
2013-01-01
Fiber orientation is the key information in diffusion tractography. Several deconvolution methods have been proposed to obtain fiber orientations by estimating a fiber orientation distribution function (ODF). However, the L2 regularization used in deconvolution often leads to false fibers that compromise the specificity of the results. To address this problem, we propose a method called diffusion decomposition, which obtains a sparse solution of fiber ODF by decomposing the diffusion ODF obtained from q-ball imaging (QBI), diffusion spectrum imaging (DSI), or generalized q-sampling imaging (GQI). A simulation study, a phantom study, and an in-vivo study were conducted to examine the performance of diffusion decomposition. The simulation study showed that diffusion decomposition was more accurate than both constrained spherical deconvolution and ball-and-sticks model. The phantom study showed that the angular error of diffusion decomposition was significantly lower than those of constrained spherical deconvolution at 30° crossing and ball-and-sticks model at 60° crossing. The in-vivo study showed that diffusion decomposition can be applied to QBI, DSI, or GQI, and the resolved fiber orientations were consistent regardless of the diffusion sampling schemes and diffusion reconstruction methods. The performance of diffusion decomposition was further demonstrated by resolving crossing fibers on a 30-direction QBI dataset and a 40-direction DSI dataset. In conclusion, diffusion decomposition can improve angular resolution and resolve crossing fibers in datasets with low SNR and substantially reduced number of diffusion encoding directions. These advantages may be valuable for human connectome studies and clinical research. PMID:24146772
Photoluminescence and thermoluminescence properties of BaGa2O4
NASA Astrophysics Data System (ADS)
Noto, L. L.; Poelman, D.; Orante-Barrón, V. R.; Swart, H. C.; Mathevula, L. E.; Nyenge, R.; Chithambo, M.; Mothudi, B. M.; Dhlamini, M. S.
2018-04-01
Rare-earth-free luminescent materials are fast becoming important as the cost of rare earth ions gradually increases. In this work, a rare-earth-free BaGa2O4 luminescent compound was prepared by solid state chemical reaction, which was confirmed to have a single phase by X-ray Diffraction. The Backscattered Electron image and Energy Dispersive X-ray spectroscopy maps confirmed irregular particle shapes and homogeneous compound formation, respectively. The Photoluminescence spectrum displayed broad emission between 350 and 650 nm, which was deconvoluted into two components. The photoluminescence excitation peak was positioned at 254 nm, which corresponds with the band-to-band position observed from the diffuse reflectance spectrum. The band gap was extrapolated to 4.65 ± 0.02 eV using the Kubelka-Munk model. The preliminary thermoluminescence results indicated that the kinetics involved were neither of first nor second order. Additionally, the activation energy of the electrons within the trap centres was approximated to 0.61 ± 0.01 eV using the Initial Rise model.
SU-F-T-478: Effect of Deconvolution in Analysis of Mega Voltage Photon Beam Profiles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muthukumaran, M; Manigandan, D; Murali, V
2016-06-15
Purpose: To study and compare the penumbra of 6 MV and 15 MV photon beam profiles after deconvolution of measurements from ionization chambers of different volumes. Methods: A 0.125 cc Semi-Flex chamber, a Markus chamber and a PTW Farmer chamber were used to measure the in-plane and cross-plane profiles at 5 cm depth for 6 MV and 15 MV photons. The profiles were measured for various field sizes from 2 × 2 cm to 30 × 30 cm. PTW TBA scan software was used for the measurements, and the "deconvolution" functionality in the software was used to remove the volume averaging effect due to the finite volume of the chamber along the lateral and longitudinal directions for all the ionization chambers. The predicted true profile was compared and the change in penumbra before and after deconvolution was studied. Results: After deconvolution, the penumbra decreased by 1 mm for field sizes ranging from 2 × 2 cm to 20 × 20 cm. This was observed along both lateral and longitudinal directions. However, for field sizes from 20 × 20 to 30 × 30 cm the difference in penumbra was around 1.2 to 1.8 mm. This was observed for both 6 MV and 15 MV photon beams. The penumbra was always smaller in the deconvoluted profiles for all the ionization chambers involved in the study. The variation in the penumbra differences between the deconvoluted profiles along the lateral and longitudinal directions was on the order of 0.1 to 0.3 mm for all the chambers under study. Deconvolution of the profiles along the longitudinal direction for the Farmer chamber was not good and is not comparable with the other deconvoluted profiles. Conclusion: The results of the deconvoluted profiles for the 0.125 cc and Markus chambers were comparable, and the deconvolution functionality can be used to overcome the volume averaging effect.
NASA Astrophysics Data System (ADS)
Zhang, Yongliang; Day-Uei Li, David
2017-02-01
This comment is to clarify that Poisson noise, rather than Gaussian noise, should be included to assess the performance of least-squares deconvolution with Laguerre expansion (LSD-LE) for analysing fluorescence lifetime imaging data obtained from time-resolved systems. We also corrected an equation in the paper. As the LSD-LE method is rapid and has the potential to be widely applied not only for diagnostic but also for wider bioimaging applications, it is desirable to have precise noise models and equations.
Hybrid sparse blind deconvolution: an implementation of SOOT algorithm to real data
NASA Astrophysics Data System (ADS)
Pakmanesh, Parvaneh; Goudarzi, Alireza; Kourki, Meisam
2018-06-01
Extracting information from seismic data depends on deconvolution as an important processing step; it provides the reflectivity series by signal compression. This compression can be obtained by removing the wavelet effects from the traces. Blind deconvolution has recently provided reliable performance for sparse signal recovery. In this study, two deconvolution methods have been implemented on seismic data; their combination provides a robust spiking deconvolution approach. This hybrid deconvolution is applied using sparse deconvolution (the MM algorithm) and the Smoothed One-Over-Two (SOOT) algorithm in a chain. The MM algorithm is based on the minimization of a cost function defined by the l1 and l2 norms. After applying the two algorithms to the seismic data, the SOOT algorithm provided well-compressed data with a higher resolution than the MM algorithm. The SOOT algorithm requires initial values when applied to real data, such as the wavelet coefficients and reflectivity series, which can be obtained through the MM algorithm. The computational cost of the hybrid method is high, and it should be applied to post-stack or pre-stack seismic data from regions of complex structure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Filipuzzi, M; Garrigo, E; Venencia, C
2014-06-01
Purpose: To calculate the spatial response function of various radiation detectors, to evaluate its dependence on the field size, and to analyze corrections of small-field profiles by deconvolution techniques. Methods: Crossline profiles were measured on a Novalis Tx 6 MV beam with an HDMLC. The configuration setup was SSD = 100 cm and depth = 5 cm. Five fields were studied (200×200 mm2, 100×100 mm2, 20×20 mm2, 10×10 mm2 and 5×5 mm2) and measurements were made with passive detectors (EBT3 radiochromic films and TLD700 thermoluminescent detectors), ionization chambers (PTW30013, PTW31003, CC04 and PTW31016) and diodes (PTW60012 and IBA SFD). The results of the passive detectors were adopted as the actual beam profile. To calculate the detector kernels, modeled by Gaussian functions, an iterative process based on a least squares criterion was used. The deconvolutions of the measured profiles were calculated with the Richardson-Lucy method. Results: The profiles of the passive detectors corresponded with a difference in the penumbra of less than 0.1 mm. Both diodes resolve the profiles with an overestimation of the penumbra smaller than 0.2 mm. For the other detectors, response functions were calculated and resulted in Gaussian functions with a standard deviation approximately equal to the radius of the detector under study (with a variation of less than 3%). The corrected profiles resolve the penumbra with less than 1% error. Major discrepancies were observed for cases in extreme conditions (PTW31003 and 5×5 mm2 field size). Conclusion: This work concludes that the response function of a radiation detector is independent of the field size, even for small radiation beams. The profile correction, using deconvolution techniques and response functions with a standard deviation equal to the radius of the detector, gives penumbra values with less than 1% difference from the real profile. The implementation of this technique allows the real profile to be estimated, free from the effects of the detector used for the acquisition.
Deconvoluting complex structural histories archived in brittle fault zones
NASA Astrophysics Data System (ADS)
Viola, G.; Scheiber, T.; Fredin, O.; Zwingmann, H.; Margreth, A.; Knies, J.
2016-11-01
Brittle deformation can saturate the Earth's crust with faults and fractures in an apparently chaotic fashion. The details of brittle deformational histories and implications on, for example, seismotectonics and landscape, can thus be difficult to untangle. Fortunately, brittle faults archive subtle details of the stress and physical/chemical conditions at the time of initial strain localization and eventual subsequent slip(s). Hence, reading those archives offers the possibility to deconvolute protracted brittle deformation. Here we report K-Ar isotopic dating of synkinematic/authigenic illite coupled with structural analysis to illustrate an innovative approach to the high-resolution deconvolution of brittle faulting and fluid-driven alteration of a reactivated fault in western Norway. Permian extension preceded coaxial reactivation in the Jurassic and Early Cretaceous fluid-related alteration with pervasive clay authigenesis. This approach represents important progress towards time-constrained structural models, where illite characterization and K-Ar analysis are a fundamental tool to date faulting and alteration in crystalline rocks.
NASA Technical Reports Server (NTRS)
Schade, David J.; Elson, Rebecca A. W.
1993-01-01
We describe experiments with deconvolutions of simulations of deep HST Wide Field Camera images containing faint, compact galaxies to determine under what circumstances there is a quantitative advantage to image deconvolution, and explore whether it is (1) helpful for distinguishing between stars and compact galaxies, or between spiral and elliptical galaxies, and whether it (2) improves the accuracy with which characteristic radii and integrated magnitudes may be determined. The Maximum Entropy and Richardson-Lucy deconvolution algorithms give the same results. For medium and low S/N images, deconvolution does not significantly improve our ability to distinguish between faint stars and compact galaxies, nor between spiral and elliptical galaxies. Measurements from both raw and deconvolved images are biased and must be corrected; it is easier to quantify and remove the biases for cases that have not been deconvolved. We find no benefit from deconvolution for measuring luminosity profiles, but these results are limited to low S/N images of very compact (often undersampled) galaxies.
NASA Astrophysics Data System (ADS)
Tian, Yu; Rao, Changhui; Wei, Kai
2008-07-01
Adaptive optics (AO) can only partially compensate for image blur caused by atmospheric turbulence, owing to observing conditions and hardware restrictions. A post-processing method based on frame selection and multi-frame blind deconvolution is proposed to improve images partially corrected by adaptive optics. Frames suitable for blind deconvolution are selected from the recorded AO closed-loop frame series by the frame selection technique and then used in multi-frame blind deconvolution. No prior knowledge is required except for the positivity constraint in the blind deconvolution. The use of multiple frames improves the stability and convergence of the blind deconvolution algorithm. The method was applied to the restoration of images of celestial bodies observed with the 1.2 m telescope equipped with a 61-element adaptive optics system at Yunnan Observatory. The results show that the method can effectively improve images partially corrected by adaptive optics.
Sp and Ps Receiver Function Imaging of the Cenozoic and Precambrian US
NASA Astrophysics Data System (ADS)
Keenan, James; Thurner, Sally; Levander, Alan
2013-04-01
Using teleseismic USArray data we have made Ps and Sp receiver function common conversion point stacked image volumes that extend from the Pacific coast to approximately the Mississippi River. We have used iterative time-domain deconvolution, water-level frequency-domain deconvolution, and least squares inverse filtering to form receiver functions in various frequency bands (Ps: 1.0 and 0.5 Hz; Sp: 0.2 and 0.1 Hz). The receiver functions were stacked to give an image volume for each frequency band using a hybrid velocity model made by combining Crust2.0 (Bassin et al., 2000) and finite-frequency P and S wave tomography models (Schmandt and Humphreys, 2010; and Schmandt, unpublished). We contrast the lithospheric and asthenospheric structure of the western U.S., modified by Cenozoic tectonism, with that of the Precambrian central U.S. Here we describe two notable features: (1) In the Sp image volumes the upper mantle beneath the western U.S. differs dramatically from that to the east of the Rocky Mountain front. In the western U.S. the lithosphere is either thin, or highly variable in thickness (40-140 km), with neither the lithosphere nor the asthenosphere having much internal structure (e.g., Levander and Miller, 2012). In contrast, east of the Rocky Mountain front the lithosphere steadily deepens to > 150 km and shows relatively strong internal layering. Individual positive and negative conversions are coherent over hundreds of kilometers, suggesting the thrust stacking model of cratonic formation. (2) Beneath parts of the Archean Wyoming Province (Henstock et al., 1998; Snelson et al., 1998; Gorman et al., 2002; Mahan et al., 2012), much of the Great Plains and part of the Midwest lies a vast, variable-thickness (up to ~25 km), high-velocity crustal layer. This layer lies roughly north of the Grenville Front, underlying much of the Yavapai-Mazatzal Province east of the Rockies, parts of the Superior Province, and possibly parts of the Trans-Hudson province.
Blind source deconvolution for deep Earth seismology
NASA Astrophysics Data System (ADS)
Stefan, W.; Renaut, R.; Garnero, E. J.; Lay, T.
2007-12-01
We present an approach to automatically estimate an empirical source characterization of deep earthquakes recorded teleseismically and subsequently remove the source from the recordings by applying regularized deconvolution. A principal goal in this work is to effectively deblur the seismograms, resulting in more impulsive and narrower pulses, permitting better constraints in high resolution waveform analyses. Our method consists of two stages: (1) we first estimate the empirical source by automatically registering traces to their 1st principal component with a weighting scheme based on their deviation from this shape; we then use this shape as an estimate of the earthquake source. (2) We compare different deconvolution techniques to remove the source characteristic from the trace. In particular, Total Variation (TV) regularized deconvolution is used, which exploits the fact that most natural signals have an underlying sparseness in an appropriate basis, in this case, impulsive onsets of seismic arrivals. We show several examples of deep focus Fiji-Tonga region earthquakes for the phases S and ScS, comparing source responses for the separate phases. TV deconvolution is compared to water-level deconvolution, Tikhonov deconvolution, and L1-norm deconvolution, for both data and synthetics. This approach significantly improves our ability to study subtle waveform features that are commonly masked by either noise or the earthquake source. Eliminating source complexities improves our ability to resolve deep mantle triplications, waveform complexities associated with possible double crossings of the post-perovskite phase transition, as well as increasing stability in waveform analyses used for deep mantle anisotropy measurements.
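For reference, a minimal sketch of the water-level deconvolution mentioned as one of the compared techniques (not the TV-regularized method the authors advocate); the function name and the default water-level fraction are illustrative.

```python
import numpy as np

def water_level_deconvolution(trace, source, water_level=0.01):
    """Deconvolve an empirical source wavelet from a seismic trace.

    The source power spectrum in the denominator is clipped ("water level")
    at water_level * max|S(f)|^2 to stabilize division near spectral notches.
    """
    n = len(trace)
    T = np.fft.rfft(trace, n)
    S = np.fft.rfft(source, n)
    power = np.abs(S) ** 2
    floor = water_level * power.max()
    D = T * np.conj(S) / np.maximum(power, floor)
    return np.fft.irfft(D, n)
```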
NASA Astrophysics Data System (ADS)
Bisdas, Sotirios; Konstantinou, George N.; Sherng Lee, Puor; Thng, Choon Hua; Wagenblast, Jens; Baghi, Mehran; San Koh, Tong
2007-10-01
The objective of this work was to evaluate the feasibility of a two-compartment distributed-parameter (DP) tracer kinetic model to generate functional images of several physiologic parameters from dynamic contrast-enhanced CT data obtained from patients with extracranial head and neck tumors and to compare the DP functional images to those obtained by deconvolution-based DCE-CT data analysis. We performed post-processing of DCE-CT studies, obtained from 15 patients with benign and malignant head and neck cancer. We introduced a DP model of the impulse residue function for a capillary-tissue exchange unit, which accounts for the processes of convective transport and capillary-tissue exchange. The calculated parametric maps represented blood flow (F), intravascular blood volume (v1), extravascular extracellular blood volume (v2), vascular transit time (t1), permeability-surface area product (PS), transfer ratios k12 and k21, and the fraction of extracted tracer (E). Based on the same regions of interest (ROI) analysis, we calculated the tumor blood flow (BF), blood volume (BV) and mean transit time (MTT) by using a modified deconvolution-based analysis taking into account the extravasation of the contrast agent for PS imaging. We compared the corresponding values by using Bland-Altman plot analysis. We outlined 73 ROIs including tumor sites, lymph nodes and normal tissue. The Bland-Altman plot analysis revealed that the two methods showed an acceptable degree of agreement for blood flow, and, thus, can be used interchangeably for measuring this parameter. Slightly worse agreement was observed between v1 in the DP model and BV, but even here the two tracer kinetic analyses can be used interchangeably. Whether both techniques may be used interchangeably remained questionable for t1 and MTT, as well as for measurements of the PS values. The application of the proposed DP model is feasible in the clinical routine and it can be used interchangeably for measuring blood flow and vascular volume with the commercially available reference standard of the deconvolution-based approach. The lack of substantial agreement between the measurements of vascular transit time and permeability-surface area product may be attributed to the different tracer kinetic principles employed by both models and the detailed capillary tissue exchange physiological modeling of the DP technique.
Erny, Guillaume L; Moeenfard, Marzieh; Alves, Arminda
2015-02-01
In this manuscript, the separation of kahweol and cafestol esters from Arabica coffee brews was investigated using liquid chromatography with a diode array detector. When detected in conjunction, cafestol and kahweol esters eluted together, but, after optimization, the kahweol esters could be selectively detected by setting the wavelength at 290 nm to allow their quantification. Such an approach was not possible for the cafestol esters, and spectral deconvolution was used to obtain deconvoluted chromatograms. In each of those chromatograms, the four esters were baseline separated, allowing for the quantification of the eight targeted compounds. Because kahweol esters could be quantified either using the chromatogram obtained by setting the wavelength at 290 nm or using the deconvoluted chromatogram, those compounds were used to compare the analytical performances. Slightly better limits of detection were obtained using the deconvoluted chromatogram. Identical concentrations were found in a real sample with both approaches. The peak areas in the deconvoluted chromatograms were repeatable (intraday repeatability of 0.8%, interday repeatability of 1.0%). This work demonstrates the accuracy of spectral deconvolution when using liquid chromatography to mathematically separate coeluting compounds using the full spectra recorded by a diode array detector. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Bouridane, Ahmed; Ling, Bingo Wing-Kuen
2018-01-01
This paper presents an unsupervised learning algorithm for sparse nonnegative matrix factor time–frequency deconvolution with optimized fractional β-divergence. The β-divergence is a group of cost functions parametrized by a single parameter β. The Itakura–Saito divergence, Kullback–Leibler divergence and Least Square distance are special cases that correspond to β=0, 1, 2, respectively. This paper presents a generalized algorithm that uses a flexible range of β that includes fractional values. It describes a maximization–minimization (MM) algorithm leading to the development of a fast convergence multiplicative update algorithm with guaranteed convergence. The proposed model operates in the time–frequency domain and decomposes an information-bearing matrix into two-dimensional deconvolution of factor matrices that represent the spectral dictionary and temporal codes. The deconvolution process has been optimized to yield sparse temporal codes through maximizing the likelihood of the observations. The paper also presents a method to estimate the fractional β value. The method is demonstrated on separating audio mixtures recorded from a single channel. The paper shows that the extraction of the spectral dictionary and temporal codes is significantly more efficient by using the proposed algorithm and subsequently leads to better source separation performance. Experimental tests and comparisons with other factorization methods have been conducted to verify its efficacy. PMID:29702629
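For orientation, a hedged sketch of the multiplicative-update machinery for the Kullback-Leibler case (β = 1), i.e., plain NMF rather than the sparse nonnegative matrix factor deconvolution with fractional β described above; the function name, iteration count and initialization are illustrative.

```python
import numpy as np

def nmf_kl_multiplicative(V, rank, n_iter=200, eps=1e-9, seed=0):
    """Multiplicative updates for plain NMF under the KL divergence (beta = 1).

    This is the ordinary, fixed-beta, non-deconvolutive member of the
    beta-divergence family; it only illustrates the update mechanics.
    """
    V = np.asarray(V, dtype=float)
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    ones = np.ones_like(V)
    for _ in range(n_iter):
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (ones @ H.T + eps)
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.T @ ones + eps)
    return W, H
```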
Ultra-high resolution computed tomography imaging
Paulus, Michael J.; Sari-Sarraf, Hamed; Tobin, Jr., Kenneth William; Gleason, Shaun S.; Thomas, Jr., Clarence E.
2002-01-01
A method for ultra-high resolution computed tomography imaging, comprising the steps of: focusing a high energy particle beam, for example x-rays or gamma-rays, onto a target object; acquiring a 2-dimensional projection data set representative of the target object; generating a corrected projection data set by applying a deconvolution algorithm, having an experimentally determined transfer function, to the 2-dimensional data set; storing the corrected projection data set; incrementally rotating the target object through an angle of approximately 180°, and after each incremental rotation, repeating the radiating, acquiring, generating and storing steps; and, after the rotating step, applying a cone-beam algorithm, for example a modified tomographic reconstruction algorithm, to the corrected projection data sets to generate a 3-dimensional image. The size of the spot focus of the beam is reduced to not greater than approximately 1 micron, and even to not greater than approximately 0.5 microns.
Calibration of Wide-Field Deconvolution Microscopy for Quantitative Fluorescence Imaging
Lee, Ji-Sook; Wee, Tse-Luen (Erika); Brown, Claire M.
2014-01-01
Deconvolution enhances contrast in fluorescence microscopy images, especially in low-contrast, high-background wide-field microscope images, improving characterization of features within the sample. Deconvolution can also be combined with other imaging modalities, such as confocal microscopy, and most software programs seek to improve resolution as well as contrast. Quantitative image analyses require instrument calibration and with deconvolution, necessitate that this process itself preserves the relative quantitative relationships between fluorescence intensities. To ensure that the quantitative nature of the data remains unaltered, deconvolution algorithms need to be tested thoroughly. This study investigated whether the deconvolution algorithms in AutoQuant X3 preserve relative quantitative intensity data. InSpeck Green calibration microspheres were prepared for imaging, z-stacks were collected using a wide-field microscope, and the images were deconvolved using the iterative deconvolution algorithms with default settings. Afterwards, the mean intensities and volumes of microspheres in the original and the deconvolved images were measured. Deconvolved data sets showed higher average microsphere intensities and smaller volumes than the original wide-field data sets. In original and deconvolved data sets, intensity means showed linear relationships with the relative microsphere intensities given by the manufacturer. Importantly, upon normalization, the trend lines were found to have similar slopes. In original and deconvolved images, the volumes of the microspheres were quite uniform for all relative microsphere intensities. We were able to show that AutoQuant X3 deconvolution software data are quantitative. In general, the protocol presented can be used to calibrate any fluorescence microscope or image processing and analysis procedure. PMID:24688321
Photoacoustic imaging optimization with raw signal deconvolution and empirical mode decomposition
NASA Astrophysics Data System (ADS)
Guo, Chengwen; Wang, Jing; Qin, Yu; Zhan, Hongchen; Yuan, Jie; Cheng, Qian; Wang, Xueding
2018-02-01
The photoacoustic (PA) signal of an ideal optical absorbing particle is a single N-shape wave. PA signals of a complicated biological tissue can be considered as the combination of individual N-shape waves. However, the N-shape wave basis not only complicates the subsequent work, but also results in aliasing between adjacent micro-structures, which deteriorates the quality of the final PA images. In this paper, we propose a method to improve PA image quality through signal processing that works directly on the raw signals, including deconvolution and empirical mode decomposition (EMD). During the deconvolution procedure, the raw PA signals are deconvolved with a system-dependent point spread function (PSF) which is measured in advance. Then, EMD is adopted to adaptively re-shape the PA signals with two constraints, positive polarity and spectrum consistence. With our proposed method, the resulting PA images can yield more detailed structural information. Micro-structures are clearly separated and revealed. To validate the effectiveness of this method, we present numerical simulations and phantom studies consisting of a densely distributed point source model and a blood vessel model. In the future, our study might hold the potential for clinical PA imaging as it can help to distinguish micro-structures from the optimized images and even measure the size of objects from deconvolved signals.
Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan
2017-04-06
An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
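A minimal sketch of the frame-selection step described above, assuming frames are ranked by intensity variance and only the sharpest fraction is kept; the function name and keep fraction are illustrative, not taken from the paper.

```python
import numpy as np

def select_frames_by_variance(frames, keep_fraction=0.5):
    """Rank AO frames by image variance and keep the sharpest fraction.

    frames: array of shape (n_frames, height, width)
    """
    variances = frames.reshape(len(frames), -1).var(axis=1)
    n_keep = max(1, int(round(keep_fraction * len(frames))))
    best = np.argsort(variances)[::-1][:n_keep]
    return frames[np.sort(best)]
```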
Plenoptic Image Motion Deblurring.
Chandramouli, Paramanand; Jin, Meiguang; Perrone, Daniele; Favaro, Paolo
2018-04-01
We propose a method to remove motion blur in a single light field captured with a moving plenoptic camera. Since motion is unknown, we resort to a blind deconvolution formulation, where one aims to identify both the blur point spread function and the latent sharp image. Even in the absence of motion, light field images captured by a plenoptic camera are affected by a non-trivial combination of both aliasing and defocus, which depends on the 3D geometry of the scene. Therefore, motion deblurring algorithms designed for standard cameras are not directly applicable. Moreover, many state of the art blind deconvolution algorithms are based on iterative schemes, where blurry images are synthesized through the imaging model. However, current imaging models for plenoptic images are impractical due to their high dimensionality. We observe that plenoptic cameras introduce periodic patterns that can be exploited to obtain highly parallelizable numerical schemes to synthesize images. These schemes allow extremely efficient GPU implementations that enable the use of iterative methods. We can then cast blind deconvolution of a blurry light field image as a regularized energy minimization to recover a sharp high-resolution scene texture and the camera motion. Furthermore, the proposed formulation can handle non-uniform motion blur due to camera shake as demonstrated on both synthetic and real light field data.
Rucci, Michael; Hardie, Russell C; Barnard, Kenneth J
2014-05-01
In this paper, we present a computationally efficient video restoration algorithm to address both blur and noise for a Nyquist sampled imaging system. The proposed method utilizes a temporal Kalman filter followed by a correlation-model based spatial adaptive Wiener filter (AWF). The Kalman filter employs an affine background motion model and novel process-noise variance estimate. We also propose and demonstrate a new multidelay temporal Kalman filter designed to more robustly treat local motion. The AWF is a spatial operation that performs deconvolution and adapts to the spatially varying residual noise left in the Kalman filter stage. In image areas where the temporal Kalman filter is able to provide significant noise reduction, the AWF can be aggressive in its deconvolution. In other areas, where less noise reduction is achieved with the Kalman filter, the AWF balances the deconvolution with spatial noise reduction. In this way, the Kalman filter and AWF work together effectively, but without the computational burden of full joint spatiotemporal processing. We also propose a novel hybrid system that combines a temporal Kalman filter and BM3D processing. To illustrate the efficacy of the proposed methods, we test the algorithms on both simulated imagery and video collected with a visible camera.
NASA Astrophysics Data System (ADS)
Wapenaar, K.; van der Neut, J.; Ruigrok, E.; Draganov, D.; Hunziker, J.; Slob, E.; Thorbecke, J.; Snieder, R.
2008-12-01
It is well-known that under specific conditions the crosscorrelation of wavefields observed at two receivers yields the impulse response between these receivers. This principle is known as 'Green's function retrieval' or 'seismic interferometry'. Recently it has been recognized that in many situations it can be advantageous to replace the correlation process by deconvolution. One of the advantages is that deconvolution compensates for the waveform emitted by the source; another advantage is that it is not necessary to assume that the medium is lossless. The approaches that have been developed to date employ a 1D deconvolution process. We propose a method for seismic interferometry by multidimensional deconvolution and show that under specific circumstances the method compensates for irregularities in the source distribution. This is an important difference with crosscorrelation methods, which rely on the condition that waves are equipartitioned. This condition is for example fulfilled when the sources are regularly distributed along a closed surface and the power spectra of the sources are identical. The proposed multidimensional deconvolution method compensates for anisotropic illumination, without requiring knowledge about the positions and the spectra of the sources.
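As a rough illustration of the idea, a per-frequency damped least-squares solve of the multidimensional deconvolution relation V = G C; the matrix names and damping choice are assumptions for the sketch, not the authors' implementation.

```python
import numpy as np

def mdd_per_frequency(V, C, epsilon=1e-3):
    """Multidimensional deconvolution at a single frequency.

    V: complex matrix (receivers_B x sources) of observed responses
    C: complex matrix (receivers_A x sources) of reference responses
    Solves V = G @ C for G in a damped least-squares sense,
        G = V C^H (C C^H + eps I)^(-1),
    which compensates for an irregular (anisotropic) source distribution.
    """
    CCH = C @ C.conj().T
    damping = epsilon * np.trace(CCH).real / CCH.shape[0]
    return V @ C.conj().T @ np.linalg.inv(CCH + damping * np.eye(CCH.shape[0]))
```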
NASA Technical Reports Server (NTRS)
Ioup, J. W.; Ioup, G. E.; Rayborn, G. H., Jr.; Wood, G. M., Jr.; Upchurch, B. T.
1984-01-01
Mass spectrometer data in the form of ion current versus mass-to-charge ratio often include overlapping mass peaks, especially in low- and medium-resolution instruments. Numerical deconvolution of such data effectively enhances the resolution by decreasing the overlap of mass peaks. In this paper two approaches to deconvolution are presented: a function-domain iterative technique and a Fourier transform method which uses transform-domain function-continuation. Both techniques include data smoothing to reduce the sensitivity of the deconvolution to noise. The efficacy of these methods is demonstrated through application to representative mass spectrometer data and the deconvolved results are discussed and compared to data obtained from a spectrometer with sufficient resolution to achieve separation of the mass peaks studied. A case for which the deconvolution is seriously affected by Gibbs oscillations is analyzed.
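A hedged sketch of a function-domain iterative deconvolution with smoothing, written here as a Van Cittert-type iteration with a moving-average smoother and non-negativity clipping; this is a generic stand-in, not necessarily the exact iteration used in the paper, and all names and defaults are illustrative.

```python
import numpy as np

def van_cittert_deconvolution(data, peak_shape, n_iter=50, relax=1.0, smooth_width=3):
    """Function-domain iterative deconvolution with simple smoothing.

    data:       measured ion-current spectrum (1D array)
    peak_shape: normalized single-peak response (1D array, sums to 1)
    """
    smoother = np.ones(smooth_width) / smooth_width
    estimate = data.astype(float).copy()
    for _ in range(n_iter):
        reblurred = np.convolve(estimate, peak_shape, mode="same")
        estimate = estimate + relax * (data - reblurred)
        estimate = np.convolve(estimate, smoother, mode="same")  # damp noise amplification
        estimate = np.clip(estimate, 0, None)                    # ion currents are non-negative
    return estimate
```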
Multichannel seismic-reflection data collected in 1980 in the eastern Chukchi Sea
Grantz, Arthur; Mann, Dennis M.; May, Steven D.
1986-01-01
The U.S. Geological Survey (USGS) collected approximately 2,652 km of 24-channel seismic-reflection data in early September, 1980, over the continental shelf in the eastern Chukchi Sea (Fig. 1). The profiles were collected on the USGS Research Vessel S.P. Lee. The seismic energy source consisted of a tuned array of five airguns with a total volume of 1213 cubic inches of air compressed to approximately 1900 psi. The recording system consisted of a 24-channel, 2400 meter long streamer with a group interval of 100 m, and a GUS (Global Universal Science) model 4200 digital recording instrument. Shots were fired every 50 meters. Navigational control for the survey was provided by a Magnavox integrated navigation system using transit satellites and doppler-sonar augmented by Loran C (Rho-Rho). A 2-millisecond sampling rate was used in the field; the data were later desampled to 4-milliseconds during the demultiplexing process. 8 seconds data length was recorded. Processing was done at the USGS Pacific Marine Geology Multichannel Processing Center in Menlo Park, California, in the sequence: editing-demultiplexing, velocity analysis, CDP stacking, deconvolution-filtering, and plotting on an electrostatic plotter. Plate 1 is a trackline chart showing shotpoint navigation.
Parsimonious Charge Deconvolution for Native Mass Spectrometry
2018-01-01
Charge deconvolution infers the mass from mass over charge (m/z) measurements in electrospray ionization mass spectra. When applied over a wide input m/z or broad target mass range, charge-deconvolution algorithms can produce artifacts, such as false masses at one-half or one-third of the correct mass. Indeed, a maximum entropy term in the objective function of MaxEnt, the most commonly used charge deconvolution algorithm, favors a deconvolved spectrum with many peaks over one with fewer peaks. Here we describe a new “parsimonious” charge deconvolution algorithm that produces fewer artifacts. The algorithm is especially well-suited to high-resolution native mass spectrometry of intact glycoproteins and protein complexes. Deconvolution of native mass spectra poses special challenges due to salt and small molecule adducts, multimers, wide mass ranges, and fewer and lower charge states. We demonstrate the performance of the new deconvolution algorithm on a range of samples. On the heavily glycosylated plasma properdin glycoprotein, the new algorithm could deconvolve monomer and dimer simultaneously and, when focused on the m/z range of the monomer, gave accurate and interpretable masses for glycoforms that had previously been analyzed manually using m/z peaks rather than deconvolved masses. On therapeutic antibodies, the new algorithm facilitated the analysis of extensions, truncations, and Fab glycosylation. The algorithm facilitates the use of native mass spectrometry for the qualitative and quantitative analysis of protein and protein assemblies. PMID:29376659
Broadband ion mobility deconvolution for rapid analysis of complex mixtures.
Pettit, Michael E; Brantley, Matthew R; Donnarumma, Fabrizio; Murray, Kermit K; Solouki, Touradj
2018-05-04
High resolving power ion mobility (IM) allows for accurate characterization of complex mixtures in high-throughput IM mass spectrometry (IM-MS) experiments. We previously demonstrated that pure component IM-MS data can be extracted from IM unresolved post-IM/collision-induced dissociation (CID) MS data using automated ion mobility deconvolution (AIMD) software [Matthew Brantley, Behrooz Zekavat, Brett Harper, Rachel Mason, and Touradj Solouki, J. Am. Soc. Mass Spectrom., 2014, 25, 1810-1819]. In our previous reports, we utilized a quadrupole ion filter for m/z-isolation of IM unresolved monoisotopic species prior to post-IM/CID MS. Here, we utilize a broadband IM-MS deconvolution strategy to remove the m/z-isolation requirement for successful deconvolution of IM unresolved peaks. Broadband data collection has throughput and multiplexing advantages; hence, elimination of the ion isolation step reduces experimental run times and thus expands the applicability of AIMD to high-throughput bottom-up proteomics. We demonstrate broadband IM-MS deconvolution of two separate and unrelated pairs of IM unresolved isomers (viz., a pair of isomeric hexapeptides and a pair of isomeric trisaccharides) in a simulated complex mixture. Moreover, we show that broadband IM-MS deconvolution improves high-throughput bottom-up characterization of a proteolytic digest of rat brain tissue. To our knowledge, this manuscript is the first to report successful deconvolution of pure component IM and MS data from an IM-assisted data-independent analysis (DIA) or HDMSE dataset.
A fast algorithm for computer aided collimation gamma camera (CACAO)
NASA Astrophysics Data System (ADS)
Jeanguillaume, C.; Begot, S.; Quartuccio, M.; Douiri, A.; Franck, D.; Pihet, P.; Ballongue, P.
2000-08-01
The computer aided collimation gamma camera (CACAO) is aimed at breaking down the resolution-sensitivity trade-off of the conventional parallel hole collimator. It uses larger and longer holes, with an added linear movement during the acquisition sequence. A dedicated algorithm including shift and sum, deconvolution, parabolic filtering and rotation is described. Examples of reconstruction are given. This work shows that a simple and fast algorithm, based on a diagonally dominant approximation of the problem, can be derived. It gives a practical solution to the CACAO reconstruction problem.
On the response of superpressure balloons to displacements from equilibrium density level
NASA Technical Reports Server (NTRS)
Levanon, N.; Kushnir, Y.
1976-01-01
The response of a superpressure balloon to an initial displacement from its constant-density floating level is examined. An approximate solution is obtained to the governing vertical equation of motion for constant-density superpressure balloons. This solution is used to filter out neutrally buoyant oscillations in balloon records despite the nonlinear behavior of the balloon. The graph depicting the pressure data after deconvolution between the raw pressure data and the normalized balloon wavelet shows clearly the strong filtering-out of the neutral buoyancy oscillations.
NASA Astrophysics Data System (ADS)
Bardy, Fabrice; Van Dun, Bram; Dillon, Harvey; Cowan, Robert
2014-08-01
Objective. To evaluate the viability of disentangling a series of overlapping ‘cortical auditory evoked potentials’ (CAEPs) elicited by different stimuli using least-squares (LS) deconvolution, and to assess the adaptation of CAEPs for different stimulus onset-asynchronies (SOAs). Approach. Optimal aperiodic stimulus sequences were designed by controlling the condition number of matrices associated with the LS deconvolution technique. First, theoretical considerations of LS deconvolution were assessed in simulations in which multiple artificial overlapping responses were recovered. Second, biological CAEPs were recorded in response to continuously repeated stimulus trains containing six different tone-bursts with frequencies 8, 4, 2, 1, 0.5, 0.25 kHz separated by SOAs jittered around 150 (120-185), 250 (220-285) and 650 (620-685) ms. The control condition had a fixed SOA of 1175 ms. In a second condition, using the same SOAs, trains of six stimuli were separated by a silence gap of 1600 ms. Twenty-four adults with normal hearing (<20 dB HL) were assessed. Main results. Results showed disentangling of a series of overlapping responses using LS deconvolution on simulated waveforms as well as on real EEG data. The use of rapid presentation and LS deconvolution did not, however, allow the recovered CAEPs to have a higher signal-to-noise ratio than for slowly presented stimuli. The LS deconvolution technique enables the analysis of a series of overlapping responses in EEG. Significance. LS deconvolution is a useful technique for the study of adaptation mechanisms of CAEPs for closely spaced stimuli whose characteristics change from stimulus to stimulus. High-rate presentation is necessary to develop an understanding of how the auditory system encodes natural speech or other intrinsically high-rate stimuli.
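A minimal sketch of least-squares deconvolution of overlapping responses, assuming known stimulus onset samples for each stimulus type; the design matrix is built from shifted identity blocks and its condition number (the quantity controlled by the sequence design described above) is reported. Function and variable names are illustrative.

```python
import numpy as np

def onset_matrix(onsets, n_samples, resp_len):
    """Matrix mapping a response of length resp_len onto given onset samples."""
    S = np.zeros((n_samples, resp_len))
    for t0 in onsets:
        stop = min(t0 + resp_len, n_samples)
        S[t0:stop, :stop - t0] += np.eye(stop - t0)
    return S

def ls_deconvolution(eeg, onset_lists, resp_len):
    """Recover overlapping evoked responses (one per stimulus type) by least squares.

    eeg:         recorded trace, shape (n_samples,)
    onset_lists: list of onset-sample lists, one per stimulus type
    """
    M = np.hstack([onset_matrix(o, len(eeg), resp_len) for o in onset_lists])
    print("condition number of the design matrix:", np.linalg.cond(M))
    responses, *_ = np.linalg.lstsq(M, eeg, rcond=None)
    return responses.reshape(len(onset_lists), resp_len)
```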
Thermal infrared spectroscopy and modeling of experimentally shocked plagioclase feldspars
Johnson, J. R.; Horz, F.; Staid, M.I.
2003-01-01
Thermal infrared emission and reflectance spectra (250-1400 cm-1; ~7-40 μm) of experimentally shocked albite- and anorthite-rich rocks (17-56 GPa) demonstrate that plagioclase feldspars exhibit characteristic degradations in spectral features with increasing pressure. New measurements of albite (Ab98) presented here display major spectral absorptions between 1000-1250 cm-1 (8-10 μm) (due to Si-O antisymmetric stretch motions of the silica tetrahedra) and weaker absorptions between 350-700 cm-1 (14-29 μm) (due to Si-O-Si octahedral bending vibrations). Many of these features persist to higher pressures compared to similar features in measurements of shocked anorthite, consistent with previous thermal infrared absorption studies of shocked feldspars. A transparency feature at 855 cm-1 (11.7 μm) observed in powdered albite spectra also degrades with increasing pressure, similar to the 830 cm-1 (12.0 μm) transparency feature in spectra of powders of shocked anorthite. Linear deconvolution models demonstrate that combinations of common mineral and glass spectra can replicate the spectra of shocked anorthite relatively well until shock pressures of 20-25 GPa, above which model errors increase substantially, coincident with the onset of diaplectic glass formation. Albite deconvolutions exhibit higher errors overall but do not change significantly with pressure, likely because certain clay minerals selected by the model exhibit absorption features similar to those in highly shocked albite. The implication for deconvolution of thermal infrared spectra of planetary surfaces (or laboratory spectra of samples) is that the use of highly shocked anorthite spectra in end-member libraries could be helpful in identifying highly shocked calcic plagioclase feldspars.
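A hedged sketch of a linear deconvolution (spectral unmixing) step of the kind described above, using non-negative least squares against an end-member library; the array names are illustrative, and no sum-to-one or blackbody constraints are included.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_spectrum(measured, endmember_library):
    """Linear deconvolution of a thermal-IR spectrum against an end-member library.

    measured:           (n_channels,) spectrum to model
    endmember_library:  (n_channels, n_endmembers) mineral/glass spectra
    Returns non-negative abundances and the RMS model error.
    """
    abundances, _ = nnls(endmember_library, measured)
    model = endmember_library @ abundances
    rms = np.sqrt(np.mean((measured - model) ** 2))
    return abundances, rms
```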
NASA Astrophysics Data System (ADS)
Fita, P.; Luzina, E.; Dziembowska, T.; Radzewicz, Cz.; Grabowska, A.
2006-11-01
The two structurally related Schiff bases, 2-hydroxynaphthylidene-(8-aminoquinoline) (HNAQ) and 2-hydroxynaphthylidene-1'-naphthylamine (HNAN), were studied by means of steady-state and time resolved optical spectroscopies as well as time-dependent density functional theory (TDDFT) calculations. The first one, HNAQ, is stable as a keto tautomer in the ground state and in the excited state in solutions, therefore it was used as a model of a keto tautomer of HNAN which exists mainly in its enol form in the ground state at room temperature. Excited state intramolecular proton transfer in the HNAN molecule leads to a very weak (quantum yield of the order of 10-4) strongly Stokes-shifted fluorescence. The characteristic time of the proton transfer (about 30fs) was estimated from femtosecond transient absorption data supported by global analysis and deconvolution techniques. Approximately 35% of excited molecules create a photochromic form whose lifetime was beyond the time window of the experiment (2ns). The remaining ones reach the relaxed S1 state (of a lifetime of approximately 4ps), whose emission is present in the decay associated difference spectra. Some evidence for the back proton transfer from the ground state of the keto form with the characteristic time of approximately 13ps was also found. The energies and orbital characteristics of main electronic transitions in both molecules calculated by TDDFT method are also discussed.
NASA Astrophysics Data System (ADS)
Gerwe, David R.; Lee, David J.; Barchers, Jeffrey D.
2000-10-01
A post-processing methodology for reconstructing undersampled image sequences with randomly varying blur is described which can provide image enhancement beyond the sampling resolution of the sensor. This method is demonstrated on simulated imagery and on adaptive optics compensated imagery taken by the Starfire Optical Range 3.5 meter telescope that has been artificially undersampled. Also shown are the results of multiframe blind deconvolution of some of the highest quality optical imagery of low earth orbit satellites collected with a ground based telescope to date. The algorithm used is a generalization of multiframe blind deconvolution techniques which includes a representation of spatial sampling by the focal plane array elements in the forward stochastic model of the imaging system. This generalization enables the random shifts and shape of the adaptively compensated PSF to be used to partially eliminate the aliasing effects associated with sub-Nyquist sampling of the image by the focal plane array. The method could be used to reduce resolution loss which occurs when imaging in wide FOV modes.
NASA Astrophysics Data System (ADS)
Gerwe, David R.; Lee, David J.; Barchers, Jeffrey D.
2002-09-01
We describe a postprocessing methodology for reconstructing undersampled image sequences with randomly varying blur that can provide image enhancement beyond the sampling resolution of the sensor. This method is demonstrated on simulated imagery and on adaptive-optics-(AO)-compensated imagery taken by the Starfire Optical Range 3.5-m telescope that has been artificially undersampled. Also shown are the results of multiframe blind deconvolution of some of the highest quality optical imagery of low earth orbit satellites collected with a ground-based telescope to date. The algorithm used is a generalization of multiframe blind deconvolution techniques that include a representation of spatial sampling by the focal plane array elements based on a forward stochastic model. This generalization enables the random shifts and shape of the AO-compensated point spread function (PSF) to be used to partially eliminate the aliasing effects associated with sub-Nyquist sampling of the image by the focal plane array. The method could be used to reduce resolution loss that occurs when imaging in wide-field-of-view (FOV) modes.
Pulse analysis of acoustic emission signals
NASA Technical Reports Server (NTRS)
Houghton, J. R.; Packman, P. F.
1977-01-01
A method for the signature analysis of pulses in the frequency domain and the time domain is presented. Fourier spectrum, Fourier transfer function, shock spectrum and shock spectrum ratio were examined in the frequency domain analysis and pulse shape deconvolution was developed for use in the time domain analysis. Comparisons of the relative performance of each analysis technique are made for the characterization of acoustic emission pulses recorded by a measuring system. To demonstrate the relative sensitivity of each of the methods to small changes in the pulse shape, signatures of computer modeled systems with analytical pulses are presented. Optimization techniques are developed and used to indicate the best design parameter values for deconvolution of the pulse shape. Several experiments are presented that test the pulse signature analysis methods on different acoustic emission sources. These include acoustic emission associated with (a) crack propagation, (b) ball dropping on a plate, (c) spark discharge, and (d) defective and good ball bearings. Deconvolution of the first few micro-seconds of the pulse train is shown to be the region in which the significant signatures of the acoustic emission event are to be found.
NASA Astrophysics Data System (ADS)
Almasganj, Mohammad; Adabi, Saba; Fatemizadeh, Emad; Xu, Qiuyun; Sadeghi, Hamid; Daveluy, Steven; Nasiriavanaki, Mohammadreza
2017-03-01
Optical Coherence Tomography (OCT) has great potential to elicit clinically useful information from tissues due to its high axial and transversal resolution. In practice, an OCT setup cannot reach its theoretical resolution due to imperfections of its components, which make its images blurry. The blurriness differs across regions of the image; thus, it cannot be modeled by a single point spread function (PSF). In this paper, we investigate the use of solid phantoms to estimate the PSF of each sub-region of the imaging system. We then utilize Lucy-Richardson, Hybr and total variation (TV) based iterative deconvolution methods for mitigating the resulting spatially variant blurriness. It is shown that the TV based method suppresses the so-called speckle noise in OCT images better than the two other approaches. The performance of the proposed algorithm is tested on various samples, including several skin tissues as well as a test image blurred with a synthetic PSF map, demonstrating qualitatively and quantitatively the advantage of the TV based deconvolution method with a spatially variant PSF for enhancing image quality.
Fast online deconvolution of calcium imaging data
Zhou, Pengcheng; Paninski, Liam
2017-01-01
Fluorescent calcium indicators are a popular means for observing the spiking activity of large neuronal populations, but extracting the activity of each neuron from raw fluorescence calcium imaging data is a nontrivial problem. We present a fast online active set method to solve this sparse non-negative deconvolution problem. Importantly, the algorithm progresses through each time series sequentially from beginning to end, thus enabling real-time online estimation of neural activity during the imaging session. Our algorithm is a generalization of the pool adjacent violators algorithm (PAVA) for isotonic regression and inherits its linear-time computational complexity. We gain remarkable increases in processing speed: more than one order of magnitude compared to currently employed state of the art convex solvers relying on interior point methods. Unlike these approaches, our method can exploit warm starts; therefore optimizing model hyperparameters only requires a handful of passes through the data. A minor modification can further improve the quality of activity inference by imposing a constraint on the minimum spike size. The algorithm enables real-time simultaneous deconvolution of O(10^5) traces of whole-brain larval zebrafish imaging data on a laptop. PMID:28291787
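For intuition only, a crude non-negative AR(1) deconvolution of a fluorescence trace; this is the naive estimate that the online active set method above refines, not the PAVA-based algorithm itself. Parameter names and the default decay constant are illustrative.

```python
import numpy as np

def naive_ar1_deconvolution(fluorescence, gamma=0.95, s_min=0.0):
    """Crude non-negative spike estimate under an AR(1) calcium model.

    Spike estimate s_t = max(y_t - gamma * y_{t-1}, 0), optionally zeroing
    values below a minimum spike size s_min.
    """
    y = np.asarray(fluorescence, dtype=float)
    s = np.clip(y[1:] - gamma * y[:-1], 0.0, None)
    if s_min > 0:
        s[s < s_min] = 0.0
    return np.concatenate(([0.0], s))
```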
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A.; Haering, Edward A., Jr.; Ehernberger, L. J.
1996-01-01
In-flight measurements of the SR-71 near-field sonic boom were obtained by an F-16XL airplane at flightpath separation distances from 40 to 740 ft. Twenty-two signatures were obtained from Mach 1.60 to Mach 1.84 and altitudes from 47,600 to 49,150 ft. The shock wave signatures were measured by the total and static pressure sensors on the F-16XL noseboom. These near-field signature measurements were distorted by pneumatic attenuation in the pitot-static sensors; these effects were accounted for using optimal deconvolution. Measurement system magnitude and phase characteristics were determined from ground-based step-response tests and extrapolated to flight conditions using analytical models. Deconvolution was implemented using Fourier transform methods. Comparisons of the shock wave signatures reconstructed from the total and static pressure data are presented. The good agreement achieved gives confidence in the quality of the reconstruction analysis. Although originally developed to reconstruct the sonic boom signatures from SR-71 sonic boom flight tests, the methods presented here apply generally to other types of highly attenuated or distorted pneumatic measurements.
Canales-Rodríguez, Erick J.; Caruyer, Emmanuel; Aja-Fernández, Santiago; Radua, Joaquim; Yurramendi Mendizabal, Jesús M.; Iturria-Medina, Yasser; Melie-García, Lester; Alemán-Gómez, Yasser; Thiran, Jean-Philippe; Sarró, Salvador; Pomarol-Clotet, Edith; Salvador, Raymond
2015-01-01
Spherical deconvolution (SD) methods are widely used to estimate the intra-voxel white-matter fiber orientations from diffusion MRI data. However, while some of these methods assume a zero-mean Gaussian distribution for the underlying noise, its real distribution is known to be non-Gaussian and to depend on many factors such as the number of coils and the methodology used to combine multichannel MRI signals. Indeed, the two prevailing methods for multichannel signal combination lead to noise patterns better described by Rician and noncentral Chi distributions. Here we develop a Robust and Unbiased Model-BAsed Spherical Deconvolution (RUMBA-SD) technique, intended to deal with realistic MRI noise, based on a Richardson-Lucy (RL) algorithm adapted to Rician and noncentral Chi likelihood models. To quantify the benefits of using proper noise models, RUMBA-SD was compared with dRL-SD, a well-established method based on the RL algorithm for Gaussian noise. Another aim of the study was to quantify the impact of including a total variation (TV) spatial regularization term in the estimation framework. To do this, we developed TV spatially-regularized versions of both RUMBA-SD and dRL-SD algorithms. The evaluation was performed by comparing various quality metrics on 132 three-dimensional synthetic phantoms involving different inter-fiber angles and volume fractions, which were contaminated with noise mimicking patterns generated by data processing in multichannel scanners. The results demonstrate that the inclusion of proper likelihood models leads to an increased ability to resolve fiber crossings with smaller inter-fiber angles and to better detect non-dominant fibers. The inclusion of TV regularization dramatically improved the resolution power of both techniques. The above findings were also verified in human brain data. PMID:26470024
Minimum entropy deconvolution and blind equalisation
NASA Technical Reports Server (NTRS)
Satorius, E. H.; Mulligan, J. J.
1992-01-01
Relationships between minimum entropy deconvolution, developed primarily for geophysics applications, and blind equalization are pointed out. It is seen that a large class of existing blind equalization algorithms are directly related to the scale-invariant cost functions used in minimum entropy deconvolution. Thus the extensive analyses of these cost functions can be directly applied to blind equalization, including the important asymptotic results of Donoho.
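A minimal sketch of the scale-invariant ('varimax') cost function referred to above, evaluated on the output of a candidate deconvolution filter; the function names are illustrative.

```python
import numpy as np

def varimax_norm(y):
    """Scale-invariant 'varimax' cost used in minimum entropy deconvolution.

    Larger values indicate a spikier deconvolution output; the value is
    unchanged if y is multiplied by a constant.
    """
    y = np.asarray(y, dtype=float)
    return np.sum(y ** 4) / np.sum(y ** 2) ** 2

def score_filter(trace, candidate_filter):
    """Apply a candidate deconvolution filter and score the spikiness of its output."""
    return varimax_norm(np.convolve(trace, candidate_filter, mode="same"))
```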
NASA Astrophysics Data System (ADS)
Raghunath, N.; Faber, T. L.; Suryanarayanan, S.; Votaw, J. R.
2009-02-01
Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique was studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies to demonstrate marked improvement. Thus, the deconvolution technique presented here appears promising as a valid alternative to existing motion correction methods for PET. It has the potential for deblurring an image from any modality if the causative motion is known and its effect can be represented in a system matrix.
Three Channel Polarimetric Based Data Deconvolution
2011-03-01
This thesis explains in its entirety the process used for deblurring and de-noising images which have been degraded by atmospheric turbulence and noise.
Lam, France; Cladière, Damien; Guillaume, Cyndélia; Wassmann, Katja; Bolte, Susanne
2017-02-15
In the presented work we aimed at improving confocal imaging to obtain the highest possible resolution in thick biological samples, such as the mouse oocyte. We therefore developed an image processing workflow that allows improving the lateral and axial resolution of a standard confocal microscope. Our workflow comprises refractive index matching, the optimization of microscope hardware parameters and image restoration by deconvolution. We compare two different deconvolution algorithms, evaluate the necessity of denoising and establish the optimal image restoration procedure. We validate our workflow by imaging sub-resolution fluorescent beads and measuring the maximum lateral and axial resolution of the confocal system. Subsequently, we apply the parameters to the imaging and data restoration of fluorescently labelled meiotic spindles of mouse oocytes. We measure a resolution increase of approximately 2-fold in the lateral and 3-fold in the axial direction throughout a depth of 60μm. This demonstrates that with our optimized workflow we reach a resolution that is comparable to 3D-SIM-imaging, but with better depth penetration for confocal images of beads and the biological sample. Copyright © 2016 Elsevier Inc. All rights reserved.
Methods and Apparatus for Reducing Multipath Signal Error Using Deconvolution
NASA Technical Reports Server (NTRS)
Kumar, Rajendra (Inventor); Lau, Kenneth H. (Inventor)
1999-01-01
A deconvolution approach to adaptive signal processing has been applied to the elimination of signal multipath errors, as embodied in one preferred embodiment in a global positioning system receiver. The method and receiver of the present invention estimates and then compensates for multipath effects in a comprehensive manner. Application of deconvolution, along with other adaptive identification and estimation techniques, results in a completely novel GPS (Global Positioning System) receiver architecture.
Improving space debris detection in GEO ring using image deconvolution
NASA Astrophysics Data System (ADS)
Núñez, Jorge; Núñez, Anna; Montojo, Francisco Javier; Condominas, Marta
2015-07-01
In this paper we present a method based on image deconvolution to improve the detection of space debris, mainly in the geostationary ring. Among the deconvolution methods we chose iterative Richardson-Lucy (R-L), as the method that achieves the best results with a reasonable amount of computation. For this work, we used two sets of real 4096 × 4096 pixel test images obtained with the Telescope Fabra-ROA at Montsec (TFRM). Using the first set of data, we establish the optimal number of iterations at 7, and applying the R-L method with 7 iterations to the images, we show that the astrometric accuracy does not vary significantly while the limiting magnitude of the deconvolved images increases significantly compared to the original ones. The increase is on average about 1.0 magnitude, which means that objects up to 2.5 times fainter can be detected after deconvolution. The application of the method to the second set of test images, which includes several faint objects, shows that, after deconvolution, up to four previously undetected faint objects are detected in a single frame. Finally, we carried out a study of some economic aspects of applying the deconvolution method, showing that an important economic impact can be envisaged.
Richardson-Lucy deconvolution as a general tool for combining images with complementary strengths.
Ingaramo, Maria; York, Andrew G; Hoogendoorn, Eelco; Postma, Marten; Shroff, Hari; Patterson, George H
2014-03-17
We use Richardson-Lucy (RL) deconvolution to combine multiple images of a simulated object into a single image in the context of modern fluorescence microscopy techniques. RL deconvolution can merge images with very different point-spread functions, such as in multiview light-sheet microscopes, while preserving the best resolution information present in each image. We show that RL deconvolution is also easily applied to merge high-resolution, high-noise images with low-resolution, low-noise images, relevant when complementing conventional microscopy with localization microscopy. We also use RL deconvolution to merge images produced by different simulated illumination patterns, relevant to structured illumination microscopy (SIM) and image scanning microscopy (ISM). The quality of our ISM reconstructions is at least as good as reconstructions using standard inversion algorithms for ISM data, but our method follows a simpler recipe that requires no mathematical insight. Finally, we apply RL deconvolution to merge a series of ten images with varying signal and resolution levels. This combination is relevant to gated stimulated-emission depletion (STED) microscopy, and shows that merges of high-quality images are possible even in cases for which a non-iterative inversion algorithm is unknown. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
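A hedged sketch of merging two views with different point-spread functions by cycling RL updates over the views; this follows the general recipe described above, with illustrative names, a generic initialization, and no claim to match the authors' exact update order.

```python
import numpy as np
from scipy.signal import fftconvolve

def multiview_richardson_lucy(images, psfs, n_iter=20):
    """Merge images of the same object taken with different PSFs via RL updates.

    images: list of 2D arrays (same shape), e.g. two light-sheet views
    psfs:   list of matching 2D PSFs (each normalized to sum to 1)
    Each iteration cycles through the views and applies the standard RL update.
    """
    estimate = np.full_like(images[0], np.mean(images[0]), dtype=float)
    for _ in range(n_iter):
        for img, psf in zip(images, psfs):
            blurred = fftconvolve(estimate, psf, mode="same")
            ratio = img / np.maximum(blurred, 1e-12)
            estimate *= fftconvolve(ratio, psf[::-1, ::-1], mode="same")
    return estimate
```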
He, Xinzi; Yu, Zhen; Wang, Tianfu; Lei, Baiying; Shi, Yiyan
2018-01-01
Dermoscopy imaging has been a routine examination approach for skin lesion diagnosis. Accurate segmentation is the first step for automatic dermoscopy image assessment. The main challenges for skin lesion segmentation are the numerous variations in viewpoint and scale of the skin lesion region. To handle these challenges, we propose a novel skin lesion segmentation network via a very deep dense deconvolution network based on dermoscopic images. Specifically, the deep dense layer and generic multi-path Deep RefineNet are combined to improve the segmentation performance. The deep representation of all available layers is aggregated to form the global feature maps using skip connections. Also, the dense deconvolution layer is leveraged to capture diverse appearance features via the contextual information. Finally, we apply the dense deconvolution layer to smooth segmentation maps and obtain the final high-resolution output. Our proposed method shows superiority over the state-of-the-art approaches based on the publicly available 2016 and 2017 skin lesion challenge datasets and achieves accuracies of 96.0% and 93.9%, a 6.0% and 1.2% increase over the traditional method, respectively. By utilizing Dense Deconvolution Net, the average time for processing one test image with our proposed framework was 0.253 s.
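For orientation, a toy transposed-convolution ('deconvolution') head in PyTorch that upsamples coarse feature maps to a full-resolution segmentation map; this is only an illustration of the layer type discussed above, not the paper's dense deconvolution network or Deep RefineNet, and all names and sizes are illustrative.

```python
import torch
import torch.nn as nn

class DeconvHead(nn.Module):
    """Toy upsampling head: two transposed-convolution ('deconvolution') layers
    that turn coarse feature maps into per-class segmentation logits."""
    def __init__(self, in_channels=256, n_classes=2):
        super().__init__()
        self.up1 = nn.ConvTranspose2d(in_channels, 64, kernel_size=4, stride=2, padding=1)
        self.up2 = nn.ConvTranspose2d(64, n_classes, kernel_size=4, stride=2, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.up1(x))    # 2x spatial upsampling
        return self.up2(x)            # another 2x, producing per-class logits

# Hypothetical usage: a 64x64 feature map upsampled to a 256x256 segmentation map
logits = DeconvHead()(torch.randn(1, 256, 64, 64))
```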
NASA Astrophysics Data System (ADS)
Li, Zhong-xiao; Li, Zhen-chun
2016-09-01
The multichannel predictive deconvolution can be conducted in overlapping temporal and spatial data windows to solve the 2D predictive filter for multiple removal. Generally, the 2D predictive filter can better remove multiples at the cost of more computation time compared with the 1D predictive filter. In this paper we first use the cross-correlation strategy to determine the limited supporting region of filters where the coefficients play a major role for multiple removal in the filter coefficient space. To solve the 2D predictive filter the traditional multichannel predictive deconvolution uses the least squares (LS) algorithm, which requires primaries and multiples are orthogonal. To relax the orthogonality assumption the iterative reweighted least squares (IRLS) algorithm and the fast iterative shrinkage thresholding (FIST) algorithm have been used to solve the 2D predictive filter in the multichannel predictive deconvolution with the non-Gaussian maximization (L1 norm minimization) constraint of primaries. The FIST algorithm has been demonstrated as a faster alternative to the IRLS algorithm. In this paper we introduce the FIST algorithm to solve the filter coefficients in the limited supporting region of filters. Compared with the FIST based multichannel predictive deconvolution without the limited supporting region of filters the proposed method can reduce the computation burden effectively while achieving a similar accuracy. Additionally, the proposed method can better balance multiple removal and primary preservation than the traditional LS based multichannel predictive deconvolution and FIST based single channel predictive deconvolution. Synthetic and field data sets demonstrate the effectiveness of the proposed method.
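A generic FISTA sketch for the L1-regularized least-squares subproblem that the filter estimation reduces to once a design matrix of shifted input samples (restricted to the supporting region of the filter) is assembled; the matrix names, regularization weight and iteration count are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista_l1(A, b, lam, n_iter=200):
    """FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    In a predictive-deconvolution setting, A would hold shifted copies of the
    input trace(s) restricted to the filter's supporting region, and b the
    samples to be predicted.
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ z - b)
        x_new = soft_threshold(z - grad / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t ** 2)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```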
Studying Regional Wave Source Time Functions Using a Massive Automated EGF Deconvolution Procedure
NASA Astrophysics Data System (ADS)
Xie, J.; Schaff, D. P.
2010-12-01
Reliably estimated source time functions (STF) from high-frequency regional waveforms, such as Lg, Pn and Pg, provide important input for seismic source studies, explosion detection, and minimization of parameter trade-offs in attenuation studies. The empirical Green’s function (EGF) method can be used for estimating the STF, but it requires a strict recording condition: waveforms from pairs of events that are similar in focal mechanism but different in magnitude must be recorded on-scale at the same stations for the method to work. Searching for such waveforms can be very time consuming, particularly for regional waves that contain complex path effects and have reduced S/N ratios due to attenuation. We have developed a massive, automated procedure to conduct inter-event waveform deconvolution calculations from many candidate event pairs. The procedure automatically evaluates the “spikiness” of the deconvolutions by calculating their “sdc”, which is defined as the peak divided by the background value. The background value is calculated as the mean absolute value of the deconvolution, excluding 10 s around the source time function. When the sdc values are about 10 or higher, the deconvolutions are found to be sufficiently spiky (pulse-like), indicating similar path Green’s functions and good estimates of the STF. We have applied this automated procedure to Lg waves and full regional wavetrains from 989 M ≥ 5 events in and around China, calculating about a million deconvolutions. Of these we found about 2700 deconvolutions with sdc greater than 9 which, if they have a sufficiently broad frequency band, can be used to estimate the STFs of the larger events. We are currently refining our procedure, as well as the estimated STFs. We will infer the source scaling using the STFs. We will also explore the possibility that the deconvolution procedure could complement cross-correlation in a real-time event-screening process.
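As a concrete reading of the quality measure described above, here is a small numpy sketch of the "sdc" statistic (peak of the deconvolution divided by the mean absolute background, excluding 10 s around the source time function). The function name, arguments, and windowing details are illustrative assumptions.

    import numpy as np

    def sdc(deconv, peak_index, dt, exclude_s=10.0):
        """Spikiness of a deconvolution trace: peak over mean absolute background.

        deconv     : 1-D deconvolved trace (candidate source time function)
        peak_index : sample index of the deconvolution peak
        dt         : sample interval in seconds
        exclude_s  : length of the window around the STF excluded from the background
        """
        half = int(round(0.5 * exclude_s / dt))
        mask = np.ones(deconv.size, dtype=bool)
        mask[max(0, peak_index - half):peak_index + half + 1] = False
        background = np.mean(np.abs(deconv[mask]))
        return np.abs(deconv[peak_index]) / background

    # Traces with sdc of roughly 10 or higher would be kept as reliable STF estimates.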
NASA Astrophysics Data System (ADS)
Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.
2017-06-01
We describe two new open-source tools written in Python for performing extreme deconvolution Gaussian mixture modeling (XDGMM) and using a conditioned model to re-sample observed supernova and host galaxy populations. XDGMM is a new program that uses Gaussian mixtures to perform density estimation of noisy data using extreme deconvolution (XD) algorithms. Additionally, it has functionality not available in other XD tools. It allows the user to select between the AstroML and Bovy et al. fitting methods and is compatible with scikit-learn machine learning algorithms. Most crucially, it allows the user to condition a model based on the known values of a subset of parameters. This gives the user the ability to produce a tool that can predict unknown parameters based on a model that is conditioned on known values of other parameters. EmpiriciSN is an exemplary application of this functionality, which can be used to fit an XDGMM model to observed supernova/host data sets and predict likely supernova parameters using a model conditioned on observed host properties. It is primarily intended to simulate realistic supernovae for LSST data simulations based on empirical galaxy properties.
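The conditioning step that XDGMM exposes boils down to standard Gaussian-mixture conditioning; the sketch below shows that operation in plain numpy/scipy. It is not the XDGMM API, and the function name and argument layout are assumptions for illustration.

    import numpy as np
    from scipy.stats import multivariate_normal

    def condition_gmm(weights, means, covs, known_idx, known_vals):
        """Condition a Gaussian mixture on known values of a subset of parameters.

        Returns the weights, means and covariances of the mixture over the
        remaining (unknown) parameters, which can then be sampled to predict,
        e.g., supernova properties from observed host-galaxy properties.
        """
        known_idx = np.asarray(known_idx)
        known_vals = np.asarray(known_vals, dtype=float)
        unknown_idx = np.array([i for i in range(means.shape[1]) if i not in known_idx])
        new_w, new_mu, new_cov = [], [], []
        for w, mu, cov in zip(weights, means, covs):
            mu_b, mu_a = mu[known_idx], mu[unknown_idx]
            S_bb = cov[np.ix_(known_idx, known_idx)]
            S_ab = cov[np.ix_(unknown_idx, known_idx)]
            S_aa = cov[np.ix_(unknown_idx, unknown_idx)]
            gain = S_ab @ np.linalg.inv(S_bb)
            new_mu.append(mu_a + gain @ (known_vals - mu_b))
            new_cov.append(S_aa - gain @ S_ab.T)
            # component weight is re-scaled by how well it explains the known values
            new_w.append(w * multivariate_normal(mu_b, S_bb).pdf(known_vals))
        new_w = np.array(new_w)
        return new_w / new_w.sum(), np.array(new_mu), np.array(new_cov)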
NASA Technical Reports Server (NTRS)
Wood, G. M.; Rayborn, G. H.; Ioup, J. W.; Ioup, G. E.; Upchurch, B. T.; Howard, S. J.
1981-01-01
Mathematical deconvolution of digitized analog signals from scientific measuring instruments is shown to be a means of extracting important information which is otherwise hidden due to time-constant and other broadening or distortion effects caused by the experiment. Three different approaches to deconvolution and their subsequent application to recorded data from three analytical instruments are considered. To demonstrate the efficacy of deconvolution, the use of these approaches to solve the convolution integral for the gas chromatograph, magnetic mass spectrometer, and the time-of-flight mass spectrometer are described. Other possible applications of these types of numerical treatment of data to yield superior results from analog signals of the physical parameters normally measured in aerospace simulation facilities are suggested and briefly discussed.
Parallelization of a blind deconvolution algorithm
NASA Astrophysics Data System (ADS)
Matson, Charles L.; Borelli, Kathy J.
2006-09-01
Often it is of interest to deblur imagery in order to obtain higher-resolution images. Deblurring requires knowledge of the blurring function - information that is often not available separately from the blurred imagery. Blind deconvolution algorithms overcome this problem by jointly estimating both the high-resolution image and the blurring function from the blurred imagery. Because blind deconvolution algorithms are iterative in nature, they can take minutes to days to deblur an image, depending on how many frames of data are used for the deblurring and the platforms on which the algorithms are executed. Here we present our progress in parallelizing a blind deconvolution algorithm to increase its execution speed. This progress includes sub-frame parallelization and a code structure that is not specialized to a specific computer hardware architecture.
Improved deconvolution of very weak confocal signals.
Day, Kasey J; La Rivière, Patrick J; Chandler, Talon; Bindokas, Vytas P; Ferrier, Nicola J; Glick, Benjamin S
2017-01-01
Deconvolution is typically used to sharpen fluorescence images, but when the signal-to-noise ratio is low, the primary benefit is reduced noise and a smoother appearance of the fluorescent structures. 3D time-lapse (4D) confocal image sets can be improved by deconvolution. However, when the confocal signals are very weak, the popular Huygens deconvolution software erases fluorescent structures that are clearly visible in the raw data. We find that this problem can be avoided by prefiltering the optical sections with a Gaussian blur. Analysis of real and simulated data indicates that the Gaussian blur prefilter preserves meaningful signals while enabling removal of background noise. This approach is very simple, and it allows Huygens to be used with 4D imaging conditions that minimize photodamage.
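A minimal sketch of the prefiltering idea, assuming a 3-D stack and a known PSF: blur each optical section with a small Gaussian before running a plain Richardson-Lucy loop. The sigma value, function names, and RL implementation here are illustrative stand-ins, not the Huygens pipeline.

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from scipy.signal import fftconvolve

    def prefiltered_rl(stack, psf, sigma=1.0, n_iter=40, eps=1e-12):
        """Gaussian-blur prefilter followed by Richardson-Lucy deconvolution.

        stack : 3-D confocal image stack (z, y, x) with very weak signal
        psf   : 3-D point-spread function
        sigma : width (pixels) of the Gaussian prefilter applied to each section
        """
        # Prefilter each optical section (no blur across z); this suppresses
        # single-pixel noise so weak structures are not erased during deconvolution.
        filtered = gaussian_filter(stack.astype(float), sigma=(0, sigma, sigma))
        estimate = np.full_like(filtered, filtered.mean())
        for _ in range(n_iter):
            blurred = fftconvolve(estimate, psf, mode="same")
            estimate *= fftconvolve(filtered / (blurred + eps),
                                    psf[::-1, ::-1, ::-1], mode="same")
        return estimate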
Applying simulation model to uniform field space charge distribution measurements by the PEA method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Y.; Salama, M.M.A.
1996-12-31
Signals measured under uniform fields by the Pulsed Electroacoustic (PEA) method have been processed by the deconvolution procedure to obtain space charge distributions since 1988. To simplify data processing, a direct method has been proposed recently in which the deconvolution is eliminated. However, the surface charge cannot be represented well by the method because the surface charge has a bandwidth extending from zero to infinity. The bandwidth of the charge distribution must be much narrower than the bandwidth of the PEA system transfer function in order to apply the direct method properly. When surface charges cannot be distinguished from space charge distributions, the accuracy and the resolution of the obtained space charge distributions decrease. To overcome this difficulty a simulation model is therefore proposed. This paper presents attempts to apply the simulation model to obtain space charge distributions under plane-plane electrode configurations. Due to the page limitation for the paper, the charge distribution obtained with the simulation model is compared to that obtained by the direct method with a set of simulated signals.
Thermal infrared spectroscopy and modeling of experimentally shocked basalts
Johnson, J. R.; Staid, M.I.; Kraft, M.D.
2007-01-01
New measurements of thermal infrared emission spectra (250-1400 cm-1; ~7-40 μm) of experimentally shocked basalt and basaltic andesite (17-56 GPa) exhibit changes in spectral features with increasing pressure consistent with changes in the structure of plagioclase feldspars. Major spectral absorptions in unshocked rocks between 350-700 cm-1 (due to Si-O-Si octahedral bending vibrations) and between 1000-1250 cm-1 (due to Si-O antisymmetric stretch motions of the silica tetrahedra) transform at pressures >20-25 GPa to two broad spectral features centered near 950-1050 and 400-450 cm-1. Linear deconvolution models using spectral libraries composed of common mineral and glass spectra replicate the spectra of shocked basalt relatively well up to shock pressures of 20-25 GPa, above which model errors increase substantially, coincident with the onset of diaplectic glass formation in plagioclase. Inclusion of shocked feldspar spectra in the libraries improves fits for more highly shocked basalt. However, deconvolution models of the basaltic andesite select shocked feldspar end-members even for unshocked samples, likely caused by the higher primary glass content in the basaltic andesite sample.
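Linear deconvolution of an emission spectrum against a library of end-member spectra is, at its core, a constrained least-squares fit; the sketch below shows one common way to do it with non-negative least squares. The normalisation, error metric, and function names are assumptions, not necessarily the authors' exact model.

    import numpy as np
    from scipy.optimize import nnls

    def unmix_spectrum(measured, library, names):
        """Linear deconvolution of a thermal-emission spectrum.

        measured : 1-D array, emissivity sampled at N wavenumbers
        library  : 2-D array (N x M), columns are end-member spectra
                   (minerals, glasses, shocked feldspars, ...)
        names    : list of M end-member labels
        Returns non-negative abundances and the RMS model error, the quantity
        that grows once diaplectic glass appears in the shocked samples.
        """
        abundances, residual_norm = nnls(library, measured)
        abundances = abundances / abundances.sum()        # normalise to unit total
        rms_error = residual_norm / np.sqrt(measured.size)
        return dict(zip(names, abundances)), rms_error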
Bai, Chen; Xu, Shanshan; Duan, Junbo; Jing, Bowen; Yang, Miao; Wan, Mingxi
2017-08-01
Pulse-inversion subharmonic (PISH) imaging can display information relating to pure cavitation bubbles while excluding that of tissue. Although plane-wave-based ultrafast active cavitation imaging (UACI) can monitor the transient activities of cavitation bubbles, its resolution and cavitation-to-tissue ratio (CTR) are barely satisfactory but can be significantly improved by introducing eigenspace-based (ESB) adaptive beamforming. PISH and UACI are a natural combination for imaging of pure cavitation activity in tissue; however, it raises two problems: 1) the ESB beamforming is hard to implement in real time due to the enormous amount of computation associated with the covariance matrix inversion and eigendecomposition and 2) the narrowband characteristic of the subharmonic filter will incur a drastic degradation in resolution. Thus, in order to jointly address these two problems, we propose a new PISH-UACI method using novel fast ESB (F-ESB) beamforming and cavitation deconvolution for nonlinear signals. This method greatly reduces the computational complexity by using F-ESB beamforming through dimensionality reduction based on principal component analysis, while maintaining the high quality of ESB beamforming. The degraded resolution is recovered using cavitation deconvolution through a modified convolution model and compressive deconvolution. Both simulations and in vitro experiments were performed to verify the effectiveness of the proposed method. Compared with the ESB-based PISH-UACI, the entire computation of our proposed approach was reduced by 99%, while the axial resolution gain and CTR were increased by 3 times and 2 dB, respectively, confirming that satisfactory performance can be obtained for monitoring pure cavitation bubbles in tissue erosion.
Rapidly converging multigrid reconstruction of cone-beam tomographic data
NASA Astrophysics Data System (ADS)
Myers, Glenn R.; Kingston, Andrew M.; Latham, Shane J.; Recur, Benoit; Li, Thomas; Turner, Michael L.; Beeching, Levi; Sheppard, Adrian P.
2016-10-01
In the context of large-angle cone-beam tomography (CBCT), we present a practical iterative reconstruction (IR) scheme designed for rapid convergence as required for large datasets. The robustness of the reconstruction is provided by the "space-filling" source trajectory along which the experimental data is collected. The speed of convergence is achieved by leveraging the highly isotropic nature of this trajectory to design an approximate deconvolution filter that serves as a pre-conditioner in a multi-grid scheme. We demonstrate this IR scheme for CBCT and compare convergence to that of more traditional techniques.
Temporal Large-Eddy Simulation
NASA Technical Reports Server (NTRS)
Pruett, C. D.; Thomas, B. C.
2004-01-01
In 1999, Stolz and Adams unveiled a subgrid-scale model for LES based upon approximately inverting (defiltering) the spatial grid-filter operator, termed the approximate deconvolution model (ADM). Subsequently, the utility and accuracy of the ADM were demonstrated in a posteriori analyses of flows as diverse as incompressible plane-channel flow and supersonic compression-ramp flow. In a prelude to the current paper, a parameterized temporal ADM (TADM) was developed and demonstrated in both a priori and a posteriori analyses for forced, viscous Burgers flow. The development of a time-filtered variant of the ADM was motivated primarily by the desire for a unifying theoretical and computational context to encompass direct numerical simulation (DNS), large-eddy simulation (LES), and Reynolds-averaged Navier-Stokes simulation (RANS). The resultant methodology was termed temporal LES (TLES). To permit exploration of the parameter space, however, previous analyses of the TADM were restricted to Burgers flow, and it has remained to demonstrate the TADM and TLES methodology for three-dimensional flow. For several reasons, plane-channel flow presents an ideal test case for the TADM. Among these reasons, channel flow is anisotropic, yet it lends itself to highly efficient and accurate spectral numerical methods. Moreover, channel flow has been investigated extensively by DNS, and a highly accurate database of Moser et al. exists. In the present paper, we develop a fully anisotropic TADM model and demonstrate its utility in simulating incompressible plane-channel flow at nominal values of Re(sub tau) = 180 and Re(sub tau) = 590 by the TLES method. The TADM model is shown to perform nearly as well as the ADM at equivalent resolution, thereby establishing TLES as a viable alternative to LES. Moreover, as the current model is suboptimal in some respects, there is considerable room to improve TLES.
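The core of the ADM/TADM is the approximate inversion of the filter by a truncated series Q_N = sum_{k=0..N} (I - G)^k applied to the filtered field. The following one-dimensional numpy sketch, with an illustrative three-point box filter standing in for the grid filter, shows that operation; all names are assumptions.

    import numpy as np

    def box_filter(u):
        """Simple three-point grid filter G (illustrative stand-in for the LES filter)."""
        return 0.25 * (np.roll(u, 1) + 2.0 * u + np.roll(u, -1))

    def approximate_deconvolution(u_bar, n_terms=5):
        """Approximate inverse of the filter: Q_N = sum_{k=0..N} (I - G)^k.

        Applying Q_N to the filtered field u_bar recovers an approximation
        u* of the unfiltered field, the core operation of the (T)ADM.
        """
        term = u_bar.copy()
        u_star = u_bar.copy()
        for _ in range(n_terms):
            term = term - box_filter(term)      # apply (I - G) once more
            u_star += term
        return u_star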
Real-time deblurring of handshake blurred images on smartphones
NASA Astrophysics Data System (ADS)
Pourreza-Shahri, Reza; Chang, Chih-Hsiang; Kehtarnavaz, Nasser
2015-02-01
This paper discusses an Android app for the purpose of removing blur that is introduced as a result of handshakes when taking images via a smartphone. This algorithm utilizes two images to achieve deblurring in a computationally efficient manner without suffering from artifacts associated with deconvolution deblurring algorithms. The first image is the normal or auto-exposure image and the second image is a short-exposure image that is automatically captured immediately before or after the auto-exposure image is taken. A low rank approximation image is obtained by applying singular value decomposition to the auto-exposure image, which may appear blurred due to handshakes. This approximation image does not suffer from blurring while incorporating the image brightness and contrast information. The eigenvalues extracted from the low rank approximation image are then combined with those from the short-exposure image. It is shown that this deblurring app is computationally more efficient than the adaptive tonal correction algorithm which was previously developed for the same purpose.
Ströhl, Florian; Kaminski, Clemens F
2015-01-16
We demonstrate the reconstruction of images obtained by multifocal structured illumination microscopy, MSIM, using a joint Richardson-Lucy, jRL-MSIM, deconvolution algorithm, which is based on an underlying widefield image-formation model. The method is efficient in the suppression of out-of-focus light and greatly improves image contrast and resolution. Furthermore, it is particularly well suited for the processing of noise corrupted data. The principle is verified on simulated as well as experimental data and a comparison of the jRL-MSIM approach with the standard reconstruction procedure, which is based on image scanning microscopy, ISM, is made. Our algorithm is efficient and freely available in a user friendly software package.
Data matching for free-surface multiple attenuation by multidimensional deconvolution
NASA Astrophysics Data System (ADS)
van der Neut, Joost; Frijlink, Martijn; van Borselen, Roald
2012-09-01
A common strategy for surface-related multiple elimination of seismic data is to predict multiples by a convolutional model and subtract these adaptively from the input gathers. Problems can be posed by interfering multiples and primaries. Removing multiples by multidimensional deconvolution (MDD) (inversion) does not suffer from these problems. However, this approach requires data to be consistent, which is often not the case, especially not at interpolated near-offsets. A novel method is proposed to improve data consistency prior to inversion. This is done by backpropagating first-order multiples with a time-gated reference primary event and matching these with early primaries in the input gather. After data matching, multiple elimination by MDD can be applied with a deterministic inversion scheme.
High-resolution electron microscopy and its applications.
Li, F H
1987-12-01
A review of research on high-resolution electron microscopy (HREM) carried out at the Institute of Physics, the Chinese Academy of Sciences, is presented. Apart from the direct observation of crystal and quasicrystal defects for some alloys, oxides, minerals, etc., and the structure determination for some minute crystals, an approximate image-contrast theory named pseudo-weak-phase object approximation (PWPOA), which shows the image contrast change with crystal thickness, is described. Within the framework of PWPOA, the image contrast of lithium ions in the crystal of R-Li2Ti3O7 has been observed. The usefulness of diffraction analysis techniques such as the direct method and Patterson method in HREM is discussed. Image deconvolution and resolution enhancement for weak-phase objects by use of the direct method are illustrated. In addition, preliminary results of image restoration for thick crystals are given.
Time-Domain Receiver Function Deconvolution using Genetic Algorithm
NASA Astrophysics Data System (ADS)
Moreira, L. P.
2017-12-01
Receiver Functions (RF) are a well known method for crust modelling using passive seismological signals. Many different techniques have been developed to calculate the RF traces, applying the deconvolution calculation to the radial and vertical seismogram components. A popular method uses a spectral division of both components, which requires human intervention to apply the water-level procedure to avoid instabilities from division by small numbers. One of the most widely used methods is an iterative procedure that estimates the RF peaks, applies the convolution with the vertical component seismogram, and compares the result with the radial component. This method is suitable for automatic processing; however, several RF traces are invalid due to peak estimation failure. In this work a deconvolution algorithm using a Genetic Algorithm (GA) to estimate the RF peaks is proposed. This method is processed entirely in the time domain, avoiding the time-to-frequency calculations (and vice versa), and is fully suitable for automatic processing. Estimated peaks can be used to generate RF traces in a seismogram format for visualization. The RF trace quality is similar for high-magnitude events, although there are fewer failures in the RF calculation for smaller events, increasing the overall performance for a high number of events per station.
Retinal image restoration by means of blind deconvolution
NASA Astrophysics Data System (ADS)
Marrugo, Andrés G.; Šorel, Michal; Šroubek, Filip; Millán, María S.
2011-11-01
Retinal imaging plays a key role in the diagnosis and management of ophthalmologic disorders, such as diabetic retinopathy, glaucoma, and age-related macular degeneration. Because of the acquisition process, retinal images often suffer from blurring and uneven illumination. This problem may seriously affect disease diagnosis and progression assessment. Here we present a method for color retinal image restoration by means of multichannel blind deconvolution. The method is applied to a pair of retinal images acquired within a lapse of time, ranging from several minutes to months. It consists of a series of preprocessing steps to adjust the images so they comply with the considered degradation model, followed by the estimation of the point-spread function and, ultimately, image deconvolution. The preprocessing is mainly composed of image registration, uneven illumination compensation, and segmentation of areas with structural changes. In addition, we have developed a procedure for the detection and visualization of structural changes. This enables the identification of subtle developments in the retina not caused by variation in illumination or blur. The method was tested on synthetic and real images. Encouraging experimental results show that the method is capable of significant restoration of degraded retinal images.
Septal penetration correction in I-131 imaging following thyroid cancer treatment
NASA Astrophysics Data System (ADS)
Barrack, Fiona; Scuffham, James; McQuaid, Sarah
2018-04-01
Whole body gamma camera images acquired after I-131 treatment for thyroid cancer can suffer from collimator septal penetration artefacts because of the high energy of the gamma photons. This results in the appearance of ‘spoke’ artefacts, emanating from regions of high activity concentration, caused by the non-isotropic attenuation of the collimator. Deconvolution has the potential to reduce such artefacts, by taking into account the non-Gaussian point-spread-function (PSF) of the system. A Richardson–Lucy deconvolution algorithm, with and without prior scatter-correction, was tested as a method of reducing septal penetration in planar gamma camera images. Phantom images (hot spheres within a warm background) were acquired and deconvolution using a measured PSF was applied. The results were evaluated through region-of-interest and line profile analysis to determine the success of artefact reduction and the optimal number of deconvolution iterations and damping parameter (λ). Without scatter-correction, the optimal results were obtained with 15 iterations and λ = 0.01, with the counts in the spokes reduced to 20% of the original value, indicating a substantial decrease in their prominence. When a triple-energy-window scatter-correction was applied prior to deconvolution, the optimal results were obtained with six iterations and λ = 0.02, which reduced the spoke counts to 3% of the original value. The prior application of scatter-correction therefore produced the best results, with a marked change in the appearance of the images. The optimal settings were then applied to six patient datasets, to demonstrate its utility in the clinical setting. In all datasets, spoke artefacts were substantially reduced after the application of scatter-correction and deconvolution, with the mean spoke count being reduced to 10% of the original value. This indicates that deconvolution is a promising technique for septal penetration artefact reduction that could potentially improve the diagnostic accuracy of I-131 imaging. Novelty and significance: This work has demonstrated that scatter correction combined with deconvolution can be used to substantially reduce the appearance of septal penetration artefacts in I-131 phantom and patient gamma camera planar images, enabling improved visualisation of the I-131 distribution. Deconvolution with a symmetric PSF has previously been used to reduce artefacts in gamma camera images; however, this work details the novel use of an asymmetric PSF to remove the angularly dependent septal penetration artefacts.
Cawello, Willi; Braun, Marina; Andreas, Jens-Otto
2018-01-13
Pharmacokinetic studies using deconvolution methods and non-compartmental analysis to model clinical absorption of drugs are not well represented in the literature. The purpose of this research was (1) to define the system of equations for description of rotigotine (a dopamine receptor agonist delivered via a transdermal patch) absorption based on a pharmacokinetic model and (2) to describe the kinetics of rotigotine disposition after single and multiple dosing. The kinetics of drug disposition was evaluated based on rotigotine plasma concentration data from three phase 1 trials. In two trials, rotigotine was administered via a single patch over 24 h in healthy subjects. In a third trial, rotigotine was administered once daily over 1 month in subjects with early-stage Parkinson's disease (PD). A pharmacokinetic model utilizing deconvolution methods was developed to describe the relationship between drug release from the patch and plasma concentrations. Plasma-concentration over time profiles were modeled based on a one-compartment model with a time lag, a zero-order input (describing a constant absorption via skin into central circulation) and first-order elimination. Corresponding mathematical models for single- and multiple-dose administration were developed. After single-dose administration of rotigotine patches (using 2, 4 or 8 mg/day) in healthy subjects, a constant in vivo absorption was present after a minor time lag (2-3 h). On days 27 and 30 of the multiple-dose study in patients with PD, absorption was constant during patch-on periods and resembled zero-order kinetics. Deconvolution based on rotigotine pharmacokinetic profiles after single- or multiple-dose administration of the once-daily patch demonstrated that in vivo absorption of rotigotine showed constant input through the skin into the central circulation (resembling zero-order kinetics). Continuous absorption through the skin is a basis for stable drug exposure.
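To make the described kinetics concrete, here is a small numpy sketch of a one-compartment model with a lag time, zero-order (constant-rate) input while the patch is worn, and first-order elimination. The function name and all parameter values are illustrative assumptions, not values from the trials.

    import numpy as np

    def transdermal_profile(t, k0, V, ke, t_lag, t_patch):
        """Plasma concentration for a one-compartment model with a lag time,
        zero-order input while the patch is on, and first-order elimination.

        t       : time points (h)
        k0      : zero-order absorption rate through the skin (mg/h)
        V       : apparent volume of distribution (L)
        ke      : first-order elimination rate constant (1/h)
        t_lag   : lag before absorption starts (h)
        t_patch : time the patch is removed (h)
        """
        c = np.zeros_like(t, dtype=float)
        on = (t >= t_lag) & (t <= t_patch)
        off = t > t_patch
        c[on] = k0 / (V * ke) * (1.0 - np.exp(-ke * (t[on] - t_lag)))
        c_end = k0 / (V * ke) * (1.0 - np.exp(-ke * (t_patch - t_lag)))
        c[off] = c_end * np.exp(-ke * (t[off] - t_patch))   # washout after removal
        return c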
NASA Astrophysics Data System (ADS)
McDonald, Geoff L.; Zhao, Qing
2017-01-01
Minimum Entropy Deconvolution (MED) has been applied successfully to rotating machine fault detection from vibration data, however this method has limitations. A convolution adjustment to the MED definition and solution is proposed in this paper to address the discontinuity at the start of the signal - in some cases causing spurious impulses to be erroneously deconvolved. A problem with the MED solution is that it is an iterative selection process, and will not necessarily design an optimal filter for the posed problem. Additionally, the problem goal in MED prefers to deconvolve a single-impulse, while in rotating machine faults we expect one impulse-like vibration source per rotational period of the faulty element. Maximum Correlated Kurtosis Deconvolution was proposed to address some of these problems, and although it solves the target goal of multiple periodic impulses, it is still an iterative non-optimal solution to the posed problem and only solves for a limited set of impulses in a row. Ideally, the problem goal should target an impulse train as the output goal, and should directly solve for the optimal filter in a non-iterative manner. To meet these goals, we propose a non-iterative deconvolution approach called Multipoint Optimal Minimum Entropy Deconvolution Adjusted (MOMEDA). MOMEDA proposes a deconvolution problem with an infinite impulse train as the goal and the optimal filter solution can be solved for directly. From experimental data on a gearbox with and without a gear tooth chip, we show that MOMEDA and its deconvolution spectrums according to the period between the impulses can be used to detect faults and study the health of rotating machine elements effectively.
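As a rough, simplified illustration of the non-iterative idea behind MOMEDA (not the paper's exact multipoint D-norm solution), the sketch below solves directly, in a least-squares sense, for an FIR filter whose output best matches a periodic impulse train with one impulse per expected fault period. The function name, filter length, and target construction are assumptions.

    import numpy as np
    from scipy.linalg import toeplitz

    def direct_periodic_filter(x, period, filt_len):
        """Solve directly for an FIR filter whose output approximates a
        periodic impulse train (one impulse per fault period).

        x        : vibration signal
        period   : expected number of samples between fault impulses
        filt_len : filter length L
        """
        x = np.asarray(x, dtype=float)
        n_out = x.size - filt_len + 1
        # Data (convolution) matrix: row k holds x[k+L-1], ..., x[k]
        X = toeplitz(x[filt_len - 1:], x[filt_len - 1::-1])
        t = np.zeros(n_out)
        t[::period] = 1.0                     # target impulse train
        f, *_ = np.linalg.lstsq(X, t, rcond=None)
        return f, np.convolve(x, f, mode="valid")

Scanning the assumed period over a range and inspecting the resulting outputs gives a spectrum-like picture of which impulse spacings are present, in the spirit of the deconvolution spectra mentioned above.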
Improving Range Estimation of a 3-Dimensional Flash Ladar via Blind Deconvolution
2010-09-01
The report's background sections cover optical imaging as a linear and nonlinear system, coherence theory and the statistics of the laser light incident on the detector surface, and deconvolution.
NASA Astrophysics Data System (ADS)
Hiroi, T.; Kaiden, H.; Yamaguchi, A.; Kojima, H.; Uemoto, K.; Ohtake, M.; Arai, T.; Sasaki, S.
2016-12-01
Lunar meteorite chip samples recovered by the National Institute of Polar Research (NIPR) have been studied by a UV-visible-near-infrared spectrometer, targeting small areas of about 3 × 2 mm in size. Rock types and approximate mineral compositions of studied meteorites have been identified or obtained through this spectral survey with no sample preparation required. A linear deconvolution method was used to derive end-member mineral spectra from spectra of multiple clasts whenever possible. In addition, the modified Gaussian model was used in an attempt of deriving their major pyroxene compositions. This study demonstrates that a visible-near-infrared spectrometer on a lunar rover would be useful for identifying these kinds of unaltered (non-space-weathered) lunar rocks. In order to prepare for such a future mission, further studies which utilize a smaller spot size are desired for improving the accuracy of identifying the clasts and mineral phases of the rocks.
Faceting for direction-dependent spectral deconvolution
NASA Astrophysics Data System (ADS)
Tasse, C.; Hugo, B.; Mirmont, M.; Smirnov, O.; Atemkeng, M.; Bester, L.; Hardcastle, M. J.; Lakhoo, R.; Perkins, S.; Shimwell, T.
2018-04-01
The new generation of radio interferometers is characterized by high sensitivity, wide fields of view and large fractional bandwidth. To synthesize the deepest images enabled by the high dynamic range of these instruments requires us to take into account the direction-dependent Jones matrices, while estimating the spectral properties of the sky in the imaging and deconvolution algorithms. In this paper we discuss and implement a wideband wide-field spectral deconvolution framework (DDFacet) based on image plane faceting, that takes into account generic direction-dependent effects. Specifically, we present a wide-field co-planar faceting scheme, and discuss the various effects that need to be taken into account to solve for the deconvolution problem (image plane normalization, position-dependent Point Spread Function, etc). We discuss two wideband spectral deconvolution algorithms based on hybrid matching pursuit and sub-space optimisation respectively. A few interesting technical features incorporated in our imager are discussed, including baseline dependent averaging, which has the effect of improving computing efficiency. The version of DDFacet presented here can account for any externally defined Jones matrices and/or beam patterns.
NASA Astrophysics Data System (ADS)
Pompa, P. P.; Cingolani, R.; Rinaldi, R.
2003-07-01
In this paper, we present a deconvolution method aimed at spectrally resolving the broad fluorescence spectra of proteins, namely, of the enzyme bovine liver glutamate dehydrogenase (GDH). The analytical procedure is based on the deconvolution of the emission spectra into three distinct Gaussian fluorescing bands Gj. The relative changes of the Gj parameters are directly related to the conformational changes of the enzyme, and provide interesting information about the fluorescence dynamics of the individual emitting contributions. Our deconvolution method results in an excellent fitting of all the spectra obtained with GDH in a number of experimental conditions (various conformational states of the protein) and describes very well the dynamics of a variety of phenomena, such as the dependence of hexamers association on protein concentration, the dynamics of thermal denaturation, and the interaction process between the enzyme and external quenchers. The investigation was carried out by means of different optical experiments, i.e., native enzyme fluorescence, thermal-induced unfolding, and fluorescence quenching studies, utilizing both the analysis of the “average” behavior of the enzyme and the proposed deconvolution approach.
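A minimal scipy sketch of the deconvolution of an emission spectrum into three Gaussian bands Gj; the initial band centres, widths, and amplitudes below are purely illustrative starting guesses, not the values used for GDH.

    import numpy as np
    from scipy.optimize import curve_fit

    def three_gaussians(wl, a1, c1, w1, a2, c2, w2, a3, c3, w3):
        """Sum of three Gaussian emission bands G1 + G2 + G3."""
        g = lambda a, c, w: a * np.exp(-0.5 * ((wl - c) / w) ** 2)
        return g(a1, c1, w1) + g(a2, c2, w2) + g(a3, c3, w3)

    def deconvolve_spectrum(wavelength, intensity):
        """Fit a broad fluorescence spectrum with three Gaussian bands.

        Tracking the fitted band parameters across experimental conditions
        follows the conformational changes of the protein.
        """
        p0 = [1.0, 330.0, 15.0,   # band 1: amplitude, centre (nm), width (nm)
              1.0, 345.0, 20.0,   # band 2
              0.5, 365.0, 25.0]   # band 3
        popt, _ = curve_fit(three_gaussians, wavelength, intensity, p0=p0)
        return popt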
Deconvoluting the Complexity of Bone Metastatic Prostate Cancer via Computational Modeling
2016-09-01
Improved Scheme of Modified Gaussian Deconvolution for Reflectance Spectra of Lunar Soils
NASA Technical Reports Server (NTRS)
Hiroi, T.; Pieters, C. M.; Noble, S. K.
2000-01-01
In our continuing effort for deconvolving reflectance spectra of lunar soils using the modified Gaussian model, a new scheme has been developed, including a new form of continuum. All the parameters are optimized with certain constraints.
4Pi microscopy deconvolution with a variable point-spread function.
Baddeley, David; Carl, Christian; Cremer, Christoph
2006-09-20
To remove the axial sidelobes from 4Pi images, deconvolution forms an integral part of 4Pi microscopy. As a result of its high axial resolution, the 4Pi point spread function (PSF) is particularly susceptible to imperfect optical conditions within the sample. This is typically observed as a shift in the position of the maxima under the PSF envelope. A significantly varying phase shift renders deconvolution procedures based on a spatially invariant PSF essentially useless. We present a technique for computing the forward transformation in the case of a varying phase at a computational expense of the same order of magnitude as that of the shift invariant case, a method for the estimation of PSF phase from an acquired image, and a deconvolution procedure built on these techniques.
Blind deconvolution post-processing of images corrected by adaptive optics
NASA Astrophysics Data System (ADS)
Christou, Julian C.
1995-08-01
Experience with the adaptive optics system at the Starfire Optical Range has shown that the point spread function is non-uniform and varies both spatially and temporally as well as being object dependent. Because of this, the application of standard linear and non-linear deconvolution algorithms makes it difficult to deconvolve out the point spread function. In this paper we demonstrate the application of a blind deconvolution algorithm to adaptive optics compensated data where a separate point spread function is not needed.
Computerised curve deconvolution of TL/OSL curves using a popular spreadsheet program.
Afouxenidis, D; Polymeris, G S; Tsirliganis, N C; Kitis, G
2012-05-01
This paper exploits the possibility of using commercial software for thermoluminescence and optically stimulated luminescence curve deconvolution analysis. The widely used software package Microsoft Excel, with the Solver utility, has been used to perform deconvolution analysis on both experimental and reference glow curves resulting from the GLOw Curve ANalysis INtercomparison project. The simple interface of this programme, combined with the powerful Solver utility, allows the analysis of complex stimulated luminescence curves into their components and the evaluation of the associated luminescence parameters.
Analysis of the glow curve of SrB4O7:Dy compounds employing the GOT model
NASA Astrophysics Data System (ADS)
Ortega, F.; Molina, P.; Santiago, M.; Spano, F.; Lester, M.; Caselli, E.
2006-02-01
The glow curve of SrB4O7:Dy phosphors has been analysed with the general one trap model (GOT). To solve the differential equation describing the GOT model a novel algorithm has been employed, which reduces significantly the deconvolution time with respect to the time required by usual integration algorithms, such as the Runge-Kutta method.
Linear MALDI-ToF simultaneous spectrum deconvolution and baseline removal.
Picaud, Vincent; Giovannelli, Jean-Francois; Truntzer, Caroline; Charrier, Jean-Philippe; Giremus, Audrey; Grangeat, Pierre; Mercier, Catherine
2018-04-05
Thanks to a reasonable cost and simple sample preparation procedure, linear MALDI-ToF spectrometry is a growing technology for clinical microbiology. With appropriate spectrum databases, this technology can be used for early identification of pathogens in body fluids. However, due to the low resolution of linear MALDI-ToF instruments, robust and accurate peak picking remains a challenging task. In this context we propose a new peak extraction algorithm operating on the raw spectrum. With this method the spectrum baseline and spectrum peaks are processed jointly. The approach relies on an additive model constituted by a smooth baseline part plus a sparse peak list convolved with a known peak shape. The model is then fitted under a Gaussian noise model. The proposed method is well suited to processing low resolution spectra with an important baseline and unresolved peaks. We developed a new peak deconvolution procedure. The paper describes the method derivation and discusses some of its interpretations. The algorithm is then described in a pseudo-code form where the required optimization procedure is detailed. For synthetic data the method is compared to a more conventional approach. The new method reduces artifacts caused by the usual two-step procedure, baseline removal then peak extraction. Finally some results on real linear MALDI-ToF spectra are provided. We introduced a new method for peak picking, where peak deconvolution and baseline computation are performed jointly. On simulated data we showed that this global approach performs better than a classical one where baseline and peaks are processed sequentially. A dedicated experiment has been conducted on real spectra. In this study a collection of spectra of spiked proteins were acquired and then analyzed. Better performance of the proposed method, in terms of accuracy and reproducibility, has been observed and validated by an extended statistical analysis.
NASA Astrophysics Data System (ADS)
Arslan, Musa T.; Tofighi, Mohammad; Sevimli, Rasim A.; Çetin, Ahmet E.
2015-05-01
One of the main disadvantages of using commercial broadcasts in a Passive Bistatic Radar (PBR) system is the range resolution. Using multiple broadcast channels to improve the radar performance is offered as a solution to this problem. However, it suffers in detection performance due to the side-lobes that the matched filter creates when using multiple channels. In this article, we introduce a deconvolution algorithm to suppress the side-lobes. The two-dimensional matched filter output of a PBR is further analyzed as a deconvolution problem. The deconvolution algorithm is based on making successive projections onto the hyperplanes representing the time delay of a target. The resulting iterative deconvolution algorithm is globally convergent because all constraint sets are closed and convex. Simulation results in an FM based PBR system are presented.
Simulation Study of Effects of the Blind Deconvolution on Ultrasound Image
NASA Astrophysics Data System (ADS)
He, Xingwu; You, Junchen
2018-03-01
Ultrasonic image restoration is an essential subject in medical ultrasound imaging. However, without sufficient and precise system knowledge, traditional image restoration methods based on prior knowledge of the system often fail to improve the image quality. In this paper, we use simulated ultrasound images to evaluate the effectiveness of the blind deconvolution method for ultrasound image restoration. Experimental results demonstrate that the blind deconvolution method can be applied to ultrasound image restoration and achieve satisfactory restoration results without precise prior knowledge, compared with the traditional image restoration method. Even with an inaccurate small initial PSF, the results show that blind deconvolution can improve the overall image quality of ultrasound images, with much better SNR and image resolution. The time consumption of these methods shows no significant increase on a GPU platform.
NASA Astrophysics Data System (ADS)
Yu, Zhongzhi; Liu, Shaocong; Sun, Shiyi; Kuang, Cuifang; Liu, Xu
2018-06-01
Parallel detection, which can use the additional information of a pinhole-plane image taken at every excitation scan position, can be an efficient method to enhance the resolution of a confocal laser scanning microscope. In this paper, we discuss images obtained under different conditions and using different image restoration methods with parallel detection to quantitatively compare the imaging quality. The conditions include different noise levels and different detector array settings. The image restoration methods include linear deconvolution and pixel reassignment with Richardson-Lucy deconvolution and with maximum-likelihood estimation deconvolution. The results show that linear deconvolution offers properties such as high efficiency and the best performance under all conditions, and is therefore expected to be of use for future biomedical routine research.
A gene profiling deconvolution approach to estimating immune cell composition from complex tissues.
Chen, Shu-Hwa; Kuo, Wen-Yu; Su, Sheng-Yao; Chung, Wei-Chun; Ho, Jen-Ming; Lu, Henry Horng-Shing; Lin, Chung-Yen
2018-05-08
A newly emerged cancer treatment utilizes the intrinsic immune surveillance mechanism that is silenced by malignant cells. Hence, studies of tumor infiltrating lymphocyte populations (TILs) are key to the success of advanced treatments. In addition to laboratory methods such as immunohistochemistry and flow cytometry, in silico gene expression deconvolution methods are available for analyses of the relative proportions of immune cell types. Herein, we used microarray data from the public domain to profile the gene expression pattern of twenty-two immune cell types. Initially, outliers were detected based on the consistency of gene profiling clustering results and the original cell phenotype notation. Subsequently, we filtered out genes that are expressed in non-hematopoietic normal tissues and cancer cells. For every pair of immune cell types, we ran t-tests for each gene, and defined differentially expressed genes (DEGs) from this comparison. Equal numbers of DEGs were then collected as candidate lists, and the numbers of conditions and minimal values for building signature matrices were calculated. Finally, we used ν-Support Vector Regression to construct a deconvolution model. The performance of our system was finally evaluated using blood biopsies from 20 adults, in which 9 immune cell types were identified using flow cytometry. The present computations performed better than current state-of-the-art deconvolution methods. Finally, we implemented the proposed method in R and tested its extensibility and usability on the Windows, MacOS, and Linux operating systems. The method, MySort, is wrapped as a Galaxy platform pluggable tool and usage details are available at https://testtoolshed.g2.bx.psu.edu/view/moneycat/mysort/e3afe097e80a .
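The regression step can be pictured with scikit-learn's NuSVR: a bulk expression profile is regressed onto a signature matrix whose columns are the immune cell types, and the clipped, normalised coefficients are read as relative proportions. This is a generic ν-SVR deconvolution sketch, not the MySort implementation, and all names are illustrative.

    import numpy as np
    from sklearn.svm import NuSVR

    def estimate_cell_fractions(signature, mixture, nu=0.5):
        """Estimate relative immune-cell proportions from a bulk expression profile.

        signature : 2-D array (marker genes x cell types), signature matrix
        mixture   : 1-D array (marker genes,), expression of the complex tissue
        """
        model = NuSVR(kernel="linear", nu=nu, C=1.0)
        model.fit(signature, mixture)
        weights = model.coef_.ravel()          # one weight per cell type
        weights = np.clip(weights, 0.0, None)  # negative weights are not physical
        return weights / weights.sum()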
NASA Technical Reports Server (NTRS)
Shykoff, Barbara E.; Swanson, Harvey T.
1987-01-01
A new method for correction of mass spectrometer output signals is described. Response-time distortion is reduced independently of any model of mass spectrometer behavior. The delay of the system is found first from the cross-correlation function of a step change and its response. A two-sided time-domain digital correction filter (deconvolution filter) is generated next from the same step response data using a regression procedure. Other data are corrected using the filter and delay. The mean squared error between a step response and a step is reduced considerably more after the use of a deconvolution filter than after the application of a second-order model correction. O2 consumption and CO2 production values calculated from data corrupted by a simulated dynamic process return to near the uncorrupted values after correction. Although a clean step response or the ensemble average of several responses contaminated with noise is needed for the generation of the filter, random noise of magnitude not above 0.5 percent added to the response to be corrected does not impair the correction severely.
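A minimal numpy sketch of the two ingredients described above, under simplifying assumptions: the delay is taken from the cross-correlation of the ideal step with the measured step response, and a time-domain correction filter is then obtained by least-squares regression so that filtering the measured response reproduces the step. The filter is solved here in causal form for brevity, whereas the paper's filter is two-sided; all names are illustrative.

    import numpy as np
    from scipy.linalg import toeplitz

    def delay_from_xcorr(step, response):
        """System delay (in samples) from the cross-correlation of step and response."""
        xc = np.correlate(response, step, mode="full")
        return int(np.argmax(xc)) - (step.size - 1)

    def design_deconvolution_filter(step_response, ideal_step, filt_len=31):
        """Least-squares correction filter mapping the measured step response
        back onto the ideal step."""
        padded = np.concatenate([np.zeros(filt_len - 1), step_response])
        # Rows hold the response samples seen by the filter window at each output time.
        R = toeplitz(padded[filt_len - 1:], padded[filt_len - 1::-1])
        h, *_ = np.linalg.lstsq(R, ideal_step, rcond=None)
        return h

Other recorded signals would then be shifted by the estimated delay and convolved with h to obtain the corrected traces.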
Santos, Radleigh G; Appel, Jon R; Giulianotti, Marc A; Edwards, Bruce S; Sklar, Larry A; Houghten, Richard A; Pinilla, Clemencia
2013-05-30
In the past 20 years, synthetic combinatorial methods have fundamentally advanced the ability to synthesize and screen large numbers of compounds for drug discovery and basic research. Mixture-based libraries and positional scanning deconvolution combine two approaches for the rapid identification of specific scaffolds and active ligands. Here we present a quantitative assessment of the screening of 32 positional scanning libraries in the identification of highly specific and selective ligands for two formylpeptide receptors. We also compare and contrast two mixture-based library approaches using a mathematical model to facilitate the selection of active scaffolds and libraries to be pursued for further evaluation. The flexibility demonstrated in the differently formatted mixture-based libraries allows for their screening in a wide range of assays.
Semi-blind sparse image reconstruction with application to MRFM.
Park, Se Un; Dobigeon, Nicolas; Hero, Alfred O
2012-09-01
We propose a solution to the image deconvolution problem where the convolution kernel or point spread function (PSF) is assumed to be only partially known. Small perturbations generated from the model are exploited to produce a few principal components explaining the PSF uncertainty in a high-dimensional space. Unlike recent developments on blind deconvolution of natural images, we assume the image is sparse in the pixel basis, a natural sparsity arising in magnetic resonance force microscopy (MRFM). Our approach adopts a Bayesian Metropolis-within-Gibbs sampling framework. The performance of our Bayesian semi-blind algorithm for sparse images is superior to previously proposed semi-blind algorithms such as the alternating minimization algorithm and blind algorithms developed for natural images. We illustrate our myopic algorithm on real MRFM tobacco virus data.
Application of deconvolution interferometry with both Hi-net and KiK-net data
NASA Astrophysics Data System (ADS)
Nakata, N.
2013-12-01
Application of deconvolution interferometry to wavefields observed by KiK-net, a strong-motion recording network in Japan, is useful for estimating wave velocities and S-wave splitting in the near surface. Using this technique, for example, Nakata and Snieder (2011, 2012) found changes in velocities caused by the Tohoku-Oki earthquake in Japan. At the location of the borehole accelerometer of each KiK-net station, a velocity sensor is also installed as a part of a high-sensitivity seismograph network (Hi-net). I present a technique that uses both Hi-net and KiK-net records for computing deconvolution interferometry. The deconvolved waveform obtained from the combination of Hi-net and KiK-net data is similar to the waveform computed from KiK-net data only, which indicates that one can use Hi-net wavefields for deconvolution interferometry. Because Hi-net records have a high signal-to-noise ratio (S/N) and high dynamic resolution, the S/N and the quality of amplitude and phase of deconvolved waveforms can be improved with Hi-net data. These advantages are especially important for short-time moving-window seismic interferometry and deconvolution interferometry using later coda waves.
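The basic operation behind this kind of interferometry is a stabilised spectral division of one station's record by the other's; the numpy sketch below uses a simple water-level regularisation. The function name, the water-level value, and the choice of which record goes in the numerator are assumptions for illustration.

    import numpy as np

    def deconvolution_interferometry(surface, borehole, water_level=0.01):
        """Deconvolve the surface record by the borehole record.

        The result approximates the propagation between the two sensors
        (from which near-surface travel times and velocities can be read off).
        The water-level floor stabilises the spectral division.
        """
        S = np.fft.rfft(surface)
        B = np.fft.rfft(borehole)
        power = np.abs(B) ** 2
        floor = water_level * power.max()
        D = S * np.conj(B) / np.maximum(power, floor)
        return np.fft.irfft(D, n=surface.size)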
NASA Astrophysics Data System (ADS)
Oba, T.; Riethmüller, T. L.; Solanki, S. K.; Iida, Y.; Quintero Noda, C.; Shimizu, T.
2017-11-01
Solar granules are bright patterns surrounded by dark channels, called intergranular lanes, in the solar photosphere and are a manifestation of overshooting convection. Observational studies generally find stronger upflows in granules and weaker downflows in intergranular lanes. This trend is, however, inconsistent with the results of numerical simulations in which downflows are stronger than upflows through the joint action of gravitational acceleration/deceleration and pressure gradients. One cause of this discrepancy is the image degradation caused by optical distortion and light diffraction and scattering that takes place in an imaging instrument. We apply a deconvolution technique to Hinode/SP data in an attempt to recover the original solar scene. Our results show a significant enhancement in both the convective upflows and downflows but particularly for the latter. After deconvolution, the up- and downflows reach maximum amplitudes of -3.0 km s-1 and +3.0 km s-1 at an average geometrical height of roughly 50 km, respectively. We found that the velocity distributions after deconvolution match those derived from numerical simulations. After deconvolution, the net LOS velocity averaged over the whole field of view lies close to zero as expected in a rough sense from mass balance.
Peptide de novo sequencing of mixture tandem mass spectra
Hotta, Stéphanie Yuki Kolbeck; Verano‐Braga, Thiago; Kjeldsen, Frank
2016-01-01
The impact of mixture spectra deconvolution on the performance of four popular de novo sequencing programs was tested using artificially constructed mixture spectra as well as experimental proteomics data. Mixture fragmentation spectra are recognized as a limitation in proteomics because they decrease the identification performance using database search engines. De novo sequencing approaches are expected to be even more sensitive to the reduction in mass spectrum quality resulting from peptide precursor co‐isolation and thus prone to false identifications. The deconvolution approach matched complementary b‐, y‐ions to each precursor peptide mass, which allowed the creation of virtual spectra containing sequence specific fragment ions of each co‐isolated peptide. Deconvolution processing resulted in equally efficient identification rates but increased the absolute number of correctly sequenced peptides. The improvement was in the range of 20–35% additional peptide identifications for a HeLa lysate sample. Some correct sequences were identified only using unprocessed spectra; however, the number of these was lower than those where improvement was obtained by mass spectral deconvolution. Tight candidate peptide score distribution and high sensitivity to small changes in the mass spectrum introduced by the employed deconvolution method could explain some of the missing peptide identifications. PMID:27329701
Milardi, Demetrio; Cacciola, Alberto; Calamuneri, Alessandro; Ghilardi, Maria F.; Caminiti, Fabrizia; Cascio, Filippo; Andronaco, Veronica; Anastasi, Giuseppe; Mormina, Enricomaria; Arrigo, Alessandro; Bruschetta, Daniele; Quartarone, Angelo
2017-01-01
Although the olfactory sense has always been considered with less interest than the visual, auditory or somatic senses, it does play a major role in our ordinary life, with important implications in dangerous situations and in social and emotional behaviors. The traditional diffusion tensor signal model and related tractography have been used in past years to reconstruct the cranial nerves, including the olfactory nerve (ON). However, no supplementary information with regard to the pathways of the olfactory network has been provided. Here, by using the more advanced Constrained Spherical Deconvolution (CSD) diffusion model, we show for the first time in vivo and non-invasively that, in healthy humans, the olfactory system has a widely distributed anatomical network to several cortical regions as well as to many subcortical structures. Although the present study focuses on a healthy sample, a similar approach could be applied in the near future to gain important insights with regard to the early involvement of olfaction in several neurodegenerative disorders. PMID:28443000
Cartailler, Jerome; Kwon, Taekyung; Yuste, Rafael; Holcman, David
2018-03-07
Most synaptic excitatory connections are made on dendritic spines. But how the voltage in spines is modulated by their geometry remains unclear. To investigate the electrical properties of spines, we combine voltage imaging data with electro-diffusion modeling. We first present a temporal deconvolution procedure for the genetically encoded voltage sensor expressed in hippocampal cultured neurons and then use electro-diffusion theory to compute the electric field and the current-voltage conversion. We extract a range for the neck resistances of 〈R〉 = 100 ± 35 MΩ. When a significant current is injected in a spine, the neck resistance can be inversely proportional to its radius, but not to the radius squared, as predicted by Ohm's law. We conclude that the postsynaptic voltage can be modulated not only by changing the number of receptors, but also by the spine geometry. Thus, spine morphology could be a key component in determining synaptic transduction and plasticity. Copyright © 2018 Elsevier Inc. All rights reserved.
Deconvolution using a neural network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehman, S.K.
1990-11-15
Viewing one-dimensional deconvolution as a matrix inversion problem, we compare a neural network backpropagation matrix inverse with LMS and pseudo-inverse solutions. This is largely an exercise in understanding how our neural network code works. 1 ref.
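As a rough illustration of the idea in this record, the sketch below (Python, entirely invented data; not the report's code) compares a pseudo-inverse solution of a 1-D convolution system with a plain gradient-descent iteration, the simplest stand-in for a backpropagation-trained linear network:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
h = np.array([0.25, 0.5, 0.25])              # assumed smoothing kernel
x_true = np.zeros(n)
x_true[[20, 35]] = 1.0                       # two spikes to recover

# Toeplitz convolution matrix H so that the blurred signal is y = H @ x.
H = np.zeros((n, n))
for i in range(n):
    for j, hj in enumerate(h):
        col = i + j - 1
        if 0 <= col < n:
            H[i, col] += hj
y = H @ x_true + 0.01 * rng.standard_normal(n)

# Deconvolution as matrix inversion: pseudo-inverse solution.
x_pinv = np.linalg.pinv(H) @ y

# Deconvolution by iterative gradient descent on the least-squares error,
# a minimal analogue of training a single linear layer by backpropagation.
x_gd = np.zeros(n)
step = 0.5 / np.linalg.norm(H, 2) ** 2       # step size below the stability limit
for _ in range(5000):
    x_gd -= step * H.T @ (H @ x_gd - y)

print(np.linalg.norm(x_pinv - x_true), np.linalg.norm(x_gd - x_true))
```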
Deconvolution of gas chromatographic data
NASA Technical Reports Server (NTRS)
Howard, S.; Rayborn, G. H.
1980-01-01
The use of deconvolution methods on gas chromatographic data to obtain an accurate determination of the relative amounts of each material present by mathematically separating the merged peaks is discussed. Data were obtained on a gas chromatograph with a flame ionization detector. Chromatograms of five xylenes with differing degrees of separation were generated by varying the column temperature at selected rates. The merged peaks were then successfully separated by deconvolution. The concept of function continuation in the frequency domain was introduced in striving to reach the theoretical limit of accuracy, but proved to be only partially successful.
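A minimal frequency-domain deconvolution sketch in the spirit of this record is given below; the Gaussian peaks, the broadening kernel, and the water-level constant are all invented for illustration and do not reproduce the xylene data or the function-continuation step described in the abstract.

```python
import numpy as np

t = np.linspace(0.0, 10.0, 1024)

def gauss(t, mu, sigma):
    return np.exp(-0.5 * ((t - mu) / sigma) ** 2)

# Two narrow "true" peaks of different areas.
true = 1.0 * gauss(t, 4.0, 0.05) + 0.6 * gauss(t, 4.6, 0.05)

# Instrumental broadening (assumed known) merges them into one envelope.
kernel = gauss(t, 5.0, 0.25)
kernel /= kernel.sum()
K = np.fft.fft(np.fft.ifftshift(kernel))     # zero-phase, circular kernel
merged = np.real(np.fft.ifft(np.fft.fft(true) * K))

# Fourier-division deconvolution with a small water level so that near-zero
# spectral components of the kernel do not amplify noise.
eps = 1e-3 * np.abs(K).max()
recovered = np.real(np.fft.ifft(np.fft.fft(merged) * np.conj(K) / (np.abs(K) ** 2 + eps ** 2)))
```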
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Ruixing; Yang, LV; Xu, Kele
Purpose: Deconvolution is a widely used tool in the field of image reconstruction algorithms when the linear imaging system has been blurred by an imperfect system transfer function. However, due to the Gaussian-like distribution of the point spread function (PSF), components with coherent high frequency in the image are hard to restore in most previous scanning imaging systems, even when a relatively accurate PSF is acquired. We propose a novel method for deconvolution of images which are obtained by using a shape-modulated PSF. Methods: We use two different types of PSF - Gaussian shape and donut shape - to convolve the original image in order to simulate the process of scanning imaging. By employing deconvolution of the two images with corresponding given priors, the image quality of the deblurred images is compared. We then find the critical size of the donut shape, compared with the Gaussian shape, which gives similar deconvolution results. Through calculation of the tight-focusing process using a radially polarized beam, such a donut size is achievable under the same conditions. Results: The effects of different relative sizes of the donut and Gaussian shapes are investigated. When the full width at half maximum (FWHM) ratio of the donut and Gaussian shapes is set to about 1.83, similar resolution results are obtained through our deconvolution method. Decreasing the size of the donut will favor the deconvolution method. A mask with both amplitude and phase modulation is used to create a donut-shaped PSF for comparison with the non-modulated Gaussian PSF. A donut with a size smaller than our critical value is obtained. Conclusion: The donut-shaped PSF is shown to be useful and achievable in imaging and deconvolution processing, which is expected to have potential practical applications in high-resolution imaging of biological samples.
Chromatogram simulation by area reproduction.
Boe, Bjarne
2007-01-12
A modified Poisson function has been developed for the simulation of chromatographic peaks. The proposed model is shown to have the property of exactly recreating the experimentally determined peak area. Model parameters are obtained directly from the experimental peak, and overlapping peaks are deconvoluted such that the area sum of overlapping peaks is kept unchanged. The method was applied to real, complex chromatograms.
Hao, Jie; Astle, William; De Iorio, Maria; Ebbels, Timothy M D
2012-08-01
Nuclear Magnetic Resonance (NMR) spectra are widely used in metabolomics to obtain metabolite profiles in complex biological mixtures. Common methods used to assign and estimate concentrations of metabolites involve either an expert manual peak fitting or extra pre-processing steps, such as peak alignment and binning. Peak fitting is very time consuming and is subject to human error. Conversely, alignment and binning can introduce artefacts and limit immediate biological interpretation of models. We present the Bayesian automated metabolite analyser for NMR spectra (BATMAN), an R package that deconvolutes peaks from one-dimensional NMR spectra, automatically assigns them to specific metabolites from a target list and obtains concentration estimates. The Bayesian model incorporates information on characteristic peak patterns of metabolites and is able to account for shifts in the position of peaks commonly seen in NMR spectra of biological samples. It applies a Markov chain Monte Carlo algorithm to sample from a joint posterior distribution of the model parameters and obtains concentration estimates with reduced error compared with conventional numerical integration and comparable to manual deconvolution by experienced spectroscopists. http://www1.imperial.ac.uk/medicine/people/t.ebbels/ t.ebbels@imperial.ac.uk.
A frequency-domain seismic blind deconvolution based on Gini correlations
NASA Astrophysics Data System (ADS)
Wang, Zhiguo; Zhang, Bing; Gao, Jinghuai; Huo Liu, Qing
2018-02-01
In reflection seismic processing, seismic blind deconvolution is a challenging problem, especially when the signal-to-noise ratio (SNR) of the seismic record is low and the length of the seismic record is short. As a solution to this ill-posed inverse problem, we assume that the reflectivity sequence is independent and identically distributed (i.i.d.). To infer the i.i.d. relationships from seismic data, we first introduce the Gini correlations (GCs) to construct a new criterion for seismic blind deconvolution in the frequency domain. Owing to a unique feature, the GCs are robust, with a higher tolerance of low-SNR data and less dependence on record length. Applications of the seismic blind deconvolution based on the GCs show its capacity in estimating the unknown seismic wavelet and the reflectivity sequence, whether for synthetic traces or field data, even with low SNR and a short sample record.
Charge reconstruction in large-area photomultipliers
NASA Astrophysics Data System (ADS)
Grassi, M.; Montuschi, M.; Baldoncini, M.; Mantovani, F.; Ricci, B.; Andronico, G.; Antonelli, V.; Bellato, M.; Bernieri, E.; Brigatti, A.; Brugnera, R.; Budano, A.; Buscemi, M.; Bussino, S.; Caruso, R.; Chiesa, D.; Corti, D.; Dal Corso, F.; Ding, X. F.; Dusini, S.; Fabbri, A.; Fiorentini, G.; Ford, R.; Formozov, A.; Galet, G.; Garfagnini, A.; Giammarchi, M.; Giaz, A.; Insolia, A.; Isocrate, R.; Lippi, I.; Longhitano, F.; Lo Presti, D.; Lombardi, P.; Marini, F.; Mari, S. M.; Martellini, C.; Meroni, E.; Mezzetto, M.; Miramonti, L.; Monforte, S.; Nastasi, M.; Ortica, F.; Paoloni, A.; Parmeggiano, S.; Pedretti, D.; Pelliccia, N.; Pompilio, R.; Previtali, E.; Ranucci, G.; Re, A. C.; Romani, A.; Saggese, P.; Salamanna, G.; Sawy, F. H.; Settanta, G.; Sisti, M.; Sirignano, C.; Spinetti, M.; Stanco, L.; Strati, V.; Verde, G.; Votano, L.
2018-02-01
Large-area PhotoMultiplier Tubes (PMTs) make it possible to efficiently instrument Liquid Scintillator (LS) neutrino detectors, where large target masses are pivotal to compensate for neutrinos' extremely elusive nature. Depending on the detector light yield, several scintillation photons stemming from the same neutrino interaction are likely to hit a single PMT in a few tens/hundreds of nanoseconds, resulting in several photoelectrons (PEs) piling up at the PMT anode. In such a scenario, the signal generated by each PE is entangled with the others, and an accurate PMT charge reconstruction becomes challenging. This manuscript describes an experimental method able to address the PMT charge reconstruction in the case of large PE pile-up, providing an unbiased charge estimator at the permille level up to 15 detected PEs. The method is based on a signal filtering technique (Wiener filter) which suppresses the noise due to both PMT and readout electronics, and on a Fourier-based deconvolution able to minimize the influence of signal distortions, such as an overshoot. The analysis of simulated PMT waveforms shows that the slope of a linear regression modeling the relation between reconstructed and true charge values improves from 0.769 ± 0.001 (without deconvolution) to 0.989 ± 0.001 (with deconvolution), where unitary slope implies perfect reconstruction. A C++ implementation of the charge reconstruction algorithm is available online at [1].
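The waveform processing described here combines a Wiener filter with a Fourier-based deconvolution; the toy sketch below illustrates that combination on a synthetic pile-up waveform (the single-PE template, noise level, and constant noise-to-signal weighting are assumptions for illustration, not the detector's calibration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
t = np.arange(n)

# Illustrative single-PE template: fast rise, exponential decay, slow undershoot.
spe = np.exp(-t / 40.0) - np.exp(-t / 4.0) - 0.05 * np.exp(-t / 200.0)
spe /= np.abs(spe).max()

# Pile-up of several photoelectrons at random times plus electronics noise.
true = np.zeros(n)
for t0 in rng.integers(100, 600, size=6):
    true[t0] += 1.0
waveform = np.convolve(true, spe)[:n] + 0.02 * rng.standard_normal(n)

# Wiener-weighted Fourier deconvolution: divide by the template spectrum,
# damping frequencies where the template carries little power.
S = np.fft.rfft(spe)
nsr = 1e-2                                   # assumed noise-to-signal power ratio
W = np.conj(S) / (np.abs(S) ** 2 + nsr)
recovered = np.fft.irfft(np.fft.rfft(waveform) * W, n)
```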
Receiver function structure beneath a broad-band seismic station in south Sumatra
NASA Astrophysics Data System (ADS)
MacPherson, K. A.; Hidayat, D.; Goh, S.
2010-12-01
We estimated the one-dimensional velocity structure beneath a broad-band station in south Sumatra by the forward modeling and inversion of receiver functions. Station PMBI belongs to the GEOFON seismic network maintained by GFZ-Potsdam, and at a longitude of 104.77° and latitude of -2.93°, sits atop the south Sumatran basin. This station is of interest to researchers at the Earth Observatory of Singapore, as data from it and other stations in Sumatra and Singapore will be incorporated into a regional velocity model for use in seismic hazard analyses. Three-component records from 193 events at teleseismic distances and Mw ≥ 5.0 were examined for this study and 67 records were deemed to have sufficient signal to noise characteristics to be retained for analysis. Observations are primarily from source zones in the Bougainville trench with back-azimuths to the east-south-east, the Japan and Kurile trenches with back-azimuths to the northeast, and a scattering of observations from other azimuths. Due to the level of noise present in even the higher-quality records, the usual frequency-domain deconvolution method of computing receiver functions was ineffective, and a time-domain iterative deconvolution was employed to obtain usable wave forms. Receiver functions with similar back-azimuths were stacked in order to improve their signal to noise ratios. The resulting wave forms are relatively complex, with significant energy being present in the tangential components, indicating heterogeneity in the underlying structure. A dip analysis was undertaken but no clear pattern was observed. However, it is apparent that polarities of the tangential components were generally reversed for records that sample the Sunda trench. Forward modeling of the receiver functions indicates the presence of a near-surface low-velocity layer (Vp≈1.9 km/s) and a Moho depth of ~31 km. Details of the crustal structure were investigated by employing time-domain inversions of the receiver functions. General features of those velocity models providing a good fit to the waveform include an approximately one kilometer thick near-surface low-velocity zone, a high-velocity layer over a velocity inversion at mid-crustal depths, and a crust-mantle transition at depths between 30 km and 34 km.
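For readers unfamiliar with the time-domain iterative deconvolution mentioned above, the following is a bare-bones sketch of the spike-by-spike idea (not the processing code used for station PMBI; practical receiver-function implementations also apply Gaussian low-pass filtering and a misfit-based stopping rule). Applied to the radial component as numerator and the vertical component as denominator, the spike train approximates the receiver function.

```python
import numpy as np

def iterative_deconvolution(numerator, denominator, n_spikes=50):
    """Build a spike train r such that numerator ≈ denominator * r,
    adding one spike per iteration at the lag of maximum cross-correlation."""
    n = len(numerator)
    r = np.zeros(n)
    residual = numerator.copy()
    den_energy = np.dot(denominator, denominator)
    for _ in range(n_spikes):
        # Cross-correlation of the residual with the denominator, positive lags only.
        xc = np.correlate(residual, denominator, mode="full")[n - 1:]
        lag = int(np.argmax(np.abs(xc)))
        r[lag] += xc[lag] / den_energy
        # Update the residual with the prediction from the current spike train.
        residual = numerator - np.convolve(r, denominator)[:n]
    return r
```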
Assessing Multiple Methods for Determining Active Source Travel Times in a Dense Array
NASA Astrophysics Data System (ADS)
Parker, L.; Zeng, X.; Thurber, C. H.; Team, P.
2016-12-01
238 three-component nodal seismometers were deployed at the Brady Hot Springs geothermal field in Nevada to characterize changes in the subsurface as a result of changes in pumping conditions. The array consisted of a 500 meter by 1600 meter irregular grid with 50 meter spacing centered in an approximately rectangular 1200 meter by 1600 meter grid with 200 meter spacing. A large vibroseis truck (T-Rex) was deployed as an active seismic source at 216 locations. Over the course of 15 days, the truck occupied each location up to four times. At each location a swept-frequency source between 5 and 80 Hz over 20 seconds was produced using three vibration modes: longitudinal S-wave, transverse S-wave, and P-wave. Seismic wave arrivals were identified using three methods: cross-correlation, deconvolution, and Wigner-Ville distribution (WVD) plus the Hough Transform (HT). Surface wave arrivals were clear for all three modes of vibration using all three methods. Preliminary tomographic models will be presented, using the arrivals of the identified phases. This analysis is part of the PoroTomo project: Poroelastic Tomography by Adjoint Inverse Modeling of Data from Seismology, Geodesy, and Hydrology; http://geoscience.wisc.edu/feigl/porotomo.
Takagaki, Naohisa; Kurose, Ryoichi; Kimura, Atsushi; Komori, Satoru
2016-11-14
The mass transfer across a sheared gas-liquid interface strongly depends on the Schmidt number. Here we investigate the relationship between the mass transfer coefficient on the liquid side, kL, and the Schmidt number, Sc, in the wide range of 0.7 ≤ Sc ≤ 1000. We apply a three-dimensional semi direct numerical simulation (SEMI-DNS), in which the mass transfer is solved based on an approximated deconvolution model (ADM) scheme, to wind-driven turbulence with mass transfer across a sheared wind-driven wavy gas-liquid interface. In order to capture the deforming gas-liquid interface, an arbitrary Lagrangian-Eulerian (ALE) method is employed. Our results show that similar to the case for flat gas-liquid interfaces, kL for the wind-driven wavy gas-liquid interface is generally proportional to Sc−0.5, and can be roughly estimated by the surface divergence model. This trend is endorsed by the fact that the mass transfer across the gas-liquid interface is controlled mainly by streamwise vortices on the liquid side even for the wind-driven turbulence under the conditions of low wind velocities without wave breaking.
Takagaki, Naohisa; Kurose, Ryoichi; Kimura, Atsushi; Komori, Satoru
2016-01-01
The mass transfer across a sheared gas-liquid interface strongly depends on the Schmidt number. Here we investigate the relationship between mass transfer coefficient on the liquid side, kL, and Schmidt number, Sc, in the wide range of 0.7 ≤ Sc ≤ 1000. We apply a three-dimensional semi direct numerical simulation (SEMI-DNS), in which the mass transfer is solved based on an approximated deconvolution model (ADM) scheme, to wind-driven turbulence with mass transfer across a sheared wind-driven wavy gas-liquid interface. In order to capture the deforming gas-liquid interface, an arbitrary Lagrangian-Eulerian (ALE) method is employed. Our results show that similar to the case for flat gas-liquid interfaces, kL for the wind-driven wavy gas-liquid interface is generally proportional to Sc−0.5, and can be roughly estimated by the surface divergence model. This trend is endorsed by the fact that the mass transfer across the gas-liquid interface is controlled mainly by streamwise vortices on the liquid side even for the wind-driven turbulence under the conditions of low wind velocities without wave breaking. PMID:27841325
Processing strategy for water-gun seismic data from the Gulf of Mexico
Lee, Myung W.; Hart, Patrick E.; Agena, Warren F.
2000-01-01
In order to study the regional distribution of gas hydrates and their potential relationship to large-scale sea-floor failures, more than 1,300 km of near-vertical-incidence seismic profiles were acquired using a 15-in³ water gun across the upper- and middle-continental slope in the Garden Banks and Green Canyon regions of the Gulf of Mexico. Because of the highly mixed-phase water-gun signature, caused mainly by a precursor of the source arriving about 18 ms ahead of the main pulse, a conventional processing scheme based on the minimum phase assumption is not suitable for this data set. A conventional processing scheme suppresses the reverberations and compresses the main pulse, but the failure to suppress precursors results in complex interference between the precursors and primary reflections, thus obscuring true reflections. To clearly image the subsurface without interference from the precursors, a wavelet deconvolution based on the mixed-phase assumption using a variable norm is attempted. This nonminimum-phase wavelet deconvolution compresses a long-wave-train water-gun signature into a simple zero-phase wavelet. A second-zero-crossing predictive deconvolution followed by a wavelet deconvolution suppressed variable ghost arrivals attributed to the variable depths of receivers. The processing strategy of using wavelet deconvolution followed by a second-zero-crossing deconvolution resulted in a sharp and simple wavelet and a better definition of the polarity of reflections. Also, the application of dip moveout correction enhanced the lateral resolution of reflections and substantially suppressed coherent noise.
Complex metabolic oscillations in plants forced by harmonic irradiance.
Nedbal, Ladislav; Brezina, Vítezslav
2002-01-01
Plants exposed to harmonically modulated irradiance, approximately 1 + cos(ωt), exhibit a complex periodic pattern of chlorophyll fluorescence emission that can be deconvoluted into a steady-state component, a component that is modulated with the frequency of the irradiance (ω), and into at least two upper harmonic components (2ω and 3ω). A model is proposed that accounts for the upper harmonics in fluorescence emission by nonlinear negative feedback regulation of photosynthesis. In contrast to simpler linear models, the model predicts that the steady-state fluorescence component will depend on the frequency of light modulation, and that amplitudes of all fluorescence components will exhibit resonance peak(s) when the irradiance frequency is tuned to an internal frequency of a regulatory component. The experiments confirmed that the upper harmonic components appear and exhibit distinct resonant peaks. The frequency of autonomous oscillations observed earlier upon an abrupt increase in CO₂ concentration corresponds to the sharpest of the resonant peaks of the forced oscillations. We propose that the underlying principles are general for a wide spectrum of negative-feedback regulatory mechanisms. The analysis by forced harmonic oscillations will enable us to examine internal dynamics of regulatory processes that have not been accessible to noninvasive fluorescence monitoring to date. PMID:12324435
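The deconvolution of the fluorescence signal into a steady-state term and ω, 2ω, 3ω harmonics can be pictured with a simple Fourier projection; the sketch below uses made-up amplitudes and sampling parameters, not the experimental chlorophyll data:

```python
import numpy as np

f_drive = 0.05                                # forcing frequency, Hz (assumed)
fs = 10.0                                     # sampling rate, Hz (assumed)
t = np.arange(0.0, 10 / f_drive, 1 / fs)      # ten full forcing periods

# Synthetic response: steady state plus first three harmonics plus noise.
rng = np.random.default_rng(2)
signal = (2.0
          + 0.8 * np.cos(2 * np.pi * f_drive * t + 0.3)
          + 0.2 * np.cos(2 * np.pi * 2 * f_drive * t - 0.5)
          + 0.05 * np.cos(2 * np.pi * 3 * f_drive * t + 1.0)
          + 0.02 * rng.standard_normal(t.size))

# Project onto the forcing frequency and its multiples (exact FFT bins here).
spectrum = np.fft.rfft(signal) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
for k in range(4):                            # 0 = steady state, 1-3 = harmonics
    idx = int(np.argmin(np.abs(freqs - k * f_drive)))
    amp = np.abs(spectrum[idx]) * (1 if k == 0 else 2)
    print(f"{k}*omega amplitude ~ {amp:.3f}")
```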
Pulse analysis of acoustic emission signals. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Houghton, J. R.
1976-01-01
A method for the signature analysis of pulses in the frequency domain and the time domain is presented. Fourier spectrum, Fourier transfer function, shock spectrum and shock spectrum ratio are examined in the frequency domain analysis, and pulse shape deconvolution is developed for use in the time domain analysis. To demonstrate the relative sensitivity of each of the methods to small changes in the pulse shape, signatures of computer modeled systems with analytical pulses are presented. Optimization techniques are developed and used to indicate the best design parameters values for deconvolution of the pulse shape. Several experiments are presented that test the pulse signature analysis methods on different acoustic emission sources. These include acoustic emissions associated with: (1) crack propagation, (2) ball dropping on a plate, (3) spark discharge and (4) defective and good ball bearings.
Santos, Radleigh G.; Appel, Jon R.; Giulianotti, Marc A.; Edwards, Bruce S.; Sklar, Larry A.; Houghten, Richard A.; Pinilla, Clemencia
2014-01-01
In the past 20 years, synthetic combinatorial methods have fundamentally advanced the ability to synthesize and screen large numbers of compounds for drug discovery and basic research. Mixture-based libraries and positional scanning deconvolution combine two approaches for the rapid identification of specific scaffolds and active ligands. Here we present a quantitative assessment of the screening of 32 positional scanning libraries in the identification of highly specific and selective ligands for two formylpeptide receptors. We also compare and contrast two mixture-based library approaches using a mathematical model to facilitate the selection of active scaffolds and libraries to be pursued for further evaluation. The flexibility demonstrated in the differently formatted mixture-based libraries allows for their screening in a wide range of assays. PMID:23722730
A digital algorithm for spectral deconvolution with noise filtering and peak picking: NOFIPP-DECON
NASA Technical Reports Server (NTRS)
Edwards, T. R.; Settle, G. L.; Knight, R. D.
1975-01-01
Noise-filtering, peak-picking deconvolution software incorporates multiple convoluted convolute integers and multiparameter optimization pattern search. The two theories are described and three aspects of the software package are discussed in detail. Noise-filtering deconvolution was applied to a number of experimental cases ranging from noisy, nondispersive X-ray analyzer data to very noisy photoelectric polarimeter data. Comparisons were made with published infrared data, and a man-machine interactive language has evolved for assisting in very difficult cases. A modified version of the program is being used for routine preprocessing of mass spectral and gas chromatographic data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oba, T.; Riethmüller, T. L.; Solanki, S. K.
Solar granules are bright patterns surrounded by dark channels, called intergranular lanes, in the solar photosphere and are a manifestation of overshooting convection. Observational studies generally find stronger upflows in granules and weaker downflows in intergranular lanes. This trend is, however, inconsistent with the results of numerical simulations in which downflows are stronger than upflows through the joint action of gravitational acceleration/deceleration and pressure gradients. One cause of this discrepancy is the image degradation caused by optical distortion and light diffraction and scattering that takes place in an imaging instrument. We apply a deconvolution technique to Hinode/SP data in an attempt to recover the original solar scene. Our results show a significant enhancement in both the convective upflows and downflows but particularly for the latter. After deconvolution, the up- and downflows reach maximum amplitudes of −3.0 km s⁻¹ and +3.0 km s⁻¹ at an average geometrical height of roughly 50 km, respectively. We found that the velocity distributions after deconvolution match those derived from numerical simulations. After deconvolution, the net LOS velocity averaged over the whole field of view lies close to zero as expected in a rough sense from mass balance.
Peptide de novo sequencing of mixture tandem mass spectra.
Gorshkov, Vladimir; Hotta, Stéphanie Yuki Kolbeck; Verano-Braga, Thiago; Kjeldsen, Frank
2016-09-01
The impact of mixture spectra deconvolution on the performance of four popular de novo sequencing programs was tested using artificially constructed mixture spectra as well as experimental proteomics data. Mixture fragmentation spectra are recognized as a limitation in proteomics because they decrease the identification performance using database search engines. De novo sequencing approaches are expected to be even more sensitive to the reduction in mass spectrum quality resulting from peptide precursor co-isolation and thus prone to false identifications. The deconvolution approach matched complementary b-, y-ions to each precursor peptide mass, which allowed the creation of virtual spectra containing sequence specific fragment ions of each co-isolated peptide. Deconvolution processing resulted in equally efficient identification rates but increased the absolute number of correctly sequenced peptides. The improvement was in the range of 20-35% additional peptide identifications for a HeLa lysate sample. Some correct sequences were identified only using unprocessed spectra; however, the number of these was lower than those where improvement was obtained by mass spectral deconvolution. Tight candidate peptide score distribution and high sensitivity to small changes in the mass spectrum introduced by the employed deconvolution method could explain some of the missing peptide identifications. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Density Deconvolution With EPI Splines
2015-09-01
effects of various substances on test subjects [11], [12]. Whereas in geophysics, a shot may be fired into the ground, in pharmacokinetics, a signal is...be significant, including medicine, bioinformatics, chemistry, astronomy, and econometrics, as well as an extensive review of kernel based methods...demonstrate the effectiveness of our model in simulations motivated by test instances in [32]. We consider an additive measurement model scenario where
Semi-automated Image Processing for Preclinical Bioluminescent Imaging.
Slavine, Nikolai V; McColl, Roderick W
Bioluminescent imaging is a valuable noninvasive technique for investigating tumor dynamics and specific biological molecular events in living animals to better understand the effects of human disease in animal models. The purpose of this study was to develop and test a strategy behind automated methods for bioluminescence image processing from the data acquisition to obtaining 3D images. In order to optimize this procedure, a semi-automated image processing approach with a multi-modality image handling environment was developed. To identify a bioluminescent source location and strength we used the light flux detected on the surface of the imaged object by CCD cameras. For phantom calibration tests and object surface reconstruction we used the MLEM algorithm. For internal bioluminescent sources we used the diffusion approximation, balancing the internal and external intensities on the boundary of the media; after determining an initial-order approximation for the photon fluence, we subsequently applied a novel iterative deconvolution method to obtain the final reconstruction result. We find that the reconstruction techniques successfully used the depth-dependent light transport approach and semi-automated image processing to provide a realistic 3D model of the lung tumor. Our image processing software can optimize and decrease the time of the volumetric imaging and quantitative assessment. The data obtained from light phantom and lung mouse tumor images demonstrate the utility of the image reconstruction algorithms and the semi-automated approach for the bioluminescent image processing procedure. We suggest that the developed image processing approach can be applied to preclinical imaging studies: characterizing tumor growth, identifying metastases, and potentially determining the effectiveness of cancer treatment.
NASA Astrophysics Data System (ADS)
Escobar Gómez, J. D.; Torres-Verdín, C.
2018-03-01
Single-well pressure-diffusion simulators enable improved quantitative understanding of hydraulic-testing measurements in the presence of arbitrary spatial variations of rock properties. Simulators of this type implement robust numerical algorithms which are often computationally expensive, thereby making the solution of the forward modeling problem onerous and inefficient. We introduce a time-domain perturbation theory for anisotropic permeable media to efficiently and accurately approximate the transient pressure response of spatially complex aquifers. Although theoretically valid for any spatially dependent rock/fluid property, our single-phase flow study emphasizes arbitrary spatial variations of permeability and anisotropy, which constitute key objectives of hydraulic-testing operations. Contrary to time-honored techniques, the perturbation method invokes pressure-flow deconvolution to compute the background medium's permeability sensitivity function (PSF) with a single numerical simulation run. Subsequently, the first-order term of the perturbed solution is obtained by solving an integral equation that weighs the spatial variations of permeability with the spatial-dependent and time-dependent PSF. Finally, discrete convolution transforms the constant-flow approximation to arbitrary multirate conditions. Multidimensional numerical simulation studies for a wide range of single-well field conditions indicate that perturbed solutions can be computed in less than a few CPU seconds with relative errors in pressure of <5%, corresponding to perturbations in background permeability of up to two orders of magnitude. Our work confirms that the proposed joint perturbation-convolution (JPC) method is an efficient alternative to analytical and numerical solutions for accurate modeling of pressure-diffusion phenomena induced by Neumann or Dirichlet boundary conditions.
Fang, Ruogu; Chen, Tsuhan; Sanelli, Pina C
2013-05-01
Computed tomography perfusion (CTP) is an important functional imaging modality in the evaluation of cerebrovascular diseases, particularly in acute stroke and vasospasm. However, the post-processed parametric maps of blood flow tend to be noisy, especially in low-dose CTP, due to the noisy contrast enhancement profile and the oscillatory nature of the results generated by the current computational methods. In this paper, we propose a robust sparse perfusion deconvolution method (SPD) to estimate cerebral blood flow in CTP performed at low radiation dose. We first build a dictionary from high-dose perfusion maps using online dictionary learning and then perform deconvolution-based hemodynamic parameter estimation on the low-dose CTP data. Our method is validated on clinical data of patients with normal and pathological CBF maps. The results show that we achieve superior performance compared with existing methods, and potentially improve the differentiation between normal and ischemic tissue in the brain. Copyright © 2013 Elsevier B.V. All rights reserved.
Deconvolution of azimuthal mode detection measurements
NASA Astrophysics Data System (ADS)
Sijtsma, Pieter; Brouwer, Harry
2018-05-01
Unequally spaced transducer rings make it possible to extend the range of detectable azimuthal modes. The disadvantage is that the response of the mode detection algorithm to a single mode is distributed over all detectable modes, similarly to the Point Spread Function of Conventional Beamforming with microphone arrays. With multiple modes the response patterns interfere, leading to a relatively high "noise floor" of spurious modes in the detected mode spectrum, in other words, to a low dynamic range. In this paper a deconvolution strategy is proposed for increasing this dynamic range. It starts with separating the measured sound into shaft tones and broadband noise. For broadband noise modes, a standard Non-Negative Least Squares solver appeared to be a perfect deconvolution tool. For shaft tones a Matching Pursuit approach is proposed, taking advantage of the sparsity of dominant modes. The deconvolution methods were applied to mode detection measurements in a fan rig. An increase in dynamic range of typically 10-15 dB was found.
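The broadband-noise deconvolution step described above reduces to a Non-Negative Least Squares fit of the detected mode spectrum to known single-mode response patterns; a generic sketch (with an invented response matrix, not the fan-rig calibration) is:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
n_modes = 41

# Hypothetical response matrix A: column m is the detection algorithm's
# (normalised) response to unit power in azimuthal mode m, so the measured
# spectrum is b = A @ p for true mode powers p >= 0.
A = np.eye(n_modes)
for m in range(n_modes):
    A[:, m] += 0.15 * np.exp(-0.3 * np.abs(np.arange(n_modes) - m))
A /= A.sum(axis=0)

p_true = np.zeros(n_modes)
p_true[[5, 12, 30]] = [1.0, 0.5, 0.2]          # a few dominant modes
b = A @ p_true + 1e-3 * rng.standard_normal(n_modes)

# NNLS deconvolution suppresses the spurious-mode "noise floor".
p_est, residual_norm = nnls(A, b)
```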
NASA Technical Reports Server (NTRS)
Ioup, George E.; Ioup, Juliette W.
1991-01-01
The final report for work on the determination of design and operation parameters for upper atmospheric research instrumentation to yield optimum resolution with deconvolution is presented. Papers and theses prepared during the research report period are included. Among all the research results reported, note should be made of the specific investigation of the determination of design and operation parameters for upper atmospheric research instrumentation to yield optimum resolution with deconvolution. A methodology was developed to determine design and operation parameters for error minimization when deconvolution is included in data analysis. An error surface is plotted versus the signal-to-noise ratio (SNR) and all parameters of interest. Instrumental characteristics will determine a curve in this space. The SNR and parameter values which give the projection from the curve to the surface, corresponding to the smallest value for the error, are the optimum values. These values are constrained by the curve and so will not necessarily correspond to an absolute minimum in the error surface.
NASA Technical Reports Server (NTRS)
Becker, Joseph F.; Valentin, Jose
1996-01-01
The maximum entropy technique was successfully applied to the deconvolution of overlapped chromatographic peaks. An algorithm was written in which the chromatogram was represented as a vector of sample concentrations multiplied by a peak shape matrix. Simulation results demonstrated that there is a trade-off between the detector noise and peak resolution in the sense that an increase of the noise level reduced the peak separation that could be recovered by the maximum entropy method. Real data originating from a sample storage column were also deconvoluted using maximum entropy. Deconvolution is useful in this type of system because the conservation of time-dependent profiles depends on the band spreading processes in the chromatographic column, which might smooth out the finer details in the concentration profile. The method was also applied to the deconvolution of previously interpreted Pioneer Venus chromatograms. It was found in this case that the correct choice of peak shape function was critical to the sensitivity of maximum entropy in the reconstruction of these chromatograms.
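As a very loose illustration of entropy-regularised deconvolution of merged peaks (not the maximum entropy algorithm of this record, whose peak-shape-matrix formulation and stopping criteria differ), one can penalise the least-squares misfit with a Shannon-entropy term:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n = 200
t = np.arange(n)

# Two overlapping peaks blurred by a known peak-shape function.
x_true = np.exp(-0.5 * ((t - 80) / 4.0) ** 2) + 0.7 * np.exp(-0.5 * ((t - 95) / 4.0) ** 2)
psf = np.exp(-0.5 * (np.arange(-30, 31) / 8.0) ** 2)
psf /= psf.sum()
y = np.convolve(x_true, psf, mode="same") + 0.01 * rng.standard_normal(n)

def objective(x, lam=1e-2, eps=1e-12):
    """Least-squares misfit minus a scaled Shannon entropy of the profile."""
    misfit = np.sum((np.convolve(x, psf, mode="same") - y) ** 2)
    entropy = -np.sum(x * np.log(x + eps))
    return misfit - lam * entropy

x0 = np.full(n, max(float(y.mean()), 1e-3))
result = minimize(objective, x0, method="L-BFGS-B", bounds=[(0.0, None)] * n)
x_mem = result.x
```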
Joint deconvolution and classification with applications to passive acoustic underwater multipath.
Anderson, Hyrum S; Gupta, Maya R
2008-11-01
This paper addresses the problem of classifying signals that have been corrupted by noise and unknown linear time-invariant (LTI) filtering such as multipath, given labeled uncorrupted training signals. A maximum a posteriori approach to the deconvolution and classification is considered, which produces estimates of the desired signal, the unknown channel, and the class label. For cases in which only a class label is needed, the classification accuracy can be improved by not committing to an estimate of the channel or signal. A variant of the quadratic discriminant analysis (QDA) classifier is proposed that probabilistically accounts for the unknown LTI filtering, and which avoids deconvolution. The proposed QDA classifier can work either directly on the signal or on features whose transformation by LTI filtering can be analyzed; as an example a classifier for subband-power features is derived. Results on simulated data and real Bowhead whale vocalizations show that jointly considering deconvolution with classification can dramatically improve classification performance over traditional methods over a range of signal-to-noise ratios.
Zhou, Zhongxing; Gao, Feng; Zhao, Huijuan; Zhang, Lixin
2012-11-21
New x-ray phase contrast imaging techniques without using synchrotron radiation confront a common problem from the negative effects of finite source size and limited spatial resolution. These negative effects swamp the fine phase contrast fringes and make them almost undetectable. In order to alleviate this problem, deconvolution procedures should be applied to the blurred x-ray phase contrast images. In this study, three different deconvolution techniques, including Wiener filtering, Tikhonov regularization and Fourier-wavelet regularized deconvolution (ForWaRD), were applied to the simulated and experimental free space propagation x-ray phase contrast images of simple geometric phantoms. These algorithms were evaluated in terms of phase contrast improvement and signal-to-noise ratio. The results demonstrate that the ForWaRD algorithm is most appropriate for phase contrast image restoration among above-mentioned methods; it can effectively restore the lost information of phase contrast fringes while reduce the amplified noise during Fourier regularization.
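Of the three algorithms compared, the Tikhonov variant is the simplest to write down; the sketch below applies it to a synthetic rectangle blurred by a Gaussian kernel (the phantom, blur width, and regularisation weight are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def tikhonov_deconvolve(blurred, psf, alpha=1e-3):
    """Frequency-domain Tikhonov-regularised deconvolution; `psf` has the same
    shape as `blurred` and is centred, `alpha` trades noise amplification
    against recovery of fine fringes."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    X = np.conj(H) * np.fft.fft2(blurred) / (np.abs(H) ** 2 + alpha)
    return np.real(np.fft.ifft2(X))

# Illustrative use with a Gaussian blur standing in for the finite source size.
y, x = np.mgrid[-128:128, -128:128]
psf = np.exp(-(x ** 2 + y ** 2) / (2 * 3.0 ** 2))
psf /= psf.sum()
phantom = ((np.abs(x) < 40) & (np.abs(y) < 60)).astype(float)
blurred = np.real(np.fft.ifft2(np.fft.fft2(phantom) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = tikhonov_deconvolve(blurred, psf)
```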
A new scoring function for top-down spectral deconvolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kou, Qiang; Wu, Si; Liu, Xiaowen
2014-12-18
Background: Top-down mass spectrometry plays an important role in intact protein identification and characterization. Top-down mass spectra are more complex than bottom-up mass spectra because they often contain many isotopomer envelopes from highly charged ions, which may overlap with one another. As a result, spectral deconvolution, which converts a complex top-down mass spectrum into a monoisotopic mass list, is a key step in top-down spectral interpretation. Results: In this paper, we propose a new scoring function, L-score, for evaluating isotopomer envelopes. By combining L-score with MS-Deconv, a new software tool, MS-Deconv+, was developed for top-down spectral deconvolution. Experimental results showed that MS-Deconv+ outperformed existing software tools in top-down spectral deconvolution. Conclusions: L-score shows high discriminative ability in identification of isotopomer envelopes. Using L-score, MS-Deconv+ reports many correct monoisotopic masses missed by other software tools, which are valuable for proteoform identification and characterization.
Bayesian Deconvolution for Angular Super-Resolution in Forward-Looking Scanning Radar
Zha, Yuebo; Huang, Yulin; Sun, Zhichao; Wang, Yue; Yang, Jianyu
2015-01-01
Scanning radar is of notable importance for ground surveillance, terrain mapping and disaster rescue. However, the angular resolution of a scanning radar image is poor compared to the achievable range resolution. This paper presents a deconvolution algorithm for angular super-resolution in scanning radar based on Bayesian theory, which states that the angular super-resolution can be realized by solving the corresponding deconvolution problem with the maximum a posteriori (MAP) criterion. The algorithm considers that the noise is composed of two mutually independent parts, i.e., a Gaussian signal-independent component and a Poisson signal-dependent component. In addition, the Laplace distribution is used to represent the prior information about the targets under the assumption that the radar image of interest can be represented by the dominant scatters in the scene. Experimental results demonstrate that the proposed deconvolution algorithm has higher precision for angular super-resolution compared with the conventional algorithms, such as the Tikhonov regularization algorithm, the Wiener filter and the Richardson–Lucy algorithm. PMID:25806871
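One of the baselines mentioned, the Richardson–Lucy algorithm, is compact enough to sketch in full (shift-invariant, normalised PSF and non-negative data assumed; this is the generic iteration, not the paper's MAP algorithm):

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iters=30):
    """Plain Richardson-Lucy deconvolution of a 2-D image with a centred PSF."""
    H = np.fft.fft2(np.fft.ifftshift(psf / psf.sum()))
    estimate = np.full_like(blurred, max(float(blurred.mean()), 1e-6))
    for _ in range(n_iters):
        reblurred = np.real(np.fft.ifft2(np.fft.fft2(estimate) * H))
        ratio = blurred / np.maximum(reblurred, 1e-12)
        # Correlation with the PSF (conjugate spectrum) redistributes the ratio.
        estimate = estimate * np.real(np.fft.ifft2(np.fft.fft2(ratio) * np.conj(H)))
    return estimate
```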
Fang, Ruogu; Chen, Tsuhan; Sanelli, Pina C.
2014-01-01
Computed tomography perfusion (CTP) is an important functional imaging modality in the evaluation of cerebrovascular diseases, particularly in acute stroke and vasospasm. However, the post-processed parametric maps of blood flow tend to be noisy, especially in low-dose CTP, due to the noisy contrast enhancement profile and the oscillatory nature of the results generated by the current computational methods. In this paper, we propose a robust sparse perfusion deconvolution method (SPD) to estimate cerebral blood flow in CTP performed at low radiation dose. We first build a dictionary from high-dose perfusion maps using online dictionary learning and then perform deconvolution-based hemodynamic parameter estimation on the low-dose CTP data. Our method is validated on clinical data of patients with normal and pathological CBF maps. The results show that we achieve superior performance compared with existing methods, and potentially improve the differentiation between normal and ischemic tissue in the brain. PMID:23542422
BP Piscium: its flaring disc imaged with SPHERE/ZIMPOL★
NASA Astrophysics Data System (ADS)
de Boer, J.; Girard, J. H.; Canovas, H.; Min, M.; Sitko, M.; Ginski, C.; Jeffers, S. V.; Mawet, D.; Milli, J.; Rodenhuis, M.; Snik, F.; Keller, C. U.
2017-03-01
Whether BP Piscium (BP Psc) is a pre-main sequence T Tauri star at d ≈ 80 pc or a post-main sequence G giant at d ≈ 300 pc is still not clear. If it is a first-ascent giant, it would be the first to be observed with a molecular and dust disc. Alternatively, BP Psc would be among the nearest T Tauri stars with a protoplanetary disc (PPD). We investigate whether the disc geometry resembles typical PPDs, by comparing polarimetric images with radiative transfer models. Our Very Large Telescope/Spectro-Polarimetric High-contrast Exoplanet REsearch (SPHERE)/Zurich IMaging Polarimeter (ZIMPOL) observations allow us to perform polarimetric differential imaging, reference star differential imaging, and Richardson-Lucy deconvolution. We present the first visible light polarization and intensity images of the disc of BP Psc. Our deconvolution confirms the disc shape as detected before, mainly showing the southern side of the disc. In polarized intensity the disc is imaged at larger detail and also shows the northern side, giving it the typical shape of high-inclination flared discs. We explain the observed disc features by retrieving the large-scale geometry with MCMAX radiative transfer modelling, which yields a strongly flared model, atypical for discs of T Tauri stars.
NASA Astrophysics Data System (ADS)
Ainiwaer, A.; Gurrola, H.
2018-03-01
Common conversion point stacking or migration of receiver functions (RFs) and H-k (H is depth and k is Vp/Vs) stacking of RFs has become a common method to study the crust and upper mantle beneath broad-band three-component seismic stations. However, it can be difficult to interpret Pds RFs due to interference between the Pds, PPds and PSds phases, especially in the mantle portion of the lithosphere. We propose a phase separation method to isolate the prominent phases of the RFs and produce separate Pds, PPds and PSds `phase specific' receiver functions (referred to as PdsRFs, PPdsRFs and PSdsRFs, respectively) by deconvolution of the wavefield rather than single seismograms. One of the most important products of this deconvolution method is to produce Ps receiver functions (PdsRFs) that are free of crustal multiples. This is accomplished by using H-k analysis to identify specific phases in the wavefield from all seismograms recorded at a station which enables development of an iterative deconvolution procedure to produce the above-mentioned phase specific RFs. We refer to this method as wavefield iterative deconvolution (WID). The WID method differentiates and isolates different RF phases by exploiting their differences in moveout curves across the entire wave front. We tested the WID by applying it to synthetic seismograms produced using a modified version of the PREM velocity model. The WID effectively separates phases from each stacked RF in synthetic data. We also applied this technique to produce RFs from seismograms recorded at ARU (a broad-band station in Arti, Russia). The phase specific RFs produced using WID are easier to interpret than traditional RFs. The PdsRFs computed using WID are the most improved, owing to the distinct shape of its moveout curves as compared to the moveout curves for the PPds and PSds phases. The importance of this WID method is most significant in reducing interference between phases for depths of less than 300 km. Phases from deeper layers (i.e. P660s as compared to PP220s) are less likely to be misinterpreted because the large amount of moveout causes the appropriate phases to stack coherently if there is sufficient distribution in ray parameter. WID is most effective in producing clean PdsRFs that are relatively free of reverberations whereas PPdsRFs and PSdsRFs retain contamination from reverberations.
Lee, Myung W.
1999-01-01
Processing of 20 seismic profiles acquired in the Chesapeake Bay area aided in analysis of the details of an impact structure and allowed more accurate mapping of the depression caused by a bolide impact. Particular emphasis was placed on enhancement of seismic reflections from the basement. Application of wavelet deconvolution after a second zero-crossing predictive deconvolution improved the resolution of shallow reflections, and application of a match filter enhanced the basement reflections. The use of deconvolution and match filtering with a two-dimensional signal enhancement technique (F-X filtering) significantly improved the interpretability of seismic sections.
Langenbucher, Frieder
2003-11-01
Convolution and deconvolution are the classical in-vitro-in-vivo correlation tools to describe the relationship between input and weighting/response in a linear system, where input represents the drug release in vitro, and weighting/response any body response in vivo. While functional treatment, e.g. in terms of a polyexponential or Weibull distribution, is more appropriate for general survey or prediction, numerical algorithms are useful for treating actual experimental data. Deconvolution is not considered an algorithm of its own, but the inversion of a corresponding convolution. MS Excel is shown to be a useful tool for all these applications.
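Because this record emphasises that numerical deconvolution is just the inversion of the corresponding convolution, a minimal point-by-point sketch may help (a uniform time grid and a non-zero first sample of the unit-impulse response are assumed; the profiles below are invented, not pharmacokinetic data):

```python
import numpy as np

def convolve_discrete(input_rate, uir, dt):
    """Discrete convolution: response[k] = dt * sum_j input_rate[j] * uir[k - j]."""
    n = len(input_rate)
    return dt * np.convolve(input_rate, uir)[:n]

def deconvolve_discrete(response, uir, dt):
    """Deconvolution as the inversion of that convolution: forward substitution
    of the lower-triangular Toeplitz system (requires uir[0] != 0)."""
    n = len(response)
    x = np.zeros(n)
    for k in range(n):
        known = dt * np.dot(x[:k], uir[k:0:-1])   # contribution of earlier inputs
        x[k] = (response[k] - known) / (dt * uir[0])
    return x

dt = 0.25
t = np.arange(0.0, 24.0, dt)
uir = np.exp(-0.3 * t)                        # hypothetical unit-impulse response
release_rate = 0.2 * t * np.exp(-0.2 * t)     # hypothetical in vitro input profile
plasma = convolve_discrete(release_rate, uir, dt)
recovered = deconvolve_discrete(plasma, uir, dt)
```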
Microseismic source locations with deconvolution migration
NASA Astrophysics Data System (ADS)
Wu, Shaojiang; Wang, Yibo; Zheng, Yikang; Chang, Xu
2018-03-01
Identifying and locating microseismic events are critical problems in hydraulic fracturing monitoring for unconventional resources exploration. In contrast to active seismic data, microseismic data are usually recorded with unknown source excitation time and source location. In this study, we introduce deconvolution migration by combining deconvolution interferometry with interferometric cross-correlation migration (CCM). This method avoids the need for the source excitation time and enhances both the spatial resolution and robustness by eliminating the square term of the source wavelets from CCM. The proposed algorithm is divided into the following three steps: (1) generate the virtual gathers by deconvolving the master trace with all other traces in the microseismic gather to remove the unknown excitation time; (2) migrate the virtual gather to obtain a single image of the source location and (3) stack all of these images together to get the final estimation image of the source location. We test the proposed method on complex synthetic and field data sets from surface hydraulic fracturing monitoring, and compare the results with those obtained by interferometric CCM. The results demonstrate that the proposed method can obtain a 50 per cent higher spatial resolution image of the source location, and a more robust estimation with smaller localization errors, especially in the presence of velocity model errors. This method is also beneficial for source mechanism inversion and global seismology applications.
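Step (1) of the workflow, deconvolving a master trace from every other trace to remove the unknown excitation time, can be sketched as a stabilised spectral division (the water-level constant is an assumption; this is not the authors' implementation):

```python
import numpy as np

def deconvolution_gather(gather, master_index=0, water_level=1e-2):
    """Deconvolve each trace of a (n_traces, n_samples) gather by a master
    trace, producing the 'virtual gather' that no longer depends on the
    unknown source excitation time."""
    master = gather[master_index]
    M = np.fft.rfft(master)
    denom = np.abs(M) ** 2
    denom = np.maximum(denom, water_level * denom.max())   # water-level stabilisation
    virtual = np.empty_like(gather)
    for i, trace in enumerate(gather):
        T = np.fft.rfft(trace)
        virtual[i] = np.fft.irfft(T * np.conj(M) / denom, gather.shape[1])
    return virtual
```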
A Compartmental Model for Computing Cell Numbers in CFSE-based Lymphocyte Proliferation Assays
2012-01-31
of the “expected relative Kullback-Leibler distance” (information loss) when a model is used to describe a data set [23...deconvolution of the data into cell numbers, it cannot be used to accurately assess the number of cells in a particular generation. This information could be...notation is meant to emphasize the dependence of the estimate on the particular data set used to fit the model. It should be noted that, rather
High quality image-pair-based deblurring method using edge mask and improved residual deconvolution
NASA Astrophysics Data System (ADS)
Cui, Guangmang; Zhao, Jufeng; Gao, Xiumin; Feng, Huajun; Chen, Yueting
2017-04-01
Image deconvolution is a challenging task in the field of image processing. Using an image pair can provide a better restored image than deblurring from a single blurred image. In this paper, a high quality image-pair-based deblurring method is presented using the improved RL algorithm and the gain-controlled residual deconvolution technique. The input image pair includes a non-blurred noisy image and a blurred image captured for the same scene. With the estimated blur kernel, an improved RL deblurring method based on an edge mask is introduced to obtain the preliminary deblurring result with effective ringing suppression and detail preservation. Then the preliminary deblurring result serves as the basic latent image and the gain-controlled residual deconvolution is utilized to recover the residual image. A saliency weight map is computed as the gain map to further control the ringing effects around the edge areas in the residual deconvolution process. The final deblurring result is obtained by adding the preliminary deblurring result with the recovered residual image. An optical experimental vibration platform is set up to verify the applicability and performance of the proposed algorithm. Experimental results demonstrate that the proposed deblurring framework obtains a superior performance in both subjective and objective assessments and has a wide application in many image deblurring fields.
Windprofiler optimization using digital deconvolution procedures
NASA Astrophysics Data System (ADS)
Hocking, W. K.; Hocking, A.; Hocking, D. G.; Garbanzo-Salas, M.
2014-10-01
Digital improvements to data acquisition procedures used for windprofiler radars have the potential for improving the height coverage at optimum resolution, and permit improved height resolution. A few newer systems already use this capability. Real-time deconvolution procedures offer even further optimization, and this has not been effectively employed in recent years. In this paper we demonstrate the advantages of combining these features, with particular emphasis on the advantages of real-time deconvolution. Using several multi-core CPUs, we have been able to achieve speeds of up to 40 GHz from a standard commercial motherboard, allowing data to be digitized and processed without the need for any type of hardware except for a transmitter (and associated drivers), a receiver and a digitizer. No Digital Signal Processor chips are needed, allowing great flexibility with analysis algorithms. By using deconvolution procedures, we have then been able to not only optimize height resolution, but also have been able to make advances in dealing with spectral contaminants like ground echoes and other near-zero-Hz spectral contamination. Our results also demonstrate the ability to produce fine-resolution measurements, revealing small-scale structures within the backscattered echoes that were previously not possible to see. Resolutions of 30 m are possible for VHF radars. Furthermore, our deconvolution technique allows the removal of range-aliasing effects in real time, a major bonus in many instances. Results are shown using new radars in Canada and Costa Rica.
Calculation of total cross sections for charge exchange in molecular collisions
NASA Technical Reports Server (NTRS)
Ioup, J.
1979-01-01
Areas of investigation summarized include nitrogen ion-nitrogen molecule collisions; molecular collisions with surfaces; molecular identification from analysis of cracking patterns of selected gases; computer modelling of a quadrupole mass spectrometer; study of space charge in a quadrupole; transmission of the 127 deg cylindrical electrostatic analyzer; and mass spectrometer data deconvolution.
NASA Astrophysics Data System (ADS)
Asfahani, J.; Tlas, M.
2015-10-01
An easy and practical method for interpreting residual gravity anomalies due to simple geometrically shaped models such as cylinders and spheres has been proposed in this paper. This proposed method is based on both the deconvolution technique and the simplex algorithm for linear optimization to most effectively estimate the model parameters, e.g., the depth from the surface to the center of a buried structure (sphere or horizontal cylinder) or the depth from the surface to the top of a buried object (vertical cylinder), and the amplitude coefficient from the residual gravity anomaly profile. The method was tested on synthetic data sets corrupted by different white Gaussian random noise levels to demonstrate the capability and reliability of the method. The results acquired show that the estimated parameter values derived by this proposed method are close to the assumed true parameter values. The validity of this method is also demonstrated using real field residual gravity anomalies from Cuba and Sweden. Comparable and acceptable agreement is shown between the results derived by this method and those derived from real field data.
NASA Astrophysics Data System (ADS)
Zhang, Lijuan; Li, Yang; Wang, Junnan; Liu, Ying
2018-03-01
In this paper, we propose a point spread function (PSF) reconstruction method and a joint maximum a posteriori (JMAP) estimation method for adaptive optics image restoration. Using the JMAP method as the basic principle, we establish the joint log-likelihood function of multi-frame adaptive optics (AO) images based on Gaussian image noise models. To begin with, combining the observing conditions and AO system characteristics, a predicted PSF model for the wavefront phase effect is developed; then, we build up iterative solution formulas of the AO image based on our proposed algorithm, addressing the implementation process of the multi-frame AO image joint deconvolution method. We conduct a series of experiments on simulated and real degraded AO images to evaluate our proposed algorithm. Compared with the Wiener iterative blind deconvolution (Wiener-IBD) algorithm and the Richardson-Lucy IBD algorithm, our algorithm has better restoration effects, including higher peak signal-to-noise ratio (PSNR) and Laplacian sum (LS) values than the others. The research results have certain application value for actual AO image restoration.
Stovin, V R; Guymer, I; Chappell, M J; Hattersley, J G
2010-01-01
Mixing and dispersion processes affect the timing and concentration of contaminants transported within urban drainage systems. Hence, methods of characterising the mixing effects of specific hydraulic structures are of interest to drainage network modellers. Previous research, focusing on surcharged manholes, utilised the first-order Advection-Dispersion Equation (ADE) and Aggregated Dead Zone (ADZ) models to characterise dispersion. However, although systematic variations in travel time as a function of discharge and surcharge depth have been identified, the first order ADE and ADZ models do not provide particularly good fits to observed manhole data, which means that the derived parameter values are not independent of the upstream temporal concentration profile. An alternative, more robust, approach utilises the system's Cumulative Residence Time Distribution (CRTD), and the solute transport characteristics of a surcharged manhole have been shown to be characterised by just two dimensionless CRTDs, one for pre- and the other for post-threshold surcharge depths. Although CRTDs corresponding to instantaneous upstream injections can easily be generated using Computational Fluid Dynamics (CFD) models, the identification of CRTD characteristics from non-instantaneous and noisy laboratory data sets has been hampered by practical difficulties. This paper shows how a deconvolution approach derived from systems theory may be applied to identify the CRTDs associated with urban drainage structures.
Genomics Assisted Ancestry Deconvolution in Grape
Sawler, Jason; Reisch, Bruce; Aradhya, Mallikarjuna K.; Prins, Bernard; Zhong, Gan-Yuan; Schwaninger, Heidi; Simon, Charles; Buckler, Edward; Myles, Sean
2013-01-01
The genus Vitis (the grapevine) is a group of highly diverse, diploid woody perennial vines consisting of approximately 60 species from across the northern hemisphere. It is the world’s most valuable horticultural crop with ~8 million hectares planted, most of which is processed into wine. To gain insights into the use of wild Vitis species during the past century of interspecific grape breeding and to provide a foundation for marker-assisted breeding programmes, we present a principal components analysis (PCA) based ancestry estimation method to calculate admixture proportions of hybrid grapes in the United States Department of Agriculture grape germplasm collection using genome-wide polymorphism data. We find that grape breeders have backcrossed to both the domesticated V. vinifera and wild Vitis species and that reasonably accurate genome-wide ancestry estimation can be performed on interspecific Vitis hybrids using a panel of fewer than 50 ancestry informative markers (AIMs). We compare measures of ancestry informativeness used in selecting SNP panels for two-way admixture estimation, and verify the accuracy of our method on simulated populations of admixed offspring. Our method of ancestry deconvolution provides a first step towards selection at the seed or seedling stage for desirable admixture profiles, which will facilitate marker-assisted breeding that aims to introgress traits from wild Vitis species while retaining the desirable characteristics of elite V. vinifera cultivars. PMID:24244717
Denoised Wigner distribution deconvolution via low-rank matrix completion
Lee, Justin; Barbastathis, George
2016-08-23
Wigner distribution deconvolution (WDD) is a decades-old method for recovering phase from intensity measurements. Although the technique offers an elegant linear solution to the quadratic phase retrieval problem, it has seen limited adoption due to its high computational/memory requirements and the fact that the technique often exhibits high noise sensitivity. Here, we propose a method for noise suppression in WDD via low-rank noisy matrix completion. Our technique exploits the redundancy of an object’s phase space to denoise its WDD reconstruction. We show in model calculations that our technique outperforms other WDD algorithms as well as modern iterative methods for phase retrieval such as ptychography. Our results suggest that a class of phase retrieval techniques relying on regularized direct inversion of ptychographic datasets (instead of iterative reconstruction techniques) can provide accurate quantitative phase information in the presence of high levels of noise.
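A minimal illustration of the low-rank principle behind this approach (an assumed sketch, not the authors' WDD pipeline, which operates on the Wigner distribution and uses matrix completion rather than plain truncation):

```python
# Hedged sketch: denoise a redundant (low-rank) matrix by truncated SVD.
import numpy as np

def lowrank_denoise(noisy, rank):
    """Best rank-`rank` approximation of `noisy` in the Frobenius norm."""
    U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
    s[rank:] = 0.0                     # discard singular values dominated by noise
    return (U * s) @ Vt

# Toy usage: a rank-2 matrix corrupted by Gaussian noise.
rng = np.random.default_rng(0)
clean = rng.standard_normal((64, 2)) @ rng.standard_normal((2, 64))
noisy = clean + 0.5 * rng.standard_normal(clean.shape)
print(np.linalg.norm(noisy - clean), np.linalg.norm(lowrank_denoise(noisy, 2) - clean))
```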
Laramée, J A; Arbogast, B; Deinzer, M L
1989-10-01
It is shown that one-electron reduction is a common process that occurs in negative ion liquid secondary ion mass spectrometry (LSIMS) of oligonucleotides and synthetic oligonucleosides and that this process is in competition with proton loss. Deconvolution of the molecular anion cluster reveals contributions from (M-2H).-, (M-H)-, M.-, and (M + H)-. A model based on these ionic species gives excellent agreement with the experimental data. A correlation between the concentration of species arising via one-electron reduction [M.- and (M + H)-] and the electron affinity of the matrix has been demonstrated. The relative intensity of M.- is mass-dependent; this is rationalized on the basis of base-stacking. Base sequence ion formation is theorized to arise from M.- radical anion among other possible pathways.
NASA Astrophysics Data System (ADS)
Supriyanto, Noor, T.; Suhanto, E.
2017-07-01
The Endut geothermal prospect is located in Banten Province, Indonesia. The geological setting of the area is dominated by Quaternary volcanics, Tertiary sediments and Tertiary rock intrusions. The area has been through a preliminary study phase covering geology, geochemistry, and geophysics. As part of the geophysical study, gravity data were measured and analyzed in order to understand the geological conditions, especially the subsurface fault structures that control the geothermal system in the Endut area. After preconditioning of the gravity data, the complete Bouguer anomaly was analyzed using advanced derivative methods such as the Horizontal Gradient (HG) and Euler Deconvolution (ED) to clarify the existence of fault structures. These techniques detected the boundaries of anomalous bodies and fault structures, which were compared with the lithologies in the geological map. The analysis results will be useful for building a more realistic conceptual model of the Endut geothermal area.
Blind channel estimation and deconvolution in colored noise using higher-order cumulants
NASA Astrophysics Data System (ADS)
Tugnait, Jitendra K.; Gummadavelli, Uma
1994-10-01
Existing approaches to blind channel estimation and deconvolution (equalization) focus exclusively on channel or inverse-channel impulse response estimation. It is well known that the quality of the deconvolved output also depends crucially upon the noise statistics. Typically it is assumed that the noise is white and the signal-to-noise ratio is known. In this paper we remove these restrictions. Both the channel impulse response and the noise model are estimated from the higher-order (e.g., fourth-order) cumulant function and the (second-order) correlation function of the received data via a least-squares cumulant/correlation matching criterion. It is assumed that the higher-order cumulant function of the noise vanishes (e.g., Gaussian noise, as is the case for digital communications). Consistency of the proposed approach is established under certain mild sufficient conditions. The approach is illustrated via simulation examples involving blind equalization of digital communications signals.
Marciano, Michael A; Adelman, Jonathan D
2017-03-01
The deconvolution of DNA mixtures remains one of the most critical challenges in the field of forensic DNA analysis. In addition, of all the data features required to perform such deconvolution, the number of contributors in the sample is widely considered the most important and, if incorrectly chosen, the most likely to negatively influence the mixture interpretation of a DNA profile. Unfortunately, most current approaches to mixture deconvolution require the assumption that the number of contributors is known by the analyst, an assumption that can prove to be especially faulty when faced with increasingly complex mixtures of 3 or more contributors. In this study, we propose a probabilistic approach for estimating the number of contributors in a DNA mixture that leverages the strengths of machine learning. To assess this approach, we compare the classification performances of six machine learning algorithms and evaluate the model from the top-performing algorithm against the current state of the art in the field of contributor number classification. Overall results show over 98% accuracy in identifying the number of contributors in a DNA mixture of up to 4 contributors. Comparative results showed that 3-person mixtures had a classification accuracy improvement of over 6% compared to the current best-in-field methodology, and that 4-person mixtures had a classification accuracy improvement of over 20%. The Probabilistic Assessment for Contributor Estimation (PACE) also accomplishes classification of mixtures of up to 4 contributors in less than 1 s using a standard laptop or desktop computer. Considering the high classification accuracy rates, as well as the significant time commitment required by the current state-of-the-art model versus the seconds required by a machine-learning-derived model, the approach described herein provides a promising means of estimating the number of contributors and, subsequently, will lead to improved DNA mixture interpretation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
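As a hedged illustration of the general machine-learning workflow described here (not the PACE model; the features and data below are entirely synthetic placeholders), a contributor-number classifier can be trained and evaluated by cross-validation as follows:

```python
# Hedged sketch: classify the number of contributors from hypothetical summary features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_contrib = rng.integers(1, 5, size=500)                 # 1-4 contributors (synthetic labels)
features = np.column_stack([
    2.0 * n_contrib + rng.normal(0.0, 1.5, 500),         # e.g. distinct allele counts (hypothetical)
    rng.normal(0.0, 1.0, 500),                           # e.g. a peak-height statistic (hypothetical)
])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, features, n_contrib, cv=5).mean())
```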
Roelfsema, Ferdinand; Pereira, Alberto M; Adriaanse, Ria; Endert, Erik; Fliers, Eric; Romijn, Johannes A; Veldhuis, Johannes D
2010-02-01
Twenty-four-hour TSH secretion profiles in primary hypothyroidism have been analyzed with methods no longer in use. The insights afforded by earlier methods are limited. We studied TSH secretion in patients with primary hypothyroidism (eight patients with severe and eight patients with mild hypothyroidism) with up-to-date analytical tools and compared the results with outcomes in 38 healthy controls. Patients and controls underwent a 24-h study with 10-min blood sampling. TSH data were analyzed with a newly developed automated deconvolution program, approximate entropy, spikiness assessment, and cosinor regression. Both basal and pulsatile TSH secretion rates were increased in hypothyroid patients, the latter by increased burst mass with unchanged frequency. Secretory regularity (approximate entropy) was diminished, and spikiness was increased only in patients with severe hypothyroidism. A diurnal TSH rhythm was present in all but two patients, although with an earlier acrophase in severe hypothyroidism. The estimated slow component of the TSH half-life was shortened in all patients. Increased TSH concentrations in hypothyroidism are mediated by amplification of basal secretion and burst size. Secretory abnormalities quantitated by approximate entropy and spikiness were only present in patients with severe disease and thus are possibly related to the increased thyrotrope cell mass.
Design of an essentially non-oscillatory reconstruction procedure in finite-element type meshes
NASA Technical Reports Server (NTRS)
Abgrall, Remi
1992-01-01
An essentially non-oscillatory reconstruction for functions defined on finite-element-type meshes is designed. Two related problems are studied: the interpolation of possibly unsmooth multivariate functions on arbitrary meshes and the reconstruction of a function from its averages in the control volumes surrounding the nodes of the mesh. Concerning the first problem, the behavior of the highest coefficients of two polynomial interpolations of a function that may admit discontinuities along locally regular curves is studied: the Lagrange interpolation and an approximation such that the mean of the polynomial on any control volume is equal to that of the function to be approximated. This enables the best stencil for the approximation to be chosen. The choice of the smallest possible number of stencils is addressed. Concerning the reconstruction problem, two methods were studied: one based on an adaptation of the so-called reconstruction-via-deconvolution method to irregular meshes and one that relies on the approximation of the mean as defined above. The first method is conservative up to a quadrature formula and the second one is exactly conservative. The two methods have the expected order of accuracy, but the second one is much less expensive than the first. Some numerical examples are given which demonstrate the efficiency of the reconstruction.
Strehl-constrained iterative blind deconvolution for post-adaptive-optics data
NASA Astrophysics Data System (ADS)
Desiderà, G.; Carbillet, M.
2009-12-01
Aims: We aim to improve blind deconvolution applied to post-adaptive-optics (AO) data by taking into account one of their basic characteristics, resulting from the necessarily partial AO correction: the Strehl ratio. Methods: We apply a Strehl constraint in the framework of iterative blind deconvolution (IBD) of post-AO near-infrared images simulated in a detailed end-to-end manner and considering a case that is as realistic as possible. Results: The results obtained clearly show the advantage of using such a constraint, from the point of view of both performance and stability, especially for poorly AO-corrected data. The proposed algorithm has been implemented in the freely-distributed and CAOS-based Software Package AIRY.
Characterizing and Discovering Spatiotemporal Social Contact Patterns for Healthcare.
Yang, Bo; Pei, Hongbin; Chen, Hechang; Liu, Jiming; Xia, Shang
2017-08-01
During an epidemic, the spatial, temporal and demographic patterns of disease transmission are determined by multiple factors. In addition to the physiological properties of the pathogens and hosts, the social contact of the host population, which characterizes the reciprocal exposures of individuals to infection according to their demographic structure and various social activities, is also pivotal to understanding and predicting the prevalence of infectious diseases. How social contact is measured will affect the extent to which we can forecast the dynamics of infections in the real world. Most current work focuses on modeling the spatial patterns of static social contact. In this work, we use a novel perspective to address the problem of how to characterize and measure dynamic social contact during an epidemic. We propose an epidemic-model-based tensor deconvolution framework in which the spatiotemporal patterns of social contact are represented by the factors of the tensors. These factors can be discovered using a tensor deconvolution procedure with the integration of epidemic models based on rich types of data, mainly heterogeneous outbreak surveillance data, socio-demographic census data and physiological data from medical reports. Using reproduction models that include SIR/SIS/SEIR/SEIS models as case studies, the efficacy and applications of the proposed framework are theoretically analyzed, empirically validated and demonstrated through a set of rigorous experiments using both synthetic and real-world data.
Histogram deconvolution - An aid to automated classifiers
NASA Technical Reports Server (NTRS)
Lorre, J. J.
1983-01-01
It is shown that N-dimensional histograms are convolved by the addition of noise in the picture domain. Three methods are described which provide the ability to deconvolve such noise-affected histograms. The purpose of the deconvolution is to provide automated classifiers with a higher quality N-dimensional histogram from which to obtain classification statistics.
NASA Technical Reports Server (NTRS)
Ioup, G. E.
1985-01-01
Appendix 5 of the Study of One- and Two-Dimensional Filtering and Deconvolution Algorithms for a Streaming Array Computer includes a resume of the professional background of the Principal Investigator on the project, lists of his publications and research papers, graduate theses supervised, and grants received.
NASA Technical Reports Server (NTRS)
Hucek, Richard R.; Ardanuy, Philip E.; Kyle, H. Lee
1987-01-01
A deconvolution method for extracting the top of the atmosphere (TOA) mean, daily albedo field from a set of wide-FOV (WFOV) shortwave radiometer measurements is proposed. The method is based on constructing a synthetic measurement for each satellite observation. The albedo field is represented as a truncated series of spherical harmonic functions, and these linear equations are presented. Simulation studies were conducted to determine the sensitivity of the method. It is observed that a maximum of about 289 pieces of data can be extracted from a set of Nimbus 7 WFOV satellite measurements. The albedos derived using the deconvolution method are compared with albedos derived using the WFOV archival method; the developed albedo field achieved a 20 percent reduction in the global rms regional reflected flux density errors. The deconvolution method is applied to estimate the mean, daily average TOA albedo field for January 1983. A strong and extensive albedo maximum (0.42), which corresponds to the El Nino/Southern Oscillation event of 1982-1983, is detected over the south central Pacific Ocean.
Deconvolution of astronomical images using SOR with adaptive relaxation.
Vorontsov, S V; Strakhov, V N; Jefferies, S M; Borelli, K J
2011-07-04
We address the potential performance of the successive overrelaxation technique (SOR) in image deconvolution, focusing our attention on the restoration of astronomical images distorted by atmospheric turbulence. SOR is the classical Gauss-Seidel iteration, supplemented with relaxation. As indicated by earlier work, the convergence properties of SOR, and its ultimate performance in the deconvolution of blurred and noisy images, can be made competitive with those of other iterative techniques, including conjugate gradients, by a proper choice of the relaxation parameter. The question of how to choose the relaxation parameter, however, remained open, and in practical work one had to rely on experimentation. In this paper, using constructive (rather than exact) arguments, we suggest a simple strategy for choosing the relaxation parameter and for updating its value in consecutive iterations to optimize the performance of the SOR algorithm (and its positivity-constrained version, +SOR) at finite iteration counts. We suggest an extension of the algorithm to the notoriously difficult problem of "blind" deconvolution, where both the true object and the point-spread function have to be recovered from the blurred image. We report the results of numerical inversions with artificial and real data, where the algorithm is compared with techniques based on conjugate gradients. In all of our experiments +SOR provides the highest quality results. In addition, +SOR is found to be able to detect moderately small changes in the true object between separate data frames: an important quality for multi-frame blind deconvolution, where stationarity of the object is a necessity.
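A minimal sketch of the underlying iteration (assumed; this is not the authors' +SOR code and it omits the adaptive relaxation-parameter update that is the paper's contribution): SOR applied to the normal equations of a small 1-D deblurring problem with a fixed relaxation parameter omega.

```python
# Hedged sketch: successive over-relaxation on the normal equations A^T A x = A^T b.
import numpy as np

def sor_deconvolve(A, b, omega=1.6, iters=200):
    M, rhs = A.T @ A, A.T @ b
    x = np.zeros_like(rhs)
    for _ in range(iters):
        for i in range(len(x)):
            sigma = M[i] @ x - M[i, i] * x[i]
            x[i] = (1.0 - omega) * x[i] + omega * (rhs[i] - sigma) / M[i, i]
    return x

# Toy usage: 1-D Gaussian blur of a two-spike signal; a finite iteration count
# acts as an implicit regularizer for this ill-conditioned problem.
n = 60
t = np.arange(n)
A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 2.0) ** 2)
x_true = np.zeros(n); x_true[[15, 40]] = 1.0
b = A @ x_true + 0.01 * np.random.default_rng(0).standard_normal(n)
x_hat = sor_deconvolve(A, b)
```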
Gaussian and linear deconvolution of LC-MS/MS chromatograms of the eight aminobutyric acid isomers
Vemula, Harika; Kitase, Yukiko; Ayon, Navid J.; Bonewald, Lynda; Gutheil, William G.
2016-01-01
Isomeric molecules present a challenge for analytical resolution and quantification, even with MS-based detection. The eight aminobutyric acid (ABA) isomers are of interest for their various biological activities, particularly γ-aminobutyric acid (GABA) and the d- and l-isomers of β-aminoisobutyric acid (β-AIBA; BAIBA). This study aimed to investigate LC-MS/MS-based resolution of these ABA isomers as their Marfey's (Mar) reagent derivatives. HPLC was able to completely separate three Mar-ABA isomers, l-β-ABA (l-BABA) and l- and d-α-ABA (AABA), with three isomers (GABA and d/l-BAIBA) remaining in one chromatographic cluster and two isomers (α-AIBA (AAIBA) and d-BABA) in a second cluster. Partially separated cluster components were deconvoluted using Gaussian peak fitting, except for GABA and d-BAIBA. MS/MS detection of Marfey's derivatized ABA isomers provided six MS/MS fragments, with substantially different intensity profiles between structural isomers. This allowed linear deconvolution of ABA isomer peaks. Combining HPLC separation with linear and Gaussian deconvolution allowed resolution of all eight ABA isomers. Application to human serum found a substantial level of l-AABA (13 μM), an intermediate level of l-BAIBA (0.8 μM), and low but detectable levels (<0.2 μM) of GABA, l-BABA, AAIBA, d-BAIBA, and d-AABA. This approach should be useful for LC-MS/MS deconvolution of other challenging groups of isomeric molecules. PMID:27771391
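Gaussian deconvolution of a partially separated cluster can be sketched as a least-squares fit of overlapping Gaussian peaks. The code below is an assumed illustration on synthetic data, not the authors' workflow; peak positions, widths and noise level are placeholders.

```python
# Hedged sketch: fit two overlapping Gaussian peaks to a synthetic chromatogram.
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(t, a1, c1, w1, a2, c2, w2):
    g = lambda a, c, w: a * np.exp(-0.5 * ((t - c) / w) ** 2)
    return g(a1, c1, w1) + g(a2, c2, w2)

t = np.linspace(0.0, 10.0, 500)
rng = np.random.default_rng(1)
signal = two_gaussians(t, 1.0, 4.0, 0.4, 0.6, 5.0, 0.5) + 0.02 * rng.standard_normal(t.size)
popt, _ = curve_fit(two_gaussians, t, signal, p0=[1, 4, 0.5, 0.5, 5, 0.5])
# Relative peak areas (each proportional to amplitude * width for a Gaussian).
print(popt[0] * popt[2], popt[3] * popt[5])
```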
SU-E-I-08: Investigation of Deconvolution Methods for Blocker-Based CBCT Scatter Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, C; Jin, M; Ouyang, L
2015-06-15
Purpose: To investigate whether deconvolution methods can improve scatter estimation under different blurring and noise conditions for blocker-based scatter correction methods for cone-beam X-ray computed tomography (CBCT). Methods: An "ideal" projection image with scatter was first simulated for blocker-based CBCT data acquisition by assuming no blurring effect and no noise. The ideal image was then convolved with long-tail point spread functions (PSF) of different widths to mimic the blurring effect from the finite focal spot and detector response. Different levels of noise were also added. Three deconvolution methods, 1) inverse filtering, 2) Wiener, and 3) Richardson-Lucy, were used to recover the scatter signal in the blocked region. The root mean square error (RMSE) of the estimated scatter serves as a quantitative measure for the performance of the different methods under different blurring and noise conditions. Results: Due to the blurring effect, the scatter signal in the blocked region is contaminated by the primary signal in the unblocked region. The direct use of the signal in the blocked region to estimate scatter ("direct method") leads to large RMSE values, which increase with the increased width of the PSF and increased noise. Inverse filtering is very sensitive to noise and practically useless. The Wiener and Richardson-Lucy deconvolution methods significantly improve scatter estimation compared to the direct method. For a typical medium PSF and medium noise condition, both methods (∼20 RMSE) achieve a 4-fold improvement over the direct method (∼80 RMSE). The Wiener method deals better with large noise and Richardson-Lucy works better for a wide PSF. Conclusion: We investigated several deconvolution methods to recover the scatter signal in the blocked region for blocker-based scatter correction for CBCT. Our simulation results demonstrate that Wiener and Richardson-Lucy deconvolution can significantly improve the scatter estimation compared to the direct method.
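For reference, a 1-D Richardson-Lucy iteration can be sketched as below; this is an assumed, simplified illustration of the standard algorithm, not the authors' implementation or imaging geometry.

```python
# Hedged sketch: 1-D Richardson-Lucy deconvolution with a known PSF.
import numpy as np

def richardson_lucy(blurred, psf, iters=50):
    """`blurred` must be non-negative; `psf` is normalized internally."""
    psf = psf / psf.sum()
    psf_flip = psf[::-1]
    estimate = np.full(blurred.shape, float(blurred.mean()))
    for _ in range(iters):
        conv = np.convolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(conv, 1e-12)     # avoid division by zero
        estimate = estimate * np.convolve(ratio, psf_flip, mode="same")
    return estimate
```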
Improving cell mixture deconvolution by identifying optimal DNA methylation libraries (IDOL).
Koestler, Devin C; Jones, Meaghan J; Usset, Joseph; Christensen, Brock C; Butler, Rondi A; Kobor, Michael S; Wiencke, John K; Kelsey, Karl T
2016-03-08
Confounding due to cellular heterogeneity represents one of the foremost challenges currently facing Epigenome-Wide Association Studies (EWAS). Statistical methods leveraging the tissue-specificity of DNA methylation for deconvoluting the cellular mixture of heterogeneous biospecimens offer a promising solution; however, the performance of such methods depends entirely on the library of methylation markers being used for deconvolution. Here, we introduce a novel algorithm for Identifying Optimal Libraries (IDOL) that dynamically scans a candidate set of cell-specific methylation markers to find libraries that optimize the accuracy of cell fraction estimates obtained from cell mixture deconvolution. Application of IDOL to a training set consisting of samples with both whole-blood DNA methylation data (Illumina HumanMethylation450 BeadArray (HM450)) and flow cytometry measurements of cell composition revealed an optimized library comprised of 300 CpG sites. When compared with existing libraries, the library identified by IDOL demonstrated significantly better overall discrimination of the entire immune cell landscape (p = 0.038), and resulted in improved discrimination of 14 out of the 15 pairs of leukocyte subtypes. Estimates of cell composition across the samples in the training set using the IDOL library were highly correlated with their respective flow cytometry measurements, with all cell-specific R² > 0.99 and root mean square errors (RMSEs) ranging from 0.97% to 1.33% across leukocyte subtypes. Independent validation of the optimized IDOL library using two additional HM450 data sets showed similarly strong prediction performance, with all cell-specific R² > 0.90 and RMSE < 4.00%. In simulation studies, adjustments for cell composition using the IDOL library resulted in uniformly lower false positive rates compared to competing libraries, while also demonstrating an improved capacity to explain epigenome-wide variation in DNA methylation within two large publicly available HM450 data sets. Despite consisting of half as many CpGs as existing libraries for whole-blood mixture deconvolution, the optimized IDOL library identified herein resulted in outstanding prediction performance across all considered data sets and demonstrated potential to improve the operating characteristics of EWAS involving adjustments for cell distribution. In addition to providing the EWAS community with an optimized library for whole-blood mixture deconvolution, our work establishes a systematic and generalizable framework for the assembly of libraries that improve the accuracy of cell mixture deconvolution.
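A hedged sketch of the reference-based cell-mixture deconvolution step whose marker library IDOL optimizes (a generic constrained-projection illustration, not the IDOL algorithm itself; function and argument names are assumptions):

```python
# Hedged sketch: estimate cell-type fractions by non-negative least squares
# against a reference matrix of cell-type-specific methylation (beta) values.
import numpy as np
from scipy.optimize import nnls

def estimate_fractions(bulk_beta, reference_beta):
    """bulk_beta: (n_cpg,) mixture profile; reference_beta: (n_cpg, n_cell_types)."""
    fractions, _ = nnls(reference_beta, bulk_beta)
    total = fractions.sum()
    return fractions / total if total > 0 else fractions   # normalize to proportions
```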
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, L; Tan, S; Lu, W
2014-06-01
Purpose: To implement a new method that integrates deconvolution with segmentation under a variational framework for PET tumor delineation. Methods: Deconvolution and segmentation are both challenging problems in image processing. The partial volume effect (PVE) makes tumor boundaries in PET images blurred, which affects the accuracy of tumor segmentation. Deconvolution aims to obtain a PVE-free image, which can help to improve segmentation accuracy. Conversely, a correct localization of the object boundaries is helpful for estimating the blur kernel, and thus assists the deconvolution. In this study, we proposed to solve the two problems simultaneously using a variational method so that they can benefit each other. The energy functional consists of a fidelity term and a regularization term, and the blur kernel was limited to be an isotropic Gaussian kernel. We minimized the energy functional by solving the associated Euler-Lagrange equations and taking the derivative with respect to the parameters of the kernel function. An alternate minimization method was used to iterate between segmentation, deconvolution and blur-kernel recovery. The performance of the proposed method was tested on clinical PET images of patients with non-Hodgkin's lymphoma, and compared with seven other segmentation methods using the dice similarity index (DSI) and volume error (VE). Results: Among all segmentation methods, the proposed one (DSI=0.81, VE=0.05) has the highest accuracy, followed by the active contours without edges (DSI=0.81, VE=0.25), while other methods, including the Graph Cut and the Mumford-Shah (MS) method, have lower accuracy. A visual inspection shows that the proposed method localizes the real tumor contour very well. Conclusion: The results showed that deconvolution and segmentation can contribute to each other. The proposed variational method solves the two problems simultaneously and leads to high performance for tumor segmentation in PET. This work was supported in part by the National Natural Science Foundation of China (NNSFC), under Grant Nos. 60971112 and 61375018, and the Fundamental Research Funds for the Central Universities, under Grant No. 2012QN086. Wei Lu was supported in part by the National Institutes of Health (NIH) Grant No. R01 CA172638.
Bilinear Inverse Problems: Theory, Algorithms, and Applications
NASA Astrophysics Data System (ADS)
Ling, Shuyang
We will discuss how several important real-world signal processing problems, such as self-calibration and blind deconvolution, can be modeled as bilinear inverse problems and solved by convex and nonconvex optimization approaches. In Chapter 2, we bring together three seemingly unrelated concepts: self-calibration, compressive sensing and biconvex optimization. We show how several self-calibration problems can be treated efficiently within the framework of biconvex compressive sensing via a new method called SparseLift. More specifically, we consider a linear system of equations y = DAx, where the diagonal matrix D (which models the calibration error) is unknown and x is an unknown sparse signal. By "lifting" this biconvex inverse problem and exploiting sparsity in this model, we derive explicit theoretical guarantees under which both x and D can be recovered exactly, robustly, and numerically efficiently. In Chapter 3, we study the question of joint blind deconvolution and blind demixing, i.e., extracting a sequence of functions [special characters omitted] from observing only the sum of their convolutions [special characters omitted]. In particular, for the special case s = 1, it becomes the well-known blind deconvolution problem. We present a non-convex algorithm which guarantees exact recovery under conditions that are competitive with convex optimization methods, with the additional advantage of being computationally much more efficient. We discuss several applications of the proposed framework in image processing and wireless communications in connection with the Internet-of-Things. In Chapter 4, we consider three different self-calibration models of practical relevance. We show how their corresponding bilinear inverse problems can be solved by both the simple linear least squares approach and the SVD-based approach. As a consequence, the proposed algorithms are numerically extremely efficient, thus allowing for real-time deployment. Explicit theoretical guarantees and stability theory are derived, and the sampling complexity is nearly optimal (up to a poly-log factor). Applications in imaging sciences and signal processing are discussed and numerical simulations are presented to demonstrate the effectiveness and efficiency of our approach.
O'Sullivan, Finbarr; Muzi, Mark; Spence, Alexander M; Mankoff, David M; O'Sullivan, Janet N; Fitzgerald, Niall; Newman, George C; Krohn, Kenneth A
2009-06-01
Kinetic analysis is used to extract metabolic information from dynamic positron emission tomography (PET) uptake data. The theory of indicator dilutions, developed in the seminal work of Meier and Zierler (1954), provides a probabilistic framework for representation of PET tracer uptake data in terms of a convolution between an arterial input function and a tissue residue. The residue is a scaled survival function associated with tracer residence in the tissue. Nonparametric inference for the residue, a deconvolution problem, provides a novel approach to kinetic analysis, critically one that is not reliant on specific compartmental modeling assumptions. A practical computational technique based on regularized cubic B-spline approximation of the residence time distribution is proposed. Nonparametric residue analysis allows formal statistical evaluation of specific parametric models to be considered. This analysis needs to properly account for the increased flexibility of the nonparametric estimator. The methodology is illustrated using data from a series of cerebral studies with PET and fluorodeoxyglucose (FDG) in normal subjects. Comparisons are made between key functionals of the residue (tracer flux, flow, etc.) resulting from a parametric analysis (the standard two-compartment model of Phelps et al. 1979) and a nonparametric analysis. Strong statistical evidence against the compartment model is found. Primarily these differences relate to the representation of the early temporal structure of the tracer residence, which is largely a function of the vascular supply network. There are convincing physiological arguments against the representations implied by the compartmental approach, but this is the first time that a rigorous statistical confirmation using PET data has been reported. The compartmental analysis produces suspect values for flow but, notably, the impact on the metabolic flux, though statistically significant, is limited to deviations on the order of 3%-4%. The general advantage of the nonparametric residue analysis is the ability to provide a valid kinetic quantitation in the context of studies where there may be heterogeneity or other uncertainty about the accuracy of a compartmental model approximation of the tissue residue.
NASA Astrophysics Data System (ADS)
Chantry, V.; Sluse, D.; Magain, P.
2010-11-01
Aims: We attempt to place very accurate positional constraints on seven gravitationally lensed quasars currently being monitored by the COSMOGRAIL collaboration, and to constrain the shape parameters of the light distribution of the lensing galaxy. We attempt to determine simple mass models that reproduce the observed configuration and predict time delays. We finally test, for the quads, whether there is evidence of astrometric perturbations produced by substructures in the lensing galaxy, which may preclude a good fit with the simple models. Methods: We apply the iterative MCS deconvolution method to near-IR HST archival data of seven gravitationally lensed quasars. This deconvolution method allows us to differentiate the contributions of the point sources from those of extended structures such as Einstein rings. This method leads to an accuracy of 1-2 mas in the relative positions of the sources and lens. The limiting factor of the method is the uncertainty in the instrumental geometric distortions. We then compute mass models of the lensing galaxy using state-of-the-art modeling techniques. Results: We determine the relative positions of the lensed images and lens shape parameters of seven lensed quasars: HE 0047-1756, RX J1131-1231, SDSS J1138+0314, SDSS J1155+6346, SDSS J1226-0006, WFI J2026-4536, and HS 2209+1914. The lensed image positions are derived with 1-2 mas accuracy. Isothermal and de Vaucouleurs mass models are calculated for the whole sample. The effect of the lens environment on the lens mass models is taken into account with a shear term. Doubly imaged quasars are equally well fitted by each of these models. A large amount of shear is necessary to reproduce SDSS J1155+6346 and SDSS J1226-0006. In the latter case, we identify a nearby galaxy as the dominant source of shear. The quadruply imaged quasar SDSS J1138+0314 is reproduced well by simple lens models, which is not the case for the two other quads, RX J1131-1231 and WFI J2026-4536. This might be the signature of astrometric perturbations caused by massive substructures in the galaxy, which are unaccounted for by the models. Other possible explanations are also presented. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained from the data archive at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy (AURA), Inc., under NASA contract NAS 5-26555.
ERIC Educational Resources Information Center
Alter, Krystyn P.; Molloy, John L.; Niemeyer, Emily D.
2005-01-01
A laboratory experiment reinforces the concept of acid-base equilibria while introducing a common application of spectrophotometry and can easily be completed within a standard four-hour laboratory period. It provides students with an opportunity to use advanced data analysis techniques like data smoothing and spectral deconvolution to…
Deconvolution of Energy Spectra in the ATIC Experiment
NASA Technical Reports Server (NTRS)
Batkov, K. E.; Panov, A. D.; Adams, J. H.; Ahn, H. S.; Bashindzhagyan, G. L.; Chang, J.; Christl, M.; Fazley, A. R.; Ganel, O.; Gunasigha, R. M.;
2005-01-01
The Advanced Thin Ionization Calorimeter (ATIC) balloon-borne experiment is designed to perform cosmic-ray elemental spectra measurements from below 100 GeV up to tens of TeV for nuclei from hydrogen to iron. The instrument is composed of a silicon matrix detector followed by a carbon target, interleaved with scintillator tracking layers, and a segmented BGO calorimeter composed of 320 individual crystals totalling 18 radiation lengths, used to determine the particle energy. The technique for deconvolution of the energy spectra measured in the thin calorimeter is based on detailed simulations of the response of the ATIC instrument to different cosmic-ray nuclei over a wide energy range. The method of deconvolution is described and the energy spectrum of carbon obtained with this technique is presented.
SOURCE PULSE ENHANCEMENT BY DECONVOLUTION OF AN EMPIRICAL GREEN'S FUNCTION.
Mueller, Charles S.
1985-01-01
Observations of the earthquake source-time function are enhanced if path, recording-site, and instrument complexities can be removed from seismograms. Assuming that a small earthquake has a simple source, its seismogram can be treated as an empirical Green's function and deconvolved from the seismogram of a larger and/or more complex earthquake by spectral division. When the deconvolution is well posed, the quotient spectrum represents the apparent source-time function of the larger event. This study shows that with high-quality locally recorded earthquake data it is feasible to Fourier transform the quotient and obtain a useful result in the time domain. In practice, the deconvolution can be stabilized by one of several simple techniques. An application of the method is given.
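A minimal sketch of the spectral-division step (assumed; the water-level stabilization shown here is one commonly used choice among the "several simple techniques" the abstract alludes to):

```python
# Hedged sketch: empirical Green's function deconvolution by stabilized
# spectral division; the quotient approximates the apparent source time function.
import numpy as np

def egf_deconvolve(large_event, small_event, water_level=0.01):
    n = len(large_event)
    L = np.fft.rfft(large_event, n)
    S = np.fft.rfft(small_event, n)
    floor = water_level * np.max(np.abs(S))
    # Raise very small spectral amplitudes of the Green's function to the floor.
    S_stab = np.where(np.abs(S) < floor, floor * np.exp(1j * np.angle(S)), S)
    return np.fft.irfft(L / S_stab, n)
```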
Deconvolution of time series in the laboratory
NASA Astrophysics Data System (ADS)
John, Thomas; Pietschmann, Dirk; Becker, Volker; Wagner, Christian
2016-10-01
In this study, we present two practical applications of the deconvolution of time series in Fourier space. First, we reconstruct a filtered input signal of sound cards that has been heavily distorted by a built-in high-pass filter using a software approach. Using deconvolution, we can partially bypass the filter and extend the dynamic frequency range by two orders of magnitude. Second, we construct required input signals for a mechanical shaker in order to obtain arbitrary acceleration waveforms, referred to as feedforward control. For both situations, experimental and theoretical approaches are discussed to determine the system-dependent frequency response. Moreover, for the shaker, we propose a simple feedback loop as an extension to the feedforward control in order to handle nonlinearities of the system.
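A minimal sketch of deconvolution in Fourier space with a small regularization term (an assumed illustration; the exact determination of the frequency response and the filtering used in the paper may differ):

```python
# Hedged sketch: recover an input signal from a measured output and the
# system's impulse response, with a Tikhonov-like damping of small |H|.
import numpy as np

def fourier_deconvolve(output_signal, impulse_response, eps=1e-3):
    n = len(output_signal)
    H = np.fft.rfft(impulse_response, n)
    Y = np.fft.rfft(output_signal, n)
    X = Y * np.conj(H) / (np.abs(H) ** 2 + eps)   # regularized spectral division
    return np.fft.irfft(X, n)
```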
Artificial neural systems for interpretation and inversion of seismic data
NASA Astrophysics Data System (ADS)
Calderon-Macias, Carlos
The goal of this work is to investigate the feasibility of using neural network (NN) models for solving geophysical exploration problems. First, a feedforward neural network (FNN) is used to solve inverse problems. The operational characteristics of a FNN are primarily controlled by a set of weights and a nonlinear function that performs a mapping between two sets of data. In a process known as training, the FNN weights are iteratively adjusted to perform the mapping. After training, the computed weights encode important features of the data that enable one pattern to be distinguished from another. Synthetic data computed from an ensemble of earth models and the corresponding models provide the training data. Two training methods are studied: the backpropagation method which is a gradient scheme, and a global optimization method called very fast simulated annealing (VFSA). A trained network is then used to predict models from new data (e.g., data from a new location) in a one-step procedure. The application of this method to the problems of obtaining formation resistivities and layer thicknesses from resistivity sounding data and 1D velocity models from seismic data shows that trained FNNs produce reasonably accurate earth models when observed data are input to the FNNs. In a second application, a FNN is used for automating the NMO correction process of seismic reflection data. The task of the FNN is to map CMP data at control locations along a seismic line into subsurface velocities. The network is trained while the velocity analyses are performed at the control locations. Once trained, the computed weights are used as an operator that acts on the remaining CMP data as a velocity interpolator, resulting in a fast method for NMO correction. The second part of this dissertation describes the application of a Hopfield neural network (HNN) to the problems of deconvolution and multiple attenuation. In these applications, the unknown parameters (reflection coefficients and source wavelet in the first problem and an operator in the second) are mapped as neurons of the HNN. The proposed deconvolution method attempts to reproduce the data with a limited number of events. The multiple attenuation method resembles the predictive deconvolution method. Results of this method are compared with a multiple elimination method based on estimating the source wavelet from the seismic data.
Toledo-Núñez, Citlali; Vera-Robles, L Iraís; Arroyo-Maya, Izlia J; Hernández-Arana, Andrés
2016-09-15
A frequent outcome in differential scanning calorimetry (DSC) experiments carried out with large proteins is the irreversibility of the observed endothermic effects. In these cases, DSC profiles are analyzed according to methods developed for temperature-induced denaturation transitions occurring under kinetic control. In the one-step irreversible model (native → denatured), the characteristics of the observed single-peaked endotherm depend on the denaturation enthalpy and the temperature dependence of the reaction rate constant, k. Several procedures have been devised to obtain the parameters that determine the variation of k with temperature. Here, we have elaborated on one of these procedures in order to analyze more complex DSC profiles. Synthetic data for a heat capacity curve were generated according to a model with two sequential reactions; the temperature dependence of each of the two rate constants involved was determined, according to Eyring's equation, by two fixed parameters. It was then shown that our deconvolution procedure, making use of heat capacity data alone, permits extraction of the parameter values that were initially used. Finally, experimental DSC traces showing two and three maxima were analyzed and reproduced with relative success according to two- and four-step sequential models. Copyright © 2016 Elsevier Inc. All rights reserved.
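A hedged sketch of how a synthetic heat capacity curve for a two-step sequential model can be generated. Arrhenius rate constants and all parameter values below are illustrative assumptions made for brevity; the paper's analysis uses Eyring's equation.

```python
# Hedged sketch: synthetic excess heat capacity for the kinetically controlled
# sequential model N -> I -> D scanned at a constant heating rate.
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314  # J / (mol K)

def k_arr(T, A, Ea):
    return A * np.exp(-Ea / (R * T))

def synthetic_dsc(T_grid, scan_rate, k1=(1e38, 2.5e5), k2=(1e37, 2.5e5),
                  dH1=3.0e5, dH2=2.0e5):
    """T_grid in K (increasing), scan_rate in K/s, activation energies in J/mol."""
    def rhs(t, y):
        T = T_grid[0] + scan_rate * t
        n, i = y
        return [-k_arr(T, *k1) * n, k_arr(T, *k1) * n - k_arr(T, *k2) * i]

    t_eval = (T_grid - T_grid[0]) / scan_rate
    sol = solve_ivp(rhs, (0.0, t_eval[-1]), [1.0, 0.0], t_eval=t_eval, method="LSODA")
    n, i = sol.y
    # Heat released per kelvin by each sequential step.
    return (dH1 * k_arr(T_grid, *k1) * n + dH2 * k_arr(T_grid, *k2) * i) / scan_rate

T = np.linspace(300.0, 370.0, 700)
cp_excess = synthetic_dsc(T, scan_rate=1.0 / 60.0)   # 1 K/min heating rate
```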
NASA Astrophysics Data System (ADS)
Aziz, Akram Mekhael; Sauck, William August; Shendi, El-Arabi Hendi; Rashed, Mohamed Ahmed; Abd El-Maksoud, Mohamed
2013-07-01
Progress over the past three decades in geophysical data processing and interpretation techniques has been particularly focused on the field of aero-geophysics. The present study demonstrates the application of some of these techniques, including the Analytic Signal, Located Euler Deconvolution, Standard Euler Deconvolution, and 2D inverse modelling, to help enhance and interpret archeo-magnetic measurements. A high-resolution total magnetic field survey was conducted at the ancient city of Pelusium (a name derived from the ancient Pelusiac branch of the Nile; the site is now called Tell el-Farama), located in the northwestern corner of the Sinai Peninsula. The historical city served as a harbour throughout Egyptian history. Different ruins at the site have been dated back to late Pharaonic, Graeco-Roman, Byzantine, Coptic, and Islamic periods. An area of 10,000 m², to the west of the famous huge red brick citadel of Pelusium, was surveyed using the magnetic method. The chosen location was recommended by Egyptian archaeologists, who suspected the presence of buried foundations of a temple to the gods Zeus and Kasios. The interpretation of the results revealed interesting shallow buried features, which may represent the temple's outer walls. These walls are elongated along the same azimuth as the northern wall of the citadel, which supports the hypothesis of a controlling feature such as a former seacoast or the shore of a distributary channel.
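As a hedged illustration of the standard Euler deconvolution step (a generic single-window sketch with assumed argument names, not the processing package used in this study), the source position and background field can be obtained by solving Euler's homogeneity equation in a least-squares sense:

```python
# Hedged sketch: solve Euler's homogeneity equation
#   (x - x0) dT/dx + (y - y0) dT/dy + (z - z0) dT/dz = N (B - T)
# for one data window, given the field T and its gradients on flattened 1-D arrays.
import numpy as np

def euler_window(x, y, z, T, dTdx, dTdy, dTdz, structural_index):
    x, y, z, T = map(np.asarray, (x, y, z, T))
    dTdx, dTdy, dTdz = map(np.asarray, (dTdx, dTdy, dTdz))
    N = float(structural_index)
    A = np.column_stack([dTdx, dTdy, dTdz, np.full(T.shape, N)])
    b = x * dTdx + y * dTdy + z * dTdz + N * T
    (x0, y0, z0, background), *_ = np.linalg.lstsq(A, b, rcond=None)
    return x0, y0, z0, background
```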
Kunze, Karl P; Nekolla, Stephan G; Rischpler, Christoph; Zhang, Shelley HuaLei; Hayes, Carmel; Langwieser, Nicolas; Ibrahim, Tareq; Laugwitz, Karl-Ludwig; Schwaiger, Markus
2018-04-19
Systematic differences with respect to myocardial perfusion quantification exist between DCE-MRI and PET. Using the potential of integrated PET/MRI, this study was conceived to compare perfusion quantification on the basis of simultaneously acquired 13NH3-ammonia PET and DCE-MRI data in patients at rest and stress. Twenty-nine patients were examined on a 3T PET/MRI scanner. DCE-MRI was implemented in a dual-sequence design with additional T1 mapping for signal normalization. Four different deconvolution methods, including a modified version of the Fermi technique, were compared against 13NH3-ammonia results. Cohort-average flow comparison yielded higher resting flows for DCE-MRI than for PET and, therefore, significantly lower DCE-MRI perfusion ratios under the common assumption of equal arterial and tissue hematocrit. Absolute flow values were strongly correlated in both slice-average (R² = 0.82) and regional (R² = 0.7) evaluations. Different DCE-MRI deconvolution methods yielded similar flow results, with the exception of an unconstrained Fermi method exhibiting outliers at high flows when compared with PET. Thresholds for ischemia classification may not be directly transferable between PET and MRI flow values. Differences in perfusion ratios between PET and DCE-MRI may be lifted by using stress/rest-specific hematocrit conversion. Proper physiological constraints are advised in model-constrained deconvolution. © 2018 International Society for Magnetic Resonance in Medicine.
Deblurring of Class-Averaged Images in Single-Particle Electron Microscopy.
Park, Wooram; Madden, Dean R; Rockmore, Daniel N; Chirikjian, Gregory S
2010-03-01
This paper proposes a method for deblurring of class-averaged images in single-particle electron microscopy (EM). Since EM images of biological samples are very noisy, the images which are nominally identical projection images are often grouped, aligned and averaged in order to cancel or reduce the background noise. However, the noise in the individual EM images generates errors in the alignment process, which creates an inherent limit on the accuracy of the resulting class averages. This inaccurate class average due to the alignment errors can be viewed as the result of a convolution of an underlying clear image with a blurring function. In this work, we develop a deconvolution method that gives an estimate for the underlying clear image from a blurred class-averaged image using precomputed statistics of misalignment. Since this convolution is over the group of rigid body motions of the plane, SE(2), we use the Fourier transform for SE(2) in order to convert the convolution into a matrix multiplication in the corresponding Fourier space. For practical implementation we use a Hermite-function-based image modeling technique, because Hermite expansions enable lossless Cartesian-polar coordinate conversion using the Laguerre-Fourier expansions, and Hermite expansion and Laguerre-Fourier expansion retain their structures under the Fourier transform. Based on these mathematical properties, we can obtain the deconvolution of the blurred class average using simple matrix multiplication. Tests of the proposed deconvolution method using synthetic and experimental EM images confirm the performance of our method.
Estimating the Earthquake Source Time Function by Markov Chain Monte Carlo Sampling
NASA Astrophysics Data System (ADS)
Dȩbski, Wojciech
2008-07-01
Many aspects of earthquake source dynamics, like dynamic stress drop, rupture velocity and directivity, etc., are currently inferred from source time functions obtained by a deconvolution of the propagation and recording effects from seismograms. The question of the accuracy of the obtained results remains open. In this paper we address this issue by considering two aspects of the source time function deconvolution. First, we propose a new pseudo-spectral parameterization of the sought function which explicitly takes into account the physical constraints imposed on the sought functions. Such a parameterization automatically excludes non-physical solutions and so improves the stability and uniqueness of the deconvolution. Secondly, we demonstrate that the Bayesian approach to the inverse problem at hand, combined with an efficient Markov Chain Monte Carlo sampling technique, is a method which allows efficient estimation of the source time function uncertainties. The key point of the approach is the description of the solution of the inverse problem by the a posteriori probability density function constructed according to Bayesian (probabilistic) theory. Next, the Markov Chain Monte Carlo sampling technique is used to sample this function so the statistical estimator of a posteriori errors can easily be obtained with minimal additional computational effort with respect to modern inversion (optimization) algorithms. The methodological considerations are illustrated by a case study of the mining-induced seismic event of magnitude ML ≈ 3.1 that occurred at the Rudna (Poland) copper mine. The seismic P-wave records were inverted for the source time functions, using the proposed algorithm and the empirical Green function technique to approximate Green functions. The obtained solutions seem to suggest some complexity of the rupture process, with double pulses of energy release. However, the error analysis shows that the hypothesis of source complexity is not justified at the 95% confidence level. On the basis of the analyzed event we also show that the separation of the source inversion into two steps introduces limitations on the completeness of the a posteriori error analysis.
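A generic random-walk Metropolis sketch of sampling an a posteriori density is given below (assumed and greatly simplified; the paper combines this idea with a pseudo-spectral parameterization and physical constraints on the source time function):

```python
# Hedged sketch: random-walk Metropolis sampling of a log posterior density.
import numpy as np

def metropolis(log_post, x0, n_samples=10000, step=0.1, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    samples = np.empty((n_samples, x.size))
    for k in range(n_samples):
        prop = x + step * rng.standard_normal(x.shape)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:      # accept / reject
            x, lp = prop, lp_prop
        samples[k] = x
    return samples

# Toy usage: sample a 2-D Gaussian posterior and estimate parameter uncertainty.
log_post = lambda x: -0.5 * np.sum(x ** 2)
chain = metropolis(log_post, x0=[1.0, -1.0])
print(chain.mean(axis=0), chain.std(axis=0))
```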
NASA Astrophysics Data System (ADS)
Jeffs, Brian D.; Christou, Julian C.
1998-09-01
This paper addresses post-processing for resolution enhancement of sequences of short exposure adaptive optics (AO) images of space objects. The unknown residual blur is removed using Bayesian maximum a posteriori blind image restoration techniques. In the problem formulation, both the true image and the unknown blur PSFs are represented by the flexible generalized Gaussian Markov random field (GGMRF) model. The GGMRF probability density function provides a natural mechanism for expressing available prior information about the image and blur. Incorporating such prior knowledge in the deconvolution optimization is crucial for the success of blind restoration algorithms. For example, space objects often contain sharp edge boundaries and geometric structures, while the residual blur PSF in the corresponding partially corrected AO image is spectrally band limited and exhibits smoothed, random, texture-like features on a peaked central core. By properly choosing parameters, GGMRF models can accurately represent both the blur PSF and the object, and serve to regularize the deconvolution problem. These two GGMRF models also serve as discriminator functions to separate blur and object in the solution. Algorithm performance is demonstrated with examples from synthetic AO images. Results indicate significant resolution enhancement when applied to partially corrected AO images. An efficient computational algorithm is described.
Efficient volumetric estimation from plenoptic data
NASA Astrophysics Data System (ADS)
Anglin, Paul; Reeves, Stanley J.; Thurow, Brian S.
2013-03-01
The commercial release of the Lytro camera, and greater availability of plenoptic imaging systems in general, have given the image processing community cost-effective tools for light-field imaging. While this data is most commonly used to generate planar images at arbitrary focal depths, reconstruction of volumetric fields is also possible. Similarly, deconvolution is a technique that is conventionally used in planar image reconstruction, or deblurring, algorithms. However, when leveraged with the ability of a light-field camera to quickly reproduce multiple focal planes within an imaged volume, deconvolution offers a computationally efficient method of volumetric reconstruction. Related research has shown that light-field imaging systems in conjunction with tomographic reconstruction techniques are also capable of estimating the imaged volume and have been successfully applied to particle image velocimetry (PIV). However, while tomographic volumetric estimation through algorithms such as the multiplicative algebraic reconstruction technique (MART) has proven to be highly accurate, it is computationally intensive. In this paper, the reconstruction problem is shown to be solvable by deconvolution. Deconvolution offers significant improvement in computational efficiency through the use of fast Fourier transforms (FFTs) when compared to other tomographic methods. This work describes a deconvolution algorithm designed to reconstruct a 3-D particle field from simulated plenoptic data. A 3-D extension of existing 2-D FFT-based refocusing techniques is presented to further improve efficiency when computing object focal stacks and system point spread functions (PSF). Reconstruction artifacts are identified; their underlying source and methods of mitigation are explored where possible, and reconstructions of simulated particle fields are provided.
The Filtered Abel Transform and Its Application in Combustion Diagnostics
NASA Technical Reports Server (NTRS)
Simons, Stephen N. (Technical Monitor); Yuan, Zeng-Guang
2003-01-01
Many non-intrusive combustion diagnosis methods generate line-of-sight projections of a flame field. To reconstruct the spatial field of the measured properties, these projections need to be deconvoluted. When the spatial field is axisymmetric, commonly used deconvolution methods include the Abel transform, the onion peeling method, and the two-dimensional Fourier transform method and its derivatives such as the filtered back projection methods. This paper proposes a new approach for performing the Abel transform, which possesses the exactness of the Abel transform and the flexibility of incorporating various filters in the reconstruction process. The Abel transform is an exact method and the simplest among these commonly used methods. It is evinced in this paper that all exact reconstruction methods for axisymmetric distributions must be equivalent to the Abel transform because of its uniqueness and exactness. A detailed proof is presented to show that the two-dimensional Fourier method, when applied to axisymmetric cases, is identical to the Abel transform. Discrepancies among the various reconstruction methods stem from the different approximations made to perform numerical calculations. An equation relating the spectrum of a set of projection data to that of the corresponding spatial distribution is obtained, which shows that the spectrum of the projection is equal to the Abel transform of the spectrum of the corresponding spatial distribution. From this equation, if either the projection or the distribution is bandwidth limited, the other is also bandwidth limited, and both have the same bandwidth. If the two are not bandwidth limited, the Abel transform has a bias against low wave number components in most practical cases. This explains why the Abel transform and all exact deconvolution methods are sensitive to high wave number noise. The filtered Abel transform is based on the fact that the Abel transform of filtered projection data is equal to an integral transform of the original projection data, with the kernel function being the Abel transform of the filtering function. The kernel function is independent of the projection data and can be obtained separately once the filtering function is selected. Users can select the best filtering function for a particular set of experimental data. Once the kernel function is obtained, it can be applied repeatedly to a number of projection data sets (rows) from the same experiment. When an entire flame image that contains a large number of projection lines needs to be processed, the new approach significantly reduces the computational effort in comparison with the conventional approach, in which each projection data set is deconvoluted separately. Computer codes have been developed to perform the filtered Abel transform for an entire flame field. Measured soot volume fraction data from a jet diffusion flame are processed as an example.
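For orientation, the (unfiltered) inverse Abel transform can be evaluated numerically as below; this is an assumed, simple discretization for illustration, not the filtered Abel transform code developed in the paper.

```python
# Hedged sketch: inverse Abel transform of a projection F(y) sampled on a
# uniform grid, f(r) = -(1/pi) * integral_r^R F'(y) / sqrt(y^2 - r^2) dy.
import numpy as np

def abel_invert(projection, dy):
    F_prime = np.gradient(projection, dy)
    n = len(projection)
    r = np.arange(n) * dy
    field = np.zeros(n)
    for i in range(n - 1):
        y = r[i + 1:]                                   # skip y = r to avoid the singularity
        integrand = F_prime[i + 1:] / np.sqrt(y ** 2 - r[i] ** 2)
        # Trapezoid rule over the remaining samples.
        field[i] = -np.sum(0.5 * (integrand[:-1] + integrand[1:]) * np.diff(y)) / np.pi
    return field
```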
The deconvolution of complex spectra by artificial immune system
NASA Astrophysics Data System (ADS)
Galiakhmetova, D. I.; Sibgatullin, M. E.; Galimullin, D. Z.; Kamalova, D. I.
2017-11-01
An application of the artificial immune system method for the decomposition of complex spectra is presented. The results of the decomposition of a model contour consisting of three components, each a Gaussian contour, are demonstrated. The artificial immune system is an optimization method inspired by the behaviour of the biological immune system and belongs to the family of modern search methods for optimization.
Towards the Detection of Reflected Light from Exo-planets: a Comparison of Two Methods
NASA Astrophysics Data System (ADS)
Rodler, Florian; Kürster, Martin
For exo-planets, the huge brightness contrast between the star and the planet constitutes an enormous challenge when attempting to observe some kind of direct signal from the planet. With high-resolution spectroscopy in the visual one can exploit the fact that the spectrum reflected from the planet is essentially a copy of the rich stellar absorption line spectrum. This spectrum is shifted in wavelength according to the orbital RV of the planet and strongly scaled down in brightness by a factor of a few times 10^-5, and is therefore deeply buried in the noise. The S/N of the planetary signal can be increased by applying one of the following methods. The Least Squares Deconvolution Method (LSDM, e.g. Collier Cameron et al. 2002) combines the observed spectral lines into a high-S/N mean line profile (star + planet), determined by least-squares deconvolution of the observed spectrum with a template spectrum (from VALD, Kupka et al. 1999). Another approach is the Data Synthesis Method (DSM, e.g. Charbonneau et al. 1999), a forward data modelling technique in which the planetary signal is modelled as a scaled-down and RV-shifted version of the stellar spectrum.
VizieR Online Data Catalog: Spatial deconvolution code (Quintero Noda+, 2015)
NASA Astrophysics Data System (ADS)
Quintero Noda, C.; Asensio Ramos, A.; Orozco Suarez, D.; Ruiz Cobo, B.
2015-05-01
This deconvolution method follows the scheme presented in Ruiz Cobo & Asensio Ramos (2013A&A...549L...4R). The Stokes parameters are projected onto a few spectral eigenvectors and the ensuing maps of coefficients are deconvolved using a standard Lucy-Richardson algorithm. This introduces a stabilization because the PCA filtering reduces the amount of noise. (1 data file).
NASA Astrophysics Data System (ADS)
Boutet de Monvel, Jacques; Le Calvez, Sophie; Ulfendahl, Mats
2000-05-01
Image restoration algorithms provide efficient tools for recovering part of the information lost in the imaging process of a microscope. We describe recent progress in the application of deconvolution to confocal microscopy. The point spread function of a Biorad-MRC1024 confocal microscope was measured under various imaging conditions and used to process 3D confocal images acquired in an intact preparation of the inner ear developed at Karolinska Institutet. Using these experiments, we investigate the application of denoising methods based on wavelet analysis as a natural regularization of the deconvolution process. Within the Bayesian approach to image restoration, we compare wavelet denoising with the use of a maximum entropy constraint as another natural regularization method. Numerical experiments performed with test images show a clear advantage of the wavelet denoising approach, allowing one to `cool down' the image with respect to the signal while suppressing much of the fine-scale artifacts that appear during deconvolution due to the presence of noise, incomplete knowledge of the point spread function, or undersampling problems. We further describe a natural development of this approach, which consists of performing the Bayesian inference directly in the wavelet domain.
A method to measure the presampling MTF in digital radiography using Wiener deconvolution
NASA Astrophysics Data System (ADS)
Zhou, Zhongxing; Zhu, Qingzhen; Gao, Feng; Zhao, Huijuan; Zhang, Lixin; Li, Guohui
2013-03-01
We developed a novel method for determining the presampling modulation transfer function (MTF) of digital radiography systems from slanted edge images based on Wiener deconvolution. The degraded supersampled edge spread function (ESF) was obtained from simulated slanted edge images with known MTF in the presence of Poisson noise, and its corresponding ideal ESF without degradation was constructed according to its central edge position. To meet the requirements of the absolute integrability condition of the Fourier transform, the original ESFs were mirrored to construct a symmetric pattern of ESFs. Then, based on the Wiener deconvolution technique, the supersampled line spread function (LSF) could be acquired from the symmetric pattern of degraded supersampled ESFs given the ideal symmetric ESFs and the system noise. The MTF is then the normalized magnitude of the Fourier transform of the LSF. The determined MTF showed strong agreement with the theoretical true MTF when an appropriate Wiener parameter was chosen. The effects of the Wiener parameter value and the width of the square-like wave peak in the symmetric ESFs are illustrated and discussed. In conclusion, an accurate and simple method to measure the presampling MTF was established using the Wiener deconvolution technique applied to slanted edge images.
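A minimal sketch of the Wiener-deconvolution route from edge images to the presampling MTF is given below; the function name, the ideal-ESF construction, and the regularization constant k are assumptions for illustration, not the authors' implementation.

    import numpy as np

    def presampling_mtf(esf_measured, esf_ideal, k=0.01):
        # esf_measured: degraded supersampled edge spread function
        # esf_ideal:    noise-free step edge built at the same edge position
        # k:            Wiener regularization parameter (tuned per data set)
        sm = np.concatenate([esf_measured, esf_measured[::-1]])  # mirror for symmetry
        si = np.concatenate([esf_ideal, esf_ideal[::-1]])

        H = np.fft.fft(sm)
        S = np.fft.fft(si)
        T = H * np.conj(S) / (np.abs(S) ** 2 + k)   # Wiener deconvolution

        lsf = np.real(np.fft.ifft(T))               # line spread function
        mtf = np.abs(np.fft.fft(lsf))
        return mtf / mtf[0]                          # normalize to unity at zero frequency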
Single-Ion Deconvolution of Mass Peak Overlaps for Atom Probe Microscopy.
London, Andrew J; Haley, Daniel; Moody, Michael P
2017-04-01
Due to the intrinsic evaporation properties of the material studied, insufficient mass-resolving power and lack of knowledge of the kinetic energy of incident ions, peaks in the atom probe mass-to-charge spectrum can overlap and result in incorrect composition measurements. Contributions to these peak overlaps can be deconvoluted globally, by simply examining adjacent peaks combined with knowledge of natural isotopic abundances. However, this strategy does not account for the fact that the relative contributions to this convoluted signal can often vary significantly in different regions of the analysis volume; e.g., across interfaces and within clusters. Some progress has been made with spatially localized deconvolution in cases where the discrete microstructural regions can be easily identified within the reconstruction, but this means no further point cloud analyses are possible. Hence, we present an ion-by-ion methodology where the identity of each ion, normally obscured by peak overlap, is resolved by examining the isotopic abundance of their immediate surroundings. The resulting peak-deconvoluted data are a point cloud and can be analyzed with any existing tools. We present two detailed case studies and discussion of the limitations of this new technique.
Image deblurring by motion estimation for remote sensing
NASA Astrophysics Data System (ADS)
Chen, Yueting; Wu, Jiagu; Xu, Zhihai; Li, Qi; Feng, Huajun
2010-08-01
The imaging resolution of remote sensing systems is often limited by image degradation resulting from unwanted motion disturbances of the platform during image exposures. Since the form of the platform vibration can be arbitrary, the lack of a priori knowledge about the motion function (the PSF) suggests blind restoration approaches. A deblurring method that combines motion estimation and image deconvolution for both area-array and TDI remote sensing is proposed in this paper. The image motion estimation is accomplished by an auxiliary high-speed detector and a sub-pixel correlation algorithm. The PSF is then reconstructed from the estimated image motion vectors. Finally, the clear image can be recovered from the blurred image of the prime camera by the Richardson-Lucy (RL) iterative deconvolution algorithm with the constructed PSF. The image deconvolution for the area-array detector is direct, while for the TDICCD detector an integral distortion compensation step and a row-by-row deconvolution scheme are applied. Theoretical analyses and experimental results show that the performance of the proposed concept is convincing. Blurred and distorted images can be properly recovered, not only for visual observation but also with significant gains in objective evaluation.
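As a rough illustration of the non-blind recovery step referred to above, the sketch below implements a plain Richardson-Lucy iteration for a known PSF; the iteration count and the flat initial estimate are assumptions, and the row-by-row TDI scheme described in the abstract is not reproduced.

    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(blurred, psf, n_iter=30):
        # Non-blind Richardson-Lucy deconvolution of a 2D image.
        psf = psf / psf.sum()
        psf_flipped = psf[::-1, ::-1]
        estimate = np.full_like(blurred, blurred.mean(), dtype=float)
        eps = 1e-12                                  # guard against division by zero
        for _ in range(n_iter):
            reblurred = fftconvolve(estimate, psf, mode="same")
            ratio = blurred / (reblurred + eps)
            estimate *= fftconvolve(ratio, psf_flipped, mode="same")
        return estimate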
Wille, M-L; Zapf, M; Ruiter, N V; Gemmeke, H; Langton, C M
2015-06-21
The quality of ultrasound computed tomography imaging is primarily determined by the accuracy of ultrasound transit time measurement. A major problem in analysis is the overlap of signals making it difficult to detect the correct transit time. The current standard is to apply a matched-filtering approach to the input and output signals. This study compares the matched-filtering technique with active set deconvolution to derive a transit time spectrum from a coded excitation chirp signal and the measured output signal. The ultrasound wave travels in a direct and a reflected path to the receiver, resulting in an overlap in the recorded output signal. The matched-filtering and deconvolution techniques were applied to determine the transit times associated with the two signal paths. Both techniques were able to detect the two different transit times; while matched-filtering has a better accuracy (0.13 μs versus 0.18 μs standard deviations), deconvolution has a 3.5 times improved side-lobe to main-lobe ratio. A higher side-lobe suppression is important to further improve image fidelity. These results suggest that a future combination of both techniques would provide improved signal detection and hence improved image fidelity.
Receiver function stacks: initial steps for seismic imaging of Cotopaxi volcano, Ecuador
NASA Astrophysics Data System (ADS)
Bishop, J. W.; Lees, J. M.; Ruiz, M. C.
2017-12-01
Cotopaxi volcano is a large andesitic stratovolcano located within 50 km of the Ecuadorean capital of Quito. Cotopaxi most recently erupted, for the first time in 73 years, during August 2015. This eruptive cycle (VEI = 1) featured phreatic explosions and the ejection of an ash column 9 km above the volcano edifice. Following this event, ash covered approximately 500 km² of the surrounding area. Analysis of Multi-GAS data suggests that this eruption was fed from a shallow source. However, stratigraphic evidence surveying the last 800 years of Cotopaxi's activity suggests that there may be a deep magmatic source. To establish a geophysical framework for Cotopaxi's activity, receiver functions were calculated from well-recorded earthquakes detected from April 2015 to December 2015 at 9 permanent broadband seismic stations around the volcano. These events were located, and phase arrivals were manually picked. Radial teleseismic receiver functions were then calculated using an iterative deconvolution technique with a Gaussian width of 2.5. A maximum of 200 iterations was allowed in each deconvolution. Iterations were stopped when either the maximum iteration number was reached or the percent change fell beneath a pre-determined tolerance. Receiver functions were then visually inspected, and those with anomalous pulses before the initial P arrival, or with later peaks larger than the initial P-wave correlated pulse, were discarded. Using these data, initial crustal thickness and slab depth estimates beneath the volcano were obtained. Estimates of the crustal Vp/Vs ratio for the region were also calculated.
NASA Astrophysics Data System (ADS)
Poletto, Flavio; Schleifer, Andrea; Zgauc, Franco; Meneghini, Fabio; Petronio, Lorenzo
2016-12-01
We present the results of a novel borehole-seismic experiment in which we used different types of onshore transient (impulsive and non-impulsive) surface sources together with direct ground-force recordings. The ground-force signals were obtained by baseplate load cells located beneath the sources and by buried soil-stress sensors installed in the very shallow subsurface together with accelerometers. The aim was to characterize the source's emission by its complex impedance, a function of the near-field vibrations and soil stress components, and above all to obtain appropriate deconvolution operators to remove the signature of the sources from the far-field seismic signals. The data analysis shows the differences in the reference measurements utilized to deconvolve the source signature. As downgoing waves, we process the signals of vertical seismic profiles (VSP) recorded in the far-field approximation by an array of permanent geophones cemented at shallow-to-medium depth outside the casing of an instrumented well. We obtain a significant improvement in the waveform of the radiated seismic-vibrator signals deconvolved by ground force, similar to that of the seismograms generated by the impulsive sources, and demonstrate that the results obtained by different sources present low values of their repeatability norm. The comparison demonstrates the potential of the direct ground-force measurement approach to effectively remove the far-field source signature in onshore VSP data and to increase the performance of permanent acquisition installations for time-lapse applications.
Deconvolution of Stark broadened spectra for multi-point density measurements in a flow Z-pinch
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vogman, G. V.; Shumlak, U.
2011-10-15
Stark broadened emission spectra, once separated from other broadening effects, provide a convenient non-perturbing means of making plasma density measurements. A deconvolution technique has been developed to measure plasma densities in the ZaP flow Z-pinch experiment. The ZaP experiment uses sheared flow to mitigate MHD instabilities. The pinches exhibit Stark broadened emission spectra, which are captured at 20 locations using a multi-chord spectroscopic system. Spectra that are time- and chord-integrated are well approximated by a Voigt function. The proposed method simultaneously resolves plasma electron density and ion temperature by deconvolving the spectral Voigt profile into constituent functions: a Gaussian function associated with instrument effects and Doppler broadening by temperature; and a Lorentzian function associated with Stark broadening by electron density. The method uses analytic Fourier transforms of the constituent functions to fit the Voigt profile in the Fourier domain. The method is discussed and compared to a basic least-squares fit. The Fourier transform fitting routine requires fewer fitting parameters and shows promise in being less susceptible to instrumental noise and to contamination from neighboring spectral lines. The method is evaluated and tested using simulated lines and is applied to experimental data for the 229.69 nm C III line from multiple chords to determine plasma density and temperature across the diameter of the pinch. These measurements are used to gain a better understanding of Z-pinch equilibria.
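The Fourier-domain fitting idea admits a compact sketch: the transform of a Voigt profile factorizes into a Gaussian term times a decaying exponential, so the Gaussian width (instrument/Doppler) and Lorentzian width (Stark) can be fitted directly to the magnitude of the line's transform, which is also insensitive to the line centre. The routine below is only an illustration under these assumptions, not the ZaP analysis code; the initial guesses and background handling are placeholders.

    import numpy as np
    from scipy.optimize import curve_fit

    def voigt_ft_model(k, amp, sigma, gamma):
        # Analytic Fourier transform of a Voigt profile: product of the
        # transforms of its Gaussian (sigma) and Lorentzian (gamma) parts.
        return amp * np.exp(-0.5 * (sigma * k) ** 2 - gamma * np.abs(k))

    def fit_voigt_fourier(wavelength, intensity):
        # intensity: background-subtracted line profile on a uniform wavelength grid
        dx = wavelength[1] - wavelength[0]
        spec = np.abs(np.fft.rfft(intensity))        # magnitude is shift-invariant
        k = 2.0 * np.pi * np.fft.rfftfreq(len(intensity), d=dx)
        p0 = (spec[0], dx, dx)                       # crude initial guesses
        (amp, sigma, gamma), _ = curve_fit(voigt_ft_model, k, spec, p0=p0)
        return abs(sigma), abs(gamma)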
Dey, Dipesh K; Guha, Saumyen
2007-02-15
Phospholipid fatty acids (PLFAs) as biomarkers are well established in the literature. A general method based on least-squares approximation (LSA) was developed for the estimation of community structure from the PLFA signature of a mixed population in which the biomarker PLFA signatures of the component species are known. Fatty acid methyl ester (FAME) standards were used as species analogs, and mixtures of the standards were used as representative of the mixed population. The PLFA/FAME signatures were analyzed by gas chromatographic separation, followed by detection with a flame ionization detector (GC-FID). The PLFAs in the signature were quantified as relative weight percent of the total PLFA. The PLFA signatures were analyzed by the models to predict the community structure of the mixture. The LSA model results were compared with the existing "functional group" approach. Both successfully predicted the community structure of mixed populations containing completely unrelated species with uncommon PLFAs. For even a slight intersection in the PLFA signatures of component species, the LSA model produced better results. This was mainly due to the inability of the "functional group" approach to distinguish the relative amounts of a common PLFA coming from more than one species. The performance of the LSA model was influenced by errors in the chromatographic analyses. Suppression (or enhancement) of a component's PLFA signature in the chromatographic analysis of the mixture led to underestimation (or overestimation) of the component's proportion in the mixture by the model. In mixtures of closely related species with common PLFAs, the errors in the common components were adjusted across the species by the model.
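The least-squares idea maps directly onto a standard non-negative least-squares solve; the sketch below, in which the species signatures are assumed to be arranged as columns of a matrix, is only a schematic of the LSA approach, not the authors' code.

    import numpy as np
    from scipy.optimize import nnls

    def community_structure(species_profiles, mixture_profile):
        # species_profiles: (n_plfa, n_species) matrix of relative weight-percent signatures
        # mixture_profile:  (n_plfa,) PLFA signature of the mixed population
        weights, _ = nnls(species_profiles, mixture_profile)  # non-negative least squares
        return weights / weights.sum()                         # proportions summing to one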
Shu, Jie; Dolman, G E; Duan, Jiang; Qiu, Guoping; Ilyas, Mohammad
2016-04-27
Colour is the most important feature used in quantitative immunohistochemistry (IHC) image analysis; IHC is used to provide information relating to aetiology and to confirm malignancy. Statistical modelling is a technique widely used for colour detection in computer vision. We have developed a statistical model of colour detection applicable to the detection of stain colour in digital IHC images. The model was first trained on a large set of colour pixels collected semi-automatically. To speed up the training and detection processes, we removed the luminance (Y) channel of the YCbCr colour space and chose 128 histogram bins, which was found to be the optimal number. A maximum likelihood classifier is used to classify pixels in digital slides into positively or negatively stained pixels automatically. The model-based tool was developed within ImageJ to quantify targets identified using IHC and histochemistry. The purpose of the evaluation was to compare the computer model with human evaluation. Several large datasets were prepared and obtained from human oesophageal cancer, colon cancer and liver cirrhosis with different colour stains. Experimental results demonstrate that the model-based tool achieves more accurate results than colour deconvolution and the CMYK model in the detection of brown colour, and is comparable to colour deconvolution in the detection of pink colour. We have also demonstrated that the proposed model has little inter-dataset variation. A robust and effective statistical model is introduced in this paper. The model-based interactive tool in ImageJ, which can create a visual representation of the statistical model and detect a specified colour automatically, is easy to use and available freely at http://rsb.info.nih.gov/ij/plugins/ihc-toolbox/index.html . Testing of the tool by different users showed only minor inter-observer variations in results.
Multi-limit unsymmetrical MLIBD image restoration algorithm
NASA Astrophysics Data System (ADS)
Yang, Yang; Cheng, Yiping; Chen, Zai-wang; Bo, Chen
2012-11-01
A novel multi-limit unsymmetrical iterative blind deconvolution (MLIBD) algorithm is presented to enhance the performance of adaptive optics image restoration. The algorithm enhances the reliability of iterative blind deconvolution by introducing a bandwidth limit into the frequency domain of the point spread function (PSF), and adopts a dynamic estimate of the PSF support region to improve the convergence speed. The unsymmetrical factor is computed automatically to improve its adaptivity. Image deconvolution experiments comparing Richardson-Lucy IBD and MLIBD were carried out, and the results indicate that the iteration number is reduced by 22.4% and the peak signal-to-noise ratio is improved by 10.18 dB with the MLIBD method. The performance of the MLIBD algorithm is outstanding in the restoration of FK5-857 adaptive optics images and double-star adaptive optics images.
A hybrid method for synthetic aperture ladar phase-error compensation
NASA Astrophysics Data System (ADS)
Hua, Zhili; Li, Hongping; Gu, Yongjian
2009-07-01
As a high-resolution imaging sensor, synthetic aperture ladar (SAL) data contain phase errors whose sources include uncompensated platform motion and atmospheric turbulence distortion. Two previously devised methods, the rank-one phase-error estimation (ROPE) algorithm and iterative blind deconvolution (IBD), are reexamined, and from them a hybrid method that can recover both the images and the PSFs without any a priori information on the PSF is built to speed up the convergence rate through the choice of initialization. When integrated into the spotlight-mode SAL imaging model, all three methods can effectively reduce the phase-error distortion. For each approach, the signal-to-noise ratio, root mean square error and CPU time are computed, from which we can see that the convergence rate of the hybrid method is improved because of a more efficient initialization of the blind deconvolution. Moreover, in a further discussion of the hybrid method, the weight distribution between ROPE and IBD is found to be an important factor that affects the final result of the whole compensation process.
[Application of numerical convolution in in vivo/in vitro correlation research].
Yue, Peng
2009-01-01
This paper introduces the concept and principle of in vivo/in vitro correlation (IVIVC) and convolution/deconvolution methods, and elucidates in detail the convolution strategy and method for calculating the in vivo absorption performance of pharmaceutics from their pharmacokinetic data in Excel, then applies the results to IVIVC research. First, the pharmacokinetic data were fitted with mathematical software to fill in the missing points. Second, the parameters of the optimal fitted input function were determined by a trial-and-error method according to the convolution principle in Excel, under the hypothesis that all the input functions follow Weibull functions. Finally, the IVIVC between the in vivo input function and the in vitro dissolution was studied. In the examples, the application of this method is demonstrated in detail, and its simplicity and effectiveness are shown by comparison with the compartment model method and the deconvolution method. It proves to be a powerful tool for IVIVC research.
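The convolution step itself is easy to express outside a spreadsheet. The sketch below, in Python rather than the Excel worksheet described above and with a hypothetical Weibull parameterization (f_max, td, beta), convolves a Weibull input rate with a unit impulse response to predict the plasma profile that the trial-and-error fit compares against observed data.

    import numpy as np

    def weibull_input_rate(t, f_max, td, beta):
        # Rate of a cumulative Weibull absorption profile.
        cum = f_max * (1.0 - np.exp(-(t / td) ** beta))
        return np.gradient(cum, t)

    def predict_plasma_profile(t, uir, f_max, td, beta):
        # Convolve the input rate with the unit impulse response (UIR) obtained
        # from intravenous or solution pharmacokinetic data; t is uniformly spaced.
        dt = t[1] - t[0]
        rate = weibull_input_rate(t, f_max, td, beta)
        return np.convolve(rate, uir)[: len(t)] * dt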
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nie, K; Yue, N; Jabbour, S
Purpose: To compare three different pharmacokinetic models for analysis of dynamic-contrast-enhanced (DCE)-CT data with respect to different acquisition times and location of the region of interest. Methods: Eight rectal cancer patients with pre-treatment DCE-CTs were included. The dynamic sequence started 4–10 seconds (s) after the injection of contrast agent. The scan included a 110 s acquisition with intervals of 40×1 s + 15×3 s + 4×6 s. An experienced oncologist outlined the tumor region. Hotspots with top-5% enhancement were also identified. Pharmacokinetic analysis was performed using three different models: the deconvolution method, the Patlak model, and the modified Toft's model. Perfusion parameters such as blood flow (BF), blood volume (BV), mean transit time (MTT), permeability-surface-area product (PS), volume transfer constant (Ktrans), and flux rate constant (Kep) were compared with respect to different acquisition times of 45 s, 65 s, 85 s and 105 s. Both hotspot and whole-volume variances were also assessed. The differences were compared using the Wilcoxon matched-pairs test and Bland-Altman plots. Results: Moderate correlation was observed for various perfusion parameters (r=0.56–0.72, p<0.0001) but the Wilcoxon test revealed a significant difference among the three models (P < .001). Significant differences in PS were noted between acquisitions of 45 s versus longer times of 85 s or 105 s (p<0.05) using Patlak but not with the deconvolution method. In addition, measurements varied substantially between whole-volume vs. hotspot analysis. Conclusion: The radiation dose of DCE-CT was on average 1.5 times that of an abdomen/pelvic CT, which is not insubstantial. To take DCE-CT forward as a biomarker in oncology, prospective studies should be carefully designed with the optimal image acquisition and analysis technique. Our study suggested that: (1) different kinetic models are not interchangeable; (2) a 45 s acquisition might not be sufficient for reliable permeability measurement in rectal cancer using the Patlak model, but might be achievable using the deconvolution method; and (3) local variations existed inside the tumor, and both whole-volume-averaged and local-heterogeneity analysis is recommended for future quantitative studies. This work is supported by the National High-tech R&D program for Young Scientists by the Ministry of Science and Technology of China (Grant No. 2015AA020917), Natural Science Foundation of China (NSFC Grant No. 81201091).
An isotopic biogeochemical study of the Green River oil shale
NASA Technical Reports Server (NTRS)
Collister, J. W.; Summons, R. E.; Lichtfouse, E.; Hayes, J. M.
1992-01-01
Thirty-five different samples from three different sulfur cycles were examined in this stratigraphically oriented study of the Shell 22x-1 well (U.S.G.S. C177 core) in the Piceance Basin, Colorado. Carbon isotopic compositions of constituents of Green River bitumens indicate mixing of three main components: products of primary photoautotrophs and their immediate consumers (δ approximately -30‰ vs PDB), products of methanotrophic bacteria (δ approximately -85‰), and products of unknown bacteria (δ approximately -40‰). For individual compounds synthesized by primary producers, δ-values ranged from -28 to -32‰. 13C contents of individual primary products (beta-carotane, steranes, acyclic isoprenoids, tricyclic triterpenoids) were not closely correlated, suggesting diverse origins for these materials. 13C contents of numerous hopanoids were inversely related to sulfur abundance, indicating that they derived both from methanotrophs and from other bacteria, with abundances of methanotrophs depressed when sulfur was plentiful in the paleoenvironment. Gammacerane coeluted with 3beta(CH3),17alpha(H),21beta(H)-hopane, but δ-values could be determined after deconvolution. Gammacerane (δ approximately -25‰) probably derives from a eukaryotic heterotroph grazing on primary materials, while the latter compound (δ approximately -90‰) must derive from methanotrophic organisms. 13C contents of n-alkanes in bitumen differed markedly from those of paraffins generated pyrolytically. Isotopic and quantitative relationships suggest that alkanes released by pyrolysis derived from a resistant biopolymer of eukaryotic origin and that this was a dominant constituent of the total organic carbon.
NASA Astrophysics Data System (ADS)
Navarro, Jorge
The goal of the study presented is to determine the best available nondestructive technique for collecting validation data and for determining the burnup and cooling time of fuel elements on-site at the Advanced Test Reactor (ATR) canal. This study makes a recommendation on the viability of implementing a permanent fuel scanning system at the ATR canal and leads to the full design of a permanent fuel scan system. The study consisted first in determining whether it was possible, and which equipment was necessary, to collect useful spectra from ATR fuel elements at the canal adjacent to the reactor. Once it was established that useful spectra can be obtained at the ATR canal, the next step was to determine which detector and which configuration were better suited to predict burnup and cooling time of fuel elements nondestructively. Three different detectors, High Purity Germanium (HPGe), Lanthanum Bromide (LaBr3), and High Pressure Xenon (HPXe), in two system configurations, above and below the water pool, were used during the study. The data collected and analyzed were used to create burnup and cooling time calibration prediction curves for ATR fuel. The next stage of the study was to determine which of the three detectors tested was better suited for the permanent system. From the spectra taken and the calibration curves obtained, it was determined that although the HPGe detector yielded better results, a detector that could better withstand the harsh environment of the ATR canal was needed. The in-situ nature of the measurements required a rugged, low-maintenance, and easy-to-control fuel scanning system. Based on the ATR canal feasibility measurements and calibration results, it was determined that the LaBr3 detector was the best alternative for canal in-situ measurements; however, in order to enhance the quality of the spectra collected using this scintillator, a deconvolution method was developed. Following the development of the deconvolution method for ATR applications, the technique was tested using one-isotope, multi-isotope, and simulated fuel sources. Burnup calibrations were performed using convoluted and deconvoluted data. The calibration results showed that burnup prediction by this method improves with deconvolution. The final stage of the deconvolution method development was to perform an irradiation experiment in order to create a surrogate fuel source to test the deconvolution method using experimental data. A conceptual design of the fuel scan system is the path forward, using the rugged LaBr3 detector in an above-the-water configuration and deconvolution algorithms.
A MAP blind image deconvolution algorithm with bandwidth over-constrained
NASA Astrophysics Data System (ADS)
Ren, Zhilei; Liu, Jin; Liang, Yonghui; He, Yulong
2018-03-01
We demonstrate a maximum a posteriori (MAP) blind image deconvolution algorithm with a bandwidth over-constraint and total variation (TV) regularization to recover a clear image from AO-corrected images. The point spread functions (PSFs) are estimated with their bandwidth limited to less than the cutoff frequency of the optical system. Our algorithm performs well in avoiding noise amplification. The performance is demonstrated on simulated data.
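One way to read the bandwidth over-constraint is as a projection of the PSF estimate, at each MAP iteration, onto non-negative functions whose spectrum vanishes beyond the optical cutoff. The sketch below illustrates only that projection; the cutoff fraction and normalization are assumptions, and the TV-regularized MAP update itself is not shown.

    import numpy as np

    def enforce_bandwidth(psf_estimate, cutoff_fraction=0.5):
        # Zero spatial frequencies beyond a fraction of the Nyquist limit,
        # then restore positivity and unit sum.
        F = np.fft.fftshift(np.fft.fft2(psf_estimate))
        ny, nx = psf_estimate.shape
        yy, xx = np.ogrid[-(ny // 2): ny - ny // 2, -(nx // 2): nx - nx // 2]
        radius = np.sqrt((xx / (nx / 2.0)) ** 2 + (yy / (ny / 2.0)) ** 2)
        F[radius > cutoff_fraction] = 0.0
        psf = np.real(np.fft.ifft2(np.fft.ifftshift(F)))
        psf = np.clip(psf, 0.0, None)
        return psf / psf.sum()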
Toward Overcoming the Local Minimum Trap in MFBD
2015-07-14
Publications during the first two years of this grant and the reporting period include: A. Cornelio, E. Loli Piccolomini, and J. G. Nagy, Constrained Variable Projection Method for Blind Deconvolution; and A. Cornelio, E. Loli Piccolomini, and J. G. Nagy, Constrained Numerical Optimization Methods for Blind Deconvolution, Numerical Algorithms, volume 65, issue 1.
Zhang, Fang; Wang, Haoyang; Zhang, Li; Zhang, Jing; Fan, Ruojing; Yu, Chongtian; Wang, Wenwen; Guo, Yinlong
2014-10-01
A strategy for suspected-target screening of pesticide residues in complicated matrices was developed using gas chromatography in combination with hybrid quadrupole time-of-flight mass spectrometry (GC-QTOF MS). The screening workflow followed three key steps: initial detection, preliminary identification, and final confirmation. The initial detection of components in a matrix was done by high-resolution mass spectrum deconvolution; the preliminary identification of suspected pesticides was based on a special retention index/mass spectrum (RI/MS) library that contained both the first-stage mass spectra (MS(1) spectra) and retention indices; and the final confirmation was accomplished by accurate mass measurements of representative ions with their response ratios from the MS(1) spectra or representative product ions from the second-stage mass spectra (MS(2) spectra). To evaluate the applicability of the workflow to real samples, three matrices of apple, spinach, and scallion, each spiked with 165 test pesticides at a set of concentrations, were selected as the models. The results showed that the use of high-resolution TOF enabled effective extraction of spectra from noisy chromatograms based on a narrow mass window (5 mDa), with suspected-target compounds identified by similarity matching of deconvoluted full mass spectra and filtering of linear RIs. On average, over 74% of pesticides at 50 ng/mL could be identified using deconvolution and the RI/MS library. Over 80% of pesticides at 5 ng/mL or lower concentrations could be confirmed in each matrix using at least two representative ions with their response ratios from the MS(1) spectra. In addition, the application of product ion spectra was capable of confirming suspected pesticides with specificity for some pesticides in complicated matrices. In conclusion, GC-QTOF MS combined with the RI/MS library appears to be one of the most efficient tools for the analysis of suspected-target pesticide residues in complicated matrices. Copyright © 2014 Elsevier B.V. All rights reserved.
[Glossary of terms used by radiologists in image processing].
Rolland, Y; Collorec, R; Bruno, A; Ramée, A; Morcet, N; Haigron, P
1995-01-01
We give the definitions of 166 words used in image processing. Adaptivity, aliasing, analog-digital converter, analysis, approximation, arc, artifact, artificial intelligence, attribute, autocorrelation, bandwidth, boundary, brightness, calibration, class, classification, classify, centre, cluster, coding, color, compression, contrast, connectivity, convolution, correlation, data base, decision, decomposition, deconvolution, deduction, descriptor, detection, digitization, dilation, discontinuity, discretization, discrimination, disparity, display, distance, distortion, distribution, dynamic, edge, energy, enhancement, entropy, erosion, estimation, event, extrapolation, feature, file, filter, filter floaters, fitting, Fourier transform, frequency, fusion, fuzzy, Gaussian, gradient, graph, gray level, group, growing, histogram, Hough transform, Hounsfield, image, impulse response, inertia, intensity, interpolation, interpretation, invariance, isotropy, iterative, JPEG, knowledge base, label, Laplacian, learning, least squares, likelihood, matching, Markov field, mask, mathematical morphology, merge (to), MIP, median, minimization, model, moiré, moment, MPEG, neural network, neuron, node, noise, norm, normal, operator, optical system, optimization, orthogonal, parametric, pattern recognition, periodicity, photometry, pixel, polygon, polynomial, prediction, pulsation, pyramidal, quantization, raster, reconstruction, recursive, region, rendering, representation space, resolution, restoration, robustness, ROC, thinning, transform, sampling, saturation, scene analysis, segmentation, separable function, sequential, smoothing, spline, split (to), shape, threshold, tree, signal, speckle, spectrum, stationarity, statistical, stochastic, structuring element, support, syntactic, synthesis, texture, truncation, variance, vision, voxel, windowing.
NASA Technical Reports Server (NTRS)
Lester, D. F.; Harvey, P. M.; Joy, M.; Ellis, H. B., Jr.
1986-01-01
Far-infrared continuum studies from the Kuiper Airborne Observatory are described that are designed to fully exploit the small-scale spatial information that this facility can provide. This work gives the clearest picture to date of the structure of galactic and extragalactic star forming regions in the far infrared. Work is presently being done with slit scans taken simultaneously at 50 and 100 microns, yielding one-dimensional data. Scans of sources in different directions have been used to obtain some information on two-dimensional structure. Planned work with linear arrays will allow us to generalize our techniques to two-dimensional image restoration. For faint sources, spatial information at the diffraction limit of the telescope is obtained, while for brighter sources, nonlinear deconvolution techniques have allowed us to improve over the diffraction limit by as much as a factor of four. Information on the details of the color temperature distribution is derived as well. This is made possible by the accuracy with which the instrumental point-source profile (PSP) is determined at both wavelengths. While these two PSPs are different, data at different wavelengths can be compared by proper spatial filtering. Considerable effort has been devoted to implementing deconvolution algorithms. Nonlinear deconvolution methods offer the potential of superresolution, that is, inference of power at spatial frequencies that exceed D/lambda. This potential arises from the algorithm's implicit assumption of positivity of the deconvolved data, a universally justifiable constraint for photon processes. We have tested two nonlinear deconvolution algorithms on our data: the Richardson-Lucy (R-L) method and the Maximum Entropy Method (MEM). The limits of image deconvolution techniques for achieving spatial resolution are addressed.
Nonlinear Simulation of the Tooth Enamel Spectrum for EPR Dosimetry
NASA Astrophysics Data System (ADS)
Kirillov, V. A.; Dubovsky, S. V.
2016-07-01
Software was developed in which initial EPR spectra of tooth enamel were deconvoluted based on nonlinear simulation, line shapes and signal amplitudes in the model initial spectrum were calculated, the regression coefficient was evaluated, and individual spectra were summed. Software validation demonstrated that doses calculated using it agreed excellently with the applied radiation doses and with the doses reconstructed by the method of additive doses.
NASA Astrophysics Data System (ADS)
Sukhanov, D. Ya.; Zav'yalova, K. V.
2018-03-01
The paper represents the induced currents in an electrically conductive object as a superposition of elementary eddy currents. The proposed scanning method requires measurement of only one component of the secondary magnetic field. Reconstruction of the current distribution is performed by deconvolution with regularization. Numerical modeling, supported by field experiments, shows that this approach is of direct practical relevance.
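A schematic of the reconstruction step: the measured field component is deconvolved by the response of a single elementary eddy current using Tikhonov-style regularization in the Fourier domain. The kernel, the grid, and the weight alpha are assumptions for illustration rather than details taken from the paper.

    import numpy as np

    def reconstruct_currents(field_map, kernel, alpha=1e-3):
        # field_map: 2D scan of one secondary magnetic-field component
        # kernel:    response of an elementary eddy current, same shape, centred
        # alpha:     regularization weight, tuned against the noise level
        F = np.fft.fft2(field_map)
        K = np.fft.fft2(np.fft.ifftshift(kernel))
        est = F * np.conj(K) / (np.abs(K) ** 2 + alpha)
        return np.real(np.fft.ifft2(est))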
Wen, C; Wan, W; Li, F H; Tang, D
2015-04-01
The [110] cross-sectional samples of 3C-SiC/Si (001) were observed with a spherical aberration-corrected 300 kV high-resolution transmission electron microscope. Two images that were taken away from the Scherzer focus condition, and therefore did not intuitively represent the projected structures, were utilized for the deconvolution. The principle and procedure of image deconvolution and atomic sort recognition are summarized. The restoration of the defect structure, together with the recognition of Si and C atoms from the experimental images, is illustrated. Structure maps of an intrinsic stacking fault in the SiC region, and of Lomer and 60° shuffle dislocations at the interface, have been obtained at the atomic level. Copyright © 2015 Elsevier Ltd. All rights reserved.
Sheet-scanned dual-axis confocal microscopy using Richardson-Lucy deconvolution.
Wang, D; Meza, D; Wang, Y; Gao, L; Liu, J T C
2014-09-15
We have previously developed a line-scanned dual-axis confocal (LS-DAC) microscope with subcellular resolution suitable for high-frame-rate diagnostic imaging at shallow depths. Due to the loss of confocality along one dimension, the contrast (signal-to-background ratio) of a LS-DAC microscope is deteriorated compared to a point-scanned DAC microscope. However, by using a sCMOS camera for detection, a short oblique light-sheet is imaged at each scanned position. Therefore, by scanning the light sheet in only one dimension, a thin 3D volume is imaged. Both sequential two-dimensional deconvolution and three-dimensional deconvolution are performed on the thin image volume to improve the resolution and contrast of one en face confocal image section at the center of the volume, a technique we call sheet-scanned dual-axis confocal (SS-DAC) microscopy.
NASA Astrophysics Data System (ADS)
Favalli, A.; Furetta, C.; Zaragoza, E. Cruz; Reyes, A.
The aim of this work is to study the main thermoluminescence (TL) characteristics of the inorganic polyminerals extracted from dehydrated Jamaica flower or roselle (Hibiscus sabdariffa L.), belonging to the Malvaceae family, of Mexican origin. The TL emission properties of the polymineral fraction in powder form were studied using the initial rise (IR) method. The complex structure and kinetic parameters of the glow curves were analysed accurately using computerized glow curve deconvolution (CGCD), assuming an exponential distribution of trapping levels. The extension of the IR method to the case of a continuous, exponential distribution of traps is reported, as is the derivation of the TL glow curve deconvolution functions for a continuous trap distribution. CGCD is performed both for the case of a temperature-independent frequency factor s, and for the case in which s is a function of temperature.
NASA Technical Reports Server (NTRS)
Liang, Steven Y.; Dornfeld, David A.; Nickerson, Jackson A.
1987-01-01
The coloring effect on the acoustic emission (AE) signal due to the frequency response of the data acquisition/processing instrumentation may bias the interpretation of AE signal characteristics. In this paper, a frequency domain deconvolution technique, which involves the identification of the instrumentation transfer functions and multiplication of the AE signal spectrum by the inverse of these system functions, has been carried out. In this way, a change in AE signal characteristics can be better interpreted as resulting from a change only in the state of the process. A punch stretching process was used as an example to demonstrate the application of the technique. Results showed that, through the deconvolution, the frequency characteristics of AE signals generated during stretching became more distinctive and can be used more effectively as tools for process monitoring.
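The deconvolution described above amounts to a spectral division by the measured instrumentation response; the sketch below shows one regularized form of that step (the transfer function is assumed to be known from calibration and sampled at the FFT bins, and the water-level eps is an assumed stabilizer).

    import numpy as np

    def remove_instrument_response(ae_signal, transfer_function, eps=1e-3):
        # Multiply the AE spectrum by a stabilized inverse of the system response.
        spectrum = np.fft.rfft(ae_signal)
        h = transfer_function
        corrected = spectrum * np.conj(h) / (np.abs(h) ** 2 + eps)
        return np.fft.irfft(corrected, n=len(ae_signal))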
Fors, Octavi; Núñez, Jorge; Otazu, Xavier; Prades, Albert; Cardinal, Robert D.
2010-01-01
In this paper we show how the techniques of image deconvolution can increase the ability of image sensors, for example CCD imagers, to detect faint stars or faint orbital objects (small satellites and space debris). In the case of faint stars, we show that this benefit is equivalent to doubling the quantum efficiency of the image sensor used, or to increasing the effective telescope aperture by more than 30%, without decreasing the astrometric precision or introducing artificial bias. In the case of orbital objects, the deconvolution technique can double the signal-to-noise ratio of the image, which helps to discover and control dangerous objects such as space debris or lost satellites. The benefits obtained using CCD detectors can be extrapolated to any kind of image sensor. PMID:22294896
NASA Astrophysics Data System (ADS)
Andrianova, Anastasia A.; DiProspero, Thomas; Geib, Clayton; Smoliakova, Irina P.; Kozliak, Evguenii I.; Kubátová, Alena
2018-05-01
The capability to characterize lignin, lignocellulose, and their degradation products is essential for the development of new renewable feedstocks. An electrospray ionization high-resolution time-of-flight mass spectrometry (ESI-HR TOF-MS) method was developed, expanding the lignomics toolkit while targeting the simultaneous detection of low and high molecular weight (MW) lignin species. The effect of a broad range of electrolytes and various ionization conditions on ion formation and ionization effectiveness was studied using a suite of mono-, di-, and triarene lignin model compounds as well as kraft alkali lignin. Contrary to previous studies, the positive ionization mode was found to be more effective for methoxy-substituted arenes and polyphenols, i.e., species of broadly varied MW structurally similar to native lignin. For the first time, we report the effective formation of multiply charged species of lignin, with subsequent mass spectrum deconvolution, in the presence of 100 mmol L-1 formic acid in the positive ESI mode. The developed method enabled the detection of lignin species with an MW between 150 and 9000 Da or higher, depending on the mass analyzer. The obtained Mn and Mw values of 1500 and 2500 Da, respectively, were in good agreement with those determined by gel permeation chromatography. Furthermore, the deconvoluted ESI mass spectrum was similar to that obtained with matrix-assisted laser desorption/ionization (MALDI)-HR TOF-MS, yet featured a higher signal-to-noise ratio. The formation of multiply charged species was confirmed with ion mobility ESI-HR Q-TOF-MS.
Xiong, Naixue; Liu, Ryan Wen; Liang, Maohan; Wu, Di; Liu, Zhao; Wu, Huisi
2017-01-18
Single-image blind deblurring for imaging sensors in the Internet of Things (IoT) is a challenging ill-conditioned inverse problem, which requires regularization techniques to stabilize the image restoration process. The purpose is to recover the underlying blur kernel and latent sharp image from only one blurred image. Under many degraded imaging conditions, the blur kernel could be considered not only spatially sparse, but also piecewise smooth with the support of a continuous curve. By taking advantage of the hybrid sparse properties of the blur kernel, a hybrid regularization method is proposed in this paper to robustly and accurately estimate the blur kernel. The effectiveness of the proposed blur kernel estimation method is enhanced by incorporating both the L1-norm of kernel intensity and the squared L2-norm of the intensity derivative. Once the accurate estimation of the blur kernel is obtained, the original blind deblurring can be simplified to the direct deconvolution of blurred images. To guarantee robust non-blind deconvolution, a variational image restoration model is presented based on the L1-norm data-fidelity term and the total generalized variation (TGV) regularizer of second order. All non-smooth optimization problems related to blur kernel estimation and non-blind deconvolution are effectively handled by using the alternating direction method of multipliers (ADMM)-based numerical methods. Comprehensive experiments on both synthetic and realistic datasets have been implemented to compare the proposed method with several state-of-the-art methods. The experimental comparisons have illustrated the satisfactory imaging performance of the proposed method in terms of quantitative and qualitative evaluations.
Wen, Yanhua; Wei, Yanjun; Zhang, Shumei; Li, Song; Liu, Hongbo; Wang, Fang; Zhao, Yue; Zhang, Dongwei; Zhang, Yan
2017-05-01
Tumour heterogeneity describes the coexistence of divergent tumour cell clones within tumours, which is often caused by underlying epigenetic changes. DNA methylation is commonly regarded as a significant regulator that differs across cells and tissues. In this study, we comprehensively reviewed research progress on the estimation of tumour heterogeneity. Bioinformatics-based analysis of DNA methylation has revealed the evolutionary relationships between breast cancer cell lines and tissues. Further analysis of the DNA methylation profiles in 33 breast cancer-related cell lines identified cell line-specific methylation patterns. Next, we reviewed the computational methods for inferring clonal evolution of tumours from different perspectives and then proposed a deconvolution strategy for modelling the dynamics of cell subclonal populations in breast cancer tissues based on DNA methylation. Further analysis of simulated cancer tissues and real cell lines revealed that this approach exhibits satisfactory performance and relative stability in estimating the composition and proportions of cellular subpopulations. The application of this strategy to breast cancer cases from The Cancer Genome Atlas identified different cellular subpopulations with distinct molecular phenotypes. Moreover, the current and potential future applications of this deconvolution strategy to clinical breast cancer research are discussed, with emphasis placed on the DNA methylation-based recognition of intra-tumour heterogeneity. The wide use of these methods for estimating heterogeneity in further clinical cohorts will improve our understanding of neoplastic progression and the design of therapeutic interventions for treating breast cancer and other malignancies. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
1983-06-01
system, provides a convenient, low-noise, fully parallel method of improving contrast and enhancing structural detail in an image prior to input to a ... directed towards problems in deconvolution, reconstruction from projections, bandlimited extrapolation, and shift-varying deblurring of images ... deconvolution algorithm has been studied with promising results [1] for simulated motion blurs. Future work will focus on noise effects and the extension
2014-02-24
Naval Research Laboratory, Washington, DC 20375-5320. NRL/MR/6110--14-9521. Approved for public release; distribution is unlimited. Chemometric Deconvolution of Continuous Electrokinetic Injection Micellar... Science & Engineering Apprenticeship Program, American Society for Engineering Education, Washington, DC; Kevin Johnson, Navy Technology Center for Safety and...
Enhanced Seismic Imaging of Turbidite Deposits in Chicontepec Basin, Mexico
NASA Astrophysics Data System (ADS)
Chavez-Perez, S.; Vargas-Meleza, L.
2007-05-01
We test, as postprocessing tools, a combination of migration deconvolution and geometric attributes to attack the complex problems of reflector resolution and detection in migrated seismic volumes. Migration deconvolution has been empirically shown to be an effective approach for enhancing the illumination of migrated images, which are blurred versions of the subsurface reflectivity distribution, by decreasing imaging artifacts, improving spatial resolution, and alleviating acquisition footprint problems. We utilize migration deconvolution as a means to improve the quality and resolution of 3D prestack time migrated results from Chicontepec basin, Mexico, a very relevant portion of the producing onshore sector of Pemex, the Mexican petroleum company. Seismic data covers the Agua Fria, Coapechaca, and Tajin fields. It exhibits acquisition footprint problems, migration artifacts and a severe lack of resolution in the target area, where turbidite deposits need to be characterized between major erosional surfaces. Vertical resolution is about 35 m and the main hydrocarbon plays are turbidite beds no more than 60 m thick. We also employ geometric attributes (e.g., coherent energy and curvature), computed after migration deconvolution, to detect and map out depositional features, and help design development wells in the area. Results of this workflow show imaging enhancement and allow us to identify meandering channels and individual sand bodies, previously undistinguishable in the original seismic migrated images.
Dependence of quantitative accuracy of CT perfusion imaging on system parameters
NASA Astrophysics Data System (ADS)
Li, Ke; Chen, Guang-Hong
2017-03-01
Deconvolution is a popular method to calculate parametric perfusion parameters from four-dimensional CT perfusion (CTP) source images. During the deconvolution process, the four-dimensional space is collapsed into a three-dimensional space by removing the temporal dimension, and prior knowledge is often used to suppress noise associated with the process. These additional complexities confound the understanding of the deconvolution-based CTP imaging system and of how its quantitative accuracy depends on the parameters and sub-operations involved in the image formation process. Meanwhile, there has been a strong clinical need to answer this question, as physicians often rely heavily on the quantitative values of perfusion parameters to make diagnostic decisions, particularly during emergent clinical situations (e.g. diagnosis of acute ischemic stroke). The purpose of this work was to develop a theoretical framework that quantitatively relates the quantification accuracy of parametric perfusion parameters to the CTP acquisition and post-processing parameters. This goal was achieved with the help of a cascaded systems analysis for deconvolution-based CTP imaging systems. Based on the cascaded systems analysis, the quantitative relationship between regularization strength, source image noise, arterial input function, and the quantification accuracy of perfusion parameters was established. The theory could potentially be used to guide developments of CTP imaging technology for better quantification accuracy and lower radiation dose.
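For readers unfamiliar with the deconvolution step being modelled, a bare-bones sketch is given below: the tissue time-attenuation curve is deconvolved by the arterial input function with a truncated-SVD regularizer. The threshold lam stands in for the regularization strength whose effect the cascaded-systems analysis quantifies; names and scaling are illustrative only.

    import numpy as np
    from scipy.linalg import toeplitz

    def residue_function(aif, tissue_curve, dt, lam=0.1):
        # Build the lower-triangular convolution matrix from the AIF and invert it
        # with truncated SVD; the flow-scaled residue function is returned.
        n = len(aif)
        A = dt * toeplitz(aif, np.zeros(n))
        U, s, Vt = np.linalg.svd(A)
        s_inv = np.where(s > lam * s.max(), 1.0 / s, 0.0)
        return Vt.T @ (s_inv * (U.T @ tissue_curve))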
NASA Astrophysics Data System (ADS)
Faber, T. L.; Raghunath, N.; Tudorascu, D.; Votaw, J. R.
2009-02-01
Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. Existing correction methods that use known patient motion obtained from tracking devices either require multi-frame acquisitions, detailed knowledge of the scanner, or specialized reconstruction algorithms. A deconvolution algorithm has been developed that alleviates these drawbacks by using the reconstructed image to estimate the original non-blurred image with maximum likelihood expectation maximization (MLEM) techniques. A high-resolution digital phantom was created by shape-based interpolation of the digital Hoffman brain phantom. Three different sets of 20 movements were applied to the phantom. For each frame of the motion, sinograms with attenuation and three levels of noise were simulated and then reconstructed using filtered backprojection. The average of the 20 frames was considered the motion-blurred image, which was restored with the deconvolution algorithm. After correction, contrast increased from a mean of 2.0, 1.8 and 1.4 in the motion-blurred images, for the three increasing amounts of movement, to a mean of 2.5, 2.4 and 2.2. Mean error was reduced by an average of 55% with motion correction. In conclusion, deconvolution can be used to correct motion blur when subject motion is known.
Krudopp, Heimke; Sönnichsen, Frank D; Steffen-Heins, Anja
2015-08-15
The partitioning behavior of paramagnetic nitroxides in dispersed systems can be determined by deconvolution of electron paramagnetic resonance (EPR) spectra, giving results equivalent to those of the validated methods of ultrafiltration (UF) and pulsed-field gradient nuclear magnetic resonance spectroscopy (PFG-NMR). The partitioning behavior of nitroxides of increasing lipophilicity was investigated in anionic, cationic and nonionic micellar systems and in 10 wt% o/w emulsions. Apart from EPR spectra deconvolution, PFG-NMR was used in micellar solutions as a non-destructive approach, while UF is based on separation of a very small volume of the aqueous phase. As a function of their substituent and lipophilicity, the proportions of nitroxides solubilized in the micellar or emulsion interface increased with increasing nitroxide lipophilicity for all emulsifiers used. Comparing the different approaches, EPR deconvolution and UF revealed comparable nitroxide proportions solubilized in the interfaces. Those proportions were higher than those found with PFG-NMR. For the PFG-NMR self-diffusion experiments the reduced nitroxides were used, revealing a high dynamics of hydroxylamines and emulsifiers. Deconvolution of EPR spectra turned out to be the preferred method for measuring the partitioning behavior of paramagnetic molecules, as it enables distinguishing between several populations at their individual solubilization sites. Copyright © 2015 Elsevier Inc. All rights reserved.
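A compact sketch of the spectral deconvolution idea, decomposing a measured EPR spectrum into contributions from two solubilization environments by linear least squares, is given below; the reference spectra and the non-negativity clipping are assumptions, and the actual fitting used in the study may differ.

    import numpy as np

    def partition_fractions(spectrum, ref_aqueous, ref_interface):
        # Fit the measured spectrum as a linear combination of two reference spectra
        # and return the fraction of the probe in each environment.
        A = np.column_stack([ref_aqueous, ref_interface])
        coeffs, *_ = np.linalg.lstsq(A, spectrum, rcond=None)
        coeffs = np.clip(coeffs, 0.0, None)
        return coeffs / coeffs.sum()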
Extraction of near-surface properties for a lossy layered medium using the propagator matrix
Mehta, K.; Snieder, R.; Graizer, V.
2007-01-01
Near-surface properties play an important role in advancing earthquake hazard assessment. Other areas where near-surface properties are crucial include civil engineering and the detection and delineation of potable groundwater. From an exploration point of view, near-surface properties are needed for wavefield separation and for correcting for the local near-receiver structure. It has been shown that these properties can be estimated for a lossless homogeneous medium using the propagator matrix. To estimate the near-surface properties, we apply deconvolution to passive borehole recordings of waves excited by an earthquake. Deconvolution of these incoherent waveforms recorded by the sensors at different depths in the borehole with the recording at the surface results in waves that propagate upwards and downwards along the array. These waves, obtained by deconvolution, can be used to estimate the P- and S-wave velocities near the surface. As opposed to waves obtained by cross-correlation, which represent a filtered version of the sum of the causal and acausal Green's functions between the two receivers, the waves obtained by deconvolution represent the elements of the propagator matrix. Finally, we show analytically the extension of the propagator matrix analysis to a lossy layered medium for the special case of normal incidence. © 2007 The Authors. Journal compilation © 2007 RAS.
NASA Astrophysics Data System (ADS)
Zhou, Q.; Michailovich, O.; Rathi, Y.
2014-03-01
High angular resolution diffusion imaging (HARDI) improves upon more traditional diffusion tensor imaging (DTI) in its ability to resolve the orientations of crossing and branching neural fibre tracts. The HARDI signals are measured over a spherical shell in q-space, and are usually used as an input to q-ball imaging (QBI), which allows estimation of the diffusion orientation distribution functions (ODFs) associated with a given region of interest. Unfortunately, the partial nature of single-shell sampling imposes limits on the estimation accuracy. As a result, the recovered ODFs may not possess sufficient resolution to reveal the orientations of fibre tracts which cross each other at acute angles. A possible solution to the problem of limited resolution of QBI is provided by means of spherical deconvolution, a particular instance of which is sparse deconvolution. However, while capable of yielding high-resolution reconstructions over spatial locations corresponding to white matter, such methods tend to become unstable when applied to anatomical regions with a substantial content of isotropic diffusion. To resolve this problem, a new deconvolution approach is proposed in this paper. Apart from being uniformly stable across the whole brain, the proposed method allows one to quantify the isotropic component of cerebral diffusion, which is known to be a useful diagnostic measure by itself.
Catastrophic Disruption of Comet ISON
NASA Technical Reports Server (NTRS)
Keane, Jacqueline V.; Milam, Stefanie N.; Coulson, Iain M.; Kleyna, Jan T.; Sekanina, Zdenek; Kracht, Rainer; Riesen, Timm-Emmanuel; Meech, Karen J.; Charnley, Steven B.
2016-01-01
We report submillimeter 450 and 850 micron dust continuum observations for comet C/2012 S1 (ISON) obtained at heliocentric distances 0.31-0.08 au prior to perihelion on 2013 November 28 (rh = 0.0125 au). These observations reveal a rapidly varying dust environment in which the dust emission was initially point-like. As ISON approached perihelion, the continuum emission became an elongated dust column spread out over as much as 60 arcsec (greater than 10(exp 5) km) in the anti-solar direction. Deconvolution of the November 28.04 850 micron image reveals numerous distinct clumps consistent with the catastrophic disruption of comet ISON, producing approximately 5.2 × 10(exp 10) kg of submillimeter-sized dust. Orbital computations suggest that the SCUBA-2 emission peak coincides with the comet's residual nucleus.
Influence of deposition temperature on WTiN coatings tribological performance
NASA Astrophysics Data System (ADS)
Londoño-Menjura, R. F.; Ospina, R.; Escobar, D.; Quintero, J. H.; Olaya, J. J.; Mello, A.; Restrepo-Parra, E.
2018-01-01
WTiN films were grown on silicon and stainless-steel substrates using the DC magnetron sputtering technique. The substrate temperature was varied, taking values of 100 °C, 200 °C, 300 °C, and 400 °C. X-ray diffraction analysis allowed us to identify a rock-salt-type face-centered cubic (FCC) structure, with a lattice parameter of approximately 4.2 Å, a relatively low microstrain (deformations at the microscopic level, between 4.7% and 6.7%), and a crystallite size in the nanometer range (11.6 nm-31.5 nm). The C1s, N1s, O1s, Ti2p, W4s, W4p, W4d and W4f narrow spectra were obtained using X-ray photoelectron spectroscopy (XPS), and depending on the substrate temperature, the deconvoluted spectra presented different binding energies. Grain sizes and roughness (approximately 4 nm) of the films were determined using atomic force microscopy. Scratch and pin-on-disc tests were conducted, showing better performance of the film grown at 200 °C. This sample exhibited a lower roughness, coefficient of friction, and wear rate.
Fox, Christopher; Simon, Tom; Simon, Bill; Dempsey, James F.; Kahler, Darren; Palta, Jatinder R.; Liu, Chihray; Yan, Guanghua
2010-01-01
Purpose: Accurate modeling of beam profiles is important for precise treatment planning dosimetry. Calculated beam profiles need to precisely replicate profiles measured during machine commissioning. Finite detector size introduces perturbations into the measured profiles, which, in turn, impact the resulting modeled profiles. The authors investigate a method for extracting the unperturbed beam profiles from those measured during linear accelerator commissioning. Methods: In-plane and cross-plane data were collected for an Elekta Synergy linac at 6 MV using ionization chambers of volume 0.01, 0.04, 0.13, and 0.65 cm3 and a diode of surface area 0.64 mm2. The detectors were orientated with the stem perpendicular to the beam and pointing away from the gantry. Profiles were measured for a 10×10 cm2 field at depths ranging from 0.8 to 25.0 cm and SSDs from 90 to 110 cm. Shaping parameters of a Gaussian response function were obtained relative to the Edge detector. The Gaussian function was deconvolved from the measured ionization chamber data. The Edge detector profile was taken as an approximation to the true profile, to which deconvolved data were compared. Data were also collected with CC13 and Edge detectors for additional fields and energies on an Elekta Synergy, Varian Trilogy, and Siemens Oncor linear accelerator and response functions obtained. Response functions were compared as a function of depth, SSD, and detector scan direction. Variations in the shaping parameter were introduced and the effect on the resulting deconvolution profiles assessed. Results: Up to 10% setup dependence in the Gaussian shaping parameter occurred, for each detector for a particular plane. This translated to less than a ±0.7 mm variation in the 80%–20% penumbral width. For large volume ionization chambers such as the FC65 Farmer type, where the cavity length to diameter ratio is far from 1, the scan direction produced up to a 40% difference in the shaping parameter between in-plane and cross-plane measurements. This is primarily due to the directional difference in penumbral width measured by the FC65 chamber, which can more than double in profiles obtained with the detector stem parallel compared to perpendicular to the scan direction. For the more symmetric CC13 chamber the variation was only 3% between in-plane and cross-plane measurements. Conclusions: The authors have shown that the detector response varies with detector type, depth, SSD, and detector scan direction. In-plane vs cross-plane scanning can require calculation of a direction dependent response function. The effect of a 10% overall variation in the response function, for an ionization chamber, translates to a small deviation in the penumbra from that of the Edge detector measured profile when deconvolved. Due to the uncertainties introduced by deconvolution the Edge detector would be preferable in obtaining an approximation of the true profile, particularly for field sizes where the energy dependence of the diode can be neglected. However, an averaged response function could be utilized to provide a good approximation of the true profile for large ionization chambers and for larger fields for which diode detectors are not recommended. PMID:20229856
NASA Astrophysics Data System (ADS)
Darudi, Ahmad; Bakhshi, Hadi; Asgari, Reza
2015-05-01
In this paper we present the results of image restoration using data taken by a Hartmann sensor. The aberration is measured by a Hartmann sensor in which the object itself is used as the reference. The Point Spread Function (PSF) is then simulated and used for image reconstruction with the Lucy-Richardson technique. A technique is also presented for quantitative evaluation of the Lucy-Richardson deconvolution.
2007-02-28
Iterative Ultrasonic Signal and Image Deconvolution for Estimation of the Complex Medium Response, International Journal of Imaging Systems and...1767-1782, 2006. 31. Z. Mu, R. Plemmons, and P. Santago. Iterative Ultrasonic Signal and Image Deconvolution for Estimation of the Complex...rigorous mathematical and computational research on inverse problems in optical imaging of direct interest to the Army and also the intelligence agencies
Adaptive Optics Image Restoration Based on Frame Selection and Multi-frame Blind Deconvolution
NASA Astrophysics Data System (ADS)
Tian, Yu; Rao, Chang-hui; Wei, Kai
Restricted by the observational conditions and the hardware, adaptive optics can only make a partial correction of optical images blurred by atmospheric turbulence. A post-processing method based on frame selection and multi-frame blind deconvolution is proposed for the restoration of high-resolution adaptive optics images. Frame selection means that the degraded (blurred) images are first screened for participation in the iterative blind deconvolution calculation, which requires no a priori knowledge and uses only a positivity constraint. This method has been applied to the restoration of stellar images observed by the 61-element adaptive optics system installed on the Yunnan Observatory 1.2 m telescope. The experimental results indicate that this method can effectively compensate for the residual errors of the adaptive optics system on the image, and the restored image can reach diffraction-limited quality.
Huang, Yulin; Zha, Yuebo; Wang, Yue; Yang, Jianyu
2015-06-18
Forward-looking radar imaging is a practical and challenging problem for aircraft landing in adverse weather. Deconvolution methods can realize forward-looking imaging, but they often lead to noise amplification in the radar image. In this paper, a forward-looking radar imaging method based on deconvolution is presented for aircraft landing in adverse weather. We first present the theoretical background of the forward-looking radar imaging task and its application to aircraft landing. Then, we convert the forward-looking radar imaging task into a corresponding deconvolution problem, which is solved in the framework of algebraic theory using a truncated singular value decomposition method. The key issue of selecting the truncation parameter is addressed using a generalized cross validation approach. Simulation and experimental results demonstrate that the proposed method is effective in achieving angular resolution enhancement while suppressing noise amplification in forward-looking radar imaging.
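As a rough illustration of the numerical core named in this abstract, truncated SVD inversion with a GCV-selected truncation index, the sketch below deconvolves a one-dimensional synthetic "beam" from two point targets. The Gaussian beam, target positions and noise level are invented for the demo; this is not the authors' radar processing chain.

```python
# Illustrative sketch only: TSVD deconvolution with GCV-based truncation selection.
import numpy as np

def convolution_matrix(kernel, n):
    """Dense matrix A such that A @ x == np.convolve(x, kernel, mode='same')."""
    A = np.zeros((n, n))
    unit = np.zeros(n)
    for j in range(n):
        unit[:] = 0.0
        unit[j] = 1.0
        A[:, j] = np.convolve(unit, kernel, mode="same")
    return A

def tsvd_gcv(A, y):
    """Solve y = A x by TSVD, choosing the truncation index k that minimizes GCV."""
    U, s, Vt = np.linalg.svd(A)
    coeffs = U.T @ y
    n = len(y)
    best_k, best_gcv, best_x = None, np.inf, None
    for k in range(1, len(s)):
        x = Vt[:k].T @ (coeffs[:k] / s[:k])
        residual = np.linalg.norm(A @ x - y) ** 2
        gcv = residual / (n - k) ** 2          # GCV score for a hard-truncation filter
        if gcv < best_gcv:
            best_k, best_gcv, best_x = k, gcv, x
    return best_x, best_k

# Synthetic antenna pattern (Gaussian beam) blurring two point targets
n = 128
beam = np.exp(-0.5 * ((np.arange(n) - n / 2) / 4.0) ** 2)
beam /= beam.sum()
A = convolution_matrix(beam, n)
x_true = np.zeros(n); x_true[50] = 1.0; x_true[70] = 0.7
y = A @ x_true + 0.01 * np.random.default_rng(1).standard_normal(n)
x_hat, k_opt = tsvd_gcv(A, y)
```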
Towards real-time image deconvolution: application to confocal and STED microscopy
Zanella, R.; Zanghirati, G.; Cavicchioli, R.; Zanni, L.; Boccacci, P.; Bertero, M.; Vicidomini, G.
2013-01-01
Although deconvolution can improve the quality of any type of microscope image, the high computational time required has so far limited its widespread adoption. Here we demonstrate the ability of the scaled-gradient-projection (SGP) method to provide accelerated versions of the algorithms most used in microscopy. To achieve further increases in efficiency, we also consider implementations on graphics processing units (GPUs). We test the proposed algorithms on both synthetic and real data from confocal and STED microscopy. Combining the SGP method with the GPU implementation, we achieve speed-up factors from about 25 to 690 with respect to the conventional algorithm. The excellent results obtained on STED microscopy images demonstrate the synergy between super-resolution techniques and image deconvolution. Further, the real-time processing preserves one of the most important properties of STED microscopy, i.e., the ability to provide fast sub-diffraction resolution recordings. PMID:23982127
Removing the echoes from terahertz pulse reflection system and sample
NASA Astrophysics Data System (ADS)
Liu, Haishun; Zhang, Zhenwei; Zhang, Cunlin
2018-01-01
Due to echoes from both the terahertz (THz) pulse reflection system and the sample, the THz primary pulse is distorted. The system echoes are of two types: one preceding the main peak, probably caused by the ultrafast laser pulse, and one behind the primary pulse, caused by the Fabry-Perot (F-P) etalon effect of the detector. We attempt to remove the corresponding echoes using two kinds of deconvolution. A Si wafer of 400 μm thickness was selected as the test sample. First, the double Gaussian filter (DGF) deconvolution method was used to remove the systematic echoes, and then another deconvolution technique was employed to eliminate the two obvious echoes of the sample. The results indicated that although the combination of the two deconvolution techniques could not entirely remove the echoes of the sample and system, the echoes were largely reduced.
Batsoulis, A N; Nacos, M K; Pappas, C S; Tarantilis, P A; Mavromoustakos, T; Polissiou, M G
2004-02-01
Hemicellulose samples were isolated from kenaf (Hibiscus cannabinus L.). Hemicellulosic fractions usually contain a variable percentage of uronic acids. The uronic acid content (expressed in polygalacturonic acid) of the isolated hemicelluloses was determined by diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS) and the curve-fitting deconvolution method. A linear relationship between uronic acids content and the sum of the peak areas at 1745, 1715, and 1600 cm(-1) was established with a high correlation coefficient (0.98). The deconvolution analysis using the curve-fitting method allowed the elimination of spectral interferences from other cell wall components. The above method was compared with an established spectrophotometric method and was found equivalent for accuracy and repeatability (t-test, F-test). This method is applicable in analysis of natural or synthetic mixtures and/or crude substances. The proposed method is simple, rapid, and nondestructive for the samples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Navarro, Jorge
2013-12-01
The goal of this study is to determine the best available non-destructive technique for collecting validation data and for determining the burnup and cooling time of fuel elements onsite at the Advanced Test Reactor (ATR) canal. This study makes a recommendation on the viability of implementing a permanent fuel scanning system at the ATR canal and leads to the full design of a permanent fuel scan system. The study consisted first in determining whether it was possible, and which equipment was necessary, to collect useful spectra from ATR fuel elements at the canal adjacent to the reactor. Once it was established that useful spectra can be obtained at the ATR canal, the next step was to determine which detector and which configuration were better suited to predict burnup and cooling time of fuel elements non-destructively. Three different detectors, High Purity Germanium (HPGe), Lanthanum Bromide (LaBr3), and High Pressure Xenon (HPXe), were used in two system configurations, above and below the water pool. The data collected and analyzed were used to create burnup and cooling time calibration prediction curves for ATR fuel. The next stage of the study was to determine which of the three detectors tested was better suited for the permanent system. From the spectra taken and the calibration curves obtained, it was determined that although the HPGe detector yielded better results, a detector that could better withstand the harsh environment of the ATR canal was needed. The in-situ nature of the measurements required a rugged, low-maintenance, and easy-to-control fuel scanning system. Based on the ATR canal feasibility measurements and calibration results, it was determined that the LaBr3 detector was the best alternative for canal in-situ measurements; however, in order to enhance the quality of the spectra collected using this scintillator, a deconvolution method was developed. Following the development of the deconvolution method for ATR applications, the technique was tested using one-isotope, multi-isotope and simulated fuel sources. Burnup calibrations were performed using convoluted and deconvoluted data. The calibration results showed that burnup prediction by this method improves with deconvolution. The final stage of the deconvolution method development was to perform an irradiation experiment in order to create a surrogate fuel source for testing the deconvolution method with experimental data. A conceptual design of the fuel scan system is the path forward, using the rugged LaBr3 detector in an above-the-water configuration and deconvolution algorithms.
Three-dimensional FLASH Laser Radar Range Estimation via Blind Deconvolution
2009-10-01
scene can result in errors due to several factors including the optical spatial impulse response, detector blurring, photon noise, timing jitter, and...estimation error include spatial blur, detector blurring, noise, timing jitter, and inter-sample targets. Unlike previous research, this paper accounts...for pixel coupling by defining the range image mathematical model as a 2D convolution between the system spatial impulse response and the object (target
Bayesian deconvolution and quantification of metabolites in complex 1D NMR spectra using BATMAN.
Hao, Jie; Liebeke, Manuel; Astle, William; De Iorio, Maria; Bundy, Jacob G; Ebbels, Timothy M D
2014-01-01
Data processing for 1D NMR spectra is a key bottleneck for metabolomic and other complex-mixture studies, particularly where quantitative data on individual metabolites are required. We present a protocol for automated metabolite deconvolution and quantification from complex NMR spectra by using the Bayesian automated metabolite analyzer for NMR (BATMAN) R package. BATMAN models resonances on the basis of a user-controllable set of templates, each of which specifies the chemical shifts, J-couplings and relative peak intensities for a single metabolite. Peaks are allowed to shift position slightly between spectra, and peak widths are allowed to vary by user-specified amounts. NMR signals not captured by the templates are modeled non-parametrically by using wavelets. The protocol covers setting up user template libraries, optimizing algorithmic input parameters, improving prior information on peak positions, quality control and evaluation of outputs. The outputs include relative concentration estimates for named metabolites together with associated Bayesian uncertainty estimates, as well as the fit of the remainder of the spectrum using wavelets. Graphical diagnostics allow the user to examine the quality of the fit for multiple spectra simultaneously. This approach offers a workflow to analyze large numbers of spectra and is expected to be useful in a wide range of metabolomics studies.
NASA Astrophysics Data System (ADS)
Theodorsen, Audun; Garcia, Odd Erik; Kube, Ralph; Labombard, Brian; Terry, Jim
2017-10-01
In the far scrape-off layer (SOL), radial motion of filamentary structures leads to excess transport of particles and heat. Amplitudes and arrival times of these filaments have previously been studied by conditional averaging of single-point measurements from Langmuir probes and Gas Puff Imaging (GPI). Conditional averaging can be problematic: the cutoff for large amplitudes is mostly chosen by convention, the conditional windows used may influence the arrival time distribution, and the amplitudes cannot be separated from a background. Previous work has shown that SOL fluctuations are well described by a stochastic model consisting of a superposition of pulses with fixed shape and randomly distributed amplitudes and arrival times. The model can be formulated as a pulse shape convolved with a train of delta pulses. By choosing a pulse shape consistent with the power spectrum of the fluctuation time series, Richardson-Lucy deconvolution can be used to recover the underlying amplitudes and arrival times of the delta pulses. We apply this technique to both L- and H-mode GPI data from the Alcator C-Mod tokamak. The pulse arrival times are shown to be uncorrelated and uniformly distributed, consistent with a Poisson process, and the amplitude distribution has an exponential tail.
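The deconvolution step described here can be illustrated with a short sketch: a synthetic shot-noise signal (a fixed two-sided exponential pulse convolved with a Poisson train of exponentially distributed amplitudes) is deconvolved by Richardson-Lucy iterations to recover the delta-pulse train. Pulse duration, event rate and iteration count are assumed values, not those of the study.

```python
# Illustrative sketch only: Richardson-Lucy recovery of a delta-pulse train.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy_1d(signal, pulse, n_iter=500, eps=1e-12):
    """Multiplicative Richardson-Lucy iterations for a 1D signal and known pulse shape."""
    estimate = np.full_like(signal, signal.mean())
    pulse_rev = pulse[::-1]
    for _ in range(n_iter):
        predicted = fftconvolve(estimate, pulse, mode="same")
        estimate *= fftconvolve(signal / np.maximum(predicted, eps), pulse_rev, mode="same")
    return estimate

rng = np.random.default_rng(2)
n, dt, tau = 4000, 0.01, 0.5

# Train of delta pulses: uniformly distributed arrival times, exponential amplitudes
events = np.zeros(n)
arrivals = rng.integers(0, n, size=80)
np.add.at(events, arrivals, rng.exponential(1.0, size=80))

# Fixed two-sided exponential pulse shape: fast rise, slower decay
tp = (np.arange(n) - n // 2) * dt
pulse = np.where(tp < 0, np.exp(tp / (0.2 * tau)), np.exp(-tp / tau))
pulse /= pulse.sum()

signal = fftconvolve(events, pulse, mode="same")
recovered = richardson_lucy_1d(signal, pulse)
```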
Dao, Lam; Glancy, Brian; Lucotte, Bertrand; Chang, Lin-Ching; Balaban, Robert S; Hsu, Li-Yueh
2015-01-01
This paper investigates a post-processing approach to correct spatial distortion in two-photon fluorescence microscopy images for vascular network reconstruction. It is aimed at in vivo imaging in large field-of-view, deep-tissue studies of vascular structures. Based on simple geometric modeling of the object of interest, a distortion function is directly estimated from the image volume by deconvolution analysis. This distortion function is then applied to sub-volumes of the image stack to adaptively adjust for spatially varying distortion and reduce image blurring through blind deconvolution. The proposed technique was first evaluated in phantom imaging of fluorescent microspheres that are comparable in size to the underlying capillary vascular structures. The effectiveness of restoring the three-dimensional spherical geometry of the microspheres using the estimated distortion function was compared with an empirically measured point-spread function. Next, the proposed approach was applied to in vivo vascular imaging of mouse skeletal muscle to reduce the image distortion of the capillary structures. We show that the proposed method effectively improves image quality and reduces the spatially varying distortion that occurs in large field-of-view, deep-tissue vascular datasets. The proposed method will help in qualitative interpretation and quantitative analysis of vascular structures from fluorescence microscopy images. PMID:26224257
Bending the Rules: Widefield Microscopy and the Abbe Limit of Resolution
Verdaasdonk, Jolien S.; Stephens, Andrew D.; Haase, Julian; Bloom, Kerry
2014-01-01
One of the most fundamental concepts of microscopy is that of resolution: the ability to clearly distinguish two objects as separate. Recent advances such as structured illumination microscopy (SIM) and point localization techniques including photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM) strive to overcome the inherent limits of resolution of the modern light microscope. These techniques, however, are not always feasible or optimal for live cell imaging. Thus, in this review, we explore three techniques for extracting high resolution data from images acquired on a widefield microscope: deconvolution, model convolution, and Gaussian fitting. Deconvolution is a powerful tool for restoring a blurred image using knowledge of the point spread function (PSF) describing the blurring of light by the microscope, although care must be taken to ensure the accuracy of subsequent quantitative analysis. The process of model convolution also requires knowledge of the PSF to blur a simulated image, which can then be compared to the experimentally acquired data to reach conclusions regarding its geometry and fluorophore distribution. Gaussian fitting is the basis for point localization microscopy, and can also be applied to tracking spot motion over time or measuring spot shape and size. Together, these three methods serve as powerful tools for high-resolution imaging using widefield microscopy. PMID:23893718
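The Gaussian-fitting approach mentioned in this review reduces, in its simplest form, to least-squares fitting of a 2D Gaussian to a small image patch around a spot; the sketch below does this on a synthetic patch with made-up spot parameters, so the function names and numbers are illustrative only.

```python
# Illustrative sketch only: spot localization by 2D Gaussian fitting.
import numpy as np
from scipy.optimize import curve_fit

def gaussian_2d(coords, amplitude, x0, y0, sigma, offset):
    """Isotropic 2D Gaussian plus constant offset, returned flattened for curve_fit."""
    x, y = coords
    return (offset + amplitude *
            np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2.0 * sigma ** 2))).ravel()

# Synthetic 15x15 patch: spot at (7.3, 6.8) pixels plus Poisson-like noise
yy, xx = np.mgrid[0:15, 0:15]
rng = np.random.default_rng(3)
truth = gaussian_2d((xx, yy), 200.0, 7.3, 6.8, 1.5, 10.0).reshape(15, 15)
patch = rng.poisson(truth).astype(float)

p0 = (patch.max() - patch.min(), 7.0, 7.0, 2.0, patch.min())
popt, pcov = curve_fit(gaussian_2d, (xx, yy), patch.ravel(), p0=p0)
x_fit, y_fit, sigma_fit = popt[1], popt[2], popt[3]
```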
NASA Astrophysics Data System (ADS)
de Sousa, Marcelo; Martinez, Diego Stéfani Teodoro; Alves, Oswaldo Luiz
2016-06-01
Mannosylation is a method commonly used to deliver nanomaterials to specific organs and tissues via cellular macrophage uptake. In this work, for the first time, we propose a method that involves binding D-mannose to ethylenediamine to form mannosylated ethylenediamine, which is then coupled to oxidized and purified multiwalled carbon nanotubes. The advantage of this approach is that mannosylated ethylenediamine precipitates in methanol, which greatly facilitates the separation of this product in the synthesis process. Carbon nanotubes were oxidized using concentrated H2SO4 and HNO3 by a conventional reflux method. However, during this oxidation process, the carbon nanotubes generated carboxylated carbonaceous fragments (oxidation debris). These by-products were removed from the oxidized carbon nanotubes to ensure that the functionalization would occur only on the carbon nanotube surface. The coupling of mannosylated ethylenediamine to debris-free carbon nanotubes was accomplished using N-(3-dimethylaminopropyl)-N'-ethylcarbodiimide and N-hydroxysuccinimide. Deconvoluted N1s spectra obtained from X-ray photoelectron spectroscopy gave binding energies of 399.8 and 401.7 eV, which we attributed to the amide and amine groups, respectively, of carbon nanotubes functionalized with mannosylated ethylenediamine. Deconvoluted O1s spectra showed a binding energy of 532.4 eV, which we suggest is caused by an overlap in the binding energies of the aliphatic CO groups of D-mannose and the O=C group of the amide bond. The functionalization degree was approximately 3.4%, according to the thermogravimetric analysis. Scanning electron microscopy demonstrated that an extended carbon nanotube morphology was preserved following the oxidation, purification, and functionalization steps.
Quantitative fluorescence microscopy and image deconvolution.
Swedlow, Jason R
2013-01-01
Quantitative imaging and image deconvolution have become standard techniques for the modern cell biologist because they can form the basis of an increasing number of assays for molecular function in a cellular context. There are two major types of deconvolution approaches--deblurring and restoration algorithms. Deblurring algorithms remove blur but treat a series of optical sections as individual two-dimensional entities and therefore sometimes mishandle blurred light. Restoration algorithms determine an object that, when convolved with the point-spread function of the microscope, could produce the image data. The advantages and disadvantages of these methods are discussed in this chapter. Image deconvolution in fluorescence microscopy has usually been applied to high-resolution imaging to improve contrast and thus detect small, dim objects that might otherwise be obscured. Their proper use demands some consideration of the imaging hardware, the acquisition process, fundamental aspects of photon detection, and image processing. This can prove daunting for some cell biologists, but the power of these techniques has been proven many times in the works cited in the chapter and elsewhere. Their usage is now well defined, so they can be incorporated into the capabilities of most laboratories. A major application of fluorescence microscopy is the quantitative measurement of the localization, dynamics, and interactions of cellular factors. The introduction of green fluorescent protein and its spectral variants has led to a significant increase in the use of fluorescence microscopy as a quantitative assay system. For quantitative imaging assays, it is critical to consider the nature of the image-acquisition system and to validate its response to known standards. Any image-processing algorithms used before quantitative analysis should preserve the relative signal levels in different parts of the image. Copyright © 1998 Elsevier Inc. All rights reserved.
Brost, Eric Edward; Watanabe, Yoichi
2018-06-01
Cerenkov photons are created by high-energy radiation beams used for radiation therapy. In this study, we developed a Cerenkov light dosimetry technique to obtain a two-dimensional dose distribution in a superficial region of medium from the images of Cerenkov photons by using a deconvolution method. An integral equation was derived to represent the Cerenkov photon image acquired by a camera for a given incident high-energy photon beam by using convolution kernels. Subsequently, an equation relating the planar dose at a depth to a Cerenkov photon image using the well-known relationship between the incident beam fluence and the dose distribution in a medium was obtained. The final equation contained a convolution kernel called the Cerenkov dose scatter function (CDSF). The CDSF function was obtained by deconvolving the Cerenkov scatter function (CSF) with the dose scatter function (DSF). The GAMOS (Geant4-based Architecture for Medicine-Oriented Simulations) Monte Carlo particle simulation software was used to obtain the CSF and DSF. The dose distribution was calculated from the Cerenkov photon intensity data using an iterative deconvolution method with the CDSF. The theoretical formulation was experimentally evaluated by using an optical phantom irradiated by high-energy photon beams. The intensity of the deconvolved Cerenkov photon image showed linear dependence on the dose rate and the photon beam energy. The relative intensity showed a field size dependence similar to the beam output factor. Deconvolved Cerenkov images showed improvement in dose profiles compared with the raw image data. In particular, the deconvolution significantly improved the agreement in the high dose gradient region, such as in the penumbra. Deconvolution with a single iteration was found to provide the most accurate solution of the dose. Two-dimensional dose distributions of the deconvolved Cerenkov images agreed well with the reference distributions for both square fields and a multileaf collimator (MLC) defined, irregularly shaped field. The proposed technique improved the accuracy of the Cerenkov photon dosimetry in the penumbra region. The results of this study showed initial validation of the deconvolution method for beam profile measurements in a homogeneous media. The new formulation accounted for the physical processes of Cerenkov photon transport in the medium more accurately than previously published methods. © 2018 American Association of Physicists in Medicine.
Zeng, Dong; Gong, Changfei; Bian, Zhaoying; Huang, Jing; Zhang, Xinyu; Zhang, Hua; Lu, Lijun; Niu, Shanzhou; Zhang, Zhang; Liang, Zhengrong; Feng, Qianjin; Chen, Wufan; Ma, Jianhua
2016-11-21
Dynamic myocardial perfusion computed tomography (MPCT) is a promising technique for quick diagnosis and risk stratification of coronary artery disease. However, one major drawback of dynamic MPCT imaging is the heavy radiation dose to patients due to its dynamic image acquisition protocol. In this work, to address this issue, we present a robust dynamic MPCT deconvolution algorithm via adaptive-weighted tensor total variation (AwTTV) regularization for accurate residue function estimation with low-mA s data acquisitions. For simplicity, the presented method is termed 'MPD-AwTTV'. More specifically, the gains of the AwTTV regularization over the original tensor total variation regularization are from the anisotropic edge property of the sequential MPCT images. To minimize the associative objective function we propose an efficient iterative optimization strategy with fast convergence rate in the framework of an iterative shrinkage/thresholding algorithm. We validate and evaluate the presented algorithm using both digital XCAT phantom and preclinical porcine data. The preliminary experimental results have demonstrated that the presented MPD-AwTTV deconvolution algorithm can achieve remarkable gains in noise-induced artifact suppression, edge detail preservation, and accurate flow-scaled residue function and MPHM estimation as compared with the other existing deconvolution algorithms in digital phantom studies, and similar gains can be obtained in the porcine data experiment.
NASA Astrophysics Data System (ADS)
Zeng, Dong; Gong, Changfei; Bian, Zhaoying; Huang, Jing; Zhang, Xinyu; Zhang, Hua; Lu, Lijun; Niu, Shanzhou; Zhang, Zhang; Liang, Zhengrong; Feng, Qianjin; Chen, Wufan; Ma, Jianhua
2016-11-01
Dynamic myocardial perfusion computed tomography (MPCT) is a promising technique for quick diagnosis and risk stratification of coronary artery disease. However, one major drawback of dynamic MPCT imaging is the heavy radiation dose to patients due to its dynamic image acquisition protocol. In this work, to address this issue, we present a robust dynamic MPCT deconvolution algorithm via adaptive-weighted tensor total variation (AwTTV) regularization for accurate residue function estimation with low-mA s data acquisitions. For simplicity, the presented method is termed ‘MPD-AwTTV’. More specifically, the gains of the AwTTV regularization over the original tensor total variation regularization are from the anisotropic edge property of the sequential MPCT images. To minimize the associative objective function we propose an efficient iterative optimization strategy with fast convergence rate in the framework of an iterative shrinkage/thresholding algorithm. We validate and evaluate the presented algorithm using both digital XCAT phantom and preclinical porcine data. The preliminary experimental results have demonstrated that the presented MPD-AwTTV deconvolution algorithm can achieve remarkable gains in noise-induced artifact suppression, edge detail preservation, and accurate flow-scaled residue function and MPHM estimation as compared with the other existing deconvolution algorithms in digital phantom studies, and similar gains can be obtained in the porcine data experiment.
MetaUniDec: High-Throughput Deconvolution of Native Mass Spectra
NASA Astrophysics Data System (ADS)
Reid, Deseree J.; Diesing, Jessica M.; Miller, Matthew A.; Perry, Scott M.; Wales, Jessica A.; Montfort, William R.; Marty, Michael T.
2018-04-01
The expansion of native mass spectrometry (MS) methods for both academic and industrial applications has created a substantial need for analysis of large native MS datasets. Existing software tools are poorly suited for high-throughput deconvolution of native electrospray mass spectra from intact proteins and protein complexes. The UniDec Bayesian deconvolution algorithm is uniquely well suited for high-throughput analysis due to its speed and robustness but was previously tailored towards individual spectra. Here, we optimized UniDec for deconvolution, analysis, and visualization of large data sets. This new module, MetaUniDec, centers around a hierarchical data format 5 (HDF5) format for storing datasets that significantly improves speed, portability, and file size. It also includes code optimizations to improve speed and a new graphical user interface for visualization, interaction, and analysis of data. To demonstrate the utility of MetaUniDec, we applied the software to analyze automated collision voltage ramps with a small bacterial heme protein and large lipoprotein nanodiscs. Upon increasing collisional activation, bacterial heme-nitric oxide/oxygen binding (H-NOX) protein shows a discrete loss of bound heme, and nanodiscs show a continuous loss of lipids and charge. By using MetaUniDec to track changes in peak area or mass as a function of collision voltage, we explore the energetic profile of collisional activation in an ultra-high mass range Orbitrap mass spectrometer.
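To make the role of the HDF5 container concrete, the snippet below stores a small batch of spectra with per-spectrum collision-voltage metadata in one file using h5py. The group and attribute names are hypothetical and do not reflect MetaUniDec's actual file schema.

```python
# Illustrative sketch only: many spectra plus metadata in one portable HDF5 file.
import numpy as np
import h5py

rng = np.random.default_rng(6)
with h5py.File("collision_ramp.hdf5", "w") as f:
    for i, voltage in enumerate(range(0, 101, 10)):
        grp = f.create_group(f"spectrum_{i:03d}")          # hypothetical group layout
        grp.attrs["collision_voltage"] = voltage
        mz = np.linspace(2000.0, 8000.0, 5000)
        intensity = rng.random(mz.size)                     # placeholder spectrum
        grp.create_dataset("mz", data=mz, compression="gzip")
        grp.create_dataset("intensity", data=intensity, compression="gzip")

# Reading back one spectrum and its metadata
with h5py.File("collision_ramp.hdf5", "r") as f:
    v = f["spectrum_000"].attrs["collision_voltage"]
    mz = f["spectrum_000/mz"][:]
```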
Eddy-Current Sensors with Asymmetrical Point Spread Function
Gajda, Janusz; Stencel, Marek
2016-01-01
This paper concerns a special type of eddy-current sensor in the form of inductive loops. Such sensors are applied in the measuring systems classifying road vehicles. They usually have a rectangular shape with dimensions of 1 × 2 m, and are installed under the surface of the traffic lane. The wide Point Spread Function (PSF) of such sensors causes the information on chassis geometry, contained in the measurement signal, to be strongly averaged. This significantly limits the effectiveness of the vehicle classification. Restoration of the chassis shape, by solving the inverse problem (deconvolution), is also difficult due to the fact that it is ill-conditioned. An original approach to solving this problem is presented in this paper. It is a hardware-based solution and involves the use of inductive loops with an asymmetrical PSF. Laboratory experiments and simulation tests, conducted with models of an inductive loop, confirmed the effectiveness of the proposed solution. In this case, the principle applies that the higher the level of sensor spatial asymmetry, the greater the effectiveness of the deconvolution algorithm. PMID:27782033
Eddy-Current Sensors with Asymmetrical Point Spread Function.
Gajda, Janusz; Stencel, Marek
2016-10-04
This paper concerns a special type of eddy-current sensor in the form of inductive loops. Such sensors are applied in the measuring systems classifying road vehicles. They usually have a rectangular shape with dimensions of 1 × 2 m, and are installed under the surface of the traffic lane. The wide Point Spread Function (PSF) of such sensors causes the information on chassis geometry, contained in the measurement signal, to be strongly averaged. This significantly limits the effectiveness of the vehicle classification. Restoration of the chassis shape, by solving the inverse problem (deconvolution), is also difficult due to the fact that it is ill-conditioned. An original approach to solving this problem is presented in this paper. It is a hardware-based solution and involves the use of inductive loops with an asymmetrical PSF. Laboratory experiments and simulation tests, conducted with models of an inductive loop, confirmed the effectiveness of the proposed solution. In this case, the principle applies that the higher the level of sensor spatial asymmetry, the greater the effectiveness of the deconvolution algorithm.
Haji-Saeed, B; Sengupta, S K; Testorf, M; Goodhue, W; Khoury, J; Woods, C L; Kierstead, J
2006-05-10
We propose and demonstrate a new photorefractive real-time holographic deconvolution technique for adaptive one-way image transmission through aberrating media by means of four-wave mixing. In contrast with earlier methods, which typically required various codings of the exact phase or two-way image transmission for correcting phase distortion, our technique relies on one-way image transmission through the use of exact phase information. Our technique can simultaneously correct both amplitude and phase distortions. We include several forms of image degradation, various test cases, and experimental results. We characterize the performance as a function of the input beam ratios for four metrics: signal-to-noise ratio, normalized root-mean-square error, edge restoration, and peak-to-total energy ratio. In our characterization we use false-color graphic images to display the best beam-intensity ratio two-dimensional region(s) for each of these metrics. Test cases are simulated at the optimal values of the beam-intensity ratios. We demonstrate our results through both experiment and computer simulation.
Berg, Eric; Roncali, Emilie; Hutchcroft, Will; Qi, Jinyi; Cherry, Simon R.
2016-01-01
In a scintillation detector, the light generated in the scintillator by a gamma interaction is converted to photoelectrons by a photodetector and produces a time-dependent waveform, the shape of which depends on the scintillator properties and the photodetector response. Several depth-of-interaction (DOI) encoding strategies have been developed that manipulate the scintillator’s temporal response along the crystal length and therefore require pulse shape discrimination techniques to differentiate waveform shapes. In this work, we demonstrate how maximum likelihood (ML) estimation methods can be applied to pulse shape discrimination to better estimate deposited energy, DOI and interaction time (for time-of-flight (TOF) PET) of a gamma ray in a scintillation detector. We developed likelihood models based on either the estimated detection times of individual photoelectrons or the number of photoelectrons in discrete time bins, and applied to two phosphor-coated crystals (LFS and LYSO) used in a previously developed TOF-DOI detector concept. Compared with conventional analytical methods, ML pulse shape discrimination improved DOI encoding by 27% for both crystals. Using the ML DOI estimate, we were able to counter depth-dependent changes in light collection inherent to long scintillator crystals and recover the energy resolution measured with fixed depth irradiation (~11.5% for both crystals). Lastly, we demonstrated how the Richardson-Lucy algorithm, an iterative, ML-based deconvolution technique, can be applied to the digitized waveforms to deconvolve the photodetector’s single photoelectron response and produce waveforms with a faster rising edge. After deconvolution and applying DOI and time-walk corrections, we demonstrated a 13% improvement in coincidence timing resolution (from 290 to 254 ps) with the LFS crystal and an 8% improvement (323 to 297 ps) with the LYSO crystal. PMID:27295658
Berg, Eric; Roncali, Emilie; Hutchcroft, Will; Qi, Jinyi; Cherry, Simon R
2016-11-01
In a scintillation detector, the light generated in the scintillator by a gamma interaction is converted to photoelectrons by a photodetector and produces a time-dependent waveform, the shape of which depends on the scintillator properties and the photodetector response. Several depth-of-interaction (DOI) encoding strategies have been developed that manipulate the scintillator's temporal response along the crystal length and therefore require pulse shape discrimination techniques to differentiate waveform shapes. In this work, we demonstrate how maximum likelihood (ML) estimation methods can be applied to pulse shape discrimination to better estimate deposited energy, DOI and interaction time (for time-of-flight (TOF) PET) of a gamma ray in a scintillation detector. We developed likelihood models based on either the estimated detection times of individual photoelectrons or the number of photoelectrons in discrete time bins, and applied to two phosphor-coated crystals (LFS and LYSO) used in a previously developed TOF-DOI detector concept. Compared with conventional analytical methods, ML pulse shape discrimination improved DOI encoding by 27% for both crystals. Using the ML DOI estimate, we were able to counter depth-dependent changes in light collection inherent to long scintillator crystals and recover the energy resolution measured with fixed depth irradiation (~11.5% for both crystals). Lastly, we demonstrated how the Richardson-Lucy algorithm, an iterative, ML-based deconvolution technique, can be applied to the digitized waveforms to deconvolve the photodetector's single photoelectron response and produce waveforms with a faster rising edge. After deconvolution and applying DOI and time-walk corrections, we demonstrated a 13% improvement in coincidence timing resolution (from 290 to 254 ps) with the LFS crystal and an 8% improvement (323 to 297 ps) with the LYSO crystal.
NASA Astrophysics Data System (ADS)
Špiclin, Žiga; Bürmen, Miran; Pernuš, Franjo; Likar, Boštjan
2012-03-01
Spatial resolution of hyperspectral imaging systems can vary significantly due to axial optical aberrations that originate from wavelength-induced index-of-refraction variations of the imaging optics. For systems that have a broad spectral range, the spatial resolution will vary significantly both with respect to the acquisition wavelength and with respect to the spatial position within each spectral image. Variations of the spatial resolution can be effectively characterized as part of the calibration procedure by a local image-based estimation of the point-spread function (PSF) of the hyperspectral imaging system. The estimated PSF can then be used in image deconvolution methods to improve the spatial resolution of the spectral images. We estimated the PSFs from spectral images of a line-grid geometric calibration target. From individual line segments of the line grid, the PSF was obtained by a non-parametric estimation procedure that used an orthogonal series representation of the PSF. By using the non-parametric estimation procedure, the PSFs were estimated at different spatial positions and at different wavelengths. The variations of the spatial resolution were characterized by the radius and the full-width at half-maximum of each PSF and by the modulation transfer function, computed from images of a USAF1951 resolution target. The estimation and characterization of the PSFs and the image deconvolution based spatial resolution enhancement were tested on images obtained by a hyperspectral imaging system with an acousto-optic tunable filter in the visible spectral range. The results demonstrate that the spatial resolution of the acquired spectral images can be significantly improved using the estimated PSFs and image deconvolution methods.
Hybrid Imaging for Extended Depth of Field Microscopy
NASA Astrophysics Data System (ADS)
Zahreddine, Ramzi Nicholas
An inverse relationship exists in optical systems between the depth of field (DOF) and the minimum resolvable feature size. This trade-off is especially detrimental in high numerical aperture microscopy systems where resolution is pushed to the diffraction limit resulting in a DOF on the order of 500 nm. Many biological structures and processes of interest span over micron scales resulting in significant blurring during imaging. This thesis explores a two-step computational imaging technique known as hybrid imaging to create extended DOF (EDF) microscopy systems with minimal sacrifice in resolution. In the first step a mask is inserted at the pupil plane of the microscope to create a focus invariant system over 10 times the traditional DOF, albeit with reduced contrast. In the second step the contrast is restored via deconvolution. Several EDF pupil masks from the literature are quantitatively compared in the context of biological microscopy. From this analysis a new mask is proposed, the incoherently partitioned pupil with binary phase modulation (IPP-BPM), that combines the most advantageous properties from the literature. Total variation regularized deconvolution models are derived for the various noise conditions and detectors commonly used in biological microscopy. State of the art algorithms for efficiently solving the deconvolution problem are analyzed for speed, accuracy, and ease of use. The IPP-BPM mask is compared with the literature and shown to have the highest signal-to-noise ratio and lowest mean square error post-processing. A prototype of the IPP-BPM mask is fabricated using a combination of 3D femtosecond glass etching and standard lithography techniques. The mask is compared against theory and demonstrated in biological imaging applications.
Liu, Xiaozheng; Yuan, Zhenming; Guo, Zhongwei; Xu, Dongrong
2015-05-01
Diffusion tensor imaging is widely used for studying neural fiber trajectories in white matter and for quantifying changes in tissue using diffusion properties at each voxel in the brain. To better model the nature of crossing fibers within complex architectures, rather than using a simplified tensor model that assumes only a single fiber direction at each image voxel, a model mixing multiple diffusion tensors is used to profile diffusion signals from high angular resolution diffusion imaging (HARDI) data. Based on the HARDI signal and a multiple-tensor model, spherical deconvolution methods have been developed to overcome the limitations of the diffusion tensor model when resolving crossing fibers. The Richardson-Lucy algorithm is a popular spherical deconvolution method used in previous work. However, it is based on a Gaussian distribution, while HARDI data are always very noisy and follow a Rician distribution. This work aims to present a novel solution to address these issues. By simultaneously considering both the Rician bias and the neighbor correlation in HARDI data, the authors propose a localized Richardson-Lucy (LRL) algorithm to estimate fiber orientations from HARDI data. The proposed method can simultaneously reduce noise and correct the Rician bias. The mean angular error (MAE) between the estimated fiber orientation distribution (FOD) field and the reference FOD field was computed to examine whether the proposed LRL algorithm offered any advantage over the conventional RL algorithm at various levels of noise. The normalized mean squared error (NMSE) was also computed to measure the similarity between the true FOD field and the estimated FOD field. For MAE comparisons, the proposed LRL approach obtained the best results in most cases at different levels of SNR and b-values. For NMSE comparisons, the proposed LRL approach obtained the best results in most cases at b-value = 3000 s/mm(2), which is the recommended scheme for HARDI data acquisition. In addition, the FOD fields estimated by the proposed LRL approach in fiber crossing regions using real data sets also showed fiber structures which agreed with common knowledge about these regions. The novel spherical deconvolution method for improved accuracy in investigating crossing fibers can simultaneously reduce noise and correct Rician bias. With the noise smoothed and the bias corrected, this algorithm is especially suitable for estimation of fiber orientations in HARDI data. Experimental results using both synthetic and real imaging data demonstrated the success and effectiveness of the proposed LRL algorithm.
NASA Astrophysics Data System (ADS)
Olurin, Oluwaseun Tolutope
2017-12-01
Interpretation of high-resolution aeromagnetic data of Ilesha and its environs within the basement complex of Southwestern Nigeria was carried out in this study. The study area is delimited by geographic latitudes 7°30'-8°00'N and longitudes 4°30'-5°00'E. The investigation was carried out using Euler deconvolution on filtered digitised total magnetic data (Sheet Number 243) to delineate geological structures within the area under consideration. The digitised airborne magnetic data, acquired in 2009, were obtained from the archives of the Nigeria Geological Survey Agency (NGSA). The airborne magnetic data were filtered, processed and enhanced; the resultant data were subjected to qualitative and quantitative magnetic interpretation, geometry and depth-weighting analyses across the study area using the Euler deconvolution control file in Oasis montaj software. The total magnetic intensity in the field ranged from -77.7 to 139.7 nT. The total magnetic field intensities reveal high-magnitude magnetic intensity values (high-amplitude anomalies) and magnetic lows (low-amplitude magnetic anomalies) in the area under consideration. The study area is characterised by high intensity correlated with lithological variation in the basement, with a sharp contrast arising from the difference in magnetic susceptibility between the crystalline and sedimentary rocks. The reduced-to-equator (RTE) map is characterised by high-frequency, short-wavelength, small-size, weak-intensity, sharp, low-amplitude and nearly irregularly shaped anomalies, which may be due to near-surface sources such as shallow geologic units and cultural features. The Euler deconvolution solutions indicate a generally undulating basement, with depths ranging from -500 to 1000 m. The Euler deconvolution results show that the basement relief is generally gentle and flat, lying within the basement terrain.
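For orientation, the window-by-window least-squares solve behind standard 3D Euler deconvolution can be written compactly as below; the synthetic 1/r anomaly, grid and structural index are assumptions for illustration, and the sketch is not the Oasis montaj implementation.

```python
# Illustrative sketch only: one-window Euler deconvolution by least squares.
import numpy as np

def euler_window_solution(x, y, z, T, Tx, Ty, Tz, structural_index):
    """Least-squares solution of Euler's homogeneity equation
    (x - x0)*Tx + (y - y0)*Ty + (z - z0)*Tz = N*(B - T)
    for the source position (x0, y0, z0) and regional background B."""
    N = structural_index
    A = np.column_stack([Tx, Ty, Tz, N * np.ones_like(T)])
    rhs = x * Tx + y * Ty + z * Tz + N * T
    solution, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    x0, y0, z0, background = solution
    return x0, y0, z0, background

# Synthetic test: a 1/r "pole" anomaly buried at (500, 500, -200) m, structural index 1.
xg, yg = np.meshgrid(np.linspace(0, 1000, 41), np.linspace(0, 1000, 41))
x, y, z = xg.ravel(), yg.ravel(), np.zeros(xg.size)      # observations on the z = 0 plane
xs, ys, zs = 500.0, 500.0, -200.0
r = np.sqrt((x - xs) ** 2 + (y - ys) ** 2 + (z - zs) ** 2)
T = 1.0 / r
Tx, Ty, Tz = -(x - xs) / r ** 3, -(y - ys) / r ** 3, -(z - zs) / r ** 3
print(euler_window_solution(x, y, z, T, Tx, Ty, Tz, structural_index=1.0))
# -> approximately (500, 500, -200, 0)
```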
Estimating Fluctuating Pressures From Distorted Measurements
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A.; Leondes, Cornelius T.
1994-01-01
Two algorithms extract estimates of time-dependent input (upstream) pressures from the outputs of pressure sensors located at the downstream ends of pneumatic tubes. The algorithms effect deconvolutions that account for the distorting effects of the tube upon the pressure signal. Distortion of pressure measurements by pneumatic tubes is also discussed in "Distortion of Pressure Signals in Pneumatic Tubes" (ARC-12868). The varying input pressure is estimated from the measured time-varying output pressure by one of two deconvolution algorithms that take account of measurement noise. The algorithms are based on minimum-covariance (Kalman filtering) theory.
Fourier Deconvolution Methods for Resolution Enhancement in Continuous-Wave EPR Spectroscopy.
Reed, George H; Poyner, Russell R
2015-01-01
An overview of resolution enhancement of conventional, field-swept, continuous-wave electron paramagnetic resonance spectra using Fourier transform-based deconvolution methods is presented. Basic steps that are involved in resolution enhancement of calculated spectra using an implementation based on complex discrete Fourier transform algorithms are illustrated. Advantages and limitations of the method are discussed. An application to an experimentally obtained spectrum is provided to illustrate the power of the method for resolving overlapped transitions. © 2015 Elsevier Inc. All rights reserved.
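A generic sketch of Fourier (self-)deconvolution of the kind surveyed in this chapter is given below: the spectrum is divided, in the conjugate domain, by the transform of an assumed Lorentzian broadening and re-apodized with a narrower Gaussian. The linewidths and the two-line test spectrum are invented for illustration and are not taken from the chapter.

```python
# Illustrative sketch only: Fourier deconvolution for resolution enhancement.
import numpy as np

def fourier_deconvolve(spectrum, dB, lorentz_width, gauss_width):
    """Divide out a Lorentzian broadening (FWHM lorentz_width) in the conjugate domain
    and re-apodize with a Gaussian of FWHM gauss_width to limit noise amplification."""
    n = len(spectrum)
    F = np.fft.rfft(spectrum)
    t = np.fft.rfftfreq(n, d=dB)                                   # conjugate axis
    lorentz_decay = np.exp(-np.pi * lorentz_width * t)             # FT of the Lorentzian
    gauss_apod = np.exp(-(np.pi * gauss_width * t) ** 2 / (4.0 * np.log(2.0)))
    return np.fft.irfft(F * gauss_apod / lorentz_decay, n=n)

def lorentzian(B, B0, fwhm):
    return (fwhm / 2) ** 2 / ((B - B0) ** 2 + (fwhm / 2) ** 2)

# Two overlapping Lorentzian lines separated by less than their width
B = np.linspace(-50.0, 50.0, 2048)                                 # field axis, arbitrary units
lw = 4.0
spectrum = lorentzian(B, -1.5, lw) + 0.8 * lorentzian(B, 1.5, lw)
enhanced = fourier_deconvolve(spectrum, dB=B[1] - B[0],
                              lorentz_width=lw, gauss_width=1.5)
```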
Characterization of Flap Edge Noise Radiation from a High-Fidelity Airframe Model
NASA Technical Reports Server (NTRS)
Humphreys, William M., Jr.; Khorrami, Mehdi R.; Lockard, David P.; Neuhart, Dan H.; Bahr, Christopher J.
2015-01-01
The results of an experimental study of the noise generated by a baseline high-fidelity airframe model are presented. The test campaign was conducted in the open-jet test section of the NASA Langley 14- by 22-foot Subsonic Tunnel on an 18%-scale, semi-span Gulfstream airframe model incorporating a trailing edge flap and main landing gear. Unsteady surface pressure measurements were obtained from a series of sensors positioned along the two flap edges, and far field acoustic measurements were obtained using a 97-microphone phased array that viewed the pressure side of the airframe. The DAMAS array deconvolution method was employed to determine the locations and strengths of relevant noise sources in the vicinity of the flap edges and the landing gear. A Coherent Output Power (COP) spectral method was used to couple the unsteady surface pressures measured along the flap edges with the phased array output. The results indicate that outboard flap edge noise is dominated by the flap bulb seal cavity with very strong COP coherence over an approximate model-scale frequency range of 1 to 5 kHz observed between the array output and those unsteady pressure sensors nearest the aft end of the cavity. An examination of experimental COP spectra for the inboard flap proved inconclusive, most likely due to a combination of coherence loss caused by decorrelation of acoustic waves propagating through the thick wind tunnel shear layer and contamination of the spectra by tunnel background noise at lower frequencies. Directivity measurements obtained from integration of DAMAS pressure-squared values over defined geometric zones around the model show that the baseline flap and landing gear are only moderately directional as a function of polar emission angle.
Texas two-step: a framework for optimal multi-input single-output deconvolution.
Neelamani, Ramesh; Deffenbaugh, Max; Baraniuk, Richard G
2007-11-01
Multi-input single-output deconvolution (MISO-D) aims to extract a deblurred estimate of a target signal from several blurred and noisy observations. This paper develops a new two step framework--Texas Two-Step--to solve MISO-D problems with known blurs. Texas Two-Step first reduces the MISO-D problem to a related single-input single-output deconvolution (SISO-D) problem by invoking the concept of sufficient statistics (SSs) and then solves the simpler SISO-D problem using an appropriate technique. The two-step framework enables new MISO-D techniques (both optimal and suboptimal) based on the rich suite of existing SISO-D techniques. In fact, the properties of SSs imply that a MISO-D algorithm is mean-squared-error optimal if and only if it can be rearranged to conform to the Texas Two-Step framework. Using this insight, we construct new wavelet- and curvelet-based MISO-D algorithms with asymptotically optimal performance. Simulated and real data experiments verify that the framework is indeed effective.
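A minimal sketch of the two-step idea, under the simplifying assumptions of known blurs and white Gaussian noise: the K channels are first collapsed into a single matched-filter combination (the sufficient-statistic channel), and the resulting SISO problem is then solved with a plain Wiener-style deconvolution rather than the wavelet- or curvelet-based estimators developed in the paper.

```python
import numpy as np

def miso_two_step(observations, blurs, noise_var=1e-3):
    """observations: list of K blurred, noisy signals; blurs: list of K kernels
    zero-padded to the same length. Step 1 collapses the channels into one;
    step 2 is a simple Wiener-style SISO deconvolution of that channel."""
    Y = [np.fft.fft(y) for y in observations]
    H = [np.fft.fft(h) for h in blurs]
    # Step 1: sufficient-statistic channel (matched-filter combination)
    num = sum(np.conj(Hk) * Yk for Hk, Yk in zip(H, Y))
    Heff = sum(np.abs(Hk) ** 2 for Hk in H)
    # Step 2: SISO deconvolution of the combined channel
    X = num / (Heff + noise_var)
    return np.fft.ifft(X).real
```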
Voigt deconvolution method and its applications to pure oxygen absorption spectrum at 1270 nm band.
Al-Jalali, Muhammad A; Aljghami, Issam F; Mahzia, Yahia M
2016-03-15
Experimental spectral lines of pure oxygen at the 1270 nm band were analyzed by a Voigt deconvolution method. The method gave a total Voigt profile, which arises from two overlapping bands. Deconvolution of the total Voigt profile leads to two Voigt profiles, the first resulting from the O2 dimol at the 1264 nm band envelope, and the second from the O2 monomer at the 1268 nm band envelope. In addition, the Voigt profile itself is the convolution of Lorentzian and Gaussian distributions. Competition between thermal and collisional effects was clearly observed through the competition between the Gaussian and Lorentzian widths for each band envelope. The Voigt full width at half maximum (Voigt FWHM) for each line and the ratio between the Lorentzian and Gaussian widths (Γ_L/Γ_G) were investigated. The applied pressures were 1, 2, 3, 4, 5, and 8 bar, while the temperatures were 298 K, 323 K, 348 K, and 373 K. Copyright © 2015 Elsevier B.V. All rights reserved.
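A minimal sketch of how such a two-component Voigt decomposition could be fit, assuming illustrative starting centers near 1264 nm and 1268 nm and using scipy's voigt_profile; all parameter values are placeholders, not the paper's results.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import voigt_profile

def two_voigt(x, a1, c1, s1, g1, a2, c2, s2, g2):
    """Sum of two Voigt components: amplitudes a, centers c,
    Gaussian widths s (sigma), Lorentzian widths g (gamma)."""
    return (a1 * voigt_profile(x - c1, s1, g1) +
            a2 * voigt_profile(x - c2, s2, g2))

def fit_two_bands(wavelength_nm, absorbance):
    # Illustrative starting guesses near the dimol and monomer band centers
    p0 = [1.0, 1264.0, 1.0, 1.0, 1.0, 1268.0, 1.0, 1.0]
    popt, _ = curve_fit(two_voigt, wavelength_nm, absorbance, p0=p0)
    # Lorentzian-to-Gaussian width ratio for each component
    ratio1, ratio2 = popt[3] / popt[2], popt[7] / popt[6]
    return popt, ratio1, ratio2
```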
Hom, Erik F. Y.; Marchis, Franck; Lee, Timothy K.; Haase, Sebastian; Agard, David A.; Sedat, John W.
2011-01-01
We describe an adaptive image deconvolution algorithm (AIDA) for myopic deconvolution of multi-frame and three-dimensional data acquired through astronomical and microscopic imaging. AIDA is a reimplementation and extension of the MISTRAL method developed by Mugnier and co-workers and shown to yield object reconstructions with excellent edge preservation and photometric precision [J. Opt. Soc. Am. A 21, 1841 (2004)]. Written in Numerical Python with calls to a robust constrained conjugate gradient method, AIDA has significantly improved run times over the original MISTRAL implementation. Included in AIDA is a scheme to automatically balance maximum-likelihood estimation and object regularization, which significantly decreases the amount of time and effort needed to generate satisfactory reconstructions. We validated AIDA using synthetic data spanning a broad range of signal-to-noise ratios and image types and demonstrated the algorithm to be effective for experimental data from adaptive optics–equipped telescope systems and wide-field microscopy. PMID:17491626
A novel SURE-based criterion for parametric PSF estimation.
Xue, Feng; Blu, Thierry
2015-02-01
We propose an unbiased estimate of a filtered version of the mean squared error--the blur-SURE (Stein's unbiased risk estimate)--as a novel criterion for estimating an unknown point spread function (PSF) from the degraded image only. The PSF is obtained by minimizing this new objective functional over a family of Wiener processings. Based on this estimated blur kernel, we then perform nonblind deconvolution using our recently developed algorithm. The SURE-based framework is exemplified with a number of parametric PSF, involving a scaling factor that controls the blur size. A typical example of such parametrization is the Gaussian kernel. The experimental results demonstrate that minimizing the blur-SURE yields highly accurate estimates of the PSF parameters, which also result in a restoration quality that is very similar to the one obtained with the exact PSF, when plugged into our recent multi-Wiener SURE-LET deconvolution algorithm. The highly competitive results obtained outline the great potential of developing more powerful blind deconvolution algorithms based on SURE-like estimates.
Zunder, Eli R.; Finck, Rachel; Behbehani, Gregory K.; Amir, El-ad D.; Krishnaswamy, Smita; Gonzalez, Veronica D.; Lorang, Cynthia G.; Bjornson, Zach; Spitzer, Matthew H.; Bodenmiller, Bernd; Fantl, Wendy J.; Pe’er, Dana; Nolan, Garry P.
2015-01-01
Mass-tag cell barcoding (MCB) labels individual cell samples with unique combinatorial barcodes, after which they are pooled for processing and measurement as a single multiplexed sample. The MCB method eliminates variability between samples in antibody staining and instrument sensitivity, reduces antibody consumption, and shortens instrument measurement time. Here, we present an optimized MCB protocol with several improvements over previously described methods. The use of palladium-based labeling reagents expands the number of measurement channels available for mass cytometry and reduces interference with lanthanide-based antibody measurement. An error-detecting combinatorial barcoding scheme allows cell doublets to be identified and removed from the analysis. A debarcoding algorithm that is single cell-based rather than population-based improves the accuracy and efficiency of sample deconvolution. This debarcoding algorithm has been packaged into software that allows rapid and unbiased sample deconvolution. The MCB procedure takes 3–4 h, not including sample acquisition time of ~1 h per million cells. PMID:25612231
Automated processing for proton spectroscopic imaging using water reference deconvolution.
Maudsley, A A; Wu, Z; Meyerhoff, D J; Weiner, M W
1994-06-01
Automated formation of MR spectroscopic images (MRSI) is necessary before routine application of these methods is possible for in vivo studies; however, this task is complicated by the presence of spatially dependent instrumental distortions and the complex nature of the MR spectrum. A data processing method is presented for completely automated formation of in vivo proton spectroscopic images, and applied for analysis of human brain metabolites. This procedure uses the water reference deconvolution method (G. A. Morris, J. Magn. Reson. 80, 547(1988)) to correct for line shape distortions caused by instrumental and sample characteristics, followed by parametric spectral analysis. Results for automated image formation were found to compare favorably with operator dependent spectral integration methods. While the water reference deconvolution processing was found to provide good correction of spatially dependent resonance frequency shifts, it was found to be susceptible to errors for correction of line shape distortions. These occur due to differences between the water reference and the metabolite distributions.
DECONV-TOOL: An IDL based deconvolution software package
NASA Technical Reports Server (NTRS)
Varosi, F.; Landsman, W. B.
1992-01-01
There are a variety of algorithms for deconvolution of blurred images, each having its own criteria or statistic to be optimized in order to estimate the original image data. Using the Interactive Data Language (IDL), we have implemented the Maximum Likelihood, Maximum Entropy, Maximum Residual Likelihood, and sigma-CLEAN algorithms in a unified environment called DeConv_Tool. Most of the algorithms have as their goal the optimization of statistics such as standard deviation and mean of residuals. Shannon entropy, log-likelihood, and chi-square of the residual auto-correlation are computed by DeConv_Tool for the purpose of determining the performance and convergence of any particular method and comparisons between methods. DeConv_Tool allows interactive monitoring of the statistics and the deconvolved image during computation. The final results, and optionally, the intermediate results, are stored in a structure convenient for comparison between methods and review of the deconvolution computation. The routines comprising DeConv_Tool are available via anonymous FTP through the IDL Astronomy User's Library.
Deconvolutions based on singular value decomposition and the pseudoinverse: a guide for beginners.
Hendler, R W; Shrager, R I
1994-01-01
Singular value decomposition (SVD) is deeply rooted in the theory of linear algebra, and because of this is not readily understood by a large group of researchers who could profit from its application. In this paper, we discuss the subject on a level that should be understandable to scientists who are not well versed in linear algebra. However, because it is necessary that certain key concepts in linear algebra be appreciated in order to comprehend what is accomplished by SVD, we present the section, 'Bare basics of linear algebra'. This is followed by a discussion of the theory of SVD. Next we present step-by-step examples to illustrate how SVD is applied to deconvolute a titration involving a mixture of three pH indicators. One noiseless case is presented as well as two cases where either a fixed or varying noise level is present. Finally, we discuss additional deconvolutions of mixed spectra based on the use of the pseudoinverse.
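To accompany this tutorial-style discussion, here is a minimal numpy sketch of the pseudoinverse step: recovering concentration profiles of known component spectra (for example, three pH indicators) from measured mixture spectra via a truncated SVD. The matrix names and the truncation rule are illustrative assumptions, not the paper's worked examples.

```python
import numpy as np

def svd_pseudoinverse_deconvolve(A, D, rank=None, rcond=1e-10):
    """A: (n_wavelengths x n_components) known component spectra;
    D: (n_wavelengths x n_samples) measured mixture spectra.
    Returns C: (n_components x n_samples) concentration profiles."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    if rank is None:
        rank = int(np.sum(s > rcond * s[0]))   # drop noise-level singular values
    s_inv = np.zeros_like(s)
    s_inv[:rank] = 1.0 / s[:rank]
    A_pinv = Vt.T @ np.diag(s_inv) @ U.T       # truncated pseudoinverse of A
    C = A_pinv @ D
    return C
```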
NASA Astrophysics Data System (ADS)
Gong, Changfei; Zeng, Dong; Bian, Zhaoying; Huang, Jing; Zhang, Xinyu; Zhang, Hua; Lu, Lijun; Feng, Qianjin; Liang, Zhengrong; Ma, Jianhua
2016-03-01
Dynamic myocardial perfusion computed tomography (MPCT) is a promising technique for diagnosis and risk stratification of coronary artery disease by assessing the myocardial perfusion hemodynamic maps (MPHM). Meanwhile, the repeated scanning of the same region potentially results in a relatively large radiation dose to patients. In this work, we present a robust MPCT deconvolution algorithm with adaptive-weighted tensor total variation regularization to estimate the residue function accurately in the low-dose context, termed `MPD-AwTTV'. More specifically, the AwTTV regularization takes into account the anisotropic edge property of the MPCT images compared with the conventional total variation (TV) regularization, which can mitigate the drawbacks of TV regularization. Subsequently, an effective iterative algorithm was adopted to minimize the associated objective function. Experimental results on a modified XCAT phantom demonstrated that the present MPD-AwTTV algorithm outperforms other existing deconvolution algorithms in terms of noise-induced artifact suppression, edge detail preservation, and accurate MPHM estimation.
Exploring the Llaima Volcano Using Receiver Functions
NASA Astrophysics Data System (ADS)
Bishop, J. W.; Biryol, C.; Lees, J. M.
2016-12-01
The Llaima volcano in Chile is one of the most active volcanoes in the Southern Andes, having erupted at least 50 times since 1640. To understand the eruption dynamics behind these frequent paroxysms, it is important to identify the depth and extent of the magma chamber beneath the volcano. Furthermore, it is also important to identify structural controls on the magma storage regions and volcanic plumbing system, such as fault and fracture zones. To probe these questions, a dense, 26-station broadband seismic array was deployed around the Llaima volcano for 3 months (January to March, 2015). Additionally, broadband seismic data from 7 stations in the nearby Observatorio Volcanológico de Los Andes del Sur (OVDAS) seismic network were also obtained for this period. Teleseismic receiver functions were calculated from this combined data set using an iterative deconvolution technique. Receiver function stacks (both H-K and CCP) yield seismic images of the deep structure beneath the volcano. Initial results depict two low velocity layers at approximately 4 km and 12 km. Furthermore, Moho calculations are 5-8 km deeper than expected from regional models, but a shallow (~40 km) region is detected beneath the volcano peak. A large high Vp/Vs ratio anomaly (Vp/Vs > 1.85) is discernible to the east of the main peak of the volcano.
Dimensions and Fragmentation of the Nuclei of Comet Shoemaker-Levy 9
NASA Technical Reports Server (NTRS)
Sekanina, Zdenek
1994-01-01
Central regions on the digital maps of 13 nuclear condensations of Comet Shoemaker-Levy 9, obtained with the Planetary Camera of the Hubble Space Telescope on January 27, March 30, and July 4, 1994, have been analyzed with the aim to identify the presence of distinct, major fragments in each condensation, to deconvolve their contributions to the signal that also includes the contribution from a surrounding cloud of dust (modeled as an extended source, using two different laws), to estimate the dimensions of the fragments and to study their temporal variations, and to determine the spatial distributions of the fragments as projected onto the plane of the sky. The deconvolution method applied is described and the results of the analysis are summarized, including the finding that sizable fragments did survive until the time of atmospheric entry. This result does not contradict evidence of the comet's continuing, apparently spontaneous fragmentation, which still went on long after the extremely close approach to Jupiter in July 1992 and which, because of the Jovian tidal effects, may have intensified in the final days before the crash on Jupiter. Since the developed approach is based on certain premises and involves approximations, the results should be viewed as preliminary and the problem should be the subject of further investigation.
Optimal application of Morrison's iterative noise removal for deconvolution. Appendices
NASA Technical Reports Server (NTRS)
Ioup, George E.; Ioup, Juliette W.
1987-01-01
Morrison's iterative method of noise removal, or Morrison's smoothing, is applied in a simulation to noise-added data sets of various noise levels to determine its optimum use. Morrison's smoothing is applied for noise removal alone, and for noise removal prior to deconvolution. For the latter, an accurate method is analyzed to provide confidence in the optimization. The method consists of convolving the data with an inverse filter calculated by taking the inverse discrete Fourier transform of the reciprocal of the transform of the response of the system. Various length filters are calculated for the narrow and wide Gaussian response functions used. Deconvolution of non-noisy data is performed, and the error in each deconvolution calculated. Plots are produced of error versus filter length; and from these plots the most accurate length filters determined. The statistical methodologies employed in the optimizations of Morrison's method are similar. A typical peak-type input is selected and convolved with the two response functions to produce the data sets to be analyzed. Both constant and ordinate-dependent Gaussian distributed noise is added to the data, where the noise levels of the data are characterized by their signal-to-noise ratios. The error measures employed in the optimizations are the L1 and L2 norms. Results of the optimizations for both Gaussians, both noise types, and both norms include figures of optimum iteration number and error improvement versus signal-to-noise ratio, and tables of results. The statistical variation of all quantities considered is also given.
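A compact sketch of the inverse-filter construction described above, assuming a sampled system response with no exact spectral zeros; the small regularizer and the filter-length truncation are illustrative choices, with the optimal length to be determined empirically as in the study.

```python
import numpy as np

def inverse_filter(response, filter_length, eps=1e-8):
    """Inverse filter: the inverse DFT of the reciprocal of the DFT of the
    system response, truncated to `filter_length` points. `eps` guards
    against division by near-zero spectral values."""
    R = np.fft.fft(response)
    R = np.where(np.abs(R) < eps, eps, R)      # avoid dividing by (near) zeros
    inv = np.fft.ifft(1.0 / R).real
    return inv[:filter_length]

def deconvolve(data, response, filter_length):
    """Deconvolve (e.g. noise-smoothed) data by convolving with the filter."""
    f = inverse_filter(response, filter_length)
    return np.convolve(data, f, mode="same")
```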
Total variation based image deconvolution for extended depth-of-field microscopy images
NASA Astrophysics Data System (ADS)
Hausser, F.; Beckers, I.; Gierlak, M.; Kahraman, O.
2015-03-01
One approach for a detailed understanding of dynamical cellular processes during drug delivery is the use of functionalized biocompatible nanoparticles and fluorescent markers. An appropriate imaging system has to detect these moving particles, as well as whole cell volumes, in real time with a lateral resolution in the range of a few 100 nm. In a previous study, extended depth-of-field microscopy (EDF-microscopy) was applied to fluorescent beads and Tradescantia stamen hair cells, and the concept of real-time imaging was demonstrated in different microscopic modes. In principle, a phase retardation system such as a programmable spatial light modulator or a static waveplate is incorporated in the light path and modulates the wavefront of light. Hence the focal ellipsoid is smeared out and images appear blurred in a first step. An image restoration by deconvolution using the known point-spread function (PSF) of the optical system is necessary to achieve sharp microscopic images over an extended depth of field. This work is focused on the investigation and optimization of deconvolution algorithms to solve this restoration problem satisfactorily. This inverse problem is challenging due to the presence of Poisson-distributed noise and Gaussian noise, and since the PSF used for deconvolution exactly fits only one plane within the object. We use nonlinear total variation based image restoration techniques, in which different types of noise can be treated properly. Various algorithms are evaluated for artificially generated 3D images as well as for fluorescence measurements of BPAE cells.
Isotope pattern deconvolution as a tool to study iron metabolism in plants.
Rodríguez-Castrillón, José Angel; Moldovan, Mariella; García Alonso, J Ignacio; Lucena, Juan José; García-Tomé, Maria Luisa; Hernández-Apaolaza, Lourdes
2008-01-01
Isotope pattern deconvolution is a mathematical technique for isolating distinct isotope signatures from mixtures of natural abundance and enriched tracers. In iron metabolism studies measurement of all four isotopes of the element by high-resolution multicollector or collision cell ICP-MS allows the determination of the tracer/tracee ratio with simultaneous internal mass bias correction and lower uncertainties. This technique was applied here for the first time to study iron uptake by cucumber plants using 57Fe-enriched iron chelates of the o,o and o,p isomers of ethylenediaminedi(o-hydroxyphenylacetic) acid (EDDHA) and ethylenediamine tetraacetic acid (EDTA). Samples of root, stem, leaves, and xylem sap, after exposure of the cucumber plants to the mentioned 57Fe chelates, were collected, dried, and digested using nitric acid. The isotopic composition of iron in the samples was measured by ICP-MS using a high-resolution multicollector instrument. Mass bias correction was computed using both a natural abundance iron standard and by internal correction using isotope pattern deconvolution. It was observed that, for plants with low 57Fe enrichment, isotope pattern deconvolution provided lower tracer/tracee ratio uncertainties than the traditional method applying external mass bias correction. The total amount of the element in the plants was determined by isotope dilution analysis, using a collision cell quadrupole ICP-MS instrument, after addition of 57Fe or natural abundance Fe in a known amount which depended on the isotopic composition of the sample.
Deconvolution single shot multibox detector for supermarket commodity detection and classification
NASA Astrophysics Data System (ADS)
Li, Dejian; Li, Jian; Nie, Binling; Sun, Shouqian
2017-07-01
This paper proposes an image detection model to detect and classify commodities on supermarket shelves. Based on the principle that the features directly affect the accuracy of the final classification, feature maps are constructed that combine high-level features with low-level features. Fixed anchors are then set on those feature maps, and finally the label and the position of each commodity are generated by box regression and classification. In this work, we propose a model named Deconvolution Single Shot MultiBox Detector and evaluate it using 300 images photographed from real supermarket shelves. Following the same protocol as other recent methods, the results show that our model outperforms the baseline methods.
NASA Astrophysics Data System (ADS)
Varatharajan, I.; D'Amore, M.; Maturilli, A.; Helbert, J.; Hiesinger, H.
2017-12-01
The Mercury Radiometer and Thermal Imaging Spectrometer (MERTIS) payload of the ESA/JAXA BepiColombo mission to Mercury will map the thermal emissivity at a wavelength range of 7-14 μm and a spatial resolution of 500 m/pixel [1]. Mercury was also imaged in the same wavelength range using Boston University's Mid-Infrared Spectrometer and Imager (MIRSI) mounted on the NASA Infrared Telescope Facility (IRTF) on Mauna Kea, Hawaii, with a minimum spatial coverage of 400-600 km/spectrum, which blends all rock, mineral, and soil types [2]. Therefore, the study [2] used the quantitative deconvolution algorithm developed by [3] for spectral unmixing of this composite thermal emissivity spectrum from the telescope into the respective areal fractions of endmember spectra; however, the thermal emissivity of the endmembers used in [2] consists of inverted reflectance measurements (Kirchhoff's law) of various samples measured at room temperature and pressure. For over a decade, the Planetary Spectroscopy Laboratory (PSL) at the Institute of Planetary Research (PF) of the German Aerospace Center (DLR) has facilitated thermal emissivity measurements under controlled and simulated surface conditions of Mercury by taking emissivity measurements at temperatures varying from 100-500°C under vacuum conditions in support of the MERTIS payload. The measured thermal emissivity endmember spectral library therefore includes major silicates such as bytownite, anorthoclase, synthetic glass, olivine, enstatite, and nepheline basanite, rocks such as komatiite and tektite, the Johnson Space Center lunar simulant (1A), and synthetic powdered sulfides, which include MgS, FeS, CaS, CrS, TiS, NaS, and MnS. Using such a specialized endmember spectral library created under Mercury's conditions significantly increases the accuracy of the deconvolution model results. In this study, we revisited the available telescope spectra and redeveloped the algorithm of [3], choosing only the endmember spectral library created at PSL, for unbiased model accuracy with an RMS value of 0.03-0.04. Currently, the telescope spectra are being investigated for their calibration, and the results will be presented at AGU. References: [1] Hiesinger, H. and J. Helbert (2010) PSS, 58(1-2): 144-165. [2] Sprague, A.L. et al (2009) PSS, 57, 364-383. [3] Ramsey and Christiansen (1998) JGR, 103, 577-596
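As a hedged illustration of the kind of linear (areal-mixture) unmixing used in such deconvolution models, the sketch below solves a non-negative least-squares problem with an approximate sum-to-one constraint; the endmember matrix, the constraint weighting, and the RMS measure are generic placeholders rather than the redeveloped algorithm itself.

```python
import numpy as np
from scipy.optimize import nnls

def unmix(measured, endmembers, sum_weight=10.0):
    """Linear spectral unmixing by non-negative least squares.
    measured: (n_bands,) emissivity spectrum; endmembers: (n_bands, n_endmembers)
    library spectra. The sum-to-one constraint is imposed softly by appending a
    weighted row of ones to the design matrix."""
    A = np.vstack([endmembers, sum_weight * np.ones(endmembers.shape[1])])
    b = np.concatenate([measured, [sum_weight]])
    fractions, _ = nnls(A, b)
    rms = np.sqrt(np.mean((endmembers @ fractions - measured) ** 2))
    return fractions, rms
```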
Men, Kuo; Chen, Xinyuan; Zhang, Ye; Zhang, Tao; Dai, Jianrong; Yi, Junlin; Li, Yexiong
2017-01-01
Radiotherapy is one of the main treatment methods for nasopharyngeal carcinoma (NPC). It requires exact delineation of the nasopharynx gross tumor volume (GTVnx), the metastatic lymph node gross tumor volume (GTVnd), the clinical target volume (CTV), and organs at risk in the planning computed tomography images. However, this task is time-consuming and operator dependent. In the present study, we developed an end-to-end deep deconvolutional neural network (DDNN) for segmentation of these targets. The proposed DDNN is an end-to-end architecture enabling fast training and testing. It consists of two important components: an encoder network and a decoder network. The encoder network was used to extract the visual features of a medical image and the decoder network was used to recover the original resolution by deploying deconvolution. A total of 230 patients diagnosed with NPC stage I or stage II were included in this study. Data from 184 patients were chosen randomly as a training set to adjust the parameters of DDNN, and the remaining 46 patients were the test set to assess the performance of the model. The Dice similarity coefficient (DSC) was used to quantify the segmentation results of the GTVnx, GTVnd, and CTV. In addition, the performance of DDNN was compared with the VGG-16 model. The proposed DDNN method outperformed the VGG-16 in all the segmentation. The mean DSC values of DDNN were 80.9% for GTVnx, 62.3% for the GTVnd, and 82.6% for CTV, whereas VGG-16 obtained 72.3, 33.7, and 73.7% for the DSC values, respectively. DDNN can be used to segment the GTVnx and CTV accurately. The accuracy for the GTVnd segmentation was relatively low due to the considerable differences in its shape, volume, and location among patients. The accuracy is expected to increase with more training data and combination of MR images. In conclusion, DDNN has the potential to improve the consistency of contouring and streamline radiotherapy workflows, but careful human review and a considerable amount of editing will be required.
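For orientation only, a miniature encoder-decoder of the same general flavor as the DDNN described above is sketched below in PyTorch, with strided convolutions for encoding and deconvolution (transposed convolution) layers to recover the input resolution; the layer counts, channel widths, and class numbers are arbitrary assumptions and do not reproduce the published architecture.

```python
import torch
import torch.nn as nn

class TinyEncoderDecoder(nn.Module):
    """Toy encoder-decoder: downsample by strided convolutions, then recover
    the input resolution with transposed convolutions (deconvolution)."""
    def __init__(self, in_ch=1, n_classes=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, n_classes, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))   # per-pixel class scores

# Example: a 128x128 single-channel slice yields a (1, 4, 128, 128) score map
# scores = TinyEncoderDecoder()(torch.randn(1, 1, 128, 128))
```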
Spatial and spectral imaging of point-spread functions using a spatial light modulator
NASA Astrophysics Data System (ADS)
Munagavalasa, Sravan; Schroeder, Bryce; Hua, Xuanwen; Jia, Shu
2017-12-01
We develop a point-spread function (PSF) engineering approach to imaging the spatial and spectral information of molecular emissions using a spatial light modulator (SLM). We show that a dispersive grating pattern imposed upon the emission reveals spectral information. We also propose a deconvolution model that allows the decoupling of the spectral and 3D spatial information in engineered PSFs. The work is readily applicable to single-molecule measurements and fluorescent microscopy.
Enhancing the accuracy of subcutaneous glucose sensors: a real-time deconvolution-based approach.
Guerra, Stefania; Facchinetti, Andrea; Sparacino, Giovanni; Nicolao, Giuseppe De; Cobelli, Claudio
2012-06-01
Minimally invasive continuous glucose monitoring (CGM) sensors can greatly help diabetes management. Most of these sensors consist of a needle electrode, placed in the subcutaneous tissue, which measures an electrical current exploiting the glucose-oxidase principle. This current is then transformed to glucose levels after calibrating the sensor on the basis of one, or more, self-monitoring blood glucose (SMBG) samples. In this study, we design and test a real-time signal-enhancement module that, cascaded to the CGM device, improves the quality of its output by a proper postprocessing of the CGM signal. In fact, CGM sensors measure glucose in the interstitium rather than in the blood compartment. We show that this distortion can be compensated by means of a regularized deconvolution procedure relying on a linear regression model that can be updated whenever a pair of suitably sampled SMBG references is collected. Tests performed both on simulated and real data demonstrate a significant accuracy improvement of the CGM signal. Simulation studies also demonstrate the robustness of the method against departures from nominal conditions, such as temporal misplacement of the SMBG samples and uncertainty in the blood-to-interstitium glucose kinetic model. Thanks to its online capabilities, the proposed signal-enhancement algorithm can be used to improve the performance of CGM-based real-time systems such as the hypo/hyper glycemic alert generators or the artificial pancreas.
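A minimal sketch of the general regularized-deconvolution idea, assuming a first-order plasma-to-interstitium impulse response and a second-difference (smoothness) penalty; the time constant, regularization weight, and discretization are illustrative and do not correspond to the calibrated, SMBG-updated model used by the authors.

```python
import numpy as np

def deconvolve_cgm(y, dt, tau=10.0, lam=1e2):
    """Tikhonov-regularized deconvolution of an interstitial glucose trace y
    (sampled every dt minutes), assuming an exponential impulse response
    g(t) = exp(-t/tau)/tau from plasma to interstitium."""
    n = len(y)
    t = np.arange(n) * dt
    g = np.exp(-t / tau) / tau * dt            # discretized impulse response
    # Lower-triangular convolution matrix: (G x)[i] = sum_j g[i-j] x[j]
    G = np.array([[g[i - j] if i >= j else 0.0 for j in range(n)]
                  for i in range(n)])
    D = np.diff(np.eye(n), 2, axis=0)          # second-difference roughness penalty
    x = np.linalg.solve(G.T @ G + lam * D.T @ D, G.T @ y)
    return x                                   # estimated plasma glucose
```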
Peng, Xian; Yuan, Han; Chen, Wufan; Ding, Lei
2017-01-01
Continuous loop averaging deconvolution (CLAD) is one of the proven methods for recovering transient auditory evoked potentials (AEPs) in rapid stimulation paradigms, which requires an elaborate stimulus sequence design to attenuate the impact of noise in the data. The present study aimed to develop a new metric for gauging a CLAD sequence in terms of the noise gain factor (NGF), which has been proposed previously but with limited effectiveness in the presence of pink (1/f) noise. We derived the new metric by explicitly introducing the 1/f model into the proposed time-continuous sequence. We selected several representative CLAD sequences to test their noise properties on typical electroencephalogram (EEG) recordings, as well as on five real CLAD EEG recordings, to retrieve the middle latency responses. We also demonstrated the merit of the new metric in generating and quantifying optimized sequences using a classic genetic algorithm. The new metric shows evident improvements in measuring actual noise gains at different frequencies, and better performance than the original NGF in various aspects. The new metric is a generalized NGF measurement that can better quantify the performance of a CLAD sequence and provide a more efficient means of generating CLAD sequences via incorporation with optimization algorithms. The present study can facilitate the specific application of the CLAD paradigm with desired sequences in the clinic. PMID:28414803
Gene expression distribution deconvolution in single-cell RNA sequencing.
Wang, Jingshu; Huang, Mo; Torre, Eduardo; Dueck, Hannah; Shaffer, Sydney; Murray, John; Raj, Arjun; Li, Mingyao; Zhang, Nancy R
2018-06-26
Single-cell RNA sequencing (scRNA-seq) enables the quantification of each gene's expression distribution across cells, thus allowing the assessment of the dispersion, nonzero fraction, and other aspects of its distribution beyond the mean. These statistical characterizations of the gene expression distribution are critical for understanding expression variation and for selecting marker genes for population heterogeneity. However, scRNA-seq data are noisy, with each cell typically sequenced at low coverage, thus making it difficult to infer properties of the gene expression distribution from raw counts. Based on a reexamination of nine public datasets, we propose a simple technical noise model for scRNA-seq data with unique molecular identifiers (UMI). We develop deconvolution of single-cell expression distribution (DESCEND), a method that deconvolves the true cross-cell gene expression distribution from observed scRNA-seq counts, leading to improved estimates of properties of the distribution such as dispersion and nonzero fraction. DESCEND can adjust for cell-level covariates such as cell size, cell cycle, and batch effects. DESCEND's noise model and estimation accuracy are further evaluated through comparisons to RNA FISH data, through data splitting and simulations and through its effectiveness in removing known batch effects. We demonstrate how DESCEND can clarify and improve downstream analyses such as finding differentially expressed genes, identifying cell types, and selecting differentiation markers. Copyright © 2018 the Author(s). Published by PNAS.
Sparse Poisson noisy image deblurring.
Carlavan, Mikael; Blanc-Féraud, Laure
2012-04-01
Deblurring noisy Poisson images has recently been the subject of an increasing number of works in many areas such as astronomy and biological imaging. In this paper, we focus on confocal microscopy, which is a very popular technique for 3-D imaging of biological living specimens that gives images with a very good resolution (several hundreds of nanometers), although degraded by both blur and Poisson noise. Deconvolution methods have been proposed to reduce these degradations, and in this paper, we focus on techniques that promote the introduction of an explicit prior on the solution. One difficulty of these techniques is to set the value of the parameter which weights the tradeoff between the data term and the regularizing term. Only a few works have been devoted to the automatic selection of this regularizing parameter when considering Poisson noise; therefore, it is often set manually such that it gives the best visual results. We present here two recent methods to estimate this regularizing parameter, and we first propose an improvement of these estimators which takes advantage of confocal images. Following these estimators, we secondly propose to express the problem of the deconvolution of Poisson noisy images as the minimization of a new constrained problem. The proposed constrained formulation is well suited to this application domain since it is directly expressed using the antilog likelihood of the Poisson distribution and therefore does not require any approximation. We show how to solve the unconstrained and constrained problems using the recent alternating-direction technique, and we present results on synthetic and real data using well-known priors, such as total variation and wavelet transforms. Among these wavelet transforms, we especially focus on the dual-tree complex wavelet transform and on the dictionary composed of curvelets and an undecimated wavelet transform.
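Not the authors' method, but for context: the classical Richardson-Lucy iteration below is the unregularized maximum-likelihood baseline for the Poisson deblurring model discussed above; it is shown only to make the data model concrete, and the prior-based, constrained formulations of the paper go beyond it.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=30, eps=1e-12):
    """Unregularized maximum-likelihood deblurring for Poisson noise.
    image: observed 2-D data; psf: known blur kernel (non-negative, sums to 1)."""
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(image, image.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / (blurred + eps)        # data / model prediction
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```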
X-RAYING THE DARK SIDE OF VENUS—SCATTER FROM VENUS’ MAGNETOTAIL?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Afshari, M.; Peres, G.; Petralia, A.
We analyze significant X-ray, EUV, and UV emission coming from the dark side of Venus observed with Hinode/XRT and Solar Dynamics Observatory/Atmospheric Imaging Assembly (SDO/AIA) during a transit across the solar disk that occurred in 2012. As a check we have analyzed an analogous Mercury transit that occurred in 2006. We have used the latest version of the Hinode/XRT point spread function to deconvolve Venus and Mercury X-ray images, to remove instrumental scattering. After deconvolution, the flux from Venus' shadow remains significant while that of Mercury becomes negligible. Since stray light contamination affects the XRT Ti-poly filter data we use, we performed the same analysis with XRT Al-mesh filter data, not affected by the light leak. Even the latter data show residual flux. We have also found significant EUV (304 Å, 193 Å, 335 Å) and UV (1700 Å) flux in Venus' shadow, measured with SDO/AIA. The EUV emission from Venus' dark side is reduced, but still significant, when deconvolution is applied. The light curves of the average flux of the shadow in the X-ray, EUV, and UV bands appear different as Venus crosses the solar disk, but in any of them the flux is, at any time, approximately proportional to the average flux in a ring surrounding Venus, and therefore proportional to that of the solar regions around Venus' obscuring disk line of sight. The proportionality factor depends on the band. This phenomenon has no clear origin; we suggest that it may be due to scatter occurring in the very long magnetotail of Venus.
Receiver function deconvolution using transdimensional hierarchical Bayesian inference
NASA Astrophysics Data System (ADS)
Kolb, J. M.; Lekić, V.
2014-06-01
Teleseismic waves can convert from shear to compressional (Sp) or compressional to shear (Ps) across impedance contrasts in the subsurface. Deconvolving the parent waveforms (P for Ps or S for Sp) from the daughter waveforms (S for Ps or P for Sp) generates receiver functions which can be used to analyse velocity structure beneath the receiver. Though a variety of deconvolution techniques have been developed, they are all adversely affected by background and signal-generated noise. In order to take into account the unknown noise characteristics, we propose a method based on transdimensional hierarchical Bayesian inference in which both the noise magnitude and noise spectral character are parameters in calculating the likelihood probability distribution. We use a reversible-jump implementation of a Markov chain Monte Carlo algorithm to find an ensemble of receiver functions whose relative fits to the data have been calculated while simultaneously inferring the values of the noise parameters. Our noise parametrization is determined from pre-event noise so that it approximates observed noise characteristics. We test the algorithm on synthetic waveforms contaminated with noise generated from a covariance matrix obtained from observed noise. We show that the method retrieves easily interpretable receiver functions even in the presence of high noise levels. We also show that we can obtain useful estimates of noise amplitude and frequency content. Analysis of the ensemble solutions produced by our method can be used to quantify the uncertainties associated with individual receiver functions as well as with individual features within them, providing an objective way for deciding which features warrant geological interpretation. This method should make possible more robust inferences on subsurface structure using receiver function analysis, especially in areas of poor data coverage or under noisy station conditions.
NASA Astrophysics Data System (ADS)
Chen, Jincai; Jin, Guodong; Zhang, Jian
2016-03-01
The rotational motion and orientational distribution of ellipsoidal particles in turbulent flows are of significance in environmental and engineering applications. Whereas the translational motion of an ellipsoidal particle is controlled by the turbulent motions at large scales, its rotational motion is determined by the fluid velocity gradient tensor at small scales, which raises a challenge when predicting the rotational dispersion of ellipsoidal particles using large eddy simulation (LES) method due to the lack of subgrid scale (SGS) fluid motions. We report the effects of the SGS fluid motions on the orientational and rotational statistics, such as the alignment between the long axis of ellipsoidal particles and the vorticity, the mean rotational energy at various aspect ratios against those obtained with direct numerical simulation (DNS) and filtered DNS. The performances of a stochastic differential equation (SDE) model for the SGS velocity gradient seen by the particles and the approximate deconvolution method (ADM) for LES are investigated. It is found that the missing SGS fluid motions in LES flow fields have significant effects on the rotational statistics of ellipsoidal particles. Alignment between the particles and the vorticity is weakened; and the rotational energy of the particles is reduced in LES. The SGS-SDE model leads to a large error in predicting the alignment between the particles and the vorticity and over-predicts the rotational energy of rod-like particles. The ADM significantly improves the rotational energy prediction of particles in LES.
Deconvolution of the PSF of a seismic lens
NASA Astrophysics Data System (ADS)
Yu, Jianhua; Wang, Yue; Schuster, Gerard T.
2002-12-01
We show that if seismic data d is related to the migration image by m_mig = L^T d, then m_mig is a blurred version of the actual reflectivity distribution m, i.e., m_mig = (L^T L)m. Here L is the acoustic forward modeling operator under the Born approximation, where d = Lm. The blurring operator (L^T L), or point spread function, distorts the image because of defects in the seismic lens, i.e., small source-receiver recording aperture and irregular/coarse geophone-source spacing. These distortions can be partly suppressed by applying the deblurring operator (L^T L)^{-1} to the migration image to get m = (L^T L)^{-1} m_mig. This deblurred image is known as a least squares migration (LSM) image if (L^T L)^{-1} L^T is applied to the data d using a conjugate gradient method, and is known as a migration deconvolved (MD) image if (L^T L)^{-1} is directly applied to the migration image m_mig in (k_x, k_y, z) space. The MD algorithm is an order-of-magnitude faster than LSM, but it employs more restrictive assumptions. We also show that deblurring can be used to filter out coherent noise in the data such as multiple reflections. The procedure is to, e.g., decompose the forward modeling operator into both primary and multiple reflection operators, d = (L_prim + L_multi)m, invert for m, and find the primary reflection data by d_prim = L_prim m. This method is named least squares migration filtering (LSMF). The above three algorithms (LSM, MD and LSMF) might be useful for attacking problems in optical imaging.
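As a toy illustration of the deblurring step m = (L^T L)^{-1} m_mig, the sketch below applies a conjugate-gradient solver to a migration image, approximating the Hessian L^T L by convolution with a fixed, symmetric point spread function `psf` plus a small damping term; this spatial-invariance assumption and the parameter values are simplifications for illustration, not the LSM/MD/LSMF implementations of the paper.

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.sparse.linalg import LinearOperator, cg

def deblur_migration(m_mig, psf, damping=1e-3, n_iter=50):
    """Deblur a migration image m_mig by CG, treating L^T L as convolution
    with a symmetric kernel `psf` (so the operator is symmetric) plus damping."""
    shape = m_mig.shape

    def hessian(vec):
        img = np.asarray(vec).reshape(shape)
        blurred = fftconvolve(img, psf, mode="same")
        return (blurred + damping * img).ravel()   # (L^T L + eps I) m

    A = LinearOperator((m_mig.size, m_mig.size), matvec=hessian)
    m, _ = cg(A, m_mig.astype(float).ravel(), maxiter=n_iter)
    return m.reshape(shape)
```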
NASA Astrophysics Data System (ADS)
Leow, Alex D.; Zhu, Siwei
2008-03-01
Diffusion weighted MR imaging is a powerful tool that can be employed to study white matter microstructure by examining the 3D displacement profile of water molecules in brain tissue. By applying diffusion-sensitizing gradients along a minimum of 6 directions, second-order tensors (represented by 3-by-3 positive definite matrices) can be computed to model dominant diffusion processes. However, it has been shown that conventional DTI is not sufficient to resolve more complicated white matter configurations, e.g. crossing fiber tracts. More recently, High Angular Resolution Diffusion Imaging (HARDI) seeks to address this issue by employing more than 6 gradient directions. To account for fiber crossing when analyzing HARDI data, several methodologies have been introduced. For example, q-ball imaging was proposed to approximate the Orientation Diffusion Function (ODF). Similarly, the PAS method seeks to resolve the angular structure of displacement probability functions using the maximum entropy principle. Alternatively, deconvolution methods extract multiple fiber tracts by computing fiber orientations using a pre-specified single fiber response function. In this study, we introduce the Tensor Distribution Function (TDF), a probability function defined on the space of symmetric and positive definite matrices. Using calculus of variations, we solve for the TDF that optimally describes the observed data. Here, fiber crossing is modeled as an ensemble of Gaussian diffusion processes with weights specified by the TDF. Once this optimal TDF is determined, the ODF can easily be computed by analytical integration of the resulting displacement probability function. Moreover, principal fiber directions can also be directly derived from the TDF.
NASA Astrophysics Data System (ADS)
McCallum, James L.; Engdahl, Nicholas B.; Ginn, Timothy R.; Cook, Peter. G.
2014-03-01
Residence time distributions (RTDs) have been used extensively for quantifying flow and transport in subsurface hydrology. In geochemical approaches, environmental tracer concentrations are used in conjunction with simple lumped parameter models (LPMs). Conversely, numerical simulation techniques require large amounts of parameterization and estimated RTDs are certainly limited by associated uncertainties. In this study, we apply a nonparametric deconvolution approach to estimate RTDs using environmental tracer concentrations. The model is based only on the assumption that flow is steady enough that the observed concentrations are well approximated by linear superposition of the input concentrations with the RTD; that is, the convolution integral holds. Even with large amounts of environmental tracer concentration data, the entire shape of an RTD remains highly nonunique. However, accurate estimates of mean ages and in some cases prediction of young portions of the RTD may be possible. The most useful type of data was found to be the use of a time series of tritium. This was due to the sharp variations in atmospheric concentrations and a short half-life. Conversely, the use of CFC compounds with smoothly varying atmospheric concentrations was more prone to nonuniqueness. This work highlights the benefits and limitations of using environmental tracer data to estimate whole RTDs with either LPMs or through numerical simulation. However, the ability of the nonparametric approach developed here to correct for mixing biases in mean ages appears promising.
NASA Astrophysics Data System (ADS)
Kazakis, Nikolaos A.
2018-01-01
The present comment concerns the correct presentation of an algorithm proposed in the above paper for the glow-curve deconvolution in the case of continuous distribution of trapping states. Since most researchers would use directly the proposed algorithm as published, they should be notified of its correct formulation during the fitting of TL glow curves of materials with continuous trap distribution using this Equation.
An l1-TV algorithm for deconvolution with salt and pepper noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wohlberg, Brendt; Rodriguez, Paul
2008-01-01
There has recently been considerable interest in applying Total Variation with an l1 data fidelity term to the denoising of images subject to salt and pepper noise, but the extension of this formulation to more general problems, such as deconvolution, has received little attention, most probably because most efficient algorithms for l1-TV denoising can not handle more general inverse problems. We apply the Iteratively Reweighted Norm algorithm to this problem, and compare performance with an alternative algorithm based on the Mumford-Shah functional.
Migration of scattered teleseismic body waves
NASA Astrophysics Data System (ADS)
Bostock, M. G.; Rondenay, S.
1999-06-01
The retrieval of near-receiver mantle structure from scattered waves associated with teleseismic P and S and recorded on three-component, linear seismic arrays is considered in the context of inverse scattering theory. A Ray + Born formulation is proposed which admits linearization of the forward problem and economy in the computation of the elastic wave Green's function. The high-frequency approximation further simplifies the problem by enabling (1) the use of an earth-flattened, 1-D reference model, (2) a reduction in computations to 2-D through the assumption of 2.5-D experimental geometry, and (3) band-diagonalization of the Hessian matrix in the inverse formulation. The final expressions are in a form reminiscent of the classical diffraction stack of seismic migration. Implementation of this procedure demands an accurate estimate of the scattered wave contribution to the impulse response, and thus requires the removal of both the reference wavefield and the source time signature from the raw record sections. An approximate separation of direct and scattered waves is achieved through application of the inverse free-surface transfer operator to individual station records and a Karhunen-Loeve transform to the resulting record sections. This procedure takes the full displacement field to a wave vector space wherein the first principal component of the incident wave-type section is identified with the direct wave and is used as an estimate of the source time function. The scattered displacement field is reconstituted from the remaining principal components using the forward free-surface transfer operator, and may be reduced to a scattering impulse response upon deconvolution of the source estimate. An example employing pseudo-spectral synthetic seismograms demonstrates an application of the methodology.
PEPSI spectro-polarimeter for the LBT
NASA Astrophysics Data System (ADS)
Strassmeier, Klaus G.; Hofmann, Axel; Woche, Manfred F.; Rice, John B.; Keller, Christoph U.; Piskunov, N. E.; Pallavicini, Roberto
2003-02-01
PEPSI (Potsdam Echelle Polarimetric and Spectroscopic Instrument) is to use the unique features of the LBT and its powerful double mirror configuration to provide high and extremely high spectral resolution full-Stokes four-vector spectra in the wavelength range 450-1100 nm. For the given aperture of 8.4 m in single mirror mode and 11.8 m in double mirror mode, and at a spectral resolution of 40,000-300,000 as designed for the fiber-fed Echelle spectrograph, a polarimetric accuracy between 10^-4 and 10^-2 can be reached for targets with visual magnitudes of up to 17th magnitude. A polarimetric accuracy better than 10^-4 can only be reached either for targets brighter than approximately 10th magnitude, together with a substantial trade-off with the spectral resolution, or with spectrum deconvolution techniques. At 10^-2, however, we will be able to observe the brightest AGNs down to 17th magnitude.
Maximum entropy deconvolution of the optical jet of 3C 273
NASA Technical Reports Server (NTRS)
Evans, I. N.; Ford, H. C.; Hui, X.
1989-01-01
The technique of maximum entropy image restoration is applied to the problem of deconvolving the point spread function from a deep, high-quality V band image of the optical jet of 3C 273. The resulting maximum entropy image has an approximate spatial resolution of 0.6 arcsec and has been used to study the morphology of the optical jet. Four regularly-spaced optical knots are clearly evident in the data, together with an optical 'extension' at each end of the optical jet. The jet oscillates around its center of gravity, and the spatial scale of the oscillations is very similar to the spacing between the optical knots. The jet is marginally resolved in the transverse direction and has an asymmetric profile perpendicular to the jet axis. The distribution of V band flux along the length of the jet, and accurate astrometry of the optical knot positions are presented.
Prediction of human errors by maladaptive changes in event-related brain networks.
Eichele, Tom; Debener, Stefan; Calhoun, Vince D; Specht, Karsten; Engel, Andreas K; Hugdahl, Kenneth; von Cramon, D Yves; Ullsperger, Markus
2008-04-22
Humans engaged in monotonous tasks are susceptible to occasional errors that may lead to serious consequences, but little is known about brain activity patterns preceding errors. Using functional MRI and applying independent component analysis followed by deconvolution of hemodynamic responses, we studied error preceding brain activity on a trial-by-trial basis. We found a set of brain regions in which the temporal evolution of activation predicted performance errors. These maladaptive brain activity changes started to evolve approximately 30 sec before the error. In particular, a coincident decrease of deactivation in default mode regions of the brain, together with a decline of activation in regions associated with maintaining task effort, raised the probability of future errors. Our findings provide insights into the brain network dynamics preceding human performance errors and suggest that monitoring of the identified precursor states may help in avoiding human errors in critical real-world situations.
Multichannel myopic deconvolution in underwater acoustic channels via low-rank recovery
Tian, Ning; Byun, Sung-Hoon; Sabra, Karim; Romberg, Justin
2017-01-01
This paper presents a technique for solving the multichannel blind deconvolution problem. The authors observe the convolution of a single (unknown) source with K different (unknown) channel responses; from these channel outputs, the authors want to estimate both the source and the channel responses. The authors show how this classical signal processing problem can be viewed as solving a system of bilinear equations, and in turn can be recast as recovering a rank-1 matrix from a set of linear observations. Results of prior studies in the area of low-rank matrix recovery have identified effective convex relaxations for problems of this type and efficient, scalable heuristic solvers that enable these techniques to work with thousands of unknown variables. The authors show how a priori information about the channels can be used to build a linear model for the channels, which in turn makes solving these systems of equations well-posed. This study demonstrates the robustness of this methodology to measurement noises and parametrization errors of the channel impulse responses with several stylized and shallow water acoustic channel simulations. The performance of this methodology is also verified experimentally using shipping noise recorded on short bottom-mounted vertical line arrays. PMID:28599565
NASA Astrophysics Data System (ADS)
Wang, Qing; Zhao, Xinyu; Ihme, Matthias
2017-11-01
Particle-laden turbulent flows are important in numerous industrial applications, such as spray combustion engines, solar energy collectors, etc. It is of interest to study this type of flow numerically, especially using large-eddy simulations (LES). However, capturing the turbulence-particle interaction in LES remains challenging due to the insufficient representation of the effect of sub-grid scale (SGS) dispersion. In the present work, a closure technique for the SGS dispersion using the regularized deconvolution method (RDM) is assessed. RDM was proposed as the closure for the SGS dispersion in a counterflow spray that was studied numerically using a finite difference method on a structured mesh. A presumed form of LES filter is used in the simulations. In the present study, this technique has been extended to a finite volume method with an unstructured mesh, where no presumption on the filter form is required. The method is applied to a series of particle-laden turbulent jets. Parametric analyses of the model performance are conducted for flows with different Stokes numbers and Reynolds numbers. The results from LES will be compared against experiments and direct numerical simulations (DNS).
Monaural Sound Localization Based on Reflective Structure and Homomorphic Deconvolution
Park, Yeonseok; Choi, Anthony
2017-01-01
The asymmetric structure around the receiver provides a particular time delay for the specific incoming propagation. This paper designs a monaural sound localization system based on the reflective structure around the microphone. The reflective plates are placed to present the direction-wise time delay, which is naturally processed by convolutional operation with a sound source. The received signal is separated for estimating the dominant time delay by using homomorphic deconvolution, which utilizes the real cepstrum and inverse cepstrum sequentially to derive the propagation response’s autocorrelation. Once the localization system accurately estimates the information, the time delay model computes the corresponding reflection for localization. Because of the structure limitation, two stages of the localization process perform the estimation procedure as range and angle. The software toolchain from propagation physics and algorithm simulation realizes the optimal 3D-printed structure. The acoustic experiments in the anechoic chamber denote that 79.0% of the study range data from the isotropic signal is properly detected by the response value, and 87.5% of the specific direction data from the study range signal is properly estimated by the response time. The product of both rates shows the overall hit rate to be 69.1%. PMID:28946625
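A minimal sketch of the cepstral step: a direct-plus-delayed propagation response contributes a periodic ripple to the log spectrum, so the reflection delay appears as a peak in the real cepstrum of the received signal. The minimum-delay cutoff and the search range below are illustrative choices, not the paper's two-stage range/angle procedure.

```python
import numpy as np

def real_cepstrum(x):
    """Real cepstrum: inverse FFT of the log magnitude spectrum."""
    spectrum = np.fft.fft(x)
    return np.fft.ifft(np.log(np.abs(spectrum) + 1e-12)).real

def dominant_delay(x, fs, min_delay_s=1e-4):
    """Estimate the dominant reflection delay (seconds) from a received signal
    x sampled at fs, ignoring the low-quefrency part below min_delay_s."""
    c = real_cepstrum(x)
    start = int(min_delay_s * fs)
    peak = start + np.argmax(c[start:len(c) // 2])
    return peak / fs
```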
NASA Astrophysics Data System (ADS)
Singh, Arvind; Singh, Upendra Kumar
2017-02-01
This paper deals with the application of the continuous wavelet transform (CWT) and Euler deconvolution methods to estimate source depth using magnetic anomalies. These methods are utilized mainly to focus on the fundamental issues of mapping the major coal seam and locating tectonic lineaments. The main aim of the study is to locate and characterize the sources of the magnetic field by transferring the data into an auxiliary space using the CWT. The method has been tested on several synthetic source anomalies and finally applied to magnetic field data from the Jharia coalfield, India. Using the magnetic field data, the mean depths of the causative sources point to differing lithospheric depths over the study region. It is also inferred that there are two faults, namely the northern boundary fault and the southern boundary fault, oriented in the northeastern and southeastern directions, respectively. Moreover, the central part of the region is more faulted and folded than the other parts and has a sediment thickness of about 2.4 km. The methods give the mean depth of the causative sources without any a priori information, which can be used as an initial model in any inversion algorithm.
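For reference, a bare-bones sketch of window-based Euler deconvolution along a profile is given below, assuming observations at z = 0, measured or computed gradients, and a chosen structural index N; the window length and the absence of any solution-acceptance criteria make this an illustration of the homogeneity-equation inversion only, separate from the CWT analysis in the paper.

```python
import numpy as np

def euler_profile(x, T, dTdx, dTdz, N=1.0, window=11):
    """Window-based Euler deconvolution of a magnetic profile T(x) observed at
    z = 0. Euler's homogeneity equation,
        (x - x0) dT/dx + (0 - z0) dT/dz = N (B - T),
    is rearranged to x0*dT/dx + z0*dT/dz + N*B = x*dT/dx + N*T and solved by
    least squares for (x0, z0, B) in each sliding window."""
    half = window // 2
    estimates = []
    for i in range(half, len(x) - half):
        s = slice(i - half, i + half + 1)
        A = np.column_stack([dTdx[s], dTdz[s], N * np.ones(window)])
        b = x[s] * dTdx[s] + N * T[s]
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        estimates.append(sol)                  # (source x0, source depth z0, background B)
    return np.array(estimates)
```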
Gabor Deconvolution as Preliminary Method to Reduce Pitfall in Deeper Target Seismic Data
NASA Astrophysics Data System (ADS)
Oktariena, M.; Triyoso, W.
2018-03-01
The anelastic attenuation process during seismic wave propagation is the trigger of the non-stationary character of seismic data. Absorption and scattering of energy cause seismic energy loss with increasing depth. A series of thin reservoir layers found in the study area is located within the Talang Akar Fm. level, showing an indication of an interpretation pitfall due to the attenuation effect commonly occurring in deeper-level seismic data. The attenuation effect greatly influences the seismic images of the deeper target level, creating pitfalls in several aspects. Seismic amplitude in the deeper target level often cannot represent the real subsurface character due to low amplitude values or chaotic events near the Basement. Frequency-wise, the decay can be seen as diminishing frequency content in the deeper target. Meanwhile, seismic amplitude is a simple tool for pointing out a Direct Hydrocarbon Indicator (DHI) in a preliminary geophysical study before more advanced interpretation methods are applied. A quick look at the post-stack seismic data shows the reservoir associated with a bright-spot DHI, while another, larger bright-spot body is detected in the northeast area near the field edge. A horizon slice confirms the possibility that the other bright-spot zone has a smaller delineation, an interpretation pitfall that commonly occurs at deeper seismic levels. We evaluate this pitfall by applying Gabor Deconvolution to address the attenuation problem. Gabor Deconvolution forms a partition of unity to factorize the trace into smaller convolution windows that can be processed as stationary packets. Gabor Deconvolution estimates both the magnitude of the source signature and its attenuation function. The enhanced seismic data show better imaging in the pitfall area that was previously detected as a vast bright-spot zone. When the enhanced seismic data are used for further advanced reprocessing, the Seismic Impedance and Vp/Vs Ratio slices show a better reservoir delineation, in which the pitfall area is reduced and parts of it merge into the background lithology. Gabor Deconvolution removes the attenuation by performing a Gabor-domain spectral division, which in turn also reduces interpretation pitfalls in the deeper target seismic data.
Deconvolution of Images from BLAST 2005: Insight into the K3-50 and IC 5146 Star-forming Regions
NASA Astrophysics Data System (ADS)
Roy, Arabindo; Ade, Peter A. R.; Bock, James J.; Brunt, Christopher M.; Chapin, Edward L.; Devlin, Mark J.; Dicker, Simon R.; France, Kevin; Gibb, Andrew G.; Griffin, Matthew; Gundersen, Joshua O.; Halpern, Mark; Hargrave, Peter C.; Hughes, David H.; Klein, Jeff; Marsden, Gaelen; Martin, Peter G.; Mauskopf, Philip; Netterfield, Calvin B.; Olmi, Luca; Patanchon, Guillaume; Rex, Marie; Scott, Douglas; Semisch, Christopher; Truch, Matthew D. P.; Tucker, Carole; Tucker, Gregory S.; Viero, Marco P.; Wiebe, Donald V.
2011-04-01
We present an implementation of the iterative flux-conserving Lucy-Richardson (L-R) deconvolution method of image restoration for maps produced by the Balloon-borne Large Aperture Submillimeter Telescope (BLAST). Compared to the direct Fourier transform method of deconvolution, the L-R operation restores images with better-controlled background noise and increases source detectability. Intermediate iterated images are useful for studying extended diffuse structures, while the later iterations truly enhance point sources to near the designed diffraction limit of the telescope. The L-R method of deconvolution is efficient in resolving compact sources in crowded regions while simultaneously conserving their respective flux densities. We have analyzed its performance and convergence extensively through simulations and cross-correlations of the deconvolved images with available high-resolution maps. We present new science results from two BLAST surveys, in the Galactic regions K3-50 and IC 5146, further demonstrating the benefits of performing this deconvolution. We have resolved three clumps within a radius of 4.5 arcmin inside the star-forming molecular cloud containing K3-50. Combining the well-resolved dust emission map with available multi-wavelength data, we have constrained the spectral energy distributions (SEDs) of five clumps to obtain masses (M), bolometric luminosities (L), and dust temperatures (T). The L-M diagram has been used as a diagnostic tool to estimate the evolutionary stages of the clumps. There are close relationships between dust continuum emission and both 21 cm radio continuum and 12CO molecular line emission. The restored extended large-scale structures in the Northern Streamer of IC 5146 have a strong spatial correlation with both SCUBA and high-resolution extinction images. A dust temperature of 12 K has been obtained for the central filament. We report physical properties of ten compact sources, including six associated protostars, by fitting SEDs to multi-wavelength data. All of these compact sources are still quite cold (typical temperature below ~ 16 K) and are above the critical Bonnor-Ebert mass. They have associated low-power young stellar objects. Further evidence for starless clumps has also been found in the IC 5146 region.
Image processing tools dedicated to quantification in 3D fluorescence microscopy
NASA Astrophysics Data System (ADS)
Dieterlen, A.; De Meyer, A.; Colicchio, B.; Le Calvez, S.; Haeberlé, O.; Jacquey, S.
2006-05-01
3-D optical fluorescence microscopy has become an efficient tool for volume investigation of living biological samples. Developments in instrumentation have made it possible to beat the conventional Abbe limit. In any case, the recorded image can be described by the convolution of the original object with the Point Spread Function (PSF) of the acquisition system. Owing to the finite resolution of the instrument, the original object is recorded with distortions and blurring, and contaminated by noise. As a consequence, relevant biological information cannot be extracted directly from the raw data stacks. If the goal is 3-D quantitative analysis, then characterization of the system is mandatory to assess the optimal performance of the instrument and to ensure reproducibility of the data acquisition. The PSF represents the properties of the image acquisition system; we have proposed the use of statistical tools and Zernike moments to describe a 3-D system PSF and to quantify its variation. This first step toward standardization helps to define an acquisition protocol that optimizes exploitation of the microscope for the biological sample under study. Before geometrical information is extracted and/or intensities are quantified, data restoration is mandatory. Reduction of out-of-focus light is carried out computationally by a deconvolution process. However, other phenomena occur during acquisition, such as fluorescence photodegradation ("bleaching"), which alter the information needed for restoration. We have therefore developed a protocol to pre-process the data before applying deconvolution algorithms. A large number of deconvolution methods have been described and are now available in commercial packages. One major difficulty in using this software is the user's choice of the "best" regularization parameters. We have shown that automating the choice of the regularization level greatly improves the reliability of the measurements while also facilitating use. Furthermore, pre-filtering the images improves the stability of the deconvolution process and thereby the quality and repeatability of quantitative measurements. In the same way, pre-filtering the PSF stabilizes the deconvolution process. We have shown that Zernike polynomials can be used to reconstruct the experimental PSF, preserving the system characteristics while removing the noise contained in the PSF.
Muravyev, Nikita V; Koga, Nobuyoshi; Meerov, Dmitry B; Pivkina, Alla N
2017-01-25
This study focused on kinetic modeling of a specific type of multistep heterogeneous reaction comprising exothermic and endothermic reaction steps, as exemplified by the practical kinetic analysis of the experimental kinetic curves for the thermal decomposition of molten ammonium dinitramide (ADN). It is known that the thermal decomposition of ADN occurs as a consecutive two step mass-loss process comprising the decomposition of ADN and subsequent evaporation/decomposition of in situ generated ammonium nitrate. These reaction steps provide exothermic and endothermic contributions, respectively, to the overall thermal effect. The overall reaction process was deconvoluted into two reaction steps using simultaneously recorded thermogravimetry and differential scanning calorimetry (TG-DSC) curves by considering the different physical meanings of the kinetic data derived from TG and DSC by P value analysis. The kinetic data thus separated into exothermic and endothermic reaction steps were kinetically characterized using kinetic computation methods including isoconversional method, combined kinetic analysis, and master plot method. The overall kinetic behavior was reproduced as the sum of the kinetic equations for each reaction step considering the contributions to the rate data derived from TG and DSC. During reproduction of the kinetic behavior, the kinetic parameters and contributions of each reaction step were optimized using kinetic deconvolution analysis. As a result, the thermal decomposition of ADN was successfully modeled as partially overlapping exothermic and endothermic reaction steps. The logic of the kinetic modeling was critically examined, and the practical usefulness of phenomenological modeling for the thermal decomposition of ADN was illustrated to demonstrate the validity of the methodology and its applicability to similar complex reaction processes.
Computed inverse magnetic resonance imaging for magnetic susceptibility map reconstruction.
Chen, Zikuan; Calhoun, Vince
2012-01-01
This article reports a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a 2-step computational approach. The forward T2*-weighted MRI (T2*MRI) process is broken down into 2 steps: (1) from magnetic susceptibility source to field map establishment via magnetization in the main field and (2) from field map to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes 2 inverse steps to reverse the T2*MRI procedure: field map calculation from MR-phase image and susceptibility source calculation from the field map. The inverse step from field map to susceptibility map is a 3-dimensional ill-posed deconvolution problem, which can be solved with 3 kinds of approaches: the Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from an MR-phase image with high fidelity (spatial correlation ≈ 0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI by 2 computational steps: calculating the field map from the phase image and reconstructing the susceptibility map from the field map. The crux of CIMRI lies in an ill-posed 3-dimensional deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm.
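To make the second inverse step concrete, the sketch below illustrates the truncated inverse filtering variant mentioned above: the field map is divided by the unit dipole kernel in k-space, with the kernel clipped near its zero cone. This is only a schematic example under common textbook conventions (B0 along z, a threshold of 0.1); it is not the authors' code, and the function and variable names are invented for illustration. The Tikhonov and TV solvers have the same overall structure and differ only in how the ill-conditioned part of the kernel is regularized.

    import numpy as np

    def dipole_kernel(shape, voxel_size=(1.0, 1.0, 1.0)):
        """Unit dipole field kernel D(k) = 1/3 - kz^2/|k|^2 (B0 along z assumed)."""
        kx = np.fft.fftfreq(shape[0], voxel_size[0])
        ky = np.fft.fftfreq(shape[1], voxel_size[1])
        kz = np.fft.fftfreq(shape[2], voxel_size[2])
        KX, KY, KZ = np.meshgrid(kx, ky, kz, indexing="ij")
        K2 = KX**2 + KY**2 + KZ**2
        K2[0, 0, 0] = np.inf            # avoid division by zero at k = 0
        return 1.0 / 3.0 - KZ**2 / K2

    def susceptibility_tkd(fieldmap, voxel_size=(1.0, 1.0, 1.0), threshold=0.1):
        """Truncated k-space division: invert the dipole kernel where it is
        well conditioned, clip it to +/-threshold elsewhere."""
        D = dipole_kernel(fieldmap.shape, voxel_size)
        D_trunc = np.where(np.abs(D) < threshold, threshold * np.sign(D), D)
        D_trunc[D_trunc == 0] = threshold   # sign(0) = 0; keep the filter finite
        F = np.fft.fftn(fieldmap)
        return np.real(np.fft.ifftn(F / D_trunc))

    # toy usage: recover a synthetic susceptibility cube from its simulated field map
    chi = np.zeros((64, 64, 64))
    chi[28:36, 28:36, 28:36] = 1e-6
    field = np.real(np.fft.ifftn(np.fft.fftn(chi) * dipole_kernel(chi.shape)))
    chi_rec = susceptibility_tkd(field)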
An improved method for polarimetric image restoration in interferometry
NASA Astrophysics Data System (ADS)
Pratley, Luke; Johnston-Hollitt, Melanie
2016-11-01
Interferometric radio astronomy data require the effects of limited coverage in the Fourier plane to be accounted for via a deconvolution process. For the last 40 years this process, known as `cleaning', has been performed almost exclusively on all Stokes parameters individually, as if they were independent scalar images. However, here we demonstrate that, for the case of the linear polarization P, this approach fails to properly account for its complex vector nature, resulting in a process that depends on the axes under which the deconvolution is performed. We present an improved method, `Generalized Complex CLEAN', which properly accounts for the complex vector nature of polarized emission and is invariant under rotations of the deconvolution axes. We use two Australia Telescope Compact Array data sets to test standard and complex CLEAN versions of the Högbom and SDI (Steer-Dewdney-Ito) CLEAN algorithms. We show that in general the complex CLEAN version of each algorithm produces more accurate clean components with fewer spurious detections and, owing to the reduced number of iterations, lower computational cost than the current methods. In particular, we find that the complex SDI CLEAN produces the best results for diffuse polarized sources as compared with standard CLEAN algorithms and other complex CLEAN algorithms. Given the move to wide-field, high-resolution polarimetric imaging with future telescopes such as the Square Kilometre Array, we suggest that Generalized Complex CLEAN should be adopted as the deconvolution method for all future polarimetric surveys, and in particular that the complex version of an SDI CLEAN should be used.
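The essence of treating P = Q + iU as a single complex image can be conveyed by a minimal Högbom-style loop that subtracts complex-scaled copies of the dirty beam. This is an illustrative sketch only, not the Generalized Complex CLEAN implementation described above; the gain, iteration cap, wrap-around beam shift and array names are assumptions.

    import numpy as np

    def hogbom_complex_clean(dirty, psf, gain=0.1, n_iter=500, threshold=0.0):
        """Högbom-style CLEAN on a complex polarization image P = Q + iU.

        dirty : complex 2-D dirty image
        psf   : real 2-D dirty beam, peak normalized to 1, same shape as dirty
        Returns the complex clean-component image and the residual.
        """
        residual = dirty.copy()
        components = np.zeros_like(dirty)
        cy, cx = np.unravel_index(np.argmax(psf), psf.shape)   # beam peak position

        for _ in range(n_iter):
            iy, ix = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
            peak = residual[iy, ix]                 # complex peak keeps the Q/U phase
            if np.abs(peak) <= threshold:
                break
            components[iy, ix] += gain * peak
            # shift the beam onto the peak (with wrap-around) and subtract a
            # complex-scaled copy from the residual
            shifted = np.roll(np.roll(psf, iy - cy, axis=0), ix - cx, axis=1)
            residual -= gain * peak * shifted

        return components, residual

Because the subtracted component carries the complex peak value, the result is unchanged if Q + iU is multiplied by a global phase, which is the rotational invariance of the deconvolution axes that the standard per-Stokes approach lacks.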
NASA Astrophysics Data System (ADS)
Meresescu, Alina G.; Kowalski, Matthieu; Schmidt, Frédéric; Landais, François
2018-06-01
The Water Residence Time distribution is the equivalent of the impulse response of a linear system allowing the propagation of water through a medium, e.g. the propagation of rain water from the top of the mountain towards the aquifers. We consider the output aquifer levels as the convolution between the input rain levels and the Water Residence Time, starting with an initial aquifer base level. The estimation of Water Residence Time is important for a better understanding of hydro-bio-geochemical processes and mixing properties of wetlands used as filters in ecological applications, as well as protecting fresh water sources for wells from pollutants. Common methods of estimating the Water Residence Time focus on cross-correlation, parameter fitting and non-parametric deconvolution methods. Here we propose a 1D full-deconvolution, regularized, non-parametric inverse problem algorithm that enforces smoothness and uses constraints of causality and positivity to estimate the Water Residence Time curve. Compared to Bayesian non-parametric deconvolution approaches, it has a fast runtime per test case; compared to the popular and fast cross-correlation method, it produces a more precise Water Residence Time curve even in the case of noisy measurements. The algorithm needs only one regularization parameter to balance between smoothness of the Water Residence Time and accuracy of the reconstruction. We propose an approach on how to automatically find a suitable value of the regularization parameter from the input data only. Tests on real data illustrate the potential of this method to analyze hydrological datasets.
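A minimal version of such a regularized, non-parametric 1D deconvolution can be written as a non-negative least-squares problem with a smoothness penalty. The sketch below assumes evenly sampled rain and aquifer-level series with the initial base level already removed, and uses a generic second-difference regularizer rather than the authors' specific formulation; all names are illustrative.

    import numpy as np
    from scipy.linalg import toeplitz
    from scipy.optimize import nnls

    def estimate_wrt(rain, aquifer, n_kernel, lam=1.0):
        """Estimate a causal, positive Water Residence Time curve h such that
        aquifer ≈ rain * h, with a second-difference smoothness penalty lam."""
        rain = np.asarray(rain, dtype=float)
        aquifer = np.asarray(aquifer, dtype=float)
        n = len(aquifer)
        # causal convolution matrix: column j is the rain series delayed by j samples
        first_col = rain[:n]
        first_row = np.zeros(n_kernel)
        first_row[0] = rain[0]
        A = toeplitz(first_col, first_row)
        # second-difference operator enforcing smoothness of h
        D = np.diff(np.eye(n_kernel), n=2, axis=0)
        A_aug = np.vstack([A, np.sqrt(lam) * D])
        b_aug = np.concatenate([aquifer, np.zeros(D.shape[0])])
        h, _ = nnls(A_aug, b_aug)       # non-negativity enforces positivity
        return h

    # toy usage with a synthetic exponential residence-time curve
    rng = np.random.default_rng(0)
    t = np.arange(200)
    true_h = np.exp(-t[:60] / 10.0)
    true_h /= true_h.sum()
    rain = rng.gamma(2.0, 1.0, size=200)
    aquifer = np.convolve(rain, true_h)[:200] + 0.01 * rng.standard_normal(200)
    h_est = estimate_wrt(rain, aquifer, n_kernel=60, lam=5.0)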
Fiori, Simone
2003-12-01
In recent work, we introduced nonlinear adaptive activation function (FAN) artificial neuron models, which learn their activation functions in an unsupervised way by information-theoretic adapting rules. We also applied networks of these neurons to some blind signal processing problems, such as independent component analysis and blind deconvolution. The aim of this letter is to study some fundamental aspects of FAN units' learning by investigating the properties of the associated learning differential equation systems.
NASA Technical Reports Server (NTRS)
Carpenter, Paul; Curreri, Peter A. (Technical Monitor)
2002-01-01
This course will cover practical applications of the energy-dispersive spectrometer (EDS) to x-ray microanalysis. Topics covered will include detector technology, advances in pulse processing, resolution and performance monitoring, detector modeling, peak deconvolution and fitting, qualitative and quantitative analysis, compositional mapping, and standards. An emphasis will be placed on use of the EDS for quantitative analysis, with discussion of typical problems encountered in the analysis of a wide range of materials and sample geometries.
Image analysis of the AXAF VETA-I x ray mirror
NASA Technical Reports Server (NTRS)
Freeman, Mark D.; Hughes, John P; Vanspeybroeck, L.; Weisskopf, M.; Bilbro, J.
1992-01-01
Initial core scan data of the VETA-I x-ray mirror proved disappointing, showing considerable unpredicted image structure and poor measured FWHM. 2-D core scans were performed, providing important insight into the nature of the distortion. Image deconvolution using a ray-traced model PSF was performed successfully, reinforcing our conclusion regarding the origin of the astigmatism. A mechanical correction was made to the optical structure, and as a result the mirror was tested successfully (FWHM 0.22 arcsec).
Design of Geopolymeric Materials Based on Nanostructural Characterization and Modeling
2006-04-01
composition M2O·mAl2O3·nSiO2, usually with m ≈ 1 and 2 ≤ n ≤ 6, and where M represents one or more alkali metals. Some geopolymers also contain alkaline...shown to have interesting and potentially very useful properties. Geopolymer-calcium phosphate composites are also being investigated for potential...
[Figure 5: Deconvoluted 29Si MAS NMR spectra of Na-geopolymers with compositions (a) x = 0.535, (b) ...; chemical shift axis in ppm from TMS.]
Lütkenhöner, Bernd
2015-10-06
The vestibular evoked myogenic potential (VEMP) can be modelled reasonably well by convolving two functions: one representing an average motor unit action potential (MUAP), the other representing the temporal modulation of the MUAP rate (rate modulation). It is the latter which contains the information of interest, and so it would be desirable to be able to estimate this function from a combination of the VEMP with some other data. As the VEMP is simply a stimulus-triggered average of the electromyogram (EMG), a supplementary, easily accessible source of information is the EMG power spectrum, which can be shown to be roughly proportional to the squared modulus of the Fourier transform of the MUAP. But no phase information is available for the MUAP so that a straightforward deconvolution is not possible. To get around the problem of incomplete information, the rate modulation is described by a thoughtfully chosen function with just a few adjustable parameters. The convolution model is then used to make predictions as to the energy spectral density of the VEMP, and the parameters are optimized using a cost function that quantifies the difference between model prediction and data. The workability of the proposed approach is demonstrated by analysing Monte Carlo simulated data and exemplary data from patients who underwent VEMP testing as part of a clinical evaluation of their dizziness symptoms. The approach is suited, for example, to estimate the duration of the inhibition causing the VEMP or to disentangle a VEMP consisting of more than one component.
Pomerantsev, Alexey L; Kutsenova, Alla V; Rodionova, Oxana Ye
2017-02-01
A novel non-linear regression method for modeling non-isothermal thermogravimetric data is proposed. Experiments for several heating rates are analyzed simultaneously. The method is applicable to complex multi-stage processes when the number of stages is unknown. Prior knowledge of the type of kinetics is not required. The main idea is a consequent estimation of parameters when the overall model is successively changed from one level of modeling to another. At the first level, the Avrami-Erofeev functions are used. At the second level, the Sestak-Berggren functions are employed with the goal to broaden the overall model. The method is tested using both simulated and real-world data. A comparison of the proposed method with a recently published 'model-free' deconvolution method is presented.
Uncertainty analysis of signal deconvolution using a measured instrument response function
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartouni, E. P.; Beeman, B.; Caggiano, J. A.
2016-10-05
A common analysis procedure minimizes the ln-likelihood that a set of experimental observables matches a parameterized model of the observation. The model includes a description of the underlying physical process as well as the instrument response function (IRF). Here, we investigate the National Ignition Facility (NIF) neutron time-of-flight (nTOF) spectrometers, for which the IRF is constructed from measurements and models. IRF measurements have a finite precision that can make significant contributions to the uncertainty estimate of the physical model's parameters. Finally, we apply a Bayesian analysis to properly account for IRF uncertainties in calculating the ln-likelihood function used to find the optimum physical parameters.
Snieder, R.; Safak, E.
2006-01-01
The motion of a building depends on the excitation, the coupling of the building to the ground, and the mechanical properties of the building. We separate the building response from the excitation and the ground coupling by deconvolving the motion recorded at different levels in the building, and apply this to recordings of the motion in the Robert A. Millikan Library in Pasadena, California. This deconvolution allows for the separation of intrinsic attenuation and radiation damping. The waveforms obtained from deconvolution with the motion in the top floor show a superposition of one upgoing and one downgoing wave. The waveforms obtained by deconvolution with the motion in the basement can be formulated either as a sum of upgoing and downgoing waves, or as a sum over normal modes. Because these deconvolved waves have a monochromatic character at late times, they are most easily analyzed with normal-mode theory. For this building we estimate a shear velocity c = 322 m/sec and a quality factor Q = 20. These values explain both the propagating waves and the normal modes.
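The deconvolution of the motion at one floor by the motion at a reference floor reduces to a regularized spectral division; the short sketch below uses a water-level stabilizer, which is a common generic choice and not necessarily the one used in the study. Applied floor by floor with the top-floor record as reference, the output shows the travelling waves described above; with the basement record as reference it shows the normal-mode ringing.

    import numpy as np

    def deconvolve_motion(u_level, u_ref, eps_frac=0.01):
        """Deconvolve the motion at one level by the motion at a reference level
        (e.g. top floor or basement) using water-level regularized division.

        u_level, u_ref : equal-length time series sampled at the same rate
        eps_frac       : water level as a fraction of the mean reference power
        """
        U = np.fft.rfft(u_level)
        R = np.fft.rfft(u_ref)
        power = np.abs(R) ** 2
        eps = eps_frac * power.mean()            # stabilizes near-zero spectral values
        D = U * np.conj(R) / (power + eps)
        return np.fft.irfft(D, n=len(u_level))   # band-limited impulse response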
Real-time blind deconvolution of retinal images in adaptive optics scanning laser ophthalmoscopy
NASA Astrophysics Data System (ADS)
Li, Hao; Lu, Jing; Shi, Guohua; Zhang, Yudong
2011-06-01
With the use of adaptive optics (AO), ocular aberrations can be compensated to obtain high-resolution images of the living human retina. However, the wavefront correction is not perfect, owing to wavefront measurement error and hardware restrictions. Thus, it is necessary to use a deconvolution algorithm to recover the retinal images. In this paper, a blind deconvolution technique called the Incremental Wiener filter is used to restore adaptive optics confocal scanning laser ophthalmoscope (AOSLO) images. The point-spread function (PSF) measured by the wavefront sensor is only used as an initial value of our algorithm. We also implement the Incremental Wiener filter on a graphics processing unit (GPU) in real time. When the image size is 512 × 480 pixels, six iterations of our algorithm take only about 10 ms. Retinal blood vessels as well as cells in the retinal images are restored by our algorithm, and the PSFs are also revised. Retinal images with and without adaptive optics are both restored. The results show that the Incremental Wiener filter reduces noise and improves image quality.
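For reference, a classical (non-blind) Wiener deconvolution with a fixed PSF is sketched below; the Incremental Wiener filter used in the paper additionally revises the PSF between iterations, which this sketch omits. The noise-to-signal ratio value and the padding scheme are assumptions.

    import numpy as np

    def wiener_deconvolve(image, psf, nsr=1e-2):
        """Classical Wiener deconvolution of a 2-D image with a known PSF.

        nsr is an assumed noise-to-signal power ratio; in the AOSLO setting the
        initial PSF would come from the wavefront-sensor measurement."""
        psf_pad = np.zeros_like(image, dtype=float)
        ph, pw = psf.shape
        psf_pad[:ph, :pw] = psf
        # center the PSF so the restored image is not shifted
        psf_pad = np.roll(psf_pad, (-(ph // 2), -(pw // 2)), axis=(0, 1))
        H = np.fft.fft2(psf_pad)
        G = np.fft.fft2(image)
        W = np.conj(H) / (np.abs(H) ** 2 + nsr)   # Wiener filter in the Fourier domain
        return np.real(np.fft.ifft2(W * G))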
Gainer, Christian F; Utzinger, Urs; Romanowski, Marek
2012-07-01
The use of upconverting lanthanide nanoparticles in fast-scanning microscopy is hindered by a long luminescence decay time, which greatly blurs images acquired in a nondescanned mode. We demonstrate herein an image processing method based on Richardson-Lucy deconvolution that mitigates the detrimental effects of their luminescence lifetime. This technique generates images with lateral resolution on par with the system's performance, ∼1.2 μm, while maintaining an axial resolution of 5 μm or better at a scan rate comparable with traditional two-photon microscopy. Remarkably, this can be accomplished with near infrared excitation power densities of 850 W/cm², several orders of magnitude below those used in two-photon imaging with molecular fluorophores. By way of illustration, we introduce the use of lipids to coat and functionalize these nanoparticles, rendering them water dispersible and readily conjugated to biologically relevant ligands, in this case epidermal growth factor receptor antibody. This deconvolution technique combined with the functionalized nanoparticles will enable three-dimensional functional tissue imaging at exceptionally low excitation power densities.
Spatial studies of planetary nebulae with IRAS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hawkins, G.W.; Zuckerman, B.
1991-06-01
The infrared sizes at the four IRAS wavelengths of 57 planetaries, most with 20-60 arcsec optical size, are derived from spatial deconvolution of one-dimensional survey mode scans. Survey observations from multiple detectors and hours-confirmed (HCON) observations are combined to increase the sampling to a rate that is sufficient for successful deconvolution. The Richardson-Lucy deconvolution algorithm is used to obtain an increase in resolution by a factor of about 2 or 3 over the normal IRAS detector sizes of 45, 45, 90, and 180 arcsec at wavelengths 12, 25, 60, and 100 microns. Most of the planetaries deconvolve at 12 and 25 microns to sizes equal to or smaller than the optical size. Some of the planetaries with optical rings 60 arcsec or more in diameter show double-peaked IRAS profiles. Many, such as NGC 6720 and NGC 6543, show all infrared sizes equal to the optical size, while others indicate increasing infrared size with wavelength. Deconvolved IRAS profiles are presented for the 57 planetaries at nearly all wavelengths where IRAS flux densities are 1-2 Jy or higher. 60 refs.
A stopping criterion to halt iterations at the Richardson-Lucy deconvolution of radiographic images
NASA Astrophysics Data System (ADS)
Almeida, G. L.; Silvani, M. I.; Souza, E. S.; Lopes, R. T.
2015-07-01
Radiographic images, like any experimentally acquired images, are affected by spoiling agents which degrade their final quality. The degradation caused by agents of a systematic character can be reduced by some kind of treatment, such as an iterative deconvolution. This approach requires two parameters, namely the system resolution and the best number of iterations to achieve the best final image. This work proposes a novel procedure to estimate the best number of iterations, which replaces cumbersome visual inspection with a comparison of numbers. These numbers are deduced from the image histograms, taking into account the global difference G between them for two subsequent iterations. The developed algorithm, including a Richardson-Lucy deconvolution procedure, has been embodied into a Fortran program capable of plotting the first derivative of G as the processing progresses and of stopping it automatically when this derivative - within the data dispersion - reaches zero. The radiograph of a specially chosen object, acquired with thermal neutrons from the Argonauta research reactor at the Instituto de Engenharia Nuclear - CNEN, Rio de Janeiro, Brazil, has undergone this treatment with fair results.
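A compact sketch of the idea, in Python rather than the Fortran implementation described above: run Richardson-Lucy iterations and stop once the global histogram difference G between successive iterates stops changing. The bin count, tolerance and convergence test below are illustrative choices, not those of the paper.

    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy_autostop(image, psf, max_iter=200, bins=256, tol=1e-4):
        """Richardson-Lucy deconvolution that halts when the global histogram
        difference G between successive iterates stops changing (dG ~ 0)."""
        psf_mirror = psf[::-1, ::-1]
        estimate = np.full_like(image, image.mean(), dtype=float)
        prev_hist = None
        prev_G = None
        for it in range(max_iter):
            blurred = fftconvolve(estimate, psf, mode="same")
            ratio = image / np.maximum(blurred, 1e-12)
            estimate *= fftconvolve(ratio, psf_mirror, mode="same")

            hist, _ = np.histogram(estimate, bins=bins,
                                   range=(image.min(), image.max()))
            if prev_hist is not None:
                G = np.abs(hist - prev_hist).sum()      # global histogram difference
                if prev_G is not None and abs(G - prev_G) < tol * max(prev_G, 1):
                    break                               # 1st derivative of G ~ zero
                prev_G = G
            prev_hist = hist
        return estimate, it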
Wang, Jian; Chen, Hong-Ping; Liu, You-Ping; Wei, Zheng; Liu, Rong; Fan, Dan-Qing
2013-05-01
This experiment shows how to use the automated mass spectral deconvolution and identification system (AMDIS) to deconvolve overlapping peaks in the total ion chromatogram (TIC) of volatile oil from Chinese materia medica (CMM). The essential oil was obtained by steam distillation. Its TIC was obtained by GC-MS, and the superimposed peaks in the TIC were deconvolved by AMDIS. First, AMDIS can detect the number of components in the TIC through its run function. Then, by comparing the extracted spectrum at the scan point corresponding to a detected component, the original spectrum at that scan point, and their counterpart spectra in the reference MS library, researchers can ascertain the component's structure accurately or rule out compounds that do not occur in nature. Furthermore, by examining the variability of the characteristic fragment-ion peaks of the identified compounds, the previous assignments can be confirmed. The results demonstrate that AMDIS can efficiently deconvolve overlapping peaks in the TIC by extracting the spectrum at the scan point matching each detected component, leading to exact identification of the component's structure.
Two-dimensional imaging of two types of radicals by the CW-EPR method
NASA Astrophysics Data System (ADS)
Czechowski, Tomasz; Krzyminiewski, Ryszard; Jurga, Jan; Chlewicki, Wojciech
2008-01-01
The CW-EPR method of image reconstruction is based on sample rotation in a magnetic field with a constant gradient (50 G/cm). In order to obtain a projection (radical density distribution) along a given direction, the EPR spectra are recorded with and without the gradient. Deconvolution then gives the distribution of the spin density. Projections at 36 different angles give the information that is necessary for reconstruction of the radical distribution. The problem becomes more complex when there are at least two types of radicals in the sample, because the deconvolution procedure alone does not give satisfactory results. We propose a method to calculate the projections for each radical, based on iterative procedures. The images of the density distribution for each radical obtained by our procedure prove that the method of deconvolution, in combination with iterative fitting, provides correct results. The test was performed on a sample of the polymer PPS Br 111 (p-phenylene sulphide) with glass fibres and minerals. The results indicated a heterogeneous distribution of radicals in the sample volume. The images obtained were in agreement with the known shape of the sample.
Liu, Yunbo; Wear, Keith A; Harris, Gerald R
2017-10-01
Reliable acoustic characterization is fundamental for patient safety and clinical efficacy during high-intensity therapeutic ultrasound (HITU) treatment. Technical challenges, such as measurement variation and signal analysis, still exist for HITU exposimetry using ultrasound hydrophones. In this work, four hydrophones were compared for pressure measurement: a robust needle hydrophone, a small polyvinylidene fluoride capsule hydrophone and two fiberoptic hydrophones. The focal waveform and beam distribution of a single-element HITU transducer (1.05 MHz and 3.3 MHz) were evaluated. Complex deconvolution between the hydrophone voltage signal and frequency-dependent complex sensitivity was performed to obtain pressure waveforms. Compressional pressure (p+), rarefactional pressure (p-) and focal beam distribution were compared up to 10.6/-6.0 MPa (p+/p-) (1.05 MHz) and 20.65/-7.20 MPa (3.3 MHz). The effects of spatial averaging, local non-linear distortion, complex deconvolution and hydrophone damage thresholds were investigated. This study showed a variation of no better than 10%-15% among hydrophones during HITU pressure characterization. Published by Elsevier Inc.
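The complex deconvolution step amounts to dividing the voltage spectrum by the frequency-dependent complex sensitivity and transforming back. The sketch below assumes a calibrated sensitivity table (frequency in Hz, complex sensitivity in V/Pa, sorted by frequency) and adds a simple magnitude floor outside the calibrated band; the floor and the interpolation scheme are assumptions, not the authors' procedure.

    import numpy as np

    def voltage_to_pressure(voltage, fs, sens_freq, sens_complex, floor=1e-3):
        """Convert a hydrophone voltage waveform to pressure by complex
        deconvolution with the frequency-dependent sensitivity (V/Pa)."""
        n = len(voltage)
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        # interpolate magnitude and (unwrapped) phase onto the FFT frequency grid
        mag = np.interp(freqs, sens_freq, np.abs(np.asarray(sens_complex)))
        phase = np.interp(freqs, sens_freq,
                          np.unwrap(np.angle(np.asarray(sens_complex))))
        # magnitude floor limits noise amplification outside the calibrated band
        sens = np.maximum(mag, floor * mag.max()) * np.exp(1j * phase)
        V = np.fft.rfft(voltage)
        return np.fft.irfft(V / sens, n=n)   # pressure waveform in Pa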
Blind image deconvolution using the Fields of Experts prior
NASA Astrophysics Data System (ADS)
Dong, Wende; Feng, Huajun; Xu, Zhihai; Li, Qi
2012-11-01
In this paper, we present a method for single-image blind deconvolution. To mitigate its ill-posedness, we formulate the problem under a Bayesian probabilistic framework and use a prior named Fields of Experts (FoE), which is learnt from natural images, to regularize the latent image. Furthermore, owing to the sparse distribution of the point spread function (PSF), we adopt a Student-t prior to regularize it. An improved alternating minimization (AM) approach is proposed to solve the resulting optimization problem. Experiments on both synthetic and real-world blurred images show that the proposed method can achieve results of high quality.
Application of the Lucy–Richardson Deconvolution Procedure to High Resolution Photoemission Spectra
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rameau, J.; Yang, H.-B.; Johnson, P.D.
2010-07-01
Angle-resolved photoemission has developed into one of the leading probes of the electronic structure and associated dynamics of condensed matter systems. As with any experimental technique the ability to resolve features in the spectra is ultimately limited by the resolution of the instrumentation used in the measurement. Previously developed for sharpening astronomical images, the Lucy-Richardson deconvolution technique proves to be a useful tool for improving the photoemission spectra obtained in modern hemispherical electron spectrometers where the photoelectron spectrum is displayed as a 2D image in energy and momentum space.
An introduction to chaotic and random time series analysis
NASA Technical Reports Server (NTRS)
Scargle, Jeffrey D.
1989-01-01
The origin of chaotic behavior and the relation of chaos to randomness are explained. Two mathematical results are described: (1) a representation theorem guarantees the existence of a specific time-domain model for chaos and addresses the relation between chaotic, random, and strictly deterministic processes; (2) a theorem assures that information on the behavior of a physical system in its complete state space can be extracted from time-series data on a single observable. Focus is placed on an important connection between the dynamical state space and an observable time series. These two results lead to a practical deconvolution technique combining standard random process modeling methods with new embedded techniques.
Identification of Experimental Unsteady Aerodynamic Impulse Responses
NASA Technical Reports Server (NTRS)
Silva, Walter A.; Piatak, David J.; Scott, Robert C.
2003-01-01
The identification of experimental unsteady aerodynamic impulse responses using the Oscillating Turntable (OTT) at NASA Langley's Transonic Dynamics Tunnel (TDT) is described. Results are presented for two configurations: a Rigid Semispan Model (RSM) and a rectangular wing with a supercritical airfoil section. Both models were used to acquire unsteady pressure data due to pitching oscillations on the OTT. A deconvolution scheme involving a step input in pitch and the resultant step response in pressure, for several pressure transducers, is used to identify the pressure impulse responses. The identified impulse responses are then used to predict the pressure response due to pitching oscillations at several frequencies. Comparisons with the experimental data are presented.
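For a linear, time-invariant system the impulse response is the time derivative of the step response, and the response to an arbitrary pitch history then follows by convolution. The toy sketch below illustrates that chain with an assumed first-order step response and a 5 Hz pitching input; it is not the OTT data or the authors' identification code.

    import numpy as np

    def impulse_from_step(step_response, dt):
        """Impulse response as the time derivative of a measured step response
        (linear time-invariant assumption)."""
        return np.gradient(step_response, dt)

    def predict_response(impulse_response, pitch_input, dt):
        """Predict the pressure response to an arbitrary pitch history by
        discrete convolution with the identified impulse response."""
        return np.convolve(pitch_input, impulse_response)[: len(pitch_input)] * dt

    # toy usage: predict the response to a sinusoidal pitch oscillation
    dt = 1e-3
    t = np.arange(0, 2.0, dt)
    step_resp = 1.0 - np.exp(-t / 0.05)                  # assumed first-order system
    h = impulse_from_step(step_resp, dt)
    pitch = np.sin(2 * np.pi * 5.0 * t)                  # 5 Hz pitching oscillation
    pressure = predict_response(h, pitch, dt)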
Accounting for pharmacokinetic differences in dual-tracer receptor density imaging
Tichauer, K M; Diop, M; Elliott, J T; Samkoe, K S; Hasan, T; St. Lawrence, K; Pogue, B W
2014-01-01
Dual-tracer molecular imaging is a powerful approach to quantify receptor expression in a wide range of tissues by using an untargeted tracer to account for any nonspecific uptake of a molecular-targeted tracer. This approach has previously required the pharmacokinetics of the receptor-targeted and untargeted tracers to be identical, requiring careful selection of an ideal untargeted tracer for any given targeted tracer. In this study, methodology capable of correcting for tracer differences in arterial input functions, as well as binding-independent delivery and retention, is derived and evaluated in a mouse U251 glioma xenograft model using an Affibody tracer targeted to epidermal growth factor receptor (EGFR), a cell membrane receptor overexpressed in many cancers. Simulations demonstrated that blood, and to a lesser extent vascular-permeability, pharmacokinetic differences between targeted and untargeted tracers could be quantified by deconvolving the uptakes of the two tracers in a region of interest devoid of targeted tracer binding, and therefore corrected for, by convolving the uptake of the untargeted tracer in all regions of interest by the product of the deconvolution. Using fluorescently labelled, EGFR-targeted and untargeted Affibodies (known to have different blood clearance rates), the average tumor concentration of EGFR in 4 mice was estimated using dual-tracer kinetic modelling to be 3.9 ± 2.4 nM compared to an expected concentration of 2.0 ± 0.4 nM. However, with deconvolution correction a more equivalent EGFR concentration of 2.0 ± 0.4 nM was measured. PMID:24743262
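The correction described above can be sketched in two steps: deconvolve the targeted by the untargeted uptake in a binding-free reference region to obtain a transfer function capturing their pharmacokinetic differences, then convolve the untargeted uptake in every other region with that function. The regularized Fourier division below is a generic choice; the function names and the stabilization parameter are illustrative and not the authors' implementation.

    import numpy as np

    def transfer_from_reference(targeted_ref, untargeted_ref, eps_frac=0.01):
        """Deconvolve the targeted-tracer uptake by the untargeted-tracer uptake in
        a reference region with no specific binding; the result captures their
        input-function and delivery/retention differences."""
        T = np.fft.rfft(targeted_ref)
        U = np.fft.rfft(untargeted_ref)
        power = np.abs(U) ** 2
        H = T * np.conj(U) / (power + eps_frac * power.mean())
        return np.fft.irfft(H, n=len(targeted_ref))

    def correct_untargeted(untargeted_roi, transfer):
        """Convolve the untargeted uptake in any region of interest with the
        reference-region transfer function so it mimics targeted-tracer kinetics."""
        return np.convolve(untargeted_roi, transfer)[: len(untargeted_roi)]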
Wave optics theory and 3-D deconvolution for the light field microscope
Broxton, Michael; Grosenick, Logan; Yang, Samuel; Cohen, Noy; Andalman, Aaron; Deisseroth, Karl; Levoy, Marc
2013-01-01
Light field microscopy is a new technique for high-speed volumetric imaging of weakly scattering or fluorescent specimens. It employs an array of microlenses to trade off spatial resolution against angular resolution, thereby allowing a 4-D light field to be captured using a single photographic exposure without the need for scanning. The recorded light field can then be used to computationally reconstruct a full volume. In this paper, we present an optical model for light field microscopy based on wave optics, instead of previously reported ray optics models. We also present a 3-D deconvolution method for light field microscopy that is able to reconstruct volumes at higher spatial resolution, and with better optical sectioning, than previously reported. To accomplish this, we take advantage of the dense spatio-angular sampling provided by a microlens array at axial positions away from the native object plane. This dense sampling permits us to decode aliasing present in the light field to reconstruct high-frequency information. We formulate our method as an inverse problem for reconstructing the 3-D volume, which we solve using a GPU-accelerated iterative algorithm. Theoretical limits on the depth-dependent lateral resolution of the reconstructed volumes are derived. We show that these limits are in good agreement with experimental results on a standard USAF 1951 resolution target. Finally, we present 3-D reconstructions of pollen grains that demonstrate the improvements in fidelity made possible by our method. PMID:24150383
Automated detection of arterial input function in DSC perfusion MRI in a stroke rat model
NASA Astrophysics Data System (ADS)
Yeh, M.-Y.; Lee, T.-H.; Yang, S.-T.; Kuo, H.-H.; Chyi, T.-K.; Liu, H.-L.
2009-05-01
Quantitative cerebral blood flow (CBF) estimation requires deconvolution of the tissue concentration time curves with an arterial input function (AIF). However, image-based determination of the AIF in rodents is challenging due to limited spatial resolution. We evaluated the feasibility of quantitative analysis using automated AIF detection and compared the results with the commonly applied semi-quantitative analysis. Permanent occlusion of the bilateral or unilateral common carotid artery was used to induce cerebral ischemia in rats. Imaging using the dynamic susceptibility contrast method was performed on a 3-T magnetic resonance scanner with a spin-echo echo-planar-image sequence (TR/TE = 700/80 ms, FOV = 41 mm, matrix = 64, 3 slices, SW = 2 mm), starting from 7 s prior to contrast injection (1.2 ml/kg), at four different time points. For the quantitative analysis, CBF was calculated by deconvolution with the AIF, which was obtained from the 10 voxels with the greatest contrast enhancement. For the semi-quantitative analysis, relative CBF was estimated as the integral divided by the first moment of the relaxivity time curves. We observed that if the AIFs obtained in the three different ROIs (whole brain, hemisphere without lesion and hemisphere with lesion) were similar, the CBF ratios (lesion/normal) from the quantitative and semi-quantitative analyses showed a similar trend at the different operative time points; if the AIFs were different, the CBF ratios could differ. We conclude that, using local maxima, one can define a proper AIF without knowing the anatomical location of the arteries in a stroke rat model.
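The semi-quantitative estimate used above is a one-line computation once the relaxivity-time curve is available. The sketch below assumes a baseline-corrected, uniformly sampled ΔR2* curve and interprets the "first moment" as the area-normalized first moment (a mean-transit-time surrogate), which is one common convention rather than necessarily the authors' definition.

    import numpy as np

    def relative_cbf(delta_r2star, t):
        """Semi-quantitative relative CBF: area under the relaxivity-time curve
        divided by its area-normalized first moment (a mean-transit-time surrogate)."""
        dt = t[1] - t[0]                              # assumes uniform sampling
        area = np.sum(delta_r2star) * dt              # ~ relative blood volume
        mtt = np.sum(t * delta_r2star) * dt / area    # normalized first moment
        return area / mtt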
NASA Technical Reports Server (NTRS)
Velusamy, T.; Langer, William D.; Marsh, Kenneth. A.
2007-01-01
We present new details of the structure and morphology of the jets and outflows in HH 46/47 as seen in Spitzer infrared images from IRAC and MIPS, reprocessed using the 'HiRes' deconvolution technique. HiRes improves the visualization of spatial morphology by enhancing resolution (to subarcsecond levels in IRAC bands) and removing the contaminating side lobes from bright sources. In addition to sharper views of previously reported bow shocks, we have detected (1) the sharply delineated cavity walls of the wide-angle biconical outflow, seen in scattered light on both sides of the protostar, (2) several very narrow jet features at distances approximately 400 AU to approximately 0.1 pc from the star, and (3) compact emissions at MIPS 24 μm with the jet heads, tracing the hottest atomic/ionic gas in the bow shocks. Together the IRAC and MIPS images provide a more complete picture of the bow shocks, tracing both the molecular and atomic/ionic gases, respectively. The narrow width and alignment of all jet-related features indicate a high degree of jet collimation and low divergence (width of approximately 400 AU increasing by only a factor of 2.3 over 0.2 pc). The morphology of this jet, bow shocks, wide-angle outflows, and the fact that the jet is nonprecessing and episodic, constrain the mechanisms for producing the jet's entrained molecular gas, and the origins of the fast jet and slower wide-angle outflow.
Deep HST imaging of distant weak radio and field galaxies
NASA Technical Reports Server (NTRS)
Windhorst, R. A.; Gordon, J. M.; Pascarelle, S. M.; Schmidtke, P. C.; Keel, W. C.; Burkey, J. M.; Dunlop, J. S.
1994-01-01
We present deep Hubble Space Telescope (HST) Wide-Field Camera (WFC) V- and I-band images of three distant weak radio galaxies with z = 0.311-2.390 and seven field galaxies with z = 0.131-0.58. The images were deconvolved with both the Lucy and multiresolution CLEAN methods, which yield a restoring Full Width at Half Maximum (FWHM) of ≤ 0.2 arcsec, (nearly) preserve photons and signal-to-noise ratio at low spatial frequencies, and produce consistent light profiles down to our 2σ surface brightness sensitivity limit of V ≈ 27.2 and I ≈ 25.9 mag/sq arcsec. Multi-component image modeling was used to provide deconvolution-independent estimates of structural parameters for symmetric galaxies. We present 12-band (m_2750 UBVRIgriJHK) photometry for a subset of the galaxies and bootstrap the unknown FOC/48 zero point at 2750 Å in three independent ways (yielding m_2750 = 21.34 ± 0.09 mag for 1.0 e-/s). Two radio galaxies with z = 0.311 and 0.528, as well as one field galaxy with z = 0.58, have the colors and spectra of early-type galaxies, and a^(1/4)-like light profiles in the HST images. The two at z > 0.5 have little or no color gradients in V - I and are likely giant ellipticals, while the z = 0.311 radio galaxy has a dim exponential disk and is likely an S0. Six of the seven field galaxies have light profiles that indicate (small) inner bulges following a^(1/4) laws and outer exponential disks, both with little or no color gradients. These are (early-type) spiral galaxies with z = 0.131-0.528. About half have faint companions or bars. One shows lumpy structure, possibly a merger. The compact narrow-line galaxy 53W002 at z = 2.390 has ≤ 30% ± 10% of its HST V and I flux in the central kiloparsec (due to its weak Active Galactic Nucleus (AGN)). Most of its light (V ≈ 23.3) occurs in a symmetric envelope with a regular a^(1/4)-like profile of effective radius a ≈ 1.1 arcsec (≈ 12 kpc for H_0 = 50, q_0 = 0.1). Its (HST) V - I color varies at most from ≈ 0.3 mag at a ≈ 0.2 arcsec to ≈ 1.2 mag at a ≳ 0.4 arcsec, and possibly to ≳ 2.2 mag at a ≳ 1.2 arcsec. Together with its I - K color (≈ 2.5 mag for a ≳ 1.0-2.0 arcsec), this is consistent with an aging stellar population approximately 0.3-0.5 Gyr old in the galaxy center (a ≲ 2 kpc radius), and possibly approximately 0.5-1.0 Gyr old at a ≳ 10 kpc radius. While its outer part may thus have started to collapse at z = 2.5-4, its inner part is still aligned with its redshifted Lyα cloud and its radio axis, possibly caused by star formation associated with the radio jet, or by reflection from its AGN cone.
Samanipour, Saer; Reid, Malcolm J; Bæk, Kine; Thomas, Kevin V
2018-04-17
Nontarget analysis is considered one of the most comprehensive tools for the identification of unknown compounds in a complex sample analyzed via liquid chromatography coupled to high-resolution mass spectrometry (LC-HRMS). Due to the complexity of the data generated via LC-HRMS, the data-dependent acquisition mode, which produces the MS2 spectra of a limited number of the precursor ions, has been one of the most common approaches used during nontarget screening. However, data-independent acquisition mode produces highly complex spectra that require proper deconvolution and library search algorithms. We have developed a deconvolution algorithm and a universal library search algorithm (ULSA) for the analysis of complex spectra generated via data-independent acquisition. These algorithms were validated and tested using both semisynthetic and real environmental data. A total of 6000 randomly selected spectra from MassBank were introduced across the total ion chromatograms of 15 sludge extracts at three levels of background complexity for the validation of the algorithms via semisynthetic data. The deconvolution algorithm successfully extracted more than 60% of the added ions in the analytical signal for 95% of processed spectra (i.e., 3 complexity levels multiplied by 6000 spectra). The ULSA ranked the correct spectra among the top three for more than 95% of cases. We further tested the algorithms with 5 wastewater effluent extracts for 59 artificial unknown analytes (i.e., their presence or absence was confirmed via target analysis). These algorithms did not produce any cases of false identifications while correctly identifying ∼70% of the total inquiries. The implications, capabilities, and the limitations of both algorithms are further discussed.
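The final ranking step of any such library search reduces to scoring a binned query spectrum against candidate spectra. The sketch below uses a plain cosine (dot-product) similarity on a fixed m/z grid, which is a generic scheme and not the ULSA scoring function; the bin width and m/z range are arbitrary illustrative values.

    import numpy as np

    def bin_spectrum(mz, intensity, mz_min=50.0, mz_max=1000.0, bin_width=0.01):
        """Place (m/z, intensity) peaks onto a fixed m/z grid."""
        n_bins = int((mz_max - mz_min) / bin_width)
        idx = ((np.asarray(mz) - mz_min) / bin_width).astype(int)
        valid = (idx >= 0) & (idx < n_bins)
        binned = np.zeros(n_bins)
        np.add.at(binned, idx[valid], np.asarray(intensity, dtype=float)[valid])
        return binned

    def cosine_similarity(spec_a, spec_b):
        """Dot-product similarity between two binned spectra (1 = identical)."""
        norm = np.linalg.norm(spec_a) * np.linalg.norm(spec_b)
        return float(spec_a @ spec_b / norm) if norm > 0 else 0.0

    def rank_library(query_binned, library):
        """Rank a dict {name: binned_spectrum} by similarity to the query."""
        scores = {name: cosine_similarity(query_binned, s)
                  for name, s in library.items()}
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)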
Schinkel, Lena; Lehner, Sandro; Knobloch, Marco; Lienemann, Peter; Bogdal, Christian; McNeill, Kristopher; Heeb, Norbert V
2018-03-01
Chlorinated paraffins (CPs) are high-production-volume chemicals widely used as additives in metal-working fluids. Thereby, CPs are exposed to hot metal surfaces, which may induce degradation processes. We hypothesized that the elimination of hydrochloric acid would transform CPs into chlorinated olefins (COs). Mass spectrometry is widely used to detect CPs, mostly in the selected ion monitoring (SIM) mode, evaluating 2-3 ions at mass resolutions R < 20,000. This approach is not suited to detect COs, because their mass spectra strongly overlap with those of CPs. We applied a mathematical deconvolution method based on full-scan MS data to separate interfered CP/CO spectra. Metal drilling indeed induced HCl losses: CO proportions in exposed mixtures of chlorotridecanes increased. Thermal exposure of chlorotridecanes at 160, 180, 200 and 220 °C also induced dehydrohalogenation reactions, and CO proportions again increased. Deconvolution of the respective mass spectra is needed to study the CP transformation kinetics without bias from CO interferences. Apparent first-order rate constants (k_app) increased up to 0.17, 0.29 and 0.46 h⁻¹ for penta-, hexa- and heptachloro-tridecanes exposed at 220 °C. The respective half-life times (τ1/2) decreased from 4.0 to 2.4 and 1.5 h. Thus, higher chlorinated paraffins degrade faster than lower chlorinated ones. In conclusion, exposure of CPs during metal drilling and thermal treatment induced HCl losses and CO formation. It is expected that CPs and COs are co-released from such processes. Full-scan mass spectra and subsequent deconvolution of interfered signals are a promising approach to tackle the CP/CO problem in cases of insufficient mass resolution. Copyright © 2017 Elsevier Ltd. All rights reserved.
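As a quick consistency check, the half-life times quoted above follow directly from the apparent first-order rate constants through τ1/2 = ln 2 / k_app; the snippet below reproduces the reported values (the dictionary keys are only labels, not identifiers from the paper).

    import numpy as np

    # apparent first-order rate constants at 220 °C (per hour), from the abstract
    k_app = {"pentachloro": 0.17, "hexachloro": 0.29, "heptachloro": 0.46}

    for name, k in k_app.items():
        half_life = np.log(2) / k          # tau_1/2 = ln 2 / k for first-order decay
        print(f"{name}-tridecanes: tau_1/2 = {half_life:.1f} h")
    # prints ~4.1, 2.4 and 1.5 h, matching the reported 4.0, 2.4 and 1.5 h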
Continuous monitoring of high-rise buildings using seismic interferometry
NASA Astrophysics Data System (ADS)
Mordret, A.; Sun, H.; Prieto, G. A.; Toksoz, M. N.; Buyukozturk, O.
2016-12-01
The linear seismic response of a building is commonly extracted from ambient vibration measurements. Seismic deconvolution interferometry performed on ambient vibration measurements can also be used to estimate the dynamic characteristics of a building, such as the velocity of shear-waves travelling inside the building as well as a damping parameter depending on the intrinsic attenuation of the building and the soil-structure coupling. The continuous nature of the ambient vibrations allows us to measure these parameters repeatedly and to observe their temporal variations. We used 2 weeks of ambient vibration recorded by 36 accelerometers installed in the Green Building on the Massachusetts Institute of Technology campus (Cambridge, MA) to continuously monitor the shear-wave speed and the attenuation factor of the building. Due to the low strain of the ambient vibrations, the observed changes are totally reversible. The relative velocity changes between a reference deconvolution function and the current deconvolution functions are measured with two different methods: 1) the Moving Window Cross-Spectral technique and 2) the stretching technique. Both methods show similar results. We show that measuring the stretching coefficient for the deconvolution functions filtered around the fundamental mode frequency is equivalent to measuring the wandering of the fundamental frequency in the raw ambient vibration data. By comparing these results with local weather parameters, we show that the relative air humidity is the factor dominating the relative seismic velocity variations in the Green Building, as well as the wandering of the fundamental mode. The one-day periodic variations are affected by both the temperature and the humidity. The attenuation factor, measured as the exponential decay of the fundamental mode waveforms, shows a more complex behaviour with respect to the weather measurements.
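The stretching measurement of a relative velocity change can be illustrated with a brute-force search over trial dilation factors of the time axis. The search range, interpolation scheme and correlation measure below are generic choices, not the processing parameters used in the study; the relative velocity change then follows from the travel-time relation dt/t = -dv/v under the adopted sign convention.

    import numpy as np

    def stretching_factor(reference, current, t, eps_max=0.05, n_eps=201):
        """Find the stretch factor eps that best maps the current waveform onto
        the reference, i.e. reference(t) ~ current(t * (1 - eps)); returns the
        best eps and the corresponding correlation coefficient."""
        trial_eps = np.linspace(-eps_max, eps_max, n_eps)
        best_eps, best_cc = 0.0, -np.inf
        for eps in trial_eps:
            stretched = np.interp(t * (1.0 - eps), t, current)
            cc = np.corrcoef(reference, stretched)[0, 1]
            if cc > best_cc:
                best_eps, best_cc = eps, cc
        return best_eps, best_cc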
Properties of the Residual Stress of the Temporally Filtered Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Pruett, C. D.; Gatski, T. B.; Grosch, C. E.; Thacker, W. D.
2002-01-01
The development of a unifying framework among direct numerical simulations, large-eddy simulations, and statistically averaged formulations of the Navier-Stokes equations is of current interest. Toward that goal, the properties of the residual (subgrid-scale) stress of the temporally filtered Navier-Stokes equations are carefully examined. Causal time-domain filters, parameterized by a temporal filter width Δ with 0 < Δ < ∞, are considered. For several reasons, the differential forms of such filters are preferred to their corresponding integral forms; among these, storage requirements for differential forms are typically much less than for integral forms and, for some filters, are independent of Δ. The behavior of the residual stress in the limits of both vanishing and infinite filter widths is examined. It is shown analytically that, in the limit Δ → 0, the residual stress vanishes, in which case the Navier-Stokes equations are recovered from the temporally filtered equations. Alternately, in the limit Δ → ∞, the residual stress is equivalent to the long-time averaged stress, and the Reynolds-averaged Navier-Stokes equations are recovered from the temporally filtered equations. The predicted behavior at the asymptotic limits of filter width is further validated by numerical simulations of the temporally filtered forced, viscous Burgers equation. Finally, finite filter widths are also considered, and a priori analyses of temporal similarity and temporal approximate deconvolution models of the residual stress are conducted.
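The storage argument for differential filter forms can be made concrete with the simplest causal member of the family, the exponential time filter, whose differential form carries only the current filtered value regardless of Δ. The explicit update below is a generic discretization for illustration, not one of the specific filters analyzed in the paper, and it assumes dt ≪ Δ for a stable explicit step.

    import numpy as np

    def exponential_time_filter(u, dt, delta):
        """Causal exponential temporal filter in differential form,
            d(u_bar)/dt = (u - u_bar) / delta,
        advanced with an explicit Euler step. Storage is a single state value
        per signal, independent of the filter width delta."""
        u = np.asarray(u, dtype=float)
        u_bar = np.empty_like(u)
        u_bar[0] = u[0]
        for n in range(1, len(u)):
            u_bar[n] = u_bar[n - 1] + dt / delta * (u[n] - u_bar[n - 1])
        return u_bar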
Rapid scatter estimation for CBCT using the Boltzmann transport equation
NASA Astrophysics Data System (ADS)
Sun, Mingshan; Maslowski, Alex; Davis, Ian; Wareing, Todd; Failla, Gregory; Star-Lack, Josh
2014-03-01
Scatter in cone-beam computed tomography (CBCT) is a significant problem that degrades image contrast, uniformity and CT number accuracy. One means of estimating and correcting for detected scatter is through an iterative deconvolution process known as scatter kernel superposition (SKS). While the SKS approach is efficient, clinically significant errors on the order of 2-4% (20-40 HU) still remain. We have previously shown that the kernel method can be improved by perturbing the kernel parameters based on reference data provided by limited Monte Carlo simulations of a first-pass reconstruction. In this work, we replace the Monte Carlo modeling with a deterministic Boltzmann solver (AcurosCTS) to generate the reference scatter data in a dramatically reduced time. In addition, the algorithm is improved so that instead of adjusting kernel parameters, we directly perturb the SKS scatter estimates. Studies were conducted on simulated data and on a large pelvis phantom scanned on a tabletop system. The new method reduced average reconstruction errors (relative to a reference scan) from 2.5% to 1.8%, and significantly improved visualization of low contrast objects. In total, 24 projections were simulated with an AcurosCTS execution time of 22 sec/projection using an 8-core computer. We have ported AcurosCTS to the GPU, and current run-times are approximately 4 sec/projection using two GPUs running in parallel.
Input-output characterization of an ultrasonic testing system by digital signal analysis
NASA Technical Reports Server (NTRS)
Williams, J. H., Jr.; Lee, S. S.; Karagulle, H.
1986-01-01
Ultrasonic test system input-output characteristics were investigated by directly coupling the transmitting and receiving transducers face to face without a test specimen. Some of the fundamentals of digital signal processing were summarized. Input and output signals were digitized by using a digital oscilloscope, and the digitized data were processed in a microcomputer by using digital signal-processing techniques. The continuous-time test system was modeled as a discrete-time, linear, shift-invariant system. In estimating the unit-sample response and frequency response of the discrete-time system, it was necessary to use digital filtering to remove low-amplitude noise, which interfered with deconvolution calculations. A digital bandpass filter constructed with the assistance of a Blackman window and a rectangular time window were used. Approximations of the impulse response and the frequency response of the continuous-time test system were obtained by linearly interpolating the defining points of the unit-sample response and the frequency response of the discrete-time system. The test system behaved as a linear-phase bandpass filter in the frequency range 0.6 to 2.3 MHz. These frequencies were selected in accordance with the criterion that they were 6 dB below the maximum peak of the amplitude of the frequency response. The output of the system to various inputs was predicted and the results were compared with the corresponding measurements on the system.
Algorithm for ion beam figuring of low-gradient mirrors.
Jiao, Changjun; Li, Shengyi; Xie, Xuhui
2009-07-20
Ion beam figuring technology for low-gradient mirrors is discussed. Ion beam figuring is a noncontact machining technique in which a beam of high-energy ions is directed toward a target workpiece to remove material in a predetermined and controlled fashion. Owing to this noncontact mode of material removal, problems associated with tool wear and edge effects, which are common in conventional contact polishing processes, are avoided. Based on the Bayesian principle, an iterative dwell time algorithm for planar mirrors is deduced from the computer-controlled optical surfacing (CCOS) principle. With the properties of the removal function, the shaping process of low-gradient mirrors can be approximated by the linear model for planar mirrors. With these discussions, the error surface figuring technology for low-gradient mirrors with a linear path is set up. With the near-Gaussian property of the removal function, the figuring process with a spiral path can be described by the conventional linear CCOS principle, and a Bayesian-based iterative algorithm can be used to deconvolute the dwell time. Moreover, the selection criterion of the spiral parameter is given. Ion beam figuring technology with a spiral scan path based on these methods can be used to figure mirrors with non-axis-symmetrical errors. Experiments on SiC chemical vapor deposition planar and Zerodur paraboloid samples are made, and the final surface errors are all below 1/100 lambda.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quigley, B; Smith, C; La Riviere, P
2016-06-15
Purpose: To evaluate the resolution and sensitivity of XIL imaging using a surface radiance simulation based on optical diffusion and maximum likelihood expectation maximization (MLEM) image reconstruction. XIL imaging seeks to determine the distribution of luminescent nanophosphors, which could be used as nanodosimeters or radiosensitizers. Methods: The XIL simulation generated a homogeneous slab with optical properties similar to tissue. X-ray activated nanophosphors were placed at 1.0 cm depth in the tissue in concentrations of 10^-4 g/mL in two volumes of 10 mm^3 with varying separations between each other. An analytical optical diffusion model determined the surface radiance from the photon distributions generated at depth in the tissue by the nanophosphors. The simulation then determined the detected luminescent signal collected with an f/1.0 aperture lens and back-illuminated EMCCD camera. The surface radiance was deconvolved using an MLEM algorithm to estimate the nanophosphor distribution and the resolution. To account for both Poisson and Gaussian noise, a shifted Poisson imaging model was used in the deconvolution. The deconvolved distributions were fitted to a Gaussian after radial averaging to measure the full width at half maximum (FWHM), and the peak-to-peak distance between distributions was measured to determine the resolving power. Results: Simulated surface radiances for doses from 1 mGy to 100 cGy were computed. Each image was deconvolved using 1000 iterations. At 1 mGy, deconvolution reduced the FWHM of the nanophosphor distribution by 65% and gave a resolving power of 3.84 mm. Decreasing the dose from 100 cGy to 1 mGy increased the FWHM by 22% but allowed for a dose reduction of a factor of 1000. Conclusion: Deconvolving the detected surface radiance allows for dose reduction while maintaining the resolution of the nanophosphors. It proves to be a useful technique in overcoming the resolution limitations of diffuse optical imaging in tissue. C. S. acknowledges support from the NIH National Institute of General Medical Sciences (Award number R25GM109439, Project Title: University of Chicago Initiative for Maximizing Student Development, IMSD). B. Q. and P. L. acknowledge support from NIH grant R01EB017293.
Reduced nocturnal ACTH-driven cortisol secretion during critical illness
Boonen, Eva; Meersseman, Philippe; Vervenne, Hilke; Meyfroidt, Geert; Guïza, Fabian; Wouters, Pieter J.; Veldhuis, Johannes D.
2014-01-01
Recently, during critical illness, cortisol metabolism was found to be reduced. We hypothesize that such reduced cortisol breakdown may suppress pulsatile ACTH and cortisol secretion via feedback inhibition. To test this hypothesis, nocturnal ACTH and cortisol secretory profiles were constructed by deconvolution analysis from plasma concentration time series in 40 matched critically ill patients and eight healthy controls, excluding diseases or drugs that affect the hypothalamic-pituitary-adrenal axis. Blood was sampled every 10 min between 2100 and 0600 to quantify plasma concentrations of ACTH and (free) cortisol. Approximate entropy, an estimation of process irregularity, cross-approximate entropy, a measure of ACTH-cortisol asynchrony, and ACTH-cortisol dose-response relationships were calculated. Total and free plasma cortisol concentrations were higher at all times in patients than in controls (all P < 0.04). Pulsatile cortisol secretion was 54% lower in patients than in controls (P = 0.005), explained by reduced cortisol burst mass (P = 0.03), whereas cortisol pulse frequency (P = 0.35) and nonpulsatile cortisol secretion (P = 0.80) were unaltered. Pulsatile ACTH secretion was 31% lower in patients than in controls (P = 0.03), again explained by a lower ACTH burst mass (P = 0.02), whereas ACTH pulse frequency (P = 0.50) and nonpulsatile ACTH secretion (P = 0.80) were unchanged. ACTH-cortisol dose response estimates were similar in patients and controls. ACTH and cortisol approximate entropy were higher in patients (P ≤ 0.03), as was ACTH-cortisol cross-approximate entropy (P ≤ 0.001). We conclude that hypercortisolism during critical illness coincided with suppressed pulsatile ACTH and cortisol secretion and a normal ACTH-cortisol dose response. Increased irregularity and asynchrony of the ACTH and cortisol time series supported non-ACTH-dependent mechanisms driving hypercortisolism during critical illness. PMID:24569590
Probabilistic density function method for nonlinear dynamical systems driven by colored noise.
Barajas-Solano, David A; Tartakovsky, Alexandre M
2016-05-01
We present a probability density function (PDF) method for a system of nonlinear stochastic ordinary differential equations driven by colored noise. The method provides an integrodifferential equation for the temporal evolution of the joint PDF of the system's state, which we close by means of a modified large-eddy-diffusivity (LED) closure. In contrast to the classical LED closure, the proposed closure accounts for advective transport of the PDF in the approximate temporal deconvolution of the integrodifferential equation. In addition, we introduce the generalized local linearization approximation for deriving a computable PDF equation in the form of a second-order partial differential equation. We demonstrate that the proposed closure and localization accurately describe the dynamics of the PDF in phase space for systems driven by noise with arbitrary autocorrelation time. We apply the proposed PDF method to analyze a set of Kramers equations driven by exponentially autocorrelated Gaussian colored noise to study nonlinear oscillators and the dynamics and stability of a power grid. Numerical experiments show the PDF method is accurate when the noise autocorrelation time is either much shorter or longer than the system's relaxation time, while the accuracy decreases as the ratio of the two timescales approaches unity. Similarly, the PDF method accuracy decreases with increasing standard deviation of the noise.
NASA Astrophysics Data System (ADS)
Pominova, Daria V.; Ryabova, Anastasia V.; Grachev, Pavel V.; Romanishkin, Igor D.; Kuznetsov, Sergei V.; Rozhnova, Julia A.; Yasyrkina, Daria S.; Fedorov, Pavel P.; Loschenov, Victor B.
2016-09-01
The great interest in upconversion nanoparticles exists due to their high efficiency under multiphoton excitation. However, when these particles are used in scanning microscopy, the upconversion luminescence causes a streaking effect due to the long lifetime. This article describes a method of upconversion microparticle luminescence lifetime determination with the help of a modified Lucy-Richardson deconvolution of a laser scanning microscope (LSM) image obtained under near-IR excitation using nondescanned detectors. The upconversion luminescence intensity and the decay time of separate microparticles were determined by approximating the intensity profile along the image fast-scan axis. We studied upconversion submicroparticles based on fluoride hosts doped with Yb3+-Er3+ and Yb3+-Tm3+ rare earth ion pairs, and the characteristic decay times were 0.1 to 1.5 ms. We also compared the results of LSM measurements with the photon counting method results; the spread of values was about 13% and was associated with the approximation error. Data obtained from live cells showed the possibility of distinguishing the position of upconversion submicroparticles inside and outside the cells by the difference of their lifetime. The proposed technique allows using the upconversion microparticles without shells as probes for the presence of OH- ions and CO2 molecules.
NASA Astrophysics Data System (ADS)
Dyez, K. A.; Hoenisch, B.
2015-12-01
Atmospheric CO2 concentrations in the late Pleistocene have been characterized from ancient air bubbles trapped within polar ice sheets. Ice-core records clearly demonstrate the glacial-interglacial relationship between the global carbon cycle and climate, but they are so far limited to the last 800 ky, when glacial cycles occurred approximately every 100 ky. Boron isotope ratios (δ11B) recorded in the tests of fossil planktic foraminifera offer an opportunity to extend the atmospheric pCO2 record into the early Pleistocene, when glacial cycles instead occurred approximately every 41 ky. We present a new high-resolution record of planktic foraminiferal δ11B, Mg/Ca (a sea surface temperature proxy) and salinity estimates from the deconvolution of δ18O and Mg/Ca. Combined with reasonable assumptions of ocean alkalinity, these data allow us to estimate pCO2 over three of the 41-ky climate cycles at ~1.5 Ma. Our results confirm the hypothesis that climate and atmospheric pCO2 were coupled beyond ice core records and provide new constraints for studies of long-term CO2 storage and release, regional controls on the early Pleistocene carbon cycle, and estimating climate sensitivity before the mid-Pleistocene transition.
Improved Phased Array Imaging of a Model Jet
NASA Technical Reports Server (NTRS)
Dougherty, Robert P.; Podboy, Gary G.
2010-01-01
An advanced phased array system, OptiNav Array 48, and a new deconvolution algorithm, TIDY, have been used to make octave band images of supersonic and subsonic jet noise produced by the NASA Glenn Small Hot Jet Acoustic Rig (SHJAR). The results are much more detailed than previous jet noise images. Shock cell structures and the production of screech in an underexpanded supersonic jet are observed directly. Some trends are similar to observations using spherical and elliptic mirrors that partially informed the two-source model of jet noise, but the radial distribution of high frequency noise near the nozzle appears to differ from expectations of this model. The beamforming approach has been validated by agreement between the integrated image results and the conventional microphone data.
Improved scatter correction using adaptive scatter kernel superposition
NASA Astrophysics Data System (ADS)
Sun, M.; Star-Lack, J. M.
2010-11-01
Accurate scatter correction is required to produce high-quality reconstructions of x-ray cone-beam computed tomography (CBCT) scans. This paper describes new scatter kernel superposition (SKS) algorithms for deconvolving scatter from projection data. The algorithms are designed to improve upon the conventional approach whose accuracy is limited by the use of symmetric kernels that characterize the scatter properties of uniform slabs. To model scatter transport in more realistic objects, nonstationary kernels, whose shapes adapt to local thickness variations in the projection data, are proposed. Two methods are introduced: (1) adaptive scatter kernel superposition (ASKS) requiring spatial domain convolutions and (2) fast adaptive scatter kernel superposition (fASKS) where, through a linearity approximation, convolution is efficiently performed in Fourier space. The conventional SKS algorithm, ASKS, and fASKS, were tested with Monte Carlo simulations and with phantom data acquired on a table-top CBCT system matching the Varian On-Board Imager (OBI). All three models accounted for scatter point-spread broadening due to object thickening, object edge effects, detector scatter properties and an anti-scatter grid. Hounsfield unit (HU) errors in reconstructions of a large pelvis phantom with a measured maximum scatter-to-primary ratio over 200% were reduced from -90 ± 58 HU (mean ± standard deviation) with no scatter correction to 53 ± 82 HU with SKS, to 19 ± 25 HU with fASKS and to 13 ± 21 HU with ASKS. HU accuracies and measured contrast were similarly improved in reconstructions of a body-sized elliptical Catphan phantom. The results show that the adaptive SKS methods offer significant advantages over the conventional scatter deconvolution technique.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandlik, Nandkumar, E-mail: ntmandlik@gmail.com; Patil, B. J.; Bhoraskar, V. N.
2014-04-24
Nanorods of CaSO4:Dy having diameter 20 nm and length 200 nm have been synthesized by the chemical coprecipitation method. These samples were irradiated with gamma radiation for the dose varying from 0.1 Gy to 50 kGy and their TL characteristics have been studied. TL dose response shows a linear behavior up to 5 kGy and further saturates with increase in the dose. A Computerized Glow Curve Deconvolution (CGCD) program was used for the analysis of TL glow curves. Trapping parameters for various peaks have been calculated by using CGCD program.
NASA Astrophysics Data System (ADS)
Mandlik, Nandkumar; Patil, B. J.; Bhoraskar, V. N.; Sahare, P. D.; Dhole, S. D.
2014-04-01
Nanorods of CaSO4: Dy having diameter 20 nm and length 200 nm have been synthesized by the chemical coprecipitation method. These samples were irradiated with gamma radiation for the dose varying from 0.1 Gy to 50 kGy and their TL characteristics have been studied. TL dose response shows a linear behavior up to 5 kGy and further saturates with increase in the dose. A Computerized Glow Curve Deconvolution (CGCD) program was used for the analysis of TL glow curves. Trapping parameters for various peaks have been calculated by using CGCD program.
Iterative Transform Phase Diversity: An Image-Based Object and Wavefront Recovery
NASA Technical Reports Server (NTRS)
Smith, Jeffrey
2012-01-01
The Iterative Transform Phase Diversity algorithm is designed to solve the problem of recovering the wavefront in the exit pupil of an optical system and the object being imaged. This algorithm builds upon the robust convergence capability of Variable Sampling Mapping (VSM), in combination with the known success of various deconvolution algorithms. VSM is an alternative method for enforcing the amplitude constraints of a Misell-Gerchberg-Saxton (MGS) algorithm. When provided the object and additional optical parameters, VSM can accurately recover the exit pupil wavefront. By combining VSM and deconvolution, one is able to simultaneously recover the wavefront and the object.
Deconvolution Method on OSL Curves from ZrO2 Irradiated by Beta and UV Radiations
NASA Astrophysics Data System (ADS)
Rivera, T.; Kitis, G.; Azorín, J.; Furetta, C.
This paper reports the optically stimulated luminescent (OSL) response of ZrO2 to beta and ultraviolet radiations in order to investigate the potential use of this material as a radiation dosimeter. The experimentally obtained OSL decay curves were analyzed using the computerized curve de-convolution (CCD) method. It was found that the OSL curve structure, for the short (practical) illumination time used, consists of three first order components. The individual OSL dose response behavior of each component was found. The values of the time at the OSL peak maximum and the decay constant of each component were also estimated.
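A minimal sketch of the curve-deconvolution idea under the stated assumption that the decay is a sum of three first-order (exponential) components; the component intensities and decay constants in the example are illustrative values, not the paper's results.

```python
import numpy as np
from scipy.optimize import curve_fit

def osl_three_components(t, n1, b1, n2, b2, n3, b3):
    """Sum of three first-order OSL components: I(t) = sum_i n_i * b_i * exp(-b_i * t)."""
    return (n1 * b1 * np.exp(-b1 * t)
            + n2 * b2 * np.exp(-b2 * t)
            + n3 * b3 * np.exp(-b3 * t))

# Synthetic decay curve with Poisson counting noise (parameters illustrative).
t = np.linspace(0.0, 100.0, 500)
true = osl_three_components(t, 1e4, 0.8, 5e3, 0.15, 2e3, 0.02)
data = np.random.default_rng(1).poisson(true).astype(float)
p0 = (1e4, 1.0, 5e3, 0.1, 2e3, 0.01)
popt, _ = curve_fit(osl_three_components, t, data, p0=p0, maxfev=20000)
```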
Pooling across cells to normalize single-cell RNA sequencing data with many zero counts.
Lun, Aaron T L; Bach, Karsten; Marioni, John C
2016-04-27
Normalization of single-cell RNA sequencing data is necessary to eliminate cell-specific biases prior to downstream analyses. However, this is not straightforward for noisy single-cell data where many counts are zero. We present a novel approach where expression values are summed across pools of cells, and the summed values are used for normalization. Pool-based size factors are then deconvolved to yield cell-based factors. Our deconvolution approach outperforms existing methods for accurate normalization of cell-specific biases in simulated data. Similar behavior is observed in real data, where deconvolution improves the relevance of results of downstream analyses.
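A toy sketch of the pool-and-deconvolve idea, assuming a single sliding pool size and a simple median-ratio pool factor; the published method uses several pool sizes (which makes the linear system better conditioned) and additional safeguards, so this is only an illustration of the core linear-system step.

```python
import numpy as np

def deconvolve_size_factors(counts, pool_size=10):
    """Toy pool-and-deconvolve normalisation (illustrative).

    counts : genes x cells matrix. Cells are pooled in a sliding ring; each
    pool's size factor is estimated on the summed profile against an average
    pseudo-cell, and cell-specific factors are recovered by solving the
    system 'sum of member factors = pool factor' in a least-squares sense.
    """
    n_genes, n_cells = counts.shape
    ref = counts.mean(axis=1)                       # average pseudo-cell
    pools, factors = [], []
    for start in range(n_cells):
        idx = [(start + k) % n_cells for k in range(pool_size)]
        pooled = counts[:, idx].sum(axis=1)
        keep = ref > 0
        factors.append(np.median(pooled[keep] / ref[keep]))
        pools.append(idx)
    A = np.zeros((len(pools), n_cells))
    for row, idx in enumerate(pools):
        A[row, idx] = 1.0
    cell_factors, *_ = np.linalg.lstsq(A, np.array(factors), rcond=None)
    return cell_factors / np.mean(cell_factors)     # centre at unity

rng = np.random.default_rng(2)
counts = rng.poisson(5.0, size=(200, 60)).astype(float)
size_factors = deconvolve_size_factors(counts)
```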
Chae, Kum Ju; Goo, Jin Mo; Ahn, Su Yeon; Yoo, Jin Young; Yoon, Soon Ho
2018-01-01
To evaluate the preference of observers for image quality of chest radiography using the deconvolution algorithm of point spread function (PSF) (TRUVIEW ART algorithm, DRTECH Corp.) compared with that of original chest radiography for visualization of anatomic regions of the chest. Prospectively enrolled 50 pairs of posteroanterior chest radiographs collected with standard protocol and with additional TRUVIEW ART algorithm were compared by four chest radiologists. This algorithm corrects scattered signals generated by a scintillator. Readers independently evaluated the visibility of 10 anatomical regions and overall image quality with a 5-point scale of preference. The significance of the differences in reader's preference was tested with a Wilcoxon's signed rank test. All four readers preferred the images applied with the algorithm to those without algorithm for all 10 anatomical regions (mean, 3.6; range, 3.2-4.0; p < 0.001) and for the overall image quality (mean, 3.8; range, 3.3-4.0; p < 0.001). The most preferred anatomical regions were the azygoesophageal recess, thoracic spine, and unobscured lung. The visibility of chest anatomical structures applied with the deconvolution algorithm of PSF was superior to the original chest radiography.
Ultrasonic inspection of studs (bolts) using dynamic predictive deconvolution and wave shaping.
Suh, D M; Kim, W W; Chung, J G
1999-01-01
Bolt degradation has become a major issue in the nuclear industry since the 1980s. If small cracks in stud bolts are not detected early enough, they grow rapidly and cause catastrophic disasters. Their detection, despite its importance, is known to be a very difficult problem due to the complicated structures of the stud bolts. This paper presents a method of detecting and sizing a small crack in the root between two adjacent crests in threads. The key idea comes from the fact that the mode-converted Rayleigh wave travels slowly down the face of the crack and turns from the intersection of the crack and the root of the thread to the transducer. Thus, when a crack exists, a small delayed pulse due to the Rayleigh wave is detected between large regularly spaced pulses from the thread. The delay time is the same as the propagation delay time of the slow Rayleigh wave and is proportional to the size of the crack. To efficiently detect the slow Rayleigh wave, three methods based on digital signal processing are proposed: wave shaping, dynamic predictive deconvolution, and dynamic predictive deconvolution combined with wave shaping.
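A sketch of one common form of predictive deconvolution, a prediction-error filter designed from the trace autocorrelation; the filter order, prediction gap and the synthetic echo train are assumptions for illustration, not the authors' dynamic variant combined with wave shaping.

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def prediction_error_filter(trace, order=20, gap=1, prewhiten=1e-3):
    """Design and apply a prediction-error filter.

    Solves the normal equations R a = r, where R is the Toeplitz
    autocorrelation matrix and r holds lags gap..gap+order-1, then applies
    the error filter [1, 0, ..., 0, -a] to suppress the predictable
    (periodic) part of the signal, leaving unpredictable arrivals.
    """
    n = len(trace)
    full = np.correlate(trace, trace, mode="full")
    r = full[n - 1:]                      # autocorrelation, lags 0..n-1
    col = r[:order].astype(float)
    col[0] *= (1.0 + prewhiten)           # pre-whitening for stability
    a = solve_toeplitz(col, r[gap:gap + order])
    pef = np.concatenate(([1.0], np.zeros(gap - 1), -a))
    return lfilter(pef, [1.0], trace), pef

# Toy trace: a decaying periodic "thread echo" train plus noise.
rng = np.random.default_rng(3)
trace = np.zeros(1000)
trace[::100] = 1.0
trace = lfilter([1.0], [1.0, -0.5], trace) + 0.01 * rng.standard_normal(1000)
filtered, pef = prediction_error_filter(trace, order=120, gap=2)
```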
Torres-Lapasió, J R; Pous-Torres, S; Ortiz-Bolsico, C; García-Alvarez-Coque, M C
2015-01-16
The optimisation of the resolution in high-performance liquid chromatography is traditionally performed attending only to the time information. However, even in the optimal conditions, some peak pairs may remain unresolved. The remaining resolution can still be accomplished by deconvolution, which can be carried out with more guarantees of success by including spectral information. In this work, two-way chromatographic objective functions (COFs) that incorporate both time and spectral information were tested, based on the peak purity (analyte peak fraction free of overlapping) and the multivariate selectivity (figure of merit derived from the net analyte signal) concepts. These COFs are sensitive to situations where the components that coelute in a mixture show some spectral differences. Therefore, they are useful for finding experimental conditions where the spectrochromatograms can be recovered by deconvolution. Two-way multivariate selectivity yielded the best performance and was applied to the separation using diode-array detection of a mixture of 25 phenolic compounds, which remained unresolved in the chromatographic order using linear and multi-linear gradients of acetonitrile-water. Peak deconvolution was carried out using a combination of the orthogonal projection approach and alternating least squares. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Schawinski, Kevin; Zhang, Ce; Zhang, Hantian; Fowler, Lucas; Santhanam, Gokula Krishnan
2017-05-01
Observations of astrophysical objects such as galaxies are limited by various sources of random and systematic noise from the sky background, the optical system of the telescope and the detector used to record the data. Conventional deconvolution techniques are limited in their ability to recover features in imaging data by the Shannon-Nyquist sampling theorem. Here, we train a generative adversarial network (GAN) on a sample of 4550 images of nearby galaxies at 0.01 < z < 0.02 from the Sloan Digital Sky Survey and conduct 10× cross-validation to evaluate the results. We present a method using a GAN trained on galaxy images that can recover features from artificially degraded images with worse seeing and higher noise than the original with a performance that far exceeds simple deconvolution. The ability to better recover detailed features such as galaxy morphology from low signal to noise and low angular resolution imaging data significantly increases our ability to study existing data sets of astrophysical objects as well as future observations with observatories such as the Large Synoptic Survey Telescope (LSST) and the Hubble and James Webb space telescopes.
Imaging samples in silica aerogel using an experimental point spread function.
White, Amanda J; Ebel, Denton S
2015-02-01
Light microscopy is a powerful tool that allows for many types of samples to be examined in a rapid, easy, and nondestructive manner. Subsequent image analysis, however, is compromised by distortion of signal by instrument optics. Deconvolution of images prior to analysis allows for the recovery of lost information by procedures that utilize either a theoretically or experimentally calculated point spread function (PSF). Using a laser scanning confocal microscope (LSCM), we have imaged whole impact tracks of comet particles captured in silica aerogel, a low density, porous SiO2 solid, by the NASA Stardust mission. In order to understand the dynamical interactions between the particles and the aerogel, precise grain location and track volume measurement are required. We report a method for measuring an experimental PSF suitable for three-dimensional deconvolution of imaged particles in aerogel. Using fluorescent beads manufactured into Stardust flight-grade aerogel, we have applied a deconvolution technique standard in the biological sciences to confocal images of whole Stardust tracks. The incorporation of an experimentally measured PSF allows for better quantitative measurements of the size and location of single grains in aerogel and more accurate measurements of track morphology.
Varma, Manthena V; El-Kattan, Ayman F
2016-07-01
A large body of evidence suggests hepatic uptake transporters, organic anion-transporting polypeptides (OATPs), are of high clinical relevance in determining the pharmacokinetics of substrate drugs, based on which recent regulatory guidances to industry recommend appropriate assessment of investigational drugs for the potential drug interactions. We recently proposed an extended clearance classification system (ECCS) framework in which the systemic clearance of class 1B and 3B drugs is likely determined by hepatic uptake. The ECCS framework therefore predicts the possibility of drug-drug interactions (DDIs) involving OATPs and the effects of genetic variants of SLCO1B1 early in the discovery and facilitates decision making in the candidate selection and progression. Although OATP-mediated uptake is often the rate-determining process in the hepatic clearance of substrate drugs, metabolic and/or biliary components also contribute to the overall hepatic disposition and, more importantly, to liver exposure. Clinical evidence suggests that alteration in biliary efflux transport or metabolic enzymes associated with genetic polymorphism leads to change in the pharmacodynamic response of statins, for which the pharmacological target resides in the liver. Perpetrator drugs may show inhibitory and/or induction effects on transporters and enzymes simultaneously. It is therefore important to adopt models that frame these multiple processes in a mechanistic sense for quantitative DDI predictions and to deconvolute the effects of individual processes on the plasma and hepatic exposure. In vitro data-informed mechanistic static and physiologically based pharmacokinetic models are proven useful in rationalizing and predicting transporter-mediated DDIs and the complex DDIs involving transporter-enzyme interplay. © 2016, The American College of Clinical Pharmacology.
Peng, Ke; Nguyen, Dang Khoa; Vannasing, Phetsamone; Tremblay, Julie; Lesage, Frédéric; Pouliot, Philippe
2016-02-01
Functional near-infrared spectroscopy (fNIRS) can be combined with electroencephalography (EEG) to continuously monitor the hemodynamic signal evoked by epileptic events such as seizures or interictal epileptiform discharges (IEDs, aka spikes). As estimation methods assuming a canonical shape of the hemodynamic response function (HRF) might not be optimal, we sought to model patient-specific HRF (sHRF) with a simple deconvolution approach for IED-related analysis with EEG-fNIRS data. Furthermore, a quadratic term was added to the model to account for the nonlinearity in the response when IEDs are frequent. Prior to analyzing clinical data, simulations were carried out to show that the HRF was estimable by the proposed deconvolution methods under proper conditions. EEG-fNIRS data of five patients with refractory focal epilepsy were selected due to the presence of frequent clear IEDs and their unambiguous focus localization. For each patient, both the linear sHRF and the nonlinear sHRF were estimated at each channel. Variability of the estimated sHRFs was seen across brain regions and different patients. Compared with the SPM8 canonical HRF (cHRF), including these sHRFs in the general linear model (GLM) analysis led to hemoglobin activations with higher statistical scores as well as larger spatial extents on all five patients. In particular, for patients with frequent IEDs, nonlinear sHRFs were seen to provide higher sensitivity in activation detection than linear sHRFs. These observations support using sHRFs in the analysis of IEDs with EEG-fNIRS data. Copyright © 2015 Elsevier Inc. All rights reserved.
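A minimal sketch of the simple deconvolution approach for a subject-specific HRF, assuming an FIR (finite impulse response) model estimated by least squares from an IED onset vector; the quadratic term of the paper is omitted, and the onset rate, HRF length and noise level below are illustrative.

```python
import numpy as np

def estimate_hrf(onsets, signal, hrf_len):
    """Least-squares (deconvolution) estimate of a subject-specific HRF.

    onsets : 0/1 vector marking IEDs at each sample; signal : measured
    hemodynamic time course. The design matrix holds shifted copies of the
    onset vector (a Toeplitz structure) plus a constant baseline column.
    """
    n = len(signal)
    cols = [np.concatenate((np.zeros(k), onsets))[:n] for k in range(hrf_len)]
    X = np.stack(cols, axis=1)
    X = np.hstack([X, np.ones((n, 1))])
    beta, *_ = np.linalg.lstsq(X, signal, rcond=None)
    return beta[:hrf_len]                 # the estimated sHRF samples

rng = np.random.default_rng(4)
onsets = (rng.random(600) < 0.02).astype(float)
true_hrf = np.exp(-np.arange(30) / 8.0) * np.sin(np.arange(30) / 5.0)
signal = np.convolve(onsets, true_hrf)[:600] + 0.05 * rng.standard_normal(600)
hrf_hat = estimate_hrf(onsets, signal, hrf_len=30)
```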
NASA Astrophysics Data System (ADS)
Ramsey, Michael S.
2002-08-01
A spectral deconvolution using a constrained least squares approach was applied to airborne thermal infrared multispectral scanner (TIMS) data of Meteor Crater, Arizona. The three principal sedimentary units sampled by the impact were chosen as end-members, and their spectra were derived from the emissivity images. To validate previous estimates of the erosion of the near-rim ejecta, the model was used to identify the areal extent of the reworked material. The outputs of the algorithm reveal subtle mixing patterns in the ejecta, identified larger ejecta blocks, and were used to further constrain the volume of Coconino Sandstone present in the vicinity of the crater. The availability of the multialtitude data set also provided a means to examine the effects of resolution degradation and quantify the subsequent errors on the model. These data served as a test case for the use of image-derived lithologic end-members at various scales, which is critical for examining thermal infrared data of planetary surfaces. The model results indicate that the Coconino Ss. reworked ejecta is detectable over 3 km from the crater. This was confirmed by field sampling within the primary ejecta field and wind streak. The areal distribution patterns of this unit imply past erosion and subsequent sediment transport that was low to moderate compared with early studies and therefore places further constraints on the ejecta degradation of Meteor Crater. It also provides an important example of the analysis that can be performed on thermal infrared data currently being returned from Earth orbit and expected from Mars in 2002.
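A sketch of constrained least-squares unmixing for a single pixel, assuming non-negative end-member fractions that sum to one (imposed softly through a heavily weighted row); the end-member spectra and band count are invented for illustration and are not the TIMS end-members used in the study.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(emissivity, endmembers, weight=100.0):
    """Constrained least-squares unmixing of one emissivity spectrum.

    endmembers : bands x n_endmembers matrix of end-member spectra.
    Non-negativity comes from NNLS; the sum-to-one constraint is imposed
    softly by appending a weighted row of ones to the system.
    """
    A = np.vstack([endmembers, weight * np.ones(endmembers.shape[1])])
    b = np.concatenate([emissivity, [weight]])
    fractions, residual = nnls(A, b)
    return fractions, residual

# Three synthetic end-members over six bands (values purely illustrative).
E = np.array([[0.96, 0.80, 0.92],
              [0.94, 0.82, 0.90],
              [0.90, 0.85, 0.93],
              [0.85, 0.90, 0.95],
              [0.88, 0.92, 0.96],
              [0.92, 0.94, 0.97]])
pixel = E @ np.array([0.5, 0.3, 0.2])
pixel += 0.002 * np.random.default_rng(5).standard_normal(6)
fractions, residual = unmix_pixel(pixel, E)
```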
Generalized Uncertainty Quantification for Linear Inverse Problems in X-ray Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fowler, Michael James
2014-04-25
In industrial and engineering applications, X-ray radiography has attained wide use as a data collection protocol for the assessment of material properties in cases where direct observation is not possible. The direct measurement of nuclear materials, particularly when they are under explosive or implosive loading, is not feasible, and radiography can serve as a useful tool for obtaining indirect measurements. In such experiments, high energy X-rays are pulsed through a scene containing material of interest, and a detector records a radiograph by measuring the radiation that is not attenuated in the scene. One approach to the analysis of these radiographs is to model the imaging system as an operator that acts upon the object being imaged to produce a radiograph. In this model, the goal is to solve an inverse problem to reconstruct the values of interest in the object, which are typically material properties such as density or areal density. The primary objective in this work is to provide quantitative solutions with uncertainty estimates for three separate applications in X-ray radiography: deconvolution, Abel inversion, and radiation spot shape reconstruction. For each problem, we introduce a new hierarchical Bayesian model for determining a posterior distribution on the unknowns and develop efficient Markov chain Monte Carlo (MCMC) methods for sampling from the posterior. A Poisson likelihood, based on a noise model for photon counts at the detector, is combined with a prior tailored to each application: an edge-localizing prior for deconvolution; a smoothing prior with non-negativity constraints for spot reconstruction; and a full covariance sampling prior based on a Wishart hyperprior for Abel inversion. After developing our methods in a general setting, we demonstrate each model on both synthetically generated datasets, including those from a well known radiation transport code, and real high energy radiographs taken at two U. S. Department of Energy laboratories.
Jo, J A; Marcu, L; Fang, Q; Papaioannou, T; Qiao, J H; Fishbein, M C; Beseth, B; Dorafshar, A H; Reil, T; Baker, D; Freischlag, J
2007-01-01
A new deconvolution method for the analysis of time-resolved laser-induced fluorescence spectroscopy (TR-LIFS) data is introduced and applied for tissue diagnosis. The intrinsic TR-LIFS decays are expanded on a Laguerre basis, and the computed Laguerre expansion coefficients (LEC) are used to characterize the sample fluorescence emission. The method was applied for the diagnosis of atherosclerotic vulnerable plaques. In a first stage, using a rabbit atherosclerotic model, 73 TR-LIFS in vivo measurements from the normal and atherosclerotic aorta segments of eight rabbits were taken. The Laguerre deconvolution technique was able to accurately deconvolve the TR-LIFS measurements. More interestingly, the LEC reflected the changes in the arterial biochemical composition and provided discrimination of lesions rich in macrophages/foam-cells with high sensitivity (> 85%) and specificity (> 95%). In a second stage, 348 TR-LIFS measurements were obtained from the explanted carotid arteries of 30 patients. Lesions with significant inflammatory cells (macrophages/foam-cells and lymphocytes) were detected with high sensitivity (> 80%) and specificity (> 90%), using LEC-based classifiers. This study has demonstrated the potential of using TR-LIFS information by means of LEC for in vivo tissue diagnosis, and specifically for detecting inflammation in atherosclerotic lesions, a key marker of plaque vulnerability.
NASA Astrophysics Data System (ADS)
de Macedo, Isadora A. S.; da Silva, Carolina B.; de Figueiredo, J. J. S.; Omoboya, Bode
2017-01-01
Wavelet estimation and seismic-to-well tie procedures are at the core of every seismic interpretation workflow. In this paper we perform a comparative study of wavelet estimation methods for seismic-to-well tie. Two approaches to wavelet estimation are discussed: a deterministic estimation, based on both seismic and well log data, and a statistical estimation, based on predictive deconvolution and the classical assumptions of the convolutional model, which provides a minimum-phase wavelet. For both wavelet estimation methods, our algorithms introduce a semi-automatic approach to determine the optimum estimation parameters and, further, to select the optimum seismic wavelet by searching for the highest correlation coefficient between the recorded trace and the synthetic trace when the time-depth relationship is accurate. Tests with numerical data yield some qualitative conclusions, which are likely useful for seismic inversion and interpretation of field data, by comparing deterministic and statistical wavelet estimation in detail, especially for the field data example. The feasibility of this approach is verified on real seismic and well data from the Viking Graben field, North Sea, Norway. Our results also show the influence of washout zones in the well log data on the quality of the well-to-seismic tie.
Tectonics and crustal structure of the Saurashtra peninsula: based on Gravity and Magnetic data
NASA Astrophysics Data System (ADS)
Mishra, A. K.; Singh, A.; Singh, U. K.
2016-12-01
The Saurashtra peninsula is located at the northwestern margin of the Indian shield and occurs as a horst block between the Kachchh, Cambay and Narmada rifts. It is important because of the occurrence of moderate earthquakes and the presence of Mesozoic sediments below the Deccan trap. The Bouguer gravity anomaly and total-intensity magnetic anomaly maps of Saurashtra delineate six circular highs with magnitudes of 40-60 mGal and 800-1000 nT, respectively. In order to understand the location, structure and depth of the source bodies, methods such as the continuous wavelet transform (CWT), Euler deconvolution and power spectrum analysis have been applied to the potential field data. The CWT and Euler deconvolution give an average depth of 16-18 km for the volcanic plugs in the Junagadh and Rajula regions. From the power spectrum analysis, the average Moho depth in Saurashtra is found to be about 36-38 km. Using constraints obtained from geophysical studies such as boreholes, deep seismic surveys, receiver function analysis and geological information, combined gravity and magnetic modeling has been performed. The detailed crustal structure of the Saurashtra region has been delineated along two profiles, which pass through the prominent Junagadh and Rajula volcanic plugs, respectively.
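For the Euler deconvolution step, a minimal window solver is sketched below; it assumes the field gradients (including the vertical derivative, normally computed in the wavenumber domain) are supplied for each window, and the homogeneous synthetic source used to exercise it is purely illustrative.

```python
import numpy as np

def euler_window(x, y, field, dfdx, dfdy, dfdz, structural_index):
    """Solve Euler's homogeneity equation on one window of gridded data.

    x, y : horizontal coordinates of the window points (observation plane
    taken at z = 0); field : anomaly values; dfdx, dfdy, dfdz : field
    gradients. Unknowns are the source position (x0, y0, z0) and the
    regional background B.
    """
    n = structural_index
    A = np.column_stack([dfdx, dfdy, dfdz, n * np.ones_like(field)])
    b = x * dfdx + y * dfdy + n * field     # the z*dfdz term vanishes at z = 0
    (x0, y0, z0, background), *_ = np.linalg.lstsq(A, b, rcond=None)
    return x0, y0, z0, background

# Synthetic check: a homogeneous source of degree N buried at 8 km depth.
N = 2.0
x0, y0, z0, B = 10.0, -5.0, 8.0, 50.0
rng = np.random.default_rng(6)
x = rng.uniform(-50.0, 50.0, 400)
y = rng.uniform(-50.0, 50.0, 400)
r2 = (x - x0) ** 2 + (y - y0) ** 2 + z0 ** 2
field = B + r2 ** (-N / 2)
dfdx = -N * (x - x0) * r2 ** (-N / 2 - 1)
dfdy = -N * (y - y0) * r2 ** (-N / 2 - 1)
dfdz = N * z0 * r2 ** (-N / 2 - 1)
print(euler_window(x, y, field, dfdx, dfdy, dfdz, structural_index=N))
```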
Dempah, Kassibla Elodie; Lubach, Joseph W; Munson, Eric J
2017-03-06
A variety of particle sizes of a model compound, dicumarol, were prepared and characterized in order to investigate the correlation between particle size and solid-state NMR (SSNMR) proton spin-lattice relaxation (1H T1) times. Conventional laser diffraction and scanning electron microscopy were used as particle size measurement techniques and showed crystalline dicumarol samples with sizes ranging from tens of micrometers to a few micrometers. Dicumarol samples were prepared using both bottom-up and top-down particle size control approaches, via antisolvent microprecipitation and cryogrinding. It was observed that smaller particles of dicumarol generally had shorter 1H T1 times than larger ones. Additionally, cryomilled particles had the shortest 1H T1 times encountered (8 s). SSNMR 1H T1 times of all the samples were measured and showed as-received dicumarol to have a T1 of 1500 s, whereas the 1H T1 times of the precipitated samples ranged from 20 to 80 s, with no apparent change in the physical form of dicumarol. Physical mixtures of different-sized particles were also analyzed to determine the effect of sample inhomogeneity on 1H T1 values. Mixtures of cryoground and as-received dicumarol were clearly inhomogeneous as they did not fit well to a one-component relaxation model, but could be fit much better to a two-component model with both fast- and slow-relaxing regimes. Results indicate that samples of crystalline dicumarol containing two significantly different particle size populations could be deconvoluted solely based on their differences in 1H T1 times. Relative populations of each particle size regime could also be approximated using two-component fitting models. Using NMR theory on spin diffusion as a reference, and taking into account the presence of crystal defects, a model for the correlation between the particle size of dicumarol and its 1H T1 time was proposed.
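A sketch of the two-component relaxation fit mentioned above, assuming a saturation-recovery experiment (the actual pulse sequence is not stated here); the T1 values of 8 s and 1500 s are taken from the abstract, while the mixture fractions, recovery delays and noise level are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_component_recovery(tau, m0, frac, t1_fast, t1_slow):
    """Two-component saturation-recovery model for 1H T1 measurements."""
    return m0 * (frac * (1 - np.exp(-tau / t1_fast))
                 + (1 - frac) * (1 - np.exp(-tau / t1_slow)))

# Synthetic mixture: 40% fast-relaxing (T1 = 8 s) and 60% slow-relaxing (T1 = 1500 s).
tau = np.array([1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096], float)
data = two_component_recovery(tau, 1.0, 0.4, 8.0, 1500.0)
data += 0.01 * np.random.default_rng(7).standard_normal(tau.size)
p0 = (1.0, 0.5, 10.0, 1000.0)
popt, _ = curve_fit(two_component_recovery, tau, data, p0=p0, maxfev=20000)
```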
NASA Astrophysics Data System (ADS)
Yu, Y.; Hopkins, C.
2018-05-01
Time-dependent forces applied by 2 and 4.5 mm diameter drops of water (with velocities up to terminal velocity) impacting upon a glass plate with or without a water layer (up to 10 mm depth) have been measured using two different approaches, force transduction and wavelet deconvolution. Both approaches are in close agreement for drops falling on dry glass. However, only the wavelet approach is able to measure natural features of the splash on shallow water layers that impart forces to the plate after the initial impact. At relatively high velocities (including terminal velocity) the measured peak force from the initial impact is significantly higher than that predicted by idealised drop shape models and models from Roisman et al. and Marengo et al. Hence empirical formulae are developed for the initial time-dependent impact force from drops falling at (a) different velocities up to and including terminal velocity onto a dry glass surface, (b) terminal velocity onto dry glass or glass with a water layer and (c) different velocities below terminal velocity onto dry glass or glass with a water layer. For drops on dry glass, the empirical formulae are applicable to a glass plate or a composite layered plate with a glass surface; they are also applicable to other plate thicknesses and to any plate material with a similar surface roughness and wettability. The measurements also indicate that after the initial impact there can be high-level forces when bubbles are entrained in the water layer.
NASA Astrophysics Data System (ADS)
Bernstein, L. S.; Shroll, R. M.; Galazutdinov, G. A.; Beletsky, Y.
2018-06-01
We explore the common-carrier hypothesis for the 6196 and 6614 Å diffuse interstellar bands (DIBs). The observed DIB spectra are sharpened using a spectral deconvolution algorithm. This reveals finer spectral features that provide tighter constraints on candidate carriers. We analyze a deconvolved λ6614 DIB spectrum and derive spectroscopic constants that are then used to model the λ6196 spectra. The common-carrier spectroscopic constants enable quantitative fits to the contrasting λ6196 and λ6614 spectra from two sightlines. Highlights of our analysis include (1) sharp cutoffs for the maximum values of the rotational quantum numbers, Jmax = Kmax, (2) the λ6614 DIB consisting of a doublet and a red-tail component arising from different carriers, (3) the λ6614 doublet and λ6196 DIBs sharing a common carrier, (4) the contrasting shapes of the λ6614 doublet and λ6196 DIBs arising from different vibration–rotation Coriolis coupling constants that originate from transitions from a common ground state to different upper electronic state degenerate vibrational levels, and (5) the different widths of the two DIBs arising from different effective rotational temperatures associated with principal rotational axes that are parallel and perpendicular to the highest-order symmetry axis. The analysis results suggest a puckered oblate symmetric top carrier with a dipole moment aligned with the highest-order symmetry axis. An example candidate carrier consistent with these specifications is corannulene (C20H10), or one of its symmetric ionic or dehydrogenated forms, whose rotational constants are comparable to those obtained from spectral modeling of the DIB profiles.
Accounting for pharmacokinetic differences in dual-tracer receptor density imaging.
Tichauer, K M; Diop, M; Elliott, J T; Samkoe, K S; Hasan, T; St Lawrence, K; Pogue, B W
2014-05-21
Dual-tracer molecular imaging is a powerful approach to quantify receptor expression in a wide range of tissues by using an untargeted tracer to account for any nonspecific uptake of a molecular-targeted tracer. This approach has previously required the pharmacokinetics of the receptor-targeted and untargeted tracers to be identical, requiring careful selection of an ideal untargeted tracer for any given targeted tracer. In this study, methodology capable of correcting for tracer differences in arterial input functions, as well as binding-independent delivery and retention, is derived and evaluated in a mouse U251 glioma xenograft model using an Affibody tracer targeted to epidermal growth factor receptor (EGFR), a cell membrane receptor overexpressed in many cancers. Simulations demonstrated that blood, and to a lesser extent vascular-permeability, pharmacokinetic differences between targeted and untargeted tracers could be quantified by deconvolving the uptakes of the two tracers in a region of interest devoid of targeted tracer binding, and therefore corrected for, by convolving the uptake of the untargeted tracer in all regions of interest by the product of the deconvolution. Using fluorescently labeled, EGFR-targeted and untargeted Affibodies (known to have different blood clearance rates), the average tumor concentration of EGFR in four mice was estimated using dual-tracer kinetic modeling to be 3.9 ± 2.4 nM compared to an expected concentration of 2.0 ± 0.4 nM. However, with deconvolution correction a more equivalent EGFR concentration of 2.0 ± 0.4 nM was measured.
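A sketch of the deconvolution-convolution correction described here, assuming noise-free reference-region curves and a simple regularized Fourier deconvolution; the tracer kinetics in the example and the function names are invented for illustration and are not from the paper.

```python
import numpy as np

def deconvolve(num, den, eps=1e-2):
    """Regularised Fourier deconvolution: kernel k such that num ≈ den * k."""
    n = len(num)
    N = np.fft.rfft(num, 2 * n)
    D = np.fft.rfft(den, 2 * n)
    K = N * np.conj(D) / (np.abs(D) ** 2 + eps * np.max(np.abs(D)) ** 2)
    return np.fft.irfft(K, 2 * n)[:n]

def corrected_untargeted(untarg_roi, targ_ref, untarg_ref):
    """Correct the untargeted-tracer curve for pharmacokinetic differences.

    The kernel mapping untargeted -> targeted uptake is estimated in a
    reference region devoid of specific binding, then applied (by
    convolution) to the untargeted curve in the region of interest.
    """
    kernel = deconvolve(targ_ref, untarg_ref)
    return np.convolve(untarg_roi, kernel)[:len(untarg_roi)]

# Toy curves on a 1-minute grid (shapes and rates purely illustrative).
t = np.arange(60.0)
untarg_ref = np.exp(-t / 10.0) - np.exp(-t / 2.0)
targ_ref = np.convolve(untarg_ref, 0.2 * np.exp(-t / 15.0))[:60]
untarg_roi = 0.8 * untarg_ref
untarg_roi_corrected = corrected_untargeted(untarg_roi, targ_ref, untarg_ref)
```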
Adaptive optics image restoration based on frame selection and multi-frame blind deconvolution
NASA Astrophysics Data System (ADS)
Tian, Y.; Rao, C. H.; Wei, K.
2008-10-01
Adaptive optics can only partially compensate for image blur caused by atmospheric turbulence, owing to observing conditions and hardware restrictions. A post-processing method based on frame selection and multi-frame blind deconvolution is proposed to improve images partially corrected by adaptive optics. The appropriate frames, picked out by the frame selection technique, are deconvolved. No a priori knowledge is required except the positivity constraint. The method has been applied to the image restoration of celestial bodies observed by the 1.2 m telescope equipped with the 61-element adaptive optical system at Yunnan Observatory. The results show that the method can effectively improve images partially corrected by adaptive optics.
Digital sorting of complex tissues for cell type-specific gene expression profiles.
Zhong, Yi; Wan, Ying-Wooi; Pang, Kaifang; Chow, Lionel M L; Liu, Zhandong
2013-03-07
Cellular heterogeneity is present in almost all gene expression profiles. However, transcriptome analysis of tissue specimens often ignores the cellular heterogeneity present in these samples. Standard deconvolution algorithms require prior knowledge of the cell type frequencies within a tissue or their in vitro expression profiles. Furthermore, these algorithms tend to report biased estimations. Here, we describe a Digital Sorting Algorithm (DSA) for extracting cell-type specific gene expression profiles from mixed tissue samples that is unbiased and does not require prior knowledge of cell type frequencies. The results suggest that DSA is a specific and sensitive algorithm for gene expression profile deconvolution and will be useful in studying individual cell types of complex tissues.
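A sketch of the core linear deconvolution step, assuming the cell-type mixing fractions are already known (DSA itself first estimates them from marker genes before this step); the gamma-distributed signatures and Dirichlet-distributed fractions are simulated purely for illustration.

```python
import numpy as np

def sort_expression(mixed, fractions):
    """Recover cell-type-specific expression from mixed-tissue profiles.

    mixed : samples x genes expression matrix; fractions : samples x
    cell-types matrix of mixing proportions. Solving mixed ≈ fractions @ S
    in a least-squares sense yields the cell-type signature matrix S.
    """
    S, *_ = np.linalg.lstsq(fractions, mixed, rcond=None)
    return np.clip(S, 0.0, None)          # expression cannot be negative

rng = np.random.default_rng(8)
true_sig = rng.gamma(2.0, 5.0, size=(3, 100))       # 3 cell types, 100 genes
frac = rng.dirichlet(np.ones(3), size=20)           # 20 mixed samples
mixed = frac @ true_sig + 0.1 * rng.standard_normal((20, 100))
signatures = sort_expression(mixed, frac)
```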
Memory-effect based deconvolution microscopy for super-resolution imaging through scattering media
NASA Astrophysics Data System (ADS)
Edrei, Eitan; Scarcelli, Giuliano
2016-09-01
High-resolution imaging through turbid media is a fundamental challenge of optical sciences that has attracted a lot of attention in recent years for its wide range of potential applications. Here, we demonstrate that the resolution of imaging systems looking behind a highly scattering medium can be improved below the diffraction-limit. To achieve this, we demonstrate a novel microscopy technique enabled by the optical memory effect that uses a deconvolution image processing and thus it does not require iterative focusing, scanning or phase retrieval procedures. We show that this newly established ability of direct imaging through turbid media provides fundamental and practical advantages such as three-dimensional refocusing and unambiguous object reconstruction.
Memory-effect based deconvolution microscopy for super-resolution imaging through scattering media.
Edrei, Eitan; Scarcelli, Giuliano
2016-09-16
High-resolution imaging through turbid media is a fundamental challenge of optical sciences that has attracted a lot of attention in recent years for its wide range of potential applications. Here, we demonstrate that the resolution of imaging systems looking behind a highly scattering medium can be improved below the diffraction-limit. To achieve this, we demonstrate a novel microscopy technique enabled by the optical memory effect that uses a deconvolution image processing and thus it does not require iterative focusing, scanning or phase retrieval procedures. We show that this newly established ability of direct imaging through turbid media provides fundamental and practical advantages such as three-dimensional refocusing and unambiguous object reconstruction.
Array invariant-based ranging of a source of opportunity.
Byun, Gihoon; Kim, J S; Cho, Chomgun; Song, H C; Byun, Sung-Hoon
2017-09-01
The feasibility of tracking a ship radiating random and anisotropic noise is investigated using ray-based blind deconvolution (RBD) and array invariant (AI) with a vertical array in shallow water. This work is motivated by a recent report [Byun, Verlinden, and Sabra, J. Acoust. Soc. Am. 141, 797-807 (2017)] that RBD can be applied to ships of opportunity to estimate the Green's function. Subsequently, the AI developed for robust source-range estimation in shallow water can be applied to the estimated Green's function via RBD, exploiting multipath arrivals separated in beam angle and travel time. In this letter, a combination of the RBD and AI is demonstrated to localize and track a ship of opportunity (200-900 Hz) to within a 5% standard deviation of the relative range error along a track at ranges of 1.8-3.4 km, using a 16-element, 56-m long vertical array in approximately 100-m deep shallow water.
Computer-assisted bladder cancer grading: α-shapes for color space decomposition
NASA Astrophysics Data System (ADS)
Niazi, M. K. K.; Parwani, Anil V.; Gurcan, Metin N.
2016-03-01
According to the American Cancer Society, around 74,000 new cases of bladder cancer are expected during 2015 in the US. To facilitate bladder cancer diagnosis, we present an automatic method to differentiate carcinoma in situ (CIS) from normal/reactive cases that will work on hematoxylin and eosin (H&E) stained images of the bladder. The method automatically determines the color deconvolution matrix by utilizing the α-shapes of the color distribution in the RGB color space. Then, variations in the boundary of transitional epithelium are quantified, and sizes of nuclei in the transitional epithelium are measured. We also approximate the "nuclear to cytoplasmic ratio" by computing the ratio of the average shortest distance between transitional epithelium and nuclei to average nuclei size. Nuclei homogeneity is measured by computing the kurtosis of the nuclei size histogram. The results show that 30 out of 34 (88.2%) images were correctly classified by the proposed method, indicating that these novel features are viable markers to differentiate CIS from normal/reactive bladder.
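The color deconvolution step can be sketched as below, assuming a fixed stain matrix (here a widely quoted H&E reference set plus a residual channel) rather than the α-shape-derived matrix the paper computes automatically; the random image stands in for a real H&E tile.

```python
import numpy as np

def color_deconvolve(rgb, stain_matrix):
    """Separate stain contributions from an RGB image (Ruifrok-style).

    rgb : H x W x 3 uint8 image; stain_matrix : 3 x 3 matrix whose rows are
    unit-length optical-density vectors of the stains. The image is mapped
    to optical density and projected onto the inverse stain matrix.
    """
    od = -np.log10((rgb.astype(float) + 1.0) / 256.0)     # optical density
    flat = od.reshape(-1, 3)
    concentrations = flat @ np.linalg.inv(stain_matrix)
    return concentrations.reshape(rgb.shape)

# Commonly quoted H&E reference vectors (hematoxylin, eosin, residual).
M = np.array([[0.650, 0.704, 0.286],
              [0.072, 0.990, 0.105],
              [0.268, 0.570, 0.776]])
M = M / np.linalg.norm(M, axis=1, keepdims=True)
image = np.random.default_rng(9).integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
stain_maps = color_deconvolve(image, M)
```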
Dumitriu, Dani; Rodriguez, Alfredo; Morrison, John H.
2012-01-01
Morphological features such as size, shape and density of dendritic spines have been shown to reflect important synaptic functional attributes and potential for plasticity. Here we describe in detail a protocol for obtaining detailed morphometric analysis of spines using microinjection of fluorescent dyes, high resolution confocal microscopy, deconvolution and image analysis using NeuronStudio. Recent technical advancements include better preservation of tissue resulting in prolonged ability to microinject, and algorithmic improvements that compensate for the residual Z-smear inherent in all optical imaging. Confocal imaging parameters were probed systematically for the identification of both optimal resolution as well as highest efficiency. When combined, our methods yield size and density measurements comparable to serial section transmission electron microscopy in a fraction of the time. An experiment containing 3 experimental groups with 8 subjects in each can take as little as one month if optimized for speed, or approximately 4 to 5 months if the highest resolution and morphometric detail is sought. PMID:21886104
Cosmic Microwave Background Mapmaking with a Messenger Field
NASA Astrophysics Data System (ADS)
Huffenberger, Kevin M.; Næss, Sigurd K.
2018-01-01
We apply a messenger field method to solve the linear minimum-variance mapmaking equation in the context of Cosmic Microwave Background (CMB) observations. In simulations, the method produces sky maps that converge significantly faster than those from a conjugate gradient descent algorithm with a diagonal preconditioner, even though the computational cost per iteration is similar. The messenger method recovers large scales in the map better than conjugate gradient descent, and yields a lower overall χ2. In the single, pencil beam approximation, each iteration of the messenger mapmaking procedure produces an unbiased map, and the iterations become more optimal as they proceed. A variant of the method can handle differential data or perform deconvolution mapmaking. The messenger method requires no preconditioner, but a high-quality solution needs a cooling parameter to control the convergence. We study the convergence properties of this new method and discuss how the algorithm is feasible for the large data sets of current and future CMB experiments.
Cai, Yangping; Li, Youshan; Li, Shu; Gao, Tian; Zhang, Lu; Yang, Zhe; Fan, Zhengfu; Bai, Chujie
2016-09-01
The objective of this article is to develop and validate the level A in vitro-in vivo correlation (IVIVC) for three different formulations of tramadol hydrochloride. The formulations included were Tramazac® (M1, conventional tablet), TRD CONTIN® (M2, sustained release tablet), and a new controlled release tablet prepared on the basis of osmotic technology (formulation IVB). To develop the level A IVIVC, in vivo data were deconvoluted into absorption data using the Wagner-Nelson equation. The absorption data (percent drug absorbed) were plotted against percent drug dissolved, keeping the former along the x-axis and the latter along the y-axis. The highest determination coefficient (R² = 0.9278) of the level A IVIVC was observed for formulation M1, followed by M2 (R² = 0.9046) and IVB (R² = 0.8796). Additionally, plasma drug levels were approximated from in vitro dissolution data using a convolution approach to calculate the prediction error (%), which was found to be < 10%.
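A sketch of the Wagner-Nelson deconvolution used to convert plasma concentrations into percent absorbed, assuming a known elimination rate constant and trapezoidal AUC; the one-compartment profile used to exercise it is illustrative, not the study data.

```python
import numpy as np

def wagner_nelson(t, conc, ke):
    """Wagner-Nelson deconvolution: percent absorbed versus time.

    Fa(t) = (C(t) + ke * AUC[0,t]) / (ke * AUC[0,inf]); the tail beyond the
    last sample is approximated as C_last / ke.
    """
    auc_t = np.concatenate(
        ([0.0], np.cumsum(np.diff(t) * (conc[1:] + conc[:-1]) / 2.0)))
    auc_inf = auc_t[-1] + conc[-1] / ke
    fa = (conc + ke * auc_t) / (ke * auc_inf)
    return 100.0 * fa

# Illustrative one-compartment oral profile (ka = 1.0 1/h, ke = 0.2 1/h).
ka, ke = 1.0, 0.2
t = np.array([0, 0.5, 1, 2, 3, 4, 6, 8, 12, 24], float)
conc = (ka / (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))
percent_absorbed = wagner_nelson(t, conc, ke)
```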
Experimental validation of a linear model for data reduction in chirp-pulse microwave CT.
Miyakawa, M; Orikasa, K; Bertero, M; Boccacci, P; Conte, F; Piana, M
2002-04-01
Chirp-pulse microwave computerized tomography (CP-MCT) is an imaging modality developed at the Department of Biocybernetics, University of Niigata (Niigata, Japan), which intends to reduce the microwave-tomography problem to an X-ray-like situation. We have recently shown that data acquisition in CP-MCT can be described in terms of a linear model derived from scattering theory. In this paper, we validate this model by showing that the theoretically computed response function is in good agreement with the one obtained from a regularized multiple deconvolution of three data sets measured with the prototype of CP-MCT. Furthermore, the reliability of the model, as far as image restoration is concerned, is tested under space-invariant conditions by considering the reconstruction of simple on-axis cylindrical phantoms.
Kinematic model for the space-variant image motion of star sensors under dynamical conditions
NASA Astrophysics Data System (ADS)
Liu, Chao-Shan; Hu, Lai-Hong; Liu, Guang-Bin; Yang, Bo; Li, Ai-Jun
2015-06-01
A kinematic description of a star spot in the focal plane is presented for star sensors under dynamical conditions, which involves all necessary parameters such as the image motion, velocity, and attitude parameters of the vehicle. Stars at different locations of the focal plane correspond to the slightly different orientation and extent of motion blur, which characterize the space-variant point spread function. Finally, the image motion, the energy distribution, and centroid extraction are numerically investigated using the kinematic model under dynamic conditions. A centroid error of eight successive iterations <0.002 pixel is used as the termination criterion for the Richardson-Lucy deconvolution algorithm. The kinematic model of a star sensor is useful for evaluating the compensation algorithms of motion-blurred images.
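A sketch of the Richardson-Lucy deconvolution referred to above, with a simple change-based stopping rule standing in for the centroid-error criterion; the diagonal motion-blur PSF and the 64 x 64 star image are assumptions made for illustration.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=30, tol=0.002):
    """Richardson-Lucy deconvolution with a simple convergence check.

    Iterations stop when the maximum pixel change between successive
    estimates falls below tol (a stand-in for a centroid-based criterion).
    """
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(blurred, blurred.mean())
    for _ in range(n_iter):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(reblurred, 1e-12)
        new = estimate * fftconvolve(ratio, psf_mirror, mode="same")
        if np.max(np.abs(new - estimate)) < tol:
            estimate = new
            break
        estimate = new
    return estimate

# Toy star spot smeared by a slanted motion-blur PSF.
rng = np.random.default_rng(10)
scene = np.zeros((64, 64)); scene[32, 32] = 100.0
psf = np.zeros((15, 15)); psf[np.arange(15), np.arange(15)] = 1.0
blurred = fftconvolve(scene, psf / psf.sum(), mode="same") + 0.01 * rng.random((64, 64))
restored = richardson_lucy(blurred, psf)
```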
NASA Technical Reports Server (NTRS)
Hoge, F. E.; Swift, R. N.
1983-01-01
Airborne lidar oil spill experiments carried out to determine the practicability of the AOFSCE (absolute oil fluorescence spectral conversion efficiency) computational model are described. The results reveal that the model is suitable over a considerable range of oil film thicknesses provided the fluorescence efficiency of the oil does not approach the minimum detection sensitivity limitations of the lidar system. Separate airborne lidar experiments to demonstrate measurement of the water column Raman conversion efficiency are also conducted to ascertain the ultimate feasibility of converting such relative oil fluorescence to absolute values. Whereas the AOFSCE model is seen as highly promising, further airborne water column Raman conversion efficiency experiments with improved temporal or depth-resolved waveform calibration and software deconvolution techniques are thought necessary for a final determination of suitability.
Short-term nonmigrating tide variability in the mesosphere, thermosphere, and ionosphere
NASA Astrophysics Data System (ADS)
Pedatella, N. M.; Oberheide, J.; Sutton, E. K.; Liu, H.-L.; Anderson, J. L.; Raeder, K.
2016-04-01
The intraseasonal variability of the eastward propagating nonmigrating diurnal tide with zonal wave number 3 (DE3) during 2007 in the mesosphere, ionosphere, and thermosphere is investigated using a whole atmosphere model reanalysis and satellite observations. The atmospheric reanalysis is based on implementation of data assimilation in the Whole Atmosphere Community Climate Model (WACCM) using the Data Assimilation Research Testbed (DART) ensemble Kalman filter. The tidal variability in the WACCM+DART reanalysis is compared to the observed variability in the mesosphere and lower thermosphere (MLT) based on the Thermosphere Ionosphere Mesosphere Energetics Dynamics satellite Sounding of the Atmosphere using Broadband Emission Radiometry (TIMED/SABER) observations, in the ionosphere based on Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) observations, and in the upper thermosphere (˜475 km) based on Gravity Recovery and Climate Experiment (GRACE) neutral density observations. To obtain the short-term DE3 variability in the MLT and upper thermosphere, we apply the method of tidal deconvolution to the TIMED/SABER observations and consider the difference in the ascending and descending longitudinal wave number 4 structure in the GRACE observations. The results reveal that tidal amplitude changes of 5-10 K regularly occur on short timescales (˜10-20 days) in the MLT. Similar variability occurs in the WACCM+DART reanalysis and TIMED/SABER observations, demonstrating that the short-term variability can be captured in whole atmosphere models that employ data assimilation and in observations by the technique of tidal deconvolution. The impact of the short-term DE3 variability in the MLT on the ionosphere and thermosphere is also clearly evident in the COSMIC and GRACE observations. Analysis of the troposphere forcing in WACCM+DART and simulations of the Global Scale Wave Model (GSWM) show that the short-term DE3 variability in the MLT is not related to a single source; rather, it is due to a combination of changes in troposphere forcing, zonal mean atmosphere, and wave-wave interactions.
NASA Astrophysics Data System (ADS)
Floberg, J. M.; Holden, J. E.
2013-02-01
We introduce a method for denoising dynamic PET data, spatio-temporal expectation-maximization (STEM) filtering, that combines four-dimensional Gaussian filtering with EM deconvolution. The initial Gaussian filter suppresses noise at a broad range of spatial and temporal frequencies, and EM deconvolution quickly restores the frequencies most important to the signal. We aim to demonstrate that STEM filtering can reduce variance in both individual time frames and in parametric images without introducing significant bias. We evaluate STEM filtering with a dynamic phantom study, and with simulated and human dynamic PET studies of a tracer with reversible binding behaviour, [C-11]raclopride, and a tracer with irreversible binding behaviour, [F-18]FDOPA. STEM filtering is compared to a number of established three- and four-dimensional denoising methods. STEM filtering provides substantial reductions in variance in both individual time frames and in parametric images generated with a number of kinetic analysis techniques, while introducing little bias. STEM filtering does bias early frames, but this does not affect quantitative parameter estimates. STEM filtering is shown to be superior to the other simple denoising methods studied. STEM filtering is a simple and effective denoising method that could be valuable for a wide range of dynamic PET applications.
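A minimal sketch of the STEM idea, not the authors' code: the dynamic PET array (frames, z, y, x) is first smoothed with a four-dimensional Gaussian, and a few EM (Richardson-Lucy-type) iterations with the same Gaussian taken as the blurring operator then restore the frequencies most important to the signal. The kernel widths and iteration count are assumed values.

```python
# Illustrative STEM-style filter: 4-D Gaussian smoothing followed by EM
# deconvolution using the same Gaussian as the assumed point spread function.
import numpy as np
from scipy.ndimage import gaussian_filter

def stem_filter(dynamic_pet, sigma=(1.0, 2.0, 2.0, 2.0), n_em_iter=10):
    data = np.clip(dynamic_pet.astype(float), 0, None)
    smoothed = gaussian_filter(data, sigma)          # step 1: 4-D Gaussian filter
    estimate = smoothed.copy()
    for _ in range(n_em_iter):                       # step 2: EM deconvolution
        reblurred = gaussian_filter(estimate, sigma)
        ratio = smoothed / np.maximum(reblurred, 1e-12)
        estimate *= gaussian_filter(ratio, sigma)    # Gaussian kernel is symmetric
    return estimate
```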
NASA Astrophysics Data System (ADS)
Enguita, Jose M.; Álvarez, Ignacio; González, Rafael C.; Cancelas, Jose A.
2018-01-01
The problem of restoration of a high-resolution image from several degraded versions of the same scene (deconvolution) has received attention in recent years in fields such as optics and computer vision. Deconvolution methods are usually based on sets of images taken with small (sub-pixel) displacements or slightly different focus. Techniques based on sets of images obtained with different point-spread functions (PSFs) engineered by an optical system are less popular and mostly restricted to microscopic systems, where a spot of light is projected onto the sample under investigation, which is then scanned point by point. In this paper, we use the effect of conical diffraction to shape the PSFs in a full-field macroscopic imaging system. We describe a series of simulations and real experiments that help to evaluate the possibilities of the system, showing the enhancement in image contrast even at frequencies that are strongly filtered by the lens transfer function or when sampling near the Nyquist frequency. Although results are preliminary and there is room to optimize the prototype, the idea shows promise to overcome the limitations of image sensor technology in many fields, such as forensic, medical, satellite, or scientific imaging.
Bade, Richard; Causanilles, Ana; Emke, Erik; Bijlsma, Lubertus; Sancho, Juan V; Hernandez, Felix; de Voogt, Pim
2016-11-01
A screening approach was applied to influent and effluent wastewater samples. After injection into an LC-LTQ-Orbitrap, data analysis was performed using two deconvolution tools, MsXelerator (modules MPeaks and MS Compare) and Sieve 2.1. The outputs were searched incorporating an in-house database of >200 pharmaceuticals and illicit drugs or ChemSpider. This hidden target screening approach led to the detection of numerous compounds, including the illicit drug cocaine and its metabolite benzoylecgonine and the pharmaceuticals carbamazepine, gemfibrozil and losartan. The compounds found using both approaches were combined, and isotopic pattern and retention time prediction were used to filter out false positives. The remaining potential positives were reanalysed in MS/MS mode and their product ions were compared with literature and/or mass spectral libraries. The inclusion of the chemical database ChemSpider led to the tentative identification of several metabolites, including paraxanthine, theobromine, theophylline and carboxylosartan, as well as the pharmaceutical phenazone. The first three of these compounds are isomers, and they were subsequently distinguished based on their product ions and predicted retention times. This work has shown that the use of deconvolution tools facilitates non-target screening and enables the identification of a higher number of compounds. Copyright © 2016 Elsevier B.V. All rights reserved.
Krishnan, Shaji; Verheij, Elwin E R; Bas, Richard C; Hendriks, Margriet W B; Hankemeier, Thomas; Thissen, Uwe; Coulier, Leon
2013-05-15
Mass spectra obtained by deconvolution of liquid chromatography/high-resolution mass spectrometry (LC/HRMS) data can be impaired by non-informative mass-to-charge (m/z) channels. This impairment of mass spectra can have a significant negative influence on further post-processing, like quantification and identification. A metric derived from the knowledge of errors in isotopic distribution patterns, and quality of the signal within a pre-defined mass chromatogram block, has been developed to pre-select all informative m/z channels. This procedure results in the clean-up of deconvoluted mass spectra by retaining the intensity counts from m/z channels that originate from a specific compound (for example, the molecular ion, adducts, 13C isotopes, and multiply charged ions) and removing all m/z channels that are not related to the specific peak. The methodology has been successfully demonstrated for two sets of high-resolution LC/MS data. The approach described is therefore thought to be a useful tool in the automatic processing of LC/HRMS data. It clearly shows the advantages compared to other approaches like peak picking and de-isotoping in the sense that all information is retained while non-informative data is removed automatically. Copyright © 2013 John Wiley & Sons, Ltd.
An integrated analysis-synthesis array system for spatial sound fields.
Bai, Mingsian R; Hua, Yi-Hsin; Kuo, Chia-Hao; Hsieh, Yu-Hao
2015-03-01
An integrated recording and reproduction array system for spatial audio is presented within a generic framework akin to the analysis-synthesis filterbanks in discrete-time signal processing. In the analysis stage, a microphone array "encodes" the sound field by using plane-wave decomposition. The directions of arrival of the plane-wave components that comprise the sound field of interest are estimated by multiple signal classification. Next, the source signals are extracted by using a deconvolution procedure. In the synthesis stage, a loudspeaker array "decodes" the sound field by reconstructing the plane-wave components obtained in the analysis stage. This synthesis stage is carried out by pressure matching in the interior domain of the loudspeaker array. The deconvolution problem is solved by truncated singular value decomposition or convex optimization algorithms. For high-frequency reproduction, which suffers from the spatial aliasing problem, vector panning is utilized. Listening tests are undertaken to evaluate the deconvolution method, vector panning, and a hybrid approach that combines both methods to cover frequency ranges below and above the spatial aliasing frequency. Localization and timbral attributes are considered in the subjective evaluation. The results show that the hybrid approach performs best in overall preference. In addition, there is a trade-off between reproduction performance and external radiation.
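The truncated-SVD step of the deconvolution can be sketched as follows, assuming a steering matrix A (microphones by plane-wave directions) at a single frequency bin and the measured microphone spectra p. The relative truncation threshold is an assumed parameter, and this is an illustration rather than the authors' implementation.

```python
# Hedged sketch of source-signal extraction by truncated-SVD deconvolution:
# solve A s ~= p while discarding small singular values to stabilise the inverse.
import numpy as np

def tsvd_solve(A, p, rel_threshold=1e-2):
    U, svals, Vh = np.linalg.svd(A, full_matrices=False)
    keep = svals > rel_threshold * svals[0]          # truncate small singular values
    inv_s = np.where(keep, 1.0 / svals, 0.0)
    return Vh.conj().T @ (inv_s * (U.conj().T @ p))  # regularised plane-wave amplitudes
```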
NASA Astrophysics Data System (ADS)
Gurrola, H.; Berdine, A.; Pulliam, J.
2017-12-01
Interference between Ps phases and reverberations (PPs and PSs phases and reverberations thereof) makes it difficult to use Ps receiver functions (RF) in regions with thick sediments. Crustal reverberations typically interfere with Ps phases from the lithosphere-asthenosphere boundary (LAB). We have developed a method to separate Ps phases from reverberations by deconvolving all the data recorded at a seismic station and removing the phases from a single wavefront at each iteration of the deconvolution (wavefield iterative deconvolution, or WID). We applied WID to data collected in the Gulf Coast and Llano Front regions of Texas by the EarthScope Transportable Array and by a temporary deployment of 23 broadband seismometers (deployed by Texas Tech and Baylor Universities). The 23-station temporary deployment was 300 km long, crossing from Matagorda Island onto the Llano uplift. 3-D imaging using these data shows that the deepest part of the sedimentary basin may be inboard of the coastline. The Moho beneath the Gulf Coast plain does not appear in many of the images. This could be due to interference from reverberations from shallower layers, or it may indicate the lack of a strong velocity contrast at the Moho, perhaps due to serpentinization of the uppermost mantle. The Moho appears to be flat, at about 40 km, beneath most of the Llano uplift but may thicken to the south and thin beneath the Coastal plain. After application of WID, we were able to identify a negatively polarized Ps phase consistent with LAB depths identified in Sp RF images. The LAB appears to be 80-100 km deep beneath most of the coast but is 100 to 120 km deep beneath the Llano uplift. There are other negatively polarized phases between 160 and 200 km depths beneath the Gulf Coast and the Llano uplift. These deeper phases may indicate that, in this region, the LAB is transitional in nature rather than a discrete boundary.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ulmer, W.
2015-06-15
Purpose: Knowledge of the total nuclear cross-section Qtot(E) of therapeutic protons provides important information in advanced radiotherapy with protons, such as the decrease of the fluence of primary protons, the release of secondary particles (neutrons, protons, deuterons, etc.), and the production of nuclear fragments (heavy recoils), which usually undergo β+/− decay by emission of γ-quanta. Therefore, the determination of Qtot(E) is an important tool for sophisticated calculation algorithms of dose distributions. This cross-section can be determined by a linear combination of shifted Gaussian kernels and an error function. The resonances resulting from deconvolutions in the energy space can be associated with typical nuclear reactions. Methods: The described method for the determination of Qtot(E) results from an extension of the Breit-Wigner formula and a rather extended version of nuclear shell theory that includes nuclear correlation effects, clusters, and highly excited/virtually excited nuclear states. The elastic energy transfer of protons to nucleons (the quantum numbers of the target nucleus remain constant) can be removed by the mentioned deconvolution. Results: The deconvolution of the error-function term, of the form c_erf · erf((E - E_Th)/σ_erf), is the main contribution for obtaining the various nuclear reactions as resonances, since the elastic part of the energy transfer is removed. The nuclear products of various elements of therapeutic interest, such as oxygen and calcium, are classified and calculated. Conclusions: The release of neutrons is completely underrated, in particular for low-energy protons. The transport of secondary particles, e.g., clusters formed by deuterium, tritium, and α-particles, contributes essentially to the secondary-particle field, and the heavy recoils, which create γ-quanta by decay reactions, lead to broadening of the scatter profiles. These contributions cannot be accounted for by a single Gaussian kernel in the description of lateral scatter.
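A hedged illustration of the described decomposition, not the paper's code: the total cross-section is modeled as an error-function term plus a small number of shifted Gaussian kernels and fitted by nonlinear least squares. The number of Gaussians, the initial guesses, and the parameter names are placeholder assumptions.

```python
# Illustrative fit of Q_tot(E) = c_erf*erf((E - E_Th)/sigma_erf) + sum of Gaussians.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def q_tot(E, c_erf, E_th, sigma_erf, a1, mu1, s1, a2, mu2, s2):
    background = c_erf * erf((E - E_th) / sigma_erf)            # elastic-transfer part
    resonances = (a1 * np.exp(-0.5 * ((E - mu1) / s1) ** 2)
                  + a2 * np.exp(-0.5 * ((E - mu2) / s2) ** 2))  # resonance-like terms
    return background + resonances

# usage sketch: popt, _ = curve_fit(q_tot, E_data, Q_data, p0=initial_guess)
```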
NASA Technical Reports Server (NTRS)
Scargle, Jeffrey D.
1990-01-01
While chaos arises only in nonlinear systems, standard linear time series models are nevertheless useful for analyzing data from chaotic processes. This paper introduces such a model, the chaotic moving average. This time-domain model is based on the theorem that any chaotic process can be represented as the convolution of a linear filter with an uncorrelated process called the chaotic innovation. A technique, minimum phase-volume deconvolution, is introduced to estimate the filter and innovation. The algorithm measures the quality of a model using the volume covered by the phase-portrait of the innovation process. Experiments on synthetic data demonstrate that the algorithm accurately recovers the parameters of simple chaotic processes. Though tailored for chaos, the algorithm can detect both chaos and randomness, distinguish them from each other, and separate them if both are present. It can also recover nonminimum-delay pulse shapes in non-Gaussian processes, both random and chaotic.
STARBLADE: STar and Artefact Removal with a Bayesian Lightweight Algorithm from Diffuse Emission
NASA Astrophysics Data System (ADS)
Knollmüller, Jakob; Frank, Philipp; Ensslin, Torsten A.
2018-05-01
STARBLADE (STar and Artefact Removal with a Bayesian Lightweight Algorithm from Diffuse Emission) separates superimposed point-like sources from a diffuse background by imposing physically motivated models as prior knowledge. The algorithm can also be used on noisy and convolved data, though performing a proper reconstruction including a deconvolution prior to the application of the algorithm is advised; the algorithm could also be used within a denoising imaging method. STARBLADE learns the correlation structure of the diffuse emission and takes it into account to determine the occurrence and strength of a superimposed point source.
Dasgupta, Purnendu K
2008-12-05
Resolution of overlapped chromatographic peaks is generally accomplished by modeling the peaks as Gaussian or modified Gaussian functions. It is possible, even preferable, to use actual single analyte input responses for this purpose and a nonlinear least squares minimization routine such as that provided by Microsoft Excel Solver can then provide the resolution. In practice, the quality of the results obtained varies greatly due to small shifts in retention time. I show here that such deconvolution can be considerably improved if one or more of the response arrays are iteratively shifted in time.
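The idea translates directly to any nonlinear least-squares routine; the sketch below uses scipy in place of Excel Solver and is not the author's workbook. Each single-analyte response is given an amplitude and a small fractional time shift, and both are fitted simultaneously; the maximum allowed shift is an assumed bound.

```python
# Hedged sketch: resolve an overlapped chromatogram as a sum of measured
# single-analyte responses, each with a fitted amplitude and retention-time shift.
import numpy as np
from scipy.optimize import least_squares

def shift(signal, delta):
    """Shift a sampled response by a (possibly fractional) number of points."""
    n = np.arange(len(signal))
    return np.interp(n - delta, n, signal, left=0.0, right=0.0)

def resolve(mixture, responses, max_shift=5.0):
    k = len(responses)

    def residual(params):
        amps, deltas = params[:k], params[k:]
        model = sum(a * shift(r, d) for a, r, d in zip(amps, responses, deltas))
        return model - mixture

    x0 = np.concatenate([np.ones(k), np.zeros(k)])
    bounds = (np.concatenate([np.zeros(k), -max_shift * np.ones(k)]),
              np.concatenate([np.full(k, np.inf), max_shift * np.ones(k)]))
    return least_squares(residual, x0, bounds=bounds)
```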
Large seismic source imaging from old analogue seismograms
NASA Astrophysics Data System (ADS)
Caldeira, Bento; Buforn, Elisa; Borges, José; Bezzeghoud, Mourad
2017-04-01
In this work we present a procedure to recover ground motions in a proper digital form from old seismograms on analogue physical supports (paper or microfilm), in order to study the source rupture process with modern finite-source inversion tools. Regardless of the quality of the analogue data and of the digitizing technology available, recovering ground motions with accurate metrics from old seismograms is often an intricate procedure. Frequently the general parameters of the analogue instrument response that allow the shape of the ground motion to be recovered (free periods and damping) are known, but the magnification that restores its metric is dubious. It is in these situations that the procedure applies. The procedure is based on assigning the moment magnitude value to the integral of the apparent source time function (STF), estimated by deconvolution of a synthetic elementary seismogram from the related observed seismogram corrected with an instrument response affected by improper magnification. Two delicate issues in the process are (1) the computation of the synthetic elementary seismograms, which must include later phases if applied to large earthquakes (the signal portions should be 3 or 4 times longer than the rupture time), and (2) the deconvolution used to calculate the apparent STF. In the present version of the procedure, the Direct Solution Method was used to compute the elementary seismograms, and the deconvolution was performed in the time domain by an iterative algorithm that constrains the STF to remain positive and time limited. The method was examined using synthetic data to test its accuracy and robustness. Finally, a set of 17 old analogue seismograms from the 1939 Santa Maria (Azores) earthquake (Mw = 7.1) was used to recover the waveforms in the required digital form, from which inversion yields the finite-source rupture model (slip distribution). Acknowledgements: This work is co-financed by the European Union through the European Regional Development Fund under COMPETE 2020 (Operational Program for Competitiveness and Internationalization) through the ICT project (UID / GEO / 04683/2013) under the reference POCI-01-0145 -FEDER-007690.
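The constrained time-domain deconvolution can be sketched as a projected Landweber-type iteration; this is an assumed formulation, not the authors' algorithm. The observed seismogram d, the synthetic elementary seismogram g, and the allowed STF duration (in samples) are taken as given, and the estimate is projected onto non-negative values and a finite support after every gradient step.

```python
# Hedged sketch of positive, time-limited iterative time-domain deconvolution
# for an apparent source time function (STF).
import numpy as np
from scipy.signal import fftconvolve

def constrained_deconvolution(d, g, support, n_iter=200, step=None):
    if step is None:
        step = 1.0 / np.sum(np.abs(g)) ** 2          # conservative step size for stability
    stf = np.zeros(len(d) - len(g) + 1)
    for _ in range(n_iter):
        residual = d - fftconvolve(stf, g, mode="full")
        stf += step * np.correlate(residual, g, mode="valid")  # gradient (adjoint) step
        stf = np.clip(stf, 0.0, None)                # positivity constraint
        stf[support:] = 0.0                          # time-limited support
    return stf
```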
NASA Astrophysics Data System (ADS)
Song, J.; Liu, K. H.; Yu, Y.; Mickus, K. L.; Gao, S. S.
2017-12-01
The Williston Basin of the northcentral United States and southern Canada is a typical intracratonic sag basin, with nearly continuous subsidence from the Cambrian to the Jurassic. A number of contrasting models on the subsidence mechanism of this approximately circular basin have been proposed. While in principle 3D variations of crustal thickness, layering, and Poisson's ratio can provide essential constraints on the models, thick layers of Phanerozoic sediment with up to 4.5 km thickness prevented reliable determinations of those crustal properties using active or passive source seismic techniques. Specifically, the strong reverberations of teleseismic P-to-S converted waves (a.k.a. receiver functions or RFs) from the Moho and intracrustal interfaces in the loose sedimentary layer can severely contaminate the RFs. Here we use RFs recorded by about 200 USArray and other stations in the Williston Basin and adjacent areas to obtain spatial distributions of the crustal properties. We have found that virtually all of the RFs recorded by stations in the Basin contain strong reverberations, which are effectively removed using a recently developed deconvolution-based filter (Yu et al., 2015, DOI: 10.1002/2014JB011610). A "double Moho" structure is clearly imaged beneath the Basin. The top interface has a depth of about 40 km beneath the Basin, and shallows gradually toward the east from the depocenter. It joins with the Moho beneath the western margin of the Superior Craton, where the crust is about 30 km thick. The bottom interface has a depth of 55 km beneath the Wyoming Craton, and deepens to about 70 km beneath the depocenter. Based on preliminary results of H-k stacking and gravity modeling, we interpret the layer between the two interfaces as a high density, probably eclogized layer. Continuous eclogitization from the Cambrian to the Jurassic resulted in the previously observed rates of subsidence being nearly linear rather than exponential.
Imaging of stellar surfaces with the Occamian approach and the least-squares deconvolution technique
NASA Astrophysics Data System (ADS)
Järvinen, S. P.; Berdyugina, S. V.
2010-10-01
Context. We present in this paper a new technique for the indirect imaging of stellar surfaces (Doppler imaging, DI), when low signal-to-noise spectral data have been improved by the least-squares deconvolution (LSD) method and inverted into temperature maps with the Occamian approach. We apply this technique to both simulated and real data and investigate its applicability for different stellar rotation rates and noise levels in the data. Aims: Our goal is to boost the signal of spots in spectral lines and to reduce the effect of photon noise without losing the temperature information in the lines. Methods: We simulated data from a test star, to which we added different amounts of noise, and employed the inversion technique based on the Occamian approach with and without LSD. In order to be able to infer a temperature map from LSD profiles, we applied the LSD technique for the first time to both the simulated observations and theoretical local line profiles, which remain dependent on temperature and limb angles. We also investigated how the excitation energy of individual lines affects the obtained solution by using three submasks that have lines with low, medium, and high excitation energy levels. Results: We show that our novel approach enables us to overcome the limitations of the two-temperature approximation, which was previously employed for LSD profiles, and to obtain true temperature maps with stellar atmosphere models. The resulting maps agree well with those obtained using the inversion code without LSD, provided the data are noiseless. However, using LSD is only advisable for poor signal-to-noise data. Further, we show that the Occamian technique, both with and without LSD, approaches the surface temperature distribution reasonably well for an adequate spatial resolution. Thus, the stellar rotation rate has a great influence on the result. For instance, in a slowly rotating star, closely situated spots are usually recovered blurred and unresolved, which affects the obtained temperature range of the map. This limitation is critical for small unresolved cool spots and is common for all DI techniques. Finally, the LSD method was carried out for high signal-to-noise observations of the young active star V889 Her: the maps obtained with and without LSD are found to be consistent. Conclusions: Our new technique provides meaningful information on the temperature distribution on stellar surfaces, which was previously inaccessible in DI with LSD. Our approach can be easily adopted for any other multi-line techniques.
Optimal 2D-SIM reconstruction by two filtering steps with Richardson-Lucy deconvolution.
Perez, Victor; Chang, Bo-Jui; Stelzer, Ernst Hans Karl
2016-11-16
Structured illumination microscopy relies on reconstruction algorithms to yield super-resolution images. Artifacts can arise in the reconstruction and affect the image quality. Current reconstruction methods involve a parametrized apodization function and a Wiener filter. Empirically tuning the parameters in these functions can minimize artifacts, but such an approach is subjective and produces volatile results. We present a robust and objective method that yields optimal results by two straightforward filtering steps with Richardson-Lucy-based deconvolutions. We provide a resource to identify artifacts in 2D-SIM images by analyzing two main reasons for artifacts, out-of-focus background and a fluctuating reconstruction spectrum. We show how the filtering steps improve images of test specimens, microtubules, yeast and mammalian cells.
Inverting pump-probe spectroscopy for state tomography of excitonic systems.
Hoyer, Stephan; Whaley, K Birgitta
2013-04-28
We propose a two-step protocol for inverting ultrafast spectroscopy experiments on a molecular aggregate to extract the time-evolution of the excited state density matrix. The first step is a deconvolution of the experimental signal to determine a pump-dependent response function. The second step inverts this response function to obtain the quantum state of the system, given a model for how the system evolves following the probe interaction. We demonstrate this inversion analytically and numerically for a dimer model system, and evaluate the feasibility of scaling it to larger molecular aggregates such as photosynthetic protein-pigment complexes. Our scheme provides a direct alternative to the approach of determining all Hamiltonian parameters and then simulating excited state dynamics.
Development and use of Fourier self deconvolution and curve-fitting in the study of coal oxidation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pierce, J.A.
1986-01-01
Techniques have been developed for modeling highly overlapped band multiplets. The method is based on a least-squares fit of spectra by a series of bands of known shape. Using synthetic spectra, it was shown that when bands are separated by less than their full width at half height (FWHH), valid analytical data can only be obtained after the width of each component band is narrowed by Fourier self deconvolution (FSD). The optimum method of spectral fitting determined from the study of synthetic spectra was then applied to the characterization of oxidized coals. A medium-volatile bituminous coal, which was air oxidized at 200 °C for different lengths of time, was extracted with chloroform. A comparison of the infrared spectra of the whole coal and the extract indicated that the extracted material contains a smaller amount of carbonyl, ether, and ester groups, while the aromatic content is much higher. Oxidation does not significantly affect the aromatic content of the whole coal. Most of the aromatic groups in the CHCl3 extract show evidence of reaction, however. The production of relatively large amounts of intramolecular aromatic anhydrides is seen in the spectrum of the extract of coals which have undergone extensive oxidation, while there is only a slight indication of this anhydride in the whole coal.
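A minimal sketch of Fourier self deconvolution, not the original software: the spectrum is Fourier transformed, multiplied by a rising exponential that undoes the assumed Lorentzian line shape of half-width gamma, apodized to limit noise amplification, and transformed back, narrowing each band by roughly the chosen factor. The half-width, narrowing factor, and apodization cutoff are assumed parameters.

```python
# Illustrative Fourier self deconvolution (FSD) of a spectrum sampled at
# spacing dx (e.g. cm^-1), assuming Lorentzian bands of half-width gamma.
import numpy as np

def fourier_self_deconvolution(spectrum, dx, gamma, narrowing=2.0, cut_frac=0.5):
    n = len(spectrum)
    interferogram = np.fft.rfft(spectrum)
    x = np.arange(len(interferogram)) / (n * dx)     # conjugate (retardation) axis
    # net band-narrowing factor exp(2*pi*gamma*(1 - 1/K)*x)
    boost = np.exp(2.0 * np.pi * gamma * (1.0 - 1.0 / narrowing) * x)
    x_max = x[-1] * cut_frac
    apod = np.clip(1.0 - x / x_max, 0.0, None)       # triangular apodization limits noise
    return np.fft.irfft(interferogram * boost * apod, n)
```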
Gravity and Seismic Investigations of the Northern Rio Grande Rift Area, New Mexico
NASA Astrophysics Data System (ADS)
Braile, L. W.; Deepak, A.; Helprin, O.; Kondas, S.; Maguire, H.; McCallister, B.; Orubu, A.; Rijfkogel, L.; Schumann, H.; Vannette, M.; Wanpiyarat, N.; Carchedi, C.; Ferguson, J. F.; McPhee, D.; Biehler, S.; Ralston, M. D.; Baldridge, W. S.
2017-12-01
Participants in the Summer of Applied Geophysical Experience (SAGE, a research and education program in applied geophysics for undergraduate and graduate students) have studied the northern Rio Grande rift (RGR) area of New Mexico for the past thirty-five years. In recent years, the SAGE program has focused on the western edge of the Española basin and the transition into the Santo Domingo basin and the Valles caldera. During this time, we have collected about 50 km of seismic reflection and refraction data along approximately east-west profiles using a 120-channel data acquisition system with a 20 m station interval and a Vibroseis source. We also have access to several energy-industry seismic reflection record sections from the 1970s in the study area. During SAGE 2017, new gravity measurements north of the Jemez Mountains and a seismic reflection profile (Rio de Truchas Profile) in the Velarde graben adjacent to the eastern boundary of the RGR have added new constraints to a west-to-east transect in the area of the northern RGR. The recorded near-vertical and wide-angle seismic reflection data were processed to produce a CMP (common midpoint) stacked record section. Bandpass filtering, muting, deconvolution, and F-K velocity filtering were found to be effective in enhancing the seismic reflections. Modeling and interpretation of the northern RGR west-to-east geophysical profile indicate that the sedimentary rock fill in the Velarde graben is at least 3 km thick near the center of the graben. Gravity modeling also suggests the presence of a high-density intrusion at the top of the crystalline basement in an area to the north and west of Abiquiu, NM.
The combined effects of topography and vegetation on catchment connectivity
NASA Astrophysics Data System (ADS)
Nippgen, F.; McGlynn, B. L.; Emanuel, R. E.
2012-12-01
The deconvolution of whole catchment runoff response into its temporally dynamic source areas is a grand challenge in hydrology. The extent to which the intersection of static and dynamic catchment characteristics (e.g. topography and vegetation) influences water redistribution within a catchment and the hydrologic connectivity of hillslopes to the riparian and stream system is largely unknown. Over time, patterns of catchment storage shift and, because of threshold connectivity behavior, catchment areas become disconnected from the stream network. We developed a simple but spatially distributed modeling framework that explicitly incorporates static (topography) and dynamic (vegetation) catchment structure to document the evolution of catchment connectivity over the course of a water year. We employed directly measured eddy-covariance evapotranspiration data co-located within a highly instrumented (>150 recording groundwater wells) and gauged catchment to parse the effect of current and zero vegetation scenarios on the temporal evolution of hydrologic connectivity. In the absence of vegetation, and thus in the absence of evapotranspiration, modeled absolute connectivity was 4.5% greater during peak flow and 3.9% greater during late summer baseflow when compared to the actual vegetation scenario. The most significant differences in connected catchment area between current and zero vegetation (14.9%) occurred during the recession period in early July, when water and energy availability were at an optimum. However, the greatest relative difference in connected area occurs during the late summer baseflow period, when the absence of evapotranspiration results in a connected area approximately 500% greater than when vegetation is present, while the relative increase during peak flow is just 6%. Changes in connected areas ultimately lead us to propose a biologically modified geomorphic width function. This biogeomorphic width function is the result of lateral water redistribution driven by topography and water uptake by vegetation.
The lithosphere-asthenosphere boundary beneath the South Island of New Zealand
NASA Astrophysics Data System (ADS)
Hua, Junlin; Fischer, Karen M.; Savage, Martha K.
2018-02-01
Lithosphere-asthenosphere boundary (LAB) properties beneath the South Island of New Zealand have been imaged by Sp receiver function common-conversion point stacking. In this transpressional boundary between the Australian and Pacific plates, dextral offset on the Alpine fault and convergence have occurred for the past 20 My, with the Alpine fault now bounded by Australian plate subduction to the south and Pacific plate subduction to the north. Using data from onland seismometers, especially the 29 broadband stations of the New Zealand permanent seismic network (GeoNet), we obtained 24,971 individual receiver functions by extended-time multi-taper deconvolution, and mapped them to three-dimensional space using a Fresnel zone approximation. Pervasive strong positive Sp phases are observed in the LAB depth range indicated by surface wave tomography. These phases are interpreted as conversions from a velocity decrease across the LAB. In the central South Island, the LAB is observed to be deeper and broader to the northwest of the Alpine fault. The deeper LAB to the northwest of the Alpine fault is consistent with models in which oceanic lithosphere attached to the Australian plate was partially subducted, or models in which the Pacific lithosphere has been underthrust northwest past the Alpine fault. Further north, a zone of thin lithosphere with a strong and vertically localized LAB velocity gradient occurs to the northwest of the fault, juxtaposed against a region of anomalously weak LAB conversions to the southeast of the fault. This structure could be explained by lithospheric blocks with contrasting LAB properties that meet beneath the Alpine fault, or by the effects of Pacific plate subduction. The observed variations in LAB properties indicate strong modification of the LAB by the interplay of convergence and strike-slip deformation along and across this transpressional plate boundary.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Hui; Chou, Dean-Yi, E-mail: chou@phys.nthu.edu.tw
The solar acoustic waves are modified by the interaction with sunspots. The interaction can be treated as a scattering problem: an incident wave propagating toward a sunspot is scattered by the sunspot into different modes. The absorption cross section and scattering cross section are two important parameters in the scattering problem. In this study, we use the wavefunction of the scattered wave, measured with a deconvolution method, to compute the absorption cross section σ_ab and the scattering cross section σ_sc for the radial order n = 0–5 for two sunspots, NOAA 11084 and NOAA 11092. In the computation of the cross sections, the random noise and dissipation in the measured acoustic power are corrected. For both σ_ab and σ_sc, the value of NOAA 11092 is greater than that of NOAA 11084, but their overall n dependence is similar: decreasing with n. The ratio of σ_ab of NOAA 11092 to that of NOAA 11084 approximately equals the ratio of sunspot radii for all n, while the ratio of σ_sc of the two sunspots is greater than the ratio of sunspot radii and increases with n. This suggests that σ_ab is approximately proportional to the sunspot radius, while the dependence of σ_sc on radius is faster than the linear increase.
Effects of radiative heat transfer on the turbulence structure in inert and reacting mixing layers
NASA Astrophysics Data System (ADS)
Ghosh, Somnath; Friedrich, Rainer
2015-05-01
We use large-eddy simulation to study the interaction between turbulence and radiative heat transfer in low-speed inert and reacting plane temporal mixing layers. An explicit filtering scheme based on approximate deconvolution is applied to treat the closure problem arising from quadratic nonlinearities of the filtered transport equations. In the reacting case, the working fluid is a mixture of ideal gases where the low-speed stream consists of hydrogen and nitrogen and the high-speed stream consists of oxygen and nitrogen. Both streams are premixed in a way that the free-stream densities are the same and the stoichiometric mixture fraction is 0.3. The filtered heat release term is modelled using equilibrium chemistry. In the inert case, the low-speed stream consists of nitrogen at a temperature of 1000 K and the high-speed stream is pure water vapour at 2000 K, when radiation is turned off. Simulations assuming the gas mixtures as gray gases with artificially increased Planck mean absorption coefficients are performed in which the large-eddy simulation code and the radiation code PRISSMA are fully coupled. In both cases, radiative heat transfer is found to clearly affect fluctuations of thermodynamic variables, Reynolds stresses, and Reynolds stress budget terms like pressure-strain correlations. Source terms in the transport equation for the variance of temperature are used to explain the decrease of this variance in the reacting case and its increase in the inert case.
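The approximate-deconvolution closure mentioned above can be illustrated in one dimension with a van Cittert series; this is a generic sketch, not the authors' LES code, and the three-point filter weights and series order are assumed choices.

```python
# Illustrative 1-D approximate deconvolution (van Cittert series):
# u_star = sum_{k=0..N} (I - G)^k applied to the filtered field, with G a discrete filter.
import numpy as np

def discrete_filter(u):
    """Simple three-point low-pass filter with periodic boundaries."""
    return 0.25 * np.roll(u, 1) + 0.5 * u + 0.25 * np.roll(u, -1)

def approximate_deconvolution(u_filtered, order=5):
    correction = u_filtered.copy()
    u_star = u_filtered.copy()
    for _ in range(order):
        correction = correction - discrete_filter(correction)  # apply (I - G) repeatedly
        u_star += correction
    return u_star
```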
NASA Astrophysics Data System (ADS)
Qiu, Xiang; Dai, Ming; Yin, Chuan-li
2017-09-01
Unmanned aerial vehicle (UAV) remote imaging is affected by bad weather, and the obtained images suffer from low contrast, complex texture, and blurring. In this paper, we propose a blind deconvolution model based on multiple-scattering atmospheric point spread function (APSF) estimation to restore remote sensing images. According to Narasimhan's analytical theory, a new multiple-scattering restoration model is established based on the improved dichromatic model. Then, using the L0-norm sparse priors of the gradient and the dark channel to estimate the APSF blur kernel, the fast Fourier transform is used to recover the original clear image by Wiener filtering. Compared with other state-of-the-art methods, the proposed method correctly estimates the blur kernel, effectively removes atmospheric degradation, preserves image detail, and improves the quality evaluation indexes.
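Once the blur kernel has been estimated, the final Wiener-filtering step can be sketched as below; this is a generic frequency-domain Wiener deconvolution, not the authors' implementation, and the noise-to-signal ratio is an assumed regularization constant.

```python
# Hedged sketch: recover a clear image from a blurred one given an estimated
# kernel, using Wiener filtering in the Fourier domain.
import numpy as np

def wiener_deconvolve(blurred, kernel, nsr=1e-2):
    pad = np.zeros_like(blurred, dtype=float)
    kh, kw = kernel.shape
    pad[:kh, :kw] = kernel
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))  # centre kernel at origin
    H = np.fft.fft2(pad)
    G = np.fft.fft2(blurred.astype(float))
    F = np.conj(H) * G / (np.abs(H) ** 2 + nsr)                # Wiener filter
    return np.real(np.fft.ifft2(F))
```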
Dipy, a library for the analysis of diffusion MRI data.
Garyfallidis, Eleftherios; Brett, Matthew; Amirbekian, Bagrat; Rokem, Ariel; van der Walt, Stefan; Descoteaux, Maxime; Nimmo-Smith, Ian
2014-01-01
Diffusion Imaging in Python (Dipy) is a free and open source software project for the analysis of data from diffusion magnetic resonance imaging (dMRI) experiments. dMRI is an application of MRI that can be used to measure structural features of brain white matter. Many methods have been developed to use dMRI data to model the local configuration of white matter nerve fiber bundles and infer the trajectory of bundles connecting different parts of the brain. Dipy gathers implementations of many different methods in dMRI, including: diffusion signal pre-processing; reconstruction of diffusion distributions in individual voxels; fiber tractography and fiber track post-processing, analysis and visualization. Dipy aims to provide transparent implementations for all the different steps of dMRI analysis with a uniform programming interface. We have implemented classical signal reconstruction techniques, such as the diffusion tensor model and deterministic fiber tractography. In addition, cutting edge novel reconstruction techniques are implemented, such as constrained spherical deconvolution and diffusion spectrum imaging (DSI) with deconvolution, as well as methods for probabilistic tracking and original methods for tractography clustering. Many additional utility functions are provided to calculate various statistics, informative visualizations, as well as file-handling routines to assist in the development and use of novel techniques. In contrast to many other scientific software projects, Dipy is not being developed by a single research group. Rather, it is an open project that encourages contributions from any scientist/developer through GitHub and open discussions on the project mailing list. Consequently, Dipy today has an international team of contributors, spanning seven different academic institutions in five countries and three continents, which is still growing.
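A minimal usage sketch of the kind of workflow Dipy supports, fitting the classical diffusion tensor model and computing a fractional-anisotropy map; the file names are placeholders and the module paths follow recent Dipy releases, so they may differ between versions.

```python
# Assumed input files; fits the diffusion tensor model voxel-wise with Dipy.
from dipy.io.image import load_nifti
from dipy.io.gradients import read_bvals_bvecs
from dipy.core.gradients import gradient_table
from dipy.reconst.dti import TensorModel

data, affine = load_nifti("dwi.nii.gz")                  # 4-D diffusion volume
bvals, bvecs = read_bvals_bvecs("dwi.bval", "dwi.bvec")  # acquisition scheme
gtab = gradient_table(bvals, bvecs)

tensor_fit = TensorModel(gtab).fit(data)                 # voxel-wise tensor fit
fa_map = tensor_fit.fa                                   # fractional anisotropy map
```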
NASA Astrophysics Data System (ADS)
Jo, J. A.; Fang, Q.; Papaioannou, T.; Qiao, J. H.; Fishbein, M. C.; Beseth, B.; Dorafshar, A. H.; Reil, T.; Baker, D.; Freischlag, J.; Marcu, L.
2006-02-01
This study introduces new methods of time-resolved laser-induced fluorescence spectroscopy (TR-LIFS) data analysis for tissue characterization. These analytical methods were applied for the detection of atherosclerotic vulnerable plaques. Upon pulsed nitrogen laser (337 nm, 1 ns) excitation, TR-LIFS measurements were obtained from carotid atherosclerotic plaque specimens (57 endarterectomy patients) at 492 distinct areas. The emission was both spectrally (360-600 nm range at 5 nm intervals) and temporally (0.3 ns resolution) resolved using a prototype clinically compatible fiber-optic catheter TR-LIFS apparatus. The TR-LIFS measurements were subsequently analyzed using a standard multiexponential deconvolution and a recently introduced Laguerre deconvolution technique. Based on their histopathology, the lesions were classified as early (thin intima), fibrotic (collagen-rich intima), and high-risk (thin cap over necrotic core and/or inflamed intima). Stepwise linear discriminant analysis (SLDA) was applied for lesion classification. Normalized spectral intensity values and Laguerre expansion coefficients (LEC) at discrete emission wavelengths (390, 450, 500 and 550 nm) were used as features for classification. The Laguerre-based SLDA classifier provided discrimination of high-risk lesions with high sensitivity (SE>81%) and specificity (SP>95%). Based on these findings, we believe that TR-LIFS information derived from the Laguerre expansion coefficients can provide a valuable additional dimension for the diagnosis of high-risk vulnerable atherosclerotic plaques.
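The Laguerre deconvolution step can be sketched as a linear least-squares expansion of the fluorescence impulse response on discrete Laguerre functions; this is an illustrative formulation, not the authors' code, and the Laguerre scale alpha, the expansion order, and the recurrence initialization are assumed choices.

```python
# Hedged sketch of Laguerre deconvolution of a measured fluorescence decay,
# given the measured excitation (instrument) pulse sampled on the same grid.
import numpy as np

def laguerre_basis(n_samples, order, alpha):
    """Discrete Laguerre functions built by the standard recurrence (assumed form)."""
    b = np.zeros((order, n_samples))
    b[0] = np.sqrt(1.0 - alpha) * alpha ** (np.arange(n_samples) / 2.0)
    for j in range(1, order):
        for n in range(n_samples):
            prev = b[j, n - 1] if n > 0 else 0.0
            prev_order = b[j - 1, n - 1] if n > 0 else 0.0
            b[j, n] = np.sqrt(alpha) * prev + np.sqrt(alpha) * b[j - 1, n] - prev_order
    return b

def laguerre_deconvolve(decay, excitation, order=4, alpha=0.9):
    basis = laguerre_basis(len(decay), order, alpha)
    # design matrix: each Laguerre function convolved with the excitation pulse
    design = np.stack([np.convolve(excitation, bj)[:len(decay)] for bj in basis], axis=1)
    coeffs, *_ = np.linalg.lstsq(design, decay, rcond=None)
    impulse_response = basis.T @ coeffs              # reconstructed fluorescence IRF
    return coeffs, impulse_response
```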