Deconvoluting lung evolution: from phenotypes to gene regulatory networks
Torday, John S.; Rehan, Virender K.; Hicks, James W.; Wang, Tobias; Maina, John; Weibel, Ewald R.; Hsia, Connie C.W.; Sommer, Ralf J.; Perry, Steven F.
2007-01-01
Speakers in this symposium presented examples of respiratory regulation that broadly illustrate principles of evolution from whole organ to genes. The swim bladder and lungs of aquatic and terrestrial organisms arose independently from a common primordial “respiratory pharynx” but not from each other. Pathways of lung evolution are similar between crocodiles and birds but a low compliance of mammalian lung may have driven the development of the diaphragm to permit lung inflation during inspiration. To meet the high oxygen demands of flight, bird lungs have evolved separate gas exchange and pump components to achieve unidirectional ventilation and minimize dead space. The process of “screening” (removal of oxygen from inspired air prior to entering the terminal units) reduces effective alveolar oxygen tension and potentially explains why nonathletic large mammals possess greater pulmonary diffusing capacities than required by their oxygen consumption. The “primitive” central admixture of oxygenated and deoxygenated blood in the incompletely divided reptilian heart is actually co-regulated with other autonomic cardiopulmonary responses to provide flexible control of arterial oxygen tension independent of ventilation as well as a unique mechanism for adjusting metabolic rate. Some of the most ancient oxygen-sensing molecules, i.e., hypoxia-inducible factor-1alpha and erythropoietin, are up-regulated during mammalian lung development and growth under apparently normoxic conditions, suggesting functional evolution. Normal alveolarization requires pleiotropic growth factors acting via highly conserved cell–cell signal transduction, e.g., parathyroid hormone-related protein transducing at least partly through the Wingless/int pathway. The latter regulates morphogenesis from nematode to mammal. If there is commonality among these diverse respiratory processes, it is that all levels of organization, from molecular signaling to structure to function, co-evolve progressively, and optimize an existing gas-exchange framework. PMID:20607138
Chae, Kum Ju; Goo, Jin Mo; Ahn, Su Yeon; Yoo, Jin Young; Yoon, Soon Ho
2018-01-01
To evaluate observer preference for the image quality of chest radiography processed with a point spread function (PSF) deconvolution algorithm (TRUVIEW ART, DRTECH Corp.) compared with original chest radiography for visualization of anatomic regions of the chest. Fifty prospectively enrolled pairs of posteroanterior chest radiographs, collected with the standard protocol and additionally processed with the TRUVIEW ART algorithm, were compared by four chest radiologists. This algorithm corrects scattered signals generated by the scintillator. Readers independently evaluated the visibility of 10 anatomical regions and overall image quality on a 5-point preference scale. The significance of differences in reader preference was tested with the Wilcoxon signed rank test. All four readers preferred the algorithm-processed images to those without the algorithm for all 10 anatomical regions (mean, 3.6; range, 3.2-4.0; p < 0.001) and for overall image quality (mean, 3.8; range, 3.3-4.0; p < 0.001). The most preferred anatomical regions were the azygoesophageal recess, thoracic spine, and unobscured lung. The visibility of chest anatomical structures with the PSF deconvolution algorithm was superior to that of the original chest radiography.
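The preference analysis above is a standard paired nonparametric comparison. As a minimal sketch (not the study's code), the snippet below applies the Wilcoxon signed-rank test to hypothetical 5-point reader scores, testing whether the median preference differs from the neutral value of 3; all data are simulated.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
n_pairs = 50
# Hypothetical 5-point preference scores for 50 radiograph pairs
# (3 = no preference, >3 favors the deconvolution algorithm).
scores = rng.integers(2, 6, size=n_pairs).astype(float)

# Wilcoxon signed-rank test of the scores against the neutral value 3.
stat, p = wilcoxon(scores - 3.0)
print(f"mean preference = {scores.mean():.2f}, p = {p:.4f}")
```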
Guo, Shicheng; Diep, Dinh; Plongthongkum, Nongluk; Fung, Ho-Lim; Zhang, Kang; Zhang, Kun
2017-04-01
Adjacent CpG sites in mammalian genomes can be co-methylated owing to the processivity of methyltransferases or demethylases, yet discordant methylation patterns have also been observed, which are related to stochastic or uncoordinated molecular processes. We focused on a systematic search and investigation of regions in the full human genome that show highly coordinated methylation. We defined 147,888 blocks of tightly coupled CpG sites, called methylation haplotype blocks, after analysis of 61 whole-genome bisulfite sequencing data sets and validation with 101 reduced-representation bisulfite sequencing data sets and 637 methylation array data sets. Using a metric called methylation haplotype load, we performed tissue-specific methylation analysis at the block level. Subsets of informative blocks were further identified for deconvolution of heterogeneous samples. Finally, using methylation haplotypes we demonstrated quantitative estimation of tumor load and tissue-of-origin mapping in the circulating cell-free DNA of 59 patients with lung or colorectal cancer.
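The block-level deconvolution of heterogeneous samples described above can be pictured as a constrained linear unmixing problem. The sketch below, with illustrative names and simulated data rather than the authors' pipeline, recovers tissue mixing fractions from a reference matrix of per-tissue methylation haplotype load by non-negative least squares.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_blocks, n_tissues = 500, 5
ref = rng.random((n_blocks, n_tissues))        # MHL reference per tissue (toy)

true_frac = np.array([0.70, 0.20, 0.05, 0.03, 0.02])
observed = ref @ true_frac + 0.01 * rng.standard_normal(n_blocks)

# Non-negative least squares recovers mixing fractions; renormalize to 1.
frac, _ = nnls(ref, observed)
frac /= frac.sum()
print(np.round(frac, 3))
```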
Evolution of lung breathing from a lungless primitive vertebrate.
Hoffman, M; Taylor, B E; Harris, M B
2016-04-01
Air breathing was critical to the terrestrial radiation and evolution of tetrapods and arose in fish. The vertebrate lung originated from a progenitor structure present in primitive bony fish. The origin of the neural substrates, which are sensitive to metabolically produced CO2 and which rhythmically activate respiratory muscles to match lung ventilation to metabolic demand, is enigmatic. We have found that a distinct periodic centrally generated rhythm, described as "cough" and occurring in lamprey in vivo and in vitro, is modulated by central sensitivity to CO2. This suggests that elements critical for the evolution of breathing in tetrapods were present in the most basal vertebrate ancestors prior to the evolution of the lung. We propose that the evolution of breathing in all vertebrates occurred through exaptations derived from these critical basal elements. Copyright © 2015 Elsevier B.V. All rights reserved.
Evolution and development of gas exchange structures in Mammalia: the placenta and the lung.
Mess, Andrea M; Ferner, Kirsten J
2010-08-31
Appropriate oxygen supply is crucial for organisms. Here we examine the evolution of structures associated with the delivery of oxygen in the pre- and postnatal phases in mammals. There is an enormous structural and functional variability in the placenta that has facilitated the evolution of specialized reproductive strategies, such as precociality. In particular the cell layers separating fetal and maternal blood differ markedly: a non-invasive epitheliochorial placenta, which increases the diffusion distance, represents a derived state in ungulates. Rodents and their relatives have an invasive haemochorial placental type as optimum for the diffusion distance. In contrast, lung development is highly conserved and differences in the lungs of neonates can be explained by different developmental rates. Monotremes and marsupials have altricial stages with lungs at the early saccular phase, whereas newborn eutherians have lungs at the late saccular or alveolar phase. In conclusion, the evolution of exchange structures in the pre- and postnatal periods does not follow similar principles. Copyright © 2010 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Jimenez-Ruiz, A.; Carnerero, J. M.; Castillo, P. M.; Prado-Gotor, R.
2017-01-01
Low-generation polyamidoamine (PAMAM) dendrimers are known to adsorb on the surface of gold nanoparticles (AuNPs) causing aggregation and color changes. In this paper, a thorough study of this affinity using absorption spectroscopy, colorimetric, and emission methods has been carried out. Results show that, for citrate-capped gold nanoparticles, interaction with the dendrimer is not only of an electrostatic character but instead occurs, at least in part, through the dendrimer's uncharged internal amino groups. The possibilities of the CIELab chromaticity system parameters' evolution have also been explored in order to quantify dendrimer interaction with the red-colored nanoparticles. By measuring and quantifying 17 nm citrate-capped AuNP color changes, which are strongly dependent on their aggregation state, binding free energies are obtained for the first time for these systems. Results are confirmed via an alternate fitting method which makes use of deconvolution parameters from absorbance spectra. Binding free energies obtained through the use of both means are in good agreement with each other.
Application of an NLME-Stochastic Deconvolution Approach to Level A IVIVC Modeling.
Kakhi, Maziar; Suarez-Sharp, Sandra; Shepard, Terry; Chittenden, Jason
2017-07-01
Stochastic deconvolution is a parameter estimation method that calculates drug absorption using a nonlinear mixed-effects model in which the random effects associated with absorption represent a Wiener process. The present work compares (1) stochastic deconvolution and (2) numerical deconvolution, using clinical pharmacokinetic (PK) data generated for an in vitro-in vivo correlation (IVIVC) study of extended release (ER) formulations of a Biopharmaceutics Classification System class III drug substance. The preliminary analysis found that numerical and stochastic deconvolution yielded superimposable fraction absorbed (Fabs) versus time profiles when supplied with exactly the same externally determined unit impulse response parameters. In a separate analysis, a full population-PK/stochastic deconvolution was applied to the clinical PK data. Scenarios were considered in which immediate release (IR) data were either retained or excluded to inform parameter estimation. The resulting Fabs profiles were then used to model level A IVIVCs. All the considered stochastic deconvolution scenarios, and numerical deconvolution, yielded on average similar results with respect to the IVIVC validation. These results could be achieved with stochastic deconvolution without recourse to IR data. Unlike numerical deconvolution, this also implies that in crossover studies where certain individuals do not receive an IR treatment, their ER data alone can still be included as part of the IVIVC analysis. Published by Elsevier Inc.
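As a rough illustration of the numerical-deconvolution arm of such an IVIVC analysis (the unit impulse response and input rate below are assumptions, not the study's parameters), plasma concentration is modeled as the convolution of an in vivo input rate with the unit impulse response, and the fraction absorbed follows from a non-negatively constrained inverse:

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.optimize import nnls

dt = 0.25                                  # h
t = np.arange(0, 24, dt)
uir = np.exp(-0.3 * t) - np.exp(-2.0 * t)  # hypothetical unit impulse response

true_rate = 0.1 * np.exp(-0.15 * t)        # hypothetical ER input rate
conc = np.convolve(uir, true_rate)[: len(t)] * dt

# Lower-triangular convolution matrix: conc = A @ rate.
A = toeplitz(uir, np.zeros(len(t))) * dt
rate, _ = nnls(A, conc)

fabs = np.cumsum(rate) * dt
fabs /= fabs[-1]                           # normalize to fraction absorbed
print(fabs[::16])
```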
Semi-quantitative assessment of pulmonary perfusion in children using dynamic contrast-enhanced MRI
NASA Astrophysics Data System (ADS)
Fetita, Catalin; Thong, William E.; Ou, Phalla
2013-03-01
This paper addresses the semi-quantitative assessment of pulmonary perfusion from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) in a study population mainly composed of children with pulmonary malformations. The proposed automatic analysis approach is based on the indicator-dilution theory introduced in 1954. First, a robust method is developed to segment the pulmonary artery and the lungs from anatomical MRI data, exploiting 2D and 3D mathematical morphology operators. Second, the time-dependent contrast signal of the lung regions is deconvolved by the arterial input function to assess the local hemodynamic parameters, i.e., mean transit time, pulmonary blood volume, and pulmonary blood flow. The discrete deconvolution method implemented here uses truncated singular value decomposition (tSVD). Parametric images for the entire lungs are generated as additional elements for diagnosis and quantitative follow-up. The preliminary results attest to the feasibility of perfusion quantification in pulmonary DCE-MRI and open an interesting alternative to scintigraphy for this type of evaluation, to be considered at least as a preliminary step in diagnosis given the wide availability of the technique and its non-invasive nature.
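A minimal sketch of the tSVD deconvolution step, assuming a synthetic gamma-variate arterial input function and an exponential residue function (the real pipeline operates on segmented lung regions):

```python
import numpy as np
from scipy.linalg import toeplitz

dt = 1.0                                   # s
t = np.arange(0, 60, dt)
aif = (t / 8.0) ** 2 * np.exp(-t / 4.0)    # hypothetical gamma-variate AIF
residue = np.exp(-t / 6.0)                 # true residue function, MTT = 6 s
tissue = np.convolve(aif, residue)[: len(t)] * dt

A = toeplitz(aif, np.zeros(len(t))) * dt   # convolution matrix
U, s, Vt = np.linalg.svd(A)

# Truncate singular values below 10% of the maximum (the tSVD step).
s_inv = np.where(s > 0.1 * s[0], 1.0 / s, 0.0)
residue_est = Vt.T @ (s_inv * (U.T @ tissue))
print(f"estimated MTT ~ {residue_est.sum() * dt / residue_est.max():.1f} s")
```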
Lung Cancer: Posttreatment Imaging: Radiation Therapy and Imaging Findings.
Benveniste, Marcelo F; Welsh, James; Viswanathan, Chitra; Shroff, Girish S; Betancourt Cuellar, Sonia L; Carter, Brett W; Marom, Edith M
2018-05-01
In this review, we discuss the different radiation delivery techniques available to treat non-small cell lung cancer, typical radiologic manifestations of conventional radiotherapy, and different patterns of lung injury and temporal evolution of the newer radiotherapy techniques. More sophisticated techniques include intensity-modulated radiotherapy, stereotactic body radiotherapy, proton therapy, and respiration-correlated computed tomography or 4-dimensional computed tomography for radiotherapy planning. Knowledge of the radiation treatment plan and technique, the completion date of radiotherapy, and the temporal evolution of radiation-induced lung injury is important to identify expected manifestations of radiation-induced lung injury and differentiate them from tumor recurrence or infection. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Cheng, Yao; Zhou, Ning; Zhang, Weihua; Wang, Zhiwei
2018-07-01
Minimum entropy deconvolution is a widely used tool in machinery fault diagnosis because it enhances the impulsive component of the signal. The filter coefficients that largely determine the performance of minimum entropy deconvolution are calculated by an iterative procedure. This paper proposes an improved deconvolution method for the fault detection of rolling element bearings. The proposed method solves for the filter coefficients with the standard particle swarm optimization algorithm, assisted by a generalized spherical coordinate transformation. When the filter is optimized to enhance the impulses that indicate faulty rolling element bearings, the proposed method outperforms the classical minimum entropy deconvolution method. The proposed method was validated on simulated and experimental signals from railway bearings. In both the simulation and experimental studies, the proposed method delivered better deconvolution performance than the classical minimum entropy deconvolution method, especially at low signal-to-noise ratio.
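The core idea, enhancing impulsiveness by choosing filter coefficients that maximize the kurtosis of the filtered signal, can be sketched as below. SciPy's differential evolution stands in here for the paper's particle swarm optimizer (an assumption made for brevity), and the bearing-like signal is simulated:

```python
import numpy as np
from scipy.optimize import differential_evolution
from scipy.stats import kurtosis

rng = np.random.default_rng(2)
n = 2048
impulses = np.zeros(n)
impulses[::128] = 1.0                           # periodic fault impulses
h = np.exp(-np.arange(32) / 4.0)                # transmission-path smearing
x = np.convolve(impulses, h)[:n] + 0.2 * rng.standard_normal(n)

def neg_kurtosis(w):
    y = np.convolve(x, w, mode="same")
    return -kurtosis(y)                         # maximize impulsiveness

flen = 16                                       # filter length (assumption)
res = differential_evolution(neg_kurtosis, [(-1, 1)] * flen,
                             maxiter=50, seed=2)
print(f"kurtosis raw = {kurtosis(x):.2f}, filtered = {-res.fun:.2f}")
```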
Partial Deconvolution with Inaccurate Blur Kernel.
Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei
2017-10-17
Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of estimated blur kernel. And partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternatively. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.
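A minimal 1-D sketch of the underlying idea, masking untrusted Fourier entries of the estimated kernel before inversion; the mask threshold, regularization constant, and signal are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 256
x = np.zeros(n)
x[60:90] = 1.0
x[150] = 2.0                                       # latent sharp signal
k = np.exp(-0.5 * ((np.arange(n) - n // 2) / 3.0) ** 2)
k /= k.sum()
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(np.fft.ifftshift(k))))
y += 0.01 * rng.standard_normal(n)                 # blurred, noisy observation

K = np.fft.fft(np.fft.ifftshift(k))
mask = np.abs(K) > 0.05                            # "partial map" of trusted entries
Kinv = np.where(mask, np.conj(K) / (np.abs(K) ** 2 + 1e-3), 0.0)
x_hat = np.real(np.fft.ifft(np.fft.fft(y) * Kinv))
print(f"restoration error = {np.linalg.norm(x_hat - x) / np.linalg.norm(x):.3f}")
```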
Lung dynamic MRI deblurring using low-rank decomposition and dictionary learning.
Gou, Shuiping; Wang, Yueyue; Wu, Jiaolong; Lee, Percy; Sheng, Ke
2015-04-01
Lung dynamic MRI (dMRI) has emerged as an appealing tool to quantify lung motion for both planning and treatment guidance purposes. However, this modality can result in blurry images due to intrinsically low signal-to-noise ratio in the lung and spatial/temporal interpolation. The image blurring could adversely affect image processing that depends on the availability of fine landmarks. The purpose of this study is to reduce dMRI blurring using image postprocessing. To enhance the image quality and exploit the spatiotemporal continuity of dMRI sequences, a low-rank decomposition and dictionary learning (LDDL) method was employed to deblur lung dMRI and enhance the conspicuity of lung blood vessels. Fifty continuous 2D coronal dMRI frames acquired with a steady-state free precession sequence were obtained from five subjects, including two healthy volunteers and three lung cancer patients. In LDDL, the lung dMRI was decomposed into sparse and low-rank components. Dictionary learning was employed to estimate the blurring kernel based on the whole image, the low-rank component, or the sparse component of the first image in the lung MRI sequence. Deblurring was performed on the whole image sequences using deconvolution based on the estimated blur kernel. The deblurring results were quantified using an automated blood vessel extraction method based on the classification of Hessian matrix filtered images. Accuracy of automated extraction was calculated using manual segmentation of the blood vessels as the ground truth. In the pilot study, LDDL based on the blurring kernel estimated from the sparse component led to performance superior to the other ways of kernel estimation. LDDL consistently improved image contrast and fine feature conspicuity of the original MRI without introducing artifacts. The accuracy of automated blood vessel extraction was on average increased by 16% using manual segmentation as the ground truth. Image blurring in dMRI images can be effectively reduced using a low-rank decomposition and dictionary learning method with kernels estimated from the sparse component.
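A crude sketch of the first step of such a pipeline, splitting a frame stack into a low-rank (static anatomy) part and a sparse (localized change) part; a fixed-rank SVD projection with hard thresholding stands in for the paper's optimization, and the data are toy:

```python
import numpy as np

rng = np.random.default_rng(4)
n_pix, n_frames = 1024, 50
background = np.outer(rng.random(n_pix), np.ones(n_frames))   # static anatomy
motion = (rng.random((n_pix, n_frames)) > 0.98) * 1.0         # sparse events
M = background + motion + 0.01 * rng.standard_normal((n_pix, n_frames))

U, s, Vt = np.linalg.svd(M, full_matrices=False)
rank = 2
L = (U[:, :rank] * s[:rank]) @ Vt[:rank]        # low-rank component
S = M - L
S[np.abs(S) < 0.5] = 0.0                        # keep only sparse outliers
print(f"rank-{rank} energy: {s[:rank].sum() / s.sum():.2f}, "
      f"sparse fraction: {(S != 0).mean():.3f}")
```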
Deconvolution method for accurate determination of overlapping peak areas in chromatograms.
Nelson, T J
1991-12-20
A method is described for deconvoluting chromatograms which contain overlapping peaks. Parameters can be selected to ensure that attenuation of peak areas is uniform over any desired range of peak widths. A simple extension of the method greatly reduces the negative overshoot frequently encountered with deconvolutions. The deconvoluted chromatograms are suitable for integration by conventional methods.
NASA Astrophysics Data System (ADS)
Oda, Hirokuni; Xuan, Chuang
2014-10-01
The development of pass-through superconducting rock magnetometers (SRM) has greatly promoted the collection of paleomagnetic data from continuous long-core samples. The output of a pass-through measurement is smoothed and distorted due to convolution of magnetization with the magnetometer sensor response. Although several studies could restore high-resolution paleomagnetic signals through deconvolution of pass-through measurements, difficulties in accurately measuring the magnetometer sensor response have hindered the application of deconvolution. We acquired a reliable sensor response of an SRM at Oregon State University based on repeated measurements of a precisely fabricated magnetic point source. In addition, we present an improved deconvolution algorithm based on Akaike's Bayesian Information Criterion (ABIC) minimization, incorporating new parameters to account for errors in sample measurement position and length. The new algorithm was tested using synthetic data constructed by convolving a "true" paleomagnetic signal containing an "excursion" with the sensor response. Realistic noise was added to the synthetic measurement using a Monte Carlo method based on the measurement noise distribution acquired from 200 repeated measurements of a u-channel sample. Deconvolution of 1000 synthetic measurements with realistic noise closely resembled the "true" magnetization and successfully restored fine-scale magnetization variations, including the "excursion." Our analyses show that inaccuracy in sample measurement position and length significantly affects the deconvolution estimation and can be resolved using the new deconvolution algorithm. Optimized deconvolution of 20 repeated measurements of a u-channel sample yielded highly consistent deconvolution results and estimates of error in sample measurement position and length, demonstrating the reliability of the new deconvolution algorithm for real pass-through measurements.
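The essence of such deconvolution, inverting a convolution with the sensor response under a smoothness constraint, can be sketched as follows; the Gaussian response, the second-difference penalty, and the fixed regularization weight (which the paper instead selects by ABIC minimization) are all assumptions:

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(5)
n = 150                                          # measurement positions, 1 cm apart
offsets = np.arange(n, dtype=float)
g = np.exp(-0.5 * (offsets / 4.0) ** 2)          # symmetric sensor response
A = toeplitz(g)                                  # A[i, j] = g(|i - j|)
A /= A[n // 2].sum()                             # normalize response area

m = np.zeros(n)
m[40:70] = 1.0
m[90:95] = -0.8                                  # "true" magnetization
meas = A @ m + 0.005 * rng.standard_normal(n)

# Second-difference operator penalizes rough solutions; the smoothness
# weight lam would be chosen by ABIC minimization in the paper.
D = np.diff(np.eye(n), n=2, axis=0)
lam = 0.1
m_hat = np.linalg.solve(A.T @ A + lam * (D.T @ D), A.T @ meas)
print(f"relative error = {np.linalg.norm(m_hat - m) / np.linalg.norm(m):.3f}")
```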
NASA Astrophysics Data System (ADS)
Xuan, Chuang; Oda, Hirokuni
2015-11-01
The rapid accumulation of continuous paleomagnetic and rock magnetic records acquired from pass-through measurements on superconducting rock magnetometers (SRM) has greatly contributed to our understanding of the paleomagnetic field and paleo-environment. Pass-through measurements are inevitably smoothed and altered by the convolution effect of the SRM sensor response, and deconvolution is needed to restore high-resolution paleomagnetic and environmental signals. Although various deconvolution algorithms have been developed, the lack of easy-to-use software has hindered the practical application of deconvolution. Here, we present the standalone graphical software UDECON as a convenient tool to perform optimized deconvolution for pass-through paleomagnetic measurements using the algorithm recently developed by Oda and Xuan (Geochem Geophys Geosyst 15:3907-3924, 2014). With the preparation of a format file, UDECON can directly read pass-through paleomagnetic measurement files collected at different laboratories. After the SRM sensor response is determined and loaded to the software, optimized deconvolution can be conducted using two different approaches (i.e., "Grid search" and "Simplex method") with adjustable initial values or ranges for smoothness, corrections of sample length, and shifts in measurement position. UDECON provides a suite of tools to conveniently view and check various types of original measurement and deconvolution data. Multiple steps of measurement and/or deconvolution data can be compared simultaneously to check consistency and to guide further deconvolution optimization. Deconvolved data, together with the loaded original measurement and SRM sensor response data, can be saved and reloaded for further treatment in UDECON. Users can also export the optimized deconvolution data to a text file for analysis in other software.
A neural network approach for the blind deconvolution of turbulent flows
NASA Astrophysics Data System (ADS)
Maulik, R.; San, O.
2017-11-01
We present a single-layer feedforward artificial neural network architecture trained through a supervised learning approach for the deconvolution of flow variables from their coarse grained computations such as those encountered in large eddy simulations. We stress that the deconvolution procedure proposed in this investigation is blind, i.e. the deconvolved field is computed without any pre-existing information about the filtering procedure or kernel. This may be conceptually contrasted to the celebrated approximate deconvolution approaches where a filter shape is predefined for an iterative deconvolution process. We demonstrate that the proposed blind deconvolution network performs exceptionally well in the a-priori testing of both two-dimensional Kraichnan and three-dimensional Kolmogorov turbulence and shows promise in forming the backbone of a physics-augmented data-driven closure for the Navier-Stokes equations.
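A minimal 1-D sketch of the idea, learning a map from a stencil of filtered values to the underlying unfiltered value; for brevity the output weights are solved by least squares over random hidden features (an extreme-learning-machine shortcut, which is an assumption here, not the paper's training procedure):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 4096
x = np.cumsum(rng.standard_normal(n))
x -= x.mean()                                            # rough "turbulent" signal
kernel = np.ones(7) / 7.0
xf = np.convolve(x, kernel, mode="same")                 # filtered (coarse) field

# Build (stencil of filtered values) -> (unfiltered center) training pairs.
w = 4
X = np.stack([xf[i - w:i + w + 1] for i in range(w, n - w)])
y = x[w:n - w]

H = np.tanh(X @ rng.standard_normal((2 * w + 1, 64)))    # random hidden layer
beta, *_ = np.linalg.lstsq(H, y, rcond=None)             # output weights

y_hat = H @ beta
print(f"a priori correlation = {np.corrcoef(y_hat, y)[0, 1]:.3f}")
```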
Crowded field photometry with deconvolved images.
NASA Astrophysics Data System (ADS)
Linde, P.; Spännare, S.
A local implementation of the Lucy-Richardson algorithm has been used to deconvolve a set of crowded stellar field images. The effects of deconvolution on detection limits as well as on photometric and astrometric properties have been investigated as a function of the number of deconvolution iterations. Results show that deconvolution improves detection of faint stars, although artifacts are also found. Deconvolution makes more stars measurable without significant degradation of positional accuracy. The photometric precision is affected by deconvolution in several ways. Errors due to unresolved images are notably reduced, while flux redistribution between stars and background increases the errors.
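A minimal 1-D Richardson-Lucy iteration of the kind referred to above, run on a toy star field; the PSF, star positions, and iteration count are illustrative assumptions:

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(7)
n = 200
psf = np.exp(-0.5 * ((np.arange(n) - n // 2) / 2.5) ** 2)
psf /= psf.sum()
# Circular convolution with the PSF recentered at sample 0.
blur = lambda u: np.real(np.fft.ifft(np.fft.fft(u) * np.fft.fft(np.fft.ifftshift(psf))))

field = np.zeros(n)
field[[40, 47, 120, 124, 160]] = [50, 30, 80, 20, 10]      # toy stars
image = np.maximum(blur(field) + rng.normal(0, 0.1, n), 1e-6)

est = np.full(n, image.mean())
for _ in range(200):                                       # RL iterations
    est *= blur(image / np.maximum(blur(est), 1e-12))      # symmetric PSF
peaks, _ = find_peaks(est, height=0.05 * est.max())
print(f"detected stars near samples {peaks}")
```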
Improving ground-penetrating radar data in sedimentary rocks using deterministic deconvolution
Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.; Byrnes, A.P.
2003-01-01
Resolution is key to confidently identifying unique geologic features using ground-penetrating radar (GPR) data. Source wavelet "ringing" (related to bandwidth) in a GPR section limits resolution because of wavelet interference, and can smear reflections in time and/or space. The resultant potential for misinterpretation limits the usefulness of GPR. Deconvolution offers the ability to compress the source wavelet and improve temporal resolution. Unlike statistical deconvolution, deterministic deconvolution is mathematically simple and stable while providing the highest possible resolution because it uses the source wavelet unique to the specific radar equipment. Source wavelets generated in, transmitted through and acquired from air allow successful application of deterministic approaches to wavelet suppression. We demonstrate the validity of using a source wavelet acquired in air as the operator for deterministic deconvolution in a field application using "400-MHz" antennas at a quarry site characterized by interbedded carbonates with shale partings. We collected GPR data on a bench adjacent to cleanly exposed quarry faces in which we placed conductive rods to provide conclusive groundtruth for this approach to deconvolution. The best deconvolution results, which are confirmed by the conductive rods for the 400-MHz antenna tests, were observed for wavelets acquired when the transmitter and receiver were separated by 0.3 m. Applying deterministic deconvolution to GPR data collected in sedimentary strata at our study site resulted in an improvement in resolution (50%) and improved spatial location (0.10-0.15 m) of geologic features compared to the same data processed without deterministic deconvolution. The effectiveness of deterministic deconvolution for increased resolution and spatial accuracy of specific geologic features is further demonstrated by comparing results of deconvolved data with nondeconvolved data acquired along a 30-m transect immediately adjacent to a fresh quarry face. The results at this site support using deterministic deconvolution, which incorporates the GPR instrument's unique source wavelet, as a standard part of routine GPR data processing. © 2003 Elsevier B.V. All rights reserved.
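The deterministic scheme, dividing the trace spectrum by the spectrum of the independently measured source wavelet with a water-level floor for stability, can be sketched as follows on synthetic data (the wavelet, reflectors, and water level are assumptions):

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(8)
n = 512
t = np.arange(n)
wavelet = np.sin(2 * np.pi * t / 16) * np.exp(-t / 12.0)   # ringy source wavelet

refl = np.zeros(n)
refl[[100, 140, 300]] = [1.0, -0.6, 0.8]                   # toy reflectivity
trace = np.convolve(refl, wavelet)[:n] + 0.01 * rng.standard_normal(n)

W = np.fft.fft(wavelet)
level = 0.05 * np.abs(W).max()                             # water level
W_stab = np.where(np.abs(W) < level, level * np.exp(1j * np.angle(W)), W)
refl_hat = np.real(np.fft.ifft(np.fft.fft(trace) / W_stab))

peaks, _ = find_peaks(np.abs(refl_hat), height=0.3)
print(f"reflectors recovered near samples {peaks}")
```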
Wear, Keith; Liu, Yunbo; Gammell, Paul M; Maruvada, Subha; Harris, Gerald R
2015-01-01
Nonlinear acoustic signals contain significant energy at many harmonic frequencies. For many applications, the sensitivity (frequency response) of a hydrophone will not be uniform over such a broad spectrum. In a continuation of a previous investigation involving deconvolution methodology, deconvolution (implemented in the frequency domain as an inverse filter computed from frequency-dependent hydrophone sensitivity) was investigated for improvement of accuracy and precision of nonlinear acoustic output measurements. Time-delay spectrometry was used to measure complex sensitivities for 6 fiber-optic hydrophones. The hydrophones were then used to measure a pressure wave with rich harmonic content. Spectral asymmetry between compressional and rarefactional segments was exploited to design filters used in conjunction with deconvolution. Complex deconvolution reduced mean bias (for 6 fiber-optic hydrophones) from 163% to 24% for peak compressional pressure (p+), from 113% to 15% for peak rarefactional pressure (p-), and from 126% to 29% for pulse intensity integral (PII). Complex deconvolution reduced mean coefficient of variation (COV) (for 6 fiber-optic hydrophones) from 18% to 11% (p+), 53% to 11% (p-), and 20% to 16% (PII). Deconvolution based on sensitivity magnitude or the minimum phase model also resulted in significant reductions in mean bias and COV of acoustic output parameters but was less effective than direct complex deconvolution for p+ and p-. Therefore, deconvolution with appropriate filtering facilitates reliable nonlinear acoustic output measurements using hydrophones with frequency-dependent sensitivity.
Fast analytical spectral filtering methods for magnetic resonance perfusion quantification.
Reddy, Kasireddy V; Mitra, Abhishek; Yalavarthy, Phaneendra K
2016-08-01
The deconvolution in perfusion weighted imaging (PWI) plays an important role in quantifying the MR perfusion parameters. The application of PWI to stroke and brain tumor studies has become standard clinical practice. The standard approaches for this deconvolution are oscillatory-limited singular value decomposition (oSVD) and frequency domain deconvolution (FDD). The FDD is widely recognized as the fastest approach currently available for deconvolution of MR perfusion data. In this work, two fast deconvolution methods (namely, analytical Fourier filtering and analytical Showalter spectral filtering) are proposed. Through systematic evaluation, the proposed methods are shown to be computationally efficient and quantitatively accurate compared to FDD and oSVD.
Ramachandra, Ranjan; de Jonge, Niels
2012-01-01
Three-dimensional (3D) data sets were recorded of gold nanoparticles placed on both sides of silicon nitride membranes using focal series aberration-corrected scanning transmission electron microscopy (STEM). The deconvolution of the 3D data sets was optimized to obtain the highest possible axial resolution. The deconvolution involved two different point spread functions (PSFs), each calculated iteratively via blind deconvolution. Supporting membranes of different thicknesses were tested to study the effect of beam broadening on the deconvolution. It was found that several iterations of deconvolution efficiently reduced the imaging noise. With an increasing number of iterations, the axial resolution increased, and most of the structural information was preserved. Additional iterations improved the axial resolution by at most a factor of 4 to 6, depending on the particular data set, down to 8 nm at best, but at the cost of a reduction in the lateral size of the nanoparticles in the image. Thus, the deconvolution procedure optimized for highest axial resolution is best suited for applications where one is interested only in the 3D locations of nanoparticles. PMID:22152090
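The alternating structure of blind Richardson-Lucy estimation, updating the object and the PSF in turn, is sketched below in 1-D with a single PSF (the study used two 3-D PSFs); everything here is a toy stand-in:

```python
import numpy as np
from scipy.signal import find_peaks

def conv(a, b):   # circular convolution
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def corr(a, b):   # circular correlation (adjoint of convolution with b)
    return np.real(np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))))

rng = np.random.default_rng(9)
n = 128
obj = np.zeros(n)
obj[[30, 60, 61, 100]] = [1.0, 0.8, 0.8, 0.5]              # toy particles
psf_true = np.zeros(n)
psf_true[:9] = np.hanning(9)
psf_true /= psf_true.sum()
img = np.maximum(conv(obj, psf_true) + rng.normal(0, 0.005, n), 1e-9)

obj_est = np.full(n, img.mean())
psf_est = np.full(n, 1.0 / n)
for _ in range(200):
    # Richardson-Lucy update of the object under the current PSF...
    obj_est *= corr(img / np.maximum(conv(obj_est, psf_est), 1e-12), psf_est)
    # ...then of the PSF under the current object estimate.
    psf_est *= corr(img / np.maximum(conv(obj_est, psf_est), 1e-12), obj_est)
    psf_est = np.maximum(psf_est, 0.0)
    psf_est /= psf_est.sum()

peaks, _ = find_peaks(obj_est, height=0.3 * obj_est.max())
print(f"particle positions estimated near {peaks}")
```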
Wen, Yanhua; Wei, Yanjun; Zhang, Shumei; Li, Song; Liu, Hongbo; Wang, Fang; Zhao, Yue; Zhang, Dongwei; Zhang, Yan
2017-05-01
Tumour heterogeneity describes the coexistence of divergent tumour cell clones within tumours, which is often caused by underlying epigenetic changes. DNA methylation is commonly regarded as a significant regulator that differs across cells and tissues. In this study, we comprehensively reviewed research progress on estimating tumour heterogeneity. Bioinformatics-based analysis of DNA methylation has revealed the evolutionary relationships between breast cancer cell lines and tissues. Further analysis of the DNA methylation profiles in 33 breast cancer-related cell lines identified cell line-specific methylation patterns. Next, we reviewed the computational methods for inferring clonal evolution of tumours from different perspectives and then proposed a deconvolution strategy for modelling cell subclonal population dynamics in breast cancer tissues based on DNA methylation. Further analysis of simulated cancer tissues and real cell lines revealed that this approach exhibits satisfactory performance and relative stability in estimating the composition and proportions of cellular subpopulations. The application of this strategy to breast cancer individuals from The Cancer Genome Atlas identified different cellular subpopulations with distinct molecular phenotypes. Moreover, the current and potential future applications of this deconvolution strategy to clinical breast cancer research are discussed, with emphasis placed on the DNA methylation-based recognition of intra-tumour heterogeneity. The wide use of these methods for estimating heterogeneity in further clinical cohorts will improve our understanding of neoplastic progression and the design of therapeutic interventions for treating breast cancer and other malignancies. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Cell–cell signaling drives the evolution of complex traits: introduction—lung evo-devo
Torday, John S.; Rehan, V. K.
2009-01-01
Physiology integrates biology with the environment through cell–cell interactions at multiple levels. The evolution of the respiratory system has been “deconvoluted” (Torday and Rehan in Am J Respir Cell Mol Biol 31:8–12, 2004) through Gene Regulatory Networks (GRNs) applied to cell–cell communication for all aspects of lung biology: development, homeostasis, regeneration, and aging. Using this approach, we have predicted the phenotypic consequences of failed signaling for lung development, homeostasis, and regeneration based on evolutionary principles. This cell–cell communication model predicts other aspects of vertebrate physiology as adaptational responses. For example, the oxygen-induced differentiation of alveolar myocytes into alveolar adipocytes was critical for the evolution of the lung in land-dwelling animals adapting to fluctuating Phanerozoic oxygen levels over the past 500 million years. Adipocytes prevent lung injury due to oxygen radicals and facilitate the rise of endothermy. In addition, they produce the class I cytokine leptin, which augments pulmonary surfactant activity and alveolar surface area, increasing selection pressure for both respiratory oxygenation and metabolic demand initially constrained by high-systemic vascular pressure, but subsequently compensated by the evolution of the adrenomedullary beta-adrenergic receptor mechanism. Concerted positive selection for the lung and adrenals created further selection pressure for the heart, which becomes progressively more complex phylogenetically in tandem with the lung. Developmentally, increasing heart complexity and size impinges precociously on the gut mesoderm to induce the liver. That evolutionary-developmental interaction is significant because the liver provides regulated sources of glucose and glycogen to the evolving physiologic system, which is necessary for the evolution of the neocortex. Evolution of neocortical control furthers integration of physiologic systems. Such an evolutionary vertical integration of cell-to-tissue-to-organ-to-physiology of intrinsic cell–cell signaling and extrinsic factors is the reverse of the “top-down” conventional way in which physiologic systems are usually regarded. This novel mechanistic approach, incorporating a “middle-out” cell–cell signaling component, will lead to a readily available algorithm for integrating genes and phenotypes. This symposium surveyed the phylogenetic origins of such vertically integrated mechanisms for the evolution of cell–cell communication as the basis for complex physiologic traits, from sponges to man. PMID:20607136
SU-F-T-478: Effect of Deconvolution in Analysis of Mega Voltage Photon Beam Profiles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muthukumaran, M; Manigandan, D; Murali, V
2016-06-15
Purpose: To study and compare the penumbra of 6 MV and 15 MV photon beam profiles after deconvolving the responses of different-volume ionization chambers. Methods: A 0.125 cc Semi-Flex chamber, a Markus chamber, and a PTW Farmer chamber were used to measure the in-plane and cross-plane profiles at 5 cm depth for 6 MV and 15 MV photons. The profiles were measured for field sizes from 2 × 2 cm to 30 × 30 cm. PTW TBA scan software was used for the measurements, and the "deconvolution" functionality in the software was used to remove the volume averaging effect due to the finite volume of the chamber along the lateral and longitudinal directions for all the ionization chambers. The predicted true profiles were compared, and the change in penumbra before and after deconvolution was studied. Results: After deconvolution, the penumbra decreased by 1 mm for field sizes from 2 × 2 cm to 20 × 20 cm, along both lateral and longitudinal directions. However, for field sizes from 20 × 20 cm to 30 × 30 cm, the difference in penumbra was around 1.2 to 1.8 mm. This was observed for both 6 MV and 15 MV photon beams. The penumbra was always smaller in the deconvoluted profiles for all the ionization chambers in the study. The difference in penumbral values between the deconvoluted profiles along the lateral and longitudinal directions was on the order of 0.1 to 0.3 mm for all the chambers under study. Deconvolution of the profiles along the longitudinal direction for the Farmer chamber was poor and not comparable with the other deconvoluted profiles. Conclusion: The deconvoluted profiles for the 0.125 cc and Markus chambers were comparable, and the deconvolution functionality can be used to overcome the volume averaging effect.
NASA Astrophysics Data System (ADS)
Chang, Yong; Zi, Yanyang; Zhao, Jiyuan; Yang, Zhe; He, Wangpeng; Sun, Hailiang
2017-03-01
In guided wave pipeline inspection, echoes reflected from closely spaced reflectors generally overlap, meaning useful information is lost. To solve the overlapping problem, sparse deconvolution methods have been developed in the past decade. However, conventional sparse deconvolution methods have limitations in handling guided wave signals, because the input signal is directly used as the prototype of the convolution matrix, without considering the waveform change caused by the dispersion properties of the guided wave. In this paper, an adaptive sparse deconvolution (ASD) method is proposed to overcome these limitations. First, the Gaussian echo model is employed to adaptively estimate the column prototype of the convolution matrix instead of directly using the input signal as the prototype. Then, the convolution matrix is constructed upon the estimated results. Third, the split augmented Lagrangian shrinkage (SALSA) algorithm is introduced to solve the deconvolution problem with high computational efficiency. To verify the effectiveness of the proposed method, guided wave signals obtained from pipeline inspection are investigated numerically and experimentally. Compared to conventional sparse deconvolution methods, e.g. the l1-norm deconvolution method, the proposed method shows better performance in handling the echo overlap problem in the guided wave signal.
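The sparse-deconvolution objective behind such methods, a least-squares data fit plus an l1 penalty on the reflector sequence, can be sketched with plain ISTA in place of the paper's SALSA solver (same objective, simpler and slower iteration); the echo prototype and spike train are synthetic:

```python
import numpy as np
from scipy.linalg import toeplitz

n = 400
t = np.arange(40)
proto = np.exp(-0.5 * ((t - 20) / 4.0) ** 2) * np.cos(0.9 * t)   # echo prototype
A = toeplitz(np.r_[proto, np.zeros(n - len(proto))], np.zeros(n))

x = np.zeros(n)
x[[100, 118, 250]] = [1.0, 0.7, -0.5]           # closely spaced reflectors
y = A @ x + 0.02 * np.random.default_rng(10).standard_normal(n)

lam = 0.05                                      # sparsity weight (assumption)
step = 1.0 / np.linalg.norm(A, 2) ** 2
x_hat = np.zeros(n)
for _ in range(300):                            # ISTA iterations
    g = x_hat - step * A.T @ (A @ x_hat - y)    # gradient step
    x_hat = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrinkage
print(f"recovered spikes at {np.where(np.abs(x_hat) > 0.1)[0]}")
```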
Hybrid sparse blind deconvolution: an implementation of SOOT algorithm to real data
NASA Astrophysics Data System (ADS)
Pakmanesh, Parvaneh; Goudarzi, Alireza; Kourki, Meisam
2018-06-01
Extracting information from seismic data depends on deconvolution as an important processing step; it provides the reflectivity series by signal compression. This compression can be obtained by removing the wavelet effects from the traces. Recently, blind deconvolution has provided reliable performance for sparse signal recovery. In this study, two deconvolution methods have been implemented on seismic data; their combination provides a robust spiking deconvolution approach. This hybrid deconvolution is applied using the sparse deconvolution (MM algorithm) and the Smoothed One-Over-Two (SOOT) algorithm in a chain. The MM algorithm is based on the minimization of a cost function defined by the l1 and l2 norms. After applying the two algorithms to the seismic data, the SOOT algorithm provided well-compressed data with a higher resolution than the MM algorithm. The SOOT algorithm requires initial values when applied to real data, such as the wavelet coefficients and reflectivity series, which can be obtained through the MM algorithm. The computational cost of the hybrid method is high, and it needs to be implemented on post-stack or pre-stack seismic data from complex structure regions.
NASA Technical Reports Server (NTRS)
Schade, David J.; Elson, Rebecca A. W.
1993-01-01
We describe experiments with deconvolutions of simulations of deep HST Wide Field Camera images containing faint, compact galaxies to determine under what circumstances there is a quantitative advantage to image deconvolution, and explore whether it is (1) helpful for distinguishing between stars and compact galaxies, or between spiral and elliptical galaxies, and whether it (2) improves the accuracy with which characteristic radii and integrated magnitudes may be determined. The Maximum Entropy and Richardson-Lucy deconvolution algorithms give the same results. For medium and low S/N images, deconvolution does not significantly improve our ability to distinguish between faint stars and compact galaxies, nor between spiral and elliptical galaxies. Measurements from both raw and deconvolved images are biased and must be corrected; it is easier to quantify and remove the biases for cases that have not been deconvolved. We find no benefit from deconvolution for measuring luminosity profiles, but these results are limited to low S/N images of very compact (often undersampled) galaxies.
NASA Astrophysics Data System (ADS)
Tian, Yu; Rao, Changhui; Wei, Kai
2008-07-01
Adaptive optics can only partially compensate images blurred by atmospheric turbulence, owing to observing conditions and hardware restrictions. A post-processing method based on frame selection and multi-frame blind deconvolution is proposed to improve images partially corrected by adaptive optics. The frames suitable for blind deconvolution are selected from the recorded AO closed-loop frame series by the frame-selection technique and then used for multi-frame blind deconvolution. There is no prior knowledge except for the positivity constraint in the blind deconvolution. The use of multi-frame images improves the stability and convergence of the blind deconvolution algorithm. The method was applied to the restoration of images of celestial bodies observed with the 1.2 m telescope equipped with the 61-element adaptive optical system at Yunnan Observatory. The results show that the method can effectively improve images partially corrected by adaptive optics.
Blind source deconvolution for deep Earth seismology
NASA Astrophysics Data System (ADS)
Stefan, W.; Renaut, R.; Garnero, E. J.; Lay, T.
2007-12-01
We present an approach to automatically estimate an empirical source characterization of deep earthquakes recorded teleseismically and subsequently remove the source from the recordings by applying regularized deconvolution. A principal goal in this work is to effectively deblur the seismograms, resulting in more impulsive and narrower pulses, permitting better constraints in high-resolution waveform analyses. Our method consists of two stages: (1) we first estimate the empirical source by automatically registering traces to their first principal component with a weighting scheme based on their deviation from this shape; we then use this shape as an estimate of the earthquake source. (2) We compare different deconvolution techniques to remove the source characteristic from the trace. In particular, total variation (TV) regularized deconvolution is used, which exploits the fact that most natural signals have an underlying sparseness in an appropriate basis, in this case impulsive onsets of seismic arrivals. We show several examples of deep-focus Fiji-Tonga region earthquakes for the phases S and ScS, comparing source responses for the separate phases. TV deconvolution is compared to water-level deconvolution, Tikhonov deconvolution, and L1-norm deconvolution, for both data and synthetics. This approach significantly improves our ability to study subtle waveform features that are commonly masked by either noise or the earthquake source. Eliminating source complexities improves our ability to resolve deep mantle triplications, waveform complexities associated with possible double crossings of the post-perovskite phase transition, as well as increasing stability in waveform analyses used for deep mantle anisotropy measurements.
Coherent diffraction imaging of nanoscale strain evolution in a single crystal under high pressure
Yang, Wenge; Huang, Xiaojing; Harder, Ross; Clark, Jesse N.; Robinson, Ian K.; Mao, Ho-kwang
2013-01-01
The evolution of morphology and internal strain under high pressure fundamentally alters the physical property, structural stability, phase transition and deformation mechanism of materials. Until now, only averaged strain distributions have been studied. Bragg coherent X-ray diffraction imaging is highly sensitive to the internal strain distribution of individual crystals but requires coherent illumination, which can be compromised by the complex high-pressure sample environment. Here we report the successful de-convolution of these effects with the recently developed mutual coherent function method to reveal the three-dimensional strain distribution inside a 400 nm gold single crystal during compression within a diamond-anvil cell. The three-dimensional morphology and evolution of the strain under pressures up to 6.4 GPa were obtained with better than 30 nm spatial resolution. In addition to providing a new approach for high-pressure nanotechnology and rheology studies, we draw fundamental conclusions about the origin of the anomalous compressibility of nanocrystals. PMID:23575684
Erny, Guillaume L; Moeenfard, Marzieh; Alves, Arminda
2015-02-01
In this manuscript, the separation of kahweol and cafestol esters from Arabica coffee brews was investigated using liquid chromatography with a diode array detector. When detected in conjunction, cafestol and kahweol esters eluted together, but, after optimization, the kahweol esters could be selectively detected by setting the wavelength at 290 nm to allow their quantification. Such an approach was not possible for the cafestol esters, and spectral deconvolution was used to obtain deconvoluted chromatograms. In each of those chromatograms, the four esters were baseline separated, allowing quantification of the eight targeted compounds. Because the kahweol esters could be quantified either from the chromatogram obtained at 290 nm or from the deconvoluted chromatogram, those compounds were used to compare the analytical performances. Slightly better limits of detection were obtained using the deconvoluted chromatogram. Identical concentrations were found in a real sample with both approaches. The peak areas in the deconvoluted chromatograms were repeatable (intraday repeatability of 0.8%, interday repeatability of 1.0%). This work demonstrates the accuracy of spectral deconvolution when using liquid chromatography to mathematically separate coeluting compounds using the full spectra recorded by a diode array detector. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
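The mathematics of spectral deconvolution with a diode array detector is a bilinear least-squares fit: the data matrix is modeled as concentration profiles times component spectra. A toy sketch, with synthetic spectra and elution profiles standing in for the cafestol- and kahweol-ester measurements:

```python
import numpy as np

wl = np.linspace(220, 320, 101)
spec_a = np.exp(-0.5 * ((wl - 250) / 12) ** 2)        # component A spectrum (toy)
spec_b = np.exp(-0.5 * ((wl - 290) / 10) ** 2)        # component B spectrum (toy)
S = np.stack([spec_a, spec_b], axis=1)                # wavelengths x components

t = np.linspace(0, 10, 300)
c_a = np.exp(-0.5 * ((t - 5.0) / 0.4) ** 2)           # coeluting peaks
c_b = 0.6 * np.exp(-0.5 * ((t - 5.3) / 0.4) ** 2)
D = np.outer(c_a, spec_a) + np.outer(c_b, spec_b)     # time x wavelengths
D += 0.002 * np.random.default_rng(11).standard_normal(D.shape)

# Deconvoluted chromatograms: least-squares fit of D ~= C @ S.T for C.
C = np.linalg.lstsq(S, D.T, rcond=None)[0].T
print(f"peak areas A/B = {np.trapz(C[:, 0], t):.2f} / {np.trapz(C[:, 1], t):.2f}")
```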
Calibration of Wide-Field Deconvolution Microscopy for Quantitative Fluorescence Imaging
Lee, Ji-Sook; Wee, Tse-Luen (Erika); Brown, Claire M.
2014-01-01
Deconvolution enhances contrast in fluorescence microscopy images, especially in low-contrast, high-background wide-field microscope images, improving characterization of features within the sample. Deconvolution can also be combined with other imaging modalities, such as confocal microscopy, and most software programs seek to improve resolution as well as contrast. Quantitative image analyses require instrument calibration and, with deconvolution, necessitate that the process itself preserve the relative quantitative relationships between fluorescence intensities. To ensure that the quantitative nature of the data remains unaltered, deconvolution algorithms need to be tested thoroughly. This study investigated whether the deconvolution algorithms in AutoQuant X3 preserve relative quantitative intensity data. InSpeck Green calibration microspheres were prepared for imaging, z-stacks were collected using a wide-field microscope, and the images were deconvolved using the iterative deconvolution algorithms with default settings. Afterwards, the mean intensities and volumes of microspheres in the original and the deconvolved images were measured. Deconvolved data sets showed higher average microsphere intensities and smaller volumes than the original wide-field data sets. In original and deconvolved data sets, intensity means showed linear relationships with the relative microsphere intensities given by the manufacturer. Importantly, upon normalization, the trend lines were found to have similar slopes. In original and deconvolved images, the volumes of the microspheres were quite uniform for all relative microsphere intensities. We were able to show that AutoQuant X3 deconvolution software data are quantitative. In general, the protocol presented can be used to calibrate any fluorescence microscope or image processing and analysis procedure. PMID:24688321
Navarro, Jorge; Ring, Terry A.; Nigg, David W.
2015-03-01
A deconvolution method for a 1″×1″ LaBr₃ detector for nondestructive Advanced Test Reactor (ATR) fuel burnup applications was developed. The method consisted of obtaining the detector response function, applying a deconvolution algorithm to 1″×1″ LaBr₃ simulated data, and evaluating the effects that deconvolution has on nondestructively determining ATR fuel burnup. The simulated response function of the detector was obtained using MCNPX as well as from experimental data. The Maximum-Likelihood Expectation Maximization (MLEM) deconvolution algorithm was selected to enhance one-isotope source-simulated and fuel-simulated spectra. The final evaluation of the study consisted of measuring the performance of the fuel burnup calibration curve for the convoluted and deconvoluted cases. The methodology was developed to help design a reliable, high-resolution, rugged, and robust detection system for the ATR fuel canal, capable of collecting high-performance data for model validation, along with a system that can calculate burnup using experimental scintillator detector data.
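A minimal sketch of the MLEM deconvolution update applied to a toy gamma spectrum, assuming a Gaussian-broadening response matrix rather than the MCNPX-derived response used in the study:

```python
import numpy as np

rng = np.random.default_rng(12)
n = 128
E = np.arange(n)
# Response matrix R[i, j]: probability a photon of energy j is counted in bin i.
R = np.exp(-0.5 * ((E[:, None] - E[None, :]) / 3.0) ** 2)
R /= R.sum(axis=0, keepdims=True)

true = np.zeros(n)
true[[40, 90]] = [1000.0, 400.0]                         # two gamma lines
measured = rng.poisson(R @ true).astype(float)

est = np.full(n, measured.sum() / n)
for _ in range(200):                                     # MLEM iterations
    pred = R @ est
    est *= R.T @ (measured / np.maximum(pred, 1e-12)) / R.sum(axis=0)
print(f"recovered peaks near bins {np.sort(np.argsort(est)[-2:])}")
```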
NASA Astrophysics Data System (ADS)
Wapenaar, K.; van der Neut, J.; Ruigrok, E.; Draganov, D.; Hunziker, J.; Slob, E.; Thorbecke, J.; Snieder, R.
2008-12-01
It is well-known that under specific conditions the crosscorrelation of wavefields observed at two receivers yields the impulse response between these receivers. This principle is known as 'Green's function retrieval' or 'seismic interferometry'. Recently it has been recognized that in many situations it can be advantageous to replace the correlation process by deconvolution. One of the advantages is that deconvolution compensates for the waveform emitted by the source; another advantage is that it is not necessary to assume that the medium is lossless. The approaches that have been developed to date employ a 1D deconvolution process. We propose a method for seismic interferometry by multidimensional deconvolution and show that under specific circumstances the method compensates for irregularities in the source distribution. This is an important difference with crosscorrelation methods, which rely on the condition that waves are equipartitioned. This condition is for example fulfilled when the sources are regularly distributed along a closed surface and the power spectra of the sources are identical. The proposed multidimensional deconvolution method compensates for anisotropic illumination, without requiring knowledge about the positions and the spectra of the sources.
Phylogenetic ctDNA analysis depicts early stage lung cancer evolution
Abbosh, Christopher; Birkbak, Nicolai J.; Wilson, Gareth A.; Jamal-Hanjani, Mariam; Constantin, Tudor; Salari, Raheleh; Le Quesne, John; Moore, David A; Veeriah, Selvaraju; Rosenthal, Rachel; Marafioti, Teresa; Kirkizlar, Eser; Watkins, Thomas B K; McGranahan, Nicholas; Ward, Sophia; Martinson, Luke; Riley, Joan; Fraioli, Francesco; Al Bakir, Maise; Grönroos, Eva; Zambrana, Francisco; Endozo, Raymondo; Bi, Wenya Linda; Fennessy, Fiona M.; Sponer, Nicole; Johnson, Diana; Laycock, Joanne; Shafi, Seema; Czyzewska-Khan, Justyna; Rowan, Andrew; Chambers, Tim; Matthews, Nik; Turajlic, Samra; Hiley, Crispin; Lee, Siow Ming; Forster, Martin D.; Ahmad, Tanya; Falzon, Mary; Borg, Elaine; Lawrence, David; Hayward, Martin; Kolvekar, Shyam; Panagiotopoulos, Nikolaos; Janes, Sam M; Thakrar, Ricky; Ahmed, Asia; Blackhall, Fiona; Summers, Yvonne; Hafez, Dina; Naik, Ashwini; Ganguly, Apratim; Kareht, Stephanie; Shah, Rajesh; Joseph, Leena; Quinn, Anne Marie; Crosbie, Phil; Naidu, Babu; Middleton, Gary; Langman, Gerald; Trotter, Simon; Nicolson, Marianne; Remmen, Hardy; Kerr, Keith; Chetty, Mahendran; Gomersall, Lesley; Fennell, Dean; Nakas, Apostolos; Rathinam, Sridhar; Anand, Girija; Khan, Sajid; Russell, Peter; Ezhil, Veni; Ismail, Babikir; Irvin-sellers, Melanie; Prakash, Vineet; Lester, Jason; Kornaszewska, Malgorzata; Attanoos, Richard; Adams, Haydn; Davies, Helen; Oukrif, Dahmane; Akarca, Ayse U; Hartley, John A; Lowe, Helen L; Lock, Sara; Iles, Natasha; Bell, Harriet; Ngai, Yenting; Elgar, Greg; Szallasi, Zoltan; Schwarz, Roland F; Herrero, Javier; Stewart, Aengus; Quezada, Sergio A; Peggs, Karl S.; Van Loo, Peter; Dive, Caroline; Lin, Jimmy; Rabinowitz, Matthew; Aerts, Hugo JWL; Hackshaw, Allan; Shaw, Jacqui A; Zimmermann, Bernhard G.; Swanton, Charles
2017-01-01
The early detection of relapse following primary surgery for non-small cell lung cancer and the characterization of emerging subclones seeding metastatic sites might offer new therapeutic approaches to limit tumor recurrence. The potential to non-invasively track tumor evolutionary dynamics in ctDNA of early-stage lung cancer is not established. Here we apply a tumour-specific phylogenetic approach to ctDNA profiling in the first 100 TRACERx (TRAcking non-small cell lung Cancer Evolution through therapy (Rx)) study participants, including one patient co-recruited to the PEACE (Posthumous Evaluation of Advanced Cancer Environment) post-mortem study. We identify independent predictors of ctDNA release and perform tumor volume limit of detection analyses. Through blinded profiling of post-operative plasma, we observe evidence of adjuvant chemotherapy resistance and identify patients destined to experience recurrence of their lung cancer. Finally, we show that phylogenetic ctDNA profiling tracks the subclonal nature of lung cancer relapse and metastases, providing a new approach for ctDNA-driven therapeutic studies. PMID:28445469
NASA Technical Reports Server (NTRS)
Ioup, J. W.; Ioup, G. E.; Rayborn, G. H., Jr.; Wood, G. M., Jr.; Upchurch, B. T.
1984-01-01
Mass spectrometer data in the form of ion current versus mass-to-charge ratio often include overlapping mass peaks, especially in low- and medium-resolution instruments. Numerical deconvolution of such data effectively enhances the resolution by decreasing the overlap of mass peaks. In this paper two approaches to deconvolution are presented: a function-domain iterative technique and a Fourier transform method which uses transform-domain function-continuation. Both techniques include data smoothing to reduce the sensitivity of the deconvolution to noise. The efficacy of these methods is demonstrated through application to representative mass spectrometer data and the deconvolved results are discussed and compared to data obtained from a spectrometer with sufficient resolution to achieve separation of the mass peaks studied. A case for which the deconvolution is seriously affected by Gibbs oscillations is analyzed.
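As a rough illustration of the Fourier-transform route, the sketch below separates two overlapping synthetic Gaussian mass peaks by regularised spectral division; the peak positions, widths, noise level and the Wiener-style regulariser are assumptions standing in for the paper's smoothing and function-continuation steps.

```python
import numpy as np

# Two overlapping Gaussian mass peaks blurred by a known instrument response.
# Peak positions, widths and the noise level are invented for illustration.
m = np.linspace(0, 60, 1201)                      # mass axis (arbitrary units)
true = np.exp(-0.5 * ((m - 28) / 0.4) ** 2) + 0.6 * np.exp(-0.5 * ((m - 29) / 0.4) ** 2)
psf = np.exp(-0.5 * ((m - 30) / 1.0) ** 2)        # broadening function
psf /= psf.sum()

observed = np.convolve(true, psf, mode="same")
observed += np.random.default_rng(0).normal(0, 1e-4, m.size)

# Fourier-domain deconvolution with a Wiener-style regulariser; the constant
# noise-to-signal ratio plays the role of the paper's smoothing step and
# limits noise amplification (and Gibbs-type oscillations) at high frequency.
F_obs = np.fft.rfft(observed)
F_psf = np.fft.rfft(np.fft.ifftshift(psf))        # centre the PSF at index 0
nsr = 1e-4                                        # assumed noise-to-signal ratio
restored = np.fft.irfft(F_obs * np.conj(F_psf) / (np.abs(F_psf) ** 2 + nsr), n=m.size)
# 'restored' shows two resolved peaks near m = 28 and m = 29.
```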
Evidence for Avian Intrathoracic Air Sacs in a New Predatory Dinosaur from Argentina
Sereno, Paul C.; Martinez, Ricardo N.; Wilson, Jeffrey A.; Varricchio, David J.; Alcober, Oscar A.; Larsson, Hans C. E.
2008-01-01
Background Living birds possess a unique heterogeneous pulmonary system composed of a rigid, dorsally-anchored lung and several compliant air sacs that operate as bellows, driving inspired air through the lung. Evidence from the fossil record for the origin and evolution of this system is extremely limited, because lungs do not fossilize and because the bellows-like air sacs in living birds only rarely penetrate (pneumatize) skeletal bone and thus leave a record of their presence. Methodology/Principal Findings We describe a new predatory dinosaur from Upper Cretaceous rocks in Argentina, Aerosteon riocoloradensis gen. et sp. nov., that exhibits extreme pneumatization of skeletal bone, including pneumatic hollowing of the furcula and ilium. In living birds, these two bones are pneumatized by diverticula of air sacs (clavicular, abdominal) that are involved in pulmonary ventilation. We also describe several pneumatized gastralia (“stomach ribs”), which suggest that diverticula of the air sac system were present in surface tissues of the thorax. Conclusions/Significance We present a four-phase model for the evolution of avian air sacs and costosternal-driven lung ventilation based on the known fossil record of theropod dinosaurs and osteological correlates in extant birds: (1) Phase I—Elaboration of paraxial cervical air sacs in basal theropods no later than the earliest Late Triassic. (2) Phase II—Differentiation of avian ventilatory air sacs, including both cranial (clavicular air sac) and caudal (abdominal air sac) divisions, in basal tetanurans during the Jurassic. A heterogeneous respiratory tract with compliant air sacs, in turn, suggests the presence of rigid, dorsally attached lungs with flow-through ventilation. (3) Phase III—Evolution of a primitive costosternal pump in maniraptoriform theropods before the close of the Jurassic. (4) Phase IV—Evolution of an advanced costosternal pump in maniraptoran theropods before the close of the Jurassic. In addition, we conclude: (5) The advent of avian unidirectional lung ventilation is not possible to pinpoint, as osteological correlates have yet to be identified for uni- or bidirectional lung ventilation. (6) The origin and evolution of avian air sacs may have been driven by one or more of the following three factors: flow-through lung ventilation, locomotory balance, and/or thermal regulation. PMID:18825273
Parsimonious Charge Deconvolution for Native Mass Spectrometry
2018-01-01
Charge deconvolution infers the mass from mass over charge (m/z) measurements in electrospray ionization mass spectra. When applied over a wide input m/z or broad target mass range, charge-deconvolution algorithms can produce artifacts, such as false masses at one-half or one-third of the correct mass. Indeed, a maximum entropy term in the objective function of MaxEnt, the most commonly used charge deconvolution algorithm, favors a deconvolved spectrum with many peaks over one with fewer peaks. Here we describe a new “parsimonious” charge deconvolution algorithm that produces fewer artifacts. The algorithm is especially well-suited to high-resolution native mass spectrometry of intact glycoproteins and protein complexes. Deconvolution of native mass spectra poses special challenges due to salt and small molecule adducts, multimers, wide mass ranges, and fewer and lower charge states. We demonstrate the performance of the new deconvolution algorithm on a range of samples. On the heavily glycosylated plasma properdin glycoprotein, the new algorithm could deconvolve monomer and dimer simultaneously and, when focused on the m/z range of the monomer, gave accurate and interpretable masses for glycoforms that had previously been analyzed manually using m/z peaks rather than deconvolved masses. On therapeutic antibodies, the new algorithm facilitated the analysis of extensions, truncations, and Fab glycosylation. The algorithm facilitates the use of native mass spectrometry for the qualitative and quantitative analysis of protein and protein assemblies. PMID:29376659
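A toy version of the underlying inference conveys the idea of parsimony in charge deconvolution: every (peak, charge) pair implies a neutral mass M = z(m/z - m_proton), and masses supported by many charge states are preferred over spurious half or third masses. The function below is a hypothetical heuristic for illustration only, not MaxEnt or the authors' algorithm.

```python
import numpy as np
from collections import Counter

PROTON = 1.007276  # Da

def charge_deconvolve(mz_peaks, z_range=range(10, 31), tol=2.0):
    """Toy charge deconvolution: for each peak and trial charge, compute the
    implied neutral mass M = z * (m/z - proton), then keep the mass bin that
    is consistent with the largest number of peaks (a parsimony heuristic)."""
    votes = Counter()
    for mz in mz_peaks:
        for z in z_range:
            mass = z * (mz - PROTON)
            votes[round(mass / tol)] += 1        # bin masses to 'tol' Da
    best_bin, support = votes.most_common(1)[0]
    return best_bin * tol, support

# Peaks simulated for a 20 kDa protein carrying 14-18 charges (an assumption).
true_mass = 20000.0
peaks = [(true_mass + z * PROTON) / z for z in range(14, 19)]
mass, support = charge_deconvolve(peaks)
print(f"deconvolved mass ~ {mass:.0f} Da from {support} peak/charge matches")
```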
Broadband ion mobility deconvolution for rapid analysis of complex mixtures.
Pettit, Michael E; Brantley, Matthew R; Donnarumma, Fabrizio; Murray, Kermit K; Solouki, Touradj
2018-05-04
High resolving power ion mobility (IM) allows for accurate characterization of complex mixtures in high-throughput IM mass spectrometry (IM-MS) experiments. We previously demonstrated that pure component IM-MS data can be extracted from IM unresolved post-IM/collision-induced dissociation (CID) MS data using automated ion mobility deconvolution (AIMD) software [Matthew Brantley, Behrooz Zekavat, Brett Harper, Rachel Mason, and Touradj Solouki, J. Am. Soc. Mass Spectrom., 2014, 25, 1810-1819]. In our previous reports, we utilized a quadrupole ion filter for m/z-isolation of IM unresolved monoisotopic species prior to post-IM/CID MS. Here, we utilize a broadband IM-MS deconvolution strategy to remove the m/z-isolation requirement for successful deconvolution of IM unresolved peaks. Broadband data collection has throughput and multiplexing advantages; hence, elimination of the ion isolation step reduces experimental run times and thus expands the applicability of AIMD to high-throughput bottom-up proteomics. We demonstrate broadband IM-MS deconvolution of two separate and unrelated pairs of IM unresolved isomers (viz., a pair of isomeric hexapeptides and a pair of isomeric trisaccharides) in a simulated complex mixture. Moreover, we show that broadband IM-MS deconvolution improves high-throughput bottom-up characterization of a proteolytic digest of rat brain tissue. To our knowledge, this manuscript is the first to report successful deconvolution of pure component IM and MS data from an IM-assisted data-independent analysis (DIA) or HDMSE dataset.
Using deconvolution to improve the metrological performance of the grid method
NASA Astrophysics Data System (ADS)
Grédiac, Michel; Sur, Frédéric; Badulescu, Claudiu; Mathias, Jean-Denis
2013-06-01
The use of various deconvolution techniques to enhance strain maps obtained with the grid method is addressed in this study. Since phase derivative maps obtained with the grid method can be approximated by their actual counterparts convolved by the envelope of the kernel used to extract phases and phase derivatives, non-blind restoration techniques can be used to perform deconvolution. Six deconvolution techniques are presented and employed to restore a synthetic phase derivative map, namely direct deconvolution, regularized deconvolution, the Richardson-Lucy algorithm and Wiener filtering, the last two with two variants concerning their practical implementations. The results show that the noise that corrupts the grid images must be thoroughly taken into account to limit its effect on the deconvolved strain maps. The difficulty here is that the noise on the grid image yields a spatially correlated noise on the strain maps. In particular, numerical experiments on synthetic data show that direct and regularized deconvolutions are unstable when noisy data are processed. The same remark holds when Wiener filtering is employed without taking into account noise autocorrelation. On the other hand, the Richardson-Lucy algorithm and Wiener filtering with noise autocorrelation provide deconvolved maps where the impact of noise remains controlled within a certain limit. It is also observed that the latter technique outperforms the Richardson-Lucy algorithm. Two short examples of actual strain field restoration are finally shown. They deal with asphalt and shape memory alloy specimens. The benefits and limitations of deconvolution are presented and discussed in these two cases. The main conclusion is that strain maps are correctly deconvolved when the signal-to-noise ratio is high and that actual noise in the actual strain maps must be more specifically characterized than in the current study to address higher noise levels with Wiener filtering.
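For orientation, a minimal 1D Richardson-Lucy loop of the kind benchmarked above is sketched below; the profile and kernel are synthetic assumptions, and the better-performing variant in the paper additionally models the autocorrelated noise, which this sketch omits.

```python
import numpy as np

def richardson_lucy(data, kernel, n_iter=30):
    """Minimal 1D Richardson-Lucy deconvolution (illustrative sketch only;
    the paper's variants additionally account for noise autocorrelation)."""
    kernel = kernel / kernel.sum()
    mirror = kernel[::-1]
    estimate = np.full_like(data, data.mean())
    for _ in range(n_iter):
        blurred = np.convolve(estimate, kernel, mode="same")
        ratio = data / np.maximum(blurred, 1e-12)
        estimate *= np.convolve(ratio, mirror, mode="same")
    return estimate

# Example: a blurred two-step profile (synthetic, not from the paper).
x = np.linspace(-1, 1, 400)
truth = 1.0 * (np.abs(x) < 0.3) + 0.5 * (np.abs(x) < 0.6)
kern = np.exp(-0.5 * np.linspace(-3, 3, 61) ** 2)
blurred = np.convolve(truth, kern / kern.sum(), mode="same")
restored = richardson_lucy(blurred, kern)   # edges steeper than in 'blurred'
```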
NASA Astrophysics Data System (ADS)
Bardy, Fabrice; Van Dun, Bram; Dillon, Harvey; Cowan, Robert
2014-08-01
Objective. To evaluate the viability of disentangling a series of overlapping ‘cortical auditory evoked potentials’ (CAEPs) elicited by different stimuli using least-squares (LS) deconvolution, and to assess the adaptation of CAEPs for different stimulus onset-asynchronies (SOAs). Approach. Optimal aperiodic stimulus sequences were designed by controlling the condition number of matrices associated with the LS deconvolution technique. First, theoretical considerations of LS deconvolution were assessed in simulations in which multiple artificial overlapping responses were recovered. Second, biological CAEPs were recorded in response to continuously repeated stimulus trains containing six different tone-bursts with frequencies 8, 4, 2, 1, 0.5, 0.25 kHz separated by SOAs jittered around 150 (120-185), 250 (220-285) and 650 (620-685) ms. The control condition had a fixed SOA of 1175 ms. In a second condition, using the same SOAs, trains of six stimuli were separated by a silence gap of 1600 ms. Twenty-four adults with normal hearing (<20 dB HL) were assessed. Main results. Results showed disentangling of a series of overlapping responses using LS deconvolution on simulated waveforms as well as on real EEG data. The use of rapid presentation and LS deconvolution did not, however, allow the recovered CAEPs to have a higher signal-to-noise ratio than for slowly presented stimuli. The LS deconvolution technique enables the analysis of a series of overlapping responses in EEG. Significance. LS deconvolution is a useful technique for the study of adaptation mechanisms of CAEPs for closely spaced stimuli whose characteristics change from stimulus to stimulus. High-rate presentation is necessary to develop an understanding of how the auditory system encodes natural speech or other intrinsically high-rate stimuli.
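The core of the LS technique can be sketched as follows: stimulus onsets define a sparse design matrix whose condition number is checked before solving the normal equations. This toy version uses a single response class and assumed numbers (sampling rate, jitter grid, response length), whereas the study optimised sequences for six interleaved stimuli.

```python
import numpy as np

# Least-squares disentangling of overlapping responses (illustrative sketch).
fs = 250                        # Hz (assumed)
resp_len = 125                  # 500 ms response support (assumed)
n_samples = 3000

rng = np.random.default_rng(1)
# Jittered onsets on a 7-sample grid; overlaps occur because mean SOA < resp_len.
onsets = np.sort(rng.choice(np.arange(0, n_samples - resp_len, 7), 40, replace=False))

# Design matrix: column j of the unknown response contributes at each onset.
X = np.zeros((n_samples, resp_len))
for t0 in onsets:
    X[t0:t0 + resp_len, :] += np.eye(resp_len)

print("condition number:", np.linalg.cond(X))   # jitter keeps this manageable

true_resp = np.sin(2 * np.pi * np.arange(resp_len) / resp_len) * np.hanning(resp_len)
eeg = X @ true_resp + rng.normal(0, 0.1, n_samples)   # overlapping noisy "EEG"
recovered, *_ = np.linalg.lstsq(X, eeg, rcond=None)   # disentangled response
```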
Chen, Zhaoxue; Chen, Hao
2014-01-01
A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed in the same continuous GRBF model, so that image degradation is simplified to the convolution of two continuous Gaussian functions, and image deconvolution is converted to calculating the weighted coefficients of two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of the restored images. To overcome the drawback of long computation times, graphics-processing-unit multithreading or an increased spacing of the control points is adopted to speed up the implementation of the GRBF method. The experiments show that image deconvolution can be efficiently implemented with the continuous GRBF model, which is also of considerable reference value for the study of three-dimensional microscopic image deconvolution.
Minimum entropy deconvolution and blind equalisation
NASA Technical Reports Server (NTRS)
Satorius, E. H.; Mulligan, J. J.
1992-01-01
Relationships between minimum entropy deconvolution, developed primarily for geophysics applications, and blind equalization are pointed out. It is seen that a large class of existing blind equalization algorithms are directly related to the scale-invariant cost functions used in minimum entropy deconvolution. Thus the extensive analyses of these cost functions can be directly applied to blind equalization, including the important asymptotic results of Donoho.
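The scale-invariant cost in question is essentially a normalised fourth moment (a kurtosis proxy) of the deconvolved output. The sketch below makes the link explicit by maximising that cost over FIR filter taps with a generic optimiser; the classical Wiggins iteration solves the same objective more efficiently, and the synthetic trace is an assumption.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.signal import lfilter

# Minimum entropy deconvolution sketch: find an FIR filter that maximises the
# scale-invariant kurtosis of its output, driving it toward a spiky signal.
rng = np.random.default_rng(0)
spikes = np.zeros(1000)
spikes[rng.choice(1000, 8, replace=False)] = rng.normal(0, 1, 8) * 5
wavelet = np.exp(-np.arange(40) / 8.0) * np.cos(np.arange(40) / 2.0)
trace = np.convolve(spikes, wavelet, mode="full")[:1000] + rng.normal(0, 0.01, 1000)

def neg_kurtosis(f):
    y = lfilter(f, [1.0], trace)
    return -np.sum(y ** 4) / np.sum(y ** 2) ** 2   # scale-invariant MED cost

f0 = np.zeros(30)
f0[0] = 1.0                                        # start from identity filter
result = minimize(neg_kurtosis, f0, method="L-BFGS-B")
recovered = lfilter(result.x, [1.0], trace)        # approximately spiky output
```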
Scalar flux modeling in turbulent flames using iterative deconvolution
NASA Astrophysics Data System (ADS)
Nikolaou, Z. M.; Cant, R. S.; Vervisch, L.
2018-04-01
In the context of large eddy simulations, deconvolution is an attractive alternative for modeling the unclosed terms appearing in the filtered governing equations. Such methods have been used in a number of studies for non-reacting and incompressible flows; however, their application in reacting flows is limited in comparison. Deconvolution methods originate from clearly defined operations, and in theory they can be used in order to model any unclosed term in the filtered equations including the scalar flux. In this study, an iterative deconvolution algorithm is used in order to provide a closure for the scalar flux term in a turbulent premixed flame by explicitly filtering the deconvoluted fields. The assessment of the method is conducted a priori using a three-dimensional direct numerical simulation database of a turbulent freely propagating premixed flame in a canonical configuration. In contrast to most classical a priori studies, the assessment is more stringent as it is performed on a much coarser mesh which is constructed using the filtered fields as obtained from the direct simulations. For the conditions tested in this study, deconvolution is found to provide good estimates both of the scalar flux and of its divergence.
NASA Astrophysics Data System (ADS)
Raghunath, N.; Faber, T. L.; Suryanarayanan, S.; Votaw, J. R.
2009-02-01
Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique were studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies to demonstrate marked improvement. Thus, the deconvolution technique presented here appears promising as a valid alternative to existing motion correction methods for PET. It has the potential for deblurring an image from any modality if the causative motion is known and its effect can be represented in a system matrix.
NASA Astrophysics Data System (ADS)
Krishnan, Karthik; Reddy, Kasireddy V.; Ajani, Bhavya; Yalavarthy, Phaneendra K.
2017-02-01
CT and MR perfusion weighted imaging (PWI) enable quantification of perfusion parameters in stroke studies. These parameters are calculated from the residual impulse response function (IRF) based on a physiological model for tissue perfusion. The standard approach for estimating the IRF is deconvolution using oscillatory-limited singular value decomposition (oSVD) or Frequency Domain Deconvolution (FDD). FDD is widely recognized as the fastest approach currently available for deconvolution of CT Perfusion/MR PWI. In this work, three faster methods are proposed. The first is a direct (model-based) crude approximation to the final perfusion quantities (blood flow, blood volume, mean transit time and delay) using the Welch-Satterthwaite approximation for gamma-fitted concentration time curves (CTC). The second is a fast, accurate deconvolution method, which we call Analytical Fourier Filtering (AFF). The third is another fast, accurate deconvolution technique using Showalter's method, which we call Analytical Showalter's Spectral Filtering (ASSF). Through systematic evaluation on phantom and clinical data, the proposed methods are shown to be computationally more than twice as fast as FDD. The two deconvolution based methods, AFF and ASSF, are also shown to be quantitatively accurate compared to FDD and oSVD.
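For context, the baseline that oSVD and the proposed analytical filters refine is a linear-algebraic deconvolution of the tissue curve by the arterial input function. The sketch below uses plain truncated SVD with assumed curves and threshold; it is not the oscillation-regularised or analytical methods of the paper.

```python
import numpy as np
from scipy.linalg import toeplitz

# Truncated-SVD deconvolution of a tissue curve into its residue function.
dt = 1.0                                    # s, sampling interval (assumed)
t = np.arange(0, 60, dt)
aif = (t / 5.0) ** 3 * np.exp(-t / 1.5)     # gamma-variate AIF (assumed shape)
irf = np.exp(-t / 4.0)                      # true residue function, MTT = 4 s
cbf = 0.6                                   # true flow (arbitrary units)
tissue = cbf * dt * np.convolve(aif, irf)[: t.size]

# Lower-triangular Toeplitz convolution matrix, so tissue = A @ (cbf * irf).
A = dt * toeplitz(aif, np.zeros_like(aif))
U, s, Vt = np.linalg.svd(A)
s_inv = np.where(s > 1e-3 * s[0], 1.0 / s, 0.0)  # truncation; raise for noisy data
flow_irf = Vt.T @ (s_inv * (U.T @ tissue))       # estimate of cbf * irf
print("estimated flow:", flow_irf.max())         # ~0.6 on this noise-free example
```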
NASA Astrophysics Data System (ADS)
Wapenaar, Kees; van der Neut, Joost; Ruigrok, Elmer; Draganov, Deyan; Hunziker, Juerg; Slob, Evert; Thorbecke, Jan; Snieder, Roel
2010-05-01
In recent years, seismic interferometry (or Green's function retrieval) has led to many applications in seismology (exploration, regional and global), underwater acoustics and ultrasonics. One of the explanations for this broad interest lies in the simplicity of the methodology. In passive data applications a simple crosscorrelation of responses at two receivers gives the impulse response (Green's function) at one receiver as if there were a source at the position of the other. In controlled-source applications the procedure is similar, except that it additionally involves a summation over the sources. It has also been recognized that the simple crosscorrelation approach has its limitations. From the various theoretical models it follows that there are a number of underlying assumptions for retrieving the Green's function by crosscorrelation. The most important assumptions are that the medium is lossless and that the waves are equipartitioned. In heuristic terms the latter condition means that the receivers are illuminated isotropically from all directions, which is for example achieved when the sources are regularly distributed along a closed surface, the sources are mutually uncorrelated and their power spectra are identical. Despite the fact that in practical situations these conditions are at most only partly fulfilled, the results of seismic interferometry are generally quite robust, but the retrieved amplitudes are unreliable and the results are often blurred by artifacts. Several researchers have proposed to address some of the shortcomings by replacing the correlation process by deconvolution. In most cases the employed deconvolution procedure is essentially 1-D (i.e., trace-by-trace deconvolution). This compensates for the anelastic losses, but it does not account for the anisotropic illumination of the receivers. To obtain more accurate results, seismic interferometry by deconvolution should acknowledge the 3-D nature of the seismic wave field. Hence, from a theoretical point of view, the trace-by-trace process should be replaced by a full 3-D wave field deconvolution process. Interferometry by multidimensional deconvolution is more accurate than the trace-by-trace correlation and deconvolution approaches, but the processing is more involved. In the presentation we will give a systematic analysis of seismic interferometry by crosscorrelation versus multidimensional deconvolution and discuss applications of both approaches.
Tracking Genomic Cancer Evolution for Precision Medicine: The Lung TRACERx Study
Jamal-Hanjani, Mariam; Hackshaw, Alan; Ngai, Yenting; Shaw, Jacqueline; Dive, Caroline; Quezada, Sergio; Middleton, Gary; de Bruin, Elza; Le Quesne, John; Shafi, Seema; Falzon, Mary; Horswell, Stuart; Blackhall, Fiona; Khan, Iftekhar; Janes, Sam; Nicolson, Marianne; Lawrence, David; Forster, Martin; Fennell, Dean; Lee, Siow-Ming; Lester, Jason; Kerr, Keith; Muller, Salli; Iles, Natasha; Smith, Sean; Murugaesu, Nirupa; Mitter, Richard; Salm, Max; Stuart, Aengus; Matthews, Nik; Adams, Haydn; Ahmad, Tanya; Attanoos, Richard; Bennett, Jonathan; Birkbak, Nicolai Juul; Booton, Richard; Brady, Ged; Buchan, Keith; Capitano, Arrigo; Chetty, Mahendran; Cobbold, Mark; Crosbie, Philip; Davies, Helen; Denison, Alan; Djearman, Madhav; Goldman, Jacki; Haswell, Tom; Joseph, Leena; Kornaszewska, Malgorzata; Krebs, Matthew; Langman, Gerald; MacKenzie, Mairead; Millar, Joy; Morgan, Bruno; Naidu, Babu; Nonaka, Daisuke; Peggs, Karl; Pritchard, Catrin; Remmen, Hardy; Rowan, Andrew; Shah, Rajesh; Smith, Elaine; Summers, Yvonne; Taylor, Magali; Veeriah, Selvaraju; Waller, David; Wilcox, Ben; Wilcox, Maggie; Woolhouse, Ian; McGranahan, Nicholas; Swanton, Charles
2014-01-01
The importance of intratumour genetic and functional heterogeneity is increasingly recognised as a driver of cancer progression and survival outcome. Understanding how tumour clonal heterogeneity impacts upon therapeutic outcome, however, is still an area of unmet clinical and scientific need. TRACERx (TRAcking non-small cell lung Cancer Evolution through therapy [Rx]), a prospective study of patients with primary non-small cell lung cancer (NSCLC), aims to define the evolutionary trajectories of lung cancer in both space and time through multiregion and longitudinal tumour sampling and genetic analysis. By following cancers from diagnosis to relapse, tracking the evolutionary trajectories of tumours in relation to therapeutic interventions, and determining the impact of clonal heterogeneity on clinical outcomes, TRACERx may help to identify novel therapeutic targets for NSCLC and may also serve as a model applicable to other cancer types. PMID:25003521
An optimized algorithm for multiscale wideband deconvolution of radio astronomical images
NASA Astrophysics Data System (ADS)
Offringa, A. R.; Smirnov, O.
2017-10-01
We describe a new multiscale deconvolution algorithm that can also be used in a multifrequency mode. The algorithm only affects the minor clean loop. In single-frequency mode, the minor loop of our improved multiscale algorithm is over an order of magnitude faster than the CASA multiscale algorithm, and produces results of similar quality. For multifrequency deconvolution, a technique named joined-channel cleaning is used. In this mode, the minor loop of our algorithm is two to three orders of magnitude faster than CASA MSMFS. We extend the multiscale mode with automated scale-dependent masking, which allows structures to be cleaned below the noise. We describe a new scale-bias function for use in multiscale cleaning. We test a second deconvolution method that is a variant of the MORESANE deconvolution technique, and uses a convex optimization technique with isotropic undecimated wavelets as a dictionary. On simple well-calibrated data, the convex optimization algorithm produces visually more representative models. On complex or imperfect data, the convex optimization algorithm has stability issues.
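For readers unfamiliar with the minor loop being accelerated, a bare single-scale Hogbom CLEAN minor loop is sketched below with an assumed gain and threshold; the paper's algorithm extends this with multiple scales, joined-channel cleaning and automated masking.

```python
import numpy as np

def hogbom_clean(dirty, psf, gain=0.1, threshold=1e-3, max_iter=1000):
    """Single-scale Hogbom CLEAN minor loop (orientation sketch only; the
    paper's algorithm adds scales, joined channels and auto-masking)."""
    residual = dirty.astype(float).copy()
    model = np.zeros_like(residual)
    cy, cx = np.array(psf.shape) // 2            # PSF peak is at (cy, cx)
    for _ in range(max_iter):
        y, x = np.unravel_index(np.abs(residual).argmax(), residual.shape)
        peak = residual[y, x]
        if abs(peak) < threshold:
            break
        model[y, x] += gain * peak
        # Subtract the shifted, scaled PSF, clipping its footprint at borders.
        y0, x0 = y - cy, x - cx
        ys = slice(max(0, y0), min(residual.shape[0], y0 + psf.shape[0]))
        xs = slice(max(0, x0), min(residual.shape[1], x0 + psf.shape[1]))
        pys = slice(ys.start - y0, ys.stop - y0)
        pxs = slice(xs.start - x0, xs.stop - x0)
        residual[ys, xs] -= gain * peak * psf[pys, pxs]
    return model, residual

# Toy usage: one off-centre point source observed through a smooth PSF.
psf = np.outer(np.hanning(33), np.hanning(33))
psf /= psf.max()
dirty = 2.0 * np.roll(np.roll(psf, 10, axis=0), -5, axis=1)
model, residual = hogbom_clean(dirty, psf)       # model peaks at the source
```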
New regularization scheme for blind color image deconvolution
NASA Astrophysics Data System (ADS)
Chen, Li; He, Yu; Yap, Kim-Hui
2011-01-01
This paper proposes a new regularization scheme to address blind color image deconvolution. Color images generally have a significant correlation among the red, green, and blue channels. Conventional blind monochromatic deconvolution algorithms handle the color image channels independently, thereby ignoring the interchannel correlation present in the color images. In view of this, a unified regularization scheme for images is developed to recover edges of color images and reduce color artifacts. In addition, by using the color image properties, a spectral-based regularization operator is adopted to impose constraints on the blurs. Further, this paper proposes a reinforcement regularization framework that integrates a soft parametric learning term in addressing blind color image deconvolution. A blur modeling scheme is developed to evaluate the relevance of manifold parametric blur structures, and the information is integrated into the deconvolution scheme. An optimization procedure called alternating minimization is then employed to iteratively minimize the image- and blur-domain cost functions. Experimental results show that the method is able to achieve satisfactory restored color images under different blurring conditions.
Methods and Apparatus for Reducing Multipath Signal Error Using Deconvolution
NASA Technical Reports Server (NTRS)
Kumar, Rajendra (Inventor); Lau, Kenneth H. (Inventor)
1999-01-01
A deconvolution approach to adaptive signal processing has been applied to the elimination of signal multipath errors, as embodied in one preferred embodiment in a global positioning system receiver. The method and receiver of the present invention estimate and then compensate for multipath effects in a comprehensive manner. Application of deconvolution, along with other adaptive identification and estimation techniques, results in a completely novel GPS (Global Positioning System) receiver architecture.
Improving space debris detection in GEO ring using image deconvolution
NASA Astrophysics Data System (ADS)
Núñez, Jorge; Núñez, Anna; Montojo, Francisco Javier; Condominas, Marta
2015-07-01
In this paper we present a method based on image deconvolution to improve the detection of space debris, mainly in the geostationary ring. Among the deconvolution methods we chose the iterative Richardson-Lucy (R-L) algorithm, as the method that achieves the best results with a reasonable amount of computation. For this work, we used two sets of real 4096 × 4096 pixel test images obtained with the Telescope Fabra-ROA at Montsec (TFRM). Using the first set of data, we establish the optimal number of iterations at 7; applying the R-L method with 7 iterations to the images, we show that the astrometric accuracy does not vary significantly, while the limiting magnitude of the deconvolved images increases significantly compared to the original ones. The increase is on average about 1.0 magnitude, which means that objects up to 2.5 times fainter can be detected after deconvolution. The application of the method to the second set of test images, which includes several faint objects, shows that, after deconvolution, up to four previously undetected faint objects are detected in a single frame. Finally, we carried out a study of some economic aspects of applying the deconvolution method, showing that an important economic impact can be envisaged.
Richardson-Lucy deconvolution as a general tool for combining images with complementary strengths.
Ingaramo, Maria; York, Andrew G; Hoogendoorn, Eelco; Postma, Marten; Shroff, Hari; Patterson, George H
2014-03-17
We use Richardson-Lucy (RL) deconvolution to combine multiple images of a simulated object into a single image in the context of modern fluorescence microscopy techniques. RL deconvolution can merge images with very different point-spread functions, such as in multiview light-sheet microscopes, while preserving the best resolution information present in each image. We show that RL deconvolution is also easily applied to merge high-resolution, high-noise images with low-resolution, low-noise images, relevant when complementing conventional microscopy with localization microscopy. We also use RL deconvolution to merge images produced by different simulated illumination patterns, relevant to structured illumination microscopy (SIM) and image scanning microscopy (ISM). The quality of our ISM reconstructions is at least as good as reconstructions using standard inversion algorithms for ISM data, but our method follows a simpler recipe that requires no mathematical insight. Finally, we apply RL deconvolution to merge a series of ten images with varying signal and resolution levels. This combination is relevant to gated stimulated-emission depletion (STED) microscopy, and shows that merges of high-quality images are possible even in cases for which a non-iterative inversion algorithm is unknown.
Bardy, Fabrice; Dillon, Harvey; Van Dun, Bram
2014-04-01
Rapid presentation of stimuli in an evoked response paradigm can lead to overlap of multiple responses and consequently difficulties interpreting waveform morphology. This paper presents a deconvolution method allowing overlapping multiple responses to be disentangled. The deconvolution technique uses a least-squares error approach. A methodology is proposed to optimize the stimulus sequence associated with the deconvolution technique under low-jitter conditions. It controls the condition number of the matrices involved in recovering the responses. Simulations were performed using the proposed deconvolution technique. Multiple overlapping responses can be recovered perfectly in noiseless conditions. In the presence of noise, the amount of error introduced by the technique can be controlled a priori by the condition number of the matrix associated with the stimulus sequence used. The simulation results indicate the need for a minimum amount of jitter, as well as a sufficient number of overlap combinations, to obtain optimum results. An aperiodic model is recommended to improve reconstruction. We propose a deconvolution technique allowing multiple overlapping responses to be extracted and a method of choosing the stimulus sequence optimal for response recovery. This technique may allow audiologists, psychologists, and electrophysiologists to optimize their experimental designs involving rapidly presented stimuli, and to recover overlapping evoked responses.
He, Xinzi; Yu, Zhen; Wang, Tianfu; Lei, Baiying; Shi, Yiyan
2018-01-01
Dermoscopy imaging has been a routine examination approach for skin lesion diagnosis. Accurate segmentation is the first step for automatic dermoscopy image assessment. The main challenges for skin lesion segmentation are the numerous variations in the viewpoint and scale of the skin lesion region. To handle these challenges, we propose a novel skin lesion segmentation network via a very deep dense deconvolution network based on dermoscopic images. Specifically, the deep dense layer and the generic multi-path Deep RefineNet are combined to improve the segmentation performance. The deep representation of all available layers is aggregated to form the global feature maps using skip connections. Also, the dense deconvolution layer is leveraged to capture diverse appearance features via the contextual information. Finally, we apply the dense deconvolution layer to smooth segmentation maps and obtain the final high-resolution output. Our proposed method shows its superiority over the state-of-the-art approaches on the publicly available 2016 and 2017 skin lesion challenge datasets, achieving accuracies of 96.0% and 93.9%, increases of 6.0% and 1.2% over the traditional method, respectively. With the Dense Deconvolution Net, the average time for processing one test image with our proposed framework was 0.253 s.
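The 'deconvolution' layers referred to here are transposed convolutions, which upsample coarse feature maps back toward input resolution. A plain numpy/scipy sketch of the operation, with an assumed fixed kernel standing in for learned weights, is:

```python
import numpy as np
from scipy.signal import convolve2d

def transposed_conv2d(feature, kernel, stride=2):
    """What a 'deconvolution' (transposed convolution) layer computes:
    insert zeros between input pixels, then convolve. Upsamples by `stride`.
    Plain numpy/scipy sketch; a real network would use a DL framework."""
    h, w = feature.shape
    up = np.zeros((h * stride, w * stride))
    up[::stride, ::stride] = feature             # zero-stuffing upsampling
    return convolve2d(up, kernel, mode="same")   # kernel smooths/fills gaps

feat = np.random.default_rng(0).random((8, 8))   # a coarse feature map
kernel = np.ones((4, 4)) / 4.0                   # stand-in for learned weights
out = transposed_conv2d(feat, kernel)
print(feat.shape, "->", out.shape)               # (8, 8) -> (16, 16)
```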
NASA Astrophysics Data System (ADS)
Li, Zhong-xiao; Li, Zhen-chun
2016-09-01
The multichannel predictive deconvolution can be conducted in overlapping temporal and spatial data windows to solve the 2D predictive filter for multiple removal. Generally, the 2D predictive filter can better remove multiples, at the cost of more computation time, compared with the 1D predictive filter. In this paper we first use the cross-correlation strategy to determine the limited supporting region of filters, where the coefficients play a major role for multiple removal in the filter coefficient space. To solve the 2D predictive filter, the traditional multichannel predictive deconvolution uses the least squares (LS) algorithm, which requires that primaries and multiples be orthogonal. To relax the orthogonality assumption, the iterative reweighted least squares (IRLS) algorithm and the fast iterative shrinkage thresholding (FIST) algorithm have been used to solve the 2D predictive filter in the multichannel predictive deconvolution with the non-Gaussian maximization (L1 norm minimization) constraint of primaries. The FIST algorithm has been demonstrated to be a faster alternative to the IRLS algorithm. In this paper we introduce the FIST algorithm to solve the filter coefficients in the limited supporting region of filters. Compared with the FIST-based multichannel predictive deconvolution without the limited supporting region of filters, the proposed method can reduce the computation burden effectively while achieving a similar accuracy. Additionally, the proposed method can better balance multiple removal and primary preservation than the traditional LS-based multichannel predictive deconvolution and the FIST-based single-channel predictive deconvolution. Synthetic and field data sets demonstrate the effectiveness of the proposed method.
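For orientation, classical single-channel predictive deconvolution designs a prediction filter from the trace autocorrelation and subtracts the predictable (multiple) energy. The sketch below is that textbook LS version with assumed filter length and prediction lag; the paper instead solves an L1-constrained 2D filter with the FIST algorithm.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def predictive_deconvolution(trace, filt_len=30, pred_lag=12, eps=1e-3):
    """Textbook 1D predictive deconvolution via LS normal equations.
    filt_len and pred_lag are assumed tuning parameters, not the paper's."""
    n = trace.size
    r = np.correlate(trace, trace, mode="full")[n - 1:]   # autocorrelation
    col = r[:filt_len].copy()
    col[0] *= 1.0 + eps                                   # prewhitening
    rhs = r[pred_lag:pred_lag + filt_len]                 # prediction-lag lags
    f = solve_toeplitz(col, rhs)                          # Toeplitz solve
    predicted = np.convolve(trace, f)[:n]                 # predictable part
    out = trace.copy()
    out[pred_lag:] -= predicted[:-pred_lag]               # subtract multiples
    return out
```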
Studying Regional Wave Source Time Functions Using a Massive Automated EGF Deconvolution Procedure
NASA Astrophysics Data System (ADS)
Xie, J.; Schaff, D. P.
2010-12-01
Reliably estimated source time functions (STF) from high-frequency regional waveforms, such as Lg, Pn and Pg, provide important input for seismic source studies, explosion detection, and minimization of parameter trade-off in attenuation studies. The empirical Green's function (EGF) method can be used for estimating the STF, but it requires a strict recording condition. Waveforms from pairs of events that are similar in focal mechanism, but different in magnitude, must be on-scale recorded at the same stations for the method to work. Searching for such waveforms can be very time consuming, particularly for regional waves that contain complex path effects and have reduced S/N ratios due to attenuation. We have developed a massive, automated procedure to conduct inter-event waveform deconvolution calculations from many candidate event pairs. The procedure automatically evaluates the “spikiness” of the deconvolutions by calculating their “sdc”, which is defined as the peak divided by the background value. The background value is calculated as the mean absolute value of the deconvolution, excluding 10 s around the source time function. When the sdc values are about 10 or higher, the deconvolutions are found to be sufficiently spiky (pulse-like), indicating similar path Green’s functions and good estimates of the STF. We have applied this automated procedure to Lg waves and full regional wavetrains from 989 M ≥ 5 events in and around China, calculating about a million deconvolutions. Of these we found about 2700 deconvolutions with sdc greater than 9 which, if they have a sufficiently broad frequency band, can be used to estimate the STFs of the larger events. We are currently refining our procedure, as well as the estimated STFs. We will infer the source scaling using the STFs. We will also explore the possibility that the deconvolution procedure could complement cross-correlation in a real-time event-screening process.
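The screening statistic is simple enough to state in code; below is a sketch of the "sdc" spikiness measure as defined above, with the sampling rate and window handling as assumptions.

```python
import numpy as np

def sdc(decon, fs, exclude_s=10.0):
    """Spikiness of a deconvolution trace: peak amplitude divided by the mean
    absolute background, excluding ~10 s around the source time function."""
    peak_idx = int(np.abs(decon).argmax())
    half = int(exclude_s * fs / 2)
    mask = np.ones(decon.size, dtype=bool)
    mask[max(0, peak_idx - half):peak_idx + half] = False
    return np.abs(decon[peak_idx]) / np.mean(np.abs(decon[mask]))

# Deconvolutions with sdc of about 10 or higher were accepted as pulse-like.
```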
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merlin, Thibaut, E-mail: thibaut.merlin@telecom-bretagne.eu; Visvikis, Dimitris; Fernandez, Philippe
2015-02-15
Purpose: Partial volume effect (PVE) plays an important role in both qualitative and quantitative PET image accuracy, especially for small structures. A previously proposed voxelwise PVE correction method applied on PET reconstructed images involves the use of Lucy–Richardson deconvolution incorporating wavelet-based denoising to limit the associated propagation of noise. The aim of this study is to incorporate the deconvolution, coupled with the denoising step, directly inside the iterative reconstruction process to further improve PVE correction. Methods: The list-mode ordered subset expectation maximization (OSEM) algorithm has been modified accordingly, with the application of the Lucy–Richardson deconvolution algorithm to the current estimation of the image at each reconstruction iteration. Acquisitions of the NEMA NU2-2001 IQ phantom were performed on a GE DRX PET/CT system to study the impact of incorporating the deconvolution inside the reconstruction [with and without the point spread function (PSF) model] in comparison to its application postreconstruction and to standard iterative reconstruction incorporating the PSF model. The impact of the denoising step was also evaluated. Images were semiquantitatively assessed by studying the trade-off between the intensity recovery and the noise level in the background estimated as relative standard deviation. Qualitative assessments of the developed methods were additionally performed on clinical cases. Results: Incorporating the deconvolution without denoising within the reconstruction achieved superior intensity recovery in comparison to both standard OSEM reconstruction integrating a PSF model and application of the deconvolution algorithm in a postreconstruction process. The addition of the denoising step limited the SNR degradation while preserving the intensity recovery. Conclusions: This study demonstrates the feasibility of incorporating the Lucy–Richardson deconvolution associated with a wavelet-based denoising in the reconstruction process to better correct for PVE. Future work includes further evaluations of the proposed method on clinical datasets and the use of improved PSF models.
Artificial neural networks in biology and chemistry: the evolution of a new analytical tool.
Cartwright, Hugh M
2008-01-01
Once regarded as an eccentric and unpromising algorithm for the analysis of scientific data, the neural network has been developed in the last decade into a powerful computational tool. Its use now spans all areas of science, from the physical sciences and engineering to the life sciences and allied subjects. Applications range from the assessment of epidemiological data or the deconvolution of spectra to highly practical applications, such as the electronic nose. This introductory chapter considers briefly the growth in the use of neural networks and provides some general background in preparation for the more detailed chapters that follow.
Horger, Marius; Fallier-Becker, Petra; Thaiss, Wolfgang M; Sauter, Alexander; Bösmüller, Hans; Martella, Manuela; Preibsch, Heike; Fritz, Jan; Nikolaou, Konstantin; Kloth, Christopher
2018-05-03
This study aimed to test the hypothesis that ultrastructural wall abnormalities of lymphoma vessels correlate with perfusion computed tomography (PCT) kinetics. Our local institutional review board approved this prospective study. Between February 2013 and June 2016, we included 23 consecutive subjects with newly diagnosed lymphoma, who were referred for computed tomography-guided biopsy (6 women, 17 men; mean age, 60.61 ± 12.43 years; range, 28-74 years) and additionally agreed to undergo PCT of the target lymphoma tissues. PCT was obtained for 40 seconds using 80 kV, 120 mAs, 64 × 0.6-mm collimation, 6.9-cm z-axis coverage, and 26 volume measurements. Mean and maximum k-trans (mL/100 mL/min), blood flow (BF; mL/100 mL/min) and blood volume (BV) were quantified using the deconvolution and the maximum slope + Patlak calculation models. Immunohistochemical staining was performed for microvessel density quantification (vessels/mm²), and electron microscopy was used to determine the presence or absence of tight junctions, endothelial fenestration, basement membrane, and pericytes, and to measure extracellular matrix thickness. Extracellular matrix thickness as well as the presence or absence of tight junctions, basal lamina, and pericytes did not correlate with computed tomography perfusion parameters. Endothelial fenestrations correlated significantly with mean BF deconvolution (P = .047, r = 0.418) and additionally were significantly associated with higher mean BV deconvolution (P < .005). Mean k-trans Patlak correlated strongly with mean k-trans deconvolution (r = 0.939, P = .001), and both correlated with mean BF deconvolution (P = .001, r = 0.748), max BF deconvolution (P = .028, r = 0.564), mean BV deconvolution (P = .001, r = 0.752), and max BV deconvolution (P = .001, r = 0.771). Microvessel density correlated with max k-trans deconvolution (r = 0.564, P = .023). Vascular endothelial growth factor receptor-3 expression (receptor specific for lymphatics) correlated significantly with max k-trans Patlak (P = .041, r = 0.686) and mean BF deconvolution (P = .038, r = 0.695). k-Trans values of PCT do not correlate with ultrastructural microvessel features, whereas endothelial fenestrations correlate with increased intra-tumoral BVs.
Barrado Los Arcos, M; Rico Osés, M; Errasti Viader, M; Campo Vargas, M; Zelaya Huerta, M V; Martínez López, E
2016-01-01
Lung cancer is the principal cause of cancer death in men and women. We report the case of a man diagnosed with small cell lung cancer, metastatic from the outset. The disease is stable at present, forty-seven months from diagnosis, after receiving different treatment modalities.
Phylogenetic Copy-Number Factorization of Multiple Tumor Samples.
Zaccaria, Simone; El-Kebir, Mohammed; Klau, Gunnar W; Raphael, Benjamin J
2018-04-16
Cancer is an evolutionary process driven by somatic mutations. This process can be represented as a phylogenetic tree. Constructing such a phylogenetic tree from genome sequencing data is a challenging task due to the many types of mutations in cancer and the fact that nearly all cancer sequencing is of a bulk tumor, measuring a superposition of somatic mutations present in different cells. We study the problem of reconstructing tumor phylogenies from copy-number aberrations (CNAs) measured in bulk-sequencing data. We introduce the Copy-Number Tree Mixture Deconvolution (CNTMD) problem, which aims to find the phylogenetic tree with the fewest CNAs that explain the copy-number data from multiple samples of a tumor. We design an algorithm for solving the CNTMD problem and apply the algorithm to both simulated and real data. On simulated data, we find that our algorithm outperforms existing approaches that either perform deconvolution/factorization of mixed tumor samples or build phylogenetic trees assuming homogeneous tumor samples. On real data, we analyze multiple samples from a prostate cancer patient, identifying clones within these samples and a phylogenetic tree that relates these clones and their differing proportions across samples. This phylogenetic tree provides a higher-resolution view of the copy-number evolution of this cancer than published analyses.
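A much-simplified version of the deconvolution step conveys the flavour: if candidate clone copy-number profiles were known, the clone proportions in a bulk sample would follow from constrained least squares. The profiles and proportions below are invented, and CNTMD's actual contribution, jointly inferring the profiles and their phylogeny, is not attempted here.

```python
import numpy as np
from scipy.optimize import nnls

# Simplified mixture deconvolution: given candidate clone copy-number
# profiles (columns of C), estimate a bulk sample's clone proportions.
C = np.array([[2, 2, 2, 2],      # rows: genomic segments, columns: clones
              [2, 3, 3, 4],
              [2, 2, 1, 1],
              [2, 2, 2, 3]], dtype=float)

true_props = np.array([0.5, 0.2, 0.2, 0.1])
bulk = C @ true_props            # fractional copy numbers measured in bulk

props, _ = nnls(C, bulk)         # non-negativity-constrained least squares
props /= props.sum()             # normalise to mixture proportions
print(np.round(props, 3))        # close to true_props when C is well posed
```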
NASA Technical Reports Server (NTRS)
Wood, G. M.; Rayborn, G. H.; Ioup, J. W.; Ioup, G. E.; Upchurch, B. T.; Howard, S. J.
1981-01-01
Mathematical deconvolution of digitized analog signals from scientific measuring instruments is shown to be a means of extracting important information which is otherwise hidden due to time-constant and other broadening or distortion effects caused by the experiment. Three different approaches to deconvolution and their subsequent application to recorded data from three analytical instruments are considered. To demonstrate the efficacy of deconvolution, the use of these approaches to solve the convolution integral for the gas chromatograph, magnetic mass spectrometer, and the time-of-flight mass spectrometer are described. Other possible applications of these types of numerical treatment of data to yield superior results from analog signals of the physical parameters normally measured in aerospace simulation facilities are suggested and briefly discussed.
Multi-frame partially saturated images blind deconvolution
NASA Astrophysics Data System (ADS)
Ye, Pengzhao; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting
2016-12-01
When blurred images have saturated or over-exposed pixels, conventional blind deconvolution approaches often fail to estimate an accurate point spread function (PSF) and will introduce local ringing artifacts. In this paper, we propose a method to deal with this problem under a modified multi-frame blind deconvolution framework. First, in the kernel estimation step, a light-streak detection scheme using multi-frame blurred images is incorporated into the regularization constraint. Second, we deal with image regions affected by the saturated pixels separately, by modeling a weighted matrix during each multi-frame deconvolution iteration. Both synthetic and real-world examples show that more accurate PSFs can be estimated and that restored images have richer details and fewer negative effects compared to state-of-the-art methods.
Parallelization of a blind deconvolution algorithm
NASA Astrophysics Data System (ADS)
Matson, Charles L.; Borelli, Kathy J.
2006-09-01
Often it is of interest to deblur imagery in order to obtain higher-resolution images. Deblurring requires knowledge of the blurring function - information that is often not available separately from the blurred imagery. Blind deconvolution algorithms overcome this problem by jointly estimating both the high-resolution image and the blurring function from the blurred imagery. Because blind deconvolution algorithms are iterative in nature, they can take minutes to days to deblur an image, depending on how many frames of data are used for the deblurring and the platforms on which the algorithms are executed. Here we present our progress in parallelizing a blind deconvolution algorithm to increase its execution speed. This progress includes sub-frame parallelization and a code structure that is not specialized to a specific computer hardware architecture.
Improved deconvolution of very weak confocal signals.
Day, Kasey J; La Rivière, Patrick J; Chandler, Talon; Bindokas, Vytas P; Ferrier, Nicola J; Glick, Benjamin S
2017-01-01
Deconvolution is typically used to sharpen fluorescence images, but when the signal-to-noise ratio is low, the primary benefit is reduced noise and a smoother appearance of the fluorescent structures. 3D time-lapse (4D) confocal image sets can be improved by deconvolution. However, when the confocal signals are very weak, the popular Huygens deconvolution software erases fluorescent structures that are clearly visible in the raw data. We find that this problem can be avoided by prefiltering the optical sections with a Gaussian blur. Analysis of real and simulated data indicates that the Gaussian blur prefilter preserves meaningful signals while enabling removal of background noise. This approach is very simple, and it allows Huygens to be used with 4D imaging conditions that minimize photodamage.
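A sketch of the prefilter-then-deconvolve recipe follows; since the authors used the commercial Huygens package, a generic Richardson-Lucy loop is substituted, and the blur sigma and iteration count are assumptions rather than the paper's settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import fftconvolve

def prefiltered_rl(stack, psf, sigma=1.0, n_iter=20):
    """Gaussian-prefilter + Richardson-Lucy deconvolution of a 3D stack.
    Illustration of the paper's recipe with a generic RL loop in place of
    Huygens; sigma and n_iter are assumed, not the paper's settings."""
    data = np.clip(gaussian_filter(stack, sigma), 1e-9, None)  # denoise first
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1, ::-1]            # flipped PSF for RL update
    est = np.full_like(data, data.mean())
    for _ in range(n_iter):
        blurred = fftconvolve(est, psf, mode="same")
        ratio = data / np.maximum(blurred, 1e-12)
        est *= fftconvolve(ratio, psf_mirror, mode="same")
    return est
```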
Septal penetration correction in I-131 imaging following thyroid cancer treatment
NASA Astrophysics Data System (ADS)
Barrack, Fiona; Scuffham, James; McQuaid, Sarah
2018-04-01
Whole body gamma camera images acquired after I-131 treatment for thyroid cancer can suffer from collimator septal penetration artefacts because of the high energy of the gamma photons. This results in the appearance of ‘spoke’ artefacts, emanating from regions of high activity concentration, caused by the non-isotropic attenuation of the collimator. Deconvolution has the potential to reduce such artefacts by taking into account the non-Gaussian point-spread-function (PSF) of the system. A Richardson–Lucy deconvolution algorithm, with and without prior scatter-correction, was tested as a method of reducing septal penetration in planar gamma camera images. Phantom images (hot spheres within a warm background) were acquired and deconvolution using a measured PSF was applied. The results were evaluated through region-of-interest and line profile analysis to determine the success of artefact reduction and the optimal number of deconvolution iterations and damping parameter (λ). Without scatter-correction, the optimal results were obtained with 15 iterations and λ = 0.01, with the counts in the spokes reduced to 20% of the original value, indicating a substantial decrease in their prominence. When a triple-energy-window scatter-correction was applied prior to deconvolution, the optimal results were obtained with six iterations and λ = 0.02, which reduced the spoke counts to 3% of the original value. The prior application of scatter-correction therefore produced the best results, with a marked change in the appearance of the images. The optimal settings were then applied to six patient datasets to demonstrate utility in the clinical setting. In all datasets, spoke artefacts were substantially reduced after the application of scatter-correction and deconvolution, with the mean spoke count being reduced to 10% of the original value. This indicates that deconvolution is a promising technique for septal penetration artefact reduction that could potentially improve the diagnostic accuracy of I-131 imaging. Novelty and significance: This work has demonstrated that scatter correction combined with deconvolution can substantially reduce the appearance of septal penetration artefacts in I-131 phantom and patient gamma camera planar images, enabling improved visualisation of the I-131 distribution. Deconvolution with a symmetric PSF has previously been used to reduce artefacts in gamma camera images; however, this work details the novel use of an asymmetric PSF to remove the angularly dependent septal penetration artefacts.
Source Pulse Estimation of Mine Shock by Blind Deconvolution
NASA Astrophysics Data System (ADS)
Makowski, R.
The objective of seismic signal deconvolution is to extract from the signal information concerning the rockmass or the signal in the source of the shock. In the case of blind deconvolution, we have to extract information regarding both quantities. Many deconvolution methods used in exploration seismology were found to be of minor utility when applied to shock-induced signals recorded in the mines of the Lubin Copper District. The lack of effectiveness should be attributed to the inadequacy of the model on which the methods are based with respect to the propagation conditions for that type of signal. Each of the blind deconvolution methods involves a number of assumptions; hence, only if these assumptions are fulfilled may we expect reliable results. Consequently, we had to formulate a different model for the signals recorded in the copper mines of the Lubin District. The model is based on the following assumptions: (1) The signal emitted by the shock source is a short-term signal. (2) The signal transmitting system (rockmass) constitutes a parallel connection of elementary systems. (3) The elementary systems are of resonant type. Such a model seems to be justified by the geological structure as well as by the positions of the shock foci and seismometers. The results of time-frequency transformation also support the dominance of resonant-type propagation. Making use of the model, a new method for the blind deconvolution of seismic signals has been proposed. The adequacy of the new model, as well as the efficiency of the proposed method, has been confirmed by the results of blind deconvolution. The slight approximation errors obtained with a small number of approximating elements additionally corroborate the adequacy of the model.
NASA Astrophysics Data System (ADS)
McDonald, Geoff L.; Zhao, Qing
2017-01-01
Minimum Entropy Deconvolution (MED) has been applied successfully to rotating machine fault detection from vibration data; however, this method has limitations. A convolution adjustment to the MED definition and solution is proposed in this paper to address the discontinuity at the start of the signal, which in some cases causes spurious impulses to be erroneously deconvolved. A problem with the MED solution is that it is an iterative selection process, and will not necessarily design an optimal filter for the posed problem. Additionally, the problem goal in MED prefers to deconvolve a single impulse, while in rotating machine faults we expect one impulse-like vibration source per rotational period of the faulty element. Maximum Correlated Kurtosis Deconvolution was proposed to address some of these problems, and although it solves the target goal of multiple periodic impulses, it is still an iterative non-optimal solution to the posed problem and only solves for a limited set of impulses in a row. Ideally, the problem goal should target an impulse train as the output, and should directly solve for the optimal filter in a non-iterative manner. To meet these goals, we propose a non-iterative deconvolution approach called Multipoint Optimal Minimum Entropy Deconvolution Adjusted (MOMEDA). MOMEDA poses a deconvolution problem with an infinite impulse train as the goal, and the optimal filter solution can be solved for directly. From experimental data on a gearbox with and without a gear tooth chip, we show that MOMEDA and its deconvolution spectra as a function of the period between the impulses can be used to detect faults and study the health of rotating machine elements effectively.
The evolution of lung cancer screening.
Wilkinson, Neal W; Loewen, Gregory M; Klippenstein, Donald L; Litwin, Alan M; Anderson, Timothy M
2003-12-01
In the 1970s, four trials failed to demonstrate any mortality reduction using a combination of chest X-ray (CXR) and/or sputum cytology. The recent Early Lung Cancer Action Project (ELCAP) demonstrated that modern screening is capable of detecting Stage I lung cancers. Bronchial epithelial changes leading up to cancers are now understood to include histologic changes and genetic alterations. Emerging molecular markers detected in sputum and serum show promise for the future of lung cancer screening.
Improving Range Estimation of a 3-Dimensional Flash Ladar via Blind Deconvolution
2010-09-01
[Only table-of-contents and text fragments are available for this record: 2.1.4 Optical Imaging as a Linear and Nonlinear System; 2.1.5 Coherence Theory and Laser Light Statistics, background on coherence theory and the statistics of the laser light incident on the detector surface.]
Evaluation of deconvolution modelling applied to numerical combustion
NASA Astrophysics Data System (ADS)
Mehl, Cédric; Idier, Jérôme; Fiorina, Benoît
2018-01-01
A possible modelling approach in the large eddy simulation (LES) of reactive flows is to deconvolve resolved scalars: by inverting the LES filter, scalars such as mass fractions are reconstructed. This information can be used to close budget terms of filtered species balance equations, such as the filtered reaction rate. Being ill-posed in the mathematical sense, the problem is very sensitive to any numerical perturbation. The objective of the present study is to assess the ability of this kind of methodology to capture the chemical structure of premixed flames. For that purpose, three deconvolution methods are tested on a one-dimensional filtered laminar premixed flame configuration: the approximate deconvolution method based on Van Cittert iterative deconvolution, a Taylor decomposition-based method, and the regularised deconvolution method based on the minimisation of a quadratic criterion. These methods are then extended to the reconstruction of subgrid-scale profiles. Two methodologies are proposed: the first relies on subgrid-scale interpolation of deconvolved profiles, and the second uses parametric functions to describe the small scales. The tests analyse the ability of the methods to capture the filtered flame's chemical structure and front propagation speed. Results show that the deconvolution model should include information about small scales in order to regularise the filter inversion. A priori and a posteriori tests showed that the filtered flame propagation speed and structure cannot be captured if the filter size is too large.
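As an illustration of the first of these methods, the following sketch applies Van Cittert iterations to a 1-D filtered field, assuming (for simplicity) a Gaussian LES filter; the filter width, relaxation factor and iteration count are illustrative choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def van_cittert(phi_bar, sigma, n_iter=10, relax=1.0):
    """Sketch of approximate deconvolution by Van Cittert iterations,
    assuming a 1-D Gaussian LES filter of (grid-point) width sigma."""
    phi = np.asarray(phi_bar, dtype=float).copy()
    for _ in range(n_iter):
        # add back the residual between the filtered field and the
        # re-filtered current estimate
        phi = phi + relax * (phi_bar - gaussian_filter1d(phi, sigma))
    return phi
```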
Faceting for direction-dependent spectral deconvolution
NASA Astrophysics Data System (ADS)
Tasse, C.; Hugo, B.; Mirmont, M.; Smirnov, O.; Atemkeng, M.; Bester, L.; Hardcastle, M. J.; Lakhoo, R.; Perkins, S.; Shimwell, T.
2018-04-01
The new generation of radio interferometers is characterized by high sensitivity, wide fields of view and large fractional bandwidth. Synthesizing the deepest images enabled by the high dynamic range of these instruments requires us to take into account the direction-dependent Jones matrices while estimating the spectral properties of the sky in the imaging and deconvolution algorithms. In this paper we discuss and implement a wideband wide-field spectral deconvolution framework (DDFacet) based on image-plane faceting that takes into account generic direction-dependent effects. Specifically, we present a wide-field co-planar faceting scheme and discuss the various effects that need to be taken into account to solve the deconvolution problem (image-plane normalization, position-dependent point spread function, etc.). We discuss two wideband spectral deconvolution algorithms, based on hybrid matching pursuit and sub-space optimisation, respectively. A few interesting technical features incorporated in our imager are discussed, including baseline-dependent averaging, which has the effect of improving computing efficiency. The version of DDFacet presented here can account for any externally defined Jones matrices and/or beam patterns.
NASA Astrophysics Data System (ADS)
Pompa, P. P.; Cingolani, R.; Rinaldi, R.
2003-07-01
In this paper, we present a deconvolution method aimed at spectrally resolving the broad fluorescence spectra of proteins, namely, of the enzyme bovine liver glutamate dehydrogenase (GDH). The analytical procedure is based on the deconvolution of the emission spectra into three distinct Gaussian fluorescing bands Gj. The relative changes of the Gj parameters are directly related to the conformational changes of the enzyme, and provide interesting information about the fluorescence dynamics of the individual emitting contributions. Our deconvolution method results in an excellent fitting of all the spectra obtained with GDH in a number of experimental conditions (various conformational states of the protein) and describes very well the dynamics of a variety of phenomena, such as the dependence of hexamers association on protein concentration, the dynamics of thermal denaturation, and the interaction process between the enzyme and external quenchers. The investigation was carried out by means of different optical experiments, i.e., native enzyme fluorescence, thermal-induced unfolding, and fluorescence quenching studies, utilizing both the analysis of the “average” behavior of the enzyme and the proposed deconvolution approach.
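A minimal version of this kind of band deconvolution can be written as a least-squares fit of three Gaussians; the band centres, widths and amplitudes below are hypothetical placeholders, not the values reported for GDH.

```python
import numpy as np
from scipy.optimize import curve_fit

def three_gaussians(wl, *p):
    """Sum of three Gaussian bands; p = (A1, mu1, s1, A2, mu2, s2, A3, mu3, s3)."""
    return sum(p[i] * np.exp(-0.5 * ((wl - p[i + 1]) / p[i + 2]) ** 2)
               for i in range(0, 9, 3))

# Hypothetical band amplitudes, centres (nm) and widths; the noisy
# synthetic spectrum stands in for a measured emission spectrum.
p0 = [1.0, 330.0, 12.0, 0.8, 345.0, 15.0, 0.5, 365.0, 20.0]
wavelengths = np.linspace(300.0, 420.0, 400)
spectrum = three_gaussians(wavelengths, *p0) + 0.01 * np.random.randn(400)
popt, pcov = curve_fit(three_gaussians, wavelengths, spectrum, p0=p0)
```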
4Pi microscopy deconvolution with a variable point-spread function.
Baddeley, David; Carl, Christian; Cremer, Christoph
2006-09-20
To remove the axial sidelobes from 4Pi images, deconvolution forms an integral part of 4Pi microscopy. As a result of its high axial resolution, the 4Pi point spread function (PSF) is particularly susceptible to imperfect optical conditions within the sample. This is typically observed as a shift in the position of the maxima under the PSF envelope. A significantly varying phase shift renders deconvolution procedures based on a spatially invariant PSF essentially useless. We present a technique for computing the forward transformation in the case of a varying phase at a computational expense of the same order of magnitude as that of the shift invariant case, a method for the estimation of PSF phase from an acquired image, and a deconvolution procedure built on these techniques.
Improved deconvolution of very weak confocal signals
Day, Kasey J.; La Rivière, Patrick J.; Chandler, Talon; Bindokas, Vytas P.; Ferrier, Nicola J.; Glick, Benjamin S.
2017-01-01
Deconvolution is typically used to sharpen fluorescence images, but when the signal-to-noise ratio is low, the primary benefit is reduced noise and a smoother appearance of the fluorescent structures. 3D time-lapse (4D) confocal image sets can be improved by deconvolution. However, when the confocal signals are very weak, the popular Huygens deconvolution software erases fluorescent structures that are clearly visible in the raw data. We find that this problem can be avoided by prefiltering the optical sections with a Gaussian blur. Analysis of real and simulated data indicates that the Gaussian blur prefilter preserves meaningful signals while enabling removal of background noise. This approach is very simple, and it allows Huygens to be used with 4D imaging conditions that minimize photodamage. PMID:28868135
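A sketch of the prefiltering step, assuming a (z, y, x) stack and an illustrative in-plane sigma of 0.7 pixels (the appropriate value depends on the PSF width):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Blur only within each optical section (sigma = 0 along z). The 0.7 px
# sigma is an illustrative choice - small enough (below the PSF width) to
# preserve real structures while suppressing single-pixel noise before
# deconvolution. The Poisson array is stand-in data for a weak-signal stack.
stack = np.random.poisson(2.0, size=(16, 256, 256)).astype(float)
stack_prefiltered = gaussian_filter(stack, sigma=(0.0, 0.7, 0.7))
```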
Blind deconvolution post-processing of images corrected by adaptive optics
NASA Astrophysics Data System (ADS)
Christou, Julian C.
1995-08-01
Experience with the adaptive optics system at the Starfire Optical Range has shown that the point spread function is non-uniform, varying both spatially and temporally as well as being object dependent. Because of this, standard linear and non-linear deconvolution algorithms have difficulty removing the point spread function. In this paper we demonstrate the application of a blind deconvolution algorithm to adaptive-optics-compensated data, for which a separately measured point spread function is not needed.
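One generic way to realize such a blind deconvolution (not necessarily the algorithm of this paper) is alternating Richardson-Lucy updates of the object and the PSF; a sketch assuming a 2-D image and an initial PSF guess padded to the image size:

```python
import numpy as np
from scipy.signal import fftconvolve

def blind_richardson_lucy(img, psf0, n_outer=10, n_inner=5):
    """Sketch of alternating (blind) Richardson-Lucy deconvolution:
    the object and the PSF are updated in turn."""
    img = np.asarray(img, dtype=float)
    psf = np.asarray(psf0, dtype=float)
    psf /= psf.sum()
    obj = np.full_like(img, img.mean())
    eps = 1e-12
    for _ in range(n_outer):
        for _ in range(n_inner):       # update the object, PSF held fixed
            ratio = img / (fftconvolve(obj, psf, mode='same') + eps)
            obj *= fftconvolve(ratio, psf[::-1, ::-1], mode='same')
        for _ in range(n_inner):       # update the PSF, object held fixed
            ratio = img / (fftconvolve(psf, obj, mode='same') + eps)
            psf *= fftconvolve(ratio, obj[::-1, ::-1], mode='same')
        psf /= psf.sum()
    return obj, psf
```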
Computerised curve deconvolution of TL/OSL curves using a popular spreadsheet program.
Afouxenidis, D; Polymeris, G S; Tsirliganis, N C; Kitis, G
2012-05-01
This paper explores the possibility of using commercial software for thermoluminescence and optically stimulated luminescence curve deconvolution analysis. The widely used software package Microsoft Excel, with its Solver utility, has been used to perform deconvolution analysis of both experimental and reference glow curves resulting from the GLOw Curve ANalysis INtercomparison (GLOCANIN) project. The simple interface of this programme, combined with the powerful Solver utility, allows complex stimulated luminescence curves to be resolved into their components and the associated luminescence parameters to be evaluated.
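The same fit can be reproduced outside Excel; the sketch below uses the standard first-order (Kitis-type) analytical glow-peak expression and SciPy's least-squares solver on synthetic two-peak data, with illustrative peak parameters.

```python
import numpy as np
from scipy.optimize import least_squares

KB = 8.617e-5  # Boltzmann constant, eV/K

def glow_peak(T, Im, E, Tm):
    """First-order TL glow peak (standard Kitis analytical approximation);
    T in kelvin, E the activation energy in eV, Tm the peak temperature."""
    d = E * (T - Tm) / (KB * T * Tm)
    return Im * np.exp(1.0 + d
                       - (T / Tm) ** 2 * np.exp(d) * (1.0 - 2.0 * KB * T / E)
                       - 2.0 * KB * Tm / E)

def residuals(params, T, y, n_peaks):
    model = sum(glow_peak(T, *params[3 * k:3 * k + 3]) for k in range(n_peaks))
    return model - y

# Hypothetical two-peak glow curve; params are (Im, E, Tm) triplets per peak.
T = np.linspace(300.0, 550.0, 500)
y = glow_peak(T, 1.0, 1.1, 420.0) + glow_peak(T, 0.6, 1.3, 470.0)
p0 = np.array([0.8, 1.0, 410.0, 0.5, 1.2, 480.0])
fit = least_squares(residuals, p0, args=(T, y, 2))
```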
Deconvolution of noisy transient signals: a Kalman filtering application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Candy, J.V.; Zicker, J.E.
The deconvolution of transient signals from noisy measurements is a common problem occurring in various tests at Lawrence Livermore National Laboratory. The transient deconvolution problem places atypical constraints on presently available algorithms. The Schmidt-Kalman filter, a time-varying, tunable predictor, is designed using a piecewise-constant model of the transient input signal. A simulation is developed to test the algorithm for various input signal bandwidths and different signal-to-noise ratios for the input and output sequences. The algorithm performance is reasonable.
A widespread approach to modern cancer therapy is to identify a single oncogenic driver gene and target its mutant-protein product (for example, EGFR-inhibitor treatment in EGFR-mutant lung cancers). However, genetically driven resistance to targeted therapy limits patient survival. Through genomic analysis of 1,122 EGFR-mutant lung cancer cell-free DNA samples and whole-exome analysis of seven longitudinally collected tumor samples from a patient with EGFR-mutant lung cancer, we identified critical co-occurring oncogenic events present in most advanced-stage EGFR-mutant lung cancers.
NASA Astrophysics Data System (ADS)
Arslan, Musa T.; Tofighi, Mohammad; Sevimli, Rasim A.; ćetin, Ahmet E.
2015-05-01
One of the main disadvantages of using commercial broadcasts in a Passive Bistatic Radar (PBR) system is the limited range resolution. Using multiple broadcast channels to improve radar performance has been offered as a solution to this problem; however, detection performance then suffers from the side-lobes that the matched filter creates when multiple channels are used. In this article, we introduce a deconvolution algorithm to suppress the side-lobes. The two-dimensional matched filter output of a PBR is analyzed as a deconvolution problem. The deconvolution algorithm is based on making successive projections onto the hyperplanes representing the time delay of a target. The resulting iterative deconvolution algorithm is globally convergent because all constraint sets are closed and convex. Simulation results for an FM-based PBR system are presented.
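A minimal sketch of such a projections-onto-hyperplanes (Kaczmarz/POCS) iteration, with a toy random system standing in for the PBR delay constraints:

```python
import numpy as np

def pocs_deconvolution(A, b, n_sweeps=50):
    """Sketch of deconvolution by successive projections onto the
    hyperplanes {x : a_i . x = b_i}. Because every constraint set is
    closed and convex, the iteration converges."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for a_i, b_i in zip(A, b):
            x += (b_i - a_i @ x) / (a_i @ a_i) * a_i  # project onto hyperplane
    return x

# Toy example: recover a sparse vector from a consistent linear system.
A = np.random.randn(40, 20)
x_true = np.zeros(20); x_true[[3, 11]] = 1.0
x_hat = pocs_deconvolution(A, A @ x_true)
```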
Simulation Study of Effects of the Blind Deconvolution on Ultrasound Image
NASA Astrophysics Data System (ADS)
He, Xingwu; You, Junchen
2018-03-01
Ultrasonic image restoration is an essential subject in medical ultrasound imaging. However, without sufficient and precise system knowledge, traditional image restoration methods based on prior knowledge of the system often fail to improve image quality. In this paper, we use simulated ultrasound images to assess the effectiveness of the blind deconvolution method for ultrasound image restoration. Experimental results demonstrate that blind deconvolution can be applied to ultrasound image restoration and achieves satisfactory results without precise prior knowledge, compared with traditional image restoration methods. Even with an inaccurate small initial PSF, the results show that blind deconvolution improves the overall image quality of ultrasound images, yielding better SNR and image resolution; the additional time consumption of these methods is not significant on a GPU platform.
NASA Astrophysics Data System (ADS)
Yu, Zhongzhi; Liu, Shaocong; Sun, Shiyi; Kuang, Cuifang; Liu, Xu
2018-06-01
Parallel detection, which uses the additional information of a pinhole-plane image taken at every excitation scan position, can be an efficient method to enhance the resolution of a confocal laser scanning microscope. In this paper, we discuss images obtained under different conditions and with different image restoration methods applied to parallel detection, in order to quantitatively compare imaging quality. The conditions include different noise levels and different detector array settings. The image restoration methods include linear deconvolution and pixel reassignment with Richardson-Lucy deconvolution and with maximum-likelihood estimation deconvolution. The results show that linear deconvolution offers high efficiency and the best performance under all conditions, and is therefore expected to be of use for routine biomedical research.
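For reference, a minimal sketch of the linear deconvolution step, implemented as a regularized (Wiener-type) Fourier-domain division; the constant noise weight is an illustrative choice, and the PSF is assumed centered and of the same shape as the image.

```python
import numpy as np

def linear_deconvolution(img, psf, w=0.05):
    """Sketch of linear deconvolution: regularized Fourier-domain division
    by the OTF (a Wiener-type filter with constant noise weight w).
    psf must be a centered kernel of the same shape as img."""
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    flt = np.conj(otf) / (np.abs(otf) ** 2 + w)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * flt))
```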
Application of deconvolution interferometry with both Hi-net and KiK-net data
NASA Astrophysics Data System (ADS)
Nakata, N.
2013-12-01
Application of deconvolution interferometry to wavefields observed by KiK-net, a strong-motion recording network in Japan, is useful for estimating wave velocities and S-wave splitting in the near surface. Using this technique, for example, Nakata and Snieder (2011, 2012) found changes in velocities caused by the Tohoku-Oki earthquake in Japan. At the location of the borehole accelerometer of each KiK-net station, a velocity sensor is also installed as part of a high-sensitivity seismograph network (Hi-net). I present a technique that uses both Hi-net and KiK-net records for computing deconvolution interferometry. The deconvolved waveform obtained from the combination of Hi-net and KiK-net data is similar to the waveform computed from KiK-net data alone, which indicates that one can use Hi-net wavefields for deconvolution interferometry. Because Hi-net records have a high signal-to-noise ratio (S/N) and high dynamic resolution, the S/N and the quality of the amplitude and phase of deconvolved waveforms can be improved with Hi-net data. These advantages are especially important for short-time moving-window seismic interferometry and for deconvolution interferometry using later coda waves.
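A sketch of the core deconvolution step, written as a water-level regularized spectral division of the surface record by the borehole record; the regularization level is an illustrative choice.

```python
import numpy as np

def deconvolution_interferometry(u_surface, u_borehole, eps=0.01):
    """Sketch: deconvolve the surface record by the borehole record using
    a water-level (regularized) spectral division."""
    n = len(u_surface)
    U1 = np.fft.rfft(u_surface)
    U2 = np.fft.rfft(u_borehole)
    power = np.abs(U2) ** 2
    D = U1 * np.conj(U2) / (power + eps * power.max())
    return np.fft.irfft(D, n=n)
```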
NASA Astrophysics Data System (ADS)
Oba, T.; Riethmüller, T. L.; Solanki, S. K.; Iida, Y.; Quintero Noda, C.; Shimizu, T.
2017-11-01
Solar granules are bright patterns surrounded by dark channels, called intergranular lanes, in the solar photosphere and are a manifestation of overshooting convection. Observational studies generally find stronger upflows in granules and weaker downflows in intergranular lanes. This trend is, however, inconsistent with the results of numerical simulations in which downflows are stronger than upflows through the joint action of gravitational acceleration/deceleration and pressure gradients. One cause of this discrepancy is the image degradation caused by optical distortion and light diffraction and scattering that takes place in an imaging instrument. We apply a deconvolution technique to Hinode/SP data in an attempt to recover the original solar scene. Our results show a significant enhancement in both the convective upflows and downflows but particularly for the latter. After deconvolution, the up- and downflows reach maximum amplitudes of −3.0 km s⁻¹ and +3.0 km s⁻¹ at an average geometrical height of roughly 50 km, respectively. We found that the velocity distributions after deconvolution match those derived from numerical simulations. After deconvolution, the net LOS velocity averaged over the whole field of view lies close to zero as expected in a rough sense from mass balance.
Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.
2004-01-01
We successfully applied deterministic deconvolution to real ground-penetrating radar (GPR) data by using the source wavelet that was generated in and transmitted through air as the operator. The GPR data were collected with 400-MHz antennas on a bench adjacent to a cleanly exposed quarry face. The quarry site is characterized by horizontally bedded carbonate strata with shale partings. In order to provide ground truth for this deconvolution approach, 23 conductive rods were drilled into the quarry face at key locations. The steel rods provided critical information for: (1) correlation between reflections on GPR data and geologic features exposed in the quarry face, (2) GPR resolution limits, (3) accuracy of velocities calculated from common-midpoint data and (4) identifying any multiples. Comparing the results of deconvolved data with non-deconvolved data demonstrates the effectiveness of deterministic deconvolution in low dielectric-loss media for increased accuracy of velocity models (improved by at least 10-15% in our study after deterministic deconvolution), increased vertical and horizontal resolution of specific geologic features and more accurate representation of geologic features, as confirmed from detailed study of the adjacent quarry wall. © 2004 Elsevier B.V. All rights reserved.
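Deterministic deconvolution with a measured operator can be sketched as a least-squares (Wiener) shaping filter that compresses the air-wave wavelet toward a delayed spike; the tap count, delay and prewhitening level below are illustrative, not the parameters used in the study.

```python
import numpy as np
from scipy.linalg import toeplitz

def shaping_filter(wavelet, n_taps=64, delay=16, prewhiten=0.01):
    """Sketch of deterministic deconvolution: a least-squares (Wiener)
    shaping filter that maps the measured source wavelet toward a spike
    delayed by `delay` samples."""
    w = np.asarray(wavelet, dtype=float)
    n_taps = min(n_taps, len(w))
    # autocorrelation of the wavelet (lags 0 .. n_taps-1)
    r = np.correlate(w, w, mode='full')[len(w) - 1:][:n_taps]
    r[0] *= 1.0 + prewhiten               # stabilize the normal equations
    # cross-correlation of the desired delayed spike with the wavelet
    g = np.zeros(n_taps)
    for i in range(n_taps):
        if 0 <= delay - i < len(w):
            g[i] = w[delay - i]
    return np.linalg.solve(toeplitz(r), g)

# Applying the filter to a trace: np.convolve(trace, f)[:len(trace)]
```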
Pelosi, Giuseppe; Bianchi, Fabrizio; Dama, Elisa; Simbolo, Michele; Mafficini, Andrea; Sonzogni, Angelica; Pilotto, Sara; Harari, Sergio; Papotti, Mauro; Volante, Marco; Fontanini, Gabriella; Mastracci, Luca; Albini, Adriana; Bria, Emilio; Calabrese, Fiorella; Scarpa, Aldo
2018-04-01
Among lung neuroendocrine tumours (Lung-NETs), typical carcinoid (TC) and atypical carcinoid (AC) are considered separate entities as opposed to large cell neuroendocrine carcinoma (LCNEC) and small cell lung carcinoma (SCLC). By means of two-way clustering analysis of previously reported next-generation sequencing data on 148 surgically resected Lung-NETs, six histology-independent clusters (C1 → C6) accounting for 68% of tumours were identified. Low-grade Lung-NETs were likely to evolve into high-grade tumours following two smoke-related paths. Tumour composition of the first path (C5 → C1 → C6) was consistent with the hypothesis of an evolution of TC to LCNEC, even with a conversion of SCLC-featuring tumours to LCNEC. The second path (C4 → C2-C3) had a tumour composition supporting the evolution of AC to SCLC-featuring tumours. The relevant Ki-67 labelling index varied accordingly, with median values of 5%, 9% and 50% in the cluster sequence C5 → C1 → C6, 12% in cluster C4 and 50-60% in clusters C2-C3. This proof-of-concept study suggests an innovative view of the progression of pre-existing TC or AC to high-grade NE carcinomas in most Lung-NET instances.
Deconvolution using a neural network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehman, S.K.
1990-11-15
Viewing one-dimensional deconvolution as a matrix inversion problem, we compare a neural network backpropagation matrix inverse with LMS and pseudo-inverse methods. This is largely an exercise in understanding how our neural network code works. 1 ref.
Deconvolution of gas chromatographic data
NASA Technical Reports Server (NTRS)
Howard, S.; Rayborn, G. H.
1980-01-01
The use of deconvolution methods on gas chromatographic data to obtain an accurate determination of the relative amounts of each material present by mathematically separating the merged peaks is discussed. Data were obtained on a gas chromatograph with a flame ionization detector. Chromatograms of five xylenes with differing degrees of separation were generated by varying the column temperature at selected rates. The merged peaks were then successfully separated by deconvolution. The concept of function continuation in the frequency domain was introduced in striving to reach the theoretical limit of accuracy, but proved to be only partially successful.
Detailed interpretation of aeromagnetic data from the Patagonia Mountains area, southeastern Arizona
Bultman, Mark W.
2015-01-01
Euler deconvolution depth estimates derived from aeromagnetic data with a structural index of 0 show that mapped faults on the northern margin of the Patagonia Mountains generally agree with the depth estimates in the new geologic model. The deconvolution depth estimates also show that the concealed Patagonia Fault southwest of the Patagonia Mountains is more complex than recent geologic mapping represents. Additionally, Euler deconvolution depth estimates with a structural index of 2 locate many potential intrusive bodies that might be associated with known and unknown mineralization.
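For reference, Euler deconvolution solves the homogeneity equation in a sliding window over the gridded field, with the structural index N encoding source geometry (N = 0 for contacts and faults, larger N for more compact bodies):

```latex
(x - x_0)\frac{\partial T}{\partial x} + (y - y_0)\frac{\partial T}{\partial y} + (z - z_0)\frac{\partial T}{\partial z} = N\,(B - T)
```

where T is the measured total field, B the regional background and (x_0, y_0, z_0) the estimated source position.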
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Ruixing; Yang, LV; Xu, Kele
Purpose: Deconvolution is a widely used tool in image reconstruction when the linear imaging system has been blurred by an imperfect system transfer function. However, due to the Gaussian-like distribution of the point spread function (PSF), components with coherent high frequency in the image are hard to restore in most previous scanning imaging systems, even when a relatively accurate PSF is available. We propose a novel method for the deconvolution of images obtained using a shape-modulated PSF. Methods: We use two different types of PSF, Gaussian shape and donut shape, to convolve the original image in order to simulate the scanning imaging process. By deconvolving the two images with the corresponding given priors, the quality of the deblurred images is compared. We then find the critical size of the donut shape that yields deconvolution results similar to those of the Gaussian shape. Calculation of the tightened focusing process using a radially polarized beam shows that such a donut size is achievable under the same conditions. Results: The effects of different relative sizes of the donut and Gaussian shapes are investigated. When the full width at half maximum (FWHM) ratio of the donut and Gaussian shapes is set to about 1.83, similar resolution results are obtained with our deconvolution method. Decreasing the size of the donut favors the deconvolution method. A mask with both amplitude and phase modulation is used to create a donut-shaped PSF for comparison with the non-modulated Gaussian PSF. A donut with size smaller than our critical value is obtained. Conclusion: The donut-shaped PSF is shown to be useful and achievable in imaging and deconvolution processing, with potential practical applications in high-resolution imaging of biological samples.
NASA Astrophysics Data System (ADS)
Neuer, Marcus J.
2013-11-01
A technique for the spectral identification of strontium-90 is shown, utilising a Maximum-Likelihood deconvolution. Different deconvolution approaches are discussed and summarised. Based on the intensity distribution of the beta emission and Geant4 simulations, a combined response matrix is derived, tailored to the β- detection process in sodium iodide detectors. It includes scattering effects and attenuation by applying a base material decomposition extracted from Geant4 simulations with a CAD model for a realistic detector system. Inversion results of measurements show the agreement between deconvolution and reconstruction. A detailed investigation with additional masking sources like 40K, 226Ra and 131I shows that a contamination of strontium can be found in the presence of these nuisance sources. Identification algorithms for strontium are presented based on the derived technique. For the implementation of blind identification, an exemplary masking ratio is calculated.
A frequency-domain seismic blind deconvolution based on Gini correlations
NASA Astrophysics Data System (ADS)
Wang, Zhiguo; Zhang, Bing; Gao, Jinghuai; Huo Liu, Qing
2018-02-01
In reflection seismic processing, blind deconvolution is a challenging problem, especially when the signal-to-noise ratio (SNR) of the seismic record is low and the record is short. As a solution to this ill-posed inverse problem, we assume that the reflectivity sequence is independent and identically distributed (i.i.d.). To infer the i.i.d. relationships from seismic data, we first introduce the Gini correlations (GCs) to construct a new criterion for seismic blind deconvolution in the frequency domain. The GCs are robust, with higher tolerance of low-SNR data and less dependence on record length. Applications of the GC-based seismic blind deconvolution demonstrate its capacity to estimate the unknown seismic wavelet and the reflectivity sequence for both synthetic traces and field data, even with low SNR and short records.
NASA Astrophysics Data System (ADS)
Ruigrok, Elmer; van der Neut, Joost; Djikpesse, Hugues; Chen, Chin-Wu; Wapenaar, Kees
2010-05-01
Active-source surveys are widely used for the delineation of hydrocarbon accumulations. Most source and receiver configurations are designed to illuminate the first 5 km of the earth. For a deep understanding of the evolution of the crust, much larger depths need to be illuminated. The use of large-scale active surveys is feasible, but rather costly. As an alternative, we use passive acquisition configurations, aiming at detecting responses from distant earthquakes, in combination with seismic interferometry (SI). SI refers to the principle of generating new seismic responses by combining seismic observations at different receiver locations. We apply SI to the earthquake responses to obtain responses as if there were a source at each receiver position in the receiver array. These responses are subsequently migrated to obtain an image of the lithosphere. Conventionally, SI is applied by crosscorrelation of responses. Recently, an alternative implementation was proposed: SI by multidimensional deconvolution (MDD) (Wapenaar et al. 2008). SI by MDD compensates for both source-sampling and source-wavelet irregularities. Another advantage is that the MDD relation also holds for media with severe anelastic losses. A severe restriction for the implementation of MDD, though, was the need to estimate responses without free-surface interaction from the earthquake responses. To mitigate this restriction, Groenestijn and Verschuur (2009) proposed to introduce the incident wavefield as an additional unknown in the inversion process. As an alternative solution, van der Neut et al. (2010) showed that the required wavefield separation may be implemented after a crosscorrelation step. These last two approaches facilitate the application of MDD for lithospheric-scale imaging. In this work, we study the feasibility of implementing MDD for teleseismic wavefields. We address specific problems of teleseismic wavefields, such as long and complicated source wavelets, source-side reverberations and illumination gaps. We demonstrate the feasibility of SI by MDD on synthetic data based on field data from the Laramie and POLARIS-MIT arrays. References: van Groenestijn, G.J.A. & Verschuur, D.J., 2009. Estimation of primaries by sparse inversion from passive seismic data, Expanded Abstracts, 1597-1601, SEG. van der Neut, J.R., Ruigrok, E.N., Draganov, D.S. & Wapenaar, K., 2010. Retrieving the earth's reflection response by multi-dimensional deconvolution of ambient seismic noise, Extended Abstracts, submitted, EAGE. Wapenaar, K., van der Neut, J. & Ruigrok, E.N., 2008. Passive seismic interferometry by multidimensional deconvolution, Geophysics, 75, A51-A56.
Processing strategy for water-gun seismic data from the Gulf of Mexico
Lee, Myung W.; Hart, Patrick E.; Agena, Warren F.
2000-01-01
In order to study the regional distribution of gas hydrates and their potential relationship to large-scale sea-floor failures, more than 1,300 km of near-vertical-incidence seismic profiles were acquired using a 15-in³ water gun across the upper- and middle-continental slope in the Garden Banks and Green Canyon regions of the Gulf of Mexico. Because of the highly mixed-phase water-gun signature, caused mainly by a precursor of the source arriving about 18 ms ahead of the main pulse, a conventional processing scheme based on the minimum-phase assumption is not suitable for this data set. A conventional processing scheme suppresses the reverberations and compresses the main pulse, but the failure to suppress precursors results in complex interference between the precursors and primary reflections, thus obscuring true reflections. To clearly image the subsurface without interference from the precursors, a wavelet deconvolution based on the mixed-phase assumption using a variable norm is attempted. This non-minimum-phase wavelet deconvolution compresses the long-wave-train water-gun signature into a simple zero-phase wavelet. A second-zero-crossing predictive deconvolution followed by a wavelet deconvolution suppressed variable ghost arrivals attributed to the variable depths of receivers. The processing strategy of using wavelet deconvolution followed by a second-zero-crossing deconvolution resulted in a sharp and simple wavelet and a better definition of the polarity of reflections. Also, the application of dip-moveout correction enhanced the lateral resolution of reflections and substantially suppressed coherent noise.
Carnevale Neto, Fausto; Pilon, Alan C; Selegato, Denise M; Freire, Rafael T; Gu, Haiwei; Raftery, Daniel; Lopes, Norberto P; Castro-Gamboa, Ian
2016-01-01
Dereplication based on hyphenated techniques has been extensively applied in plant metabolomics, thereby avoiding re-isolation of known natural products. However, due to the complex nature of biological samples and their large concentration range, dereplication requires the use of chemometric tools to comprehensively extract information from the acquired data. In this work we developed a reliable GC-MS-based method for the identification of non-targeted plant metabolites by combining the Ratio Analysis of Mass Spectrometry deconvolution tool (RAMSY) with the Automated Mass Spectral Deconvolution and Identification System software (AMDIS). Plant species from Solanaceae, Chrysobalanaceae and Euphorbiaceae were selected as model systems due to their molecular diversity, ethnopharmacological potential, and economic value. The samples were analyzed by GC-MS after methoximation and silylation reactions. Dereplication was initiated with the use of a factorial design of experiments to determine the best AMDIS configuration for each sample, considering linear retention indices and mass spectral data. A heuristic factor (CDF, compound detection factor) was developed and applied to the AMDIS results in order to decrease the false-positive rates. Despite the enhancement in deconvolution and peak identification, the empirical AMDIS method was not able to fully deconvolute all GC peaks, leading to low MF values and/or missing metabolites. RAMSY was applied as a complementary deconvolution method to AMDIS for peaks exhibiting substantial overlap, resulting in recovery of low-intensity co-eluted ions. The results from this combination of optimized AMDIS with RAMSY attested to the utility of this approach as an improved dereplication method for complex biological samples such as plant extracts.
A method of PSF generation for 3D brightfield deconvolution.
Tadrous, P J
2010-02-01
This paper addresses the problem of 3D deconvolution of through-focus widefield microscope datasets (Z-stacks). One of the most difficult stages in brightfield deconvolution is finding the point spread function. A theoretically calculated point spread function (called a 'synthetic PSF' in this paper) requires foreknowledge of many system parameters and still gives only approximate results. A point spread function measured from a sub-resolution bead suffers from low signal-to-noise ratio, compounded in the brightfield setting (by contrast to fluorescence) by absorptive, refractive and dispersal effects. This paper describes a method of point spread function estimation based on measurements of a Z-stack through a thin sample. This Z-stack is deconvolved by an idealized point spread function derived from the same Z-stack to yield a point spread function of high signal-to-noise ratio that is also inherently tailored to the imaging system. The theory is validated by a practical experiment comparing the non-blind 3D deconvolution of the yeast Saccharomyces cerevisiae with the point spread function generated using the method presented in this paper (called the 'extracted PSF') to a synthetic point spread function. Restoration of both high- and low-contrast brightfield structures is achieved with fewer artefacts using the extracted point spread function obtained with this method. Furthermore, the deconvolution progresses further (more iterations are allowed before the error function reaches its nadir) with the extracted point spread function compared to the synthetic point spread function, indicating that the extracted point spread function is a better fit to the brightfield deconvolution model than the synthetic point spread function.
A digital algorithm for spectral deconvolution with noise filtering and peak picking: NOFIPP-DECON
NASA Technical Reports Server (NTRS)
Edwards, T. R.; Settle, G. L.; Knight, R. D.
1975-01-01
Noise-filtering, peak-picking deconvolution software incorporates multiple convoluted convolute integers and multiparameter optimization pattern search. The two theories are described and three aspects of the software package are discussed in detail. Noise-filtering deconvolution was applied to a number of experimental cases ranging from noisy, nondispersive X-ray analyzer data to very noisy photoelectric polarimeter data. Comparisons were made with published infrared data, and a man-machine interactive language has evolved for assisting in very difficult cases. A modified version of the program is being used for routine preprocessing of mass spectral and gas chromatographic data.
Toxoplasma Modulates Signature Pathways of Human Epilepsy, Neurodegeneration & Cancer.
Ngô, Huân M; Zhou, Ying; Lorenzi, Hernan; Wang, Kai; Kim, Taek-Kyun; Zhou, Yong; El Bissati, Kamal; Mui, Ernest; Fraczek, Laura; Rajagopala, Seesandra V; Roberts, Craig W; Henriquez, Fiona L; Montpetit, Alexandre; Blackwell, Jenefer M; Jamieson, Sarra E; Wheeler, Kelsey; Begeman, Ian J; Naranjo-Galvis, Carlos; Alliey-Rodriguez, Ney; Davis, Roderick G; Soroceanu, Liliana; Cobbs, Charles; Steindler, Dennis A; Boyer, Kenneth; Noble, A Gwendolyn; Swisher, Charles N; Heydemann, Peter T; Rabiah, Peter; Withers, Shawn; Soteropoulos, Patricia; Hood, Leroy; McLeod, Rima
2017-09-13
One third of humans are infected lifelong with the brain-dwelling, protozoan parasite, Toxoplasma gondii. Approximately fifteen million of these have congenital toxoplasmosis. Although neurobehavioral disease is associated with seropositivity, causality is unproven. To better understand what this parasite does to human brains, we performed a comprehensive systems analysis of the infected brain: We identified susceptibility genes for congenital toxoplasmosis in our cohort of infected humans and found these genes are expressed in human brain. Transcriptomic and quantitative proteomic analyses of infected human, primary, neuronal stem and monocytic cells revealed effects on neurodevelopment and plasticity in neural, immune, and endocrine networks. These findings were supported by identification of protein and miRNA biomarkers in sera of ill children reflecting brain damage and T. gondii infection. These data were deconvoluted using three systems biology approaches: "Orbital-deconvolution" elucidated upstream, regulatory pathways interconnecting human susceptibility genes, biomarkers, proteomes, and transcriptomes. "Cluster-deconvolution" revealed visual protein-protein interaction clusters involved in processes affecting brain functions and circuitry, including lipid metabolism, leukocyte migration and olfaction. Finally, "disease-deconvolution" identified associations between the parasite-brain interactions and epilepsy, movement disorders, Alzheimer's disease, and cancer. This "reconstruction-deconvolution" logic provides templates of progenitor cells' potentiating effects, and components affecting human brain parasitism and diseases.
Peptide de novo sequencing of mixture tandem mass spectra.
Gorshkov, Vladimir; Hotta, Stéphanie Yuki Kolbeck; Verano-Braga, Thiago; Kjeldsen, Frank
2016-09-01
The impact of mixture spectra deconvolution on the performance of four popular de novo sequencing programs was tested using artificially constructed mixture spectra as well as experimental proteomics data. Mixture fragmentation spectra are recognized as a limitation in proteomics because they decrease the identification performance using database search engines. De novo sequencing approaches are expected to be even more sensitive to the reduction in mass spectrum quality resulting from peptide precursor co-isolation and thus prone to false identifications. The deconvolution approach matched complementary b-, y-ions to each precursor peptide mass, which allowed the creation of virtual spectra containing sequence specific fragment ions of each co-isolated peptide. Deconvolution processing resulted in equally efficient identification rates but increased the absolute number of correctly sequenced peptides. The improvement was in the range of 20-35% additional peptide identifications for a HeLa lysate sample. Some correct sequences were identified only using unprocessed spectra; however, the number of these was lower than those where improvement was obtained by mass spectral deconvolution. Tight candidate peptide score distribution and high sensitivity to small changes in the mass spectrum introduced by the employed deconvolution method could explain some of the missing peptide identifications. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Tatsumi, Norifumi; Kobayashi, Ritsuko; Yano, Tohru; Noda, Masatsugu; Fujimura, Koji; Okada, Norihiro; Okabe, Masataka
2016-01-01
The lung is an important organ for air breathing in tetrapods and originated well before the terrestrialization of vertebrates. Therefore, to better understand lung evolution, we investigated lung development in the extant basal actinopterygian fish Senegal bichir (Polypterus senegalus). First, we histologically confirmed that lung development in this species is very similar to that of tetrapods. We also found that the mesenchymal expression patterns of three genes that are known to play important roles in early lung development in tetrapods (Fgf10, Tbx4, and Tbx5) were quite similar to those of tetrapods. Moreover, we found a Tbx4 core lung mesenchyme-specific enhancer (C-LME) in the genomes of bichir and coelacanth (Latimeria chalumnae) and experimentally confirmed that these were functional in tetrapods. These findings provide the first molecular evidence that the developmental program for lung was already established in the common ancestor of actinopterygians and sarcopterygians. PMID:27466206
Boosting CNN performance for lung texture classification using connected filtering
NASA Astrophysics Data System (ADS)
Tarando, Sebastián Roberto; Fetita, Catalin; Kim, Young-Wouk; Cho, Hyoun; Brillet, Pierre-Yves
2018-02-01
Infiltrative lung diseases comprise a large group of irreversible lung disorders requiring regular follow-up with CT imaging. Quantifying the evolution of patient status requires the development of automated classification tools for lung texture. This paper presents an original image pre-processing framework based on locally connected filtering applied in multiresolution, which improves the learning process and boosts the performance of a CNN for lung texture classification. By removing the dense vascular network from the images used by the CNN for lung classification, locally connected filters provide better discrimination between different lung patterns and help regularize the classification output. The approach was tested in a preliminary evaluation on a database of 10 patients with various lung pathologies, showing an increase of 10% in true positive rate (on average over all cases) with respect to the state-of-the-art cascade of CNNs for this task.
High affinity ligands from in vitro selection: Complex targets
Morris, Kevin N.; Jensen, Kirk B.; Julin, Carol M.; Weil, Michael; Gold, Larry
1998-01-01
Human red blood cell membranes were used as a model system to determine if the systematic evolution of ligands by exponential enrichment (SELEX) methodology, an in vitro protocol for isolating high-affinity oligonucleotides that bind specifically to virtually any single protein, could be used with a complex mixture of potential targets. Ligands to multiple targets were generated simultaneously during the selection process, and the binding affinities of these ligands for their targets are comparable to those found in similar experiments against pure targets. A secondary selection scheme, deconvolution-SELEX, facilitates rapid isolation of the ligands to targets of special interest within the mixture. SELEX provides high-affinity compounds for multiple targets in a mixture and might allow a means for dissecting complex biological systems. PMID:9501188
Inverting pump-probe spectroscopy for state tomography of excitonic systems.
Hoyer, Stephan; Whaley, K Birgitta
2013-04-28
We propose a two-step protocol for inverting ultrafast spectroscopy experiments on a molecular aggregate to extract the time-evolution of the excited state density matrix. The first step is a deconvolution of the experimental signal to determine a pump-dependent response function. The second step inverts this response function to obtain the quantum state of the system, given a model for how the system evolves following the probe interaction. We demonstrate this inversion analytically and numerically for a dimer model system, and evaluate the feasibility of scaling it to larger molecular aggregates such as photosynthetic protein-pigment complexes. Our scheme provides a direct alternative to the approach of determining all Hamiltonian parameters and then simulating excited state dynamics.
Fang, Ruogu; Chen, Tsuhan; Sanelli, Pina C
2013-05-01
Computed tomography perfusion (CTP) is an important functional imaging modality in the evaluation of cerebrovascular diseases, particularly in acute stroke and vasospasm. However, the post-processed parametric maps of blood flow tend to be noisy, especially in low-dose CTP, due to the noisy contrast enhancement profile and the oscillatory nature of the results generated by current computational methods. In this paper, we propose a robust sparse perfusion deconvolution method (SPD) to estimate cerebral blood flow in CTP performed at low radiation dose. We first build a dictionary from high-dose perfusion maps using online dictionary learning and then perform deconvolution-based hemodynamic parameter estimation on the low-dose CTP data. Our method is validated on clinical data from patients with normal and pathological CBF maps. The results show that we achieve superior performance compared with existing methods, potentially improving the differentiation between normal and ischemic tissue in the brain. Copyright © 2013 Elsevier B.V. All rights reserved.
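The dictionary-learning step can be sketched with standard tools; the snippet below, with a stand-in array and illustrative patch size and atom count, is not the authors' implementation.

```python
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.decomposition import MiniBatchDictionaryLearning

# Learn a patch dictionary from a high-dose perfusion (CBF) map; the
# dictionary then serves as the sparsity prior when deconvolving the
# low-dose CTP data. `high_dose_cbf` is stand-in random data.
rng = np.random.default_rng(0)
high_dose_cbf = rng.random((128, 128))
patches = extract_patches_2d(high_dose_cbf, (8, 8), max_patches=5000, random_state=0)
X = patches.reshape(len(patches), -1)
X -= X.mean(axis=1, keepdims=True)                 # remove per-patch DC
dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, random_state=0)
D = dico.fit(X).components_                        # learned dictionary atoms
```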
Deconvolution of azimuthal mode detection measurements
NASA Astrophysics Data System (ADS)
Sijtsma, Pieter; Brouwer, Harry
2018-05-01
Unequally spaced transducer rings make it possible to extend the range of detectable azimuthal modes. The disadvantage is that the response of the mode detection algorithm to a single mode is distributed over all detectable modes, similarly to the Point Spread Function of Conventional Beamforming with microphone arrays. With multiple modes the response patterns interfere, leading to a relatively high "noise floor" of spurious modes in the detected mode spectrum, in other words, to a low dynamic range. In this paper a deconvolution strategy is proposed for increasing this dynamic range. It starts with separating the measured sound into shaft tones and broadband noise. For broadband noise modes, a standard Non-Negative Least Squares solver appeared to be a perfect deconvolution tool. For shaft tones a Matching Pursuit approach is proposed, taking advantage of the sparsity of dominant modes. The deconvolution methods were applied to mode detection measurements in a fan rig. An increase in dynamic range of typically 10-15 dB was found.
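A sketch of the broadband-noise step, using SciPy's standard NNLS solver on a toy response matrix (the real matrix encodes the ring geometry and the detectable mode set):

```python
import numpy as np
from scipy.optimize import nnls

# A: response matrix whose columns are the spread patterns of the single
# azimuthal modes for the unequally spaced ring (toy numbers here);
# b: detected mode powers. NNLS suppresses the spurious-mode floor.
rng = np.random.default_rng(1)
A = np.abs(rng.normal(size=(60, 41)))   # stand-in response matrix
x_true = np.zeros(41); x_true[[5, 20]] = [1.0, 0.3]
b = A @ x_true
x_hat, resid = nnls(A, b)               # deconvolved mode spectrum
```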
NASA Technical Reports Server (NTRS)
Ioup, George E.; Ioup, Juliette W.
1991-01-01
The final report is presented for work on determining design and operation parameters for upper atmospheric research instrumentation to yield optimum resolution with deconvolution. Papers and theses prepared during the reporting period are included. A methodology was developed to determine design and operation parameters for error minimization when deconvolution is included in the data analysis. An error surface is plotted versus the signal-to-noise ratio (SNR) and all parameters of interest. Instrumental characteristics determine a curve in this space. The SNR and parameter values giving the projection from the curve to the surface that corresponds to the smallest error are the optimum values. These values are constrained by the curve and so will not necessarily correspond to an absolute minimum in the error surface.
NASA Technical Reports Server (NTRS)
Becker, Joseph F.; Valentin, Jose
1996-01-01
The maximum entropy technique was successfully applied to the deconvolution of overlapped chromatographic peaks. An algorithm was written in which the chromatogram was represented as a vector of sample concentrations multiplied by a peak shape matrix. Simulation results demonstrated that there is a trade-off between detector noise and peak resolution, in the sense that an increase in the noise level reduced the peak separation that could be recovered by the maximum entropy method. Real data originating from a sample storage column were also deconvolved using maximum entropy. Deconvolution is useful in this type of system because the conservation of time-dependent profiles depends on the band-spreading processes in the chromatographic column, which might smooth out the finer details in the concentration profile. The method was also applied to the deconvolution of previously interpreted Pioneer Venus chromatograms. It was found in this case that the correct choice of peak shape function was critical to the sensitivity of maximum entropy in the reconstruction of these chromatograms.
Joint deconvolution and classification with applications to passive acoustic underwater multipath.
Anderson, Hyrum S; Gupta, Maya R
2008-11-01
This paper addresses the problem of classifying signals that have been corrupted by noise and unknown linear time-invariant (LTI) filtering such as multipath, given labeled uncorrupted training signals. A maximum a posteriori approach to the deconvolution and classification is considered, which produces estimates of the desired signal, the unknown channel, and the class label. For cases in which only a class label is needed, the classification accuracy can be improved by not committing to an estimate of the channel or signal. A variant of the quadratic discriminant analysis (QDA) classifier is proposed that probabilistically accounts for the unknown LTI filtering, and which avoids deconvolution. The proposed QDA classifier can work either directly on the signal or on features whose transformation by LTI filtering can be analyzed; as an example a classifier for subband-power features is derived. Results on simulated data and real Bowhead whale vocalizations show that jointly considering deconvolution with classification can dramatically improve classification performance over traditional methods over a range of signal-to-noise ratios.
Zhou, Zhongxing; Gao, Feng; Zhao, Huijuan; Zhang, Lixin
2012-11-21
New x-ray phase contrast imaging techniques that do not use synchrotron radiation confront a common problem: the negative effects of finite source size and limited spatial resolution. These negative effects swamp the fine phase contrast fringes and make them almost undetectable. In order to alleviate this problem, deconvolution procedures should be applied to the blurred x-ray phase contrast images. In this study, three different deconvolution techniques, including Wiener filtering, Tikhonov regularization and Fourier-wavelet regularized deconvolution (ForWaRD), were applied to simulated and experimental free-space-propagation x-ray phase contrast images of simple geometric phantoms. These algorithms were evaluated in terms of phase contrast improvement and signal-to-noise ratio. The results demonstrate that the ForWaRD algorithm is the most appropriate for phase contrast image restoration among the above-mentioned methods; it can effectively restore the lost information of phase contrast fringes while reducing the noise amplified during Fourier regularization.
A new scoring function for top-down spectral deconvolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kou, Qiang; Wu, Si; Liu, Xiaowen
2014-12-18
Background: Top-down mass spectrometry plays an important role in intact protein identification and characterization. Top-down mass spectra are more complex than bottom-up mass spectra because they often contain many isotopomer envelopes from highly charged ions, which may overlap with one another. As a result, spectral deconvolution, which converts a complex top-down mass spectrum into a monoisotopic mass list, is a key step in top-down spectral interpretation. Results: In this paper, we propose a new scoring function, L-score, for evaluating isotopomer envelopes. By combining L-score with MS-Deconv, a new software tool, MS-Deconv+, was developed for top-down spectral deconvolution. Experimental results showed that MS-Deconv+ outperformed existing software tools in top-down spectral deconvolution. Conclusions: L-score shows high discriminative ability in the identification of isotopomer envelopes. Using L-score, MS-Deconv+ reports many correct monoisotopic masses missed by other software tools, which are valuable for proteoform identification and characterization.
Bayesian Deconvolution for Angular Super-Resolution in Forward-Looking Scanning Radar
Zha, Yuebo; Huang, Yulin; Sun, Zhichao; Wang, Yue; Yang, Jianyu
2015-01-01
Scanning radar is of notable importance for ground surveillance, terrain mapping and disaster rescue. However, the angular resolution of a scanning radar image is poor compared to the achievable range resolution. This paper presents a deconvolution algorithm for angular super-resolution in scanning radar based on Bayesian theory, in which angular super-resolution is achieved by solving the corresponding deconvolution problem under the maximum a posteriori (MAP) criterion. The algorithm models the noise as two mutually independent components: a Gaussian signal-independent component and a Poisson signal-dependent component. In addition, a Laplace distribution is used to represent the prior information about the targets, under the assumption that the radar image of interest can be represented by the dominant scatterers in the scene. Experimental results demonstrate that the proposed deconvolution algorithm achieves higher precision for angular super-resolution than conventional algorithms such as the Tikhonov regularization algorithm, the Wiener filter and the Richardson–Lucy algorithm. PMID:25806871
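As a simplified sketch only: if the Poisson component is dropped and only Gaussian noise is kept, MAP estimation under a Laplace prior reduces to l1-regularized deconvolution, which plain ISTA can solve. The circular-convolution model and all parameter values below are assumptions; the paper's algorithm additionally handles the mixed Gaussian-Poisson noise model.

```python
# Gaussian-noise-only MAP with a Laplace prior == l1-regularized deconvolution,
# solved by the iterative shrinkage-thresholding algorithm (ISTA).
import numpy as np

def ista_deconvolve(y, h, lam=0.05, n_iter=200):
    n = y.size
    # circulant convolution matrix: column i is the kernel shifted by i samples
    H = np.array([np.roll(np.pad(h, (0, n - h.size)), i) for i in range(n)]).T
    L = np.linalg.norm(H, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(n)
    for _ in range(n_iter):
        z = x - (H.T @ (H @ x - y)) / L        # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# toy example: two close point targets plus one isolated target
h = np.exp(-0.5 * ((np.arange(9) - 4) / 1.5) ** 2); h /= h.sum()
truth = np.zeros(128); truth[[40, 44, 90]] = [1.0, 0.8, 0.5]
y = np.convolve(truth, h, mode="full")[:128] + np.random.normal(0, 0.01, 128)
x_hat = ista_deconvolve(y, h)
```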
Fang, Ruogu; Chen, Tsuhan; Sanelli, Pina C.
2014-01-01
Computed tomography perfusion (CTP) is an important functional imaging modality in the evaluation of cerebrovascular diseases, particularly in acute stroke and vasospasm. However, the post-processed parametric maps of blood flow tend to be noisy, especially in low-dose CTP, due to the noisy contrast enhancement profile and the oscillatory nature of the results generated by current computational methods. In this paper, we propose a robust sparse perfusion deconvolution method (SPD) to estimate cerebral blood flow in CTP performed at low radiation dose. We first build a dictionary from high-dose perfusion maps using online dictionary learning and then perform deconvolution-based hemodynamic parameter estimation on the low-dose CTP data. Our method is validated on clinical data of patients with normal and pathological CBF maps. The results show that our method outperforms existing methods and potentially improves the differentiation between normal and ischemic tissue in the brain. PMID:23542422
Neuroendocrine neoplasms of the lung: Concepts and terminology.
Wick, Mark R; Marchevsky, Alberto M
2015-11-01
Neuroendocrine neoplasms of the lung continue to undergo scrutiny, with respect to the diagnostic terminology recommended for them and details of their clinicopathologic profiles. This overview considers the nosological evolution of such lesions and presents current views on classification schemes that pertain to them.
NASA Astrophysics Data System (ADS)
Zhou, T.; Popescu, S. C.; Krause, K.
2016-12-01
Waveform Light Detection and Ranging (LiDAR) data have advantages over discrete-return LiDAR data in accurately characterizing vegetation structure. However, we lack a comprehensive understanding of waveform data processing approaches under different topography and vegetation conditions. The objective of this paper is to highlight a novel deconvolution algorithm, the Gold algorithm, for processing waveform LiDAR data with optimal deconvolution parameters. Further, we present a comparative study of waveform processing methods to provide insight into selecting an approach for a given combination of vegetation and terrain characteristics. We employed two waveform processing methods: 1) direct decomposition; 2) deconvolution and decomposition. In method two, we utilized two deconvolution algorithms - the Richardson-Lucy (RL) algorithm and the Gold algorithm. Comprehensive and quantitative comparisons were conducted in terms of the number of detected echoes, position accuracy, the bias of the end products (such as the digital terrain model (DTM) and canopy height model (CHM)) from discrete LiDAR data, along with parameter uncertainty for these end products obtained from the different methods. This study was conducted at three study sites that include diverse ecological regions and vegetation and elevation gradients. Results demonstrate that the two deconvolution algorithms are sensitive to the pre-processing steps of the input data. The deconvolution and decomposition method is more capable of detecting hidden echoes with a lower false echo detection rate, especially for the Gold algorithm. Compared to the reference data, all approaches generate satisfactory accuracy assessment results with small mean spatial differences (<1.22 m for DTMs, <0.77 m for CHMs) and root mean square errors (RMSE) (<1.26 m for DTMs, <1.93 m for CHMs). More specifically, the Gold algorithm is superior to the others with a smaller RMSE (<1.01 m), while the direct decomposition approach works better in terms of the percentage of spatial difference within 0.5 and 1 m. The parameter uncertainty analysis demonstrates that the Gold algorithm outperforms the other approaches in dense vegetation areas, with the smallest RMSE, and the RL algorithm performs better in sparse vegetation areas in terms of RMSE.
Lee, Myung W.
1999-01-01
Processing of 20 seismic profiles acquired in the Chesapeake Bay area aided in analysis of the details of an impact structure and allowed more accurate mapping of the depression caused by a bolide impact. Particular emphasis was placed on enhancement of seismic reflections from the basement. Application of wavelet deconvolution after a second zero-crossing predictive deconvolution improved the resolution of shallow reflections, and application of a match filter enhanced the basement reflections. The use of deconvolution and match filtering with a two-dimensional signal enhancement technique (F-X filtering) significantly improved the interpretability of seismic sections.
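For readers unfamiliar with the processing step named here, a minimal sketch of predictive deconvolution follows: a prediction filter is designed from the trace autocorrelation by solving a Toeplitz system, and the predictable part is subtracted. Filter length, prediction lag and pre-whitening level are illustrative, not the survey's parameters.

```python
# Predictive (prediction-error) deconvolution of a single seismic trace.
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def predictive_deconvolution(trace, n_filt=30, lag=1, prewhiten=0.01):
    # one-sided autocorrelation of the trace
    r = np.correlate(trace, trace, mode="full")[trace.size - 1:]
    r0 = r[:n_filt].copy()
    r0[0] *= (1.0 + prewhiten)                 # pre-whitening stabilizes the system
    g = solve_toeplitz(r0, r[lag:lag + n_filt])  # Wiener prediction filter
    pred = lfilter(np.r_[np.zeros(lag), g], 1.0, trace)  # delayed prediction
    return trace - pred                        # prediction error = deconvolved trace
```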
Langenbucher, Frieder
2003-11-01
Convolution and deconvolution are the classical in-vitro-in-vivo correlation tools used to describe the relationship between input and weighting/response in a linear system, where the input represents drug release in vitro and the weighting/response any body response in vivo. While functional treatment, e.g. in terms of a polyexponential or Weibull distribution, is more appropriate for general survey or prediction, numerical algorithms are useful for treating actual experimental data. Deconvolution is not considered an algorithm on its own, but the inversion of a corresponding convolution. MS Excel is shown to be a useful tool for all these applications.
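The spreadsheet computations described here have a direct numerical analogue: discrete convolution is a lower-triangular linear map, so noise-free deconvolution is the inversion of that map. A hedged Python sketch, with an illustrative release profile and a mono-exponential weighting function standing in for real data:

```python
# Convolution as a lower-triangular matrix; deconvolution as its inversion.
import numpy as np
from scipy.linalg import solve_triangular

dt = 0.5                                       # time step, hours
t = np.arange(0, 24, dt)
release_rate = np.where(t < 6, 1.0 / 6.0, 0.0) # in-vitro input (fraction/h)
weighting = np.exp(-0.3 * t)                   # unit-impulse body response

# build the lower-triangular convolution operator C[i] = dt * sum_j x[j] w[i-j]
n = t.size
A = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1):
        A[i, j] = dt * weighting[i - j]

response = A @ release_rate                    # convolution: predicted in-vivo profile
recovered = solve_triangular(A, response, lower=True)  # deconvolution: recover input
```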
High quality image-pair-based deblurring method using edge mask and improved residual deconvolution
NASA Astrophysics Data System (ADS)
Cui, Guangmang; Zhao, Jufeng; Gao, Xiumin; Feng, Huajun; Chen, Yueting
2017-04-01
Image deconvolution is a challenging task in the field of image processing. Using image pairs can provide a better restored image than deblurring from a single blurred image. In this paper, a high quality image-pair-based deblurring method is presented using an improved RL algorithm and the gain-controlled residual deconvolution technique. The input image pair includes a non-blurred noisy image and a blurred image captured of the same scene. With the estimated blur kernel, an improved RL deblurring method based on an edge mask is introduced to obtain the preliminary deblurring result with effective ringing suppression and detail preservation. The preliminary deblurring result then serves as the basic latent image, and gain-controlled residual deconvolution is utilized to recover the residual image. A saliency weight map is computed as the gain map to further control the ringing effects around the edge areas in the residual deconvolution process. The final deblurring result is obtained by adding the preliminary deblurring result to the recovered residual image. An optical experimental vibration platform was set up to verify the applicability and performance of the proposed algorithm. Experimental results demonstrate that the proposed deblurring framework achieves superior performance in both subjective and objective assessments and has wide application in many image deblurring fields.
Windprofiler optimization using digital deconvolution procedures
NASA Astrophysics Data System (ADS)
Hocking, W. K.; Hocking, A.; Hocking, D. G.; Garbanzo-Salas, M.
2014-10-01
Digital improvements to data acquisition procedures used for windprofiler radars have the potential to improve height coverage at optimum resolution and to permit improved height resolution. A few newer systems already use this capability. Real-time deconvolution procedures offer even further optimization, and this has not been effectively employed in recent years. In this paper we demonstrate the advantages of combining these features, with particular emphasis on the advantages of real-time deconvolution. Using several multi-core CPUs, we have been able to achieve processing speeds of up to 40 GHz from a standard commercial motherboard, allowing data to be digitized and processed without the need for any specialized hardware except a transmitter (and associated drivers), a receiver and a digitizer. No digital signal processor chips are needed, allowing great flexibility with analysis algorithms. By using deconvolution procedures, we have been able not only to optimize height resolution but also to make advances in dealing with spectral contaminants like ground echoes and other near-zero-Hz spectral contamination. Our results also demonstrate the ability to produce fine-resolution measurements, revealing small-scale structures within the backscattered echoes that were previously not possible to see. Resolutions of 30 m are possible for VHF radars. Furthermore, our deconvolution technique allows the removal of range-aliasing effects in real time, a major bonus in many instances. Results are shown using new radars in Canada and Costa Rica.
Lung evolution as a cipher for physiology
Torday, J. S.; Rehan, V. K.
2009-01-01
In the postgenomic era, we need an algorithm to readily translate genes into physiologic principles. The failure to advance biomedicine is due to the false hope raised in the wake of the Human Genome Project (HGP) by the promise of systems biology as a ready means of reconstructing physiology from genes. Like the atom in physics, the cell, not the gene, is the smallest completely functional unit of biology. Trying to reassemble gene regulatory networks without accounting for this fundamental feature of evolution will result in a genomic atlas, but not an algorithm for functional genomics. For example, the evolution of the lung can be “deconvoluted” by applying cell-cell communication mechanisms to all aspects of lung biology: development, homeostasis, and regeneration/repair. Gene regulatory networks common to these processes predict ontogeny, phylogeny, and the disease-related consequences of failed signaling. This algorithm elucidates characteristics of vertebrate physiology as a cascade of emergent and contingent cellular adaptational responses. By reducing complex physiological traits to gene regulatory networks and arranging them hierarchically in a self-organizing map, like the periodic table of elements in physics, the first principles of physiology will emerge. PMID:19366785
Kidd, Timothy J.; Geake, James B.; Bell, Scott C.; Currie, Bart J.
2017-01-01
Cystic fibrosis (CF) is a genetic disorder characterized by progressive lung function decline. CF patients are at an increased risk of respiratory infections, including those by the environmental bacterium Burkholderia pseudomallei, the causative agent of melioidosis. Here, we compared the genomes of B. pseudomallei isolates collected between ~4 and 55 months apart from seven chronically infected CF patients. Overall, the B. pseudomallei strains showed evolutionary patterns similar to those of other chronic infections, including emergence of antibiotic resistance, genome reduction, and deleterious mutations in genes involved in virulence, metabolism, environmental survival, and cell wall components. We documented the first reported B. pseudomallei hypermutators, which were likely caused by defective MutS. Further, our study identified both known and novel molecular mechanisms conferring resistance to three of the five clinically important antibiotics for melioidosis treatment. Our report highlights the exquisite adaptability of microorganisms to long-term persistence in their environment and the ongoing challenges of antibiotic treatment in eradicating pathogens in the CF lung. Convergent evolution with other CF pathogens hints at a degree of predictability in bacterial evolution in the CF lung and potential targeted eradication of chronic CF infections in the future. PMID:28400528
NASA Astrophysics Data System (ADS)
Tarando, Sebastian Roberto; Fetita, Catalin; Brillet, Pierre-Yves
2017-03-01
Infiltrative lung diseases are a class of irreversible, non-neoplastic lung pathologies requiring regular follow-up with CT imaging. Quantifying the evolution of the patient's status requires the development of automated classification tools for lung texture. Traditionally, such classification relies on a two-dimensional analysis of axial CT images. This paper proposes a cascade built on an existing CNN-based CAD system, specifically tuned for this task. The advantage of using a deep learning approach is a better regularization of the classification output. In a preliminary evaluation, the combined approach was tested on a 13-patient database of various lung pathologies, showing an increase of 10% in True Positive Rate (TPR) with respect to the best-suited state-of-the-art CNN for this task.
Strehl-constrained iterative blind deconvolution for post-adaptive-optics data
NASA Astrophysics Data System (ADS)
Desiderà, G.; Carbillet, M.
2009-12-01
Aims: We aim to improve blind deconvolution applied to post-adaptive-optics (AO) data by taking into account one of their basic characteristics, resulting from the necessarily partial AO correction: the Strehl ratio. Methods: We apply a Strehl constraint in the framework of iterative blind deconvolution (IBD) of post-AO near-infrared images simulated in a detailed end-to-end manner and considering a case that is as realistic as possible. Results: The results obtained clearly show the advantage of using such a constraint, from the point of view of both performance and stability, especially for poorly AO-corrected data. The proposed algorithm has been implemented in the freely-distributed and CAOS-based Software Package AIRY.
Calibration of a polarimetric imaging SAR
NASA Technical Reports Server (NTRS)
Sarabandi, K.; Pierce, L. E.; Ulaby, F. T.
1991-01-01
Calibration of polarimetric imaging Synthetic Aperture Radars (SARs) using point calibration targets is discussed. The four-port network calibration technique is used to describe the radar error model. The polarimetric ambiguity function of the SAR is then found using a single point target, namely a trihedral corner reflector. Based on this, an estimate for the backscattering coefficient of the terrain is found by a deconvolution process. A radar image taken by the JPL Airborne SAR (AIRSAR) is used for verification of the deconvolution calibration method. The calibrated responses of point targets in the image are compared both with theory and with the POLCAL technique. Also, the responses of a distributed target are compared using the deconvolution and POLCAL techniques.
Histogram deconvolution - An aid to automated classifiers
NASA Technical Reports Server (NTRS)
Lorre, J. J.
1983-01-01
It is shown that N-dimensional histograms are convolved by the addition of noise in the picture domain. Three methods are described which provide the ability to deconvolve such noise-affected histograms. The purpose of the deconvolution is to provide automated classifiers with a higher quality N-dimensional histogram from which to obtain classification statistics.
NASA Technical Reports Server (NTRS)
Ioup, G. E.
1985-01-01
Appendix 5 of the Study of One- and Two-Dimensional Filtering and Deconvolution Algorithms for a Streaming Array Computer includes a resume of the professional background of the Principal Investigator on the project, lists of his publications and research papers, graduate theses supervised, and grants received.
Rosen, I G; Luczak, Susan E; Weiss, Jordan
2014-03-15
We develop a blind deconvolution scheme for input-output systems described by distributed parameter systems with boundary input and output. An abstract functional analytic theory based on results for the linear quadratic control of infinite dimensional systems with unbounded input and output operators is presented. The blind deconvolution problem is then reformulated as a series of constrained linear and nonlinear optimization problems involving infinite dimensional dynamical systems. A finite dimensional approximation and convergence theory is developed. The theory is applied to the problem of estimating blood or breath alcohol concentration (respectively, BAC or BrAC) from biosensor-measured transdermal alcohol concentration (TAC) in the field. A distributed parameter model with boundary input and output is proposed for the transdermal transport of ethanol from the blood through the skin to the sensor. The problem of estimating BAC or BrAC from the TAC data is formulated as a blind deconvolution problem. A scheme to identify distinct drinking episodes in TAC data based on a Hodrick-Prescott filter is discussed. Numerical results involving actual patient data are presented.
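The episode-identification idea can be sketched independently of the deconvolution machinery: smooth the TAC series with a Hodrick-Prescott filter and flag intervals where the trend exceeds a threshold. The smoothing parameter and threshold below are placeholders, not the paper's values.

```python
# Flag putative drinking episodes from a TAC series via HP-filter smoothing.
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

def find_episodes(tac, lamb=1600.0, threshold=0.01):
    cycle, trend = hpfilter(np.asarray(tac, dtype=float), lamb=lamb)
    active = (trend > threshold).astype(int)    # samples inside a putative episode
    starts = np.flatnonzero(np.diff(np.r_[0, active]) == 1)
    ends = np.flatnonzero(np.diff(np.r_[active, 0]) == -1)
    return list(zip(starts, ends))              # (start, end) index pairs
```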
Matthews, Grant
2004-12-01
The Geostationary Earth Radiation Budget (GERB) experiment is a broadband satellite radiometer instrument program intended to resolve remaining uncertainties surrounding the effect of cloud radiative feedback on future climate change. By use of a custom-designed diffraction-aberration telescope model, the GERB detector spatial response is recovered by deconvolution applied to the ground calibration point-spread function (PSF) measurements. An ensemble of randomly generated white-noise test scenes, combined with the measured telescope transfer function, significantly reduces the effect of noise on the deconvolution. With the recovered detector response as a base, the same model is applied to construct the predicted in-flight field-of-view response of each GERB pixel to both short- and long-wave Earth radiance. The results of this study can now be used to simulate and investigate the instantaneous sampling errors incurred by GERB. The developed deconvolution method may also be highly applicable to enhancing images or PSF data for any telescope system for which a wave-front error measurement is available.
Point spread functions and deconvolution of ultrasonic images.
Dalitz, Christoph; Pohle-Fröhlich, Regina; Michalk, Thorsten
2015-03-01
This article investigates the restoration of ultrasonic pulse-echo C-scan images by means of deconvolution with a point spread function (PSF). The deconvolution concept from linear system theory (LST) is linked to the wave equation formulation of the imaging process, and an analytic formula for the PSF of planar transducers is derived. For this analytic expression, different numerical and analytic approximation schemes for evaluating the PSF are presented. By comparing simulated images with measured C-scan images, we demonstrate that the assumptions of LST in combination with our formula for the PSF are a good model for the pulse-echo imaging process. To reconstruct the object from a C-scan image, we compare different deconvolution schemes: the Wiener filter, the ForWaRD algorithm, and the Richardson-Lucy algorithm. The best results are obtained with the Richardson-Lucy algorithm with total variation regularization. For distances greater than or equal to twice the near-field distance, our experiments show that the numerically computed PSF can be replaced with a simple closed analytic term based on a far-field approximation.
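The plain (unregularized) Richardson-Lucy step is available off the shelf, as sketched below; the paper's best-performing variant adds total-variation regularization, which this sketch omits, and the toy Gaussian PSF stands in for the article's analytic planar-transducer PSF.

```python
# Off-the-shelf Richardson-Lucy restoration of a 2-D image with a toy PSF.
import numpy as np
from skimage.restoration import richardson_lucy

x, y = np.mgrid[-7:8, -7:8]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
psf /= psf.sum()                                # normalized blur kernel

rng = np.random.default_rng(1)
cscan = rng.random((64, 64))                    # stand-in for a measured C-scan
restored = richardson_lucy(cscan, psf, 30)      # 30 RL iterations (positional arg
                                                # works across skimage versions)
```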
NASA Technical Reports Server (NTRS)
Hucek, Richard R.; Ardanuy, Philip E.; Kyle, H. Lee
1987-01-01
A deconvolution method for extracting the top-of-the-atmosphere (TOA) mean, daily albedo field from a set of wide-FOV (WFOV) shortwave radiometer measurements is proposed. The method is based on constructing a synthetic measurement for each satellite observation. The albedo field is represented as a truncated series of spherical harmonic functions, and these linear equations are presented. Simulation studies were conducted to determine the sensitivity of the method. It is observed that a maximum of about 289 pieces of data can be extracted from a set of Nimbus 7 WFOV satellite measurements. The albedos derived using the deconvolution method are compared with albedos derived using the WFOV archival method; the deconvolution-derived albedo field achieved a 20 percent reduction in the global rms regional reflected flux density errors. The deconvolution method is applied to estimate the mean, daily average TOA albedo field for January 1983. A strong and extensive albedo maximum (0.42), which corresponds to the El Nino/Southern Oscillation event of 1982-1983, is detected over the south central Pacific Ocean.
Deconvolution of astronomical images using SOR with adaptive relaxation.
Vorontsov, S V; Strakhov, V N; Jefferies, S M; Borelli, K J
2011-07-04
We address the potential performance of the successive overrelaxation technique (SOR) in image deconvolution, focusing our attention on the restoration of astronomical images distorted by atmospheric turbulence. SOR is the classical Gauss-Seidel iteration, supplemented with relaxation. As indicated by earlier work, the convergence properties of SOR, and its ultimate performance in the deconvolution of blurred and noisy images, can be made competitive with other iterative techniques, including conjugate gradients, by a proper choice of the relaxation parameter. The question of how to choose the relaxation parameter, however, remained open, and in practical work one had to rely on experimentation. In this paper, using constructive (rather than exact) arguments, we suggest a simple strategy for choosing the relaxation parameter and for updating its value in consecutive iterations to optimize the performance of the SOR algorithm (and its positivity-constrained version, +SOR) at finite iteration counts. We suggest an extension of the algorithm to the notoriously difficult problem of "blind" deconvolution, where both the true object and the point-spread function have to be recovered from the blurred image. We report the results of numerical inversions with artificial and real data, where the algorithm is compared with techniques based on conjugate gradients. In all of our experiments +SOR provides the highest quality results. In addition, +SOR is found to be able to detect moderately small changes in the true object between separate data frames: an important quality for multi-frame blind deconvolution, where stationarity of the object is a necessity.
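A bare-bones version of the underlying iteration, for orientation only: SOR applied to the normal equations A x = b with A = HᵀH and b = Hᵀy, plus the positivity clamp of +SOR. The relaxation parameter omega is fixed here, whereas the paper's contribution is precisely an adaptive rule for choosing and updating it.

```python
# Fixed-omega (+)SOR on the deconvolution normal equations.
import numpy as np

def sor_deconvolve(H, y, omega=1.5, n_iter=200):
    A = H.T @ H + 1e-6 * np.eye(H.shape[1])    # tiny ridge keeps A positive definite
    b = H.T @ y
    x = np.zeros(A.shape[0])
    for _ in range(n_iter):
        for i in range(x.size):
            sigma = A[i] @ x - A[i, i] * x[i]  # Gauss-Seidel sweep (uses updated x)
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
            x[i] = max(x[i], 0.0)              # positivity clamp of +SOR
    return x
```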
Gaussian and linear deconvolution of LC-MS/MS chromatograms of the eight aminobutyric acid isomers
Vemula, Harika; Kitase, Yukiko; Ayon, Navid J.; Bonewald, Lynda; Gutheil, William G.
2016-01-01
Isomeric molecules present a challenge for analytical resolution and quantification, even with MS-based detection. The eight aminobutyric acid (ABA) isomers are of interest for their various biological activities, particularly γ-aminobutyric acid (GABA) and the d- and l-isomers of β-aminoisobutyric acid (β-AIBA; BAIBA). This study aimed to investigate LC-MS/MS-based resolution of these ABA isomers as their Marfey's (Mar) reagent derivatives. HPLC was able to completely separate three Mar-ABA isomers, l-β-ABA (l-BABA) and l- and d-α-ABA (AABA), leaving three isomers (GABA and d/l-BAIBA) in one chromatographic cluster and two isomers (α-AIBA (AAIBA) and d-BABA) in a second cluster. Partially separated cluster components were deconvoluted using Gaussian peak fitting, except for GABA and d-BAIBA. MS/MS detection of Marfey's-derivatized ABA isomers provided six MS/MS fragments with substantially different intensity profiles between structural isomers. This allowed linear deconvolution of ABA isomer peaks. Combining HPLC separation with linear and Gaussian deconvolution allowed resolution of all eight ABA isomers. Application to human serum found a substantial level of l-AABA (13 μM), an intermediate level of l-BAIBA (0.8 μM), and low but detectable levels (<0.2 μM) of GABA, l-BABA, AAIBA, d-BAIBA, and d-AABA. This approach should be useful for LC-MS/MS deconvolution of other challenging groups of isomeric molecules. PMID:27771391
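The Gaussian-deconvolution step can be sketched as a standard two-Gaussian fit to a partially separated cluster, with component areas taken from the fitted parameters; peak positions, widths and noise are illustrative.

```python
# Fit a sum of two Gaussians to a partially separated chromatographic cluster.
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(t, a1, mu1, s1, a2, mu2, s2):
    return (a1 * np.exp(-0.5 * ((t - mu1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((t - mu2) / s2) ** 2))

t = np.linspace(0, 10, 500)
signal = two_gaussians(t, 1.0, 4.0, 0.4, 0.6, 5.0, 0.4)   # synthetic cluster
signal += np.random.normal(0, 0.01, t.size)

p0 = [1, 4, 0.5, 0.5, 5, 0.5]                   # rough initial guesses
popt, _ = curve_fit(two_gaussians, t, signal, p0=p0)
areas = popt[0] * popt[2], popt[3] * popt[5]    # component areas ∝ amplitude * sigma
```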
Klughammer, Christof; Schreiber, Ulrich
2016-05-01
A newly developed compact measuring system for assessment of transmittance changes in the near-infrared spectral region is described; it allows deconvolution of redox changes due to ferredoxin (Fd), P700, and plastocyanin (PC) in intact leaves. In addition, it can also simultaneously measure chlorophyll fluorescence. The major opto-electronic components as well as the principles of data acquisition and signal deconvolution are outlined. Four original pulse-modulated dual-wavelength difference signals are measured (785-840 nm, 810-870 nm, 870-970 nm, and 795-970 nm). Deconvolution is based on specific spectral information presented graphically in the form of 'Differential Model Plots' (DMP) of Fd, P700, and PC that are derived empirically from selective changes of these three components under appropriately chosen physiological conditions. Whereas information on maximal changes of Fd is obtained upon illumination after dark-acclimation, maximal changes of P700 and PC can be readily induced by saturating light pulses in the presence of far-red light. Using the information of DMP and maximal changes, the new measuring system enables on-line deconvolution of Fd, P700, and PC. The performance of the new device is demonstrated by some examples of practical applications, including fast measurements of flash relaxation kinetics and of the Fd, P700, and PC changes paralleling the polyphasic fluorescence rise upon application of a 300-ms pulse of saturating light.
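The deconvolution step here amounts, at each time point, to a small linear unmixing problem: four dual-wavelength difference signals expressed as mixtures of three redox components. A hedged sketch, in which the 4×3 model matrix is a made-up placeholder for the empirically derived 'Differential Model Plots':

```python
# Least-squares unmixing of four difference signals into Fd, P700 and PC.
import numpy as np

M = np.array([[0.9, 0.3, 0.1],    # 785-840 nm   (all coefficients illustrative,
              [0.2, 1.0, 0.4],    # 810-870 nm    standing in for the DMP-derived
              [0.1, 0.5, 1.0],    # 870-970 nm    spectral responses)
              [0.4, 0.8, 0.9]])   # 795-970 nm

signals = np.array([0.35, 0.90, 0.75, 0.85])    # one time point, four channels
redox, *_ = np.linalg.lstsq(M, signals, rcond=None)
fd, p700, pc = redox                            # deconvolved redox changes
```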
SU-E-I-08: Investigation of Deconvolution Methods for Blocker-Based CBCT Scatter Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, C; Jin, M; Ouyang, L
2015-06-15
Purpose: To investigate whether deconvolution methods can improve scatter estimation under different blurring and noise conditions for blocker-based scatter correction methods for cone-beam X-ray computed tomography (CBCT). Methods: An “ideal” projection image with scatter was first simulated for blocker-based CBCT data acquisition by assuming no blurring effect and no noise. The ideal image was then convolved with long-tail point spread functions (PSF) of different widths to mimic the blurring effect from the finite focal spot and detector response. Different levels of noise were also added. Three deconvolution methods, 1) inverse filtering, 2) Wiener, and 3) Richardson-Lucy, were used to recover the scatter signal in the blocked region. The root mean square error (RMSE) of the estimated scatter serves as a quantitative measure of the performance of the different methods under different blurring and noise conditions. Results: Due to the blurring effect, the scatter signal in the blocked region is contaminated by the primary signal in the unblocked region. The direct use of the signal in the blocked region to estimate scatter (“direct method”) leads to large RMSE values, which increase with the increased width of the PSF and increased noise. Inverse filtering is very sensitive to noise and practically useless. The Wiener and Richardson-Lucy deconvolution methods significantly improve scatter estimation compared to the direct method. For a typical medium PSF and medium noise condition, both methods (∼20 RMSE) can achieve a 4-fold improvement over the direct method (∼80 RMSE). The Wiener method deals better with large noise and Richardson-Lucy works better on wide PSFs. Conclusion: We investigated several deconvolution methods to recover the scatter signal in the blocked region for blocker-based scatter correction for CBCT. Our simulation results demonstrate that Wiener and Richardson-Lucy deconvolution can significantly improve the scatter estimation compared to the direct method.
Gold - A novel deconvolution algorithm with optimization for waveform LiDAR processing
NASA Astrophysics Data System (ADS)
Zhou, Tan; Popescu, Sorin C.; Krause, Keith; Sheridan, Ryan D.; Putman, Eric
2017-07-01
Waveform Light Detection and Ranging (LiDAR) data have advantages over discrete-return LiDAR data in accurately characterizing vegetation structure. However, we lack a comprehensive understanding of waveform data processing approaches under different topography and vegetation conditions. The objective of this paper is to highlight a novel deconvolution algorithm, the Gold algorithm, for processing waveform LiDAR data with optimal deconvolution parameters. Further, we present a comparative study of waveform processing methods to provide insight into selecting an approach for a given combination of vegetation and terrain characteristics. We employed two waveform processing methods: (1) direct decomposition; (2) deconvolution and decomposition. In method two, we utilized two deconvolution algorithms - the Richardson-Lucy (RL) algorithm and the Gold algorithm. Comprehensive and quantitative comparisons were conducted in terms of the number of detected echoes, position accuracy, the bias of the end products (such as the digital terrain model (DTM) and canopy height model (CHM)) from the corresponding reference data, along with parameter uncertainty for these end products obtained from the different methods. This study was conducted at three study sites that include diverse ecological regions and vegetation and elevation gradients. Results demonstrate that the two deconvolution algorithms are sensitive to the pre-processing steps of the input data. The deconvolution and decomposition method is more capable of detecting hidden echoes with a lower false echo detection rate, especially for the Gold algorithm. Compared to the reference data, all approaches generate satisfactory accuracy assessment results with small mean spatial differences (<1.22 m for DTMs, <0.77 m for CHMs) and root mean square errors (RMSE) (<1.26 m for DTMs, <1.93 m for CHMs). More specifically, the Gold algorithm is superior to the others with a smaller RMSE (<1.01 m), while the direct decomposition approach works better in terms of the percentage of spatial difference within 0.5 and 1 m. The parameter uncertainty analysis demonstrates that the Gold algorithm outperforms the other approaches in dense vegetation areas, with the smallest RMSE, and the RL algorithm performs better in sparse vegetation areas in terms of RMSE. Additionally, higher uncertainty occurs in areas with steep slopes and dense vegetation. This study provides an alternative and innovative approach for waveform processing that will benefit high-fidelity processing of waveform LiDAR data to characterize vegetation structure.
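The abstract does not spell the iteration out, but the Gold algorithm is commonly written as a multiplicative update on the normal equations that keeps the solution non-negative (the form used, e.g., in ROOT's TSpectrum); whether the paper uses exactly this variant is an assumption. A toy 1-D sketch:

```python
# Gold deconvolution of a 1-D waveform: multiplicative, non-negative update.
import numpy as np

def gold_deconvolve(y, h, n_iter=1000):
    n = y.size
    # circulant convolution matrix: column i is the kernel shifted by i samples
    H = np.array([np.roll(np.pad(h, (0, n - h.size)), i) for i in range(n)]).T
    A, b = H.T @ H, H.T @ y
    x = np.full(n, max(y.mean(), 1e-6))        # positive starting point
    for _ in range(n_iter):
        x = x * b / (A @ x + 1e-12)            # Gold multiplicative update
    return x

# toy waveform: two overlapping returns blurred by a Gaussian system response
h = np.exp(-0.5 * ((np.arange(15) - 7) / 2.0) ** 2)
truth = np.zeros(200); truth[[90, 100]] = [1.0, 0.7]
y = np.convolve(truth, h, mode="full")[:truth.size]
echoes = gold_deconvolve(y, h)                 # sharpened echo estimates
```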
Improving cell mixture deconvolution by identifying optimal DNA methylation libraries (IDOL).
Koestler, Devin C; Jones, Meaghan J; Usset, Joseph; Christensen, Brock C; Butler, Rondi A; Kobor, Michael S; Wiencke, John K; Kelsey, Karl T
2016-03-08
Confounding due to cellular heterogeneity represents one of the foremost challenges currently facing Epigenome-Wide Association Studies (EWAS). Statistical methods leveraging the tissue-specificity of DNA methylation for deconvoluting the cellular mixture of heterogeneous biospecimens offer a promising solution; however, the performance of such methods depends entirely on the library of methylation markers being used for deconvolution. Here, we introduce a novel algorithm for Identifying Optimal Libraries (IDOL) that dynamically scans a candidate set of cell-specific methylation markers to find libraries that optimize the accuracy of cell fraction estimates obtained from cell mixture deconvolution. Application of IDOL to a training set consisting of samples with both whole-blood DNA methylation data (Illumina HumanMethylation450 BeadArray (HM450)) and flow cytometry measurements of cell composition revealed an optimized library comprising 300 CpG sites. When compared with existing libraries, the library identified by IDOL demonstrated significantly better overall discrimination of the entire immune cell landscape (p = 0.038) and resulted in improved discrimination of 14 out of the 15 pairs of leukocyte subtypes. Estimates of cell composition across the samples in the training set using the IDOL library were highly correlated with their respective flow cytometry measurements, with all cell-specific R² > 0.99 and root mean square errors (RMSEs) ranging from 0.97% to 1.33% across leukocyte subtypes. Independent validation of the optimized IDOL library using two additional HM450 data sets showed similarly strong prediction performance, with all cell-specific R² > 0.90 and RMSE < 4.00%. In simulation studies, adjustments for cell composition using the IDOL library resulted in uniformly lower false positive rates compared to competing libraries, while also demonstrating an improved capacity to explain epigenome-wide variation in DNA methylation within two large publicly available HM450 data sets. Despite consisting of half as many CpGs as existing libraries for whole-blood mixture deconvolution, the optimized IDOL library identified herein resulted in outstanding prediction performance across all considered data sets and demonstrated potential to improve the operating characteristics of EWAS involving adjustments for cell distribution. In addition to providing the EWAS community with an optimized library for whole-blood mixture deconvolution, our work establishes a systematic and generalizable framework for the assembly of libraries that improve the accuracy of cell mixture deconvolution.
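The downstream use of such a library can be sketched with non-negative least squares: regress a sample's methylation at the library CpGs on reference profiles of each leukocyte subtype and normalize to fractions. The reference matrix and mixing fractions below are synthetic placeholders; constrained projection variants based on quadratic programming are also common.

```python
# Cell-mixture deconvolution sketch: NNLS of sample methylation on references.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_cpgs, n_cell_types = 300, 6                   # e.g. a 300-CpG library, 6 subtypes
ref = rng.random((n_cpgs, n_cell_types))        # reference methylation profiles
true_frac = np.array([0.55, 0.25, 0.08, 0.06, 0.04, 0.02])
sample = ref @ true_frac + rng.normal(0, 0.01, n_cpgs)  # synthetic whole-blood sample

coef, _ = nnls(ref, sample)                     # non-negative projection
fractions = coef / coef.sum()                   # estimated cell-type proportions
```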
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, L; Tan, S; Lu, W
2014-06-01
Purpose: To implement a new method that integrates deconvolution with segmentation under a variational framework for PET tumor delineation. Methods: Deconvolution and segmentation are both challenging problems in image processing. The partial volume effect (PVE) makes tumor boundaries in PET images blurred, which affects the accuracy of tumor segmentation. Deconvolution aims to obtain a PVE-free image, which can help to improve segmentation accuracy. Conversely, a correct localization of the object boundaries is helpful for estimating the blur kernel, and thus assists the deconvolution. In this study, we proposed to solve the two problems simultaneously using a variational method so that they can benefit each other. The energy functional consists of a fidelity term and a regularization term, and the blur kernel was limited to be an isotropic Gaussian kernel. We minimized the energy functional by solving the associated Euler-Lagrange equations and taking the derivative with respect to the parameters of the kernel function. An alternate minimization method was used to iterate between segmentation, deconvolution and blur-kernel recovery. The performance of the proposed method was tested on clinical PET images of patients with non-Hodgkin's lymphoma and compared with seven other segmentation methods using the dice similarity index (DSI) and volume error (VE). Results: Among all segmentation methods, the proposed one (DSI=0.81, VE=0.05) has the highest accuracy, followed by active contours without edges (DSI=0.81, VE=0.25), while other methods, including Graph Cut and the Mumford-Shah (MS) method, have lower accuracy. A visual inspection shows that the proposed method localizes the real tumor contour very well. Conclusion: The results showed that deconvolution and segmentation can contribute to each other. The proposed variational method solves the two problems simultaneously and leads to high performance for tumor segmentation in PET. This work was supported in part by the National Natural Science Foundation of China (NNSFC), under Grant Nos. 60971112 and 61375018, and the Fundamental Research Funds for the Central Universities, under Grant No. 2012QN086. Wei Lu was supported in part by the National Institutes of Health (NIH) Grant No. R01 CA172638.
Histological evolution of pleuroparenchymal fibroelastosis
Hirota, Takako; Yoshida, Yuji; Kitasato, Yasuhiko; Yoshimi, Michihiro; Koga, Takaomi; Tsuruta, Nobuko; Minami, Masato; Harada, Taishi; Ishii, Hiroshi; Fujita, Masaki; Nabeshima, Kazuki; Nagata, Nobuhiko; Watanabe, Kentaro
2015-01-01
Aims: To investigate the histological evolution in the development of pleuroparenchymal fibroelastosis (PPFE). Methods and results: We examined four patients who had undergone surgical lung biopsy twice, or who had undergone surgical lung biopsy and had been autopsied, and in whom the histological diagnosis of the first biopsy was not PPFE, but the diagnosis of the second biopsy or of the autopsy was PPFE. The histological patterns of the first biopsy were cellular and fibrotic interstitial pneumonia, cellular interstitial pneumonia (CIP) with organizing pneumonia, CIP with granulomas, and acute lung injury in cases 1, 2, 3, and 4, respectively. Septal elastosis was already present in the non-specific interstitial pneumonia-like histology of case 1, but a few additional years were necessary to reach consolidated subpleural fibroelastosis. In case 3, subpleural fibroelastosis was already present in the first biopsy, but only to a small extent. Twelve years later, it was replaced by a long band of fibroelastosis. The septal inflammation and fibrosis and airspace organization observed in the first biopsies were replaced by less cellular subpleural fibroelastosis within 3–12 years. Conclusions: Interstitial inflammation or acute lung injury may be an initial step in the development of PPFE. PMID:25234959
Constructing a WISE High Resolution Galaxy Atlas
NASA Technical Reports Server (NTRS)
Jarrett, T. H.; Masci, F.; Tsai, C. W.; Petty, S.; Cluver, M.; Assef, Roberto J.; Benford, D.; Blain, A.; Bridge, C.; Donoso, E.;
2012-01-01
After eight months of continuous observations, the Wide-field Infrared Survey Explorer (WISE) mapped the entire sky at 3.4 micron, 4.6 micron, 12 micron, and 22 micron. We have begun a dedicated WISE High Resolution Galaxy Atlas project to fully characterize large, nearby galaxies and produce a legacy image atlas and source catalog. Here we summarize the deconvolution techniques used to significantly improve the spatial resolution of WISE imaging, specifically designed to study the internal anatomy of nearby galaxies. As a case study, we present results for the galaxy NGC 1566, comparing the WISE enhanced-resolution image processing to that of Spitzer, Galaxy Evolution Explorer, and ground-based imaging. This is the first paper in a two-part series; results for a larger sample of nearby galaxies are presented in the second paper.
ERIC Educational Resources Information Center
Alter, Krystyn P.; Molloy, John L.; Niemeyer, Emily D.
2005-01-01
A laboratory experiment reinforces the concept of acid-base equilibria while introducing a common application of spectrophotometry and can easily be completed within a standard four-hour laboratory period. It provides students with an opportunity to use advanced data analysis techniques like data smoothing and spectral deconvolution to…
Deconvolution of Energy Spectra in the ATIC Experiment
NASA Technical Reports Server (NTRS)
Batkov, K. E.; Panov, A. D.; Adams, J. H.; Ahn, H. S.; Bashindzhagyan, G. L.; Chang, J.; Christl, M.; Fazley, A. R.; Ganel, O.; Gunasigha, R. M.;
2005-01-01
The Advanced Thin Ionization Calorimeter (ATIC) balloon-borne experiment is designed to perform cosmic-ray elemental spectra measurements from below 100 GeV up to tens of TeV for nuclei from hydrogen to iron. The instrument is composed of a silicon matrix detector followed by a carbon target, interleaved with scintillator tracking layers, and a segmented BGO calorimeter composed of 320 individual crystals totalling 18 radiation lengths, used to determine the particle energy. The technique for deconvolution of the energy spectra measured in the thin calorimeter is based on detailed simulations of the response of the ATIC instrument to different cosmic-ray nuclei over a wide energy range. The method of deconvolution is described, and the energy spectrum of carbon obtained with this technique is presented.
Sequential deconvolution from wave-front sensing using bivariate simplex splines
NASA Astrophysics Data System (ADS)
Guo, Shiping; Zhang, Rongzhi; Li, Jisheng; Zou, Jianhua; Xu, Rong; Liu, Changhai
2015-05-01
Deconvolution from wave-front sensing (DWFS) is an imaging compensation technique for turbulence-degraded images based on simultaneous recording of short-exposure images and wave-front sensor data. This paper employs the multivariate splines method for sequential DWFS: a bivariate simplex-splines-based average-slopes measurement model is first built for the Shack-Hartmann wave-front sensor; next, a well-conditioned least squares estimator for the spline coefficients is constructed using multiple Shack-Hartmann measurements; the distorted wave-front is then uniquely determined by the estimated spline coefficients; the object image is finally obtained by non-blind deconvolution processing. Simulated experiments at different turbulence strengths show that our method yields superior image restoration and noise rejection capability, especially when extracting the multidirectional phase derivatives.
SOURCE PULSE ENHANCEMENT BY DECONVOLUTION OF AN EMPIRICAL GREEN'S FUNCTION.
Mueller, Charles S.
1985-01-01
Observations of the earthquake source-time function are enhanced if path, recording-site, and instrument complexities can be removed from seismograms. Assuming that a small earthquake has a simple source, its seismogram can be treated as an empirical Green's function and deconvolved from the seismogram of a larger and/or more complex earthquake by spectral division. When the deconvolution is well posed, the quotient spectrum represents the apparent source-time function of the larger event. This study shows that with high-quality locally recorded earthquake data it is feasible to Fourier transform the quotient and obtain a useful result in the time domain. In practice, the deconvolution can be stabilized by one of several simple techniques. An application of the method is given.
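One of the simple stabilization techniques alluded to is water-level spectral division, which floors the denominator spectrum before dividing; the water-level fraction below is an illustrative choice.

```python
# Water-level stabilized spectral division with an empirical Green's function.
import numpy as np

def water_level_deconvolve(large_event, green_fn, level=0.01):
    n = large_event.size
    F_big = np.fft.rfft(large_event, n)
    F_egf = np.fft.rfft(green_fn, n)
    floor = level * np.max(np.abs(F_egf))      # the "water level"
    denom = np.where(np.abs(F_egf) < floor,    # floor small spectral amplitudes,
                     floor * np.exp(1j * np.angle(F_egf)),  # keeping their phase
                     F_egf)
    return np.fft.irfft(F_big / denom, n)      # apparent source-time function
```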
Deconvolution of time series in the laboratory
NASA Astrophysics Data System (ADS)
John, Thomas; Pietschmann, Dirk; Becker, Volker; Wagner, Christian
2016-10-01
In this study, we present two practical applications of the deconvolution of time series in Fourier space. First, we reconstruct a filtered input signal of sound cards that has been heavily distorted by a built-in high-pass filter using a software approach. Using deconvolution, we can partially bypass the filter and extend the dynamic frequency range by two orders of magnitude. Second, we construct required input signals for a mechanical shaker in order to obtain arbitrary acceleration waveforms, referred to as feedforward control. For both situations, experimental and theoretical approaches are discussed to determine the system-dependent frequency response. Moreover, for the shaker, we propose a simple feedback loop as an extension to the feedforward control in order to handle nonlinearities of the system.
Historical Perspectives of the Causation of Lung Cancer
2015-01-01
Lung cancer is the leading cause of cancer deaths worldwide. Lesser-known forces are involved in the etiology of lung cancer and have relevant implications for providers in ameliorating care. The purpose of this article is to discuss theories of causation of lung cancer using historical analyses of the evolution of the disease, incorporating related explanations that integrate the relationships of science, nursing, medicine, and society. Literature spanning 160 years was searched, and Thagard’s model of causation networks was used to show how nursing and medicine were significant influences on lung cancer causation theory. Disease causation interfaces with sociological norms of behavior to form habits and rates of health behavior. Historically, nursing was detrimentally manipulated by the tobacco industry, engaging in harmful smoking behaviors, thus negatively affecting patient care. Understanding the history behind lung cancer causation may empower nurses to play an active role in a patient’s health. PMID:28462309
Govindan, Ramaswamy; Mandrekar, Sumithra J.; Gerber, David E.; Oxnard, Geoffrey R.; Dahlberg, Suzanne E.; Malik, Shakun; Mooney, Margaret; Abrams, Jeffrey S.; Jänne, Pasi A.; Gandara, David R.; Ramalingam, Suresh S.; Vokes, Everett E.
2015-01-01
The treatment of patients with metastatic non-small cell lung cancer (NSCLC) is slowly evolving from empirical cytotoxic chemotherapy to personalized treatment based on specific molecular alterations. Despite this 10-year evolution, targeted therapies have not been studied adequately in patients with resected NSCLC who have clearly defined actionable mutations. The advent of next generation sequencing has now made it possible to characterize genomic alterations in unprecedented detail. The efforts begun by The Cancer Genome Atlas (TCGA) project to understand the complexities of the genomic landscape of lung cancer will be supplemented further by studying a large number of tumor specimens. The Adjuvant Lung Cancer Enrichment Marker Identification and Sequencing Trial (ALCHEMIST) is a National Cancer Institute (NCI) sponsored national clinical trials network (NCTN) initiative to address the need to refine therapy for early stage NSCLC. This program will screen several thousand patients with operable lung adenocarcinoma to determine whether their tumors contain specific molecular alterations [epidermal growth factor receptor mutation (EGFR) and anaplastic lymphoma kinase rearrangement (ALK)], making them eligible for treatment trials that target these alterations. Patients with an EGFR mutation or ALK gene rearrangement in their tumor will be randomized to placebo vs. erlotinib or crizotinib, respectively, after completion of their standard adjuvant therapy. ALCHEMIST will also contain a large discovery component that will provide an opportunity to incorporate genomic studies to fully understand the clonal architecture, clonal evolution, and mechanisms of resistance to therapy. In this review, we describe the concept, rationale and outline of ALCHEMIST and the plan for genomic studies in patients with lung adenocarcinoma. PMID:26672084
Augmentation therapy for alpha-1 antitrypsin deficiency: towards a personalised approach
2013-01-01
Background: Intravenous augmentation therapy is the only specific treatment available for emphysema associated with alpha-1 antitrypsin deficiency. Despite large observational studies and limited interventional studies, there remains controversy about the efficacy of this treatment because of the impracticality of conducting adequately powered studies to evaluate the rate of decline in lung function, owing to the low prevalence and slow progression of the disease. However, measurement of lung density by computed tomography is a more specific and sensitive marker of the evolution of emphysema, and two small placebo-controlled clinical trials have provided evidence supporting a reduction in the rate of decline in lung density with augmentation therapy. The problem: Where augmentation therapy has become available, there has been little consideration of a structured approach to therapy, which is often introduced on the basis of functional impairment at diagnosis. Data from registries have shown great variability in the evolution of lung disease according to patient acquisition and the presence of recognised risk factors. Avoidance of risk factors may, in many cases, stabilise the disease. Since augmentation therapy itself will at best preserve the presenting level of lung damage yet require intravenous administration for life with associated costs, identification of patients at risk of continued rapid or long-term progression is essential to select those for whom this treatment can be most appropriate and hence generally more cost-effective. This represents a major reconsideration of current practice in order to develop a consistent approach to management worldwide. Purpose of this review: The current review assesses the evidence for efficacy of augmentation therapy and considers how the combination of age, physiological impairment, exacerbation history and rate of decline in spirometry and other measures of emphysema may be used to improve therapeutic decision making, until a reliable predictive biomarker of the evolution of lung impairment can be identified. In addition, individual pharmacokinetic studies may permit the selection of the best regimen of administration for those who need it. Summary: The rarity and variable characteristics of the disease imply the need for an individualised approach to therapy in specialised centres with sufficient experience to apply a systematic approach to monitoring and management. PMID:24063809
Efficient volumetric estimation from plenoptic data
NASA Astrophysics Data System (ADS)
Anglin, Paul; Reeves, Stanley J.; Thurow, Brian S.
2013-03-01
The commercial release of the Lytro camera, and greater availability of plenoptic imaging systems in general, have given the image processing community cost-effective tools for light-field imaging. While this data is most commonly used to generate planar images at arbitrary focal depths, reconstruction of volumetric fields is also possible. Similarly, deconvolution is a technique that is conventionally used in planar image reconstruction, or deblurring, algorithms. However, when leveraged with the ability of a light-field camera to quickly reproduce multiple focal planes within an imaged volume, deconvolution offers a computationally efficient method of volumetric reconstruction. Related research has shown that light-field imaging systems in conjunction with tomographic reconstruction techniques are also capable of estimating the imaged volume and have been successfully applied to particle image velocimetry (PIV). However, while tomographic volumetric estimation through algorithms such as multiplicative algebraic reconstruction techniques (MART) has proven to be highly accurate, it is computationally intensive. In this paper, the reconstruction problem is shown to be solvable by deconvolution. Deconvolution offers a significant improvement in computational efficiency through the use of fast Fourier transforms (FFTs) when compared to other tomographic methods. This work describes a deconvolution algorithm designed to reconstruct a 3-D particle field from simulated plenoptic data. A 3-D extension of existing 2-D FFT-based refocusing techniques is presented to further improve efficiency when computing object focal stacks and system point spread functions (PSF). Reconstruction artifacts are identified; their underlying source and methods of mitigation are explored where possible, and reconstructions of simulated particle fields are provided.
NASA Astrophysics Data System (ADS)
Yang, Yang; Chu, Zhigang; Shen, Linbang; Ping, Guoli; Xu, Zhongming
2018-07-01
Because it can rapidly demystify acoustic source identification results, Fourier-based deconvolution has been studied and applied widely for delay-and-sum (DAS) beamforming with two-dimensional (2D) planar arrays. So far, however, no counterpart exists for spherical harmonics beamforming (SHB) with three-dimensional (3D) solid spherical arrays, and this paper is motivated to settle that problem. Firstly, to determine the effective identification region, the premise of deconvolution, a shift-invariant point spread function (PSF), is analyzed with simulations. For the premise to be satisfied approximately, the opening angle in the elevation dimension of the surface of interest should be small, while no restriction is imposed on the azimuth dimension. Then, two kinds of deconvolution theories are built for SHB using the zero and the periodic boundary conditions, respectively. Both simulations and experiments demonstrate that the periodic boundary condition is superior to the zero one and better fits 3D acoustic source identification with solid spherical arrays. Finally, four periodic-boundary-condition-based deconvolution methods are formulated, and their performance is assessed both with simulations and experimentally. All four methods offer enhanced spatial resolution and reduced sidelobe contamination over SHB. The recovered source strength approximates the exact one multiplied by a coefficient that is the square of the focus distance divided by the distance from the source to the array center, while the recovered pressure contribution is scarcely affected by the focus distance, always approximating the exact one.
NASA Astrophysics Data System (ADS)
Rajendran, Kishore; Leng, Shuai; Jorgensen, Steven M.; Abdurakhimova, Dilbar; Ritman, Erik L.; McCollough, Cynthia H.
2017-03-01
Changes in arterial wall perfusion are an indicator of early atherosclerosis. This is characterized by an increased spatial density of vasa vasorum (VV), the micro-vessels that supply oxygen and nutrients to the arterial wall. Detection of increased VV during contrast-enhanced computed tomography (CT) imaging is limited due to contamination from the blooming effect of the contrast-enhanced lumen. We report the application of an image deconvolution technique, using a measured system point-spread function, to CT data obtained from a photon-counting CT system to reduce blooming and to improve the CT number accuracy of the arterial wall, which enhances detection of increased VV. A phantom study was performed to assess the accuracy of the deconvolution technique. A porcine model was created with enhanced VV in one carotid artery; the other carotid artery served as a control. CT images at an energy range of 25-120 keV were reconstructed. CT numbers were measured at multiple locations in the carotid walls and at multiple time points, pre and post contrast injection. The mean CT number in the carotid wall was compared between the left (increased VV) and right (control) carotid arteries. Prior to deconvolution, results showed similar mean CT numbers in the left and right carotid walls due to contamination from the blooming effect, limiting the detection of increased VV in the left carotid artery. After deconvolution, the mean CT number difference between the left and right carotid arteries was substantially increased at all time points, enabling detection of the increased VV in the artery wall.
VizieR Online Data Catalog: Spatial deconvolution code (Quintero Noda+, 2015)
NASA Astrophysics Data System (ADS)
Quintero Noda, C.; Asensio Ramos, A.; Orozco Suarez, D.; Ruiz Cobo, B.
2015-05-01
This deconvolution method follows the scheme presented in Ruiz Cobo & Asensio Ramos (2013A&A...549L...4R). The Stokes parameters are projected onto a few spectral eigenvectors, and the ensuing maps of coefficients are deconvolved using a standard Lucy-Richardson algorithm. This introduces a stabilization because the PCA filtering reduces the amount of noise. (1 data file).
NASA Astrophysics Data System (ADS)
Boutet de Monvel, Jacques; Le Calvez, Sophie; Ulfendahl, Mats
2000-05-01
Image restoration algorithms provide efficient tools for recovering part of the information lost in the imaging process of a microscope. We describe recent progress in the application of deconvolution to confocal microscopy. The point spread function of a Biorad-MRC1024 confocal microscope was measured under various imaging conditions and used to process 3D confocal images acquired in an intact preparation of the inner ear developed at Karolinska Institutet. Using these experiments we investigate the application of denoising methods based on wavelet analysis as a natural regularization of the deconvolution process. Within the Bayesian approach to image restoration, we compare wavelet denoising with the use of a maximum entropy constraint as another natural regularization method. Numerical experiments performed with test images show a clear advantage of the wavelet denoising approach, which makes it possible to 'cool down' the image with respect to the signal while suppressing much of the fine-scale artifacts that appear during deconvolution due to the presence of noise, incomplete knowledge of the point spread function, or undersampling problems. We further describe a natural development of this approach, which consists of performing the Bayesian inference directly in the wavelet domain.
A method to measure the presampling MTF in digital radiography using Wiener deconvolution
NASA Astrophysics Data System (ADS)
Zhou, Zhongxing; Zhu, Qingzhen; Gao, Feng; Zhao, Huijuan; Zhang, Lixin; Li, Guohui
2013-03-01
We developed a novel method for determining the presampling modulation transfer function (MTF) of digital radiography systems from slanted edge images based on Wiener deconvolution. The degraded supersampled edge spread function (ESF) was obtained from simulated slanted edge images with known MTF in the presence of Poisson noise, and its corresponding ideal ESF without degradation was constructed according to its central edge position. To meet the absolute integrability condition of the Fourier transform, the original ESFs were mirrored to construct symmetric ESF patterns. Then, based on the Wiener deconvolution technique, the supersampled line spread function (LSF) could be acquired from the symmetric pattern of degraded supersampled ESFs, given the ideal symmetric ESFs and the system noise. The MTF is then the normalized magnitude of the Fourier transform of the LSF. The determined MTF showed strong agreement with the theoretical true MTF when an appropriate Wiener parameter was chosen. The effects of the Wiener parameter value and of the width of the square-like wave peak in the symmetric ESFs are illustrated and discussed. In conclusion, an accurate and simple method to measure the presampling MTF was established using the Wiener deconvolution technique applied to slanted edge images.
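The recipe can be condensed into a few lines of NumPy. This is a hedged sketch of the steps named in the abstract (mirroring for symmetry, Wiener inversion with parameter k, normalization of the MTF); the function name and the default k are assumptions:

```python
import numpy as np

def presampling_mtf(esf_degraded, esf_ideal, k=1e-2):
    """Recover the LSF from a supersampled ESF by Wiener deconvolution.

    Both ESFs are mirrored to form symmetric, absolutely integrable
    patterns before the FFT. k is the Wiener regularization parameter
    (noise-to-signal power ratio); its choice trades bias against noise.
    """
    y = np.concatenate([esf_degraded, esf_degraded[::-1]])  # mirrored pattern
    x = np.concatenate([esf_ideal, esf_ideal[::-1]])
    Y, X = np.fft.fft(y), np.fft.fft(x)
    # Wiener inverse of the system mapping the ideal ESF to the degraded ESF:
    L = Y * np.conj(X) / (np.abs(X) ** 2 + k)
    lsf = np.real(np.fft.ifft(L))
    mtf = np.abs(np.fft.fft(lsf))
    return mtf / mtf[0]   # normalized to unity at zero frequency
```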
Deconvolution of interferometric data using interior point iterative algorithms
NASA Astrophysics Data System (ADS)
Theys, C.; Lantéri, H.; Aime, C.
2016-09-01
We address the problem of deconvolution of astronomical images that could be obtained with future large interferometers in space. The presentation is made in two complementary parts. The first part gives an introduction to image deconvolution with linear and nonlinear algorithms. The emphasis is on nonlinear iterative algorithms that enforce the constraints of non-negativity and constant flux. The Richardson-Lucy algorithm appears there as a special case for photon-counting conditions. More generally, the algorithm published recently by Lanteri et al. (2015) is based on scale-invariant divergences without assumptions about the statistical model of the data. The two proposed algorithms are interior-point algorithms, the latter being more efficient in terms of computational speed. These algorithms are applied to the deconvolution of simulated images corresponding to an interferometric system of 16 diluted telescopes in space. Two non-redundant configurations, one arranged around a circle and the other on a hexagonal lattice, are compared for their effectiveness on a simple astronomical object. The comparison is made in the direct and Fourier spaces. Raw "dirty" images have many artifacts due to replicas of the original object. Linear methods cannot remove these replicas, while iterative methods clearly show their efficacy in these examples.
Single-Ion Deconvolution of Mass Peak Overlaps for Atom Probe Microscopy.
London, Andrew J; Haley, Daniel; Moody, Michael P
2017-04-01
Due to the intrinsic evaporation properties of the material studied, insufficient mass-resolving power and lack of knowledge of the kinetic energy of incident ions, peaks in the atom probe mass-to-charge spectrum can overlap and result in incorrect composition measurements. Contributions to these peak overlaps can be deconvoluted globally, by simply examining adjacent peaks combined with knowledge of natural isotopic abundances. However, this strategy does not account for the fact that the relative contributions to this convoluted signal can often vary significantly in different regions of the analysis volume; e.g., across interfaces and within clusters. Some progress has been made with spatially localized deconvolution in cases where the discrete microstructural regions can be easily identified within the reconstruction, but this means no further point cloud analyses are possible. Hence, we present an ion-by-ion methodology where the identity of each ion, normally obscured by peak overlap, is resolved by examining the isotopic abundance of their immediate surroundings. The resulting peak-deconvoluted data are a point cloud and can be analyzed with any existing tools. We present two detailed case studies and discussion of the limitations of this new technique.
Image deblurring by motion estimation for remote sensing
NASA Astrophysics Data System (ADS)
Chen, Yueting; Wu, Jiagu; Xu, Zhihai; Li, Qi; Feng, Huajun
2010-08-01
The imagery resolution of imaging systems for remote sensing is often limited by image degradation resulting from unwanted motion disturbances of the platform during image exposures. Since the form of the platform vibration can be arbitrary, the lack of a priori knowledge about the motion function (the PSF) suggests blind restoration approaches. A deblurring method that combines motion estimation and image deconvolution, for both area-array and TDI remote sensing, is proposed in this paper. The image motion estimation is accomplished by an auxiliary high-speed detector and a sub-pixel correlation algorithm. The PSF is then reconstructed from the estimated image motion vectors. Eventually, the clear image can be recovered from the blurred image of the prime camera by the Richardson-Lucy (RL) iterative deconvolution algorithm with the constructed PSF. Image deconvolution for the area-array detector is direct, while for the TDI CCD detector an integral distortion compensation step and a row-by-row deconvolution scheme are applied. Theoretical analyses and experimental results show that the performance of the proposed concept is convincing. Blurred and distorted images can be properly recovered, not only for visual inspection but also with a significant gain in objective quality metrics.
Wille, M-L; Zapf, M; Ruiter, N V; Gemmeke, H; Langton, C M
2015-06-21
The quality of ultrasound computed tomography imaging is primarily determined by the accuracy of ultrasound transit time measurement. A major problem in analysis is the overlap of signals making it difficult to detect the correct transit time. The current standard is to apply a matched-filtering approach to the input and output signals. This study compares the matched-filtering technique with active set deconvolution to derive a transit time spectrum from a coded excitation chirp signal and the measured output signal. The ultrasound wave travels in a direct and a reflected path to the receiver, resulting in an overlap in the recorded output signal. The matched-filtering and deconvolution techniques were applied to determine the transit times associated with the two signal paths. Both techniques were able to detect the two different transit times; while matched-filtering has a better accuracy (0.13 μs versus 0.18 μs standard deviations), deconvolution has a 3.5 times improved side-lobe to main-lobe ratio. A higher side-lobe suppression is important to further improve image fidelity. These results suggest that a future combination of both techniques would provide improved signal detection and hence improved image fidelity.
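As an illustration of the deconvolution side of this comparison, the transit-time spectrum can be posed as a non-negative least-squares problem over candidate delays; scipy's nnls is itself an active-set algorithm, so the sketch below is in the same family as the method described, though all names and details are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import nnls
from scipy.linalg import toeplitz

def transit_time_spectrum(chirp, received):
    """Estimate a transit-time spectrum by non-negative deconvolution.

    The received signal is modeled as the chirp convolved with a sparse,
    non-negative spectrum of path delays. For long records, restrict the
    columns to a window of plausible delays to keep the solve fast.
    """
    n = len(received)
    col = np.r_[chirp, np.zeros(n - len(chirp))]
    A = toeplitz(col, np.zeros(n))   # column j = chirp delayed by j samples
    spectrum, _ = nnls(A, received)
    return spectrum                  # peaks mark direct/reflected arrivals
```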
Chemometric Data Analysis for Deconvolution of Overlapped Ion Mobility Profiles
NASA Astrophysics Data System (ADS)
Zekavat, Behrooz; Solouki, Touradj
2012-11-01
We present the details of a data analysis approach for deconvolution of ion mobility (IM) overlapped or unresolved species. This approach takes advantage of ion fragmentation variations as a function of the IM arrival time. The data analysis involves the use of an in-house developed data preprocessing platform for the conversion of the original post-IM/collision-induced dissociation mass spectrometry (post-IM/CID MS) data to a Matlab-compatible format for chemometric analysis. We show that principal component analysis (PCA) can be used to examine the post-IM/CID MS profiles for the presence of mobility-overlapped species. Subsequently, using an interactive self-modeling mixture analysis technique, we show how to calculate the total IM spectrum (TIMS) and the CID mass spectrum for each component of the IM-overlapped mixtures. Moreover, we show that PCA and IM deconvolution techniques provide complementary results for evaluating the validity of the calculated TIMS profiles. We use two binary mixtures with overlapping IM profiles, including (1) a mixture of two non-isobaric peptides (neurotensin (RRPYIL) and a hexapeptide (WHWLQL)), and (2) an isobaric sugar isomer mixture of raffinose and maltotriose, to demonstrate the applicability of the IM deconvolution.
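The PCA screening step can be sketched as follows (the self-modeling mixture analysis step that extracts the TIMS and CID spectra is not shown); the variance threshold is an illustrative assumption:

```python
import numpy as np
from sklearn.decomposition import PCA

def count_overlapped_components(post_im_cid, var_threshold=0.95):
    """Screen a post-IM/CID data matrix (arrival-time bins x m/z bins) for
    mobility-overlapped species with PCA.

    If a single species is present, one component explains essentially all
    of the variance; additional significant components indicate that the
    fragmentation pattern changes across the IM peak, i.e. an overlap.
    """
    pca = PCA()
    pca.fit(post_im_cid)
    cum = np.cumsum(pca.explained_variance_ratio_)
    return int(np.searchsorted(cum, var_threshold) + 1)
```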
Evaluation of uncertainty for regularized deconvolution: A case study in hydrophone measurements.
Eichstädt, S; Wilkens, V
2017-06-01
An estimation of the measurand in dynamic metrology usually requires a deconvolution based on a dynamic calibration of the measuring system. Since deconvolution is, mathematically speaking, an ill-posed inverse problem, some kind of regularization is required to render the problem stable and obtain usable results. Many approaches to regularized deconvolution exist in the literature, but the corresponding evaluation of measurement uncertainties is, in general, an unsolved issue. In particular, the uncertainty contribution of the regularization itself is a topic of great importance, because it has a significant impact on the estimation result. Here, a versatile approach is proposed to express prior knowledge about the measurand based on a flexible, low-dimensional modeling of an upper bound on the magnitude spectrum of the measurand. This upper bound allows the derivation of an uncertainty associated with the regularization method in line with the guidelines in metrology. As a case study for the proposed method, hydrophone measurements in medical ultrasound with an acoustic working frequency of up to 7.5 MHz are considered, but the approach is applicable for all kinds of estimation methods in dynamic metrology, where regularization is required and which can be expressed as a multiplication in the frequency domain.
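A heavily simplified sketch of the general idea, regularized frequency-domain deconvolution with a Monte Carlo propagation of calibration uncertainty, is given below. The low-pass filter stands in for the regularization whose uncertainty contribution the paper models via a spectral upper bound, and the Gaussian perturbation of the impulse response is a stand-in for the full calibration covariance; all names and defaults are assumptions:

```python
import numpy as np

def deconvolve_with_uncertainty(y, h, fs, f_cut, sigma_h=0.01, n_mc=200,
                                rng=None):
    """Regularized deconvolution of a record y by an impulse response h,
    with Monte Carlo propagation of the calibration uncertainty.

    fs: sampling rate in Hz; f_cut: cutoff of the low-pass regularization
    filter. Returns the mean estimate and its pointwise standard deviation.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n = len(y)
    f = np.fft.rfftfreq(n, 1.0 / fs)
    reg = 1.0 / (1.0 + (f / f_cut) ** 8)       # low-pass regularization filter
    Y = np.fft.rfft(y)
    scale = sigma_h * np.linalg.norm(h) / np.sqrt(len(h))
    estimates = []
    for _ in range(n_mc):
        H = np.fft.rfft(h + scale * rng.standard_normal(len(h)), n)
        H = np.where(np.abs(H) > 1e-12, H, 1e-12)   # guard against zeros
        estimates.append(np.fft.irfft(Y / H * reg, n))
    estimates = np.asarray(estimates)
    return estimates.mean(axis=0), estimates.std(axis=0)
```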
Designing a stable feedback control system for blind image deconvolution.
Cheng, Shichao; Liu, Risheng; Fan, Xin; Luo, Zhongxuan
2018-05-01
Blind image deconvolution is one of the main low-level vision problems with wide applications. Many previous works manually design regularization to simultaneously estimate the latent sharp image and the blur kernel under a maximum a posteriori framework. However, it has been demonstrated that such joint estimation strategies may lead to an undesired trivial solution. In this paper, we present a novel perspective, using a stable feedback control system, to simulate the latent sharp image propagation. The controller of our system consists of regularization and guidance, which decide the sparsity and sharp features of the latent image, respectively. Furthermore, the blind image formation model is introduced into the feedback process to keep the image restoration from deviating from the stable point. The stability analysis of the system indicates that the latent image propagation in the blind deconvolution task can be efficiently estimated and controlled by cues and priors. Thus the kernel estimation used for image restoration becomes more precise. Experimental results show that our system is effective for image propagation and can perform favorably against state-of-the-art blind image deconvolution methods on different benchmark image sets and specially blurred images. Copyright © 2018 Elsevier Ltd. All rights reserved.
Multi-limit unsymmetrical MLIBD image restoration algorithm
NASA Astrophysics Data System (ADS)
Yang, Yang; Cheng, Yiping; Chen, Zai-wang; Bo, Chen
2012-11-01
A novel multi-limit unsymmetrical iterative blind deconvolution (MLIBD) algorithm is presented to enhance the performance of adaptive optics image restoration. The algorithm enhances the reliability of iterative blind deconvolution by introducing a bandwidth limit into the frequency domain of the point spread function (PSF), and adopts dynamic estimation of the PSF support region to improve convergence speed. The unsymmetrical factor is computed automatically to improve adaptivity. Image deconvolution experiments comparing Richardson-Lucy IBD and MLIBD were conducted; the results indicate that MLIBD reduces the iteration count by 22.4% and improves the peak signal-to-noise ratio by 10.18 dB. The MLIBD algorithm performs outstandingly in restoring the FK5-857 adaptive optics images and the double-star adaptive optics images.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neft, R.E.; Tierney, L.A.; Belinsky, S.A.
Molecular and immunological techniques may enhance the usefulness of sputum cytology as a screening tool for lung cancer. These techniques may also be useful in detecting and following the early progression of disease from metaplasia to dysplasia, carcinoma in situ, and finally to invasive carcinoma. Longitudinal information on the evolution of these malignant changes in the respiratory epithelium can be gained by prospective study of populations at high risk for lung cancer. This work is significant because double-labeling of cells in sputum with p53 and cytokeratin antibodies facilitates rapid screening of p53-positive neoplastic and preneoplastic lung cells by brightfield and fluorescence microscopy.
Bronchial blood supply after lung transplantation without bronchial artery revascularization.
Nicolls, Mark R; Zamora, Martin R
2010-10-01
This review discusses how the bronchial artery circulation is interrupted following lung transplantation and what may be the long-term complications of compromising systemic blood flow to allograft airways. Preclinical and clinical studies have shown that the loss of airway microcirculations is highly associated with the development of airway hypoxia and an increased susceptibility to chronic rejection. The bronchial artery circulation has been highly conserved through evolution. Current evidence suggests that the failure to routinely perform bronchial artery revascularization at the time of lung transplantation may predispose patients to develop the bronchiolitis obliterans syndrome.
The iron lung: halfway technology or necessary step?
Maxwell, J H
1986-01-01
The iron lung is often used to epitomize the costly halfway technologies of modern-day medicine that fail to cure and only prolong a seriously compromised existence. Historical evidence indicates that the iron lung was not a costly instrument of last resort; instead, it was a lifesaving device that played a critical role in the evolution of modern respirators and respiratory care. Contrary to the prevailing views of the biomedical research community, the creation of new devices and instruments is often as important a source of technical change in medicine as are advances in the biological sciences.
NASA Astrophysics Data System (ADS)
Navarro, Jorge
The goal of the study presented here is to determine the best available nondestructive technique for collecting validation data and for determining burnup and cooling time of the fuel elements on-site at the Advanced Test Reactor (ATR) canal. This study makes a recommendation on the viability of implementing a permanent fuel scanning system at the ATR canal and leads to the full design of a permanent fuel scan system. The study first determined whether it was possible, and which equipment was necessary, to collect useful spectra from ATR fuel elements at the canal adjacent to the reactor. Once it was established that useful spectra could be obtained at the ATR canal, the next step was to determine which detector and which configuration were better suited to predicting burnup and cooling time of fuel elements nondestructively. Three detectors, high-purity germanium (HPGe), lanthanum bromide (LaBr3), and high-pressure xenon (HPXe), were used in two system configurations, above and below the water pool. The data collected and analyzed were used to create burnup and cooling time calibration prediction curves for ATR fuel. The next stage of the study was to determine which of the three detectors tested was better suited for the permanent system. From the spectra taken and the calibration curves obtained, it was determined that although the HPGe detector yielded better results, a detector that could better withstand the harsh environment of the ATR canal was needed. The in-situ nature of the measurements required a rugged, low-maintenance, and easy-to-control fuel scanning system. Based on the ATR canal feasibility measurements and calibration results, it was determined that the LaBr3 detector was the best alternative for canal in-situ measurements; however, in order to enhance the quality of the spectra collected using this scintillator, a deconvolution method was developed. Following the development of the deconvolution method for ATR applications, the technique was tested using one-isotope, multi-isotope, and simulated fuel sources. Burnup calibrations were performed using convoluted and deconvoluted data; the calibration results showed that burnup prediction improves with deconvolution. The final stage of the deconvolution method development was to perform an irradiation experiment in order to create a surrogate fuel source for testing the deconvolution method with experimental data. A conceptual design of the fuel scan system is the path forward, using the rugged LaBr3 detector in an above-the-water configuration together with deconvolution algorithms.
Gravity and the Evolution of Cardiopulmonary Morphology in Snakes
Lillywhite, Harvey B.; Albert, James S.; Sheehy, Coleman M.; Seymour, Roger S.
2011-01-01
Physiological investigations of snakes have established the importance of heart position and pulmonary structure in contexts of gravity effects on blood circulation. Here we investigate morphological correlates of cardiopulmonary physiology in contexts related to ecology, behavior and evolution. We analyze data for heart position and length of vascular lung in 154 species of snakes that exhibit a broad range of characteristic behaviors and habitat associations. We construct a composite phylogeny for these species, and we codify gravitational stress according to species habitat and behavior. We use conventional regression and phylogenetically independent contrasts to evaluate whether trait diversity is correlated with gravitational habitat related to evolutionary transitions within the composite tree topology. We demonstrate that snake species living in arboreal habitats, or which express strongly climbing behaviors, possess relatively short blood columns between the heart and the head, as well as relatively short vascular lungs, compared to terrestrial species. Aquatic species, which experience little or no gravity stress in water, show the reverse – significantly longer heart–head distance and longer vascular lungs. These phylogenetic differences complement the results of physiological studies and are reflected in multiple habitat transitions during the evolutionary histories of these snake lineages, providing strong evidence that heart–to–head distance and length of vascular lung are co–adaptive cardiopulmonary features of snakes. PMID:22079804
A MAP blind image deconvolution algorithm with bandwidth over-constrained
NASA Astrophysics Data System (ADS)
Ren, Zhilei; Liu, Jin; Liang, Yonghui; He, Yulong
2018-03-01
We demonstrate a maximum a posteriori (MAP) blind image deconvolution algorithm with bandwidth over-constraint and total variation (TV) regularization to recover a clear image from AO-corrected images. The point spread functions (PSFs) are estimated with their bandwidth limited to below the cutoff frequency of the optical system. Our algorithm performs well in avoiding noise magnification. The performance is demonstrated on simulated data.
Successive Over-Relaxation Technique for High-Performance Blind Image Deconvolution
2015-06-08
Keywords: deconvolution, space surveillance, Gauss-Seidel iteration. ... sensible approximate solutions to the ill-posed nonlinear inverse problem. These solutions are addressed as fixed points of the iteration, which consists in alternating approximations (AA) for the object and for the PSF, performed with a prescribed number of inner iterative descents from trivial (zero
Toward Overcoming the Local Minimum Trap in MFBD
2015-07-14
Publications (published) during the reporting period: A. Cornelio, E. Loli Piccolomini, and J. G. Nagy, Constrained Variable Projection Method for Blind Deconvolution; A. Cornelio, E. Loli Piccolomini, and J. G. Nagy, Constrained Numerical Optimization Methods for Blind Deconvolution, Numerical Algorithms, volume 65, issue 1.
NASA Astrophysics Data System (ADS)
Chu, Zhigang; Yang, Yang; He, Yansong
2015-05-01
Spherical harmonics beamforming (SHB) with solid spherical arrays has become a particularly attractive tool for acoustic source identification in cabin environments. However, it presents some intrinsic limitations, specifically poor spatial resolution and severe sidelobe contamination. This paper focuses on overcoming these limitations effectively by deconvolution. First and foremost, a new formulation is proposed that expresses SHB's output as a convolution of the true source strength distribution and the point spread function (PSF), defined as SHB's response to a unit-strength point source. Additionally, the typical deconvolution methods initially suggested for planar arrays, the deconvolution approach for the mapping of acoustic sources (DAMAS), nonnegative least-squares (NNLS), Richardson-Lucy (RL) and CLEAN, are successfully adapted to SHB, yielding highly resolved, deblurred maps. Finally, the merits of the deconvolution methods are validated, and the relationships of the source strength and pressure contribution reconstructed by the deconvolution methods vs. focus distance are explored both with computer simulations and experimentally. Several interesting results have emerged from this study: (1) compared with SHB, DAMAS, NNLS, RL and CLEAN can all not only improve the spatial resolution dramatically but also reduce or even eliminate the sidelobes effectively, allowing clear and unambiguous identification of a single source or incoherent sources. (2) RL is the most suitable for coherent sources, followed by DAMAS and NNLS, while CLEAN is the least suitable due to its failure to suppress sidelobes. (3) The previous two results hold whether or not the real distance from the source to the array center equals the assumed one, referred to as the focus distance. (4) The true source strength can be recovered by dividing the reconstructed one by a coefficient that is the square of the focus distance divided by the real distance from the source to the array center. (5) The reconstructed pressure contribution is almost unaffected by the focus distance, always approximating the true one. This study will be of great significance to the accurate localization and quantification of acoustic sources in cabin environments.
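With the convolution formulation above, the NNLS variant reduces to a single non-negative least-squares solve. A minimal sketch under that formulation (assembly of the PSF matrix is assumed done elsewhere; DAMAS would replace the solver with a Gauss-Seidel-style iteration):

```python
import numpy as np
from scipy.optimize import nnls

def deconvolve_shb_map(b, psf_matrix):
    """Deconvolve an SHB output map by solving b = A q with q >= 0.

    b: beamforming output flattened over the focus grid (length N).
    psf_matrix: N x N matrix whose column j is SHB's response to a
    unit-strength point source at grid point j (the PSF).
    """
    q, _ = nnls(psf_matrix, b)
    return q    # deblurred source strength distribution
```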
2017-01-01
Photoelectrochemical hydrogen evolution is a promising avenue to store the energy of sunlight in the form of chemical bonds. The recent rapid development of new synthetic approaches enables the nanoscale engineering of semiconductor photoelectrodes, thus tailoring their physicochemical properties toward efficient H2 formation. In this work, we carried out the parallel optimization of the morphological features of the semiconductor light absorber (NiO) and the cocatalyst (Pt). While nanoporous NiO films were obtained by electrochemical anodization, the monodisperse Pt nanoparticles were synthesized using wet chemical methods. The Pt/NiO nanocomposites were characterized by XRD, XPS, SEM, ED, TEM, cyclic voltammetry, photovoltammetry, EIS, etc. The relative enhancement of the photocurrent was demonstrated as a function of the nanoparticle size and loading. For mass-specific surface activity the smallest nanoparticles (2.0 and 4.8 nm) showed the best performance. After deconvoluting the trivial geometrical effects (stemming from the variation of Pt particle size and thus the electroactive surface area), however, the intermediate particle sizes (4.8 and 7.2 nm) were found to be optimal. Under optimized conditions, a 20-fold increase in the photocurrent (and thus the H2 evolution rates) was observed for the nanostructured Pt/NiO composite, compared to the benchmark nanoparticulate NiO film. PMID:28620447
Saber, Ali; Hiltermann, T Jeroen N; Kok, Klaas; Terpstra, M Martijn; de Lange, Kim; Timens, Wim; Groen, Harry J M; van den Berg, Anke
2017-02-01
Several studies have shown heterogeneity in lung cancer, with parallel existence of multiple subclones characterized by their own specific mutational landscape. The extent to which minor clones become dominant in distinct metastasis is not clear. The aim of our study was to gain insight in the evolution pattern of lung cancer by investigating genomic heterogeneity between primary tumor and its distant metastases. Whole exome sequencing (WES) was performed on 24 tumor and five normal samples of two small cell lung carcinoma (SCLC) and three non-SCLC (NSCLC) patients. Validation of somatic variants in these 24 and screening of 33 additional samples was done by single primer enrichment technology. For each of the three NSCLC patients, about half of the mutations were shared between all tumor samples, whereas for SCLC patients, this percentage was around 95. Independent validation of the non-ubiquitous mutations confirmed the WES data for the vast majority of the variants. Phylogenetic trees indicated more distance between the tumor samples of the NSCLC patients as compared to the SCLC patients. Analysis of 30 independent DNA samples of 16 biopsies used for WES revealed a low degree of intra-tumor heterogeneity of the selected sets of mutations. In the primary tumors of all five patients, variable percentages (19-67%) of the seemingly metastases-specific mutations were present albeit at low read frequencies. Patients with advanced NSCLC have a high percentage of non-ubiquitous mutations indicative of branched evolution. In contrast, the low degree of heterogeneity in SCLC suggests a parallel and linear model of evolution. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Lung Tumors Treated With Percutaneous Radiofrequency Ablation: Computed Tomography Imaging Follow-Up
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palussiere, Jean, E-mail: palussiere@bergonie.org; Marcet, Benjamin; Descat, Edouard
2011-10-15
Purpose: To describe the morphologic evolution of lung tumors treated with radiofrequency ablation (RFA) by way of computed tomography (CT) images and to investigate patterns of incomplete RFA at the site of ablation. Materials and Methods: One hundred eighty-nine patients with 350 lung tumors treated with RFA underwent CT imaging at 2, 4, 6, and 12 months. CT findings were interpreted separately by two reviewers with consensus. Five different radiologic patterns were predefined: fibrosis, cavitation, nodule, atelectasis, and disappearance. The appearance of the treated area was evaluated at each follow-up CT using the predefined patterns. Results: At 1 year after treatment, the most common evolutions were fibrosis (50.5%) or nodules (44.8%). Differences were noted depending on the initial size of the tumor, with fibrosis occurring more frequently for tumors <2 cm (58.6% vs. 22.9%, P = 1 × 10⁻⁵). Cavitation and atelectasis were less frequent patterns (2.4% and 1.4%, respectively, at 1 year). Tumor location (intraparenchymatous, with pleural contact <50% or >50%) was not significantly correlated with follow-up image pattern. Local tumor progressions were observed with each type of evolution. At 1 year, 12 local recurrences were noted: 2 cavitations, which represented 40% of the cavitations noted at 1 year; 2 fibroses (1.9%); 7 nodules (7.4%); and 1 atelectasis (33.3%). Conclusion: After RFA of lung tumors, follow-up CT scans show that the shape of the treatment zone can evolve in five different patterns. None of these patterns, however, can confirm the absence of further local tumor progression at subsequent follow-up.
Harper, Brett; Neumann, Elizabeth K; Stow, Sarah M; May, Jody C; McLean, John A; Solouki, Touradj
2016-10-05
Ion mobility (IM) is an important analytical technique for determining ion collision cross section (CCS) values in the gas-phase and gaining insight into molecular structures and conformations. However, limited instrument resolving powers for IM may restrict adequate characterization of conformationally similar ions, such as structural isomers, and reduce the accuracy of IM-based CCS calculations. Recently, we introduced an automated technique for extracting "pure" IM and collision-induced dissociation (CID) mass spectra of IM overlapping species using chemometric deconvolution of post-IM/CID mass spectrometry (MS) data [J. Am. Soc. Mass Spectrom., 2014, 25, 1810-1819]. Here we extend those capabilities to demonstrate how extracted IM profiles can be used to calculate accurate CCS values of peptide isomer ions which are not fully resolved by IM. We show that CCS values obtained from deconvoluted IM spectra match CCS values measured from the individually analyzed corresponding peptides on uniform field IM instrumentation. We introduce an approach that utilizes experimentally determined IM arrival time (AT) "shift factors" to compensate for ion acceleration variations during post-IM/CID and significantly improve the accuracy of the calculated CCS values. Also, we discuss details of this IM deconvolution approach and compare empirical CCS values from traveling wave (TW)IM-MS and drift tube (DT)IM-MS with theoretically calculated CCS values using the projected superposition approximation (PSA). For example, experimentally measured deconvoluted TWIM-MS mean CCS values for doubly-protonated RYGGFM, RMFGYG, MFRYGG, and FRMYGG peptide isomers were 288.8 Å², 295.1 Å², 296.8 Å², and 300.1 Å²; all four of these CCS values were within 1.5% of independently measured DTIM-MS values. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Lester, D. F.; Harvey, P. M.; Joy, M.; Ellis, H. B., Jr.
1986-01-01
Far-infrared continuum studies from the Kuiper Airborne Observatory are described that are designed to fully exploit the small-scale spatial information that this facility can provide. This work gives the clearest picture to date of the structure of galactic and extragalactic star-forming regions in the far infrared. Work is presently being done with slit scans taken simultaneously at 50 and 100 microns, yielding one-dimensional data. Scans of sources in different directions have been used to extract some information on two-dimensional structure. Planned work with linear arrays will allow us to generalize our techniques to two-dimensional image restoration. For faint sources, spatial information at the diffraction limit of the telescope is obtained, while for brighter sources, nonlinear deconvolution techniques have allowed us to improve over the diffraction limit by as much as a factor of four. Information on the details of the color temperature distribution is derived as well. This is made possible by the accuracy with which the instrumental point-source profile (PSP) is determined at both wavelengths. While these two PSPs are different, data at different wavelengths can be compared by proper spatial filtering. Considerable effort has been devoted to implementing deconvolution algorithms. Nonlinear deconvolution methods offer the potential of superresolution, that is, inference of power at spatial frequencies that exceed D/λ. This potential arises from the algorithms' implicit assumption of positivity of the deconvolved data, a universally justifiable constraint for photon processes. We have tested two nonlinear deconvolution algorithms on our data: the Richardson-Lucy (R-L) method and the maximum entropy method (MEM). The limits of image deconvolution techniques for achieving spatial resolution are addressed.
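For reference, the Richardson-Lucy scheme mentioned here is a short multiplicative iteration whose updates preserve positivity, which is exactly the constraint credited with enabling superresolution. A minimal 1D sketch for slit-scan data (the names and iteration count are illustrative):

```python
import numpy as np

def richardson_lucy_1d(scan, psp, n_iter=50):
    """Richardson-Lucy deconvolution of a 1D slit scan.

    psp is the instrumental point-source profile, normalized here to unit
    sum. The multiplicative update keeps the estimate non-negative, which
    is what lets the method infer power beyond the diffraction limit.
    """
    psp = psp / psp.sum()
    psp_mirror = psp[::-1]
    estimate = np.full_like(scan, scan.mean())   # flat, positive start
    for _ in range(n_iter):
        model = np.convolve(estimate, psp, mode="same")
        ratio = scan / np.maximum(model, 1e-12)
        estimate *= np.convolve(ratio, psp_mirror, mode="same")
    return estimate
```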
Govindan, Ramaswamy; Mandrekar, Sumithra J; Gerber, David E; Oxnard, Geoffrey R; Dahlberg, Suzanne E; Chaft, Jamie; Malik, Shakun; Mooney, Margaret; Abrams, Jeffrey S; Jänne, Pasi A; Gandara, David R; Ramalingam, Suresh S; Vokes, Everett E
2015-12-15
The treatment of patients with metastatic non-small cell lung cancer (NSCLC) is slowly evolving from empirical cytotoxic chemotherapy to personalized treatment based on specific molecular alterations. Despite this 10-year evolution, targeted therapies have not been studied adequately in patients with resected NSCLC who have clearly defined actionable mutations. The advent of next-generation sequencing has now made it possible to characterize genomic alterations in unprecedented detail. The efforts begun by The Cancer Genome Atlas project to understand the complexities of the genomic landscape of lung cancer will be supplemented further by studying a large number of tumor specimens. The Adjuvant Lung Cancer Enrichment Marker Identification and Sequencing Trial (ALCHEMIST) is an NCI-sponsored national clinical trials network (NCTN) initiative to address the needs to refine therapy for early-stage NSCLC. This program will screen several thousand patients with operable lung adenocarcinoma to determine whether their tumors contain specific molecular alterations [epidermal growth factor receptor mutation (EGFR) and anaplastic lymphoma kinase rearrangement (ALK)], making them eligible for treatment trials that target these alterations. Patients with EGFR mutation or ALK gene rearrangement in their tumor will be randomized to placebo versus erlotinib or crizotinib, respectively, after completion of their standard adjuvant therapy. ALCHEMIST will also contain a large discovery component that will provide an opportunity to incorporate genomic studies to fully understand the clonal architecture, clonal evolution, and mechanisms of resistance to therapy. In this review, we describe the concept, rationale, and outline of ALCHEMIST and the plan for genomic studies in patients with lung adenocarcinoma. Clin Cancer Res; 21(24); 5439-44. ©2015 American Association for Cancer Research.
Wen, C; Wan, W; Li, F H; Tang, D
2015-04-01
The [110] cross-sectional samples of 3C-SiC/Si (001) were observed with a spherical aberration-corrected 300 kV high-resolution transmission electron microscope. Two images, taken away from the Scherzer focus condition and therefore not intuitive representations of the projected structures, were utilized for the deconvolution. The principle and procedure of image deconvolution and atomic sort recognition are summarized. The restoration of the defect structure, together with the recognition of Si and C atoms from the experimental images, is illustrated. Structure maps of an intrinsic stacking fault in the SiC area, and of Lomer and 60° shuffle dislocations at the interface, have been obtained at the atomic level. Copyright © 2015 Elsevier Ltd. All rights reserved.
Sheet-scanned dual-axis confocal microscopy using Richardson-Lucy deconvolution.
Wang, D; Meza, D; Wang, Y; Gao, L; Liu, J T C
2014-09-15
We have previously developed a line-scanned dual-axis confocal (LS-DAC) microscope with subcellular resolution suitable for high-frame-rate diagnostic imaging at shallow depths. Due to the loss of confocality along one dimension, the contrast (signal-to-background ratio) of an LS-DAC microscope is degraded compared to that of a point-scanned DAC microscope. However, by using an sCMOS camera for detection, a short oblique light sheet is imaged at each scanned position. Therefore, by scanning the light sheet in only one dimension, a thin 3D volume is imaged. Both sequential two-dimensional deconvolution and three-dimensional deconvolution are performed on the thin image volume to improve the resolution and contrast of one en face confocal image section at the center of the volume, a technique we call sheet-scanned dual-axis confocal (SS-DAC) microscopy.
NASA Astrophysics Data System (ADS)
Favalli, A.; Furetta, C.; Zaragoza, E. Cruz; Reyes, A.
The aim of this work is to study the main thermoluminescence (TL) characteristics of the inorganic polyminerals extracted from dehydrated Jamaica flower or roselle (Hibiscus sabdariffa L.), belonging to the Malvaceae family, of Mexican origin. The TL emission properties of the polymineral fraction in powder form were studied using the initial rise (IR) method. The complex structure and kinetic parameters of the glow curves have been analysed accurately using computerized glow curve deconvolution (CGCD), assuming an exponential distribution of trapping levels. The extension of the IR method to the case of a continuous, exponential distribution of traps is reported, as is the derivation of the TL glow curve deconvolution functions for a continuous trap distribution. CGCD is performed both for a frequency factor s that is independent of temperature and for s as a function of temperature.
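For orientation, glow curve deconvolution in its simplest form fits the measured curve with a sum of single-peak kinetic functions. The sketch below uses the discrete first-order (Kitis) peak shape rather than the exponential trap distribution treated in this work, so it is a simplified stand-in; all names and initial guesses are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

K_B = 8.617e-5  # Boltzmann constant, eV/K

def glow_peak(T, Im, E, Tm):
    """Kitis first-order glow-peak shape: Im = peak height, E = activation
    energy (eV), Tm = peak temperature (K)."""
    x = (E / (K_B * T)) * (T - Tm) / Tm
    return Im * np.exp(1.0 + x
                       - (T / Tm) ** 2 * np.exp(x) * (1.0 - 2.0 * K_B * T / E)
                       - 2.0 * K_B * Tm / E)

def fit_glow_curve(T, intensity, peaks):
    """Fit a glow curve as a sum of first-order peaks; `peaks` holds initial
    (Im, E, Tm) guesses, e.g. from the initial-rise method."""
    def model(T, *p):
        p = np.reshape(p, (-1, 3))
        return sum(glow_peak(T, *row) for row in p)
    popt, _ = curve_fit(model, T, intensity, p0=np.ravel(peaks))
    return np.reshape(popt, (-1, 3))   # fitted (Im, E, Tm) per peak
```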
NASA Technical Reports Server (NTRS)
Liang, Steven Y.; Dornfeld, David A.; Nickerson, Jackson A.
1987-01-01
The coloring effect on the acoustic emission (AE) signal due to the frequency response of the data acquisition/processing instrumentation may bias the interpretation of AE signal characteristics. In this paper, a frequency-domain deconvolution technique, which involves identifying the instrumentation transfer functions and multiplying the AE signal spectrum by the inverse of these system functions, has been carried out. In this way, changes in AE signal characteristics can be better interpreted as resulting from changes in the state of the process alone. A punch stretching process was used as an example to demonstrate the application of the technique. Results showed that, through the deconvolution, the frequency characteristics of AE signals generated during stretching became more distinctive and could be used more effectively as tools for process monitoring.
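In its simplest form, the described technique is a spectral division: transform the AE record, multiply by a (regularized) inverse of the measured instrumentation transfer function, and transform back. A minimal sketch, with the regularization constant eps as an illustrative assumption:

```python
import numpy as np

def deconvolve_ae(signal, system_freq_response, eps=1e-6):
    """Remove instrumentation coloring from an acoustic emission record by
    multiplying its spectrum with a regularized inverse of the measured
    system transfer function (frequency-domain deconvolution).

    system_freq_response must be sampled on the same rfft frequency grid
    as the signal spectrum.
    """
    S = np.fft.rfft(signal)
    H = system_freq_response
    inv = np.conj(H) / (np.abs(H) ** 2 + eps)   # regularized inverse filter
    return np.fft.irfft(S * inv, len(signal))
```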
Fors, Octavi; Núñez, Jorge; Otazu, Xavier; Prades, Albert; Cardinal, Robert D.
2010-01-01
In this paper we show how the techniques of image deconvolution can increase the ability of image sensors as, for example, CCD imagers, to detect faint stars or faint orbital objects (small satellites and space debris). In the case of faint stars, we show that this benefit is equivalent to double the quantum efficiency of the used image sensor or to increase the effective telescope aperture by more than 30% without decreasing the astrometric precision or introducing artificial bias. In the case of orbital objects, the deconvolution technique can double the signal-to-noise ratio of the image, which helps to discover and control dangerous objects as space debris or lost satellites. The benefits obtained using CCD detectors can be extrapolated to any kind of image sensors. PMID:22294896
Regression-assisted deconvolution.
McIntyre, Julie; Stefanski, Leonard A
2011-06-30
We present a semi-parametric deconvolution estimator for the density function of a random variable X that is measured with error, a common challenge in many epidemiological studies. Traditional deconvolution estimators rely only on assumptions about the distribution of X and the error in its measurement, and ignore information available in auxiliary variables. Our method assumes the availability of a covariate vector statistically related to X by a mean-variance function regression model, where regression errors are normally distributed and independent of the measurement errors. Simulations suggest that the estimator achieves a much lower integrated squared error than the observed-data kernel density estimator when models are correctly specified and the assumption of normal regression errors is met. We illustrate the method using anthropometric measurements of newborns to estimate the density function of newborn length. Copyright © 2011 John Wiley & Sons, Ltd.
Respiratory mechanics in brain injury: A review.
Koutsoukou, Antonia; Katsiari, Maria; Orfanos, Stylianos E; Kotanidou, Anastasia; Daganou, Maria; Kyriakopoulou, Magdalini; Koulouris, Nikolaos G; Rovina, Nikoletta
2016-02-04
Several clinical and experimental studies have shown that lung injury occurs shortly after brain damage. The responsible mechanisms involve neurogenic pulmonary edema, inflammation, the harmful action of neurotransmitters, or autonomic system dysfunction. Mechanical ventilation, an essential component of life support in brain-damaged (BD) patients, may be an additional traumatic factor for lungs that are already injured or susceptible to injury, worsening lung injury if non-lung-protective ventilator settings are applied. Measurement of respiratory mechanics in BD patients, along with assessment of its evolution during mechanical ventilation, may allow preclinical lung injury to be detected early enough to select ventilator settings that avoid ventilator-induced lung injury. The aim of this review is to explore the mechanical properties of the respiratory system in BD patients along with the underlying mechanisms, and to translate the evidence of animal and clinical studies into therapeutic implications regarding the mechanical ventilation of these critically ill patients.
Ganoderma lucidum targeting lung cancer signaling: A review.
Gill, Balraj Singh; Navgeet; Kumar, Sanjeev
2017-06-01
Lung cancer causes enormous mortality worldwide, and pharmaceutical companies require new drugs, whether synthetic or natural, targeting it. Conventional therapies cause side effects; natural products are therefore being explored as therapeutic candidates in lung cancer. Chemical diversity among natural products highlights the impact of evolution and survival of the fittest. One such neglected natural product is Ganoderma lucidum, long used for promoting health and longevity. The major bioconstituents of G. lucidum are mainly terpenes, polysaccharides, and proteins, which have been explored for various activities ranging from apoptosis to autophagy. The bioconstituents of G. lucidum activate plasma membrane receptors and initiate various downstream signaling pathways leading to nuclear factor-κB, phosphoinositide 3-kinase, Akt, and mammalian target of rapamycin in cancer. The bioconstituents regulate the expression of various genes involved in cell cycle, immune response, apoptosis, and autophagy in lung cancer. This review highlights, for the first time, the inextricable role of G. lucidum and its bioconstituents in lung cancer signaling.
XDGMM: eXtreme Deconvolution Gaussian Mixture Modeling
NASA Astrophysics Data System (ADS)
Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.
2017-08-01
XDGMM uses Gaussian mixtures to perform density estimation of noisy, heterogeneous, and incomplete data using extreme deconvolution (XD) algorithms, and is compatible with scikit-learn machine learning methods. It implements both the astroML and Bovy et al. (2011) algorithms, and extends the BaseEstimator class from scikit-learn so that cross-validation methods work. It allows the user to produce a conditioned model if the values of some parameters are known.
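A hypothetical usage sketch follows, based only on the interface described above (a scikit-learn-style estimator fit on data plus per-point noise covariances); the import path, constructor arguments, and method signatures are assumptions, not verified against the package:

```python
# Hypothetical usage of XDGMM; all names below are assumed, not verified.
import numpy as np
from xdgmm import XDGMM  # assumed import path

X = np.random.randn(500, 2)                    # noisy 2D observations
Xerr = np.tile(0.1 * np.eye(2), (500, 1, 1))   # per-sample noise covariances

model = XDGMM(n_components=3)      # assumed constructor argument
model.fit(X, Xerr)                 # deconvolves the noise while fitting
samples = model.sample(1000)       # draws from the underlying (clean) density
# Because XDGMM extends sklearn's BaseEstimator, cross-validation utilities
# such as sklearn.model_selection.GridSearchCV should work on it as well.
```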
1983-06-01
system, provides a convenient, low-noise, fully parallel method of improving contrast and enhancing structural detail in an image prior to input to a ... directed towards problems in deconvolution, reconstruction from projections, bandlimited extrapolation, and shift-varying deblurring of images ... deconvolution algorithm has been studied with promising results [1] for simulated motion blurs. Future work will focus on noise effects and the extension
2014-02-24
NRL/MR/6110--14-9521, Naval Research Laboratory, Washington, DC. Approved for public release; distribution is unlimited. Chemometric Deconvolution of Continuous Electrokinetic Injection Micellar... Kevin Johnson, Navy Technology Center for Safety and... Science & Engineering Apprenticeship Program, American Society for Engineering Education, Washington, DC.
Enhanced Seismic Imaging of Turbidite Deposits in Chicontepec Basin, Mexico
NASA Astrophysics Data System (ADS)
Chavez-Perez, S.; Vargas-Meleza, L.
2007-05-01
We test, as postprocessing tools, a combination of migration deconvolution and geometric attributes to attack the complex problems of reflector resolution and detection in migrated seismic volumes. Migration deconvolution has been empirically shown to be an effective approach for enhancing the illumination of migrated images, which are blurred versions of the subsurface reflectivity distribution, by decreasing imaging artifacts, improving spatial resolution, and alleviating acquisition footprint problems. We utilize migration deconvolution as a means to improve the quality and resolution of 3D prestack time-migrated results from the Chicontepec basin, Mexico, a very relevant portion of the producing onshore sector of Pemex, the Mexican petroleum company. The seismic data cover the Agua Fria, Coapechaca, and Tajin fields. They exhibit acquisition footprint problems, migration artifacts and a severe lack of resolution in the target area, where turbidite deposits need to be characterized between major erosional surfaces. Vertical resolution is about 35 m and the main hydrocarbon plays are turbidite beds no more than 60 m thick. We also employ geometric attributes (e.g., coherent energy and curvature), computed after migration deconvolution, to detect and map out depositional features and to help design development wells in the area. Results of this workflow show imaging enhancement and allow us to identify meandering channels and individual sand bodies previously indistinguishable in the original migrated seismic images.
Dependence of quantitative accuracy of CT perfusion imaging on system parameters
NASA Astrophysics Data System (ADS)
Li, Ke; Chen, Guang-Hong
2017-03-01
Deconvolution is a popular method for calculating parametric perfusion parameters from four-dimensional CT perfusion (CTP) source images. During the deconvolution process, the four-dimensional space is squeezed into three-dimensional space by removing the temporal dimension, and prior knowledge is often used to suppress noise associated with the process. These additional complexities confound understanding of the deconvolution-based CTP imaging system and of how its quantitative accuracy depends on the parameters and sub-operations involved in the image formation process. Meanwhile, there has been a strong clinical need to answer this question, as physicians often rely heavily on the quantitative values of perfusion parameters to make diagnostic decisions, particularly in emergent clinical situations (e.g. diagnosis of acute ischemic stroke). The purpose of this work was to develop a theoretical framework that quantitatively relates the quantification accuracy of parametric perfusion parameters to CTP acquisition and post-processing parameters. This goal was achieved with the help of a cascaded systems analysis for deconvolution-based CTP imaging systems. Based on the cascaded systems analysis, the quantitative relationship between regularization strength, source image noise, arterial input function, and the quantification accuracy of perfusion parameters was established. The theory could potentially be used to guide developments of CTP imaging technology for better quantification accuracy and lower radiation dose.
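As context for the regularization-strength dependence discussed above, a common deconvolution scheme in CTP is truncated-SVD inversion of the convolution model c(t) = CBF · (AIF ⊛ R)(t). The sketch below is a minimal illustration under that standard model, not this paper's cascaded-systems framework; the truncation fraction lam plays the role of the regularization strength:

```python
import numpy as np
from scipy.linalg import toeplitz

def ctp_deconvolution(tissue_tac, aif, dt, lam=0.2):
    """Estimate the flow-scaled residue function k(t) = CBF * R(t) from a
    tissue time-attenuation curve and the arterial input function by
    truncated-SVD deconvolution.

    lam sets the truncation: singular values below lam * s_max are
    discarded. Stronger truncation suppresses noise but biases the
    recovered CBF (the peak of k) low.
    """
    A = dt * np.tril(toeplitz(aif))          # discrete convolution matrix
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > lam * s.max(), 1.0 / s, 0.0)
    k = Vt.T @ (s_inv * (U.T @ tissue_tac))  # regularized pseudo-inverse
    return k, k.max()                        # residue curve and CBF estimate
```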
Data Dependent Peak Model Based Spectrum Deconvolution for Analysis of High Resolution LC-MS Data
2015-01-01
A data-dependent peak model (DDPM) based spectrum deconvolution method was developed for analysis of high-resolution LC-MS data. To construct the extracted ion chromatograms (XICs), a clustering method, density-based spatial clustering of applications with noise (DBSCAN), is applied to all m/z values of an LC-MS data set to group the m/z values into XICs. DBSCAN constructs XICs without the need for a user-defined m/z variation window. After XIC construction, the peaks of molecular ions in each XIC are detected using both first and second derivative tests, followed by an optimized chromatographic peak model selection method for peak deconvolution. A total of six chromatographic peak models are considered: Gaussian, log-normal, Poisson, gamma, exponentially modified Gaussian, and a hybrid of exponential and Gaussian models. The abundant non-overlapping peaks are chosen to find the optimal peak models, which are both data- and retention-time-dependent. Analysis of 18 spiked-in LC-MS data sets demonstrates that the proposed DDPM spectrum deconvolution method outperforms the traditional method. On average, the DDPM approach not only detected 58 more chromatographic peaks from each of the test LC-MS data sets but also improved the retention time and peak area estimates by 3% and 6%, respectively. PMID:24533635
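The XIC-construction step can be sketched with scikit-learn's DBSCAN; clustering on log(m/z) makes eps behave like a relative (ppm-scale) tolerance. The parameter values below are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def build_xics(mz, rt, intensity, eps_ppm=10.0, min_samples=3):
    """Group (m/z, retention time, intensity) points of an LC-MS run into
    extracted ion chromatograms by clustering m/z values with DBSCAN,
    avoiding a user-defined m/z tolerance window."""
    labels = DBSCAN(eps=eps_ppm * 1e-6, min_samples=min_samples).fit_predict(
        np.log(mz).reshape(-1, 1))
    xics = {}
    for lab in np.unique(labels):
        if lab == -1:                 # DBSCAN noise points
            continue
        idx = labels == lab
        order = np.argsort(rt[idx])   # sort each XIC by retention time
        xics[lab] = (rt[idx][order], intensity[idx][order])
    return xics
```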
NASA Astrophysics Data System (ADS)
Faber, T. L.; Raghunath, N.; Tudorascu, D.; Votaw, J. R.
2009-02-01
Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. Existing correction methods that use known patient motion obtained from tracking devices require either multi-frame acquisitions, detailed knowledge of the scanner, or specialized reconstruction algorithms. A deconvolution algorithm has been developed that alleviates these drawbacks by using the reconstructed image to estimate the original non-blurred image via maximum likelihood expectation maximization (MLEM) techniques. A high-resolution digital phantom was created by shape-based interpolation of the digital Hoffman brain phantom. Three different sets of 20 movements were applied to the phantom. For each frame of the motion, sinograms with attenuation and three levels of noise were simulated and then reconstructed using filtered backprojection. The average of the 20 frames was considered the motion-blurred image, which was restored with the deconvolution algorithm. After correction, contrast increased from a mean of 2.0, 1.8 and 1.4 in the motion-blurred images, for the three increasing amounts of movement, to a mean of 2.5, 2.4 and 2.2. Mean error was reduced by an average of 55% with motion correction. In conclusion, deconvolution can be used for correction of motion blur when subject motion is known.
Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction
NASA Astrophysics Data System (ADS)
Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng
2017-01-01
Most previous regularization methods for solving the inverse problem of force reconstruction minimize the l2-norm of the desired force. However, such traditional regularization methods, e.g., Tikhonov regularization and truncated singular value decomposition, commonly fail to solve the large-scale ill-posed inverse problem at moderate computational cost. In this paper, taking into account the sparse character of impact forces, the idea of sparse deconvolution is first introduced to the field of impact force reconstruction, and a general sparse deconvolution model of impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve such a large-scale sparse deconvolution model, where minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, the preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments, covering small- or medium-scale single impact force reconstruction and relatively large-scale consecutive impact force reconstruction, are conducted on a composite wind turbine blade and a shell structure to illustrate the advantage of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate and robust, whether in single impact force reconstruction or in consecutive impact force reconstruction.
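For readers who want to experiment, the same l1 sparse deconvolution model can be solved with the simpler ISTA iteration sketched below; this is a substitute for, not an implementation of, the paper's PDIPM solver, which converges in far fewer iterations on large problems:

```python
import numpy as np

def sparse_deconvolution_ista(H, y, lam=0.1, n_iter=500):
    """Reconstruct a sparse impact force f from the response y = H f by
    minimizing 0.5 * ||H f - y||^2 + lam * ||f||_1 with ISTA
    (iterative shrinkage-thresholding)."""
    L = np.linalg.norm(H, 2) ** 2      # Lipschitz constant of the gradient
    f = np.zeros(H.shape[1])
    for _ in range(n_iter):
        grad = H.T @ (H @ f - y)       # gradient of the quadratic term
        z = f - grad / L
        f = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return f
```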
Krudopp, Heimke; Sönnichsen, Frank D; Steffen-Heins, Anja
2015-08-15
The partitioning behavior of paramagnetic nitroxides in dispersed systems can be determined by deconvolution of electron paramagnetic resonance (EPR) spectra, giving results equivalent to the validated methods of ultrafiltration (UF) and pulsed-field gradient nuclear magnetic resonance spectroscopy (PFG-NMR). The partitioning behavior of nitroxides of increasing lipophilicity was investigated in anionic, cationic and nonionic micellar systems and 10 wt% o/w emulsions. Apart from EPR spectra deconvolution, PFG-NMR was used in micellar solutions as a non-destructive approach, while UF was based on separation of a very small volume of the aqueous phase. As a function of their substituent and lipophilicity, the proportions of nitroxides solubilized in the micellar or emulsion interface increased with increasing nitroxide lipophilicity for all emulsifiers used. Comparing the different approaches, EPR deconvolution and UF revealed comparable nitroxide proportions solubilized in the interfaces; those proportions were higher than found with PFG-NMR. For the PFG-NMR self-diffusion experiments the reduced nitroxides were used, revealing the highly dynamic behavior of hydroxylamines and emulsifiers. Deconvolution of EPR spectra turned out to be the preferred method for measuring the partitioning behavior of paramagnetic molecules, as it enables distinguishing between several populations at their individual solubilization sites. Copyright © 2015 Elsevier Inc. All rights reserved.
Extraction of near-surface properties for a lossy layered medium using the propagator matrix
Mehta, K.; Snieder, R.; Graizer, V.
2007-01-01
Near-surface properties play an important role in advancing earthquake hazard assessment. Other areas where near-surface properties are crucial include civil engineering and the detection and delineation of potable groundwater. From an exploration point of view, near-surface properties are needed for wavefield separation and for correcting for the local near-receiver structure. It has been shown that these properties can be estimated for a lossless homogeneous medium using the propagator matrix. To estimate the near-surface properties, we apply deconvolution to passive borehole recordings of waves excited by an earthquake. Deconvolution of these incoherent waveforms, recorded by sensors at different depths in the borehole, with the recording at the surface results in waves that propagate upwards and downwards along the array. These waves, obtained by deconvolution, can be used to estimate the P- and S-wave velocities near the surface. As opposed to waves obtained by cross-correlation, which represent a filtered version of the sum of the causal and acausal Green's functions between the two receivers, the waves obtained by deconvolution represent the elements of the propagator matrix. Finally, we show analytically the extension of the propagator matrix analysis to a lossy layered medium for the special case of normal incidence. © 2007 The Authors. Journal compilation © 2007 RAS.
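The deconvolution step itself can be sketched as a water-level-regularized spectral division of each depth recording by the surface recording; the water-level fraction is an illustrative assumption:

```python
import numpy as np

def receiver_deconvolution(u_depth, u_surface, water_level=0.01):
    """Deconvolve the waveform recorded at depth with the surface recording
    using water-level-regularized spectral division.

    The result is a band-limited up- and down-going wave pair whose
    travel-time moveout along the borehole array gives the near-surface
    P- or S-wave velocity.
    """
    U_d = np.fft.rfft(u_depth)
    U_s = np.fft.rfft(u_surface)
    power = np.abs(U_s) ** 2
    floor = water_level * power.max()   # water level stabilizes the division
    d = U_d * np.conj(U_s) / np.maximum(power, floor)
    return np.fft.irfft(d, len(u_depth))
```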
NASA Astrophysics Data System (ADS)
Zhou, Q.; Michailovich, O.; Rathi, Y.
2014-03-01
High angular resolution diffusion imaging (HARDI) improves upon more traditional diffusion tensor imaging (DTI) in its ability to resolve the orientations of crossing and branching neural fibre tracts. The HARDI signals are measured over a spherical shell in q-space and are usually used as an input to q-ball imaging (QBI), which allows estimation of the diffusion orientation distribution functions (ODFs) associated with a given region of interest. Unfortunately, the partial nature of single-shell sampling imposes limits on the estimation accuracy. As a result, the recovered ODFs may not possess sufficient resolution to reveal the orientations of fibre tracts that cross each other at acute angles. A possible solution to the problem of limited resolution of QBI is provided by means of spherical deconvolution, a particular instance of which is sparse deconvolution. However, while capable of yielding high-resolution reconstructions over spatial locations corresponding to white matter, such methods tend to become unstable when applied to anatomical regions with a substantial content of isotropic diffusion. To resolve this problem, a new deconvolution approach is proposed in this paper. Apart from being uniformly stable across the whole brain, the proposed method allows one to quantify the isotropic component of cerebral diffusion, which is known to be a useful diagnostic measure by itself.
Convex blind image deconvolution with inverse filtering
NASA Astrophysics Data System (ADS)
Lv, Xiao-Guang; Li, Fang; Zeng, Tieyong
2018-03-01
Blind image deconvolution is the process of estimating both the original image and the blur kernel from the degraded image with only partial or no information about the degradation and the imaging system. It is a bilinear ill-posed inverse problem corresponding to the direct problem of convolution. Regularization methods are used to handle the ill-posedness of blind deconvolution and obtain meaningful solutions. In this paper, we investigate a convex regularized inverse filtering method for blind deconvolution of images. We assume that the support region of the blur kernel is known, as has been done in a few existing works. By studying the inverse filters of signal and image restoration problems, we observe the oscillation structure of the inverse filters. Inspired by this oscillation structure, we propose to use the star norm to regularize the inverse filter. Meanwhile, we use the total variation to regularize the resulting image obtained by convolving the inverse filter with the degraded image. The proposed minimization model is shown to be convex. We employ the first-order primal-dual method for the solution of the proposed minimization model. Numerical examples for blind image restoration are given to show that the proposed method outperforms some existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual quality and time consumption.
Model-free quantification of dynamic PET data using nonparametric deconvolution
Zanderigo, Francesca; Parsey, Ramin V; Todd Ogden, R
2015-01-01
Dynamic positron emission tomography (PET) data are usually quantified using compartment models (CMs) or derived graphical approaches. Often, however, CMs either do not properly describe the tracer kinetics, or are not identifiable, leading to nonphysiologic estimates of the tracer binding. The PET data are modeled as the convolution of the metabolite-corrected input function and the tracer impulse response function (IRF) in the tissue. Using nonparametric deconvolution methods, it is possible to obtain model-free estimates of the IRF, from which functionals related to tracer volume of distribution and binding may be computed, but this approach has rarely been applied in PET. Here, we apply nonparametric deconvolution using singular value decomposition to simulated and test–retest clinical PET data with four reversible tracers well characterized by CMs ([11C]CUMI-101, [11C]DASB, [11C]PE2I, and [11C]WAY-100635), and systematically compare reproducibility, reliability, and identifiability of various IRF-derived functionals with those of traditional CM outcomes. Results show that nonparametric deconvolution, completely free of any model assumptions, allows for estimates of tracer volume of distribution and binding that are very close to the estimates obtained with CMs and, in some cases, show better test–retest performance than CM outcomes. PMID:25873427
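A minimal sketch of nonparametric deconvolution by truncated singular value decomposition follows, using a synthetic input function and a two-exponential IRF; the discretization, truncation threshold and outcome functional are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.linalg import toeplitz

# C(t) = convolution of the input function and the IRF; discretized as
# C = dt * A @ irf, with A the lower-triangular convolution matrix.
dt, n = 0.5, 120                              # minutes, number of frames (illustrative)
t = np.arange(n) * dt
input_fn = t * np.exp(-t / 2.0)               # synthetic metabolite-corrected input
irf_true = 0.8 * np.exp(-0.1 * t) + 0.2 * np.exp(-0.01 * t)
A = dt * toeplitz(input_fn, np.zeros(n))      # convolution as a matrix product
rng = np.random.default_rng(2)
tac = A @ irf_true + 0.02 * rng.standard_normal(n)   # noisy tissue time-activity curve

# Truncated SVD: discard small singular values that amplify noise.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = int(np.sum(s > 0.1 * s[0]))               # truncation threshold (tunable)
irf_est = Vt[:k].T @ ((U[:, :k].T @ tac) / s[:k])

VT = dt * irf_est.sum()                       # volume of distribution ~ integral of IRF
print(f"kept {k} of {s.size} singular values; VT estimate: {VT:.2f}")
```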
Modeling lung cancer evolution and preclinical response by orthotopic mouse allografts.
Ambrogio, Chiara; Carmona, Francisco J; Vidal, August; Falcone, Mattia; Nieto, Patricia; Romero, Octavio A; Puertas, Sara; Vizoso, Miguel; Nadal, Ernest; Poggio, Teresa; Sánchez-Céspedes, Montserrat; Esteller, Manel; Mulero, Francisca; Voena, Claudia; Chiarle, Roberto; Barbacid, Mariano; Santamaría, David; Villanueva, Alberto
2014-11-01
Cancer evolution is a process that is still poorly understood because of the lack of versatile in vivo longitudinal studies. By generating murine non-small cell lung cancer (NSCLC) orthoallobanks and paired primary cell lines, we provide a detailed description of an in vivo, time-dependent cancer malignization process. We identify the acquisition of metastatic dissemination potential, the selection of co-driver mutations, and the appearance of naturally occurring intratumor heterogeneity, thus recapitulating the stochastic nature of human cancer development. This approach combines the robustness of genetically engineered cancer models with the flexibility of allograft methodology. We have applied this tool for the preclinical evaluation of therapeutic approaches. This system can be implemented to improve the design of future treatments for patients with NSCLC. ©2014 American Association for Cancer Research.
KRAS-driven lung adenocarcinoma: combined DDR1/Notch inhibition as an effective therapy
Ambrogio, Chiara; Nadal, Ernest; Villanueva, Alberto; Gómez-López, Gonzalo; Cash, Timothy P; Barbacid, Mariano; Santamaría, David
2016-01-01
Understanding the early evolution of cancer heterogeneity during the initial steps of tumorigenesis can uncover vulnerabilities of cancer cells that may be masked at later stages. We describe a comprehensive approach employing gene expression analysis in early lesions to identify novel therapeutic targets and the use of mouse models to test synthetic lethal drug combinations to treat human Kirsten rat sarcoma viral oncogene homologue (KRAS)-driven lung adenocarcinoma. PMID:27843638
Cheng, Xinghua; Chen, Haiquan
2014-01-01
Lung cancer, mostly non-small cell lung cancer, continues to be the leading cause of cancer-related death worldwide. With the development of tyrosine kinase inhibitors that selectively target lung cancer-related epidermal growth factor receptor mutations, the management of advanced non-small cell lung cancer has been greatly transformed. Improvements in progression-free survival and patients' quality of life were observed in numerous clinical studies. However, overall survival is not prolonged because of later-acquired drug resistance. Recent studies reveal a heterogeneous subclonal architecture of lung cancer, so it is speculated that the tumor may rapidly adapt to environmental changes via a Darwinian selection mechanism. In this review, we aim to provide an overview of both spatial and temporal tumor heterogeneity as potential mechanisms underlying epidermal growth factor receptor tyrosine kinase inhibitor resistance in non-small cell lung cancer and summarize the possible origins of tumor heterogeneity, covering theories of cancer stem cells and clonal evolution, as well as genomic instability and epigenetic aberrations in lung cancer. Moreover, investigational measures that overcome heterogeneity-associated drug resistance and new assays to improve tumor assessment are also discussed. PMID:25285017
Tibboel, Jeroen; Keijzer, Richard; Reiss, Irwin; de Jongste, Johan C; Post, Martin
2014-06-01
The aim of this study was to characterize the evolution of lung function and structure in elastase-induced emphysema in adult mice and the effect of mesenchymal stromal cell (MSC) administration on these parameters. Adult mice were treated with intratracheal elastase (4.8 units/100 g body weight) to induce emphysema. MSCs were administered intratracheally or intravenously, before or after elastase injection. Lung function measurements and histological and morphometric analyses of lung tissue were performed at 3 weeks, 5 months and 10 months after elastase, and at 19, 20 and 21 days following MSC administration. Elastase-treated mice showed increased dynamic compliance and total lung capacity, and reduced tissue-specific elastance and forced expiratory flows at 3 weeks after elastase, which persisted during 10 months of follow-up. Histology showed heterogeneous alveolar destruction, which also persisted during long-term follow-up. Jugular vein injection of MSCs before elastase inhibited deterioration of lung function but had no effects on histology. Intratracheal MSC treatment did not modify lung function or histology. In conclusion, elastase-treated mice displayed persistent characteristics of pulmonary emphysema. Jugular vein injection of MSCs prior to elastase reduced deterioration of lung function. Intratracheal MSC treatment had no effect on lung function or histology.
Plasticity of lung development in the amphibian, Xenopus laevis
Rose, Christopher S.; James, Brandon
2013-01-01
Contrary to previous studies, we found that Xenopus laevis tadpoles raised in normoxic water without access to air can routinely complete metamorphosis with lungs that are either severely stunted and uninflated or absent altogether. This is the first demonstration that lung development in a tetrapod can be inhibited by environmental factors and that a tetrapod that relies significantly on lung respiration under unstressed conditions can be raised to forego this function without adverse effects. This study compared lung development in untreated, air-deprived (AD) and air-restored (AR) tadpoles and frogs using whole mounts, histology, BrdU labeling of cell division and antibody staining of smooth muscle actin. We also examined the relationship of swimming and breathing behaviors to lung recovery in AR animals. Inhibition and recovery of lung development occurred at the stage of lung inflation. Lung recovery in AR tadpoles occurred at a predictable and rapid rate and correlated with changes in swimming and breathing behavior. It thus presents a new experimental model for investigating the role of mechanical forces in lung development. Lung recovery in AR frogs was unpredictable and did not correlate with behavioral changes. Its low frequency of occurrence could be attributed to developmental, physical and behavioral changes, the effects of which increase with size and age. Plasticity of lung inflation at tadpole stages and loss of plasticity at postmetamorphic stages offer new insights into the role of developmental plasticity in amphibian lung loss and life history evolution. PMID:24337117
NASA Astrophysics Data System (ADS)
Darudi, Ahmad; Bakhshi, Hadi; Asgari, Reza
2015-05-01
In this paper we present the results of image restoration using data taken by a Hartmann sensor. The aberration is measured by a Hartmann sensor in which the object itself serves as the reference. The point spread function (PSF) is then simulated and used for image reconstruction with the Lucy-Richardson technique. A method is also presented for quantitative evaluation of the Lucy-Richardson deconvolution.
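The Lucy-Richardson update itself is standard; a minimal sketch follows, with a synthetic Gaussian PSF standing in for the PSF that would be simulated from Hartmann-sensor wavefront measurements.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, iterations=30):
    """Lucy-Richardson deconvolution (minimal sketch, nonnegative data)."""
    est = np.full_like(image, image.mean())
    psf_flip = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = fftconvolve(est, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)      # guard against division by zero
        est *= fftconvolve(ratio, psf_flip, mode="same")
    return est

# Illustrative use with a synthetic Gaussian PSF and a two-star scene.
y, x = np.mgrid[-7:8, -7:8]
psf = np.exp(-(x**2 + y**2) / 4.0); psf /= psf.sum()
scene = np.zeros((64, 64)); scene[20, 20] = scene[40, 45] = 1.0
observed = fftconvolve(scene, psf, mode="same") + 1e-3   # keep strictly positive
restored = richardson_lucy(observed, psf)
```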
2007-02-28
Z. Mu, R. Plemmons, and P. Santago, "Iterative Ultrasonic Signal and Image Deconvolution for Estimation of the Complex Medium Response," International Journal of Imaging Systems and Technology, 1767-1782, 2006. The project carried out rigorous mathematical and computational research on inverse problems in optical imaging of direct interest to the Army and also the intelligence agencies.
Adaptive Optics Image Restoration Based on Frame Selection and Multi-frame Blind Deconvolution
NASA Astrophysics Data System (ADS)
Tian, Yu; Rao, Chang-hui; Wei, Kai
Restricted by observing conditions and hardware, adaptive optics can only partially correct optical images blurred by atmospheric turbulence. A post-processing method based on frame selection and multi-frame blind deconvolution is proposed for the restoration of high-resolution adaptive optics images. Frame selection means that the degraded (blurred) images are first screened for participation in the iterative blind deconvolution calculation, which requires no a priori knowledge beyond a positivity constraint. This method has been applied to the restoration of stellar images observed by the 61-element adaptive optics system installed on the Yunnan Observatory 1.2 m telescope. The experimental results indicate that this method can effectively compensate for the residual errors that the adaptive optics system leaves in the image, and the restored image can reach diffraction-limited quality.
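Frame selection can be illustrated with a simple sharpness ranking. The sketch below scores frames by their high-spatial-frequency power and keeps the best fraction; the actual selection criterion used in the paper may differ.

```python
import numpy as np

def select_frames(frames, keep_fraction=0.2):
    """Rank short-exposure frames by a simple sharpness metric and keep the best.

    Sharpness here is the total high-frequency power of each frame, a common
    stand-in criterion; no a priori knowledge of the object is required, only
    that sharper frames carry more high spatial frequencies.
    """
    scores = []
    for f in frames:
        F = np.abs(np.fft.fft2(f - f.mean())) ** 2
        F[:2, :2] = 0.0                 # suppress the lowest-frequency corner
        scores.append(F.sum())
    order = np.argsort(scores)[::-1]
    n_keep = max(1, int(keep_fraction * len(frames)))
    return [frames[i] for i in order[:n_keep]]

# The selected frames would then enter the iterative multi-frame blind
# deconvolution with a positivity constraint, as described above.
```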
Huang, Yulin; Zha, Yuebo; Wang, Yue; Yang, Jianyu
2015-06-18
Forward-looking radar imaging is a practical and challenging problem for aircraft landing in adverse weather. Deconvolution methods can realize forward-looking imaging, but they often amplify noise in the radar image. In this paper, a forward-looking radar imaging method based on deconvolution is presented for aircraft landing in adverse weather. We first present the theoretical background of the forward-looking radar imaging task and its application to aircraft landing. Then, we convert the forward-looking radar imaging task into a corresponding deconvolution problem, which is solved in the framework of algebraic theory using the truncated singular value decomposition (TSVD) method. The key issue of selecting the truncation parameter is addressed using a generalized cross-validation (GCV) approach. Simulation and experimental results demonstrate that the proposed method is effective in achieving angular resolution enhancement while suppressing noise amplification in forward-looking radar imaging.
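A sketch of TSVD deconvolution with the truncation order chosen by generalized cross-validation, in the spirit of the approach described; the GCV formula is the standard one, and the antenna pattern, scene and noise are synthetic assumptions.

```python
import numpy as np
from scipy.linalg import toeplitz

def tsvd_gcv(A, b):
    """Truncated-SVD solution with the truncation order chosen by GCV.

    GCV(k) = ||A x_k - b||^2 / (m - k)^2; the minimizing k balances angular
    resolution enhancement against noise amplification.
    """
    m = A.shape[0]
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeffs = U.T @ b
    x = np.zeros(A.shape[1])
    best_k, best_gcv, best_x = 0, np.inf, x.copy()
    for k in range(1, s.size):
        x = x + (coeffs[k - 1] / s[k - 1]) * Vt[k - 1]   # add one SVD component
        gcv = np.sum((A @ x - b) ** 2) / (m - k) ** 2
        if gcv < best_gcv:
            best_k, best_gcv, best_x = k, gcv, x.copy()
    return best_x, best_k

# Illustrative scene: two close targets smeared by a (synthetic) antenna pattern.
n = 200
first = np.zeros(n)
first[:21] = np.exp(-0.5 * (np.arange(21) / 6.0) ** 2)   # half of a symmetric beam
A = toeplitz(first)
scene = np.zeros(n); scene[[80, 95]] = [1.0, 0.8]
rng = np.random.default_rng(3)
b = A @ scene + 0.01 * rng.standard_normal(n)
x_hat, k = tsvd_gcv(A, b)
print("GCV-chosen truncation order:", k)
```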
Towards real-time image deconvolution: application to confocal and STED microscopy
Zanella, R.; Zanghirati, G.; Cavicchioli, R.; Zanni, L.; Boccacci, P.; Bertero, M.; Vicidomini, G.
2013-01-01
Although deconvolution can improve the quality of images from any type of microscope, the high computational time required has so far limited its widespread adoption. Here we demonstrate the ability of the scaled-gradient-projection (SGP) method to provide accelerated versions of the most used algorithms in microscopy. To achieve further increases in efficiency, we also consider implementations on graphics processing units (GPUs). We test the proposed algorithms on both synthetic and real data from confocal and STED microscopy. Combining the SGP method with the GPU implementation we achieve speed-up factors from about 25 to 690 with respect to the conventional algorithms. The excellent results obtained on STED microscopy images demonstrate the synergy between super-resolution techniques and image deconvolution. Further, the real-time processing allows conserving one of the most important properties of STED microscopy, i.e., the ability to provide fast sub-diffraction resolution recordings. PMID:23982127
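The flavor of gradient-projection deconvolution can be conveyed with a bare-bones sketch: plain projected gradient steps on a nonnegative least-squares objective, without the diagonal scaling and step-length rules that give SGP its reported acceleration.

```python
import numpy as np
from scipy.signal import fftconvolve

def projected_gradient_deconv(img, psf, n_iter=200, step=None):
    """Gradient-projection sketch for nonnegative least-squares deconvolution.

    A simplified relative of SGP: gradient steps on 0.5 * ||K x - img||^2
    projected onto x >= 0; SGP's scaling matrices and line searches, which
    provide its acceleration, are omitted.
    """
    psf_flip = psf[::-1, ::-1]
    K  = lambda x: fftconvolve(x, psf, mode="same")       # forward blur
    KT = lambda x: fftconvolve(x, psf_flip, mode="same")  # adjoint blur
    if step is None:
        step = 1.0 / psf.sum() ** 2       # crude bound on the Lipschitz constant
    x = np.clip(img, 0, None)
    for _ in range(n_iter):
        x = np.clip(x - step * KT(K(x) - img), 0, None)   # project onto x >= 0
    return x
```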
Removing the echoes from terahertz pulse reflection system and sample
NASA Astrophysics Data System (ADS)
Liu, Haishun; Zhang, Zhenwei; Zhang, Cunlin
2018-01-01
Due to echoes from both the terahertz (THz) pulse reflection system and the sample, the THz primary pulse is distorted. The system echoes are of two types: one preceding the main peak, probably caused by the ultrafast laser pulse, and one following the primary pulse, caused by the Fabry-Perot (F-P) etalon effect of the detector. We attempt to remove the corresponding echoes by using two kinds of deconvolution. A Si wafer of 400 μm was selected as the test sample. First, double Gaussian filter (DGF) deconvolution was used to remove the systematic echoes, and then another deconvolution technique was employed to eliminate the two obvious echoes of the sample. The results indicated that, although the combination of the two deconvolution techniques could not entirely remove the echoes of the sample and system, the echoes were largely reduced.
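A minimal sketch of DGF deconvolution as commonly formulated for THz time-domain data: spectral division of the sample by the reference, stabilized by a difference-of-Gaussians band-pass; the cutoff frequencies and the synthetic echo below are illustrative, not the paper's values.

```python
import numpy as np

def dgf_deconvolve(sig, ref, dt, hf=3.0e12, lf=0.2e12):
    """Double-Gaussian-filter deconvolution (sketch).

    Divides the sample spectrum by the reference spectrum and applies a
    difference-of-Gaussians band-pass to suppress division noise at very
    low and very high frequencies.
    """
    n = 2 * sig.size
    f = np.fft.rfftfreq(n, dt)
    H = np.fft.rfft(sig, n) / (np.fft.rfft(ref, n) + 1e-20)
    dgf = np.exp(-(f / hf) ** 2) - np.exp(-(f / lf) ** 2)   # band-pass window
    return np.fft.irfft(H * dgf, n)[:sig.size]

# Illustrative use: a reference pulse plus one delayed echo from the wafer.
dt = 2e-14
t = np.arange(4096) * dt
ref = np.exp(-((t - 2e-12) / 1.5e-13) ** 2)
sig = ref + 0.3 * np.roll(ref, 700)          # synthetic echo, 14 ps later
impulse = dgf_deconvolve(sig, ref, dt)       # main peak plus echo at 14 ps
```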
Batsoulis, A N; Nacos, M K; Pappas, C S; Tarantilis, P A; Mavromoustakos, T; Polissiou, M G
2004-02-01
Hemicellulose samples were isolated from kenaf (Hibiscus cannabinus L.). Hemicellulosic fractions usually contain a variable percentage of uronic acids. The uronic acid content (expressed as polygalacturonic acid) of the isolated hemicelluloses was determined by diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS) and the curve-fitting deconvolution method. A linear relationship between uronic acid content and the sum of the peak areas at 1745, 1715, and 1600 cm(-1) was established with a high correlation coefficient (0.98). The deconvolution analysis using the curve-fitting method allowed the elimination of spectral interferences from other cell wall components. The method was compared with an established spectrophotometric method and was found equivalent in accuracy and repeatability (t-test, F-test). This method is applicable to the analysis of natural or synthetic mixtures and/or crude substances. The proposed method is simple, rapid, and nondestructive for the samples.
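The curve-fitting deconvolution can be illustrated with three Gaussian bands at the quoted wavenumbers; the amplitudes, widths and the final linear calibration below are hypothetical stand-ins for the values established against standards in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Curve-fitting "deconvolution" of overlapping carbonyl bands: three
# Gaussians fixed near 1745, 1715 and 1600 cm-1 (widths/amplitudes fitted).
def three_gaussians(x, a1, a2, a3, w1, w2, w3):
    g = lambda a, c, w: a * np.exp(-0.5 * ((x - c) / w) ** 2)
    return g(a1, 1745, w1) + g(a2, 1715, w2) + g(a3, 1600, w3)

wn = np.linspace(1550, 1800, 500)                  # wavenumber axis, cm-1
rng = np.random.default_rng(4)
spectrum = three_gaussians(wn, 0.6, 0.9, 0.4, 12, 15, 18)
spectrum += 0.01 * rng.standard_normal(wn.size)    # synthetic DRIFTS spectrum

p0 = [0.5, 0.5, 0.5, 10, 10, 10]
popt, _ = curve_fit(three_gaussians, wn, spectrum, p0=p0)
areas = [popt[i] * popt[i + 3] * np.sqrt(2 * np.pi) for i in range(3)]
total_area = sum(areas)
# Uronic acid content would then follow from a linear calibration such as
# content = slope * total_area + intercept, established against standards.
print(f"sum of peak areas: {total_area:.1f}")
```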
DOE Office of Scientific and Technical Information (OSTI.GOV)
Navarro, Jorge
2013-12-01
The goal of the study presented here is to determine the best available non-destructive technique for collecting validation data and for determining burnup and cooling time of the fuel elements onsite at the Advanced Test Reactor (ATR) canal. This study makes a recommendation on the viability of implementing a permanent fuel scanning system at the ATR canal and leads to the full design of a permanent fuel scan system. The study consisted first of determining whether it was possible, and which equipment was necessary, to collect useful spectra from ATR fuel elements at the canal adjacent to the reactor. Once it was established that useful spectra can be obtained at the ATR canal, the next step was to determine which detector and which configuration were better suited to predict burnup and cooling time of fuel elements non-destructively. Three different detectors, High Purity Germanium (HPGe), Lanthanum Bromide (LaBr3), and High Pressure Xenon (HPXe), were used during the study in two system configurations, above and below the water pool. The data collected and analyzed were used to create burnup and cooling time calibration prediction curves for ATR fuel. The next stage of the study was to determine which of the three detectors tested was better suited for the permanent system. From the spectra taken and the calibration curves obtained, it was determined that, although the HPGe detector yielded better results, a detector that could better withstand the harsh environment of the ATR canal was needed. The in-situ nature of the measurements required a rugged, low-maintenance, easy-to-control fuel scanning system. Based on the ATR canal feasibility measurements and calibration results, it was determined that the LaBr3 detector was the best alternative for canal in-situ measurements; however, in order to enhance the quality of the spectra collected using this scintillator, a deconvolution method was developed. Following the development of the deconvolution method for ATR applications, the technique was tested using one-isotope, multi-isotope and simulated fuel sources. Burnup calibrations were performed using convoluted and deconvoluted data. The calibration results showed that burnup prediction improves with deconvolution. The final stage of the deconvolution method development was to perform an irradiation experiment in order to create a surrogate fuel source for testing the deconvolution method with experimental data. The path forward is a conceptual design of the fuel scan system using the rugged LaBr3 detector in an above-the-water configuration with deconvolution algorithms.
Intrathoracic airway measurement: ex-vivo validation
NASA Astrophysics Data System (ADS)
Reinhardt, Joseph M.; Raab, Stephen A.; D'Souza, Neil D.; Hoffman, Eric A.
1997-05-01
High-resolution x-ray CT (HRCT) provides detailed images of the lungs and bronchial tree. HRCT-based imaging and quantitation of peripheral bronchial airway geometry provide a valuable tool for assessing regional airway physiology. Such measurements have been used to address physiological questions related to the mechanics of airway collapse in sleep apnea, the measurement of airway response to bronchoconstriction agents, and the evaluation and tracking of the progression of diseases affecting the airways, such as asthma and cystic fibrosis. Significant attention has been paid to measurements of extra- and intra-thoracic airways in 2D sections from volumetric x-ray CT. A variety of manual and semi-automatic techniques have been proposed for airway geometry measurement, including the use of standardized display window and level settings for caliper measurements, methods based on manual or semi-automatic border tracing, and more objective, quantitative approaches such as the 'half-max' criterion. A recently proposed measurement technique uses a model-based deconvolution to estimate the locations of the inner and outer airway walls. Validation using a plexiglass phantom indicates that the model-based method is more accurate than the half-max approach for thin-walled structures. In vivo validation of these airway measurement techniques is difficult because of the problem of identifying a reliable measurement 'gold standard.' In this paper we report on ex vivo validation of the half-max and model-based methods using an excised pig lung. The lung is sliced into thin sections of tissue and scanned using an electron beam CT scanner. Airways of interest are measured from the CT images and also measured with a microscope and micrometer to obtain a gold standard. The results show no significant difference between the model-based measurements and the gold standard, while the half-max estimates exhibited a measurement bias and were significantly different from the gold standard.
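A minimal sketch of the 'half-max' criterion applied to a 1-D attenuation profile cast across the airway wall; the interpolation details are illustrative, and the model-based deconvolution alternative is not shown.

```python
import numpy as np

def half_max_edges(profile, x=None):
    """Locate airway wall borders with the 'half-max' criterion (sketch).

    Given a 1-D attenuation profile across the airway wall, the inner and
    outer borders are placed where attenuation crosses halfway between the
    local minimum (lumen or parenchyma side) and the wall peak.
    """
    if x is None:
        x = np.arange(profile.size, dtype=float)
    peak = np.argmax(profile)
    half_in = 0.5 * (profile[:peak + 1].min() + profile[peak])
    half_out = 0.5 * (profile[peak:].min() + profile[peak])
    # Linear interpolation of the crossing on each side of the peak.
    i = np.where(profile[:peak] < half_in)[0][-1]
    inner = np.interp(half_in, profile[[i, i + 1]], x[[i, i + 1]])
    j = peak + np.where(profile[peak:] < half_out)[0][0]
    outer = np.interp(half_out, profile[[j, j - 1]], x[[j, j - 1]])
    return inner, outer   # wall thickness is roughly outer - inner
```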
Disseminated paracoccidioidomycosis diagnosis based on oral lesions.
Webber, Liana Preto; Martins, Manoela Domingues; de Oliveira, Márcia Gaiger; Munhoz, Etiene Andrade; Carrard, Vinicius Coelho
2014-04-01
Paracoccidioidomycosis (PCM) is a deep mycosis with primary lung manifestations that may present cutaneous and oral lesions. Oral lesions mimic other infectious diseases or even squamous cell carcinoma, clinically and microscopically. Sometimes, the dentist is the first to detect the disease, because lung lesions are asymptomatic, or even misdiagnosed. An unusual case of PCM with 5 months of evolution presenting pulmonary, oral, and cutaneous lesions that was diagnosed by the dentist based on oral lesions is presented and discussed.
Fungemia and interstitial lung compromise caused by Malassezia sympodialis in a pediatric patient.
Aguirre, Clarisa; Euliarte, Cristina; Finquelievich, Jorge; Sosa, María de los Ángeles; Giusiano, Gustavo
2015-01-01
A case of fungemia with interstitial lung compromise caused by Malassezia sympodialis is reported in an obese pediatric patient on long-term treatment with inhaled corticosteroids for asthma. The patient was hospitalized due to a post-surgical complication of appendicitis. The patient was treated with amphotericin B for 3 weeks, with good clinical evolution and subsequent negative cultures. Copyright © 2013 Revista Iberoamericana de Micología. Published by Elsevier Espana. All rights reserved.
Quantitative fluorescence microscopy and image deconvolution.
Swedlow, Jason R
2013-01-01
Quantitative imaging and image deconvolution have become standard techniques for the modern cell biologist because they can form the basis of an increasing number of assays for molecular function in a cellular context. A major application of fluorescence microscopy is the quantitative measurement of the localization, dynamics, and interactions of cellular factors. The introduction of green fluorescent protein and its spectral variants has led to a significant increase in the use of fluorescence microscopy as a quantitative assay system. For quantitative imaging assays, it is critical to consider the nature of the image-acquisition system and to validate its response to known standards. Any image-processing algorithm used before quantitative analysis should preserve the relative signal levels in different parts of the image. A very common image-processing algorithm, image deconvolution, is used to remove blurred signal from an image. There are two major types of deconvolution approaches: deblurring and restoration algorithms. Deblurring algorithms remove blur but treat a series of optical sections as individual two-dimensional entities and therefore sometimes mishandle blurred light. Restoration algorithms determine an object that, when convolved with the point-spread function of the microscope, could produce the image data. The advantages and disadvantages of these methods are discussed in this chapter. Image deconvolution in fluorescence microscopy has usually been applied to high-resolution imaging to improve contrast and thus detect small, dim objects that might otherwise be obscured. Proper use demands some consideration of the imaging hardware, the acquisition process, fundamental aspects of photon detection, and image processing. This can prove daunting for some cell biologists, but the power of these techniques has been proven many times in the works cited in the chapter and elsewhere. Their usage is now well defined, so they can be incorporated into the capabilities of most laboratories. Copyright © 1998 Elsevier Inc. All rights reserved.
A Robust Deconvolution Method based on Transdimensional Hierarchical Bayesian Inference
NASA Astrophysics Data System (ADS)
Kolb, J.; Lekic, V.
2012-12-01
Analysis of P-S and S-P conversions allows us to map receiver-side crustal and lithospheric structure. This analysis often involves deconvolution of the parent wave field from the scattered wave field as a means of suppressing source-side complexity. A variety of deconvolution techniques exist, including damped spectral division, Wiener filtering, iterative time-domain deconvolution, and the multitaper method. All of these techniques require estimates of noise characteristics as input parameters. We present a deconvolution method based on transdimensional Hierarchical Bayesian inference in which both noise magnitude and noise correlation are used as parameters in calculating the likelihood probability distribution. Because the noise for P-S and S-P conversion analysis in terms of receiver functions is a combination of background noise, which is relatively easy to characterize, and signal-generated noise, which is much more difficult to quantify, we treat measurement errors as an unknown quantity, characterized by a probability density function whose mean and variance are model parameters. This transdimensional Hierarchical Bayesian approach has been successfully used previously in the inversion of receiver functions in terms of shear and compressional wave speeds of an unknown number of layers [1]. In our method we use a Markov chain Monte Carlo (MCMC) algorithm to find the receiver function that best fits the data while accurately assessing the noise parameters. To parameterize the receiver function, we model it as an unknown number of Gaussians of unknown amplitude and width. The algorithm takes multiple steps before calculating the acceptance probability of a new model, in order to avoid getting trapped in local misfit minima. Using both observed and synthetic data, we show that the MCMC deconvolution method can accurately obtain a receiver function as well as an estimate of the noise parameters given the parent and daughter components. Furthermore, we demonstrate that this new approach is far less susceptible to generating spurious features even at high noise levels. Finally, the method yields not only the most-likely receiver function, but also quantifies its full uncertainty. [1] Bodin, T., M. Sambridge, H. Tkalčić, P. Arroucau, K. Gallagher, and N. Rawlinson (2012), Transdimensional inversion of receiver functions and surface wave dispersion, J. Geophys. Res., 117, B02301
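A heavily simplified, fixed-dimension sketch of the hierarchical idea follows: the noise level is sampled alongside the pulse parameters in a Metropolis chain. The full method is transdimensional (birth/death moves change the number of Gaussians) and also samples noise correlation; both are omitted here.

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0, 30, 300)
pulse = lambda amp, mu, w: amp * np.exp(-0.5 * ((t - mu) / w) ** 2)
data = pulse(1.0, 5.0, 0.5) + pulse(0.4, 12.0, 0.8) + 0.05 * rng.standard_normal(t.size)

def log_post(theta):
    a1, m1, w1, a2, m2, w2, log_sigma = theta
    if min(w1, w2) <= 0.05 or not (0 < m1 < 30 and 0 < m2 < 30):
        return -np.inf                                   # flat prior bounds
    sigma = np.exp(log_sigma)
    resid = data - pulse(a1, m1, w1) - pulse(a2, m2, w2)
    # Gaussian likelihood with sigma itself a parameter (hierarchical noise).
    return -0.5 * np.sum(resid**2) / sigma**2 - t.size * log_sigma

theta = np.array([0.5, 4.0, 1.0, 0.5, 13.0, 1.0, np.log(0.1)])
lp = log_post(theta)
step = np.array([0.05, 0.2, 0.05, 0.05, 0.2, 0.05, 0.05])
samples = []
for it in range(20000):
    prop = theta + step * rng.standard_normal(theta.size)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:              # Metropolis acceptance
        theta, lp = prop, lp_prop
    if it > 5000 and it % 10 == 0:                       # burn-in, then thin
        samples.append(theta.copy())
samples = np.array(samples)
print("posterior mean sigma:", np.exp(samples[:, -1]).mean())
```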
Brost, Eric Edward; Watanabe, Yoichi
2018-06-01
Cerenkov photons are created by high-energy radiation beams used for radiation therapy. In this study, we developed a Cerenkov light dosimetry technique to obtain a two-dimensional dose distribution in a superficial region of a medium from images of Cerenkov photons by using a deconvolution method. An integral equation was derived to represent the Cerenkov photon image acquired by a camera for a given incident high-energy photon beam by using convolution kernels. Subsequently, an equation relating the planar dose at a depth to a Cerenkov photon image was obtained using the well-known relationship between the incident beam fluence and the dose distribution in a medium. The final equation contained a convolution kernel called the Cerenkov dose scatter function (CDSF). The CDSF was obtained by deconvolving the Cerenkov scatter function (CSF) with the dose scatter function (DSF). The GAMOS (Geant4-based Architecture for Medicine-Oriented Simulations) Monte Carlo particle simulation software was used to obtain the CSF and DSF. The dose distribution was calculated from the Cerenkov photon intensity data using an iterative deconvolution method with the CDSF. The theoretical formulation was experimentally evaluated by using an optical phantom irradiated by high-energy photon beams. The intensity of the deconvolved Cerenkov photon image showed linear dependence on the dose rate and the photon beam energy. The relative intensity showed a field size dependence similar to the beam output factor. Deconvolved Cerenkov images showed improvement in dose profiles compared with the raw image data. In particular, the deconvolution significantly improved the agreement in high dose gradient regions, such as the penumbra. Deconvolution with a single iteration was found to provide the most accurate estimate of the dose. Two-dimensional dose distributions of the deconvolved Cerenkov images agreed well with the reference distributions for both square fields and a multileaf collimator (MLC) defined, irregularly shaped field. The proposed technique improved the accuracy of Cerenkov photon dosimetry in the penumbra region. The results of this study show initial validation of the deconvolution method for beam profile measurements in a homogeneous medium. The new formulation accounts for the physical processes of Cerenkov photon transport in the medium more accurately than previously published methods. © 2018 American Association of Physicists in Medicine.
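The iterative scheme can be sketched with a generic Van Cittert-type update; the kernel below is a synthetic stand-in for the Monte Carlo-derived CDSF, and a single iteration is used by default since the study found one iteration to be most accurate.

```python
import numpy as np
from scipy.signal import fftconvolve

def van_cittert(image, kernel, iterations=1, beta=1.0):
    """Iterative (Van Cittert-type) deconvolution of a Cerenkov image (sketch).

    Each pass adds back the residual between the measured image and the
    current estimate re-blurred with the kernel.
    """
    est = image.copy()
    for _ in range(iterations):
        est = est + beta * (image - fftconvolve(est, kernel, mode="same"))
    return est

# Synthetic stand-in kernel and image (not the Monte Carlo-derived CDSF).
y, x = np.mgrid[-10:11, -10:11]
cdsf = np.exp(-np.hypot(x, y) / 2.5); cdsf /= cdsf.sum()
field = np.zeros((128, 128)); field[44:84, 44:84] = 1.0    # square beam
cerenkov_img = fftconvolve(field, cdsf, mode="same")
dose = van_cittert(cerenkov_img, cdsf, iterations=1)       # sharpens the penumbra
```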
Garcia-Bernabé, Abel; Dominguez-Espinosa, Gustavo; Diaz-Calleja, Ricardo; Riande, Evaristo; Haag, Rainer
2007-09-28
The non-Debye relaxation behavior of hyperbranched polyglycerol was investigated by broadband dielectric spectroscopy. A thorough study of the relaxations was carried out paying special attention to truncation effects on deconvolutions of overlapping processes. Hyperbranched polyglycerol exhibits two relaxations in the glassy state named in increasing order of frequency beta and gamma processes. The study of the evolution of these two fast processes with temperature in the time retardation spectra shows that the beta absorption is swallowed by the alpha in the glass-liquid transition, the gamma absorption being the only relaxation that remains operative in the liquid state. In heating, a temperature is reached at which the alpha absorption vanishes appearing the alphagamma relaxation. Two characteristics of alpha absorptions, decrease of the dielectric strength with increasing temperature and rather high activation energy, are displayed by the alphagamma process. Williams' ansatz seems to hold for these topologically complex macromolecules.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Hao; Sa, Niya; He, Meinan; ...
2017-03-03
The understanding of the reaction mechanism and temporal speciation of lithium sulfur batteries is challenged by complex polysulfide disproportionation chemistry coupled with the precipitation and dissolution of species. In this report, for the first time, we present a comprehensive method to investigate lithium sulfur electrochemistry using in situ 7Li NMR spectroscopy, a technique that is capable of quantitatively capturing the evolution of the soluble and precipitated lithium (poly)sulfides during electrochemical cycling. Furthermore, through deconvolution and quantification, every lithium-bearing species was closely tracked, and a four-step soluble lithium polysulfide-mediated lithium sulfur electrochemistry was demonstrated in never before seen detail. Significant irreversible accumulation of Li2S is observed on the Li metal anode after four cycles because of sulfur shuttling. Application of the method to the study of electrolyte/additive development and lithium protection research can be readily envisaged.
Prediction of human errors by maladaptive changes in event-related brain networks
Eichele, Tom; Debener, Stefan; Calhoun, Vince D.; Specht, Karsten; Engel, Andreas K.; Hugdahl, Kenneth; von Cramon, D. Yves; Ullsperger, Markus
2008-01-01
Humans engaged in monotonous tasks are susceptible to occasional errors that may lead to serious consequences, but little is known about brain activity patterns preceding errors. Using functional MRI and applying independent component analysis followed by deconvolution of hemodynamic responses, we studied error preceding brain activity on a trial-by-trial basis. We found a set of brain regions in which the temporal evolution of activation predicted performance errors. These maladaptive brain activity changes started to evolve ≈30 sec before the error. In particular, a coincident decrease of deactivation in default mode regions of the brain, together with a decline of activation in regions associated with maintaining task effort, raised the probability of future errors. Our findings provide insights into the brain network dynamics preceding human performance errors and suggest that monitoring of the identified precursor states may help in avoiding human errors in critical real-world situations. PMID:18427123
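The deconvolution of hemodynamic responses from component time courses can be illustrated with standard least-squares FIR deconvolution; the design below (event onsets, lag window, noise) is a synthetic stand-in for the actual fMRI analysis pipeline.

```python
import numpy as np

def fir_deconvolve(bold, onsets, n_lags=16):
    """Least-squares FIR deconvolution of a hemodynamic response (sketch).

    Builds a lagged indicator matrix from event onsets and solves for the
    response at each lag; regularization and trial-wise estimation, which a
    trial-by-trial analysis would need, are omitted.
    """
    n = bold.size
    X = np.zeros((n, n_lags))
    for lag in range(n_lags):
        idx = onsets + lag
        X[idx[idx < n], lag] = 1.0
    hrf, *_ = np.linalg.lstsq(X, bold, rcond=None)
    return hrf   # estimated response at lags 0 .. n_lags-1 (in TRs)

# Synthetic demonstration with a known response shape and random onsets.
rng = np.random.default_rng(6)
true_hrf = np.diff(np.exp(-0.5 * ((np.arange(17) - 6) / 2.0) ** 2))
onsets = rng.choice(np.arange(20, 460), size=40, replace=False)
y = np.zeros(500)
for o in onsets:
    y[o:o + 16] += true_hrf
y += 0.1 * rng.standard_normal(500)
est = fir_deconvolve(y, onsets)
print("estimated peak lag (TRs):", est.argmax())
```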
NASA Astrophysics Data System (ADS)
Dicente Cid, Yashin; Mamonov, Artem; Beers, Andrew; Thomas, Armin; Kovalev, Vassili; Kalpathy-Cramer, Jayashree; Müller, Henning
2017-03-01
The analysis of large data sets can help to gain knowledge about specific organs or specific diseases, just as big data analysis does in many non-medical areas. This article aims to gain information from 3D volumes, namely the visual content of lung CT scans of a large number of patients. In the described data set, little annotation is available: the patients were all part of an ongoing screening program, and besides age and gender no information on the patients or their findings was available for this work. This scenario occurs regularly, as image data sets are produced and become available in increasingly large quantities while manual annotations are often missing and clinical data such as text reports are often harder to share. We extracted a set of visual features from 12,414 CT scans of 9,348 patients who had CT scans of the lung taken in the context of a national lung screening program in Belarus. Lung fields were segmented by two segmentation algorithms, and only cases where both algorithms found the left and right lung and agreed with a Dice coefficient above 0.95 were analyzed. This ensures that only segmentations of good quality were used to extract features of the lung. Patients ranged in age from 0 to 106 years. Data analysis shows that age can be predicted with fairly high accuracy for persons under 15 years. Relatively good results were also obtained between 30 and 65 years, where a steady trend is seen. For young adults and older people the results are not as good, as variability is very high in these groups. Several visualizations of the data show the evolution patterns of lung texture, size and density with age. The experiments make it possible to learn about the evolution of the lung, and the results show that even with limited metadata interesting information can be extracted from large-scale visual data. These age-related changes (for example of the lung volume or the density histogram of the tissue) can also be taken into account when interpreting new cases. The database used includes patients who had suspicious findings on a chest X-ray, so it is not a group of healthy people, and only tendencies, not a model of a healthy lung at a specific age, can be derived.
Zeng, Dong; Gong, Changfei; Bian, Zhaoying; Huang, Jing; Zhang, Xinyu; Zhang, Hua; Lu, Lijun; Niu, Shanzhou; Zhang, Zhang; Liang, Zhengrong; Feng, Qianjin; Chen, Wufan; Ma, Jianhua
2016-11-21
Dynamic myocardial perfusion computed tomography (MPCT) is a promising technique for quick diagnosis and risk stratification of coronary artery disease. However, one major drawback of dynamic MPCT imaging is the heavy radiation dose to patients due to its dynamic image acquisition protocol. In this work, to address this issue, we present a robust dynamic MPCT deconvolution algorithm via adaptive-weighted tensor total variation (AwTTV) regularization for accurate residue function estimation with low-mAs data acquisitions. For simplicity, the presented method is termed 'MPD-AwTTV'. More specifically, the gains of the AwTTV regularization over the original tensor total variation regularization come from the anisotropic edge property of the sequential MPCT images. To minimize the associated objective function we propose an efficient iterative optimization strategy with a fast convergence rate in the framework of an iterative shrinkage/thresholding algorithm. We validate and evaluate the presented algorithm using both a digital XCAT phantom and preclinical porcine data. The preliminary experimental results demonstrate that the presented MPD-AwTTV deconvolution algorithm achieves remarkable gains in noise-induced artifact suppression, edge detail preservation, and accurate flow-scaled residue function and MPHM estimation compared with other existing deconvolution algorithms in digital phantom studies, and similar gains are obtained in the porcine data experiment.
NASA Astrophysics Data System (ADS)
Yu, Jian; Yin, Qian; Guo, Ping; Luo, A.-li
2014-09-01
This paper presents an efficient method for the extraction of astronomical spectra from two-dimensional (2D) multifibre spectrographs based on the regularized least-squares QR-factorization (LSQR) algorithm. We address two issues: we propose a modified Gaussian point spread function (PSF) for modelling the 2D PSF from multi-emission-line gas-discharge lamp images (arc images), and we develop an efficient deconvolution method to extract spectra in real circumstances. The proposed modified 2D Gaussian PSF model can fit various types of 2D PSFs, including different radial distortion angles and ellipticities. We adopt the regularized LSQR algorithm to solve the sparse linear equations constructed from the sparse convolution matrix, which we designate the deconvolution spectrum extraction method. Furthermore, we implement a parallelized LSQR algorithm based on graphics processing unit programming in the Compute Unified Device Architecture to accelerate the computational processing. Experimental results illustrate that the proposed extraction method can greatly reduce the computational cost and memory use of the deconvolution method and, consequently, increase its efficiency and practicability. In addition, the proposed extraction method has a stronger noise tolerance than other methods, such as the boxcar (aperture) extraction and profile extraction methods. Finally, we present an analysis of the sensitivity of the extraction results to the radius and full width at half-maximum of the 2D PSF.
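The core of the approach, solving a damped least-squares system with a sparse convolution matrix via LSQR, can be sketched as follows; the 1-D Gaussian fibre profiles stand in for the paper's modified 2-D Gaussian PSF with distortion and ellipticity parameters.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import lsqr

# Sparse convolution matrix A: each column holds the (here 1-D Gaussian)
# PSF profile of one fibre on the detector pixels.
n_pix, n_fib = 2000, 100
centers = np.linspace(10, n_pix - 10, n_fib)
rows, cols, vals = [], [], []
for j, c in enumerate(centers):
    px = np.arange(int(c) - 8, int(c) + 9)
    prof = np.exp(-0.5 * ((px - c) / 2.2) ** 2)
    rows += px.tolist(); cols += [j] * px.size; vals += (prof / prof.sum()).tolist()
A = sparse.csr_matrix((vals, (rows, cols)), shape=(n_pix, n_fib))

# Deconvolution extraction: min ||A f - b||^2 + damp^2 ||f||^2 via LSQR.
rng = np.random.default_rng(7)
flux_true = rng.uniform(0.5, 2.0, n_fib)
b = A @ flux_true + 0.01 * rng.standard_normal(n_pix)
flux = lsqr(A, b, damp=0.01)[0]
print("max extraction error:", np.abs(flux - flux_true).max())
```

Because A is sparse and LSQR only needs matrix-vector products, this scales to full multifibre frames, which is the practical point the paper makes.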
Bouridane, Ahmed; Ling, Bingo Wing-Kuen
2018-01-01
This paper presents an unsupervised learning algorithm for sparse nonnegative matrix factor time–frequency deconvolution with optimized fractional β-divergence. The β-divergence is a group of cost functions parametrized by a single parameter β. The Itakura–Saito divergence, Kullback–Leibler divergence and Least Square distance are special cases that correspond to β=0, 1, 2, respectively. This paper presents a generalized algorithm that uses a flexible range of β that includes fractional values. It describes a majorization–minimization (MM) algorithm leading to the development of a fast-converging multiplicative update algorithm with guaranteed convergence. The proposed model operates in the time–frequency domain and decomposes an information-bearing matrix into a two-dimensional deconvolution of factor matrices that represent the spectral dictionary and temporal codes. The deconvolution process has been optimized to yield sparse temporal codes through maximizing the likelihood of the observations. The paper also presents a method to estimate the fractional β value. The method is demonstrated on separating audio mixtures recorded from a single channel. The paper shows that the extraction of the spectral dictionary and temporal codes is significantly more efficient by using the proposed algorithm and subsequently leads to better source separation performance. Experimental tests and comparisons with other factorization methods have been conducted to verify its efficacy. PMID:29702629
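The core multiplicative update under a fractional β-divergence can be sketched on plain (non-convolutive) NMF; the paper's model additionally makes the factorization convolutive in time–frequency and adds a sparsity penalty on the temporal codes, both omitted here.

```python
import numpy as np

def nmf_beta(V, rank, beta=0.5, n_iter=200, seed=0):
    """Multiplicative-update NMF under the beta-divergence (sketch).

    Plain (non-convolutive) factorization showing the core update rule for a
    fractional beta; beta = 0, 1, 2 recover the Itakura-Saito, KL and least
    squares cases named in the abstract.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + 1e-3
    H = rng.random((rank, n)) + 1e-3
    for _ in range(n_iter):
        WH = W @ H
        H *= (W.T @ (WH ** (beta - 2) * V)) / (W.T @ WH ** (beta - 1) + 1e-12)
        WH = W @ H
        W *= ((WH ** (beta - 2) * V) @ H.T) / (WH ** (beta - 1) @ H.T + 1e-12)
    return W, H

# Illustrative use on a stand-in magnitude spectrogram.
rng = np.random.default_rng(8)
V = rng.random((64, 200)) + 1e-3
W, H = nmf_beta(V, rank=5, beta=0.5)   # fractional beta between IS and KL
```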
MetaUniDec: High-Throughput Deconvolution of Native Mass Spectra
NASA Astrophysics Data System (ADS)
Reid, Deseree J.; Diesing, Jessica M.; Miller, Matthew A.; Perry, Scott M.; Wales, Jessica A.; Montfort, William R.; Marty, Michael T.
2018-04-01
The expansion of native mass spectrometry (MS) methods for both academic and industrial applications has created a substantial need for analysis of large native MS datasets. Existing software tools are poorly suited for high-throughput deconvolution of native electrospray mass spectra from intact proteins and protein complexes. The UniDec Bayesian deconvolution algorithm is uniquely well suited for high-throughput analysis due to its speed and robustness but was previously tailored towards individual spectra. Here, we optimized UniDec for deconvolution, analysis, and visualization of large data sets. This new module, MetaUniDec, centers on the Hierarchical Data Format 5 (HDF5) format for storing datasets, which significantly improves speed, portability, and file size. It also includes code optimizations to improve speed and a new graphical user interface for visualization, interaction, and analysis of data. To demonstrate the utility of MetaUniDec, we applied the software to analyze automated collision voltage ramps with a small bacterial heme protein and large lipoprotein nanodiscs. Upon increasing collisional activation, bacterial heme-nitric oxide/oxygen binding (H-NOX) protein shows a discrete loss of bound heme, and nanodiscs show a continuous loss of lipids and charge. By using MetaUniDec to track changes in peak area or mass as a function of collision voltage, we explore the energetic profile of collisional activation in an ultra-high mass range Orbitrap mass spectrometer.
The evolution of blood pressure and the rise of mankind.
Schulte, Kevin; Kunter, Uta; Moeller, Marcus J
2015-05-01
Why is it that only human beings continuously perform acts of heroism? Looking back at our evolutionary history can offer us some potentially useful insight. This review highlights some of the major steps in our evolution, more specifically the evolution of high blood pressure. When we were fish, the first kidney developed to create a standardized internal 'milieu' preserving the primordial sea within us. When we conquered land as amphibians, the evolution of the lung required a low systemic blood pressure, which explains why early land vertebrates (amphibians, reptiles) are such low performers. Gaining independence from water required the evolution of an impermeable skin and a water-retaining kidney. The latter was accomplished twice, with two different solutions, in the two major branches of vertebrate evolution: mammals excrete nitrogenous waste products as urea, which can be utilized by the kidney as an osmotic agent to produce more concentrated urine. Dinosaurs and birds have a distinct nitrogen metabolism and excrete nitrogen as water-insoluble uric acid; therefore, their kidneys cannot use urea to concentrate urine as effectively. Instead, some birds have developed the capability to reabsorb water from their cloacae. The convergent development of a separate small circulation of the lung in mammals and birds allowed for the evolution of 'high blood-pressure animals' with better capillarization of the peripheral tissues, allowing high endurance performance. Finally, we investigate why mankind outperforms any other mammal on earth and why, to this day, we continue to perform acts of heroism on our eternal quest for personal bliss. © The Author 2014. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
Blakely, Collin M.; Watkins, Thomas B.K.; Wu, Wei; Gini, Beatrice; Chabon, Jacob J.; McCoach, Caroline E.; McGranahan, Nicholas; Wilson, Gareth A.; Birkbak, Nicolai J.; Olivas, Victor R.; Rotow, Julia; Maynard, Ashley; Wang, Victoria; Gubens, Matthew A.; Banks, Kimberly C.; Lanman, Richard B.; Caulin, Aleah F.; John, John St.; Cordero, Anibal R.; Giannikopoulos, Petros; Simmons, Andrew D.; Mack, Philip C.; Gandara, David R.; Husain, Hatim; Doebele, Robert C.; Riess, Jonathan W.; Diehn, Maximilian; Swanton, Charles; Bivona, Trever G.
2017-01-01
A widespread approach to modern cancer therapy is to identify a single oncogenic driver gene and target its mutant protein product (e.g. EGFR inhibitor treatment in EGFR-mutant lung cancers). However, genetically driven resistance to targeted therapy limits patient survival. Through genomic analysis of 1122 EGFR-mutant lung cancer cell-free DNA samples and whole exome analysis of seven longitudinally collected tumor samples from an EGFR-mutant lung cancer patient, we identify critical co-occurring oncogenic events present in most advanced-stage EGFR-mutant lung cancers. We define new pathways limiting EGFR inhibitor response, including WNT/β-catenin and cell cycle gene (e.g. CDK4, CDK6) alterations. Tumor genomic complexity increases with EGFR inhibitor treatment, and co-occurring alterations in CTNNB1 and PIK3CA exhibit non-redundant functions that cooperatively promote tumor metastasis or limit EGFR inhibitor response. This study challenges the prevailing single-gene driver oncogene view and links clinical outcomes to co-occurring genetic alterations in advanced-stage EGFR-mutant lung cancer patients. PMID:29106415
The airway microbiota in early cystic fibrosis lung disease.
Frayman, Katherine B; Armstrong, David S; Grimwood, Keith; Ranganathan, Sarath C
2017-11-01
Infection plays a critical role in the pathogenesis of cystic fibrosis (CF) lung disease. Over the past two decades, the application of molecular and extended culture-based techniques to microbial analysis has changed our understanding of the lungs in both health and disease. CF lung disease is a polymicrobial disorder, with obligate and facultative anaerobes recovered alongside traditional pathogens in varying proportions, with some differences observed to correlate with disease stage. While healthy lungs are not sterile, differences between the lower airway microbiota of individuals with CF and disease-controls are already apparent in childhood. Understanding the evolution of the CF airway microbiota, and its relationship with clinical treatments and outcome at each disease stage, will improve our understanding of the pathogenesis of CF lung disease and potentially inform clinical management. This review summarizes current knowledge of the early development of the respiratory microbiota in healthy children and then discusses what is known about the airway microbiota in individuals with CF, including how it evolves over time and where future research priorities lie. © 2017 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Dallmann, N. A.; Carlsten, B. E.; Stonehill, L. C.
2017-12-01
Orbiting nuclear spectrometers have contributed significantly to our understanding of the composition of solar system bodies. Gamma rays and neutrons are produced within the surfaces of bodies by impacting galactic cosmic rays (GCR) and by intrinsic radionuclide decay. Measuring the flux and energy spectrum of these products at one point in an orbit elucidates the elemental content of the area in view. Deconvolution of measurements from many spatially registered orbit points can produce detailed maps of elemental abundances. In applying these well-established techniques to small and irregularly shaped bodies like Phobos, one encounters unique challenges beyond those of a large spheroid. Polar mapping orbits are not possible for Phobos, and quasistatic orbits will realize only modest inclinations, unavoidably limiting surface coverage and creating North-South ambiguities in deconvolution. The irregular shape causes self-shadowing, both of the body to the spectrometer and of the body to the incoming GCR. Both the view angle to the surface normal and the distance between the surface and the spectrometer are highly irregular. These characteristics can be synthesized into a complicated and continuously changing measurement-system point spread function. We have begun to explore different model-based, statistically rigorous, iterative deconvolution methods to produce elemental abundance maps for a proposed future investigation of Phobos. By incorporating the satellite orbit, the existing high-accuracy shape models of Phobos, and the spectrometer response function, a detailed and accurate system model can be constructed. Many aspects of this model formation are particularly well suited to modern graphics processing techniques and parallel processing. We will present the current status and preliminary visualizations of the Phobos measurement system model. We will also discuss different deconvolution strategies and their relative merit in statistical rigor, stability, achievable resolution, and exploitation of the irregular shape to partially resolve ambiguities. The general applicability of these new approaches to existing data sets from Mars, Mercury, and Lunar investigations will be noted.
A comparison of deconvolution and the Rutland-Patlak plot in parenchymal renal uptake rate.
Al-Shakhrah, Issa A
2012-07-01
Deconvolution and the Rutland-Patlak (R-P) plot are two of the most commonly used methods for analyzing dynamic radionuclide renography. Both methods allow estimation of absolute and relative renal uptake of the radiopharmaceutical and of its rate of transit through the kidney. Seventeen patients (32 kidneys) were referred for further evaluation by renal scanning. All patients were positioned supine with their backs to the scintillation gamma camera, so that the kidneys and the heart were both in the field of view. Approximately 5-7 mCi of (99m)Tc-DTPA (diethylenetriamine penta-acetic acid) in about 0.5 ml of saline was injected intravenously, and sequential 20 s frames were acquired; the study on each patient lasted approximately 20 min. Time-activity curves for the parenchymal region of interest of each kidney, as well as for the heart, were obtained for analysis. The data were then analyzed with deconvolution and the R-P plot. A strong positive association (n = 32; r = 0.83; R² = 0.68) was found between the values obtained by the two methods. Bland-Altman analysis demonstrated that 97% of the values (31 of 32 cases) were within the limits of agreement (mean ± 1.96 standard deviations). We believe the R-P method is likely to be more reproducible than the iterative deconvolution method, because the deconvolution technique relies heavily on the accuracy of the first point analyzed, with any errors carried forward into the calculations of all subsequent points, whereas the R-P technique is based on an initial analysis of the data by means of the R-P plot, and it can be considered an alternative technique for finding and calculating the renal uptake rate.
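As a hedged illustration of the R-P analysis described above (synthetic data and assumed variable names, not the author's code), the parenchymal uptake rate can be read off as the slope of the kidney/heart ratio plotted against the normalized integral of the heart curve. A minimal Python sketch:

import numpy as np

# Synthetic 20 s frames over ~20 min (illustrative values only).
t = np.arange(0.0, 1200.0, 20.0)                 # frame times, s
heart = 1000.0 * np.exp(-t / 400.0) + 50.0       # blood-pool (heart) curve
uptake_rate_true = 0.004                         # per second, assumed
kidney = uptake_rate_true * np.cumsum(heart) * 20.0 + 0.3 * heart

# Rutland-Patlak: kidney(t)/heart(t) = k * integral(heart)/heart(t) + c
x = np.cumsum(heart) * 20.0 / heart              # normalized integral
y = kidney / heart
mask = t < 180                                   # early uptake phase, assumed
k, c = np.polyfit(x[mask], y[mask], 1)
print(f"estimated parenchymal uptake rate: {k:.4f}/s (true {uptake_rate_true})")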
NASA Astrophysics Data System (ADS)
Olurin, Oluwaseun Tolutope
2017-12-01
Interpretation of high-resolution aeromagnetic data of Ilesha and its environs within the basement complex of Southwestern Nigeria was carried out in this study. The study area is delimited by geographic latitudes 7°30'-8°00'N and longitudes 4°30'-5°00'E. The investigation applied Euler deconvolution to filtered digitised total magnetic data (Sheet Number 243) to delineate geological structures within the area under consideration. The digitised airborne magnetic data, acquired in 2009, were obtained from the archives of the Nigeria Geological Survey Agency (NGSA). The airborne magnetic data were filtered, processed and enhanced, and the resultant data were subjected to qualitative and quantitative magnetic interpretation, geometry and depth-weighting analyses across the study area using the Euler deconvolution control file in Oasis montaj software. Total magnetic intensity in the field ranged from -77.7 to 139.7 nT, revealing both high-amplitude and low-amplitude anomalies in the area under consideration. The high intensities correlate with lithological variation in the basement, and the sharp contrast reflects the difference in magnetic susceptibility between the crystalline and sedimentary rocks. The reduced-to-equator (RTE) map is characterised by high-frequency, short-wavelength, small, weak, sharp, low-amplitude and nearly irregularly shaped anomalies, which may be due to near-surface sources such as shallow geologic units and cultural features. The Euler deconvolution solution indicates a generally undulating basement, with depths ranging from -500 to 1000 m, and shows that the basement relief is generally gentle and flat within the basement terrain.
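For readers unfamiliar with the technique, a minimal sketch of how Euler deconvolution is solved by least squares over one data window (hypothetical function and variable names; the paper gives no code, and the structural index N = 1 is an assumption):

import numpy as np

def euler_window(x, y, z, T, Tx, Ty, Tz, N=1.0):
    """Solve Euler's homogeneity equation over one data window:

        (x - x0)*Tx + (y - y0)*Ty + (z - z0)*Tz = N*(B - T)

    T is the observed field, (Tx, Ty, Tz) its gradients, N the
    structural index; unknowns are source position (x0, y0, z0)
    and the regional background B.
    """
    A = np.column_stack([Tx, Ty, Tz, N * np.ones_like(T)])
    b = x * Tx + y * Ty + z * Tz + N * T
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    x0, y0, z0, B = sol
    return x0, y0, z0, B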
Disseminated paracoccidioidomycosis diagnosis based on oral lesions
Webber, Liana Preto; Martins, Manoela Domingues; de Oliveira, Márcia Gaiger; Munhoz, Etiene Andrade; Carrard, Vinicius Coelho
2014-01-01
Paracoccidioidomycosis (PCM) is a deep mycosis with primary lung manifestations that may present with cutaneous and oral lesions. Oral lesions mimic other infectious diseases or even squamous cell carcinoma, both clinically and microscopically. Sometimes the dentist is the first to detect the disease, because lung lesions are asymptomatic or even misdiagnosed. An unusual case of PCM with 5 months of evolution, presenting pulmonary, oral, and cutaneous lesions, which was diagnosed by the dentist based on the oral lesions, is presented and discussed. PMID:24963249
Semi-automated Image Processing for Preclinical Bioluminescent Imaging.
Slavine, Nikolai V; McColl, Roderick W
Bioluminescent imaging is a valuable noninvasive technique for investigating tumor dynamics and specific biological molecular events in living animals to better understand the effects of human disease in animal models. The purpose of this study was to develop and test a strategy for automated methods of bioluminescence image processing, from data acquisition to obtaining 3D images. In order to optimize this procedure, a semi-automated image processing approach with a multi-modality image handling environment was developed. To identify a bioluminescent source location and strength, we used the light flux detected on the surface of the imaged object by CCD cameras. For phantom calibration tests and object surface reconstruction we used the MLEM algorithm. For internal bioluminescent sources we used the diffusion approximation, balancing the internal and external intensities on the boundary of the medium; after determining an initial first-order approximation for the photon fluence, we applied a novel iterative deconvolution method to obtain the final reconstruction result. We find that the reconstruction techniques successfully used the depth-dependent light transport approach and semi-automated image processing to provide a realistic 3D model of the lung tumor. Our image processing software can optimize and decrease the time required for volumetric imaging and quantitative assessment. The data obtained from light phantom and mouse lung tumor images demonstrate the utility of the image reconstruction algorithms and the semi-automated approach to the bioluminescent image processing procedure. We suggest that the developed image processing approach can be applied to preclinical imaging studies: characterizing tumor growth, identifying metastases, and potentially determining the effectiveness of cancer treatment.
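The MLEM iteration mentioned above is, for imaging with a known point spread function, equivalent to the Richardson-Lucy update. A generic sketch (not the authors' implementation; the PSF and iteration count are assumptions):

import numpy as np
from scipy.signal import fftconvolve

def mlem_deconvolve(image, psf, n_iter=50):
    """Plain MLEM/Richardson-Lucy iteration for a float 1D or 2D image:

        x_{k+1} = x_k * [ PSF^T conv ( y / (PSF conv x_k) ) ]
    """
    psf_flip = psf[::-1, ::-1] if psf.ndim == 2 else psf[::-1]
    x = np.full_like(image, image.mean())        # flat positive start
    for _ in range(n_iter):
        est = fftconvolve(x, psf, mode="same")
        ratio = image / np.maximum(est, 1e-12)   # guard against div by 0
        x *= fftconvolve(ratio, psf_flip, mode="same")
    return x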
Epithelioid hemangioendotheliomas of the liver and lung in children and adolescents.
Hettmer, Simone; Andrieux, Geoffroy; Hochrein, Jochen; Kurz, Philipp; Rössler, Jochen; Lassmann, Silke; Werner, Martin; von Bubnoff, Nikolas; Peters, Christoph; Koscielniak, Ewa; Sparber-Sauer, Monika; Niemeyer, Charlotte; Mentzel, Thomas; Busch, Hauke; Boerries, Melanie
2017-12-01
Epithelioid hemangioendothelioma (EHE) is a rare vascular sarcoma. Visceral forms arise in the liver/lungs. We review the clinical and molecular phenotype of pediatric visceral EHE based on the case of a 9-year-old male child with EHE of the liver/lungs. His tumor expressed the EHE-specific fusion oncogene WWTR1-CAMTA1. Molecular characterization revealed a low somatic mutation rate and activated interferon signaling, angiogenesis regulation, and blood vessel remodeling. After polychemotherapy and resection of lung tumors, residual disease remained stable on oral lenalidomide. Literature review identified another 24 children with EHE of the liver/lungs. Most presented with multifocal, systemic disease. Only those who underwent complete resection achieved complete remission. Four children experienced rapid progression and died. In six children, disease remained stable for years without therapy. Two patients died from progressive EHE 21 and 24 years after first diagnosis. Natural evolution of pediatric visceral EHE is variable, and long-term prognosis remains unclear. © 2017 Wiley Periodicals, Inc.
Cell-free DNA and next-generation sequencing in the service of personalized medicine for lung cancer
Bennett, Catherine W.; Berchem, Guy; Kim, Yeoun Jin; El-Khoury, Victoria
2016-01-01
Personalized medicine has emerged as the future of cancer care to ensure that patients receive individualized treatment specific to their needs. In order to provide such care, molecular techniques that enable oncologists to diagnose, treat, and monitor tumors are necessary. In the field of lung cancer, cell free DNA (cfDNA) shows great potential as a less invasive liquid biopsy technique, and next-generation sequencing (NGS) is a promising tool for analysis of tumor mutations. In this review, we outline the evolution of cfDNA and NGS and discuss the progress of using them in a clinical setting for patients with lung cancer. We also present an analysis of the role of cfDNA as a liquid biopsy technique and NGS as an analytical tool in studying EGFR and MET, two frequently mutated genes in lung cancer. Ultimately, we hope that using cfDNA and NGS for cancer diagnosis and treatment will become standard for patients with lung cancer and across the field of oncology. PMID:27589834
On the Evolution of the Pulmonary Alveolar Lipofibroblast
Torday, John S.; Rehan, Virender K.
2015-01-01
The pulmonary alveolar lipofibroblast was first reported in 1970. Since then its development, structure, function and molecular characteristics have been determined. Its capacity to actively absorb, store and 'traffic' neutral lipid for protection of the alveolus against oxidant injury, and for the active supply of substrate for lung surfactant phospholipid production, has offered the opportunity to identify a number of specialized functions of these strategically placed cells: namely, Parathyroid Hormone-related Protein (PTHrP) signaling and the expression of Adipocyte Differentiation Related Protein, leptin, peroxisome proliferator activator receptor gamma, and the prostaglandin E2 receptor EP2, all of which are stretch-regulated, explaining how and why surfactant production is 'on-demand' in service to ventilation-perfusion matching. Because of the central role of the lipofibroblast in vertebrate lung physiologic evolution, it is a Rosetta Stone for understanding how and why the lung evolved in adaptation to terrestrial life, beginning with the duplication of the PTHrP Receptor some 300 mya. Moreover, such detailed knowledge of the workings of the lipofibroblast has provided insight into the etiology and effective treatment of Bronchopulmonary Dysplasia based on physiologic principles rather than on pharmacology. PMID:26706109
Estimating Fluctuating Pressures From Distorted Measurements
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A.; Leondes, Cornelius T.
1994-01-01
Two algorithms extract estimates of time-dependent input (upstream) pressures from the outputs of pressure sensors located at the downstream ends of pneumatic tubes. They effect deconvolutions that account for the distorting effects of the tube upon the pressure signal. Distortion of pressure measurements by pneumatic tubes is also discussed in "Distortion of Pressure Signals in Pneumatic Tubes" (ARC-12868). The varying input pressure is estimated from the measured time-varying output pressure by one of two deconvolution algorithms that take account of measurement noise. Both algorithms are based on minimum-covariance (Kalman filtering) theory.
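As a rough illustration of this input-estimation idea (a minimal sketch under assumed dynamics, not the NTRS algorithms themselves): model the tube as a first-order lag with time constant tau, treat the unknown upstream pressure as a random-walk state, and let a Kalman filter track it from the noisy downstream measurement. All parameter values below are assumptions.

import numpy as np

def kalman_input_estimate(y, dt=0.001, tau=0.02, q_u=1e4, r=1e-4):
    """Estimate an unknown input pressure driving a first-order tube model.

    State s = [p_out, p_in]; p_out' = (p_in - p_out)/tau, and p_in is
    modeled as a random walk with process noise q_u.
    """
    F = np.array([[1 - dt / tau, dt / tau],
                  [0.0,          1.0]])
    H = np.array([[1.0, 0.0]])            # we observe p_out only
    Q = np.diag([0.0, q_u * dt])
    s, P = np.zeros(2), np.eye(2)
    est = []
    for yk in y:
        s = F @ s                          # predict
        P = F @ P @ F.T + Q
        innov = yk - H @ s                 # update
        S = H @ P @ H.T + r
        K = P @ H.T / S
        s = s + (K * innov).ravel()
        P = (np.eye(2) - K @ H) @ P
        est.append(s[1])                   # estimated upstream pressure
    return np.array(est)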
2012-02-12
is the total number of data points, is an approximately unbiased estimate of the "expected relative Kullback-Leibler distance" (information loss ... possible models). Thus, after each model from Table 2 is fit to a data set, we can compute the Akaike weights for the set of candidate models and use ... computed from the OLS best-fit model solution (top), from a deconvolution of the data using normal curves (middle) and from a deconvolution of the data
Fourier Deconvolution Methods for Resolution Enhancement in Continuous-Wave EPR Spectroscopy.
Reed, George H; Poyner, Russell R
2015-01-01
An overview of resolution enhancement of conventional, field-swept, continuous-wave electron paramagnetic resonance spectra using Fourier transform-based deconvolution methods is presented. Basic steps that are involved in resolution enhancement of calculated spectra using an implementation based on complex discrete Fourier transform algorithms are illustrated. Advantages and limitations of the method are discussed. An application to an experimentally obtained spectrum is provided to illustrate the power of the method for resolving overlapped transitions. © 2015 Elsevier Inc. All rights reserved.
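A minimal sketch of the kind of Fourier-domain resolution enhancement described (assumed Lorentzian broadening and Gaussian re-apodization; not the authors' implementation, and all widths are assumptions):

import numpy as np

def fourier_deconvolve(spectrum, field, width, narrow_factor=3.0):
    """Divide the spectrum's Fourier transform by that of a Lorentzian
    broadening function of the given width, then re-apodize with a
    narrower Gaussian to keep amplified noise bounded."""
    n = len(spectrum)
    dB = field[1] - field[0]
    t = np.fft.rfftfreq(n, d=dB)                    # conjugate variable
    S = np.fft.rfft(spectrum)
    lorentz_ft = np.exp(-np.pi * width * t)         # FT of Lorentzian
    gauss_ft = np.exp(-(np.pi * width * t / narrow_factor) ** 2)
    enhanced = np.fft.irfft(S * gauss_ft / np.maximum(lorentz_ft, 1e-8), n=n)
    return enhanced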
Least-Squares Deconvolution of Compton Telescope Data with the Positivity Constraint
NASA Technical Reports Server (NTRS)
Wheaton, William A.; Dixon, David D.; Tumer, O. Tumay; Zych, Allen D.
1993-01-01
We describe a Direct Linear Algebraic Deconvolution (DLAD) approach to imaging of data from Compton gamma-ray telescopes. Imposition of the additional physical constraint, that all components of the model be non-negative, has been found to have a powerful effect in stabilizing the results, giving spatial resolution at or near the instrumental limit. A companion paper (Dixon et al. 1993) presents preliminary images of the Crab Nebula region using data from COMPTEL on the Compton Gamma-Ray Observatory.
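The stabilizing effect of the non-negativity constraint can be reproduced in a toy linear model (a sketch with synthetic data, not the DLAD pipeline; the Gaussian kernel and sparsity pattern are assumptions):

import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n = 100
kernel = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)
# Column i of A is the instrument response to a unit impulse at i.
A = np.array([np.convolve(np.eye(n)[i], kernel, mode="same")
              for i in range(n)]).T
x_true = np.zeros(n)
x_true[[30, 34, 70]] = [1.0, 0.8, 0.5]            # sparse positive sources
y = A @ x_true + 0.01 * rng.standard_normal(n)

x_nn, residual = nnls(A, y)   # non-negative least squares stabilizes result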
An l1-TV Algorithm for Deconvolution with Salt and Pepper Noise
2009-04-01
deblurring in the presence of impulsive noise," Int. J. Comput. Vision, vol. 70, no. 3, pp. 279-298, Dec. 2006. [13] A. E. Beaton and J. W. Tukey, "The ... An l1-TV Algorithm for Deconvolution with Salt and Pepper Noise. Brendt Wohlberg, T-7 Mathematical Modeling and Analysis, Los Alamos National Laboratory ... and pepper noise, but the extension of this formulation to more general problems, such as deconvolution, has received little attention. We consider
Evolution of Functional Groups during Pyrolysis Oil Upgrading
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stankovikj, Filip; Tran, Chi-Cong; Kaliaguine, Serge
In this paper, we examine the evolution of functional groups (carbonyl, carboxyl, phenol, and hydroxyl) during stabilization at 100-200 °C of two typical wood-derived pyrolysis oils from BTG and Amaron, treated in a batch reactor over a Ru/C catalyst for 4 h. An aqueous and an oily phase were obtained. The content of functional groups in both phases was analyzed by GC/MS, 31P-NMR, 1H-NMR, elemental analysis, Karl Fischer titration, carbonyl groups by the Faix method, the Folin-Ciocalteu method, and UV-fluorescence. The consumption of hydrogen was between 0.007 and 0.016 g/g oil, and 0.001-0.020 g CH4/g oil, 0.005-0.016 g CO2/g oil and 0.03-0.10 g H2O/g oil were formed. The content of carbonyl, hydroxyl, and carboxyl groups in the volatile, GC-MS-detectable fraction decreased (80%, 65%, and ~70%, respectively), while their behavior in the total oil, and hence in the non-volatile fraction, was more complex. The carbonyl groups initially decreased, reaching a minimum at ~125-150 °C, and then increased, while the hydroxyl groups showed the reverse trend. This might be explained by initial hydrogenation of the carbonyl groups to form hydroxyls, followed by dehydration reactions at higher temperatures that may increase their content. 31P-NMR was at the limit of its sensitivity for precisely detecting changes in the carboxylic groups in the non-volatile fraction; however, the more precise titration method showed that the concentration of carboxylic groups in the non-volatile fraction remains constant with increasing stabilization temperature. The UV-fluorescence results show that repolymerization increases with temperature. ATR-FTIR coupled with deconvolution of the region between 1490 and 1850 cm-1 proved to be a good tool for following the changes in carbonyl groups and phenols of the stabilized pyrolysis oils. The deconvolution of the IR bands around 1050 and 1260 cm-1 correlated very well with the changes in the 31P-NMR-silent O groups (likely ethers). Most of the H2O formation could be explained by the significant reduction of these silent O groups (from 12% in the fresh oils to 6-2% in the stabilized oils), most probably belonging to ethers.
[Pulmonary Langerhans' cell histiocytosis (PLCH) revealed by pneumothorax: about a case].
Sajiai, Hafsa; Rachidi, Mariam; Serhane, Hind; Aitbatahar, Salma; Amro, Lamyae
2016-01-01
Langerhans cell histiocytosis is a rare disease of unknown etiology characterized by the infiltration of Langerhans cells into one or more organs. It has a polymorphic clinical presentation. We report the case of Mr R.Y, aged 22, with an 8 pack-year history of smoking, admitted to hospital with a complete spontaneous right-sided pneumothorax. Chest drainage was performed, with good evolution. A control chest CT scan showed multiple diffuse cystic formations, predominant in the upper lobes. Laboratory and imaging tests performed to detect systemic histiocytosis were negative. The patient's course was marked by pneumothorax recurrence; pleurodesis and lung biopsy were performed, which confirmed the diagnosis. The diagnosis of Langerhans cell histiocytosis should be considered in cases of pneumothorax associated with cystic lung lesions; the diagnosis is straightforward when the clinical and radiological picture is suggestive. Nevertheless, therapeutic options are limited and pneumothorax recurrence is common.
Texas two-step: a framework for optimal multi-input single-output deconvolution.
Neelamani, Ramesh; Deffenbaugh, Max; Baraniuk, Richard G
2007-11-01
Multi-input single-output deconvolution (MISO-D) aims to extract a deblurred estimate of a target signal from several blurred and noisy observations. This paper develops a new two step framework--Texas Two-Step--to solve MISO-D problems with known blurs. Texas Two-Step first reduces the MISO-D problem to a related single-input single-output deconvolution (SISO-D) problem by invoking the concept of sufficient statistics (SSs) and then solves the simpler SISO-D problem using an appropriate technique. The two-step framework enables new MISO-D techniques (both optimal and suboptimal) based on the rich suite of existing SISO-D techniques. In fact, the properties of SSs imply that a MISO-D algorithm is mean-squared-error optimal if and only if it can be rearranged to conform to the Texas Two-Step framework. Using this insight, we construct new wavelet- and curvelet-based MISO-D algorithms with asymptotically optimal performance. Simulated and real data experiments verify that the framework is indeed effective.
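A hedged sketch of the two-step idea in the Fourier domain (not the authors' code; function names and the regularization constant are assumptions). Step 1 collapses the K observations into one sufficient statistic; step 2 inverts the resulting SISO problem with a Wiener-type filter:

import numpy as np

def miso_to_siso_wiener(obs, blurs, noise_var, reg=1e-3):
    """obs: list of K observed signals; blurs: list of K blur kernels;
    noise_var: list of K noise variances.

    Step 1: Z = sum_k conj(H_k) Y_k / sigma_k^2, with effective blur
            G = sum_k |H_k|^2 / sigma_k^2  (a sufficient statistic).
    Step 2: regularized SISO inversion of (Z, G).
    """
    n = len(obs[0])
    Z = np.zeros(n, dtype=complex)
    G = np.zeros(n)
    for y, h, s2 in zip(obs, blurs, noise_var):
        H = np.fft.fft(h, n)
        Z += np.conj(H) * np.fft.fft(y) / s2
        G += np.abs(H) ** 2 / s2
    X = Z / (G + reg)
    return np.real(np.fft.ifft(X))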
Voigt deconvolution method and its applications to pure oxygen absorption spectrum at 1270 nm band.
Al-Jalali, Muhammad A; Aljghami, Issam F; Mahzia, Yahia M
2016-03-15
Experimental spectral lines of pure oxygen in the 1270 nm band were analyzed by the Voigt deconvolution method. The method gave a total Voigt profile arising from two overlapping bands. Deconvolution of the total Voigt profile yields two Voigt profiles: the first from the O2 dimol band envelope at 1264 nm, and the second from the O2 monomer band envelope at 1268 nm. In addition, the Voigt profile itself is the convolution of Lorentzian and Gaussian distributions. Competition between thermal and collisional effects was clearly observed through the competition between the Gaussian and Lorentzian widths of each band envelope. The Voigt full width at half maximum (Voigt FWHM) of each line, and the ratio of Lorentzian to Gaussian width (ΓL/ΓG), were investigated. Applied pressures were 1, 2, 3, 4, 5, and 8 bar, and temperatures were 298 K, 323 K, 348 K, and 373 K. Copyright © 2015 Elsevier B.V. All rights reserved.
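The decomposition into two overlapping Voigt bands can be illustrated with a least-squares fit (a sketch on synthetic data; all amplitudes and widths below are assumptions, not the paper's values):

import numpy as np
from scipy.special import voigt_profile
from scipy.optimize import curve_fit

def two_voigt(x, a1, c1, s1, g1, a2, c2, s2, g2):
    # Sum of two Voigt profiles: voigt_profile(x, sigma, gamma) is the
    # convolution of a Gaussian (sigma) and a Lorentzian (gamma HWHM).
    return (a1 * voigt_profile(x - c1, s1, g1)
            + a2 * voigt_profile(x - c2, s2, g2))

x = np.linspace(1255, 1280, 500)                     # wavelength, nm
y = two_voigt(x, 1.0, 1264.0, 1.2, 0.8, 2.0, 1268.0, 1.0, 1.5)
y += 0.01 * np.random.default_rng(1).standard_normal(x.size)

p0 = [1, 1264, 1, 1, 2, 1268, 1, 1]                  # initial guesses
popt, _ = curve_fit(two_voigt, x, y, p0=p0)
ratio_L_over_G = popt[3] / popt[2]                   # Lorentzian/Gaussian width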
Hom, Erik F. Y.; Marchis, Franck; Lee, Timothy K.; Haase, Sebastian; Agard, David A.; Sedat, John W.
2011-01-01
We describe an adaptive image deconvolution algorithm (AIDA) for myopic deconvolution of multi-frame and three-dimensional data acquired through astronomical and microscopic imaging. AIDA is a reimplementation and extension of the MISTRAL method developed by Mugnier and co-workers and shown to yield object reconstructions with excellent edge preservation and photometric precision [J. Opt. Soc. Am. A 21, 1841 (2004)]. Written in Numerical Python with calls to a robust constrained conjugate gradient method, AIDA has significantly improved run times over the original MISTRAL implementation. Included in AIDA is a scheme to automatically balance maximum-likelihood estimation and object regularization, which significantly decreases the amount of time and effort needed to generate satisfactory reconstructions. We validated AIDA using synthetic data spanning a broad range of signal-to-noise ratios and image types and demonstrated the algorithm to be effective for experimental data from adaptive optics–equipped telescope systems and wide-field microscopy. PMID:17491626
A novel SURE-based criterion for parametric PSF estimation.
Xue, Feng; Blu, Thierry
2015-02-01
We propose an unbiased estimate of a filtered version of the mean squared error--the blur-SURE (Stein's unbiased risk estimate)--as a novel criterion for estimating an unknown point spread function (PSF) from the degraded image only. The PSF is obtained by minimizing this new objective functional over a family of Wiener processings. Based on this estimated blur kernel, we then perform nonblind deconvolution using our recently developed algorithm. The SURE-based framework is exemplified with a number of parametric PSF, involving a scaling factor that controls the blur size. A typical example of such parametrization is the Gaussian kernel. The experimental results demonstrate that minimizing the blur-SURE yields highly accurate estimates of the PSF parameters, which also result in a restoration quality that is very similar to the one obtained with the exact PSF, when plugged into our recent multi-Wiener SURE-LET deconvolution algorithm. The highly competitive results obtained outline the great potential of developing more powerful blind deconvolution algorithms based on SURE-like estimates.
Zunder, Eli R.; Finck, Rachel; Behbehani, Gregory K.; Amir, El-ad D.; Krishnaswamy, Smita; Gonzalez, Veronica D.; Lorang, Cynthia G.; Bjornson, Zach; Spitzer, Matthew H.; Bodenmiller, Bernd; Fantl, Wendy J.; Pe’er, Dana; Nolan, Garry P.
2015-01-01
Mass-tag cell barcoding (MCB) labels individual cell samples with unique combinatorial barcodes, after which they are pooled for processing and measurement as a single multiplexed sample. The MCB method eliminates variability between samples in antibody staining and instrument sensitivity, reduces antibody consumption, and shortens instrument measurement time. Here, we present an optimized MCB protocol with several improvements over previously described methods. The use of palladium-based labeling reagents expands the number of measurement channels available for mass cytometry and reduces interference with lanthanide-based antibody measurement. An error-detecting combinatorial barcoding scheme allows cell doublets to be identified and removed from the analysis. A debarcoding algorithm that is single cell-based rather than population-based improves the accuracy and efficiency of sample deconvolution. This debarcoding algorithm has been packaged into software that allows rapid and unbiased sample deconvolution. The MCB procedure takes 3–4 h, not including sample acquisition time of ~1 h per million cells. PMID:25612231
Automated processing for proton spectroscopic imaging using water reference deconvolution.
Maudsley, A A; Wu, Z; Meyerhoff, D J; Weiner, M W
1994-06-01
Automated formation of MR spectroscopic images (MRSI) is necessary before routine application of these methods is possible for in vivo studies; however, this task is complicated by the presence of spatially dependent instrumental distortions and the complex nature of the MR spectrum. A data processing method is presented for completely automated formation of in vivo proton spectroscopic images, and applied for analysis of human brain metabolites. This procedure uses the water reference deconvolution method (G. A. Morris, J. Magn. Reson. 80, 547(1988)) to correct for line shape distortions caused by instrumental and sample characteristics, followed by parametric spectral analysis. Results for automated image formation were found to compare favorably with operator dependent spectral integration methods. While the water reference deconvolution processing was found to provide good correction of spatially dependent resonance frequency shifts, it was found to be susceptible to errors for correction of line shape distortions. These occur due to differences between the water reference and the metabolite distributions.
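A loose sketch of the water-referenced lineshape correction idea (the exact lineshape model in Morris (1988) differs in detail; function names and the target Lorentzian width are assumptions):

import numpy as np

def water_reference_correct(metab_fid, water_fid, t, ideal_width=2.0):
    """Divide the metabolite FID by the measured water FID and substitute
    an ideal Lorentzian decay of assumed width (Hz), so that instrumental
    frequency shifts and lineshape distortions common to both are removed."""
    ideal = np.exp(-np.pi * ideal_width * t)          # ideal reference decay
    safe_water = np.where(np.abs(water_fid) > 1e-12, water_fid, 1e-12)
    return metab_fid * ideal / safe_water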
DECONV-TOOL: An IDL based deconvolution software package
NASA Technical Reports Server (NTRS)
Varosi, F.; Landsman, W. B.
1992-01-01
There are a variety of algorithms for deconvolution of blurred images, each having its own criteria or statistic to be optimized in order to estimate the original image data. Using the Interactive Data Language (IDL), we have implemented the Maximum Likelihood, Maximum Entropy, Maximum Residual Likelihood, and sigma-CLEAN algorithms in a unified environment called DeConv_Tool. Most of the algorithms have as their goal the optimization of statistics such as standard deviation and mean of residuals. Shannon entropy, log-likelihood, and chi-square of the residual auto-correlation are computed by DeConv_Tool for the purpose of determining the performance and convergence of any particular method and comparisons between methods. DeConv_Tool allows interactive monitoring of the statistics and the deconvolved image during computation. The final results, and optionally, the intermediate results, are stored in a structure convenient for comparison between methods and review of the deconvolution computation. The routines comprising DeConv_Tool are available via anonymous FTP through the IDL Astronomy User's Library.
Deconvolutions based on singular value decomposition and the pseudoinverse: a guide for beginners.
Hendler, R W; Shrager, R I
1994-01-01
Singular value decomposition (SVD) is deeply rooted in the theory of linear algebra, and because of this is not readily understood by a large group of researchers who could profit from its application. In this paper, we discuss the subject on a level that should be understandable to scientists who are not well versed in linear algebra. However, because it is necessary that certain key concepts in linear algebra be appreciated in order to comprehend what is accomplished by SVD, we present the section, 'Bare basics of linear algebra'. This is followed by a discussion of the theory of SVD. Next we present step-by-step examples to illustrate how SVD is applied to deconvolute a titration involving a mixture of three pH indicators. One noiseless case is presented as well as two cases where either a fixed or varying noise level is present. Finally, we discuss additional deconvolutions of mixed spectra based on the use of the pseudoinverse.
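In the same beginner-friendly spirit, a minimal sketch of deconvolution via the truncated-SVD pseudoinverse (generic numpy code, not the paper's worked examples; the truncation tolerance is an assumption):

import numpy as np

def pinv_deconvolve(A, y, tol=1e-3):
    """Solve y = A x by the SVD pseudoinverse. Reciprocals of small
    singular values amplify noise, so values below tol * s_max are
    discarded, which regularizes the inversion."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(s > tol * s.max(), 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ y))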
NASA Astrophysics Data System (ADS)
Gong, Changfei; Zeng, Dong; Bian, Zhaoying; Huang, Jing; Zhang, Xinyu; Zhang, Hua; Lu, Lijun; Feng, Qianjin; Liang, Zhengrong; Ma, Jianhua
2016-03-01
Dynamic myocardial perfusion computed tomography (MPCT) is a promising technique for diagnosis and risk stratification of coronary artery disease by assessing myocardial perfusion hemodynamic maps (MPHM). However, the repeated scanning of the same region potentially delivers a relatively large radiation dose to patients. In this work, we present a robust MPCT deconvolution algorithm with adaptive-weighted tensor total variation regularization, termed MPD-AwTTV, to estimate the residue function accurately in the low-dose context. More specifically, the AwTTV regularization takes into account the anisotropic edge property of MPCT images, which mitigates the drawbacks of conventional total variation (TV) regularization. An effective iterative algorithm was adopted to minimize the associated objective function. Experimental results on a modified XCAT phantom demonstrate that the MPD-AwTTV algorithm outperforms other existing deconvolution algorithms in terms of noise-induced artifact suppression, edge detail preservation, and accurate MPHM estimation.
Deconvoluting complex structural histories archived in brittle fault zones
NASA Astrophysics Data System (ADS)
Viola, G.; Scheiber, T.; Fredin, O.; Zwingmann, H.; Margreth, A.; Knies, J.
2016-11-01
Brittle deformation can saturate the Earth's crust with faults and fractures in an apparently chaotic fashion. The details of brittle deformational histories and implications on, for example, seismotectonics and landscape, can thus be difficult to untangle. Fortunately, brittle faults archive subtle details of the stress and physical/chemical conditions at the time of initial strain localization and eventual subsequent slip(s). Hence, reading those archives offers the possibility to deconvolute protracted brittle deformation. Here we report K-Ar isotopic dating of synkinematic/authigenic illite coupled with structural analysis to illustrate an innovative approach to the high-resolution deconvolution of brittle faulting and fluid-driven alteration of a reactivated fault in western Norway. Permian extension preceded coaxial reactivation in the Jurassic and Early Cretaceous fluid-related alteration with pervasive clay authigenesis. This approach represents important progress towards time-constrained structural models, where illite characterization and K-Ar analysis are a fundamental tool to date faulting and alteration in crystalline rocks.
Klein, Wilfried; Abe, Augusto S; Perry, Steven F
2003-04-15
The surgical removal of the post-hepatic septum (PHS) in the tegu lizard, Tupinambis merianae, significantly reduces resting lung volume (V(Lr)) and maximal lung volume (V(Lm)) when compared with tegus with intact PHS. Standardised for body mass (M(B)), static lung compliance was significantly less in tegus without PHS. Pleural and abdominal pressures followed, like ventilation, a biphasic pattern. In general, pressures increased during expiration and decreased during inspiration. However, during expiration pressure changes showed a marked intra- and interindividual variation. The removal of the PHS resulted in a lower cranio-caudal intracoelomic pressure differential, but had no effect on the general pattern of pressure changes accompanying ventilation. These results show that a perforated PHS that lacks striated muscle has significant influence on static breathing mechanics in Tupinambis and by analogy provides valuable insight into similar processes that led to the evolution of the mammalian diaphragm.
Chen, Hong; Chen, Qun; Jiang, Chun-Ming; Shi, Guang-Yue; Sui, Bo-Wen; Zhang, Wei; Yang, Li-Zhen; Li, Zhu-Ying; Liu, Li; Su, Yu-Ming; Zhao, Wen-Cheng; Sun, Hong-Qiang; Li, Zhen-Zi; Fu, Zhou
2018-03-01
Idiopathic pulmonary fibrosis (IPF) and tumors both involve abnormal cell proliferation that damages the body. This malignant cell evolution in a stressful environment closely resembles epithelial-mesenchymal transition (EMT). As a well-known EMT-inducing factor, TGFβ plays an important role in the progression of multiple diseases; however, drugs that target TGFB1 are limited. In this study, we found that triptolide (TPL), a Chinese medicine extract, exerts an anti-lung-fibrosis effect by inhibiting the EMT of lung epithelial cells. In addition, triptolide directly binds to TGFβ, subsequently increasing E-cadherin expression and decreasing vimentin expression. In in vivo studies, TPL improved survival and inhibited lung fibrosis in mice. In summary, this study reveals the therapeutic potential of TPL in paraquat-induced lung fibrosis through regulation of TGFβ-dependent EMT progression. Copyright © 2017 Elsevier B.V. All rights reserved.
Expression and significance of Ki-67 in lung cancer.
Folescu, Roxana; Levai, Codrina Mihaela; Grigoraş, Mirela Loredana; Arghirescu, Teodora Smaranda; Talpoş, Ioana Cristina; Gîndac, Ciprian Mihai; Zamfir, Carmen Lăcrămioara; Poroch, Vladimir; Anghel, Mirella Dorina
2018-01-01
The Ki-67 parameter is a proliferation marker in malignant tumors. The increased proliferative activity and poor prognosis in lung cancer led us to investigate different parameters connected to tumor aggressiveness, such as cellularity, Ki-67 positivity rate, and proliferating cell nuclear antigen (PCNA). We evaluated the proliferative activity in 62 primary lung tumors by determining the percentage of cells immunoreactive for Ki-67 and PCNA (using MIB-1 and PCNA monoclonal antibodies), classifying Ki-67 and PCNA immunoreactivity into three score groups. The results emphasized a link between the Ki-67 score and the histological tumor subtype, tumor cellularity, degree of differentiation, and other proliferation immunohistochemistry (IHC) markers, such as the p53 cellular tumor antigen. Tumor cellularity, the Ki-67 positivity rate and PCNA, together with the clinical stage and the histological differentiation, provide additional useful information for anticipating the evolution and prognosis of lung cancer.
Evolution of the Immune Response to Chronic Airway Colonization with Aspergillus fumigatus Hyphae.
Urb, Mirjam; Snarr, Brendan D; Wojewodka, Gabriella; Lehoux, Mélanie; Lee, Mark J; Ralph, Benjamin; Divangahi, Maziar; King, Irah L; McGovern, Toby K; Martin, James G; Fraser, Richard; Radzioch, Danuta; Sheppard, Donald C
2015-09-01
Airway colonization by the mold Aspergillus fumigatus is common in patients with underlying lung disease and is associated with chronic airway inflammation. Studies probing the inflammatory response to colonization with A. fumigatus hyphae have been hampered by the lack of a model of chronic colonization in immunocompetent mice. By infecting mice intratracheally with conidia embedded in agar beads (Af beads), we have established an in vivo model to study the natural history of airway colonization with live A. fumigatus hyphae. Histopathological examination and galactomannan assay of lung homogenates demonstrated that hyphae exited beads and persisted in the lungs of mice up to 28 days postinfection without invasive disease. Fungal lesions within the airways were surrounded by a robust neutrophilic inflammatory reaction and peribronchial infiltration of lymphocytes. Whole-lung cytokine analysis from Af bead-infected mice revealed an increase in proinflammatory cytokines and chemokines early in infection. Evidence of a Th2 type response was observed only early in the course of colonization, including increased levels of interleukin-4 (IL-4), elevated IgE levels in serum, and a mild increase in airway responsiveness. Pulmonary T cell subset analysis during infection mirrored these results with an initial transient increase in IL-4-producing CD4(+) T cells, followed by a rise in IL-17 and Foxp3(+) cells by day 14. These results provide the first report of the evolution of the immune response to A. fumigatus hyphal colonization. Copyright © 2015, American Society for Microbiology. All Rights Reserved.
Optimal application of Morrison's iterative noise removal for deconvolution. Appendices
NASA Technical Reports Server (NTRS)
Ioup, George E.; Ioup, Juliette W.
1987-01-01
Morrison's iterative method of noise removal, or Morrison's smoothing, is applied in a simulation to noise-added data sets of various noise levels to determine its optimum use. Morrison's smoothing is applied for noise removal alone, and for noise removal prior to deconvolution. For the latter, an accurate method is analyzed to provide confidence in the optimization. The method consists of convolving the data with an inverse filter calculated by taking the inverse discrete Fourier transform of the reciprocal of the transform of the response of the system. Filters of various lengths are calculated for the narrow and wide Gaussian response functions used. Deconvolution of non-noisy data is performed, and the error in each deconvolution is calculated. Plots are produced of error versus filter length, and from these plots the most accurate filter lengths are determined. The statistical methodologies employed in the optimizations of Morrison's method are similar. A typical peak-type input is selected and convolved with the two response functions to produce the data sets to be analyzed. Both constant and ordinate-dependent Gaussian-distributed noise is added to the data, where the noise levels of the data are characterized by their signal-to-noise ratios. The error measures employed in the optimizations are the L1 and L2 norms. Results of the optimizations for both Gaussians, both noise types, and both norms include figures of optimum iteration number and error improvement versus signal-to-noise ratio, and tables of results. The statistical variation of all quantities considered is also given.
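The reciprocal-of-transfer-function deconvolution described above can be sketched in a few lines (a hedged illustration; the stabilization constant eps is an assumption added to keep the reciprocal bounded where the response transform is near zero):

import numpy as np

def inverse_filter(data, response, eps=1e-3):
    """Deconvolve by forming the reciprocal of the DFT of the system
    response and applying it to the data via the inverse DFT."""
    n = len(data)
    H = np.fft.fft(response, n)
    H_inv = np.conj(H) / (np.abs(H) ** 2 + eps)   # stabilized reciprocal
    return np.real(np.fft.ifft(np.fft.fft(data) * H_inv))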
Total variation based image deconvolution for extended depth-of-field microscopy images
NASA Astrophysics Data System (ADS)
Hausser, F.; Beckers, I.; Gierlak, M.; Kahraman, O.
2015-03-01
One approach for a detailed understanding of dynamic cellular processes during drug delivery is the use of functionalized biocompatible nanoparticles and fluorescent markers. An appropriate imaging system has to detect these moving particles, as well as whole cell volumes, in real time with a lateral resolution in the range of a few 100 nm. In a previous study, extended depth-of-field microscopy (EDF-microscopy) was applied to fluorescent beads and Tradescantia stamen hair cells, and the concept of real-time imaging was proved in different microscopic modes. In principle, a phase retardation system such as a programmable spatial light modulator or a static waveplate is incorporated in the light path and modulates the wavefront of light. Hence the focal ellipsoid is smeared out and images initially appear blurred. Image restoration by deconvolution using the known point-spread function (PSF) of the optical system is necessary to achieve sharp microscopic images over an extended depth of field. This work focuses on the investigation and optimization of deconvolution algorithms to solve this restoration problem satisfactorily. This inverse problem is challenging due to the presence of Poisson-distributed noise and Gaussian noise, and because the PSF used for deconvolution exactly fits only one plane within the object. We use non-linear Total Variation based image restoration techniques, where different types of noise can be treated properly. Various algorithms are evaluated for artificially generated 3D images as well as for fluorescence measurements of BPAE cells.
Deconvolution of the vestibular evoked myogenic potential.
Lütkenhöner, Bernd; Basel, Türker
2012-02-07
The vestibular evoked myogenic potential (VEMP) and the associated variance modulation can be understood by a convolution model. Two functions of time are incorporated into the model: the motor unit action potential (MUAP) of an average motor unit, and the temporal modulation of the MUAP rate of all contributing motor units, briefly called rate modulation. The latter is the function of interest, whereas the MUAP acts as a filter that distorts the information contained in the measured data. Here, it is shown how to recover the rate modulation by undoing the filtering using a deconvolution approach. The key aspects of our deconvolution algorithm are as follows: (1) the rate modulation is described in terms of just a few parameters; (2) the MUAP is calculated by Wiener deconvolution of the VEMP with the rate modulation; (3) the model parameters are optimized using a figure-of-merit function where the most important term quantifies the difference between measured and model-predicted variance modulation. The effectiveness of the algorithm is demonstrated with simulated data. An analysis of real data confirms the view that there are basically two components, which roughly correspond to the waves p13-n23 and n34-p44 of the VEMP. The rate modulation corresponding to the first, inhibitory component is much stronger than that corresponding to the second, excitatory component. But the latter is more extended so that the two modulations have almost the same equivalent rectangular duration. Copyright © 2011 Elsevier Ltd. All rights reserved.
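Step (2) of the scheme above can be illustrated generically (a sketch, not the authors' code; the assumed SNR constant plays the role of the Wiener noise-to-signal term):

import numpy as np

def wiener_deconvolve(vemp, rate_mod, snr=100.0):
    """Recover the average MUAP by Wiener deconvolution of the measured
    VEMP with a candidate rate-modulation waveform."""
    n = len(vemp)
    R = np.fft.fft(rate_mod, n)
    W = np.conj(R) / (np.abs(R) ** 2 + 1.0 / snr)   # Wiener filter
    return np.real(np.fft.ifft(np.fft.fft(vemp) * W))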
Isotope pattern deconvolution as a tool to study iron metabolism in plants.
Rodríguez-Castrillón, José Angel; Moldovan, Mariella; García Alonso, J Ignacio; Lucena, Juan José; García-Tomé, Maria Luisa; Hernández-Apaolaza, Lourdes
2008-01-01
Isotope pattern deconvolution is a mathematical technique for isolating distinct isotope signatures from mixtures of natural abundance and enriched tracers. In iron metabolism studies measurement of all four isotopes of the element by high-resolution multicollector or collision cell ICP-MS allows the determination of the tracer/tracee ratio with simultaneous internal mass bias correction and lower uncertainties. This technique was applied here for the first time to study iron uptake by cucumber plants using 57Fe-enriched iron chelates of the o,o and o,p isomers of ethylenediaminedi(o-hydroxyphenylacetic) acid (EDDHA) and ethylenediamine tetraacetic acid (EDTA). Samples of root, stem, leaves, and xylem sap, after exposure of the cucumber plants to the mentioned 57Fe chelates, were collected, dried, and digested using nitric acid. The isotopic composition of iron in the samples was measured by ICP-MS using a high-resolution multicollector instrument. Mass bias correction was computed using both a natural abundance iron standard and by internal correction using isotope pattern deconvolution. It was observed that, for plants with low 57Fe enrichment, isotope pattern deconvolution provided lower tracer/tracee ratio uncertainties than the traditional method applying external mass bias correction. The total amount of the element in the plants was determined by isotope dilution analysis, using a collision cell quadrupole ICP-MS instrument, after addition of 57Fe or natural abundance Fe in a known amount which depended on the isotopic composition of the sample.
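The core of isotope pattern deconvolution is a small least-squares problem: express the measured, normalized isotope pattern as a molar-weighted sum of the natural and enriched patterns. A sketch (the natural Fe abundances are standard values; the tracer composition below is hypothetical, standing in for the certified 57Fe-enriched material):

import numpy as np

# Isotope abundances in the order (54Fe, 56Fe, 57Fe, 58Fe).
natural = np.array([0.05845, 0.91754, 0.02119, 0.00282])
tracer  = np.array([0.002,   0.030,   0.960,   0.008])   # hypothetical

def tracer_tracee_ratio(measured):
    """Least-squares molar fractions of the natural and enriched
    patterns in a measured isotope pattern; returns tracer/tracee."""
    A = np.column_stack([natural, tracer])
    x, *_ = np.linalg.lstsq(A, measured / measured.sum(), rcond=None)
    return x[1] / x[0]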
First results from the LIFE project: discovery of two magnetic hot evolved stars
NASA Astrophysics Data System (ADS)
Martin, A. J.; Neiner, C.; Oksala, M. E.; Wade, G. A.; Keszthelyi, Z.; Fossati, L.; Marcolino, W.; Mathis, S.; Georgy, C.
2018-04-01
We present the initial results of the Large Impact of magnetic Fields on the Evolution of hot stars (LIFE) project. The focus of this project is the search for magnetic fields in evolved OBA giants and supergiants with visual magnitudes between 4 and 8, with the aim of investigating how the magnetic fields observed in upper main-sequence (MS) stars evolve from the MS until the late post-MS stages. In this paper, we present spectropolarimetric observations of 15 stars observed using the ESPaDOnS instrument of the Canada-France-Hawaii Telescope. For each star, we have determined the fundamental parameters and have used stellar evolution models to calculate their mass, age, and radius. Using the least-squares deconvolution (LSD) technique, we have produced averaged line profiles for each star. From these profiles, we have measured the longitudinal magnetic field strength and have calculated the detection probability. We report the detection of magnetic fields in two stars of our sample: a weak field of Bl = 1.0 ± 0.2 G is detected in the post-MS A5 star 19 Aur, and a stronger field of Bl = -230 ± 10 G is detected in the MS/post-MS B8/9 star HR 3042.
Development of a Prognostic Marker for Lung Cancer Using Analysis of Tumor Evolution
2017-08-01
The goal of this project is to sequence the exomes of single tumor cells in order to construct evolutionary trees ... dissociation, tumor cell isolation, whole-genome amplification, and exome sequencing. We have begun to sequence the exomes of single cells and to ... of populations, the evolution of tumor cells within a tumor can be diagrammed on a phylogenetic tree. The more diverse a tumor's phylogenetic tree
Transcriptome profile and unique genetic evolution of positively selected genes in yak lungs.
Lan, DaoLiang; Xiong, XianRong; Ji, WenHui; Li, Jian; Mipam, Tserang-Donko; Ai, Yi; Chai, ZhiXin
2018-04-01
The yak (Bos grunniens), a unique bovine breed distributed mainly on the Qinghai-Tibetan Plateau, is considered a good model for studying plateau adaptability in mammals. The lungs are important functional organs that enable animals to adapt to their external environment. However, the genetic mechanism underlying the adaptability of yak lungs to harsh plateau environments remains unknown. To explore the unique evolutionary process and genetic mechanism of yak adaptation to plateau environments, we performed transcriptome sequencing of yak and cattle (Bos taurus) lungs using RNA-Seq technology, followed by a comparative analysis to identify positively selected genes in the yak. After deep sequencing, a normal transcriptome profile of the yak lung containing a total of 16,815 expressed genes was obtained, and the characteristics of the yak lung transcriptome were described by functional analysis. Furthermore, Ka/Ks comparison statistics identified 39 strongly positively selected genes in yak lungs. GO and KEGG analyses were then conducted for the functional annotation of these genes. The results of this study provide valuable data for further exploration of the unique evolutionary process of high-altitude hypoxia adaptation in yaks on the Tibetan Plateau and of the underlying genetic mechanism at the molecular level.
NASA Astrophysics Data System (ADS)
Kazakis, Nikolaos A.
2018-01-01
This comment concerns the correct presentation of an algorithm proposed in the above paper for glow-curve deconvolution in the case of a continuous distribution of trapping states. Since most researchers would use the proposed algorithm directly as published, they should be notified of its correct formulation for fitting the TL glow curves of materials with a continuous trap distribution using this equation.
An l1-TV algorithm for deconvolution with salt and pepper noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wohlberg, Brendt; Rodriguez, Paul
2008-01-01
There has recently been considerable interest in applying Total Variation with an ℓ1 data fidelity term to the denoising of images subject to salt and pepper noise, but the extension of this formulation to more general problems, such as deconvolution, has received little attention, most probably because the most efficient algorithms for ℓ1-TV denoising cannot handle more general inverse problems. We apply the Iteratively Reweighted Norm algorithm to this problem, and compare performance with an alternative algorithm based on the Mumford-Shah functional.
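A toy dense-matrix rendition of the iteratively-reweighted idea for a 1D ℓ1-TV deconvolution objective (a sketch under stated assumptions, not the authors' efficient implementation; lam, n_iter and eps are assumed parameters):

import numpy as np

def irn_l1tv(A, y, lam=1.0, n_iter=20, eps=1e-6):
    """Minimize ||A x - y||_1 + lam * TV(x) by solving a sequence of
    weighted least-squares problems that approximate both terms."""
    n = A.shape[1]
    D = np.diff(np.eye(n), axis=0)                 # finite differences
    x = np.linalg.lstsq(A, y, rcond=None)[0]       # unweighted start
    for _ in range(n_iter):
        wf = 1.0 / np.maximum(np.abs(A @ x - y), eps)   # data weights
        wr = 1.0 / np.maximum(np.abs(D @ x), eps)       # TV weights
        M = A.T @ (wf[:, None] * A) + lam * (D.T @ (wr[:, None] * D))
        x = np.linalg.solve(M, A.T @ (wf * y))
    return x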
van Soldt, Benjamin J; Metscher, Brian D; Poelmann, Robert E; Vervust, Bart; Vonk, Freek J; Müller, Gerd B; Richardson, Michael K
2015-01-01
Snake lungs show a remarkable diversity of organ asymmetries. The right lung is always fully developed, while the left lung is either absent, vestigial, or well-developed (but smaller than the right). A 'tracheal lung' is present in some taxa. These asymmetries are reflected in the pulmonary arteries. Lung asymmetry is known to appear at early stages of development in Thamnophis radix and Natrix natrix. Unfortunately, there is no developmental data on snakes with a well-developed or absent left lung. We examine the adult and developmental morphology of the lung and pulmonary arteries in the snakes Python curtus breitensteini, Pantherophis guttata guttata, Elaphe obsoleta spiloides, Calloselasma rhodostoma and Causus rhombeatus using gross dissection, MicroCT scanning and 3D reconstruction. We find that the right and tracheal lung develop similarly in these species. By contrast, the left lung either: (1) fails to develop; (2) elongates more slowly and aborts early without (2a) or with (2b) subsequent development of faveoli; (3) or develops normally. A right pulmonary artery always develops, but the left develops only if the left lung develops. No pulmonary artery develops in relation to the tracheal lung. We conclude that heterochrony in lung bud development contributes to lung asymmetry in several snake taxa. Secondly, the development of the pulmonary arteries is asymmetric at early stages, possibly because the splanchnic plexus fails to develop when the left lung is reduced. Finally, some changes in the topography of the pulmonary arteries are consequent on ontogenetic displacement of the heart down the body. Our findings show that the left-right asymmetry in the cardiorespiratory system of snakes is expressed early in development and may become phenotypically expressed through heterochronic shifts in growth, and changes in axial relations of organs and vessels. We propose a step-wise model for reduction of the left lung during snake evolution.
[Primitive lung abscess: an unusual situation in children].
Bouyahia, O; Jlidi, S; Sammoud, A
2014-12-01
Lung abscess is a localized area of non-tuberculous suppurative necrosis of the lung parenchyma, resulting in the formation of a cavity containing purulent material. This pathology is uncommon in childhood. A boy aged 3 years and 6 months was admitted with prolonged fever and dyspnea. Chest X-ray showed a non-systematized, well-defined, thick-walled, fluid-filled and excavated opacity containing an air-fluid level. Chest ultrasound examination showed a collection 6.8 cm in diameter in the right pulmonary field with an air-fluid level. Blood culture grew Staphylococcus aureus. The patient received broad-spectrum antibiotic therapy. Three days later, he presented with septic shock and surgical drainage was indicated. Histological examination confirmed the diagnosis of lung abscess. No underlying condition, such as an inoculation site, a local cause or immune deficiency, was found, and a diagnosis of primary abscess was made. The patient demonstrated complete recovery. He is asymptomatic, with normal chest X-ray and pulmonary function after 3 years of follow-up. Lung abscess is a rare cause of prolonged fever in childhood. An underlying condition must be excluded to rule out a secondary abscess. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
Zurbano, L; Zurbano, F
2017-10-01
Lung transplantation is a therapeutic procedure indicated for lung diseases that are terminal and irreversible (except lung cancer) despite the best current medical treatment. It is an emerging procedure in medical care. In this review, an analysis is made of the most frequent complications of lung transplantation related to the graft (rejection and chronic graft dysfunction) and to immunosuppression (infections, arterial hypertension, renal dysfunction, and diabetes), as well as others such as gastrointestinal complications and osteoporosis. The most advisable therapeutic options are also included. Specific mention is made of the reviews and follow-up needed for monitoring the graft and the patient, as well as the lifestyle recommended to improve prognosis and quality of life. An analysis is also made of the outcomes in the Spanish and international registries, their historical evolution, and the most frequent causes of death, in order to objectively assess the usefulness of transplantation. Copyright © 2016 Sociedad Española de Médicos de Atención Primaria (SEMERGEN). Publicado por Elsevier España, S.L.U. All rights reserved.
On the Evolution of the Mammalian Brain.
Torday, John S; Miller, William B
2016-01-01
Hobson and Friston have hypothesized that the brain must actively dissipate heat in order to process information (Hobson et al., 2014). This physiologic trait is functionally homologous with the first instantiation of life, formed by lipids suspended in water as micelles, allowing the reduction in entropy (heat dissipation). This circumvents the Second Law of Thermodynamics, permitting the transfer of information between living entities and enabling them to perpetually glean information from the environment, which is felt by many to correspond to evolution per se. The next evolutionary milestone was the advent of cholesterol, embedded in the cell membranes of primordial eukaryotes, facilitating metabolism, oxygenation and locomotion, the triadic basis for vertebrate evolution. Lipids were key to the homeostatic regulation of calcium, forming calcium channels. Cell membrane cholesterol also fostered metazoan evolution by forming lipid rafts for receptor-mediated cell-cell signaling, the origin of the endocrine system. The eukaryotic cell membrane exapted to all complex physiologic traits, including the lung and brain, which are molecularly homologous through the function of neuregulin, mediating both lung development and the myelinization of neurons. That cooption later exapted as endothermy during the water-land transition (Torday, 2015a), perhaps being the functional homolog for brain heat dissipation and conscious/mindful information processing. The skin and brain similarly share molecular homologies through the "skin-brain" hypothesis, giving insight into the cellular-molecular "arc" of consciousness from its unicellular origins to integrated physiology. This perspective on the evolution of the central nervous system clarifies self-organization, reconciling thermodynamic and informational definitions of the underlying biophysical mechanisms, thereby elucidating relations between the predictive capabilities of the brain and self-organizational processes.
On the origin of avian air sacs.
Farmer, C G
2006-11-01
For many vertebrates the lung is the largest and lightest organ in the body cavity and for these reasons can greatly affect an organism's shape, density, and its distribution of mass; characters that are important to locomotion. In this paper non-respiratory functions of the lung are considered along with data on the respiratory capacities and gas exchange abilities of birds and crocodilians to infer the evolutionary history of the respiratory systems of dinosaurs, including birds. From a quadrupedal ancestry theropod dinosaurs evolved a bipedal posture. Bipedalism is an impressive balancing act, especially for tall animals with massive heads. During this transition selection for good balance and agility may have helped shape pulmonary morphology. Respiratory adaptations arising for bipedalism are suggested to include a reduction in costal ventilation and the use of cuirassal ventilation with a caudad expansion of the lung into the dorsal abdominal cavity. The evolution of volant animals from bipeds required yet again a major reorganization in body form. With this transition avian air sacs may have been favored because they enhanced balance and agility in flight. Finally, I propose that these hypotheses can be tested by examining the importance of the air sacs to balance and agility in extant animals and that these data will enhance our understanding of the evolution of the respiratory system in archosaurs.
Gabor Deconvolution as Preliminary Method to Reduce Pitfall in Deeper Target Seismic Data
NASA Astrophysics Data System (ADS)
Oktariena, M.; Triyoso, W.
2018-03-01
The anelastic attenuation experienced by seismic waves during propagation is the cause of the non-stationary character of seismic data: absorption and scattering of energy produce increasing seismic energy loss with depth. A series of thin reservoir layers in the study area is located within the Talang Akar Fm. level, showing an indication of an interpretation pitfall due to the attenuation effect commonly observed in deeper-level seismic data. The attenuation effect greatly influences seismic images at deeper target levels, creating pitfalls in several respects. Seismic amplitude at a deeper target level often cannot represent the real subsurface character, owing to low amplitude values or chaotic events near the Basement. In terms of frequency, the decay appears as diminishing frequency content at deeper targets. Meanwhile, seismic amplitude is the simple tool for pointing out a Direct Hydrocarbon Indicator (DHI) in a preliminary geophysical study before more advanced interpretation methods are applied. A quick look at the post-stack seismic data shows the reservoir associated with a bright-spot DHI, while another, bigger bright-spot body is detected in the North East area near the field edge. A horizon slice confirms the possibility that the other bright-spot zone has a smaller delineation: an interpretation pitfall that commonly occurs at deeper seismic levels. We evaluate this pitfall by applying Gabor deconvolution to address the attenuation problem. Gabor deconvolution forms a partition of unity to factorize the trace into smaller convolution windows that can be processed as stationary packets, and it estimates both the source signature and its attenuation function. The enhanced seismic shows better imaging in the pitfall area that was previously detected as a vast bright-spot zone. When the enhanced seismic is used for further advanced reprocessing, the seismic impedance and Vp/Vs ratio slices show a better reservoir delineation, in which the pitfall area is reduced and partly merges into the background lithology. Gabor deconvolution removes the attenuation by performing spectral division in the Gabor domain, which by extension also reduces interpretation pitfalls in deeper-target seismic data.
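A conceptual sketch of Gabor-domain spectral division (not the paper's processing flow; the STFT window length, smoothing widths, and stabilization constant are all assumptions). The slowly varying part of the time-frequency magnitude stands in for the combined source signature and attenuation function, which is divided out:

import numpy as np
from scipy.signal import stft, istft
from scipy.ndimage import gaussian_filter

def gabor_deconvolve(trace, fs, nperseg=64, smooth=(6, 6), eps=1e-3):
    """Whiten a seismic trace by dividing its Gabor (STFT) coefficients
    by a heavily smoothed estimate of their magnitude."""
    f, t, G = stft(trace, fs=fs, nperseg=nperseg)
    mag = gaussian_filter(np.abs(G), sigma=smooth)   # slowly varying part
    G_white = G / (mag + eps * mag.max())            # spectral division
    _, rec = istft(G_white, fs=fs, nperseg=nperseg)
    return rec[: len(trace)]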
DECONVOLUTION OF IMAGES FROM BLAST 2005: INSIGHT INTO THE K3-50 AND IC 5146 STAR-FORMING REGIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roy, Arabindo; Netterfield, Calvin B.; Ade, Peter A. R.
2011-04-01
We present an implementation of the iterative flux-conserving Lucy-Richardson (L-R) deconvolution method of image restoration for maps produced by the Balloon-borne Large Aperture Submillimeter Telescope (BLAST). Compared to the direct Fourier transform method of deconvolution, the L-R operation restores images with better-controlled background noise and increases source detectability. Intermediate iterated images are useful for studying extended diffuse structures, while the later iterations truly enhance point sources to near the designed diffraction limit of the telescope. The L-R method of deconvolution is efficient in resolving compact sources in crowded regions while simultaneously conserving their respective flux densities. We have analyzed its performance and convergence extensively through simulations and cross-correlations of the deconvolved images with available high-resolution maps. We present new science results from two BLAST surveys, in the Galactic regions K3-50 and IC 5146, further demonstrating the benefits of performing this deconvolution. We have resolved three clumps within a radius of 4.5 arcmin inside the star-forming molecular cloud containing K3-50. Combining the well-resolved dust emission map with available multi-wavelength data, we have constrained the spectral energy distributions (SEDs) of five clumps to obtain masses (M), bolometric luminosities (L), and dust temperatures (T). The L-M diagram has been used as a diagnostic tool to estimate the evolutionary stages of the clumps. There are close relationships between dust continuum emission and both 21 cm radio continuum and 12CO molecular line emission. The restored extended large-scale structures in the Northern Streamer of IC 5146 have a strong spatial correlation with both SCUBA and high-resolution extinction images. A dust temperature of 12 K has been obtained for the central filament. We report physical properties of ten compact sources, including six associated protostars, by fitting SEDs to multi-wavelength data. All of these compact sources are still quite cold (typical temperature below ~16 K) and are above the critical Bonnor-Ebert mass. They have associated low-power young stellar objects. Further evidence for starless clumps has also been found in the IC 5146 region.
Deconvolution of Images from BLAST 2005: Insight into the K3-50 and IC 5146 Star-forming Regions
NASA Astrophysics Data System (ADS)
Roy, Arabindo; Ade, Peter A. R.; Bock, James J.; Brunt, Christopher M.; Chapin, Edward L.; Devlin, Mark J.; Dicker, Simon R.; France, Kevin; Gibb, Andrew G.; Griffin, Matthew; Gundersen, Joshua O.; Halpern, Mark; Hargrave, Peter C.; Hughes, David H.; Klein, Jeff; Marsden, Gaelen; Martin, Peter G.; Mauskopf, Philip; Netterfield, Calvin B.; Olmi, Luca; Patanchon, Guillaume; Rex, Marie; Scott, Douglas; Semisch, Christopher; Truch, Matthew D. P.; Tucker, Carole; Tucker, Gregory S.; Viero, Marco P.; Wiebe, Donald V.
2011-04-01
We present an implementation of the iterative flux-conserving Lucy-Richardson (L-R) deconvolution method of image restoration for maps produced by the Balloon-borne Large Aperture Submillimeter Telescope (BLAST). Compared to the direct Fourier transform method of deconvolution, the L-R operation restores images with better-controlled background noise and increases source detectability. Intermediate iterated images are useful for studying extended diffuse structures, while the later iterations truly enhance point sources to near the designed diffraction limit of the telescope. The L-R method of deconvolution is efficient in resolving compact sources in crowded regions while simultaneously conserving their respective flux densities. We have analyzed its performance and convergence extensively through simulations and cross-correlations of the deconvolved images with available high-resolution maps. We present new science results from two BLAST surveys, in the Galactic regions K3-50 and IC 5146, further demonstrating the benefits of performing this deconvolution. We have resolved three clumps within a radius of 4.5 arcmin inside the star-forming molecular cloud containing K3-50. Combining the well-resolved dust emission map with available multi-wavelength data, we have constrained the spectral energy distributions (SEDs) of five clumps to obtain masses (M), bolometric luminosities (L), and dust temperatures (T). The L-M diagram has been used as a diagnostic tool to estimate the evolutionary stages of the clumps. There are close relationships between dust continuum emission and both 21 cm radio continuum and 12CO molecular line emission. The restored extended large-scale structures in the Northern Streamer of IC 5146 have a strong spatial correlation with both SCUBA and high-resolution extinction images. A dust temperature of 12 K has been obtained for the central filament. We report physical properties of ten compact sources, including six associated protostars, by fitting SEDs to multi-wavelength data. All of these compact sources are still quite cold (typical temperature below ~16 K) and are above the critical Bonnor-Ebert mass. They have associated low-power young stellar objects. Further evidence for starless clumps has also been found in the IC 5146 region.
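The flux-conserving L-R iteration at the core of this and the preceding entry can be sketched as follows, assuming a known, normalized PSF and a non-negative input map; this is a generic textbook form, not the BLAST pipeline.

```python
# Minimal flux-conserving Richardson-Lucy deconvolution for a 2D map.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=50):
    image = np.asarray(image, dtype=float)   # non-negative map assumed
    psf = np.asarray(psf, dtype=float)
    psf = psf / psf.sum()                    # normalization conserves total flux
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(image, image.mean())
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)   # guard against division by zero
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```

Early iterates stay smooth (useful for diffuse structure), while more iterations sharpen point sources, matching the behavior described in the abstract.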
Image processing tools dedicated to quantification in 3D fluorescence microscopy
NASA Astrophysics Data System (ADS)
Dieterlen, A.; De Meyer, A.; Colicchio, B.; Le Calvez, S.; Haeberlé, O.; Jacquey, S.
2006-05-01
3-D optical fluorescence microscopy has become an efficient tool for the volume investigation of living biological samples. Developments in instrumentation have made it possible to beat the conventional Abbe limit. In any case, the recorded image can be described by a convolution between the original object and the point spread function (PSF) of the acquisition system. Due to the finite resolution of the instrument, the original object is recorded with distortion and blurring, and is contaminated by noise. As a consequence, relevant biological information cannot be extracted directly from raw data stacks. If the goal is 3-D quantitative analysis, system characterization is mandatory to assess the optimal performance of the instrument and to ensure the reproducibility of data acquisition. The PSF represents the properties of the image acquisition system; we have proposed the use of statistical tools and Zernike moments to describe a 3-D system PSF and to quantify its variation. This first step toward standardization is helpful for defining an acquisition protocol that optimizes exploitation of the microscope for the biological sample under study. Before geometrical information is extracted and/or intensities are quantified, data restoration is mandatory. Reduction of out-of-focus light is carried out computationally by a deconvolution process. But other phenomena occur during acquisition, such as fluorescence photodegradation ("bleaching"), which alters information needed for restoration. We have therefore developed a protocol to pre-process data before applying deconvolution algorithms. A large number of deconvolution methods have been described and are now available in commercial packages. One major difficulty in using this software is the user's choice of the "best" regularization parameters. We have shown that automating the choice of the regularization level greatly improves the reliability of the measurements while also facilitating use. Furthermore, pre-filtering the images improves the stability of the deconvolution process and thereby the quality and repeatability of quantitative measurements; in the same way, pre-filtering the PSF stabilizes the deconvolution. We have shown that Zernike polynomials can be used to reconstruct an experimental PSF, preserving the system characteristics while removing the noise contained in the PSF.
An improved method for polarimetric image restoration in interferometry
NASA Astrophysics Data System (ADS)
Pratley, Luke; Johnston-Hollitt, Melanie
2016-11-01
Interferometric radio astronomy data require the effects of limited coverage in the Fourier plane to be accounted for via a deconvolution process. For the last 40 years this process, known as `cleaning', has been performed almost exclusively on all Stokes parameters individually as if they were independent scalar images. However, here we demonstrate that, for the case of the linear polarization P, this approach fails to properly account for its complex vector nature, resulting in a process that depends on the axes under which the deconvolution is performed. We present here an improved method, `Generalized Complex CLEAN', which properly accounts for the complex vector nature of polarized emission and is invariant under rotations of the deconvolution axes. We use two Australia Telescope Compact Array data sets to test standard and complex CLEAN versions of the Högbom and SDI (Steer-Dewdney-Ito) CLEAN algorithms. We show that in general the complex CLEAN version of each algorithm produces more accurate clean components with fewer spurious detections and lower computation cost due to reduced iterations than the current methods. In particular, we find that the complex SDI CLEAN produces the best results for diffuse polarized sources as compared with standard CLEAN algorithms and other complex CLEAN algorithms. Given the move to wide-field, high-resolution polarimetric imaging with future telescopes such as the Square Kilometre Array, we suggest that Generalized Complex CLEAN should be adopted as the deconvolution method for all future polarimetric surveys and in particular that the complex version of an SDI CLEAN should be used.
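A minimal sketch of a Högbom-style CLEAN loop acting on the complex image P = Q + iU follows; peak-finding on |P| makes the loop invariant under rotations of the (Q, U) axes. The helper names, gain, and thresholds are illustrative assumptions, not the authors' implementation.

```python
# Toy complex (polarization) Hogbom CLEAN on P = Q + 1j*U.
import numpy as np

def subtract_beam(residual, beam, idx, amp):
    # Subtract amp * beam centered at idx, clipping at the image edges.
    by, bx = beam.shape
    cy, cx = by // 2, bx // 2
    y, x = idx
    y0, y1 = max(0, y - cy), min(residual.shape[0], y - cy + by)
    x0, x1 = max(0, x - cx), min(residual.shape[1], x - cx + bx)
    residual[y0:y1, x0:x1] -= amp * beam[y0 - (y - cy):y1 - (y - cy),
                                         x0 - (x - cx):x1 - (x - cx)]

def complex_hogbom(dirty, beam, gain=0.1, n_iter=1000, threshold=1e-3):
    """dirty: complex Q + iU dirty image; beam: real, peak-normalized dirty beam."""
    residual = dirty.astype(complex).copy()
    model = np.zeros_like(residual)
    for _ in range(n_iter):
        idx = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
        peak = residual[idx]          # complex amplitude: magnitude and angle
        if np.abs(peak) < threshold:
            break
        model[idx] += gain * peak     # clean component carries Q and U jointly
        subtract_beam(residual, beam, idx, gain * peak)
    return model, residual
```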
NASA Astrophysics Data System (ADS)
Zhou, T.; Popescu, S. C.; Krause, K.; Sheridan, R.; Ku, N. W.
2014-12-01
Increasing attention has been paid in the remote sensing community to next-generation Light Detection and Ranging (lidar) waveform data systems for extracting information on topography and the vertical structure of vegetation. However, processing waveform lidar data raises some challenges compared to analyzing discrete return data. The overall goal of this study was to present a robust deconvolution algorithm, the Gold algorithm, used to deconvolve waveforms in a lidar dataset acquired within a 60 x 60 m study area located in the Harvard Forest in Massachusetts. The waveform lidar data was collected by the National Ecological Observatory Network (NEON). Specific objectives were to: (1) explore advantages and limitations of various waveform processing techniques to derive topography and canopy height information; (2) develop and implement a novel deconvolution algorithm, the Gold algorithm, to extract elevation and canopy metrics; and (3) compare results and assess accuracy. We modeled lidar waveforms with a mixture of Gaussian functions using a nonlinear least squares (NLS) algorithm implemented in R and derived a Digital Terrain Model (DTM) and canopy height. We compared our waveform-derived topography and canopy height measurements using the Gold deconvolution algorithm to results using the Richardson-Lucy algorithm. Our findings show that the Gold algorithm performed better than the Richardson-Lucy algorithm in terms of recovering hidden echoes and detecting false echoes for generating a DTM, which indicates that the Gold algorithm could potentially be applied to processing of waveform lidar data to derive information on terrain elevation and canopy characteristics.
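A generic Gold (ratio) deconvolution iteration is sketched below for a 1D waveform, assuming a known, non-negative system response; the multiplicative update keeps the estimate non-negative, which suits lidar return amplitudes. This is the textbook form of the algorithm, not the NEON processing chain.

```python
# Generic 1D Gold ratio deconvolution: x <- x * (A^T y) / (A^T A x).
import numpy as np
from scipy.linalg import toeplitz

def gold_deconvolution(y, h, n_iter=500):
    """y: recorded non-negative waveform; h: system response, h.size <= y.size."""
    n = y.size
    col = np.zeros(n)
    col[:h.size] = h
    A = toeplitz(col, np.zeros(n))     # causal convolution as a lower-triangular matrix
    x = np.full(n, max(float(y.mean()), 1e-6))
    Aty = np.maximum(A.T @ y, 0.0)     # clip: the method assumes non-negative data
    for _ in range(n_iter):
        denom = A.T @ (A @ x)
        x *= Aty / np.maximum(denom, 1e-12)
    return x                            # sharpened echo train, non-negative by construction
```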
NASA Astrophysics Data System (ADS)
Meresescu, Alina G.; Kowalski, Matthieu; Schmidt, Frédéric; Landais, François
2018-06-01
The Water Residence Time distribution is the equivalent of the impulse response of a linear system allowing the propagation of water through a medium, e.g. the propagation of rain water from the top of the mountain towards the aquifers. We consider the output aquifer levels as the convolution between the input rain levels and the Water Residence Time, starting with an initial aquifer base level. The estimation of Water Residence Time is important for a better understanding of hydro-bio-geochemical processes and mixing properties of wetlands used as filters in ecological applications, as well as for protecting fresh water sources for wells from pollutants. Common methods of estimating the Water Residence Time focus on cross-correlation, parameter fitting and non-parametric deconvolution methods. Here we propose a 1D full-deconvolution, regularized, non-parametric inverse problem algorithm that enforces smoothness and uses constraints of causality and positivity to estimate the Water Residence Time curve. Compared to Bayesian non-parametric deconvolution approaches, it has a fast runtime per test case; compared to the popular and fast cross-correlation method, it produces a more precise Water Residence Time curve even in the case of noisy measurements. The algorithm needs only one regularization parameter to balance between smoothness of the Water Residence Time and accuracy of the reconstruction. We propose an approach to automatically find a suitable value of the regularization parameter from the input data only. Tests on real data illustrate the potential of this method to analyze hydrological datasets.
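The following sketch captures the flavor of such an inversion under stated assumptions: minimize ||Rx - y||^2 + lam*||Dx||^2 subject to x >= 0 by projected gradient, with causality built into the convolution matrix. The synthetic inputs, step size, and single smoothness weight lam are assumptions, not the published algorithm.

```python
# Regularized, non-negative, causal deconvolution by projected gradient descent.
import numpy as np
from scipy.linalg import toeplitz

def water_residence_time(rain, level, n_taps, lam=10.0, n_iter=5000):
    """rain, level: equal-length input/output series; n_taps: response length."""
    R = toeplitz(rain, np.zeros(n_taps))        # causal convolution matrix
    D = np.diff(np.eye(n_taps), n=2, axis=0)    # second differences (smoothness)
    RtR = R.T @ R
    DtD = lam * (D.T @ D)
    Rty = R.T @ level
    step = 1.0 / (np.linalg.norm(RtR + DtD, 2) + 1e-12)  # 1/L for convergence
    x = np.zeros(n_taps)
    for _ in range(n_iter):
        grad = RtR @ x - Rty + DtD @ x
        x = np.maximum(x - step * grad, 0.0)    # projection enforces positivity
    return x
```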
Bai, Chen; Xu, Shanshan; Duan, Junbo; Jing, Bowen; Yang, Miao; Wan, Mingxi
2017-08-01
Pulse-inversion subharmonic (PISH) imaging can display information relating to pure cavitation bubbles while excluding that of tissue. Although plane-wave-based ultrafast active cavitation imaging (UACI) can monitor the transient activities of cavitation bubbles, its resolution and cavitation-to-tissue ratio (CTR) are barely satisfactory but can be significantly improved by introducing eigenspace-based (ESB) adaptive beamforming. PISH and UACI are a natural combination for imaging of pure cavitation activity in tissue; however, it raises two problems: 1) the ESB beamforming is hard to implement in real time due to the enormous amount of computation associated with the covariance matrix inversion and eigendecomposition and 2) the narrowband characteristic of the subharmonic filter will incur a drastic degradation in resolution. Thus, in order to jointly address these two problems, we propose a new PISH-UACI method using novel fast ESB (F-ESB) beamforming and cavitation deconvolution for nonlinear signals. This method greatly reduces the computational complexity by using F-ESB beamforming through dimensionality reduction based on principal component analysis, while maintaining the high quality of ESB beamforming. The degraded resolution is recovered using cavitation deconvolution through a modified convolution model and compressive deconvolution. Both simulations and in vitro experiments were performed to verify the effectiveness of the proposed method. Compared with the ESB-based PISH-UACI, the entire computation of our proposed approach was reduced by 99%, while the axial resolution gain and CTR were increased by 3 times and 2 dB, respectively, confirming that satisfactory performance can be obtained for monitoring pure cavitation bubbles in tissue erosion.
Wavespace-Based Coherent Deconvolution
NASA Technical Reports Server (NTRS)
Bahr, Christopher J.; Cattafesta, Louis N., III
2012-01-01
Array deconvolution is commonly used in aeroacoustic analysis to remove the influence of a microphone array's point spread function from a conventional beamforming map. Unfortunately, the majority of deconvolution algorithms assume that the acoustic sources in a measurement are incoherent, which can be problematic for some aeroacoustic phenomena with coherent, spatially-distributed characteristics. While several algorithms have been proposed to handle coherent sources, some are computationally intractable for many problems while others require restrictive assumptions about the source field. Newer generalized inverse techniques hold promise, but are still under investigation for general use. An alternate coherent deconvolution method is proposed based on a wavespace transformation of the array data. Wavespace analysis offers advantages over curved-wave array processing, such as providing an explicit shift-invariance in the convolution of the array sampling function with the acoustic wave field. However, usage of the wavespace transformation assumes the acoustic wave field is accurately approximated as a superposition of plane wave fields, regardless of true wavefront curvature. The wavespace technique leverages Fourier transforms to quickly evaluate a shift-invariant convolution. The method is derived for and applied to ideal incoherent and coherent plane wave fields to demonstrate its ability to determine magnitude and relative phase of multiple coherent sources. Multi-scale processing is explored as a means of accelerating solution convergence. A case with a spherical wave front is evaluated. Finally, a trailing edge noise experiment case is considered. Results show the method successfully deconvolves incoherent, partially-coherent, and coherent plane wave fields to a degree necessary for quantitative evaluation. Curved wave front cases warrant further investigation. A potential extension to nearfield beamforming is proposed.
Sparse Solution of Fiber Orientation Distribution Function by Diffusion Decomposition
Yeh, Fang-Cheng; Tseng, Wen-Yih Isaac
2013-01-01
Fiber orientation is the key information in diffusion tractography. Several deconvolution methods have been proposed to obtain fiber orientations by estimating a fiber orientation distribution function (ODF). However, the L2 regularization used in deconvolution often leads to false fibers that compromise the specificity of the results. To address this problem, we propose a method called diffusion decomposition, which obtains a sparse solution of fiber ODF by decomposing the diffusion ODF obtained from q-ball imaging (QBI), diffusion spectrum imaging (DSI), or generalized q-sampling imaging (GQI). A simulation study, a phantom study, and an in-vivo study were conducted to examine the performance of diffusion decomposition. The simulation study showed that diffusion decomposition was more accurate than both constrained spherical deconvolution and ball-and-sticks model. The phantom study showed that the angular error of diffusion decomposition was significantly lower than those of constrained spherical deconvolution at 30° crossing and ball-and-sticks model at 60° crossing. The in-vivo study showed that diffusion decomposition can be applied to QBI, DSI, or GQI, and the resolved fiber orientations were consistent regardless of the diffusion sampling schemes and diffusion reconstruction methods. The performance of diffusion decomposition was further demonstrated by resolving crossing fibers on a 30-direction QBI dataset and a 40-direction DSI dataset. In conclusion, diffusion decomposition can improve angular resolution and resolve crossing fibers in datasets with low SNR and substantially reduced number of diffusion encoding directions. These advantages may be valuable for human connectome studies and clinical research. PMID:24146772
Development of a Prognostic Marker for Lung Cancer Using Analysis of Tumor Evolution
2016-08-01
construct evolutionary trees, the characteristics of which will be used to predict whether a tumor will metastasize or not. We established a procedure for... of populations, the evolution of tumor cells within a tumor can be diagrammed on a phylogenetic tree. The more diverse a tumor's phylogenetic tree... individual tumor cells from the tumors of a training set of patients (half early stage, half late stage). We will reconstruct each tumor's phylogenetic tree
van Soldt, Benjamin J.; Metscher, Brian D.; Poelmann, Robert E.; Vervust, Bart; Vonk, Freek J.; Müller, Gerd B.; Richardson, Michael K.
2015-01-01
Snake lungs show a remarkable diversity of organ asymmetries. The right lung is always fully developed, while the left lung is either absent, vestigial, or well-developed (but smaller than the right). A ‘tracheal lung’ is present in some taxa. These asymmetries are reflected in the pulmonary arteries. Lung asymmetry is known to appear at early stages of development in Thamnophis radix and Natrix natrix. Unfortunately, there is no developmental data on snakes with a well-developed or absent left lung. We examine the adult and developmental morphology of the lung and pulmonary arteries in the snakes Python curtus breitensteini, Pantherophis guttata guttata, Elaphe obsoleta spiloides, Calloselasma rhodostoma and Causus rhombeatus using gross dissection, MicroCT scanning and 3D reconstruction. We find that the right and tracheal lung develop similarly in these species. By contrast, the left lung either: (1) fails to develop; (2) elongates more slowly and aborts early without (2a) or with (2b) subsequent development of faveoli; (3) or develops normally. A right pulmonary artery always develops, but the left develops only if the left lung develops. No pulmonary artery develops in relation to the tracheal lung. We conclude that heterochrony in lung bud development contributes to lung asymmetry in several snake taxa. Secondly, the development of the pulmonary arteries is asymmetric at early stages, possibly because the splanchnic plexus fails to develop when the left lung is reduced. Finally, some changes in the topography of the pulmonary arteries are consequent on ontogenetic displacement of the heart down the body. Our findings show that the left-right asymmetry in the cardiorespiratory system of snakes is expressed early in development and may become phenotypically expressed through heterochronic shifts in growth, and changes in axial relations of organs and vessels. We propose a step-wise model for reduction of the left lung during snake evolution. PMID:25555231
Snieder, R.; Safak, E.
2006-01-01
The motion of a building depends on the excitation, the coupling of the building to the ground, and the mechanical properties of the building. We separate the building response from the excitation and the ground coupling by deconvolving the motion recorded at different levels in the building, and apply this to recordings of the motion in the Robert A. Millikan Library in Pasadena, California. This deconvolution allows for the separation of intrinsic attenuation and radiation damping. The waveforms obtained from deconvolution with the motion in the top floor show a superposition of one upgoing and one downgoing wave. The waveforms obtained by deconvolution with the motion in the basement can be formulated either as a sum of upgoing and downgoing waves, or as a sum over normal modes. Because these deconvolved waves have a monochromatic character at late times, they are most easily analyzed with normal-mode theory. For this building we estimate a shear velocity c = 322 m/sec and a quality factor Q = 20. These values explain both the propagating waves and the normal modes.
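Deconvolving the motion at one floor by a reference level can be sketched as a regularized (water-level) spectral division; the stabilizer eps below is an assumed tuning constant, not a value from the paper.

```python
# Water-level spectral division of one floor's motion by a reference recording.
import numpy as np

def deconvolve_by_reference(u_floor, u_ref, eps=0.01):
    """u_floor, u_ref: equal-length time series from two levels of the building."""
    U = np.fft.rfft(u_floor)
    R = np.fft.rfft(u_ref)
    # Stabilized division U / R: the eps term keeps the denominator away from zero.
    denom = np.abs(R) ** 2 + eps * np.max(np.abs(R) ** 2)
    return np.fft.irfft(U * np.conj(R) / denom, n=u_floor.size)
```

Using the top-floor recording as u_ref isolates the traveling-wave pair described above; using the basement recording yields the normal-mode-like response.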
Real-time blind deconvolution of retinal images in adaptive optics scanning laser ophthalmoscopy
NASA Astrophysics Data System (ADS)
Li, Hao; Lu, Jing; Shi, Guohua; Zhang, Yudong
2011-06-01
With the use of adaptive optics (AO), ocular aberrations can be compensated to obtain high-resolution images of the living human retina. However, the wavefront correction is not perfect due to wavefront measurement error and hardware restrictions. Thus, it is necessary to use a deconvolution algorithm to recover the retinal images. In this paper, a blind deconvolution technique called the Incremental Wiener filter is used to restore adaptive optics confocal scanning laser ophthalmoscope (AOSLO) images. The point-spread function (PSF) measured by the wavefront sensor is used only as an initial value for our algorithm. We also implement the Incremental Wiener filter on a graphics processing unit (GPU) in real time. When the image size is 512 × 480 pixels, six iterations of our algorithm take only about 10 ms. Retinal blood vessels as well as cells in retinal images are restored by our algorithm, and the PSFs are also revised. Retinal images with and without adaptive optics are both restored. The results show that the Incremental Wiener filter reduces noise and improves image quality.
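For context, a plain (non-incremental) Wiener deconvolution can be sketched as below, with the sensor-measured PSF supplying the blur estimate; the noise-to-signal ratio nsr is an assumed constant, and the incremental, blind-update variant of the paper is not reproduced.

```python
# Plain Wiener deconvolution given a measured PSF estimate.
import numpy as np

def wiener_deconvolve(image, psf, nsr=0.01):
    """psf: same shape as image with its peak at index [0, 0]
    (i.e., np.fft.ifftshift of a centered kernel padded to image size)."""
    H = np.fft.rfft2(psf)
    G = np.fft.rfft2(image)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)   # Wiener filter in the Fourier domain
    return np.fft.irfft2(W * G, s=image.shape)
```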
NASA Astrophysics Data System (ADS)
Gerwe, David R.; Lee, David J.; Barchers, Jeffrey D.
2000-10-01
A post-processing methodology for reconstructing undersampled image sequences with randomly varying blur is described, which can provide image enhancement beyond the sampling resolution of the sensor. This method is demonstrated on simulated imagery and on adaptive-optics-compensated imagery taken by the Starfire Optical Range 3.5-meter telescope that has been artificially undersampled. Also shown are the results of multiframe blind deconvolution of some of the highest quality optical imagery of low Earth orbit satellites collected with a ground-based telescope to date. The algorithm used is a generalization of multiframe blind deconvolution techniques which includes a representation of spatial sampling by the focal plane array elements in the forward stochastic model of the imaging system. This generalization enables the random shifts and shape of the adaptive-optics-compensated PSF to be used to partially eliminate the aliasing effects associated with sub-Nyquist sampling of the image by the focal plane array. The method could be used to reduce the resolution loss which occurs when imaging in wide-FOV modes.
NASA Astrophysics Data System (ADS)
Gerwe, David R.; Lee, David J.; Barchers, Jeffrey D.
2002-09-01
We describe a postprocessing methodology for reconstructing undersampled image sequences with randomly varying blur that can provide image enhancement beyond the sampling resolution of the sensor. This method is demonstrated on simulated imagery and on adaptive-optics-(AO)-compensated imagery taken by the Starfire Optical Range 3.5-m telescope that has been artificially undersampled. Also shown are the results of multiframe blind deconvolution of some of the highest quality optical imagery of low Earth orbit satellites collected with a ground-based telescope to date. The algorithm used is a generalization of multiframe blind deconvolution techniques that include a representation of spatial sampling by the focal plane array elements based on a forward stochastic model. This generalization enables the random shifts and shape of the AO-compensated point spread function (PSF) to be used to partially eliminate the aliasing effects associated with sub-Nyquist sampling of the image by the focal plane array. The method could be used to reduce resolution loss that occurs when imaging in wide-field-of-view (FOV) modes.
Pulse analysis of acoustic emission signals
NASA Technical Reports Server (NTRS)
Houghton, J. R.; Packman, P. F.
1977-01-01
A method for the signature analysis of pulses in the frequency domain and the time domain is presented. Fourier spectrum, Fourier transfer function, shock spectrum and shock spectrum ratio were examined in the frequency domain analysis, and pulse shape deconvolution was developed for use in the time domain analysis. Comparisons of the relative performance of each analysis technique are made for the characterization of acoustic emission pulses recorded by a measuring system. To demonstrate the relative sensitivity of each of the methods to small changes in the pulse shape, signatures of computer modeled systems with analytical pulses are presented. Optimization techniques are developed and used to indicate the best design parameter values for deconvolution of the pulse shape. Several experiments are presented that test the pulse signature analysis methods on different acoustic emission sources. These include acoustic emission associated with (a) crack propagation, (b) a ball dropping on a plate, (c) spark discharge, and (d) defective and good ball bearings. Deconvolution of the first few microseconds of the pulse train is shown to be the region in which the significant signatures of the acoustic emission event are to be found.
Gainer, Christian F; Utzinger, Urs; Romanowski, Marek
2012-07-01
The use of upconverting lanthanide nanoparticles in fast-scanning microscopy is hindered by a long luminescence decay time, which greatly blurs images acquired in a nondescanned mode. We demonstrate herein an image processing method based on Richardson-Lucy deconvolution that mitigates the detrimental effects of their luminescence lifetime. This technique generates images with lateral resolution on par with the system's performance, ∼1.2 μm, while maintaining an axial resolution of 5 μm or better at a scan rate comparable with traditional two-photon microscopy. Remarkably, this can be accomplished with near infrared excitation power densities of 850 W/cm(2), several orders of magnitude below those used in two-photon imaging with molecular fluorophores. By way of illustration, we introduce the use of lipids to coat and functionalize these nanoparticles, rendering them water dispersible and readily conjugated to biologically relevant ligands, in this case epidermal growth factor receptor antibody. This deconvolution technique combined with the functionalized nanoparticles will enable three-dimensional functional tissue imaging at exceptionally low excitation power densities.
Spatial studies of planetary nebulae with IRAS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hawkins, G.W.; Zuckerman, B.
1991-06-01
The infrared sizes at the four IRAS wavelengths of 57 planetaries, most with 20-60 arcsec optical size, are derived from spatial deconvolution of one-dimensional survey mode scans. Survey observations from multiple detectors and hours-confirmed (HCON) observations are combined to increase the sampling to a rate that is sufficient for successful deconvolution. The Richardson-Lucy deconvolution algorithm is used to obtain an increase in resolution of a factor of about 2 or 3 from the normal IRAS detector sizes of 45, 45, 90, and 180 arcsec at wavelengths 12, 25, 60, and 100 microns. Most of the planetaries deconvolve at 12 and 25 microns to sizes equal to or smaller than the optical size. Some of the planetaries with optical rings 60 arcsec or more in diameter show double-peaked IRAS profiles. Many, such as NGC 6720 and NGC 6543, show all infrared sizes equal to the optical size, while others indicate increasing infrared size with wavelength. Deconvolved IRAS profiles are presented for the 57 planetaries at nearly all wavelengths where IRAS flux densities are 1-2 Jy or higher. 60 refs.
NASA Astrophysics Data System (ADS)
Almasganj, Mohammad; Adabi, Saba; Fatemizadeh, Emad; Xu, Qiuyun; Sadeghi, Hamid; Daveluy, Steven; Nasiriavanaki, Mohammadreza
2017-03-01
Optical coherence tomography (OCT) has great potential to elicit clinically useful information from tissues due to its high axial and transversal resolution. In practice, an OCT setup cannot reach its theoretical resolution due to imperfections of its components, which blur its images. The blurring differs across regions of the image and thus cannot be modeled by a single point spread function (PSF). In this paper, we investigate the use of solid phantoms to estimate the PSF of each sub-region of the imaging system. We then utilize Lucy-Richardson, Hybr and total variation (TV) based iterative deconvolution methods for mitigating the resulting spatially variant blur. It is shown that the TV-based method suppresses the so-called speckle noise in OCT images better than the two other approaches. The performance of the proposed algorithm is tested on various samples, including several skin tissues as well as a test image blurred with a synthetic PSF map, demonstrating qualitatively and quantitatively the advantage of the TV-based deconvolution method using a spatially variant PSF for enhancing image quality.
A stopping criterion to halt iterations at the Richardson-Lucy deconvolution of radiographic images
NASA Astrophysics Data System (ADS)
Almeida, G. L.; Silvani, M. I.; Souza, E. S.; Lopes, R. T.
2015-07-01
Radiographic images, like any experimentally acquired images, are affected by spoiling agents that degrade their final quality. The degradation caused by agents of a systematic character can be reduced by some kind of treatment, such as an iterative deconvolution. This approach requires two parameters, namely the system resolution and the best number of iterations needed to achieve the best final image. This work proposes a novel procedure to estimate the best number of iterations, which replaces cumbersome visual inspection with a comparison of numbers. These numbers are deduced from the image histograms, taking into account the global difference G between them for two subsequent iterations. The developed algorithm, including a Richardson-Lucy deconvolution procedure, has been embodied in a Fortran program capable of plotting the first derivative of G as the processing progresses and of stopping it automatically when this derivative reaches zero within the data dispersion. The radiograph of a specially chosen object, acquired with thermal neutrons from the Argonauta research reactor at the Instituto de Engenharia Nuclear - CNEN, Rio de Janeiro, Brazil, has undergone this treatment with fair results.
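The stopping rule can be sketched as follows, assuming unit-normalized images and an assumed bin count; the tolerance stands in for "zero within the data dispersion" and would need to be tuned per dataset.

```python
# Histogram-based stopping criterion for iterative deconvolution.
import numpy as np

def global_difference(img_a, img_b, bins=256, rng=(0.0, 1.0)):
    # G: summed absolute difference between the two images' gray-level histograms.
    ha, _ = np.histogram(img_a, bins=bins, range=rng)
    hb, _ = np.histogram(img_b, bins=bins, range=rng)
    return np.abs(ha - hb).sum()

def should_stop(G_values, tol=1.0):
    # Halt when the change of G between successive iterations (a discrete
    # first derivative) falls within the tolerance.
    return len(G_values) >= 2 and abs(G_values[-1] - G_values[-2]) < tol
```

In use, one would append global_difference(prev_iterate, current_iterate) to G_values after each Richardson-Lucy step and stop when should_stop(G_values) returns True.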
Fast online deconvolution of calcium imaging data
Zhou, Pengcheng; Paninski, Liam
2017-01-01
Fluorescent calcium indicators are a popular means for observing the spiking activity of large neuronal populations, but extracting the activity of each neuron from raw fluorescence calcium imaging data is a nontrivial problem. We present a fast online active set method to solve this sparse non-negative deconvolution problem. Importantly, the algorithm progresses through each time series sequentially from beginning to end, thus enabling real-time online estimation of neural activity during the imaging session. Our algorithm is a generalization of the pool adjacent violators algorithm (PAVA) for isotonic regression and inherits its linear-time computational complexity. We gain remarkable increases in processing speed: more than one order of magnitude compared to currently employed state of the art convex solvers relying on interior point methods. Unlike these approaches, our method can exploit warm starts; therefore optimizing model hyperparameters only requires a handful of passes through the data. A minor modification can further improve the quality of activity inference by imposing a constraint on the minimum spike size. The algorithm enables real-time simultaneous deconvolution of O(10^5) traces of whole-brain larval zebrafish imaging data on a laptop. PMID:28291787
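A compact sketch of the pooling idea (pool adjacent violators generalized to an AR(1) calcium process) follows, with the decay g assumed known and the sparsity penalty and minimum-spike constraint omitted for brevity; it is a simplification of the published method.

```python
# Simplified online non-negative deconvolution of an AR(1) fluorescence trace:
# enforce s_t = c_t - g*c_{t-1} >= 0 while staying close to the data y.
import numpy as np

def deconvolve_ar1(y, g):
    pools = []                          # each pool: [value, weight, start, length]
    for t, yt in enumerate(y):
        pools.append([float(yt), 1.0, t, 1])
        # Merge backwards while the non-negativity constraint is violated.
        while len(pools) > 1 and pools[-1][0] < g ** pools[-2][3] * pools[-2][0]:
            v1, w1, t1, l1 = pools[-2]
            v2, w2, _, l2 = pools.pop()
            f = g ** l1
            w = w1 + f ** 2 * w2
            v = (w1 * v1 + f * w2 * v2) / w
            pools[-1] = [v, w, t1, l1 + l2]
    c = np.empty(len(y))
    for v, _, t, l in pools:            # reconstruct the denoised calcium trace
        c[t:t + l] = max(v, 0.0) * g ** np.arange(l)
    s = np.append(c[0], c[1:] - g * c[:-1])   # inferred activity (spikes)
    return c, s
```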
Wang, Jian; Chen, Hong-Ping; Liu, You-Ping; Wei, Zheng; Liu, Rong; Fan, Dan-Qing
2013-05-01
This experiment shows how to use the Automated Mass Spectral Deconvolution and Identification System (AMDIS) to deconvolve overlapped peaks in the total ion chromatogram (TIC) of volatile oil from Chinese materia medica (CMM). The essential oil was obtained by steam distillation. Its TIC was obtained by GC-MS, and the superimposed peaks in the TIC were deconvolved by AMDIS. First, AMDIS can detect the number of components in the TIC through its run function. Then, by analyzing the extracted spectrum at the scan point corresponding to each detected component, the original spectrum at that scan point, and their counterpart spectra in the reference MS library, researchers can ascertain a component's structure accurately or rule out compounds that do not occur in nature. Furthermore, by examining the variability of the characteristic fragment ion peaks of identified compounds, the previous outcome can be confirmed. The results demonstrated that AMDIS can efficiently deconvolve overlapped peaks in the TIC by extracting the spectrum at the scan point matching each discerned component, leading to exact identification of the component's structure.
Thorium concentrations in the lunar surface. V - Deconvolution of the central highlands region
NASA Technical Reports Server (NTRS)
Metzger, A. E.; Etchegaray-Ramirez, M. I.; Haines, E. L.
1982-01-01
The distribution of thorium in the lunar central highlands measured from orbit by the Apollo 16 gamma-ray spectrometer is subjected to a deconvolution analysis to yield improved spatial resolution and contrast. Use of two overlapping data fields for complete coverage also provides a demonstration of the technique's ability to model concentrations several degrees beyond the data track. Deconvolution reveals an association between Th concentration and the Kant Plateau, Descartes Mountain and Cayley plains surface formations. The Kant Plateau and Descartes Mountains model with Th less than 1 part per million, which is typical of farside highlands but is infrequently seen over any other nearside highland portions of the Apollo 15 and 16 ground tracks. It is noted that, if the Cayley plains are the result of basin-forming impact ejecta, the distribution of Th concentration with longitude supports an origin from the Imbrium basin rather than the Nectaris or Orientale basins. Nectaris basin materials are found to have a Th concentration similar to that of the Descartes Mountains, evidence that the latter may have been emplaced as Nectaris basin impact deposits.
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A.; Haering, Edward A., Jr.; Ehernberger, L. J.
1996-01-01
In-flight measurements of the SR-71 near-field sonic boom were obtained by an F-16XL airplane at flightpath separation distances from 40 to 740 ft. Twenty-two signatures were obtained from Mach 1.60 to Mach 1.84 and altitudes from 47,600 to 49,150 ft. The shock wave signatures were measured by the total and static sensors on the F-16XL noseboom. These near-field signature measurements were distorted by pneumatic attenuation in the pitot-static sensors; their effects were accounted for using optimal deconvolution. Measurement system magnitude and phase characteristics were determined from ground-based step-response tests and extrapolated to flight conditions using analytical models. Deconvolution was implemented using Fourier transform methods. Comparisons of the shock wave signatures reconstructed from the total and static pressure data are presented. The good agreement achieved gives confidence in the quality of the reconstruction analysis. Although originally developed to reconstruct the sonic boom signatures from SR-71 sonic boom flight tests, the methods presented here apply generally to other types of highly attenuated or distorted pneumatic measurements.
Two-dimensional imaging of two types of radicals by the CW-EPR method
NASA Astrophysics Data System (ADS)
Czechowski, Tomasz; Krzyminiewski, Ryszard; Jurga, Jan; Chlewicki, Wojciech
2008-01-01
The CW-EPR method of image reconstruction is based on sample rotation in a magnetic field with a constant gradient (50 G/cm). In order to obtain a projection (radical density distribution) along a given direction, the EPR spectra are recorded with and without the gradient. Deconvolution then gives the distribution of the spin density. Projections at 36 different angles give the information necessary for reconstruction of the radical distribution. The problem becomes more complex when there are at least two types of radicals in the sample, because the deconvolution procedure alone does not give satisfactory results. We propose a method to calculate the projections for each radical, based on iterative procedures. The images of the density distribution for each radical obtained by our procedure prove that the method of deconvolution, in combination with iterative fitting, provides correct results. The test was performed on a sample of the polymer PPS Br 111 (p-phenylene sulphide) with glass fibres and minerals. The results indicated a heterogeneous distribution of radicals in the sample volume. The images obtained were in agreement with the known shape of the sample.
Liu, Yunbo; Wear, Keith A; Harris, Gerald R
2017-10-01
Reliable acoustic characterization is fundamental for patient safety and clinical efficacy during high-intensity therapeutic ultrasound (HITU) treatment. Technical challenges, such as measurement variation and signal analysis, still exist for HITU exposimetry using ultrasound hydrophones. In this work, four hydrophones were compared for pressure measurement: a robust needle hydrophone, a small polyvinylidene fluoride capsule hydrophone and two fiberoptic hydrophones. The focal waveform and beam distribution of a single-element HITU transducer (1.05 MHz and 3.3 MHz) were evaluated. Complex deconvolution between the hydrophone voltage signal and the frequency-dependent complex sensitivity was performed to obtain pressure waveforms. Compressional pressure (p+), rarefactional pressure (p-) and focal beam distribution were compared up to 10.6/-6.0 MPa (p+/p-) (1.05 MHz) and 20.65/-7.20 MPa (3.3 MHz). The effects of spatial averaging, local nonlinear distortion, complex deconvolution and hydrophone damage thresholds were investigated. This study showed a variation of no better than 10%-15% among hydrophones during HITU pressure characterization. Published by Elsevier Inc.
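Complex deconvolution of a hydrophone voltage trace by a frequency-dependent complex sensitivity S(f) can be sketched as below; the interpolation onto the FFT grid and the out-of-band floor are assumptions, and real calibrations would restrict the division to the calibrated band.

```python
# Convert a hydrophone voltage waveform to pressure via complex sensitivity.
import numpy as np

def voltage_to_pressure(v, fs, f_cal, s_cal):
    """v: voltage waveform; fs: sample rate (Hz); f_cal, s_cal: calibrated
    frequencies (Hz) and complex sensitivities (V/Pa) from the certificate."""
    n = v.size
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    # Interpolate the complex sensitivity onto the FFT frequency grid.
    S = np.interp(f, f_cal, s_cal.real) + 1j * np.interp(f, f_cal, s_cal.imag)
    S = np.where(np.abs(S) > 1e-12, S, 1e-12)   # floor to avoid out-of-band blow-up
    return np.fft.irfft(np.fft.rfft(v) / S, n=n)  # pressure waveform in Pa
```

The compressional and rarefactional pressures are then simply the maximum and minimum of the returned waveform.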
NASA Astrophysics Data System (ADS)
Li, Gang; Zhao, Qing
2017-03-01
In this paper, a minimum entropy deconvolution based sinusoidal synthesis (MEDSS) filter is proposed to improve the fault detection performance of the regular sinusoidal synthesis (SS) method. The SS filter is an efficient linear predictor that exploits frequency properties during model construction. The phase information of the harmonic components is not used in the regular SS filter. However, the phase relationships are important in differentiating noise from characteristic impulsive fault signatures. Therefore, in this work, the minimum entropy deconvolution (MED) technique is used to optimize the SS filter during the model construction process. A time-weighted-error Kalman filter is used to estimate the MEDSS model parameters adaptively. Three simulation examples and a practical application case study are provided to illustrate the effectiveness of the proposed method. The regular SS method and the autoregressive MED (ARMED) method are also implemented for comparison. The MEDSS model demonstrates superior performance compared to the regular SS method, and it also shows comparable or better performance with much less computational intensity than the ARMED method.
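The MED objective embedded in the MEDSS filter can be illustrated with a generic Wiggins-style MED iteration, which designs an FIR filter that maximizes the kurtosis (impulsiveness) of its output; the sinusoidal-synthesis and Kalman-filter components of the paper are not reproduced, and the filter length, iteration count, and ridge term are assumptions.

```python
# Generic Wiggins-style minimum entropy deconvolution (MED) filter design.
import numpy as np
from scipy.linalg import toeplitz, solve
from scipy.signal import lfilter

def med_filter(x, n_taps=30, n_iter=30):
    f = np.zeros(n_taps)
    f[n_taps // 2] = 1.0                 # start from a delayed spike
    # Input autocorrelation matrix (Toeplitz), with a tiny ridge for stability.
    r = np.correlate(x, x, mode="full")[x.size - 1:x.size - 1 + n_taps]
    R = toeplitz(r) + 1e-9 * r[0] * np.eye(n_taps)
    for _ in range(n_iter):
        y = lfilter(f, [1.0], x)         # current filter output
        # b_j = sum_t y_t^3 * x_{t-j}: gradient of the kurtosis objective.
        b = np.correlate(y ** 3, x, mode="full")[x.size - 1:x.size - 1 + n_taps]
        f = solve(R, b)
        f /= np.linalg.norm(f)           # remove the scale ambiguity
    return f
```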
Blind image deconvolution using the Fields of Experts prior
NASA Astrophysics Data System (ADS)
Dong, Wende; Feng, Huajun; Xu, Zhihai; Li, Qi
2012-11-01
In this paper, we present a method for single-image blind deconvolution. To alleviate its ill-posedness, we formulate the problem in a Bayesian probabilistic framework and use a prior named Fields of Experts (FoE), learnt from natural images, to regularize the latent image. Furthermore, due to the sparse distribution of the point spread function (PSF), we adopt a Student-t prior to regularize it. An improved alternating minimization (AM) approach is proposed to solve the resulting optimization problem. Experiments on both synthetic and real-world blurred images show that the proposed method can achieve results of high quality.
Application of the Lucy–Richardson Deconvolution Procedure to High Resolution Photoemission Spectra
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rameau, J.; Yang, H.-B.; Johnson, P.D.
2010-07-01
Angle-resolved photoemission has developed into one of the leading probes of the electronic structure and associated dynamics of condensed matter systems. As with any experimental technique the ability to resolve features in the spectra is ultimately limited by the resolution of the instrumentation used in the measurement. Previously developed for sharpening astronomical images, the Lucy-Richardson deconvolution technique proves to be a useful tool for improving the photoemission spectra obtained in modern hemispherical electron spectrometers where the photoelectron spectrum is displayed as a 2D image in energy and momentum space.
NASA Astrophysics Data System (ADS)
Zhang, Yongliang; Day-Uei Li, David
2017-02-01
This comment clarifies that Poisson noise, rather than Gaussian noise, should be included when assessing the performance of least-squares deconvolution with Laguerre expansion (LSD-LE) for analyzing fluorescence lifetime imaging data obtained from time-resolved systems. We also correct an equation in the paper. As the LSD-LE method is rapid and has the potential to be widely applied not only for diagnostics but also for wider bioimaging applications, it is desirable to have precise noise models and equations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tennenberg, S.D.; Jacobs, M.P.; Solomkin, J.S.
Complement-mediated neutrophil activation (CMNA) has been proposed as an important pathogenic mechanism causing acute microvascular lung injury in the adult respiratory distress syndrome (ARDS). To clarify the relationship between CMNA and evolving lung injury, we studied 26 patients with multiple trauma and sepsis within 24 hours of risk establishment for ARDS. Pulmonary alveolar-capillary permeability (PACP) was quantified as the clearance rate of a particulate radioaerosol. Seventeen patients (65%) had increased PACP (six developed ARDS) while nine (35%) had normal PACP (none developed ARDS; clearance rates of 3.4%/min and 1.5%/min, respectively). These patients, regardless of evidence of early lung injury, had elevated plasma C3adesArg levels and neutrophil chemotactic desensitization to C5a/C5adesArg. Plasma C3adesArg levels correlated weakly, but significantly, with PACP. Thus, CMNA may be a necessary, but not a sufficient, pathogenic mechanism in the evolution of ARDS.
Large-eddy simulation of turbulent cavitating flow in a micro channel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Egerer, Christian P., E-mail: christian.egerer@aer.mw.tum.de; Hickel, Stefan; Schmidt, Steffen J.
2014-08-15
Large-eddy simulations (LES) of cavitating flow of a Diesel-fuel-like fluid in a generic throttle geometry are presented. Two-phase regions are modeled by a parameter-free thermodynamic equilibrium mixture model, and compressibility of the liquid and the liquid-vapor mixture is taken into account. The Adaptive Local Deconvolution Method (ALDM), adapted for cavitating flows, is employed for discretizing the convective terms of the Navier-Stokes equations for the homogeneous mixture. ALDM is a finite-volume-based implicit LES approach that merges physically motivated turbulence modeling and numerical discretization. Validation of the numerical method is performed for a cavitating turbulent mixing layer. Comparisons with experimental data of the throttle flow at two different operating conditions are presented. The LES with the employed cavitation modeling predicts relevant flow and cavitation features accurately within the uncertainty range of the experiment. The turbulence structure of the flow is further analyzed with an emphasis on the interaction between cavitation and coherent motion, and on the statistically averaged-flow evolution.
NASA Astrophysics Data System (ADS)
Bejan, A.; Charles, J. D.; Lorente, S.
2014-07-01
The prevailing view is that we cannot witness biological evolution because it occurred on a time scale immensely greater than our lifetime. Here, we show that we can witness evolution in our lifetime by watching the evolution of the flying human-and-machine species: the airplane. We document this evolution, and we also predict it based on a physics principle: the constructal law. We show that the airplanes must obey theoretical allometric rules that unite them with the birds and other animals. For example, the larger airplanes are faster, more efficient as vehicles, and have greater range. The engine mass is proportional to the body size: this scaling is analogous to animal design, where the mass of the motive organs (muscle, heart, lung) is proportional to the body size. Large or small, airplanes exhibit a proportionality between wing span and fuselage length, and between fuel load and body size. The animal-design counterparts of these features are evident. The view that emerges is that the evolution phenomenon is broader than biological evolution. The evolution of technology, river basins, and animal design is one phenomenon, and it belongs in physics.
Management of Lung Cancer Invading the Superior Sulcus.
Kratz, Johannes R; Woodard, Gavitt; Jablons, David M
2017-05-01
Superior sulcus tumors have posed a formidable therapeutic challenge since their original description by Pancoast and Tobias in the early twentieth century. Initial therapeutic efforts with radiotherapy were associated with high rates of relapse and mortality. Bimodality therapy with complete surgical resection in the 1960s paved the way for trimodality therapy as the current standard of care in the treatment of superior sulcus tumors. The evolution of treatment approaches over time has provided outcomes that come increasingly closer to rivaling those of similarly staged nonapical lung cancer. Copyright © 2017 Elsevier Inc. All rights reserved.
Increasing CAD system efficacy for lung texture analysis using a convolutional network
NASA Astrophysics Data System (ADS)
Tarando, Sebastian Roberto; Fetita, Catalin; Faccinetto, Alex; Brillet, Pierre-Yves
2016-03-01
The infiltrative lung diseases are a class of irreversible, non-neoplastic lung pathologies requiring regular follow-up with CT imaging. Quantifying the evolution of the patient status imposes the development of automated classification tools for lung texture. For the large majority of CAD systems, such classification relies on a two-dimensional analysis of axial CT images. In a previously developed CAD system, we proposed a fully 3D approach exploiting a multi-scale morphological analysis which showed good performance in detecting diseased areas, but with the major drawback of sometimes overestimating the pathological areas and mixing different types of lung patterns. This paper proposes a combination of the existing CAD system with the classification outcome provided by a convolutional network, specifically tuned, in order to increase the specificity of the classification and the confidence in the diagnosis. The advantage of using a deep learning approach is a better regularization of the classification output (because of a deeper insight into a given pathological class over a large series of samples) where the previous system is overly sensitive due to the multi-scale response on patient-specific, localized patterns. In a preliminary evaluation, the combined approach was tested on a 10-patient database of various lung pathologies, showing a sharp increase of true detections.
The Emergence of Physiology and Form: Natural Selection Revisited
Torday, John S.
2016-01-01
Natural Selection describes how species have evolved differentially, but it is descriptive, non-mechanistic. What mechanisms does Nature use to accomplish this feat? One known way in which ancient natural forces affect development, phylogeny and physiology is through gravitational effects that have evolved as mechanotransduction, seen in the lung, kidney and bone, linking as molecular homologies to skin and brain. Tracing the ontogenetic and phylogenetic changes that have facilitated mechanotransduction identifies specific homologous cell-types and functional molecular markers for lung homeostasis that reveal how and why complex physiologic traits have evolved from the unicellular to the multicellular state. Such data are reinforced by their reverse-evolutionary patterns in chronic degenerative diseases. The physiologic responses of model organisms like Dictyostelium and yeast to gravity provide deep comparative molecular phenotypic homologies, revealing mammalian Target of Rapamycin (mTOR) as the final common pathway for vertical integration of vertebrate physiologic evolution; mTOR integrates calcium/lipid epistatic balance as both the proximate and ultimate positive selection pressure for vertebrate physiologic evolution. The commonality of all vertebrate structure-function relationships can be reduced to calcium/lipid homeostatic regulation as the fractal unit of vertebrate physiology, demonstrating the primacy of the unicellular state as the fundament of physiologic evolution. PMID:27534726
Huang, C.; Townshend, J.R.G.; Liang, S.; Kalluri, S.N.V.; DeFries, R.S.
2002-01-01
Measured and modeled point spread functions (PSF) of sensor systems indicate that a significant portion of the recorded signal of each pixel of a satellite image originates from outside the area represented by that pixel. This hinders the ability to derive surface information from satellite images on a per-pixel basis. In this study, the impact of the PSF of the Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m bands was assessed using four images representing different landscapes. Experimental results showed that though differences between pixels derived with and without PSF effects were small on the average, the PSF generally brightened dark objects and darkened bright objects. This impact of the PSF lowered the performance of a support vector machine (SVM) classifier by 5.4% in overall accuracy and increased the overall root mean square error (RMSE) by 2.4% in estimating subpixel percent land cover. An inversion method based on the known PSF model reduced the signals originating from surrounding areas by as much as 53%. This method differs from traditional PSF inversion deconvolution methods in that the PSF was adjusted with lower weighting factors for signals originating from neighboring pixels than those specified by the PSF model. By using this deconvolution method, the lost classification accuracy due to residual impact of PSF effects was reduced to only 1.66% in overall accuracy. The increase in the RMSE of estimated subpixel land cover proportions due to the residual impact of PSF effects was reduced to 0.64%. Spatial aggregation also effectively reduced the errors in estimated land cover proportion images. About 50% of the estimation errors were removed after applying the deconvolution method and aggregating derived proportion images to twice their pixel dimensions. © 2002 Elsevier Science Inc. All rights reserved.
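The adjustment described, down-weighting neighbor contributions relative to the nominal PSF model, can be sketched as follows; alpha is an assumed shrink factor, and the adjusted kernel would then feed a linear inverse filter such as the Wiener sketch shown earlier in this collection.

```python
# Build an adjusted PSF whose off-center (neighbor) weights are shrunk.
import numpy as np

def adjusted_psf(psf, alpha=0.5):
    """psf: 2D kernel with its peak at the center pixel; alpha < 1 reduces the
    weight given to signal attributed to neighboring pixels."""
    adj = alpha * psf.copy()
    cy, cx = psf.shape[0] // 2, psf.shape[1] // 2
    adj[cy, cx] = psf[cy, cx]     # keep the center weight unchanged
    return adj / adj.sum()        # renormalize so total response is one
```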
A gene profiling deconvolution approach to estimating immune cell composition from complex tissues.
Chen, Shu-Hwa; Kuo, Wen-Yu; Su, Sheng-Yao; Chung, Wei-Chun; Ho, Jen-Ming; Lu, Henry Horng-Shing; Lin, Chung-Yen
2018-05-08
A newly emerged cancer treatment utilizes the intrinsic immune surveillance mechanism that is silenced by malignant cells. Hence, studies of tumor-infiltrating lymphocyte (TIL) populations are key to the success of advanced treatments. In addition to laboratory methods such as immunohistochemistry and flow cytometry, in silico gene expression deconvolution methods are available for analyses of the relative proportions of immune cell types. Herein, we used microarray data from the public domain to profile the gene expression pattern of twenty-two immune cell types. Initially, outliers were detected based on the consistency of gene profiling clustering results and the original cell phenotype notation. Subsequently, we filtered out genes that are expressed in non-hematopoietic normal tissues and cancer cells. For every pair of immune cell types, we ran t-tests for each gene and defined differentially expressed genes (DEGs) from this comparison. Equal numbers of DEGs were then collected as candidate lists, and the numbers of conditions and minimal values for building signature matrices were calculated. Finally, we used ν-support vector regression to construct a deconvolution model. The performance of our system was evaluated using blood biopsies from 20 adults, in which 9 immune cell types were identified using flow cytometry. The present computations performed better than current state-of-the-art deconvolution methods. Finally, we implemented the proposed method in R and tested its extensibility and usability on the Windows, MacOS, and Linux operating systems. The method, MySort, is wrapped as a pluggable tool for the Galaxy platform, and usage details are available at https://testtoolshed.g2.bx.psu.edu/view/moneycat/mysort/e3afe097e80a .
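A toy version of signature-based deconvolution with ν-support vector regression, in the spirit of the method, can be written with scikit-learn; the random signature matrix and mixture below stand in for real immune-cell profiles, and the negative-coefficient clipping follows common practice in such pipelines.

```python
# Toy nu-SVR deconvolution of immune cell proportions from a bulk mixture.
import numpy as np
from sklearn.svm import NuSVR

rng = np.random.default_rng(0)
signature = rng.lognormal(size=(500, 22))      # genes x cell types (synthetic)
true_frac = rng.dirichlet(np.ones(22))          # ground-truth proportions
mixture = signature @ true_frac + rng.normal(scale=0.05, size=500)

# Fit one mixture sample: genes play the role of observations.
model = NuSVR(kernel="linear", nu=0.5, C=1.0)
model.fit(signature, mixture)

coef = np.maximum(model.coef_.ravel(), 0.0)     # clip negative cell weights
proportions = coef / coef.sum()                 # normalize to fractions
```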
Samanipour, Saer; Reid, Malcolm J; Bæk, Kine; Thomas, Kevin V
2018-04-17
Nontarget analysis is considered one of the most comprehensive tools for the identification of unknown compounds in a complex sample analyzed via liquid chromatography coupled to high-resolution mass spectrometry (LC-HRMS). Due to the complexity of the data generated via LC-HRMS, the data-dependent acquisition mode, which produces the MS2 spectra of a limited number of precursor ions, has been one of the most common approaches used during nontarget screening. However, the data-independent acquisition mode produces highly complex spectra that require proper deconvolution and library search algorithms. We have developed a deconvolution algorithm and a universal library search algorithm (ULSA) for the analysis of complex spectra generated via data-independent acquisition. These algorithms were validated and tested using both semisynthetic and real environmental data. A total of 6000 randomly selected spectra from MassBank were introduced across the total ion chromatograms of 15 sludge extracts at three levels of background complexity for the validation of the algorithms via semisynthetic data. The deconvolution algorithm successfully extracted more than 60% of the added ions in the analytical signal for 95% of the processed spectra (i.e., 3 complexity levels multiplied by 6000 spectra). The ULSA ranked the correct spectra among the top three for more than 95% of cases. We further tested the algorithms with 5 wastewater effluent extracts for 59 artificial unknown analytes (i.e., their presence or absence was confirmed via target analysis). The algorithms did not produce any false identifications while correctly identifying ∼70% of the total inquiries. The implications, capabilities, and limitations of both algorithms are further discussed.
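At its simplest, a library search of the kind ULSA performs is a similarity ranking over binned spectra. The dot-product score below is a generic baseline of ours, not the published ULSA scoring function; the square-root weighting is one common choice for damping dominant peaks.

```python
import numpy as np

def cosine_score(query, reference):
    """Dot-product similarity between two binned MS2 spectra.

    Both inputs are intensity vectors on a common m/z grid; this is a
    generic baseline match score, not the published ULSA itself.
    """
    q = np.sqrt(np.asarray(query, dtype=float))  # sqrt damps dominant peaks
    r = np.sqrt(np.asarray(reference, dtype=float))
    denom = np.linalg.norm(q) * np.linalg.norm(r)
    return float(q @ r / denom) if denom else 0.0

def rank_library(query, library):
    """Return library indices sorted by descending similarity."""
    scores = [cosine_score(query, ref) for ref in library]
    return np.argsort(scores)[::-1]
```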
Schinkel, Lena; Lehner, Sandro; Knobloch, Marco; Lienemann, Peter; Bogdal, Christian; McNeill, Kristopher; Heeb, Norbert V
2018-03-01
Chlorinated paraffins (CPs) are high production volume chemicals widely used as additives in metal working fluids. In such use, CPs are exposed to hot metal surfaces, which may induce degradation processes. We hypothesized that the elimination of hydrochloric acid would transform CPs into chlorinated olefins (COs). Mass spectrometry is widely used to detect CPs, mostly in the selected ion monitoring (SIM) mode, evaluating 2-3 ions at mass resolutions R < 20'000. This approach is not suited to detect COs, because their mass spectra strongly overlap with those of CPs. We applied a mathematical deconvolution method based on full-scan MS data to separate interfered CP/CO spectra. Metal drilling indeed induced HCl losses, and CO proportions in exposed mixtures of chlorotridecanes increased. Thermal exposure of chlorotridecanes at 160, 180, 200 and 220 °C also induced dehydrohalogenation reactions, and CO proportions again increased. Deconvolution of the respective mass spectra is needed to study CP transformation kinetics without bias from CO interferences. Apparent first-order rate constants (k_app) increased up to 0.17, 0.29 and 0.46 h⁻¹ for penta-, hexa- and heptachloro-tridecanes exposed at 220 °C. Respective half-lives (τ1/2) decreased from 4.0 to 2.4 and 1.5 h. Thus, higher chlorinated paraffins degrade faster than lower chlorinated ones. In conclusion, exposure of CPs during metal drilling and thermal treatment induced HCl losses and CO formation. It is expected that CPs and COs are co-released from such processes. Full-scan mass spectra and subsequent deconvolution of interfered signals are a promising approach to tackle the CP/CO problem in cases of insufficient mass resolution. Copyright © 2017 Elsevier Ltd. All rights reserved.
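The quoted rate constants and half-lives are consistent with simple first-order kinetics; as a check (our arithmetic, not the authors'):

```latex
% First-order loss of CPs: c(t) = c_0 e^{-k_{\mathrm{app}} t}, so
\tau_{1/2} = \frac{\ln 2}{k_{\mathrm{app}}}
\quad\Rightarrow\quad
\frac{0.693}{0.17\,\mathrm{h^{-1}}} \approx 4.0\,\mathrm{h},\quad
\frac{0.693}{0.29\,\mathrm{h^{-1}}} \approx 2.4\,\mathrm{h},\quad
\frac{0.693}{0.46\,\mathrm{h^{-1}}} \approx 1.5\,\mathrm{h}.
```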
NASA Astrophysics Data System (ADS)
Khodasevich, I. A.; Voitikov, S. V.; Orlovich, V. A.; Kosmyna, M. B.; Shekhovtsov, A. N.
2016-09-01
Unpolarized spontaneous Raman spectra of crystalline double calcium orthovanadates Ca10M(VO4)7 (M = Li, K, Na) in the range 150-1600 cm⁻¹ were measured. Two vibrational bands with full-width at half-maximum (FWHM) of 37-50 cm⁻¹ were found in the regions 150-500 and 700-1000 cm⁻¹. The band shapes were approximated well by deconvolution into Voigt profiles. The band at 700-1000 cm⁻¹ was stronger and was deconvoluted into eight Voigt profiles. The frequencies of the two strong lines were ~848 and ~862 cm⁻¹ for Ca10Li(VO4)7; ~850 and ~866 cm⁻¹ for Ca10Na(VO4)7; and ~844 and ~866 cm⁻¹ for Ca10K(VO4)7. The Lorentzian width parameters of these lines in the Voigt profiles were ~5 times greater than the Gaussian width parameters. The FWHM of the Voigt profiles were ~18-42 cm⁻¹. The two strongest lines had widths of 21-25 cm⁻¹. The vibrational band at 300-500 cm⁻¹ was ~5-6 times weaker than that at 700-1000 cm⁻¹ and was deconvoluted into four lines with widths of 25-40 cm⁻¹. The large FWHM of the Raman lines indicated that the crystal structures were disordered. These crystals could be of interest for Raman conversion of pico- and femtosecond laser pulses because of the intense vibrations with large FWHM in the Raman spectra.
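Deconvolution of a band into Voigt profiles is typically a nonlinear least-squares fit. The sketch below fits a two-line toy band with scipy's voigt_profile; the line positions and widths are illustrative values of ours, loosely echoing the Ca10Li(VO4)7 lines, not the paper's eight-line fit.

```python
import numpy as np
from scipy.special import voigt_profile
from scipy.optimize import curve_fit

def two_voigt(x, a1, c1, s1, g1, a2, c2, s2, g2):
    """Sum of two Voigt lines: amplitude, center, Gaussian sigma and
    Lorentzian gamma for each (a minimal stand-in for an 8-line fit)."""
    return (a1 * voigt_profile(x - c1, s1, g1)
            + a2 * voigt_profile(x - c2, s2, g2))

# Synthetic band around the two strong lines reported for Ca10Li(VO4)7,
# with a Lorentzian width ~5x the Gaussian width, as described above.
x = np.linspace(800, 900, 500)
y = two_voigt(x, 10, 848, 2, 10, 6, 862, 2, 10)
y += np.random.default_rng(1).normal(0, 0.01, x.size)

p0 = [8, 846, 3, 8, 5, 864, 3, 8]  # rough initial guesses
popt, _ = curve_fit(two_voigt, x, y, p0=p0)
```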
Angelis, G I; Reader, A J; Markiewicz, P J; Kotasidis, F A; Lionheart, W R; Matthews, J C
2013-08-07
Recent studies have demonstrated the benefits of a resolution model within iterative reconstruction algorithms in an attempt to account for effects that degrade the spatial resolution of the reconstructed images. However, these algorithms suffer from slower convergence rates, compared to algorithms where no resolution model is used, due to the additional need to solve an image deconvolution problem. In this paper, a recently proposed algorithm, which decouples the tomographic and image deconvolution problems within an image-based expectation maximization (EM) framework, was evaluated. This separation is convenient, because more computational effort can be placed on the image deconvolution problem and therefore accelerate convergence. Since the computational cost of solving the image deconvolution problem is relatively small, multiple image-based EM iterations do not significantly increase the overall reconstruction time. The proposed algorithm was evaluated using 2D simulations, as well as measured 3D data acquired on the high-resolution research tomograph. Results showed that bias reduction can be accelerated by interleaving multiple iterations of the image-based EM algorithm solving the resolution model problem, with a single EM iteration solving the tomographic problem. Significant improvements were observed particularly for voxels that were located on the boundaries between regions of high contrast within the object being imaged and for small regions of interest, where resolution recovery is usually more challenging. Minor differences were observed using the proposed nested algorithm, compared to the single iteration normally performed, when an optimal number of iterations is performed for each algorithm. However, using the proposed nested approach, convergence is significantly accelerated, enabling reconstruction with far fewer tomographic iterations (up to 70% fewer iterations for small regions). Nevertheless, the optimal number of nested image-based EM iterations is hard to define and should be selected according to the given application.
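The nesting idea can be sketched schematically for a linear system model: interleave several image-space EM (Richardson-Lucy) deconvolution updates per tomographic MLEM update. The matrices, update ordering, and iteration counts below are our simplification under stated assumptions, not the authors' exact algorithm.

```python
import numpy as np

def mlem_step(x, A, y):
    """One tomographic MLEM update (system matrix A, measured data y)."""
    proj = A @ x
    proj[proj == 0] = 1e-12
    return x * (A.T @ (y / proj)) / A.sum(axis=0)

def rl_step(z, H, x):
    """One image-space EM (Richardson-Lucy) update: deconvolve the
    current tomographic estimate x with the resolution kernel H."""
    blur = H @ z
    blur[blur == 0] = 1e-12
    return z * (H.T @ (x / blur)) / H.sum(axis=0)

def nested_recon(A, H, y, n_outer=20, n_inner=10):
    """Interleave n_inner deconvolution EM iterations per tomographic
    EM iteration, placing extra effort on the deconvolution problem."""
    z = np.ones(A.shape[1])
    for _ in range(n_outer):
        x = mlem_step(H @ z, A, y)  # tomographic update on blurred image
        for _ in range(n_inner):    # multiple cheap image-space updates
            z = rl_step(z, H, x)
    return z
```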
Continuous monitoring of high-rise buildings using seismic interferometry
NASA Astrophysics Data System (ADS)
Mordret, A.; Sun, H.; Prieto, G. A.; Toksoz, M. N.; Buyukozturk, O.
2016-12-01
The linear seismic response of a building is commonly extracted from ambient vibration measurements. Seismic deconvolution interferometry performed on ambient vibration measurements can also be used to estimate the dynamic characteristics of a building, such as the velocity of shear-waves travelling inside the building as well as a damping parameter depending on the intrinsic attenuation of the building and the soil-structure coupling. The continuous nature of the ambient vibrations allows us to measure these parameters repeatedly and to observe their temporal variations. We used 2 weeks of ambient vibration recorded by 36 accelerometers installed in the Green Building on the Massachusetts Institute of Technology campus (Cambridge, MA) to continuously monitor the shear-wave speed and the attenuation factor of the building. Due to the low strain of the ambient vibrations, the observed changes are totally reversible. The relative velocity changes between a reference deconvolution function and the current deconvolution functions are measured with two different methods: 1) the Moving Window Cross-Spectral technique and 2) the stretching technique. Both methods show similar results. We show that measuring the stretching coefficient for the deconvolution functions filtered around the fundamental mode frequency is equivalent to measuring the wandering of the fundamental frequency in the raw ambient vibration data. By comparing these results with local weather parameters, we show that the relative air humidity is the factor dominating the relative seismic velocity variations in the Green Building, as well as the wandering of the fundamental mode. The one-day periodic variations are affected by both the temperature and the humidity. The attenuation factor, measured as the exponential decay of the fundamental mode waveforms, shows a more complex behaviour with respect to the weather measurements.
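The stretching technique mentioned above admits a short sketch: grid-search the stretch factor that best correlates a stretched reference deconvolution function with the current one. The function name and grid limits below are illustrative choices of ours.

```python
import numpy as np

def stretching_dvv(ref, cur, t, eps_grid=np.linspace(-0.02, 0.02, 401)):
    """Estimate relative velocity change dv/v by the stretching method.

    The current deconvolution function cur(t) is compared with stretched
    versions of the reference, ref(t * (1 + eps)); the eps maximizing
    the correlation coefficient gives dv/v (sign conventions vary).
    """
    best_eps, best_cc = 0.0, -np.inf
    for eps in eps_grid:
        stretched = np.interp(t, t * (1.0 + eps), ref)
        cc = np.corrcoef(stretched, cur)[0, 1]
        if cc > best_cc:
            best_eps, best_cc = eps, cc
    return best_eps, best_cc
```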
Airway Basal Cell Heterogeneity and Lung Squamous Cell Carcinoma.
Hynds, Robert E; Janes, Sam M
2017-09-01
Basal cells are stem/progenitor cells that maintain airway homeostasis, enact repair following epithelial injury, and are a candidate cell-of-origin for lung squamous cell carcinoma. Heterogeneity of basal cells is recognized in terms of gene expression and differentiation capacity. In this issue, Pagano and colleagues isolate a subset of immortalized basal cells characterized by high motility, suggesting that basal cells might also be heterogeneous in their biophysical properties. Motility-selected cells displayed an increased ability to colonize the lung in vivo. The possible implications of these findings are discussed in terms of basal cell heterogeneity, epithelial cell migration, and modeling of metastasis that occurs early in cancer evolution. Cancer Prev Res; 10(9); 491-3. ©2017 AACR. See related article by Pagano et al., p. 514. ©2017 American Association for Cancer Research.
Abdelaziz, Hadeer M; Gaber, Mohamed; Abd-Elwakil, Mahmoud M; Mabrouk, Moustafa T; Elgohary, Mayada M; Kamel, Nayra M; Kabary, Dalia M; Freag, May S; Samaha, Magda W; Mortada, Sana M; Elkhodairy, Kadria A; Fang, Jia-You; Elzoghby, Ahmed O
2018-01-10
There is progressive evolution in the use of inhalable drug delivery systems (DDSs) for lung cancer therapy. The inhalation route offers many advantages, being a non-invasive method of drug administration that provides localized delivery of anti-cancer drugs to tumor tissue. This article reviews various inhalable colloidal systems studied for tumor-targeted drug delivery, including polymeric, lipid, hybrid and inorganic nanocarriers. The active targeting approaches for enhanced delivery of nanocarriers to lung cancer cells are illustrated. This article also reviews recent advances in inhalable microparticle-based drug delivery systems for lung cancer therapy, including bioresponsive, large porous, solid lipid and drug-complex microparticles. Possible strategies to improve aerosolization behavior and maintain the critical physicochemical parameters for efficient delivery of drugs deep into the lungs are also discussed. A strong emphasis is placed on approaches that combine the merits of both nanocarriers and microparticles, including inhalable nanocomposites and nanoaggregates, and on the optimization of such formulations using the proper techniques and carriers. Finally, the toxicological behavior and market potential of inhalable anti-cancer drug delivery systems are discussed. Copyright © 2017 Elsevier B.V. All rights reserved.
Diwanji, Tejan P.; Mohindra, Pranshu; Vyfhuis, Melissa; Snider, James W.; Kalavagunta, Chaitanya; Mossahebi, Sina; Yu, Jen; Feigenberg, Steven
2017-01-01
The 21st century has seen several paradigm shifts in the treatment of non-small cell lung cancer (NSCLC) in early-stage inoperable disease, definitive locally advanced disease, and the postoperative setting. A key driver in the improvement of local disease control has been the significant evolution of radiation therapy techniques in the last three decades, allowing for delivery of definitive radiation doses while limiting exposure of normal tissues. For patients with locally advanced NSCLC, the advent of volumetric imaging techniques has allowed a shift from 2-dimensional approaches to 3-dimensional conformal radiation therapy (3DCRT). The successors of 3DCRT, intensity-modulated radiation therapy and volumetric-modulated arc therapy (VMAT), have enabled even more conformal radiation delivery. Clinical evidence has shown that this can improve the quality of life for patients undergoing definitive management of lung cancer. In the early-stage setting, conventional fractionation led to poor outcomes. Evaluation of altered dose fractionation with the previously noted technology advances led to the advent of stereotactic body radiation therapy (SBRT). This technique has dramatically improved local control and expanded treatment options for inoperable, early-stage patients. The recent development of proton therapy has opened new avenues for improving conformity and the therapeutic ratio. Evolution of newer proton therapy techniques, such as pencil-beam scanning (PBS), could improve tolerability and possibly allow reexamination of dose escalation. These advances, along with significant advances in systemic therapies, have improved survival for lung cancer patients across the spectrum of non-metastatic disease. They have also brought to light new challenges and avenues for further research and improvement. PMID:28529896
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandlik, Nandkumar, E-mail: ntmandlik@gmail.com; Patil, B. J.; Bhoraskar, V. N.
2014-04-24
Nanorods of CaSO4:Dy with a diameter of 20 nm and a length of 200 nm have been synthesized by the chemical coprecipitation method. These samples were irradiated with gamma radiation at doses varying from 0.1 Gy to 50 kGy and their TL characteristics have been studied. The TL dose response shows a linear behavior up to 5 kGy and saturates with further increase in dose. A Computerized Glow Curve Deconvolution (CGCD) program was used for the analysis of the TL glow curves. Trapping parameters for the various peaks have been calculated using the CGCD program.
NASA Astrophysics Data System (ADS)
Mandlik, Nandkumar; Patil, B. J.; Bhoraskar, V. N.; Sahare, P. D.; Dhole, S. D.
2014-04-01
Nanorods of CaSO4:Dy with a diameter of 20 nm and a length of 200 nm have been synthesized by the chemical coprecipitation method. These samples were irradiated with gamma radiation at doses varying from 0.1 Gy to 50 kGy and their TL characteristics have been studied. The TL dose response shows a linear behavior up to 5 kGy and saturates with further increase in dose. A Computerized Glow Curve Deconvolution (CGCD) program was used for the analysis of the TL glow curves. Trapping parameters for the various peaks have been calculated using the CGCD program.
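Computerized glow curve deconvolution typically fits a sum of first-order glow peaks. A minimal sketch using the standard Kitis first-order peak shape follows; the two-peak parameters are illustrative values of ours, not the CaSO4:Dy trap parameters reported above.

```python
import numpy as np
from scipy.optimize import curve_fit

k_B = 8.617e-5  # Boltzmann constant, eV/K

def first_order_peak(T, Im, E, Tm):
    """Kitis first-order glow-peak shape used in CGCD fits: intensity
    vs temperature T for peak maximum Im at Tm, activation energy E."""
    d = 2.0 * k_B * T / E
    dm = 2.0 * k_B * Tm / E
    arg = E / (k_B * T) * (T - Tm) / Tm
    return Im * np.exp(1.0 + arg
                       - (T / Tm) ** 2 * np.exp(arg) * (1.0 - d) - dm)

def glow_curve(T, *params):
    """Sum of first-order peaks; params = (Im, E, Tm) per peak."""
    out = np.zeros_like(T)
    for i in range(0, len(params), 3):
        out += first_order_peak(T, *params[i:i + 3])
    return out

# Toy two-peak curve and refit (illustrative trap parameters).
T = np.linspace(350, 600, 600)
y = glow_curve(T, 1.0, 1.1, 450.0, 0.6, 1.4, 530.0)
p0 = [0.9, 1.0, 455.0, 0.5, 1.3, 525.0]
popt, _ = curve_fit(glow_curve, T, y, p0=p0)
```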
Iterative Transform Phase Diversity: An Image-Based Object and Wavefront Recovery
NASA Technical Reports Server (NTRS)
Smith, Jeffrey
2012-01-01
The Iterative Transform Phase Diversity algorithm is designed to solve the problem of recovering the wavefront in the exit pupil of an optical system and the object being imaged. This algorithm builds upon the robust convergence capability of Variable Sampling Mapping (VSM), in combination with the known success of various deconvolution algorithms. VSM is an alternative method for enforcing the amplitude constraints of a Misell-Gerchberg-Saxton (MGS) algorithm. When provided the object and additional optical parameters, VSM can accurately recover the exit pupil wavefront. By combining VSM and deconvolution, one is able to simultaneously recover the wavefront and the object.
Deconvolution Method on OSL Curves from ZrO2 Irradiated by Beta and UV Radiations
NASA Astrophysics Data System (ADS)
Rivera, T.; Kitis, G.; Azorín, J.; Furetta, C.
This paper reports the optically stimulated luminescence (OSL) response of ZrO2 to beta and ultraviolet radiation in order to investigate the potential use of this material as a radiation dosimeter. The experimentally obtained OSL decay curves were analyzed using the computerized curve deconvolution (CCD) method. It was found that the OSL curve structure, for the short (practical) illumination time used, consists of three first-order components. The individual OSL dose response behavior of each component was found. The values of the time at the OSL peak maximum and the decay constant of each component were also estimated.
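A three-component first-order OSL decay of the kind described can be fitted directly by nonlinear least squares; the amplitudes, decay constants, and time grid below are illustrative values of ours.

```python
import numpy as np
from scipy.optimize import curve_fit

def osl_decay(t, a1, l1, a2, l2, a3, l3):
    """Three first-order OSL components: I(t) = sum_i a_i exp(-l_i t)."""
    return (a1 * np.exp(-l1 * t) + a2 * np.exp(-l2 * t)
            + a3 * np.exp(-l3 * t))

t = np.linspace(0, 100, 1000)  # illumination time, s (illustrative)
y = osl_decay(t, 5.0, 1.0, 2.0, 0.2, 0.5, 0.02)
popt, _ = curve_fit(osl_decay, t, y, p0=[4, 0.8, 1.5, 0.3, 0.4, 0.03])
```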
Pooling across cells to normalize single-cell RNA sequencing data with many zero counts.
Lun, Aaron T L; Bach, Karsten; Marioni, John C
2016-04-27
Normalization of single-cell RNA sequencing data is necessary to eliminate cell-specific biases prior to downstream analyses. However, this is not straightforward for noisy single-cell data where many counts are zero. We present a novel approach where expression values are summed across pools of cells, and the summed values are used for normalization. Pool-based size factors are then deconvolved to yield cell-based factors. Our deconvolution approach outperforms existing methods for accurate normalization of cell-specific biases in simulated data. Similar behavior is observed in real data, where deconvolution improves the relevance of results of downstream analyses.
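The pooling idea admits a compact sketch: estimate a robust size factor for each random pool of cells, then deconvolve per-cell factors by least squares over the pool membership matrix. This is our simplified rendering of the approach under stated assumptions, not the published implementation.

```python
import numpy as np

def deconvolve_size_factors(counts, pool_size=10, n_pools=200, seed=0):
    """Pool-based normalization sketch for a genes x cells count matrix.

    Summing counts across pools of cells averages out the zeros; each
    pooled profile yields a robust size factor against an averaged
    pseudo-cell, and per-cell factors are recovered by least squares.
    """
    rng = np.random.default_rng(seed)
    n_cells = counts.shape[1]
    ref = counts.mean(axis=1)                 # average pseudo-cell
    keep = ref > 0                            # skip all-zero genes
    A = np.zeros((n_pools, n_cells))
    b = np.zeros(n_pools)
    for i in range(n_pools):
        pool = rng.choice(n_cells, size=pool_size, replace=False)
        A[i, pool] = 1.0
        pooled = counts[:, pool].sum(axis=1)
        b[i] = np.median(pooled[keep] / ref[keep])  # robust pool factor
    s, *_ = np.linalg.lstsq(A, b, rcond=None)
    return s / np.mean(s)                     # center factors to mean 1
```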
Correction Factor for Gaussian Deconvolution of Optically Thick Linewidths in Homogeneous Sources
NASA Technical Reports Server (NTRS)
Kastner, S. O.; Bhatia, A. K.
1999-01-01
Optically thick, non-Gaussian emission line profiles convolved with Gaussian instrumental profiles are constructed, and are then deconvolved on the usual Gaussian basis to examine the resulting departure from accuracy in "measured" linewidths. It is found that "measured" linewidths underestimate the true linewidths of optically thick lines by a factor which depends on the resolution factor r ≅ Doppler width/instrumental width and on the optical thickness τ0. An approximating expression is obtained for this factor, applicable in the range of at least 0 ≤ τ0 ≤ 10, which can provide estimates of the true linewidth and optical thickness.
Time-Domain Receiver Function Deconvolution using Genetic Algorithm
NASA Astrophysics Data System (ADS)
Moreira, L. P.
2017-12-01
Receiver functions (RFs) are a well-known method for crust modelling using passive seismological signals. Many different techniques have been developed to calculate RF traces by applying deconvolution to the radial and vertical seismogram components. A popular method uses spectral division of the two components, which requires human intervention to apply the water-level procedure that avoids instabilities from division by small numbers. Another widely used method is an iterative procedure that estimates the RF peaks, convolves them with the vertical-component seismogram, and compares the result with the radial component. This method is suitable for automatic processing; however, several RF traces turn out invalid due to peak estimation failure. In this work we propose a deconvolution algorithm that uses a genetic algorithm (GA) to estimate the RF peaks. The method operates entirely in the time domain, avoiding transformations between the time and frequency domains, and is fully suitable for automatic processing. Estimated peaks can be used to generate RF traces in seismogram format for visualization. The RF trace quality is similar for high-magnitude events, but there are fewer failures in RF calculation for smaller events, increasing the overall performance for stations with a high number of events.
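For context, the spectral-division approach mentioned above can be sketched compactly. This is a generic water-level deconvolution in the frequency domain; the function name and default level are our illustrative choices, not the paper's settings.

```python
import numpy as np

def water_level_deconv(radial, vertical, level=0.01):
    """Frequency-domain receiver-function estimate with a water level.

    Divides the radial spectrum by the vertical spectrum, replacing
    small denominator power values by a fraction of the maximum power
    to avoid the instabilities described above.
    """
    R = np.fft.rfft(radial)
    V = np.fft.rfft(vertical)
    power = (V * np.conj(V)).real
    floor = level * power.max()
    power = np.where(power < floor, floor, power)
    rf_spec = R * np.conj(V) / power
    return np.fft.irfft(rf_spec, n=len(radial))
```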
Ultrasonic inspection of studs (bolts) using dynamic predictive deconvolution and wave shaping.
Suh, D M; Kim, W W; Chung, J G
1999-01-01
Bolt degradation has become a major issue in the nuclear industry since the 1980s. If small cracks in stud bolts are not detected early enough, they grow rapidly and cause catastrophic disasters. Their detection, despite its importance, is known to be a very difficult problem due to the complicated structures of the stud bolts. This paper presents a method of detecting and sizing a small crack in the root between two adjacent crests in threads. The key idea comes from the fact that the mode-converted Rayleigh wave travels slowly down the face of the crack and returns from the intersection of the crack and the thread root to the transducer. Thus, when a crack exists, a small delayed pulse due to the Rayleigh wave is detected between the large, regularly spaced pulses from the thread. The delay time is the same as the propagation delay of the slow Rayleigh wave and is proportional to the size of the crack. To efficiently detect the slow Rayleigh wave, three methods based on digital signal processing are proposed: wave shaping, dynamic predictive deconvolution, and dynamic predictive deconvolution combined with wave shaping.
Retinal image restoration by means of blind deconvolution
NASA Astrophysics Data System (ADS)
Marrugo, Andrés G.; Šorel, Michal; Šroubek, Filip; Millán, María S.
2011-11-01
Retinal imaging plays a key role in the diagnosis and management of ophthalmologic disorders, such as diabetic retinopathy, glaucoma, and age-related macular degeneration. Because of the acquisition process, retinal images often suffer from blurring and uneven illumination. This problem may seriously affect disease diagnosis and progression assessment. Here we present a method for color retinal image restoration by means of multichannel blind deconvolution. The method is applied to a pair of retinal images acquired within a lapse of time, ranging from several minutes to months. It consists of a series of preprocessing steps to adjust the images so they comply with the considered degradation model, followed by the estimation of the point-spread function and, ultimately, image deconvolution. The preprocessing is mainly composed of image registration, uneven illumination compensation, and segmentation of areas with structural changes. In addition, we have developed a procedure for the detection and visualization of structural changes. This enables the identification of subtle developments in the retina not caused by variation in illumination or blur. The method was tested on synthetic and real images. Encouraging experimental results show that the method is capable of significant restoration of degraded retinal images.
NASA Technical Reports Server (NTRS)
Pan, Jianqiang
1992-01-01
Several important problems in the fields of signal processing and model identification are addressed, including system structure identification, frequency response determination, high-order model reduction, high-resolution frequency analysis, and deconvolution filtering. Each of these topics involves a wide range of applications and has received considerable attention. Using the Fourier-based sinusoidal modulating signals, it is shown that a discrete autoregressive model can be constructed for the least squares identification of continuous systems. Identification algorithms are presented for frequency response determination of both SISO and MIMO systems using only transient data. Several new schemes for model reduction were also developed. Based upon the complex sinusoidal modulating signals, a parametric least squares algorithm for high-resolution frequency estimation is proposed. Numerical examples show that the proposed algorithm gives better performance than the usual methods. The problem of deconvolution and parameter identification of a general noncausal, nonminimum-phase ARMA system driven by non-Gaussian stationary random processes was also studied. Algorithms are introduced for inverse cumulant estimation, both in the frequency domain via FFT algorithms and in the time domain via the least squares algorithm.
Torres-Lapasió, J R; Pous-Torres, S; Ortiz-Bolsico, C; García-Alvarez-Coque, M C
2015-01-16
The optimisation of resolution in high-performance liquid chromatography is traditionally performed attending only to retention-time information. However, even under optimal conditions, some peak pairs may remain unresolved. Such peak pairs can still be resolved by deconvolution, which can be carried out with more guarantees of success by including spectral information. In this work, two-way chromatographic objective functions (COFs) that incorporate both time and spectral information were tested, based on the concepts of peak purity (the analyte peak fraction free of overlapping) and multivariate selectivity (a figure of merit derived from the net analyte signal). These COFs are sensitive to situations where the components that coelute in a mixture show some spectral differences. Therefore, they are useful for finding experimental conditions where the spectrochromatograms can be recovered by deconvolution. Two-way multivariate selectivity yielded the best performance and was applied to the separation, using diode-array detection, of a mixture of 25 phenolic compounds that remained chromatographically unresolved using linear and multi-linear gradients of acetonitrile-water. Peak deconvolution was carried out using the combination of the orthogonal projection approach and alternating least squares. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Schawinski, Kevin; Zhang, Ce; Zhang, Hantian; Fowler, Lucas; Santhanam, Gokula Krishnan
2017-05-01
Observations of astrophysical objects such as galaxies are limited by various sources of random and systematic noise from the sky background, the optical system of the telescope and the detector used to record the data. Conventional deconvolution techniques are limited in their ability to recover features in imaging data by the Shannon-Nyquist sampling theorem. Here, we train a generative adversarial network (GAN) on a sample of 4550 images of nearby galaxies at 0.01 < z < 0.02 from the Sloan Digital Sky Survey and conduct 10× cross-validation to evaluate the results. We present a method using a GAN trained on galaxy images that can recover features from artificially degraded images with worse seeing and higher noise than the original, with a performance that far exceeds simple deconvolution. The ability to better recover detailed features such as galaxy morphology from low signal-to-noise and low angular resolution imaging data significantly increases our ability to study existing data sets of astrophysical objects as well as future observations with observatories such as the Large Synoptic Survey Telescope (LSST) and the Hubble and James Webb space telescopes.
Imaging samples in silica aerogel using an experimental point spread function.
White, Amanda J; Ebel, Denton S
2015-02-01
Light microscopy is a powerful tool that allows for many types of samples to be examined in a rapid, easy, and nondestructive manner. Subsequent image analysis, however, is compromised by distortion of signal by instrument optics. Deconvolution of images prior to analysis allows for the recovery of lost information by procedures that utilize either a theoretically or experimentally calculated point spread function (PSF). Using a laser scanning confocal microscope (LSCM), we have imaged whole impact tracks of comet particles captured in silica aerogel, a low density, porous SiO2 solid, by the NASA Stardust mission. In order to understand the dynamical interactions between the particles and the aerogel, precise grain location and track volume measurement are required. We report a method for measuring an experimental PSF suitable for three-dimensional deconvolution of imaged particles in aerogel. Using fluorescent beads manufactured into Stardust flight-grade aerogel, we have applied a deconvolution technique standard in the biological sciences to confocal images of whole Stardust tracks. The incorporation of an experimentally measured PSF allows for better quantitative measurements of the size and location of single grains in aerogel and more accurate measurements of track morphology.
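The standard biological-sciences deconvolution referred to above is commonly Richardson-Lucy with a measured PSF; a minimal sketch with scikit-image follows. The stack and PSF below are simulated stand-ins of ours for data that would be loaded elsewhere, and the iteration count is a tuning choice.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import restoration

# Stand-ins for data loaded elsewhere: a 3-D confocal stack (z, y, x)
# and an experimentally measured 3-D PSF (here simulated as a blurred
# point source; in practice it comes from the fluorescent beads).
rng = np.random.default_rng(2)
stack = rng.poisson(5.0, size=(16, 64, 64)).astype(float)
psf = np.zeros((9, 9, 9))
psf[4, 4, 4] = 1.0
psf = gaussian_filter(psf, sigma=1.5)
psf /= psf.sum()  # the PSF must integrate to one

# Richardson-Lucy deconvolution with the measured PSF; the image is
# rescaled because skimage clips output to [-1, 1] by default.
deconvolved = restoration.richardson_lucy(stack / stack.max(), psf,
                                          num_iter=30)
```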
NASA Astrophysics Data System (ADS)
Ainiwaer, A.; Gurrola, H.
2018-03-01
Common conversion point stacking or migration of receiver functions (RFs) and H-k (H is depth and k is Vp/Vs) stacking of RFs have become common methods to study the crust and upper mantle beneath broad-band three-component seismic stations. However, it can be difficult to interpret Pds RFs due to interference between the Pds, PPds and PSds phases, especially in the mantle portion of the lithosphere. We propose a phase separation method to isolate the prominent phases of the RFs and produce separate Pds, PPds and PSds 'phase specific' receiver functions (referred to as PdsRFs, PPdsRFs and PSdsRFs, respectively) by deconvolution of the wavefield rather than of single seismograms. One of the most important products of this deconvolution method is Ps receiver functions (PdsRFs) that are free of crustal multiples. This is accomplished by using H-k analysis to identify specific phases in the wavefield from all seismograms recorded at a station, which enables development of an iterative deconvolution procedure to produce the above-mentioned phase specific RFs. We refer to this method as wavefield iterative deconvolution (WID). The WID method differentiates and isolates different RF phases by exploiting their differences in moveout curves across the entire wave front. We tested WID by applying it to synthetic seismograms produced using a modified version of the PREM velocity model. WID effectively separates phases from each stacked RF in synthetic data. We also applied this technique to produce RFs from seismograms recorded at ARU (a broad-band station in Arti, Russia). The phase specific RFs produced using WID are easier to interpret than traditional RFs. The PdsRFs computed using WID are the most improved, owing to the distinct shape of their moveout curves as compared to those of the PPds and PSds phases. The WID method is most significant in reducing interference between phases at depths of less than 300 km. Phases from deeper layers (i.e. P660s as compared to PP220s) are less likely to be misinterpreted because the large amount of moveout causes the appropriate phases to stack coherently if there is sufficient distribution in ray parameter. WID is most effective in producing clean PdsRFs that are relatively free of reverberations, whereas PPdsRFs and PSdsRFs retain contamination from reverberations.
Sbragia, L.; Nassr, A.C.C.; Gonçalves, F.L.L.; Schmidt, A.F.; Zuliani, C.C.; Garcia, P.V.; Gallindo, R.M.; Pereira, L.A.V.
2014-01-01
Changes in vascular endothelial growth factor (VEGF) in pulmonary vessels have been described in congenital diaphragmatic hernia (CDH) and may contribute to the development of pulmonary hypoplasia and hypertension; however, how the expression of VEGF receptors changes during fetal lung development in CDH is not understood. The aim of this study was to compare morphological evolution with expression of the VEGF receptors VEGFR1 (Flt-1) and VEGFR2 (Flk-1) in the pseudoglandular, canalicular, and saccular stages of lung development in normal rat fetuses and in fetuses with CDH. Pregnant rats were divided into four groups (n=20 fetuses each), examined on four gestational days (GD 18.5, 19.5, 20.5, and 21.5): external control (EC), exposed to olive oil (OO), exposed to 100 mg nitrofen by gavage without CDH (N-), and exposed to nitrofen with CDH (CDH), with exposure on GD 9.5 (term=22 days). The morphological variables studied were: body weight (BW), total lung weight (TLW), left lung weight, TLW/BW ratio, total lung volume, and left lung volume. The histometric variables studied were: left lung parenchymal area density and left lung parenchymal volume. VEGFR1 and VEGFR2 expression were determined by Western blotting. The data were analyzed using analysis of variance with the Tukey-Kramer post hoc test. CDH frequency was 37% (80/216). All the morphological and histometric variables were reduced in the N- and CDH groups compared with the controls, and the reductions were more pronounced in the CDH group (P<0.05) and more evident on GD 20.5 and GD 21.5. Similar results were observed for VEGFR1 and VEGFR2 expression. We conclude that N- and CDH fetuses showed primary pulmonary hypoplasia, with a decrease in VEGFR1 and VEGFR2 expression. PMID:24519134
Early and mid-term results of lung transplantation with donors 60 years and older.
López, Iker; Zapata, Ricardo; Solé, Juan; Jaúregui, Alberto; Deu, María; Romero, Laura; Pérez, Javier; Bello, Irene; Wong, Manuel; Ribas, Montse; Masnou, Nuria; Rello, Jordi; Roman, Antonio; Canela, Mercedes
2015-01-01
There are doubts about the age limit for lung donors, and the ideal donor has traditionally been considered to be one younger than 55 years. The objective of this study was to compare the outcomes in lung transplantation between organs from donors older and younger than 60 years. We performed a retrospective observational study comparing the group of patients receiving organs from donors 60 years or older (Group A) with those receiving organs from donors younger than 60 years (Group B) between January 2007 and December 2011. Postoperative evolution and mortality rates, short-term and mid-term postoperative complications, and the global survival rate were evaluated. We analysed a total of 230 lung transplants, of which 53 (23%) involved lungs from donors 60 years of age or older (Group A), and 177 (77%) were from donors younger than 60 years (Group B). Three (5.7%) patients from Group A and 14 patients (7.9%) from Group B died within 30 days (P = 0.58). The percentage of patients free from chronic lung allograft dysfunction at 1-3 years was 95.5, 74.3 and 69.3% for Group A, and 94.5, 84.8 and 73.3% for Group B, respectively (P = 0.47). There were no statistically significant differences between Groups A and B in terms of survival at 3 years (69.4 vs 68.8%; P = 0.28). Our results support the idea that lungs from donors aged 60-70 years can be used safely for lung transplantation, with results comparable to lungs from younger donors in terms of postoperative mortality and mid-term survival. © The Author 2014. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
Epigenetics in non-small cell lung cancer: from basics to therapeutics.
Ansari, Junaid; Shackelford, Rodney E; El-Osta, Hazem
2016-04-01
Lung cancer remains the number one cause of cancer-related deaths worldwide with 221,200 estimated new cases and 158,040 estimated deaths in 2015. Approximately 80% of cases are non-small cell lung cancer (NSCLC). The diagnosis is usually made at an advanced stage where the prognosis is poor and therapeutic options are limited. The evolution of lung cancer is a multistep process involving genetic, epigenetic, and environmental factor interactions that result in the dysregulation of key oncogenes and tumor suppressor genes, culminating in activation of cancer-related signaling pathways. The past decade has witnessed the discovery of multiple molecular aberrations that drive lung cancer growth, among which are epidermal growth factor receptor (EGFR) mutations and translocations involving the anaplastic lymphoma kinase (ALK) gene. This has translated into therapeutic agent developments that target these molecular alterations. The absence of targetable mutations in 50% of NSCLC cases and targeted therapy resistance development underscores the importance for developing alternative therapeutic strategies for treating lung cancer. Among these strategies, pharmacologic modulation of the epigenome has been used to treat lung cancer. Epigenetics approaches may circumvent the problem of tumor heterogeneity by affecting the expression of multiple tumor suppression genes (TSGs), halting tumor growth and survival. Moreover, it may be effective for tumors that are not driven by currently recognized druggable mutations. This review summarizes the molecular pathology of lung cancer epigenetic aberrations and discusses current efforts to target the epigenome with different pharmacological approaches. Our main focus will be on hypomethylating agents, histone deacetylase (HDAC) inhibitors, microRNA modulations, and the role of novel epigenetic biomarkers. Last, we will address the challenges that face this old-new strategy in treating lung cancer.
Schachner, Emma R; Farmer, C G; McDonald, Andrew T; Dodson, Peter
2011-09-01
Examination of the thoracic rib and vertebral anatomy of extant archosaurs indicates a relationship between the postcranial axial skeleton and pulmonary anatomy. Lung ventilation in extant crocodilians is primarily achieved with a hepatic piston pump and costal rotation. The tubercula and capitula of the ribs lie on the horizontal plane, forming a smooth thoracic "ceiling" facilitating movement of the viscera. Although the parietal pleura is anchored to the dorsal thoracic wall, the dorsal visceral pleura exhibits a greater freedom of movement. The air sac system and lungs of birds are associated with bicapitate ribs with a ventrally positioned capitular articulation, generating a rigid and furrowed rib cage that minimizes dorsoventral changes in volume in the dorsal thorax. The thin walled bronchi are kept from collapsing by fusion of the lung to the thorax on all sides. Data from this study suggest a progression from a dorsally rigid, heterogeneously partitioned, multichambered lung in basal dinosauriform archosaurs towards the small entirely rigid avian-style lung that was likely present in saurischian dinosaurs, consistent with a constant volume cavum pulmonale, thin walled parabronchi, and distinct air sacs. There is no vertebral evidence for a crocodilian hepatic piston pump in any of the taxa reviewed. The evidence for both a rigid lung and unidirectional airflow in dinosauriformes raises the possibility that these animals had a highly efficient lung relative to other Mesozoic vertebrates, which may have contributed to their successful radiation during this time period. Copyright © 2011 Wiley-Liss, Inc.
Using Fractal And Morphological Criteria For Automatic Classification Of Lung Diseases
NASA Astrophysics Data System (ADS)
Vehel, Jacques Levy
1989-11-01
Medical images are difficult to analyze by means of classical image processing tools because they are very complex and irregular. Such shapes are obtained, for instance, in nuclear medicine with the spatial distribution of activity for organs such as the lungs, liver, and heart. We have tried to apply two different theories to these signals: - Fractal geometry deals with the analysis of complex irregular shapes which cannot be well described by classical Euclidean geometry. - Integral geometry treats sets globally and allows the introduction of robust measures. We have computed three parameters on three kinds of lung SPECT images: normal, pulmonary embolism, and chronic disease: - The commonly used fractal dimension (FD), which gives a measurement of the irregularity of the 3D shape. - The generalized lacunarity dimension (GLD), defined as the variance of the ratio of the local activity to the mean activity, which is only sensitive to the distribution and the size of gaps in the surface. - The Favard length, which gives an approximation of the surface of a 3D shape. The results show that each slice of the lung, considered as a 3D surface, is fractal and that the fractal dimension is the same for each slice and for the three kinds of lungs; as for the lacunarity and Favard length, they are clearly different for normal lungs, pulmonary embolisms, and chronic diseases. These results indicate that automatic classification of lung SPECT images can be achieved, and that a quantitative measurement of the evolution of the disease could be made.
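The fractal dimension used here is commonly estimated by box counting; a minimal sketch for a binary 2D mask follows. The exact estimator used for 3D activity surfaces in the paper may differ; this is a generic illustration.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate the box-counting (fractal) dimension of a binary image:
    count occupied boxes N(s) at several box sizes s and fit the slope
    of log N(s) against log(1/s)."""
    counts = []
    for s in sizes:
        h = mask.shape[0] // s * s
        w = mask.shape[1] // s * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)),
                          np.log(counts), 1)
    return slope

# Sanity check: a filled square has dimension ~2.
mask = np.ones((128, 128), dtype=bool)
print(box_counting_dimension(mask))  # ~2.0
```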
Eilstein, Daniel; Uhry, Zoé; Lim, Tek-Ang; Bloch, Juliette
2008-03-01
Lung cancer is currently the most common cancer in the world and as such is an important public health concern. One of the main challenges is to foresee the evolution of trends in lung cancer mortality rates in order to anticipate the future burden of this disease as well as to plan the supply of adequate health care. The aim of this study is to propose a quantification of future lung cancer mortality rates by gender in France until the year 2012. Lung cancer mortality data in France (1978-2002) were extracted from the National Statistics of Death and analyzed by 5-year age groups and periods, using a Bayesian age-period-cohort model. Between 1978 and 2002, the female lung cancer mortality rate rose by 3.3% per year. For men, a slow increase is observed until 1988-1992, followed by a declining trend. In 1998-2002, age-standardized mortality rates were, respectively, 45.5 and 7.6 per 100000 for males and females. By 2008-2012 these figures would reach 40.8 (95% credibility interval (CI): 32.7, 50.0) and 12.1 (CI: 11.7, 12.6) per 100000, respectively, which represents, among women, a 4.7% annual increase (CI: 4.5, 5.0). Our results highlight the relevance of pursuing public health measures in order to cope more actively with tobacco smoking in the prevention strategy against lung cancer, specifically among women.
[Function of alveoli as a result of the evolutionary development of the respiratory system in mammals].
Ivanov, K P
2013-01-01
The hemoglobin oxygenation reaction is known to occur within 40 femtoseconds (40 × 10⁻¹⁵ s). However, the process of oxygen diffusion to hemoglobin under physiologic conditions decelerates this reaction approximately a billion-fold. In mammalian lungs, blood moves at a high rate and in a relatively large amount. The human lung mass is as low as 600 g, but the complete cardiac output approaches 6 l/min. In the rat, 20 to 40 ml of blood passes per minute through the lung, whose mass is about 1.5 g. Such a blood flow rate is possible because the lungs of higher animals contain a dense network of relatively large microvessels with diameters from 20 to 40 μm and more. In spite of a large volume and a high blood flow rate hampering oxygen diffusion, complete blood oxygenation occurs in the lung alveoli. This is due to peculiar mechanisms that markedly facilitate oxygen diffusion and that developed in mammalian alveoli over many millions of years of evolution of the respiratory system. Thus, the alveolus is not a bubble of air but a complex tool for overcoming the inertness of diffusion. It is interesting that in the lungs of lower vertebrates neither such a system of blood vessels nor alveoli exist, and their blood flow rate is much lower than in mammals.
Precision Therapy for Lung Cancer: Tyrosine Kinase Inhibitors and Beyond.
Rajan, Arun; Schrump, David S
2015-01-01
For patients with advanced cancers, there has been a concerted effort to transition from a generic treatment paradigm to one based on tumor-specific biologic and patient-specific clinical characteristics. This approach, known as precision therapy, has been made possible owing to the widespread availability and reduced cost of cutting-edge technologies that are used to study the genomic, proteomic, and metabolic attributes of individual tumors. This review traces the evolution of precision therapy for lung cancer from the identification of molecular subsets of the disease to the development and approval of tyrosine kinase inhibitors, as well as immune checkpoint inhibitors, for lung cancer therapy. Challenges of the precision therapy era, including the emergence of acquired resistance, identification of untargetable mutations, and the effect on clinical trial design, are discussed. We conclude by highlighting newer applications for the concept of precision therapy. Published by Elsevier Inc.
Histopathology of ventilator-associated pneumonia (VAP) and its clinical implications.
Torres, A; Fábregas, N; Arce, Y; López-Boado, M A
1999-01-01
Ventilator-associated pneumonia (VAP) is a diffuse, polymicrobial and dynamic process with a heterogeneous distribution of lesions, showing different degrees of histological evolution and predominating in the dependent lung zones, in which microbiology and histology can be dissociated. This might explain why blind endobronchial techniques to collect respiratory secretions have accuracy similar to that of visually guided samples, and why it is difficult to validate any method for VAP diagnosis. In the clinical setting, the association of acute lung injury (ALI) and pneumonia is controversial. However, it is rare to detect diffuse alveolar damage (DAD) in the absence of histological signs of pneumonia, probably evidencing that ALI favors the development of pneumonia. Histopathologically, it is difficult to distinguish the initial and resolution phases of DAD from pneumonia and vice versa. On the other hand, there is a clear relationship between antimicrobial treatment and decreased lung bacterial burden, which strengthens the importance of distal airway sampling before starting antibiotic therapy.
Evolution of granulomas in lungs of mice infected aerogenically with Mycobacterium tuberculosis.
Cardona, P J; Llatjós, R; Gordillo, S; Díaz, J; Ojanguren, I; Ariza, A; Ausina, V
2000-08-01
Aerogenous infection of C57Bl/6 mice with a virulent strain of Mycobacterium tuberculosis (CL 511) leads to the formation of primary granulomas in the lung where neutrophils, macrophages and subsequently, lymphocytes accumulate progressively around an initial cluster of infected macrophages. The spread of infection through the lung parenchyma gives rise to secondary granulomas featuring numerous lymphocytes that surround a small number of infected macrophages. Afterwards, foamy macrophages add an outer layer to the granulomas, which characteristically respect the pulmonary interstitium and remain confined within the alveolar spaces. This feature, in conjunction with the constant presence of M. tuberculosis in the products of broncho-alveolar lavage, suggests that the upward bronchial migration of infected macrophages may contribute significantly to pulmonary dissemination of mycobacterial infection. The latter would be in agreement with the persistence of chronic pulmonary infection in spite of a concomitant strong T helper 1 cell response.
Costa, Daniel B.; Wright, Jeffrey; VanderLaan, Paul A.
2015-01-01
The diagnosis and staging of patients with lung cancer in recent decades has increasingly relied on minimally invasive tissue sampling techniques, such as endobronchial ultrasound (EBUS) or endoscopic ultrasound (EUS) needle aspiration, transbronchial biopsy, and transthoracic image guided core needle biopsy. These modalities have been shown to have low complication rates, and provide adequate cellular material for pathologic diagnosis and necessary ancillary molecular testing. As an important component to a multidisciplinary team approach in the care of patients with lung cancer, these minimally invasive modalities have proven invaluable for the rapid and safe acquisition of tissue used for the diagnosis, staging, and molecular testing of tumors to identify the best evidence-based treatment plan. The continuous evolution of the field of lung cancer staging and treatment has translated into improvements in survival and quality of life for patients. Although differences in clinical practice between academic and community hospital settings still exist, improvements in physician education and training as well as adoption of technological advancements should help narrow this gap going forward. PMID:26380180
NASA Astrophysics Data System (ADS)
Zhu, Xiaolu; Yang, Hao
2017-12-01
The recently emerged four-dimensional (4D) biofabrication technique aims to create dynamic three-dimensional (3D) biological structures that can transform their shapes or functionalities with time when an external stimulus is imposed or when cell postprinting self-assembly occurs. The evolution of the 3D pattern of branching geometry via self-assembly of cells is critical for 4D biofabrication of artificial organs or tissues with branched geometry. However, it is still unclear how the formation and evolution of these branching patterns are biologically encoded. We study the 4D fabrication of lung branching structures utilizing a simulation model of the reaction-diffusion mechanism, established as partial differential equations in four variables that describe the reaction and diffusion of morphogens over time during lung branching development. The simulation results present the forming process of the 3D branching pattern and interpret the behaviors of side branching and tip splitting as the stalk grows, through 3D visualization of the numerical simulation.
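A reaction-diffusion simulation of this general kind can be sketched compactly. The snippet below integrates a classic two-morphogen Gray-Scott system as a simplified stand-in for the paper's four-variable model; the parameter values are standard demonstration choices, not those of the lung-branching study.

```python
import numpy as np

def laplacian(f):
    """Five-point Laplacian with edge-padded (no-flux) boundaries."""
    p = np.pad(f, 1, mode='edge')
    return (p[:-2, 1:-1] + p[2:, 1:-1]
            + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * f)

def gray_scott(n=128, steps=5000, Du=0.16, Dv=0.08, F=0.035, k=0.065):
    """Two-morphogen reaction-diffusion toy model (Gray-Scott).

    Branching-like patterns emerge for suitable feed F and kill k; a
    simplified stand-in for the four-variable lung-branching model.
    """
    u = np.ones((n, n))
    v = np.zeros((n, n))
    u[n//2-5:n//2+5, n//2-5:n//2+5] = 0.5   # seed a central perturbation
    v[n//2-5:n//2+5, n//2-5:n//2+5] = 0.25
    for _ in range(steps):                  # explicit Euler, dt = dx = 1
        uvv = u * v * v
        u += Du * laplacian(u) - uvv + F * (1.0 - u)
        v += Dv * laplacian(v) + uvv - (F + k) * v
    return u, v
```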
Allele-Specific HLA Loss and Immune Escape in Lung Cancer Evolution.
McGranahan, Nicholas; Rosenthal, Rachel; Hiley, Crispin T; Rowan, Andrew J; Watkins, Thomas B K; Wilson, Gareth A; Birkbak, Nicolai J; Veeriah, Selvaraju; Van Loo, Peter; Herrero, Javier; Swanton, Charles
2017-11-30
Immune evasion is a hallmark of cancer. Losing the ability to present neoantigens through human leukocyte antigen (HLA) loss may facilitate immune evasion. However, the polymorphic nature of the locus has precluded accurate HLA copy-number analysis. Here, we present loss of heterozygosity in human leukocyte antigen (LOHHLA), a computational tool to determine HLA allele-specific copy number from sequencing data. Using LOHHLA, we find that HLA LOH occurs in 40% of non-small-cell lung cancers (NSCLCs) and is associated with a high subclonal neoantigen burden, APOBEC-mediated mutagenesis, upregulation of cytolytic activity, and PD-L1 positivity. The focal nature of HLA LOH alterations, their subclonal frequencies, enrichment in metastatic sites, and occurrence as parallel events suggests that HLA LOH is an immune escape mechanism that is subject to strong microenvironmental selection pressures later in tumor evolution. Characterizing HLA LOH with LOHHLA refines neoantigen prediction and may have implications for our understanding of resistance mechanisms and immunotherapeutic approaches targeting neoantigens. VIDEO ABSTRACT. Copyright © 2017 The Francis Crick Institute. Published by Elsevier Inc. All rights reserved.
Beckmann, Sonja; Nikolic, Nataša; Denhaerynck, Kris; Binet, Isabelle; Koller, Michael; Boely, Elsa; De Geest, Sabina
2017-03-01
Obesity and weight gain are serious concerns after solid organ transplantation (Tx); however, no unbiased comparison of body weight parameter evolution across organ groups has yet been performed. Using data from the prospective nationwide Swiss Transplant Cohort Study, we compared the evolution of weight parameters up to 3 years post-Tx in 1359 adult kidney (58.3%), liver (21.7%), lung (11.6%), and heart (8.4%) recipients transplanted between May 2008 and May 2012. Changes in mean weight and body mass index (BMI) category were compared to reference values from 6 months post-Tx. At 3 years post-Tx, compared to the other organ groups, liver Tx recipients showed the greatest weight gain (mean 4.8±10.4 kg), 57.4% gained >5% body weight, and they had the highest incidence of obesity (38.1%). After 3 years, based on their BMI categories at 6 months, normal-weight and obese liver Tx patients, as well as underweight kidney, lung and heart Tx patients, had the highest weight gains. Judged against international Tx patient data, the majority of our Swiss Tx recipients experienced lower post-Tx weight gain. However, our findings show weight gain pattern differences, both within and across organ Tx groups, that call for preventive measures. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Adaptive optics image restoration based on frame selection and multi-frame blind deconvolution
NASA Astrophysics Data System (ADS)
Tian, Y.; Rao, C. H.; Wei, K.
2008-10-01
Adaptive optics can only partially compensate for image blur caused by atmospheric turbulence, owing to observing conditions and hardware restrictions. A post-processing method based on frame selection and multi-frame blind deconvolution is proposed to improve images partially corrected by adaptive optics. The frames picked out by the frame-selection technique are then deconvolved. No prior knowledge is required except the positivity constraint. The method has been applied to the restoration of images of celestial bodies observed by the 1.2 m telescope equipped with a 61-element adaptive optics system at Yunnan Observatory. The results showed that the method can effectively improve images partially corrected by adaptive optics.
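Frame selection is commonly implemented by ranking short-exposure frames with a sharpness metric and keeping the best ones before multi-frame blind deconvolution. The normalized image-energy metric below is one common lucky-imaging choice, not necessarily the one used in the paper, and the keep fraction is illustrative.

```python
import numpy as np

def select_frames(frames, keep_fraction=0.2):
    """Rank frames by a sharpness metric and keep the best fraction.

    The metric sum(I^2) / sum(I)^2 rewards concentrated (sharp) light
    distributions and is invariant to overall intensity scaling.
    """
    frames = np.asarray(frames, dtype=float)
    energy = (frames ** 2).sum(axis=(1, 2)) / frames.sum(axis=(1, 2)) ** 2
    n_keep = max(1, int(keep_fraction * len(frames)))
    best = np.argsort(energy)[::-1][:n_keep]
    return frames[best]
```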
Digital sorting of complex tissues for cell type-specific gene expression profiles.
Zhong, Yi; Wan, Ying-Wooi; Pang, Kaifang; Chow, Lionel M L; Liu, Zhandong
2013-03-07
Cellular heterogeneity is present in almost all gene expression profiles. However, transcriptome analysis of tissue specimens often ignores the cellular heterogeneity present in these samples. Standard deconvolution algorithms require prior knowledge of the cell type frequencies within a tissue or their in vitro expression profiles. Furthermore, these algorithms tend to report biased estimates. Here, we describe a Digital Sorting Algorithm (DSA) for extracting cell-type-specific gene expression profiles from mixed tissue samples that is unbiased and does not require prior knowledge of cell type frequencies. The results suggest that DSA is a specific and sensitive algorithm for gene expression profile deconvolution and will be useful in studying individual cell types of complex tissues.
Memory-effect based deconvolution microscopy for super-resolution imaging through scattering media
NASA Astrophysics Data System (ADS)
Edrei, Eitan; Scarcelli, Giuliano
2016-09-01
High-resolution imaging through turbid media is a fundamental challenge of optical sciences that has attracted a lot of attention in recent years for its wide range of potential applications. Here, we demonstrate that the resolution of imaging systems looking behind a highly scattering medium can be improved below the diffraction-limit. To achieve this, we demonstrate a novel microscopy technique enabled by the optical memory effect that uses a deconvolution image processing and thus it does not require iterative focusing, scanning or phase retrieval procedures. We show that this newly established ability of direct imaging through turbid media provides fundamental and practical advantages such as three-dimensional refocusing and unambiguous object reconstruction.
Deconvolution of acoustic emissions for source localization using time reverse modeling
NASA Astrophysics Data System (ADS)
Kocur, Georg Karl
2017-01-01
Impact experiments on small-scale slabs made of concrete and aluminum were carried out. Wave motion radiated from the epicenter of the impact was recorded as voltage signals by resonant piezoelectric transducers. Numerical simulations of the elastic wave propagation are performed to simulate the physical experiments. The Hertz theory of contact is applied to estimate the force impulse, which is subsequently used for the numerical simulation. Displacements at the transducer positions are calculated numerically. A deconvolution function is obtained by comparing the physical (voltage signal) and the numerical (calculated displacement) experiments. Acoustic emission signals due to pencil-lead breaks are recorded, deconvolved and applied for localization using time reverse modeling.
Infection, inflammation, and lung function decline in infants with cystic fibrosis.
Pillarisetti, Naveen; Williamson, Elizabeth; Linnane, Barry; Skoric, Billy; Robertson, Colin F; Robinson, Phil; Massie, John; Hall, Graham L; Sly, Peter; Stick, Stephen; Ranganathan, Sarath
2011-07-01
Better understanding of evolution of lung function in infants with cystic fibrosis (CF) and its association with pulmonary inflammation and infection is crucial in informing both early intervention studies aimed at limiting lung damage and the role of lung function as outcomes in such studies. To describe longitudinal change in lung function in infants with CF and its association with pulmonary infection and inflammation. Infants diagnosed after newborn screening or clinical presentation were recruited prospectively. FVC, forced expiratory volume in 0.5 seconds (FEV0.5), and forced expiratory flows at 75% of exhaled vital capacity (FEF75) were measured using the raised-volume technique, and z-scores were calculated from published reference equations. Pulmonary infection and inflammation were measured in bronchoalveolar lavage within 48 hours of lung function testing. Thirty-seven infants had at least two successful repeat lung function measurements. Mean (SD) z-scores for FVC were -0.8 (1.0), -0.9 (1.1), and -1.7 (1.2) when measured at the first visit, 1-year visit, or 2-year visit, respectively. Mean (SD) z-scores for FEV0.5 were -1.4 (1.2), -2.4 (1.1), and -4.3 (1.6), respectively. In those infants in whom free neutrophil elastase was detected, FVC z-scores were 0.81 lower (P=0.003) and FEV0.5 z-scores 0.96 lower (P=0.001). Significantly greater decline in FEV0.5 z-scores occurred in those infected with Staphylococcus aureus (P=0.018) or Pseudomonas aeruginosa (P=0.021). In infants with CF, pulmonary inflammation is associated with lower lung function, whereas pulmonary infection is associated with a greater rate of decline in lung function. Strategies targeting pulmonary inflammation and infection are required to prevent early decline in lung function in infants with CF.
Evolution of cystic fibrosis lung function in the early years.
Bush, Andrew; Sly, Peter D
2015-11-01
Most treatment of newborn screening-diagnosed cystic fibrosis is not evidence-based; there are very few randomized controlled trials (RCTs). Furthermore, the advent of novel molecular therapies, which could be started at diagnosis, mandates performing RCTs in very young children. However, unless the natural history of early cystic fibrosis lung disease is known, RCTs are impossible. Here, we review the results of two large prospective cohorts of these infants: the London Cystic Fibrosis Collaboration (LCFC; London, UK) and the Australian Respiratory Early Surveillance Team for Cystic Fibrosis (AREST-CF; Australia). Nutritional status remained excellent in both cohorts. Both cohorts reported abnormal lung function at age 3 months. AREST-CF, which previously reported rapidly declining preschool lung function, now reports good conventional school-age spirometry. LCFC reported improvement between 3 months and 1 year, and stability in the second year. AREST-CF also reported a high prevalence of high-resolution computed tomographic abnormalities related to free neutrophil elastase in bronchoalveolar lavage; LCFC reported high-resolution computed tomographic changes at 1 year, which were too mild to be scored reproducibly. At least in the first 2 years of life, lung function is not a good end-point for RCTs; routine bronchoalveolar lavage and HRCT cannot be justified. Newborn screening has greatly improved outcomes, but we need better point-of-care biomarkers.
Lung Cancers Associated with Cystic Airspaces: Underrecognized Features of Early Disease.
Sheard, Sarah; Moser, Joanna; Sayer, Charlie; Stefanidis, Konstantinos; Devaraj, Anand; Vlahos, Ioannis
2018-01-01
Early lung cancers associated with cystic airspaces are increasingly being recognized as a cause of delayed diagnoses, owing to data gathered from screening trials and encounters in routine clinical practice as more patients undergo serial imaging. Several morphologic subtypes of cancers associated with cystic airspaces exist and can exhibit variable patterns of progression as the solid elements of the tumor grow. Current understanding of the pathogenesis of these malignancies is limited, and the numbers of cases reported in the literature are small. However, several tumor cell types are represented in these lesions, with adenocarcinoma predominating. The features of cystic airspaces differ among cases and include emphysematous bullae, congenital or fibrotic cysts, subpleural blebs, bronchiectatic airways, and distended distal airspaces. Once identified, these cystic lesions pose management challenges to radiologists in terms of distinguishing them from benign mimics of cancer that are commonly seen in patients who also are at increased risk of lung cancer. Rendering a definitive tissue-based diagnosis can be difficult when the lesions are small, and affected patients tend to be in groups that are at higher risk of requiring biopsy or resection. In addition, the decision to monitor these cases can add to patient anxiety and strain departmental resources. The authors have drawn from their experience, emerging evidence from international lung cancer screening trials, and large databases of lung cancer cases from other groups to analyze the prevalence and evolution of lung cancers associated with cystic airspaces and provide guidance for managing these lesions. Although there are insufficient data to support specific management guidelines similar to those for managing small solid and ground-glass lung nodules, these data should direct ongoing research on the early detection of lung cancer. © RSNA, 2018.
NASA Astrophysics Data System (ADS)
Floberg, J. M.; Holden, J. E.
2013-02-01
We introduce a method for denoising dynamic PET data, spatio-temporal expectation-maximization (STEM) filtering, that combines four-dimensional Gaussian filtering with EM deconvolution. The initial Gaussian filter suppresses noise at a broad range of spatial and temporal frequencies and EM deconvolution quickly restores the frequencies most important to the signal. We aim to demonstrate that STEM filtering can improve variance in both individual time frames and in parametric images without introducing significant bias. We evaluate STEM filtering with a dynamic phantom study, and with simulated and human dynamic PET studies of a tracer with reversible binding behaviour, [C-11]raclopride, and a tracer with irreversible binding behaviour, [F-18]FDOPA. STEM filtering is compared to a number of established three- and four-dimensional denoising methods. STEM filtering provides substantial improvements in variance in both individual time frames and in parametric images generated with a number of kinetic analysis techniques while introducing little bias. STEM filtering does bias early frames, but this does not affect quantitative parameter estimates. STEM filtering is shown to be superior to the other simple denoising methods studied. STEM filtering is a simple and effective denoising method that could be valuable for a wide range of dynamic PET applications.
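A rough sketch of the STEM idea, assuming the initial smoother is a separable 4-D Gaussian and the EM stage is a Richardson-Lucy loop that treats that same Gaussian as the effective blur; sigma values and the iteration count are placeholders:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def stem_filter(dynamic_pet, sigma=(1.0, 2.0, 2.0, 2.0), n_em=10):
        """Gaussian pre-smoothing plus EM (Richardson-Lucy) restoration.

        dynamic_pet : 4-D array ordered (time, z, y, x)
        sigma       : assumed Gaussian widths per axis (placeholders)
        n_em        : number of EM deconvolution iterations
        """
        pet = dynamic_pet.astype(float)
        smoothed = gaussian_filter(pet, sigma)        # broad-band noise suppression
        blur = lambda f: gaussian_filter(f, sigma)    # treat the smoother as the blur
        est = np.clip(smoothed, 1e-12, None)
        for _ in range(n_em):                         # RL/EM multiplicative updates
            ratio = smoothed / np.clip(blur(est), 1e-12, None)
            est = est * blur(ratio)                   # Gaussian is symmetric, so
        return est                                    # the adjoint equals the blur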
Photoacoustic imaging optimization with raw signal deconvolution and empirical mode decomposition
NASA Astrophysics Data System (ADS)
Guo, Chengwen; Wang, Jing; Qin, Yu; Zhan, Hongchen; Yuan, Jie; Cheng, Qian; Wang, Xueding
2018-02-01
Photoacoustic (PA) signal of an ideal optical absorbing particle is a single N-shape wave. PA signals of a complicated biological tissue can be considered as the combination of individual N-shape waves. However, the N-shape wave basis not only complicates the subsequent work, but also results in aliasing between adjacent micro-structures, which deteriorates the quality of the final PA images. In this paper, we propose a method to improve PA image quality through signal processing applied directly to raw signals, including deconvolution and empirical mode decomposition (EMD). During the deconvolution procedure, the raw PA signals are deconvolved with a system-dependent point spread function (PSF) which is measured in advance. Then, EMD is adopted to adaptively re-shape the PA signals with two constraints, positive polarity and spectrum consistence. With our proposed method, the built PA images can yield more detailed structural information. Micro-structures are clearly separated and revealed. To validate the effectiveness of this method, we present numerical simulations and phantom studies consisting of a densely distributed point source model and a blood vessel model. In the future, our study might hold potential for clinical PA imaging as it can help to distinguish micro-structures in the optimized images and even measure the size of objects from deconvolved signals.
NASA Astrophysics Data System (ADS)
Enguita, Jose M.; Álvarez, Ignacio; González, Rafael C.; Cancelas, Jose A.
2018-01-01
The problem of restoration of a high-resolution image from several degraded versions of the same scene (deconvolution) has been receiving attention in recent years in fields such as optics and computer vision. Deconvolution methods are usually based on sets of images taken with small (sub-pixel) displacements or slightly different focus. Techniques based on sets of images obtained with different point-spread-functions (PSFs) engineered by an optical system are less popular and mostly restricted to microscopic systems, where a spot of light is projected onto the sample under investigation, which is then scanned point-by-point. In this paper, we use the effect of conical diffraction to shape the PSFs in a full-field macroscopic imaging system. We describe a series of simulations and real experiments that help to evaluate the possibilities of the system, showing the enhancement in image contrast even at frequencies that are strongly filtered by the lens transfer function or when sampling near the Nyquist frequency. Although results are preliminary and there is room to optimize the prototype, the idea shows promise to overcome the limitations of the image sensor technology in many fields, such as forensics, medical, satellite, or scientific imaging.
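Restoration from several views with engineered PSFs is commonly posed as a joint Wiener inverse that accumulates evidence across views in the Fourier domain; the following sketch is an assumption about the general technique, not the authors' pipeline:

    import numpy as np

    def joint_wiener(images, psfs, nsr=1e-2):
        """Joint Wiener restoration from several PSF-engineered views.

        images : list of 2-D observations of the same scene
        psfs   : list of matching PSFs (same shape, centered)
        nsr    : assumed noise-to-signal power ratio
        """
        num = 0.0
        den = nsr
        for img, psf in zip(images, psfs):
            H = np.fft.fft2(np.fft.ifftshift(psf))
            num = num + np.conj(H) * np.fft.fft2(img)
            den = den + np.abs(H) ** 2   # views fill in each other's zeros
        return np.real(np.fft.ifft2(num / den))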
Charge reconstruction in large-area photomultipliers
NASA Astrophysics Data System (ADS)
Grassi, M.; Montuschi, M.; Baldoncini, M.; Mantovani, F.; Ricci, B.; Andronico, G.; Antonelli, V.; Bellato, M.; Bernieri, E.; Brigatti, A.; Brugnera, R.; Budano, A.; Buscemi, M.; Bussino, S.; Caruso, R.; Chiesa, D.; Corti, D.; Dal Corso, F.; Ding, X. F.; Dusini, S.; Fabbri, A.; Fiorentini, G.; Ford, R.; Formozov, A.; Galet, G.; Garfagnini, A.; Giammarchi, M.; Giaz, A.; Insolia, A.; Isocrate, R.; Lippi, I.; Longhitano, F.; Lo Presti, D.; Lombardi, P.; Marini, F.; Mari, S. M.; Martellini, C.; Meroni, E.; Mezzetto, M.; Miramonti, L.; Monforte, S.; Nastasi, M.; Ortica, F.; Paoloni, A.; Parmeggiano, S.; Pedretti, D.; Pelliccia, N.; Pompilio, R.; Previtali, E.; Ranucci, G.; Re, A. C.; Romani, A.; Saggese, P.; Salamanna, G.; Sawy, F. H.; Settanta, G.; Sisti, M.; Sirignano, C.; Spinetti, M.; Stanco, L.; Strati, V.; Verde, G.; Votano, L.
2018-02-01
Large-area PhotoMultiplier Tubes (PMT) make it possible to efficiently instrument Liquid Scintillator (LS) neutrino detectors, where large target masses are pivotal to compensate for neutrinos' extremely elusive nature. Depending on the detector light yield, several scintillation photons stemming from the same neutrino interaction are likely to hit a single PMT in a few tens/hundreds of nanoseconds, resulting in several photoelectrons (PEs) piling up at the PMT anode. In such a scenario, the signal generated by each PE is entangled with the others, and an accurate PMT charge reconstruction becomes challenging. This manuscript describes an experimental method able to address PMT charge reconstruction in the case of large PE pile-up, providing an unbiased charge estimator at the permille level up to 15 detected PEs. The method is based on a signal filtering technique (Wiener filter) which suppresses the noise due to both PMT and readout electronics, and on a Fourier-based deconvolution able to minimize the influence of signal distortions, such as an overshoot. The analysis of simulated PMT waveforms shows that the slope of a linear regression modeling the relation between reconstructed and true charge values improves from 0.769 ± 0.001 (without deconvolution) to 0.989 ± 0.001 (with deconvolution), where unitary slope implies perfect reconstruction. A C++ implementation of the charge reconstruction algorithm is available online at [1].
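A toy version of the filtering-plus-deconvolution chain, assuming the single-PE template and the signal/noise power spectra are known inputs; the regularization constant and all names are placeholders, not the published C++ implementation:

    import numpy as np

    def reconstruct_charge(waveform, spe_template, signal_psd, noise_psd):
        """Wiener filtering plus Fourier deconvolution of a PMT trace.

        waveform              : baseline-subtracted anode samples
        spe_template          : mean single-PE response, same length
        signal_psd, noise_psd : assumed power spectra over rfft bins
        Returns the estimated charge in units of single PEs.
        """
        W = signal_psd / (signal_psd + noise_psd)       # Wiener noise suppression
        Y = np.fft.rfft(waveform) * W
        H = np.fft.rfft(spe_template)
        eps = 1e-3 * np.max(np.abs(H))                  # placeholder regularizer
        X = Y * np.conj(H) / (np.abs(H) ** 2 + eps**2)  # remove SPE shape/overshoot
        pe_train = np.fft.irfft(X, n=len(waveform))
        return float(pe_train.sum())                    # sum of PE amplitudes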
Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan
2017-04-06
An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
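A compact sketch of variance-based frame selection followed by a multi-frame Richardson-Lucy (Poisson maximum-likelihood) update; the PSFs are assumed known and normalized, the keep fraction is illustrative, and the regularization of the published algorithm is omitted:

    import numpy as np
    from scipy.signal import fftconvolve

    def multiframe_rl(frames, psfs, keep=0.5, n_iter=30):
        """Variance-based frame selection plus multi-frame Richardson-Lucy.

        frames : list of 2-D AO frames of the same object
        psfs   : matching PSF estimates (assumed normalized, centered)
        keep   : fraction of frames retained by the variance criterion
        """
        order = np.argsort([np.var(f) for f in frames])[::-1]
        sel = order[: max(1, int(keep * len(frames)))]
        est = np.full(frames[0].shape, float(np.mean(frames[0])))
        for _ in range(n_iter):
            update = np.zeros_like(est)
            for i in sel:
                conv = fftconvolve(est, psfs[i], mode="same")
                ratio = frames[i] / np.clip(conv, 1e-12, None)
                update += fftconvolve(ratio, psfs[i][::-1, ::-1], mode="same")
            est *= update / len(sel)     # averaged multiplicative RL update
        return est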
Bade, Richard; Causanilles, Ana; Emke, Erik; Bijlsma, Lubertus; Sancho, Juan V; Hernandez, Felix; de Voogt, Pim
2016-11-01
A screening approach was applied to influent and effluent wastewater samples. After injection in a LC-LTQ-Orbitrap, data analysis was performed using two deconvolution tools, MsXelerator (modules MPeaks and MS Compare) and Sieve 2.1. The outputs were searched incorporating an in-house database of >200 pharmaceuticals and illicit drugs or ChemSpider. This hidden target screening approach led to the detection of numerous compounds including the illicit drug cocaine and its metabolite benzoylecgonine and the pharmaceuticals carbamazepine, gemfibrozil and losartan. The compounds found using both approaches were combined, and isotopic pattern and retention time prediction were used to filter out false positives. The remaining potential positives were reanalysed in MS/MS mode and their product ions were compared with literature and/or mass spectral libraries. The inclusion of the chemical database ChemSpider led to the tentative identification of several metabolites, including paraxanthine, theobromine, theophylline and carboxylosartan, as well as the pharmaceutical phenazone. The first three of these compounds are isomers and they were subsequently distinguished based on their product ions and predicted retention times. This work has shown that the use of deconvolution tools facilitates non-target screening and enables the identification of a higher number of compounds. Copyright © 2016 Elsevier B.V. All rights reserved.
Krishnan, Shaji; Verheij, Elwin E R; Bas, Richard C; Hendriks, Margriet W B; Hankemeier, Thomas; Thissen, Uwe; Coulier, Leon
2013-05-15
Mass spectra obtained by deconvolution of liquid chromatography/high-resolution mass spectrometry (LC/HRMS) data can be impaired by non-informative mass-to-charge (m/z) channels. This impairment of mass spectra can have significant negative influence on further post-processing, like quantification and identification. A metric derived from the knowledge of errors in isotopic distribution patterns, and quality of the signal within a pre-defined mass chromatogram block, has been developed to pre-select all informative m/z channels. This procedure results in the clean-up of deconvoluted mass spectra by maintaining the intensity counts from m/z channels that originate from a specific compound/molecular ion, for example, molecular ion, adducts, 13C-isotopes, multiply charged ions and removing all m/z channels that are not related to the specific peak. The methodology has been successfully demonstrated for two sets of high-resolution LC/MS data. The approach described is therefore thought to be a useful tool in the automatic processing of LC/HRMS data. It clearly shows the advantages compared to other approaches like peak picking and de-isotoping in the sense that all information is retained while non-informative data is removed automatically. Copyright © 2013 John Wiley & Sons, Ltd.
Plenoptic Image Motion Deblurring.
Chandramouli, Paramanand; Jin, Meiguang; Perrone, Daniele; Favaro, Paolo
2018-04-01
We propose a method to remove motion blur in a single light field captured with a moving plenoptic camera. Since motion is unknown, we resort to a blind deconvolution formulation, where one aims to identify both the blur point spread function and the latent sharp image. Even in the absence of motion, light field images captured by a plenoptic camera are affected by a non-trivial combination of both aliasing and defocus, which depends on the 3D geometry of the scene. Therefore, motion deblurring algorithms designed for standard cameras are not directly applicable. Moreover, many state-of-the-art blind deconvolution algorithms are based on iterative schemes, where blurry images are synthesized through the imaging model. However, current imaging models for plenoptic images are impractical due to their high dimensionality. We observe that plenoptic cameras introduce periodic patterns that can be exploited to obtain highly parallelizable numerical schemes to synthesize images. These schemes allow extremely efficient GPU implementations that enable the use of iterative methods. We can then cast blind deconvolution of a blurry light field image as a regularized energy minimization to recover a sharp high-resolution scene texture and the camera motion. Furthermore, the proposed formulation can handle non-uniform motion blur due to camera shake as demonstrated on both synthetic and real light field data.
Kratochvíla, Jiří; Jiřík, Radovan; Bartoš, Michal; Standara, Michal; Starčuk, Zenon; Taxt, Torfinn
2016-03-01
One of the main challenges in quantitative dynamic contrast-enhanced (DCE) MRI is estimation of the arterial input function (AIF). Usually, the signal from a single artery (ignoring contrast dispersion, partial volume effects and flow artifacts) or a population average of such signals (also ignoring variability between patients) is used. Multi-channel blind deconvolution is an alternative approach avoiding most of these problems. The AIF is estimated directly from the measured tracer concentration curves in several tissues. This contribution extends the published methods of multi-channel blind deconvolution by applying a more realistic model of the impulse residue function, the distributed capillary adiabatic tissue homogeneity model (DCATH). In addition, an alternative AIF model is used and several AIF-scaling methods are tested. The proposed method is evaluated on synthetic data with respect to the number of tissue regions and to the signal-to-noise ratio. Evaluation on clinical data (renal cell carcinoma patients before and after the beginning of the treatment) gave consistent results. An initial evaluation on clinical data indicates more reliable and less noise sensitive perfusion parameter estimates. Blind multi-channel deconvolution using the DCATH model might be a method of choice for AIF estimation in a clinical setup. © 2015 Wiley Periodicals, Inc.
Rucci, Michael; Hardie, Russell C; Barnard, Kenneth J
2014-05-01
In this paper, we present a computationally efficient video restoration algorithm to address both blur and noise for a Nyquist sampled imaging system. The proposed method utilizes a temporal Kalman filter followed by a correlation-model-based spatial adaptive Wiener filter (AWF). The Kalman filter employs an affine background motion model and a novel process-noise variance estimate. We also propose and demonstrate a new multidelay temporal Kalman filter designed to more robustly treat local motion. The AWF is a spatial operation that performs deconvolution and adapts to the spatially varying residual noise left in the Kalman filter stage. In image areas where the temporal Kalman filter is able to provide significant noise reduction, the AWF can be aggressive in its deconvolution. In other areas, where less noise reduction is achieved with the Kalman filter, the AWF balances the deconvolution with spatial noise reduction. In this way, the Kalman filter and AWF work together effectively, but without the computational burden of full joint spatiotemporal processing. We also propose a novel hybrid system that combines a temporal Kalman filter and BM3D processing. To illustrate the efficacy of the proposed methods, we test the algorithms on both simulated imagery and video collected with a visible camera.
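A drastically simplified per-pixel sketch of the temporal-then-spatial cascade, assuming a static background (the paper additionally models affine motion) and using scipy's locally adaptive Wiener filter for the spatial stage; the deconvolution component of the AWF is omitted here:

    import numpy as np
    from scipy.signal import wiener

    def kalman_awf(frames, q=1e-3, r=1e-2, win=5):
        """Per-pixel temporal Kalman filter followed by spatial Wiener.

        frames : iterable of 2-D frames (static scene assumed here)
        q, r   : assumed process/measurement noise variances
        win    : window of scipy's locally adaptive Wiener filter
        """
        x = p = None
        out = []
        for frame in frames:
            z = np.asarray(frame, dtype=float)
            if x is None:
                x, p = z, np.ones_like(z)
            else:
                p = p + q                      # predict step
                k = p / (p + r)                # per-pixel Kalman gain
                x = x + k * (z - x)            # update with the new frame
                p = (1.0 - k) * p
            out.append(wiener(x, (win, win)))  # spatial residual-noise stage
        return out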
Ciesielski, Bartlomiej; Marciniak, Agnieszka; Zientek, Agnieszka; Krefft, Karolina; Cieszyński, Mateusz; Boguś, Piotr; Prawdzik-Dampc, Anita
2016-12-01
This study assesses the accuracy of EPR dosimetry in bones based on deconvolution of the experimental spectra into the background (BG) and the radiation-induced signal (RIS) components. The model RISs were represented by EPR spectra from irradiated enamel or bone powder; the model BG signals by EPR spectra of unirradiated bone samples or by simulated spectra. Samples of compact and trabecular bones were irradiated in the 30-270 Gy range and the intensities of their RISs were calculated using various combinations of those benchmark spectra. The relationships between the dose and the RIS were linear (R² > 0.995), with practically no difference between results obtained when using signals from irradiated enamel or bone as the model RIS. Use of different experimental spectra for the model BG resulted in variations in intercepts of the dose-RIS calibration lines, leading to systematic errors in reconstructed doses, in particular for high-BG samples of trabecular bone. These errors were reduced when simulated spectra instead of the experimental ones were used as the benchmark BG signal in the applied deconvolution procedures. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
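The deconvolution described reduces to expressing the measured spectrum as a linear combination of benchmark BG and RIS spectra. A minimal non-negative least-squares sketch (the study's actual fitting procedure may differ):

    import numpy as np
    from scipy.optimize import nnls

    def decompose_epr(spectrum, bg_model, ris_model):
        """Split a measured EPR spectrum into BG and RIS components.

        spectrum  : spectrum of the irradiated bone sample
        bg_model  : benchmark background (measured or simulated)
        ris_model : benchmark radiation-induced signal
        Returns (bg_amplitude, ris_amplitude); the RIS amplitude maps
        to dose through a linear calibration as in the study.
        """
        A = np.column_stack([bg_model, ris_model])
        coeffs, _ = nnls(A, spectrum)
        return float(coeffs[0]), float(coeffs[1])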
NASA Astrophysics Data System (ADS)
Schneiderbauer, Simon; Saeedipour, Mahdi
2018-02-01
Highly resolved two-fluid model (TFM) simulations of gas-solid flows in vertical periodic channels have been performed to study closures for the filtered drag force and the Reynolds-stress-like contribution stemming from the convective terms. An approximate deconvolution model (ADM) for the large-eddy simulation of turbulent gas-solid suspensions is detailed and subsequently used to reconstruct those unresolved contributions in an a priori manner. With such an approach, an approximation of the unfiltered solution is obtained by repeated filtering allowing the determination of the unclosed terms of the filtered equations directly. A priori filtering shows that predictions of the ADM model yield fairly good agreement with the fine grid TFM simulations for various filter sizes and different particle sizes. In particular, strong positive correlation (ρ > 0.98) is observed at intermediate filter sizes for all sub-grid terms. Additionally, our study reveals that the ADM results moderately depend on the choice of the filters, such as box and Gaussian filter, as well as the deconvolution order. The a priori test finally reveals that ADM is superior compared to isotropic functional closures proposed recently [S. Schneiderbauer, "A spatially-averaged two-fluid model for dense large-scale gas-solid flows," AIChE J. 63, 3544-3562 (2017)].
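ADM reconstructs an approximately unfiltered field by repeated application of the filter, i.e. a truncated van Cittert series u* = Σ_k (I − G)^k ū. A sketch with an assumed Gaussian filter kernel (the paper also considers box filters):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def approximate_deconvolution(filtered, sigma=2.0, order=3):
        """Truncated van Cittert series u* = sum_k (I - G)^k u_bar.

        filtered : a filtered coarse field (e.g. solids volume fraction)
        sigma    : width of the assumed Gaussian filter kernel G
        order    : truncation (deconvolution) order of the series
        """
        G = lambda f: gaussian_filter(f, sigma)
        term = np.asarray(filtered, dtype=float).copy()
        est = np.zeros_like(term)
        for _ in range(order + 1):
            est += term            # accumulate (I - G)^k u_bar
            term = term - G(term)
        return est                 # approximately unfiltered field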
An integrated analysis-synthesis array system for spatial sound fields.
Bai, Mingsian R; Hua, Yi-Hsin; Kuo, Chia-Hao; Hsieh, Yu-Hao
2015-03-01
An integrated recording and reproduction array system for spatial audio is presented within a generic framework akin to the analysis-synthesis filterbanks in discrete time signal processing. In the analysis stage, a microphone array "encodes" the sound field by using the plane-wave decomposition. Direction of arrival of plane-wave components that comprise the sound field of interest are estimated by multiple signal classification. Next, the source signals are extracted by using a deconvolution procedure. In the synthesis stage, a loudspeaker array "decodes" the sound field by reconstructing the plane-wave components obtained in the analysis stage. This synthesis stage is carried out by pressure matching in the interior domain of the loudspeaker array. The deconvolution problem is solved by truncated singular value decomposition or convex optimization algorithms. For high-frequency reproduction that suffers from the spatial aliasing problem, vector panning is utilized. Listening tests are undertaken to evaluate the deconvolution method, vector panning, and a hybrid approach that combines both methods to cover frequency ranges below and above the spatial aliasing frequency. Localization and timbral attributes are considered in the subjective evaluation. The results show that the hybrid approach performs the best in overall preference. In addition, there is a trade-off between reproduction performance and the external radiation.
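The truncated singular value decomposition used for the deconvolution step can be sketched as follows for a single frequency bin; the matrix G, the vector y and the truncation order k are assumptions about the general setup, not the authors' exact formulation:

    import numpy as np

    def tsvd_solve(G, y, k):
        """Truncated-SVD deconvolution for one frequency bin.

        G : (microphones x plane-wave sources) propagation matrix
        y : complex microphone spectra at this frequency
        k : number of singular values retained
        """
        U, s, Vh = np.linalg.svd(G, full_matrices=False)
        inv = np.zeros_like(s)
        inv[:k] = 1.0 / s[:k]      # discard ill-conditioned directions
        return Vh.conj().T @ (inv * (U.conj().T @ y))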
Crackles and instabilities during lung inflation
NASA Astrophysics Data System (ADS)
Alencar, Adriano M.; Majumdar, Arnab; Hantos, Zoltan; Buldyrev, Sergey V.; Eugene Stanley, H.; Suki, Béla
2005-11-01
In a variety of physico-chemical reactions, the actual process takes place in a reactive zone, called the “active surface”. We define the active surface of the lung as the set of airway segments that are closed but connected to the trachea through an open pathway, which is the interface between closed and open regions in a collapsed lung. To study the active surface and the time interval between consecutive openings, we measured the sound pressure of crackles, associated with the opening of collapsed airway segments in isolated dog lungs, inflating from the collapsed state in 120 s. We analyzed the sequence of crackle amplitudes, inter-crackle intervals, and low frequency energy from acoustic data. The series of spike amplitudes spans two orders of magnitude and the inter-crackle intervals span over five orders of magnitude. The distribution of spike amplitudes follows a power law for nearly two decades, while the distribution of time intervals between consecutive crackles shows two regimes of power law behavior, where the first region represents crackles coming from avalanches of openings whereas the second region is due to the time intervals between separate avalanches. Using the time interval between measured crackles, we estimated the time evolution of the active surface during lung inflation. In addition, we show that recruitment and instabilities along the pressure-volume curve are associated with airway opening and recruitment. We find good agreement between the theory of the dynamics of lung inflation and the experimental data, which, combined with numerical results, may prove useful in the clinical diagnosis of lung diseases.
Human pericytes adopt myofibroblast properties in the microenvironment of the IPF lung.
Sava, Parid; Ramanathan, Anand; Dobronyi, Amelia; Peng, Xueyan; Sun, Huanxing; Ledesma-Mendoza, Adrian; Herzog, Erica L; Gonzalez, Anjelica L
2017-12-21
Idiopathic pulmonary fibrosis (IPF) is a fatal disease of unknown etiology characterized by a compositionally and mechanically altered extracellular matrix. Poor understanding of the origin of α-smooth muscle actin (α-SMA) expressing myofibroblasts has hindered curative therapies. Though proposed as a source of myofibroblasts in mammalian tissues, identification of microvascular pericytes (PC) as contributors to α-SMA-expressing populations in human IPF and the mechanisms driving this accumulation remain unexplored. Here, we demonstrate enhanced detection of α-SMA+ cells coexpressing the PC marker neural/glial antigen 2 in the human IPF lung. Isolated human PC cultured on decellularized IPF lung matrices adopt expression of α-SMA, demonstrating that these cells undergo phenotypic transition in response to direct contact with the extracellular matrix (ECM) of the fibrotic human lung. Using potentially novel human lung-conjugated hydrogels with tunable mechanical properties, we decoupled PC responses to matrix composition and stiffness to show that α-SMA+ PC accumulate in a mechanosensitive manner independent of matrix composition. PC activated with TGF-β1 remodel the normal lung matrix, increasing tissue stiffness to facilitate the emergence of α-SMA+ PC via MKL-1/MRTF-A mechanotransduction. Nintedanib, a tyrosine-kinase inhibitor approved for IPF treatment, restores the elastic modulus of fibrotic lung matrices to reverse the α-SMA+ phenotype. This work furthers our understanding of the role that microvascular PC play in the evolution of IPF, describes the creation of an ex vivo platform that advances the study of fibrosis, and presents a potentially novel mode of action for a commonly used antifibrotic therapy that has great relevance for human disease.
NASA Astrophysics Data System (ADS)
Gurrola, H.; Berdine, A.; Pulliam, J.
2017-12-01
Interference between Ps phases and reverberations (PPs, PSs phases and reverberations thereof) makes it difficult to use Ps receiver functions (RF) in regions with thick sediments. Crustal reverberations typically interfere with Ps phases from the lithosphere-asthenosphere boundary (LAB). We have developed a method to separate Ps phases from reverberations by deconvolution of all the data recorded at a seismic station, removing phases from a single wavefront at each iteration of the deconvolution (wavefield iterative deconvolution, or WID). We applied WID to data collected in the Gulf Coast and Llano Front regions of Texas by the EarthScope Transportable Array and by a temporary deployment of 23 broadband seismometers (deployed by Texas Tech and Baylor Universities). The 23-station temporary deployment was 300 km long, crossing from Matagorda Island onto the Llano uplift. 3-D imaging using these data shows that the deepest part of the sedimentary basin may be inboard of the coastline. The Moho beneath the Gulf Coast plain does not appear in many of the images. This could be due to interference from reverberations from shallower layers, or it may indicate the lack of a strong velocity contrast at the Moho, perhaps due to serpentinization of the uppermost mantle. The Moho appears to be flat, at about 40 km, beneath most of the Llano uplift but may thicken to the south and thin beneath the Coastal plain. After application of WID, we were able to identify a negatively polarized Ps phase consistent with LAB depths identified in Sp RF images. The LAB appears to be 80-100 km deep beneath most of the coast but is 100 to 120 km deep beneath the Llano uplift. There are other negatively polarized phases between 160 and 200 km depths beneath the Gulf Coast and the Llano Uplift. These deeper phases may indicate that, in this region, the LAB is transitional in nature rather than a discrete boundary.
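WID generalizes classic iterative time-domain deconvolution by removing one wavefront across all stations per iteration. For orientation, a sketch of the single-station iterative scheme (in the style of Ligorria and Ammon) that such methods build on; the spike count is a placeholder:

    import numpy as np
    from scipy.signal import fftconvolve

    def iterative_deconvolution(radial, vertical, n_spikes=50):
        """Single-station iterative time-domain deconvolution.

        radial, vertical : equal-length component seismograms
        Returns a spike train r such that radial ~ vertical * r.
        """
        n = len(radial)
        rf = np.zeros(n)
        resid = np.asarray(radial, dtype=float).copy()
        src = np.asarray(vertical, dtype=float)
        norm = float(np.dot(src, src))
        for _ in range(n_spikes):
            # cross-correlate residual with the source; keep causal lags
            xc = np.correlate(resid, src, mode="full")[n - 1:]
            lag = int(np.argmax(np.abs(xc)))
            amp = xc[lag] / norm
            rf[lag] += amp
            spike = np.zeros(n)
            spike[lag] = amp
            # subtract the predicted arrival from the residual
            resid -= fftconvolve(spike, src)[:n]
        return rf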
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ulmer, W.
2015-06-15
Purpose: The knowledge of the total nuclear cross-section Qtot(E) of therapeutic protons provides important information in advanced radiotherapy with protons, such as the decrease of fluence of primary protons, the release of secondary particles (neutrons, protons, deuterons, etc.), and the production of nuclear fragments (heavy recoils), which usually undergo β+/− decay by emission of γ-quanta. Determination of Qtot(E) is therefore an important tool for sophisticated calculation algorithms of dose distributions. This cross-section can be determined by a linear combination of shifted Gaussian kernels and an error-function. The resonances resulting from deconvolutions in the energy space can be associated with typical nuclear reactions. Methods: The described method for determining Qtot(E) results from an extension of the Breit-Wigner formula and a rather extended version of the nuclear shell theory to include nuclear correlation effects, clusters and highly excited/virtually excited nuclear states. The elastic energy transfer of protons to nucleons (the quantum numbers of the target nucleus remain constant) can be removed by the mentioned deconvolution. Results: The deconvolution of the error-function term of the type c_erf·erf((E − E_Th)/σ_erf) is the main contribution to obtaining various nuclear reactions as resonances, since the elastic part of energy transfer is removed. The nuclear products of various elements of therapeutic interest, like oxygen and calcium, are classified and calculated. Conclusions: The release of neutrons is completely underestimated, in particular for low-energy protons. The transport of secondary particles, e.g. cluster formation by deuterium, tritium and α-particles, shows an essential contribution to secondary particles, and the heavy recoils, which create γ-quanta by decay reactions, lead to broadening of the scatter profiles. These contributions cannot be accounted for by one single Gaussian kernel for the description of lateral scatter.
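A minimal instance of the described decomposition, fitting Qtot(E) as an error-function threshold term plus shifted Gaussian resonance kernels; the two-Gaussian form, all starting values and the data names are placeholders:

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import erf

    def qtot_model(E, c, E_th, s_erf, a1, E1, s1, a2, E2, s2):
        """erf threshold term plus two shifted Gaussian resonance kernels."""
        g = lambda a, mu, s: a * np.exp(-0.5 * ((E - mu) / s) ** 2)
        return c * erf((E - E_th) / s_erf) + g(a1, E1, s1) + g(a2, E2, s2)

    # With measured data (E_data, Q_data) and an initial_guess supplied:
    # popt, _ = curve_fit(qtot_model, E_data, Q_data, p0=initial_guess)
    # Subtracting the fitted erf term isolates the resonance contributions.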
Microearthquake sequences along the Irpinia normal fault system in Southern Apennines, Italy
NASA Astrophysics Data System (ADS)
Orefice, Antonella; Festa, Gaetano; Alfredo Stabile, Tony; Vassallo, Maurizio; Zollo, Aldo
2013-04-01
Microearthquakes reflect a continuous readjustment of tectonic structures, such as faults, under the action of local and regional stress fields. Low magnitude seismicity in the vicinity of active fault zones may reveal insights into the mechanics of the fault systems during the inter-seismic period and shine a light on the role of fluids and other physical parameters in promoting or disfavoring the nucleation of larger size events in the same area. Here we analyzed several earthquake sequences concentrated in very limited regions along the 1980 Irpinia earthquake fault zone (Southern Italy), a complex system characterized by normal stress regime, monitored by the dense, multi-component, high dynamic range seismic network ISNet (Irpinia Seismic Network). On a specific single sequence, the May 2008 Laviano swarm, we performed accurate absolute and relative locations and estimated source parameters and scaling laws that were compared with standard stress-drops computed for the area. Additionally, from EGF deconvolution, we computed a slip model for the mainshock and investigated the space-time evolution of the events in the sequence to reveal possible interactions among earthquakes. Through the massive analysis of cross-correlation based on the master event scanning of the continuous recording, we also reconstructed the catalog of repeated earthquakes and recognized several co-located sequences. For these events, we analyzed the statistical properties, location and source parameters and their space-time evolution with the aim of inferring the processes that control the occurrence and the size of microearthquakes in a swarm.
NASA Astrophysics Data System (ADS)
Ludwig, Bethany Ann; Cunningham, Nichol
2017-01-01
We present results from an investigation of class II 6.7 GHz methanol masers towards four Massive Young Stellar Objects (MYSOs). The sources, selected from the Red MSX Source (RMS) Survey (Lumsden et al. 2013), were previously understood to be non-detections for class II methanol maser emission in the Methanol Multi-Beam (MMB) Survey (Caswell et al. 2010). Class II methanol masers are a well-known signpost of massive star forming regions and may be utilized to probe their relatively poorly understood formation. It is possible that these non-detections are simply weak masers that are potentially associated with a younger evolutionary phase of MYSOs, as hypothesized by Olmi et al. (2014). The sources were chosen to sample various stages of evolution, having similar 21 to 8 micron flux ratios and bolometric luminosities as other MYSOs with previous class II methanol maser detections. We observed all four MYSOs with ATCA (~2" resolution) at 10 times deeper sensitivity than previously obtained with the MMB survey and with a spectral resolution of 0.087 km s⁻¹. The raw data were reduced using the program Miriad (Sault et al. 1995) and deconvolved using the program CASA (McMullin et al. 2007). We determine that one of the four observed MYSOs harbors a weak class II methanol maser. We discuss the possibility of sensitivity limitations on the remaining sources as well as environmental and evolutionary differences between the sources.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lang, Yaoguo; Xu, Shidong; Ma, Jianqun
2014-07-18
Highlights: • MiR-429 expression is upregulated in non-small cell lung cancer (NSCLC). • MiR-429 inhibits PTEN, RASSF8 and TIMP2 expression. • MiR-429 promotes metastasis and proliferation. • We report important regulatory mechanisms involved in NSCLC progression. • MiR-429 is a potential therapeutic target and diagnostic marker. - Abstract: Lung cancer is the major cause of cancer death globally. MicroRNAs are evolutionarily conserved small noncoding RNAs that are critical for the regulation of gene expression. Aberrant expression of microRNA (miRNA) has been implicated in cancer initiation and progression. In this study, we demonstrated that the expression of miR-429 is often upregulated in non-small cell lung cancer (NSCLC) compared with normal lung tissues, and its expression level is also increased in NSCLC cell lines compared with normal lung cells. Overexpression of miR-429 in A549 NSCLC cells significantly promoted cell proliferation, migration and invasion, whereas inhibition of miR-429 suppressed these effects. Furthermore, we demonstrated that miR-429 down-regulates PTEN, RASSF8 and TIMP2 expression by directly targeting the 3′-untranslated regions of these target genes. Taken together, our results suggest that miR-429 plays an important role in promoting the proliferation and metastasis of NSCLC cells and is a potential target for NSCLC therapy.
Long-term effects of inhaled budesonide on screening-detected lung nodules
Veronesi, G.; Lazzeroni, M.; Szabo, E.; Brown, P. H.; DeCensi, A.; Guerrieri-Gonzaga, A.; Bellomi, M.; Radice, D.; Grimaldi, M. C.; Spaggiari, L.; Bonanni, B.
2015-01-01
Background A previously carried out randomized phase IIb, placebo-controlled trial of 1 year of inhaled budesonide, which was nested in a lung cancer screening study, showed that non-solid and partially solid lung nodules detected by low-dose computed tomography (LDCT), and not immediately suspicious for lung cancer, tended to regress. Because some of these nodules may be slow-growing adenocarcinoma precursors, we evaluated long-term outcomes (after stopping the 1-year intervention) by annual LDCT. Patients and methods We analyzed the evolution of target and non-target trial nodules detected by LDCT in the budesonide and placebo arms up to 5 years after randomization. The numbers and characteristics of lung cancers diagnosed during follow-up were also analyzed. Results The mean maximum diameter of non-solid nodules reduced significantly (from 5.03 mm at baseline to 2.61 mm after 5 years) in the budesonide arm; there was no significant size change in the placebo arm. The mean diameter of partially solid lesions also decreased significantly, but only by 0.69 mm. The size of solid nodules did not change. Neither the number of new lesions nor the number of lung cancers differed in the two arms. Conclusions Inhaled budesonide given for 1 year significantly decreased the size of non-solid nodules detected by screening LDCT after 5 years. This is of potential importance since some of these nodules may progress slowly to adenocarcinoma. However, further studies are required to assess clinical implications. Clinical trial number NCT01540552. PMID:25672894
[Lung transplantation in pulmonary fibrosis and other interstitial lung diseases].
Berastegui, Cristina; Monforte, Victor; Bravo, Carlos; Sole, Joan; Gavalda, Joan; Tenório, Luis; Villar, Ana; Rochera, M Isabel; Canela, Mercè; Morell, Ferran; Roman, Antonio
2014-09-15
Interstitial lung disease (ILD) is the second indication for lung transplantation (LT) after emphysema. The aim of this study is to review the results of LT for ILD in Hospital Vall d'Hebron (Barcelona, Spain). We retrospectively studied 150 patients, 87 (58%) men, mean age 48 (r: 20-67) years, between August 1990 and January 2010. One hundred and four (69%) were single lung transplants (SLT) and 46 (31%) bilateral-lung transplants (BLT). The postoperative diagnoses were: 94 (63%) usual interstitial pneumonia, 23 (15%) nonspecific interstitial pneumonia, 11 (7%) unclassifiable interstitial pneumonia and 15% miscellaneous. We describe the functional results, complications and survival. The actuarial survival was 87, 70 and 53% at 1, 3 and 5 years, respectively. The most frequent causes of death included early graft dysfunction and development of chronic rejection in the form of bronchiolitis obliterans (BOS). The mean postoperative increase in forced vital capacity and forced expiratory volume in the first second (FEV1) was similar in SLT and BLT. The best FEV1 was reached after 10 (r: 1-36) months. Sixteen percent of patients returned to work. At some point during follow-up, acute rejection was diagnosed histologically in 53 (35%) patients. The prevalence of BOS among survivors was 20% per year, 45% at 3 years and 63% at 5 years. LT is the best treatment option currently available for ILD in which medical treatment has failed. Copyright © 2013 Elsevier España, S.L.U. All rights reserved.
DNA methylation intratumor heterogeneity in localized lung adenocarcinomas.
Quek, Kelly; Li, Jun; Estecio, Marcos; Zhang, Jiexin; Fujimoto, Junya; Roarty, Emily; Little, Latasha; Chow, Chi-Wan; Song, Xingzhi; Behrens, Carmen; Chen, Taiping; William, William N; Swisher, Stephen; Heymach, John; Wistuba, Ignacio; Zhang, Jianhua; Futreal, Andrew; Zhang, Jianjun
2017-03-28
Cancers are composed of cells with distinct molecular and phenotypic features within a given tumor, a phenomenon termed intratumor heterogeneity (ITH). Previously, we have demonstrated genomic ITH in localized lung adenocarcinomas; however, the nature of methylation ITH in lung cancers has not been well investigated. In this study, we generated methylation profiles of 48 spatially separated tumor regions from 11 localized lung adenocarcinomas and their matched normal lung tissues using the Illumina Infinium Human Methylation 450K BeadChip array. We observed methylation ITH within the same tumors, but to a much lesser extent than inter-individual heterogeneity. On average, 25% of all differentially methylated probes compared to matched normal lung tissues were shared by all regions from the same tumors. This is in contrast to somatic mutations, of which approximately 77% were shared events amongst all regions of individual tumors, suggesting that while the majority of somatic mutations were early clonal events, the tumor-specific DNA methylation might be associated with later branched evolution of these 11 tumors. Furthermore, our data showed that a higher extent of DNA methylation ITH was associated with larger tumor size (average Euclidean distance of 35.64 (>3 cm, median size) versus 27.24 (≤3 cm), p = 0.014), advanced age (average Euclidean distance of 34.95 (above 65) versus 28.06 (below 65), p = 0.046) and increased risk of postsurgical recurrence (average Euclidean distance of 35.65 (relapsed patients) versus 29.03 (patients without relapse), p = 0.039).
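The ITH summary statistic quoted here, the average pairwise Euclidean distance between regional methylation profiles, is straightforward to compute; the function name and the assumption of a regions-by-probes matrix are illustrative:

    import numpy as np
    from itertools import combinations

    def mean_pairwise_distance(beta):
        """Average Euclidean distance between region methylation profiles.

        beta : (regions x probes) matrix of methylation beta values
        """
        pairs = combinations(range(beta.shape[0]), 2)
        dists = [np.linalg.norm(beta[i] - beta[j]) for i, j in pairs]
        return float(np.mean(dists))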
In-situ laser ultrasonic measurement of the hcp to bcc transformation in commercially pure titanium
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shinbine, A., E-mail: alyssa.shinbine@gmail.com; Garcin, T.; Sinclair, C.
2016-07-15
Using a novel in-situ laser ultrasonic technique, the evolution of longitudinal velocity was used to measure the α − β transformation during cyclic heating and cooling in commercially pure titanium. In order to quantify the transformation kinetics, it is shown that changes in texture cannot be ignored. This is particularly important in the case of titanium where significant grain growth occurs in the β-phase, leading to the ultrasonic wave sampling a decreasing number of grains on each thermal treatment cycle. Electron backscatter diffraction measurements made postmortem in the region where the ultrasonic pulse traveled were used to obtain an estimate of such local texture and grain size changes. An analysis technique for including the anisotropy of wave velocity depending on local texture is presented and shown to give self-consistent results for the transformation kinetics. - Highlights: • Laser ultrasound and EBSD interpret the hcp/bcc phase transformation in cp-Ti. • Grain growth and texture produced variation in velocity during similar treatments. • Texture was deconvoluted from phase addition to obtain transformation kinetics.
Field dependence of nonreciprocal magnons in chiral MnSi
NASA Astrophysics Data System (ADS)
Weber, T.; Waizner, J.; Tucker, G. S.; Georgii, R.; Kugler, M.; Bauer, A.; Pfleiderer, C.; Garst, M.; Böni, P.
2018-06-01
Spin waves in chiral magnetic materials are strongly influenced by the Dzyaloshinskii-Moriya interaction, resulting in intriguing phenomena like nonreciprocal magnon propagation and magnetochiral dichroism. Here, we study the nonreciprocal magnon spectrum of the archetypical chiral magnet MnSi and its evolution as a function of magnetic field covering the field-polarized and conical helix phase. Using inelastic neutron scattering, the magnon energies and their spectral weights are determined quantitatively after deconvolution with the instrumental resolution. In the field-polarized phase the imaginary part of the dynamical susceptibility χ″(ε, q) is shown to be asymmetric with respect to wave vectors q longitudinal to the applied magnetic field H, which is a hallmark of chiral magnetism. In the helimagnetic phase, χ″(ε, q) becomes increasingly symmetric with decreasing H due to the formation of helimagnon bands and the activation of additional spin-flip and non-spin-flip scattering channels. The neutron spectra are in excellent quantitative agreement with the low-energy theory of cubic chiral magnets with a single fitting parameter being the damping rate of spin waves.
NASA Astrophysics Data System (ADS)
Vijselaar, Wouter; Westerik, Pieter; Veerbeek, Janneke; Tiggelaar, Roald M.; Berenschot, Erwin; Tas, Niels R.; Gardeniers, Han; Huskens, Jurriaan
2018-03-01
A solar-driven photoelectrochemical cell provides a promising approach to enable the large-scale conversion and storage of solar energy, but requires the use of Earth-abundant materials. Earth-abundant catalysts for the hydrogen evolution reaction, for example nickel-molybdenum (Ni-Mo), are generally opaque and require high mass loading to obtain high catalytic activity, which in turn leads to parasitic light absorption for the underlying photoabsorber (for example silicon), thus limiting production of hydrogen. Here, we show the fabrication of a highly efficient photocathode by spatially and functionally decoupling light absorption and catalytic activity. Varying the fraction of catalyst coverage over the microwires, and the pitch between the microwires, makes it possible to deconvolute the contributions of catalytic activity and light absorption to the overall device performance. This approach provided a silicon microwire photocathode that exhibited a near-ideal short-circuit photocurrent density of 35.5 mA cm⁻², a photovoltage of 495 mV and a fill factor of 62% under AM 1.5G illumination, resulting in an ideal regenerative cell efficiency of 10.8%.
Physical Theory of Voltage Fade in Lithium- and Manganese-Rich Transition Metal Oxides
Rinaldo, Steven G.; Gallagher, Kevin G.; Long, Brandon R.; ...
2015-03-04
Lithium- and manganese-rich (LMR) transition metal oxide cathodes are of interest for lithium-ion battery applications due to their increased energy density and decreased cost. However, the advantages in energy density and cost are offset, in part, due to the phenomena of voltage fade. Specifically, the voltage profiles (voltage as a function of capacity) of LMR cathodes transform from a high energy configuration to a lower energy configuration as they are repeatedly charged (Li removed) and discharged (Li inserted). Here, we propose a physical model of voltage fade that accounts for the emergence of a low voltage Li phase due to the introduction of transition metal ion defects within a parent Li phase. The phenomenological model was recast in a general form and experimental LMR charge profiles were deconvoluted to extract the evolutionary behavior of various components of LMR capacity profiles. Evolution of the voltage fade component was found to follow a universal growth curve with a maximal voltage fade capacity of ≈ 20% of the initial total capacity.
International review of cytology. Volume 106. A survey of cell biology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bourne, G.H.; Jeon, K.W.; Friedlander, M.
1987-01-01
Contents: Morphology and Cytochemistry of the Endocrine Epithelial System in the Lung; Intrinsic Nerve Plexus of Mammalian Heart; Morphological Basis of Cardiac Rhythmical Activity. Structural and Functional Evolution of Gonadotropin-Releasing Hormone; Excitons and Solitons in Molecular Systems; The Centrosome and Its Role in the Organization of Microtubules. Each chapter includes references. Index.
The Evolutional History of Electromagnetic Navigation Bronchoscopy: State of the Art.
Mehta, Atul C; Hood, Kristin L; Schwarz, Yehuda; Solomon, Stephen B
2018-04-30
Electromagnetic navigation bronchoscopy (ENB) has come a long way from the early roots of electromagnetic theory. Current ENB devices have the potential to change the way lung cancer is detected and treated. This paper provides an overview of the history, current state, and future of ENB. Copyright © 2018. Published by Elsevier Inc.
[Severe metabolic alkalosis following hypokalemia from a paraneoplastic Cushing syndrome].
Dubé, L; Daenen, S; Kouatchet, A; Soltner, C; Alquier, P
2001-12-01
Metabolic alkalosis is frequently observed in critically ill patients. Etiologies are numerous but endocrine causes are rare. We report the case of a patient with severe respiratory insufficiency, metabolic alkalosis and hypokalemia. The outcome was fatal. Further explorations revealed an ectopic adrenocorticotropic hormone (ACTH) syndrome. The initial tumor was probably a small cell lung carcinoma.
Tracking the Evolution of Non-Small-Cell Lung Cancer.
Jamal-Hanjani, Mariam; Wilson, Gareth A; McGranahan, Nicholas; Birkbak, Nicolai J; Watkins, Thomas B K; Veeriah, Selvaraju; Shafi, Seema; Johnson, Diana H; Mitter, Richard; Rosenthal, Rachel; Salm, Max; Horswell, Stuart; Escudero, Mickael; Matthews, Nik; Rowan, Andrew; Chambers, Tim; Moore, David A; Turajlic, Samra; Xu, Hang; Lee, Siow-Ming; Forster, Martin D; Ahmad, Tanya; Hiley, Crispin T; Abbosh, Christopher; Falzon, Mary; Borg, Elaine; Marafioti, Teresa; Lawrence, David; Hayward, Martin; Kolvekar, Shyam; Panagiotopoulos, Nikolaos; Janes, Sam M; Thakrar, Ricky; Ahmed, Asia; Blackhall, Fiona; Summers, Yvonne; Shah, Rajesh; Joseph, Leena; Quinn, Anne M; Crosbie, Phil A; Naidu, Babu; Middleton, Gary; Langman, Gerald; Trotter, Simon; Nicolson, Marianne; Remmen, Hardy; Kerr, Keith; Chetty, Mahendran; Gomersall, Lesley; Fennell, Dean A; Nakas, Apostolos; Rathinam, Sridhar; Anand, Girija; Khan, Sajid; Russell, Peter; Ezhil, Veni; Ismail, Babikir; Irvin-Sellers, Melanie; Prakash, Vineet; Lester, Jason F; Kornaszewska, Malgorzata; Attanoos, Richard; Adams, Haydn; Davies, Helen; Dentro, Stefan; Taniere, Philippe; O'Sullivan, Brendan; Lowe, Helen L; Hartley, John A; Iles, Natasha; Bell, Harriet; Ngai, Yenting; Shaw, Jacqui A; Herrero, Javier; Szallasi, Zoltan; Schwarz, Roland F; Stewart, Aengus; Quezada, Sergio A; Le Quesne, John; Van Loo, Peter; Dive, Caroline; Hackshaw, Allan; Swanton, Charles
2017-06-01
Among patients with non-small-cell lung cancer (NSCLC), data on intratumor heterogeneity and cancer genome evolution have been limited to small retrospective cohorts. We wanted to prospectively investigate intratumor heterogeneity in relation to clinical outcome and to determine the clonal nature of driver events and evolutionary processes in early-stage NSCLC. In this prospective cohort study, we performed multiregion whole-exome sequencing on 100 early-stage NSCLC tumors that had been resected before systemic therapy. We sequenced and analyzed 327 tumor regions to define evolutionary histories, obtain a census of clonal and subclonal events, and assess the relationship between intratumor heterogeneity and recurrence-free survival. We observed widespread intratumor heterogeneity for both somatic copy-number alterations and mutations. Driver mutations in EGFR, MET, BRAF, and TP53 were almost always clonal. However, heterogeneous driver alterations that occurred later in evolution were found in more than 75% of the tumors and were common in PIK3CA and NF1 and in genes that are involved in chromatin modification and DNA damage response and repair. Genome doubling and ongoing dynamic chromosomal instability were associated with intratumor heterogeneity and resulted in parallel evolution of driver somatic copy-number alterations, including amplifications in CDK4, FOXA1, and BCL11A. Elevated copy-number heterogeneity was associated with an increased risk of recurrence or death (hazard ratio, 4.9; P=4.4×10⁻⁴), which remained significant in multivariate analysis. Intratumor heterogeneity mediated through chromosome instability was associated with an increased risk of recurrence or death, a finding that supports the potential value of chromosome instability as a prognostic predictor. (Funded by Cancer Research UK and others; TRACERx ClinicalTrials.gov number, NCT01888601.)
Hernández-Porras, Isabel; López, Icíar Paula; De Las Rivas, Javier; Pichel, José García
2013-01-01
Background Insulin-like Growth Factor 1 (IGF1) is a multifunctional regulator of somatic growth and development throughout evolution. IGF1 signaling through IGF type 1 receptor (IGF1R) controls cell proliferation, survival and differentiation in multiple cell types. IGF1 deficiency in mice disrupts lung morphogenesis, causing altered prenatal pulmonary alveologenesis. Nevertheless, little is known about the cellular and molecular basis of IGF1 activity during lung development. Methods/Principal Findings Prenatal Igf1−/− mutant mice with a C57Bl/6J genetic background displayed severe disproportional lung hypoplasia, leading to lethal neonatal respiratory distress. Immuno-histological analysis of their lungs showed a thickened mesenchyme, alterations in extracellular matrix deposition, thinner smooth muscles and dilated blood vessels, which indicated immature and delayed distal pulmonary organogenesis. Transcriptomic analysis of Igf1−/− E18.5 lungs using RNA microarrays identified deregulated genes related to vascularization, morphogenesis and cellular growth, and to MAP-kinase, Wnt and cell-adhesion pathways. Up-regulation of immunity-related genes was verified by an increase in inflammatory markers. Increased expression of Nfib and reduced expression of Klf2, Egr1 and Ctgf regulatory proteins as well as activation of ERK2 MAP-kinase were corroborated by Western blot. Among IGF-system genes only IGFBP2 revealed a reduction in mRNA expression in mutant lungs. Immuno-staining patterns for IGF1R and IGF2, similar in both genotypes, correlated to alterations found in specific cell compartments of Igf1−/− lungs. IGF1 addition to Igf1−/− embryonic lungs cultured ex vivo increased airway septa remodeling and distal epithelium maturation, processes accompanied by up-regulation of Nfib and Klf2 transcription factors and Cyr61 matricellular protein. Conclusions/Significance We demonstrated the functional tissue specific implication of IGF1 on fetal lung development in mice. Results revealed novel target genes and gene networks mediators of IGF1 action on pulmonary cellular proliferation, differentiation, adhesion and immunity, and on vascular and distal epithelium maturation during prenatal lung development. PMID:24391734
Optimal 2D-SIM reconstruction by two filtering steps with Richardson-Lucy deconvolution.
Perez, Victor; Chang, Bo-Jui; Stelzer, Ernst Hans Karl
2016-11-16
Structured illumination microscopy relies on reconstruction algorithms to yield super-resolution images. Artifacts can arise in the reconstruction and affect the image quality. Current reconstruction methods involve a parametrized apodization function and a Wiener filter. Empirically tuning the parameters in these functions can minimize artifacts, but such an approach is subjective and produces volatile results. We present a robust and objective method that yields optimal results by two straightforward filtering steps with Richardson-Lucy-based deconvolutions. We provide a resource to identify artifacts in 2D-SIM images by analyzing two main reasons for artifacts, out-of-focus background and a fluctuating reconstruction spectrum. We show how the filtering steps improve images of test specimens, microtubules, yeast and mammalian cells.
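As a concrete illustration of the Richardson-Lucy iteration that the two filtering steps above build on, here is a minimal NumPy sketch of the classic algorithm; the function name, flat initialization, and iteration count are illustrative assumptions, not the authors' released code.

```python
# Minimal Richardson-Lucy deconvolution sketch (assumes a float image
# and a normalized PSF); illustrative, not the paper's implementation.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=30, eps=1e-12):
    estimate = np.full(image.shape, image.mean())   # flat starting guess
    psf_mirror = psf[::-1, ::-1]                    # flipped PSF = adjoint
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / (blurred + eps)             # data / current model
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```

Each multiplicative update keeps the estimate non-negative, which is one reason RL-based filtering suits photon-count images.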
Ströhl, Florian; Kaminski, Clemens F
2015-01-16
We demonstrate the reconstruction of images obtained by multifocal structured illumination microscopy, MSIM, using a joint Richardson-Lucy, jRL-MSIM, deconvolution algorithm, which is based on an underlying widefield image-formation model. The method is efficient in the suppression of out-of-focus light and greatly improves image contrast and resolution. Furthermore, it is particularly well suited for the processing of noise corrupted data. The principle is verified on simulated as well as experimental data and a comparison of the jRL-MSIM approach with the standard reconstruction procedure, which is based on image scanning microscopy, ISM, is made. Our algorithm is efficient and freely available in a user friendly software package.
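A hedged sketch of the joint Richardson-Lucy idea: each raw view has its own forward model (here simply a different PSF), and the update averages the per-view multiplicative corrections. This follows the generic joint-RL formulation; names and the averaging choice are assumptions, not the published jRL-MSIM package.

```python
# Joint Richardson-Lucy over several views; illustrative sketch only.
import numpy as np
from scipy.signal import fftconvolve

def joint_rl(views, psfs, n_iter=25, eps=1e-12):
    estimate = np.full(views[0].shape, views[0].mean())
    for _ in range(n_iter):
        correction = np.zeros_like(estimate)
        for d, psf in zip(views, psfs):
            model = fftconvolve(estimate, psf, mode="same")
            correction += fftconvolve(d / (model + eps),
                                      psf[::-1, ::-1], mode="same")
        estimate *= correction / len(views)   # average per-view corrections
    return estimate
```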
Data matching for free-surface multiple attenuation by multidimensional deconvolution
NASA Astrophysics Data System (ADS)
van der Neut, Joost; Frijlink, Martijn; van Borselen, Roald
2012-09-01
A common strategy for surface-related multiple elimination of seismic data is to predict multiples by a convolutional model and subtract these adaptively from the input gathers. Interfering multiples and primaries can pose problems for this approach. Removing multiples by multidimensional deconvolution (MDD), an inversion approach, does not suffer from these problems. However, it requires the data to be consistent, which is often not the case, especially at interpolated near offsets. A novel method is proposed to improve data consistency prior to inversion. This is done by backpropagating first-order multiples with a time-gated reference primary event and matching these with early primaries in the input gather. After data matching, multiple elimination by MDD can be applied with a deterministic inversion scheme.
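To make the MDD step concrete, here is a minimal frequency-sliced sketch: the multiple-free response X is obtained by damped least-squares inversion of P = P0 X at each frequency. Array shapes and the damping parameter are illustrative assumptions, not the authors' production code.

```python
# Multidimensional deconvolution as regularized per-frequency inversion.
import numpy as np

def mdd(P, P0, eps=1e-3):
    """P, P0: (nfreq, nrec, nsrc) frequency-domain data arrays."""
    nfreq, nrec, nsrc = P.shape
    X = np.zeros((nfreq, nsrc, nsrc), dtype=complex)
    for f in range(nfreq):
        A = P0[f]
        lhs = A.conj().T @ A + eps * np.eye(nsrc)  # damped normal equations
        X[f] = np.linalg.solve(lhs, A.conj().T @ P[f])
    return X
```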
fDOT for in vivo follow-up of tumor development in mice lungs
NASA Astrophysics Data System (ADS)
Koenig, Anne; Hervé, Lionel; Da Silva, Anabela; Dinten, Jean-Marc; Boutet, Jérôme; Berger, Michel; Josserand, Véronique; Coll, Jean-Luc; Peltié, Philippe; Rizo, Philippe
2007-07-01
This paper presents in vivo experiments conducted on mice bearing murine mammary tumors. In order to reconstruct the fluorescence yield even in highly attenuating and heterogeneous regions such as the lungs, we developed an fDOT reconstruction method that first corrects the light propagation model for optical heterogeneities by using the transmitted excitation light measurements. The same approach is also designed to allow working without immersing the mouse in an adaptation liquid. The 3D fluorescence map is then reconstructed from the emitted fluorescence signal and the corrected propagation model by an ART (Algebraic Reconstruction Technique) algorithm. The system's ability to reconstruct the fluorescence distribution in the presence of highly attenuating objects was validated on phantoms containing a fluorescent absorbent inclusion. A study was conducted on mice to follow the lungs at different stages of tumor development. The mice were imaged after intravenous injection of a cancer-specific fluorescent marker. A control experiment was conducted in parallel on healthy mice to ensure that the repeated injections of fluorophore did not induce a spurious fluorescence distribution. These results validate the performance of our system for studying lung tumor evolution in small animals: detection and localization of the fluorophore uptake tracks tumor development.
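The ART step named above is, generically, the Kaczmarz iteration for a linear system A x = b (sensitivity matrix times fluorescence map equals measurements). The sketch below shows that iteration under illustrative names and a relaxation parameter; it is not the authors' fDOT code.

```python
# Kaczmarz-style ART sweeps for A x = b; illustrative sketch.
import numpy as np

def art(A, b, n_sweeps=20, relax=0.5):
    x = np.zeros(A.shape[1])
    row_norms = (A ** 2).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):          # one projection per equation
            if row_norms[i] == 0:
                continue
            residual = b[i] - A[i] @ x
            x += relax * residual / row_norms[i] * A[i]
    return x
```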
Recent advances on the functional and evolutionary morphology of the amniote respiratory apparatus.
Lambertz, Markus
2016-02-01
Increased organismic complexity in metazoans was achieved via the specialization of certain parts of the body involved in different faculties (structure-function complexes). One of the most basic metabolic demands of animals in general is a sufficient supply of all tissues with oxygen. Specialized structures for gas exchange (and transport) consequently evolved many times and in great variety among bilaterians. This review focuses on some of the latest advancements that morphological research has added to our understanding of how the respiratory apparatus of the primarily terrestrial vertebrates (amniotes) works and how it evolved. Two main components of the respiratory apparatus, the lungs as the "exchanger" and the ventilatory apparatus as the "active pump," are the focus of this paper. Specific questions related to the exchanger concern the structure of the lungs of the first amniotes and the efficiency of structurally simple snake lungs in health and disease, as well as secondary functions of the lungs in heat exchange during the evolution of sauropod dinosaurs. With regard to the active pump, I discuss how the unique ventilatory mechanism of turtles evolved and how understanding the avian ventilatory strategy affects animal welfare issues in the poultry industry. © 2016 New York Academy of Sciences.
Pathogenesis of hyperinflation in chronic obstructive pulmonary disease
Gagnon, Philippe; Guenette, Jordan A; Langer, Daniel; Laviolette, Louis; Mainguy, Vincent; Maltais, François; Ribeiro, Fernanda; Saey, Didier
2014-01-01
Chronic obstructive pulmonary disease (COPD) is a preventable and treatable lung disease characterized by airflow limitation that is not fully reversible. In a significant proportion of patients with COPD, reduced lung elastic recoil combined with expiratory flow limitation leads to lung hyperinflation during the course of the disease. Development of hyperinflation during the course of COPD is insidious. Dynamic hyperinflation is highly prevalent in the advanced stages of COPD, and new evidence suggests that it also occurs in many patients with mild disease, independently of the presence of resting hyperinflation. Hyperinflation is clinically relevant for patients with COPD mainly because it contributes to dyspnea, exercise intolerance, skeletal muscle limitations, morbidity, and reduced physical activity levels associated with the disease. Various pharmacological and nonpharmacological interventions have been shown to reduce hyperinflation and delay the onset of ventilatory limitation in patients with COPD. The aim of this review is to address the more recent literature regarding the pathogenesis, assessment, and management of both static and dynamic lung hyperinflation in patients with COPD. We also address the influence of biological sex and obesity and new developments in our understanding of hyperinflation in patients with mild COPD and its evolution during progression of the disease. PMID:24600216
Kim, Hyun Seok; Mendiratta, Saurabh; Kim, Jiyeon; Pecot, Chad Victor; Larsen, Jill E; Zubovych, Iryna; Seo, Bo Yeun; Kim, Jimi; Eskiocak, Banu; Chung, Hannah; McMillan, Elizabeth; Wu, Sherry; De Brabander, Jef; Komurov, Kakajan; Toombs, Jason E; Wei, Shuguang; Peyton, Michael; Williams, Noelle; Gazdar, Adi F; Posner, Bruce A; Brekken, Rolf A; Sood, Anil K; Deberardinis, Ralph J; Roth, Michael G; Minna, John D; White, Michael A
2013-10-24
Context-specific molecular vulnerabilities that arise during tumor evolution represent an attractive intervention target class. However, the frequency and diversity of somatic lesions detected among lung tumors can confound efforts to identify these targets. To confront this challenge, we have applied parallel screening of chemical and genetic perturbations within a panel of molecularly annotated NSCLC lines to identify intervention opportunities tightly linked to molecular response indicators predictive of target sensitivity. Anchoring this analysis on a matched tumor/normal cell model from a lung adenocarcinoma patient identified three distinct target/response-indicator pairings that are represented with significant frequencies (6%-16%) in the patient population. These include NLRP3 mutation/inflammasome activation-dependent FLIP addiction, co-occurring KRAS and LKB1 mutation-driven COPI addiction, and selective sensitivity to a synthetic indolotriazine that is specified by a seven-gene expression signature. Target efficacies were validated in vivo, and mechanism-of-action studies informed generalizable principles underpinning cancer cell biology. Copyright © 2013 Elsevier Inc. All rights reserved.
The cell as the mechanistic basis for evolution.
Torday, J S
2015-01-01
The First Principles for Physiology originated in and emanate from the unicellular state of life. Viewing physiology as a continuum from unicellular to multicellular organisms provides fundamental insight to ontogeny and phylogeny as a functionally integral whole. Such mechanisms are most evident under conditions of physiologic stress; all of the molecular pathways that evolved in service to the vertebrate water-land transition aided and abetted the evolution of the vertebrate lung, for example. Reduction of evolution to cell biology has an important scientific feature—it is predictive. One implication of this perspective on evolution is the likelihood that it is the unicellular state that is actually the object of selection. By looking at the process of evolution from its unicellular origins, the causal relationships between genotype and phenotype are revealed, as are many other aspects of physiology and medicine that have remained anecdotal and counter-intuitive. Evolutionary development can best be considered as a cyclical, epigenetic, reiterative environmental assessment process, originating from the unicellular state, both forward and backward, to sustain and perpetuate unicellular homeostasis. © 2015 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Jo, J. A.; Fang, Q.; Papaioannou, T.; Qiao, J. H.; Fishbein, M. C.; Beseth, B.; Dorafshar, A. H.; Reil, T.; Baker, D.; Freischlag, J.; Marcu, L.
2006-02-01
This study introduces new methods of time-resolved laser-induced fluorescence spectroscopy (TR-LIFS) data analysis for tissue characterization. These analytical methods were applied for the detection of atherosclerotic vulnerable plaques. Upon pulsed nitrogen laser (337 nm, 1 ns) excitation, TR-LIFS measurements were obtained from carotid atherosclerotic plaque specimens (57 endarterectomy patients) at 492 distinct areas. The emission was both spectrally (360-600 nm range at 5 nm intervals) and temporally (0.3 ns resolution) resolved using a prototype clinically compatible fiber-optic catheter TR-LIFS apparatus. The TR-LIFS measurements were subsequently analyzed using a standard multiexponential deconvolution and a recently introduced Laguerre deconvolution technique. Based on their histopathology, the lesions were classified as early (thin intima), fibrotic (collagen-rich intima), and high-risk (thin cap over necrotic core and/or inflamed intima). Stepwise linear discriminant analysis (SLDA) was applied for lesion classification. Normalized spectral intensity values and Laguerre expansion coefficients (LEC) at discrete emission wavelengths (390, 450, 500 and 550 nm) were used as features for classification. The Laguerre-based SLDA classifier provided discrimination of high-risk lesions with high sensitivity (SE>81%) and specificity (SP>95%). Based on these findings, we believe that TR-LIFS information derived from the Laguerre expansion coefficients can provide a valuable additional dimension for the diagnosis of high-risk vulnerable atherosclerotic plaques.
NASA Astrophysics Data System (ADS)
Gal, M.; Reading, A. M.; Ellingsen, S. P.; Koper, K. D.; Burlacu, R.; Gibbons, S. J.
2016-07-01
Microseisms in the 2-10 s period band are generated in deep oceans and near coastal regions. Microseisms from multiple sources commonly arrive at the same time at a given seismometer, so it is desirable to be able to measure multiple slowness vectors accurately. Popular ways to estimate the direction of arrival of ocean-induced microseisms are the conventional (fk) and adaptive (Capon) beamformers. These techniques give robust estimates but are limited in their resolution capabilities and hence do not always detect all arrivals. One of the limiting factors in determining direction of arrival with seismic arrays is the array response, which can strongly influence the estimation of weaker sources. In this work, we aim to improve the resolution for weaker sources and evaluate the performance of two deconvolution algorithms, Richardson-Lucy deconvolution and a new implementation of CLEAN-PSF. The algorithms are tested with three arrays of different aperture (ASAR, WRA and NORSAR), using 1 month of real data each, and compared with the conventional approaches. We find an improvement over conventional methods from both algorithms, with the best performance from CLEAN-PSF. We then extend the CLEAN-PSF framework to three components (3C) and evaluate 1 yr of data from the Pilbara Seismic Array in northwest Australia. The 3C CLEAN-PSF analysis is capable of resolving a previously undetected Sn phase.
Microseismic source locations with deconvolution migration
NASA Astrophysics Data System (ADS)
Wu, Shaojiang; Wang, Yibo; Zheng, Yikang; Chang, Xu
2018-03-01
Identifying and locating microseismic events are critical problems in hydraulic fracturing monitoring for unconventional resource exploration. In contrast to active seismic data, microseismic data are usually recorded with unknown source excitation time and source location. In this study, we introduce deconvolution migration by combining deconvolution interferometry with interferometric cross-correlation migration (CCM). This method avoids the need for the source excitation time and enhances both the spatial resolution and the robustness by eliminating the squared source-wavelet term from CCM. The proposed algorithm consists of three steps: (1) generate the virtual gathers by deconvolving the master trace with all other traces in the microseismic gather to remove the unknown excitation time; (2) migrate the virtual gather to obtain a single image of the source location; and (3) stack all of these images together to obtain the final estimate of the source location. We test the proposed method on complex synthetic and field data sets from surface hydraulic fracturing monitoring and compare the results with those obtained by interferometric CCM. The results demonstrate that the proposed method obtains a 50 per cent higher spatial-resolution image of the source location and a more robust localization with smaller errors, especially in the presence of velocity model errors. This method is also beneficial for source mechanism inversion and global seismology applications.
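Step (1) amounts to spectral division of each trace by the master trace, which cancels the unknown excitation time and removes the squared source wavelet. A hedged sketch; the water-level stabilizer is my assumption, not necessarily the authors' exact regularization.

```python
# Deconvolution interferometry: build a virtual gather by dividing each
# trace's spectrum by the master trace's spectrum. Illustrative sketch.
import numpy as np

def deconvolve_gather(traces, master, water_level=1e-2):
    """traces: (ntrace, nt) array; master: (nt,) master trace."""
    T = np.fft.rfft(traces, axis=1)
    M = np.fft.rfft(master)
    denom = np.abs(M) ** 2
    denom = np.maximum(denom, water_level * denom.max())  # stabilize division
    virtual = T * np.conj(M) / denom
    return np.fft.irfft(virtual, n=traces.shape[1], axis=1)
```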
Temporal and spatial binning of TCSPC data to improve signal-to-noise ratio and imaging speed
NASA Astrophysics Data System (ADS)
Walsh, Alex J.; Beier, Hope T.
2016-03-01
Time-correlated single photon counting (TCSPC) is the most robust method for fluorescence lifetime imaging using laser scanning microscopes. However, TCSPC is inherently slow, making it ineffective for capturing rapid events: because at most one photon is recorded per laser pulse, acquisition times are long, and fluorescence emission efficiency must be kept low to avoid biasing measurements toward short lifetimes. Furthermore, thousands of photons per pixel are required for traditional instrument-response deconvolution and estimation of fluorescence lifetime exponential decays. Instrument-response deconvolution and fluorescence exponential decay estimation can be performed in several ways, including iterative least-squares minimization and Laguerre deconvolution. This paper compares the limitations and accuracy of these fluorescence decay analysis techniques in estimating double exponential decays across many data characteristics, including various lifetime values, lifetime component weights, signal-to-noise ratios, and numbers of photons detected. Furthermore, techniques to improve data fitting, including binning data temporally and spatially, are evaluated as methods to improve decay fits and reduce image acquisition time. Simulation results demonstrate that binning temporally to 36 or 42 time bins improves the accuracy of fits for low-photon-count data. Such a technique reduces the number of photons required for accurate component estimation when lifetime values are known, as for commercial fluorescent dyes and FRET experiments, and can improve imaging speed 10-fold.
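A minimal sketch of the temporal-binning idea: collapse a TCSPC histogram to 42 bins and fit a double exponential by least squares (the instrument response is ignored here for brevity). The file name, time window, and initial guesses are illustrative assumptions.

```python
# Rebin a TCSPC decay and fit a biexponential model; illustrative sketch.
import numpy as np
from scipy.optimize import curve_fit

def rebin(decay, n_bins=42):
    edges = np.linspace(0, decay.size, n_bins + 1, dtype=int)
    return np.add.reduceat(decay, edges[:-1])   # sum counts per bin

def biexp(t, a, tau1, tau2, frac):
    return a * (frac * np.exp(-t / tau1) + (1 - frac) * np.exp(-t / tau2))

decay = np.loadtxt("decay.txt")            # hypothetical photon histogram
binned = rebin(decay, 42)
t = (np.arange(42) + 0.5) * (10.0 / 42)    # bin centers, assuming a 10 ns window
popt, _ = curve_fit(biexp, t, binned, p0=[binned.max(), 0.5, 3.0, 0.5])
```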
Wellskins and slug tests: where's the bias?
NASA Astrophysics Data System (ADS)
Rovey, C. W.; Niemann, W. L.
2001-03-01
Pumping tests in an outwash sand at the Camp Dodge Site give hydraulic conductivities (K) approximately seven times greater than conventional slug tests in the same wells. To determine if this difference is caused by skin bias, we slug tested three sets of wells, each in a progressively greater stage of development. Results were analyzed with both the conventional Bouwer-Rice method and the deconvolution method, which quantifies the skin and eliminates its effects. In 12 undeveloped wells the average skin is +4.0, causing underestimation of conventional slug-test K (Bouwer-Rice method) by approximately a factor of 2 relative to the deconvolution method. In seven nominally developed wells the skin averages just +0.34, and the Bouwer-Rice method gives K within 10% of that calculated with the deconvolution method. The Bouwer-Rice K in this group is also within 5% of that measured by natural-gradient tracer tests at the same site. In 12 intensely developed wells the average skin is <-0.82, consistent with an average skin of -1.7 measured during single-well pumping tests. At this site the maximum possible skin bias is much smaller than the difference between slug and pumping-test Ks. Moreover, the difference in K persists even in intensely developed wells with negative skins. Therefore, positive wellskins do not cause the difference in K between pumping and slug tests at this site.
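For orientation, the conventional Bouwer-Rice estimate referred to above computes K from the head-recovery curve as K = r_c² ln(R_e/r_w) / (2 L_e) · (1/t) ln(y₀/y_t); it contains no skin term, which is why a positive skin biases it low. A worked sketch with invented well geometry:

```python
# Bouwer-Rice slug-test estimate with illustrative (assumed) geometry.
import numpy as np

r_c, r_w = 0.025, 0.05         # casing and well radii (m), assumed
L_e = 1.0                      # screen length (m), assumed
ln_Re_rw = 2.0                 # ln(Re/rw), from the Bouwer-Rice curves
y0, yt, t = 0.50, 0.05, 600.0  # initial/later displacement (m), time (s)

K = r_c**2 * ln_Re_rw / (2 * L_e) * np.log(y0 / yt) / t
print(f"K = {K:.2e} m/s")      # about 2.4e-06 m/s for these values
```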
Wear, Keith A
2014-04-01
In through-transmission interrogation of cancellous bone, two longitudinal pulses ("fast" and "slow" waves) may be generated. Fast and slow wave properties convey information about material and micro-architectural characteristics of bone. However, these properties can be difficult to assess when fast and slow wave pulses overlap in time and frequency domains. In this paper, two methods are applied to decompose signals into fast and slow waves: bandlimited deconvolution and modified least-squares Prony's method with curve-fitting (MLSP + CF). The methods were tested in plastic and Zerdine® samples that provided fast and slow wave velocities commensurate with velocities for cancellous bone. Phase velocity estimates were accurate to within 6 m/s (0.4%) (slow wave with both methods and fast wave with MLSP + CF) and 26 m/s (1.2%) (fast wave with bandlimited deconvolution). Midband signal loss estimates were accurate to within 0.2 dB (1.7%) (fast wave with both methods), and 1.0 dB (3.7%) (slow wave with both methods). Similar accuracies were found for simulations based on fast and slow wave parameter values published for cancellous bone. These methods provide sufficient accuracy and precision for many applications in cancellous bone such that experimental error is likely to be a greater limiting factor than estimation error.
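For readers unfamiliar with Prony analysis, the sketch below shows the textbook least-squares variant: estimate linear-prediction coefficients, take polynomial roots as the complex poles of the fast and slow waves, then fit amplitudes. This is the plain LS Prony method, not the authors' modified variant with curve-fitting.

```python
# Textbook least-squares Prony decomposition into p damped exponentials.
import numpy as np

def prony(x, p):
    """x: 1-D NumPy array of samples; returns (poles, amplitudes)."""
    N = len(x)
    # Linear prediction: x[n] = -(a1 x[n-1] + ... + ap x[n-p])
    A = np.column_stack([x[p - k - 1:N - k - 1] for k in range(p)])
    a = np.linalg.lstsq(A, -x[p:N], rcond=None)[0]
    poles = np.roots(np.concatenate(([1.0], a)))
    V = np.vander(poles, N, increasing=True).T   # rows are poles**n
    amps = np.linalg.lstsq(V, x.astype(complex), rcond=None)[0]
    return poles, amps
```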
Lunge feeding in early marine reptiles and fast evolution of marine tetrapod feeding guilds.
Motani, Ryosuke; Chen, Xiao-hong; Jiang, Da-yong; Cheng, Long; Tintori, Andrea; Rieppel, Olivier
2015-03-10
Traditional wisdom holds that biotic recovery from the end-Permian extinction was slow and gradual, and was not complete until the Middle Triassic. Here, we report that the evolution of marine predator feeding guilds, and their trophic structure, proceeded faster. Marine reptile lineages with unique feeding adaptations emerged during the Early Triassic (about 248 million years ago), including the enigmatic Hupehsuchus that possessed an unusually slender mandible. A new specimen of this genus reveals a well-preserved palate and mandible, which suggest that it was a rare lunge feeder as also occurs in rorqual whales and pelicans. The diversity of feeding strategies among Triassic marine tetrapods reached their peak in the Early Triassic, soon after their first appearance in the fossil record. The diet of these early marine tetrapods most likely included soft-bodied animals that are not preserved as fossils. Early marine tetrapods most likely introduced a new trophic mechanism to redistribute nutrients to the top 10 m of the sea, where the primary productivity is highest. Therefore, a simple recovery to a Permian-like trophic structure does not explain the biotic changes seen after the Early Triassic.
Nieto, Mary; Dicembrino, Manuela; Ferraz, Rubén; Romagnoli, Fernando; Giugno, Hilda; Ernst, Glenda; Siminovich, Monica; Botto, Hugo
2016-06-01
Alveolar proteinosis is a rare chronic lung disease, especially in children, characterized by abnormal accumulation of lipoproteins and surfactant derivatives in the intra-alveolar space, which severely reduces gas exchange. The idiopathic form constitutes over 90% of cases and is associated with production of autoimmune antibodies directed against the receptor for granulocyte-macrophage colony-stimulating factor. We present the case of a 5-year-old girl treated for atypical pneumonia with an unfavorable evolution due to persistent hypoxemia. The diagnosis was obtained through pathologic examination of a lung biopsy taken by thoracotomy; treatment was carried out by 17 bronchoscopic bronchopulmonary lavages, after which the patient showed marked clinical improvement. Sociedad Argentina de Pediatría.
Targeted therapies in non-small cell lung carcinoma: what have we achieved so far?
Houhou, Wissam
2013-01-01
The search for innovative therapeutic agents in non-small cell lung cancer (NSCLC) has witnessed a swift evolution. The number of targeted drugs that can improve patient outcomes with an acceptable safety profile is steadily increasing. In this review, we highlight current drugs that have already been approved or are under evaluation for the treatment of patients with NSCLC, either in monotherapy or combined therapy for both the first- and second-line settings. Experience with drugs targeting the vascular endothelial growth factor and its receptor, as well as the epidermal growth factor receptor is summarized. Moreover, we provide an overview of more novel targets in NSCLC and initial experience with the respective therapeutic agents. PMID:23858333
NASA Astrophysics Data System (ADS)
Merino, Agustin; Fonturbel, Maria T.; Omil, Beatriz; Chávez-Vergara, Bruno; Fernandez, Cristina; Garcia-Oliva, Felipe; Vega, Jose A.
2016-04-01
The design of emergency treatments for the rehabilitation of fire-affected soils requires a quick diagnosis to assess the degree of degradation. Because of its implication in erosion and subsequent soil evolution, the quality of soil organic matter (OM) plays a particularly important role. This paper presents a methodology that combines visual recognition of the severity of soil burning with simple analytical techniques to assess the degree of OM degradation. The content and quality of the OM were evaluated in litter and mineral soils using thermogravimetry-differential scanning calorimetry (DSC-TG), and the results were contrasted with 13C CP-MAS NMR. Two methodologies were tested for the thermal analysis: a) direct calculation of the areas Q1 (200-375 °C; labile OM), Q2 (375-475 °C; recalcitrant OM) and Q3 (475-550 °C), corresponding to three degrees of thermal stability; and b) deconvolution of the DSC curves, with each peak expressed as a fraction of the total DSC curve area. Additionally, P fractionation was performed following the Hedley sequential extraction method. The visually recognized severity levels corresponded to different degrees of SOM degradation. Although fire caused important SOM losses at moderate severities, changes in the quality of OM occurred only at higher severities. In addition, the labile organic P fraction decreased and the occluded inorganic P fraction increased in the high-severity soils. These changes affect OM-mediated processes, such as hydrophobicity and erosion, that are largely responsible for post-fire soil degradation. The strong correlations between the thermal parameters, the NMR regions and derived measurements such as hydrophobicity and aromaticity show the usefulness of this technique as a rapid diagnostic of soil degradation. The marked loss of polysaccharides and the transition to highly thermally resistant compounds, visible in the deconvoluted thermograms, would explain the changes in microbial activity and soil nutrient availability (basal respiration, microbial biomass, qCO2 and enzymatic activity), and would also have implications for hydrophobicity and the stability of soil aggregates, leading to the extreme erosion rates usually found in soils affected by higher severities.
NASA Astrophysics Data System (ADS)
Deng, S.; Levander, A.
2017-12-01
Almost half of the North American continental plate was formed by juvenile terrane accretion between 1.8 and 1.0 Ga. The suturing of juvenile crust in the east-central United States, which has received less attention, probably because of low station coverage before the deployment of the USArray Transportable Array, is therefore of great importance for understanding the evolution of the North American plate. The Yavapai province was formed by the accretion of juvenile crust during 1.8-1.7 Ga; its northeastern part is accreted to the Superior province along the Spirit Lake Tectonic Zone (SLTZ). During the period 1.7-1.6 Ga, the Mazatzal province, bounding the Yavapai province to the south, was added to Laurentia. Previous research has focused mainly on the southwestern Yavapai-Mazatzal boundary (Karlstrom et al. 2002; Magnani et al. 2004) and less on the northeastern area of interest here. The Granite-Rhyolite province is the product of a suturing event of juvenile arc crust recurring along the southeastern margin of Laurentia between 1.55 and 1.35 Ga, as shown by Nd model ages (Whitmeyer et al. 2007). We select Mw >= 5.5 teleseismic events with epicentral distances between 35° and 90° recorded by 300 available seismic stations in our study region. Receiver functions are calculated by water-level deconvolution in the frequency domain (Langston 1979) and iterative deconvolution in the time domain (Ligorria et al. 1999). The common conversion point (CCP) stacking method is then applied to the receiver functions to create a 3-D image volume by mapping the conversion points from the time domain into space (Levander and Miller 2012). Preliminary results show that the accretion of the tectonic provinces may have followed different models. Profiles of the CCP image volume will provide seismic evidence with which to model the suturing of the juvenile Yavapai, Mazatzal and Granite-Rhyolite crust, and hence important constraints for interpreting the growth of the North American plate. Updated results will be presented at the meeting.
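The water-level deconvolution named above (after Langston 1979) is a frequency-domain spectral division with a clipped denominator and a Gaussian low-pass. A hedged sketch; the water-level fraction and Gaussian width are illustrative parameters.

```python
# Water-level deconvolution of a receiver function; illustrative sketch.
import numpy as np

def water_level_rf(radial, vertical, dt, c=0.01, gauss_a=2.5):
    R = np.fft.rfft(radial)
    Z = np.fft.rfft(vertical)
    denom = np.abs(Z) ** 2
    denom = np.maximum(denom, c * denom.max())        # water level
    freq = np.fft.rfftfreq(len(radial), dt)
    gauss = np.exp(-(2 * np.pi * freq) ** 2 / (4 * gauss_a ** 2))
    return np.fft.irfft(R * np.conj(Z) / denom * gauss, n=len(radial))
```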
A deconvolution technique to correct deep images of galaxies from instrumental scattered light
NASA Astrophysics Data System (ADS)
Karabal, E.; Duc, P.-A.; Kuntschner, H.; Chanial, P.; Cuillandre, J.-C.; Gwyn, S.
2017-05-01
Deep imaging of the diffuse light that is emitted by stellar fine structures and outer halos around galaxies is often now used to probe their past mass assembly. Because the extended halos survive longer than the relatively fragile tidal features, they trace more ancient mergers. We use images that reach surface brightness limits as low as 28.5-29 mag arcsec⁻² (g-band) to obtain light and color profiles up to 5-10 effective radii of a sample of nearby early-type galaxies. These were acquired with MegaCam as part of the CFHT MATLAS large programme. These profiles may be compared to those produced using simulations of galaxy formation and evolution, once corrected for instrumental effects. Indeed they can be heavily contaminated by the scattered light caused by internal reflections within the instrument. In particular, the nucleus of a galaxy generates artificial flux in the outer halo, which has to be precisely subtracted. We present a deconvolution technique that removes these artificial halos using very large kernels. The technique, which is based on PyOperators, is more time-efficient than the model-convolution methods that are also used for this purpose, especially for galaxies with complex structures that are hard to model. Good knowledge of the point spread function (PSF), including its outer wings, is critical for the method. A database of MegaCam PSF models corresponding to different seeing conditions and bands was generated directly from the deep images. We show that the difference between the PSFs in different bands causes artificial changes in the color profiles, in particular a reddening of the outskirts of galaxies having a bright nucleus. The method is validated with a set of simulated images and applied to three representative test cases: NGC 3599, NGC 3489, and NGC 4274, two of which exhibit a prominent ghost halo that the method successfully removes. The library of PSFs (FITS files) is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/601/A86
Salas, Lucas A; Koestler, Devin C; Butler, Rondi A; Hansen, Helen M; Wiencke, John K; Kelsey, Karl T; Christensen, Brock C
2018-05-29
Genome-wide methylation arrays are powerful tools for assessing cell composition of complex mixtures. We compare three approaches to select reference libraries for deconvoluting neutrophil, monocyte, B-lymphocyte, natural killer, and CD4+ and CD8+ T-cell fractions based on blood-derived DNA methylation signatures assayed using the Illumina HumanMethylationEPIC array. The IDOL algorithm identifies a library of 450 CpGs, resulting in an average R² = 99.2 across cell types when applied to EPIC methylation data collected on artificial mixtures constructed from the above cell types. Of the 450 CpGs, 69% are unique to EPIC. This library has the potential to reduce unintended technical differences across array platforms.
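Once a reference library is chosen, projecting a sample onto it reduces to constrained least squares: find non-negative cell-type weights whose mixture best reproduces the sample's methylation values. A minimal sketch in that spirit, not the IDOL implementation itself:

```python
# Reference-based cell-mixture deconvolution via non-negative least squares.
import numpy as np
from scipy.optimize import nnls

def estimate_fractions(L, m):
    """L: (n_cpgs, n_celltypes) reference library; m: (n_cpgs,) sample."""
    w, _ = nnls(L, m)       # min ||L w - m|| subject to w >= 0
    return w / w.sum()      # normalize to cell-type proportions
```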
Strehl-constrained reconstruction of post-adaptive optics data and the Software Package AIRY, v. 6.1
NASA Astrophysics Data System (ADS)
Carbillet, Marcel; La Camera, Andrea; Deguignet, Jérémy; Prato, Marco; Bertero, Mario; Aristidi, Éric; Boccacci, Patrizia
2014-08-01
We first briefly present the last version of the Software Package AIRY, version 6.1, a CAOS-based tool which includes various deconvolution methods, accelerations, regularizations, super-resolution, boundary effects reduction, point-spread function extraction/extrapolation, stopping rules, and constraints in the case of iterative blind deconvolution (IBD). Then, we focus on a new formulation of our Strehl-constrained IBD, here quantitatively compared to the original formulation for simulated near-infrared data of an 8-m class telescope equipped with adaptive optics (AO), showing their equivalence. Next, we extend the application of the original method to the visible domain with simulated data of an AO-equipped 1.5-m telescope, testing also the robustness of the method with respect to the Strehl ratio estimation.
Lightfield super-resolution through turbulence
NASA Astrophysics Data System (ADS)
Trujillo-Sevilla, Juan M.; Fernández-Valdivia, Juan J.; Rodríguez-Ramos, Luis F.; Cárdenes, Óscar G.; Marichal-Hernández, José G.; Javidi, Bahram; Rodríguez-Ramos, José M.
2015-05-01
In this paper, we use information from the light field to obtain a distribution map of the wavefront phase. This distribution is associated with changes in refractive index that are relevant to the propagation of light through a heterogeneous or turbulent medium. Through measurement of the wavefront phase from a single shot, it is possible to deconvolve blurred images affected by the turbulence. If this deconvolution is applied to light fields obtained by plenoptic acquisition, the original optical resolution associated with the objective lens is restored; in effect, this is a kind of super-resolution technique that works properly even in the presence of turbulence. The wavefront phase can also be estimated from the defocused images associated with the light field: we present here preliminary results using this approach.
FTIR of binary lead borate glass: Structural investigation
NASA Astrophysics Data System (ADS)
Othman, H. A.; Elkholy, H. S.; Hager, I. Z.
2016-02-01
The glass samples were prepared according to the formula (100-x)B2O3 - xPbO, where x = 20-80 mol%, by the melt-quenching method. The density of the prepared samples was measured and the molar volume was calculated. IR spectra of the prepared samples were measured to investigate the glass structure and were deconvoluted using Gaussian curves at approximately the same frequencies. The deconvoluted data were used to study the effect of PbO content on all the structural borate groups. Structural parameters such as density, packing density, bond length and bond force constant were calculated theoretically and compared with the experimental results. The deviation between the experimental and theoretically calculated parameters reflects the dual role of PbO in the borate glass network.
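The curve deconvolution described above is, generically, a least-squares fit of a sum of Gaussian bands with roughly fixed centers. A hedged sketch; the file name, wavenumber axis, and initial band positions are illustrative assumptions, not values from the paper.

```python
# Fit an IR band envelope as a sum of Gaussians; illustrative sketch.
import numpy as np
from scipy.optimize import curve_fit

def gaussians(x, *params):      # params: repeated (amp, center, width)
    y = np.zeros_like(x)
    for a, c, w in zip(params[0::3], params[1::3], params[2::3]):
        y += a * np.exp(-((x - c) / w) ** 2)
    return y

wavenumber = np.linspace(400, 1600, 1200)        # cm^-1 axis, assumed
spectrum = np.loadtxt("ftir.txt")                # hypothetical band envelope
p0 = [1.0, 700, 40, 1.0, 1000, 60, 1.0, 1350, 80]  # assumed initial bands
popt, _ = curve_fit(gaussians, wavenumber, spectrum, p0=p0)
```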
Study of the Auger line shape of polyethylene and diamond
NASA Technical Reports Server (NTRS)
Dayan, M.; Pepper, S. V.
1984-01-01
The KVV Auger electron line shapes of carbon in polyethylene and diamond have been studied. The spectra were obtained in derivative form by electron beam excitation and were treated by background subtraction, integration and deconvolution to produce the intrinsic Auger line shape. Electron energy loss spectra provided the response function in the deconvolution procedure. The line shape from polyethylene is compared with spectra from linear alkanes and with a previous spectrum of Kelber et al. Both spectra are compared with the self-convolution of their full valence band densities of states and of their p-projected densities. The experimental spectra could not be understood in terms of existing theories, even when correlation effects are qualitatively taken into account according to the theories of Cini and Sawatzky and of Lenselink.
Santos, Radleigh G; Appel, Jon R; Giulianotti, Marc A; Edwards, Bruce S; Sklar, Larry A; Houghten, Richard A; Pinilla, Clemencia
2013-05-30
In the past 20 years, synthetic combinatorial methods have fundamentally advanced the ability to synthesize and screen large numbers of compounds for drug discovery and basic research. Mixture-based libraries and positional scanning deconvolution combine two approaches for the rapid identification of specific scaffolds and active ligands. Here we present a quantitative assessment of the screening of 32 positional scanning libraries in the identification of highly specific and selective ligands for two formylpeptide receptors. We also compare and contrast two mixture-based library approaches using a mathematical model to facilitate the selection of active scaffolds and libraries to be pursued for further evaluation. The flexibility demonstrated in the differently formatted mixture-based libraries allows for their screening in a wide range of assays.
Semi-blind sparse image reconstruction with application to MRFM.
Park, Se Un; Dobigeon, Nicolas; Hero, Alfred O
2012-09-01
We propose a solution to the image deconvolution problem where the convolution kernel or point spread function (PSF) is assumed to be only partially known. Small perturbations generated from the model are exploited to produce a few principal components explaining the PSF uncertainty in a high-dimensional space. Unlike recent developments on blind deconvolution of natural images, we assume the image is sparse in the pixel basis, a natural sparsity arising in magnetic resonance force microscopy (MRFM). Our approach adopts a Bayesian Metropolis-within-Gibbs sampling framework. The performance of our Bayesian semi-blind algorithm for sparse images is superior to previously proposed semi-blind algorithms such as the alternating minimization algorithm and blind algorithms developed for natural images. We illustrate our myopic algorithm on real MRFM tobacco virus data.
Advanced Source Deconvolution Methods for Compton Telescopes
NASA Astrophysics Data System (ADS)
Zoglauer, Andreas
The next generation of space telescopes utilizing Compton scattering for astrophysical observations is destined to one day unravel the mysteries behind Galactic nucleosynthesis, to determine the origin of the positron annihilation excess near the Galactic center, and to uncover the hidden emission mechanisms behind gamma-ray bursts. Besides astrophysics, Compton telescopes are establishing themselves in heliophysics, planetary sciences, medical imaging, accelerator physics, and environmental monitoring. Since the COMPTEL days, great advances in the achievable energy and position resolution have been made, creating an extremely vast, but also extremely sparsely sampled, data space. Unfortunately, the optimum way to analyze the data from the next generation of Compton telescopes has not yet been found: one that retrieves all source parameters (location, spectrum, polarization, flux) and achieves the best possible resolution and sensitivity at the same time. This is especially important for all science objectives looking at the inner Galaxy: the large number of expected sources, the high background (internal and Galactic diffuse emission), and the limited angular resolution make it the most taxing case for data analysis. In general, two key challenges exist. First, what are the best data space representations to answer the specific science questions? Second, what is the best way to deconvolve the data to fully retrieve the source parameters? For modern Compton telescopes, the existing data space representations can either correctly reconstruct the absolute flux (binned mode) or achieve the best possible resolution (list mode), but until now not both together. Here we propose to develop a two-stage hybrid reconstruction method that combines the best aspects of both. Using a proof-of-concept implementation, we can for the first time show that it is possible to alternate during each deconvolution step between a binned-mode approach, to get the flux right, and a list-mode approach, to get the best angular resolution, and thus to achieve both at the same time. The second open question concerns the best deconvolution algorithm. For example, several algorithms have been investigated for the famous COMPTEL 26Al map and resulted in significantly different images. There is no clear answer as to which approach provides the most accurate result, largely because detailed simulations to test and verify the approaches and their limitations were not possible at that time. This has changed, and we therefore propose to evaluate several deconvolution algorithms (e.g., Richardson-Lucy, maximum entropy, MREM, and stochastic origin ensembles) with simulations of typical observations to find the best algorithm for each application and for each stage of the hybrid reconstruction approach. We will adapt, implement, and fully evaluate the hybrid source reconstruction approach as well as the various deconvolution algorithms with simulations of synthetic benchmarks and of key science objectives such as diffuse nuclear line science and continuum science of point sources, as well as with calibrations and observations of the COSI balloon telescope.
This proposal for "development of new data analysis methods for future satellite missions" will significantly improve the source deconvolution techniques for modern Compton telescopes and will allow unlocking the full potential of envisioned satellite missions using Compton-scatter technology in astrophysics, heliophysics and planetary sciences, and ultimately help them to "discover how the universe works" and to better "understand the sun". Ultimately it will also benefit ground based applications such as nuclear medicine and environmental monitoring as all developed algorithms will be made publicly available within the open-source Compton telescope analysis framework MEGAlib.
Mimura, Takeshi; Walker, Natalie; Aoki, Yoshiro; Manning, Casey M.; Murdock, Benjamin J.; Myers, Jeffery L.; Lagstein, Amir; Osterholzer, John J.; Lama, Vibha N.
2016-01-01
Bronchiolitis obliterans is the leading cause of chronic graft failure and long-term mortality in lung transplant recipients. Here, we used a novel murine model to characterize allograft fibrogenesis within a whole-lung microenvironment. Unilateral left lung transplantation was performed in mice across varying degrees of major histocompatibility complex mismatch combinations. B6D2F1/J (a cross between C57BL/6J and DBA/2J) (Haplotype H2b/d) lungs transplanted into DBA/2J (H2d) recipients were identified to show histopathology for bronchiolitis obliterans in all allogeneic grafts. Time course analysis showed an evolution from immune cell infiltration of the bronchioles and vessels at day 14, consistent with acute rejection and lymphocytic bronchitis, to subepithelial and intraluminal fibrotic lesions of bronchiolitis obliterans by day 28. Allografts at day 28 showed a significantly higher hydroxyproline content than the isografts (33.21 ± 1.89 versus 22.36 ± 2.33 μg/mL). At day 40 the hydroxyproline content had increased further (48.91 ± 7.09 μg/mL). Flow cytometric analysis was used to investigate the origin of mesenchymal cells in fibrotic allografts. Collagen I–positive cells (89.43% ± 6.53%) in day 28 allografts were H2Db positive, showing their donor origin. This novel murine model shows consistent and reproducible allograft fibrogenesis in the context of single-lung transplantation and represents a major step forward in investigating mechanisms of chronic graft failure. PMID:25848843
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verbanck, Sylvia, E-mail: sylvia.verbanck@uzbrussel.be; Hanon, Shane; Schuermans, Daniel
Purpose: To assess the effect of radiation therapy on lung function over the course of 3 years. Methods and Materials: Evolution of restrictive and obstructive lung function parameters was investigated in 108 breast cancer participants in a randomized, controlled trial comparing conventional radiation therapy (CR) and hypofractionated tomotherapy (TT) (age at inclusion ranging 32-81 years). Spirometry, plethysmography, and hemoglobin-corrected diffusing capacity were assessed at baseline and after 3 months and 1, 2, and 3 years. Natural aging was accounted for by considering all lung function parameters in terms of percent predicted values using the most recent reference values for women aged up to 80 years. Results: In the patients with negligible history of respiratory disease or smoking (n=77), the greatest rate of functional decline was observed during the initial 3 months, this acute decrease being more marked in the CR versus the TT arm. During the remainder of the 3-year follow-up period, values (in terms of percent predicted) were maintained (diffusing capacity) or continued to decline at a slower rate (forced vital capacity). However, the average decline of the restrictive lung function parameters over a 3-year period did not exceed 9% predicted in either the TT or the CR arm. Obstructive lung function parameters remained unaffected throughout. Including also the 31 patients with a history of respiratory disease or more than 10 pack-years showed a very similar restrictive pattern. Conclusions: In women with breast cancer, both conventional radiation therapy and hypofractionated tomotherapy induce small but consistent restrictive lung patterns over the course of a 3-year period, irrespective of baseline respiratory status or smoking history. The fastest rate of lung function decline generally occurred in the first 3 months.
NASA Astrophysics Data System (ADS)
de Souza, Rogério F.; de Carvalho, Marcelo; Matsuo, Tiemi; Zaia, Dimas A. M.
2010-04-01
This paper reports the results of a questionnaire administered to university students about several questions involving the origin of the Universe and life and biological evolution, as well as questions related to more common scientific themes. Between only 2.4% (philosophy students) and 14% (geography students) did not accept the theory of evolution because they believed in creation as described in the Bible. However, between 41.5% (philosophy students) and 71.3% (biology students) did not see any conflict between religion and evolution. About 80% of the students believed that the relationship between lung cancer and smoking is well established by science, but this number falls to 65% for biological evolution and 28.9% for the big bang theory. It should be pointed out that for 24.5% and 7.4% of the students the big bang theory and biological evolution, respectively, are poorly established by science. The students who self-reported being Christian but not Roman Catholic were more conservative in their acceptance of biological evolution and the old age of Earth and the Universe than the other groups of students. Other factors, such as family income and the level of education of parents, also appear to influence the students' acceptance of themes related to the origin of the Universe and biological evolution.
Zhang, Fang; Wang, Haoyang; Zhang, Li; Zhang, Jing; Fan, Ruojing; Yu, Chongtian; Wang, Wenwen; Guo, Yinlong
2014-10-01
A strategy for suspected-target screening of pesticide residues in complicated matrices was exploited using gas chromatography in combination with hybrid quadrupole time-of-flight mass spectrometry (GC-QTOF MS). The screening workflow followed three key steps: initial detection, preliminary identification, and final confirmation. Initial detection of components in a matrix was done by high-resolution mass spectrum deconvolution; preliminary identification of suspected pesticides was based on a special retention index/mass spectrum (RI/MS) library that contained both the first-stage mass spectra (MS(1) spectra) and retention indices; and final confirmation was accomplished by accurate mass measurements of representative ions, with their response ratios, from the MS(1) spectra, or of representative product ions from the second-stage mass spectra (MS(2) spectra). To evaluate the applicability of the workflow in real samples, three matrices of apple, spinach, and scallion, each spiked with 165 test pesticides in a set of concentrations, were selected as the models. The results showed that the use of high-resolution TOF enabled effective extraction of spectra from noisy chromatograms based on a narrow mass window (5 mDa), with suspected-target compounds identified by similarity matching of deconvoluted full mass spectra and filtering of linear RIs. On average, over 74% of pesticides at 50 ng/mL could be identified using deconvolution and the RI/MS library. Over 80% of pesticides at 5 ng/mL or lower concentrations could be confirmed in each matrix using at least two representative ions with their response ratios from the MS(1) spectra. In addition, the application of product ion spectra was capable of confirming suspected pesticides with specificity for some pesticides in complicated matrices. In conclusion, GC-QTOF MS combined with the RI/MS library seems to be one of the most efficient tools for the analysis of suspected-target pesticide residues in complicated matrices. Copyright © 2014 Elsevier B.V. All rights reserved.
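The accurate-mass confirmation step reduces to checking whether each representative ion falls within the stated 5 mDa window of its theoretical m/z. A trivial sketch (ion values invented for illustration; the response-ratio check is omitted):

```python
# Accurate-mass window matching for ion confirmation; illustrative sketch.
def confirm(measured, theoretical, window=0.005):
    """measured, theoretical: lists of m/z values in Da."""
    return all(
        any(abs(m - t) <= window for m in measured) for t in theoretical
    )

# Hypothetical pesticide with two representative ions:
print(confirm([263.0241, 181.0881], [263.0245, 181.0886]))  # True
```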
Deconvolution of magnetic acoustic change complex (mACC).
Bardy, Fabrice; McMahon, Catherine M; Yau, Shu Hui; Johnson, Blake W
2014-11-01
The aim of this study was to design a novel experimental approach to investigate the morphological characteristics of auditory cortical responses elicited by rapidly changing synthesized speech sounds. Six sound-evoked magnetoencephalographic (MEG) responses were measured to a synthesized train of speech sounds using the vowels /e/ and /u/ in 17 normal hearing young adults. Responses were measured to: (i) the onset of the speech train, (ii) an F0 increment; (iii) an F0 decrement; (iv) an F2 decrement; (v) an F2 increment; and (vi) the offset of the speech train using short (jittered around 135 ms) and long (1500 ms) stimulus onset asynchronies (SOAs). The least squares (LS) deconvolution technique was used to disentangle the overlapping MEG responses in the short SOA condition only. Comparison between the morphology of the recovered cortical responses in the short and long SOA conditions showed high similarity, suggesting that the LS deconvolution technique was successful in disentangling the MEG waveforms. Waveform latencies and amplitudes were different for the two SOA conditions and were influenced by the spectro-temporal properties of the sound sequence. The magnetic acoustic change complex (mACC) for the short SOA condition showed significantly lower amplitudes and shorter latencies compared to the long SOA condition. The F0 transition showed a larger reduction in amplitude from long to short SOA compared to the F2 transition. Lateralization of the cortical responses was observed under some stimulus conditions and appeared to be associated with the spectro-temporal properties of the acoustic stimulus. The LS deconvolution technique provides a new tool to study the properties of the auditory cortical response to rapidly changing sound stimuli. The presence of the cortical auditory evoked responses for rapid transitions of synthesized speech stimuli suggests that the temporal code is preserved at the level of the auditory cortex. Further, the reduced amplitudes and shorter latencies might reflect intrinsic properties of the cortical neurons to rapidly presented sounds. This is the first demonstration of the separation of overlapping cortical responses to rapidly changing speech sounds and offers a potential new biomarker of discrimination of rapid transition of sound. Crown Copyright © 2014. Published by Elsevier Ireland Ltd. All rights reserved.
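Least-squares deconvolution of overlapping responses can be sketched as ordinary linear regression: a design matrix encodes the jittered onsets of each event type, and solving the least-squares problem recovers one unmixed waveform per type. Variable names and onset handling below are illustrative, not the authors' implementation.

```python
# LS deconvolution of overlapping evoked responses; illustrative sketch.
import numpy as np

def ls_deconvolve(recording, onsets_by_type, resp_len):
    """recording: (n,) array; onsets_by_type: list of integer-sample onset lists."""
    n = len(recording)
    blocks = []
    for onsets in onsets_by_type:        # one column block per event type
        X = np.zeros((n, resp_len))
        for t0 in onsets:
            idx = np.arange(resp_len) + t0
            keep = idx < n               # clip events near the end
            X[idx[keep], np.arange(resp_len)[keep]] += 1.0
        blocks.append(X)
    X = np.hstack(blocks)
    beta = np.linalg.lstsq(X, recording, rcond=None)[0]
    return beta.reshape(len(onsets_by_type), resp_len)
```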
A Geophysical Inversion Model Enhancement Technique Based on the Blind Deconvolution
NASA Astrophysics Data System (ADS)
Zuo, B.; Hu, X.; Li, H.
2011-12-01
A model-enhancement technique is proposed to enhance the edges and details of geophysical inversion models without introducing any additional information. First, the theoretical correctness of the proposed technique is discussed: an approximation of the inversion MRM (model resolution matrix) by convolution with a PSF (point spread function) is designed to demonstrate the correctness of the deconvolution model-enhancement method. Then, a total-variation-regularized blind deconvolution algorithm for geophysical inversion model enhancement is proposed. In previous research, Oldenburg et al. demonstrated the connection between the PSF and the geophysical inverse solution, and Alumbaugh et al. proposed that more information could be provided by the PSF if we return to the idea of it behaving as an averaging or low-pass filter. We treat the PSF as a low-pass filter and enhance the inversion model on the basis of the PSF convolution approximation. Both a 1D linear and a 2D magnetotelluric inversion example are used to analyze the validity of the theory and the algorithm. To test the proposed PSF convolution approximation, the 1D linear inverse problem is considered; the convolution approximation error is only 0.15%. A 2D synthetic model enhancement experiment is then presented. After the deconvolution enhancement, the edges of the conductive prism and the resistive host become sharper, and the enhanced result is closer to the actual model than the original inversion model according to numerical analysis; moreover, artifacts in the inversion model are suppressed, and the overall model precision increases by 75%. All of the experiments show that the structural details and numerical precision of the inversion model are significantly improved, especially in the anomalous region. The correlation coefficients between the enhanced inversion model and the actual model are shown in Fig. 1, which illustrates that more of the detailed structure of the actual model is recovered by the proposed enhancement algorithm. Using the proposed enhancement method can help us gain a clearer insight into inversion results and make better-informed decisions.
Topography Estimation of the Core Mantle Boundary with ScS Reverberations and Diffraction Waves
NASA Astrophysics Data System (ADS)
Hein, B. E.; Nakata, N.
2017-12-01
In this study, we use the propagation of global seismic waves to study the core-mantle boundary (CMB). We focus on S-wave reflections at the CMB (ScS reverberations) and outer-core diffracted waves. Imaging the CMB with ScS waves is difficult because of the complexity of the near-surface structure (~50 km), which degrades the signal-to-noise ratio of the ScS phase. To avoid estimating the structure in the crust, we rely on the concept of seismic interferometry to extract wave propagation through the mantle, but not through the crust. Our approach is to compute the deconvolution between the ScS (and its reverberations) and direct S waves generated by intermediate to deep earthquakes (>50 km depth). Through this deconvolution, we can filter out the direct S wave and retrieve the wavefield propagating only from the hypocenter to the outer core, not between the hypocenter and the receiver. After the deconvolution, we can isolate the CMB-reflected waves from the complicated wave phenomena caused by the near-surface structure. Utilizing intermediate and deep earthquakes is key, since it suppresses the near-surface effects between the surface and the hypocenter. The variation of such waves (e.g., travel-time perturbation and/or wavefield decorrelation) across different receivers and earthquakes provides information on the topography of the CMB. To obtain a more detailed image of CMB topography, we also use diffracted seismic waves such as Pdiff, Sdiff, and P'P'. By using two intermediate to deep earthquakes on a great-circle path with a station, we can extract the wave propagation between the two earthquakes to simplify the waveform, similar to how it is performed with the ScS wave. Diffracted waves provide greater illumination of the CMB than ScS reverberations alone. The accurate CMB topography obtained by these deconvolution analyses may provide new insight into the dynamics of the Earth, such as heat flow at the CMB and through the mantle.
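A minimal sketch of the phase-by-phase deconvolution step, written as a stabilized (water-level) spectral division; the window lengths, water level, and low-pass corner below are illustrative assumptions, not values from the study.

```python
import numpy as np

def waterlevel_deconv(scs, s_direct, dt, wl=0.01, fmax=0.5):
    """Deconvolve the direct S waveform from the ScS waveform.

    Division in the frequency domain removes the source and near-source
    path effects shared by both phases; the water level `wl` stabilizes
    the division where the direct-S spectrum is small.
    """
    n = max(len(scs), len(s_direct))
    nfft = 2 * n                                 # zero-pad to avoid wrap-around
    S = np.fft.rfft(s_direct, nfft)
    D = np.fft.rfft(scs, nfft)
    denom = np.maximum(np.abs(S) ** 2, wl * np.max(np.abs(S) ** 2))
    irf = D * np.conj(S) / denom                 # stabilized spectral division
    f = np.fft.rfftfreq(nfft, dt)
    irf *= np.exp(-(f / fmax) ** 2)              # Gaussian low-pass taper
    return np.fft.irfft(irf, nfft)[:n]
```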
NASA Astrophysics Data System (ADS)
Sapia, Mark Angelo
2000-11-01
Three-dimensional microscope images typically suffer from reduced resolution due to the effects of convolution, optical aberrations and out-of-focus blurring. Two-dimensional ultrasound images are also degraded by convolutional blurring and various sources of noise; speckle noise is a major problem in ultrasound images. In microscopy and ultrasound, various methods of digital filtering have been used to improve image quality, and several methods of deconvolution filtering have been used to improve resolution by reversing the convolutional effects, many of which are based on regularization techniques and non-linear constraints. The technique discussed here is a unique linear filter for deconvolving 3D fluorescence microscopy or 2D ultrasound images. The approach solves for the filter entirely in the spatial domain, using an adaptive algorithm to converge to an optimum solution for de-blurring and resolution improvement. An adaptive solution has two key advantages: (1) it efficiently solves for the filter coefficients by taking into account all sources of noise and degraded resolution at the same time, and (2) it achieves near-perfect convergence to the ideal linear deconvolution filter. This linear adaptive technique has other advantages, such as avoiding artifacts of frequency-domain transformations and concurrently adapting to suppress noise. Ultimately, this approach yields better signal-to-noise characteristics with virtually no edge-ringing. Many researchers have avoided linear techniques because of poor convergence, noise instability and negative-valued data in the results. The methods presented here overcome many of these well-documented disadvantages and provide results that clearly outperform other linear methods and may also outperform regularization and constrained algorithms; the adaptive solution is chiefly responsible for overcoming the poor performance associated with linear techniques. This linear adaptive approach to deconvolution is demonstrated by restoring blurred phantoms for both microscopy and ultrasound, and by restoring 3D microscope images of biological cells and 2D ultrasound images of human subjects (courtesy of General Electric and Diasonics, Inc.).
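A minimal sketch of the adaptive-filter idea in one dimension, using the classic LMS rule against a known sharp phantom; tap count, step size, and epoch count are illustrative, and the step size must respect the input signal power for the iteration to remain stable. The dissertation's actual filter design will differ.

```python
import numpy as np

def lms_deconv_filter(blurred, reference, taps=31, mu=0.01, epochs=5):
    """Adapt a FIR deconvolution filter with the LMS rule.

    blurred   : degraded 1-D signal (e.g., a scan line of a phantom image)
    reference : the known sharp signal for the same phantom (same length)
    The filter converges toward the optimal (Wiener) linear deconvolver,
    accounting for blur and noise simultaneously.
    """
    w = np.zeros(taps)
    half = taps // 2
    x = np.pad(blurred, (half, half))
    for _ in range(epochs):
        for n in range(len(blurred)):
            window = x[n:n + taps][::-1]      # current input vector
            y = w @ window                    # filter output sample
            e = reference[n] - y              # error vs. sharp reference
            w += mu * e * window              # LMS coefficient update
    return w

# once trained on a phantom, the filter is applied to new data with
# np.convolve(new_blurred, w, mode="same")
```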
Aarabi, Ardalan; Osharina, Victoria; Wallois, Fabrice
2017-07-15
Slow and rapid event-related designs are used in fMRI and functional near-infrared spectroscopy (fNIRS) experiments to temporally characterize the brain hemodynamic response to discrete events. Conventional averaging (CA) and the deconvolution method (DM) are the two techniques commonly used to estimate the Hemodynamic Response Function (HRF) profile in event-related designs. In this study, we conducted a series of simulations using synthetic and real NIRS data to examine the effect of the main confounding factors, including event sequence timing parameters, different types of noise, signal-to-noise ratio (SNR), temporal autocorrelation and temporal filtering, on the performance of these techniques in slow and rapid event-related designs. We also compared systematic errors in the estimates of the fitted HRF amplitude, latency and duration for both techniques. We further compared the performance of deconvolution methods based on Finite Impulse Response (FIR) basis functions and gamma basis sets. Our results demonstrate that DM was much less sensitive to confounding factors than CA. Event timing was the main parameter largely affecting the accuracy of CA. In slow event-related designs, deconvolution methods provided similar results to those obtained by CA. In rapid event-related designs, our results showed that DM outperformed CA for all SNRs, especially above -5 dB, regardless of the event sequence timing and the dynamics of background NIRS activity. Our results also show that periodic low-frequency systemic hemodynamic fluctuations as well as phase-locked noise can markedly obscure hemodynamic evoked responses. Temporal autocorrelation also affected the performance of both techniques by inducing distortions in the time profile of the estimated hemodynamic response with inflated t-statistics, especially at low SNRs. We also found that high-pass temporal filtering could substantially affect the performance of both techniques by removing the low-frequency components of HRF profiles. Our results emphasize the importance of characterizing event timing, background noise and SNR when estimating HRF profiles using CA and DM in event-related designs. Copyright © 2017 Elsevier Inc. All rights reserved.
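A minimal sketch of the gamma-basis variant of DM: regressors are built by convolving the stimulus train with a small set of gamma-shaped basis functions, so overlapping responses in rapid designs are handled by the joint regression rather than by epoch averaging. The basis shapes and HRF duration below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import gamma as gamma_dist

def gamma_basis(t, shapes=(4.0, 8.0, 12.0)):
    """A small gamma basis set spanning plausible HRF shapes (t in seconds)."""
    return np.column_stack([gamma_dist.pdf(t, a) for a in shapes])

def hrf_from_gamma_basis(signal, onsets, fs, dur=20.0):
    """Estimate the HRF as a linear combination of gamma basis functions.

    signal : 1-D fNIRS time series
    onsets : sample indices of the events
    fs     : sampling rate (Hz); dur is the assumed HRF duration (s)
    """
    t = np.arange(0, dur, 1.0 / fs)
    B = gamma_basis(t)                         # (len(t), n_basis)
    stim = np.zeros(len(signal))
    stim[onsets] = 1.0
    # each regressor is the stimulus train convolved with one basis function
    X = np.column_stack([np.convolve(stim, B[:, k])[:len(signal)]
                         for k in range(B.shape[1])])
    beta, *_ = np.linalg.lstsq(X, signal, rcond=None)
    return B @ beta                            # fitted HRF profile
```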
Tutino, Lorenzo; Cianchi, Giovanni; Barbani, Francesco; Batacchi, Stefano; Cammelli, Rita; Peris, Adriano
2010-08-12
The use of lung ultrasound (LUS) in the ICU is increasing, but ultrasonographic lung patterns are often difficult for different operators to quantify. The aim of this study was to evaluate the accuracy and quality of LUS reporting after the introduction of a standardized electronic recording sheet. Intensivists were trained in LUS following a teaching programme. From April 2008, an electronic sheet was designed and introduced into the ICU database in order to standardize LUS examination reporting. Each exam was given a mark from 0 to 24 by two senior intensivists not involved in the survey; the mark was based on the completeness of a precise reporting scheme covering the main findings of LUS, with a cut-off of 15 considered sufficient. The study comprised 12 months of observation and a total of 637 LUS examinations. Initially, although report completeness improved somewhat, the accuracy and precision of examination reporting remained below 15; seven months were required to reach sufficient quality, and a linear trend in the physicians' progress was observed. Uniformity in the teaching programme and the examination reporting system improves the completeness and accuracy of LUS reporting, helping physicians follow the evolution of lung pathology.
Kienlen, A; Fernandez, C; Henni-Laleg, Z; Andre, M; Gazaille, V; Coolen-Allou, N
2018-04-01
Thoracic endometriosis is a rare entity characterized by the presence of endometrial tissue in the pleura, lung parenchyma or airways. The most frequent manifestations are catamenial pneumothorax, hemothorax, hemoptysis and pulmonary nodules. We report a rare case of a woman with thoracic endometriosis who developed recurrent pneumothorax and pneumopericardium on bilateral bullous pulmonary dystrophy. She was a 37-year-old woman without any tobacco exposure and with a history of pleural tuberculosis treated 5 years earlier. She was first referred to our centre for right pleuropneumothorax and hemorrhagic ascites. Pleural fluid examination did not show any tuberculosis relapse, the evolution was favorable after thoracic drainage, and there was no parenchymal lung abnormality on CT scan after surgery. Laparoscopic peritoneal examination revealed stage IV peritoneal endometriosis. One year later, she was admitted for left catamenial pneumothorax; thoracic CT showed the appearance of a large subpleural bulla, and she underwent thoracotomy for bulla resection and left partial pleurectomy. Two years later, she was hospitalized for right pneumothorax and compressive pneumopericardium. Surgical lung biopsies confirmed pleuropulmonary endometriosis, and thoracotomy was performed for talc pleurodesis and suture of diaphragmatic leaks. Lung bullae are rare in thoracic endometriosis, and the mechanism of their formation remains unknown. Pericardial involvement is also rare in endometriosis; we report here a unique case of pneumopericardium. Copyright © 2018 Elsevier Masson SAS. All rights reserved.
Digital and analog chemical evolution.
Goodwin, Jay T; Mehta, Anil K; Lynn, David G
2012-12-18
Living matter is the most elaborate, elegant, and complex hierarchical material known and is consequently the natural target for an ever-expanding scientific and technological effort to unlock and deconvolute its marvelous forms and functions. Our current understanding suggests that biological materials are derived from a bottom-up process, a spontaneous emergence of molecular networks in the course of chemical evolution. Polymer cooperation, so beautifully manifested in the ribosome, appeared in these dynamic networks, and the special physicochemical properties of the nucleic and amino acid polymers made possible the critical threshold for the emergence of extant cellular life. These properties include the precise and geometrically discrete hydrogen bonding patterns that dominate the complementary interactions of nucleic acid base-pairing that guide replication and ensure replication fidelity. In contrast, complex and highly context-dependent sets of intra- and intermolecular interactions guide protein folding. These diverse interactions allow the more analog environmental chemical potential fluctuations to dictate conformational template-directed propagation. When these two different strategies converged in the remarkable synergistic ribonucleoprotein that is the ribosome, this resulting molecular digital-to-analog converter achieved the capacity for both persistent information storage and adaptive responses to an ever-changing environment. The ancestral chemical networks that preceded the Central Dogma of Earth's biology must reflect the dynamic chemical evolutionary landscapes that allowed for selection, propagation, and diversification and ultimately the demarcation and specialization of function that modern biopolymers manifest. Not only should modern biopolymers contain molecular fossils of this earlier age, but it should be possible to use this information to reinvent these dynamic functional networks. In this Account, we review the first dynamic network created by modification of a nucleic acid backbone and show how it has exploited the digital-like base pairing for reversible polymer construction and information transfer. We further review how these lessons have been extended to the complex folding landscapes of templated peptide assembly. These insights have allowed for the construction of molecular hybrids of each biopolymer class and made possible the reimagining of chemical evolution. Such elaboration of biopolymer chimeras has already led to applications in therapeutics and diagnostics, to the construction of novel nanostructured materials, and toward orthogonal biochemical pathways that expand the evolution of existing biochemical systems. The ability to look beyond the primordial emergence of the ribosome may allow us to better define the origins of chemical evolution, to extend its horizons beyond the biology of today and ask whether evolution is an inherent property of matter unbounded by physical limitations imposed by our planet's diverse environments.
Geriatric Assessment and Functional Decline in Older Patients with Lung Cancer.
Decoster, L; Kenis, C; Schallier, D; Vansteenkiste, J; Nackaerts, K; Vanacker, L; Vandewalle, N; Flamaing, J; Lobelle, J P; Milisen, K; De Grève, J; Wildiers, H
2017-10-01
Older patients with lung cancer are a heterogeneous population making treatment decisions complex. This study aims to evaluate the value of geriatric assessment (GA) as well as the evolution of functional status (FS) in older patients with lung cancer, and to identify predictors associated with functional decline and overall survival (OS). At baseline, GA was performed in patients ≥70 years with newly diagnosed lung cancer. FS measured by activities of daily living (ADL) and instrumental activities of daily living (IADL) was reassessed at follow-up to define functional decline and OS was collected. Predictors for functional decline and OS were determined. Two hundred and forty-five patients were included in this study. At baseline, GA deficiencies were present in all domains and ADL and IADL were impaired in 51 and 63% of patients, respectively. At follow-up, functional decline in ADL was observed in 23% and in IADL in 45% of patients. In multivariable analysis, radiotherapy was predictive for ADL decline. No other predictors for ADL or IADL decline were identified. Stage and baseline performance status were predictive for OS. Older patients with lung cancer present with multiple deficiencies covering all geriatric domains. During treatment, functional decline is observed in almost half of the patients. None of the specific domains of the GA were predictive for functional decline or survival, probably because of the high impact of the aggressiveness of this tumor type leading to a poor prognosis.
Pulse analysis of acoustic emission signals. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Houghton, J. R.
1976-01-01
A method for the signature analysis of pulses in the frequency domain and the time domain is presented. The Fourier spectrum, Fourier transfer function, shock spectrum and shock spectrum ratio are examined in the frequency-domain analysis, and pulse-shape deconvolution is developed for use in the time-domain analysis. To demonstrate the relative sensitivity of each method to small changes in pulse shape, signatures of computer-modeled systems with analytical pulses are presented. Optimization techniques are developed and used to indicate the best design parameter values for deconvolution of the pulse shape. Several experiments are presented that test the pulse signature analysis methods on different acoustic emission sources, including acoustic emissions associated with: (1) crack propagation, (2) a ball dropping on a plate, (3) spark discharge and (4) defective and good ball bearings.
LES-Modeling of a Partially Premixed Flame using a Deconvolution Turbulence Closure
NASA Astrophysics Data System (ADS)
Wang, Qing; Wu, Hao; Ihme, Matthias
2015-11-01
The modeling of the turbulence/chemistry interaction in partially premixed and multi-stream combustion remains an outstanding issue. By extending a recently developed constrained minimum mean-square error deconvolution (CMMSED) method, the objective of this work is to develop a source-term closure for turbulent multi-stream combustion. In this method, the chemical source term is obtained from a three-stream flamelet model, and CMMSED is used as the closure model, thereby eliminating the need for presumed-PDF modeling. The model is applied to LES of a piloted turbulent jet flame with inhomogeneous inlets, and simulation results are compared with experiments. Comparisons with presumed-PDF methods are performed, and issues regarding resolution and conservation of the CMMSED method are examined. The authors acknowledge funding support from the Stanford Graduate Fellowship.
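For orientation only, a sketch of the generic approximate-deconvolution idea that this family of closures builds on (the van Cittert iteration), not the authors' CMMSED formulation: the LES filter is approximately inverted, and the deconvolved field can then be passed to a nonlinear source term, such as a flamelet chemistry lookup, to close its filtered value. The top-hat filter and iteration count are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def van_cittert_deconvolve(u_filtered, filter_width, n_iter=5):
    """Generic approximate deconvolution of an LES-filtered 1-D field.

    Iterates v <- v + (u_filtered - G(v)) with G the (known) LES filter,
    approximating the inverse filter by a truncated Neumann series.
    """
    G = lambda v: uniform_filter1d(v, filter_width, mode="wrap")
    v = u_filtered.copy()
    for _ in range(n_iter):
        v = v + (u_filtered - G(v))
    return v
```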
Hojjatoleslami, S A; Avanaki, M R N; Podoleanu, A Gh
2013-08-10
Optical coherence tomography (OCT) has the potential for skin tissue characterization due to its high axial and transverse resolution and its acceptable depth penetration. In practice, OCT cannot reach its theoretical resolution due to imperfections in some of the components used. One way to improve the quality of the images is to estimate the point spread function (PSF) of the OCT system and deconvolve it from the output images. In this paper, we investigate the use of solid phantoms to estimate the PSF of the imaging system. We then utilize the iterative Lucy-Richardson deconvolution algorithm to improve the quality of the images. The performance of the proposed algorithm is demonstrated on OCT images acquired from a variety of samples, such as epoxy-resin phantoms, fingertip skin, and basaloid larynx and eyelid tissues.
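The Lucy-Richardson algorithm itself is standard; a minimal numpy/scipy sketch is below (iteration count and initialization are illustrative, and real OCT pipelines add log-compression and speckle handling on top of this core update).

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=30, eps=1e-12):
    """Iterative Lucy-Richardson deconvolution (1-D or 2-D arrays).

    Multiplicative updates preserve non-negativity and maximize the
    Poisson likelihood of the observed image given the PSF.
    """
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1] if psf.ndim == 2 else psf[::-1]
    estimate = np.full_like(image, image.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / (blurred + eps)               # data / model
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```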
Streaming Multiframe Deconvolutions on GPUs
NASA Astrophysics Data System (ADS)
Lee, M. A.; Budavári, T.
2015-09-01
Atmospheric turbulence distorts all ground-based observations, which is especially detrimental to faint detections. The point spread function (PSF) defining this blur is unknown for each exposure and varies significantly over time, making image analysis difficult. Lucky imaging and traditional co-adding throw away much of this information. We developed blind deconvolution algorithms that can simultaneously obtain robust solutions for the background image and all the PSFs. This is done in a streaming setting, which makes it practical for large numbers of big images. We implemented a new tool that runs on GPUs and achieves exceptional running times that can scale to the new time-domain surveys. Our code can quickly and effectively recover high-resolution images exceeding the quality of traditional co-adds. We demonstrate the power of the method on the repeated exposures in the Sloan Digital Sky Survey's Stripe 82.
NASA Astrophysics Data System (ADS)
Kwak, Sangmin; Song, Seok Goo; Kim, Geunyoung; Cho, Chang Soo; Shin, Jin Soo
2017-10-01
Using recordings of a mine collapse event (Mw 4.2) in South Korea in January 2015, we demonstrated that the phase and amplitude information of impulse response functions (IRFs) can be effectively retrieved using seismic interferometry. This event is equivalent to a single downward force at shallow depth. Using quantitative metrics, we compared three different seismic interferometry techniques—deconvolution, coherency, and cross correlation—to extract the IRFs between two distant stations with ambient seismic noise data. The azimuthal dependency of the source distribution of the ambient noise was also evaluated. We found that deconvolution is the best method for extracting IRFs from ambient seismic noise within the period band of 2-10 s. The coherency method is also effective if appropriate spectral normalization or whitening schemes are applied during the data processing.
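A minimal sketch of the three interferometry estimators being compared; they differ only in how the cross-spectrum between the two station records is normalized. The regularization constant is an illustrative assumption.

```python
import numpy as np

def interferometry_irfs(u1, u2, eps=0.01):
    """IRF estimates between two stations from concurrent noise records.

    Returns time-domain cross-correlation, coherency, and deconvolution
    estimators built from the cross-spectrum U2 * conj(U1).
    """
    n = len(u1)
    U1, U2 = np.fft.rfft(u1, 2 * n), np.fft.rfft(u2, 2 * n)
    cross = U2 * np.conj(U1)
    xcorr = cross                                     # no normalization
    coher = cross / (np.abs(U1) * np.abs(U2) + eps)   # amplitude-normalized
    decon = cross / (np.abs(U1) ** 2                  # source-spectrum removed
                     + eps * np.mean(np.abs(U1) ** 2))
    to_time = lambda s: np.fft.irfft(s)[:n]
    return to_time(xcorr), to_time(coher), to_time(decon)
```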
Santos, Radleigh G.; Appel, Jon R.; Giulianotti, Marc A.; Edwards, Bruce S.; Sklar, Larry A.; Houghten, Richard A.; Pinilla, Clemencia
2014-01-01
In the past 20 years, synthetic combinatorial methods have fundamentally advanced the ability to synthesize and screen large numbers of compounds for drug discovery and basic research. Mixture-based libraries and positional scanning deconvolution combine two approaches for the rapid identification of specific scaffolds and active ligands. Here we present a quantitative assessment of the screening of 32 positional scanning libraries in the identification of highly specific and selective ligands for two formylpeptide receptors. We also compare and contrast two mixture-based library approaches using a mathematical model to facilitate the selection of active scaffolds and libraries to be pursued for further evaluation. The flexibility demonstrated in the differently formatted mixture-based libraries allows for their screening in a wide range of assays. PMID:23722730
DOE Office of Scientific and Technical Information (OSTI.GOV)
Castello, Marco; Diaspro, Alberto
2014-12-08
Time-gated detection, namely, only collecting the fluorescence photons after a time-delay from the excitation events, reduces complexity, cost, and illumination intensity of a stimulated emission depletion (STED) microscope. In the gated continuous-wave- (CW-) STED implementation, the spatial resolution improves with increased time-delay, but the signal-to-noise ratio (SNR) reduces. Thus, in sub-optimal conditions, such as a low photon-budget regime, the SNR reduction can cancel out the expected gain in resolution. Here, we propose a method which does not discard photons, but instead collects all the photons in different time-gates and recombines them through a multi-image deconvolution. Our results, obtained on simulated and experimental data, show that the SNR of the restored image improves relative to the gated image, thereby improving the effective resolution.
NASA Astrophysics Data System (ADS)
Aziz, Akram Mekhael; Sauck, William August; Shendi, El-Arabi Hendi; Rashed, Mohamed Ahmed; Abd El-Maksoud, Mohamed
2013-07-01
Progress in geophysical data processing and interpretation techniques over the past three decades has been particularly focused on the field of aero-geophysics. The present study demonstrates the application of some of these techniques, including the analytic signal, located Euler deconvolution, standard Euler deconvolution, and 2D inverse modelling, to enhancing and interpreting archaeo-magnetic measurements. A high-resolution total magnetic field survey was conducted at the ancient city of Pelusium (the name derives from the ancient Pelusiac branch of the Nile; the site is now called Tell el-Farama), located in the northwestern corner of the Sinai Peninsula. The historical city served as a harbour throughout Egyptian history, and different ruins at the site have been dated to the late Pharaonic, Graeco-Roman, Byzantine, Coptic, and Islamic periods. An area of 10,000 m², to the west of the famous huge red brick citadel of Pelusium, was surveyed using the magnetic method. The location was recommended by Egyptian archaeologists, who suspected the presence of buried foundations of a temple to the gods Zeus and Kasios. The interpretation of the results revealed interesting shallow-buried features, which may represent the temple's outer walls. These walls are elongated in the same azimuth as the northern wall of the citadel, which supports the hypothesis of a controlling feature such as a former seacoast or the shore of a distributary channel.
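Euler deconvolution itself reduces to a small least-squares solve per data window. A minimal sketch of that window solve follows; the caller supplies the field gradients (typically from finite differences and an FFT-based vertical derivative) and a structural index, and the ground-level observation height is an assumption of this sketch.

```python
import numpy as np

def euler_window_solve(x, y, T, Tx, Ty, Tz, N):
    """Solve Euler's homogeneity equation in one data window.

    (x - x0) Tx + (y - y0) Ty + (z - z0) Tz = N (B - T)
    Unknowns: source position (x0, y0, z0) and regional background B.
    Inputs are flattened window arrays of coordinates, the field T, and
    its three gradients; N is the structural index.
    """
    z = np.zeros_like(x)                 # observations assumed at ground level
    # Rearranged: x0*Tx + y0*Ty + z0*Tz + N*B = x*Tx + y*Ty + z*Tz + N*T
    A = np.column_stack([Tx, Ty, Tz, N * np.ones_like(T)])
    b = x * Tx + y * Ty + z * Tz + N * T
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    x0, y0, z0, B = sol
    return x0, y0, z0, B
```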
Statistical Deconvolution for Superresolution Fluorescence Microscopy
Mukamel, Eran A.; Babcock, Hazen; Zhuang, Xiaowei
2012-01-01
Superresolution microscopy techniques based on the sequential activation of fluorophores can achieve image resolution of ∼10 nm but require a sparse distribution of simultaneously activated fluorophores in the field of view. Image analysis procedures for this approach typically discard data from crowded molecules with overlapping images, wasting valuable image information that is only partly degraded by overlap. A data analysis method that exploits all available fluorescence data, regardless of overlap, could increase the number of molecules processed per frame and thereby accelerate superresolution imaging speed, enabling the study of fast, dynamic biological processes. Here, we present a computational method, referred to as deconvolution-STORM (deconSTORM), which uses iterative image deconvolution in place of single- or multiemitter localization to estimate the sample. DeconSTORM approximates the maximum likelihood sample estimate under a realistic statistical model of fluorescence microscopy movies comprising numerous frames. The model incorporates Poisson-distributed photon-detection noise, the sparse spatial distribution of activated fluorophores, and temporal correlations between consecutive movie frames arising from intermittent fluorophore activation. We first quantitatively validated this approach with simulated fluorescence data and showed that deconSTORM accurately estimates superresolution images even at high densities of activated fluorophores where analysis by single- or multiemitter localization methods fails. We then applied the method to experimental data of cellular structures and demonstrated that deconSTORM enables an approximately fivefold or greater increase in imaging speed by allowing a higher density of activated fluorophores/frame. PMID:22677393
Kunze, Karl P; Nekolla, Stephan G; Rischpler, Christoph; Zhang, Shelley HuaLei; Hayes, Carmel; Langwieser, Nicolas; Ibrahim, Tareq; Laugwitz, Karl-Ludwig; Schwaiger, Markus
2018-04-19
Systematic differences with respect to myocardial perfusion quantification exist between DCE-MRI and PET. Using the potential of integrated PET/MRI, this study was conceived to compare perfusion quantification based on simultaneously acquired ¹³NH₃-ammonia PET and DCE-MRI data in patients at rest and stress. Twenty-nine patients were examined on a 3T PET/MRI scanner. DCE-MRI was implemented in a dual-sequence design with additional T1 mapping for signal normalization. Four different deconvolution methods, including a modified version of the Fermi technique, were compared against ¹³NH₃-ammonia results. Cohort-average flow comparison yielded higher resting flows for DCE-MRI than for PET and, therefore, significantly lower DCE-MRI perfusion ratios under the common assumption of equal arterial and tissue hematocrit. Absolute flow values were strongly correlated in both slice-average (R² = 0.82) and regional (R² = 0.7) evaluations. The different DCE-MRI deconvolution methods yielded similar flow results, with the exception of an unconstrained Fermi method exhibiting outliers at high flows when compared with PET. Thresholds for ischemia classification may not be directly transferable between PET and MRI flow values. Differences in perfusion ratios between PET and DCE-MRI may be removed by using stress/rest-specific hematocrit conversion. Proper physiological constraints are advised in model-constrained deconvolution. © 2018 International Society for Magnetic Resonance in Medicine.
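A minimal sketch of the Fermi-constrained deconvolution idea (not the authors' modified implementation): the tissue curve is modeled as the arterial input function convolved with a Fermi-shaped impulse response, and the fitted initial response amplitude estimates flow. Units, starting values, bounds, and hematocrit corrections are omitted or illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def fermi_fit_flow(t, aif, tissue):
    """Model-constrained deconvolution with a Fermi impulse response.

    tissue(t) ~ [aif (*) R](t), with R(t) = F / (1 + exp((t - tau) / k)).
    The fitted R at t=0 serves as the flow estimate; parameter bounds
    keep the fit in a physiologically plausible range (illustrative).
    """
    dt = t[1] - t[0]

    def model(t, F, tau, k):
        r = F / (1.0 + np.exp((t - tau) / k))
        return np.convolve(aif, r)[:len(t)] * dt   # discrete convolution

    popt, _ = curve_fit(model, t, tissue, p0=(1.0, 5.0, 2.0),
                        bounds=([0, 0, 0.1], [10, 30, 20]))
    F, tau, k = popt
    return F / (1.0 + np.exp(-tau / k))            # R(0): flow estimate
```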
Deblurring of Class-Averaged Images in Single-Particle Electron Microscopy.
Park, Wooram; Madden, Dean R; Rockmore, Daniel N; Chirikjian, Gregory S
2010-03-01
This paper proposes a method for deblurring class-averaged images in single-particle electron microscopy (EM). Since EM images of biological samples are very noisy, images that are nominally identical projections are often grouped, aligned and averaged in order to cancel or reduce the background noise. However, the noise in the individual EM images generates errors in the alignment process, which creates an inherent limit on the accuracy of the resulting class averages. An inaccurate class average due to alignment errors can be viewed as the result of a convolution of an underlying clear image with a blurring function. In this work, we develop a deconvolution method that estimates the underlying clear image from a blurred class-averaged image using precomputed statistics of misalignment. Since this convolution is over SE(2), the group of rigid-body motions of the plane, we use the Fourier transform on SE(2) to convert the convolution into a matrix multiplication in the corresponding Fourier space. For practical implementation we use a Hermite-function-based image modeling technique, because Hermite expansions enable lossless Cartesian-polar coordinate conversion via Laguerre-Fourier expansions, and both expansions retain their structures under the Fourier transform. Based on these mathematical properties, we can obtain the deconvolution of the blurred class average using simple matrix multiplication. Tests of the proposed deconvolution method using synthetic and experimental EM images confirm its performance.
A blind deconvolution method based on L1/L2 regularization prior in the gradient space
NASA Astrophysics Data System (ADS)
Cai, Ying; Shi, Yu; Hua, Xia
2018-02-01
In image restoration, the restored result can differ greatly from the true image because of noise. To address this ill-posed problem, a blind deconvolution method based on an L1/L2 regularization prior in the gradient domain is proposed. The method first adds a penalty function to the prior knowledge, namely the ratio of the L1 norm to the L2 norm, applied in the high-frequency domain of the image. The function is then updated iteratively, and an iterative shrinkage-thresholding algorithm is applied to solve for the high-frequency image. Because information in the gradient domain is better suited to blur-kernel estimation, the blur kernel is estimated in the gradient domain; this subproblem can be solved quickly in the frequency domain via the Fast Fourier Transform (FFT). In addition, a multi-scale iterative optimization scheme is added to improve the effectiveness of the algorithm. The proposed blind deconvolution method based on L1/L2 regularization priors in the gradient space obtains a unique and stable solution during image restoration, preserving the edges and details of the image while ensuring the accuracy of the results.
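A minimal sketch of the shrinkage-thresholding core that such methods iterate. Note the paper's L1/L2 ratio prior is non-convex and typically handled by reweighting; the sketch below substitutes a plain L1 penalty and works in the image domain for brevity, so it shows the machinery rather than the paper's exact objective.

```python
import numpy as np
from scipy.signal import fftconvolve

def soft_threshold(x, t):
    """Proximal operator of the L1 norm (shrinkage)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_deconvolve(blurred, psf, lam=1e-3, step=1.0, n_iter=100):
    """Iterative shrinkage-thresholding for non-blind deconvolution.

    Minimizes ||k * x - b||^2 + lam * ||x||_1 by alternating a gradient
    step on the data term with a soft-threshold (shrinkage) step.
    """
    psf = psf / psf.sum()
    flip = psf[::-1, ::-1] if psf.ndim == 2 else psf[::-1]
    x = blurred.astype(float).copy()
    for _ in range(n_iter):
        resid = fftconvolve(x, psf, mode="same") - blurred
        grad = fftconvolve(resid, flip, mode="same")   # adjoint blur
        x = soft_threshold(x - step * grad, step * lam)
    return x
```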
Firmino, Macedo; Morais, Antônio H; Mendoça, Roberto M; Dantas, Marcel R; Hekis, Helio R; Valentim, Ricardo
2014-04-08
The goal of this paper is to present a critical review of major computer-aided detection (CADe) systems for lung cancer in order to identify challenges for future research. CADe systems must meet the following requirements: improve the performance of radiologists by providing high sensitivity in diagnosis, a low number of false positives (FP), high processing speed, a high level of automation, low cost (of implementation, training, support and maintenance), the ability to detect different types and shapes of nodules, and software security assurance. The relevant literature related to "CADe for lung cancer" was obtained from the PubMed, IEEE Xplore and ScienceDirect databases. Articles published from 2009 to 2013, along with some earlier articles, were used. A systematic analysis was made of these articles and the results were summarized. Based on the literature search, it was observed that many if not all systems described in this survey have the potential to be important in clinical practice. However, no significant improvement was observed in sensitivity, number of false positives, level of automation or the ability to detect different types and shapes of nodules over the period studied, and challenges were identified for future research. Further research is needed to improve existing systems and propose new solutions. To this end, we believe that collaborative efforts through the creation of open-source software communities are necessary to develop a CADe system with all the requirements mentioned and with a short development cycle. In addition, future CADe systems should improve the level of automation through integration with picture archiving and communication systems (PACS) and the patient's electronic record, decrease the number of false positives, measure the evolution of tumors, and evaluate the evolution of oncological treatment and its possible prognosis.
Muruli, Aneesha; Higgins, Steven; Diggle, Stephen P.
2014-01-01
Research into chronic infection by bacterial pathogens, such as Pseudomonas aeruginosa, uses various in vitro and live host models. While these have increased our understanding of pathogen growth, virulence, and evolution, each model has certain limitations. In vitro models cannot recapitulate the complex spatial structure of host organs, while experiments on live hosts are limited in terms of sample size and infection duration for ethical reasons; live mammal models also require specialized facilities which are costly to run. To address this, we have developed an ex vivo pig lung (EVPL) model for quantifying Pseudomonas aeruginosa growth, quorum sensing (QS), virulence factor production, and tissue damage in an environment that mimics a chronically infected cystic fibrosis (CF) lung. In a first test of our model, we show that lasR mutants, which do not respond to 3-oxo-C12-homoserine lactone (HSL)-mediated QS, exhibit reduced virulence factor production in EVPL. We also show that lasR mutants grow as well as or better than a corresponding wild-type strain in EVPL. lasR mutants frequently and repeatedly arise during chronic CF lung infection, but the evolutionary forces governing their appearance and spread are not clear. Our data are not consistent with the hypothesis that lasR mutants act as social “cheats” in the lung; rather, our results support the hypothesis that lasR mutants are more adapted to the lung environment. More generally, this model will facilitate improved studies of microbial disease, especially studies of how cells of the same and different species interact in polymicrobial infections in a spatially structured environment. PMID:24866798
A Mutation in TTF1/NKX2.1 Is Associated With Familial Neuroendocrine Cell Hyperplasia of Infancy
Young, Lisa R.; Deutsch, Gail H.; Bokulic, Ronald E.; Brody, Alan S.
2013-01-01
Background: Neuroendocrine cell hyperplasia of infancy (NEHI) is a childhood diffuse lung disease of unknown etiology. We investigated the mechanism for lung disease in a subject whose clinical, imaging, and lung biopsy specimen findings were consistent with NEHI; the subject’s extended family and eight other unrelated patients with NEHI were also investigated. Methods: The proband’s lung biopsy specimen (at age 7 months) and serial CT scans were diagnostic of NEHI. Her mother, an aunt, an uncle, and two first cousins had failure to thrive in infancy and chronic respiratory symptoms that improved with age. Genes associated with autosomal-dominant forms of childhood interstitial lung disease were sequenced. Results: A heterozygous NKX2.1 mutation was identified in the proband and the four other adult family members with histories of childhood lung disease. The mutation results in a nonconservative amino acid substitution in the homeodomain in a codon extensively conserved through evolution. None of these individuals have thyroid disease or movement disorders. NKX2.1 mutations were not identified by sequence analysis in eight other unrelated subjects with NEHI. Conclusions: The nature of the mutation and its segregation with disease support that it is disease-causing. Previously reported NKX2.1 mutations have been associated with “brain-thyroid-lung” syndrome and a spectrum of more severe pulmonary phenotypes. We conclude that genetic mechanisms may cause NEHI and that NKX2.1 mutations may result in, but are not the predominant cause of, this phenotype. We speculate that altered expression of NKX2.1 target genes other than those in the surfactant system may be responsible for the pulmonary pathophysiology of NEHI. PMID:23787483
Long-term effects of inhaled budesonide on screening-detected lung nodules.
Veronesi, G; Lazzeroni, M; Szabo, E; Brown, P H; DeCensi, A; Guerrieri-Gonzaga, A; Bellomi, M; Radice, D; Grimaldi, M C; Spaggiari, L; Bonanni, B
2015-05-01
A previously carried out randomized phase IIb, placebo-controlled trial of 1 year of inhaled budesonide, which was nested in a lung cancer screening study, showed that non-solid and partially solid lung nodules detected by low-dose computed tomography (LDCT), and not immediately suspicious for lung cancer, tended to regress. Because some of these nodules may be slow-growing adenocarcinoma precursors, we evaluated long-term outcomes (after stopping the 1-year intervention) by annual LDCT. We analyzed the evolution of target and non-target trial nodules detected by LDCT in the budesonide and placebo arms up to 5 years after randomization. The numbers and characteristics of lung cancers diagnosed during follow-up were also analyzed. The mean maximum diameter of non-solid nodules reduced significantly (from 5.03 mm at baseline to 2.61 mm after 5 years) in the budesonide arm; there was no significant size change in the placebo arm. The mean diameter of partially solid lesions also decreased significantly, but only by 0.69 mm. The size of solid nodules did not change. Neither the number of new lesions nor the number of lung cancers differed in the two arms. Inhaled budesonide given for 1 year significantly decreased the size of non-solid nodules detected by screening LDCT after 5 years. This is of potential importance since some of these nodules may progress slowly to adenocarcinoma. However, further studies are required to assess clinical implications. NCT01540552. © The Author 2015. Published by Oxford University Press on behalf of the European Society for Medical Oncology. All rights reserved. For permissions, please email: journals.permissions@oup.com.
Fernández Rodríguez, Concepción; Padierna Sánchez, Celina; Villoria Fernández, Erica; Amigo Vázquez, Isaac; Fernández Martínez, Roberto; Peláez Fernández, Ignacio
2011-08-01
The evolution of symptoms, emotional state and daily routines in patients with breast cancer and lung cancer during treatment with intravenous chemotherapy (CT) is described, and the influence of anxiety and depression on these variables is analyzed. Sixty-six patients, 29 with breast cancer and 37 with lung cancer, were evaluated before starting treatment and after completing the first, second and last cycles of CT, using the Hospital Anxiety and Depression Scale (HADS), rating scales and interviews. Less than 30% of the patients showed clinical anxiety or depression according to the HADS. Throughout the treatment, tiredness, fatigue and nausea increased significantly, and work and leisure activity decreased. Concern about the future of relatives and insomnia increased significantly over time in patients with breast cancer, whereas they decreased in patients with lung cancer. By introducing the HADS scores as covariates, it was found that most differences are due to the time factor and the type of cancer. During treatment with CT, emotional disturbances do not seem to have a significant impact on the symptoms and changes in daily life reported by cancer patients.
Huang, Cheng-Yen; Hsieh, Ming-Ching; Zhou, Qinwei
2017-04-01
Monoclonal antibodies have become the fastest-growing protein therapeutics in recent years. The stability and heterogeneity of their physical and chemical structures remain a major challenge. Tryptophan fluorescence has proven to be a versatile tool for monitoring protein tertiary structure. By modeling the tryptophan fluorescence emission envelope with log-normal distribution curves, a quantitative measure can be obtained for routine characterization of the overall tertiary structure of monoclonal antibodies. Furthermore, the log-normal deconvolution results can be presented as a two-dimensional plot of tryptophan emission bandwidth vs. emission maximum to enhance resolution when comparing samples or as a function of applied perturbations. We demonstrate this by studying four different monoclonal antibodies, which are clearly distinguished on the emission bandwidth-maximum plot despite their similarity in overall amino acid sequences and tertiary structures. This strategy is also used to demonstrate the tertiary-structure comparability between different manufactured lots of one of the monoclonal antibodies (mAb2). In addition, in unfolding transition studies of mAb2 as a function of guanidine hydrochloride concentration, the evolution of the tertiary structure can be clearly traced in the emission bandwidth-maximum plot.
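A minimal sketch of fitting one asymmetric log-normal band to an emission spectrum; the Burstein-type parameterization shown here and the fixed asymmetry and starting values are assumptions, and real spectra may need multiple components and the authors' exact band shape.

```python
import numpy as np
from scipy.optimize import curve_fit

def lognormal_band(wl, amp, wl_max, width, rho=1.3):
    """Asymmetric log-normal emission band.

    amp at the maximum wl_max, full width at half maximum `width`,
    fixed asymmetry rho; defined for wavelengths above the limit `a`.
    """
    a = wl_max - width * rho / (rho ** 2 - 1.0)     # limiting wavelength
    out = np.zeros_like(wl, dtype=float)
    ok = wl > a
    out[ok] = amp * np.exp(-np.log(2.0) / np.log(rho) ** 2
                           * np.log((wl[ok] - a) * (rho ** 2 - 1.0)
                                    / (width * rho)) ** 2)
    return out

# fitting a measured tryptophan spectrum (wl in nm, intensity I):
# popt, _ = curve_fit(lognormal_band, wl, I, p0=(1.0, 340.0, 60.0))
# popt[1] (emission maximum) and popt[2] (bandwidth) give the 2-D plot axes
```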
Three Channel Polarimetric Based Data Deconvolution
2011-03-01
This thesis explains in its entirety the process used for deblurring and de-noising images that have been degraded by atmospheric turbulence and noise. (Only table-of-contents fragments of the rest of the record survive, covering the noise model, blur and noise, and laboratory results.)
NREL Scientist Maria Ghirardi Named AAAS Fellow | News | NREL
Ghirardi was recognized for her work on pathways in photosynthesis and in deconvoluting the metabolic partners of a crucial redox enzyme, including efforts to address the extreme sensitivity of the hydrogenase enzyme to oxygen, one of the byproducts of photosynthesis.
Development and Evaluation of Stereographic Display for Lung Cancer Screening
2008-12-01
Application of GPUs – with the evolution of commodity graphics processing units (GPUs) for accelerating games on personal computers, these units, designed for rendering computer games, are readily available and can be programmed to perform the kinds of real-time calculations required. (Only fragments of the report body and its reference list survive in the source record.)
Cappuccio, Javier; Dibarbora, Marina; Lozada, Inés; Quiroga, Alejandra; Olivera, Valeria; Dángelo, Marta; Pérez, Estefanía; Barrales, Hernán; Perfumo, Carlos; Pereda, Ariel; Pérez, Daniel R
2017-02-01
Swine farms provide a dynamic environment for the evolution of influenza A viruses (IAVs). The present report shows the results of a surveillance effort of IAV infection on one commercial swine farm in Argentina. Two cross-sectional serological and virological studies (n=480) were carried out in 2011 and 2012. Virus shedding was detected in nasal samples from pigs aged 7, 21 and 42 days. More than 90% of sows and gilts, but less than 40% of 21-day-old piglets, had antibodies against IAV. In addition, IAV was detected in 8/17 nasal swabs and 10/15 lung samples taken from necropsied pigs. A subset of these samples was further processed for virus isolation, resulting in six viruses of the H1N2 subtype (δ2 cluster). Pathological studies revealed an association of suppurative bronchopneumonia and necrotizing bronchiolitis with IAV-positive samples. Statistical analyses showed that the degree of lesions in bronchi, bronchioles, and alveoli was higher in lungs positive for IAV. The results of this study underscore the relevance of continued long-term active surveillance of IAV in swine populations to track IAV evolution relevant to swine and humans. Copyright © 2016 Elsevier Ltd. All rights reserved.
Temporal evolution of acute respiratory distress syndrome definitions.
Fioretto, José R; Carvalho, Werther B
2013-01-01
To review the evolution of acute respiratory distress syndrome (ARDS) definitions and present the current definition of the syndrome. A literature review and selection of the most relevant articles on ARDS definitions was performed using the MEDLINE®/PubMed® Resource Guide database (last ten years), in addition to the most important classic articles describing the evolution of the disease. The review covered the following subjects: introduction; the importance of definition; the first diagnostic criterion and subsequently used definitions, such as the acute lung injury score; the American-European Consensus Conference definition and its limitations; the Delphi definition and its problems; the accuracy of the aforementioned definitions; the most recent (Berlin) definition and its limitations; and the practical importance of the new definition. ARDS is a serious disease that remains an ongoing diagnostic and therapeutic challenge. The evolution of the definitions used to describe the disease shows that studies are needed to validate the current definition, especially in pediatrics, where data are very scarce. Copyright © 2013 Sociedade Brasileira de Pediatria. Published by Elsevier Editora Ltda. All rights reserved.
Sparse-view proton computed tomography using modulated proton beams.
Lee, Jiseoc; Kim, Changhwan; Min, Byungjun; Kwak, Jungwon; Park, Seyjoon; Lee, Se Byeong; Park, Sungyong; Cho, Seungryong
2015-02-01
Proton imaging that uses a modulated proton beam and an intensity detector allows relatively fast image acquisition compared to imaging approaches based on trajectory-tracking detectors. In addition, it can be implemented relatively simply on conventional proton therapy equipment. However, the geometric straight-ray model assumed in conventional computed tomography (CT) image reconstruction is challenged by multiple Coulomb scattering and energy straggling in proton imaging. Radiation dose to the patient is another important issue that must be addressed for practical applications. In this work, the authors investigated iterative image reconstruction after deconvolution of sparsely view-sampled data to address these issues in proton CT. Proton projection images were acquired using modulated proton beams and EBT2 film as an intensity detector. Four electron-density cylinders representing normal soft tissues and bone were used as the imaged object and scanned at 40 views equally spaced over 360°. Digitized film images were converted to water-equivalent thickness using an empirically derived conversion curve. To improve image quality, deconvolution-based image deblurring with an empirically acquired point spread function was employed. The authors implemented iterative image reconstruction algorithms such as adaptive steepest descent-projection onto convex sets (ASD-POCS), superiorization method-projection onto convex sets (SM-POCS), superiorization method-expectation maximization (SM-EM), and expectation maximization-total variation minimization (EM-TV). The performance of the four image reconstruction algorithms was analyzed and compared quantitatively via contrast-to-noise ratio (CNR) and root-mean-square error (RMSE). Objects of higher electron density were reconstructed more accurately than those of lower density; the bone, for example, was reconstructed within 1% error. EM-based algorithms produced increased image noise and RMSE as the iterations reached about 20, while the POCS-based algorithms showed monotonic convergence with iterations. The ASD-POCS algorithm outperformed the others in terms of CNR, RMSE, and the accuracy of the reconstructed relative stopping power in the regions of lung and soft tissue. The four iterative algorithms, i.e., ASD-POCS, SM-POCS, SM-EM, and EM-TV, were developed and applied to proton CT image reconstruction. Although the images still need improvement for practical application to treatment planning, proton CT imaging using modulated beams with sparse-view sampling has demonstrated its feasibility.
Iterative-Transform Phase Diversity: An Object and Wavefront Recovery Algorithm
NASA Technical Reports Server (NTRS)
Smith, J. Scott
2011-01-01
Presented is a solution for recovering the wavefront and an extended object. It builds upon the VSM architecture and deconvolution algorithms. Simulations are shown for recovering the wavefront and extended object from noisy data.
NASA Astrophysics Data System (ADS)
Marrugo, Andrés G.; Millán, María S.; Šorel, Michal; Kotera, Jan; Šroubek, Filip
2015-01-01
Retinal images often suffer from blurring, which hinders disease diagnosis and progression assessment. Restoration of the images is carried out by means of blind deconvolution, but the success of the restoration depends on the correct estimation of the point spread function (PSF) that blurred the image. The restoration can be space-invariant or space-variant. Because a retinal image has regions without texture or sharp edges, blind PSF estimation may fail. In this paper we propose a strategy for correctly assessing PSF estimation in retinal images for restoration by means of space-invariant or space-variant blind deconvolution. Our method is based on a decomposition of the estimated PSFs into Zernike coefficients to identify valid PSFs. This significantly improves the quality of the restoration, as revealed by the increased visibility of small details such as small blood vessels and by the absence of restoration artifacts.
NASA Astrophysics Data System (ADS)
Geloni, G.; Saldin, E. L.; Schneidmiller, E. A.; Yurkov, M. V.
2004-08-01
An effective and practical technique based on detection of the coherent synchrotron radiation (CSR) spectrum can be used to characterize the profile function of ultra-short bunches. The CSR spectrum measurement has an important limitation: no spectral phase information is available, so the complete profile function cannot be obtained in general. In this paper we propose to use a constrained deconvolution method for bunch profile reconstruction based on a priori information about the formation of the electron bunch. Application of the method is illustrated with the practically important example of a bunch formed in a single bunch compressor. Downstream of the bunch compressor the bunch charge distribution is strongly non-Gaussian, with a narrow leading peak and a long tail. The longitudinal bunch distribution is derived by measuring the bunch tail with a streak camera and by using a priori information about the profile function.
Denoised Wigner distribution deconvolution via low-rank matrix completion
Lee, Justin; Barbastathis, George
2016-08-23
Wigner distribution deconvolution (WDD) is a decades-old method for recovering phase from intensity measurements. Although the technique offers an elegant linear solution to the quadratic phase retrieval problem, it has seen limited adoption due to its high computational/memory requirements and the fact that the technique often exhibits high noise sensitivity. Here, we propose a method for noise suppression in WDD via low-rank noisy matrix completion. Our technique exploits the redundancy of an object's phase space to denoise its WDD reconstruction. We show in model calculations that our technique outperforms other WDD algorithms as well as modern iterative methods for phase retrieval such as ptychography. Our results suggest that a class of phase retrieval techniques relying on regularized direct inversion of ptychographic datasets (instead of iterative reconstruction techniques) can provide accurate quantitative phase information in the presence of high levels of noise.
A Comparative Study of Different Deblurring Methods Using Filters
NASA Astrophysics Data System (ADS)
Srimani, P. K.; Kavitha, S.
2011-12-01
This paper undertakes the study of restoring Gaussian-blurred images using four deblurring techniques, viz. the Wiener filter, the regularized filter, the Lucy-Richardson deconvolution algorithm and the blind deconvolution algorithm, given information about the point spread function (PSF) of the corrupted blurred image. The techniques are applied to a scanned image of a seven-month fetus in the womb and compared with one another, so as to choose the best technique for restoring the deblurred image. The paper also studies restoration of the blurred image using the regularized filter (RF) with no information about the PSF, applying the same four techniques after forming an initial guess of the PSF. The number of iterations and the weight threshold are determined to choose the best guesses for restoring the image with these techniques.
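The Wiener filter, the first of the four techniques, has a closed form in the frequency domain. A minimal numpy sketch follows; the noise-to-signal ratio is an illustrative tuning parameter, and the PSF is assumed to be centered and zero-padded to the image shape.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=0.01):
    """Frequency-domain Wiener deconvolution with a known PSF.

    `psf` must be centered and zero-padded to the image shape; `nsr` is
    the assumed noise-to-signal power ratio, which suppresses noise
    amplification at frequencies where the PSF response is weak.
    """
    H = np.fft.fft2(np.fft.ifftshift(psf))        # optical transfer function
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)       # Wiener inverse filter
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))
```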
Glenn, W V; Johnston, R J; Morton, P E; Dwyer, S J
1975-01-01
The various limitations to computerized axial tomographic (CT) interpretation are due in part to the 8-13 mm standard tissue plane thickness and in part to the absence of alternative planes of view, such as coronal or sagittal images. This paper describes a method for gathering multiple overlapped 8 mm transverse sections, subjecting these data to a deconvolution process, and then displaying thin (1 mm) transverse as well as reconstructed coronal and sagittal CT images. Verification of the deconvolution technique with phantom experiments is described. Application of the phantom results to human post mortem CT scan data illustrates this method's faithful reconstruction of coronal and sagittal tissue densities when correlated with actual specimen photographs of a sectioned brain. A special CT procedure, limited basal overlap scanning, is proposed for use on current first generation CT scanners without hardware modification.
Real-time blind image deconvolution based on coordinated framework of FPGA and DSP
NASA Astrophysics Data System (ADS)
Wang, Ze; Li, Hang; Zhou, Hua; Liu, Hongjun
2015-10-01
Image restoration plays a crucial role in several important application domains, and as algorithms become more complex their computational requirements grow, creating a significant need for accelerated implementations. In this paper, we focus on an efficient real-time image processing system for blind iterative deconvolution by means of the Richardson-Lucy (R-L) algorithm. We study the characteristics of the algorithm, and an image restoration processing system based on the coordinated framework of FPGA and DSP (CoFD) is presented. Single-precision floating-point processing units with small-scale cascades and special FFT/IFFT processing modules are adopted to guarantee the accuracy of the processing. Finally, comparative experiments are performed: the system can process a blurred image of 128×128 pixels within 32 milliseconds, up to three or four times faster than traditional multi-DSP systems.
Wang, Chuangqi; Choi, Hee June; Kim, Sung-Jin; Desai, Aesha; Lee, Namgyu; Kim, Dohoon; Bae, Yongho; Lee, Kwonmoo
2018-04-27
Cell protrusion is morphodynamically heterogeneous at the subcellular level. However, the mechanism of cell protrusion has been understood based on the ensemble average of actin regulator dynamics. Here, we establish a computational framework called HACKS (deconvolution of heterogeneous activity in coordination of cytoskeleton at the subcellular level) to deconvolve the subcellular heterogeneity of lamellipodial protrusion from live cell imaging. HACKS identifies distinct subcellular protrusion phenotypes based on machine-learning algorithms and reveals their underlying actin regulator dynamics at the leading edge. Using our method, we discover "accelerating protrusion", which is driven by the temporally ordered coordination of Arp2/3 and VASP activities. We validate our finding by pharmacological perturbations and further identify the fine regulation of Arp2/3 and VASP recruitment associated with accelerating protrusion. Our study suggests HACKS can identify specific subcellular protrusion phenotypes susceptible to pharmacological perturbation and reveal how actin regulator dynamics are changed by the perturbation.
Laramée, J A; Arbogast, B; Deinzer, M L
1989-10-01
It is shown that one-electron reduction is a common process that occurs in negative ion liquid secondary ion mass spectrometry (LSIMS) of oligonucleotides and synthetic oligonucleosides and that this process is in competition with proton loss. Deconvolution of the molecular anion cluster reveals contributions from (M-2H).-, (M-H)-, M.-, and (M + H)-. A model based on these ionic species gives excellent agreement with the experimental data. A correlation between the concentration of species arising via one-electron reduction [M.- and (M + H)-] and the electron affinity of the matrix has been demonstrated. The relative intensity of M.- is mass-dependent; this is rationalized on the basis of base-stacking. Base sequence ion formation is theorized to arise from M.- radical anion among other possible pathways.
Simultaneous Denoising, Deconvolution, and Demixing of Calcium Imaging Data
Pnevmatikakis, Eftychios A.; Soudry, Daniel; Gao, Yuanjun; Machado, Timothy A.; Merel, Josh; Pfau, David; Reardon, Thomas; Mu, Yu; Lacefield, Clay; Yang, Weijian; Ahrens, Misha; Bruno, Randy; Jessell, Thomas M.; Peterka, Darcy S.; Yuste, Rafael; Paninski, Liam
2016-01-01
We present a modular approach for analyzing calcium imaging recordings of large neuronal ensembles. Our goal is to simultaneously identify the locations of the neurons, demix spatially overlapping components, and denoise and deconvolve the spiking activity from the slow dynamics of the calcium indicator. Our approach relies on a constrained nonnegative matrix factorization that expresses the spatiotemporal fluorescence activity as the product of a spatial matrix that encodes the spatial footprint of each neuron in the optical field and a temporal matrix that characterizes the calcium concentration of each neuron over time. This framework is combined with a novel constrained deconvolution approach that extracts estimates of neural activity from fluorescence traces, to create a spatiotemporal processing algorithm that requires minimal parameter tuning. We demonstrate the general applicability of our method by applying it to in vitro and in vivo multineuronal imaging data, whole-brain light-sheet imaging data, and dendritic imaging data. PMID:26774160
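The backbone of the approach, stripped of the spatial, temporal, and deconvolution constraints the paper adds, is the factorization Y ≈ AC of the pixels-by-frames movie. A bare NMF sketch (array sizes and component count are illustrative, and this is not the authors' constrained algorithm):

```python
import numpy as np
from sklearn.decomposition import NMF

# Y: movie reshaped to (pixels, frames); each column is one frame.
rng = np.random.default_rng(1)
Y = rng.random((64 * 64, 500))

model = NMF(n_components=20, init="nndsvda", max_iter=300)
A = model.fit_transform(Y)   # (pixels, neurons): spatial footprints
C = model.components_        # (neurons, frames): calcium traces
```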
Constrained maximum consistency multi-path mitigation
NASA Astrophysics Data System (ADS)
Smith, George B.
2003-10-01
Blind deconvolution algorithms can be useful as pre-processors for signal classification algorithms in shallow water. These algorithms remove the distortion of the signal caused by multipath propagation when no knowledge of the environment is available. A framework has been presented [Smith, J. Acoust. Soc. Am. 107 (2000)] in which filters produce signal estimates from each data channel that are as consistent with each other as possible in a least-squares sense. This framework provides a solution to the blind deconvolution problem. One implementation of this framework yields the cross-relation on which EVAM [Gurelli and Nikias, IEEE Trans. Signal Process. 43 (1995)] and Rietsch [Rietsch, Geophysics 62(6) (1997)] processing are based. In this presentation, partially blind implementations that have good noise stability properties are compared using Classification Operating Characteristics (CLOC) analysis. [Work supported by ONR under Program Element 62747N and NRL, Stennis Space Center, MS.]
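The cross-relation underlying EVAM-style processing states that two outputs of a common source satisfy x1*h2 = x2*h1, so the stacked convolution system has the channel pair in its null space. A small sketch solving it via the SVD (the two-channel setup and channel lengths are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import convolution_matrix

def cross_relation_estimate(x1, x2, L):
    """Estimate two channel impulse responses (length L) from outputs
    x1 = s*h1, x2 = s*h2 of an unknown common source s. The stacked
    system [conv(x1) | -conv(x2)] [h2; h1] = 0 is solved by taking the
    SVD null vector; the answer is unique up to a common scale."""
    A = np.hstack([convolution_matrix(x1, L, mode="full"),
                   -convolution_matrix(x2, L, mode="full")])
    _, _, Vt = np.linalg.svd(A)
    v = Vt[-1]                      # least-squares null-space solution
    return v[L:], v[:L]             # h1, h2

# Toy check: one source through two random FIR channels.
rng = np.random.default_rng(0)
s = rng.standard_normal(400)
h1, h2 = rng.standard_normal(8), rng.standard_normal(8)
h1_est, h2_est = cross_relation_estimate(np.convolve(s, h1),
                                         np.convolve(s, h2), L=8)
```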
Bayesian least squares deconvolution
NASA Astrophysics Data System (ADS)
Asensio Ramos, A.; Petit, P.
2015-11-01
Aims: We develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods: We consider LSD under the Bayesian framework and we introduce a flexible Gaussian process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results: We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.
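Under a Gaussian noise model with a Gaussian-process prior, the posterior mean of the LSD profile has the standard closed form z = (MᵀC⁻¹M + K⁻¹)⁻¹ MᵀC⁻¹y. A NumPy sketch of that estimate (the squared-exponential kernel and hyperparameters are illustrative choices, and this is not the authors' released code):

```python
import numpy as np

def bayesian_lsd(M, y, sigma, v, ell=5.0, amp=1e-2):
    """Posterior mean of the LSD profile z under y = M z + noise.
    M: line-mask weight matrix (n_pixels x n_velocity_bins);
    sigma: per-pixel noise; v: velocity grid for the GP kernel."""
    K = amp * np.exp(-0.5 * (v[:, None] - v[None, :])**2 / ell**2)
    Cinv_M = M / sigma[:, None]**2          # C^{-1} M for diagonal noise
    A = M.T @ Cinv_M + np.linalg.inv(K + 1e-10 * np.eye(len(v)))
    return np.linalg.solve(A, M.T @ (y / sigma**2))
```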
NASA Astrophysics Data System (ADS)
Supriyanto, Noor, T.; Suhanto, E.
2017-07-01
The Endut geothermal prospect is located in Banten Province, Indonesia. The geological setting of the area is dominated by Quaternary volcanics, Tertiary sediments, and Tertiary rock intrusions. The area has been through preliminary geological, geochemical, and geophysical studies. As part of the geophysical studies, gravity measurements were carried out and analyzed in order to understand the geological conditions, especially the subsurface fault structures that control the geothermal system in the Endut area. After preconditioning was applied to the gravity data, the complete Bouguer anomaly was analyzed using advanced derivative methods such as the Horizontal Gradient (HG) and Euler Deconvolution (ED) to clarify the existence of fault structures. These techniques detected the boundaries of anomalous bodies and fault structures, which were compared with the lithologies in the geological map. The results of the analysis will be useful in constructing a more realistic conceptual model of the Endut geothermal area.
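Euler deconvolution rests on Euler's homogeneity equation, (x−x₀)∂T/∂x + (y−y₀)∂T/∂y + (z−z₀)∂T/∂z = N(B−T), solved by least squares over moving data windows. A single-window sketch (the structural index N depends on the assumed source type, e.g. roughly 0-1 for contact- or dike-like sources):

```python
import numpy as np

def euler_window(x, y, z, T, Tx, Ty, Tz, N=1.0):
    """Solve Euler's homogeneity equation in one moving window.
    x, y, z: observation coordinates; T: field anomaly;
    Tx, Ty, Tz: its spatial gradients; N: structural index.
    Returns (x0, y0, z0, B): source position and regional field."""
    A = np.column_stack([Tx, Ty, Tz, N * np.ones_like(T)])
    b = x * Tx + y * Ty + z * Tz + N * T
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol
```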
Determination of element affinities by density fractionation of bulk coal samples
Querol, X.; Klika, Z.; Weiss, Z.; Finkelman, R.B.; Alastuey, A.; Juan, R.; Lopez-Soler, A.; Plana, F.; Kolker, A.; Chenery, S.R.N.
2001-01-01
A review has been made of the various methods of determining major and trace element affinities for different phases, both mineral and organic in coals, citing their various strengths and weaknesses. These include mathematical deconvolution of chemical analyses, direct microanalysis, sequential extraction procedures and density fractionation. A new methodology combining density fractionation with mathematical deconvolution of chemical analyses of whole coals and their density fractions has been evaluated. These coals formed part of the IEA-Coal Research project on the Modes of Occurrence of Trace Elements in Coal. Results were compared to a previously reported sequential extraction methodology and showed good agreement for most elements. For particular elements (Be, Mo, Cu, Se and REEs) in specific coals where disagreement was found, it was concluded that the occurrence of rare trace element bearing phases may account for the discrepancy, and modifications to the general procedure must be made to account for these.
Further optimization of SeDDaRA blind image deconvolution algorithm and its DSP implementation
NASA Astrophysics Data System (ADS)
Wen, Bo; Zhang, Qiheng; Zhang, Jianlin
2011-11-01
An efficient algorithm for blind image deconvolution and its high-speed implementation are of great practical value. A further optimization of SeDDaRA is developed, spanning algorithm structure to numerical calculation methods. The main optimizations are: modularization of the structure for implementation feasibility, reduction of the data computation and the dependency on the 2D FFT/IFFT, and acceleration of the power operation via a segmented look-up table. The resulting Fast SeDDaRA is proposed and specialized for low complexity. As the final implementation, a hardware image restoration system is built using multi-DSP parallel processing. Experimental results show that the processing time and memory demand of Fast SeDDaRA decrease by at least 50%, and the data throughput of the image restoration system exceeds 7.8 Msps. The optimization proves efficient and feasible, and Fast SeDDaRA is able to support real-time applications.
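The segmented look-up table mentioned above can be sketched in a few lines: precompute x^α on a coarse grid, then replace each run-time pow() call with a segment lookup plus linear interpolation. Grid size, range, and exponent are illustrative choices:

```python
import numpy as np

def build_pow_lut(alpha, n_segments=256, lo=1e-6, hi=1.0):
    """Precompute x**alpha on a log-spaced grid; at run time a value is
    mapped to its segment and linearly interpolated, replacing the
    costly pow() call (the kind of table a DSP kernel would hold)."""
    grid = np.geomspace(lo, hi, n_segments)
    return grid, grid ** alpha

def lut_pow(x, grid, table):
    return np.interp(x, grid, table)   # segment lookup + linear interp

grid, table = build_pow_lut(alpha=0.4)
x = np.linspace(1e-4, 1.0, 5)
print(lut_pow(x, grid, table), x ** 0.4)   # LUT vs. exact values
```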
XAP, a program for deconvolution and analysis of complex X-ray spectra
Quick, James E.; Haleby, Abdul Malik
1989-01-01
The X-ray analysis program (XAP) is a spectral-deconvolution program written in BASIC and specifically designed to analyze complex spectra produced by energy-dispersive X-ray analytical systems (EDS). XAP compensates for spectrometer drift, utilizes digital filtering to remove background from spectra, and solves for element abundances by least-squares, multiple-regression analysis. Rather than base analyses on only a few channels, broad spectral regions of a sample are reconstructed from standard reference spectra. The effects of this approach are (1) elimination of tedious spectrometer adjustments, (2) removal of background independent of sample composition, and (3) automatic correction for peak overlaps. Although the program was written specifically to operate a KEVEX 7000 X-ray fluorescence analytical system, it could be adapted (with minor modifications) to analyze spectra produced by scanning electron microscopes, electron microprobes, and other probes, and X-ray diffractometer patterns obtained from whole-rock powders.
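The least-squares, multiple-regression step can be sketched as one matrix solve: reconstruct the background-filtered sample spectrum as a linear combination of reference spectra. The file names and array layout below are hypothetical:

```python
import numpy as np

# Rows of R: background-filtered reference spectra of pure elements
# (hypothetical input files); s: the filtered sample spectrum over
# the same channels. Element abundances follow from one regression.
R = np.loadtxt("reference_spectra.txt")   # shape (n_elements, n_channels)
s = np.loadtxt("sample_spectrum.txt")     # shape (n_channels,)
abundances, residual, *_ = np.linalg.lstsq(R.T, s, rcond=None)
```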
Correspondence regarding Zhong et al., BMC Bioinformatics 2013 Mar 7;14:89.
Kuhn, Alexandre
2014-11-28
Computational expression deconvolution aims to estimate the contribution of individual cell populations to expression profiles measured in samples of heterogeneous composition. Zhong et al. recently proposed the Digital Sorting Algorithm (DSA; BMC Bioinformatics 2013 Mar 7;14:89) and showed that they could accurately estimate population-specific expression levels and expression differences between two populations. They compared DSA with Population-Specific Expression Analysis (PSEA), a previous deconvolution method that we developed to detect expression changes occurring within the same population between two conditions (e.g., disease versus non-disease). However, Zhong et al. compared PSEA-derived specific expression levels across different cell populations. Specific expression levels obtained with PSEA cannot be directly compared across different populations because they are on a relative scale. They are nonetheless accurate, as we demonstrate by deconvolving the same dataset used by Zhong et al., and, importantly, they allow for comparison of population-specific expression across conditions.
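As a generic illustration of what expression deconvolution computes (not the DSA or PSEA implementation), population-specific levels for one gene can be recovered from bulk measurements and known cell-type proportions by non-negative least squares; the numbers below are made up:

```python
import numpy as np
from scipy.optimize import nnls

# Bulk expression of one gene modeled as a non-negative mixture of
# population-specific levels, weighted by cell-type proportions P.
P = np.array([[0.7, 0.3],      # sample 1: 70% pop A, 30% pop B
              [0.4, 0.6],      # sample 2
              [0.2, 0.8]])     # sample 3
y = np.array([10.2, 13.9, 16.1])   # measured bulk expression
levels, rnorm = nnls(P, y)          # population-specific levels
```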
Simulation and analysis on ultrasonic testing for the cement grouting defects of the corrugated pipe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qingbang, Han; Ling, Chen; Changping, Zhu
2014-02-18
Defects in the cement grouting of prestressed corrugated pipes may directly impair bridge safety. In this paper, sound field propagation in concrete structures with corrugated pipes, and the influence of various defects, are simulated and analyzed using the finite element method. The simulation results demonstrate a highly complex propagation characteristic due to multiple reflection, refraction, and scattering: the signals scattered by metal are very strong, while the signals scattered by an air bubble are weaker. The influence of a defect in both the time and frequency domains is found through deconvolution treatment. In the time domain, the deconvolution signals corresponding to larger defects display a larger head-wave amplitude and a shorter arrival time than those of smaller defects; in the frequency domain, a larger defect also shows a stronger amplitude, a lower center frequency, and a lower cutoff frequency.
ESO/ST-ECF Data Analysis Workshop, 5th, Garching, Germany, Apr. 26, 27, 1993, Proceedings
NASA Astrophysics Data System (ADS)
Grosbol, Preben; de Ruijsscher, Resy
1993-01-01
Various papers on astronomical data analysis are presented. Individual topics addressed include: surface photometry of early-type galaxies, wavelet transform and adaptive filtering, a package for surface photometry of galaxies, calibration of large-field mosaics, surface photometry of galaxies with HST, wavefront-supported image deconvolution, seeing effects on elliptical galaxies, a multiple-algorithm deconvolution program, enhancement of Skylab X-ray images, MIDAS procedures for the image analysis of E-S0 galaxies, photometric data reductions under MIDAS, crowded-field photometry with deconvolved images, and the DENIS Deep Near Infrared Survey. Also discussed are: analysis of astronomical time series, detection of low-amplitude stellar pulsations, a new SOT method for frequency analysis, chaotic attractor reconstruction and applications to variable stars, reconstructing a 1D signal from irregular samples, automatic analysis for time series with large gaps, prospects for content-based image retrieval, and a redshift survey in the South Galactic Pole region.
Peckner, Ryan; Myers, Samuel A; Jacome, Alvaro Sebastian Vaca; Egertson, Jarrett D; Abelin, Jennifer G; MacCoss, Michael J; Carr, Steven A; Jaffe, Jacob D
2018-05-01
Mass spectrometry with data-independent acquisition (DIA) is a promising method to improve the comprehensiveness and reproducibility of targeted and discovery proteomics, in theory by systematically measuring all peptide precursors in a biological sample. However, the analytical challenges involved in discriminating between peptides with similar sequences in convoluted spectra have limited its applicability in important cases, such as the detection of single-nucleotide polymorphisms (SNPs) and alternative site localizations in phosphoproteomics data. We report Specter (https://github.com/rpeckner-broad/Specter), an open-source software tool that uses linear algebra to deconvolute DIA mixture spectra directly through comparison to a spectral library, thus circumventing the problems associated with typical fragment-correlation-based approaches. We validate the sensitivity of Specter and its performance relative to that of other methods, and show that Specter is able to successfully analyze cases involving highly similar peptides that are typically challenging for DIA analysis methods.
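Reduced to its simplest form, the linear-algebra idea is to express the observed mixture spectrum as a non-negative combination of library spectra, with the coefficients giving each precursor's contribution. A sketch of that idea (not Specter's actual implementation; the input files are hypothetical):

```python
import numpy as np
from scipy.optimize import nnls

# Each column of L is one library spectrum binned onto a common
# fragment-m/z grid; m is the observed DIA mixture spectrum.
L = np.loadtxt("library_spectra.txt")   # (n_fragment_bins, n_peptides)
m = np.loadtxt("mixture_spectrum.txt")  # (n_fragment_bins,)
coeffs, _ = nnls(L, m)                  # per-peptide contributions
```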
Blind channel estimation and deconvolution in colored noise using higher-order cumulants
NASA Astrophysics Data System (ADS)
Tugnait, Jitendra K.; Gummadavelli, Uma
1994-10-01
Existing approaches to blind channel estimation and deconvolution (equalization) focus exclusively on channel or inverse-channel impulse response estimation. It is well known that the quality of the deconvolved output also depends crucially upon the noise statistics. Typically it is assumed that the noise is white and the signal-to-noise ratio is known. In this paper we remove these restrictions. Both the channel impulse response and the noise model are estimated from the higher-order (e.g., fourth-order) cumulant function and the (second-order) correlation function of the received data via a least-squares cumulant/correlation matching criterion. It is assumed that the higher-order cumulant function of the noise vanishes (e.g., Gaussian noise, as is the case for digital communications). Consistency of the proposed approach is established under certain mild sufficient conditions. The approach is illustrated via simulation examples involving blind equalization of digital communications signals.
Jo, Javier A.; Fang, Qiyin; Marcu, Laura
2007-01-01
We report a new deconvolution method for fluorescence lifetime imaging microscopy (FLIM) based on the Laguerre expansion technique. The performance of this method was tested on synthetic and real FLIM images. The following interesting properties of this technique were demonstrated. 1) The fluorescence intensity decay can be estimated simultaneously for all pixels, without a priori assumption of the decay functional form. 2) The computation speed is extremely fast, performing at least two orders of magnitude faster than current algorithms. 3) The estimated maps of Laguerre expansion coefficients provide a new domain for representing FLIM information. 4) The number of images required for the analysis is relatively small, allowing reduction of the acquisition time. These findings indicate that the developed Laguerre expansion technique for FLIM analysis represents a robust and extremely fast deconvolution method that enables practical applications of FLIM in medicine, biology, biochemistry, and chemistry. PMID:19444338
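The essence of the technique is that expanding the decay on a Laguerre basis turns deconvolution into a single linear least-squares solve, with no assumed functional form for the decay. A sketch using sampled continuous Laguerre functions (the basis construction and scale parameter are illustrative, not the authors' code):

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.special import eval_laguerre

def flim_laguerre_fit(y, irf, t, n_basis=6, scale=2.0):
    """Fit a measured fluorescence decay y(t) as a Laguerre-basis
    expansion convolved with the instrument response irf. The
    coefficients come from one least-squares solve, so no single- or
    multi-exponential decay model is assumed."""
    u = t / scale                                  # time-scale parameter
    basis = [np.exp(-u / 2) * eval_laguerre(k, u) for k in range(n_basis)]
    X = np.column_stack([fftconvolve(b, irf)[: len(t)] for b in basis])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs, X @ coeffs                      # expansion and fitted decay
```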
Gokhin, David S.; Fowler, Velia M.
2016-01-01
The periodically arranged thin filaments within the striated myofibrils of skeletal and cardiac muscle have precisely regulated lengths, which can change in response to developmental adaptations, pathophysiological states, and genetic perturbations. We have developed a user-friendly, open-source ImageJ plugin that provides a graphical user interface (GUI) for super-resolution measurement of thin filament lengths by applying Distributed Deconvolution (DDecon) analysis to periodic line scans collected from fluorescence images. In the workflow presented here, we demonstrate thin filament length measurement using a phalloidin-stained cryosection of mouse skeletal muscle. The DDecon plugin is also capable of measuring distances of any periodically localized fluorescent signal from the Z- or M-line, as well as distances between successive Z- or M-lines, providing a broadly applicable tool for quantitative analysis of muscle cytoarchitecture. These functionalities can also be used to analyze periodic fluorescence signals in nonmuscle cells. PMID:27644080
A moral history of the evolution of a caste of workers.
Samuels, S W
1996-01-01
Using a dialectic method of philosophic inquiry, the actual ethical, legal, and social situation associated with genetic testing of beryllium-exposed workers in Department of Energy nuclear weapons facilities for markers of chronic beryllium disease is described. The cultural evolution of a caste system in a similar situation, and its social and biological implications, among uranium miners in the Erz Gebirge of Central Europe and on the Colorado Plateau of the United States, marked by suicide and lung disease, including cancer, is also described. The historically persistent social disease resulting from these situations, the Masada Syndrome, named for an analogous situation in biblical times, is characterized. Cultural intervention, a necessary condition for the ethical progression of the Human Genome Project, is outlined. PMID:8933047
Friedberg, Joseph S
2013-01-01
Malignant pleural mesothelioma remains an incurable disease for which the role of surgery remains controversial. Though not yet clearly defined, there does appear to be a subset of patients who benefit from a surgery-based multimodal treatment plan beyond what would be expected with current nonoperative therapies. As with other pleural cancers, it is probably not possible to achieve a microscopic complete resection with any operation. The goal of surgery in this setting, therefore, is to remove all visible and palpable disease: a macroscopic complete resection. There are basically two surgical approaches to achieve a macroscopic complete resection, lung-sacrificing and lung-sparing. Lung-sacrificing surgery, which likely leaves behind the least amount of microscopic disease, is accomplished as an extrapleural pneumonectomy, a well-established and standardized operation. Lung-sparing surgery for malignant pleural mesothelioma, on the other hand, does not currently enjoy any degree of consistency. Not only are the reported variations on the operation widely disparate, but even the nomenclature used to describe it is highly variable. Often the selection of a lung-sparing approach is reported as an intraoperative decision that hinges on the bulk of the cancer and/or the degree of extension into the pulmonary fissures. This article describes the current evolution of a lung-sparing procedure, radical pleurectomy, which has been used to achieve a macroscopic complete resection in over a hundred patients. Many of these cases involved bulky cancers, some exceeding two liters in volume, often with extensive invasion of the pulmonary fissures. With the described technique there has not yet been an instance in which conversion to extrapleural pneumonectomy would have contributed to the ability to achieve a macroscopic complete resection. Whether radical pleurectomy is the optimal approach for any or all patients undergoing surgery-based multimodal treatment for malignant pleural mesothelioma is not known, but the described technique does offer an operation that can serve as a consistent foundation for any surgery-based treatment strategy in which achieving a macroscopic complete resection, while sparing the lung, is desired.
Szram, Joanna; Schofield, Susie J; Cosgrove, Martin P; Cullinan, Paul
2013-11-01
While the acute respiratory risks of welding are well characterised, more chronic effects, including those on lung function, are less clear. We carried out a systematic review of published longitudinal studies of lung function decline in welders. Original cohort studies documenting two or more sequential measurements of lung function were reviewed. Meta-analysis was carried out on studies with suitable data on forced expiratory volume in 1 s (FEV1). Seven studies were included; their quality (measured on the Newcastle-Ottawa scale) was good, although exposure assessment was limited and the studies showed significant heterogeneity. Five had data suitable for meta-analysis; the pooled estimate of the difference in FEV1 decline between welders and nonwelders was -9.0 mL · year(-1) (95% CI -22.5-4.5; p=0.193). The pooled estimates of difference in annual FEV1 decline between welders and referents who smoked was -13.7 mL · year(-1) (95% CI -33.6-6.3; p=0.179). For welders and referents who did not smoke the estimated difference was -3.8 mL · year(-1) (95% CI -20.2-12.6; p=0.650). Symptom prevalence data were mainly narrative; smoking appeared to have the greatest effect on symptom evolution. Collectively, available longitudinal data on decline of lung function in welders and respiratory symptoms suggest a greater effect in those who smoke, supporting a focus on smoking cessation as well as control of fume exposure in this trade. Further prospective studies are required to confirm these findings.
Evolution of silver nanoparticles in the rat lung investigated by X-ray absorption spectroscopy
Davidson, R. Andrew; Anderson, Donald S.; Van Winkle, Laura S.; ...
2014-12-16
Following a 6-h inhalation exposure to aerosolized 20 and 110 nm diameter silver nanoparticles, lung tissues from rats were investigated with X-ray absorption spectroscopy, which can identify the chemical state of silver species. Lung tissues were processed immediately after sacrifice of the animals at 0, 1, 3, and 7 days post exposure and the samples were stored in an inert and low-temperature environment until measured. We found that it is critical to follow a proper processing, storage and measurement protocol; otherwise only silver oxides are detected after inhalation even for the larger nanoparticles. The results of X-ray absorption spectroscopy measurements taken in air at 85 K suggest that the dominating silver species in all the postexposure lung tissues were metallic silver, not silver oxide, or solvated silver cations. The results further indicate that the silver nanoparticles in the tissues were transformed from the original nanoparticles to other forms of metallic silver nanomaterials and the rate of this transformation depended on the size of the original nanoparticles. Furthermore, we found that 20 nm diameter silver nanoparticles were significantly modified after aerosolization and 6-h inhalation/deposition, whereas larger, 110 nm diameter nanoparticles were largely unchanged. Over the seven-day postexposure period the smaller 20 nm silver nanoparticles underwent less change in the lung tissue than the larger 110 nm silver nanoparticles. In contrast, silica-coated gold nanoparticles did not undergo any modification processes and remained as the initial nanoparticles throughout the 7-day study period.
NASA Astrophysics Data System (ADS)
Zhou, Yan; Liu, Cheng-Hui; Pu, Yang; Cheng, Gangge; Yu, Xinguang; Zhou, Lixin; Lin, Dongmei; Zhu, Ke; Alfano, Robert R.
2017-02-01
Resonance Raman (RR) spectroscopy offers a novel optical biopsy method for cancer discrimination by means of enhanced Raman scattering. It is widely acknowledged that the RR spectrum of tissue is a superposition of the spectra of its key building-block molecules. In this study, the RR spectra of human brain metastases of lung cancer and of normal brain tissue, excited at a selected visible wavelength of 532 nm, are used to explore spectral changes caused by tumor evolution. The potential application of RR spectra of human brain metastases of lung cancer was investigated by blind source separation, namely Principal Component Analysis (PCA). PCA is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components (PCs). The results show significant RR spectral differences between human brain metastases of lung cancer and normal brain tissue when analyzed by PCA. To evaluate the efficacy of this approach for cancer detection, a linear discriminant analysis (LDA) classifier is used to calculate sensitivity and specificity, and receiver operating characteristic (ROC) curves are used to evaluate the performance of this criterion. Excellent sensitivity (0.97), specificity (close to 1.00), and area under the ROC curve (AUC, 0.99) are achieved under optimal conditions. This research demonstrates that RR spectroscopy is effective for detecting changes in tissue due to the development of brain metastases of lung cancer, and that RR spectroscopy analyzed by blind source separation may have the potential to become a new armamentarium.
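The PCA-plus-LDA pipeline with ROC evaluation is straightforward to assemble in scikit-learn. A sketch with stand-in arrays for the spectra (component count and data shapes are illustrative assumptions, and a real analysis would use cross-validation rather than training-set AUC):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline

# X: RR spectra (samples x wavenumber bins); y: 1 = metastasis, 0 = normal
# (random stand-ins for the measured data).
rng = np.random.default_rng(0)
X, y = rng.random((60, 800)), rng.integers(0, 2, 60)

clf = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
clf.fit(X, y)
auc = roc_auc_score(y, clf.predict_proba(X)[:, 1])   # ROC analysis
```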