Science.gov

Sample records for denoising inferred functional

  1. Medical image denoising using one-dimensional singularity function model.

    PubMed

    Luo, Jianhua; Zhu, Yuemin; Hiba, Bassem

    2010-03-01

    A novel denoising approach is proposed that is based on a spectral data substitution mechanism using a mathematical model of one-dimensional singularity function analysis (1-D SFA). The method consists of dividing the complete spectral domain of the noisy signal into two subsets: the preserved set, where the spectral data are kept unchanged, and the substitution set, where the original spectral data, having a lower signal-to-noise ratio (SNR), are replaced by those reconstructed using the 1-D SFA model. The preserved set containing the original spectral data is determined according to the SNR of the spectrum. The singular points and singularity degrees in the 1-D SFA model are obtained by calculating finite differences of the noisy signal. The theoretical formulation and experimental results demonstrate that the proposed method achieves more efficient denoising while introducing less distortion, and presents a significant improvement over conventional denoising methods.
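
    The substitution mechanism itself is easy to sketch. The toy code below is illustrative, not the paper's method: the `model` argument stands in for the 1-D SFA reconstruction, and treating the lowest frequencies as the high-SNR preserved set is a simplifying assumption.

    ```python
    import numpy as np

    def substitute_spectrum(noisy, model, keep_frac=0.2):
        """Keep the high-SNR part of the noisy spectrum (here: the lowest
        frequencies) and substitute the remainder with the model's spectrum."""
        F_noisy = np.fft.rfft(noisy)
        F_model = np.fft.rfft(model)   # stand-in for the 1-D SFA reconstruction
        k = int(keep_frac * F_noisy.size)
        F_out = F_model.copy()
        F_out[:k] = F_noisy[:k]        # preserved set: original data kept unchanged
        return np.fft.irfft(F_out, n=noisy.size)
    ```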

  2. Point Set Denoising Using Bootstrap-Based Radial Basis Function.

    PubMed

    Liew, Khang Jie; Ramli, Ahmad; Abd Majid, Ahmad

    2016-01-01

    This paper examines the application of bootstrap test-error estimation for radial basis functions, specifically thin-plate spline fitting, in surface smoothing. Noisy data are a common issue in point set models generated by 3D scanning devices, and hence point set denoising is one of the main concerns in point set modelling. Bootstrap test-error estimation, applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest-neighbour search and then projects the point set onto the approximated thin-plate spline surface. In this way, denoising is achieved while features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study.
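
    A minimal sketch of this kind of bootstrap search for a smoothing parameter, using SciPy's thin-plate `Rbf` (the data, candidate grid, and the use of unique bootstrap indices to keep the interpolation matrix well conditioned are our assumptions, not the authors' protocol):

    ```python
    import numpy as np
    from scipy.interpolate import Rbf

    rng = np.random.default_rng(0)
    x, y = rng.uniform(-1, 1, (2, 200))
    z = np.sin(3 * x) * np.cos(3 * y) + 0.1 * rng.standard_normal(200)

    def bootstrap_error(smooth, B=20):
        """Average out-of-bag squared error of a smoothed thin-plate spline fit."""
        errs = []
        for _ in range(B):
            tr = np.unique(rng.integers(0, x.size, x.size))  # unique bootstrap draw
            oob = np.setdiff1d(np.arange(x.size), tr)        # held-out points
            f = Rbf(x[tr], y[tr], z[tr], function="thin_plate", smooth=smooth)
            errs.append(np.mean((f(x[oob], y[oob]) - z[oob]) ** 2))
        return float(np.mean(errs))

    best_smooth = min([1e-3, 1e-2, 1e-1, 1.0], key=bootstrap_error)
    ```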

  3. Denoising Single-Molecule FRET Trajectories with Wavelets and Bayesian Inference

    PubMed Central

    Taylor, J. Nick; Makarov, Dmitrii E.; Landes, Christy F.

    2010-01-01

    A method to denoise single-molecule fluorescence resonance energy transfer (smFRET) trajectories using wavelet detail thresholding and Bayesian inference is presented. Bayesian methods are developed to identify fluorophore photoblinks in the time trajectories. Simulated data are used to quantify the improvement in static and dynamic data analysis. Application of the method to experimental smFRET data shows that it distinguishes photoblinks from large shifts in smFRET efficiency while maintaining the important advantage of an unbiased approach. Known sources of experimental noise are examined and quantified as a means to remove their contributions via soft thresholding of wavelet coefficients. A wavelet decomposition algorithm is described, and thresholds are produced through knowledge of noise parameters in the discrete-time photon signals. Reconstruction of the signals from thresholded coefficients produces signals that contain noise arising only from unquantifiable parameters. The method is applied to simulated and observed smFRET data, and it is found that the denoised data retain their underlying dynamic properties, but with increased resolution. PMID:20074517
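
    The wavelet-thresholding half of such a pipeline is straightforward to sketch with PyWavelets. The paper derives thresholds from known photon-noise parameters; the MAD-based universal threshold below is a common stand-in, not the authors' rule:

    ```python
    import numpy as np
    import pywt

    def wavelet_soft_denoise(x, wavelet="sym8", level=4):
        """Soft-threshold the wavelet detail coefficients of a 1-D signal."""
        coeffs = pywt.wavedec(x, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise scale from finest details
        t = sigma * np.sqrt(2 * np.log(len(x)))          # universal threshold
        coeffs[1:] = [pywt.threshold(c, t, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)
    ```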

  4. A Neuro-Fuzzy Inference System Combining Wavelet Denoising, Principal Component Analysis, and Sequential Probability Ratio Test for Sensor Monitoring

    SciTech Connect

    Na, Man Gyun; Oh, Seungrohk

    2002-11-15

    A neuro-fuzzy inference system combined with wavelet denoising, principal component analysis (PCA), and sequential probability ratio test (SPRT) methods has been developed to monitor a relevant sensor using the information of other sensors. The parameters of the neuro-fuzzy inference system that estimates the relevant sensor signal are optimized by a genetic algorithm and a least-squares algorithm. The wavelet denoising technique was applied to remove noise components from the input signals to the neuro-fuzzy system. PCA was used to reduce the dimension of the input space without losing a significant amount of information, which shortens the time needed to train the neuro-fuzzy system, simplifies the structure of the neuro-fuzzy inference system, and eases the selection of its input signals. Using the residuals between the estimated and measured signals, the SPRT is applied to detect whether the sensors are degraded. The proposed sensor-monitoring algorithm was verified through applications to the pressurizer water level, pressurizer pressure, and hot-leg temperature sensors in pressurized water reactors.
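
    The SPRT step admits a compact sketch. For Gaussian residuals, Wald's test accumulates the log-likelihood ratio between "healthy" (zero-mean) and "degraded" (mean shift m1) hypotheses; the parameter values here are illustrative assumptions:

    ```python
    import numpy as np

    def sprt(residuals, sigma, m1, alpha=0.01, beta=0.01):
        """Sequential probability ratio test on sensor residuals.
        H0: mean 0 (healthy) vs. H1: mean m1 (degraded), known noise std sigma."""
        lower = np.log(beta / (1 - alpha))    # accept H0 below this
        upper = np.log((1 - beta) / alpha)    # accept H1 above this
        llr = 0.0
        for k, r in enumerate(residuals):
            llr += (m1 / sigma**2) * (r - m1 / 2.0)  # Gaussian log-likelihood ratio
            if llr >= upper:
                return "degraded", k
            if llr <= lower:
                return "healthy", k
        return "undecided", len(residuals)
    ```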

  5. New Denoising Method Based on Empirical Mode Decomposition and Improved Thresholding Function

    NASA Astrophysics Data System (ADS)

    Mohguen, Wahiba; Bekka, Raïs El'hadi

    2017-01-01

    This paper presents a new denoising method, called EMD-ITF, based on the Empirical Mode Decomposition (EMD) and Improved Thresholding Function (ITF) algorithms. EMD is applied to adaptively decompose a noisy signal into intrinsic mode functions (IMFs). All noisy IMFs are then thresholded by the improved thresholding function to suppress noise and improve the signal-to-noise ratio (SNR). The method was tested on simulated and real data, and the results were compared to EMD-based signal denoising methods using soft thresholding. The results showed the superior performance of the new EMD-ITF denoising over the traditional approach. Performance was evaluated in terms of SNR (in dB) and mean square error (MSE).
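
    The EMD-plus-thresholding skeleton can be sketched as follows. The PyEMD package (`EMD-signal` on PyPI) is an assumed implementation, and plain soft thresholding with a MAD-based threshold stands in for the paper's ITF, which is not reproduced here:

    ```python
    import numpy as np
    from PyEMD import EMD   # assumed: provided by the EMD-signal package

    t = np.linspace(0, 1, 1000)
    noisy = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.default_rng(0).standard_normal(t.size)

    imfs = EMD().emd(noisy, t)          # adaptive decomposition into IMFs
    den = imfs[-1].copy()               # keep the final trend/residue component as-is
    for m in imfs[:-1]:                 # soft-threshold the oscillatory IMFs
        thr = np.median(np.abs(m)) / 0.6745 * np.sqrt(2 * np.log(m.size))
        den += np.sign(m) * np.maximum(np.abs(m) - thr, 0.0)
    ```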

  6. Total variation denoising of probability measures using iterated function systems with probabilities

    NASA Astrophysics Data System (ADS)

    La Torre, Davide; Mendivil, Franklin; Vrscay, Edward R.

    2017-01-01

    In this paper we present a total variation denoising problem for probability measures using the set of fixed-point probability measures of iterated function systems with probabilities (IFSP). By means of the Collage Theorem for contraction mappings, we provide an upper bound for this problem that can be solved by determining a set of probabilities.
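
    In our notation, the bound in question is the standard Collage Theorem estimate: if T is the Markov operator of an IFSP with contractivity factor c < 1 in a suitable metric d on probability measures (e.g., the Monge-Kantorovich metric) and μ̄ is its fixed point, then

    ```latex
    d(\mu, \bar{\mu}) \;\le\; \frac{1}{1-c}\, d(\mu, T\mu),
    ```

    so minimizing the collage distance d(μ, Tμ) over the IFSP probabilities bounds the denoising objective from above.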

  7. A New Adaptive Diffusive Function for Magnetic Resonance Imaging Denoising Based on Pixel Similarity

    PubMed Central

    Heydari, Mostafa; Karami, Mohammad Reza

    2015-01-01

    Although there are many methods for image denoising, partial differential equation (PDE)-based denoising has attracted much attention in the field of medical image processing, such as magnetic resonance imaging (MRI). The main advantage of the PDE-based denoising approach lies in its ability to smooth images in a nonlinear way, effectively removing noise while preserving edges through anisotropic diffusion controlled by the diffusive function. This function was first introduced by Perona and Malik (P-M) in their model. They proposed two functions that are the most frequently used in PDE-based methods. Since these functions consider only the gradient information of a diffused pixel, they cannot remove noise in images with a low signal-to-noise ratio (SNR). In this paper we propose a modified diffusive function with fractional power, based on pixel similarity, to improve the P-M model at low SNR. We also show that the proposed function stabilizes the P-M method. Experimental results show that our proposed function, a modified version of the P-M function, improves the SNR and preserves edges better than the P-M functions at low SNR. PMID:26955563

  8. A New Adaptive Diffusive Function for Magnetic Resonance Imaging Denoising Based on Pixel Similarity.

    PubMed

    Heydari, Mostafa; Karami, Mohammad Reza

    2015-01-01

    Although there are many methods for image denoising, partial differential equation (PDE)-based denoising has attracted much attention in the field of medical image processing, such as magnetic resonance imaging (MRI). The main advantage of the PDE-based denoising approach lies in its ability to smooth images in a nonlinear way, effectively removing noise while preserving edges through anisotropic diffusion controlled by the diffusive function. This function was first introduced by Perona and Malik (P-M) in their model. They proposed two functions that are the most frequently used in PDE-based methods. Since these functions consider only the gradient information of a diffused pixel, they cannot remove noise in images with a low signal-to-noise ratio (SNR). In this paper we propose a modified diffusive function with fractional power, based on pixel similarity, to improve the P-M model at low SNR. We also show that the proposed function stabilizes the P-M method. Experimental results show that our proposed function, a modified version of the P-M function, improves the SNR and preserves edges better than the P-M functions at low SNR.
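
    For reference, a minimal sketch of the baseline P-M diffusion that such work modifies, using the classic exponential conductance (the paper's pixel-similarity fractional-power function is not reproduced; periodic boundaries via `np.roll` and the parameter values are simplifications):

    ```python
    import numpy as np

    def perona_malik(img, n_iter=30, kappa=15.0, lam=0.2):
        """Classic anisotropic diffusion with the P-M exponential conductance."""
        u = img.astype(float).copy()
        g = lambda d: np.exp(-(d / kappa) ** 2)   # P-M diffusive function
        for _ in range(n_iter):
            # differences to the four neighbours (wrap-around boundaries)
            dN = np.roll(u, -1, axis=0) - u
            dS = np.roll(u,  1, axis=0) - u
            dE = np.roll(u, -1, axis=1) - u
            dW = np.roll(u,  1, axis=1) - u
            u += lam * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
        return u
    ```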

  9. Image denoising in bidimensional empirical mode decomposition domain: the role of Student's probability distribution function.

    PubMed

    Lahmiri, Salim

    2016-03-01

    Hybridisation of the bi-dimensional empirical mode decomposition (BEMD) with denoising techniques has been proposed in the literature as an effective approach for image denoising. In this Letter, the Student's probability density function is introduced in the computation of the mean envelope of the data during the BEMD sifting process to make it robust to values that are far from the mean. The resulting BEMD is denoted tBEMD. In order to show the effectiveness of the tBEMD, several image denoising techniques in tBEMD domain are employed; namely, fourth order partial differential equation (PDE), linear complex diffusion process (LCDP), non-linear complex diffusion process (NLCDP), and the discrete wavelet transform (DWT). Two biomedical images and a standard digital image were considered for experiments. The original images were corrupted with additive Gaussian noise with three different levels. Based on peak-signal-to-noise ratio, the experimental results show that PDE, LCDP, NLCDP, and DWT all perform better in the tBEMD than in the classical BEMD domain. It is also found that tBEMD is faster than classical BEMD when the noise level is low. When it is high, the computational cost in terms of processing time is similar. The effectiveness of the presented approach makes it promising for clinical applications.

  10. The impact of denoising on independent component analysis of functional magnetic resonance imaging data.

    PubMed

    Pignat, Jean Michel; Koval, Oleksiy; Van De Ville, Dimitri; Voloshynovskiy, Sviatoslav; Michel, Christoph; Pun, Thierry

    2013-02-15

    Independent component analysis (ICA) is a suitable method for decomposing functional magnetic resonance imaging (fMRI) activity into spatially independent patterns. Practice has revealed that low-pass filtering prior to ICA may improve ICA results by reducing noise and possibly by increasing source smoothness, which may enhance source independence; however, it eliminates useful information in high frequency features and it amplifies low signal fluctuations leading to independence loss. On the other hand, high-pass filtering may increase the independence by preserving spatial information, but its denoising properties are weak. Thus, such filtering strategies did not lead to simultaneous enhancements in independence and noise reduction; therefore, band-pass filtering or more sophisticated filtering methods are expected to be more appropriate. We used advanced wavelet filtering procedures, such as wavelet-based methods relying upon hard and soft coefficient thresholding and non-stationary Gaussian modelling based on geometrical prior information, to denoise artificial and real fMRI data. We compared the performance of these methods with the performance of traditional Gaussian smoothing techniques. First, we demonstrated both analytically and empirically the consistent performance increase of spatial filtering prior to ICA using spatial correlation and statistical sensitivity as quality measures. Second, all filtering methods were computationally efficient. Finally, denoising using low-pass filters was needed to improve ICA, suggesting that noise reduction may have a more significant effect on the component independence than the preservation of information contained within high frequencies.

  11. Denoising portal images by minimizing the SURE estimator on a parameterized family of shrinkage functions.

    PubMed

    González-López, Antonio; Campos-Morcillo, Pedro

    2017-06-01

    The number of verification portal images in radiotherapy has increased in recent years. On the other hand, radiation delivered during imaging is not confined to the treatment volumes, but also affects the surrounding organs and tissues. In order to reduce the overall radiation dose due to imaging, one approach would be to reduce the dose per image, but noise would increase and the quality of the portal images would degrade. The limited quality of portal images makes it difficult to propose a dose reduction if there is no way to effectively reduce noise. Denoising algorithms could be the solution if the quality of the restored image can match that of an image obtained with a standard dose. In this work the statistical properties of noise in a portal imaging system and the statistical properties of portal images are used to develop an efficient denoising method. The result is a method that minimizes Stein's unbiased risk estimator (SURE) in the image domain over a parametric family of shrinkage functions operating in the wavelet domain. The presented denoising method shows better performance than the adaptive Wiener estimator for different portal images and noise energies.
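
    The paper minimizes SURE in the image domain over a parametric shrinkage family; as a simpler stand-in, the textbook coefficient-domain SURE for plain soft thresholding can be sketched as follows (the candidate-threshold search is our simplification):

    ```python
    import numpy as np

    def sure_soft(x, t, sigma):
        """Stein's unbiased risk estimate of soft thresholding at t,
        for coefficients x carrying i.i.d. Gaussian noise of std sigma."""
        clipped2 = np.minimum(np.abs(x), t) ** 2
        return (-x.size * sigma**2 + clipped2.sum()
                + 2 * sigma**2 * np.count_nonzero(np.abs(x) > t))

    def best_threshold(x, sigma):
        """Minimize SURE over the candidate thresholds {0} U {|x_i|}."""
        cands = np.concatenate(([0.0], np.abs(x)))
        return min(cands, key=lambda t: sure_soft(x, t, sigma))
    ```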

  12. A wavelet-based estimator of the degrees of freedom in denoised fMRI time series for probabilistic testing of functional connectivity and brain graphs.

    PubMed

    Patel, Ameera X; Bullmore, Edward T

    2016-11-15

    Connectome mapping using techniques such as functional magnetic resonance imaging (fMRI) has become a focus of systems neuroscience. There remain many statistical challenges in analysis of functional connectivity and network architecture from BOLD fMRI multivariate time series. One key statistic for any time series is its (effective) degrees of freedom, df, which will generally be less than the number of time points (or nominal degrees of freedom, N). If we know the df, then probabilistic inference on other fMRI statistics, such as the correlation between two voxel or regional time series, is feasible. However, we currently lack good estimators of df in fMRI time series, especially after the degrees of freedom of the "raw" data have been modified substantially by denoising algorithms for head movement. Here, we used a wavelet-based method both to denoise fMRI data and to estimate the (effective) df of the denoised process. We show that seed voxel correlations corrected for locally variable df could be tested for false positive connectivity with better control over Type I error and greater specificity of anatomical mapping than probabilistic connectivity maps using the nominal degrees of freedom. We also show that wavelet despiked statistics can be used to estimate all pairwise correlations between a set of regional nodes, assign a P value to each edge, and then iteratively add edges to the graph in order of increasing P. These probabilistically thresholded graphs are likely more robust to regional variation in head movement effects than comparable graphs constructed by thresholding correlations. Finally, we show that time-windowed estimates of df can be used for probabilistic connectivity testing or dynamic network analysis so that apparent changes in the functional connectome are appropriately corrected for the effects of transient noise bursts. Wavelet despiking is both an algorithm for fMRI time series denoising and an estimator of the (effective) df of the denoised data.
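
    The df estimator itself comes from wavelet despiking; the sketch below only shows how an estimated effective df plugs into the standard r-to-t transform for testing a correlation (function and variable names are ours):

    ```python
    import numpy as np
    from scipy import stats

    def corr_pvalue(x, y, df_eff):
        """Two-sided p-value for a Pearson correlation, tested with an
        effective df rather than the nominal number of time points."""
        r = np.corrcoef(x, y)[0, 1]
        t = r * np.sqrt((df_eff - 2) / (1 - r**2))
        return r, 2 * stats.t.sf(abs(t), df_eff - 2)
    ```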

  13. Retinal Image Denoising via Bilateral Filter with a Spatial Kernel of Optimally Oriented Line Spread Function

    PubMed Central

    He, Yunlong; Zhao, Yanna; Ren, Yanju; Gee, James

    2017-01-01

    Filtering is among the most fundamental operations of retinal image processing, in which the value of the filtered image at a given location is a function of the values in a local window centered at that location. However, preserving thin retinal vessels during filtering is challenging because of the vessels' small area and weak contrast against the background, caused by the limited resolution of imaging and the low blood flow in the vessels. In this paper, we present a novel retinal image denoising approach that preserves the details of retinal vessels while effectively eliminating image noise. Specifically, our approach determines an optimal spatial kernel for the bilateral filter, represented by a line spread function with an orientation and scale adjusted adaptively to the local vessel structure. Moreover, this approach can also serve as a preprocessing tool for improving the accuracy of vessel detection techniques. Experimental results show the superiority of our approach over state-of-the-art image denoising techniques such as the bilateral filter. PMID:28261320

  14. Retinal Image Denoising via Bilateral Filter with a Spatial Kernel of Optimally Oriented Line Spread Function.

    PubMed

    He, Yunlong; Zheng, Yuanjie; Zhao, Yanna; Ren, Yanju; Lian, Jian; Gee, James

    2017-01-01

    Filtering is among the most fundamental operations of retinal image processing, in which the value of the filtered image at a given location is a function of the values in a local window centered at that location. However, preserving thin retinal vessels during filtering is challenging because of the vessels' small area and weak contrast against the background, caused by the limited resolution of imaging and the low blood flow in the vessels. In this paper, we present a novel retinal image denoising approach that preserves the details of retinal vessels while effectively eliminating image noise. Specifically, our approach determines an optimal spatial kernel for the bilateral filter, represented by a line spread function with an orientation and scale adjusted adaptively to the local vessel structure. Moreover, this approach can also serve as a preprocessing tool for improving the accuracy of vessel detection techniques. Experimental results show the superiority of our approach over state-of-the-art image denoising techniques such as the bilateral filter.
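
    For orientation, here is a brute-force baseline bilateral filter with an isotropic Gaussian spatial kernel; the paper's contribution replaces this spatial kernel with an oriented line-spread function, which is not reproduced here:

    ```python
    import numpy as np

    def bilateral(img, radius=3, sigma_s=2.0, sigma_r=0.1):
        """Brute-force bilateral filter (isotropic Gaussian spatial kernel)."""
        H, W = img.shape
        padded = np.pad(img, radius, mode="reflect")
        ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
        out = np.empty((H, W))
        for i in range(H):
            for j in range(W):
                win = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                w = spatial * np.exp(-(win - img[i, j])**2 / (2 * sigma_r**2))
                out[i, j] = (w * win).sum() / w.sum()
        return out
    ```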

  15. Can orangutans (Pongo abelii) infer tool functionality?

    PubMed

    Mulcahy, Nicholas J; Schubiger, Michèle N

    2014-05-01

    It is debatable whether apes can reason about the unobservable properties of tools. We tested orangutans for this ability with a range of tool tasks that they could solve by using observational cues to infer tool functionality. In experiment 1, subjects successfully chose an unbroken tool over a broken one when each tool's middle section was hidden. This prevented the subjects from seeing which tool was functional, but functionality could be inferred from the tools' visible ends, which were either disjointed (broken tool) or aligned (unbroken tool). We investigated whether success in experiment 1 was best explained by inferential reasoning or by a preference per se for a hidden tool with an aligned configuration. We conducted a task similar to experiment 1 that included a functional bent tool that could be arranged to have the same disjointed configuration as the broken tool. The results suggested that subjects had a preference per se for the aligned tool, choosing it regardless of whether it was paired with the broken tool or the functional bent tool. However, further experiments with the bent-tool task suggested this preference resulted from the additional demands of having to attend to and remember the properties of the tools from the beginning of the task. In our last experiment, we removed these task demands and found evidence that subjects could infer the functionality of a broken tool and an unbroken tool that both looked identical at the time of choice.

  16. Neural Circuit Inference from Function to Structure.

    PubMed

    Real, Esteban; Asari, Hiroki; Gollisch, Tim; Meister, Markus

    2017-01-23

    Advances in technology are opening new windows on the structural connectivity and functional dynamics of brain circuits. Quantitative frameworks are needed that integrate these data from anatomy and physiology. Here, we present a modeling approach that creates such a link. The goal is to infer the structure of a neural circuit from sparse neural recordings, using partial knowledge of its anatomy as a regularizing constraint. We recorded visual responses from the output neurons of the retina, the ganglion cells. We then generated a systematic sequence of circuit models that represents retinal neurons and connections and fitted them to the experimental data. The optimal models faithfully recapitulated the ganglion cell outputs. More importantly, they made predictions about dynamics and connectivity among unobserved neurons internal to the circuit, and these were subsequently confirmed by experiment. This circuit inference framework promises to facilitate the integration and understanding of big data in neuroscience.

  17. Adaptive Denoising Technique for Robust Analysis of Functional Magnetic Resonance Imaging Data

    DTIC Science & Technology

    2007-11-02

    [The abstract in this record is garbled in the source; only fragments are legible. The title reads "Adaptive Denoising Technique for Robust Analysis of Functional Magnetic Resonance Imaging Data", and the work was supported in part by the Center for Advanced Software and Biomedical Engineering Consultations (CASBEC), Cairo University, and IBE Technologies, Egypt.]

  18. Functional network inference of the suprachiasmatic nucleus

    SciTech Connect

    Abel, John H.; Meeker, Kirsten; Granados-Fuentes, Daniel; St. John, Peter C.; Wang, Thomas J.; Bales, Benjamin B.; Doyle, Francis J.; Herzog, Erik D.; Petzold, Linda R.

    2016-04-04

    In the mammalian suprachiasmatic nucleus (SCN), noisy cellular oscillators communicate within a neuronal network to generate precise system-wide circadian rhythms. Although the intracellular genetic oscillator and intercellular biochemical coupling mechanisms have been examined previously, the network topology driving synchronization of the SCN has not been elucidated. This network has been particularly challenging to probe, due to its oscillatory components and slow coupling timescale. In this work, we investigated the SCN network at a single-cell resolution through a chemically induced desynchronization. We then inferred functional connections in the SCN by applying the maximal information coefficient statistic to bioluminescence reporter data from individual neurons while they resynchronized their circadian cycling. Our results demonstrate that the functional network of circadian cells associated with resynchronization has small-world characteristics, with a node degree distribution that is exponential. We show that hubs of this small-world network are preferentially located in the central SCN, with sparsely connected shells surrounding these cores. Finally, we used two computational models of circadian neurons to validate our predictions of network structure.

  19. Functional network inference of the suprachiasmatic nucleus

    PubMed Central

    Abel, John H.; Meeker, Kirsten; Granados-Fuentes, Daniel; St. John, Peter C.; Wang, Thomas J.; Bales, Benjamin B.; Doyle, Francis J.; Herzog, Erik D.; Petzold, Linda R.

    2016-01-01

    In the mammalian suprachiasmatic nucleus (SCN), noisy cellular oscillators communicate within a neuronal network to generate precise system-wide circadian rhythms. Although the intracellular genetic oscillator and intercellular biochemical coupling mechanisms have been examined previously, the network topology driving synchronization of the SCN has not been elucidated. This network has been particularly challenging to probe, due to its oscillatory components and slow coupling timescale. In this work, we investigated the SCN network at a single-cell resolution through a chemically induced desynchronization. We then inferred functional connections in the SCN by applying the maximal information coefficient statistic to bioluminescence reporter data from individual neurons while they resynchronized their circadian cycling. Our results demonstrate that the functional network of circadian cells associated with resynchronization has small-world characteristics, with a node degree distribution that is exponential. We show that hubs of this small-world network are preferentially located in the central SCN, with sparsely connected shells surrounding these cores. Finally, we used two computational models of circadian neurons to validate our predictions of network structure. PMID:27044085
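
    The core inference step, scoring each cell pair with the maximal information coefficient, can be sketched with the minepy package (an assumed implementation of the MIC statistic; the synthetic traces and the MINE parameters, which follow Reshef et al.'s defaults, are illustrative):

    ```python
    import numpy as np
    from minepy import MINE   # assumed: pip install minepy

    rng = np.random.default_rng(0)
    a = np.sin(np.linspace(0, 20, 400)) + 0.2 * rng.standard_normal(400)
    b = np.roll(a, 5) + 0.2 * rng.standard_normal(400)   # coupled, lagged trace

    mine = MINE(alpha=0.6, c=15)
    mine.compute_score(a, b)
    print(mine.mic())   # strength of the putative functional link
    ```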

  20. Image denoising using local tangent space alignment

    NASA Astrophysics Data System (ADS)

    Feng, JianZhou; Song, Li; Huo, Xiaoming; Yang, XiaoKang; Zhang, Wenjun

    2010-07-01

    We propose a novel image denoising approach based on exploring an underlying (nonlinear) low-dimensional manifold. Using local tangent space alignment (LTSA), we 'learn' such a manifold, which approximates the image content effectively. The denoising is performed by minimizing a newly defined objective function, which is a sum of two terms: (a) the difference between the noisy image and the denoised image, and (b) the distance from the image patch to the manifold. We extend the LTSA method from manifold learning to denoising. We introduce a local dimension concept that adapts to different kinds of image patches, e.g. flat patches having lower dimension. We also plug in a basic denoising stage to estimate the local coordinates more accurately. The proposed method is found to be competitive: its performance surpasses the K-SVD denoising method.
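
    In symbols, and with our own (hypothetical) weighting λ, the two-term objective has the form

    ```latex
    \min_{x}\; \|y - x\|_2^2 \;+\; \lambda \sum_{i} \operatorname{dist}^2\!\big(R_i x,\; \mathcal{M}\big),
    ```

    where y is the noisy image, R_i extracts the i-th patch, and M is the manifold learned by LTSA.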

  1. Functional neuroanatomy of intuitive physical inference

    PubMed Central

    Mikhael, John G.; Tenenbaum, Joshua B.; Kanwisher, Nancy

    2016-01-01

    To engage with the world—to understand the scene in front of us, plan actions, and predict what will happen next—we must have an intuitive grasp of the world’s physical structure and dynamics. How do the objects in front of us rest on and support each other, how much force would be required to move them, and how will they behave when they fall, roll, or collide? Despite the centrality of physical inferences in daily life, little is known about the brain mechanisms recruited to interpret the physical structure of a scene and predict how physical events will unfold. Here, in a series of fMRI experiments, we identified a set of cortical regions that are selectively engaged when people watch and predict the unfolding of physical events—a “physics engine” in the brain. These brain regions are selective to physical inferences relative to nonphysical but otherwise highly similar scenes and tasks. However, these regions are not exclusively engaged in physical inferences per se or, indeed, even in scene understanding; they overlap with the domain-general “multiple demand” system, especially the parts of that system involved in action planning and tool use, pointing to a close relationship between the cognitive and neural mechanisms involved in parsing the physical content of a scene and preparing an appropriate action. PMID:27503892

  2. Functional neuroanatomy of intuitive physical inference.

    PubMed

    Fischer, Jason; Mikhael, John G; Tenenbaum, Joshua B; Kanwisher, Nancy

    2016-08-23

    To engage with the world-to understand the scene in front of us, plan actions, and predict what will happen next-we must have an intuitive grasp of the world's physical structure and dynamics. How do the objects in front of us rest on and support each other, how much force would be required to move them, and how will they behave when they fall, roll, or collide? Despite the centrality of physical inferences in daily life, little is known about the brain mechanisms recruited to interpret the physical structure of a scene and predict how physical events will unfold. Here, in a series of fMRI experiments, we identified a set of cortical regions that are selectively engaged when people watch and predict the unfolding of physical events-a "physics engine" in the brain. These brain regions are selective to physical inferences relative to nonphysical but otherwise highly similar scenes and tasks. However, these regions are not exclusively engaged in physical inferences per se or, indeed, even in scene understanding; they overlap with the domain-general "multiple demand" system, especially the parts of that system involved in action planning and tool use, pointing to a close relationship between the cognitive and neural mechanisms involved in parsing the physical content of a scene and preparing an appropriate action.

  3. Image denoising using a combined criterion

    NASA Astrophysics Data System (ADS)

    Semenishchev, Evgeny; Marchuk, Vladimir; Shrafel, Igor; Dubovskov, Vadim; Onoyko, Tatyana; Maslennikov, Stanislav

    2016-05-01

    A new image denoising method is proposed in this paper. We consider an optimization problem with a linear objective function based on two criteria, namely the L2 norm and the first-order square difference. The method is parametric, so by choosing the parameters we can adapt the criteria of the objective function. The denoising algorithm consists of the following steps: 1) multiple denoising estimates are found on local areas of the image; 2) image edges are determined; 3) parameters of the method are fixed and denoised estimates of the local area are found; 4) the local window is moved to the next position (local windows overlap) to produce the final estimate. A proper choice of the method's parameters is discussed. A comparative analysis of the new denoising method with existing ones is performed on a set of test images.
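
    A quadratic data term plus a first-order square difference has a closed-form minimizer; the 1-D sketch below (our simplification of the per-window problem, with an illustrative λ) solves the corresponding linear system directly:

    ```python
    import numpy as np

    def combined_denoise(y, lam=5.0):
        """Closed-form minimizer of ||x - y||^2 + lam * sum_i (x_{i+1} - x_i)^2
        for a 1-D signal: solve (I + lam * D^T D) x = y."""
        n = y.size
        D = np.diff(np.eye(n), axis=0)   # (n-1) x n first-difference operator
        return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)
    ```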

  4. Wavelet denoising for quantum noise removal in chest digital tomosynthesis.

    PubMed

    Gomi, Tsutomu; Nakajima, Masahiro; Umeda, Tokuo

    2015-01-01

    Quantum noise impairs image quality in chest digital tomosynthesis (DT). A wavelet denoising processing algorithm for selectively removing quantum noise was developed and tested. A wavelet denoising technique was implemented on a DT system and experimentally evaluated using chest phantom measurements, including spatial resolution. Comparison was made with an existing post-reconstruction wavelet denoising processing algorithm reported by Badea et al. (Comput Med Imaging Graph 22:309-315, 1998). The potential DT quantum noise decrease was evaluated at different exposures with our technique (pre-reconstruction and post-reconstruction wavelet denoising processing via the balance sparsity-norm method) and with the existing wavelet denoising processing algorithm. Image quality metrics, namely the contrast-to-noise ratio (CNR) and root mean square error (RMSE), were compared with and without wavelet denoising processing. Modulation transfer functions (MTF) were evaluated for the in-focus plane. We performed a statistical analysis (multi-way analysis of variance) using the CNR and RMSE values. Our wavelet denoising processing algorithm significantly decreased the quantum noise and improved the contrast resolution in the reconstructed images (CNR and RMSE: pre-balance sparsity-norm wavelet denoising processing versus existing wavelet denoising processing, P<0.05; post-balance sparsity-norm wavelet denoising processing versus existing wavelet denoising processing, P<0.05; CNR: with versus without wavelet denoising processing, P<0.05). The results showed that our method preserved spatial resolution (MTF did not vary), whereas the existing wavelet denoising processing algorithm caused MTF deterioration. A balance sparsity-norm wavelet denoising processing algorithm for removing quantum noise in DT was demonstrated to be effective for certain classes of structures with high-frequency features. This denoising approach may be useful for a variety of clinical applications.

  5. Green channel guiding denoising on bayer image.

    PubMed

    Tan, Xin; Lai, Shiming; Liu, Yu; Zhang, Maojun

    2014-01-01

    Denoising is an indispensable function for digital cameras. Because noise is diffused during demosaicking, denoising ought to work directly on Bayer data. The difficulty of denoising a Bayer image is the interlaced mosaic pattern of red, green, and blue. The guided filter is a time-efficient explicit filter kernel that can incorporate additional information from a guidance image, but it has not yet been applied to Bayer images. In this work, we observe that the green channel of the Bayer mosaic is higher in both sampling rate and signal-to-noise ratio (SNR) than the red and blue ones. Therefore the green channel can be used to guide denoising. This kind of guidance integrates the different color channels together. Experiments on both actual and simulated Bayer images indicate that the green channel acts well as the guidance signal, and that the proposed method is competitive with other popular filter-kernel denoising methods.

  6. Green Channel Guiding Denoising on Bayer Image

    PubMed Central

    Zhang, Maojun

    2014-01-01

    Denoising is an indispensable function for digital cameras. Because noise is diffused during demosaicking, denoising ought to work directly on Bayer data. The difficulty of denoising a Bayer image is the interlaced mosaic pattern of red, green, and blue. The guided filter is a time-efficient explicit filter kernel that can incorporate additional information from a guidance image, but it has not yet been applied to Bayer images. In this work, we observe that the green channel of the Bayer mosaic is higher in both sampling rate and signal-to-noise ratio (SNR) than the red and blue ones. Therefore the green channel can be used to guide denoising. This kind of guidance integrates the different color channels together. Experiments on both actual and simulated Bayer images indicate that the green channel acts well as the guidance signal, and that the proposed method is competitive with other popular filter-kernel denoising methods. PMID:24741370
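
    A minimal single-channel guided filter in the standard He et al. formulation, which could be applied with an interpolated green plane as guidance for the red or blue planes; the Bayer sampling and interpolation details are omitted, and the radius/eps values are illustrative:

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def guided_filter(guide, src, radius=4, eps=1e-3):
        """Edge-preserving smoothing of `src`, steered by `guide`."""
        size = 2 * radius + 1
        mean_I = uniform_filter(guide, size)
        mean_p = uniform_filter(src, size)
        cov_Ip = uniform_filter(guide * src, size) - mean_I * mean_p
        var_I = uniform_filter(guide * guide, size) - mean_I * mean_I
        a = cov_Ip / (var_I + eps)          # local linear coefficients
        b = mean_p - a * mean_I
        return uniform_filter(a, size) * guide + uniform_filter(b, size)

    # e.g. red_denoised = guided_filter(green_interp, red_interp)
    ```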

  7. Evaluation of Denoising Strategies to Address Motion-Correlated Artifacts in Resting-State Functional Magnetic Resonance Imaging Data from the Human Connectome Project.

    PubMed

    Burgess, Gregory C; Kandala, Sridhar; Nolan, Dan; Laumann, Timothy O; Power, Jonathan D; Adeyemo, Babatunde; Harms, Michael P; Petersen, Steven E; Barch, Deanna M

    2016-11-01

    Like all resting-state functional connectivity data, the data from the Human Connectome Project (HCP) are adversely affected by structured noise artifacts arising from head motion and physiological processes. Functional connectivity estimates (Pearson's correlation coefficients) were inflated for high-motion time points and for high-motion participants. This inflation occurred across the brain, suggesting the presence of globally distributed artifacts. The degree of inflation was further increased for connections between nearby regions compared with distant regions, suggesting the presence of distance-dependent spatially specific artifacts. We evaluated several denoising methods: censoring high-motion time points, motion regression, the FMRIB independent component analysis-based X-noiseifier (FIX), and mean grayordinate time series regression (MGTR; as a proxy for global signal regression). The results suggest that FIX denoising reduced both types of artifacts, but left substantial global artifacts behind. MGTR significantly reduced global artifacts, but left substantial spatially specific artifacts behind. Censoring high-motion time points resulted in a small reduction of distance-dependent and global artifacts, eliminating neither type. All denoising strategies left differences between high- and low-motion participants, but only MGTR substantially reduced those differences. Ultimately, functional connectivity estimates from HCP data showed spatially specific and globally distributed artifacts, and the most effective approach to address both types of motion-correlated artifacts was a combination of FIX and MGTR.

  8. Automatic denoising of functional MRI data: combining independent component analysis and hierarchical fusion of classifiers.

    PubMed

    Salimi-Khorshidi, Gholamreza; Douaud, Gwenaëlle; Beckmann, Christian F; Glasser, Matthew F; Griffanti, Ludovica; Smith, Stephen M

    2014-04-15

    Many sources of fluctuation contribute to the fMRI signal, and this makes identifying the effects that are truly related to the underlying neuronal activity difficult. Independent component analysis (ICA) - one of the most widely used techniques for the exploratory analysis of fMRI data - has been shown to be a powerful technique in identifying various sources of neuronally-related and artefactual fluctuation in fMRI data (both with the application of external stimuli and with the subject "at rest"). ICA decomposes fMRI data into patterns of activity (a set of spatial maps and their corresponding time series) that are statistically independent and add linearly to explain voxel-wise time series. Given the set of ICA components, if the components representing "signal" (brain activity) can be distinguished from the "noise" components (effects of motion, non-neuronal physiology, scanner artefacts and other nuisance sources), the latter can then be removed from the data, providing an effective cleanup of structured noise. Manual classification of components is labour intensive and requires expertise; hence, a fully automatic noise detection algorithm that can reliably detect various types of noise sources (in both task and resting fMRI) is desirable. In this paper, we introduce FIX ("FMRIB's ICA-based X-noiseifier"), which provides an automatic solution for denoising fMRI data via accurate classification of ICA components. For each ICA component FIX generates a large number of distinct spatial and temporal features, each describing a different aspect of the data (e.g., what proportion of temporal fluctuations are at high frequencies). The set of features is then fed into a multi-level classifier (built around several different classifiers). Once trained through the hand-classification of a sufficient number of training datasets, the classifier can then automatically classify new datasets. The noise components can then be subtracted from (or regressed out of) the original data.
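
    The final subtraction step is easy to sketch with scikit-learn's FastICA. This is not FIX itself: the data are a random surrogate, and the `noise_idx` labels stand in for the output of a trained classifier:

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 500))   # surrogate fMRI data: (time, voxels)

    ica = FastICA(n_components=20, random_state=0)
    S = ica.fit_transform(X)              # component time series, (time, comps)
    M = ica.mixing_                       # spatial maps, (voxels, comps)

    noise_idx = [3, 7, 12]                # hypothetical classifier output
    X_clean = X - S[:, noise_idx] @ M[:, noise_idx].T   # remove noise components
    ```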

  9. Progressive image denoising.

    PubMed

    Knaus, Claude; Zwicker, Matthias

    2014-07-01

    Image denoising continues to be an active research topic. Although state-of-the-art denoising methods are numerically impressive and approach theoretical limits, they suffer from visible artifacts. While they produce acceptable results for natural images, human eyes are less forgiving when viewing synthetic images. At the same time, current methods are becoming more complex, making analysis and implementation difficult. We propose image denoising as a simple physical process, which progressively reduces noise by deterministic annealing. The results of our implementation are numerically and visually excellent. We further demonstrate that our method is particularly suited for synthetic images. Finally, we offer a new perspective on image denoising using robust estimators.

  10. Quantitative evaluation of statistical inference in resting state functional MRI.

    PubMed

    Yang, Xue; Kang, Hakmook; Newton, Allen; Landman, Bennett A

    2012-01-01

    Modern statistical inference techniques may be able to improve the sensitivity and specificity of resting state functional MRI (rs-fMRI) connectivity analysis through more realistic characterization of distributional assumptions. In simulation, the advantages of such modern methods are readily demonstrable. However quantitative empirical validation remains elusive in vivo as the true connectivity patterns are unknown and noise/artifact distributions are challenging to characterize with high fidelity. Recent innovations in capturing finite sample behavior of asymptotically consistent estimators (i.e., SIMulation and EXtrapolation - SIMEX) have enabled direct estimation of bias given single datasets. Herein, we leverage the theoretical core of SIMEX to study the properties of inference methods in the face of diminishing data (in contrast to increasing noise). The stability of inference methods with respect to synthetic loss of empirical data (defined as resilience) is used to quantify the empirical performance of one inference method relative to another. We illustrate this new approach in a comparison of ordinary and robust inference methods with rs-fMRI.

  11. Differential Expression and Network Inferences through Functional Data Modeling

    PubMed Central

    Telesca, Donatello; Inoue, Lurdes Y.T.; Neira, Mauricio; Etzioni, Ruth; Gleave, Martin; Nelson, Colleen

    2010-01-01

    Time–course microarray data consist of mRNA expression from a common set of genes collected at different time points. Such data are thought to reflect underlying biological processes developing over time. In this article we propose a model that allows us to examine differential expression and gene network relationships using time course microarray data. We model each gene expression profile as a random functional transformation of the scale, amplitude and phase of a common curve. Inferences about the gene–specific amplitude parameters allow us to examine differential gene expression. Inferences about measures of functional similarity based on estimated time transformation functions allow us to examine gene networks while accounting for features of the gene expression profiles. We discuss applications to simulated data as well as to microarray data on prostate cancer progression. PMID:19053995

  12. PRINCIPAL COMPONENTS FOR NON-LOCAL MEANS IMAGE DENOISING.

    PubMed

    Tasdizen, Tolga

    2008-01-01

    This paper presents an image denoising algorithm that uses principal component analysis (PCA) in conjunction with the non-local means image denoising. Image neighborhood vectors used in the non-local means algorithm are first projected onto a lower-dimensional subspace using PCA. Consequently, neighborhood similarity weights for denoising are computed using distances in this subspace rather than the full space. This modification to the non-local means algorithm results in improved accuracy and computational performance. We present an analysis of the proposed method's accuracy as a function of the dimensionality of the projection subspace and demonstrate that denoising accuracy peaks at a relatively low number of dimensions.
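
    A compact sketch of the idea, projecting patch vectors with scikit-learn's PCA before computing non-local means weights (the quadratic all-pairs loop is for illustration on small images only; patch size, subspace dimension, and the bandwidth h are illustrative):

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.feature_extraction.image import extract_patches_2d

    def pca_nlm(img, patch=7, dim=6, h=0.6):
        """Non-local means with patch distances computed in a PCA subspace."""
        pad = patch // 2
        padded = np.pad(img, pad, mode="reflect")
        flat = extract_patches_2d(padded, (patch, patch)).reshape(-1, patch * patch)
        feats = PCA(n_components=dim).fit_transform(flat)   # low-dim patch features
        centers = flat[:, patch * patch // 2]               # center pixel of each patch
        out = np.empty(len(feats))
        for i, f in enumerate(feats):
            d2 = np.sum((feats - f) ** 2, axis=1)           # subspace distances
            w = np.exp(-d2 / (h * h))
            out[i] = np.dot(w, centers) / w.sum()
        return out.reshape(img.shape)
    ```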

  13. Extensions to total variation denoising

    NASA Astrophysics Data System (ADS)

    Blomgren, Peter; Chan, Tony F.; Mulet, Pep

    1997-10-01

    The total variation denoising method, proposed by Rudin, Osher and Fatemi (1992), is a PDE-based algorithm for edge-preserving noise removal. The images resulting from its application are usually piecewise constant, possibly with a staircase effect at smooth transitions, and may contain significantly less fine detail than the original non-degraded image. In this paper we present some extensions to this technique that aim to improve on the above drawbacks, by redefining the total variation functional or the noise constraints.
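
    The functional being redefined is the standard ROF model; in its unconstrained form,

    ```latex
    \min_{u}\; \int_{\Omega} |\nabla u| \, dx \;+\; \frac{\lambda}{2} \int_{\Omega} (u - f)^2 \, dx,
    ```

    where f is the observed noisy image and λ balances data fidelity against total variation.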

  14. Adaptive image denoising by targeted databases.

    PubMed

    Luo, Enming; Chan, Stanley H; Nguyen, Truong Q

    2015-07-01

    We propose a data-dependent denoising procedure to restore noisy images. Different from existing denoising algorithms which search for patches from either the noisy image or a generic database, the new algorithm finds patches from a database that contains relevant patches. We formulate the denoising problem as an optimal filter design problem and make two contributions. First, we determine the basis function of the denoising filter by solving a group sparsity minimization problem. The optimization formulation generalizes existing denoising algorithms and offers systematic analysis of the performance. Improvement methods are proposed to enhance the patch search process. Second, we determine the spectral coefficients of the denoising filter by considering a localized Bayesian prior. The localized prior leverages the similarity of the targeted database, alleviates the intensive Bayesian computation, and links the new method to the classical linear minimum mean squared error estimation. We demonstrate applications of the proposed method in a variety of scenarios, including text images, multiview images, and face images. Experimental results show the superiority of the new algorithm over existing methods.

  15. Nuisance Regression of High-Frequency Functional Magnetic Resonance Imaging Data: Denoising Can Be Noisy.

    PubMed

    Chen, Jingyuan E; Jahanian, Hesamoddin; Glover, Gary H

    2017-02-01

    Recently, emerging studies have demonstrated the existence of brain resting-state spontaneous activity at frequencies higher than the conventional 0.1 Hz. A few groups utilizing accelerated acquisitions have reported persisting signals beyond 1 Hz, which seems too high to be accommodated by the sluggish hemodynamic process underpinning blood oxygen level-dependent contrasts (the upper limit of the canonical model is ∼0.3 Hz). It is thus questionable whether the observed high-frequency (HF) functional connectivity originates from alternative mechanisms (e.g., inflow effects, proton density changes in or near activated neural tissue) or rather is artificially introduced by improper preprocessing operations. In this study, we examined the influence of a common preprocessing step-whole-band linear nuisance regression (WB-LNR)-on resting-state functional connectivity (RSFC) and demonstrated through both simulation and analysis of real dataset that WB-LNR can introduce spurious network structures into the HF bands of functional magnetic resonance imaging (fMRI) signals. Findings of present study call into question whether published observations on HF-RSFC are partly attributable to improper data preprocessing instead of actual neural activities.

  16. Estimation and Inference of Directionally Differentiable Functions: Theory and Applications

    NASA Astrophysics Data System (ADS)

    Fang, Zheng

    This dissertation addresses a large class of irregular models in economics and statistics: settings in which the parameters of interest take the form φ(θ₀), where φ is a known directionally differentiable function and θ₀ is estimated by θ̂ₙ. Chapter 1 provides a tractable framework for conducting inference, Chapter 2 focuses on optimality of estimation, and Chapter 3 applies the developed theory to construct a test of whether a Hilbert-space-valued parameter belongs to a convex set and to derive the uniform weak convergence of the Grenander distribution function (the least concave majorant of the empirical distribution function) under minimal assumptions.
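
    The inferential difficulty stems from the failure of the ordinary delta method in these settings; under Hadamard directional differentiability one instead has (in our notation)

    ```latex
    r_n\big(\hat{\theta}_n - \theta_0\big) \rightsquigarrow \mathbb{G}
    \;\;\Longrightarrow\;\;
    r_n\big(\varphi(\hat{\theta}_n) - \varphi(\theta_0)\big) \rightsquigarrow \varphi'_{\theta_0}(\mathbb{G}),
    ```

    where the directional derivative φ′_{θ₀} need not be linear, so the limit is generally non-Gaussian and standard bootstrap procedures can fail.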

  17. Inferring consistent functional interaction patterns from natural stimulus FMRI data.

    PubMed

    Sun, Jiehuan; Hu, Xintao; Huang, Xiu; Liu, Yang; Li, Kaiming; Li, Xiang; Han, Junwei; Guo, Lei; Liu, Tianming; Zhang, Jing

    2012-07-16

    There has been increasing interest in the neuroimaging field in how the human brain responds to natural stimuli such as video watching. Along this direction, this paper presents our effort to infer consistent and reproducible functional interaction patterns, under the natural stimulus of video watching, among known functional brain regions identified by task-based fMRI. We applied and compared four statistical approaches: Bayesian network modeling with three structure-search algorithms, namely greedy equivalence search (GES), Peter-Clark (PC) analysis, and independent multiple greedy equivalence search (IMaGES), together with the commonly used Granger causality analysis (GCA). Interestingly, a number of reliable and consistent functional interaction patterns were identified by the GES, PC and IMaGES algorithms in different participating subjects when they watched multiple video shots of the same semantic category. These interaction patterns are meaningful given current neuroscience knowledge and are reasonably reproducible across different brains and video shots. In particular, these consistent functional interaction patterns are supported by structural connections derived from diffusion tensor imaging (DTI) data, suggesting the structural underpinnings of consistent functional interactions. Our work demonstrates that specific consistent patterns of functional interactions among relevant brain regions might reflect the brain's fundamental mechanisms of online processing and comprehension of video messages.
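
    Of the four approaches, GCA is the simplest to demonstrate; statsmodels provides a pairwise test (the synthetic lagged series below is our illustration, not the paper's data):

    ```python
    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(0)
    x = rng.standard_normal(500)
    y = 0.8 * np.roll(x, 1) + 0.2 * rng.standard_normal(500)  # y lags x by one step

    # Tests whether the SECOND column Granger-causes the FIRST (prints a summary).
    res = grangercausalitytests(np.column_stack([y, x]), maxlag=3)
    stat, pval = res[1][0]["ssr_ftest"][:2]   # F statistic and p-value at lag 1
    ```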

  18. Anisotropic Nonlocal Means Denoising

    DTIC Science & Technology

    2011-11-26

    [Fragmented DTIC abstract; the legible portions read:] Denoising algorithms have evolved from the classical linear and median filters to more modern schemes like total variation denoising. ... an anisotropic NLM based on underlying image gradients outperforms NLM by a significant margin. ... whether the approach can match the nuanced edges and textures of real-world images remains open, since only binary images were considered here.

  19. Network inference from functional experimental data (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Desrosiers, Patrick; Labrecque, Simon; Tremblay, Maxime; Bélanger, Mathieu; De Dorlodot, Bertrand; Côté, Daniel C.

    2016-03-01

    Functional connectivity maps of neuronal networks are critical tools to understand how neurons form circuits, how information is encoded and processed by neurons, how memory is shaped, and how these basic processes are altered under pathological conditions. Current light microscopy allows us to observe the calcium or electrical activity of thousands of neurons simultaneously, yet assessing comprehensive connectivity maps directly from such data remains a non-trivial analytical task. There exist simple statistical methods, such as cross-correlation and Granger causality, but they only detect linear interactions between neurons. Other more involved inference methods inspired by information theory, such as mutual information and transfer entropy, identify connections between neurons more accurately but also require more computational resources. We carried out a comparative study of common connectivity inference methods. The relative accuracy and computational cost of each method were determined via simulated fluorescence traces generated with realistic computational models of interacting neurons in networks of different topologies (clustered or non-clustered) and sizes (10-1000 neurons). To bridge the computational and experimental works, we observed the intracellular calcium activity of live hippocampal neuronal cultures infected with the fluorescent calcium marker GCaMP6f. The spontaneous activity of the networks, consisting of 50-100 neurons per field of view, was recorded at 20 to 50 Hz on a microscope controlled by homemade software. We implemented all connectivity inference methods in the software, which rapidly loads calcium fluorescence movies, segments the images, extracts the fluorescence traces, and assesses the functional connections (with strengths and directions) between each pair of neurons. We used this software to assess, in real time, the functional connectivity from real calcium imaging data in basal conditions, under plasticity protocols, and under epileptic conditions.
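
    The simplest of the surveyed estimators, peak lagged cross-correlation, can be sketched as follows; the information-theoretic variants would replace the inner statistic, and the edge threshold is an illustrative assumption:

    ```python
    import numpy as np

    def xcorr_network(traces, max_lag=10, thresh=0.3):
        """Directed connectivity from pairwise peak lagged cross-correlation.
        traces: (n_cells, n_frames) array of fluorescence time series."""
        z = (traces - traces.mean(1, keepdims=True)) / traces.std(1, keepdims=True)
        n, T = z.shape
        W = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                if i != j:
                    cc = [np.dot(z[i, :T - k], z[j, k:]) / (T - k)
                          for k in range(max_lag + 1)]
                    W[i, j] = max(cc)    # directed: j's activity lags i's
        return W, W > thresh             # weights and thresholded adjacency
    ```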

  20. Beyond the bounds of orthology: functional inference from metagenomic context.

    PubMed

    Vey, Gregory; Moreno-Hagelsieb, Gabriel

    2010-07-01

    The effectiveness of the computational inference of function by genomic context is bounded by the diversity of known microbial genomes. Although metagenomes offer access to previously inaccessible organisms, their fragmentary nature prevents the conventional establishment of orthologous relationships required for reliably predicting functional interactions. We introduce a protocol for the prediction of functional interactions using data sources without information about orthologous relationships. To illustrate this process, we use the Sargasso Sea metagenome to construct a functional interaction network for the Escherichia coli K12 genome. We identify two reliability metrics, target intergenic distance and source interaction count, and apply them to selectively filter the predictions retained to construct the network of functional interactions. The resulting network contains 2297 nodes with 10 072 edges with a positive predictive value of 0.80. The metagenome yielded 8423 functional interactions beyond those found using only the genomic orthologs as a data source. This amounted to a 134% increase in the total number of functional interactions that are predicted by combining the metagenome and the genomic orthologs versus the genomic orthologs alone. In the absence of detectable orthologous relationships it remains feasible to derive a reliable set of predicted functional interactions. This offers a strategy for harnessing other metagenomes and homologs in general. Because metagenomes allow access to previously unreachable microorganisms, this will result in expanding the universe of known functional interactions thus furthering our understanding of functional organization.

  1. Inference of gene regulation functions from dynamic transcriptome data

    PubMed Central

    Hillenbrand, Patrick; Maier, Kerstin C; Cramer, Patrick; Gerland, Ulrich

    2016-01-01

    To quantify gene regulation, a function is required that relates transcription factor binding to DNA (input) to the rate of mRNA synthesis from a target gene (output). Such a ‘gene regulation function’ (GRF) generally cannot be measured because the experimental titration of inputs and simultaneous readout of outputs is difficult. Here we show that GRFs may instead be inferred from natural changes in cellular gene expression, as exemplified for the cell cycle in the yeast S. cerevisiae. We develop this inference approach based on a time series of mRNA synthesis rates from a synchronized population of cells observed over three cell cycles. We first estimate the functional form of how input transcription factors determine mRNA output and then derive GRFs for target genes in the CLB2 gene cluster that are expressed during G2/M phase. Systematic analysis of additional GRFs suggests a network architecture that rationalizes transcriptional cell cycle oscillations. We find that a transcription factor network alone can produce oscillations in mRNA expression, but that additional input from cyclin oscillations is required to arrive at the native behaviour of the cell cycle oscillator. DOI: http://dx.doi.org/10.7554/eLife.12188.001 PMID:27652904
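
    The paper estimates the input-output form from the data; a common parametric stand-in for a GRF is a Hill function, whose fit to a (synthetic, hypothetical) TF-activity time course can be sketched as:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def hill(tf, vmax, k, n):
        """Hill-type GRF: transcription-factor activity -> mRNA synthesis rate."""
        return vmax * tf**n / (k**n + tf**n)

    rng = np.random.default_rng(1)
    tf_act = np.linspace(0.05, 2.0, 40)            # hypothetical TF activity course
    rate = hill(tf_act, 3.0, 0.8, 2.5) + 0.1 * rng.standard_normal(40)

    params, _ = curve_fit(hill, tf_act, rate, p0=(2.0, 1.0, 2.0), maxfev=5000)
    ```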

  2. Explanation and inference: mechanistic and functional explanations guide property generalization.

    PubMed

    Lombrozo, Tania; Gwynne, Nicholas Z

    2014-01-01

    The ability to generalize from the known to the unknown is central to learning and inference. Two experiments explore the relationship between how a property is explained and how that property is generalized to novel species and artifacts. The experiments contrast the consequences of explaining a property mechanistically, by appeal to parts and processes, with the consequences of explaining the property functionally, by appeal to functions and goals. The findings suggest that properties that are explained functionally are more likely to be generalized on the basis of shared functions, with a weaker relationship between mechanistic explanations and generalization on the basis of shared parts and processes. The influence of explanation type on generalization holds even though all participants are provided with the same mechanistic and functional information, and whether an explanation type is freely generated (Experiment 1), experimentally provided (Experiment 2), or experimentally induced (Experiment 2). The experiments also demonstrate that explanations and generalizations of a particular type (mechanistic or functional) can be experimentally induced by providing sample explanations of that type, with a comparable effect when the sample explanations come from the same domain or from a different domain. These results suggest that explanations serve as a guide to generalization, and contribute to a growing body of work supporting the value of distinguishing mechanistic and functional explanations.

  3. Explanation and inference: mechanistic and functional explanations guide property generalization

    PubMed Central

    Lombrozo, Tania; Gwynne, Nicholas Z.

    2014-01-01

    The ability to generalize from the known to the unknown is central to learning and inference. Two experiments explore the relationship between how a property is explained and how that property is generalized to novel species and artifacts. The experiments contrast the consequences of explaining a property mechanistically, by appeal to parts and processes, with the consequences of explaining the property functionally, by appeal to functions and goals. The findings suggest that properties that are explained functionally are more likely to be generalized on the basis of shared functions, with a weaker relationship between mechanistic explanations and generalization on the basis of shared parts and processes. The influence of explanation type on generalization holds even though all participants are provided with the same mechanistic and functional information, and whether an explanation type is freely generated (Experiment 1), experimentally provided (Experiment 2), or experimentally induced (Experiment 2). The experiments also demonstrate that explanations and generalizations of a particular type (mechanistic or functional) can be experimentally induced by providing sample explanations of that type, with a comparable effect when the sample explanations come from the same domain or from a different domain. These results suggest that explanations serve as a guide to generalization, and contribute to a growing body of work supporting the value of distinguishing mechanistic and functional explanations. PMID:25309384

  4. Parametric surface denoising

    NASA Astrophysics Data System (ADS)

    Kakadiaris, Ioannis A.; Konstantinidis, Ioannis; Papadakis, Manos; Ding, Wei; Shen, Lixin

    2005-08-01

    Three-dimensional (3D) surfaces can be sampled parametrically in the form of range image data. Smoothing/denoising of such raw data is usually accomplished by adapting techniques developed for intensity image processing, since both range and intensity images comprise parametrically sampled geometry and appearance measurements, respectively. We present a transform-based algorithm for surface denoising, motivated by our previous work on intensity image denoising, which utilizes a non-separable Parseval frame and an ensemble thresholding scheme. The frame is constructed from separable (tensor) products of a piecewise linear spline tight frame and incorporates the weighted average operator and the Sobel operators in directions that are integer multiples of 45°. We compare the performance of this algorithm with other transform-based methods from the recent literature. Our results indicate that such transform methods are suited to the task of smoothing range images.

  5. Computational approaches for inferring the functions of intrinsically disordered proteins

    PubMed Central

    Varadi, Mihaly; Vranken, Wim; Guharoy, Mainak; Tompa, Peter

    2015-01-01

    Intrinsically disordered proteins (IDPs) are ubiquitously involved in cellular processes and often implicated in human pathological conditions. The critical biological roles of these proteins, despite not adopting a well-defined fold, encouraged structural biologists to revisit their views on the protein structure-function paradigm. Unfortunately, investigating the characteristics and describing the structural behavior of IDPs is far from trivial, and inferring the function(s) of a disordered protein region remains a major challenge. Computational methods have proven particularly relevant for studying IDPs: on the sequence level their dependence on distinct characteristics determined by the local amino acid context makes sequence-based prediction algorithms viable and reliable tools for large scale analyses, while on the structure level the in silico integration of fundamentally different experimental data types is essential to describe the behavior of a flexible protein chain. Here, we offer an overview of the latest developments and computational techniques that aim to uncover how protein function is connected to intrinsic disorder. PMID:26301226

  6. Medical-Legal Inferences From Functional Neuroimaging Evidence.

    PubMed

    Mayberg

    1996-07-01

    Positron emission tomography (PET) and single-photon emission tomography (SPECT) are validated functional imaging techniques for the in vivo measurement of many neurophysiological and neurochemical parameters. Research studies of patients with a broad range of neurological and psychiatric illness have been published. Reproducible and specific patterns of altered cerebral blood flow and glucose metabolism, however, have been demonstrated and confirmed for only a limited number of specific illnesses. The association of functional scan patterns with specific deficits is less conclusive. Correlations of regional abnormalities with clinical symptoms such as motor weakness, aphasia, and visual spatial dysfunction are the most reproducible but are more poorly localized than lesion-deficit studies would suggest. Findings are even less consistent for nonlocalizing behavioral symptoms such as memory difficulties, poor concentration, irritability, or chronic pain, and no reliable patterns have been demonstrated. In a forensic context, homicidal and sadistic tendencies, aberrant sexual drive, violent impulsivity, psychopathic and sociopathic personality traits, as well as impaired judgement and poor insight, have no known PET or SPECT patterns, and their presence in an individual with any PET or SPECT scan finding cannot be inferred or concluded. Furthermore, the reliable prediction of any specific neurological, psychiatric, or behavioral deficits from specific scan findings has not been demonstrated. Unambiguous results from experiments designed to specifically examine the causative relationships between regional brain dysfunction and these types of complex behaviors are needed before any introduction of functional scans into the courts can be considered scientifically justified or legally admissible.

  7. Functional and evolutionary inference in gene networks: does topology matter?

    PubMed

    Siegal, Mark L; Promislow, Daniel E L; Bergman, Aviv

    2007-01-01

    The relationship between the topology of a biological network and its functional or evolutionary properties has attracted much recent interest. It has been suggested that most, if not all, biological networks are 'scale free.' That is, their connections follow power-law distributions, such that there are very few nodes with very many connections and vice versa. The number of target genes of known transcriptional regulators in the yeast, Saccharomyces cerevisiae, appears to follow such a distribution, as do other networks, such as the yeast network of protein-protein interactions. These findings have inspired attempts to draw biological inferences from general properties associated with scale-free network topology. One often cited general property is that, when compromised, highly connected nodes will tend to have a larger effect on network function than sparsely connected nodes. For example, more highly connected proteins are more likely to be lethal when knocked out. However, the correlation between lethality and connectivity is relatively weak, and some highly connected proteins can be removed without noticeable phenotypic effect. Similarly, network topology only weakly predicts the response of gene expression to environmental perturbations. Evolutionary simulations of gene-regulatory networks, presented here, suggest that such weak or non-existent correlations are to be expected, and are likely not due to inadequacy of experimental data. We argue that 'top-down' inferences of biological properties based on simple measures of network topology are of limited utility, and we present simulation results suggesting that much more detailed information about a gene's location in a regulatory network, as well as dynamic gene-expression data, are needed to make more meaningful functional and evolutionary predictions. Specifically, we find in our simulations that: (1) the relationship between a gene's connectivity and its fitness effect upon knockout depends on its

  8. Multiscale image blind denoising.

    PubMed

    Lebrun, Marc; Colom, Miguel; Morel, Jean-Michel

    2015-10-01

    Arguably, several thousand papers are dedicated to image denoising. Most papers assume a fixed noise model, mainly white Gaussian or Poissonian. This assumption is only valid for raw images. Yet, in most images handled by the public and even by scientists, the noise model is imperfectly known or unknown. End users have access only to the result of a complex image processing chain carried out by uncontrolled hardware and software (and sometimes by chemical means). For such images, recent progress in noise estimation makes it possible to estimate, from a single image, a noise model that is simultaneously signal and frequency dependent. We propose here a multiscale denoising algorithm adapted to this broad noise model. This leads to a blind denoising algorithm, which we demonstrate on real JPEG images and on scans of old photographs for which the formation model is unknown. The consistency of this algorithm is also verified on simulated distorted images. The algorithm is finally compared with the only previous state-of-the-art blind denoising method.

  9. Nonlinear Image Denoising Methodologies

    DTIC Science & Technology

    2002-05-01

    [Excerpted table of contents: 5.3 A Multiscale Approach to Scale-Space Analysis; 5.4 …] In this thesis, our approach to denoising is first based on a controlled nonlinear stochastic random walk to achieve a scale-space analysis (as in … stochastic treatment or interpretation of the diffusion. In addition, unless a specific stopping time is known to be adequate, the resulting evolution …

  10. Photoacoustic signals denoising of the glucose aqueous solutions using an improved wavelet threshold method

    NASA Astrophysics Data System (ADS)

    Ren, Zhong; Liu, Guodong; Xiong, Zhihua

    2016-10-01

    The denoising of the photoacoustic signals of glucose is one of the most important steps in quality identification of fruit, because the real-time photoacoustic signals of glucose are easily contaminated by various kinds of noise. To remove the noise and other useless information, an improved wavelet threshold function is proposed. Compared with the traditional hard and soft wavelet threshold functions, the improved threshold function is continuous, which avoids the pseudo-oscillation effect in the denoised photoacoustic signals and decreases the error between the denoised and original signals. To validate the feasibility of denoising with the improved threshold function, simulation experiments were performed in MATLAB on a standard test signal, comparing three other denoising methods against the improved threshold function. The signal-to-noise ratio (SNR) and root-mean-square error (RMSE) were used to evaluate performance. The experimental results demonstrate that the improved wavelet threshold function yields the largest SNR and the smallest RMSE, which fully verifies its feasibility. Finally, the improved wavelet threshold function was used to remove the noise from the photoacoustic signals of glucose solutions, with very good results. The improved wavelet threshold denoising proposed in this paper therefore has potential value in the denoising of photoacoustic signals.
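    The record does not give the improved threshold function explicitly; the sketch below uses one common continuous compromise between soft and hard thresholding (with PyWavelets), purely as an illustration of the technique.

    ```python
    # Wavelet threshold denoising with a continuous shrinkage function that
    # behaves like soft thresholding near the threshold and approaches hard
    # thresholding for large coefficients. The exact function in the paper
    # is not reproduced here; this smooth variant is an assumption.
    import numpy as np
    import pywt

    def smooth_threshold(c, thr, alpha=2.0):
        return np.sign(c) * np.maximum(
            np.abs(c) - thr / (1.0 + np.abs(c / thr) ** alpha), 0.0)

    def denoise(signal, wavelet="db4", level=4):
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # robust noise estimate
        thr = sigma * np.sqrt(2.0 * np.log(signal.size))   # universal threshold
        den = [coeffs[0]] + [smooth_threshold(c, thr) for c in coeffs[1:]]
        return pywt.waverec(den, wavelet)[: signal.size]
    ```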

  11. MR image denoising method for brain surface 3D modeling

    NASA Astrophysics Data System (ADS)

    Zhao, De-xin; Liu, Peng-jie; Zhang, De-gan

    2014-11-01

    Three-dimensional (3D) modeling of medical images is a critical part of surgical simulation. In this paper, we focus on denoising magnetic resonance (MR) images for brain model reconstruction and present a practical solution. We attempt to remove the noise existing in the MR imaging signal while preserving the image characteristics. A wavelet-based adaptive curve shrinkage function is presented in a spherical coordinate system. The comparative experiments show that the denoising method preserves image details better and enhances the coefficients of contours. Using these denoised images, the brain is visualized in 3D through a surface triangle mesh model, which demonstrates the effectiveness of the proposed method.

  12. Minimum risk wavelet shrinkage operator for Poisson image denoising.

    PubMed

    Cheng, Wu; Hirakawa, Keigo

    2015-05-01

    The pixel values of images taken by an image sensor are said to be corrupted by Poisson noise. To date, multiscale Poisson image denoising techniques have processed Haar frame and wavelet coefficients; the modeling of the coefficients is enabled by Skellam distribution analysis. We extend these results by solving for Skellam shrinkage operators that minimize the risk functional in the multiscale Poisson image denoising setting. The resulting minimum risk shrinkage operator produces denoised wavelet coefficients with the minimum attainable L2 error.

  13. Study on De-noising Technology of Radar Life Signal

    NASA Astrophysics Data System (ADS)

    Yang, Xiu-Fang; Wang, Lian-Huan; Ma, Jiang-Fei; Wang, Pei-Pei

    2016-05-01

    Radar detection is a novel life detection technology that can be applied to medical monitoring, anti-terrorism, disaster relief, and street fighting. Because the radar life signal is very weak, it is often submerged in noise, and given the non-stationarity and randomness of these clutter signals, efficient denoising is necessary before the useful signal can be extracted and separated. This paper improves the theoretical continuous-wave model of the radar life signal, performs denoising by introducing the lifting wavelet transform, and determines the best threshold function by comparing the denoising effects of different threshold functions. The results indicate that both the SNR and MSE of the signal are better than with traditional methods when the lifting wavelet transform and the new improved soft-threshold function are used.

  14. CONSTRUCTING A FLEXIBLE LIKELIHOOD FUNCTION FOR SPECTROSCOPIC INFERENCE

    SciTech Connect

    Czekala, Ian; Andrews, Sean M.; Mandel, Kaisey S.; Green, Gregory M.; Hogg, David W.

    2015-10-20

    We present a modular, extensible likelihood framework for spectroscopic inference based on synthetic model spectra. The subtraction of an imperfect model from a continuously sampled spectrum introduces covariance between adjacent datapoints (pixels) into the residual spectrum. For the high signal-to-noise data with large spectral range that is commonly employed in stellar astrophysics, that covariant structure can lead to dramatically underestimated parameter uncertainties (and, in some cases, biases). We construct a likelihood function that accounts for the structure of the covariance matrix, utilizing the machinery of Gaussian process kernels. This framework specifically addresses the common problem of mismatches in model spectral line strengths (with respect to data) due to intrinsic model imperfections (e.g., in the atomic/molecular databases or opacity prescriptions) by developing a novel local covariance kernel formalism that identifies and self-consistently downweights pathological spectral line “outliers.” By fitting many spectra in a hierarchical manner, these local kernels provide a mechanism to learn about and build data-driven corrections to synthetic spectral libraries. An open-source software implementation of this approach is available at http://iancze.github.io/Starfish, including a sophisticated probabilistic scheme for spectral interpolation when using model libraries that are sparsely sampled in the stellar parameters. We demonstrate some salient features of the framework by fitting the high-resolution V-band spectrum of WASP-14, an F5 dwarf with a transiting exoplanet, and the moderate-resolution K-band spectrum of Gliese 51, an M5 field dwarf.

  15. Constructing a Flexible Likelihood Function for Spectroscopic Inference

    NASA Astrophysics Data System (ADS)

    Czekala, Ian; Andrews, Sean M.; Mandel, Kaisey S.; Hogg, David W.; Green, Gregory M.

    2015-10-01

    We present a modular, extensible likelihood framework for spectroscopic inference based on synthetic model spectra. The subtraction of an imperfect model from a continuously sampled spectrum introduces covariance between adjacent datapoints (pixels) into the residual spectrum. For the high signal-to-noise data with large spectral range that is commonly employed in stellar astrophysics, that covariant structure can lead to dramatically underestimated parameter uncertainties (and, in some cases, biases). We construct a likelihood function that accounts for the structure of the covariance matrix, utilizing the machinery of Gaussian process kernels. This framework specifically addresses the common problem of mismatches in model spectral line strengths (with respect to data) due to intrinsic model imperfections (e.g., in the atomic/molecular databases or opacity prescriptions) by developing a novel local covariance kernel formalism that identifies and self-consistently downweights pathological spectral line “outliers.” By fitting many spectra in a hierarchical manner, these local kernels provide a mechanism to learn about and build data-driven corrections to synthetic spectral libraries. An open-source software implementation of this approach is available at http://iancze.github.io/Starfish, including a sophisticated probabilistic scheme for spectral interpolation when using model libraries that are sparsely sampled in the stellar parameters. We demonstrate some salient features of the framework by fitting the high-resolution V-band spectrum of WASP-14, an F5 dwarf with a transiting exoplanet, and the moderate-resolution K-band spectrum of Gliese 51, an M5 field dwarf.
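    A schematic of the covariance structure described in these two records follows, with a diagonal noise term, a global stationary kernel for correlated residuals, and local kernels that inflate the covariance around outlier lines; the kernel forms and parameters are illustrative assumptions, not the Starfish implementation.

    ```python
    # Build a residual covariance matrix: noise + global kernel + local
    # "line outlier" kernels centered on pathological spectral lines.
    import numpy as np

    def global_kernel(wl, amp=0.01, scale=2.0):
        d = wl[:, None] - wl[None, :]
        return amp**2 * np.exp(-0.5 * (d / scale) ** 2)

    def local_kernel(wl, mu, amp, width):
        # Outer product of Gaussians localizes extra covariance near line mu.
        g = np.exp(-0.5 * ((wl - mu) / width) ** 2)
        return amp**2 * np.outer(g, g)

    def build_covariance(wl, noise_var, outlier_lines):
        K = np.diag(noise_var) + global_kernel(wl)
        for mu, amp, width in outlier_lines:   # downweighted "outlier" lines
            K += local_kernel(wl, mu, amp, width)
        return K
    ```

    The Gaussian log-likelihood of the residual vector r under this covariance is then ln L = -0.5 (rᵀK⁻¹r + ln det K + N ln 2π).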

  16. On the Inference of Functional Circadian Networks Using Granger Causality.

    PubMed

    Pourzanjani, Arya; Herzog, Erik D; Petzold, Linda R

    2015-01-01

    Being able to infer one-way direct connections in an oscillatory network such as the suprachiasmatic nucleus (SCN) of the mammalian brain using time series data is difficult but crucial to understanding network dynamics. Although techniques have been developed for inferring networks from time series data, there have been no attempts to adapt these techniques to infer directional connections in oscillatory time series while accurately distinguishing between direct and indirect connections. In this paper an adaptation of Granger Causality, called Adaptive Frequency Granger Causality (AFGC), is proposed that allows for inference of circadian networks and oscillatory networks in general. Additionally, an extension of this method, LASSO AFGC, is proposed to infer networks with large numbers of cells. The method was validated using simulated data from several different networks. For the smaller networks, the method identified all one-way direct connections without identifying connections that were not present. For larger networks of up to twenty cells, the method shows excellent performance in identifying true and false connections, quantified by an area under the curve (AUC) of 96.88%. We note that this method, like other Granger Causality-based methods, is based on the detection of high-frequency signals propagating between cell traces. It therefore requires a relatively high sampling rate and a network that can propagate high-frequency signals.
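    The sketch below runs a standard pairwise Granger-causality test on two synthetic oscillatory traces with statsmodels; it illustrates the underlying idea only, not the AFGC or LASSO AFGC adaptations proposed in the record.

    ```python
    # Does x Granger-cause y? Columns are ordered (effect, cause).
    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(0)
    n = 2000
    x = np.sin(2 * np.pi * np.arange(n) / 240) + 0.1 * rng.standard_normal(n)
    y = np.roll(x, 5) + 0.1 * rng.standard_normal(n)   # y follows x with a lag

    results = grangercausalitytests(np.column_stack([y, x]), maxlag=8)
    ```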

  17. Quantum Boolean image denoising

    NASA Astrophysics Data System (ADS)

    Mastriani, Mario

    2015-05-01

    A quantum Boolean image processing methodology is presented in this work, with special emphasis on image denoising. A new approach for internal image representation is outlined, together with two new interfaces: classical-to-quantum and quantum-to-classical. The new quantum Boolean image denoising method, called the quantum Boolean mean filter, works exclusively with computational basis states (CBS). To achieve this, we first decompose the image into its three color components, i.e., red, green, and blue. Then, we obtain the bitplanes for each color, e.g., 8 bits per pixel, i.e., 8 bitplanes per color. From then on, we work exclusively with the bitplane corresponding to the most significant bit (MSB) of each color. After a classical-to-quantum interface (which includes a classical inverter), we have a quantum Boolean version of the image inside the quantum machine. This methodology allows us to avoid the problem of quantum measurement, which alters the measured state except in the case of CBS; this observation extends to quantum algorithms outside image processing as well. After filtering the inverted version of the MSB (inside the quantum machine), the result passes through a quantum-classical interface (which involves another classical inverter); each color component is then reassembled to produce the final filtered image. Finally, we discuss the most appropriate metrics for image denoising in a set of experimental results.

  18. Improved Rotating Kernel Transformation Based Contourlet Domain Image Denoising Framework.

    PubMed

    Guo, Qing; Dong, Fangmin; Sun, Shuifa; Ren, Xuhong; Feng, Shiyu; Gao, Bruce Zhi

    A contourlet domain image denoising framework based on a novel Improved Rotating Kernel Transformation is proposed, in which the difference between subbands in the contourlet domain is taken into account. In detail: (1) a novel Improved Rotating Kernel Transformation (IRKT) is proposed to calculate the direction statistic of the image; the validity of the IRKT is verified by comparing the extracted edge information with that of a state-of-the-art edge detection algorithm; (2) the direction statistic, which represents the difference between subbands, is introduced as weights into threshold-function-based contourlet domain denoising approaches to form the novel framework. The proposed framework is used to improve the contourlet soft-thresholding (CTSoft) and contourlet bivariate-thresholding (CTB) algorithms. The denoising results on conventional test images and Optical Coherence Tomography (OCT) medical images show that the proposed methods improve the existing contourlet-based thresholding denoising algorithms, especially for the medical images.

  19. Improved Rotating Kernel Transformation Based Contourlet Domain Image Denoising Framework

    PubMed Central

    Guo, Qing; Dong, Fangmin; Ren, Xuhong; Feng, Shiyu; Gao, Bruce Zhi

    2016-01-01

    A contourlet domain image denoising framework based on a novel Improved Rotating Kernel Transformation is proposed, in which the difference between subbands in the contourlet domain is taken into account. In detail: (1) a novel Improved Rotating Kernel Transformation (IRKT) is proposed to calculate the direction statistic of the image; the validity of the IRKT is verified by comparing the extracted edge information with that of a state-of-the-art edge detection algorithm; (2) the direction statistic, which represents the difference between subbands, is introduced as weights into threshold-function-based contourlet domain denoising approaches to form the novel framework. The proposed framework is used to improve the contourlet soft-thresholding (CTSoft) and contourlet bivariate-thresholding (CTB) algorithms. The denoising results on conventional test images and Optical Coherence Tomography (OCT) medical images show that the proposed methods improve the existing contourlet-based thresholding denoising algorithms, especially for the medical images. PMID:27148597

  20. Color Image Denoising via Discriminatively Learned Iterative Shrinkage.

    PubMed

    Sun, Jian; Xu, Zongben

    2015-11-01

    In this paper, we propose a novel model, the discriminatively learned iterative shrinkage (DLIS) model, for color image denoising. The DLIS is a generalization of wavelet shrinkage that iteratively performs shrinkage over patch groups and whole-image aggregation. We discriminatively learn the shrinkage functions and basis from training pairs of noisy/noise-free images, which can adaptively handle different noise characteristics in the luminance/chrominance channels and the unknown structured noise in real-captured color images. Furthermore, to remove splotchy real color noise, we design a Laplacian-pyramid-based denoising framework to progressively recover the clean image from the coarsest scale to the finest scale using the DLIS model learned from real color noise. Experiments show that our proposed approach achieves state-of-the-art denoising results on both synthetic denoising benchmarks and real-captured color images.

  1. Structure-based inference of molecular functions of proteins of unknown function from Berkeley Structural Genomics Center

    SciTech Connect

    Kim, Sung-Hou; Shin, Dong Hae; Hou, Jingtong; Chandonia, John-Marc; Das, Debanu; Choi, In-Geol; Kim, Rosalind; Kim, Sung-Hou

    2007-09-02

    Advances in sequence genomics have resulted in an accumulation of a huge number of protein sequences derived from genome sequences. However, the functions of a large portion of them cannot be inferred based on the current methods of sequence homology detection to proteins of known functions. Three-dimensional structure can have an important impact in providing inference of molecular function (physical and chemical function) of a protein of unknown function. Structural genomics centers worldwide have been determining many 3-D structures of the proteins of unknown functions, and possible molecular functions of them have been inferred based on their structures. Combined with bioinformatics and enzymatic assay tools, the successful acceleration of the process of protein structure determination through high throughput pipelines enables the rapid functional annotation of a large fraction of hypothetical proteins. We present a brief summary of the process we used at the Berkeley Structural Genomics Center to infer molecular functions of proteins of unknown function.

  2. Structure-based inference of molecular functions of proteins of unknown function from Berkeley Structural Genomics Center.

    PubMed

    Shin, Dong Hae; Hou, Jingtong; Chandonia, John-Marc; Das, Debanu; Choi, In-Geol; Kim, Rosalind; Kim, Sung-Hou

    2007-09-01

    Advances in sequence genomics have resulted in an accumulation of a huge number of protein sequences derived from genome sequences. However, the functions of a large portion of them cannot be inferred based on the current methods of sequence homology detection to proteins of known functions. Three-dimensional structure can have an important impact in providing inference of molecular function (physical and chemical function) of a protein of unknown function. Structural genomics centers worldwide have been determining many 3-D structures of the proteins of unknown functions, and possible molecular functions of them have been inferred based on their structures. Combined with bioinformatics and enzymatic assay tools, the successful acceleration of the process of protein structure determination through high throughput pipelines enables the rapid functional annotation of a large fraction of hypothetical proteins. We present a brief summary of the process we used at the Berkeley Structural Genomics Center to infer molecular functions of proteins of unknown function.

  3. Nonlocal Markovian models for image denoising

    NASA Astrophysics Data System (ADS)

    Salvadeo, Denis H. P.; Mascarenhas, Nelson D. A.; Levada, Alexandre L. M.

    2016-01-01

    Currently, the state-of-the-art methods for image denoising are patch-based approaches. Redundant information present in nonlocal regions (patches) of the image is considered for better image modeling, resulting in an improved quality of filtering. In this respect, nonlocal Markov random field (MRF) models are proposed by redefining the energy functions of classical MRF models to adopt a nonlocal approach. With the new energy functions, the pairwise pixel interaction is weighted according to the similarities between the patches corresponding to each pair. Also, a maximum pseudolikelihood estimation of the spatial dependency parameter (β) for these models is presented here. For evaluating this proposal, these models are used as an a priori model in a maximum a posteriori estimation to denoise additive white Gaussian noise in images. Finally, results display a notable improvement in both quantitative and qualitative terms in comparison with the local MRFs.

  4. CT reconstruction via denoising approximate message passing

    NASA Astrophysics Data System (ADS)

    Perelli, Alessandro; Lexa, Michael A.; Can, Ali; Davies, Mike E.

    2016-05-01

    In this paper, we adapt and apply a compressed sensing based reconstruction algorithm to the problem of computed tomography reconstruction for luggage inspection. Specifically, we propose a variant of the denoising generalized approximate message passing (D-GAMP) algorithm and compare its performance to the performance of traditional filtered back projection and to a penalized weighted least squares (PWLS) based reconstruction method. D-GAMP is an iterative algorithm that at each iteration estimates the conditional probability of the image given the measurements and employs a non-linear "denoising" function which implicitly imposes an image prior. Results on real baggage show that D-GAMP is well-suited to limited-view acquisitions.
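    The sketch below shows a generic denoising-driven reconstruction loop in the plug-and-play style; it omits the Onsager correction term that distinguishes true (generalized) approximate message passing, so it is a simplified stand-in for D-GAMP, not the authors' algorithm. A and y are assumed given.

    ```python
    # Alternate a data-consistency gradient step with a generic denoiser
    # that implicitly imposes the image prior.
    import numpy as np

    def soft(v, t=0.05):                      # toy denoiser: sparsity prior
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def reconstruct(y, A, denoise=soft, n_iter=50, step=1.0):
        # step should be below 2 / ||A||_2^2 for stability (illustrative).
        x = A.T @ y                           # back-projection initialization
        for _ in range(n_iter):
            grad = A.T @ (A @ x - y)          # gradient of 0.5 * ||Ax - y||^2
            x = denoise(x - step * grad)      # denoiser imposes the prior
        return x
    ```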

  5. Global Image Denoising.

    PubMed

    Talebi, Hossein; Milanfar, Peyman

    2014-02-01

    Most existing state-of-the-art image denoising algorithms are based on exploiting similarity between a relatively modest number of patches. These patch-based methods are strictly dependent on patch matching, and their performance is hamstrung by the ability to reliably find sufficiently similar patches. As the number of patches grows, a point of diminishing returns is reached where the performance improvement due to more patches is offset by the lower likelihood of finding sufficiently close matches. The net effect is that while patch-based methods, such as BM3D, are excellent overall, they are ultimately limited in how well they can do on (larger) images with increasing complexity. In this paper, we address these shortcomings by developing a paradigm for truly global filtering where each pixel is estimated from all pixels in the image. Our objectives in this paper are two-fold. First, we give a statistical analysis of our proposed global filter, based on a spectral decomposition of its corresponding operator, and we study the effect of truncation of this spectral decomposition. Second, we derive an approximation to the spectral (principal) components using the Nyström extension. Using these, we demonstrate that this global filter can be implemented efficiently by sampling a fairly small percentage of the pixels in the image. Experiments illustrate that our strategy can effectively globalize any existing denoising filters to estimate each pixel using all pixels in the image, hence improving upon the best patch-based methods.

  6. Multicomponent MR Image Denoising

    PubMed Central

    Manjón, José V.; Thacker, Neil A.; Lull, Juan J.; Garcia-Martí, Gracian; Martí-Bonmatí, Luís; Robles, Montserrat

    2009-01-01

    Magnetic Resonance images are normally corrupted by random noise from the measurement process, complicating the automatic feature extraction and analysis of clinical data. It is for this reason that denoising methods have traditionally been applied to improve MR image quality. Many of these methods use the information of a single image without taking into consideration the intrinsic multicomponent nature of MR images. In this paper we propose a new filter to reduce random noise in multicomponent MR images by spatially averaging similar pixels, using information from all available image components to perform the denoising process. The proposed algorithm also uses a local Principal Component Analysis decomposition as a postprocessing step to remove more noise by using information not only in the spatial domain but also in the intercomponent domain, achieving higher noise reduction without significantly affecting the original image resolution. The proposed method has been compared with similar state-of-the-art methods on synthetic and real clinical multicomponent MR images, showing improved performance in all cases analyzed. PMID:19888431

  7. Study on an improved wavelet shift-invariant threshold denoising for pulsed laser induced glucose photoacoustic signals

    NASA Astrophysics Data System (ADS)

    Wang, Zhengzi; Ren, Zhong; Liu, Guodong

    2015-10-01

    Noninvasive measurement of blood glucose concentration has become a research hotspot worldwide due to its convenience, rapidity, and non-destructiveness. Blood glucose concentration monitoring based on the photoacoustic technique has attracted much attention because the detected signals are ultrasonic rather than optical. However, during acquisition, the photoacoustic signals of glucose are inevitably polluted by factors such as the pulsed laser, electronic noise, and environmental noise. These disturbances impact the measurement accuracy of the glucose concentration, so denoising the glucose photoacoustic signals is a key task. In this paper, a shift-invariant wavelet threshold denoising method is improved and a novel wavelet threshold function is proposed. The novel threshold function sets two threshold values and two different factors, and it is continuous with high-order derivatives; it can be regarded as a compromise between wavelet soft-threshold and hard-threshold denoising. Simulation results illustrate that, compared with other wavelet threshold denoising methods, this improved shift-invariant threshold denoising achieves a higher signal-to-noise ratio (SNR) and a smaller root-mean-square error (RMSE), and has a better overall denoising effect. The improved method therefore has potential value for denoising glucose photoacoustic signals.
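    A minimal sketch of shift-invariant (cycle-spinning) wavelet denoising follows; plain soft thresholding stands in for the two-threshold, two-factor function, which the record does not specify.

    ```python
    # Denoise circularly shifted copies of the signal and average the
    # unshifted reconstructions (cycle spinning). Requires PyWavelets.
    import numpy as np
    import pywt

    def soft(c, t):
        return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

    def cycle_spin_denoise(x, wavelet="sym8", level=4, n_shifts=8):
        out = np.zeros(x.size)
        for s in range(n_shifts):
            coeffs = pywt.wavedec(np.roll(x, s), wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745
            t = sigma * np.sqrt(2.0 * np.log(x.size))
            den = [coeffs[0]] + [soft(c, t) for c in coeffs[1:]]
            out += np.roll(pywt.waverec(den, wavelet)[: x.size], -s)
        return out / n_shifts
    ```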

  8. Bayesian Inference for Functional Dynamics Exploring in fMRI Data

    PubMed Central

    Guo, Xuan; Liu, Bing; Chen, Le; Chen, Guantao

    2016-01-01

    This paper aims to review state-of-the-art Bayesian-inference-based methods applied to functional magnetic resonance imaging (fMRI) data. Particularly, we focus on one specific long-standing challenge in the computational modeling of fMRI datasets: how to effectively explore typical functional interactions from fMRI time series and the corresponding boundaries of temporal segments. Bayesian inference is a method of statistical inference which has been shown to be a powerful tool to encode dependence relationships among the variables with uncertainty. Here we provide an introduction to a group of Bayesian-inference-based methods for fMRI data analysis, which were designed to detect magnitude or functional connectivity change points and to infer their functional interaction patterns based on corresponding temporal boundaries. We also provide a comparison of three popular Bayesian models, that is, Bayesian Magnitude Change Point Model (BMCPM), Bayesian Connectivity Change Point Model (BCCPM), and Dynamic Bayesian Variable Partition Model (DBVPM), and give a summary of their applications. We envision that more delicate Bayesian inference models will be emerging and play increasingly important roles in modeling brain functions in the years to come. PMID:27034708

  9. Bayesian Inference for Functional Dynamics Exploring in fMRI Data.

    PubMed

    Guo, Xuan; Liu, Bing; Chen, Le; Chen, Guantao; Pan, Yi; Zhang, Jing

    2016-01-01

    This paper aims to review state-of-the-art Bayesian-inference-based methods applied to functional magnetic resonance imaging (fMRI) data. Particularly, we focus on one specific long-standing challenge in the computational modeling of fMRI datasets: how to effectively explore typical functional interactions from fMRI time series and the corresponding boundaries of temporal segments. Bayesian inference is a method of statistical inference which has been shown to be a powerful tool to encode dependence relationships among the variables with uncertainty. Here we provide an introduction to a group of Bayesian-inference-based methods for fMRI data analysis, which were designed to detect magnitude or functional connectivity change points and to infer their functional interaction patterns based on corresponding temporal boundaries. We also provide a comparison of three popular Bayesian models, that is, Bayesian Magnitude Change Point Model (BMCPM), Bayesian Connectivity Change Point Model (BCCPM), and Dynamic Bayesian Variable Partition Model (DBVPM), and give a summary of their applications. We envision that more delicate Bayesian inference models will be emerging and play increasingly important roles in modeling brain functions in the years to come.

  10. Denoising forced-choice detection data.

    PubMed

    García-Pérez, Miguel A

    2010-02-01

    Observers in a two-alternative forced-choice (2AFC) detection task face the need to produce a response at random (a guess) on trials in which neither presentation appeared to display a stimulus. Observers could alternatively be instructed to use a 'guess' key on those trials, a key that would produce a random guess and would also record the resultant correct or wrong response as emanating from a computer-generated guess. A simulation study shows that 'denoising' 2AFC data with information regarding which responses are a result of guesses yields estimates of detection threshold and spread of the psychometric function that are far more precise than those obtained in the absence of this information, and parallel the precision of estimates obtained with yes-no tasks running for the same number of trials. Simulations also show that partial compliance with the instructions to use the 'guess' key reduces the quality of the estimates, which nevertheless continue to be more precise than those obtained from conventional 2AFC data if the observers are still moderately compliant. An empirical study testing the validity of simulation results showed that denoised 2AFC estimates of spread were clearly superior to conventional 2AFC estimates and similar to yes-no estimates, but variations in threshold across observers and across sessions hid the benefits of denoising for threshold estimation. The empirical study also proved the feasibility of using a 'guess' key in addition to the conventional response keys defined in 2AFC tasks.
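    A toy simulation of the 'guess key' idea follows: responses flagged as guesses can be scored separately, so the denoised analysis recovers the underlying detection probability directly. All parameter values are illustrative.

    ```python
    # Conventional 2AFC scoring mixes coin-flip guesses into proportion
    # correct; denoised scoring excludes the flagged guesses.
    import numpy as np

    rng = np.random.default_rng(1)

    def p_detect(x, threshold=1.0, spread=0.3):
        return 1.0 / (1.0 + np.exp(-(x - threshold) / spread))

    for x in np.linspace(0.4, 1.6, 7):
        detected = rng.random(400) < p_detect(x)   # stimulus actually seen
        guesses = rng.random(400) < 0.5            # coin flip when not seen
        correct = np.where(detected, True, guesses)
        print(f"x={x:.2f}  conventional={correct.mean():.2f}  "
              f"denoised={detected.mean():.2f}")
    ```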

  11. Executive Functions in Adolescence: Inferences from Brain and Behavior

    ERIC Educational Resources Information Center

    Crone, Eveline A.

    2009-01-01

    Despite the advances in understanding cognitive improvements in executive function in adolescence, much less is known about the influence of affective and social modulators on executive function and the biological underpinnings of these functions and sensitivities. Here, recent behavioral and neuroscientific studies are summarized that have used…

  12. Creators' Intentions Bias Judgments of Function Independently from Causal Inferences

    ERIC Educational Resources Information Center

    Chaigneau, Sergio E.; Castillo, Ramon D.; Martinez, Luis

    2008-01-01

    Participants learned about novel artifacts that were created for function X, but later used for function Y. When asked to rate the extent to which X and Y were a given artifact's function, participants consistently rated X higher than Y. In Experiments 1 and 2, participants were also asked to rate artifacts' efficiency to perform X and Y. This…

  13. Executive Functions in Adolescence: Inferences from Brain and Behavior

    ERIC Educational Resources Information Center

    Crone, Eveline A.

    2009-01-01

    Despite the advances in understanding cognitive improvements in executive function in adolescence, much less is known about the influence of affective and social modulators on executive function and the biological underpinnings of these functions and sensitivities. Here, recent behavioral and neuroscientific studies are summarized that have used…

  14. Birdsong Denoising Using Wavelets

    PubMed Central

    Priyadarshani, Nirosha; Marsland, Stephen; Castro, Isabel; Punchihewa, Amal

    2016-01-01

    Automatic recording of birdsong is becoming the preferred way to monitor and quantify bird populations worldwide. Programmable recorders allow recordings to be obtained at all times of day and year for extended periods of time. Consequently, there is a critical need for robust automated birdsong recognition. One prominent obstacle to achieving this is the low signal-to-noise ratio in unattended recordings. Field recordings are often very noisy: birdsong is only one component in a recording, which also includes noise from the environment (such as wind and rain), other animals (including insects), and human-related activities, as well as noise from the recorder itself. We describe a method of denoising using a combination of the wavelet packet decomposition and band-pass or low-pass filtering, and present experiments that demonstrate an order of magnitude improvement in noise reduction on natural noisy bird recordings. PMID:26812391

  15. Role of Utility and Inference in the Evolution of Functional Information

    PubMed Central

    Sharov, Alexei A.

    2009-01-01

    Functional information means an encoded network of functions in living organisms from molecular signaling pathways to an organism’s behavior. It is represented by two components: code and an interpretation system, which together form a self-sustaining semantic closure. Semantic closure allows some freedom between components because small variations of the code are still interpretable. The interpretation system consists of inference rules that control the correspondence between the code and the function (phenotype) and determines the shape of the fitness landscape. The utility factor operates at multiple time scales: short-term selection drives evolution towards higher survival and reproduction rate within a given fitness landscape, and long-term selection favors those fitness landscapes that support adaptability and lead to evolutionary expansion of certain lineages. Inference rules make short-term selection possible by shaping the fitness landscape and defining possible directions of evolution, but they are under control of the long-term selection of lineages. Communication normally occurs within a set of agents with compatible interpretation systems, which I call communication system. Functional information cannot be directly transferred between communication systems with incompatible inference rules. Each biological species is a genetic communication system that carries unique functional information together with inference rules that determine evolutionary directions and constraints. This view of the relation between utility and inference can resolve the conflict between realism/positivism and pragmatism. Realism overemphasizes the role of inference in evolution of human knowledge because it assumes that logic is embedded in reality. Pragmatism substitutes usefulness for truth and therefore ignores the advantage of inference. The proposed concept of evolutionary pragmatism rejects the idea that logic is embedded in reality; instead, inference rules are

  16. Craniofacial biomechanics and functional and dietary inferences in hominin paleontology.

    PubMed

    Grine, Frederick E; Judex, Stefan; Daegling, David J; Ozcivici, Engin; Ungar, Peter S; Teaford, Mark F; Sponheimer, Matt; Scott, Jessica; Scott, Robert S; Walker, Alan

    2010-04-01

    Finite element analysis (FEA) is a potentially powerful tool by which the mechanical behaviors of different skeletal and dental designs can be investigated, and, as such, has become increasingly popular for biomechanical modeling and inferring the behavior of extinct organisms. However, the use of FEA to extrapolate from characterization of the mechanical environment to questions of trophic or ecological adaptation in a fossil taxon is both challenging and perilous. Here, we consider the problems and prospects of FEA applications in paleoanthropology, and provide a critical examination of one such study of the trophic adaptations of Australopithecus africanus. This particular FEA is evaluated with regard to 1) the nature of the A. africanus cranial composite, 2) model validation, 3) decisions made with respect to model parameters, 4) adequacy of data presentation, and 5) interpretation of the results. Each suggests that the results reflect methodological decisions as much as any underlying biological significance. Notwithstanding these issues, this model yields predictions that follow from the posited emphasis on premolar use by A. africanus. These predictions are tested with data from the paleontological record, including a phylogenetically-informed consideration of relative premolar size, and postcanine microwear fabrics and antemortem enamel chipping. In each instance, the data fail to conform to predictions from the model. This model thus serves to emphasize the need for caution in the application of FEA in paleoanthropological enquiry. Theoretical models can be instrumental in the construction of testable hypotheses; but ultimately, the studies that serve to test these hypotheses - rather than data from the models - should remain the source of information pertaining to hominin paleobiology and evolution. Copyright 2010 Elsevier Ltd. All rights reserved.

  17. Blind Image Denoising via Dependent Dirichlet Process Tree.

    PubMed

    Zhu, Fengyuan; Chen, Guangyong; Hao, Jianye; Heng, Pheng-Ann

    2017-08-01

    Most existing image denoising approaches assume the noise to be homogeneous white Gaussian with known intensity. However, in real noisy images, the noise model is usually unknown beforehand and can be much more complex. This paper addresses this problem and proposes a novel blind image denoising algorithm to recover the clean image from a noisy one with an unknown noise model. To model the empirical noise of an image, our method introduces a mixture of Gaussian distributions, which is flexible enough to approximate different continuous distributions. The problem of blind image denoising is reformulated as a learning problem. The procedure is to first build a two-layer structural model for noisy patches, with the clean ones as latent variables. To control the complexity of the noisy patch model, this work proposes a novel Bayesian nonparametric prior, called the "Dependent Dirichlet Process Tree," to build the model. This study then derives a variational inference algorithm to estimate model parameters and recover clean patches. We apply our method to synthetic and real noisy images with different noise models. Compared with previous approaches, ours achieves better performance. The experimental results indicate the efficiency of the proposed algorithm in coping with practical image denoising tasks.

  18. Blind Image Denoising via Dependent Dirichlet Process Tree.

    PubMed

    Zhu, Fengyuan; Chen, Guangyong; Hao, Jianye; Heng, Pheng-Ann

    2016-08-31

    Most existing image denoising approaches assume the noise to be homogeneous white Gaussian with known intensity. However, in real noisy images, the noise model is usually unknown beforehand and can be much more complex. This paper addresses this problem and proposes a novel blind image denoising algorithm to recover the clean image from a noisy one with an unknown noise model. To model the empirical noise of an image, our method introduces a mixture of Gaussian distributions, which is flexible enough to approximate different continuous distributions. The problem of blind image denoising is reformulated as a learning problem. The procedure is to first build a two-layer structural model for noisy patches, with the clean ones as latent variables. To control the complexity of the noisy patch model, this work proposes a novel Bayesian nonparametric prior, called the "Dependent Dirichlet Process Tree," to build the model. This study then derives a variational inference algorithm to estimate model parameters and recover clean patches. We apply our method to synthetic and real noisy images with different noise models. Compared with previous approaches, ours achieves better performance. The experimental results indicate the efficiency of the proposed algorithm in coping with practical image denoising tasks.

  19. Inferring plant microRNA functional similarity using a weighted protein-protein interaction network.

    PubMed

    Meng, Jun; Liu, Dong; Luan, Yushi

    2015-11-04

    MiRNAs play a critical role in the response of plants to abiotic and biotic stress. However, the functions of most plant miRNAs remain unknown. Inferring these functions from miRNA functional similarity would thus be useful. This study proposes a new method, called PPImiRFS, for inferring miRNA functional similarity. The functional similarity of miRNAs was inferred from the functional similarity of their target gene sets. A protein-protein interaction network with semantic similarity weights of edges generated using Gene Ontology terms was constructed to infer the functional similarity between two target genes that belong to two different miRNAs, and the score for functional similarity was calculated using the weighted shortest path for the two target genes through the whole network. The experimental results showed that the proposed method was more effective and reliable than previous methods (miRFunSim and GOSemSim) applied to Arabidopsis thaliana. Additionally, miRNAs responding to the same type of stress had higher functional similarity than miRNAs responding to different types of stress. For the first time, a protein-protein interaction network with semantic similarity weights generated using Gene Ontology terms was employed to calculate the functional similarity of plant miRNAs. A novel method based on calculating the weighted shortest path between two target genes was introduced.
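    A sketch of the path-based similarity idea using networkx follows; converting semantic-similarity edge weights into path costs via 1 - similarity, and the decay function, are assumptions for illustration, not the PPImiRFS scoring scheme itself.

    ```python
    # Target-gene similarity from weighted shortest paths in a PPI network.
    import networkx as nx

    G = nx.Graph()
    G.add_edge("geneA", "geneB", weight=1 - 0.8)   # high GO similarity -> low cost
    G.add_edge("geneB", "geneC", weight=1 - 0.5)
    G.add_edge("geneA", "geneD", weight=1 - 0.2)

    def target_similarity(g1, g2):
        try:
            cost = nx.shortest_path_length(G, g1, g2, weight="weight")
            return 1.0 / (1.0 + cost)              # shorter path -> more similar
        except nx.NetworkXNoPath:
            return 0.0
    ```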

  20. Bayesian inference of nonpositive spectral functions in quantum field theory

    NASA Astrophysics Data System (ADS)

    Rothkopf, Alexander

    2017-03-01

    We present the generalization to nonpositive definite spectral functions of a recently proposed Bayesian deconvolution approach (BR method). The novel prior used here retains many of the beneficial analytic properties of the original method; in particular, it allows us to integrate out the hyperparameter α directly. To preserve the underlying axiom of scale invariance, we introduce a second default-model related function, whose role is discussed. Our reconstruction prescription is contrasted with existing direct methods, as well as with an approach where shift functions are introduced to compensate for negative spectral features. A mock spectrum analysis inspired by the study of gluon spectral functions in QCD illustrates the capabilities of this new approach.

  1. Generalised partition functions: inferences on phase space distributions

    NASA Astrophysics Data System (ADS)

    Treumann, Rudolf A.; Baumjohann, Wolfgang

    2016-06-01

    It is demonstrated that the statistical mechanical partition function can be used to construct various different forms of phase space distributions. This indicates that its structure is not restricted to the Gibbs-Boltzmann factor prescription which is based on counting statistics. With the widely used replacement of the Boltzmann factor by a generalised Lorentzian (also known as the q-deformed exponential function, where κ = 1/|q - 1|, with κ, q ∈ R) both the kappa-Bose and kappa-Fermi partition functions are obtained in quite a straightforward way, from which the conventional Bose and Fermi distributions follow for κ → ∞. For κ ≠ ∞ these are subject to the restrictions that they can be used only at temperatures far from zero. They thus, as shown earlier, have little value for quantum physics. This is reasonable, because physical κ systems imply strong correlations which are absent at zero temperature where apart from stochastics all dynamical interactions are frozen. In the classical large temperature limit one obtains physically reasonable κ distributions which depend on energy respectively momentum as well as on chemical potential. Looking for other functional dependencies, we examine Bessel functions whether they can be used for obtaining valid distributions. Again and for the same reason, no Fermi and Bose distributions exist in the low temperature limit. However, a classical Bessel-Boltzmann distribution can be constructed which is a Bessel-modified Lorentzian distribution. Whether it makes any physical sense remains an open question. This is not investigated here. The choice of Bessel functions is motivated solely by their convergence properties and not by reference to any physical demands. This result suggests that the Gibbs-Boltzmann partition function is fundamental not only to Gibbs-Boltzmann but also to a large class of generalised Lorentzian distributions as well as to the corresponding nonextensive statistical mechanics.
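    For reference, the replacement described in this record can be written explicitly (notation ours, following standard κ-distribution conventions, not copied from the paper):

    ```latex
    % Boltzmann factor versus the generalised Lorentzian (q-deformed
    % exponential), with \kappa = 1/|q-1| as in the abstract; the ordinary
    % exponential is recovered as \kappa \to \infty.
    e^{-\beta\epsilon} \;\longrightarrow\;
    \left(1 + \frac{\beta\epsilon}{\kappa}\right)^{-\kappa},
    \qquad
    \lim_{\kappa\to\infty}\left(1 + \frac{\beta\epsilon}{\kappa}\right)^{-\kappa}
    = e^{-\beta\epsilon}.
    ```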

  2. The use of gene clusters to infer functional coupling

    PubMed Central

    Overbeek, Ross; Fonstein, Michael; D’Souza, Mark; Pusch, Gordon D.; Maltsev, Natalia

    1999-01-01

    Previously, we presented evidence that it is possible to predict functional coupling between genes based on conservation of gene clusters between genomes. With the rapid increase in the availability of prokaryotic sequence data, it has become possible to verify and apply the technique. In this paper, we extend our characterization of the parameters that determine the utility of the approach, and we generalize the approach in a way that supports detection of common classes of functionally coupled genes (e.g., transport and signal transduction clusters). Now that the analysis includes over 30 complete or nearly complete genomes, it has become clear that this approach will play a significant role in supporting efforts to assign functionality to the remaining uncharacterized genes in sequenced genomes. PMID:10077608
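    A minimal sketch of gene-cluster evidence for functional coupling follows: count, across genomes, how often two ortholog families occur as close chromosomal neighbors. The data layout and the gap cutoff are illustrative assumptions, not the paper's exact criteria.

    ```python
    # Pairs of ortholog families that are conserved close neighbors in at
    # least min_genomes genomes are candidate functionally coupled genes.
    from collections import Counter

    def coupled_pairs(genomes, max_gap=300, min_genomes=2):
        """genomes: list of genomes; each genome is a list of chromosomes;
        each chromosome is an ordered list of (ortholog_family, start, end)."""
        counts = Counter()
        for genome in genomes:
            seen = set()
            for chrom in genome:
                for (f1, _, e1), (f2, s2, _) in zip(chrom, chrom[1:]):
                    if f1 != f2 and s2 - e1 <= max_gap:
                        seen.add(frozenset((f1, f2)))
            counts.update(seen)             # count each pair once per genome
        return {pair: n for pair, n in counts.items() if n >= min_genomes}
    ```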

  3. Network-based inference of protein activity helps functionalize the genetic landscape of cancer

    PubMed Central

    Alvarez, Mariano J.; Shen, Yao; Giorgi, Federico M.; Lachmann, Alexander; Ding, B. Belinda; Ye, B. Hilda; Califano, Andrea

    2016-01-01

    Identifying the multiple dysregulated oncoproteins that contribute to tumorigenesis in a given patient is crucial for developing personalized treatment plans. However, accurate inference of aberrant protein activity in biological samples is still challenging as genetic alterations are only partially predictive and direct measurements of protein activity are generally not feasible. To address this problem we introduce and experimentally validate a new algorithm, VIPER (Virtual Inference of Protein-activity by Enriched Regulon analysis), for the accurate assessment of protein activity from gene expression data. We use VIPER to evaluate the functional relevance of genetic alterations in regulatory proteins across all TCGA samples. In addition to accurately inferring aberrant protein activity induced by established mutations, we also identify a significant fraction of tumors with aberrant activity of druggable oncoproteins—despite a lack of mutations, and vice-versa. In vitro assays confirmed that VIPER-inferred protein activity outperforms mutational analysis in predicting sensitivity to targeted inhibitors. PMID:27322546
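    The sketch below is a heavily simplified stand-in for regulon-based activity inference, scoring a regulator by the signed mean expression of its targets; it is not the VIPER algorithm, and all names and values are hypothetical.

    ```python
    # 'regulon' maps each target gene to +1 (activated) or -1 (repressed).
    import numpy as np

    def activity_score(expression, regulon):
        """expression: dict gene -> z-scored expression in one sample."""
        signed = [mode * expression[g] for g, mode in regulon.items()
                  if g in expression]
        return float(np.mean(signed)) if signed else 0.0

    regulon = {"TGT1": +1, "TGT2": +1, "TGT3": -1}
    sample = {"TGT1": 1.8, "TGT2": 0.9, "TGT3": -1.2}
    print(activity_score(sample, regulon))   # high score -> regulator active
    ```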

  4. Adaptively Tuned Iterative Low Dose CT Image Denoising

    PubMed Central

    Hashemi, SayedMasoud; Paul, Narinder S.; Beheshti, Soosan; Cobbold, Richard S. C.

    2015-01-01

    Improving image quality is a critical objective in low dose computed tomography (CT) imaging and is the primary focus of CT image denoising. State-of-the-art CT denoising algorithms are mainly based on iterative minimization of an objective function, in which the performance is controlled by regularization parameters. To achieve the best results, these should be chosen carefully. However, the parameter selection is typically performed in an ad hoc manner, which can cause the algorithms to converge slowly or become trapped in a local minimum. To overcome these issues, a noise confidence region evaluation (NCRE) method is used, which evaluates the denoising residuals iteratively and compares their statistics with those produced by additive noise. It then updates the parameters at the end of each iteration to achieve a better match to the noise statistics. By combining NCRE with the fundamentals of the block matching and 3D filtering (BM3D) approach, a new iterative CT image denoising method is proposed. It is shown that this new denoising method improves the BM3D performance in terms of both the mean square error and a structural similarity index. Moreover, simulations and patient results show that this method preserves the clinically important details of low dose CT images together with a substantial noise reduction. PMID:26089972

  5. Adaptively Tuned Iterative Low Dose CT Image Denoising.

    PubMed

    Hashemi, SayedMasoud; Paul, Narinder S; Beheshti, Soosan; Cobbold, Richard S C

    2015-01-01

    Improving image quality is a critical objective in low dose computed tomography (CT) imaging and is the primary focus of CT image denoising. State-of-the-art CT denoising algorithms are mainly based on iterative minimization of an objective function, in which the performance is controlled by regularization parameters. To achieve the best results, these should be chosen carefully. However, the parameter selection is typically performed in an ad hoc manner, which can cause the algorithms to converge slowly or become trapped in a local minimum. To overcome these issues, a noise confidence region evaluation (NCRE) method is used, which evaluates the denoising residuals iteratively and compares their statistics with those produced by additive noise. It then updates the parameters at the end of each iteration to achieve a better match to the noise statistics. By combining NCRE with the fundamentals of the block matching and 3D filtering (BM3D) approach, a new iterative CT image denoising method is proposed. It is shown that this new denoising method improves the BM3D performance in terms of both the mean square error and a structural similarity index. Moreover, simulations and patient results show that this method preserves the clinically important details of low dose CT images together with a substantial noise reduction.
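
    As a rough illustration of the NCRE idea, the following sketch tunes a generic denoiser's regularization parameter until the residual statistics become consistent with additive noise. The `denoise` callable, the chi-square confidence bounds, and the multiplicative update rule are all illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.stats import chi2

def ncre_tune(noisy, denoise, sigma, lam=0.05, n_iter=20, step=1.3):
    """NCRE-style tuning sketch: iterate a denoiser with regularization
    knob `lam` until the residual statistics match additive noise.

    The mean square of n i.i.d. N(0, sigma) samples lies between
    sigma**2 * chi2.ppf(0.005, n)/n and sigma**2 * chi2.ppf(0.995, n)/n
    with 99% probability; that interval is the confidence region here."""
    n = noisy.size
    lo = sigma**2 * chi2.ppf(0.005, n) / n
    hi = sigma**2 * chi2.ppf(0.995, n) / n
    est = denoise(noisy, lam)
    for _ in range(n_iter):
        resid_ms = np.mean((noisy - est) ** 2)
        if resid_ms < lo:      # removed too little noise: regularize more
            lam *= step
        elif resid_ms > hi:    # removed too much (over-smoothing): regularize less
            lam /= step
        else:                  # residual statistically consistent with the noise
            break
        est = denoise(noisy, lam)
    return est, lam
```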

  6. Protein function annotation by homology-based inference

    PubMed Central

    Loewenstein, Yaniv; Raimondo, Domenico; Redfern, Oliver C; Watson, James; Frishman, Dmitrij; Linial, Michal; Orengo, Christine; Thornton, Janet; Tramontano, Anna

    2009-01-01

    With many genomes now sequenced, computational annotation methods to characterize genes and proteins from their sequence are increasingly important. The BioSapiens Network has developed tools to address all stages of this process, and here we review progress in the automated prediction of protein function based on protein sequence and structure. PMID:19226439

  7. Actively Learning Specific Function Properties with Applications to Statistical Inference

    DTIC Science & Technology

    2007-12-01

    which are distant from their nearest neighbors. However, when searching for level-sets, we are less interested in the function away from the level...

  8. Riemann-Liouville Fractional Integral Based Empirical Mode Decomposition for ECG Denoising.

    PubMed

    Jain, Shweta; Bajaj, Varun; Kumar, Anil

    2017-09-18

    Electrocardiogram (ECG) denoising is an important step in the diagnosis of heart-related diseases, as the diagnosis is easily influenced by noise. In this paper, a new method for ECG denoising is proposed, which combines the empirical mode decomposition (EMD) algorithm with Riemann-Liouville (RL) fractional integral filtering. In the proposed method, the noisy ECG signal is decomposed into its intrinsic mode functions (IMFs), from which noisy IMFs are identified by the proposed noisy-IMF identification methodology. RL fractional integral filtering is applied to the noisy IMFs to obtain denoised IMFs; the ECG signal is then reconstructed from the denoised IMFs and the remaining signal-dominant IMFs to obtain a noise-free ECG signal. The proposed methodology is tested on the MIT-BIH arrhythmia database. Its performance, in terms of signal-to-noise ratio (SNR) and mean square error (MSE), is compared with other related fractional-integral- and EMD-based ECG denoising methods. The results show that the proposed method provides efficient noise removal.
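
    A crude discretization of the Riemann-Liouville fractional integral used in the filtering step might look as follows. This is a sketch assuming a uniformly sampled signal treated as piecewise constant; the paper's exact scheme may differ:

```python
import numpy as np
from scipy.special import gamma

def rl_fractional_integral(f, alpha, dt):
    """Riemann-Liouville fractional integral of order alpha (0 < alpha < 1),
    I^alpha f(t) = (1/Gamma(alpha)) * integral_0^t (t - tau)**(alpha - 1) f(tau) dtau,
    approximated as a causal convolution by treating f as piecewise constant;
    output sample i corresponds to the right edge of sampling interval i."""
    n = len(f)
    k = np.arange(n)
    # exact integral of the power-law kernel over each sampling interval
    w = ((k + 1) ** alpha - k ** alpha) * dt ** alpha / gamma(alpha + 1)
    return np.convolve(f, w)[:n]
```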

  9. PIV anisotropic denoising using uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Wieneke, B.

    2017-08-01

    Recently, progress has been made to reliably compute uncertainty estimates for each velocity vector in planar flow fields measured with 2D- or stereo-PIV. This information can be used in a post-processing denoising scheme that reduces errors by spatial averaging while preserving true flow fluctuations. Starting with a 5 × 5 vector kernel, a second-order 2D-polynomial function is fitted to the flow field. Vectors just outside the kernel are included if they lie within the uncertainty band around the fitted function. Repeating this procedure, vectors are added in all directions until the true flow field can no longer be approximated by the second-order polynomial function. The center vector is then replaced by the value of the fitted function. The final shape and size of the filter kernel automatically adjust to local flow gradients in an optimal way, preserving true velocity fluctuations above the noise level. This anisotropic denoising scheme is validated first on synthetic vector fields, varying the spatial wavelengths of the flow field and the noise levels relative to the fluctuation amplitude. For wavelengths larger than 5-7 times the spatial resolution, a noise reduction factor of 2-4 is achieved, significantly increasing the velocity dynamic range. For large noise levels above 50% of the flow fluctuation, the denoising scheme can no longer distinguish between true flow fluctuations and noise. Finally, it is shown that the procedure performs well for typical experimental PIV vector fields. It provides an effective alternative to more complicated adaptive PIV algorithms that optimize interrogation window sizes and shapes based on seeding density, local flow gradients, and other criteria.
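
    The kernel-growing test described above can be sketched as follows. The function names, the fixed acceptance factor k, and the one-candidate interface are illustrative; the published scheme iterates this in all directions and finally replaces the centre vector by the fitted value:

```python
import numpy as np

def fit_poly2(x, y, u):
    """Least-squares fit of a second-order 2D polynomial to velocity
    samples u(x, y); returns coefficients of [1, x, y, x^2, xy, y^2]."""
    A = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
    coef, *_ = np.linalg.lstsq(A, u, rcond=None)
    return coef

def eval_poly2(coef, x, y):
    """Evaluate the fitted second-order polynomial at a single point."""
    return coef @ np.array([1.0, x, y, x * x, x * y, y * y])

def accept_candidate(kernel_x, kernel_y, kernel_u, cand, k=1.0):
    """Grow the filter kernel by one vector if the candidate agrees with
    the local second-order fit to within k times its own uncertainty.
    `cand` is (x, y, u, sigma_u), with sigma_u coming from the per-vector
    PIV uncertainty quantification mentioned in the abstract."""
    coef = fit_poly2(kernel_x, kernel_y, kernel_u)
    x, y, u, sigma_u = cand
    return abs(u - eval_poly2(coef, x, y)) <= k * sigma_u
```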

  10. Communicative functions of directional verbal probabilities: Speaker's choice, listener's inference, and reference points.

    PubMed

    Honda, Hidehito; Yamagishi, Kimihiko

    2016-09-09

    Verbal probabilities have directional communicative functions, and most can be categorized as positive (e.g., "it is likely") or negative (e.g., "it is doubtful"). We examined the communicative functions of verbal probabilities based on the reference point hypothesis. According to this hypothesis, listeners are sensitive to and can infer a speaker's reference points based on the speaker's selected directionality. In four experiments (two of which examined speakers' choice of directionality and two of which examined listeners' inferences about a speaker's reference point), we found that listeners could make inferences about speakers' reference points based on the stated directionality of verbal probability. Thus, the directionality of verbal probabilities serves the communicative function of conveying information about a speaker's reference point.

  11. Inferring Functional Relationships from Conservation of Gene Order.

    PubMed

    Moreno-Hagelsieb, Gabriel

    2017-01-01

    Predicting functional associations using the Gene Neighbor Method depends on the simple idea that if genes are conserved next to each other in evolutionarily distant prokaryotes they might belong to a polycistronic transcription unit. The procedure presented in this chapter starts with the organization of the genes within genomes into pairs of adjacent genes. Then, the pairs of adjacent genes in a genome of interest are mapped to their corresponding orthologs in other, informative, genomes. The final step is to verify if the mapped orthologs are also pairs of adjacent genes in the informative genomes.
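
    A minimal sketch of the procedure follows. The gene-order lists and the ortholog mapping are assumed inputs; a real pipeline must additionally handle strand, operon breaks, and paralogs:

```python
from collections import defaultdict

def adjacent_pairs(gene_order):
    """Unordered pairs of neighbouring genes in a gene-order list."""
    return {frozenset(p) for p in zip(gene_order, gene_order[1:])}

def conserved_neighbors(query_order, informative, orthologs):
    """Gene Neighbor Method sketch: for each adjacent gene pair of the
    query genome, count the informative genomes in which the orthologs
    of the pair are also adjacent.

    informative: {genome_id: gene-order list}
    orthologs:   {genome_id: {query_gene: ortholog_gene}}, same keys."""
    pairs_by_genome = {g: adjacent_pairs(order)
                       for g, order in informative.items()}
    counts = defaultdict(int)
    for pair in adjacent_pairs(query_order):
        a, b = tuple(pair)
        for g, omap in orthologs.items():
            if a in omap and b in omap:
                if frozenset((omap[a], omap[b])) in pairs_by_genome[g]:
                    counts[(a, b)] += 1
    return counts  # high counts across distant genomes suggest coupling
```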

  12. Structure and function of the mammalian middle ear. II: Inferring function from structure.

    PubMed

    Mason, Matthew J

    2016-02-01

    Anatomists and zoologists who study middle ear morphology are often interested to know what the structure of an ear can reveal about the auditory acuity and hearing range of the animal in question. This paper represents an introduction to middle ear function targeted towards biological scientists with little experience in the field of auditory acoustics. Simple models of impedance matching are first described, based on the familiar concepts of the area and lever ratios of the middle ear. However, using the Mongolian gerbil Meriones unguiculatus as a test case, it is shown that the predictions made by such 'ideal transformer' models are generally not consistent with measurements derived from recent experimental studies. Electrical analogue models represent a better way to understand some of the complex, frequency-dependent responses of the middle ear: these have been used to model the effects of middle ear subcavities, and the possible function of the auditory ossicles as a transmission line. The concepts behind such models are explained here, again aimed at those with little background knowledge. Functional inferences based on middle ear anatomy are more likely to be valid at low frequencies. Acoustic impedance at low frequencies is dominated by compliance; expanded middle ear cavities, found in small desert mammals including gerbils, jerboas and the sengi Macroscelides, are expected to improve low-frequency sound transmission, as long as the ossicular system is not too stiff. © 2015 Anatomical Society.

  13. Denoising Magnetic Resonance Images Using Collaborative Non-Local Means.

    PubMed

    Chen, Geng; Zhang, Pei; Wu, Yafeng; Shen, Dinggang; Yap, Pew-Thian

    2016-02-12

    Noise artifacts in magnetic resonance (MR) images increase the complexity of image processing workflows and decrease the reliability of inferences drawn from the images. It is thus often desirable to remove such artifacts beforehand for more robust and effective quantitative analysis. It is important to preserve the integrity of relevant image information while removing noise in MR images. A variety of approaches have been developed for this purpose, and the non-local means (NLM) filter has been shown to be able to achieve state-of-the-art denoising performance. For effective denoising, NLM relies heavily on the existence of repeating structural patterns, which however might not always be present within a single image. This is especially true when one considers the fact that the human brain is complex and contains many unique structures. In this paper we propose to leverage the repeating structures from multiple images to collaboratively denoise an image. The underlying assumption is that it is more likely to find repeating structures in multiple scans than in a single scan. Specifically, to denoise a target image, multiple images, which may be acquired from different subjects, are spatially aligned to the target image, and an NLM-like block matching is performed on these aligned images with the target image as the reference. This significantly increases the number of matching structures and thus boosts the denoising performance. Experiments on both synthetic and real data show that the proposed approach, collaborative non-local means (CNLM), outperforms the classic NLM and yields results with markedly improved structural details.

  14. Denoising Magnetic Resonance Images Using Collaborative Non-Local Means

    PubMed Central

    Chen, Geng; Zhang, Pei; Wu, Yafeng; Shen, Dinggang; Yap, Pew-Thian

    2015-01-01

    Noise artifacts in magnetic resonance (MR) images increase the complexity of image processing workflows and decrease the reliability of inferences drawn from the images. It is thus often desirable to remove such artifacts beforehand for more robust and effective quantitative analysis. It is important to preserve the integrity of relevant image information while removing noise in MR images. A variety of approaches have been developed for this purpose, and the non-local means (NLM) filter has been shown to be able to achieve state-of-the-art denoising performance. For effective denoising, NLM relies heavily on the existence of repeating structural patterns, which however might not always be present within a single image. This is especially true when one considers the fact that the human brain is complex and contains many unique structures. In this paper we propose to leverage the repeating structures from multiple images to collaboratively denoise an image. The underlying assumption is that it is more likely to find repeating structures in multiple scans than in a single scan. Specifically, to denoise a target image, multiple images, which may be acquired from different subjects, are spatially aligned to the target image, and an NLM-like block matching is performed on these aligned images with the target image as the reference. This significantly increases the number of matching structures and thus boosts the denoising performance. Experiments on both synthetic and real data show that the proposed approach, collaborative non-local means (CNLM), outperforms the classic NLM and yields results with markedly improved structural details. PMID:26949289
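
    The core weighting scheme can be sketched for a single pixel as follows: a plain NLM weight extended over several pre-aligned images. Patch size, search window, the filtering parameter h, and the assumption that the pixel lies away from image borders are all illustrative choices:

```python
import numpy as np

def cnlm_pixel(images, i, j, patch=3, search=7, h=10.0):
    """Collaborative non-local means sketch for pixel (i, j) of images[0]
    (the target); the remaining arrays in `images` are assumed already
    spatially aligned to the target. Matching patches across all images
    increases the chance of finding repeating structures.
    Assumes (i, j) and its search window lie inside the image borders."""
    r, s = patch // 2, search // 2
    ref = images[0][i - r:i + r + 1, j - r:j + r + 1]
    num, den = 0.0, 0.0
    for img in images:                      # target itself plus aligned scans
        for di in range(-s, s + 1):
            for dj in range(-s, s + 1):
                ii, jj = i + di, j + dj
                cand = img[ii - r:ii + r + 1, jj - r:jj + r + 1]
                w = np.exp(-np.sum((ref - cand) ** 2) / (h * h))
                num += w * img[ii, jj]
                den += w
    return num / den
```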

  15. Functional Inference of Complex Anatomical Tendinous Networks at a Macroscopic Scale via Sparse Experimentation

    PubMed Central

    Saxena, Anupam; Lipson, Hod; Valero-Cuevas, Francisco J.

    2012-01-01

    In systems and computational biology, much effort is devoted to functional identification of systems and networks at the molecular or cellular scale. However, similarly important networks exist at anatomical scales, such as the tendon network of human fingers: the complex array of collagen fibers that transmits and distributes muscle forces to finger joints. This network is critical to the versatility of the human hand, and its function has been debated since at least the 16th century. Here, we experimentally infer the structure (both topology and parameter values) of this network through sparse interrogation with force inputs. A population of models representing this structure co-evolves in simulation with a population of informative future force inputs via the predator-prey estimation-exploration algorithm. Model fitness depends on their ability to explain experimental data, while the fitness of future force inputs depends on causing maximal functional discrepancy among current models. We validate our approach by inferring two known synthetic latex networks, and one anatomical tendon network harvested from a cadaver's middle finger. We find that functionally similar but structurally diverse models can exist within a narrow range of the training set and cross-validation errors. For the latex networks, models with low training set error [<4%] and resembling the known network have the smallest cross-validation errors [∼5%]. The low training set [<4%] and cross-validation [<7.2%] errors for models of the cadaveric specimen demonstrate what, to our knowledge, is the first experimental inference of the functional structure of complex anatomical networks. This work expands current bioinformatics inference approaches by demonstrating that sparse yet informative interrogation of biological specimens holds significant computational advantages in accurate and efficient inference over random testing, or over assuming model topology and only inferring parameter values.

  16. Regulatory Networks:. Inferring Functional Relationships Through Co-Expression

    NASA Astrophysics Data System (ADS)

    Wanke, Dierk; Hahn, Achim; Kilian, Joachim; Harter, Klaus; Berendzen, Kenneth W.

    2010-01-01

    Gene expression data not only provide us with insights into the discrete transcript abundance of specific genes, but also contain cryptic information that cannot readily be assessed without interpretation. We again used data from the plant Arabidopsis thaliana as our reference organism, yet the analysis presented herein can be performed with any organism and various data sources. Within the cell, information is transduced via different signaling cascades and results in differential gene expression responses. The incoming signals are perceived by upstream signaling components and handed to downstream messengers that further deliver the signals to effector proteins, which can directly influence gene expression. In most cases, we can assume that proteins which are connected to other signaling components within such a regulatory network exhibit similar expression trajectories. Thus, we extracted a known functional network from the literature and demonstrated that it is possible to superimpose microarray expression data onto its pathways. Thereby, we could follow the information flow through time, as reflected by gene expression changes. This allowed us to predict whether the upstream signal was transmitted from known elements contained in the network or relayed from outside components. We next took the reverse approach and used large-scale microarray expression data to build a co-expression matrix for all genes present on the array. From this, we computed a regulatory network, which allowed us to deduce known and novel signaling pathways.

  17. Crustal structure beneath northeast India inferred from receiver function modeling

    NASA Astrophysics Data System (ADS)

    Borah, Kajaljyoti; Bora, Dipok K.; Goyal, Ayush; Kumar, Raju

    2016-09-01

    We estimated the crustal shear velocity structure beneath ten broadband seismic stations of northeast India by using the H-Vp/Vs stacking method and a non-linear direct search approach, the Neighbourhood Algorithm (NA), followed by joint inversion of Rayleigh wave group velocities and receiver functions calculated from teleseismic earthquake data. Results show significant variations of thickness, shear velocity (Vs) and Vp/Vs ratio in the crust of the study region. The inverted shear wave velocity models show crustal thickness variations of 32-36 km in the Shillong Plateau (north), 36-40 km in the Assam Valley and ∼44 km in the Lesser Himalaya (south). The average Vp/Vs ratio in the Shillong Plateau is lower (1.73-1.77) than in the Assam Valley and Lesser Himalaya (∼1.80). The average crustal shear velocity beneath the study region varies from 3.4 to 3.5 km/s. The sediment structure beneath the Shillong Plateau and Assam Valley shows a 1-2 km thick sediment layer with low Vs (2.5-2.9 km/s) and a high Vp/Vs ratio (1.8-2.1), while a greater thickness (4 km) with similar Vs and high Vp/Vs (∼2.5) is observed at RUP (Lesser Himalaya). Both the Shillong Plateau and the Assam Valley show a thick upper and middle crust (10-20 km) and a thin (4-9 km) lower crust. The average Vp/Vs ratios suggest that the crust is felsic-to-intermediate beneath the Shillong Plateau and intermediate-to-mafic beneath the Assam Valley. Results show that lower crustal rocks beneath the Shillong Plateau and Assam Valley lie between mafic granulite and mafic garnet granulite.
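
    For reference, H-Vp/Vs (H-κ) stacking grid-searches crustal thickness H and κ = Vp/Vs by stacking receiver-function amplitudes at the predicted arrival times of the Ps conversion and its multiples; in one standard formulation, the delay of the Ps conversion behind the direct P arrival, for ray parameter p, is:

```latex
t_{Ps} = H\left(\sqrt{\frac{1}{V_s^{2}} - p^{2}}
              - \sqrt{\frac{1}{V_p^{2}} - p^{2}}\right),
\qquad V_s = V_p/\kappa .
```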

  18. Pipeline for effective denoising of digital mammography and digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Borges, Lucas R.; Bakic, Predrag R.; Foi, Alessandro; Maidment, Andrew D. A.; Vieira, Marcelo A. C.

    2017-03-01

    Denoising can be used as a tool to enhance image quality and enable low radiation doses in X-ray medical imaging. The effectiveness of denoising techniques relies on the validity of the underlying noise model. In full-field digital mammography (FFDM) and digital breast tomosynthesis (DBT), calibration steps like the detector offset and flat-fielding can affect some assumptions made by most denoising techniques. Furthermore, the quantum noise found in X-ray images is signal-dependent and can only be treated by specific filters. In this work we propose a pipeline for FFDM and DBT image denoising that accounts for the calibration steps and simplifies the modeling of the noise statistics through variance-stabilizing transformations (VST). The performance of a state-of-the-art denoising method was tested with and without the proposed pipeline. To evaluate the method, objective metrics such as the normalized root mean square error (N-RMSE), noise power spectrum, modulation transfer function (MTF) and frequency signal-to-noise ratio (SNR) were analyzed. Preliminary tests show that the pipeline improves denoising. When the pipeline is not used, bright pixels of the denoised image are under-filtered and dark pixels are over-smoothed due to the assumption of a signal-independent Gaussian model. The pipeline improved denoising by up to 20% in terms of spatial N-RMSE and up to 15% in terms of frequency SNR. Besides improving the denoising, the pipeline does not increase signal smoothing significantly, as shown by the MTF. Thus, the proposed pipeline can be used with state-of-the-art denoising techniques to improve the quality of DBT and FFDM images.
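
    The variance-stabilization step can be illustrated with the generalized Anscombe transform. This is only a sketch: the gain and Gaussian-noise parameters are hypothetical calibration values, the paper's pipeline additionally handles detector offset and flat-fielding, and an exact unbiased inverse would be preferred in practice:

```python
import numpy as np

def gat(x, gain=1.0, sigma=0.0):
    """Generalized Anscombe transform: maps signal-dependent
    Poisson-Gaussian noise to approximately unit-variance Gaussian noise,
    so a conventional (Gaussian) denoiser can then be applied."""
    return (2.0 / gain) * np.sqrt(
        np.maximum(gain * x + 0.375 * gain**2 + sigma**2, 0.0))

def inverse_gat(y, gain=1.0, sigma=0.0):
    """Simple algebraic inverse of gat(); biased for small counts."""
    return ((gain * y / 2.0) ** 2 - 0.375 * gain**2 - sigma**2) / gain
```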

  19. Pipeline for inferring protein function from dynamics using coarse-grained molecular mechanics forcefield.

    PubMed

    Bhadra, Pratiti; Pal, Debnath

    2017-04-01

    Dynamics is integral to the function of proteins, yet the use of molecular dynamics (MD) simulation as a technique remains under-explored for molecular function inference. This is more important in the context of genomics projects where novel proteins are determined with limited evolutionary information. Recently we developed a method to match the query protein's flexible segments to infer function using a novel approach combining analysis of residue fluctuation-graphs and auto-correlation vectors derived from coarse-grained (CG) MD trajectory. The method was validated on a diverse dataset with sequence identity between proteins as low as 3%, with high function-recall rates. Here we share its implementation as a publicly accessible web service, named DynFunc (Dynamics Match for Function) to query protein function from ≥1 µs long CG dynamics trajectory information of protein subunits. Users are provided with the custom-developed coarse-grained molecular mechanics (CGMM) forcefield to generate the MD trajectories for their protein of interest. On upload of trajectory information, the DynFunc web server identifies specific flexible regions of the protein linked to putative molecular function. Our unique application does not use evolutionary information to infer molecular function from MD information and can, therefore, work for all proteins, including moonlighting and the novel ones, whenever structural information is available. Our pipeline is expected to be of utility to all structural biologists working with novel proteins and interested in moonlighting functions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Functional neighbors: inferring relationships between nonhomologous protein families using family-specific packing motifs.

    PubMed

    Bandyopadhyay, Deepak; Huan, Jun; Liu, Jinze; Prins, Jan; Snoeyink, Jack; Wang, Wei; Tropsha, Alexander

    2010-09-01

    We describe a new approach for inferring the functional relationships between nonhomologous protein families by looking at statistical enrichment of alternative function predictions in classification hierarchies such as Gene Ontology (GO) and Structural Classification of Proteins (SCOP). Protein structures are represented by robust graph representations, and the fast frequent subgraph mining algorithm is applied to protein families to generate sets of family-specific packing motifs, i.e., amino acid residue-packing patterns shared by most family members but infrequent in other proteins. The function of a protein is inferred by identifying in it motifs characteristic of a known family. We employ these family-specific motifs to elucidate functional relationships between families in the GO and SCOP hierarchies. Specifically, we postulate that two families are functionally related if one family is statistically enriched by motifs characteristic of another family, i.e., if the number of proteins in a family containing a motif from another family is greater than expected by chance. This function-inference method can help annotate proteins of unknown function, establish functional neighbors of existing families, and help specify alternate functions for known proteins.

  1. HARDI DATA DENOISING USING VECTORIAL TOTAL VARIATION AND LOGARITHMIC BARRIER

    PubMed Central

    Kim, Yunho; Thompson, Paul M.; Vese, Luminita A.

    2010-01-01

    In this work, we wish to denoise HARDI (High Angular Resolution Diffusion Imaging) data arising in medical brain imaging. Diffusion imaging is a relatively new and powerful method to measure the three-dimensional profile of water diffusion at each point in the brain. These images can be used to reconstruct fiber directions and pathways in the living brain, providing detailed maps of fiber integrity and connectivity. HARDI data is a powerful new extension of diffusion imaging, which goes beyond the diffusion tensor imaging (DTI) model: mathematically, intensity data is given at every voxel and at any direction on the sphere. Unfortunately, HARDI data is usually highly contaminated with noise, depending on the b-value which is a tuning parameter pre-selected to collect the data. Larger b-values help to collect more accurate information in terms of measuring diffusivity, but more noise is generated by many factors as well. So large b-values are preferred, if we can satisfactorily reduce the noise without losing the data structure. Here we propose two variational methods to denoise HARDI data. The first one directly denoises the collected data S, while the second one denoises the so-called sADC (spherical Apparent Diffusion Coefficient), a field of radial functions derived from the data. These two quantities are related by an equation of the form S = S0 · exp(−b · sADC) (in the noise-free case). By applying these two different models, we will be able to determine which quantity will most accurately preserve data structure after denoising. The theoretical analysis of the proposed models is presented, together with experimental results and comparisons for denoising synthetic and real HARDI data. PMID:20802839
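
    Inverting the noise-free relation gives the sADC directly from the measured signal. A minimal sketch, where S0 denotes the non-diffusion-weighted signal:

```python
import numpy as np

def sadc_from_signal(S, S0, b):
    """Spherical apparent diffusion coefficient from HARDI data,
    inverting S = S0 * exp(-b * sADC); the clip guards against
    zero or negative ratios in noisy measurements."""
    return -np.log(np.clip(S / S0, 1e-12, None)) / b
```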

  2. Diverse Effects, Complex Causes: Children Use Information about Machines' Functional Diversity to Infer Internal Complexity

    ERIC Educational Resources Information Center

    Ahl, Richard E.; Keil, Frank C.

    2017-01-01

    Four studies explored the abilities of 80 adults and 180 children (4-9 years), from predominantly middle-class families in the Northeastern United States, to use information about machines' observable functional capacities to infer their internal, "hidden" mechanistic complexity. Children as young as 4 and 5 years old used machines'…

  3. Specificity of Emotion Inferences as a Function of Emotional Contextual Support

    ERIC Educational Resources Information Center

    Gillioz, Christelle; Gygax, Pascal M.

    2017-01-01

    Research on emotion inferences has shown that readers include a representation of the main character's emotional state in their mental representations of the text. We examined the specificity of emotion representations as a function of the emotion content of short narratives, in terms of the quantity and quality of emotion components included in…

  4. Pragmatic Inference Abilities in Individuals with Asperger Syndrome or High-Functioning Autism. A Review

    ERIC Educational Resources Information Center

    Loukusa, Soile; Moilanen, Irma

    2009-01-01

    This review summarizes studies involving pragmatic language comprehension and inference abilities in individuals with Asperger syndrome or high-functioning autism. Systematic searches of three electronic databases, selected journals, and reference lists identified 20 studies meeting the inclusion criteria. These studies were evaluated in terms of:…

  5. Electrocardiogram signal denoising based on a new improved wavelet thresholding

    NASA Astrophysics Data System (ADS)

    Han, Guoqiang; Xu, Zhijun

    2016-08-01

    A good-quality electrocardiogram (ECG) is utilized by physicians for the interpretation and identification of physiological and pathological phenomena. In general, ECG signals may be contaminated by various noises, such as baseline wander, power line interference, and electromagnetic interference, during the gathering and recording process. As ECG signals are non-stationary physiological signals, the wavelet transform has been shown to be an effective tool for discarding noise from corrupted signals. A new compromise threshold function, a sigmoid-function-based thresholding scheme, is adopted in processing ECG signals. Compared with other methods, such as hard/soft thresholding or other existing thresholding functions, the new algorithm has many advantages in the noise reduction of ECG signals. It overcomes the discontinuity at ±T of hard thresholding and reduces the fixed deviation of soft thresholding. The improved wavelet thresholding denoising is shown to be more efficient than existing algorithms in ECG signal denoising. The signal-to-noise ratio, mean square error, and percent root mean square difference are calculated as quantitative tools to verify the denoising performance. The experimental results reveal that the P, Q, R, and S waves of the denoised ECG signals coincide with those of the original ECG signals when the new proposed method is employed.
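
    One plausible form of such a sigmoid-based compromise threshold, inside a standard wavelet denoising loop, is sketched below; the exact thresholding function, wavelet, decomposition level, and threshold rule used in the paper may differ:

```python
import numpy as np
import pywt

def sigmoid_threshold(w, t, k=6.0):
    """Smooth compromise between hard and soft thresholding: coefficients
    well above the threshold t pass almost unchanged (no fixed bias, unlike
    soft thresholding), small ones are shrunk towards zero, and the
    transition is continuous (unlike hard thresholding)."""
    gate = 1.0 / (1.0 + np.exp(-k * (np.abs(w) - t) / t))
    return w * gate

def denoise_ecg(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # MAD noise estimate
    t = sigma * np.sqrt(2.0 * np.log(len(signal)))     # universal threshold
    coeffs = [coeffs[0]] + [sigmoid_threshold(c, t) for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)
```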

  6. Deep RNNs for video denoising

    NASA Astrophysics Data System (ADS)

    Chen, Xinyuan; Song, Li; Yang, Xiaokang

    2016-09-01

    Video denoising can be described as the problem of mapping a specific length of noisy frames to a clean one. We propose a deep architecture based on the Recurrent Neural Network (RNN) for video denoising. The model learns a patch-based end-to-end mapping between clean and noisy video sequences: it takes corrupted video sequences as input and outputs clean ones. Our deep network, which we refer to as deep Recurrent Neural Networks (deep RNNs or DRNNs), stacks RNN layers where each layer receives the hidden state of the previous layer as input. Experiments show that (i) the recurrent architecture extracts motion information along the temporal domain and benefits video denoising, (ii) the deep architecture has enough capacity to express the mapping between corrupted input videos and clean output videos, and (iii) the model generalizes to learn different mappings from videos corrupted by different types of noise (e.g., Poisson-Gaussian noise). By training on large video databases, we are able to compete with some existing video denoising methods.

  7. An estimating function approach to inference for inhomogeneous Neyman-Scott processes.

    PubMed

    Waagepetersen, Rasmus Plenge

    2007-03-01

    This article is concerned with inference for a certain class of inhomogeneous Neyman-Scott point processes depending on spatial covariates. Regression parameter estimates obtained from a simple estimating function are shown to be asymptotically normal when the "mother" intensity for the Neyman-Scott process tends to infinity. Clustering parameter estimates are obtained using minimum contrast estimation based on the K-function. The approach is motivated and illustrated by applications to point pattern data from a tropical rain forest plot.

  8. Locally Based Kernel PLS Regression De-noising with Application to Event-Related Potentials

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Tino, Peter

    2002-01-01

    Our approach exploits the close relation between signal de-noising and regression problems dealing with the estimation of functions that reflect the dependency between a set of inputs and dependent outputs corrupted by some level of noise.

  9. Category-Specific Object Image Denoising.

    PubMed

    Anwar, Saeed; Porikli, Fatih; Huynh, Cong Phuoc

    2017-07-31

    We present a novel image denoising algorithm that uses an external, category-specific image database. In contrast to existing noisy image restoration algorithms that search for patches either in a generic database or in the noisy image itself, our method first selects clean images similar to the noisy image from a database that consists of images of the same class. Then, within the spatial locality of each noisy patch, it assembles a set of "support patches" from the selected images. These noise-free support samples resemble the noisy patch and correspond principally to the identical part of the depicted object. In addition, we employ a content-adaptive distribution model for each patch, deriving the parameters of the distribution from the support patches. We formulate the noise removal task as an optimization problem in the transform domain. Our objective function is composed of a Gaussian fidelity term that imposes category-specific information, and a low-rank term that encourages similarity between the noisy and the support patches in a robust manner. The denoising process is driven by an iterative selection of support patches and optimization of the objective function. Our extensive experiments on five different object categories confirm the benefit of incorporating category-specific information into noise removal and demonstrate the superior performance of our method over state-of-the-art alternatives.

  10. Transcriptional network inference from functional similarity and expression data: a global supervised approach.

    PubMed

    Ambroise, Jérôme; Robert, Annie; Macq, Benoit; Gala, Jean-Luc

    2012-01-06

    An important challenge in systems biology is the inference of biological networks from postgenomic data. Among these biological networks, a gene transcriptional regulatory network focuses on interactions existing between transcription factors (TFs) and their corresponding target genes. A large number of reverse engineering algorithms have been proposed to infer such networks from gene expression profiles, but most current methods have relatively low predictive performance. In this paper, we introduce the novel TNIFSED method (Transcriptional Network Inference from Functional Similarity and Expression Data), which infers a transcriptional network by integrating correlations and partial correlations of gene expression profiles with gene functional similarities through a supervised classifier. In the current work, TNIFSED was applied to predict the transcriptional networks in Escherichia coli and in Saccharomyces cerevisiae, using datasets of 445 and 170 Affymetrix arrays, respectively. Using the area under the receiver operating characteristic curve and the F-measure as indicators, we showed the predictive performance of TNIFSED to be better than that of unsupervised state-of-the-art methods. TNIFSED performed slightly worse than the supervised SIRENE algorithm for identifying target genes of TFs with a wide range of already identified target genes, but better for TFs with only a few identified target genes. Our results indicate that TNIFSED is complementary to the SIRENE algorithm, and particularly suitable for discovering target genes of "orphan" TFs.

  11. Wavelet-based denoising using local Laplace prior

    NASA Astrophysics Data System (ADS)

    Rabbani, Hossein; Vafadust, Mansur; Selesnick, Ivan

    2007-09-01

    Although wavelet-based image denoising is a powerful tool for image processing applications, relatively few publications have so far addressed wavelet-based video denoising. The main reason is that the standard 3-D data transforms do not provide useful representations with a good energy compaction property for most video data. For example, the multi-dimensional standard separable discrete wavelet transform (M-D DWT) mixes orientations and motions in its subbands and produces checkerboard artifacts. So, instead of the M-D DWT, oriented transforms such as the multi-dimensional complex wavelet transform (M-D DCWT) are usually proposed for video processing. In this paper we use a Laplace distribution with local variance to model the statistical properties of noise-free wavelet coefficients. This distribution is able to simultaneously model the heavy-tailed and intrascale dependency properties of wavelets. Using this model, simple shrinkage functions are obtained employing maximum a posteriori (MAP) and minimum mean squared error (MMSE) estimators. These shrinkage functions are proposed for video denoising in the DCWT domain. The simulation results show that this simple denoising method has impressive performance both visually and quantitatively.
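
    For a Laplacian prior with Gaussian noise, the MAP shrinkage rule reduces to soft thresholding with a spatially varying threshold driven by the local variance. A sketch of that idea (window size and variance floor are illustrative, and the paper applies this to complex wavelet subbands of video rather than to a raw array):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def laplace_map_shrink(y, sigma_n, win=5):
    """MAP shrinkage of noisy coefficients y under a Laplace prior with
    locally estimated variance: for prior std sigma_x and noise std
    sigma_n, the MAP estimate is soft thresholding with threshold
    sqrt(2) * sigma_n**2 / sigma_x, varying from point to point."""
    # local second moment minus noise variance -> local signal variance
    sigma_x2 = np.maximum(uniform_filter(y * y, size=win) - sigma_n**2, 1e-12)
    thr = np.sqrt(2.0) * sigma_n**2 / np.sqrt(sigma_x2)
    return np.sign(y) * np.maximum(np.abs(y) - thr, 0.0)
```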

  12. Vikodak - A Modular Framework for Inferring Functional Potential of Microbial Communities from 16S Metagenomic Datasets

    PubMed Central

    Nagpal, Sunil; Haque, Mohammed Monzoorul; Mande, Sharmila S.

    2016-01-01

    Background: The overall metabolic/functional potential of any given environmental niche is a function of the sum total of genes/proteins/enzymes that are encoded and expressed by various interacting microbes residing in that niche. Consequently, prior (collated) information pertaining to genes and enzymes encoded by the resident microbes can aid in indirectly (re)constructing/inferring the metabolic/functional potential of a given microbial community (given its taxonomic abundance profile). In this study, we present Vikodak, a multi-modular package that is based on the above assumption and automates inferring and/or comparing the functional characteristics of an environment using taxonomic abundance generated from one or more environmental sample datasets. With the underlying assumptions of co-metabolism and independent contributions of different microbes in a community, a concerted effort has been made to accommodate microbial co-existence patterns in the various modules incorporated in Vikodak. Results: Validation experiments on over 1400 metagenomic samples have confirmed the utility of Vikodak in (a) deciphering enzyme abundance profiles of any KEGG metabolic pathway, (b) functional resolution of distinct metagenomic environments, (c) inferring patterns of functional interaction between resident microbes, and (d) automating statistical comparison of functional features of studied microbiomes. Novel features incorporated in Vikodak also facilitate automatic removal of false positives and spurious functional predictions. Conclusions: With novel provisions for comprehensive functional analysis, inclusion of microbial co-existence pattern based algorithms, automated inter-environment comparisons, in-depth analysis of individual metabolic pathways and greater flexibility at the user end, Vikodak is expected to be an important value addition to the family of existing tools for 16S-based function prediction. Availability and Implementation: A web implementation of Vikodak is available online.

  13. 2D Orthogonal Locality Preserving Projection for Image Denoising.

    PubMed

    Shikkenawis, Gitam; Mitra, Suman K

    2016-01-01

    Sparse representations using transform-domain techniques are widely used for better interpretation of raw data. Orthogonal locality preserving projection (OLPP) is a linear technique that tries to preserve the local structure of data in the transform domain as well. The vectorized nature of OLPP requires high-dimensional data to be converted to vector format, and hence it may lose the spatial neighborhood information of the raw data. On the other hand, processing 2D data directly not only preserves spatial information, but also improves computational efficiency considerably. The 2D OLPP is expected to learn the transformation from 2D data itself. This paper derives the mathematical foundation for 2D OLPP. The proposed technique is used for the image denoising task. Recent state-of-the-art approaches for image denoising rest on two major hypotheses, i.e., non-local self-similarity and sparse linear approximations of the data. The locality preserving nature of the proposed approach automatically takes care of the self-similarity present in the image while inferring a sparse basis. A global basis is adequate for the entire image. The proposed approach outperforms several state-of-the-art image denoising approaches for gray-scale, color, and texture images.

  14. Inferring Higher Functional Information for RIKEN Mouse Full-Length cDNA Clones With FACTS

    PubMed Central

    Nagashima, Takeshi; Silva, Diego G.; Petrovsky, Nikolai; Socha, Luis A.; Suzuki, Harukazu; Saito, Rintaro; Kasukawa, Takeya; Kurochkin, Igor V.; Konagaya, Akihiko; Schönbach, Christian

    2003-01-01

    FACTS (Functional Association/Annotation of cDNA Clones from Text/Sequence Sources) is a semiautomated knowledge discovery and annotation system that integrates molecular function information derived from sequence analysis results (sequence inferred) with functional information extracted from text. Text-inferred information was extracted from keyword-based retrievals of MEDLINE abstracts and by matching of gene or protein names to OMIM, BIND, and DIP database entries. Using FACTS, we found that 47.5% of the 60,770 RIKEN mouse cDNA FANTOM2 clone annotations were informative for text searches. MEDLINE queries yielded molecular interaction-containing sentences for 23.1% of the clones. When disease MeSH and GO terms were matched with retrieved abstracts, 22.7% of clones were associated with potential diseases, and 32.5% with GO identifiers. A significant number (23.5%) of disease MeSH-associated clones were also found to have a hereditary disease association (OMIM Morbidmap). Inferred neoplastic and nervous system disease represented 49.6% and 36.0% of disease MeSH-associated clones, respectively. A comparison of sequence-based GO assignments with informative text-based GO assignments revealed that for 78.2% of clones, identical GO assignments were provided for that clone by either method, whereas for 21.8% of clones, the assignments differed. In contrast, for OMIM assignments, only 28.5% of clones had identical sequence-based and text-based OMIM assignments. Sequence, sentence, and term-based functional associations are included in the FACTS database (http://facts.gsc.riken.go.jp/), which permits results to be annotated and explored through web-accessible keyword and sequence search interfaces. The FACTS database will be a critical tool for investigating the functional complexity of the mouse transcriptome, cDNA-inferred interactome (molecular interactions), and pathome (pathologies). PMID:12819151

  15. A variance components model for statistical inference on functional connectivity networks.

    PubMed

    Fiecas, Mark; Cribben, Ivor; Bahktiari, Reyhaneh; Cummine, Jacqueline

    2017-01-24

    We propose a variance components linear modeling framework to conduct statistical inference on functional connectivity networks that directly accounts for the temporal autocorrelation inherent in functional magnetic resonance imaging (fMRI) time series data and for the heterogeneity across subjects in the study. The novel method estimates the autocorrelation structure in a nonparametric and subject-specific manner, and estimates the variance due to the heterogeneity using iterative least squares. We apply the new model to a resting-state fMRI study to compare the functional connectivity networks of typical and reading-impaired young adults, in order to characterize the resting state networks that are related to reading processes. Using simulated data, we also compare the performance of our model to other methods of statistical inference on functional connectivity networks that do not account for the temporal autocorrelation or the heterogeneity across subjects, and show that accounting for these sources of variation and covariation results in more powerful tests for statistical inference.

  16. Locally linear denoising on image manifolds

    PubMed Central

    Gong, Dian; Sha, Fei; Medioni, Gérard

    2010-01-01

    We study the problem of image denoising where images are assumed to be samples from low dimensional (sub)manifolds. We propose the algorithm of locally linear denoising. The algorithm approximates manifolds with locally linear patches by constructing nearest neighbor graphs. Each image is then locally denoised within its neighborhoods. A global optimal denoising result is then identified by aligning those local estimates. The algorithm has a closed-form solution that is efficient to compute. We evaluated and compared the algorithm to alternative methods on two image data sets. We demonstrated the effectiveness of the proposed algorithm, which yields visually appealing denoising results, incurs smaller reconstruction errors and results in lower error rates when the denoised data are used in supervised learning tasks. PMID:25309138

  17. Time-varying coupling functions: Dynamical inference and cause of synchronization transitions

    NASA Astrophysics Data System (ADS)

    Stankovski, Tomislav

    2017-02-01

    Interactions in nature can be described by their coupling strength, direction of coupling, and coupling function. The coupling strength and directionality are relatively well understood and studied, at least for two interacting systems; however, there can be a complexity in the interactions uniquely dependent on the coupling functions. Such a special case is studied here: synchronization transition occurs only due to the time variability of the coupling functions, while the net coupling strength is constant throughout the observation time. To motivate the investigation, an example is used to present an analysis of cross-frequency coupling functions between delta and alpha brain waves extracted from the electroencephalography recording of a healthy human subject in a free-running resting state. The results indicate that time-varying coupling functions are a reality for biological interactions. A model of phase oscillators is used to demonstrate and detect the synchronization transition caused by the varying coupling functions during an invariant coupling strength. The ability to detect this phenomenon is discussed with the method of dynamical Bayesian inference, which was able to infer the time-varying coupling functions. The form of the coupling function acts as an additional dimension for the interactions, and it should be taken into account when detecting biological or other interactions from data.

  18. Time-varying coupling functions: Dynamical inference and cause of synchronization transitions.

    PubMed

    Stankovski, Tomislav

    2017-02-01

    Interactions in nature can be described by their coupling strength, direction of coupling, and coupling function. The coupling strength and directionality are relatively well understood and studied, at least for two interacting systems; however, there can be a complexity in the interactions uniquely dependent on the coupling functions. Such a special case is studied here: synchronization transition occurs only due to the time variability of the coupling functions, while the net coupling strength is constant throughout the observation time. To motivate the investigation, an example is used to present an analysis of cross-frequency coupling functions between delta and alpha brain waves extracted from the electroencephalography recording of a healthy human subject in a free-running resting state. The results indicate that time-varying coupling functions are a reality for biological interactions. A model of phase oscillators is used to demonstrate and detect the synchronization transition caused by the varying coupling functions during an invariant coupling strength. The ability to detect this phenomenon is discussed with the method of dynamical Bayesian inference, which was able to infer the time-varying coupling functions. The form of the coupling function acts as an additional dimension for the interactions, and it should be taken into account when detecting biological or other interactions from data.
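
    A toy version of the mechanism, two phase oscillators whose coupling function drifts in form while the net coupling strength stays fixed, can be simulated as follows; the frequencies, noise level, and drift schedule are arbitrary choices rather than the paper's:

```python
import numpy as np

def simulate(T=200.0, dt=0.01, eps=0.3):
    """Two coupled phase oscillators whose coupling *function* varies in
    time while the net coupling strength eps is constant: the coupling is
    a unit-norm mixture of two Fourier terms with a slowly drifting mixing
    angle. Synchronization shows up as a bounded phase difference."""
    n = int(T / dt)
    phi = np.zeros((n, 2))
    w = np.array([2 * np.pi * 1.1, 2 * np.pi * 1.7])  # natural frequencies
    rng = np.random.default_rng(0)
    for i in range(1, n):
        a = 0.5 * np.pi * np.sin(2 * np.pi * i * dt / T)  # slow drift of form
        p1, p2 = phi[i - 1]
        q1 = np.cos(a) * np.sin(p2 - p1) + np.sin(a) * np.sin(2 * p2 - p1)
        q2 = np.cos(a) * np.sin(p1 - p2) + np.sin(a) * np.sin(2 * p1 - p2)
        noise = np.sqrt(dt) * 0.05 * rng.standard_normal(2)
        phi[i] = phi[i - 1] + dt * (w + eps * np.array([q1, q2])) + noise
    return phi
```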

  1. Approximation Of Multi-Valued Inverse Functions Using Clustering And Sugeno Fuzzy Inference

    NASA Technical Reports Server (NTRS)

    Walden, Maria A.; Bikdash, Marwan; Homaifar, Abdollah

    1998-01-01

    Finding the inverse of a continuous function can be challenging and computationally expensive when the inverse function is multi-valued. Difficulties may be compounded when the function itself is difficult to evaluate. We show that we can use fuzzy-logic approximators such as Sugeno inference systems to compute the inverse on-line. To do so, a fuzzy clustering algorithm can be used in conjunction with a discriminating function to split the function data into branches for the different values of the forward function. These data sets are then fed into a recursive least-squares learning algorithm that finds the proper coefficients of the Sugeno approximators; each Sugeno approximator finds one value of the inverse function. Discussions about the accuracy of the approximation will be included.
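
    The branch-splitting idea can be sketched as follows, with crisp k-means and polynomial least squares standing in for the paper's fuzzy clustering and recursively trained Sugeno approximators:

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_inverse_branches(x, y, n_branches=2, deg=3):
    """Cluster samples (x, f(x)) of a forward function with a multi-valued
    inverse into branches, then fit a separate polynomial model y -> x per
    branch; each fitted branch plays the role of one Sugeno approximator."""
    labels = KMeans(n_clusters=n_branches, n_init=10).fit_predict(
        np.column_stack([x, y]))
    return [np.polyfit(y[labels == b], x[labels == b], deg)
            for b in range(n_branches)]

# usage: invert y = x**2 on [-1, 1] (two inverse branches)
x = np.linspace(-1, 1, 400)
branches = fit_inverse_branches(x, x**2)
# np.polyval(branches[b], 0.25) returns approximately +0.5 and -0.5
```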

  2. ELISA: Structure-Function Inferences based on statistically significant and evolutionarily inspired observations

    PubMed Central

    Shakhnovich, Boris E; Harvey, John M; Comeau, Steve; Lorenz, David; DeLisi, Charles; Shakhnovich, Eugene

    2003-01-01

    The problem of functional annotation based on homology modeling is primary to current bioinformatics research. Researchers have noted regularities in sequence, structure and even chromosome organization that allow valid functional cross-annotation. However, these methods provide many false negatives due to the limited specificity inherent in the system. We want to create an evolutionarily inspired organization of data that would approach the issue of structure-function correlation from a new, probabilistic perspective. Such an organization has possible applications in phylogeny, modeling of functional evolution and structural determination. ELISA (Evolutionary Lineage Inferred from Structural Analysis) is an online database that combines functional annotation with structure and sequence homology modeling to place proteins into sequence-structure-function "neighborhoods". The atomic unit of the database is a set of sequences and the structural templates that those sequences encode. A graph that is built from the structural comparison of these templates is called the PDUG (protein domain universe graph). We introduce a method of functional inference through a probabilistic calculation done on an arbitrary set of PDUG nodes. Further, all PDUG structures are mapped onto all fully sequenced proteomes, allowing an easy interface for evolutionary analysis and research into comparative proteomics. ELISA is the first database designed with applicability to evolutionary structural genomics explicitly in mind. Availability: The database is available online. PMID:12952559

  3. Statistical inference for assessing functional connectivity of neuronal ensembles with sparse spiking data.

    PubMed

    Chen, Zhe; Putrino, David F; Ghosh, Soumya; Barbieri, Riccardo; Brown, Emery N

    2011-04-01

    The ability to accurately infer functional connectivity between ensemble neurons using experimentally acquired spike train data is currently an important research objective in computational neuroscience. Point process generalized linear models and maximum likelihood estimation have been proposed as effective methods for the identification of spiking dependency between neurons. However, unfavorable experimental conditions occasionally result in insufficient data collection due to factors such as low neuronal firing rates or brief recording periods, and in these cases the standard maximum likelihood estimate becomes unreliable. The present study compares the performance of different statistical inference procedures when applied to the estimation of functional connectivity in neuronal assemblies with sparse spiking data. Four inference methods were compared: maximum likelihood estimation; penalized maximum likelihood estimation, using either l(2) or l(1) regularization; and hierarchical Bayesian estimation based on a variational Bayes algorithm. Algorithmic performance was compared using well-established goodness-of-fit measures in benchmark simulation studies. The hierarchical Bayesian approach performed favorably compared with the other algorithms, and was then successfully applied to real spiking data recorded from the cat motor cortex. The identification of spiking dependencies in physiologically acquired data was encouraging, since their sparse nature would have previously precluded successful analysis using traditional methods.
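
    A minimal sparse-coupling sketch in this spirit uses an l1-penalized Bernoulli GLM on lagged spike histories. The binned binary spike format, the lag count, and the regularization constant are illustrative; the paper compares this penalized family against variational-Bayes estimation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def infer_coupling(spikes, target, lags=3, C=0.5):
    """Bernoulli GLM predicting one neuron's binned spikes (target, length
    t, binary) from the lagged spike history of the ensemble
    (spikes: neurons x time binary array); l1 regularization drives
    weak couplings to exactly zero. Assumes the target neuron both
    fires and stays silent within the recording."""
    n, t = spikes.shape
    # design matrix: ensemble activity at lags 1..lags before each bin
    X = np.hstack([spikes[:, lags - k - 1:t - k - 1].T for k in range(lags)])
    y = target[lags:]
    model = LogisticRegression(penalty="l1", solver="liblinear", C=C).fit(X, y)
    return model.coef_.reshape(lags, n)  # nonzero weights = putative coupling
```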

  4. INTEGRATING EVOLUTIONARY AND FUNCTIONAL APPROACHES TO INFER ADAPTATION AT SPECIFIC LOCI

    PubMed Central

    Storz, Jay F.; Wheat, Christopher W.

    2010-01-01

    Inferences about adaptation at specific loci are often exclusively based on the static analysis of DNA sequence variation. Ideally, population-genetic evidence for positive selection serves as a stepping-off point for experimental studies to elucidate the functional significance of the putatively adaptive variation. We argue that inferences about adaptation at specific loci are best achieved by integrating the indirect, retrospective insights provided by population-genetic analyses with the more direct, mechanistic insights provided by functional experiments. Integrative studies of adaptive genetic variation may sometimes be motivated by experimental insights into molecular function, which then provide the impetus to perform population genetic tests to evaluate whether the functional variation is of adaptive significance. In other cases, studies may be initiated by genome scans of DNA variation to identify candidate loci for recent adaptation. Results of such analyses can then motivate experimental efforts to test whether the identified candidate loci do in fact contribute to functional variation in some fitness-related phenotype. Functional studies can provide corroborative evidence for positive selection at particular loci, and can potentially reveal specific molecular mechanisms of adaptation. PMID:20500215

  6. Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising.

    PubMed

    Zhang, Kai; Zuo, Wangmeng; Chen, Yunjin; Meng, Deyu; Zhang, Lei

    2017-02-01

    Discriminative model learning for image denoising has recently been attracting considerable attention due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace the progress in very deep architectures, learning algorithms, and regularization methods for image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from existing discriminative denoising models, which usually train a specific model for additive white Gaussian noise (AWGN) at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle several general image denoising tasks, such as Gaussian denoising, single-image super-resolution, and JPEG image deblocking. Our extensive experiments demonstrate that our DnCNN model can not only exhibit high effectiveness in several general image denoising tasks, but also be efficiently implemented by benefiting from GPU computing.
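
    A minimal PyTorch sketch of the residual-learning scheme the abstract describes: the network predicts the noise and subtracts it from the input, and training draws a different noise level for every image to mimic blind Gaussian denoising. TinyDnCNN, its depth of 5 and the random training batch are illustrative assumptions, far smaller than the published 17-layer network.

        import torch
        import torch.nn as nn

        class TinyDnCNN(nn.Module):
            """Scaled-down DnCNN: Conv+ReLU, (depth-2) x (Conv+BN+ReLU), Conv.
            The body predicts the noise; the clean image is input minus prediction."""
            def __init__(self, depth=5, channels=32):
                super().__init__()
                layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
                for _ in range(depth - 2):
                    layers += [nn.Conv2d(channels, channels, 3, padding=1),
                               nn.BatchNorm2d(channels),
                               nn.ReLU(inplace=True)]
                layers.append(nn.Conv2d(channels, 1, 3, padding=1))
                self.body = nn.Sequential(*layers)

            def forward(self, noisy):
                return noisy - self.body(noisy)   # residual learning

        model = TinyDnCNN()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        clean = torch.rand(8, 1, 40, 40)
        sigma = torch.empty(8, 1, 1, 1).uniform_(0.0, 0.2)   # unknown, varying noise level
        noisy = clean + sigma * torch.randn_like(clean)
        loss = nn.functional.mse_loss(model(noisy), clean)   # one training step
        opt.zero_grad()
        loss.backward()
        opt.step()
        print(float(loss))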

  7. Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising

    NASA Astrophysics Data System (ADS)

    Zhang, Kai; Zuo, Wangmeng; Chen, Yunjin; Meng, Deyu; Zhang, Lei

    2017-07-01

    Discriminative model learning for image denoising has recently been attracting considerable attention due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace the progress in very deep architectures, learning algorithms, and regularization methods for image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from existing discriminative denoising models, which usually train a specific model for additive white Gaussian noise (AWGN) at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle several general image denoising tasks, such as Gaussian denoising, single-image super-resolution, and JPEG image deblocking. Our extensive experiments demonstrate that our DnCNN model can not only exhibit high effectiveness in several general image denoising tasks, but also be efficiently implemented by benefiting from GPU computing.

  8. Denoising PET Images Using Singular Value Thresholding and Stein's Unbiased Risk Estimate*

    PubMed Central

    Bagci, Ulas; Mollura, Daniel J.

    2014-01-01

    Image denoising is an important pre-processing step for accurately quantifying functional morphology and measuring tissue activity in PET images. Unlike structural imaging modalities, PET poses two difficulties: (1) the Gaussian noise model does not necessarily fit PET imaging, because the exact nature of noise propagation in PET is not well known, and (2) PET images are low resolution; therefore, it is challenging to denoise them while preserving structural information. To address these two difficulties, we introduce a novel methodology for denoising PET images. The proposed method uses the singular value thresholding concept and Stein's unbiased risk estimate to optimize a soft thresholding rule. Results obtained from 40 MRI-PET images demonstrate that the proposed algorithm is able to denoise PET images successfully while still maintaining the quantitative information. PMID:24505751
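
    A small numpy sketch of the singular value thresholding step: soft-threshold the singular values and reconstruct. Choosing the threshold by minimizing Stein's unbiased risk estimate, as the paper does, is replaced here by a simple grid search against a known clean image, so the code is only a stand-in for the full method.

        import numpy as np

        def svt_denoise(img, tau):
            # Soft-threshold the singular values of a (patch of an) image.
            U, s, Vt = np.linalg.svd(img, full_matrices=False)
            s_thresh = np.maximum(s - tau, 0.0)   # soft-thresholding rule
            return (U * s_thresh) @ Vt

        rng = np.random.default_rng(1)
        clean = np.outer(np.sin(np.linspace(0, 3, 64)), np.cos(np.linspace(0, 3, 64)))
        noisy = clean + 0.1 * rng.standard_normal(clean.shape)
        # In the paper tau is chosen by minimizing SURE; a grid sweep stands in here.
        errs = {t: np.mean((svt_denoise(noisy, t) - clean) ** 2)
                for t in np.linspace(0, 3, 16)}
        print(min(errs, key=errs.get))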

  9. Relevant modes selection method based on Spearman correlation coefficient for laser signal denoising using empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Duan, Yabo; Song, Chengtian

    2016-12-01

    Empirical mode decomposition (EMD) is a recently proposed method for denoising nonlinear and nonstationary laser signals. A noisy signal is broken down using EMD into oscillatory components that are called intrinsic mode functions (IMFs). Thresholding-based denoising and correlation-based partial reconstruction of IMFs are the two main research directions for EMD-based denoising. Similar to other decomposition-based denoising approaches, EMD-based denoising methods require a reliable criterion to determine which IMFs are noise components and which are noise-free. In this work, we propose a new approach in which each IMF is first denoised using EMD interval thresholding (EMD-IT), and a robust thresholding process based on the Spearman correlation coefficient is then used to select the relevant modes. The proposed method thus couples thresholding-based denoising with partial reconstruction of the relevant IMFs. Other traditional denoising methods, including correlation-based EMD partial reconstruction (EMD-Correlation) and discrete Fourier transform and wavelet-based methods, are investigated to provide a comparison with the proposed technique. Simulation and test results demonstrate the superior performance of the proposed method when compared with the other methods.
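
    The mode-selection step can be sketched in a few lines: compute the Spearman correlation of each IMF with the observed signal and keep the modes whose correlation exceeds a threshold. The IMFs below are synthetic stand-ins (a real decomposition would come from an EMD implementation such as the PyEMD package), and the interval-thresholding stage of the actual method is omitted.

        import numpy as np
        from scipy.stats import spearmanr

        def select_relevant_imfs(imfs, signal, rho_min=0.3):
            # Keep IMFs whose Spearman correlation with the observed signal
            # exceeds rho_min, then partially reconstruct from those modes.
            keep = [k for k, imf in enumerate(imfs)
                    if abs(spearmanr(imf, signal)[0]) >= rho_min]
            return keep, imfs[keep].sum(axis=0)

        t = np.linspace(0, 1, 512)
        rng = np.random.default_rng(2)
        modes = np.vstack([0.15 * rng.standard_normal(512),   # noise-dominated "IMF"
                           np.sin(2 * np.pi * 5 * t),         # signal mode
                           0.8 * np.sin(2 * np.pi * 1 * t)])  # signal mode
        observed = modes.sum(axis=0)
        kept, recon = select_relevant_imfs(modes, observed)
        print(kept)   # the noise-dominated mode is typically rejected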

  10. Diverse Effects, Complex Causes: Children Use Information About Machines' Functional Diversity to Infer Internal Complexity.

    PubMed

    Ahl, Richard E; Keil, Frank C

    2017-05-01

    Four studies explored the abilities of 80 adults and 180 children (4-9 years), from predominantly middle-class families in the Northeastern United States, to use information about machines' observable functional capacities to infer their internal, "hidden" mechanistic complexity. Children as young as 4 and 5 years old used machines' numbers of functions as indications of complexity and matched machines performing more functions with more complex "insides" (Study 1). However, only older children (6 and older) and adults used machines' functional diversity alone as an indication of complexity (Studies 2-4). The ability to use functional diversity as a complexity cue therefore emerges during the early school years, well before the use of diversity in most categorical induction tasks.

  11. Multilevel statistical inference from functional near-infrared spectroscopy data during Stroop interference.

    PubMed

    Ciftçi, Koray; Sankur, Bülent; Kahya, Yasemin P; Akin, Ata

    2008-09-01

    Functional near-infrared spectroscopy (fNIRS) is an emerging technique for monitoring the concentration changes of oxy- and deoxy-hemoglobin (oxy-Hb and deoxy-Hb) in the brain. An important consideration in fNIRS-based neuroimaging is how to conduct group-level analysis from a set of time series measured from a group of subjects. We investigate the feasibility of multilevel statistical inference for fNIRS. As a case study, we search for hemodynamic activations in the prefrontal cortex during Stroop interference. A hierarchical general linear model (GLM) is used for this multilevel analysis. Activation patterns at both the subject and group levels are investigated on a comparative basis using various classical and Bayesian inference methods. All methods showed consistent left lateral prefrontal cortex activation for oxy-Hb during the interference condition, while the effects were much less pronounced for deoxy-Hb. Our analysis showed that mixed effects or Bayesian models are more convenient for faithful analysis of fNIRS data. We arrived at two important conclusions. First, fNIRS has the capability to identify activations at the group level, and second, the mixed effects or Bayesian model is the appropriate mechanism to pass from subject- to group-level inference.
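
    A compact stand-in for the multilevel analysis: a first-level GLM estimates each subject's task effect by least squares, and a second-level (random-effects) test assesses the effect across subjects. The boxcar regressor, noise model and effect size are invented for illustration; the paper's hierarchical GLM and Bayesian variants are richer than this summary-statistics shortcut.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        n_subj, T = 12, 300
        task = (np.arange(T) % 60 < 30).astype(float)     # boxcar task regressor
        X = np.column_stack([np.ones(T), task])

        # First level: an ordinary GLM per subject yields a task effect (beta).
        betas = []
        for _ in range(n_subj):
            y = 0.4 * task + rng.standard_normal(T)       # simulated oxy-Hb series
            b, *_ = np.linalg.lstsq(X, y, rcond=None)
            betas.append(b[1])

        # Second level (random effects): test the betas across subjects.
        t, p = stats.ttest_1samp(betas, 0.0)
        print(f"group t = {t:.2f}, p = {p:.4f}")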

  12. EFICAz: a comprehensive approach for accurate genome-scale enzyme function inference

    PubMed Central

    Tian, Weidong; Arakaki, Adrian K.; Skolnick, Jeffrey

    2004-01-01

    EFICAz (Enzyme Function Inference by Combined Approach) is an automatic engine for large-scale enzyme function inference that combines predictions from four different methods developed and optimized to achieve high prediction accuracy: (i) recognition of functionally discriminating residues (FDRs) in enzyme families obtained by a Conservation-controlled HMM Iterative procedure for Enzyme Family classification (CHIEFc), (ii) pairwise sequence comparison using a family-specific Sequence Identity Threshold, (iii) recognition of FDRs in Multiple Pfam enzyme families, and (iv) recognition of multiple Prosite patterns of high specificity. For FDR (i.e. conserved positions in an enzyme family that discriminate between true and false members of the family) identification, we have developed an Evolutionary Footprinting method that uses evolutionary information from homofunctional and heterofunctional multiple sequence alignments associated with an enzyme family. The FDRs show a significant correlation with annotated active site residues. In a jackknife test, EFICAz shows high accuracy (92%) and sensitivity (82%) for predicting four EC digits in testing sequences that are <40% identical to any member of the corresponding training set. Applied to the Escherichia coli genome, EFICAz assigns more detailed enzymatic function than KEGG, and generates numerous novel predictions. PMID:15576349
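
    The notion of a functionally discriminating residue can be illustrated with a toy score: a column is interesting when its consensus residue is conserved among true family members but rare among false ones. The five-residue alignments below are fabricated, and the score is a deliberately crude stand-in for the CHIEFc procedure.

        import numpy as np
        from collections import Counter

        def fdr_scores(true_seqs, false_seqs):
            # Score alignment columns by how well their consensus residue in
            # true family members discriminates against false members.
            scores = []
            for j in range(len(true_seqs[0])):
                aa, n = Counter(s[j] for s in true_seqs).most_common(1)[0]
                f_true = n / len(true_seqs)
                f_false = sum(s[j] == aa for s in false_seqs) / len(false_seqs)
                scores.append(f_true - f_false)   # high = conserved AND discriminating
            return np.array(scores)

        true_members  = ["HDCAG", "HDCAG", "HDSAG", "HDCAG"]
        false_members = ["HNCAG", "HQAAG", "HNCTG"]
        print(fdr_scores(true_members, false_members))  # column 1 (the invariant D) scores highest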

  13. EFICAz: a comprehensive approach for accurate genome-scale enzyme function inference.

    PubMed

    Tian, Weidong; Arakaki, Adrian K; Skolnick, Jeffrey

    2004-01-01

    EFICAz (Enzyme Function Inference by Combined Approach) is an automatic engine for large-scale enzyme function inference that combines predictions from four different methods developed and optimized to achieve high prediction accuracy: (i) recognition of functionally discriminating residues (FDRs) in enzyme families obtained by a Conservation-controlled HMM Iterative procedure for Enzyme Family classification (CHIEFc), (ii) pairwise sequence comparison using a family-specific Sequence Identity Threshold, (iii) recognition of FDRs in Multiple Pfam enzyme families, and (iv) recognition of multiple Prosite patterns of high specificity. For FDR (i.e. conserved positions in an enzyme family that discriminate between true and false members of the family) identification, we have developed an Evolutionary Footprinting method that uses evolutionary information from homofunctional and heterofunctional multiple sequence alignments associated with an enzyme family. The FDRs show a significant correlation with annotated active site residues. In a jackknife test, EFICAz shows high accuracy (92%) and sensitivity (82%) for predicting four EC digits in testing sequences that are <40% identical to any member of the corresponding training set. Applied to the Escherichia coli genome, EFICAz assigns more detailed enzymatic function than KEGG, and generates numerous novel predictions.

  14. Optical Aperture Synthesis Object's Information Extracting Based on Wavelet Denoising

    NASA Astrophysics Data System (ADS)

    Fan, W. J.; Lu, Y.

    2006-10-01

    Wavelet denoising is studied as a way to improve the extraction of an object's Fourier information in optical aperture synthesis (OAS). Translation-invariant wavelet denoising, based on Donoho's wavelet soft-threshold method, is investigated as a means of removing the pseudo-Gibbs artifacts that appear in soft-thresholded images. Extraction of the OAS object's information based on translation-invariant wavelet denoising is then studied. The study shows that wavelet threshold denoising can improve the precision and repeatability of information extraction from the interferogram, and that information extraction with translation-invariant wavelet denoising outperforms that with plain soft-threshold wavelet denoising.

  15. Determination of optimal wavelet denoising parameters for red edge feature extraction from hyperspectral data

    NASA Astrophysics Data System (ADS)

    Shafri, Helmi Z. M.; Yusof, Mohd R. M.

    2009-05-01

    A study of wavelet denoising on hyperspectral reflectance data, specifically the red edge position (REP) and its first derivative, is presented in this paper. A synthetic data set was created using a sigmoid to simulate the red edge feature. The sigmoid is injected with Gaussian white noise to simulate noisy reflectance data from handheld spectroradiometers. The use of synthetic data enables better quantification and statistical study of the effects of wavelet denoising on the features of hyperspectral data, specifically the REP. The simulation study helps to identify the most suitable wavelet parameters for denoising and demonstrates the applicability of the wavelet-based denoising procedure in hyperspectral sensing of vegetation. The suitability of the thresholding rules and mother wavelets used in wavelet denoising is evaluated by comparing the denoised sigmoid with the clean sigmoid, in terms of the shift in the inflection point (which represents the REP) and the overall change in the denoised signal relative to the clean one. The VisuShrink soft threshold was used with rescaling based on the noise estimate, in conjunction with wavelets of the Daubechies, Symlet and Coiflet families. It was found that for the VisuShrink threshold with single-level noise estimate rescaling, the Daubechies 9 and Symlet 8 wavelets produced the least distortion in the location of the sigmoid inflection point and in the overall curve. The selected mother wavelets were then used to denoise oil palm reflectance data, enabling determination of the red edge position by locating the peak of the first derivative.
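
    A sketch of the denoising procedure under test, using the PyWavelets package: decompose with a Daubechies 9 wavelet, soft-threshold the detail coefficients with the VisuShrink (universal) threshold rescaled by a first-level noise estimate, and reconstruct. The sigmoid test signal mimics the red edge simulation; the wavelength grid and noise level are invented.

        import numpy as np
        import pywt

        def visushrink_denoise(y, wavelet="db9", level=4):
            # Soft-threshold detail coefficients with the VisuShrink
            # (universal) threshold; noise estimated from level-1 details.
            coeffs = pywt.wavedec(y, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # robust noise estimate
            thr = sigma * np.sqrt(2.0 * np.log(len(y)))      # universal threshold
            coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)

        # Sigmoid "red edge" test signal, as in the simulation study:
        x = np.linspace(660, 780, 512)
        clean = 1.0 / (1.0 + np.exp(-(x - 720) / 8.0))
        noisy = clean + 0.02 * np.random.default_rng(4).standard_normal(x.size)
        denoised = visushrink_denoise(noisy)
        print(np.abs(denoised[: x.size] - clean).max())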

  16. De novo inference of protein function from coarse-grained dynamics.

    PubMed

    Bhadra, Pratiti; Pal, Debnath

    2014-10-01

    Inference of the molecular function of proteins is a fundamental task in the quest for understanding cellular processes. The task is getting increasingly difficult, with thousands of new proteins discovered each day. The difficulty arises primarily from the lack of a high-throughput experimental technique for assessing protein molecular function, a lacuna that computational approaches are trying hard to fill. The latter, too, face a major bottleneck in the absence of clear evidence based on evolutionary information. Here we propose a de novo approach to annotate protein molecular function through a structural-dynamics match for a pair of segments from two dissimilar proteins, which may share even <10% sequence identity. To screen these matches, corresponding 1 µs coarse-grained (CG) molecular dynamics trajectories were used to compute normalized root-mean-square-fluctuation graphs and select mobile segments, which were thereafter matched for all pairs using unweighted three-dimensional autocorrelation vectors. Our in-house custom-built forcefield (FF), extensively validated against dynamics information obtained from experimental nuclear magnetic resonance data, was specifically used to generate the CG dynamics trajectories. The test for correspondence between the dynamics signature of protein segments and function revealed an 87% true positive rate and a 93.5% true negative rate on a dataset of 60 experimentally validated proteins, including moonlighting proteins and those with novel functional motifs. A random test against 315 unique fold/function proteins gave >99% true recall as a negative control. A blind prediction on a novel protein appears consistent with additional evidence retrieved therein. This is the first proof-of-principle of the generalized use of structural dynamics for inferring protein molecular function, leveraging our custom-made CG FF.

  17. Homology-based inference sets the bar high for protein function prediction.

    PubMed

    Hamp, Tobias; Kassner, Rebecca; Seemayer, Stefan; Vicedo, Esmeralda; Schaefer, Christian; Achten, Dominik; Auer, Florian; Boehm, Ariane; Braun, Tatjana; Hecht, Maximilian; Heron, Mark; Hönigschmid, Peter; Hopf, Thomas A; Kaufmann, Stefanie; Kiening, Michael; Krompass, Denis; Landerer, Cedric; Mahlich, Yannick; Roos, Manfred; Rost, Burkhard

    2013-01-01

    Any method that de novo predicts protein function should do better than random. More challenging, it also ought to outperform simple homology-based inference. Here, we describe a few methods that predict protein function exclusively through homology. Together, they set the bar or lower limit for future improvements. During the development of these methods, we faced two surprises. Firstly, our most successful implementation for the baseline ranked very high at CAFA1. In fact, our best combination of homology-based methods fared only slightly worse than the top-of-the-line prediction method from the Jones group. Secondly, although the concept of homology-based inference is simple, this work revealed that the precise details of the implementation are crucial: not only did the methods span from top to bottom performers at CAFA, but also the reasons for these differences were unexpected. In this work, we also propose a new rigorous measure to compare predicted and experimental annotations. It puts more emphasis on the details of protein function than the other measures employed by CAFA and may best reflect the expectations of users. Clearly, the definition of proper goals remains one major objective for CAFA.

  18. Impact of Prematurity and Perinatal Antibiotics on the Developing Intestinal Microbiota: A Functional Inference Study.

    PubMed

    Arboleya, Silvia; Sánchez, Borja; Solís, Gonzalo; Fernández, Nuria; Suárez, Marta; Hernández-Barranco, Ana M; Milani, Christian; Margolles, Abelardo; de Los Reyes-Gavilán, Clara G; Ventura, Marco; Gueimonde, Miguel

    2016-04-29

    The microbial colonization of the neonatal gut provides a critical stimulus for normal maturation and development. This process of early microbiota establishment, known to be affected by several factors, constitutes an important determinant for later health. We studied the establishment of the microbiota in preterm and full-term infants and the impact of perinatal antibiotics upon this process in premature babies. To this end, 16S rRNA gene sequence-based microbiota assessment was performed at the phylum level and functional inference analyses were conducted. Moreover, the levels of the main intestinal microbial metabolites, the short-chain fatty acids (SCFA) acetate, propionate and butyrate, were measured by gas chromatography with flame ionization/mass spectrometry detection. Prematurity affects microbiota composition at the phylum level, leading to increases in Proteobacteria and reductions in other intestinal microorganisms. Perinatal antibiotic use further affected the microbiota of the preterm infant. These changes involved a concomitant alteration in the levels of intestinal SCFA. Moreover, functional inference analyses allowed the identification of metabolic pathways potentially affected by prematurity and perinatal antibiotic use. A deficiency or delay in the establishment of normal microbiota function seems to be present in preterm infants. Perinatal antibiotic use, such as intrapartum prophylaxis, affected the early-life microbiota establishment in preterm newborns, which may have consequences for later health.

  19. Impact of Prematurity and Perinatal Antibiotics on the Developing Intestinal Microbiota: A Functional Inference Study

    PubMed Central

    Arboleya, Silvia; Sánchez, Borja; Solís, Gonzalo; Fernández, Nuria; Suárez, Marta; Hernández-Barranco, Ana M.; Milani, Christian; Margolles, Abelardo; de los Reyes-Gavilán, Clara G.; Ventura, Marco; Gueimonde, Miguel

    2016-01-01

    Background: The microbial colonization of the neonatal gut provides a critical stimulus for normal maturation and development. This process of early microbiota establishment, known to be affected by several factors, constitutes an important determinant for later health. Methods: We studied the establishment of the microbiota in preterm and full-term infants and the impact of perinatal antibiotics upon this process in premature babies. To this end, 16S rRNA gene sequence-based microbiota assessment was performed at the phylum level and functional inference analyses were conducted. Moreover, the levels of the main intestinal microbial metabolites, the short-chain fatty acids (SCFA) acetate, propionate and butyrate, were measured by gas chromatography with flame ionization/mass spectrometry detection. Results: Prematurity affects microbiota composition at the phylum level, leading to increases in Proteobacteria and reductions in other intestinal microorganisms. Perinatal antibiotic use further affected the microbiota of the preterm infant. These changes involved a concomitant alteration in the levels of intestinal SCFA. Moreover, functional inference analyses allowed the identification of metabolic pathways potentially affected by prematurity and perinatal antibiotic use. Conclusion: A deficiency or delay in the establishment of normal microbiota function seems to be present in preterm infants. Perinatal antibiotic use, such as intrapartum prophylaxis, affected the early-life microbiota establishment in preterm newborns, which may have consequences for later health. PMID:27136545

  20. Inferring deep-brain activity from cortical activity using functional near-infrared spectroscopy.

    PubMed

    Liu, Ning; Cui, Xu; Bryant, Daniel M; Glover, Gary H; Reiss, Allan L

    2015-03-01

    Functional near-infrared spectroscopy (fNIRS) is an increasingly popular technology for studying brain function because it is non-invasive, non-irradiating and relatively inexpensive. Further, fNIRS potentially allows measurement of hemodynamic activity with high temporal resolution (milliseconds) and in naturalistic settings. However, in comparison with other imaging modalities, namely fMRI, fNIRS has a significant drawback: limited sensitivity to hemodynamic changes in deep-brain regions. To overcome this limitation, we developed a computational method to infer deep-brain activity using fNIRS measurements of cortical activity. Using simultaneous fNIRS and fMRI, we measured brain activity in 17 participants as they completed three cognitive tasks. A support vector regression (SVR) learning algorithm was used to predict activity in twelve deep-brain regions using information from surface fNIRS measurements. We compared these predictions against actual fMRI-measured activity using Pearson's correlation to quantify prediction performance. To provide a benchmark for comparison, we also used fMRI measurements of cortical activity to infer deep-brain activity. When using fMRI-measured activity from the entire cortex, we were able to predict deep-brain activity in the fusiform cortex with an average correlation coefficient of 0.80 and in all deep-brain regions with an average correlation coefficient of 0.67. The top 15% of predictions using fNIRS signal achieved an accuracy of 0.7. To our knowledge, this study is the first to investigate the feasibility of using cortical activity to infer deep-brain activity. This new method has the potential to extend fNIRS applications in cognitive and clinical neuroscience research.
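
    The prediction step reduces to standard supervised regression, sketched here with scikit-learn's SVR: cortical channel time series are the features and a deep-brain ROI time series is the target, scored by Pearson correlation on held-out scans. The linear kernel, channel count and synthetic data are assumptions, not the study's configuration.

        import numpy as np
        from scipy.stats import pearsonr
        from sklearn.svm import SVR

        rng = np.random.default_rng(5)
        n_scans, n_channels = 400, 20
        cortical = rng.standard_normal((n_scans, n_channels))      # fNIRS channel series
        w = rng.standard_normal(n_channels)
        deep = cortical @ w + 0.5 * rng.standard_normal(n_scans)   # surrogate deep ROI

        train, test = slice(0, 300), slice(300, None)
        model = SVR(kernel="linear", C=1.0).fit(cortical[train], deep[train])
        r, _ = pearsonr(model.predict(cortical[test]), deep[test])
        print(f"prediction r = {r:.2f}")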

  1. Statistical inference of dynamic resting-state functional connectivity using hierarchical observation modeling.

    PubMed

    Sojoudi, Alireza; Goodyear, Bradley G

    2016-12-01

    Spontaneous fluctuations of blood-oxygenation level-dependent functional magnetic resonance imaging (BOLD fMRI) signals are highly synchronous between brain regions that serve similar functions. This provides a means to investigate functional networks; however, most analysis techniques assume functional connections are constant over time. This may be problematic in the case of neurological disease, where functional connections may be highly variable. Recently, several methods have been proposed to determine moment-to-moment changes in the strength of functional connections over an imaging session (so-called dynamic connectivity). Here, a novel analysis framework based on a hierarchical observation modeling approach is proposed to permit statistical inference of the presence of dynamic connectivity. A two-level linear model composed of overlapping sliding windows of fMRI signals, incorporating the fact that overlapping windows are not independent, is described. To test this approach, datasets were synthesized whereby functional connectivity was either constant (significant or insignificant) or modulated by an external input. The method successfully determines the statistical significance of a functional connection in phase with the modulation, and it exhibits greater sensitivity and specificity in detecting regions with variable connectivity, when compared with sliding-window correlation analysis. For real data, this technique possesses greater reproducibility and provides a more discriminative estimate of dynamic connectivity than sliding-window correlation analysis. Hum Brain Mapp 37:4566-4580, 2016. © 2016 Wiley Periodicals, Inc.
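
    For contrast, the baseline the authors compare against is easy to state in code: a sliding-window Pearson correlation between two regional time series. The window width, the on/off coupling and the simulated signals below are hypothetical; the point is that overlapping windows share samples, which is exactly what the hierarchical observation model accounts for.

        import numpy as np

        def sliding_window_corr(x, y, width=30, step=1):
            # Windowed Pearson correlation; overlapping windows share samples,
            # so successive estimates are NOT independent.
            starts = range(0, len(x) - width + 1, step)
            return np.array([np.corrcoef(x[s:s + width], y[s:s + width])[0, 1]
                             for s in starts])

        rng = np.random.default_rng(6)
        T = 400
        x = rng.standard_normal(T)
        gain = (np.sin(2 * np.pi * np.arange(T) / 200) > 0).astype(float)  # on/off coupling
        y = gain * x + rng.standard_normal(T)
        r_t = sliding_window_corr(x, y)
        print(r_t.min(), r_t.max())   # near 0 in "off" windows, high in "on" windows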

  2. Inferring deep biosphere function and diversity through (near) surface biosphere portals (Invited)

    NASA Astrophysics Data System (ADS)

    Meyer-Dombard, D. R.; Cardace, D.; Woycheese, K. M.; Swingley, W.; Schubotz, F.; Shock, E.

    2013-12-01

    The consideration of surface expressions of the deep subsurface, such as springs, remains one of the most economically viable means to query the deep biosphere's diversity and function. Hot spring source pools are ideal portals for accessing and inferring the taxonomic and functional diversity of related deep subsurface microbial communities. Consideration of the geochemical composition of deep vs. surface fluids provides context for the interpretation of community function. Further, parallel assessment of 16S rRNA data, metagenomic sequencing, and isotopic compositions of biomass in surface springs allows inference of the functional capacities of subsurface ecosystems. Springs in Yellowstone National Park (YNP), the Philippines, and Turkey are considered here, incorporating near-surface, transition, and surface ecosystems to identify 'legacy' taxa and functions of the deep biosphere. We find that source pools often support functional capacity suited to subsurface ecosystems. For example, in hot ecosystems, source pools are strictly chemosynthetic, and surface environments with measurable dissolved oxygen may contain evidence of community functions more favorable under anaerobic conditions. Metagenomic reads from a YNP ecosystem indicate the genetic capacity for sulfate reduction at high temperature. However, inorganic sulfate reduction is only minimally energy-yielding in these surface environments, suggesting that sulfate reduction is potentially a 'legacy' function of deeper biosphere ecosystems. Carbon fixation tactics shift with increased surface exposure of the thermal fluids. Genes related to the rTCA cycle and the acetyl-CoA pathway are most prevalent in the highest-temperature, anaerobic sites. At lower-temperature sites, fewer total carbon fixation genes were observed, perhaps indicating an increase in heterotrophic metabolism with increased surface exposure. In hydrogen- and methane-rich springs in the Philippines and Turkey, methanogenic taxa dominate source

  3. Testing two mechanisms by which rational and irrational beliefs may affect the functionality of inferences.

    PubMed

    Bond, F W; Dryden, W; Briscoe, R

    1999-12-01

    This article describes a role playing experiment that examined the sufficiency hypothesis of Rational Emotive Behaviour Therapy (REBT). This proposition states that it is sufficient for rational and irrational beliefs to refer to preferences and musts, respectively, if those beliefs are to affect the functionality of inferences (FI). Consistent with the REBT literature (e.g. Dryden, 1994; Dryden & Ellis, 1988; Palmer, Dryden, Ellis & Yapp, 1995) results from this experiment showed that rational and irrational beliefs, as defined by REBT, do affect FI. Specifically, results showed that people who hold a rational belief form inferences that are significantly more functional than those that are formed by people who hold an irrational belief. Contrary to REBT theory, the sufficiency hypothesis was not supported. Thus, results indicated that it is not sufficient for rational and irrational beliefs to refer to preferences and musts, respectively, if those beliefs are to affect the FI. It appears, then, that preferences and musts are not sufficient mechanisms by which rational and irrational beliefs, respectively, affect the FI. Psychotherapeutic implications of these findings are considered.

  4. Bayesian Inference of Two-Dimensional Contrast Sensitivity Function from Data Obtained with Classical One-Dimensional Algorithms Is Efficient

    PubMed Central

    Wang, Xiaoxiao; Wang, Huan; Huang, Jinfeng; Zhou, Yifeng; Tzvetanov, Tzvetomir

    2017-01-01

    The contrast sensitivity function that spans the two dimensions of contrast and spatial frequency is crucial in predicting functional vision both in research and clinical applications. In this study, the use of Bayesian inference was proposed to determine the parameters of the two-dimensional contrast sensitivity function. Two-dimensional Bayesian inference was extensively simulated in comparison to classical one-dimensional measures. Its performance on two-dimensional data gathered with different sampling algorithms was also investigated. The results showed that the two-dimensional Bayesian inference method significantly improved the accuracy and precision of the contrast sensitivity function, as compared to the more common one-dimensional estimates. In addition, applying two-dimensional Bayesian estimation to the final data set showed similar levels of reliability and efficiency across widely disparate and established sampling methods (from classical one-dimensional sampling, such as Ψ or staircase, to more novel multi-dimensional sampling methods, such as quick contrast sensitivity function and Fisher information gain). Furthermore, the improvements observed following the application of Bayesian inference were maintained even when the prior poorly matched the subject's contrast sensitivity function. Simulation results were confirmed in a psychophysical experiment. The results indicated that two-dimensional Bayesian inference of contrast sensitivity function data provides similar estimates across a wide range of sampling methods. The present study likely has implications for the measurement of contrast sensitivity function in various settings (including research and clinical settings) and would facilitate the comparison of existing data from previous studies. PMID:28119563
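
    A minimal sketch of grid-based Bayesian inference for a two-dimensional CSF, assuming a log-parabola sensitivity model with fixed bandwidth and psychometric slope (both of which a full method would also estimate). Trials sampled jointly over spatial frequency and contrast update a posterior over the peak gain and peak frequency; all parameter values are invented.

        import numpy as np

        def csf(f, peak_gain, peak_f, bw):
            # Log-parabola CSF: log10 sensitivity as a function of frequency f.
            return peak_gain - 4.0 * ((np.log10(f) - np.log10(peak_f)) / bw) ** 2

        gains = np.linspace(1.0, 3.0, 41)          # grid over log10 peak sensitivity
        peaks = np.linspace(0.5, 8.0, 41)          # grid over peak frequency (cpd)
        G, P = np.meshgrid(gains, peaks, indexing="ij")
        log_post = np.zeros_like(G)                # flat prior

        rng = np.random.default_rng(7)
        true = dict(peak_gain=2.0, peak_f=3.0, bw=1.0)
        for _ in range(200):                       # trials over (frequency, contrast)
            f = 10 ** rng.uniform(-0.3, 1.2)
            c = 10 ** rng.uniform(-2.5, 0.0)
            p_see = 1 / (1 + 10 ** (-(csf(f, **true) + np.log10(c)) / 0.3))
            seen = rng.random() < p_see            # simulated observer response
            p_grid = 1 / (1 + 10 ** (-(csf(f, G, P, 1.0) + np.log10(c)) / 0.3))
            log_post += np.log(p_grid if seen else 1.0 - p_grid)

        i, j = np.unravel_index(np.argmax(log_post), log_post.shape)
        print(gains[i], peaks[j])                  # posterior mode near (2.0, 3.0)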

  5. Bayesian Inference of Two-Dimensional Contrast Sensitivity Function from Data Obtained with Classical One-Dimensional Algorithms Is Efficient.

    PubMed

    Wang, Xiaoxiao; Wang, Huan; Huang, Jinfeng; Zhou, Yifeng; Tzvetanov, Tzvetomir

    2016-01-01

    The contrast sensitivity function that spans the two dimensions of contrast and spatial frequency is crucial in predicting functional vision both in research and clinical applications. In this study, the use of Bayesian inference was proposed to determine the parameters of the two-dimensional contrast sensitivity function. Two-dimensional Bayesian inference was extensively simulated in comparison to classical one-dimensional measures. Its performance on two-dimensional data gathered with different sampling algorithms was also investigated. The results showed that the two-dimensional Bayesian inference method significantly improved the accuracy and precision of the contrast sensitivity function, as compared to the more common one-dimensional estimates. In addition, applying two-dimensional Bayesian estimation to the final data set showed similar levels of reliability and efficiency across widely disparate and established sampling methods (from classical one-dimensional sampling, such as Ψ or staircase, to more novel multi-dimensional sampling methods, such as quick contrast sensitivity function and Fisher information gain). Furthermore, the improvements observed following the application of Bayesian inference were maintained even when the prior poorly matched the subject's contrast sensitivity function. Simulation results were confirmed in a psychophysical experiment. The results indicated that two-dimensional Bayesian inference of contrast sensitivity function data provides similar estimates across a wide range of sampling methods. The present study likely has implications for the measurement of contrast sensitivity function in various settings (including research and clinical settings) and would facilitate the comparison of existing data from previous studies.

  6. LncRNA ontology: inferring lncRNA functions based on chromatin states and expression patterns.

    PubMed

    Li, Yongsheng; Chen, Hong; Pan, Tao; Jiang, Chunjie; Zhao, Zheng; Wang, Zishan; Zhang, Jinwen; Xu, Juan; Li, Xia

    2015-11-24

    Accumulating evidence suggests that long non-coding RNAs (lncRNAs) perform important functions. Genome-wide chromatin states are a rich source of information about cellular state, yielding insights beyond what is typically obtained by transcriptome profiling. We propose an integrative method for genome-wide functional prediction of lncRNAs that combines chromatin-state data with gene expression patterns. We first validated the method using protein-coding genes with known function annotations. Our validation results indicated that our integrative method performs better than co-expression analysis and is accurate across different conditions. Next, by applying the integrative model genome-wide, we predicted probable functions for more than 97% of human lncRNAs. The putative functions inferred by our method match those previously annotated through the targets of these lncRNAs. Moreover, linking the cellular processes influenced by cancer-associated lncRNAs to the cancer hallmarks provided a "lncRNA point-of-view" on tumor biology. Our approach provides a functional annotation of lncRNAs, which we developed into a web-based application, LncRNA Ontology, to provide visualization, analysis, and downloading of lncRNA putative functions.

  7. LncRNA ontology: inferring lncRNA functions based on chromatin states and expression patterns

    PubMed Central

    Li, Yongsheng; Chen, Hong; Pan, Tao; Jiang, Chunjie; Zhao, Zheng; Wang, Zishan; Zhang, Jinwen; Xu, Juan; Li, Xia

    2015-01-01

    Accumulating evidence suggests that long non-coding RNAs (lncRNAs) perform important functions. Genome-wide chromatin states are a rich source of information about cellular state, yielding insights beyond what is typically obtained by transcriptome profiling. We propose an integrative method for genome-wide functional prediction of lncRNAs that combines chromatin-state data with gene expression patterns. We first validated the method using protein-coding genes with known function annotations. Our validation results indicated that our integrative method performs better than co-expression analysis and is accurate across different conditions. Next, by applying the integrative model genome-wide, we predicted probable functions for more than 97% of human lncRNAs. The putative functions inferred by our method match those previously annotated through the targets of these lncRNAs. Moreover, linking the cellular processes influenced by cancer-associated lncRNAs to the cancer hallmarks provided a “lncRNA point-of-view” on tumor biology. Our approach provides a functional annotation of lncRNAs, which we developed into a web-based application, LncRNA Ontology, to provide visualization, analysis, and downloading of lncRNA putative functions. PMID:26485761

  8. Bioinformatic approaches for functional annotation and pathway inference in metagenomics data

    PubMed Central

    De Filippo, Carlotta; Ramazzotti, Matteo; Fontana, Paolo; Cavalieri, Duccio

    2012-01-01

    Metagenomic approaches are increasingly recognized as a baseline for understanding the ecology and evolution of microbial ecosystems. The development of methods for pathway inference from metagenomics data is of paramount importance to link a phenotype to a cascade of events stemming from a series of connected sets of genes or proteins. Biochemical and regulatory pathways have until recently been conceived of and modelled within one cell type, one organism, one species. This vision is being dramatically changed by the advent of whole-microbiome sequencing studies, revealing the role of symbiotic microbial populations in fundamental biochemical functions. The new landscape we face requires a clear picture of the potential of existing tools and the development of new tools to characterize, reconstruct and model biochemical and regulatory pathways as the result of the integration of function in complex symbiotic interactions of ontologically and evolutionarily distinct cell types. PMID:23175748

  9. LC-MS/MS Based Proteomic Analysis and Functional Inference of Hypothetical Proteins in Desulfovibrio Vulgaris

    SciTech Connect

    Zhang, Weiwen; Culley, David E.; Gritsenko, Marina A.; Moore, Ronald J.; Nie, Lei; Scholten, Johannes C.; Petritis, Konstantinos; Strittmatter, Eric F.; Camp, David G.; Smith, Richard D.; Brockman, Fred J.

    2006-11-03

    Direct liquid chromatography-tandem mass spectrometry (LC-MS/MS) was used to examine the proteins extracted from Desulfovibrio vulgaris cells. While our previous study provided a proteomic overview of the cellular metabolism based on proteins with known functions (Zhang et al., 2006a, Proteomics, 6: 4286-4299), this study describes the global detection and functional inference for hypothetical D. vulgaris proteins. Across six growth conditions, 15,841 tryptic peptides were identified with high confidence. Using a criterion of peptide identification from at least two out of three independent LC-MS/MS analyses per protein, 176 open reading frames (ORFs) originally annotated as hypothetical proteins were found to encode expressed proteins. These proteins ranged from 6.0 to 153 kDa, and had calculated pI values ranging from 3.7 to 11.5. Based on homology search results (with E value <= 0.01 as a cutoff), 159 proteins were defined as conserved hypothetical proteins, and 17 proteins were unique to the D. vulgaris genome. Functional inference of the conserved hypothetical proteins was performed by a combination of several non-homology based methods: genomic context analysis, phylogenomic profiling, and analysis of a combination of experimental information including peptide detection in cells grown under specific culture conditions and cellular location of the proteins. Using this approach we were able to assign possible functions to 27 conserved hypothetical proteins. This study demonstrated that a combination of proteomics and bioinformatics methodologies can provide verification for the authenticity of hypothetical proteins and improve annotation for the D. vulgaris genome.

  10. Denoising Medical Images using Calculus of Variations.

    PubMed

    Kohan, Mahdi Nakhaie; Behnam, Hamid

    2011-07-01

    We propose a method for medical image denoising using the calculus of variations and local variance estimation with shaped windows. This method reduces additive noise while preserving small patterns and edges in images. A pyramidal structure-texture decomposition of images is used to separate noise and texture components based on local variance measures. The experimental results show that the proposed method yields visual improvement as well as better SNR, RMSE and PSNR than common medical image denoising methods. Experimental results on denoising a sample magnetic resonance image show that SNR, PSNR and RMSE were improved by 19, 9 and 21 percent, respectively.
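
    The local variance ingredient of the method is simple to sketch with scipy: the windowed variance is E[x^2] - E[x]^2 computed with two uniform filters. The window size and test image are arbitrary, and the variational structure-texture machinery of the paper is not reproduced here.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def local_variance(img, size=7):
            # Per-pixel variance in a size x size window: E[x^2] - E[x]^2.
            mean = uniform_filter(img.astype(float), size)
            mean_sq = uniform_filter(img.astype(float) ** 2, size)
            return mean_sq - mean ** 2

        rng = np.random.default_rng(8)
        img = np.zeros((64, 64))
        img[20:44, 20:44] = 1.0                       # flat regions plus an edge
        noisy = img + 0.05 * rng.standard_normal(img.shape)
        var = local_variance(noisy)
        print(var[32, 32], var[32, 20])   # ~noise variance inside, much larger at the edge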

  11. Inferring modules of functionally interacting proteins using the Bond Energy Algorithm

    PubMed Central

    Watanabe, Ryosuke LA; Morett, Enrique; Vallejo, Edgar E

    2008-01-01

    Background Non-homology based methods such as phylogenetic profiles are effective for predicting functional relationships between proteins with no considerable sequence or structure similarity. Those methods rely heavily on traditional similarity metrics defined on pairs of phylogenetic patterns. Proteins do not interact exclusively in pairs, as the final biological function of a protein in the cellular context is often carried out by a group of proteins. In order to accurately infer modules of functionally interacting proteins, the consideration of not only direct but also indirect relationships is required. In this paper, we used the Bond Energy Algorithm (BEA) to predict functionally related groups of proteins. With BEA we create clusters of phylogenetic profiles based on the associations of the surrounding elements of the analyzed data, using a metric that considers linked relationships among elements in the data set. Results Using phylogenetic profiles obtained from the Clusters of Orthologous Groups of proteins (COG) database, we conducted a series of clustering experiments using BEA to predict upper-level relationships between profiles. We evaluated our results by comparing with COG's functional categories and, furthermore, with the experimentally determined functional relationships between proteins provided by the DIP and ECOCYC databases. Our results demonstrate that BEA is capable of predicting meaningful modules of functionally related proteins. BEA outperforms traditionally used clustering methods, such as k-means and hierarchical clustering, by predicting functional relationships between proteins with higher accuracy. Conclusion This study shows that the linked relationships of phylogenetic profiles obtained by BEA are useful for detecting functional associations between profiles and extending functional modules not found by traditional methods. BEA is capable of detecting relationships among phylogenetic patterns by linking them through a common element shared in

  12. Inferring functional constraints and divergence in protein families using 3D mapping of phylogenetic information

    PubMed Central

    Blouin, Christian; Boucher, Yan; Roger, Andrew J.

    2003-01-01

    Comparative sequence analysis has been used to study specific questions about the structure and function of proteins for many years. Here we propose a knowledge-based framework in which the maximum likelihood rate of evolution is used to quantify the level of constraint on the identity of a site. We demonstrate that site-rate mapping on 3D structures using datasets of rhodopsin-like G-protein receptors and α- and β-tubulins provides an excellent tool for pinpointing the functional features shared between orthologous and paralogous proteins. In addition, functional divergence within protein families can be inferred by examining the differences in the site rates, the differences in the chemical properties of the side chains or amino acid usage between aligned sites. Two novel analytical methods are introduced to characterize rate-independent functional divergence. These are tested using a dataset of two classes of HMG-CoA reductases for which only one class can perform both the forward and reverse reaction. We show that functionally divergent sites occur in a cluster of sites interacting with the catalytic residues and that this information should facilitate the design of experimental strategies to directly test functional properties of residues. PMID:12527789

  13. Inferring functional constraints and divergence in protein families using 3D mapping of phylogenetic information.

    PubMed

    Blouin, Christian; Boucher, Yan; Roger, Andrew J

    2003-01-15

    Comparative sequence analysis has been used to study specific questions about the structure and function of proteins for many years. Here we propose a knowledge-based framework in which the maximum likelihood rate of evolution is used to quantify the level of constraint on the identity of a site. We demonstrate that site-rate mapping on 3D structures using datasets of rhodopsin-like G-protein receptors and alpha- and beta-tubulins provides an excellent tool for pinpointing the functional features shared between orthologous and paralogous proteins. In addition, functional divergence within protein families can be inferred by examining the differences in the site rates, the differences in the chemical properties of the side chains or amino acid usage between aligned sites. Two novel analytical methods are introduced to characterize rate-independent functional divergence. These are tested using a dataset of two classes of HMG-CoA reductases for which only one class can perform both the forward and reverse reaction. We show that functionally divergent sites occur in a cluster of sites interacting with the catalytic residues and that this information should facilitate the design of experimental strategies to directly test functional properties of residues.

  14. Functional inference by ProtoNet family tree: the uncharacterized proteome of Daphnia pulex

    PubMed Central

    2013-01-01

    Background Daphnia pulex (water flea) is the first crustacean to have its genome fully sequenced. The crustaceans and insects diverged from a common ancestor. Daphnia is a model organism for studying the molecular makeup underlying coping with environmental challenges. The complete proteome comprises 30,550 putative proteins; however, about 10,000 of them have no known homologues. Currently, UniProtKB reports 95% of the Daphnia proteins as putative and uncharacterized. Results We applied ProtoNet, an unsupervised hierarchical protein clustering method that covers about 10 million sequences, to the automatic annotation of the Daphnia proteome. 98.7% (26,625) of the Daphnia full-length proteins were successfully mapped to 13,880 ProtoNet stable clusters, and only 1.3% remained unmapped. We compared the properties of the Daphnia protein families with those of the mouse and fruitfly proteomes. Functional annotations were successfully assigned for 86% of the proteins. Most proteins (61%) were mapped to only 2953 clusters that contain Daphnia's duplicated genes. We focused on the functionality of maximally amplified paralogs. Cuticle structure components and a variety of ion channel protein families were associated with the maximal level of gene amplification. We focused on gene amplification as a leading strategy of Daphnia in coping with environmental toxicity. Conclusions Automatic inference is achieved through mapping of sequences to the protein family tree of ProtoNet 6.0. Applying a careful inference protocol resulted in functional assignments for over 86% of the complete proteome. We conclude that the scaffold of ProtoNet can be used as an alignment-free protocol for the large-scale annotation of uncharacterized proteomes. PMID:23514195

  15. Structure-based function inference using protein family-specific fingerprints

    PubMed Central

    Bandyopadhyay, Deepak; Huan, Jun; Liu, Jinze; Prins, Jan; Snoeyink, Jack; Wang, Wei; Tropsha, Alexander

    2006-01-01

    We describe a method to assign a protein structure to a functional family using family-specific fingerprints. Fingerprints represent amino acid packing patterns that occur in most members of a family but are rare in the background, a nonredundant subset of PDB; their information is additional to sequence alignments, sequence patterns, structural superposition, and active-site templates. Fingerprints were derived for 120 families in SCOP using Frequent Subgraph Mining. For a new structure, all occurrences of these family-specific fingerprints may be found by a fast algorithm for subgraph isomorphism; the structure can then be assigned to a family with a confidence value derived from the number of fingerprints found and their distribution in background proteins. In validation experiments, we infer the function of new members added to SCOP families and we discriminate between structurally similar, but functionally divergent TIM barrel families. We then apply our method to predict function for several structural genomics proteins, including orphan structures. Some predictions have been corroborated by other computational methods and some validated by subsequent functional characterization. PMID:16731985

  16. Unleashing the power of meta-threading for evolution/structure-based function inference of proteins.

    PubMed

    Brylinski, Michal

    2013-01-01

    Protein threading is widely used in the prediction of protein structure and the subsequent functional annotation. Most threading approaches employ similar criteria for the template identification for use in both protein structure and function modeling. Using structure similarity alone might result in a high false positive rate in protein function inference, which suggests that selecting functional templates should be subject to a different set of constraints. In this study, we extend the functionality of eThread, a recently developed approach to meta-threading, focusing on the optimal selection of functional templates. We optimized the selection of template proteins to cover a broad spectrum of protein molecular function: ligand, metal, inorganic cluster, protein, and nucleic acid binding. In large-scale benchmarks, we demonstrate that the recognition rates in identifying templates that bind molecular partners in similar locations are very high, typically 70-80%, at the expense of a relatively low false positive rate. eThread also provides useful insights into the chemical properties of binding molecules and the structural features of binding. For instance, the sensitivity in recognizing similar protein-binding interfaces is 58% at only 18% false positive rate. Furthermore, in comparative analysis, we demonstrate that meta-threading supported by machine learning outperforms single-threading approaches in functional template selection. We show that meta-threading effectively detects many facets of protein molecular function, even in a low-sequence identity regime. The enhanced version of eThread is freely available as a webserver and stand-alone software at http://www.brylinski.org/ethread.

  17. A Decomposition Framework for Image Denoising Algorithms.

    PubMed

    Ghimpeteanu, Gabriela; Batard, Thomas; Bertalmio, Marcelo; Levine, Stacey

    2016-01-01

    In this paper, we consider an image decomposition model that provides a novel framework for image denoising. The model computes the components of the image to be processed in a moving frame that encodes its local geometry (directions of gradients and level lines). Then, the strategy we develop is to denoise the components of the image in the moving frame in order to preserve its local geometry, which would have been more affected if the image were processed directly. Experiments on a whole image database tested with several denoising methods show that this framework can provide better results than denoising the image directly, both in terms of peak signal-to-noise ratio and structural similarity index metrics.

  18. Bivariate Frequency Analysis using Archimedean Copula and Nonstationary GEV distribution with Inference Function for Margin method

    NASA Astrophysics Data System (ADS)

    Joo, Kyungwon; Kim, Taereem; Jung, Younghun; Heo, Jun-Haeng

    2017-04-01

    Multivariate frequency analysis of hydrological data has been developing recently. Time-series rainfall data can be partitioned into rainfall events using an inter-event time definition, and each rainfall event has a rainfall depth and a rainfall duration. With these two variables, bivariate frequency analysis can be performed. In the current study, bivariate frequency analysis using an Archimedean copula was performed on hourly recorded rainfall data for Seoul from the Korea Meteorological Administration. The parameter of the copula model is estimated by the inference function for margins (IFM) method, and stationary/nonstationary generalized extreme value (GEV) distributions are used for the marginal distributions. As a result, level curves of the copula model are obtained, and the Cramér-von Mises (CVM) statistic is computed as a goodness-of-fit test to compare the stationary and nonstationary GEV distributions for the margins.
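
    A sketch of the IFM recipe with scipy, using a Clayton copula as a concrete Archimedean example (the study's copula family and data are not reproduced here): fit GEV margins first, transform the observations to uniforms through the fitted CDFs, then maximize the copula likelihood alone for the dependence parameter. The simulated depth/duration events are hypothetical.

        import numpy as np
        from scipy import stats
        from scipy.optimize import minimize_scalar

        def clayton_nll(theta, u, v):
            # Negative log-likelihood of the Clayton copula density.
            t = u ** -theta + v ** -theta - 1.0
            logc = (np.log1p(theta) - (theta + 1.0) * (np.log(u) + np.log(v))
                    - (2.0 + 1.0 / theta) * np.log(t))
            return -np.sum(logc)

        rng = np.random.default_rng(9)
        # Hypothetical dependent rainfall events: depth (mm) and duration (h).
        L = np.linalg.cholesky(np.array([[1.0, 0.6], [0.6, 1.0]]))
        z = rng.standard_normal((500, 2)) @ L.T
        depth = stats.genextreme.ppf(stats.norm.cdf(z[:, 0]), c=-0.1, loc=40.0, scale=15.0)
        duration = stats.genextreme.ppf(stats.norm.cdf(z[:, 1]), c=-0.05, loc=8.0, scale=3.0)

        # IFM step 1: fit the GEV margins and transform the data to uniforms.
        p1 = stats.genextreme.fit(depth)
        p2 = stats.genextreme.fit(duration)
        u = np.clip(stats.genextreme.cdf(depth, *p1), 1e-9, 1 - 1e-9)
        v = np.clip(stats.genextreme.cdf(duration, *p2), 1e-9, 1 - 1e-9)

        # IFM step 2: maximize the copula likelihood with the margins held fixed.
        res = minimize_scalar(clayton_nll, bounds=(0.01, 10.0), args=(u, v), method="bounded")
        print("fitted Clayton dependence parameter:", res.x)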

  19. Pragmatic inferences in high-functioning adults with autism and Asperger syndrome.

    PubMed

    Pijnacker, Judith; Hagoort, Peter; Buitelaar, Jan; Teunisse, Jan-Pieter; Geurts, Bart

    2009-04-01

    Although people with autism spectrum disorders (ASD) often have severe problems with pragmatic aspects of language, little is known about their pragmatic reasoning. We carried out a behavioral study on high-functioning adults with autistic disorder (n = 11) and Asperger syndrome (n = 17) and matched controls (n = 28) to investigate whether they are capable of deriving scalar implicatures, which are generally considered to be pragmatic inferences. Participants were presented with underinformative sentences like "Some sparrows are birds". This sentence is logically true, but pragmatically inappropriate if the scalar implicature "Not all sparrows are birds" is derived. The present findings indicate that the combined ASD group was just as likely as controls to derive scalar implicatures, yet there was a difference between participants with autistic disorder and Asperger syndrome, suggesting a potential differentiation between these disorders in pragmatic reasoning. Moreover, our results suggest that verbal intelligence is a constraint for task performance in autistic disorder but not in Asperger syndrome.

  20. Experimental evidence validating the computational inference of functional associations from gene fusion events: a critical survey.

    PubMed

    Promponas, Vasilis J; Ouzounis, Christos A; Iliopoulos, Ioannis

    2014-05-01

    More than a decade ago, a number of methods were proposed for the inference of protein interactions, using whole-genome information from gene clusters, gene fusions and phylogenetic profiles. This structural and evolutionary view of entire genomes has provided a valuable approach for the functional characterization of proteins, especially those without sequence similarity to proteins of known function. Furthermore, this view has raised the real possibility of detecting functional associations of genes and their corresponding proteins for any entire genome sequence. Yet, despite these exciting developments, there have been relatively few cases of real use of these methods outside the computational biology field, as reflected by citation analysis. These methods have the potential to be used in high-throughput experimental settings in functional genomics and proteomics to validate results with very high accuracy and good coverage. In this critical survey, we provide a comprehensive overview of the 30 most prominent examples of single pairwise protein interaction cases in small-scale studies, where protein interactions have either been detected by gene fusion or yielded additional, corroborating evidence from biochemical observations. Our conclusion is that, with the derivation of a validated gold-standard corpus and better data integration with big experiments, gene fusion detection can truly become a valuable tool for large-scale experimental biology.

  1. Nonlocal means image denoising using orthogonal moments.

    PubMed

    Kumar, Ahlad

    2015-09-20

    An image denoising method in the moment domain is proposed. The method is a modified nonlocal means (NLM) algorithm in which the similarity of neighborhoods is evaluated using Krawtchouk moments. The results of the proposed denoising method have been validated using peak signal-to-noise ratio (PSNR), the well-known structural similarity (SSIM) index, and the blind/referenceless image spatial quality evaluator (BRISQUE). The denoising algorithm has been evaluated on synthetic and real clinical images contaminated by Gaussian, Poisson, and Rician noise. The algorithm performs well compared with Zernike-based denoising, as indicated by the PSNR, SSIM, and BRISQUE scores of the denoised images, with improvements of 3.1 dB, 0.1285, and 4.23, respectively. Further, comparative analysis of the proposed work with existing techniques has also been performed. It has been observed that the results are competitive in terms of PSNR, SSIM, and BRISQUE scores when evaluated for varying levels of noise.
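
    A plain-numpy sketch of the classical NLM baseline being modified: every pixel becomes a weighted average of search-window pixels, weighted by patch similarity. The paper's contribution, comparing Krawtchouk-moment features of patches instead of raw pixels, is deliberately not implemented; patch size, window size and the smoothing parameter h are arbitrary.

        import numpy as np

        def nlm_denoise(img, patch=3, search=7, h=0.4):
            # Classical NLM with raw-patch distances (moment features omitted).
            pad = patch // 2
            p = np.pad(img, pad, mode="reflect")
            out = np.zeros_like(img)
            half = search // 2
            H, W = img.shape
            for i in range(H):
                for j in range(W):
                    ref = p[i:i + patch, j:j + patch]
                    num = den = 0.0
                    for di in range(max(0, i - half), min(H, i + half + 1)):
                        for dj in range(max(0, j - half), min(W, j + half + 1)):
                            cand = p[di:di + patch, dj:dj + patch]
                            w = np.exp(-np.sum((ref - cand) ** 2) / h ** 2)
                            num += w * img[di, dj]
                            den += w
                    out[i, j] = num / den
            return out

        rng = np.random.default_rng(10)
        clean = np.tile(np.linspace(0, 1, 32), (32, 1))
        noisy = clean + 0.1 * rng.standard_normal(clean.shape)
        print(np.mean((nlm_denoise(noisy) - clean) ** 2) < np.mean((noisy - clean) ** 2))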

  2. Denoising of gravitational wave signals via dictionary learning algorithms

    NASA Astrophysics Data System (ADS)

    Torres-Forné, Alejandro; Marquina, Antonio; Font, José A.; Ibáñez, José M.

    2016-12-01

    Gravitational wave astronomy has become a reality after the historical detections accomplished during the first observing run of the two advanced LIGO detectors. In the following years, the number of detections is expected to increase significantly with the full commissioning of the advanced LIGO, advanced Virgo and KAGRA detectors. The development of sophisticated data analysis techniques to improve the opportunities of detection for low signal-to-noise-ratio events is hence a crucial effort. In this paper, we present one such technique, dictionary-learning algorithms, which have been extensively developed in the last few years and successfully applied mostly in the context of image processing. However, to the best of our knowledge, such algorithms have not yet been employed to denoise gravitational wave signals. By building dictionaries from numerical relativity templates of both binary black hole mergers and bursts of rotational core collapse, we show how machine-learning algorithms based on dictionaries can also be successfully applied for gravitational wave denoising. We use a subset of signals from both catalogs, embedded in nonwhite Gaussian noise, to assess our techniques with a large sample of tests and to find the best model parameters. The application of our method to the actual signal GW150914 shows promising results. Dictionary-learning algorithms could be a complementary addition to the gravitational wave data analysis toolkit. They may be used to extract signals from noise and to infer physical parameters if the data are in good enough agreement with the morphology of the dictionary atoms.
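
    The sparse-coding step of such a scheme can be sketched with scikit-learn's SparseCoder. The damped-sinusoid atoms below are hypothetical stand-ins for the numerical-relativity templates used in the paper, and the treatment of nonwhite detector noise is omitted.

```python
import numpy as np
from sklearn.decomposition import SparseCoder

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 512)

# Hypothetical dictionary: damped sinusoids standing in for
# numerical-relativity waveform templates.
atoms = np.array([np.sin(2 * np.pi * f * t) * np.exp(-d * t)
                  for f in range(20, 60, 2) for d in (2.0, 5.0, 10.0)])
atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)   # unit-norm atoms

clean = atoms[7] + 0.5 * atoms[20]          # sparse in the dictionary
noisy = clean + 0.2 * rng.standard_normal(t.size)

coder = SparseCoder(dictionary=atoms,
                    transform_algorithm="lasso_lars",
                    transform_alpha=0.05)               # sparsity penalty
code = coder.transform(noisy.reshape(1, -1))            # sparse coefficients
denoised = (code @ atoms).ravel()                       # reconstruction
```

    Because the reconstruction is forced to be a sparse combination of physically meaningful atoms, components of the data that no atom can represent (ideally, the noise) are discarded.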

  3. [Wavelet analysis and its application in denoising the spectrum of hyperspectral image].

    PubMed

    Zhou, Dan; Wang, Qin-Jun; Tian, Qing-Jiu; Lin, Qi-Zhong; Fu, Wen-Xue

    2009-07-01

    To remove sawtooth noise from hyperspectral remote sensing spectra and improve the accuracy of spectrum-based information extraction, vegetation spectra from the USGS (United States Geological Survey) spectral library were used in the present research to evaluate the performance of wavelet denoising. These spectra were measured by a custom-modified and computer-controlled Beckman spectrometer at the USGS Denver Spectroscopy Lab. The wavelength accuracy is about 5 nm in the NIR and 2 nm in the visible. In the experiment, noise with a signal-to-noise ratio (SNR) of 30 was first added to the spectrum and then removed by the wavelet denoising approach. To find the optimal parameter combinations, the SNR, mean squared error (MSE), spectral angle (SA) and an integrated evaluation coefficient eta were used to evaluate the approach's denoising effects. The denoising effect is directly proportional to SNR, and inversely proportional to MSE, SA and the integrated evaluation coefficient eta. Denoising results show that the sawtooth noise in the noisy spectrum was essentially eliminated, and that the denoised spectrum largely coincides with the original spectrum, maintaining a good spectral characteristic curve. Evaluation results show that optimal denoising can be achieved by first decomposing the noisy spectrum into 3-7 levels using db12, db10, sym9 and sym6 wavelets, then processing the wavelet transform coefficients with soft-threshold functions, and finally estimating the thresholds by the heursure threshold selection rule with rescaling based on a single estimate of the noise level from the first-level coefficients. However, this approach depends on the noise level, which means that for different noise levels the optimal parameter combination also differs.
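
    A minimal PyWavelets rendition of that pipeline might look as follows; the universal threshold stands in here for MATLAB's heursure rule, and the wavelet and level choices are just two of the combinations reported above.

```python
import numpy as np
import pywt

def wavelet_denoise(spectrum, wavelet="db12", level=5):
    coeffs = pywt.wavedec(spectrum, wavelet, level=level)
    # Single noise-level estimate from the finest-level detail
    # coefficients, matching the rescaling described in the abstract.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    # Universal threshold; the paper's heursure rule is MATLAB-specific.
    thresh = sigma * np.sqrt(2.0 * np.log(len(spectrum)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)

# denoised = wavelet_denoise(noisy_spectrum)  # noisy_spectrum: 1-D array
```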

  4. LC-MS/MS based proteomic analysis and functional inference of hypothetical proteins in Desulfovibrio vulgaris

    SciTech Connect

    Zhang, Weiwen; Culley, David E.; Gritsenko, Marina A.; Moore, Ronald J.; Nie, Lei; Scholten, Johannes C.; Petritis, Konstantinos; Strittmatter, Eric F.; Camp, David G.; Smith, Richard D.; Brockman, Fred J.

    2006-11-03

    In a previous study, the whole-genome gene expression profiles of D. vulgaris in response to oxidative stress and heat shock were determined. The results showed that 24-28% of the responsive genes encoded hypothetical proteins that have not been experimentally characterized or whose function cannot be deduced by simple sequence comparison. To further explore the protective mechanisms employed by D. vulgaris against oxidative stress and heat shock, an attempt was made in this study to infer the functions of these hypothetical proteins by phylogenomic profiling along with detailed sequence comparison against various publicly available databases. By this approach we were able to assign possible functions to 25 responsive hypothetical proteins. The findings included that DVU0725, induced by oxidative stress, may be involved in lipopolysaccharide biosynthesis, implying that alteration of lipopolysaccharide on the cell surface might serve as a mechanism against oxidative stress in D. vulgaris. In addition, two responsive proteins, DVU0024, encoding a putative transcriptional regulator, and DVU1670, encoding a predicted redox protein, shared co-evolution patterns with rubrerythrin in Archaeoglobus fulgidus and Clostridium perfringens, respectively, implying that they might be part of the stress response and protective systems in D. vulgaris. The study demonstrated that phylogenomic profiling is a useful tool for the interpretation of experimental genomics data, and also provided further insight into the cellular response to oxidative stress and heat shock in D. vulgaris.
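
    The core of phylogenomic profiling is simple to sketch: genes whose presence/absence patterns across reference genomes are highly correlated are candidates for a shared function. The profiles below are hypothetical, chosen only to illustrate the computation.

```python
import numpy as np

# Hypothetical presence(1)/absence(0) profiles across eight genomes.
profiles = {
    "DVU0024":      np.array([1, 0, 1, 1, 0, 1, 0, 1]),
    "rubrerythrin": np.array([1, 0, 1, 1, 0, 1, 0, 1]),
    "DVU1670":      np.array([0, 1, 1, 0, 1, 0, 1, 0]),
}

def profile_corr(gene_a, gene_b):
    """Pearson correlation of two phylogenomic profiles."""
    return np.corrcoef(profiles[gene_a], profiles[gene_b])[0, 1]

# Identical profiles (correlation 1.0) suggest co-evolution:
print(profile_corr("DVU0024", "rubrerythrin"))   # 1.0
print(profile_corr("DVU0024", "DVU1670"))        # strongly negative
```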

  5. Inference of Functionally-Relevant N-acetyltransferase Residues Based on Statistical Correlations.

    PubMed

    Neuwald, Andrew F; Altschul, Stephen F

    2016-12-01

    Over evolutionary time, members of a superfamily of homologous proteins sharing a common structural core diverge into subgroups filling various functional niches. At the sequence level, such divergence appears as correlations that arise from residue patterns distinct to each subgroup. Such a superfamily may be viewed as a population of sequences corresponding to a complex, high-dimensional probability distribution. Here we model this distribution as hierarchical interrelated hidden Markov models (hiHMMs), which describe these sequence correlations implicitly. By characterizing such correlations one may hope to obtain information regarding functionally-relevant properties that have thus far evaded detection. To do so, we infer a hiHMM distribution from sequence data using Bayes' theorem and Markov chain Monte Carlo (MCMC) sampling, which is widely recognized as the most effective approach for characterizing a complex, high dimensional distribution. Other routines then map correlated residue patterns to available structures with a view to hypothesis generation. When applied to N-acetyltransferases, this reveals sequence and structural features indicative of functionally important, yet generally unknown biochemical properties. Even for sets of proteins for which nothing is known beyond unannotated sequences and structures, this can lead to helpful insights. We describe, for example, a putative coenzyme-A-induced-fit substrate binding mechanism mediated by arginine residue switching between salt bridge and π-π stacking interactions. A suite of programs implementing this approach is available (psed.igs.umaryland.edu).
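
    The hierarchical hiHMM machinery is well beyond a short example, but its MCMC ingredient is easy to illustrate. Below is a minimal random-walk Metropolis sampler; the Gaussian-mean posterior is purely illustrative and is not the model used in the paper.

```python
import numpy as np

def metropolis(log_post, x0, steps=10_000, scale=0.5, seed=0):
    """Random-walk Metropolis: propose, then accept with probability
    min(1, posterior ratio); the chain's samples approximate the
    target distribution."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_post(x0)
    samples = np.empty(steps)
    for k in range(steps):
        prop = x + scale * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject
            x, lp = prop, lp_prop
        samples[k] = x
    return samples

# Illustrative target: posterior of a Gaussian mean under a flat prior.
data = np.random.default_rng(1).normal(2.0, 1.0, size=50)
draws = metropolis(lambda mu: -0.5 * np.sum((data - mu) ** 2), x0=0.0)
print(draws[1000:].mean())   # close to the sample mean of `data`
```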

  6. Inference of Functionally-Relevant N-acetyltransferase Residues Based on Statistical Correlations

    PubMed Central

    Neuwald, Andrew F.

    2016-01-01

    Over evolutionary time, members of a superfamily of homologous proteins sharing a common structural core diverge into subgroups filling various functional niches. At the sequence level, such divergence appears as correlations that arise from residue patterns distinct to each subgroup. Such a superfamily may be viewed as a population of sequences corresponding to a complex, high-dimensional probability distribution. Here we model this distribution as hierarchical interrelated hidden Markov models (hiHMMs), which describe these sequence correlations implicitly. By characterizing such correlations one may hope to obtain information regarding functionally-relevant properties that have thus far evaded detection. To do so, we infer a hiHMM distribution from sequence data using Bayes’ theorem and Markov chain Monte Carlo (MCMC) sampling, which is widely recognized as the most effective approach for characterizing a complex, high dimensional distribution. Other routines then map correlated residue patterns to available structures with a view to hypothesis generation. When applied to N-acetyltransferases, this reveals sequence and structural features indicative of functionally important, yet generally unknown biochemical properties. Even for sets of proteins for which nothing is known beyond unannotated sequences and structures, this can lead to helpful insights. We describe, for example, a putative coenzyme-A-induced-fit substrate binding mechanism mediated by arginine residue switching between salt bridge and π-π stacking interactions. A suite of programs implementing this approach is available (psed.igs.umaryland.edu). PMID:28002465

  7. Constrained parametric model for simultaneous inference of two cumulative incidence functions.

    PubMed

    Shi, Haiwen; Cheng, Yu; Jeong, Jong-Hyeon

    2013-01-01

    We propose a parametric regression model for the cumulative incidence functions (CIFs) commonly used for competing risks data. The model adopts a modified logistic model as the baseline CIF and a generalized odds-rate model for covariate effects, and it explicitly takes into account the constraint that a subject with any given prognostic factors should eventually fail from one of the causes such that the asymptotes of the CIFs should add up to one. This constraint intrinsically holds in a nonparametric analysis without covariates, but is easily overlooked in a semiparametric or parametric regression setting. We hence model the CIF from the primary cause assuming the generalized odds-rate transformation and the modified logistic function as the baseline CIF. Under the additivity constraint, the covariate effects on the competing cause are modeled by a function of the asymptote of the baseline distribution and the covariate effects on the primary cause. The inference procedure is straightforward by using the standard maximum likelihood theory. We demonstrate desirable finite-sample performance of our model by simulation studies in comparison with existing methods. Its practical utility is illustrated in an analysis of a breast cancer dataset to assess the treatment effect of tamoxifen, adjusting for age and initial pathological tumor size, on breast cancer recurrence that is subject to dependent censoring by second primary cancers and deaths.

  8. Low-rank separated representation surrogates of high-dimensional stochastic functions: Application in Bayesian inference

    SciTech Connect

    Validi, AbdoulAhad

    2014-03-01

    This study introduces a non-intrusive approach in the context of low-rank separated representation to construct a surrogate of high-dimensional stochastic functions, e.g., PDEs/ODEs, in order to decrease the computational cost of Markov Chain Monte Carlo simulations in Bayesian inference. The surrogate model is constructed via a regularized alternating least-squares regression with Tikhonov regularization, using a roughening matrix that computes the gradient of the solution, in conjunction with a perturbation-based error indicator to detect optimal model complexities. The model approximates a vector of a continuous solution at discrete values of a physical variable. The required number of random realizations to achieve a successful approximation linearly depends on the function dimensionality. The computational cost of the model construction is quadratic in the number of random inputs, which potentially tackles the curse of dimensionality in high-dimensional stochastic functions. Furthermore, this vector-valued separated representation-based model, in comparison to the available scalar-valued case, leads to a significant reduction in the cost of approximation by an order of magnitude equal to the vector size. The performance of the method is studied through its application to three numerical examples including a 41-dimensional elliptic PDE and a 21-dimensional cavity flow.
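
    A toy of the central fitting step, regularized alternating least squares for a separated (low-rank) representation, is sketched below on a sampled two-dimensional function; the paper's vector-valued formulation, gradient-based roughening matrix and error indicator are not reproduced.

```python
import numpy as np

def separated_als(F, rank=3, lam=1e-6, iters=50, seed=0):
    """Fit F ~ U @ V.T by alternating Tikhonov-regularized
    least-squares solves for the two factors."""
    rng = np.random.default_rng(seed)
    V = rng.standard_normal((F.shape[1], rank))
    I = lam * np.eye(rank)
    for _ in range(iters):
        U = F @ V @ np.linalg.inv(V.T @ V + I)    # solve for U, V fixed
        V = F.T @ U @ np.linalg.inv(U.T @ U + I)  # solve for V, U fixed
    return U, V

# Example: a smooth "response surface" sampled on a 40 x 40 grid.
x = np.linspace(0.0, 1.0, 40)
F = np.exp(-np.subtract.outer(x, x) ** 2) + 0.1 * np.outer(x, x)
U, V = separated_als(F)
print(np.linalg.norm(F - U @ V.T) / np.linalg.norm(F))  # small relative error
```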

  9. GWIS: Genome-Wide Inferred Statistics for Functions of Multiple Phenotypes.

    PubMed

    Nieuwboer, Harold A; Pool, René; Dolan, Conor V; Boomsma, Dorret I; Nivard, Michel G

    2016-10-06

    Here we present a method of genome-wide inferred study (GWIS) that provides an approximation of genome-wide association study (GWAS) summary statistics for a variable that is a function of phenotypes for which GWAS summary statistics, phenotypic means, and covariances are available. A GWIS can be performed regardless of sample overlap between the GWAS of the phenotypes on which the function depends. Because a GWIS provides association estimates and their standard errors for each SNP, a GWIS can form the basis for polygenic risk scoring, LD score regression, Mendelian randomization studies, biological annotation, and other analyses. GWISs can also be used to boost power of a GWAS meta-analysis where cohorts have not measured all constituent phenotypes in the function. We demonstrate the accuracy of a BMI GWIS by performing power simulations and type I error simulations under varying circumstances, and we apply a GWIS by reconstructing a body mass index (BMI) GWAS based on a weight GWAS and a height GWAS. Furthermore, we apply a GWIS to further our understanding of the underlying genetic structure of bipolar disorder and schizophrenia and their relation to educational attainment. Our analyses suggest that the previously reported genetic correlation between schizophrenia and educational attainment is probably induced by the observed genetic correlation between schizophrenia and bipolar disorder and the previously reported genetic correlation between bipolar disorder and educational attainment. Copyright © 2016 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
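
    A first-order (delta-method) calculation conveys the flavor of the idea, though it is only a rough illustration and not necessarily the paper's exact estimator. For BMI = weight / height**2, a per-SNP BMI effect can be approximated from the weight and height effects linearized at the phenotype means; the covariance between the two input estimates, which depends on sample overlap and the phenotypic covariance, must be supplied externally.

```python
import numpy as np

def gwis_bmi(beta_w, se_w, beta_h, se_h, mean_w, mean_h, cov_bwbh=0.0):
    """Delta-method approximation of a per-SNP BMI association from
    weight and height GWAS estimates (illustrative only).
    cov_bwbh: covariance of the two effect estimates, assumed known."""
    g_w = 1.0 / mean_h ** 2             # d(BMI)/d(weight) at the means
    g_h = -2.0 * mean_w / mean_h ** 3   # d(BMI)/d(height) at the means
    beta = g_w * beta_w + g_h * beta_h
    var = (g_w * se_w) ** 2 + (g_h * se_h) ** 2 + 2.0 * g_w * g_h * cov_bwbh
    return beta, np.sqrt(var)

# e.g. gwis_bmi(0.02, 0.004, 0.001, 0.0008, mean_w=75.0, mean_h=1.72)
```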

  10. Atypical Learning in Autism Spectrum Disorders: A Functional Magnetic Resonance Imaging Study of Transitive Inference

    PubMed Central

    Solomon, Marjorie; Ragland, J. Daniel; Niendam, Tara A.; Lesh, Tyler A.; Beck, Jonathan S.; Matter, John C.; Frank, Michael J.; Carter, Cameron S.

    2015-01-01

    Objective To investigate the neural mechanisms underlying impairments in generalizing learning shown by adolescents with autism spectrum disorder (ASD). Method Twenty-one high-functioning individuals with ASD aged 12–18 years, and 23 gender, IQ, and age-matched adolescents with typical development (TYP) completed a transitive inference (TI) task implemented using rapid event-related functional magnetic resonance imaging (fMRI). They were trained on overlapping pairs in a stimulus hierarchy of colored ovals where A>B>C>D>E>F and then tested on generalizing this training to new stimulus pairings (AF, BD, BE) in a “Big Game.” Whole-brain univariate, region of interest, and functional connectivity analyses were used. Results During training, TYP exhibited increased recruitment of the prefrontal cortex (PFC), while the group with ASD showed greater functional connectivity between the PFC and the anterior cingulate cortex (ACC). Both groups recruited the hippocampus and caudate comparably; however, functional connectivity between these regions was positively associated with TI performance for only the group with ASD. During the Big Game, TYP showed greater recruitment of the PFC, parietal cortex, and the ACC. Recruitment of these regions increased with age in the group with ASD. Conclusion During TI, TYP recruited cognitive control-related brain regions implicated in mature problem solving/reasoning including the PFC, parietal cortex, and ACC, while the group with ASD showed functional connectivity of the hippocampus and the caudate that was associated with task performance. Failure to reliably engage cognitive control-related brain regions may produce less integrated flexible learning in those with ASD unless they are provided with task support that in essence provides them with cognitive control, but this pattern may normalize with age. PMID:26506585

  11. FPGA acceleration of the phylogenetic likelihood function for Bayesian MCMC inference methods

    PubMed Central

    2010-01-01

    Background Maximum likelihood (ML)-based phylogenetic inference has become a popular method for estimating the evolutionary relationships among species based on genomic sequence data. This method is used in applications such as RAxML, GARLI, MrBayes, PAML, and PAUP. The Phylogenetic Likelihood Function (PLF) is an important kernel computation for this method. The PLF consists of a loop with no conditional behavior or dependencies between iterations. As such it contains a high potential for exploiting parallelism using micro-architectural techniques. In this paper, we describe a technique for mapping the PLF and supporting logic onto a Field Programmable Gate Array (FPGA)-based co-processor. By leveraging the FPGA's on-chip DSP modules and the high-bandwidth local memory attached to the FPGA, the resultant co-processor can accelerate ML-based methods and outperform state-of-the-art multi-core processors. Results We use the MrBayes 3 tool as a framework for designing our co-processor. For large datasets, we estimate that our accelerated MrBayes, if run on a current-generation FPGA, achieves a 10× speedup relative to software running on a state-of-the-art server-class microprocessor. The FPGA-based implementation achieves its performance by deeply pipelining the likelihood computations, performing multiple floating-point operations in parallel, and through a natural log approximation that is chosen specifically to leverage a deeply pipelined custom architecture. Conclusions Heterogeneous computing, which combines general-purpose processors with special-purpose co-processors such as FPGAs and GPUs, is a promising approach for high-performance phylogeny inference as shown by the growing body of literature in this field. FPGAs in particular are well-suited for this task because of their low power consumption as compared to many-core processors and Graphics Processor Units (GPUs) [1]. PMID:20385005
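
    The PLF kernel is essentially Felsenstein's pruning recursion; the sketch below shows one node update in NumPy. Each output entry mixes only products and sums over a fixed-size state space, which is why the loop pipelines so well in hardware.

```python
import numpy as np

def plf_node(P_left, L_left, P_right, L_right):
    """Combine two children's conditional likelihoods at a parent node.
    P_*: 4x4 substitution-probability matrices for the child branches.
    L_*: (n_sites, 4) conditional likelihood vectors.
    Each site is independent: no branches, no cross-iteration
    dependencies, hence the deep-pipelining opportunity."""
    return (L_left @ P_left.T) * (L_right @ P_right.T)

# Toy example: two sites with observed bases A and G at both tips.
P = 0.8 * np.eye(4) + np.full((4, 4), 0.05)   # toy transition matrix
L_tip = np.eye(4)[[0, 2]]                      # one-hot tip likelihoods
print(plf_node(P, L_tip, P, L_tip))
```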

  12. Denoising and dimensionality reduction of genomic data

    NASA Astrophysics Data System (ADS)

    Capobianco, Enrico

    2005-05-01

    Genomics represents a challenging research field for many quantitative scientists, and recently a vast variety of statistical techniques and machine learning algorithms have been proposed and inspired by cross-disciplinary work with computational and systems biologists. In genomic applications, the researcher deals with noisy and complex high-dimensional feature spaces; a wealth of genes whose expression levels are experimentally measured can often be observed for just a few time points, thus limiting the available samples. This unbalanced combination suggests that it might be hard for standard statistical inference techniques to come up with good general solutions, likewise for machine learning algorithms to avoid heavy computational work. Thus, one naturally turns to two major aspects of the problem: sparsity and intrinsic dimensionality. These two aspects are studied in this paper, where for both denoising and dimensionality reduction a very efficient technique, i.e., Independent Component Analysis, is used. The numerical results are very promising, and lead to a very good quality of gene feature selection, due to the signal separation power enabled by the decomposition technique. We investigate how the use of replicates can improve these results, and deal with noise through a stabilization strategy which combines the estimated components and extracts the most informative biological information from them. Exploiting the inherent level of sparsity is a key issue in genetic regulatory networks, where the connectivity matrix needs to account for the real links among genes and discard many redundancies. Most experimental evidence suggests that real gene-gene connections represent indeed a subset of what is usually mapped onto either a huge gene vector or a typically dense and highly structured network. Inferring gene network connectivity from the expression levels represents a challenging inverse problem that is at present stimulating key research in biomedical
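
    A brief sketch of the ICA step with scikit-learn: decompose a (hypothetical) gene-by-array expression matrix into a few independent components and reconstruct from them, so that the discarded components carry most of the noise.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 12))   # hypothetical: 200 genes x 12 arrays

ica = FastICA(n_components=5, random_state=0)
S = ica.fit_transform(X)             # (200, 5) independent components
A = ica.mixing_                      # (12, 5) mixing matrix

# Reconstructing from the retained components denoises and reduces
# dimensionality at once.
X_denoised = S @ A.T + ica.mean_
```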

  13. [Curvelet denoising algorithm for medical ultrasound image based on adaptive threshold].

    PubMed

    Zhuang, Zhemin; Yao, Weike; Yang, Jinyao; Li, FenLan; Yuan, Ye

    2014-11-01

    Traditional denoising algorithms for ultrasound images lose many details and much weak-edge information when suppressing speckle noise. A new adaptive-threshold denoising algorithm based on the curvelet transform is proposed in this paper. The algorithm utilizes differences in the coefficients' local variance between texture and smooth regions in each layer of the ultrasound image to define fuzzy regions and membership functions. Finally, the adaptive threshold determined by the membership function is used to denoise the ultrasound image. Experimental tests show that the algorithm can reduce speckle noise effectively while retaining the detail information of the original image, and thus can greatly enhance the performance of B-mode ultrasound instruments.

  14. Integrating gene and protein expression data with genome-scale metabolic networks to infer functional pathways.

    PubMed

    Pey, Jon; Valgepea, Kaspar; Rubio, Angel; Beasley, John E; Planes, Francisco J

    2013-12-08

    The study of cellular metabolism in the context of high-throughput -omics data has allowed us to decipher novel mechanisms of importance in biotechnology and health. To continue with this progress, it is essential to efficiently integrate experimental data into metabolic modeling. We present here an in-silico framework to infer relevant metabolic pathways for a particular phenotype under study based on its gene/protein expression data. This framework is based on the Carbon Flux Path (CFP) approach, a mixed-integer linear program that expands classical path finding techniques by considering additional biophysical constraints. In particular, the objective function of the CFP approach is amended to account for gene/protein expression data and influence obtained paths. This approach is termed integrative Carbon Flux Path (iCFP). We show that gene/protein expression data also influences the stoichiometric balancing of CFPs, which provides a more accurate picture of active metabolic pathways. This is illustrated in both a theoretical and real scenario. Finally, we apply this approach to find novel pathways relevant in the regulation of acetate overflow metabolism in Escherichia coli. As a result, several targets which could be relevant for better understanding of the phenomenon leading to impaired acetate overflow are proposed. A novel mathematical framework that determines functional pathways based on gene/protein expression data is presented and validated. We show that our approach is able to provide new insights into complex biological scenarios such as acetate overflow in Escherichia coli.

  15. Integrating gene and protein expression data with genome-scale metabolic networks to infer functional pathways

    PubMed Central

    2013-01-01

    Background The study of cellular metabolism in the context of high-throughput -omics data has allowed us to decipher novel mechanisms of importance in biotechnology and health. To continue with this progress, it is essential to efficiently integrate experimental data into metabolic modeling. Results We present here an in-silico framework to infer relevant metabolic pathways for a particular phenotype under study based on its gene/protein expression data. This framework is based on the Carbon Flux Path (CFP) approach, a mixed-integer linear program that expands classical path finding techniques by considering additional biophysical constraints. In particular, the objective function of the CFP approach is amended to account for gene/protein expression data and influence obtained paths. This approach is termed integrative Carbon Flux Path (iCFP). We show that gene/protein expression data also influences the stoichiometric balancing of CFPs, which provides a more accurate picture of active metabolic pathways. This is illustrated in both a theoretical and real scenario. Finally, we apply this approach to find novel pathways relevant in the regulation of acetate overflow metabolism in Escherichia coli. As a result, several targets which could be relevant for better understanding of the phenomenon leading to impaired acetate overflow are proposed. Conclusions A novel mathematical framework that determines functional pathways based on gene/protein expression data is presented and validated. We show that our approach is able to provide new insights into complex biological scenarios such as acetate overflow in Escherichia coli. PMID:24314206

  16. Inferring Hypotheses on Functional Relationships of Genes: Analysis of the Arabidopsis thaliana Subtilase Gene Family

    PubMed Central

    Büssis, Dirk; Stintzi, Annick; Schaller, Andreas; Kopka, Joachim; Altmann, Thomas

    2005-01-01

    The gene family of subtilisin-like serine proteases (subtilases) in Arabidopsis thaliana comprises 56 members, divided into six distinct subfamilies. Whereas the members of five subfamilies are similar to pyrolysins, two genes share stronger similarity to animal kexins. Mutant screens confirmed 144 T-DNA insertion lines with knockouts for 55 out of the 56 subtilases. Apart from SDD1, none of the confirmed homozygous mutants revealed any obvious visible phenotypic alteration during growth under standard conditions. Apart from this specific case, forward genetics gave us no hints about the function of the individual 54 non-characterized subtilase genes. Therefore, the main objective of our work was to overcome the shortcomings of the forward genetic approach and to infer alternative experimental approaches by using an integrative bioinformatics and biological approach. Computational analyses based on transcriptional co-expression and co-response patterns revealed at least two expression networks, suggesting that functional redundancy may exist among subtilases with limited similarity. Furthermore, two hubs were identified, which may be involved in signalling or may represent higher-order regulatory factors involved in responses to environmental cues. A particular enrichment of co-regulated genes with metabolic functions was observed for four subtilases, possibly representing late responsive elements of environmental stress. The kexin homologs show stronger associations with genes in a transcriptional regulation context. Based on the analyses presented here, and in accordance with previously characterized subtilases, we propose three main functions of subtilases: involvement in (i) control of development, (ii) protein turnover, and (iii) action as downstream components of signalling cascades. Supplemental material is available in the Plant Subtilase Database (PSDB) (http://csbdb.mpimp-golm.mpg.de/psdb.html), as well as from the CSB.DB (http

  17. Graph Laplacian Regularization for Image Denoising: Analysis in the Continuous Domain.

    PubMed

    Pang, Jiahao; Cheung, Gene

    2017-04-01

    Inverse imaging problems are inherently underdetermined, and hence, it is important to employ appropriate image priors for regularization. One recent popular prior-the graph Laplacian regularizer-assumes that the target pixel patch is smooth with respect to an appropriately chosen graph. However, the mechanisms and implications of imposing the graph Laplacian regularizer on the original inverse problem are not well understood. To address this problem, in this paper, we interpret neighborhood graphs of pixel patches as discrete counterparts of Riemannian manifolds and perform analysis in the continuous domain, providing insights into several fundamental aspects of graph Laplacian regularization for image denoising. Specifically, we first show the convergence of the graph Laplacian regularizer to a continuous-domain functional, integrating a norm measured in a locally adaptive metric space. Focusing on image denoising, we derive an optimal metric space assuming non-local self-similarity of pixel patches, leading to an optimal graph Laplacian regularizer for denoising in the discrete domain. We then interpret graph Laplacian regularization as an anisotropic diffusion scheme to explain its behavior during iterations, e.g., its tendency to promote piecewise smooth signals under certain settings. To verify our analysis, an iterative image denoising algorithm is developed. Experimental results show that our algorithm performs competitively with state-of-the-art denoising methods, such as BM3D for natural images, and outperforms them significantly for piecewise smooth images.
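
    A one-dimensional analogue makes the mechanics concrete: build a similarity-weighted graph on the samples, form its combinatorial Laplacian L, and denoise by solving the regularized system (I + λL)x = y. The chain graph below is a drastic simplification of the paper's patch-based graph construction.

```python
import numpy as np

def graph_laplacian_denoise(y, lam=4.0, sigma=0.1):
    """Solve min_x ||x - y||^2 + lam * x^T L x, i.e. (I + lam*L) x = y,
    with L the Laplacian of a chain graph whose edge weights reflect
    the similarity of neighboring samples."""
    n = len(y)
    W = np.zeros((n, n))
    for i in range(n - 1):
        W[i, i + 1] = W[i + 1, i] = np.exp(
            -((y[i] - y[i + 1]) ** 2) / (2.0 * sigma ** 2))
    L = np.diag(W.sum(axis=1)) - W              # combinatorial Laplacian
    return np.linalg.solve(np.eye(n) + lam * L, y)

# Piecewise-constant signal, where the regularizer is expected to shine.
rng = np.random.default_rng(0)
clean = np.repeat([0.0, 1.0, 0.3], 50)
noisy = clean + 0.08 * rng.standard_normal(clean.size)
denoised = graph_laplacian_denoise(noisy)
```

    Consistent with the analysis above, the edge-aware weights keep the smoothing from bleeding across the signal's jumps, the one-dimensional counterpart of piecewise-smooth image regions.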

  18. Electrocardiogram Signal Denoising Using Extreme-Point Symmetric Mode Decomposition and Nonlocal Means

    PubMed Central

    Tian, Xiaoying; Li, Yongshuai; Zhou, Huan; Li, Xiang; Chen, Lisha; Zhang, Xuming

    2016-01-01

    Electrocardiogram (ECG) signals contain a great deal of essential information which can be utilized by physicians for the diagnosis of heart diseases. Unfortunately, ECG signals are inevitably corrupted by noise which will severely affect the accuracy of cardiovascular disease diagnosis. Existing ECG signal denoising methods based on wavelet shrinkage, empirical mode decomposition and nonlocal means (NLM) cannot provide sufficient noise reduction or good detail preservation, especially under high noise corruption. To address this problem, we have proposed a hybrid ECG signal denoising scheme combining extreme-point symmetric mode decomposition (ESMD) with NLM. In the proposed method, the noisy ECG signals are first decomposed into several intrinsic mode functions (IMFs) and an adaptive global mean using ESMD. Then, the first several IMFs are filtered by the NLM method according to their frequency, with the QRS complex detected from these IMFs preserved as the dominant feature of the ECG signal, while the remaining IMFs are left unprocessed. The denoised IMFs and unprocessed IMFs are combined to produce the final denoised ECG signals. Experiments on both simulated ECG signals and real ECG signals from the MIT-BIH database demonstrate that the proposed method can suppress noise in ECG signals effectively while preserving the details very well, and it outperforms several state-of-the-art ECG signal denoising methods in terms of signal-to-noise ratio (SNR), root mean squared error (RMSE), percent root mean square difference (PRD) and mean opinion score (MOS) error index. PMID:27681729

  19. Graph Laplacian Regularization for Image Denoising: Analysis in the Continuous Domain

    NASA Astrophysics Data System (ADS)

    Pang, Jiahao; Cheung, Gene

    2017-04-01

    Inverse imaging problems are inherently under-determined, and hence it is important to employ appropriate image priors for regularization. One recent popular prior---the graph Laplacian regularizer---assumes that the target pixel patch is smooth with respect to an appropriately chosen graph. However, the mechanisms and implications of imposing the graph Laplacian regularizer on the original inverse problem are not well understood. To address this problem, in this paper we interpret neighborhood graphs of pixel patches as discrete counterparts of Riemannian manifolds and perform analysis in the continuous domain, providing insights into several fundamental aspects of graph Laplacian regularization for image denoising. Specifically, we first show the convergence of the graph Laplacian regularizer to a continuous-domain functional, integrating a norm measured in a locally adaptive metric space. Focusing on image denoising, we derive an optimal metric space assuming non-local self-similarity of pixel patches, leading to an optimal graph Laplacian regularizer for denoising in the discrete domain. We then interpret graph Laplacian regularization as an anisotropic diffusion scheme to explain its behavior during iterations, e.g., its tendency to promote piecewise smooth signals under certain settings. To verify our analysis, an iterative image denoising algorithm is developed. Experimental results show that our algorithm performs competitively with state-of-the-art denoising methods such as BM3D for natural images, and outperforms them significantly for piecewise smooth images.

  20. Image denoising using nonsubsampled shearlet transform and twin support vector machines.

    PubMed

    Yang, Hong-Ying; Wang, Xiang-Yang; Niu, Pan-Pan; Liu, Yang-Cheng

    2014-09-01

    Denoising of images is one of the most basic tasks of image processing, and designing an edge/texture-preserving image denoising scheme is challenging. The nonsubsampled shearlet transform (NSST) is an effective multi-scale and multi-direction analysis method; it can not only exactly compute the shearlet coefficients based on a multiresolution analysis, but also provide a nearly optimal approximation of a piecewise smooth function. Based on the NSST, a new edge/texture-preserving image denoising method using twin support vector machines (TSVMs) is proposed in this paper. First, the noisy image is decomposed into different subbands of frequency and orientation responses using the NSST. Second, the feature vector for a pixel in the noisy image is formed from the spatial geometric regularity in the NSST domain, and the TSVM model is obtained by training. The NSST detail coefficients are then divided into information-related coefficients and noise-related ones by the trained TSVM model. Finally, the detail subbands of NSST coefficients are denoised using an adaptive threshold. Extensive experimental results demonstrate that our method obtains better performance in terms of both subjective and objective evaluations than state-of-the-art denoising techniques. In particular, the proposed method preserves edges and textures very well while removing noise. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Time Difference of Arrival (TDOA) Estimation Using Wavelet Based Denoising

    DTIC Science & Technology

    1999-03-01

    Naval Postgraduate School thesis by Unal Aktas (Monterey, California). The wavelet transform is used to increase the accuracy of time difference of arrival (TDOA) estimation; several wavelet-based denoising techniques are examined.
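
    The estimation side of this pipeline is compact: denoise each sensor signal (e.g., with a wavelet soft-thresholding routine like the one sketched earlier for hyperspectral spectra), then take the TDOA from the peak of the cross-correlation. The toy signals below are illustrative.

```python
import numpy as np

def tdoa_estimate(sig_a, sig_b, fs):
    """TDOA from the cross-correlation peak: a positive result means
    sig_a is a delayed copy of sig_b."""
    xc = np.correlate(sig_a, sig_b, mode="full")
    lag = int(np.argmax(xc)) - (len(sig_b) - 1)   # lag in samples
    return lag / fs                               # lag in seconds

# A Gaussian pulse delayed by 25 samples at fs = 1 kHz -> 0.025 s.
rng = np.random.default_rng(0)
pulse = np.exp(-0.5 * ((np.arange(200) - 100) / 5.0) ** 2)
a = np.roll(pulse, 25) + 0.05 * rng.standard_normal(200)
b = pulse + 0.05 * rng.standard_normal(200)
print(tdoa_estimate(a, b, fs=1000.0))             # ~0.025
```

    Denoising before correlating sharpens the correlation peak, which is precisely where the thesis's wavelet-based pre-processing pays off at low SNR.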

  2. Phydbac2: improved inference of gene function using interactive phylogenomic profiling and chromosomal location analysis.

    PubMed

    Enault, François; Suhre, Karsten; Poirot, Olivier; Abergel, Chantal; Claverie, Jean-Michel

    2004-07-01

    Phydbac (phylogenomic display of bacterial genes) implemented a method of phylogenomic profiling using a distance measure based on normalized BLAST scores. This method was able to increase the predictive power of phylogenomic profiling by about 25% when compared to the classical approach based on Hamming distances. Here we present a major extension of Phydbac (named here Phydbac2) that extends both the concept and the functionality of the original web service. While phylogenomic profiles remain the central focus of Phydbac2, it now integrates chromosomal proximity and gene fusion analyses as two additional non-similarity-based indicators for inferring pairwise gene functional relationships. Moreover, all presently available (January 2004) fully sequenced bacterial genomes and those of three lower eukaryotes are now included in the profiling process, thus increasing the initial number of reference genomes (71 in Phydbac) to 150 in Phydbac2. Using the KEGG metabolic pathway database as a benchmark, we show that the predictive power of Phydbac2 is improved by 27% over the previous version. This gain is accounted for, on one hand, by the increased number of reference genomes (11%) and, on the other hand, by the inclusion of chromosomal proximity in the distance measure (16%). The expanded functionality of Phydbac2 now allows the user to query more than 50 different genomes, including at least one member of each major bacterial group, most major pathogens and potential bio-terrorism agents. The search for co-evolving genes based on consensus profiles from multiple organisms, the display of Phydbac2 profiles side by side with COG information, the inclusion of KEGG metabolic pathway maps, the production of chromosomal proximity maps, and the possibility of collecting and processing results from different Phydbac queries in a common shopping cart are the main new features of Phydbac2. The Phydbac2 web server is available at http://igs-server.cnrs-mrs.fr/phydbac/.

  3. Inference of gene function based on gene fusion events: the rosetta-stone method.

    PubMed

    Suhre, Karsten

    2007-01-01

    The method described in this chapter can be used to infer putative functional links between two proteins. The basic idea is based on the principle of "guilt by association." It is assumed that two proteins which are found to be transcribed as a single transcript in one (or several) genomes are likely to be functionally linked, for example by acting in the same metabolic pathway or by forming a multiprotein complex. This method is of particular interest for studying genes that exhibit no, or only remote, homologies with already well-characterized proteins. Combined with other non-homology-based methods, gene fusion events may yield valuable information for hypothesis building on protein function, and may guide experimental characterization of the target protein, for example by suggesting potential ligands or binding partners. This chapter uses the FusionDB database (http://www.igs.cnrs-mrs.fr/FusionDB/) as its source of information. FusionDB provides a characterization of a large number of gene fusion events by means of multiple sequence alignments. Orthologous genes are included to yield a comprehensive view of the structure of a gene fusion event. Phylogenetic tree reconstruction is provided to evaluate the history of a gene fusion event, and three-dimensional protein structure information is used, where available, to further characterize the nature of the gene fusion. For genes that are not included in FusionDB, some instructions are given as to how to generate similar information based solely on the publicly available web tools listed here.
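
    The detection logic itself reduces to an interval test: A and B are Rosetta Stone candidates when they align to essentially disjoint regions of the same composite protein C. The data structures below are hypothetical stand-ins for parsed BLAST output.

```python
def rosetta_stone_candidates(hits_a, hits_b, min_gap=0):
    """Return composite proteins to which query A and query B align in
    non-overlapping regions -- the signature of a gene fusion event.
    hits_a / hits_b: {composite_protein_id: (start, end)} alignment
    intervals (hypothetical BLAST-derived inputs)."""
    fused = []
    for prot in set(hits_a) & set(hits_b):
        a_start, a_end = hits_a[prot]
        b_start, b_end = hits_b[prot]
        if a_end + min_gap <= b_start or b_end + min_gap <= a_start:
            fused.append(prot)   # A and B match distinct modules of C
    return fused

hits_a = {"C1": (5, 210), "C2": (10, 180)}
hits_b = {"C1": (250, 460), "C3": (1, 90)}
print(rosetta_stone_candidates(hits_a, hits_b))   # ['C1']
```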

  4. Denoising and hyperbola recognition in GPR data

    NASA Astrophysics Data System (ADS)

    Belotti, Vittorio; Dell'Acqua, Fabio; Gamba, Paolo

    2002-01-01

    The automatic analysis of Ground Penetrating Radar (GPR) images is an interesting topic in remote sensing image processing, since it involves the use of pre-processing, detection and classification tools with the aim of near-real-time, or at least very fast, data interpretation. However, current chains of pre-processing tools for GPR images do not usually include denoising, essentially because most of the subsequent data interpretation is based on single-radar-trace analysis. Thus, no speckle noise analysis and denoising has been attempted, perhaps on the assumption that this point is immaterial for the following interpretation or detection tools. We expect, instead, that speckle denoising procedures would help. In this paper we address this problem, providing a detailed and exhaustive comparison of many of the statistical algorithms for speckle reduction proposed in the literature, i.e., the Kuan, Lee, median, Oddy and wavelet filters. For a more precise comparison, we use the Equivalent Number of Looks (ENL) and the Variance Ratio (VR). Moreover, we validate the denoising results by applying an interpretation step to the pre-processed data. We show that a wavelet denoising procedure results in a large improvement in both the ENL and VR. Moreover, it also allows the neural detector to identify more targets and fewer false positives in the same GPR data set.
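
    Of the filters compared, the Lee filter is the simplest to sketch: a local linear minimum-mean-square-error estimator that smooths homogeneous regions while leaving high-variance (edge/target) regions nearly untouched. The version below uses the common additive-noise approximation, with an illustrative noise variance.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, size=7, noise_var=0.02):
    """Lee speckle filter (additive approximation): blend the local
    mean and the observed pixel according to how much the local
    variance exceeds the assumed noise variance."""
    mean = uniform_filter(img, size)
    sq_mean = uniform_filter(img ** 2, size)
    var = sq_mean - mean ** 2
    gain = np.clip((var - noise_var) / np.maximum(var, 1e-12), 0.0, 1.0)
    return mean + gain * (img - mean)   # gain ~ 0 flat, ~ 1 on edges
```

    The ENL used above for comparison can be computed over a manually chosen homogeneous region as mean**2 / var; a higher ENL after filtering indicates stronger speckle suppression.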

  5. Geodesic denoising for optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Shahrian Varnousfaderani, Ehsan; Vogl, Wolf-Dieter; Wu, Jing; Gerendas, Bianca S.; Simader, Christian; Langs, Georg; Waldstein, Sebastian M.; Schmidt-Erfurth, Ursula

    2016-03-01

    Optical coherence tomography (OCT) is an optical signal acquisition method capturing micrometer-resolution, cross-sectional three-dimensional images. OCT images are used widely in ophthalmology to diagnose and monitor retinal diseases such as age-related macular degeneration (AMD) and glaucoma. While OCT allows the visualization of retinal structures such as vessels and retinal layers, image quality and contrast are reduced by speckle noise, which obfuscates small, low-intensity structures and structural boundaries. Existing denoising methods for OCT images may remove clinically significant image features such as texture and the boundaries of anomalies. In this paper, we propose a novel patch-based denoising method, Geodesic Denoising. The method reduces noise in OCT images while preserving clinically significant, although small, pathological structures, such as fluid-filled cysts in diseased retinas. Our method selects optimal image patch distribution representations based on geodesic patch similarity to noisy samples. Patch distributions are then randomly sampled to build a set of best matching candidates for every noisy sample, and the denoised value is computed based on a geodesic weighted average of the best candidate samples. Our method is evaluated qualitatively on real pathological OCT scans and quantitatively on a proposed set of ground-truth, noise-free synthetic OCT scans with artificially added noise and pathologies. Experimental results show that the performance of our method is comparable with state-of-the-art denoising methods while outperforming them in preserving critical, clinically relevant structures.

  6. Obesity as a risk factor for developing functional limitation among older adults: A conditional inference tree analysis

    USDA-ARS?s Scientific Manuscript database

    Objective: To examine the risk factors of developing functional decline and make probabilistic predictions by using a tree-based method that allows higher order polynomials and interactions of the risk factors. Methods: The conditional inference tree analysis, a data mining approach, was used to con...

  7. Fracture in teeth: a diagnostic for inferring bite force and tooth function.

    PubMed

    Lee, James J-W; Constantino, Paul J; Lucas, Peter W; Lawn, Brian R

    2011-11-01

    Teeth are brittle and highly susceptible to cracking. We propose that observations of such cracking can be used as a diagnostic tool for predicting bite force and inferring tooth function in living and fossil mammals. Laboratory tests on model tooth structures and extracted human teeth in simulated biting identify the principal fracture modes in enamel. Examination of museum specimens reveals the presence of similar fractures in a wide range of vertebrates, suggesting that cracks extended during ingestion or mastication. The use of 'fracture mechanics' from materials engineering provides elegant relations for quantifying critical bite forces in terms of characteristic tooth size and enamel thickness. The role of enamel microstructure in determining how cracks initiate and propagate within the enamel (and beyond) is discussed. The picture emerges of teeth as damage-tolerant structures, full of internal weaknesses and defects and yet able to contain the expansion of seemingly precarious cracks and fissures within the enamel shell. How the findings impact on dietary pressures forms an undercurrent of the study.

  8. Comparative internal anatomy of Staurozoa (Cnidaria), with functional and evolutionary inferences.

    PubMed

    Miranda, Lucília S; Collins, Allen G; Hirano, Yayoi M; Mills, Claudia E; Marques, Antonio C

    2016-01-01

    Comparative efforts to understand the body plan evolution of stalked jellyfishes are scarce. Most characters, and particularly internal anatomy, have neither been explored for the class Staurozoa, nor broadly applied in its taxonomy and classification. Recently, a molecular phylogenetic hypothesis was derived for Staurozoa, allowing for the first broad histological comparative study of staurozoan taxa. This study uses comparative histology to describe the body plans of nine staurozoan species, inferring functional and evolutionary aspects of internal morphology based on the current phylogeny of Staurozoa. We document rarely-studied structures, such as ostia between radial pockets, intertentacular lobules, gametoducts, pad-like adhesive structures, and white spots of nematocysts (the last four newly proposed putative synapomorphies for Staurozoa). Two different regions of nematogenesis are documented. This work falsifies the view that the peduncle region of stauromedusae only retains polypoid characters; metamorphosis from stauropolyp to stauromedusa occurs both at the apical region (calyx) and basal region (peduncle). Intertentacular lobules, observed previously in only a small number of species, are shown to be widespread. Similarly, gametoducts were documented in all analyzed genera, both in males and females, thereby elucidating gamete release. Finally, ostia connecting adjacent gastric radial pockets appear to be universal for Staurozoa. Detailed histological studies of medusozoan polyps and medusae are necessary to further understand the relationships between staurozoan features and those of other medusozoan cnidarians.

  9. Comparative internal anatomy of Staurozoa (Cnidaria), with functional and evolutionary inferences

    PubMed Central

    Collins, Allen G.; Hirano, Yayoi M.; Mills, Claudia E.

    2016-01-01

    Comparative efforts to understand the body plan evolution of stalked jellyfishes are scarce. Most characters, and particularly internal anatomy, have neither been explored for the class Staurozoa, nor broadly applied in its taxonomy and classification. Recently, a molecular phylogenetic hypothesis was derived for Staurozoa, allowing for the first broad histological comparative study of staurozoan taxa. This study uses comparative histology to describe the body plans of nine staurozoan species, inferring functional and evolutionary aspects of internal morphology based on the current phylogeny of Staurozoa. We document rarely-studied structures, such as ostia between radial pockets, intertentacular lobules, gametoducts, pad-like adhesive structures, and white spots of nematocysts (the last four newly proposed putative synapomorphies for Staurozoa). Two different regions of nematogenesis are documented. This work falsifies the view that the peduncle region of stauromedusae only retains polypoid characters; metamorphosis from stauropolyp to stauromedusa occurs both at the apical region (calyx) and basal region (peduncle). Intertentacular lobules, observed previously in only a small number of species, are shown to be widespread. Similarly, gametoducts were documented in all analyzed genera, both in males and females, thereby elucidating gamete release. Finally, ostia connecting adjacent gastric radial pockets appear to be universal for Staurozoa. Detailed histological studies of medusozoan polyps and medusae are necessary to further understand the relationships between staurozoan features and those of other medusozoan cnidarians. PMID:27812408

  10. Inferring muscle functional roles of the ostrich pelvic limb during walking and running using computer optimization

    PubMed Central

    Rubenson, Jonas

    2016-01-01

    Owing to their cursorial background, ostriches (Struthio camelus) walk and run with high metabolic economy, can reach very fast running speeds and quickly execute cutting manoeuvres. These capabilities are believed to be a result of their ability to coordinate muscles to take advantage of specialized passive limb structures. This study aimed to infer the functional roles of ostrich pelvic limb muscles during gait. Existing gait data were combined with a newly developed musculoskeletal model to generate simulations of ostrich walking and running that predict muscle excitations, force and mechanical work. Consistent with previous avian electromyography studies, predicted excitation patterns showed that individual muscles tended to be excited primarily during only stance or swing. Work and force estimates show that ostrich gaits are partially hip-driven with the bi-articular hip–knee muscles driving stance mechanics. Conversely, the knee extensors acted as brakes, absorbing energy. The digital extensors generated large amounts of both negative and positive mechanical work, with increased magnitudes during running, providing further evidence that ostriches make extensive use of tendinous elastic energy storage to improve economy. The simulations also highlight the need to carefully consider non-muscular soft tissues that may play a role in ostrich gait. PMID:27146688

  11. The effect of cluster size imbalance and covariates on the estimation performance of quadratic inference functions.

    PubMed

    Westgate, Philip M; Braun, Thomas M

    2012-09-10

    Generalized estimating equations (GEE) are commonly used for the analysis of correlated data. However, use of quadratic inference functions (QIFs) is becoming popular because it increases efficiency relative to GEE when the working covariance structure is misspecified. Although shown to be advantageous in the literature, the impacts of covariates and imbalanced cluster sizes on the estimation performance of the QIF method in finite samples have not been studied. This cluster size variation causes QIF's estimating equations and GEE to be in separate classes when an exchangeable correlation structure is implemented, causing QIF and GEE to be incomparable in terms of efficiency. When utilizing this structure and the number of clusters is not large, we discuss how covariates and cluster size imbalance can cause QIF, rather than GEE, to produce estimates with the larger variability. This occurrence is mainly due to the empirical nature of weighting QIF employs, rather than differences in estimating equations classes. We demonstrate QIF's lost estimation precision through simulation studies covering a variety of general cluster randomized trial scenarios and compare QIF and GEE in the analysis of data from a cluster randomized trial. Copyright © 2012 John Wiley & Sons, Ltd.

  12. Extreme deconvolution: Inferring complete distribution functions from noisy, heterogeneous and incomplete observations

    NASA Astrophysics Data System (ADS)

    Bovy, Jo; Hogg, David W.; Roweis, Sam T.

    2011-06-01

    We generalize the well-known mixtures of Gaussians approach to density estimation and the accompanying Expectation-Maximization technique for finding the maximum likelihood parameters of the mixture to the case where each data point carries an individual d-dimensional uncertainty covariance and has unique missing data properties. This algorithm reconstructs the error-deconvolved or "underlying" distribution function common to all samples, even when the individual data points are samples from different distributions, obtained by convolving the underlying distribution with the heteroskedastic uncertainty distribution of the data point and projecting out the missing data directions. We show how this basic algorithm can be extended with conjugate priors on all of the model parameters and a "split-and-merge" procedure designed to avoid local maxima of the likelihood. We demonstrate the full method by applying it to the problem of inferring the three-dimensional velocity distribution of stars near the Sun from noisy two-dimensional, transverse velocity measurements from the Hipparcos satellite.
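
    A one-dimensional toy version of the EM algorithm conveys the idea: each data point carries its own known noise variance, and the fitted mixture describes the noise-deconvolved underlying distribution. The split-and-merge refinement, conjugate priors, and missing-data projections of the full method are omitted.

```python
import numpy as np

def xd_1d(x, s2, K=2, iters=200, seed=0):
    """Toy 1-D extreme deconvolution: EM for a K-component Gaussian
    mixture when datum i carries known noise variance s2[i]."""
    rng = np.random.default_rng(seed)
    amp, mu, V = np.full(K, 1.0 / K), rng.choice(x, K), np.full(K, np.var(x))
    for _ in range(iters):
        T = V[:, None] + s2[None, :]              # total variance, (K, N)
        logq = (np.log(amp)[:, None] - 0.5 * np.log(T)
                - 0.5 * (x[None, :] - mu[:, None]) ** 2 / T)
        q = np.exp(logq - logq.max(axis=0))       # responsibilities
        q /= q.sum(axis=0)
        b = mu[:, None] + V[:, None] / T * (x[None, :] - mu[:, None])
        B = V[:, None] - V[:, None] ** 2 / T      # posterior var of truth
        amp = q.mean(axis=1)
        mu = (q * b).sum(axis=1) / q.sum(axis=1)
        V = (q * ((b - mu[:, None]) ** 2 + B)).sum(axis=1) / q.sum(axis=1)
    return amp, mu, V

# Underlying two-component mixture observed with heteroskedastic noise.
rng = np.random.default_rng(1)
truth = np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(2, 0.8, 500)])
s2 = rng.uniform(0.1, 1.0, truth.size)            # per-point noise variances
x = truth + rng.standard_normal(truth.size) * np.sqrt(s2)
print(xd_1d(x, s2))                               # recovers means near -2, 2
```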

  13. Machinery vibration signal denoising based on learned dictionary and sparse representation

    NASA Astrophysics Data System (ADS)

    Guo, Liang; Gao, Hongli; Li, Jun; Huang, Haifeng; Zhang, Xiaochen

    2015-07-01

    Mechanical vibration signal denoising has been an important problem for machine damage assessment and health monitoring. Wavelet transforms and sparse reconstruction are powerful and practical methods; however, they are based on fixed basis functions or atoms. In this paper, a novel method is presented in which the atoms used to represent signals are learned from the raw signal, and, to satisfy the requirements of real-time signal processing, an online dictionary learning algorithm is adopted. Orthogonal matching pursuit is applied to select the best-matching atoms from the dictionary, and the denoised signal is then reconstructed from the sparse coefficient vector and the learned dictionary. A simulated signal and a real bearing fault signal are used to evaluate the performance of the proposed method in comparison with several kinds of denoising algorithms, and its computational efficiency is demonstrated by an illustrative runtime example. The results show that the proposed method outperforms current algorithms in both accuracy and efficiency.
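
    With scikit-learn, the pipeline described above, online dictionary learning followed by orthogonal matching pursuit, can be sketched in a few lines; the simulated signal and all parameter values are illustrative.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
t = np.arange(8192) / 8192.0
clean = np.sin(2 * np.pi * 60 * t) * (1 + 0.5 * np.sin(2 * np.pi * 5 * t))
noisy = clean + 0.3 * rng.standard_normal(t.size)

frame = 64
X = noisy.reshape(-1, frame)            # non-overlapping signal frames

# Online (mini-batch) dictionary learning; OMP picks a few atoms per frame.
dico = MiniBatchDictionaryLearning(n_components=32, batch_size=16,
                                   transform_algorithm="omp",
                                   transform_n_nonzero_coefs=4,
                                   random_state=0)
code = dico.fit(X).transform(X)         # sparse coefficients per frame
denoised = (code @ dico.components_).ravel()
```

    Overlapping frames with averaging usually perform better in practice; non-overlapping frames keep the sketch short.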

  14. Equivalence between Step Selection Functions and Biased Correlated Random Walks for Statistical Inference on Animal Movement.

    PubMed

    Duchesne, Thierry; Fortin, Daniel; Rivest, Louis-Paul

    2015-01-01

    Animal movement has a fundamental impact on population and community structure and dynamics. Biased correlated random walks (BCRW) and step selection functions (SSF) are commonly used to study movements. Because no studies have contrasted the parameters and the statistical properties of their estimators for models constructed under these two Lagrangian approaches, it remains unclear whether or not they allow for similar inference. First, we used the Weak Law of Large Numbers to demonstrate that the log-likelihood function for estimating the parameters of BCRW models can be approximated by the log-likelihood of SSFs. Second, we illustrated the link between the two approaches by fitting BCRW with maximum likelihood and with SSF to simulated movement data in virtual environments and to the trajectory of bison (Bison bison L.) trails in natural landscapes. Using simulated and empirical data, we found that the parameters of a BCRW estimated directly from maximum likelihood and by fitting an SSF were remarkably similar. Movement analysis is increasingly used as a tool for understanding the influence of landscape properties on animal distribution. In the rapidly developing field of movement ecology, management and conservation biologists must decide which method they should implement to accurately assess the determinants of animal movement. We showed that BCRW and SSF can provide similar insights into the environmental features influencing animal movements. Both techniques have advantages. BCRW has already been extended to allow for multi-state modeling. Unlike BCRW, however, SSF can be estimated using most statistical packages, it can simultaneously evaluate habitat selection and movement biases, and can easily integrate a large number of movement taxes at multiple scales. SSF thus offers a simple, yet effective, statistical technique to identify movement taxis.

  15. Dichoptic Metacontrast Masking Functions to Infer Transmission Delay in Optic Neuritis

    PubMed Central

    Bruchmann, Maximilian; Korsukewitz, Catharina; Krämer, Julia; Wiendl, Heinz; Meuth, Sven G.

    2016-01-01

    Optic neuritis (ON) has detrimental effects on the transmission of neuronal signals generated at the earliest stages of visual information processing. The amount, as well as the speed, of transmitted visual signals is impaired. Measurements of visual evoked potentials (VEP) are often implemented in clinical routine. However, the specificity of VEPs is limited because multiple cortical areas are involved in the generation of P1 potentials, including feedback signals from higher cortical areas. Here, we show that dichoptic metacontrast masking can be used to estimate the temporal delay caused by ON. A group of 15 patients with unilateral ON, nine of whom had sufficient visual acuity and volunteered to participate, and a group of healthy control subjects (N = 8) were presented with flashes of gray disks to one eye and flashes of gray annuli to the corresponding retinal location of the other eye. By asking subjects to report the subjective visibility of the target (i.e. the disk) while varying the stimulus onset asynchrony (SOA) between disk and annulus, we obtained typical U-shaped masking functions. From these functions we inferred the critical SOAmax at which the mask (i.e. the annulus) optimally suppressed the visibility of the target. ON-associated transmission delay was estimated by comparing the SOAmax between conditions in which the disk had been presented to the affected and the mask to the other eye, and vice versa. SOAmax differed on average by 28 ms, suggesting a reduction in transmission speed in the affected eye. Compared to previously reported methods assessing perceptual consequences of altered neuronal transmission speed, the presented method is more accurate as it is not limited by the observers' ability to judge subtle variations in perceived synchrony. PMID:27711139

  16. Equivalence between Step Selection Functions and Biased Correlated Random Walks for Statistical Inference on Animal Movement

    PubMed Central

    Duchesne, Thierry; Fortin, Daniel; Rivest, Louis-Paul

    2015-01-01

    Animal movement has a fundamental impact on population and community structure and dynamics. Biased correlated random walks (BCRW) and step selection functions (SSF) are commonly used to study movements. Because no studies have contrasted the parameters and the statistical properties of their estimators for models constructed under these two Lagrangian approaches, it remains unclear whether or not they allow for similar inference. First, we used the Weak Law of Large Numbers to demonstrate that the log-likelihood function for estimating the parameters of BCRW models can be approximated by the log-likelihood of SSFs. Second, we illustrated the link between the two approaches by fitting BCRW with maximum likelihood and with SSF to simulated movement data in virtual environments and to the trajectory of bison (Bison bison L.) trails in natural landscapes. Using simulated and empirical data, we found that the parameters of a BCRW estimated directly from maximum likelihood and by fitting an SSF were remarkably similar. Movement analysis is increasingly used as a tool for understanding the influence of landscape properties on animal distribution. In the rapidly developing field of movement ecology, management and conservation biologists must decide which method they should implement to accurately assess the determinants of animal movement. We showed that BCRW and SSF can provide similar insights into the environmental features influencing animal movements. Both techniques have advantages. BCRW has already been extended to allow for multi-state modeling. Unlike BCRW, however, SSF can be estimated using most statistical packages, it can simultaneously evaluate habitat selection and movement biases, and can easily integrate a large number of movement taxes at multiple scales. SSF thus offers a simple, yet effective, statistical technique to identify movement taxis. PMID:25898019
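
    The equivalence hinges on the SSF log-likelihood, which has conditional-logit form: each observed step is contrasted against a sample of available steps from the same starting location. A minimal numerical sketch, with simulated covariates whose dimensions and values are purely hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical data: 200 observed steps; for each, the chosen step plus
# 9 "available" alternatives, described by 2 habitat covariates.
n_strata, n_alt, n_cov = 200, 10, 2
X = rng.normal(size=(n_strata, n_alt, n_cov))  # alternative 0 = chosen step

def neg_log_lik(beta):
    # SSF (conditional-logit) likelihood: each chosen step is compared
    # with the alternatives available in its own stratum.
    scores = X @ beta                              # (n_strata, n_alt)
    log_denom = np.log(np.exp(scores).sum(axis=1))
    return -(scores[:, 0] - log_denom).sum()

fit = minimize(neg_log_lik, x0=np.zeros(n_cov), method="BFGS")
print("estimated selection coefficients:", fit.x)
```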

  17. Causal inference in cross-lagged panel analysis: a reciprocal causal relationship between cognitive function and depressive symptoms.

    PubMed

    Yoon, Ju Young; Brown, Roger L

    2014-01-01

    Cross-lagged panel analysis (CLPA) is a method of examining one-way or reciprocal causal inference between longitudinally changing variables. It has been used in the social sciences for many years, but not much in nursing research. This article introduces the conceptual and statistical background of CLPA and provides an exemplar of CLPA that examines the reciprocal causal relationship between depression and cognitive function over time in older adults. The 2-year cross-lagged effects of depressive symptoms (T1) on cognitive function (T2) and of cognitive function (T1) on depressive symptoms (T2) were significant, which demonstrated a reciprocal causal relationship between cognitive function and depressive mood over time. Although CLPA is a methodologically strong approach for examining reciprocal causal inferences over time, it is necessary to consider potential sources of spuriousness that could lead to false causal relationships, as well as a reasonable time frame for detecting change in the variables.

  18. Adaptive Fourier decomposition based ECG denoising.

    PubMed

    Wang, Ze; Wan, Feng; Wong, Chi Man; Zhang, Liming

    2016-10-01

    A novel ECG denoising method is proposed based on the adaptive Fourier decomposition (AFD). The AFD decomposes a signal according to its energy distribution, thereby making this algorithm suitable for separating pure ECG signal and noise with overlapping frequency ranges but different energy distributions. A stop criterion for the iterative decomposition process in the AFD is calculated on the basis of the estimated signal-to-noise ratio (SNR) of the noisy signal. The proposed AFD-based method is validated on a synthetic ECG signal generated by an ECG model and on real ECG signals from the MIT-BIH Arrhythmia Database, both with additive Gaussian white noise. Simulation results show that the proposed method performs better on denoising and QRS detection in comparison with major ECG denoising schemes based on the wavelet transform, the Stockwell transform, the empirical mode decomposition, and the ensemble empirical mode decomposition.

  19. Image-Specific Prior Adaptation for Denoising.

    PubMed

    Lu, Xin; Lin, Zhe; Jin, Hailin; Yang, Jianchao; Wang, James Z

    2015-12-01

    Image priors are essential to many image restoration applications, including denoising, deblurring, and inpainting. Existing methods use either priors from the given image (internal) or priors from a separate collection of images (external). We find through statistical analysis that unifying the internal and external patch priors may yield a better patch prior. We propose a novel prior learning algorithm that combines the strength of both internal and external priors. In particular, we first learn a generic Gaussian mixture model from a collection of training images and then adapt the model to the given image by simultaneously adding additional components and refining the component parameters. We apply this image-specific prior to image denoising. The experimental results show that our approach yields better or competitive denoising results in terms of both the peak signal-to-noise ratio and structural similarity.

  20. Affine Non-Local Means Image Denoising.

    PubMed

    Fedorov, Vadim; Ballester, Coloma

    2017-05-01

    This paper presents an extension of the Non-Local Means denoising method that effectively exploits the affine-invariant self-similarities present in images of real scenes. Our method provides a better image denoising result by exploiting the fact that, on many occasions, similar patches exist in the image but have undergone a transformation. The proposal uses an affine-invariant patch similarity measure that performs an appropriate patch comparison by automatically and intrinsically adapting the size and shape of the patches. As a result, more similar patches are found and appropriately used. We show that this image denoising method achieves top-tier performance in terms of PSNR, consistently outperforming the results of the regular Non-Local Means, and that it provides state-of-the-art qualitative results.

  1. The "flowerpot" sign: inference of poor renal function in high grade vesicoureteral reflux by calyceal orientation.

    PubMed

    Martin, Aaron D; Gupta, Kavita; Swords, Kelly A; Belman, A Barry; Majd, Massoud; Rushton, H Gil; Pohl, Hans G

    2015-02-01

    Modern radiographic advances have allowed for detailed and accurate imaging of not only urologic anatomy but also urologic function. The art of observational inference of subtle anatomic features and function from a static radiograph is being traded for new, more precise, and more expensive modalities. While the superiority of these methods cannot be denied, the total information provided by simpler tests should not be ignored. The relationship between high grade vesicoureteral reflux, with dilated calyces arranged cephalad to a dilated funnel-shaped renal pelvis on VCUG, and reduced differential renal function has not been previously described, but has been anecdotally designated a "flowerpot" sign by our clinicians. We hypothesize that the appearance of a "flowerpot" kidney as described herein is an indicator of poor renal function in the setting of high grade VUR. IRB approval was obtained and 315 patients were identified from system-wide VCUG reports from 2004-2012 with diagnosed "high grade" or "severe" vesicoureteral reflux. Inclusion in the study required grade IV or V VUR on initial VCUG and an initial radionuclide study for determination of differential function. Patients with a solitary kidney, posterior urethral valves, multicystic dysplastic kidney, renal ectopia, or duplex collecting systems were excluded. Grade of reflux, angle of the inferior-superior calyceal axis relative to the lumbar spine, and differential uptake were recorded along with the presence of the new "flowerpot" sign. Variables were analyzed using the Mann-Whitney U test to determine statistical significance. Fifty-seven patients met inclusion criteria, with 11 designated as "flowerpot" kidneys. These "flowerpot" kidneys could be objectively differentiated from other kidneys with grade IV and/or grade V VUR both by inferior-superior calyceal axis (median angle, 52° [37-66] vs. 13° [2-37], respectively; p < 0.001) and by differential renal uptake (median, 23% [5-49] vs. 45% [15

  2. Study on torpedo fuze signal denoising method based on WPT

    NASA Astrophysics Data System (ADS)

    Zhao, Jun; Sun, Changcun; Zhang, Tao; Ren, Zhiliang

    2013-07-01

    Torpedo fuze signal denoising is an important step in ensuring reliable fuze operation. Given the good characteristics of the wavelet packet transform (WPT) for signal denoising, the paper uses the WPT to denoise the fuze signal under complex background interference, and the denoising results are simulated in Matlab. The simulation results show that the WPT denoising method can effectively eliminate the background noise present in the torpedo fuze target signal with high precision and little distortion, thereby improving the reliability of torpedo fuze operation.
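
    For readers who want to experiment, a minimal wavelet packet denoising sketch using the PyWavelets library is given below; the wavelet, decomposition level, and universal threshold are illustrative choices, not the settings used in the paper.

```python
import numpy as np
import pywt

def wpt_denoise(signal, wavelet="db4", level=4):
    """Denoise a 1-D signal by soft-thresholding wavelet packet coefficients."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet,
                            mode="symmetric", maxlevel=level)
    # Noise level estimated from the first-level detail coefficients (MAD),
    # then a universal threshold applied to every terminal packet node.
    sigma = np.median(np.abs(wp["d"].data)) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(signal.size))
    for node in wp.get_level(level, "natural"):
        node.data = pywt.threshold(node.data, thr, mode="soft")
    return wp.reconstruct(update=True)[:signal.size]

t = np.linspace(0.0, 1.0, 1024)
clean = np.sin(2 * np.pi * 30 * t) * np.exp(-3 * t)   # stand-in target echo
noisy = clean + 0.3 * np.random.default_rng(1).normal(size=t.size)
denoised = wpt_denoise(noisy)
```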

  3. Parallel object-oriented, denoising system using wavelet multiresolution analysis

    DOEpatents

    Kamath, Chandrika; Baldwin, Chuck H.; Fodor, Imola K.; Tang, Nu A.

    2005-04-12

    The present invention provides a data de-noising system utilizing processors and wavelet denoising techniques. Data is read and displayed in different formats. The data is partitioned into regions and the regions are distributed onto the processors. Communication requirements are determined among the processors according to the wavelet denoising technique and the partitioning of the data. The data is transformed onto different multiresolution levels with the wavelet transform according to the wavelet denoising technique and the communication requirements, the transformed data containing the wavelet coefficients. The denoised data is then transformed back into its original format for reading and display.

  4. EFICAz2: enzyme function inference by a combined approach enhanced by machine learning.

    PubMed

    Arakaki, Adrian K; Huang, Ying; Skolnick, Jeffrey

    2009-04-13

    We previously developed EFICAz, an enzyme function inference approach that combines predictions from non-completely overlapping component methods. Two of the four components in the original EFICAz are based on the detection of functionally discriminating residues (FDRs). FDRs distinguish between members of an enzyme family that are homofunctional (classified under the EC number of interest) or heterofunctional (annotated with another EC number or lacking enzymatic activity). Each of the two FDR-based components is associated with one of two specific kinds of enzyme families. EFICAz exhibits high precision performance, except when the maximal test to training sequence identity (MTTSI) is lower than 30%. To improve EFICAz's performance in this regime, we: i) increased the number of predictive components and ii) took advantage of consensual information from the different components to make the final EC number assignment. We have developed two new EFICAz components, analogous to the two FDR-based components, in which the discrimination between homofunctional and heterofunctional members is based on the evaluation, via Support Vector Machine models, of all the aligned positions between the query sequence and the multiple sequence alignments associated with the enzyme families. Benchmark results indicate that: i) the new SVM-based components outperform their FDR-based counterparts, and ii) both SVM-based and FDR-based components generate unique predictions. We developed classification tree models to optimally combine the results from the six EFICAz components into a final EC number prediction. The new implementation of our approach, EFICAz2, exhibits a highly improved prediction precision at MTTSI < 30% compared to the original EFICAz, with only a slight decrease in prediction recall. A comparative analysis of enzyme function annotation of the human proteome by EFICAz2 and KEGG shows that: i) when both sources make EC number assignments for the same protein sequence, the assignments tend to

  5. EFICAz2: enzyme function inference by a combined approach enhanced by machine learning

    PubMed Central

    Arakaki, Adrian K; Huang, Ying; Skolnick, Jeffrey

    2009-01-01

    Background We previously developed EFICAz, an enzyme function inference approach that combines predictions from non-completely overlapping component methods. Two of the four components in the original EFICAz are based on the detection of functionally discriminating residues (FDRs). FDRs distinguish between members of an enzyme family that are homofunctional (classified under the EC number of interest) or heterofunctional (annotated with another EC number or lacking enzymatic activity). Each of the two FDR-based components is associated with one of two specific kinds of enzyme families. EFICAz exhibits high precision performance, except when the maximal test to training sequence identity (MTTSI) is lower than 30%. To improve EFICAz's performance in this regime, we: i) increased the number of predictive components and ii) took advantage of consensual information from the different components to make the final EC number assignment. Results We have developed two new EFICAz components, analogous to the two FDR-based components, in which the discrimination between homofunctional and heterofunctional members is based on the evaluation, via Support Vector Machine models, of all the aligned positions between the query sequence and the multiple sequence alignments associated with the enzyme families. Benchmark results indicate that: i) the new SVM-based components outperform their FDR-based counterparts, and ii) both SVM-based and FDR-based components generate unique predictions. We developed classification tree models to optimally combine the results from the six EFICAz components into a final EC number prediction. The new implementation of our approach, EFICAz2, exhibits a highly improved prediction precision at MTTSI < 30% compared to the original EFICAz, with only a slight decrease in prediction recall. A comparative analysis of enzyme function annotation of the human proteome by EFICAz2 and KEGG shows that: i) when both sources make EC number assignments for the same protein sequence, the

  6. Function inferences from a molecular structural model of bacterial ParE toxin.

    PubMed

    Barbosa, Luiz Carlos Bertucci; Garrido, Saulo Santesso; Garcia, Anderson; Delfino, Davi Barbosa; Marchetto, Reinaldo

    2010-04-30

    Toxin-antitoxin (TA) systems contribute to plasmid stability by a mechanism that relies on the differential stabilities of the toxin and antitoxin proteins and leads to the killing of daughter bacteria that did not receive a plasmid copy at cell division. ParE is the toxic component of a TA system that, together with RelE, constitutes an important class of bacterial toxins, the RelE/ParE superfamily. For the ParE toxin, no crystallographic structure is available so far, and the few in vitro studies available demonstrated that the target of the toxin's activity is E. coli DNA gyrase. Here, a 3D model of the E. coli ParE toxin was built by homology modeling using MODELLER, a program for comparative modeling. The model was energy minimized with CHARMM and validated using the PROCHECK and VERIFY3D programs. Ramachandran plot analysis found that the proportion of residues falling into the most favored and allowed regions was 96.8%. A structural similarity search employing the DALI server showed the RelE and YoeB families as the best matches. The model also showed similarities with other microbial ribonucleases, although with lower scores. A possible homologous deep-cleft active site was identified in the model using the CASTp program. Additional studies are needed to investigate nuclease activity in members of the ParE family and to confirm the replication-inhibiting activity. The predicted model allows initial inferences about the unexplored 3D structure of the ParE toxin and may be further used in the rational design of molecules for structure-function studies.

  7. Modulated Modularity Clustering as an Exploratory Tool for Functional Genomic Inference

    PubMed Central

    Stone, Eric A.; Ayroles, Julien F.

    2009-01-01

    In recent years, the advent of high-throughput assays, coupled with their diminishing cost, has facilitated a systems approach to biology. As a consequence, massive amounts of data are currently being generated, requiring efficient methodology aimed at the reduction of scale. Whole-genome transcriptional profiling is a standard component of systems-level analyses, and to reduce scale and improve inference clustering genes is common. Since clustering is often the first step toward generating hypotheses, cluster quality is critical. Conversely, because the validation of cluster-driven hypotheses is indirect, it is critical that quality clusters not be obtained by subjective means. In this paper, we present a new objective-based clustering method and demonstrate that it yields high-quality results. Our method, modulated modularity clustering (MMC), seeks community structure in graphical data. MMC modulates the connection strengths of edges in a weighted graph to maximize an objective function (called modularity) that quantifies community structure. The result of this maximization is a clustering through which tightly-connected groups of vertices emerge. Our application is to systems genetics, and we quantitatively compare MMC both to the hierarchical clustering method most commonly employed and to three popular spectral clustering approaches. We further validate MMC through analyses of human and Drosophila melanogaster expression data, demonstrating that the clusters we obtain are biologically meaningful. We show MMC to be effective and suitable to applications of large scale. In light of these features, we advocate MMC as a standard tool for exploration and hypothesis generation. PMID:19424432

  8. Function inferences from a molecular structural model of bacterial ParE toxin

    PubMed Central

    Barbosa, Luiz Carlos Bertucci; Garrido, Saulo Santesso; Garcia, Anderson; Delfino, Davi Barbosa; Marchetto, Reinaldo

    2010-01-01

    Toxin-antitoxin (TA) systems contribute to plasmid stability by a mechanism that relies on the differential stabilities of the toxin and antitoxin proteins and leads to the killing of daughter bacteria that did not receive a plasmid copy at cell division. ParE is the toxic component of a TA system that, together with RelE, constitutes an important class of bacterial toxins, the RelE/ParE superfamily. For the ParE toxin, no crystallographic structure is available so far, and the few in vitro studies available demonstrated that the target of the toxin's activity is E. coli DNA gyrase. Here, a 3D model of the E. coli ParE toxin was built by homology modeling using MODELLER, a program for comparative modeling. The model was energy minimized with CHARMM and validated using the PROCHECK and VERIFY3D programs. Ramachandran plot analysis found that the proportion of residues falling into the most favored and allowed regions was 96.8%. A structural similarity search employing the DALI server showed the RelE and YoeB families as the best matches. The model also showed similarities with other microbial ribonucleases, although with lower scores. A possible homologous deep-cleft active site was identified in the model using the CASTp program. Additional studies are needed to investigate nuclease activity in members of the ParE family and to confirm the replication-inhibiting activity. The predicted model allows initial inferences about the unexplored 3D structure of the ParE toxin and may be further used in the rational design of molecules for structure-function studies. PMID:20975905

  9. Wavelet-based fMRI analysis: 3-D denoising, signal separation, and validation metrics.

    PubMed

    Khullar, Siddharth; Michael, Andrew; Correa, Nicolle; Adali, Tulay; Baum, Stefi A; Calhoun, Vince D

    2011-02-14

    We present a novel integrated wavelet-domain based framework (w-ICA) for 3-D denoising functional magnetic resonance imaging (fMRI) data followed by source separation analysis using independent component analysis (ICA) in the wavelet domain. We propose the idea of a 3-D wavelet-based multi-directional denoising scheme where each volume in a 4-D fMRI data set is sub-sampled using the axial, sagittal and coronal geometries to obtain three different slice-by-slice representations of the same data. The filtered intensity value of an arbitrary voxel is computed as an expected value of the denoised wavelet coefficients corresponding to the three viewing geometries for each sub-band. This results in a robust set of denoised wavelet coefficients for each voxel. Given the de-correlated nature of these denoised wavelet coefficients, it is possible to obtain more accurate source estimates using ICA in the wavelet domain. The contributions of this work can be realized as two modules: First, in the analysis module we combine a new 3-D wavelet denoising approach with signal separation properties of ICA in the wavelet domain. This step helps obtain an activation component that corresponds closely to the true underlying signal, which is maximally independent with respect to other components. Second, we propose and describe two novel shape metrics for post-ICA comparisons between activation regions obtained through different frameworks. We verified our method using simulated as well as real fMRI data and compared our results against the conventional scheme (Gaussian smoothing+spatial ICA: s-ICA). The results show significant improvements based on two important features: (1) preservation of shape of the activation region (shape metrics) and (2) receiver operating characteristic curves. It was observed that the proposed framework was able to preserve the actual activation shape in a consistent manner even for very high noise levels in addition to significant reduction in false

  10. Making group inferences using sparse representation of resting-state functional MRI data with application to sleep deprivation.

    PubMed

    Shen, Hui; Xu, Huaze; Wang, Lubin; Lei, Yu; Yang, Liu; Zhang, Peng; Qin, Jian; Zeng, Ling-Li; Zhou, Zongtan; Yang, Zheng; Hu, Dewen

    2017-09-01

    Past studies on drawing group inferences for functional magnetic resonance imaging (fMRI) data usually assume that a brain region is involved in only one functional brain network. However, recent evidence has demonstrated that some brain regions might simultaneously participate in multiple functional networks. Here, we presented a novel approach for making group inferences using sparse representation of resting-state fMRI data and its application to the identification of changes in functional networks in the brains of 37 healthy young adult participants after 36 h of sleep deprivation (SD) in contrast to the rested wakefulness (RW) stage. Our analysis based on group-level sparse representation revealed that multiple functional networks involved in memory, emotion, attention, and vigilance processing were impaired by SD. Of particular interest, the thalamus was observed to contribute to multiple functional networks in which differentiated response patterns were exhibited. These results not only further elucidate the impact of SD on brain function but also demonstrate the ability of the proposed approach to provide new insights into the functional organization of the resting-state brain by permitting spatial overlap between networks and facilitating the description of the varied relationships of the overlapping regions with other regions of the brain in the context of different functional systems. Hum Brain Mapp 38:4671-4689, 2017. © 2017 Wiley Periodicals, Inc.

  11. Leveraging enzyme structure-function relationships for functional inference and experimental design: the structure-function linkage database.

    PubMed

    Pegg, Scott C-H; Brown, Shoshana D; Ojha, Sunil; Seffernick, Jennifer; Meng, Elaine C; Morris, John H; Chang, Patricia J; Huang, Conrad C; Ferrin, Thomas E; Babbitt, Patricia C

    2006-02-28

    The study of mechanistically diverse enzyme superfamilies (collections of enzymes that perform different overall reactions but share both a common fold and a distinct mechanistic step performed by key conserved residues) helps elucidate the structure-function relationships of enzymes. We have developed a resource, the structure-function linkage database (SFLD), to analyze these structure-function relationships. Unique to the SFLD is its hierarchical classification scheme based on linking the specific partial reactions (or other chemical capabilities) that are conserved at the superfamily, subgroup, and family levels with the conserved structural elements that mediate them. We present the results of analyses using the SFLD in correcting misannotations, guiding protein engineering experiments, and elucidating the function of recently solved enzyme structures from the structural genomics initiative. The SFLD is freely accessible at http://sfld.rbvi.ucsf.edu.

  12. Sublexical Inferences in Beginning Reading: Medial Vowel Digraphs as Functional Units of Transfer.

    ERIC Educational Resources Information Center

    Savage, Robert; Stuart, Morag

    1998-01-01

    Two experiments evaluated young children's use of lexical inference. Found that equivalent transfer occurred when both clue-word pronunciation and orthography were present at transfer and when only the pronunciation of the clue word was given, but not when the clue word was pre-taught. Improvements from pre-taught clue words sharing rimes or vowel…

  13. To denoise or deblur: parameter optimization for imaging systems

    NASA Astrophysics Data System (ADS)

    Mitra, Kaushik; Cossairt, Oliver; Veeraraghavan, Ashok

    2014-03-01

    In recent years smartphone cameras have improved considerably, but they still produce very noisy images in low-light conditions, mainly because of their small sensor size. Image quality can be improved by increasing the aperture size and/or exposure time; however, this makes the images susceptible to defocus and/or motion blur. In this paper, we analyze the trade-off between denoising and deblurring as a function of the illumination level. For this purpose we utilize a recently introduced framework for the analysis of computational imaging systems that takes into account the effect of (1) optical multiplexing, (2) noise characteristics of the sensor, and (3) the reconstruction algorithm, which typically uses image priors. Following this framework, we model the image prior using a Gaussian Mixture Model (GMM), which allows us to analytically compute the Minimum Mean Squared Error (MMSE). We analyze the specific problem of motion and defocus deblurring, showing how to find the optimal exposure time and aperture setting as a function of illumination level. This framework gives us the machinery to answer an open question in computational imaging: to deblur or denoise?

  14. Improved 3D wavelet-based de-noising of fMRI data

    NASA Astrophysics Data System (ADS)

    Khullar, Siddharth; Michael, Andrew M.; Correa, Nicolle; Adali, Tulay; Baum, Stefi A.; Calhoun, Vince D.

    2011-03-01

    Functional MRI (fMRI) data analysis deals with the problem of detecting very weak signals in very noisy data. Smoothing with a Gaussian kernel is often used to decrease noise at the cost of losing spatial specificity. We present a novel wavelet-based 3-D technique to remove noise in fMRI data while preserving the spatial features in the component maps obtained through group independent component analysis (ICA). Each volume is decomposed into eight volumetric sub-bands using a separable 3-D stationary wavelet transform. Each of the detail sub-bands is then treated through the main denoising module. This module facilitates computation of shrinkage factors through a hierarchical framework. It utilizes information iteratively from the sub-band at the next higher level to estimate the denoised coefficients at the current level. These de-noised sub-bands are then reconstructed back to the spatial domain using an inverse wavelet transform. Finally, the denoised group fMRI data is analyzed using ICA, where the data is decomposed into clusters of functionally correlated voxels (spatial maps) as indicators of task-related neural activity. The proposed method enables the preservation of the shape of the actual activation regions associated with the BOLD activity. In addition it is able to achieve high specificity as compared to the conventionally used FWHM (full width half maximum) Gaussian kernels for smoothing fMRI data.

  15. A denoising algorithm for projection measurements in cone-beam computed tomography.

    PubMed

    Karimi, Davood; Ward, Rabab

    2016-02-01

    The ability to reduce the radiation dose in computed tomography (CT) is limited by the excessive quantum noise present in the projection measurements. Sinogram denoising is, therefore, an essential step towards reconstructing high-quality images, especially in low-dose CT. Effective denoising requires accurate modeling of the photon statistics and of the prior knowledge about the characteristics of the projection measurements. This paper proposes an algorithm for denoising low-dose sinograms in cone-beam CT. The proposed algorithm is based on minimizing a cost function that includes a measurement consistency term and two regularizations in terms of the gradient and the Hessian of the sinogram. This choice of the regularization is motivated by the nature of CT projections. We use a split Bregman algorithm to minimize the proposed cost function. We apply the algorithm on simulated and real cone-beam projections and compare the results with another algorithm based on bilateral filtering. Our experiments with simulated and real data demonstrate the effectiveness of the proposed algorithm. Denoising of the projections with the proposed algorithm leads to a significant reduction of the noise in the reconstructed images without oversmoothing the edges or introducing artifacts.
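
    The flavor of the cost function can be conveyed with a simplified, fully quadratic version minimized by plain gradient descent; the split Bregman solver, the exact penalties, and the photon-statistics weighting of the paper are replaced here by assumed stand-ins (the Laplacian substitutes for the Hessian penalty).

```python
import numpy as np

def laplacian(u):
    # 5-point Laplacian with replicated (Neumann) boundaries.
    up = np.pad(u, 1, mode="edge")
    return (up[:-2, 1:-1] + up[2:, 1:-1]
            + up[1:-1, :-2] + up[1:-1, 2:] - 4.0 * u)

def denoise_sinogram(y, lam=0.2, mu=0.05, n_iter=300, step=0.1):
    """Gradient descent on J(u) = ||u - y||^2 + lam ||grad u||^2 + mu ||Lap u||^2."""
    u = y.copy()
    for _ in range(n_iter):
        # grad of ||grad u||^2 is -2 Lap u; grad of ||Lap u||^2 is 2 Lap^2 u.
        g = (2.0 * (u - y) - 2.0 * lam * laplacian(u)
             + 2.0 * mu * laplacian(laplacian(u)))
        u -= step * g
    return u

# Toy low-dose sinogram: a smooth profile corrupted by Poisson counting noise.
rng = np.random.default_rng(2)
clean = np.tile(np.sin(np.linspace(0.0, np.pi, 90)), (64, 1)) + 1.1
noisy = rng.poisson(100.0 * clean) / 100.0
denoised = denoise_sinogram(noisy)
```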

  16. A Phylogeny-Based Benchmarking Test for Orthology Inference Reveals the Limitations of Function-Based Validation

    PubMed Central

    Larsson, Tomas; Powell, Sean; Doerks, Tobias; von Mering, Christian

    2014-01-01

    Accurate orthology prediction is crucial for many applications in the post-genomic era. The lack of broadly accepted benchmark tests precludes a comprehensive analysis of orthology inference. So far, functional annotation between orthologs serves as a performance proxy. However, this violates the fundamental principle of orthology as an evolutionary definition, while it is often not applicable due to limited experimental evidence for most species. Therefore, we constructed high quality "gold standard" orthologous groups that can serve as a benchmark set for orthology inference in bacterial species. Herein, we used this dataset to demonstrate 1) why a manually curated, phylogeny-based dataset is more appropriate for benchmarking orthology than other popular practices and 2) how it guides database design and parameterization through careful error quantification. More specifically, we illustrate how function-based tests often fail to identify false assignments, misjudging the true performance of orthology inference methods. We also examined how our dataset can instruct the selection of a “core” species repertoire to improve detection accuracy. We conclude that including more genomes at the proper evolutionary distances can influence the overall quality of orthology detection. The curated gene families, called Reference Orthologous Groups, are publicly available at http://eggnog.embl.de/orthobench2. PMID:25369365

  17. A phylogeny-based benchmarking test for orthology inference reveals the limitations of function-based validation.

    PubMed

    Trachana, Kalliopi; Forslund, Kristoffer; Larsson, Tomas; Powell, Sean; Doerks, Tobias; von Mering, Christian; Bork, Peer

    2014-01-01

    Accurate orthology prediction is crucial for many applications in the post-genomic era. The lack of broadly accepted benchmark tests precludes a comprehensive analysis of orthology inference. So far, functional annotation between orthologs serves as a performance proxy. However, this violates the fundamental principle of orthology as an evolutionary definition, while it is often not applicable due to limited experimental evidence for most species. Therefore, we constructed high quality "gold standard" orthologous groups that can serve as a benchmark set for orthology inference in bacterial species. Herein, we used this dataset to demonstrate 1) why a manually curated, phylogeny-based dataset is more appropriate for benchmarking orthology than other popular practices and 2) how it guides database design and parameterization through careful error quantification. More specifically, we illustrate how function-based tests often fail to identify false assignments, misjudging the true performance of orthology inference methods. We also examined how our dataset can instruct the selection of a "core" species repertoire to improve detection accuracy. We conclude that including more genomes at the proper evolutionary distances can influence the overall quality of orthology detection. The curated gene families, called Reference Orthologous Groups, are publicly available at http://eggnog.embl.de/orthobench2.

  18. Ladar range image denoising by a nonlocal probability statistics algorithm

    NASA Astrophysics Data System (ADS)

    Xia, Zhi-Wei; Li, Qi; Xiong, Zhi-Peng; Wang, Qi

    2013-01-01

    Based on the characteristics of coherent ladar range images and on nonlocal means (NLM), a nonlocal probability statistics (NLPS) algorithm is proposed in this paper. The difference is that NLM performs denoising using the mean of the conditional probability distribution function (PDF), while NLPS uses the maximum of the marginal PDF. In the algorithm, similar blocks are found by block matching and grouped together. Pixels in the group are analyzed by probability statistics, and the gray value with maximum probability is used as the estimated value of the current pixel. Simulated coherent ladar range images with different carrier-to-noise ratios and a real coherent ladar range image with 8 gray levels are denoised with this algorithm, and the results are compared with those of the median filter, the multitemplate order mean filter, NLM, the median nonlocal mean filter and its incorporation of anatomical side information, and the unsupervised information-theoretic adaptive filter. Both range-anomaly noise and Gaussian noise in coherent ladar range images are effectively suppressed by NLPS.
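
    A single-scale sketch of the core idea follows: gather the centers of the most similar blocks in a search window and keep the gray value with maximum estimated probability (the histogram mode). The patch size, search radius, bin count, and the use of block centers only are all simplifying assumptions; the full algorithm pools every pixel of the matched blocks.

```python
import numpy as np

def nlps_denoise(img, patch=5, search=6, n_similar=16, n_bins=32):
    """Replace each pixel by the mode of values gathered from similar blocks."""
    h, w = img.shape
    r = patch // 2
    out = img.copy()
    pad = np.pad(img, r + search, mode="reflect")
    for i in range(h):
        for j in range(w):
            ci, cj = i + r + search, j + r + search
            ref = pad[ci - r:ci + r + 1, cj - r:cj + r + 1]
            dists, centers = [], []
            for di in range(-search, search + 1, 2):   # coarse grid for speed
                for dj in range(-search, search + 1, 2):
                    blk = pad[ci + di - r:ci + di + r + 1,
                              cj + dj - r:cj + dj + r + 1]
                    dists.append(((blk - ref) ** 2).sum())
                    centers.append(pad[ci + di, cj + dj])
            group = np.asarray(centers)[np.argsort(dists)[:n_similar]]
            hist, edges = np.histogram(group, bins=n_bins)
            k = hist.argmax()
            out[i, j] = 0.5 * (edges[k] + edges[k + 1])  # max-probability value
    return out

# Toy 8-gray-level "range image" with additive noise.
rng = np.random.default_rng(3)
img = np.repeat(np.repeat(rng.integers(0, 8, (8, 8)).astype(float), 8, 0), 8, 1)
denoised = nlps_denoise(img + rng.normal(scale=0.5, size=img.shape))
```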

  19. Robust modeling based on optimized EEG bands for functional brain state inference.

    PubMed

    Podlipsky, Ilana; Ben-Simon, Eti; Hendler, Talma; Intrator, Nathan

    2012-01-30

    The need to infer brain states in a data-driven approach is crucial for BCI applications as well as for neuroscience research. In this work we present a novel classification framework based on a Regularized Linear Regression classifier constructed from the time-frequency decomposition of an EEG (electro-encephalography) signal. The regression is then used to derive a model of frequency distributions that identifies brain states. The process of classifier construction, preprocessing and selection of the optimal regularization parameter by means of cross-validation is presented and discussed. The framework and the feature selection technique are demonstrated on EEG data recorded from 10 healthy subjects who were requested to open and close their eyes every 30 s. This paradigm is well known to induce Alpha power modulations that vary from low power (during eyes opened) to high (during eyes closed). The classifier was trained to infer eyes-opened or eyes-closed states and achieved higher than 90% classification accuracy. Furthermore, our findings reveal interesting patterns of relations between experimental conditions, EEG frequencies, regularization parameters and classifier choice. This tool enables identification of the frequency bands that contribute most to any given brain state and of their optimal combination in inferring this state. These features allow for much greater detail than standard Fourier transform power analysis, making it an essential method for both BCI purposes and neuroimaging research.
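
    The pipeline (band-power features, a regularized linear classifier, cross-validated regularization) can be mocked up in a few lines of scikit-learn; the synthetic alpha-modulated epochs, the band edges, and the use of RidgeClassifierCV are assumptions standing in for the authors' Regularized Linear Regression setup.

```python
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import RidgeClassifierCV

rng = np.random.default_rng(4)
fs = 250  # sampling rate, Hz

def band_powers(epoch, bands=((1, 4), (4, 8), (8, 13), (13, 30))):
    """Mean spectral power of one channel in canonical EEG bands (Welch)."""
    f, pxx = welch(epoch, fs=fs, nperseg=fs)
    return np.array([pxx[(f >= lo) & (f < hi)].mean() for lo, hi in bands])

def make_epoch(alpha_amp):
    # Synthetic 2-s epoch: 10 Hz alpha of given amplitude plus noise.
    t = np.arange(2 * fs) / fs
    return alpha_amp * np.sin(2 * np.pi * 10 * t) + rng.normal(size=t.size)

# Eyes-closed epochs (label 1) carry stronger alpha than eyes-open (label 0).
X = np.array([band_powers(make_epoch(a)) for a in [0.2] * 50 + [1.5] * 50])
y = np.array([0] * 50 + [1] * 50)

# Cross-validation over the alphas grid picks the regularization strength.
clf = RidgeClassifierCV(alphas=np.logspace(-3, 3, 13)).fit(X, y)
print("training accuracy:", clf.score(X, y))
```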

  20. A New Wavelet Denoising Method for Experimental Time-Domain Signals: Pulsed Dipolar Electron Spin Resonance.

    PubMed

    Srivastava, Madhur; Georgieva, Elka R; Freed, Jack H

    2017-03-30

    We adapt a new wavelet-transform-based method of denoising experimental signals to pulse-dipolar electron-spin resonance spectroscopy (PDS). We show that signal averaging times of the time-domain signals can be reduced by as much as 2 orders of magnitude, while retaining the fidelity of the underlying signals, in comparison with noiseless reference signals. We have achieved excellent signal recovery when the initial noisy signal has an SNR ≳ 3. This approach is robust and is expected to be applicable to other time-domain spectroscopies. In PDS, these time-domain signals representing the dipolar interaction between two electron spin labels are converted into their distance distribution functions P(r), usually by regularization methods such as Tikhonov regularization. The significant improvements achieved by using denoised signals for this regularization are described. We show that they yield P(r)'s with more accurate detail and yield clearer separations of respective distances, which is especially important when the P(r)'s are complex. Also, longer distance P(r)'s, requiring longer dipolar evolution times, become accessible after denoising. In comparison to standard wavelet denoising approaches, it is clearly shown that the new method (WavPDS) is superior.
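
    The Tikhonov step that turns a (denoised) dipolar signal into P(r) reduces to a regularized linear solve. The sketch below uses a powder-averaged dipolar kernel and a second-derivative smoothness operator; the dipolar constant, the grids, the regularization parameter, and the omission of the usual non-negativity constraint are all simplifying assumptions.

```python
import numpy as np

# Powder-averaged dipolar kernel K(t, r) = <cos((3 cos^2 th - 1) D t / r^3)>.
D = 2 * np.pi * 52.04                    # rad MHz nm^3 (nominal value)
t = np.linspace(0.0, 2.0, 200)           # evolution time, microseconds
r = np.linspace(1.5, 6.0, 120)           # distance, nm
theta = np.linspace(0.0, np.pi / 2, 400)
dtheta = theta[1] - theta[0]
K = np.empty((t.size, r.size))
for j, rj in enumerate(r):
    w = D / rj**3
    K[:, j] = (np.cos(np.outer(t, w * (3 * np.cos(theta) ** 2 - 1)))
               * np.sin(theta)).sum(axis=1) * dtheta

# Synthetic distance distribution and a lightly noisy dipolar trace.
p_true = np.exp(-0.5 * ((r - 3.5) / 0.25) ** 2)
s = K @ p_true + 0.01 * np.random.default_rng(5).normal(size=t.size)

# Tikhonov: minimize ||K p - s||^2 + lam^2 ||L p||^2, L = second difference.
lam = 1.0
L = np.diff(np.eye(r.size), n=2, axis=0)
p_hat = np.linalg.solve(K.T @ K + lam**2 * (L.T @ L), K.T @ s)
```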

  1. A stacked contractive denoising auto-encoder for ECG signal denoising.

    PubMed

    Xiong, Peng; Wang, Hongrui; Liu, Ming; Lin, Feng; Hou, Zengguang; Liu, Xiuling

    2016-12-01

    As a primary diagnostic tool for cardiac diseases, electrocardiogram (ECG) signals are often contaminated by various kinds of noise, such as baseline wander, electrode contact noise and motion artifacts. In this paper, we propose a contractive denoising technique to improve the performance of current denoising auto-encoders (DAEs) for ECG signal denoising. Based on the Frobenius norm of the Jacobian matrix of the learned features with respect to the input, we develop a stacked contractive denoising auto-encoder (CDAE) to build a deep neural network (DNN) for noise reduction, which can significantly improve the expression of ECG signals through multi-level feature extraction. The proposed method is evaluated on ECG signals from the benchmark MIT-BIH Arrhythmia Database, with noise drawn from the MIT-BIH noise stress test database. The experimental results show that the new CDAE algorithm performs better than conventional ECG denoising methods, specifically with more than 2.40 dB improvement in the signal-to-noise ratio (SNR) and improvements of roughly 0.075 to 0.350 in the root mean square error (RMSE).
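
    A single contractive DAE layer is easy to sketch in PyTorch because the Frobenius norm of the encoder Jacobian has a closed form for a sigmoid layer; the layer sizes, penalty weight, and toy training data below are assumptions, and the paper stacks several such layers.

```python
import torch
import torch.nn as nn

class ContractiveDAE(nn.Module):
    """One contractive denoising auto-encoder layer (sketch)."""
    def __init__(self, n_in, n_hidden):
        super().__init__()
        self.enc = nn.Linear(n_in, n_hidden)
        self.dec = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        h = torch.sigmoid(self.enc(x))
        return self.dec(h), h

def cdae_loss(model, noisy, clean, lam=1e-4):
    recon, h = model(noisy)
    mse = ((recon - clean) ** 2).mean()
    # For a sigmoid layer, ||J||_F^2 = sum_j (h_j(1-h_j))^2 * sum_i W_ji^2.
    w_sq = (model.enc.weight ** 2).sum(dim=1)                 # (n_hidden,)
    contractive = (((h * (1 - h)) ** 2) * w_sq).sum(dim=1).mean()
    return mse + lam * contractive

# Toy training loop on synthetic quasi-periodic segments.
torch.manual_seed(0)
t = torch.linspace(0.0, 1.0, 128)
clean = torch.sin(2 * torch.pi * 5 * t).repeat(256, 1)
noisy = clean + 0.2 * torch.randn_like(clean)
model = ContractiveDAE(128, 32)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = cdae_loss(model, noisy, clean)
    loss.backward()
    opt.step()
```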

  2. [A non-local means approach for PET image denoising].

    PubMed

    Yin, Yong; Sun, Weifeng; Lu, Jie; Liu, Tonghai

    2010-04-01

    Denoising is an important issue for medical image processing. Based on an analysis of the non-local means algorithm recently reported by Buades A, et al., we propose adapting it for PET image denoising. Experimental denoising results for real clinical PET images show that the non-local means method is superior to median filtering and Wiener filtering, suppressing noise in PET images effectively while preserving structural details that are important for diagnosis.
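
    Off-the-shelf non-local means, as implemented in scikit-image, reproduces the flavor of this approach; the synthetic phantom and all filter parameters below are illustrative assumptions, not the settings used in the study.

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

# Synthetic stand-in for a noisy PET slice: a hot disk on a warm background.
rng = np.random.default_rng(6)
yy, xx = np.mgrid[:128, :128]
phantom = 0.3 + 0.7 * (((yy - 64) ** 2 + (xx - 64) ** 2) < 20 ** 2)
noisy = phantom + rng.normal(scale=0.15, size=phantom.shape)

sigma = float(np.mean(estimate_sigma(noisy)))
denoised = denoise_nl_means(noisy, h=1.15 * sigma, sigma=sigma,
                            patch_size=5, patch_distance=6, fast_mode=True)
```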

  3. Image denoising by exploring external and internal correlations.

    PubMed

    Yue, Huanjing; Sun, Xiaoyan; Yang, Jingyu; Wu, Feng

    2015-06-01

    Single image denoising suffers from limited data collection within a noisy image. In this paper, we propose a novel image denoising scheme, which explores both internal and external correlations with the help of web images. For each noisy patch, we build internal and external data cubes by finding similar patches from the noisy image and from web images, respectively. We then propose reducing noise by a two-stage strategy using different filtering approaches. In the first stage, since the noisy patch may lead to inaccurate patch selection, we propose a graph-based optimization method to improve patch-matching accuracy in external denoising. Internal denoising is performed by frequency truncation on the internal cubes. By combining the internal and external denoising patches, we obtain a preliminary denoising result. In the second stage, we propose reducing noise by filtering the external and internal cubes in the transform domain. In this stage, the preliminary denoising result not only enhances the patch-matching accuracy but also provides reliable estimates of the filtering parameters. The final denoised image is obtained by fusing the external and internal filtering results. Experimental results show that our method consistently outperforms state-of-the-art denoising schemes in both subjective and objective quality measurements, e.g., it achieves >2 dB gain compared with BM3D at a wide range of noise levels.

  4. Image denoising by adaptive Compressed Sensing reconstructions and fusions

    NASA Astrophysics Data System (ADS)

    Meiniel, William; Angelini, Elsa; Olivo-Marin, Jean-Christophe

    2015-09-01

    In this work, Compressed Sensing (CS) is investigated as a denoising tool in bioimaging. The denoising algorithm exploits multiple CS reconstructions, taking advantage of the robustness of CS in the presence of noise via regularized reconstructions and the properties of the Fourier transform of bioimages. Multiple reconstructions at low sampling rates are combined to generate high quality denoised images using several sparsity constraints. We present different combination methods for the CS reconstructions and quantitatively compare the performance of our denoising methods to state-of-the-art ones.

  5. Ostracod-inferred conductivity transfer function and its utility in palaeo-conductivity reconstruction in Tibetan Lakes

    NASA Astrophysics Data System (ADS)

    Peng, P.; Zhu, L.; Guo, Y.; Wang, J.; Fürstenberg, S.; Ju, J.; Wang, Y.; Frenzel, P.

    2016-12-01

    Ostracods have been used as sensitive monitors in palaeo-environmental change research, and ostracod transfer functions have been developed as quantitative indicators in palaeo-limnology. The many lakes scattered across the Tibetan Plateau supply sediments for analyzing environmental indices in past climate change research. This study examined samples of sub-fossil ostracods together with their habitat conditions, including water samples and water parameters, to produce a database for a transfer function based on gradient analyses. This transfer function was then used for environmental reconstruction of Tibetan lakes to review past climate changes. In our research, twelve species belonging to ten genera were documented from 114 studied samples in 34 lakes. The study revealed a specific conductivity gradient increasing gradually from L. sinensis-L. dorsotuberosa-C. xizangensis through L. dorsotuberosa-L. inopinata to L. inopinata, indicating fresh to slightly brackish, brackish, and brine water conditions, respectively. Gradient analysis revealed that specific conductivity was the most important variable driving the distribution of sub-fossil ostracods. A specific conductivity transfer function using a weighted averaging partial least squares (WA-PLS) model was set up to reconstruct palaeo-specific conductivity. The model showed a good correlation between measured and estimated specific conductivity (R2=0.67) and a relatively low root mean squared error of prediction (RMSEP=0.47). Multiple proxies, including ostracod assemblages, ostracod-inferred lake level and specific conductivity, mean grain size, and total organic and inorganic carbon of sediments from Tibetan lake cores, were used to infer the palaeo-climate history of the study area. The environmental change was probably an adaptation to the weakening of Indian monsoon activity since the mid-Holocene, as inferred from comparable climatic change records from the Tibetan Plateau and related monsoonal areas.
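
    The weighted-averaging core of such a transfer function fits in a few lines; the sketch below implements plain WA (the paper uses the WA-PLS refinement) on randomly generated abundance data, so all numbers are placeholders.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical training set: relative abundances of 12 ostracod species in
# 34 lakes, plus each lake's measured specific conductivity (mS/cm).
Y = rng.dirichlet(np.ones(12), size=34)
cond = rng.uniform(0.1, 30.0, size=34)

# Weighted-averaging optima: each species' abundance-weighted conductivity.
optima = (Y * cond[:, None]).sum(axis=0) / Y.sum(axis=0)

def wa_infer(sample_abundances):
    """Infer palaeo-conductivity of a fossil sample from its assemblage."""
    return (sample_abundances * optima).sum() / sample_abundances.sum()

print(wa_infer(Y[0]), "inferred vs.", cond[0], "measured")
```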

  6. MRI denoising using non-local means.

    PubMed

    Manjón, José V; Carbonell-Caballero, José; Lull, Juan J; García-Martí, Gracián; Martí-Bonmatí, Luís; Robles, Montserrat

    2008-08-01

    Magnetic Resonance (MR) images are affected by random noise which limits the accuracy of any quantitative measurements from the data. In the present work, a recently proposed filter for random noise removal is analyzed and adapted to reduce this noise in MR magnitude images. This parametric filter, named Non-Local Means (NLM), is highly dependent on the setting of its parameters. The aim of this paper is to find the optimal parameter selection for MR magnitude image denoising. For this purpose, experiments have been conducted to find the optimum parameters for different noise levels. In addition, the filter has been adapted to the specific characteristics of the noise in MR magnitude images (i.e., Rician noise). From the results over synthetic and real images we can conclude that this filter can be successfully used for automatic MR denoising.

  7. Adaptive Image Denoising by Mixture Adaptation.

    PubMed

    Luo, Enming; Chan, Stanley H; Nguyen, Truong Q

    2016-10-01

    We propose an adaptive learning procedure to learn patch-based image priors for image denoising. The new algorithm, called the expectation-maximization (EM) adaptation, takes a generic prior learned from a generic external database and adapts it to the noisy image to generate a specific prior. Different from existing methods that combine internal and external statistics in ad hoc ways, the proposed algorithm is rigorously derived from a Bayesian hyper-prior perspective. There are two contributions of this paper. First, we provide full derivation of the EM adaptation algorithm and demonstrate methods to improve the computational complexity. Second, in the absence of the latent clean image, we show how EM adaptation can be modified based on pre-filtering. The experimental results show that the proposed adaptation algorithm yields consistently better denoising results than the one without adaptation and is superior to several state-of-the-art algorithms.
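
    With scikit-learn's GaussianMixture, the adaptation idea can be mimicked by warm-starting a short EM run on the target image's patches from the generic model's parameters; the patch size, component count, five EM iterations, and random stand-in data are assumptions, and this approximates rather than reproduces the paper's hyper-prior derivation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(8)

# Generic prior: a GMM fit on vectorized 4x4 patches from an external
# collection (random stand-ins here).
external_patches = rng.normal(size=(5000, 16))
generic = GaussianMixture(n_components=10, covariance_type="full",
                          random_state=0).fit(external_patches)

# Adaptation: a few EM iterations on (pre-filtered) patches of the target
# image, initialized at the generic parameters.
image_patches = rng.normal(loc=0.5, size=(800, 16))
adapted = GaussianMixture(n_components=10, covariance_type="full",
                          weights_init=generic.weights_,
                          means_init=generic.means_,
                          precisions_init=generic.precisions_,
                          max_iter=5, random_state=0).fit(image_patches)
```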

  8. Postprocessing of Compressed Images via Sequential Denoising

    NASA Astrophysics Data System (ADS)

    Dar, Yehuda; Bruckstein, Alfred M.; Elad, Michael; Giryes, Raja

    2016-07-01

    In this work we propose a novel postprocessing technique for compression-artifact reduction. Our approach is based on posing this task as an inverse problem, with a regularization that leverages existing state-of-the-art image denoising algorithms. We rely on the recently proposed Plug-and-Play Prior framework, suggesting the solution of general inverse problems via the Alternating Direction Method of Multipliers (ADMM), leading to a sequence of Gaussian denoising steps. A key feature in our scheme is a linearization of the compression-decompression process, so as to get a formulation that can be optimized. In addition, we supply a thorough analysis of this linear approximation for several basic compression procedures. The proposed method is suitable for diverse compression techniques that rely on transform coding. Specifically, we demonstrate impressive gains in image quality for several leading compression methods: JPEG, JPEG2000, and HEVC.

  9. Adaptive Image Denoising by Mixture Adaptation

    NASA Astrophysics Data System (ADS)

    Luo, Enming; Chan, Stanley H.; Nguyen, Truong Q.

    2016-10-01

    We propose an adaptive learning procedure to learn patch-based image priors for image denoising. The new algorithm, called the Expectation-Maximization (EM) adaptation, takes a generic prior learned from a generic external database and adapts it to the noisy image to generate a specific prior. Different from existing methods that combine internal and external statistics in ad-hoc ways, the proposed algorithm is rigorously derived from a Bayesian hyper-prior perspective. There are two contributions of this paper: First, we provide full derivation of the EM adaptation algorithm and demonstrate methods to improve the computational complexity. Second, in the absence of the latent clean image, we show how EM adaptation can be modified based on pre-filtering. Experimental results show that the proposed adaptation algorithm yields consistently better denoising results than the one without adaptation and is superior to several state-of-the-art algorithms.

  10. Simultaneous denoising and compression of multispectral images

    NASA Astrophysics Data System (ADS)

    Hagag, Ahmed; Amin, Mohamed; Abd El-Samie, Fathi E.

    2013-01-01

    A new technique for denoising and compression of multispectral satellite images to remove the effect of noise on the compression process is presented. One type of multispectral images has been considered: Landsat Enhanced Thematic Mapper Plus. The discrete wavelet transform (DWT), the dual-tree DWT, and a simple Huffman coder are used in the compression process. Simulation results show that the proposed technique is more effective than other traditional compression-only techniques.

  11. Bayesian MRI denoising in complex domain.

    PubMed

    Baselice, Fabio; Ferraioli, Giampaolo; Pascazio, Vito; Sorriso, Antonietta

    2017-05-01

    In recent years, several efforts have been made to produce Magnetic Resonance Imaging scanners with higher magnetic field strengths, mainly to increase the Signal to Noise Ratio and the Contrast to Noise Ratio of the acquired images. However, denoising methodologies still play an important role in achieving clean images. Several denoising algorithms have been presented in the literature. Some of them exploit the statistical characteristics of the involved noise, some others project the image into a transformed domain, some others look for geometrical properties of the image. However, the common denominator is that they all work in the amplitude domain, i.e. on the gray-scale, real-valued image. Within this manuscript we propose the idea of performing the noise filtering in the complex domain, i.e. on the real and imaginary parts of the acquired images. The advantage of the proposed methodology is that the statistical model of the involved signals is greatly simplified and no approximations are required, together with the full exploitation of the whole acquired signal. More specifically, a Maximum A Posteriori estimator developed for handling complex data, which adopts Markov Random Fields for modeling the images, is proposed. First results and comparisons with other widely adopted denoising filters confirm the validity of the method. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  12. Electrocardiograph signal denoising based on sparse decomposition.

    PubMed

    Zhu, Junjiang; Li, Xiaolu

    2017-08-01

    Noise in ECG signals will affect the results of post-processing if left untreated. Since ECG signals are highly subject-specific, a linear denoising method with a fixed threshold that works well on one subject can fail on another. Therefore, in this Letter, a sparsity-based method, which represents every signal segment using a different linear combination of atoms from a dictionary, is used to denoise ECG signals, with particular attention to the myoelectric interference present in them. First, a denoising model for ECG signals is constructed. Then the model is solved by a matching pursuit algorithm. To obtain better results, four kinds of dictionaries are investigated on ECG signals from the MIT-BIH arrhythmia database and compared with a wavelet transform (WT)-based method. The signal-to-noise ratio (SNR) and the mean square error (MSE) between the estimated and original signals are used as indicators to evaluate the performance. The results show that with the present method the SNR is higher while the MSE between the estimated and original signals is smaller.
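
    The sparse-approximation step can be sketched with an overcomplete DCT dictionary and scikit-learn's orthogonal matching pursuit (a readily available relative of the matching pursuit used in the Letter); the segment length, dictionary size, sparsity level, and synthetic "beat" are all assumptions.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

# Overcomplete DCT dictionary for length-64 ECG segments.
n, n_atoms = 64, 128
D = np.cos(np.pi * np.arange(n)[:, None] * np.arange(n_atoms)[None, :] / n_atoms)
D /= np.linalg.norm(D, axis=0)

# Hypothetical noisy segment: a smooth "beat" plus broadband interference.
rng = np.random.default_rng(9)
clean = np.exp(-0.5 * ((np.arange(n) - 32) / 3.0) ** 2)
noisy = clean + 0.15 * rng.normal(size=n)

# Sparse coding: a few atoms capture the beat, leaving most noise behind.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=8,
                                fit_intercept=False).fit(D, noisy)
denoised = D @ omp.coef_
```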

  13. A novel strategy for signal denoising using reweighted SVD and its applications to weak fault feature enhancement of rotating machinery

    NASA Astrophysics Data System (ADS)

    Zhao, Ming; Jia, Xiaodong

    2017-09-01

    Singular value decomposition (SVD), as an effective signal denoising tool, has been attracting considerable attention in recent years. The basic idea behind SVD denoising is to preserve the singular components (SCs) with significant singular values. However, it is shown that the singular values mainly reflect the energy of decomposed SCs, therefore traditional SVD denoising approaches are essentially energy-based, which tend to highlight the high-energy regular components in the measured signal, while ignoring the weak feature caused by early fault. To overcome this issue, a reweighted singular value decomposition (RSVD) strategy is proposed for signal denoising and weak feature enhancement. In this work, a novel information index called periodic modulation intensity is introduced to quantify the diagnostic information in a mechanical signal. With this index, the decomposed SCs can be evaluated and sorted according to their information levels, rather than energy. Based on that, a truncated linear weighting function is proposed to control the contribution of each SC in the reconstruction of the denoised signal. In this way, some weak but informative SCs could be highlighted effectively. The advantages of RSVD over traditional approaches are demonstrated by both simulated signals and real vibration/acoustic data from a two-stage gearbox as well as train bearings. The results demonstrate that the proposed method can successfully extract the weak fault feature even in the presence of heavy noise and ambient interferences.
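
    A compact numerical sketch of the reweighting idea: embed the signal in a Hankel matrix, score each singular component, and reconstruct with truncated linear weights. The kurtosis score used below is a crude stand-in for the paper's periodic modulation intensity index, and the weighting rule is likewise an assumption.

```python
import numpy as np

def dehankel(H):
    """Average the anti-diagonals of a Hankel-structured matrix back to 1-D."""
    n_rows, n_cols = H.shape
    out = np.zeros(n_rows + n_cols - 1)
    cnt = np.zeros_like(out)
    for i in range(n_rows):
        out[i:i + n_cols] += H[i]
        cnt[i:i + n_cols] += 1
    return out / cnt

def rsvd_denoise(x, n_rows=64):
    n_cols = x.size - n_rows + 1
    H = np.lib.stride_tricks.sliding_window_view(x, n_cols)  # Hankel matrix
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    # Score each singular component by the kurtosis of its waveform --
    # impulsive (informative) fault components are heavy-tailed.
    scores = []
    for i in range(s.size):
        sig = dehankel(s[i] * np.outer(U[:, i], Vt[i]))
        z = sig - sig.mean()
        scores.append((z ** 4).mean() / (z.var() ** 2 + 1e-12))
    scores = np.asarray(scores)
    # Truncated linear weighting: only components above the median contribute.
    w = np.clip((scores - np.median(scores)) / (np.ptp(scores) + 1e-12), 0, 1)
    return dehankel((U * (w * s)) @ Vt)

rng = np.random.default_rng(10)
t = np.arange(2048) / 2048.0
impulses = np.sin(2 * np.pi * 300 * t) * (np.sin(2 * np.pi * 20 * t) > 0.95)
x = impulses + 0.5 * np.sin(2 * np.pi * 50 * t) + 0.3 * rng.normal(size=t.size)
x_denoised = rsvd_denoise(x)
```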

  14. Application of the discrete torus wavelet transform to the denoising of magnetic resonance images of uterine and ovarian masses

    NASA Astrophysics Data System (ADS)

    Sarty, Gordon E.; Atkins, M. Stella; Olatunbosun, Femi; Chizen, Donna; Loewy, John; Kendall, Edward J.; Pierson, Roger A.

    1999-10-01

    A new numerical wavelet transform, the discrete torus wavelet transform, is described and an application is given to the denoising of abdominal magnetic resonance imaging (MRI) data. The discrete torus wavelet transform is an undecimated wavelet transform which is computed using a discrete Fourier transform and multiplication instead of by direct convolution in the image domain. This approach leads to a decomposition of the image onto frames in the space of square summable functions on the discrete torus, l2(T2). The new transform was compared to the traditional decimated wavelet transform in its ability to denoise MRI data. By using denoised images as the basis for the computation of a nuclear magnetic resonance spin-spin relaxation-time map through least squares curve fitting, an error map was generated that was used to assess the performance of the denoising algorithms. The discrete torus wavelet transform outperformed the traditional wavelet transform in 88% of the T2 error map denoising tests with phantoms and gynecologic MRI images.

  15. Mobile sensing of point-source fugitive methane emissions using Bayesian inference: the determination of the likelihood function

    NASA Astrophysics Data System (ADS)

    Zhou, X.; Albertson, J. D.

    2016-12-01

    Natural gas is considered a bridge fuel towards clean energy due to its potentially lower greenhouse gas emissions compared with other fossil fuels. Despite numerous efforts, an efficient and cost-effective approach to monitor fugitive methane emissions along the natural gas production-supply chain has not yet been developed. Recently, mobile methane measurement has been introduced, which applies a Bayesian approach to probabilistically infer methane emission rates and update estimates recursively when new measurements become available. However, the likelihood function, especially the error term which determines the shape of the estimate uncertainty, has not been rigorously defined and evaluated with field data. To address this issue, we performed a series of near-source (< 30 m) controlled methane release experiments using a specialized vehicle mounted with fast-response methane analyzers and a GPS unit. Methane concentrations were measured at two different heights along mobile traversals downwind of the sources, and concurrent wind and temperature data were recorded by nearby 3-D sonic anemometers. With known methane release rates, the measurements were used to determine the functional form and the parameterization of the likelihood function in the Bayesian inference scheme under different meteorological conditions.
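
    A toy version of the recursive Bayesian update makes the role of the likelihood's error term concrete; the exponential "plume" model, the grid prior, and the Gaussian error with fixed sigma are all placeholder assumptions, and calibrating that sigma is exactly what the field experiments target.

```python
import numpy as np

def plume_concentration(q, dist):
    """Toy dispersion model: concentration proportional to rate q, decaying
    with downwind distance (placeholder for a real Gaussian plume model)."""
    return q * np.exp(-dist / 50.0)

# Flat prior over candidate emission rates (g/s) on a grid.
q_grid = np.linspace(0.0, 2.0, 401)
dq = q_grid[1] - q_grid[0]
log_post = np.zeros_like(q_grid)

# Simulated traversal measurements at several downwind distances.
rng = np.random.default_rng(11)
dists = np.array([10.0, 15.0, 20.0, 25.0])
true_q, sigma = 0.8, 0.05
obs = plume_concentration(true_q, dists) + sigma * rng.normal(size=dists.size)

# Recursive update: fold in one measurement at a time with a Gaussian
# likelihood whose width sigma is the error term discussed in the abstract.
for d, c in zip(dists, obs):
    log_post += -0.5 * ((c - plume_concentration(q_grid, d)) / sigma) ** 2
post = np.exp(log_post - log_post.max())
post /= post.sum() * dq
print("posterior mean rate:", (q_grid * post).sum() * dq)
```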

  16. Sublexical Inferences in Beginning Reading: Medial Vowel Digraphs as Functional Units of Transfer.

    PubMed

    Savage; Stuart

    1998-05-01

    Two experiments evaluated young children's use of lexical inference. Experiment 1 compared transfer from shared rimes (e.g., "beak"-"peak"), or heads (e.g., "beak"-"bean"), under three conditions: (a) when both clue word pronunciation and orthography were present at transfer; (b) when only the pronunciation of the clue word was given; and (c) when the clue was pretaught. Equivalent transfer occurred in both conditions (a and b) where clue word pronunciations were provided at transfer, but no transfer was found when the clue word was pretaught (condition c). Experiment 2 investigated transfer from three pretaught clue words sharing rimes (e.g., "leak"-"peak"), or vowel digraphs (e.g., "leak"-"bean"). Children demonstrated lexical transfer under these conditions, but improvements were equivalent for vowel and rime analogous words. Results are interpreted in terms of models of vowel transfer. Copyright 1998 Academic Press.

  17. Spectral data de-noising using semi-classical signal analysis: application to localized MRS.

    PubMed

    Laleg-Kirati, Taous-Meriem; Zhang, Jiayu; Achten, Eric; Serrai, Hacene

    2016-10-01

    In this paper, we propose a new post-processing technique called semi-classical signal analysis (SCSA) for MRS data de-noising. Similar to the Fourier transform, SCSA decomposes the input real positive MR spectrum, in this case into a linear combination of squared eigenfunctions: localized functions whose shape derives from the potential function of the Schrödinger operator. In this manner, the MRS spectral peaks, represented as a sum of these shaped functions, are efficiently separated from noise and accurately analyzed. The performance of the method is tested by analyzing simulated and real MRS data. The results obtained demonstrate that the SCSA method is highly efficient in localized MRS data de-noising and allows for accurate data quantification.
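
    A minimal numerical sketch of the SCSA reconstruction as described in the SCSA literature: discretize the Schrödinger operator -h² d²/dx² - y, keep the negative eigenvalues, and rebuild the signal as y_h = 4h Σ √(-λ_n) ψ_n². The grid, the finite-difference discretization, and the choice h = 0.3 are illustrative assumptions.

        import numpy as np

        def scsa(y, h, dx=1.0):
            # Reconstruct a real positive signal from the squared eigenfunctions
            # of the Schrodinger operator -h^2 d2/dx2 - y (bound states only).
            n = len(y)
            lap = (np.diag(-2.0 * np.ones(n)) +
                   np.diag(np.ones(n - 1), 1) +
                   np.diag(np.ones(n - 1), -1)) / dx ** 2
            H = -h ** 2 * lap - np.diag(y)
            lam, v = np.linalg.eigh(H)
            psi2 = (v / np.sqrt(dx)) ** 2      # L2-normalized eigenfunctions, squared
            neg = lam < 0
            return 4.0 * h * (np.sqrt(-lam[neg]) * psi2[:, neg]).sum(axis=1)

        # Larger h keeps only the dominant peaks (stronger de-noising); smaller h
        # retains more eigenfunctions and hence more detail and more noise.
        x = np.linspace(-5.0, 5.0, 256)
        spectrum = np.exp(-x ** 2) + 0.05 * np.random.rand(256)
        denoised = scsa(spectrum, h=0.3, dx=x[1] - x[0])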

  18. Adaptive denoising for simplified signal-dependent random noise model in optoelectronic detector

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Wang, Weiping; Wang, Guangyi; Xu, Jiangtao

    2017-05-01

    Existing denoising algorithms based on a simplified signal-dependent noise model are valid only under the assumption of predefined parameters; consequently, these methods fail if the predefined conditions are not satisfied. An adaptive method for eliminating random noise under the simplified signal-dependent noise model is presented in this paper. A linear mapping function between multiplicative noise and noiseless image data is established using the Maclaurin formula. By demonstrating the cross-correlation between random variables and functions of independent random variables, the mapping function between the variances of multiplicative noise and noiseless image data is obtained. Accordingly, an adaptive denoising model for simplified signal-dependent noise in the wavelet domain is built. The experimental results confirm that the proposed method outperforms conventional ones.

  19. Remote sensing image denoising by using discrete multiwavelet transform techniques

    NASA Astrophysics Data System (ADS)

    Wang, Haihui; Wang, Jun; Zhang, Jian

    2006-01-01

    In this paper, we present a new image denoising method based on the GHM discrete multiwavelet transform. Developments in wavelet theory have given rise to wavelet thresholding, a popular method for extracting a signal from noisy data. Multiwavelets have recently been introduced; they offer simultaneous orthogonality, symmetry and short support, properties that make them well suited to various image processing applications, especially denoising. The method is based on thresholding the multiwavelet coefficients obtained from preprocessing followed by the discrete multiwavelet transform, and it takes into account the covariance structure of the transform. The form of the threshold is carefully formulated and is the key to the excellent results obtained in extensive numerical simulations of image denoising. We apply the multiwavelet-based scheme to remote sensing image denoising. The multiwavelet transform is a relatively new technique, and it has the important advantage over other techniques of distorting the spectral characteristics of the denoised image less. The experimental results show that multiwavelet-based image denoising schemes outperform wavelet-based methods both subjectively and objectively.

  20. True 4D Image Denoising on the GPU

    PubMed Central

    Eklund, Anders; Andersson, Mats; Knutsson, Hans

    2011-01-01

    The use of image denoising techniques is an important part of many medical imaging applications. One common application is to improve the image quality of low-dose (noisy) computed tomography (CT) data. While 3D image denoising previously has been applied to several volumes independently, there has not been much work done on true 4D image denoising, where the algorithm considers several volumes at the same time. The problem with 4D image denoising, compared to 2D and 3D denoising, is that the computational complexity increases exponentially. In this paper we describe a novel algorithm for true 4D image denoising, based on local adaptive filtering, and how to implement it on the graphics processing unit (GPU). The algorithm was applied to a 4D CT heart dataset of the resolution 512 × 512 × 445 × 20. The result is that the GPU can complete the denoising in about 25 minutes if spatial filtering is used and in about 8 minutes if FFT-based filtering is used. The CPU implementation requires several days of processing time for spatial filtering and about 50 minutes for FFT-based filtering. The short processing time increases the clinical value of true 4D image denoising significantly. PMID:21977020

  1. True 4D Image Denoising on the GPU.

    PubMed

    Eklund, Anders; Andersson, Mats; Knutsson, Hans

    2011-01-01

    The use of image denoising techniques is an important part of many medical imaging applications. One common application is to improve the image quality of low-dose (noisy) computed tomography (CT) data. While 3D image denoising previously has been applied to several volumes independently, there has not been much work done on true 4D image denoising, where the algorithm considers several volumes at the same time. The problem with 4D image denoising, compared to 2D and 3D denoising, is that the computational complexity increases exponentially. In this paper we describe a novel algorithm for true 4D image denoising, based on local adaptive filtering, and how to implement it on the graphics processing unit (GPU). The algorithm was applied to a 4D CT heart dataset of the resolution 512 × 512 × 445 × 20. The result is that the GPU can complete the denoising in about 25 minutes if spatial filtering is used and in about 8 minutes if FFT-based filtering is used. The CPU implementation requires several days of processing time for spatial filtering and about 50 minutes for FFT-based filtering. The short processing time increases the clinical value of true 4D image denoising significantly.

  2. Image denoising using principal component analysis in the wavelet domain

    NASA Astrophysics Data System (ADS)

    Bacchelli, Silvia; Papi, Serena

    2006-05-01

    In this work we describe a method for removing Gaussian noise from digital images, based on the combination of the wavelet packet transform and the principal component analysis. In particular, since the aim of denoising is to retain the energy of the signal while discarding the energy of the noise, our basic idea is to construct powerful tailored filters by applying the Karhunen-Loeve transform in the wavelet packet domain, thus obtaining a compaction of the signal energy into a few principal components, while the noise is spread over all the transformed coefficients. This allows us to act with a suitable shrinkage function on these new coefficients, removing the noise without blurring the edges and the important characteristics of the images. The results of a large numerical experimentation encourage us to keep going in this direction with our studies.
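
    A compact sketch of the underlying idea, assuming pywt is available: transform the image, run PCA (the Karhunen-Loeve transform) over coefficient patches of each detail subband, keep only the leading components where the signal energy compacts, and reconstruct. A plain 2-D wavelet decomposition stands in for the paper's wavelet packet transform, and `n_keep` and the patch size are illustrative.

        import numpy as np
        import pywt

        def pca_shrink(band, n_keep=4, patch=4):
            # PCA over non-overlapping coefficient patches: zero the trailing
            # components, where (by the energy-compaction argument) mostly noise lives.
            h, w = band.shape
            h2, w2 = h - h % patch, w - w % patch
            X = (band[:h2, :w2].reshape(h2 // patch, patch, w2 // patch, patch)
                 .swapaxes(1, 2).reshape(-1, patch * patch))
            mu = X.mean(axis=0)
            U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
            s[n_keep:] = 0.0
            Xd = (U * s) @ Vt + mu
            out = band.copy()
            out[:h2, :w2] = (Xd.reshape(h2 // patch, w2 // patch, patch, patch)
                             .swapaxes(1, 2).reshape(h2, w2))
            return out

        def denoise(img, wavelet='db4', level=2):
            coeffs = pywt.wavedec2(img, wavelet, level=level)
            coeffs = [coeffs[0]] + [tuple(pca_shrink(b) for b in d) for d in coeffs[1:]]
            return pywt.waverec2(coeffs, wavelet)

        out = denoise(np.random.rand(128, 128))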

  3. Inferring the Early Evolution of Translation: Ancestral Reconstruction, Compositional Analysis, and Functional Specificity

    NASA Astrophysics Data System (ADS)

    Fournier, G. P.; Gogarten, J. P.

    2010-04-01

    Using ancestral sequence reconstruction and compositional analysis, it is possible to reconstruct the ancestral functions of many enzymes involved in protein synthesis, elucidating the early functional evolution of the translation machinery and genetic code.

  4. Combining interior and exterior characteristics for remote sensing image denoising

    NASA Astrophysics Data System (ADS)

    Peng, Ni; Sun, Shujin; Wang, Runsheng; Zhong, Ping

    2016-04-01

    Remote sensing image denoising faces many challenges, since a remote sensing image usually covers a wide area and thus contains complex contents. Using patch-based statistical characteristics is a flexible way to improve denoising performance. There are usually two kinds of statistical characteristics available: interior and exterior. Different statistical characteristics have their own strengths in restoring specific image contents, and combining them may therefore improve denoising results. This work proposes a combination method that adaptively selects statistical characteristics for different image contents. The proposed approach is implemented through a new characteristics-selection criterion learned over training data. Moreover, with the proposed combination method, this work develops a denoising algorithm for remote sensing images. Experimental results show that our method makes full use of the advantages of interior and exterior characteristics for different image contents and thus improves denoising performance.

  5. Improved Denoising via Poisson Mixture Modeling of Image Sensor Noise.

    PubMed

    Zhang, Jiachao; Hirakawa, Keigo

    2017-04-01

    This paper describes a study aimed at comparing the real image sensor noise distribution to the models of noise often assumed in image denoising designs. A quantile analysis in the pixel, wavelet transform, and variance stabilization domains reveals that the tails of the Poisson, signal-dependent Gaussian, and Poisson-Gaussian models are too short to capture real sensor noise behavior. A new Poisson mixture noise model is proposed to correct the mismatch in tail behavior. Based on the fact that noise model mismatch results in image denoising that undersmooths real sensor data, we propose a mixture-of-Poisson denoising method to remove the denoising artifacts without affecting image details, such as edges and textures. Experiments with real sensor data verify that denoising of real image sensor data is indeed improved by this new technique.
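
    The tail mismatch the paper measures is easy to reproduce numerically: a two-component Poisson mixture with the same mean as a single Poisson has visibly heavier upper quantiles. A small illustration (the rates and mixing weight are arbitrary):

        import numpy as np

        rng = np.random.default_rng(0)
        n = 200_000
        single = rng.poisson(60.0, n)                      # classical model, mean 60
        comp = rng.random(n) < 0.9                         # mixture with the same mean:
        mixture = np.where(comp, rng.poisson(50.0, n),     # 0.9 * 50 + 0.1 * 150 = 60
                           rng.poisson(150.0, n))
        for p in (0.99, 0.999, 0.9999):
            print(p, np.quantile(single, p), np.quantile(mixture, p))
        # The mixture's upper quantiles sit far above the single-Poisson ones;
        # a denoiser tuned to the short-tailed model under-smooths such data.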

  6. Patch-based near-optimal image denoising.

    PubMed

    Chatterjee, Priyam; Milanfar, Peyman

    2012-04-01

    In this paper, we propose a denoising method motivated by our previous analysis of the performance bounds for image denoising. Insights from that study are used here to derive a high-performance practical denoising algorithm. We propose a patch-based Wiener filter that exploits patch redundancy for image denoising. Our framework uses both geometrically and photometrically similar patches to estimate the different filter parameters. We describe how these parameters can be accurately estimated directly from the input noisy image. Our denoising approach, designed for near-optimal performance (in the mean-squared error sense), has a sound statistical foundation that is analyzed in detail. The performance of our approach is experimentally verified on a variety of images and noise levels. The results presented here demonstrate that our proposed method is on par or exceeding the current state of the art, both visually and quantitatively.
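
    The central computation can be sketched as empirical Wiener (LMMSE) shrinkage over a group of similar patches; the local-window matcher below is a simplified stand-in for the paper's geometric/photometric clustering, and all parameter values are illustrative.

        import numpy as np

        def wiener_group(patches, sigma_n):
            # patches: (k, p, p) stack of similar patches. Shrink toward the group
            # mean; signal variance is estimated by subtracting the noise variance.
            mu = patches.mean(axis=0)
            var_y = patches.var(axis=0)
            gain = np.maximum(var_y - sigma_n ** 2, 0.0) / np.maximum(var_y, 1e-12)
            return mu + gain * (patches - mu)

        def find_similar(img, y0, x0, p=8, search=10, k=16):
            # Photometric matching: k nearest patches in L2 distance inside a
            # local search window around the reference patch.
            ref = img[y0:y0 + p, x0:x0 + p]
            cands = []
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = y0 + dy, x0 + dx
                    if 0 <= y <= img.shape[0] - p and 0 <= x <= img.shape[1] - p:
                        q = img[y:y + p, x:x + p]
                        cands.append((np.sum((q - ref) ** 2), q))
            cands.sort(key=lambda t: t[0])
            return np.stack([q for _, q in cands[:k]])

        img = np.random.rand(64, 64)
        group = find_similar(img, 20, 20)
        filtered = wiener_group(group, sigma_n=0.1)   # filtered[0] estimates the reference patch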

  7. Dual-domain denoising in three dimensional magnetic resonance imaging.

    PubMed

    Peng, Jing; Zhou, Jiliu; Wu, Xi

    2016-08-01

    Denoising is a crucial preprocessing procedure for three-dimensional magnetic resonance imaging (3D MRI). Existing denoising methods are predominantly implemented in a single domain, ignoring information in other domains; moreover, denoising methods are becoming increasingly complex, making analysis and implementation challenging. The present study aimed to develop a dual-domain image denoising (DDID) algorithm for 3D MRI that encapsulates information from the spatial and transform domains. The DDID method was used to distinguish signal from noise in the spatial and frequency domains, after which robust, accurate noise estimation was introduced for iterative filtering, which is simple and computationally advantageous. In addition, the proposed method was compared quantitatively and qualitatively with existing methods on synthetic and in vivo MRI datasets. The results suggest that the novel DDID algorithm performs well and provides competitive results compared with existing MRI denoising filters.

  8. A connection between score matching and denoising autoencoders.

    PubMed

    Vincent, Pascal

    2011-07-01

    Denoising autoencoders have been previously shown to be competitive alternatives to restricted Boltzmann machines for unsupervised pretraining of each layer of a deep architecture. We show that a simple denoising autoencoder training criterion is equivalent to matching the score (with respect to the data) of a specific energy-based model to that of a nonparametric Parzen density estimator of the data. This yields several useful insights. It defines a proper probabilistic model for the denoising autoencoder technique, which makes it possible in principle to sample from the model or to rank examples by their energy. It suggests a different way to apply score matching that is related to learning to denoise and does not require computing second derivatives. It justifies the use of tied weights between the encoder and decoder and suggests ways to extend the success of denoising autoencoders to a larger family of energy-based models.
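
    Vincent's equivalence can be checked numerically: train a one-hidden-layer denoising autoencoder with tied weights on Gaussian-corrupted data, and (r(x) - x)/σ² approximates the score of the σ-smoothed data density. A self-contained numpy sketch on N(0,1) data, where that smoothed score is -x/(1+σ²); the network size, learning rate, and iteration count are arbitrary.

        import numpy as np

        rng = np.random.default_rng(1)
        data = rng.normal(0.0, 1.0, (10_000, 1))       # p(x) = N(0,1)
        sigma = 0.5                                    # corruption level

        # Tied-weight autoencoder: r(x) = W^T tanh(W x + b) + c
        W = rng.normal(0.0, 0.1, (32, 1)); b = np.zeros(32); c = np.zeros(1)
        lr = 1e-3
        for step in range(20_000):
            x = data[rng.integers(0, len(data), 64)]
            xt = x + sigma * rng.normal(size=x.shape)  # corrupt the input
            h = np.tanh(xt @ W.T + b)
            r = h @ W + c                              # reconstruction
            g = 2.0 * (r - x) / len(x)                 # d(mse)/dr
            da = (g @ W.T) * (1.0 - h ** 2)            # backprop through tanh
            W -= lr * (h.T @ g + da.T @ xt)            # gradient hits W twice (tied)
            b -= lr * da.sum(axis=0)
            c -= lr * g.sum(axis=0)

        xs = np.linspace(-2.0, 2.0, 5).reshape(-1, 1)
        score = (np.tanh(xs @ W.T + b) @ W + c - xs) / sigma ** 2
        print(np.hstack([xs, score, -xs / (1 + sigma ** 2)]))  # estimate vs. smoothed score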

  9. Denoising portal images by means of wavelet techniques

    NASA Astrophysics Data System (ADS)

    Gonzalez Lopez, Antonio Francisco

    Portal images are used in radiotherapy for the verification of patient positioning. The distinguishing feature of this image type lies in its formation process: the same beam used for patient treatment is used for image formation. The high energy of the photons used in radiotherapy strongly limits the quality of portal images: low contrast between tissues, low spatial resolution and low signal-to-noise ratio. This Thesis studies the enhancement of these images, in particular the denoising of portal images. The statistical properties of portal images and noise are studied: power spectra, statistical dependencies between image and noise, and marginal, joint and conditional distributions in the wavelet domain. Later, various denoising methods are applied to noisy portal images. Methods operating in the wavelet domain are the basis of this Thesis. In addition, the Wiener filter and the non-local means (NLM) filter, operating in the image domain, are used as a reference. Other topics studied in this Thesis are spatial resolution, wavelet processing and image processing in dosimetry in radiotherapy. In this regard, the spatial resolution of portal imaging systems is studied; a new method for determining the spatial resolution of the imaging equipment in digital radiology is presented; the calculation of the power spectrum in the wavelet domain is studied; reducing uncertainty in film dosimetry is investigated; a method for the dosimetry of small radiation fields with radiochromic film is presented; the optimal signal resolution, as a function of the noise level and the quantization step, is determined for the digitization process of films; and the useful optical density range is set, as a function of the required uncertainty level, for a densitometric system. Marginal distributions of portal images are similar to those of natural images. This also applies to the statistical relationships between wavelet coefficients, intra-band and inter-band. These facts result in a better

  10. Inference of functional properties from large-scale analysis of enzyme superfamilies.

    PubMed

    Brown, Shoshana D; Babbitt, Patricia C

    2012-01-02

    As increasingly large amounts of data from genome and other sequencing projects become available, new approaches are needed to determine the functions of the proteins these genes encode. We show how large-scale computational analysis can help to address this challenge by linking functional information to sequence and structural similarities using protein similarity networks. Network analyses using three functionally diverse enzyme superfamilies illustrate the use of these approaches for facile updating and comparison of available structures for a large superfamily, for creation of functional hypotheses for metagenomic sequences, and to summarize the limits of our functional knowledge about even well studied superfamilies.

  11. Effect of taxonomic resolution on ecological and palaeoecological inference - a test using testate amoeba water table depth transfer functions

    NASA Astrophysics Data System (ADS)

    Mitchell, Edward A. D.; Lamentowicz, Mariusz; Payne, Richard J.; Mazei, Yuri

    2014-05-01

    Sound taxonomy is a major requirement for quantitative environmental reconstruction using biological data. Transfer function performance should theoretically be expected to decrease with reduced taxonomic resolution. However, for many groups of organisms taxonomy is imperfect and species-level identification is not always possible. We conducted numerical experiments on five testate amoeba water-table depth (DWT) transfer function data sets. We sequentially reduced the number of taxonomic groups by successively merging morphologically similar species and removing inconspicuous species. We then assessed how these changes affected model performance and palaeoenvironmental reconstruction using two fossil data sets. Model performance decreased with decreasing taxonomic resolution, but this had only limited effects on patterns of inferred DWT, at least for detecting major dry/wet shifts. Higher-resolution taxonomy may nevertheless still be useful for detecting more subtle changes, or for reconstructed shifts to be significant.

  12. Computational approaches to spatial orientation: from transfer functions to dynamic Bayesian inference.

    PubMed

    MacNeilage, Paul R; Ganesan, Narayan; Angelaki, Dora E

    2008-12-01

    Spatial orientation is the sense of body orientation and self-motion relative to the stationary environment, fundamental to normal waking behavior and control of everyday motor actions including eye movements, postural control, and locomotion. The brain achieves spatial orientation by integrating visual, vestibular, and somatosensory signals. Over the past years, considerable progress has been made toward understanding how these signals are processed by the brain using multiple computational approaches that include frequency domain analysis, the concept of internal models, observer theory, Bayesian theory, and Kalman filtering. Here we put these approaches in context by examining the specific questions that can be addressed by each technique and some of the scientific insights that have resulted. We conclude with a recent application of particle filtering, a probabilistic simulation technique that aims to generate the most likely state estimates by incorporating internal models of sensor dynamics and physical laws and noise associated with sensory processing as well as prior knowledge or experience. In this framework, priors for low angular velocity and linear acceleration can explain the phenomena of velocity storage and frequency segregation, both of which have been modeled previously using arbitrary low-pass filtering. How Kalman and particle filters may be implemented by the brain is an emerging field. Unlike past neurophysiological research that has aimed to characterize mean responses of single neurons, investigations of dynamic Bayesian inference should attempt to characterize population activities that constitute probabilistic representations of sensory and prior information.
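
    A toy particle filter showing how a low-angular-velocity prior enters the scheme described above: the internal (process) model pulls state samples toward zero, measurements reweight them, and systematic resampling keeps the set focused. The one-dimensional state, dynamics, and noise levels are all illustrative.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 2000
        omega = rng.normal(0.0, 1.0, n)   # particles over angular velocity

        def step(omega, z, sens_sigma=0.4, prior_decay=0.97, proc_sigma=0.05):
            # Process model with a low-angular-velocity prior: relax toward zero.
            omega = prior_decay * omega + proc_sigma * rng.normal(size=omega.shape)
            # Reweight by the likelihood of the noisy canal-like measurement z.
            w = np.exp(-0.5 * ((z - omega) / sens_sigma) ** 2)
            w /= w.sum()
            # Systematic resampling.
            idx = np.searchsorted(np.cumsum(w), (rng.random() + np.arange(n)) / n)
            return omega[idx]

        true_omega = 1.0
        for t in range(50):
            omega = step(omega, true_omega + 0.4 * rng.normal())
        print("estimate:", omega.mean(), "(pulled slightly toward 0 by the prior)")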

  13. Tectonomagmatic origin of Precambrian rocks of Mexico and Argentina inferred from multi-dimensional discriminant-function based discrimination diagrams

    NASA Astrophysics Data System (ADS)

    Pandarinath, Kailasa

    2014-12-01

    Several new multi-dimensional tectonomagmatic discrimination diagrams, employing log-ratio variables of chemical elements and probability-based procedures, have been developed during the last 10 years for basic-ultrabasic, intermediate and acid igneous rocks. Numerous studies extensively evaluating these newly developed diagrams have indicated their successful application in determining the original tectonic setting of younger and older, as well as sea-water and hydrothermally altered, volcanic rocks. In the present study, these diagrams were applied to Precambrian rocks of Mexico (southern and north-eastern) and Argentina. The study indicated the original tectonic setting of Precambrian rocks from the Oaxaca Complex of southern Mexico as follows: (1) a dominant rift (within-plate) setting for rocks of 1117-988 Ma age; (2) a dominant rift and less-dominant arc setting for rocks of 1157-1130 Ma age; and (3) a combined tectonic setting of collision and rift for the Etla Granitoid Pluton (917 Ma age). The diagrams indicated the original tectonic setting of the Precambrian rocks from north-eastern Mexico as: (1) a dominant arc tectonic setting for the rocks of 988 Ma age; and (2) an arc and collision setting for the rocks of 1200-1157 Ma age. Similarly, the diagrams indicated the dominant original tectonic setting for the Precambrian rocks from Argentina as: (1) a within-plate (continental rift-ocean island) and continental rift (CR) setting for the rocks of 800 Ma and 845 Ma age, respectively; and (2) an arc setting for the rocks of 1174-1169 Ma and of 1212-1188 Ma age. The inferred tectonic settings for these Precambrian rocks are, in general, in accordance with the tectonic settings reported in the literature, though some of the diagrams yield inconsistent inferences. The present study confirms the importance of these newly developed discriminant-function based diagrams in inferring the original tectonic setting of

  14. Adaptive anatomical preservation optimal denoising for radiation therapy daily MRI.

    PubMed

    Maitree, Rapeepan; Perez-Carrillo, Gloria J Guzman; Shimony, Joshua S; Gach, H Michael; Chundury, Anupama; Roach, Michael; Li, H Harold; Yang, Deshan

    2017-07-01

    Low-field magnetic resonance imaging (MRI) has recently been integrated with radiation therapy systems to provide image guidance for daily cancer radiation treatments. The main benefit of the low-field strength is minimal electron return effects. The main disadvantage of low-field strength is increased image noise compared to diagnostic MRIs conducted at 1.5 T or higher. The increased image noise affects both the discernibility of soft tissues and the accuracy of further image processing tasks for both clinical and research applications, such as tumor tracking, feature analysis, image segmentation, and image registration. An innovative method, adaptive anatomical preservation optimal denoising (AAPOD), was developed for optimal image denoising, i.e., to maximally reduce noise while preserving the tissue boundaries. AAPOD employs a series of adaptive nonlocal mean (ANLM) denoising trials with increasing denoising filter strength (i.e., the block similarity filtering parameter in the ANLM algorithm), and then detects the tissue boundary losses on the differences of sequentially denoised images using a zero-crossing edge detection method. The optimal denoising filter strength per voxel is determined by identifying the denoising filter strength value at which boundary losses start to appear around the voxel. The final denoising result is generated by applying the ANLM denoising method with the optimal per-voxel denoising filter strengths. The experimental results demonstrated that AAPOD was capable of reducing noise adaptively and optimally while avoiding tissue boundary losses. AAPOD is useful for improving the quality of MRIs with low-contrast-to-noise ratios and could be applied to other medical imaging modalities, e.g., computed tomography.
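
    A much-simplified sketch of the AAPOD loop, assuming scikit-image's non-local means as the ANLM stand-in and a global (not per-voxel) stopping rule based on edge energy in the difference images rather than the paper's zero-crossing detector; the strength schedule and tolerance are illustrative.

        import numpy as np
        from skimage.filters import sobel
        from skimage.restoration import denoise_nl_means, estimate_sigma

        def aapod_like(img, strengths=(0.4, 0.6, 0.8, 1.0, 1.2), edge_tol=0.02):
            # Run NLM trials with increasing filter strength; stop once the
            # difference between successive results starts to contain
            # boundary structure (tissue edges being eroded).
            sigma = estimate_sigma(img)
            prev = denoise_nl_means(img, h=strengths[0] * sigma,
                                    patch_size=5, patch_distance=6)
            best = prev
            for s in strengths[1:]:
                cur = denoise_nl_means(img, h=s * sigma,
                                       patch_size=5, patch_distance=6)
                if sobel(cur - prev).mean() > edge_tol:
                    break                     # boundaries starting to disappear
                best, prev = cur, cur
            return best

        out = aapod_like(np.random.rand(64, 64))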

  15. Inferring biological functions and associated transcriptional regulators using gene set expression coherence analysis

    PubMed Central

    Kim, Tae-Min; Chung, Yeun-Jun; Rhyu, Mun-Gan; Ho Jung, Myeong

    2007-01-01

    Background Gene clustering has been widely used to group genes with similar expression patterns in microarray data analysis. Subsequent enrichment analysis using predefined gene sets can provide clues as to which functional themes or regulatory sequence motifs are associated with individual gene clusters. In spite of this potential utility, gene clustering and enrichment analysis have been used on separate platforms, and the development of an integrative algorithm linking the two methods remains challenging. Results In this study, we propose an algorithm for the discovery of molecular functions and the elucidation of transcriptional logic using two kinds of gene information: functional and regulatory-motif gene sets. The algorithm, termed gene set expression coherence analysis (GSECA), first selects functional gene sets with significantly high expression coherence. These candidate gene sets are further processed into a number of functionally related themes, or functional clusters, according to their expression similarities. Each functional cluster is then investigated for the enrichment of transcriptional regulatory motifs using modified gene set enrichment analysis and regulatory-motif gene sets. The method was tested on two publicly available expression profiles representing murine myogenesis and erythropoiesis. For the respective profiles, our algorithm identified myocyte- and erythrocyte-related molecular functions, along with putative transcriptional regulators for the corresponding molecular functions. Conclusion As an integrative and comprehensive method for the analysis of large-scale gene expression profiles, our method is able to generate a set of testable hypotheses: the transcriptional regulator X regulates function Y under cellular condition Z. The GSECA algorithm is implemented in a freely available software package. PMID:18021416

  16. A probabilistic framework to infer brain functional connectivity from anatomical connections.

    PubMed

    Deligianni, Fani; Varoquaux, Gael; Thirion, Bertrand; Robinson, Emma; Sharp, David J; Edwards, A David; Rueckert, Daniel

    2011-01-01

    We present a novel probabilistic framework to learn across several subjects a mapping from brain anatomical connectivity to functional connectivity, i.e. the covariance structure of brain activity. This prediction problem must be formulated as a structured-output learning task, as the predicted parameters are strongly correlated. We introduce a model selection framework based on cross-validation with a parametrization-independent loss function suitable to the manifold of covariance matrices. Our model is based on constraining the conditional independence structure of functional activity by the anatomical connectivity. Subsequently, we learn a linear predictor of a stationary multivariate autoregressive model. This natural parameterization of functional connectivity also enforces the positive-definiteness of the predicted covariance and thus matches the structure of the output space. Our results show that functional connectivity can be explained by anatomical connectivity on a rigorous statistical basis, and that a proper model of functional connectivity is essential to assess this link.

  17. Nebulon: a system for the inference of functional relationships of gene products from the rearrangement of predicted operons

    PubMed Central

    Janga, Sarath Chandra; Collado-Vides, Julio; Moreno-Hagelsieb, Gabriel

    2005-01-01

    Since operons are unstable across Prokaryotes, it has been suggested that perhaps they re-combine in a conservative manner. Thus, genes belonging to a given operon in one genome might re-associate in other genomes revealing functional relationships among gene products. We developed a system to build networks of functional relationships of gene products based on their organization into operons in any available genome. The operon predictions are based on inter-genic distances. Our system can use different kinds of thresholds to accept a functional relationship, either related to the prediction of operons, or to the number of non-redundant genomes that support the associations. We also work by shells, meaning that we decide on the number of linking iterations to allow for the complementation of related gene sets. The method shows high reliability benchmarked against knowledge-bases of functional interactions. We also illustrate the use of Nebulon in finding new members of regulons, and of other functional groups of genes. Operon rearrangements produce thousands of high-quality new interactions per prokaryotic genome, and thousands of confirmations per genome to other predictions, making it another important tool for the inference of functional interactions from genomic context. PMID:15867197

  18. Ecological Inference

    NASA Astrophysics Data System (ADS)

    King, Gary; Rosen, Ori; Tanner, Martin A.

    2004-09-01

    This collection of essays brings together a diverse group of scholars to survey the latest strategies for solving ecological inference problems in various fields. The last half-decade has witnessed an explosion of research in ecological inference--the process of trying to infer individual behavior from aggregate data. Although uncertainties and information lost in aggregation make ecological inference one of the most problematic types of research to rely on, these inferences are required in many academic fields, as well as by legislatures and the Courts in redistricting, by business in marketing research, and by governments in policy analysis.

  19. Fast Translation Invariant Multiscale Image Denoising.

    PubMed

    Li, Meng; Ghosal, Subhashis

    2015-12-01

    Translation invariant (TI) cycle spinning is an effective method for removing artifacts from images. However, for a denoising method using O(n) time, exact TI cycle spinning by averaging all possible circulant shifts requires O(n²) time, where n is the number of pixels, and is therefore not feasible in practice. The existing literature has investigated efficient algorithms to calculate the TI version of some denoising approaches, such as the Haar wavelet. Multiscale methods, especially those based on likelihood decomposition, such as penalized likelihood estimators and Bayesian methods, have become popular in image processing because of their effectiveness in denoising images. As far as we know, there is no systematic investigation of the TI calculation corresponding to general multiscale approaches. In this paper, we propose a fast TI (FTI) algorithm and a more general k-TI algorithm allowing TI for the last k scales of the image, both applicable to general d-dimensional images (d = 2, 3, …) with either Gaussian or Poisson noise. The proposed FTI leads to the exact TI estimation but requires only O(n log₂ n) time. The proposed k-TI can achieve almost the same performance as the exact TI estimation, but requires even less time. We achieve this by exploiting the regularity present in the multiscale structure, which is justified theoretically. The proposed FTI and k-TI are generic in that they are applicable to any smoothing technique based on the multiscale structure. We demonstrate the FTI and k-TI algorithms on some recently proposed state-of-the-art methods for both Poisson- and Gaussian-noised images. Both simulations and real data application confirm the appealing performance of the proposed algorithms. MATLAB toolboxes are accessible online to reproduce the results and can be applied to general multiscale denoising approaches provided by the users.
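
    A sketch of the cycle-spinning construction itself, which the paper accelerates: denoise every circulant shift of the signal, unshift, and average. Exact TI uses all n shifts (the quadratic cost quoted above); restricting the shift set is the spirit of k-TI. Wavelet soft-thresholding via pywt serves as the base denoiser; the wavelet, level, and threshold rule are conventional choices, not the paper's.

        import numpy as np
        import pywt

        def base_denoise(x, wavelet='haar', level=4):
            # Decimated wavelet soft-thresholding: shift-variant on its own.
            coeffs = pywt.wavedec(x, wavelet, level=level)
            thr = (np.median(np.abs(coeffs[-1])) / 0.6745
                   * np.sqrt(2.0 * np.log(len(x))))      # universal threshold
            coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft')
                                    for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)[:len(x)]

        def cycle_spin(x, shifts):
            # TI estimate: average the unshifted denoised versions of each shift.
            out = np.zeros(len(x))
            for s in shifts:
                out += np.roll(base_denoise(np.roll(x, s)), -s)
            return out / len(shifts)

        x = np.sin(np.linspace(0.0, 8.0 * np.pi, 512)) + 0.3 * np.random.randn(512)
        ti = cycle_spin(x, range(16))   # 16 shifts approximate full TI averaging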

  20. The natural history of molecular functions inferred from an extensive phylogenomic analysis of gene ontology data

    PubMed Central

    Koç, Ibrahim; Caetano-Anollés, Gustavo

    2017-01-01

    The origin and natural history of molecular functions hold the key to the emergence of cellular organization and modern biochemistry. Here we use a genomic census of Gene Ontology (GO) terms to reconstruct phylogenies at the three highest (1, 2 and 3) and the lowest (terminal) levels of the hierarchy of molecular functions, which reflect the broadest and the most specific GO definitions, respectively. These phylogenies define evolutionary timelines of functional innovation. We analyzed 249 free-living organisms comprising the three superkingdoms of life, Archaea, Bacteria, and Eukarya. Phylogenies indicate catalytic, binding and transport functions were the oldest, suggesting a ‘metabolism-first’ origin scenario for biochemistry. Metabolism made use of increasingly complicated organic chemistry. Primordial features of ancient molecular functions and functional recruitments were further distilled by studying the oldest child terms of the oldest level 1 GO definitions. Network analyses showed the existence of an hourglass pattern of enzyme recruitment in the molecular functions of the directed acyclic graph of molecular functions. Older high-level molecular functions were thoroughly recruited at younger lower levels, while very young high-level functions were used throughout the timeline. This pattern repeated in every one of the three mappings, which gave a criss-cross pattern. The timelines and their mappings were remarkable. They revealed the progressive evolutionary development of functional toolkits, starting with the early rise of metabolic activities, followed chronologically by the rise of macromolecular biosynthesis, the establishment of controlled interactions with the environment and self, adaptation to oxygen, and enzyme coordinated regulation, and ending with the rise of structural and cellular complexity. This historical account holds important clues for dissection of the emergence of biocomplexity and life. PMID:28467492

  1. The natural history of molecular functions inferred from an extensive phylogenomic analysis of gene ontology data.

    PubMed

    Koç, Ibrahim; Caetano-Anollés, Gustavo

    2017-01-01

    The origin and natural history of molecular functions hold the key to the emergence of cellular organization and modern biochemistry. Here we use a genomic census of Gene Ontology (GO) terms to reconstruct phylogenies at the three highest (1, 2 and 3) and the lowest (terminal) levels of the hierarchy of molecular functions, which reflect the broadest and the most specific GO definitions, respectively. These phylogenies define evolutionary timelines of functional innovation. We analyzed 249 free-living organisms comprising the three superkingdoms of life, Archaea, Bacteria, and Eukarya. Phylogenies indicate catalytic, binding and transport functions were the oldest, suggesting a 'metabolism-first' origin scenario for biochemistry. Metabolism made use of increasingly complicated organic chemistry. Primordial features of ancient molecular functions and functional recruitments were further distilled by studying the oldest child terms of the oldest level 1 GO definitions. Network analyses showed the existence of an hourglass pattern of enzyme recruitment in the molecular functions of the directed acyclic graph of molecular functions. Older high-level molecular functions were thoroughly recruited at younger lower levels, while very young high-level functions were used throughout the timeline. This pattern repeated in every one of the three mappings, which gave a criss-cross pattern. The timelines and their mappings were remarkable. They revealed the progressive evolutionary development of functional toolkits, starting with the early rise of metabolic activities, followed chronologically by the rise of macromolecular biosynthesis, the establishment of controlled interactions with the environment and self, adaptation to oxygen, and enzyme coordinated regulation, and ending with the rise of structural and cellular complexity. This historical account holds important clues for dissection of the emergence of biocomplexity and life.

  2. Inference of S-system models of genetic networks by solving one-dimensional function optimization problems.

    PubMed

    Kimura, S; Araki, D; Matsumura, K; Okada-Hatakeyama, M

    2012-02-01

    Voit and Almeida have proposed the decoupling approach as a method for inferring the S-system models of genetic networks. The decoupling approach defines the inference of a genetic network as a problem requiring the solutions of sets of algebraic equations. The computation can be accomplished in a very short time, as the approach estimates S-system parameters without solving any of the differential equations. Yet the defined algebraic equations are non-linear, which sometimes prevents us from finding reasonable S-system parameters. In this study, we propose a new technique to overcome this drawback of the decoupling approach. This technique transforms the problem of solving each set of algebraic equations into a one-dimensional function optimization problem. The computation can still be accomplished in a relatively short time, as the problem is transformed by solving a linear programming problem. We confirm the effectiveness of the proposed approach through numerical experiments. Copyright © 2011 Elsevier Inc. All rights reserved.

  3. Musculoskeletal ultrasound image denoising using Daubechies wavelets

    NASA Astrophysics Data System (ADS)

    Gupta, Rishu; Elamvazuthi, I.; Vasant, P.

    2012-11-01

    Among the various existing medical imaging modalities, ultrasound holds particular promise because of its ready availability and its use of non-ionizing radiation. In this paper we denoise ultrasound images using Daubechies wavelets and analyze the results, with the peak signal-to-noise ratio (PSNR) and the coefficient of correlation as performance indices. Daubechies wavelets of orders 1 to 6 are applied to four different ultrasound bone fracture images at decomposition levels 1 to 3. The resulting images are shown for visual inspection, and the PSNR and coefficient-of-correlation values are plotted for quantitative analysis.
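
    The two performance indices used in the study are straightforward to compute; a small sketch with pywt, assuming a soft-threshold rule on the detail subbands (the specific threshold is a common convention, not taken from the paper).

        import numpy as np
        import pywt

        def psnr(ref, test, peak=255.0):
            mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
            return 10.0 * np.log10(peak ** 2 / mse)

        def corr_coef(ref, test):
            return np.corrcoef(ref.ravel(), test.ravel())[0, 1]

        def db_denoise(img, order=4, level=2):
            # Soft-threshold the detail subbands of a 2-D Daubechies decomposition.
            coeffs = pywt.wavedec2(img, f'db{order}', level=level)
            thr = np.median(np.abs(coeffs[-1][-1])) / 0.6745 * 2.0
            shrunk = [coeffs[0]] + [tuple(pywt.threshold(b, thr, mode='soft')
                                          for b in d) for d in coeffs[1:]]
            return pywt.waverec2(shrunk, f'db{order}')

        img = np.random.rand(128, 128) * 255.0
        noisy = img + 10.0 * np.random.randn(128, 128)
        den = db_denoise(noisy)
        print(psnr(img, den), corr_coef(img, den))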

  4. Image denoising using a tight frame.

    PubMed

    Shen, Lixin; Papadakis, Manos; Kakadiaris, Ioannis A; Konstantinidis, Ioannis; Kouri, Donald; Hoffman, David

    2006-05-01

    We present a general mathematical theory for lifting frames that allows us to modify existing filters to construct new ones that form Parseval frames. We apply our theory to design nonseparable Parseval frames from separable (tensor) products of a piecewise linear spline tight frame. These new frame systems incorporate the weighted average operator, the Sobel operator, and the Laplacian operator in directions that are integer multiples of 45 degrees. A new image denoising algorithm is then proposed, tailored to the specific properties of these new frame filters. We demonstrate the performance of our algorithm on a diverse set of images with very encouraging results.

  5. Potential and limitations of inferring ecosystem photosynthetic capacity from leaf functional traits

    Treesearch

    Talie Musavi; Mirco Migliavacca; Martine Janet van de Weg; Jens Kattge; Georg Wohlfahrt; Peter M. van Bodegom; Markus Reichstein; Michael Bahn; Arnaud Carrara; Tomas F. Domingues; Michael Gavazzi; Damiano Gianelle; Cristina Gimeno; André Granier; Carsten Gruening; Kateřina Havránková; Mathias Herbst; Charmaine Hrynkiw; Aram Kalhori; Thomas Kaminski; Katja Klumpp; Pasi Kolari; Bernard Longdoz; Stefano Minerbi; Leonardo Montagnani; Eddy Moors; Walter C. Oechel; Peter B. Reich; Shani Rohatyn; Alessandra Rossi; Eyal Rotenberg; Andrej Varlagin; Matthew Wilkinson; Christian Wirth; Miguel D. Mahecha

    2016-01-01

    The aim of this study was to systematically analyze the potential and limitations of using plant functional trait observations from global databases versus in situ data to improve our understanding of vegetation impacts on ecosystem functional properties (EFPs). Using ecosystem photosynthetic capacity as an example, we first provide an objective approach to derive...

  6. Comparative population genomics: power and principles for the inference of functionality

    PubMed Central

    Lawrie, David S.; Petrov, Dmitri A.

    2014-01-01

    The availability of sequenced genomes from multiple related organisms allows the detection and localization of functional genomic elements based on the idea that such elements evolve more slowly than neutral sequences. Although such comparative genomics methods have proven useful in discovering functional elements and ascertaining levels of functional constraint in the genome as a whole, here we outline limitations intrinsic to this approach that cannot be overcome by sequencing more species. We argue that it is essential to supplement comparative genomics with ultra-deep sampling of populations from closely related species to enable substantially more powerful genomic scans for functional elements. The convergence of sequencing technology and population genetics theory has made such projects feasible and has exciting implications for functional genomics. PMID:24656563

  7. Comparative population genomics: power and principles for the inference of functionality.

    PubMed

    Lawrie, David S; Petrov, Dmitri A

    2014-04-01

    The availability of sequenced genomes from multiple related organisms allows the detection and localization of functional genomic elements based on the idea that such elements evolve more slowly than neutral sequences. Although such comparative genomics methods have proven useful in discovering functional elements and ascertaining levels of functional constraint in the genome as a whole, here we outline limitations intrinsic to this approach that cannot be overcome by sequencing more species. We argue that it is essential to supplement comparative genomics with ultra-deep sampling of populations from closely related species to enable substantially more powerful genomic scans for functional elements. The convergence of sequencing technology and population genetics theory has made such projects feasible and has exciting implications for functional genomics. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. The interval testing procedure: A general framework for inference in functional data analysis.

    PubMed

    Pini, Alessia; Vantini, Simone

    2016-09-01

    We introduce in this work the Interval Testing Procedure (ITP), a novel inferential technique for functional data. The procedure can be used to test different functional hypotheses, e.g., distributional equality between two or more functional populations, or equality of the mean function of a functional population to a reference. The ITP involves three steps: (i) the representation of the data on a (possibly high-dimensional) functional basis; (ii) a test of each possible set of consecutive basis coefficients; and (iii) the computation of the adjusted p-values associated with each basis component, by means of a new strategy proposed here. We define a new type of error control, the interval-wise control of the family-wise error rate, particularly suited to functional data, and we show that the ITP provides such control. A simulation study comparing the ITP with other testing procedures is reported. The ITP is then applied to the analysis of hemodynamic features involved in cerebral aneurysm pathology. The ITP is implemented in the fdatest R package.

  9. Phylogenetic approach for inferring the origin and functional evolution of bacterial ADP-ribosylation superfamily.

    PubMed

    Chellapandi, P; Sakthishree, S; Bharathi, M

    2013-09-01

    Bacterial ADP-ribosyltransferases (BADPRTs) contribute extensively to determining the strain-specific virulence state and pathogenesis in human hosts. Understanding the molecular evolution and functional diversity of the BADPRTs is an important standpoint for describing the fundamentals behind vaccine design for bacterial infections. In the present study, we have evaluated the origin and functional evolution of conserved domains within the BADPRTs by analyzing their sequence-function relationships. To represent the evolutionary history of the BADPRTs, phylogenetic trees were constructed based on their protein sequences, structures and conserved domains using different evolutionary programs. Sequence divergence and genetic diversity were studied herein to deduce the functional evolution of conserved domains across the family and superfamily. Sequence similarity searches showed that three hypothetical proteins were identical (above 90%) to members of the BADPRTs, and their functions were annotated by the phylogenetic approach. Phylogenetic analysis revealed that the family members of the BADPRTs are phylogenetically related to one another, functionally diverged within the same family, and dispersed into closely related bacteria. The presence of a core substitution pattern in the conserved domains would determine the family-specific function of the BADPRTs. Functional diversity of the BADPRTs was exclusively distinguished by Darwinian positive selection (diphtheria toxin C and pertussis toxin S) and neutral selection (arginine ADP-ribosyltransferase, enterotoxin A and binary toxin A) acting on the existing domains. Many of the family members share sequence-specific features with members of the arginine ADP-ribosyltransferase family. Conservative functions of members of the BADPRTs have been shown to expand only within closely related families, and to be retained as such in pathogenic bacteria by evolutionary processes (domain duplication or

  10. A novel partial volume effects correction technique integrating deconvolution associated with denoising within an iterative PET image reconstruction

    SciTech Connect

    Merlin, Thibaut; Visvikis, Dimitris; Fernandez, Philippe; Lamare, Frederic

    2015-02-15

    Purpose: Partial volume effect (PVE) plays an important role in both qualitative and quantitative PET image accuracy, especially for small structures. A previously proposed voxelwise PVE correction method applied on PET reconstructed images involves the use of Lucy–Richardson deconvolution incorporating wavelet-based denoising to limit the associated propagation of noise. The aim of this study is to incorporate the deconvolution, coupled with the denoising step, directly inside the iterative reconstruction process to further improve PVE correction. Methods: The list-mode ordered subset expectation maximization (OSEM) algorithm has been modified accordingly with the application of the Lucy–Richardson deconvolution algorithm to the current estimation of the image, at each reconstruction iteration. Acquisitions of the NEMA NU2-2001 IQ phantom were performed on a GE DRX PET/CT system to study the impact of incorporating the deconvolution inside the reconstruction [with and without the point spread function (PSF) model] in comparison to its application postreconstruction and to standard iterative reconstruction incorporating the PSF model. The impact of the denoising step was also evaluated. Images were semiquantitatively assessed by studying the trade-off between the intensity recovery and the noise level in the background, estimated as relative standard deviation. Qualitative assessments of the developed methods were additionally performed on clinical cases. Results: Incorporating the deconvolution without denoising within the reconstruction achieved superior intensity recovery in comparison to both standard OSEM reconstruction integrating a PSF model and application of the deconvolution algorithm in a postreconstruction process. The addition of the denoising step made it possible to limit the SNR degradation while preserving the intensity recovery. Conclusions: This study demonstrates the feasibility of incorporating the Lucy–Richardson deconvolution associated with a

  11. A novel partial volume effects correction technique integrating deconvolution associated with denoising within an iterative PET image reconstruction.

    PubMed

    Merlin, Thibaut; Visvikis, Dimitris; Fernandez, Philippe; Lamare, Frederic

    2015-02-01

    Partial volume effect (PVE) plays an important role in both qualitative and quantitative PET image accuracy, especially for small structures. A previously proposed voxelwise PVE correction method applied on PET reconstructed images involves the use of Lucy-Richardson deconvolution incorporating wavelet-based denoising to limit the associated propagation of noise. The aim of this study is to incorporate the deconvolution, coupled with the denoising step, directly inside the iterative reconstruction process to further improve PVE correction. The list-mode ordered subset expectation maximization (OSEM) algorithm has been modified accordingly with the application of the Lucy-Richardson deconvolution algorithm to the current estimation of the image, at each reconstruction iteration. Acquisitions of the NEMA NU2-2001 IQ phantom were performed on a GE DRX PET/CT system to study the impact of incorporating the deconvolution inside the reconstruction [with and without the point spread function (PSF) model] in comparison to its application postreconstruction and to standard iterative reconstruction incorporating the PSF model. The impact of the denoising step was also evaluated. Images were semiquantitatively assessed by studying the trade-off between the intensity recovery and the noise level in the background, estimated as relative standard deviation. Qualitative assessments of the developed methods were additionally performed on clinical cases. Incorporating the deconvolution without denoising within the reconstruction achieved superior intensity recovery in comparison to both standard OSEM reconstruction integrating a PSF model and application of the deconvolution algorithm in a postreconstruction process. The addition of the denoising step made it possible to limit the SNR degradation while preserving the intensity recovery. This study demonstrates the feasibility of incorporating the Lucy-Richardson deconvolution associated with a wavelet-based denoising in the reconstruction
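
    The Lucy-Richardson step that both records embed in the reconstruction is a multiplicative update; a minimal numpy/scipy sketch of that step alone (the OSEM loop and the wavelet-based denoising stage are omitted, and the flat initial estimate and iteration count are arbitrary).

        import numpy as np
        from scipy.ndimage import convolve

        def richardson_lucy(image, psf, n_iter=10, eps=1e-12):
            # est <- est * K^T( y / K(est) ), where K is convolution with the PSF.
            est = np.full_like(image, image.mean(), dtype=float)
            psf_m = psf[::-1, ::-1]                       # adjoint = mirrored PSF
            for _ in range(n_iter):
                blur = convolve(est, psf, mode='reflect')
                est = est * convolve(image / np.maximum(blur, eps),
                                     psf_m, mode='reflect')
            return est

        psf = np.ones((5, 5)) / 25.0
        img = np.random.rand(64, 64)
        blurred = convolve(img, psf, mode='reflect')
        restored = richardson_lucy(blurred, psf, n_iter=20)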

  12. Predicting Protein Function by Genomic Context: Quantitative Evaluation and Qualitative Inferences

    PubMed Central

    Huynen, Martijn; Snel, Berend; Lathe, Warren; Bork, Peer

    2000-01-01

    Various new methods have been proposed to predict functional interactions between proteins based on the genomic context of their genes. The types of genomic context that they use are Type I: the fusion of genes; Type II: the conservation of gene-order or co-occurrence of genes in potential operons; and Type III: the co-occurrence of genes across genomes (phylogenetic profiles). Here we compare these types for their coverage, their correlations with various types of functional interaction, and their overlap with homology-based function assignment. We apply the methods to Mycoplasma genitalium, the standard benchmarking genome in computational and experimental genomics. Quantitatively, conservation of gene order is the technique with the highest coverage, applying to 37% of the genes. By combining gene order conservation with gene fusion (6%), the co-occurrence of genes in operons in absence of gene order conservation (8%), and the co-occurrence of genes across genomes (11%), significant context information can be obtained for 50% of the genes (the categories overlap). Qualitatively, we observe that the functional interactions between genes are stronger as the requirements for physical neighborhood on the genome are more stringent, while the fraction of potential false positives decreases. Moreover, only in cases in which gene order is conserved in a substantial fraction of the genomes, in this case six out of twenty-five, does a single type of functional interaction (physical interaction) clearly dominate (>80%). In other cases, complementary function information from homology searches, which is available for most of the genes with significant genomic context, is essential to predict the type of interaction. Using a combination of genomic context and homology searches, new functional features can be predicted for 10% of M. genitalium genes. PMID:10958638

  13. Inference of Functional Relations in Predicted Protein Networks with a Machine Learning Approach

    PubMed Central

    Ezkurdia, Iakes; Andrés-León, Eduardo; Valencia, Alfonso

    2010-01-01

    Background Molecular biology is currently facing the challenging task of functionally characterizing the proteome. The large number of possible protein-protein interactions and complexes, the variety of environmental conditions and cellular states in which these interactions can be reorganized, and the multiple ways in which a protein can influence the function of others require the development of experimental and computational approaches to analyze and predict functional associations between proteins as part of their activity in the interactome. Methodology/Principal Findings We have studied the possibility of constructing a classifier that combines the output of several protein interaction prediction methods. The AODE (Averaged One-Dependence Estimators) machine learning algorithm is a suitable choice in this case: it provides better results than the individual prediction methods and performs better than other alternative methods tested in this experimental setup. To illustrate the potential use of this new AODE-based Predictor of Protein InterActions (APPIA) when analyzing high-throughput experimental data, we show how it helps to filter the results of published high-throughput proteomic studies, ranking functionally related pairs in a significant way. Availability: All the predictions of the individual methods and of the combined APPIA predictor, together with the datasets of functional associations used, are available at http://ecid.bioinfo.cnio.es/. Conclusions We propose a strategy that integrates the main current computational techniques used to predict functional associations into a unified classifier system, specifically focusing on the evaluation of poorly characterized protein pairs. We selected the AODE classifier as the appropriate tool to perform this task. AODE is particularly useful for extracting valuable information from large, unbalanced and heterogeneous data sets. The combination of the information provided by five

  14. Image denoising substantially improves accuracy and precision of intravoxel incoherent motion parameter estimates

    PubMed Central

    Reischauer, Carolin; Gutzeit, Andreas

    2017-01-01

    Applicability of intravoxel incoherent motion (IVIM) imaging in the clinical setting is hampered by the limited reliability of the perfusion-related parameter estimates in particular. To alleviate this problem, various advanced postprocessing methods have been introduced; however, the underlying algorithms are not readily available and generally suffer from an increased computational burden. In contrast, several computationally fast image denoising methods that are accessible online have recently been proposed and may improve the reliability of IVIM parameter estimates. The objective of the present work is to investigate the impact of image denoising on the accuracy and precision of IVIM parameter estimates using comprehensive in-silico and in-vivo experiments. Image denoising is performed with four different algorithms that work on magnitude data: two algorithms based on nonlocal means (NLM) filtering, one that relies on local principal component analysis (LPCA) of the diffusion-weighted images, and another that exploits joint rank and edge constraints (JREC). Accuracy and precision of IVIM parameter estimates are investigated in an in-silico brain phantom and an in-vivo ground truth as a function of the signal-to-noise ratio for spatially homogeneous and inhomogeneous levels of Rician noise. Moreover, precision is evaluated using bootstrap analysis of in-vivo measurements. In the experiments, IVIM parameters are computed (a) by using a segmented fit method and (b) by performing a biexponential fit of the entire attenuation curve based on nonlinear least-squares estimates. Irrespective of the fit method, the results demonstrate that the reliability of IVIM parameter estimates is substantially improved by image denoising. The experiments show that the LPCA and JREC algorithms perform in a similar manner and outperform the NLM-related methods. Relative to noisy data, accuracy of the IVIM parameters in the in-silico phantom improves after image

  16. Evolutionary Fuzzy Block-Matching-Based Camera Raw Image Denoising.

    PubMed

    Yang, Chin-Chang; Guo, Shu-Mei; Tsai, Jason Sheng-Hong

    2017-09-01

    An evolutionary fuzzy block-matching-based image denoising algorithm is proposed to remove noise from a camera raw image. Recently, variance stabilization transforms have been widely used to stabilize the noise variance, so that a Gaussian denoising algorithm can be used to remove the signal-dependent noise in camera sensors. However, in the stabilized domain, existing denoising algorithms may blur too much detail. To provide a better estimate of the noise-free signal, a new block-matching approach is proposed that finds similar blocks by the use of a type-2 fuzzy logic system (FLS). These similar blocks are then averaged with weightings determined by the FLS. Finally, an efficient differential evolution is used to further improve the performance of the proposed denoising algorithm. The experimental results show that the proposed algorithm effectively improves image denoising performance. Furthermore, the average performance of the proposed method is better than that of two state-of-the-art image denoising algorithms in both subjective and objective measures.
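
    The variance stabilization step mentioned above can be illustrated with the classical Anscombe transform for Poisson noise; real camera-raw pipelines typically use a generalized Anscombe transform for mixed Poisson-Gaussian noise, so this is only a toy sketch.

```python
import numpy as np

def anscombe(x):
    """Anscombe transform: maps Poisson counts to ~unit-variance Gaussian."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse (an unbiased inverse differs slightly)."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

rng = np.random.default_rng(1)
clean = np.full(10000, 20.0)
noisy = rng.poisson(clean).astype(float)  # signal-dependent noise
stab = anscombe(noisy)
print(np.var(stab))                       # close to 1, as intended
```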

  17. Fst-Filter: A flexible spatio-temporal filter for biomedical multichannel data denoising.

    PubMed

    Nuanprasert, Somchai; Adachi, Yoshiaki; Suzuki, Takashi

    2015-08-01

    In this paper, we present a noise reduction method for multichannel measurement systems in which the true underlying signal is spatially low-rank and contaminated by spatially correlated noise. Our formulation applies generalized singular value decomposition (GSVD) with a signal-recovery approach to extend conventional subspace-based methods to spatio-temporal filtering. Without necessarily requiring the noise covariance data in advance, the implemented optimization scheme allows users to choose the denoising function F(·) flexibly, from a variety of existing efficient temporal filters, to suit different temporal noise characteristics. The effectiveness of the proposed method is demonstrated by its better accuracy for brain source estimation in simulated magnetoencephalography (MEG) experiments compared with some traditional methods, e.g., principal component analysis (PCA), robust principal component analysis (RPCA), and multivariate wavelet denoising (MWD).
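
    SciPy has no GSVD routine, so as a simplified stand-in for the idea (project onto a low-rank spatial subspace, apply a user-chosen temporal denoiser F to each component, reconstruct), here is a plain-SVD sketch; the Savitzky-Golay default for F, the window length, and the function name are arbitrary choices of mine.

```python
import numpy as np
from scipy.signal import savgol_filter

def spatiotemporal_filter(Y, rank,
                          temporal_f=lambda s: savgol_filter(s, 31, 3)):
    """Simplified subspace spatio-temporal filter (not a true GSVD).

    Y : (channels, time) multichannel recording, spatially low-rank signal
    rank : assumed dimension of the signal subspace
    temporal_f : any 1-D temporal denoiser applied per component (the
                 flexible F idea); needs time series longer than its window
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    Vt_f = np.array([temporal_f(v) for v in Vt[:rank]])  # denoise time courses
    return (U[:, :rank] * s[:rank]) @ Vt_f               # back to sensor space
```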

  18. Denoising infrared maritime imagery using tailored dictionaries via modified K-SVD algorithm.

    PubMed

    Smith, L N; Olson, C C; Judd, K P; Nichols, J M

    2012-06-10

    Recent work has shown that tailored overcomplete dictionaries can provide a better image model than standard basis functions for a variety of image processing tasks. Here we propose a modified K-SVD dictionary learning algorithm designed to maintain the advantages of the original approach but with a focus on improved convergence. We then use the learned model to denoise infrared maritime imagery and compare the performance to the original K-SVD algorithm, several overcomplete "fixed" dictionaries, and a standard wavelet denoising algorithm. Results indicate the superiority of overcomplete representations and show that our tailored approach provides peak signal-to-noise ratios similar to the traditional K-SVD at roughly half the computational cost.
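
    For orientation, one iteration of the original (unmodified) K-SVD, sparse coding with OMP followed by rank-1 atom updates, might look like this in NumPy/scikit-learn; the paper's modified update rule is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def ksvd_iteration(Y, D, n_nonzero=5):
    """One standard K-SVD iteration.

    Y : (dim, n_signals) training patches
    D : (dim, n_atoms) dictionary with unit-norm columns
    """
    X = orthogonal_mp(D, Y, n_nonzero_coefs=n_nonzero)  # sparse codes
    for k in range(D.shape[1]):
        users = np.nonzero(X[k])[0]      # signals that use atom k
        if users.size == 0:
            continue
        X[k, users] = 0.0
        E = Y[:, users] - D @ X[:, users]   # residual without atom k
        U, s, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, k] = U[:, 0]                   # rank-1 update of the atom...
        X[k, users] = s[0] * Vt[0]          # ...and of its coefficients
    return D, X
```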

  19. [A novel denoising approach to SVD filtering based on DCT and PCA in CT image].

    PubMed

    Feng, Fuqiang; Wang, Jun

    2013-10-01

    Because of various effects of the imaging mechanism, noise is inevitably introduced in the medical CT imaging process. Noise in the images greatly degrades image quality and brings difficulties to clinical diagnosis. This paper presents a new method to improve the performance of singular value decomposition (SVD) filtering in CT images. Filters based on SVD can effectively analyze characteristics of the image in horizontal (and/or vertical) directions. According to the features of CT images, the discrete cosine transform (DCT) can be used to extract the region of interest and to shield the uninteresting region, realizing the extraction of the structural characteristics of the image. SVD is then applied to the DCT-transformed image, and a weighting function is constructed for adaptively weighted image reconstruction. The novel denoising approach was applied to CT image denoising, and the experimental results showed that the new method effectively improves the performance of SVD filtering.
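
    A rough sketch of the pipeline as described, DCT masking followed by SVD filtering; the hard low-frequency mask and the hard rank truncation below stand in for the paper's adaptive weighting, whose exact form is not given in the abstract.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_svd_denoise(img, keep_frac=0.25, rank=30):
    """Mask high DCT frequencies, then truncate the SVD of the result.

    keep_frac : fraction of low-frequency DCT coefficients retained
    rank      : number of singular values kept in the filtered image
    """
    C = dctn(img, norm="ortho")
    h = int(img.shape[0] * keep_frac)
    w = int(img.shape[1] * keep_frac)
    mask = np.zeros_like(C)
    mask[:h, :w] = 1.0                           # shield high frequencies
    smooth = idctn(C * mask, norm="ortho")
    U, s, Vt = np.linalg.svd(smooth, full_matrices=False)
    s[rank:] = 0.0                               # hard rank truncation
    return U @ np.diag(s) @ Vt
```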

  20. Multiresolution parametric estimation of transparent motions and denoising of fluoroscopic images.

    PubMed

    Auvray, Vincent; Liénard, Jean; Bouthemy, Patrick

    2005-01-01

    We describe a novel multiresolution parametric framework to estimate the transparent motions typically present in X-ray exams. Assuming the presence of two transparent layers, it computes two affine velocity fields by minimizing an appropriate objective function with an incremental Gauss-Newton technique. We have designed a realistic simulation scheme of fluoroscopic image sequences to validate our method on data with ground truth and different levels of noise. An experiment on real clinical images is also reported. We then exploit this transparent-motion estimation method to denoise two-layer image sequences using motion-compensated estimation. In accordance with theory, we show that we reach a denoising factor of 2/3 in a few iterations without introducing any local artifacts in the image sequence.

  1. Analysis of the intestinal microbial community and inferred functional capacities during the host response to Pneumocystis pneumonia.

    PubMed

    Samuelson, Derrick R; Charles, Tysheena P; de la Rua, Nicholas M; Taylor, Christopher M; Blanchard, Eugene E; Luo, Meng; Shellito, Judd E; Welsh, David A

    Pneumocystis pneumonia is a major cause of morbidity and mortality in patients infected with HIV/AIDS. In this study, we evaluated the intestinal microbial communities associated with the development of experimental Pneumocystis pneumonia, as there is growing evidence that the intestinal microbiota is critical for host defense against fungal pathogens. C57BL/6 mice were infected with live Pneumocystis murina (P. murina) via intratracheal inoculation and sacrificed 7 and 14 days postinfection for microbiota analysis. In addition, we evaluated the intestinal microbiota from CD4+ T cell depleted mice infected with P. murina. We found that the diversity of the intestinal microbial community was significantly altered by respiratory infection with P. murina. Specifically, mice infected with P. murina had altered microbial populations, as judged by changes in diversity metrics and relative taxa abundances. We also found that CD4+ T cell depleted mice infected with P. murina exhibited significantly altered intestinal microbiota that was distinct from immunocompetent mice infected with P. murina, suggesting that loss of CD4+ T cells may also affect the intestinal microbiota in the setting of Pneumocystis pneumonia. Finally, we employed a predictive metagenomics approach to evaluate various microbial features. We found that Pneumocystis pneumonia significantly alters the intestinal microbiota's inferred functional potential for carbohydrate, energy, and xenobiotic metabolism, as well as signal transduction pathways. Our study provides insight into specific microbial clades and inferred microbial functional pathways associated with Pneumocystis pneumonia. Our data also suggest a role for the gut-lung axis in host defense in the lung.

  2. Binary black hole merger rates inferred from luminosity function of ultra-luminous X-ray sources

    NASA Astrophysics Data System (ADS)

    Inoue, Yoshiyuki; Tanaka, Yasuyuki T.; Isobe, Naoki

    2016-10-01

    The Advanced Laser Interferometer Gravitational-Wave Observatory (aLIGO) has detected direct signals of gravitational waves (GWs) from GW150914. The event was a merger of binary black holes whose masses are 36^{+5}_{-4} M_⊙ and 29^{+4}_{-4} M_⊙. Such binary systems are expected to evolve directly from stellar binary systems or to form by dynamical interactions of black holes in dense stellar environments. Here we derive the binary black hole merger rate based on the nearby ultra-luminous X-ray source (ULX) luminosity function (LF) under the assumption that binary black holes evolve through X-ray emitting phases. We obtain the binary black hole merger rate as 5.8 (t_ULX/0.1 Myr)^{-1} λ^{-0.6} exp(-0.30 λ) Gpc^{-3} yr^{-1}, where t_ULX is the typical duration of the ULX phase and λ is the Eddington ratio in luminosity. This is consistent with the event rate inferred from the detection of GW150914 as well as with the predictions of binary population synthesis models. Although we are currently unable to constrain the Eddington ratio of ULXs in luminosity, due to the uncertainties of our models and of the measured binary black hole merger event rates, further X-ray and GW data will allow us to narrow down the range of the Eddington ratios of ULXs. We also find that the cumulative merger rate for the mass range 5 M_⊙ ≤ M_BH ≤ 100 M_⊙ inferred from the ULX LF is consistent with that estimated by the aLIGO collaboration considering various astrophysical conditions such as the mass function of black holes.
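
    The quoted scaling is easy to evaluate directly; a small helper (the function and variable names are mine):

```python
import numpy as np

def merger_rate(t_ulx_myr=0.1, lam=1.0):
    """Binary BH merger rate [Gpc^-3 yr^-1] from the ULX-LF scaling above."""
    return 5.8 * (0.1 / t_ulx_myr) * lam ** -0.6 * np.exp(-0.30 * lam)

print(merger_rate())           # ~4.3 for t_ULX = 0.1 Myr and lambda = 1
print(merger_rate(lam=5.0))    # the rate falls for higher Eddington ratios
```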

  3. Phylogenetic Gaussian process model for the inference of functionally important regions in protein tertiary structures.

    PubMed

    Huang, Yi-Fei; Golding, G Brian

    2014-01-01

    A critical question in biology is the identification of functionally important amino acid sites in proteins. Because functionally important sites are under stronger purifying selection, site-specific substitution rates tend to be lower than usual at these sites. A large number of phylogenetic models have been developed to estimate site-specific substitution rates in proteins, and extraordinarily low substitution rates have been used as evidence of function. Most of the existing tools, e.g. Rate4Site, assume that site-specific substitution rates are independent across sites. However, site-specific substitution rates may be strongly correlated in the protein tertiary structure, since functionally important sites tend to be clustered together to form functional patches. We have developed a new model, GP4Rate, which incorporates a Gaussian process model into the standard phylogenetic model to identify slowly evolving regions in protein tertiary structures. GP4Rate uses the Gaussian process to define a nonparametric prior distribution of site-specific substitution rates, which naturally captures the spatial correlation of substitution rates. Simulations suggest that GP4Rate can potentially estimate site-specific substitution rates with a much higher accuracy than Rate4Site and tends to report slowly evolving regions rather than individual sites. In addition, GP4Rate can estimate the strength of the spatial correlation of substitution rates from the data. By applying GP4Rate to a set of mammalian B7-1 genes, we found a highly conserved region which coincides with experimental evidence. GP4Rate may be a useful tool for the in silico prediction of functionally important regions in proteins with known structures.
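
    The core idea, a GP prior that correlates the (log) substitution rates of sites that are close in the tertiary structure, can be sketched with a squared-exponential kernel; the length scale, the random coordinates, and the log-normal link below are illustrative assumptions, not GP4Rate's actual choices.

```python
import numpy as np

def rbf_kernel(coords, length_scale=8.0, var=1.0):
    """Squared-exponential covariance over 3-D site coordinates (angstroms)."""
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / length_scale ** 2)

rng = np.random.default_rng(2)
coords = rng.uniform(0, 30, size=(50, 3))      # toy C-alpha positions
K = rbf_kernel(coords) + 1e-8 * np.eye(50)     # jitter for numerical stability
log_rates = rng.multivariate_normal(np.zeros(50), K)
rates = np.exp(log_rates)                      # spatially correlated site rates
```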

  4. Denoising solar radiation data using coiflet wavelets

    SciTech Connect

    Karim, Samsul Ariffin Abdul Janier, Josefina B. Muthuvalu, Mohana Sundaram; Hasan, Mohammad Khatim; Sulaiman, Jumat; Ismail, Mohd Tahir

    2014-10-24

    Signal denoising and smoothing play an important role in processing a given signal, whether it comes from an experiment or from data collection through observations. Collected data are usually a mixture of the true signal and noise. This noise might come from the apparatus used to measure or collect the data, or from human error in handling the data. Normally, before the data are used for further processing, the unwanted noise needs to be filtered out. One of the efficient methods for filtering the data is the wavelet transform. Because received solar radiation data fluctuate over time, they contain unwanted oscillations, namely noise, which must be filtered out before the data are used for developing a mathematical model. In order to apply denoising using the wavelet transform (WT), thresholding values need to be calculated. In this paper a new thresholding approach is proposed. The coiflet2 wavelet with variation diminishing 4 is utilized for our purpose. The numerical results show clearly that the new thresholding approach gives better results than the existing approach, namely the global thresholding value.
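
    As a baseline for comparison, standard wavelet denoising with a coiflet and the global (universal, VisuShrink-style) threshold can be done with PyWavelets; the paper's new thresholding rule is not specified in the abstract, so only this baseline is shown, and the demo signal is invented.

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="coif2", level=4):
    """Wavelet denoising with the universal (global) soft threshold."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise from finest level
    thr = sigma * np.sqrt(2.0 * np.log(len(signal)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                            for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)

t = np.linspace(0, 1, 1024)
noisy = np.sin(8 * np.pi * t) + np.random.default_rng(7).normal(0, 0.3, t.size)
smoothed = wavelet_denoise(noisy)
```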

  5. Denoising HSI images for standoff target detection

    NASA Astrophysics Data System (ADS)

    Wilson, Steven; Latifi, Shahram

    2014-06-01

    Hyperspectral image denoising methods aim to improve the spatial and spectral quality of the image to increase the effectiveness of target detection algorithms. Comparing denoising methods is difficult, because authors have sometimes compared their algorithms only to simple methods such as the Wiener filter and wavelet thresholding. We would like to compare only the most effective methods for standoff target detection using sampled training spectra. Our overall goal is to implement an HSI algorithm to detect possible weapons and shielding materials in a scene, using a lab-collected library of material spectra. Selection of a suitable method is based on PSNR, classification accuracy, and time complexity. Since our goal is target detection, classification accuracy is emphasized more; however, an algorithm that requires a large processing time would not be effective for real-time detection. Elapsed time between HSI data collection and its processing could allow changes or movement in the scene, decreasing the validity of results. Based on our study, the First Order Roughness Penalty algorithm provides a computation time of less than 2 seconds, but only provides an overall accuracy of 88% for the Indian Pines dataset. The Spectral Spatial Adaptive Total Variation method increases overall accuracy to almost 97%, but requires a computation time of over 50 seconds. For standoff target detection, Spectral Spatial Adaptive Total Variation is preferable, because it increases the probability of correct classification. By increasing the percentage of weapons materials that are correctly identified, further actions such as inspection or interception can be determined with confidence.

  6. Image denoising and deblurring using multispectral data

    NASA Astrophysics Data System (ADS)

    Semenishchev, E. A.; Voronin, V. V.; Marchuk, V. I.

    2017-05-01

    Decision-making systems are currently becoming widespread. These systems are based on the analysis of video sequences and additional data such as volume, size changes, the behavior of one or a group of objects, temperature gradients, and the presence of local areas with strong differences. Security and control systems are the main areas of application. Noise in the images strongly influences subsequent processing and decision making. This paper considers the problem of primary signal processing for the tasks of image denoising and deblurring of multispectral data. The additional information from multispectral channels can improve the efficiency of object classification. We use a method that combines information about the objects obtained by cameras in different frequency bands. To denoise the images and restore blur at the edges, we apply a method based on the simultaneous minimization of the L2 norm and of the first-order squared differences of the sequence of estimates. In case of information loss, an approach is applied based on the interpolation of data taken from the analysis of objects located in other areas and on information obtained from the multispectral camera. The effectiveness of the proposed approach is shown on a set of test images.
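
    Minimizing an L2 data term plus first-order squared differences, as described, has a closed-form solution; a 1-D sketch under the assumption that the 2-D image case applies the same penalty along rows and columns (lambda and the demo signal are invented):

```python
import numpy as np

def smooth_l2_diff(y, lam=10.0):
    """Minimize ||x - y||^2 + lam * ||D x||^2 with D the first-difference
    operator; the closed-form solution is (I + lam * D^T D)^{-1} y."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)   # (n-1, n) first-difference matrix
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

t = np.linspace(0, 1, 200)
rng = np.random.default_rng(3)
noisy = np.sin(2 * np.pi * t) + rng.normal(0, 0.2, t.size)
denoised = smooth_l2_diff(noisy, lam=50.0)
```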

  7. Optimal wavelet denoising for smart biomonitor systems

    NASA Astrophysics Data System (ADS)

    Messer, Sheila R.; Agzarian, John; Abbott, Derek

    2001-03-01

    Future smart-systems promise many benefits for biomedical diagnostics. The ideal is for simple portable systems that display and interpret information from smart integrated probes or MEMS-based devices. In this paper, we will discuss a step towards this vision with a heart bio-monitor case study. An electronic stethoscope is used to record heart sounds and the problem of extracting noise from the signal is addressed via the use of wavelets and averaging. In our example of heartbeat analysis, phonocardiograms (PCGs) have many advantages in that they may be replayed and analysed for spectral and frequency information. Many sources of noise may pollute a PCG including foetal breath sounds if the subject is pregnant, lung and breath sounds, environmental noise and noise from contact between the recording device and the skin. Wavelets can be employed to denoise the PCG. The signal is decomposed by a discrete wavelet transform. Due to the efficient decomposition of heart signals, their wavelet coefficients tend to be much larger than those due to noise. Thus, coefficients below a certain level are regarded as noise and are thresholded out. The signal can then be reconstructed without significant loss of information in the signal. The questions that this study attempts to answer are which wavelet families, levels of decomposition, and thresholding techniques best remove the noise in a PCG. The use of averaging in combination with wavelet denoising is also addressed. Possible applications of the Hilbert Transform to heart sound analysis are discussed.

  8. Non-Local Means Denoising of Dynamic PET Images

    PubMed Central

    Dutta, Joyita; Leahy, Richard M.; Li, Quanzheng

    2013-01-01

    Objective Dynamic positron emission tomography (PET), which reveals information about both the spatial distribution and temporal kinetics of a radiotracer, enables quantitative interpretation of PET data. Model-based interpretation of dynamic PET images by means of parametric fitting, however, is often a challenging task due to high levels of noise, thus necessitating a denoising step. The objective of this paper is to develop and characterize a denoising framework for dynamic PET based on non-local means (NLM). Theory NLM denoising computes weighted averages of voxel intensities assigning larger weights to voxels that are similar to a given voxel in terms of their local neighborhoods or patches. We introduce three key modifications to tailor the original NLM framework to dynamic PET. Firstly, we derive similarities from less noisy later time points in a typical PET acquisition to denoise the entire time series. Secondly, we use spatiotemporal patches for robust similarity computation. Finally, we use a spatially varying smoothing parameter based on a local variance approximation over each spatiotemporal patch. Methods To assess the performance of our denoising technique, we performed a realistic simulation on a dynamic digital phantom based on the Digimouse atlas. For experimental validation, we denoised PET images from a mouse study and a hepatocellular carcinoma patient study. We compared the performance of NLM denoising with four other denoising approaches – Gaussian filtering, PCA, HYPR, and conventional NLM based on spatial patches. Results The simulation study revealed significant improvement in bias-variance performance achieved using our NLM technique relative to all the other methods. The experimental data analysis revealed that our technique leads to clear improvement in contrast-to-noise ratio in Patlak parametric images generated from denoised preclinical and clinical dynamic images, indicating its ability to preserve image contrast and high
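
    Plain spatial NLM on a single frame is available in scikit-image; the paper's contributions (late-frame similarities, spatiotemporal patches, spatially varying smoothing) are not reproduced in this minimal sketch, and the stand-in frame is synthetic.

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

rng = np.random.default_rng(4)
frame = np.clip(rng.normal(0.5, 0.05, (64, 64)), 0, 1)  # stand-in PET frame
noisy = frame + rng.normal(0, 0.1, frame.shape)

sigma = float(np.mean(estimate_sigma(noisy)))            # noise estimate
denoised = denoise_nl_means(noisy, patch_size=5, patch_distance=6,
                            h=0.8 * sigma, sigma=sigma, fast_mode=True)
```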

  9. The assembly of ecological communities inferred from taxonomic and functional composition

    Treesearch

    Eric R. Sokol; E.F. Benfield; Lisa K. Belden; H. Maurice. Valett

    2011-01-01

    Among-site variation in metacommunities (beta diversity) is typically correlated with the distance separating the sites (spatial lag). This distance decay in similarity pattern has been linked to both niche-based and dispersal-based community assembly hypotheses. Here we show that beta diversity patterns in community composition, when supplemented with functional-trait...

  10. The cost of misremembering: Inferring the loss function in visual working memory.

    PubMed

    Sims, Chris R

    2015-03-04

    Visual working memory (VWM) is a highly limited storage system. A basic consequence of this fact is that visual memories cannot perfectly encode or represent the veridical structure of the world. However, in natural tasks, some memory errors might be more costly than others. This raises the intriguing possibility that the nature of memory error reflects the costs of committing different kinds of errors. Many existing theories assume that visual memories are noise-corrupted versions of afferent perceptual signals. However, this additive noise assumption oversimplifies the problem. Implicit in the behavioral phenomena of visual working memory is the concept of a loss function: a mathematical entity that describes the relative cost to the organism of making different types of memory errors. An optimally efficient memory system is one that minimizes the expected loss according to a particular loss function, while subject to a constraint on memory capacity. This paper describes a novel theoretical framework for characterizing visual working memory in terms of its implicit loss function. Using inverse decision theory, the empirical loss function is estimated from the results of a standard delayed recall visual memory experiment. These results are compared to the predicted behavior of a visual working memory system that is optimally efficient for a previously identified natural task, gaze correction following saccadic error. Finally, the approach is compared to alternative models of visual working memory, and shown to offer a superior account of the empirical data across a range of experimental datasets. © 2015 ARVO.

  11. Pragmatic Inferences in High-Functioning Adults with Autism and Asperger Syndrome

    ERIC Educational Resources Information Center

    Pijnacker, Judith; Hagoort, Peter; Buitelaar, Jan; Teunisse, Jan-Pieter; Geurts, Bart

    2009-01-01

    Although people with autism spectrum disorders (ASD) often have severe problems with pragmatic aspects of language, little is known about their pragmatic reasoning. We carried out a behavioral study on high-functioning adults with autistic disorder (n = 11) and Asperger syndrome (n = 17) and matched controls (n = 28) to investigate whether they…

  13. Potential and limitations of inferring ecosystem photosynthetic capacity from leaf functional traits.

    PubMed

    Musavi, Talie; Migliavacca, Mirco; van de Weg, Martine Janet; Kattge, Jens; Wohlfahrt, Georg; van Bodegom, Peter M; Reichstein, Markus; Bahn, Michael; Carrara, Arnaud; Domingues, Tomas F; Gavazzi, Michael; Gianelle, Damiano; Gimeno, Cristina; Granier, André; Gruening, Carsten; Havránková, Kateřina; Herbst, Mathias; Hrynkiw, Charmaine; Kalhori, Aram; Kaminski, Thomas; Klumpp, Katja; Kolari, Pasi; Longdoz, Bernard; Minerbi, Stefano; Montagnani, Leonardo; Moors, Eddy; Oechel, Walter C; Reich, Peter B; Rohatyn, Shani; Rossi, Alessandra; Rotenberg, Eyal; Varlagin, Andrej; Wilkinson, Matthew; Wirth, Christian; Mahecha, Miguel D

    2016-10-01

    The aim of this study was to systematically analyze the potential and limitations of using plant functional trait observations from global databases versus in situ data to improve our understanding of vegetation impacts on ecosystem functional properties (EFPs). Using ecosystem photosynthetic capacity as an example, we first provide an objective approach to derive robust EFP estimates from gross primary productivity (GPP) obtained from eddy covariance flux measurements. Second, we investigate the impact of synchronizing EFPs and plant functional traits in time and space to evaluate their relationships, and the extent to which we can benefit from global plant trait databases to explain the variability of ecosystem photosynthetic capacity. Finally, we identify a set of plant functional traits controlling ecosystem photosynthetic capacity at selected sites. Suitable estimates of ecosystem photosynthetic capacity can be derived from the light response curve of GPP as a function of radiation (photosynthetically active radiation or absorbed photosynthetically active radiation). Although the effect of climate is minimized in these calculations, the estimates indicate substantial interannual variation of the photosynthetic capacity, even after removing site-years with confounding factors such as fire disturbance. The relationships between foliar nitrogen concentration and ecosystem photosynthetic capacity are tighter when both measurements are synchronized in space and time. When using multiple plant traits simultaneously as predictors, the combination of leaf carbon to nitrogen ratio with leaf phosphorus content explains the variance of ecosystem photosynthetic capacity best (adjusted R^2 = 0.55). Overall, this study provides an objective approach to identify links between leaf-level traits and canopy-level processes and highlights the relevance of the dynamic nature of ecosystems. Synchronizing
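
    A light-response-curve estimate of photosynthetic capacity, as described, can be sketched by fitting a saturating curve of GPP against PAR; the rectangular hyperbola and all parameter values below are a common choice, not necessarily the paper's exact model.

```python
import numpy as np
from scipy.optimize import curve_fit

def light_response(par, alpha, gpp_max):
    """Rectangular hyperbola: GPP saturating with radiation (PAR)."""
    return alpha * par * gpp_max / (alpha * par + gpp_max)

par = np.linspace(0, 2000, 100)        # umol photons m-2 s-1
rng = np.random.default_rng(5)
gpp = light_response(par, 0.05, 30.0) + rng.normal(0, 1.0, par.size)

(alpha, gpp_max), _ = curve_fit(light_response, par, gpp, p0=(0.01, 20.0))
# gpp_max plays the role of the ecosystem photosynthetic capacity (EFP)
```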

  14. Fault Detection of a Roller-Bearing System through the EMD of a Wavelet Denoised Signal

    PubMed Central

    Ahn, Jong-Hyo; Kwak, Dae-Ho; Koh, Bong-Hwan

    2014-01-01

    This paper investigates fault detection of a roller bearing system using a wavelet denoising scheme and the proper orthogonal values (POVs) of an intrinsic mode function (IMF) covariance matrix. The IMFs of the bearing vibration signal are obtained through empirical mode decomposition (EMD). The signal screening process in the wavelet domain eliminates noise-corrupted portions that may lead to inaccurate prognosis of bearing conditions. We segmented the denoised bearing signal into several intervals and decomposed each of them into IMFs. The first IMF of each segment is collected to form a covariance matrix for calculating the POVs. We show that covariance matrices from healthy and damaged bearings exhibit different POV profiles, which can be a damage-sensitive feature. We also illustrate the conventional approach of feature extraction, observing the kurtosis value of the measured signal, to compare against the proposed technique. The study demonstrates the feasibility of wavelet-based denoising, and shows through laboratory experiments that tracking the proper orthogonal values of the covariance matrix of the IMFs can be an effective and reliable measure for monitoring bearing faults. PMID:25196008
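
    Once the first IMF of each segment is collected (the EMD step itself is available in third-party packages such as PyEMD and is assumed done upstream), the proper orthogonal values are just the eigenvalues of the segment covariance matrix; a sketch with assumed array shapes:

```python
import numpy as np

def proper_orthogonal_values(imfs):
    """POV profile from a stack of first IMFs.

    imfs : (n_segments, n_samples) array, one first-IMF per signal segment.
    Returns the normalized eigenvalues of the segment covariance matrix.
    """
    C = np.cov(imfs)                     # (n_segments, n_segments)
    vals = np.linalg.eigvalsh(C)[::-1]   # descending proper orthogonal values
    return vals / vals.sum()             # normalized POV profile
```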

  15. Denoising of ECG signal during spaceflight using singular value decomposition

    NASA Astrophysics Data System (ADS)

    Li, Zhuo; Wang, Li

    2009-12-01

    The singular value decomposition (SVD) method is introduced to denoise the ECG signal during spaceflight. The theoretical basis of the SVD method is briefly given. The denoising process is presented using a segment of real ECG signal. We improve the algorithm for calculating the Singular Value Ratio (SVR) spectrum and propose a constructive approach for analyzing characteristic patterns. The ECG signal is reproduced well and the noise is effectively suppressed. The SVD method is shown to be suitable for denoising the ECG signal.
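
    One common way to apply SVD denoising to a quasi-periodic signal like ECG is to stack aligned beats into a matrix and keep only the leading singular components; a toy sketch (beat segmentation and the paper's SVR-spectrum rank selection are assumed done elsewhere, and the pulse shape is invented).

```python
import numpy as np

def svd_denoise_beats(beats, rank=2):
    """Denoise a matrix of aligned heartbeats (one beat per row) by keeping
    the leading singular components, which capture the repetitive ECG
    morphology while averaging out the noise."""
    U, s, Vt = np.linalg.svd(beats, full_matrices=False)
    return U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank]

rng = np.random.default_rng(6)
t = np.linspace(0, 1, 300)
template = np.exp(-((t - 0.5) ** 2) / 0.002)         # toy QRS-like pulse
beats = template + rng.normal(0, 0.2, (40, t.size))  # 40 noisy beats
clean = svd_denoise_beats(beats, rank=1)
```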

  16. Nonlocal hierarchical dictionary learning using wavelets for image denoising.

    PubMed

    Yan, Ruomei; Shao, Ling; Liu, Yan

    2013-12-01

    Exploiting the sparsity within representation models for images is critical for image denoising. The best currently available denoising methods take advantage of the sparsity from image self-similarity, pre-learned, and fixed representations. Most of these methods, however, still have difficulties in tackling high noise levels or noise models other than Gaussian. In this paper, the multiresolution structure and sparsity of wavelets are employed by nonlocal dictionary learning in each decomposition level of the wavelets. Experimental results show that our proposed method outperforms two state-of-the-art image denoising algorithms on higher noise levels. Furthermore, our approach is more adaptive to the less extensively researched uniform noise.

  17. Image and video denoising for distributed optical fibre sensors

    NASA Astrophysics Data System (ADS)

    Soto, Marcelo A.; Ramírez, Jaime A.; Thévenaz, Luc

    2017-04-01

    A technique based on a multi-dimensional signal processing approach is here described for performance enhancement of distributed optical fibre sensors. In particular, the main features of linear and nonlinear image denoising techniques are described for signal-to-noise ratio enhancement in Brillouin optical time-domain analysers. Experimental results demonstrate the possibility to enhance the performance of distributed Brillouin sensors by more than 13 dB using a nonlinear image denoising approach, while more than 20 dB enhancement can be obtained with video denoising.

  18. The Luminosity Function at z ~ 8 from 97 Y-band Dropouts: Inferences about Reionization

    NASA Astrophysics Data System (ADS)

    Schmidt, Kasper B.; Treu, Tommaso; Trenti, Michele; Bradley, Larry D.; Kelly, Brandon C.; Oesch, Pascal A.; Holwerda, Benne W.; Shull, J. Michael; Stiavelli, Massimo

    2014-05-01

    We present the largest search to date for Y-band dropout galaxies (z ~ 8 Lyman break galaxies, LBGs) based on 350 arcmin^2 of Hubble Space Telescope observations in the V, Y, J, and H bands from the Brightest of Reionizing Galaxies (BoRG) survey. In addition to previously published data, the BoRG13 data set presented here includes approximately 50 arcmin^2 of new data and deeper observations of two previous BoRG pointings, from which we present 9 new z ~ 8 LBG candidates, bringing the total number of BoRG Y-band dropouts to 38 with 25.5 <= m_J <= 27.6 (AB system). We introduce a new Bayesian formalism for estimating the galaxy luminosity function, which does not require binning (and thus smearing) of the data and includes a likelihood based on the formally correct binomial distribution as opposed to the often-used approximate Poisson distribution. We demonstrate the utility of the new method on a sample of 97 Y-band dropouts that combines the bright BoRG galaxies with the fainter sources published in Bouwens et al. from the Hubble Ultra Deep Field and Early Release Science programs. We show that the z ~ 8 luminosity function is well described by a Schechter function over its full dynamic range with a characteristic magnitude M* = -20.15^{+0.29}_{-0.38}, a faint-end slope of alpha = -1.87^{+0.26}_{-0.26}, and a number density of log_10 phi* [Mpc^{-3}] = -3.24^{+0.25}_{-0.24}. Integrated down to M = -17.7, this luminosity function yields a luminosity density log_10 epsilon [erg s^{-1} Hz^{-1} Mpc^{-3}] = 25.52^{+0.05}_{-0.05}. Our luminosity function analysis is consistent with previously published determinations within 1σ. The error analysis suggests that uncertainties on the faint-end slope are still too large to draw a firm conclusion about its evolution with redshift. We use our statistical framework to discuss the implication of our study for the physics of reionization. By assuming theoretically motivated priors on the clumping
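
    For concreteness, the quoted Schechter fit can be integrated numerically; the bright-end cutoff at M = -24 and the AB zero-point conversion are my assumptions for this sketch, not values from the paper.

```python
import numpy as np
from scipy.integrate import quad

M_star, alpha, phi_star = -20.15, -1.87, 10.0 ** -3.24   # phi* in Mpc^-3

def schechter_mag(M):
    """Schechter function per unit absolute magnitude."""
    x = 10.0 ** (0.4 * (M_star - M))
    return 0.4 * np.log(10.0) * phi_star * x ** (alpha + 1) * np.exp(-x)

def lum_nu(M):
    """AB absolute magnitude -> luminosity in erg s^-1 Hz^-1 (d = 10 pc)."""
    area = 4.0 * np.pi * (10 * 3.0857e18) ** 2            # cm^2
    return area * 10.0 ** (-0.4 * (M + 48.6))

n, _ = quad(schechter_mag, -24.0, -17.7)                  # number density
eps, _ = quad(lambda M: lum_nu(M) * schechter_mag(M), -24.0, -17.7)
print(f"n = {n:.2e} Mpc^-3, log10(eps) = {np.log10(eps):.2f}")
```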

  19. Analytical methods for inferring functional effects of single base pair substitutions in human cancers.

    PubMed

    Lee, William; Yue, Peng; Zhang, Zemin

    2009-10-01

    Cancer is a genetic disease that results from a variety of genomic alterations. Identification of some of these causal genetic events has enabled the development of targeted therapeutics and spurred efforts to discover the key genes that drive cancer formation. Rapidly improving sequencing and genotyping technology continues to generate increasingly large datasets that require analytical methods to identify functional alterations that deserve additional investigation. This review examines statistical and computational approaches for the identification of functional changes among sets of single-nucleotide substitutions. Frequency-based methods identify the most highly mutated genes in large-scale cancer sequencing efforts while bioinformatics approaches are effective for independent evaluation of both non-synonymous mutations and polymorphisms. We also review current knowledge and tools that can be utilized for analysis of alterations in non-protein-coding genomic sequence.

  20. Bayesian inference for functional response in a stochastic predator-prey system.

    PubMed

    Gilioli, Gianni; Pasquali, Sara; Ruggeri, Fabrizio

    2008-02-01

    We present a Bayesian method for functional response parameter estimation starting from time series of field data on predator-prey dynamics. Population dynamics is described by a system of stochastic differential equations in which behavioral stochasticities are represented by noise terms affecting each population as well as their interaction. We focus on the estimation of a behavioral parameter appearing in the functional response of predator to prey abundance when a small number of observations is available. To deal with small sample sizes, latent data are introduced between each pair of field observations and are considered as missing data. The method is applied to both simulated and observational data. The results obtained using different numbers of latent data are compared with those achieved following a frequentist approach. As a case study, we consider an acarine predator-prey system relevant to biological control problems.
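
    A sketch of the kind of model described: a stochastic predator-prey system with a Holling type II functional response, simulated by Euler-Maruyama. All parameter values, and the type II form itself, are illustrative assumptions; the paper's inference targets a behavioral parameter of such a response.

```python
import numpy as np

def simulate(n0, p0, T=50.0, dt=0.01, a=1.2, h=0.3, r=1.0, K=10.0,
             e=0.5, m=0.4, sigma=0.05, seed=0):
    """Euler-Maruyama simulation of a stochastic predator-prey system.

    Functional response: f(N) = a*N / (1 + a*h*N) (Holling type II), with
    multiplicative noise on each population representing behavioral
    stochasticity.
    """
    rng = np.random.default_rng(seed)
    steps = int(T / dt)
    N, P = np.empty(steps), np.empty(steps)
    N[0], P[0] = n0, p0
    for t in range(steps - 1):
        f = a * N[t] / (1.0 + a * h * N[t])        # functional response
        dW1, dW2 = rng.normal(0, np.sqrt(dt), 2)   # behavioral noise
        N[t + 1] = max(N[t] + (r * N[t] * (1 - N[t] / K) - f * P[t]) * dt
                       + sigma * N[t] * dW1, 0.0)
        P[t + 1] = max(P[t] + (e * f * P[t] - m * P[t]) * dt
                       + sigma * P[t] * dW2, 0.0)
    return N, P

prey, predators = simulate(n0=5.0, p0=2.0)
```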

  1. Inference for the median residual life function in sequential multiple assignment randomized trials.

    PubMed

    Kidwell, Kelley M; Ko, Jin H; Wahed, Abdus S

    2014-04-30

    In survival analysis, median residual lifetime is often used as a summary measure to assess treatment effectiveness; it is not clear, however, how such a quantity could be estimated for a given dynamic treatment regimen using data from sequential randomized clinical trials. We propose a method to estimate a dynamic treatment regimen-specific median residual life (MERL) function from sequential multiple assignment randomized trials. We present the MERL estimator, which is based on inverse probability weighting, as well as two variance estimates for the MERL estimator. One variance estimate follows from Lunceford, Davidian and Tsiatis' 2002 survival-function-based variance estimate and the other uses the sandwich estimator. The MERL estimator is evaluated, and its two variance estimates are compared, through simulation studies showing that the estimator and both variance estimates produce approximately unbiased results in large samples. To demonstrate our methods, the estimator has been applied to data from a sequentially randomized leukemia clinical trial.

  2. Simple Math is Enough: Two Examples of Inferring Functional Associations from Genomic Data

    NASA Technical Reports Server (NTRS)

    Liang, Shoudan

    2003-01-01

    Non-random features in genomic data are usually biologically meaningful. The key is to choose the feature well. Having a p-value-based score prioritizes the findings. If two proteins share an unusually large number of common interaction partners, they tend to be involved in the same biological process. We used this finding to predict the functions of 81 un-annotated proteins in yeast.
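
    The "simple math" here is a hypergeometric tail probability; a sketch of the p-value for the number of shared interaction partners (the counts in the example are invented):

```python
from scipy.stats import hypergeom

def shared_partner_pvalue(n_total, deg_a, deg_b, n_shared):
    """P-value for observing >= n_shared common interaction partners between
    two proteins with deg_a and deg_b partners among n_total proteins, under
    a null of independently drawn random partner sets (hypergeometric tail)."""
    return hypergeom.sf(n_shared - 1, n_total, deg_a, deg_b)

# e.g. 6000 yeast proteins; A has 40 partners, B has 30, and they share 12
print(shared_partner_pvalue(6000, 40, 30, 12))
```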

  3. Inferring functional connectivity in MRI using Bayesian network structure learning with a modified PC algorithm

    PubMed Central

    Iyer, Swathi; Shafran, Izhak; Grayson, David; Gates, Kathleen; Nigg, Joel; Fair, Damien

    2013-01-01

    Resting state functional connectivity MRI (rs-fcMRI) is a popular technique used to gauge the functional relatedness between regions in the brain for typical and special populations. Most of the work to date determines this relationship by using Pearson's correlation on BOLD fMRI timeseries. However, it has been recognized that there are at least two key limitations to this method. First, it is not possible to resolve the direct and indirect connections/influences. Second, the direction of information flow between the regions cannot be differentiated. In the current paper, we follow up on recent work by Smith et al. (2011), and apply a Bayesian approach called the PC algorithm to both simulated data and empirical data to determine whether these two factors can be discerned with group average, as opposed to single subject, functional connectivity data. When applied to simulated individual subjects, the algorithm performs well in determining indirect and direct connections but fails in determining directionality. However, when applied at the group level, the PC algorithm gives strong results for both indirect and direct connections and the direction of information flow. Applying the algorithm to empirical data, using a diffusion-weighted imaging (DWI) structural connectivity matrix as the baseline, the PC algorithm outperformed the direct correlations. We conclude that, under certain conditions, the PC algorithm leads to an improved estimate of brain network structure compared to the traditional connectivity analysis based on correlations. PMID:23501054
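
    The elementary operation inside the PC algorithm is a conditional-independence test; for Gaussian data this is usually a Fisher-z test on the partial correlation, sketched below. A full PC implementation also needs the edge-deletion loop over conditioning sets and the orientation rules, which are omitted here.

```python
import numpy as np
from scipy.stats import norm

def partial_corr_independent(data, i, j, cond, alpha=0.05):
    """Fisher-z test of conditional independence between variables i and j
    given the variables in `cond`.

    data : (n_samples, n_vars) array of (e.g. group-averaged) timeseries.
    Returns True if the edge i-j can be dropped at level alpha.
    """
    idx = [i, j] + list(cond)
    P = np.linalg.inv(np.corrcoef(data[:, idx], rowvar=False))
    r = -P[0, 1] / np.sqrt(P[0, 0] * P[1, 1])   # partial correlation
    n = data.shape[0]
    z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - len(cond) - 3)
    p = 2 * (1 - norm.cdf(abs(z)))
    return p > alpha
```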

  4. Inferred basilar-membrane response functions for listeners with mild to moderate sensorineural hearing loss

    NASA Astrophysics Data System (ADS)

    Plack, Christopher J.; Drga, Vit; Lopez-Poveda, Enrique A.

    2004-04-01

    Psychophysical estimates of cochlear function suggest that normal-hearing listeners exhibit a compressive basilar-membrane (BM) response. Listeners with moderate to severe sensorineural hearing loss may exhibit a linearized BM response along with reduced gain, suggesting the loss of an active cochlear mechanism. This study investigated how the BM response changes with increasing hearing loss by comparing psychophysical measures of BM compression and gain for normal-hearing listeners with those for listeners who have mild to moderate sensorineural hearing loss. Data were collected from 16 normal-hearing listeners and 12 ears from 9 hearing-impaired listeners. The forward masker level required to mask a fixed low-level, 4000-Hz signal was measured as a function of the masker-signal interval using a masker frequency of either 2200 or 4000 Hz. These plots are known as temporal masking curves (TMCs). BM response functions derived from the TMCs showed a systematic reduction in gain with degree of hearing loss. Contrary to current thinking, however, no clear relationship was found between maximum compression and absolute threshold.

  5. Thermal consequences of increased pelt loft infer an additional utilitarian function for grooming.

    PubMed

    McFarland, Richard; Henzi, S Peter; Barrett, Louise; Wanigaratne, Anuradha; Coetzee, Elsie; Fuller, Andrea; Hetem, Robyn S; Mitchell, Duncan; Maloney, Shane K

    2015-12-20

    A strong case has been made that the primary function of grooming is hygienic. Nevertheless, its persistence in the absence of hygienic demand, and its obvious tactical importance to members of primate groups, underpins the view that grooming has become uncoupled from its utilitarian objectives and is now principally of social benefit. We identify improved thermoregulatory function as a previously unexplored benefit of grooming and so broaden our understanding of the utilitarian function of this behavior. Deriving the maximum thermal benefits from the pelt requires that it be kept clean and that the loft of the pelt is maintained (i.e., greater pelt depth), both of which can be achieved by grooming. In a series of wind-tunnel experiments, we measured the heat transfer characteristics of vervet monkey (Chlorocebus pygerythrus) pelts in the presence and absence of backcombing, which we used as a proxy for grooming. Our data indicate that backcombed pelts have improved thermal performance, offering significantly better insulation than flattened pelts and, hence, better protection from the cold. Backcombed pelts also had significantly lower radiant heat loads compared to flattened pelts, providing improved protection from radiant heat. Such thermal benefits, therefore, furnish grooming with an additional practical value to which its social use is anchored. Given the link between thermoregulatory ability and energy expenditure, our findings suggest that grooming for thermal benefits may be an important explanatory variable in the relationship between levels of sociability and individual fitness. Am. J. Primatol. © 2015 Wiley Periodicals, Inc.

  6. The luminosity function at z ∼ 8 from 97 Y-band dropouts: Inferences about reionization

    SciTech Connect

    Schmidt, Kasper B.; Treu, Tommaso; Kelly, Brandon C.; Trenti, Michele; Bradley, Larry D.; Stiavelli, Massimo; Oesch, Pascal A.; Shull, J. Michael

    2014-05-01

    We present the largest search to date for Y-band dropout galaxies (z ∼ 8 Lyman break galaxies, LBGs) based on 350 arcmin^2 of Hubble Space Telescope observations in the V, Y, J, and H bands from the Brightest of Reionizing Galaxies (BoRG) survey. In addition to previously published data, the BoRG13 data set presented here includes approximately 50 arcmin^2 of new data and deeper observations of two previous BoRG pointings, from which we present 9 new z ∼ 8 LBG candidates, bringing the total number of BoRG Y-band dropouts to 38 with 25.5 ≤ m_J ≤ 27.6 (AB system). We introduce a new Bayesian formalism for estimating the galaxy luminosity function, which does not require binning (and thus smearing) of the data and includes a likelihood based on the formally correct binomial distribution as opposed to the often-used approximate Poisson distribution. We demonstrate the utility of the new method on a sample of 97 Y-band dropouts that combines the bright BoRG galaxies with the fainter sources published in Bouwens et al. from the Hubble Ultra Deep Field and Early Release Science programs. We show that the z ∼ 8 luminosity function is well described by a Schechter function over its full dynamic range with a characteristic magnitude M^⋆ = −20.15^{+0.29}_{−0.38}, a faint-end slope of α = −1.87^{+0.26}_{−0.26}, and a number density of log_10 ϕ^⋆ [Mpc^{−3}] = −3.24^{+0.25}_{−0.24}. Integrated down to M = −17.7, this luminosity function yields a luminosity density log_10 ϵ [erg s^{−1} Hz^{−1} Mpc^{−3}] = 25.52^{+0.05}_{−0.05}. Our luminosity function analysis is consistent with previously published determinations within 1σ. The error analysis suggests that uncertainties on the faint-end slope are still too large to draw a firm conclusion about its evolution with redshift. We use our statistical framework to discuss the implication of our study for the physics of

  7. Function of pretribosphenic and tribosphenic mammalian molars inferred from 3D animation

    NASA Astrophysics Data System (ADS)

    Schultz, Julia A.; Martin, Thomas

    2014-10-01

    Appearance of the tribosphenic molar in the Late Jurassic (160 Ma) is a crucial innovation for food processing in mammalian evolution. This molar type is characterized by a protocone, a talonid basin and a two-phased chewing cycle, all of which are apomorphic. In this functional study on the teeth of Late Jurassic Dryolestes leiriensis and the living marsupial Monodelphis domestica, we demonstrate that pretribosphenic and tribosphenic molars show fundamental differences of food reduction strategies, representing a shift in dental function during the transition of tribosphenic mammals. By using the Occlusal Fingerprint Analyser (OFA), we simulated the chewing motions of the pretribosphenic Dryolestes that represents an evolutionary precursor condition to such tribosphenic mammals as Monodelphis. Animation of chewing path and detection of collisional contacts between virtual models of teeth suggests that Dryolestes differs from the classical two-phased chewing movement of tribosphenidans, due to the narrowing of the interdental space in cervical (crown-root transition) direction, the inclination angle of the hypoflexid groove, and the unicuspid talonid. The pretribosphenic chewing cycle is equivalent to phase I of the tribosphenic chewing cycle, but the former lacks phase II of the tribosphenic chewing. The new approach can analyze the chewing cycle of the jaw by using polygonal 3D models of tooth surfaces, in a way that is complementary to the electromyography and strain gauge studies of muscle function of living animals. The technique allows alignment and scaling of isolated fossil teeth and utilizes the wear facet orientation and striation of the teeth to reconstruct the chewing path of extinct mammals.

  9. A Nearly Universal Solar-Wind Magnetosphere Coupling Function Inferred from Ten Magnetospheric State Variables

    NASA Astrophysics Data System (ADS)

    Newell, P. T.; Sotirelis, T.; Liou, K.; Meng, C. I.; Rich, F. J.

    2006-12-01

    We investigated whether one or a few coupling functions can best represent the interaction between the solar wind and the magnetosphere. Ten characterizations of the magnetosphere were studied: five from ground-based magnetometers, including Dst, Kp, AE, AU, and AL, and five from other sources, including auroral power (Polar UVI), cusp latitude and b2i (both DMSP), geosynchronous magnetic inclination angle (GOES), and polar cap size (SuperDARN). These were correlated with more than 20 candidate solar wind coupling functions. A single coupling function, representing the rate magnetic flux is opened at the magnetopause, correlated best with 9 out of 10 indices of magnetospheric condition. This is dΦ_MP/dt = v^{4/3} B_T^{2/3} sin^{8/3}(θ_c/2), calculated from (rate IMF field lines approach the magnetopause, v)(percent of IMF lines which merge, sin^{8/3}(θ_c/2))(magnitude of the magnetopause field, B_mp)(merging line length, (B_T/B_mp)^{2/3}). The merging line length is based on flux matching between the solar wind and a dipole field, and agrees with an IMF superposed on a vacuum dipole. The IMF clock angle dependence matches the merging rate reported at high altitude. The nonlinearities of the magnetospheric response to B_T and v are evident when the mean values of indices are plotted, as well as in the superior correlations from dΦ_MP/dt. A wide variety of magnetospheric phenomena can thus be accurately predicted ab initio by just a single function, estimating the rate magnetic flux is opened on the dayside magnetopause. Across all state variables studied, dΦ_MP/dt accounts for about 57.2 percent of the variance, compared to 50.9 for E_KL and 48.8 for vBs. All data sets included thousands of points over many years, up to two solar cycles. The sole index which does not correlate best with dΦ_MP/dt is Dst, which correlates best (r = 0.87) with p^{1/2} dΦ_MP/dt. If dΦ_MP/dt were credited with this success, its average score would be even higher.

  10. A nearly universal solar wind-magnetosphere coupling function inferred from 10 magnetospheric state variables

    NASA Astrophysics Data System (ADS)

    Newell, P. T.; Sotirelis, T.; Liou, K.; Meng, C.-I.; Rich, F. J.

    2007-01-01

    We investigated whether one or a few coupling functions can represent best the interaction between the solar wind and the magnetosphere over a wide variety of magnetospheric activity. Ten variables which characterize the state of the magnetosphere were studied. Five indices from ground-based magnetometers were selected, namely Dst, Kp, AE, AU, and AL, and five from other sources, namely auroral power (Polar UVI), cusp latitude (sin(Λc)), b2i (both DMSP), geosynchronous magnetic inclination angle (GOES), and polar cap size (SuperDARN). These indices were correlated with more than 20 candidate solar wind coupling functions. One function, representing the rate magnetic flux is opened at the magnetopause, correlated best with 9 out of 10 indices of magnetospheric activity. This is dΦ_MP/dt = v^{4/3} B_T^{2/3} sin^{8/3}(θ_c/2), calculated from (rate IMF field lines approach the magnetopause, ~v)(% of IMF lines which merge, sin^{8/3}(θ_c/2))(interplanetary field magnitude, B_T)(merging line length, ~(B_MP/B_T)^{1/3}). The merging line length is based on flux matching between the solar wind and a dipole field and agrees with a superposed IMF on a vacuum dipole. The IMF clock angle dependence matches the merging rate reported (albeit with limited statistics) at high altitude. The nonlinearities of the magnetospheric response to B_T and v are evident when the mean values of indices are plotted, in scatterplots, and in the superior correlations from dΦ_MP/dt. Our results show that a wide variety of magnetospheric phenomena can be predicted with reasonable accuracy (r > 0.80 in several cases) ab initio, that is without the time history of the target index, by a single function, estimating the dayside merging rate. Across all state variables studied (including AL, which is hard to predict, and polar cap size, which is hard to measure), dΦ_MP/dt accounts for about 57.2% of the variance, compared to 50.9% for E_KL and 48.8% for vBs. All data sets included at least thousands of points over many
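
    The coupling function itself is a one-liner; the units and the typical solar-wind values below are illustrative, and no leading constant is applied since the abstract quotes the function only up to proportionality.

```python
import numpy as np

def dphi_mp_dt(v, b_t, theta_c):
    """Coupling function v^{4/3} B_T^{2/3} sin^{8/3}(theta_c/2).

    v [km/s] solar wind speed, b_t [nT] transverse IMF magnitude,
    theta_c [rad] IMF clock angle; returned in arbitrary units.
    """
    return (v ** (4.0 / 3.0) * b_t ** (2.0 / 3.0)
            * np.sin(theta_c / 2.0) ** (8.0 / 3.0))

print(dphi_mp_dt(450.0, 5.0, np.pi / 2))   # typical solar-wind values
```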

  11. The use of structural modelling to infer structure and function in biocontrol agents.

    PubMed

    Berry, Colin; Board, Jason

    2017-01-01

    Homology modelling can provide important insights into the structures of proteins when a related protein structure has already been solved. However, for many proteins, including a number of invertebrate-active toxins and accessory proteins, no such templates exist. In these cases, techniques of ab initio, template-independent modelling can be employed to generate models that may give insight into structure and function. In this overview, examples of both the problems and the potential benefits of ab initio techniques are illustrated. Consistent modelling results may indicate useful approximations to actual protein structures and can thus allow the generation of hypotheses regarding activity that can be tested experimentally.

  12. Inferring cortical function in the mouse visual system through large-scale systems neuroscience.

    PubMed

    Hawrylycz, Michael; Anastassiou, Costas; Arkhipov, Anton; Berg, Jim; Buice, Michael; Cain, Nicholas; Gouwens, Nathan W; Gratiy, Sergey; Iyer, Ramakrishnan; Lee, Jung Hoon; Mihalas, Stefan; Mitelut, Catalin; Olsen, Shawn; Reid, R Clay; Teeter, Corinne; de Vries, Saskia; Waters, Jack; Zeng, Hongkui; Koch, Christof

    2016-07-05

    The scientific mission of the Project MindScope is to understand neocortex, the part of the mammalian brain that gives rise to perception, memory, intelligence, and consciousness. We seek to quantitatively evaluate the hypothesis that neocortex is a relatively homogeneous tissue, with smaller functional modules that perform a common computational function replicated across regions. We here focus on the mouse as a mammalian model organism with genetics, physiology, and behavior that can be readily studied and manipulated in the laboratory. We seek to describe the operation of cortical circuitry at the computational level by comprehensively cataloging and characterizing its cellular building blocks along with their dynamics and their cell type-specific connectivities. The project is also building large-scale experimental platforms (i.e., brain observatories) to record the activity of large populations of cortical neurons in behaving mice subject to visual stimuli. A primary goal is to understand the series of operations from visual input in the retina to behavior by observing and modeling the physical transformations of signals in the corticothalamic system. We here focus on the contribution that computer modeling and theory make to this long-term effort.

  13. Inferring cortical function in the mouse visual system through large-scale systems neuroscience

    PubMed Central

    Hawrylycz, Michael; Anastassiou, Costas; Arkhipov, Anton; Berg, Jim; Buice, Michael; Cain, Nicholas; Gouwens, Nathan W.; Gratiy, Sergey; Iyer, Ramakrishnan; Lee, Jung Hoon; Mihalas, Stefan; Mitelut, Catalin; Olsen, Shawn; Reid, R. Clay; Teeter, Corinne; de Vries, Saskia; Waters, Jack; Zeng, Hongkui; Koch, Christof

    2016-01-01

    The scientific mission of Project MindScope is to understand the neocortex, the part of the mammalian brain that gives rise to perception, memory, intelligence, and consciousness. We seek to quantitatively evaluate the hypothesis that neocortex is a relatively homogeneous tissue, with smaller functional modules that perform a common computational function replicated across regions. We here focus on the mouse as a mammalian model organism with genetics, physiology, and behavior that can be readily studied and manipulated in the laboratory. We seek to describe the operation of cortical circuitry at the computational level by comprehensively cataloging and characterizing its cellular building blocks along with their dynamics and their cell type-specific connectivities. The project is also building large-scale experimental platforms (i.e., brain observatories) to record the activity of large populations of cortical neurons in behaving mice subject to visual stimuli. A primary goal is to understand the series of operations from visual input in the retina to behavior by observing and modeling the physical transformations of signals in the corticothalamic system. We here focus on the contribution that computer modeling and theory make to this long-term effort. PMID:27382147

  14. Functional morphology of the hallucal metatarsal with implications for inferring grasping ability in extinct primates.

    PubMed

    Goodenberger, Katherine E; Boyer, Doug M; Orr, Caley M; Jacobs, Rachel L; Femiani, John C; Patel, Biren A

    2015-03-01

    Primate evolutionary morphologists have argued that selection for life in a fine branch niche resulted in grasping specializations that are reflected in the hallucal metatarsal (Mt1) morphology of extant "prosimians", while a transition to use of relatively larger, horizontal substrates explains the apparent loss of such characters in anthropoids. Accordingly, these morphological characters (Mt1 torsion, peroneal process length and thickness, and physiological abduction angle) have been used to reconstruct grasping ability and locomotor mode in the earliest fossil primates. Although these characters are prominently featured in debates on the origin and subsequent radiation of Primates, questions remain about their functional significance. This study examines the relationship between these morphological characters of the Mt1 and a novel metric of pedal grasping ability for a large number of extant taxa in a phylogenetic framework. Results indicate greater Mt1 torsion in taxa that engage in hallucal grasping and in those that utilize relatively small substrates more frequently. This study provides evidence that Carpolestes simpsoni has a torsion value more similar to grasping primates than to any scandentian. The results also show that taxa that habitually grasp vertical substrates are distinguished from other taxa in having relatively longer peroneal processes. Furthermore, a longer peroneal process is also correlated with calcaneal elongation, a metric previously found to reflect leaping proclivity. A more refined understanding of the functional associations between Mt1 morphology and behavior in extant primates enhances the potential for using these morphological characters to comprehend primate (locomotor) evolution.

  15. Functional diversity of microbial communities in pristine aquifers inferred by PLFA- and sequencing-based approaches

    NASA Astrophysics Data System (ADS)

    Schwab, Valérie F.; Herrmann, Martina; Roth, Vanessa-Nina; Gleixner, Gerd; Lehmann, Robert; Pohnert, Georg; Trumbore, Susan; Küsel, Kirsten; Totsche, Kai U.

    2017-05-01

    Microorganisms in groundwater play an important role in aquifer biogeochemical cycles and water quality. However, the mechanisms linking the functional diversity of microbial populations and the groundwater physico-chemistry are still not well understood due to the complexity of interactions between surface and subsurface. Within the framework of the Hainich Critical Zone Exploratory (north-western Thuringia, central Germany) of the Collaborative Research Centre AquaDiva, we used the relative abundances of phospholipid-derived fatty acids (PLFAs) to link specific biochemical markers within the microbial communities to the spatio-temporal changes of the groundwater physico-chemistry. The functional diversities of the microbial communities were mainly correlated with groundwater chemistry, including dissolved O2, Fet and NH4+ concentrations. Abundances of PLFAs derived from eukaryotes and potential nitrite-oxidizing bacteria (11Me16:0 as biomarker for Nitrospira moscoviensis) were high at sites with elevated O2 concentration where groundwater recharge supplies bioavailable substrates. In anoxic groundwaters richer in Fet, PLFAs abundant in sulfate-reducing bacteria (SRB), iron-reducing bacteria and fungi increased with Fet and HCO3- concentrations, suggesting the occurrence of active iron reduction and the possible role of fungi in mediating iron solubilization and transport in those aquifer domains. In more NH4+-rich anoxic groundwaters, anammox bacteria and SRB-derived PLFAs increased with NH4+ concentration, further evidencing the dependence of the anammox process on ammonium concentration and potential links between SRB and anammox bacteria. Additional support for the PLFA-based bacterial communities was found in DNA- and RNA-based Illumina MiSeq amplicon sequencing of bacterial 16S rRNA genes, which showed high predominance of nitrite-oxidizing bacteria Nitrospira, e.g. Nitrospira moscoviensis, in oxic aquifer zones and of anammox bacteria in more NH4+-rich

  16. Alpha values as a function of sample size, effect size, and power: accuracy over inference.

    PubMed

    Bradley, M T; Brand, A

    2013-06-01

    Tables of alpha values as a function of sample size, effect size, and desired power are presented. The tables indicate expected alphas for small, medium, and large effect sizes given a variety of sample sizes. It is evident that sample sizes for most psychological studies are adequate for large effect sizes, defined as .8. The typical alpha level of .05 and desired power of 90% can be achieved with 70 participants in two groups. It is doubtful whether these ideal levels of alpha and power have generally been achieved for medium effect sizes in actual research, since 170 participants would be required. Small effect sizes have rarely been tested with an adequate number of participants or power. Implications are discussed.
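
    The sample-size arithmetic quoted above is easy to reproduce. The sketch below is a minimal illustration, assuming a two-sample t-test and Cohen's conventional effect sizes; it uses the statsmodels power module, not anything from the paper itself.

        from statsmodels.stats.power import TTestIndPower

        analysis = TTestIndPower()
        for label, d in [("large", 0.8), ("medium", 0.5), ("small", 0.2)]:
            # Per-group n for a two-sided test at alpha = .05 and 90% power
            n1 = analysis.solve_power(effect_size=d, alpha=0.05, power=0.90)
            print(f"{label} effect (d = {d}): about {2 * round(n1)} participants in total")

    The totals for large and medium effects land close to the 70 and 170 participants cited in the abstract.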

  17. Crustal structure beneath the Japanese Islands inferred from receiver function analysis using similar earthquakes

    NASA Astrophysics Data System (ADS)

    Igarashi, Toshihiro

    2016-04-01

    The stress concentration and strain accumulation process due to inter-plate coupling of the subducting plate should have a large effect on shallow inland earthquakes that occur in the overriding plate. Information on the crustal structure and crustal thickness is important for understanding this process. In this study, I applied receiver function analysis using similar earthquakes to estimate the crustal velocity structures beneath the Japanese Islands. Because similar earthquakes occur repeatedly at almost the same place, they are useful for extracting information on the spatial distribution and temporal changes of seismic velocity structures beneath the seismic stations. I used data from telemetric seismographic networks covering the Japanese Islands and moderate-sized similar earthquakes which occurred in the Southern Hemisphere with epicentral distances between 30 and 90 degrees over about 26 years from October 1989. Data analysis was performed separately before and after the 2011 Tohoku-Oki earthquake. To identify the spatial distribution of crustal structure, I searched for the best-correlated model between the observed receiver function at each station and synthetic ones by using a grid search method. As a result, I clarified the spatial distribution of the crustal velocity structures. The spatial patterns of velocities from the ground surface to 5 km depth correspond with basement depth models, although the velocities are slower than those of tomography models. They indicate thick sediment layers in several plain and basin areas. The crustal velocity perturbations are consistent with existing tomography models. The active volcanoes correspond to low-velocity zones from the upper crust to the crust-mantle transition. A comparison of the crustal structure before and after the 2011 Tohoku-Oki earthquake suggests that the northeastern Japan arc changed to lower velocities in some areas. This kind of velocity change might be due to other effects such as changes of

  18. Bayesian inference in an item response theory model with a generalized student t link function

    NASA Astrophysics Data System (ADS)

    Azevedo, Caio L. N.; Migon, Helio S.

    2012-10-01

    In this paper we introduce a new item response theory (IRT) model with a generalized Student t-link function with unknown degrees of freedom (df), named the generalized t-link (GtL) IRT model. In this model we consider only the difficulty parameter in the item response function. GtL is an alternative to the two-parameter logit and probit models, since the degrees of freedom play a role similar to that of the discrimination parameter. However, the behavior of the GtL curves differs from those of the two-parameter models and the usual Student t link, since in GtL the curves obtained from different df's can cross the probit curves at more than one latent trait level. The GtL model has properties similar to those of generalized linear mixed models, such as the existence of sufficient statistics and easy parameter interpretation. Also, many techniques of parameter estimation, model fit assessment and residual analysis developed for those models can be used for the GtL model. We develop fully Bayesian estimation and model fit assessment tools through a Metropolis-Hastings step within a Gibbs sampling algorithm. We assess sensitivity to the prior choice for the degrees of freedom. The simulation study indicates that the algorithm recovers all parameters properly. In addition, some Bayesian model fit assessment tools are considered. Finally, a real data set is analyzed using our approach and other usual models. The results indicate that our model fits the data better than the two-parameter models.
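
    The item response function itself is compact. The sketch below (with illustrative values of the difficulty b and the df, not the authors' code) evaluates the difficulty-only t-link curve alongside a probit curve; as described above, the df controls the slope much as a discrimination parameter would.

        import numpy as np
        from scipy import stats

        def irt_t_link(theta, b, nu):
            """P(correct | theta) = T_nu(theta - b), a Student-t CDF link
            with difficulty b; nu (df) acts like discrimination."""
            return stats.t.cdf(theta - b, df=nu)

        theta = np.linspace(-4.0, 4.0, 9)
        print(irt_t_link(theta, b=0.0, nu=3))   # heavy-tailed link
        print(stats.norm.cdf(theta))            # probit, for comparison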

  19. Denoising, deconvolving, and decomposing photon observations. Derivation of the D3PO algorithm

    NASA Astrophysics Data System (ADS)

    Selig, Marco; Enßlin, Torsten A.

    2015-02-01

    The analysis of astronomical images is a non-trivial task. The D3PO algorithm addresses the inference problem of denoising, deconvolving, and decomposing photon observations. Its primary goal is the simultaneous but individual reconstruction of the diffuse and point-like photon flux given a single photon count image, where the fluxes are superimposed. In order to discriminate between these morphologically different signal components, a probabilistic algorithm is derived in the language of information field theory based on a hierarchical Bayesian parameter model. The signal inference exploits prior information on the spatial correlation structure of the diffuse component and the brightness distribution of the spatially uncorrelated point-like sources. A maximum a posteriori solution and a solution minimizing the Gibbs free energy of the inference problem using variational Bayesian methods are discussed. Since the derivation of the solution is not dependent on the underlying position space, the implementation of the D3PO algorithm uses the nifty package to ensure applicability to various spatial grids and at any resolution. The fidelity of the algorithm is validated by the analysis of simulated data, including a realistic high energy photon count image showing a 32 × 32 arcmin² observation with a spatial resolution of 0.1 arcmin. In all tests the D3PO algorithm successfully denoised, deconvolved, and decomposed the data into a diffuse and a point-like signal estimate for the respective photon flux components. A copy of the code is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/574/A74

  20. Denoising algorithm based on edge extraction and wavelet transform in digital holography

    NASA Astrophysics Data System (ADS)

    Zhang, Ming; Sang, Xin-zhu; Leng, Jun-min; Cao, Xue-mei

    2013-08-01

    Digital holography is a coherent imaging method and is inevitably affected by many factors in the recording process. One of the dominant problems is speckle noise, which is essentially nonlinear multiplicative noise correlated with the signal, and is therefore more difficult to remove than additive noise. This noise pollution lowers the resolution of the reconstructed image. A new solution for suppressing speckle noise in digital holograms is presented, which combines the Canny filtering algorithm with a wavelet threshold denoising algorithm. The Canny filter is used to obtain the edge detail, and the wavelet transformation performs the denoising. In order to suppress speckle effectively while retaining as much image detail as possible, the Neyman-Pearson (N-P) criterion is introduced to estimate the wavelet coefficients at every scale. An improved threshold function with a smoother curve is proposed. The reconstructed image is achieved by merging the denoised image with the edge details. Experimental results and performance parameters of the proposed algorithm are discussed and compared with other methods, showing that the presented approach can not only effectively eliminate speckle noise, but also retain useful signals and edge information simultaneously.

  1. Denoising of chaotic signal using independent component analysis and empirical mode decomposition with circulate translating

    NASA Astrophysics Data System (ADS)

    Wen-Bo, Wang; Xiao-Dong, Zhang; Yuchan, Chang; Xiang-Li, Wang; Zhao, Wang; Xi, Chen; Lei, Zheng

    2016-01-01

    In this paper, a new method to reduce noise within chaotic signals based on ICA (independent component analysis) and EMD (empirical mode decomposition) is proposed. The basic idea is, firstly, to decompose the chaotic signals and construct multidimensional input vectors on the basis of EMD and its translation invariance. Secondly, independent component analysis is performed on the input vectors, which amounts to a self-adapting denoising of the intrinsic mode functions (IMFs) of the chaotic signals. Finally, the IMFs are recombined to form the new denoised chaotic signal. Experiments were carried out on the Lorenz chaotic signal contaminated with different Gaussian noises and on the monthly observed chaotic sunspot sequence. The results proved that the proposed method is effective in denoising chaotic signals. Moreover, it can correct the center point in the phase space effectively, which makes it approach the real trajectory of the chaotic attractor. Project supported by the National Science and Technology, China (Grant No. 2012BAJ15B04), the National Natural Science Foundation of China (Grant Nos. 41071270 and 61473213), the Natural Science Foundation of Hubei Province, China (Grant No. 2015CFB424), the State Key Laboratory Foundation of Satellite Ocean Environment Dynamics, China (Grant No. SOED1405), the Hubei Provincial Key Laboratory Foundation of Metallurgical Industry Process System Science, China (Grant No. Z201303), and the Hubei Key Laboratory Foundation of Transportation Internet of Things, Wuhan University of Technology, China (Grant No. 2015III015-B02).
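
    A compressed sketch of the EMD-then-ICA idea follows, using the PyEMD and scikit-learn packages. It is an approximation of the approach described above: the circulate-translating construction is omitted, and noise-like components are identified by a simple lag-1 autocorrelation heuristic; both are assumptions of this sketch rather than the paper's exact procedure.

        import numpy as np
        from PyEMD import EMD                      # pip install EMD-signal
        from sklearn.decomposition import FastICA

        def emd_ica_denoise(x, noise_autocorr=0.2):
            """Decompose x into IMFs, run ICA across the IMFs, suppress
            noise-like components, and rebuild the signal."""
            imfs = EMD()(x)                        # shape (n_imfs, len(x))
            ica = FastICA(n_components=imfs.shape[0], random_state=0)
            S = ica.fit_transform(imfs.T)          # independent components
            for j in range(S.shape[1]):
                s = S[:, j]
                r1 = np.corrcoef(s[:-1], s[1:])[0, 1]  # lag-1 autocorrelation
                if r1 < noise_autocorr:            # noise-like -> zero it
                    S[:, j] = 0.0
            return ica.inverse_transform(S).sum(axis=1)

        t = np.linspace(0, 8 * np.pi, 2048)
        clean = np.sin(t) + 0.5 * np.sin(3.1 * t)
        rng = np.random.default_rng(0)
        noisy = clean + 0.3 * rng.standard_normal(t.size)
        print(np.std(noisy - clean), np.std(emd_ica_denoise(noisy) - clean))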

  2. Electrocardiogram signal denoising based on empirical mode decomposition technique: an overview

    NASA Astrophysics Data System (ADS)

    Han, G.; Lin, B.; Xu, Z.

    2017-03-01

    The electrocardiogram (ECG) is a weak, nonlinear, and non-stationary signal that reflects whether the heart is functioning normally or abnormally. The ECG signal is susceptible to various kinds of noise such as high/low frequency noise, powerline interference, and baseline wander. Hence, the removal of noise from the ECG signal is a vital link in ECG signal processing and plays a significant role in the detection and diagnosis of heart disease. This review describes recent developments in ECG signal denoising based on the Empirical Mode Decomposition (EMD) technique, including high frequency noise removal, powerline interference separation, baseline wander correction, the combination of EMD with other methods, and the EEMD technique. The EMD technique is a promising but not perfect method for processing nonlinear and non-stationary signals such as the ECG. Combining EMD with other algorithms is a good way to improve noise-cancellation performance. The pros and cons of the EMD technique in ECG signal denoising are discussed in detail. Finally, future work and challenges in EMD-based ECG signal denoising are clarified.

  3. Edge structure preserving 3D image denoising by local surface approximation.

    PubMed

    Qiu, Peihua; Mukherjee, Partha Sarathi

    2012-08-01

    In various applications, including magnetic resonance imaging (MRI) and functional MRI (fMRI), 3D images are becoming increasingly popular. To improve the reliability of subsequent image analyses, 3D image denoising is often a necessary preprocessing step, which is the focus of the current paper. In the literature, most existing image denoising procedures are for 2D images. Their direct extensions to 3D cases generally cannot handle 3D images efficiently because the structure of a typical 3D image is substantially more complicated than that of a typical 2D image. For instance, edge locations are surfaces in 3D cases which would be much more challenging to handle compared to edge curves in 2D cases. We propose a novel 3D image denoising procedure in this paper, based on local approximation of the edge surfaces using a set of surface templates. An important property of this method is that it can preserve edges and major edge structures (e.g., intersections of two edge surfaces and pointed corners). Numerical studies show that it works well in various applications.

  4. Central difference quotient spectrum of singular values and its application to signal denoising

    NASA Astrophysics Data System (ADS)

    Zeng, Zuoqin; Zheng, Lixin

    2017-04-01

    Aiming at the problem of selecting the effective singular values in SVD-based signal denoising, a genetic algorithm is introduced and the central difference quotient spectrum is put forward. Firstly, the energy of the effective components is adopted as the fitness function, and a suitable genetic algorithm is designed to optimize the structure of the denoising Hankel matrix. Then the geometric meaning of difference quotients is analyzed, and the characteristics of the singular value curve and the subtle relations between this curve and the central difference quotient are studied. Finally, the central difference quotient spectrum of singular values is put forward. The results show that, compared with an exhaustive search, the highest energy of the effective components and the corresponding optimal matrix structure can be quickly obtained with the designed genetic algorithm. Compared with the singular value curve, the maximum peak position, which marks the boundary between ideal signals and noise, shows up clearly on the central difference quotient spectrum, and the number of effective singular values is accurately obtained. On this basis, SVD-based signal denoising can reach its best performance.
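
    The rank-selection idea is easy to prototype. The sketch below is a simplified illustration, not the paper's implementation: it fixes the Hankel matrix shape rather than optimizing it with a genetic algorithm, and it picks the number of effective singular values near the largest peak of a central difference quotient of the singular value curve.

        import numpy as np
        from scipy.linalg import hankel, svd

        def cdq_svd_denoise(x, m=None):
            """Hankel-SVD denoising with the rank chosen from a
            central-difference-quotient spectrum of the singular values."""
            n = len(x)
            m = m or n // 2
            H = hankel(x[:m], x[m - 1:])          # m x (n - m + 1) Hankel matrix
            U, s, Vt = svd(H, full_matrices=False)
            # CDQ at interior indices; its peak marks the signal/noise boundary
            d = (s[:-2] - s[2:]) / 2.0
            k = int(np.argmax(d)) + 2             # heuristic: keep up to the peak
            Hk = (U[:, :k] * s[:k]) @ Vt[:k]
            # Average anti-diagonals to map the rank-k matrix back to a signal
            out = np.zeros(n)
            cnt = np.zeros(n)
            for i in range(Hk.shape[0]):
                for j in range(Hk.shape[1]):
                    out[i + j] += Hk[i, j]
                    cnt[i + j] += 1
            return out / cnt

        t = np.linspace(0, 1, 400)
        sig = np.sin(2 * np.pi * 12 * t)
        noisy = sig + 0.4 * np.random.default_rng(1).standard_normal(t.size)
        print(np.std(noisy - sig), np.std(cdq_svd_denoise(noisy) - sig))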

  5. A new method for mobile phone image denoising

    NASA Astrophysics Data System (ADS)

    Jin, Lianghai; Jin, Min; Li, Xiang; Xu, Xiangyang

    2015-12-01

    Images captured by mobile phone cameras via pipeline processing usually contain various kinds of noises, especially granular noise with different shapes and sizes in both luminance and chrominance channels. In chrominance channels, noise is closely related to image brightness. To improve image quality, this paper presents a new method to denoise such mobile phone images. The proposed scheme converts the noisy RGB image to luminance and chrominance images, which are then denoised by a common filtering framework. The common filtering framework processes a noisy pixel by first excluding the neighborhood pixels that significantly deviate from the (vector) median and then utilizing the other neighborhood pixels to restore the current pixel. In the framework, the strength of chrominance image denoising is controlled by image brightness. The experimental results show that the proposed method obviously outperforms some other representative denoising methods in terms of both objective measure and visual evaluation.

  6. Denoising MR spectroscopic imaging data with low-rank approximations.

    PubMed

    Nguyen, Hien M; Peng, Xi; Do, Minh N; Liang, Zhi-Pei

    2013-01-01

    This paper addresses the denoising problem associated with magnetic resonance spectroscopic imaging (MRSI), where signal-to-noise ratio (SNR) has been a critical problem. A new scheme is proposed, which exploits two low-rank structures that exist in MRSI data, one due to partial separability and the other due to linear predictability. Denoising is performed by arranging the measured data in appropriate matrix forms (i.e., Casorati and Hankel) and applying low-rank approximations by singular value decomposition (SVD). The proposed method has been validated using simulated and experimental data, producing encouraging results. Specifically, the method can effectively denoise MRSI data in a wide range of SNR values while preserving spatial-spectral features. The method could prove useful for denoising MRSI data and other spatial-spectral and spatial-temporal imaging data as well.
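
    As a concrete illustration of the partial-separability route, the sketch below applies a rank-r truncation to the Casorati matrix of a synthetic spatio-temporal data set; the Hankel/linear-predictability step would be handled analogously. The array layout and the rank are assumptions of this sketch, not details from the paper.

        import numpy as np

        def lowrank_denoise_casorati(data, rank):
            """Denoise by a rank-r SVD truncation of the Casorati matrix
            (rows = voxels, columns = temporal/spectral samples)."""
            nx, ny, nt = data.shape
            C = data.reshape(nx * ny, nt)          # Casorati matrix
            U, s, Vt = np.linalg.svd(C, full_matrices=False)
            C_r = (U[:, :rank] * s[:rank]) @ Vt[:rank]
            return C_r.reshape(nx, ny, nt)

        rng = np.random.default_rng(0)
        # Synthetic rank-2 spatio-temporal data plus noise
        maps = rng.standard_normal((16 * 16, 2))
        series = rng.standard_normal((2, 128))
        clean = (maps @ series).reshape(16, 16, 128)
        noisy = clean + 0.5 * rng.standard_normal(clean.shape)
        den = lowrank_denoise_casorati(noisy, rank=2)
        print(np.linalg.norm(noisy - clean), np.linalg.norm(den - clean))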

  7. Image denoising via sparse and redundant representations over learned dictionaries.

    PubMed

    Elad, Michael; Aharon, Michal

    2006-12-01

    We address the image denoising problem, where zero-mean white and homogeneous Gaussian additive noise is to be removed from a given image. The approach taken is based on sparse and redundant representations over trained dictionaries. Using the K-SVD algorithm, we obtain a dictionary that describes the image content effectively. Two training options are considered: using the corrupted image itself, or training on a corpus of high-quality images. Since the K-SVD is limited in handling small image patches, we extend its deployment to arbitrary image sizes by defining a global image prior that forces sparsity over patches in every location in the image. We show how such a Bayesian treatment leads to a simple and effective denoising algorithm. This leads to state-of-the-art denoising performance, equivalent to and sometimes surpassing recently published leading alternative denoising methods.
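
    The patch-dictionary pipeline can be sketched with scikit-learn, whose MiniBatchDictionaryLearning stands in here for K-SVD proper (an assumption of this sketch; the two solve similar sparse-coding problems with different update rules). The dictionary is trained on the corrupted image itself, one of the two options described above.

        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning
        from sklearn.feature_extraction.image import (
            extract_patches_2d, reconstruct_from_patches_2d)

        def dictionary_denoise(noisy, patch=8, atoms=64, nnz=4):
            """Learn a dictionary from the noisy image, sparse-code each
            patch with OMP, and re-average the overlapping patches."""
            patches = extract_patches_2d(noisy, (patch, patch))
            P = patches.reshape(len(patches), -1)
            mean = P.mean(axis=1, keepdims=True)
            P = P - mean                           # code the DC-removed patches
            dico = MiniBatchDictionaryLearning(
                n_components=atoms, transform_algorithm="omp",
                transform_n_nonzero_coefs=nnz, random_state=0)
            code = dico.fit(P).transform(P)
            recon = (code @ dico.components_) + mean
            return reconstruct_from_patches_2d(
                recon.reshape(patches.shape), noisy.shape)

        rng = np.random.default_rng(0)
        img = np.kron(np.eye(8), np.ones((8, 8)))  # toy 64x64 structured image
        noisy = img + 0.2 * rng.standard_normal(img.shape)
        print(np.abs(dictionary_denoise(noisy) - img).mean())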

  8. Sparsity based denoising of spectral domain optical coherence tomography images

    PubMed Central

    Fang, Leyuan; Li, Shutao; Nie, Qing; Izatt, Joseph A.; Toth, Cynthia A.; Farsiu, Sina

    2012-01-01

    In this paper, we make contact with the field of compressive sensing and present a development and generalization of tools and results for reconstructing irregularly sampled tomographic data. In particular, we focus on denoising Spectral-Domain Optical Coherence Tomography (SDOCT) volumetric data. We take advantage of customized scanning patterns, in which, a selected number of B-scans are imaged at higher signal-to-noise ratio (SNR). We learn a sparse representation dictionary for each of these high-SNR images, and utilize such dictionaries to denoise the low-SNR B-scans. We name this method multiscale sparsity based tomographic denoising (MSBTD). We show the qualitative and quantitative superiority of the MSBTD algorithm compared to popular denoising algorithms on images from normal and age-related macular degeneration eyes of a multi-center clinical trial. We have made the corresponding data set and software freely available online. PMID:22567586

  9. Image denoising based on wavelet cone of influence analysis

    NASA Astrophysics Data System (ADS)

    Pang, Wei; Li, Yufeng

    2009-11-01

    Donoho et al. have proposed a method for denoising by thresholding based on the wavelet transform, and indeed, the application of their method to image denoising has been extremely successful. But this method is based on the assumption that the noise is additive Gaussian white noise, and it is not efficient against impulse noise. In this paper, a new image denoising algorithm based on wavelet cone of influence (COI) analysis is proposed, which can effectively remove impulse noise and preserve image edges via the undecimated discrete wavelet transform (UDWT). Furthermore, combined with the traditional wavelet thresholding denoising method, it can also be used to suppress a wider range of noise types such as Gaussian noise, impulse noise, Poisson noise, and mixed noise. Experimental results illustrate the advantages of this method.

  10. Terahertz digital holography image denoising using stationary wavelet transform

    NASA Astrophysics Data System (ADS)

    Cui, Shan-Shan; Li, Qi; Chen, Guanghao

    2015-04-01

    Terahertz (THz) holography is a frontier technology in the terahertz imaging field. However, reconstructed images of holograms are inherently affected by speckle noise, on account of the coherent nature of light scattering. The stationary wavelet transform (SWT) is an effective tool for speckle noise removal. In this paper, two algorithms originally developed for despeckling SAR images are applied to THz images on the basis of the SWT: threshold estimation and a smoothing operation. Denoised images are then quantitatively assessed by the speckle index. Experimental results show that the stationary wavelet transform offers superior denoising performance and image detail preservation compared with the discrete wavelet transform. For the threshold estimation, high levels of decomposition are needed for a better denoising result. The smoothing operation combined with the stationary wavelet transform achieves the best denoising effect at a single decomposition level, with 5×5 average filtering.

  11. An expression atlas of human primary cells: inference of gene function from coexpression networks

    PubMed Central

    2013-01-01

    Background The specialisation of mammalian cells in time and space requires genes associated with specific pathways and functions to be co-ordinately expressed. Here we have combined a large number of publicly available microarray datasets derived from human primary cells and analysed large correlation graphs of these data. Results Using the network analysis tool BioLayout Express3D we identify robust co-associations of genes expressed in a wide variety of cell lineages. We discuss the biological significance of a number of these associations, in particular the coexpression of key transcription factors with the genes that they are likely to control. Conclusions We consider the regulation of genes in human primary cells and specifically in the human mononuclear phagocyte system. Of particular note is the fact that these data do not support the identity of putative markers of antigen-presenting dendritic cells, nor the classification of M1 and M2 activation states, a current subject of debate within the immunology field. We have provided this data resource on the BioGPS web site (http://biogps.org/dataset/2429/primary-cell-atlas/) and on macrophages.com (http://www.macrophages.com/hu-cell-atlas). PMID:24053356

  12. Inferring parrotfish (Teleostei: Scaridae) pharyngeal mill function from dental morphology, wear, and microstructure.

    PubMed

    Carr, Andrew; Tibbetts, Ian R; Kemp, Anne; Truss, Rowan; Drennan, John

    2006-10-01

    Morphology, occlusal surface topography, macrowear, and microwear features of parrotfish pharyngeal teeth were investigated to relate microstructural characteristics to the function of the pharyngeal mill using scanning electron microscopy of whole and sectioned pharyngeal jaws and teeth. Pharyngeal tooth migration is anterior in the lower jaw (fifth ceratobranchial) and posterior in the upper jaw (paired third pharyngobranchials), making the interaction of occlusal surfaces and wear-generating forces complex. The extent of wear can be used to define three regions through which teeth migrate: a region containing newly erupted teeth showing little or no wear; a midregion in which the apical enameloid is swiftly worn; and a region containing teeth with only basal enameloid remaining, which shows low to moderate wear. The shape of the occlusal surface alters as the teeth progress along the pharyngeal jaw, generating conditions that appear suited to the reduction of coral particles. It is likely that the interaction between these particles and algal cells during the process of the rendering of the former is responsible for the rupture of the latter, with the consequent liberation of cell contents from which parrotfish obtain their nutrients.

  13. An expression atlas of human primary cells: inference of gene function from coexpression networks.

    PubMed

    Mabbott, Neil A; Baillie, J Kenneth; Brown, Helen; Freeman, Tom C; Hume, David A

    2013-09-20

    The specialisation of mammalian cells in time and space requires genes associated with specific pathways and functions to be co-ordinately expressed. Here we have combined a large number of publicly available microarray datasets derived from human primary cells and analysed large correlation graphs of these data. Using the network analysis tool BioLayout Express3D we identify robust co-associations of genes expressed in a wide variety of cell lineages. We discuss the biological significance of a number of these associations, in particular the coexpression of key transcription factors with the genes that they are likely to control. We consider the regulation of genes in human primary cells and specifically in the human mononuclear phagocyte system. Of particular note is the fact that these data do not support the identity of putative markers of antigen-presenting dendritic cells, nor the classification of M1 and M2 activation states, a current subject of debate within the immunology field. We have provided this data resource on the BioGPS web site (http://biogps.org/dataset/2429/primary-cell-atlas/) and on macrophages.com (http://www.macrophages.com/hu-cell-atlas).

  14. Open chromatin profiling of human postmortem brain infers functional roles for non-coding schizophrenia loci.

    PubMed

    Fullard, John F; Giambartolomei, Claudia; Hauberg, Mads E; Xu, Ke; Voloudakis, Georgios; Shao, Zhiping; Bare, Christopher; Dudley, Joel T; Mattheisen, Manuel; Robakis, Nikolaos K; Haroutunian, Vahram; Roussos, Panos

    2017-03-14

    Open chromatin provides access to DNA binding proteins for the correct spatiotemporal regulation of gene expression. Mapping chromatin accessibility has been widely used to identify the location of cis regulatory elements (CREs) including promoters and enhancers. CREs show tissue- and cell-type specificity and disease-associated variants are often enriched for CREs in the tissues and cells that pertain to a given disease. To better understand the role of CREs in neuropsychiatric disorders, we applied the Assay for Transposase Accessible Chromatin followed by sequencing (ATAC-seq) to neuronal and non-neuronal nuclei isolated from frozen postmortem human brain by fluorescence-activated nuclear sorting (FANS). Most of the identified open chromatin regions (OCRs) are differentially accessible between neurons and non-neurons, and show enrichment with known cell type markers, promoters and enhancers. Relative to those of non-neurons, neuronal OCRs are more evolutionarily conserved and are enriched in distal regulatory elements. Transcription factor (TF) footprinting analysis identifies differences in the regulome between neuronal and non-neuronal cells and ascribes putative functional roles to a number of non-coding schizophrenia (SCZ) risk variants. Among the identified variants is a single nucleotide polymorphism (SNP) proximal to the gene encoding SNX19. In vitro experiments reveal that this SNP leads to an increase in transcriptional activity. As elevated expression of SNX19 has been associated with SCZ, our data provide evidence that the identified SNP contributes to disease. These results represent the first analysis of OCRs and TF binding sites in distinct populations of postmortem human brain cells and further our understanding of the regulome and the impact of neuropsychiatric disease-associated genetic risk variants.

  15. Seismic Discontinuities within the Crust and Mantle Beneath Indonesia as Inferred from P Receiver Functions

    NASA Astrophysics Data System (ADS)

    Woelbern, I.; Rumpker, G.

    2015-12-01

    Indonesia is situated at the southern margin of SE Asia, which comprises an assemblage of Gondwana-derived continental terranes, suture zones and volcanic arcs. The formation of SE Asia is believed to have started in the Early Devonian. Its complex history involves the opening and closure of three distinct Tethys oceans, each accompanied by the rifting of continental fragments. We apply the receiver function technique to data from the temporary MERAMEX network operated in Central Java from May to October 2004 by the GeoForschungsZentrum Potsdam. The network consisted of 112 mobile stations with a spacing of about 10 km covering the full width of the island between the southern and northern coastlines. The tectonic history is reflected in a complex crustal structure of Central Java exhibiting strong topography of the Moho discontinuity related to different tectonic units. A discontinuity of negative impedance contrast is observed throughout the mid-crust, interpreted as the top of a low-velocity layer, which shows no depth correlation with the Moho interface. Converted phases generated at greater depth beneath Indonesia indicate the existence of multiple seismic discontinuities within the upper mantle and even below. The strongest signal originates from the base of the mantle transition zone, i.e. the 660 km discontinuity. The phase related to the 410 km discontinuity is less pronounced, but clearly identifiable as well. The derived thickness of the mantle transition zone is in good agreement with the IASP91 velocity model. Additional phases are observed at roughly 33 s and 90 s relative to the P onset, corresponding to about 300 km and 920 km depth, respectively. A signal of reversed polarity indicates the top of a low-velocity layer at about 370 km depth overlying the mantle transition zone.

  16. Receiver-Function Stacking Methods to Infer Crustal Anisotropic Structure with Application to the Turkish-Anatolian Plateau

    NASA Astrophysics Data System (ADS)

    Kaviani, A.; Rumpker, G.

    2015-12-01

    To account for the presence of seismic anisotropy within the crust and to estimate the relevant parameters, we first discuss a robust technique for the analysis of shear-wave splitting in layered anisotropic media by using converted shear phases. We use a combined approach that involves time-shifting and stacking of radial receiver functions and energy-minimization of transverse receiver functions to constrain the splitting parameters (i.e. the fast-polarization direction and the delay time) for an anisotropic layer. In multi-layered anisotropic media, the splitting parameters for the individual layers can be inferred by a layer-stripping approach, where the splitting effects due to shallower layers on converted phases from deeper discontinuities are successively corrected. The effect of anisotropy on the estimates of crustal thickness and average bulk Vp/Vs ratio can be significant. Recently, we extended the approach of Zhu & Kanamori (2000) to include P-to-S converted waves and their crustal reverberations generated in the anisotropic case. The anisotropic parameters of the medium are first estimated using the splitting analysis of the Ps-phase as described above. Then, a grid-search is performed over layer thickness and Vp/Vs ratio, while accounting for all relevant arrivals (up to 20 phases) in the anisotropic medium. We apply these techniques to receiver-function data from seismological stations across the Turkish-Anatolian Plateau to study seismic anisotropy in the crust and its relationship to crustal tectonics. Preliminary results reveal significant crustal anisotropy and indicate that the strength and direction of the anisotropy vary across the main tectonic boundaries. We also improve the estimates of the crustal thickness and the bulk Vp/Vs ratio by accounting for the presence of crustal anisotropy beneath the station. Reference: Zhu, L. & H. Kanamori (2000), Moho depth variation in southern California from teleseismic receiver functions, J. Geophys. Res.

  17. Denoising two-photon calcium imaging data.

    PubMed

    Malik, Wasim Q; Schummers, James; Sur, Mriganka; Brown, Emery N

    2011-01-01

    Two-photon calcium imaging is now an important tool for in vivo imaging of biological systems. By enabling neuronal population imaging with subcellular resolution, this modality offers an approach for gaining a fundamental understanding of brain anatomy and physiology. Proper analysis of calcium imaging data requires denoising, that is, separating the signal from complex physiological noise. To analyze two-photon brain imaging data, we present a signal plus colored noise model in which the signal is represented as a harmonic regression and the correlated noise is represented as a pth-order autoregressive process. We provide an efficient cyclic descent algorithm to compute approximate maximum likelihood parameter estimates by combining a weighted least-squares procedure with the Burg algorithm. We use the Akaike information criterion to guide selection of the harmonic regression and autoregressive model orders. Our flexible yet parsimonious modeling approach reliably separates the stimulus-evoked fluorescence response from background activity and noise, assesses goodness of fit, and estimates confidence intervals and signal-to-noise ratio. This refined separation leads to appreciably enhanced image contrast for individual cells, including clear delineation of subcellular details and network activity. The application of our approach to in vivo imaging data recorded in the ferret primary visual cortex demonstrates that our method yields substantially denoised signal estimates. We also provide a general Volterra series framework for deriving this and other signal plus correlated noise models for imaging. This approach to analyzing two-photon calcium imaging data may be readily adapted to other computational biology problems which apply correlated noise models.
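
    The signal plus colored noise decomposition can be prototyped compactly. The sketch below is a simplified stand-in for the method described above: it alternates ordinary least squares for the harmonic regression with Burg AR estimation on the residuals (statsmodels provides burg), rather than reproducing the authors' exact weighted least-squares machinery; the frame rate, stimulus frequency, and model orders are illustrative.

        import numpy as np
        from statsmodels.regression.linear_model import burg

        def harmonic_design(n, fs, f0, n_harm):
            """Columns: intercept plus sines/cosines at harmonics of f0."""
            t = np.arange(n) / fs
            cols = [np.ones(n)]
            for k in range(1, n_harm + 1):
                cols += [np.cos(2 * np.pi * k * f0 * t),
                         np.sin(2 * np.pi * k * f0 * t)]
            return np.column_stack(cols)

        def fit_signal_plus_ar(y, fs, f0, n_harm=2, ar_order=2, n_iter=5):
            """Cyclic descent: alternate OLS for the harmonic regression
            with Burg AR estimation, re-fitting on AR-whitened data."""
            X = harmonic_design(len(y), fs, f0, n_harm)
            beta = np.linalg.lstsq(X, y, rcond=None)[0]
            for _ in range(n_iter):
                rho, _ = burg(y - X @ beta, order=ar_order)
                def whiten(v):
                    w = v.astype(float).copy()
                    for k, r in enumerate(rho, start=1):
                        w[k:] -= r * v[:-k]
                    return w
                beta = np.linalg.lstsq(whiten(X), whiten(y), rcond=None)[0]
            return X @ beta                        # denoised evoked response

        fs, f0, n = 30.0, 0.5, 600    # frame rate, stimulus frequency, frames
        rng = np.random.default_rng(0)
        noise = np.zeros(n)
        for i in range(1, n):         # AR(1) background "physiological" noise
            noise[i] = 0.8 * noise[i - 1] + 0.3 * rng.standard_normal()
        y = 2.0 * np.sin(2 * np.pi * f0 * np.arange(n) / fs) + noise
        print(np.std(y - fit_signal_plus_ar(y, fs, f0)))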

  18. The Xanadu Annex on Titan Denoised

    NASA Image and Video Library

    2016-09-07

    This synthetic-aperture radar (SAR) image was obtained by NASA's Cassini spacecraft on July 25, 2016, during its 'T-121' pass over Titan's southern latitudes. The improved contrast provided by the denoising algorithm helps river channels (at bottom and upper left) stand out, as well as the crater-like feature at left. The image shows an area nicknamed the "Xanadu annex" by members of the Cassini radar team earlier in the mission. This area had not been imaged by Cassini's radar until now, but measurements of its brightness temperature from Cassini's microwave radiometer were quite similar to those of the large region on Titan named Xanadu. Cassini's radiometer is essentially a very sensitive thermometer, and brightness temperature is a measure of the intensity of microwave radiation received from a feature by the instrument. Radar team members predicted at the time that, if this area were ever imaged, it would be similar in appearance to Xanadu, which lies just to the north. That earlier hunch appears to have been borne out, as features in this scene bear a strong similarity to the mountainous terrains Cassini's radar has imaged in Xanadu. Xanadu -- and now perhaps its annex -- remains something of a mystery. First imaged in 1994 by the Hubble Space Telescope (just three years before Cassini's launch from Earth), Xanadu was the first surface feature to be recognized on Titan. Once thought to be a raised plateau, the region is now understood to be slightly tilted with respect to, but not higher than, the darker surrounding regions. It blocks the formation of sand dunes, which otherwise extend all the way around Titan at its equator. The image was taken by the Cassini Synthetic Aperture Radar (SAR) on July 25, 2016 during the mission's 122nd targeted Titan encounter. The image has been modified by the denoising method described in A. Lucas, JGR:Planets (2014). http://photojournal.jpl.nasa.gov/catalog/PIA20714

  19. Laser image denoising technique based on multi-fractal theory

    NASA Astrophysics Data System (ADS)

    Du, Lin; Sun, Huayan; Tian, Weiqing; Wang, Shuai

    2014-02-01

    The noise in laser images is complex, including both additive and multiplicative components. Considering the features of laser images and the basic processing capacity and defects of the common algorithms, this paper introduces fractal theory into laser image denoising. The denoising is implemented mainly through analysis of the singularity exponent of each pixel in fractal space and of the features of the multi-fractal spectrum. According to quantitative and qualitative evaluation of the processed images, the laser image processing technique based on fractal theory not only effectively removes the complicated noise of laser images obtained by a range-gated laser active imaging system, but also maintains the detail information during denoising. For different laser images, the multi-fractal denoising technique can increase the SNR of the laser image by at least 1-2 dB compared with other denoising techniques, which basically meets the needs of laser image denoising.

  20. Implementation and performance evaluation of acoustic denoising algorithms for UAV

    NASA Astrophysics Data System (ADS)

    Chowdhury, Ahmed Sony Kamal

    Unmanned Aerial Vehicles (UAVs) have become a popular alternative for wildlife monitoring and border surveillance applications. Eliminating the UAV's background noise and effectively classifying the target audio signal are still major challenges. The main goal of this thesis is to remove the UAV's background noise by means of acoustic denoising techniques. Existing denoising algorithms, such as adaptive Least Mean Square (LMS), Wavelet Denoising, Time-Frequency Block Thresholding, and the Wiener Filter, were implemented and their performance evaluated. The denoising algorithms were evaluated for average Signal to Noise Ratio (SNR), Segmental SNR (SSNR), Log Likelihood Ratio (LLR), and Log Spectral Distance (LSD) metrics. To evaluate the effectiveness of the denoising algorithms on classification of target audio, we implemented Support Vector Machine (SVM) and Naive Bayes classification algorithms. Simulation results demonstrate that the LMS and Discrete Wavelet Transform (DWT) denoising algorithms offered superior performance compared with the other algorithms. Finally, we implemented the LMS and DWT algorithms on a DSP board for hardware evaluation. Experimental results showed that the LMS algorithm's performance in classifying target audio signals is robust across various noise types compared with DWT.
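
    Of the algorithms above, adaptive LMS is the simplest to sketch. The following is a minimal normalized-LMS noise canceller, not the thesis implementation; it assumes a second, noise-only reference channel is available (e.g., a microphone near the rotors), which is a standard setup for adaptive noise cancellation rather than a detail taken from the abstract.

        import numpy as np

        def nlms_cancel(primary, reference, order=32, mu=0.5, eps=1e-8):
            """Normalized LMS: 'primary' is target + noise, 'reference' is a
            correlated noise-only pickup. The error signal is the denoised
            target estimate."""
            w = np.zeros(order)
            out = np.zeros(len(primary))
            for n in range(order, len(primary)):
                x = reference[n - order:n][::-1]   # most recent samples first
                e = primary[n] - w @ x             # error = cleaned sample
                w += mu * e * x / (x @ x + eps)    # NLMS weight update
                out[n] = e
            return out

        rng = np.random.default_rng(0)
        n = 4000
        target = np.sin(2 * np.pi * 440 * np.arange(n) / 8000)  # 440 Hz tone
        ref = rng.standard_normal(n)               # rotor-noise pickup
        # Noise reaching the primary mic is a filtered copy of the reference
        noise = np.convolve(ref, [0.6, -0.3, 0.15], mode="same")
        primary = target + noise
        clean = nlms_cancel(primary, ref)
        print(np.std(primary - target), np.std(clean[500:] - target[500:]))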

  1. Gradient histogram estimation and preservation for texture enhanced image denoising.

    PubMed

    Zuo, Wangmeng; Zhang, Lei; Song, Chunwei; Zhang, David; Gao, Huijun

    2014-06-01

    Natural image statistics plays an important role in image denoising, and various natural image priors, including gradient-based, sparse representation-based, and nonlocal self-similarity-based ones, have been widely studied and exploited for noise removal. In spite of the great success of many denoising algorithms, they tend to smooth the fine scale image textures when removing noise, degrading the image visual quality. To address this problem, in this paper, we propose a texture enhanced image denoising method by enforcing the gradient histogram of the denoised image to be close to a reference gradient histogram of the original image. Given the reference gradient histogram, a novel gradient histogram preservation (GHP) algorithm is developed to enhance the texture structures while removing noise. Two region-based variants of GHP are proposed for the denoising of images consisting of regions with different textures. An algorithm is also developed to effectively estimate the reference gradient histogram from the noisy observation of the unknown image. Our experimental results demonstrate that the proposed GHP algorithm can well preserve the texture appearance in the denoised images, making them look more natural.

  2. Applications of discrete multiwavelet techniques to image denoising

    NASA Astrophysics Data System (ADS)

    Wang, Haihui; Peng, Jiaxiong; Wu, Wei; Ye, Bin

    2003-09-01

    In this paper, we present a new method for image denoising using the 2-D discrete multiwavelet transform. Developments in wavelet theory have given rise to the wavelet thresholding method for extracting a signal from noisy data, and signal denoising via wavelet thresholding has been widely popularized. Multiwavelets have recently been introduced; they offer simultaneous orthogonality, symmetry and short support. This property makes multiwavelets more suitable for various image processing applications, especially denoising. The method is based on thresholding of the multiwavelet coefficients arising from the standard scalar orthogonal wavelet transform, and it takes into account the covariance structure of the transform. In this paper, images are denoised by thresholding the multiwavelet coefficients that result from preprocessing followed by the discrete multiwavelet transform. The form of the threshold is carefully formulated and is the key to the excellent results obtained in the extensive numerical simulations of image denoising. The performance of multiwavelets is compared with that of scalar wavelets. Simulations reveal that multiwavelet-based image denoising schemes outperform wavelet-based methods both subjectively and objectively.

  3. Denoising Hyperspectral Image With Non-i.i.d. Noise Structure.

    PubMed

    Chen, Yang; Cao, Xiangyong; Zhao, Qian; Meng, Deyu; Xu, Zongben

    2017-07-27

    Hyperspectral image (HSI) denoising has been attracting much research attention in the remote sensing area due to its importance in improving HSI quality. Existing HSI denoising methods mainly focus on specific spectral and spatial prior knowledge in HSIs, and share a common underlying assumption that the noise embedded in an HSI is independent and identically distributed (i.i.d.). In real scenarios, however, the noise present in a natural HSI has much more complicated non-i.i.d. statistical structure, and underestimation of this noise complexity often tends to evidently degrade the robustness of current methods. To alleviate this issue, this paper makes the first attempt to model HSI noise using a non-i.i.d. mixture of Gaussians (NMoG) noise assumption, which closely accords with the noise characteristics possessed by a natural HSI and is thus capable of adapting to various practical noise shapes. We then integrate this noise modeling strategy into the low-rank matrix factorization (LRMF) model and propose an NMoG-LRMF model in the Bayesian framework. A variational Bayes algorithm is designed to infer the posterior of the proposed model. As substantiated by our experiments on synthetic and real noisy HSIs, the proposed method performs more robustly than state-of-the-art methods.

  4. Multi-threshold de-noising of electrical imaging logging data based on the wavelet packet transform

    NASA Astrophysics Data System (ADS)

    Xie, Fang; Xiao, Chengwen; Liu, Ruilin; Zhang, Lili

    2017-08-01

    A key problem in effectiveness evaluation for fractured-vuggy carbonatite reservoirs is how to accurately extract fracture and vug information from electrical imaging logging data. Drill bits quake during drilling, resulting in rugged borehole walls and thus conductivity fluctuations in electrical imaging logging data. The occurrence of these conductivity fluctuations (formation background noise) directly affects fracture/vug information extraction and reservoir effectiveness evaluation. We present a multi-threshold de-noising method based on the wavelet packet transform to eliminate the influence of rugged borehole walls. The noise is present as fluctuations in button-electrode conductivity curves and as pockmarked responses in electrical imaging logging static images. The noise has responses at various scales and frequency ranges and has low conductivity compared with fractures or vugs. Our de-noising method is to decompose the data into coefficients with the wavelet packet transform on a quadratic spline basis, then shrink the high-frequency wavelet packet coefficients at different resolutions with a minimax threshold and a hard-threshold function, and finally reconstruct the thresholded coefficients. We use electrical imaging logging data collected from a fractured-vuggy Ordovician carbonatite reservoir in the Tarim Basin to verify the validity of the multi-threshold de-noising method. Segmentation results and extracted parameters are shown as well to prove the effectiveness of the de-noising procedure.
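
    The decompose-shrink-reconstruct loop can be sketched with PyWavelets. The sketch below is an approximation of the procedure described above: pywt has no quadratic spline filter bank, so a biorthogonal spline wavelet ('bior2.2') stands in, and the minimax threshold uses the standard rule-of-thumb formula; neither choice comes from the paper.

        import numpy as np
        import pywt

        def minimax_threshold(sigma, n):
            """Minimax threshold rule (zero for short coefficient vectors)."""
            return sigma * (0.3936 + 0.1829 * np.log2(n)) if n > 32 else 0.0

        def wp_multithreshold(img, wavelet="bior2.2", level=3):
            """Wavelet-packet multi-threshold de-noising: hard-threshold each
            detail node at its own threshold, keep the approximation, rebuild."""
            wp = pywt.WaveletPacket2D(img, wavelet=wavelet, maxlevel=level)
            # Noise scale from the level-1 diagonal detail coefficients
            sigma = np.median(np.abs(wp["d"].data)) / 0.6745
            for node in wp.get_level(level):
                if set(node.path) == {"a"}:
                    continue                  # leave the approximation alone
                thr = minimax_threshold(sigma, node.data.size)
                wp[node.path] = pywt.threshold(node.data, thr, mode="hard")
            return wp.reconstruct(update=False)

        rng = np.random.default_rng(0)
        img = np.tile(np.sin(np.linspace(0, 6 * np.pi, 128)), (128, 1))
        noisy = img + 0.3 * rng.standard_normal(img.shape)
        den = wp_multithreshold(noisy)[:128, :128]
        print(np.std(noisy - img), np.std(den - img))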

  5. Structure, evolution and functional inference on the Mildew Locus O (MLO) gene family in three cultivated Cucurbitaceae spp.

    PubMed

    Iovieno, Paolo; Andolfo, Giuseppe; Schiavulli, Adalgisa; Catalano, Domenico; Ricciardi, Luigi; Frusciante, Luigi; Ercolano, Maria Raffaella; Pavan, Stefano

    2015-12-29

    The powdery mildew disease affects thousands of plant species and arguably represents the major fungal threat for many Cucurbitaceae crops, including melon (Cucumis melo L.), watermelon (Citrullus lanatus L.) and zucchini (Cucurbita pepo L.). Several studies revealed that specific members of the Mildew Locus O (MLO) gene family act as powdery mildew susceptibility factors. Indeed, their inactivation, as the result of gene knock-out or knock-down, is associated with a peculiar form of resistance, referred to as mlo resistance. We exploited recently available genomic information to provide a comprehensive overview of the MLO gene family in Cucurbitaceae. We report the identification of 16 MLO homologs in C. melo, 14 in C. lanatus and 18 in C. pepo genomes. Bioinformatic treatment of data allowed phylogenetic inference and the prediction of several ortholog pairs and groups. Comparison with functionally characterized MLO genes and, in C. lanatus, gene expression analysis, resulted in the detection of candidate powdery mildew susceptibility factors. We identified a series of conserved amino acid residues and motifs that are likely to play a major role for the function of MLO proteins. Finally, we performed a codon-based evolutionary analysis indicating a general high level of purifying selection in the three Cucurbitaceae MLO gene families, and the occurrence of regions under diversifying selection in candidate susceptibility factors. Results of this study may help to address further biological questions concerning the evolution and function of MLO genes. Moreover, data reported here could be conveniently used by breeding research, aiming to select powdery mildew resistant cultivars in Cucurbitaceae.

  6. A formal likelihood function for parameter and predictive inference of hydrologic models with correlated, heteroscedastic, and non-Gaussian errors

    NASA Astrophysics Data System (ADS)

    Schoups, Gerrit; Vrugt, Jasper A.

    2010-10-01

    Estimation of parameter and predictive uncertainty of hydrologic models has traditionally relied on several simplifying assumptions. Residual errors are often assumed to be independent and to be adequately described by a Gaussian probability distribution with a mean of zero and a constant variance. Here we investigate to what extent estimates of parameter and predictive uncertainty are affected when these assumptions are relaxed. A formal generalized likelihood function is presented, which extends the applicability of previously used likelihood functions to situations where residual errors are correlated, heteroscedastic, and non-Gaussian with varying degrees of kurtosis and skewness. The approach focuses on a correct statistical description of the data and the total model residuals, without separating out various error sources. Application to Bayesian uncertainty analysis of a conceptual rainfall-runoff model simultaneously identifies the hydrologic model parameters and the appropriate statistical distribution of the residual errors. When applied to daily rainfall-runoff data from a humid basin we find that (1) residual errors are much better described by a heteroscedastic, first-order, auto-correlated error model with a Laplacian distribution function characterized by heavier tails than a Gaussian distribution; and (2) compared to a standard least-squares approach, proper representation of the statistical distribution of residual errors yields tighter predictive uncertainty bands and different parameter uncertainty estimates that are less sensitive to the particular time period used for inference. Application to daily rainfall-runoff data from a semiarid basin with more significant residual errors and systematic underprediction of peak flows shows that (1) multiplicative bias factors can be used to compensate for some of the largest errors and (2) a skewed error distribution yields improved estimates of predictive uncertainty in this semiarid basin with near
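
    To make the likelihood construction concrete, here is a deliberately simplified instance assuming only the features named above: an AR(1) filter removes residual autocorrelation, the error scale grows linearly with the simulated flow (heteroscedasticity), and the standardized innovations follow a Laplace (double-exponential) distribution. The full formulation in the paper additionally carries skewness and kurtosis parameters; this sketch is not that formulation.

        import numpy as np

        def gl_loglik(obs, sim, phi, sigma0, sigma1):
            """Simplified generalized log-likelihood for hydrologic residuals:
            AR(1) decorrelation, scale sigma_t = sigma0 + sigma1*sim_t,
            Laplace density for the standardized innovations."""
            e = obs - sim                         # raw residuals
            eta = e[1:] - phi * e[:-1]            # decorrelated innovations
            sigma = sigma0 + sigma1 * sim[1:]     # heteroscedastic scale
            a = eta / sigma                       # standardized residuals
            # Laplace log-density plus the Jacobian of the scaling
            return np.sum(-np.log(2.0) - np.abs(a) - np.log(sigma))

        rng = np.random.default_rng(0)
        sim = 5.0 + 2.0 * np.sin(np.linspace(0, 12, 365))   # simulated flows
        e = np.zeros(365)
        for t in range(1, 365):                             # AR(1) Laplace noise
            e[t] = 0.6 * e[t - 1] + (0.1 + 0.05 * sim[t]) * rng.laplace()
        obs = sim + e
        print(gl_loglik(obs, sim, phi=0.6, sigma0=0.1, sigma1=0.05))

    In a Bayesian calibration, phi, sigma0, and sigma1 would be inferred jointly with the hydrologic model parameters, as the abstract describes.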

  7. POGs2: a web portal to facilitate cross-species inferences about protein architecture and function in plants.

    PubMed

    Tomcal, Michael; Stiffler, Nicholas; Barkan, Alice

    2013-01-01

    The Putative orthologous Groups 2 Database (POGs2) (http://pogs.uoregon.edu/) integrates information about the inferred proteomes of four plant species (Arabidopsis thaliana, Zea mays, Oryza sativa, and Populus trichocarpa) in a display that facilitates comparisons among orthologs and extrapolation of annotations among species. A single-page view collates key functional data for members of each Putative Orthologous Group (POG): graphical representations of InterPro domains, predicted and established intracellular locations, and imported gene descriptions. The display incorporates POGs predicted by two different algorithms as well as gene trees, allowing users to evaluate the validity of POG memberships. The web interface provides ready access to sequences and alignments of POG members, as well as sequences, alignments, and domain architectures of closely-related paralogs. A simple and flexible search interface permits queries by BLAST and by any combination of gene identifier, keywords, domain names, InterPro identifiers, and intracellular location. The concurrent display of domain architectures for orthologous proteins highlights errors in gene models and false-negatives in domain predictions. The POGs2 layout is also useful for exploring candidate genes identified by transposon tagging, QTL mapping, map-based cloning, and proteomics, and for navigating between orthologous groups that belong to the same gene family.

  8. POGs2: A Web Portal to Facilitate Cross-Species Inferences About Protein Architecture and Function in Plants

    PubMed Central

    Tomcal, Michael; Stiffler, Nicholas; Barkan, Alice

    2013-01-01

    The Putative orthologous Groups 2 Database (POGs2) (http://pogs.uoregon.edu/) integrates information about the inferred proteomes of four plant species (Arabidopsis thaliana, Zea mays, Oryza sativa, and Populus trichocarpa) in a display that facilitates comparisons among orthologs and extrapolation of annotations among species. A single-page view collates key functional data for members of each Putative Orthologous Group (POG): graphical representations of InterPro domains, predicted and established intracellular locations, and imported gene descriptions. The display incorporates POGs predicted by two different algorithms as well as gene trees, allowing users to evaluate the validity of POG memberships. The web interface provides ready access to sequences and alignments of POG members, as well as sequences, alignments, and domain architectures of closely-related paralogs. A simple and flexible search interface permits queries by BLAST and by any combination of gene identifier, keywords, domain names, InterPro identifiers, and intracellular location. The concurrent display of domain architectures for orthologous proteins highlights errors in gene models and false-negatives in domain predictions. The POGs2 layout is also useful for exploring candidate genes identified by transposon tagging, QTL mapping, map-based cloning, and proteomics, and for navigating between orthologous groups that belong to the same gene family. PMID:24340041

  9. The biosynthetic origin of oxygen functions in phenylphenalenones of Anigozanthos preissii inferred from NMR- and HRMS-based isotopologue analysis.

    PubMed

    Munde, Tobias; Maddula, Ravi K; Svatos, Ales; Schneider, Bernd

    2011-01-01

    The biosynthetic origin of 9-phenylphenalenones and the sequence according to which their oxygen functionalities are introduced were studied using nuclear magnetic resonance (NMR) spectroscopy and high-resolution electrospray ionization mass spectrometry (HRESIMS). (13)C-labelled precursors were administered to root cultures of Anigozanthos preissii, which were simultaneously incubated in an atmosphere of (18)O(2). Two major phenylphenalenones, anigorufone and hydroxyanigorufone, were isolated and analyzed by spectroscopic methods. Incorporation of (13)C-labelled precursors from the culture medium and (18)O from the atmosphere was detected. O-Methylation with (13)C-diazomethane was used to attach (13)C-labels to each hydroxyl and thereby dramatically enhance the sensitivity with which NMR spectroscopy can detect (18)O by means of isotope-induced shifts of (13)C signals. The isotopologue patterns inferred from NMR and HRESIMS analyses indicated that the hydroxyl group at C-2 of 9-phenylphenalenones had been introduced at the stage of a linear diarylheptanoid. The oxygen atoms of the carbonyl and lateral aryl ring originated from the hydroxyl group of the 4-coumaroyl moiety, which was incorporated as a unit. Copyright © 2010 Elsevier Ltd. All rights reserved.

  10. Functional associations between support use and forelimb shape in strepsirrhines and their relevance to inferring locomotor behavior in early primates.

    PubMed

    Fabre, Anne-Claire; Marigó, Judit; Granatosky, Michael C; Schmitt, Daniel

    2017-07-01

    The evolution of primates is intimately linked to their initial invasion of an arboreal environment. However, moving and foraging in this milieu creates significant mechanical challenges related to the presence of substrates differing in their size and orientation. It is widely assumed that primates are behaviorally and anatomically adapted to movement on specific substrates, but few explicit tests of this relationship in an evolutionary context have been conducted. Without direct tests of form-function relationships in living primates it is impossible to reliably infer behavior in fossil taxa. In this study, we test a hypothesis of co-variation between forelimb morphology and the type of substrates used by strepsirrhines. If associations between anatomy and substrate use exist, these can then be applied to better understand limb anatomy of extinct primates. The co-variation between each forelimb long bone and the type of substrate used was studied in a phylogenetic context. Our results show that despite the presence of significant phylogenetic signal for each long bone of the forelimb, clear support use associations are present. A strong co-variation was found between the type of substrate used and the shape of the radius, with and without taking phylogeny into account, whereas co-variation was significant for the ulna only when taking phylogeny into account. Species that use a thin branch milieu show radii that are gracile and straight and have a distal articular shape that allows for a wide range of movements. In contrast, extant species that commonly use large supports show a relatively robust and curved radius with an increased surface area available for forearm and hand muscles in pronated posture. These results, especially for the radius, support the idea that strepsirrhine primates exhibit specific skeletal adaptations associated with the supports that they habitually move on. With these robust associations in hand it will be possible to explore the same

  11. The NIFTY way of Bayesian signal inference

    SciTech Connect

    Selig, Marco

    2014-12-05

    We introduce NIFTY, 'Numerical Information Field Theory', a software package for the development of Bayesian signal inference algorithms that operate independently from any underlying spatial grid and its resolution. A large number of Bayesian and Maximum Entropy methods for 1D signal reconstruction, 2D imaging, as well as 3D tomography, appear formally similar, but one often finds individualized implementations that are neither flexible nor easily transferable. Signal inference in the framework of NIFTY can be done in an abstract way, such that algorithms, prototyped in 1D, can be applied to real world problems in higher-dimensional settings. NIFTY as a versatile library is applicable and already has been applied in 1D, 2D, 3D and spherical settings. A recent application is the D³PO algorithm targeting the non-trivial task of denoising, deconvolving, and decomposing photon observations in high energy astronomy.

  12. The NIFTy way of Bayesian signal inference

    NASA Astrophysics Data System (ADS)

    Selig, Marco

    2014-12-01

    We introduce NIFTy, "Numerical Information Field Theory", a software package for the development of Bayesian signal inference algorithms that operate independently from any underlying spatial grid and its resolution. A large number of Bayesian and Maximum Entropy methods for 1D signal reconstruction, 2D imaging, as well as 3D tomography, appear formally similar, but one often finds individualized implementations that are neither flexible nor easily transferable. Signal inference in the framework of NIFTy can be done in an abstract way, such that algorithms, prototyped in 1D, can be applied to real world problems in higher-dimensional settings. NIFTy as a versatile library is applicable and already has been applied in 1D, 2D, 3D and spherical settings. A recent application is the D3PO algorithm targeting the non-trivial task of denoising, deconvolving, and decomposing photon observations in high energy astronomy.
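
    The abstract does not spell out NIFTy's algorithms, so rather than guess at the library's API, the sketch below illustrates the core idea of Bayesian signal inference with a classic per-mode Wiener filter in plain NumPy. The signal, its spectrum, and the noise level are toy assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 512
    t = np.arange(n) / n
    s = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)  # smooth signal
    sigma = 0.5
    d = s + sigma * rng.standard_normal(n)          # noisy data

    # per-mode signal power (known here only because this is a toy problem)
    # and per-mode white-noise power, which is n * sigma^2 for numpy's FFT
    S = np.abs(np.fft.rfft(s)) ** 2
    N = n * sigma ** 2

    # Wiener filter: damp each Fourier mode of the data by S / (S + N)
    m = np.fft.irfft(S / (S + N) * np.fft.rfft(d), n)
    ```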

  13. IntNetLncSim: an integrative network analysis method to infer human lncRNA functional similarity

    PubMed Central

    Hu, Yang; Yang, Haixiu; Zhou, Chen; Sun, Jie; Zhou, Meng

    2016-01-01

    Increasing evidence has indicated that long non-coding RNAs (lncRNAs) are involved in various biological processes and complex diseases by communicating with mRNAs/miRNAs. Exploiting interactions between lncRNAs and mRNAs/miRNAs to infer lncRNA functional similarity (LFS) is an effective way to explore the function of lncRNAs and predict novel lncRNA-disease associations. In this article, we propose an integrative framework, IntNetLncSim, to infer LFS by modeling the information flow in an integrated network that comprises both lncRNA-related transcriptional and post-transcriptional information. The performance of IntNetLncSim was evaluated by investigating the relationship of LFS with the similarity of lncRNA-related mRNA sets (LmRSets) and miRNA sets (LmiRSets). As a result, LFS computed by IntNetLncSim was significantly positively correlated with the LmRSet (Pearson correlation r² = 0.8424) and LmiRSet (Pearson correlation r² = 0.2601). In particular, the performance of IntNetLncSim is superior to several previous methods. When applying the LFS to identify novel lncRNA-disease relationships, we achieved an area under the ROC curve of 0.7300 on experimentally verified lncRNA-disease associations based on leave-one-out cross-validation. Furthermore, highly-ranked lncRNA-disease associations confirmed by literature mining demonstrated the excellent performance of IntNetLncSim. Finally, a web-accessible system is provided for querying LFS and potential lncRNA-disease relationships: http://www.bio-bigdata.com/IntNetLncSim. PMID:27323856

  14. IntNetLncSim: an integrative network analysis method to infer human lncRNA functional similarity.

    PubMed

    Cheng, Liang; Shi, Hongbo; Wang, Zhenzhen; Hu, Yang; Yang, Haixiu; Zhou, Chen; Sun, Jie; Zhou, Meng

    2016-07-26

    Increasing evidence has indicated that long non-coding RNAs (lncRNAs) are involved in various biological processes and complex diseases by communicating with mRNAs/miRNAs. Exploiting interactions between lncRNAs and mRNAs/miRNAs to infer lncRNA functional similarity (LFS) is an effective way to explore the function of lncRNAs and predict novel lncRNA-disease associations. In this article, we propose an integrative framework, IntNetLncSim, to infer LFS by modeling the information flow in an integrated network that comprises both lncRNA-related transcriptional and post-transcriptional information. The performance of IntNetLncSim was evaluated by investigating the relationship of LFS with the similarity of lncRNA-related mRNA sets (LmRSets) and miRNA sets (LmiRSets). As a result, LFS computed by IntNetLncSim was significantly positively correlated with the LmRSet (Pearson correlation r² = 0.8424) and LmiRSet (Pearson correlation r² = 0.2601). In particular, the performance of IntNetLncSim is superior to several previous methods. When applying the LFS to identify novel lncRNA-disease relationships, we achieved an area under the ROC curve of 0.7300 on experimentally verified lncRNA-disease associations based on leave-one-out cross-validation. Furthermore, highly-ranked lncRNA-disease associations confirmed by literature mining demonstrated the excellent performance of IntNetLncSim. Finally, a web-accessible system is provided for querying LFS and potential lncRNA-disease relationships: http://www.bio-bigdata.com/IntNetLncSim.

  15. Effect of denoising on supervised lung parenchymal clusters

    NASA Astrophysics Data System (ADS)

    Jayamani, Padmapriya; Raghunath, Sushravya; Rajagopalan, Srinivasan; Karwoski, Ronald A.; Bartholmai, Brian J.; Robb, Richard A.

    2012-03-01

    Denoising is a critical preconditioning step for quantitative analysis of medical images. Despite promises for more consistent diagnosis, denoising techniques are seldom explored in clinical settings. While this may be attributed to the esoteric nature of the parameter-sensitive algorithms, the lack of quantitative measures of their efficacy in enhancing clinical decision making is a primary cause of physician apathy. This paper addresses this issue by exploring the effect of denoising on the integrity of supervised lung parenchymal clusters. Multiple Volumes of Interest (VOIs) were selected across multiple high-resolution CT scans to represent samples of different patterns (normal, emphysema, ground glass, honeycombing and reticular). The VOIs were labeled through consensus of four radiologists. The original datasets were filtered by multiple denoising techniques (median filtering, anisotropic diffusion, bilateral filtering and non-local means) and the corresponding filtered VOIs were extracted. A plurality of cluster indices based on multiple histogram-based pair-wise similarity measures were used to assess the quality of supervised clusters in the original and filtered space. The resultant rank orders were analyzed using the Borda criteria to find the denoising-similarity measure combination that has the best cluster quality. Our exhaustive analysis reveals (a) for a number of similarity measures, the cluster quality is inferior in the filtered space; and (b) for measures that benefit from denoising, simple median filtering outperforms non-local means and bilateral filtering. Our study suggests the need to judiciously choose, if required, a denoising technique that does not deteriorate the integrity of supervised clusters.

  16. The Panchromatic Hubble Andromeda Treasury. IV. A Probabilistic Approach to Inferring the High-mass Stellar Initial Mass Function and Other Power-law Functions

    NASA Astrophysics Data System (ADS)

    Weisz, Daniel R.; Fouesneau, Morgan; Hogg, David W.; Rix, Hans-Walter; Dolphin, Andrew E.; Dalcanton, Julianne J.; Foreman-Mackey, Daniel T.; Lang, Dustin; Johnson, L. Clifton; Beerman, Lori C.; Bell, Eric F.; Gordon, Karl D.; Gouliermis, Dimitrios; Kalirai, Jason S.; Skillman, Evan D.; Williams, Benjamin F.

    2013-01-01

    We present a probabilistic approach for inferring the parameters of the present-day power-law stellar mass function (MF) of a resolved young star cluster. This technique (1) fully exploits the information content of a given data set; (2) can account for observational uncertainties in a straightforward way; (3) assigns meaningful uncertainties to the inferred parameters; (4) avoids the pitfalls associated with binning data; and (5) can be applied to virtually any resolved young cluster, laying the groundwork for a systematic study of the high-mass stellar MF (M ≳ 1 M☉). Using simulated clusters and Markov Chain Monte Carlo sampling of the probability distribution functions, we show that estimates of the MF slope, α, are unbiased and that the uncertainty, Δα, depends primarily on the number of observed stars and on the range of stellar masses they span, assuming that the uncertainties on individual masses and the completeness are both well characterized. Using idealized mock data, we compute the theoretical precision, i.e., lower limits, on α, and provide an analytic approximation for Δα as a function of the observed number of stars and mass range. Comparison with literature studies shows that ~3/4 of quoted uncertainties are smaller than the theoretical lower limit. By correcting these uncertainties to the theoretical lower limits, we find that the literature studies yield ⟨α⟩ = 2.46, with a 1σ dispersion of 0.35 dex. We verify that it is impossible for a power-law MF to obtain meaningful constraints on the upper mass limit of the initial mass function, beyond the lower bound of the most massive star actually observed. We show that avoiding substantial biases in the MF slope requires (1) including the MF as a prior when deriving individual stellar mass estimates, (2) modeling the uncertainties in the individual stellar masses, and (3) fully characterizing and then explicitly modeling the completeness for stars of a given mass. The precision on MF
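
    A minimal sketch of the core computation, assuming a pure power law on a known mass range with error-free masses (the paper's full method also models mass uncertainties and completeness): draw a mock cluster and recover the slope by maximum likelihood.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    def neg_loglik(alpha, m, m_lo, m_hi):
        """Negative log-likelihood of p(m) ~ m**(-alpha) on [m_lo, m_hi]."""
        a1 = 1.0 - alpha
        log_c = np.log(a1 / (m_hi**a1 - m_lo**a1))   # normalization constant
        return -(m.size * log_c - alpha * np.sum(np.log(m)))

    # mock cluster drawn by inverse-transform sampling with a known slope
    rng = np.random.default_rng(1)
    alpha_true, m_lo, m_hi = 2.35, 1.0, 100.0
    u = rng.uniform(size=500)
    a1 = 1.0 - alpha_true
    masses = (m_lo**a1 + u * (m_hi**a1 - m_lo**a1)) ** (1.0 / a1)

    fit = minimize_scalar(neg_loglik, bounds=(1.1, 4.0), method='bounded',
                          args=(masses, m_lo, m_hi))
    print(f"recovered slope alpha = {fit.x:.2f}")
    ```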

  17. THE PANCHROMATIC HUBBLE ANDROMEDA TREASURY. IV. A PROBABILISTIC APPROACH TO INFERRING THE HIGH-MASS STELLAR INITIAL MASS FUNCTION AND OTHER POWER-LAW FUNCTIONS

    SciTech Connect

    Weisz, Daniel R.; Fouesneau, Morgan; Dalcanton, Julianne J.; Clifton Johnson, L.; Beerman, Lori C.; Williams, Benjamin F.; Hogg, David W.; Foreman-Mackey, Daniel T.; Rix, Hans-Walter; Gouliermis, Dimitrios; Dolphin, Andrew E.; Lang, Dustin; Bell, Eric F.; Gordon, Karl D.; Kalirai, Jason S.; Skillman, Evan D.

    2013-01-10

    We present a probabilistic approach for inferring the parameters of the present-day power-law stellar mass function (MF) of a resolved young star cluster. This technique (1) fully exploits the information content of a given data set; (2) can account for observational uncertainties in a straightforward way; (3) assigns meaningful uncertainties to the inferred parameters; (4) avoids the pitfalls associated with binning data; and (5) can be applied to virtually any resolved young cluster, laying the groundwork for a systematic study of the high-mass stellar MF (M ≳ 1 M☉). Using simulated clusters and Markov Chain Monte Carlo sampling of the probability distribution functions, we show that estimates of the MF slope, α, are unbiased and that the uncertainty, Δα, depends primarily on the number of observed stars and on the range of stellar masses they span, assuming that the uncertainties on individual masses and the completeness are both well characterized. Using idealized mock data, we compute the theoretical precision, i.e., lower limits, on α, and provide an analytic approximation for Δα as a function of the observed number of stars and mass range. Comparison with literature studies shows that ~3/4 of quoted uncertainties are smaller than the theoretical lower limit. By correcting these uncertainties to the theoretical lower limits, we find that the literature studies yield ⟨α⟩ = 2.46, with a 1σ dispersion of 0.35 dex. We verify that it is impossible for a power-law MF to obtain meaningful constraints on the upper mass limit of the initial mass function, beyond the lower bound of the most massive star actually observed. We show that avoiding substantial biases in the MF slope requires (1) including the MF as a prior when deriving individual stellar mass estimates, (2) modeling the uncertainties in the individual stellar masses, and (3) fully characterizing and then explicitly modeling the

  18. Perceptual inference.

    PubMed

    Aggelopoulos, Nikolaos C

    2015-08-01

    Perceptual inference refers to the ability to infer sensory stimuli from predictions that result from internal neural representations built through prior experience. Methods of Bayesian statistical inference and decision theory model cognition adequately by using error sensing either in guiding action or in "generative" models that predict the sensory information. In this framework, perception can be seen as a process qualitatively distinct from sensation, a process of information evaluation using previously acquired and stored representations (memories) that is guided by sensory feedback. The stored representations can be utilised as internal models of sensory stimuli enabling long term associations, for example in operant conditioning. Evidence for perceptual inference is contributed by such phenomena as the cortical co-localisation of object perception with object memory, the response invariance in the responses of some neurons to variations in the stimulus, as well as from situations in which perception can be dissociated from sensation. In the context of perceptual inference, sensory areas of the cerebral cortex that have been facilitated by a priming signal may be regarded as comparators in a closed feedback loop, similar to the better known motor reflexes in the sensorimotor system. The adult cerebral cortex can be regarded as similar to a servomechanism, in using sensory feedback to correct internal models, producing predictions of the outside world on the basis of past experience.

  19. Application of time-resolved glucose concentration photoacoustic signals based on an improved wavelet denoising

    NASA Astrophysics Data System (ADS)

    Ren, Zhong; Liu, Guodong; Huang, Zhen

    2014-10-01

    Real-time monitoring of blood glucose concentration (BGC) is a critically important procedure in controlling diabetes mellitus and preventing complications for diabetic patients. Noninvasive measurement of BGC has become a research hotspot because it avoids the physical and psychological harm of invasive sampling. Photoacoustic spectroscopy is a well-established, hybrid, and alternative technique used to determine the BGC. According to the theory of the photoacoustic technique, the blood is irradiated by a pulsed laser with nanosecond repetition time and micro-joule power; photoacoustic signals containing the BGC information are generated through the thermoelastic mechanism, and the BGC level can then be interpreted from the photoacoustic signal via data analysis. In practice, however, the time-resolved photoacoustic signals of BGC are polluted by various noises, e.g., the interference of background sounds and the multiple components of blood. The quality of the photoacoustic signal directly impacts the precision of BGC measurement. An improved wavelet denoising method was therefore proposed to eliminate the noise contained in BGC photoacoustic signals. To overcome the shortcomings of traditional wavelet threshold denoising, an improved dual-threshold wavelet function is proposed in this paper. Simulation results illustrate that the denoising performance of this improved wavelet method is better than that of the traditional soft- and hard-threshold functions. To verify the feasibility of the improved function, actual photoacoustic BGC signals were tested; the results demonstrate that the signal-to-noise ratio (SNR) obtained with the improved function increases by about 40-80%, and its root-mean-square error (RMSE) decreases by about 38.7-52.8%.
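
    The paper's dual-threshold function is not specified in the abstract, so the sketch below shows the standard wavelet-threshold pipeline it improves upon, using PyWavelets with the universal threshold; the wavelet choice and decomposition level are illustrative.

    ```python
    import numpy as np
    import pywt

    def wavelet_denoise(signal, wavelet='db4', level=4, mode='soft'):
        """Classic wavelet-threshold denoising (soft or hard thresholding)."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        # noise sd estimated from the finest detail scale, then the universal threshold
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        thr = sigma * np.sqrt(2.0 * np.log(len(signal)))
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode=mode)
                                for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[:len(signal)]

    # compare soft vs. hard thresholding on a toy photoacoustic-like pulse
    t = np.linspace(0, 1, 1024)
    clean = np.exp(-((t - 0.3) / 0.02) ** 2) - 0.5 * np.exp(-((t - 0.35) / 0.03) ** 2)
    noisy = clean + 0.05 * np.random.default_rng(0).standard_normal(t.size)
    soft = wavelet_denoise(noisy, mode='soft')
    hard = wavelet_denoise(noisy, mode='hard')
    ```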

  20. Stacked Denoising Autoencoders Applied to Star/Galaxy Classification

    NASA Astrophysics Data System (ADS)

    Qin, H. R.; Lin, J. M.; Wang, J. Y.

    2016-05-01

    In recent years, deep learning has become more and more popular because it adapts well to data and achieves high accuracy with complex model structures, but it has seen little use in astronomy. To address the problem that star/galaxy classification accuracy is high on the bright source set but low on the faint source set of the Sloan Digital Sky Survey (SDSS), we introduce the deep learning method of SDA (stacked denoising autoencoders) together with dropout technology, which can greatly improve robustness and anti-noise performance. We randomly selected bright and faint source sets with spectroscopic measurements from DR12 and DR7, and preprocessed them. Afterwards, we randomly selected the training set and testing set without replacement from the bright and faint sets. Finally, we used the obtained training sets to train the SDA models for SDSS-DR7 and SDSS-DR12. We compared the testing results with those of Library for Support Vector Machines (LibSVM), J48, Logistic Model Trees (LMT), Support Vector Machine (SVM), Logistic Regression, and the Decision Stump algorithm on the SDSS-DR12 testing set, and with the results of six kinds of decision trees on the SDSS-DR7 testing set. The experiments show that SDA achieves better classification accuracy than the other machine learning algorithms. When completeness is used as the test metric, the accuracy improves by about 15% on the faint set of SDSS-DR7.
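
    As a hedged sketch of one building block of an SDA, the following NumPy code trains a single denoising-autoencoder layer with tied weights and masking noise on toy data; the architecture and hyperparameters are illustrative, not those of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    class DenoisingAutoencoder:
        """One layer of a stacked denoising autoencoder (tied weights)."""

        def __init__(self, n_vis, n_hid, lr=0.1, corrupt=0.3):
            self.W = rng.normal(0.0, 0.1, (n_vis, n_hid))
            self.b_h = np.zeros(n_hid)
            self.b_v = np.zeros(n_vis)
            self.lr, self.corrupt = lr, corrupt

        def step(self, x):
            # masking noise: randomly zero a fraction of the input features
            x_tilde = x * (rng.random(x.size) > self.corrupt)
            h = sigmoid(x_tilde @ self.W + self.b_h)      # encode
            y = sigmoid(h @ self.W.T + self.b_v)          # decode, tied weights
            # backpropagate the squared reconstruction error
            d_y = (y - x) * y * (1.0 - y)
            d_h = (d_y @ self.W) * h * (1.0 - h)
            self.W -= self.lr * (np.outer(d_y, h) + np.outer(x_tilde, d_h))
            self.b_v -= self.lr * d_y
            self.b_h -= self.lr * d_h
            return float(np.sum((y - x) ** 2))

    # toy training loop on random binary patterns
    dae = DenoisingAutoencoder(n_vis=64, n_hid=16)
    data = (rng.random((500, 64)) > 0.5).astype(float)
    for epoch in range(10):
        loss = sum(dae.step(x) for x in data)
    ```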

  1. Hybrid regularizers-based adaptive anisotropic diffusion for image denoising.

    PubMed

    Liu, Kui; Tan, Jieqing; Ai, Liefu

    2016-01-01

    To eliminate the staircasing effect for total variation filter and synchronously avoid the edges blurring for fourth-order PDE filter, a hybrid regularizers-based adaptive anisotropic diffusion is proposed for image denoising. In the proposed model, the [Formula: see text]-norm is considered as the fidelity term and the regularization term is composed of a total variation regularization and a fourth-order filter. The two filters can be adaptively selected according to the diffusion function. When the pixels locate at the edges, the total variation filter is selected to filter the image, which can preserve the edges. When the pixels belong to the flat regions, the fourth-order filter is adopted to smooth the image, which can eliminate the staircase artifacts. In addition, the split Bregman and relaxation approach are employed in our numerical algorithm to speed up the computation. Experimental results demonstrate that our proposed model outperforms the state-of-the-art models cited in the paper in both the qualitative and quantitative evaluations.

  2. An association of platelet indices with blood pressure in Beijing adults: Applying quadratic inference function for a longitudinal study.

    PubMed

    Yang, Kun; Tao, Lixin; Mahara, Gehendra; Yan, Yan; Cao, Kai; Liu, Xiangtong; Chen, Sipeng; Xu, Qin; Liu, Long; Wang, Chao; Huang, Fangfang; Zhang, Jie; Yan, Aoshuang; Ping, Zhao; Guo, Xiuhua

    2016-09-01

    The quadratic inference function (QIF) method has become more widely accepted for correlated data because of its advantages over generalized estimating equations (GEE). This study aimed to evaluate the relationship between platelet indices and blood pressure using the QIF method, which has not been studied extensively in real data settings. A population-based longitudinal study was conducted in Beijing from 2007 to 2012, with a median follow-up of 6 years. A total of 6515 cases from 3 Beijing hospitals, aged between 20 and 65 years at baseline and undergoing routine physical examinations every year, were enrolled to explore the association between platelet indices and blood pressure by the QIF method. The original continuous platelet indices were categorized into 4 levels (Q1-Q4) using the 3 quartiles P25, P50, and P75 as critical values. GEE was performed for comparison with QIF. After adjusting for age, usage of drugs, and other confounding factors, mean platelet volume was negatively associated with diastolic blood pressure (DBP) (Equation is included in full-text article.) in males and positively linked with systolic blood pressure (SBP) (Equation is included in full-text article.). Platelet distribution width was negatively associated with SBP (Equation is included in full-text article.). Blood platelet count was associated with DBP (Equation is included in full-text article.) in males. Adults in Beijing with prolonged exposure to extreme values of platelet indices have an elevated risk of future hypertension, and evidence was provided suggesting that some platelet indices could be used for early diagnosis of high blood pressure.
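
    The QIF estimator itself is not widely packaged, so as a hedged illustration the sketch below fits the GEE baseline that the abstract compares against, using statsmodels with an exchangeable within-subject correlation structure; all variable names, effect sizes, and data are fabricated for illustration.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # mock longitudinal records: repeated annual measurements per subject
    rng = np.random.default_rng(0)
    n_subj, n_visits = 200, 5
    df = pd.DataFrame({
        "subject": np.repeat(np.arange(n_subj), n_visits),
        "age": rng.uniform(20, 65, n_subj * n_visits),
        "mpv": rng.normal(10, 1, n_subj * n_visits),   # mean platelet volume
    })
    df["sbp"] = 110 + 0.3 * df["age"] + 1.5 * df["mpv"] + rng.normal(0, 8, len(df))

    # GEE with exchangeable within-subject correlation (the method QIF improves on)
    model = smf.gee("sbp ~ age + mpv", groups="subject", data=df,
                    cov_struct=sm.cov_struct.Exchangeable(),
                    family=sm.families.Gaussian())
    print(model.fit().summary())
    ```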

  3. An adaptive nonlocal means scheme for medical image denoising

    NASA Astrophysics Data System (ADS)

    Thaipanich, Tanaphol; Kuo, C.-C. Jay

    2010-03-01

    Medical images often consist of low-contrast objects corrupted by random noise arising in the image acquisition process. Thus, image denoising is one of the fundamental tasks required by medical imaging analysis. In this work, we investigate an adaptive denoising scheme based on the nonlocal (NL)-means algorithm for medical imaging applications. In contrast with the traditional NL-means algorithm, the proposed adaptive NL-means (ANL-means) denoising scheme has three unique features. First, it employs the singular value decomposition (SVD) method and the K-means clustering (K-means) technique for robust classification of blocks in noisy images. Second, the local window is adaptively adjusted to match the local property of a block. Finally, a rotated block matching algorithm is adopted for better similarity matching. Experimental results from both additive white Gaussian noise (AWGN) and Rician noise are given to demonstrate the superior performance of the proposed ANL denoising technique over various image denoising benchmarks in terms of both PSNR and perceptual quality.
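
    The adaptive ANL-means scheme is not detailed in the abstract, but the baseline it extends is standard; below is a minimal sketch using scikit-image's NL-means implementation, with the noise level, patch size, and search distance as illustrative choices.

    ```python
    import numpy as np
    from skimage import data, img_as_float
    from skimage.restoration import denoise_nl_means, estimate_sigma

    img = img_as_float(data.camera())
    noisy = img + 0.08 * np.random.default_rng(0).standard_normal(img.shape)

    sigma_est = float(np.mean(estimate_sigma(noisy)))
    # patch_size / patch_distance play the role of the block and search-window sizes
    denoised = denoise_nl_means(noisy, h=0.8 * sigma_est, sigma=sigma_est,
                                patch_size=7, patch_distance=11, fast_mode=True)
    ```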

  4. Image sequence denoising via sparse and redundant representations.

    PubMed

    Protter, Matan; Elad, Michael

    2009-01-01

    In this paper, we consider denoising of image sequences that are corrupted by zero-mean additive white Gaussian noise. Relative to single image denoising techniques, denoising of sequences aims to also utilize the temporal dimension. This assists in getting both faster algorithms and better output quality. This paper focuses on utilizing sparse and redundant representations for image sequence denoising, extending previously reported work. In the single image setting, the K-SVD algorithm is used to train a sparsifying dictionary for the corrupted image. This paper generalizes the above algorithm by offering several extensions: i) the atoms used are 3-D; ii) the dictionary is propagated from one frame to the next, reducing the number of required iterations; and iii) averaging is done on patches in both spatial and temporal neighboring locations. These modifications lead to substantial benefits in complexity and denoising performance, compared to simply running the single image algorithm sequentially. The algorithm's performance is experimentally compared to several state-of-the-art algorithms, demonstrating comparable or favorable results.
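
    As a rough single-image analogue of this sparse-representation pipeline (the paper's 3-D, temporally propagated version is more involved), the sketch below denoises one small image by sparse-coding its patches over a learned dictionary with scikit-learn; the patch size, atom count, and sparsity level are assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.feature_extraction.image import (extract_patches_2d,
                                                  reconstruct_from_patches_2d)

    def sparse_denoise(noisy, patch=8, n_atoms=128):
        """Patch-based sparse-coding denoising over a learned dictionary."""
        P = extract_patches_2d(noisy, (patch, patch))
        X = P.reshape(P.shape[0], -1)
        mean = X.mean(axis=1, keepdims=True)
        X = X - mean                                  # remove per-patch DC level
        dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                           transform_algorithm='omp',
                                           transform_n_nonzero_coefs=4,
                                           random_state=0)
        code = dico.fit(X).transform(X)               # sparse code per patch
        X_hat = code @ dico.components_ + mean        # reconstruct patches
        return reconstruct_from_patches_2d(
            X_hat.reshape(P.shape), noisy.shape)      # average overlapping patches
    ```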

  5. GPU-accelerated denoising of 3D magnetic resonance images

    SciTech Connect

    Howison, Mark; Wes Bethel, E.

    2014-05-29

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. In practice, applying these filtering operations requires setting multiple parameters. This study was designed to provide better guidance to practitioners for choosing the most appropriate parameters by answering two questions: what parameters yield the best denoising results in practice? And what tuning is necessary to achieve optimal performance on a modern GPU? To answer the first question, we use two different metrics, mean squared error (MSE) and mean structural similarity (MSSIM), to compare denoising quality against a reference image. Surprisingly, the best improvement in structural similarity with the bilateral filter is achieved with a small stencil size that lies within the range of real-time execution on an NVIDIA Tesla M2050 GPU. Moreover, inappropriate choices for parameters, especially scaling parameters, can yield very poor denoising performance. To answer the second question, we perform an autotuning study to empirically determine optimal memory tiling on the GPU. The variation in these results suggests that such tuning is an essential step in achieving real-time performance. These results have important implications for the real-time application of denoising to MR images in clinical settings that require fast turn-around times.

  6. Evaluation of denoising algorithms for biological electron tomography

    PubMed Central

    Narasimha, Rajesh; Aganj, Iman; Bennett, Adam; Borgnia, Mario J.; Zabransky, Daniel; Sapiro, Guillermo; McLaughlin, Steven W.; Milne, Jacqueline L. S.; Subramaniam, Sriram

    2008-01-01

    Tomograms of biological specimens derived using transmission electron microscopy can be intrinsically noisy due to the use of low electron doses, the presence of a “missing wedge” in most data collection schemes, and inaccuracies arising during 3D volume reconstruction. Before tomograms can be interpreted reliably, for example, by 3D segmentation, it is essential that the data be suitably denoised using procedures that can be individually optimized for specific data sets. Here, we implement a systematic procedure to compare various non-linear denoising techniques on tomograms recorded at room temperature and at cryogenic temperatures, and establish quantitative criteria to select a denoising approach that is most relevant for a given tomogram. We demonstrate that using an appropriate denoising algorithm facilitates robust segmentation of tomograms of HIV-infected macrophages and Bdellovibrio bacteria obtained from specimens at room and cryogenic temperatures, respectively. We validate this strategy of automated segmentation of optimally denoised tomograms by comparing its performance with manual extraction of key features from the same tomograms. PMID:18585059

  7. Single-image noise level estimation for blind denoising.

    PubMed

    Liu, Xinhao; Tanaka, Masayuki; Okutomi, Masatoshi

    2013-12-01

    Noise level is an important parameter for many image processing applications. For example, the performance of an image denoising algorithm can be much degraded by poor noise level estimation. Most existing denoising algorithms simply assume that the noise level is known, which largely prevents them from practical use. Moreover, even given the true noise level, these denoising algorithms still cannot achieve the best performance, especially for scenes with rich texture. In this paper, we propose a patch-based noise level estimation algorithm and suggest that the noise level parameter should be tuned according to the scene complexity. Our approach includes a process for selecting low-rank patches without high-frequency components from a single noisy image. The selection is based on the gradients of the patches and their statistics. Then, the noise level is estimated from the selected patches using principal component analysis. Because the true noise level does not always yield the best performance for nonblind denoising algorithms, we further tune the noise level parameter for nonblind denoising. Experiments demonstrate that both the accuracy and stability of our method are superior to state-of-the-art noise level estimation algorithms for various scenes and noise levels.
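
    A hedged sketch of the idea: collect the flattest patches by gradient energy and read the noise standard deviation off the smallest principal-component eigenvalue of their covariance. The selection rule here is a simplification of the paper's low-rank patch selection.

    ```python
    import numpy as np

    def estimate_noise_pca(img, patch=7, keep=0.1):
        """Patch-PCA noise estimation on the flattest image patches."""
        h, w = img.shape
        gy, gx = np.gradient(img)
        g = gy**2 + gx**2
        rows, grads = [], []
        for i in range(h - patch + 1):
            for j in range(w - patch + 1):
                rows.append(img[i:i + patch, j:j + patch].ravel())
                grads.append(g[i:i + patch, j:j + patch].sum())
        X = np.asarray(rows)
        idx = np.argsort(grads)[: int(keep * len(grads))]  # flattest patches
        Xs = X[idx] - X[idx].mean(axis=0)
        evals = np.linalg.eigvalsh(np.cov(Xs, rowvar=False))
        return np.sqrt(max(evals[0], 0.0))   # smallest eigenvalue ~ sigma^2
    ```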

  8. Spatio-temporal TGV denoising for ASL perfusion imaging.

    PubMed

    Spann, Stefan M; Kazimierski, Kamil S; Aigner, Christoph S; Kraiger, Markus; Bredies, Kristian; Stollberger, Rudolf

    2017-08-15

    In arterial spin labeling (ASL) a perfusion weighted image is achieved by subtracting a label image from a control image. This perfusion weighted image has an intrinsically low signal to noise ratio and numerous measurements are required to achieve reliable image quality, especially at higher spatial resolutions. To overcome this limitation various denoising approaches have been published using the perfusion weighted image as input for denoising. In this study we propose a new spatio-temporal filtering approach based on total generalized variation (TGV) regularization which exploits the inherent information of control and label pairs simultaneously. In this way, the temporal and spatial similarities of all images are used to jointly denoise the control and label images. To assess the effect of denoising, virtual ground truth data were produced at different SNR levels. Furthermore, high-resolution in-vivo pulsed ASL data sets were acquired and processed. The results show improved image quality, quantitative accuracy and robustness against outliers compared to seven state of the art denoising approaches. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Image denoising based on wavelets and multifractals for singularity detection.

    PubMed

    Zhong, Junmei; Ning, Ruola

    2005-10-01

    This paper presents a very efficient algorithm for image denoising based on wavelets and multifractals for singularity detection. A challenge of image denoising is how to preserve the edges of an image when reducing noise. By modeling the intensity surface of a noisy image as statistically self-similar multifractal processes and taking advantage of the multiresolution analysis with wavelet transform to exploit the local statistical self-similarity at different scales, the pointwise singularity strength value characterizing the local singularity at each scale was calculated. By thresholding the singularity strength, wavelet coefficients at each scale were classified into two categories: the edge-related and regular wavelet coefficients and the irregular coefficients. The irregular coefficients were denoised using an approximate minimum mean-squared error (MMSE) estimation method, while the edge-related and regular wavelet coefficients were smoothed using the fuzzy weighted mean (FWM) filter aiming at preserving the edges and details when reducing noise. Furthermore, to make the FWM-based filtering more efficient for noise reduction at the lowest decomposition level, the MMSE-based filtering was performed as the first pass of denoising followed by performing the FWM-based filtering. Experimental results demonstrated that this algorithm could achieve both good visual quality and high PSNR for the denoised images.

  10. Remote sensing image denoising application by generalized morphological component analysis

    NASA Astrophysics Data System (ADS)

    Yu, Chong; Chen, Xiong

    2014-12-01

    In this paper, we introduce a remote sensing image denoising method based on generalized morphological component analysis (GMCA). This novel algorithm is a further extension of the morphological component analysis (MCA) algorithm to the blind source separation framework. The iterative thresholding strategy adopted by the GMCA algorithm first works on the most significant features in the image, and then progressively incorporates smaller features to finely tune the parameters of the whole model. A mathematical analysis of the computational complexity of the GMCA algorithm is provided. Several comparison experiments with state-of-the-art denoising algorithms are reported. To make a quantitative assessment of the algorithms in the experiments, the Peak Signal to Noise Ratio (PSNR) index and Structural Similarity (SSIM) index are calculated to assess the denoising effect from the gray-level fidelity aspect and the structure-level fidelity aspect, respectively. Quantitative analysis of the experimental results, which is consistent with the visual effect illustrated by the denoised images, proves that the introduced GMCA algorithm possesses excellent effectiveness for remote sensing image denoising. Indeed, it is hard to distinguish the original noiseless image from the image recovered by the GMCA algorithm by visual inspection.

  11. Geometric properties of solutions to the total variation denoising problem

    NASA Astrophysics Data System (ADS)

    Chambolle, Antonin; Duval, Vincent; Peyré, Gabriel; Poon, Clarice

    2017-01-01

    This article studies the denoising performance of total variation (TV) image regularization. More precisely, we study geometrical properties of the solution to the so-called Rudin-Osher-Fatemi total variation denoising method. The first contribution of this paper is a precise mathematical definition of the ‘extended support’ (associated to the noise-free image) of TV denoising. It is intuitively the region which is unstable and will suffer from the staircasing effect. We highlight in several practical cases, such as the indicator of convex sets, that this region can be determined explicitly. Our second and main contribution is a proof that the TV denoising method indeed restores an image which is exactly constant outside a small tube surrounding the extended support. The radius of this tube shrinks toward zero as the noise level vanishes, and we are able to determine, in some cases, an upper bound on the convergence rate. For indicators of so-called ‘calibrable’ sets (such as disks or properly eroded squares), this extended support matches the edges, so that discontinuities produced by TV denoising cluster tightly around the edges. In contrast, for indicators of more general shapes or for complicated images, this extended support can be larger. Beside these main results, our paper also proves several intermediate results about fine properties of TV regularization, in particular for indicators of calibrable and convex sets, which are of independent interest.
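
    A minimal way to observe the behavior analyzed here is to run ROF-type TV denoising on a piecewise-constant image and vary the regularization weight; the sketch below uses scikit-image's Chambolle implementation, with the test image and weight as arbitrary choices.

    ```python
    import numpy as np
    from skimage import data, img_as_float
    from skimage.restoration import denoise_tv_chambolle

    img = img_as_float(data.checkerboard())
    noisy = img + 0.1 * np.random.default_rng(0).standard_normal(img.shape)

    # weight plays the role of the ROF regularization parameter: larger
    # values denoise more strongly but flatten (staircase) more of the image
    denoised = denoise_tv_chambolle(noisy, weight=0.15)
    ```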

  12. Wavelet Denoising of Mobile Radiation Data

    SciTech Connect

    Campbell, D B

    2008-10-31

    The FY08 phase of this project investigated the merits of video fusion as a method for mitigating the false alarms encountered by vehicle borne detection systems in an effort to realize performance gains associated with wavelet denoising. The fusion strategy exploited the significant correlations which exist between data obtained from radiation detectors and video systems with coincident fields of view. The additional information provided by optical systems can greatly increase the capabilities of these detection systems by reducing the burden of false alarms and through the generation of actionable information. The investigation into the use of wavelet analysis techniques as a means of filtering the gross-counts signal obtained from moving radiation detectors showed promise for vehicle borne systems. However, the applicability of these techniques to man-portable systems is limited due to minimal gains in performance over the rapid feedback available to system operators under walking conditions. Furthermore, the fusion of video holds significant promise for systems operating from vehicles or systems organized into stationary arrays; however, the added complexity and hardware required by this technique renders it infeasible for man-portable systems.

  13. Image denoising for real-time MRI.

    PubMed

    Klosowski, Jakob; Frahm, Jens

    2017-03-01

    To develop an image noise filter suitable for MRI in real time (acquisition and display), which preserves small isolated details and efficiently removes background noise without introducing blur, smearing, or patch artifacts. The proposed method extends the nonlocal means algorithm to adapt the influence of the original pixel value according to a simple measure for patch regularity. Detail preservation is improved by a compactly supported weighting kernel that closely approximates the commonly used exponential weight, while an oracle step ensures efficient background noise removal. Denoising experiments were conducted on real-time images of healthy subjects reconstructed by regularized nonlinear inversion from radial acquisitions with pronounced undersampling. The filter leads to a signal-to-noise ratio (SNR) improvement of at least 60% without noticeable artifacts or loss of detail. The method visually compares to more complex state-of-the-art filters such as the block-matching three-dimensional filter and in certain cases better matches the underlying noise model. Acceleration of the computation to more than 100 complex frames per second using graphics processing units is straightforward. The sensitivity of nonlocal means to small details can be significantly increased by the simple strategies presented here, which allows partial restoration of SNR in iteratively reconstructed images without introducing a noticeable time delay or image artifacts. Magn Reson Med 77:1340-1352, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  14. Crustal anisotropy in northeastern Tibetan Plateau inferred from receiver functions: Rock textures caused by metamorphic fluids and lower crust flow?

    NASA Astrophysics Data System (ADS)

    Liu, Zhen; Park, Jeffrey; Rye, Danny M.

    2015-10-01

    The crust of Tibetan Plateau may have formed via shortening/thickening or large-scale underthrusting, and subsequently modified via lower crust channel flows and volatile-mediated regional metamorphism. The amplitude and distribution of crustal anisotropy record the history of continental deformation, offering clues to its formation and later modification. In this study, we first investigate the back-azimuth dependence of Ps converted phases using multitaper receiver functions (RFs). We analyze teleseismic data for 35 temporary broadband stations in the ASCENT experiment located in northeastern Tibet. We stack receiver functions after a moving-window moveout correction. Major features of RFs include: 1) Ps arrivals at 8-10 s on the radial components, suggesting a 70-90-km crustal thickness in the study area; 2) two-lobed back-azimuth variation for intra-crustal Ps phases in the upper crust (< 20 km), consistent with tilted symmetry axis anisotropy or dipping interfaces; 3) significant Ps arrivals with four-lobed back-azimuth variation distributed in distinct layers in the middle and lower crust (up to 60 km), corresponding to (sub)horizontal-axis anisotropy; and 4) weak or no evidence of azimuthal anisotropy in the lowermost crust. To study the anisotropy, we compare the observed RF stacks with one-dimensional reflectivity synthetic seismograms in anisotropic media, and fit major features by "trial and error" forward modeling. Crustal anisotropy offers few clues on plateau formation, but strong evidence of ongoing deformation and metamorphism. We infer strong horizontal-axis anisotropy concentrated in the middle and lower crust, which could be explained by vertically aligned sheet silicates, open cracks filled with magma or other fluid, vertical vein structures or by 1-10-km-scale chimney structures that have focused metamorphic fluids. Simple dynamic models encounter difficulty in generating vertically aligned sheet silicates. Instead, we interpret our data to

  15. Statistical Inference

    NASA Astrophysics Data System (ADS)

    Khan, Shahjahan

    Often scientific information on various data generating processes is presented in the form of numerical and categorical data. Except for some very rare occasions, such data generally represent a small part of the population, or selected outcomes of a data generating process. Although valuable and useful information is lurking in the array of scientific data, it is generally unavailable to users. Appropriate statistical methods are essential to reveal the hidden "jewels" in the mess of the raw data. Exploratory data analysis methods are used to uncover such valuable characteristics of the observed data. Statistical inference provides techniques for making valid conclusions about the unknown characteristics or parameters of the population from which scientifically drawn sample data are selected. Usually, statistical inference includes estimation of population parameters as well as performing tests of hypotheses on the parameters. However, prediction of future responses and determining the prediction distributions are also part of statistical inference. Both Classical (or Frequentist) and Bayesian approaches are used in statistical inference. The commonly used Classical approach is based on the sample data alone. In contrast, the increasingly popular Bayesian approach uses a prior distribution on the parameters along with the sample data to make inferences. Non-parametric and robust methods are also used in situations where commonly used model assumptions are unsupported. In this chapter, we cover the philosophical and methodological aspects of both the Classical and Bayesian approaches. Moreover, some aspects of predictive inference are also included. In the absence of any evidence to support assumptions regarding the distribution of the underlying population, or if the variable is measured only on an ordinal scale, non-parametric methods are used. Robust methods are employed to avoid any significant changes in the results due to deviations from the model

  16. Statistical Inference

    NASA Astrophysics Data System (ADS)

    Khan, Shahjahan

    Often scientific information on various data generating processes is presented in the form of numerical and categorical data. Except for some very rare occasions, such data generally represent a small part of the population, or selected outcomes of a data generating process. Although valuable and useful information is lurking in the array of scientific data, it is generally unavailable to users. Appropriate statistical methods are essential to reveal the hidden “jewels” in the mess of the raw data. Exploratory data analysis methods are used to uncover such valuable characteristics of the observed data. Statistical inference provides techniques for making valid conclusions about the unknown characteristics or parameters of the population from which scientifically drawn sample data are selected. Usually, statistical inference includes estimation of population parameters as well as performing tests of hypotheses on the parameters. However, prediction of future responses and determining the prediction distributions are also part of statistical inference. Both Classical (or Frequentist) and Bayesian approaches are used in statistical inference. The commonly used Classical approach is based on the sample data alone. In contrast, the increasingly popular Bayesian approach uses a prior distribution on the parameters along with the sample data to make inferences. Non-parametric and robust methods are also used in situations where commonly used model assumptions are unsupported. In this chapter, we cover the philosophical and methodological aspects of both the Classical and Bayesian approaches. Moreover, some aspects of predictive inference are also included. In the absence of any evidence to support assumptions regarding the distribution of the underlying population, or if the variable is measured only on an ordinal scale, non-parametric methods are used. Robust methods are employed to avoid any significant changes in the results due to deviations from the model

  17. A new study on mammographic image denoising using multiresolution techniques

    NASA Astrophysics Data System (ADS)

    Dong, Min; Guo, Ya-Nan; Ma, Yi-De; Ma, Yu-run; Lu, Xiang-yu; Wang, Ke-ju

    2015-12-01

    Mammography is the simplest and most effective technology for early detection of breast cancer. However, the lesion areas of the breast are difficult to detect because mammograms are contaminated with noise. This work discusses various multiresolution denoising techniques, including the classical methods based on wavelets and contourlets; the emerging multiresolution methods are also investigated. In this work, a new denoising method based on the dual-tree contourlet transform (DCT) is proposed; the DCT possesses the advantages of approximate shift invariance, directionality, and anisotropy. The proposed denoising method is applied to mammograms, and the experimental results show that the emerging multiresolution method succeeds in maintaining edges and texture details, and that it can obtain better performance than the other methods both in visual effects and in terms of the Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR) and Structure Similarity (SSIM) values.

  18. Non-local MRI denoising using random sampling.

    PubMed

    Hu, Jinrong; Zhou, Jiliu; Wu, Xi

    2016-09-01

    In this paper, we propose a random sampling non-local means (SNLM) algorithm to eliminate noise in 3D MRI datasets. Non-local means (NLM) algorithms have been implemented efficiently for MRI denoising, but are always limited by high computational complexity. Compared to conventional methods, which raster through the entire search window when computing similarity weights, the proposed SNLM algorithm randomly selects a small subset of voxels, which dramatically decreases the computational burden while achieving competitive denoising results. Moreover, the structure tensor, which encapsulates high-order information, was introduced as an optimal sampling pattern for further improvement. Numerical experiments demonstrated that the proposed SNLM method achieves a good balance between denoising quality and computational efficiency. At a relative sampling ratio of ξ = 0.05, SNLM can remove noise as effectively as full NLM, while the running time is reduced to 1/20 of NLM's.

  19. Image denoising via group Sparse representation over learned dictionary

    NASA Astrophysics Data System (ADS)

    Cheng, Pan; Deng, Chengzhi; Wang, Shengqian; Zhang, Chunfeng

    2013-10-01

    Images are one of the most important ways for us to obtain information. In practical applications, however, images are often corrupted by a variety of noise, so solving the image denoising problem is particularly important. The K-SVD algorithm improves the denoising effect by sparsely coding over learned atoms instead of using a traditional fixed sparse-coding dictionary. In order to further improve the denoising effect, we propose to extend the K-SVD algorithm via group sparse representation. The key point of this method is dividing the sparse coefficients into groups, so that the correlation among elements can be adjusted by controlling the size of the groups. This new approach improves the local constraints between adjacent atoms, which is important for increasing the correlation between the atoms. The experimental results show that our method has a better effect on image recovery; it efficiently prevents the blocking effect and produces smoother images.

  20. Total Variation Denoising and Support Localization of the Gradient

    NASA Astrophysics Data System (ADS)

    Chambolle, A.; Duval, V.; Peyré, G.; Poon, C.

    2016-10-01

    This paper describes the geometrical properties of the solutions to the total variation denoising method. A folklore statement is that this method is able to restore sharp edges but, at the same time, might introduce some staircasing (i.e. “fake” edges) in flat areas. Quite surprisingly, numerical evidence aside, almost no theoretical results are available to back up these claims. The first contribution of this paper is a precise mathematical definition of the “extended support” (associated to the noise-free image) of TV denoising. This is intuitively the region which is unstable and will suffer from the staircasing effect. Our main result shows that the TV denoising method indeed restores a piecewise constant image outside a small tube surrounding the extended support. Furthermore, the radius of this tube shrinks toward zero as the noise level vanishes and, in some cases, an upper bound on the convergence rate is given.

  1. Learning optimal spatially-dependent regularization parameters in total variation image denoising

    NASA Astrophysics Data System (ADS)

    Van Chung, Cao; De los Reyes, J. C.; Schönlieb, C. B.

    2017-07-01

    We consider a bilevel optimization approach in function space for the choice of spatially dependent regularization parameters in TV image denoising models. First- and second-order optimality conditions for the bilevel problem are studied when the spatially-dependent parameter belongs to the Sobolev space H¹(Ω). A combined Schwarz domain decomposition-semismooth Newton method is proposed for the solution of the full optimality system and local superlinear convergence of the semismooth Newton method is verified. Exhaustive numerical computations are finally carried out to show the suitability of the approach.

  2. MicroRNA-Target Network Inference and Local Network Enrichment Analysis Identify Two microRNA Clusters with Distinct Functions in Head and Neck Squamous Cell Carcinoma.

    PubMed

    Sass, Steffen; Pitea, Adriana; Unger, Kristian; Hess, Julia; Mueller, Nikola S; Theis, Fabian J

    2015-12-18

    MicroRNAs represent ~22 nt long endogenous small RNA molecules that have been experimentally shown to regulate gene expression post-transcriptionally. One main interest in miRNA research is the investigation of their functional roles, which can typically be accomplished by identification of mi-/mRNA interactions and functional annotation of target gene sets. We here present a novel method "miRlastic", which infers miRNA-target interactions using transcriptomic data as well as prior knowledge and performs functional annotation of target genes by exploiting the local structure of the inferred network. For the network inference, we applied linear regression modeling with elastic net regularization on matched microRNA and messenger RNA expression profiling data to perform feature selection on prior knowledge from sequence-based target prediction resources. The novelty of miRlastic inference originates in predicting data-driven intra-transcriptome regulatory relationships through feature selection. With synthetic data, we showed that miRlastic outperformed commonly used methods and was suitable even for low sample sizes. To gain insight into the functional role of miRNAs and to determine joint functional properties of miRNA clusters, we introduced a local enrichment analysis procedure. The principle of this procedure lies in identifying regions of high functional similarity by evaluating the shortest paths between genes in the network. We can finally assign functional roles to the miRNAs by taking their regulatory relationships into account. We thoroughly evaluated miRlastic on a cohort of head and neck cancer (HNSCC) patients provided by The Cancer Genome Atlas. We inferred an mi-/mRNA regulatory network for human papilloma virus (HPV)-associated miRNAs in HNSCC. The resulting network best enriched for experimentally validated miRNA-target interaction, when compared to common methods. Finally, the local enrichment step identified two functional clusters of miRNAs that
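
    As a hedged sketch of the regression core of such an approach (not the authors' code), the snippet below uses elastic-net regression with a sequence-based candidate filter to select putative miRNA regulators of one mRNA; all data and the candidate set are synthetic.

    ```python
    import numpy as np
    from sklearn.linear_model import ElasticNetCV

    rng = np.random.default_rng(0)
    n_samples, n_mirnas = 60, 40
    X = rng.standard_normal((n_samples, n_mirnas))     # miRNA expression matrix
    # mock mRNA responding (negatively) to two true regulators
    y = -1.2 * X[:, 3] - 0.8 * X[:, 17] + 0.3 * rng.standard_normal(n_samples)

    # restrict candidates to sequence-predicted regulators (the prior knowledge)
    candidates = np.r_[0:5, 15:20]
    enet = ElasticNetCV(l1_ratio=0.5, cv=5).fit(X[:, candidates], y)
    selected = candidates[np.abs(enet.coef_) > 1e-6]   # inferred regulators
    print(selected)
    ```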

  3. Dictionary Pair Learning on Grassmann Manifolds for Image Denoising.

    PubMed

    Zeng, Xianhua; Bian, Wei; Liu, Wei; Shen, Jialie; Tao, Dacheng

    2015-11-01

    Image denoising is a fundamental problem in computer vision and image processing that holds considerable practical importance for real-world applications. The traditional patch-based and sparse coding-driven image denoising methods convert 2D image patches into 1D vectors for further processing. Thus, these methods inevitably break down the inherent 2D geometric structure of natural images. To overcome this limitation pertaining to the previous image denoising methods, we propose a 2D image denoising model, namely, the dictionary pair learning (DPL) model, and we design a corresponding algorithm called the DPL on the Grassmann-manifold (DPLG) algorithm. The DPLG algorithm first learns an initial dictionary pair (i.e., the left and right dictionaries) by employing a subspace partition technique on the Grassmann manifold, wherein the refined dictionary pair is obtained through a sub-dictionary pair merging. The DPLG obtains a sparse representation by encoding each image patch only with the selected sub-dictionary pair. The non-zero elements of the sparse representation are further smoothed by the graph Laplacian operator to remove the noise. Consequently, the DPLG algorithm not only preserves the inherent 2D geometric structure of natural images but also performs manifold smoothing in the 2D sparse coding space. We demonstrate that the DPLG algorithm also improves the structural SIMilarity values of the perceptual visual quality for denoised images using the experimental evaluations on the benchmark images and Berkeley segmentation data sets. Moreover, the DPLG also produces the competitive peak signal-to-noise ratio values from popular image denoising algorithms.

  4. Sinogram denoising via simultaneous sparse representation in learned dictionaries.

    PubMed

    Karimi, Davood; Ward, Rabab K

    2016-05-07

    Reducing the radiation dose in computed tomography (CT) is highly desirable but it leads to excessive noise in the projection measurements. This can significantly reduce the diagnostic value of the reconstructed images. Removing the noise in the projection measurements is, therefore, essential for reconstructing high-quality images, especially in low-dose CT. In recent years, two new classes of patch-based denoising algorithms proved superior to other methods in various denoising applications. The first class is based on sparse representation of image patches in a learned dictionary. The second class is based on the non-local means method. Here, the image is searched for similar patches and the patches are processed together to find their denoised estimates. In this paper, we propose a novel denoising algorithm for cone-beam CT projections. The proposed method has similarities to both these algorithmic classes but is more effective and much faster. In order to exploit both the correlation between neighboring pixels within a projection and the correlation between pixels in neighboring projections, the proposed algorithm stacks noisy cone-beam projections together to form a 3D image and extracts small overlapping 3D blocks from this 3D image for processing. We propose a fast algorithm for clustering all extracted blocks. The central assumption in the proposed algorithm is that all blocks in a cluster have a joint-sparse representation in a well-designed dictionary. We describe algorithms for learning such a dictionary and for denoising a set of projections using this dictionary. We apply the proposed algorithm on simulated and real data and compare it with three other algorithms. Our results show that the proposed algorithm outperforms some of the best denoising algorithms, while also being much faster.
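
    The following Python sketch is not the authors' implementation; it only illustrates the block-extraction-and-clustering skeleton of such a method, with a per-cluster low-rank approximation standing in for joint-sparse coding in a learned dictionary. The array sizes, block size, cluster count, and retained rank are all illustrative assumptions.

        # Minimal sketch: cluster 3D blocks from stacked projections and denoise
        # each cluster with a shared low-rank model (a stand-in for the paper's
        # joint-sparse coding in a learned dictionary).
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        stack = rng.normal(size=(16, 64, 64))   # hypothetical stacked projections
        b = 4                                   # block edge length

        # Extract non-overlapping b x b x b blocks, flattening each to a row.
        blocks, index = [], []
        for z in range(0, 16 - b + 1, b):
            for y in range(0, 64 - b + 1, b):
                for x in range(0, 64 - b + 1, b):
                    blocks.append(stack[z:z+b, y:y+b, x:x+b].ravel())
                    index.append((z, y, x))
        blocks = np.array(blocks)

        labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(blocks)

        denoised = np.zeros_like(stack)
        for k in range(8):
            members = blocks[labels == k]
            u, s, vt = np.linalg.svd(members - members.mean(0), full_matrices=False)
            s[3:] = 0.0                         # keep a few shared components
            approx = (u * s) @ vt + members.mean(0)
            where = [ij for ij, l in zip(index, labels) if l == k]
            for row, (z, y, x) in zip(approx, where):
                denoised[z:z+b, y:y+b, x:x+b] = row.reshape(b, b, b)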

  5. Sinogram denoising via simultaneous sparse representation in learned dictionaries

    NASA Astrophysics Data System (ADS)

    Karimi, Davood; Ward, Rabab K.

    2016-05-01

    Reducing the radiation dose in computed tomography (CT) is highly desirable but it leads to excessive noise in the projection measurements. This can significantly reduce the diagnostic value of the reconstructed images. Removing the noise in the projection measurements is, therefore, essential for reconstructing high-quality images, especially in low-dose CT. In recent years, two new classes of patch-based denoising algorithms proved superior to other methods in various denoising applications. The first class is based on sparse representation of image patches in a learned dictionary. The second class is based on the non-local means method. Here, the image is searched for similar patches and the patches are processed together to find their denoised estimates. In this paper, we propose a novel denoising algorithm for cone-beam CT projections. The proposed method has similarities to both these algorithmic classes but is more effective and much faster. In order to exploit both the correlation between neighboring pixels within a projection and the correlation between pixels in neighboring projections, the proposed algorithm stacks noisy cone-beam projections together to form a 3D image and extracts small overlapping 3D blocks from this 3D image for processing. We propose a fast algorithm for clustering all extracted blocks. The central assumption in the proposed algorithm is that all blocks in a cluster have a joint-sparse representation in a well-designed dictionary. We describe algorithms for learning such a dictionary and for denoising a set of projections using this dictionary. We apply the proposed algorithm on simulated and real data and compare it with three other algorithms. Our results show that the proposed algorithm outperforms some of the best denoising algorithms, while also being much faster.

  6. Functional characterization of somatic mutations in cancer using network-based inference of protein activity | Office of Cancer Genomics

    Cancer.gov

    Identifying the multiple dysregulated oncoproteins that contribute to tumorigenesis in a given patient is crucial for developing personalized treatment plans. However, accurate inference of aberrant protein activity in biological samples is still challenging as genetic alterations are only partially predictive and direct measurements of protein activity are generally not feasible.

  7. GPU-Accelerated Denoising in 3D (GD3D)

    SciTech Connect

    2013-10-01

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. This software addresses two facets of this promising application: what tuning is necessary to achieve optimal performance on a modern GPU, and what parameters yield the best denoising results in practice. To answer the first question, the software performs an autotuning step to empirically determine optimal memory blocking on the GPU. To answer the second, it performs a sweep of algorithm parameters to determine the combination that best reduces the mean squared error relative to a noiseless reference image.
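
    A minimal CPU-side sketch (not the GD3D software) of the parameter-sweep idea, using scikit-image's bilateral filter as one candidate filter; the stand-in images, noise level, and parameter grids are assumptions.

        # Sweep bilateral-filter parameters and keep the combination with the
        # lowest MSE against a noiseless reference, mirroring the GD3D sweep idea.
        import numpy as np
        from skimage.restoration import denoise_bilateral

        rng = np.random.default_rng(1)
        reference = np.clip(rng.normal(0.5, 0.1, (64, 64)), 0, 1)  # "clean" stand-in
        noisy = np.clip(reference + rng.normal(0, 0.05, reference.shape), 0, 1)

        best = (np.inf, None)
        for sigma_color in (0.05, 0.1, 0.2):
            for sigma_spatial in (1.0, 2.0, 4.0):
                out = denoise_bilateral(noisy, sigma_color=sigma_color,
                                        sigma_spatial=sigma_spatial)
                mse = np.mean((out - reference) ** 2)
                if mse < best[0]:
                    best = (mse, (sigma_color, sigma_spatial))
        print("best MSE %.5f at sigma_color=%.2f, sigma_spatial=%.1f"
              % (best[0], *best[1]))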

  8. Study on Underwater Image Denoising Algorithm Based on Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Jian, Sun; Wen, Wang

    2017-02-01

    This paper analyzes the application of MATLAB to underwater image processing. The transmission characteristics of the underwater laser light signal and the main kinds of underwater noise are described, and common noise suppression algorithms (the Wiener filter, the median filter, and the average filter) are reviewed. The advantages and disadvantages of each algorithm with respect to image sharpness and edge preservation are then compared. A hybrid filter algorithm based on the wavelet transform is proposed that can be used for color image denoising. Finally, the PSNR and NMSE of each algorithm are reported to compare their denoising performance.

  9. Image denoising with the dual-tree complex wavelet transform

    NASA Astrophysics Data System (ADS)

    Yaseen, Alauldeen S.; Pavlova, Olga N.; Pavlov, Alexey N.; Hramov, Alexander E.

    2016-04-01

    The purpose of this study is to compare image denoising techniques based on real and complex wavelet transforms. Possibilities provided by the classical discrete wavelet transform (DWT) with hard and soft thresholding are considered, and the influences of the wavelet basis and of image resizing are discussed. The quality of image denoising for the standard 2-D DWT and the dual-tree complex wavelet transform (DT-CWT) is studied. It is shown that the DT-CWT outperforms the 2-D DWT given an appropriate selection of the threshold level.
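
    A minimal PyWavelets sketch of the 2-D DWT baseline with hard and soft thresholding discussed above (the DT-CWT itself requires a dedicated package and is not shown); the wavelet, level, and threshold value are illustrative.

        # Hard vs. soft thresholding of 2-D DWT detail coefficients.
        import numpy as np
        import pywt

        rng = np.random.default_rng(2)
        image = rng.normal(size=(128, 128))  # stand-in noisy image

        def dwt_denoise(img, wavelet="db4", level=3, thr=0.5, mode="soft"):
            coeffs = pywt.wavedec2(img, wavelet, level=level)
            out = [coeffs[0]]  # keep the approximation band unchanged
            for details in coeffs[1:]:
                out.append(tuple(pywt.threshold(d, thr, mode=mode) for d in details))
            return pywt.waverec2(out, wavelet)

        hard = dwt_denoise(image, mode="hard")
        soft = dwt_denoise(image, mode="soft")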

  11. Frames-Based Denoising in 3D Confocal Microscopy Imaging.

    PubMed

    Konstantinidis, Ioannis; Santamaria-Pang, Alberto; Kakadiaris, Ioannis

    2005-01-01

    In this paper, we propose a novel denoising method for 3D confocal microscopy data based on robust edge detection. Our approach relies on the construction of a non-separable frame system in 3D that incorporates the Sobel operator in dual spatial directions. This multidirectional set of digital filters is capable of robustly detecting edge information by ensemble thresholding of the filtered data. We demonstrate the application of our method to both synthetic and real confocal microscopy data by comparing it to denoising methods based on separable 3D wavelets and 3D median filtering, and report very encouraging results.
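
    The authors' non-separable frame system is not reproduced here; the sketch below only illustrates the underlying idea of Sobel-based 3-D edge detection followed by edge-preserving smoothing, with an ad hoc percentile threshold as an assumption.

        # Detect 3-D edges with Sobel operators along each axis, then
        # median-filter only the non-edge voxels so edges are preserved.
        import numpy as np
        from scipy import ndimage

        rng = np.random.default_rng(3)
        volume = rng.normal(size=(32, 64, 64))  # stand-in confocal stack

        grad = np.sqrt(sum(ndimage.sobel(volume, axis=a) ** 2 for a in range(3)))
        edges = grad > np.percentile(grad, 90)          # crude edge mask

        smoothed = ndimage.median_filter(volume, size=3)
        denoised = np.where(edges, volume, smoothed)    # smooth only flat regions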

  12. Adaptive image denoising based on support vector machine and wavelet description

    NASA Astrophysics Data System (ADS)

    An, Feng-Ping; Zhou, Xian-Wei

    2017-09-01

    The adaptive image denoising method decomposes the original image into a series of basic pattern feature images on the basis of a wavelet description and constructs a support vector machine (SVM) regression function to realize that wavelet description of the original image. The SVM method allows the linear expansion of the signal to be expressed as a nonlinear function of the parameters associated with the SVM. Using the radial basis kernel function of the SVM, the original image can be expanded into a Mexican hat function component and a residual trend. The Mexican hat component represents a basic image feature pattern. If the residual does not fluctuate, it can also be represented as a feature pattern. If the residual fluctuates significantly, it is treated as a new image and the same decomposition process is repeated until the residual obtained by the decomposition no longer fluctuates significantly. Experimental results show that the proposed method performs well; in particular, it satisfactorily solves the problem of image noise removal. It may provide a new tool and method for image denoising.
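
    A minimal sketch of the SVR building block described above, fit to a 1-D noisy signal rather than an image for brevity; the kernel parameters are illustrative, and the residual could be decomposed again as the abstract describes.

        # RBF-kernel support vector regression as a smooth nonlinear expansion.
        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(4)
        x = np.linspace(0, 1, 200)[:, None]
        clean = np.sin(6 * np.pi * x).ravel()
        noisy = clean + rng.normal(0, 0.2, clean.shape)

        model = SVR(kernel="rbf", C=10.0, epsilon=0.1, gamma=50.0).fit(x, noisy)
        denoised = model.predict(x)
        residual = noisy - denoised  # could be decomposed again, per the abstract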

  13. Comparison of automatic denoising methods for phonocardiograms with extraction of signal parameters via the Hilbert Transform

    NASA Astrophysics Data System (ADS)

    Messer, Sheila R.; Agzarian, John; Abbott, Derek

    2001-05-01

    Phonocardiograms (PCGs) have many advantages over traditional auscultation (listening to the heart) because they may be replayed and analyzed for spectral and frequency content, and because frequencies inaudible to the human ear may be recorded. However, various sources of noise may pollute a PCG, including lung sounds, environmental noise, and noise generated by contact between the recording device and the skin. Because PCG signals are known to be nonlinear and it is often not possible to determine their noise content, traditional de-noising methods may not be applied effectively. However, other methods, including wavelet de-noising, wavelet packet de-noising, and averaging, can be employed to de-noise the PCG. This study examines and compares these de-noising methods: which method gives a better SNR, how much signal information is lost in the de-noising process, and the appropriate uses of the different methods, down to such specifics as which wavelets and decomposition levels give the best results in wavelet and wavelet packet de-noising. In general, wavelet and wavelet packet de-noising performed roughly equally, with optimal de-noising occurring at 3-5 levels of decomposition. Averaging also proved a highly useful de-noising technique; however, in some cases averaging is not appropriate. The Hilbert Transform is used to illustrate the results of the de-noising process and to extract instantaneous features including instantaneous amplitude, frequency, and phase.
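
    A minimal SciPy sketch of the Hilbert-transform feature extraction mentioned above; the test signal and sampling rate are assumptions.

        # Instantaneous amplitude, phase, and frequency via the analytic signal.
        import numpy as np
        from scipy.signal import hilbert

        fs = 1000.0                                    # assumed sampling rate (Hz)
        t = np.arange(0, 1, 1 / fs)
        signal = np.sin(2 * np.pi * 40 * t) * np.exp(-2 * t)

        analytic = hilbert(signal)
        amplitude = np.abs(analytic)                   # instantaneous amplitude
        phase = np.unwrap(np.angle(analytic))          # instantaneous phase
        frequency = np.diff(phase) / (2 * np.pi) * fs  # instantaneous frequency (Hz)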

  14. Region-based image denoising through wavelet and fast discrete curvelet transform

    NASA Astrophysics Data System (ADS)

    Gu, Yanfeng; Guo, Yan; Liu, Xing; Zhang, Ye

    2008-10-01

    Image denoising has long been one of the important research topics in the image processing field. In this paper, the fast discrete curvelet transform (FDCT) and the undecimated wavelet transform (UDWT) are proposed for image denoising. A noisy image is first denoised by the FDCT and the UDWT separately. The whole image space is then divided into edge regions and non-edge regions. After that, a wavelet transform is performed on the images denoised by the FDCT and the UDWT, respectively. Finally, the resultant image is fused from the edge-region wavelet coefficients of the image denoised by the FDCT and the non-edge-region wavelet coefficients of the image denoised by the UDWT. The proposed method is validated through numerical experiments conducted on standard test images. The experimental results show that the proposed algorithm outperforms wavelet-based and curvelet-based image denoising methods and preserves linear features well.

  15. Flowing Dunes of Shangri-La Denoised

    NASA Image and Video Library

    2016-09-07

    This radar image of the Shangri-La Sand Sea on Titan from NASA's Cassini spacecraft shows hundreds of sand dunes, visible as dark lines snaking across the surface. These dunes display patterns of undulation and divergence around elevated mountains (which appear bright to the radar), thereby showing the direction of wind and sand transport on the surface. Sands being carried from left to right (west to east) cannot surmount the tallest obstacles; instead, they are directed through chutes and canyons between the tall features, evident in thin, blade-like, isolated dunes between some bright features. Once sands have passed around the obstacles, they resume their downwind course, at first collecting into small, patchy dunes and then organizing into larger, more pervasive linear forms, before being halted once again by obstacles. These patterns reveal the effects not only of wind -- perhaps even modern winds if the dunes are actively moving today -- but also the effects of underlying bedrock and surrounding topography. Dunes across the solar system aid in our understanding of underlying topography, winds and climate, past and present. Similar patterns can be seen in dunes of the Great Sandy Desert in Australia, where dunes undulate broadly across the uneven terrain and are halted at the margins of sand-trapping lakes. The dune orientations correlate generally with the direction of current trade winds, and reveal that winds must have been similar back when the dunes formed, during the Pleistocene glacial and interglacial periods. The image was taken by the Cassini Synthetic Aperture Radar (SAR) on July 25, 2016 during the mission's 122nd targeted Titan encounter. The image has been modified by the denoising method described in A. Lucas, JGR:Planets (2014). http://photojournal.jpl.nasa.gov/catalog/PIA20711

  16. Discrete shearlet transform on GPU with applications in anomaly detection and denoising

    NASA Astrophysics Data System (ADS)

    Gibert, Xavier; Patel, Vishal M.; Labate, Demetrio; Chellappa, Rama

    2014-12-01

    Shearlets have emerged in recent years as one of the most successful methods for the multiscale analysis of multidimensional signals. Unlike wavelets, shearlets form a pyramid of well-localized functions defined not only over a range of scales and locations, but also over a range of orientations and with highly anisotropic supports. As a result, shearlets are much more effective than traditional wavelets in handling the geometry of multidimensional data, and this has been exploited in a wide range of image and signal processing applications. However, despite their desirable properties, the wider applicability of shearlets is limited by the computational complexity of current software implementations. For example, denoising a single 512 × 512 image using a current implementation of the shearlet-based shrinkage algorithm can take between 10 s and 2 min, depending on the number of CPU cores, and much longer processing times are required for video denoising. On the other hand, due to the parallel nature of the shearlet transform, it is possible to use graphics processing units (GPU) to accelerate its implementation. In this paper, we present an open source stand-alone implementation of the 2D discrete shearlet transform using CUDA C++ as well as GPU-accelerated MATLAB implementations of the 2D and 3D shearlet transforms. We have instrumented the code so that we can analyze the running time of each kernel under different GPU hardware. In addition to denoising, we describe a novel application of shearlets for detecting anomalies in textured images. In this application, computation times can be reduced by a factor of 50 or more, compared to multicore CPU implementations.

  17. Markov Chain Monte Carlo Inference of Parametric Dictionaries for Sparse Bayesian Approximations

    PubMed Central

    Chaspari, Theodora; Tsiartas, Andreas; Tsilifis, Panagiotis; Narayanan, Shrikanth

    2016-01-01

    Parametric dictionaries can increase the ability of sparse representations to meaningfully capture and interpret the underlying signal information, such as that encountered in biomedical problems. Given a mapping function from the atom parameter space to the actual atoms, we propose a sparse Bayesian framework for learning the atom parameters, because of its ability to provide full posterior estimates, take uncertainty into account and generalize to unseen data. Inference is performed with Markov chain Monte Carlo, which uses block sampling to generate the variables of the Bayesian problem. Since the parameterization of dictionary atoms results in posteriors that cannot be computed analytically, we use a Metropolis-Hastings-within-Gibbs framework, in which variables with closed-form posteriors are generated with the Gibbs sampler, while the remaining ones are generated with Metropolis-Hastings steps from appropriate candidate-generating densities. We further show that the corresponding Markov chain is uniformly ergodic, ensuring its convergence to a stationary distribution independently of the initial state. Results on synthetic data and real biomedical signals indicate that our approach offers advantages in terms of signal reconstruction compared to previously proposed Steepest Descent and Equiangular Tight Frame methods. This paper demonstrates the ability of Bayesian learning to generate parametric dictionaries that can reliably represent the exemplar data and provides the foundation towards inferring the entire variable set of the sparse approximation problem for signal denoising, adaptation and other applications. PMID:28649173

  18. Markov Chain Monte Carlo Inference of Parametric Dictionaries for Sparse Bayesian Approximations.

    PubMed

    Chaspari, Theodora; Tsiartas, Andreas; Tsilifis, Panagiotis; Narayanan, Shrikanth

    2016-06-15

    Parametric dictionaries can increase the ability of sparse representations to meaningfully capture and interpret the underlying signal information, such as that encountered in biomedical problems. Given a mapping function from the atom parameter space to the actual atoms, we propose a sparse Bayesian framework for learning the atom parameters, because of its ability to provide full posterior estimates, take uncertainty into account and generalize to unseen data. Inference is performed with Markov chain Monte Carlo, which uses block sampling to generate the variables of the Bayesian problem. Since the parameterization of dictionary atoms results in posteriors that cannot be computed analytically, we use a Metropolis-Hastings-within-Gibbs framework, in which variables with closed-form posteriors are generated with the Gibbs sampler, while the remaining ones are generated with Metropolis-Hastings steps from appropriate candidate-generating densities. We further show that the corresponding Markov chain is uniformly ergodic, ensuring its convergence to a stationary distribution independently of the initial state. Results on synthetic data and real biomedical signals indicate that our approach offers advantages in terms of signal reconstruction compared to previously proposed Steepest Descent and Equiangular Tight Frame methods. This paper demonstrates the ability of Bayesian learning to generate parametric dictionaries that can reliably represent the exemplar data and provides the foundation towards inferring the entire variable set of the sparse approximation problem for signal denoising, adaptation and other applications.
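
    A toy Metropolis-Hastings-within-Gibbs sketch (not the paper's dictionary model): the mean of a Gaussian has a closed-form conditional and is Gibbs-sampled, while the log-variance is updated with a random-walk MH step; the priors and proposal scale are illustrative assumptions.

        # Metropolis-Hastings-within-Gibbs on a toy Gaussian posterior.
        import numpy as np

        rng = np.random.default_rng(5)
        data = rng.normal(2.0, 1.5, size=100)
        n, xbar = data.size, data.mean()

        mu, log_var = 0.0, 0.0
        samples = []
        for it in range(5000):
            # Gibbs step: mu | var, data ~ N(xbar, var/n) under a flat prior.
            mu = rng.normal(xbar, np.sqrt(np.exp(log_var) / n))

            # MH step for log_var with a Gaussian random-walk proposal.
            def log_post(lv):
                return -0.5 * n * lv - 0.5 * np.sum((data - mu) ** 2) / np.exp(lv)
            prop = log_var + rng.normal(0, 0.3)
            if np.log(rng.uniform()) < log_post(prop) - log_post(log_var):
                log_var = prop
            samples.append((mu, np.exp(log_var)))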

  19. Impedance cardiography signal denoising using discrete wavelet transform.

    PubMed

    Chabchoub, Souhir; Mansouri, Sofienne; Salah, Ridha Ben

    2016-09-01

    Impedance cardiography (ICG) is a non-invasive technique for diagnosing cardiovascular diseases. During acquisition, the ICG signal is often affected by several kinds of noise which distort the determination of the hemodynamic parameters; as a result, doctors cannot recognize the ICG waveform correctly and the diagnosis of cardiovascular diseases becomes inaccurate. The aim of this work is to choose the most suitable method for denoising the ICG signal. To that end, different wavelet families are used to denoise the ICG signal. The Haar, Daubechies (db2, db4, db6, and db8), Symlet (sym2, sym4, sym6, sym8) and Coiflet (coif2, coif3, coif4, coif5) wavelet families are tested and evaluated in order to select the most suitable denoising method. The wavelet family with the best performance is compared with two further denoising methods: one based on Savitzky-Golay filtering and the other based on median filtering. Each method is evaluated by means of the signal-to-noise ratio (SNR), the root mean square error (RMSE) and the percent difference root mean square (PRD). The results show that the Daubechies wavelet family (db8) has superior noise-reduction performance in comparison to the other methods.
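
    A minimal PyWavelets sketch of the comparison protocol: denoise a synthetic signal with several of the wavelet families listed above and rank them by SNR; the test signal, universal threshold, and decomposition level are assumptions.

        # Compare wavelet families by SNR of the denoised signal.
        import numpy as np
        import pywt

        rng = np.random.default_rng(6)
        t = np.linspace(0, 1, 1024)
        clean = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)
        noisy = clean + rng.normal(0, 0.3, clean.shape)

        def wavelet_denoise(sig, wavelet, level=5):
            coeffs = pywt.wavedec(sig, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise estimate
            thr = sigma * np.sqrt(2 * np.log(sig.size))          # universal threshold
            coeffs = [coeffs[0]] + [pywt.threshold(c, thr, "soft") for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)

        for w in ("haar", "db2", "db8", "sym4", "coif3"):
            est = wavelet_denoise(noisy, w)[: clean.size]
            snr = 10 * np.log10(np.sum(clean ** 2) / np.sum((clean - est) ** 2))
            print(w, round(snr, 2), "dB")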

  20. Image denoising with dominant sets by a coalitional game approach.

    PubMed

    Hsiao, Pei-Chi; Chang, Long-Wen

    2013-02-01

    Dominant sets are a new graph partition method for pairwise data clustering proposed by Pavan and Pelillo. We address the problem of dominant sets with a coalitional game model, in which each data point is treated as a player and similar data points are encouraged to group together for cooperation. We propose betrayal and hermit rules to describe the cooperative behaviors among the players. After applying the betrayal and hermit rules, an optimal and stable graph partition emerges, and no player in the partition will change its group. For computational feasibility, we design an approximate algorithm for finding a dominant set of mutually similar players and then apply the algorithm to applications such as image denoising. In image denoising, every pixel is treated as a player who seeks similar partners according to its patch appearance in its local neighborhood. By averaging out the noise with the similar pixels in the dominant sets, we improve non-local means image denoising to restore the intrinsic structure of the original images and achieve denoising results competitive with state-of-the-art methods in both visual and quantitative quality.

  1. Image denoising using the higher order singular value decomposition.

    PubMed

    Rajwade, Ajit; Rangarajan, Anand; Banerjee, Arunava

    2013-04-01

    In this paper, we propose a very simple and elegant patch-based, machine learning technique for image denoising using the higher order singular value decomposition (HOSVD). The technique simply groups together similar patches from a noisy image (with similarity defined by a statistically motivated criterion) into a 3D stack, computes the HOSVD coefficients of this stack, manipulates these coefficients by hard thresholding, and inverts the HOSVD transform to produce the final filtered image. Our technique chooses all required parameters in a principled way, relating them to the noise model. We also discuss our motivation for adopting the HOSVD as an appropriate transform for image denoising. We experimentally demonstrate the excellent performance of the technique on grayscale as well as color images. On color images, our method produces state-of-the-art results, outperforming other color image denoising algorithms at moderately high noise levels. A criterion for optimal patch-size selection and noise variance estimation from the residual images (after denoising) is also presented.
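
    A minimal numpy sketch of HOSVD hard-threshold filtering on a stack of similar patches; the principled, noise-model-driven parameter choices of the paper are replaced by illustrative values.

        # HOSVD of a (p, p, K) stack of similar patches, hard thresholding of the
        # core tensor, and inverse transform.
        import numpy as np

        def unfold(T, mode):
            return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

        def hosvd_denoise(stack, thr):
            U = [np.linalg.svd(unfold(stack, m), full_matrices=False)[0]
                 for m in range(3)]
            core = np.einsum("abc,ai,bj,ck->ijk", stack, U[0], U[1], U[2])
            core[np.abs(core) < thr] = 0.0      # hard thresholding
            return np.einsum("ijk,ai,bj,ck->abc", core, U[0], U[1], U[2])

        rng = np.random.default_rng(7)
        patch = np.outer(np.hanning(8), np.hanning(8))
        stack = np.repeat(patch[:, :, None], 20, axis=2)
        stack += rng.normal(0, 0.1, stack.shape)
        filtered = hosvd_denoise(stack, thr=0.3)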

  2. Local Sparse Structure Denoising for Low-Light-Level Image.

    PubMed

    Han, Jing; Yue, Jiang; Zhang, Yi; Bai, Lianfa

    2015-12-01

    Sparse and redundant representations perform well in image denoising. However, sparsity-based methods fail to denoise low-light-level (LLL) images because of their heavy and complex noise: they consider sparsity on image patches independently and tend to lose the texture structures. To suppress noise and maintain textures simultaneously, it is necessary to embed noise-invariant features into the sparse decomposition process. We therefore used a local structure preserving sparse coding (LSPSc) formulation to explore the local sparse structures (both the sparsity and the local structure) in an image. It was found that, with the introduction of a spatial local structure constraint into the general sparse coding algorithm, LSPSc could improve the robustness of sparse representation for patches under serious noise. We further used a kernel LSPSc (K-LSPSc) formulation, which extends LSPSc into the kernel space to weaken the influence of the linear structure constraint on nonlinear data. Based on the robust LSPSc and K-LSPSc algorithms, we constructed a local sparse structure denoising (LSSD) model for LLL images, which was demonstrated to give high performance in denoising natural LLL images, indicating that both the LSPSc- and K-LSPSc-based LSSD models have the stable property of noise inhibition and texture detail preservation.

  3. Image denoising via adaptive eigenvectors of graph Laplacian

    NASA Astrophysics Data System (ADS)

    Chen, Ying; Tang, Yibin; Xu, Ning; Zhou, Lin; Zhao, Li

    2016-07-01

    An image denoising method via adaptive eigenvectors of the graph Laplacian (EGL) is proposed. Unlike the fixed parameter setting used to select eigenvectors in the traditional EGL method, in our method the eigenvectors are adaptively selected throughout the denoising procedure. In detail, a rough image is first built with the eigenvectors from the noisy image, where the eigenvectors are selected by using a deviation estimate of the clean image. Subsequently, a guided image is effectively restored with a weighted average of the noisy and rough images. In this operation, the averaging coefficient is adaptively obtained so as to set the deviation of the guided image approximately to that of the clean image. Finally, the denoised image is achieved by a group-sparse model with the pattern from the guided image, where the eigenvectors are chosen under the error control of the noise deviation. Moreover, a modified group orthogonal matching pursuit algorithm is developed to efficiently solve the above group-sparse model. The experiments show that our method not only improves the practicality of EGL methods by reducing the dependence on parameter settings, but also can outperform some well-developed denoising methods, especially for noise with large deviations.

  4. Pixon Based Image Denoising Scheme by Preserving Exact Edge Locations

    NASA Astrophysics Data System (ADS)

    Srikrishna, Atluri; Reddy, B. Eswara; Pompapathi, Manasani

    2016-09-01

    Denoising of an image is an essential step in many image processing applications. In any image de-noising algorithm, it is a major concern to keep interesting structures of the image, such as abrupt changes in image intensity values (edges). In this paper, an efficient algorithm for image de-noising is proposed that recovers an integrated, continuous original image from the noisy image using diffusion equations in the pixon domain. The process consists mainly of two steps. In the first step, the pixons for the noisy image are obtained by a K-means clustering process; the second step applies diffusion equations to the pixonal model of the image to obtain new intensity values for the restored image. The process has been applied to a variety of standard images and the objective fidelity has been compared with existing algorithms. The experimental results show that the proposed algorithm performs better at preserving edge details, in terms of Figure of Merit, and yields improved peak signal-to-noise ratio values. The proposed method thus provides a denoising technique that preserves edge details.

  5. Enhancement of signal denoising and multiple fault signatures detecting in rotating machinery using dual-tree complex wavelet transform

    NASA Astrophysics Data System (ADS)

    Wang, Yanxue; He, Zhengjia; Zi, Yanyang

    2010-01-01

    In order to enhance the desired features related to certain types of machine fault, a technique based on the dual-tree complex wavelet transform (DTCWT) is proposed in this paper. It is demonstrated by means of numerical simulations that the DTCWT enjoys better shift invariance and reduced spectral aliasing than the second-generation wavelet transform (SGWT) and empirical mode decomposition. These advantages of the DTCWT arise from the relationship between its two dual-tree wavelet basis functions, rather than from matching a single wavelet basis function to the signal being analyzed. Since noise inevitably exists in measured signals, an enhanced vibration-signal denoising algorithm incorporating the DTCWT with NeighCoeff shrinkage is also developed. Denoising results for vibration signals from a cracked gear indicate that the proposed method can effectively remove noise and retain as much of the valuable information as possible, compared with DWT- and SGWT-based NeighCoeff shrinkage denoising methods. As is well known, extraction of the comprehensive signatures embedded in vibration signals is of practical importance to clearly identify the roots of a fault, especially combined faults. In the case of multiple feature detection, diagnosis results for rolling element bearings with combined faults and for actual industrial equipment confirm that the proposed DTCWT-based method is a powerful and versatile tool that consistently outperforms the SGWT and the fast kurtogram, which have been widely used recently. Moreover, the proposed method is well suited for on-line surveillance and diagnosis due to its good robustness and efficient algorithm.

  6. Large-scale inference of gene function through phylogenetic annotation of Gene Ontology terms: case study of the apoptosis and autophagy cellular processes

    PubMed Central

    Feuermann, Marc; Gaudet, Pascale; Mi, Huaiyu; Lewis, Suzanna E.; Thomas, Paul D.

    2016-01-01

    We previously reported a paradigm for large-scale phylogenomic analysis of gene families that takes advantage of the large corpus of experimentally supported Gene Ontology (GO) annotations. This ‘GO Phylogenetic Annotation’ approach integrates GO annotations from evolutionarily related genes across ∼100 different organisms in the context of a gene family tree, in which curators build an explicit model of the evolution of gene functions. GO Phylogenetic Annotation models the gain and loss of functions in a gene family tree, which is used to infer the functions of uncharacterized (or incompletely characterized) gene products, even for human proteins that are relatively well studied. Here, we report our results from applying this paradigm to two well-characterized cellular processes, apoptosis and autophagy. This revealed several important observations with respect to GO annotations and how they can be used for function inference. Notably, we applied only a small fraction of the experimentally supported GO annotations to infer function in other family members. The majority of other annotations describe indirect effects, phenotypes or results from high throughput experiments. In addition, we show here how feedback from phylogenetic annotation leads to significant improvements in the PANTHER trees, the GO annotations and GO itself. Thus GO phylogenetic annotation both increases the quantity and improves the accuracy of the GO annotations provided to the research community. We expect these phylogenetically based annotations to be of broad use in gene enrichment analysis as well as other applications of GO annotations. Database URL: http://amigo.geneontology.org/amigo PMID:28025345

  7. Stacked Denoising Autoencoders Applied to Star/Galaxy Classification

    NASA Astrophysics Data System (ADS)

    Hao-ran, Qin; Ji-ming, Lin; Jun-yi, Wang

    2017-04-01

    In recent years, deep learning algorithms, characterized by strong adaptability, high accuracy, and structural complexity, have become more and more popular, but they had not yet been used in astronomy. In order to address the problem that star/galaxy classification accuracy is high for the bright source set but low for the faint source set of the Sloan Digital Sky Survey (SDSS) data, we introduced a new deep learning algorithm, namely the SDA (stacked denoising autoencoder) neural network with the dropout fine-tuning technique, which can greatly improve robustness and noise resistance. We randomly selected bright source sets and faint source sets from the SDSS DR12 and DR7 data with spectroscopic measurements and preprocessed them. Then, we randomly selected training and testing sets without replacement from the bright and faint source sets. Finally, using these training sets we trained SDA models for the bright and faint sources in the SDSS DR7 and DR12, respectively. We compared the test result of the SDA model on the DR12 testing set with the test results of the Library for Support Vector Machines (LibSVM), J48 decision tree, Logistic Model Tree (LMT), Support Vector Machine (SVM), Logistic Regression, and Decision Stump algorithms, and compared the test result of the SDA model on the DR7 testing set with the test results of six kinds of decision trees. The experiments show that the SDA has better classification accuracy than the other machine learning algorithms for the faint source sets of DR7 and DR12. In particular, when the completeness function is used as the evaluation index, the correctness rate of the SDA improves by about 15% relative to the decision tree algorithms on the faint source set of SDSS-DR7.
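
    A tiny stand-in (not the paper's SDA with dropout fine-tuning) showing the denoising-autoencoder idea with scikit-learn: a network is trained to map corrupted feature vectors back to their clean versions; the data and layer sizes are assumptions. A stacked version would train such layers one at a time and then fine-tune the whole network.

        # Denoising autoencoder as multi-output regression: noisy inputs -> clean.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(8)
        clean = rng.uniform(size=(2000, 16))             # stand-in feature vectors
        noisy = clean + rng.normal(0, 0.1, clean.shape)  # corruption step

        dae = MLPRegressor(hidden_layer_sizes=(8,), activation="relu",
                           max_iter=500, random_state=0).fit(noisy, clean)
        reconstructions = dae.predict(noisy)             # denoised outputs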

  8. Image denoising algorithm based on contourlet transform for optical coherence tomography heart tube image

    PubMed Central

    Guo, Qing; Dong, Fangmin; Sun, Shuifa; Lei, Bangjun; Gao, Bruce Z.

    2016-01-01

    Optical coherence tomography (OCT) is becoming an increasingly important imaging technology in the biomedical field. However, the application of OCT is limited by ubiquitous noise. In this study, the noise of OCT heart tube images is first verified as being multiplicative, based on local statistics (i.e., the linear relationship between the mean and the standard deviation of certain flat areas). The variance of the noise is evaluated in the log domain. Based on these findings, a joint probability density function is constructed to take into account the inter-direction dependency in the contourlet domain of the logarithmically transformed image. Then, a bivariate shrinkage function is derived to denoise the image by maximum a posteriori estimation. Systematic comparative experiments are made on synthetic images, OCT heart tube images and other OCT tissue images, using subjective assessment and objective metrics. The experimental results are analysed in terms of the denoising quality and the degree to which the proposed algorithm outperforms the wavelet-based algorithm. The results show that the proposed algorithm improves the signal-to-noise ratio while preserving the edges, and has advantages for images containing multi-directional information, such as OCT heart tube images. PMID:27087835

  9. A fast non-local image denoising algorithm

    NASA Astrophysics Data System (ADS)

    Dauwe, A.; Goossens, B.; Luong, H. Q.; Philips, W.

    2008-02-01

    In this paper we propose several improvements to the original non-local means algorithm introduced by Buades et al., which obtains state-of-the-art denoising results. The strength of this algorithm is that it exploits the repetitive character of the image in order to denoise it, unlike conventional denoising algorithms, which typically operate in a local neighbourhood. Due to the enormous number of weight computations, the original algorithm has a high computational cost. One improvement of image quality over the original algorithm is to ignore the contributions from dissimilar windows: even though their weights are very small at first sight, the new estimated pixel value can be severely biased by the many small contributions. This bad influence of dissimilar windows can be eliminated by setting their corresponding weights to zero. Using a preclassification based on the first three statistical moments, only contributions from similar neighborhoods are computed. To decide whether a window is similar or dissimilar, we derive thresholds for images corrupted with additive white Gaussian noise. Our accelerated approach is further optimized by taking advantage of the symmetry in the weights, which roughly halves the computation time, and by using a lookup table to speed up the weight computations. Compared to the original algorithm, our proposed method produces images with increased PSNR and better visual performance in less computation time. Our proposed method even outperforms state-of-the-art wavelet denoising techniques in both visual quality and PSNR for images containing a lot of repetitive structures such as textures: the denoised images are much sharper and contain fewer artifacts. The proposed optimizations can also be applied in other image processing tasks which employ the concept of repetitive structures, such as intra-frame super-resolution or detection of digital image forgery.
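
    A minimal sketch of non-local means with moment-based preclassification: candidate windows whose mean differs too much from the reference window get zero weight. The thresholds and parameters here are ad hoc (not the derived thresholds of the paper), and the implementation favors clarity over speed.

        # Non-local means with first-moment preclassification of windows.
        import numpy as np

        rng = np.random.default_rng(9)
        img = rng.normal(0.5, 0.05, (48, 48))
        f, s, h = 3, 7, 0.05          # patch radius, search radius, decay

        pad = np.pad(img, f, mode="reflect")
        out = np.zeros_like(img)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                ref = pad[i:i+2*f+1, j:j+2*f+1]
                wsum, acc = 0.0, 0.0
                for di in range(-s, s+1):
                    for dj in range(-s, s+1):
                        ii = min(max(i+di, 0), img.shape[0]-1)
                        jj = min(max(j+dj, 0), img.shape[1]-1)
                        cand = pad[ii:ii+2*f+1, jj:jj+2*f+1]
                        if abs(cand.mean() - ref.mean()) > 0.02:
                            continue  # preclassification: skip dissimilar windows
                        w = np.exp(-np.mean((cand - ref) ** 2) / h**2)
                        wsum += w
                        acc += w * img[ii, jj]
                out[i, j] = acc / max(wsum, 1e-12)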

  10. Multitaper Spectral Analysis and Wavelet Denoising Applied to Helioseismic Data

    NASA Technical Reports Server (NTRS)

    Komm, R. W.; Gu, Y.; Hill, F.; Stark, P. B.; Fodor, I. K.

    1999-01-01

    Estimates of solar normal mode frequencies from helioseismic observations can be improved by using Multitaper Spectral Analysis (MTSA) to estimate spectra from the time series, then using wavelet denoising of the log spectra. MTSA leads to a power spectrum estimate with reduced variance and better leakage properties than the conventional periodogram. Under the assumption of stationarity and mild regularity conditions, the log multitaper spectrum has a statistical distribution that is approximately Gaussian, so wavelet denoising is asymptotically an optimal method to reduce the noise in the estimated spectra. We find that a single m-upsilon spectrum benefits greatly from MTSA followed by wavelet denoising, and that wavelet denoising by itself can be used to improve m-averaged spectra. We compare estimates using two different 5-taper estimates (Slepian and sine tapers) and the periodogram estimate, for GONG time series at selected angular degrees l. We compare those three spectra with and without wavelet denoising, both visually and in terms of the mode parameters estimated from the pre-processed spectra using the GONG peak-fitting algorithm. The two multitaper estimates give equivalent results. The number of modes fitted well by the GONG algorithm is 20% to 60% larger (depending on l and the temporal frequency) when applied to the multitaper estimates than when applied to the periodogram. The estimated mode parameters (frequency, amplitude and width) are comparable for the three power spectrum estimates, except for modes with very small mode widths (a few frequency bins), where the multitaper spectra broadened the modes compared with the periodogram. We tested the influence of the number of tapers used and found that narrow modes at low n values are broadened to the extent that they can no longer be fit if the number of tapers is too large. For helioseismic time series of this length and temporal resolution, the optimal number of tapers is less than 10.
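
    A minimal sketch of the pipeline: average periodograms over Slepian (DPSS) tapers, then soft-threshold the wavelet coefficients of the log spectrum. The taper count, wavelet, and threshold are illustrative assumptions rather than the GONG processing choices.

        # Multitaper spectrum estimate followed by wavelet denoising of its log.
        import numpy as np
        import pywt
        from scipy.signal.windows import dpss

        rng = np.random.default_rng(10)
        n = 2048
        x = np.sin(2 * np.pi * 0.1 * np.arange(n)) + rng.normal(0, 1.0, n)

        tapers = dpss(n, NW=4, Kmax=5)                  # 5 Slepian tapers
        spectra = [np.abs(np.fft.rfft(x * tap)) ** 2 for tap in tapers]
        mt_spectrum = np.mean(spectra, axis=0)          # reduced-variance estimate

        log_s = np.log(mt_spectrum + 1e-12)             # approximately Gaussian noise
        coeffs = pywt.wavedec(log_s, "sym8", level=5)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        thr = sigma * np.sqrt(2 * np.log(log_s.size))
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, "soft") for c in coeffs[1:]]
        denoised_log_spectrum = pywt.waverec(coeffs, "sym8")[: log_s.size]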

  11. Dictionary-based image denoising for dual energy computed tomography

    NASA Astrophysics Data System (ADS)

    Mechlem, Korbinian; Allner, Sebastian; Mei, Kai; Pfeiffer, Franz; Noël, Peter B.

    2016-03-01

    Compared to conventional computed tomography (CT), dual energy CT allows for improved material decomposition by conducting measurements at two distinct energy spectra. Since radiation exposure is a major concern in clinical CT, there is a need for tools to reduce the noise level in images while preserving diagnostic information. One way to achieve this goal is the application of image-based denoising algorithms after an analytical reconstruction has been performed. We have developed a modified dictionary denoising algorithm for dual energy CT aimed at exploiting the high spatial correlation between images obtained from different energy spectra. Both the low- and high-energy images are partitioned into small patches which are subsequently normalized. Combined patches with improved signal-to-noise ratio are formed by a weighted addition of corresponding normalized patches from both images. Assuming that corresponding low- and high-energy image patches are related by a linear transformation, the signal in both patches adds coherently while the noise does not. Conventional dictionary denoising is then performed on the combined patches. Compared to conventional dictionary denoising and bilateral filtering, our algorithm achieved superior performance in terms of qualitative and quantitative image quality measures. We demonstrate, in simulation studies, that this approach can produce 2D histograms of the high- and low-energy reconstructions which are characterized by significantly improved material features and separation. Moreover, in comparison to other approaches that attempt denoising without simultaneously using both energy signals, superior similarity to the ground truth can be achieved with our proposed algorithm.

  12. Customized maximal-overlap multiwavelet denoising with data-driven group threshold for condition monitoring of rolling mill drivetrain

    NASA Astrophysics Data System (ADS)

    Chen, Jinglong; Wan, Zhiguo; Pan, Jun; Zi, Yanyang; Wang, Yu; Chen, Binqiang; Sun, Hailiang; Yuan, Jing; He, Zhengjia

    2016-02-01

    Timely fault identification in a rolling mill drivetrain is significant for guaranteeing product quality and realizing long-term safe operation, and a condition monitoring system for the rolling mill drivetrain has therefore been designed and developed. However, because compound-fault and weak-fault feature information is usually submerged in heavy background noise, this task remains challenging. This paper provides a possibility for fault identification of rolling mill drivetrains by proposing a customized maximal-overlap multiwavelet denoising method. The effectiveness of a wavelet denoising method relies mainly on the appropriate selection of the wavelet basis, the transform strategy and the threshold rule. First, in order to realize exact matching and accurate detection of fault features, a customized multiwavelet basis function is constructed via a symmetric lifting scheme, and the vibration signal is then processed by the maximal-overlap multiwavelet transform. Next, based on the spatial dependency of multiwavelet transform coefficients, a spatial neighboring coefficient data-driven group threshold shrinkage strategy is developed for the denoising process, choosing the optimal group length and threshold via the minimum of Stein's Unbiased Risk Estimate. The effectiveness of the proposed method is first demonstrated through compound fault identification of a reduction gearbox on a rolling mill. It is then applied to weak fault identification of a dedusting fan bearing on a rolling mill, and the results support its feasibility.

  13. A hybrid fault diagnosis method based on second generation wavelet de-noising and local mean decomposition for rotating machinery.

    PubMed

    Liu, Zhiwen; He, Zhengjia; Guo, Wei; Tang, Zhangchun

    2016-03-01

    In order to extract fault features of large-scale power equipment from strong background noise, a hybrid fault diagnosis method based on second generation wavelet de-noising (SGWD) and local mean decomposition (LMD) is proposed in this paper. In this method, a de-noising algorithm of the second generation wavelet transform (SGWT) using neighboring coefficients is employed as a pretreatment to remove noise from rotating machinery vibration signals, by virtue of its good effect in enhancing the signal-to-noise ratio (SNR). Then, the LMD method is used to decompose the de-noised signals into several product functions (PFs). The PF corresponding to the faulty feature signal is selected according to a correlation coefficient criterion. Finally, the frequency spectrum is analyzed by applying the FFT to the selected PF. The proposed method is applied to analyze the vibration signals collected from an experimental gearbox and a real locomotive rolling bearing. The results demonstrate that the proposed method has better performance, such as a higher SNR and faster convergence speed, than the normal LMD method.

  14. Patch-wise denoising of phase fringe patterns based on matrix enhancement

    NASA Astrophysics Data System (ADS)

    Kulkarni, Rishikesh; Rastogi, Pramod

    2016-12-01

    We propose a new approach for denoising a phase fringe pattern recorded in an optical interferometric setup. The phase fringe pattern, which is generally corrupted by a high amount of speckle noise, is first converted into an exponential phase field. This phase field is divided into a number of overlapping patches. Owing to the small size of each patch, a simple structure of the interference phase can be assumed within it. Accordingly, the singular value decomposition (SVD) of the patch allows us to separate the signal and noise components effectively, and the patch is reconstructed with the signal component only. In order to further improve the robustness of the proposed method, an enhanced data matrix is generated from the patch and the SVD of this enhanced matrix is computed. The matrix enhancement results in an increased dimension of the noise subspace, which thus accommodates a larger amount of the noise component. Reassigning the filtered pixels of the preceding patch in the current patch improves the noise filtering accuracy. The fringe denoising capability as a function of the noise level and the patch size is studied. Simulation and experimental results are provided to demonstrate the practical applicability of the proposed method.
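
    A minimal sketch of the patch-wise SVD idea on a synthetic exponential phase field; the matrix-enhancement and pixel-reassignment steps of the paper are omitted, and the patch size and retained rank are assumptions.

        # Keep only the dominant singular components of each patch of a noisy
        # complex exponential phase field.
        import numpy as np

        rng = np.random.default_rng(11)
        yy, xx = np.mgrid[0:128, 0:128]
        phase = 0.002 * (xx**2 + yy**2)                 # synthetic interference phase
        field = np.exp(1j * phase) * np.exp(1j * rng.normal(0, 0.8, phase.shape))

        p, rank = 16, 2
        out = np.zeros_like(field)
        for i in range(0, 128, p):
            for j in range(0, 128, p):
                patch = field[i:i+p, j:j+p]
                u, s, vh = np.linalg.svd(patch, full_matrices=False)
                s[rank:] = 0.0                          # drop the noise subspace
                out[i:i+p, j:j+p] = (u * s) @ vh
        filtered_phase = np.angle(out)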

  15. Robust Nonlinear Regression: A Greedy Approach Employing Kernels With Application to Image Denoising

    NASA Astrophysics Data System (ADS)

    Papageorgiou, George; Bouboulis, Pantelis; Theodoridis, Sergios

    2017-08-01

    We consider the task of robust non-linear regression in the presence of both inlier noise and outliers. Assuming that the unknown non-linear function belongs to a Reproducing Kernel Hilbert Space (RKHS), our goal is to estimate the set of the associated unknown parameters. Due to the presence of outliers, common techniques such as the Kernel Ridge Regression (KRR) or the Support Vector Regression (SVR) turn out to be inadequate. Instead, we employ sparse modeling arguments to explicitly model and estimate the outliers, adopting a greedy approach. The proposed robust scheme, i.e., Kernel Greedy Algorithm for Robust Denoising (KGARD), is inspired by the classical Orthogonal Matching Pursuit (OMP) algorithm. Specifically, the proposed method alternates between a KRR task and an OMP-like selection step. Theoretical results concerning the identification of the outliers are provided. Moreover, KGARD is compared against other cutting edge methods, where its performance is evaluated via a set of experiments with various types of noise. Finally, the proposed robust estimation framework is applied to the task of image denoising, and its enhanced performance in the presence of outliers is demonstrated.

  16. A de-noising algorithm to improve SNR of segmented gamma scanner for spectrum analysis

    NASA Astrophysics Data System (ADS)

    Li, Huailiang; Tuo, Xianguo; Shi, Rui; Zhang, Jinzhao; Henderson, Mark Julian; Courtois, Jérémie; Yan, Minhao

    2016-05-01

    An improved threshold shift-invariant wavelet transform de-noising algorithm for high-resolution gamma-ray spectroscopy is proposed to optimize the threshold function of the wavelet transform and to reduce the signal distortion resulting from pseudo-Gibbs artificial fluctuations. This algorithm was applied to a segmented gamma scanning system for large samples, in which high continuum levels caused by Compton scattering are routinely encountered. De-noising of gamma-ray spectra measured by the segmented gamma scanning system was evaluated for the improved, shift-invariant and traditional wavelet transform algorithms. The improved wavelet transform method yielded significantly enhanced performance in terms of the figure of merit, the root mean square error, the peak area, and the sample attenuation correction in the segmented gamma scanning assays. We also found through spectrum analysis that the gamma energy spectrum can be viewed as a superposition of a low-frequency signal and high-frequency noise. Moreover, the smoothed spectrum is well suited to straightforward automated quantitative analysis.
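
    A minimal PyWavelets sketch of shift-invariant de-noising by cycle spinning, which averages the de-noised results of circularly shifted copies to suppress pseudo-Gibbs oscillations; the toy spectrum, wavelet, and threshold are assumptions.

        # Translation-invariant wavelet de-noising via cycle spinning.
        import numpy as np
        import pywt

        rng = np.random.default_rng(12)
        spectrum = np.exp(-0.5 * ((np.arange(1024) - 400) / 6.0) ** 2)  # toy photopeak
        spectrum = spectrum + rng.normal(0, 0.02, spectrum.shape)

        def soft_dwt(sig, thr, wavelet="sym6", level=5):
            c = pywt.wavedec(sig, wavelet, level=level)
            c = [c[0]] + [pywt.threshold(d, thr, "soft") for d in c[1:]]
            return pywt.waverec(c, wavelet)[: sig.size]

        # De-noise shifted copies, unshift, and average.
        den = np.mean([np.roll(soft_dwt(np.roll(spectrum, s), 0.06), -s)
                       for s in range(16)], axis=0)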

  17. Multi-attribute utility function or statistical inference models: a comparison of health state valuation models using the HUI2 health state classification system.

    PubMed

    Stevens, Katherine; McCabe, Christopher; Brazier, John; Roberts, Jennifer

    2007-09-01

    A key issue in health state valuation modelling is the choice of functional form. The two most frequently used preference-based instruments adopt different approaches: one is based on multi-attribute utility theory (MAUT), the other on statistical analysis. There has been no comparison of these alternative approaches in the context of health economics. We report a comparison of these approaches for the Health Utilities Index Mark 2. The statistical inference model predicts more accurately than the one based on MAUT. We discuss possible explanations for the differences in performance, the importance of the findings, and implications for future research.

  18. Bayesian inferences of galaxy formation from the K-band luminosity and H I mass functions of galaxies: constraining star formation and feedback

    NASA Astrophysics Data System (ADS)

    Lu, Yu; Mo, H. J.; Lu, Zhankui; Katz, Neal; Weinberg, Martin D.

    2014-09-01

    We infer mechanisms of galaxy formation for a broad family of semi-analytic models (SAMs) constrained by the K-band luminosity function and H I mass function of local galaxies using tools of Bayesian analysis. Even with a broad search in parameter space, the whole model family fails to match the constraining data. In the best-fitting models, the star formation and feedback parameters in low-mass haloes are tightly constrained by the two data sets, and the analysis reveals several generic failures of models that apply similarly to other existing SAMs. First, based on the assumption that baryon accretion follows the dark matter accretion, large mass-loading factors are required for haloes with circular velocities lower than 200 km s-1, and most of the wind mass must be expelled from the haloes. Second, assuming that the feedback is powered by Type II supernovae with a Chabrier initial mass function, the outflow requires more than 25 per cent of the available supernova kinetic energy. Finally, the posterior predictive distributions for the star formation history are dramatically inconsistent with observations for masses similar to or smaller than the Milky Way mass. The inferences suggest that the current model family is still missing some key physical processes that regulate the gas accretion and star formation in galaxies with masses below that of the Milky Way.

  19. Despeckling SRTM And Other Topographic Data With A Denoising Algorithm

    NASA Astrophysics Data System (ADS)

    Stevenson, J. A.; Sun, X.; Mitchell, N. C.

    2012-12-01

    Noise in topographic data obscures features and increases error in geomorphic products calculated from DEMs. DEMs produced by radar remote sensing, such as SRTM, are frequently used for geomorphological studies, but they often contain speckle noise which may significantly lower the quality of geomorphometric analyses. We introduce here an algorithm that denoises three-dimensional objects while preserving sharp features. It is free to download and simple to use. In this study the algorithm is applied to topographic data (synthetic landscapes, SRTM, TOPSAR) and the results are compared against those of a mean filter, using LiDAR data as ground truth for the natural datasets. The level of denoising is controlled by two parameters: the threshold (T) that controls the sharpness of the features to be preserved, and the number of iterations (n) that controls how much the data are changed. The optimum settings depend on the nature of the topography and of the noise to be removed, but are typically in the range T = 0.87-0.99 and n = 1-10. If the threshold is too high, noise is preserved. A lower threshold setting is used where noise is spatially uncorrelated (e.g. TOPSAR), whereas in some other datasets (e.g. SRTM), where filtering of the data during processing has introduced spatial correlation to the noise, higher thresholds can be used. Compared to data filtered to an equivalent level with a mean filter, data smoothed by the denoising algorithm of Sun et al. [Sun, X., Rosin, P.L., Martin, R.R., Langbein, F.C., 2007. Fast and effective feature-preserving mesh denoising. IEEE Transactions on Visualisation and Computer Graphics 13, 925-938] are closer to the original data and to the ground truth. Changes to the data are smaller and less correlated with topographic features. Furthermore, the feature-preserving nature of the algorithm allows significant smoothing to be applied to flat areas of topography while limiting the alterations made in mountainous regions, with clear benefits

  20. System-level insights into the cellular interactome of a non-model organism: inferring, modelling and analysing functional gene network of soybean (Glycine max).

    PubMed

    Xu, Yungang; Guo, Maozu; Zou, Quan; Liu, Xiaoyan; Wang, Chunyu; Liu, Yang

    2014-01-01

    The cellular interactome, in which genes and/or their products interact on several levels, forming transcriptional regulatory, protein interaction, metabolic, and signal transduction networks, among others, has been a research focus for decades. However, any single type of network can hardly explain the various interactive activities among genes. These networks characterize different interaction relationships, implying their unique intrinsic properties and defects and covering different slices of biological information. The functional gene network (FGN), a consolidated interaction network that models a fuzzy and more generalized notion of gene-gene relations, has been proposed to combine heterogeneous networks with the goal of identifying functional modules supported by multiple interaction types. There are as yet no successful precedents of FGNs for sparsely studied non-model organisms, such as soybean (Glycine max), due to the absence of sufficient heterogeneous interaction data. We present an alternative solution for inferring the FGNs of soybean (SoyFGNs), in a pioneering study on the soybean interactome, which is also applicable to other organisms. SoyFGNs exhibit the typical characteristics of biological networks: scale-free, small-world architecture and modularization. Verified by co-expression and KEGG pathways, SoyFGNs are more extensive and accurate than an orthology network derived from Arabidopsis. As a case study, network-guided disease-resistance gene discovery indicates that SoyFGNs can support system-level studies of gene functions and interactions. This work suggests that inferring and modelling the interactome of a non-model plant are feasible. It will speed up the discovery and definition of the functions and interactions of other genes that control important functions, such as nitrogen fixation and protein or lipid synthesis. The efforts of the study are the basis of our further comprehensive studies on the soybean functional interactome at the genome

  1. Improved deadzone modeling for bivariate wavelet shrinkage-based image denoising

    NASA Astrophysics Data System (ADS)

    DelMarco, Stephen

    2016-05-01

    Modern image processing performed on-board low Size, Weight, and Power (SWaP) platforms must provide high performance while simultaneously reducing memory footprint, power consumption, and computational complexity. Image preprocessing, along with downstream image exploitation algorithms such as object detection and recognition, and georegistration, places a heavy burden on power and processing resources. Image preprocessing often includes image denoising to improve data quality for downstream exploitation algorithms. High-performance image denoising is typically performed in the wavelet domain, where noise generally spreads and the wavelet transform compactly captures high information-bearing image characteristics. In this paper, we improve the modeling fidelity of a previously developed, computationally efficient wavelet-based denoising algorithm. The modeling improvements enhance denoising performance without significantly increasing computational cost, thus making the approach suitable for low-SWaP platforms. Specifically, this paper presents modeling improvements to the Sendur-Selesnick model (SSM), which implements a bivariate wavelet shrinkage denoising algorithm that exploits interscale dependency between wavelet coefficients. We formulate optimization problems for the parameters controlling deadzone size, which leads to improved denoising performance. Two formulations are provided: one with a simple, closed-form solution which we use for numerical result generation, and the second an integral-equation formulation involving elliptic integrals. We generate image denoising performance results over different image sets drawn from public domain imagery, and investigate the effect of wavelet filter tap length on denoising performance. We demonstrate denoising performance improvement when using the enhanced modeling over performance obtained with the baseline SSM.
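
    For concreteness, the following is a minimal NumPy sketch of the baseline Sendur-Selesnick bivariate shrinkage rule that the paper builds on (not the paper's deadzone-optimized variant, whose parameterization is not given in the abstract); the function name and numerical guards are our own.

        import numpy as np

        def bivariate_shrink(w1, w2, sigma_n, sigma_local):
            # Sendur-Selesnick bivariate shrinkage: a child coefficient w1 is
            # shrunk jointly with its parent w2 (interscale dependency).
            # sigma_n: noise std; sigma_local: local signal std estimate.
            r = np.sqrt(w1 ** 2 + w2 ** 2)
            t = np.sqrt(3.0) * sigma_n ** 2 / np.maximum(sigma_local, 1e-12)
            factor = np.maximum(r - t, 0.0) / np.maximum(r, 1e-12)
            return factor * w1

    The max(., 0) term is what creates the deadzone: coefficient pairs whose joint magnitude falls below the threshold are zeroed outright, and it is the size of this region that the paper's optimization targets.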

  2. Energy-Based Wavelet De-Noising of Hydrologic Time Series

    PubMed Central

    Sang, Yan-Fang; Liu, Changming; Wang, Zhonggen; Wen, Jun; Shang, Lunyu

    2014-01-01

    De-noising is a substantial issue in hydrologic time series analysis, but it is a difficult task due to the shortcomings of existing methods. In this paper an energy-based wavelet de-noising method is proposed. It removes noise by comparing the energy distribution of a series with a background energy distribution established from Monte-Carlo tests. Differing from the wavelet threshold de-noising (WTD) method, which is based on thresholding wavelet coefficients, the proposed method is based on the energy distribution of the series. It can distinguish noise from deterministic components in a series, and the uncertainty of the de-noising result can be quantitatively estimated using a proper confidence interval, which WTD cannot do. Analysis of both synthetic and observed series verified the comparable power of the proposed method and WTD, but the de-noising process of the former is easier to operate. The results also indicate the influence of three key factors (wavelet choice, decomposition level choice and noise content) on wavelet de-noising. The wavelet should be carefully chosen when using the proposed method. The suitable decomposition level for wavelet de-noising should correspond to the deterministic sub-signal of the series with the smallest temporal scale. If too much noise is included in a series, an accurate de-noising result cannot be obtained by the proposed method or WTD, but such a series would show purely random rather than autocorrelated behaviour, so de-noising is no longer needed. PMID:25360533
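
    A minimal sketch of this idea using PyWavelets, assuming white Gaussian noise; the wavelet, the 95% quantile, and the use of the sample standard deviation as the noise-scale proxy are illustrative assumptions, not the paper's exact Monte-Carlo procedure.

        import numpy as np
        import pywt

        def energy_denoise(x, wavelet='db4', level=4, n_mc=500, alpha=0.95, seed=0):
            rng = np.random.default_rng(seed)
            coeffs = pywt.wavedec(x, wavelet, level=level)
            energies = [np.sum(c ** 2) for c in coeffs[1:]]   # detail-level energies
            # Monte-Carlo background: energy distribution of pure white noise
            noise_e = np.empty((n_mc, level))
            for i in range(n_mc):
                w = rng.normal(0.0, np.std(x), size=len(x))
                nc = pywt.wavedec(w, wavelet, level=level)
                noise_e[i] = [np.sum(c ** 2) for c in nc[1:]]
            thresh = np.quantile(noise_e, alpha, axis=0)
            for j, (e, t) in enumerate(zip(energies, thresh), start=1):
                if e <= t:                # energy indistinguishable from noise
                    coeffs[j] = np.zeros_like(coeffs[j])
            return pywt.waverec(coeffs, wavelet)[:len(x)]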

  3. Energy-based wavelet de-noising of hydrologic time series.

    PubMed

    Sang, Yan-Fang; Liu, Changming; Wang, Zhonggen; Wen, Jun; Shang, Lunyu

    2014-01-01

    De-noising is a substantial issue in hydrologic time series analysis, but it is a difficult task due to the shortcomings of existing methods. In this paper an energy-based wavelet de-noising method is proposed. It removes noise by comparing the energy distribution of a series with a background energy distribution established from Monte-Carlo tests. Differing from the wavelet threshold de-noising (WTD) method, which is based on thresholding wavelet coefficients, the proposed method is based on the energy distribution of the series. It can distinguish noise from deterministic components in a series, and the uncertainty of the de-noising result can be quantitatively estimated using a proper confidence interval, which WTD cannot do. Analysis of both synthetic and observed series verified the comparable power of the proposed method and WTD, but the de-noising process of the former is easier to operate. The results also indicate the influence of three key factors (wavelet choice, decomposition level choice and noise content) on wavelet de-noising. The wavelet should be carefully chosen when using the proposed method. The suitable decomposition level for wavelet de-noising should correspond to the deterministic sub-signal of the series with the smallest temporal scale. If too much noise is included in a series, an accurate de-noising result cannot be obtained by the proposed method or WTD, but such a series would show purely random rather than autocorrelated behaviour, so de-noising is no longer needed.

  4. De-noising of digital image correlation based on stationary wavelet transform

    NASA Astrophysics Data System (ADS)

    Guo, Xiang; Li, Yulong; Suo, Tao; Liang, Jin

    2017-03-01

    In this paper, a stationary wavelet transform (SWT) based method is proposed to de-noise digital images affected by light noise; the SWT de-noising algorithm is presented after an analysis of the light noise. Using the de-noising algorithm, the method was demonstrated to be capable of providing accurate digital image correlation (DIC) measurements in light-noise environments. Verification, comparative and realistic experiments were conducted using this method. The results indicate that the de-noising method can be applied to full-field strain measurement under light interference with high accuracy and stability.
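
    A minimal one-dimensional illustration of SWT-based de-noising with PyWavelets; the wavelet, decomposition level, and universal-threshold rule below are our assumptions, since the paper's exact 2D procedure for DIC images is not specified in the abstract.

        import numpy as np
        import pywt

        def swt_denoise(x, wavelet='sym8', level=3):
            n = len(x)
            pad = (-n) % (2 ** level)        # SWT needs a length divisible by 2^level
            xp = np.pad(x, (0, pad), mode='edge')
            coeffs = pywt.swt(xp, wavelet, level=level)
            # Noise std from the finest detail band (median absolute deviation rule)
            sigma = np.median(np.abs(coeffs[-1][1])) / 0.6745
            t = sigma * np.sqrt(2.0 * np.log(len(xp)))       # universal threshold
            den = [(cA, pywt.threshold(cD, t, mode='soft')) for cA, cD in coeffs]
            return pywt.iswt(den, wavelet)[:n]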

  5. Denoising of radar signals by using wavelets and Doppler estimation by S-Transform

    NASA Astrophysics Data System (ADS)

    Reddy, V. Siva Sankara; Rao, D. Thirumala

    2012-08-01

    The S-transform is a variable-window extension of the short-time Fourier transform (STFT) and of the wavelet transform. This paper discusses the principles and methods of wavelet de-noising and reduces the noise of pulse signals using wavelets. It is shown that wavelet de-noising can eliminate most of the noise while effectively preserving sudden changes in the signal. The paper analyses and compares the de-noising of pulse signals in different ways; the study shows that wavelet-based de-noising of pulse signals has practical value. From the S-transform, the Doppler frequency can be estimated in different ways.

  6. Automatic parameter prediction for image denoising algorithms using perceptual quality features

    NASA Astrophysics Data System (ADS)

    Mittal, Anish; Moorthy, Anush K.; Bovik, Alan C.

    2012-03-01

    A natural scene statistics (NSS) based blind image denoising approach is proposed, where denoising is performed without knowledge of the noise variance present in the image. We show how such a noise-variance estimate can be used to perform blind denoising by combining blind parameter estimation with a state-of-the-art denoising algorithm. Our experiments show that for all noise variances simulated on varied image content, our approach is almost always statistically superior to the reference BM3D implementation in terms of perceived visual quality at the 95% confidence level.
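
    The abstract does not detail the NSS features used for parameter estimation; as a simple stand-in, the noise standard deviation is often estimated blindly from the finest diagonal wavelet band via Donoho's median rule, e.g.:

        import numpy as np
        import pywt

        def estimate_noise_sigma(img):
            # Robust blind noise-std estimate: the finest diagonal detail band
            # is dominated by noise, so its median absolute deviation scales
            # to sigma for Gaussian noise.
            _, (_, _, cD) = pywt.dwt2(img.astype(float), 'db1')
            return np.median(np.abs(cD)) / 0.6745

    The resulting estimate can then be handed to a denoiser such as BM3D in place of the true noise variance.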

  7. Towards General Algorithms for Grammatical Inference

    NASA Astrophysics Data System (ADS)

    Clark, Alexander

    Many algorithms for grammatical inference can be viewed as instances of a more general algorithm which maintains a set of primitive elements, which distributionally define sets of strings, and a set of features or tests that constrain various inference rules. Using this general framework, which we cast as a process of logical inference, we re-analyse Angluin's famous lstar algorithm and several recent algorithms for the inference of context-free grammars and multiple context-free grammars. Finally, to illustrate the advantages of this approach, we extend it to the inference of functional transductions from positive data only, and we present a new algorithm for the inference of finite state transducers.

  8. Fast non local means denoising for 3D MR images.

    PubMed

    Coupé, Pierrick; Yger, Pierre; Barillot, Christian

    2006-01-01

    One critical issue in the context of image restoration is the problem of noise removal while keeping the integrity of relevant image information. Denoising is a crucial step to increase image conspicuity and to improve the performance of all the processing steps needed for quantitative imaging analysis. The method proposed in this paper is based on an optimized version of the Non-Local (NL) Means algorithm. This approach uses the natural redundancy of information in the image to remove the noise. Tests were carried out on synthetic datasets and on real 3T MR images. The results show that the NL-means approach outperforms other classical denoising methods, such as the Anisotropic Diffusion Filter and Total Variation.
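
    For reference, a naive (unoptimized) NL-means loop that makes the patch-similarity weighting explicit; the parameter names and values are illustrative, and the optimized variant in the paper restricts and accelerates exactly this computation.

        import numpy as np

        def nlm_denoise(img, patch=3, search=7, h=10.0):
            # Each pixel becomes a weighted average of pixels whose surrounding
            # patches look similar; cost is O(N * search^2 * patch^2).
            hp, hs = patch // 2, search // 2
            padded = np.pad(img.astype(float), hp + hs, mode='reflect')
            out = np.zeros(img.shape, dtype=float)
            for i in range(img.shape[0]):
                for j in range(img.shape[1]):
                    ci, cj = i + hp + hs, j + hp + hs
                    ref = padded[ci - hp:ci + hp + 1, cj - hp:cj + hp + 1]
                    weights, values = [], []
                    for di in range(-hs, hs + 1):
                        for dj in range(-hs, hs + 1):
                            ni, nj = ci + di, cj + dj
                            cand = padded[ni - hp:ni + hp + 1, nj - hp:nj + hp + 1]
                            d2 = np.mean((ref - cand) ** 2)
                            weights.append(np.exp(-d2 / (h * h)))
                            values.append(padded[ni, nj])
                    w = np.asarray(weights)
                    out[i, j] = np.dot(w, values) / w.sum()
            return out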

  9. Automatic denoising of single-trial evoked potentials.

    PubMed

    Ahmadi, Maryam; Quian Quiroga, Rodrigo

    2013-02-01

    We present an automatic denoising method based on the wavelet transform to obtain single trial evoked potentials. The method is based on the inter- and intra-scale variability of the wavelet coefficients and their deviations from baseline values. The performance of the method is tested with simulated event related potentials (ERPs) and with real visual and auditory ERPs. For the simulated data the presented method gives a significant improvement in the observation of single trial ERPs as well as in the estimation of their amplitudes and latencies, in comparison with a standard denoising technique (Donoho's thresholding) and in comparison with the noisy single trials. For the real data, the proposed method largely filters the spontaneous EEG activity, thus helping the identification of single trial visual and auditory ERPs. The proposed method provides a simple, automatic and fast tool that allows the study of single trial responses and their correlations with behavior.

  10. MULTISCALE TENSOR ANISOTROPIC FILTERING OF FLUORESCENCE MICROSCOPY FOR DENOISING MICROVASCULATURE

    PubMed Central

    Prasath, V. B. S.; Pelapur, R.; Glinskii, O. V.; Glinsky, V. V.; Huxley, V. H.; Palaniappan, K.

    2015-01-01

    Fluorescence microscopy images are contaminated by noise, and improving image quality without blurring vascular structures by filtering is an important step in automatic image analysis. The application of interest here is to accurately extract the structural components of the microvascular system from images acquired by fluorescence microscopy. A robust denoising process is necessary in order to extract accurate vascular morphology information. For this purpose, we propose a multiscale tensor anisotropic diffusion model which progressively and adaptively updates the amount of smoothing while preserving vessel boundaries accurately. Based on a coherency-enhancing flow with a planar confidence measure and fused 3D structure information, our method integrates multiple scales for microvasculature preservation and noise removal in membrane structures. Experimental results on simulated synthetic images and epifluorescence images show the advantage of our improvement over other related diffusion filters. We further show that the proposed multiscale integration approach improves the denoising accuracy of different tensor diffusion methods to obtain better microvasculature segmentation. PMID:26730456

  11. MULTISCALE TENSOR ANISOTROPIC FILTERING OF FLUORESCENCE MICROSCOPY FOR DENOISING MICROVASCULATURE.

    PubMed

    Prasath, V B S; Pelapur, R; Glinskii, O V; Glinsky, V V; Huxley, V H; Palaniappan, K

    2015-04-01

    Fluorescence microscopy images are contaminated by noise, and improving image quality without blurring vascular structures by filtering is an important step in automatic image analysis. The application of interest here is to accurately extract the structural components of the microvascular system from images acquired by fluorescence microscopy. A robust denoising process is necessary in order to extract accurate vascular morphology information. For this purpose, we propose a multiscale tensor anisotropic diffusion model which progressively and adaptively updates the amount of smoothing while preserving vessel boundaries accurately. Based on a coherency-enhancing flow with a planar confidence measure and fused 3D structure information, our method integrates multiple scales for microvasculature preservation and noise removal in membrane structures. Experimental results on simulated synthetic images and epifluorescence images show the advantage of our improvement over other related diffusion filters. We further show that the proposed multiscale integration approach improves the denoising accuracy of different tensor diffusion methods to obtain better microvasculature segmentation.

  12. Examining Alternatives to Wavelet Denoising for Astronomical Source Finding

    NASA Astrophysics Data System (ADS)

    Jurek, R.; Brown, S.

    2012-08-01

    The Square Kilometre Array and its pathfinders ASKAP and MeerKAT will produce prodigious amounts of data that necessitate automated source finding. The performance of automated source finders can be improved by pre-processing a dataset. In preparation for the WALLABY and DINGO surveys, we have used a test HI datacube constructed from actual Westerbork Telescope noise and WHISP HI galaxies to test the real-world improvement of linear smoothing, the Duchamp source finder's wavelet denoising, iterative median smoothing and mathematical morphology subtraction on intensity-threshold source finding of spectral line datasets. To compare these pre-processing methods we have generated completeness-reliability performance curves for each method and a range of input parameters. We find that iterative median smoothing produces the best source finding results for ASKAP HI spectral line observations, but wavelet denoising is a safer pre-processing technique. In this paper we also present our implementations of iterative median smoothing and mathematical morphology subtraction.
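
    A minimal sketch of iterative median smoothing with SciPy; the kernel size and iteration count are assumptions, since the paper's exact implementation is not described in the abstract.

        import numpy as np
        from scipy.ndimage import median_filter

        def iterative_median_smooth(cube, size=3, n_iter=3):
            # Repeated small-kernel median filtering: each pass further
            # suppresses uncorrelated noise while largely preserving
            # compact sources, ahead of intensity-threshold source finding.
            out = cube.astype(float)
            for _ in range(n_iter):
                out = median_filter(out, size=size)
            return out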

  13. Denoising in Contrast-Enhanced X-ray Images

    NASA Astrophysics Data System (ADS)

    Jeon, Gwanggil

    2016-12-01

    In this paper, we propose a denoising and contrast-enhancement method for medical images. The main purpose of medical image improvement is to transform lower-contrast data into higher contrast and to reduce high noise levels. To meet this goal, we propose a noise-level estimation method, whereby the noise level is estimated by computing the standard deviation and variance in a local block. The obtained noise level is then used as an input parameter for the block-matching and 3D filtering (BM3D) algorithm, and the denoising process is then performed. The noise-level estimation step is important because the BM3D algorithm does not perform well without correct noise-level information. Simulation results confirm that the proposed method outperforms other benchmarks in terms of both objective and visual performance.
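
    A toy version of the block-based noise-level estimation described above; taking a low quantile of the block standard deviations (rather than the raw minimum) is our own robustness choice, not necessarily the paper's rule.

        import numpy as np

        def block_noise_level(img, block=16):
            # Flat blocks are dominated by noise rather than structure, so the
            # lowest local standard deviations approximate the noise std.
            H, W = img.shape
            stds = [img[i:i + block, j:j + block].std()
                    for i in range(0, H - block + 1, block)
                    for j in range(0, W - block + 1, block)]
            return float(np.percentile(stds, 5))

    The returned level would then parameterize BM3D, as the abstract describes.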

  14. Parallel transformation of K-SVD solar image denoising algorithm

    NASA Astrophysics Data System (ADS)

    Liang, Youwen; Tian, Yu; Li, Mei

    2017-02-01

    The images obtained by observing the sun through a large telescope always suffer from noise due to the low SNR. The K-SVD denoising algorithm can effectively remove Gaussian white noise, but training dictionaries for sparse representations is a time-consuming task, due to the large size of the data involved and the complexity of the training algorithms. In this paper, the OpenMP parallel programming interface is used to transform the serial algorithm into a parallel version, following a data-parallelism model. The biggest change is that multiple atoms, rather than one, are updated simultaneously. The denoising effect and acceleration performance were tested after completion of the parallel algorithm. The speedup of the program is 13.563 when using 16 cores. This parallel version can fully utilize multi-core CPU hardware resources, greatly reduces running time and is easily ported to multi-core platforms.

  15. A simple filter circuit for denoising biomechanical impact signals.

    PubMed

    Subramaniam, Suba R; Georgakis, Apostolos

    2009-01-01

    We present a simple scheme for denoising non-stationary biomechanical signals with the aim of accurately estimating their second derivative (acceleration). The method is based on filtering in fractional Fourier domains using well-known low-pass filters in a way that amounts to a time-varying cut-off threshold. The resulting algorithm is linear and its design is facilitated by the relationship between the fractional Fourier transform and joint time-frequency representations. The implemented filter circuit employs only three low-order filters while its efficiency is further supported by the low computational complexity of the fractional Fourier transform. The results demonstrate that the proposed method can denoise the signals effectively and is more robust against noise as compared to conventional low-pass filters.

  16. Non-local means denoising algorithm accelerated by GPU

    NASA Astrophysics Data System (ADS)

    Huang, Kuidong; Zhang, Dinghua; Wang, Kai

    2009-10-01

    On the basis of studying the Non-Local Means (NLM) denoising algorithm and its pixel-wise processing on the Graphics Processing Unit (GPU), a whole-image accumulation algorithm for GPU-based NLM denoising is proposed. The number of dynamic instructions of the fragment shader is effectively reduced by redesigning the data structure and processing flow, which makes the algorithm suitable for graphics cards supporting Shader Model 3.0 and/or Shader Model 4.0 and so enhances its versatility. A continuous and parallel processing method for four grey images based on Multiple Render Targets (MRT) and double Frame Buffer Objects (FBO) is then proposed, and the whole GPU processing flow is presented. Experimental results on both simulated and practical grey images show that the proposed method can achieve a speedup of 45 times while retaining the same accuracy.

  17. Diffusion Weighted Image Denoising Using Overcomplete Local PCA

    PubMed Central

    Manjón, José V.; Coupé, Pierrick; Concha, Luis; Buades, Antonio; Collins, D. Louis; Robles, Montserrat

    2013-01-01

    Diffusion Weighted Images (DWI) normally show a low Signal-to-Noise Ratio (SNR) due to the presence of noise from the measurement process, which complicates and biases the estimation of quantitative diffusion parameters. In this paper, a new denoising methodology is proposed that takes into consideration the multicomponent nature of multi-directional DWI datasets such as those employed in diffusion imaging. This new filter reduces random noise in multicomponent DWI by locally shrinking less significant Principal Components using an overcomplete approach. The proposed method is compared with state-of-the-art methods using synthetic and real clinical MR images, showing improved performance in terms of denoising quality and estimation of diffusion parameters. PMID:24019889
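
    A simplified sketch of the local-PCA shrinkage idea on a matrix of vectorized patches; the hard threshold at the noise variance is an illustrative rule and omits the overcomplete aggregation step of the paper.

        import numpy as np

        def lpca_shrink(patches, sigma):
            # patches: (n_patches, patch_dim) matrix of similar local patches.
            # Components whose variance does not rise above the noise floor
            # are discarded; the rest reconstruct the denoised patches.
            mean = patches.mean(axis=0)
            X = patches - mean
            evals, evecs = np.linalg.eigh(X.T @ X / len(X))
            keep = evals > sigma ** 2
            Y = X @ evecs
            Y[:, ~keep] = 0.0
            return Y @ evecs.T + mean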

  18. Bayesian inference on multiscale models for poisson intensity estimation: applications to photon-limited image denoising.

    PubMed

    Lefkimmiatis, Stamatios; Maragos, Petros; Papandreou, George

    2009-08-01

    We present an improved statistical model for analyzing Poisson processes, with applications to photon-limited imaging. We build on previous work, adopting a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities (rates) in adjacent scales are modeled as mixtures of conjugate parametric distributions. Our main contributions include: 1) a rigorous and robust regularized expectation-maximization (EM) algorithm for maximum-likelihood estimation of the rate-ratio density parameters directly from the noisy observed Poisson data (counts); 2) extension of the method to work under a multiscale hidden Markov tree model (HMT) which couples the mixture label assignments in consecutive scales, thus modeling interscale coefficient dependencies in the vicinity of image edges; 3) exploration of a 2-D recursive quad-tree image representation, involving Dirichlet-mixture rate-ratio densities, instead of the conventional separable binary-tree image representation involving beta-mixture rate-ratio densities; and 4) a novel multiscale image representation, which we term Poisson-Haar decomposition, that better models the image edge structure, thus yielding improved performance. Experimental results on standard images with artificially simulated Poisson noise and on real photon-limited images demonstrate the effectiveness of the proposed techniques.

  19. A comparison of Monte Carlo dose calculation denoising techniques

    NASA Astrophysics Data System (ADS)

    El Naqa, I.; Kawrakow, I.; Fippel, M.; Siebers, J. V.; Lindsay, P. E.; Wickerhauser, M. V.; Vicic, M.; Zakarian, K.; Kauffmann, N.; Deasy, J. O.

    2005-03-01

    Recent studies have demonstrated that Monte Carlo (MC) denoising techniques can reduce MC radiotherapy dose computation time significantly by preferentially eliminating statistical fluctuations ('noise') through smoothing. In this study, we compare new and previously published approaches to MC denoising, including 3D wavelet threshold denoising with sub-band adaptive thresholding, content adaptive mean-median-hybrid (CAMH) filtering, locally adaptive Savitzky-Golay curve-fitting (LASG), anisotropic diffusion (AD) and an iterative reduction of noise (IRON) method formulated as an optimization problem. Several challenging phantom and computed-tomography-based MC dose distributions with varying levels of noise formed the test set. Denoising effectiveness was measured in three ways: by improvements in the mean-square-error (MSE) with respect to a reference (low noise) dose distribution; by the maximum difference from the reference distribution and by the 'Van Dyk' pass/fail criteria of either adequate agreement with the reference image in low-gradient regions (within 2% in our case) or, in high-gradient regions, a distance-to-agreement-within-2% of less than 2 mm. Results varied significantly based on the dose test case: greater reductions in MSE were observed for the relatively smoother phantom-based dose distribution (up to a factor of 16 for the LASG algorithm); smaller reductions were seen for an intensity modulated radiation therapy (IMRT) head and neck case (typically, factors of 2-4). Although several algorithms reduced statistical noise for all test geometries, the LASG method had the best MSE reduction for three of the four test geometries, and performed the best for the Van Dyk criteria. However, the wavelet thresholding method performed better for the head and neck IMRT geometry and also decreased the maximum error more effectively than LASG. In almost all cases, the evaluated methods provided acceleration of MC results towards statistically more accurate
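
    As a one-dimensional illustration of the Savitzky-Golay building block underlying LASG (which, unlike this fixed-window sketch, adapts the fit locally), using SciPy on a synthetic noisy dose profile:

        import numpy as np
        from scipy.signal import savgol_filter

        x = np.linspace(0.0, 1.0, 200)
        dose = np.exp(-((x - 0.5) / 0.15) ** 2)      # smooth reference profile
        noisy = dose + np.random.default_rng(0).normal(0.0, 0.03, x.size)
        # Local cubic fits over a 21-sample window smooth the statistical
        # fluctuations while following the underlying gradient.
        smoothed = savgol_filter(noisy, window_length=21, polyorder=3)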

  20. Robust L1 PCA and application in image denoising

    NASA Astrophysics Data System (ADS)

    Gao, Junbin; Kwan, Paul W. H.; Guo, Yi

    2007-11-01

    The so-called robust L1 PCA was introduced in our recent work [1] based on the L1 noise assumption. Due to the heavy-tailed characteristics of the L1 distribution, the proposed model has proved to be much more robust against data outliers. In this paper, we further demonstrate how the learned robust L1 PCA model can be used to denoise image data.

  1. [Quantitative evaluation of soil hyperspectra denoising with different filters].

    PubMed

    Huang, Ming-Xiang; Wang, Ke; Shi, Zhou; Gong, Jian-Hua; Li, Hong-Yi; Chen, Jie-Liang

    2009-03-01

    The noise distribution of soil hyperspectra measured by an ASD FieldSpec Pro FR was described, and a quantitative evaluation of spectral denoising with six filters was then compared. From the interpretation of the soil hyperspectra and the continuum-removed, first-order differential and high-frequency curves, the UV/VNIR (350-1050 nm) exhibits hardly any noise except for the first 40 nm from 350 nm. The SWIR (1000-2500 nm), however, shows a different noise distribution. In particular, the latter half of SWIR 2 (1800-2500 nm) showed more noise, and the spectra at the junctions of the three spectrometers have more noise than the neighbouring spectra. Six filters were chosen for spectral denoising. A smoothing index (SI), horizontal feature reservation index (HFRI) and vertical feature reservation index (VFRI) were designed for evaluating the denoising performance of these filters. The comparison of these indexes shows that the WD and MA filters are the optimal choice for filtering the noise, in terms of balancing the contradiction between smoothing and feature-reservation ability. Furthermore, the first-order differential data of 66 denoised soil spectra produced by the 6 filters were respectively used as the input of the same PLSR model to predict the sand content. The different prediction accuracies caused by the different filters show that, compared to the feature-reservation ability, the filter's smoothing ability is the principal factor influencing the accuracy. The study can benefit spectral preprocessing and analysis, and also provides a scientific foundation for related spectroscopy applications.

  2. Optical coherence tomography image denoising using Gaussianization transform.

    PubMed

    Amini, Zahra; Rabbani, Hossein

    2017-08-01

    We demonstrate the power of the Gaussianization transform (GT) for modeling image content by applying GT to optical coherence tomography (OCT) denoising. The proposed method is a developed version of the spatially constrained Gaussian mixture model (SC-GMM) method, which assumes that each cluster of similar patches in an image has a Gaussian distribution. SC-GMM tries to find clusters of similar patches in the image using spatially constrained patch clustering and then denoises each cluster with the Wiener filter. Although in this method a GMM distribution is assumed for the noisy image, whether this assumption holds on a dataset has not been investigated. We illustrate that making a Gaussian assumption on a noisy dataset has a significant effect on denoising results. For this purpose, a suitable distribution for OCT images is first obtained and then GT is employed to map this original distribution of OCT images to a GMM distribution. This Gaussianized image is then used as the input of the SC-GMM algorithm. This method, which is a combination of GT and SC-GMM, remarkably improves the results of OCT denoising compared with the earlier version of SC-GMM and even produces better visual and numerical results than the state-of-the-art works in this field. Indeed, the main advantage of the proposed OCT despeckling method is texture preservation, which is important for main image processing tasks like OCT inter- and intraretinal layer analysis. Thus, to prove the efficacy of the proposed method for this analysis, an improvement in the segmentation of intraretinal layers using the proposed method as a preprocessing step is investigated. Furthermore, the proposed method achieves the best expert ranking among the other contending methods, and the results show the helpfulness and usefulness of the proposed method in clinical applications.

  3. Optical coherence tomography image denoising using Gaussianization transform

    NASA Astrophysics Data System (ADS)

    Amini, Zahra; Rabbani, Hossein

    2017-08-01

    We demonstrate the power of the Gaussianization transform (GT) for modeling image content by applying GT to optical coherence tomography (OCT) denoising. The proposed method is a developed version of the spatially constrained Gaussian mixture model (SC-GMM) method, which assumes that each cluster of similar patches in an image has a Gaussian distribution. SC-GMM tries to find clusters of similar patches in the image using spatially constrained patch clustering and then denoises each cluster with the Wiener filter. Although in this method a GMM distribution is assumed for the noisy image, whether this assumption holds on a dataset has not been investigated. We illustrate that making a Gaussian assumption on a noisy dataset has a significant effect on denoising results. For this purpose, a suitable distribution for OCT images is first obtained and then GT is employed to map this original distribution of OCT images to a GMM distribution. This Gaussianized image is then used as the input of the SC-GMM algorithm. This method, which is a combination of GT and SC-GMM, remarkably improves the results of OCT denoising compared with the earlier version of SC-GMM and even produces better visual and numerical results than the state-of-the-art works in this field. Indeed, the main advantage of the proposed OCT despeckling method is texture preservation, which is important for main image processing tasks like OCT inter- and intraretinal layer analysis. Thus, to prove the efficacy of the proposed method for this analysis, an improvement in the segmentation of intraretinal layers using the proposed method as a preprocessing step is investigated. Furthermore, the proposed method achieves the best expert ranking among the other contending methods, and the results show the helpfulness and usefulness of the proposed method in clinical applications.

  4. Undecimated Wavelet Transforms for Image De-noising

    SciTech Connect

    Gyaourova, A; Kamath, C; Fodor, I K

    2002-11-19

    A few different approaches exist for computing the undecimated wavelet transform. In this work we construct three undecimated schemes and evaluate their performance for image noise reduction. We use standard wavelet-based de-noising techniques and compare the performance of our algorithms with the original undecimated wavelet transform, as well as with the decimated wavelet transform. Our experiments show that our algorithms achieve a better noise-removal/blurring ratio.

  5. Adaptive nonlocal means filtering based on local noise level for CT denoising

    SciTech Connect

    Li, Zhoubo; Trzasko, Joshua D.; Lake, David S.; Blezek, Daniel J.; Manduca, Armando; Yu, Lifeng; Fletcher, Joel G.; McCollough, Cynthia H.

    2014-01-15

    Purpose: To develop and evaluate an image-domain noise reduction method based on a modified nonlocal means (NLM) algorithm that is adaptive to local noise level of CT images and to implement this method in a time frame consistent with clinical workflow. Methods: A computationally efficient technique for local noise estimation directly from CT images was developed. A forward projection, based on a 2D fan-beam approximation, was used to generate the projection data, with a noise model incorporating the effects of the bowtie filter and automatic exposure control. The noise propagation from projection data to images was analytically derived. The analytical noise map was validated using repeated scans of a phantom. A 3D NLM denoising algorithm was modified to adapt its denoising strength locally based on this noise map. The performance of this adaptive NLM filter was evaluated in phantom studies in terms of in-plane and cross-plane high-contrast spatial resolution, noise power spectrum (NPS), subjective low-contrast spatial resolution using the American College of Radiology (ACR) accreditation phantom, and objective low-contrast spatial resolution using a channelized Hotelling model observer (CHO). Graphical processing units (GPU) implementation of this noise map calculation and the adaptive NLM filtering were developed to meet demands of clinical workflow. Adaptive NLM was piloted on lower dose scans in clinical practice. Results: The local noise level estimation matches the noise distribution determined from multiple repetitive scans of a phantom, demonstrated by small variations in the ratio map between the analytical noise map and the one calculated from repeated scans. The phantom studies demonstrated that the adaptive NLM filter can reduce noise substantially without degrading the high-contrast spatial resolution, as illustrated by modulation transfer function and slice sensitivity profile results. The NPS results show that adaptive NLM denoising preserves the

  7. Microseismic event denoising via adaptive directional vector median filters

    NASA Astrophysics Data System (ADS)

    Zheng, Jing; Lu, Ji-Ren; Jiang, Tian-Qi; Liang, Zhe

    2017-03-01

    We present a novel denoising scheme for microseismic downhole datasets based on Radon-transform-guided adaptive directional vector median filters (AD-VMF). AD-VMF contains three major steps for microseismic downhole data processing: (i) applying the Radon transform to the microseismic data to obtain the parameters of the waves, (ii) performing the S-transform to determine the parameters for the filters, and (iii) applying the parameters in a vector median filter (VMF) to denoise the data. Steps (i) and (ii) realize automatic direction detection. The proposed algorithm is tested with synthetic and field datasets that were recorded with a vertical array of receivers. The P-wave and S-wave direct arrivals are properly denoised in records with poor signal-to-noise ratio (SNR). In the simulation case, we also evaluate the performance using the mean square error (MSE) as a function of SNR. The results show that the distortion introduced by the proposed method is very low, even when the SNR is below 0 dB.

  8. Baseline Adaptive Wavelet Thresholding Technique for sEMG Denoising

    NASA Astrophysics Data System (ADS)

    Bartolomeo, L.; Zecca, M.; Sessa, S.; Lin, Z.; Mukaeda, Y.; Ishii, H.; Takanishi, Atsuo

    2011-06-01

    The surface Electromyography (sEMG) signal is affected by different sources of noise: current technology is considerably robust to interference from the power line or cable motion artifacts, but there are still many limitations with baseline and movement artifact noise. In particular, these sources have frequency spectra that overlap the low-frequency components of the sEMG spectrum; therefore, a standard all-bandwidth filtering could alter important information. Wavelet denoising has been demonstrated to be a powerful solution for processing white Gaussian noise in biological signals. In this paper we introduce a new technique for denoising the sEMG signal: using the baseline of the signal before the task, we estimate the thresholds to apply in the wavelet thresholding procedure. The experiments were performed on ten healthy subjects, by placing the electrodes on the Extensor Carpi Ulnaris and Triceps Brachii of the right arm and performing a flexion and extension of the right wrist. An Inertial Measurement Unit, developed in our group, was used to recognize the movements of the hands and to segment the exercise and the pre-task baseline. Finally, we show better performance of the proposed method in terms of noise cancellation and signal distortion, quantified by a newly suggested indicator of denoising quality, compared to the standard Donoho technique.
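
    A minimal PyWavelets sketch of the baseline-adaptive idea: per-level thresholds are taken from the pre-task baseline segment and applied to the task signal. Using the baseline maximum as the threshold is our simplification of the procedure, not necessarily the paper's rule.

        import numpy as np
        import pywt

        def baseline_threshold_denoise(emg, baseline, wavelet='db4', level=4):
            base_c = pywt.wavedec(baseline, wavelet, level=level)
            sig_c = pywt.wavedec(emg, wavelet, level=level)
            for k in range(1, level + 1):
                # Noise never exceeded this magnitude during the baseline
                t = np.max(np.abs(base_c[k]))
                sig_c[k] = pywt.threshold(sig_c[k], t, mode='soft')
            return pywt.waverec(sig_c, wavelet)[:len(emg)]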

  9. Application of adaptive filters in denoising magnetocardiogram signals

    NASA Astrophysics Data System (ADS)

    Khan, Pathan Fayaz; Patel, Rajesh; Sengottuvel, S.; Saipriya, S.; Swain, Pragyna Parimita; Gireesan, K.

    2017-05-01

    Magnetocardiography (MCG) is the measurement of the weak magnetic fields of the heart using Superconducting QUantum Interference Devices (SQUID). Though the measurements are performed inside magnetically shielded rooms (MSR) to reduce external electromagnetic disturbances, interference caused by sources inside the shielded room cannot be attenuated this way. The work presented here reports the application of adaptive filters to denoise MCG signals. Two adaptive noise cancellation approaches, the least mean squares (LMS) algorithm and the recursive least squares (RLS) algorithm, are applied to denoise MCG signals and the results are compared. It is found that both algorithms effectively remove noisy wiggles from MCG traces, significantly improving the quality of the cardiac features in the traces. The calculated signal-to-noise ratio (SNR) for the denoised MCG traces is found to be slightly higher with the LMS algorithm than with the RLS algorithm. The results encourage the use of adaptive techniques to suppress noise due to the power-line frequency and its harmonics, which occur frequently in biomedical measurements.
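
    For reference, a bare-bones LMS noise canceller of the kind compared here; the tap count and step size are illustrative.

        import numpy as np

        def lms_cancel(primary, reference, n_taps=16, mu=1e-3):
            # 'reference' picks up the interference (e.g. power-line pickup);
            # the filter learns to predict its leakage into 'primary', and
            # the residual e is the denoised signal.
            w = np.zeros(n_taps)
            e = np.zeros(len(primary))
            for n in range(n_taps, len(primary)):
                x = reference[n - n_taps:n][::-1]
                e[n] = primary[n] - w @ x
                w += 2.0 * mu * e[n] * x
            return e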

  10. Two-direction nonlocal model for image denoising.

    PubMed

    Zhang, Xuande; Feng, Xiangchu; Wang, Weiwei

    2013-01-01

    Similarities inherent in natural images have been widely exploited for image denoising and other applications. In fact, if a cluster of similar image patches is rearranged into a matrix, similarities exist both between columns and between rows. Using these similarities, we present a two-directional nonlocal (TDNL) variational model for image denoising. The solution of our model consists of three components: one component is a scaled version of the original observed image, and the other two components are obtained by utilizing the similarities. Specifically, by using the similarity between columns, we get a nonlocal-means-like estimation of the patch with consideration of all similar patches, while the weights are not the pairwise similarities but a set of clusterwise coefficients. Moreover, by using the similarity between rows, we also get nonlocal-autoregression-like estimations for the center pixels of the similar patches. The TDNL model leads to an alternating minimization algorithm. Experiments indicate that the model can perform on par with or better than the state-of-the-art denoising methods.

  11. Streak image denoising and segmentation using adaptive Gaussian guided filter.

    PubMed

    Jiang, Zhuocheng; Guo, Baoping

    2014-09-10

    In streak tube imaging lidar (STIL), streak images are obtained using a CCD camera. However, noise in the captured streak images can greatly affect the quality of reconstructed 3D contrast and range images. The greatest challenge for streak image denoising is reducing the noise while preserving details. In this paper, we propose an adaptive Gaussian guided filter (AGGF) for noise removal and detail enhancement of streak images. The proposed algorithm is based on a guided filter (GF) and part of an adaptive bilateral filter (ABF). In the AGGF, the details are enhanced by optimizing the offset parameter. AGGF-denoised streak images are significantly sharper than those denoised by the GF. Moreover, the AGGF is a fast linear time algorithm achieved by recursively implementing a Gaussian filter kernel. Experimentally, AGGF demonstrates its capacity to preserve edges and thin structures and outperforms the existing bilateral filter and domain transform filter in terms of both visual quality and peak signal-to-noise ratio performance.

  12. Microseismic event denoising via adaptive directional vector median filters

    NASA Astrophysics Data System (ADS)

    Zheng, Jing; Lu, Ji-Ren; Jiang, Tian-Qi; Liang, Zhe

    2017-01-01

    We present a novel denoising scheme for microseismic downhole datasets based on Radon-transform-guided adaptive directional vector median filters (AD-VMF). AD-VMF contains three major steps for microseismic downhole data processing: (i) applying the Radon transform to the microseismic data to obtain the parameters of the waves, (ii) performing the S-transform to determine the parameters for the filters, and (iii) applying the parameters in a vector median filter (VMF) to denoise the data. Steps (i) and (ii) realize automatic direction detection. The proposed algorithm is tested with synthetic and field datasets that were recorded with a vertical array of receivers. The P-wave and S-wave direct arrivals are properly denoised in records with poor signal-to-noise ratio (SNR). In the simulation case, we also evaluate the performance using the mean square error (MSE) as a function of SNR. The results show that the distortion introduced by the proposed method is very low, even when the SNR is below 0 dB.

  13. Oriented wavelet transform for image compression and denoising.

    PubMed

    Chappelier, Vivien; Guillemot, Christine

    2006-10-01

    In this paper, we introduce a new transform for image processing, based on wavelets and the lifting paradigm. The lifting steps of a unidimensional wavelet are applied along a local orientation defined on a quincunx sampling grid. To maximize energy compaction, the orientation minimizing the prediction error is chosen adaptively. A fine-grained multiscale analysis is provided by iterating the decomposition on the low-frequency band. In the context of image compression, the multiresolution orientation map is coded using a quad tree. The rate allocation between the orientation map and wavelet coefficients is jointly optimized in a rate-distortion sense. For image denoising, a Markov model is used to extract the orientations from the noisy image. As long as the map is sufficiently homogeneous, interesting properties of the original wavelet are preserved such as regularity and orthogonality. Perfect reconstruction is ensured by the reversibility of the lifting scheme. The mutual information between the wavelet coefficients is studied and compared to the one observed with a separable wavelet transform. The rate-distortion performance of this new transform is evaluated for image coding using state-of-the-art subband coders. Its performance in a denoising application is also assessed against the performance obtained with other transforms or denoising methods.

  14. Optimally stabilized PET image denoising using trilateral filtering.

    PubMed

    Mansoor, Awais; Bagci, Ulas; Mollura, Daniel J

    2014-01-01

    The low resolution and signal-dependent noise distribution in positron emission tomography (PET) images make the denoising process an inevitable step prior to qualitative and quantitative image analysis tasks. Conventional PET denoising methods either over-smooth small-sized structures due to resolution limitations or make incorrect assumptions about the noise characteristics, so clinically important quantitative information may be corrupted. To address these challenges, we introduce a novel approach to remove signal-dependent noise in PET images, where the noise distribution is considered to be mixed Poisson-Gaussian. The generalized Anscombe transformation (GAT) is used to stabilize the varying nature of the PET noise. Besides noise stabilization, it is also desirable for the noise-removal filter to preserve the boundaries of structures while smoothing the noisy regions; indeed, it is important to avoid significant loss of quantitative information such as standard uptake value (SUV)-based metrics as well as metabolic lesion volume. To satisfy all these properties, we extended the bilateral filtering method into trilateral filtering through multiscaling and an optimal Gaussianization process. The proposed method was tested on more than 50 PET-CT images from various patients having different cancers and achieved superior performance compared to widely used denoising techniques in the literature.
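
    A sketch of the generalized Anscombe transformation used for variance stabilization, assuming a zero-mean Gaussian component; gain and sigma parameterize the mixed Poisson-Gaussian model.

        import numpy as np

        def generalized_anscombe(z, sigma, gain=1.0):
            # Maps mixed Poisson-Gaussian data to approximately unit variance,
            # so that a Gaussian denoiser can be applied afterwards.
            arg = gain * z + 0.375 * gain ** 2 + sigma ** 2
            return (2.0 / gain) * np.sqrt(np.maximum(arg, 0.0))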

  15. Comparison of de-noising techniques for FIRST images

    SciTech Connect

    Fodor, I K; Kamath, C

    2001-01-22

    Data obtained through scientific observations are often contaminated by noise and artifacts from various sources. As a result, a first step in mining these data is to isolate the signal of interest by minimizing the effects of the contamination. Once the data have been cleaned or de-noised, data mining can proceed as usual. In this paper, we describe our work in de-noising astronomical images from the Faint Images of the Radio Sky at Twenty-Centimeters (FIRST) survey. We are mining this survey to detect radio-emitting galaxies with a bent-double morphology. This task is made difficult by the noise in the images caused by the processing of the sensor data. We compare three different approaches to de-noising: thresholding of wavelet coefficients advocated in the statistical community, traditional filtering methods used in the image processing community, and a simple thresholding scheme proposed by FIRST astronomers. While each approach has its merits and pitfalls, we found that, for our purpose, the simple thresholding scheme worked relatively well for the FIRST dataset.

  16. Stacked Convolutional Denoising Auto-Encoders for Feature Representation.

    PubMed

    Du, Bo; Xiong, Wei; Wu, Jia; Zhang, Lefei; Zhang, Liangpei; Tao, Dacheng

    2016-03-16

    Deep networks have achieved excellent performance in learning representations from visual data. However, supervised deep models like the convolutional neural network require large quantities of labeled data, which are very expensive to obtain. To solve this problem, this paper proposes an unsupervised deep network, called the stacked convolutional denoising auto-encoders, which can map images to hierarchical representations without any label information. The network, optimized by layer-wise training, is constructed by stacking layers of denoising auto-encoders in a convolutional way. In each layer, high-dimensional feature maps are generated by convolving features of the lower layer with kernels learned by a denoising auto-encoder. The auto-encoder is trained on patches extracted from feature maps in the lower layer to learn robust feature detectors. To better train the large network, a layer-wise whitening technique is introduced into the model: before each convolutional layer, a whitening layer is embedded to sphere the input data. Through these layers of mapping, raw images are transformed into high-level feature representations, which boost the performance of the subsequent support vector machine classifier. The proposed algorithm is evaluated by extensive experiments and demonstrates superior classification performance to state-of-the-art unsupervised networks.
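
    A minimal single-layer convolutional denoising auto-encoder in PyTorch, to make the training principle concrete; the layer sizes and corruption level are illustrative, and the whitening layers and stacking described above are omitted.

        import torch
        import torch.nn as nn

        class ConvDAE(nn.Module):
            # One layer: learns to map a corrupted input back to the clean
            # input; stacking trained encoders yields the feature hierarchy.
            def __init__(self, in_ch=1, n_filters=32):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv2d(in_ch, n_filters, 3, padding=1), nn.ReLU())
                self.decoder = nn.Conv2d(n_filters, in_ch, 3, padding=1)

            def forward(self, x):
                return self.decoder(self.encoder(x))

        def train_layer(model, clean, noise_std=0.1, epochs=10, lr=1e-3):
            opt = torch.optim.Adam(model.parameters(), lr=lr)
            loss_fn = nn.MSELoss()
            for _ in range(epochs):
                noisy = clean + noise_std * torch.randn_like(clean)
                opt.zero_grad()
                loss = loss_fn(model(noisy), clean)   # denoising objective
                loss.backward()
                opt.step()
            return model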

  17. Impact of automated ICA-based denoising of fMRI data in acute stroke patients.

    PubMed

    Carone, D; Licenik, R; Suri, S; Griffanti, L; Filippini, N; Kennedy, J

    2017-01-01

    Different strategies have been developed using Independent Component Analysis (ICA) to automatically de-noise fMRI data, either focusing on removing only certain components (e.g. motion; ICA-AROMA, Pruim et al., 2015a) or using more complex classifiers to remove multiple types of noise components (e.g. FIX, Salimi-Khorshidi et al., 2014; Griffanti et al., 2014). However, denoising data obtained in an acute setting can prove challenging: the presence of multiple noise sources may not allow focused strategies to clean the data sufficiently, and the heterogeneity in the data may be so great as to critically undermine complex approaches. The purpose of this study was to explore which automated ICA-based approach better copes with these limitations when cleaning fMRI data obtained from acute stroke patients. The performance of a focused classifier (ICA-AROMA) and a complex classifier (FIX) was compared using data obtained from twenty consecutive acute lacunar stroke patients, with metrics assessing RSN identification, RSN reproducibility, changes in BOLD variance, differences in the estimation of functional connectivity and the loss of temporal degrees of freedom. The use of generic-trained FIX resulted in misclassification of components and significant loss of signal (< 80%), and was not explored further. Both ICA-AROMA and patient-trained FIX based denoising approaches resulted in significantly improved RSN reproducibility (p < 0.001), localized reduction in BOLD variance consistent with noise removal, and significant changes in functional connectivity (p < 0.001). Patient-trained FIX resulted in higher RSN identifiability (p < 0.001) and wider changes both in BOLD variance and in functional connectivity compared to ICA-AROMA. The success of ICA-AROMA suggests that, by focusing on selected components, full automation can deliver meaningful data for analysis even in populations with multiple sources of noise. However, the time invested to train FIX

  18. Denoised and texture enhanced MVCT to improve soft tissue conspicuity

    SciTech Connect

    Sheng, Ke Qi, Sharon X.; Gou, Shuiping; Wu, Jiaolong

    2014-10-15

    Purpose: MVCT images have been used in TomoTherapy treatment to align patients based on bony anatomy, but their usefulness for soft tissue registration, delineation, and adaptive radiation therapy is limited due to insignificant photoelectric interaction components and the presence of noise resulting from the low detector quantum efficiency of megavoltage x-rays. Algebraic reconstruction with sparsity regularizers, as well as local denoising methods, has not significantly improved soft tissue conspicuity. The authors aim to utilize a nonlocal means denoising method and texture enhancement to recover the soft tissue information in MVCT (DeTECT). Methods: A block-matching 3D (BM3D) algorithm was adapted to reduce the noise while keeping the texture information of the MVCT images. Following image denoising, a saliency map was created to further enhance the visual conspicuity of low-contrast structures. In this study, BM3D and saliency maps were applied to MVCT images of a CT imaging quality phantom, a head and neck patient, and four prostate patients. Following these steps, the contrast-to-noise ratios (CNRs) were quantified. Results: By applying BM3D denoising and a saliency map, postprocessed MVCT images show remarkable improvements in imaging contrast without compromising resolution. For the head and neck patient, the difficult-to-see lymph nodes and vein in the carotid space in the original MVCT image became conspicuous in DeTECT. For the prostate patients, the ambiguous boundary between the bladder and the prostate in the original MVCT was clarified. The CNRs of the phantom low-contrast inserts were improved from 1.48 and 3.8 to 13.67 and 16.17, respectively. The CNRs of two regions-of-interest were improved from 1.5 and 3.17 to 3.14 and 15.76, respectively, for the head and neck patient. DeTECT also increased the CNR of the prostate from 0.13 to 1.46 for the four prostate patients. The results are substantially better than a local denoising method using anisotropic diffusion
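
    For reference, one common form of the contrast-to-noise ratio quoted above (definitions vary, and the exact formula used in the study is not given in the abstract):

        import numpy as np

        def cnr(img, roi_a, roi_b):
            # roi_a, roi_b: boolean masks for the two regions of interest.
            a, b = img[roi_a], img[roi_b]
            return abs(a.mean() - b.mean()) / np.sqrt(0.5 * (a.var() + b.var()))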

  19. Despeckling SRTM and other topographic data with a denoising algorithm

    NASA Astrophysics Data System (ADS)

    Stevenson, John A.; Sun, Xianfang; Mitchell, Neil C.

    2010-01-01

    Noise in topographic data obscures features and increases error in geomorphic products calculated from DEMs. DEMs produced by radar remote sensing, such as SRTM, are frequently used for geomorphological studies, but they often contain speckle noise which may significantly lower the quality of geomorphometric analyses. We introduce here an algorithm that denoises three-dimensional objects while preserving sharp features. It is free to download and simple to use. In this study the algorithm is applied to topographic data (synthetic landscapes, SRTM, TOPSAR) and the results are compared against those of a mean filter, using LiDAR data as ground truth for the natural datasets. The level of denoising is controlled by two parameters: the threshold (T) that controls the sharpness of the features to be preserved, and the number of iterations (n) that controls how much the data are changed. The optimum settings depend on the nature of the topography and of the noise to be removed, but are typically in the range T = 0.87-0.99 and n = 1-10. If the threshold is too high, noise is preserved. A lower threshold setting is used where noise is spatially uncorrelated (e.g. TOPSAR), whereas in some other datasets (e.g. SRTM), where filtering of the data during processing has introduced spatial correlation to the noise, higher thresholds can be used. Compared to those filtered to an equivalent level with a mean filter, data smoothed by the denoising algorithm of Sun et al. [Sun, X., Rosin, P.L., Martin, R.R., Langbein, F.C., 2007. Fast and effective feature-preserving mesh denoising. IEEE Transactions on Visualization and Computer Graphics 13, 925-938] are closer to the original data and to the ground truth. Changes to the data are smaller and less correlated to topographic features. Furthermore, the feature-preserving nature of the algorithm allows significant smoothing to be applied to flat areas of topography while limiting the alterations made in mountainous regions, with clear
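
    A much-simplified raster analogue of the feature-preserving idea, for a regular DEM grid: cells are averaged only with neighbours whose surface normals agree to within the threshold T, iterated n times. This illustrates the roles of T and n only; it is not the Sun et al. mesh algorithm itself.

        import numpy as np

        def feature_preserving_smooth(dem, T=0.93, n=5, cell=30.0):
            z = dem.astype(float)
            for _ in range(n):
                gy, gx = np.gradient(z, cell)
                normals = np.dstack([-gx, -gy, np.ones_like(z)])
                normals /= np.linalg.norm(normals, axis=2, keepdims=True)
                out = z.copy()
                H, W = z.shape
                for i in range(1, H - 1):
                    for j in range(1, W - 1):
                        nb = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
                        # Average only with neighbours on a similar facet
                        vals = [z[a, b] for a, b in nb
                                if normals[i, j] @ normals[a, b] > T]
                        if vals:
                            out[i, j] = (z[i, j] + sum(vals)) / (1 + len(vals))
                z = out
            return z

    Flat, noisy areas (nearly parallel normals) are smoothed aggressively, while cells across sharp ridges fail the T test and are left alone, mirroring the behaviour reported for the mesh algorithm.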

  20. Making Inferences: Comprehension of Physical Causality, Intentionality, and Emotions in Discourse by High-Functioning Older Children, Adolescents, and Adults with Autism

    ERIC Educational Resources Information Center

    Bodner, Kimberly E.; Engelhardt, Christopher R.; Minshew, Nancy J.; Williams, Diane L.

    2015-01-01

    Studies investigating inferential reasoning in autism spectrum disorder (ASD) have focused on the ability to make socially-related inferences or inferences more generally. Important variables for intervention planning such as whether inferences depend on physical experiences or the nature of social information have received less consideration. A…

  2. Conjunction of radial basis function interpolator and artificial intelligence models for time-space modeling of contaminant transport in porous media

    NASA Astrophysics Data System (ADS)

    Nourani, Vahid; Mousavi, Shahram; Dabrowska, Dominika; Sadikoglu, Fahreddin

    2017-05-01

    As an innovation, both black-box and physically based models were incorporated into simulating groundwater flow and contaminant transport. Time series of groundwater level (GL) and chloride concentration (CC) observed at different piezometers of the study plain were first de-noised using a wavelet-based de-noising approach. The effect of the de-noised data on the performance of an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS) was evaluated. Wavelet transform coherence was employed for spatial clustering of the piezometers. Then, for each cluster, ANN and ANFIS models were trained to predict GL and CC values. Finally, taking the predicted water heads at the piezometers as interior conditions, the radial basis function, used as a meshless method for solving the partial differential equations of groundwater flow and contaminant transport, estimated GL and CC values at any point within the plain where no piezometer exists. Results indicated that the ANFIS-based spatiotemporal model was up to 13% more efficient than the ANN-based model.
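
    The meshless estimation step can be sketched with SciPy's thin-plate-spline interpolator standing in for the paper's radial basis function solver; the piezometer coordinates and groundwater levels below are invented placeholders, not data from the study.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical piezometer coordinates (x, y in km) and predicted
# groundwater levels (m) for one time step; values are illustrative only.
xy = np.array([[0.0, 0.0], [1.5, 0.2], [0.8, 1.9], [2.3, 1.1], [1.0, 2.8]])
gl = np.array([12.1, 11.4, 13.0, 10.8, 12.6])

# A thin-plate-spline RBF acts as the meshless interpolant: it estimates
# GL at any point of the plain without building a computational grid.
rbf = RBFInterpolator(xy, gl, kernel="thin_plate_spline", smoothing=0.0)

query = np.array([[1.2, 1.0], [2.0, 2.0]])   # points with no piezometer
print(rbf(query))
```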

  3. Improving Students' Ability to Intuitively Infer Resistance from Magnitude of Current and Potential Difference Information: A Functional Learning Approach

    ERIC Educational Resources Information Center

    Chasseigne, Gerard; Giraudeau, Caroline; Lafon, Peggy; Mullet, Etienne

    2011-01-01

    The study examined the knowledge of the functional relations between potential difference, magnitude of current, and resistance among seventh graders, ninth graders, 11th graders (in technical schools), and college students. It also tested the efficiency of a learning device named "functional learning" derived from cognitive psychology on the…

  5. Nanotechnology and statistical inference

    NASA Astrophysics Data System (ADS)

    Vesely, Sara; Vesely, Leonardo; Vesely, Alessandro

    2017-08-01

    We discuss some problems that arise when applying statistical inference to data with the aim of disclosing new functionalities. A predictive model analyzes the data taken from experiments on a specific material to assess the likelihood that another product, with similar structure and properties, will exhibit the same functionality. It does not have much predictive power if variability occurs as a consequence of a specific, non-linear behavior. We illustrate our discussion with experiments on biased dice.

  6. Denoising of single-trial matrix representations using 2D nonlinear diffusion filtering.

    PubMed

    Mustaffa, I; Trenado, C; Schwerdtfeger, K; Strauss, D J

    2010-01-15

    In this paper we present a novel application of denoising by means of nonlinear diffusion filters (NDFs). NDFs have been successfully applied in image processing and computer vision, particularly in image denoising, smoothing, segmentation, and restoration. We apply two types of NDF to the denoising of evoked responses in single trials in matrix form: the nonlinear isotropic and the anisotropic diffusion filter. We show that by means of NDFs we are able to denoise the evoked potentials, resulting in a better extraction of physiologically relevant morphological features over the ongoing experiment. Thanks to an adaptive diffusion feature, this technique offers the advantage of translation invariance in comparison to other well-known methods, e.g., wavelet denoising based on maximally decimated filter banks. We compare the proposed technique with a wavelet denoising scheme that had been introduced before for evoked responses. It is concluded that NDFs represent a promising and useful approach to the denoising of event-related potentials. Novel NDF applications to the denoising of single trials of auditory brain responses (ABRs) and of transcranial magnetic stimulation (TMS)-evoked electroencephalographic responses are presented.
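
    A minimal sketch of the nonlinear isotropic variant on a single-trial matrix, assuming the classic Perona-Malik conductivity; the anisotropic filter and the wavelet comparison from the record are not covered, and all parameter values are illustrative.

```python
import numpy as np

def perona_malik(u, n_iter=20, kappa=0.5, dt=0.2):
    """Nonlinear isotropic diffusion (Perona-Malik). The conductivity
    g(d) = exp(-(d/kappa)^2) vanishes across strong gradients, so the
    filter smooths noise while preserving edge-like transients."""
    u = u.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(n_iter):
        p = np.pad(u, 1, mode="edge")        # zero-flux borders
        dN = p[:-2, 1:-1] - u                # differences to 4 neighbours
        dS = p[2:, 1:-1] - u
        dE = p[1:-1, 2:] - u
        dW = p[1:-1, :-2] - u
        u += dt * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
    return u

# Toy single-trial matrix: trials x samples, denoised as a 2-D "image".
trials = 0.3 * np.random.randn(64, 256) + np.sin(np.linspace(0, 6, 256))
clean = perona_malik(trials, n_iter=30)
```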

  7. A Fast Algorithm for Denoising Magnitude Diffusion-Weighted Images with Rank and Edge Constraints

    PubMed Central

    Lam, Fan; Liu, Ding; Song, Zhuang; Schuff, Norbert; Liang, Zhi-Pei

    2015-01-01

    Purpose: To accelerate denoising of magnitude diffusion-weighted images subject to joint rank and edge constraints. Methods: We extend a previously proposed majorize-minimize (MM) method for statistical estimation that involves noncentral χ distributions to incorporate joint rank and edge constraints. A new algorithm is derived which decomposes the constrained noncentral χ denoising problem into a series of constrained Gaussian denoising problems, each of which is then solved using an efficient alternating minimization scheme. Results: The performance of the proposed algorithm has been evaluated using both simulated and experimental data. Results from simulations based on ex vivo data show that the new algorithm achieves about a factor of 10 speed-up over the original Quasi-Newton based algorithm. This improvement in computational efficiency enabled denoising of large data sets containing many diffusion-encoding directions. The denoising performance of the new efficient algorithm is found to be comparable to or even better than that of the original slow algorithm. For an in vivo high-resolution Q-ball acquisition, a comparison of fiber tracking results around the hippocampus region before and after denoising is also shown to demonstrate the denoising effects of the new algorithm. Conclusion: The optimization problem associated with denoising noncentral χ distributed diffusion-weighted images subject to joint rank and edge constraints can be solved efficiently using an MM-based algorithm. PMID:25733066

  8. A fast algorithm for denoising magnitude diffusion-weighted images with rank and edge constraints.

    PubMed

    Lam, Fan; Liu, Ding; Song, Zhuang; Schuff, Norbert; Liang, Zhi-Pei

    2016-01-01

    To accelerate denoising of magnitude diffusion-weighted images subject to joint rank and edge constraints, we extend a previously proposed majorize-minimize method for statistical estimation that involves noncentral χ distributions to incorporate joint rank and edge constraints. A new algorithm is derived which decomposes the constrained noncentral χ denoising problem into a series of constrained Gaussian denoising problems, each of which is then solved using an efficient alternating minimization scheme. The performance of the proposed algorithm has been evaluated using both simulated and experimental data. Results from simulations based on ex vivo data show that the new algorithm achieves about a factor of 10 speed-up over the original Quasi-Newton-based algorithm. This improvement in computational efficiency enabled denoising of large datasets containing many diffusion-encoding directions. The denoising performance of the new efficient algorithm is found to be comparable to or even better than that of the original slow algorithm. For an in vivo high-resolution Q-ball acquisition, a comparison of fiber tracking results around the hippocampus region before and after denoising is also shown to demonstrate the denoising effects of the new algorithm. The optimization problem associated with denoising noncentral χ distributed diffusion-weighted images subject to joint rank and edge constraints can be solved efficiently using a majorize-minimize-based algorithm. © 2015 Wiley Periodicals, Inc.

  9. A New Wavelet Denoising Method for Selecting Decomposition Levels and Noise Thresholds

    PubMed Central

    Srivastava, Madhur; Anderson, C. Lindsay; Freed, Jack H.

    2016-01-01

    A new method is presented to denoise 1-D experimental signals using wavelet transforms. Although state-of-the-art wavelet denoising methods perform better than other denoising methods, they are not very effective for experimental signals. Unlike images and other signals, experimental signals, for example in chemical and biophysical applications, are less tolerant to the signal distortion and under-denoising caused by standard wavelet denoising methods. The new method (1) provides a way to select the number of decomposition levels to denoise, (2) uses a new formula to calculate noise thresholds that does not require noise estimation, (3) uses separate noise thresholds for positive and negative wavelet coefficients, (4) applies denoising to the approximation component, and (5) allows the flexibility to adjust the noise thresholds. The new method is applied to continuous-wave electron spin resonance (cw-ESR) spectra, and it is found that it increases the signal-to-noise ratio (SNR) by more than 32 dB without distorting the signal, whereas standard denoising methods improve the SNR by less than 10 dB and with some distortion. Its computation time is also more than 6 times faster. PMID:27795877
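
    The sign-split detail thresholding can be sketched with PyWavelets. The percentile rule below is a placeholder, not the paper's threshold formula, and the paper's level-selection and approximation-component steps are not reproduced; all names and parameters are illustrative.

```python
import numpy as np
import pywt

def denoise_1d(x, wavelet="db4", levels=4, q=90):
    """Zero out small detail coefficients level by level, using separate
    thresholds for positive and negative coefficients. The percentile
    rule is a stand-in for the paper's noise-estimation-free formula."""
    coeffs = pywt.wavedec(x, wavelet, level=levels)
    for i, d in enumerate(coeffs[1:], start=1):
        t_pos = np.percentile(d[d > 0], q) if np.any(d > 0) else 0.0
        t_neg = np.percentile(-d[d < 0], q) if np.any(d < 0) else 0.0
        d = np.where((d > 0) & (d < t_pos), 0.0, d)
        d = np.where((d < 0) & (-d < t_neg), 0.0, d)
        coeffs[i] = d
    return pywt.waverec(coeffs, wavelet)

t = np.linspace(0, 10, 1024)
noisy = np.exp(-(t - 5) ** 2) + 0.05 * np.random.randn(t.size)  # toy "spectrum"
recovered = denoise_1d(noisy)
```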

  10. Enhancing P300 Wave of BCI Systems Via Negentropy in Adaptive Wavelet Denoising.

    PubMed

    Vahabi, Z; Amirfattahi, R; Mirzaei, Ar

    2011-07-01

    A Brain Computer Interface (BCI) is a direct communication pathway between the brain and an external device. BCIs are often aimed at assisting, augmenting, or repairing human cognitive or sensory-motor functions. Separating EEG into target and non-target trials based on the presence of the P300 signal is a difficult task, mainly due to the naturally low signal-to-noise ratio of EEG. In this paper a new algorithm is introduced to enhance EEG signals and improve their SNR. Our denoising method is based on multi-resolution analysis via Independent Component Analysis (ICA) fundamentals. We suggest combining negentropy, as a signal feature, with subband information from the wavelet transform. The proposed method is tested on a dataset from BCI Competition 2003 and gives results that compare favorably.

  11. Inferring Aggregated Functional Traits from Metagenomic Data Using Constrained Non-negative Matrix Factorization: Application to Fiber Degradation in the Human Gut Microbiota

    PubMed Central

    Raguideau, Sébastien; Plancade, Sandra; Pons, Nicolas; Leclerc, Marion

    2016-01-01

    Whole Genome Shotgun (WGS) metagenomics is increasingly used to study the structure and functions of complex microbial ecosystems, from both the taxonomic and the functional point of view. Gene inventories of otherwise uncultured microbial communities make the direct functional profiling of microbial communities possible. The concept of community aggregated trait has been adapted from environmental and plant functional ecology to the framework of microbial ecology. Community aggregated traits are quantified from WGS data by computing the abundance of relevant marker genes. They can be used to study key processes at the ecosystem level and correlate environmental factors and ecosystem functions. In this paper we propose a novel model-based approach to infer combinations of aggregated traits characterizing specific ecosystemic metabolic processes. We formulate a model of these Combined Aggregated Functional Traits (CAFTs) accounting for a hierarchical structure of genes, which are grouped on microbial genomes that are in turn linked at the ecosystem level by complex co-occurrences or interactions. The model is completed with constraints specifically designed to exploit available genomic information, in order to favor biologically relevant CAFTs. The CAFTs structure, as well as their intensity in the ecosystem, is obtained by solving a constrained Non-negative Matrix Factorization (NMF) problem. We developed a multicriteria selection procedure for the number of CAFTs. We illustrated our method on the modelling of ecosystemic functional traits of fiber degradation by the human gut microbiota. We used 1408 samples of gene abundances from several high-throughput sequencing projects and found that only four CAFTs were needed to represent the fiber degradation potential. This data reduction highlighted biologically consistent functional patterns while providing a high quality preservation of the original data. Our method is generic and can be applied to other metabolic processes in
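
    For a rough sense of the factorization step, scikit-learn's plain NMF can stand in for the constrained problem; the paper's genomic constraints and multicriteria model selection are not reproduced, and the matrix here is random filler rather than WGS gene abundances.

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy gene-abundance matrix: rows = marker genes, cols = samples.
# Real input would be WGS gene counts; values here are random stand-ins.
rng = np.random.default_rng(0)
X = rng.random((120, 40))

# Four factors, as in the fiber-degradation case study. Plain NMF only
# enforces non-negativity; the paper adds genomic constraints on top.
model = NMF(n_components=4, init="nndsvd", max_iter=500)
W = model.fit_transform(X)   # gene composition of each CAFT
H = model.components_        # CAFT intensity in each sample
```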

  12. Sites Inferred by Metabolic Background Assertion Labeling (SIMBAL): adapting the Partial Phylogenetic Profiling algorithm to scan sequences for signatures that predict protein function

    PubMed Central

    2010-01-01

    Background: Comparative genomics methods such as phylogenetic profiling can mine powerful inferences from inherently noisy biological data sets. We introduce Sites Inferred by Metabolic Background Assertion Labeling (SIMBAL), a method that applies the Partial Phylogenetic Profiling (PPP) approach locally within a protein sequence to discover short sequence signatures associated with functional sites. The approach is based on the basic scoring mechanism employed by PPP, namely the use of binomial distribution statistics to optimize sequence similarity cutoffs during searches of partitioned training sets. Results: Here we illustrate and validate the ability of the SIMBAL method to find functionally relevant short sequence signatures by application to two well-characterized protein families. In the first example, we partitioned a family of ABC permeases using a metabolic background property (urea utilization). Thus, the TRUE set for this family comprised members whose genome of origin encoded a urea utilization system. By moving a sliding window across the sequence of a permease, and searching each subsequence in turn against the full set of partitioned proteins, the method found which local sequence signatures best correlated with the urea utilization trait. Mapping of SIMBAL "hot spots" onto crystal structures of homologous permeases reveals that the significant sites are gating determinants on the cytosolic face rather than, say, docking sites for the substrate-binding protein on the extracellular face. In the second example, we partitioned a protein methyltransferase family using gene proximity as a criterion. In this case, the TRUE set comprised those methyltransferases encoded near the gene for the substrate RF-1. SIMBAL identifies sequence regions that map onto the substrate-binding interface while ignoring regions involved in the methyltransferase reaction mechanism in general. Neither method for training set construction requires any prior experimental
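
    The binomial scoring core can be sketched compactly. The hypothetical helper `binomial_score` takes the TRUE/FALSE labels of training proteins ranked by similarity to one sliding-window subsequence, plus the overall TRUE fraction; the similarity search itself (e.g., BLAST) and the optimization over window positions are outside the sketch.

```python
from math import log10
from scipy.stats import binom

def binomial_score(ranked_labels, p_true):
    """PPP-style score for one sliding-window query: over every depth
    cutoff in the similarity ranking, take the most surprising enrichment
    of TRUE-partition proteins, as -log10 of a binomial tail probability."""
    best, hits = 0.0, 0
    for depth, is_true in enumerate(ranked_labels, start=1):
        hits += is_true
        tail = binom.sf(hits - 1, depth, p_true)  # P(X >= hits)
        best = max(best, -log10(max(tail, 1e-300)))
    return best

# 1 = hit from the TRUE partition (e.g. genome encodes urea utilization).
print(binomial_score([1, 1, 1, 0, 1, 0, 0, 0], p_true=0.3))
```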

  13. An iterative denoising system based on Wiener filtering with application to biomedical images

    NASA Astrophysics Data System (ADS)

    Lahmiri, Salim

    2017-05-01

    Biomedical image denoising systems are important for accurate clinical diagnosis. The purpose of this study is to present a simple and effective iterative multistep image denoising system based on Wiener filtering (WF), where the denoised image from one stage is the input to the next. The denoising process stops when a condition measured by image energy is adaptively met. The proposed iterative system is tested on real clinical images, and performance is measured by the well-known peak signal-to-noise ratio (PSNR). Experimental results showed that the proposed iterative system outperforms conventional image denoising algorithms, including wavelet packet (WP), fourth-order partial differential equation (FOPDE), nonlocal Euclidean means (NLEM), first-order local statistics (FOLS), and a single Wiener filter used as the baseline model. The experimental results demonstrate that the proposed approach can remove noise automatically and effectively while edges and texture characteristics are preserved.
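
    A minimal sketch of the iterative loop, with `scipy.signal.wiener` as the stage filter; the relative-energy stopping rule here is a guess at the paper's adaptive criterion, and `tol`, `win`, and `max_steps` are illustrative choices.

```python
import numpy as np
from scipy.signal import wiener

def iterative_wiener(img, win=5, tol=1e-3, max_steps=20):
    """Feed the Wiener-filtered image back into the filter until the
    relative change in image energy falls below tol."""
    x = img.astype(float)
    for _ in range(max_steps):
        y = wiener(x, mysize=win)
        if abs(np.sum(y**2) - np.sum(x**2)) / np.sum(x**2) < tol:
            return y
        x = y
    return x

rng = np.random.default_rng(0)
noisy = rng.random((64, 64))          # stand-in for a clinical image
denoised = iterative_wiener(noisy)
```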

  14. Hyperspectral image denoising using the robust low-rank tensor recovery.

    PubMed

    Li, Chang; Ma, Yong; Huang, Jun; Mei, Xiaoguang; Ma, Jiayi

    2015-09-01

    Denoising is an important preprocessing step to further analyze the hyperspectral image (HSI), and many denoising methods have been used for the denoising of the HSI data cube. However, the traditional denoising methods are sensitive to outliers and non-Gaussian noise. In this paper, by utilizing the underlying low-rank tensor property of the clean HSI data and the sparsity property of the outliers and non-Gaussian noise, we propose a new model based on the robust low-rank tensor recovery, which can preserve the global structure of HSI and simultaneously remove the outliers and different types of noise: Gaussian noise, impulse noise, dead lines, and so on. The proposed model can be solved by the inexact augmented Lagrangian method, and experiments on simulated and real hyperspectral images demonstrate that the proposed method is efficient for HSI denoising.

  15. A New Method for Nonlocal Means Image Denoising Using Multiple Images.

    PubMed

    Wang, Xingzheng; Wang, Haoqian; Yang, Jiangfeng; Zhang, Yongbing

    2016-01-01

    The basic principle of nonlocal means is to denoise a pixel using a weighted average of neighbourhood pixels, where the weights are decided by the similarity of those pixels. The key issues of the nonlocal means method are how to select similar patches and how to design their weights. This paper makes two main contributions. The first is that we use two images to denoise each pixel; these two noisy images have the same noise deviation, and instead of using only one image, we calculate the weights from both. After the first denoising pass, we obtain a pre-denoised image and a residual image. The second contribution is to exploit the nonlocal similarity between the residual image and the pre-denoised image. The improved nonlocal means method pays more attention to similarity than the original one, which turns out to be very effective in eliminating Gaussian noise. Experimental results with simulated data are provided.
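
    A sketch of the first contribution only (patch weights computed from two noisy copies of the scene); the second stage, which reuses the residual image, is omitted. The quadruple loop is written for clarity, so this is only practical for toy-sized images, and `patch`, `search`, and `h` are illustrative.

```python
import numpy as np

def nlm_two_images(a, b, patch=3, search=7, h=0.1):
    """Nonlocal means where patch distances are computed from BOTH noisy
    copies (a, b) of the same scene, making the weights less sensitive
    to the noise realization in either single image."""
    pr, sr = patch // 2, search // 2
    pa = np.pad(a, pr + sr, mode="reflect")
    pb = np.pad(b, pr + sr, mode="reflect")
    avg = 0.5 * (pa + pb)                 # values to be averaged
    H, W = a.shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            ci, cj = i + pr + sr, j + pr + sr
            ref_a = pa[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
            ref_b = pb[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
            wsum = vsum = 0.0
            for di in range(-sr, sr + 1):
                for dj in range(-sr, sr + 1):
                    ni, nj = ci + di, cj + dj
                    da = ref_a - pa[ni - pr:ni + pr + 1, nj - pr:nj + pr + 1]
                    db = ref_b - pb[ni - pr:ni + pr + 1, nj - pr:nj + pr + 1]
                    d2 = 0.5 * (np.mean(da**2) + np.mean(db**2))
                    w = np.exp(-d2 / h**2)
                    wsum += w
                    vsum += w * avg[ni, nj]
            out[i, j] = vsum / wsum
    return out

rng = np.random.default_rng(0)
truth = np.kron(rng.random((4, 4)), np.ones((8, 8)))  # 32x32 blocky image
a = truth + 0.1 * rng.standard_normal(truth.shape)
b = truth + 0.1 * rng.standard_normal(truth.shape)
den = nlm_two_images(a, b)
```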

  16. Blind source separation based x-ray image denoising from an image sequence.

    PubMed

    Yu, Chun-Yu; Li, Yan; Fei, Bin; Li, Wei-Liang

    2015-09-01

    Blind source separation (BSS) based x-ray image denoising from an image sequence is proposed. Without prior knowledge, the useful image signal can be separated from an x-ray image sequence, since the original images are modeled as different combinations of a stable image signal and random image noise. BSS algorithms such as fixed-point independent component analysis and second-order statistics singular value decomposition are used and compared with multi-frame averaging, a common algorithm for improving an image's signal-to-noise ratio (SNR). Denoising performance is evaluated in terms of SNR, standard deviation, entropy, and runtime. The analysis indicates that BSS is applicable to image denoising; the denoised image's quality improves as more frames are included in the x-ray image sequence, but at greater computational cost. There is thus a trade-off between denoising performance and runtime, and the number of frames included in an image sequence should be chosen accordingly.
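
    A minimal sketch of the BSS idea with scikit-learn's FastICA, assuming a stack of registered frames of a static scene; the second-order SVD variant and the record's evaluation metrics are omitted, and ICA's arbitrary source scale and sign are left unresolved.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Toy sequence: 8 exposures of one static scene plus independent noise.
rng = np.random.default_rng(1)
truth = rng.random((64, 64))
frames = truth[None, :, :] + 0.3 * rng.standard_normal((8, 64, 64))

X = frames.reshape(len(frames), -1)        # one flattened mixture per row
ica = FastICA(n_components=len(frames), random_state=0)
S = ica.fit_transform(X.T).T               # rows are separated sources

# The stable image concentrates in one source; pick the one that best
# matches the multi-frame average (the baseline the record compares to).
# Note: ICA leaves the scale and sign of each source arbitrary.
avg = X.mean(axis=0)
best = max(S, key=lambda s: abs(np.corrcoef(s, avg)[0, 1]))
denoised = best.reshape(64, 64)
```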

  18. From heuristic optimization to dictionary learning: a review and comprehensive comparison of image denoising algorithms.

    PubMed

    Shao, Ling; Yan, Ruomei; Li, Xuelong; Liu, Yan

    2014-07-01

    Image denoising is a well-explored topic in the field of image processing. In the past several decades, the progress made in image denoising has benefited from improved modeling of natural images. In this paper, we introduce a new taxonomy based on image representations for a better understanding of state-of-the-art image denoising techniques. Within each category, several representative algorithms are selected for evaluation and comparison. The experimental results are discussed and analyzed to determine the overall advantages and disadvantages of each category. In general, the nonlocal methods within each category produce better denoising results than local ones. In addition, methods based on overcomplete representations using learned dictionaries perform better than others. The comprehensive study in this paper should serve as a good reference and stimulate new research ideas in image denoising.

  19. OFMspert - Inference of operator intentions in supervisory control using a blackboard architecture. [operator function model expert system

    NASA Technical Reports Server (NTRS)

    Jones, Patricia S.; Mitchell, Christine M.; Rubin, Kenneth S.

    1988-01-01

    The authors propose an architecture for an expert system that can function as an operator's associate in the supervisory control of a complex dynamic system. Called OFMspert (operator function model (OFM) expert system), the architecture uses the operator function modeling methodology as the basis for its design. The authors put emphasis on the understanding capabilities, i.e., the intent-inferencing property, of an operator's associate. They define the generic structure of OFMspert, particularly those features that support intent inferencing. They also describe the implementation and validation of OFMspert in GT-MSOCC (Georgia Tech Multisatellite Operations Control Center), a laboratory domain designed to support research in human-computer interaction and decision aiding in complex, dynamic systems.

  20. Multiscale properties of weighted total variation flow with applications to denoising and registration.

    PubMed

    Athavale, Prashant; Xu, Robert; Radau, Perry; Nachman, Adrian; Wright, Graham A

    2015-07-01

    Images consist of structures of varying scales: large-scale structures such as flat regions, and small-scale structures such as noise, textures, and rapidly oscillatory patterns. In the hierarchical (BV, L^2) image decomposition, Tadmor et al. (2004) start by extracting coarse-scale structures from a given image and successively extract finer structures from the residuals at each step of the iterative decomposition. We propose to begin instead by extracting the finest structures from the given image and then proceed to extract increasingly coarser structures. In most images noise can be considered a fine-scale structure; thus, starting the image decomposition at fine scales, rather than large scales, leads to fast denoising. We note that our approach turns out to be equivalent to the nonstationary regularization of Scherzer and Weickert (2000). The continuous limit of this procedure leads to a time-scaled version of total variation flow. Motivated by specific clinical applications, we introduce an image-dependent weight in the regularization functional and study the corresponding weighted TV flow. We show that the edge-preserving property of the multiscale representation of an input image obtained with the weighted TV flow can be enhanced and localized by an appropriate choice of the weight. We use this in developing an efficient and edge-preserving denoising algorithm with control on speed and localization pro
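
    A naive explicit-time-stepping sketch of the weighted TV flow u_t = div(w grad(u)/|grad(u)|); the hierarchical multiscale decomposition and the clinically motivated weight construction are not reproduced here, and `dt`, `n_iter`, and `eps` are illustrative choices.

```python
import numpy as np

def weighted_tv_flow(u0, w, n_iter=100, dt=0.1, eps=1e-6):
    """Explicit time stepping of u_t = div( w * grad(u)/|grad(u)| ).
    Where the weight w is small (e.g. near important edges) the flow is
    slowed, so those structures survive while flat regions smooth fast."""
    u = u0.astype(float).copy()
    for _ in range(n_iter):
        ux = np.gradient(u, axis=1)
        uy = np.gradient(u, axis=0)
        mag = np.sqrt(ux**2 + uy**2 + eps)   # eps regularizes |grad u|
        fx, fy = w * ux / mag, w * uy / mag
        div = np.gradient(fx, axis=1) + np.gradient(fy, axis=0)
        u += dt * div
    return u

rng = np.random.default_rng(0)
img = rng.random((64, 64))
w = np.ones_like(img)        # uniform weight reduces to plain TV flow
smooth = weighted_tv_flow(img, w)
```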