Medical image denoising using one-dimensional singularity function model.
Luo, Jianhua; Zhu, Yuemin; Hiba, Bassem
2010-03-01
A novel denoising approach is proposed, based on a spectral data substitution mechanism that uses a mathematical model of one-dimensional singularity function analysis (1-D SFA). The method consists in dividing the complete spectral domain of the noisy signal into two subsets: the preserved set, where the spectral data are kept unchanged, and the substitution set, where the original spectral data having lower signal-to-noise ratio (SNR) are replaced by those reconstructed using the 1-D SFA model. The preserved set containing original spectral data is determined according to the SNR of the spectrum. The singular points and singularity degrees in the 1-D SFA model are obtained by calculating finite differences of the noisy signal. The theoretical formulation and experimental results demonstrate that the proposed method allows more efficient denoising while introducing less distortion, and presents a significant improvement over conventional denoising methods.
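The substitution mechanism can be sketched in a few lines. The sketch below is illustrative only: it keeps the low-frequency (high-SNR) part of the spectrum and substitutes the rest with the spectrum of a crude moving-average reconstruction standing in for the 1-D SFA model; the test signal, noise level, and cut-off are assumed values, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
t = np.linspace(0, 1, n)
clean = np.where(t < 0.5, 1.0, 0.2)            # piecewise-constant signal, one singular point
noisy = clean + 0.1 * rng.standard_normal(n)

spec = np.fft.fft(noisy)

# Preserved set: low frequencies, where the signal dominates the flat noise floor.
keep = 16
mask = np.zeros(n, dtype=bool)
mask[:keep] = mask[-keep:] = True

# Substitution set: replace the low-SNR spectral data with the spectrum of a
# model reconstruction (a crude moving average standing in for the 1-D SFA model).
model = np.convolve(noisy, np.ones(9) / 9, mode="same")
spec_out = np.where(mask, spec, np.fft.fft(model))
denoised = np.real(np.fft.ifft(spec_out))

err_noisy = np.mean((noisy - clean) ** 2)
err_denoised = np.mean((denoised - clean) ** 2)
```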
Na, Man Gyun; Oh, Seungrohk
2002-11-15
A neuro-fuzzy inference system combined with wavelet denoising, principal component analysis (PCA), and sequential probability ratio test (SPRT) methods has been developed to monitor a sensor of interest using the information of other sensors. The parameters of the neuro-fuzzy inference system that estimates the relevant sensor signal are optimized by a genetic algorithm and a least-squares algorithm. The wavelet denoising technique was applied to remove noise components from the input signals to the neuro-fuzzy system. By reducing the dimension of the input space without losing a significant amount of information, PCA was used to shorten the time necessary to train the neuro-fuzzy system, simplify the structure of the neuro-fuzzy inference system, and ease the selection of its input signals. Using the residual signals between the estimated and measured signals, the SPRT is applied to detect whether the sensors are degraded. The proposed sensor-monitoring algorithm was verified through applications to the pressurizer water level, the pressurizer pressure, and the hot-leg temperature sensors in pressurized water reactors.
Total variation denoising of probability measures using iterated function systems with probabilities
NASA Astrophysics Data System (ADS)
La Torre, Davide; Mendivil, Franklin; Vrscay, Edward R.
2017-01-01
In this paper we present a total variation denoising problem for probability measures using the set of fixed-point probability measures of iterated function systems with probabilities (IFSP). By means of the Collage Theorem for contraction mappings, we provide an upper bound for this problem that can be solved by determining a set of probabilities.
A New Adaptive Diffusive Function for Magnetic Resonance Imaging Denoising Based on Pixel Similarity
Heydari, Mostafa; Karami, Mohammad Reza
2015-01-01
Although there are many methods for image denoising, partial differential equation (PDE) based denoising has attracted much attention in the field of medical image processing, such as magnetic resonance imaging (MRI). The main advantage of the PDE-based denoising approach lies in its ability to smooth the image in a nonlinear way, which effectively removes noise while preserving edges through anisotropic diffusion controlled by the diffusive function. This function was first introduced by Perona and Malik (P-M) in their model. They proposed two functions that are the most frequently used in PDE-based methods. Since these functions consider only the gradient information of a diffused pixel, they cannot remove noise in images with low signal-to-noise ratio (SNR). In this paper we propose a modified diffusive function with fractional power that is based on pixel similarity to improve the P-M model at low SNR. We will also show that our proposed function stabilizes the P-M method. As experimental results show, our proposed function, a modified version of the P-M function, improves the SNR and preserves edges better than the P-M functions at low SNR. PMID:26955563
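For reference, the classical P-M scheme that the paper modifies can be sketched as follows. This uses the standard exponential diffusive function, not the authors' fractional-power, pixel-similarity variant; the test image and parameter values are assumptions.

```python
import numpy as np

def pm_diffuse(img, n_iter=20, kappa=0.3, dt=0.2):
    """Classic Perona-Malik diffusion with the exponential diffusive
    function g(d) = exp(-(d/kappa)^2): strong smoothing where gradients
    are small (noise), almost none across strong edges."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(n_iter):
        # Finite differences to the four nearest neighbours (periodic borders).
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

rng = np.random.default_rng(1)
clean = np.zeros((64, 64))
clean[:, 32:] = 1.0                            # one strong vertical edge
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
smoothed = pm_diffuse(noisy)

mse_noisy = np.mean((noisy - clean) ** 2)
mse_smooth = np.mean((smoothed - clean) ** 2)
```

With kappa chosen above the noise gradient scale but well below the edge contrast, the noise diffuses away while the edge is effectively frozen.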
Lahmiri, Salim
2016-03-01
Hybridisation of the bi-dimensional empirical mode decomposition (BEMD) with denoising techniques has been proposed in the literature as an effective approach for image denoising. In this Letter, the Student's probability density function is introduced in the computation of the mean envelope of the data during the BEMD sifting process to make it robust to values that are far from the mean. The resulting BEMD is denoted tBEMD. In order to show the effectiveness of the tBEMD, several image denoising techniques in the tBEMD domain are employed, namely fourth-order partial differential equation (PDE), linear complex diffusion process (LCDP), non-linear complex diffusion process (NLCDP), and the discrete wavelet transform (DWT). Two biomedical images and a standard digital image were considered for experiments. The original images were corrupted with additive Gaussian noise at three different levels. Based on peak signal-to-noise ratio, the experimental results show that PDE, LCDP, NLCDP, and DWT all perform better in the tBEMD domain than in the classical BEMD domain. It is also found that tBEMD is faster than classical BEMD when the noise level is low. When it is high, the computational cost in terms of processing time is similar. The effectiveness of the presented approach makes it promising for clinical applications.
Patel, Ameera X; Bullmore, Edward T
2016-11-15
Connectome mapping using techniques such as functional magnetic resonance imaging (fMRI) has become a focus of systems neuroscience. There remain many statistical challenges in analysis of functional connectivity and network architecture from BOLD fMRI multivariate time series. One key statistic for any time series is its (effective) degrees of freedom, df, which will generally be less than the number of time points (or nominal degrees of freedom, N). If we know the df, then probabilistic inference on other fMRI statistics, such as the correlation between two voxel or regional time series, is feasible. However, we currently lack good estimators of df in fMRI time series, especially after the degrees of freedom of the "raw" data have been modified substantially by denoising algorithms for head movement. Here, we used a wavelet-based method both to denoise fMRI data and to estimate the (effective) df of the denoised process. We show that seed voxel correlations corrected for locally variable df could be tested for false positive connectivity with better control over Type I error and greater specificity of anatomical mapping than probabilistic connectivity maps using the nominal degrees of freedom. We also show that wavelet despiked statistics can be used to estimate all pairwise correlations between a set of regional nodes, assign a P value to each edge, and then iteratively add edges to the graph in order of increasing P. These probabilistically thresholded graphs are likely more robust to regional variation in head movement effects than comparable graphs constructed by thresholding correlations. Finally, we show that time-windowed estimates of df can be used for probabilistic connectivity testing or dynamic network analysis so that apparent changes in the functional connectome are appropriately corrected for the effects of transient noise bursts. Wavelet despiking is both an algorithm for fMRI time series denoising and an estimator of the (effective) df of denoised
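The core statistical point, that correlation inference must use the effective rather than the nominal degrees of freedom, can be illustrated without the wavelet machinery. In this hedged sketch the effective df is simply assumed; estimating it from the despiked data is the paper's contribution and is not reproduced here.

```python
import numpy as np

def corr_t(r, df):
    """t statistic for a Pearson correlation r tested with df degrees of freedom."""
    return r * np.sqrt(df - 2) / np.sqrt(1 - r ** 2)

rng = np.random.default_rng(2)
N = 200          # nominal degrees of freedom (number of time points)
df_eff = 80      # effective df after denoising (assumed here; estimated in the paper)

# Two smoothed "voxel" time series: temporal smoothing induces autocorrelation,
# which is exactly what makes the effective df smaller than N.
x = np.convolve(rng.standard_normal(N + 4), np.ones(5) / 5, mode="valid")
y = np.convolve(rng.standard_normal(N + 4), np.ones(5) / 5, mode="valid")
r = np.corrcoef(x, y)[0, 1]

t_nominal = corr_t(r, N)         # overstates the evidence for connectivity
t_corrected = corr_t(r, df_eff)  # weaker, better-calibrated test statistic
```

Using N in place of the effective df inflates the test statistic and hence the false-positive rate, which is the failure mode the wavelet-despiking estimator is designed to control.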
He, Yunlong; Zhao, Yanna; Ren, Yanju; Gee, James
2017-01-01
Filtering is among the most fundamental operations of retinal image processing; the value of the filtered image at a given location is a function of the values in a local window centered at that location. However, preserving thin retinal vessels during the filtering process is challenging due to the vessels' small area and weak contrast against the background, caused by the limited imaging resolution and the reduced blood flow in thin vessels. In this paper, we present a novel retinal image denoising approach that is able to preserve the details of retinal vessels while effectively eliminating image noise. Specifically, our approach determines an optimal spatial kernel for the bilateral filter, represented by a line spread function with an orientation and scale adjusted adaptively to the local vessel structure. Moreover, this approach can also serve as a preprocessing tool for improving the accuracy of vessel detection techniques. Experimental results show the superiority of our approach over state-of-the-art image denoising techniques such as the bilateral filter. PMID:28261320
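A minimal brute-force bilateral filter conveys the baseline the paper improves on; the oriented line-spread spatial kernel itself is not reproduced here, and the synthetic "vessel" image and parameter values are assumptions.

```python
import numpy as np

def bilateral(img, radius=3, sigma_s=2.0, sigma_r=0.2):
    """Brute-force bilateral filter: spatial Gaussian times range Gaussian."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys ** 2 + xs ** 2) / (2 * sigma_s ** 2))
    out = np.empty_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng_w = np.exp(-((win - img[i, j]) ** 2) / (2 * sigma_r ** 2))
            weights = spatial * rng_w      # small weight across the vessel boundary
            out[i, j] = (weights * win).sum() / weights.sum()
    return out

rng = np.random.default_rng(3)
clean = np.zeros((32, 32))
clean[15:17, :] = 1.0                      # a thin, high-contrast "vessel"
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
filtered = bilateral(noisy)

mse_noisy = np.mean((noisy - clean) ** 2)
mse_filtered = np.mean((filtered - clean) ** 2)
```

The isotropic spatial kernel averages a thin vessel with few like-valued pixels; orienting the kernel along the vessel, as the paper proposes, increases the pool of valid samples.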
Can orangutans (Pongo abelii) infer tool functionality?
Mulcahy, Nicholas J; Schubiger, Michèle N
2014-05-01
It is debatable whether apes can reason about the unobservable properties of tools. We tested orangutans for this ability with a range of tool tasks that they could solve by using observational cues to infer tool functionality. In experiment 1, subjects successfully chose an unbroken tool over a broken one when each tool's middle section was hidden. This prevented seeing which tool was functional but it could be inferred by noting the tools' visible ends that were either disjointed (broken tool) or aligned (unbroken tool). We investigated whether success in experiment 1 was best explained by inferential reasoning or by having a preference per se for a hidden tool with an aligned configuration. We conducted a similar task to experiment 1 and included a functional bent tool that could be arranged to have the same disjointed configuration as the broken tool. The results suggested that subjects had a preference per se for the aligned tool by choosing it regardless of whether it was paired with the broken tool or the functional bent tool. However, further experiments with the bent tool task suggested this preference was a result of additional demands of having to attend to and remember the properties of the tools from the beginning of the task. In our last experiment, we removed these task demands and found evidence that subjects could infer the functionality of a broken tool and an unbroken tool that both looked identical at the time of choice.
Neural Circuit Inference from Function to Structure.
Real, Esteban; Asari, Hiroki; Gollisch, Tim; Meister, Markus
2017-01-23
Advances in technology are opening new windows on the structural connectivity and functional dynamics of brain circuits. Quantitative frameworks are needed that integrate these data from anatomy and physiology. Here, we present a modeling approach that creates such a link. The goal is to infer the structure of a neural circuit from sparse neural recordings, using partial knowledge of its anatomy as a regularizing constraint. We recorded visual responses from the output neurons of the retina, the ganglion cells. We then generated a systematic sequence of circuit models that represents retinal neurons and connections and fitted them to the experimental data. The optimal models faithfully recapitulated the ganglion cell outputs. More importantly, they made predictions about dynamics and connectivity among unobserved neurons internal to the circuit, and these were subsequently confirmed by experiment. This circuit inference framework promises to facilitate the integration and understanding of big data in neuroscience.
Adaptive Denoising Technique for Robust Analysis of Functional Magnetic Resonance Imaging Data
2007-11-02
Supported in part by the Center for Advanced Software and Biomedical Engineering Consultations (CASBEC), Cairo University, and IBE Technologies, Egypt.
Functional network inference of the suprachiasmatic nucleus
Abel, John H.; Meeker, Kirsten; Granados-Fuentes, Daniel; St. John, Peter C.; Wang, Thomas J.; Bales, Benjamin B.; Doyle, Francis J.; Herzog, Erik D.; Petzold, Linda R.
2016-01-01
In the mammalian suprachiasmatic nucleus (SCN), noisy cellular oscillators communicate within a neuronal network to generate precise system-wide circadian rhythms. Although the intracellular genetic oscillator and intercellular biochemical coupling mechanisms have been examined previously, the network topology driving synchronization of the SCN has not been elucidated. This network has been particularly challenging to probe, due to its oscillatory components and slow coupling timescale. In this work, we investigated the SCN network at a single-cell resolution through a chemically induced desynchronization. We then inferred functional connections in the SCN by applying the maximal information coefficient statistic to bioluminescence reporter data from individual neurons while they resynchronized their circadian cycling. Our results demonstrate that the functional network of circadian cells associated with resynchronization has small-world characteristics, with a node degree distribution that is exponential. We show that hubs of this small-world network are preferentially located in the central SCN, with sparsely connected shells surrounding these cores. Finally, we used two computational models of circadian neurons to validate our predictions of network structure. PMID:27044085
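The inference pipeline (an association statistic between all cell pairs, a threshold to form an adjacency matrix, then node degrees) can be sketched with Pearson correlation standing in for the maximal information coefficient; the synthetic hub-and-followers traces and the threshold are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n_cells, T = 30, 500

# Synthetic "bioluminescence" traces: cell 0 is a hub that drives cells 1-10;
# cells 11-29 fluctuate independently (plain noise here, for brevity).
hub = np.sin(np.linspace(0, 20 * np.pi, T)) + 0.3 * rng.standard_normal(T)
traces = 0.3 * rng.standard_normal((n_cells, T))
traces[0] = hub
traces[1:11] += 0.8 * hub

# Association statistic for every pair (Pearson correlation here; the paper
# uses the maximal information coefficient, which also captures nonlinear ties).
C = np.corrcoef(traces)
np.fill_diagonal(C, 0.0)

adj = np.abs(C) > 0.5           # functional edge where the association is strong
degree = adj.sum(axis=0)        # node degree distribution of the inferred network
```

In this toy network the hub and its followers form a densely connected core while the independent cells remain isolated, the qualitative hub-and-shell structure the paper reports for the central SCN.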
Functional neuroanatomy of intuitive physical inference.
Fischer, Jason; Mikhael, John G; Tenenbaum, Joshua B; Kanwisher, Nancy
2016-08-23
To engage with the world-to understand the scene in front of us, plan actions, and predict what will happen next-we must have an intuitive grasp of the world's physical structure and dynamics. How do the objects in front of us rest on and support each other, how much force would be required to move them, and how will they behave when they fall, roll, or collide? Despite the centrality of physical inferences in daily life, little is known about the brain mechanisms recruited to interpret the physical structure of a scene and predict how physical events will unfold. Here, in a series of fMRI experiments, we identified a set of cortical regions that are selectively engaged when people watch and predict the unfolding of physical events-a "physics engine" in the brain. These brain regions are selective to physical inferences relative to nonphysical but otherwise highly similar scenes and tasks. However, these regions are not exclusively engaged in physical inferences per se or, indeed, even in scene understanding; they overlap with the domain-general "multiple demand" system, especially the parts of that system involved in action planning and tool use, pointing to a close relationship between the cognitive and neural mechanisms involved in parsing the physical content of a scene and preparing an appropriate action.
Image denoising using local tangent space alignment
NASA Astrophysics Data System (ADS)
Feng, JianZhou; Song, Li; Huo, Xiaoming; Yang, XiaoKang; Zhang, Wenjun
2010-07-01
We propose a novel image denoising approach based on exploring an underlying (nonlinear) low-dimensional manifold. Using local tangent space alignment (LTSA), we 'learn' such a manifold, which approximates the image content effectively. The denoising is performed by minimizing a newly defined objective function, which is a sum of two terms: (a) the difference between the noisy image and the denoised image, and (b) the distance from the image patch to the manifold. We extend the LTSA method from manifold learning to denoising. We introduce a local dimension concept that provides adaptivity to different kinds of image patches, e.g. flat patches having lower dimension. We also plug in a basic denoising stage to estimate the local coordinates more accurately. The proposed method is competitive: its performance surpasses that of the K-SVD denoising method.
Image denoising using a combined criterion
NASA Astrophysics Data System (ADS)
Semenishchev, Evgeny; Marchuk, Vladimir; Shrafel, Igor; Dubovskov, Vadim; Onoyko, Tatyana; Maslennikov, Stansilav
2016-05-01
A new image denoising method is proposed in this paper. We consider an optimization problem whose objective function is a linear combination of two criteria, namely, the L2 norm and the first-order square difference. The method is parametric, so by choosing the parameters we can adapt the proposed criteria of the objective function. The denoising algorithm consists of the following steps: 1) multiple denoising estimates are found on local areas of the image; 2) image edges are determined; 3) parameters of the method are fixed and denoised estimates of the local area are found; 4) the local window is moved to the next position (local windows overlap) in order to produce the final estimate. A proper choice of the parameters of the introduced method is discussed. A comparative analysis of the new denoising method against existing ones is performed on a set of test images.
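A one-dimensional version of such a combined criterion has a closed form: minimizing the L2 data term plus a weighted first-order square difference reduces to a linear solve. This sketches the criterion only, not the authors' windowed, edge-aware algorithm; the test signal and the weight lam are assumed.

```python
import numpy as np

def denoise_1d(y, lam=5.0):
    """Minimise ||x - y||^2 + lam * ||D x||^2 with D the first-order
    difference operator; the minimiser solves (I + lam * D^T D) x = y."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)     # (n-1) x n matrix of first differences
    A = np.eye(n) + lam * D.T @ D      # tridiagonal, symmetric positive definite
    return np.linalg.solve(A, y)

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 200)
clean = np.sin(2 * np.pi * t)
noisy = clean + 0.2 * rng.standard_normal(t.size)
denoised = denoise_1d(noisy)

mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
```

Larger lam weights the smoothness criterion more heavily, which is the trade-off the paper tunes per local window.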
Green channel guiding denoising on Bayer image.
Tan, Xin; Lai, Shiming; Liu, Yu; Zhang, Maojun
2014-01-01
Denoising is an indispensable function for digital cameras. Because noise is diffused during demosaicking, denoising ought to work directly on Bayer data. The difficulty of denoising on a Bayer image is the interlaced mosaic pattern of red, green, and blue. The guided filter is a time-efficient explicit filter kernel that can incorporate additional information from a guidance image, but it has not yet been applied to Bayer images. In this work, we observe that the green channel of the Bayer pattern is higher in both sampling rate and signal-to-noise ratio (SNR) than the red and blue channels. Therefore the green channel can be used to guide denoising. This kind of guidance integrates the different color channels. Experiments on both actual and simulated Bayer images indicate that the green channel acts well as the guidance signal, and that the proposed method is competitive with other popular filter-kernel denoising methods.
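The idea of letting the high-SNR green channel guide the noisier channels can be sketched with a plain guided filter on a two-channel toy image. No actual Bayer mosaic handling is attempted; image sizes, noise levels, and parameters are assumptions.

```python
import numpy as np

def box(a, r):
    """Windowed mean with a (2r+1)-tap box kernel applied along both axes."""
    k = np.ones(2 * r + 1) / (2 * r + 1)
    a = np.apply_along_axis(np.convolve, 0, a, k, mode="same")
    return np.apply_along_axis(np.convolve, 1, a, k, mode="same")

def guided_filter(guide, p, r=2, eps=1e-3):
    """Guided filter: fit a local linear model of p on the guide image."""
    m_g, m_p = box(guide, r), box(p, r)
    var_g = box(guide * guide, r) - m_g * m_g
    cov_gp = box(guide * p, r) - m_g * m_p
    a = cov_gp / (var_g + eps)          # local slope: follows guide structure
    b = m_p - a * m_g                   # local offset
    return box(a, r) * guide + box(b, r)

rng = np.random.default_rng(6)
clean = np.zeros((48, 48))
clean[:, 24:] = 1.0
green = clean + 0.02 * rng.standard_normal(clean.shape)      # high-SNR guide channel
red = 0.5 * clean + 0.2 * rng.standard_normal(clean.shape)   # noisy channel to denoise
out = guided_filter(green, red)

c = slice(6, -6)                        # ignore borders, where the box kernel is biased
mse_in = np.mean((red[c, c] - 0.5 * clean[c, c]) ** 2)
mse_out = np.mean((out[c, c] - 0.5 * clean[c, c]) ** 2)
```

The edge survives because the local linear model transfers structure from the clean guide, while flat regions fall back to heavy averaging of the noisy channel.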
Salimi-Khorshidi, Gholamreza; Douaud, Gwenaëlle; Beckmann, Christian F; Glasser, Matthew F; Griffanti, Ludovica; Smith, Stephen M
2014-04-15
Many sources of fluctuation contribute to the fMRI signal, and this makes it difficult to identify the effects that are truly related to the underlying neuronal activity. Independent component analysis (ICA), one of the most widely used techniques for the exploratory analysis of fMRI data, has been shown to be a powerful technique for identifying various sources of neuronally-related and artefactual fluctuation in fMRI data (both with the application of external stimuli and with the subject "at rest"). ICA decomposes fMRI data into patterns of activity (a set of spatial maps and their corresponding time series) that are statistically independent and add linearly to explain voxel-wise time series. Given the set of ICA components, if the components representing "signal" (brain activity) can be distinguished from the "noise" components (effects of motion, non-neuronal physiology, scanner artefacts and other nuisance sources), the latter can then be removed from the data, providing an effective cleanup of structured noise. Manual classification of components is labour intensive and requires expertise; hence, a fully automatic noise detection algorithm that can reliably detect various types of noise sources (in both task and resting fMRI) is desirable. In this paper, we introduce FIX ("FMRIB's ICA-based X-noiseifier"), which provides an automatic solution for denoising fMRI data via accurate classification of ICA components. For each ICA component FIX generates a large number of distinct spatial and temporal features, each describing a different aspect of the data (e.g., what proportion of temporal fluctuations are at high frequencies). The set of features is then fed into a multi-level classifier (built around several different classifiers). Once trained through the hand-classification of a sufficient number of training datasets, the classifier can then automatically classify new datasets. The noise components can then be subtracted from (or regressed out of) the original
Quantitative evaluation of statistical inference in resting state functional MRI.
Yang, Xue; Kang, Hakmook; Newton, Allen; Landman, Bennett A
2012-01-01
Modern statistical inference techniques may be able to improve the sensitivity and specificity of resting state functional MRI (rs-fMRI) connectivity analysis through more realistic characterization of distributional assumptions. In simulation, the advantages of such modern methods are readily demonstrable. However, quantitative empirical validation remains elusive in vivo, as the true connectivity patterns are unknown and noise/artifact distributions are challenging to characterize with high fidelity. Recent innovations in capturing finite sample behavior of asymptotically consistent estimators (i.e., SIMulation and EXtrapolation, SIMEX) have enabled direct estimation of bias given single datasets. Herein, we leverage the theoretical core of SIMEX to study the properties of inference methods in the face of diminishing data (in contrast to increasing noise). The stability of inference methods with respect to synthetic loss of empirical data (defined as resilience) is used to quantify the empirical performance of one inference method relative to another. We illustrate this new approach in a comparison of ordinary and robust inference methods with rs-fMRI.
Differential Expression and Network Inferences through Functional Data Modeling
Telesca, Donatello; Inoue, Lurdes Y.T.; Neira, Mauricio; Etzioni, Ruth; Gleave, Martin; Nelson, Colleen
2010-01-01
Time–course microarray data consist of mRNA expression from a common set of genes collected at different time points. Such data are thought to reflect underlying biological processes developing over time. In this article we propose a model that allows us to examine differential expression and gene network relationships using time course microarray data. We model each gene expression profile as a random functional transformation of the scale, amplitude and phase of a common curve. Inferences about the gene–specific amplitude parameters allow us to examine differential gene expression. Inferences about measures of functional similarity based on estimated time transformation functions allow us to examine gene networks while accounting for features of the gene expression profiles. We discuss applications to simulated data as well as to microarray data on prostate cancer progression. PMID:19053995
Estimation and Inference of Directionally Differentiable Functions: Theory and Applications
NASA Astrophysics Data System (ADS)
Fang, Zheng
This dissertation addresses a large class of irregular models in economics and statistics: settings in which the parameters of interest take the form φ(θ0), where φ is a known directionally differentiable function and θ0 is estimated by an estimator θ̂n. Chapter 1 provides a tractable framework for conducting inference, Chapter 2 focuses on optimality of estimation, and Chapter 3 applies the developed theory to construct a test of whether a Hilbert-space-valued parameter belongs to a convex set and to derive the uniform weak convergence of the Grenander distribution function (the least concave majorant of the empirical distribution function) under minimal assumptions.
Inferring consistent functional interaction patterns from natural stimulus FMRI data.
Sun, Jiehuan; Hu, Xintao; Huang, Xiu; Liu, Yang; Li, Kaiming; Li, Xiang; Han, Junwei; Guo, Lei; Liu, Tianming; Zhang, Jing
2012-07-16
There has been increasing interest in how the human brain responds to natural stimulus such as video watching in the neuroimaging field. Along this direction, this paper presents our effort in inferring consistent and reproducible functional interaction patterns under natural stimulus of video watching among known functional brain regions identified by task-based fMRI. Then, we applied and compared four statistical approaches, including Bayesian network modeling with searching algorithms: greedy equivalence search (GES), Peter and Clark (PC) analysis, independent multiple greedy equivalence search (IMaGES), and the commonly used Granger causality analysis (GCA), to infer consistent and reproducible functional interaction patterns among these brain regions. It is interesting that a number of reliable and consistent functional interaction patterns were identified by the GES, PC and IMaGES algorithms in different participating subjects when they watched multiple video shots of the same semantic category. These interaction patterns are meaningful given current neuroscience knowledge and are reasonably reproducible across different brains and video shots. In particular, these consistent functional interaction patterns are supported by structural connections derived from diffusion tensor imaging (DTI) data, suggesting the structural underpinnings of consistent functional interactions. Our work demonstrates that specific consistent patterns of functional interactions among relevant brain regions might reflect the brain's fundamental mechanisms of online processing and comprehension of video messages.
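Of the four approaches, pairwise Granger causality is the simplest to sketch: region y "Granger-causes" region x if adding y's past to an autoregressive model of x reduces the residual variance. The coupled toy time series below are assumptions, not fMRI data.

```python
import numpy as np

def granger_gain(x, y, lag=2):
    """Log variance-reduction when y's past is added to an AR model of x.
    A clearly positive gain suggests y 'Granger-causes' x."""
    n = len(x)
    target = x[lag:]
    own = np.column_stack([x[lag - k: n - k] for k in range(1, lag + 1)])
    other = np.column_stack([y[lag - k: n - k] for k in range(1, lag + 1)])

    def rss(A):
        coef, *_ = np.linalg.lstsq(A, target, rcond=None)
        resid = target - A @ coef
        return resid @ resid

    return np.log(rss(own) / rss(np.hstack([own, other])))

rng = np.random.default_rng(7)
T = 500
y = rng.standard_normal(T)              # driving region
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.8 * y[t - 1] + 0.3 * rng.standard_normal()   # x follows y with lag 1

gain_y_to_x = granger_gain(x, y)        # large: y's past predicts x
gain_x_to_y = granger_gain(y, x)        # near zero: x's past carries no news about y
```

The asymmetry of the two gains is what lets GCA assign a direction to a functional interaction, in contrast to the undirected Bayesian-network searches.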
Knaus, Claude; Zwicker, Matthias
2014-07-01
Image denoising continues to be an active research topic. Although state-of-the-art denoising methods are numerically impressive and approach theoretical limits, they suffer from visible artifacts. While they produce acceptable results for natural images, human eyes are less forgiving when viewing synthetic images. At the same time, current methods are becoming more complex, making analysis and implementation difficult. We propose image denoising as a simple physical process, which progressively reduces noise by deterministic annealing. The results of our implementation are numerically and visually excellent. We further demonstrate that our method is particularly suited for synthetic images. Finally, we offer a new perspective on image denoising using robust estimators.
Network inference from functional experimental data (Conference Presentation)
NASA Astrophysics Data System (ADS)
Desrosiers, Patrick; Labrecque, Simon; Tremblay, Maxime; Bélanger, Mathieu; De Dorlodot, Bertrand; Côté, Daniel C.
2016-03-01
Functional connectivity maps of neuronal networks are critical tools to understand how neurons form circuits, how information is encoded and processed by neurons, how memory is shaped, and how these basic processes are altered under pathological conditions. Current light microscopy allows the observation of calcium or electrical activity of thousands of neurons simultaneously, yet assessing comprehensive connectivity maps directly from such data remains a non-trivial analytical task. There exist simple statistical methods, such as cross-correlation and Granger causality, but they only detect linear interactions between neurons. Other more involved inference methods inspired by information theory, such as mutual information and transfer entropy, identify connections between neurons more accurately but also require more computational resources. We carried out a comparative study of common connectivity inference methods. The relative accuracy and computational cost of each method was determined via simulated fluorescence traces generated with realistic computational models of interacting neurons in networks of different topologies (clustered or non-clustered) and sizes (10-1000 neurons). To bridge the computational and experimental works, we observed the intracellular calcium activity of live hippocampal neuronal cultures infected with the fluorescent calcium marker GCaMP6f. The spontaneous activity of the networks, consisting of 50-100 neurons per field of view, was recorded at 20 to 50 Hz on a microscope controlled by homemade software. We implemented all connectivity inference methods in the software, which rapidly loads calcium fluorescence movies, segments the images, extracts the fluorescence traces, and assesses the functional connections (with strengths and directions) between each pair of neurons. We used this software to assess, in real time, the functional connectivity from real calcium imaging data in basal conditions, under plasticity protocols, and epileptic
Chen, Jingyuan E; Jahanian, Hesamoddin; Glover, Gary H
2017-02-01
Recently, emerging studies have demonstrated the existence of brain resting-state spontaneous activity at frequencies higher than the conventional 0.1 Hz. A few groups utilizing accelerated acquisitions have reported persisting signals beyond 1 Hz, which seems too high to be accommodated by the sluggish hemodynamic process underpinning blood oxygen level-dependent contrasts (the upper limit of the canonical model is ∼0.3 Hz). It is thus questionable whether the observed high-frequency (HF) functional connectivity originates from alternative mechanisms (e.g., inflow effects, proton density changes in or near activated neural tissue) or rather is artificially introduced by improper preprocessing operations. In this study, we examined the influence of a common preprocessing step, whole-band linear nuisance regression (WB-LNR), on resting-state functional connectivity (RSFC) and demonstrated through both simulation and analysis of a real dataset that WB-LNR can introduce spurious network structures into the HF bands of functional magnetic resonance imaging (fMRI) signals. The findings of the present study call into question whether published observations on HF-RSFC are partly attributable to improper data preprocessing instead of actual neural activity.
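The regression artifact described above can be reproduced with a toy simulation. All amplitudes, and the use of a first difference as a crude high-pass filter, are illustrative assumptions rather than the paper's setup: two signals that share only a slow confound are regressed against a noisy whole-band nuisance regressor, and their high-frequency residuals acquire a spurious correlation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000
t = np.arange(n)
c = 2.8 * np.sin(2 * np.pi * t / 200)   # slow confound shared by x and y
e = rng.standard_normal(n)              # HF noise inside the nuisance regressor
x = c + rng.standard_normal(n)          # x and y share ONLY the slow confound;
y = c + rng.standard_normal(n)          # their HF content is independent
reg = c + e                             # measured whole-band nuisance regressor

def regress_out(sig, r):
    """Whole-band linear nuisance regression of r from sig."""
    return sig - (sig @ r) / (r @ r) * r

def hf(sig):
    """Crude high-pass filter (first difference)."""
    return np.diff(sig)

corr_before = np.corrcoef(hf(x), hf(y))[0, 1]   # near zero: independent HF content
rx, ry = regress_out(x, reg), regress_out(y, reg)
corr_after = np.corrcoef(hf(rx), hf(ry))[0, 1]  # spurious HF correlation appears
```

Because the fitted regression coefficient is driven by the shared slow confound, subtracting the regressor injects the same copy of its high-frequency noise `e` into both residuals, which then correlate in the HF band.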
PRINCIPAL COMPONENTS FOR NON-LOCAL MEANS IMAGE DENOISING.
Tasdizen, Tolga
2008-01-01
This paper presents an image denoising algorithm that uses principal component analysis (PCA) in conjunction with non-local means image denoising. Image neighborhood vectors used in the non-local means algorithm are first projected onto a lower-dimensional subspace using PCA. Consequently, neighborhood similarity weights for denoising are computed using distances in this subspace rather than the full space. This modification to the non-local means algorithm results in improved accuracy and computational performance. We present an analysis of the proposed method's accuracy as a function of the dimensionality of the projection subspace and demonstrate that denoising accuracy peaks at a relatively low number of dimensions.
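A minimal sketch of this idea, under assumed parameter values (patch size, filtering strength `h`, subspace dimension `d` are all illustrative): project every image neighborhood onto its top principal components and compute the non-local means weights from distances in that subspace.

```python
import numpy as np

def nlm_pca_denoise(img, patch=5, h=0.1, d=6):
    """Non-local means with neighborhood vectors projected onto the top-d
    principal components before computing similarity weights. Parameter
    values are assumptions. O(N^2) as written, so only suitable for
    small images."""
    r = patch // 2
    H, W = img.shape
    pad = np.pad(img, r, mode="reflect")
    # All patch (neighborhood) vectors, one per pixel.
    vecs = np.stack([pad[i:i + patch, j:j + patch].ravel()
                     for i in range(H) for j in range(W)])
    centered = vecs - vecs.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    low = centered @ Vt[:d].T            # N x d instead of N x patch^2
    centers = img.ravel()
    out = np.empty_like(centers)
    for n in range(len(low)):
        # Similarity weights from distances in the PCA subspace.
        d2 = np.sum((low - low[n]) ** 2, axis=1)
        w = np.exp(-d2 / (h * h))
        out[n] = np.sum(w * centers) / np.sum(w)
    return out.reshape(H, W)
```

The d-dimensional distance computation replaces a patch²-dimensional one, which is where the speedup in the abstract comes from.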
Extensions to total variation denoising
NASA Astrophysics Data System (ADS)
Blomgren, Peter; Chan, Tony F.; Mulet, Pep
1997-10-01
The total variation denoising method, proposed by Rudin, Osher and Fatemi (1992), is a PDE-based algorithm for edge-preserving noise removal. The images resulting from its application are usually piecewise constant, possibly with a staircase effect at smooth transitions, and may contain significantly less fine detail than the original non-degraded image. In this paper we present some extensions to this technique that aim to alleviate these drawbacks by redefining the total variation functional or the noise constraints.
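For context, the underlying Rudin-Osher-Fatemi model minimizes ∫|∇u| + (λ/2)∫(u−f)², and a crude explicit gradient-descent sketch of it looks as follows. The step size, smoothing constant, and iteration count are illustrative choices, not values from the paper.

```python
import numpy as np

def tv_denoise(f, lam=0.2, step=0.02, iters=100, eps=1e-2):
    """Explicit gradient descent on the ROF energy
    sum |grad u| + (lam/2) * ||u - f||^2, with |grad u| smoothed by eps
    so the explicit scheme stays stable. All parameters illustrative."""
    u = f.copy()
    for _ in range(iters):
        # Forward differences with replicate boundary.
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux**2 + uy**2 + eps)
        px, py = ux / mag, uy / mag
        # u_t = div(grad u / |grad u|) - lam * (u - f), divergence via
        # backward differences.
        div = np.diff(px, axis=1, prepend=0.0) + np.diff(py, axis=0, prepend=0.0)
        u = u + step * (div - lam * (u - f))
    return u
```

The staircasing the abstract mentions is visible in the output of exactly this scheme: flat regions grow at the expense of smooth ramps.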
Adaptive image denoising by targeted databases.
Luo, Enming; Chan, Stanley H; Nguyen, Truong Q
2015-07-01
We propose a data-dependent denoising procedure to restore noisy images. Different from existing denoising algorithms which search for patches from either the noisy image or a generic database, the new algorithm finds patches from a database that contains relevant patches. We formulate the denoising problem as an optimal filter design problem and make two contributions. First, we determine the basis function of the denoising filter by solving a group sparsity minimization problem. The optimization formulation generalizes existing denoising algorithms and offers systematic analysis of the performance. Improvement methods are proposed to enhance the patch search process. Second, we determine the spectral coefficients of the denoising filter by considering a localized Bayesian prior. The localized prior leverages the similarity of the targeted database, alleviates the intensive Bayesian computation, and links the new method to the classical linear minimum mean squared error estimation. We demonstrate applications of the proposed method in a variety of scenarios, including text images, multiview images, and face images. Experimental results show the superiority of the new algorithm over existing methods.
Inference of gene regulation functions from dynamic transcriptome data
Hillenbrand, Patrick; Maier, Kerstin C; Cramer, Patrick; Gerland, Ulrich
2016-01-01
To quantify gene regulation, a function is required that relates transcription factor binding to DNA (input) to the rate of mRNA synthesis from a target gene (output). Such a ‘gene regulation function’ (GRF) generally cannot be measured because the experimental titration of inputs and simultaneous readout of outputs is difficult. Here we show that GRFs may instead be inferred from natural changes in cellular gene expression, as exemplified for the cell cycle in the yeast S. cerevisiae. We develop this inference approach based on a time series of mRNA synthesis rates from a synchronized population of cells observed over three cell cycles. We first estimate the functional form of how input transcription factors determine mRNA output and then derive GRFs for target genes in the CLB2 gene cluster that are expressed during G2/M phase. Systematic analysis of additional GRFs suggests a network architecture that rationalizes transcriptional cell cycle oscillations. We find that a transcription factor network alone can produce oscillations in mRNA expression, but that additional input from cyclin oscillations is required to arrive at the native behaviour of the cell cycle oscillator. DOI: http://dx.doi.org/10.7554/eLife.12188.001 PMID:27652904
Explanation and inference: mechanistic and functional explanations guide property generalization.
Lombrozo, Tania; Gwynne, Nicholas Z
2014-01-01
The ability to generalize from the known to the unknown is central to learning and inference. Two experiments explore the relationship between how a property is explained and how that property is generalized to novel species and artifacts. The experiments contrast the consequences of explaining a property mechanistically, by appeal to parts and processes, with the consequences of explaining the property functionally, by appeal to functions and goals. The findings suggest that properties that are explained functionally are more likely to be generalized on the basis of shared functions, with a weaker relationship between mechanistic explanations and generalization on the basis of shared parts and processes. The influence of explanation type on generalization holds even though all participants are provided with the same mechanistic and functional information, and whether an explanation type is freely generated (Experiment 1), experimentally provided (Experiment 2), or experimentally induced (Experiment 2). The experiments also demonstrate that explanations and generalizations of a particular type (mechanistic or functional) can be experimentally induced by providing sample explanations of that type, with a comparable effect when the sample explanations come from the same domain or from a different domain. These results suggest that explanations serve as a guide to generalization, and contribute to a growing body of work supporting the value of distinguishing mechanistic and functional explanations.
Computational approaches for inferring the functions of intrinsically disordered proteins
Varadi, Mihaly; Vranken, Wim; Guharoy, Mainak; Tompa, Peter
2015-01-01
Intrinsically disordered proteins (IDPs) are ubiquitously involved in cellular processes and often implicated in human pathological conditions. The critical biological roles of these proteins, despite not adopting a well-defined fold, encouraged structural biologists to revisit their views on the protein structure-function paradigm. Unfortunately, investigating the characteristics and describing the structural behavior of IDPs is far from trivial, and inferring the function(s) of a disordered protein region remains a major challenge. Computational methods have proven particularly relevant for studying IDPs: on the sequence level their dependence on distinct characteristics determined by the local amino acid context makes sequence-based prediction algorithms viable and reliable tools for large scale analyses, while on the structure level the in silico integration of fundamentally different experimental data types is essential to describe the behavior of a flexible protein chain. Here, we offer an overview of the latest developments and computational techniques that aim to uncover how protein function is connected to intrinsic disorder. PMID:26301226
Anisotropic Nonlocal Means Denoising
2011-11-26
match the nuanced edges and textures of real-world images remains open, since we have considered only binary images here. Finally, while NLM ... computer vision. Denoising algorithms have evolved from the classical linear and median filters to more modern schemes like total variation denoising ... underlying image gradients outperforms NLM by a significant margin.
Medical-Legal Inferences From Functional Neuroimaging Evidence.
Mayberg
1996-07-01
Positron emission tomography (PET) and single-photon emission tomography (SPECT) are validated functional imaging techniques for the in vivo measurement of many neurophysiological and neurochemical parameters. Research studies of patients with a broad range of neurological and psychiatric illnesses have been published. Reproducible and specific patterns of altered cerebral blood flow and glucose metabolism, however, have been demonstrated and confirmed for only a limited number of specific illnesses. The association of functional scan patterns with specific deficits is less conclusive. Correlations of regional abnormalities with clinical symptoms such as motor weakness, aphasia, and visual spatial dysfunction are the most reproducible but are more poorly localized than lesion-deficit studies would suggest. Findings are even less consistent for nonlocalizing behavioral symptoms such as memory difficulties, poor concentration, irritability, or chronic pain, and no reliable patterns have been demonstrated. In a forensic context, homicidal and sadistic tendencies, aberrant sexual drive, violent impulsivity, psychopathic and sociopathic personality traits, as well as impaired judgement and poor insight, have no known PET or SPECT patterns, and their presence in an individual with any PET or SPECT scan finding cannot be inferred or concluded. Furthermore, the reliable prediction of any specific neurological, psychiatric, or behavioral deficits from specific scan findings has not been demonstrated. Unambiguous results from experiments designed to specifically examine the causative relationships between regional brain dysfunction and these types of complex behaviors are needed before any introduction of functional scans into the courts can be considered scientifically justified or legally admissible.
NASA Astrophysics Data System (ADS)
Kakadiaris, Ioannis A.; Konstantinidis, Ioannis; Papadakis, Manos; Ding, Wei; Shen, Lixin
2005-08-01
Three-dimensional (3D) surfaces can be sampled parametrically in the form of range image data. Smoothing/denoising of such raw data is usually accomplished by adapting techniques developed for intensity image processing, since both range and intensity images comprise parametrically sampled geometry and appearance measurements, respectively. We present a transform-based algorithm for surface denoising, motivated by our previous work on intensity image denoising, which utilizes a non-separable Parseval frame and an ensemble thresholding scheme. The frame is constructed from separable (tensor) products of a piecewise linear spline tight frame and incorporates the weighted average operator and the Sobel operators in directions that are integer multiples of 45°. We compare the performance of this algorithm with other transform-based methods from the recent literature. Our results indicate that such transform methods are suited to the task of smoothing range images.
Constructing a Flexible Likelihood Function for Spectroscopic Inference
NASA Astrophysics Data System (ADS)
Czekala, Ian; Andrews, Sean M.; Mandel, Kaisey S.; Hogg, David W.; Green, Gregory M.
2015-10-01
We present a modular, extensible likelihood framework for spectroscopic inference based on synthetic model spectra. The subtraction of an imperfect model from a continuously sampled spectrum introduces covariance between adjacent datapoints (pixels) into the residual spectrum. For the high signal-to-noise data with large spectral range that is commonly employed in stellar astrophysics, that covariant structure can lead to dramatically underestimated parameter uncertainties (and, in some cases, biases). We construct a likelihood function that accounts for the structure of the covariance matrix, utilizing the machinery of Gaussian process kernels. This framework specifically addresses the common problem of mismatches in model spectral line strengths (with respect to data) due to intrinsic model imperfections (e.g., in the atomic/molecular databases or opacity prescriptions) by developing a novel local covariance kernel formalism that identifies and self-consistently downweights pathological spectral line “outliers.” By fitting many spectra in a hierarchical manner, these local kernels provide a mechanism to learn about and build data-driven corrections to synthetic spectral libraries. An open-source software implementation of this approach is available at http://iancze.github.io/Starfish, including a sophisticated probabilistic scheme for spectral interpolation when using model libraries that are sparsely sampled in the stellar parameters. We demonstrate some salient features of the framework by fitting the high-resolution V-band spectrum of WASP-14, an F5 dwarf with a transiting exoplanet, and the moderate-resolution K-band spectrum of Gliese 51, an M5 field dwarf.
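The core computation described above, evaluating a residual spectrum under a dense Gaussian-process covariance matrix via a Cholesky factorization, can be sketched as follows. The exponential kernel and all hyperparameter values here are illustrative stand-ins for the paper's global and local kernels, not the Starfish implementation.

```python
import numpy as np

def gp_loglike(resid, wl, amp=0.01, ell=2.0, sigma=0.005):
    """Gaussian log-likelihood of a residual spectrum resid(wl) under
    C = sigma^2 I + amp^2 exp(-|dwl| / ell). Kernel form and
    hyperparameters are illustrative assumptions."""
    dwl = np.abs(wl[:, None] - wl[None, :])
    C = sigma**2 * np.eye(len(wl)) + amp**2 * np.exp(-dwl / ell)
    L = np.linalg.cholesky(C)                       # C = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, resid))
    return (-0.5 * resid @ alpha
            - np.sum(np.log(np.diag(L)))            # = -0.5 * log det C
            - 0.5 * len(wl) * np.log(2.0 * np.pi))
```

The off-diagonal kernel terms are what encode the pixel-to-pixel covariance of the residuals; setting `amp=0` recovers the naive independent-pixel chi-squared likelihood that the abstract says underestimates uncertainties.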
On the Inference of Functional Circadian Networks Using Granger Causality.
Pourzanjani, Arya; Herzog, Erik D; Petzold, Linda R
2015-01-01
Being able to infer one-way direct connections in an oscillatory network such as the suprachiasmatic nucleus (SCN) of the mammalian brain using time series data is difficult but crucial to understanding network dynamics. Although techniques have been developed for inferring networks from time series data, there have been no attempts to adapt these techniques to infer directional connections in oscillatory time series while accurately distinguishing between direct and indirect connections. In this paper an adaptation of Granger Causality, called Adaptive Frequency Granger Causality (AFGC), is proposed that allows for the inference of circadian networks and oscillatory networks in general. Additionally, an extension of this method, called LASSO AFGC, is proposed to infer networks with large numbers of cells. The method was validated using simulated data from several different networks. For the smaller networks the method was able to identify all one-way direct connections without identifying connections that were not present. For larger networks of up to twenty cells the method shows excellent performance in identifying true and false connections; this is quantified by an area under the curve (AUC) of 96.88%. We note that this method, like other Granger Causality-based methods, is based on the detection of high-frequency signals propagating between cell traces. Thus it requires a relatively high sampling rate and a network that can propagate high-frequency signals.
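For reference, the plain (non-adaptive) Granger causality score that AFGC builds on can be sketched as a comparison of residual sums of squares between a restricted and a full autoregressive model. This is a textbook sketch; the function name and lag choice are assumptions, not the paper's method.

```python
import numpy as np

def granger_score(x, y, lag=2):
    """Textbook pairwise Granger causality: log ratio of the residual
    sums of squares of the restricted model (y's own lags) vs the full
    model (plus x's lags). Large positive => x helps predict y."""
    n = len(y)
    Y = y[lag:]
    own = [y[lag - k:n - k] for k in range(1, lag + 1)]
    other = [x[lag - k:n - k] for k in range(1, lag + 1)]
    def rss(cols):
        X = np.column_stack([np.ones(len(Y))] + cols)
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        r = Y - X @ beta
        return r @ r
    return np.log(rss(own) / rss(own + other))
```

On simulated data where x drives y, the score in the x-to-y direction is large while the reverse direction stays near zero, which is the one-way detection property the abstract emphasizes.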
Multiscale image blind denoising.
Lebrun, Marc; Colom, Miguel; Morel, Jean-Michel
2015-10-01
Arguably several thousand papers are dedicated to image denoising. Most papers assume a fixed noise model, mainly white Gaussian or Poissonian. This assumption is only valid for raw images. Yet, in most images handled by the public and even by scientists, the noise model is imperfectly known or unknown. End users only have access to the result of a complex image processing chain performed by uncontrolled hardware and software (and sometimes by chemical means). For such images, recent progress in noise estimation makes it possible to estimate from a single image a noise model that is simultaneously signal and frequency dependent. We propose here a multiscale denoising algorithm adapted to this broad noise model. This leads to a blind denoising algorithm, which we demonstrate on real JPEG images and on scans of old photographs for which the formation model is unknown. The consistency of this algorithm is also verified on simulated distorted images. This algorithm is finally compared with the unique previous state-of-the-art blind denoising method.
Nonlinear Image Denoising Methodologies
2002-05-01
In this thesis, our approach to denoising is first based on a controlled nonlinear stochastic random walk to achieve a scale-space analysis (as in ...) ... stochastic treatment or interpretation of the diffusion. In addition, unless a specific stopping time is known to be adequate, the resulting evolution ...
NASA Astrophysics Data System (ADS)
Ren, Zhong; Liu, Guodong; Xiong, Zhihua
2016-10-01
Denoising the photoacoustic signals of glucose is one of the most important steps in the quality identification of fruit, because the real-time photoacoustic signals of glucose are easily corrupted by many kinds of noise. To remove the noise and some useless information, an improved wavelet threshold function is proposed. Compared with the traditional wavelet hard and soft threshold functions, the improved wavelet threshold function can overcome the pseudo-oscillation effect of the denoised photoacoustic signals, owing to the continuity of the improved wavelet threshold function, and the error between the denoised signals and the original signals can be decreased. To validate the feasibility of the improved wavelet threshold function denoising, denoising simulation experiments based on MATLAB programming were performed. In the simulation experiments, a standard test signal was used, and three different denoising methods were compared with the improved wavelet threshold function. The signal-to-noise ratio (SNR) and root-mean-square error (RMSE) values were used to evaluate the performance of the improved wavelet threshold function denoising. The experimental results demonstrate that the SNR value of the improved wavelet threshold function is the largest and the RMSE value is the smallest, which fully verifies that the improved wavelet threshold function denoising is feasible. Finally, the improved wavelet threshold function denoising was used to remove the noise from the photoacoustic signals of the glucose solutions. The denoising effect is also very good. Therefore, the improved wavelet threshold function denoising proposed in this paper has potential value in the field of denoising for photoacoustic signals.
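Since the paper's exact threshold function is not reproduced here, the following sketch shows the general construction: a one-level Haar transform plus a continuous threshold function that is zero at the threshold (avoiding the discontinuity of hard thresholding) and approaches the identity for large coefficients (avoiding the constant bias of soft thresholding). The specific form `sign(w)(|w| - t*exp(-a(|w|-t)))` is one common choice, assumed for illustration only.

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar transform (length of x assumed even)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def haar_idwt(a, d):
    """Inverse one-level Haar transform."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft(w, t):
    """Classical soft threshold: biases every surviving coefficient by t."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def improved(w, t, a=2.0):
    """Continuous hard/soft compromise (an illustrative form, not the
    paper's exact function): zero at |w| = t, approaches w as |w| grows."""
    shrunk = np.sign(w) * (np.abs(w) - t * np.exp(-a * (np.abs(w) - t)))
    return np.where(np.abs(w) > t, shrunk, 0.0)
```

Denoising then amounts to transforming, thresholding the detail coefficients, and inverting; the continuity at `|w| = t` is what suppresses the pseudo-oscillation effect the abstract attributes to hard thresholding.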
Shin, Dong Hae; Hou, Jingtong; Chandonia, John-Marc; Das, Debanu; Choi, In-Geol; Kim, Rosalind; Kim, Sung-Hou
2007-09-02
Advances in sequence genomics have resulted in an accumulation of a huge number of protein sequences derived from genome sequences. However, the functions of a large portion of them cannot be inferred based on the current methods of sequence homology detection to proteins of known functions. Three-dimensional structure can have an important impact in providing inference of molecular function (physical and chemical function) of a protein of unknown function. Structural genomics centers worldwide have been determining many 3-D structures of the proteins of unknown functions, and possible molecular functions of them have been inferred based on their structures. Combined with bioinformatics and enzymatic assay tools, the successful acceleration of the process of protein structure determination through high throughput pipelines enables the rapid functional annotation of a large fraction of hypothetical proteins. We present a brief summary of the process we used at the Berkeley Structural Genomics Center to infer molecular functions of proteins of unknown function.
Minimum risk wavelet shrinkage operator for Poisson image denoising.
Cheng, Wu; Hirakawa, Keigo
2015-05-01
The pixel values of images taken by an image sensor are said to be corrupted by Poisson noise. To date, multiscale Poisson image denoising techniques have processed Haar frame and wavelet coefficients; the modeling of coefficients is enabled by the Skellam distribution analysis. We extend these results by solving for shrinkage operators for Skellam that minimize the risk functional in the multiscale Poisson image denoising setting. The minimum risk shrinkage operator of this kind effectively produces denoised wavelet coefficients with minimum attainable L2 error.
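A classical alternative to the direct Skellam modeling described above, shown here only for context, is the Anscombe variance-stabilizing transform: it maps Poisson data to approximately unit-variance Gaussian data, so that ordinary Gaussian shrinkage operators become applicable.

```python
import numpy as np

def anscombe(x):
    """Anscombe variance-stabilizing transform: for Poisson(lmbda) data
    with moderate lmbda, 2*sqrt(x + 3/8) has approximately unit variance,
    making Gaussian wavelet shrinkage rules usable on Poisson images."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)
```

The paper's approach avoids this detour by shrinking the Skellam-distributed Haar differences directly, which matters at low photon counts where the Anscombe approximation degrades.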
Study on De-noising Technology of Radar Life Signal
NASA Astrophysics Data System (ADS)
Yang, Xiu-Fang; Wang, Lian-Huan; Ma, Jiang-Fei; Wang, Pei-Pei
2016-05-01
Radar detection is a kind of novel life detection technology, which can be applied to medical monitoring, anti-terrorism, disaster relief, street fighting, etc. As the radar life signal is very weak, it is often submerged in noise. Because of the non-stationarity and randomness of these clutter signals, it is necessary to denoise efficiently before extracting and separating the useful signal. This paper improves the continuous-wave theoretical model of the radar life signal, performs denoising by introducing the lifting wavelet transform, and determines the best threshold function by comparing the denoising effects of different threshold functions. The results indicate that both the SNR and MSE of the signal are better than those of traditional methods when the lifting wavelet transform and a new improved soft-threshold denoising method are used.
Quantum Boolean image denoising
NASA Astrophysics Data System (ADS)
Mastriani, Mario
2015-05-01
A quantum Boolean image processing methodology is presented in this work, with special emphasis on image denoising. A new approach for internal image representation is outlined together with two new interfaces: classical-to-quantum and quantum-to-classical. The new quantum Boolean image denoising approach, called the quantum Boolean mean filter, works with computational basis states (CBS) exclusively. To achieve this, we first decompose the image into its three color components, i.e., red, green and blue. Then, we get the bitplanes for each color, e.g., 8 bits per pixel, i.e., 8 bitplanes per color. From then on, we work exclusively with the bitplane corresponding to the most significant bit (MSB) of each color. After a classical-to-quantum interface (which includes a classical inverter), we have a quantum Boolean version of the image within the quantum machine. This methodology allows us to avoid the problem of quantum measurement, which alters the measured state except in the case of CBS. The same observation extends to quantum algorithms outside image processing. After filtering of the inverted version of the MSB (inside the quantum machine), the result passes through a quantum-classical interface (which involves another classical inverter); each color component is then reassembled to yield the final filtered image. Finally, we discuss the most appropriate metrics for image denoising in a set of experimental results.
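The classical core of this pipeline, filtering only the most-significant bitplane of each color channel with a Boolean mean (majority) filter, can be simulated without a quantum machine. This sketch uses assumed function names and a 3x3 majority vote; the quantum interfaces and inverters of the paper are omitted.

```python
import numpy as np

def boolean_mean_filter(bitplane):
    """Majority vote over each 3x3 neighborhood of a binary image -- a
    classical stand-in for the quantum Boolean mean filter."""
    H, W = bitplane.shape
    p = np.pad(bitplane, 1, mode="edge").astype(int)
    acc = sum(p[i:i + H, j:j + W] for i in range(3) for j in range(3))
    return (acc >= 5).astype(np.uint8)   # majority of 9 votes

def denoise_msb(rgb):
    """Filter only the most-significant bitplane of each color channel,
    leaving the remaining 7 bitplanes untouched (classical simulation)."""
    out = rgb.copy()
    for c in range(3):
        chan = rgb[..., c]
        msb = (chan >> 7) & 1
        filtered = boolean_mean_filter(msb)
        out[..., c] = (chan & 0x7F) | (filtered.astype(rgb.dtype) << 7)
    return out
```

On a uniform bright image with one MSB flipped by noise, the majority vote restores the corrupted bit while leaving every other pixel unchanged.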
Color Image Denoising via Discriminatively Learned Iterative Shrinkage.
Sun, Jian; Sun, Jian; Xu, Zingben
2015-11-01
In this paper, we propose a novel model, a discriminatively learned iterative shrinkage (DLIS) model, for color image denoising. The DLIS is a generalization of wavelet shrinkage by iteratively performing shrinkage over patch groups and whole image aggregation. We discriminatively learn the shrinkage functions and basis from the training pairs of noisy/noise-free images, which can adaptively handle different noise characteristics in luminance/chrominance channels, and the unknown structured noise in real-captured color images. Furthermore, to remove the splotchy real color noises, we design a Laplacian pyramid-based denoising framework to progressively recover the clean image from the coarsest scale to the finest scale by the DLIS model learned from the real color noises. Experiments show that our proposed approach can achieve the state-of-the-art denoising results on both synthetic denoising benchmark and real-captured color images.
Improved Rotating Kernel Transformation Based Contourlet Domain Image Denoising Framework.
Guo, Qing; Dong, Fangmin; Sun, Shuifa; Ren, Xuhong; Feng, Shiyu; Gao, Bruce Zhi
A contourlet domain image denoising framework based on a novel Improved Rotating Kernel Transformation is proposed, in which the difference between subbands in the contourlet domain is taken into account. In detail: (1) a novel Improved Rotating Kernel Transformation (IRKT) is proposed to calculate the direction statistic of the image; the validity of the IRKT is verified by comparing the extracted edge information with a state-of-the-art edge detection algorithm. (2) The direction statistic represents the difference between subbands and is introduced into threshold-function-based contourlet domain denoising approaches in the form of weights to obtain the novel framework. The proposed framework is used to improve the contourlet soft-thresholding (CTSoft) and contourlet bivariate-thresholding (CTB) algorithms. The denoising results on conventional test images and Optical Coherence Tomography (OCT) medical images show that the proposed methods improve the existing contourlet-based thresholding denoising algorithms, especially for the medical images.
Bayesian Inference for Functional Dynamics Exploring in fMRI Data
Guo, Xuan; Liu, Bing; Chen, Le; Chen, Guantao
2016-01-01
This paper aims to review state-of-the-art Bayesian-inference-based methods applied to functional magnetic resonance imaging (fMRI) data. Particularly, we focus on one specific long-standing challenge in the computational modeling of fMRI datasets: how to effectively explore typical functional interactions from fMRI time series and the corresponding boundaries of temporal segments. Bayesian inference is a method of statistical inference which has been shown to be a powerful tool to encode dependence relationships among the variables with uncertainty. Here we provide an introduction to a group of Bayesian-inference-based methods for fMRI data analysis, which were designed to detect magnitude or functional connectivity change points and to infer their functional interaction patterns based on corresponding temporal boundaries. We also provide a comparison of three popular Bayesian models, that is, Bayesian Magnitude Change Point Model (BMCPM), Bayesian Connectivity Change Point Model (BCCPM), and Dynamic Bayesian Variable Partition Model (DBVPM), and give a summary of their applications. We envision that more sophisticated Bayesian inference models will emerge and play increasingly important roles in modeling brain functions in the years to come. PMID:27034708
Executive Functions in Adolescence: Inferences from Brain and Behavior
ERIC Educational Resources Information Center
Crone, Eveline A.
2009-01-01
Despite the advances in understanding cognitive improvements in executive function in adolescence, much less is known about the influence of affective and social modulators on executive function and the biological underpinnings of these functions and sensitivities. Here, recent behavioral and neuroscientific studies are summarized that have used…
Nonlocal Markovian models for image denoising
NASA Astrophysics Data System (ADS)
Salvadeo, Denis H. P.; Mascarenhas, Nelson D. A.; Levada, Alexandre L. M.
2016-01-01
Currently, the state-of-the art methods for image denoising are patch-based approaches. Redundant information present in nonlocal regions (patches) of the image is considered for better image modeling, resulting in an improved quality of filtering. In this respect, nonlocal Markov random field (MRF) models are proposed by redefining the energy functions of classical MRF models to adopt a nonlocal approach. With the new energy functions, the pairwise pixel interaction is weighted according to the similarities between the patches corresponding to each pair. Also, a maximum pseudolikelihood estimation of the spatial dependency parameter (β) for these models is presented here. For evaluating this proposal, these models are used as an a priori model in a maximum a posteriori estimation to denoise additive white Gaussian noise in images. Finally, results display a notable improvement in both quantitative and qualitative terms in comparison with the local MRFs.
Role of Utility and Inference in the Evolution of Functional Information
Sharov, Alexei A.
2009-01-01
Functional information means an encoded network of functions in living organisms, from molecular signaling pathways to an organism's behavior. It is represented by two components: a code and an interpretation system, which together form a self-sustaining semantic closure. Semantic closure allows some freedom between components because small variations of the code are still interpretable. The interpretation system consists of inference rules that control the correspondence between the code and the function (phenotype) and determines the shape of the fitness landscape. The utility factor operates at multiple time scales: short-term selection drives evolution towards higher survival and reproduction rates within a given fitness landscape, and long-term selection favors those fitness landscapes that support adaptability and lead to evolutionary expansion of certain lineages. Inference rules make short-term selection possible by shaping the fitness landscape and defining possible directions of evolution, but they are under the control of the long-term selection of lineages. Communication normally occurs within a set of agents with compatible interpretation systems, which I call a communication system. Functional information cannot be directly transferred between communication systems with incompatible inference rules. Each biological species is a genetic communication system that carries unique functional information together with inference rules that determine evolutionary directions and constraints. This view of the relation between utility and inference can resolve the conflict between realism/positivism and pragmatism. Realism overemphasizes the role of inference in the evolution of human knowledge because it assumes that logic is embedded in reality. Pragmatism substitutes usefulness for truth and therefore ignores the advantage of inference. The proposed concept of evolutionary pragmatism rejects the idea that logic is embedded in reality; instead, inference rules are
Craniofacial biomechanics and functional and dietary inferences in hominin paleontology.
Grine, Frederick E; Judex, Stefan; Daegling, David J; Ozcivici, Engin; Ungar, Peter S; Teaford, Mark F; Sponheimer, Matt; Scott, Jessica; Scott, Robert S; Walker, Alan
2010-04-01
Finite element analysis (FEA) is a potentially powerful tool by which the mechanical behaviors of different skeletal and dental designs can be investigated, and, as such, has become increasingly popular for biomechanical modeling and inferring the behavior of extinct organisms. However, the use of FEA to extrapolate from characterization of the mechanical environment to questions of trophic or ecological adaptation in a fossil taxon is both challenging and perilous. Here, we consider the problems and prospects of FEA applications in paleoanthropology, and provide a critical examination of one such study of the trophic adaptations of Australopithecus africanus. This particular FEA is evaluated with regard to 1) the nature of the A. africanus cranial composite, 2) model validation, 3) decisions made with respect to model parameters, 4) adequacy of data presentation, and 5) interpretation of the results. Each suggests that the results reflect methodological decisions as much as any underlying biological significance. Notwithstanding these issues, this model yields predictions that follow from the posited emphasis on premolar use by A. africanus. These predictions are tested with data from the paleontological record, including a phylogenetically-informed consideration of relative premolar size, and postcanine microwear fabrics and antemortem enamel chipping. In each instance, the data fail to conform to predictions from the model. This model thus serves to emphasize the need for caution in the application of FEA in paleoanthropological enquiry. Theoretical models can be instrumental in the construction of testable hypotheses; but ultimately, the studies that serve to test these hypotheses - rather than data from the models - should remain the source of information pertaining to hominin paleobiology and evolution.
Bayesian inference of nonpositive spectral functions in quantum field theory
NASA Astrophysics Data System (ADS)
Rothkopf, Alexander
2017-03-01
We present the generalization to nonpositive definite spectral functions of a recently proposed Bayesian deconvolution approach (BR method). The novel prior used here retains many of the beneficial analytic properties of the original method; in particular, it allows us to integrate out the hyperparameter α directly. To preserve the underlying axiom of scale invariance, we introduce a second default-model related function, whose role is discussed. Our reconstruction prescription is contrasted with existing direct methods, as well as with an approach where shift functions are introduced to compensate for negative spectral features. A mock spectrum analysis inspired by the study of gluon spectral functions in QCD illustrates the capabilities of this new approach.
Generalised partition functions: inferences on phase space distributions
NASA Astrophysics Data System (ADS)
Treumann, Rudolf A.; Baumjohann, Wolfgang
2016-06-01
It is demonstrated that the statistical mechanical partition function can be used to construct various different forms of phase space distributions. This indicates that its structure is not restricted to the Gibbs-Boltzmann factor prescription, which is based on counting statistics. With the widely used replacement of the Boltzmann factor by a generalised Lorentzian (also known as the q-deformed exponential function, where κ = 1/|q - 1|, with κ, q ∈ R), both the kappa-Bose and kappa-Fermi partition functions are obtained in quite a straightforward way, from which the conventional Bose and Fermi distributions follow for κ → ∞. For κ ≠ ∞ these are subject to the restriction that they can be used only at temperatures far from zero. They thus, as shown earlier, have little value for quantum physics. This is reasonable, because physical κ systems imply strong correlations which are absent at zero temperature, where apart from stochastics all dynamical interactions are frozen. In the classical large-temperature limit one obtains physically reasonable κ distributions which depend on energy and momentum as well as on chemical potential. Looking for other functional dependencies, we examine whether Bessel functions can be used to obtain valid distributions. Again, and for the same reason, no Fermi and Bose distributions exist in the low-temperature limit. However, a classical Bessel-Boltzmann distribution can be constructed, which is a Bessel-modified Lorentzian distribution. Whether it makes any physical sense remains an open question. This is not investigated here. The choice of Bessel functions is motivated solely by their convergence properties and not by reference to any physical demands. This result suggests that the Gibbs-Boltzmann partition function is fundamental not only to Gibbs-Boltzmann statistics but also to a large class of generalised Lorentzian distributions, as well as to the corresponding nonextensive statistical mechanics.
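The generalised Lorentzian replacement mentioned above has a standard closed form. A brief sketch, using the common q-exponential convention (not necessarily the authors' exact normalization or exponent):

```latex
% Replacing the Boltzmann factor e^{-x} (e.g. x = \beta\,\epsilon) by the
% q-deformed exponential; with \kappa = 1/(q-1) the two conventions agree:
e_q(-x) \;=\; \bigl[\,1 + (q-1)\,x\,\bigr]^{-\frac{1}{q-1}}
        \;=\; \Bigl(1 + \frac{x}{\kappa}\Bigr)^{-\kappa},
\qquad \kappa = \frac{1}{|q-1|},
% which recovers the Gibbs-Boltzmann factor in the limit
\lim_{\kappa \to \infty}\Bigl(1 + \frac{x}{\kappa}\Bigr)^{-\kappa} = e^{-x}.
```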
CT reconstruction via denoising approximate message passing
NASA Astrophysics Data System (ADS)
Perelli, Alessandro; Lexa, Michael A.; Can, Ali; Davies, Mike E.
2016-05-01
In this paper, we adapt and apply a compressed sensing based reconstruction algorithm to the problem of computed tomography reconstruction for luggage inspection. Specifically, we propose a variant of the denoising generalized approximate message passing (D-GAMP) algorithm and compare its performance to the performance of traditional filtered back projection and to a penalized weighted least squares (PWLS) based reconstruction method. D-GAMP is an iterative algorithm that at each iteration estimates the conditional probability of the image given the measurements and employs a non-linear "denoising" function which implicitly imposes an image prior. Results on real baggage show that D-GAMP is well-suited to limited-view acquisitions.
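The denoiser-in-the-loop idea behind D-GAMP can be illustrated with a much simpler relative: a plug-and-play, ISTA-style iteration. This is a hedged sketch, not GAMP itself (GAMP additionally tracks message variances and an Onsager correction term); the toy linear system and soft-threshold "denoiser" are assumptions chosen purely for illustration.

```python
# Simplified denoiser-in-the-loop reconstruction (NOT full GAMP).
# Iterate: x <- D(x + tau * A^T (y - A x)), with D a soft-threshold denoiser
# acting as an implicit sparsity prior.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def matTvec(A, r):
    n = len(A[0])
    return [sum(A[i][j] * r[i] for i in range(len(A))) for j in range(n)]

def soft(v, t):
    return [max(abs(vi) - t, 0.0) * (1 if vi >= 0 else -1) for vi in v]

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # toy 3x2 measurement operator
x_true = [2.0, 0.0]                         # sparse ground truth
y = matvec(A, x_true)                       # noiseless measurements

x = [0.0, 0.0]
tau, thresh = 0.3, 0.05                     # step size and threshold
for _ in range(50):
    r = [yi - ri for yi, ri in zip(y, matvec(A, x))]
    g = matTvec(A, r)                       # gradient step on the data fit
    x = soft([xi + tau * gi for xi, gi in zip(x, g)], thresh)  # "denoise"
```

The iterate converges to a slightly shrunk version of the sparse truth, which is the usual bias of soft-threshold denoisers.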
Actively Learning Specific Function Properties with Applications to Statistical Inference
2007-12-01
which are distant from their nearest neighbors. However, when searching for level-sets, we are less interested in the function away from the level...
Talebi, Hossein; Milanfar, Peyman
2014-02-01
Most existing state-of-the-art image denoising algorithms are based on exploiting similarity between a relatively modest number of patches. These patch-based methods are strictly dependent on patch matching, and their performance is hamstrung by the ability to reliably find sufficiently similar patches. As the number of patches grows, a point of diminishing returns is reached where the performance improvement due to more patches is offset by the lower likelihood of finding sufficiently close matches. The net effect is that while patch-based methods, such as BM3D, are excellent overall, they are ultimately limited in how well they can do on (larger) images with increasing complexity. In this paper, we address these shortcomings by developing a paradigm for truly global filtering where each pixel is estimated from all pixels in the image. Our objectives in this paper are two-fold. First, we give a statistical analysis of our proposed global filter, based on a spectral decomposition of its corresponding operator, and we study the effect of truncation of this spectral decomposition. Second, we derive an approximation to the spectral (principal) components using the Nyström extension. Using these, we demonstrate that this global filter can be implemented efficiently by sampling a fairly small percentage of the pixels in the image. Experiments illustrate that our strategy can effectively globalize any existing denoising filters to estimate each pixel using all pixels in the image, hence improving upon the best patch-based methods.
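The core notion of a global filter, in which every pixel is estimated from all pixels, can be shown on a tiny 1-D signal. This is a hedged sketch under simplifying assumptions: plain intensity affinities stand in for the paper's filter weights, and the Nyström spectral approximation is not reproduced here (the row-normalized operator is simply applied directly).

```python
# Minimal "global" filter on a 1-D signal: every sample is estimated from
# ALL samples, weighted by intensity affinity and row-normalized.
import math

signal = [1.0, 1.1, 0.9, 5.0, 5.2, 4.9]   # two flat regions with noise
h = 0.5                                    # affinity bandwidth (assumed)

def global_filter(sig, h):
    out = []
    for si in sig:
        # affinity of this sample to every sample in the signal
        w = [math.exp(-((si - sj) ** 2) / (h * h)) for sj in sig]
        z = sum(w)                         # row normalization
        out.append(sum(wj * sj for wj, sj in zip(w, sig)) / z)
    return out

smoothed = global_filter(signal, h)
```

Samples in each flat region average over the whole region, while the large intensity gap keeps the two regions from blurring into each other.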
Network-based inference of protein activity helps functionalize the genetic landscape of cancer
Alvarez, Mariano J.; Shen, Yao; Giorgi, Federico M.; Lachmann, Alexander; Ding, B. Belinda; Ye, B. Hilda; Califano, Andrea
2016-01-01
Identifying the multiple dysregulated oncoproteins that contribute to tumorigenesis in a given patient is crucial for developing personalized treatment plans. However, accurate inference of aberrant protein activity in biological samples is still challenging as genetic alterations are only partially predictive and direct measurements of protein activity are generally not feasible. To address this problem we introduce and experimentally validate a new algorithm, VIPER (Virtual Inference of Protein-activity by Enriched Regulon analysis), for the accurate assessment of protein activity from gene expression data. We use VIPER to evaluate the functional relevance of genetic alterations in regulatory proteins across all TCGA samples. In addition to accurately inferring aberrant protein activity induced by established mutations, we also identify a significant fraction of tumors with aberrant activity of druggable oncoproteins—despite a lack of mutations, and vice-versa. In vitro assays confirmed that VIPER-inferred protein activity outperforms mutational analysis in predicting sensitivity to targeted inhibitors. PMID:27322546
Honda, Hidehito; Yamagishi, Kimihiko
2016-09-09
Verbal probabilities have directional communicative functions, and most can be categorized as positive (e.g., "it is likely") or negative (e.g., "it is doubtful"). We examined the communicative functions of verbal probabilities based on the reference point hypothesis. According to this hypothesis, listeners are sensitive to and can infer a speaker's reference points based on the speaker's selected directionality. In four experiments (two of which examined speakers' choice of directionality and two of which examined listeners' inferences about a speaker's reference point), we found that listeners could make inferences about speakers' reference points based on the stated directionality of verbal probability. Thus, the directionality of verbal probabilities serves the communicative function of conveying information about a speaker's reference point.
Structure and function of the mammalian middle ear. II: Inferring function from structure.
Mason, Matthew J
2016-02-01
Anatomists and zoologists who study middle ear morphology are often interested to know what the structure of an ear can reveal about the auditory acuity and hearing range of the animal in question. This paper represents an introduction to middle ear function targeted at biological scientists with little experience in the field of auditory acoustics. Simple models of impedance matching are first described, based on the familiar concepts of the area and lever ratios of the middle ear. However, using the Mongolian gerbil Meriones unguiculatus as a test case, it is shown that the predictions made by such 'ideal transformer' models are generally not consistent with measurements derived from recent experimental studies. Electrical analogue models represent a better way to understand some of the complex, frequency-dependent responses of the middle ear: these have been used to model the effects of middle ear subcavities, and the possible function of the auditory ossicles as a transmission line. The concepts behind such models are explained here, again aimed at those with little background knowledge. Functional inferences based on middle ear anatomy are more likely to be valid at low frequencies. Acoustic impedance at low frequencies is dominated by compliance; expanded middle ear cavities, found in small desert mammals including gerbils, jerboas and the sengi Macroscelides, are expected to improve low-frequency sound transmission, as long as the ossicular system is not too stiff.
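The 'ideal transformer' model mentioned above has a familiar textbook form, stated here in general terms rather than as the paper's specific model:

```latex
% Ideal-transformer middle ear (textbook form): the pressure gain combines
% the eardrum/stapes-footplate area ratio with the ossicular lever ratio,
\frac{p_{\mathrm{stapes}}}{p_{\mathrm{eardrum}}}
  \;=\; \frac{A_{\mathrm{tm}}}{A_{\mathrm{fp}}}\cdot\frac{l_{m}}{l_{i}},
% and the acoustic impedance seen at the eardrum is transformed by the
% square of that ratio's inverse:
Z_{\mathrm{eardrum}}
  \;=\; \Bigl(\frac{A_{\mathrm{fp}}}{A_{\mathrm{tm}}}\cdot
              \frac{l_{i}}{l_{m}}\Bigr)^{2} Z_{\mathrm{cochlea}}.
```

It is exactly this frequency-independent picture that the gerbil measurements cited in the abstract fail to support, motivating the electrical analogue models.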
Inferring Functional Relationships from Conservation of Gene Order.
Moreno-Hagelsieb, Gabriel
2017-01-01
Predicting functional associations using the Gene Neighbor Method depends on the simple idea that if genes are conserved next to each other in evolutionarily distant prokaryotes they might belong to a polycistronic transcription unit. The procedure presented in this chapter starts with the organization of the genes within genomes into pairs of adjacent genes. Then, the pairs of adjacent genes in a genome of interest are mapped to their corresponding orthologs in other, informative, genomes. The final step is to verify if the mapped orthologs are also pairs of adjacent genes in the informative genomes.
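The procedure above reduces to a small amount of bookkeeping over gene orders. A minimal sketch with entirely hypothetical gene names and a toy ortholog table:

```python
# Gene Neighbor Method sketch: pairs of adjacent genes in a genome of
# interest are mapped to their orthologs in an informative genome;
# conserved adjacency suggests a functional association.

def adjacent_pairs(genome):
    """Unordered pairs of neighboring genes in gene order."""
    return {frozenset(p) for p in zip(genome, genome[1:])}

genome_a = ["a1", "a2", "a3", "a4"]           # genome of interest
genome_b = ["b2", "b1", "b9", "b3", "b4"]     # informative genome
orthologs = {"a1": "b1", "a2": "b2", "a3": "b3", "a4": "b4"}

conserved = set()
for pair in adjacent_pairs(genome_a):
    g, h = tuple(pair)
    mapped = frozenset((orthologs[g], orthologs[h]))
    if mapped in adjacent_pairs(genome_b):    # adjacency conserved?
        conserved.add(pair)
```

Here (a1, a2) and (a3, a4) stay adjacent after mapping, so they are predicted to be functionally associated, while (a2, a3) is not.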
NASA Astrophysics Data System (ADS)
Wang, Zhengzi; Ren, Zhong; Liu, Guodong
2015-10-01
Noninvasive measurement of blood glucose concentration has become a research hotspot worldwide because it is convenient, rapid, and non-destructive. Blood glucose concentration monitoring based on the photoacoustic technique has attracted much attention because the detected signals are ultrasonic rather than optical. During acquisition, however, the photoacoustic signals of glucose are inevitably polluted by factors such as the pulsed laser, electronic noise, and ambient noise. These disturbances degrade the measurement accuracy of the glucose concentration, so denoising the glucose photoacoustic signals is a key task. In this paper, a wavelet shift-invariant threshold denoising method is improved, and a novel wavelet threshold function is proposed. The novel threshold function uses two threshold values and two different factors, and it is continuous with high-order derivatives, so it can be regarded as a compromise between wavelet soft-threshold and hard-threshold denoising. Simulation results illustrate that, compared with other wavelet threshold denoising methods, the improved wavelet shift-invariant threshold denoising achieves a higher signal-to-noise ratio (SNR) and a smaller root-mean-square error (RMSE), and produces a better denoising effect. It therefore has potential value for denoising glucose photoacoustic signals.
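A two-threshold shrinkage rule of this general kind is easy to sketch. The paper's exact threshold function is not reproduced here; this illustrative version uses a plain linear interpolation between the two thresholds (so it is continuous but not smooth, unlike the proposed function):

```python
# Illustrative two-threshold wavelet shrinkage rule: coefficients below t1
# are zeroed, coefficients above t2 are kept unchanged (hard-like), and
# coefficients in between are shrunk (soft-like), interpolating between
# the classic soft and hard thresholding rules.

def two_threshold(w, t1, t2):
    a = abs(w)
    if a <= t1:
        return 0.0                    # kill small (noise-dominated) coeffs
    if a >= t2:
        return w                      # keep large coefficients unchanged
    scale = (a - t1) / (t2 - t1)      # linear ramp on t1 < |w| < t2
    return w * scale

vals = [two_threshold(w, 1.0, 3.0) for w in (0.5, 2.0, -2.0, 4.0)]
```

The rule is continuous at both thresholds: the ramp hits 0 at |w| = t1 and 1 at |w| = t2.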
Denoising forced-choice detection data.
García-Pérez, Miguel A
2010-02-01
Observers in a two-alternative forced-choice (2AFC) detection task face the need to produce a response at random (a guess) on trials in which neither presentation appeared to display a stimulus. Observers could alternatively be instructed to use a 'guess' key on those trials, a key that would produce a random guess and would also record the resultant correct or wrong response as emanating from a computer-generated guess. A simulation study shows that 'denoising' 2AFC data with information regarding which responses are a result of guesses yields estimates of detection threshold and spread of the psychometric function that are far more precise than those obtained in the absence of this information, and parallel the precision of estimates obtained with yes-no tasks running for the same number of trials. Simulations also show that partial compliance with the instructions to use the 'guess' key reduces the quality of the estimates, which nevertheless continue to be more precise than those obtained from conventional 2AFC data if the observers are still moderately compliant. An empirical study testing the validity of simulation results showed that denoised 2AFC estimates of spread were clearly superior to conventional 2AFC estimates and similar to yes-no estimates, but variations in threshold across observers and across sessions hid the benefits of denoising for threshold estimation. The empirical study also proved the feasibility of using a 'guess' key in addition to the conventional response keys defined in 2AFC tasks.
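The "denoising" step amounts to scoring flagged guesses separately, so that the psychometric estimate rests only on informed responses. A toy illustration with fabricated trial data (the actual study fits full psychometric functions, not a single proportion):

```python
# Toy "denoised" 2AFC scoring: each trial records whether the response was
# correct and whether it came from the 'guess' key. Conventional scoring
# pools everything; denoised scoring drops the coin-flip guesses.

trials = [  # (correct, was_guess) pairs for one stimulus level (fabricated)
    (True, False), (True, False), (True, False), (False, False),
    (True, True), (False, True), (True, True), (False, True),
]

# Conventional estimate: all trials, guesses included
conventional = sum(c for c, _ in trials) / len(trials)

# Denoised estimate: informed trials only
informed = [c for c, guess in trials if not guess]
denoised = sum(informed) / len(informed)
```

Pooling the four 50%-correct guesses drags the conventional estimate toward chance, while the denoised estimate reflects only detection-driven responses.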
Saxena, Anupam; Lipson, Hod; Valero-Cuevas, Francisco J.
2012-01-01
In systems and computational biology, much effort is devoted to functional identification of systems and networks at the molecular or cellular scale. However, similarly important networks exist at anatomical scales such as the tendon network of human fingers: the complex array of collagen fibers that transmits and distributes muscle forces to finger joints. This network is critical to the versatility of the human hand, and its function has been debated since at least the 16th century. Here, we experimentally infer the structure (both topology and parameter values) of this network through sparse interrogation with force inputs. A population of models representing this structure co-evolves in simulation with a population of informative future force inputs via the predator-prey estimation-exploration algorithm. Model fitness depends on the ability to explain experimental data, while the fitness of future force inputs depends on causing maximal functional discrepancy among current models. We validate our approach by inferring two known synthetic Latex networks, and one anatomical tendon network harvested from a cadaver's middle finger. We find that functionally similar but structurally diverse models can exist within a narrow range of the training set and cross-validation errors. For the Latex networks, models with low training set error [<4%] and resembling the known network have the smallest cross-validation errors [∼5%]. The low training set [<4%] and cross-validation [<7.2%] errors for models of the cadaveric specimen demonstrate what, to our knowledge, is the first experimental inference of the functional structure of complex anatomical networks. This work expands current bioinformatics inference approaches by demonstrating that sparse, yet informative interrogation of biological specimens holds significant computational advantages in accurate and efficient inference over random testing, or assuming model topology and only inferring parameter values. These
Crustal structure beneath northeast India inferred from receiver function modeling
NASA Astrophysics Data System (ADS)
Borah, Kajaljyoti; Bora, Dipok K.; Goyal, Ayush; Kumar, Raju
2016-09-01
We estimated the crustal shear velocity structure beneath ten broadband seismic stations of northeast India by using the H-Vp/Vs stacking method and a non-linear direct search approach, the Neighbourhood Algorithm (NA), followed by joint inversion of Rayleigh wave group velocity and receiver functions calculated from teleseismic earthquake data. Results show significant variations of thickness, shear velocity (Vs), and Vp/Vs ratio in the crust of the study region. The inverted shear wave velocity models show crustal thickness variations of 32-36 km in the Shillong Plateau (North), 36-40 km in the Assam Valley, and ∼44 km in the Lesser Himalaya (South). The average Vp/Vs ratio in the Shillong Plateau is lower (1.73-1.77) than in the Assam Valley and Lesser Himalaya (∼1.80). Average crustal shear velocity beneath the study region varies from 3.4 to 3.5 km/s. The sediment structure beneath the Shillong Plateau and Assam Valley shows a 1-2 km thick sediment layer with low Vs (2.5-2.9 km/s) and high Vp/Vs ratio (1.8-2.1), while it is observed to be of greater thickness (4 km) with similar Vs and high Vp/Vs (∼2.5) at RUP (Lesser Himalaya). Both the Shillong Plateau and Assam Valley show thick upper and middle crust (10-20 km) and thin (4-9 km) lower crust. The average Vp/Vs ratios suggest that the crust is felsic-to-intermediate beneath the Shillong Plateau and intermediate-to-mafic beneath the Assam Valley. Results show that lower crustal rocks beneath the Shillong Plateau and Assam Valley lie between mafic granulite and mafic garnet granulite.
Birdsong Denoising Using Wavelets
Priyadarshani, Nirosha; Marsland, Stephen; Castro, Isabel; Punchihewa, Amal
2016-01-01
Automatic recording of birdsong is becoming the preferred way to monitor and quantify bird populations worldwide. Programmable recorders allow recordings to be obtained at all times of day and year for extended periods of time. Consequently, there is a critical need for robust automated birdsong recognition. One prominent obstacle to achieving this is low signal to noise ratio in unattended recordings. Field recordings are often very noisy: birdsong is only one component in a recording, which also includes noise from the environment (such as wind and rain), other animals (including insects), and human-related activities, as well as noise from the recorder itself. We describe a method of denoising using a combination of the wavelet packet decomposition and band-pass or low-pass filtering, and present experiments that demonstrate an order of magnitude improvement in noise reduction over natural noisy bird recordings. PMID:26812391
Blind Image Denoising via Dependent Dirichlet Process Tree.
Zhu, Fengyuan; Chen, Guangyong; Hao, Jianye; Heng, Pheng-Ann
2016-08-31
Most existing image denoising approaches assume the noise to be homogeneous white Gaussian with known intensity. However, in real noisy images, the noise model is usually unknown beforehand and can be much more complex. This paper addresses this problem and proposes a novel blind image denoising algorithm to recover the clean image from a noisy one with an unknown noise model. To model the empirical noise of an image, our method introduces a mixture of Gaussian distributions, which is flexible enough to approximate different continuous distributions. The problem of blind image denoising is reformulated as a learning problem. The procedure is to first build a two-layer structural model for noisy patches, with the clean ones treated as latent variables. To control the complexity of the noisy patch model, this work proposes a novel Bayesian nonparametric prior called the "Dependent Dirichlet Process Tree" to build the model. Then, this study derives a variational inference algorithm to estimate model parameters and recover clean patches. We apply our method to synthetic and real noisy images with different noise models. Compared with previous approaches, ours achieves better performance. The experimental results indicate the efficiency of the proposed algorithm in coping with practical image denoising tasks.
ERIC Educational Resources Information Center
Loukusa, Soile; Moilanen, Irma
2009-01-01
This review summarizes studies involving pragmatic language comprehension and inference abilities in individuals with Asperger syndrome or high-functioning autism. Systematic searches of three electronic databases, selected journals, and reference lists identified 20 studies meeting the inclusion criteria. These studies were evaluated in terms of:…
Specificity of Emotion Inferences as a Function of Emotional Contextual Support
ERIC Educational Resources Information Center
Gillioz, Christelle; Gygax, Pascal M.
2017-01-01
Research on emotion inferences has shown that readers include a representation of the main character's emotional state in their mental representations of the text. We examined the specificity of emotion representations as a function of the emotion content of short narratives, in terms of the quantity and quality of emotion components included in…
Adaptively Tuned Iterative Low Dose CT Image Denoising
Hashemi, SayedMasoud; Paul, Narinder S.; Beheshti, Soosan; Cobbold, Richard S. C.
2015-01-01
Improving image quality is a critical objective in low dose computed tomography (CT) imaging and is the primary focus of CT image denoising. State-of-the-art CT denoising algorithms are mainly based on iterative minimization of an objective function, in which the performance is controlled by regularization parameters. To achieve the best results, these should be chosen carefully. However, the parameter selection is typically performed in an ad hoc manner, which can cause the algorithms to converge slowly or become trapped in a local minimum. To overcome these issues a noise confidence region evaluation (NCRE) method is used, which evaluates the denoising residuals iteratively and compares their statistics with those produced by additive noise. It then updates the parameters at the end of each iteration to achieve a better match to the noise statistics. By combining NCRE with the fundamentals of block matching and 3D filtering (BM3D) approach, a new iterative CT image denoising method is proposed. It is shown that this new denoising method improves the BM3D performance in terms of both the mean square error and a structural similarity index. Moreover, simulations and patient results show that this method preserves the clinically important details of low dose CT images together with a substantial noise reduction. PMID:26089972
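The NCRE idea of matching the residual's statistics to those of the noise can be illustrated with a deliberately simplified scalar analogue. This is a discrepancy-principle sketch, not the BM3D-based method of the paper; the shrinkage "denoiser" and the bisection search are assumptions chosen for illustration only.

```python
# Discrepancy-principle sketch: tune the regularization strength lam so
# that the denoising residual's variance matches the known noise variance.

def residual_var(y, lam):
    # toy shrinkage denoiser x = y / (1 + lam); residual r = y - x
    r = [yi - yi / (1.0 + lam) for yi in y]
    m = sum(r) / len(r)
    return sum((ri - m) ** 2 for ri in r) / len(r)

def tune_lambda(y, sigma2, lo=0.0, hi=100.0, iters=60):
    """Bisect for lam such that residual variance ~= sigma2 (monotone in lam)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if residual_var(y, mid) < sigma2:
            lo = mid          # residual too small -> denoise harder
        else:
            hi = mid          # residual too large -> denoise less
    return 0.5 * (lo + hi)

y = [1.0, -2.0, 3.0, -1.5, 0.5, 2.5]   # toy noisy data
lam = tune_lambda(y, sigma2=0.25)       # assumed known noise variance
```

Unlike an ad hoc choice, the tuned lam removes exactly as much energy as the noise statistics warrant, which is the spirit of the iterative parameter updates described above.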
An estimating function approach to inference for inhomogeneous Neyman-Scott processes.
Waagepetersen, Rasmus Plenge
2007-03-01
This article is concerned with inference for a certain class of inhomogeneous Neyman-Scott point processes depending on spatial covariates. Regression parameter estimates obtained from a simple estimating function are shown to be asymptotically normal when the "mother" intensity for the Neyman-Scott process tends to infinity. Clustering parameter estimates are obtained using minimum contrast estimation based on the K-function. The approach is motivated and illustrated by applications to point pattern data from a tropical rain forest plot.
Ambroise, Jérôme; Robert, Annie; Macq, Benoit; Gala, Jean-Luc
2012-01-06
An important challenge in systems biology is the inference of biological networks from postgenomic data. Among these biological networks, a gene transcriptional regulatory network focuses on interactions existing between transcription factors (TFs) and their corresponding target genes. A large number of reverse engineering algorithms have been proposed to infer such networks from gene expression profiles, but most current methods have relatively low predictive performance. In this paper, we introduce the novel TNIFSED method (Transcriptional Network Inference from Functional Similarity and Expression Data), which infers a transcriptional network from the integration of correlations and partial correlations of gene expression profiles and gene functional similarities through a supervised classifier. In the current work, TNIFSED was applied to predict the transcriptional networks of Escherichia coli and Saccharomyces cerevisiae, using datasets of 445 and 170 Affymetrix arrays, respectively. Using the area under the receiver operating characteristic curve and the F-measure as indicators, we showed the predictive performance of TNIFSED to be better than that of unsupervised state-of-the-art methods. TNIFSED performed slightly worse than the supervised SIRENE algorithm in identifying target genes of TFs with a wide range of already identified target genes, but better for TFs with only a few identified target genes. Our results indicate that TNIFSED is complementary to the SIRENE algorithm, and particularly suitable for discovering target genes of "orphan" TFs.
Denoising Magnetic Resonance Images Using Collaborative Non-Local Means.
Chen, Geng; Zhang, Pei; Wu, Yafeng; Shen, Dinggang; Yap, Pew-Thian
2016-02-12
Noise artifacts in magnetic resonance (MR) images increase the complexity of image processing workflows and decrease the reliability of inferences drawn from the images. It is thus often desirable to remove such artifacts beforehand for more robust and effective quantitative analysis. It is important to preserve the integrity of relevant image information while removing noise in MR images. A variety of approaches have been developed for this purpose, and the non-local means (NLM) filter has been shown to achieve state-of-the-art denoising performance. For effective denoising, NLM relies heavily on the existence of repeating structural patterns, which however might not always be present within a single image. This is especially true given that the human brain is complex and contains many unique structures. In this paper we propose to leverage the repeating structures from multiple images to collaboratively denoise an image. The underlying assumption is that repeating structures are more likely to be found across multiple scans than within a single scan. Specifically, to denoise a target image, multiple images, which may be acquired from different subjects, are spatially aligned to the target image, and an NLM-like block matching is performed on these aligned images with the target image as the reference. This significantly increases the number of matching structures and thus boosts the denoising performance. Experiments on both synthetic and real data show that the proposed approach, collaborative non-local means (CNLM), outperforms the classic NLM and yields results with markedly improved structural details.
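A minimal sketch of NLM-like block matching across multiple aligned images follows; the patch size, search window, bandwidth h, and the toy ramp image are all assumptions for illustration, not the paper's settings:

```python
import numpy as np

def cnlm_pixel(stack, img_idx, i, j, patch=1, search=3, h=0.5):
    """Estimate pixel (i, j) of stack[img_idx] by averaging similar patches
    drawn from *all* aligned images in `stack` (NLM-like block matching with
    the target image as the reference)."""
    ref = stack[img_idx, i-patch:i+patch+1, j-patch:j+patch+1]
    num, den = 0.0, 0.0
    for img in stack:                      # candidate patches from every image
        for a in range(i - search, i + search + 1):
            for b in range(j - search, j + search + 1):
                cand = img[a-patch:a+patch+1, b-patch:b+patch+1]
                if cand.shape != ref.shape:
                    continue               # skip windows falling off the image
                w = np.exp(-np.sum((cand - ref) ** 2) / (h * h))
                num += w * img[a, b]
                den += w
    return num / den

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 1, 16), (16, 1))         # smooth ramp image
stack = clean + 0.1 * rng.standard_normal((4, 16, 16))  # 4 noisy aligned scans
est = cnlm_pixel(stack, img_idx=0, i=8, j=8)
```

Pooling candidate patches from four scans instead of one multiplies the number of usable matches, which is the core idea behind the collaborative variant.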
Nagpal, Sunil; Haque, Mohammed Monzoorul; Mande, Sharmila S.
2016-01-01
Background The overall metabolic/functional potential of any given environmental niche is a function of the sum total of genes/proteins/enzymes that are encoded and expressed by various interacting microbes residing in that niche. Consequently, prior (collated) information pertaining to genes and enzymes encoded by the resident microbes can aid in indirectly (re)constructing/inferring the metabolic/functional potential of a given microbial community (given its taxonomic abundance profile). In this study, we present Vikodak—a multi-modular package that is based on the above assumption and automates inferring and/or comparing the functional characteristics of an environment using taxonomic abundance generated from one or more environmental sample datasets. With the underlying assumptions of co-metabolism and independent contributions of different microbes in a community, a concerted effort has been made to accommodate microbial co-existence patterns in various modules incorporated in Vikodak. Results Validation experiments on over 1400 metagenomic samples have confirmed the utility of Vikodak in (a) deciphering enzyme abundance profiles of any KEGG metabolic pathway, (b) functional resolution of distinct metagenomic environments, (c) inferring patterns of functional interaction between resident microbes, and (d) automating statistical comparison of functional features of studied microbiomes. Novel features incorporated in Vikodak also facilitate automatic removal of false positives and spurious functional predictions. Conclusions With novel provisions for comprehensive functional analysis, inclusion of microbial co-existence pattern based algorithms, automated inter-environment comparisons, in-depth analysis of individual metabolic pathways, and greater flexibility at the user end, Vikodak is expected to be an important addition to the family of existing tools for 16S-based function prediction. Availability and Implementation A web implementation of Vikodak
A variance components model for statistical inference on functional connectivity networks.
Fiecas, Mark; Cribben, Ivor; Bahktiari, Reyhaneh; Cummine, Jacqueline
2017-01-24
We propose a variance components linear modeling framework to conduct statistical inference on functional connectivity networks that directly accounts for the temporal autocorrelation inherent in functional magnetic resonance imaging (fMRI) time series data and for the heterogeneity across subjects in the study. The novel method estimates the autocorrelation structure in a nonparametric and subject-specific manner, and estimates the variance due to the heterogeneity using iterative least squares. We apply the new model to a resting-state fMRI study to compare the functional connectivity networks in both typical and reading-impaired young adults in order to characterize the resting state networks that are related to reading processes. We also use simulated data to compare the performance of our model to other methods of statistical inference on functional connectivity networks that do not account for the temporal autocorrelation or heterogeneity across the subjects, and show that accounting for these sources of variation and covariation results in more powerful tests for statistical inference.
Time-varying coupling functions: Dynamical inference and cause of synchronization transitions
NASA Astrophysics Data System (ADS)
Stankovski, Tomislav
2017-02-01
Interactions in nature can be described by their coupling strength, direction of coupling, and coupling function. The coupling strength and directionality are relatively well understood and studied, at least for two interacting systems; however, there can be a complexity in the interactions uniquely dependent on the coupling functions. Such a special case is studied here: synchronization transition occurs only due to the time variability of the coupling functions, while the net coupling strength is constant throughout the observation time. To motivate the investigation, an example is used to present an analysis of cross-frequency coupling functions between delta and alpha brain waves extracted from the electroencephalography recording of a healthy human subject in a free-running resting state. The results indicate that time-varying coupling functions are a reality for biological interactions. A model of phase oscillators is used to demonstrate and detect the synchronization transition caused by the varying coupling functions during an invariant coupling strength. The ability to detect this phenomenon is discussed with the method of dynamical Bayesian inference, which was able to infer the time-varying coupling functions. The form of the coupling function acts as an additional dimension for the interactions, and it should be taken into account when detecting biological or other interactions from data.
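The synchronization transition driven purely by the form of the coupling function can be illustrated with a toy phase-difference model; the specific mixture q(psi) = (1-s)*sin(psi) + s*sin(2*psi) and all parameter values are assumptions chosen so that the net coupling strength eps stays fixed while the locking range shrinks:

```python
import numpy as np

def simulate(s, d_omega=0.95, eps=1.0, dt=0.01, T=200.0):
    """Phase difference psi between two coupled phase oscillators,
    psi' = d_omega - eps * q(psi), with coupling *function*
    q(psi) = (1-s)*sin(psi) + s*sin(2*psi).  The net strength eps is
    fixed; only the form of q (controlled by s) changes."""
    psi, traj = 0.0, []
    for _ in range(int(T / dt)):           # forward Euler integration
        q = (1 - s) * np.sin(psi) + s * np.sin(2 * psi)
        psi += dt * (d_omega - eps * q)
        traj.append(psi)
    return np.array(traj)

# Same eps, two forms of the coupling function:
traj_locked   = simulate(s=0.0)   # pure sine: |d_omega| < eps -> phase locks
traj_unlocked = simulate(s=0.5)   # mixed form: locking range shrinks -> drift
lock_rate   = traj_locked[-1]   - traj_locked[len(traj_locked) // 2]
unlock_rate = traj_unlocked[-1] - traj_unlocked[len(traj_unlocked) // 2]
```

With d_omega just below eps, the pure-sine form admits a phase-locked fixed point, while the mixed form's smaller amplitude range destroys it and the phase difference drifts, even though eps never changed.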
Approximation Of Multi-Valued Inverse Functions Using Clustering And Sugeno Fuzzy Inference
NASA Technical Reports Server (NTRS)
Walden, Maria A.; Bikdash, Marwan; Homaifar, Abdollah
1998-01-01
Finding the inverse of a continuous function can be challenging and computationally expensive when the inverse function is multi-valued. Difficulties may be compounded when the function itself is difficult to evaluate. We show that we can use fuzzy-logic approximators such as Sugeno inference systems to compute the inverse on-line. To do so, a fuzzy clustering algorithm can be used in conjunction with a discriminating function to split the function data into branches for the different values of the forward function. These data sets are then fed into a recursive least-squares learning algorithm that finds the proper coefficients of the Sugeno approximators; each Sugeno approximator finds one value of the inverse function. The accuracy of the approximation is also discussed.
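The branch-splitting idea can be sketched on the simplest multi-valued inverse, y = x^2; here sign(x) stands in for the fuzzy clustering/discriminating step, and a linear least-squares fit per branch stands in for the Sugeno approximators:

```python
import numpy as np

def fit_inverse_branches(x, y):
    """Approximate the two branches of the inverse of y = x^2 by
    (1) splitting samples into branches with a discriminating function
    (here simply the sign of x, standing in for the fuzzy clustering) and
    (2) fitting one model per branch by linear least squares, a crude
    stand-in for the per-branch Sugeno approximators."""
    models = {}
    for label, mask in (("pos", x >= 0), ("neg", x < 0)):
        # fit x ~ a*sqrt(y) + b on each branch
        A = np.column_stack([np.sqrt(y[mask]), np.ones(mask.sum())])
        coef, *_ = np.linalg.lstsq(A, x[mask], rcond=None)
        models[label] = coef
    return models

x = np.linspace(-2, 2, 401)
y = x ** 2
models = fit_inverse_branches(x, y)
a_pos, b_pos = models["pos"]   # expect x = +sqrt(y) on this branch
a_neg, b_neg = models["neg"]   # expect x = -sqrt(y) on this branch
```

Each branch model recovers one value of the inverse; a Sugeno system would replace the single global fit per branch with a blend of local rules.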
HARDI DATA DENOISING USING VECTORIAL TOTAL VARIATION AND LOGARITHMIC BARRIER
Kim, Yunho; Thompson, Paul M.; Vese, Luminita A.
2010-01-01
In this work, we wish to denoise HARDI (High Angular Resolution Diffusion Imaging) data arising in medical brain imaging. Diffusion imaging is a relatively new and powerful method to measure the three-dimensional profile of water diffusion at each point in the brain. These images can be used to reconstruct fiber directions and pathways in the living brain, providing detailed maps of fiber integrity and connectivity. HARDI data is a powerful new extension of diffusion imaging, which goes beyond the diffusion tensor imaging (DTI) model: mathematically, intensity data is given at every voxel and at any direction on the sphere. Unfortunately, HARDI data is usually highly contaminated with noise, depending on the b-value, which is a tuning parameter pre-selected to collect the data. Larger b-values help to collect more accurate information in terms of measuring diffusivity, but more noise is generated by many factors as well. So large b-values are preferred, if we can satisfactorily reduce the noise without losing the data structure. Here we propose two variational methods to denoise HARDI data. The first one directly denoises the collected data S, while the second one denoises the so-called sADC (spherical Apparent Diffusion Coefficient), a field of radial functions derived from the data. These two quantities are related (in the noise-free case) by an equation of the form S = S0 · exp(−b · sADC), where S0 is the baseline signal acquired without diffusion weighting. By applying these two different models, we will be able to determine which quantity will most accurately preserve data structure after denoising. The theoretical analysis of the proposed models is presented, together with experimental results and comparisons for denoising synthetic and real HARDI data. PMID:20802839
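The noise-free relation between the two denoised quantities is easy to invert; this snippet (with an illustrative b-value and sADC values, not taken from the paper) simply round-trips S and sADC:

```python
import numpy as np

def sadc_from_signal(S, S0, b):
    # Invert S = S0 * exp(-b * sADC) for the spherical ADC
    return -np.log(S / S0) / b

b = 3000.0                                       # s/mm^2, a typical high b-value
S0 = 1.0                                         # baseline (b = 0) signal
sadc_true = np.array([0.3e-3, 0.7e-3, 1.1e-3])   # mm^2/s, illustrative values
S = S0 * np.exp(-b * sadc_true)                  # forward model (noise-free)
sadc_rec = sadc_from_signal(S, S0, b)            # exact recovery without noise
```

With noise the log transform amplifies errors where S is small, which is one reason denoising S directly and denoising sADC can preserve structure differently.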
Electrocardiogram signal denoising based on a new improved wavelet thresholding
NASA Astrophysics Data System (ADS)
Han, Guoqiang; Xu, Zhijun
2016-08-01
Good quality electrocardiogram (ECG) signals are utilized by physicians for the interpretation and identification of physiological and pathological phenomena. In general, ECG signals may be contaminated by various noises, such as baseline wander, power line interference, and electromagnetic interference, during the gathering and recording process. As ECG signals are non-stationary physiological signals, the wavelet transform has proved to be an effective tool for removing noise from corrupted signals. A new compromising threshold function, a sigmoid function-based thresholding scheme, is adopted in processing ECG signals. Compared with other methods, such as hard/soft thresholding or other existing thresholding functions, the new algorithm has many advantages in the noise reduction of ECG signals. It overcomes the discontinuity at ±T of hard thresholding and reduces the fixed deviation of soft thresholding. The improved wavelet thresholding is shown to be more efficient than existing algorithms in ECG signal denoising. The signal-to-noise ratio, mean square error, and percent root mean square difference are calculated as quantitative tools to verify the denoising performance. The experimental results reveal that the P, Q, R, and S waves of the denoised ECG signals coincide with those of the original ECG signals when the proposed method is employed.
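A sigmoid-gated rule of this kind can be sketched as follows; the particular function w / (1 + exp(-k(|w|-T))) is an illustrative compromise rule, not necessarily the exact function proposed in the paper:

```python
import numpy as np

def hard_thresh(w, T):
    # Hard rule: keep or kill; discontinuous at |w| = T
    return np.where(np.abs(w) > T, w, 0.0)

def soft_thresh(w, T):
    # Soft rule: continuous, but biased by T for large coefficients
    return np.sign(w) * np.maximum(np.abs(w) - T, 0.0)

def sigmoid_thresh(w, T, k=10.0):
    """Compromise rule: w scaled by a logistic gate centred at |w| = T.
    Continuous at +-T (unlike hard) and asymptotically unbiased for
    large |w| (unlike soft, whose bias stays at T)."""
    return w / (1.0 + np.exp(-k * (np.abs(w) - T)))

T = 1.0
w = np.linspace(-3, 3, 601)   # coefficient grid, e.g. for plotting the rules
```

The gate steepness k interpolates between near-soft behaviour (small k) and near-hard behaviour (large k).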
Locally Based Kernel PLS Regression De-noising with Application to Event-Related Potentials
NASA Technical Reports Server (NTRS)
Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Tino, Peter
2002-01-01
Our approach exploits the close relation between signal de-noising and regression problems, in which one estimates functions reflecting the dependency between a set of inputs and dependent outputs corrupted with some level of noise.
INTEGRATING EVOLUTIONARY AND FUNCTIONAL APPROACHES TO INFER ADAPTATION AT SPECIFIC LOCI
Storz, Jay F.; Wheat, Christopher W.
2010-01-01
Inferences about adaptation at specific loci are often exclusively based on the static analysis of DNA sequence variation. Ideally, population-genetic evidence for positive selection serves as a stepping-off point for experimental studies to elucidate the functional significance of the putatively adaptive variation. We argue that inferences about adaptation at specific loci are best achieved by integrating the indirect, retrospective insights provided by population-genetic analyses with the more direct, mechanistic insights provided by functional experiments. Integrative studies of adaptive genetic variation may sometimes be motivated by experimental insights into molecular function, which then provide the impetus to perform population genetic tests to evaluate whether the functional variation is of adaptive significance. In other cases, studies may be initiated by genome scans of DNA variation to identify candidate loci for recent adaptation. Results of such analyses can then motivate experimental efforts to test whether the identified candidate loci do in fact contribute to functional variation in some fitness-related phenotype. Functional studies can provide corroborative evidence for positive selection at particular loci, and can potentially reveal specific molecular mechanisms of adaptation. PMID:20500215
Chen, Zhe; Putrino, David F; Ghosh, Soumya; Barbieri, Riccardo; Brown, Emery N
2011-04-01
The ability to accurately infer functional connectivity between ensemble neurons using experimentally acquired spike train data is currently an important research objective in computational neuroscience. Point process generalized linear models and maximum likelihood estimation have been proposed as effective methods for the identification of spiking dependency between neurons. However, unfavorable experimental conditions occasionally result in insufficient data collection due to factors such as low neuronal firing rates or brief recording periods, and in these cases, the standard maximum likelihood estimate becomes unreliable. The present study compares the performance of different statistical inference procedures when applied to the estimation of functional connectivity in neuronal assemblies with sparse spiking data. Four inference methods were compared: maximum likelihood estimation, penalized maximum likelihood estimation using either l2 or l1 regularization, and hierarchical Bayesian estimation based on a variational Bayes algorithm. Algorithmic performances were compared using well-established goodness-of-fit measures in benchmark simulation studies; the hierarchical Bayesian approach performed favorably compared with the other algorithms and was then successfully applied to real spiking data recorded from the cat motor cortex. The identification of spiking dependencies in physiologically acquired data was encouraging, since their sparse nature would have previously precluded successful analysis using traditional methods.
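The effect of l2 penalization on a spiking GLM can be sketched with a Bernoulli toy model fit by gradient ascent; the simulated spike trains, learning rate, and penalty weight are all assumptions for illustration, not the study's point-process models:

```python
import numpy as np

def fit_bernoulli_glm(x, y, lam=0.0, lr=0.1, iters=2000):
    """Penalized ML for a Bernoulli spiking GLM,
    P(y_t = 1) = sigmoid(b0 + b1 * x_t), maximising
    log-likelihood - lam * (b0^2 + b1^2) by gradient ascent
    (l2 / ridge penalty; an l1 penalty would instead drive
    coefficients exactly to zero)."""
    b = np.zeros(2)
    X = np.column_stack([np.ones_like(x), x])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ b))
        grad = X.T @ (y - p) - 2.0 * lam * b   # data gradient minus penalty
        b += lr * grad / len(y)
    return b

rng = np.random.default_rng(1)
x = rng.random(300) < 0.2                       # presynaptic spikes (sparse)
p_true = 1.0 / (1.0 + np.exp(-(-2.0 + 2.5 * x)))
y = rng.random(300) < p_true                    # postsynaptic spikes
b_ml  = fit_bernoulli_glm(x.astype(float), y.astype(float), lam=0.0)
b_pen = fit_bernoulli_glm(x.astype(float), y.astype(float), lam=5.0)
```

The penalized estimate is shrunk toward zero, which is exactly what stabilizes the fit when spikes are scarce and the unpenalized likelihood surface is flat.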
NASA Astrophysics Data System (ADS)
Chen, Xinyuan; Song, Li; Yang, Xiaokang
2016-09-01
Video denoising can be described as the problem of mapping a specific length of noisy frames to a clean one. We propose a deep architecture based on the Recurrent Neural Network (RNN) for video denoising. The model learns a patch-based end-to-end mapping between the clean and noisy video sequences: it takes the corrupted video sequences as input and outputs the clean ones. Our deep network, which we refer to as deep Recurrent Neural Networks (deep RNNs or DRNNs), stacks RNN layers where each layer receives the hidden state of the previous layer as input. Experiments show that (i) the recurrent architecture extracts motion information through the temporal domain and benefits video denoising, (ii) the deep architecture has enough capacity to express the mapping between corrupted input videos and clean output videos, and (iii) the model generalizes to learn different mappings for videos corrupted by different types of noise (e.g., Poisson-Gaussian noise). By training on large video databases, we are able to compete with some existing video denoising methods.
Ahl, Richard E; Keil, Frank C
2016-09-26
Four studies explored the abilities of 80 adults and 180 children (4-9 years), from predominantly middle-class families in the Northeastern United States, to use information about machines' observable functional capacities to infer their internal, "hidden" mechanistic complexity. Children as young as 4 and 5 years old used machines' numbers of functions as indications of complexity and matched machines performing more functions with more complex "insides" (Study 1). However, only older children (6 and older) and adults used machines' functional diversity alone as an indication of complexity (Studies 2-4). The ability to use functional diversity as a complexity cue therefore emerges during the early school years, well before the use of diversity in most categorical induction tasks.
Wavelet-based denoising using local Laplace prior
NASA Astrophysics Data System (ADS)
Rabbani, Hossein; Vafadust, Mansur; Selesnick, Ivan
2007-09-01
Although wavelet-based image denoising is a powerful tool for image processing applications, relatively few publications have so far addressed wavelet-based video denoising. The main reason is that the standard 3-D data transforms do not provide useful representations with good energy compaction properties for most video data. For example, the multi-dimensional standard separable discrete wavelet transform (M-D DWT) mixes orientations and motions in its subbands and produces checkerboard artifacts. So, instead of M-D DWT, oriented transforms such as the multi-dimensional complex wavelet transform (M-D DCWT) are usually proposed for video processing. In this paper we use a Laplace distribution with local variance to model the statistical properties of noise-free wavelet coefficients. This distribution is able to simultaneously model the heavy-tailed and intrascale dependency properties of wavelets. Using this model, simple shrinkage functions are obtained employing maximum a posteriori (MAP) and minimum mean squared error (MMSE) estimators. These shrinkage functions are proposed for video denoising in the DCWT domain. The simulation results show that this simple denoising method has impressive performance, both visually and quantitatively.
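For a Laplace prior whose scale is tied to a local standard deviation sigma_w, with additive Gaussian noise, the MAP estimator reduces to soft thresholding with a locally adaptive threshold; a minimal sketch (the notation is assumed, not taken from the paper):

```python
import numpy as np

def map_shrink(y, sigma_n, sigma_w):
    """MAP estimate of a noise-free coefficient w from y = w + n,
    n ~ N(0, sigma_n^2), under a Laplace prior
    p(w) ~ exp(-sqrt(2)|w| / sigma_w): soft thresholding with a
    locally adaptive threshold sqrt(2) * sigma_n^2 / sigma_w."""
    T = np.sqrt(2.0) * sigma_n**2 / sigma_w
    return np.sign(y) * np.maximum(np.abs(y) - T, 0.0)

# Local variance -> local threshold: coefficients in a "busy" neighbourhood
# (large sigma_w) are shrunk less than those in a flat one (small sigma_w).
y = np.array([2.0, 2.0])
w_busy = map_shrink(y, sigma_n=1.0, sigma_w=4.0)   # threshold ~ 0.35
w_flat = map_shrink(y, sigma_n=1.0, sigma_w=0.5)   # threshold ~ 2.83, kills y
```

This is the intrascale dependency at work: the same observed coefficient survives in a high-activity region and is suppressed in a smooth one.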
2D Orthogonal Locality Preserving Projection for Image Denoising.
Shikkenawis, Gitam; Mitra, Suman K
2016-01-01
Sparse representations using transform-domain techniques are widely used for better interpretation of the raw data. Orthogonal locality preserving projection (OLPP) is a linear technique that tries to preserve the local structure of data in the transform domain as well. The vectorized nature of OLPP requires high-dimensional data to be converted to vector format, and hence may lose the spatial neighborhood information of the raw data. On the other hand, processing 2D data directly not only preserves spatial information, but also improves the computational efficiency considerably. The 2D OLPP is expected to learn the transformation from 2D data itself. This paper derives the mathematical foundation for 2D OLPP. The proposed technique is used for the image denoising task. Recent state-of-the-art approaches for image denoising work on two major hypotheses, i.e., non-local self-similarity and sparse linear approximations of the data. The locality preserving nature of the proposed approach automatically takes care of self-similarity present in the image while inferring a sparse basis. A global basis is adequate for the entire image. The proposed approach outperforms several state-of-the-art image denoising approaches for gray-scale, color, and texture images.
De novo inference of protein function from coarse-grained dynamics.
Bhadra, Pratiti; Pal, Debnath
2014-10-01
Inference of the molecular function of proteins is a fundamental task in the quest for understanding cellular processes. The task is becoming increasingly difficult, with thousands of new proteins discovered each day. The difficulty arises primarily from the lack of a high-throughput experimental technique for assessing protein molecular function, a lacuna that computational approaches are trying hard to fill. The latter, too, face a major bottleneck in the absence of clear evidence based on evolutionary information. Here we propose a de novo approach to annotate protein molecular function through a structural dynamics match for a pair of segments from two dissimilar proteins, which may share even <10% sequence identity. To screen these matches, corresponding 1 µs coarse-grained (CG) molecular dynamics trajectories were used to compute normalized root-mean-square-fluctuation graphs and select mobile segments, which were, thereafter, matched for all pairs using unweighted three-dimensional autocorrelation vectors. Our in-house custom-built forcefield (FF), extensively validated against dynamics information obtained from experimental nuclear magnetic resonance data, was specifically used to generate the CG dynamics trajectories. The test for correspondence of the dynamics signature of protein segments and function revealed an 87% true positive rate and a 93.5% true negative rate on a dataset of 60 experimentally validated proteins, including moonlighting proteins and those with novel functional motifs. A random negative test against 315 unique fold/function proteins gave >99% true recall. A blind prediction on a novel protein appears consistent with additional evidence retrieved therein. This is the first proof-of-principle of the generalized use of structural dynamics for inferring protein molecular function, leveraging our custom-made CG FF.
Arboleya, Silvia; Sánchez, Borja; Solís, Gonzalo; Fernández, Nuria; Suárez, Marta; Hernández-Barranco, Ana M.; Milani, Christian; Margolles, Abelardo; de los Reyes-Gavilán, Clara G.; Ventura, Marco; Gueimonde, Miguel
2016-01-01
Background: The microbial colonization of the neonatal gut provides a critical stimulus for normal maturation and development. This process of early microbiota establishment, known to be affected by several factors, constitutes an important determinant of later health. Methods: We studied the establishment of the microbiota in preterm and full-term infants and the impact of perinatal antibiotics upon this process in premature babies. To this end, 16S rRNA gene sequence-based microbiota assessment was performed at the phylum level and functional inference analyses were conducted. Moreover, the levels of the main intestinal microbial metabolites, the short-chain fatty acids (SCFA) acetate, propionate and butyrate, were measured by gas chromatography with flame ionization/mass spectrometry detection. Results: Prematurity affects microbiota composition at the phylum level, leading to increases in Proteobacteria and reductions in other intestinal microorganisms. Perinatal antibiotic use further affected the microbiota of the preterm infant. These changes involved a concomitant alteration in the levels of intestinal SCFA. Moreover, functional inference analyses allowed for identifying metabolic pathways potentially affected by prematurity and perinatal antibiotic use. Conclusion: A deficiency or delay in the establishment of normal microbiota function seems to be present in preterm infants. Perinatal antibiotic use, such as intrapartum prophylaxis, affected the early-life microbiota establishment in preterm newborns, which may have consequences for later health. PMID:27136545
Homology-based inference sets the bar high for protein function prediction
2013-01-01
Background Any method that de novo predicts protein function should do better than random. More challenging, it also ought to outperform simple homology-based inference. Methods Here, we describe a few methods that predict protein function exclusively through homology. Together, they set the bar or lower limit for future improvements. Results and conclusions During the development of these methods, we faced two surprises. Firstly, our most successful implementation for the baseline ranked very high at CAFA1. In fact, our best combination of homology-based methods fared only slightly worse than the top-of-the-line prediction method from the Jones group. Secondly, although the concept of homology-based inference is simple, this work revealed that the precise details of the implementation are crucial: not only did the methods span from top to bottom performers at CAFA, but also the reasons for these differences were unexpected. In this work, we also propose a new rigorous measure to compare predicted and experimental annotations. It puts more emphasis on the details of protein function than the other measures employed by CAFA and may best reflect the expectations of users. Clearly, the definition of proper goals remains one major objective for CAFA. PMID:23514582
Inferring deep-brain activity from cortical activity using functional near-infrared spectroscopy.
Liu, Ning; Cui, Xu; Bryant, Daniel M; Glover, Gary H; Reiss, Allan L
2015-03-01
Functional near-infrared spectroscopy (fNIRS) is an increasingly popular technology for studying brain function because it is non-invasive, non-irradiating and relatively inexpensive. Further, fNIRS potentially allows measurement of hemodynamic activity with high temporal resolution (milliseconds) and in naturalistic settings. However, in comparison with other imaging modalities, namely fMRI, fNIRS has a significant drawback: limited sensitivity to hemodynamic changes in deep-brain regions. To overcome this limitation, we developed a computational method to infer deep-brain activity using fNIRS measurements of cortical activity. Using simultaneous fNIRS and fMRI, we measured brain activity in 17 participants as they completed three cognitive tasks. A support vector regression (SVR) learning algorithm was used to predict activity in twelve deep-brain regions using information from surface fNIRS measurements. We compared these predictions against actual fMRI-measured activity using Pearson's correlation to quantify prediction performance. To provide a benchmark for comparison, we also used fMRI measurements of cortical activity to infer deep-brain activity. When using fMRI-measured activity from the entire cortex, we were able to predict deep-brain activity in the fusiform cortex with an average correlation coefficient of 0.80 and in all deep-brain regions with an average correlation coefficient of 0.67. The top 15% of predictions using the fNIRS signal achieved an accuracy of 0.70. To our knowledge, this study is the first to investigate the feasibility of using cortical activity to infer deep-brain activity. This new method has the potential to extend fNIRS applications in cognitive and clinical neuroscience research.
Sojoudi, Alireza; Goodyear, Bradley G
2016-12-01
Spontaneous fluctuations of blood-oxygenation level-dependent functional magnetic resonance imaging (BOLD fMRI) signals are highly synchronous between brain regions that serve similar functions. This provides a means to investigate functional networks; however, most analysis techniques assume functional connections are constant over time. This may be problematic in the case of neurological disease, where functional connections may be highly variable. Recently, several methods have been proposed to determine moment-to-moment changes in the strength of functional connections over an imaging session (so-called dynamic connectivity). Here a novel analysis framework based on a hierarchical observation modeling approach was proposed to permit statistical inference of the presence of dynamic connectivity. A two-level linear model composed of overlapping sliding windows of fMRI signals, incorporating the fact that overlapping windows are not independent, was described. To test this approach, datasets were synthesized whereby functional connectivity was either constant (significant or insignificant) or modulated by an external input. The method successfully determines the statistical significance of a functional connection in phase with the modulation, and it exhibits greater sensitivity and specificity in detecting regions with variable connectivity, when compared with sliding-window correlation analysis. For real data, this technique possesses greater reproducibility and provides a more discriminative estimate of dynamic connectivity than sliding-window correlation analysis. Hum Brain Mapp 37:4566-4580, 2016. © 2016 Wiley Periodicals, Inc.
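The sliding-window correlation baseline that the proposed framework is compared against can be sketched as follows (the window length and the synthetic connectivity switch are illustrative assumptions):

```python
import numpy as np

def sliding_corr(x, y, win):
    """Dynamic-connectivity baseline: Pearson correlation of x and y in
    overlapping sliding windows.  Note that successive windows share
    samples, so the estimates are not independent -- the very point the
    hierarchical observation model in the abstract addresses."""
    n = len(x) - win + 1
    r = np.empty(n)
    for t in range(n):
        r[t] = np.corrcoef(x[t:t+win], y[t:t+win])[0, 1]
    return r

rng = np.random.default_rng(2)
t = np.arange(400)
common = rng.standard_normal(400)           # shared signal driving both regions
gate = (t >= 200).astype(float)             # connectivity switches on at t=200
x = common + 0.5 * rng.standard_normal(400)
y = gate * common + 0.5 * rng.standard_normal(400)
r = sliding_corr(x, y, win=60)              # correlation trace over windows
```

The correlation trace rises around the switch point, but its sampling variability within each regime illustrates why a formal inference framework is needed to call a change significant.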
Inferring deep biosphere function and diversity through (near) surface biosphere portals (Invited)
NASA Astrophysics Data System (ADS)
Meyer-Dombard, D. R.; Cardace, D.; Woycheese, K. M.; Swingley, W.; Schubotz, F.; Shock, E.
2013-12-01
The consideration of surface expressions of the deep subsurface- such as springs- remains one of the most economically viable means to query the deep biosphere's diversity and function. Hot spring source pools are ideal portals for accessing and inferring the taxonomic and functional diversity of related deep subsurface microbial communities. Consideration of the geochemical composition of deep vs. surface fluids provides context for interpretation of community function. Further, parallel assessment of 16S rRNA data, metagenomic sequencing, and isotopic compositions of biomass in surface springs allows inference of the functional capacities of subsurface ecosystems. Springs in Yellowstone National Park (YNP), the Philippines, and Turkey are considered here, incorporating near-surface, transition, and surface ecosystems to identify 'legacy' taxa and functions of the deep biosphere. We find that source pools often support functional capacity suited to subsurface ecosystems. For example, in hot ecosystems, source pools are strictly chemosynthetic, and surface environments with measureable dissolved oxygen may contain evidence of community functions more favorable under anaerobic conditions. Metagenomic reads from a YNP ecosystem indicate the genetic capacity for sulfate reduction at high temperature. However, inorganic sulfate reduction is only minimally energy-yielding in these surface environments suggesting the potential that sulfate reduction is a 'legacy' function of deeper biosphere ecosystems. Carbon fixation tactics shift with increased surface exposure of the thermal fluids. Genes related to the rTCA cycle and the acetyl co-A pathway are most prevalent in highest temperature, anaerobic sites. At lower temperature sites, fewer total carbon fixation genes were observed, perhaps indicating an increase in heterotrophic metabolism with increased surface exposure. In hydrogen and methane rich springs in the Philippines and Turkey, methanogenic taxa dominate source
Bond, F W; Dryden, W; Briscoe, R
1999-12-01
This article describes a role-playing experiment that examined the sufficiency hypothesis of Rational Emotive Behaviour Therapy (REBT). This proposition states that it is sufficient for rational and irrational beliefs to refer to preferences and musts, respectively, if those beliefs are to affect the functionality of inferences (FI). Consistent with the REBT literature (e.g. Dryden, 1994; Dryden & Ellis, 1988; Palmer, Dryden, Ellis & Yapp, 1995), results from this experiment showed that rational and irrational beliefs, as defined by REBT, do affect FI. Specifically, results showed that people who hold a rational belief form inferences that are significantly more functional than those formed by people who hold an irrational belief. Contrary to REBT theory, the sufficiency hypothesis was not supported. Thus, results indicated that it is not sufficient for rational and irrational beliefs to refer to preferences and musts, respectively, if those beliefs are to affect FI. It appears, then, that preferences and musts are not sufficient mechanisms by which rational and irrational beliefs, respectively, affect FI. Psychotherapeutic implications of these findings are considered.
Wang, Xiaoxiao; Wang, Huan; Huang, Jinfeng; Zhou, Yifeng; Tzvetanov, Tzvetomir
2017-01-01
The contrast sensitivity function that spans the two dimensions of contrast and spatial frequency is crucial in predicting functional vision both in research and clinical applications. In this study, the use of Bayesian inference was proposed to determine the parameters of the two-dimensional contrast sensitivity function. Two-dimensional Bayesian inference was extensively simulated in comparison to classical one-dimensional measures. Its performance on two-dimensional data gathered with different sampling algorithms was also investigated. The results showed that the two-dimensional Bayesian inference method significantly improved the accuracy and precision of the contrast sensitivity function, as compared to the more common one-dimensional estimates. In addition, applying two-dimensional Bayesian estimation to the final data set showed similar levels of reliability and efficiency across widely disparate and established sampling methods (from classical one-dimensional sampling, such as Ψ or staircase, to more novel multi-dimensional sampling methods, such as quick contrast sensitivity function and Fisher information gain). Furthermore, the improvements observed following the application of Bayesian inference were maintained even when the prior poorly matched the subject's contrast sensitivity function. Simulation results were confirmed in a psychophysical experiment. The results indicated that two-dimensional Bayesian inference of contrast sensitivity function data provides similar estimates across a wide range of sampling methods. The present study likely has implications for the measurement of contrast sensitivity function in various settings (including research and clinical settings) and would facilitate the comparison of existing data from previous studies. PMID:28119563
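The grid-based Bayesian update the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' exact model: the log-parabola CSF shape, the 2AFC Weibull link, the grid ranges, and the simulated observer are all assumptions.

```python
import math
import random

def csf_log10(freq, gain, fpeak):
    # log-parabola CSF (assumed form): log10 sensitivity vs spatial frequency
    return gain - (math.log10(freq / fpeak)) ** 2

def p_correct(contrast, freq, gain, fpeak):
    # 2AFC Weibull psychometric function (guess rate 0.5, slope 3; assumed)
    threshold = 10.0 ** (-csf_log10(freq, gain, fpeak))
    return 0.5 + 0.5 * (1.0 - math.exp(-(contrast / threshold) ** 3))

def posterior(grid, prior, trials):
    # multiply Bernoulli likelihoods over trials, renormalising each step
    post = list(prior)
    for contrast, freq, resp in trials:
        for i, (gain, fpeak) in enumerate(grid):
            p = min(max(p_correct(contrast, freq, gain, fpeak), 1e-9), 1 - 1e-9)
            post[i] *= p if resp else 1.0 - p
        total = sum(post)
        post = [x / total for x in post]
    return post

random.seed(0)
# parameter grid: (log10 peak gain, peak spatial frequency)
grid = [(g / 10.0, fp) for g in range(10, 31) for fp in (1, 2, 3, 4, 6, 8)]
prior = [1.0 / len(grid)] * len(grid)
true_gain, true_fpeak = 2.0, 3  # simulated observer

trials = []
for _ in range(200):
    c = random.choice([0.001, 0.003, 0.01, 0.03, 0.1])
    f = random.choice([1, 2, 4, 8, 16])
    trials.append((c, f, random.random() < p_correct(c, f, true_gain, true_fpeak)))

post = posterior(grid, prior, trials)
gain_hat = sum(p * g for p, (g, _) in zip(post, grid))  # posterior-mean estimate
```

Because the posterior is computed over the full two-dimensional grid, the same update applies regardless of which sampling scheme chose the (contrast, frequency) pairs, which is the point the abstract makes about disparate sampling methods.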
Wang, Xiaoxiao; Wang, Huan; Huang, Jinfeng; Zhou, Yifeng; Tzvetanov, Tzvetomir
2016-01-01
The contrast sensitivity function that spans the two dimensions of contrast and spatial frequency is crucial in predicting functional vision both in research and clinical applications. In this study, the use of Bayesian inference was proposed to determine the parameters of the two-dimensional contrast sensitivity function. Two-dimensional Bayesian inference was extensively simulated in comparison to classical one-dimensional measures. Its performance on two-dimensional data gathered with different sampling algorithms was also investigated. The results showed that the two-dimensional Bayesian inference method significantly improved the accuracy and precision of the contrast sensitivity function, as compared to the more common one-dimensional estimates. In addition, applying two-dimensional Bayesian estimation to the final data set showed similar levels of reliability and efficiency across widely disparate and established sampling methods (from classical one-dimensional sampling, such as Ψ or staircase, to more novel multi-dimensional sampling methods, such as quick contrast sensitivity function and Fisher information gain). Furthermore, the improvements observed following the application of Bayesian inference were maintained even when the prior poorly matched the subject's contrast sensitivity function. Simulation results were confirmed in a psychophysical experiment. The results indicated that two-dimensional Bayesian inference of contrast sensitivity function data provides similar estimates across a wide range of sampling methods. The present study likely has implications for the measurement of contrast sensitivity function in various settings (including research and clinical settings) and would facilitate the comparison of existing data from previous studies.
LncRNA ontology: inferring lncRNA functions based on chromatin states and expression patterns
Li, Yongsheng; Chen, Hong; Pan, Tao; Jiang, Chunjie; Zhao, Zheng; Wang, Zishan; Zhang, Jinwen; Xu, Juan; Li, Xia
2015-01-01
Accumulating evidence suggests that long non-coding RNAs (lncRNAs) perform important functions. Genome-wide chromatin states are a rich source of information about cellular state, yielding insights beyond what is typically obtained by transcriptome profiling. We propose an integrative method for genome-wide functional prediction of lncRNAs that combines chromatin-state data with gene expression patterns. We first validated the method using protein-coding genes with known function annotations. Our validation results indicated that the integrative method performs better than co-expression analysis and is accurate across different conditions. Next, by applying the integrative model genome-wide, we predicted probable functions for more than 97% of human lncRNAs. The putative functions inferred by our method match those previously annotated through the targets of lncRNAs. Moreover, the linkage from the cellular processes influenced by cancer-associated lncRNAs to the cancer hallmarks provided a “lncRNA point-of-view” on tumor biology. Our approach provides a functional annotation of lncRNAs, which we developed into a web-based application, LncRNA Ontology, to provide visualization, analysis, and downloading of lncRNA putative functions. PMID:26485761
Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising.
Zhang, Kai; Zuo, Wangmeng; Chen, Yunjin; Meng, Deyu; Zhang, Lei
2017-02-01
Discriminative model learning for image denoising has been attracting considerable attention recently due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace the progress in very deep architecture, learning algorithm, and regularization method into image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from the existing discriminative denoising models, which usually train a specific model for additive white Gaussian noise (AWGN) at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle several general image denoising tasks, such as Gaussian denoising, single-image super-resolution, and JPEG image deblocking. Our extensive experiments demonstrate that our DnCNN model not only exhibits high effectiveness in several general image denoising tasks but can also be efficiently implemented by benefiting from GPU computing.
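The residual-learning formulation can be sketched in a few lines: the model predicts the noise v = y − x rather than the clean image x, and the restored signal is y − v̂. A 1-D moving-average "predictor" stands in for the deep CNN here purely to show the plumbing; it is not the paper's network.

```python
import math
import random

def predict_residual(y, k=2):
    # stand-in "network": predicted noise = signal minus its local mean
    n = len(y)
    out = []
    for i in range(n):
        lo, hi = max(0, i - k), min(n, i + k + 1)
        out.append(y[i] - sum(y[lo:hi]) / (hi - lo))
    return out

def denoise(y):
    v_hat = predict_residual(y)                   # estimated noise residual
    return [yi - vi for yi, vi in zip(y, v_hat)]  # clean estimate = y - v_hat

def mse(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b)) / len(a)

random.seed(0)
clean = [math.sin(2 * math.pi * i / 40.0) for i in range(200)]
noisy = [c + random.gauss(0.0, 0.3) for c in clean]
restored = denoise(noisy)
```

The design point survives the simplification: a residual predictor only has to model the (statistically simple) noise, not the full content of the clean image, which is what makes a single model usable across several restoration tasks.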
Zhang, Weiwen; Culley, David E.; Gritsenko, Marina A.; Moore, Ronald J.; Nie, Lei; Scholten, Johannes C.; Petritis, Konstantinos; Strittmatter, Eric F.; Camp, David G.; Smith, Richard D.; Brockman, Fred J.
2006-11-03
Direct liquid chromatography-tandem mass spectrometry (LC-MS/MS) was used to examine the proteins extracted from Desulfovibrio vulgaris cells. While our previous study provided a proteomic overview of the cellular metabolism based on proteins with known functions (Zhang et al., 2006a, Proteomics, 6: 4286-4299), this study describes the global detection and functional inference for hypothetical D. vulgaris proteins. Across six growth conditions, 15,841 tryptic peptides were identified with high confidence. Using a criterion of peptide identification from at least two out of three independent LC-MS/MS analyses per protein, 176 open reading frames (ORFs) originally annotated as hypothetical proteins were found to encode expressed proteins. These proteins ranged from 6.0 to 153 kDa, and had calculated pI values ranging from 3.7 to 11.5. Based on homology search results (with E value <= 0.01 as a cutoff), 159 proteins were defined as conserved hypothetical proteins, and 17 proteins were unique to the D. vulgaris genome. Functional inference of the conserved hypothetical proteins was performed by a combination of several non-homology based methods: genomic context analysis, phylogenomic profiling, and analysis of a combination of experimental information including peptide detection in cells grown under specific culture conditions and cellular location of the proteins. Using this approach we were able to assign possible functions to 27 conserved hypothetical proteins. This study demonstrated that a combination of proteomics and bioinformatics methodologies can provide verification for the authenticity of hypothetical proteins and improve annotation for the D. vulgaris genome.
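The acceptance criterion described above, peptide identification in at least two of three independent LC-MS/MS runs per protein, can be expressed directly. The dict layout and the example ORF entries are illustrative assumptions.

```python
def confirmed(detections, min_runs=2):
    # detections maps each ORF id to the set of runs with >=1 peptide hit;
    # keep only ORFs identified in at least min_runs independent runs
    return sorted(orf for orf, runs in detections.items() if len(runs) >= min_runs)

# hypothetical detection table (ORF ids for illustration only)
hits = {
    "DVU0725": {"run1", "run2", "run3"},
    "DVU0024": {"run1", "run3"},
    "DVU9999": {"run2"},          # single-run hit: rejected as unconfirmed
}
```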
NASA Astrophysics Data System (ADS)
Duan, Yabo; Song, Chengtian
2016-12-01
Empirical mode decomposition (EMD) is a recently proposed nonlinear and nonstationary laser signal denoising method. A noisy signal is broken down using EMD into oscillatory components that are called intrinsic mode functions (IMFs). Thresholding-based denoising and correlation-based partial reconstruction of IMFs are the two main research directions for EMD-based denoising. Similar to other decomposition-based denoising approaches, EMD-based denoising methods require a reliable threshold to determine which IMFs are noise components and which IMFs are noise-free components. In this work, we propose a new approach in which each IMF is first denoised using EMD interval thresholding (EMD-IT), and then a robust thresholding process based on Spearman correlation coefficient is used for relevant modes selection. The proposed method tackles the problem using a thresholding-based denoising approach coupled with partial reconstruction of the relevant IMFs. Other traditional denoising methods, including correlation-based EMD partial reconstruction (EMD-Correlation), discrete Fourier transform and wavelet-based methods, are investigated to provide a comparison with the proposed technique. Simulation and test results demonstrate the superior performance of the proposed method when compared with the other methods.
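The Spearman-correlation step for selecting relevant modes can be sketched as follows: IMFs whose rank correlation with the observed signal is weak are treated as noise and dropped before reconstruction. The threshold value and the simple tie handling are assumptions, and the EMD decomposition itself is taken as given.

```python
import math
import random

def ranks(x):
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    for rank, i in enumerate(order):
        r[i] = float(rank)      # ties broken by index; proper tie handling omitted
    return r

def spearman(x, y):
    # Spearman rho = Pearson correlation of the rank vectors
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / math.sqrt(vx * vy)

def select_imfs(imfs, signal, rho_min=0.2):
    # keep modes strongly rank-correlated with the observed signal
    return [imf for imf in imfs if abs(spearman(imf, signal)) >= rho_min]

random.seed(1)
trend = [float(i) for i in range(60)]                 # signal-bearing mode
noise = [random.uniform(-0.2, 0.2) for _ in range(60)]  # noise-like mode
signal = [t + e for t, e in zip(trend, noise)]        # strictly increasing
kept = select_imfs([noise, trend], signal)
```

In the full method each kept IMF would first pass through interval thresholding (EMD-IT) before the partial reconstruction sums the selected modes.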
Inferring modules of functionally interacting proteins using the Bond Energy Algorithm
Watanabe, Ryosuke LA; Morett, Enrique; Vallejo, Edgar E
2008-01-01
Background Non-homology based methods such as phylogenetic profiles are effective for predicting functional relationships between proteins with no considerable sequence or structure similarity. Those methods rely heavily on traditional similarity metrics defined on pairs of phylogenetic patterns. Proteins do not interact exclusively in pairs, as the final biological function of a protein in the cellular context is often held by a group of proteins. In order to accurately infer modules of functionally interacting proteins, the consideration of not only direct but also indirect relationships is required. In this paper, we used the Bond Energy Algorithm (BEA) to predict functionally related groups of proteins. With BEA we create clusters of phylogenetic profiles based on the associations of the surrounding elements of the analyzed data, using a metric that considers linked relationships among elements in the data set. Results Using phylogenetic profiles obtained from the Cluster of Orthologous Groups of Proteins (COG) database, we conducted a series of clustering experiments using BEA to predict (upper level) relationships between profiles. We evaluated our results by comparing them with COG's functional categories and, further, with the experimentally determined functional relationships between proteins provided by the DIP and ECOCYC databases. Our results demonstrate that BEA is capable of predicting meaningful modules of functionally related proteins. BEA outperforms traditionally used clustering methods, such as k-means and hierarchical clustering, by predicting functional relationships between proteins with higher accuracy. Conclusion This study shows that the linked relationships of phylogenetic profiles obtained by BEA are useful for detecting functional associations between profiles and extending functional modules not found by traditional methods. BEA is capable of detecting relationships among phylogenetic patterns by linking them through a common element shared in
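The core BEA step, reordering a matrix so that similar columns end up adjacent, can be sketched with a greedy insertion that maximises "bond energy," the sum of products of horizontally adjacent entries. The tie-breaking rule and the toy profile matrix are assumptions of this sketch.

```python
def bond_energy(matrix, cols):
    # sum over rows of products of horizontally adjacent entries
    return sum(sum(row[cols[j]] * row[cols[j + 1]] for j in range(len(cols) - 1))
               for row in matrix)

def bea_order(matrix):
    # greedily insert each column at the position maximising bond energy
    n_cols = len(matrix[0])
    cols = [0]
    for c in range(1, n_cols):
        best = max(range(len(cols) + 1),
                   key=lambda pos: bond_energy(matrix, cols[:pos] + [c] + cols[pos:]))
        cols.insert(best, c)
    return cols

# toy phylogenetic profile matrix: columns 0 and 2 co-occur, as do 1 and 3
profiles = [
    [1, 0, 1, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
]
order = bea_order(profiles)
```

After reordering, co-occurring profiles sit next to each other, so clusters (candidate functional modules) can be read off as contiguous blocks, including indirect links through shared elements.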
Blouin, Christian; Boucher, Yan; Roger, Andrew J.
2003-01-01
Comparative sequence analysis has been used to study specific questions about the structure and function of proteins for many years. Here we propose a knowledge-based framework in which the maximum likelihood rate of evolution is used to quantify the level of constraint on the identity of a site. We demonstrate that site-rate mapping on 3D structures using datasets of rhodopsin-like G-protein receptors and α- and β-tubulins provides an excellent tool for pinpointing the functional features shared between orthologous and paralogous proteins. In addition, functional divergence within protein families can be inferred by examining the differences in the site rates, the differences in the chemical properties of the side chains or amino acid usage between aligned sites. Two novel analytical methods are introduced to characterize rate- independent functional divergence. These are tested using a dataset of two classes of HMG-CoA reductases for which only one class can perform both the forward and reverse reaction. We show that functionally divergent sites occur in a cluster of sites interacting with the catalytic residues and that this information should facilitate the design of experimental strategies to directly test functional properties of residues. PMID:12527789
Blouin, Christian; Boucher, Yan; Roger, Andrew J
2003-01-15
Comparative sequence analysis has been used to study specific questions about the structure and function of proteins for many years. Here we propose a knowledge-based framework in which the maximum likelihood rate of evolution is used to quantify the level of constraint on the identity of a site. We demonstrate that site-rate mapping on 3D structures using datasets of rhodopsin-like G-protein receptors and alpha- and beta-tubulins provides an excellent tool for pinpointing the functional features shared between orthologous and paralogous proteins. In addition, functional divergence within protein families can be inferred by examining the differences in the site rates, the differences in the chemical properties of the side chains or amino acid usage between aligned sites. Two novel analytical methods are introduced to characterize rate- independent functional divergence. These are tested using a dataset of two classes of HMG-CoA reductases for which only one class can perform both the forward and reverse reaction. We show that functionally divergent sites occur in a cluster of sites interacting with the catalytic residues and that this information should facilitate the design of experimental strategies to directly test functional properties of residues.
Structure-based function inference using protein family-specific fingerprints
Bandyopadhyay, Deepak; Huan, Jun; Liu, Jinze; Prins, Jan; Snoeyink, Jack; Wang, Wei; Tropsha, Alexander
2006-01-01
We describe a method to assign a protein structure to a functional family using family-specific fingerprints. Fingerprints represent amino acid packing patterns that occur in most members of a family but are rare in the background, a nonredundant subset of PDB; their information is additional to sequence alignments, sequence patterns, structural superposition, and active-site templates. Fingerprints were derived for 120 families in SCOP using Frequent Subgraph Mining. For a new structure, all occurrences of these family-specific fingerprints may be found by a fast algorithm for subgraph isomorphism; the structure can then be assigned to a family with a confidence value derived from the number of fingerprints found and their distribution in background proteins. In validation experiments, we infer the function of new members added to SCOP families and we discriminate between structurally similar, but functionally divergent TIM barrel families. We then apply our method to predict function for several structural genomics proteins, including orphan structures. Some predictions have been corroborated by other computational methods and some validated by subsequent functional characterization. PMID:16731985
Unleashing the power of meta-threading for evolution/structure-based function inference of proteins.
Brylinski, Michal
2013-01-01
Protein threading is widely used in the prediction of protein structure and the subsequent functional annotation. Most threading approaches employ similar criteria for the template identification for use in both protein structure and function modeling. Using structure similarity alone might result in a high false positive rate in protein function inference, which suggests that selecting functional templates should be subject to a different set of constraints. In this study, we extend the functionality of eThread, a recently developed approach to meta-threading, focusing on the optimal selection of functional templates. We optimized the selection of template proteins to cover a broad spectrum of protein molecular function: ligand, metal, inorganic cluster, protein, and nucleic acid binding. In large-scale benchmarks, we demonstrate that the recognition rates in identifying templates that bind molecular partners in similar locations are very high, typically 70-80%, at the expense of a relatively low false positive rate. eThread also provides useful insights into the chemical properties of binding molecules and the structural features of binding. For instance, the sensitivity in recognizing similar protein-binding interfaces is 58% at only 18% false positive rate. Furthermore, in comparative analysis, we demonstrate that meta-threading supported by machine learning outperforms single-threading approaches in functional template selection. We show that meta-threading effectively detects many facets of protein molecular function, even in a low-sequence identity regime. The enhanced version of eThread is freely available as a webserver and stand-alone software at http://www.brylinski.org/ethread.
Pragmatic inferences in high-functioning adults with autism and Asperger syndrome.
Pijnacker, Judith; Hagoort, Peter; Buitelaar, Jan; Teunisse, Jan-Pieter; Geurts, Bart
2009-04-01
Although people with autism spectrum disorders (ASD) often have severe problems with pragmatic aspects of language, little is known about their pragmatic reasoning. We carried out a behavioral study on high-functioning adults with autistic disorder (n = 11) and Asperger syndrome (n = 17) and matched controls (n = 28) to investigate whether they are capable of deriving scalar implicatures, which are generally considered to be pragmatic inferences. Participants were presented with underinformative sentences like "Some sparrows are birds". This sentence is logically true, but pragmatically inappropriate if the scalar implicature "Not all sparrows are birds" is derived. The present findings indicate that the combined ASD group was just as likely as controls to derive scalar implicatures, yet there was a difference between participants with autistic disorder and Asperger syndrome, suggesting a potential differentiation between these disorders in pragmatic reasoning. Moreover, our results suggest that verbal intelligence is a constraint for task performance in autistic disorder but not in Asperger syndrome.
Optical Aperture Synthesis Object's Information Extracting Based on Wavelet Denoising
NASA Astrophysics Data System (ADS)
Fan, W. J.; Lu, Y.
2006-10-01
Wavelet denoising is studied to improve the extraction of Fourier information of OAS (optical aperture synthesis) objects. Translation-invariant wavelet denoising, based on Donoho's wavelet soft-threshold denoising, is investigated to remove pseudo-Gibbs artifacts from soft-thresholded images. Extraction of OAS object information based on translation-invariant wavelet denoising is then studied. The study shows that wavelet threshold denoising can improve the precision and repeatability of extracting object information from interferograms, and that information extraction with translation-invariant wavelet denoising is better than with soft-threshold wavelet denoising.
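Translation-invariant (cycle-spinning) soft-threshold denoising in the spirit of Donoho's method can be sketched with a one-level Haar transform: denoise every circular shift of the signal, unshift, and average, which suppresses the pseudo-Gibbs artifacts of plain soft thresholding. The single decomposition level, the Haar basis, and the threshold value are simplifying assumptions.

```python
import math
import random

def haar_fwd(x):
    # one-level Haar analysis: pairwise averages (approx) and differences (detail)
    a = [(x[2 * i] + x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]
    return a, d

def haar_inv(a, d):
    out = []
    for ai, di in zip(a, d):
        out += [ai + di, ai - di]
    return out

def soft(v, t):
    # soft thresholding: shrink toward zero by t
    return math.copysign(max(abs(v) - t, 0.0), v)

def denoise_once(x, t, shift):
    y = x[shift:] + x[:shift]                    # circular shift
    a, d = haar_fwd(y)
    y = haar_inv(a, [soft(di, t) for di in d])   # threshold detail band only
    return y[-shift:] + y[:-shift] if shift else y

def ti_denoise(x, t):
    # cycle spinning: average the denoised results over all circular shifts
    n = len(x)
    acc = [0.0] * n
    for s in range(n):
        for i, v in enumerate(denoise_once(x, t, s)):
            acc[i] += v
    return [v / n for v in acc]

random.seed(0)
clean = [1.0] * 8 + [5.0] * 8                    # step edge: worst case for Gibbs
noisy = [c + random.gauss(0.0, 0.2) for c in clean]
restored = ti_denoise(noisy, 0.25)

def mse(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b)) / len(a)
```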
Zhang, Weiwen; Culley, David E.; Gritsenko, Marina A.; Moore, Ronald J.; Nie, Lei; Scholten, Johannes C.; Petritis, Konstantinos; Strittmatter, Eric F.; Camp, David G.; Smith, Richard D.; Brockman, Fred J.
2006-11-03
In a previous study, the whole-genome gene expression profiles of D. vulgaris in response to oxidative stress and heat shock were determined. The results showed that 24-28% of the responsive genes encoded hypothetical proteins that have not been experimentally characterized or whose function cannot be deduced by simple sequence comparison. To further explore the protective mechanisms employed by D. vulgaris against oxidative stress and heat shock, this study attempted to infer functions of these hypothetical proteins by phylogenomic profiling along with detailed sequence comparison against various publicly available databases. By this approach we were able to assign possible functions to 25 responsive hypothetical proteins. The findings included that DVU0725, induced by oxidative stress, may be involved in lipopolysaccharide biosynthesis, implying that the alteration of lipopolysaccharide on the cell surface might serve as a mechanism against oxidative stress in D. vulgaris. In addition, two responsive proteins, DVU0024, encoding a putative transcriptional regulator, and DVU1670, encoding a predicted redox protein, shared co-evolution patterns with rubrerythrin in Archaeoglobus fulgidus and Clostridium perfringens, respectively, implying that they might be part of the stress response and protective systems in D. vulgaris. The study demonstrated that phylogenomic profiling is a useful tool in the interpretation of experimental genomics data, and also provided further insight into the cellular response to oxidative stress and heat shock in D. vulgaris.
Validi, AbdoulAhad
2014-03-01
This study introduces a non-intrusive approach in the context of low-rank separated representation to construct a surrogate of high-dimensional stochastic functions, e.g., PDEs/ODEs, in order to decrease the computational cost of Markov Chain Monte Carlo simulations in Bayesian inference. The surrogate model is constructed via a regularized alternative least-square regression with Tikhonov regularization using a roughening matrix computing the gradient of the solution, in conjunction with a perturbation-based error indicator to detect optimal model complexities. The model approximates a vector of a continuous solution at discrete values of a physical variable. The required number of random realizations to achieve a successful approximation linearly depends on the function dimensionality. The computational cost of the model construction is quadratic in the number of random inputs, which potentially tackles the curse of dimensionality in high-dimensional stochastic functions. Furthermore, this vector-valued separated representation-based model, in comparison to the available scalar-valued case, leads to a significant reduction in the cost of approximation by an order of magnitude equal to the vector size. The performance of the method is studied through its application to three numerical examples including a 41-dimensional elliptic PDE and a 21-dimensional cavity flow.
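One building block the abstract describes, a Tikhonov-regularised least-squares solve with a first-difference "roughening" matrix L, can be sketched as minimising ||Ax − b||² + λ||Lx||² via the normal equations. The alternating least-squares outer loop of the separated representation is omitted, and all sizes here are illustrative.

```python
import random

def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    Bt = transpose(B)
    return [[sum(a * b for a, b in zip(row, col)) for col in Bt] for row in A]

def solve(A, b):
    # Gaussian elimination with partial pivoting
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        piv = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[piv] = M[piv], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def tikhonov(A, b, lam):
    # solve (A^T A + lam * L^T L) x = A^T b, with L the first-difference matrix
    n = len(A[0])
    L = [[0.0] * n for _ in range(n - 1)]
    for i in range(n - 1):
        L[i][i], L[i][i + 1] = -1.0, 1.0
    At = transpose(A)
    M = matmul(At, A)
    P = matmul(transpose(L), L)
    for i in range(n):
        for j in range(n):
            M[i][j] += lam * P[i][j]
    rhs = [sum(At[i][j] * b[j] for j in range(len(b))) for i in range(n)]
    return solve(M, rhs)

random.seed(0)
n = 20
A = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # smoothing setup
b = [((i / (n - 1)) ** 2) + random.gauss(0.0, 0.1) for i in range(n)]

def roughness(x):
    return sum((x[i + 1] - x[i]) ** 2 for i in range(len(x) - 1))

x0 = tikhonov(A, b, 0.0)   # lam = 0 reproduces the data
x1 = tikhonov(A, b, 5.0)   # lam > 0 penalises the gradient, smoothing the fit
```

Penalising the discrete gradient is what the abstract's roughening matrix achieves: larger λ trades data fidelity for smoothness of the surrogate.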
Inference of Functionally-Relevant N-acetyltransferase Residues Based on Statistical Correlations.
Neuwald, Andrew F; Altschul, Stephen F
2016-12-01
Over evolutionary time, members of a superfamily of homologous proteins sharing a common structural core diverge into subgroups filling various functional niches. At the sequence level, such divergence appears as correlations that arise from residue patterns distinct to each subgroup. Such a superfamily may be viewed as a population of sequences corresponding to a complex, high-dimensional probability distribution. Here we model this distribution as hierarchical interrelated hidden Markov models (hiHMMs), which describe these sequence correlations implicitly. By characterizing such correlations one may hope to obtain information regarding functionally-relevant properties that have thus far evaded detection. To do so, we infer a hiHMM distribution from sequence data using Bayes' theorem and Markov chain Monte Carlo (MCMC) sampling, which is widely recognized as the most effective approach for characterizing a complex, high dimensional distribution. Other routines then map correlated residue patterns to available structures with a view to hypothesis generation. When applied to N-acetyltransferases, this reveals sequence and structural features indicative of functionally important, yet generally unknown biochemical properties. Even for sets of proteins for which nothing is known beyond unannotated sequences and structures, this can lead to helpful insights. We describe, for example, a putative coenzyme-A-induced-fit substrate binding mechanism mediated by arginine residue switching between salt bridge and π-π stacking interactions. A suite of programs implementing this approach is available (psed.igs.umaryland.edu).
Constrained parametric model for simultaneous inference of two cumulative incidence functions.
Shi, Haiwen; Cheng, Yu; Jeong, Jong-Hyeon
2013-01-01
We propose a parametric regression model for the cumulative incidence functions (CIFs) commonly used for competing risks data. The model adopts a modified logistic model as the baseline CIF and a generalized odds-rate model for covariate effects, and it explicitly takes into account the constraint that a subject with any given prognostic factors should eventually fail from one of the causes such that the asymptotes of the CIFs should add up to one. This constraint intrinsically holds in a nonparametric analysis without covariates, but is easily overlooked in a semiparametric or parametric regression setting. We hence model the CIF from the primary cause assuming the generalized odds-rate transformation and the modified logistic function as the baseline CIF. Under the additivity constraint, the covariate effects on the competing cause are modeled by a function of the asymptote of the baseline distribution and the covariate effects on the primary cause. The inference procedure is straightforward by using the standard maximum likelihood theory. We demonstrate desirable finite-sample performance of our model by simulation studies in comparison with existing methods. Its practical utility is illustrated in an analysis of a breast cancer dataset to assess the treatment effect of tamoxifen, adjusting for age and initial pathological tumor size, on breast cancer recurrence that is subject to dependent censoring by second primary cancers and deaths.
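The additivity constraint at the heart of the model, that the asymptotes of the two CIFs must sum to one because every subject eventually fails from one of the causes, can be illustrated numerically. Exponential-shaped CIFs stand in here for the paper's modified-logistic baseline and generalized odds-rate link; they are assumptions chosen only to make the constraint visible.

```python
import math

def cif1(t, p, a):
    # cumulative incidence for the primary cause; asymptote p as t -> infinity
    return p * (1.0 - math.exp(-a * t))

def cif2(t, p, b):
    # competing cause; asymptote forced to 1 - p by the additivity constraint
    return (1.0 - p) * (1.0 - math.exp(-b * t))

p, a, b = 0.6, 0.5, 0.2
total_late = cif1(50.0, p, a) + cif2(50.0, p, b)  # approaches 1 at late time
```

A semiparametric fit that models each CIF separately can silently violate this constraint; building the (1 − p) factor into the competing-cause CIF, as above, is the structural fix the abstract describes.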
Inference of Functionally-Relevant N-acetyltransferase Residues Based on Statistical Correlations
Neuwald, Andrew F.
2016-01-01
Over evolutionary time, members of a superfamily of homologous proteins sharing a common structural core diverge into subgroups filling various functional niches. At the sequence level, such divergence appears as correlations that arise from residue patterns distinct to each subgroup. Such a superfamily may be viewed as a population of sequences corresponding to a complex, high-dimensional probability distribution. Here we model this distribution as hierarchical interrelated hidden Markov models (hiHMMs), which describe these sequence correlations implicitly. By characterizing such correlations one may hope to obtain information regarding functionally-relevant properties that have thus far evaded detection. To do so, we infer a hiHMM distribution from sequence data using Bayes’ theorem and Markov chain Monte Carlo (MCMC) sampling, which is widely recognized as the most effective approach for characterizing a complex, high dimensional distribution. Other routines then map correlated residue patterns to available structures with a view to hypothesis generation. When applied to N-acetyltransferases, this reveals sequence and structural features indicative of functionally important, yet generally unknown biochemical properties. Even for sets of proteins for which nothing is known beyond unannotated sequences and structures, this can lead to helpful insights. We describe, for example, a putative coenzyme-A-induced-fit substrate binding mechanism mediated by arginine residue switching between salt bridge and π-π stacking interactions. A suite of programs implementing this approach is available (psed.igs.umaryland.edu). PMID:28002465
Denoising Medical Images using Calculus of Variations.
Kohan, Mahdi Nakhaie; Behnam, Hamid
2011-07-01
We propose a method for medical image denoising using the calculus of variations and local variance estimation with shaped windows. The method reduces additive noise while preserving small patterns and edges of images. A pyramidal structure-texture decomposition of images is used to separate noise and texture components based on local variance measures. The experimental results show that the proposed method yields visual improvement as well as better SNR, RMSE and PSNR than common medical image denoising methods. Experimental results in denoising a sample magnetic resonance image show that SNR, PSNR and RMSE improved by 19, 9 and 21 percent, respectively.
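The windowed local-variance measure used to separate noise from texture can be sketched simply: pixels whose local variance well exceeds a flat-region noise estimate are classified as texture or edge and preserved. The window size and the toy image are assumptions of this sketch, not the paper's shaped windows.

```python
def local_variance(img, i, j, k=1):
    # variance over the (2k+1) x (2k+1) window centred at (i, j), clipped at borders
    vals = [img[r][c]
            for r in range(max(0, i - k), min(len(img), i + k + 1))
            for c in range(max(0, j - k), min(len(img[0]), j + k + 1))]
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

# toy image: flat left half, strong checkerboard texture on the right half
img = [[0.0] * 4 + [float((r + c) % 2) * 10.0 for c in range(4)] for r in range(8)]
flat_var = local_variance(img, 4, 1)      # flat region: variance ~ noise level
texture_var = local_variance(img, 4, 6)   # textured region: much larger variance
```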
Büssis, Dirk; Stintzi, Annick; Schaller, Andreas; Kopka, Joachim; Altmann, Thomas
2005-01-01
The gene family of subtilisin-like serine proteases (subtilases) in Arabidopsis thaliana comprises 56 members, divided into six distinct subfamilies. Whereas the members of five subfamilies are similar to pyrolysins, two genes share stronger similarity to animal kexins. Mutant screens confirmed 144 T-DNA insertion lines with knockouts for 55 out of the 56 subtilases. Apart from SDD1, none of the confirmed homozygous mutants revealed any obvious visible phenotypic alteration during growth under standard conditions. Apart from this specific case, forward genetics gave us no hints about the function of the individual 54 non-characterized subtilase genes. Therefore, the main objective of our work was to overcome the shortcomings of the forward genetic approach and to infer alternative experimental approaches by using an integrative bioinformatics and biological approach. Computational analyses based on transcriptional co-expression and co-response pattern revealed at least two expression networks, suggesting that functional redundancy may exist among subtilases with limited similarity. Furthermore, two hubs were identified, which may be involved in signalling or may represent higher-order regulatory factors involved in responses to environmental cues. A particular enrichment of co-regulated genes with metabolic functions was observed for four subtilases possibly representing late responsive elements of environmental stress. The kexin homologs show stronger associations with genes of transcriptional regulation context. Based on the analyses presented here and in accordance with previously characterized subtilases, we propose three main functions of subtilases: involvement in (i) control of development, (ii) protein turnover, and (iii) action as downstream components of signalling cascades. Supplemental material is available in the Plant Subtilase Database (PSDB) (http://csbdb.mpimp-golm.mpg.de/psdb.html) , as well as from the CSB.DB (http
Enault, François; Suhre, Karsten; Poirot, Olivier; Abergel, Chantal; Claverie, Jean-Michel
2004-07-01
Phydbac (phylogenomic display of bacterial genes) implemented a method of phylogenomic profiling using a distance measure based on normalized BLAST scores. This method was able to increase the predictive power of phylogenomic profiling by about 25% when compared to the classical approach based on Hamming distances. Here we present a major extension of Phydbac (named here Phydbac2) that extends both the concept and the functionality of the original web service. While phylogenomic profiles remain the central focus of Phydbac2, it now integrates chromosomal proximity and gene fusion analyses as two additional non-similarity-based indicators for inferring pairwise gene functional relationships. Moreover, all presently available (January 2004) fully sequenced bacterial genomes and those of three lower eukaryotes are now included in the profiling process, thus increasing the initial number of reference genomes (71 in Phydbac) to 150 in Phydbac2. Using the KEGG metabolic pathway database as a benchmark, we show that the predictive power of Phydbac2 is improved by 27% over the previous version. This gain is accounted for, on the one hand, by the increased number of reference genomes (11%) and, on the other hand, by the inclusion of chromosomal proximity in the distance measure (16%). The expanded functionality of Phydbac2 now allows the user to query more than 50 different genomes, including at least one member of each major bacterial group, most major pathogens and potential bio-terrorism agents. The search for co-evolving genes based on consensus profiles from multiple organisms, the display of Phydbac2 profiles side by side with COG information, the inclusion of KEGG metabolic pathway maps, the production of chromosomal proximity maps, and the possibility of collecting and processing results from different Phydbac queries in a common shopping cart are the main new features of Phydbac2. The Phydbac2 web server is available at http://igs-server.cnrs-mrs.fr/phydbac/.
Inference of gene function based on gene fusion events: the rosetta-stone method.
Suhre, Karsten
2007-01-01
The method described in this chapter can be used to infer putative functional links between two proteins. The basic idea is based on the principle of "guilt by association." It is assumed that two proteins which are found to be transcribed as a single transcript in one (or several) genomes are likely to be functionally linked, for example by acting in the same metabolic pathway or by forming a multiprotein complex. This method is of particular interest for studying genes that exhibit no, or only remote, homologies with already well-characterized proteins. Combined with other non-homology-based methods, gene fusion events may yield valuable information for hypothesis building on protein function, and may guide experimental characterization of the target protein, for example by suggesting potential ligands or binding partners. This chapter uses the FusionDB database (http://www.igs.cnrs-mrs.fr/FusionDB/) as its source of information. FusionDB provides a characterization of a large number of gene fusion events by means of multiple sequence alignments. Orthologous genes are included to yield a comprehensive view of the structure of a gene fusion event. Phylogenetic tree reconstruction is provided to evaluate the history of a gene fusion event, and three-dimensional protein structure information is used, where available, to further characterize the nature of the gene fusion. For genes not included in FusionDB, instructions are given on how to generate a similar type of information, based solely on the publicly available web tools listed here.
Comparative internal anatomy of Staurozoa (Cnidaria), with functional and evolutionary inferences.
Miranda, Lucília S; Collins, Allen G; Hirano, Yayoi M; Mills, Claudia E; Marques, Antonio C
2016-01-01
Comparative efforts to understand the body plan evolution of stalked jellyfishes are scarce. Most characters, and particularly internal anatomy, have neither been explored for the class Staurozoa, nor broadly applied in its taxonomy and classification. Recently, a molecular phylogenetic hypothesis was derived for Staurozoa, allowing for the first broad histological comparative study of staurozoan taxa. This study uses comparative histology to describe the body plans of nine staurozoan species, inferring functional and evolutionary aspects of internal morphology based on the current phylogeny of Staurozoa. We document rarely-studied structures, such as ostia between radial pockets, intertentacular lobules, gametoducts, pad-like adhesive structures, and white spots of nematocysts (the last four newly proposed putative synapomorphies for Staurozoa). Two different regions of nematogenesis are documented. This work falsifies the view that the peduncle region of stauromedusae only retains polypoid characters; metamorphosis from stauropolyp to stauromedusa occurs both at the apical region (calyx) and basal region (peduncle). Intertentacular lobules, observed previously in only a small number of species, are shown to be widespread. Similarly, gametoducts were documented in all analyzed genera, both in males and females, thereby elucidating gamete release. Finally, ostia connecting adjacent gastric radial pockets appear to be universal for Staurozoa. Detailed histological studies of medusozoan polyps and medusae are necessary to further understand the relationships between staurozoan features and those of other medusozoan cnidarians.
Fracture in teeth: a diagnostic for inferring bite force and tooth function.
Lee, James J-W; Constantino, Paul J; Lucas, Peter W; Lawn, Brian R
2011-11-01
Teeth are brittle and highly susceptible to cracking. We propose that observations of such cracking can be used as a diagnostic tool for predicting bite force and inferring tooth function in living and fossil mammals. Laboratory tests on model tooth structures and extracted human teeth in simulated biting identify the principal fracture modes in enamel. Examination of museum specimens reveals the presence of similar fractures in a wide range of vertebrates, suggesting that cracks extended during ingestion or mastication. The use of 'fracture mechanics' from materials engineering provides elegant relations for quantifying critical bite forces in terms of characteristic tooth size and enamel thickness. The role of enamel microstructure in determining how cracks initiate and propagate within the enamel (and beyond) is discussed. The picture emerges of teeth as damage-tolerant structures, full of internal weaknesses and defects and yet able to contain the expansion of seemingly precarious cracks and fissures within the enamel shell. How the findings impact on dietary pressures forms an undercurrent of the study.
Rubenson, Jonas
2016-01-01
Owing to their cursorial background, ostriches (Struthio camelus) walk and run with high metabolic economy, can reach very fast running speeds and quickly execute cutting manoeuvres. These capabilities are believed to be a result of their ability to coordinate muscles to take advantage of specialized passive limb structures. This study aimed to infer the functional roles of ostrich pelvic limb muscles during gait. Existing gait data were combined with a newly developed musculoskeletal model to generate simulations of ostrich walking and running that predict muscle excitations, force and mechanical work. Consistent with previous avian electromyography studies, predicted excitation patterns showed that individual muscles tended to be excited primarily during only stance or swing. Work and force estimates show that ostrich gaits are partially hip-driven with the bi-articular hip–knee muscles driving stance mechanics. Conversely, the knee extensors acted as brakes, absorbing energy. The digital extensors generated large amounts of both negative and positive mechanical work, with increased magnitudes during running, providing further evidence that ostriches make extensive use of tendinous elastic energy storage to improve economy. The simulations also highlight the need to carefully consider non-muscular soft tissues that may play a role in ostrich gait. PMID:27146688
A Decomposition Framework for Image Denoising Algorithms.
Ghimpeteanu, Gabriela; Batard, Thomas; Bertalmio, Marcelo; Levine, Stacey
2016-01-01
In this paper, we consider an image decomposition model that provides a novel framework for image denoising. The model computes the components of the image to be processed in a moving frame that encodes its local geometry (directions of gradients and level lines). Then, the strategy we develop is to denoise the components of the image in the moving frame in order to preserve its local geometry, which would have been more affected if processing the image directly. Experiments on a whole image database tested with several denoising methods show that this framework can provide better results than denoising the image directly, in terms of both the peak signal-to-noise ratio and structural similarity index metrics.
Dichoptic Metacontrast Masking Functions to Infer Transmission Delay in Optic Neuritis
Bruchmann, Maximilian; Korsukewitz, Catharina; Krämer, Julia; Wiendl, Heinz; Meuth, Sven G.
2016-01-01
Optic neuritis (ON) has detrimental effects on the transmission of neuronal signals generated at the earliest stages of visual information processing. The amount, as well as the speed, of transmitted visual signals is impaired. Measurements of visual evoked potentials (VEP) are often implemented in clinical routine. However, the specificity of VEPs is limited because multiple cortical areas are involved in the generation of P1 potentials, including feedback signals from higher cortical areas. Here, we show that dichoptic metacontrast masking can be used to estimate the temporal delay caused by ON. A group of 15 patients with unilateral ON, nine of whom had sufficient visual acuity and volunteered to participate, and a group of healthy control subjects (N = 8) were presented with flashes of gray disks to one eye and flashes of gray annuli to the corresponding retinal location of the other eye. By asking subjects to report the subjective visibility of the target (i.e. the disk) while varying the stimulus onset asynchrony (SOA) between disk and annulus, we obtained typical U-shaped masking functions. From these functions we inferred the critical SOAmax at which the mask (i.e. the annulus) optimally suppressed the visibility of the target. ON-associated transmission delay was estimated by comparing the SOAmax between conditions in which the disk had been presented to the affected and the mask to the other eye, and vice versa. SOAmax differed on average by 28 ms, suggesting a reduction in transmission speed in the affected eye. Compared to previously reported methods assessing perceptual consequences of altered neuronal transmission speed the presented method is more accurate as it is not limited by the observers’ ability to judge subtle variations in perceived synchrony. PMID:27711139
Duchesne, Thierry; Fortin, Daniel; Rivest, Louis-Paul
2015-01-01
Animal movement has a fundamental impact on population and community structure and dynamics. Biased correlated random walks (BCRW) and step selection functions (SSF) are commonly used to study movements. Because no studies have contrasted the parameters and the statistical properties of their estimators for models constructed under these two Lagrangian approaches, it remains unclear whether or not they allow for similar inference. First, we used the Weak Law of Large Numbers to demonstrate that the log-likelihood function for estimating the parameters of BCRW models can be approximated by the log-likelihood of SSFs. Second, we illustrated the link between the two approaches by fitting BCRW with maximum likelihood and with SSF to simulated movement data in virtual environments and to the trajectory of bison (Bison bison L.) trails in natural landscapes. Using simulated and empirical data, we found that the parameters of a BCRW estimated directly from maximum likelihood and by fitting an SSF were remarkably similar. Movement analysis is increasingly used as a tool for understanding the influence of landscape properties on animal distribution. In the rapidly developing field of movement ecology, management and conservation biologists must decide which method they should implement to accurately assess the determinants of animal movement. We showed that BCRW and SSF can provide similar insights into the environmental features influencing animal movements. Both techniques have advantages. BCRW has already been extended to allow for multi-state modeling. Unlike BCRW, however, SSF can be estimated using most statistical packages, it can simultaneously evaluate habitat selection and movement biases, and can easily integrate a large number of movement taxes at multiple scales. SSF thus offers a simple, yet effective, statistical technique to identify movement taxis. PMID:25898019
Denoising of gravitational wave signals via dictionary learning algorithms
NASA Astrophysics Data System (ADS)
Torres-Forné, Alejandro; Marquina, Antonio; Font, José A.; Ibáñez, José M.
2016-12-01
Gravitational wave astronomy has become a reality after the historical detections accomplished during the first observing run of the two advanced LIGO detectors. In the following years, the number of detections is expected to increase significantly with the full commissioning of the advanced LIGO, advanced Virgo and KAGRA detectors. The development of sophisticated data analysis techniques to improve the opportunities of detection for low signal-to-noise-ratio events is, hence, a most crucial effort. In this paper, we present one such technique, dictionary-learning algorithms, which have been extensively developed in the last few years and successfully applied mostly in the context of image processing. However, to the best of our knowledge, such algorithms have not yet been employed to denoise gravitational wave signals. By building dictionaries from numerical relativity templates of both binary black holes mergers and bursts of rotational core collapse, we show how machine-learning algorithms based on dictionaries can also be successfully applied for gravitational wave denoising. We use a subset of signals from both catalogs, embedded in nonwhite Gaussian noise, to assess our techniques with a large sample of tests and to find the best model parameters. The application of our method to the actual signal GW150914 shows promising results. Dictionary-learning algorithms could be a complementary addition to the gravitational wave data analysis toolkit. They may be used to extract signals from noise and to infer physical parameters if the data are in good enough agreement with the morphology of the dictionary atoms.
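The sparse-approximation step underlying such dictionary-based denoising can be illustrated with a minimal greedy matching pursuit over a fixed toy dictionary of sinusoidal atoms; this is a sketch of the general idea, not the paper's learned numerical-relativity dictionary, and all parameters below are illustrative:

```python
import numpy as np

def matching_pursuit(signal, atoms, n_nonzero=2):
    """Greedy sparse coding: approximate `signal` with `n_nonzero`
    unit-norm dictionary atoms (rows of `atoms`)."""
    residual = signal.astype(float).copy()
    approx = np.zeros_like(residual)
    for _ in range(n_nonzero):
        corr = atoms @ residual          # correlation with each atom
        k = np.argmax(np.abs(corr))      # best-matching atom
        approx += corr[k] * atoms[k]
        residual -= corr[k] * atoms[k]
    return approx

# Toy dictionary of sinusoidal atoms standing in for waveform templates.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 128, endpoint=False)
atoms = np.array([np.sin(2 * np.pi * k * t) for k in range(1, 11)])
atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)
clean = np.sin(2 * np.pi * 3 * t)
noisy = clean + rng.normal(scale=0.5, size=t.size)
denoised = matching_pursuit(noisy, atoms, n_nonzero=1)
```

Because the noisy signal is well represented by a single atom, the sparse approximation discards most of the noise while retaining the underlying waveform.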
Yoon, Ju Young; Brown, Roger L
2014-01-01
Cross-lagged panel analysis (CLPA) is a method of examining one-way or reciprocal causal inference between longitudinally changing variables. It has been used in the social sciences for many years, but not much in nursing research. This article introduces the conceptual and statistical background of CLPA and provides an exemplar of CLPA that examines the reciprocal causal relationship between depression and cognitive function over time in older adults. The 2-year cross-lagged effects of depressive symptoms (T1) on cognitive function (T2) and of cognitive function (T1) on depressive symptoms (T2) were significant, which demonstrated a reciprocal causal relationship between cognitive function and depressive mood over time. Although CLPA is a methodologically strong approach for examining reciprocal causal inferences over time, it is necessary to consider potential sources of spuriousness that could lead to false causal relationships, as well as a reasonable time frame for detecting change in the variables.
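The core of a two-wave CLPA reduces to a pair of lagged regressions, each predicting a wave-2 variable from both wave-1 variables. A minimal sketch on simulated data (the coefficients below are hypothetical, chosen only to mimic a reciprocal depression-cognition relationship, and full structural-equation treatments add measurement models on top of this):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Simulate two waves of depression (d) and cognition (c) with a true
# cross-lagged effect of d1 on c2 (hypothetical coefficients).
d1 = rng.normal(size=n)
c1 = 0.3 * d1 + rng.normal(size=n)
d2 = 0.6 * d1 + 0.1 * c1 + rng.normal(scale=0.5, size=n)
c2 = 0.6 * c1 - 0.4 * d1 + rng.normal(scale=0.5, size=n)

def lagged_coefs(y2, y1, x1):
    """Regress a wave-2 outcome on both wave-1 variables; return the
    (autoregressive, cross-lagged) coefficients."""
    X = np.column_stack([np.ones(len(y1)), y1, x1])
    beta, *_ = np.linalg.lstsq(X, y2, rcond=None)
    return beta[1], beta[2]

auto_c, cross_d_on_c = lagged_coefs(c2, c1, d1)  # d1 -> c2 path
auto_d, cross_c_on_d = lagged_coefs(d2, d1, c1)  # c1 -> d2 path
```

Comparing the two cross-lagged coefficients (here `cross_d_on_c` and `cross_c_on_d`) against their standard errors is what supports one-way versus reciprocal causal interpretations.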
Nonlocal means image denoising using orthogonal moments.
Kumar, Ahlad
2015-09-20
An image denoising method in the moment domain is proposed. The method develops and evaluates a modified nonlocal means (NLM) algorithm in which neighborhood similarity is evaluated using Krawtchouk moments. The results of the proposed denoising method have been validated using well-known quality measures: peak signal-to-noise ratio (PSNR), the structural similarity (SSIM) index, and the blind/referenceless image spatial quality evaluator (BRISQUE). The denoising algorithm has been evaluated for synthetic and real clinical images contaminated by Gaussian, Poisson, and Rician noise. The algorithm performs well compared to Zernike-based denoising, as indicated by the PSNR, SSIM, and BRISQUE scores of the denoised images, with improvements of 3.1 dB, 0.1285, and 4.23, respectively. Further, comparative analysis of the proposed work with existing techniques has also been performed, and the results are competitive in terms of PSNR, SSIM, and BRISQUE scores when evaluated for varying levels of noise.
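The plain NLM baseline that such moment-based variants modify can be sketched in a few lines. This version compares raw pixel patches rather than Krawtchouk moment features, and the window sizes and filtering parameter `h` are illustrative:

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.4):
    """Minimal nonlocal means: each pixel becomes a weighted average of
    pixels in a search window, weighted by how similar their surrounding
    patches are (Gaussian decay of mean squared patch distance)."""
    p, s = patch // 2, search // 2
    pad = np.pad(img, p + s, mode='reflect')
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            ci, cj = i + p + s, j + p + s
            ref = pad[ci - p:ci + p + 1, cj - p:cj + p + 1]
            weights, values = [], []
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    cand = pad[ci + di - p:ci + di + p + 1,
                               cj + dj - p:cj + dj + p + 1]
                    d2 = np.mean((ref - cand) ** 2)
                    weights.append(np.exp(-d2 / h ** 2))
                    values.append(pad[ci + di, cj + dj])
            out[i, j] = np.average(values, weights=weights)
    return out

# Demo: a noisy step image; flat regions are averaged, the edge is kept.
rng = np.random.default_rng(1)
clean = np.zeros((16, 16))
clean[:, 8:] = 1.0
noisy = clean + rng.normal(scale=0.3, size=clean.shape)
denoised = nlm_denoise(noisy)
```

Replacing the raw-patch distance `d2` with a distance between moment feature vectors is the kind of modification the abstract describes.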
[Curvelet denoising algorithm for medical ultrasound image based on adaptive threshold].
Zhuang, Zhemin; Yao, Weike; Yang, Jinyao; Li, FenLan; Yuan, Ye
2014-11-01
Traditional denoising algorithms for ultrasound images lose many details and much weak edge information when suppressing speckle noise. A new adaptive-threshold denoising algorithm based on the curvelet transform is proposed in this paper. The algorithm utilizes differences in the coefficients' local variance between texture and smooth regions in each layer of the ultrasound image to define fuzzy regions and membership functions. Finally, the adaptive threshold determined by the membership function is used to denoise the ultrasound image. Experiments show that the algorithm can reduce speckle noise effectively while retaining the detail information of the original image, thus greatly enhancing the performance of B-mode ultrasound instruments.
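The idea of a membership-function-driven adaptive threshold can be sketched independently of the curvelet transform: shrink transform coefficients with a threshold that a local-variance-based fuzzy membership lowers in textured regions and keeps high in smooth ones. The membership form below is an assumption for illustration, not the paper's:

```python
import numpy as np

def soft_threshold(c, t):
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def adaptive_threshold(coeffs, base_t, win=5):
    """Shrink transform coefficients with a spatially varying threshold:
    high local variance suggests texture/edges (threshold lowered),
    low local variance suggests a smooth region (full threshold)."""
    k = win // 2
    pad = np.pad(coeffs, k, mode='reflect')
    var = np.empty_like(coeffs, dtype=float)
    for i in range(coeffs.shape[0]):
        for j in range(coeffs.shape[1]):
            var[i, j] = pad[i:i + win, j:j + win].var()
    # Fuzzy membership in [0, 1]: near 1 = smooth region, near 0 = texture.
    m = 1.0 - var / (var + var.mean() + 1e-12)
    return soft_threshold(coeffs, base_t * m)

# Demo coefficient map: small noisy values (smooth region) on the left,
# a strong checkerboard "texture" on the right.
rng = np.random.default_rng(2)
coeffs = np.zeros((8, 16))
coeffs[:, :8] = rng.normal(scale=0.1, size=(8, 8))
i, j = np.indices((8, 8))
coeffs[:, 8:] = np.where((i + j) % 2 == 0, 1.0, -1.0)
shrunk = adaptive_threshold(coeffs, base_t=0.5)
```

The smooth-region coefficients are driven to zero while the textured coefficients survive, which is the detail-preserving behavior the abstract describes.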
Function inferences from a molecular structural model of bacterial ParE toxin
Barbosa, Luiz Carlos Bertucci; Garrido, Saulo Santesso; Garcia, Anderson; Delfino, Davi Barbosa; Marchetto, Reinaldo
2010-01-01
Toxin-antitoxin (TA) systems contribute to plasmid stability by a mechanism that relies on the differential stabilities of the toxin and antitoxin proteins and leads to the killing of daughter bacteria that did not receive a plasmid copy at cell division. ParE is the toxic component of a TA system and constitutes, along with RelE, an important class of bacterial toxins called the RelE/ParE superfamily. For the ParE toxin, no crystallographic structure is available so far, and the few in vitro studies available demonstrated that the target of toxin activity is E. coli DNA gyrase. Here, a 3D model of the E. coli ParE toxin was built by homology modeling using MODELLER, a program for comparative modeling. The model was energy minimized with CHARMM and validated using the PROCHECK and VERIFY3D programs. Ramachandran plot analysis showed that the proportion of residues falling into the most favored and allowed regions was 96.8%. A structural similarity search using the DALI server returned the RelE and YoeB families as the best matches. The model also showed similarities with other microbial ribonucleases, but with lower scores. A possible homologous deep-cleft active site was identified in the model using the CASTp program. Additional studies to investigate nuclease activity in members of the ParE family, as well as to confirm the replication-inhibitory activity, are needed. The predicted model allows initial inferences about the unexplored 3D structure of the ParE toxin and may be further used in the rational design of molecules for structure-function studies. PMID:20975905
Image denoising using nonsubsampled shearlet transform and twin support vector machines.
Yang, Hong-Ying; Wang, Xiang-Yang; Niu, Pan-Pan; Liu, Yang-Cheng
2014-09-01
Denoising of images is one of the most basic tasks of image processing. It is a challenging task to design an edge/texture-preserving image denoising scheme. The nonsubsampled shearlet transform (NSST) is an effective multi-scale and multi-direction analysis method; it can not only exactly compute the shearlet coefficients based on a multiresolution analysis, but also provide a nearly optimal approximation for a piecewise smooth function. Based on the NSST, a new edge/texture-preserving image denoising method using twin support vector machines (TSVMs) is proposed in this paper. Firstly, the noisy image is decomposed into different subbands of frequency and orientation responses using the NSST. Secondly, the feature vector for a pixel in the noisy image is formed from the spatial geometric regularity in the NSST domain, and the TSVM model is obtained by training. Then the NSST detail coefficients are divided into information-related coefficients and noise-related ones by the trained TSVM model. Finally, the detail subbands of NSST coefficients are denoised using the adaptive threshold. Extensive experimental results demonstrate that our method obtains better performance in terms of both subjective and objective evaluations than state-of-the-art denoising techniques. In particular, the proposed method preserves edges and textures very well while removing noise.
Tian, Xiaoying; Li, Yongshuai; Zhou, Huan; Li, Xiang; Chen, Lisha; Zhang, Xuming
2016-01-01
Electrocardiogram (ECG) signals contain a great deal of essential information which can be utilized by physicians for the diagnosis of heart diseases. Unfortunately, ECG signals are inevitably corrupted by noise, which severely affects the accuracy of cardiovascular disease diagnosis. Existing ECG signal denoising methods based on wavelet shrinkage, empirical mode decomposition and nonlocal means (NLM) cannot provide both sufficient noise reduction and good detail preservation, especially under high noise corruption. To address this problem, we have proposed a hybrid ECG signal denoising scheme by combining extreme-point symmetric mode decomposition (ESMD) with NLM. In the proposed method, the noisy ECG signal is first decomposed into several intrinsic mode functions (IMFs) and an adaptive global mean using ESMD. Then, the first several IMFs, from which the QRS complex is detected as the dominant feature of the ECG signal, are filtered by the NLM method according to their frequencies, while the remaining IMFs are left unprocessed. The denoised IMFs and unprocessed IMFs are combined to produce the final denoised ECG signal. Experiments on both simulated ECG signals and real ECG signals from the MIT-BIH database demonstrate that the proposed method can suppress noise in ECG signals effectively while preserving the details very well, and it outperforms several state-of-the-art ECG signal denoising methods in terms of signal-to-noise ratio (SNR), root mean squared error (RMSE), percent root mean square difference (PRD) and mean opinion score (MOS) error index. PMID:27681729
Graph Laplacian Regularization for Image Denoising: Analysis in the Continuous Domain.
Pang, Jiahao; Cheung, Gene
2017-04-01
Inverse imaging problems are inherently underdetermined, and hence, it is important to employ appropriate image priors for regularization. One recent popular prior-the graph Laplacian regularizer-assumes that the target pixel patch is smooth with respect to an appropriately chosen graph. However, the mechanisms and implications of imposing the graph Laplacian regularizer on the original inverse problem are not well understood. To address this problem, in this paper, we interpret neighborhood graphs of pixel patches as discrete counterparts of Riemannian manifolds and perform analysis in the continuous domain, providing insights into several fundamental aspects of graph Laplacian regularization for image denoising. Specifically, we first show the convergence of the graph Laplacian regularizer to a continuous-domain functional, integrating a norm measured in a locally adaptive metric space. Focusing on image denoising, we derive an optimal metric space assuming non-local self-similarity of pixel patches, leading to an optimal graph Laplacian regularizer for denoising in the discrete domain. We then interpret graph Laplacian regularization as an anisotropic diffusion scheme to explain its behavior during iterations, e.g., its tendency to promote piecewise smooth signals under certain settings. To verify our analysis, an iterative image denoising algorithm is developed. Experimental results show that our algorithm performs competitively with state-of-the-art denoising methods, such as BM3D for natural images, and outperforms them significantly for piecewise smooth images.
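In the discrete domain, graph Laplacian regularized denoising amounts to solving a single linear system, x* = argmin ||x - y||^2 + lam * x^T L x, i.e. (I + lam*L) x = y. A minimal 1-D sketch with intensity-dependent edge weights (the chain graph, `lam`, and `sigma` values are illustrative choices, not from the paper):

```python
import numpy as np

def graph_laplacian_denoise(y, lam=2.0, sigma=0.5):
    """Denoise a 1-D signal by solving (I + lam*L) x = y, where L is the
    combinatorial Laplacian of a chain graph whose edge weights decay
    with intensity difference, so strong edges are smoothed less."""
    n = len(y)
    W = np.zeros((n, n))
    for i in range(n - 1):
        w = np.exp(-((y[i] - y[i + 1]) ** 2) / sigma ** 2)
        W[i, i + 1] = W[i + 1, i] = w
    L = np.diag(W.sum(axis=1)) - W   # degree matrix minus adjacency
    return np.linalg.solve(np.eye(n) + lam * L, y)

# Demo: a noisy step signal; the small edge weight across the step keeps
# the discontinuity while flat regions are smoothed.
rng = np.random.default_rng(3)
clean = np.zeros(64)
clean[32:] = 1.0
noisy = clean + rng.normal(scale=0.2, size=64)
denoised = graph_laplacian_denoise(noisy)
```

This piecewise-smooth-preserving behavior is exactly the tendency the continuous-domain analysis in the abstract explains.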
Pegg, Scott C-H; Brown, Shoshana D; Ojha, Sunil; Seffernick, Jennifer; Meng, Elaine C; Morris, John H; Chang, Patricia J; Huang, Conrad C; Ferrin, Thomas E; Babbitt, Patricia C
2006-02-28
The study of mechanistically diverse enzyme superfamilies-collections of enzymes that perform different overall reactions but share both a common fold and a distinct mechanistic step performed by key conserved residues-helps elucidate the structure-function relationships of enzymes. We have developed a resource, the structure-function linkage database (SFLD), to analyze these structure-function relationships. Unique to the SFLD is its hierarchical classification scheme based on linking the specific partial reactions (or other chemical capabilities) that are conserved at the superfamily, subgroup, and family levels with the conserved structural elements that mediate them. We present the results of analyses using the SFLD in correcting misannotations, guiding protein engineering experiments, and elucidating the function of recently solved enzyme structures from the structural genomics initiative. The SFLD is freely accessible at http://sfld.rbvi.ucsf.edu.
Time Difference of Arrival (TDOA) Estimation Using Wavelet Based Denoising
1999-03-01
Naval Postgraduate School thesis, Monterey, California, by Unal Aktas. The wavelet transform is used to increase the accuracy of time difference of arrival (TDOA) estimation; several denoising techniques based on wavelets are considered.
Geodesic denoising for optical coherence tomography images
NASA Astrophysics Data System (ADS)
Shahrian Varnousfaderani, Ehsan; Vogl, Wolf-Dieter; Wu, Jing; Gerendas, Bianca S.; Simader, Christian; Langs, Georg; Waldstein, Sebastian M.; Schmidt-Erfurth, Ursula
2016-03-01
Optical coherence tomography (OCT) is an optical signal acquisition method capturing micrometer resolution, cross-sectional three-dimensional images. OCT images are used widely in ophthalmology to diagnose and monitor retinal diseases such as age-related macular degeneration (AMD) and glaucoma. While OCT allows the visualization of retinal structures such as vessels and retinal layers, image quality and contrast are reduced by speckle noise, obfuscating small, low intensity structures and structural boundaries. Existing denoising methods for OCT images may remove clinically significant image features such as texture and boundaries of anomalies. In this paper, we propose a novel patch based denoising method, Geodesic Denoising. The method reduces noise in OCT images while preserving clinically significant, although small, pathological structures, such as fluid-filled cysts in diseased retinas. Our method selects optimal image patch distribution representations based on geodesic patch similarity to noisy samples. Patch distributions are then randomly sampled to build a set of best matching candidates for every noisy sample, and the denoised value is computed based on a geodesic weighted average of the best candidate samples. Our method is evaluated qualitatively on real pathological OCT scans and quantitatively on a proposed set of ground truth, noise free synthetic OCT scans with artificially added noise and pathologies. Experimental results show that the performance of our method is comparable with that of state-of-the-art denoising methods, while outperforming them in preserving the critical clinically relevant structures.
Image-Specific Prior Adaptation for Denoising.
Lu, Xin; Lin, Zhe; Jin, Hailin; Yang, Jianchao; Wang, James Z
2015-12-01
Image priors are essential to many image restoration applications, including denoising, deblurring, and inpainting. Existing methods use either priors from the given image (internal) or priors from a separate collection of images (external). We find through statistical analysis that unifying the internal and external patch priors may yield a better patch prior. We propose a novel prior learning algorithm that combines the strength of both internal and external priors. In particular, we first learn a generic Gaussian mixture model from a collection of training images and then adapt the model to the given image by simultaneously adding additional components and refining the component parameters. We apply this image-specific prior to image denoising. The experimental results show that our approach yields better or competitive denoising results in terms of both the peak signal-to-noise ratio and structural similarity.
Adaptive Fourier decomposition based ECG denoising.
Wang, Ze; Wan, Feng; Wong, Chi Man; Zhang, Liming
2016-10-01
A novel ECG denoising method is proposed based on the adaptive Fourier decomposition (AFD). The AFD decomposes a signal according to its energy distribution, making the algorithm suitable for separating a pure ECG signal from noise with overlapping frequency ranges but different energy distributions. A stop criterion for the iterative decomposition process in the AFD is calculated from the estimated signal-to-noise ratio (SNR) of the noisy signal. The proposed AFD-based method is validated on synthetic ECG signals generated by an ECG model and on real ECG signals from the MIT-BIH Arrhythmia Database, both with additive Gaussian white noise. Simulation results show that the proposed method performs better in denoising and QRS detection compared with major ECG denoising schemes based on the wavelet transform, the Stockwell transform, the empirical mode decomposition, and the ensemble empirical mode decomposition.
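The SNR-derived stopping rule can be illustrated with a simplified sketch. Instead of the AFD's adaptive rational basis, the toy below greedily retains the largest-energy Fourier components of the noisy signal until the retained energy reaches the fraction implied by the estimated SNR; the function name and the Fourier stand-in are ours, not the paper's.

```python
import numpy as np

def denoise_with_snr_stop(x, snr_db):
    """Greedily keep the largest-energy Fourier components of x until
    the retained energy reaches the fraction implied by the estimated
    SNR, then reconstruct. A simplified Fourier stand-in for an
    energy-ordered decomposition with an SNR-based stop criterion."""
    n = len(x)
    spec = np.fft.rfft(x)
    energy = np.abs(spec) ** 2
    snr = 10.0 ** (snr_db / 10.0)
    target = energy.sum() * snr / (1.0 + snr)   # energy attributed to signal
    kept = np.zeros_like(spec)
    acc = 0.0
    for k in np.argsort(energy)[::-1]:          # largest components first
        kept[k] = spec[k]
        acc += energy[k]
        if acc >= target:                        # SNR-based stop criterion
            break
    return np.fft.irfft(kept, n)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 512, endpoint=False)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * rng.standard_normal(512)
rec = denoise_with_snr_stop(noisy, snr_db=10.0)
```

The key point mirrored here is that the decomposition terminates once the energy accounted for matches the signal energy implied by the SNR estimate, rather than after a fixed number of terms.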
Parallel object-oriented, denoising system using wavelet multiresolution analysis
Kamath, Chandrika; Baldwin, Chuck H.; Fodor, Imola K.; Tang, Nu A.
2005-04-12
The present invention provides a data denoising system utilizing multiple processors and wavelet denoising techniques. Data is read and displayed in different formats. The data is partitioned into regions and the regions are distributed onto the processors. Communication requirements among the processors are determined according to the wavelet denoising technique and the partitioning of the data. The data is transformed onto different multiresolution levels with the wavelet transform, according to the denoising technique and the communication requirements, producing wavelet coefficients. The denoised data is then transformed back into its original format for reading and display.
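The per-region step that such a system distributes across processors (transform, threshold the wavelet coefficients, inverse transform) can be sketched with a single Haar level and soft thresholding; this is a minimal illustration, not the patented multiresolution pipeline.

```python
import numpy as np

def haar_denoise(x, threshold):
    """One-level Haar wavelet denoising: forward transform,
    soft-threshold the detail coefficients, inverse transform.
    x must have even length."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)  # soft threshold
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)         # inverse Haar transform
    y[1::2] = (a - d) / np.sqrt(2)
    return y

rng = np.random.default_rng(1)
clean = np.repeat([0.0, 1.0, 0.0, -1.0], 64)       # piecewise-constant signal
noisy = clean + 0.2 * rng.standard_normal(clean.size)
# Universal threshold sigma * sqrt(2 ln N), a common rule of thumb.
den = haar_denoise(noisy, 0.2 * np.sqrt(2 * np.log(clean.size)))
```

In the parallel setting, each processor would apply this step to its own data region, exchanging only the boundary samples required by the transform's support.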
Study on torpedo fuze signal denoising method based on WPT
NASA Astrophysics Data System (ADS)
Zhao, Jun; Sun, Changcun; Zhang, Tao; Ren, Zhiliang
2013-07-01
Torpedo fuze signal denoising is important for ensuring reliable fuze operation. Based on the good denoising characteristics of the wavelet packet transform (WPT), this paper uses the WPT to denoise the fuze signal under complex background interference, and a simulation of the denoising results is performed in Matlab. Simulation results show that the WPT denoising method can effectively eliminate the background noise present in the torpedo fuze target signal with higher precision and less distortion, improving the reliability of torpedo fuze operation.
Trachana, Kalliopi; Forslund, Kristoffer; Larsson, Tomas; Powell, Sean; Doerks, Tobias; von Mering, Christian; Bork, Peer
2014-01-01
Accurate orthology prediction is crucial for many applications in the post-genomic era. The lack of broadly accepted benchmark tests precludes a comprehensive analysis of orthology inference. So far, functional annotation between orthologs serves as a performance proxy. However, this violates the fundamental principle of orthology as an evolutionary definition, while it is often not applicable due to limited experimental evidence for most species. Therefore, we constructed high quality "gold standard" orthologous groups that can serve as a benchmark set for orthology inference in bacterial species. Herein, we used this dataset to demonstrate 1) why a manually curated, phylogeny-based dataset is more appropriate for benchmarking orthology than other popular practices and 2) how it guides database design and parameterization through careful error quantification. More specifically, we illustrate how function-based tests often fail to identify false assignments, misjudging the true performance of orthology inference methods. We also examined how our dataset can instruct the selection of a "core" species repertoire to improve detection accuracy. We conclude that including more genomes at the proper evolutionary distances can influence the overall quality of orthology detection. The curated gene families, called Reference Orthologous Groups, are publicly available at http://eggnog.embl.de/orthobench2.
Robust modeling based on optimized EEG bands for functional brain state inference.
Podlipsky, Ilana; Ben-Simon, Eti; Hendler, Talma; Intrator, Nathan
2012-01-30
The need to infer brain states in a data-driven way is crucial for BCI applications as well as for neuroscience research. In this work we present a novel classification framework based on a regularized linear regression classifier constructed from the time-frequency decomposition of an EEG (electroencephalography) signal. The regression is then used to derive a model of frequency distributions that identifies brain states. The process of classifier construction, preprocessing, and selection of the optimal regularization parameter by cross-validation is presented and discussed. The framework and the feature-selection technique are demonstrated on EEG data recorded from 10 healthy subjects who were requested to open and close their eyes every 30 s. This paradigm is well known to induce Alpha power modulations that are low while the eyes are open and high while they are closed. The classifier was trained to infer eyes-open or eyes-closed states and achieved higher than 90% classification accuracy. Furthermore, our findings reveal interesting patterns of relations between experimental conditions, EEG frequencies, regularization parameters, and classifier choice. This tool enables identification of the frequency bands that contribute most to any given brain state and of their optimal combination for inferring that state. These features allow for much greater detail than standard Fourier transform power analysis, making the method useful for both BCI purposes and neuroimaging research.
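The classifier-construction and cross-validation steps can be sketched as follows. This is a generic regularized linear regression with k-fold selection of the regularization parameter on toy features, not the authors' EEG pipeline, whose features would be time-frequency band powers.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Regularized linear regression classifier in closed form:
    w = (X^T X + lam * I)^{-1} X^T y, labels y in {-1, +1}."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def cv_select_lambda(X, y, lambdas, folds=5):
    """Pick the regularization parameter by k-fold cross-validated
    classification accuracy."""
    idx = np.arange(len(y))
    best_lam, best_acc = lambdas[0], -1.0
    for lam in lambdas:
        acc = 0.0
        for f in range(folds):
            test = idx % folds == f
            w = ridge_fit(X[~test], y[~test], lam)
            acc += np.mean(np.sign(X[test] @ w) == y[test])
        if acc / folds > best_acc:
            best_lam, best_acc = lam, acc / folds
    return best_lam

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 5))              # stand-in for band-power features
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = np.sign(X @ w_true + 0.1 * rng.standard_normal(200))
lam = cv_select_lambda(X, y, [0.01, 0.1, 1.0, 10.0])
w = ridge_fit(X, y, lam)
```

Inspecting the learned weights per frequency band is what lets such a model reveal which bands contribute most to a given brain state.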
Kimura, S; Araki, D; Matsumura, K; Okada-Hatakeyama, M
2012-02-01
Voit and Almeida have proposed the decoupling approach as a method for inferring the S-system models of genetic networks. The decoupling approach defines the inference of a genetic network as a problem requiring the solutions of sets of algebraic equations. The computation can be accomplished in a very short time, as the approach estimates S-system parameters without solving any of the differential equations. Yet the defined algebraic equations are non-linear, which sometimes prevents us from finding reasonable S-system parameters. In this study, we propose a new technique to overcome this drawback of the decoupling approach. This technique transforms the problem of solving each set of algebraic equations into a one-dimensional function optimization problem. The computation can still be accomplished in a relatively short time, as the problem is transformed by solving a linear programming problem. We confirm the effectiveness of the proposed approach through numerical experiments.
Wavelet-based fMRI analysis: 3-D denoising, signal separation, and validation metrics.
Khullar, Siddharth; Michael, Andrew; Correa, Nicolle; Adali, Tulay; Baum, Stefi A; Calhoun, Vince D
2011-02-14
We present a novel integrated wavelet-domain based framework (w-ICA) for 3-D denoising of functional magnetic resonance imaging (fMRI) data followed by source separation analysis using independent component analysis (ICA) in the wavelet domain. We propose a 3-D wavelet-based multi-directional denoising scheme in which each volume in a 4-D fMRI data set is sub-sampled using the axial, sagittal and coronal geometries to obtain three different slice-by-slice representations of the same data. The filtered intensity value of an arbitrary voxel is computed as an expected value of the denoised wavelet coefficients corresponding to the three viewing geometries for each sub-band. This results in a robust set of denoised wavelet coefficients for each voxel. Given the de-correlated nature of these denoised wavelet coefficients, it is possible to obtain more accurate source estimates using ICA in the wavelet domain. The contributions of this work can be viewed as two modules. First, in the analysis module, we combine a new 3-D wavelet denoising approach with the signal separation properties of ICA in the wavelet domain; this step helps obtain an activation component that corresponds closely to the true underlying signal and is maximally independent with respect to other components. Second, we propose and describe two novel shape metrics for post-ICA comparisons between activation regions obtained through different frameworks. We verified our method using simulated as well as real fMRI data and compared our results against the conventional scheme (Gaussian smoothing + spatial ICA: s-ICA). The results show significant improvements based on two important features: (1) preservation of the shape of the activation region (shape metrics) and (2) receiver operating characteristic curves. The proposed framework preserved the actual activation shape in a consistent manner even at very high noise levels, in addition to a significant reduction in false positives.
Statistical Methods for Image Registration and Denoising
2008-06-19
Contents include: 2.5.4 Nonlocal Means; 2.5.5 Patch-Based Denoising with Optimal Spatial Adaptation; 2.5.6 Other Patch-Based Methods; 2.6 Chapter Summary. The thesis examines the nonlocal means [9] and an optimal patch-based algorithm [31]; these algorithms all include some measure of pixel similarity.
A denoising algorithm for projection measurements in cone-beam computed tomography.
Karimi, Davood; Ward, Rabab
2016-02-01
The ability to reduce the radiation dose in computed tomography (CT) is limited by the excessive quantum noise present in the projection measurements. Sinogram denoising is, therefore, an essential step towards reconstructing high-quality images, especially in low-dose CT. Effective denoising requires accurate modeling of the photon statistics and of the prior knowledge about the characteristics of the projection measurements. This paper proposes an algorithm for denoising low-dose sinograms in cone-beam CT. The proposed algorithm is based on minimizing a cost function that includes a measurement consistency term and two regularizations in terms of the gradient and the Hessian of the sinogram. This choice of the regularization is motivated by the nature of CT projections. We use a split Bregman algorithm to minimize the proposed cost function. We apply the algorithm on simulated and real cone-beam projections and compare the results with another algorithm based on bilateral filtering. Our experiments with simulated and real data demonstrate the effectiveness of the proposed algorithm. Denoising of the projections with the proposed algorithm leads to a significant reduction of the noise in the reconstructed images without oversmoothing the edges or introducing artifacts.
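In our own notation (the weight matrix $W$ and parameters $\lambda_1, \lambda_2$ are ours; the paper's exact weighting may differ), the cost function described above has the form:

```latex
\min_{y} \; \|y - b\|_W^2 \;+\; \lambda_1 \,\|\nabla y\|_1 \;+\; \lambda_2 \,\|\nabla^2 y\|_1
```

Here $y$ is the denoised sinogram, $b$ the measured projections, and $W$ a statistical weighting reflecting the photon statistics. Split Bregman handles the non-smooth L1 terms by introducing auxiliary variables $d_1 \approx \nabla y$ and $d_2 \approx \nabla^2 y$ and alternating a quadratic solve for $y$ with component-wise shrinkage of $d_1$ and $d_2$.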
Improved 3D wavelet-based de-noising of fMRI data
NASA Astrophysics Data System (ADS)
Khullar, Siddharth; Michael, Andrew M.; Correa, Nicolle; Adali, Tulay; Baum, Stefi A.; Calhoun, Vince D.
2011-03-01
Functional MRI (fMRI) data analysis deals with the problem of detecting very weak signals in very noisy data. Smoothing with a Gaussian kernel is often used to decrease noise at the cost of losing spatial specificity. We present a novel wavelet-based 3-D technique to remove noise in fMRI data while preserving the spatial features in the component maps obtained through group independent component analysis (ICA). Each volume is decomposed into eight volumetric sub-bands using a separable 3-D stationary wavelet transform. Each of the detail sub-bands is then treated by the main denoising module. This module computes shrinkage factors through a hierarchical framework, iteratively using information from the sub-band at the next higher level to estimate denoised coefficients at the current level. These denoised sub-bands are then reconstructed back to the spatial domain using an inverse wavelet transform. Finally, the denoised group fMRI data is analyzed using ICA, where the data is decomposed into clusters of functionally correlated voxels (spatial maps) as indicators of task-related neural activity. The proposed method enables the preservation of the shape of the actual activation regions associated with the BOLD activity. In addition, it achieves high specificity compared with the conventionally used FWHM (full width at half maximum) Gaussian kernels for smoothing fMRI data.
Ladar range image denoising by a nonlocal probability statistics algorithm
NASA Astrophysics Data System (ADS)
Xia, Zhi-Wei; Li, Qi; Xiong, Zhi-Peng; Wang, Qi
2013-01-01
According to the characteristics of coherent ladar range images and on the basis of nonlocal means (NLM), a nonlocal probability statistics (NLPS) algorithm is proposed in this paper. The difference is that NLM performs denoising using the mean of the conditional probability distribution function (PDF), while NLPS uses the maximum of the marginal PDF. In the algorithm, similar blocks are found by block matching and form a group. Pixels in the group are analyzed by probability statistics, and the gray value with maximum probability is used as the estimated value of the current pixel. Simulated range images of coherent ladar with different carrier-to-noise ratios and a real 8-gray-scale range image of coherent ladar are denoised by this algorithm, and the results are compared with those of the median filter, multi-template order mean filter, NLM, median nonlocal mean filter and its incorporation of anatomical side information, and the unsupervised information-theoretic adaptive filter. The range-abnormality noise and Gaussian noise in coherent ladar range images are effectively suppressed by NLPS.
NASA Astrophysics Data System (ADS)
Fournier, G. P.; Gogarten, J. P.
2010-04-01
Using ancestral sequence reconstruction and compositional analysis, it is possible to reconstruct the ancestral functions of many enzymes involved in protein synthesis, elucidating the early functional evolution of the translation machinery and genetic code.
Inference of functional properties from large-scale analysis of enzyme superfamilies.
Brown, Shoshana D; Babbitt, Patricia C
2012-01-02
As increasingly large amounts of data from genome and other sequencing projects become available, new approaches are needed to determine the functions of the proteins these genes encode. We show how large-scale computational analysis can help to address this challenge by linking functional information to sequence and structural similarities using protein similarity networks. Network analyses using three functionally diverse enzyme superfamilies illustrate the use of these approaches for facile updating and comparison of available structures for a large superfamily, for creation of functional hypotheses for metagenomic sequences, and to summarize the limits of our functional knowledge about even well studied superfamilies.
NASA Astrophysics Data System (ADS)
Mitchell, Edward A. D.; Lamentowicz, Mariusz; Payne, Richard J.; Mazei, Yuri
2014-05-01
Sound taxonomy is a major requirement for quantitative environmental reconstruction using biological data. Transfer function performance should theoretically be expected to decrease with reduced taxonomic resolution. However, for many groups of organisms taxonomy is imperfect and species-level identification is not always possible. We conducted numerical experiments on five testate amoeba water table (DWT) transfer function data sets. We sequentially reduced the number of taxonomic groups by successively merging morphologically similar species and removing inconspicuous species. We then assessed how these changes affected model performance and palaeoenvironmental reconstruction using two fossil data sets. Model performance decreased with decreasing taxonomic resolution, but this had only limited effects on patterns of inferred DWT, at least for detecting major dry/wet shifts. Higher-resolution taxonomy may, however, still be useful for detecting more subtle changes, or for reconstructed shifts to be significant.
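A minimal sketch of how such a transfer function works, using simple weighted averaging (only one of several calibration methods used in this literature; the data below are synthetic):

```python
import numpy as np

def wa_optima(abundance, env):
    """Species optima: abundance-weighted mean of the environmental
    variable (here, depth to water table) over the training samples."""
    return (abundance.T @ env) / abundance.sum(axis=0)

def wa_reconstruct(abundance, optima):
    """Inferred environment for each sample: abundance-weighted
    average of the species optima."""
    return (abundance @ optima) / abundance.sum(axis=1)

# Synthetic training set: 3 species with Gaussian responses along a gradient.
env = np.linspace(0.0, 10.0, 21)
true_opt = np.array([2.0, 5.0, 8.0])
abundance = np.exp(-((env[:, None] - true_opt[None, :]) ** 2) / 4.0)
opt = wa_optima(abundance, env)
recon = wa_reconstruct(abundance, opt)
# Merging two morphologically similar taxa = summing their abundance columns.
merged = np.column_stack([abundance[:, 0] + abundance[:, 1], abundance[:, 2]])
```

The taxonomic-resolution experiments described above amount to refitting the model on matrices like `merged`, where similar species have been pooled, and comparing reconstruction performance.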
NASA Astrophysics Data System (ADS)
Pandarinath, Kailasa
2014-12-01
Several new multi-dimensional tectonomagmatic discrimination diagrams employing log-ratio variables of chemical elements and a probability-based procedure have been developed during the last 10 years for basic-ultrabasic, intermediate, and acid igneous rocks. Numerous extensive evaluations of these newly developed diagrams have indicated their successful application for inferring the original tectonic setting of younger and older, as well as sea-water and hydrothermally altered, volcanic rocks. In the present study, these diagrams were applied to Precambrian rocks of Mexico (southern and north-eastern) and Argentina. The study indicated the original tectonic setting of Precambrian rocks from the Oaxaca Complex of southern Mexico as follows: (1) a dominant rift (within-plate) setting for rocks of 1117-988 Ma age; (2) a dominant rift and less-dominant arc setting for rocks of 1157-1130 Ma age; and (3) a combined tectonic setting of collision and rift for the Etla Granitoid Pluton (917 Ma age). The diagrams indicated the original tectonic setting of the Precambrian rocks from north-eastern Mexico as: (1) a dominant arc tectonic setting for the rocks of 988 Ma age; and (2) an arc and collision setting for the rocks of 1200-1157 Ma age. Similarly, the diagrams indicated the dominant original tectonic setting for the Precambrian rocks from Argentina as: (1) a within-plate (continental rift-ocean island) and continental rift (CR) setting for the rocks of 800 Ma and 845 Ma age, respectively; and (2) an arc setting for the rocks of 1174-1169 Ma and 1212-1188 Ma age. The inferred tectonic settings for these Precambrian rocks are, in general, in accordance with the tectonic settings reported in the literature, though some of the diagrams yield inconsistent inferences. The present study confirms the importance of these newly developed discriminant-function based diagrams in inferring the original tectonic setting of
[A non-local means approach for PET image denoising].
Yin, Yong; Sun, Weifeng; Lu, Jie; Liu, Tonghai
2010-04-01
Denoising is an important issue in medical image processing. Based on an analysis of the non-local means algorithm recently reported by Buades et al., we propose adapting it for PET image denoising. Experimental denoising results on real clinical PET images show that the non-local means method is superior to median filtering and Wiener filtering: it can suppress noise in PET images effectively while preserving structural details important for diagnosis.
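For readers unfamiliar with the Buades et al. algorithm, here is a one-dimensional sketch; the parameters `patch`, `search`, and `h` are illustrative choices, and PET denoising would operate on 2-D or 3-D patches rather than 1-D windows.

```python
import numpy as np

def nlm_1d(x, patch=3, search=10, h=0.15):
    """Non-local means on a 1-D signal: each sample becomes a weighted
    average of nearby samples, weighted by the similarity of the
    patches surrounding them."""
    n = len(x)
    pad = np.pad(x, patch, mode='reflect')
    out = np.empty(n)
    for i in range(n):
        pi = pad[i:i + 2 * patch + 1]                  # patch around sample i
        lo, hi = max(0, i - search), min(n, i + search + 1)
        w = np.empty(hi - lo)
        for k, j in enumerate(range(lo, hi)):
            pj = pad[j:j + 2 * patch + 1]              # candidate patch
            w[k] = np.exp(-np.mean((pi - pj) ** 2) / h ** 2)
        out[i] = np.sum(w * x[lo:hi]) / np.sum(w)
    return out

rng = np.random.default_rng(2)
t = np.linspace(0.0, 2 * np.pi, 256)
clean = np.sin(t)
noisy = clean + 0.1 * rng.standard_normal(256)
den = nlm_1d(noisy)
```

Because the weights depend on whole-patch similarity rather than single-pixel proximity, edges and textures are averaged only with other samples that share the same local structure, which is what preserves diagnostic detail.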
A stacked contractive denoising auto-encoder for ECG signal denoising.
Xiong, Peng; Wang, Hongrui; Liu, Ming; Lin, Feng; Hou, Zengguang; Liu, Xiuling
2016-12-01
As a primary diagnostic tool for cardiac diseases, electrocardiogram (ECG) signals are often contaminated by various kinds of noise, such as baseline wander, electrode contact noise, and motion artifacts. In this paper, we propose a contractive denoising technique to improve the performance of current denoising auto-encoders (DAEs) for ECG signal denoising. Based on the Frobenius norm of the Jacobian matrix of the learned features with respect to the input, we develop a stacked contractive denoising auto-encoder (CDAE) to build a deep neural network (DNN) for noise reduction, which can significantly improve the representation of ECG signals through multi-level feature extraction. The proposed method is evaluated on ECG signals from the benchmark MIT-BIH Arrhythmia Database, with noise taken from the MIT-BIH Noise Stress Test Database. The experimental results show that the new CDAE algorithm performs better than conventional ECG denoising methods, specifically with more than 2.40 dB improvement in signal-to-noise ratio (SNR) and improvements of roughly 0.075 to 0.350 in root mean square error (RMSE).
Kim, Tae-Min; Chung, Yeun-Jun; Rhyu, Mun-Gan; Ho Jung, Myeong
2007-01-01
Background: Gene clustering has been widely used to group genes with similar expression patterns in microarray data analysis. Subsequent enrichment analysis using predefined gene sets can provide clues as to which functional themes or regulatory sequence motifs are associated with individual gene clusters. In spite of this potential utility, gene clustering and enrichment analysis have been used on separate platforms; thus, the development of an integrative algorithm linking both methods is highly challenging. Results: In this study, we propose an algorithm for the discovery of molecular functions and the elucidation of transcriptional logic using two kinds of gene information: functional and regulatory-motif gene sets. The algorithm, termed gene set expression coherence analysis (GSECA), first selects functional gene sets with significantly high expression coherence. Those candidate gene sets are further processed into a number of functionally related themes, or functional clusters, according to their expression similarities. Each functional cluster is then investigated for the enrichment of transcriptional regulatory motifs using modified gene set enrichment analysis and regulatory-motif gene sets. The method was tested on two publicly available expression profiles representing murine myogenesis and erythropoiesis. For the respective profiles, our algorithm identified myocyte- and erythrocyte-related molecular functions, along with the putative transcriptional regulators for the corresponding molecular functions. Conclusion: As an integrative and comprehensive method for the analysis of large-scale gene expression profiles, our method is able to generate a set of testable hypotheses of the form: transcriptional regulator X regulates function Y under cellular condition Z. The GSECA algorithm is implemented in a freely available software package. PMID:18021416
A probabilistic framework to infer brain functional connectivity from anatomical connections.
Deligianni, Fani; Varoquaux, Gael; Thirion, Bertrand; Robinson, Emma; Sharp, David J; Edwards, A David; Rueckert, Daniel
2011-01-01
We present a novel probabilistic framework to learn across several subjects a mapping from brain anatomical connectivity to functional connectivity, i.e. the covariance structure of brain activity. This prediction problem must be formulated as a structured-output learning task, as the predicted parameters are strongly correlated. We introduce a model selection framework based on cross-validation with a parametrization-independent loss function suitable to the manifold of covariance matrices. Our model is based on constraining the conditional independence structure of functional activity by the anatomical connectivity. Subsequently, we learn a linear predictor of a stationary multivariate autoregressive model. This natural parameterization of functional connectivity also enforces the positive-definiteness of the predicted covariance and thus matches the structure of the output space. Our results show that functional connectivity can be explained by anatomical connectivity on a rigorous statistical basis, and that a proper model of functional connectivity is essential to assess this link.
Janga, Sarath Chandra; Collado-Vides, Julio; Moreno-Hagelsieb, Gabriel
2005-01-01
Since operons are unstable across prokaryotes, it has been suggested that they might recombine in a conservative manner. Thus, genes belonging to a given operon in one genome might re-associate in other genomes, revealing functional relationships among gene products. We developed a system, Nebulon, to build networks of functional relationships of gene products based on their organization into operons in any available genome. The operon predictions are based on inter-genic distances. Our system can use different kinds of thresholds to accept a functional relationship, related either to the prediction of operons or to the number of non-redundant genomes that support the associations. We also work by shells, meaning that we decide on the number of linking iterations to allow for the complementation of related gene sets. The method shows high reliability when benchmarked against knowledge bases of functional interactions. We also illustrate the use of Nebulon in finding new members of regulons and of other functional groups of genes. Operon rearrangements produce thousands of high-quality new interactions per prokaryotic genome, and thousands of confirmations per genome of other predictions, making it another important tool for the inference of functional interactions from genomic context. PMID:15867197
Image denoising by exploring external and internal correlations.
Yue, Huanjing; Sun, Xiaoyan; Yang, Jingyu; Wu, Feng
2015-06-01
Single-image denoising suffers from limited data collection within a noisy image. In this paper, we propose a novel image denoising scheme which explores both internal and external correlations with the help of web images. For each noisy patch, we build internal and external data cubes by finding similar patches from the noisy image and from web images, respectively. We then propose reducing noise by a two-stage strategy using different filtering approaches. In the first stage, since the noisy patch may lead to inaccurate patch selection, we propose a graph-based optimization method to improve patch-matching accuracy in external denoising; the internal denoising is frequency truncation on the internal cubes. By combining the internal and external denoising patches, we obtain a preliminary denoising result. In the second stage, we propose reducing noise by filtering the external and internal cubes, respectively, in the transform domain. In this stage, the preliminary denoising result not only enhances the patch-matching accuracy but also provides reliable estimates of the filtering parameters. The final denoised image is obtained by fusing the external and internal filtering results. Experimental results show that our method consistently outperforms state-of-the-art denoising schemes in both subjective and objective quality measurements; for example, it achieves a gain of more than 2 dB over BM3D at a wide range of noise levels.
Postprocessing of Compressed Images via Sequential Denoising
NASA Astrophysics Data System (ADS)
Dar, Yehuda; Bruckstein, Alfred M.; Elad, Michael; Giryes, Raja
2016-07-01
In this work we propose a novel postprocessing technique for compression-artifact reduction. Our approach is based on posing this task as an inverse problem, with a regularization that leverages existing state-of-the-art image denoising algorithms. We rely on the recently proposed Plug-and-Play Prior framework, which suggests solving general inverse problems via the Alternating Direction Method of Multipliers (ADMM), leading to a sequence of Gaussian denoising steps. A key feature of our scheme is a linearization of the compression-decompression process that yields a formulation which can be optimized. In addition, we supply a thorough analysis of this linear approximation for several basic compression procedures. The proposed method is suitable for diverse compression techniques that rely on transform coding. Specifically, we demonstrate impressive gains in image quality for several leading compression methods: JPEG, JPEG2000, and HEVC.
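The Plug-and-Play ADMM loop can be sketched generically. Here the operator `A` stands in for a caller-supplied linearized compression-decompression map; the inner gradient solver, step sizes, and the toy moving-average denoiser are our illustrative choices, not the paper's.

```python
import numpy as np

def plug_and_play(b, A, At, denoise, rho=1.0, iters=20):
    """Plug-and-Play ADMM: alternate an inexact least-squares data
    step (gradient descent) with a black-box Gaussian denoiser acting
    as the prior, plus the scaled dual update."""
    x = At(b)
    v, u = x.copy(), np.zeros_like(x)
    for _ in range(iters):
        for _ in range(10):   # x-step: min ||A x - b||^2 + rho ||x - (v - u)||^2
            x = x - 0.1 * (At(A(x) - b) + rho * (x - (v - u)))
        v = denoise(x + u)    # prior step: plug in any denoiser
        u = u + x - v         # dual update
    return v

rng = np.random.default_rng(5)
t = np.linspace(0.0, 1.0, 512, endpoint=False)
clean = np.sin(2 * np.pi * 3 * t)
noisy = clean + 0.3 * rng.standard_normal(512)
identity = lambda z: z                                           # stand-in operator
smooth = lambda z: np.convolve(z, np.ones(5) / 5, mode='same')   # toy denoiser
out = plug_and_play(noisy, identity, identity, smooth)
```

The attraction of the framework is exactly this modularity: swapping `smooth` for a state-of-the-art Gaussian denoiser changes the prior without touching the data-fidelity machinery.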
Adaptive Image Denoising by Mixture Adaptation
NASA Astrophysics Data System (ADS)
Luo, Enming; Chan, Stanley H.; Nguyen, Truong Q.
2016-10-01
We propose an adaptive learning procedure to learn patch-based image priors for image denoising. The new algorithm, called the Expectation-Maximization (EM) adaptation, takes a generic prior learned from a generic external database and adapts it to the noisy image to generate a specific prior. Different from existing methods that combine internal and external statistics in ad-hoc ways, the proposed algorithm is rigorously derived from a Bayesian hyper-prior perspective. There are two contributions of this paper: First, we provide full derivation of the EM adaptation algorithm and demonstrate methods to improve the computational complexity. Second, in the absence of the latent clean image, we show how EM adaptation can be modified based on pre-filtering. Experimental results show that the proposed adaptation algorithm yields consistently better denoising results than the one without adaptation and is superior to several state-of-the-art algorithms.
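A sketch of the adaptation idea follows, reduced to isotropic Gaussians, means only, and a single EM step; the paper derives full covariance updates from a Bayesian hyper-prior, and `rho` here is our simplified stand-in for that hyper-prior weight.

```python
import numpy as np

def em_adapt_means(mu, sigma2, pi, patches, rho=0.5):
    """One EM-adaptation step. E-step: responsibilities of the generic
    components for the image's own patches. M-step: pull each mean
    toward the responsibility-weighted patch average; rho keeps the
    adapted prior close to the generic one."""
    d2 = ((patches[:, None, :] - mu[None, :, :]) ** 2).sum(-1)   # (N, K)
    log_r = np.log(pi)[None, :] - d2 / (2.0 * sigma2)
    log_r -= log_r.max(axis=1, keepdims=True)                    # stabilize
    r = np.exp(log_r)
    r /= r.sum(axis=1, keepdims=True)
    nk = r.sum(axis=0) + 1e-12
    mu_hat = (r.T @ patches) / nk[:, None]                       # adapted means
    return rho * mu + (1.0 - rho) * mu_hat

rng = np.random.default_rng(4)
dim = 4
true_centers = np.stack([np.zeros(dim), np.ones(dim)])
patches = np.concatenate([
    true_centers[0] + 0.1 * rng.standard_normal((100, dim)),
    true_centers[1] + 0.1 * rng.standard_normal((100, dim)),
])
mu0 = np.stack([np.full(dim, -0.5), np.full(dim, 1.5)])  # generic (offset) prior
mu1 = em_adapt_means(mu0, sigma2=0.25, pi=np.array([0.5, 0.5]), patches=patches)
```

After one step the component means have moved from the generic database statistics toward the statistics of the image at hand, which is the core of the adaptation argument.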
MRI denoising using non-local means.
Manjón, José V; Carbonell-Caballero, José; Lull, Juan J; García-Martí, Gracián; Martí-Bonmatí, Luís; Robles, Montserrat
2008-08-01
Magnetic Resonance (MR) images are affected by random noise, which limits the accuracy of any quantitative measurements made from the data. In the present work, a recently proposed filter for random noise removal is analyzed and adapted to reduce this noise in MR magnitude images. This parametric filter, named Non-Local Means (NLM), is highly dependent on the setting of its parameters. The aim of this paper is to find the optimal parameter selection for MR magnitude image denoising. For this purpose, experiments have been conducted to find the optimum parameters for different noise levels. In addition, the filter has been adapted to the specific characteristics of the noise in MR magnitude images (i.e., Rician noise). From the results over synthetic and real images we conclude that this filter can be successfully used for automatic MR denoising.
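For orientation, a minimal 1-D non-local means sketch (our illustration; the function name, parameter names, and defaults are arbitrary, and the Rician-noise adaptation discussed in the abstract is omitted):

```python
import numpy as np

def nlm_1d(signal, patch_radius=2, search_radius=5, h=0.5):
    # Non-local means on a 1-D signal: each sample becomes a weighted
    # average of samples whose surrounding patches look similar.
    # `h` controls filtering strength; reflection padding gives every
    # index a full patch.
    x = np.asarray(signal, float)
    pad = patch_radius
    xp = np.pad(x, pad, mode="reflect")
    out = np.empty_like(x)
    for i in range(len(x)):
        ref = xp[i:i + 2 * pad + 1]                 # patch centred on i
        lo, hi = max(0, i - search_radius), min(len(x), i + search_radius + 1)
        weights, values = [], []
        for j in range(lo, hi):
            cand = xp[j:j + 2 * pad + 1]
            d2 = np.mean((ref - cand) ** 2)         # patch similarity
            weights.append(np.exp(-d2 / (h * h)))
            values.append(x[j])
        weights = np.asarray(weights)
        out[i] = np.dot(weights, values) / weights.sum()
    return out
```

The abstract's point is that performance hinges on parameters like `h` and the two radii; a Rician adaptation would additionally correct the noise-induced bias in magnitude data.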
The interval testing procedure: A general framework for inference in functional data analysis.
Pini, Alessia; Vantini, Simone
2016-09-01
We introduce in this work the Interval Testing Procedure (ITP), a novel inferential technique for functional data. The procedure can be used to test different functional hypotheses, e.g., distributional equality between two or more functional populations, or equality of the mean function of a functional population to a reference. ITP involves three steps: (i) the representation of data on a (possibly high-dimensional) functional basis; (ii) the test of each possible set of consecutive basis coefficients; (iii) the computation of the adjusted p-values associated with each basis component, by means of a new strategy proposed here. We define a new type of error control, the interval-wise control of the family-wise error rate, particularly suited for functional data. We show that ITP is provided with such a control. A simulation study comparing ITP with other testing procedures is reported. ITP is then applied to the analysis of hemodynamical features involved with cerebral aneurysm pathology. ITP is implemented in the fdatest R package.
Simultaneous denoising and compression of multispectral images
NASA Astrophysics Data System (ADS)
Hagag, Ahmed; Amin, Mohamed; Abd El-Samie, Fathi E.
2013-01-01
A new technique for denoising and compression of multispectral satellite images to remove the effect of noise on the compression process is presented. One type of multispectral images has been considered: Landsat Enhanced Thematic Mapper Plus. The discrete wavelet transform (DWT), the dual-tree DWT, and a simple Huffman coder are used in the compression process. Simulation results show that the proposed technique is more effective than other traditional compression-only techniques.
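The wavelet-thresholding building block underlying such denoise-then-compress pipelines can be sketched as follows (a single-level Haar DWT with hard thresholding, written for this summary; the paper's dual-tree DWT and Huffman coder are not reproduced):

```python
import numpy as np

def haar_denoise(signal, threshold):
    # Single-level Haar wavelet denoising: transform, hard-threshold the
    # detail coefficients, inverse transform. Assumes even-length input.
    x = np.asarray(signal, float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)          # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)          # detail coefficients
    d = np.where(np.abs(d) < threshold, 0.0, d)   # kill small (noisy) details
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y
```

Zeroing small detail coefficients both suppresses noise and makes the coefficient stream far more compressible, which is why denoising before entropy coding helps the compression stage.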
Chellapandi, P; Sakthishree, S; Bharathi, M
2013-09-01
Bacterial ADP-ribosyltransferases (BADPRTs) contribute extensively to determining strain-specific virulence and pathogenesis in human hosts. Understanding the molecular evolution and functional diversity of the BADPRTs is an important standpoint for describing the fundamentals behind vaccine design for bacterial infections. In the present study, we have evaluated the origin and functional evolution of conserved domains within the BADPRTs by analyzing their sequence-function relationship. To represent the evolutionary history of the BADPRTs, phylogenetic trees were constructed from their protein sequences, structures and conserved domains using different evolutionary programs. Sequence divergence and genetic diversity were studied herein to deduce the functional evolution of conserved domains across the family and superfamily. A sequence similarity search showed that three hypothetical proteins were identical (above 90%) to members of the BADPRTs, and their functions were annotated by a phylogenetic approach. Phylogenetic analysis in this study revealed that the family members of the BADPRTs are phylogenetically related to one another, functionally diverged within the same family, and dispersed into closely related bacteria. The presence of a core substitution pattern in the conserved domains would determine the family-specific function of the BADPRTs. Functional diversity of the BADPRTs was exclusively distinguished by Darwinian positive selection (diphtheria toxin C and pertussis toxin S) and neutral selection (arginine ADP-ribosyltransferase, enterotoxin A and binary toxin A) acting on the existing domains. Many of the family members share their sequence-specific features with members of the arginine ADP-ribosyltransferase family. Conservative functions of members of the BADPRTs have been shown to expand only within closely related families, and are retained as such in pathogenic bacteria by evolutionary processes (domain duplication or
NASA Astrophysics Data System (ADS)
King, Gary; Rosen, Ori; Tanner, Martin A.
2004-09-01
This collection of essays brings together a diverse group of scholars to survey the latest strategies for solving ecological inference problems in various fields. The last half-decade has witnessed an explosion of research in ecological inference, the process of trying to infer individual behavior from aggregate data. Although uncertainties and information lost in aggregation make ecological inference one of the most problematic types of research to rely on, these inferences are required in many academic fields, as well as by legislatures and the courts in redistricting, by businesses in marketing research, and by governments in policy analysis.
NASA Astrophysics Data System (ADS)
Sarty, Gordon E.; Atkins, M. Stella; Olatunbosun, Femi; Chizen, Donna; Loewy, John; Kendall, Edward J.; Pierson, Roger A.
1999-10-01
A new numerical wavelet transform, the discrete torus wavelet transform, is described and an application is given to the denoising of abdominal magnetic resonance imaging (MRI) data. The discrete torus wavelet transform is an undecimated wavelet transform which is computed using a discrete Fourier transform and multiplication instead of direct convolution in the image domain. This approach leads to a decomposition of the image onto frames in the space of square summable functions on the discrete torus, l2(T2). The new transform was compared to the traditional decimated wavelet transform in its ability to denoise MRI data. By using denoised images as the basis for the computation of a nuclear magnetic resonance spin-spin relaxation-time map through least squares curve fitting, an error map was generated that was used to assess the performance of the denoising algorithms. The discrete torus wavelet transform outperformed the traditional wavelet transform in 88% of the T2 error map denoising tests with phantoms and gynecologic MRI images.
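The computational trick the abstract describes, replacing direct convolution by pointwise multiplication of discrete Fourier transforms on the torus, can be sketched in a few lines (our illustration, not the authors' code):

```python
import numpy as np

def circular_convolve_fft(image, kernel):
    # On the discrete torus, circular convolution equals pointwise
    # multiplication of 2-D DFTs; the kernel is zero-padded to the
    # image shape before transforming.
    F = np.fft.fft2(image) * np.fft.fft2(kernel, s=image.shape)
    return np.real(np.fft.ifft2(F))
```

Because every filtering step becomes an elementwise product in the frequency domain, an undecimated (shift-invariant) wavelet decomposition can be computed without the cost of repeated spatial convolutions.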
Inference of Functional Relations in Predicted Protein Networks with a Machine Learning Approach
Ezkurdia, Iakes; Andrés-León, Eduardo; Valencia, Alfonso
2010-01-01
Background Molecular biology is currently facing the challenging task of functionally characterizing the proteome. The large number of possible protein-protein interactions and complexes, the variety of environmental conditions and cellular states in which these interactions can be reorganized, and the multiple ways in which a protein can influence the function of others require the development of experimental and computational approaches to analyze and predict functional associations between proteins as part of their activity in the interactome. Methodology/Principal Findings We have studied the possibility of constructing a classifier in order to combine the output of several protein interaction prediction methods. The AODE (Averaged One-Dependence Estimators) machine learning algorithm is a suitable choice in this case: it provides better results than the individual prediction methods and better performance than the other alternative methods tested in this experimental setup. To illustrate the potential use of this new AODE-based Predictor of Protein InterActions (APPIA) when analyzing high-throughput experimental data, we show how it helps to filter the results of published high-throughput proteomic studies, ranking functionally related pairs in a significant way. Availability: All the predictions of the individual methods and of the combined APPIA predictor, together with the datasets of functional associations used, are available at http://ecid.bioinfo.cnio.es/. Conclusions We propose a strategy that integrates the main current computational techniques used to predict functional associations into a unified classifier system, specifically focusing on the evaluation of poorly characterized protein pairs. We selected the AODE classifier as the appropriate tool to perform this task. AODE is particularly useful for extracting valuable information from large unbalanced and heterogeneous data sets. The combination of the information provided by five
Huang, Yi-Fei; Golding, G Brian
2014-01-01
A critical question in biology is the identification of functionally important amino acid sites in proteins. Because functionally important sites are under stronger purifying selection, site-specific substitution rates tend to be lower than usual at these sites. A large number of phylogenetic models have been developed to estimate site-specific substitution rates in proteins and the extraordinarily low substitution rates have been used as evidence of function. Most of the existing tools, e.g. Rate4Site, assume that site-specific substitution rates are independent across sites. However, site-specific substitution rates may be strongly correlated in the protein tertiary structure, since functionally important sites tend to be clustered together to form functional patches. We have developed a new model, GP4Rate, which incorporates the Gaussian process model with the standard phylogenetic model to identify slowly evolved regions in protein tertiary structures. GP4Rate uses the Gaussian process to define a nonparametric prior distribution of site-specific substitution rates, which naturally captures the spatial correlation of substitution rates. Simulations suggest that GP4Rate can potentially estimate site-specific substitution rates with a much higher accuracy than Rate4Site and tends to report slowly evolved regions rather than individual sites. In addition, GP4Rate can estimate the strength of the spatial correlation of substitution rates from the data. By applying GP4Rate to a set of mammalian B7-1 genes, we found a highly conserved region which coincides with experimental evidence. GP4Rate may be a useful tool for the in silico prediction of functionally important regions in the proteins with known structures.
Binary black hole merger rates inferred from luminosity function of ultra-luminous X-ray sources
NASA Astrophysics Data System (ADS)
Inoue, Yoshiyuki; Tanaka, Yasuyuki T.; Isobe, Naoki
2016-10-01
The Advanced Laser Interferometer Gravitational-Wave Observatory (aLIGO) has detected direct signals of gravitational waves (GWs) from GW150914. The event was a merger of binary black holes whose masses are 36^{+5}_{-4} M_⊙ and 29^{+4}_{-4} M_⊙. Such binary systems are expected to be directly evolved from stellar binary systems or formed by dynamical interactions of black holes in dense stellar environments. Here we derive the binary black hole merger rate based on the nearby ultra-luminous X-ray source (ULX) luminosity function (LF) under the assumption that binary black holes evolve through X-ray emitting phases. We obtain the binary black hole merger rate as 5.8 (t_ULX/0.1 Myr)^{-1} λ^{-0.6} exp(-0.30λ) Gpc^{-3} yr^{-1}, where t_ULX is the typical duration of the ULX phase and λ is the Eddington ratio in luminosity. This is coincident with the event rate inferred from the detection of GW150914 as well as the predictions based on binary population synthesis models. Although we are currently unable to constrain the Eddington ratio of ULXs in luminosity due to the uncertainties of our models and measured binary black hole merger event rates, further X-ray and GW data will allow us to narrow down the range of the Eddington ratios of ULXs. We also find the cumulative merger rate for the mass range of 5 M_⊙ ≤ M_BH ≤ 100 M_⊙ inferred from the ULX LF is consistent with that estimated by the aLIGO collaboration considering various astrophysical conditions such as the mass function of black holes.
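The quoted merger-rate expression is straightforward to evaluate numerically (a direct transcription of the formula above; units as stated there, Gpc^-3 yr^-1):

```python
import math

def merger_rate(t_ulx_myr, lam):
    # R = 5.8 * (t_ULX / 0.1 Myr)^-1 * lam^-0.6 * exp(-0.30 * lam)
    # t_ulx_myr: ULX phase duration in Myr; lam: Eddington ratio.
    return 5.8 / (t_ulx_myr / 0.1) * lam ** -0.6 * math.exp(-0.30 * lam)
```

Note the rate scales inversely with the assumed ULX lifetime, which is why t_ULX and λ are the key uncertain parameters the abstract discusses.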
Spectral data de-noising using semi-classical signal analysis: application to localized MRS.
Laleg-Kirati, Taous-Meriem; Zhang, Jiayu; Achten, Eric; Serrai, Hacene
2016-10-01
In this paper, we propose a new post-processing technique called semi-classical signal analysis (SCSA) for MRS data de-noising. Similar to Fourier transformation, SCSA decomposes the input real positive MR spectrum into a set of linear combinations of squared eigenfunctions equivalently represented by localized functions with shape derived from the potential function of the Schrödinger operator. In this manner, the MRS spectral peaks represented as a sum of these 'shaped like' functions are efficiently separated from noise and accurately analyzed. The performance of the method is tested by analyzing simulated and real MRS data. The results obtained demonstrate that the SCSA method is highly efficient in localized MRS data de-noising and allows for an accurate data quantification.
The cost of misremembering: Inferring the loss function in visual working memory.
Sims, Chris R
2015-03-04
Visual working memory (VWM) is a highly limited storage system. A basic consequence of this fact is that visual memories cannot perfectly encode or represent the veridical structure of the world. However, in natural tasks, some memory errors might be more costly than others. This raises the intriguing possibility that the nature of memory error reflects the costs of committing different kinds of errors. Many existing theories assume that visual memories are noise-corrupted versions of afferent perceptual signals. However, this additive noise assumption oversimplifies the problem. Implicit in the behavioral phenomena of visual working memory is the concept of a loss function: a mathematical entity that describes the relative cost to the organism of making different types of memory errors. An optimally efficient memory system is one that minimizes the expected loss according to a particular loss function, while subject to a constraint on memory capacity. This paper describes a novel theoretical framework for characterizing visual working memory in terms of its implicit loss function. Using inverse decision theory, the empirical loss function is estimated from the results of a standard delayed recall visual memory experiment. These results are compared to the predicted behavior of a visual working memory system that is optimally efficient for a previously identified natural task, gaze correction following saccadic error. Finally, the approach is compared to alternative models of visual working memory, and shown to offer a superior account of the empirical data across a range of experimental datasets.
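The core notion of choosing the response that minimizes expected loss under a given loss function can be sketched generically (our toy illustration; the paper's inverse-decision-theory estimation of the empirical loss function is far richer):

```python
def optimal_response(candidates, samples, loss):
    # An optimally efficient reporter chooses, from a grid of candidate
    # responses, the one minimizing mean loss over remembered samples.
    # `loss` maps an error (response - sample) to its cost.
    return min(candidates,
               key=lambda c: sum(loss(c - s) for s in samples) / len(samples))
```

Different loss functions imply different optimal reports for the same memory contents: squared loss favors the mean of the samples, absolute loss the median, which is exactly why the shape of the inferred loss function is diagnostic.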
Topological Inference of Teleology: Deriving Function from Structure via Evidential Reasoning
1999-06-10
His system, DUDU, takes as input Pascal-like pseudo-code and either verifies its correctness or produces an explanation of the problems it finds. The explicit representation of function in DUDU enables the program to focus its efforts on buggy parts of the input code rather than constructing a proof.
Pragmatic Inferences in High-Functioning Adults with Autism and Asperger Syndrome
ERIC Educational Resources Information Center
Pijnacker, Judith; Hagoort, Peter; Buitelaar, Jan; Teunisse, Jan-Pieter; Geurts, Bart
2009-01-01
Although people with autism spectrum disorders (ASD) often have severe problems with pragmatic aspects of language, little is known about their pragmatic reasoning. We carried out a behavioral study on high-functioning adults with autistic disorder (n = 11) and Asperger syndrome (n = 17) and matched controls (n = 28) to investigate whether they…
The Luminosity Function at z ~ 8 from 97 Y-band Dropouts: Inferences about Reionization
NASA Astrophysics Data System (ADS)
Schmidt, Kasper B.; Treu, Tommaso; Trenti, Michele; Bradley, Larry D.; Kelly, Brandon C.; Oesch, Pascal A.; Holwerda, Benne W.; Shull, J. Michael; Stiavelli, Massimo
2014-05-01
We present the largest search to date for Y-band dropout galaxies (z ~ 8 Lyman break galaxies, LBGs) based on 350 arcmin2 of Hubble Space Telescope observations in the V, Y, J, and H bands from the Brightest of Reionizing Galaxies (BoRG) survey. In addition to previously published data, the BoRG13 data set presented here includes approximately 50 arcmin2 of new data and deeper observations of two previous BoRG pointings, from which we present 9 new z ~ 8 LBG candidates, bringing the total number of BoRG Y-band dropouts to 38 with 25.5 ≤ m_J ≤ 27.6 (AB system). We introduce a new Bayesian formalism for estimating the galaxy luminosity function, which does not require binning (and thus smearing) of the data and includes a likelihood based on the formally correct binomial distribution as opposed to the often-used approximate Poisson distribution. We demonstrate the utility of the new method on a sample of 97 Y-band dropouts that combines the bright BoRG galaxies with the fainter sources published in Bouwens et al. from the Hubble Ultra Deep Field and Early Release Science programs. We show that the z ~ 8 luminosity function is well described by a Schechter function over its full dynamic range with a characteristic magnitude M* = -20.15^{+0.29}_{-0.38}, a faint-end slope of α = -1.87^{+0.26}_{-0.26}, and a number density of log10 φ* [Mpc^{-3}] = -3.24^{+0.25}_{-0.24}. Integrated down to M = -17.7, this luminosity function yields a luminosity density log10 ε [erg s^{-1} Hz^{-1} Mpc^{-3}] = 25.52^{+0.05}_{-0.05}. Our luminosity function analysis is consistent with previously published determinations within 1σ. The error analysis suggests that uncertainties on the faint-end slope are still too large to draw a firm conclusion about its evolution with redshift. We use our statistical framework to discuss the implication of our study for the physics of reionization. By assuming theoretically motivated priors on the clumping
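The fitted Schechter function can be evaluated directly from the quoted best-fit parameters (a standard magnitude-form Schechter expression written for this summary; the defaults are the central values above, treated as illustrative):

```python
import math

def schechter_mag(M, M_star=-20.15, alpha=-1.87, log10_phi_star=-3.24):
    # Schechter luminosity function in magnitudes [Mpc^-3 mag^-1]:
    # phi(M) = 0.4 ln(10) phi* x^(alpha+1) exp(-x), with x = L/L*.
    phi_star = 10 ** log10_phi_star
    x = 10 ** (-0.4 * (M - M_star))      # luminosity ratio L / L*
    return 0.4 * math.log(10) * phi_star * x ** (alpha + 1) * math.exp(-x)
```

With a faint-end slope steeper than -1, the number density keeps rising toward fainter magnitudes, which is why the faint-end slope dominates the integrated luminosity density relevant to reionization.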
Bayesian inference for functional response in a stochastic predator-prey system.
Gilioli, Gianni; Pasquali, Sara; Ruggeri, Fabrizio
2008-02-01
We present a Bayesian method for functional response parameter estimation starting from time series of field data on predator-prey dynamics. Population dynamics is described by a system of stochastic differential equations in which behavioral stochasticities are represented by noise terms affecting each population as well as their interaction. We focus on the estimation of a behavioral parameter appearing in the functional response of predator to prey abundance when a small number of observations is available. To deal with small sample sizes, latent data are introduced between each pair of field observations and are considered as missing data. The method is applied to both simulated and observational data. The results obtained using different numbers of latent data are compared with those achieved following a frequentist approach. As a case study, we consider an acarine predator-prey system relevant to biological control problems.
Inference for the median residual life function in sequential multiple assignment randomized trials.
Kidwell, Kelley M; Ko, Jin H; Wahed, Abdus S
2014-04-30
In survival analysis, median residual lifetime is often used as a summary measure to assess treatment effectiveness; it is not clear, however, how such a quantity could be estimated for a given dynamic treatment regimen using data from sequential randomized clinical trials. We propose a method to estimate a dynamic treatment regimen-specific median residual life (MERL) function from sequential multiple assignment randomized trials. We present the MERL estimator, which is based on inverse probability weighting, as well as two variance estimates for the MERL estimator. One variance estimate follows from Lunceford, Davidian and Tsiatis' 2002 survival function-based variance estimate and the other uses the sandwich estimator. The MERL estimator is evaluated, and its two variance estimates are compared, through simulation studies, showing that the estimator and both variance estimates produce approximately unbiased results in large samples. To demonstrate our methods, the estimator has been applied to data from a sequentially randomized leukemia clinical trial.
Simple Math is Enough: Two Examples of Inferring Functional Associations from Genomic Data
NASA Technical Reports Server (NTRS)
Liang, Shoudan
2003-01-01
Non-random features in genomic data are usually biologically meaningful. The key is to choose the feature well. Having a p-value-based score prioritizes the findings. If two proteins share an unusually large number of common interaction partners, they tend to be involved in the same biological process. We used this finding to predict the functions of 81 un-annotated proteins in yeast.
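Testing whether two proteins share "an unusually large number" of partners is naturally a hypergeometric tail probability; here is a sketch of that standard calculation (our illustration; the paper's exact scoring may differ):

```python
from math import comb

def shared_partner_pvalue(N, k1, k2, shared):
    # P(X >= shared) when the second protein's k2 partners are drawn at
    # random from N proteins, k1 of which are partners of the first
    # protein (upper tail of the hypergeometric distribution).
    total = comb(N, k2)
    return sum(comb(k1, s) * comb(N - k1, k2 - s)
               for s in range(shared, min(k1, k2) + 1)) / total
```

A small p-value means the overlap is unlikely by chance, supporting the inference that the two proteins act in the same biological process.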
True 4D Image Denoising on the GPU.
Eklund, Anders; Andersson, Mats; Knutsson, Hans
2011-01-01
The use of image denoising techniques is an important part of many medical imaging applications. One common application is to improve the image quality of low-dose (noisy) computed tomography (CT) data. While 3D image denoising has previously been applied to several volumes independently, not much work has been done on true 4D image denoising, where the algorithm considers several volumes at the same time. The problem with 4D image denoising, compared to 2D and 3D denoising, is that the computational complexity increases exponentially. In this paper we describe a novel algorithm for true 4D image denoising, based on local adaptive filtering, and how to implement it on the graphics processing unit (GPU). The algorithm was applied to a 4D CT heart dataset with a resolution of 512 × 512 × 445 × 20. The result is that the GPU can complete the denoising in about 25 minutes if spatial filtering is used and in about 8 minutes if FFT-based filtering is used. The CPU implementation requires several days of processing time for spatial filtering and about 50 minutes for FFT-based filtering. The short processing time significantly increases the clinical value of true 4D image denoising.
Remote sensing image denoising by using discrete multiwavelet transform techniques
NASA Astrophysics Data System (ADS)
Wang, Haihui; Wang, Jun; Zhang, Jian
2006-01-01
In this paper we present a new image denoising method using the GHM discrete multiwavelet transform. Developments in wavelet theory have given rise to wavelet thresholding, a popular method for extracting a signal from noisy data. Multiwavelets have recently been introduced; they offer simultaneous orthogonality, symmetry and short support, which makes them more suitable than scalar wavelets for various image processing applications, especially denoising. The method is based on thresholding of multiwavelet coefficients arising from preprocessing followed by the discrete multiwavelet transform, and it takes the covariance structure of the transform into account. The form of the threshold is carefully formulated and is the key to the excellent results obtained in extensive numerical simulations of image denoising. We apply the multiwavelet-based scheme to remote sensing image denoising. The multiwavelet transform technique is rather new, and it has the advantage over other techniques that it distorts the spectral characteristics of the denoised image less. The experimental results show that multiwavelet-based image denoising schemes outperform wavelet-based methods both subjectively and objectively.
Image denoising using principal component analysis in the wavelet domain
NASA Astrophysics Data System (ADS)
Bacchelli, Silvia; Papi, Serena
2006-05-01
In this work we describe a method for removing Gaussian noise from digital images, based on the combination of the wavelet packet transform and the principal component analysis. In particular, since the aim of denoising is to retain the energy of the signal while discarding the energy of the noise, our basic idea is to construct powerful tailored filters by applying the Karhunen-Loeve transform in the wavelet packet domain, thus obtaining a compaction of the signal energy into a few principal components, while the noise is spread over all the transformed coefficients. This allows us to act with a suitable shrinkage function on these new coefficients, removing the noise without blurring the edges and the important characteristics of the images. The results of a large numerical experimentation encourage us to keep going in this direction with our studies.
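The energy-compaction step described above, projecting coefficient vectors onto a few principal components and discarding the rest, can be sketched with an SVD (our illustration; the full method applies this in the wavelet packet domain with a tailored shrinkage function rather than simple truncation):

```python
import numpy as np

def pca_shrink(coeff_vectors, n_keep):
    # Keep only the n_keep principal components of the coefficient
    # vectors (one per row); noise energy spread across the minor
    # components is discarded, while signal energy is retained.
    X = np.asarray(coeff_vectors, float)
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    s[n_keep:] = 0.0                      # zero out minor components
    return mean + (U * s) @ Vt
```

Because the Karhunen-Loeve transform concentrates correlated signal energy into the leading components while white noise spreads evenly over all of them, truncating the spectrum removes mostly noise.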
Function of pretribosphenic and tribosphenic mammalian molars inferred from 3D animation.
Schultz, Julia A; Martin, Thomas
2014-10-01
Appearance of the tribosphenic molar in the Late Jurassic (160 Ma) is a crucial innovation for food processing in mammalian evolution. This molar type is characterized by a protocone, a talonid basin and a two-phased chewing cycle, all of which are apomorphic. In this functional study on the teeth of Late Jurassic Dryolestes leiriensis and the living marsupial Monodelphis domestica, we demonstrate that pretribosphenic and tribosphenic molars show fundamental differences of food reduction strategies, representing a shift in dental function during the transition of tribosphenic mammals. By using the Occlusal Fingerprint Analyser (OFA), we simulated the chewing motions of the pretribosphenic Dryolestes that represents an evolutionary precursor condition to such tribosphenic mammals as Monodelphis. Animation of chewing path and detection of collisional contacts between virtual models of teeth suggests that Dryolestes differs from the classical two-phased chewing movement of tribosphenidans, due to the narrowing of the interdental space in cervical (crown-root transition) direction, the inclination angle of the hypoflexid groove, and the unicuspid talonid. The pretribosphenic chewing cycle is equivalent to phase I of the tribosphenic chewing cycle, but the former lacks phase II of the tribosphenic chewing. The new approach can analyze the chewing cycle of the jaw by using polygonal 3D models of tooth surfaces, in a way that is complementary to the electromyography and strain gauge studies of muscle function of living animals. The technique allows alignment and scaling of isolated fossil teeth and utilizes the wear facet orientation and striation of the teeth to reconstruct the chewing path of extinct mammals.
The use of structural modelling to infer structure and function in biocontrol agents.
Berry, Colin; Board, Jason
2017-01-01
Homology modelling can provide important insights into the structures of proteins when a related protein structure has already been solved. However, for many proteins, including a number of invertebrate-active toxins and accessory proteins, no such templates exist. In these cases, techniques of ab initio, template-independent modelling can be employed to generate models that may give insight into structure and function. In this overview, examples of both the problems and the potential benefits of ab initio techniques are illustrated. Consistent modelling results may indicate useful approximations to actual protein structures and can thus allow the generation of hypotheses regarding activity that can be tested experimentally.
NASA Astrophysics Data System (ADS)
Newell, P. T.; Sotirelis, T.; Liou, K.; Meng, C. I.; Rich, F. J.
2006-12-01
We investigated whether one or a few coupling functions can best represent the interaction between the solar wind and the magnetosphere. Ten characterizations of the magnetosphere, five from ground-based magnetometers (Dst, Kp, AE, AU, and AL) and five from other sources (auroral power (Polar UVI), cusp latitude and b2i (both DMSP), geosynchronous magnetic inclination angle (GOES), and polar cap size (SuperDARN)), were correlated with more than 20 candidate solar wind coupling functions. A single coupling function, representing the rate at which magnetic flux is opened at the magnetopause, correlated best with 9 out of 10 indices of magnetospheric condition. This is dΦ_MP/dt = v^{4/3} B_T^{2/3} sin^{8/3}(θ_c/2), calculated from (rate at which IMF field lines approach the magnetopause, v)(percent of IMF lines which merge, sin^{8/3}(θ_c/2))(magnitude of the magnetopause field, B_MP)(merging line length, (B_T/B_MP)^{2/3}). The merging line length is based on flux matching between the solar wind and a dipole field, and agrees with a superposed IMF on a vacuum dipole. The IMF clock angle dependence matches the merging rate reported at high altitude. The nonlinearities of the magnetospheric response to B_T and v are evident when the mean values of indices are plotted, as well as in the superior correlations from dΦ_MP/dt. A wide variety of magnetospheric phenomena can thus be accurately predicted ab initio by just a single function, estimating the rate at which magnetic flux is opened on the dayside magnetopause. Across all state variables studied, dΦ_MP/dt accounts for about 57.2 percent of the variance, compared to 50.9 for E_KL and 48.8 for vBs. All data sets included thousands of points over many years, up to two solar cycles. The sole index which does not correlate best with dΦ_MP/dt is Dst, which correlates best (r = 0.87) with p^{1/2} dΦ_MP/dt. If dΦ_MP/dt were credited with this success, its average score would be even higher.
NASA Astrophysics Data System (ADS)
Newell, P. T.; Sotirelis, T.; Liou, K.; Meng, C.-I.; Rich, F. J.
2007-01-01
We investigated whether one or a few coupling functions can best represent the interaction between the solar wind and the magnetosphere over a wide variety of magnetospheric activity. Ten variables which characterize the state of the magnetosphere were studied. Five indices from ground-based magnetometers were selected, namely Dst, Kp, AE, AU, and AL, and five from other sources, namely auroral power (Polar UVI), cusp latitude (sin(Λc)), b2i (both DMSP), geosynchronous magnetic inclination angle (GOES), and polar cap size (SuperDARN). These indices were correlated with more than 20 candidate solar wind coupling functions. One function, representing the rate at which magnetic flux is opened at the magnetopause, correlated best with 9 out of 10 indices of magnetospheric activity. This is dΦ_MP/dt = v^(4/3) B_T^(2/3) sin^(8/3)(θ_c/2), calculated from (rate at which IMF field lines approach the magnetopause, ~v)(fraction of IMF lines which merge, sin^(8/3)(θ_c/2))(interplanetary field magnitude, B_T)(merging line length, ~(B_MP/B_T)^(1/3)). The merging line length is based on flux matching between the solar wind and a dipole field and agrees with a superposed IMF on a vacuum dipole. The IMF clock angle dependence matches the merging rate reported (albeit with limited statistics) at high altitude. The nonlinearities of the magnetospheric response to B_T and v are evident when the mean values of indices are plotted, in scatterplots, and in the superior correlations from dΦ_MP/dt. Our results show that a wide variety of magnetospheric phenomena can be predicted with reasonable accuracy (r > 0.80 in several cases) ab initio, that is without the time history of the target index, by a single function, estimating the dayside merging rate. Across all state variables studied (including AL, which is hard to predict, and polar cap size, which is hard to measure), dΦ_MP/dt accounts for about 57.2% of the variance, compared to 50.9% for E_KL and 48.8% for vB_s. All data sets included at least thousands of points over many years, up to two solar cycles.
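The coupling function is easy to evaluate directly from upstream solar wind monitor data. A minimal sketch (variable names and unit conventions are ours, not the authors'):

```python
import math

def d_phi_mp_dt(v, by, bz):
    """Coupling function dPhi_MP/dt = v^(4/3) * B_T^(2/3) * sin^(8/3)(theta_c/2).

    v      -- solar wind speed (km/s)
    by, bz -- IMF GSM components (nT)
    Returns the merging-rate proxy in mixed (non-SI) units, as is usual
    for this kind of index; only relative values matter for correlations.
    """
    bt = math.hypot(by, bz)            # transverse IMF magnitude B_T
    theta_c = math.atan2(by, bz)       # IMF clock angle
    return v**(4/3) * bt**(2/3) * abs(math.sin(theta_c / 2))**(8/3)

# Purely southward IMF (bz < 0) gives clock angle pi and maximal coupling;
# purely northward IMF gives zero.
strong = d_phi_mp_dt(450.0, 0.0, -5.0)
weak = d_phi_mp_dt(450.0, 0.0, 5.0)
```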
Goodenberger, Katherine E; Boyer, Doug M; Orr, Caley M; Jacobs, Rachel L; Femiani, John C; Patel, Biren A
2015-03-01
Primate evolutionary morphologists have argued that selection for life in a fine branch niche resulted in grasping specializations that are reflected in the hallucal metatarsal (Mt1) morphology of extant "prosimians", while a transition to use of relatively larger, horizontal substrates explains the apparent loss of such characters in anthropoids. Accordingly, these morphological characters (Mt1 torsion, peroneal process length and thickness, and physiological abduction angle) have been used to reconstruct grasping ability and locomotor mode in the earliest fossil primates. Although these characters are prominently featured in debates on the origin and subsequent radiation of Primates, questions remain about their functional significance. This study examines the relationship between these morphological characters of the Mt1 and a novel metric of pedal grasping ability for a large number of extant taxa in a phylogenetic framework. Results indicate greater Mt1 torsion in taxa that engage in hallucal grasping and in those that utilize relatively small substrates more frequently. This study provides evidence that Carpolestes simpsoni has a torsion value more similar to grasping primates than to any scandentian. The results also show that taxa that habitually grasp vertical substrates are distinguished from other taxa in having relatively longer peroneal processes. Furthermore, a longer peroneal process is also correlated with calcaneal elongation, a metric previously found to reflect leaping proclivity. A more refined understanding of the functional associations between Mt1 morphology and behavior in extant primates enhances the potential for using these morphological characters to comprehend primate (locomotor) evolution.
Inferring cortical function in the mouse visual system through large-scale systems neuroscience.
Hawrylycz, Michael; Anastassiou, Costas; Arkhipov, Anton; Berg, Jim; Buice, Michael; Cain, Nicholas; Gouwens, Nathan W; Gratiy, Sergey; Iyer, Ramakrishnan; Lee, Jung Hoon; Mihalas, Stefan; Mitelut, Catalin; Olsen, Shawn; Reid, R Clay; Teeter, Corinne; de Vries, Saskia; Waters, Jack; Zeng, Hongkui; Koch, Christof
2016-07-05
The scientific mission of Project MindScope is to understand neocortex, the part of the mammalian brain that gives rise to perception, memory, intelligence, and consciousness. We seek to quantitatively evaluate the hypothesis that neocortex is a relatively homogeneous tissue, with smaller functional modules that perform a common computational function replicated across regions. We here focus on the mouse as a mammalian model organism with genetics, physiology, and behavior that can be readily studied and manipulated in the laboratory. We seek to describe the operation of cortical circuitry at the computational level by comprehensively cataloging and characterizing its cellular building blocks along with their dynamics and their cell type-specific connectivities. The project is also building large-scale experimental platforms (i.e., brain observatories) to record the activity of large populations of cortical neurons in behaving mice subject to visual stimuli. A primary goal is to understand the series of operations from visual input in the retina to behavior by observing and modeling the physical transformations of signals in the corticothalamic system. We here focus on the contribution that computer modeling and theory make to this long-term effort.
Combining interior and exterior characteristics for remote sensing image denoising
NASA Astrophysics Data System (ADS)
Peng, Ni; Sun, Shujin; Wang, Runsheng; Zhong, Ping
2016-04-01
Remote sensing image denoising faces many challenges since a remote sensing image usually covers a wide area and thus contains complex contents. Using patch-based statistical characteristics is a flexible way to improve denoising performance. There are usually two kinds of statistical characteristics available: interior and exterior. Different statistical characteristics have their own strengths in restoring specific image contents, so combining them may improve denoising results. This work proposes a method that combines the two kinds of statistical characteristics and adaptively selects between them for different image contents. The proposed approach is implemented through a new characteristics selection criterion learned over training data. Moreover, with the proposed combination method, this work develops a denoising algorithm for remote sensing images. Experimental results show that our method can make full use of the advantages of interior and exterior characteristics for different image contents and thus improve the denoising performance.
Dual-domain denoising in three dimensional magnetic resonance imaging.
Peng, Jing; Zhou, Jiliu; Wu, Xi
2016-08-01
Denoising is a crucial preprocessing procedure for three dimensional magnetic resonance imaging (3D MRI). Existing denoising methods are predominantly implemented in a single domain, ignoring information in other domains. However, denoising methods are becoming increasingly complex, making analysis and implementation challenging. The present study aimed to develop a dual-domain image denoising (DDID) algorithm for 3D MRI that encapsulates information from the spatial and transform domains. In the present study, the DDID method was used to distinguish signal from noise in the spatial and frequency domains, after which robust accurate noise estimation was introduced for iterative filtering, which is simple and beneficial for computation. In addition, the proposed method was compared quantitatively and qualitatively with existing methods for synthetic and in vivo MRI datasets. The results of the present study suggested that the novel DDID algorithm performed well and provided competitive results, as compared with existing MRI denoising filters.
Patch-based near-optimal image denoising.
Chatterjee, Priyam; Milanfar, Peyman
2012-04-01
In this paper, we propose a denoising method motivated by our previous analysis of the performance bounds for image denoising. Insights from that study are used here to derive a high-performance practical denoising algorithm. We propose a patch-based Wiener filter that exploits patch redundancy for image denoising. Our framework uses both geometrically and photometrically similar patches to estimate the different filter parameters. We describe how these parameters can be accurately estimated directly from the input noisy image. Our denoising approach, designed for near-optimal performance (in the mean-squared error sense), has a sound statistical foundation that is analyzed in detail. The performance of our approach is experimentally verified on a variety of images and noise levels. The results presented here demonstrate that our proposed method is on par with or exceeds the current state of the art, both visually and quantitatively.
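The Wiener shrinkage at the core of such a filter can be sketched for a single group of similar patches. This toy version assumes additive white Gaussian noise and estimates the clean-patch covariance empirically; the authors' geometric/photometric patch grouping and full parameter estimation are omitted:

```python
import numpy as np

def wiener_shrink_patches(patches, sigma):
    """Empirical Wiener filtering of a stack of similar noisy patches.

    patches -- (N, p) array: N vectorized patches assumed to share statistics
    sigma   -- noise standard deviation
    Returns the (N, p) array of denoised patches.
    """
    mean = patches.mean(axis=0)
    centered = patches - mean
    # Empirical covariance of the noisy patches; subtracting sigma^2 * I
    # estimates the clean-signal covariance (clipped to remain PSD).
    cov = centered.T @ centered / max(len(patches) - 1, 1)
    evals, evecs = np.linalg.eigh(cov)
    signal_var = np.clip(evals - sigma**2, 0.0, None)
    # Wiener gains in the eigenbasis: s / (s + sigma^2), between 0 and 1.
    gains = signal_var / (signal_var + sigma**2)
    coeffs = centered @ evecs
    return mean + (coeffs * gains) @ evecs.T
```

Eigendirections dominated by noise get gains near zero and are suppressed; directions carrying real patch structure pass through almost unchanged.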
Improved Denoising via Poisson Mixture Modeling of Image Sensor Noise.
Zhang, Jiachao; Hirakawa, Keigo
2017-04-01
This paper describes a study aimed at comparing the real image sensor noise distribution to the models of noise often assumed in image denoising designs. A quantile analysis in the pixel, wavelet transform, and variance stabilization domains reveals that the tails of the Poisson, signal-dependent Gaussian, and Poisson-Gaussian models are too short to capture real sensor noise behavior. A new Poisson mixture noise model is proposed to correct the mismatch of tail behavior. Based on the fact that noise model mismatch results in image denoising that undersmoothes real sensor data, we propose a mixture-of-Poisson denoising method to remove the denoising artifacts without affecting image details, such as edges and textures. Experiments with real sensor data verify that denoising for real image sensor data is indeed improved by this new technique.
A connection between score matching and denoising autoencoders.
Vincent, Pascal
2011-07-01
Denoising autoencoders have been previously shown to be competitive alternatives to restricted Boltzmann machines for unsupervised pretraining of each layer of a deep architecture. We show that a simple denoising autoencoder training criterion is equivalent to matching the score (with respect to the data) of a specific energy-based model to that of a nonparametric Parzen density estimator of the data. This yields several useful insights. It defines a proper probabilistic model for the denoising autoencoder technique, which makes it in principle possible to sample from it or rank examples by their energy. It suggests a different way to apply score matching that is related to learning to denoise and does not require computing second derivatives. It justifies the use of tied weights between the encoder and decoder and suggests ways to extend the success of denoising autoencoders to a larger family of energy-based models.
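The link can be made concrete in the Gaussian special case: by Tweedie's formula, the optimal denoiser's residual equals σ² times the score of the corrupted-data density, which is exactly the quantity a denoising autoencoder is trained to estimate. A toy 1-D check with a Gaussian prior, where both sides are available in closed form (our illustration, not from the paper):

```python
# Prior x ~ N(0, s2); corruption xt = x + N(0, sig2).
s2, sig2 = 4.0, 1.0
xt = 1.7                                 # an arbitrary noisy observation

# Optimal (MMSE) denoiser for this conjugate pair: E[x | xt].
denoised = xt * s2 / (s2 + sig2)

# Score of the marginal (smoothed) density p(xt) = N(0, s2 + sig2).
score = -xt / (s2 + sig2)

# Tweedie: denoising residual = sig2 * grad log p(xt).
assert abs((denoised - xt) - sig2 * score) < 1e-12
```

The denoiser pulls the observation toward the prior mean by exactly the amount dictated by the score of the corrupted density, so learning to denoise and learning the score are two views of the same objective.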
Denoising portal images by means of wavelet techniques
NASA Astrophysics Data System (ADS)
Gonzalez Lopez, Antonio Francisco
Portal images are used in radiotherapy for the verification of patient positioning. The distinguishing feature of this image type lies in its formation process: the same beam used for patient treatment is used for image formation. The high energy of the photons used in radiotherapy strongly limits the quality of portal images: low contrast between tissues, low spatial resolution, and low signal-to-noise ratio. This Thesis studies the enhancement of these images, in particular denoising of portal images. The statistical properties of portal images and noise are studied: power spectra, statistical dependencies between image and noise, and marginal, joint and conditional distributions in the wavelet domain. Later, various denoising methods are applied to noisy portal images. Methods operating in the wavelet domain are the basis of this Thesis. In addition, the Wiener filter and the non-local means (NLM) filter, operating in the image domain, are used as a reference. Other topics studied in this Thesis are spatial resolution, wavelet processing and image processing in dosimetry in radiotherapy. In this regard, the spatial resolution of portal imaging systems is studied; a new method for determining the spatial resolution of imaging equipment in digital radiology is presented; the calculation of the power spectrum in the wavelet domain is studied; reducing uncertainty in film dosimetry is investigated; a method for the dosimetry of small radiation fields with radiochromic film is presented; the optimal signal resolution is determined, as a function of the noise level and the quantization step, in the digitization process of films; and the useful optical density range is set, as a function of the required uncertainty level, for a densitometric system. Marginal distributions of portal images are similar to those of natural images. This also applies to the statistical relationships between wavelet coefficients, intra-band and inter-band. These facts result in a better
Alpha values as a function of sample size, effect size, and power: accuracy over inference.
Bradley, M T; Brand, A
2013-06-01
Tables of alpha values as a function of sample size, effect size, and desired power were presented. The tables indicated expected alphas for small, medium, and large effect sizes given a variety of sample sizes. It was evident that sample sizes for most psychological studies are adequate for large effect sizes defined at .8. The typical alpha level of .05 and desired power of 90% can be achieved with 70 participants in two groups. It was perhaps doubtful if these ideal levels of alpha and power have generally been achieved for medium effect sizes in actual research, since 170 participants would be required. Small effect sizes have rarely been tested with an adequate number of participants or power. Implications were discussed.
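The quoted figures follow from the standard normal-approximation sample-size formula for a two-group comparison, n per group ≈ 2((z_(1-α/2) + z_power)/d)². A quick stdlib check (the few extra participants in the abstract's totals come from the t-distribution correction, which this approximation omits):

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.90):
    """Approximate per-group sample size for a two-sample test of effect size d."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = .05
    z_b = z.inv_cdf(power)           # ~1.28 for 90% power
    return math.ceil(2 * ((z_a + z_b) / d) ** 2)

# Large effect (d = .8): 33 per group, i.e. roughly 70 participants in two
# groups once the small-sample t correction is added.
print(n_per_group(0.8))
# Medium effect (d = .5): 85 per group, about 170 participants in total.
print(n_per_group(0.5))
```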
NASA Astrophysics Data System (ADS)
Igarashi, Toshihiro
2016-04-01
The stress concentration and strain accumulation process due to inter-plate coupling of the subducting plate should have a large effect on inland shallow earthquakes that occur in the overriding plate. Information on the crustal structure and the crustal thickness is important to understanding this process. In this study, I applied receiver function analysis using similar earthquakes to estimate the crustal velocity structures beneath the Japanese Islands. Because similar earthquakes occur repeatedly at almost the same place, they are useful for extracting information on spatial distribution and temporal changes of seismic velocity structures beneath the seismic stations. I used telemetric seismographic network data covering the Japanese Islands and moderate-sized similar earthquakes which occurred in the Southern Hemisphere with epicentral distances between 30 and 90 degrees for about 26 years from October 1989. Data analysis was performed separately before and after the 2011 Tohoku-Oki earthquake. To identify the spatial distribution of crustal structure, I searched for the best-correlated model between an observed receiver function at each station and synthetic ones by using a grid search method. As a result, I clarified the spatial distribution of the crustal velocity structures. The spatial patterns of velocities from the ground surface to 5 km depth correspond with basement depth models, although the velocities are slower than those of tomography models. They indicate thick sediment layers in several plain and basin areas. The crustal velocity perturbations are consistent with existing tomography models. The active volcanoes correspond to low-velocity zones from the upper crust to the crust-mantle transition. A comparison of the crustal structure before and after the 2011 Tohoku-Oki earthquake suggests that the northeastern Japan arc changed to lower velocities in some areas. This kind of velocity change might be due to other effects such as changes of
Carr, Andrew; Tibbetts, Ian R; Kemp, Anne; Truss, Rowan; Drennan, John
2006-10-01
Morphology, occlusal surface topography, macrowear, and microwear features of parrotfish pharyngeal teeth were investigated to relate microstructural characteristics to the function of the pharyngeal mill using scanning electron microscopy of whole and sectioned pharyngeal jaws and teeth. Pharyngeal tooth migration is anterior in the lower jaw (fifth ceratobranchial) and posterior in the upper jaw (paired third pharyngobranchials), making the interaction of occlusal surfaces and wear-generating forces complex. The extent of wear can be used to define three regions through which teeth migrate: a region containing newly erupted teeth showing little or no wear; a midregion in which the apical enameloid is swiftly worn; and a region containing teeth with only basal enameloid remaining, which shows low to moderate wear. The shape of the occlusal surface alters as the teeth progress along the pharyngeal jaw, generating conditions that appear suited to the reduction of coral particles. It is likely that the interaction between these particles and algal cells during the process of the rendering of the former is responsible for the rupture of the latter, with the consequent liberation of cell contents from which parrotfish obtain their nutrients.
An expression atlas of human primary cells: inference of gene function from coexpression networks
2013-01-01
Background The specialisation of mammalian cells in time and space requires genes associated with specific pathways and functions to be co-ordinately expressed. Here we have combined a large number of publicly available microarray datasets derived from human primary cells and analysed large correlation graphs of these data. Results Using the network analysis tool BioLayout Express3D we identify robust co-associations of genes expressed in a wide variety of cell lineages. We discuss the biological significance of a number of these associations, in particular the coexpression of key transcription factors with the genes that they are likely to control. Conclusions We consider the regulation of genes in human primary cells and specifically in the human mononuclear phagocyte system. Of particular note is the fact that these data do not support the identity of putative markers of antigen-presenting dendritic cells, nor the classification of M1 and M2 activation states, a current subject of debate within the immunological field. We have provided this data resource on the BioGPS web site (http://biogps.org/dataset/2429/primary-cell-atlas/) and on macrophages.com (http://www.macrophages.com/hu-cell-atlas). PMID:24053356
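The correlation-graph construction underlying such an atlas can be sketched in a few lines: compute pairwise Pearson correlations of genes across samples and keep edges above a threshold. The r_min value and the tiny matrix below are illustrative only; tools such as BioLayout Express3D operate on much larger graphs:

```python
import numpy as np

def coexpression_edges(expr, genes, r_min=0.9):
    """Build a thresholded coexpression edge list.

    expr  -- (n_genes, n_samples) expression matrix, one row per gene
    genes -- gene names, one per row
    Returns (gene_i, gene_j, r) tuples with Pearson r >= r_min.
    """
    r = np.corrcoef(expr)                    # gene-by-gene correlation matrix
    edges = []
    for i in range(len(genes)):
        for j in range(i + 1, len(genes)):
            if r[i, j] >= r_min:
                edges.append((genes[i], genes[j], round(float(r[i, j]), 3)))
    return edges
```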
Fullard, John F; Giambartolomei, Claudia; Hauberg, Mads E; Xu, Ke; Voloudakis, Georgios; Shao, Zhiping; Bare, Christopher; Dudley, Joel T; Mattheisen, Manuel; Robakis, Nikolaos K; Haroutunian, Vahram; Roussos, Panos
2017-03-14
Open chromatin provides access to DNA binding proteins for the correct spatiotemporal regulation of gene expression. Mapping chromatin accessibility has been widely used to identify the location of cis regulatory elements (CREs) including promoters and enhancers. CREs show tissue- and cell-type specificity and disease-associated variants are often enriched for CREs in the tissues and cells that pertain to a given disease. To better understand the role of CREs in neuropsychiatric disorders we applied the Assay for Transposase Accessible Chromatin followed by sequencing (ATAC-seq) to neuronal and non-neuronal nuclei isolated from frozen postmortem human brain by fluorescence-activated nuclear sorting (FANS). Most of the identified open chromatin regions (OCRs) are differentially accessible between neurons and non-neurons, and show enrichment with known cell type markers, promoters and enhancers. Relative to those of non-neurons, neuronal OCRs are more evolutionarily conserved and are enriched in distal regulatory elements. Transcription factor (TF) footprinting analysis identifies differences in the regulome between neuronal and non-neuronal cells and ascribes putative functional roles to a number of non-coding schizophrenia (SCZ) risk variants. Among the identified variants is a Single Nucleotide Polymorphism (SNP) proximal to the gene encoding SNX19. In vitro experiments reveal that this SNP leads to an increase in transcriptional activity. As elevated expression of SNX19 has been associated with SCZ, our data provides evidence that the identified SNP contributes to disease. These results represent the first analysis of OCRs and TF binding sites in distinct populations of postmortem human brain cells and further our understanding of the regulome and the impact of neuropsychiatric disease-associated genetic risk variants.
NASA Astrophysics Data System (ADS)
Woelbern, I.; Rumpker, G.
2015-12-01
Indonesia is situated at the southern margin of SE Asia, which comprises an assemblage of Gondwana-derived continental terranes, suture zones and volcanic arcs. The formation of SE Asia is believed to have started in the Early Devonian. Its complex history involves the opening and closure of three distinct Tethys oceans, each accompanied by the rifting of continental fragments. We apply the receiver function technique to data of the temporary MERAMEX network operated in Central Java from May to October 2004 by the GeoForschungsZentrum Potsdam. The network consisted of 112 mobile stations with a spacing of about 10 km covering the full width of the island between the southern and northern coast lines. The tectonic history is reflected in a complex crustal structure of Central Java exhibiting strong topography of the Moho discontinuity related to different tectonic units. A discontinuity of negative impedance contrast is observed throughout the mid-crust, interpreted as the top of a low-velocity layer which shows no depth correlation with the Moho interface. Converted phases generated at greater depth beneath Indonesia indicate the existence of multiple seismic discontinuities within the upper mantle and even below. The strongest signal originates from the base of the mantle transition zone, i.e. the 660 km discontinuity. The phase related to the 410 km discontinuity is less pronounced, but clearly identifiable as well. The derived thickness of the mantle-transition zone is in good agreement with the IASP91 velocity model. Additional phases are observed at roughly 33 s and 90 s relative to the P onset, corresponding to about 300 km and 920 km, respectively. A signal of reversed polarity indicates the top of a low velocity layer at about 370 km depth overlying the mantle transition zone.
Fast Translation Invariant Multiscale Image Denoising.
Li, Meng; Ghosal, Subhashis
2015-12-01
Translation invariant (TI) cycle spinning is an effective method for removing artifacts from images. However, for a method using O(n) time, the exact TI cycle spinning by averaging all possible circulant shifts requires O(n^2) time, where n is the number of pixels, and therefore is not feasible in practice. Existing literature has investigated efficient algorithms to calculate the TI version of some denoising approaches such as the Haar wavelet. Multiscale methods, especially those based on likelihood decomposition, such as penalized likelihood estimators and Bayesian methods, have become popular in image processing because of their effectiveness in denoising images. As far as we know, there is no systematic investigation of the TI calculation corresponding to general multiscale approaches. In this paper, we propose a fast TI (FTI) algorithm and a more general k-TI algorithm allowing TI for the last k scales of the image, which are applicable to general d-dimensional images (d = 2, 3, …) with either Gaussian or Poisson noise. The proposed FTI leads to the exact TI estimation but only requires O(n log n) time. The proposed k-TI can achieve almost the same performance as the exact TI estimation, but requires even less time. We achieve this by exploiting the regularity present in the multiscale structure, which is justified theoretically. The proposed FTI and k-TI are generic in that they are applicable to any smoothing technique based on the multiscale structure. We demonstrate the FTI and k-TI algorithms on some recently proposed state-of-the-art methods for both Poisson and Gaussian noised images. Both simulations and real data application confirm the appealing performance of the proposed algorithms. MATLAB toolboxes are accessible online to reproduce the results and can be applied to general multiscale denoising approaches provided by the users.
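Exact TI cycle spinning is simple to state, which makes its cost visible: denoise every circulant shift of the signal, unshift, and average. A naive 1-D reference implementation of this O(n^2) definition follows; the paper's contribution is computing the same estimate in O(n log n). The base denoiser here, a one-level hard-threshold Haar step, is our stand-in choice, not the paper's estimator:

```python
import numpy as np

def haar_threshold(x, t):
    """One-level Haar transform, hard threshold on details, inverse (even length)."""
    a = (x[0::2] + x[1::2]) / 2.0      # approximation coefficients
    d = (x[0::2] - x[1::2]) / 2.0      # detail coefficients
    d[np.abs(d) < t] = 0.0             # hard threshold
    out = np.empty_like(x)
    out[0::2], out[1::2] = a + d, a - d
    return out

def ti_denoise_naive(x, t):
    """Exact TI estimate by averaging all n circulant shifts: O(n^2) work."""
    n = len(x)
    acc = np.zeros_like(x)
    for s in range(n):
        acc += np.roll(haar_threshold(np.roll(x, s), t), -s)
    return acc / n
```

Because every shift is averaged, the estimator commutes with circulant shifts exactly, which is the defining TI property.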
NASA Astrophysics Data System (ADS)
Kaviani, A.; Rumpker, G.
2015-12-01
To account for the presence of seismic anisotropy within the crust and to estimate the relevant parameters, we first discuss a robust technique for the analysis of shear-wave splitting in layered anisotropic media by using converted shear phases. We use a combined approach that involves time-shifting and stacking of radial receiver functions and energy-minimization of transverse receiver functions to constrain the splitting parameters (i.e. the fast-polarization direction and the delay time) for an anisotropic layer. In multi-layered anisotropic media, the splitting parameters for the individual layers can be inferred by a layer-stripping approach, where the splitting effects due to shallower layers on converted phases from deeper discontinuities are successively corrected. The effect of anisotropy on the estimates of crustal thickness and average bulk Vp/Vs ratio can be significant. Recently, we extended the approach of Zhu & Kanamori (2000) to include P-to-S converted waves and their crustal reverberations generated in the anisotropic case. The anisotropic parameters of the medium are first estimated using the splitting analysis of the Ps-phase as described above. Then, a grid-search is performed over layer thickness and Vp/Vs ratio, while accounting for all relevant arrivals (up to 20 phases) in the anisotropic medium. We apply these techniques to receiver-function data from seismological stations across the Turkish-Anatolian Plateau to study seismic anisotropy in the crust and its relationship to crustal tectonics. Preliminary results reveal significant crustal anisotropy and indicate that the strength and direction of the anisotropy vary across the main tectonic boundaries. We also improve the estimates of the crustal thickness and the bulk Vp/Vs ratio by accounting for the presence of crustal anisotropy beneath the station. Reference: Zhu, L. & H. Kanamori (2000), Moho depth variation in southern California from teleseismic receiver functions, J. Geophys. Res.
Musculoskeletal ultrasound image denoising using Daubechies wavelets
NASA Astrophysics Data System (ADS)
Gupta, Rishu; Elamvazuthi, I.; Vasant, P.
2012-11-01
Among various existing medical imaging modalities, ultrasound holds particular promise because of its ready availability and its use of non-ionizing radiation. In this paper we attempt to denoise ultrasound images using Daubechies wavelets and analyze the results with peak signal-to-noise ratio (PSNR) and coefficient of correlation as performance measurement indices. Daubechies wavelets of order 1 to 6 are applied to four different ultrasound bone fracture images at three decomposition levels, from 1 to 3. The resulting images are shown for visual inspection, and the PSNR and coefficient of correlation values are plotted for quantitative analysis.
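For reference, the two performance indices used in the study can be computed as follows; this is a generic sketch with common default conventions (8-bit peak value), not the authors' code:

```python
import numpy as np

def psnr(ref, est, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and an estimate."""
    mse = np.mean((ref.astype(float) - est.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak**2 / mse)

def corr_coef(ref, est):
    """Pearson correlation coefficient between two images (flattened)."""
    return float(np.corrcoef(ref.ravel(), est.ravel())[0, 1])
```

Higher PSNR means lower mean-squared error against the reference; a correlation coefficient near 1 means the denoised image preserves the reference's structure up to an affine intensity change.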
Image denoising using a tight frame.
Shen, Lixin; Papadakis, Manos; Kakadiaris, Ioannis A; Konstantinidis, Ioannis; Kouri, Donald; Hoffman, David
2006-05-01
We present a general mathematical theory for lifting frames that allows us to modify existing filters to construct new ones that form Parseval frames. We apply our theory to design nonseparable Parseval frames from separable (tensor) products of a piecewise linear spline tight frame. These new frame systems incorporate the weighted average operator, the Sobel operator, and the Laplacian operator in directions that are integer multiples of 45 degrees. A new image denoising algorithm is then proposed, tailored to the specific properties of these new frame filters. We demonstrate the performance of our algorithm on a diverse set of images with very encouraging results.
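A Parseval frame is a (possibly redundant) set of vectors {f_k} whose analysis operator preserves energy, sum_k |<x, f_k>|^2 = ||x||^2, without the vectors being orthonormal. A minimal numeric illustration with the classic three-vector "Mercedes-Benz" frame in R^2 (our example, unrelated to the spline-based frames of the paper):

```python
import numpy as np

# Three unit vectors at 120-degree spacing, scaled by sqrt(2/3):
# a Parseval frame for R^2 with redundancy 3/2.
angles = np.pi / 2 + np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
F = np.sqrt(2 / 3) * np.stack([np.cos(angles), np.sin(angles)], axis=1)  # (3, 2)

# The frame operator F^T F equals the identity, which is the Parseval property.
assert np.allclose(F.T @ F, np.eye(2))

# Energy preservation and perfect reconstruction for an arbitrary vector.
x = np.array([0.3, -1.2])
coeffs = F @ x                                # analysis: <x, f_k>
assert np.isclose(coeffs @ coeffs, x @ x)     # sum |<x, f_k>|^2 = ||x||^2
assert np.allclose(F.T @ coeffs, x)           # synthesis with the same vectors
```

The lifting construction in the paper produces filter banks with the same F^T F = I property, so filtering and reconstruction can use one and the same set of filters.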
Merlin, Thibaut; Visvikis, Dimitris; Fernandez, Philippe; Lamare, Frederic
2015-02-15
Purpose: Partial volume effect (PVE) plays an important role in both qualitative and quantitative PET image accuracy, especially for small structures. A previously proposed voxelwise PVE correction method applied on PET reconstructed images involves the use of Lucy–Richardson deconvolution incorporating wavelet-based denoising to limit the associated propagation of noise. The aim of this study is to incorporate the deconvolution, coupled with the denoising step, directly inside the iterative reconstruction process to further improve PVE correction. Methods: The list-mode ordered subset expectation maximization (OSEM) algorithm has been modified accordingly with the application of the Lucy–Richardson deconvolution algorithm to the current estimation of the image, at each reconstruction iteration. Acquisitions of the NEMA NU2-2001 IQ phantom were performed on a GE DRX PET/CT system to study the impact of incorporating the deconvolution inside the reconstruction [with and without the point spread function (PSF) model] in comparison to its application postreconstruction and to standard iterative reconstruction incorporating the PSF model. The impact of the denoising step was also evaluated. Images were semiquantitatively assessed by studying the trade-off between the intensity recovery and the noise level in the background estimated as relative standard deviation. Qualitative assessments of the developed methods were additionally performed on clinical cases. Results: Incorporating the deconvolution without denoising within the reconstruction achieved superior intensity recovery in comparison to both standard OSEM reconstruction integrating a PSF model and application of the deconvolution algorithm in a postreconstruction process. The addition of the denoising step made it possible to limit the SNR degradation while preserving the intensity recovery. Conclusions: This study demonstrates the feasibility of incorporating the Lucy–Richardson deconvolution associated with a
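The Lucy–Richardson update at the core of this scheme is compact: a multiplicative correction by the back-projected ratio of measured to predicted data. A generic 1-D sketch of the deconvolution step alone (not the authors' list-mode OSEM integration or wavelet denoising; variable names are ours):

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=50):
    """Lucy-Richardson deconvolution for Poisson data.

    Multiplicative update x <- x * K^T(y / Kx), where K is convolution
    with the PSF and K^T is correlation (convolution with the flipped PSF).
    """
    x = np.full_like(blurred, blurred.mean())   # flat positive initial estimate
    psf_flip = psf[::-1]
    for _ in range(n_iter):
        est = np.convolve(x, psf, mode='same')
        ratio = blurred / np.maximum(est, 1e-12)  # guard against divide-by-zero
        x *= np.convolve(ratio, psf_flip, mode='same')
    return x
```

Each iteration keeps the estimate nonnegative and sharpens it toward the maximum-likelihood solution, which is why the reconstruction-embedded version needs the companion denoising step to control noise amplification.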
[A novel denoising approach to SVD filtering based on DCT and PCA in CT image].
Feng, Fuqiang; Wang, Jun
2013-10-01
Because of various effects of the imaging mechanism, noise is inevitably introduced in the medical CT imaging process. Noise greatly degrades image quality and complicates clinical diagnosis. This paper presents a new method to improve singular value decomposition (SVD) filtering performance on CT images. A filter based on SVD can effectively analyze characteristics of the image in the horizontal and/or vertical directions. According to the features of CT images, the discrete cosine transform (DCT) can be used to extract the region of interest and to shield uninteresting regions, so as to extract the structural characteristics of the image. SVD is then applied to the image after DCT, constructing a weighting function for adaptively weighted image reconstruction. The novel denoising algorithm was applied to CT image denoising, and the experimental results showed that the new method effectively improves the performance of SVD filtering.
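Independent of the DCT masking step, the SVD-filtering backbone that such a method builds on can be sketched as a hard truncation of singular values (a generic illustration, not the paper's adaptive weighting function):

```python
import numpy as np

def svd_filter(image, rank):
    """Suppress noise by keeping only the `rank` largest singular values."""
    U, s, Vt = np.linalg.svd(image, full_matrices=False)
    s[rank:] = 0.0                      # discard noise-dominated components
    return (U * s) @ Vt

rng = np.random.default_rng(0)
# a rank-1 "image" corrupted by additive Gaussian noise
clean = np.outer(np.sin(np.linspace(0, np.pi, 64)),
                 np.cos(np.linspace(0, np.pi, 64)))
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = svd_filter(noisy, rank=1)
```

The design trade-off is the choice of `rank`: too low and structure is lost, too high and noise leaks back in, which is what adaptive weighting schemes try to balance.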
Denoising infrared maritime imagery using tailored dictionaries via modified K-SVD algorithm.
Smith, L N; Olson, C C; Judd, K P; Nichols, J M
2012-06-10
Recent work has shown that tailored overcomplete dictionaries can provide a better image model than standard basis functions for a variety of image processing tasks. Here we propose a modified K-SVD dictionary learning algorithm designed to maintain the advantages of the original approach but with a focus on improved convergence. We then use the learned model to denoise infrared maritime imagery and compare the performance to the original K-SVD algorithm, several overcomplete "fixed" dictionaries, and a standard wavelet denoising algorithm. Results indicate the superiority of overcomplete representations and show that our tailored approach provides similar peak signal-to-noise ratios as the traditional K-SVD at roughly half the computational cost.
Fst-Filter: A flexible spatio-temporal filter for biomedical multichannel data denoising.
Nuanprasert, Somchai; Adachi, Yoshiaki; Suzuki, Takashi
2015-08-01
In this paper, we present a noise reduction method for multichannel measurement systems in which the true underlying signal is spatially low-rank and contaminated by spatially correlated noise. Our formulation applies generalized singular value decomposition (GSVD) with a signal recovery approach to extend conventional subspace-based methods to spatio-temporal filtering. Without requiring the noise covariance data in advance, the implemented optimization scheme allows users to choose the denoising function F(·) flexibly, to suit different temporal noise characteristics, from a variety of existing efficient temporal filters. The effectiveness of the proposed method is demonstrated by better accuracy for brain source estimation on simulated magnetoencephalography (MEG) experiments than some traditional methods, e.g., principal component analysis (PCA), robust principal component analysis (RPCA), and multivariate wavelet denoising (MWD).
Multiresolution parametric estimation of transparent motions and denoising of fluoroscopic images.
Auvray, Vincent; Liénard, Jean; Bouthemy, Patrick
2005-01-01
We describe a novel multiresolution parametric framework to estimate the transparent motions typically present in X-ray exams. Assuming the presence of two transparent layers, it computes two affine velocity fields by minimizing an appropriate objective function with an incremental Gauss-Newton technique. We have designed a realistic simulation scheme of fluoroscopic image sequences to validate our method on data with ground truth and different levels of noise. An experiment on real clinical images is also reported. We then exploit this transparent-motion estimation method to denoise two-layer image sequences using a motion-compensated estimation method. In accordance with theory, we show that we reach a denoising factor of 2/3 in a few iterations without introducing any local artifacts in the image sequence.
Evolutionary Fuzzy Block-Matching-Based Camera Raw Image Denoising.
Yang, Chin-Chang; Guo, Shu-Mei; Tsai, Jason Sheng-Hong
2016-10-03
An evolutionary fuzzy block-matching-based image denoising algorithm is proposed to remove noise from camera raw images. Recently, variance stabilization transforms have been widely used to stabilize the noise variance, so that a Gaussian denoising algorithm can remove the signal-dependent noise in camera sensors. However, in the stabilized domain, existing denoising algorithms may blur too much detail. To provide a better estimate of the noise-free signal, a new block-matching approach is proposed that finds similar blocks by the use of a type-2 fuzzy logic system (FLS). These similar blocks are then averaged with weightings determined by the FLS. Finally, an efficient differential evolution is used to further improve the performance of the proposed denoising algorithm. The experimental results show that the proposed algorithm effectively improves image denoising performance. Furthermore, the average performance of the proposed method is better than that of two state-of-the-art image denoising algorithms in both subjective and objective measures.
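A toy version of the block-matching-and-weighted-averaging idea, with a plain exponential similarity weight standing in for the paper's type-2 fuzzy memberships (all parameter values and the test image here are invented for illustration):

```python
import numpy as np

def block_average(img, y0, x0, patch=8, search=10, h=0.4):
    """Denoise one patch by averaging nearby patches, weighted by similarity."""
    ref = img[y0:y0 + patch, x0:x0 + patch]
    acc = np.zeros_like(ref)
    wsum = 0.0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + patch > img.shape[0] or x + patch > img.shape[1]:
                continue
            cand = img[y:y + patch, x:x + patch]
            dist = np.mean((cand - ref) ** 2)
            w = np.exp(-dist / h ** 2)   # soft membership: similar blocks weigh more
            acc += w * cand
            wsum += w
    return acc / wsum

rng = np.random.default_rng(1)
clean = np.tile(np.sin(np.arange(64) / 4.0), (64, 1))   # vertically repeating texture
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
den = block_average(noisy, 28, 28)
```

The repeated texture gives many well-matched blocks, so the weighted average suppresses the noise; a learned fuzzy system replaces the fixed `h` with data-driven membership functions.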
Tomcal, Michael; Stiffler, Nicholas; Barkan, Alice
2013-01-01
The Putative orthologous Groups 2 Database (POGs2) (http://pogs.uoregon.edu/) integrates information about the inferred proteomes of four plant species (Arabidopsis thaliana, Zea mays, Oryza sativa, and Populus trichocarpa) in a display that facilitates comparisons among orthologs and extrapolation of annotations among species. A single-page view collates key functional data for members of each Putative Orthologous Group (POG): graphical representations of InterPro domains, predicted and established intracellular locations, and imported gene descriptions. The display incorporates POGs predicted by two different algorithms as well as gene trees, allowing users to evaluate the validity of POG memberships. The web interface provides ready access to sequences and alignments of POG members, as well as sequences, alignments, and domain architectures of closely-related paralogs. A simple and flexible search interface permits queries by BLAST and by any combination of gene identifier, keywords, domain names, InterPro identifiers, and intracellular location. The concurrent display of domain architectures for orthologous proteins highlights errors in gene models and false-negatives in domain predictions. The POGs2 layout is also useful for exploring candidate genes identified by transposon tagging, QTL mapping, map-based cloning, and proteomics, and for navigating between orthologous groups that belong to the same gene family. PMID:24340041
Optimal wavelet denoising for smart biomonitor systems
NASA Astrophysics Data System (ADS)
Messer, Sheila R.; Agzarian, John; Abbott, Derek
2001-03-01
Future smart-systems promise many benefits for biomedical diagnostics. The ideal is for simple portable systems that display and interpret information from smart integrated probes or MEMS-based devices. In this paper, we will discuss a step towards this vision with a heart bio-monitor case study. An electronic stethoscope is used to record heart sounds and the problem of extracting noise from the signal is addressed via the use of wavelets and averaging. In our example of heartbeat analysis, phonocardiograms (PCGs) have many advantages in that they may be replayed and analysed for spectral and frequency information. Many sources of noise may pollute a PCG including foetal breath sounds if the subject is pregnant, lung and breath sounds, environmental noise and noise from contact between the recording device and the skin. Wavelets can be employed to denoise the PCG. The signal is decomposed by a discrete wavelet transform. Due to the efficient decomposition of heart signals, their wavelet coefficients tend to be much larger than those due to noise. Thus, coefficients below a certain level are regarded as noise and are thresholded out. The signal can then be reconstructed without significant loss of information in the signal. The questions that this study attempts to answer are which wavelet families, levels of decomposition, and thresholding techniques best remove the noise in a PCG. The use of averaging in combination with wavelet denoising is also addressed. Possible applications of the Hilbert Transform to heart sound analysis are discussed.
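The decompose-threshold-reconstruct loop described here can be illustrated with a minimal Haar-wavelet version in NumPy (the study compares several wavelet families and thresholding rules; Haar with a fixed soft threshold is used below only because it fits in a few lines):

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: approximation and detail coefficients."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_idwt(a, d):
    """Invert one level of the Haar DWT."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft_threshold(c, t):
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def wavelet_denoise(x, levels=3, t=0.3):
    approx, details = x, []
    for _ in range(levels):
        approx, d = haar_dwt(approx)
        details.append(soft_threshold(d, t))   # small coefficients are mostly noise
    for d in reversed(details):
        approx = haar_idwt(approx, d)
    return approx

rng = np.random.default_rng(2)
n = 512
clean = np.sin(2 * np.pi * np.arange(n) / 256)
noisy = clean + 0.1 * rng.standard_normal(n)
den = wavelet_denoise(noisy)
```

The questions the study raises (which family, how many levels, which threshold) correspond directly to the `haar_*` functions, `levels`, and `t` in this sketch.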
Denoising solar radiation data using coiflet wavelets
Karim, Samsul Ariffin Abdul; Janier, Josefina B.; Muthuvalu, Mohana Sundaram; Hasan, Mohammad Khatim; Sulaiman, Jumat; Ismail, Mohd Tahir
2014-10-24
Signal denoising and smoothing play an important role in processing a given signal, whether from experiment or from data collection through observations. Collected data are usually a mixture of the true data and some error or noise. This noise may come from the apparatus used to measure or collect the data, or from human error in handling the data. Normally, before the data are used for further processing, the unwanted noise needs to be filtered out. One of the efficient methods that can be used to filter the data is the wavelet transform. Because the received solar radiation data fluctuate with time, there exist a few unwanted oscillations, namely noise, which must be filtered out before the data are used to develop a mathematical model. In order to apply denoising using the wavelet transform (WT), thresholding values need to be calculated. In this paper a new thresholding approach is proposed. The coiflet2 wavelet with variation diminishing 4 is utilized for our purpose. Numerical results show clearly that the new thresholding approach gives better results compared with the existing approach, namely the global thresholding value.
Fault Detection of a Roller-Bearing System through the EMD of a Wavelet Denoised Signal
Ahn, Jong-Hyo; Kwak, Dae-Ho; Koh, Bong-Hwan
2014-01-01
This paper investigates fault detection of a roller bearing system using a wavelet denoising scheme and the proper orthogonal values (POV) of an intrinsic mode function (IMF) covariance matrix. The IMFs of the bearing vibration signal are obtained through empirical mode decomposition (EMD). The signal screening process in the wavelet domain eliminates noise-corrupted portions that may lead to inaccurate prognosis of bearing conditions. We segmented the denoised bearing signal into several intervals and decomposed each of them into IMFs. The first IMF of each segment is collected to form a covariance matrix for calculating the POV. We show that covariance matrices from healthy and damaged bearings exhibit different POV profiles, which can be a damage-sensitive feature. We also illustrate the conventional approach of feature extraction, observing the kurtosis value of the measured signal, to compare the functionality of the proposed technique. The study demonstrates the feasibility of wavelet-based denoising, and shows through laboratory experiments that tracking the proper orthogonal values of the covariance matrix of the IMF can be an effective and reliable measure for monitoring bearing faults. PMID:25196008
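The proper-orthogonal-value feature itself is just the eigenvalue spectrum of a segment covariance matrix. A toy sketch, with a synthetic periodic impulse standing in for a bearing fault (signal shapes and amplitudes are invented for illustration, and the EMD step is omitted):

```python
import numpy as np

def proper_orthogonal_values(segments):
    """Eigenvalues of the covariance matrix built from signal segments (rows)."""
    X = np.asarray(segments, dtype=float)
    X = X - X.mean(axis=1, keepdims=True)
    cov = X @ X.T / X.shape[1]
    return np.sort(np.linalg.eigvalsh(cov))[::-1]   # descending POV profile

rng = np.random.default_rng(3)
t = np.arange(256)
healthy = [np.sin(2 * np.pi * t / 32) + 0.05 * rng.standard_normal(t.size)
           for _ in range(8)]
# a periodic impulse, as a localized fault might add, shifts the POV profile
damaged = [s + (t % 64 == 0) * 1.5 for s in healthy]
pov_healthy = proper_orthogonal_values(healthy)
pov_damaged = proper_orthogonal_values(damaged)
```

Comparing `pov_healthy` and `pov_damaged` shows the profile separation that the paper exploits as a damage-sensitive feature.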
Nonlocal hierarchical dictionary learning using wavelets for image denoising.
Yan, Ruomei; Shao, Ling; Liu, Yan
2013-12-01
Exploiting the sparsity within representation models for images is critical for image denoising. The best currently available denoising methods take advantage of the sparsity from image self-similarity, pre-learned, and fixed representations. Most of these methods, however, still have difficulties in tackling high noise levels or noise models other than Gaussian. In this paper, the multiresolution structure and sparsity of wavelets are employed by nonlocal dictionary learning in each decomposition level of the wavelets. Experimental results show that our proposed method outperforms two state-of-the-art image denoising algorithms on higher noise levels. Furthermore, our approach is more adaptive to the less extensively researched uniform noise.
Denoising of ECG signal during spaceflight using singular value decomposition
NASA Astrophysics Data System (ADS)
Li, Zhuo; Wang, Li
2009-12-01
The singular value decomposition (SVD) method is introduced to denoise the ECG signal during spaceflight. The theoretical basis of the SVD method is given briefly. The denoising process of the strategy is presented using a segment of a real ECG signal. We improve the algorithm for calculating the singular value ratio (SVR) spectrum and propose a constructive approach to analyzing characteristic patterns. We reproduce the ECG signal very well and compress the noise effectively. The SVD method is shown to be suitable for denoising the ECG signal.
IntNetLncSim: an integrative network analysis method to infer human lncRNA functional similarity.
Cheng, Liang; Shi, Hongbo; Wang, Zhenzhen; Hu, Yang; Yang, Haixiu; Zhou, Chen; Sun, Jie; Zhou, Meng
2016-07-26
Increasing evidence indicates that long non-coding RNAs (lncRNAs) are involved in various biological processes and complex diseases by communicating with mRNAs/miRNAs. Exploiting interactions between lncRNAs and mRNAs/miRNAs to infer lncRNA functional similarity (LFS) is an effective way to explore the function of lncRNAs and to predict novel lncRNA-disease associations. In this article, we propose an integrative framework, IntNetLncSim, to infer LFS by modeling the information flow in an integrated network that comprises both lncRNA-related transcriptional and post-transcriptional information. The performance of IntNetLncSim was evaluated by investigating the relationship of LFS with the similarity of lncRNA-related mRNA sets (LmRSets) and miRNA sets (LmiRSets). As a result, LFS by IntNetLncSim was significantly positively correlated with the LmRSet (Pearson correlation r2=0.8424) and the LmiRSet (Pearson correlation r2=0.2601). In particular, the performance of IntNetLncSim is superior to that of several previous methods. Applying the LFS to identify novel lncRNA-disease relationships, we achieved an area under the ROC curve of 0.7300 on experimentally verified lncRNA-disease associations under leave-one-out cross-validation. Furthermore, highly-ranked lncRNA-disease associations confirmed by literature mining demonstrated the excellent performance of IntNetLncSim. Finally, a web-accessible system was provided for querying LFS and potential lncRNA-disease relationships: http://www.bio-bigdata.com/IntNetLncSim.
Denoising, deconvolving, and decomposing photon observations. Derivation of the D3PO algorithm
NASA Astrophysics Data System (ADS)
Selig, Marco; Enßlin, Torsten A.
2015-02-01
The analysis of astronomical images is a non-trivial task. The D3PO algorithm addresses the inference problem of denoising, deconvolving, and decomposing photon observations. Its primary goal is the simultaneous but individual reconstruction of the diffuse and point-like photon flux given a single photon count image, where the fluxes are superimposed. In order to discriminate between these morphologically different signal components, a probabilistic algorithm is derived in the language of information field theory based on a hierarchical Bayesian parameter model. The signal inference exploits prior information on the spatial correlation structure of the diffuse component and the brightness distribution of the spatially uncorrelated point-like sources. A maximum a posteriori solution and a solution minimizing the Gibbs free energy of the inference problem using variational Bayesian methods are discussed. Since the derivation of the solution is not dependent on the underlying position space, the implementation of the D3PO algorithm uses the nifty package to ensure applicability to various spatial grids and at any resolution. The fidelity of the algorithm is validated by the analysis of simulated data, including a realistic high energy photon count image showing a 32 × 32 arcmin2 observation with a spatial resolution of 0.1 arcmin. In all tests the D3PO algorithm successfully denoised, deconvolved, and decomposed the data into a diffuse and a point-like signal estimate for the respective photon flux components. A copy of the code is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/574/A74
NASA Astrophysics Data System (ADS)
Weisz, Daniel R.; Fouesneau, Morgan; Hogg, David W.; Rix, Hans-Walter; Dolphin, Andrew E.; Dalcanton, Julianne J.; Foreman-Mackey, Daniel T.; Lang, Dustin; Johnson, L. Clifton; Beerman, Lori C.; Bell, Eric F.; Gordon, Karl D.; Gouliermis, Dimitrios; Kalirai, Jason S.; Skillman, Evan D.; Williams, Benjamin F.
2013-01-01
We present a probabilistic approach for inferring the parameters of the present-day power-law stellar mass function (MF) of a resolved young star cluster. This technique (1) fully exploits the information content of a given data set; (2) can account for observational uncertainties in a straightforward way; (3) assigns meaningful uncertainties to the inferred parameters; (4) avoids the pitfalls associated with binning data; and (5) can be applied to virtually any resolved young cluster, laying the groundwork for a systematic study of the high-mass stellar MF (M ≳ 1 M⊙). Using simulated clusters and Markov Chain Monte Carlo sampling of the probability distribution functions, we show that estimates of the MF slope, α, are unbiased and that the uncertainty, Δα, depends primarily on the number of observed stars and on the range of stellar masses they span, assuming that the uncertainties on individual masses and the completeness are both well characterized. Using idealized mock data, we compute the theoretical precision, i.e., lower limits, on α, and provide an analytic approximation for Δα as a function of the observed number of stars and mass range. Comparison with literature studies shows that ~3/4 of quoted uncertainties are smaller than the theoretical lower limit. By correcting these uncertainties to the theoretical lower limits, we find that the literature studies yield ⟨α⟩ = 2.46, with a 1σ dispersion of 0.35 dex. We verify that it is impossible for a power-law MF to obtain meaningful constraints on the upper mass limit of the initial mass function, beyond the lower bound of the most massive star actually observed. We show that avoiding substantial biases in the MF slope requires (1) including the MF as a prior when deriving individual stellar mass estimates, (2) modeling the uncertainties in the individual stellar masses, and (3) fully characterizing and then explicitly modeling the completeness for stars of a given mass. The precision on MF
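The dependence of Δα on the number of observed stars can be seen with a much simpler tool than the authors' full probabilistic machinery: the closed-form maximum-likelihood slope estimate for a pure power law, ignoring photometric errors and completeness (a toy illustration only, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(4)
alpha_true, m_min, n = 2.46, 1.0, 5000

# inverse-CDF sampling of p(m) proportional to m^(-alpha) for m >= m_min
u = rng.random(n)
masses = m_min * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))

# closed-form maximum-likelihood estimate of the slope
alpha_hat = 1.0 + n / np.sum(np.log(masses / m_min))

# the statistical uncertainty scales as (alpha - 1) / sqrt(n)
sigma_alpha = (alpha_hat - 1.0) / np.sqrt(n)
```

Doubling the sample size shrinks Δα by √2, which is the scaling behind the theoretical lower limits discussed in the abstract.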
Electrocardiogram signal denoising based on empirical mode decomposition technique: an overview
NASA Astrophysics Data System (ADS)
Han, G.; Lin, B.; Xu, Z.
2017-03-01
The electrocardiogram (ECG) signal is a nonlinear, non-stationary, weak signal which reflects whether the heart is functioning normally or abnormally. ECG signals are susceptible to various kinds of noise such as high/low frequency noise, powerline interference, and baseline wander. Hence, removing noise from the ECG signal is a vital link in ECG signal processing and plays a significant role in the detection and diagnosis of heart disease. This review describes recent developments in ECG signal denoising based on the Empirical Mode Decomposition (EMD) technique, including high frequency noise removal, powerline interference separation, baseline wander correction, the combination of EMD with other methods, and the EEMD technique. EMD is a promising, though not yet perfect, method for processing nonlinear and non-stationary signals such as the ECG. Combining EMD with other algorithms is a good way to improve noise-cancellation performance. The pros and cons of the EMD technique in ECG signal denoising are discussed in detail. Finally, future work and challenges in EMD-based ECG signal denoising are clarified.
Edge structure preserving 3D image denoising by local surface approximation.
Qiu, Peihua; Mukherjee, Partha Sarathi
2012-08-01
In various applications, including magnetic resonance imaging (MRI) and functional MRI (fMRI), 3D images are becoming increasingly popular. To improve the reliability of subsequent image analyses, 3D image denoising is often a necessary preprocessing step, which is the focus of the current paper. In the literature, most existing image denoising procedures are for 2D images. Their direct extensions to 3D cases generally cannot handle 3D images efficiently because the structure of a typical 3D image is substantially more complicated than that of a typical 2D image. For instance, edge locations are surfaces in 3D cases which would be much more challenging to handle compared to edge curves in 2D cases. We propose a novel 3D image denoising procedure in this paper, based on local approximation of the edge surfaces using a set of surface templates. An important property of this method is that it can preserve edges and major edge structures (e.g., intersections of two edge surfaces and pointed corners). Numerical studies show that it works well in various applications.
NASA Astrophysics Data System (ADS)
Wen-Bo, Wang; Xiao-Dong, Zhang; Yuchan, Chang; Xiang-Li, Wang; Zhao, Wang; Xi, Chen; Lei, Zheng
2016-01-01
In this paper, a new method to reduce noise within chaotic signals based on ICA (independent component analysis) and EMD (empirical mode decomposition) is proposed. The basic idea is, first, to decompose the chaotic signals and construct multidimensional input vectors based on EMD and its translation invariance. Second, independent component analysis is performed on the input vectors, which amounts to a self-adapting denoising of the intrinsic mode functions (IMFs) of the chaotic signals. Finally, all IMFs are combined into the new denoised chaotic signal. Experiments were carried out on the Lorenz chaotic signal contaminated with different Gaussian noises and on the monthly observed chaotic sunspot sequence. The results prove that the method proposed in this paper is effective in denoising chaotic signals. Moreover, it corrects the center point in the phase space effectively, making it approach the real track of the chaotic attractor. Project supported by the National Science and Technology, China (Grant No. 2012BAJ15B04), the National Natural Science Foundation of China (Grant Nos. 41071270 and 61473213), the Natural Science Foundation of Hubei Province, China (Grant No. 2015CFB424), the State Key Laboratory Foundation of Satellite Ocean Environment Dynamics, China (Grant No. SOED1405), the Hubei Provincial Key Laboratory Foundation of Metallurgical Industry Process System Science, China (Grant No. Z201303), and the Hubei Key Laboratory Foundation of Transportation Internet of Things, Wuhan University of Technology, China (Grant No. 2015III015-B02).
Denoising algorithm based on edge extraction and wavelet transform in digital holography
NASA Astrophysics Data System (ADS)
Zhang, Ming; Sang, Xin-zhu; Leng, Jun-min; Cao, Xue-mei
2013-08-01
Digital holography is a coherent imaging method and is inevitably affected by many factors in the recording process. One of the dominant problems is speckle noise, which is essentially nonlinear multiplicative noise correlated with the signal, so it is more difficult to remove than additive noise. This noise pollution lowers the resolution of the reconstructed image. A new solution for suppressing speckle noise in digital holograms is presented, which combines the Canny filtering algorithm with a wavelet threshold denoising algorithm. The Canny filter is used to obtain the edge detail; the wavelet transform performs the denoising. To suppress speckle effectively while retaining as much image detail as possible, the Neyman-Pearson (N-P) criterion is introduced to estimate the wavelet coefficients at every scale. An improved threshold function with a smoother curve is proposed. The reconstructed image is obtained by merging the denoised image with the edge details. Experimental results and performance parameters of the proposed algorithm are discussed and compared with other methods, showing that the presented approach can not only effectively eliminate speckle noise, but also retain useful signals and edge information.
Sparsity based denoising of spectral domain optical coherence tomography images
Fang, Leyuan; Li, Shutao; Nie, Qing; Izatt, Joseph A.; Toth, Cynthia A.; Farsiu, Sina
2012-01-01
In this paper, we make contact with the field of compressive sensing and present a development and generalization of tools and results for reconstructing irregularly sampled tomographic data. In particular, we focus on denoising Spectral-Domain Optical Coherence Tomography (SDOCT) volumetric data. We take advantage of customized scanning patterns, in which, a selected number of B-scans are imaged at higher signal-to-noise ratio (SNR). We learn a sparse representation dictionary for each of these high-SNR images, and utilize such dictionaries to denoise the low-SNR B-scans. We name this method multiscale sparsity based tomographic denoising (MSBTD). We show the qualitative and quantitative superiority of the MSBTD algorithm compared to popular denoising algorithms on images from normal and age-related macular degeneration eyes of a multi-center clinical trial. We have made the corresponding data set and software freely available online. PMID:22567586
Image denoising based on wavelet cone of influence analysis
NASA Astrophysics Data System (ADS)
Pang, Wei; Li, Yufeng
2009-11-01
Donoho et al. proposed a method for denoising by thresholding based on the wavelet transform, and indeed, the application of their method to image denoising has been extremely successful. However, this method assumes that the noise is additive Gaussian white noise, and it is not effective against impulse noise. In this paper, a new image denoising algorithm based on wavelet cone of influence (COI) analysis is proposed, which can effectively remove impulse noise and preserve image edges via the undecimated discrete wavelet transform (UDWT). Furthermore, combined with the traditional wavelet thresholding denoising method, it can also be used to suppress a wider range of noise types, such as Gaussian noise, impulse noise, Poisson noise, and other mixed noise. Experimental results illustrate the advantages of this method.
Terahertz digital holography image denoising using stationary wavelet transform
NASA Astrophysics Data System (ADS)
Cui, Shan-Shan; Li, Qi; Chen, Guanghao
2015-04-01
Terahertz (THz) holography is a frontier technology in the terahertz imaging field. However, images reconstructed from holograms are inherently affected by speckle noise, on account of the coherent nature of light scattering. The stationary wavelet transform (SWT) is an effective tool for speckle noise removal. In this paper, two algorithms originally developed for despeckling SAR images, threshold estimation and a smoothing operation, are applied to THz images based on the SWT. Denoised images are then quantitatively assessed by the speckle index. Experimental results show that the stationary wavelet transform offers superior denoising performance and image-detail preservation compared to the discrete wavelet transform. For threshold estimation, higher levels of decomposition are needed for a better denoising result. The smoothing operation combined with the stationary wavelet transform gives the optimal denoising effect at a single decomposition level, with 5×5 average filtering.
Denoising MR spectroscopic imaging data with low-rank approximations.
Nguyen, Hien M; Peng, Xi; Do, Minh N; Liang, Zhi-Pei
2013-01-01
This paper addresses the denoising problem associated with magnetic resonance spectroscopic imaging (MRSI), where signal-to-noise ratio (SNR) has been a critical problem. A new scheme is proposed, which exploits two low-rank structures that exist in MRSI data, one due to partial separability and the other due to linear predictability. Denoising is performed by arranging the measured data in appropriate matrix forms (i.e., Casorati and Hankel) and applying low-rank approximations by singular value decomposition (SVD). The proposed method has been validated using simulated and experimental data, producing encouraging results. Specifically, the method can effectively denoise MRSI data in a wide range of SNR values while preserving spatial-spectral features. The method could prove useful for denoising MRSI data and other spatial-spectral and spatial-temporal imaging data as well.
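The linear-predictability structure exploited above comes from arranging a signal in Hankel form: for a sum of a few exponentials, consecutive Hankel rows are linearly dependent, so the matrix is low rank and SVD truncation removes noise. A toy sketch of the Hankel arrangement only (the SVD step is omitted; function name is ours):

```python
def hankel(signal, num_rows):
    """Arrange a 1-D signal into a Hankel matrix: row i is signal[i : i + cols].

    Anti-diagonals are constant; for a signal that is a short sum of
    (damped) exponentials, the matrix rank equals the number of exponentials.
    """
    cols = len(signal) - num_rows + 1
    return [signal[i:i + cols] for i in range(num_rows)]
```

For a single exponential s[n] = 2^n, every row is exactly twice the previous one, i.e. the Hankel matrix has rank one, which is the structure a rank-truncated SVD would preserve while discarding noise.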
Image denoising via sparse and redundant representations over learned dictionaries.
Elad, Michael; Aharon, Michal
2006-12-01
We address the image denoising problem, where zero-mean white homogeneous Gaussian additive noise is to be removed from a given image. The approach taken is based on sparse and redundant representations over trained dictionaries. Using the K-SVD algorithm, we obtain a dictionary that describes the image content effectively. Two training options are considered: using the corrupted image itself, or training on a corpus of high-quality images. Since K-SVD is limited to handling small image patches, we extend its deployment to arbitrary image sizes by defining a global image prior that forces sparsity over patches at every location in the image. We show how such a Bayesian treatment leads to a simple and effective denoising algorithm. This yields state-of-the-art denoising performance, equivalent to and sometimes surpassing recently published leading alternative denoising methods.
A new method for mobile phone image denoising
NASA Astrophysics Data System (ADS)
Jin, Lianghai; Jin, Min; Li, Xiang; Xu, Xiangyang
2015-12-01
Images captured by mobile phone cameras via pipeline processing usually contain various kinds of noises, especially granular noise with different shapes and sizes in both luminance and chrominance channels. In chrominance channels, noise is closely related to image brightness. To improve image quality, this paper presents a new method to denoise such mobile phone images. The proposed scheme converts the noisy RGB image to luminance and chrominance images, which are then denoised by a common filtering framework. The common filtering framework processes a noisy pixel by first excluding the neighborhood pixels that significantly deviate from the (vector) median and then utilizing the other neighborhood pixels to restore the current pixel. In the framework, the strength of chrominance image denoising is controlled by image brightness. The experimental results show that the proposed method obviously outperforms some other representative denoising methods in terms of both objective measure and visual evaluation.
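The core of the common filtering framework described above, in a simplified scalar form (the paper operates on vector-valued chrominance data with a vector median; this single-channel sketch and its tolerance parameter are ours): reject neighbors that deviate too far from the neighborhood median, then average the survivors.

```python
def filter_pixel(neighborhood, tol):
    """Denoise one pixel: discard neighbors far from the median, average the rest.

    `neighborhood` is the list of pixel values in the local window (including
    the center); `tol` controls how aggressively outliers are excluded. The
    median itself always survives the cut, so the result is well defined.
    """
    srt = sorted(neighborhood)
    n = len(srt)
    med = srt[n // 2] if n % 2 else 0.5 * (srt[n // 2 - 1] + srt[n // 2])
    kept = [v for v in neighborhood if abs(v - med) <= tol]
    return sum(kept) / len(kept)
```

An impulse-corrupted neighbor (e.g. a granular-noise spike of 200 among values near 10) is excluded from the average, so the restored pixel stays close to the true local intensity. In the paper's scheme, `tol` for chrominance channels would additionally depend on image brightness.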
The NIFTY way of Bayesian signal inference
Selig, Marco
2014-12-05
We introduce NIFTY, 'Numerical Information Field Theory', a software package for the development of Bayesian signal inference algorithms that operate independently of any underlying spatial grid and its resolution. A large number of Bayesian and maximum entropy methods for 1D signal reconstruction, 2D imaging, and 3D tomography appear formally similar, but one often finds individualized implementations that are neither flexible nor easily transferable. Signal inference in the framework of NIFTY can be done in an abstract way, such that algorithms prototyped in 1D can be applied to real-world problems in higher-dimensional settings. As a versatile library, NIFTY is applicable to, and has already been applied in, 1D, 2D, 3D, and spherical settings. A recent application is the D³PO algorithm, targeting the non-trivial task of denoising, deconvolving, and decomposing photon observations in high-energy astronomy.
Aggelopoulos, Nikolaos C
2015-08-01
Perceptual inference refers to the ability to infer sensory stimuli from predictions that result from internal neural representations built through prior experience. Methods of Bayesian statistical inference and decision theory model cognition adequately by using error sensing either in guiding action or in "generative" models that predict the sensory information. In this framework, perception can be seen as a process qualitatively distinct from sensation, a process of information evaluation using previously acquired and stored representations (memories) that is guided by sensory feedback. The stored representations can be utilised as internal models of sensory stimuli enabling long term associations, for example in operant conditioning. Evidence for perceptual inference is contributed by such phenomena as the cortical co-localisation of object perception with object memory, the response invariance in the responses of some neurons to variations in the stimulus, as well as from situations in which perception can be dissociated from sensation. In the context of perceptual inference, sensory areas of the cerebral cortex that have been facilitated by a priming signal may be regarded as comparators in a closed feedback loop, similar to the better known motor reflexes in the sensorimotor system. The adult cerebral cortex can be regarded as similar to a servomechanism, in using sensory feedback to correct internal models, producing predictions of the outside world on the basis of past experience.
Gradient histogram estimation and preservation for texture enhanced image denoising.
Zuo, Wangmeng; Zhang, Lei; Song, Chunwei; Zhang, David; Gao, Huijun
2014-06-01
Natural image statistics plays an important role in image denoising, and various natural image priors, including gradient-based, sparse representation-based, and nonlocal self-similarity-based ones, have been widely studied and exploited for noise removal. In spite of the great success of many denoising algorithms, they tend to smooth the fine scale image textures when removing noise, degrading the image visual quality. To address this problem, in this paper, we propose a texture enhanced image denoising method by enforcing the gradient histogram of the denoised image to be close to a reference gradient histogram of the original image. Given the reference gradient histogram, a novel gradient histogram preservation (GHP) algorithm is developed to enhance the texture structures while removing noise. Two region-based variants of GHP are proposed for the denoising of images consisting of regions with different textures. An algorithm is also developed to effectively estimate the reference gradient histogram from the noisy observation of the unknown image. Our experimental results demonstrate that the proposed GHP algorithm can well preserve the texture appearance in the denoised images, making them look more natural.
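The quantity the GHP algorithm preserves is the histogram of image gradient magnitudes. A minimal sketch of computing that reference histogram from horizontal finite differences (the estimation and preservation steps of the paper are omitted; function name and binning are ours):

```python
def gradient_histogram(image, num_bins, max_grad):
    """Histogram of horizontal finite-difference gradient magnitudes.

    `image` is a list of rows; magnitudes are binned uniformly on
    [0, max_grad], with larger gradients clamped into the last bin.
    """
    hist = [0] * num_bins
    for row in image:
        for a, b in zip(row, row[1:]):
            g = abs(b - a)
            k = min(int(g / max_grad * num_bins), num_bins - 1)
            hist[k] += 1
    return hist
```

Over-smoothed denoising results concentrate mass in the low-gradient bins; GHP's constraint that the denoised image reproduce the reference histogram is what keeps the texture-scale gradients from being flattened away.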
Applications of discrete multiwavelet techniques to image denoising
NASA Astrophysics Data System (ADS)
Wang, Haihui; Peng, Jiaxiong; Wu, Wei; Ye, Bin
2003-09-01
In this paper, we present a new image denoising method using the 2-D discrete multiwavelet transform. Developments in wavelet theory have given rise to wavelet thresholding, a popular method for extracting a signal from noisy data. Multiwavelets have recently been introduced; they simultaneously offer orthogonality, symmetry, and short support, which makes them well suited to various image processing applications, especially denoising. Our method thresholds the multiwavelet coefficients obtained from preprocessing followed by the discrete multiwavelet transform, analogous to thresholding in the standard scalar orthogonal wavelet transform, while taking into account the covariance structure of the transform. The form of the threshold is carefully formulated and is the key to the excellent results obtained in extensive numerical simulations of image denoising. The performance of multiwavelets is compared with that of scalar wavelets. Simulations reveal that multiwavelet-based image denoising schemes outperform wavelet-based methods both subjectively and objectively.
Kim, Nam-Seog; Chung, Koohong; Ahn, Seongchae; Yu, Jeong Whon; Choi, Keechoo
2014-10-01
Filtering out the noise in traffic collision data is essential in reducing false positive rates (i.e., requiring safety investigation of sites where it is not needed) and can assist government agencies in better allocating limited resources. Previous studies have demonstrated that denoising traffic collision data is possible when there exists a true known high collision concentration location (HCCL) list to calibrate the parameters of a denoising method. However, such a list is often not readily available in practice. To this end, the present study introduces an innovative approach for denoising traffic collision data using the Ensemble Empirical Mode Decomposition (EEMD) method, which is widely used for analyzing nonlinear and nonstationary data. The present study describes how to transform the traffic collision data before the data can be decomposed using the EEMD method to obtain a set of Intrinsic Mode Functions (IMFs) and a residue. The attributes of the IMFs were then carefully examined to denoise the data and to construct Continuous Risk Profiles (CRPs). The findings from comparing the resulting CRPs with CRPs in which the noise was filtered out using two different empirically calibrated weighted moving window lengths are also documented, and the results and recommendations for future research are discussed.
Denoising two-photon calcium imaging data.
Malik, Wasim Q; Schummers, James; Sur, Mriganka; Brown, Emery N
2011-01-01
Two-photon calcium imaging is now an important tool for in vivo imaging of biological systems. By enabling neuronal population imaging with subcellular resolution, this modality offers an approach for gaining a fundamental understanding of brain anatomy and physiology. Proper analysis of calcium imaging data requires denoising, that is, separating the signal from complex physiological noise. To analyze two-photon brain imaging data, we present a signal plus colored noise model in which the signal is represented as a harmonic regression and the correlated noise is represented as an autoregressive process. We provide an efficient cyclic descent algorithm to compute approximate maximum likelihood parameter estimates by combining a weighted least-squares procedure with the Burg algorithm. We use the Akaike information criterion to guide selection of the harmonic regression and autoregressive model orders. Our flexible yet parsimonious modeling approach reliably separates stimulus-evoked fluorescence response from background activity and noise, assesses goodness of fit, and estimates confidence intervals and signal-to-noise ratio. This refined separation leads to appreciably enhanced image contrast for individual cells, including clear delineation of subcellular details and network activity. The application of our approach to in vivo imaging data recorded in the ferret primary visual cortex demonstrates that our method yields substantially denoised signal estimates. We also provide a general Volterra series framework for deriving this and other signal plus correlated noise models for imaging. This approach to analyzing two-photon calcium imaging data may be readily adapted to other computational biology problems to which correlated noise models apply.
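The harmonic-regression half of the model has a simple closed form when the frequency is a known integer multiple of the fundamental: the least-squares cosine and sine amplitudes reduce to Fourier coefficients. A minimal sketch of that fit (the AR noise model, cyclic descent, and Burg steps are omitted; function name is ours):

```python
import math

def fit_harmonic(y, freq):
    """Least-squares fit of y[t] ~= a*cos(2*pi*freq*t/n) + b*sin(2*pi*freq*t/n).

    For an integer `freq` over a full record of length n, the cos/sin
    regressors are orthogonal, so the normal equations reduce to the
    discrete Fourier coefficients a and b.
    """
    n = len(y)
    c = [math.cos(2 * math.pi * freq * t / n) for t in range(n)]
    s = [math.sin(2 * math.pi * freq * t / n) for t in range(n)]
    a = 2.0 / n * sum(yi * ci for yi, ci in zip(y, c))
    b = 2.0 / n * sum(yi * si for yi, si in zip(y, s))
    return a, b
```

In the full model, such fits are alternated with AR parameter estimation on the residuals, since the weighted least-squares step needs the noise covariance and the noise fit needs the harmonic residuals.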
NASA Astrophysics Data System (ADS)
Liu, Zhen; Park, Jeffrey; Rye, Danny M.
2015-10-01
The crust of the Tibetan Plateau may have formed via shortening/thickening or large-scale underthrusting, and subsequently been modified via lower crust channel flows and volatile-mediated regional metamorphism. The amplitude and distribution of crustal anisotropy record the history of continental deformation, offering clues to its formation and later modification. In this study, we first investigate the back-azimuth dependence of Ps converted phases using multitaper receiver functions (RFs). We analyze teleseismic data for 35 temporary broadband stations in the ASCENT experiment located in northeastern Tibet. We stack receiver functions after a moving-window moveout correction. Major features of the RFs include: 1) Ps arrivals at 8-10 s on the radial components, suggesting a 70-90-km crustal thickness in the study area; 2) two-lobed back-azimuth variation for intra-crustal Ps phases in the upper crust (< 20 km), consistent with tilted-symmetry-axis anisotropy or dipping interfaces; 3) significant Ps arrivals with four-lobed back-azimuth variation distributed in distinct layers in the middle and lower crust (up to 60 km), corresponding to (sub)horizontal-axis anisotropy; and 4) weak or no evidence of azimuthal anisotropy in the lowermost crust. To study the anisotropy, we compare the observed RF stacks with one-dimensional reflectivity synthetic seismograms in anisotropic media, and fit major features by "trial and error" forward modeling. Crustal anisotropy offers few clues on plateau formation, but strong evidence of ongoing deformation and metamorphism. We infer strong horizontal-axis anisotropy concentrated in the middle and lower crust, which could be explained by vertically aligned sheet silicates, open cracks filled with magma or other fluid, vertical vein structures, or by 1-10-km-scale chimney structures that have focused metamorphic fluids. Simple dynamic models encounter difficulty in generating vertically aligned sheet silicates. Instead, we interpret our data to
Identifying the multiple dysregulated oncoproteins that contribute to tumorigenesis in a given patient is crucial for developing personalized treatment plans. However, accurate inference of aberrant protein activity in biological samples is still challenging as genetic alterations are only partially predictive and direct measurements of protein activity are generally not feasible.
NASA Astrophysics Data System (ADS)
Khan, Shahjahan
Often scientific information on various data generating processes is presented in the form of numerical and categorical data. Except for some very rare occasions, such data generally represent a small part of the population, or selected outcomes of a data generating process. Although valuable and useful information is lurking in the array of scientific data, it is generally unavailable to users. Appropriate statistical methods are essential to reveal the hidden "jewels" in the mess of raw data. Exploratory data analysis methods are used to uncover such valuable characteristics of observed data. Statistical inference provides techniques for making valid conclusions about the unknown characteristics or parameters of the population from which scientifically drawn sample data are selected. Usually, statistical inference includes estimation of population parameters as well as tests of hypotheses on those parameters. However, prediction of future responses and determination of prediction distributions are also part of statistical inference. Both the Classical (Frequentist) and Bayesian approaches are used in statistical inference. The commonly used Classical approach is based on the sample data alone. In contrast, the increasingly popular Bayesian approach uses a prior distribution on the parameters along with the sample data to make inferences. Non-parametric and robust methods are also used in situations where commonly used model assumptions are unsupported. In this chapter, we cover the philosophical and methodological aspects of both the Classical and Bayesian approaches. Moreover, some aspects of predictive inference are also included. In the absence of any evidence to support assumptions regarding the distribution of the underlying population, or if the variable is measured only on an ordinal scale, non-parametric methods are used. Robust methods are employed to avoid significant changes in results due to deviations from the model
Effect of denoising on supervised lung parenchymal clusters
NASA Astrophysics Data System (ADS)
Jayamani, Padmapriya; Raghunath, Sushravya; Rajagopalan, Srinivasan; Karwoski, Ronald A.; Bartholmai, Brian J.; Robb, Richard A.
2012-03-01
Denoising is a critical preconditioning step for quantitative analysis of medical images. Despite promises of more consistent diagnosis, denoising techniques are seldom explored in clinical settings. While this may be attributed to the esoteric nature of the parameter-sensitive algorithms, the lack of quantitative measures of their efficacy in enhancing clinical decision making is a primary cause of physician apathy. This paper addresses this issue by exploring the effect of denoising on the integrity of supervised lung parenchymal clusters. Multiple Volumes of Interest (VOIs) were selected across multiple high-resolution CT scans to represent samples of different patterns (normal, emphysema, ground glass, honeycombing, and reticular). The VOIs were labeled through consensus of four radiologists. The original datasets were filtered by multiple denoising techniques (median filtering, anisotropic diffusion, bilateral filtering, and non-local means) and the corresponding filtered VOIs were extracted. A plurality of cluster indices based on multiple histogram-based pairwise similarity measures was used to assess the quality of supervised clusters in the original and filtered spaces. The resulting rank orders were analyzed using the Borda criterion to find the denoising-similarity measure combination that gives the best cluster quality. Our exhaustive analysis reveals that (a) for a number of similarity measures, cluster quality is inferior in the filtered space; and (b) for measures that benefit from denoising, simple median filtering outperforms non-local means and bilateral filtering. Our study suggests the need to judiciously choose, if required, a denoising technique that does not deteriorate the integrity of supervised clusters.
NASA Astrophysics Data System (ADS)
Ren, Zhong; Liu, Guodong; Huang, Zhen
2014-10-01
Real-time monitoring of blood glucose concentration (BGC) is critically important for controlling diabetes mellitus and preventing complications in diabetic patients. Noninvasive measurement of BGC has become a research hotspot because it avoids physical and psychological harm. Photoacoustic spectroscopy is a well-established, hybrid, and alternative technique for determining BGC. According to the theory of the photoacoustic technique, the blood is irradiated by a pulsed laser with nanosecond repetition time and microjoule power; photoacoustic signals containing the BGC information are generated through the thermoelastic mechanism, and the BGC level can then be interpreted from the photoacoustic signal via data analysis. In practice, however, the time-resolved photoacoustic signals of BGC are polluted by various noises, e.g., interference from background sound and from the multiple components of blood. The quality of the photoacoustic signal directly affects the precision of BGC measurement. Therefore, an improved wavelet denoising method is proposed to eliminate the noise in BGC photoacoustic signals. To overcome the shortcomings of traditional wavelet threshold denoising, an improved dual-threshold wavelet function is proposed in this paper. Simulation results illustrate that the denoising performance of this improved wavelet method is better than that of the traditional soft and hard threshold functions. To verify the feasibility of the improved function, actual photoacoustic BGC signals were tested; the results demonstrate that the signal-to-noise ratio (SNR) obtained with the improved function increases by about 40-80%, and the root-mean-square error (RMSE) decreases by about 38.7-52.8%.
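The authors' dual-threshold function is not reproduced here; as an illustration of the soft/hard compromise such functions aim for, the classical "firm" threshold uses two thresholds: coefficients below t1 are zeroed, those above t2 are kept, and the range in between is shrunk linearly.

```python
import math

def firm_threshold(c, t1, t2):
    """Dual-threshold shrinkage: zero below t1, identity above t2, linear between.

    At t1 it behaves like hard/soft thresholding (output 0); at t2 it joins
    the identity continuously, avoiding soft thresholding's constant bias on
    large coefficients and hard thresholding's discontinuity.
    """
    a = abs(c)
    if a <= t1:
        return 0.0
    if a >= t2:
        return float(c)
    return math.copysign(t2 * (a - t1) / (t2 - t1), c)
```

Note this is the generic firm-threshold rule from the wavelet literature, offered only as a sketch of the dual-threshold idea, not the specific function proposed in the paper.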
Hybrid regularizers-based adaptive anisotropic diffusion for image denoising.
Liu, Kui; Tan, Jieqing; Ai, Liefu
2016-01-01
To eliminate the staircasing effect for total variation filter and synchronously avoid the edges blurring for fourth-order PDE filter, a hybrid regularizers-based adaptive anisotropic diffusion is proposed for image denoising. In the proposed model, the [Formula: see text]-norm is considered as the fidelity term and the regularization term is composed of a total variation regularization and a fourth-order filter. The two filters can be adaptively selected according to the diffusion function. When the pixels locate at the edges, the total variation filter is selected to filter the image, which can preserve the edges. When the pixels belong to the flat regions, the fourth-order filter is adopted to smooth the image, which can eliminate the staircase artifacts. In addition, the split Bregman and relaxation approach are employed in our numerical algorithm to speed up the computation. Experimental results demonstrate that our proposed model outperforms the state-of-the-art models cited in the paper in both the qualitative and quantitative evaluations.
Stacked Denoising Autoencoders Applied to Star/Galaxy Classification
NASA Astrophysics Data System (ADS)
Qin, H. R.; Lin, J. M.; Wang, J. Y.
2016-05-01
In recent years, deep learning has become increasingly popular because of its adaptability, high accuracy, and rich structure, but it has seen little use in astronomy. To address the problem that star/galaxy classification accuracy is high on the bright source set of the Sloan Digital Sky Survey (SDSS) but low on the faint source set, we introduce stacked denoising autoencoders (SDA), a deep learning model, together with dropout, which can greatly improve robustness and noise resistance. We randomly selected bright and faint source sets with spectroscopic measurements from DR12 and DR7 and preprocessed them. Afterwards, we randomly selected training and testing sets without replacement from the bright and faint sets. Finally, we used the resulting training sets to train SDA models for SDSS-DR7 and SDSS-DR12. We compared the testing results with those of the Library for Support Vector Machines (LibSVM), J48, Logistic Model Trees (LMT), Support Vector Machine (SVM), Logistic Regression, and Decision Stump algorithms on the SDSS-DR12 testing set, and with the results of six kinds of decision trees on the SDSS-DR7 testing set. The simulations show that SDA achieves better classification accuracy than the other machine learning algorithms. When completeness is used as the test metric, accuracy improves by about 15% on the faint set of SDSS-DR7.
Image denoising based on wavelets and multifractals for singularity detection.
Zhong, Junmei; Ning, Ruola
2005-10-01
This paper presents a very efficient algorithm for image denoising based on wavelets and multifractals for singularity detection. A challenge of image denoising is how to preserve the edges of an image when reducing noise. By modeling the intensity surface of a noisy image as statistically self-similar multifractal processes and taking advantage of the multiresolution analysis with wavelet transform to exploit the local statistical self-similarity at different scales, the pointwise singularity strength value characterizing the local singularity at each scale was calculated. By thresholding the singularity strength, wavelet coefficients at each scale were classified into two categories: the edge-related and regular wavelet coefficients and the irregular coefficients. The irregular coefficients were denoised using an approximate minimum mean-squared error (MMSE) estimation method, while the edge-related and regular wavelet coefficients were smoothed using the fuzzy weighted mean (FWM) filter aiming at preserving the edges and details when reducing noise. Furthermore, to make the FWM-based filtering more efficient for noise reduction at the lowest decomposition level, the MMSE-based filtering was performed as the first pass of denoising followed by performing the FWM-based filtering. Experimental results demonstrated that this algorithm could achieve both good visual quality and high PSNR for the denoised images.
Single-image noise level estimation for blind denoising.
Liu, Xinhao; Tanaka, Masayuki; Okutomi, Masatoshi
2013-12-01
Noise level is an important parameter for many image processing applications. For example, the performance of an image denoising algorithm can be much degraded by poor noise level estimation. Most existing denoising algorithms simply assume the noise level is known, which largely prevents their practical use. Moreover, even given the true noise level, these denoising algorithms still cannot achieve the best performance, especially for scenes with rich texture. In this paper, we propose a patch-based noise level estimation algorithm and suggest that the noise level parameter should be tuned according to scene complexity. Our approach selects low-rank patches without high-frequency components from a single noisy image, based on the gradients of the patches and their statistics. The noise level is then estimated from the selected patches using principal component analysis. Because the true noise level does not always yield the best performance for nonblind denoising algorithms, we further tune the noise level parameter for nonblind denoising. Experiments demonstrate that both the accuracy and stability of our method are superior to those of state-of-the-art noise level estimation algorithms across various scenes and noise levels.
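The patch-PCA estimator of the paper is not reproduced here; for comparison, a classical single-image baseline is the median-absolute-deviation (MAD) estimator on finest-scale wavelet details, which assumes most fine-scale coefficients are pure noise. A minimal sketch (our implementation, using Haar details of a 1-D signal):

```python
import math
import statistics

def estimate_noise_sigma(x):
    """MAD noise estimate from finest-scale Haar detail coefficients.

    For i.i.d. Gaussian noise, d_i = (x[2i] - x[2i+1]) / sqrt(2) ~ N(0, sigma^2),
    and median(|d|) / 0.6745 is a robust estimate of sigma that is insensitive
    to the sparse large coefficients contributed by edges.
    """
    d = [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return statistics.median(abs(v) for v in d) / 0.6745
```

This baseline, like the paper's method, tends to over-estimate on richly textured scenes where fine-scale structure is not actually noise, which is exactly the failure mode that motivates selecting low-rank, texture-free patches first.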
GPU-accelerated denoising of 3D magnetic resonance images
Howison, Mark; Wes Bethel, E.
2014-05-29
The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. In practice, applying these filtering operations requires setting multiple parameters. This study was designed to provide better guidance to practitioners for choosing the most appropriate parameters by answering two questions: what parameters yield the best denoising results in practice? And what tuning is necessary to achieve optimal performance on a modern GPU? To answer the first question, we use two different metrics, mean squared error (MSE) and mean structural similarity (MSSIM), to compare denoising quality against a reference image. Surprisingly, the best improvement in structural similarity with the bilateral filter is achieved with a small stencil size that lies within the range of real-time execution on an NVIDIA Tesla M2050 GPU. Moreover, inappropriate choices for parameters, especially scaling parameters, can yield very poor denoising performance. To answer the second question, we perform an autotuning study to empirically determine optimal memory tiling on the GPU. The variation in these results suggests that such tuning is an essential step in achieving real-time performance. These results have important implications for the real-time application of denoising to MR images in clinical settings that require fast turn-around times.
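The bilateral filter tuned above combines a spatial kernel and an intensity-range kernel; the "scaling parameters" whose poor choice degrades quality are the two Gaussian widths. A minimal 1-D CPU sketch of the filter (our implementation; the GPU tiling is omitted):

```python
import math

def bilateral_1d(x, radius=2, sigma_s=1.0, sigma_r=1.0):
    """1-D bilateral filter: each output sample is a weighted mean of neighbors.

    Weights fall off with spatial distance (sigma_s) and with intensity
    difference (sigma_r), so samples across a sharp edge get near-zero
    weight and the edge is preserved while flat regions are smoothed.
    """
    n = len(x)
    out = []
    for i in range(n):
        wsum = vsum = 0.0
        for j in range(max(0, i - radius), min(n, i + radius + 1)):
            w = math.exp(-(i - j) ** 2 / (2 * sigma_s ** 2)
                         - (x[i] - x[j]) ** 2 / (2 * sigma_r ** 2))
            wsum += w
            vsum += w * x[j]
        out.append(vsum / wsum)
    return out
```

With a small range width relative to an intensity step, the filter leaves the step essentially untouched, which illustrates why the stencil size (radius) can stay small, as the study's MSSIM results suggest, while sigma_r does most of the edge-preservation work.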
Remote sensing image denoising application by generalized morphological component analysis
NASA Astrophysics Data System (ADS)
Yu, Chong; Chen, Xiong
2014-12-01
In this paper, we introduce a remote sensing image denoising method based on generalized morphological component analysis (GMCA). This algorithm extends morphological component analysis (MCA) to the blind source separation framework. The iterative thresholding strategy adopted by GMCA first works on the most significant features in the image, and then progressively incorporates smaller features to finely tune the parameters of the whole model. A mathematical analysis of the computational complexity of the GMCA algorithm is provided. Several comparison experiments with state-of-the-art denoising algorithms are reported. To assess the algorithms quantitatively, the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) indices are calculated, measuring the denoising effect in terms of gray-level fidelity and structure-level fidelity, respectively. Quantitative analysis of the experimental results, consistent with the visual quality of the denoised images, shows that the GMCA algorithm is highly effective for remote sensing image denoising: it is difficult to visually distinguish the original noiseless image from the image recovered by GMCA.
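Of the two quality indices used above, PSNR has a one-line definition worth making concrete (SSIM is more involved and omitted here); this is the standard formula, not code from the paper:

```python
import math

def psnr(ref, test, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two equal-length pixel sequences.

    PSNR = 10 * log10(peak^2 / MSE); identical images give infinite PSNR,
    and every halving of RMS error adds about 6 dB.
    """
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak * peak / mse)
```

PSNR captures only pixel-wise (gray-level) fidelity, which is why the paper pairs it with SSIM to also score structural fidelity.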
An adaptive nonlocal means scheme for medical image denoising
NASA Astrophysics Data System (ADS)
Thaipanich, Tanaphol; Kuo, C.-C. Jay
2010-03-01
Medical images often consist of low-contrast objects corrupted by random noise arising in the image acquisition process. Thus, image denoising is one of the fundamental tasks required by medical imaging analysis. In this work, we investigate an adaptive denoising scheme based on the nonlocal (NL)-means algorithm for medical imaging applications. In contrast with the traditional NL-means algorithm, the proposed adaptive NL-means (ANL-means) denoising scheme has three unique features. First, it employs the singular value decomposition (SVD) method and the K-means clustering technique for robust classification of blocks in noisy images. Second, the local window is adaptively adjusted to match the local property of a block. Finally, a rotated block matching algorithm is adopted for better similarity matching. Experimental results for both additive white Gaussian noise (AWGN) and Rician noise are given to demonstrate the superior performance of the proposed ANL denoising technique over various image denoising benchmarks in terms of both PSNR and perceptual quality.
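The baseline NL-means idea underlying the scheme can be sketched in 1-D: each sample is replaced by a weighted average of samples whose surrounding patches look similar, with weights decaying in the patch distance. This is a minimal illustration of plain NL-means (the paper's SVD/K-means classification, adaptive windows, and rotated matching are omitted; parameter names are ours):

```python
import math

def nl_means_1d(x, patch=1, search=5, h=1.0):
    """1-D non-local means: average samples with similar surrounding patches.

    For each index i, every candidate j in the search window contributes
    weight exp(-||patch_i - patch_j||^2 / h^2); patches are clamped at the
    signal boundaries.
    """
    n = len(x)
    out = []
    for i in range(n):
        wsum = vsum = 0.0
        for j in range(max(0, i - search), min(n, i + search + 1)):
            d2 = 0.0
            for k in range(-patch, patch + 1):
                pi = min(max(i + k, 0), n - 1)
                pj = min(max(j + k, 0), n - 1)
                d2 += (x[pi] - x[pj]) ** 2
            w = math.exp(-d2 / (h * h))
            wsum += w
            vsum += w * x[j]
        out.append(vsum / wsum)
    return out
```

The filtering parameter h plays the role that the paper's adaptive window sizing tunes automatically: small h averages only near-identical patches, large h averages broadly.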
Image sequence denoising via sparse and redundant representations.
Protter, Matan; Elad, Michael
2009-01-01
In this paper, we consider denoising of image sequences that are corrupted by zero-mean additive white Gaussian noise. Relative to single image denoising techniques, denoising of sequences aims to also utilize the temporal dimension. This assists in getting both faster algorithms and better output quality. This paper focuses on utilizing sparse and redundant representations for image sequence denoising, extending previously reported work. In the single image setting, the K-SVD algorithm is used to train a sparsifying dictionary for the corrupted image. This paper generalizes the above algorithm by offering several extensions: i) the atoms used are 3-D; ii) the dictionary is propagated from one frame to the next, reducing the number of required iterations; and iii) averaging is done on patches in both spatial and temporal neighboring locations. These modifications lead to substantial benefits in complexity and denoising performance, compared to simply running the single image algorithm sequentially. The algorithm's performance is experimentally compared to several state-of-the-art algorithms, demonstrating comparable or favorable results.
Geometric properties of solutions to the total variation denoising problem
NASA Astrophysics Data System (ADS)
Chambolle, Antonin; Duval, Vincent; Peyré, Gabriel; Poon, Clarice
2017-01-01
This article studies the denoising performance of total variation (TV) image regularization. More precisely, we study geometrical properties of the solution to the so-called Rudin-Osher-Fatemi total variation denoising method. The first contribution of this paper is a precise mathematical definition of the ‘extended support’ (associated to the noise-free image) of TV denoising. It is intuitively the region which is unstable and will suffer from the staircasing effect. We highlight in several practical cases, such as the indicator of convex sets, that this region can be determined explicitly. Our second and main contribution is a proof that the TV denoising method indeed restores an image which is exactly constant outside a small tube surrounding the extended support. The radius of this tube shrinks toward zero as the noise level vanishes, and we are able to determine, in some cases, an upper bound on the convergence rate. For indicators of so-called ‘calibrable’ sets (such as disks or properly eroded squares), this extended support matches the edges, so that discontinuities produced by TV denoising cluster tightly around the edges. In contrast, for indicators of more general shapes or for complicated images, this extended support can be larger. Besides these main results, our paper also proves several intermediate results about fine properties of TV regularization, in particular for indicators of calibrable and convex sets, which are of independent interest.
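For readers who want a numerical feel for the model under study, here is a minimal sketch of Rudin-Osher-Fatemi TV denoising by gradient descent on a smoothed TV term. It illustrates the model only, not the paper's analysis; the smoothing constant eps, step size tau, and weight lam are illustrative choices:

```python
import numpy as np

def tv_denoise(f, lam=0.1, n_iter=150, eps=1e-6, tau=0.2):
    """ROF model 0.5*||u - f||^2 + lam*TV(u), minimized by gradient descent
    on a smoothed TV term (eps keeps the gradient well defined)."""
    u = f.astype(float).copy()
    for _ in range(n_iter):
        # forward differences (Neumann boundary: last difference is zero)
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        px, py = ux / mag, uy / mag
        # backward-difference divergence of the normalized gradient field
        divx = px.copy(); divx[:, 1:] -= px[:, :-1]
        divy = py.copy(); divy[1:, :] -= py[:-1, :]
        u -= tau * ((u - f) - lam * (divx + divy))
    return u

rng = np.random.default_rng(0)
clean = np.zeros((16, 16)); clean[4:12, 4:12] = 1.0  # indicator of a square
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = tv_denoise(noisy)
```

On the indicator of a square, the output is close to piecewise constant away from the edges, consistent with the behavior analyzed above.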
Feuermann, Marc; Gaudet, Pascale; Mi, Huaiyu; Lewis, Suzanna E.; Thomas, Paul D.
2016-01-01
We previously reported a paradigm for large-scale phylogenomic analysis of gene families that takes advantage of the large corpus of experimentally supported Gene Ontology (GO) annotations. This ‘GO Phylogenetic Annotation’ approach integrates GO annotations from evolutionarily related genes across ∼100 different organisms in the context of a gene family tree, in which curators build an explicit model of the evolution of gene functions. GO Phylogenetic Annotation models the gain and loss of functions in a gene family tree, which is used to infer the functions of uncharacterized (or incompletely characterized) gene products, even for human proteins that are relatively well studied. Here, we report our results from applying this paradigm to two well-characterized cellular processes, apoptosis and autophagy. This revealed several important observations with respect to GO annotations and how they can be used for function inference. Notably, we applied only a small fraction of the experimentally supported GO annotations to infer function in other family members. The majority of other annotations describe indirect effects, phenotypes or results from high throughput experiments. In addition, we show here how feedback from phylogenetic annotation leads to significant improvements in the PANTHER trees, the GO annotations and GO itself. Thus GO phylogenetic annotation both increases the quantity and improves the accuracy of the GO annotations provided to the research community. We expect these phylogenetically based annotations to be of broad use in gene enrichment analysis as well as other applications of GO annotations. Database URL: http://amigo.geneontology.org/amigo PMID:28025345
Wavelet Denoising of Mobile Radiation Data
Campbell, D B
2008-10-31
The FY08 phase of this project investigated the merits of video fusion as a method for mitigating the false alarms encountered by vehicle borne detection systems in an effort to realize performance gains associated with wavelet denoising. The fusion strategy exploited the significant correlations which exist between data obtained from radiation detectors and video systems with coincident fields of view. The additional information provided by optical systems can greatly increase the capabilities of these detection systems by reducing the burden of false alarms and through the generation of actionable information. The investigation into the use of wavelet analysis techniques as a means of filtering the gross-counts signal obtained from moving radiation detectors showed promise for vehicle borne systems. However, the applicability of these techniques to man-portable systems is limited due to minimal gains in performance over the rapid feedback available to system operators under walking conditions. Furthermore, the fusion of video holds significant promise for systems operating from vehicles or systems organized into stationary arrays; however, the added complexity and hardware required by this technique renders it infeasible for man-portable systems.
Image denoising via group Sparse representation over learned dictionary
NASA Astrophysics Data System (ADS)
Cheng, Pan; Deng, Chengzhi; Wang, Shengqian; Zhang, Chunfeng
2013-10-01
Images are one of our most important sources of information. In practical applications, however, images are often subject to various kinds of noise, which makes image denoising particularly important. The K-SVD algorithm improves the denoising effect by learning the dictionary atoms through sparse coding, rather than using a fixed, predefined dictionary. To further improve the denoising effect, we propose to extend the K-SVD algorithm via group sparse representation. The key idea of this method is to divide the sparse coefficients into groups, so that the correlation among elements can be adjusted by controlling the group size. This approach strengthens the local constraints between adjacent atoms and thereby increases the correlation between them. The experimental results show that our method recovers images more effectively, suppresses block artifacts, and produces smoother images.
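The simplest relative of the learned-dictionary sparse coding described above is hard thresholding of one patch in a fixed orthonormal 2D DCT dictionary, sketched below. This uses a fixed dictionary rather than K-SVD atoms or grouped coefficients, and all names and parameter values are illustrative:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis as an n x n matrix (rows are atoms)."""
    i = np.arange(n)
    D = np.cos(np.pi * (i[None, :] + 0.5) * i[:, None] / n) * np.sqrt(2.0 / n)
    D[0, :] = np.sqrt(1.0 / n)
    return D

def denoise_patch(patch, D, tau):
    """Sparse-code one patch in the 2D DCT dictionary by hard thresholding."""
    coef = D @ patch @ D.T            # analysis (2D DCT coefficients)
    coef[np.abs(coef) < tau] = 0.0    # keep only the significant atoms
    return D.T @ coef @ D             # synthesis

rng = np.random.default_rng(1)
D = dct_matrix(8)
patch = np.outer(np.linspace(0.0, 1.0, 8), np.ones(8))  # smooth ramp patch
noisy = patch + 0.05 * rng.standard_normal((8, 8))
den = denoise_patch(noisy, D, tau=0.15)                 # tau ~ 3 * sigma
```

A smooth patch has few large DCT coefficients, so most noise coefficients fall below tau and are discarded.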
Total Variation Denoising and Support Localization of the Gradient
NASA Astrophysics Data System (ADS)
Chambolle, A.; Duval, V.; Peyré, G.; Poon, C.
2016-10-01
This paper describes the geometrical properties of the solutions to the total variation denoising method. A folklore statement is that this method is able to restore sharp edges, but at the same time, might introduce some staircasing (i.e. “fake” edges) in flat areas. Quite surprisingly, numerical evidence aside, almost no theoretical results are available to back up these claims. The first contribution of this paper is a precise mathematical definition of the “extended support” (associated to the noise-free image) of TV denoising. This is intuitively the region which is unstable and will suffer from the staircasing effect. Our main result shows that the TV denoising method indeed restores a piece-wise constant image outside a small tube surrounding the extended support. Furthermore, the radius of this tube shrinks toward zero as the noise level vanishes and, in some cases, an upper bound on the convergence rate is given.
Non-local MRI denoising using random sampling.
Hu, Jinrong; Zhou, Jiliu; Wu, Xi
2016-09-01
In this paper, we propose a random sampling non-local means (SNLM) algorithm to eliminate noise in 3D MRI datasets. Non-local means (NLM) algorithms have been implemented efficiently for MRI denoising, but are always limited by high computational complexity. Compared to conventional methods, which raster through the entire search window when computing similarity weights, the proposed SNLM algorithm randomly selects a small subset of voxels, which dramatically decreases the computational burden while yielding competitive denoising results. Moreover, a structure tensor, which encapsulates high-order information, was introduced as an optimal sampling pattern for further improvement. Numerical experiments demonstrated that the proposed SNLM method achieves a good balance between denoising quality and computational efficiency. At a relative sampling ratio of ξ = 0.05, SNLM removes noise as effectively as full NLM, while the running time is reduced to 1/20 of NLM's.
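The random-sampling idea can be illustrated on a 1D toy signal: only a random fraction of the search window is visited when accumulating similarity weights. This sketch omits the structure-tensor sampling pattern and the 3D setting; all parameters are illustrative:

```python
import numpy as np

def sampled_nlm_1d(x, patch=5, search=21, ratio=0.3, h=0.3, seed=0):
    """Toy 1D non-local means that visits only a random fraction of the
    search window, illustrating the random-sampling idea."""
    rng = np.random.default_rng(seed)
    pad = patch // 2
    xp = np.pad(x, pad, mode="reflect")
    n = len(x)
    out = np.empty(n)
    half = search // 2
    for i in range(n):
        ref = xp[i:i + patch]
        cand = np.arange(max(0, i - half), min(n, i + half + 1))
        keep = rng.random(len(cand)) < ratio
        keep[cand == i] = True            # always include the pixel itself
        cand = cand[keep]
        d2 = np.array([np.mean((ref - xp[j:j + patch]) ** 2) for j in cand])
        w = np.exp(-d2 / h ** 2)
        out[i] = np.sum(w * x[cand]) / np.sum(w)
    return out

rng = np.random.default_rng(0)
x_clean = np.zeros(200); x_clean[100:] = 1.0
x = x_clean + 0.15 * rng.standard_normal(200)
y = sampled_nlm_1d(x)
```

Even with roughly a third of the window sampled, the weighted average still suppresses most of the noise.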
Sinogram denoising via simultaneous sparse representation in learned dictionaries.
Karimi, Davood; Ward, Rabab K
2016-05-07
Reducing the radiation dose in computed tomography (CT) is highly desirable but it leads to excessive noise in the projection measurements. This can significantly reduce the diagnostic value of the reconstructed images. Removing the noise in the projection measurements is, therefore, essential for reconstructing high-quality images, especially in low-dose CT. In recent years, two new classes of patch-based denoising algorithms proved superior to other methods in various denoising applications. The first class is based on sparse representation of image patches in a learned dictionary. The second class is based on the non-local means method. Here, the image is searched for similar patches and the patches are processed together to find their denoised estimates. In this paper, we propose a novel denoising algorithm for cone-beam CT projections. The proposed method has similarities to both these algorithmic classes but is more effective and much faster. In order to exploit both the correlation between neighboring pixels within a projection and the correlation between pixels in neighboring projections, the proposed algorithm stacks noisy cone-beam projections together to form a 3D image and extracts small overlapping 3D blocks from this 3D image for processing. We propose a fast algorithm for clustering all extracted blocks. The central assumption in the proposed algorithm is that all blocks in a cluster have a joint-sparse representation in a well-designed dictionary. We describe algorithms for learning such a dictionary and for denoising a set of projections using this dictionary. We apply the proposed algorithm on simulated and real data and compare it with three other algorithms. Our results show that the proposed algorithm outperforms some of the best denoising algorithms, while also being much faster.
Dictionary Pair Learning on Grassmann Manifolds for Image Denoising.
Zeng, Xianhua; Bian, Wei; Liu, Wei; Shen, Jialie; Tao, Dacheng
2015-11-01
Image denoising is a fundamental problem in computer vision and image processing that holds considerable practical importance for real-world applications. Traditional patch-based, sparse-coding-driven image denoising methods convert 2D image patches into 1D vectors for further processing, and thus inevitably break the inherent 2D geometric structure of natural images. To overcome this limitation of previous image denoising methods, we propose a 2D image denoising model, namely, the dictionary pair learning (DPL) model, and we design a corresponding algorithm called the DPL on the Grassmann-manifold (DPLG) algorithm. The DPLG algorithm first learns an initial dictionary pair (i.e., the left and right dictionaries) by employing a subspace partition technique on the Grassmann manifold, wherein the refined dictionary pair is obtained through a sub-dictionary pair merging. The DPLG obtains a sparse representation by encoding each image patch only with the selected sub-dictionary pair. The non-zero elements of the sparse representation are further smoothed by the graph Laplacian operator to remove the noise. Consequently, the DPLG algorithm not only preserves the inherent 2D geometric structure of natural images but also performs manifold smoothing in the 2D sparse coding space. Experimental evaluations on benchmark images and the Berkeley segmentation data sets demonstrate that the DPLG algorithm improves the structural similarity (SSIM) values, and hence the perceptual visual quality, of the denoised images, while also producing peak signal-to-noise ratio values competitive with popular image denoising algorithms.
Study on Underwater Image Denoising Algorithm Based on Wavelet Transform
NASA Astrophysics Data System (ADS)
Jian, Sun; Wen, Wang
2017-02-01
This paper analyzes the application of MATLAB to underwater image processing. The transmission characteristics of underwater laser light signals and the main kinds of underwater noise are described, and common noise-suppression algorithms, namely the Wiener filter, the median filter, and the average filter, are introduced. The advantages and disadvantages of each algorithm with respect to image sharpness and edge preservation are then compared. A hybrid filter algorithm based on the wavelet transform is proposed, which can be used for color image denoising. Finally, the PSNR and NMSE of each algorithm are reported to compare their denoising ability.
GPU-Accelerated Denoising in 3D (GD3D)
2013-10-01
The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. This software addresses two facets of this promising application: what tuning is necessary to achieve optimal performance on a modern GPU, and what parameters yield the best denoising results in practice? To answer the first question, the software performs an autotuning step to empirically determine optimal memory blocking on the GPU. To answer the second, it performs a sweep of algorithm parameters to determine the combination that best reduces the mean squared error relative to a noiseless reference image.
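The parameter-sweep facet can be sketched as a brute-force search for the setting that minimizes MSE against the noiseless reference. A separable box blur stands in here for the real GPU denoisers, and all names and values are illustrative:

```python
import numpy as np

def box_blur(img, width):
    """Separable box filter of odd width (a stand-in for the real denoisers)."""
    k = np.ones(width) / width
    pad = width // 2
    blur_1d = lambda v: np.convolve(np.pad(v, pad, mode="reflect"), k, mode="valid")
    tmp = np.apply_along_axis(blur_1d, 1, img)
    return np.apply_along_axis(blur_1d, 0, tmp)

def best_params(noisy, reference, denoise, grid):
    """Return the parameter setting minimizing MSE against the reference."""
    return min(grid, key=lambda p: np.mean((denoise(noisy, **p) - reference) ** 2))

rng = np.random.default_rng(2)
reference = np.outer(np.linspace(0.0, 1.0, 32), np.ones(32))  # smooth ramp
noisy = reference + 0.2 * rng.standard_normal(reference.shape)
grid = [{"width": w} for w in (1, 3, 5, 7)]
best = best_params(noisy, reference, box_blur, grid)
```

On a smooth image with heavy noise, the sweep picks a nontrivial smoothing width rather than the identity filter.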
Image denoising with the dual-tree complex wavelet transform
NASA Astrophysics Data System (ADS)
Yaseen, Alauldeen S.; Pavlova, Olga N.; Pavlov, Alexey N.; Hramov, Alexander E.
2016-04-01
The purpose of this study is to compare image denoising techniques based on real and complex wavelet transforms. Possibilities provided by the classical discrete wavelet transform (DWT) with hard and soft thresholding are considered, and the influences of the wavelet basis and image resizing are discussed. The quality of image denoising for the standard 2-D DWT and the dual-tree complex wavelet transform (DT-CWT) is studied. It is shown that the DT-CWT outperforms the 2-D DWT with an appropriate selection of the threshold level.
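A single-level Haar version of DWT denoising with soft thresholding, the kind of baseline this study compares against, can be sketched as follows. Haar is chosen here only for brevity; the study's other bases and the DT-CWT are not reproduced:

```python
import numpy as np

def haar2(x):
    """One level of the orthonormal 2D Haar DWT on an even-sized image."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    return (a + b + c + d) / 2, (a - b + c - d) / 2, \
           (a + b - c - d) / 2, (a - b - c + d) / 2

def ihaar2(ll, lh, hl, hh):
    """Inverse of haar2 (exact reconstruction)."""
    h, w = ll.shape
    x = np.empty((2 * h, 2 * w))
    x[0::2, 0::2] = (ll + lh + hl + hh) / 2
    x[0::2, 1::2] = (ll - lh + hl - hh) / 2
    x[1::2, 0::2] = (ll + lh - hl - hh) / 2
    x[1::2, 1::2] = (ll - lh - hl + hh) / 2
    return x

def soft(w, t):
    """Soft threshold: shrink coefficients toward zero by t."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def dwt_denoise(img, t):
    ll, lh, hl, hh = haar2(img)
    return ihaar2(ll, soft(lh, t), soft(hl, t), soft(hh, t))

rng = np.random.default_rng(3)
clean = 0.5 * np.ones((16, 16))
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
den = dwt_denoise(noisy, t=0.2)
```

With an orthonormal transform, thresholding the detail bands removes most of the noise energy while the approximation band is left intact.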
Frames-Based Denoising in 3D Confocal Microscopy Imaging.
Konstantinidis, Ioannis; Santamaria-Pang, Alberto; Kakadiaris, Ioannis
2005-01-01
In this paper, we propose a novel denoising method for 3D confocal microscopy data based on robust edge detection. Our approach relies on the construction of a non-separable frame system in 3D that incorporates the Sobel operator in dual spatial directions. This multidirectional set of digital filters is capable of robustly detecting edge information by ensemble thresholding of the filtered data. We demonstrate the application of our method to both synthetic and real confocal microscopy data by comparing it to denoising methods based on separable 3D wavelets and 3D median filtering, and report very encouraging results.
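The Sobel operator at the heart of the frame construction computes a gradient magnitude from two directional kernels. The sketch below is the ordinary 2D version; the paper's non-separable 3D frame system is not reproduced, and the helper names are illustrative:

```python
import numpy as np

def conv2_same(img, kernel):
    """Direct 'same'-size 2D correlation with reflected borders."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    p = np.pad(img, ((ph, ph), (pw, pw)), mode="reflect")
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def sobel_magnitude(img):
    """Gradient magnitude from the horizontal and vertical Sobel kernels."""
    kx = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]], float)
    return np.hypot(conv2_same(img, kx), conv2_same(img, kx.T))

img = np.zeros((8, 8)); img[:, 4:] = 1.0   # vertical step edge
mag = sobel_magnitude(img)
```

The response peaks on the two columns adjacent to the step and vanishes in the flat regions, which is exactly the behavior the ensemble thresholding exploits.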
NASA Astrophysics Data System (ADS)
Messer, Sheila R.; Agzarian, John; Abbott, Derek
2001-05-01
Phonocardiograms (PCGs) have many advantages over traditional auscultation (listening to the heart) because they may be replayed, may be analyzed for spectral and frequency content, and frequencies inaudible to the human ear may be recorded. However, various sources of noise may pollute a PCG, including lung sounds, environmental noise, and noise generated from contact between the recording device and the skin. Because PCG signals are known to be nonlinear and it is often not possible to determine their noise content, traditional de-noising methods may not be effectively applied. However, other methods including wavelet de-noising, wavelet packet de-noising and averaging can be employed to de-noise the PCG. This study examines and compares these de-noising methods, addressing such questions as which de-noising method gives a better SNR, how much signal information is lost in the de-noising process, and the appropriate uses of the different methods, down to such specifics as which wavelets and decomposition levels give the best results in wavelet and wavelet packet de-noising. In general, wavelet and wavelet packet de-noising performed roughly equally, with optimal de-noising occurring at 3-5 levels of decomposition. Averaging also proved a highly useful de-noising technique; however, in some cases averaging is not appropriate. The Hilbert transform is used to illustrate the results of the de-noising process and to extract instantaneous features including instantaneous amplitude, frequency, and phase.
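A small utility like the following is all that is needed for the SNR comparisons described above; this is a generic output-SNR definition (signal power over residual-noise power), one of several conventions in use:

```python
import numpy as np

def snr_db(clean, estimate):
    """Output SNR in dB: signal power over residual-noise power."""
    clean = np.asarray(clean, float)
    noise = np.asarray(estimate, float) - clean
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

clean = np.ones(100)
estimate = clean + 0.1   # uniform 0.1 error: power ratio 100, i.e. 20 dB
print(round(snr_db(clean, estimate), 2))  # → 20.0
```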
Region-based image denoising through wavelet and fast discrete curvelet transform
NASA Astrophysics Data System (ADS)
Gu, Yanfeng; Guo, Yan; Liu, Xing; Zhang, Ye
2008-10-01
Image denoising is one of the important research topics in the image processing field. In this paper, the fast discrete curvelet transform (FDCT) and the undecimated wavelet transform (UDWT) are proposed for image denoising. A noisy image is first denoised by FDCT and UDWT separately. The whole image space is then divided into edge regions and non-edge regions. After that, the wavelet transform is performed on the images denoised by FDCT and UDWT respectively. Finally, the resultant image is fused by combining the edge-region wavelet coefficients of the image denoised by FDCT with the non-edge-region wavelet coefficients of the image denoised by UDWT. The proposed method is validated through numerical experiments conducted on standard test images. The experimental results show that the proposed algorithm outperforms wavelet-based and curvelet-based image denoising methods and preserves linear features well.
Stevens, Katherine; McCabe, Christopher; Brazier, John; Roberts, Jennifer
2007-09-01
A key issue in health state valuation modelling is the choice of functional form. The two most frequently used preference based instruments adopt different approaches; one based on multi-attribute utility theory (MAUT), the other on statistical analysis. There has been no comparison of these alternative approaches in the context of health economics. We report a comparison of these approaches for the health utilities index mark 2. The statistical inference model predicts more accurately than the one based on MAUT. We discuss possible explanations for the differences in performance, the importance of the findings, and implications for future research.
NASA Astrophysics Data System (ADS)
Lu, Yu; Mo, H. J.; Lu, Zhankui; Katz, Neal; Weinberg, Martin D.
2014-09-01
We infer mechanisms of galaxy formation for a broad family of semi-analytic models (SAMs) constrained by the K-band luminosity function and H I mass function of local galaxies using tools of Bayesian analysis. Even with a broad search in parameter space, the whole model family fails to match the constraining data. In the best-fitting models, the star formation and feedback parameters in low-mass haloes are tightly constrained by the two data sets, and the analysis reveals several generic failures of models that similarly apply to other existing SAMs. First, based on the assumption that baryon accretion follows the dark matter accretion, large mass-loading factors are required for haloes with circular velocities lower than 200 km s-1, and most of the wind mass must be expelled from the haloes. Second, assuming that the feedback is powered by Type II supernovae with a Chabrier initial mass function, the outflow requires more than 25 per cent of the available supernova kinetic energy. Finally, the posterior predictive distributions for the star formation history are dramatically inconsistent with observations for masses similar to or smaller than the Milky Way mass. The inferences suggest that the current model family is still missing some key physical processes that regulate the gas accretion and star formation in galaxies with masses below that of the Milky Way.
Discrete shearlet transform on GPU with applications in anomaly detection and denoising
NASA Astrophysics Data System (ADS)
Gibert, Xavier; Patel, Vishal M.; Labate, Demetrio; Chellappa, Rama
2014-12-01
Shearlets have emerged in recent years as one of the most successful methods for the multiscale analysis of multidimensional signals. Unlike wavelets, shearlets form a pyramid of well-localized functions defined not only over a range of scales and locations, but also over a range of orientations and with highly anisotropic supports. As a result, shearlets are much more effective than traditional wavelets in handling the geometry of multidimensional data, and this was exploited in a wide range of applications from image and signal processing. However, despite their desirable properties, the wider applicability of shearlets is limited by the computational complexity of current software implementations. For example, denoising a single 512 × 512 image using a current implementation of the shearlet-based shrinkage algorithm can take between 10 s and 2 min, depending on the number of CPU cores, and much longer processing times are required for video denoising. On the other hand, due to the parallel nature of the shearlet transform, it is possible to use graphics processing units (GPU) to accelerate its implementation. In this paper, we present an open source stand-alone implementation of the 2D discrete shearlet transform using CUDA C++ as well as GPU-accelerated MATLAB implementations of the 2D and 3D shearlet transforms. We have instrumented the code so that we can analyze the running time of each kernel under different GPU hardware. In addition to denoising, we describe a novel application of shearlets for detecting anomalies in textured images. In this application, computation times can be reduced by a factor of 50 or more, compared to multicore CPU implementations.
Athavale, Prashant; Xu, Robert; Radau, Perry; Nachman, Adrian; Wright, Graham A
2015-07-01
Images consist of structures of varying scales: large scale structures such as flat regions, and small scale structures such as noise, textures, and rapidly oscillatory patterns. In the hierarchical (BV, L²) image decomposition, Tadmor et al. (2004) start with extracting coarse scale structures from a given image, and successively extract finer structures from the residuals in each step of the iterative decomposition. We propose to begin instead by extracting the finest structures from the given image and then proceed to extract increasingly coarser structures. In most images, noise could be considered as a fine scale structure. Thus, starting the image decomposition with finer scales, rather than large scales, leads to fast denoising. We note that our approach turns out to be equivalent to the nonstationary regularization in Scherzer and Weickert (2000). The continuous limit of this procedure leads to a time-scaled version of total variation flow. Motivated by specific clinical applications, we introduce an image-dependent weight in the regularization functional, and study the corresponding weighted TV flow. We show that the edge-preserving property of the multiscale representation of an input image obtained with the weighted TV flow can be enhanced and localized by appropriate choice of the weight. We use this in developing an efficient and edge-preserving denoising algorithm with control on speed and localization properties. We examine analytical properties of the weighted TV flow that give precise information about the denoising speed and the rate of change of energy of the images. An additional contribution of the paper is to use the images obtained at different scales for robust multiscale registration. We show that the inherently multiscale nature of the weighted TV flow improved performance for registration of noisy cardiac MRI images, compared to other methods such as bilateral or Gaussian filtering. A clinical application of the multiscale registration
Xu, Yungang; Guo, Maozu; Zou, Quan; Liu, Xiaoyan; Wang, Chunyu; Liu, Yang
2014-01-01
The cellular interactome, in which genes and/or their products interact on several levels, forming transcriptional regulatory, protein interaction, metabolic, and signal transduction networks, among others, has attracted decades of research focus. However, any single type of network can hardly explain the various interactive activities among genes. These networks characterize different interaction relationships, implying their unique intrinsic properties and defects, and covering different slices of biological information. Functional gene networks (FGNs), consolidated interaction networks that model a fuzzy and more generalized notion of gene-gene relations, have been proposed to combine heterogeneous networks with the goal of identifying functional modules supported by multiple interaction types. There are as yet no successful precedents of FGNs for sparsely studied non-model organisms, such as soybean (Glycine max), due to the absence of sufficient heterogeneous interaction data. We present an alternative solution for inferring the FGNs of soybean (SoyFGNs), in a pioneering study on the soybean interactome, which is also applicable to other organisms. SoyFGNs exhibit the typical characteristics of biological networks: scale-free, small-world architecture and modularization. Verified by co-expression and KEGG pathways, SoyFGNs are more extensive and accurate than an orthology network derived from Arabidopsis. As a case study, network-guided disease-resistance gene discovery indicates that SoyFGNs can provide system-level studies of gene functions and interactions. This work suggests that inferring and modelling the interactome of a non-model plant are feasible. It will speed up the discovery and definition of the functions and interactions of other genes that control important functions, such as nitrogen fixation and protein or lipid synthesis. The efforts of this study form the basis of our further comprehensive studies on the soybean functional interactome at the genome
NASA Astrophysics Data System (ADS)
Wang, Yanxue; He, Zhengjia; Zi, Yanyang
2010-01-01
In order to enhance the desired features related to some special type of machine fault, a technique based on the dual-tree complex wavelet transform (DTCWT) is proposed in this paper. It is demonstrated by means of numerical simulations that the DTCWT enjoys better shift invariance and reduced spectral aliasing than the second-generation wavelet transform (SGWT) and empirical mode decomposition. These advantages of the DTCWT arise from the relationship between the two dual-tree wavelet basis functions, rather than from matching a single wavelet basis function to the signal being analyzed. Since noise inevitably exists in measured signals, an enhanced vibration signal denoising algorithm incorporating DTCWT with NeighCoeff shrinkage is also developed. Denoising results on vibration signals from a cracked gear indicate that the proposed method can effectively remove noise and retain the valuable information as much as possible, compared with DWT- and SGWT-based NeighCoeff shrinkage denoising methods. As is well known, extraction of the comprehensive signatures embedded in vibration signals is of practical importance for clearly clarifying the roots of a fault, especially combined faults. In the case of multiple-feature detection, diagnosis results on rolling element bearings with combined faults and on actual industrial equipment confirm that the proposed DTCWT-based method is a powerful and versatile tool that consistently outperforms SGWT and the fast kurtogram, which are widely used recently. Moreover, it must be noted that the proposed method is well suited for on-line surveillance and diagnosis due to its good robustness and efficient algorithm.
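One common form of the NeighCoeff shrinkage rule scales each wavelet coefficient by the energy of its neighborhood. The 1D sketch below follows the Cai-Silverman-style rule with λ² = (2/3)σ² log n, which may differ in detail from the variant used in the paper:

```python
import numpy as np

def neighcoeff_shrink(w, sigma, win=3):
    """NeighCoeff rule: scale each coefficient by (1 - lam2/S2)_+, where S2
    is the squared energy of its neighborhood of size win."""
    n = len(w)
    lam2 = (2.0 / 3.0) * sigma ** 2 * np.log(n)
    pad = win // 2
    wp = np.pad(w, pad, mode="edge")
    out = np.zeros_like(w, dtype=float)
    for i in range(n):
        s2 = np.sum(wp[i:i + win] ** 2)
        if s2 > 0:
            out[i] = w[i] * max(0.0, 1.0 - lam2 / s2)
    return out

w = np.zeros(64)
w[10] = 8.0   # a strong signal coefficient
w[30] = 0.1   # an isolated noise-level coefficient
out = neighcoeff_shrink(w, sigma=1.0)
```

A strong coefficient is barely shrunk because its neighborhood energy dominates λ², while an isolated noise-level coefficient is set to zero.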
Image denoising via adaptive eigenvectors of graph Laplacian
NASA Astrophysics Data System (ADS)
Chen, Ying; Tang, Yibin; Xu, Ning; Zhou, Lin; Zhao, Li
2016-07-01
An image denoising method via adaptive eigenvectors of graph Laplacian (EGL) is proposed. Unlike the trivial parameter setting of the used eigenvectors in the traditional EGL method, in our method, the eigenvectors are adaptively selected in the whole denoising procedure. In detail, a rough image is first built with the eigenvectors from the noisy image, where the eigenvectors are selected by using the deviation estimation of the clean image. Subsequently, a guided image is effectively restored with a weighted average of the noisy and rough images. In this operation, the average coefficient is adaptively obtained to set the deviation of the guided image to approximately that of the clean image. Finally, the denoised image is achieved by a group-sparse model with the pattern from the guided image, where the eigenvectors are chosen in the error control of the noise deviation. Moreover, a modified group orthogonal matching pursuit algorithm is developed to efficiently solve the above group sparse model. The experiments show that our method not only improves the practicality of the EGL methods with the dependence reduction of the parameter setting, but also can outperform some well-developed denoising methods, especially for noise with large deviations.
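The basic mechanism behind EGL methods, smoothing by projection onto the low-frequency eigenvectors of a graph Laplacian, can be sketched in a few lines. This omits the adaptive eigenvector selection and the group-sparse model that are the paper's contribution; the path-graph example is purely illustrative:

```python
import numpy as np

def laplacian(W):
    """Combinatorial graph Laplacian L = D - W of a symmetric weight matrix."""
    return np.diag(W.sum(axis=1)) - W

def smooth_with_eigenvectors(signal, W, k):
    """Project a graph signal onto the k smoothest Laplacian eigenvectors."""
    vals, vecs = np.linalg.eigh(laplacian(W))   # eigenvalues in ascending order
    U = vecs[:, :k]                             # low-frequency basis
    return U @ (U.T @ signal)

# 8-node path graph as a toy 1D "image"
W = np.zeros((8, 8))
for i in range(7):
    W[i, i + 1] = W[i + 1, i] = 1.0

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0.0, np.pi, 8)) + 0.1 * rng.standard_normal(8)
smoothed = smooth_with_eigenvectors(signal, W, k=3)
```

On a connected graph the smallest eigenvalue is zero with a constant eigenvector, so constant signals pass through unchanged while high-frequency noise is suppressed.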
Image denoising using the higher order singular value decomposition.
Rajwade, Ajit; Rangarajan, Anand; Banerjee, Arunava
2013-04-01
In this paper, we propose a very simple and elegant patch-based, machine learning technique for image denoising using the higher order singular value decomposition (HOSVD). The technique simply groups together similar patches from a noisy image (with similarity defined by a statistically motivated criterion) into a 3D stack, computes the HOSVD coefficients of this stack, manipulates these coefficients by hard thresholding, and inverts the HOSVD transform to produce the final filtered image. Our technique chooses all required parameters in a principled way, relating them to the noise model. We also discuss our motivation for adopting the HOSVD as an appropriate transform for image denoising. We experimentally demonstrate the excellent performance of the technique on grayscale as well as color images. On color images, our method produces state-of-the-art results, outperforming other color image denoising algorithms at moderately high noise levels. A criterion for optimal patch-size selection and noise variance estimation from the residual images (after denoising) is also presented.
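The HOSVD pipeline the abstract describes (transform a stack of similar patches, hard-threshold the core coefficients, invert) can be sketched in NumPy. This is a minimal illustration, not the authors' full method: patch grouping is omitted, and the threshold choice is a generic universal-threshold assumption rather than their statistically derived one.

```python
import numpy as np

def unfold(T, mode):
    # flatten tensor T along every axis except `mode`
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, M, mode):
    # multiply tensor T by matrix M along the given mode
    return np.moveaxis(np.tensordot(M, T, axes=([1], [mode])), 0, mode)

def hosvd_denoise(stack, sigma):
    # stack: 3D group of similar noisy patches, shape (n_patches, p, p)
    U = [np.linalg.svd(unfold(stack, m), full_matrices=False)[0]
         for m in range(3)]
    core = stack
    for m, u in enumerate(U):
        core = mode_product(core, u.T, m)          # forward HOSVD transform
    tau = sigma * np.sqrt(2.0 * np.log(stack.size))  # assumed threshold rule
    core = np.where(np.abs(core) > tau, core, 0.0)   # hard thresholding
    rec = core
    for m, u in enumerate(U):
        rec = mode_product(rec, u, m)              # inverse transform
    return rec
```

Because the HOSVD bases are orthonormal, white noise stays white in the core tensor, so thresholding at a few noise standard deviations removes most of it while the concentrated signal coefficients survive.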
Pixon Based Image Denoising Scheme by Preserving Exact Edge Locations
NASA Astrophysics Data System (ADS)
Srikrishna, Atluri; Reddy, B. Eswara; Pompapathi, Manasani
2016-09-01
Denoising is an essential step in many image processing applications. A major concern in any image denoising algorithm is to preserve interesting structures of the image, such as abrupt changes in intensity (edges). In this paper an efficient algorithm for image denoising is proposed that recovers an estimate of the original image from its noisy version using diffusion equations in the pixon domain. The process consists of two main steps. In the first step, the pixons for the noisy image are obtained using a K-means clustering process; in the second step, diffusion equations are applied to the pixonal model of the image to obtain new intensity values for the restored image. The process has been applied to a variety of standard images, and the objective fidelity has been compared with existing algorithms. The experimental results show that the proposed algorithm performs better at preserving edge details, in terms of Figure of Merit, and yields improved Peak Signal-to-Noise Ratio (PSNR) values.
Local Sparse Structure Denoising for Low-Light-Level Image.
Han, Jing; Yue, Jiang; Zhang, Yi; Bai, Lianfa
2015-12-01
Sparse and redundant representations perform well in image denoising. However, sparsity-based methods fail to denoise low-light-level (LLL) images because of heavy and complex noise: they consider sparsity on image patches independently and tend to lose texture structures. To suppress noise and maintain textures simultaneously, it is necessary to embed noise-invariant features into the sparse decomposition process. We therefore used a local structure preserving sparse coding (LSPSc) formulation to explore local sparse structures (both the sparsity and the local structure) in images. We found that, by introducing a spatial local structure constraint into the general sparse coding algorithm, LSPSc could improve the robustness of sparse representation for patches under severe noise. We further used a kernel LSPSc (K-LSPSc) formulation, which extends LSPSc into the kernel space to weaken the influence of the linear structure constraint on nonlinear data. Based on the robust LSPSc and K-LSPSc algorithms, we constructed a local sparse structure denoising (LSSD) model for LLL images, which was demonstrated to perform well on natural LLL image denoising, indicating that both the LSPSc- and K-LSPSc-based LSSD models have a stable property of noise inhibition and texture detail preservation.
Image denoising with dominant sets by a coalitional game approach.
Hsiao, Pei-Chi; Chang, Long-Wen
2013-02-01
Dominant sets are a graph partition method for pairwise data clustering proposed by Pavan and Pelillo. We address the problem of dominant sets with a coalitional game model, in which each data point is treated as a player and similar data points are encouraged to group together for cooperation. We propose betrayal and hermit rules to describe the cooperative behaviors among the players. After applying the betrayal and hermit rules, an optimal and stable graph partition emerges, and no player in the partition will change its group. For computational feasibility, we design an approximate algorithm for finding a dominant set of mutually similar players and then apply the algorithm to image denoising. In image denoising, every pixel is treated as a player who seeks similar partners according to its patch appearance in its local neighborhood. By averaging out the noise over the similar pixels in the dominant sets, we improve non-local means image denoising to restore the intrinsic structure of the original images, and achieve denoising results competitive with state-of-the-art methods in both visual and quantitative quality.
Impedance cardiography signal denoising using discrete wavelet transform.
Chabchoub, Souhir; Mansouri, Sofienne; Salah, Ridha Ben
2016-09-01
Impedance cardiography (ICG) is a non-invasive technique for diagnosing cardiovascular diseases. During acquisition, the ICG signal is often affected by several kinds of noise which distort the determination of the hemodynamic parameters. As a result, the ICG waveform cannot be recognized correctly and the diagnosis of cardiovascular diseases becomes inaccurate. The aim of this work is to choose the most suitable method for denoising the ICG signal. To this end, different wavelet families are used to denoise the ICG signal. The Haar, Daubechies (db2, db4, db6, and db8), Symlet (sym2, sym4, sym6, sym8) and Coiflet (coif2, coif3, coif4, coif5) wavelet families are tested and evaluated in order to select the most suitable denoising method. The wavelet family with the best performance is compared with two other denoising methods: one based on Savitzky-Golay filtering and the other based on median filtering. Each method is evaluated by means of the signal-to-noise ratio (SNR), the root mean square error (RMSE) and the percent root mean square difference (PRD). The results show that the Daubechies wavelet family (db8) has superior noise-reduction performance in comparison to the other methods.
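The three evaluation metrics named above have standard definitions that can be sketched directly; a minimal NumPy version (the formulas are the usual ones, assumed here since the abstract does not spell them out):

```python
import numpy as np

def snr_db(clean, denoised):
    # signal-to-noise ratio in dB: signal energy over residual energy
    return 10.0 * np.log10(np.sum(clean**2) / np.sum((clean - denoised)**2))

def rmse(clean, denoised):
    # root mean square error of the residual
    return np.sqrt(np.mean((clean - denoised)**2))

def prd(clean, denoised):
    # percent root mean square difference
    return 100.0 * np.sqrt(np.sum((clean - denoised)**2) / np.sum(clean**2))
```

A higher SNR and lower RMSE/PRD indicate a better denoising result, which is the ranking criterion used to compare the wavelet families.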
Towards General Algorithms for Grammatical Inference
NASA Astrophysics Data System (ADS)
Clark, Alexander
Many algorithms for grammatical inference can be viewed as instances of a more general algorithm which maintains a set of primitive elements, which distributionally define sets of strings, and a set of features or tests that constrain various inference rules. Using this general framework, which we cast as a process of logical inference, we re-analyse Angluin's famous lstar algorithm and several recent algorithms for the inference of context-free grammars and multiple context-free grammars. Finally, to illustrate the advantages of this approach, we extend it to the inference of functional transductions from positive data only, and we present a new algorithm for the inference of finite state transducers.
Multitaper Spectral Analysis and Wavelet Denoising Applied to Helioseismic Data
NASA Technical Reports Server (NTRS)
Komm, R. W.; Gu, Y.; Hill, F.; Stark, P. B.; Fodor, I. K.
1999-01-01
Estimates of solar normal mode frequencies from helioseismic observations can be improved by using Multitaper Spectral Analysis (MTSA) to estimate spectra from the time series, then using wavelet denoising of the log spectra. MTSA leads to a power spectrum estimate with reduced variance and better leakage properties than the conventional periodogram. Under the assumption of stationarity and mild regularity conditions, the log multitaper spectrum has a statistical distribution that is approximately Gaussian, so wavelet denoising is asymptotically an optimal method to reduce the noise in the estimated spectra. We find that a single m-upsilon spectrum benefits greatly from MTSA followed by wavelet denoising, and that wavelet denoising by itself can be used to improve m-averaged spectra. We compare estimates using two different 5-taper estimates (Slepian and sine tapers) and the periodogram estimate, for GONG time series at selected angular degrees l. We compare those three spectra with and without wavelet denoising, both visually and in terms of the mode parameters estimated from the pre-processed spectra using the GONG peak-fitting algorithm. The two multitaper estimates give equivalent results. The number of modes fitted well by the GONG algorithm is 20% to 60% larger (depending on l and the temporal frequency) when applied to the multitaper estimates than when applied to the periodogram. The estimated mode parameters (frequency, amplitude and width) are comparable for the three power spectrum estimates, except for modes with very small widths (a few frequency bins), where the multitaper spectra broadened the modes compared with the periodogram. We tested the influence of the number of tapers used and found that narrow modes at low n values are broadened to the extent that they can no longer be fit if the number of tapers is too large. For helioseismic time series of this length and temporal resolution, the optimal number of tapers is less than 10.
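A multitaper estimate averages several eigenspectra, each computed with one taper from an orthonormal family, which is what reduces variance relative to a single periodogram. A minimal sketch using sine tapers (one of the two families mentioned above; the Slepian tapers would require a specialized routine):

```python
import numpy as np

def sine_tapers(n, k):
    # k orthonormal sine tapers of length n (Riedel-Sidorenko family)
    t = np.arange(1, n + 1)
    return [np.sqrt(2.0 / (n + 1)) * np.sin(np.pi * j * t / (n + 1))
            for j in range(1, k + 1)]

def multitaper_spectrum(x, k=5):
    # average the k tapered periodograms (eigenspectra)
    specs = [np.abs(np.fft.rfft(taper * x)) ** 2
             for taper in sine_tapers(len(x), k)]
    return np.mean(specs, axis=0)
```

As the abstract notes, averaging more tapers lowers variance but broadens narrow spectral peaks, which is why the optimal number of tapers is bounded.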
Dictionary-based image denoising for dual energy computed tomography
NASA Astrophysics Data System (ADS)
Mechlem, Korbinian; Allner, Sebastian; Mei, Kai; Pfeiffer, Franz; Noël, Peter B.
2016-03-01
Compared to conventional computed tomography (CT), dual energy CT allows for improved material decomposition by conducting measurements at two distinct energy spectra. Since radiation exposure is a major concern in clinical CT, there is a need for tools to reduce the noise level in images while preserving diagnostic information. One way to achieve this goal is to apply image-based denoising algorithms after an analytical reconstruction has been performed. We have developed a modified dictionary denoising algorithm for dual energy CT aimed at exploiting the high spatial correlation between images obtained at different energy spectra. Both the low- and high-energy images are partitioned into small patches which are subsequently normalized. Combined patches with improved signal-to-noise ratio are formed by a weighted addition of corresponding normalized patches from both images. Assuming that corresponding low- and high-energy image patches are related by a linear transformation, the signal in both patches adds coherently while the noise does not. Conventional dictionary denoising is then performed on the combined patches. Compared to conventional dictionary denoising and bilateral filtering, our algorithm achieved superior performance in terms of qualitative and quantitative image quality measures. We demonstrate, in simulation studies, that this approach can produce 2D histograms of the high- and low-energy reconstructions which are characterized by significantly improved material features and separation. Moreover, in comparison to other approaches that attempt denoising without simultaneously using both energy signals, superior similarity to the ground truth can be achieved with our proposed algorithm.
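The patch-combination step can be sketched as follows: normalize each patch, then take an inverse-variance-weighted sum so the noisier channel contributes less. This is an illustrative sketch under stated assumptions; the weighting rule is a plausible choice, not necessarily the one used by the authors:

```python
import numpy as np

def combine_patches(p_low, p_high, sigma_low, sigma_high):
    # Normalize each patch to zero mean and unit norm; under the assumed
    # linear relation between channels, the normalized signals coincide.
    def normalize(p):
        q = p - p.mean()
        n = np.linalg.norm(q)
        return q / n if n > 0 else q
    a, b = normalize(p_low), normalize(p_high)
    # inverse-variance weights: the noisier channel gets the smaller weight
    w_low = sigma_high**2 / (sigma_low**2 + sigma_high**2)
    return w_low * a + (1.0 - w_low) * b
```

With equal noise levels the weights reduce to a plain average; the aligned signal components add coherently while independent noise partially cancels.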
A fast non-local image denoising algorithm
NASA Astrophysics Data System (ADS)
Dauwe, A.; Goossens, B.; Luong, H. Q.; Philips, W.
2008-02-01
In this paper we propose several improvements to the original non-local means algorithm introduced by Buades et al., which obtains state-of-the-art denoising results. The strength of this algorithm is that it exploits the repetitive character of the image in order to denoise it, unlike conventional denoising algorithms, which typically operate in a local neighbourhood. Due to the enormous number of weight computations, however, the original algorithm has a high computational cost. One improvement in image quality over the original algorithm is to ignore the contributions from dissimilar windows. Even though their individual weights are very small, the estimated pixel value can be severely biased by the many small contributions. This adverse influence of dissimilar windows can be eliminated by setting their corresponding weights to zero. Using a preclassification based on the first three statistical moments, only contributions from similar neighborhoods are computed. To decide whether a window is similar or dissimilar, we derive thresholds for images corrupted with additive white Gaussian noise. Our accelerated approach is further optimized by taking advantage of the symmetry in the weights, which roughly halves the computation time, and by using a lookup table to speed up the weight computations. Compared to the original algorithm, our proposed method produces images with higher PSNR and better visual quality in less computation time. Our proposed method even outperforms state-of-the-art wavelet denoising techniques in both visual quality and PSNR for images containing many repetitive structures, such as textures: the denoised images are much sharper and contain fewer artifacts. The proposed optimizations can also be applied to other image processing tasks which employ the concept of repetitive structures, such as intra-frame super-resolution or detection of digital image forgery.
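The core idea (non-local averaging plus rejection of dissimilar windows) can be sketched compactly. This toy version preclassifies on the patch mean only, a simplification of the three-moment test described above; `h` and `mean_tol` are assumed tuning parameters:

```python
import numpy as np

def nlm(img, f=1, t=3, h=0.1, mean_tol=0.1):
    # f: patch radius, t: search radius, h: filtering parameter
    pad = f + t
    p = np.pad(img, pad, mode='reflect')
    out = np.zeros_like(img)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            ci, cj = i + pad, j + pad
            ref = p[ci - f:ci + f + 1, cj - f:cj + f + 1]
            wsum, acc = 0.0, 0.0
            for di in range(-t, t + 1):
                for dj in range(-t, t + 1):
                    cand = p[ci + di - f:ci + di + f + 1,
                             cj + dj - f:cj + dj + f + 1]
                    # preclassification: reject windows with dissimilar mean,
                    # so their many tiny weights cannot bias the estimate
                    if abs(cand.mean() - ref.mean()) > mean_tol:
                        continue
                    d2 = np.mean((cand - ref) ** 2)
                    w = np.exp(-d2 / h ** 2)
                    wsum += w
                    acc += w * p[ci + di, cj + dj]
            out[i, j] = acc / wsum  # self-weight is always included
    return out
```

The self-comparison always passes the preclassification with weight 1, so the denominator never vanishes; skipped candidates simply cost no weight computation, which is where the speed-up comes from.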
NASA Astrophysics Data System (ADS)
Chen, Jinglong; Wan, Zhiguo; Pan, Jun; Zi, Yanyang; Wang, Yu; Chen, Binqiang; Sun, Hailiang; Yuan, Jing; He, Zhengjia
2016-02-01
Timely fault identification of rolling mill drivetrains is significant for guaranteeing product quality and realizing long-term safe operation, so a condition monitoring system for rolling mill drivetrains has been designed and developed. However, because compound faults and weak fault feature information are usually submerged in heavy background noise, this task still faces challenges. This paper provides a route to fault identification of rolling mill drivetrains by proposing a customized maximal-overlap multiwavelet denoising method. The effectiveness of wavelet denoising mainly relies on the appropriate selection of wavelet basis, transform strategy and threshold rule. First, in order to realize exact matching and accurate detection of fault features, a customized multiwavelet basis function is constructed via a symmetric lifting scheme, and the vibration signal is then processed by the maximal-overlap multiwavelet transform. Next, based on the spatial dependency of multiwavelet transform coefficients, a spatial neighboring-coefficient, data-driven group threshold shrinkage strategy is developed for the denoising process, choosing the optimal group length and threshold by minimizing Stein's Unbiased Risk Estimate (SURE). The effectiveness of the proposed method is first demonstrated through compound fault identification of a reduction gearbox on a rolling mill. It is then applied to weak fault identification of a dedusting fan bearing on a rolling mill, and the results support its feasibility.
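Threshold selection by minimizing SURE, the ingredient named above, has a standard closed form for soft thresholding that can be sketched directly. This is the classic SureShrink-style rule for scalar coefficients, not the authors' group variant:

```python
import numpy as np

def sure_threshold(coeffs, sigma):
    # Pick the soft-threshold t minimizing Stein's Unbiased Risk Estimate:
    # SURE(t) = n*sigma^2 + sum(min(|x_i|, t)^2) - 2*sigma^2 * #{|x_i| <= t}
    n = len(coeffs)
    best_t, best_risk = 0.0, np.inf
    for t in np.sort(np.abs(coeffs)):   # candidate thresholds
        risk = (n * sigma**2
                + np.sum(np.minimum(np.abs(coeffs), t) ** 2)
                - 2.0 * sigma**2 * np.sum(np.abs(coeffs) <= t))
        if risk < best_risk:
            best_t, best_risk = t, risk
    return best_t
```

On a sparse signal (a few large coefficients in Gaussian noise), the minimizer sits near the top of the noise range, well below the signal coefficients, so the signal survives shrinkage.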
Liu, Zhiwen; He, Zhengjia; Guo, Wei; Tang, Zhangchun
2016-03-01
In order to extract fault features of large-scale power equipment from strong background noise, a hybrid fault diagnosis method based on second generation wavelet de-noising (SGWD) and local mean decomposition (LMD) is proposed in this paper. In this method, a de-noising algorithm based on the second generation wavelet transform (SGWT) using neighboring coefficients is employed as a pretreatment to remove noise from rotating machinery vibration signals, by virtue of its good effect in enhancing the signal-to-noise ratio (SNR). Then, the LMD method is used to decompose the de-noised signals into several product functions (PFs). The PF corresponding to the fault feature signal is selected according to a correlation coefficient criterion. Finally, the frequency spectrum is analyzed by applying the FFT to the selected PF. The proposed method is applied to analyze vibration signals collected from an experimental gearbox and a real locomotive rolling bearing. The results demonstrate that the proposed method performs better than the standard LMD method, with higher SNR and faster convergence.
ERIC Educational Resources Information Center
Bodner, Kimberly E.; Engelhardt, Christopher R.; Minshew, Nancy J.; Williams, Diane L.
2015-01-01
Studies investigating inferential reasoning in autism spectrum disorder (ASD) have focused on the ability to make socially-related inferences or inferences more generally. Important variables for intervention planning such as whether inferences depend on physical experiences or the nature of social information have received less consideration. A…
ERIC Educational Resources Information Center
Chasseigne, Gerard; Giraudeau, Caroline; Lafon, Peggy; Mullet, Etienne
2011-01-01
The study examined the knowledge of the functional relations between potential difference, magnitude of current, and resistance among seventh graders, ninth graders, 11th graders (in technical schools), and college students. It also tested the efficiency of a learning device named "functional learning" derived from cognitive psychology on the…
Patch-wise denoising of phase fringe patterns based on matrix enhancement
NASA Astrophysics Data System (ADS)
Kulkarni, Rishikesh; Rastogi, Pramod
2016-12-01
We propose a new approach for the denoising of a phase fringe pattern recorded in an optical interferometric setup. The phase fringe pattern, which is generally corrupted by a high amount of speckle noise, is first converted into an exponential phase field. This phase field is divided into a number of overlapping patches. Owing to the small size of each patch, the interference phase within it is assumed to have a simple structure. Accordingly, the singular value decomposition (SVD) of the patch allows us to separate the signal and noise components effectively. The patch is reconstructed with only the signal component. In order to further improve the robustness of the proposed method, an enhanced data matrix is generated from the patch and the SVD of this enhanced matrix is computed. The matrix enhancement increases the dimension of the noise subspace, which thus accommodates more of the noise component. Reassigning the filtered pixels of the preceding patch in the current patch improves the noise-filtering accuracy. The fringe denoising capability as a function of noise level and patch size is studied. Simulation and experimental results are provided to demonstrate the practical applicability of the proposed method.
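The signal/noise separation at the heart of this approach is a truncated SVD: keep the dominant singular components (the signal subspace) and discard the rest (the noise subspace). A minimal sketch without the matrix enhancement or patch-reassignment refinements, with `rank` an assumed input:

```python
import numpy as np

def svd_patch_denoise(patch, rank=1):
    # keep only the dominant signal subspace; discard the noise subspace
    u, s, vt = np.linalg.svd(patch, full_matrices=False)
    s[rank:] = 0.0
    return (u * s) @ vt   # rank-truncated reconstruction
```

For a patch whose underlying phase structure is simple (here, rank 1), the truncation removes most of the noise energy, which lives in the discarded singular components.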
A de-noising algorithm to improve SNR of segmented gamma scanner for spectrum analysis
NASA Astrophysics Data System (ADS)
Li, Huailiang; Tuo, Xianguo; Shi, Rui; Zhang, Jinzhao; Henderson, Mark Julian; Courtois, Jérémie; Yan, Minhao
2016-05-01
An improved-threshold shift-invariant wavelet transform de-noising algorithm for high-resolution gamma-ray spectroscopy is proposed to optimize the threshold function of the wavelet transform and reduce pseudo-Gibbs artificial fluctuations in the signal. The algorithm was applied to a segmented gamma scanning system with large samples, in which high continuum levels caused by Compton scattering are routinely encountered. De-noising of gamma-ray spectra measured by the segmented gamma scanning system was evaluated for the improved, shift-invariant and traditional wavelet transform algorithms. The improved wavelet transform method yielded significantly enhanced performance in terms of the figure of merit, the root mean square error, the peak area, and the sample attenuation correction in segmented gamma scanning assays. Spectrum analysis also showed that the gamma energy spectrum can be viewed as a superposition of a low-frequency signal and high-frequency noise, and that the smoothed spectrum is suitable for straightforward automated quantitative analysis.
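Shift-invariant wavelet denoising is commonly realized by cycle spinning: denoise every circular shift of the signal, unshift, and average, which suppresses the pseudo-Gibbs artifacts that a single fixed decomposition produces near sharp peaks. A minimal one-level Haar sketch (a generic illustration, not the paper's improved threshold function; even-length input assumed):

```python
import numpy as np

def haar_denoise(x, thresh):
    # one-level Haar transform, soft-threshold the details, invert
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft threshold
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2.0)
    y[1::2] = (a - d) / np.sqrt(2.0)
    return y

def ti_denoise(x, thresh):
    # shift-invariant denoising via cycle spinning over circular shifts
    n = len(x)
    acc = np.zeros_like(x)
    for s in range(n):
        acc += np.roll(haar_denoise(np.roll(x, s), thresh), -s)
    return acc / n
```

Averaging over shifts removes the dependence of the result on where the transform's dyadic grid happens to fall relative to spectral peaks.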
Despeckling SRTM And Other Topographic Data With A Denoising Algorithm
NASA Astrophysics Data System (ADS)
Stevenson, J. A.; Sun, X.; Mitchell, N. C.
2012-12-01
Noise in topographic data obscures features and increases error in geomorphic products calculated from DEMs. DEMs produced by radar remote sensing, such as SRTM, are frequently used for geomorphological studies, but they often contain speckle noise which may significantly lower the quality of geomorphometric analyses. We introduce here an algorithm that denoises three-dimensional objects while preserving sharp features. It is free to download and simple to use. In this study the algorithm is applied to topographic data (synthetic landscapes, SRTM, TOPSAR) and the results are compared against those of a mean filter, using LiDAR data as ground truth for the natural datasets. The level of denoising is controlled by two parameters: the threshold (T), which controls the sharpness of the features to be preserved, and the number of iterations (n), which controls how much the data are changed. The optimum settings depend on the nature of the topography and of the noise to be removed, but are typically in the range T = 0.87-0.99 and n = 1-10. If the threshold is too high, noise is preserved. A lower threshold setting is used where noise is spatially uncorrelated (e.g. TOPSAR), whereas in some other datasets (e.g. SRTM), where filtering of the data during processing has introduced spatial correlation to the noise, higher thresholds can be used. Compared to data filtered to an equivalent level with a mean filter, data smoothed by the denoising algorithm of Sun et al. [Sun, X., Rosin, P.L., Martin, R.R., Langbein, F.C., 2007. Fast and effective feature-preserving mesh denoising. IEEE Transactions on Visualisation and Computer Graphics 13, 925-938] are closer to the original data and to the ground truth. Changes to the data are smaller and less correlated to topographic features. Furthermore, the feature-preserving nature of the algorithm allows significant smoothing to be applied to flat areas of topography while limiting the alterations made in mountainous regions, with clear benefits
Raguideau, Sébastien; Plancade, Sandra; Pons, Nicolas; Leclerc, Marion
2016-01-01
Whole Genome Shotgun (WGS) metagenomics is increasingly used to study the structure and functions of complex microbial ecosystems, from both the taxonomic and functional points of view. Gene inventories of otherwise uncultured microbial communities make the direct functional profiling of microbial communities possible. The concept of community aggregated trait has been adapted from environmental and plant functional ecology to the framework of microbial ecology. Community aggregated traits are quantified from WGS data by computing the abundance of relevant marker genes. They can be used to study key processes at the ecosystem level and correlate environmental factors and ecosystem functions. In this paper we propose a novel model based approach to infer combinations of aggregated traits characterizing specific ecosystemic metabolic processes. We formulate a model of these Combined Aggregated Functional Traits (CAFTs) accounting for a hierarchical structure of genes, which are associated on microbial genomes, further linked at the ecosystem level by complex co-occurrences or interactions. The model is completed with constraints specifically designed to exploit available genomic information, in order to favor biologically relevant CAFTs. The structure of the CAFTs, as well as their intensities in the ecosystem, is obtained by solving a constrained Non-negative Matrix Factorization (NMF) problem. We developed a multicriteria selection procedure for the number of CAFTs. We illustrated our method on the modelling of ecosystemic functional traits of fiber degradation by the human gut microbiota. We used 1408 samples of gene abundances from several high-throughput sequencing projects and found that only four CAFTs were needed to represent the fiber degradation potential. This data reduction highlighted biologically consistent functional patterns while providing a high quality preservation of the original data. Our method is generic and can be applied to other metabolic processes in
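The unconstrained core of the NMF problem mentioned above can be sketched with the classical multiplicative updates; this is a generic Lee-Seung sketch in NumPy, without the genomic constraints or the multicriteria model-selection step the authors add:

```python
import numpy as np

def nmf(V, k, iters=500, eps=1e-9):
    # Multiplicative updates for V ~= W @ H with all entries nonnegative.
    # V: (features x samples), k: number of latent components (e.g. CAFTs).
    rng = np.random.default_rng(0)
    m, n = V.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update component intensities
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update component structure
    return W, H
```

Each update is guaranteed not to increase the Frobenius reconstruction error, so on exactly low-rank nonnegative data the factorization converges to a near-perfect fit.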
Lefkimmiatis, Stamatios; Maragos, Petros; Papandreou, George
2009-08-01
We present an improved statistical model for analyzing Poisson processes, with applications to photon-limited imaging. We build on previous work, adopting a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities (rates) in adjacent scales are modeled as mixtures of conjugate parametric distributions. Our main contributions include: 1) a rigorous and robust regularized expectation-maximization (EM) algorithm for maximum-likelihood estimation of the rate-ratio density parameters directly from the noisy observed Poisson data (counts); 2) extension of the method to work under a multiscale hidden Markov tree model (HMT) which couples the mixture label assignments in consecutive scales, thus modeling interscale coefficient dependencies in the vicinity of image edges; 3) exploration of a 2-D recursive quad-tree image representation, involving Dirichlet-mixture rate-ratio densities, instead of the conventional separable binary-tree image representation involving beta-mixture rate-ratio densities; and 4) a novel multiscale image representation, which we term Poisson-Haar decomposition, that better models the image edge structure, thus yielding improved performance. Experimental results on standard images with artificially simulated Poisson noise and on real photon-limited images demonstrate the effectiveness of the proposed techniques.
2010-01-01
Background Comparative genomics methods such as phylogenetic profiling can mine powerful inferences from inherently noisy biological data sets. We introduce Sites Inferred by Metabolic Background Assertion Labeling (SIMBAL), a method that applies the Partial Phylogenetic Profiling (PPP) approach locally within a protein sequence to discover short sequence signatures associated with functional sites. The approach is based on the basic scoring mechanism employed by PPP, namely the use of binomial distribution statistics to optimize sequence similarity cutoffs during searches of partitioned training sets. Results Here we illustrate and validate the ability of the SIMBAL method to find functionally relevant short sequence signatures by application to two well-characterized protein families. In the first example, we partitioned a family of ABC permeases using a metabolic background property (urea utilization). Thus, the TRUE set for this family comprised members whose genome of origin encoded a urea utilization system. By moving a sliding window across the sequence of a permease, and searching each subsequence in turn against the full set of partitioned proteins, the method found which local sequence signatures best correlated with the urea utilization trait. Mapping of SIMBAL "hot spots" onto crystal structures of homologous permeases reveals that the significant sites are gating determinants on the cytosolic face rather than, say, docking sites for the substrate-binding protein on the extracellular face. In the second example, we partitioned a protein methyltransferase family using gene proximity as a criterion. In this case, the TRUE set comprised those methyltransferases encoded near the gene for the substrate RF-1. SIMBAL identifies sequence regions that map onto the substrate-binding interface while ignoring regions involved in the methyltransferase reaction mechanism in general. Neither method for training set construction requires any prior experimental
ERIC Educational Resources Information Center
Cimpian, Andrei; Cadena, Cristina
2010-01-01
Artifacts pose a potential learning problem for children because the mapping between their features and their functions is often not transparent. In solving this problem, children are likely to rely on a number of information sources (e.g., others' actions, affordances). We argue that children's sensitivity to nuances in the language used to…
Automatic parameter prediction for image denoising algorithms using perceptual quality features
NASA Astrophysics Data System (ADS)
Mittal, Anish; Moorthy, Anush K.; Bovik, Alan C.
2012-03-01
A natural scene statistics (NSS) based blind image denoising approach is proposed, in which denoising is performed without knowledge of the noise variance present in the image. We show how such parameter estimation can be used to perform blind denoising by combining blind parameter estimation with a state-of-the-art denoising algorithm. Our experiments show that for all simulated noise variances over varied image content, our approach is almost always statistically superior to the reference BM3D implementation in terms of perceived visual quality at the 95% confidence level.
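A common baseline for blind noise-variance estimation (distinct from the perceptual-feature approach above, but useful for comparison) is the robust median-absolute-deviation estimate computed from finest-scale wavelet detail coefficients; a minimal Haar-difference sketch:

```python
import numpy as np

def estimate_noise_sigma(img):
    # Donoho-style robust estimate: finest-scale horizontal Haar details
    # are dominated by noise, so MAD / 0.6745 approximates its sigma.
    d = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2.0)
    return np.median(np.abs(d - np.median(d))) / 0.6745
```

The median-based statistic is insensitive to the few large detail coefficients produced by true edges, which is what makes the estimate usable on natural images rather than pure noise fields.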
De-noising of digital image correlation based on stationary wavelet transform
NASA Astrophysics Data System (ADS)
Guo, Xiang; Li, Yulong; Suo, Tao; Liang, Jin
2017-03-01
In this paper, a stationary wavelet transform (SWT) based method is proposed to de-noise digital images affected by light noise, and the SWT de-noising algorithm is presented after an analysis of the light noise. Using this de-noising algorithm, the method was demonstrated to be capable of providing accurate DIC measurements in a light-noise environment. Verification, comparative and realistic experiments were conducted using this method. The results indicate that the de-noising method can be applied to full-field strain measurement under light interference with high accuracy and stability.
Energy-Based Wavelet De-Noising of Hydrologic Time Series
Sang, Yan-Fang; Liu, Changming; Wang, Zhonggen; Wen, Jun; Shang, Lunyu
2014-01-01
De-noising is a substantial issue in hydrologic time series analysis, but it is a difficult task due to the shortcomings of existing methods. In this paper an energy-based wavelet de-noising method is proposed. It removes noise by comparing the energy distribution of the series with a background energy distribution established from Monte-Carlo tests. Differing from the wavelet threshold de-noising (WTD) method, which is based on thresholding wavelet coefficients, the proposed method is based on the energy distribution of the series. It can distinguish noise from deterministic components in a series, and the uncertainty of the de-noising result can be quantitatively estimated using a proper confidence interval, which WTD cannot do. Analysis of both synthetic and observed series verified the comparable power of the proposed method and WTD, but the de-noising process of the former is more easily operable. The results also indicate the influence of three key factors (wavelet choice, decomposition level choice and noise content) on wavelet de-noising. The wavelet should be carefully chosen when using the proposed method. The suitable decomposition level for wavelet de-noising should correspond to the series' deterministic sub-signal with the smallest temporal scale. If too much noise is included in a series, an accurate de-noising result cannot be obtained by the proposed method or WTD, but such a series would show purely random rather than autocorrelated character, so de-noising is no longer needed. PMID:25360533
Improved deadzone modeling for bivariate wavelet shrinkage-based image denoising
NASA Astrophysics Data System (ADS)
DelMarco, Stephen
2016-05-01
Modern image processing performed on board low Size, Weight, and Power (SWaP) platforms must provide high performance while simultaneously reducing memory footprint, power consumption, and computational complexity. Image preprocessing, along with downstream image exploitation algorithms such as object detection and recognition, and georegistration, places a heavy burden on power and processing resources. Image preprocessing often includes image denoising to improve data quality for downstream exploitation algorithms. High-performance image denoising is typically performed in the wavelet domain, where noise generally spreads and the wavelet transform compactly captures high information-bearing image characteristics. In this paper, we improve the modeling fidelity of a previously developed, computationally efficient wavelet-based denoising algorithm. The modeling improvements enhance denoising performance without significantly increasing computational cost, thus making the approach suitable for low-SWaP platforms. Specifically, this paper presents modeling improvements to the Sendur-Selesnick model (SSM), which implements a bivariate wavelet shrinkage denoising algorithm that exploits the interscale dependency between wavelet coefficients. We formulate optimization problems for the parameters controlling deadzone size, which leads to improved denoising performance. Two formulations are provided: one with a simple, closed-form solution which we use for numerical result generation, and a second integral-equation formulation involving elliptic integrals. We generate image denoising performance results over different image sets drawn from public domain imagery, and investigate the effect of wavelet filter tap length on denoising performance. We demonstrate denoising performance improvement when using the enhanced modeling over the performance obtained with the baseline SSM.
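The baseline SSM shrinkage rule referenced above has a simple closed form; a minimal sketch, in which `noise_var` and `signal_sigma` are assumed to be externally estimated quantities, is:

```python
import numpy as np

def bivariate_shrink(child, parent, noise_var, signal_sigma):
    """Sendur-Selesnick bivariate shrinkage: attenuate a child wavelet
    coefficient using the joint magnitude with its parent at the next
    coarser scale; magnitudes inside the deadzone are set exactly to zero."""
    magnitude = np.sqrt(child ** 2 + parent ** 2)
    threshold = np.sqrt(3.0) * noise_var / signal_sigma
    gain = np.maximum(magnitude - threshold, 0.0) / np.maximum(magnitude, 1e-12)
    return gain * child

# A strong coefficient backed by a strong parent is kept almost intact;
# a weak one falls into the deadzone and is zeroed.
child = np.array([10.0, 0.1])
parent = np.array([8.0, 0.05])
out = bivariate_shrink(child, parent, noise_var=1.0, signal_sigma=2.0)
```

Coefficient pairs whose joint magnitude falls below the threshold land in the deadzone and are zeroed; optimizing the size of that deadzone is precisely the improvement the paper pursues.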
Denoising in Contrast-Enhanced X-ray Images
NASA Astrophysics Data System (ADS)
Jeon, Gwanggil
2016-12-01
In this paper, we propose a denoising and contrast-enhancement method for medical images. The main purpose of medical image improvement is to transform lower-contrast data into higher contrast and to reduce high noise levels. To meet this goal, we propose a noise-level estimation method, whereby the noise level is estimated by computing the standard deviation and variance in a local block. The obtained noise level is then used as an input parameter for the block-matching and 3D filtering (BM3D) algorithm, and the denoising process is then performed. The noise-level estimation step is important because the BM3D algorithm does not perform well without correct noise-level information. Simulation results confirm that the proposed method outperforms other benchmarks in terms of both objective and visual performance.
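A minimal version of block-based noise-level estimation, assuming (as an illustrative choice, not the paper's exact statistic) that the median of local block standard deviations approximates the noise sigma because flat blocks dominate the median, might look like:

```python
import numpy as np

def estimate_noise_sigma(image, block=8):
    """Estimate the noise standard deviation as the median of local standard
    deviations over non-overlapping blocks; flat blocks dominate the median,
    so anatomy/texture biases the estimate less than a global std would."""
    h, w = image.shape
    stds = [np.std(image[i:i + block, j:j + block])
            for i in range(0, h - block + 1, block)
            for j in range(0, w - block + 1, block)]
    return float(np.median(stds))

# Piecewise-constant "image" plus Gaussian noise of known sigma = 0.05
rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[:, 32:] = 1.0
noisy = clean + rng.normal(0.0, 0.05, clean.shape)
sigma_hat = estimate_noise_sigma(noisy)
```

The recovered sigma is then what would be handed to BM3D as its noise parameter.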
Examining Alternatives to Wavelet Denoising for Astronomical Source Finding
NASA Astrophysics Data System (ADS)
Jurek, R.; Brown, S.
2012-08-01
The Square Kilometre Array and its pathfinders ASKAP and MeerKAT will produce prodigious amounts of data that necessitate automated source finding. The performance of automated source finders can be improved by pre-processing a dataset. In preparation for the WALLABY and DINGO surveys, we have used a test HI datacube constructed from actual Westerbork Telescope noise and WHISP HI galaxies to measure the real-world improvement that linear smoothing, the Duchamp source finder's wavelet denoising, iterative median smoothing and mathematical morphology subtraction provide for intensity-threshold source finding of spectral line datasets. To compare these pre-processing methods we have generated completeness-reliability performance curves for each method and a range of input parameters. We find that iterative median smoothing produces the best source finding results for ASKAP HI spectral line observations, but wavelet denoising is a safer pre-processing technique. In this paper we also present our implementations of iterative median smoothing and mathematical morphology subtraction.
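Iterative median smoothing along the spectral axis can be sketched in a 1-D toy form; the window width and iteration count here are illustrative assumptions, not the authors' tuned values:

```python
import numpy as np

def median_smooth(spectrum, width=3):
    """One pass of running-median smoothing along a 1-D spectrum."""
    pad = width // 2
    padded = np.pad(spectrum, pad, mode='edge')
    windows = np.lib.stride_tricks.sliding_window_view(padded, width)
    return np.median(windows, axis=1)

def iterative_median_smooth(spectrum, width=3, n_iter=5):
    """Repeated running medians: impulsive noise spikes are suppressed while
    broad (galaxy-like) line profiles survive essentially unchanged."""
    out = np.asarray(spectrum, dtype=float)
    for _ in range(n_iter):
        out = median_smooth(out, width)
    return out

# A narrow noise spike versus a broad plateau-like "source"
spec = np.zeros(100)
spec[20:30] = 5.0    # broad feature
spec[50] = 10.0      # single-channel spike
sm = iterative_median_smooth(spec)
```

Single-channel spikes are rejected while the broad profile passes through, which is what makes the technique attractive ahead of intensity-threshold source finding.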
Diffusion Weighted Image Denoising Using Overcomplete Local PCA
Manjón, José V.; Coupé, Pierrick; Concha, Luis; Buades, Antonio; Collins, D. Louis; Robles, Montserrat
2013-01-01
Diffusion Weighted Images (DWI) normally show a low Signal-to-Noise Ratio (SNR) due to the presence of noise from the measurement process, which complicates and biases the estimation of quantitative diffusion parameters. In this paper, a new denoising methodology is proposed that takes into consideration the multicomponent nature of multi-directional DWI datasets such as those employed in diffusion imaging. This new filter reduces random noise in multicomponent DWI by locally shrinking less significant Principal Components using an overcomplete approach. The proposed method is compared with state-of-the-art methods using synthetic and real clinical MR images, showing improved performance in terms of denoising quality and estimation of diffusion parameters. PMID:24019889
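The core operation, shrinking the less significant principal components of a cluster of similar patches, can be sketched as follows; the eigenvalue threshold factor `tau` is an assumed illustrative choice, not the filter's actual parameterization:

```python
import numpy as np

def pca_shrink(patches, noise_sigma, tau=2.3):
    """Denoise a cluster of similar patches (one patch per row) by zeroing
    principal components whose eigenvalues fall below a noise-derived threshold."""
    mean = patches.mean(axis=0)
    u, s, vt = np.linalg.svd(patches - mean, full_matrices=False)
    eigvals = s ** 2 / max(patches.shape[0] - 1, 1)
    s = np.where(eigvals > (tau * noise_sigma) ** 2, s, 0.0)  # keep or kill each PC
    return (u * s) @ vt + mean

# Rank-1 "signal" patches plus noise: only the strong component survives.
rng = np.random.default_rng(0)
basis = np.sin(np.linspace(0.0, np.pi, 16))
clean = rng.normal(size=(40, 1)) @ basis[None, :]
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
den = pca_shrink(noisy, noise_sigma=0.1)
```

The overcomplete aspect of the published filter comes from applying this to overlapping patch neighborhoods and averaging the multiple estimates per voxel.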
A simple filter circuit for denoising biomechanical impact signals.
Subramaniam, Suba R; Georgakis, Apostolos
2009-01-01
We present a simple scheme for denoising non-stationary biomechanical signals with the aim of accurately estimating their second derivative (acceleration). The method is based on filtering in fractional Fourier domains using well-known low-pass filters in a way that amounts to a time-varying cut-off threshold. The resulting algorithm is linear and its design is facilitated by the relationship between the fractional Fourier transform and joint time-frequency representations. The implemented filter circuit employs only three low-order filters while its efficiency is further supported by the low computational complexity of the fractional Fourier transform. The results demonstrate that the proposed method can denoise the signals effectively and is more robust against noise as compared to conventional low-pass filters.
Automatic denoising of single-trial evoked potentials.
Ahmadi, Maryam; Quian Quiroga, Rodrigo
2013-02-01
We present an automatic denoising method based on the wavelet transform to obtain single trial evoked potentials. The method is based on the inter- and intra-scale variability of the wavelet coefficients and their deviations from baseline values. The performance of the method is tested with simulated event related potentials (ERPs) and with real visual and auditory ERPs. For the simulated data, the presented method gives a significant improvement in the observation of single trial ERPs, as well as in the estimation of their amplitudes and latencies, both in comparison with a standard denoising technique (Donoho's thresholding) and with the noisy single trials. For the real data, the proposed method filters out much of the spontaneous EEG activity, thus helping the identification of single trial visual and auditory ERPs. The proposed method provides a simple, automatic and fast tool that allows the study of single trial responses and their correlations with behavior.
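The baseline the authors compare against, Donoho's universal soft thresholding, can be sketched with a single-level Haar transform; the one-level decomposition and the MAD-based sigma estimate are common conventions assumed here for brevity:

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def donoho_denoise_haar(signal):
    """One-level Haar decomposition; soft-threshold the detail band with
    Donoho's universal threshold sigma*sqrt(2*log(N)), sigma estimated by MAD."""
    even, odd = signal[0::2], signal[1::2]
    approx = (even + odd) / np.sqrt(2.0)
    detail = (even - odd) / np.sqrt(2.0)
    sigma = np.median(np.abs(detail)) / 0.6745          # robust noise estimate
    detail = soft_threshold(detail, sigma * np.sqrt(2.0 * np.log(signal.size)))
    out = np.empty_like(signal, dtype=float)
    out[0::2] = (approx + detail) / np.sqrt(2.0)        # inverse Haar step
    out[1::2] = (approx - detail) / np.sqrt(2.0)
    return out

# A simulated ERP-like bump in heavy noise
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 512)
clean = np.exp(-((t - 0.3) ** 2) / 0.002)
noisy = clean + 0.2 * rng.standard_normal(512)
den = donoho_denoise_haar(noisy)
```

The proposed method improves on this baseline by additionally exploiting inter- and intra-scale coefficient variability rather than thresholding each coefficient independently.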
Fast non local means denoising for 3D MR images.
Coupé, Pierrick; Yger, Pierre; Barillot, Christian
2006-01-01
One critical issue in the context of image restoration is the problem of noise removal while keeping the integrity of relevant image information. Denoising is a crucial step to increase image conspicuity and to improve the performance of all the processing steps needed for quantitative imaging analysis. The method proposed in this paper is based on an optimized version of the Non Local (NL) Means algorithm. This approach uses the natural redundancy of information in an image to remove the noise. Tests were carried out on synthetic datasets and on real 3T MR images. The results show that the NL-means approach outperforms other classical denoising methods, such as the Anisotropic Diffusion Filter and Total Variation.
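The core of NL-means, replacing each sample with a patch-similarity-weighted average, can be illustrated in 1-D; the patch size, search radius, and smoothing parameter `h` are illustrative assumptions, and the paper's optimized 3-D version adds voxel preselection and block processing on top of this idea:

```python
import numpy as np

def nl_means_1d(signal, patch=3, search=10, h=0.3):
    """Toy 1-D non-local means: each sample becomes a weighted average of
    nearby samples, weighted by the similarity of their surrounding patches."""
    n = len(signal)
    pad = patch // 2
    padded = np.pad(signal, pad, mode='reflect')
    patches = np.lib.stride_tricks.sliding_window_view(padded, patch)  # (n, patch)
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - search), min(n, i + search + 1)
        d2 = np.mean((patches[lo:hi] - patches[i]) ** 2, axis=1)  # patch distances
        w = np.exp(-d2 / (h * h))          # similar patches get large weights
        out[i] = np.sum(w * signal[lo:hi]) / np.sum(w)
    return out

# A noisy step edge: noise is averaged away, the edge survives because
# patches on opposite sides of it are dissimilar.
rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(50), np.ones(50)])
noisy = clean + 0.1 * rng.standard_normal(100)
den = nl_means_1d(noisy)
```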
NASA Astrophysics Data System (ADS)
Hammerle, Albin; Wohlfahrt, Georg; Schoups, Gerrit
2014-05-01
Advances in automated data collection systems have enabled ecologists to collect enormous amounts of varied data. Data assimilation (or data-model synthesis) is one way to make sense of this mass of data. Given a process model designed to learn about ecological processes, these data can be integrated within a statistical framework for data interpretation and extrapolation. Results of such a data assimilation framework clearly depend on the information content of the observed data, on the associated uncertainties (data uncertainties, model structural uncertainties and parameter uncertainties) and on the underlying assumptions. Parameter estimation is usually done by minimizing a simple least squares objective function with respect to the model parameters, presuming Gaussian, independent and homoscedastic errors (the formal approach). Recent contributions to the (ecological) literature, however, have questioned the validity of this approach when confronted with significant errors and uncertainty in the model forcing (inputs) and model structure. Very often residual errors are non-Gaussian, correlated and heteroscedastic. Thus these error sources have to be considered and residual errors have to be described in a statistically correct fashion in order to draw statistically sound conclusions about parameter and model-predictive uncertainties. We examined the effects of a generalized likelihood (GL) function on the parameter estimation of a carbon balance model. Compared with the formal approach, the GL function allows for correlation, non-stationarity and non-normality of model residuals. Carbon model parameters were constrained using three different datasets, each of them modelled by its own GL function. As shown in the literature, the use of different datasets for parameter estimation reduces the uncertainty in model parameters and model predictions, and allows for better quantification of, and more insight into, model processes.
A comparison of Monte Carlo dose calculation denoising techniques
NASA Astrophysics Data System (ADS)
El Naqa, I.; Kawrakow, I.; Fippel, M.; Siebers, J. V.; Lindsay, P. E.; Wickerhauser, M. V.; Vicic, M.; Zakarian, K.; Kauffmann, N.; Deasy, J. O.
2005-03-01
Recent studies have demonstrated that Monte Carlo (MC) denoising techniques can reduce MC radiotherapy dose computation time significantly by preferentially eliminating statistical fluctuations ('noise') through smoothing. In this study, we compare new and previously published approaches to MC denoising, including 3D wavelet threshold denoising with sub-band adaptive thresholding, content adaptive mean-median-hybrid (CAMH) filtering, locally adaptive Savitzky-Golay curve-fitting (LASG), anisotropic diffusion (AD) and an iterative reduction of noise (IRON) method formulated as an optimization problem. Several challenging phantom and computed-tomography-based MC dose distributions with varying levels of noise formed the test set. Denoising effectiveness was measured in three ways: by improvements in the mean-square-error (MSE) with respect to a reference (low noise) dose distribution; by the maximum difference from the reference distribution; and by the 'Van Dyk' pass/fail criteria of either adequate agreement with the reference image in low-gradient regions (within 2% in our case) or, in high-gradient regions, a distance-to-agreement-within-2% of less than 2 mm. Results varied significantly based on the dose test case: greater reductions in MSE were observed for the relatively smoother phantom-based dose distribution (up to a factor of 16 for the LASG algorithm); smaller reductions were seen for an intensity modulated radiation therapy (IMRT) head and neck case (typically, factors of 2-4). Although several algorithms reduced statistical noise for all test geometries, the LASG method had the best MSE reduction for three of the four test geometries, and performed the best for the Van Dyk criteria. However, the wavelet thresholding method performed better for the head and neck IMRT geometry and also decreased the maximum error more effectively than LASG. In almost all cases, the evaluated methods provided acceleration of MC results towards statistically more accurate
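Plain (non-adaptive) Savitzky-Golay smoothing, the building block behind the LASG method above, fits a low-order polynomial in each sliding window by least squares and keeps the fitted center value; LASG additionally adapts the window to the local dose gradient, which this sketch does not attempt:

```python
import numpy as np

def savitzky_golay(y, window=11, order=3):
    """Savitzky-Golay smoothing: least-squares fit of a degree-`order`
    polynomial in each window, keeping the fitted value at the window center."""
    half = window // 2
    x = np.arange(-half, half + 1)
    A = np.vander(x, order + 1, increasing=True)   # columns 1, x, x^2, ...
    proj = np.linalg.pinv(A)[0]                    # maps a window to the fit at x=0
    ypad = np.pad(np.asarray(y, dtype=float), half, mode='edge')
    windows = np.lib.stride_tricks.sliding_window_view(ypad, window)
    return windows @ proj

# Polynomials up to the chosen order are reproduced exactly (away from edges).
t = np.arange(50, dtype=float)
y = 0.01 * t ** 2
s = savitzky_golay(y, window=11, order=3)
```

Because the fit is a fixed linear projection, smooth dose gradients pass through undistorted while high-frequency statistical noise is averaged down.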
[Quantitative evaluation of soil hyperspectra denoising with different filters].
Huang, Ming-Xiang; Wang, Ke; Shi, Zhou; Gong, Jian-Hua; Li, Hong-Yi; Chen, Jie-Liang
2009-03-01
The noise distribution of soil hyperspectra measured by an ASD FieldSpec Pro FR was described, and the quantitative evaluation of spectral denoising with six filters was then compared. From the interpretation of the soil hyperspectra and the continuum-removed, first-order differential and high-frequency curves, the UV/VNIR region (350-1050 nm) exhibits hardly any noise except for the first 40 nm starting at 350 nm. However, the SWIR region (1000-2500 nm) shows a different noise distribution. In particular, the latter half of SWIR 2 (1800-2500 nm) showed more noise, and the intersection spectra of the three spectrometers have more noise than the neighboring spectra. Six filters were chosen for spectral denoising. The smoothing index (SI), horizontal feature reservation index (HFRI) and vertical feature reservation index (VFRI) were designed for evaluating the denoising performance of these filters. The comparison of their indexes shows that the WD and MA filters are the optimal choice for filtering the noise, in terms of balancing the contradiction between smoothing ability and feature reservation ability. Furthermore, the first-order differential data of 66 denoised soil spectra produced by the 6 filters were respectively used as the input of the same PLSR model to predict the sand content. The different prediction accuracies caused by the different filters show that, compared to the feature reservation ability, the filter's smoothing ability is the principal factor influencing the accuracy. The study can benefit spectral preprocessing and analysis, and also provides a scientific foundation for related spectroscopy applications.
Robust L1 PCA and application in image denoising
NASA Astrophysics Data System (ADS)
Gao, Junbin; Kwan, Paul W. H.; Guo, Yi
2007-11-01
The so-called robust L1 PCA was introduced in our recent work [1] based on the L1 noise assumption. Due to the heavy-tail characteristics of the L1 distribution, the proposed model has proved much more robust against data outliers. In this paper, we further demonstrate how the learned robust L1 PCA model can be used to denoise image data.
Undecimated Wavelet Transforms for Image De-noising
Gyaourova, A; Kamath, C; Fodor, I K
2002-11-19
A few different approaches exist for computing the undecimated wavelet transform. In this work we construct three undecimated schemes and evaluate their performance for image noise reduction. We use standard wavelet-based de-noising techniques and compare the performance of our algorithms with the original undecimated wavelet transform, as well as with the decimated wavelet transform. Our experiments show that our algorithms achieve a better noise removal/blurring ratio.
Adaptive nonlocal means filtering based on local noise level for CT denoising
Li, Zhoubo; Trzasko, Joshua D.; Lake, David S.; Blezek, Daniel J.; Manduca, Armando; Yu, Lifeng; Fletcher, Joel G.; McCollough, Cynthia H.
2014-01-15
Purpose: To develop and evaluate an image-domain noise reduction method based on a modified nonlocal means (NLM) algorithm that is adaptive to local noise level of CT images and to implement this method in a time frame consistent with clinical workflow. Methods: A computationally efficient technique for local noise estimation directly from CT images was developed. A forward projection, based on a 2D fan-beam approximation, was used to generate the projection data, with a noise model incorporating the effects of the bowtie filter and automatic exposure control. The noise propagation from projection data to images was analytically derived. The analytical noise map was validated using repeated scans of a phantom. A 3D NLM denoising algorithm was modified to adapt its denoising strength locally based on this noise map. The performance of this adaptive NLM filter was evaluated in phantom studies in terms of in-plane and cross-plane high-contrast spatial resolution, noise power spectrum (NPS), subjective low-contrast spatial resolution using the American College of Radiology (ACR) accreditation phantom, and objective low-contrast spatial resolution using a channelized Hotelling model observer (CHO). Graphical processing units (GPU) implementation of this noise map calculation and the adaptive NLM filtering were developed to meet demands of clinical workflow. Adaptive NLM was piloted on lower dose scans in clinical practice. Results: The local noise level estimation matches the noise distribution determined from multiple repetitive scans of a phantom, demonstrated by small variations in the ratio map between the analytical noise map and the one calculated from repeated scans. The phantom studies demonstrated that the adaptive NLM filter can reduce noise substantially without degrading the high-contrast spatial resolution, as illustrated by modulation transfer function and slice sensitivity profile results. The NPS results show that adaptive NLM denoising preserves the
Bodner, Kimberly E; Engelhardt, Christopher R; Minshew, Nancy J; Williams, Diane L
2015-09-01
Studies investigating inferential reasoning in autism spectrum disorder (ASD) have focused on the ability to make socially-related inferences or inferences more generally. Important variables for intervention planning such as whether inferences depend on physical experiences or the nature of social information have received less consideration. A measure of bridging inferences of physical causation, mental states, and emotional states was administered to older children, adolescents, and adults with and without ASD. The ASD group had more difficulty making inferences, particularly related to emotional understanding. Results suggest that individuals with ASD may not have the stored experiential knowledge that specific inferences depend upon or have difficulties accessing relevant experiences due to linguistic limitations. Further research is needed to tease these elements apart.
Bodner, Kimberly E.; Engelhardt, Christopher R.; Minshew, Nancy J.
2015-01-01
Studies investigating inferential reasoning in autism spectrum disorder (ASD) have focused on the ability to make socially-related inferences or inferences more generally. Important variables for intervention planning such as whether inferences depend on physical experience or the nature of social information have received less consideration. A measure of bridging inferences of physical causation, mental states, and emotional states was administered to older children, adolescents, and adults with and without ASD. The ASD group had more difficulty making inferences, particularly related to emotional understanding. Results suggest that individuals with ASD may not have the stored experiential knowledge that specific inferences depend upon or have difficulties accessing relevant experiences due to linguistic limitations. Further research is needed to tease these elements apart. PMID:25821925
NASA Astrophysics Data System (ADS)
Peltzer, E. T.; Brewer, P. G.
2008-12-01
Increasing levels of dissolved total CO2 in the ocean from the invasion of fossil fuel CO2 via the atmosphere are widely believed to pose challenges to marine life on several fronts. This is most often expressed as a concern about the resulting lower pH and its impact on calcification in marine organisms (coral reefs, calcareous phytoplankton, etc.). These concerns are real, but calcification is by no means the only process affected, nor is the fossil fuel CO2 signal the only geochemical driver of the rapidly emerging deep-sea biological stress. Physical climate change is reducing deep-sea ventilation rates, thereby leading to increasing oxygen deficits and concomitant increased respiratory CO2. We seek to understand the combined effects of the downward penetration of the fossil fuel signal and the emergence of the depleted-O2/increased-respiratory-CO2 signal at depth. As a first step, we seek to provide a simple function to capture the changing oceanic state. The most basic thermodynamic equation for the functioning of marine animals can be written as Corg + O2 → CO2, and this results in the simple Gibbs free energy equation ΔG° = -RT · ln([fCO2] / ([Corg][fO2])), in which the ratio of pO2 to pCO2 emerges as the dominant factor. From this we construct a simple Respiration Index, RI = log10(pO2/pCO2), which is linear in energy, and we map this function for key oceanic regions, illustrating the expansion of oceanic dead zones. The formal thermodynamic limit for aerobic life is RI = 0; in practice, field data show that at RI ~ 0.7 microbes turn to electron acceptors other than O2, and denitrification begins to occur. This likely represents the lowest limit for the long-term functioning of higher animals, and the zone RI = 0.7 to 1 appears to present challenges to the basic functioning of many marine species. In addition, there are large regions of the ocean where denitrification already occurs, and these zones will expand greatly in size as the combined
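The Respiration Index is straightforward to compute; the partial-pressure values below are purely illustrative, not measurements:

```python
import math

def respiration_index(pO2, pCO2):
    """RI = log10(pO2 / pCO2); RI = 0 is the formal thermodynamic limit for
    aerobic metabolism, and RI ~ 0.7-1.0 already stresses many species."""
    return math.log10(pO2 / pCO2)

# Illustrative (hypothetical) partial pressures, in atm:
surface = respiration_index(pO2=0.20, pCO2=0.0004)   # well-ventilated surface water
omz = respiration_index(pO2=0.005, pCO2=0.001)       # oxygen-minimum-zone water
```

Because RI is logarithmic, each unit drop corresponds to a tenfold decrease in the O2-to-CO2 pressure ratio and thus a fixed decrement in available free energy.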
Optimally stabilized PET image denoising using trilateral filtering.
Mansoor, Awais; Bagci, Ulas; Mollura, Daniel J
2014-01-01
Low resolution and a signal-dependent noise distribution in positron emission tomography (PET) images make the denoising process an inevitable step prior to qualitative and quantitative image analysis tasks. Conventional PET denoising methods either over-smooth small-sized structures due to resolution limitations or make incorrect assumptions about the noise characteristics. Therefore, clinically important quantitative information may be corrupted. To address these challenges, we introduced a novel approach to remove signal-dependent noise in PET images, where the noise distribution was considered as mixed Poisson-Gaussian. Meanwhile, the generalized Anscombe transformation (GAT) was used to stabilize the varying nature of the PET noise. Beyond noise stabilization, it is also desirable for the noise removal filter to preserve the boundaries of structures while smoothing the noisy regions. Indeed, it is important to avoid significant loss of quantitative information such as standard uptake value (SUV)-based metrics as well as metabolic lesion volume. To satisfy all these properties, we extended the bilateral filtering method into trilateral filtering through a multiscaling and optimal Gaussianization process. The proposed method was tested on more than 50 PET-CT images from various patients having different cancers and achieved superior performance compared to the widely used denoising techniques in the literature.
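The GAT step can be sketched as follows; the `gain`, `sigma`, and `mu` parameters describe the assumed Poisson-Gaussian model (detector gain, Gaussian noise standard deviation and mean), and this is only the forward stabilizing transform, not the paper's full trilateral pipeline:

```python
import numpy as np

def generalized_anscombe(x, gain=1.0, sigma=0.0, mu=0.0):
    """Generalized Anscombe transform for Poisson-Gaussian data: maps the
    signal so that the noise variance is approximately 1 everywhere."""
    arg = gain * x + 0.375 * gain ** 2 + sigma ** 2 - gain * mu
    return (2.0 / gain) * np.sqrt(np.maximum(arg, 0.0))

# Pure-Poisson check (gain=1, no Gaussian part): the transformed standard
# deviation stays close to 1 regardless of the mean count level.
rng = np.random.default_rng(0)
stds = [float(np.std(generalized_anscombe(rng.poisson(lam, 100000).astype(float))))
        for lam in (5.0, 20.0, 80.0)]
```

After the transform the noise is approximately unit-variance Gaussian, so a Gaussian-noise denoiser can be applied before inverting the transform.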
Comparison of de-noising techniques for FIRST images
Fodor, I K; Kamath, C
2001-01-22
Data obtained through scientific observations are often contaminated by noise and artifacts from various sources. As a result, a first step in mining these data is to isolate the signal of interest by minimizing the effects of the contaminations. Once the data has been cleaned or de-noised, data mining can proceed as usual. In this paper, we describe our work in denoising astronomical images from the Faint Images of the Radio Sky at Twenty-Centimeters (FIRST) survey. We are mining this survey to detect radio-emitting galaxies with a bent-double morphology. This task is made difficult by the noise in the images caused by the processing of the sensor data. We compare three different approaches to de-noising: thresholding of wavelet coefficients, advocated in the statistical community; traditional filtering methods used in the image processing community; and a simple thresholding scheme proposed by FIRST astronomers. While each approach has its merits and pitfalls, we found that, for our purpose, the simple thresholding scheme worked relatively well for the FIRST dataset.
Streak image denoising and segmentation using adaptive Gaussian guided filter.
Jiang, Zhuocheng; Guo, Baoping
2014-09-10
In streak tube imaging lidar (STIL), streak images are obtained using a CCD camera. However, noise in the captured streak images can greatly affect the quality of reconstructed 3D contrast and range images. The greatest challenge for streak image denoising is reducing the noise while preserving details. In this paper, we propose an adaptive Gaussian guided filter (AGGF) for noise removal and detail enhancement of streak images. The proposed algorithm is based on a guided filter (GF) and part of an adaptive bilateral filter (ABF). In the AGGF, the details are enhanced by optimizing the offset parameter. AGGF-denoised streak images are significantly sharper than those denoised by the GF. Moreover, the AGGF is a fast linear time algorithm achieved by recursively implementing a Gaussian filter kernel. Experimentally, AGGF demonstrates its capacity to preserve edges and thin structures and outperforms the existing bilateral filter and domain transform filter in terms of both visual quality and peak signal-to-noise ratio performance.
Microseismic event denoising via adaptive directional vector median filters
NASA Astrophysics Data System (ADS)
Zheng, Jing; Lu, Ji-Ren; Jiang, Tian-Qi; Liang, Zhe
2017-01-01
We present a novel denoising scheme for microseismic downhole datasets, the adaptive directional vector median filter (AD-VMF), which applies Radon transform-based adaptive vector directional median filtering. AD-VMF contains three major steps for microseismic downhole data processing: (i) applying a Radon transform to the microseismic data to obtain the parameters of the waves, (ii) performing an S-transform to determine the parameters for the filters, and (iii) applying these parameters in a vector median filter (VMF) to denoise the data. Steps (i) and (ii) realize automatic direction detection. The proposed algorithm is tested with synthetic and field datasets that were recorded with a vertical array of receivers. The P-wave and S-wave direct arrivals are properly denoised for records with poor signal-to-noise ratio (SNR). In the simulation case, we also evaluate the performance using the mean square error (MSE) as a function of SNR. The results show that the distortion introduced by the proposed method is very low, even when the SNR is below 0 dB.
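The VMF used in step (iii) picks, for each window of multichannel samples, the member vector minimizing the summed distance to the others; a minimal sketch, with the window width as an assumed illustrative choice:

```python
import numpy as np

def vector_median(vectors):
    """Return the member of `vectors` minimizing the summed Euclidean
    distance to all other members (always one of the inputs)."""
    v = np.asarray(vectors, dtype=float)
    dists = np.linalg.norm(v[:, None, :] - v[None, :, :], axis=2).sum(axis=1)
    return v[np.argmin(dists)]

def vmf_filter(traces, width=3):
    """Slide a window over multichannel samples (rows = time, columns =
    receivers/components) and replace each sample by the window's vector median."""
    half = width // 2
    out = np.asarray(traces, dtype=float).copy()
    for i in range(half, len(traces) - half):
        out[i] = vector_median(traces[i - half:i + half + 1])
    return out

# An impulsive two-channel outlier in an otherwise constant record is rejected.
traces = np.tile(np.array([1.0, 2.0]), (10, 1))
traces[5] = [50.0, -50.0]
f = vmf_filter(traces, width=3)
```

Because the output is always one of the input vectors, the filter rejects impulsive outliers without inventing intermediate amplitudes, which is why it preserves sharp P- and S-wave onsets.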
Two-direction nonlocal model for image denoising.
Zhang, Xuande; Feng, Xiangchu; Wang, Weiwei
2013-01-01
Similarities inherent in natural images have been widely exploited for image denoising and other applications. In fact, if a cluster of similar image patches is rearranged into a matrix, similarities exist both between columns and between rows. Using these similarities, we present a two-directional nonlocal (TDNL) variational model for image denoising. The solution of our model consists of three components: one component is a scaled version of the original observed image, and the other two components are obtained by utilizing the similarities. Specifically, by using the similarity between columns, we get a nonlocal-means-like estimation of the patch with consideration of all similar patches, while the weights are not the pairwise similarities but a set of clusterwise coefficients. Moreover, by using the similarity between rows, we also get nonlocal-autoregression-like estimations for the center pixels of the similar patches. The TDNL model leads to an alternating minimization algorithm. Experiments indicate that the model can perform on par with or better than the state-of-the-art denoising methods.
Oriented wavelet transform for image compression and denoising.
Chappelier, Vivien; Guillemot, Christine
2006-10-01
In this paper, we introduce a new transform for image processing, based on wavelets and the lifting paradigm. The lifting steps of a unidimensional wavelet are applied along a local orientation defined on a quincunx sampling grid. To maximize energy compaction, the orientation minimizing the prediction error is chosen adaptively. A fine-grained multiscale analysis is provided by iterating the decomposition on the low-frequency band. In the context of image compression, the multiresolution orientation map is coded using a quad tree. The rate allocation between the orientation map and wavelet coefficients is jointly optimized in a rate-distortion sense. For image denoising, a Markov model is used to extract the orientations from the noisy image. As long as the map is sufficiently homogeneous, interesting properties of the original wavelet are preserved such as regularity and orthogonality. Perfect reconstruction is ensured by the reversibility of the lifting scheme. The mutual information between the wavelet coefficients is studied and compared to the one observed with a separable wavelet transform. The rate-distortion performance of this new transform is evaluated for image coding using state-of-the-art subband coders. Its performance in a denoising application is also assessed against the performance obtained with other transforms or denoising methods.
Stacked Convolutional Denoising Auto-Encoders for Feature Representation.
Du, Bo; Xiong, Wei; Wu, Jia; Zhang, Lefei; Zhang, Liangpei; Tao, Dacheng
2016-03-16
Deep networks have achieved excellent performance in learning representations from visual data. However, supervised deep models like convolutional neural networks require large quantities of labeled data, which are very expensive to obtain. To solve this problem, this paper proposes an unsupervised deep network, called the stacked convolutional denoising auto-encoders, which can map images to hierarchical representations without any label information. The network, optimized by layer-wise training, is constructed by stacking layers of denoising auto-encoders in a convolutional way. In each layer, high-dimensional feature maps are generated by convolving features of the lower layer with kernels learned by a denoising auto-encoder. The auto-encoder is trained on patches extracted from feature maps in the lower layer to learn robust feature detectors. To better train the large network, a layer-wise whitening technique is introduced into the model. Before each convolutional layer, a whitening layer is embedded to sphere the input data. Through these layers of mapping, raw images are transformed into high-level feature representations which boost the performance of the subsequent support vector machine classifier. The proposed algorithm is evaluated by extensive experiments and demonstrates superior classification performance to state-of-the-art unsupervised networks.
2008-01-01
Background Amongst the most commonly used molecular markers for plant phylogenetic studies are the nuclear ribosomal internal transcribed spacers (ITS). Intra-individual variability of these multicopy regions is a very common phenomenon in plants, the causes of which are debated in the literature. Phylogenetic reconstruction under these conditions is inherently difficult. Our approach is to consider this problem as a special case of the general biological question of how to infer the characteristics of hosts (represented here by plant individuals) from features of their associates (represented by cloned sequences here). Results Six general transformation functions are introduced, covering the transformation of associate characters to discrete and continuous host characters, and the transformation of associate distances to host distances. A pure distance-based framework is established in which these transformation functions are applied to ITS sequences collected from the angiosperm genera Acer, Fagus and Zelkova. The formulae are also applied to allelic data of three different loci obtained from Rosa spp. The functions are validated by (1) phylogeny-independent measures of treelikeness; (2) correlation with independent host characters; (3) visualization using splits graphs and comparison with published data on the test organisms. The results agree well with these three measures and the datasets examined as well as with the theoretical predictions and previous results in the literature. High-quality distance matrices are obtained with four of the six transformation formulae. We demonstrate that one of them represents a generalization of the Sørensen coefficient, which is widely applied in ecology. Conclusion Because of their generality, the transformation functions may be applied to a wide range of biological problems that are interpretable in terms of hosts and associates. Regarding cloned sequences, the formulae have a high potential to accurately reflect evolutionary
Denoised and texture enhanced MVCT to improve soft tissue conspicuity
Sheng, Ke; Qi, Sharon X.; Gou, Shuiping; Wu, Jiaolong
2014-10-15
Purpose: MVCT images have been used in TomoTherapy treatment to align patients based on bony anatomies, but its usefulness for soft tissue registration, delineation, and adaptive radiation therapy is limited due to insignificant photoelectric interaction components and the presence of noise resulting from low detector quantum efficiency of megavoltage x-rays. Algebraic reconstruction with sparsity regularizers as well as local denoising methods has not significantly improved the soft tissue conspicuity. The authors aim to utilize a nonlocal means denoising method and texture enhancement to recover the soft tissue information in MVCT (DeTECT). Methods: A block matching 3D (BM3D) algorithm was adapted to reduce the noise while keeping the texture information of the MVCT images. Following image denoising, a saliency map was created to further enhance visual conspicuity of low contrast structures. In this study, BM3D and saliency maps were applied to MVCT images of a CT imaging quality phantom, a head and neck patient, and four prostate patients. Following these steps, the contrast-to-noise ratios (CNRs) were quantified. Results: By applying BM3D denoising and saliency map, postprocessed MVCT images show remarkable improvements in image contrast without compromising resolution. For the head and neck patient, the difficult-to-see lymph nodes and vein in the carotid space in the original MVCT image became conspicuous in DeTECT. For the prostate patients, the ambiguous boundary between the bladder and the prostate in the original MVCT was clarified. The CNRs of phantom low contrast inserts were improved from 1.48 and 3.8 to 13.67 and 16.17, respectively. The CNRs of two regions-of-interest were improved from 1.5 and 3.17 to 3.14 and 15.76, respectively, for the head and neck patient. DeTECT also increased the CNR of the prostate from 0.13 to 1.46 for the four prostate patients. The results are substantially better than a local denoising method using anisotropic diffusion
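The CNR figures reported above can be reproduced in principle from region statistics. A minimal sketch using one common CNR definition (mean difference over background noise; the paper's exact formula may differ) on synthetic data:

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-noise ratio: |mean difference| over background noise std.
    (One common definition; the paper's exact formula may differ.)"""
    return abs(roi.mean() - background.mean()) / background.std()

# Synthetic phantom-like patches: noisy background and a low-contrast insert.
rng = np.random.default_rng(0)
bg = rng.normal(100.0, 5.0, size=(64, 64))          # background, sigma = 5
insert_lo = rng.normal(110.0, 5.0, size=(16, 16))   # 10-unit contrast insert
print(round(cnr(insert_lo, bg), 2))  # ≈ 2 for a 10-unit contrast over sigma = 5
```

Denoising raises CNR mainly by shrinking the background standard deviation in the denominator, which is why the phantom inserts' CNRs jump by an order of magnitude after BM3D.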
Despeckling SRTM and other topographic data with a denoising algorithm
NASA Astrophysics Data System (ADS)
Stevenson, John A.; Sun, Xianfang; Mitchell, Neil C.
2010-01-01
Noise in topographic data obscures features and increases error in geomorphic products calculated from DEMs. DEMs produced by radar remote sensing, such as SRTM, are frequently used for geomorphological studies, but they often contain speckle noise that may significantly lower the quality of geomorphometric analyses. We introduce here an algorithm that denoises three-dimensional objects while preserving sharp features. It is free to download and simple to use. In this study the algorithm is applied to topographic data (synthetic landscapes, SRTM, TOPSAR) and the results are compared against a mean filter, using LiDAR data as ground truth for the natural datasets. The level of denoising is controlled by two parameters: the threshold (T) that controls the sharpness of the features to be preserved, and the number of iterations (n) that controls how much the data are changed. The optimum settings depend on the nature of the topography and of the noise to be removed, but are typically in the range T = 0.87-0.99 and n = 1-10. If the threshold is too high, noise is preserved. A lower threshold setting is used where noise is spatially uncorrelated (e.g. TOPSAR), whereas in some other datasets (e.g. SRTM), where filtering of the data during processing has introduced spatial correlation to the noise, higher thresholds can be used. Compared to those filtered to an equivalent level with a mean filter, data smoothed by the denoising algorithm of Sun et al. [Sun, X., Rosin, P.L., Martin, R.R., Langbein, F.C., 2007. Fast and effective feature-preserving mesh denoising. IEEE Transactions on Visualisation and Computer Graphics 13, 925-938.] are closer to the original data and to the ground truth. Changes to the data are smaller and less correlated to topographic features. Furthermore, the feature-preserving nature of the algorithm allows significant smoothing to be applied to flat areas of topography while limiting the alterations made in mountainous regions, with clear
NASA Astrophysics Data System (ADS)
Liu, Xiao; Clarke, Neil D.
Using a physically principled method of scoring genomic sequences for the potential to be bound by transcription factors, we have developed an algorithm for assessing the conservation of predicted binding occupancy that does not rely on sequence alignment of promoters. The method, which we call ortholog-weighting, assesses the degree to which the predicted binding occupancy of a transcription factor in a reference gene is also predicted in the promoters of orthologous genes. The analysis was performed separately for over 100 different transcription factors in S. cerevisiae. Statistical significance was evaluated by simulation using permuted versions of the position weight matrices. Ortholog-weighting produced about twice as many significantly high scoring genes as were obtained from the S. cerevisiae genome alone. Gene Ontology analysis found a similar two-fold enrichment of genes. Both analyses suggest that ortholog-weighting improves the prediction of true regulatory targets. Interestingly, the method has only a marginal effect on the prediction of binding by chromatin immunoprecipitation (ChIP) assays. We suggest several possibilities for reconciling this result with the improved enrichment that we observe for functionally related promoters and for promoters that are under positive selection.
Denoising of single-trial matrix representations using 2D nonlinear diffusion filtering.
Mustaffa, I; Trenado, C; Schwerdtfeger, K; Strauss, D J
2010-01-15
In this paper we present a novel application of denoising by means of nonlinear diffusion filters (NDFs). NDFs have been successfully applied in image processing and computer vision, particularly in image denoising, smoothing, segmentation, and restoration. We apply two types of NDFs to the denoising of evoked responses in single trials in matrix form: the nonlinear isotropic and the anisotropic diffusion filters. We show that by means of NDFs we are able to denoise the evoked potentials, resulting in a better extraction of physiologically relevant morphological features over the ongoing experiment. Due to its adaptive diffusion feature, this technique offers the advantage of translation invariance in comparison to other well-known methods, e.g., wavelet denoising based on maximally decimated filter banks. We compare the proposed technique with a wavelet denoising scheme that had been introduced before for evoked responses. It is concluded that NDFs represent a promising and useful approach in the denoising of event-related potentials. Novel NDF applications to the denoising of single-trial auditory brain responses (ABRs) and transcranial magnetic stimulation (TMS) evoked electroencephalographic responses are also presented in this paper.
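The core mechanism of such filters can be sketched with a Perona-Malik-style nonlinear diffusion step: diffusion is strong where the local gradient is small (noise) and suppressed where it is large (edges/morphological features). The parameter values below are illustrative, not those of the paper:

```python
import numpy as np

def perona_malik(img, n_iter=10, kappa=0.1, dt=0.2):
    """Nonlinear diffusion (Perona-Malik type): smooths within regions while
    the conductivity g = exp(-(|grad|/kappa)^2) slows diffusion at edges.
    dt <= 0.25 keeps the explicit 4-neighbour scheme stable."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(n_iter):
        p = np.pad(u, 1, mode="edge")       # Neumann boundary conditions
        dn = p[:-2, 1:-1] - u               # difference to north neighbour
        ds = p[2:, 1:-1] - u                # south
        de = p[1:-1, 2:] - u                # east
        dw = p[1:-1, :-2] - u               # west
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# Illustrative use: smooth a noisy step while keeping the step itself sharp.
rng = np.random.default_rng(0)
step_img = np.zeros((32, 32))
step_img[:, 16:] = 1.0
noisy = step_img + rng.normal(0, 0.05, step_img.shape)
smooth = perona_malik(noisy, n_iter=20, kappa=0.2)
```

Because the conductivity depends on the evolving data, the filter is adaptive, which is the source of the translation-invariance advantage noted in the abstract.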
A Fast Algorithm for Denoising Magnitude Diffusion-Weighted Images with Rank and Edge Constraints
Lam, Fan; Liu, Ding; Song, Zhuang; Schuff, Norbert; Liang, Zhi-Pei
2015-01-01
Purpose To accelerate denoising of magnitude diffusion-weighted images subject to joint rank and edge constraints. Methods We extend a previously proposed majorize-minimize (MM) method for statistical estimation that involves noncentral χ distributions and joint rank and edge constraints. A new algorithm is derived which decomposes the constrained noncentral χ denoising problem into a series of constrained Gaussian denoising problems, each of which is then solved using an efficient alternating minimization scheme. Results The performance of the proposed algorithm has been evaluated using both simulated and experimental data. Results from simulations based on ex vivo data show that the new algorithm achieves about a factor of 10 speed-up over the original Quasi-Newton based algorithm. This improvement in computational efficiency enabled denoising of large data sets containing many diffusion-encoding directions. The denoising performance of the new efficient algorithm is found to be comparable to or even better than that of the original slow algorithm. For an in vivo high-resolution Q-ball acquisition, comparison of fiber tracking results around the hippocampus region before and after denoising is also shown to demonstrate the denoising effects of the new algorithm. Conclusion The optimization problem associated with denoising noncentral χ distributed diffusion-weighted images subject to joint rank and edge constraints can be solved efficiently using an MM-based algorithm. PMID:25733066
Enhancing P300 Wave of BCI Systems Via Negentropy in Adaptive Wavelet Denoising.
Vahabi, Z; Amirfattahi, R; Mirzaei, Ar
2011-07-01
Brain Computer Interface (BCI) is a direct communication pathway between the brain and an external device. BCIs are often aimed at assisting, augmenting, or repairing human cognitive or sensory-motor functions. Separating EEG epochs into target and non-target ones based on the presence of the P300 signal is a difficult task, mainly due to their naturally low signal-to-noise ratio. In this paper a new algorithm is introduced to enhance EEG signals and improve their SNR. Our denoising method is based on multi-resolution analysis via Independent Component Analysis (ICA) fundamentals. We suggest combining negentropy as a signal feature with subband information from the wavelet transform. The proposed method is finally tested on a dataset from the BCI Competition 2003 and gives results that compare favorably.
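Negentropy itself is rarely computed exactly; ICA implementations usually use Hyvärinen's contrast-function approximation. A minimal sketch (not the paper's code) with G(u) = log cosh u:

```python
import numpy as np

def negentropy(y, n_gauss=100_000):
    """Hyvärinen-style one-unit negentropy approximation:
    J(y) ≈ (E[G(y)] - E[G(v)])^2 with G(u) = log cosh u and v ~ N(0, 1).
    y is standardized first, since negentropy assumes zero mean, unit variance."""
    y = (y - y.mean()) / y.std()
    G = lambda u: np.log(np.cosh(u))
    v = np.random.default_rng(0).standard_normal(n_gauss)  # Gaussian reference
    return (G(y).mean() - G(v).mean()) ** 2

rng = np.random.default_rng(1)
gauss = rng.standard_normal(5000)   # Gaussian: negentropy near zero
spiky = rng.laplace(size=5000)      # non-Gaussian: clearly positive negentropy
print(negentropy(gauss) < negentropy(spiky))  # True
```

In the P300 setting, a higher negentropy of a wavelet subband signals non-Gaussian (event-related) structure worth keeping, while near-Gaussian subbands are treated as noise.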
NASA Astrophysics Data System (ADS)
Buttinelli, M.; Bianchi, I.; Anselmi, M.; Chiarabba, C.; de Rita, D.; Quattrocchi, F.
2010-12-01
The Tolfa-Cerite volcanic district developed along the Tyrrhenian passive margin of central Italy, as part of magmatic processes that started during the middle Pliocene. In this area the uncertainties on the deep crustal structures and the definition of the intrusive body geometry are focal issues that still need to be addressed. After the onset of the spreading of the Tyrrhenian sea during the Late Miocene, the emplacement of the intrusive bodies of the Tolfa complex (TDC), in a general back-arc geodynamical regime, generally occurred at a low stretching rate, in correspondence with the junctions between major lithospheric discontinuities. Normal faults, located at the edges of Mio-Pliocene basins, were used as preferential pathways for the rising of magmatic masses from the mantle to the surface. We used teleseismic recordings at the TOLF and MAON broadband stations of the INGV seismic network (located between the Argentario promontory and the Tolfa-Ceriti dome complexes, TDC) to image the principal seismic velocity discontinuities by receiver function (RF) analysis. Together with RF velocity models of the area computed using the teleseismic events recorded by a temporary network of eight stations deployed around the TDC, we achieve a general crustal model of this area. The geometry of the seismic network was defined to focus on the crustal structure beneath the TDC, trying to define the main velocity changes attributable to the intrusive bodies, the calcareous basal complex, the deep metamorphic basement, the lower crust, and the Moho. The analysis of these data shows the Moho at a depth of 23 km in the TDC area and 20 km in the Argentario area. Crustal models also show an unexpected velocity decrease between 12 and 18 km, consistent with a slight dropdown of the Vp/Vs ratio, imputable to a regional mid-crustal shear zone inherited from the previous Alpine orogenesis, reactivated in extensional tectonics by the early opening phases of the Tyrrhenian sea. Above
A New Method for Nonlocal Means Image Denoising Using Multiple Images.
Wang, Xingzheng; Wang, Haoqian; Yang, Jiangfeng; Zhang, Yongbing
2016-01-01
The basic principle of nonlocal means is to denoise a pixel using the weighted average of the neighbourhood pixels, where the weight is decided by the similarity of these pixels. The key issue of the nonlocal means method is how to select similar patches and design their weights. There are two main contributions of this paper. The first contribution is that we use two images to denoise each pixel. These two noisy images have the same noise deviation. Instead of using only one image, we calculate the weight from two noisy images. After the first denoising process, we get a pre-denoised image and a residual image. The second contribution is combining the nonlocal property between the residual image and the pre-denoised image. The improved nonlocal means method pays more attention to similarity than the original one, which turns out to be very effective in eliminating Gaussian noise. Experimental results with simulated data are provided.
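The two-image weighting idea can be sketched in a toy nonlocal-means loop where patch distances are averaged over both noisy copies before the exponential weighting. This is an illustrative reading of the idea, not the paper's algorithm (which adds the residual-image stage):

```python
import numpy as np

def nlm_two_images(img1, img2, patch=3, search=7, h=0.15):
    """Toy nonlocal means using two noisy copies of the same scene:
    patch distances are averaged over both images, then used to weight
    pixels of the frame average."""
    pad = patch // 2
    p1 = np.pad(img1, pad, mode="reflect")
    p2 = np.pad(img2, pad, mode="reflect")
    avg = (img1 + img2) / 2.0
    out = np.zeros_like(avg)
    rows, cols = img1.shape
    half = search // 2
    for i in range(rows):
        for j in range(cols):
            ref1 = p1[i:i + patch, j:j + patch]
            ref2 = p2[i:i + patch, j:j + patch]
            wsum = vsum = 0.0
            for di in range(max(0, i - half), min(rows, i + half + 1)):
                for dj in range(max(0, j - half), min(cols, j + half + 1)):
                    c1 = p1[di:di + patch, dj:dj + patch]
                    c2 = p2[di:di + patch, dj:dj + patch]
                    # mean squared patch distance, averaged over both images
                    d = 0.5 * (np.mean((ref1 - c1) ** 2) + np.mean((ref2 - c2) ** 2))
                    w = np.exp(-d / h ** 2)
                    wsum += w
                    vsum += w * avg[di, dj]
            out[i, j] = vsum / wsum
    return out

# Illustrative use on a small noisy step image:
clean = np.zeros((12, 12)); clean[:, 6:] = 1.0
rng = np.random.default_rng(0)
n1 = clean + rng.normal(0, 0.1, clean.shape)
n2 = clean + rng.normal(0, 0.1, clean.shape)
den = nlm_two_images(n1, n2)
```

Using two independent noise realizations makes the patch distance a less noisy similarity estimate, which is the intuition behind the claimed improvement.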
Blind source separation based x-ray image denoising from an image sequence.
Yu, Chun-Yu; Li, Yan; Fei, Bin; Li, Wei-Liang
2015-09-01
Blind source separation (BSS) based x-ray image denoising from an image sequence is proposed. Without prior knowledge, the useful image signal can be separated from an x-ray image sequence, since the original images are assumed to be different combinations of a stable image signal and random image noise. BSS algorithms such as fixed-point independent component analysis and second-order statistics singular value decomposition are used and compared with multi-frame averaging, a common algorithm for improving an image's signal-to-noise ratio (SNR). Denoising performance is evaluated in terms of SNR, standard deviation, entropy, and runtime. The analysis indicates that BSS is applicable to image denoising; the denoised image's quality improves as more frames are included in the x-ray image sequence, but at a higher computational cost; a trade-off must therefore be made between denoising performance and runtime when choosing the number of frames included in an image sequence.
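The multi-frame averaging baseline mentioned above is easy to sketch: with N frames of a stable signal plus zero-mean noise, the noise standard deviation falls as 1/sqrt(N), so SNR grows with the number of frames (illustrative synthetic data, not x-ray frames):

```python
import numpy as np

def average_frames(frames):
    """Multi-frame averaging: the stable signal is reinforced while
    zero-mean noise averages out (noise std falls as 1/sqrt(N))."""
    return np.mean(frames, axis=0)

def snr_db(signal, estimate):
    """SNR in dB of an estimate against the known clean signal."""
    noise = estimate - signal
    return 10 * np.log10(np.sum(signal ** 2) / np.sum(noise ** 2))

rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0, 4 * np.pi, 500))                 # stable content
frames = [truth + rng.normal(0, 0.5, truth.shape) for _ in range(16)]

# Averaging 16 frames should gain about 10*log10(16) ≈ 12 dB over one frame.
print(snr_db(truth, frames[0]) < snr_db(truth, average_frames(frames)))  # True
```

The trade-off noted in the abstract follows directly: each extra frame improves SNR by a diminishing amount while acquisition and processing time grow linearly.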
Shao, Ling; Yan, Ruomei; Li, Xuelong; Liu, Yan
2014-07-01
Image denoising is a well explored topic in the field of image processing. In the past several decades, the progress made in image denoising has benefited from the improved modeling of natural images. In this paper, we introduce a new taxonomy based on image representations for a better understanding of state-of-the-art image denoising techniques. Within each category, several representative algorithms are selected for evaluation and comparison. The experimental results are discussed and analyzed to determine the overall advantages and disadvantages of each category. In general, the nonlocal methods within each category produce better denoising results than local ones. In addition, methods based on overcomplete representations using learned dictionaries perform better than others. The comprehensive study in this paper would serve as a good reference and stimulate new research ideas in image denoising.
Hyperspectral image denoising using the robust low-rank tensor recovery.
Li, Chang; Ma, Yong; Huang, Jun; Mei, Xiaoguang; Ma, Jiayi
2015-09-01
Denoising is an important preprocessing step to further analyze the hyperspectral image (HSI), and many denoising methods have been used for the denoising of the HSI data cube. However, the traditional denoising methods are sensitive to outliers and non-Gaussian noise. In this paper, by utilizing the underlying low-rank tensor property of the clean HSI data and the sparsity property of the outliers and non-Gaussian noise, we propose a new model based on the robust low-rank tensor recovery, which can preserve the global structure of HSI and simultaneously remove the outliers and different types of noise: Gaussian noise, impulse noise, dead lines, and so on. The proposed model can be solved by the inexact augmented Lagrangian method, and experiments on simulated and real hyperspectral images demonstrate that the proposed method is efficient for HSI denoising.
Xiao, Qiyang; Li, Jian; Bai, Zhiliang; Sun, Jiedi; Zhou, Nan; Zeng, Zhoumo
2016-12-13
In this study, a small leak detection method based on variational mode decomposition (VMD) and ambiguity correlation classification (ACC) is proposed. The signals acquired from sensors were decomposed using the VMD, and numerous components were obtained. Based on the probability density function (PDF), an adaptive de-noising algorithm using VMD is proposed for processing the noise components and reconstructing the de-noised components. Furthermore, the ambiguity function image was employed for analysis of the reconstructed signals. Based on the correlation coefficient, ACC is proposed to detect small pipeline leaks. The analysis of pipeline leakage signals, using 1 mm and 2 mm leaks, has shown that the proposed detection method can detect a small leak accurately and effectively. Moreover, the experimental results have shown that the proposed method achieved better performance than the support vector machine (SVM) and back propagation neural network (BP) methods.
Sun, Jie; Shi, Hongbo; Wang, Zhenzhen; Zhang, Changjian; Liu, Lin; Wang, Letian; He, Weiwei; Hao, Dapeng; Liu, Shulin; Zhou, Meng
2014-08-01
Accumulating evidence demonstrates that long non-coding RNAs (lncRNAs) play important roles in the development and progression of complex human diseases, and predicting novel human lncRNA-disease associations is a challenging and urgently needed task, especially at a time when increasing amounts of lncRNA-related biological data are available. In this study, we proposed a global network-based computational framework, RWRlncD, to infer potential human lncRNA-disease associations by implementing the random walk with restart method on a lncRNA functional similarity network. The performance of RWRlncD was evaluated by experimentally verified lncRNA-disease associations, based on leave-one-out cross-validation. We achieved an area under the ROC curve of 0.822, demonstrating the excellent performance of RWRlncD. Significantly, the performance of RWRlncD is robust to different parameter selections. Highly ranked lncRNA-disease associations predicted in case studies of prostate cancer and Alzheimer's disease were manually confirmed by literature mining, providing evidence of the good performance and potential value of the RWRlncD method in predicting lncRNA-disease associations.
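The random walk with restart at the core of RWRlncD iterates p = (1 - r) * W_norm @ p + r * p0 on a similarity network until convergence. A minimal sketch on a tiny hypothetical network (weights and topology invented for illustration):

```python
import numpy as np

def rwr(W, seed, restart=0.5, tol=1e-10):
    """Random walk with restart on a weighted network.
    W is column-normalized to a transition matrix; p0 is the seed indicator.
    The (1 - restart) contraction guarantees convergence."""
    Wn = W / W.sum(axis=0, keepdims=True)   # column-stochastic transitions
    p0 = np.zeros(W.shape[0]); p0[seed] = 1.0
    p = p0.copy()
    while True:
        p_new = (1 - restart) * Wn @ p + restart * p0
        if np.abs(p_new - p).sum() < tol:
            return p_new
        p = p_new

# Tiny hypothetical similarity network (symmetric adjacency weights):
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
scores = rwr(W, seed=0)
print(scores.argsort()[::-1])  # nodes ranked by proximity to the seed
```

In the paper's setting, the seed would be a disease-associated lncRNA and the steady-state probabilities rank candidate lncRNAs for that disease.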
Tseng, Zhijie Jack; Flynn, John J
2015-01-21
biomechanical attributes from these simulations are used to infer form-function linkage.
Morphology of the Galaxy Distribution from Wavelet Denoising
NASA Astrophysics Data System (ADS)
Martínez, Vicent J.; Starck, Jean-Luc; Saar, Enn; Donoho, David L.; Reynolds, Simon C.; de la Cruz, Pablo; Paredes, Silvestre
2005-11-01
We have developed a method based on wavelets to obtain the true underlying smooth density from a point distribution. The goal has been to reconstruct the density field in an optimal way, ensuring that the morphology of the reconstructed field reflects the true underlying morphology of the point field, which, like the galaxy distribution, has a genuinely multiscale structure, with near-singular behavior on sheets, filaments, and hot spots. If the discrete distributions are smoothed using Gaussian filters, the morphological properties tend to be closer to those expected for a Gaussian field. The use of wavelet denoising provides us with a unique and more accurate morphological description.
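The core wavelet-denoising step can be illustrated with a one-level 1D Haar shrinkage; the paper's method is far more elaborate (multiscale, 3D, morphology-aware), but the principle of thresholding noisy detail coefficients is the same:

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar wavelet shrinkage: soft-threshold the detail
    coefficients, keep the smooth coefficients, invert the transform.
    Input length must be even."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation (smooth) coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients, noise-dominated
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft threshold
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)         # inverse Haar transform
    y[1::2] = (a - d) / np.sqrt(2)
    return y

# Illustrative use: a smooth signal has tiny detail coefficients, so
# thresholding removes mostly noise.
rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 2 * np.pi, 256))
noisy = clean + rng.normal(0, 0.3, clean.shape)
denoised = haar_denoise(noisy, thresh=0.6)  # ~2x the noise std
```

Because near-singular features (sheets, filaments) produce large coefficients across scales, they survive thresholding while Gaussian noise does not, which is why the morphology is preserved better than with Gaussian smoothing.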
Structural Plasticity Denoises Responses and Improves Learning Speed
Spiess, Robin; George, Richard; Cook, Matthew; Diehl, Peter U.
2016-01-01
Despite an abundance of computational models for learning of synaptic weights, there has been relatively little research on structural plasticity, i.e., the creation and elimination of synapses. In particular, it is not clear how structural plasticity works in concert with spike-timing-dependent plasticity (STDP) and what advantages their combination offers. Here we present a fairly large-scale functional model that uses leaky integrate-and-fire neurons, STDP, homeostasis, recurrent connections, and structural plasticity to learn the input encoding, the relation between inputs, and to infer missing inputs. Using this model, we compare the error and the amount of noise in the network's responses with and without structural plasticity and the influence of structural plasticity on the learning speed of the network. Using structural plasticity during learning shows good results for learning the representation of input values, i.e., structural plasticity strongly reduces the noise of the response by preventing spikes with a high error. For inferring missing inputs we see similar results, with responses having less noise if the network was trained using structural plasticity. Additionally, using structural plasticity with pruning significantly decreased the time to learn weights suitable for inference. Presumably, this is due to the clearer signal containing fewer spikes that misrepresent the desired value. Therefore, this work shows that structural plasticity is not only able to improve upon the performance using STDP without structural plasticity but also speeds up learning. Additionally, it addresses the practical problem of limited resources for connectivity that is not only apparent in the mammalian neocortex but also in computer hardware or neuromorphic (brain-inspired) hardware by efficiently pruning synapses without losing performance. PMID:27660610
Inferring Horizontal Gene Transfer
Lassalle, Florent; Dessimoz, Christophe
2015-01-01
Horizontal or Lateral Gene Transfer (HGT or LGT) is the transmission of portions of genomic DNA between organisms through a process decoupled from vertical inheritance. In the presence of HGT events, different fragments of the genome are the result of different evolutionary histories. This can therefore complicate the investigations of evolutionary relatedness of lineages and species. Also, as HGT can bring into genomes radically different genotypes from distant lineages, or even new genes bearing new functions, it is a major source of phenotypic innovation and a mechanism of niche adaptation. For example, of particular relevance to human health is the lateral transfer of antibiotic resistance and pathogenicity determinants, leading to the emergence of pathogenic lineages [1]. Computational identification of HGT events relies upon the investigation of sequence composition or evolutionary history of genes. Sequence composition-based ("parametric") methods search for deviations from the genomic average, whereas evolutionary history-based ("phylogenetic") approaches identify genes whose evolutionary history significantly differs from that of the host species. The evaluation and benchmarking of HGT inference methods typically rely upon simulated genomes, for which the true history is known. On real data, different methods tend to infer different HGT events, and as a result it can be difficult to ascertain all but simple and clear-cut HGT events. PMID:26020646
How Forgetting Aids Heuristic Inference
ERIC Educational Resources Information Center
Schooler, Lael J.; Hertwig, Ralph
2005-01-01
Some theorists, ranging from W. James (1890) to contemporary psychologists, have argued that forgetting is the key to proper functioning of memory. The authors elaborate on the notion of beneficial forgetting by proposing that loss of information aids inference heuristics that exploit mnemonic information. To this end, the authors bring together 2…
Vibration Sensor Data Denoising Using a Time-Frequency Manifold for Machinery Fault Diagnosis
He, Qingbo; Wang, Xiangxiang; Zhou, Qiang
2014-01-01
Vibration sensor data from a mechanical system are often associated with important measurement information useful for machinery fault diagnosis. However, in practice the existence of background noise makes it difficult to identify the fault signature from the sensing data. This paper introduces the time-frequency manifold (TFM) concept into sensor data denoising and proposes a novel denoising method for reliable machinery fault diagnosis. The TFM signature reflects the intrinsic time-frequency structure of a non-stationary signal. The proposed method intends to realize data denoising by synthesizing the TFM using time-frequency synthesis and phase space reconstruction (PSR) synthesis. Due to the merits of the TFM in noise suppression and resolution enhancement, the denoised signal would have satisfactory denoising effects, as well as inherent time-frequency structure keeping. Moreover, this paper presents a clustering-based statistical parameter to evaluate the proposed method, and also presents a new diagnostic approach, called frequency probability time series (FPTS) spectral analysis, to show its effectiveness in fault diagnosis. The proposed TFM-based data denoising method has been employed to deal with a set of vibration sensor data from defective bearings, and the results verify that for machinery fault diagnosis the method is superior to two traditional denoising methods. PMID:24379045
G. S., Vijay; H. S., Kumar; Pai P., Srinivasa; N. S., Sriram; Rao, Raj B. K. N.
2012-01-01
Wavelet-based denoising has proven its ability to denoise bearing vibration signals by improving the signal-to-noise ratio (SNR) and reducing the root-mean-square error (RMSE). In this paper, seven wavelet-based denoising schemes have been evaluated based on the performance of the Artificial Neural Network (ANN) and the Support Vector Machine (SVM) for bearing condition classification. The work consists of two parts. In the first part, a synthetic signal simulating the defective bearing vibration signal with Gaussian noise was subjected to these denoising schemes, and the best scheme based on the SNR and the RMSE was identified. In the second part, the vibration signals collected from a customized Rolling Element Bearing (REB) test rig for four bearing conditions were subjected to these denoising schemes. Several time and frequency domain features were extracted from the denoised signals, out of which a few sensitive features were selected using the Fisher's Criterion (FC). The extracted features were used to train and test the ANN and the SVM. The best denoising scheme identified, based on the classification performance of the ANN and the SVM, was found to be the same as the one obtained using the synthetic signal. PMID:23213323
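The two ranking metrics used above are standard and easy to state precisely; a minimal sketch (with a moving-average stand-in for an actual wavelet denoising scheme) is:

```python
import numpy as np

def snr_db(clean, denoised):
    """SNR in dB: clean-signal power over residual-noise power."""
    return 10 * np.log10(np.sum(clean ** 2) / np.sum((denoised - clean) ** 2))

def rmse(clean, denoised):
    """Root-mean-square error between clean and denoised signals."""
    return np.sqrt(np.mean((denoised - clean) ** 2))

# Synthetic 'bearing-like' test: compare a raw noisy signal against a crude
# smoother (a placeholder for one of the seven wavelet schemes).
rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 2 * np.pi, 1000))
noisy = clean + rng.normal(0, 0.2, clean.shape)
smoothed = np.convolve(noisy, np.ones(9) / 9, mode="same")
```

A denoising scheme is ranked better when its SNR is higher and its RMSE lower against the known synthetic signal, which is exactly how the first part of the study picks its winner.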
Local Spectral Component Decomposition for Multi-Channel Image Denoising.
Rizkinia, Mia; Baba, Tatsuya; Shirai, Keiichiro; Okuda, Masahiro
2016-07-01
We propose a method for local spectral component decomposition based on the line feature of the local distribution. Our aim is to reduce noise on multi-channel images by exploiting the linear correlation in the spectral domain of a local region. We first calculate a linear feature over the spectral components of an M-channel image, which we call the spectral line, and then, using the line, we decompose the image into three components: a single M-channel image and two gray-scale images. By virtue of the decomposition, the noise is concentrated on the two images, and thus our algorithm needs to denoise only the two gray-scale images, regardless of the number of channels. As a result, image deterioration due to the imbalance of the spectral component correlation can be avoided. Experiments show that our method improves image quality with less deterioration while preserving vivid contrast. Our method is especially effective for hyperspectral images. The experimental results demonstrate that our proposed method can compete with the other state-of-the-art denoising methods.
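The "spectral line" of a local region can be sketched as the first principal direction of the pixels' M-channel values; each pixel spectrum then splits into a coefficient along the line plus a residual. This is a simplified illustration of the idea, not the paper's full three-component decomposition:

```python
import numpy as np

def spectral_line(patch):
    """Fit the spectral line of a local region: mean spectrum plus the first
    principal direction of the (pixels x channels) matrix. Each pixel is
    decomposed as mean + coeff * direction + residual."""
    X = patch.reshape(-1, patch.shape[-1])          # pixels x channels
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    direction = vt[0]                               # the spectral line
    coeff = (X - mean) @ direction                  # gray-scale coefficient image
    residual = (X - mean) - np.outer(coeff, direction)
    return mean, direction, coeff, residual

# If all spectra in the region lie exactly on one line, the residual vanishes:
rng = np.random.default_rng(0)
base = rng.random((6, 6))
patch = base[:, :, None] * np.array([1.0, 2.0, 3.0])  # perfectly correlated channels
mean, direction, coeff, residual = spectral_line(patch)
print(np.abs(residual).max() < 1e-8)  # True
```

When spectral correlation is strong, most of the noise ends up in the low-dimensional coefficient and residual terms, so only gray-scale data need denoising, as the abstract describes.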
A nonlinear Bayesian filtering framework for ECG denoising.
Sameni, Reza; Shamsollahi, Mohammad B; Jutten, Christian; Clifford, Gari D
2007-12-01
In this paper, a nonlinear Bayesian filtering framework is proposed for the filtering of single channel noisy electrocardiogram (ECG) recordings. The necessary dynamic models of the ECG are based on a modified nonlinear dynamic model, previously suggested for the generation of a highly realistic synthetic ECG. A modified version of this model is used in several Bayesian filters, including the Extended Kalman Filter, Extended Kalman Smoother, and Unscented Kalman Filter. An automatic parameter selection method is also introduced, to facilitate the adaptation of the model parameters to a vast variety of ECGs. This approach is evaluated on several normal ECGs, by artificially adding white and colored Gaussian noises to visually inspected clean ECG recordings, and studying the SNR and morphology of the filter outputs. The results of the study demonstrate superior results compared with conventional ECG denoising approaches such as bandpass filtering, adaptive filtering, and wavelet denoising, over a wide range of ECG SNRs. The method is also successfully evaluated on real nonstationary muscle artifact. This method may therefore serve as an effective framework for the model-based filtering of noisy ECG recordings.
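The Extended Kalman Filter above linearizes a nonlinear ECG dynamic model; the underlying predict/update mechanics can be shown with a plain scalar Kalman filter on a random-walk state (a deliberate linear simplification, not the paper's model):

```python
import numpy as np

def kalman_denoise(z, q=1e-3, r=0.04):
    """Scalar Kalman filter with a random-walk state model.
    q: process-noise variance (how fast the true signal may drift),
    r: measurement-noise variance. Returns the filtered estimates."""
    z = np.asarray(z, dtype=float)
    x, p = z[0], 1.0                  # initial state estimate and covariance
    out = np.empty_like(z)
    for k, zk in enumerate(z):
        p = p + q                     # predict: covariance grows by q
        K = p / (p + r)               # Kalman gain
        x = x + K * (zk - x)          # update with the innovation
        p = (1 - K) * p
        out[k] = x
    return out
```

With q << r the filter averages over roughly sqrt(r/q) samples, which is the linear analogue of how the EKF trades model trust against measurement trust in the ECG setting.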
Denoising Stimulated Raman Spectroscopic Images by Total Variation Minimization
Liao, Chien-Sheng; Choi, Joon Hee; Zhang, Delong; Chan, Stanley H.; Cheng, Ji-Xin
2016-01-01
High-speed coherent Raman scattering imaging is opening a new avenue to unveiling the cellular machinery by visualizing the spatio-temporal dynamics of target molecules or intracellular organelles. By extracting signals from the laser at MHz modulation frequency, current stimulated Raman scattering (SRS) microscopy has reached shot noise limited detection sensitivity. The laser-based local oscillator in SRS microscopy not only generates high levels of signal, but also delivers a large shot noise which degrades image quality and spectral fidelity. Here, we demonstrate a denoising algorithm that removes the noise in both spatial and spectral domains by total variation minimization. The signal-to-noise ratio of SRS spectroscopic images was improved by up to 57 times for diluted dimethyl sulfoxide solutions and by 15 times for biological tissues. Weak Raman peaks of target molecules originally buried in the noise were unraveled. Coupling the denoising algorithm with multivariate curve resolution allowed discrimination of fat stores from protein-rich organelles in C. elegans. Together, our method significantly improved detection sensitivity without frame averaging, which can be useful for in vivo spectroscopic imaging. PMID:26955400
An efficient approach for feature-preserving mesh denoising
NASA Astrophysics Data System (ADS)
Lu, Xuequan; Liu, Xiaohong; Deng, Zhigang; Chen, Wenzhi
2017-03-01
With the growing availability of various optical and laser scanners, it is easy to capture different kinds of mesh models, which are inevitably corrupted with noise. Although many mesh denoising methods proposed in recent years can produce encouraging results, most of them still suffer from low computational efficiency. In this paper, we propose a highly efficient approach for mesh denoising while preserving geometric features. Specifically, our method consists of three steps: initial vertex filtering, normal estimation, and vertex update. At the initial vertex filtering step, we introduce a fast iterative vertex filter to substantially reduce noise interference. With the initially filtered mesh from the above step, we then estimate face and vertex normals, using an unstandardized bilateral filter to efficiently smooth face normals and an efficient scheme to estimate vertex normals from the filtered face normals. Finally, at the vertex update step, by utilizing both the filtered face normals and estimated vertex normals obtained from the previous step, we propose a novel iterative vertex update algorithm to efficiently update vertex positions. The qualitative and quantitative comparisons show that our method can outperform the selected state-of-the-art methods, particularly in computational efficiency (up to about 32 times faster).
Optimization of dynamic measurement of receptor kinetics by wavelet denoising.
Alpert, Nathaniel M; Reilhac, Anthonin; Chio, Tat C; Selesnick, Ivan
2006-04-01
The most important technical limitation affecting dynamic measurements with PET is low signal-to-noise ratio (SNR). Several reports have suggested that wavelet processing of receptor kinetic data in the human brain can improve the SNR of parametric images of binding potential (BP). However, it is difficult to fully assess these reports because objective standards have not been developed to measure the tradeoff between accuracy (e.g. degradation of resolution) and precision. This paper employs a realistic simulation method that includes all major elements affecting image formation. The simulation was used to derive an ensemble of dynamic PET ligand (11C-raclopride) experiments that was subjected to wavelet processing. A method for optimizing wavelet denoising is presented and used to analyze the simulated experiments. Using optimized wavelet denoising, SNR of the four-dimensional PET data increased by about a factor of two and SNR of three-dimensional BP maps increased by about a factor of 1.5. Analysis of the difference between the processed and unprocessed means for the 4D concentration data showed that more than 80% of voxels in the ensemble mean of the wavelet processed data deviated by less than 3%. These results show that a 1.5x increase in SNR can be achieved with little degradation of resolution. This corresponds to injecting about twice the radioactivity, a maneuver that is not possible in human studies without saturating the PET camera and/or exposing the subject to more than permitted radioactivity.
Noise distribution and denoising of current density images.
Beheshti, Mohammadali; Foomany, Farbod H; Magtibay, Karl; Jaffray, David A; Krishnan, Sridhar; Nanthakumar, Kumaraswamy; Umapathy, Karthikeyan
2015-04-01
Current density imaging (CDI) is a magnetic resonance (MR) imaging technique that could be used to study current pathways inside the tissue. The current distribution is measured indirectly as phase changes. The inherent noise in the MR imaging technique degrades the accuracy of phase measurements leading to imprecise current variations. The outcome can be affected significantly, especially at a low signal-to-noise ratio (SNR). We have shown the residual noise distribution of the phase to be Gaussian-like and the noise in CDI images approximated as a Gaussian. This finding matches experimental results. We further investigated this finding by performing comparative analysis with denoising techniques, using two CDI datasets with two different currents (20 and 45 mA). We found that the block-matching and three-dimensional (BM3D) technique outperforms other techniques when applied on current density ([Formula: see text]). The minimum gain in noise power by BM3D applied to [Formula: see text] compared with the next best technique in the analysis was found to be around 2 dB per pixel. We characterize the noise profile in CDI images and provide insights on the performance of different denoising techniques when applied at two different stages of current density reconstruction.
Bernier, Michaël; Chamberland, Maxime; Houde, Jean-Christophe; Descoteaux, Maxime; Whittingstall, Kevin
2014-01-01
In recent years, there has been ever-increasing interest in combining functional magnetic resonance imaging (fMRI) and diffusion magnetic resonance imaging (dMRI) for better understanding the link between cortical activity and connectivity, respectively. However, it is challenging to detect and validate fMRI activity in key sub-cortical areas such as the thalamus, given that they are prone to susceptibility artifacts due to the partial volume effects (PVE) of surrounding tissues (GM/WM interface). This is especially true on relatively low-field clinical MR systems (e.g., 1.5 T). We propose to overcome this limitation by using a spatial denoising technique used in structural MRI and more recently in diffusion MRI called non-local means (NLM) denoising, which uses a patch-based approach to suppress the noise locally. To test this, we measured fMRI in 20 healthy subjects performing three block-based tasks: eyes open/closed (EOC) and left/right finger tapping (FTL, FTR). Overall, we found that NLM yielded more thalamic activity compared to traditional denoising methods. In order to validate our pipeline, we also investigated known structural connectivity going through the thalamus using HARDI tractography: the optic radiations, related to the EOC task, and the cortico-spinal tract (CST) for FTL and FTR. To do so, we reconstructed the tracts using functionally based thalamic and cortical ROIs to initiate tractography seeds in a two-level coarse-to-fine fashion. We applied this method at the single-subject level, which allowed us to see the structural connections underlying fMRI thalamic activity. In summary, we propose a new fMRI processing pipeline which uses a recent spatial denoising technique (NLM) to successfully detect sub-cortical activity, validated using an advanced dMRI seeding strategy in single subjects at 1.5 T.
Iterative methods for total variation denoising
Vogel, C.R.; Oman, M.E.
1996-01-01
Total variation (TV) methods are very effective for recovering "blocky," possibly discontinuous, images from noisy data. A fixed point algorithm for minimizing a TV-penalized least squares functional is presented and compared with existing minimization schemes. A variant of the cell-centered finite difference multigrid method of Ewing and Shen is implemented for solving the (large, sparse) linear subproblems. Numerical results are presented for one- and two-dimensional examples; in particular, the algorithm is applied to actual data obtained from confocal microscopy.
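A minimal 1-D version of the fixed-point (lagged diffusivity) iteration for a TV-penalized least-squares functional might look like the sketch below; the dense linear solve stands in for the multigrid solver used in the paper, and `lam` and `beta` are illustrative values:

```python
import numpy as np

def tv_denoise_1d(f, lam=0.5, beta=1e-6, iters=30):
    """Fixed-point (lagged diffusivity) iteration for
    min_u 0.5*||u - f||^2 + lam * sum sqrt((D u)^2 + beta),
    where D is the forward-difference operator and beta smooths |.| at 0."""
    n = f.size
    D = np.diff(np.eye(n), axis=0)              # (n-1) x n difference matrix
    u = f.copy()
    for _ in range(iters):
        w = 1.0 / np.sqrt((D @ u) ** 2 + beta)  # lagged diffusivity weights
        A = np.eye(n) + lam * D.T @ (w[:, None] * D)
        u = np.linalg.solve(A, f)               # linearized subproblem
    return u

# "blocky" piecewise-constant signal, the case TV handles well
rng = np.random.default_rng(1)
clean = np.concatenate([np.zeros(40), np.ones(40), 0.3 * np.ones(40)])
noisy = clean + 0.1 * rng.standard_normal(clean.size)
den = tv_denoise_1d(noisy)
```

Each iteration freezes the weights `w` at the current iterate, turning the nonlinear Euler-Lagrange equation into a sparse SPD linear system, which is where a multigrid solver becomes attractive for large 2-D problems.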
Statistical inference and string theory
NASA Astrophysics Data System (ADS)
Heckman, Jonathan J.
2015-09-01
In this paper, we expose some surprising connections between string theory and statistical inference. We consider a large collective of agents sweeping out a family of nearby statistical models for an M-dimensional manifold of statistical fitting parameters. When the agents making nearby inferences align along a d-dimensional grid, we find that the pooled probability that the collective reaches a correct inference is the partition function of a nonlinear sigma model in d dimensions. Stability under perturbations to the original inference scheme requires the agents of the collective to distribute along two dimensions. Conformal invariance of the sigma model corresponds to the condition of a stable inference scheme, directly leading to the Einstein field equations for classical gravity. By summing over all possible arrangements of the agents in the collective, we reach a string theory. We also use this perspective to quantify how much an observer can hope to learn about the internal geometry of a superstring compactification. Finally, we present some brief speculative remarks on applications to the AdS/CFT correspondence and Lorentzian signature space-times.
Automatic Denoising and Unmixing in Hyperspectral Image Processing
NASA Astrophysics Data System (ADS)
Peng, Honghong
This thesis addresses two important aspects of hyperspectral image processing: automatic hyperspectral image denoising and unmixing. The first part of this thesis is devoted to a novel automatic optimized vector bilateral filter denoising algorithm, while the remainder concerns nonnegative matrix factorization with deterministic annealing for unsupervised unmixing in remote sensing hyperspectral images. The need for automatic hyperspectral image processing has been promoted by the development of potent hyperspectral systems, with hundreds of narrow contiguous bands spanning the visible to the long wave infrared range of the electromagnetic spectrum. Due to the large volume of raw data generated by such sensors, automatic processing in the hyperspectral image processing chain is preferred, to minimize human workload and achieve optimal results. Two of the most researched processing steps in this automatic effort are hyperspectral image denoising, an important preprocessing step for almost all remote sensing tasks, and unsupervised unmixing, which decomposes the pixel spectra into a collection of endmember spectral signatures and their corresponding abundance fractions. Two new methodologies are introduced in this thesis to tackle the automatic processing problems described above. Vector bilateral filtering has been shown to provide a good tradeoff between noise removal and edge degradation when applied to multispectral/hyperspectral image denoising. It has also been demonstrated to provide dynamic range enhancement of bands that have impaired signal-to-noise ratios. Typical vector bilateral filtering usage does not employ parameters that have been determined to satisfy optimality criteria. This thesis introduces an approach for selecting the parameters of a vector bilateral filter through an optimization procedure rather than by ad hoc means. The approach is based on posing the filtering problem as one of nonlinear estimation and minimizing the Stein
Application of the dual-tree complex wavelet transform in biomedical signal denoising.
Wang, Fang; Ji, Zhong
2014-01-01
In biomedical signal processing, Gibbs oscillation and severe frequency aliasing may occur when using the traditional discrete wavelet transform (DWT). Herein, a new denoising algorithm based on the dual-tree complex wavelet transform (DTCWT) is presented. Electrocardiogram (ECG) signals and heart sound signals are denoised based on the DTCWT. The results prove that the DTCWT is efficient. The signal-to-noise ratio (SNR) and the mean square error (MSE) are used to compare the denoising effect. Results of the paired samples t-test show that the new method can remove noise more thoroughly and better retain the boundary and texture of the signal.
Comparing resting state fMRI de-noising approaches using multi- and single-echo acquisitions
Sethi, Arjun; Laganà, Maria Marcella; Baglio, Francesca; Baselli, Giuseppe; Kundu, Prantik; Harrison, Neil A.; Cercignani, Mara
2017-01-01
Artifact removal in resting state fMRI (rfMRI) data remains a serious challenge, with even subtle head motion undermining reliability and reproducibility. Here we compared some of the most popular single-echo de-noising methods—regression of Motion parameters, White matter and Cerebrospinal fluid signals (MWC method), FMRIB’s ICA-based X-noiseifier (FIX) and ICA-based Automatic Removal Of Motion Artifacts (ICA-AROMA)—with a multi-echo approach (ME-ICA) that exploits the linear dependency of BOLD on the echo time. Data were acquired using a clinical scanner and included 30 young, healthy participants (minimal head motion) and 30 Attention Deficit Hyperactivity Disorder patients (greater head motion). De-noising effectiveness was assessed in terms of data quality after each cleanup procedure, ability to uncouple BOLD signal and motion and preservation of default mode network (DMN) functional connectivity. Most cleaning methods showed a positive impact on data quality. However, based on the investigated metrics, ME-ICA was the most robust. It minimized the impact of motion on FC even for high motion participants and preserved DMN functional connectivity structure. The high-quality results obtained using ME-ICA suggest that using a multi-echo EPI sequence, reliable rfMRI data can be obtained in a clinical setting. PMID:28323821
Multiple Instance Fuzzy Inference
2015-12-02
A novel fuzzy learning framework that employs fuzzy inference to solve the problem of multiple instance learning (MIL) is presented. The framework introduces a ... or learned from data. In multiple instance problems, the training data is ambiguously labeled. Instances are grouped into bags, labels of bags are ...
Self-adaptive image denoising based on bidimensional empirical mode decomposition (BEMD).
Guo, Song; Luan, Fangjun; Song, Xiaoyu; Li, Changyou
2014-01-01
To better analyze images corrupted with Gaussian white noise, it is necessary to remove the noise before image processing. In this paper, we propose a self-adaptive image denoising method based on bidimensional empirical mode decomposition (BEMD). Firstly, a normal probability plot confirms that the 2D-IMFs of Gaussian white noise images decomposed by BEMD follow the normal distribution. Secondly, an energy estimation equation of the ith 2D-IMF (i = 2, 3, 4, ...) is proposed, referencing that of the ith IMF (i = 2, 3, 4, ...) obtained by empirical mode decomposition (EMD). Thirdly, the self-adaptive threshold of each 2D-IMF is calculated. Finally, the algorithm of the self-adaptive image denoising method based on BEMD is described. From the practical perspective, the method is applied to denoising magnetic resonance images (MRI) of the brain, and the results show it has a better denoising performance compared with other methods.
NASA Astrophysics Data System (ADS)
Ren, Zhong; Liu, Guodong
2016-11-01
To improve the denoising of glucose photoacoustic signals, a modified wavelet thresholding algorithm combined with shift invariance was used in this paper. Simulation experiments were performed to verify the feasibility of the modified wavelet shift-invariance threshold denoising algorithm. Results show that its denoising effect is better than that of the other methods, as its signal-to-noise ratio is the largest and its root-mean-square error the smallest. Finally, the modified wavelet shift-invariance threshold denoising was used to remove noise from the photoacoustic signals of glucose aqueous solutions.
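Shift-invariant wavelet thresholding is commonly implemented by cycle spinning: threshold every circular shift of the signal and average the unshifted results. The sketch below uses a one-level Haar transform with a universal soft threshold; it is a generic illustration of the technique, not the paper's specific modified algorithm:

```python
import numpy as np

def haar_soft(x, thr):
    """One-level Haar transform, soft-threshold the detail band, invert.
    Assumes len(x) is even."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)                 # approximation band
    d = (x[0::2] - x[1::2]) / np.sqrt(2)                 # detail band
    d = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)    # soft threshold
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y

def cycle_spin_denoise(x, thr, shifts=8):
    """Translation-invariant denoising: average haar_soft over circular shifts."""
    acc = np.zeros_like(x)
    for s in range(shifts):
        acc += np.roll(haar_soft(np.roll(x, s), thr), -s)
    return acc / shifts

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 256)
clean = np.sign(np.sin(2 * np.pi * 2 * t))               # blocky test signal
noisy = clean + 0.3 * rng.standard_normal(256)
sigma = 0.3                                              # known noise level here
den = cycle_spin_denoise(noisy, thr=sigma * np.sqrt(2 * np.log(256)))
```

Averaging over shifts suppresses the blocking artifacts that a decimated wavelet transform produces near discontinuities, which is the point of adding shift invariance.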
NASA Astrophysics Data System (ADS)
Thangaswamy, Sree Sharmila; Kadarkarai, Ramar; Thangaswamy, Sree Renga Raja
2013-01-01
Satellite images are corrupted by noise during image acquisition and transmission. Removing noise by attenuating the high-frequency image components removes important details as well. In order to retain the useful information, improve the visual appearance, and accurately classify an image, an effective denoising technique is required. We discuss three important steps for improving accuracy in a noisy image: image denoising, resolution enhancement, and classification. An effective denoising technique, hybrid directional lifting, is proposed to retain the important details of the images and improve visual appearance. A discrete wavelet transform based interpolation is developed for enhancing the resolution of the denoised image. The image is then classified using a support vector machine, which is superior to other neural network classifiers. Quantitative performance measures such as peak signal-to-noise ratio and classification accuracy show the significance of the proposed techniques.
Dual tree complex wavelet transform based denoising of optical microscopy images.
Bal, Ufuk
2012-12-01
Photon shot noise is the main noise source of optical microscopy images and can be modeled by a Poisson process. Several discrete wavelet transform based methods have been proposed in the literature for denoising images corrupted by Poisson noise. However, the discrete wavelet transform (DWT) has disadvantages such as shift variance, aliasing, and lack of directional selectivity. To overcome these problems, a dual tree complex wavelet transform is used in our proposed denoising algorithm. Our denoising algorithm is based on the assumption that for the Poisson noise case threshold values for wavelet coefficients can be estimated from the approximation coefficients. Our proposed method was compared with one of the state of the art denoising algorithms. Better results were obtained by using the proposed algorithm in terms of image quality metrics. Furthermore, the contrast enhancement effect of the proposed method on collagen fiber images is examined. Our method allows fast and efficient enhancement of images obtained under low light intensity conditions.
Denoising method of heart sound signals based on self-construct heart sound wavelet
NASA Astrophysics Data System (ADS)
Cheng, Xiefeng; Zhang, Zheng
2014-08-01
In the field of heart sound signal denoising, the wavelet transform has become one of the most effective measures. The wavelet basis is usually selected from the well-known orthogonal db series or biorthogonal bior series wavelets. In this paper we present a self-constructed wavelet basis which is suitable for heart sound denoising, and analyze its construction method and features in detail according to the characteristics of heart sounds and the evaluation criteria of signal denoising. The experimental results show that the heart sound wavelet can effectively filter out the noise of heart sound signals and preserve the main characteristics of the signal. Compared with traditional wavelets, it has a higher signal-to-noise ratio, lower mean square error and better denoising effect.
Potential pitfalls when denoising resting state fMRI data using nuisance regression.
Bright, Molly G; Tench, Christopher R; Murphy, Kevin
2016-12-23
In resting state fMRI, it is necessary to remove signal variance associated with noise sources, leaving cleaned fMRI time-series that more accurately reflect the underlying intrinsic brain fluctuations of interest. This is commonly achieved through nuisance regression, in which a noise model of head motion and physiological processes is fit to the fMRI data in a General Linear Model, and the "cleaned" residuals of this fit are used in further analysis. We examine the statistical assumptions and requirements of the General Linear Model, and whether these are met during nuisance regression of resting state fMRI data. Using toy examples and real data we show how pre-whitening, temporal filtering and temporal shifting of regressors impact model fit. Based on our own observations, existing literature, and statistical theory, we make the following recommendations when employing nuisance regression: pre-whitening should be applied to achieve valid statistical inference of the noise model fit parameters; temporal filtering should be incorporated into the noise model to best account for changes in degrees of freedom; temporal shifting of regressors, although merited, should be achieved via optimisation and validation of a single temporal shift. We encourage all readers to make simple, practical changes to their fMRI denoising pipeline, and to regularly assess the appropriateness of the noise model used. By negotiating the potential pitfalls described in this paper, and by clearly reporting the details of nuisance regression in future manuscripts, we hope that the field will achieve more accurate and precise noise models for cleaning the resting state fMRI time-series.
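The basic nuisance-regression step scrutinized above amounts to an ordinary least-squares fit of the noise regressors followed by taking the residuals. A minimal sketch (without the pre-whitening and temporal filtering the paper recommends; the six-column motion matrix and its weights are simulated stand-ins):

```python
import numpy as np

def nuisance_regress(y, nuisance):
    """Regress nuisance time-series out of y via OLS and return the residuals
    (the 'cleaned' signal). Columns of `nuisance` are noise regressors;
    an intercept column is appended automatically."""
    X = np.column_stack([np.ones(len(y)), nuisance])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

rng = np.random.default_rng(3)
n = 200
motion = rng.standard_normal((n, 6))            # e.g. six head-motion parameters
signal = np.sin(np.linspace(0, 8 * np.pi, n))   # intrinsic fluctuation of interest
weights = np.array([0.5, -0.3, 0.2, 0.1, -0.4, 0.25])
y = signal + motion @ weights + 0.1 * rng.standard_normal(n)
clean = nuisance_regress(y, motion)
```

By construction the residuals are exactly orthogonal to every nuisance column, which is the sense in which the motion-coupled variance has been "removed"; the paper's point is that valid inference on the fit itself additionally requires pre-whitening of the autocorrelated fMRI noise.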
Exploiting the self-similarity in ERP images by nonlocal means for single-trial denoising.
Strauss, Daniel J; Teuber, Tanja; Steidl, Gabriele; Corona-Strauss, Farah I
2013-07-01
Event related potentials (ERPs) represent a noninvasive and widely available means to analyze neural correlates of sensory and cognitive processing. Recent developments in neural and cognitive engineering proposed completely new application fields of this well-established measurement technique when using an advanced single-trial processing. We have recently shown that 2-D diffusion filtering methods from image processing can be used for the denoising of ERP single-trials in matrix representations, also called ERP images. In contrast to conventional 1-D transient ERP denoising techniques, the 2-D restoration of ERP images allows for an integration of regularities over multiple stimulations into the denoising process. Advanced anisotropic image restoration methods may require directional information for the ERP denoising process. This is especially true if there is a lack of a priori knowledge about possible traces in ERP images. However, due to the use of event-related experimental paradigms, ERP images are characterized by a high degree of self-similarity over the individual trials. In this paper, we propose the simple and easy to apply nonlocal means method for ERP image denoising in order to exploit this self-similarity rather than focusing on the edge-based extraction of directional information. Using measured and simulated ERP data, we compare our method to conventional approaches in ERP denoising. It is concluded that the self-similarity in ERP images can be exploited for single-trial ERP denoising by the proposed approach. This method might be promising for a variety of evoked and event-related potential applications, including nonstationary paradigms such as changing exogenous stimulus characteristics or endogenous states during the experiment. As presented, the proposed approach is for the a posteriori denoising of single-trial sequences.
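The nonlocal means principle (averaging pixels whose surrounding patches look similar) can be sketched naively as follows. Patch size, search window, and the smoothing parameter `h` are illustrative choices, and the toy "ERP image" stacks identical trial waveforms row-wise to mimic the self-similarity the method exploits:

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.4):
    """Naive nonlocal means: each pixel becomes a weighted average of nearby
    pixels, with weights exp(-patch_distance / h^2) from patch similarity."""
    p, s = patch // 2, search // 2
    pad = np.pad(img, p + s, mode="reflect")
    out = np.zeros_like(img)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + p + s, j + p + s
            ref = pad[ci - p:ci + p + 1, cj - p:cj + p + 1]
            wsum = vsum = 0.0
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    ni, nj = ci + di, cj + dj
                    cand = pad[ni - p:ni + p + 1, nj - p:nj + p + 1]
                    w = np.exp(-np.mean((ref - cand) ** 2) / h ** 2)
                    wsum += w
                    vsum += w * pad[ni, nj]
            out[i, j] = vsum / wsum
    return out

# toy "ERP image": one trial waveform repeated over 20 trials, plus noise
rng = np.random.default_rng(4)
trial = np.sin(np.linspace(0, 2 * np.pi, 32))
clean = np.tile(trial, (20, 1))
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
den = nlm_denoise(noisy)
```

Because matching patches recur across trials (rows), the weighted average pools information over repetitions without requiring any edge or direction estimation, which is exactly the argument made in the abstract.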
Nonlinear denoising of transient signals with application to event-related potentials
NASA Astrophysics Data System (ADS)
Effern, A.; Lehnertz, K.; Schreiber, T.; Grunwald, T.; David, P.; Elger, C. E.
2000-06-01
We present a new wavelet-based method for the denoising of event-related potentials (ERPs), employing techniques recently developed for the paradigm of deterministic chaotic systems. The denoising scheme has been constructed to be appropriate for short and transient time sequences using circular state space embedding. Its effectiveness was successfully tested on simulated signals as well as on ERPs recorded from within a human brain. The method enables the study of individual ERPs against strong ongoing brain electrical activity.
MRI noise estimation and denoising using non-local PCA.
Manjón, José V; Coupé, Pierrick; Buades, Antonio
2015-05-01
This paper proposes a novel method for MRI denoising that exploits both the sparseness and self-similarity properties of MR images. The proposed method is a two-stage approach that first filters the noisy image using a non-local PCA thresholding strategy, automatically estimating the local noise level present in the image, and second uses this filtered image as a guide image within a rotationally invariant non-local means filter. Because the method internally estimates the amount of local noise present in the images, it can be applied automatically to images with spatially varying noise levels, and it also corrects the Rician noise induced bias locally. The proposed approach has been compared with related state-of-the-art methods, showing competitive results in all the studied cases.
Denoising of diffusion MRI using random matrix theory.
Veraart, Jelle; Novikov, Dmitry S; Christiaens, Daan; Ades-Aron, Benjamin; Sijbers, Jan; Fieremans, Els
2016-11-15
We introduce and evaluate a post-processing technique for fast denoising of diffusion-weighted MR images. By exploiting the intrinsic redundancy in diffusion MRI using universal properties of the eigenspectrum of random covariance matrices, we remove noise-only principal components, thereby enabling signal-to-noise ratio enhancements. This yields parameter maps of improved quality for visual, quantitative, and statistical interpretation. By studying statistics of residuals, we demonstrate that the technique suppresses local signal fluctuations that solely originate from thermal noise rather than from other sources such as anatomical detail. Furthermore, we achieve improved precision in the estimation of diffusion parameters and fiber orientations in the human brain without compromising the accuracy and spatial resolution.
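The core idea, discarding principal components whose eigenvalues are consistent with pure noise under the Marchenko-Pastur law, can be sketched as follows. This is a simplified version that takes the noise level `sigma` as given, whereas the paper estimates it from the eigenspectrum itself:

```python
import numpy as np

def mp_pca_denoise(X, sigma):
    """Simplified MP-PCA: X is an N x M matrix (N voxels/observations,
    M diffusion channels). Components whose covariance eigenvalues fall
    below the Marchenko-Pastur upper bulk edge for i.i.d. noise are treated
    as noise-only and discarded."""
    N, M = X.shape
    mean = X.mean(axis=0)
    Xc = X - mean
    C = Xc.T @ Xc / N                              # M x M sample covariance
    evals, evecs = np.linalg.eigh(C)
    edge = sigma ** 2 * (1 + np.sqrt(M / N)) ** 2  # MP upper bulk edge
    keep = evals > edge                            # signal-carrying components
    P = evecs[:, keep]
    return mean + Xc @ P @ P.T, int(keep.sum())

rng = np.random.default_rng(5)
N, M = 500, 20
# rank-2 signal across M "channels" plus i.i.d. Gaussian noise
U = rng.standard_normal((N, 2))
V = rng.standard_normal((2, M))
clean = U @ V
noisy = clean + 0.5 * rng.standard_normal((N, M))
den, n_sig = mp_pca_denoise(noisy, sigma=0.5)
```

The attraction of the random-matrix criterion is that it is parameter-free once the noise level is known: the cutoff between signal and noise eigenvalues follows from the matrix dimensions alone.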
Simultaneous Denoising, Deconvolution, and Demixing of Calcium Imaging Data
Pnevmatikakis, Eftychios A.; Soudry, Daniel; Gao, Yuanjun; Machado, Timothy A.; Merel, Josh; Pfau, David; Reardon, Thomas; Mu, Yu; Lacefield, Clay; Yang, Weijian; Ahrens, Misha; Bruno, Randy; Jessell, Thomas M.; Peterka, Darcy S.; Yuste, Rafael; Paninski, Liam
2016-01-01
We present a modular approach for analyzing calcium imaging recordings of large neuronal ensembles. Our goal is to simultaneously identify the locations of the neurons, demix spatially overlapping components, and denoise and deconvolve the spiking activity from the slow dynamics of the calcium indicator. Our approach relies on a constrained nonnegative matrix factorization that expresses the spatiotemporal fluorescence activity as the product of a spatial matrix that encodes the spatial footprint of each neuron in the optical field and a temporal matrix that characterizes the calcium concentration of each neuron over time. This framework is combined with a novel constrained deconvolution approach that extracts estimates of neural activity from fluorescence traces, to create a spatiotemporal processing algorithm that requires minimal parameter tuning. We demonstrate the general applicability of our method by applying it to in vitro and in vivo multineuronal imaging data, whole-brain light-sheet imaging data, and dendritic imaging data. PMID:26774160
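The factorization at the heart of this approach can be illustrated with plain multiplicative-update NMF on a toy "movie"; the paper's actual algorithm adds spatial/temporal constraints and a calcium deconvolution step on top of this basic decomposition:

```python
import numpy as np

def nmf(Y, k, iters=200, eps=1e-9, seed=0):
    """Plain multiplicative-update NMF: Y (pixels x frames) ~ A @ C with
    A (spatial footprints) and C (temporal traces) both nonnegative."""
    rng = np.random.default_rng(seed)
    P, T = Y.shape
    A = rng.random((P, k)) + eps
    C = rng.random((k, T)) + eps
    for _ in range(iters):
        C *= (A.T @ Y) / (A.T @ A @ C + eps)   # update traces
        A *= (Y @ C.T) / (A @ C @ C.T + eps)   # update footprints
    return A, C

# toy movie: two non-overlapping "neurons" with distinct time courses
rng = np.random.default_rng(6)
P, T = 100, 300
footprints = np.zeros((P, 2))
footprints[:30, 0] = 1.0        # neuron 1 occupies pixels 0..29
footprints[60:, 1] = 1.0        # neuron 2 occupies pixels 60..99
traces = np.abs(rng.standard_normal((2, T)))
Y = footprints @ traces + 0.05 * rng.random((P, T))
A, C = nmf(Y, k=2)
recon = A @ C
```

The nonnegativity constraint is what makes the recovered columns of `A` interpretable as spatial footprints and the rows of `C` as fluorescence time courses; the constrained version in the paper further forces footprints to be localized and traces to follow calcium-indicator dynamics.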
Quadtree structured image approximation for denoising and interpolation.
Scholefield, Adam; Dragotti, Pier Luigi
2014-03-01
The success of many image restoration algorithms is often due to their ability to sparsely describe the original signal. Shukla proposed a compression algorithm, based on a sparse quadtree decomposition model, which could optimally represent piecewise polynomial images. In this paper, we adapt this model to image restoration by changing the rate-distortion penalty to a description-length penalty. In addition, one of the major drawbacks of this type of approximation is the computational complexity required to find a suitable subspace for each node of the quadtree. We address this issue by searching for a suitable subspace much more efficiently using the mathematics of updating matrix factorisations. Algorithms are developed to tackle denoising and interpolation. Simulation results indicate that we beat state-of-the-art results when the original signal is in the model (e.g., depth images) and are competitive for natural images when the degradation is high.
Adaptively wavelet-based image denoising algorithm with edge preserving
NASA Astrophysics Data System (ADS)
Tan, Yihua; Tian, Jinwen; Liu, Jian
2006-02-01
A new wavelet-based image denoising algorithm, which exploits the edge information hidden in the corrupted image, is presented. Firstly, a Canny-like edge detector identifies the edges in each subband. Secondly, wavelet coefficients in neighboring scales are multiplied to suppress the noise while magnifying the edge information, and the result is used to exclude fake edges; isolated edge pixels are also identified as noise. Unlike thresholding methods, we then use a local window filter in the wavelet domain to remove noise, in which the variance estimation utilizes the edge information. This method is adaptive to local image details and can achieve better performance than state-of-the-art methods.
Obtaining single stimulus evoked potentials with wavelet denoising
NASA Astrophysics Data System (ADS)
Quian Quiroga, R.
2000-11-01
We present a method for the analysis of electroencephalograms (EEG). In particular, small signals due to stimulation, so-called evoked potentials (EPs), have to be detected in the background EEG. This is achieved by using a denoising implementation based on the wavelet decomposition. One recording of visual evoked potentials, and recordings of auditory evoked potentials from four subjects corresponding to different age groups are analyzed. We find higher variability in older individuals. Moreover, since the EPs are identified at the single stimulus level (without need of ensemble averaging), this will allow the calculation of better resolved averages. Since the method is parameter free (i.e. it does not need to be adapted to the particular characteristics of each recording), implementations in clinical settings are imaginable.
A novel de-noising method for B ultrasound images
NASA Astrophysics Data System (ADS)
Tian, Da-Yong; Mo, Jia-qing; Yu, Yin-Feng; Lv, Xiao-Yi; Yu, Xiao; Jia, Zhen-Hong
2015-12-01
B-mode ultrasound is a form of ultrasonic imaging that has become an indispensable diagnostic method in clinical medicine. However, the presence of speckle noise in ultrasound images greatly reduces image quality and interferes with diagnostic accuracy. Therefore, constructing a method that effectively eliminates speckle noise while preserving image details is the goal of current ultrasonic image de-noising research. This paper aims to remove the inherent speckle noise of B ultrasound images. The proposed novel algorithm is based on both wavelet transformation and data fusion of B ultrasound images, achieving a smaller mean squared error (MSE) and greater signal-to-noise ratio (SNR) than other algorithms. The method effectively removes speckle noise from B ultrasound images while preserving detail and edge information, producing better visual results.
Locally adaptive bilateral clustering for universal image denoising
NASA Astrophysics Data System (ADS)
Toh, K. K. V.; Mat Isa, N. A.
2012-12-01
This paper presents a novel and efficient locally adaptive denoising method based on clustering of pixels into regions of similar geometric and radiometric structures. Clustering is performed by adaptively segmenting pixels in the local kernel based on their augmented variational series. Then, noise pixels are restored by selectively considering the radiometric and spatial properties of every pixel in the formed clusters. The proposed method is exceedingly robust in conveying reliable local structural information even in the presence of noise. As a result, the proposed method substantially outperforms other state-of-the-art methods in terms of image restoration and computational cost. We support our claims with ample simulated and real data experiments. The relatively fast runtime from extensive simulations also suggests that the proposed method is suitable for a variety of image-based products — either embedded in image capturing devices or applied as image enhancement software.
Wavelet denoising of multiframe optical coherence tomography data
Mayer, Markus A.; Borsdorf, Anja; Wagner, Martin; Hornegger, Joachim; Mardin, Christian Y.; Tornow, Ralf P.
2012-01-01
We introduce a novel speckle noise reduction algorithm for OCT images. Contrary to present approaches, the algorithm does not rely on simple averaging of multiple image frames or denoising of the final averaged image. Instead it uses wavelet decompositions of the single frames for a local noise and structure estimation. Based on this analysis, the wavelet detail coefficients are weighted, averaged and reconstructed. At a signal-to-noise gain of about 100% we observe only a minor sharpness decrease, as measured by a full-width-half-maximum reduction of 10.5%. While a similar signal-to-noise gain would require averaging of 29 frames, we achieve this result using only 8 frames as input to the algorithm. A possible application of the proposed algorithm is preprocessing in retinal structure segmentation algorithms, to allow a better differentiation between real tissue information and unwanted speckle noise. PMID:22435103
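The coefficient-weighting idea can be sketched in simplified form: decompose each frame, down-weight detail coefficients where the frames disagree (likely noise), and reconstruct from the fused coefficients. The Haar basis and the agreement weight below are illustrative stand-ins, not the authors' exact local noise and structure estimator.

```python
import math

def haar(x):
    """One-level Haar decomposition of an even-length frame."""
    n = len(x) // 2
    a = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(n)]
    d = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(n)]
    return a, d

def ihaar(a, d):
    """Inverse one-level Haar reconstruction."""
    out = []
    for ai, di in zip(a, d):
        out += [(ai + di) / math.sqrt(2), (ai - di) / math.sqrt(2)]
    return out

def fuse_frames(frames, eps=1e-12):
    """Average approximation coefficients across frames; weight each detail
    coefficient by cross-frame agreement, so consistent structure is kept
    while randomly fluctuating (noisy) coefficients are attenuated."""
    decomps = [haar(f) for f in frames]
    n = len(decomps[0][0])
    a_avg = [sum(dec[0][i] for dec in decomps) / len(frames) for i in range(n)]
    d_fused = []
    for i in range(n):
        ds = [dec[1][i] for dec in decomps]
        mean_d = sum(ds) / len(ds)
        # agreement weight: ~1 when frames agree in sign/size, ~0 for noise
        w = abs(mean_d) / (sum(abs(v) for v in ds) / len(ds) + eps)
        d_fused.append(w * mean_d)
    return ihaar(a_avg, d_fused)
```

For identical input frames the fusion is a no-op; for independent noise the detail weights shrink toward zero, mimicking the paper's noise-versus-structure weighting.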
Liu, Ryan Wen; Shi, Lin; Huang, Wenhua; Xu, Jing; Yu, Simon Chun Ho; Wang, Defeng
2014-07-01
Magnetic resonance imaging (MRI) is an outstanding medical imaging modality, but image quality often suffers from noise pollution during acquisition and transmission. The purpose of this study is to enhance image quality using a feature-preserving denoising method. In the current literature, most existing MRI denoising methods do not simultaneously take the global image prior and local image features into account. The denoising method proposed in this paper is implemented based on an assumption of a spatially varying Rician noise map. A two-step wavelet-domain estimation method is developed to extract the noise map. Following a Bayesian modeling approach, a generalized total variation-based MRI denoising model is proposed based on a global hyper-Laplacian prior and the Rician noise assumption. The proposed model has the properties of backward diffusion in local normal directions and forward diffusion in local tangent directions. To further improve the denoising performance, a local variance estimator-based method is introduced to calculate spatially adaptive regularization parameters related to local image features and the spatially varying noise map. The main benefit of the proposed method is that it takes full advantage of the global MR image prior and local image features. Numerous experiments have been conducted on both synthetic and real MR data sets to compare our proposed model with some state-of-the-art denoising methods. The experimental results have demonstrated the superior performance of our proposed model in terms of quantitative and qualitative image quality evaluations.
Ince, Nuri Firat; Tadipatri, Vijay Aditya; Göksu, Fikri; Tewfik, Ahmed H
2009-01-01
Multichannel neural activities such as EEG or ECoG in a brain computer interface can be classified with subset selection algorithms running on large feature dictionaries describing subject-specific features in the spectral, temporal and spatial domains. While providing high classification accuracies, subset selection techniques are associated with long training times due to the large feature set constructed from multichannel neural recordings. In this paper we study a novel denoising technique for reducing the dimensionality of the feature space, which radically decreases the computational complexity of the subset selection step without causing any degradation in the final classification accuracy. The denoising procedure is based on comparing the energy in a particular time segment, at a given scale/level, to the energy of the raw data. By setting the denoising threshold a priori, the algorithm removes those nodes which fail to capture the energy of the raw data at a given scale. We provide experimental studies on the classification of motor imagery related multichannel ECoG recordings for a brain computer interface. The denoising procedure reached the same classification accuracy as without denoising, with a computational complexity around 5 times smaller. We also note that in some cases the denoised data yielded better classification.
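The energy-based node pruning described above can be sketched as follows: decompose the signal level by level and keep only the subbands whose energy is a meaningful fraction of the raw-signal energy. The Haar decomposition and the keep-ratio value are illustrative assumptions, not the paper's exact wavelet packet configuration.

```python
import math

def haar_level(x):
    """One Haar decomposition step on an even-length vector."""
    a = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x)//2)]
    d = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x)//2)]
    return a, d

def energy(v):
    return sum(c * c for c in v)

def prune_features(x, levels, keep_ratio):
    """Decompose x over `levels` scales and keep only the subbands whose
    energy is at least keep_ratio of the raw-signal energy; the surviving
    nodes form the reduced feature set."""
    e_raw = energy(x)
    kept, a = [], list(x)
    for lvl in range(levels):
        a, d = haar_level(a)
        if energy(d) / e_raw >= keep_ratio:
            kept.append(("detail", lvl, d))
    if energy(a) / e_raw >= keep_ratio:
        kept.append(("approx", levels - 1, a))
    return kept

# A slowly varying signal concentrates energy in the approximation band,
# so the high-frequency (noise-dominated) detail nodes are discarded.
features = prune_features([1.0] * 8, levels=3, keep_ratio=0.1)
```
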
Denoising of 3D magnetic resonance images by using higher-order singular value decomposition.
Zhang, Xinyuan; Xu, Zhongbiao; Jia, Nan; Yang, Wei; Feng, Qianjin; Chen, Wufan; Feng, Yanqiu
2015-01-01
The denoising of magnetic resonance (MR) images is important to improve the inspection quality and reliability of quantitative image analysis. Nonlocal filters that exploit similarity and/or sparseness among patches or cubes achieve excellent performance in denoising MR images. Recently, higher-order singular value decomposition (HOSVD) has been demonstrated to be a simple and effective method for exploiting redundancy in the 3D stack of similar patches when denoising 2D natural images. This work aims to investigate the application and improvement of HOSVD for denoising MR volume data. The Wiener-augmented HOSVD method achieves comparable performance to that of BM4D. For further improvement, we propose to augment the standard HOSVD stage with a second recursive stage, which is a repeated HOSVD filtering of the weighted summation of the residual and the denoised image from the first stage. The appropriate weights have been investigated by experiments with different image types and noise levels. Experimental results over synthetic and real 3D MR data demonstrate that the proposed method outperforms current state-of-the-art denoising methods.
A Comparison of PDE-based Non-Linear Anisotropic Diffusion Techniques for Image Denoising
Weeratunga, S K; Kamath, C
2003-01-06
PDE-based, non-linear diffusion techniques are an effective way to denoise images. In a previous study, we investigated the effects of different parameters in the implementation of isotropic, non-linear diffusion. Using synthetic and real images, we showed that for images corrupted with additive Gaussian noise, such methods are quite effective, leading to lower mean-squared-error values in comparison with spatial filters and wavelet-based approaches. In this paper, we extend this work to include anisotropic diffusion, where the diffusivity is a tensor valued function which can be adapted to local edge orientation. This allows smoothing along the edges, but not perpendicular to it. We consider several anisotropic diffusivity functions as well as approaches for discretizing the diffusion operator that minimize the mesh orientation effects. We investigate how these tensor-valued diffusivity functions compare in image quality, ease of use, and computational costs relative to simple spatial filters, the more complex bilateral filters, wavelet-based methods, and isotropic non-linear diffusion based techniques.
Comparison of PDE-based non-linear anisotropic diffusion techniques for image denoising
NASA Astrophysics Data System (ADS)
Weeratunga, Sisira K.; Kamath, Chandrika
2003-05-01
PDE-based, non-linear diffusion techniques are an effective way to denoise images. In a previous study, we investigated the effects of different parameters in the implementation of isotropic, non-linear diffusion. Using synthetic and real images, we showed that for images corrupted with additive Gaussian noise, such methods are quite effective, leading to lower mean-squared-error values in comparison with spatial filters and wavelet-based approaches. In this paper, we extend this work to include anisotropic diffusion, where the diffusivity is a tensor valued function which can be adapted to local edge orientation. This allows smoothing along the edges, but not perpendicular to it. We consider several anisotropic diffusivity functions as well as approaches for discretizing the diffusion operator that minimize the mesh orientation effects. We investigate how these tensor-valued diffusivity functions compare in image quality, ease of use, and computational costs relative to simple spatial filters, the more complex bilateral filters, wavelet-based methods, and isotropic non-linear diffusion based techniques.
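The tensor-valued anisotropic scheme studied in this paper is beyond a short sketch, but the scalar (Perona-Malik type) non-linear diffusion it extends can be shown compactly: each pixel exchanges flux with its neighbors, scaled by a diffusivity that shuts off across strong edges. The rational diffusivity, step size, and iteration count below are illustrative assumptions.

```python
def perona_malik(img, n_iter=20, kappa=0.5, dt=0.2):
    """Explicit scalar non-linear diffusion on a 2-D grid (list of lists).
    The diffusivity g(s) = 1 / (1 + (s/kappa)^2) damps smoothing across
    strong intensity differences, preserving edges."""
    h, w = len(img), len(img[0])
    u = [row[:] for row in img]
    g = lambda s: 1.0 / (1.0 + (s / kappa) ** 2)
    for _ in range(n_iter):
        nxt = [row[:] for row in u]
        for i in range(h):
            for j in range(w):
                flux = 0.0
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        d = u[ni][nj] - u[i][j]
                        flux += g(abs(d)) * d
                nxt[i][j] = u[i][j] + dt * flux
        u = nxt
    return u
```

Because the inter-pixel fluxes are antisymmetric, the scheme conserves total intensity; the anisotropic variant replaces the scalar `g` with a tensor so smoothing acts along edges rather than across them.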
Novel wavelet threshold denoising method in axle press-fit zone ultrasonic detection
NASA Astrophysics Data System (ADS)
Peng, Chaoyong; Gao, Xiaorong; Peng, Jianping; Wang, Ai
2017-02-01
Axles are important parts of railway locomotives and vehicles. Periodic ultrasonic inspection of axles can effectively detect and monitor axle fatigue cracks. However, in the axle press-fit zone, the complex interface contact condition reduces the signal-to-noise ratio (SNR). Therefore, the probability of false positives and false negatives increases. In this work, a novel wavelet threshold function is created to remove noise and suppress press-fit interface echoes in axle ultrasonic defect detection. The novel wavelet threshold function with two variables is designed to ensure the precision of the optimum searching process. Based on the positive correlation between the correlation coefficient and SNR, and on the experimental observation that the defect echo and the press-fit interface echo have different axle-circumferential correlation characteristics, a discrete optimum search for the two undetermined variables in the novel wavelet threshold function is conducted. The performance of the proposed method is assessed by comparing it with traditional threshold methods using real data. The statistical results for the amplitude and the peak SNR of defect echoes show that the proposed wavelet threshold denoising method not only maintains the amplitude of defect echoes but also achieves a higher peak SNR.
Petrov, S.
1996-10-01
Languages with a solvable implication problem but without complete and consistent systems of inference rules ('poor' languages) are considered. The problem of the existence of a finite complete and consistent inference rule system for a 'poor' language is stated independently of the language or rule syntax. Several properties of the problem are proved. An application of the results to the language of join dependencies is given.
NASA Astrophysics Data System (ADS)
Shiomi, K.; Takeda, T.; Sekiguchi, S.
2012-12-01
Recent dense GPS observations revealed a high strain rate zone (HSRZ) crossing central Japan. In the HSRZ, an E-W compressive stress field is observed, and large earthquakes with M>6 frequently occur. In this study, we try to reveal depth-dependent anisotropic features in this region by using teleseismic receiver functions (RFs) and S-wave splitting information. As targets, we select NIED Hi-net stations N.TGWH and N.TSTH, which are located inside and outside the HSRZ, respectively. For the RF analysis, we choose M>5.5 teleseismic events from October 2000 to November 2011. Low-pass filters with fc = 1 and 2 Hz are applied to estimate RFs. In the radial RFs, we find clear positive phase arrivals at 4 to 4.5 s delay time for both stations. Since this delay corresponds to a velocity discontinuity at about 35 km depth, these phases may be converted phases generated at the Moho discontinuity. In the back-azimuth paste-ups of the transverse RFs, we can find polarity changes of later phases at 4 to 4.5 s delay time at the N.TSTH station. This polarity change occurs for directions of N0E (north), N180E (south), and N270E (west). Although we have no data in the N90E (east) direction, this feature implies that anisotropic rocks may exist around the Moho. In order to check this feature, we consider a 6-layered subsurface model and compare synthetic RFs with the observation. The first three layers represent thick sediments and the upper crust, including a dipping velocity interface. The fourth, fifth and sixth layers correspond to the mid crust, lower crust and uppermost mantle, respectively. The best model infers that the mid and lower crust beneath the N.TSTH station should have strong anisotropy whose fast axis directs N-S, though the fast axis in the uppermost mantle seems to show an E-W direction. Moreover, to explain the observation, the symmetry axes in the lower crust and the uppermost mantle should dip about 20 degrees. To check
Chen, Szi-Wen; Chen, Yuan-Ho
2015-01-01
In this paper, a discrete wavelet transform (DWT) based de-noising method and its application to noise reduction in medical signal preprocessing are introduced. This work focuses on the hardware realization of a real-time wavelet de-noising procedure. The proposed de-noising circuit mainly consists of three modules: a DWT circuit, a thresholding circuit, and an inverse DWT (IDWT) circuit. We also propose a novel adaptive thresholding scheme and incorporate it into our wavelet de-noising procedure. Performance was then evaluated on the architectural designs of both the software and the hardware. In addition, the de-noising circuit was also implemented by downloading the Verilog codes to a field programmable gate array (FPGA) based platform so that its ability in noise reduction may be further validated in actual practice. Simulation experiment results produced by applying a set of simulated noise-contaminated electrocardiogram (ECG) signals to the de-noising circuit showed that the circuit could not only desirably meet the requirement of real-time processing, but also achieve satisfactory performance for noise reduction, while the sharp features of the ECG signals are well preserved. The proposed de-noising circuit was further synthesized using the Synopsys Design Compiler with an Artisan Taiwan Semiconductor Manufacturing Company (TSMC, Hsinchu, Taiwan) 40 nm standard cell library. The integrated circuit (IC) synthesis simulation results showed that the proposed design can achieve a clock frequency of 200 MHz with a power consumption of only 17.4 mW. PMID:26501290
Chen, Szi-Wen; Chen, Yuan-Ho
2015-10-16
In this paper, a discrete wavelet transform (DWT) based de-noising method and its application to noise reduction in medical signal preprocessing are introduced. This work focuses on the hardware realization of a real-time wavelet de-noising procedure. The proposed de-noising circuit mainly consists of three modules: a DWT circuit, a thresholding circuit, and an inverse DWT (IDWT) circuit. We also propose a novel adaptive thresholding scheme and incorporate it into our wavelet de-noising procedure. Performance was then evaluated on the architectural designs of both the software and the hardware. In addition, the de-noising circuit was also implemented by downloading the Verilog codes to a field programmable gate array (FPGA) based platform so that its ability in noise reduction may be further validated in actual practice. Simulation experiment results produced by applying a set of simulated noise-contaminated electrocardiogram (ECG) signals to the de-noising circuit showed that the circuit could not only desirably meet the requirement of real-time processing, but also achieve satisfactory performance for noise reduction, while the sharp features of the ECG signals are well preserved. The proposed de-noising circuit was further synthesized using the Synopsys Design Compiler with an Artisan Taiwan Semiconductor Manufacturing Company (TSMC, Hsinchu, Taiwan) 40 nm standard cell library. The integrated circuit (IC) synthesis simulation results showed that the proposed design can achieve a clock frequency of 200 MHz with a power consumption of only 17.4 mW.
Linguistic Markers of Inference Generation While Reading.
Clinton, Virginia; Carlson, Sarah E; Seipel, Ben
2016-06-01
Words can be informative linguistic markers of psychological constructs. The purpose of this study is to examine associations between word use and the process of making meaningful connections to a text while reading (i.e., inference generation). To achieve this purpose, think-aloud data from third-fifth grade students ([Formula: see text]) reading narrative texts were hand-coded for inferences. These data were also processed with a computer text analysis tool, Linguistic Inquiry and Word Count, for percentages of word use in the following categories: cognitive mechanism words, nonfluencies, and nine types of function words. Findings indicate that cognitive mechanisms were an independent, positive predictor of connections to background knowledge (i.e., elaborative inference generation) and nonfluencies were an independent, negative predictor of connections within the text (i.e., bridging inference generation). Function words did not provide unique variance towards predicting inference generation. These findings are discussed in the context of a cognitive reflection model and the differences between bridging and elaborative inference generation. In addition, potential practical implications for intelligent tutoring systems and computer-based methods of inference identification are presented.
Inferring Centrality from Network Snapshots
NASA Astrophysics Data System (ADS)
Shao, Haibin; Mesbahi, Mehran; Li, Dewei; Xi, Yugeng
2017-01-01
The topology and dynamics of a complex network shape its functionality. However, the topologies of many large-scale networks are either unavailable or incomplete. Without the explicit knowledge of network topology, we show how the data generated from the network dynamics can be utilised to infer the tempo centrality, which is proposed to quantify the influence of nodes in a consensus network. We show that the tempo centrality can be used to construct an accurate estimate of both the propagation rate of influence exerted on consensus networks and the Kirchhoff index of the underlying graph. Moreover, the tempo centrality also encodes the disturbance rejection of nodes in a consensus network. Our findings provide an approach to infer the performance of a consensus network from its temporal data.
Network Plasticity as Bayesian Inference
Legenstein, Robert; Maass, Wolfgang
2015-01-01
General results from statistical learning theory suggest to understand not only brain computations, but also brain plasticity as probabilistic inference. But a model for that has been missing. We propose that inherently stochastic features of synaptic plasticity and spine motility enable cortical networks of neurons to carry out probabilistic inference by sampling from a posterior distribution of network configurations. This model provides a viable alternative to existing models that propose convergence of parameters to maximum likelihood values. It explains how priors on weight distributions and connection probabilities can be merged optimally with learned experience, how cortical networks can generalize learned information so well to novel experiences, and how they can compensate continuously for unforeseen disturbances of the network. The resulting new theory of network plasticity explains from a functional perspective a number of experimental data on stochastic aspects of synaptic plasticity that previously appeared to be quite puzzling. PMID:26545099
Inferring Centrality from Network Snapshots
Shao, Haibin; Mesbahi, Mehran; Li, Dewei; Xi, Yugeng
2017-01-01
The topology and dynamics of a complex network shape its functionality. However, the topologies of many large-scale networks are either unavailable or incomplete. Without the explicit knowledge of network topology, we show how the data generated from the network dynamics can be utilised to infer the tempo centrality, which is proposed to quantify the influence of nodes in a consensus network. We show that the tempo centrality can be used to construct an accurate estimate of both the propagation rate of influence exerted on consensus networks and the Kirchhoff index of the underlying graph. Moreover, the tempo centrality also encodes the disturbance rejection of nodes in a consensus network. Our findings provide an approach to infer the performance of a consensus network from its temporal data. PMID:28098166
NASA Astrophysics Data System (ADS)
Zhao, Fan; Zhao, Jian; Zhao, Wenda; Qu, Feng
2016-05-01
Infrared images are characterized by low signal-to-noise ratio and low contrast. Therefore, edge details are easily submerged in the background and noise, making it difficult to achieve infrared image edge detail enhancement and denoising. This article proposes a novel method of Gaussian mixture model-based gradient field reconstruction, which enhances image edge details while suppressing noise. First, by analyzing the gradient histogram of the noisy infrared image, a Gaussian mixture model is adopted to simulate the distribution of the gradient histogram and divide the image information into three parts corresponding to faint details, noise, and the edges of clear targets, respectively. Then, a piecewise function is constructed based on the characteristics of the image to increase the gradients of faint details and suppress the gradients of noise. Finally, an anisotropic diffusion constraint is added while reconstructing the enhanced image from the transformed gradient field to further suppress noise. The experimental results show that, compared with existing methods, the method possesses the unique advantage of effectively enhancing infrared image edge details while suppressing noise. In addition, it can be used to effectively enhance other types of images, such as visible and medical images.
NASA Astrophysics Data System (ADS)
Weeratunga, Sisira K.; Kamath, Chandrika
2002-05-01
Removing noise from data is often the first step in data analysis. Denoising techniques should not only reduce the noise, but do so without blurring or changing the location of the edges. Many approaches have been proposed to accomplish this; in this paper, we focus on one such approach, namely the use of non-linear diffusion operators. This approach has been studied extensively from a theoretical viewpoint ever since the 1987 work of Perona and Malik showed that non-linear filters outperformed the more traditional linear Canny edge detector. We complement this theoretical work by investigating the performance of several isotropic diffusion operators on test images from scientific domains. We explore the effects of various parameters such as the choice of diffusivity function, explicit and implicit methods for the discretization of the PDE, and approaches for the spatial discretization of the non-linear operator etc. We also compare these schemes with simple spatial filters and the more complex wavelet-based shrinkage techniques. Our empirical results show that, with an appropriate choice of parameters, diffusion-based schemes can be as effective as competitive techniques.
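The "choice of diffusivity function" explored in this study typically means selecting among the classic Perona-Malik forms. The two standard choices can be written down directly; the contrast parameter `kappa` sets the gradient magnitude regarded as an edge. These are the textbook forms, shown here as a reference sketch rather than the paper's specific parameterization.

```python
import math

def g_exp(s, kappa):
    """Exponential Perona-Malik diffusivity: cuts off diffusion sharply
    at gradients above kappa, favoring strong edge preservation."""
    return math.exp(-(s / kappa) ** 2)

def g_rat(s, kappa):
    """Rational Perona-Malik diffusivity: decays more slowly with gradient
    magnitude, yielding smoother results in near-edge regions."""
    return 1.0 / (1.0 + (s / kappa) ** 2)

# Both equal 1 for flat regions (s = 0) and vanish for s >> kappa, but the
# exponential form suppresses diffusion across edges much more aggressively.
vals = [(g_exp(s, 1.0), g_rat(s, 1.0)) for s in (0.0, 1.0, 2.0)]
```
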
Le Pogam, A; Hanzouli, H; Hatt, M; Cheze Le Rest, C; Visvikis, D
2013-12-01
Denoising of Positron Emission Tomography (PET) images is a challenging task due to the inherent low signal-to-noise ratio (SNR) of the acquired data. A pre-processing denoising step may facilitate and improve the results of further steps such as segmentation, quantification or textural features characterization. Different recent denoising techniques have been introduced and most state-of-the-art methods are based on filtering in the wavelet domain. However, the wavelet transform suffers from some limitations due to its non-optimal processing of edge discontinuities. More recently, a new multi scale geometric approach has been proposed, namely the curvelet transform. It extends the wavelet transform to account for directional properties in the image. In order to address the issue of resolution loss associated with standard denoising, we considered a strategy combining the complementary wavelet and curvelet transforms. We compared different figures of merit (e.g. SNR increase, noise decrease in homogeneous regions, resolution loss, and intensity bias) on simulated and clinical datasets with the proposed combined approach and the wavelet-only and curvelet-only filtering techniques. The three methods led to an increase of the SNR. Regarding the quantitative accuracy however, the wavelet and curvelet only denoising approaches led to larger biases in the intensity and the contrast than the proposed combined algorithm. This approach could become an alternative solution to filters currently used after image reconstruction in clinical systems such as the Gaussian filter.
Edge-preserving image denoising via group coordinate descent on the GPU.
McGaffin, Madison Gray; Fessler, Jeffrey A
2015-04-01
Image denoising is a fundamental operation in image processing, and its applications range from the direct (photographic enhancement) to the technical (as a subproblem in image reconstruction algorithms). In many applications, the number of pixels has continued to grow, while the serial execution speed of computational hardware has begun to stall. New image processing algorithms must exploit the power offered by massively parallel architectures like graphics processing units (GPUs). This paper describes a family of image denoising algorithms well-suited to the GPU. The algorithms iteratively perform a set of independent, parallel 1D pixel-update subproblems. To match GPU memory limitations, they perform these pixel updates in-place and only store the noisy data, denoised image, and problem parameters. The algorithms can handle a wide range of edge-preserving roughness penalties, including differentiable convex penalties and anisotropic total variation. Both algorithms use the majorize-minimize framework to solve the 1D pixel update subproblem. Results from a large 2D image denoising problem and a 3D medical imaging denoising problem demonstrate that the proposed algorithms converge rapidly in terms of both iteration and run-time.
Chen, Qiang; de Sisternes, Luis; Leng, Theodore; Rubin, Daniel L
2015-06-01
Image denoising is a fundamental preprocessing step of image processing in many applications developed for optical coherence tomography (OCT) retinal imaging, a high-resolution modality for evaluating disease in the eye. To make a homogeneity similarity-based image denoising method more suitable for OCT noise removal, we improve it by considering the noise and retinal characteristics of OCT images in two respects: (1) median filtering preprocessing is used to make the noise distribution of OCT images more suitable for patch-based methods; (2) a rectangular neighborhood and region restriction are adopted to accommodate the horizontal stretching of retinal structures when observed in OCT images. As a performance measurement of the proposed technique, we tested the method on real and synthetic noisy retinal OCT images and compared the results with other well-known spatial denoising methods, including bilateral filtering, five partial differential equation (PDE)-based methods, and three patch-based methods. Our results indicate that our proposed method seems suitable for retinal OCT imaging denoising, and that, in general, patch-based methods can achieve better visual denoising results than point-based methods in this type of imaging, because an image patch can better represent the structured information in the images than a single pixel. However, the time complexity of the patch-based methods is substantially higher than that of the others.
Kazakeviciute, Agne; Ho, Chris Jun Hui; Olivo, Malini
2016-09-01
The aim of this study is to solve the problem of denoising and artifact removal in in vivo multispectral photoacoustic imaging when the level of noise is not known a priori. The study analyzes Wiener filtering in the Fourier domain when a family of anisotropic shape filters is considered. The unknown noise and signal power spectral densities are estimated using spectral information of the images and a first-order autoregressive (AR(1)) model. Edge preservation is achieved by detecting image edges in the original and the denoised image and superimposing a weighted contribution of the two edge images onto the resulting denoised image. The method is tested on multispectral photoacoustic images from simulations, a tissue-mimicking phantom, and in vivo imaging of the mouse, with its performance compared against that of standard Wiener filtering in the Fourier domain. The results reveal better denoising and fine-detail preservation capabilities of the proposed method when compared to standard Wiener filtering in the Fourier domain, suggesting that this could be a useful denoising technique for other multispectral photoacoustic studies.
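The core Fourier-domain Wiener operation can be sketched on a 1-D signal: attenuate each frequency bin by the gain S/(S+N), where S and N are the signal and noise power spectral densities. The paper estimates the unknown PSDs via an AR(1) model and uses anisotropic filter shapes; the sketch below instead assumes a known flat noise PSD and a crude per-bin signal PSD estimate, purely for illustration.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (fine for short illustrative signals)."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * f * k / n) for k in range(n))
            for f in range(n)]

def idft(X):
    """Inverse DFT, returning the real part of the reconstruction."""
    n = len(X)
    return [sum(X[f] * cmath.exp(2j * cmath.pi * f * k / n) for f in range(n)).real / n
            for k in range(n)]

def wiener_denoise(x, noise_power):
    """Fourier-domain Wiener filter assuming a flat (white) noise PSD."""
    out = []
    for Xf in dft(x):
        p = abs(Xf) ** 2
        s = max(p - noise_power, 0.0)          # crude signal-PSD estimate
        gain = s / (s + noise_power) if (s + noise_power) > 0 else 0.0
        out.append(gain * Xf)
    return idft(out)
```

With `noise_power=0` the filter is the identity; larger values suppress low-power (noise-dominated) frequency bins more strongly.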
NASA Astrophysics Data System (ADS)
Bitenc, M.; Kieffer, D. S.; Khoshelham, K.
2015-08-01
The precision of Terrestrial Laser Scanning (TLS) data depends mainly on the inherent random range error, which hinders extraction of small details from TLS measurements. New post-processing algorithms have been developed that reduce or eliminate the noise and therefore enable modelling of details at a smaller scale than one would traditionally expect. The aim of this research is to find the optimum denoising method such that the corrected TLS data provide a reliable estimation of small-scale rock joint roughness. Two wavelet-based denoising methods are considered, namely the Discrete Wavelet Transform (DWT) and Stationary Wavelet Transform (SWT), in combination with different thresholding procedures. The question is which technique provides more accurate roughness estimates, considering (i) the wavelet transform (SWT or DWT), (ii) the thresholding method (fixed-form or penalised low) and (iii) the thresholding mode (soft or hard). The performance of the denoising methods is tested by two analyses, namely method noise and method sensitivity to noise. The reference data are precise Advanced TOpometric Sensor (ATOS) measurements obtained on a 20 × 30 cm rock joint sample, which, for the second analysis, are corrupted by different levels of noise. With such controlled noise-level experiments it is possible to evaluate the methods' performance for different amounts of noise, which might be present in TLS data. Qualitative visual checks of denoised surfaces and quantitative parameters such as grid height and roughness are considered in a comparative analysis of the denoising methods. Results indicate that the preferred method for realistic roughness estimation is DWT with penalised low hard thresholding.
The use of ensemble empirical mode decomposition as a novel denoising technique
NASA Astrophysics Data System (ADS)
Gaci, Said; Hachay, Olga; Zaourar, Naima
2016-04-01
Denoising is of high importance in geophysical data processing. This paper suggests a new denoising technique based on ensemble empirical mode decomposition (EEMD), compared against discrete wavelet transform (DWT) thresholding. Firstly, both methods have been implemented on synthetic signals with diverse waveforms ('blocks', 'heavy sine', 'Doppler', and 'mishmash'). The EEMD denoising method proves the most efficient for the 'blocks', 'heavy sine' and 'mishmash' signals for all the considered signal-to-noise ratio (SNR) values. However, the results obtained using DWT thresholding are the most reliable for the 'Doppler' signal, and the difference between the mean square error (MSE) values calculated for the two methods is slight and decreases as the SNR decreases. Secondly, the denoising methods have been applied to real seismic traces recorded in the Algerian Sahara. It is shown that the proposed technique outperforms DWT thresholding. In conclusion, the EEMD technique can provide a powerful tool for denoising seismic signals. Keywords: ensemble empirical mode decomposition (EEMD), discrete wavelet transform (DWT), seismic signal.
Adaptive regularization of the NL-means: application to image and video denoising.
Sutour, Camille; Deledalle, Charles-Alban; Aujol, Jean-François
2014-08-01
Image denoising is a central problem in image processing and is often a necessary step prior to higher-level analysis such as segmentation, reconstruction, or super-resolution. The nonlocal means (NL-means) perform denoising by exploiting the natural redundancy of patterns inside an image: they compute a weighted average of pixels whose neighborhoods (patches) are close to each other. This significantly reduces the noise while preserving most of the image content. While the method performs well on flat areas and textures, it suffers from two opposite drawbacks: it may over-smooth low-contrasted areas or leave residual noise around edges and singular structures. Denoising can also be performed by total variation minimization (the Rudin, Osher and Fatemi model), which restores piecewise-regular images but is prone to over-smoothing textures, staircasing effects, and contrast losses. We introduce in this paper a variational approach that corrects the over-smoothing and reduces the residual noise of the NL-means by adaptively regularizing nonlocal methods with the total variation. The proposed regularized NL-means algorithm combines these methods and reduces both of their respective defects by minimizing an adaptive total variation with a nonlocal data fidelity term. Moreover, the model adapts to different noise statistics, and a fast solution can be obtained in the general case of the exponential family. We develop this model for image denoising and adapt it to video denoising with 3D patches.
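The NL-means step that the paper regularizes can be sketched in a few lines of numpy; this is the basic patchwise weighted average only (no TV term), and the patch size, search window, and bandwidth h are illustrative defaults, not the paper's:

```python
import numpy as np

def nl_means(img, patch=3, search=7, h=0.15):
    """Basic non-local means: each pixel is replaced by a weighted average
    of pixels in a search window, with weights decaying in the squared
    distance between the surrounding patches."""
    pr, sr = patch // 2, search // 2
    pad = pr + sr
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad, j + pad
            ref = padded[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
            wsum = vsum = 0.0
            for di in range(-sr, sr + 1):
                for dj in range(-sr, sr + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - pr:ni + pr + 1, nj - pr:nj + pr + 1]
                    d2 = np.mean((ref - cand) ** 2)   # patch distance
                    w = np.exp(-d2 / h ** 2)
                    wsum += w
                    vsum += w * padded[ni, nj]
            out[i, j] = vsum / wsum
    return out
```

The two failure modes the paper addresses are visible here: in low-contrast areas many weights are near-equal (over-smoothing), while near singular structures few patches match (residual noise).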
Edge-preserving image denoising via group coordinate descent on the GPU
McGaffin, Madison G.; Fessler, Jeffrey A.
2015-01-01
Image denoising is a fundamental operation in image processing, and its applications range from the direct (photographic enhancement) to the technical (as a subproblem in image reconstruction algorithms). In many applications, the number of pixels has continued to grow, while the serial execution speed of computational hardware has begun to stall. New image processing algorithms must exploit the power offered by massively parallel architectures like graphics processing units (GPUs). This paper describes a family of image denoising algorithms well-suited to the GPU. The algorithms iteratively perform a set of independent, parallel one-dimensional pixel-update subproblems. To match GPU memory limitations, they perform these pixel updates in place and store only the noisy data, the denoised image, and the problem parameters. The algorithms can handle a wide range of edge-preserving roughness penalties, including differentiable convex penalties and anisotropic total variation (TV). Both algorithms use the majorize-minimize (MM) framework to solve the one-dimensional pixel-update subproblem. Results from a large 2D image denoising problem and a 3D medical imaging denoising problem demonstrate that the proposed algorithms converge rapidly in terms of both iterations and run time. PMID:25675454
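The majorize-minimize idea used for the one-dimensional subproblems can be illustrated on a 1-D TV-penalized denoising problem; this serial numpy sketch (not the paper's GPU code) uses the standard quadratic majorizer |t| <= t^2/(2|t_k|) + |t_k|/2, which turns each MM step into a weighted least-squares solve:

```python
import numpy as np

def mm_tv_1d(y, lam=0.5, niter=30, eps=1e-8):
    """MM iterations for min_x 0.5||x - y||^2 + lam * sum_i |x_{i+1} - x_i|.
    Each step minimizes the quadratic majorizer, i.e. solves
    (I + lam * D^T W D) x = y with W = diag(1 / |D x_k|)."""
    n = y.size
    D = np.diff(np.eye(n), axis=0)          # (n-1, n) finite-difference operator
    x = y.copy()
    for _ in range(niter):
        w = 1.0 / (np.abs(D @ x) + eps)     # reweighting from the majorizer
        A = np.eye(n) + lam * D.T @ (w[:, None] * D)
        x = np.linalg.solve(A, y)           # exact minimizer of the majorizer
    return x
```

The MM guarantee (each step decreases the true objective) is what lets the paper run many independent 1-D updates in parallel without line searches.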
[Multispectral remote sensing image denoising based on non-local means].
Liu, Peng; Liu, Ding-Sheng; Li, Guo-Qing; Liu, Zhi-Wen
2011-11-01
The non-local means (NLM) algorithm exploits the fact that similar neighborhoods can occur anywhere in the image and can contribute to denoising. However, current NLM methods are not designed for multichannel remote sensing images: smoothing every band separately seriously damages the spectral information of a multispectral image. The authors therefore extend NLM in two respects. First, for multispectral image denoising, a weight should depend on all channels rather than on one channel alone; for the kth band, the sum of the smoothing kernels over all bands is used instead of that of a single band. Second, a patch whose spectral feature is similar to that of the central patch should receive a larger weight. Incorporating these two changes into the traditional non-local means yields a new multispectral non-local means denoising method. In the experiments, different satellite images containing both urban and rural areas are used. To better evaluate the performance of the different methods, ERGAS and SAM are used as quality indices, and several other methods are compared with the proposed one. The proposed method performs better in both ERGAS and SAM; in particular, spectral features are better preserved by the proposed NLM denoising.
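The two changes described above amount to one modification of the NLM weight; a minimal sketch (the bandwidths h and hs and the mean-spectrum feature are illustrative assumptions, not the paper's exact definitions):

```python
import numpy as np

def ms_nlm_weight(P, Q, h=0.1, hs=0.1):
    """Weight for multispectral NLM. P and Q are (bands, p, p) patch stacks.
    Change 1: the patch distance is averaged over ALL bands, so one weight
    is shared by every channel. Change 2: a spectral term favors patches
    whose mean spectrum matches the central patch's."""
    d_patch = np.mean((P - Q) ** 2)                               # all bands
    d_spec = np.mean((P.mean(axis=(1, 2)) - Q.mean(axis=(1, 2))) ** 2)
    return np.exp(-d_patch / h ** 2) * np.exp(-d_spec / hs ** 2)
```

Because the single weight is reused across channels, the averaging can no longer mix pixels with different spectral signatures, which is what protects the spectral information.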
Chiron, Lionel; van Agthoven, Maria A.; Kieffer, Bruno; Rolando, Christian; Delsuc, Marc-André
2014-01-01
Modern scientific research produces datasets of increasing size and complexity that require dedicated numerical methods to be processed. In many cases, the analysis of spectroscopic data involves the denoising of raw data before any further processing. Current efficient denoising algorithms require the singular value decomposition of a matrix with a size that scales up as the square of the data length, preventing their use on very large datasets. Taking advantage of recent progress on random projection and probabilistic algorithms, we developed a simple and efficient method for the denoising of very large datasets. Based on the QR decomposition of a matrix randomly sampled from the data, this approach allows a gain of nearly three orders of magnitude in processing time compared with classical singular value decomposition denoising. This procedure, called urQRd (uncoiled random QR denoising), strongly reduces the computer memory footprint and allows the denoising algorithm to be applied to virtually unlimited data size. The efficiency of these numerical tools is demonstrated on experimental data from high-resolution broadband Fourier transform ion cyclotron resonance mass spectrometry, which has applications in proteomics and metabolomics. We show that robust denoising is achieved in 2D spectra whose interpretation is severely impaired by scintillation noise. These denoising procedures can be adapted to many other data analysis domains where the size and/or the processing time are crucial. PMID:24390542
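The core of the random-projection QR idea can be sketched for a 1-D signal; this numpy version (function name, Hankel shape, and the small oversampling constant are illustrative, not the urQRd implementation) replaces the SVD with a QR factorization of a randomly sampled matrix:

```python
import numpy as np

def urqrd_like(x, rank, ncols=None):
    """Random-projection QR denoising sketch: build a Hankel matrix from the
    signal, capture its column space via QR of H @ Omega (no SVD), project H
    onto that space, then average anti-diagonals back to a signal."""
    n = x.size
    ncols = ncols or n // 2
    rows = n - ncols + 1
    H = np.lib.stride_tricks.sliding_window_view(x, ncols)  # Hankel, rows x ncols
    rng = np.random.default_rng(0)
    Omega = rng.standard_normal((ncols, rank + 4))  # random probe, small oversampling
    Q, _ = np.linalg.qr(H @ Omega)                  # orthonormal basis of the range
    Hd = Q @ (Q.T @ H)                              # low-rank projection of H
    out = np.zeros(n)
    cnt = np.zeros(n)
    for r in range(rows):                           # average anti-diagonals
        out[r:r + ncols] += Hd[r]
        cnt[r:r + ncols] += 1
    return out / cnt
```

The cost is dominated by products with the thin random matrix, which is why this scales to data lengths where a full SVD of the Hankel matrix is infeasible.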
Computed tomography perfusion imaging denoising using Gaussian process regression
NASA Astrophysics Data System (ADS)
Zhu, Fan; Carpenter, Trevor; Rodriguez Gonzalez, David; Atkinson, Malcolm; Wardlaw, Joanna
2012-06-01
Brain perfusion-weighted images acquired using dynamic contrast studies have an important clinical role in acute stroke diagnosis and treatment decisions. However, computed tomography (CT) images suffer from low contrast-to-noise ratios (CNR) as a consequence of limiting the patient's exposure to radiation, so the development of methods for improving the CNR is valuable. The majority of existing approaches for denoising CT images are optimized for 3D (spatial) information, including spatial decimation (spatially weighted mean filters) and techniques based on wavelet and curvelet transforms. However, perfusion imaging data are 4D, as they also contain temporal information. Our approach uses Gaussian process regression (GPR), which takes advantage of the temporal information, to reduce the noise level. Over the entire image, GPR gains a 99% CNR improvement over the raw images and also improves the quality of haemodynamic maps, allowing better identification of edges and detailed information. At the level of individual voxels, GPR provides a stable baseline, helps to identify key parameters from tissue time-concentration curves, and reduces the oscillations in the curve. GPR is superior to the comparable techniques used in this study.
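Per-voxel temporal GPR reduces to the closed-form posterior mean; a minimal numpy sketch for one voxel's time-concentration curve (the RBF kernel and the hyperparameter values are illustrative assumptions, not the paper's fitted ones):

```python
import numpy as np

def gpr_denoise(t, y, length=2.0, sig_f=3.0, sig_n=0.2):
    """Closed-form GP regression smoother: posterior mean
    K (K + sig_n^2 I)^{-1} y at the sample times, with an RBF kernel over
    time. Acts on the temporal axis only, leaving spatial detail untouched."""
    def rbf(a, b):
        return sig_f ** 2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = rbf(t, t) + sig_n ** 2 * np.eye(t.size)
    return rbf(t, t) @ np.linalg.solve(K, y)
```

Run independently for every voxel, this yields the stable baseline and smoothed time-concentration curve the abstract describes.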
Simultaneous Fusion and Denoising of Panchromatic and Multispectral Satellite Images
NASA Astrophysics Data System (ADS)
Ragheb, Amr M.; Osman, Heba; Abbas, Alaa M.; Elkaffas, Saleh M.; El-Tobely, Tarek A.; Khamis, S.; Elhalawany, Mohamed E.; Nasr, Mohamed E.; Dessouky, Moawad I.; Al-Nuaimy, Waleed; Abd El-Samie, Fathi E.
2012-12-01
To identify objects in satellite images, multispectral (MS) images with high spectral resolution and low spatial resolution, and panchromatic (Pan) images with high spatial resolution and low spectral resolution need to be fused. Several fusion methods such as the intensity-hue-saturation (IHS), the discrete wavelet transform, the discrete wavelet frame transform (DWFT), and the principal component analysis have been proposed in recent years to obtain images with both high spectral and spatial resolutions. In this paper, a hybrid fusion method for satellite images comprising both the IHS transform and the DWFT is proposed. This method tries to achieve the highest possible spectral and spatial resolutions with as small distortion in the fused image as possible. A comparison study between the proposed hybrid method and the traditional methods is presented in this paper. Different MS and Pan images from Landsat-5, Spot, Landsat-7, and IKONOS satellites are used in this comparison. The effect of noise on the proposed hybrid fusion method as well as the traditional fusion methods is studied. Experimental results show the superiority of the proposed hybrid method to the traditional methods. The results show also that a wavelet denoising step is required when fusion is performed at low signal-to-noise ratios.
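The IHS half of the hybrid scheme can be sketched with the fast additive formulation; this is a generic IHS fusion sketch (the histogram matching and the equal-weight intensity are simplifying assumptions, and the paper's DWFT stage is omitted):

```python
import numpy as np

def ihs_fuse(ms, pan):
    """Fast additive IHS fusion: the MS intensity (band mean) is replaced by
    the mean/std-matched Pan image, injecting Pan's spatial detail into
    every band equally. ms: (H, W, bands), pan: (H, W), co-registered."""
    inten = ms.mean(axis=2)                                   # intensity component
    pan_m = (pan - pan.mean()) * (inten.std() / pan.std()) + inten.mean()
    return ms + (pan_m - inten)[:, :, None]                   # substitute intensity
```

Because the same detail is added to every band, band means (and hence gross spectral content) are preserved; the spectral distortion the paper measures comes from how well pan_m approximates the true high-resolution intensity.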
Multi-scale non-local denoising method in neuroimaging.
Chen, Yiping; Wang, Cheng; Wang, Liansheng
2016-03-17
The non-local means algorithm removes image noise in a way that differs from traditional techniques: it not only smooths the image but also preserves its fine details. However, the method suffers from high computational complexity. We propose a multi-scale non-local means method in which an adaptive multi-scale technique is implemented. In practice, based on each selected scale, the input image is divided into small blocks. Then, the noise at a given pixel is removed using only one block. This overcomes the low efficiency of the original non-local means method. Our proposed method also benefits from the local average gradient orientation. For evaluation, we compared the images processed by our technique with those produced by the original and the improved non-local means denoising methods. Extensive experiments were conducted, and the results show that our method is faster than the original and the improved non-local means methods. It is also shown that our method is robust enough to remove noise in neuroimaging applications.
On Bayesian Inductive Inference & Predictive Estimation
NASA Technical Reports Server (NTRS)
Cheeseman, Peter; Stutz, John; Smelyanskiy, Vadim
2004-01-01
We investigate Bayesian inference and the Principle of Maximum Entropy (PME) as methods for doing inference under uncertainty. This investigation is primarily through concrete examples that have been previously investigated in the literature. We find that it is possible to do Bayesian inference and PME inference using the same information, despite claims to the contrary, but that the results are not directly comparable. This is because Bayesian inference yields a probability density function (pdf) over the unknown model parameters, whereas PME yields point estimates. If mean estimates are extracted from the Bayesian pdfs, the resulting parameter estimates can differ radically from the PME values and also from the Maximum Likelihood values. We conclude that these differences are due to the Bayesian inference not assuming anything beyond the given prior probabilities and the data, whereas PME implicitly assumes that the given constraints are the only constraints that are operating. Since this assumption can be wrong, PME values may have to be revised when subsequent data shows evidence for more constraints. The entropy concentration previously "proved" by E. T. Jaynes is shown to be in error. Further, we show that PME is a generalized form of independence assumption, and so can be a very powerful method of inference when the variables being investigated are largely independent of each other.
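A concrete instance of the PME point estimate discussed above is Jaynes's die problem: the maximum-entropy pmf on the faces {1..6} with a constrained mean has the exponential-family form p_i proportional to exp(lam * i). A numpy sketch solving for lam by bisection (a worked illustration, not code from the report; a Bayesian treatment would instead return a posterior over the whole probability simplex):

```python
import numpy as np

def maxent_die(mean_target):
    """PME pmf on faces 1..6 with a fixed mean: p_i ~ exp(lam * i), where
    lam is found by bisection (the mean is monotone increasing in lam)."""
    faces = np.arange(1, 7)
    def mean_of(lam):
        z = lam * faces
        p = np.exp(z - z.max())          # stabilized softmax
        p = p / p.sum()
        return (p * faces).sum()
    lo, hi = -20.0, 20.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        (lo, hi) = (mid, hi) if mean_of(mid) < mean_target else (lo, mid)
    z = 0.5 * (lo + hi) * faces
    p = np.exp(z - z.max())
    return p / p.sum()
```

For a constrained mean of 4.5, PME returns a single increasing pmf; the report's point is that this point estimate implicitly assumes the mean constraint is the only one operating, whereas a Bayesian posterior mean over die parameters need not coincide with it.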
Akhbari, Mahsa; Shamsollahi, Mohammad B; Jutten, Christian; Armoundas, Antonis A; Sayadi, Omid
2016-02-01
In this paper we propose an efficient method for denoising and extracting fiducial points (FPs) of ECG signals. The method is based on a nonlinear dynamic model which uses Gaussian functions to model ECG waveforms. For estimating the model parameters, we use an extended Kalman filter (EKF). In this framework, called EKF25, all the parameters of the Gaussian functions as well as the ECG waveforms (P-wave, QRS complex and T-wave) in the ECG dynamical model are considered as state variables. In this paper, the dynamic time warping method is used to estimate the nonlinear ECG phase observation. We compare this new approach with linear phase observation models. Using linear and nonlinear EKF25 for ECG denoising, and nonlinear EKF25 for fiducial point extraction and ECG interval analysis, are the main contributions of this paper. Performance comparison with other EKF-based techniques shows that the proposed method results in higher output SNR, with an average SNR improvement of 12 dB for an input SNR of -8 dB. To evaluate the FP extraction performance, we compare the proposed method with a method based on a partially collapsed Gibbs sampler and an established EKF-based method. The mean absolute error and the root mean square error of all FPs, across all databases, are 14 ms and 22 ms, respectively, for our proposed method, with an advantage when using a nonlinear phase observation. These errors are significantly smaller than those obtained with other methods. For ECG interval analysis, with an absolute mean error and a root mean square error of about 22 ms and 29 ms, the proposed method achieves better accuracy and smaller variability with respect to other methods.
An Optimal Partial Differential Equations-based Stopping Criterion for Medical Image Denoising.
Khanian, Maryam; Feizi, Awat; Davari, Ali
2014-01-01
Improving the quality of medical images before and after surgical operations is necessary for beginning and speeding up the recovery process. Partial differential equation-based models have become a powerful and well-known tool in different areas of image processing, such as denoising, multiscale image analysis, edge detection, and other fields of image processing and computer vision. In this paper, an algorithm for medical image denoising using an anisotropic diffusion filter with a convenient stopping criterion is presented. The paper introduces two strategies: an efficient explicit scheme, together with a software technique for keeping the mathematically unstable anisotropic diffusion filter under control, and an automatic stopping criterion that, unlike other stopping criteria, considers only the input image, in addition to the quality of the denoised image, ease of use, and runtime. Various medical images are examined to confirm the claims.
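The explicit anisotropic diffusion scheme referred to above is the classic Perona-Malik update; a minimal numpy sketch (the edge-stopping function, kappa, and the fixed iteration count standing in for the paper's automatic stopping criterion are illustrative assumptions):

```python
import numpy as np

def anisotropic_diffusion(img, niter=20, kappa=0.1, dt=0.2):
    """Explicit Perona-Malik scheme. The explicit method is only
    conditionally stable: dt <= 0.25 is required for the 4-neighbour 2-D
    update, which is the instability the paper's software technique manages."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)     # edge-stopping conductance
    for _ in range(niter):
        dn = np.roll(u, -1, axis=0) - u         # differences to the 4 neighbours
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

Small (noise-scale) gradients diffuse freely while large (edge-scale) gradients are blocked by g, so homogeneous regions are smoothed and organ boundaries are kept.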
A comparison of filtering techniques on denoising terahertz coaxial digital holography image
NASA Astrophysics Data System (ADS)
Cui, Shan-shan; Li, Qi
2016-10-01
In the process of recording a terahertz digital hologram, the hologram is easily contaminated by speckle noise, which lowers the resolution of the imaging system and seriously degrades the reconstruction results. The study of filtering algorithms for de-speckling terahertz digital holography images therefore has important practical value. In this paper, non-local means filtering and guided bilateral filtering were applied to the real image reconstructed from a continuous-wave terahertz coaxial digital hologram. For comparison, median filtering, bilateral filtering, and robust bilateral filtering were introduced as conventional methods to denoise the real image. All the denoising results were then evaluated. The comparison indicates that the guided bilateral filter provides the best denoising effect for the terahertz digital holography image, both significantly suppressing speckle noise and effectively preserving the useful information in the reconstructed image.
Improved DCT-Based Nonlocal Means Filter for MR Images Denoising
Hu, Jinrong; Pu, Yifei; Wu, Xi; Zhang, Yi; Zhou, Jiliu
2012-01-01
The nonlocal means (NLM) filter has been proven to be an efficient feature-preserving denoising method and can be applied to remove noise in magnetic resonance (MR) images. To suppress noise more efficiently, we present a novel NLM filter based on the discrete cosine transform (DCT). Instead of computing similarity weights directly from the gray-level information, the proposed method calculates similarity weights in the DCT subspace of the neighborhood. Owing to promising characteristics of the DCT, such as low data correlation and high energy compaction, the proposed filter is naturally endowed with a more accurate estimation of the weights and thus denoises more effectively. The performance of the proposed filter is evaluated qualitatively and quantitatively together with two other NLM filters, namely the original NLM filter and the unbiased NLM (UNLM) filter. Experimental results demonstrate that the proposed filter achieves better denoising performance in MRI compared to the others. PMID:22545063
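The key change, weights computed in a DCT subspace rather than from raw gray levels, can be sketched as follows; the keep x keep low-frequency block used as the subspace is an illustrative assumption, not the paper's exact coefficient selection:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (rows = basis vectors)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

def dct_nlm_weight(p, q, h, keep=2):
    """NLM similarity weight from the low-frequency keep x keep block of each
    patch's 2-D DCT. Energy compaction concentrates the patch structure in
    few coefficients, so the distance is less perturbed by noise."""
    C = dct_matrix(p.shape[0])
    P = (C @ p @ C.T)[:keep, :keep]     # 2-D DCT, low-frequency block
    Q = (C @ q @ C.T)[:keep, :keep]
    return np.exp(-np.mean((P - Q) ** 2) / h ** 2)
```

Because the discarded high-frequency coefficients are noise-dominated, the resulting weights discriminate structure more reliably than raw-intensity distances at the same noise level.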
Chang, Ching-Wei; Mycek, Mary-Ann
2012-05-01
We report the first application of wavelet-based denoising (noise removal) methods to time-domain box-car fluorescence lifetime imaging microscopy (FLIM) images and compare the results to novel total variation (TV) denoising methods. Methods were tested first on artificial images and then applied to low-light live-cell images. Relative to undenoised images, TV methods could improve lifetime precision up to 10-fold in artificial images, while preserving the overall accuracy of lifetime and amplitude values of a single-exponential decay model and improving local lifetime fitting in live-cell images. Wavelet-based methods were at least 4-fold faster than TV methods, but could introduce significant inaccuracies in recovered lifetime values. The denoising methods discussed can potentially enhance a variety of FLIM applications, including live-cell, in vivo animal, or endoscopic imaging studies, especially under challenging imaging conditions such as low-light or fast video-rate imaging.
Low-dose computed tomography image denoising based on joint wavelet and sparse representation.
Ghadrdan, Samira; Alirezaie, Javad; Dillenseger, Jean-Louis; Babyn, Paul
2014-01-01
Image denoising and signal enhancement are among the most challenging issues in low-dose computed tomography (CT) imaging. Sparse representational methods have shown initial promise for these applications. In this work we present a wavelet-based sparse representation denoising technique utilizing dictionary learning and clustering. By using wavelets we extract the most suitable features in the images to obtain accurate dictionary atoms for the denoising algorithm. To achieve improved results we also lower the number of clusters, which reduces computational complexity. In addition, a single-image noise-level estimate is developed to update the cluster centers at higher PSNRs. Our results, along with the computational efficiency of the proposed algorithm, clearly demonstrate its improvement over other clustering-based sparse representation (CSR) and K-SVD methods.
Computed Tomography Images De-noising using a Novel Two Stage Adaptive Algorithm.
Fadaee, Mojtaba; Shamsi, Mousa; Saberkari, Hamidreza; Sedaaghi, Mohammad Hossein
2015-01-01
In this paper, an optimal algorithm is presented for de-noising of medical images. The presented algorithm is based on an improved version of local pixel grouping and principal component analysis. In the local pixel grouping algorithm, block matching based on the L2 norm is utilized, which improves the matching performance. To evaluate the performance of our proposed algorithm, the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) criteria have been used, which measure, respectively, the signal-to-noise ratio of the image and the structural similarity of two images. The proposed algorithm has two stages, de-noising and cleanup. The cleanup stage is carried out iteratively: it is repeated until two conditions based on PSNR and SSIM are satisfied. Implementation results show that the presented algorithm achieves significantly better de-noising. Furthermore, the SSIM and PSNR values are higher in comparison to other methods. PMID:26955565
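The PCA half of the local-pixel-grouping + PCA pipeline can be sketched as follows, acting on a group of patches already matched by L2 block matching; the Wiener-type eigenvalue shrinkage and the known noise level are illustrative assumptions, not the paper's exact rule:

```python
import numpy as np

def lpg_pca_denoise(patches, sigma):
    """PCA stage of an LPG-PCA-style denoiser. `patches` is an (n, d) group
    of vectorized patches (row 0 = reference patch). Principal components
    whose variance barely exceeds the noise variance are shrunk away."""
    mean = patches.mean(axis=0)
    X = patches - mean
    w, V = np.linalg.eigh(X.T @ X / X.shape[0])       # eigen-decomposition of cov
    Y = X @ V                                         # PCA coefficients
    shrink = np.maximum(w - sigma ** 2, 0.0) / np.maximum(w, 1e-12)
    return (Y[0] * shrink) @ V.T + mean               # denoised reference patch
```

Because the group contains only patches similar to the reference, the signal energy concentrates in a few principal components and the shrinkage removes the isotropic noise floor.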
A multiscale products technique for denoising of DNA capillary electrophoresis signals
NASA Astrophysics Data System (ADS)
Gao, Qingwei; Lu, Yixiang; Sun, Dong; Zhang, Dexiang
2013-06-01
Since noise degrades the accuracy and precision of DNA capillary electrophoresis (CE) analysis, signal denoising is important to facilitate the post-processing of CE data. In this paper, a new denoising algorithm based on the dyadic wavelet transform with multiscale products is applied to remove noise from DNA CE signals. The wavelet coefficients at adjacent scales are first multiplied to amplify the significant features of the CE signal while diluting noise. Noise is then suppressed by applying a multiscale threshold to the multiscale products instead of directly to the wavelet coefficients. Finally, the noise-free CE signal is recovered from the thresholded coefficients by the inverse dyadic wavelet transform. We compare the performance of the proposed algorithm with other denoising methods applied to synthetic and real CE signals. Experimental results show that the new scheme achieves better noise removal while preserving the shape of the peaks corresponding to the analytes in the sample.
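The multiscale-products idea can be illustrated with a two-scale stand-in built from box smoothing (the dyadic wavelet transform is replaced here by simple smoothing differences, and the threshold value is illustrative):

```python
import numpy as np

def box_smooth(x, w):
    return np.convolve(x, np.ones(w) / w, mode="same")

def multiscale_product_denoise(x, thresh):
    """Two-scale sketch of multiscale-products denoising: detail layers at
    adjacent scales are multiplied, which reinforces real peaks (present at
    both scales) while diluting noise; samples whose product magnitude falls
    below `thresh` are treated as noise and suppressed."""
    s1 = box_smooth(x, 3)
    s2 = box_smooth(s1, 5)
    d1, d2 = x - s1, s1 - s2              # adjacent-scale detail layers
    mask = np.abs(d1 * d2) > thresh       # threshold the PRODUCT, not d1 or d2
    return s2 + (d1 + d2) * mask
```

A genuine electropherogram peak produces large coefficients at both scales, so its product survives the threshold, while noise rarely does at two scales simultaneously; this is why thresholding the product preserves peak shape better than thresholding coefficients directly.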
Denoising for 3-d photon-limited imaging data using nonseparable filterbanks.
Santamaria-Pang, Alberto; Bildea, Teodor Stefan; Tan, Shan; Kakadiaris, Ioannis A
2008-12-01
In this paper, we present a novel frame-based denoising algorithm for photon-limited 3-D images. We first construct a new 3-D nonseparable filterbank by adding elements to an existing frame in a structurally stable way. In contrast with the traditional 3-D separable wavelet system, the new filterbank is capable of using edge information in multiple directions. We then propose a data-adaptive hysteresis thresholding algorithm based on this new 3-D nonseparable filterbank. In addition, we develop a new validation strategy for denoising of photon-limited images containing sparse structures, such as neurons (the structure of interest is less than 5% of total volume). The validation method, based on tubular neighborhoods around the structure, is used to determine the optimal threshold of the proposed denoising algorithm. We compare our method with other state-of-the-art methods and report very encouraging results on applications utilizing both synthetic and real data.
REFINING GENETICALLY INFERRED RELATIONSHIPS USING TREELET COVARIANCE SMOOTHING
Crossett, Andrew; Lee, Ann B.; Klei, Lambertus; Devlin, Bernie; Roeder, Kathryn
2013-01-01
Recent technological advances coupled with large sample sets have uncovered many factors underlying the genetic basis of traits and the predisposition to complex disease, but much is left to discover. A common thread to most genetic investigations is familial relationships. Close relatives can be identified from family records, and more distant relatives can be inferred from large panels of genetic markers. Unfortunately these empirical estimates can be noisy, especially regarding distant relatives. We propose a new method for denoising genetically-inferred relationship matrices by exploiting the underlying structure due to hierarchical groupings of correlated individuals. The approach, which we call Treelet Covariance Smoothing, employs a multiscale decomposition of covariance matrices to improve estimates of pairwise relationships. On both simulated and real data, we show that smoothing leads to better estimates of the relatedness amongst distantly related individuals. We illustrate our method with a large genome-wide association study and estimate the "heritability" of body mass index quite accurately. Traditionally heritability, defined as the fraction of the total trait variance attributable to additive genetic effects, is estimated from samples of closely related individuals using random effects models. We show that by using smoothed relationship matrices we can estimate heritability using population-based samples. Finally, while our methods have been developed for refining genetic relationship matrices and improving estimates of heritability, they have much broader potential application in statistics. Most notably, for error-in-variables random effects models and settings that require regularization of matrices with block or hierarchical structure. PMID:24587841
A new adaptive algorithm for image denoising based on curvelet transform
NASA Astrophysics Data System (ADS)
Chen, Musheng; Cai, Zhishan
2013-10-01
The purpose of this paper is to study a method for denoising images corrupted with additive white Gaussian noise. The application of the time-invariant discrete curvelet transform to noise reduction is considered. In the curvelet transform, the frame elements are indexed by scale, orientation and location parameters, and the transform is designed to represent edges and singularities along curved paths more efficiently than the wavelet transform. Curvelet-based methods can therefore outperform wavelet methods in image denoising. In general, image denoising imposes a compromise between noise reduction and the preservation of significant image details. To achieve good performance in this respect, an efficient and adaptive image denoising method based on the curvelet transform is presented in this paper. Firstly, the noisy image is decomposed by the curvelet transform into many levels to obtain different frequency sub-bands. Secondly, efficient and adaptive threshold estimation based on generalized Gaussian distribution modeling of the sub-band coefficients is used to remove the noisy coefficients; the threshold is chosen by analyzing the standard deviation of the coefficients. Finally, the multiscale decomposition is inverted to reconstruct the denoised image. To assess the performance of the proposed method, the results are compared with existing algorithms such as wavelet-based hard and soft thresholding. Simulation results on several test images indicate that the proposed method outperforms the other methods in peak signal-to-noise ratio and also better preserves edge information visually. The results also suggest that the curvelet transform can achieve better performance than the wavelet transform in image denoising.
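The adaptive sub-band threshold under a generalized Gaussian model is the BayesShrink-style rule T = sigma_n^2 / sigma_x; a minimal sketch applied to a generic detail sub-band (any transform's coefficients, since the curvelet transform itself is not reproduced here):

```python
import numpy as np

def bayes_shrink_threshold(detail, sigma_n):
    """Adaptive sub-band threshold T = sigma_n^2 / sigma_x, with the signal
    std sigma_x estimated from the observed coefficient energy under a
    (generalized) Gaussian model of the sub-band coefficients."""
    var_x = max(np.mean(detail ** 2) - sigma_n ** 2, 0.0)
    if var_x == 0.0:
        return np.abs(detail).max()     # sub-band is pure noise: kill everything
    return sigma_n ** 2 / np.sqrt(var_x)

def soft_threshold(c, t):
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)
```

The threshold adapts per sub-band: noisy, low-energy sub-bands get a large threshold, while sub-bands rich in signal (large sigma_x) are thresholded gently, which is what preserves edge coefficients.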
NASA Astrophysics Data System (ADS)
Holan, Scott H.; Viator, John A.
2008-06-01
Photoacoustic image reconstruction may involve hundreds of point measurements, each of which contributes unique information about the subsurface absorbing structures under study. For backprojection imaging, two or more point measurements of photoacoustic waves induced by irradiating a biological sample with laser light are used to produce an image of the acoustic source. Each of these measurements must undergo some signal processing, such as denoising or system deconvolution. In order to process the numerous signals, we have developed an automated wavelet algorithm for denoising signals. We apply the discrete wavelet transform to denoise photoacoustic signals generated in a dilute melanoma cell suspension and in thermally coagulated blood. We used 5, 9, 45 and 270 melanoma cells in the laser beam path as test concentrations. For the burn phantom, we used coagulated blood in a 1.6 mm silicone tube submerged in Intralipid. Although these two targets were chosen as typical applications for photoacoustic detection and imaging, they are of independent interest. The denoising employs level-independent universal thresholding. In order to accommodate non-radix-2 signal lengths, we used a maximal overlap discrete wavelet transform (MODWT). For the lower melanoma cell concentrations, as the signal-to-noise ratio approached 1, denoising allowed better peak finding. For coagulated blood, the signals were denoised to yield a clean photoacoustic signal, resulting in an improvement of 22% in the reconstructed image. The entire signal processing chain was automated so that minimal user intervention was needed to reconstruct the images. Such an algorithm may be used for image reconstruction and signal extraction in applications such as burn depth imaging, depth profiling of vascular lesions in skin, and the detection of single cancer cells in blood samples.
Hanson, K.M.; Cunningham, G.S.
1996-04-01
The authors are developing a computer application, called the Bayes Inference Engine, to provide the means to make inferences about models of physical reality within a Bayesian framework. The construction of complex nonlinear models is achieved by a fully object-oriented design. The models are represented by a data-flow diagram that may be manipulated by the analyst through a graphical programming environment. Maximum a posteriori solutions are achieved using a general, gradient-based optimization algorithm. The application incorporates a new technique of estimating and visualizing the uncertainties in specific aspects of the model.
Projection space denoising with bilateral filtering and CT noise modeling for dose reduction in CT
Manduca, Armando; Yu, Lifeng; Trzasko, Joshua D.; Khaylova, Natalia; Kofler, James M.; McCollough, Cynthia M.; Fletcher, Joel G.
2009-11-15
Purpose: To investigate a novel locally adaptive projection space denoising algorithm for low-dose CT data. Methods: The denoising algorithm is based on bilateral filtering, which smooths values using a weighted average in a local neighborhood, with weights determined according to both spatial proximity and intensity similarity between the center pixel and the neighboring pixels. This filtering is locally adaptive and can preserve important edge information in the sinogram, thus maintaining high spatial resolution. A CT noise model that takes into account the bowtie filter and patient-specific automatic exposure control effects is also incorporated into the denoising process. The authors evaluated the noise-resolution properties of bilateral filtering incorporating such a CT noise model in phantom studies and preliminary patient studies with contrast-enhanced abdominal CT exams. Results: On a thin wire phantom, the noise-resolution properties were significantly improved with the denoising algorithm compared to commercial reconstruction kernels. The noise-resolution properties on low-dose (40 mA s) data after denoising approximated those of conventional reconstructions at twice the dose level. A separate contrast plate phantom showed improved depiction of low-contrast plates with the denoising algorithm over conventional reconstructions when noise levels were matched. Similar improvement in noise-resolution properties was found on CT colonography data and on five abdominal low-energy (80 kV) CT exams. In each abdominal case, a board-certified subspecialized radiologist rated the denoised 80 kV images markedly superior in image quality compared to the commercially available reconstructions, and denoising improved the image quality to the point where the 80 kV images alone were considered to be of diagnostic quality. Conclusions: The results demonstrate that bilateral filtering incorporating a CT noise model can achieve a significantly better noise-resolution trade-off.
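The bilateral filter at the heart of the method combines a spatial (domain) Gaussian with an intensity (range) Gaussian. Below is a minimal 1-D sketch of that weighting, e.g. along one detector row of a sinogram; the parameters are illustrative, and the CT noise model described in the abstract is omitted.

```python
import numpy as np

def bilateral_filter_1d(signal, radius=3, sigma_d=2.0, sigma_r=0.1):
    """Bilateral smoothing of a 1-D signal: each sample becomes a weighted
    average of its neighbors, with weights combining spatial proximity
    (sigma_d) and intensity similarity (sigma_r), so edges are preserved."""
    out = np.empty_like(signal, dtype=float)
    for i in range(signal.size):
        lo, hi = max(0, i - radius), min(signal.size, i + radius + 1)
        window = signal[lo:hi]
        dist = np.arange(lo, hi) - i
        w = (np.exp(-dist**2 / (2.0 * sigma_d**2))
             * np.exp(-(window - signal[i])**2 / (2.0 * sigma_r**2)))
        out[i] = np.sum(w * window) / np.sum(w)
    return out
```

Because samples on the far side of a sharp edge get a near-zero range weight, a step profile passes through almost unchanged while small fluctuations are averaged out.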
A computationally efficient denoising and hole-filling method for depth image enhancement
NASA Astrophysics Data System (ADS)
Liu, Soulan; Chen, Chen; Kehtarnavaz, Nasser
2016-04-01
Depth maps captured by Kinect depth cameras are being widely used for 3D action recognition. However, such images often appear noisy and contain missing pixels or black holes. This paper presents a computationally efficient method for both denoising and hole-filling in depth images. The denoising is achieved by utilizing a combination of Gaussian kernel filtering and anisotropic filtering. The hole-filling is achieved by utilizing a combination of morphological filtering and zero block filtering. Experimental results using the publicly available datasets are provided indicating the superiority of the developed method in terms of both depth error and computational efficiency compared to three existing methods.
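The hole-filling stage can be approximated by replacing zero-valued pixels with a statistic of their valid neighbors. The sketch below uses a neighborhood median as a simple stand-in for the paper's morphological and zero-block filtering; the function name and radius are our assumptions.

```python
import numpy as np

def fill_holes(depth, radius=1):
    """Fill zero-valued 'holes' in a depth map with the median of the
    non-zero depths in a small neighborhood (simple stand-in for
    morphological / zero-block filtering)."""
    out = depth.astype(float).copy()
    h, w = depth.shape
    for y, x in zip(*np.nonzero(depth == 0)):
        lo_y, hi_y = max(0, y - radius), min(h, y + radius + 1)
        lo_x, hi_x = max(0, x - radius), min(w, x + radius + 1)
        neigh = depth[lo_y:hi_y, lo_x:hi_x]
        valid = neigh[neigh > 0]
        if valid.size:                       # leave the hole if no valid neighbor
            out[y, x] = np.median(valid)
    return out
```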
[Near infrared spectra (NIR) analysis of octane number by wavelet denoising-derivative method].
Tian, Gao-you; Yuan, Hong-fu; Chu, Xiao-li; Liu, Hui-ying; Lu, Wan-zhen
2005-04-01
Derivatives can correct baseline effects but also increase the noise level. The wavelet transform has proven an efficient tool for de-noising. This paper addresses the application of the wavelet transform and derivatives in the NIR analysis of octane number (RON). The derivative parameters, as well as their effects on the noise level and the analytic accuracy of RON, have been studied in detail. The results show that the derivative can correct baseline effects and increase the analytic accuracy. Noise in the derivative spectra is highly detrimental to the analysis of RON. Wavelet de-noising can increase the S/N and improve the analytical accuracy.
Statistical inference for inverse problems
NASA Astrophysics Data System (ADS)
Bissantz, Nicolai; Holzmann, Hajo
2008-06-01
In this paper we study statistical inference for certain inverse problems. We go beyond mere estimation purposes and review and develop the construction of confidence intervals and confidence bands in some inverse problems, including deconvolution and the backward heat equation. Further, we discuss the construction of certain hypothesis tests, in particular concerning the number of local maxima of the unknown function. The methods are illustrated in a case study, where we analyze the distribution of heliocentric escape velocities of galaxies in the Centaurus galaxy cluster, and provide statistical evidence for its bimodality.
Geometric de-noising of protein-protein interaction networks.
Kuchaiev, Oleksii; Rasajski, Marija; Higham, Desmond J; Przulj, Natasa
2009-08-01
Understanding complex networks of protein-protein interactions (PPIs) is one of the foremost challenges of the post-genomic era. Due to recent advances in experimental biotechnology, including yeast-2-hybrid (Y2H), tandem affinity purification (TAP) and other high-throughput methods for protein-protein interaction (PPI) detection, huge amounts of PPI network data are becoming available. Of major concern, however, are the levels of noise and incompleteness. For example, for Y2H screens, it is thought that the false positive rate could be as high as 64%, and the false negative rate may range from 43% to 71%. TAP experiments are believed to have comparable levels of noise. We present a novel technique to assess the confidence levels of interactions in PPI networks obtained from experimental studies. We use it for predicting new interactions and thus for guiding future biological experiments. This technique is the first to utilize the currently best-fitting network model for PPI networks, geometric graphs. Our approach achieves a specificity of 85% and a sensitivity of 90%. We use it to assign confidence scores to physical protein-protein interactions in the human PPI network downloaded from BioGRID. Using our approach, we predict 251 interactions in the human PPI network, a statistically significant fraction of which correspond to protein pairs sharing common GO terms. Moreover, we validate a statistically significant portion of our predicted interactions in the HPRD database and the newer release of BioGRID. The data and Matlab code implementing the methods are freely available from the web site: http://www.kuchaev.com/Denoising.
ERIC Educational Resources Information Center
Watson, Jane
2007-01-01
Inference, or decision making, is seen in curriculum documents as the final step in a statistical investigation. For a formal statistical enquiry this may be associated with sophisticated tests involving probability distributions. For young students without the mathematical background to perform such tests, it is still possible to draw informal…
Yu, Hancheng; Zhao, Li; Wang, Haixian
2009-10-01
This correspondence proposes an efficient algorithm for removing Gaussian noise from corrupted images by combining a wavelet-based trivariate shrinkage filter with a spatial-based joint bilateral filter. In the wavelet domain, the wavelet coefficients are modeled as a trivariate Gaussian distribution, taking into account the statistical dependencies among intrascale wavelet coefficients, and a trivariate shrinkage filter is then derived using the maximum a posteriori (MAP) estimator. Although wavelet-based methods are efficient in image denoising, they are prone to producing salient artifacts such as low-frequency noise and edge ringing, which relate to the structure of the underlying wavelet. On the other hand, most spatial-based algorithms produce much higher-quality denoised images with fewer artifacts. However, they are usually too computationally demanding. In order to reduce the computational cost, we develop an efficient joint bilateral filter by using the wavelet denoising result rather than directly processing the noisy image in the spatial domain. This filter can suppress the noise while preserving image details at a small computational cost. An extension to color image denoising is also presented. We compare our denoising algorithm with other denoising techniques in terms of PSNR and visual quality. The experimental results indicate that our algorithm is competitive with other denoising techniques.
Carasso, Alfred S; Vladár, András E
2012-01-01
Helium ion microscopes (HIM) are capable of acquiring images with better than 1 nm resolution, and HIM images are particularly rich in morphological surface details. However, such images are generally quite noisy. A major challenge is to denoise these images while preserving delicate surface information. This paper presents a powerful slow motion denoising technique, based on solving linear fractional diffusion equations forward in time. The method is easily implemented computationally, using fast Fourier transform (FFT) algorithms. When applied to actual HIM images, the method is found to reproduce the essential surface morphology of the sample with high fidelity. In contrast, such highly sophisticated methodologies as Curvelet Transform denoising, and Total Variation denoising using split Bregman iterations, are found to eliminate vital fine scale information, along with the noise. Image Lipschitz exponents are a useful image metrology tool for quantifying the fine structure content in an image. In this paper, this tool is applied to rank order the above three distinct denoising approaches, in terms of their texture preserving properties. In several denoising experiments on actual HIM images, it was found that fractional diffusion smoothing performed noticeably better than split Bregman TV, which in turn, performed slightly better than Curvelet denoising.
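The forward-in-time linear fractional diffusion smoothing described above lends itself to an FFT implementation, since each Fourier mode is simply damped. A 1-D sketch under that reading follows; the exponent convention and parameter values are our assumptions, not the paper's.

```python
import numpy as np

def fractional_diffusion_smooth(signal, t=0.5, alpha=0.8):
    """'Slow motion' smoothing by evolving a linear fractional diffusion
    equation forward in time via the FFT: each Fourier mode is damped by
    exp(-t * |xi|^(2*alpha)); alpha = 1 recovers ordinary heat-equation
    (Gaussian-like) smoothing, smaller alpha damps fine scales more gently."""
    n = signal.size
    xi = np.fft.fftfreq(n) * 2.0 * np.pi
    damp = np.exp(-t * np.abs(xi) ** (2.0 * alpha))
    return np.real(np.fft.ifft(np.fft.fft(signal) * damp))
```

The DC mode is untouched (damp = 1 at xi = 0), so the mean is preserved, while all higher frequencies are attenuated; increasing t plays the role of letting the diffusion run longer, giving the "slow motion" control over how much texture is removed.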
Cannistraci, Carlo V; Montevecchi, Franco M; Alessio, Massimo
2009-11-01
Denoising is a fundamental early stage in 2-DE image analysis that strongly influences spot detection and pixel-based methods. A novel nonlinear adaptive spatial filter (median-modified Wiener filter, MMWF) is here compared with five well-established denoising techniques (Median, Wiener, Gaussian, and Polynomial-Savitzky-Golay filters; wavelet denoising) to suggest, by means of fuzzy sets evaluation, the best denoising approach to use in practice. Although the median filter and wavelet denoising achieved the best performance in spike and Gaussian denoising respectively, they are unsuitable for the simultaneous removal of different types of noise, because their best setting is noise-dependent. Conversely, MMWF, which ranked second in each individual denoising category, was evaluated as the best filter for global denoising, since its best setting is invariant to the type of noise. In addition, the median filter eroded the edge of isolated spots and filled the space between close-set spots, whereas MMWF, because of a novel filter effect (drop-off effect), does not suffer from the erosion problem, preserves the morphology of close-set spots, and avoids spot and spike fuzzification, an aberration encountered with the Wiener filter. In our tests, MMWF was assessed as the best choice when the goal is to minimize spot edge aberrations while removing spike and Gaussian noise.
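One plausible reading of the MMWF is the classical pixel-wise adaptive Wiener filter with the local mean replaced by the local median, which is what would give it robustness to spikes. A minimal 1-D sketch under that reading (function name, window size, and noise-variance handling are our assumptions):

```python
import numpy as np

def mmwf_1d(x, radius=2, noise_var=0.01):
    """Median-modified Wiener filter (1-D sketch): the adaptive Wiener
    estimate mu + gain*(x - mu), but with the local *median* as mu
    instead of the local mean, for robustness to spike noise."""
    out = np.empty_like(x, dtype=float)
    for i in range(x.size):
        lo, hi = max(0, i - radius), min(x.size, i + radius + 1)
        win = x[lo:hi]
        med = np.median(win)
        var = np.mean((win - med) ** 2)
        gain = max(var - noise_var, 0.0) / max(var, noise_var)
        out[i] = med + gain * (x[i] - med)
    return out
```

When the local variance is at or below the assumed noise variance, the gain collapses to zero and the output falls back to the local median, which is how spikes get suppressed.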
Liu, Xiaoming; Yang, Zhou; Wang, Jia; Liu, Jun; Zhang, Kai; Hu, Wei
2017-01-01
Image denoising is a crucial step before performing segmentation or feature extraction on an image, and it affects the final result in image processing. In recent years, many patch-based image denoising methods exploiting the self-similarity of images have been proposed, but most of them, known as internal denoising methods, use only the noisy image itself, so their performance is constrained by the limited information available. We propose a patch-based method that uses a low-rank technique and a targeted database to denoise the optical coherence tomography (OCT) image. When selecting similar patches for a noisy patch, our method combines internal and external denoising by also utilizing other images relevant to the noisy image; our targeted database is made up of these two kinds of images, which is an improvement over previous methods. Next, we leverage the low-rank technique to denoise the group matrix consisting of the noisy patch and the corresponding similar patches, based on the fact that a clean image can be seen as a low-rank matrix and the rank of the noisy image is much larger than that of the clean image. After the first-step denoising is accomplished, we take advantage of the Gabor transform, which accounts for the layered structure of OCT retinal images, to construct the image for the second denoising step. Experimental results demonstrate that our method compares favorably with existing state-of-the-art methods.
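The low-rank step on the group matrix can be sketched with a plain truncated SVD; the rank choice and function name are illustrative assumptions, and the paper's targeted-database patch search is not reproduced here.

```python
import numpy as np

def low_rank_denoise(patch_matrix, rank):
    """Denoise a group matrix (similar patches stacked as rows) by keeping
    only its leading singular values: a clean patch group is approximately
    low rank, while noise spreads energy over all singular directions."""
    u, s, vt = np.linalg.svd(patch_matrix, full_matrices=False)
    s[rank:] = 0.0                      # discard the noise-dominated tail
    return u @ np.diag(s) @ vt
```

In practice the rank (or a singular-value threshold) would be tied to an estimate of the noise level; here it is passed in explicitly.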
NASA Astrophysics Data System (ADS)
Peña, M.
2016-10-01
Achieving an acceptable signal-to-noise ratio (SNR) can be difficult when working in sparsely populated waters and/or when species have low scattering, such as fluid-filled animals. The increasing use of higher frequencies and the study of deeper depths in fisheries acoustics, as well as the use of commercial vessels, is raising the need for good denoising algorithms. The use of a lower Sv threshold to remove noise or unwanted targets is not suitable in many cases and increases the relative background-noise component in the echogram, demanding more effectiveness from denoising algorithms. The Adaptive Wiener Filter (AWF) denoising algorithm is presented in this study. The technique is based on the AWF commonly used in digital photography and video enhancement. The algorithm first improves the quality of the data with a variance-dependent smoothing, before estimating the noise level as the envelope of the Sv minima. The AWF denoising algorithm outperforms existing algorithms in the presence of Gaussian, speckle, and salt-and-pepper noise, although impulse noise needs to be removed beforehand. Cleaned echograms present homogeneous echotraces with outlined edges.
NASA Astrophysics Data System (ADS)
Wang, Dong; Singh, Vijay P.; Shang, Xiaosan; Ding, Hao; Wu, Jichun; Wang, Lachun; Zou, Xinqing; Chen, Yuanfang; Chen, Xi; Wang, Shicheng; Wang, Zhenlong
2014-07-01
De-noising meteorologic and hydrologic time series is important to improve the accuracy and reliability of extraction, analysis, simulation, and forecasting. A hybrid approach, combining sample entropy and a wavelet de-noising method, is developed to separate noise from the original series and is named AWDA-SE (adaptive wavelet de-noising approach using sample entropy). The AWDA-SE approach adaptively determines the threshold for the wavelet analysis. Two kinds of meteorologic and hydrologic data sets, a synthetic data set and 3 representative field-measured data sets (one is the annual rainfall data of Jinan station and the other two are annual streamflow series from two typical stations in China, Yingluoxia station on the Heihe River, which is little affected by human activities, and Lijin station on the Yellow River, which is greatly affected by human activities), are used to illustrate the approach. The AWDA-SE approach is compared with three conventional de-noising methods, including the fixed-form threshold algorithm, the Stein unbiased risk estimation algorithm, and the minimax algorithm. Results show that the AWDA-SE approach effectively separates the signal and noise of the data sets and is found to be better than the conventional methods. Measures of assessment standards show that the developed approach can be employed to investigate noisy and short time series and can also be applied to other areas.
Kernel regression based feature extraction for 3D MR image denoising.
López-Rubio, Ezequiel; Florentín-Núñez, María Nieves
2011-08-01
Kernel regression is a non-parametric estimation technique which has been successfully applied to image denoising and enhancement in recent times. Magnetic resonance 3D image denoising has two features that distinguish it from other typical image denoising applications, namely the tridimensional structure of the images and the nature of the noise, which is Rician rather than Gaussian or impulsive. Here we propose a principled way to adapt the general kernel regression framework to this particular problem. Our noise removal system is rooted on a zeroth order 3D kernel regression, which computes a weighted average of the pixels over a regression window. We propose to obtain the weights from the similarities among small sized feature vectors associated to each pixel. In turn, these features come from a second order 3D kernel regression estimation of the original image values and gradient vectors. By considering directional information in the weight computation, this approach substantially enhances the performance of the filter. Moreover, Rician noise level is automatically estimated without any need of human intervention, i.e. our method is fully automated. Experimental results over synthetic and real images demonstrate that our proposal achieves good performance with respect to the other MRI denoising filters being compared.
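The zeroth-order kernel regression underlying the filter is a kernel-weighted average over the regression window. A minimal 2-D sketch follows; the paper works in 3-D with weights derived from feature vectors and handles Rician noise, both of which are omitted here.

```python
import numpy as np

def kernel_regression_denoise(img, radius=2, h=1.5):
    """Zeroth-order kernel regression: each pixel becomes a Gaussian-kernel
    weighted average of the pixels in its regression window (2-D sketch)."""
    H, W = img.shape
    out = np.empty_like(img, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    kernel = np.exp(-(ys**2 + xs**2) / (2.0 * h**2))   # spatial weights
    for y in range(H):
        for x in range(W):
            lo_y, hi_y = max(0, y - radius), min(H, y + radius + 1)
            lo_x, hi_x = max(0, x - radius), min(W, x + radius + 1)
            k = kernel[lo_y - y + radius:hi_y - y + radius,
                       lo_x - x + radius:hi_x - x + radius]
            out[y, x] = np.sum(k * img[lo_y:hi_y, lo_x:hi_x]) / np.sum(k)
    return out
```

The paper's second-order variant additionally fits local gradients, and replaces the purely spatial kernel with similarity weights between per-pixel feature vectors, which is what gives the filter its edge-directional behavior.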
Group-sparse representation with dictionary learning for medical image denoising and fusion.
Li, Shutao; Yin, Haitao; Fang, Leyuan
2012-12-01
Recently, sparse representation has attracted a lot of interest in various areas. However, the standard sparse representation does not consider the intrinsic structure, i.e., that the nonzero elements occur in clusters, called group sparsity. Furthermore, there is no dictionary learning method for group sparse representation that considers the geometrical structure of the space spanned by the atoms. In this paper, we propose a novel dictionary learning method, called Dictionary Learning with Group Sparsity and Graph Regularization (DL-GSGR). First, the geometrical structure of the atoms is modeled as the graph regularization. Then, combining group sparsity and graph regularization, the DL-GSGR is presented, which is solved by alternating between group sparse coding and dictionary updating. In this way, the group coherence of the learned dictionary can be made small enough that any signal can be group sparse coded effectively. Finally, group sparse representation with DL-GSGR is applied to 3-D medical image denoising and image fusion. Specifically, in 3-D medical image denoising, a 3-D processing mechanism (using the similarity among nearby slices) and temporal regularization (to preserve the correlations across nearby slices) are exploited. The experimental results on 3-D image denoising and image fusion demonstrate the superiority of our proposed denoising and fusion approaches.
Shi, Yan; Yang, Xiaoyuan; Guo, Yuhua
2014-01-01
This paper is devoted to the study of a directional lifting transform for wavelet frames. A nonsubsampled lifting structure is developed to maintain the translation invariance as it is an important property in image denoising. Then, the directionality of the lifting-based tight frame is explicitly discussed, followed by a specific translation invariant directional framelet transform (TIDFT). The TIDFT has two framelets ψ1, ψ2 with vanishing moments of order two and one respectively, which are able to detect singularities in a given direction set. It provides an efficient and sparse representation for images containing rich textures along with properties of fast implementation and perfect reconstruction. In addition, an adaptive block-wise orientation estimation method based on Gabor filters is presented instead of the conventional minimization of residuals. Furthermore, the TIDFT is utilized to exploit the capability of image denoising, incorporating the MAP estimator for multivariate exponential distribution. Consequently, the TIDFT is able to eliminate the noise effectively while preserving the textures simultaneously. Experimental results show that the TIDFT outperforms some other frame-based denoising methods, such as contourlet and shearlet, and is competitive to the state-of-the-art denoising approaches.
Subject-specific patch-based denoising for contrast-enhanced cardiac MR images
NASA Astrophysics Data System (ADS)
Ma, Lorraine; Ebrahimi, Mehran; Pop, Mihaela
2016-03-01
Many patch-based techniques in imaging, e.g., Non-local means denoising, require tuning parameters to yield optimal results. In real-world applications, e.g., denoising of MR images, ground truth is not generally available and the process of choosing an appropriate set of parameters is a challenge. Recently, Zhu et al. proposed a method to define an image quality measure, called Q, that does not require ground truth. In this manuscript, we evaluate the effect of various parameters of the NL-means denoising on this quality metric Q. Our experiments are based on the late-gadolinium enhancement (LGE) cardiac MR images that are inherently noisy. Our described exhaustive evaluation approach can be used in tuning parameters of patch-based schemes. Even in the case that an estimation of optimal parameters is provided using another existing approach, our described method can be used as a secondary validation step. Our preliminary results suggest that denoising parameters should be case-specific rather than generic.
Automated wavelet denoising of photoacoustic signals for burn-depth image reconstruction
NASA Astrophysics Data System (ADS)
Holan, Scott H.; Viator, John A.
2007-02-01
Photoacoustic image reconstruction involves dozens or perhaps hundreds of point measurements, each of which contributes unique information about the subsurface absorbing structures under study. For backprojection imaging, two or more point measurements of photoacoustic waves induced by irradiating a sample with laser light are used to produce an image of the acoustic source. Each of these point measurements must undergo some signal processing, such as denoising and system deconvolution. In order to efficiently process the numerous signals acquired for photoacoustic imaging, we have developed an automated wavelet algorithm for processing signals generated in a burn injury phantom. We used the discrete wavelet transform to denoise photoacoustic signals generated in an optically turbid phantom containing whole blood. The denoising used universal level-independent thresholding, as developed by Donoho and Johnstone. The entire signal processing technique was automated so that no user intervention was needed to reconstruct the images. The signals were backprojected using the automated wavelet processing software, and reconstruction using the denoised signals was shown to improve image quality by 21%, using a relative 2-norm difference scheme.
An unbiased risk estimator for image denoising in the presence of mixed poisson-gaussian noise.
Le Montagner, Yoann; Angelini, Elsa D; Olivo-Marin, Jean-Christophe
2014-03-01
The behavior and performance of denoising algorithms are governed by one or several parameters, whose optimal settings depend on the content of the processed image and the characteristics of the noise, and are generally designed to minimize the mean squared error (MSE) between the denoised image returned by the algorithm and a virtual ground truth. In this paper, we introduce a new Poisson-Gaussian unbiased risk estimator (PG-URE) of the MSE applicable to a mixed Poisson-Gaussian noise model that unifies the widely used Gaussian and Poisson noise models in fluorescence bioimaging applications. We propose a stochastic methodology to evaluate this estimator in the case when little is known about the internal machinery of the considered denoising algorithm, and we analyze both theoretically and empirically the characteristics of the PG-URE estimator. Finally, we evaluate the PG-URE-driven parametrization for three standard denoising algorithms, with and without variance stabilizing transforms, and different characteristics of the Poisson-Gaussian noise mixture.
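The PG-URE estimator itself is involved, but the variance-stabilizing transforms mentioned above can be illustrated with the generalized Anscombe transform, which maps mixed Poisson-Gaussian data to approximately unit-variance Gaussian noise so that a Gaussian denoiser can be applied. Parameter names below are ours, and a zero-mean Gaussian component is assumed.

```python
import numpy as np

def generalized_anscombe(z, gain=1.0, sigma=0.1):
    """Generalized Anscombe transform for mixed Poisson-Gaussian data
    z = gain * Poisson + Gaussian(0, sigma^2): approximately stabilizes
    the variance to 1, so downstream Gaussian denoisers apply."""
    arg = gain * z + (3.0 / 8.0) * gain**2 + sigma**2
    return (2.0 / gain) * np.sqrt(np.maximum(arg, 0.0))
```

A denoiser would be applied to the transformed data and the result mapped back with an (exact or algebraic) inverse; the PG-URE machinery in the paper instead tunes the denoiser's parameters directly against an unbiased estimate of the MSE.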
Texture preservation in de-noising UAV surveillance video through multi-frame sampling
NASA Astrophysics Data System (ADS)
Wang, Yi; Fevig, Ronald A.; Schultz, Richard R.
2009-02-01
Image de-noising is a widely-used technology in modern real-world surveillance systems. Methods can seldom do both de-noising and texture preservation very well without a direct knowledge of the noise model. Most of the neighborhood fusion-based de-noising methods tend to over-smooth the images, which causes a significant loss of detail. Recently, a new non-local means method has been developed, which is based on the similarities among the different pixels. This technique results in good preservation of the textures; however, it also causes some artifacts. In this paper, we utilize the scale-invariant feature transform (SIFT) [1] method to find the corresponding region between different images, and then reconstruct the de-noised images by a weighted sum of these corresponding regions. Both hard and soft criteria are chosen in order to minimize the artifacts. Experiments applied to real unmanned aerial vehicle thermal infrared surveillance video show that our method is superior to popular methods in the literature.
NASA Astrophysics Data System (ADS)
Quesada-Barriuso, Pablo; Heras, Dora B.; Argüello, Francisco
2016-10-01
The classification of remote sensing hyperspectral images for land cover applications is a very active topic. In the case of supervised classification, Support Vector Machines (SVMs) play a dominant role. Recently, the Extreme Learning Machine (ELM) algorithm has also been extensively used. The classification scheme previously published by the authors, called WT-EMP, introduces spatial information into the classification process by means of an Extended Morphological Profile (EMP) created from features extracted by wavelets. In addition, the hyperspectral image is denoised in the 2-D spatial domain, also using wavelets, and is joined to the EMP via a stacked vector. In this paper, the scheme is improved to achieve two goals. The first is to reduce the classification time while preserving the accuracy of the classification by using ELM instead of SVM. The second is to improve the accuracy by performing not only a 2-D denoising for every spectral band, but also an additional prior 1-D spectral-signature denoising applied to each pixel vector of the image. For each denoising, the image is transformed by applying a 1-D or 2-D wavelet transform, and then NeighShrink thresholding is applied. Improvements in classification accuracy are obtained, especially for images with close regions in the classification reference map, because in these cases the accuracy of the classification at the edges between classes is more relevant.
Adaptive Tensor-Based Principal Component Analysis for Low-Dose CT Image Denoising.
Ai, Danni; Yang, Jian; Fan, Jingfan; Cong, Weijian; Wang, Yongtian
2015-01-01
Computed tomography (CT) has revolutionized diagnostic radiology but involves large radiation doses that directly impact image quality. In this paper, we propose an adaptive tensor-based principal component analysis (AT-PCA) algorithm for low-dose CT image denoising. Pixels in the image are represented by their nearby neighbors and modeled as patches. Adaptive searching windows are calculated to find similar patches as training groups for further processing. Tensor-based PCA is used to obtain transformation matrices, and coefficients are sequentially shrunk by the linear minimum mean square error. Reconstructed patches are obtained, and a denoised image is finally achieved by aggregating all of these patches. The experimental results on the standard test image show that the best results are obtained with two denoising rounds according to six quantitative measures. For the experiment on clinical images, the proposed AT-PCA method can suppress the noise, enhance the edges, and improve the image quality more effectively than the NLM and KSVD denoising methods.
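The "shrink coefficients by the linear minimum mean square error" step can be sketched for a single group of patches as follows. Note this uses ordinary matrix PCA rather than the paper's tensor-based PCA, so it illustrates only the shrinkage rule; the function name and the known-noise-variance assumption are ours.

```python
import numpy as np

def pca_lmmse_denoise(patches, noise_var):
    """PCA-domain denoising of a group of similar patches (one per row):
    project onto the principal components of the group, then shrink each
    coefficient by the LMMSE factor signal_var / (signal_var + noise_var)."""
    mean = patches.mean(axis=0)
    centered = patches - mean
    cov = centered.T @ centered / patches.shape[0]
    eigvals, eigvecs = np.linalg.eigh(cov)            # PCA basis of the group
    coeffs = centered @ eigvecs
    signal_var = np.maximum(eigvals - noise_var, 0.0) # estimated clean variance
    shrink = signal_var / np.maximum(signal_var + noise_var, 1e-12)
    return (coeffs * shrink) @ eigvecs.T + mean
```

Directions whose variance is explained entirely by noise get a shrinkage factor near zero and are suppressed, while strong signal directions pass through nearly unchanged.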
Denoising of hyperspectral images by best multilinear rank approximation of a tensor
NASA Astrophysics Data System (ADS)
Marin-McGee, Maider; Velez-Reyes, Miguel
2010-04-01
The hyperspectral image cube can be modeled as a three-dimensional array. Tensors and the tools of multilinear algebra provide a natural framework to deal with this type of mathematical object. Singular value decomposition (SVD) and its variants have been used by the HSI community for denoising of hyperspectral imagery. Denoising of HSI using SVD is achieved by finding a low-rank approximation of a matrix representation of the hyperspectral image cube. This paper investigates similar concepts in hyperspectral denoising by using a low multilinear rank approximation of the given HSI tensor representation. The Best Multilinear Rank Approximation (BMRA) of a given tensor A seeks a lower multilinear rank tensor B that is as close as possible to A in the Frobenius norm. Different numerical methods to compute the BMRA, using the Alternating Least Squares (ALS) method and Newton's method over a product of Grassmann manifolds, are presented. The effect of the multilinear rank, the numerical method used to compute the BMRA, and different parameter choices in those methods are studied. Results show that comparable results are achievable with both ALS and Newton-type methods. Also, classification results using the filtered tensor are better than those obtained with denoising using either SVD or MNF.
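A common non-iterative starting point for the BMRA is the truncated higher-order SVD (HOSVD), which projects each mode onto its leading singular vectors; the ALS and Newton schemes discussed above refine such an approximation. A sketch for a 3-way tensor (function names are ours):

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n matricization of a 3-way tensor."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def mode_multiply(tensor, matrix, mode):
    """Multiply a tensor by a matrix along the given mode."""
    moved = np.moveaxis(tensor, mode, 0)
    return np.moveaxis(np.tensordot(matrix, moved, axes=1), 0, mode)

def truncated_hosvd(tensor, ranks):
    """One-pass multilinear rank truncation: for each mode, keep the
    leading left singular vectors of the mode unfolding, project to a
    small core tensor, then map the core back to the full space."""
    factors = [np.linalg.svd(unfold(tensor, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = tensor
    for m, u in enumerate(factors):
        core = mode_multiply(core, u.T, m)      # compress each mode
    approx = core
    for m, u in enumerate(factors):
        approx = mode_multiply(approx, u, m)    # reconstruct low-rank tensor
    return approx
```

Unlike ALS or Newton iterations on the Grassmann manifold, this single pass is generally only quasi-optimal in the Frobenius norm, but it is exact for tensors that already have the target multilinear rank.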
A hybrid spatial-spectral denoising method for infrared hyperspectral images using 2DPCA
NASA Astrophysics Data System (ADS)
Huang, Jun; Ma, Yong; Mei, Xiaoguang; Fan, Fan
2016-11-01
The traditional noise reduction methods for 3-D infrared hyperspectral images typically operate independently in either the spatial or spectral domain, and such methods overlook the relationship between the two domains. To address this issue, we propose a hybrid spatial-spectral method in this paper to link both domains. First, principal component analysis and bivariate wavelet shrinkage are performed in the 2-D spatial domain. Second, 2-D principal component analysis transformation is conducted in the 1-D spectral domain to separate the basic components from detail ones. The energy distribution of noise is unaffected by orthogonal transformation; therefore, the signal-to-noise ratio of each component is used as a criterion to determine whether a component should be protected from over-denoising or denoised with certain 1-D denoising methods. This study implements the 1-D wavelet shrinking threshold method based on Stein's unbiased risk estimator, and the quantitative results on publicly available datasets demonstrate that our method can improve denoising performance more effectively than other state-of-the-art methods can.
Serbes, Gorkem; Aydin, Nizamettin
2014-01-01
Quadrature signals are dual-channel signals obtained from systems employing quadrature demodulation. Embolic Doppler ultrasound signals obtained from stroke-prone patients by using Doppler ultrasound systems are quadrature signals caused by emboli, which are particles bigger than red blood cells within the circulatory system. Detection of emboli is an important step in diagnosing stroke. The most widely used parameter in the detection of emboli is the embolic signal-to-background signal ratio. Therefore, in order to increase this ratio, denoising techniques are employed in detection systems. The discrete wavelet transform has been used for denoising of embolic signals, but it lacks the shift-invariance property. Instead, the dual-tree complex wavelet transform, which has a near-shift-invariance property, can be used. However, it is computationally expensive as two wavelet trees are required. The recently proposed modified dual-tree complex wavelet transform, which reduces the computational complexity, can also be used. In this study, the denoising performance of this method is extensively evaluated and compared with the others by using simulated and real quadrature signals. The quantitative results demonstrate that the modified dual-tree-complex-wavelet-transform-based denoising outperforms the conventional discrete wavelet transform at the same level of computational complexity and exhibits almost equal performance to the dual-tree complex wavelet transform at almost half the computational cost.
NASA Astrophysics Data System (ADS)
Xian, Yong-Li; Dai, Yun; Gao, Chun-Ming; Du, Rui
2017-01-01
Noninvasive measurement of hemoglobin oxygen saturation (SO2) in retinal vessels is based on spectrophotometry and the spectral absorption characteristics of tissue. Retinal images at 570 and 600 nm are simultaneously captured by dual-wavelength retinal oximetry based on a fundus camera. SO2 is finally measured after vessel segmentation, image registration, and calculation of the optical density ratio of the two images. However, image noise can dramatically affect subsequent image processing and the accuracy of SO2 calculation, a problem that remains to be addressed. The purpose of this study was to improve image quality and SO2 calculation accuracy through noise analysis and a denoising algorithm for dual-wavelength images. First, noise parameters were estimated with a mixed Poisson-Gaussian (MPG) noise model. Second, an MPG denoising algorithm, which we call variance stabilizing transform (VST) + dual-domain image denoising (DDID), was proposed based on the VST and an improved dual-domain filter. The results show that VST + DDID effectively removes MPG noise while preserving image edge details. VST + DDID is better than VST + block-matching and three-dimensional filtering, especially in preserving low-contrast details. The subsequent simulation and analysis indicate that MPG noise in the retinal images can lead to erroneously low SO2 measurements, and that the denoised images provide more accurate grayscale values for retinal oximetry.
Adaptive Tensor-Based Principal Component Analysis for Low-Dose CT Image Denoising
Ai, Danni; Yang, Jian; Fan, Jingfan; Cong, Weijian; Wang, Yongtian
2015-01-01
Computed tomography (CT) has revolutionized diagnostic radiology but involves large radiation doses that directly impact image quality. In this paper, we propose an adaptive tensor-based principal component analysis (AT-PCA) algorithm for low-dose CT image denoising. Pixels in the image are represented by their nearby neighbors and are modeled as a patch. Adaptive searching windows are calculated to find similar patches as training groups for further processing. Tensor-based PCA is used to obtain transformation matrices, and coefficients are sequentially shrunk by the linear minimum mean square error. Reconstructed patches are obtained, and a denoised image is finally achieved by aggregating all of these patches. The experimental results on the standard test image show that the best results are obtained with two denoising rounds according to six quantitative measures. In the experiment on clinical images, the proposed AT-PCA method can suppress the noise, enhance the edges, and improve the image quality more effectively than the NLM and K-SVD denoising methods. PMID:25993566
Denoising 3D MR images by the enhanced non-local means filter for Rician noise.
Liu, Hong; Yang, Cihui; Pan, Ning; Song, Enmin; Green, Richard
2010-12-01
The non-local means (NLM) filter removes noise by calculating the weighted average of pixels in a global area and shows superiority over existing local filter methods that only consider local neighbor pixels. This filter has been successfully extended from 2D to 3D images and has been applied to denoising 3D magnetic resonance (MR) images. In this article, a novel filter based on the NLM filter is proposed to improve the denoising effect. Considering the characteristics of Rician noise in MR images, denoising by the NLM filter is first performed on the squared magnitude images. Then, unbiased correction is carried out to eliminate the biased deviation. When performing the NLM filter, the weight is calculated on the Gaussian-filtered image to reduce the disturbance of the noise. The performance of this filter is evaluated by a qualitative and quantitative comparison with three other filters, namely the original NLM filter, the unbiased NLM (UNLM) filter and the Rician NLM (RNLM) filter. Experimental results demonstrate that the proposed filter achieves better denoising performance than the other filters compared.
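The squared-magnitude trick with the Rician bias correction can be sketched directly. The sketch below is a minimal reading of the idea, not the authors' implementation: it computes NLM weights from the raw (unfiltered) image rather than the Gaussian-prefiltered image the paper describes, and all parameter defaults are assumptions.

```python
import numpy as np

def nlm_rician(mag, sigma, search=3, patch=1, h=None):
    """NLM averaging on the *squared* magnitude image, followed by the
    unbiased correction E[M^2] = A^2 + 2*sigma^2 that holds for
    Rician-distributed MR magnitudes. Parameter defaults are assumptions."""
    if h is None:
        h = 10.0 * sigma
    sq = mag.astype(float) ** 2           # work on squared magnitudes
    pad = search + patch
    P = np.pad(sq, pad, mode="reflect")
    out = np.zeros_like(sq)
    H, W = sq.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad, j + pad
            ref = P[ci - patch:ci + patch + 1, cj - patch:cj + patch + 1]
            wsum = vsum = 0.0
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ni, nj = ci + di, cj + dj
                    cand = P[ni - patch:ni + patch + 1, nj - patch:nj + patch + 1]
                    d2 = np.mean((ref - cand) ** 2)   # patch distance
                    w = np.exp(-d2 / (h * h))         # similarity weight
                    wsum += w
                    vsum += w * P[ni, nj]
            out[i, j] = vsum / wsum
    # Unbiased correction: subtract the 2*sigma^2 Rician bias, clip, sqrt.
    return np.sqrt(np.maximum(out - 2.0 * sigma ** 2, 0.0))
```

The pixelwise loops are deliberately naive for readability; production NLM implementations vectorize or pre-classify patches.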
Biomedical image and signal de-noising using dual tree complex wavelet transform
NASA Astrophysics Data System (ADS)
Rizi, F. Yousefi; Noubari, H. Ahmadi; Setarehdan, S. K.
2011-10-01
The dual-tree complex wavelet transform (DTCWT) is a form of discrete wavelet transform which generates complex coefficients by using a dual tree of wavelet filters to obtain their real and imaginary parts. The purposes of de-noising are reducing the noise level and improving the signal-to-noise ratio (SNR) without distorting the signal or image. This paper proposes a method for removing white Gaussian noise from ECG signals and biomedical images. The discrete wavelet transform (DWT) is very valuable in a wide range of de-noising problems. However, it has limitations such as oscillations of the coefficients at a singularity, lack of directional selectivity in higher dimensions, aliasing and consequent shift variance. The complex wavelet transform (CWT) strategy that we focus on in this paper is Kingsbury's and Selesnick's dual-tree CWT (DTCWT), which outperforms the critically decimated DWT in a range of applications, such as de-noising. Each complex wavelet is oriented along one of six possible directions, and the magnitude of each complex wavelet has a smooth bell shape. In the final part of this paper, we present biomedical image and signal de-noising by means of thresholding the magnitude of the wavelet coefficients.
An NMR log echo data de-noising method based on the wavelet packet threshold algorithm
NASA Astrophysics Data System (ADS)
Meng, Xiangning; Xie, Ranhong; Li, Changxi; Hu, Falong; Li, Chaoliu; Zhou, Cancan
2015-12-01
To improve the de-noising of low signal-to-noise ratio (SNR) nuclear magnetic resonance (NMR) log echo data, this paper applies the wavelet packet threshold algorithm to the data. The principle of the algorithm is elaborated in detail. By comparing the properties of a series of wavelet packet bases and their relevance to the NMR log echo train signal, 'sym7' is found to be the optimal wavelet packet basis of the wavelet packet threshold algorithm for de-noising the NMR log echo train signal. A new method is presented to determine the optimal wavelet packet decomposition scale: within the scope of its maximum, the modulus maxima and Shannon entropy minimum criteria are used to determine the global and local optimal wavelet packet decomposition scales, respectively. The results of applying the method to simulated and actual NMR log echo data indicate that, compared with the wavelet threshold algorithm, the wavelet packet threshold algorithm shows higher decomposition accuracy and a better de-noising effect, and is much more suitable for de-noising low-SNR NMR log echo data.
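The shrinkage idea at the core of the method can be illustrated compactly. The sketch below substitutes a one-level Haar DWT with the universal soft threshold for the paper's 'sym7' wavelet packet decomposition and its optimal-scale selection, so it shows only the thresholding principle, not the method itself.

```python
import numpy as np

def haar_soft_denoise(x, sigma):
    """One-level Haar DWT + universal soft threshold (a simple stand-in
    for the paper's 'sym7' wavelet *packet* thresholding).
    Returns a signal of even length (a trailing odd sample is dropped)."""
    n = len(x) // 2 * 2
    a = (x[0:n:2] + x[1:n:2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0:n:2] - x[1:n:2]) / np.sqrt(2)   # detail coefficients
    t = sigma * np.sqrt(2 * np.log(n))       # universal threshold
    d = np.sign(d) * np.maximum(np.abs(d) - t, 0.0)  # soft threshold
    y = np.empty(n)
    y[0::2] = (a + d) / np.sqrt(2)           # inverse Haar transform
    y[1::2] = (a - d) / np.sqrt(2)
    return y
```

For a slowly decaying echo train, the detail band is almost pure noise, so soft-thresholding it removes roughly half the noise energy in one level; deeper packet decompositions extend the same step recursively.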
NASA Astrophysics Data System (ADS)
Hu, Changmiao; Bai, Yang; Tang, Ping
2016-06-01
We present a denoising algorithm for the pixel-response non-uniformity correction of a scientific complementary metal-oxide-semiconductor (CMOS) image sensor that captures images under extremely low-light conditions. By analyzing integrating-sphere experimental data, we develop a pixel-by-pixel flat-field denoising algorithm to remove this fixed-pattern noise, which occurs under low-light conditions and high pixel-response readouts. After the denoising algorithm is applied, the response of the CMOS image sensor imaging system to a uniform radiance field shows a high level of spatial uniformity.
Environment-dependent denoising autoencoder for distant-talking speech recognition
NASA Astrophysics Data System (ADS)
Ueda, Yuma; Wang, Longbiao; Kai, Atsuhiko; Ren, Bo
2015-12-01
In this paper, we propose an environment-dependent denoising autoencoder (DAE) and automatic environment identification based on a deep neural network (DNN) with blind reverberation estimation for robust distant-talking speech recognition. Recently, DAEs have been shown to be effective in many noise reduction and reverberation suppression applications because higher-level representations and increased flexibility of the feature mapping function can be learned. However, a DAE is not adequate in mismatched training and test environments. In a conventional DAE, parameters are trained using pairs of reverberant speech and clean speech under various acoustic conditions (that is, an environment-independent DAE). To address the above problem, we propose two environment-dependent DAEs to reduce the influence of mismatches between training and test environments. In the first approach, we train various DAEs using speech from different acoustic environments, and the DAE for the condition that best matches the test condition is automatically selected (that is, a two-step environment-dependent DAE). To improve environment identification performance, we propose a DNN that uses both reverberant speech and estimated reverberation. In the second approach, we add estimated reverberation features to the input of the DAE (that is, a one-step environment-dependent DAE or a reverberation-aware DAE). The proposed method is evaluated using speech in simulated and real reverberant environments. Experimental results show that the environment-dependent DAE outperforms the environment-independent one in both simulated and real reverberant environments. For two-step environment-dependent DAE, the performance of environment identification based on the proposed DNN approach is also better than that of the conventional DNN approach, in which only reverberant speech is used and reverberation is not blindly estimated. And, the one-step environment-dependent DAE significantly outperforms the two
1988-06-27
Optical artificial intelligence; optical inference engines; optical logic; optical information processing. [...] common. They arise in areas such as expert systems and other artificial intelligence systems. In recent years, the computer science language PROLOG has [...] optical processors should in principle be well suited for artificial intelligence applications. In recent years, symbolic logic processing [...]
Active inference and learning.
Friston, Karl; FitzGerald, Thomas; Rigoli, Francesco; Schwartenbeck, Philipp; O'Doherty, John; Pezzulo, Giovanni
2016-09-01
This paper offers an active inference account of choice behaviour and learning. It focuses on the distinction between goal-directed and habitual behaviour and how they contextualise each other. We show that habits emerge naturally (and autodidactically) from sequential policy optimisation when agents are equipped with state-action policies. In active inference, behaviour has explorative (epistemic) and exploitative (pragmatic) aspects that are sensitive to ambiguity and risk respectively, where epistemic (ambiguity-resolving) behaviour enables pragmatic (reward-seeking) behaviour and the subsequent emergence of habits. Although goal-directed and habitual policies are usually associated with model-based and model-free schemes, we find the more important distinction is between belief-free and belief-based schemes. The underlying (variational) belief updating provides a comprehensive (if metaphorical) process theory for several phenomena, including the transfer of dopamine responses, reversal learning, habit formation and devaluation. Finally, we show that active inference reduces to a classical (Bellman) scheme, in the absence of ambiguity.
Scene Construction, Visual Foraging, and Active Inference
Mirza, M. Berk; Adams, Rick A.; Mathys, Christoph D.; Friston, Karl J.
2016-01-01
This paper describes an active inference scheme for visual searches and the perceptual synthesis entailed by scene construction. Active inference assumes that perception and action minimize variational free energy, where actions are selected to minimize the free energy expected in the future. This assumption generalizes risk-sensitive control and expected utility theory to include epistemic value; namely, the value (or salience) of information inherent in resolving uncertainty about the causes of ambiguous cues or outcomes. Here, we apply active inference to saccadic searches of a visual scene. We consider the (difficult) problem of categorizing a scene, based on the spatial relationship among visual objects where, crucially, visual cues are sampled myopically through a sequence of saccadic eye movements. This means that evidence for competing hypotheses about the scene has to be accumulated sequentially, calling upon both prediction (planning) and postdiction (memory). Our aim is to highlight some simple but fundamental aspects of the requisite functional anatomy; namely, the link between approximate Bayesian inference under mean field assumptions and functional segregation in the visual cortex. This link rests upon the (neurobiologically plausible) process theory that accompanies the normative formulation of active inference for Markov decision processes. In future work, we hope to use this scheme to model empirical saccadic searches and identify the prior beliefs that underwrite intersubject variability in the way people forage for information in visual scenes (e.g., in schizophrenia). PMID:27378899
Cannistraci, Carlo Vittorio; Abbas, Ahmed; Gao, Xin
2015-01-26
Denoising multidimensional NMR spectra is a fundamental step in NMR protein structure determination. The state-of-the-art method uses wavelet denoising, which may suffer when applied to non-stationary signals affected by Gaussian white noise mixed with strong impulsive artifacts, like those in multi-dimensional NMR spectra. Regrettably, wavelet performance depends on a combinatorial search of wavelet shapes and parameters, and the multi-dimensional extension of wavelet denoising is highly non-trivial, which hampers its application to multidimensional NMR spectra. Here, we advocate a different philosophy of denoising NMR spectra: less is more! We consider spatial filters that have only one parameter to tune: the window size. We propose, for the first time, the 3D extension of the median-modified Wiener filter (MMWF), an adaptive variant of the median filter, and also its novel variation named MMWF*. We test the proposed filters and the Wiener filter, an adaptive variant of the mean filter, on a benchmark set that contains 16 two-dimensional and three-dimensional NMR spectra extracted from eight proteins. Our results demonstrate that the adaptive spatial filters significantly outperform their non-adaptive versions. The performance of the new MMWF* on 2D/3D spectra is even better than wavelet denoising. Notably, MMWF* produces stable high performance that is almost invariant across diverse window-size settings: this signifies a consistent advantage in the implementation of automatic pipelines for protein NMR spectra analysis.
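One plausible reading of a median-modified Wiener filter is the classical adaptive (local) Wiener update with the local median substituted for the local mean; the sketch below implements that reading in 2D, with the window size as the single tuning parameter the abstract emphasizes. The formula and parameter names are our assumptions, not the authors' code.

```python
import numpy as np

def mmwf(img, noise_var, win=3):
    """Adaptive Wiener update with the local *median* in place of the
    local mean (our reading of a median-modified Wiener filter).
    `win` (window size) is the single tuning parameter."""
    r = win // 2
    P = np.pad(img.astype(float), r, mode="reflect")
    out = np.empty(img.shape, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            w = P[i:i + win, j:j + win]       # window centred on (i, j)
            med = np.median(w)
            var = w.var()
            # Wiener gain: shrink towards the local statistic when the
            # local variance is dominated by noise.
            gain = max(var - noise_var, 0.0) / max(var, noise_var)
            out[i, j] = med + gain * (img[i, j] - med)
    return out
```

In flat (noise-only) regions the gain collapses to zero and the output is the local median; near strong peaks the gain approaches one and the data pass through, which is the adaptive behaviour the abstract credits for the filters' robustness.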
A comparative study of new and current methods for dental micro-CT image denoising
Lashgari, Mojtaba; Qin, Jie; Swain, Michael
2016-01-01
Objectives: The aim of the current study was to evaluate the application of two advanced noise-reduction algorithms for dental micro-CT images and to implement a comparative analysis of the performance of new and current denoising algorithms. Methods: Denoising was performed using Gaussian and median filters as the current filtering approaches, and the block-matching and three-dimensional filtering (BM3D) method and the total variation method as the proposed new filtering techniques. The performance of the denoising methods was evaluated quantitatively using the contrast-to-noise ratio (CNR), edge preserving index (EPI) and blurring indexes, as well as qualitatively using the double-stimulus continuous quality scale procedure. Results: The BM3D method had the best performance with regard to preservation of fine textural features (CNR-Edge), non-blurring of the whole image (blurring index), the clinical visual score in images with very fine features and the overall visual score for all types of images. On the other hand, the total variation method provided the best results with regard to smoothing of images in texture-free areas (CNR-Tex-free) and in preserving the edges and borders of image features (EPI). Conclusions: The BM3D method is the most reliable technique for denoising dental micro-CT images with very fine textural details, such as shallow enamel lesions, in which the preservation of texture and fine features is of the greatest importance. On the other hand, the total variation method is the technique of choice for denoising images without very fine textural details, in which the clinician or researcher is interested mainly in anatomical features and structural measurements. PMID:26764583
Multimodel inference and adaptive management
Rehme, S.E.; Powell, L.A.; Allen, C.R.
2011-01-01
Ecology is an inherently complex science coping with correlated variables, nonlinear interactions and multiple scales of pattern and process, making it difficult for experiments to result in clear, strong inference. Natural resource managers, policy makers, and stakeholders rely on science to provide timely and accurate management recommendations. However, the time necessary to untangle the complexities of interactions within ecosystems is often far greater than the time available to make management decisions. One method of coping with this problem is multimodel inference. Multimodel inference assesses uncertainty by calculating likelihoods among multiple competing hypotheses, but multimodel inference results are often equivocal. Despite this, there may be pressure for ecologists to provide management recommendations regardless of the strength of their study’s inference. We reviewed papers in the Journal of Wildlife Management (JWM) and the journal Conservation Biology (CB) to quantify the prevalence of multimodel inference approaches, the resulting inference (weak versus strong), and how authors dealt with the uncertainty. Thirty-eight percent and 14%, respectively, of articles in the JWM and CB used multimodel inference approaches. Strong inference was rarely observed, with only 7% of JWM and 20% of CB articles resulting in strong inference. We found the majority of weak inference papers in both journals (59%) gave specific management recommendations. Model selection uncertainty was ignored in most recommendations for management. We suggest that adaptive management is an ideal method to resolve uncertainty when research results in weak inference.
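The likelihood-weighting step of multimodel inference is easy to make concrete. The sketch below computes Akaike weights from a set of candidate-model AIC scores; the cutoff for calling inference "strong" versus "weak" is a judgment the review describes, not a number the code can supply.

```python
import numpy as np

def akaike_weights(aic):
    """Akaike weights (relative likelihoods) from candidate-model AIC
    scores: the standard bookkeeping behind multimodel inference."""
    aic = np.asarray(aic, dtype=float)
    delta = aic - aic.min()        # AIC differences from the best model
    rel = np.exp(-0.5 * delta)     # relative likelihood of each model
    return rel / rel.sum()         # normalise to model weights

# Three hypothetical candidate models; the AIC values are illustrative.
w = akaike_weights([100.0, 102.0, 110.0])
# A top weight spread thinly across models signals equivocal inference.
```

When the weights are spread across several models (rather than concentrated on one), model-selection uncertainty is high, which is exactly the situation the review argues should temper management recommendations.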
Bayesian inference for OPC modeling
NASA Astrophysics Data System (ADS)
Burbine, Andrew; Sturtevant, John; Fryer, David; Smith, Bruce W.
2016-03-01
The use of optical proximity correction (OPC) demands increasingly accurate models of the photolithographic process. Model building and inference techniques in the data science community have seen great strides in the past two decades, which make better use of available information. This paper aims to demonstrate the predictive power of Bayesian inference as a method for parameter selection in lithographic models by quantifying the uncertainty associated with model inputs and wafer data. Specifically, the method combines the model builder's prior information about each modeling assumption with the maximization of each observation's likelihood as a Student's t-distributed random variable. Through the use of a Markov chain Monte Carlo (MCMC) algorithm, a model's parameter space is explored to find the most credible parameter values. During parameter exploration, the parameters' posterior distributions are generated by applying Bayes' rule, using a likelihood function and the a priori knowledge supplied. The MCMC algorithm used, an affine invariant ensemble sampler (AIES), is implemented by initializing many walkers which semi-independently explore the space. The convergence of these walkers to global maxima of the likelihood volume determines the parameter values' highest density intervals (HDI), revealing champion models. We show that this method of parameter selection provides insights into the data that traditional methods do not, and we outline continued experiments to vet the method.
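The accept/reject core of the approach can be sketched with a toy linear model. The paper pairs a Student's t likelihood with an affine-invariant ensemble sampler; the sketch below keeps the t likelihood but uses plain random-walk Metropolis, and the model, prior, and step size are illustrative assumptions standing in for the lithographic model.

```python
import numpy as np

def log_post(theta, x, y, nu=4.0, prior_sd=10.0):
    """Student's t log-likelihood of residuals plus a Gaussian prior.
    The linear model y = theta[0]*x + theta[1] is a toy stand-in for
    the paper's lithographic model; all numbers are illustrative."""
    resid = y - theta[0] * x - theta[1]
    loglik = np.sum(-0.5 * (nu + 1.0) * np.log1p(resid ** 2 / nu))
    logprior = np.sum(-0.5 * (theta / prior_sd) ** 2)
    return loglik + logprior

def metropolis(x, y, n_steps=6000, step=0.1, seed=0):
    """Plain random-walk Metropolis. (The paper uses an affine-invariant
    ensemble sampler; the acceptance logic is the same idea.)"""
    rng = np.random.default_rng(seed)
    theta = np.zeros(2)
    lp = log_post(theta, x, y)
    chain = []
    for _ in range(n_steps):
        prop = theta + step * rng.standard_normal(2)
        lp_prop = log_post(prop, x, y)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis acceptance
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)
```

Discarding the first half of the chain as burn-in, the remaining samples approximate the posterior; their highest-density intervals play the role of the HDIs the abstract mentions.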
Dopamine, Affordance and Active Inference
Friston, Karl J.; Shiner, Tamara; FitzGerald, Thomas; Galea, Joseph M.; Adams, Rick; Brown, Harriet; Dolan, Raymond J.; Moran, Rosalyn; Stephan, Klaas Enno; Bestmann, Sven
2012-01-01
The role of dopamine in behaviour and decision-making is often cast in terms of reinforcement learning and optimal decision theory. Here, we present an alternative view that frames the physiology of dopamine in terms of Bayes-optimal behaviour. In this account, dopamine controls the precision or salience of (external or internal) cues that engender action. In other words, dopamine balances bottom-up sensory information and top-down prior beliefs when making hierarchical inferences (predictions) about cues that have affordance. In this paper, we focus on the consequences of changing tonic levels of dopamine firing using simulations of cued sequential movements. Crucially, the predictions driving movements are based upon a hierarchical generative model that infers the context in which movements are made. This means that we can confuse agents by changing the context (order) in which cues are presented. These simulations provide a (Bayes-optimal) model of contextual uncertainty and set switching that can be quantified in terms of behavioural and electrophysiological responses. Furthermore, one can simulate dopaminergic lesions (by changing the precision of prediction errors) to produce pathological behaviours that are reminiscent of those seen in neurological disorders such as Parkinson's disease. We use these simulations to demonstrate how a single functional role for dopamine at the synaptic level can manifest in different ways at the behavioural level. PMID:22241972
NASA Astrophysics Data System (ADS)
Evrendilek, F.; Karakaya, N.
2014-06-01
Continuous time-series measurements of diel dissolved oxygen (DO) through online sensors are vital to better understanding and management of metabolism of lake ecosystems, but are prone to noise. Discrete wavelet transforms (DWT) with the orthogonal Symmlet and the semiorthogonal Chui-Wang B-spline were compared in denoising diel, daytime and nighttime dynamics of DO, water temperature, pH, and chlorophyll-a. Predictive efficacies of multiple non-linear regression (MNLR) models of DO dynamics were evaluated with or without DWT denoising of either the response variable alone or all the response and explanatory variables. The combined use of the B-spline-based denoising of all the variables and the temporally partitioned data improved both the predictive power and the errors of the MNLR models better than the use of Symmlet DWT denoising of DO only or all the variables with or without the temporal partitioning.
NASA Astrophysics Data System (ADS)
Chen, Yangkang; Huang, Weilin; Zhang, Dong; Chen, Wei
2016-10-01
Simultaneous seismic data denoising and reconstruction is a currently popular research subject in modern reflection seismology. The traditional rank-reduction-based 3D seismic data denoising and reconstruction algorithm leaves strong residual noise in the reconstructed data and thus affects subsequent processing and interpretation tasks. In this paper, we propose an improved rank-reduction method by modifying the truncated singular value decomposition (TSVD) formula used in the traditional method. The proposed approach can achieve nearly perfect reconstruction performance even in the case of low signal-to-noise ratio (SNR). The proposed algorithm is tested on one synthetic example and one field data example. Considering that seismic data interpolation and denoising source packages are seldom in the public domain, we also provide a program template for the rank-reduction-based simultaneous denoising and reconstruction algorithm as an open-source Matlab package.
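The traditional TSVD step that the paper sets out to improve is only a few lines. The sketch below shows that plain rank truncation on a generic data matrix, not the authors' modified formula:

```python
import numpy as np

def tsvd_denoise(D, rank):
    """Traditional rank reduction: keep the `rank` largest singular
    values of the data matrix (e.g. a Hankel-structured block in the
    seismic setting) and rebuild it. This is the plain TSVD the paper's
    modified formula improves upon."""
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    s[rank:] = 0.0                       # truncate small singular values
    return (U * s) @ Vt
```

Because random noise spreads across all singular values while coherent signal concentrates in the leading ones, the truncation suppresses noise; the residual noise that survives in the retained subspace is precisely what motivates the paper's modification.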
Gene-network inference by message passing
NASA Astrophysics Data System (ADS)
Braunstein, A.; Pagnani, A.; Weigt, M.; Zecchina, R.
2008-01-01
The inference of gene-regulatory processes from gene-expression data belongs to the major challenges of computational systems biology. Here we address the problem from a statistical-physics perspective and develop a message-passing algorithm which is able to infer sparse, directed and combinatorial regulatory mechanisms. Using the replica technique, the algorithmic performance can be characterized analytically for artificially generated data. The algorithm is applied to genome-wide expression data of baker's yeast under various environmental conditions. We find clear cases of combinatorial control, and enrichment in common functional annotations of regulated genes and their regulators.
Morrison, Hilary G; Zamora, Gus; Campbell, Robert K; Sogin, Mitchell L
2002-12-01
Functional assays of genes have historically led to insights about the activities of a protein or protein cascade. However, the rapid expansion of genomic and proteomic information for a variety of diverse taxa is an alternative and powerful means of predicting function by comparing the enzymes and metabolic pathways used by different organisms. As part of the Giardia lamblia genome sequencing project, we routinely survey the complement of predicted proteins and compare those found in this putatively early diverging eukaryote with those of prokaryotes and more recently evolved eukaryotic lineages. Such comparisons reveal the minimal composition of conserved metabolic pathways, suggest which proteins may have been acquired by lateral transfer, and, by their absence, hint at functions lost in the transition from a free-living to a parasitic lifestyle. Here, we describe the use of bioinformatic approaches to investigate the complement and conservation of proteins in Giardia involved in the regulation of translation. We compare an FK506 binding protein homologue and phosphatidylinositol kinase-related kinase present in Giardia to those found in other eukaryotes for which complete genomic sequence data are available. Our investigation of the Giardia genome suggests that PIK-related kinases are of ancient origin and are highly conserved.
NASA Technical Reports Server (NTRS)
Wheeler, Kevin; Timucin, Dogan; Rabbette, Maura; Curry, Charles; Allan, Mark; Lvov, Nikolay; Clanton, Sam; Pilewskie, Peter
2002-01-01
The goal of visual inference programming is to develop a software framework for data analysis and to provide machine learning algorithms for interactive data exploration and visualization. The topics include: 1) Intelligent Data Understanding (IDU) framework; 2) Challenge problems; 3) What's new here; 4) Framework features; 5) Wiring diagram; 6) Generated script; 7) Results of script; 8) Initial algorithms; 9) Independent component analysis for instrument diagnosis; 10) Output sensory mapping virtual joystick; 11) Output sensory mapping typing; 12) Closed-loop feedback mu-rhythm control; 13) Closed-loop training; 14) Data sources; and 15) Algorithms. This paper is in viewgraph form.
Single board system for fuzzy inference
NASA Technical Reports Server (NTRS)
Symon, James R.; Watanabe, Hiroyuki
1991-01-01
The very large scale integration (VLSI) implementation of a fuzzy logic inference mechanism allows the use of rule-based control and decision making in demanding real-time applications. Researchers designed a full custom VLSI inference engine. The chip was fabricated using CMOS technology. The chip consists of 688,000 transistors of which 476,000 are used for RAM memory. The fuzzy logic inference engine board system incorporates the custom designed integrated circuit into a standard VMEbus environment. The Fuzzy Logic system uses Transistor-Transistor Logic (TTL) parts to provide the interface between the Fuzzy chip and a standard, double height VMEbus backplane, allowing the chip to perform application process control through the VMEbus host. High level C language functions hide details of the hardware system interface from the applications level programmer. The first version of the board was installed on a robot at Oak Ridge National Laboratory in January of 1990.
Quality of computationally inferred gene ontology annotations.
Skunca, Nives; Altenhoff, Adrian; Dessimoz, Christophe
2012-05-01
Gene Ontology (GO) has established itself as the undisputed standard for protein function annotation. Most annotations are inferred electronically, i.e. without individual curator supervision, but they are widely considered unreliable. At the same time, we crucially depend on those automated annotations, as most newly sequenced genomes are non-model organisms. Here, we introduce a methodology to systematically and quantitatively evaluate electronic annotations. By exploiting changes in successive releases of the UniProt Gene Ontology Annotation database, we assessed the quality of electronic annotations in terms of specificity, reliability, and coverage. Overall, we not only found that electronic annotations have significantly improved in recent years, but also that their reliability now rivals that of annotations inferred by curators when they use evidence other than experiments from primary literature. This work provides the means to identify the subset of electronic annotations that can be relied upon, an important outcome given that >98% of all annotations are inferred without direct curation.
Deep Learning for Population Genetic Inference
Sheehan, Sara; Song, Yun S.
2016-01-01
Given genomic variation data from multiple individuals, computing the likelihood of complex population genetic models is often infeasible. To circumvent this problem, we introduce a novel likelihood-free inference framework by applying deep learning, a powerful modern technique in machine learning. Deep learning makes use of multilayer neural networks to learn a feature-based function from the input (e.g., hundreds of correlated summary statistics of data) to the output (e.g., population genetic parameters of interest). We demonstrate that deep learning can be effectively employed for population genetic inference and learning informative features of data. As a concrete application, we focus on the challenging problem of jointly inferring natural selection and demography (in the form of a population size change history). Our method is able to separate the global nature of demography from the local nature of selection, without sequential steps for these two factors. Studying demography and selection jointly is motivated by Drosophila, where pervasive selection confounds demographic analysis. We apply our method to 197 African Drosophila melanogaster genomes from Zambia to infer both their overall demography, and regions of their genome under selection. We find many regions of the genome that have experienced hard sweeps, and fewer under selection on standing variation (soft sweep) or balancing selection. Interestingly, we find that soft sweeps and balancing selection occur more frequently closer to the centromere of each chromosome. In addition, our demographic inference suggests that previously estimated bottlenecks for African Drosophila melanogaster are too extreme. PMID:27018908
Computationally efficient Bayesian inference for inverse problems.
Marzouk, Youssef M.; Najm, Habib N.; Rahn, Larry A.
2007-10-01
Bayesian statistics provides a foundation for inference from noisy and incomplete data, a natural mechanism for regularization in the form of prior information, and a quantitative assessment of uncertainty in the inferred results. Inverse problems - representing indirect estimation of model parameters, inputs, or structural components - can be fruitfully cast in this framework. Complex and computationally intensive forward models arising in physical applications, however, can render a Bayesian approach prohibitive. This difficulty is compounded by high-dimensional model spaces, as when the unknown is a spatiotemporal field. We present new algorithmic developments for Bayesian inference in this context, showing strong connections with the forward propagation of uncertainty. In particular, we introduce a stochastic spectral formulation that dramatically accelerates the Bayesian solution of inverse problems via rapid evaluation of a surrogate posterior. We also explore dimensionality reduction for the inference of spatiotemporal fields, using truncated spectral representations of Gaussian process priors. These new approaches are demonstrated on scalar transport problems arising in contaminant source inversion and in the inference of inhomogeneous material or transport properties. We also present a Bayesian framework for parameter estimation in stochastic models, where intrinsic stochasticity may be intermingled with observational noise. Evaluation of a likelihood function may not be analytically tractable in these cases, and thus several alternative Markov chain Monte Carlo (MCMC) schemes, operating on the product space of the observations and the parameters, are introduced.
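A minimal random-walk Metropolis sampler for a toy scalar inverse problem illustrates the Bayesian formulation (this is the plain MCMC baseline, not the paper's surrogate-accelerated scheme; the forward model and noise level are our own toy choices):

```python
import numpy as np

rng = np.random.default_rng(1)
m_true = 2.0
sigma = 0.1                                        # known noise level
data = m_true**2 + sigma * rng.standard_normal(50) # noisy observations of G(m)=m**2

def log_post(m):
    """Gaussian likelihood for the forward model G(m) = m**2, flat prior on [0, 5]."""
    if not 0.0 <= m <= 5.0:
        return -np.inf
    return -0.5 * np.sum((data - m**2) ** 2) / sigma**2

chain, m = [], 1.0
for _ in range(20000):
    prop = m + 0.1 * rng.standard_normal()         # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(m):
        m = prop                                   # Metropolis accept
    chain.append(m)

posterior_mean = float(np.mean(chain[5000:]))      # discard burn-in
```

Every sampler step requires a forward-model evaluation, which is exactly why the paper replaces the expensive forward model with a rapidly evaluable surrogate posterior.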
González, Carolina; Lazcano, Marcelo; Valdés, Jorge; Holmes, David S.
2016-01-01
Using phylogenomic and gene compositional analyses, five highly conserved gene families have been detected in the core genome of the phylogenetically coherent genus Acidithiobacillus of the class Acidithiobacillia. These core gene families are absent in the closest extant genus Thermithiobacillus tepidarius that subtends the Acidithiobacillus genus and roots the deepest in this class. The predicted proteins encoded by these core gene families are not detected by a BLAST search in the NCBI non-redundant database of more than 90 million proteins using a relaxed cut-off of 1.0e−5. None of the five families has a clear functional prediction. However, bioinformatic scrutiny, using pI prediction, motif/domain searches, cellular location predictions, genomic context analyses, and chromosome topology studies together with previously published transcriptomic and proteomic data, suggests that some may have functions associated with membrane remodeling during cell division perhaps in response to pH stress. Despite the high level of amino acid sequence conservation within each family, there is sufficient nucleotide variation of the respective genes to permit the use of the DNA sequences to distinguish different species of Acidithiobacillus, making them useful additions to the armamentarium of tools for phylogenetic analysis. Since the protein families are unique to the Acidithiobacillus genus, they can also be leveraged as probes to detect the genus in environmental metagenomes and metatranscriptomes, including industrial biomining operations, and acid mine drainage (AMD). PMID:28082953
NASA Astrophysics Data System (ADS)
Smit, Renske; Bouwens, Rychard J.; Labbé, Ivo; Franx, Marijn; Wilkins, Stephen M.; Oesch, Pascal A.
2016-12-01
We derive Hα fluxes for a large spectroscopic and photometric-redshift-selected sample of sources over GOODS-North and South in the redshift range z = 3.8-5.0 with deep Hubble Space Telescope (HST), Spitzer/IRAC, and ground-based observations. The Hα flux is inferred based on the offset between the IRAC 3.6 μm flux and that predicted from the best-fit spectral energy distribution (SED). We demonstrate that the Hα flux correlates well with dust-corrected UV star formation rate (SFR) and therefore can serve as an independent SFR indicator. However, we also find a systematic offset in the SFR(Hα)/SFR(UV+β) ratios for z ~ 4-5 galaxies relative to local relations (assuming the same dust corrections for nebular regions and stellar light). We show that we can resolve the modest tension in the inferred SFRs by assuming bluer intrinsic UV slopes (increasing the dust correction), a rising star formation history, or assuming a low-metallicity stellar population with a hard ionizing spectrum (increasing the L(Hα)/SFR ratio). Using Hα as an SFR indicator, we find a normalization of the star formation main sequence in good agreement with recent SED-based determinations and also derive the SFR functions at z ~ 4-8. In addition, we assess for the first time the burstiness of star formation in z ~ 4 galaxies on <100 Myr timescales by comparing UV and Hα-based sSFRs; their one-to-one relationship argues against significantly bursty star formation histories.
De-noising of microwave satellite soil moisture time series
NASA Astrophysics Data System (ADS)
Su, Chun-Hsu; Ryu, Dongryeol; Western, Andrew; Wagner, Wolfgang
2013-04-01
Technology) ASCAT data sets to identify two types of errors that are spectrally distinct. Based on a semi-empirical model of soil moisture dynamics, we consider possible digital filter designs to improve the accuracy of their soil moisture products by reducing systematic periodic errors and stochastic noise. We describe a methodology to design bandstop filters to remove artificial resonances, and a Wiener filter to remove stochastic white noise present in the satellite data. Utility of these filters is demonstrated by comparing de-noised data against in-situ observations from ground monitoring stations in the Murrumbidgee Catchment (Smith et al., 2012), southeast Australia. Albergel, C., de Rosnay, P., Gruhier, C., Muñoz Sabater, J., Hasenauer, S., Isaksen, L., Kerr, Y. H., & Wagner, W. (2012). Evaluation of remotely sensed and modelled soil moisture products using global ground-based in situ observations. Remote Sensing of Environment, 118, 215-226. Scipal, K., Holmes, T., de Jeu, R., Naeimi, V., & Wagner, W. (2008), A possible solution for the problem of estimating the error structure of global soil moisture data sets. Geophysical Research Letters, 35, L24403. Smith, A. B., Walker, J. P., Western, A. W., Young, R. I., Ellett, K. M., Pipunic, R. C., Grayson, R. B., Siriwardena, L., Chiew, F. H. S., & Richter, H. (2012). The Murrumbidgee soil moisture network data set. Water Resources Research, 48, W07701. Su, C.-H., Ryu, D., Young, R., Western, A. W., & Wagner, W. (2012). Inter-comparison of microwave satellite soil moisture retrievals over Australia. Submitted to Remote Sensing of Environment.
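A frequency-domain Wiener filter of the kind described can be sketched as follows, assuming the signal and noise power spectra are known (the synthetic series and noise level are illustrative assumptions, not the satellite products):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1024
t = np.arange(n)
signal = np.sin(2 * np.pi * t / 128)            # slow "soil moisture" dynamic
noisy = signal + 0.5 * rng.standard_normal(n)   # plus stochastic white noise

S = np.abs(np.fft.rfft(signal)) ** 2 / n        # signal power spectrum (assumed known)
N = np.full_like(S, 0.25)                       # white-noise power: variance 0.25
H = S / (S + N)                                 # Wiener gain per frequency
denoised = np.fft.irfft(H * np.fft.rfft(noisy), n)

mse_before = float(np.mean((noisy - signal) ** 2))
mse_after = float(np.mean((denoised - signal) ** 2))
```

A bandstop filter for an artificial resonance would instead zero (or attenuate) the gain H over the narrow band containing the spurious spectral peak.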
Circular inferences in schizophrenia.
Jardri, Renaud; Denève, Sophie
2013-11-01
A considerable number of recent experimental and computational studies suggest that subtle impairments of excitatory to inhibitory balance or regulation are involved in many neurological and psychiatric conditions. The current paper aims to relate, specifically and quantitatively, excitatory to inhibitory imbalance with psychotic symptoms in schizophrenia. Considering that the brain constructs hierarchical causal models of the external world, we show that the failure to maintain the excitatory to inhibitory balance results in hallucinations as well as in the formation and subsequent consolidation of delusional beliefs. Indeed, the consequence of excitatory to inhibitory imbalance in a hierarchical neural network is equated to a pathological form of causal inference called 'circular belief propagation'. In circular belief propagation, bottom-up sensory information and top-down predictions are reverberated, i.e. prior beliefs are misinterpreted as sensory observations and vice versa. As a result, these predictions are counted multiple times. Circular inference explains the emergence of erroneous percepts, the patient's overconfidence when facing probabilistic choices, the learning of 'unshakable' causal relationships between unrelated events and a paradoxical immunity to perceptual illusions, which are all known to be associated with schizophrenia.
Moment inference from tomograms
Day-Lewis, F. D.; Chen, Y.; Singha, K.
2007-01-01
Time-lapse geophysical tomography can provide valuable qualitative insights into hydrologic transport phenomena associated with aquifer dynamics, tracer experiments, and engineered remediation. Increasingly, tomograms are used to infer the spatial and/or temporal moments of solute plumes; these moments provide quantitative information about transport processes (e.g., advection, dispersion, and rate-limited mass transfer) and controlling parameters (e.g., permeability, dispersivity, and rate coefficients). The reliability of moments calculated from tomograms is, however, poorly understood because classic approaches to image appraisal (e.g., the model resolution matrix) are not directly applicable to moment inference. Here, we present a semi-analytical approach to construct a moment resolution matrix based on (1) the classic model resolution matrix and (2) image reconstruction from orthogonal moments. Numerical results for radar and electrical-resistivity imaging of solute plumes demonstrate that moment values calculated from tomograms depend strongly on plume location within the tomogram, survey geometry, regularization criteria, and measurement error. Copyright 2007 by the American Geophysical Union.
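The spatial moments referred to above are simple weighted sums over the imaged concentration field; a sketch on a synthetic Gaussian plume (grid size and plume parameters are arbitrary choices of ours):

```python
import numpy as np

# Toy 64x64 "tomogram" of a Gaussian plume centred at (40, 20), variance 9
x, y = np.meshgrid(np.arange(64), np.arange(64), indexing="ij")
plume = np.exp(-((x - 40.0) ** 2 + (y - 20.0) ** 2) / (2 * 9.0))

m0 = plume.sum()                           # zeroth moment: total mass
xc = (x * plume).sum() / m0                # first moments: centroid
yc = (y * plume).sum() / m0
sxx = ((x - xc) ** 2 * plume).sum() / m0   # second central moment: spread
```

The paper's point is that the same sums computed from a *reconstructed* tomogram inherit the reconstruction's resolution limits, which the moment resolution matrix quantifies.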
Nakano, M.; Kumagai, H.; Chouet, B.A.
2003-01-01
We investigate the source mechanism of long-period (LP) events observed at Kusatsu-Shirane Volcano, Japan, based on waveform inversions of their effective excitation functions. The effective excitation function, which represents the apparent excitation observed at individual receivers, is estimated by applying an autoregressive filter to the LP waveform. Assuming a point source, we apply this method to seven LP events the waveforms of which are characterized by simple decaying and nearly monochromatic oscillations with frequency in the range 1-3 Hz. The results of the waveform inversions show dominant volumetric change components accompanied by single force components, common to all the events analyzed, and suggesting a repeated activation of a sub-horizontal crack located 300 m beneath the summit crater lakes. Based on these results, we propose a model of the source process of LP seismicity, in which a gradual buildup of steam pressure in a hydrothermal crack in response to magmatic heat causes repeated discharges of steam from the crack. The rapid discharge of fluid causes the collapse of the fluid-filled crack and excites acoustic oscillations of the crack, which produce the characteristic waveforms observed in the LP events. The presence of a single force synchronous with the collapse of the crack is interpreted as the release of gravitational energy that occurs as the slug of steam ejected from the crack ascends toward the surface and is replaced by cooler water flowing downward in a fluid-filled conduit linking the crack and the base of the crater lake. © 2003 Elsevier Science B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Igarashi, T.; Iidaka, T.; Sakai, S.; Hirata, N.
2012-12-01
We apply receiver function (RF) analyses to estimate the crustal structure and the configuration of the subducting Philippine Sea (PHS) plate beneath the Pacific coast industrial zone stretching from Tokyo to Fukuoka in Japan. Destructive earthquakes have often occurred at the interface of the PHS plate, and seismic activity increased around the Tokyo metropolitan area after the 2011 Tohoku earthquake (Mw 9.0). Investigation of the crustal structure is key to understanding the stress concentration and strain accumulation process, and information on the configuration of the subducting plate is important for mitigating future earthquake disasters. In this study, we searched for the velocity structure model that best correlates the observed receiver function at each station with synthetic ones, using a grid search method. Synthetic RFs were calculated from many assumed one-dimensional velocity structures, each consisting of four layers with positive velocity steps. Observed receiver functions were stacked without considering back azimuth or epicentral distance. We further constructed vertical cross-sections of depth-converted RF images, converting the lapse time of the time series to depth using the estimated structure models. We used telemetric seismographic network data covering the Japanese Islands, including the Metropolitan Seismic Observation network, which was constructed under the Special Project for Earthquake Disaster Mitigation in the Tokyo Metropolitan Area and is maintained by the Special Project for Reducing Vulnerability for Urban Mega Earthquake Disasters. We selected events with magnitudes greater than or equal to 5.0 and epicentral distances between 30 and 90 degrees based on USGS catalogues. As a result, we clarify the spatial distributions of the crustal S-wave velocities. The estimated average one-dimensional S-wave velocity structure is approximately equal to the JMA2011 structural model, although the velocity from the ground surface to 5 km depth is slower. In particular
Estimating uncertainty of inference for validation
Booker, Jane M; Langenbrunner, James R; Hemez, Francois M; Ross, Timothy J
2010-09-30
We present a validation process based upon the concept that validation is an inference-making activity. This has always been true, but the association has not been as important before as it is now. Previously, theory had been confirmed by more data, and predictions were possible based on data. The process today is to infer from theory to code and from code to prediction, making the role of prediction somewhat automatic, and a machine function. Validation is defined as determining the degree to which a model and code are an accurate representation of experimental test data. Embedded in validation is the intention to use the computer code to predict. To predict is to accept the conclusion that an observable final state will manifest; therefore, prediction is an inference whose goodness relies on the validity of the code. Quantifying the uncertainty of a prediction amounts to quantifying the uncertainty of validation, and this involves the characterization of uncertainties inherent in theory/models/codes and the corresponding data. An introduction to inference making and its associated uncertainty is provided as a foundation for the validation problem. A mathematical construction for estimating the uncertainty in the validation inference is then presented, including a possibility distribution constructed to represent the inference uncertainty for validation under uncertainty. The estimation of inference uncertainty for validation is illustrated using data and calculations from Inertial Confinement Fusion (ICF). The ICF measurements of neutron yield and ion temperature were obtained for direct-drive inertial fusion capsules at the Omega laser facility. The glass capsules, containing the fusion gas, were systematically selected with the intent of establishing a reproducible baseline of high-yield 10^13-10^14 neutron output. The deuterium-tritium ratio in these experiments was varied to study its influence upon yield. This paper on validation inference is the
NASA Astrophysics Data System (ADS)
Steglich, M.; Jäger, C.; Huisken, F.; Friedrich, M.; Plass, W.; Räder, H.-J.; Müllen, K.; Henning, Th.
2013-10-01
Infrared (IR) absorption spectra of individual polycyclic aromatic hydrocarbons (PAHs) containing methyl (-CH3), methylene (>CH2), or diamond-like (>CH-) groups and IR spectra of mixtures of methylated and hydrogenated PAHs prepared by gas-phase condensation were measured at room temperature (as grains in pellets) and at low temperature (isolated in Ne matrices). In addition, the PAH blends were subjected to an in-depth molecular structure analysis by means of high-performance liquid chromatography, nuclear magnetic resonance spectroscopy, and matrix-assisted laser desorption/ionization time-of-flight mass spectrometry. Supported by calculations at the density functional theory level, the laboratory results were applied to analyze in detail the aliphatic absorption complex of the diffuse interstellar medium at 3.4 μm and to determine the abundances of hydrocarbon functional groups. Assuming that the PAHs are mainly locked in grains, aliphatic CHx groups (x = 1, 2, 3) would contribute approximately in equal quantities to the 3.4 μm feature (N(CHx)/N(H) ≈ 10^-5 to 2 × 10^-5). The abundances, however, may be two to four times lower if a major contribution to the 3.4 μm feature comes from molecules in the gas phase. Aromatic CH groups seem to be almost absent from some lines of sight, but can be nearly as abundant as each of the aliphatic components in other directions (N(CH)/N(H) ≲ 2 × 10^-5, upper value for grains). Due to comparatively low binding energies, astronomical IR emission sources do not display such heavy excess hydrogenation. At best, especially in protoplanetary nebulae, >CH2 groups bound to aromatic molecules, i.e., excess hydrogens on the molecular periphery only, can survive the presence of a nearby star.
2014-01-01
Background: Protein sequence similarities to any types of non-globular segments (coiled coils, low complexity regions, transmembrane regions, long loops, etc., where either positional sequence conservation is the result of a very simple, physically induced pattern or rather integral sequence properties are critical) are pertinent sources of mistaken homologies. Regrettably, these considerations regularly escape attention in large-scale annotation studies since, often, there is no substitute for manual handling of these cases. Quantitative criteria are required to suppress events of function annotation transfer as a result of false homology assignments. Results: The sequence homology concept is based on the similarity comparison between the structural elements, the basic building blocks for conferring the overall fold of a protein. We propose to dissect the total similarity score into fold-critical and other, remaining contributions and suggest that, for a valid homology statement, the fold-relevant score contribution should at least be significant on its own. As part of the article, we provide the DissectHMMER software program for dissecting HMMER2/3 scores into segment-specific contributions. We show that DissectHMMER reproduces HMMER2/3 scores with sufficient accuracy and that it is useful in automated decisions about homology for instructive sequence examples. To generalize the dissection concept for cases without 3D structural information, we find that a dissection based on alignment quality is an appropriate surrogate. The approach was applied to a large-scale study of SMART and PFAM domains in the space of seed sequences and in the space of UniProt/SwissProt. Conclusions: Sequence similarity core dissection with regard to fold-critical and other contributions systematically suppresses false hits and, additionally, recovers previously obscured homology relationships such as the one between aquaporins and formate/nitrite transporters that, so far, was only
Rangel, Bianca de S; Wosnick, Natascha; Hammerschlag, Neil; Ciena, Adriano P; Kfoury Junior, José Roberto; Rici, Rose E G
2017-03-01
Sensory organs in elasmobranchs (sharks, skates, rays) detect and respond to a different set of biotic and/or abiotic stimuli, through sight, smell, taste, hearing, mechanoreception and electroreception. Although gustation is crucial for survival and essential for growth, mobility, and maintenance of neural activity and the proper functioning of the immune system, comparatively little is known about this sensory system in elasmobranchs. Here we present a preliminary investigation into the structural and dimensional characteristics of the oral papillae and denticles found in the oropharyngeal cavity of the blue shark (Prionace glauca) during embryonic development through adulthood. Samples were obtained from the dorsal and ventral surface of the oropharyngeal cavity collected from embryos at different development stages as well as from adults. Our results suggest that development of papillae occurs early in ontogeny, before the formation of the oral denticles. The diameter of oral papillae gradually increases during development, starting from 25 μm in stage I embryos, to 110 μm in stage IV embryos and 272-300 μm in adults. Embryos exhibit papillae at early developmental stages, suggesting that these structures may be important early in life. The highest density of papillae was observed in the maxillary and mandibular valve regions, possibly related to the ability to identify, capture and process prey. The oral denticles were observed only in the final embryonic stage as well as in adults. Accordingly, we suggest that oral denticles likely aid in ram ventilation (through reducing the hydrodynamic drag), protect papillae from injury during prey consumption and assist in the retention and consumption of prey (through adhesion), since these processes are only necessary after birth.
Fractional-order TV-L2 model for image denoising
NASA Astrophysics Data System (ADS)
Chen, Dali; Sun, Shenshen; Zhang, Congrong; Chen, YangQuan; Xue, Dingyu
2013-10-01
This paper proposes a new fractional-order total variation (TV) denoising method, which provides a much more elegant and effective way of treating problems of algorithm implementation, ill-posed inversion, regularization parameter selection, and the blocky effect. Two fractional-order TV-L2 models are constructed for image denoising. The majorization-minimization (MM) algorithm is used to decompose these two complex fractional TV optimization problems into a set of linear optimization problems, which can be solved by the conjugate gradient algorithm. The final adaptive numerical procedure is given. Finally, we report experimental results showing that the proposed methodology avoids the blocky effect and achieves state-of-the-art performance. In addition, two medical image processing experiments are presented to demonstrate the validity of the proposed methodology.
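For orientation, the first-order (integer) TV-L2 analogue of the model above can be minimized by plain gradient descent on a smoothed TV term; this sketch substitutes ordinary differences for the paper's fractional-order operators and its MM/conjugate-gradient solver, so it only illustrates the TV-L2 idea:

```python
import numpy as np

rng = np.random.default_rng(3)
f = np.zeros((32, 32)); f[8:24, 8:24] = 1.0        # clean piecewise-constant image
f_noisy = f + 0.2 * rng.standard_normal(f.shape)

u = f_noisy.copy()
lam, eps, tau = 0.15, 1e-2, 0.1                    # TV weight, smoothing, step size
for _ in range(300):
    ux = np.diff(u, axis=0, append=u[-1:, :])      # forward differences
    uy = np.diff(u, axis=1, append=u[:, -1:])
    mag = np.sqrt(ux**2 + uy**2 + eps)             # smoothed gradient magnitude
    px, py = ux / mag, uy / mag
    # divergence of (px, py): adjoint of the forward difference
    div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
    u -= tau * ((u - f_noisy) - lam * div)         # gradient step on 0.5||u-f||^2 + lam*TV(u)

mse_noisy = float(np.mean((f_noisy - f) ** 2))
mse_tv = float(np.mean((u - f) ** 2))
```

Replacing the first-order differences with fractional-order ones is precisely what softens the blocky (staircasing) effect the abstract mentions.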
A joint inter- and intrascale statistical model for Bayesian wavelet based image denoising.
Pizurica, Aleksandra; Philips, Wilfried; Lemahieu, Ignace; Acheroy, Marc
2002-01-01
This paper presents a new wavelet-based image denoising method, which extends a "geometrical" Bayesian framework. The new method combines three criteria for distinguishing supposedly useful coefficients from noise: coefficient magnitudes, their evolution across scales and spatial clustering of large coefficients near image edges. These three criteria are combined in a Bayesian framework. The spatial clustering properties are expressed in a prior model. The statistical properties concerning coefficient magnitudes and their evolution across scales are expressed in a joint conditional model. The three main novelties with respect to related approaches are (1) the interscale-ratios of wavelet coefficients are statistically characterized and different local criteria for distinguishing useful coefficients from noise are evaluated, (2) a joint conditional model is introduced, and (3) a novel anisotropic Markov random field prior model is proposed. The results demonstrate an improved denoising performance over related earlier techniques.
Denoising and Multivariate Analysis of Time-Of-Flight SIMS Images
Wickes, Bronwyn; Kim, Y.; Castner, David G.
2003-08-30
Time-of-flight SIMS (ToF-SIMS) imaging offers a modality for simultaneously visualizing the spatial distribution of different surface species. However, the utility of ToF-SIMS datasets may be limited by their large size, degraded mass resolution and low ion counts per pixel. Through denoising and multivariate image analysis, regions of similar chemistries may be differentiated more readily in ToF-SIMS image data. Three established denoising algorithms (down-binning, boxcar and wavelet filtering) were applied to ToF-SIMS images of different surface geometries and chemistries. The effect of these filters on the performance of principal component analysis (PCA) was evaluated in terms of the capture of important chemical image features in the principal component score images, the quality of the principal component
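The boxcar filter, the simplest of the three schemes compared, is just a local moving average; a sketch on a synthetic Poisson-noise "ion image" (window size and image are our own choices):

```python
import numpy as np

rng = np.random.default_rng(4)
truth = np.zeros((40, 40)); truth[:, 20:] = 4.0   # two-chemistry surface, mean counts
counts = rng.poisson(truth).astype(float)         # shot-noise-limited ion counts

k = 1                                             # half-width of the 3x3 window
padded = np.pad(counts, k, mode="edge")
boxcar = np.zeros_like(counts)
for di in range(-k, k + 1):
    for dj in range(-k, k + 1):
        boxcar += padded[k + di:k + di + 40, k + dj:k + dj + 40]
boxcar /= (2 * k + 1) ** 2

mse_raw = float(np.mean((counts - truth) ** 2))
mse_box = float(np.mean((boxcar - truth) ** 2))
```

Down-binning works similarly but reduces the pixel count, trading spatial resolution for counts per pixel.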
The Application of Wavelet-Domain Hidden Markov Tree Model in Diabetic Retinal Image Denoising.
Cui, Dong; Liu, Minmin; Hu, Lei; Liu, Keju; Guo, Yongxin; Jiao, Qing
2015-01-01
The wavelet-domain Hidden Markov Tree Model can properly describe the dependence and correlation of fundus angiographic images' wavelet coefficients among scales. Based on the construction of Hidden Markov Tree Models and Gaussian Mixture Models for fundus angiographic images, this paper applied the expectation-maximization (EM) algorithm to estimate the wavelet coefficients of the original fundus angiographic images and Bayesian estimation to achieve the goal of fundus angiographic image denoising. As shown in the experimental results, compared with other algorithms such as the mean filter and the median filter, this method effectively improved the peak signal-to-noise ratio of fundus angiographic images after denoising and preserved the details of vascular edges in fundus angiographic images.
Zhang, Haiyan; Zhang, Liyi; Sun, Yunshan; Zhang, Jingyu
2015-01-01
Reducing X-ray tube current is one of the widely used methods for decreasing the radiation dose. Unfortunately, the signal-to-noise ratio (SNR) of the projection data degrades simultaneously. To improve the quality of reconstructed images, a dictionary learning based penalized weighted least-squares (PWLS) approach is proposed for sinogram denoising. The weighted least-squares term accounts for the statistical characteristics of the noise, and the penalty models the sparsity of the sinogram based on dictionary learning. The CT image is then reconstructed from the denoised sinogram using the filtered back-projection (FBP) algorithm. The proposed method is particularly suitable for projection data with low SNR. Experimental results show that the proposed method can produce high-quality CT images when the SNR of the projection data declines sharply.
NASA Astrophysics Data System (ADS)
Yaseen, Alauldeen S.; Pavlov, Alexey N.; Hramov, Alexander E.
2016-03-01
Speech signal processing is widely used to reduce noise impact in acquired data. During the last decades, wavelet-based filtering techniques have often been applied in communication systems due to their advantages in signal denoising compared with Fourier-based methods. In this study we consider applications of a 1-D double-density complex wavelet transform (1D-DDCWT) and compare the results with the standard 1-D discrete wavelet transform (1D-DWT). The performances of the considered techniques are compared using the mean opinion score (MOS), the primary metric for the quality of the processed signals. A two-dimensional extension of this approach can be used for effective image denoising.
Prognostics of Lithium-Ion Batteries Based on Wavelet Denoising and DE-RVM
Zhang, Chaolong; He, Yigang; Yuan, Lifeng; Xiang, Sheng; Wang, Jinping
2015-01-01
Lithium-ion batteries are widely used in many electronic systems. Therefore, it is significantly important to estimate the lithium-ion battery's remaining useful life (RUL), yet very difficult. One important reason is that the measured battery capacity data are often subject to the different levels of noise pollution. In this paper, a novel battery capacity prognostics approach is presented to estimate the RUL of lithium-ion batteries. Wavelet denoising is performed with different thresholds in order to weaken the strong noise and remove the weak noise. Relevance vector machine (RVM) improved by differential evolution (DE) algorithm is utilized to estimate the battery RUL based on the denoised data. An experiment including battery 5 capacity prognostics case and battery 18 capacity prognostics case is conducted and validated that the proposed approach can predict the trend of battery capacity trajectory closely and estimate the battery RUL accurately. PMID:26413090
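A single-level Haar soft-thresholding step is a minimal stand-in for the multi-threshold wavelet denoising described above (the synthetic capacity series, known noise level, and universal threshold are illustrative assumptions, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 256
capacity = 2.0 - 0.003 * np.arange(n)              # slowly fading "capacity" trend
noisy = capacity + 0.02 * rng.standard_normal(n)   # measured, noise std assumed known

approx = (noisy[0::2] + noisy[1::2]) / np.sqrt(2)  # Haar analysis
detail = (noisy[0::2] - noisy[1::2]) / np.sqrt(2)

t = 0.02 * np.sqrt(2 * np.log(n))                  # universal threshold sigma*sqrt(2 ln n)
detail = np.sign(detail) * np.maximum(np.abs(detail) - t, 0.0)  # soft threshold

den = np.empty(n)                                  # Haar synthesis
den[0::2] = (approx + detail) / np.sqrt(2)
den[1::2] = (approx - detail) / np.sqrt(2)

mse_noisy = float(np.mean((noisy - capacity) ** 2))
mse_den = float(np.mean((den - capacity) ** 2))
```

Using different thresholds per level, as the paper does, lets strong noise be suppressed at fine scales while weak noise is removed more gently at coarse scales.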
Non-local neighbor embedding image denoising algorithm in sparse domain
NASA Astrophysics Data System (ADS)
Shi, Guo-chuan; Xia, Liang; Liu, Shuang-qing; Xu, Guo-ming
2013-12-01
To get better denoising results, prior knowledge of natural images should be taken into account to regularize the ill-posed inverse problem. In this paper, we propose an image denoising algorithm via non-local similar neighbor embedding in the sparse domain. Firstly, a local statistical feature, namely histograms of oriented gradients of image patches, is used to perform the clustering, and then the whole training data set is partitioned into a set of subsets which have similar local geometric structures, and the centroid of each subset is also obtained. Secondly, we apply principal component analysis (PCA) to learn a compact sub-dictionary for each cluster. Next, through sparse coding over the sub-dictionary and neighborhood selection, the image patch to be synthesized can be approximated by its top k neighbors. Extensive experimental results validate the effectiveness of the proposed method in terms of both PSNR and visual perception.
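The per-cluster PCA sub-dictionary step can be sketched with an SVD on centered patches; the synthetic "cluster" below (random patches near a low-dimensional subspace) and the 99% energy cut-off are our own assumptions:

```python
import numpy as np

rng = np.random.default_rng(9)
# A synthetic cluster: 500 patches of 16 pixels lying near a 3-D subspace
basis = rng.standard_normal((16, 3))
patches = rng.standard_normal((500, 3)) @ basis.T \
          + 0.01 * rng.standard_normal((500, 16))

centered = patches - patches.mean(axis=0)
U, svals, Vt = np.linalg.svd(centered, full_matrices=False)
energy = np.cumsum(svals**2) / np.sum(svals**2)
n_atoms = int(np.searchsorted(energy, 0.99) + 1)   # atoms capturing 99% of variance
sub_dictionary = Vt[:n_atoms]                      # rows are the PCA atoms
```

Because each cluster collects patches with similar local geometry, a handful of PCA atoms suffices, which is what makes the sub-dictionary compact.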
A new performance evaluation scheme for jet engine vibration signal denoising
NASA Astrophysics Data System (ADS)
Sadooghi, Mohammad Saleh; Esmaeilzadeh Khadem, Siamak
2016-08-01
Denoising of a cargo-plane jet engine compressor vibration signal is investigated in this article. The discrete wavelet transform and two families of thresholding methods, Donoho-Johnstone and the parameter method, are applied to the vibration signal. Eighty-four combinations of wavelet thresholding and mother wavelet are evaluated. A new performance evaluation scheme for the optimal selection of the mother wavelet and thresholding method combination is proposed in this paper, which makes a trade-off among four performance criteria: signal-to-noise ratio, percentage root-mean-square difference, cross-correlation, and mean square error. The discrete Meyer mother wavelet (dmey) combined with rigorous SURE thresholding has the maximum trade-off value and was selected as the most appropriate combination for denoising the signal. It was shown that an inappropriate combination leads to loss of information. The higher performance of the proposed trade-off with respect to the individual criteria was also demonstrated graphically.
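The four criteria combined in the trade-off have standard definitions, sketched here on a toy clean/denoised pair (these are the common textbook definitions; the paper's exact normalizations may differ):

```python
import numpy as np

rng = np.random.default_rng(5)
clean = np.sin(np.linspace(0, 8 * np.pi, 1000))
denoised = clean + 0.05 * rng.standard_normal(1000)   # a hypothetical filter output

mse = float(np.mean((denoised - clean) ** 2))                                # mean square error
snr_db = float(10 * np.log10(np.sum(clean**2) / np.sum((denoised - clean) ** 2)))
prd = float(100 * np.sqrt(np.sum((denoised - clean) ** 2) / np.sum(clean**2)))  # percent RMS difference
xcorr = float(np.corrcoef(clean, denoised)[0, 1])                            # cross-correlation
```

A combined score then rewards high SNR and cross-correlation while penalizing high MSE and PRD, which is the balancing act the proposed scheme formalizes.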
de Decker, Arnaud; Lee, John Aldo; Verleysen, Michel
2009-01-01
Denoising is a key step in the processing of medical images. It aims at improving both the interpretability and visual aspect of the images. Yet, designing a robust and efficient denoising tool remains an unsolved challenge, and a specific issue concerns the noise model. Many filters typically assume that noise is additive and Gaussian, with uniform variance. In contrast, noise in medical images often has more complex properties. This paper considers images with Poissonian noise and patch-based bilateral filters, that is, filters that involve a tonal kernel and pairwise comparisons between shifted blocks of the images. The main aim is then to integrate two variance stabilizing transformations that allow the filters to work with Gaussianized noise. The performances of these filters are compared to those of the classical bilateral filter with the same transformations. The experiments include an artificial benchmark as well as a positron emission tomography image.
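A common variance stabilizing transformation for Poisson noise of the kind the abstract refers to is the Anscombe transform; a minimal sketch (the paper may use a different or additional transformation):

```python
import numpy as np

def anscombe(x):
    """Forward Anscombe transform: Poisson counts -> roughly unit-variance
    Gaussian noise, independent of the underlying intensity."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse (an exactly unbiased inverse is more involved)."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

rng = np.random.default_rng(1)
counts = rng.poisson(lam=20.0, size=100_000)   # Poissonian "pixels"
stabilized = anscombe(counts)
# After stabilization the noise variance is approximately 1 regardless of
# the intensity, so a Gaussian-noise filter (e.g. bilateral) can be applied,
# followed by the inverse transform.
```

The filter is then run on `stabilized` and the result mapped back with `inverse_anscombe`, which is the "Gaussianized noise" pipeline the abstract outlines.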
Laplacian based non-local means denoising of MR images with Rician noise.
Bhujle, Hemalata V; Chaudhuri, Subhasis
2013-11-01
A Magnetic Resonance (MR) magnitude image is often corrupted with signal-dependent Rician noise, which arises from complex white Gaussian noise in the acquired data. Considering the special characteristics of Rician noise, we carry out nonlocal means denoising on squared magnitude images and compensate the introduced bias. In this paper, we propose an algorithm which not only preserves the edges and fine structures but also performs efficient denoising. For this purpose we have used a Laplacian of Gaussian (LoG) filter in conjunction with a nonlocal means filter (NLM). Further, to enhance the edges and to accelerate the filtering process, only a few similar patches have been preselected on the basis of closeness in edge and inverted mean values. Experiments have been conducted on both simulated and clinical data sets. The qualitative and quantitative measures demonstrate the efficacy of the proposed method.
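The bias-compensation step on squared magnitudes rests on the identity E[M^2] = A^2 + 2*sigma^2 for Rician data; a minimal sketch, with a plain sample mean standing in for the actual NLM average:

```python
import numpy as np

def rician_bias_correct(m2_denoised, sigma):
    """Compensate the Rician bias after denoising the squared magnitude.

    For Rician data, E[M^2] = A^2 + 2*sigma^2, so the underlying signal
    amplitude is recovered as sqrt(max(E[M^2] - 2*sigma^2, 0))."""
    return np.sqrt(np.maximum(m2_denoised - 2.0 * sigma ** 2, 0.0))

rng = np.random.default_rng(2)
A, sigma = 100.0, 10.0
real = A + sigma * rng.normal(size=200_000)
imag = sigma * rng.normal(size=200_000)
magnitude = np.sqrt(real ** 2 + imag ** 2)     # Rician-distributed samples
# Stand-in for the NLM weighted average: a plain mean over many samples.
A_hat = rician_bias_correct((magnitude ** 2).mean(), sigma)
```

Averaging the squared magnitudes first and subtracting 2*sigma^2 afterwards avoids the positive bias that a direct average of magnitudes would carry.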
Denoising MR images using non-local means filter with combined patch and pixel similarity.
Zhang, Xinyuan; Hou, Guirong; Ma, Jianhua; Yang, Wei; Lin, Bingquan; Xu, Yikai; Chen, Wufan; Feng, Yanqiu
2014-01-01
Denoising is critical for improving visual quality and the reliability of associated quantitative analysis when magnetic resonance (MR) images are acquired with low signal-to-noise ratios. The classical non-local means (NLM) filter, which averages pixels weighted by the similarity of their neighborhoods, has been adapted and demonstrated to effectively reduce Rician noise without affecting edge details in MR magnitude images. However, the Rician NLM (RNLM) filter usually blurs small high-contrast particle details which might be clinically relevant information. In this paper, we investigated the cause of this particle-blurring problem and proposed a novel particle-preserving RNLM filter with combined patch and pixel (RNLM-CPP) similarity. The results of experiments on both synthetic and real MR data demonstrate that the proposed RNLM-CPP filter can preserve small high-contrast particle details better than the original RNLM filter while denoising MR images.
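One plausible form of a combined patch-and-pixel similarity weight is sketched below. The exact combination used by the RNLM-CPP filter is not given in the abstract, so this multiplicative form and all names are hypothetical; the idea it illustrates is that a centre-pixel term keeps small high-contrast particles from being averaged into dissimilar patches.

```python
import numpy as np

def combined_weight(patch_i, patch_j, h_patch, h_pixel):
    """Hypothetical combined similarity: the usual NLM patch distance is
    multiplied by a centre-pixel similarity term, so patches whose centre
    intensities differ sharply (e.g. a small bright particle) get a low
    weight even when the surrounding neighbourhoods match."""
    patch_dist = np.mean((patch_i - patch_j) ** 2)
    c = patch_i.shape[0] // 2
    pixel_dist = (patch_i[c, c] - patch_j[c, c]) ** 2
    return np.exp(-patch_dist / h_patch ** 2) * np.exp(-pixel_dist / h_pixel ** 2)

p = np.ones((5, 5))
q = p.copy()
q[2, 2] = 5.0                      # same neighbourhood, different centre
w_same = combined_weight(p, p, 1.0, 1.0)
w_particle = combined_weight(p, q, 1.0, 1.0)
```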
A study of infrared spectroscopy de-noising based on LMS adaptive filter
NASA Astrophysics Data System (ADS)
Mo, Jia-qing; Lv, Xiao-yi; Yu, Xiao
2015-12-01
Infrared spectroscopy is widely used, but the acquired spectra often contain substantial noise, which seriously affects the spectral characteristics of the sample. De-noising is therefore very important in spectrum analysis and processing. In this study, the least mean square (LMS) adaptive filter was applied to infrared spectroscopy for the first time. When applied to infrared spectra of breast cancer with a signal-to-noise ratio (SNR) below 10 dB, the LMS adaptive filter preserved the detail and envelope of the effective signal; the results were compared and analyzed against those of the wavelet transform and ensemble empirical mode decomposition (EEMD). Three evaluation criteria (SNR, root mean square error (RMSE), and the correlation coefficient (ρ)) fully demonstrated the de-noising advantages of the LMS adaptive filter for infrared spectroscopy of breast cancer.
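A minimal LMS adaptive de-noising sketch, in the adaptive-line-enhancer configuration (a delayed copy of the input serves as the reference). The filter order, step size, and delay here are illustrative, not the paper's settings, and a synthetic sinusoid stands in for a spectrum:

```python
import numpy as np

def lms_denoise(noisy, order=16, mu=0.05, delay=1):
    """Adaptive line enhancer built on the LMS update: a linear predictor
    driven by a delayed copy of the input passes the correlated (signal)
    component and rejects the uncorrelated broadband noise."""
    w = np.zeros(order)
    out = np.zeros_like(noisy)
    for k in range(order + delay, len(noisy)):
        x = noisy[k - delay - order + 1:k - delay + 1][::-1]  # delayed taps
        y = w @ x                      # one-step prediction
        e = noisy[k] - y               # prediction error
        w += mu * e * x                # LMS weight update
        out[k] = y
    return out

rng = np.random.default_rng(4)
t = np.arange(4000)
clean = np.sin(2 * np.pi * t / 40)                 # stand-in "signal"
noisy = clean + 0.3 * rng.normal(size=t.size)
denoised = lms_denoise(noisy)
```

Because the filter adapts its weights sample by sample, it needs a short convergence run-in; quality comparisons should use the converged tail of the output.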
BIE: Bayesian Inference Engine
NASA Astrophysics Data System (ADS)
Weinberg, Martin D.
2013-12-01
The Bayesian Inference Engine (BIE) is an object-oriented library of tools written in C++ designed explicitly to enable Bayesian update and model comparison for astronomical problems. To facilitate "what if" exploration, BIE provides a command line interface (written with Bison and Flex) to run input scripts. The output of the code is a simulation of the Bayesian posterior distribution, from which summary statistics can be determined, e.g., by taking moments or computing confidence intervals. All of these quantities are fundamentally integrals, and the Markov Chain approach produces variates θ distributed according to P(θ|D), so moments are trivially obtained by summing over the ensemble of variates.
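The summary-statistics step described above amounts to simple ensemble averages over the chain's variates; a minimal sketch (not BIE's API — direct samples stand in for an actual Markov chain):

```python
import numpy as np

rng = np.random.default_rng(3)
# Stand-in for chain output: variates theta distributed according to P(theta|D),
# here a normal posterior with mean 2.0 and standard deviation 0.5.
theta = rng.normal(loc=2.0, scale=0.5, size=100_000)

posterior_mean = theta.mean()                          # first moment
posterior_var = theta.var()                            # second central moment
ci_low, ci_high = np.quantile(theta, [0.025, 0.975])   # 95% credible interval
```

Because every summary is an integral against P(θ|D), each reduces to a sum over the ensemble, which is exactly the convenience the abstract points out.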