Science.gov

Sample records for denoising inferred functional

  1. Denoising inferred functional association networks obtained by gene fusion analysis

    PubMed Central

    Kamburov, Atanas; Goldovsky, Leon; Freilich, Shiri; Kapazoglou, Aliki; Kunin, Victor; Enright, Anton J; Tsaftaris, Athanasios; Ouzounis, Christos A

    2007-01-01

    Background Gene fusion detection – also known as the 'Rosetta Stone' method – involves the identification of fused composite genes in a set of reference genomes, which indicates potential interactions between their un-fused counterpart genes in query genomes. The precision of this method typically improves with an ever-increasing number of reference genomes. Results In order to explore the usefulness and scope of this approach for protein interaction prediction and generate a high-quality, non-redundant set of interacting pairs of proteins across a wide taxonomic range, we have exhaustively performed gene fusion analysis for 184 genomes using an efficient variant of a previously developed protocol. By analyzing interaction graphs and applying a threshold that limits the maximum number of possible interactions within the largest graph components, we show that we can reduce the number of implausible interactions due to the detection of promiscuous domains. With this generally applicable approach, we generate a robust set of over 2 million distinct and testable interactions encompassing 696,894 proteins in 184 species or strains, most of which have never been the subject of high-throughput experimental proteomics. We investigate the cumulative effect of increasing numbers of genomes on the fidelity and quantity of predictions, and show that, for large numbers of genomes, predictions do not saturate but continue to grow linearly for the majority of species. We also examine the percentage of component (and composite) proteins in relation to the number of genes and further validate the functional categories that are highly represented in this robust set of detected genome-wide interactions. Conclusion We illustrate the phylogenetic and functional diversity of gene fusion events across genomes, and their usefulness for accurate prediction of protein interaction and function. PMID:18081932
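
    A minimal sketch (not the authors' protocol) of the component-size thresholding idea described above: build an interaction graph from the predicted pairs and discard components larger than a cap, since oversized components typically stem from promiscuous fused domains. The edge list and the cap value are illustrative assumptions.

    import networkx as nx

    def filter_promiscuous(edges, max_component_size=50):
        """Keep predicted interactions only in plausibly sized components."""
        g = nx.Graph()
        g.add_edges_from(edges)  # edges: iterable of (protein_a, protein_b) pairs
        kept = []
        for nodes in nx.connected_components(g):
            if len(nodes) <= max_component_size:
                kept.extend(g.subgraph(nodes).edges())
        return kept

    # A promiscuous hub links many partners into one large component and is dropped:
    edges = [("hub", "p%d" % i) for i in range(100)] + [("a", "b"), ("c", "d")]
    print(filter_promiscuous(edges, max_component_size=10))  # -> [('a', 'b'), ('c', 'd')]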

  2. Bayesian Inference for Neighborhood Filters With Application in Denoising.

    PubMed

    Huang, Chao-Tsung

    2015-11-01

    Range-weighted neighborhood filters are useful and popular for their edge-preserving property and simplicity, but they were originally proposed as intuitive tools. Previous works needed to connect them to other tools or models for indirect property reasoning or parameter estimation. In this paper, we introduce a unified empirical Bayesian framework to do both directly. A neighborhood noise model is proposed to reason and infer the Yaroslavsky, bilateral, and modified non-local means filters by joint maximum a posteriori and maximum likelihood estimation. Then, the essential parameter, range variance, can be estimated via model fitting to the empirical distribution of an observable chi scale mixture variable. An algorithm based on expectation-maximization and quasi-Newton optimization is devised to perform the model fitting efficiently. Finally, we apply this framework to the problem of color-image denoising. A recursive fitting and filtering scheme is proposed to improve the image quality. Extensive experiments are performed for a variety of configurations, including different kernel functions, filter types and support sizes, color channel numbers, and noise types. The results show that the proposed framework can fit noisy images well and the range variance can be estimated successfully and efficiently. PMID:26259244
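
    A minimal numpy sketch of the range-weighted neighborhood (Yaroslavsky) filter the paper reasons about; the Gaussian range kernel and the parameters radius and sigma_r are illustrative assumptions. The paper's contribution is precisely that the range variance can be estimated from the data rather than hand-picked as here.

    import numpy as np

    def yaroslavsky_filter(img, radius=3, sigma_r=20.0):
        """Average neighbours, weighted by intensity similarity to the centre pixel."""
        img = img.astype(np.float64)
        num = np.zeros_like(img)
        den = np.zeros_like(img)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
                w = np.exp(-(shifted - img) ** 2 / (2.0 * sigma_r ** 2))  # range kernel
                num += w * shifted
                den += w
        return num / den  # borders wrap around via np.roll; acceptable for a sketch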

  3. Point Set Denoising Using Bootstrap-Based Radial Basis Function

    PubMed Central

    Ramli, Ahmad; Abd. Majid, Ahmad

    2016-01-01

    This paper examines the application of a bootstrap test error estimation of radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue of the point set model that is generated from 3D scanning devices, and hence, point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, which is applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest neighbour search and then projects the point set to the approximated thin-plate spline surface. Therefore, the denoising process is achieved, and the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study. PMID:27315105

  4. A Neuro-Fuzzy Inference System Combining Wavelet Denoising, Principal Component Analysis, and Sequential Probability Ratio Test for Sensor Monitoring

    SciTech Connect

    Na, Man Gyun; Oh, Seungrohk

    2002-11-15

    A neuro-fuzzy inference system combined with the wavelet denoising, principal component analysis (PCA), and sequential probability ratio test (SPRT) methods has been developed to monitor a relevant sensor using the information of other sensors. The parameters of the neuro-fuzzy inference system that estimates the relevant sensor signal are optimized by a genetic algorithm and a least-squares algorithm. The wavelet denoising technique was applied to remove noise components from the input signals to the neuro-fuzzy system. By reducing the dimension of the input space without losing a significant amount of information, PCA was used to shorten the time necessary to train the neuro-fuzzy system, simplify the structure of the neuro-fuzzy inference system, and ease the selection of its input signals. Using the residual signals between the estimated signals and the measured signals, the SPRT is applied to detect whether the sensors are degraded. The proposed sensor-monitoring algorithm was verified through applications to the pressurizer water level, pressurizer pressure, and hot-leg temperature sensors in pressurized water reactors.
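
    A minimal sketch of the SPRT stage described above, applied to the residuals between estimated and measured signals. Wald's decision thresholds follow from the chosen error rates; the drift magnitude m1, noise sigma, alpha, and beta are illustrative assumptions, not values from the paper.

    import numpy as np

    def sprt_degradation(residuals, m1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
        """Sequentially decide H0 (residual mean 0, healthy) vs H1 (mean m1, degraded)."""
        upper = np.log((1 - beta) / alpha)  # crossing it accepts H1: degraded
        lower = np.log(beta / (1 - alpha))  # crossing it accepts H0: healthy
        llr = 0.0
        for t, r in enumerate(residuals):
            llr += (m1 / sigma ** 2) * (r - m1 / 2.0)  # Gaussian log-likelihood ratio
            if llr >= upper:
                return "degraded", t
            if llr <= lower:
                return "healthy", t
        return "undecided", len(residuals) - 1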

  5. A New Adaptive Diffusive Function for Magnetic Resonance Imaging Denoising Based on Pixel Similarity

    PubMed Central

    Heydari, Mostafa; Karami, Mohammad Reza

    2015-01-01

    Although there are many methods for image denoising, partial differential equation (PDE) based denoising has attracted much attention in the field of medical image processing, such as magnetic resonance imaging (MRI). The main advantage of the PDE-based denoising approach lies in its ability to smooth an image in a nonlinear way, which effectively removes the noise while preserving edges through anisotropic diffusion controlled by the diffusive function. This function was first introduced by Perona and Malik (P-M) in their model. They proposed two functions that are the most frequently used in PDE-based methods. Since these functions consider only the gradient information of a diffused pixel, they cannot remove noise in images with a low signal-to-noise ratio (SNR). In this paper we propose a modified diffusive function with fractional power, based on pixel similarity, to improve the P-M model for low SNR. We also show that our proposed function stabilizes the P-M method. As the experimental results show, our proposed function, a modified version of the P-M function, effectively improves the SNR and preserves edges better than the P-M functions at low SNR. PMID:26955563
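
    For reference, a minimal numpy sketch of the classical P-M scheme that the paper modifies, with one of the two original diffusive functions, g(s) = 1/(1 + (s/K)^2); the proposed fractional-power, similarity-based function is not reproduced here. Step size, K, and iteration count are illustrative.

    import numpy as np

    def perona_malik(img, n_iter=20, K=15.0, dt=0.2):
        u = img.astype(np.float64).copy()
        for _ in range(n_iter):
            # differences toward the four neighbours
            dn = np.roll(u, -1, axis=0) - u
            ds = np.roll(u, 1, axis=0) - u
            de = np.roll(u, -1, axis=1) - u
            dw = np.roll(u, 1, axis=1) - u
            g = lambda d: 1.0 / (1.0 + (d / K) ** 2)  # P-M diffusive function
            u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
        return u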

  6. [Research on ECG de-noising method based on ensemble empirical mode decomposition and wavelet transform using improved threshold function].

    PubMed

    Ye, Linlin; Yang, Dan; Wang, Xu

    2014-06-01

    A de-noising method for electrocardiogram (ECG) signals based on ensemble empirical mode decomposition (EEMD) and wavelet threshold de-noising theory is proposed. We decomposed noisy ECG signals with EEMD into a series of intrinsic mode functions (IMFs), then selected and reconstructed the IMFs to de-noise the ECG. The processed ECG signals were filtered again with a wavelet transform using an improved threshold function. In the experiments, the MIT-BIH ECG database was used to evaluate the performance of the proposed method, comparing it with de-noising based on EEMD alone and on wavelet transform with the improved threshold function alone, in terms of signal-to-noise ratio (SNR) and mean square error (MSE). The results showed that the ECG waveforms de-noised with the proposed method were smooth and the amplitudes of ECG features did not attenuate. In conclusion, the method discussed in this paper can de-noise the ECG while keeping the characteristics of the original signal. PMID:25219236
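
    A minimal sketch of the two-stage idea under stated assumptions: EEMD via the PyEMD package (an assumption; any EEMD implementation would do) and wavelet thresholding via PyWavelets. A standard universal soft threshold stands in for the paper's improved threshold function, whose exact form the abstract does not give; which IMFs to drop is likewise an illustrative choice.

    import numpy as np
    import pywt
    from PyEMD import EEMD  # assumed package providing ensemble EMD

    def eemd_wavelet_denoise(ecg, drop_imfs=(0,), wavelet="db4", level=4):
        imfs = EEMD().eemd(ecg)                      # decompose into IMFs
        keep = [i for i in range(len(imfs)) if i not in drop_imfs]
        recon = np.sum(imfs[keep], axis=0)           # discard the noisiest IMF(s)
        coeffs = pywt.wavedec(recon, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # robust noise estimate
        thr = sigma * np.sqrt(2.0 * np.log(len(recon)))  # universal threshold
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                                for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[: len(ecg)]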

  7. Image denoising in bidimensional empirical mode decomposition domain: the role of Student's probability distribution function.

    PubMed

    Lahmiri, Salim

    2016-03-01

    Hybridisation of the bi-dimensional empirical mode decomposition (BEMD) with denoising techniques has been proposed in the literature as an effective approach for image denoising. In this Letter, the Student's probability density function is introduced in the computation of the mean envelope of the data during the BEMD sifting process to make it robust to values that are far from the mean. The resulting BEMD is denoted tBEMD. In order to show the effectiveness of the tBEMD, several image denoising techniques in tBEMD domain are employed; namely, fourth order partial differential equation (PDE), linear complex diffusion process (LCDP), non-linear complex diffusion process (NLCDP), and the discrete wavelet transform (DWT). Two biomedical images and a standard digital image were considered for experiments. The original images were corrupted with additive Gaussian noise with three different levels. Based on peak-signal-to-noise ratio, the experimental results show that PDE, LCDP, NLCDP, and DWT all perform better in the tBEMD than in the classical BEMD domain. It is also found that tBEMD is faster than classical BEMD when the noise level is low. When it is high, the computational cost in terms of processing time is similar. The effectiveness of the presented approach makes it promising for clinical applications. PMID:27222723

  8. Functional network inference of the suprachiasmatic nucleus.

    PubMed

    Abel, John H; Meeker, Kirsten; Granados-Fuentes, Daniel; St John, Peter C; Wang, Thomas J; Bales, Benjamin B; Doyle, Francis J; Herzog, Erik D; Petzold, Linda R

    2016-04-19

    In the mammalian suprachiasmatic nucleus (SCN), noisy cellular oscillators communicate within a neuronal network to generate precise system-wide circadian rhythms. Although the intracellular genetic oscillator and intercellular biochemical coupling mechanisms have been examined previously, the network topology driving synchronization of the SCN has not been elucidated. This network has been particularly challenging to probe, due to its oscillatory components and slow coupling timescale. In this work, we investigated the SCN network at a single-cell resolution through a chemically induced desynchronization. We then inferred functional connections in the SCN by applying the maximal information coefficient statistic to bioluminescence reporter data from individual neurons while they resynchronized their circadian cycling. Our results demonstrate that the functional network of circadian cells associated with resynchronization has small-world characteristics, with a node degree distribution that is exponential. We show that hubs of this small-world network are preferentially located in the central SCN, with sparsely connected shells surrounding these cores. Finally, we used two computational models of circadian neurons to validate our predictions of network structure. PMID:27044085

  9. Green Channel Guiding Denoising on Bayer Image

    PubMed Central

    Zhang, Maojun

    2014-01-01

    Denoising is an indispensable function for digital cameras. In respect that noise is diffused during the demosaicking, the denoising ought to work directly on bayer data. The difficulty of denoising on bayer image is the interlaced mosaic pattern of red, green, and blue. Guided filter is a novel time efficient explicit filter kernel which can incorporate additional information from the guidance image, but it is still not applied for bayer image. In this work, we observe that the green channel of bayer mode is higher in both sampling rate and Signal-to-Noise Ratio (SNR) than the red and blue ones. Therefore the green channel can be used to guide denoising. This kind of guidance integrates the different color channels together. Experiments on both actual and simulated bayer images indicate that green channel acts well as the guidance signal, and the proposed method is competitive with other popular filter kernel denoising methods. PMID:24741370
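
    A minimal numpy/scipy sketch of a guided filter (He et al.) with the green channel as guidance, the idea the abstract proposes for Bayer data. Interpolating the sparse green samples to full resolution is assumed done beforehand; radius and eps are illustrative.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def guided_filter(guide, src, radius=4, eps=1e-2):
        """Guided filter; guide and src are float 2-D arrays of equal shape."""
        box = lambda x: uniform_filter(x, size=2 * radius + 1)
        mean_g, mean_s = box(guide), box(src)
        cov_gs = box(guide * src) - mean_g * mean_s
        var_g = box(guide * guide) - mean_g ** 2
        a = cov_gs / (var_g + eps)      # local linear model: src ~ a * guide + b
        b = mean_s - a * mean_g
        return box(a) * guide + box(b)  # smoothed coefficients applied to the guide

    # e.g. denoise the red plane with the (interpolated) green plane as guide:
    # red_denoised = guided_filter(green.astype(float), red.astype(float))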

  10. Functional neuroanatomy of intuitive physical inference.

    PubMed

    Fischer, Jason; Mikhael, John G; Tenenbaum, Joshua B; Kanwisher, Nancy

    2016-08-23

    To engage with the world-to understand the scene in front of us, plan actions, and predict what will happen next-we must have an intuitive grasp of the world's physical structure and dynamics. How do the objects in front of us rest on and support each other, how much force would be required to move them, and how will they behave when they fall, roll, or collide? Despite the centrality of physical inferences in daily life, little is known about the brain mechanisms recruited to interpret the physical structure of a scene and predict how physical events will unfold. Here, in a series of fMRI experiments, we identified a set of cortical regions that are selectively engaged when people watch and predict the unfolding of physical events-a "physics engine" in the brain. These brain regions are selective to physical inferences relative to nonphysical but otherwise highly similar scenes and tasks. However, these regions are not exclusively engaged in physical inferences per se or, indeed, even in scene understanding; they overlap with the domain-general "multiple demand" system, especially the parts of that system involved in action planning and tool use, pointing to a close relationship between the cognitive and neural mechanisms involved in parsing the physical content of a scene and preparing an appropriate action. PMID:27503892

  11. Denoising of high-resolution single-particle electron-microscopy density maps by their approximation using three-dimensional Gaussian functions.

    PubMed

    Jonić, S; Vargas, J; Melero, R; Gómez-Blanco, J; Carazo, J M; Sorzano, C O S

    2016-06-01

    Cryo-electron microscopy (cryo-EM) of frozen-hydrated preparations of isolated macromolecular complexes is the method of choice to obtain the structure of complexes that cannot be easily studied by other experimental methods due to their flexibility or large size. An increasing number of macromolecular structures are currently being obtained at subnanometer resolution, but the interpretation of structural details in such EM-derived maps is often difficult because noise in the high-frequency signal components reduces their contrast. In this paper, we show that the method for EM density-map approximation using Gaussian functions can be used for denoising of single-particle EM maps of high (typically subnanometer) resolution. We show its denoising performance using simulated and experimental EM density maps of several complexes. PMID:27085420

  12. Automatic Denoising of Functional MRI Data: Combining Independent Component Analysis and Hierarchical Fusion of Classifiers

    PubMed Central

    Salimi-Khorshidi, Gholamreza; Douaud, Gwenaëlle; Beckmann, Christian F; Glasser, Matthew F; Griffanti, Ludovica; Smith, Stephen M

    2014-01-01

    Many sources of fluctuation contribute to the fMRI signal, and this makes identifying the effects that are truly related to the underlying neuronal activity difficult. Independent component analysis (ICA) - one of the most widely used techniques for the exploratory analysis of fMRI data - has been shown to be a powerful technique in identifying various sources of neuronally-related and artefactual fluctuation in fMRI data (both with the application of external stimuli and with the subject “at rest”). ICA decomposes fMRI data into patterns of activity (a set of spatial maps and their corresponding time series) that are statistically independent and add linearly to explain voxel-wise time series. Given the set of ICA components, if the components representing “signal” (brain activity) can be distinguished from the “noise” components (effects of motion, non-neuronal physiology, scanner artefacts and other nuisance sources), the latter can then be removed from the data, providing an effective cleanup of structured noise. Manual classification of components is labour intensive and requires expertise; hence, a fully automatic noise detection algorithm that can reliably detect various types of noise sources (in both task and resting fMRI) is desirable. In this paper, we introduce FIX (“FMRIB’s ICA-based X-noiseifier”), which provides an automatic solution for denoising fMRI data via accurate classification of ICA components. For each ICA component, FIX generates a large number of distinct spatial and temporal features, each describing a different aspect of the data (e.g., what proportion of temporal fluctuations are at high frequencies). The set of features is then fed into a multi-level classifier (built around several different classifiers). Once trained through the hand-classification of a sufficient number of training datasets, the classifier can then automatically classify new datasets. The noise components can then be subtracted from (or regressed out of) the data.

  13. Nonparametric inference on median residual life function.

    PubMed

    Jeong, Jong-Hyeon; Jung, Sin-Ho; Costantino, Joseph P

    2008-03-01

    A simple approach to the estimation of the median residual lifetime is proposed for a single group by inverting a function of the Kaplan-Meier estimators. A test statistic is proposed to compare two median residual lifetimes at any fixed time point. The test statistic does not involve estimation of the underlying probability density function of failure times under censoring. Extensive simulation studies are performed to validate the proposed test statistic in terms of type I error probabilities and powers at various time points. One of the oldest data sets from the National Surgical Adjuvant Breast and Bowel Project (NSABP), which has more than a quarter century of follow-up, is used to illustrate the method. The analysis results indicate that, without systematic post-operative therapy, a significant difference in median residual lifetimes between node-negative and node-positive breast cancer patients persists for about 10 years after surgery. The new estimates of the median residual lifetime could serve as a baseline for physicians to explain any incremental effects of post-operative treatments in terms of delaying breast cancer recurrence or prolonging remaining lifetimes of breast cancer patients. PMID:17501936
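
    A minimal numpy sketch of the estimator being inverted: compute the Kaplan-Meier survival curve, then take the median residual life at t0 as the smallest u > t0 with S(u) <= S(t0)/2, minus t0. This illustrates the quantity studied, not the paper's test statistic; all variable names are illustrative.

    import numpy as np

    def kaplan_meier(times, events):
        """Distinct event times and the Kaplan-Meier survival estimate S(t) at each."""
        times, events = np.asarray(times, float), np.asarray(events, bool)
        uniq = np.unique(times[events])
        surv, s = [], 1.0
        for t in uniq:
            d = np.sum((times == t) & events)   # events at time t
            r = np.sum(times >= t)              # number still at risk
            s *= 1.0 - d / r
            surv.append(s)
        return uniq, np.array(surv)

    def median_residual_life(times, events, t0=0.0):
        t, s = kaplan_meier(times, events)
        s_t0 = 1.0 if t0 < t[0] else s[np.searchsorted(t, t0, side="right") - 1]
        idx = np.where((t > t0) & (s <= s_t0 / 2.0))[0]
        return t[idx[0]] - t0 if idx.size else np.inf  # inf: not reached (censoring)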

  14. Functional inferences of environmental coccolithovirus biodiversity.

    PubMed

    Nissimov, Jozef I; Jones, Mark; Napier, Johnathan A; Munn, Colin B; Kimmance, Susan A; Allen, Michael J

    2013-10-01

    The cosmopolitan calcifying alga Emiliania huxleyi is one of the most abundant bloom forming coccolithophore species in the oceans and plays an important role in global biogeochemical cycling. Coccolithoviruses are a major cause of coccolithophore bloom termination and have been studied in laboratory, mesocosm and open ocean studies. However, little is known about the dynamic interactions between the host and its viruses, and less is known about the natural diversity and role of functionally important genes within natural coccolithovirus communities. Here, we investigate the temporal and spatial distribution of coccolithoviruses by the use of molecular fingerprinting techniques PCR, DGGE and genomic sequencing. The natural biodiversity of the virus genes encoding the major capsid protein (MCP) and serine palmitoyltransferase (SPT) were analysed in samples obtained from the Atlantic Meridional Transect (AMT), the North Sea and the L4 site in the Western Channel Observatory. We discovered nine new coccolithovirus genotypes across the AMT and L4 site, with the majority of MCP sequences observed at the deep chlorophyll maximum layer of the sampled sites on the transect. We also found four new SPT gene variations in the North Sea and at L4. Their translated fragments and the full protein sequence of SPT from laboratory strains EhV-86 and EhV-99B1 were modelled and revealed that the theoretical fold differs among strains. Variation identified in the structural distance between the two domains of the SPT protein may have an impact on the catalytic capabilities of its active site. In summary, the combined use of 'standard' markers (i.e. MCP), in combination with metabolically relevant markers (i.e. SPT) are useful in the study of the phylogeny and functional biodiversity of coccolithoviruses, and can provide an interesting intracellular insight into the evolution of these viruses and their ability to infect and replicate within their algal hosts. PMID:24006045

  15. Classical methods for interpreting objective function minimization as intelligent inference

    SciTech Connect

    Golden, R.M.

    1996-12-31

    Most recognition algorithms and neural networks can be formally viewed as seeking a minimum value of an appropriate objective function during either classification or learning phases. The goal of this paper is to argue that in order to show a recognition algorithm is making intelligent inferences, it is not sufficient to show that the recognition algorithm is computing (or trying to compute) the global minimum of some objective function. One must explicitly define a "relational system" for the recognition algorithm or neural network which identifies: (i) the sample space, (ii) the relevant sigma-field of events generated by the sample space, and (iii) the "relation" for that relational system. Only when such a "relational system" is properly defined is it possible to formally establish the sense in which computing the global minimum of an objective function is an intelligent inference.

  16. Denoising PCR-amplified metagenome data

    PubMed Central

    2012-01-01

    Background PCR amplification and high-throughput sequencing theoretically enable the characterization of the finest-scale diversity in natural microbial and viral populations, but each of these methods introduces random errors that are difficult to distinguish from genuine biological diversity. Several approaches have been proposed to denoise these data but lack either speed or accuracy. Results We introduce a new denoising algorithm that we call DADA (Divisive Amplicon Denoising Algorithm). Without training data, DADA infers both the sample genotypes and error parameters that produced a metagenome data set. We demonstrate performance on control data sequenced on Roche’s 454 platform, and compare the results to the most accurate denoising software currently available, AmpliconNoise. Conclusions DADA is more accurate and over an order of magnitude faster than AmpliconNoise. It eliminates the need for training data to establish error parameters, fully utilizes sequence-abundance information, and enables inclusion of context-dependent PCR error rates. It should be readily extensible to other sequencing platforms such as Illumina. PMID:23113967

  17. Local thresholding de-noise speech signal

    NASA Astrophysics Data System (ADS)

    Luo, Haitao

    2013-07-01

    The task is to de-noise a noisy speech signal. A wavelet is constructed according to Daubechies' method, and a wavelet packet is derived from the constructed scaling and wavelet functions. The noisy speech signal is decomposed by the wavelet packet. Algorithms are developed to detect the beginning and ending points of speech, and a polynomial function is constructed for local thresholding. Different strategies are applied to de-noise and compress the decomposed terminal-node coefficients. The wavelet packet tree is then reconstructed, the audio file is rebuilt from the reconstructed data, and the effectiveness of the different strategies is compared.
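
    A minimal PyWavelets sketch of the decompose-threshold-reconstruct pipeline outlined above. A single global soft threshold stands in for the paper's polynomial local thresholding, and endpoint detection is omitted; the wavelet, level, and threshold rule are illustrative assumptions.

    import numpy as np
    import pywt

    def wavelet_packet_denoise(speech, wavelet="db8", level=5):
        wp = pywt.WaveletPacket(data=speech, wavelet=wavelet,
                                mode="symmetric", maxlevel=level)
        nodes = wp.get_level(level, order="natural")        # terminal nodes
        sigma = np.median(np.abs(nodes[-1].data)) / 0.6745  # noise from last band
        thr = sigma * np.sqrt(2.0 * np.log(len(speech)))    # universal threshold
        for node in nodes:
            node.data = pywt.threshold(node.data, thr, mode="soft")
        return wp.reconstruct(update=True)[: len(speech)]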

  18. Receiver function deconvolution using transdimensional hierarchical Bayesian inference

    NASA Astrophysics Data System (ADS)

    Kolb, J. M.; Lekić, V.

    2014-06-01

    Teleseismic waves can convert from shear to compressional (Sp) or compressional to shear (Ps) across impedance contrasts in the subsurface. Deconvolving the parent waveforms (P for Ps or S for Sp) from the daughter waveforms (S for Ps or P for Sp) generates receiver functions which can be used to analyse velocity structure beneath the receiver. Though a variety of deconvolution techniques have been developed, they are all adversely affected by background and signal-generated noise. In order to take into account the unknown noise characteristics, we propose a method based on transdimensional hierarchical Bayesian inference in which both the noise magnitude and noise spectral character are parameters in calculating the likelihood probability distribution. We use a reversible-jump implementation of a Markov chain Monte Carlo algorithm to find an ensemble of receiver functions whose relative fits to the data have been calculated while simultaneously inferring the values of the noise parameters. Our noise parametrization is determined from pre-event noise so that it approximates observed noise characteristics. We test the algorithm on synthetic waveforms contaminated with noise generated from a covariance matrix obtained from observed noise. We show that the method retrieves easily interpretable receiver functions even in the presence of high noise levels. We also show that we can obtain useful estimates of noise amplitude and frequency content. Analysis of the ensemble solutions produced by our method can be used to quantify the uncertainties associated with individual receiver functions as well as with individual features within them, providing an objective way for deciding which features warrant geological interpretation. This method should make possible more robust inferences on subsurface structure using receiver function analysis, especially in areas of poor data coverage or under noisy station conditions.

  19. Beyond the bounds of orthology: functional inference from metagenomic context.

    PubMed

    Vey, Gregory; Moreno-Hagelsieb, Gabriel

    2010-07-01

    The effectiveness of the computational inference of function by genomic context is bounded by the diversity of known microbial genomes. Although metagenomes offer access to previously inaccessible organisms, their fragmentary nature prevents the conventional establishment of orthologous relationships required for reliably predicting functional interactions. We introduce a protocol for the prediction of functional interactions using data sources without information about orthologous relationships. To illustrate this process, we use the Sargasso Sea metagenome to construct a functional interaction network for the Escherichia coli K12 genome. We identify two reliability metrics, target intergenic distance and source interaction count, and apply them to selectively filter the predictions retained to construct the network of functional interactions. The resulting network contains 2,297 nodes with 10,072 edges with a positive predictive value of 0.80. The metagenome yielded 8,423 functional interactions beyond those found using only the genomic orthologs as a data source. This amounted to a 134% increase in the total number of functional interactions that are predicted by combining the metagenome and the genomic orthologs versus the genomic orthologs alone. In the absence of detectable orthologous relationships it remains feasible to derive a reliable set of predicted functional interactions. This offers a strategy for harnessing other metagenomes and homologs in general. Because metagenomes allow access to previously unreachable microorganisms, this will result in expanding the universe of known functional interactions thus furthering our understanding of functional organization. PMID:20419183

  20. Network inference from functional experimental data (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Desrosiers, Patrick; Labrecque, Simon; Tremblay, Maxime; Bélanger, Mathieu; De Dorlodot, Bertrand; Côté, Daniel C.

    2016-03-01

    Functional connectivity maps of neuronal networks are critical tools to understand how neurons form circuits, how information is encoded and processed by neurons, how memory is shaped, and how these basic processes are altered under pathological conditions. Current light microscopy makes it possible to observe the calcium or electrical activity of thousands of neurons simultaneously, yet assessing comprehensive connectivity maps directly from such data remains a non-trivial analytical task. There exist simple statistical methods, such as cross-correlation and Granger causality, but they only detect linear interactions between neurons. Other more involved inference methods inspired by information theory, such as mutual information and transfer entropy, identify connections between neurons more accurately but also require more computational resources. We carried out a comparative study of common connectivity inference methods. The relative accuracy and computational cost of each method was determined via simulated fluorescence traces generated with realistic computational models of interacting neurons in networks of different topologies (clustered or non-clustered) and sizes (10-1000 neurons). To bridge the computational and experimental works, we observed the intracellular calcium activity of live hippocampal neuronal cultures infected with the fluorescent calcium marker GCaMP6f. The spontaneous activity of the networks, consisting of 50-100 neurons per field of view, was recorded at 20 to 50 Hz on a microscope controlled by homemade software. We implemented all connectivity inference methods in the software, which rapidly loads calcium fluorescence movies, segments the images, extracts the fluorescence traces, and assesses the functional connections (with strengths and directions) between each pair of neurons. We used this software to assess, in real time, the functional connectivity from real calcium imaging data in basal conditions, under plasticity protocols, and in epileptic conditions.
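
    A minimal numpy sketch of the simplest method named above: thresholded, lag-maximized pairwise cross-correlation of the extracted fluorescence traces. The maximum lag and the threshold are illustrative assumptions, and unlike Granger causality this symmetric statistic carries no direction.

    import numpy as np

    def xcorr_network(traces, max_lag=10, threshold=0.3):
        """traces: array of shape (n_neurons, n_frames); returns boolean adjacency."""
        z = (traces - traces.mean(1, keepdims=True)) / traces.std(1, keepdims=True)
        n, T = z.shape
        adj = np.zeros((n, n), dtype=bool)
        for i in range(n):
            for j in range(i + 1, n):
                best = max(abs(np.dot(z[a][: T - k], z[b][k:])) / (T - k)
                           for k in range(max_lag + 1)
                           for (a, b) in ((i, j), (j, i)))  # both lag directions
                adj[i, j] = adj[j, i] = best > threshold
        return adj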

  21. Visualization of group inference data in functional neuroimaging.

    PubMed

    Gläscher, Jan

    2009-01-01

    While thresholded statistical parametric maps can convey an accurate account of the location and spatial extent of an effect in functional neuroimaging studies, their use is somewhat limited for characterizing more complex experimental effects, such as interactions in a factorial design. The resulting necessity for plotting the underlying data has long been recognized. Statistical Parametric Mapping (SPM) is a widely used software package for analyzing functional neuroimaging data that offers a variety of options for visualizing data from first level analyses. However, nowadays, the thrust of the statistical inference lies at the second level, thus allowing population inference. Unfortunately, the options for visualizing data from second level analyses are quite sparse. rfxplot is a new toolbox designed to alleviate this problem by providing a comprehensive array of options for plotting data from within second level analyses in SPM. These include graphs of average effect sizes (across subjects), averaged fitted responses and event-related blood oxygen level-dependent (BOLD) time courses. All data are retrieved from the underlying first level analyses and voxel selection can be tailored to the maximum effect in each subject within a defined search volume. All plots can be easily configured via a graphical user interface as well as non-interactively via a script. The large variety of plot options renders rfxplot suitable both for data exploration and for producing high-quality figures for publications. PMID:19140033

  22. Explanation and inference: mechanistic and functional explanations guide property generalization

    PubMed Central

    Lombrozo, Tania; Gwynne, Nicholas Z.

    2014-01-01

    The ability to generalize from the known to the unknown is central to learning and inference. Two experiments explore the relationship between how a property is explained and how that property is generalized to novel species and artifacts. The experiments contrast the consequences of explaining a property mechanistically, by appeal to parts and processes, with the consequences of explaining the property functionally, by appeal to functions and goals. The findings suggest that properties that are explained functionally are more likely to be generalized on the basis of shared functions, with a weaker relationship between mechanistic explanations and generalization on the basis of shared parts and processes. The influence of explanation type on generalization holds even though all participants are provided with the same mechanistic and functional information, and whether an explanation type is freely generated (Experiment 1), experimentally provided (Experiment 2), or experimentally induced (Experiment 2). The experiments also demonstrate that explanations and generalizations of a particular type (mechanistic or functional) can be experimentally induced by providing sample explanations of that type, with a comparable effect when the sample explanations come from the same domain or from a different domain. These results suggest that explanations serve as a guide to generalization, and contribute to a growing body of work supporting the value of distinguishing mechanistic and functional explanations. PMID:25309384

  23. MR image denoising method for brain surface 3D modeling

    NASA Astrophysics Data System (ADS)

    Zhao, De-xin; Liu, Peng-jie; Zhang, De-gan

    2014-11-01

    Three-dimensional (3D) modeling of medical images is a critical part of surgical simulation. In this paper, we focus on denoising magnetic resonance (MR) images for brain model reconstruction and develop a practical solution. We attempt to remove the noise in the MR imaging signal while preserving the image characteristics. A wavelet-based adaptive curve shrinkage function is presented in a spherical coordinate system. Comparative experiments show that the denoising method preserves image details better and enhances the coefficients of contours. Using these denoised images, a 3D visualization of the brain is produced through a surface triangle mesh model, which demonstrates the effectiveness of the proposed method.

  24. Denoising ECG signal based on ensemble empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Zhi-dong, Zhao; Liu, Juan; Wang, Sheng-tao

    2011-10-01

    The electrocardiogram (ECG) has been used extensively for the detection of heart disease. Frequently the signal is corrupted by various kinds of noise, such as muscle noise, electromyogram (EMG) interference, and instrument noise. In this paper, a new ECG denoising method is proposed based on the recently developed ensemble empirical mode decomposition (EEMD). The noisy ECG signal is decomposed into a series of intrinsic mode functions (IMFs). The statistically significant information content is built from the empirical energy model of the IMFs. Noisy ECG signals collected from clinical recordings were processed using the method. The results show that, in contrast with traditional methods, the novel denoising method achieves optimal denoising of the ECG signal.

  25. Medical-Legal Inferences From Functional Neuroimaging Evidence.

    PubMed

    Mayberg

    1996-07-01

    Positron emission (PET) and single-photon emission tomography (SPECT) are validated functional imaging techniques for the in vivo measurement of many neurophysiological and neurochemical parameters. Research studies of patients with a broad range of neurological and psychiatric illnesses have been published. Reproducible and specific patterns of altered cerebral blood flow and glucose metabolism, however, have been demonstrated and confirmed for only a limited number of specific illnesses. The association of functional scan patterns with specific deficits is less conclusive. Correlations of regional abnormalities with clinical symptoms such as motor weakness, aphasia, and visual spatial dysfunction are the most reproducible but are more poorly localized than lesion-deficit studies would suggest. Findings are even less consistent for nonlocalizing behavioral symptoms such as memory difficulties, poor concentration, irritability, or chronic pain, and no reliable patterns have been demonstrated. In a forensic context, homicidal and sadistic tendencies, aberrant sexual drive, violent impulsivity, psychopathic and sociopathic personality traits, as well as impaired judgement and poor insight, have no known PET or SPECT patterns, and their presence in an individual with any PET or SPECT scan finding cannot be inferred or concluded. Furthermore, the reliable prediction of any specific neurological, psychiatric, or behavioral deficits from specific scan findings has not been demonstrated. Unambiguous results from experiments designed to specifically examine the causative relationships between regional brain dysfunction and these types of complex behaviors are needed before any introduction of functional scans into the courts can be considered scientifically justified or legally admissible. PMID:10320420

  26. Astronomical image denoising using dictionary learning

    NASA Astrophysics Data System (ADS)

    Beckouche, S.; Starck, J. L.; Fadili, J.

    2013-08-01

    Astronomical images suffer a constant presence of multiple defects that are consequences of the atmospheric conditions and of the intrinsic properties of the acquisition equipment. One of the most frequent defects in astronomical imaging is the presence of additive noise which makes a denoising step mandatory before processing data. During the last decade, a particular modeling scheme, based on sparse representations, has drawn the attention of an ever growing community of researchers. Sparse representations offer a promising framework to many image and signal processing tasks, especially denoising and restoration applications. At first, the harmonics, wavelets and similar bases, and overcomplete representations have been considered as candidate domains to seek the sparsest representation. A new generation of algorithms, based on data-driven dictionaries, evolved rapidly and compete now with the off-the-shelf fixed dictionaries. Although designing a dictionary relies on guessing the representative elementary forms and functions, the framework of dictionary learning offers the possibility of constructing the dictionary using the data themselves, which provides us with a more flexible setup to sparse modeling and allows us to build more sophisticated dictionaries. In this paper, we introduce the centered dictionary learning (CDL) method and we study its performance for astronomical image denoising. We show how CDL outperforms wavelet or classic dictionary learning denoising techniques on astronomical images, and we give a comparison of the effects of these different algorithms on the photometry of the denoised images. The current version of the code is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/556/A132

  27. Improved Rotating Kernel Transformation Based Contourlet Domain Image Denoising Framework

    PubMed Central

    Guo, Qing; Dong, Fangmin; Ren, Xuhong; Feng, Shiyu; Gao, Bruce Zhi

    2016-01-01

    A contourlet-domain image denoising framework based on a novel Improved Rotating Kernel Transformation is proposed, in which the differences between subbands in the contourlet domain are taken into account. In detail: (1) a novel Improved Rotating Kernel Transformation (IRKT) is proposed to calculate the direction statistic of the image; the validity of the IRKT is verified by comparing the extracted edge information with that of a state-of-the-art edge detection algorithm. (2) The direction statistic represents the difference between subbands and is introduced, in the form of weights, into threshold-function-based contourlet-domain denoising approaches to obtain the novel framework. The proposed framework is used to improve the contourlet soft-thresholding (CTSoft) and contourlet bivariate-thresholding (CTB) algorithms. Denoising results on conventional test images and Optical Coherence Tomography (OCT) medical images show that the proposed methods improve the existing contourlet-based thresholding denoising algorithms, especially for the medical images. PMID:27148597

  28. Image denoising filter based on patch-based difference refinement

    NASA Astrophysics Data System (ADS)

    Park, Sang Wook; Kang, Moon Gi

    2012-06-01

    In the denoising literature, much research has been based on the nonlocal means (NLM) filter, with many variations and improvements to the weight function and parameter optimization. Here, an NLM filter with patch-based difference (PBD) refinement is presented. PBD refinement, the weighted average of the PBD values, is performed with respect to the difference images at all locations in a refinement kernel. With refined and denoised PBD values, a pattern-adaptive smoothing threshold and noise-suppressed NLM filter weights are calculated. Owing to the refinement of the PBD values, the patterns are divided into flat regions and texture regions by comparing the sorted values in the PBD domain to a threshold value that includes the noise standard deviation. Two different smoothing thresholds are then utilized, one for each region, and the NLM filter is applied last. Experimental results of the proposed scheme are shown in comparison with several state-of-the-art NLM-based denoising methods.
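
    A minimal numpy sketch of the plain NLM baseline that the proposed scheme refines; the PBD refinement and region-adaptive thresholds are not reproduced. Patch size, search size, and the decay parameter h are illustrative, and the explicit loops favor clarity over speed.

    import numpy as np

    def nlm(img, patch=3, search=7, h=10.0):
        pr, sr = patch // 2, search // 2
        pad = pr + sr
        p = np.pad(img.astype(np.float64), pad, mode="reflect")
        out = np.zeros(img.shape, dtype=np.float64)
        for y in range(img.shape[0]):
            for x in range(img.shape[1]):
                cy, cx = y + pad, x + pad
                ref = p[cy - pr:cy + pr + 1, cx - pr:cx + pr + 1]
                num = den = 0.0
                for dy in range(-sr, sr + 1):
                    for dx in range(-sr, sr + 1):
                        ny, nx = cy + dy, cx + dx
                        cand = p[ny - pr:ny + pr + 1, nx - pr:nx + pr + 1]
                        w = np.exp(-np.mean((ref - cand) ** 2) / h ** 2)  # patch weight
                        num += w * p[ny, nx]
                        den += w
                out[y, x] = num / den
        return out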

  29. Nonlocal Markovian models for image denoising

    NASA Astrophysics Data System (ADS)

    Salvadeo, Denis H. P.; Mascarenhas, Nelson D. A.; Levada, Alexandre L. M.

    2016-01-01

    Currently, the state-of-the-art methods for image denoising are patch-based approaches. Redundant information present in nonlocal regions (patches) of the image is considered for better image modeling, resulting in an improved quality of filtering. In this respect, nonlocal Markov random field (MRF) models are proposed by redefining the energy functions of classical MRF models to adopt a nonlocal approach. With the new energy functions, the pairwise pixel interaction is weighted according to the similarities between the patches corresponding to each pair. Also, a maximum pseudolikelihood estimation of the spatial dependency parameter (β) for these models is presented here. For evaluating this proposal, these models are used as an a priori model in a maximum a posteriori estimation to denoise additive white Gaussian noise in images. Finally, results display a notable improvement in both quantitative and qualitative terms in comparison with the local MRFs.

  30. Constructing a Flexible Likelihood Function for Spectroscopic Inference

    NASA Astrophysics Data System (ADS)

    Czekala, Ian; Andrews, Sean M.; Mandel, Kaisey S.; Hogg, David W.; Green, Gregory M.

    2015-10-01

    We present a modular, extensible likelihood framework for spectroscopic inference based on synthetic model spectra. The subtraction of an imperfect model from a continuously sampled spectrum introduces covariance between adjacent datapoints (pixels) into the residual spectrum. For the high signal-to-noise data with large spectral range that is commonly employed in stellar astrophysics, that covariant structure can lead to dramatically underestimated parameter uncertainties (and, in some cases, biases). We construct a likelihood function that accounts for the structure of the covariance matrix, utilizing the machinery of Gaussian process kernels. This framework specifically addresses the common problem of mismatches in model spectral line strengths (with respect to data) due to intrinsic model imperfections (e.g., in the atomic/molecular databases or opacity prescriptions) by developing a novel local covariance kernel formalism that identifies and self-consistently downweights pathological spectral line “outliers.” By fitting many spectra in a hierarchical manner, these local kernels provide a mechanism to learn about and build data-driven corrections to synthetic spectral libraries. An open-source software implementation of this approach is available at http://iancze.github.io/Starfish, including a sophisticated probabilistic scheme for spectral interpolation when using model libraries that are sparsely sampled in the stellar parameters. We demonstrate some salient features of the framework by fitting the high-resolution V-band spectrum of WASP-14, an F5 dwarf with a transiting exoplanet, and the moderate-resolution K-band spectrum of Gliese 51, an M5 field dwarf.
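
    A minimal sketch of the central construction under stated assumptions: a multivariate-Gaussian log-likelihood for the residual spectrum whose covariance matrix combines a per-pixel noise term with a stationary squared-exponential kernel modeling correlated residuals between neighbouring pixels. The paper's local line-outlier kernels are not reproduced, and all hyperparameters shown are illustrative.

    import numpy as np
    from scipy.stats import multivariate_normal

    def spectrum_loglike(wavelengths, data, model, sigma=0.01, amp=0.02, ell=0.5):
        resid = data - model                              # residual spectrum
        d = wavelengths[:, None] - wavelengths[None, :]   # pairwise separations
        cov = amp ** 2 * np.exp(-0.5 * (d / ell) ** 2)    # global covariance kernel
        cov += sigma ** 2 * np.eye(len(data))             # per-pixel noise floor
        return multivariate_normal.logpdf(resid, mean=np.zeros(len(data)), cov=cov)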

  31. A New Adaptive Image Denoising Method

    NASA Astrophysics Data System (ADS)

    Biswas, Mantosh; Om, Hari

    2016-03-01

    In this paper, a new adaptive image denoising method is proposed that follows the soft-thresholding technique. In our method, a new threshold function is also proposed, determined from various combinations of the noise level, noise-free signal variance, subband size, and decomposition level. It is simple and adaptive, as it depends on data-driven parameter estimation in each subband. The state-of-the-art denoising methods, viz. VisuShrink, SureShrink, BayesShrink, WIDNTF, and IDTVWT, are not able to modify the coefficients efficiently enough to provide good image quality. Our method removes noise from the noisy image significantly and provides better visual quality.

  32. CT reconstruction via denoising approximate message passing

    NASA Astrophysics Data System (ADS)

    Perelli, Alessandro; Lexa, Michael A.; Can, Ali; Davies, Mike E.

    2016-05-01

    In this paper, we adapt and apply a compressed sensing based reconstruction algorithm to the problem of computed tomography reconstruction for luggage inspection. Specifically, we propose a variant of the denoising generalized approximate message passing (D-GAMP) algorithm and compare its performance to the performance of traditional filtered back projection and to a penalized weighted least squares (PWLS) based reconstruction method. D-GAMP is an iterative algorithm that at each iteration estimates the conditional probability of the image given the measurements and employs a non-linear "denoising" function which implicitly imposes an image prior. Results on real baggage show that D-GAMP is well-suited to limited-view acquisitions.

  33. Structure-based inference of molecular functions of proteins of unknown function from Berkeley Structural Genomics Center

    SciTech Connect

    Kim, Sung-Hou; Shin, Dong Hae; Hou, Jingtong; Chandonia, John-Marc; Das, Debanu; Choi, In-Geol; Kim, Rosalind; Kim, Sung-Hou

    2007-09-02

    Advances in sequence genomics have resulted in the accumulation of a huge number of protein sequences derived from genome sequences. However, the functions of a large portion of them cannot be inferred with current methods for detecting sequence homology to proteins of known function. Three-dimensional structure can play an important role in inferring the molecular function (physical and chemical function) of a protein of unknown function. Structural genomics centers worldwide have been determining many 3-D structures of proteins of unknown function, and their possible molecular functions have been inferred from these structures. Combined with bioinformatics and enzymatic assay tools, the acceleration of protein structure determination through high-throughput pipelines enables the rapid functional annotation of a large fraction of hypothetical proteins. We present a brief summary of the process used at the Berkeley Structural Genomics Center to infer molecular functions of proteins of unknown function.

  34. Research and Implementation of Heart Sound Denoising

    NASA Astrophysics Data System (ADS)

    Liu, Feng; Wang, Yutai; Wang, Yanxiang

    Heart sound is one of the most important physiological signals, but the process of acquiring it can be interfered with by many external factors. The heart sound is a weak electrical signal, and even weak external noise may lead to misjudgment of the pathological and physiological information in the signal, and thus to misdiagnosis. As a result, it is essential to remove the noise mixed into the heart sound. In this paper, a systematic study and analysis of heart sound denoising based on MATLAB is presented. The study first uses MATLAB's signal processing functions to transform noisy heart sound signals into the wavelet domain through wavelet transform and decomposes these signals at multiple levels. Then, for the detail coefficients, soft thresholding is applied using wavelet threshold denoising to eliminate noise, so that signal denoising is significantly improved. The reconstructed signals are obtained by stepwise coefficient reconstruction from the processed detail coefficients. Lastly, 50 Hz power-frequency and 35 Hz electromechanical interference signals are eliminated using a notch filter.
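
    A minimal SciPy sketch of the final step described above: removing 50 Hz power-frequency interference with a notch filter. The sampling rate and quality factor are illustrative assumptions; the wavelet-thresholding stage is analogous to the wavelet sketches in earlier entries.

    from scipy.signal import iirnotch, filtfilt

    def remove_mains(heart_sound, fs=2000.0, f0=50.0, q=30.0):
        b, a = iirnotch(f0, q, fs=fs)       # narrow notch centered at f0 Hz
        return filtfilt(b, a, heart_sound)  # zero-phase filtering avoids lag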

  35. Birdsong Denoising Using Wavelets.

    PubMed

    Priyadarshani, Nirosha; Marsland, Stephen; Castro, Isabel; Punchihewa, Amal

    2016-01-01

    Automatic recording of birdsong is becoming the preferred way to monitor and quantify bird populations worldwide. Programmable recorders allow recordings to be obtained at all times of day and year for extended periods of time. Consequently, there is a critical need for robust automated birdsong recognition. One prominent obstacle to achieving this is low signal to noise ratio in unattended recordings. Field recordings are often very noisy: birdsong is only one component in a recording, which also includes noise from the environment (such as wind and rain), other animals (including insects), and human-related activities, as well as noise from the recorder itself. We describe a method of denoising using a combination of the wavelet packet decomposition and band-pass or low-pass filtering, and present experiments that demonstrate an order of magnitude improvement in noise reduction over natural noisy bird recordings. PMID:26812391

  36. Study on an improved wavelet shift-invariant threshold denoising for pulsed laser induced glucose photoacoustic signals

    NASA Astrophysics Data System (ADS)

    Wang, Zhengzi; Ren, Zhong; Liu, Guodong

    2015-10-01

    Noninvasive measurement of blood glucose concentration has become a research hotspot worldwide because it is convenient, rapid, and non-destructive. Blood glucose monitoring based on the photoacoustic technique has attracted much attention because the detected signals are ultrasonic rather than optical. During acquisition, however, the glucose photoacoustic signals are inevitably polluted by factors such as the pulsed laser, electronic noise, and environmental noise. These disturbances impact the measurement accuracy of the glucose concentration, so denoising the glucose photoacoustic signals is a key task. In this paper, a wavelet shift-invariant threshold denoising method is improved, and a novel wavelet threshold function is proposed. For the novel threshold function, two threshold values and two different factors are set; the function is continuous with high-order derivatives and can be regarded as a compromise between wavelet soft-threshold and hard-threshold denoising. Simulation results illustrate that, compared with other wavelet threshold denoising methods, the improved shift-invariant threshold denoising achieves a higher signal-to-noise ratio (SNR) and a smaller root-mean-square error (RMSE), and has a better overall denoising effect. Therefore, the improved method has potential value for denoising glucose photoacoustic signals.

  37. Adaptively Tuned Iterative Low Dose CT Image Denoising

    PubMed Central

    Hashemi, SayedMasoud; Paul, Narinder S.; Beheshti, Soosan; Cobbold, Richard S. C.

    2015-01-01

    Improving image quality is a critical objective in low dose computed tomography (CT) imaging and is the primary focus of CT image denoising. State-of-the-art CT denoising algorithms are mainly based on iterative minimization of an objective function, in which the performance is controlled by regularization parameters. To achieve the best results, these should be chosen carefully. However, the parameter selection is typically performed in an ad hoc manner, which can cause the algorithms to converge slowly or become trapped in a local minimum. To overcome these issues a noise confidence region evaluation (NCRE) method is used, which evaluates the denoising residuals iteratively and compares their statistics with those produced by additive noise. It then updates the parameters at the end of each iteration to achieve a better match to the noise statistics. By combining NCRE with the fundamentals of block matching and 3D filtering (BM3D) approach, a new iterative CT image denoising method is proposed. It is shown that this new denoising method improves the BM3D performance in terms of both the mean square error and a structural similarity index. Moreover, simulations and patient results show that this method preserves the clinically important details of low dose CT images together with a substantial noise reduction. PMID:26089972

  38. Bayesian Inference for Functional Dynamics Exploring in fMRI Data.

    PubMed

    Guo, Xuan; Liu, Bing; Chen, Le; Chen, Guantao; Pan, Yi; Zhang, Jing

    2016-01-01

    This paper aims to review state-of-the-art Bayesian-inference-based methods applied to functional magnetic resonance imaging (fMRI) data. Particularly, we focus on one specific long-standing challenge in the computational modeling of fMRI datasets: how to effectively explore typical functional interactions from fMRI time series and the corresponding boundaries of temporal segments. Bayesian inference is a method of statistical inference which has been shown to be a powerful tool to encode dependence relationships among the variables with uncertainty. Here we provide an introduction to a group of Bayesian-inference-based methods for fMRI data analysis, which were designed to detect magnitude or functional connectivity change points and to infer their functional interaction patterns based on corresponding temporal boundaries. We also provide a comparison of three popular Bayesian models, that is, Bayesian Magnitude Change Point Model (BMCPM), Bayesian Connectivity Change Point Model (BCCPM), and Dynamic Bayesian Variable Partition Model (DBVPM), and give a summary of their applications. We envision that more delicate Bayesian inference models will be emerging and play increasingly important roles in modeling brain functions in the years to come. PMID:27034708

  20. Bayesian Inference for Functional Dynamics Exploring in fMRI Data

    PubMed Central

    Guo, Xuan; Liu, Bing; Chen, Le; Chen, Guantao

    2016-01-01

    This paper aims to review state-of-the-art Bayesian-inference-based methods applied to functional magnetic resonance imaging (fMRI) data. In particular, we focus on one specific long-standing challenge in the computational modeling of fMRI datasets: how to effectively explore typical functional interactions from fMRI time series and the corresponding boundaries of temporal segments. Bayesian inference is a method of statistical inference that has been shown to be a powerful tool for encoding dependence relationships among variables with uncertainty. Here we provide an introduction to a group of Bayesian-inference-based methods for fMRI data analysis, which were designed to detect magnitude or functional connectivity change points and to infer their functional interaction patterns based on the corresponding temporal boundaries. We also provide a comparison of three popular Bayesian models, that is, the Bayesian Magnitude Change Point Model (BMCPM), the Bayesian Connectivity Change Point Model (BCCPM), and the Dynamic Bayesian Variable Partition Model (DBVPM), and give a summary of their applications. We envision that more refined Bayesian inference models will emerge and play increasingly important roles in modeling brain functions in the years to come. PMID:27034708

  1. [DR image denoising based on Laplace-Impact mixture model].

    PubMed

    Feng, Guo-Dong; He, Xiang-Bin; Zhou, He-Qin

    2009-07-01

    A novel DR image denoising algorithm based on a Laplace-Impact mixture model in the dual-tree complex wavelet domain is proposed in this paper. It uses local variance to build a probability density function of the Laplace-Impact model that fits the distribution of high-frequency subband coefficients well. Within the Laplace-Impact framework, this paper describes a novel method for image denoising based on designing minimum mean squared error (MMSE) estimators, which relies on the strong correlation between amplitudes of nearby coefficients. The experimental results show that the algorithm proposed in this paper outperforms several state-of-the-art denoising methods, such as Bayes least squares with a Gaussian scale mixture and the Laplace prior. PMID:19938519

  2. Role of Utility and Inference in the Evolution of Functional Information

    PubMed Central

    Sharov, Alexei A.

    2009-01-01

    Functional information means an encoded network of functions in living organisms from molecular signaling pathways to an organism’s behavior. It is represented by two components: code and an interpretation system, which together form a self-sustaining semantic closure. Semantic closure allows some freedom between components because small variations of the code are still interpretable. The interpretation system consists of inference rules that control the correspondence between the code and the function (phenotype) and determines the shape of the fitness landscape. The utility factor operates at multiple time scales: short-term selection drives evolution towards higher survival and reproduction rate within a given fitness landscape, and long-term selection favors those fitness landscapes that support adaptability and lead to evolutionary expansion of certain lineages. Inference rules make short-term selection possible by shaping the fitness landscape and defining possible directions of evolution, but they are under control of the long-term selection of lineages. Communication normally occurs within a set of agents with compatible interpretation systems, which I call communication system. Functional information cannot be directly transferred between communication systems with incompatible inference rules. Each biological species is a genetic communication system that carries unique functional information together with inference rules that determine evolutionary directions and constraints. This view of the relation between utility and inference can resolve the conflict between realism/positivism and pragmatism. Realism overemphasizes the role of inference in evolution of human knowledge because it assumes that logic is embedded in reality. Pragmatism substitutes usefulness for truth and therefore ignores the advantage of inference. The proposed concept of evolutionary pragmatism rejects the idea that logic is embedded in reality; instead, inference rules are

  3. Craniofacial biomechanics and functional and dietary inferences in hominin paleontology.

    PubMed

    Grine, Frederick E; Judex, Stefan; Daegling, David J; Ozcivici, Engin; Ungar, Peter S; Teaford, Mark F; Sponheimer, Matt; Scott, Jessica; Scott, Robert S; Walker, Alan

    2010-04-01

    Finite element analysis (FEA) is a potentially powerful tool by which the mechanical behaviors of different skeletal and dental designs can be investigated, and, as such, has become increasingly popular for biomechanical modeling and inferring the behavior of extinct organisms. However, the use of FEA to extrapolate from characterization of the mechanical environment to questions of trophic or ecological adaptation in a fossil taxon is both challenging and perilous. Here, we consider the problems and prospects of FEA applications in paleoanthropology, and provide a critical examination of one such study of the trophic adaptations of Australopithecus africanus. This particular FEA is evaluated with regard to 1) the nature of the A. africanus cranial composite, 2) model validation, 3) decisions made with respect to model parameters, 4) adequacy of data presentation, and 5) interpretation of the results. Each suggests that the results reflect methodological decisions as much as any underlying biological significance. Notwithstanding these issues, this model yields predictions that follow from the posited emphasis on premolar use by A. africanus. These predictions are tested with data from the paleontological record, including a phylogenetically-informed consideration of relative premolar size, and postcanine microwear fabrics and antemortem enamel chipping. In each instance, the data fail to conform to predictions from the model. This model thus serves to emphasize the need for caution in the application of FEA in paleoanthropological enquiry. Theoretical models can be instrumental in the construction of testable hypotheses; but ultimately, the studies that serve to test these hypotheses - rather than data from the models - should remain the source of information pertaining to hominin paleobiology and evolution. PMID:20227747

  4. Iterative denoising of ghost imaging.

    PubMed

    Yao, Xu-Ri; Yu, Wen-Kai; Liu, Xue-Feng; Li, Long-Zhen; Li, Ming-Fei; Wu, Ling-An; Zhai, Guang-Jie

    2014-10-01

    We present a new technique to denoise ghost imaging (GI) in which conventional intensity correlation GI and an iteration process have been combined to give an accurate estimate of the actual noise affecting image quality. The blurring influence of the speckle areas in the beam is reduced in the iteration by setting a threshold. It is shown that with an appropriate choice of threshold value, the quality of the iterative GI reconstructed image is much better than that of differential GI for the same number of measurements. This denoising method thus offers a very effective approach to promote the implementation of GI in real applications. PMID:25322001
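
    The conventional intensity-correlation GI estimate that such an iteration starts from can be written compactly; the paper's threshold-based iterative refinement is not reproduced in this sketch.

```python
import numpy as np

def gi_reconstruct(speckles, bucket):
    """Conventional intensity-correlation ghost imaging estimate.

    speckles: (K, H, W) reference speckle patterns I_k
    bucket:   (K,) bucket detector values B_k
    The image estimate is the covariance <I_k B_k> - <I_k><B_k>.
    """
    speckles = np.asarray(speckles, dtype=float)
    bucket = np.asarray(bucket, dtype=float)
    return (speckles * bucket[:, None, None]).mean(axis=0) \
        - speckles.mean(axis=0) * bucket.mean()
```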

  5. Photogrammetric DSM denoising

    NASA Astrophysics Data System (ADS)

    Nex, F.; Gerke, M.

    2014-08-01

    Image matching techniques can nowadays provide very dense point clouds, and they are often considered a valid alternative to LiDAR point clouds. However, photogrammetric point clouds are often characterized by a higher level of random noise compared to LiDAR data and by the presence of large outliers. These problems constitute a limitation in the practical use of photogrammetric data for many applications, and an effective way to enhance the generated point cloud has yet to be found. In this paper we concentrate on the restoration of Digital Surface Models (DSM) computed from dense image matching point clouds. A photogrammetric DSM, i.e. a 2.5D representation of the surface, is still one of the major products derived from point clouds. Four different algorithms devoted to DSM denoising are presented: a standard median filter approach, a bilateral filter, a variational approach (TGV: Total Generalized Variation), as well as a newly developed algorithm, which is embedded into a Markov Random Field (MRF) framework and optimized through graph-cuts. The ability of each algorithm to recover the original DSM has been quantitatively evaluated. To do so, a synthetic DSM was generated and different typologies of noise were added to mimic the typical errors of photogrammetric DSMs. The evaluation reveals that standard filters like the median and edge-preserving smoothing through a bilateral filter cannot sufficiently remove the typical errors occurring in a photogrammetric DSM. The TGV-based approach removes random noise much better, but large areas with outliers still remain. Our own method, which explicitly models the degradation properties of such DSMs, outperforms the others in all aspects.
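
    A minimal version of the bilateral filter baseline evaluated in the paper, applied to a DSM stored as a height grid, might look like the following (parameter names and defaults are illustrative, not the paper's settings).

```python
import numpy as np

def bilateral_filter_dsm(dsm, radius=3, sigma_s=2.0, sigma_r=0.5):
    """Simple bilateral filter for a 2.5D DSM stored as a height grid.

    Heights that differ strongly from the centre pixel get a small range
    weight, so building edges are preserved while random noise is
    smoothed. A plain sketch, not the paper's TGV or MRF formulations.
    """
    h, w = dsm.shape
    pad = np.pad(dsm, radius, mode='edge')
    out = np.empty_like(dsm, dtype=float)
    # Precompute the spatial Gaussian kernel once.
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    w_s = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            w_r = np.exp(-(patch - dsm[i, j])**2 / (2 * sigma_r**2))
            wgt = w_s * w_r
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out
```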

  6. Generalised partition functions: inferences on phase space distributions

    NASA Astrophysics Data System (ADS)

    Treumann, Rudolf A.; Baumjohann, Wolfgang

    2016-06-01

    It is demonstrated that the statistical mechanical partition function can be used to construct various different forms of phase space distributions. This indicates that its structure is not restricted to the Gibbs-Boltzmann factor prescription, which is based on counting statistics. With the widely used replacement of the Boltzmann factor by a generalised Lorentzian (also known as the q-deformed exponential function, where κ = 1/|q - 1|, with κ, q ∈ R), both the kappa-Bose and kappa-Fermi partition functions are obtained in quite a straightforward way, from which the conventional Bose and Fermi distributions follow for κ → ∞. For κ ≠ ∞ these are subject to the restriction that they can be used only at temperatures far from zero. They thus, as shown earlier, have little value for quantum physics. This is reasonable, because physical κ systems imply strong correlations which are absent at zero temperature, where apart from stochastics all dynamical interactions are frozen. In the classical large-temperature limit one obtains physically reasonable κ distributions which depend on energy and momentum, respectively, as well as on the chemical potential. Looking for other functional dependencies, we examine whether Bessel functions can be used to obtain valid distributions. Again, and for the same reason, no Fermi and Bose distributions exist in the low-temperature limit. However, a classical Bessel-Boltzmann distribution can be constructed, which is a Bessel-modified Lorentzian distribution. Whether it makes any physical sense remains an open question; this is not investigated here. The choice of Bessel functions is motivated solely by their convergence properties and not by reference to any physical demands. This result suggests that the Gibbs-Boltzmann partition function is fundamental not only to Gibbs-Boltzmann statistics but also to a large class of generalised Lorentzian distributions, as well as to the corresponding nonextensive statistical mechanics.
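
    As a minimal formula sketch of the replacement described in the abstract (exponent conventions vary between papers, e.g. κ versus κ plus a small integer, so this is only the generic form):

```latex
e^{-\beta\epsilon}\;\longrightarrow\;
\left(1+\frac{\beta\epsilon}{\kappa}\right)^{-\kappa},
\qquad \kappa=\frac{1}{|q-1|},
\qquad
\lim_{\kappa\to\infty}\left(1+\frac{x}{\kappa}\right)^{-\kappa}=e^{-x},
```

    so the Gibbs-Boltzmann factor, and with it the conventional Bose and Fermi distributions, is recovered in the κ → ∞ limit.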

  7. On the functional equivalence of fuzzy inference systems and spline-based networks.

    PubMed

    Hunt, K J; Haas, R; Brown, M

    1995-06-01

    The conditions under which spline-based networks are functionally equivalent to the Takagi-Sugeno-model of fuzzy inference are formally established. We consider a generalized form of basis function network whose basis functions are splines. The result admits a wide range of fuzzy membership functions which are commonly encountered in fuzzy systems design. We use the theoretical background of functional equivalence to develop a hybrid fuzzy-spline net for inverse dynamic modeling of a hydraulically driven robot manipulator. PMID:7496588

  8. Electrocardiogram signal denoising based on a new improved wavelet thresholding.

    PubMed

    Han, Guoqiang; Xu, Zhijun

    2016-08-01

    Good quality electrocardiogram (ECG) signals are utilized by physicians for the interpretation and identification of physiological and pathological phenomena. In general, ECG signals may be mixed with various noises, such as baseline wander, power line interference, and electromagnetic interference, during the gathering and recording process. As ECG signals are non-stationary physiological signals, the wavelet transform has proven to be an effective tool for discarding noise from corrupted signals. A new compromise threshold function, a sigmoid-function-based thresholding scheme, is adopted for processing ECG signals. Compared with hard/soft thresholding and other existing thresholding functions, the new algorithm has many advantages for noise reduction in ECG signals. It overcomes the discontinuity at ±T of hard thresholding and reduces the fixed deviation of soft thresholding. The improved wavelet thresholding denoising is shown to be more efficient than existing algorithms for ECG signal denoising. The signal-to-noise ratio, mean square error, and percent root mean square difference are calculated as quantitative tools to verify the denoising performance. The experimental results reveal that the P, Q, R, and S waves of the ECG signals after denoising coincide with those of the original ECG signals when the proposed method is employed. PMID:27587134
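
    The three quantitative tools named here have standard definitions, which a short helper can compute; this is a generic sketch, not code from the paper.

```python
import numpy as np

def ecg_denoise_metrics(clean, denoised):
    """SNR (dB), MSE and PRD for a denoised ECG against the clean signal.

    These are the standard definitions of the three metrics named in the
    abstract: signal-to-noise ratio, mean square error, and percent root
    mean square difference.
    """
    clean = np.asarray(clean, dtype=float)
    err = clean - np.asarray(denoised, dtype=float)
    mse = np.mean(err**2)
    snr_db = 10.0 * np.log10(np.sum(clean**2) / np.sum(err**2))
    prd = 100.0 * np.sqrt(np.sum(err**2) / np.sum(clean**2))
    return snr_db, mse, prd
```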

  9. Electrocardiogram signal denoising based on a new improved wavelet thresholding

    NASA Astrophysics Data System (ADS)

    Han, Guoqiang; Xu, Zhijun

    2016-08-01

    Good quality electrocardiogram (ECG) signals are utilized by physicians for the interpretation and identification of physiological and pathological phenomena. In general, ECG signals may be mixed with various noises, such as baseline wander, power line interference, and electromagnetic interference, during the gathering and recording process. As ECG signals are non-stationary physiological signals, the wavelet transform has proven to be an effective tool for discarding noise from corrupted signals. A new compromise threshold function, a sigmoid-function-based thresholding scheme, is adopted for processing ECG signals. Compared with hard/soft thresholding and other existing thresholding functions, the new algorithm has many advantages for noise reduction in ECG signals. It overcomes the discontinuity at ±T of hard thresholding and reduces the fixed deviation of soft thresholding. The improved wavelet thresholding denoising is shown to be more efficient than existing algorithms for ECG signal denoising. The signal-to-noise ratio, mean square error, and percent root mean square difference are calculated as quantitative tools to verify the denoising performance. The experimental results reveal that the P, Q, R, and S waves of the ECG signals after denoising coincide with those of the original ECG signals when the proposed method is employed.

  10. Autocorrelation based denoising of manatee vocalizations using the undecimated discrete wavelet transform.

    PubMed

    Gur, Berke M; Niezrecki, Christopher

    2007-07-01

    Recent interest in the West Indian manatee (Trichechus manatus latirostris) vocalizations has been primarily induced by an effort to reduce manatee mortality rates due to watercraft collisions. A warning system based on passive acoustic detection of manatee vocalizations is desired. The success and feasibility of such a system depends on effective denoising of the vocalizations in the presence of high levels of background noise. In the last decade, simple and effective wavelet domain nonlinear denoising methods have emerged as an alternative to linear estimation methods. However, the denoising performance of these methods degrades considerably with decreasing signal-to-noise ratio (SNR), and they are therefore not suited for denoising manatee vocalizations, for which the typical SNR is below 0 dB. Manatee vocalizations possess a strong harmonic content and a slowly decaying autocorrelation function. In this paper, an efficient denoising scheme that exploits both the autocorrelation function of manatee vocalizations and the effectiveness of nonlinear wavelet transform based denoising algorithms is introduced. The suggested wavelet-based denoising algorithm is shown to outperform linear filtering methods, extending the detection range of vocalizations. PMID:17614478
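
    A minimal sketch of denoising with the undecimated (stationary) wavelet transform, using PyWavelets; the SWT is shift-invariant, which the abstract exploits, but the paper's autocorrelation-based processing is replaced here by a generic universal threshold for illustration.

```python
import numpy as np
import pywt

def swt_denoise(signal, wavelet='db8', level=4):
    """Shift-invariant wavelet denoising sketch with the stationary WT."""
    signal = np.asarray(signal, dtype=float)
    n = len(signal)
    pad = (-n) % (2 ** level)                 # SWT needs length % 2**level == 0
    x = np.pad(signal, (0, pad), mode='edge')
    coeffs = pywt.swt(x, wavelet, level=level)
    # Estimate the noise level from the finest detail coefficients (MAD).
    sigma = np.median(np.abs(coeffs[-1][1])) / 0.6745
    t = sigma * np.sqrt(2 * np.log(len(x)))   # universal threshold
    coeffs = [(cA, pywt.threshold(cD, t, mode='soft')) for cA, cD in coeffs]
    return pywt.iswt(coeffs, wavelet)[:n]
```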

  11. Structure and function of the mammalian middle ear. II: Inferring function from structure.

    PubMed

    Mason, Matthew J

    2016-02-01

    Anatomists and zoologists who study middle ear morphology are often interested to know what the structure of an ear can reveal about the auditory acuity and hearing range of the animal in question. This paper represents an introduction to middle ear function targeted towards biological scientists with little experience in the field of auditory acoustics. Simple models of impedance matching are first described, based on the familiar concepts of the area and lever ratios of the middle ear. However, using the Mongolian gerbil Meriones unguiculatus as a test case, it is shown that the predictions made by such 'ideal transformer' models are generally not consistent with measurements derived from recent experimental studies. Electrical analogue models represent a better way to understand some of the complex, frequency-dependent responses of the middle ear: these have been used to model the effects of middle ear subcavities, and the possible function of the auditory ossicles as a transmission line. The concepts behind such models are explained here, again aimed at those with little background knowledge. Functional inferences based on middle ear anatomy are more likely to be valid at low frequencies. Acoustic impedance at low frequencies is dominated by compliance; expanded middle ear cavities, found in small desert mammals including gerbils, jerboas and the sengi Macroscelides, are expected to improve low-frequency sound transmission, as long as the ossicular system is not too stiff. PMID:26100915

  12. Locally Based Kernel PLS Regression De-noising with Application to Event-Related Potentials

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Tino, Peter

    2002-01-01

    Our approach exploits the close relation between signal de-noising and regression problems that deal with estimating functions reflecting the dependency between a set of inputs and dependent outputs corrupted by some level of noise.

  13. Crustal structure beneath northeast India inferred from receiver function modeling

    NASA Astrophysics Data System (ADS)

    Borah, Kajaljyoti; Bora, Dipok K.; Goyal, Ayush; Kumar, Raju

    2016-09-01

    We estimated the crustal shear velocity structure beneath ten broadband seismic stations of northeast India by using the H-Vp/Vs stacking method and a non-linear direct search approach, the Neighbourhood Algorithm (NA) technique, followed by joint inversion of Rayleigh wave group velocity and receiver functions calculated from teleseismic earthquake data. Results show significant variations of thickness, shear velocity (Vs), and Vp/Vs ratio in the crust of the study region. The inverted shear wave velocity models show crustal thickness variations of 32-36 km in the Shillong Plateau (north), 36-40 km in the Assam Valley, and ∼44 km in the Lesser Himalaya (south). The average Vp/Vs ratio in the Shillong Plateau is lower (1.73-1.77) than in the Assam Valley and Lesser Himalaya (∼1.80). The average crustal shear velocity beneath the study region varies from 3.4 to 3.5 km/s. The sediment structure beneath the Shillong Plateau and Assam Valley shows a 1-2 km thick sediment layer with low Vs (2.5-2.9 km/s) and a high Vp/Vs ratio (1.8-2.1), while a thicker layer (4 km) with similar Vs and high Vp/Vs (∼2.5) is observed at RUP (Lesser Himalaya). Both the Shillong Plateau and Assam Valley show thick upper and middle crust (10-20 km) and thin (4-9 km) lower crust. The average Vp/Vs ratios suggest that the crust is felsic-to-intermediate beneath the Shillong Plateau and intermediate-to-mafic beneath the Assam Valley. Results show that lower crustal rocks beneath the Shillong Plateau and Assam Valley lie between mafic granulite and mafic garnet granulite.

  14. Empirical mode decomposition based background removal and de-noising in polarization interference imaging spectrometer.

    PubMed

    Zhang, Chunmin; Ren, Wenyi; Mu, Tingkui; Fu, Lili; Jia, Chenling

    2013-02-11

    Based on empirical mode decomposition (EMD), background removal and de-noising procedures for the data taken by the polarization interference imaging spectrometer (PIIS) are implemented. Through numerical simulation, it is shown that the data processing methods are effective. The assumption that the noise mostly exists in the first intrinsic mode function is verified, and the parameters in the EMD thresholding de-noising methods are determined. For comparison, wavelet- and windowed-Fourier-transform-based thresholding de-noising methods are introduced. The de-noised results are evaluated by the SNR, spectral resolution, and peak value of the de-noised spectra. All the methods are used to suppress the effects of Gaussian and Poisson noise. The de-noising efficiency is higher for spectra contaminated by Gaussian noise. The interferogram obtained by the PIIS is processed by the proposed methods, and both the background-free interferogram and the noise-free spectrum are obtained effectively. The adaptive and robust EMD-based methods are effective for background removal and de-noising in the PIIS. PMID:23481716

  15. Pragmatic Inference Abilities in Individuals with Asperger Syndrome or High-Functioning Autism. A Review

    ERIC Educational Resources Information Center

    Loukusa, Soile; Moilanen, Irma

    2009-01-01

    This review summarizes studies involving pragmatic language comprehension and inference abilities in individuals with Asperger syndrome or high-functioning autism. Systematic searches of three electronic databases, selected journals, and reference lists identified 20 studies meeting the inclusion criteria. These studies were evaluated in terms of:…

  16. Dynamic Denoising of Tracking Sequences

    PubMed Central

    Michailovich, Oleg; Tannenbaum, Allen

    2009-01-01

    In this paper, we describe an approach to the problem of simultaneously enhancing image sequences and tracking the objects of interest represented by the latter. The enhancement part of the algorithm is based on Bayesian wavelet denoising, which has been chosen due to its exceptional ability to incorporate diverse a priori information into the process of image recovery. In particular, we demonstrate that, in dynamic settings, useful statistical priors can come both from some reasonable assumptions on the properties of the image to be enhanced as well as from the images that have already been observed before the current scene. Using such priors forms the main contribution of the present paper which is the proposal of the dynamic denoising as a tool for simultaneously enhancing and tracking image sequences. Within the proposed framework, the previous observations of a dynamic scene are employed to enhance its present observation. The mechanism that allows the fusion of the information within successive image frames is Bayesian estimation, while transferring the useful information between the images is governed by a Kalman filter that is used for both prediction and estimation of the dynamics of tracked objects. Therefore, in this methodology, the processes of target tracking and image enhancement “collaborate” in an interlacing manner, rather than being applied separately. The dynamic denoising is demonstrated on several examples of SAR imagery. The results demonstrated in this paper indicate a number of advantages of the proposed dynamic denoising over “static” approaches, in which the tracking images are enhanced independently of each other. PMID:18482881
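
    The Kalman recursion that transfers information between frames follows the standard predict/update equations; the sketch below is generic, with the paper's particular state model left unspecified.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a Kalman filter used for tracking.

    x, P : state estimate and covariance from the previous frame
    z    : current measurement (e.g. tracked object position)
    F, H : state transition and measurement matrices
    Q, R : process and measurement noise covariances
    """
    # Predict the state forward one frame.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the new measurement.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```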

  17. Function formula oriented construction of Bayesian inference nets for diagnosis of cardiovascular disease.

    PubMed

    Sekar, Booma Devi; Dong, Mingchui

    2014-01-01

    An intelligent cardiovascular disease (CVD) diagnosis system using hemodynamic parameters (HDPs) derived from sphygmogram (SPG) signal is presented to support the emerging patient-centric healthcare models. To replicate clinical approach of diagnosis through a staged decision process, the Bayesian inference nets (BIN) are adapted. New approaches to construct a hierarchical multistage BIN using defined function formulas and a method employing fuzzy logic (FL) technology to quantify inference nodes with dynamic values of statistical parameters are proposed. The suggested methodology is validated by constructing hierarchical Bayesian fuzzy inference nets (HBFIN) to diagnose various heart pathologies from the deduced HDPs. The preliminary diagnostic results show that the proposed methodology has salient validity and effectiveness in the diagnosis of cardiovascular disease. PMID:25247174

  18. Function Formula Oriented Construction of Bayesian Inference Nets for Diagnosis of Cardiovascular Disease

    PubMed Central

    Sekar, Booma Devi; Dong, Mingchui

    2014-01-01

    An intelligent cardiovascular disease (CVD) diagnosis system using hemodynamic parameters (HDPs) derived from sphygmogram (SPG) signal is presented to support the emerging patient-centric healthcare models. To replicate clinical approach of diagnosis through a staged decision process, the Bayesian inference nets (BIN) are adapted. New approaches to construct a hierarchical multistage BIN using defined function formulas and a method employing fuzzy logic (FL) technology to quantify inference nodes with dynamic values of statistical parameters are proposed. The suggested methodology is validated by constructing hierarchical Bayesian fuzzy inference nets (HBFIN) to diagnose various heart pathologies from the deduced HDPs. The preliminary diagnostic results show that the proposed methodology has salient validity and effectiveness in the diagnosis of cardiovascular disease. PMID:25247174

  19. A Model-Based Analysis to Infer the Functional Content of a Gene List

    PubMed Central

    Newton, Michael A.; He, Qiuling; Kendziorski, Christina

    2012-01-01

    An important challenge in statistical genomics concerns integrating experimental data with exogenous information about gene function. A number of statistical methods are available to address this challenge, but most do not accommodate complexities in the functional record. To infer activity of a functional category (e.g., a gene ontology term), most methods use gene-level data on that category, but do not use other functional properties of the same genes. Not doing so creates undue errors in inference. Recent developments in model-based category analysis aim to overcome this difficulty, but in attempting to do so they are faced with serious computational problems. This paper investigates statistical properties and the structure of posterior computation in one such model for the analysis of functional category data. We examine the graphical structures underlying posterior computation in the original parameterization and in a new parameterization aimed at leveraging elements of the model. We characterize identifiability of the underlying activation states, describe a new prior distribution, and introduce approximations that aim to support numerical methods for posterior inference. PMID:22499692

  20. Vikodak - A Modular Framework for Inferring Functional Potential of Microbial Communities from 16S Metagenomic Datasets

    PubMed Central

    Nagpal, Sunil; Haque, Mohammed Monzoorul; Mande, Sharmila S.

    2016-01-01

    Background The overall metabolic/functional potential of any given environmental niche is a function of the sum total of genes/proteins/enzymes that are encoded and expressed by various interacting microbes residing in that niche. Consequently, prior (collated) information pertaining to genes and enzymes encoded by the resident microbes can aid in indirectly (re)constructing/inferring the metabolic/functional potential of a given microbial community (given its taxonomic abundance profile). In this study, we present Vikodak—a multi-modular package that is based on the above assumption and automates inferring and/or comparing the functional characteristics of an environment using taxonomic abundance generated from one or more environmental sample datasets. With the underlying assumptions of co-metabolism and independent contributions of different microbes in a community, a concerted effort has been made to accommodate microbial co-existence patterns in the various modules incorporated in Vikodak. Results Validation experiments on over 1400 metagenomic samples have confirmed the utility of Vikodak in (a) deciphering enzyme abundance profiles of any KEGG metabolic pathway, (b) functional resolution of distinct metagenomic environments, (c) inferring patterns of functional interaction between resident microbes, and (d) automating statistical comparison of functional features of studied microbiomes. Novel features incorporated in Vikodak also facilitate automatic removal of false positives and spurious functional predictions. Conclusions With novel provisions for comprehensive functional analysis, inclusion of microbial co-existence pattern based algorithms, automated inter-environment comparisons, in-depth analysis of individual metabolic pathways, and greater flexibility at the user end, Vikodak is expected to be an important value addition to the family of existing tools for 16S based function prediction. Availability and Implementation A web implementation of Vikodak

  1. Empirical Mode Decomposition Technique with Conditional Mutual Information for Denoising Operational Sensor Data

    SciTech Connect

    Omitaomu, Olufemi A; Protopopescu, Vladimir A; Ganguly, Auroop R

    2011-01-01

    A new approach is developed for denoising signals using the Empirical Mode Decomposition (EMD) technique and an information-theoretic method. The EMD technique is applied to decompose a noisy sensor signal into the so-called intrinsic mode functions (IMFs). These functions are of the same length and in the same time domain as the original signal. Therefore, the EMD technique preserves varying frequency in time. Assuming the given signal is corrupted by high-frequency Gaussian noise implies that most of the noise should be captured by the first few modes. Therefore, our proposition is to separate the modes into high-frequency and low-frequency groups. We applied an information-theoretic method, namely mutual information, to determine the cut-off for separating the modes. A denoising procedure is applied only to the high-frequency group using a shrinkage approach. Upon denoising, this group is combined with the original low-frequency group to obtain the overall denoised signal. We illustrate our approach with simulated and real-world data sets. The results are compared to two popular denoising techniques in the literature, namely the discrete Fourier transform (DFT) and the discrete wavelet transform (DWT). We found that our approach performs better than DWT and DFT in most cases, and comparably to DWT in some cases, in terms of: (i) mean square error, (ii) recomputed signal-to-noise ratio, and (iii) visual quality of the denoised signals.
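
    The overall recipe (decompose, split the IMFs into high- and low-frequency groups, shrink only the former) can be sketched with the PyEMD package. The mutual-information cutoff selection is not reproduced here; the cutoff index and shrinkage threshold are left as user inputs.

```python
import numpy as np
from PyEMD import EMD  # pip install EMD-signal

def emd_denoise(signal, cutoff, threshold):
    """EMD-based denoising sketch following the abstract's recipe.

    IMFs below index `cutoff` form the high-frequency group and are
    soft-thresholded (shrinkage); the remaining modes are kept as-is,
    and all modes are summed to rebuild the denoised signal.
    """
    signal = np.asarray(signal, dtype=float)
    imfs = EMD().emd(signal)
    out = np.zeros_like(signal)
    for i, imf in enumerate(imfs):
        if i < cutoff:  # high-frequency modes: shrink toward zero
            imf = np.sign(imf) * np.maximum(np.abs(imf) - threshold, 0.0)
        out += imf
    return out
```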

  2. Approximation Of Multi-Valued Inverse Functions Using Clustering And Sugeno Fuzzy Inference

    NASA Technical Reports Server (NTRS)

    Walden, Maria A.; Bikdash, Marwan; Homaifar, Abdollah

    1998-01-01

    Finding the inverse of a continuous function can be challenging and computationally expensive when the inverse function is multi-valued. Difficulties may be compounded when the function itself is difficult to evaluate. We show that we can use fuzzy-logic approximators such as Sugeno inference systems to compute the inverse on-line. To do so, a fuzzy clustering algorithm can be used in conjunction with a discriminating function to split the function data into branches for the different values of the forward function. These data sets are then fed into a recursive least-squares learning algorithm that finds the proper coefficients of the Sugeno approximators; each Sugeno approximator finds one value of the inverse function. Discussions about the accuracy of the approximation will be included.

  3. Determination of optimal wavelet denoising parameters for red edge feature extraction from hyperspectral data

    NASA Astrophysics Data System (ADS)

    Shafri, Helmi Z. M.; Yusof, Mohd R. M.

    2009-05-01

    A study of wavelet denoising of hyperspectral reflectance data, specifically the red edge position (REP) and its first derivative, is presented in this paper. A synthetic data set was created using a sigmoid to simulate the red edge feature. The sigmoid was injected with Gaussian white noise to simulate noisy reflectance data from handheld spectroradiometers. The use of synthetic data enables better quantification and statistical study of the effects of wavelet denoising on the features of hyperspectral data, specifically the REP. The simulation study helps to identify the most suitable wavelet parameters for denoising and demonstrates the applicability of the wavelet-based denoising procedure in hyperspectral sensing of vegetation. The suitability of the thresholding rules and mother wavelets used in wavelet denoising is evaluated by comparing the denoised sigmoid with the clean sigmoid, in terms of the shift in the inflection point (meant to represent the REP) and the overall change in the denoised signal compared with the clean one. The VisuShrink soft threshold was used with rescaling based on the noise estimate, in conjunction with wavelets of the Daubechies, Symlet, and Coiflet families. It was found that, for the VisuShrink threshold with single-level noise-estimate rescaling, the Daubechies 9 and Symlet 8 wavelets produced the least distortion in the location of the sigmoid inflection point and in the overall curve. The selected mother wavelets were then used to denoise oil palm reflectance data to enable determination of the red edge position by locating the peak of the first derivative.
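
    A compact sketch of the VisuShrink procedure applied to a red-edge spectrum, followed by REP extraction from the first-derivative peak, assuming PyWavelets; the paper's exact single-level noise-estimate rescaling may differ in detail.

```python
import numpy as np
import pywt

def denoise_and_find_rep(wavelengths, reflectance, wavelet='db9'):
    """VisuShrink denoising of a red-edge spectrum, then REP extraction.

    Uses the universal soft threshold with the usual MAD noise estimate
    and Daubechies 9 (one of the wavelets the study recommends); the REP
    is taken as the wavelength of the first derivative's peak.
    """
    wavelengths = np.asarray(wavelengths, dtype=float)
    reflectance = np.asarray(reflectance, dtype=float)
    coeffs = pywt.wavedec(reflectance, wavelet)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # MAD noise estimate
    t = sigma * np.sqrt(2 * np.log(len(reflectance)))  # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, t, mode='soft') for c in coeffs[1:]]
    denoised = pywt.waverec(coeffs, wavelet)[:len(reflectance)]
    rep = wavelengths[np.argmax(np.gradient(denoised, wavelengths))]
    return denoised, rep
```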

  4. INTEGRATING EVOLUTIONARY AND FUNCTIONAL APPROACHES TO INFER ADAPTATION AT SPECIFIC LOCI

    PubMed Central

    Storz, Jay F.; Wheat, Christopher W.

    2010-01-01

    Inferences about adaptation at specific loci are often exclusively based on the static analysis of DNA sequence variation. Ideally, population-genetic evidence for positive selection serves as a stepping-off point for experimental studies to elucidate the functional significance of the putatively adaptive variation. We argue that inferences about adaptation at specific loci are best achieved by integrating the indirect, retrospective insights provided by population-genetic analyses with the more direct, mechanistic insights provided by functional experiments. Integrative studies of adaptive genetic variation may sometimes be motivated by experimental insights into molecular function, which then provide the impetus to perform population genetic tests to evaluate whether the functional variation is of adaptive significance. In other cases, studies may be initiated by genome scans of DNA variation to identify candidate loci for recent adaptation. Results of such analyses can then motivate experimental efforts to test whether the identified candidate loci do in fact contribute to functional variation in some fitness-related phenotype. Functional studies can provide corroborative evidence for positive selection at particular loci, and can potentially reveal specific molecular mechanisms of adaptation. PMID:20500215

  5. Analysis and selection of the methods for fruit image denoise

    NASA Astrophysics Data System (ADS)

    Gui, Jiangsheng; Ma, Benxue; Rao, Xiuqin; Ying, Yibin

    2007-09-01

    Applications of machine vision to the automated inspection and sorting of fruits have been widely studied. Preprocessing of the fruit image is needed when it contains much noise. Many image denoising methods exist in the literature and can achieve good results, but selecting among them is a difficult problem. In this research, total variation (TV) and a shock filter with a diffusion function were introduced and, together with six other commonly used denoising methods, were tested on different noise types. The results demonstrated that when the noise was Gaussian or random and the SNR of the original image was over 8, the TV method achieved the best restoration; when the SNR of the original image was under 8, the Wiener filter gave the best restoration; and when the noise was salt-and-pepper, the median filter achieved the best restoration.

  6. Vector anisotropic filter for multispectral image denoising

    NASA Astrophysics Data System (ADS)

    Ben Said, Ahmed; Foufou, Sebti; Hadjidj, Rachid

    2015-04-01

    In this paper, we propose an approach to extend the application of anisotropic Gaussian filtering to multispectral image denoising. We study the case of images corrupted with additive Gaussian noise and use the sparse matrix transform for noise covariance matrix estimation. Specifically, we show that if an image has low local variability, we can make the assumption that in the noisy image the local variability originates from the noise variance only. We apply the proposed approach to the denoising of multispectral images corrupted by noise and compare the proposed method with some existing methods. Results demonstrate an improvement in denoising performance.

  7. Denoising Medical Images using Calculus of Variations

    PubMed Central

    Kohan, Mahdi Nakhaie; Behnam, Hamid

    2011-01-01

    We propose a method for medical image denoising using the calculus of variations and local variance estimation with shaped windows. This method reduces additive noise while preserving small patterns and edges in images. A pyramid structure-texture decomposition of images is used to separate noise and texture components based on local variance measures. The experimental results show that the proposed method yields visual improvement as well as better SNR, RMSE, and PSNR than common medical image denoising methods. Experimental results for denoising a sample magnetic resonance image show that SNR, PSNR, and RMSE were improved by 19, 9, and 21 percent, respectively. PMID:22606674

  8. Using evolutionary sequence variation to make inferences about protein structure and function

    NASA Astrophysics Data System (ADS)

    Colwell, Lucy

    2015-03-01

    The evolutionary trajectory of a protein through sequence space is constrained by its function. Collections of sequence homologs record the outcomes of millions of evolutionary experiments in which the protein evolves according to these constraints. The explosive growth in the number of available protein sequences raises the possibility of using the natural variation present in homologous protein sequences to infer these constraints and thus identify residues that control different protein phenotypes. Because in many cases phenotypic changes are controlled by more than one amino acid, the mutations that separate one phenotype from another may not be independent, requiring us to understand the correlation structure of the data. To address this we build a maximum entropy probability model for the protein sequence. The parameters of the inferred model are constrained by the statistics of a large sequence alignment. Pairs of sequence positions with the strongest interactions accurately predict contacts in protein tertiary structure, enabling all atom structural models to be constructed. We describe development of a theoretical inference framework that enables the relationship between the amount of available input data and the reliability of structural predictions to be better understood.

  9. Nonlocal means denoising of ECG signals.

    PubMed

    Tracey, Brian H; Miller, Eric L

    2012-09-01

    Patch-based methods have attracted significant attention in recent years within the field of image processing for a variety of problems including denoising, inpainting, and super-resolution interpolation. Despite their prevalence for processing 2-D signals, they have received little attention in the 1-D signal processing literature. In this letter, we explore application of one such method, the nonlocal means (NLM) approach, to the denoising of biomedical signals. Using ECG as an example, we demonstrate that a straightforward NLM-based denoising scheme provides signal-to-noise ratio improvements very similar to state of the art wavelet-based methods, while giving ~3 × or greater reduction in metrics measuring distortion of the denoised waveform. PMID:22829361
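
    A direct (unoptimized) 1-D NLM can be written in a few lines; the parameter defaults below are illustrative rather than the paper's settings.

```python
import numpy as np

def nlm_denoise_1d(x, patch=7, search=50, h=0.5):
    """Minimal 1-D nonlocal means for a signal such as an ECG trace.

    Each sample is replaced by a weighted mean of samples whose
    surrounding patches look similar; h controls the weight falloff.
    A direct O(n * search * patch) sketch, not an optimized variant.
    """
    x = np.asarray(x, dtype=float)
    n, half = len(x), patch // 2
    xp = np.pad(x, half, mode='reflect')
    out = np.empty(n)
    for i in range(n):
        p_i = xp[i:i + patch]                      # patch centred at x[i]
        lo, hi = max(0, i - search), min(n, i + search + 1)
        w_sum = v_sum = 0.0
        for j in range(lo, hi):
            d2 = np.mean((p_i - xp[j:j + patch]) ** 2)
            w = np.exp(-d2 / (h * h))              # patch-similarity weight
            w_sum += w
            v_sum += w * x[j]
        out[i] = v_sum / w_sum
    return out
```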

  10. Image denoising in mixed Poisson-Gaussian noise.

    PubMed

    Luisier, Florian; Blu, Thierry; Unser, Michael

    2011-03-01

    We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy. PMID:20840902

  11. Impact of Prematurity and Perinatal Antibiotics on the Developing Intestinal Microbiota: A Functional Inference Study

    PubMed Central

    Arboleya, Silvia; Sánchez, Borja; Solís, Gonzalo; Fernández, Nuria; Suárez, Marta; Hernández-Barranco, Ana M.; Milani, Christian; Margolles, Abelardo; de los Reyes-Gavilán, Clara G.; Ventura, Marco; Gueimonde, Miguel

    2016-01-01

    Background: The microbial colonization of the neonatal gut provides a critical stimulus for normal maturation and development. This process of early microbiota establishment, known to be affected by several factors, constitutes an important determinant for later health. Methods: We studied the establishment of the microbiota in preterm and full-term infants and the impact of perinatal antibiotics upon this process in premature babies. To this end, 16S rRNA gene sequence-based microbiota assessment was performed at phylum level and functional inference analyses were conducted. Moreover, the levels of the main intestinal microbial metabolites, the short-chain fatty acids (SCFA) acetate, propionate and butyrate, were measured by Gas-Chromatography Flame ionization/Mass spectrometry detection. Results: Prematurity affects microbiota composition at phylum level, leading to increases of Proteobacteria and reduction of other intestinal microorganisms. Perinatal antibiotic use further affected the microbiota of the preterm infant. These changes involved a concomitant alteration in the levels of intestinal SCFA. Moreover, functional inference analyses allowed for identifying metabolic pathways potentially affected by prematurity and perinatal antibiotics use. Conclusion: A deficiency or delay in the establishment of normal microbiota function seems to be present in preterm infants. Perinatal antibiotic use, such as intrapartum prophylaxis, affected the early life microbiota establishment in preterm newborns, which may have consequences for later health. PMID:27136545

  12. Inferring the functional effect of gene expression changes in signaling pathways.

    PubMed

    Sebastián-León, Patricia; Carbonell, José; Salavert, Francisco; Sanchez, Rubén; Medina, Ignacio; Dopazo, Joaquín

    2013-07-01

    Signaling pathways constitute a valuable source of information for interpreting the way in which alterations in gene activities affect particular cell functionalities. Web tools are available that allow viewing and editing pathways, as well as representing experimental data on them. However, few methods exist that identify the signaling circuits, within a pathway, associated with the biological problem studied, and none of them provides a convenient graphical web interface. We present PATHiWAYS, a web-based signaling pathway visualization system that infers changes in signaling affecting cell functionality from measurements of gene expression values in typical expression microarray case-control experiments. A simple probabilistic model of the pathway is used to estimate the probabilities of signal transmission from any receptor to any final effector molecule (taking into account the pathway topology), using the individual probabilities of gene product presence/absence inferred from gene expression values. Significant changes in these probabilities allow linking different cell functionalities triggered by the pathway to the biological problem studied. PATHiWAYS is available at: http://pathiways.babelomics.org/. PMID:23748960
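
    Under the simple probabilistic model described, in which transmission requires every node of a receptor-to-effector circuit to be present, the transmission probability of one chain is just a product. A toy sketch with hypothetical gene names follows.

```python
def path_transmission_probability(presence, path):
    """Probability that a signal traverses one receptor-to-effector chain.

    presence: dict mapping gene/protein id -> probability it is present
              (inferred from expression); all identifiers are hypothetical
    path:     ordered list of node ids from receptor to effector
    The signal gets through only if every node on the chain is present,
    so, assuming independence, the probabilities multiply.
    """
    p = 1.0
    for node in path:
        p *= presence[node]
    return p

# Example: path_transmission_probability(
#     {'EGFR': 0.9, 'RAS': 0.7, 'ERK': 0.8}, ['EGFR', 'RAS', 'ERK'])  # 0.504
```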

  13. Inferring deep-brain activity from cortical activity using functional near-infrared spectroscopy

    PubMed Central

    Liu, Ning; Cui, Xu; Bryant, Daniel M.; Glover, Gary H.; Reiss, Allan L.

    2015-01-01

    Functional near-infrared spectroscopy (fNIRS) is an increasingly popular technology for studying brain function because it is non-invasive, non-irradiating and relatively inexpensive. Further, fNIRS potentially allows measurement of hemodynamic activity with high temporal resolution (milliseconds) and in naturalistic settings. However, in comparison with other imaging modalities, namely fMRI, fNIRS has a significant drawback: limited sensitivity to hemodynamic changes in deep-brain regions. To overcome this limitation, we developed a computational method to infer deep-brain activity using fNIRS measurements of cortical activity. Using simultaneous fNIRS and fMRI, we measured brain activity in 17 participants as they completed three cognitive tasks. A support vector regression (SVR) learning algorithm was used to predict activity in twelve deep-brain regions using information from surface fNIRS measurements. We compared these predictions against actual fMRI-measured activity using Pearson’s correlation to quantify prediction performance. To provide a benchmark for comparison, we also used fMRI measurements of cortical activity to infer deep-brain activity. When using fMRI-measured activity from the entire cortex, we were able to predict deep-brain activity in the fusiform cortex with an average correlation coefficient of 0.80 and in all deep-brain regions with an average correlation coefficient of 0.67. The top 15% of predictions using fNIRS signal achieved an accuracy of 0.7. To our knowledge, this study is the first to investigate the feasibility of using cortical activity to infer deep-brain activity. This new method has the potential to extend fNIRS applications in cognitive and clinical neuroscience research. PMID:25798327
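
    The core learning step, regressing a deep-brain time series on cortical channels with SVR, can be sketched with scikit-learn on synthetic stand-in data; all shapes and values below are hypothetical illustrations, not the study's data.

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical stand-in data: cortical channel activity as features,
# one deep ROI's time series as the regression target.
rng = np.random.default_rng(0)
X_cortex = rng.standard_normal((500, 40))   # 500 time points, 40 channels
y_deep = X_cortex[:, :5].sum(axis=1) + 0.1 * rng.standard_normal(500)

model = SVR(kernel='rbf', C=1.0)            # the SVR learning step
model.fit(X_cortex[:400], y_deep[:400])     # train on earlier samples
pred = model.predict(X_cortex[400:])        # infer deep activity

# Quantify prediction performance with Pearson's r, as in the paper.
r = np.corrcoef(pred, y_deep[400:])[0, 1]
```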

  14. Inferring deep biosphere function and diversity through (near) surface biosphere portals (Invited)

    NASA Astrophysics Data System (ADS)

    Meyer-Dombard, D. R.; Cardace, D.; Woycheese, K. M.; Swingley, W.; Schubotz, F.; Shock, E.

    2013-12-01

    The consideration of surface expressions of the deep subsurface, such as springs, remains one of the most economically viable means to query the deep biosphere's diversity and function. Hot spring source pools are ideal portals for accessing and inferring the taxonomic and functional diversity of related deep subsurface microbial communities. Consideration of the geochemical composition of deep vs. surface fluids provides context for interpretation of community function. Further, parallel assessment of 16S rRNA data, metagenomic sequencing, and isotopic compositions of biomass in surface springs allows inference of the functional capacities of subsurface ecosystems. Springs in Yellowstone National Park (YNP), the Philippines, and Turkey are considered here, incorporating near-surface, transition, and surface ecosystems to identify 'legacy' taxa and functions of the deep biosphere. We find that source pools often support functional capacity suited to subsurface ecosystems. For example, in hot ecosystems, source pools are strictly chemosynthetic, and surface environments with measureable dissolved oxygen may contain evidence of community functions more favorable under anaerobic conditions. Metagenomic reads from a YNP ecosystem indicate the genetic capacity for sulfate reduction at high temperature. However, inorganic sulfate reduction is only minimally energy-yielding in these surface environments suggesting the potential that sulfate reduction is a 'legacy' function of deeper biosphere ecosystems. Carbon fixation tactics shift with increased surface exposure of the thermal fluids. Genes related to the rTCA cycle and the acetyl co-A pathway are most prevalent in highest temperature, anaerobic sites. At lower temperature sites, fewer total carbon fixation genes were observed, perhaps indicating an increase in heterotrophic metabolism with increased surface exposure. In hydrogen and methane rich springs in the Philippines and Turkey, methanogenic taxa dominate source

  15. GLMdenoise: a fast, automated technique for denoising task-based fMRI data.

    PubMed

    Kay, Kendrick N; Rokem, Ariel; Winawer, Jonathan; Dougherty, Robert F; Wandell, Brian A

    2013-01-01

    In task-based functional magnetic resonance imaging (fMRI), researchers seek to measure fMRI signals related to a given task or condition. In many circumstances, measuring this signal of interest is limited by noise. In this study, we present GLMdenoise, a technique that improves signal-to-noise ratio (SNR) by entering noise regressors into a general linear model (GLM) analysis of fMRI data. The noise regressors are derived by conducting an initial model fit to determine voxels unrelated to the experimental paradigm, performing principal components analysis (PCA) on the time-series of these voxels, and using cross-validation to select the optimal number of principal components to use as noise regressors. Due to the use of data resampling, GLMdenoise requires and is best suited for datasets involving multiple runs (where conditions repeat across runs). We show that GLMdenoise consistently improves cross-validation accuracy of GLM estimates on a variety of event-related experimental datasets and is accompanied by substantial gains in SNR. To promote practical application of methods, we provide MATLAB code implementing GLMdenoise. Furthermore, to help compare GLMdenoise to other denoising methods, we present the Denoise Benchmark (DNB), a public database and architecture for evaluating denoising methods. The DNB consists of the datasets described in this paper, a code framework that enables automatic evaluation of a denoising method, and implementations of several denoising methods, including GLMdenoise, the use of motion parameters as noise regressors, ICA-based denoising, and RETROICOR/RVHRCOR. Using the DNB, we find that GLMdenoise performs best out of all of the denoising methods we tested. PMID:24381539
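
    The heart of the method, deriving noise regressors by PCA on a noise pool and augmenting the GLM design matrix, can be sketched as follows; the cross-validated selection of the number of components described above is omitted, and `k` is left as an input.

```python
import numpy as np

def pca_noise_regressors(noise_pool, k):
    """Derive k noise regressors from a noise pool, GLMdenoise-style.

    noise_pool: (T, V) array of time series from voxels judged unrelated
                to the task by an initial model fit.
    Returns the top-k principal component time courses, which are then
    appended as columns to the GLM design matrix.
    """
    Z = noise_pool - noise_pool.mean(axis=0)      # center each voxel
    U, s, _ = np.linalg.svd(Z, full_matrices=False)
    return U[:, :k]                               # (T, k) noise regressors

def glm_fit(design, noise_regs, data):
    """Ordinary least squares on the augmented design matrix."""
    X = np.hstack([design, noise_regs])
    beta, *_ = np.linalg.lstsq(X, data, rcond=None)
    return beta
```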

  16. LncRNA ontology: inferring lncRNA functions based on chromatin states and expression patterns

    PubMed Central

    Li, Yongsheng; Chen, Hong; Pan, Tao; Jiang, Chunjie; Zhao, Zheng; Wang, Zishan; Zhang, Jinwen; Xu, Juan; Li, Xia

    2015-01-01

    Accumulating evidence suggests that long non-coding RNAs (lncRNAs) perform important functions. Genome-wide chromatin states are a rich source of information about cellular state, yielding insights beyond what is typically obtained by transcriptome profiling. We propose an integrative method for genome-wide functional prediction of lncRNAs that combines chromatin-state data with gene expression patterns. We first validated the method using protein-coding genes with known function annotations. Our validation results indicated that the integrative method performs better than co-expression analysis and is accurate across different conditions. Next, by applying the integrative model genome-wide, we predicted probable functions for more than 97% of human lncRNAs. The putative functions inferred by our method match those previously annotated through the targets of lncRNAs. Moreover, the linkage from the cellular processes influenced by cancer-associated lncRNAs to the cancer hallmarks provided a “lncRNA point-of-view” on tumor biology. Our approach provides a functional annotation of lncRNAs, which we developed into a web-based application, LncRNA Ontology, to provide visualization, analysis, and downloading of lncRNA putative functions. PMID:26485761

  17. Inferring modules of functionally interacting proteins using the Bond Energy Algorithm

    PubMed Central

    Watanabe, Ryosuke LA; Morett, Enrique; Vallejo, Edgar E

    2008-01-01

    Background Non-homology-based methods such as phylogenetic profiles are effective for predicting functional relationships between proteins with no considerable sequence or structure similarity. Those methods rely heavily on traditional similarity metrics defined on pairs of phylogenetic patterns. Proteins do not exclusively interact in pairs, as the final biological function of a protein in the cellular context is often carried out by a group of proteins. In order to accurately infer modules of functionally interacting proteins, consideration of not only direct but also indirect relationships is required. In this paper, we used the Bond Energy Algorithm (BEA) to predict functionally related groups of proteins. With BEA we create clusters of phylogenetic profiles based on the associations of the surrounding elements of the analyzed data, using a metric that considers linked relationships among elements in the data set. Results Using phylogenetic profiles obtained from the Cluster of Orthologous Groups of Proteins (COG) database, we conducted a series of clustering experiments using BEA to predict (upper-level) relationships between profiles. We evaluated our results by comparing them with COG's functional categories and, moreover, with the experimentally determined functional relationships between proteins provided by the DIP and ECOCYC databases. Our results demonstrate that BEA is capable of predicting meaningful modules of functionally related proteins. BEA outperforms traditionally used clustering methods, such as k-means and hierarchical clustering, by predicting functional relationships between proteins with higher accuracy. Conclusion This study shows that the linked relationships of phylogenetic profiles obtained by BEA are useful for detecting functional associations between profiles and extending functional modules not found by traditional methods. BEA is capable of detecting relationships among phylogenetic patterns by linking them through a common element shared in

  18. Denoising and dimensionality reduction of genomic data

    NASA Astrophysics Data System (ADS)

    Capobianco, Enrico

    2005-05-01

    Genomics represents a challenging research field for many quantitative scientists, and recently a vast variety of statistical techniques and machine learning algorithms have been proposed and inspired by cross-disciplinary work with computational and systems biologists. In genomic applications, the researcher deals with noisy and complex high-dimensional feature spaces; a wealth of genes whose expression levels are experimentally measured, can often be observed for just a few time points, thus limiting the available samples. This unbalanced combination suggests that it might be hard for standard statistical inference techniques to come up with good general solutions, likewise for machine learning algorithms to avoid heavy computational work. Thus, one naturally turns to two major aspects of the problem: sparsity and intrinsic dimensionality. These two aspects are studied in this paper, where for both denoising and dimensionality reduction, a very efficient technique, i.e., Independent Component Analysis, is used. The numerical results are very promising, and lead to a very good quality of gene feature selection, due to the signal separation power enabled by the decomposition technique. We investigate how the use of replicates can improve these results, and deal with noise through a stabilization strategy which combines the estimated components and extracts the most informative biological information from them. Exploiting the inherent level of sparsity is a key issue in genetic regulatory networks, where the connectivity matrix needs to account for the real links among genes and discard many redundancies. Most experimental evidence suggests that real gene-gene connections represent indeed a subset of what is usually mapped onto either a huge gene vector or a typically dense and highly structured network. Inferring gene network connectivity from the expression levels represents a challenging inverse problem that is at present stimulating key research in biomedical
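
    A minimal sketch of ICA-based denoising and dimensionality reduction for an expression matrix, using scikit-learn's FastICA on synthetic stand-in data; the matrix shapes and component count below are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical expression matrix: 5000 genes x 12 arrays (many features,
# few samples, as the abstract describes).
rng = np.random.default_rng(1)
expr = rng.standard_normal((5000, 12))

# Decompose gene profiles into a small number of statistically
# independent components; noisy components can be dropped or shrunk
# before reconstructing, giving joint denoising and reduction.
ica = FastICA(n_components=6, random_state=0)
sources = ica.fit_transform(expr)        # (5000, 6) independent components
mixing = ica.mixing_                     # (12, 6) loadings over arrays
reconstructed = sources @ mixing.T + ica.mean_   # back to (5000, 12)
```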

  19. Improved extreme value weighted sparse representational image denoising with random perturbation

    NASA Astrophysics Data System (ADS)

    Xuan, Shibin; Han, Yulan

    2015-11-01

    Research into the removal of mixed noise is a hot topic in the field of image denoising. Currently, weighted encoding with sparse nonlocal regularization represents an excellent mixed-noise removal method. To make the fitting function closer to the requirements of a robust estimation technique, an extreme value technique is used that allows the fitting function to satisfy three conditions of robust estimation on a larger interval. Moreover, a random disturbance sequence is integrated into the denoising model to prevent the iterative solving process from falling into local optima. A Radon transform-based noise detection algorithm and an adaptive median filter are used to obtain a high-quality initial solution for the iterative procedure of the image denoising model. Experimental results indicate that this improved method efficiently enhances the weighted encoding with sparse nonlocal regularization model. The proposed method can effectively remove mixed noise from corrupted images, while better preserving the edges and details of the processed image.

  20. Brain imaging and cognitive neuroscience. Toward strong inference in attributing function to structure.

    PubMed

    Sarter, M; Berntson, G G; Cacioppo, J T

    1996-01-01

    Cognitive neuroscience has emerged from the neurosciences and cognitive psychology as a scientific discipline that aims at the determination of "how brain function gives rise to mental activity" (S. M. Kosslyn & L. M. Shin, 1992, p. 146). While research in cognitive neuroscience combines many levels of neuroscientific and psychological analyses, modern imaging techniques that monitor brain activity during behavioral or cognitive operations have significantly contributed to the emergence of this discipline. The conclusions deduced from these studies are inherently localizationistic in nature; in other words, they describe cognitive functions as being localized in focal brain regions (brain activity in a defined brain region, phi, is involved in a specific cognitive function, psi). A broad discussion about the virtues and limitations of such conclusions may help avoid the emergence of a mentalistic localizationism (i.e., the attribution of mentalistic concepts such as happiness, morality, or consciousness to brain structure) and illustrate the importance of convergence with information generated by different research strategies (such as, for example, evidence generated by studies in which the effects of experimental manipulations of local neuronal processes on cognitive functions are assessed). Progress in capitalizing on brain-imaging studies to investigate questions of the form "brain structure or event phi is associated with cognitive function psi" may be impeded because of the way in which inferences are typically formulated in the brain-imaging literature. A conceptual framework to advance the interpretation of data describing the relationships between cognitive phenomena and brain structure activity is provided. PMID:8585670

  1. Geodesic denoising for optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Shahrian Varnousfaderani, Ehsan; Vogl, Wolf-Dieter; Wu, Jing; Gerendas, Bianca S.; Simader, Christian; Langs, Georg; Waldstein, Sebastian M.; Schmidt-Erfurth, Ursula

    2016-03-01

    Optical coherence tomography (OCT) is an optical signal acquisition method capturing micrometer-resolution, cross-sectional three-dimensional images. OCT images are used widely in ophthalmology to diagnose and monitor retinal diseases such as age-related macular degeneration (AMD) and glaucoma. While OCT allows the visualization of retinal structures such as vessels and retinal layers, image quality and contrast are reduced by speckle noise, obfuscating small, low-intensity structures and structural boundaries. Existing denoising methods for OCT images may remove clinically significant image features such as texture and boundaries of anomalies. In this paper, we propose a novel patch-based denoising method, Geodesic Denoising. The method reduces noise in OCT images while preserving clinically significant, albeit small, pathological structures, such as fluid-filled cysts in diseased retinas. Our method selects optimal image patch distribution representations based on geodesic patch similarity to noisy samples. Patch distributions are then randomly sampled to build a set of best-matching candidates for every noisy sample, and the denoised value is computed based on a geodesic weighted average of the best candidate samples. Our method is evaluated qualitatively on real pathological OCT scans and quantitatively on a proposed set of ground-truth, noise-free synthetic OCT scans with artificially added noise and pathologies. Experimental results show that the performance of our method is comparable with that of state-of-the-art denoising methods, while outperforming them in preserving the critical, clinically relevant structures.

  2. Pragmatic inferences in high-functioning adults with autism and Asperger syndrome.

    PubMed

    Pijnacker, Judith; Hagoort, Peter; Buitelaar, Jan; Teunisse, Jan-Pieter; Geurts, Bart

    2009-04-01

    Although people with autism spectrum disorders (ASD) often have severe problems with pragmatic aspects of language, little is known about their pragmatic reasoning. We carried out a behavioral study on high-functioning adults with autistic disorder (n = 11) and Asperger syndrome (n = 17) and matched controls (n = 28) to investigate whether they are capable of deriving scalar implicatures, which are generally considered to be pragmatic inferences. Participants were presented with underinformative sentences like "Some sparrows are birds". This sentence is logically true, but pragmatically inappropriate if the scalar implicature "Not all sparrows are birds" is derived. The present findings indicate that the combined ASD group was just as likely as controls to derive scalar implicatures, yet there was a difference between participants with autistic disorder and Asperger syndrome, suggesting a potential differentiation between these disorders in pragmatic reasoning. Moreover, our results suggest that verbal intelligence is a constraint for task performance in autistic disorder but not in Asperger syndrome. PMID:19052858

  3. On the inference of function from structure using biomechanical modelling and simulation of extinct organisms.

    PubMed

    Hutchinson, John R

    2012-02-23

    Biomechanical modelling and simulation techniques offer some hope for unravelling the complex inter-relationships of structure and function perhaps even for extinct organisms, but have their limitations owing to this complexity and the many unknown parameters for fossil taxa. Validation and sensitivity analysis are two indispensable approaches for quantifying the accuracy and reliability of such models or simulations. But there are other subtleties in biomechanical modelling that include investigator judgements about the level of simplicity versus complexity in model design or how uncertainty and subjectivity are dealt with. Furthermore, investigator attitudes toward models encompass a broad spectrum between extreme credulity and nihilism, influencing how modelling is conducted and perceived. Fundamentally, more data and more testing of methodology are required for the field to mature and build confidence in its inferences. PMID:21666064

  4. Machinery vibration signal denoising based on learned dictionary and sparse representation

    NASA Astrophysics Data System (ADS)

    Guo, Liang; Gao, Hongli; Li, Jun; Huang, Haifeng; Zhang, Xiaochen

    2015-07-01

    Mechanical vibration signal denoising is an important problem for machine damage assessment and health monitoring. Wavelet transforms and sparse reconstruction are powerful and practical methods; however, they rely on fixed basis functions or atoms. In this paper, a novel method is presented in which the atoms used to represent signals are learned from the raw signal itself. In order to satisfy the requirements of real-time signal processing, an online dictionary learning algorithm is adopted. Orthogonal matching pursuit is applied to select the best-matching columns (atoms) of the dictionary. Finally, the denoised signal is computed from the sparse coefficient vector and the learned dictionary. A simulated signal and a real bearing-fault signal are used to evaluate the improved performance of the proposed method through comparison with several kinds of denoising algorithms, and its computational efficiency is demonstrated by an illustrative runtime example. The results show that the proposed method outperforms current algorithms in both denoising quality and computational efficiency.
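
    A minimal sketch of this pipeline on a synthetic 1-D vibration signal, assuming scikit-learn; the window length, atom count and sparsity level are illustrative, and MiniBatchDictionaryLearning only approximates the online learner the paper uses for real-time operation:

        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning

        rng = np.random.default_rng(0)
        t = np.linspace(0, 1, 2048)
        clean = np.sin(2 * np.pi * 50 * t) * np.exp(-3 * t)   # toy decaying "fault" tone
        noisy = clean + 0.3 * rng.standard_normal(t.size)

        w, step = 64, 8                                        # sliding windows of the signal
        idx = np.arange(0, noisy.size - w + 1, step)
        P = np.stack([noisy[i:i + w] for i in idx])

        dico = MiniBatchDictionaryLearning(n_components=48, batch_size=128,
                                           transform_algorithm='omp',
                                           transform_n_nonzero_coefs=4, random_state=0)
        code = dico.fit(P).transform(P)    # sparse codes via orthogonal matching pursuit
        P_hat = code @ dico.components_    # windows reconstructed from learned atoms

        # Overlap-add the denoised windows back into a full-length signal.
        den = np.zeros_like(noisy)
        cnt = np.zeros_like(noisy)
        for k, i in enumerate(idx):
            den[i:i + w] += P_hat[k]
            cnt[i:i + w] += 1
        den /= np.maximum(cnt, 1)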

  5. Image-Specific Prior Adaptation for Denoising.

    PubMed

    Lu, Xin; Lin, Zhe; Jin, Hailin; Yang, Jianchao; Wang, James Z

    2015-12-01

    Image priors are essential to many image restoration applications, including denoising, deblurring, and inpainting. Existing methods use either priors from the given image (internal) or priors from a separate collection of images (external). We find through statistical analysis that unifying the internal and external patch priors may yield a better patch prior. We propose a novel prior learning algorithm that combines the strength of both internal and external priors. In particular, we first learn a generic Gaussian mixture model from a collection of training images and then adapt the model to the given image by simultaneously adding additional components and refining the component parameters. We apply this image-specific prior to image denoising. The experimental results show that our approach yields better or competitive denoising results in terms of both the peak signal-to-noise ratio and structural similarity. PMID:26316129
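
    A simplified stand-in for the adaptation idea, assuming scikit-learn: learn a generic patch GMM externally, then refine it on the given image's patches by warm-started EM. The paper's method additionally adds image-specific components, which this sketch omits, and the patch arrays are placeholders:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        external_patches = rng.standard_normal((5000, 25))  # stand-in: 5x5 patches from a corpus
        image_patches = rng.standard_normal((800, 25))      # stand-in: patches from the given image

        generic = GaussianMixture(n_components=10, covariance_type='full',
                                  random_state=0).fit(external_patches)

        # Adapt the generic prior by warm-starting EM from its parameters.
        adapted = GaussianMixture(n_components=10, covariance_type='full',
                                  weights_init=generic.weights_,
                                  means_init=generic.means_,
                                  precisions_init=generic.precisions_,
                                  random_state=0).fit(image_patches)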

  6. Echocardiogram enhancement using supervised manifold denoising.

    PubMed

    Wu, Hui; Huynh, Toan T; Souvenir, Richard

    2015-08-01

    This paper presents data-driven methods for echocardiogram enhancement. Existing denoising algorithms typically rely on a single noise model, and do not generalize to the composite noise sources typically found in real-world echocardiograms. Our methods leverage the low-dimensional intrinsic structure of echocardiogram videos. We assume that echocardiogram images are noisy samples from an underlying manifold parametrized by cardiac motion and denoise images via back-projection onto a learned (non-linear) manifold. Our methods incorporate synchronized side information (e.g., electrocardiography), which is often collected alongside the visual data. We evaluate the proposed methods on a synthetic data set and real-world echocardiograms. Quantitative results show improved performance of our methods over recent image despeckling methods and video denoising methods, and a visual analysis of real-world data shows noticeable image enhancement, even in the challenging case of noise due to dropout artifacts. PMID:26072166

  7. Multiresolution Bilateral Filtering for Image Denoising

    PubMed Central

    Zhang, Ming; Gunturk, Bahadir K.

    2008-01-01

    The bilateral filter is a nonlinear filter that does spatial averaging without smoothing edges; it has been shown to be an effective image denoising technique. An important issue with the application of the bilateral filter is the selection of the filter parameters, which affect the results significantly. There are two main contributions of this paper. The first contribution is an empirical study of optimal bilateral filter parameter selection in image denoising applications. The second contribution is an extension of the bilateral filter: the multiresolution bilateral filter, where bilateral filtering is applied to the approximation (low-frequency) subbands of a signal decomposed using a wavelet filter bank. The multiresolution bilateral filter is combined with wavelet thresholding to form a new image denoising framework, which turns out to be very effective in eliminating noise in real noisy images. Experimental results with both simulated and real data are provided. PMID:19004705
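
    A minimal single-level sketch of this framework, assuming PyWavelets and scikit-image; the parameter values are illustrative, and the approximation band is rescaled to [0, 1] only to suit denoise_bilateral:

        import numpy as np
        import pywt
        from skimage import data, img_as_float
        from skimage.restoration import denoise_bilateral

        img = img_as_float(data.camera())
        noisy = np.clip(img + 0.05 * np.random.default_rng(0).standard_normal(img.shape), 0, 1)

        # Bilateral-filter the approximation band; wavelet-threshold the detail bands.
        cA, (cH, cV, cD) = pywt.dwt2(noisy, 'db4')
        lo, hi = cA.min(), cA.max()
        cA_f = denoise_bilateral((cA - lo) / (hi - lo), sigma_color=0.1, sigma_spatial=3)
        cA_f = cA_f * (hi - lo) + lo                 # undo the [0, 1] rescaling
        sigma = np.median(np.abs(cD)) / 0.6745       # robust noise estimate from diagonal details
        t = sigma * np.sqrt(2 * np.log(noisy.size))  # universal threshold
        cH, cV, cD = (pywt.threshold(c, t, mode='soft') for c in (cH, cV, cD))
        denoised = pywt.idwt2((cA_f, (cH, cV, cD)), 'db4')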

  8. An image denoising application using shearlets

    NASA Astrophysics Data System (ADS)

    Sevindir, Hulya Kodal; Yazici, Cuneyt

    2013-10-01

    Medical imaging is a multidisciplinary field related to computer science, electrical/electronic engineering, physics, mathematics and medicine. There has been a dramatic increase in the variety, availability and resolution of medical imaging devices over the last half century. For proper medical imaging, highly trained technicians and clinicians are needed to correctly extract clinically pertinent information from medical data. To meet this need, artificial systems must be designed to analyze medical data sets either partially or even fully automatically. For this purpose there has been extensive ongoing research into finding optimal representations in image processing and computer vision [1, 18]. Medical images almost always contain artefacts, and it is crucial to remove these artefacts to obtain reliable results. Out of the many methods for denoising images, in this paper two denoising methods, wavelets and shearlets, are applied to mammography images. Comparing the two methods, shearlets give better results for denoising such data.

  9. LC-MS/MS based proteomic analysis and functional inference of hypothetical proteins in Desulfovibrio vulgaris

    SciTech Connect

    Zhang, Weiwen; Culley, David E.; Gritsenko, Marina A.; Moore, Ronald J.; Nie, Lei; Scholten, Johannes C.; Petritis, Konstantinos; Strittmatter, Eric F.; Camp, David G.; Smith, Richard D.; Brockman, Fred J.

    2006-11-03

    In a previous study, the whole-genome gene expression profiles of D. vulgaris in response to oxidative stress and heat shock were determined. The results showed that 24-28% of the responsive genes were hypothetical proteins that have not been experimentally characterized or whose function cannot be deduced by simple sequence comparison. To further explore the protective mechanisms employed by D. vulgaris against oxidative stress and heat shock, this study attempted to infer functions of these hypothetical proteins by phylogenomic profiling along with detailed sequence comparison against various publicly available databases. By this approach we were able to assign possible functions to 25 responsive hypothetical proteins. Among the findings, DVU0725, induced by oxidative stress, may be involved in lipopolysaccharide biosynthesis, implying that alteration of the lipopolysaccharide on the cell surface might serve as a mechanism against oxidative stress in D. vulgaris. In addition, two responsive proteins, DVU0024, encoding a putative transcriptional regulator, and DVU1670, encoding a predicted redox protein, shared co-evolution patterns with rubrerythrin in Archaeoglobus fulgidus and Clostridium perfringens, respectively, implying that they might be part of the stress-response and protective systems in D. vulgaris. The study demonstrated that phylogenomic profiling is a useful tool in the interpretation of experimental genomics data, and also provided further insight into the cellular response to oxidative stress and heat shock in D. vulgaris.

  10. Low-rank separated representation surrogates of high-dimensional stochastic functions: Application in Bayesian inference

    SciTech Connect

    Validi, AbdoulAhad

    2014-03-01

    This study introduces a non-intrusive approach, in the context of low-rank separated representation, to construct a surrogate of high-dimensional stochastic functions, e.g., PDEs/ODEs, in order to decrease the computational cost of Markov chain Monte Carlo simulations in Bayesian inference. The surrogate model is constructed via a regularized alternating least-squares regression with Tikhonov regularization, using a roughening matrix that computes the gradient of the solution, in conjunction with a perturbation-based error indicator to detect optimal model complexities. The model approximates a vector of a continuous solution at discrete values of a physical variable. The required number of random realizations to achieve a successful approximation depends linearly on the function dimensionality. The computational cost of the model construction is quadratic in the number of random inputs, which potentially tackles the curse of dimensionality in high-dimensional stochastic functions. Furthermore, this vector-valued separated-representation-based model, in comparison to the available scalar-valued case, leads to a significant reduction in the cost of approximation, by an order of magnitude equal to the vector size. The performance of the method is studied through its application to three numerical examples including a 41-dimensional elliptic PDE and a 21-dimensional cavity flow.

  11. Parallel object-oriented, denoising system using wavelet multiresolution analysis

    DOEpatents

    Kamath, Chandrika; Baldwin, Chuck H.; Fodor, Imola K.; Tang, Nu A.

    2005-04-12

    The present invention provides a data denoising system utilizing multiple processors and wavelet denoising techniques. Data is read and displayed in different formats. The data is partitioned into regions, and the regions are distributed onto the processors. Communication requirements among the processors are determined according to the wavelet denoising technique and the partitioning of the data. The data is transformed onto different multiresolution levels with the wavelet transform according to the wavelet denoising technique and the communication requirements, the transformed data containing wavelet coefficients. The denoised data is then transformed back into its original format for reading and display.

  12. Doppler ultrasound signal denoising based on wavelet frames.

    PubMed

    Zhang, Y; Wang, Y; Wang, W; Liu, B

    2001-05-01

    A novel approach was proposed to denoise the Doppler ultrasound signal. Using this method, wavelet coefficients of the Doppler signal at multiple scales were first obtained using the discrete wavelet frame analysis. Then, a soft thresholding-based denoising algorithm was employed to deal with these coefficients to get the denoised signal. In the simulation experiments, the SNR improvements and the maximum frequency estimation precision were studied for the denoised signal. From the simulation and clinical studies, it was concluded that the performance of this discrete wavelet frame (DWF) approach is higher than that of the standard (critically sampled) wavelet transform (DWT) for the Doppler ultrasound signal denoising. PMID:11381694
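
    A minimal sketch of the idea on a synthetic signal, using PyWavelets' stationary wavelet transform as the undecimated (frame) analysis; the universal threshold below is a standard choice and not necessarily the authors':

        import numpy as np
        import pywt

        rng = np.random.default_rng(0)
        n = 1024                                   # pywt.swt needs a length divisible by 2**level
        clean = np.sin(2 * np.pi * 5 * np.arange(n) / n)
        noisy = clean + 0.2 * rng.standard_normal(n)

        coeffs = pywt.swt(noisy, 'db4', level=3)   # list of (cA, cD) pairs, coarsest level first
        sigma = np.median(np.abs(coeffs[-1][1])) / 0.6745   # noise scale from the finest details
        t = sigma * np.sqrt(2 * np.log(n))                  # universal threshold
        den = [(cA, pywt.threshold(cD, t, mode='soft')) for cA, cD in coeffs]
        denoised = pywt.iswt(den, 'db4')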

  13. Magnetic resonance image denoising using multiple filters

    NASA Astrophysics Data System (ADS)

    Ai, Danni; Wang, Jinjuan; Miwa, Yuichi

    2013-07-01

    We introduced and compared ten denoising filters, all proposed during the last fifteen years. In particular, the state-of-the-art denoising algorithms NLM and BM3D have attracted much attention, and several extensions have been proposed to improve noise reduction based on these two algorithms. On the other hand, optimal dictionaries, sparse representations and appropriate shapes of the transform's support are also considered for image denoising. The comparison among the various filters is implemented by measuring the SNR of a phantom image and the denoising effectiveness on a clinical image. The computational time is finally evaluated.

  14. A total variation denoising algorithm for hyperspectral data

    NASA Astrophysics Data System (ADS)

    Li, Ting; Chen, Xiao-mei; Xue, Bo; Li, Qian-qian; Ni, Guo-qiang

    2010-11-01

    Since noise can undermine the effectiveness of information extracted from hyperspectral imagery, noise reduction is a prerequisite for many classification-based applications of such imagery. In this paper, an effective three-dimensional total variation (TV) denoising algorithm for hyperspectral imagery is introduced. First, a three-dimensional objective function for the total variation denoising model is derived from the classical two-dimensional TV algorithms. Considering the fact that the noise of hyperspectral imagery shows different characteristics in the spatial and spectral domains, the objective function is further improved by using two terms (a spatial term and a spectral term) with separate regularization parameters that adjust the trade-off between them. Then, the improved objective function is discretized by approximating gradients with local differences, optimized by a quadratic convex function and finally solved by a majorization-minimization based iterative algorithm. The performance of the new algorithm is evaluated on a set of Hyperion images acquired over a desert-dominated area in 2007. Experimental results show that, with properly chosen parameter values, the new approach removes artifacts and restores the spectral absorption peaks more effectively, while giving a similar improvement in signal-to-noise ratio to the minimum noise fraction (MNF) method.
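
    For orientation, a minimal sketch of 3-D TV denoising of a hyperspectral cube, assuming scikit-image; Chambolle's isotropic solver with a single weight stands in for the paper's majorization-minimization solver with separate spatial and spectral regularization parameters:

        import numpy as np
        from skimage.restoration import denoise_tv_chambolle

        rng = np.random.default_rng(0)
        cube = rng.random((64, 64, 32))                     # stand-in cube: rows x cols x bands
        noisy = cube + 0.1 * rng.standard_normal(cube.shape)

        denoised = denoise_tv_chambolle(noisy, weight=0.1)  # nD isotropic total variation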

  15. Analysis the application of several denoising algorithm in the astronomical image denoising

    NASA Astrophysics Data System (ADS)

    Jiang, Chao; Geng, Ze-xun; Bao, Yong-qiang; Wei, Xiao-feng; Pan, Ying-feng

    2014-02-01

    Image denoising is an important preprocessing method and one of the research frontiers in the fields of computer graphics and computer vision. Astronomical target imaging is highly vulnerable to atmospheric turbulence and noise interference. In order to reconstruct a high-quality image of the target, we need to restore the high-frequency signal of the image; but noise also belongs to the high-frequency signal, so noise amplification occurs in the reconstruction process. To avoid this phenomenon, incorporating image denoising into the reconstruction process is a feasible solution. This paper mainly studies the principles of four classic denoising algorithms: TV, BLS-GSM, NLM and BM3D. We use simulated data to analyze the denoising performance of the four algorithms. Experiments demonstrate that all four algorithms can remove the noise, and that the BM3D algorithm not only achieves high denoising quality but also the highest efficiency.

  16. Optimization of wavelet- and curvelet-based denoising algorithms by multivariate SURE and GCV

    NASA Astrophysics Data System (ADS)

    Mortezanejad, R.; Gholami, A.

    2016-06-01

    One of the most crucial challenges in seismic data processing is the reduction of noise in the data or improving the signal-to-noise ratio (SNR). Wavelet- and curvelet-based denoising algorithms have become popular to address random noise attenuation for seismic sections. Wavelet basis, thresholding function, and threshold value are three key factors of such algorithms, having a profound effect on the quality of the denoised section. Therefore, given a signal, it is necessary to optimize the denoising operator over these factors to achieve the best performance. In this paper a general denoising algorithm is developed as a multi-variant (variable) filter which performs in multi-scale transform domains (e.g. wavelet and curvelet). In the wavelet domain this general filter is a function of the type of wavelet, characterized by its smoothness, thresholding rule, and threshold value, while in the curvelet domain it is only a function of thresholding rule and threshold value. Also, two methods, Stein’s unbiased risk estimate (SURE) and generalized cross validation (GCV), evaluated using a Monte Carlo technique, are utilized to optimize the algorithm in both wavelet and curvelet domains for a given seismic signal. The best wavelet function is selected from a family of fractional B-spline wavelets. The optimum thresholding rule is selected from general thresholding functions which contain the most well known thresholding functions, and the threshold value is chosen from a set of possible values. The results obtained from numerical tests show high performance of the proposed method in both wavelet and curvelet domains in comparison to conventional methods when denoising seismic data.
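
    A minimal sketch of the SURE component, for soft thresholding of one coefficient band (Donoho-Johnstone form); the paper additionally searches over wavelet smoothness and thresholding rule, and uses GCV as an alternative criterion, none of which is shown:

        import numpy as np

        def sure_soft(coeffs, t, sigma):
            """Stein's unbiased risk estimate of the MSE of soft thresholding at t."""
            n = coeffs.size
            return (n * sigma ** 2
                    - 2 * sigma ** 2 * np.sum(np.abs(coeffs) <= t)
                    + np.sum(np.minimum(np.abs(coeffs), t) ** 2))

        rng = np.random.default_rng(0)
        w = np.concatenate([rng.standard_normal(900),       # noise-only coefficients
                            5 + rng.standard_normal(100)])  # signal-bearing coefficients
        grid = np.linspace(0, 5, 101)
        t_opt = grid[np.argmin([sure_soft(w, t, sigma=1.0) for t in grid])]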

  17. Inferring muscle functional roles of the ostrich pelvic limb during walking and running using computer optimization.

    PubMed

    Rankin, Jeffery W; Rubenson, Jonas; Hutchinson, John R

    2016-05-01

    Owing to their cursorial background, ostriches (Struthio camelus) walk and run with high metabolic economy, can reach very fast running speeds and quickly execute cutting manoeuvres. These capabilities are believed to be a result of their ability to coordinate muscles to take advantage of specialized passive limb structures. This study aimed to infer the functional roles of ostrich pelvic limb muscles during gait. Existing gait data were combined with a newly developed musculoskeletal model to generate simulations of ostrich walking and running that predict muscle excitations, force and mechanical work. Consistent with previous avian electromyography studies, predicted excitation patterns showed that individual muscles tended to be excited primarily during only stance or swing. Work and force estimates show that ostrich gaits are partially hip-driven with the bi-articular hip-knee muscles driving stance mechanics. Conversely, the knee extensors acted as brakes, absorbing energy. The digital extensors generated large amounts of both negative and positive mechanical work, with increased magnitudes during running, providing further evidence that ostriches make extensive use of tendinous elastic energy storage to improve economy. The simulations also highlight the need to carefully consider non-muscular soft tissues that may play a role in ostrich gait. PMID:27146688

  18. Inferring muscle functional roles of the ostrich pelvic limb during walking and running using computer optimization

    PubMed Central

    Rubenson, Jonas

    2016-01-01

    Owing to their cursorial background, ostriches (Struthio camelus) walk and run with high metabolic economy, can reach very fast running speeds and quickly execute cutting manoeuvres. These capabilities are believed to be a result of their ability to coordinate muscles to take advantage of specialized passive limb structures. This study aimed to infer the functional roles of ostrich pelvic limb muscles during gait. Existing gait data were combined with a newly developed musculoskeletal model to generate simulations of ostrich walking and running that predict muscle excitations, force and mechanical work. Consistent with previous avian electromyography studies, predicted excitation patterns showed that individual muscles tended to be excited primarily during only stance or swing. Work and force estimates show that ostrich gaits are partially hip-driven with the bi-articular hip–knee muscles driving stance mechanics. Conversely, the knee extensors acted as brakes, absorbing energy. The digital extensors generated large amounts of both negative and positive mechanical work, with increased magnitudes during running, providing further evidence that ostriches make extensive use of tendinous elastic energy storage to improve economy. The simulations also highlight the need to carefully consider non-muscular soft tissues that may play a role in ostrich gait. PMID:27146688

  19. Inference of the cold dark matter substructure mass function at z = 0.2 using strong gravitational lenses

    NASA Astrophysics Data System (ADS)

    Vegetti, S.; Koopmans, L. V. E.; Auger, M. W.; Treu, T.; Bolton, A. S.

    2014-08-01

    We present the results of a search for galaxy substructures in a sample of 11 gravitational lens galaxies from the Sloan Lens ACS Survey by Bolton et al. We find no significant detection of mass clumps, except for a luminous satellite in the system SDSS J0956+5110. We use these non-detections, in combination with a previous detection in the system SDSS J0946+1006, to derive constraints on the substructure mass function in massive early-type host galaxies with an average redshift ≈ 0.2 and an average velocity dispersion ⟨σ_eff⟩ ≈ 270 km s^{-1}. We perform a Bayesian inference on the substructure mass function, within a median region of about 32 kpc^2 around the Einstein radius (≈ 4.2 kpc). We infer a mean projected substructure mass fraction f = 0.0076^{+0.0208}_{-0.0052} at the 68 per cent confidence level and a substructure mass function slope α < 2.93 at the 95 per cent confidence level for a uniform prior probability density on α. For a Gaussian prior based on cold dark matter (CDM) simulations, we infer f = 0.0064^{+0.0080}_{-0.0042} and a slope of α = 1.90 ± 0.098 at the 68 per cent confidence level. Since only one substructure was detected in the full sample, we have little information on the mass function slope, which is therefore poorly constrained (i.e. the Bayes factor shows no positive preference for either of the two models). The inferred fraction is consistent with the expectations from CDM simulations and with inference from flux-ratio anomalies at the 68 per cent confidence level.

  20. Efficient bias correction for magnetic resonance image denoising.

    PubMed

    Mukherjee, Partha Sarathi; Qiu, Peihua

    2013-05-30

    Magnetic resonance imaging (MRI) is a popular radiology technique that is used for visualizing detailed internal structure of the body. Observed MRI images are generated by the inverse Fourier transformation from received frequency signals of a magnetic resonance scanner system. Previous research has demonstrated that random noise involved in the observed MRI images can be described adequately by the so-called Rician noise model. Under that model, the observed image intensity at a given pixel is a nonlinear function of the true image intensity and of two independent zero-mean random variables with the same normal distribution. Because of such a complicated noise structure in the observed MRI images, denoised images by conventional denoising methods are usually biased, and the bias could reduce image contrast and negatively affect subsequent image analysis. Therefore, it is important to address the bias issue properly. To this end, several bias-correction procedures have been proposed in the literature. In this paper, we study the Rician noise model and the corresponding bias-correction problem systematically and propose a new and more effective bias-correction formula based on the regression analysis and Monte Carlo simulation. Numerical studies show that our proposed method works well in various applications. PMID:23074149
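
    For orientation, the conventional moment-based correction that bias-correction procedures of this kind refine: for Rician magnitude data, E[M^2] = A^2 + 2*sigma^2, so the bias is removed in the squared domain. A minimal sketch follows; the paper's proposed formula is regression-based and more effective, and is not reproduced here:

        import numpy as np

        def rician_bias_correct(magnitude, sigma):
            # Classical correction: subtract 2*sigma^2 from the squared magnitude,
            # clip at zero, and return to the intensity domain.
            return np.sqrt(np.maximum(magnitude ** 2 - 2 * sigma ** 2, 0.0))

        denoised = np.array([[3.0, 1.5], [0.8, 4.2]])   # stand-in denoised magnitudes
        corrected = rician_bias_correct(denoised, sigma=1.0)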

  1. Ladar range image denoising by a nonlocal probability statistics algorithm

    NASA Astrophysics Data System (ADS)

    Xia, Zhi-Wei; Li, Qi; Xiong, Zhi-Peng; Wang, Qi

    2013-01-01

    According to the characteristics of range images from coherent ladar, and on the basis of nonlocal means (NLM), a nonlocal probability statistics (NLPS) algorithm is proposed in this paper. The difference is that NLM performs denoising using the mean of the conditional probability distribution function (PDF), while NLPS uses the maximum of the marginal PDF. In the algorithm, similar blocks are found by a block-matching operation and form a group. Pixels in the group are analyzed by probability statistics, and the gray value with maximum probability is used as the estimated value of the current pixel. Simulated range images of coherent ladar with different carrier-to-noise ratios and a real range image of coherent ladar with 8 gray levels are denoised by this algorithm, and the results are compared with those of the median filter, multitemplate order mean filter, NLM, median nonlocal mean filter and its incorporation of anatomical side information, and the unsupervised information-theoretic adaptive filter. The range-abnormality noise and Gaussian noise in range images of coherent ladar are effectively suppressed by NLPS.

  2. Dissociable functions of reward inference in the lateral prefrontal cortex and the striatum.

    PubMed

    Tanaka, Shingo; Pan, Xiaochuan; Oguchi, Mineki; Taylor, Jessica E; Sakagami, Masamichi

    2015-01-01

    In a complex and uncertain world, how do we select appropriate behavior? One possibility is that we choose actions that are highly reinforced by their probabilistic consequences (model-free processing). However, we may instead plan actions prior to their actual execution by predicting their consequences (model-based processing). It has been suggested that the brain contains multiple yet distinct systems involved in reward prediction. Several studies have tried to allocate model-free and model-based systems to the striatum and the lateral prefrontal cortex (LPFC), respectively. Although there is much support for this hypothesis, recent research has revealed discrepancies. To understand the nature of the reward prediction systems in the LPFC and the striatum, a series of single-unit recording experiments were conducted. LPFC neurons were found to infer the reward associated with the stimuli even when the monkeys had not yet learned the stimulus-reward (SR) associations directly. Striatal neurons seemed to predict the reward for each stimulus only after directly experiencing the SR contingency. However, the one exception was "Exclusive Or" situations in which striatal neurons could predict the reward without direct experience. Previous single-unit studies in monkeys have reported that neurons in the LPFC encode category information, and represent reward information specific to a group of stimuli. Here, as an extension of these, we review recent evidence that a group of LPFC neurons can predict reward specific to a category of visual stimuli defined by relevant behavioral responses. We suggest that the functional difference in reward prediction between the LPFC and the striatum is that while LPFC neurons can utilize abstract code, striatal neurons can code individual associations between stimuli and reward but cannot utilize abstract code. PMID:26236266

  3. Dissociable functions of reward inference in the lateral prefrontal cortex and the striatum

    PubMed Central

    Tanaka, Shingo; Pan, Xiaochuan; Oguchi, Mineki; Taylor, Jessica E.; Sakagami, Masamichi

    2015-01-01

    In a complex and uncertain world, how do we select appropriate behavior? One possibility is that we choose actions that are highly reinforced by their probabilistic consequences (model-free processing). However, we may instead plan actions prior to their actual execution by predicting their consequences (model-based processing). It has been suggested that the brain contains multiple yet distinct systems involved in reward prediction. Several studies have tried to allocate model-free and model-based systems to the striatum and the lateral prefrontal cortex (LPFC), respectively. Although there is much support for this hypothesis, recent research has revealed discrepancies. To understand the nature of the reward prediction systems in the LPFC and the striatum, a series of single-unit recording experiments were conducted. LPFC neurons were found to infer the reward associated with the stimuli even when the monkeys had not yet learned the stimulus-reward (SR) associations directly. Striatal neurons seemed to predict the reward for each stimulus only after directly experiencing the SR contingency. However, the one exception was “Exclusive Or” situations in which striatal neurons could predict the reward without direct experience. Previous single-unit studies in monkeys have reported that neurons in the LPFC encode category information, and represent reward information specific to a group of stimuli. Here, as an extension of these, we review recent evidence that a group of LPFC neurons can predict reward specific to a category of visual stimuli defined by relevant behavioral responses. We suggest that the functional difference in reward prediction between the LPFC and the striatum is that while LPFC neurons can utilize abstract code, striatal neurons can code individual associations between stimuli and reward but cannot utilize abstract code. PMID:26236266

  4. A New Adaptive Image Denoising Method Based on Neighboring Coefficients

    NASA Astrophysics Data System (ADS)

    Biswas, Mantosh; Om, Hari

    2016-03-01

    Many good techniques have been proposed for image denoising, including NeighShrink, the improved adaptive wavelet denoising method based on neighboring coefficients (IAWDMBNC), the improved wavelet shrinkage technique for image denoising (IWST), the local adaptive Wiener filter (LAWF), wavelet packet thresholding using median and Wiener filters (WPTMWF), and the adaptive image denoising method based on thresholding (AIDMT). These techniques are based on a local statistical description of the neighboring coefficients in a window. However, these methods do not always give good image quality, since their thresholds cannot modify and remove many small wavelet coefficients simultaneously. In this paper, a new image denoising method is proposed that shrinks the noisy coefficients using an adaptive threshold. Our method overcomes these drawbacks and performs better than the NeighShrink, IAWDMBNC, IWST, LAWF, WPTMWF, and AIDMT denoising methods.

  5. Performance comparison of denoising filters for source camera identification

    NASA Astrophysics Data System (ADS)

    Cortiana, A.; Conotter, V.; Boato, G.; De Natale, F. G. B.

    2011-02-01

    Source identification for digital content is one of the main branches of digital image forensics. It relies on the extraction of the photo-response non-uniformity (PRNU) noise as a unique intrinsic fingerprint that efficiently characterizes the digital device which generated the content. Such noise is estimated as the difference between the content and its de-noised version obtained via denoising filter processing. This paper proposes a performance comparison of different denoising filters for source identification purposes. In particular, results achieved with a sophisticated 3D filter are presented and discussed with respect to state-of-the-art denoising filters previously employed in such a context.
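
    A minimal sketch of the PRNU pipeline that such comparisons plug different filters into; the Gaussian filter is only a stand-in for the wavelet and 3-D filters the paper evaluates, and the flat-field images are synthetic placeholders:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def residual(img):
            # Noise residual = image minus its denoised version.
            return img - gaussian_filter(img, sigma=1.0)

        rng = np.random.default_rng(0)
        flatfields = [rng.random((128, 128)) for _ in range(20)]  # images from one camera

        # Standard maximum-likelihood-style fingerprint estimate.
        num = sum(residual(I) * I for I in flatfields)
        den = sum(I * I for I in flatfields)
        fingerprint = num / den

        def ncc(a, b):  # normalized cross-correlation detector
            a, b = a - a.mean(), b - b.mean()
            return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

        query = rng.random((128, 128))
        score = ncc(residual(query), fingerprint * query)  # high score -> same device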

  6. Postprocessing of Compressed Images via Sequential Denoising.

    PubMed

    Dar, Yehuda; Bruckstein, Alfred M; Elad, Michael; Giryes, Raja

    2016-07-01

    In this paper, we propose a novel postprocessing technique for compression-artifact reduction. Our approach is based on posing this task as an inverse problem, with a regularization that leverages existing state-of-the-art image denoising algorithms. We rely on the recently proposed Plug-and-Play Prior framework, suggesting the solution of general inverse problems via the alternating direction method of multipliers, leading to a sequence of Gaussian denoising steps. A key feature in our scheme is a linearization of the compression-decompression process, so as to get a formulation that can be optimized. In addition, we supply a thorough analysis of this linear approximation for several basic compression procedures. The proposed method is suitable for diverse compression techniques that rely on transform coding. In particular, we demonstrate impressive gains in image quality for several leading compression methods: JPEG, JPEG2000, and HEVC. PMID:27214878

  7. Postprocessing of Compressed Images via Sequential Denoising

    NASA Astrophysics Data System (ADS)

    Dar, Yehuda; Bruckstein, Alfred M.; Elad, Michael; Giryes, Raja

    2016-07-01

    In this work we propose a novel postprocessing technique for compression-artifact reduction. Our approach is based on posing this task as an inverse problem, with a regularization that leverages existing state-of-the-art image denoising algorithms. We rely on the recently proposed Plug-and-Play Prior framework, suggesting the solution of general inverse problems via the Alternating Direction Method of Multipliers (ADMM), leading to a sequence of Gaussian denoising steps. A key feature in our scheme is a linearization of the compression-decompression process, so as to get a formulation that can be optimized. In addition, we supply a thorough analysis of this linear approximation for several basic compression procedures. The proposed method is suitable for diverse compression techniques that rely on transform coding. Specifically, we demonstrate impressive gains in image quality for several leading compression methods: JPEG, JPEG2000, and HEVC.
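
    A minimal sketch of the Plug-and-Play ADMM loop the two records above describe, with two simplifications that are not the paper's: the forward model is the identity (the paper linearizes the compression-decompression operator), and a Gaussian filter stands in for a state-of-the-art Gaussian denoiser:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def plug_and_play_admm(y, denoise, rho=1.0, iters=30):
            x, z, u = y.copy(), y.copy(), np.zeros_like(y)
            for _ in range(iters):
                # x-update: closed-form minimizer of 0.5*||x - y||^2 + (rho/2)*||x - z + u||^2
                x = (y + rho * (z - u)) / (1.0 + rho)
                z = denoise(x + u)          # z-update: the plugged-in denoising step
                u = u + x - z               # dual update
            return x

        rng = np.random.default_rng(0)
        y = rng.random((64, 64))            # stand-in degraded image
        restored = plug_and_play_admm(y, lambda v: gaussian_filter(v, sigma=1.0))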

  8. Adaptive Image Denoising by Mixture Adaptation.

    PubMed

    Luo, Enming; Chan, Stanley H; Nguyen, Truong Q

    2016-10-01

    We propose an adaptive learning procedure to learn patch-based image priors for image denoising. The new algorithm, called the expectation-maximization (EM) adaptation, takes a generic prior learned from a generic external database and adapts it to the noisy image to generate a specific prior. Different from existing methods that combine internal and external statistics in ad hoc ways, the proposed algorithm is rigorously derived from a Bayesian hyper-prior perspective. There are two contributions of this paper. First, we provide full derivation of the EM adaptation algorithm and demonstrate methods to improve the computational complexity. Second, in the absence of the latent clean image, we show how EM adaptation can be modified based on pre-filtering. The experimental results show that the proposed adaptation algorithm yields consistently better denoising results than the one without adaptation and is superior to several state-of-the-art algorithms. PMID:27416593

  9. Simultaneous denoising and compression of multispectral images

    NASA Astrophysics Data System (ADS)

    Hagag, Ahmed; Amin, Mohamed; Abd El-Samie, Fathi E.

    2013-01-01

    A new technique for denoising and compression of multispectral satellite images to remove the effect of noise on the compression process is presented. One type of multispectral images has been considered: Landsat Enhanced Thematic Mapper Plus. The discrete wavelet transform (DWT), the dual-tree DWT, and a simple Huffman coder are used in the compression process. Simulation results show that the proposed technique is more effective than other traditional compression-only techniques.

  10. Infrared image denoising by nonlocal means filtering

    NASA Astrophysics Data System (ADS)

    Dee-Noor, Barak; Stern, Adrian; Yitzhaky, Yitzhak; Kopeika, Natan

    2012-05-01

    The recently introduced non-local means (NLM) image denoising technique broke the traditional paradigm according to which image pixels are processed only by their surroundings. The non-local means technique has been demonstrated to outperform state-of-the-art denoising techniques when applied to visible-light images. The technique is even more powerful when applied to low-contrast images, which makes it well suited to denoising infrared (IR) images. In this work we investigate the performance of NLM applied to infrared images. We also present a new technique designed to speed up the NLM filtering process. The main drawback of NLM is the large computational time required by the search for similar patches, and several techniques have been developed in recent years to reduce this computational burden. Here we present a new technique designed to reduce the computational cost while sustaining the filtering quality of NLM. We show that the new technique, which we call Multi-Resolution Search NLM (MRS-NLM), significantly reduces the computational cost of the filtering process, and we present a study of its performance on IR images.
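
    A baseline NLM sketch, assuming scikit-image; the MRS-NLM speed-up (restricting the patch search through a multi-resolution scheme) is the paper's contribution and is not reproduced here, and the test image merely stands in for an IR frame:

        import numpy as np
        from skimage import data, img_as_float
        from skimage.restoration import denoise_nl_means, estimate_sigma

        img = img_as_float(data.camera())
        noisy = img + 0.08 * np.random.default_rng(0).standard_normal(img.shape)

        sigma = estimate_sigma(noisy)                      # wavelet-based noise estimate
        denoised = denoise_nl_means(noisy, patch_size=7, patch_distance=11,
                                    h=0.8 * sigma, sigma=sigma, fast_mode=True)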

  11. A phylogeny-based benchmarking test for orthology inference reveals the limitations of function-based validation.

    PubMed

    Trachana, Kalliopi; Forslund, Kristoffer; Larsson, Tomas; Powell, Sean; Doerks, Tobias; von Mering, Christian; Bork, Peer

    2014-01-01

    Accurate orthology prediction is crucial for many applications in the post-genomic era. The lack of broadly accepted benchmark tests precludes a comprehensive analysis of orthology inference. So far, functional annotation between orthologs serves as a performance proxy. However, this violates the fundamental principle of orthology as an evolutionary definition, while it is often not applicable due to limited experimental evidence for most species. Therefore, we constructed high quality "gold standard" orthologous groups that can serve as a benchmark set for orthology inference in bacterial species. Herein, we used this dataset to demonstrate 1) why a manually curated, phylogeny-based dataset is more appropriate for benchmarking orthology than other popular practices and 2) how it guides database design and parameterization through careful error quantification. More specifically, we illustrate how function-based tests often fail to identify false assignments, misjudging the true performance of orthology inference methods. We also examined how our dataset can instruct the selection of a "core" species repertoire to improve detection accuracy. We conclude that including more genomes at the proper evolutionary distances can influence the overall quality of orthology detection. The curated gene families, called Reference Orthologous Groups, are publicly available at http://eggnog.embl.de/orthobench2. PMID:25369365

  12. A Phylogeny-Based Benchmarking Test for Orthology Inference Reveals the Limitations of Function-Based Validation

    PubMed Central

    Larsson, Tomas; Powell, Sean; Doerks, Tobias; von Mering, Christian

    2014-01-01

    Accurate orthology prediction is crucial for many applications in the post-genomic era. The lack of broadly accepted benchmark tests precludes a comprehensive analysis of orthology inference. So far, functional annotation between orthologs serves as a performance proxy. However, this violates the fundamental principle of orthology as an evolutionary definition, while it is often not applicable due to limited experimental evidence for most species. Therefore, we constructed high quality "gold standard" orthologous groups that can serve as a benchmark set for orthology inference in bacterial species. Herein, we used this dataset to demonstrate 1) why a manually curated, phylogeny-based dataset is more appropriate for benchmarking orthology than other popular practices and 2) how it guides database design and parameterization through careful error quantification. More specifically, we illustrate how function-based tests often fail to identify false assignments, misjudging the true performance of orthology inference methods. We also examined how our dataset can instruct the selection of a “core” species repertoire to improve detection accuracy. We conclude that including more genomes at the proper evolutionary distances can influence the overall quality of orthology detection. The curated gene families, called Reference Orthologous Groups, are publicly available at http://eggnog.embl.de/orthobench2. PMID:25369365

  13. Inference of S-system models of genetic networks by solving one-dimensional function optimization problems.

    PubMed

    Kimura, S; Araki, D; Matsumura, K; Okada-Hatakeyama, M

    2012-02-01

    Voit and Almeida have proposed the decoupling approach as a method for inferring the S-system models of genetic networks. The decoupling approach defines the inference of a genetic network as a problem requiring the solutions of sets of algebraic equations. The computation can be accomplished in a very short time, as the approach estimates S-system parameters without solving any of the differential equations. Yet the defined algebraic equations are non-linear, which sometimes prevents us from finding reasonable S-system parameters. In this study, we propose a new technique to overcome this drawback of the decoupling approach. This technique transforms the problem of solving each set of algebraic equations into a one-dimensional function optimization problem. The computation can still be accomplished in a relatively short time, as the problem is transformed by solving a linear programming problem. We confirm the effectiveness of the proposed approach through numerical experiments. PMID:22155075
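
    For reference, the S-system form whose parameters the decoupling approach estimates (standard notation: rate constants \alpha_i, \beta_i and kinetic orders g_{ij}, h_{ij}):

        \frac{dX_i}{dt} = \alpha_i \prod_{j=1}^{n} X_j^{g_{ij}} - \beta_i \prod_{j=1}^{n} X_j^{h_{ij}}, \qquad i = 1, \dots, n

    Under the decoupling approach, each dX_i/dt on the left is replaced by a slope estimated from the measured time series, so every i yields a set of algebraic rather than differential equations in (\alpha_i, g_{ij}, \beta_i, h_{ij}).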

  14. Experimental wavelet based denoising for indoor infrared wireless communications.

    PubMed

    Rajbhandari, Sujan; Ghassemlooy, Zabih; Angelova, Maia

    2013-06-01

    This paper reports the experimental wavelet denoising techniques carried out for the first time for a number of modulation schemes for indoor optical wireless communications in the presence of fluorescent light interference. The experimental results are verified using computer simulations, clearly illustrating the advantage of the wavelet denoising technique in comparison to the high pass filtering for all baseband modulation schemes. PMID:23736631

  15. Denoising and deblurring of Fourier transform infrared spectroscopic imaging data

    NASA Astrophysics Data System (ADS)

    Nguyen, Tan H.; Reddy, Rohith K.; Walsh, Michael J.; Schulmerich, Matthew; Popescu, Gabriel; Do, Minh N.; Bhargava, Rohit

    2012-03-01

    Fourier transform infrared (FT-IR) spectroscopic imaging is a powerful tool to obtain chemical information from images of heterogeneous, chemically diverse samples. Significant advances in instrumentation and data processing in the recent past have led to improved instrument design and relatively widespread use of FT-IR imaging, in a variety of systems ranging from biomedical tissue to polymer composites. Various techniques for improving signal-to-noise ratio (SNR), data collection time and spatial resolution have been proposed previously. In this paper we present an integrated framework that addresses all these factors comprehensively. We utilize the low-rank nature of the data and model the instrument point spread function to denoise the data, and then simultaneously deblur and estimate unknown information from images, using a Bayesian variational approach. We show that more spatial detail and improved image quality can be obtained using the proposed framework. The proposed technique is validated through experiments on a standard USAF target and on prostate tissue specimens.
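
    A minimal sketch of the low-rank denoising step, in NumPy; the spectral dimension and retained rank are illustrative, and the paper's Bayesian deblurring with a modeled point spread function is a separate step not shown:

        import numpy as np

        rng = np.random.default_rng(0)
        cube = rng.random((64, 64, 200))          # stand-in FT-IR image: rows x cols x wavenumbers
        X = cube.reshape(-1, cube.shape[-1])      # unfold to pixels x spectral channels

        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        k = 10                                    # rank chosen from the singular-value decay
        X_denoised = (U[:, :k] * s[:k]) @ Vt[:k]  # truncated-SVD (low-rank) reconstruction
        cube_denoised = X_denoised.reshape(cube.shape)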

  16. Dual-domain denoising in three dimensional magnetic resonance imaging

    PubMed Central

    Peng, Jing; Zhou, Jiliu; Wu, Xi

    2016-01-01

    Denoising is a crucial preprocessing procedure for three dimensional magnetic resonance imaging (3D MRI). Existing denoising methods are predominantly implemented in a single domain, ignoring information in other domains. However, denoising methods are becoming increasingly complex, making analysis and implementation challenging. The present study aimed to develop a dual-domain image denoising (DDID) algorithm for 3D MRI that encapsulates information from the spatial and transform domains. In the present study, the DDID method was used to distinguish signal from noise in the spatial and frequency domains, after which robust accurate noise estimation was introduced for iterative filtering, which is simple and beneficial for computation. In addition, the proposed method was compared quantitatively and qualitatively with existing methods for synthetic and in vivo MRI datasets. The results of the present study suggested that the novel DDID algorithm performed well and provided competitive results, as compared with existing MRI denoising filters. PMID:27446257

  17. A connection between score matching and denoising autoencoders.

    PubMed

    Vincent, Pascal

    2011-07-01

    Denoising autoencoders have been previously shown to be competitive alternatives to restricted Boltzmann machines for unsupervised pretraining of each layer of a deep architecture. We show that a simple denoising autoencoder training criterion is equivalent to matching the score (with respect to the data) of a specific energy-based model to that of a nonparametric Parzen density estimator of the data. This yields several useful insights. It defines a proper probabilistic model for the denoising autoencoder technique, which makes it in principle possible to sample from them or rank examples by their energy. It suggests a different way to apply score matching that is related to learning to denoise and does not require computing second derivatives. It justifies the use of tied weights between the encoder and decoder and suggests ways to extend the success of denoising autoencoders to a larger family of energy-based models. PMID:21492012
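
    In outline (following the usual presentation of this result), corrupt x to \tilde{x} with Gaussian noise of variance \sigma^2 and train a model score \psi(\tilde{x}; \theta) to match the score of the corruption kernel:

        J(\theta) = \mathbb{E}_{x, \tilde{x}} \left[ \left\| \psi(\tilde{x}; \theta) - \frac{\partial \log q_\sigma(\tilde{x} \mid x)}{\partial \tilde{x}} \right\|^2 \right],
        \qquad \frac{\partial \log q_\sigma(\tilde{x} \mid x)}{\partial \tilde{x}} = \frac{x - \tilde{x}}{\sigma^2}

    The paper's result is that minimizing this objective coincides (up to constants) with score matching against the Parzen density estimator of the data and with the training criterion of a denoising autoencoder with tied weights.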

  18. Combining interior and exterior characteristics for remote sensing image denoising

    NASA Astrophysics Data System (ADS)

    Peng, Ni; Sun, Shujin; Wang, Runsheng; Zhong, Ping

    2016-04-01

    Remote sensing image denoising faces many challenges since a remote sensing image usually covers a wide area and thus contains complex contents. Using the patch-based statistical characteristics is a flexible method to improve the denoising performance. There are usually two kinds of statistical characteristics available: interior and exterior characteristics. Different statistical characteristics have their own strengths to restore specific image contents. Combining different statistical characteristics to use their strengths together may have the potential to improve denoising results. This work proposes a method combining statistical characteristics to adaptively select statistical characteristics for different image contents. The proposed approach is implemented through a new characteristics selection criterion learned over training data. Moreover, with the proposed combination method, this work develops a denoising algorithm for remote sensing images. Experimental results show that our method can make full use of the advantages of interior and exterior characteristics for different image contents and thus improve the denoising performance.

  19. Denoising portal images by means of wavelet techniques

    NASA Astrophysics Data System (ADS)

    Gonzalez Lopez, Antonio Francisco

    Portal images are used in radiotherapy for the verification of patient positioning. The distinguishing feature of this image type lies in its formation process: the same beam used for patient treatment is used for image formation. The high energy of the photons used in radiotherapy strongly limits the quality of portal images: low contrast between tissues, low spatial resolution and low signal-to-noise ratio. This Thesis studies the enhancement of these images, in particular the denoising of portal images. The statistical properties of portal images and noise are studied: power spectra, statistical dependencies between image and noise, and marginal, joint and conditional distributions in the wavelet domain. Later, various denoising methods are applied to noisy portal images. Methods operating in the wavelet domain are the basis of this Thesis. In addition, the Wiener filter and the non-local means filter (NLM), operating in the image domain, are used as a reference. Other topics studied in this Thesis are spatial resolution, wavelet processing and image processing in dosimetry in radiotherapy. In this regard, the spatial resolution of portal imaging systems is studied; a new method for determining the spatial resolution of imaging equipment in digital radiology is presented; the calculation of the power spectrum in the wavelet domain is studied; reducing uncertainty in film dosimetry is investigated; a method for the dosimetry of small radiation fields with radiochromic film is presented; the optimal signal resolution is determined, as a function of the noise level and the quantization step, in the digitization process of films; and the useful optical density range is set, as a function of the required uncertainty level, for a densitometric system. Marginal distributions of portal images are similar to those of natural images. This also applies to the statistical relationships between wavelet coefficients, intra-band and inter-band. These facts result in a better

  20. OPTICAL COHERENCE TOMOGRAPHY HEART TUBE IMAGE DENOISING BASED ON CONTOURLET TRANSFORM.

    PubMed

    Guo, Qing; Sun, Shuifa; Dong, Fangmin; Gao, Bruce Z; Wang, Rui

    2012-01-01

    Optical coherence tomography (OCT) is gradually becoming a very important imaging technology in the biomedical field for its noninvasive, nondestructive and real-time properties. However, the interpretation and application of OCT images are limited by ubiquitous noise. In this paper, a denoising algorithm based on the contourlet transform for OCT heart tube images is proposed. A bivariate function is constructed to model the joint probability density function (pdf) of a coefficient and its cousin in the contourlet domain. A bivariate shrinkage function is deduced to denoise the image by maximum a posteriori (MAP) estimation. Three metrics, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and equivalent number of looks (ENL), are used to evaluate the denoised image using the proposed algorithm. The results show that the signal-to-noise ratio is improved while the edges of objects are preserved by the proposed algorithm. Systematic comparisons with other conventional algorithms, such as the mean filter, median filter, RKT filter and Lee filter, as well as the bivariate shrinkage function for the wavelet-based algorithm, are conducted. The advantage of the proposed algorithm over these methods is illustrated. PMID:25364626
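
    For context, the classical wavelet-domain bivariate MAP shrinkage rule of this family (the Sendur-Selesnick form), which the paper adapts so that w_2 is the cousin rather than the parent of w_1; here \sigma_n^2 is the noise variance, \sigma the marginal signal deviation, and (x)_+ = max(x, 0):

        \hat{w}_1 = \frac{\left( \sqrt{w_1^2 + w_2^2} - \dfrac{\sqrt{3}\,\sigma_n^2}{\sigma} \right)_+}{\sqrt{w_1^2 + w_2^2}} \; w_1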

  1. Musculoskeletal ultrasound image denoising using Daubechies wavelets

    NASA Astrophysics Data System (ADS)

    Gupta, Rishu; Elamvazuthi, I.; Vasant, P.

    2012-11-01

    Among the various existing medical imaging modalities, ultrasound promises a bright future because of its ready availability and its use of non-ionizing radiation. In this paper we denoise ultrasound images using Daubechies wavelets and analyze the results with peak signal-to-noise ratio (PSNR) and coefficient of correlation as performance measurement indices. Daubechies wavelets of order 1 to 6 are applied to four different ultrasound bone-fracture images at decomposition levels 1 to 3. The resultant images are shown for visual inspection, and the PSNR and coefficient-of-correlation values are plotted for quantitative analysis.
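
    The two performance indices named here are simple to compute; a minimal sketch, assuming 8-bit grayscale images held as numpy arrays:

```python
import numpy as np

def psnr(reference, denoised, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((reference.astype(float) - denoised.astype(float)) ** 2)
    return 10.0 * np.log10(peak**2 / mse)

def correlation_coefficient(reference, denoised):
    """Pearson correlation between the two images, flattened."""
    return np.corrcoef(reference.ravel(), denoised.ravel())[0, 1]
```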

  2. A novel partial volume effects correction technique integrating deconvolution associated with denoising within an iterative PET image reconstruction

    SciTech Connect

    Merlin, Thibaut; Visvikis, Dimitris; Fernandez, Philippe; Lamare, Frederic

    2015-02-15

    Purpose: Partial volume effect (PVE) plays an important role in both qualitative and quantitative PET image accuracy, especially for small structures. A previously proposed voxelwise PVE correction method applied on PET reconstructed images involves the use of Lucy–Richardson deconvolution incorporating wavelet-based denoising to limit the associated propagation of noise. The aim of this study is to incorporate the deconvolution, coupled with the denoising step, directly inside the iterative reconstruction process to further improve PVE correction. Methods: The list-mode ordered subset expectation maximization (OSEM) algorithm has been modified accordingly, with the Lucy–Richardson deconvolution algorithm applied to the current estimate of the image at each reconstruction iteration. Acquisitions of the NEMA NU2-2001 IQ phantom were performed on a GE DRX PET/CT system to study the impact of incorporating the deconvolution inside the reconstruction [with and without the point spread function (PSF) model] in comparison to its application postreconstruction and to standard iterative reconstruction incorporating the PSF model. The impact of the denoising step was also evaluated. Images were semiquantitatively assessed by studying the trade-off between the intensity recovery and the noise level in the background, estimated as relative standard deviation. Qualitative assessments of the developed methods were additionally performed on clinical cases. Results: Incorporating the deconvolution without denoising within the reconstruction achieved superior intensity recovery in comparison to both standard OSEM reconstruction integrating a PSF model and application of the deconvolution algorithm in a postreconstruction process. The addition of the denoising step limited the SNR degradation while preserving the intensity recovery. Conclusions: This study demonstrates the feasibility of incorporating the Lucy–Richardson deconvolution associated with a
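
    The deconvolution embedded in the reconstruction is Lucy–Richardson; a plain image-domain sketch of that multiplicative update is below. The paper applies it to the current image estimate inside each OSEM iteration, with a wavelet denoising step that is not reproduced here; the PSF is assumed normalized.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=10):
    """Classical Richardson-Lucy deconvolution (multiplicative EM update)."""
    estimate = np.full(image.shape, image.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```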

  3. Minimum entropy approach to denoising time-frequency distributions

    NASA Astrophysics Data System (ADS)

    Aviyente, Selin; Williams, William J.

    2001-11-01

    Signals used in time-frequency analysis are usually corrupted by noise. Therefore, denoising the time-frequency representation is a necessity for producing readable time-frequency images. Denoising is defined as the operation of smoothing a noisy signal or image to produce a noise-free representation. Linear smoothing of time-frequency distributions (TFDs) suppresses noise at the expense of considerable smearing of the signal components. For this reason, nonlinear denoising has been preferred. A common example of a nonlinear denoising method is wavelet thresholding. In this paper, we introduce an entropy-based approach to denoising time-frequency distributions. This new approach uses the spectrogram decomposition of time-frequency kernels proposed by Cunningham and Williams. In order to denoise the time-frequency distribution, we combine those spectrograms with the smallest entropy values, thus ensuring that each spectrogram is well concentrated on the time-frequency plane and contains as little noise as possible. Renyi entropy is used as the measure to quantify the complexity of each spectrogram. The threshold for the number of spectrograms to combine is chosen adaptively based on the tradeoff between entropy and variance. The denoised time-frequency distributions for several signals are shown to demonstrate the effectiveness of the method. The improvement in performance is quantitatively evaluated.
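
    A minimal sketch of the selection step: rank candidate spectrograms by Renyi entropy and sum the most concentrated ones. The adaptive entropy-variance rule for choosing how many to keep is not reproduced; k is a hypothetical fixed count here.

```python
import numpy as np

def renyi_entropy(tfd, alpha=3):
    """Renyi entropy (order alpha) of a non-negative time-frequency image."""
    p = tfd / tfd.sum()
    return np.log2((p ** alpha).sum()) / (1 - alpha)

def min_entropy_combine(spectrograms, k):
    """Combine the k spectrograms with the smallest Renyi entropy."""
    order = np.argsort([renyi_entropy(s) for s in spectrograms])
    return sum(spectrograms[i] for i in order[:k])
```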

  4. Denoising-enhancing images on elastic manifolds.

    PubMed

    Ratner, Vadim; Zeevi, Yehoshua Y

    2011-08-01

    The conflicting demands for simultaneous low-pass and high-pass processing, required in image denoising and enhancement, still present an outstanding challenge, although a great deal of progress has been made by means of adaptive diffusion-type algorithms. To further advance such processing methods and algorithms, we introduce a family of second-order (in time) partial differential equations. These equations describe the motion of a thin elastic sheet in a damping environment. They are also derived by a variational approach in the context of image processing. The new operator enables better edge preservation in denoising applications by offering an adaptive lowpass filter, which preserves high-frequency components in the pass-band better than the adaptive diffusion filter, while offering slower error propagation across edges. We explore the action of this powerful operator in the context of image processing and exploit for this purpose the wealth of knowledge accumulated in physics and mathematics about the action and behavior of this operator. The resulting methods are further generalized for color and/or texture image processing, by embedding images in multidimensional manifolds. A specific application of the proposed new approach to superresolution is outlined. PMID:21342847
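
    To make the "elastic sheet in a damping environment" idea concrete, here is a toy explicit scheme for a linear damped wave (telegraph-type) equation u_tt + c·u_t = Δu; the paper's operator is adaptive and edge-preserving, which this constant-coefficient sketch deliberately omits.

```python
import numpy as np

def laplacian(u):
    """5-point Laplacian with periodic wrap (for brevity)."""
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
            np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)

def damped_wave_denoise(img, steps=200, dt=0.1, damping=1.0):
    """Second-order-in-time smoothing: u_tt + c u_t = lap(u)."""
    u_prev = img.astype(float).copy()
    u = img.astype(float).copy()
    for _ in range(steps):
        u_next = (2.0 * u - u_prev
                  + dt**2 * laplacian(u)
                  - dt * damping * (u - u_prev))
        u_prev, u = u, u_next
    return u
```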

  5. Optimal wavelet denoising for smart biomonitor systems

    NASA Astrophysics Data System (ADS)

    Messer, Sheila R.; Agzarian, John; Abbott, Derek

    2001-03-01

    Future smart-systems promise many benefits for biomedical diagnostics. The ideal is for simple portable systems that display and interpret information from smart integrated probes or MEMS-based devices. In this paper, we will discuss a step towards this vision with a heart bio-monitor case study. An electronic stethoscope is used to record heart sounds and the problem of extracting noise from the signal is addressed via the use of wavelets and averaging. In our example of heartbeat analysis, phonocardiograms (PCGs) have many advantages in that they may be replayed and analysed for spectral and frequency information. Many sources of noise may pollute a PCG including foetal breath sounds if the subject is pregnant, lung and breath sounds, environmental noise and noise from contact between the recording device and the skin. Wavelets can be employed to denoise the PCG. The signal is decomposed by a discrete wavelet transform. Due to the efficient decomposition of heart signals, their wavelet coefficients tend to be much larger than those due to noise. Thus, coefficients below a certain level are regarded as noise and are thresholded out. The signal can then be reconstructed without significant loss of information in the signal. The questions that this study attempts to answer are which wavelet families, levels of decomposition, and thresholding techniques best remove the noise in a PCG. The use of averaging in combination with wavelet denoising is also addressed. Possible applications of the Hilbert Transform to heart sound analysis are discussed.

  6. Denoising solar radiation data using coiflet wavelets

    SciTech Connect

    Karim, Samsul Ariffin Abdul; Janier, Josefina B.; Muthuvalu, Mohana Sundaram; Hasan, Mohammad Khatim; Sulaiman, Jumat; Ismail, Mohd Tahir

    2014-10-24

    Signal denoising and smoothing play an important role in processing signals obtained from experiments or from data collection through observations. Collected data are usually a mixture of the true signal and error or noise. This noise might come from the apparatus used to measure or collect the data, or from human error in handling the data. Normally, before the data are used for further processing, the unwanted noise needs to be filtered out. One efficient method for filtering the data is the wavelet transform. Because received solar radiation data fluctuate over time, they contain unwanted oscillations, namely noise, which must be filtered out before the data are used to develop a mathematical model. In order to apply denoising using the wavelet transform (WT), thresholding values need to be calculated. In this paper a new thresholding approach is proposed. The coiflet2 wavelet with variation diminishing 4 is utilized for our purpose. The numerical results show clearly that the new thresholding approach gives better results compared with the existing approach, namely the global thresholding value.

  7. Fault Detection of a Roller-Bearing System through the EMD of a Wavelet Denoised Signal

    PubMed Central

    Ahn, Jong-Hyo; Kwak, Dae-Ho; Koh, Bong-Hwan

    2014-01-01

    This paper investigates fault detection of a roller bearing system using a wavelet denoising scheme and proper orthogonal value (POV) of an intrinsic mode function (IMF) covariance matrix. The IMF of the bearing vibration signal is obtained through empirical mode decomposition (EMD). The signal screening process in the wavelet domain eliminates noise-corrupted portions that may lead to inaccurate prognosis of bearing conditions. We segmented the denoised bearing signal into several intervals, and decomposed each of them into IMFs. The first IMF of each segment is collected to become a covariance matrix for calculating the POV. We show that covariance matrices from healthy and damaged bearings exhibit different POV profiles, which can be a damage-sensitive feature. We also illustrate the conventional approach of feature extraction, of observing the kurtosis value of the measured signal, to compare the functionality of the proposed technique. The study demonstrates the feasibility of wavelet-based de-noising, and shows through laboratory experiments that tracking the proper orthogonal values of the covariance matrix of the IMF can be an effective and reliable measure for monitoring bearing fault. PMID:25196008
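
    A compressed sketch of the feature pipeline, assuming the PyEMD package (EMD-signal) for empirical mode decomposition; the wavelet pre-denoising stage is omitted, and the proper orthogonal values are obtained as eigenvalues of the covariance of the segment-wise first IMFs.

```python
import numpy as np
from PyEMD import EMD  # assumption: the EMD-signal (PyEMD) package is installed

def pov_feature(signal, n_segments=8):
    """Proper orthogonal values of the covariance of segment-wise first IMFs."""
    signal = np.asarray(signal, dtype=float)
    seg_len = len(signal) // n_segments
    segments = signal[: seg_len * n_segments].reshape(n_segments, seg_len)
    first_imfs = np.array([EMD().emd(s)[0] for s in segments])
    cov = np.cov(first_imfs)                 # (n_segments x n_segments)
    return np.linalg.eigvalsh(cov)[::-1]     # POVs, largest first
```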

  8. Inferring gene function from evolutionary change in signatures of translation efficiency

    PubMed Central

    2014-01-01

    Background The genetic code is redundant, meaning that most amino acids can be encoded by more than one codon. Highly expressed genes tend to use optimal codons to increase the accuracy and speed of translation. Thus, codon usage biases provide a signature of the relative expression levels of genes, which can, uniquely, be quantified across the domains of life. Results Here we describe a general statistical framework to exploit this phenomenon and to systematically associate genes with environments and phenotypic traits through changes in codon adaptation. By inferring evolutionary signatures of translation efficiency in 911 bacterial and archaeal genomes while controlling for confounding effects of phylogeny and inter-correlated phenotypes, we linked 187 gene families to 24 diverse phenotypic traits. A series of experiments in Escherichia coli revealed that 13 of 15, 19 of 23, and 3 of 6 gene families with changes in codon adaptation in aerotolerant, thermophilic, or halophilic microbes, respectively, confer specific resistance to hydrogen peroxide, heat, and high salinity. Further, we demonstrate experimentally that changes in codon optimality alone are sufficient to enhance stress resistance. Finally, we present evidence that multiple genes with altered codon optimality in aerobes confer oxidative stress resistance by controlling the levels of iron and NAD(P)H. Conclusions Taken together, these results provide experimental evidence for a widespread connection between changes in translation efficiency and phenotypic adaptation. As the number of sequenced genomes increases, this novel genomic context method for linking genes to phenotypes based on sequence alone will become increasingly useful. PMID:24580753
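
    Codon-usage signatures of the kind exploited here start from simple per-codon statistics; below is a sketch of relative synonymous codon usage (RSCU), with a deliberately truncated synonym table for illustration. The paper's phylogeny-aware statistical framework goes far beyond this.

```python
from collections import Counter

# Illustrative two-amino-acid synonym table; a real table covers all codons.
SYNONYMS = {"Phe": ["TTT", "TTC"], "Lys": ["AAA", "AAG"]}

def rscu(coding_seq):
    """Relative synonymous codon usage: observed / expected-under-uniform."""
    counts = Counter(coding_seq[i:i + 3] for i in range(0, len(coding_seq) - 2, 3))
    values = {}
    for codons in SYNONYMS.values():
        total = sum(counts[c] for c in codons)
        for c in codons:
            values[c] = len(codons) * counts[c] / total if total else 0.0
    return values
```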

  9. Effect of taxonomic resolution on ecological and palaeoecological inference - a test using testate amoeba water table depth transfer functions

    NASA Astrophysics Data System (ADS)

    Mitchell, Edward A. D.; Lamentowicz, Mariusz; Payne, Richard J.; Mazei, Yuri

    2014-05-01

    Sound taxonomy is a major requirement for quantitative environmental reconstruction using biological data. Transfer function performance should theoretically be expected to decrease with reduced taxonomic resolution. However, for many groups of organisms taxonomy is imperfect and species-level identification is not always possible. We conducted numerical experiments on five testate amoeba water table depth (DWT) transfer function data sets. We sequentially reduced the number of taxonomic groups by successively merging morphologically similar species and removing inconspicuous species. We then assessed how these changes affected model performance and palaeoenvironmental reconstruction using two fossil data sets. Model performance decreased with decreasing taxonomic resolution, but this had only limited effects on patterns of inferred DWT, at least for detecting major dry/wet shifts. Higher-resolution taxonomy may, however, still be useful for detecting more subtle changes, or for establishing the significance of reconstructed shifts.
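
    The abstract does not name the transfer function model; for concreteness, a weighted-averaging (WA) transfer function, a common choice for such data, reduces to the two steps sketched below (deshrinking and cross-validation omitted).

```python
import numpy as np

def wa_optima(abundance, water_table):
    """Species optima as abundance-weighted averages of observed DWT.
    abundance: (sites x species); water_table: (sites,)."""
    return (abundance * water_table[:, None]).sum(axis=0) / abundance.sum(axis=0)

def wa_reconstruct(fossil_abundance, optima):
    """Inferred DWT for fossil samples as abundance-weighted optima."""
    return (fossil_abundance * optima).sum(axis=1) / fossil_abundance.sum(axis=1)
```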

  10. A Genome-Scale Investigation of How Sequence, Function, and Tree-Based Gene Properties Influence Phylogenetic Inference.

    PubMed

    Shen, Xing-Xing; Salichos, Leonidas; Rokas, Antonis

    2016-01-01

    Molecular phylogenetic inference is inherently dependent on choices in both methodology and data. Many insightful studies have shown how choices in methodology, such as the model of sequence evolution or optimality criterion used, can strongly influence inference. In contrast, much less is known about the impact of choices in the properties of the data, typically genes, on phylogenetic inference. We investigated the relationships between 52 gene properties (24 sequence-based, 19 function-based, and 9 tree-based) with each other and with three measures of phylogenetic signal in two assembled data sets of 2,832 yeast and 2,002 mammalian genes. We found that most gene properties, such as evolutionary rate (measured through the percent average of pairwise identity across taxa) and total tree length, were highly correlated with each other. Similarly, several gene properties, such as gene alignment length, Guanine-Cytosine content, and the proportion of tree distance on internal branches divided by relative composition variability (treeness/RCV), were strongly correlated with phylogenetic signal. Analysis of partial correlations between gene properties and phylogenetic signal, in which gene evolutionary rate and alignment length were simultaneously controlled, showed similar patterns of correlations, albeit weaker in strength. Examination of the relative importance of each gene property on phylogenetic signal identified gene alignment length, alongside the number of parsimony-informative sites and variable sites, as the most important predictors. Interestingly, the subsets of gene properties that optimally predicted phylogenetic signal differed considerably across our three phylogenetic measures and two data sets; however, gene alignment length and RCV were consistently included as predictors of all three phylogenetic measures in both yeasts and mammals. These results suggest that a handful of sequence-based gene properties are reliable predictors of phylogenetic signal
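
    The partial-correlation analysis described here, controlling gene properties for evolutionary rate and alignment length, amounts to correlating regression residuals; a minimal sketch:

```python
import numpy as np

def partial_corr(x, y, controls):
    """Correlation of x and y after regressing out the control variables.
    controls: (n_samples x n_controls) array."""
    Z = np.column_stack([np.ones(len(x)), controls])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]
```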

  11. Tectonomagmatic origin of Precambrian rocks of Mexico and Argentina inferred from multi-dimensional discriminant-function based discrimination diagrams

    NASA Astrophysics Data System (ADS)

    Pandarinath, Kailasa

    2014-12-01

    Several new multi-dimensional tectonomagmatic discrimination diagrams employing log-ratio variables of chemical elements and a probability-based procedure have been developed during the last 10 years for basic-ultrabasic, intermediate and acid igneous rocks. Numerous studies have extensively evaluated these newly developed diagrams and indicated their successful application in determining the original tectonic setting of younger and older, as well as seawater- and hydrothermally-altered, volcanic rocks. In the present study, these diagrams were applied to Precambrian rocks of Mexico (southern and north-eastern) and Argentina. The study indicated the original tectonic setting of Precambrian rocks from the Oaxaca Complex of southern Mexico as follows: (1) a dominant rift (within-plate) setting for rocks of 1117-988 Ma age; (2) a dominant rift and less-dominant arc setting for rocks of 1157-1130 Ma age; and (3) a combined tectonic setting of collision and rift for the Etla Granitoid Pluton (917 Ma age). The diagrams indicated the original tectonic setting of the Precambrian rocks from north-eastern Mexico as: (1) a dominant arc tectonic setting for the rocks of 988 Ma age; and (2) an arc and collision setting for the rocks of 1200-1157 Ma age. Similarly, the diagrams indicated the dominant original tectonic setting for the Precambrian rocks from Argentina as: (1) within-plate (continental rift-ocean island) and continental rift (CR) settings for the rocks of 800 Ma and 845 Ma age, respectively; and (2) an arc setting for the rocks of 1174-1169 Ma and of 1212-1188 Ma age. The inferred tectonic settings for these Precambrian rocks are, in general, in accordance with the tectonic settings reported in the literature, though some of the diagrams yield inconsistent inferences. The present study confirms the importance of these newly developed discriminant-function based diagrams in inferring the original tectonic setting of
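
    These diagrams score samples with linear discriminant functions of log-ratio-transformed element concentrations; a generic sketch of that scoring step is below. The coefficients and reference element of any actual diagram come from the published sources and are placeholders here.

```python
import numpy as np

def discriminant_score(concentrations, ref_index, coefs, intercept=0.0):
    """Linear discriminant score from additive log-ratios.
    concentrations: (samples x elements); coefs: one per non-reference element."""
    ratios = np.log(concentrations / concentrations[:, [ref_index]])
    ratios = np.delete(ratios, ref_index, axis=1)
    return intercept + ratios @ np.asarray(coefs)
```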

  12. A Genome-Scale Investigation of How Sequence, Function, and Tree-Based Gene Properties Influence Phylogenetic Inference

    PubMed Central

    Shen, Xing-Xing; Salichos, Leonidas; Rokas, Antonis

    2016-01-01

    Molecular phylogenetic inference is inherently dependent on choices in both methodology and data. Many insightful studies have shown how choices in methodology, such as the model of sequence evolution or optimality criterion used, can strongly influence inference. In contrast, much less is known about the impact of choices in the properties of the data, typically genes, on phylogenetic inference. We investigated the relationships between 52 gene properties (24 sequence-based, 19 function-based, and 9 tree-based) with each other and with three measures of phylogenetic signal in two assembled data sets of 2,832 yeast and 2,002 mammalian genes. We found that most gene properties, such as evolutionary rate (measured through the percent average of pairwise identity across taxa) and total tree length, were highly correlated with each other. Similarly, several gene properties, such as gene alignment length, Guanine-Cytosine content, and the proportion of tree distance on internal branches divided by relative composition variability (treeness/RCV), were strongly correlated with phylogenetic signal. Analysis of partial correlations between gene properties and phylogenetic signal, in which gene evolutionary rate and alignment length were simultaneously controlled, showed similar patterns of correlations, albeit weaker in strength. Examination of the relative importance of each gene property on phylogenetic signal identified gene alignment length, alongside the number of parsimony-informative sites and variable sites, as the most important predictors. Interestingly, the subsets of gene properties that optimally predicted phylogenetic signal differed considerably across our three phylogenetic measures and two data sets; however, gene alignment length and RCV were consistently included as predictors of all three phylogenetic measures in both yeasts and mammals. These results suggest that a handful of sequence-based gene properties are reliable predictors of phylogenetic signal

  13. Bayesian inverse modeling of vadose zone hydraulic properties in a layered soil profile with data-driven likelihood function inference

    NASA Astrophysics Data System (ADS)

    Over, M. W.; Wollschlaeger, U.; Osorio-Murillo, C. A.; Ames, D. P.; Rubin, Y.

    2013-12-01

    Good estimates for water retention and hydraulic conductivity functions are essential for accurate modeling of the nonlinear water dynamics of unsaturated soils. Parametric mathematical models for these functions are utilized in numerical applications of vadose zone dynamics; therefore, characterization of the model parameters to represent in situ soil properties is the goal of many inversion or calibration techniques. A critical statistical challenge of existing approaches is the subjective user definition of a likelihood function or objective function, a step known to introduce bias in the results. We present a methodology for Bayesian inversion where the likelihood function is inferred directly from the simulation data, which eliminates this subjectivity. Additionally, our approach assumes that there is no single parameterization that is appropriate for a soil, but rather that the parameters are randomly distributed. This introduces the familiar concept from groundwater hydrogeology of structural models into vadose zone applications, but without attempting to apply geostatistics, which is extremely difficult in unsaturated problems. We validate our robust statistical approach on field data obtained during a multi-layer, natural boundary condition experiment and compare with previous optimizations using the same data. Our confidence intervals for the water retention and hydraulic conductivity functions, as well as joint posterior probability distributions of the Mualem-van Genuchten parameters, compare well with the previous work. The entire analysis was carried out using the free, open-source MAD# software available at http://mad.codeplex.com/.
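
    The core idea, inferring the likelihood from simulation output instead of positing it, can be caricatured with a kernel density estimate over an ensemble of forward runs; a sketch (the MAD methodology itself is richer than this):

```python
import numpy as np
from scipy.stats import gaussian_kde

def data_driven_likelihood(simulated_outputs, observed):
    """Estimate p(observed | theta) from forward-simulation output.
    simulated_outputs: (n_runs x n_obs) for one parameter set theta."""
    kde = gaussian_kde(simulated_outputs.T)
    return float(kde(observed))
```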

  14. Inferring Functional Interaction and Transition Patterns via Dynamic Bayesian Variable Partition Models

    PubMed Central

    Zhang, Jing; Li, Xiang; Li, Cong; Lian, Zhichao; Huang, Xiu; Zhong, Guocheng; Zhu, Dajiang; Li, Kaiming; Jin, Changfeng; Hu, Xintao; Han, Junwei; Guo, Lei; Hu, Xiaoping; Li, Lingjiang; Liu, Tianming

    2014-01-01

    Multivariate connectivity and functional dynamics have been of wide interest in the neuroimaging field, and a variety of methods have been developed to study functional interactions and dynamics. In contrast, the temporal dynamic transitions of multivariate functional interactions among brain networks, in particular in resting state, have been much less explored. This paper presents a novel dynamic Bayesian variable partition model (DBVPM) that simultaneously considers and models multivariate functional interactions and their dynamics via a unified Bayesian framework. The basic idea is to detect the temporal boundaries of piecewise quasi-stable functional interaction patterns, which are then modeled by representative signature patterns and whose temporal transitions are characterized by finite-state transition machines. Results on both simulated and experimental datasets demonstrated the effectiveness and accuracy of the DBVPM in delineating temporally transitioning functional interaction patterns. The application of DBVPM to a post-traumatic stress disorder (PTSD) dataset revealed substantially different multivariate functional interaction signatures and temporal transitions in the default mode and emotion networks of PTSD patients, in comparison with those in healthy controls. This result demonstrated the utility of DBVPM in elucidating salient features that cannot be revealed by static pair-wise functional connectivity analysis. PMID:24222313

  15. INFERRING FUNCTIONAL NETWORK-BASED SIGNATURES VIA STRUCTURALLY-WEIGHTED LASSO MODEL

    PubMed Central

    Zhu, Dajiang; Shen, Dinggang; Liu, Tianming

    2014-01-01

    Most current research approaches for functional/effective connectivity analysis focus on pair-wise connectivity and cannot deal with network-scale functional interactions. In this paper, we propose a structurally-weighted LASSO (SW-LASSO) regression model to represent the functional interaction among multiple regions of interest (ROIs) based on resting state fMRI (R-fMRI) data. The structural connectivity constraints derived from diffusion tensor imaging (DTI) data guide the selection of the weights, which adjust the penalty levels of the different coefficients corresponding to different ROIs. Using the Default Mode Network (DMN) as a test-bed, our results indicate that the learned SW-LASSO has good capability of differentiating Mild Cognitive Impairment (MCI) subjects from their normal controls and has promising potential to characterize brain function among different conditions, thus serving as a functional network-based signature. PMID:25002915
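
    A weighted LASSO of this general shape can be reduced to an ordinary LASSO by rescaling columns; a sketch using scikit-learn. In the paper the penalty weights derive from DTI structural connectivity; here they are an arbitrary input.

```python
import numpy as np
from sklearn.linear_model import Lasso

def weighted_lasso(X, y, penalty_weights, alpha=0.1):
    """Solve min ||y - Xb||^2 + alpha * sum_j w_j |b_j| via column rescaling."""
    Xs = X / penalty_weights                  # scale each column by 1/w_j
    model = Lasso(alpha=alpha).fit(Xs, y)
    return model.coef_ / penalty_weights      # map back to the original scale
```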

  16. Denoising time-domain induced polarisation data using wavelet techniques

    NASA Astrophysics Data System (ADS)

    Deo, Ravin N.; Cull, James P.

    2016-05-01

    Time-domain induced polarisation (TDIP) methods are routinely used for near-surface evaluations in quasi-urban environments harbouring networks of buried civil infrastructure. A conventional technique for improving signal to noise ratio in such environments is by using analogue or digital low-pass filtering followed by stacking and rectification. However, this induces large distortions in the processed data. In this study, we have conducted the first application of wavelet based denoising techniques for processing raw TDIP data. Our investigation included laboratory and field measurements to better understand the advantages and limitations of this technique. It was found that distortions arising from conventional filtering can be significantly avoided with the use of wavelet based denoising techniques. With recent advances in full-waveform acquisition and analysis, incorporation of wavelet denoising techniques can further enhance surveying capabilities. In this work, we present the rationale for utilising wavelet denoising methods and discuss some important implications, which can positively influence TDIP methods.

  17. A new method for mobile phone image denoising

    NASA Astrophysics Data System (ADS)

    Jin, Lianghai; Jin, Min; Li, Xiang; Xu, Xiangyang

    2015-12-01

    Images captured by mobile phone cameras via pipeline processing usually contain various kinds of noise, especially granular noise with different shapes and sizes in both the luminance and chrominance channels. In the chrominance channels, noise is closely related to image brightness. To improve image quality, this paper presents a new method to denoise such mobile phone images. The proposed scheme converts the noisy RGB image to luminance and chrominance images, which are then denoised by a common filtering framework. The common filtering framework processes a noisy pixel by first excluding the neighborhood pixels that significantly deviate from the (vector) median and then utilizing the remaining neighborhood pixels to restore the current pixel. In the framework, the strength of chrominance image denoising is controlled by image brightness. The experimental results show that the proposed method clearly outperforms some other representative denoising methods in terms of both objective measures and visual evaluation.
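
    A stripped-down version of the common filtering framework described here, median-based exclusion of outlying neighbors followed by averaging, might look like this; the brightness-dependent control of chrominance denoising strength is omitted.

```python
import numpy as np

def robust_mean_filter(channel, radius=1, tol=20.0):
    """Average only those neighbors that stay close to the local median."""
    pad = np.pad(channel.astype(float), radius, mode="reflect")
    out = np.empty(channel.shape, dtype=float)
    for i in range(channel.shape[0]):
        for j in range(channel.shape[1]):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1].ravel()
            med = np.median(win)
            kept = win[np.abs(win - med) <= tol]
            out[i, j] = kept.mean()
    return out
```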

  18. Wavelet-based denoising method for real phonocardiography signal recorded by mobile devices in noisy environment.

    PubMed

    Gradolewski, Dawid; Redlarski, Grzegorz

    2014-09-01

    The main obstacle in the development of intelligent autodiagnosis medical systems based on the analysis of phonocardiography (PCG) signals is noise. The noise can be caused by digestive and respiration sounds, movements, or even signals from the surrounding environment, and it is characterized by a wide frequency and intensity spectrum. This spectrum overlaps the heart tone spectrum, which makes the problem of PCG signal filtering complex. The most common methods for filtering such signals are wavelet denoising algorithms. In previous studies, in order to determine the optimal wavelet denoising parameters, the disturbances were simulated by Gaussian white noise. However, this paper shows that the real noise has a variable character. Therefore, the purpose of this paper is the adaptation of a wavelet denoising algorithm for the filtration of real PCG signal disturbances from signals recorded by mobile devices in a noisy environment. The best results were obtained for the Coif 5 wavelet at the 10th decomposition level with the use of a minimaxi threshold selection algorithm and mln rescaling function. The performance of the algorithm was tested on four pathological heart sounds: early systolic murmur, ejection click, late systolic murmur and pansystolic murmur. PMID:25038586
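
    With PyWavelets, the reported configuration (Coif 5, level 10, minimax threshold, level-dependent rescaling) can be approximated as below; the minimax threshold uses the standard Donoho-Johnstone approximation, and the signal must be long enough to support a 10-level decomposition.

```python
import numpy as np
import pywt

def pcg_denoise(signal, wavelet="coif5", level=10):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    thr_factor = 0.3936 + 0.1829 * np.log2(len(signal))  # minimax rule (n > 32)
    out = [coeffs[0]]
    for detail in coeffs[1:]:
        sigma = np.median(np.abs(detail)) / 0.6745  # per-level noise ('mln'-style)
        out.append(pywt.threshold(detail, sigma * thr_factor, mode="soft"))
    return pywt.waverec(out, wavelet)
```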

  19. Denoising of chaotic signal using independent component analysis and empirical mode decomposition with circulate translating

    NASA Astrophysics Data System (ADS)

    Wen-Bo, Wang; Xiao-Dong, Zhang; Yuchan, Chang; Xiang-Li, Wang; Zhao, Wang; Xi, Chen; Lei, Zheng

    2016-01-01

    In this paper, a new method to reduce noise within chaotic signals based on ICA (independent component analysis) and EMD (empirical mode decomposition) is proposed. The basic idea is to first decompose the chaotic signals and construct multidimensional input vectors, on the basis of EMD and its translation invariance. Secondly, independent component analysis is performed on the input vectors, which amounts to a self-adaptive denoising of the intrinsic mode functions (IMFs) of the chaotic signals. Finally, all IMFs are combined into the new denoised chaotic signal. Experiments were carried out on the Lorenz chaotic signal contaminated by different Gaussian noises and on the monthly observed chaotic sequence of sunspots. The results proved that the method proposed in this paper is effective for denoising chaotic signals. Moreover, it can correct the center point in the phase space effectively, which makes it approach the real track of the chaotic attractor. Project supported by the National Science and Technology, China (Grant No. 2012BAJ15B04), the National Natural Science Foundation of China (Grant Nos. 41071270 and 61473213), the Natural Science Foundation of Hubei Province, China (Grant No. 2015CFB424), the State Key Laboratory Foundation of Satellite Ocean Environment Dynamics, China (Grant No. SOED1405), the Hubei Provincial Key Laboratory Foundation of Metallurgical Industry Process System Science, China (Grant No. Z201303), and the Hubei Key Laboratory Foundation of Transportation Internet of Things, Wuhan University of Technology, China (Grant No. 2015III015-B02).
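
    A simplified sketch of the EMD-plus-ICA idea, assuming the PyEMD and scikit-learn packages; the paper's circulate-translating construction and self-adaptive selection rule are replaced here by a crude energy-based choice of which independent components to keep.

```python
import numpy as np
from PyEMD import EMD               # assumption: EMD-signal (PyEMD) is installed
from sklearn.decomposition import FastICA

def emd_ica_denoise(signal, n_keep=3):
    imfs = EMD().emd(np.asarray(signal, dtype=float))  # rows are IMFs
    ica = FastICA(random_state=0)
    sources = ica.fit_transform(imfs.T)                # one source per column
    power = (sources ** 2).sum(axis=0)
    weak = np.argsort(power)[: max(sources.shape[1] - n_keep, 0)]
    sources[:, weak] = 0.0                             # drop low-energy (noise) sources
    cleaned_imfs = ica.inverse_transform(sources).T
    return cleaned_imfs.sum(axis=0)                    # signal = sum of IMFs
```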

  20. Twofold processing for denoising ultrasound medical images.

    PubMed

    Kishore, P V V; Kumar, K V V; Kumar, D Anil; Prasad, M V D; Goutham, E N D; Rahul, R; Krishna, C B S Vamsi; Sandeep, Y

    2015-01-01

    Medical ultrasound (US) imaging non-invasively pictures the inside of the human body for disease diagnostics. Speckle noise attacks ultrasound images, degrading their visual quality. A twofold processing algorithm is proposed in this work to reduce this multiplicative speckle noise. The first fold uses block-based thresholding, both hard (BHT) and soft (BST), on pixels in the wavelet domain with 8, 16, 32 and 64 non-overlapping block sizes. This first-fold process is a good denoising method for reducing speckle, but it also blurs the object of interest. The second-fold process then restores object boundaries and texture with adaptive wavelet fusion. The restoration of degraded objects in the block-thresholded US image is carried out through wavelet coefficient fusion of the object in the original US image and the block-thresholded US image. Fusion rules and wavelet decomposition levels are made adaptive for each block using gradient histograms with normalized differential mean (NDF) to introduce the highest level of contrast between the denoised pixels and the object pixels in the resultant image. The proposed twofold methods are thus named adaptive NDF block fusion with hard and soft thresholding (ANBF-HT and ANBF-ST). The results indicate a visual quality improvement to an interesting level with the proposed twofold processing, where the first fold removes noise and the second fold restores object properties. Peak signal-to-noise ratio (PSNR), normalized cross-correlation coefficient (NCC), edge strength (ES), image quality index (IQI) and structural similarity index (SSIM) measure the quantitative quality of the twofold processing technique. Validation of the proposed method is done by comparing it with anisotropic diffusion (AD), total variational filtering (TVF) and empirical mode decomposition (EMD) for the enhancement of US images. The US images were provided by the AMMA hospital radiology labs at Vijayawada, India. PMID:26697285

  1. Denoising Two-Photon Calcium Imaging Data

    PubMed Central

    Malik, Wasim Q.; Schummers, James; Sur, Mriganka; Brown, Emery N.

    2011-01-01

    Two-photon calcium imaging is now an important tool for in vivo imaging of biological systems. By enabling neuronal population imaging with subcellular resolution, this modality offers an approach for gaining a fundamental understanding of brain anatomy and physiology. Proper analysis of calcium imaging data requires denoising, that is, separating the signal from complex physiological noise. To analyze two-photon brain imaging data, we present a signal plus colored noise model in which the signal is represented as harmonic regression and the correlated noise is represented as an autoregressive process. We provide an efficient cyclic descent algorithm to compute approximate maximum likelihood parameter estimates by combining a weighted least-squares procedure with the Burg algorithm. We use the Akaike information criterion to guide selection of the harmonic regression and autoregressive model orders. Our flexible yet parsimonious modeling approach reliably separates stimulus-evoked fluorescence response from background activity and noise, assesses goodness of fit, and estimates confidence intervals and signal-to-noise ratio. This refined separation leads to appreciably enhanced image contrast for individual cells, including clear delineation of subcellular details and network activity. The application of our approach to in vivo imaging data recorded in the ferret primary visual cortex demonstrates that our method yields substantially denoised signal estimates. We also provide a general Volterra series framework for deriving this and other signal plus correlated noise models for imaging. This approach to analyzing two-photon calcium imaging data may be readily adapted to other computational biology problems which apply correlated noise models. PMID:21687727
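
    The cyclic-descent idea, alternating between fitting the harmonic regression and re-estimating the correlated-noise model, can be sketched with an AR(1) noise term and prewhitening; the paper uses a higher-order AR model fitted with the Burg algorithm and AIC-based order selection, none of which is reproduced here.

```python
import numpy as np

def harmonic_ar1_fit(y, freqs_hz, fs, n_iter=5):
    """Alternate harmonic-regression fitting and AR(1) noise estimation."""
    t = np.arange(len(y)) / fs
    X = np.column_stack(
        [np.ones_like(t)]
        + [fn(2 * np.pi * f * t) for f in freqs_hz for fn in (np.sin, np.cos)])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    rho = 0.0
    for _ in range(n_iter):
        resid = y - X @ beta
        rho = np.corrcoef(resid[:-1], resid[1:])[0, 1]       # AR(1) coefficient
        yw, Xw = y[1:] - rho * y[:-1], X[1:] - rho * X[:-1]  # prewhitening
        beta = np.linalg.lstsq(Xw, yw, rcond=None)[0]
    return beta, rho
```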

  2. Denoising, deconvolving, and decomposing photon observations. Derivation of the D3PO algorithm

    NASA Astrophysics Data System (ADS)

    Selig, Marco; Enßlin, Torsten A.

    2015-02-01

    The analysis of astronomical images is a non-trivial task. The D3PO algorithm addresses the inference problem of denoising, deconvolving, and decomposing photon observations. Its primary goal is the simultaneous but individual reconstruction of the diffuse and point-like photon flux given a single photon count image, where the fluxes are superimposed. In order to discriminate between these morphologically different signal components, a probabilistic algorithm is derived in the language of information field theory based on a hierarchical Bayesian parameter model. The signal inference exploits prior information on the spatial correlation structure of the diffuse component and the brightness distribution of the spatially uncorrelated point-like sources. A maximum a posteriori solution and a solution minimizing the Gibbs free energy of the inference problem using variational Bayesian methods are discussed. Since the derivation of the solution is not dependent on the underlying position space, the implementation of the D3PO algorithm uses the nifty package to ensure applicability to various spatial grids and at any resolution. The fidelity of the algorithm is validated by the analysis of simulated data, including a realistic high energy photon count image showing a 32 × 32 arcmin² observation with a spatial resolution of 0.1 arcmin. In all tests the D3PO algorithm successfully denoised, deconvolved, and decomposed the data into a diffuse and a point-like signal estimate for the respective photon flux components. A copy of the code is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/574/A74

  3. Ecological Inference

    NASA Astrophysics Data System (ADS)

    King, Gary; Rosen, Ori; Tanner, Martin A.

    2004-09-01

    This collection of essays brings together a diverse group of scholars to survey the latest strategies for solving ecological inference problems in various fields. The last half-decade has witnessed an explosion of research in ecological inference--the process of trying to infer individual behavior from aggregate data. Although uncertainties and information lost in aggregation make ecological inference one of the most problematic types of research to rely on, these inferences are required in many academic fields, as well as by legislatures and the Courts in redistricting, by business in marketing research, and by governments in policy analysis.

  4. Preliminary Results of the Lithospheric Structure Beneath the Aeolian Archipelago (Italy) Inferred from Teleseismic Receiver Functions

    NASA Astrophysics Data System (ADS)

    Musumeci, C.; Martinez-Arevalo, C.; de Lis Mancilla, F.; Patanè, D.

    2009-12-01

    The Aeolian archipelago (Italy) represents an approximately one-million-year-old volcanic arc related to the subduction of the Ionian oceanic plate beneath the Calabrian continental crust. The objective of this work is to develop a better understanding of the regional structure of the whole archipelago. The crustal structure under each station was obtained by applying the P-receiver function technique to the teleseismic P-coda data recorded by the broadband seismic network (10 stations) installed by the Istituto Nazionale di Geofisica e Vulcanologia (INGV-CT). Receiver functions were computed using the Extended-Time Multitaper Frequency Domain Cross-Correlation Receiver Function (ET-MTRF) method. The preliminary results suggest a very similar lithospheric structure below all the islands of the Aeolian archipelago, with the exception of Stromboli. The boundary between the subducting oceanic crust of the Ionian plate and the Tyrrhenian mantle is clearly observed below all the stations.

  5. Phylogenetic Gaussian Process Model for the Inference of Functionally Important Regions in Protein Tertiary Structures

    PubMed Central

    Huang, Yi-Fei; Golding, G. Brian

    2014-01-01

    A critical question in biology is the identification of functionally important amino acid sites in proteins. Because functionally important sites are under stronger purifying selection, site-specific substitution rates tend to be lower than usual at these sites. A large number of phylogenetic models have been developed to estimate site-specific substitution rates in proteins and the extraordinarily low substitution rates have been used as evidence of function. Most of the existing tools, e.g. Rate4Site, assume that site-specific substitution rates are independent across sites. However, site-specific substitution rates may be strongly correlated in the protein tertiary structure, since functionally important sites tend to be clustered together to form functional patches. We have developed a new model, GP4Rate, which incorporates the Gaussian process model with the standard phylogenetic model to identify slowly evolved regions in protein tertiary structures. GP4Rate uses the Gaussian process to define a nonparametric prior distribution of site-specific substitution rates, which naturally captures the spatial correlation of substitution rates. Simulations suggest that GP4Rate can potentially estimate site-specific substitution rates with a much higher accuracy than Rate4Site and tends to report slowly evolved regions rather than individual sites. In addition, GP4Rate can estimate the strength of the spatial correlation of substitution rates from the data. By applying GP4Rate to a set of mammalian B7-1 genes, we found a highly conserved region which coincides with experimental evidence. GP4Rate may be a useful tool for the in silico prediction of functionally important regions in the proteins with known structures. PMID:24453956
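
    The spatial correlation at the heart of GP4Rate can be pictured as a Gaussian-process prior whose covariance decays with inter-residue distance in the tertiary structure; below is a sketch of such a squared-exponential covariance over residue coordinates. The kernel choice and hyperparameters are illustrative, not taken from the paper.

```python
import numpy as np

def gp_rate_covariance(coords, lengthscale=5.0, variance=1.0):
    """Squared-exponential covariance over residue 3D coordinates (angstroms),
    inducing spatially correlated site-specific (log) substitution rates."""
    sq_dists = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(axis=-1)
    return variance * np.exp(-0.5 * sq_dists / lengthscale**2)
```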

  6. Pragmatic Inferences in High-Functioning Adults with Autism and Asperger Syndrome

    ERIC Educational Resources Information Center

    Pijnacker, Judith; Hagoort, Peter; Buitelaar, Jan; Teunisse, Jan-Pieter; Geurts, Bart

    2009-01-01

    Although people with autism spectrum disorders (ASD) often have severe problems with pragmatic aspects of language, little is known about their pragmatic reasoning. We carried out a behavioral study on high-functioning adults with autistic disorder (n = 11) and Asperger syndrome (n = 17) and matched controls (n = 28) to investigate whether they…

  7. STATISTICAL INFERENCE PROCEDURES FOR PROBABILITY SELECTION FUNCTIONS IN LONG-TERM MONITORING PROGRAMS

    EPA Science Inventory

    This report develops the theory and illustrates the use of selection functions to describe changes over time in the distributions of environmentally important variables at sites sampled as part of environmental monitoring programs. The first part of the report provides a review of...

  8. Microbial manipulation of immune function for asthma prevention: inferences from clinical trials.

    PubMed

    Yoo, Jennifer; Tcheurekdjian, Haig; Lynch, Susan V; Cabana, Michael; Boushey, Homer A

    2007-07-01

    The "hygiene hypothesis" proposes that the increase in allergic diseases in developing countries reflects a decrease in infections during childhood. Cohort studies suggest, however, that the risks of asthma are increased in children who suffer severe illness from a viral respiratory infection in infancy. This apparent inconsistency can be reconciled through consideration of epidemiologic, clinical, and animal studies. The elements of this line of reasoning are that viral infections can predispose to organ-specific expression of allergic sensitization, and that the severity of illness is shaped by the maturity of immune function, which in turn is influenced by previous contact with bacteria and viruses, whether pathogenic or not. Clinical studies of children and interventional studies of animals indeed suggest that the exposure to microbes through the gastrointestinal tract powerfully shapes immune function. Intestinal microbiota differ in infants who later develop allergic diseases, and feeding Lactobacillus casei to infants at risk has been shown to reduce their rate of developing eczema. This has prompted studies of feeding probiotics as a primary prevention strategy for asthma. We propose that the efficacy of this approach depends on its success in inducing maturation of immune function important in defense against viral infection, rather than on its effectiveness in preventing allergic sensitization. It follows that the endpoints of studies of feeding probiotics to infants at risk for asthma should include not simply tests of responsiveness to allergens, but also assessment of intestinal flora, immune function, and the clinical response to respiratory viral infection. PMID:17607013

  9. Using Functional Behavioral Assessment Data to Infer Learning Histories and Guide Interventions: A Consultation Case Study

    ERIC Educational Resources Information Center

    Parker, Megan; Skinner, Christopher; Booher, Joshua

    2010-01-01

    A teacher requested behavioral consultation services to address a first-grade student's disruptive behavior. Functional behavior assessment (FBA) suggested the behavior was being reinforced by "negative" teacher attention (e.g., reprimands, redirections, response cost). Based on this analysis, the teacher and consultant posited that this student…

  10. Inference for the median residual life function in sequential multiple assignment randomized trials

    PubMed Central

    Kidwell, Kelley M.; Ko, Jin H.; Wahed, Abdus S.

    2014-01-01

    In survival analysis, median residual lifetime is often used as a summary measure to assess treatment effectiveness; it is not clear, however, how such a quantity could be estimated for a given dynamic treatment regimen using data from sequential randomized clinical trials. We propose a method to estimate a dynamic treatment regimen-specific median residual life (MERL) function from sequential multiple assignment randomized trials. We present the MERL estimator, which is based on inverse probability weighting, as well as two variance estimates for the MERL estimator. One variance estimate follows from Lunceford, Davidian and Tsiatis' 2002 survival function-based variance estimate and the other uses the sandwich estimator. The MERL estimator is evaluated, and its two variance estimates are compared, through simulation studies showing that the estimator and both variance estimates produce approximately unbiased results in large samples. To demonstrate our methods, the estimator has been applied to data from a sequentially randomized leukemia clinical trial. PMID:24254496

  11. PrOnto database: GO term functional dissimilarity inferred from biological data

    PubMed Central

    Chapple, Charles E.; Herrmann, Carl; Brun, Christine

    2015-01-01

    Moonlighting proteins are defined by their involvement in multiple, unrelated functions. The computational prediction of such proteins requires a formal method of assessing the similarity of cellular processes, for example, by identifying dissimilar Gene Ontology terms. While many measures of Gene Ontology term similarity exist, most depend on abstract mathematical analyses of the structure of the GO tree and do not necessarily represent the underlying biology. Here, we propose two metrics of GO term functional dissimilarity derived from biological information, one based on protein annotations and the other on the interactions between proteins. They have been collected in the PrOnto database, a novel tool which can be of particular use for the identification of moonlighting proteins. The database can be queried via a web-based interface which is freely available at http://tagc.univ-mrs.fr/pronto. PMID:26089836

  12. Simple Math is Enough: Two Examples of Inferring Functional Associations from Genomic Data

    NASA Technical Reports Server (NTRS)

    Liang, Shoudan

    2003-01-01

    Non-random features in genomic data are usually biologically meaningful. The key is to choose the feature well. Having a p-value-based score prioritizes the findings. If two proteins share an unusually large number of common interaction partners, they tend to be involved in the same biological process. We used this finding to predict the functions of 81 un-annotated proteins in yeast.
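
    One standard way to attach a p-value to "an unusually large number of common interaction partners" is a hypergeometric tail test; a sketch of that score is below (the paper's exact scoring may differ).

```python
from scipy.stats import hypergeom

def shared_partner_pvalue(n_proteins, degree_a, degree_b, n_shared):
    """P(X >= n_shared) when protein B's degree_b partners are drawn from
    n_proteins, of which degree_a are partners of protein A (hypergeometric null)."""
    return hypergeom.sf(n_shared - 1, n_proteins, degree_a, degree_b)
```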

  13. Epigenetic regulation of human placental function and pregnancy outcome: considerations for causal inference.

    PubMed

    Januar, Vania; Desoye, Gernot; Novakovic, Boris; Cvitic, Silvija; Saffery, Richard

    2015-10-01

    Epigenetic mechanisms, often defined as regulating gene activity independently of underlying DNA sequence, are crucial for healthy development. The sum total of epigenetic marks within a cell or tissue (the epigenome) is sensitive to environmental influence, and disruption of the epigenome in utero has been associated with adverse pregnancy outcomes. Not surprisingly, given its multifaceted functions and important role in regulating pregnancy outcome, the placenta shows unique epigenetic features. Interestingly however, many of these are only otherwise seen in human malignancy (the pseudomalignant placental epigenome). Epigenetic variation in the placenta is now emerging as a candidate mediator of environmental influence on placental functioning and a key regulator of pregnancy outcome. However, replication of findings is generally lacking, most likely due to small sample sizes and a lack of standardization of analytical approaches. Defining DNA methylation "signatures" in the placenta associated with maternal and fetal outcomes offers tremendous potential to improve pregnancy outcomes, but care must be taken in interpretation of findings. Future placental epigenetic research would do well to address the issues present in epigenetic epidemiology more generally, including careful consideration of sample size, potentially confounding factors, issues of tissue heterogeneity, reverse causation, and the role of genetics in modulating epigenetic profile. The importance of animal or in vitro models in establishing a functional role of epigenetic variation identified in human beings, which is key to establishing causation, should not be underestimated. PMID:26428498

  14. Inferring functional connectivity in MRI using Bayesian network structure learning with a modified PC algorithm.

    PubMed

    Iyer, Swathi P; Shafran, Izhak; Grayson, David; Gates, Kathleen; Nigg, Joel T; Fair, Damien A

    2013-07-15

    Resting state functional connectivity MRI (rs-fcMRI) is a popular technique used to gauge the functional relatedness between regions in the brain for typical and special populations. Most of the work to date determines this relationship by using Pearson's correlation on BOLD fMRI timeseries. However, it has been recognized that there are at least two key limitations to this method. First, it is not possible to resolve direct and indirect connections/influences. Second, the direction of information flow between the regions cannot be differentiated. In the current paper, we follow up on recent work by Smith et al. (2011) and apply the PC algorithm to both simulated data and empirical data to determine whether these two factors can be discerned with group-average, as opposed to single-subject, functional connectivity data. When applied to simulated individual subjects, the algorithm performs well in determining indirect and direct connections but fails in determining directionality. However, when applied at the group level, the PC algorithm gives strong results for both indirect and direct connections and the direction of information flow. Applying the algorithm to empirical data, using a diffusion-weighted imaging (DWI) structural connectivity matrix as the baseline, the PC algorithm outperformed the direct correlations. We conclude that, under certain conditions, the PC algorithm leads to an improved estimate of brain network structure compared to the traditional connectivity analysis based on correlations. PMID:23501054
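
    The skeleton phase of the PC algorithm removes edges whose (partial) correlations are statistically indistinguishable from zero; below is a sketch truncated at conditioning sets of size one, using Fisher z tests. The full algorithm conditions on larger sets and then orients edges, which is not reproduced here.

```python
import numpy as np
from itertools import combinations

def fisher_z(r, n, cond_size=0):
    """Fisher z statistic for a (partial) correlation."""
    return np.abs(np.arctanh(r)) * np.sqrt(n - cond_size - 3)

def pc_skeleton_order1(data, z_thresh=3.0):
    """PC-style skeleton estimation up to first-order conditioning."""
    n, p = data.shape
    corr = np.corrcoef(data.T)
    adj = np.zeros((p, p), dtype=bool)
    for i, j in combinations(range(p), 2):
        adj[i, j] = adj[j, i] = fisher_z(corr[i, j], n) > z_thresh
    for i, j in combinations(range(p), 2):
        for k in range(p):
            if not adj[i, j] or k in (i, j):
                continue
            pc = (corr[i, j] - corr[i, k] * corr[j, k]) / np.sqrt(
                (1 - corr[i, k] ** 2) * (1 - corr[j, k] ** 2))
            if fisher_z(pc, n, cond_size=1) < z_thresh:
                adj[i, j] = adj[j, i] = False  # conditionally independent given k
    return adj
```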

  15. The luminosity function at z ∼ 8 from 97 Y-band dropouts: Inferences about reionization

    SciTech Connect

    Schmidt, Kasper B.; Treu, Tommaso; Kelly, Brandon C.; Trenti, Michele; Bradley, Larry D.; Stiavelli, Massimo; Oesch, Pascal A.; Shull, J. Michael

    2014-05-01

    We present the largest search to date for Y-band dropout galaxies (z ∼ 8 Lyman break galaxies, LBGs) based on 350 arcmin² of Hubble Space Telescope observations in the V, Y, J, and H bands from the Brightest of Reionizing Galaxies (BoRG) survey. In addition to previously published data, the BoRG13 data set presented here includes approximately 50 arcmin² of new data and deeper observations of two previous BoRG pointings, from which we present 9 new z ∼ 8 LBG candidates, bringing the total number of BoRG Y-band dropouts to 38 with 25.5 ≤ mJ ≤ 27.6 (AB system). We introduce a new Bayesian formalism for estimating the galaxy luminosity function, which does not require binning (and thus smearing) of the data and includes a likelihood based on the formally correct binomial distribution as opposed to the often-used approximate Poisson distribution. We demonstrate the utility of the new method on a sample of 97 Y-band dropouts that combines the bright BoRG galaxies with the fainter sources published in Bouwens et al. from the Hubble Ultra Deep Field and Early Release Science programs. We show that the z ∼ 8 luminosity function is well described by a Schechter function over its full dynamic range, with a characteristic magnitude M⋆ = −20.15 (+0.29/−0.38), a faint-end slope of α = −1.87 ± 0.26, and a number density of log10 φ⋆ [Mpc⁻³] = −3.24 (+0.25/−0.24). Integrated down to M = −17.7, this luminosity function yields a luminosity density log10 ε [erg s⁻¹ Hz⁻¹ Mpc⁻³] = 25.52 ± 0.05. Our luminosity function analysis is consistent with previously published determinations within 1σ. The error analysis suggests that uncertainties on the faint-end slope are still too large to draw a firm conclusion about its evolution with redshift. We use our statistical framework to discuss the implication of our study for the physics of
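
    For reference, the Schechter form quoted above, written in magnitudes and evaluated with the best-fit BoRG13 parameters:

```python
import numpy as np

def schechter_mag(M, M_star=-20.15, alpha=-1.87, log10_phi_star=-3.24):
    """Schechter luminosity function phi(M) in Mpc^-3 mag^-1."""
    x = 10.0 ** (0.4 * (M_star - M))
    return 0.4 * np.log(10.0) * 10.0 ** log10_phi_star * x ** (alpha + 1) * np.exp(-x)
```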

  16. Function of pretribosphenic and tribosphenic mammalian molars inferred from 3D animation.

    PubMed

    Schultz, Julia A; Martin, Thomas

    2014-10-01

    Appearance of the tribosphenic molar in the Late Jurassic (160 Ma) is a crucial innovation for food processing in mammalian evolution. This molar type is characterized by a protocone, a talonid basin and a two-phased chewing cycle, all of which are apomorphic. In this functional study on the teeth of Late Jurassic Dryolestes leiriensis and the living marsupial Monodelphis domestica, we demonstrate that pretribosphenic and tribosphenic molars show fundamental differences of food reduction strategies, representing a shift in dental function during the transition of tribosphenic mammals. By using the Occlusal Fingerprint Analyser (OFA), we simulated the chewing motions of the pretribosphenic Dryolestes that represents an evolutionary precursor condition to such tribosphenic mammals as Monodelphis. Animation of chewing path and detection of collisional contacts between virtual models of teeth suggests that Dryolestes differs from the classical two-phased chewing movement of tribosphenidans, due to the narrowing of the interdental space in cervical (crown-root transition) direction, the inclination angle of the hypoflexid groove, and the unicuspid talonid. The pretribosphenic chewing cycle is equivalent to phase I of the tribosphenic chewing cycle, but the former lacks phase II of the tribosphenic chewing. The new approach can analyze the chewing cycle of the jaw by using polygonal 3D models of tooth surfaces, in a way that is complementary to the electromyography and strain gauge studies of muscle function of living animals. The technique allows alignment and scaling of isolated fossil teeth and utilizes the wear facet orientation and striation of the teeth to reconstruct the chewing path of extinct mammals. PMID:25091547

  17. Function of pretribosphenic and tribosphenic mammalian molars inferred from 3D animation

    NASA Astrophysics Data System (ADS)

    Schultz, Julia A.; Martin, Thomas

    2014-10-01

    Appearance of the tribosphenic molar in the Late Jurassic (160 Ma) is a crucial innovation for food processing in mammalian evolution. This molar type is characterized by a protocone, a talonid basin and a two-phased chewing cycle, all of which are apomorphic. In this functional study on the teeth of Late Jurassic Dryolestes leiriensis and the living marsupial Monodelphis domestica, we demonstrate that pretribosphenic and tribosphenic molars show fundamental differences of food reduction strategies, representing a shift in dental function during the transition of tribosphenic mammals. By using the Occlusal Fingerprint Analyser (OFA), we simulated the chewing motions of the pretribosphenic Dryolestes that represents an evolutionary precursor condition to such tribosphenic mammals as Monodelphis. Animation of chewing path and detection of collisional contacts between virtual models of teeth suggests that Dryolestes differs from the classical two-phased chewing movement of tribosphenidans, due to the narrowing of the interdental space in cervical (crown-root transition) direction, the inclination angle of the hypoflexid groove, and the unicuspid talonid. The pretribosphenic chewing cycle is equivalent to phase I of the tribosphenic chewing cycle, but the former lacks phase II of the tribosphenic chewing. The new approach can analyze the chewing cycle of the jaw by using polygonal 3D models of tooth surfaces, in a way that is complementary to the electromyography and strain gauge studies of muscle function of living animals. The technique allows alignment and scaling of isolated fossil teeth and utilizes the wear facet orientation and striation of the teeth to reconstruct the chewing path of extinct mammals.

  18. Bayesian nonparametric inference on quantile residual life function: Application to breast cancer data.

    PubMed

    Park, Taeyoung; Jeong, Jong-Hyeon; Lee, Jae Won

    2012-08-15

    There is often an interest in estimating a residual life function as a summary measure of survival data. For ease in presentation of the potential therapeutic effect of a new drug, investigators may summarize survival data in terms of the remaining life years of patients. Under heavy right censoring, however, some reasonably high quantiles (e.g., median) of a residual lifetime distribution cannot always be estimated via a popular nonparametric approach on the basis of the Kaplan-Meier estimator. To overcome the difficulties in dealing with heavily censored survival data, this paper develops a Bayesian nonparametric approach that takes advantage of a fully model-based but highly flexible probabilistic framework. We use a Dirichlet process mixture of Weibull distributions to avoid strong parametric assumptions on the unknown failure time distribution, making it possible to estimate any quantile residual life function under heavy censoring. Posterior computation through Markov chain Monte Carlo is straightforward and efficient because of conjugacy properties and partial collapse. We illustrate the proposed methods by using both simulated data and heavily censored survival data from a recent breast cancer clinical trial conducted by the National Surgical Adjuvant Breast and Bowel Project. PMID:22437758
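
    The quantity being estimated above has a standard closed form in terms of the survival function S(t); the sketch below states it with illustrative notation (θ_p for the p-th quantile residual life), since the record does not reproduce the paper's own formulas.

```latex
% p-th quantile residual life at time t0: the additional time until the
% conditional survival probability S(t0 + t)/S(t0) drops to 1 - p.
% Median residual life is the special case p = 0.5.
\begin{equation*}
\theta_p(t_0) \;=\; S^{-1}\!\bigl((1-p)\,S(t_0)\bigr) \;-\; t_0 ,
\qquad 0 < p < 1 .
\end{equation*}
```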

  19. Functional morphology of the hallucal metatarsal with implications for inferring grasping ability in extinct primates.

    PubMed

    Goodenberger, Katherine E; Boyer, Doug M; Orr, Caley M; Jacobs, Rachel L; Femiani, John C; Patel, Biren A

    2015-03-01

    Primate evolutionary morphologists have argued that selection for life in a fine branch niche resulted in grasping specializations that are reflected in the hallucal metatarsal (Mt1) morphology of extant "prosimians", while a transition to use of relatively larger, horizontal substrates explains the apparent loss of such characters in anthropoids. Accordingly, these morphological characters (Mt1 torsion, peroneal process length and thickness, and physiological abduction angle) have been used to reconstruct grasping ability and locomotor mode in the earliest fossil primates. Although these characters are prominently featured in debates on the origin and subsequent radiation of Primates, questions remain about their functional significance. This study examines the relationship between these morphological characters of the Mt1 and a novel metric of pedal grasping ability for a large number of extant taxa in a phylogenetic framework. Results indicate greater Mt1 torsion in taxa that engage in hallucal grasping and in those that utilize relatively small substrates more frequently. This study provides evidence that Carpolestes simpsoni has a torsion value more similar to grasping primates than to any scandentian. The results also show that taxa that habitually grasp vertical substrates are distinguished from other taxa in having relatively longer peroneal processes. Furthermore, a longer peroneal process is also correlated with calcaneal elongation, a metric previously found to reflect leaping proclivity. A more refined understanding of the functional associations between Mt1 morphology and behavior in extant primates enhances the potential for using these morphological characters to comprehend primate (locomotor) evolution. PMID:25378276

  20. Inferring cortical function in the mouse visual system through large-scale systems neuroscience.

    PubMed

    Hawrylycz, Michael; Anastassiou, Costas; Arkhipov, Anton; Berg, Jim; Buice, Michael; Cain, Nicholas; Gouwens, Nathan W; Gratiy, Sergey; Iyer, Ramakrishnan; Lee, Jung Hoon; Mihalas, Stefan; Mitelut, Catalin; Olsen, Shawn; Reid, R Clay; Teeter, Corinne; de Vries, Saskia; Waters, Jack; Zeng, Hongkui; Koch, Christof

    2016-07-01

    The scientific mission of the Project MindScope is to understand neocortex, the part of the mammalian brain that gives rise to perception, memory, intelligence, and consciousness. We seek to quantitatively evaluate the hypothesis that neocortex is a relatively homogeneous tissue, with smaller functional modules that perform a common computational function replicated across regions. We here focus on the mouse as a mammalian model organism with genetics, physiology, and behavior that can be readily studied and manipulated in the laboratory. We seek to describe the operation of cortical circuitry at the computational level by comprehensively cataloging and characterizing its cellular building blocks along with their dynamics and their cell type-specific connectivities. The project is also building large-scale experimental platforms (i.e., brain observatories) to record the activity of large populations of cortical neurons in behaving mice subject to visual stimuli. A primary goal is to understand the series of operations from visual input in the retina to behavior by observing and modeling the physical transformations of signals in the corticothalamic system. We here focus on the contribution that computer modeling and theory make to this long-term effort. PMID:27382147

  1. Inferring cortical function in the mouse visual system through large-scale systems neuroscience

    PubMed Central

    Hawrylycz, Michael; Anastassiou, Costas; Arkhipov, Anton; Berg, Jim; Buice, Michael; Cain, Nicholas; Gouwens, Nathan W.; Gratiy, Sergey; Iyer, Ramakrishnan; Lee, Jung Hoon; Mihalas, Stefan; Mitelut, Catalin; Olsen, Shawn; Reid, R. Clay; Teeter, Corinne; de Vries, Saskia; Waters, Jack; Zeng, Hongkui; Koch, Christof

    2016-01-01

    The scientific mission of the Project MindScope is to understand neocortex, the part of the mammalian brain that gives rise to perception, memory, intelligence, and consciousness. We seek to quantitatively evaluate the hypothesis that neocortex is a relatively homogeneous tissue, with smaller functional modules that perform a common computational function replicated across regions. We here focus on the mouse as a mammalian model organism with genetics, physiology, and behavior that can be readily studied and manipulated in the laboratory. We seek to describe the operation of cortical circuitry at the computational level by comprehensively cataloging and characterizing its cellular building blocks along with their dynamics and their cell type-specific connectivities. The project is also building large-scale experimental platforms (i.e., brain observatories) to record the activity of large populations of cortical neurons in behaving mice subject to visual stimuli. A primary goal is to understand the series of operations from visual input in the retina to behavior by observing and modeling the physical transformations of signals in the corticothalamic system. We here focus on the contribution that computer modeling and theory make to this long-term effort. PMID:27382147

  2. Effect of denoising on supervised lung parenchymal clusters

    NASA Astrophysics Data System (ADS)

    Jayamani, Padmapriya; Raghunath, Sushravya; Rajagopalan, Srinivasan; Karwoski, Ronald A.; Bartholmai, Brian J.; Robb, Richard A.

    2012-03-01

    Denoising is a critical preconditioning step for quantitative analysis of medical images. Despite promises for more consistent diagnosis, denoising techniques are seldom explored in clinical settings. While this may be attributed to the esoteric nature of the parameter-sensitive algorithms, the lack of quantitative measures of their efficacy in enhancing clinical decision making is a primary cause of physician apathy. This paper addresses this issue by exploring the effect of denoising on the integrity of supervised lung parenchymal clusters. Multiple Volumes of Interest (VOIs) were selected across multiple high resolution CT scans to represent samples of different patterns (normal, emphysema, ground glass, honeycombing and reticular). The VOIs were labeled through consensus of four radiologists. The original datasets were filtered by multiple denoising techniques (median filtering, anisotropic diffusion, bilateral filtering and non-local means) and the corresponding filtered VOIs were extracted. A plurality of cluster indices based on multiple histogram-based pair-wise similarity measures were used to assess the quality of supervised clusters in the original and filtered space. The resultant rank orders were analyzed using the Borda criteria to find the denoising-similarity measure combination that has the best cluster quality. Our exhaustive analysis reveals (a) for a number of similarity measures, the cluster quality is inferior in the filtered space; and (b) for measures that benefit from denoising, a simple median filtering outperforms non-local means and bilateral filtering. Our study suggests the need to judiciously choose, if required, a denoising technique that does not deteriorate the integrity of supervised clusters.

  3. Bayesian inference in an item response theory model with a generalized student t link function

    NASA Astrophysics Data System (ADS)

    Azevedo, Caio L. N.; Migon, Helio S.

    2012-10-01

    In this paper we introduce a new item response theory (IRT) model with a generalized Student t-link function with unknown degrees of freedom (df), named the generalized t-link (GtL) IRT model. In this model we consider only the difficulty parameter in the item response function. GtL is an alternative to the two-parameter logit and probit models, since the degrees of freedom (df) play a similar role to the discrimination parameter. However, the behavior of the GtL curves is different from those of the two-parameter models and the usual Student t link, since in GtL the curve obtained from different df's can cross the probit curves at more than one latent trait level. The GtL model has similar properties to the generalized linear mixed models, such as the existence of sufficient statistics and easy parameter interpretation. Also, many techniques of parameter estimation, model fit assessment and residual analysis developed for those models can be used for the GtL model. We develop fully Bayesian estimation and model fit assessment tools through a Metropolis-Hastings step within a Gibbs sampling algorithm. We consider a sensitivity analysis for the prior choice of the degrees of freedom. The simulation study indicates that the algorithm recovers all parameters properly. In addition, some Bayesian model fit assessment tools are considered. Finally, a real data set is analyzed using our approach and other usual models. The results indicate that our model fits the data better than the two-parameter models.

  4. Crustal structure beneath the Japanese Islands inferred from receiver function analysis using similar earthquakes

    NASA Astrophysics Data System (ADS)

    Igarashi, Toshihiro

    2016-04-01

    The stress concentration and strain accumulation process due to inter-plate coupling of the subducting plate should have a large effect on inland shallow earthquakes that occur in the overriding plate. Information on the crustal structure and the crustal thickness is important to understanding this process. In this study, I applied receiver function analysis using similar earthquakes to estimate the crustal velocity structures beneath the Japanese Islands. Because similar earthquakes occur repeatedly at almost the same place, they are useful for extracting information on the spatial distribution and temporal changes of seismic velocity structures beneath the seismic stations. I used telemetric seismographic network data covering the Japanese Islands and moderate-sized similar earthquakes which occurred in the Southern Hemisphere with epicentral distances between 30 and 90 degrees over about 26 years from October 1989. Data analysis was performed separately before and after the 2011 Tohoku-Oki earthquake. To identify the spatial distribution of crustal structure, I searched for the best-correlated model between an observed receiver function at each station and synthetic ones by using a grid search method. As a result, I clarified the spatial distribution of the crustal velocity structures. The spatial patterns of velocities from the ground surface to 5 km depth correspond with basement depth models, although the velocities are slower than those of tomography models. They indicate thick sediment layers in several plain and basin areas. The crustal velocity perturbations are consistent with existing tomography models. The active volcanoes correspond to low-velocity zones from the upper crust to the crust-mantle transition. A comparison of the crustal structure before and after the 2011 Tohoku-Oki earthquake suggests that the northeastern Japan arc changed to lower velocities in some areas. This kind of velocity change might be due to other effects such as changes of

  5. Sediment thickness beneath the Indo-Gangetic Plain and Siwalik Himalaya inferred from receiver function modelling

    NASA Astrophysics Data System (ADS)

    Borah, Kajaljyoti; Kanna, Nagaraju; Rai, S. S.; Prakasam, K. S.

    2015-03-01

    The Indo-Gangetic Plain and the adjoining Siwalik Himalaya are the seismically most vulnerable regions due to the high density of human population and the presence of thick sediments that amplify the seismic waves generated by an earthquake in the region. We investigate the sedimentary structure and crustal thickness of the region through joint inversion of the receiver function time series at 14 broadband seismograph locations and the available Rayleigh velocity data for the region. Results show significant variability of sedimentary layer thicknesses, from 1.0 to 2.0 km beneath the Delhi region to 2.0-5.0 km beneath the Indo-Gangetic Plain and the Siwalik Himalaya. As we progress from Delhi to the Indo-Gangetic Plain, we observe a decrease in the shear velocity in the sedimentary layer from ∼2.0 km/s to ∼1.3 km/s, while the layer thickness increases progressively from ∼1.0 km in the south to 2.0-5.0 km in the north. The average S-velocity in the sedimentary layer beneath the Siwalik Himalaya is ∼2.1 km/s. Crustal thickness varies from ∼42 km in the Delhi region and ∼48 km in the Indo-Gangetic Plain to ∼50 km in the western part of the Siwalik Himalaya and ∼60 km in the Kumaon region of the Siwalik Himalaya.

  6. The SWELLS survey - VI. Hierarchical inference of the initial mass functions of bulges and discs

    NASA Astrophysics Data System (ADS)

    Brewer, Brendon J.; Marshall, Philip J.; Auger, Matthew W.; Treu, Tommaso; Dutton, Aaron A.; Barnabè, Matteo

    2014-01-01

    The long-standing assumption that the stellar initial mass function (IMF) is universal has recently been challenged by a number of observations. Several studies have shown that a `heavy' IMF (e.g. with a Salpeter-like abundance of low-mass stars and thus normalization) is preferred for massive early-type galaxies, while this IMF is inconsistent with the properties of less massive, later-type galaxies. These discoveries motivate the hypothesis that the IMF may vary (possibly very slightly) across galaxies and across components of individual galaxies (e.g. bulges versus discs). In this paper, we use a sample of 19 late-type strong gravitational lenses from the Sloan WFC Edge-on Late-type Lens Survey (SWELLS) to investigate the IMFs of the bulges and discs in late-type galaxies. We perform a joint analysis of the galaxies' total masses (constrained by strong gravitational lensing) and stellar masses (constrained by optical and near-infrared colours in the context of a stellar population synthesis model, up to an IMF normalization parameter). Using minimal assumptions apart from the physical constraint that the total stellar mass m* within any aperture must be less than the total mass mtot within the aperture, we find that the bulges of the galaxies cannot have IMFs heavier (i.e. implying high mass per unit luminosity) than Salpeter, while the disc IMFs are not well constrained by this data set. We also discuss the necessity for hierarchical modelling when combining incomplete information about multiple astronomical objects. This modelling approach allows us to place upper limits on the size of any departures from universality. More data, including spatially resolved kinematics (as in Paper V) and stellar population diagnostics over a range of bulge and disc masses, are needed to robustly quantify how the IMF varies within galaxies.

  7. Complex geometry of the subducted Pacific slab inferred from receiver function

    NASA Astrophysics Data System (ADS)

    Zhang, Ruiqing; Wu, Qingju; Zhang, Guangcheng

    2014-05-01

    In recent years, slab tear has received considerable attention and has been reported at many arc-arc junctures in Pacific plate subduction zones. From 2009 to 2011, we deployed two portable experiments equipped with CMG-3ESPC seismometers and REFTEK-130B recorders in NE China. The two linear seismic arrays were designed to be nearly parallel; each contained about 60 seismic stations and extended about 1200 km from west to east, spanning all surface geological terrains of NE China. The southern array was set up first and operated continuously for over two years, while the northern deployment worked for only about one year. Using the teleseismic data collected by these two arrays, we calculate P receiver functions to map the topographic variation of the upper mantle discontinuities. Our sampled region is located where the juncture between the subducting Kuril and Japan slabs reaches the 660-km discontinuity. Distinct variation of the 660-km discontinuity is mapped beneath the regions. A deeper-than-normal 660-km discontinuity is observed locally in the southeastern part of our sampled region. The depression of the 660-km discontinuity may result from an oceanic lithospheric slab deflected in the mantle transition zone, in good agreement with the results of earlier tomographic and other seismic studies in this region. The northeastern portion of our sampled region, however, does not clearly show the deflection of the slab. The variation of the topography of the 660-km discontinuity in our sampled regions may indicate a complex geometry of the subducted Pacific slab.

  8. Seismic Discontinuities within the Crust and Mantle Beneath Indonesia as Inferred from P Receiver Functions

    NASA Astrophysics Data System (ADS)

    Woelbern, I.; Rumpker, G.

    2015-12-01

    Indonesia is situated at the southern margin of SE Asia, which comprises an assemblage of Gondwana-derived continental terranes, suture zones and volcanic arcs. The formation of SE Asia is believed to have started in the Early Devonian. Its complex history involves the opening and closure of three distinct Tethys oceans, each accompanied by the rifting of continental fragments. We apply the receiver function technique to data of the temporary MERAMEX network operated in Central Java from May to October 2004 by the GeoForschungsZentrum Potsdam. The network consisted of 112 mobile stations with a spacing of about 10 km covering the full width of the island between the southern and northern coastlines. The tectonic history is reflected in a complex crustal structure of Central Java exhibiting strong topography of the Moho discontinuity related to different tectonic units. A discontinuity of negative impedance contrast is observed throughout the mid-crust, interpreted as the top of a low-velocity layer which shows no depth correlation with the Moho interface. Converted phases generated at greater depth beneath Indonesia indicate the existence of multiple seismic discontinuities within the upper mantle and even below. The strongest signal originates from the base of the mantle transition zone, i.e. the 660 km discontinuity. The phase related to the 410 km discontinuity is less pronounced, but clearly identifiable as well. The derived thickness of the mantle transition zone is in good agreement with the IASP91 velocity model. Additional phases are observed at roughly 33 s and 90 s relative to the P onset, corresponding to about 300 km and 920 km depth, respectively. A signal of reversed polarity indicates the top of a low-velocity layer at about 370 km depth overlying the mantle transition zone.

  9. Why are dunkels sticky? Preschoolers infer functionality and intentional creation for artifact properties learned from generic language.

    PubMed

    Cimpian, Andrei; Cadena, Cristina

    2010-10-01

    Artifacts pose a potential learning problem for children because the mapping between their features and their functions is often not transparent. In solving this problem, children are likely to rely on a number of information sources (e.g., others' actions, affordances). We argue that children's sensitivity to nuances in the language used to describe artifacts is an important, but so far unacknowledged, piece of this puzzle. Specifically, we hypothesize that children are sensitive to whether an unfamiliar artifact's features are highlighted using generic (e.g., "Dunkels are sticky") or non-generic (e.g., "This dunkel is sticky") language. Across two studies, older (but not younger) preschoolers who heard such features introduced via generic statements inferred that they are a functional part of the artifact's design more often than children who heard the same features introduced via non-generic statements. The ability to pick up on this linguistic cue may expand considerably the amount of conceptual information about artifacts that children derive from conversations with adults. PMID:20656283

  10. Load identification approach based on basis pursuit denoising algorithm

    NASA Astrophysics Data System (ADS)

    Ginsberg, D.; Ruby, M.; Fritzen, C. P.

    2015-07-01

    The information of the external loads is of great interest in many fields of structural analysis, such as structural health monitoring (SHM) systems or assessment of damage after extreme events. However, in most cases it is not possible to measure the external forces directly, so they need to be reconstructed. Load reconstruction refers to the problem of estimating an input to a dynamic system when the system output and the impulse response functions are known. Generally, this leads to a so-called ill-posed inverse problem, which involves solving an underdetermined linear system of equations. For most practical applications it can be assumed that the applied loads are not arbitrarily distributed in time and space; at least some specific characteristics of the external excitation are known a priori. In this contribution this knowledge was used to develop a more suitable force reconstruction method, which allows identifying the time history and the force location simultaneously by employing significantly fewer sensors compared to other reconstruction approaches. The properties of the external force are used to transform the ill-posed problem into a sparse recovery task. The sparse solution is acquired by solving a minimization problem known as basis pursuit denoising (BPDN). The possibility of reconstructing loads based on noisy structural measurement signals will be demonstrated by considering two frequently occurring loading conditions: harmonic excitation and impact events, separately and combined. First a simulation study of a simple plate structure is carried out and thereafter an experimental investigation of a real beam is performed.
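
    The record does not give the authors' solver, so the following is a minimal NumPy sketch of one standard way to attack the BPDN problem min_x 0.5‖Ax − b‖² + λ‖x‖₁, namely iterative shrinkage-thresholding (ISTA). The matrix A (standing in for the impulse-response dictionary), λ, and all names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ista_bpdn(A, b, lam, n_iter=500):
    """Minimal ISTA sketch for basis pursuit denoising:
    minimize 0.5*||A x - b||_2^2 + lam*||x||_1 (illustrative solver)."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)               # gradient of the quadratic term
        z = x - grad / L                       # plain gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# Toy usage: recover a sparse load vector from noisy measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))
x_true = np.zeros(200)
x_true[[10, 60, 150]] = [2.0, -1.5, 3.0]
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = ista_bpdn(A, b, lam=0.1)
```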

  11. Hybrid regularizers-based adaptive anisotropic diffusion for image denoising.

    PubMed

    Liu, Kui; Tan, Jieqing; Ai, Liefu

    2016-01-01

    To eliminate the staircasing effect of the total variation filter and simultaneously avoid edge blurring of the fourth-order PDE filter, a hybrid regularizers-based adaptive anisotropic diffusion is proposed for image denoising. In the proposed model, the [Formula: see text]-norm is considered as the fidelity term and the regularization term is composed of a total variation regularization and a fourth-order filter. The two filters can be adaptively selected according to the diffusion function. When the pixels are located at edges, the total variation filter is selected to filter the image, which can preserve the edges. When the pixels belong to flat regions, the fourth-order filter is adopted to smooth the image, which can eliminate the staircase artifacts. In addition, the split Bregman and relaxation approaches are employed in our numerical algorithm to speed up the computation. Experimental results demonstrate that our proposed model outperforms the state-of-the-art models cited in the paper in both qualitative and quantitative evaluations. PMID:27047730
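
    The PubMed record garbles the fidelity norm ("[Formula: see text]-norm"), so the energy below is only a schematic reading of the model: an L1 fidelity term (an assumption) plus a regularizer that blends total variation with a fourth-order term through an edge-dependent weight w(x). The weight function is illustrative, not the authors' exact diffusion function.

```latex
% Schematic hybrid energy: w(x) ~ 1 near edges (TV active, edges preserved),
% w(x) ~ 0 in flat regions (fourth-order term active, staircasing suppressed).
\begin{equation*}
E(u) \;=\; \int_\Omega |u - f| \, dx
\;+\; \lambda \int_\Omega \Bigl[\, w(x)\,|\nabla u|
\;+\; \bigl(1 - w(x)\bigr)\,|\nabla^2 u| \,\Bigr] dx ,
\qquad w(x) \in [0, 1].
\end{equation*}
```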

  12. Application of time-resolved glucose concentration photoacoustic signals based on an improved wavelet denoising

    NASA Astrophysics Data System (ADS)

    Ren, Zhong; Liu, Guodong; Huang, Zhen

    2014-10-01

    Real-time monitoring of blood glucose concentration (BGC) is a very important procedure in controlling diabetes mellitus and preventing complications for diabetic patients. Noninvasive measurement of BGC has already become a research hotspot because it can avoid physical and psychological harm. Photoacoustic spectroscopy is a well-established, hybrid and alternative technique used to determine the BGC. According to the theory of the photoacoustic technique, the blood is irradiated by a pulsed laser with nanosecond repetition time and micro-joule power; the photoacoustic signals containing the information of BGC are generated due to the thermoelastic mechanism, and the BGC level can then be interpreted from the photoacoustic signal via data analysis. But in practice, the time-resolved photoacoustic signals of BGC are polluted by a variety of noises, e.g., the interference of background sounds and the multi-component nature of blood. The quality of the photoacoustic signal of BGC directly impacts the precision of BGC measurement. So, an improved wavelet denoising method was proposed to eliminate the noises contained in BGC photoacoustic signals. To overcome the shortcomings of traditional wavelet threshold denoising, an improved dual-threshold wavelet function was proposed in this paper. Simulation results illustrated that the denoising result of this improved wavelet method was better than that of the traditional soft and hard threshold functions. To verify the feasibility of this improved function, actual photoacoustic BGC signals were tested; the test results demonstrated that the signal-to-noise ratio (SNR) of the improved function increases by about 40-80%, and its root-mean-square error (RMSE) decreases by about 38.7-52.8%.
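
    The paper's improved dual-threshold function is not reproduced in this record, so the sketch below pairs standard PyWavelets decomposition with a hypothetical two-threshold shrinkage as a stand-in: coefficients below t1 are zeroed, coefficients above t2 are kept, and the band in between is shrunk linearly.

```python
import numpy as np
import pywt

def dual_threshold(c, t1, t2):
    """Hypothetical dual-threshold shrinkage (a stand-in, not the paper's
    function): zero below t1, identity above t2, linear ramp in between."""
    out = np.where(np.abs(c) < t1, 0.0, c)
    mid = (np.abs(c) >= t1) & (np.abs(c) <= t2)
    ramp = np.sign(c) * (np.abs(c) - t1) * t2 / (t2 - t1)
    return np.where(mid, ramp, out)

def denoise_signal(signal, wavelet="db4", level=4, t1=0.5, t2=1.5):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # leave the approximation untouched, shrink only the detail coefficients
    coeffs = [coeffs[0]] + [dual_threshold(c, t1, t2) for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)
```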

  13. GPU-accelerated denoising of 3D magnetic resonance images

    SciTech Connect

    Howison, Mark; Wes Bethel, E.

    2014-05-29

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. In practice, applying these filtering operations requires setting multiple parameters. This study was designed to provide better guidance to practitioners for choosing the most appropriate parameters by answering two questions: what parameters yield the best denoising results in practice? And what tuning is necessary to achieve optimal performance on a modern GPU? To answer the first question, we use two different metrics, mean squared error (MSE) and mean structural similarity (MSSIM), to compare denoising quality against a reference image. Surprisingly, the best improvement in structural similarity with the bilateral filter is achieved with a small stencil size that lies within the range of real-time execution on an NVIDIA Tesla M2050 GPU. Moreover, inappropriate choices for parameters, especially scaling parameters, can yield very poor denoising performance. To answer the second question, we perform an autotuning study to empirically determine optimal memory tiling on the GPU. The variation in these results suggests that such tuning is an essential step in achieving real-time performance. These results have important implications for the real-time application of denoising to MR images in clinical settings that require fast turn-around times.
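
    The kind of parameter sweep described can be reproduced on CPU with off-the-shelf tools; the sketch below uses scikit-image's bilateral filter and MSE/MSSIM metrics on a stand-in test image. The stencil sizes and range scales are illustrative; the study's GPU implementation and MR data are not reproduced.

```python
import numpy as np
from skimage import data, img_as_float
from skimage.restoration import denoise_bilateral
from skimage.metrics import mean_squared_error, structural_similarity

reference = img_as_float(data.camera())        # stand-in for a noiseless reference
rng = np.random.default_rng(1)
noisy = np.clip(reference + 0.05 * rng.standard_normal(reference.shape), 0, 1)

best = None
for win in (3, 5, 7):                          # small stencil sizes
    for sigma_r in (0.05, 0.1, 0.2):           # range (intensity) scale
        den = denoise_bilateral(noisy, win_size=win,
                                sigma_color=sigma_r, sigma_spatial=2.0)
        score = structural_similarity(reference, den, data_range=1.0)
        if best is None or score > best[0]:
            best = (score, win, sigma_r, mean_squared_error(reference, den))
print("best MSSIM %.4f at win=%d, sigma_color=%.2f (MSE %.5f)" % best)
```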

  14. Fractional domain varying-order differential denoising method

    NASA Astrophysics Data System (ADS)

    Zhang, Yan-Shan; Zhang, Feng; Li, Bing-Zhao; Tao, Ran

    2014-10-01

    Removal of noise is an important step in the image restoration process, and it remains a challenging problem in image processing. Denoising is a process used to remove the noise from the corrupted image, while retaining the edges and other detailed features as much as possible. Recently, denoising in the fractional domain is a hot research topic. The fractional-order anisotropic diffusion method can bring a less blocky effect and preserve edges in image denoising, a method that has received much interest in the literature. Based on this method, we propose a new method for image denoising, in which fractional-varying-order differential, rather than constant-order differential, is used. The theoretical analysis and experimental results show that compared with the state-of-the-art fractional-order anisotropic diffusion method, the proposed fractional-varying-order differential denoising model can preserve structure and texture well, while quickly removing noise, and yields good visual effects and better peak signal-to-noise ratio.
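
    The varying-order scheme itself is not specified in this record; what follows is a sketch of the standard Grünwald-Letnikov construction of a fixed-order fractional difference, the ingredient that a varying-order model would make spatially adaptive. The row-wise application and all names are illustrative.

```python
import numpy as np

def gl_coeffs(alpha, n):
    """Grunwald-Letnikov coefficients c_k = (-1)^k * C(alpha, k), via the
    standard recurrence c_0 = 1, c_k = c_{k-1} * (1 - (alpha + 1) / k)."""
    c = np.empty(n)
    c[0] = 1.0
    for k in range(1, n):
        c[k] = c[k - 1] * (1.0 - (alpha + 1.0) / k)
    return c

def frac_diff_rows(img, alpha, n_terms=8):
    """Fractional-order backward difference of each row, truncated to
    n_terms; a varying-order model would let alpha depend on local image
    content (an assumption here, not the paper's scheme)."""
    c = gl_coeffs(alpha, n_terms)
    out = np.zeros_like(img, dtype=float)
    for k in range(n_terms):
        shifted = img[:, : img.shape[1] - k] if k else img
        out[:, k:] += c[k] * shifted
    return out
```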

  15. Wavelet Denoising of Mobile Radiation Data

    SciTech Connect

    Campbell, D B

    2008-10-31

    The FY08 phase of this project investigated the merits of video fusion as a method for mitigating the false alarms encountered by vehicle borne detection systems in an effort to realize performance gains associated with wavelet denoising. The fusion strategy exploited the significant correlations which exist between data obtained from radiation detectors and video systems with coincident fields of view. The additional information provided by optical systems can greatly increase the capabilities of these detection systems by reducing the burden of false alarms and through the generation of actionable information. The investigation into the use of wavelet analysis techniques as a means of filtering the gross-counts signal obtained from moving radiation detectors showed promise for vehicle borne systems. However, the applicability of these techniques to man-portable systems is limited due to minimal gains in performance over the rapid feedback available to system operators under walking conditions. Furthermore, the fusion of video holds significant promise for systems operating from vehicles or systems organized into stationary arrays; however, the added complexity and hardware required by this technique renders it infeasible for man-portable systems.

  16. A new study on mammographic image denoising using multiresolution techniques

    NASA Astrophysics Data System (ADS)

    Dong, Min; Guo, Ya-Nan; Ma, Yi-De; Ma, Yu-run; Lu, Xiang-yu; Wang, Ke-ju

    2015-12-01

    Mammography is the simplest and most effective technology for early detection of breast cancer. However, lesion areas of the breast are difficult to detect because mammograms are contaminated with noise. This work focuses on discussing various multiresolution denoising techniques, including the classical methods based on wavelets and contourlets; moreover, the emerging multiresolution methods are also researched. In this work, a new denoising method based on the dual-tree contourlet transform (DCT) is proposed; the DCT possesses the advantages of approximate shift invariance, directionality and anisotropy. The proposed denoising method is implemented on mammograms, and the experimental results show that the emerging multiresolution method succeeds in maintaining edges and texture details, and can obtain better performance than the other methods both in visual effects and in terms of the Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR) and Structure Similarity (SSIM) values.

  17. Non-local MRI denoising using random sampling.

    PubMed

    Hu, Jinrong; Zhou, Jiliu; Wu, Xi

    2016-09-01

    In this paper, we propose a random sampling non-local means (SNLM) algorithm to eliminate noise in 3D MRI datasets. Non-local means (NLM) algorithms have been implemented efficiently for MRI denoising, but have always been limited by high computational complexity. Compared to conventional methods, which raster through the entire search window when computing similarity weights, the proposed SNLM algorithm randomly selects a small subset of voxels, which dramatically decreases the computational burden while yielding competitive denoising results. Moreover, the structure tensor, which encapsulates high-order information, was introduced as an optimal sampling pattern for further improvement. Numerical experiments demonstrated that the proposed SNLM method achieves a good balance between denoising quality and computational efficiency. At a relative sampling ratio of ξ=0.05, SNLM can remove noise as effectively as full NLM, while the running time is reduced to 1/20 of NLM's. PMID:27114338
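
    A 2D toy sketch of the sampling idea follows: the reference patch is compared against a random subset of the search window instead of every position. The patch and window sizes, the filtering parameter h, and the sampling ratio are illustrative; the paper's 3D implementation and structure-tensor-guided sampling are not reproduced.

```python
import numpy as np

def snlm_pixel(img, i, j, patch=3, search=10, h=0.1, ratio=0.05, rng=None):
    """Random-sampling NLM estimate of pixel (i, j); assumes (i, j) lies at
    least patch//2 pixels away from the image border."""
    rng = rng if rng is not None else np.random.default_rng()
    r = patch // 2
    ref = img[i - r:i + r + 1, j - r:j + r + 1]
    # candidate patch centers inside the search window, staying off the border
    ii = np.arange(max(i - search, r), min(i + search, img.shape[0] - r - 1) + 1)
    jj = np.arange(max(j - search, r), min(j + search, img.shape[1] - r - 1) + 1)
    centers = [(a, b) for a in ii for b in jj]
    k = max(1, int(ratio * len(centers)))      # ratio plays the role of xi
    chosen = rng.choice(len(centers), size=k, replace=False)
    num = den = 0.0
    for t in chosen:
        a, b = centers[t]
        cand = img[a - r:a + r + 1, b - r:b + r + 1]
        w = np.exp(-np.sum((ref - cand) ** 2) / (h * h))   # patch similarity
        num += w * img[a, b]
        den += w
    return num / den
```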

  18. POGs2: A Web Portal to Facilitate Cross-Species Inferences About Protein Architecture and Function in Plants

    PubMed Central

    Tomcal, Michael; Stiffler, Nicholas; Barkan, Alice

    2013-01-01

    The Putative orthologous Groups 2 Database (POGs2) (http://pogs.uoregon.edu/) integrates information about the inferred proteomes of four plant species (Arabidopsis thaliana, Zea mays, Oryza sativa, and Populus trichocarpa) in a display that facilitates comparisons among orthologs and extrapolation of annotations among species. A single-page view collates key functional data for members of each Putative Orthologous Group (POG): graphical representations of InterPro domains, predicted and established intracellular locations, and imported gene descriptions. The display incorporates POGs predicted by two different algorithms as well as gene trees, allowing users to evaluate the validity of POG memberships. The web interface provides ready access to sequences and alignments of POG members, as well as sequences, alignments, and domain architectures of closely-related paralogs. A simple and flexible search interface permits queries by BLAST and by any combination of gene identifier, keywords, domain names, InterPro identifiers, and intracellular location. The concurrent display of domain architectures for orthologous proteins highlights errors in gene models and false-negatives in domain predictions. The POGs2 layout is also useful for exploring candidate genes identified by transposon tagging, QTL mapping, map-based cloning, and proteomics, and for navigating between orthologous groups that belong to the same gene family. PMID:24340041

  19. POGs2: a web portal to facilitate cross-species inferences about protein architecture and function in plants.

    PubMed

    Tomcal, Michael; Stiffler, Nicholas; Barkan, Alice

    2013-01-01

    The Putative orthologous Groups 2 Database (POGs2) (http://pogs.uoregon.edu/) integrates information about the inferred proteomes of four plant species (Arabidopsis thaliana, Zea mays, Oryza sativa, and Populus trichocarpa) in a display that facilitates comparisons among orthologs and extrapolation of annotations among species. A single-page view collates key functional data for members of each Putative Orthologous Group (POG): graphical representations of InterPro domains, predicted and established intracellular locations, and imported gene descriptions. The display incorporates POGs predicted by two different algorithms as well as gene trees, allowing users to evaluate the validity of POG memberships. The web interface provides ready access to sequences and alignments of POG members, as well as sequences, alignments, and domain architectures of closely-related paralogs. A simple and flexible search interface permits queries by BLAST and by any combination of gene identifier, keywords, domain names, InterPro identifiers, and intracellular location. The concurrent display of domain architectures for orthologous proteins highlights errors in gene models and false-negatives in domain predictions. The POGs2 layout is also useful for exploring candidate genes identified by transposon tagging, QTL mapping, map-based cloning, and proteomics, and for navigating between orthologous groups that belong to the same gene family. PMID:24340041

  20. Sinogram denoising via simultaneous sparse representation in learned dictionaries

    NASA Astrophysics Data System (ADS)

    Karimi, Davood; Ward, Rabab K.

    2016-05-01

    Reducing the radiation dose in computed tomography (CT) is highly desirable but it leads to excessive noise in the projection measurements. This can significantly reduce the diagnostic value of the reconstructed images. Removing the noise in the projection measurements is, therefore, essential for reconstructing high-quality images, especially in low-dose CT. In recent years, two new classes of patch-based denoising algorithms proved superior to other methods in various denoising applications. The first class is based on sparse representation of image patches in a learned dictionary. The second class is based on the non-local means method. Here, the image is searched for similar patches and the patches are processed together to find their denoised estimates. In this paper, we propose a novel denoising algorithm for cone-beam CT projections. The proposed method has similarities to both these algorithmic classes but is more effective and much faster. In order to exploit both the correlation between neighboring pixels within a projection and the correlation between pixels in neighboring projections, the proposed algorithm stacks noisy cone-beam projections together to form a 3D image and extracts small overlapping 3D blocks from this 3D image for processing. We propose a fast algorithm for clustering all extracted blocks. The central assumption in the proposed algorithm is that all blocks in a cluster have a joint-sparse representation in a well-designed dictionary. We describe algorithms for learning such a dictionary and for denoising a set of projections using this dictionary. We apply the proposed algorithm on simulated and real data and compare it with three other algorithms. Our results show that the proposed algorithm outperforms some of the best denoising algorithms, while also being much faster.

  1. Sinogram denoising via simultaneous sparse representation in learned dictionaries.

    PubMed

    Karimi, Davood; Ward, Rabab K

    2016-05-01

    Reducing the radiation dose in computed tomography (CT) is highly desirable but it leads to excessive noise in the projection measurements. This can significantly reduce the diagnostic value of the reconstructed images. Removing the noise in the projection measurements is, therefore, essential for reconstructing high-quality images, especially in low-dose CT. In recent years, two new classes of patch-based denoising algorithms proved superior to other methods in various denoising applications. The first class is based on sparse representation of image patches in a learned dictionary. The second class is based on the non-local means method. Here, the image is searched for similar patches and the patches are processed together to find their denoised estimates. In this paper, we propose a novel denoising algorithm for cone-beam CT projections. The proposed method has similarities to both these algorithmic classes but is more effective and much faster. In order to exploit both the correlation between neighboring pixels within a projection and the correlation between pixels in neighboring projections, the proposed algorithm stacks noisy cone-beam projections together to form a 3D image and extracts small overlapping 3D blocks from this 3D image for processing. We propose a fast algorithm for clustering all extracted blocks. The central assumption in the proposed algorithm is that all blocks in a cluster have a joint-sparse representation in a well-designed dictionary. We describe algorithms for learning such a dictionary and for denoising a set of projections using this dictionary. We apply the proposed algorithm on simulated and real data and compare it with three other algorithms. Our results show that the proposed algorithm outperforms some of the best denoising algorithms, while also being much faster. PMID:27055224
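
    The pipeline can be imitated in 2D with scikit-learn's dictionary-learning tools: learn atoms from the noisy patches themselves, sparse-code each patch with OMP, and average the reconstructions back into an image. This is only a single-image analogue of the idea; the paper's 3D block stacking, clustering, and joint-sparse coding are not reproduced.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)

def dict_denoise(noisy, patch_size=(7, 7), n_atoms=100, n_nonzero=3):
    """2D stand-in for dictionary-based projection denoising: learn a patch
    dictionary from the noisy image, then sparse-code every patch with OMP."""
    patches = extract_patches_2d(noisy, patch_size)
    flat = patches.reshape(len(patches), -1)
    mean = flat.mean(axis=1, keepdims=True)
    flat = flat - mean                          # code the residual texture only
    dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=n_nonzero,
                                       random_state=0).fit(flat)
    code = dico.transform(flat)
    recon = code @ dico.components_ + mean      # denoised patches
    return reconstruct_from_patches_2d(recon.reshape(patches.shape), noisy.shape)
```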

  2. Entropic Inference

    NASA Astrophysics Data System (ADS)

    Caticha, Ariel

    2011-03-01

    In this tutorial we review the essential arguments behind entropic inference. We focus on the epistemological notion of information and its relation to the Bayesian beliefs of rational agents. The problem of updating from a prior to a posterior probability distribution is tackled through an eliminative induction process that singles out the logarithmic relative entropy as the unique tool for inference. The resulting method of Maximum relative Entropy (ME) includes as special cases both MaxEnt and Bayes' rule, and therefore unifies the two themes of these workshops—the Maximum Entropy and the Bayesian methods—into a single general inference scheme.
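
    Written out, the functional singled out by that argument and the resulting update are as follows (standard form; the constraint set C encodes the new information, and q is the prior):

```latex
% Relative entropy of a candidate distribution p with respect to the prior q;
% the ME posterior maximizes S subject to the constraints C encoding the new
% information. MaxEnt (uniform q) and Bayes' rule (data constraints) are
% recovered as special cases.
\begin{equation*}
S[p \,\|\, q] \;=\; -\int dx\; p(x)\,\log\frac{p(x)}{q(x)},
\qquad
p^{*} \;=\; \underset{p \,\in\, \mathcal{C}}{\arg\max}\; S[p \,\|\, q].
\end{equation*}
```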

  3. GPU-Accelerated Denoising in 3D (GD3D)

    Energy Science and Technology Software Center (ESTSC)

    2013-10-01

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. This software addresses two facets of this promising application: what tuning is necessary to achieve optimal performance on a modern GPU? And what parameters yield the best denoising results in practice? To answer the first question, the software performs an autotuning step to empirically determine optimal memory blocking on the GPU. To answer the second, it performs a sweep of algorithm parameters to determine the combination that best reduces the mean squared error relative to a noiseless reference image.

  4. Image denoising with the dual-tree complex wavelet transform

    NASA Astrophysics Data System (ADS)

    Yaseen, Alauldeen S.; Pavlova, Olga N.; Pavlov, Alexey N.; Hramov, Alexander E.

    2016-04-01

    The purpose of this study is to compare image denoising techniques based on real and complex wavelet-transforms. Possibilities provided by the classical discrete wavelet transform (DWT) with hard and soft thresholding are considered, and influences of the wavelet basis and image resizing are discussed. The quality of image denoising for the standard 2-D DWT and the dual-tree complex wavelet transform (DT-CWT) is studied. It is shown that DT-CWT outperforms 2-D DWT at the appropriate selection of the threshold level.
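
    The classical 2-D DWT side of this comparison is easy to reproduce with PyWavelets; a minimal sketch follows, using the universal threshold with a MAD noise estimate (an assumed setting, not necessarily the authors'). The dual-tree side would require a dedicated implementation such as the third-party dtcwt package.

```python
import numpy as np
import pywt

def dwt2_denoise(img, wavelet="db8", level=3, mode="soft"):
    """Classical 2-D DWT shrinkage baseline: threshold all detail subbands
    with the universal threshold sigma * sqrt(2 * log N)."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # noise level from the finest diagonal subband (median absolute deviation)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    t = sigma * np.sqrt(2.0 * np.log(img.size))
    shrunk = [coeffs[0]] + [
        tuple(pywt.threshold(d, t, mode=mode) for d in detail)
        for detail in coeffs[1:]
    ]
    # note: the output may be padded by one pixel for odd-sized inputs
    return pywt.waverec2(shrunk, wavelet)
```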

  5. Simultaneous de-noising in phase contrast tomography

    NASA Astrophysics Data System (ADS)

    Koehler, Thomas; Roessl, Ewald

    2012-07-01

    In this work, we investigate methods for de-noising of tomographic differential phase contrast and absorption contrast images. We exploit the fact that in grating-based differential phase contrast imaging (DPCI), first, several images are acquired simultaneously in exactly the same geometry, and second, these different images can show very different contrast-to-noise-ratios. These features of grating-based DPCI are used to generalize the conventional bilateral filter. Experiments using simulations show a superior de-noising performance of the generalized algorithm compared with the conventional one.
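
    One plausible reading of the generalized filter is a cross/joint bilateral form: spatial weights as usual, range weights computed on the simultaneously acquired higher-CNR image and applied to the noisier one. The naive NumPy sketch below implements that reading; it illustrates the idea and is not the authors' algorithm.

```python
import numpy as np

def joint_bilateral(target, guide, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Cross/joint bilateral sketch: denoise 'target' (e.g. the phase image)
    with range weights taken from the cleaner 'guide' (e.g. the absorption
    image) acquired in the same geometry."""
    h, w = target.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    pad_t = np.pad(target, radius, mode="reflect")
    pad_g = np.pad(guide, radius, mode="reflect")
    out = np.empty_like(target, dtype=float)
    for i in range(h):
        for j in range(w):
            win_t = pad_t[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            win_g = pad_g[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            range_w = np.exp(-(win_g - guide[i, j]) ** 2 / (2 * sigma_r ** 2))
            wgt = spatial * range_w
            out[i, j] = np.sum(wgt * win_t) / np.sum(wgt)
    return out
```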

  6. The NIFTY way of Bayesian signal inference

    SciTech Connect

    Selig, Marco

    2014-12-05

    We introduce NIFTY, 'Numerical Information Field Theory', a software package for the development of Bayesian signal inference algorithms that operate independently from any underlying spatial grid and its resolution. A large number of Bayesian and Maximum Entropy methods for 1D signal reconstruction, 2D imaging, as well as 3D tomography, appear formally similar, but one often finds individualized implementations that are neither flexible nor easily transferable. Signal inference in the framework of NIFTY can be done in an abstract way, such that algorithms, prototyped in 1D, can be applied to real world problems in higher-dimensional settings. NIFTY as a versatile library is applicable and already has been applied in 1D, 2D, 3D and spherical settings. A recent application is the D3PO algorithm targeting the non-trivial task of denoising, deconvolving, and decomposing photon observations in high energy astronomy.

  7. The NIFTy way of Bayesian signal inference

    NASA Astrophysics Data System (ADS)

    Selig, Marco

    2014-12-01

    We introduce NIFTy, "Numerical Information Field Theory", a software package for the development of Bayesian signal inference algorithms that operate independently from any underlying spatial grid and its resolution. A large number of Bayesian and Maximum Entropy methods for 1D signal reconstruction, 2D imaging, as well as 3D tomography, appear formally similar, but one often finds individualized implementations that are neither flexible nor easily transferable. Signal inference in the framework of NIFTy can be done in an abstract way, such that algorithms, prototyped in 1D, can be applied to real world problems in higher-dimensional settings. NIFTy as a versatile library is applicable and already has been applied in 1D, 2D, 3D and spherical settings. A recent application is the D3PO algorithm targeting the non-trivial task of denoising, deconvolving, and decomposing photon observations in high energy astronomy.

  8. Comparison of automatic denoising methods for phonocardiograms with extraction of signal parameters via the Hilbert Transform

    NASA Astrophysics Data System (ADS)

    Messer, Sheila R.; Agzarian, John; Abbott, Derek

    2001-05-01

    Phonocardiograms (PCGs) have many advantages over traditional auscultation (listening to the heart) because they may be replayed, may be analyzed for spectral and frequency content, and frequencies inaudible to the human ear may be recorded. However, various sources of noise may pollute a PCG, including lung sounds, environmental noise and noise generated from contact between the recording device and the skin. Because PCG signals are known to be nonlinear and it is often not possible to determine their noise content, traditional de-noising methods may not be effectively applied. However, other methods including wavelet de-noising, wavelet packet de-noising and averaging can be employed to de-noise the PCG. This study examines and compares these de-noising methods. It answers such questions as which de-noising method gives a better SNR, the magnitude of signal information that is lost as a result of the de-noising process, and the appropriate uses of the different methods, down to such specifics as which wavelets and decomposition levels give the best results in wavelet and wavelet packet de-noising. In general, wavelet and wavelet packet de-noising performed roughly equally, with optimal de-noising occurring at 3-5 levels of decomposition. Averaging also proved a highly useful de-noising technique; however, in some cases averaging is not appropriate. The Hilbert Transform is used to illustrate the results of the de-noising process and to extract instantaneous features including instantaneous amplitude, frequency, and phase.
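
    The feature-extraction step at the end is compact in code: the analytic signal obtained from the Hilbert transform directly yields instantaneous amplitude, phase, and frequency. A minimal SciPy sketch (the signal and sampling-rate names are illustrative):

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_features(pcg, fs):
    """Instantaneous amplitude, phase, and frequency of a (de-noised) PCG
    via the analytic signal."""
    analytic = hilbert(pcg)
    amplitude = np.abs(analytic)                # instantaneous amplitude
    phase = np.unwrap(np.angle(analytic))       # instantaneous phase (radians)
    freq = np.diff(phase) * fs / (2.0 * np.pi)  # instantaneous frequency (Hz)
    return amplitude, phase, freq
```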

  9. THE PANCHROMATIC HUBBLE ANDROMEDA TREASURY. IV. A PROBABILISTIC APPROACH TO INFERRING THE HIGH-MASS STELLAR INITIAL MASS FUNCTION AND OTHER POWER-LAW FUNCTIONS

    SciTech Connect

    Weisz, Daniel R.; Fouesneau, Morgan; Dalcanton, Julianne J.; Clifton Johnson, L.; Beerman, Lori C.; Williams, Benjamin F.; Hogg, David W.; Foreman-Mackey, Daniel T.; Rix, Hans-Walter; Gouliermis, Dimitrios; Dolphin, Andrew E.; Lang, Dustin; Bell, Eric F.; Gordon, Karl D.; Kalirai, Jason S.; Skillman, Evan D.

    2013-01-10

    We present a probabilistic approach for inferring the parameters of the present-day power-law stellar mass function (MF) of a resolved young star cluster. This technique (1) fully exploits the information content of a given data set; (2) can account for observational uncertainties in a straightforward way; (3) assigns meaningful uncertainties to the inferred parameters; (4) avoids the pitfalls associated with binning data; and (5) can be applied to virtually any resolved young cluster, laying the groundwork for a systematic study of the high-mass stellar MF (M ≳ 1 M⊙). Using simulated clusters and Markov Chain Monte Carlo sampling of the probability distribution functions, we show that estimates of the MF slope, α, are unbiased and that the uncertainty, Δα, depends primarily on the number of observed stars and on the range of stellar masses they span, assuming that the uncertainties on individual masses and the completeness are both well characterized. Using idealized mock data, we compute the theoretical precision, i.e., lower limits, on α, and provide an analytic approximation for Δα as a function of the observed number of stars and mass range. Comparison with literature studies shows that ~3/4 of quoted uncertainties are smaller than the theoretical lower limit. By correcting these uncertainties to the theoretical lower limits, we find that the literature studies yield ⟨α⟩ = 2.46, with a 1σ dispersion of 0.35 dex. We verify that it is impossible for a power-law MF to obtain meaningful constraints on the upper mass limit of the initial mass function, beyond the lower bound of the most massive star actually observed. We show that avoiding substantial biases in the MF slope requires (1) including the MF as a prior when deriving individual stellar mass estimates, (2) modeling the uncertainties in the individual stellar masses, and (3) fully characterizing and then explicitly modeling the

  10. The Panchromatic Hubble Andromeda Treasury. IV. A Probabilistic Approach to Inferring the High-mass Stellar Initial Mass Function and Other Power-law Functions

    NASA Astrophysics Data System (ADS)

    Weisz, Daniel R.; Fouesneau, Morgan; Hogg, David W.; Rix, Hans-Walter; Dolphin, Andrew E.; Dalcanton, Julianne J.; Foreman-Mackey, Daniel T.; Lang, Dustin; Johnson, L. Clifton; Beerman, Lori C.; Bell, Eric F.; Gordon, Karl D.; Gouliermis, Dimitrios; Kalirai, Jason S.; Skillman, Evan D.; Williams, Benjamin F.

    2013-01-01

    We present a probabilistic approach for inferring the parameters of the present-day power-law stellar mass function (MF) of a resolved young star cluster. This technique (1) fully exploits the information content of a given data set; (2) can account for observational uncertainties in a straightforward way; (3) assigns meaningful uncertainties to the inferred parameters; (4) avoids the pitfalls associated with binning data; and (5) can be applied to virtually any resolved young cluster, laying the groundwork for a systematic study of the high-mass stellar MF (M ≳ 1 M⊙). Using simulated clusters and Markov Chain Monte Carlo sampling of the probability distribution functions, we show that estimates of the MF slope, α, are unbiased and that the uncertainty, Δα, depends primarily on the number of observed stars and on the range of stellar masses they span, assuming that the uncertainties on individual masses and the completeness are both well characterized. Using idealized mock data, we compute the theoretical precision, i.e., lower limits, on α, and provide an analytic approximation for Δα as a function of the observed number of stars and mass range. Comparison with literature studies shows that ~3/4 of quoted uncertainties are smaller than the theoretical lower limit. By correcting these uncertainties to the theoretical lower limits, we find that the literature studies yield ⟨α⟩ = 2.46, with a 1σ dispersion of 0.35 dex. We verify that it is impossible for a power-law MF to obtain meaningful constraints on the upper mass limit of the initial mass function, beyond the lower bound of the most massive star actually observed. We show that avoiding substantial biases in the MF slope requires (1) including the MF as a prior when deriving individual stellar mass estimates, (2) modeling the uncertainties in the individual stellar masses, and (3) fully characterizing and then explicitly modeling the completeness for stars of a given mass. The precision on MF
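
    The core of the probabilistic approach is the likelihood of a truncated power law; in schematic form, for N stars with masses m_i in [m_min, m_max] (the paper additionally models per-star mass uncertainties and completeness, which are omitted here):

```latex
% Normalized single-star density and the resulting log-likelihood for the
% MF slope alpha; valid for alpha != 1.
\begin{equation*}
p(m \mid \alpha) \;=\;
\frac{(1-\alpha)\, m^{-\alpha}}
     {m_{\max}^{\,1-\alpha} - m_{\min}^{\,1-\alpha}},
\qquad
\ln \mathcal{L}(\alpha) \;=\; \sum_{i=1}^{N} \ln p(m_i \mid \alpha).
\end{equation*}
```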

  11. A non-gradient-based energy minimization approach to the image denoising problem

    NASA Astrophysics Data System (ADS)

    Lukić, Tibor; Žunić, Joviša

    2014-09-01

    A common approach to denoising images is to minimize an energy function combining a quadratic data fidelity term with a total variation-based regularization. The total variation, which is built on the gradient magnitude function, originally comes from mathematical analysis and is defined on a continuous domain only. When working in a discrete domain (e.g. when dealing with digital images), the accuracy of the gradient computation is limited by the applied image resolution. In this paper we propose a new approach, in which the gradient magnitude function is replaced with an operator with similar properties (i.e. it also expresses the intensity variation in a neighborhood of the considered point) that is concurrently applicable in both continuous and discrete space. This operator is the shape elongation measure, one of the shape descriptors intensively used in shape-based image processing and computer vision tasks. The experiments provided in this paper confirm that the proposed approach delivers high-quality reconstructions. Based on performance comparisons on a number of test images, the new method outperforms the energy minimization-based denoising methods often used in the literature for method comparison.
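
    Schematically, the substitution the paper makes sits in the regularization term of the standard energy; here E(u; x) denotes the shape-elongation-based local variation measure, with notation chosen for illustration rather than taken from the paper:

```latex
% Standard TV-regularized denoising energy (left) and the paper's variant
% (right), in which the gradient magnitude is replaced by a local shape
% elongation measure applicable in both continuous and discrete space.
\begin{equation*}
\min_u \; \tfrac{1}{2}\!\int_\Omega (u - f)^2 \, dx
\;+\; \lambda\!\int_\Omega |\nabla u| \, dx
\quad\longrightarrow\quad
\min_u \; \tfrac{1}{2}\!\int_\Omega (u - f)^2 \, dx
\;+\; \lambda\!\int_\Omega \mathcal{E}(u;\, x) \, dx .
\end{equation*}
```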

  12. Adaptive redundant multiwavelet denoising with improved neighboring coefficients for gearbox fault detection

    NASA Astrophysics Data System (ADS)

    Chen, Jinglong; Zi, Yanyang; He, Zhengjia; Wang, Xiaodong

    2013-07-01

    Gearbox fault detection under strong background noise is a challenging task. It is feasible to make the fault feature distinct through multiwavelet denoising. In addition to the advantage of multi-resolution analysis, a multiwavelet with several scaling functions and wavelet functions can detect the different fault features effectively. However, fixed basis functions not related to the given signal may lower the accuracy of fault detection. Moreover, the multiwavelet transform may produce Gibbs phenomena in the reconstruction step. Furthermore, both the traditional term-by-term threshold and the neighboring-coefficients approach fail to consider the direct spatial dependency of wavelet coefficients at adjacent scales. To overcome these deficiencies, adaptive redundant multiwavelet (ARM) denoising with improved neighboring coefficients (NeighCoeff) is proposed. Based on the symmetric multiwavelet lifting scheme (SMLS), taking kurtosis—partial envelope spectrum entropy as the evaluation objective and genetic algorithms as the optimization method, ARM is constructed. Considering the intra-scale and inter-scale dependency of wavelet coefficients, the improved NeighCoeff method is developed and incorporated into ARM. The proposed method is applied to both simulated signals and practical gearbox vibration signals under different conditions. The results show its effectiveness and reliability for gearbox fault detection.
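
    For reference, the classical NeighCoeff rule that the paper builds on shrinks each detail coefficient by the energy of its three-coefficient neighborhood; a minimal sketch follows (the paper's improvement additionally couples neighborhoods across adjacent scales, which is not reproduced here):

```python
import numpy as np

def neighcoeff_shrink(d, sigma):
    """Classical NeighCoeff shrinkage of a 1D detail-coefficient array d:
    each coefficient is scaled by max(0, 1 - lambda^2 / S^2), where S^2 is
    the energy of its 3-coefficient neighborhood and lambda is the
    universal threshold."""
    n = d.size
    lam2 = 2.0 * sigma ** 2 * np.log(n)        # squared universal threshold
    padded = np.pad(d, 1, mode="edge")
    s2 = padded[:-2] ** 2 + padded[1:-1] ** 2 + padded[2:] ** 2
    s2 = np.maximum(s2, 1e-12)                 # guard against division by zero
    return d * np.maximum(0.0, 1.0 - lam2 / s2)
```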

  13. Discrete shearlet transform on GPU with applications in anomaly detection and denoising

    NASA Astrophysics Data System (ADS)

    Gibert, Xavier; Patel, Vishal M.; Labate, Demetrio; Chellappa, Rama

    2014-12-01

    Shearlets have emerged in recent years as one of the most successful methods for the multiscale analysis of multidimensional signals. Unlike wavelets, shearlets form a pyramid of well-localized functions defined not only over a range of scales and locations, but also over a range of orientations and with highly anisotropic supports. As a result, shearlets are much more effective than traditional wavelets in handling the geometry of multidimensional data, and this was exploited in a wide range of applications from image and signal processing. However, despite their desirable properties, the wider applicability of shearlets is limited by the computational complexity of current software implementations. For example, denoising a single 512 × 512 image using a current implementation of the shearlet-based shrinkage algorithm can take between 10 s and 2 min, depending on the number of CPU cores, and much longer processing times are required for video denoising. On the other hand, due to the parallel nature of the shearlet transform, it is possible to use graphics processing units (GPU) to accelerate its implementation. In this paper, we present an open source stand-alone implementation of the 2D discrete shearlet transform using CUDA C++ as well as GPU-accelerated MATLAB implementations of the 2D and 3D shearlet transforms. We have instrumented the code so that we can analyze the running time of each kernel under different GPU hardware. In addition to denoising, we describe a novel application of shearlets for detecting anomalies in textured images. In this application, computation times can be reduced by a factor of 50 or more, compared to multicore CPU implementations.

  14. Denoising of 4D Cardiac Micro-CT Data Using Median-Centric Bilateral Filtration

    PubMed Central

    Clark, D.; Johnson, G.A.; Badea, C.T.

    2012-01-01

    Bilateral filtration has proven an effective tool for denoising CT data. The classic filter utilizes Gaussian domain and range weighting functions in 2D. More recently, other distributions have yielded more accurate results in specific applications, and the bilateral filtration framework has been extended to higher dimensions. In this study, brute-force optimization is employed to evaluate the use of several alternative distributions for both domain and range weighting: Andrew's Sine Wave, El Fallah Ford, Gaussian, Flat, Lorentzian, Huber's Minimax, Tukey's Bi-weight, and Cosine. Two variations on the classic bilateral filter which use median filtration to reduce bias in range weights are also investigated: median-centric and hybrid bilateral filtration. Using the 4D MOBY mouse phantom reconstructed with noise (stdev. ~ 65 HU), hybrid bilateral filtration, a combination of the classic and median-centric filters, with Flat domain and range weighting is shown to provide optimal denoising results (PSNRs: 31.69, classic; 31.58, median-centric; 32.25, hybrid). To validate these phantom studies, the optimal filters are also applied to in vivo, 4D cardiac micro-CT data acquired in the mouse. In a constant region of the left ventricle, hybrid bilateral filtration with Flat domain and range weighting is shown to provide optimal smoothing (stdev: original, 72.2 HU; classic, 20.3 HU; median-centric, 24.1 HU; hybrid, 15.9 HU). While the optimal results were obtained using 4D filtration, the 3D hybrid filter is ultimately recommended for denoising 4D cardiac micro-CT data because it is more computationally tractable and less prone to artifacts (MOBY PSNR: 32.05; left ventricle stdev: 20.5 HU). PMID:24386540
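
    As a concrete illustration of the weighting choices compared above, the following Python sketch implements a toy 2D bilateral filter with a selectable Gaussian or Flat range kernel, plus an option that computes range weights against a median-filtered reference in the spirit of the median-centric variant. Parameter values and the wrap-around boundary handling are illustrative assumptions, not the paper's implementation.

      import numpy as np
      from scipy.ndimage import median_filter

      def bilateral(img, radius=3, sigma_d=2.0, sigma_r=30.0,
                    range_kernel="gaussian", median_center=False):
          # median_center=True: range weights are computed against a
          # median-filtered reference to reduce bias from noisy centers.
          ref = median_filter(img, size=2 * radius + 1) if median_center else img
          out = np.zeros_like(img, dtype=float)
          norm = np.zeros_like(img, dtype=float)
          for dy in range(-radius, radius + 1):
              for dx in range(-radius, radius + 1):
                  w_d = np.exp(-(dx * dx + dy * dy) / (2 * sigma_d**2))
                  shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
                  diff = shifted - ref
                  if range_kernel == "flat":          # box kernel of half-width sigma_r
                      w_r = (np.abs(diff) <= sigma_r).astype(float)
                  else:                               # classic Gaussian range kernel
                      w_r = np.exp(-diff**2 / (2 * sigma_r**2))
                  out += w_d * w_r * shifted
                  norm += w_d * w_r
          return out / np.maximum(norm, 1e-12)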

  15. A wavelet multiscale denoising algorithm for magnetic resonance (MR) images

    NASA Astrophysics Data System (ADS)

    Yang, Xiaofeng; Fei, Baowei

    2011-02-01

    Based on the Radon transform, a wavelet multiscale denoising method is proposed for MR images. The approach explicitly accounts for the Rician nature of MR data. Based on noise statistics we apply the Radon transform to the original MR images and use the Gaussian noise model to process the MR sinogram image. A translation invariant wavelet transform is employed to decompose the MR 'sinogram' into multiscales in order to effectively denoise the images. Based on the nature of Rician noise we estimate the noise variance at different scales. For the final denoised sinogram we apply the inverse Radon transform in order to reconstruct the original MR images. Phantom, simulated brain MR images, and human brain MR images were used to validate our method. The experimental results show the superiority of the proposed scheme over the traditional methods. Our method can reduce Rician noise while preserving the key image details and features. The wavelet denoising method can have wide applications in MRI as well as other imaging modalities.
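
    A minimal sketch of the sinogram-domain pipeline described above, using scikit-image's Radon tools and generic wavelet shrinkage in place of the paper's Rician-aware, scale-dependent variance estimation; the call signatures assume skimage >= 0.19 with PyWavelets installed.

      import numpy as np
      from skimage.transform import radon, iradon
      from skimage.restoration import denoise_wavelet

      def denoise_mr_via_sinogram(image):
          # Project into the Radon ('sinogram') domain, where the noise is
          # treated as approximately Gaussian, denoise, and reconstruct.
          theta = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)
          sino = radon(image, theta=theta)
          sino_dn = denoise_wavelet(sino, method="BayesShrink",
                                    mode="soft", rescale_sigma=True)
          return iradon(sino_dn, theta=theta, filter_name="ramp")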

  16. A procedure for denoising dual-axis swallowing accelerometry signals.

    PubMed

    Sejdić, Ervin; Steele, Catriona M; Chau, Tom

    2010-01-01

    Dual-axis swallowing accelerometry is an emerging tool for the assessment of dysphagia (swallowing difficulties). These signals, however, can be very noisy as a result of physiological and motion artifacts. In this note, we propose a novel scheme for denoising those signals, i.e., a computationally efficient search for the optimal denoising threshold within a reduced wavelet subspace. To determine a viable subspace, the algorithm relies on the minimum value of the estimated upper bound for the reconstruction error. A numerical analysis of the proposed scheme using synthetic test signals demonstrated that the proposed scheme is computationally more efficient than minimum noiseless description length (MNDL)-based denoising. It also yields smaller reconstruction errors than the MNDL, SURE and Donoho denoising methods. When applied to dual-axis swallowing accelerometry signals, the proposed scheme exhibits improved performance for dry, wet and wet chin tuck swallows. These results are important for the further development of medical devices based on dual-axis swallowing accelerometry signals. PMID:19940343

  17. Perceptual inference.

    PubMed

    Aggelopoulos, Nikolaos C

    2015-08-01

    Perceptual inference refers to the ability to infer sensory stimuli from predictions that result from internal neural representations built through prior experience. Methods of Bayesian statistical inference and decision theory model cognition adequately by using error sensing either in guiding action or in "generative" models that predict the sensory information. In this framework, perception can be seen as a process qualitatively distinct from sensation, a process of information evaluation using previously acquired and stored representations (memories) that is guided by sensory feedback. The stored representations can be utilised as internal models of sensory stimuli enabling long-term associations, for example in operant conditioning. Evidence for perceptual inference is contributed by such phenomena as the cortical co-localisation of object perception with object memory, the invariance of the responses of some neurons to variations in the stimulus, as well as situations in which perception can be dissociated from sensation. In the context of perceptual inference, sensory areas of the cerebral cortex that have been facilitated by a priming signal may be regarded as comparators in a closed feedback loop, similar to the better known motor reflexes in the sensorimotor system. The adult cerebral cortex can be regarded as similar to a servomechanism, in using sensory feedback to correct internal models, producing predictions of the outside world on the basis of past experience. PMID:25976632

  18. Crustal anisotropy in northeastern Tibetan Plateau inferred from receiver functions: Rock textures caused by metamorphic fluids and lower crust flow?

    NASA Astrophysics Data System (ADS)

    Liu, Zhen; Park, Jeffrey; Rye, Danny M.

    2015-10-01

    The crust of the Tibetan Plateau may have formed via shortening/thickening or large-scale underthrusting, and was subsequently modified via lower crust channel flows and volatile-mediated regional metamorphism. The amplitude and distribution of crustal anisotropy record the history of continental deformation, offering clues to its formation and later modification. In this study, we first investigate the back-azimuth dependence of Ps converted phases using multitaper receiver functions (RFs). We analyze teleseismic data for 35 temporary broadband stations in the ASCENT experiment located in northeastern Tibet. We stack receiver functions after a moving-window moveout correction. Major features of the RFs include: 1) Ps arrivals at 8-10 s on the radial components, suggesting a 70-90-km crustal thickness in the study area; 2) two-lobed back-azimuth variation for intra-crustal Ps phases in the upper crust (< 20 km), consistent with tilted symmetry axis anisotropy or dipping interfaces; 3) significant Ps arrivals with four-lobed back-azimuth variation distributed in distinct layers in the middle and lower crust (up to 60 km), corresponding to (sub)horizontal-axis anisotropy; and 4) weak or no evidence of azimuthal anisotropy in the lowermost crust. To study the anisotropy, we compare the observed RF stacks with one-dimensional reflectivity synthetic seismograms in anisotropic media, and fit major features by "trial and error" forward modeling. Crustal anisotropy offers few clues on plateau formation, but strong evidence of ongoing deformation and metamorphism. We infer strong horizontal-axis anisotropy concentrated in the middle and lower crust, which could be explained by vertically aligned sheet silicates, open cracks filled with magma or other fluid, vertical vein structures or by 1-10-km-scale chimney structures that have focused metamorphic fluids. Simple dynamic models encounter difficulty in generating vertically aligned sheet silicates. Instead, we interpret our data to

  19. The Hilbert-Huang Transform-Based Denoising Method for the TEM Response of a PRBS Source Signal

    NASA Astrophysics Data System (ADS)

    Hai, Li; Guo-qiang, Xue; Pan, Zhao; Hua-sen, Zhong; Khan, Muhammad Younis

    2016-05-01

    The denoising process is critical in processing transient electromagnetic (TEM) sounding data. For the full-waveform pseudo-random binary sequence (PRBS) response, an inadequate noise estimation may result in an erroneous interpretation. We consider the Hilbert-Huang transform (HHT) and its application to suppress the noise in the PRBS response. The focus is on the thresholding scheme to suppress the noise and on the analysis of the signal based on its Hilbert time-frequency representation. The method first decomposes the signal into intrinsic mode functions and then, inspired by the thresholding scheme in wavelet analysis, applies an adaptive interval thresholding that sets to zero all components of an intrinsic mode function which are lower than a threshold related to the noise level. The algorithm is based on the characteristics of the PRBS response. The HHT-based denoising scheme is tested on synthetic and field data with different noise levels. The results show that the proposed method has a good capability for denoising and detail preservation.
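
    The decomposition-plus-thresholding flow can be sketched as follows, assuming the PyEMD package (pip install EMD-signal) for the empirical mode decomposition. This is a simplified sample-wise hard threshold on the high-frequency intrinsic mode functions, not the paper's adaptive interval thresholding tuned to the PRBS response; the threshold rule is an illustrative assumption.

      import numpy as np
      from PyEMD import EMD   # assumed package: EMD-signal on PyPI

      def hht_style_denoise(signal, noise_factor=2.0):
          imfs = EMD().emd(signal)                     # rows = IMFs, last ~ trend
          sigma = np.median(np.abs(imfs[0])) / 0.6745  # noise scale from finest IMF
          thr = noise_factor * sigma * np.sqrt(2.0 * np.log(signal.size))
          out = []
          for k, imf in enumerate(imfs):
              if k < len(imfs) - 1:                    # keep the residual trend intact
                  imf = np.where(np.abs(imf) < thr, 0.0, imf)
              out.append(imf)
          return np.sum(out, axis=0)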

  1. Enhancement of signal denoising and multiple fault signatures detecting in rotating machinery using dual-tree complex wavelet transform

    NASA Astrophysics Data System (ADS)

    Wang, Yanxue; He, Zhengjia; Zi, Yanyang

    2010-01-01

    In order to enhance the desired features related to some special type of machine fault, a technique based on the dual-tree complex wavelet transform (DTCWT) is proposed in this paper. It is demonstrated by means of numerical simulations that the DTCWT enjoys better shift invariance and reduced spectral aliasing than the second-generation wavelet transform (SGWT) and empirical mode decomposition. These advantages of the DTCWT arise from the relationship between the two dual-tree wavelet basis functions, instead of the matching of a single wavelet basis function to the signal being analyzed. Since noise inevitably exists in the measured signals, an enhanced vibration-signal denoising algorithm incorporating DTCWT with NeighCoeff shrinkage is also developed. Denoising results for vibration signals from a cracked gear indicate that the proposed denoising method can effectively remove noise and retain the valuable information as much as possible, compared with DWT- and SGWT-based NeighCoeff shrinkage denoising methods. As is well known, excavation of comprehensive signatures embedded in the vibration signals is of practical importance to clarify the roots of the fault, especially combined faults. In the case of multiple-feature detection, diagnosis results of rolling element bearings with combined faults and of actual industrial equipment confirm that the proposed DTCWT-based method is a powerful and versatile tool that consistently outperforms SGWT and the fast kurtogram, which have been widely used recently. Moreover, it must be noted that the proposed method is well suited to on-line surveillance and diagnosis due to its good robustness and efficient algorithm.
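
    The DTCWT-plus-NeighCoeff combination can be sketched with the open-source dtcwt Python package; this is an assumption for illustration, not the authors' implementation, and the window length, level count, noise estimate and even-length input requirement are likewise illustrative.

      import numpy as np
      import dtcwt   # assumed package: pip install dtcwt

      def dtcwt_neighcoeff_denoise(x, nlevels=5, win=3):
          t = dtcwt.Transform1d()
          pyr = t.forward(x.reshape(-1, 1), nlevels=nlevels)
          # rough noise scale from the finest-level coefficient magnitudes
          sigma = np.median(np.abs(pyr.highpasses[0])) / 0.6745
          lam2 = 2.0 * sigma**2 * np.log(x.size)
          for h in pyr.highpasses:                  # complex subband arrays
              mag2 = np.abs(h)**2
              pad = win // 2
              m = np.pad(mag2, ((pad, pad), (0, 0)), mode="edge")
              S2 = sum(m[i:i + mag2.shape[0]] for i in range(win))  # window energy
              h *= np.maximum(1.0 - lam2 / np.maximum(S2, 1e-12), 0.0)
          return t.inverse(pyr).ravel()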

  2. Multitaper Spectral Analysis and Wavelet Denoising Applied to Helioseismic Data

    NASA Technical Reports Server (NTRS)

    Komm, R. W.; Gu, Y.; Hill, F.; Stark, P. B.; Fodor, I. K.

    1999-01-01

    Estimates of solar normal mode frequencies from helioseismic observations can be improved by using Multitaper Spectral Analysis (MTSA) to estimate spectra from the time series, then using wavelet denoising of the log spectra. MTSA leads to a power spectrum estimate with reduced variance and better leakage properties than the conventional periodogram. Under the assumption of stationarity and mild regularity conditions, the log multitaper spectrum has a statistical distribution that is approximately Gaussian, so wavelet denoising is asymptotically an optimal method to reduce the noise in the estimated spectra. We find that a single m-ν spectrum benefits greatly from MTSA followed by wavelet denoising, and that wavelet denoising by itself can be used to improve m-averaged spectra. We compare estimates using two different 5-taper estimates (Slepian and sine tapers) and the periodogram estimate, for GONG time series at selected angular degrees l. We compare those three spectra with and without wavelet denoising, both visually and in terms of the mode parameters estimated from the pre-processed spectra using the GONG peak-fitting algorithm. The two multitaper estimates give equivalent results. The number of modes fitted well by the GONG algorithm is 20% to 60% larger (depending on l and the temporal frequency) when applied to the multitaper estimates than when applied to the periodogram. The estimated mode parameters (frequency, amplitude and width) are comparable for the three power spectrum estimates, except for modes with very small mode widths (a few frequency bins), where the multitaper spectra broadened the modes compared with the periodogram. We tested the influence of the number of tapers used and found that narrow modes at low n values are broadened to the extent that they can no longer be fit if the number of tapers is too large. For helioseismic time series of this length and temporal resolution, the optimal number of tapers is less than 10.
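
    The core multitaper step is easy to reproduce: average periodograms taken through K orthogonal Slepian (DPSS) tapers, which trades a small bandwidth penalty for a roughly 1/K variance reduction. A minimal SciPy sketch follows; parameter values are illustrative, and K is conventionally at most 2*NW - 1.

      import numpy as np
      from scipy.signal.windows import dpss

      def multitaper_psd(x, fs=1.0, NW=4, K=7):
          tapers = dpss(x.size, NW, Kmax=K)              # shape (K, n)
          spectra = np.abs(np.fft.rfft(tapers * x, axis=1))**2
          psd = spectra.mean(axis=0) / fs                # averaged eigenspectra
          freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
          return freqs, psd

    Wavelet denoising of the log of such a spectrum could then proceed with any standard shrinkage routine, exploiting the approximate Gaussianity noted above.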

  3. A fast non-local image denoising algorithm

    NASA Astrophysics Data System (ADS)

    Dauwe, A.; Goossens, B.; Luong, H. Q.; Philips, W.

    2008-02-01

    In this paper we propose several improvements to the original non-local means algorithm introduced by Buades et al., which obtains state-of-the-art denoising results. The strength of this algorithm is that it exploits the repetitive character of the image in order to denoise it, unlike conventional denoising algorithms, which typically operate in a local neighbourhood. Due to the enormous number of weight computations, the original algorithm has a high computational cost. Image quality can be improved over the original algorithm by ignoring the contributions from dissimilar windows. Even though their weights are very small at first sight, the new estimated pixel value can be severely biased by the many small contributions. This adverse influence of dissimilar windows can be eliminated by setting their corresponding weights to zero. Using a preclassification based on the first three statistical moments, only contributions from similar neighbourhoods are computed. To decide whether a window is similar or dissimilar, we derive thresholds for images corrupted with additive white Gaussian noise. Our accelerated approach is further optimized by taking advantage of the symmetry in the weights, which roughly halves the computation time, and by using a lookup table to speed up the weight computations. Compared to the original algorithm, our proposed method produces images with increased PSNR and better visual quality in less computation time. Our proposed method even outperforms state-of-the-art wavelet denoising techniques in both visual quality and PSNR for images containing many repetitive structures such as textures: the denoised images are much sharper and contain fewer artifacts. The proposed optimizations can also be applied to other image processing tasks which employ the concept of repetitive structures, such as intra-frame super-resolution or detection of digital image forgery.
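
    For readers who want a baseline to compare against, scikit-image ships a fast non-local means with uniform patch weighting; the call below is a generic usage sketch (noise level and patch parameters are illustrative), not the authors' moment-based preclassification scheme.

      import numpy as np
      from skimage import data, img_as_float
      from skimage.restoration import denoise_nl_means, estimate_sigma

      img = img_as_float(data.camera())
      rng = np.random.default_rng(0)
      noisy = img + 0.08 * rng.standard_normal(img.shape)

      sigma = float(np.mean(estimate_sigma(noisy)))
      den = denoise_nl_means(noisy, h=0.8 * sigma, sigma=sigma,
                             patch_size=5, patch_distance=6, fast_mode=True)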

  4. Dictionary-based image denoising for dual energy computed tomography

    NASA Astrophysics Data System (ADS)

    Mechlem, Korbinian; Allner, Sebastian; Mei, Kai; Pfeiffer, Franz; Noël, Peter B.

    2016-03-01

    Compared to conventional computed tomography (CT), dual energy CT allows for improved material decomposition by conducting measurements at two distinct energy spectra. Since radiation exposure is a major concern in clinical CT, there is a need for tools to reduce the noise level in images while preserving diagnostic information. One way to achieve this goal is the application of image-based denoising algorithms after an analytical reconstruction has been performed. We have developed a modified dictionary denoising algorithm for dual energy CT aimed at exploiting the high spatial correlation between images obtained from different energy spectra. Both the low- and high-energy images are partitioned into small patches which are subsequently normalized. Combined patches with improved signal-to-noise ratio are formed by a weighted addition of corresponding normalized patches from both images. Assuming that corresponding low- and high-energy image patches are related by a linear transformation, the signal in both patches is added coherently while noise is neglected. Conventional dictionary denoising is then performed on the combined patches. Compared to conventional dictionary denoising and bilateral filtering, our algorithm achieved superior performance in terms of qualitative and quantitative image quality measures. We demonstrate, in simulation studies, that this approach can produce 2D histograms of the high- and low-energy reconstruction which are characterized by significantly improved material features and separation. Moreover, in comparison to other approaches that attempt denoising without simultaneously using both energy signals, superior similarity to the ground truth can be found with our proposed algorithm.

  5. MicroRNA-Target Network Inference and Local Network Enrichment Analysis Identify Two microRNA Clusters with Distinct Functions in Head and Neck Squamous Cell Carcinoma

    PubMed Central

    Sass, Steffen; Pitea, Adriana; Unger, Kristian; Hess, Julia; Mueller, Nikola S.; Theis, Fabian J.

    2015-01-01

    MicroRNAs represent ~22 nt long endogenous small RNA molecules that have been experimentally shown to regulate gene expression post-transcriptionally. One main interest in miRNA research is the investigation of their functional roles, which can typically be accomplished by identification of mi-/mRNA interactions and functional annotation of target gene sets. We here present a novel method "miRlastic", which infers miRNA-target interactions using transcriptomic data as well as prior knowledge and performs functional annotation of target genes by exploiting the local structure of the inferred network. For the network inference, we applied linear regression modeling with elastic net regularization on matched microRNA and messenger RNA expression profiling data to perform feature selection on prior knowledge from sequence-based target prediction resources. The novelty of miRlastic inference originates in predicting data-driven intra-transcriptome regulatory relationships through feature selection. With synthetic data, we showed that miRlastic outperformed commonly used methods and was suitable even for low sample sizes. To gain insight into the functional role of miRNAs and to determine joint functional properties of miRNA clusters, we introduced a local enrichment analysis procedure. The principle of this procedure lies in identifying regions of high functional similarity by evaluating the shortest paths between genes in the network. We can finally assign functional roles to the miRNAs by taking their regulatory relationships into account. We thoroughly evaluated miRlastic on a cohort of head and neck squamous cell carcinoma (HNSCC) patients provided by The Cancer Genome Atlas. We inferred an mi-/mRNA regulatory network for human papilloma virus (HPV)-associated miRNAs in HNSCC. The resulting network was best enriched for experimentally validated miRNA-target interactions when compared to common methods. Finally, the local enrichment step identified two functional clusters of miRNAs.
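
    The regression core of this kind of inference is compact. The sketch below uses scikit-learn's ElasticNetCV in place of the authors' exact regularization-path setup, and all variable names and shapes are hypothetical: each mRNA is regressed on its sequence-predicted candidate miRNAs, and surviving negative coefficients are candidate repressive interactions.

      import numpy as np
      from sklearn.linear_model import ElasticNetCV

      def infer_mirna_targets(mirna_expr, mrna_expr, prior_mask):
          # mirna_expr: (samples, n_mirna); mrna_expr: (samples, n_mrna)
          # prior_mask: boolean (n_mirna, n_mrna), e.g. from sequence predictions
          n_mirna, n_mrna = prior_mask.shape
          coef = np.zeros((n_mirna, n_mrna))
          for j in range(n_mrna):
              cand = np.flatnonzero(prior_mask[:, j])
              if cand.size == 0:
                  continue
              model = ElasticNetCV(l1_ratio=0.5, cv=5).fit(
                  mirna_expr[:, cand], mrna_expr[:, j])
              coef[cand, j] = model.coef_
          return coef   # negative entries suggest miRNA -> mRNA repression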

  6. Functional characterization of somatic mutations in cancer using network-based inference of protein activity | Office of Cancer Genomics

    Cancer.gov

    Identifying the multiple dysregulated oncoproteins that contribute to tumorigenesis in a given patient is crucial for developing personalized treatment plans. However, accurate inference of aberrant protein activity in biological samples is still challenging as genetic alterations are only partially predictive and direct measurements of protein activity are generally not feasible.

  7. A hybrid fault diagnosis method based on second generation wavelet de-noising and local mean decomposition for rotating machinery.

    PubMed

    Liu, Zhiwen; He, Zhengjia; Guo, Wei; Tang, Zhangchun

    2016-03-01

    In order to extract fault features of large-scale power equipment from strong background noise, a hybrid fault diagnosis method based on second generation wavelet de-noising (SGWD) and local mean decomposition (LMD) is proposed in this paper. In this method, a de-noising algorithm based on the second generation wavelet transform (SGWT) using neighboring coefficients is employed as a pretreatment to remove noise in rotating machinery vibration signals, by virtue of its good effect in enhancing the signal-to-noise ratio (SNR). Then, the LMD method is used to decompose the de-noised signals into several product functions (PFs). The PF corresponding to the faulty feature signal is selected according to the correlation coefficient criterion. Finally, the frequency spectrum is analyzed by applying the FFT to the selected PF. The proposed method is applied to analyze the vibration signals collected from an experimental gearbox and a real locomotive rolling bearing. The results demonstrate that the proposed method performs better than the normal LMD method, with higher SNR and faster convergence. PMID:26753616

  8. Customized maximal-overlap multiwavelet denoising with data-driven group threshold for condition monitoring of rolling mill drivetrain

    NASA Astrophysics Data System (ADS)

    Chen, Jinglong; Wan, Zhiguo; Pan, Jun; Zi, Yanyang; Wang, Yu; Chen, Binqiang; Sun, Hailiang; Yuan, Jing; He, Zhengjia

    2016-02-01

    Timely fault identification of a rolling mill drivetrain is significant for guaranteeing product quality and realizing long-term safe operation, so a condition monitoring system for the rolling mill drivetrain is designed and developed. However, because compound-fault and weak-fault feature information is usually submerged in heavy background noise, this task still faces a challenge. This paper provides a possibility for fault identification of rolling mill drivetrains by proposing a customized maximal-overlap multiwavelet denoising method. The effectiveness of a wavelet denoising method mainly relies on the appropriate selection of the wavelet basis, the transform strategy and the threshold rule. First, in order to realize exact matching and accurate detection of fault features, a customized multiwavelet basis function is constructed via a symmetric lifting scheme, and the vibration signal is then processed by the maximal-overlap multiwavelet transform. Next, based on the spatial dependency of multiwavelet transform coefficients, a spatial neighboring coefficient data-driven group threshold shrinkage strategy is developed for the denoising process, choosing the optimal group length and threshold via the minimum of Stein's Unbiased Risk Estimate. The effectiveness of the proposed method is first demonstrated through compound fault identification of the reduction gearbox on a rolling mill. It is then applied to weak fault identification of a dedusting fan bearing on a rolling mill, and the results support its feasibility.
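
    Threshold selection by Stein's Unbiased Risk Estimate, the criterion named above, has a closed form for soft thresholding that is worth seeing once. The sketch below picks the SURE-optimal scalar threshold for one coefficient band; the paper's group/neighboring version additionally optimizes over block lengths, which this minimal version omits.

      import numpy as np

      def sure_soft_threshold(d, sigma):
          # SURE(t) = n - 2*#{|x_i| <= t} + sum(min(|x_i|, t)^2), with x = d/sigma
          x = np.sort(np.abs(d / sigma))
          n = x.size
          cum2 = np.cumsum(x**2)
          risks = np.empty(n)
          for k, t in enumerate(x):       # candidate thresholds t = x[k]
              n_small = k + 1             # count of coefficients with |x_i| <= t
              risks[k] = n - 2 * n_small + cum2[k] + (n - n_small) * t * t
          t_best = x[np.argmin(risks)] * sigma
          return np.sign(d) * np.maximum(np.abs(d) - t_best, 0.0)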

  9. A de-noising algorithm to improve SNR of segmented gamma scanner for spectrum analysis

    NASA Astrophysics Data System (ADS)

    Li, Huailiang; Tuo, Xianguo; Shi, Rui; Zhang, Jinzhao; Henderson, Mark Julian; Courtois, Jérémie; Yan, Minhao

    2016-05-01

    An improved threshold shift-invariant wavelet transform de-noising algorithm for high-resolution gamma-ray spectroscopy is proposed to optimize the threshold function of the wavelet transform and to suppress the pseudo-Gibbs artificial fluctuations in the signal. This algorithm was applied to a segmented gamma scanning system with large samples in which high continuum levels caused by Compton scattering are routinely encountered. De-noising of gamma-ray spectra measured by the segmented gamma scanning system was evaluated for the improved, shift-invariant and traditional wavelet transform algorithms. The improved wavelet transform method generated significantly enhanced performance in terms of the figure of merit, the root mean square error, the peak area, and the sample attenuation correction in the segmented gamma scanning system assays. We also found, from the spectrum analysis, that the gamma energy spectrum can be viewed as a superposition of a low-frequency signal and high-frequency noise. Moreover, the smoothed spectrum is appropriate for straightforward automated quantitative analysis.
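
    Shift invariance of the kind used above is classically obtained by cycle spinning: average thresholded reconstructions over circular shifts, which suppresses the pseudo-Gibbs oscillations around spectral peaks. A minimal PyWavelets sketch follows; the wavelet choice, level and shift count are illustrative assumptions.

      import numpy as np
      import pywt

      def ti_wavelet_denoise(spectrum, shifts=16, wavelet="sym8", level=5):
          out = np.zeros(spectrum.size)
          for s in range(shifts):
              x = np.roll(spectrum, s)
              coeffs = pywt.wavedec(x, wavelet, level=level)
              sigma = np.median(np.abs(coeffs[-1])) / 0.6745
              thr = sigma * np.sqrt(2 * np.log(x.size))
              coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="hard")
                                      for c in coeffs[1:]]
              out += np.roll(pywt.waverec(coeffs, wavelet)[:x.size], -s)
          return out / shifts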

  10. Statistical denoising of signals in the S-transform domain

    NASA Astrophysics Data System (ADS)

    Weishi, Man; Jinghuai, Gao

    2009-06-01

    In this paper, the denoising of stochastic noise in the S-transform (ST) and generalized S-transform (GST) domains is discussed. First, the mean power spectrum (MPS) of white noise is derived in the ST and GST domains. The results show that the MPS varies linearly with frequency in the ST and GST domains (with a Gaussian window). Second, the local power spectrum (LPS) of red noise is studied by employing the Monte Carlo method in the two domains. The results suggest that the LPS of Gaussian red noise can be transformed into a chi-square distribution with two degrees of freedom. On the basis of the difference between the LPS distributions of signal and noise, a denoising method is presented through hypothesis testing. The effectiveness of the method is confirmed by testing on synthetic seismic data and a chirp signal.
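
    The hypothesis-testing step lends itself to a very short sketch: under the noise-only hypothesis each local-power-spectrum value follows (MPS/2) times a chi-square variable with two degrees of freedom, so time-frequency points below the corresponding quantile can be masked out. Variable shapes below are illustrative assumptions.

      import numpy as np
      from scipy.stats import chi2

      def lps_signal_mask(lps, noise_mps, alpha=0.01):
          # lps: (n_freq, n_time) local power spectrum from an S-transform
          # noise_mps: (n_freq,) mean noise power per frequency
          threshold = noise_mps[:, None] * chi2.ppf(1.0 - alpha, df=2) / 2.0
          return lps > threshold    # True where the noise hypothesis is rejected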

  11. Examining Alternatives to Wavelet Denoising for Astronomical Source Finding

    NASA Astrophysics Data System (ADS)

    Jurek, R.; Brown, S.

    2012-08-01

    The Square Kilometre Array and its pathfinders ASKAP and MeerKAT will produce prodigious amounts of data that necessitate automated source finding. The performance of automated source finders can be improved by pre-processing a dataset. In preparation for the WALLABY and DINGO surveys, we have used a test HI datacube constructed from actual Westerbork Telescope noise and WHISP HI galaxies to test the real world improvement of linear smoothing, the Duchamp source finder's wavelet denoising, iterative median smoothing and mathematical morphology subtraction, on intensity threshold source finding of spectral line datasets. To compare these pre-processing methods we have generated completeness-reliability performance curves for each method and a range of input parameters. We find that iterative median smoothing produces the best source finding results for ASKAP HI spectral line observations, but wavelet denoising is a safer pre-processing technique. In this paper we also present our implementations of iterative median smoothing and mathematical morphology subtraction.
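
    Iterative median smoothing, the best performer reported above, is simple to reproduce: apply a median filter repeatedly until the datacube stops changing (a "root" signal) or an iteration cap is hit. A generic SciPy sketch, not the authors' tuned implementation; the window size and iteration cap are illustrative.

      import numpy as np
      from scipy.ndimage import median_filter

      def iterative_median_smooth(cube, size=3, n_iter=5, tol=0.0):
          out = cube.astype(float)
          for _ in range(n_iter):
              nxt = median_filter(out, size=size)
              if np.max(np.abs(nxt - out)) <= tol:   # reached a fixed point
                  break
              out = nxt
          return out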

  12. Shearlet-based total variation diffusion for denoising.

    PubMed

    Easley, Glenn R; Labate, Demetrio; Colonna, Flavia

    2009-02-01

    We propose a shearlet formulation of the total variation (TV) method for denoising images. Shearlets have been mathematically proven to represent distributed discontinuities such as edges better than traditional wavelets and are a suitable tool for edge characterization. Common approaches in combining wavelet-like representations such as curvelets with TV or diffusion methods aim at reducing Gibbs-type artifacts after obtaining a nearly optimal estimate. We show that it is possible to obtain much better estimates from a shearlet representation by constraining the residual coefficients using a projected adaptive total variation scheme in the shearlet domain. We also analyze the performance of a shearlet-based diffusion method. Numerical examples demonstrate that these schemes are highly effective at denoising complex images and outperform a related method based on the use of the curvelet transform. Furthermore, the shearlet-TV scheme requires far fewer iterations than similar competitors. PMID:19095539

  13. Comparative study of wavelet denoising in myoelectric control applications.

    PubMed

    Sharma, Tanu; Veer, Karan

    2016-04-01

    Here, wavelet analysis is investigated to improve the quality of the myoelectric signal before its use in prosthetic design. Effective surface electromyogram (SEMG) signals were estimated by first decomposing the obtained signal using the wavelet transform and then analysing the decomposed coefficients by threshold methods. With the appropriate choice of wavelet, it is possible to reduce interference noise effectively in the SEMG signal. The most effective wavelet for SEMG denoising is chosen by calculating the root mean square (RMS) value and the signal power. The combined RMS and signal power results show that the db4 wavelet performs the best denoising among the wavelets considered. Furthermore, time-domain and frequency-domain methods were applied for SEMG signal analysis to investigate the effect of muscle-force contraction on the signal. It was found that, during sustained contractions, the mean frequency (MNF) and median frequency (MDF) increase as muscle force levels increase. PMID:26887581
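
    A db4 threshold-denoising pass of the kind evaluated above takes only a few lines with PyWavelets; the RMS comparison mirrors the selection criterion mentioned in the abstract, while the threshold rule and decomposition level are illustrative assumptions.

      import numpy as np
      import pywt

      def semg_denoise_db4(semg, level=4):
          coeffs = pywt.wavedec(semg, "db4", level=level)
          sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise from finest band
          thr = sigma * np.sqrt(2 * np.log(semg.size))     # universal threshold
          coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                                  for c in coeffs[1:]]
          den = pywt.waverec(coeffs, "db4")[:semg.size]
          rms = lambda v: np.sqrt(np.mean(v**2))
          return den, rms(semg), rms(den)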

  14. Wavelet-based ultrasound image denoising: performance analysis and comparison.

    PubMed

    Rizi, F Yousefi; Noubari, H Ahmadi; Setarehdan, S K

    2011-01-01

    Ultrasound images are generally affected by multiplicative speckle noise, which is mainly due to the coherent nature of the scattering phenomenon. Speckle noise filtering is thus a critical pre-processing step in medical ultrasound imaging, provided that the diagnostic features of interest are not lost. A comparative study of the performance of alternative wavelet-based ultrasound image denoising methods is presented in this article. In particular, the contourlet and curvelet techniques and the dual-tree complex, real, and double-density wavelet transform denoising methods were applied to real ultrasound images, and the results were quantitatively compared. The results show that the curvelet-based method performs best and can effectively reduce most of the speckle noise content of a given image. PMID:22255196

  15. Undecimated Wavelet Transforms for Image De-noising

    SciTech Connect

    Gyaourova, A; Kamath, C; Fodor, I K

    2002-11-19

    A few different approaches exist for computing the undecimated wavelet transform. In this work we construct three undecimated schemes and evaluate their performance for image noise reduction. We use standard wavelet-based de-noising techniques and compare the performance of our algorithms with the original undecimated wavelet transform, as well as with the decimated wavelet transform. Our experiments show that our algorithms achieve a better noise-removal/blurring ratio.
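
    For reference, PyWavelets exposes a standard undecimated (stationary) 2D transform; the sketch below thresholds its detail bands, which conveys the shift-invariance benefit discussed above without reproducing the report's three custom schemes. Image sides must be divisible by 2**level, and the wavelet and threshold rule are illustrative assumptions.

      import numpy as np
      import pywt

      def swt2_denoise(img, wavelet="haar", level=2):
          # noise scale from the finest diagonal band of a one-level DWT
          sigma = np.median(np.abs(pywt.dwt2(img, wavelet)[1][2])) / 0.6745
          thr = sigma * np.sqrt(2 * np.log(img.size))
          coeffs = pywt.swt2(img, wavelet, level=level)
          den = [(cA, tuple(pywt.threshold(d, thr, mode="soft") for d in details))
                 for cA, details in coeffs]
          return pywt.iswt2(den, wavelet)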

  16. Comparison of de-noising techniques for FIRST images

    SciTech Connect

    Fodor, I K; Kamath, C

    2001-01-22

    Data obtained through scientific observations are often contaminated by noise and artifacts from various sources. As a result, a first step in mining these data is to isolate the signal of interest by minimizing the effects of the contaminations. Once the data have been cleaned or de-noised, data mining can proceed as usual. In this paper, we describe our work in de-noising astronomical images from the Faint Images of the Radio Sky at Twenty-Centimeters (FIRST) survey. We are mining this survey to detect radio-emitting galaxies with a bent-double morphology. This task is made difficult by the noise in the images caused by the processing of the sensor data. We compare three different approaches to de-noising: thresholding of wavelet coefficients advocated in the statistical community, traditional filtering methods used in the image processing community, and a simple thresholding scheme proposed by FIRST astronomers. While each approach has its merits and pitfalls, we found that for our purpose, the simple thresholding scheme worked relatively well for the FIRST dataset.

  17. Spatio-Temporal Multiscale Denoising of Fluoroscopic Sequence.

    PubMed

    Amiot, Carole; Girard, Catherine; Chanussot, Jocelyn; Pescatore, Jeremie; Desvignes, Michel

    2016-06-01

    In the past 20 years, a wide range of complex fluoroscopically guided procedures have shown considerable growth. The biologic effects of the exposure (radiation-induced burns, cancer) motivate reducing the dose during the intervention, for the safety of patients and medical staff. However, when the dose is reduced, image quality decreases, with a high level of noise and a very low contrast. Efficient restoration and denoising algorithms should overcome this drawback. We propose a spatio-temporal filter operating in a multiscale space. This filter relies on first-order, motion-compensated, recursive temporal denoising. Temporal high-frequency content is first detected and then matched over time to allow for strong denoising along the temporal axis. We study this filter in the curvelet domain and in the dual-tree complex wavelet domain, and compare those results to state-of-the-art methods. Quantitative and qualitative analysis on both synthetic and real fluoroscopic sequences demonstrates that the proposed filter allows a considerable dose reduction. PMID:26812705

  18. Denoising Sparse Images from GRAPPA using the Nullspace Method (DESIGN)

    PubMed Central

    Weller, Daniel S.; Polimeni, Jonathan R.; Grady, Leo; Wald, Lawrence L.; Adalsteinsson, Elfar; Goyal, Vivek K

    2011-01-01

    To accelerate magnetic resonance imaging using uniformly undersampled (nonrandom) parallel imaging beyond what is achievable with GRAPPA alone, the Denoising of Sparse Images from GRAPPA using the Nullspace method (DESIGN) is developed. The trade-off between denoising and smoothing the GRAPPA solution is studied for different levels of acceleration. Several brain images reconstructed from uniformly undersampled k-space data using DESIGN are compared against reconstructions using existing methods in terms of difference images (a qualitative measure), PSNR, and noise amplification (g-factors) as measured using the pseudo-multiple replica method. Effects of smoothing, including contrast loss, are studied in synthetic phantom data. In the experiments presented, the contrast loss and spatial resolution are competitive with existing methods. Results for several brain images demonstrate significant improvements over GRAPPA at high acceleration factors in denoising performance with limited blurring or smoothing artifacts. In addition, the measured g-factors suggest that DESIGN mitigates noise amplification better than both GRAPPA and L1 SPIR-iT (the latter limited here by uniform undersampling). PMID:22213069

  19. Adaptive nonlocal means filtering based on local noise level for CT denoising

    SciTech Connect

    Li, Zhoubo; Trzasko, Joshua D.; Lake, David S.; Blezek, Daniel J.; Manduca, Armando; Yu, Lifeng; Fletcher, Joel G.; McCollough, Cynthia H.

    2014-01-15

    Purpose: To develop and evaluate an image-domain noise reduction method based on a modified nonlocal means (NLM) algorithm that is adaptive to the local noise level of CT images and to implement this method in a time frame consistent with clinical workflow. Methods: A computationally efficient technique for local noise estimation directly from CT images was developed. A forward projection, based on a 2D fan-beam approximation, was used to generate the projection data, with a noise model incorporating the effects of the bowtie filter and automatic exposure control. The noise propagation from projection data to images was analytically derived. The analytical noise map was validated using repeated scans of a phantom. A 3D NLM denoising algorithm was modified to adapt its denoising strength locally based on this noise map. The performance of this adaptive NLM filter was evaluated in phantom studies in terms of in-plane and cross-plane high-contrast spatial resolution, noise power spectrum (NPS), subjective low-contrast spatial resolution using the American College of Radiology (ACR) accreditation phantom, and objective low-contrast spatial resolution using a channelized Hotelling model observer (CHO). Graphics processing unit (GPU) implementations of the noise map calculation and the adaptive NLM filtering were developed to meet the demands of clinical workflow. Adaptive NLM was piloted on lower-dose scans in clinical practice. Results: The local noise level estimation matches the noise distribution determined from multiple repetitive scans of a phantom, demonstrated by small variations in the ratio map between the analytical noise map and the one calculated from repeated scans. The phantom studies demonstrated that the adaptive NLM filter can reduce noise substantially without degrading the high-contrast spatial resolution, as illustrated by modulation transfer function and slice sensitivity profile results. The NPS results show that adaptive NLM denoising preserves the

  20. Biases in the inferred mass-to-light ratio of globular clusters: no need for variations in the stellar mass function

    NASA Astrophysics Data System (ADS)

    Shanahan, Rosemary L.; Gieles, Mark

    2015-03-01

    From a study of the integrated light properties of 200 globular clusters (GCs) in M31, Strader et al. found that the mass-to-light ratios are lower than what is expected from simple stellar population models with a 'canonical' stellar initial mass function (IMF), with the discrepancy being larger at high metallicities. We use dynamical multimass models, which include a prescription for equipartition, to quantify the bias in the inferred dynamical mass as the result of the assumption that light follows mass. For a universal IMF and a metallicity-dependent present-day mass function, we find that the inferred mass from integrated light properties systematically underestimates the true mass, and that the bias is more important at high metallicities, as was found for the M31 GCs. We show that mass segregation and a flattening of the mass function have opposing effects of similar magnitude on the mass inferred from integrated properties. This makes the mass-to-light ratio as derived from integrated properties an inadequate probe of the low-mass end of the stellar mass function. There is, therefore, no need for variations in the IMF, nor the need to invoke depletion of low-mass stars, to explain the observations. Finally, we find that the retention fraction of stellar-mass black holes (BHs) is an equally important parameter in understanding the mass segregation bias. We speculatively put forward the idea that kinematical data of GCs can in fact be used to constrain the total mass in stellar-mass BHs in GCs.

  1. Denoised and texture enhanced MVCT to improve soft tissue conspicuity

    SciTech Connect

    Sheng, Ke; Qi, Sharon X.; Gou, Shuiping; Wu, Jiaolong

    2014-10-15

    Purpose: MVCT images have been used in TomoTherapy treatment to align patients based on bony anatomies, but their usefulness for soft tissue registration, delineation, and adaptive radiation therapy is limited due to insignificant photoelectric interaction components and the presence of noise resulting from the low detector quantum efficiency of megavoltage x-rays. Algebraic reconstruction with sparsity regularizers as well as local denoising methods has not significantly improved the soft tissue conspicuity. The authors aim to utilize a nonlocal means denoising method and texture enhancement to recover the soft tissue information in MVCT (DeTECT). Methods: A block matching 3D (BM3D) algorithm was adapted to reduce the noise while keeping the texture information of the MVCT images. Following image denoising, a saliency map was created to further enhance the visual conspicuity of low contrast structures. In this study, BM3D and saliency maps were applied to MVCT images of a CT imaging quality phantom, a head and neck patient, and four prostate patients. Following these steps, the contrast-to-noise ratios (CNRs) were quantified. Results: By applying BM3D denoising and saliency mapping, postprocessed MVCT images show remarkable improvements in imaging contrast without compromising resolution. For the head and neck patient, the difficult-to-see lymph nodes and vein in the carotid space in the original MVCT image became conspicuous in DeTECT. For the prostate patients, the ambiguous boundary between the bladder and the prostate in the original MVCT was clarified. The CNRs of phantom low contrast inserts were improved from 1.48 and 3.8 to 13.67 and 16.17, respectively. The CNRs of two regions-of-interest were improved from 1.5 and 3.17 to 3.14 and 15.76, respectively, for the head and neck patient. DeTECT also increased the CNR of the prostate from 0.13 to 1.46 for the four prostate patients. The results are substantially better than those of a local denoising method using anisotropic diffusion.
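
    A community BM3D implementation is available on PyPI and can stand in for the denoising stage (the saliency-map texture enhancement is a separate step and is not sketched here). The call below assumes the bm3d package's bm3d(noisy, sigma_psd) interface and an illustrative noise level; it is not the authors' adapted implementation.

      import numpy as np
      import bm3d   # assumed package: pip install bm3d

      rng = np.random.default_rng(0)
      clean = np.tile(np.linspace(0.0, 1.0, 256), (256, 1))   # toy image in [0, 1]
      noisy = clean + 0.1 * rng.standard_normal(clean.shape)
      denoised = bm3d.bm3d(noisy, sigma_psd=0.1)              # noise std assumed known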

  2. Comparative analysis on some spatial-domain filters for fringe pattern denoising.

    PubMed

    Wang, Haixia; Kemao, Qian

    2011-04-20

    Fringe patterns produced by various optical interferometric techniques encode information such as shape, deformation, and refractive index. Noise affects further processing of the fringe patterns. Denoising is often needed before fringe pattern demodulation. Filtering along the fringe orientation is an effective option. Such filters include coherence enhancing diffusion, spin filtering with curve windows, second-order oriented partial-differential equations, and the regularized quadratic cost function for oriented fringe pattern filtering. These filters are analyzed to establish the relationships among them. Theoretical analysis shows that the four filters are largely equivalent to each other. Quantitative results are given on simulated fringe patterns to validate the theoretical analysis and to compare the performance of these filters. PMID:21509060

  3. Blind Deblurring and Denoising of Images Corrupted by Unidirectional Object Motion Blur and Sensor Noise.

    PubMed

    Zhang, Yi; Hirakawa, Keigo

    2016-09-01

    Low-light photography suffers from blur and noise. In this paper, we propose a novel method to recover a dense estimate of the spatially varying blur kernel as well as a denoised and deblurred image from a single noisy and object-motion-blurred image. The proposed method takes advantage of the sparse representation of the double discrete wavelet transform, a generative model of image blur that simplifies the wavelet analysis of a blurred image, and of a Bayesian perspective that models the prior distribution of the latent sharp wavelet coefficients and a likelihood function that makes the noise handling explicit. We demonstrate the effectiveness of the proposed method on moderately noisy and severely blurred images using simulated and real camera data. PMID:27337717

  4. A New Method for Nonlocal Means Image Denoising Using Multiple Images

    PubMed Central

    Wang, Xingzheng; Wang, Haoqian; Yang, Jiangfeng; Zhang, Yongbing

    2016-01-01

    The basic principle of nonlocal means is to denoise a pixel using the weighted average of the neighbourhood pixels, where the weight is decided by the similarity of these pixels. The key issue of the nonlocal means method is how to select similar patches and design their weights. There are two main contributions of this paper. The first contribution is that we use two images to denoise the pixel. These two noisy images have the same noise deviation. Instead of using only one image, we calculate the weight from the two noisy images. After the first denoising process, we get a pre-denoised image and a residual image. The second contribution is combining the nonlocal property between the residual image and the pre-denoised image. The improved nonlocal means method pays more attention to similarity than the original one, which turns out to be very effective in eliminating Gaussian noise. Experimental results with simulated data are provided. PMID:27459293

  5. Phase-aware candidate selection for time-of-flight depth map denoising

    NASA Astrophysics Data System (ADS)

    Hach, Thomas; Seybold, Tamara; Böttcher, Hendrik

    2015-03-01

    This paper presents a new pre-processing algorithm for Time-of-Flight (TOF) depth map denoising. Typically, denoising algorithms use the raw depth map as it comes from the sensor. Systematic artifacts due to the measurement principle are not taken into account, which degrades the denoising results. For phase-measurement TOF sensing, a major artifact is observed as salt-and-pepper noise caused by the measurement's ambiguity. Our pre-processing algorithm is able to isolate and unwrap affected pixels by exploiting the physical behavior of the capturing system, yielding Gaussian noise. Using this pre-processing method before the denoising step clearly improves the parameter estimation for the denoising filter, together with its final results.

  6. Blind source separation based x-ray image denoising from an image sequence

    NASA Astrophysics Data System (ADS)

    Yu, Chun-Yu; Li, Yan; Fei, Bin; Li, Wei-Liang

    2015-09-01

    Blind source separation (BSS) based x-ray image denoising from an image sequence is proposed. Without prior knowledge, the useful image signal can be separated from an x-ray image sequence, since the original images are assumed to be different combinations of a stable image signal and random image noise. BSS algorithms such as fixed-point independent component analysis and second-order statistics singular value decomposition are used and compared with multi-frame averaging, a common algorithm for improving an image's signal-to-noise ratio (SNR). Denoising performance is evaluated in terms of SNR, standard deviation, entropy, and runtime. The analysis indicates that BSS is applicable to image denoising; the denoised image's quality improves as more frames are included in the x-ray image sequence, but at a higher computational cost. There should therefore be a trade-off between denoising performance and runtime, i.e., in the number of frames included in an image sequence.
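
    A minimal BSS sketch along these lines with scikit-learn's FastICA: frames are treated as the observed mixtures, and the component most correlated with the frame average is taken as the stable image signal. The shapes, component count and correlation heuristic are illustrative assumptions, not the paper's exact procedure.

      import numpy as np
      from sklearn.decomposition import FastICA

      def bss_denoise_sequence(frames):
          # frames: (n_frames, H, W) array of registered x-ray frames
          n, h, w = frames.shape
          X = frames.reshape(n, -1)
          S = FastICA(n_components=min(n, 8), random_state=0).fit_transform(X.T).T
          ref = X.mean(axis=0)
          corrs = [np.corrcoef(s, ref)[0, 1] for s in S]
          k = int(np.argmax(np.abs(corrs)))
          src = S[k] * np.sign(corrs[k])                    # undo ICA sign ambiguity
          src = (src - src.min()) / (np.ptp(src) + 1e-12)   # rescale to [0, 1]
          return src.reshape(h, w)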

  7. Nonlocal two dimensional denoising of frequency specific chirp evoked ABR single trials.

    PubMed

    Schubert, J Kristof; Teuber, Tanja; Steidl, Gabriele; Strauss, Daniel J; Corona-Strauss, Farah I

    2012-01-01

    Recently, we have shown that denoising evoked potential (EP) images is possible using two-dimensional diffusion filtering methods. This restoration allows for an integration of regularities over multiple stimulations into the denoising process. In the present work we propose the nonlocal means (NLM) method for EP image denoising. The EP images were constructed using auditory brainstem responses (ABR) collected in young healthy subjects using frequency-specific and broadband chirp stimulations. It is concluded that the NLM method is more efficient than conventional approaches in EP image denoising, especially in the case of ABRs, where the relevant information can easily be masked by the ongoing EEG activity, i.e., the signals suffer from a rather low signal-to-noise ratio (SNR). The proposed approach is intended for a posteriori denoising of single trials after the experiment, not for real-time applications. PMID:23366439

  8. The performance and reliability of wavelet denoising for Doppler ultrasound fetal heart rate signal preprocessing.

    PubMed

    Papadimitriou, S; Papadopoulos, V; Gatzounas, D; Tzigounis, V; Bezerianos, A

    1997-01-01

    The present paper deals with the performance and the reliability of a wavelet denoising method for Doppler ultrasound Fetal Heart Rate (FHR) recordings. It displays strong evidence that the denoising process extracts the actual noise components. The analysis is approached with three methods. First, the power spectrum of the denoised FHR displays more clearly a 1/f^α scaling law, i.e. the characteristic of fractal time series. Second, the rescaled range analysis technique reveals a Hurst exponent in the range of 0.7-0.8 that corresponds to a long-memory persistent process. Moreover, the variance of the Hurst exponent across time scales is smaller for the denoised signal. Third, a chaotic attractor reconstructed with the embedding dimension technique becomes evident in the denoised signals, while it is completely obscured in the unfiltered ones. PMID:10179728

  9. Denoising in digital speckle pattern interferometry using wave atoms.

    PubMed

    Federico, Alejandro; Kaufmann, Guillermo H

    2007-05-15

    We present an effective method for speckle noise removal in digital speckle pattern interferometry, which is based on a wave-atom thresholding technique. Wave atoms are a variant of 2D wavelet packets with a parabolic scaling relation and improve the sparse representation of fringe patterns when compared with traditional expansions. The performance of the denoising method is analyzed by using computer-simulated fringes, and the results are compared with those produced by wavelet and curvelet thresholding techniques. An application of the proposed method to reduce speckle noise in experimental data is also presented. PMID:17440544

  10. Feature-Preserving Mesh Denoising via Anisotropic Surface Fitting

    PubMed Central

    Yu, Zeyun

    2012-01-01

    We propose in this paper a robust surface mesh denoising method that can effectively remove mesh noise while faithfully preserving sharp features. This method utilizes surface fitting and projection techniques. Sharp features are preserved in the surface fitting algorithm by considering an anisotropic neighborhood of each vertex detected by the normal-weighted distance. In addition, to handle the mesh with a high level of noise, we perform a pre-filtering of surface normals prior to the neighborhood searching. A number of experimental results and comparisons demonstrate the excellent performance of our method in preserving important surface geometries while filtering mesh noise. PMID:22328806

  11. Denoising the Speaking Brain: Toward a Robust Technique for Correcting Artifact-Contaminated fMRI Data under Severe Motion

    PubMed Central

    Xu, Yisheng; Tong, Yunxia; Liu, Siyuan; Chow, Ho Ming; AbdulSabur, Nuria Y.; Mattay, Govind S.; Braun, Allen R.

    2014-01-01

    A comprehensive set of methods based on spatial independent component analysis (sICA) is presented as a robust technique for artifact removal, applicable to a broad range of functional magnetic resonance imaging (fMRI) experiments that have been plagued by motion-related artifacts. Although the applications of sICA for fMRI denoising have been studied previously, three fundamental elements of this approach have not been established: 1) a mechanistically-based ground truth for component classification; 2) a general framework for evaluating the performance and generalizability of automated classifiers; 3) a reliable method for validating the effectiveness of denoising. Here we perform a thorough investigation of these issues and demonstrate the power of our technique by resolving the problem of severe imaging artifacts associated with continuous overt speech production. As a key methodological feature, a dual-mask sICA method is proposed to isolate a variety of imaging artifacts by directly revealing their extracerebral spatial origins. It also plays an important role in understanding the mechanistic properties of noise components in conjunction with temporal measures of physical or physiological motion. The potential of a spatially-based machine learning classifier and the general criteria for feature selection have both been examined, in order to maximize the performance and generalizability of automated component classification. The effectiveness of denoising is quantitatively validated by comparing the activation maps of fMRI with those of positron emission tomography acquired under the same task conditions. The general applicability of this technique is further demonstrated by the successful reduction of the distance-dependent effect of head motion on resting-state functional connectivity. PMID:25225001

  12. Improving Students' Ability to Intuitively Infer Resistance from Magnitude of Current and Potential Difference Information: A Functional Learning Approach

    ERIC Educational Resources Information Center

    Chasseigne, Gerard; Giraudeau, Caroline; Lafon, Peggy; Mullet, Etienne

    2011-01-01

    The study examined the knowledge of the functional relations between potential difference, magnitude of current, and resistance among seventh graders, ninth graders, 11th graders (in technical schools), and college students. It also tested the efficiency of a learning device named "functional learning" derived from cognitive psychology on the…

  13. A method for predicting DCT-based denoising efficiency for grayscale images corrupted by AWGN and additive spatially correlated noise

    NASA Astrophysics Data System (ADS)

    Rubel, Aleksey S.; Lukin, Vladimir V.; Egiazarian, Karen O.

    2015-03-01

    Results of denoising based on the discrete cosine transform for a wide class of images corrupted by additive noise are obtained. Three types of noise are analyzed: additive white Gaussian noise and additive spatially correlated Gaussian noise with middle and high correlation levels. The TID2013 image database and some additional images are taken as test images. A conventional DCT filter and BM3D are used as denoising techniques. Denoising efficiency is described by the PSNR and PSNR-HVS-M metrics. Within the hard-thresholding denoising mechanism, DCT-spectrum coefficient statistics are used to characterize images and, subsequently, the denoising efficiency for them. Denoising-efficiency results are fitted to these statistics and efficient approximations are obtained. It is shown that the obtained approximations provide high accuracy in predicting denoising efficiency.
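
    The hard-thresholding mechanism referred to above is, at its simplest, block-wise DCT shrinkage with a threshold proportional to the noise level; the conventional DCT filter additionally averages overlapping blocks, which this non-overlapping sketch omits for brevity.

      import numpy as np
      from scipy.fft import dctn, idctn

      def dct_hard_threshold(img, sigma, k=2.7, block=8):
          # k * sigma is the usual hard threshold for AWGN (k around 2.6-2.7)
          h, w = (s - s % block for s in img.shape)
          out = np.zeros((h, w))
          for i in range(0, h, block):
              for j in range(0, w, block):
                  B = dctn(img[i:i + block, j:j + block], norm="ortho")
                  B[np.abs(B) < k * sigma] = 0.0
                  out[i:i + block, j:j + block] = idctn(B, norm="ortho")
          return out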

  14. Experimental and theoretical analysis of wavelet-based denoising filter for echocardiographic images.

    PubMed

    Kang, S C; Hong, S H

    2001-01-01

    One of the most significant challenges with diagnostic echocardiographic images is to reduce speckle noise and improve image quality. In this paper we propose a simple and effective filter design for image denoising and contrast enhancement based on a multiscale wavelet denoising method. Wavelet threshold algorithms replace wavelet coefficients of small magnitude by zero and keep or shrink the other coefficients. This is basically a local procedure, since wavelet coefficients characterize the local regularity of a function. We first estimate the distribution of noise within the echocardiographic image and then apply a wavelet threshold algorithm fitted to it. A common way of estimating the speckle noise level in coherent imaging is to calculate the mean-to-standard-deviation ratio of the pixel intensity, often termed the Equivalent Number of Looks (ENL), over a uniform image area. Unfortunately, we found this measure not very robust, mainly because of the difficulty of identifying a uniform area in a real image. For this reason, we use here only the S/MSE ratio, which corresponds to the standard SNR in the case of additive noise. We have simulated some echocardiographic images using specialized hardware for real-time application; processing of a 512*512 image takes about 1 min. Our experiments show that the optimal threshold level depends on the spectral content of the image. High spectral content tends to inflate the noise standard deviation estimate performed at the finest level of the DWT. As a result, a lower threshold parameter is required to obtain the optimal S/MSE. The standard WCS theory predicts a threshold that depends on the number of signal samples only. PMID:11604864

  15. Making Inferences: Comprehension of Physical Causality, Intentionality, and Emotions in Discourse by High-Functioning Older Children, Adolescents, and Adults with Autism

    ERIC Educational Resources Information Center

    Bodner, Kimberly E.; Engelhardt, Christopher R.; Minshew, Nancy J.; Williams, Diane L.

    2015-01-01

    Studies investigating inferential reasoning in autism spectrum disorder (ASD) have focused on the ability to make socially-related inferences or inferences more generally. Important variables for intervention planning such as whether inferences depend on physical experiences or the nature of social information have received less consideration. A…

  16. Decoding the Role of the Insula in Human Cognition: Functional Parcellation and Large-Scale Reverse Inference

    PubMed Central

    Yarkoni, Tal; Khaw, Mel Win; Sanfey, Alan G.

    2013-01-01

    Recent work has indicated that the insula may be involved in goal-directed cognition, switching between networks, and the conscious awareness of affect and somatosensation. However, these findings have been limited by the insula’s remarkably high base rate of activation and considerable functional heterogeneity. The present study used a relatively unbiased data-driven approach combining resting-state connectivity-based parcellation of the insula with large-scale meta-analysis to understand how the insula is anatomically organized based on functional connectivity patterns as well as the consistency and specificity of the associated cognitive functions. Our findings support a tripartite subdivision of the insula and reveal that the patterns of functional connectivity in the resting-state analysis appear to be relatively conserved across tasks in the meta-analytic coactivation analysis. The function of the networks was meta-analytically “decoded” using the Neurosynth framework and revealed that while the dorsoanterior insula is more consistently involved in human cognition than ventroanterior and posterior networks, each parcellated network is specifically associated with a distinct function. Collectively, this work suggests that the insula is instrumental in integrating disparate functional systems involved in processing affect, sensory-motor processing, and general cognition and is well suited to provide an interface between feelings, cognition, and action. PMID:22437053

  17. Comparison of f2/f1 ratio functions in rabbit and gerbil: Ear-canal DPOAEs vs noninvasively inferred intracochlear DPs

    NASA Astrophysics Data System (ADS)

    Martin, Glen K.; Stagner, Barden B.; Dong, Wei; Lonsbury-Martin, Brenda L.

    2015-12-01

    The properties of distortion product otoacoustic emissions (DPOAEs), i.e., distortion products (DPs) measured in the ear canal, have been thoroughly described. However, considerably less is known about the behavior of intracochlear DPs (iDPs). Detailed comparisons of DPOAEs to iDPs would provide valuable insights on the extent to which ear-canal DPOAEs mirror iDPs. Prior studies described a technique whereby the behavior of iDPs could be inferred by interacting a probe tone (f3) with the iDP of interest to produce a 'secondary' DPOAE (DPOAE′). The behavior of DPOAE′ was then used to deduce the characteristics of the iDP. In the present study, this method was used in rabbits and gerbils to simultaneously compare DPOAE f2/f1-ratio functions to their iDP counterparts. The 2f1-f2 and 2f2-f1 DPOAEs were collected with f1 and f2 primary-tone levels varied from 35-75 dB SPL, and with a 50-dB SPL f3 placed at a DP/f3 ratio of 1.25 to evoke a DPOAE′ at 2f3-(2f1-f2) or 2f3-(2f2-f1). Control experiments demonstrated little effect of the f3-probe tone on DPOAE-ratio functions. Substitution experiments were performed to determine any suppressive effects of the f1 and f2 primaries on the generation of DPOAE′, as well as to infer the intracochlear level of the iDP once the DPOAE′ was corrected for suppression. Results showed that at low primary-tone levels, 2f1-f2 DPOAE f2/f1-ratio functions peaked around f2/f1=1.25, and exhibited an inverted U-shaped function. In contrast, simultaneously measured 2f1-f2 iDP-ratio functions peaked at f2/f1≈1. Similar growth of the inferred iDP was obtained for higher-level primaries when the ratio functions were corrected for suppressive effects. At these higher levels, DPOAE-ratio functions leveled off and no longer showed the steep reduction at narrow f2/f1 ratios. Overall, noninvasive estimates of 2f1-f2 iDP-ratio functions agreed with reports of similar functions directly measured for 2f1-f2 DPs on the basilar membrane (BM) or in

  18. A comparison of de-noising methods for differential phase shift and associated rainfall estimation

    NASA Astrophysics Data System (ADS)

    Hu, Zhiqun; Liu, Liping; Wu, Linlin; Wei, Qing

    2015-04-01

    The measured differential phase shift ΦDP is known to be a noisy, unstable polarimetric radar variable, such that the quality of ΦDP data has a direct impact on the estimation of the specific differential phase shift KDP and, subsequently, on KDP-based rainfall estimation. Over the past decades, many ΦDP de-noising methods have been developed; however, the de-noising effects of these methods and their impact on KDP-based rainfall estimation lack comprehensive comparative analysis. In this study, simulated noisy ΦDP data were generated and de-noised using several methods, such as finite-impulse response (FIR), Kalman, wavelet, traditional mean, and median filters. The biases between KDP derived from simulated and observed ΦDP radial profiles were compared after de-noising by these methods. The results suggest that the more sophisticated FIR, Kalman, and wavelet methods have a better de-noising effect than the traditional methods. After ΦDP was de-noised, the accuracy of the KDP-based rainfall estimation increased significantly, based on the analysis of three actual rainfall events. The improvement in estimation was more obvious when KDP was estimated from ΦDP de-noised by the Kalman, FIR, and wavelet methods and the average rainfall was heavier than 5 mm h⁻¹. However, the improvement was not significant when the precipitation intensity increased further, beyond a rainfall rate of 10 mm h⁻¹. The performance of wavelet analysis was found to be the most stable of these filters.
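
    As a concrete illustration of the estimation chain described above, the sketch below smooths a ΦDP radial profile with a simple moving average (the 'traditional mean' filter in the comparison) and then takes KDP as half the range derivative; the window length is an illustrative assumption, and the FIR, Kalman, or wavelet variants would replace the smoothing step.

        import numpy as np

        def kdp_from_phidp(phidp_deg, gate_km, window=9):
            # De-noise PhiDP with a moving average, then estimate
            # KDP (deg/km) as half the derivative along range.
            # gate_km: 1-D array of gate ranges in km (or a scalar spacing).
            kernel = np.ones(window) / window
            smoothed = np.convolve(phidp_deg, kernel, mode='same')
            return 0.5 * np.gradient(smoothed, gate_km)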

  19. Evaluation of Effectiveness of Wavelet Based Denoising Schemes Using ANN and SVM for Bearing Condition Classification

    PubMed Central

    G. S., Vijay; H. S., Kumar; Pai P., Srinivasa; N. S., Sriram; Rao, Raj B. K. N.

    2012-01-01

    Wavelet-based denoising has proven its ability to denoise bearing vibration signals by improving the signal-to-noise ratio (SNR) and reducing the root-mean-square error (RMSE). In this paper, seven wavelet-based denoising schemes are evaluated based on the performance of an Artificial Neural Network (ANN) and a Support Vector Machine (SVM) for bearing condition classification. The work consists of two parts. In the first part, a synthetic signal simulating a defective bearing vibration signal with Gaussian noise was subjected to these denoising schemes, and the best scheme based on the SNR and the RMSE was identified. In the second part, vibration signals collected from a customized Rolling Element Bearing (REB) test rig for four bearing conditions were subjected to these denoising schemes. Several time- and frequency-domain features were extracted from the denoised signals, out of which a few sensitive features were selected using Fisher's Criterion (FC). The extracted features were used to train and test the ANN and the SVM. The best denoising scheme identified, based on the classification performances of the ANN and the SVM, was found to be the same as the one obtained using the synthetic signal. PMID:23213323
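
    The first, synthetic-signal part of such a study can be prototyped in a few lines. The sketch below ranks a handful of wavelet/threshold combinations by SNR on a toy defect-like signal; the wavelets, the universal threshold, and the signal model are illustrative assumptions, not the seven schemes evaluated in the paper.

        import numpy as np
        import pywt

        def denoise(signal, wavelet='db8', level=4, mode='soft'):
            # One denoising 'scheme': decompose, threshold details, rebuild.
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # robust noise estimate
            thr = sigma * np.sqrt(2 * np.log(len(signal)))   # universal threshold
            coeffs[1:] = [pywt.threshold(c, thr, mode=mode) for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)[:len(signal)]

        def snr_db(clean, est):
            return 10 * np.log10(np.sum(clean**2) / np.sum((clean - est)**2))

        # Toy defective-bearing-like signal: a carrier gated by a square wave.
        t = np.linspace(0, 1, 4096)
        clean = np.sin(2*np.pi*157*t) * (1 + 0.5*np.sign(np.sin(2*np.pi*12*t)))
        noisy = clean + 0.4 * np.random.randn(t.size)
        for w in ('db4', 'db8', 'sym8'):
            for m in ('soft', 'hard'):
                print(w, m, round(snr_db(clean, denoise(noisy, w, mode=m)), 2), 'dB')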

  20. Noise distribution and denoising of current density images

    PubMed Central

    Beheshti, Mohammadali; Foomany, Farbod H.; Magtibay, Karl; Jaffray, David A.; Krishnan, Sridhar; Nanthakumar, Kumaraswamy; Umapathy, Karthikeyan

    2015-01-01

    Current density imaging (CDI) is a magnetic resonance (MR) imaging technique that can be used to study current pathways inside tissue. The current distribution is measured indirectly as phase changes. The inherent noise in the MR imaging technique degrades the accuracy of phase measurements, leading to imprecise current variations. The outcome can be affected significantly, especially at a low signal-to-noise ratio (SNR). We show the residual noise distribution of the phase to be Gaussian-like, so that the noise in CDI images can be approximated as Gaussian; this finding matches experimental results. We further investigated this finding by performing comparative analysis with denoising techniques, using two CDI datasets with two different currents (20 and 45 mA). We found that the block-matching and three-dimensional (BM3D) technique outperforms other techniques when applied to the current density (J). The minimum gain in noise power by BM3D applied to J, compared with the next best technique in the analysis, was found to be around 2 dB per pixel. We characterize the noise profile in CDI images and provide insights on the performance of different denoising techniques when applied at two different stages of current density reconstruction. PMID:26158100

  1. GPU-based cone-beam reconstruction using wavelet denoising

    NASA Astrophysics Data System (ADS)

    Jin, Kyungchan; Park, Jungbyung; Park, Jongchul

    2012-03-01

    The scattering noise artifact arising from low-dose projections in repetitive cone-beam CT (CBCT) scans decreases image quality and lessens diagnostic accuracy. Statistical filtering is effective for noise reduction in low-dose CT imaging; however, filtering and enhancement throughout the entire reconstruction process can be challenging because of the high-performance computing required. The standard reconstruction algorithm for CBCT data is filtered back-projection, which for a 512×512×512 volume takes up to a few minutes on a standard system. To speed up reconstruction, the massively parallel architecture of current graphics processing units (GPUs) is a platform suitable for accelerating the mathematical calculations. In this paper, we focus on accelerating wavelet denoising and Feldkamp-Davis-Kress (FDK) back-projection using parallel processing on the GPU, utilizing the compute unified device architecture (CUDA) platform, and we implement CBCT reconstruction based on the CUDA technique. Finally, we evaluate our implementation on clinical tooth data sets. The resulting implementation of wavelet denoising is able to process a 1024×1024 image within 2 ms, excluding the data-loading process, and our GPU-based CBCT implementation reconstructs a 512×512×512 volume from 400 projections in less than 1 minute.

  2. Denoising Stimulated Raman Spectroscopic Images by Total Variation Minimization

    PubMed Central

    Liao, Chien-Sheng; Choi, Joon Hee; Zhang, Delong; Chan, Stanley H.; Cheng, Ji-Xin

    2016-01-01

    High-speed coherent Raman scattering imaging is opening a new avenue to unveiling the cellular machinery by visualizing the spatio-temporal dynamics of target molecules or intracellular organelles. By extracting signals from the laser at MHz modulation frequency, current stimulated Raman scattering (SRS) microscopy has reached shot noise limited detection sensitivity. The laser-based local oscillator in SRS microscopy not only generates high levels of signal, but also delivers a large shot noise which degrades image quality and spectral fidelity. Here, we demonstrate a denoising algorithm that removes the noise in both spatial and spectral domains by total variation minimization. The signal-to-noise ratio of SRS spectroscopic images was improved by up to 57 times for diluted dimethyl sulfoxide solutions and by 15 times for biological tissues. Weak Raman peaks of target molecules originally buried in the noise were unraveled. Coupling the denoising algorithm with multivariate curve resolution allowed discrimination of fat stores from protein-rich organelles in C. elegans. Together, our method significantly improved detection sensitivity without frame averaging, which can be useful for in vivo spectroscopic imaging. PMID:26955400
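
    Total-variation minimization of the kind described here is available off the shelf. The sketch below applies Chambolle's TV denoiser to a hypothetical (randomly generated) spectroscopic image stack, treating the spectral axis as a third dimension so the prior smooths both spatially and spectrally; the weight and the data shape are illustrative assumptions, not the authors' formulation.

        import numpy as np
        from skimage.restoration import denoise_tv_chambolle

        # Hypothetical SRS stack: 64 spectral channels of 128x128 pixels.
        stack = np.random.rand(64, 128, 128)

        # Chambolle TV on the 3-D array penalizes variation along all three
        # axes, i.e., in both the spatial and spectral domains.
        denoised = denoise_tv_chambolle(stack, weight=0.05)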

  3. Multiresolution generalized N dimension PCA for ultrasound image denoising

    PubMed Central

    2014-01-01

    Background Ultrasound images are usually affected by speckle noise, which is a type of random multiplicative noise. Thus, reducing speckle and improving visual image quality are vital to obtaining a better diagnosis. Method In this paper, a novel noise reduction method for medical ultrasound images, called multiresolution generalized N dimension PCA (MR-GND-PCA), is presented. In this method, the Gaussian pyramid and the multiscale image stacks on each level are built first. GND-PCA, a multilinear subspace learning method, is used for denoising. The levels are then combined to achieve the final denoised image based on Laplacian pyramids. Results The proposed method is tested with synthetically speckled and real ultrasound images, and quality evaluation metrics, including MSE, SNR and PSNR, are used to evaluate its performance. Conclusion Experimental results show that the proposed method achieved the lowest noise interference and improved image quality by reducing noise and preserving structure. Our method is also robust for images with a much higher level of speckle noise. For clinical images, the results show that MR-GND-PCA can reduce speckle and preserve resolvable details. PMID:25096917

  4. Real-time image denoising algorithm in teleradiology systems

    NASA Astrophysics Data System (ADS)

    Gupta, Pradeep Kumar; Kanhirodan, Rajan

    2006-02-01

    Denoising of medical images in the wavelet domain has potential application in transmission technologies such as teleradiology. This technique becomes all the more attractive when we consider progressive transmission in a teleradiology system. The transmitted images are corrupted mainly due to noisy channels. In this paper, we present a new real-time image denoising scheme based on limited restoration of bit-planes of wavelet coefficients. The proposed scheme exploits a fundamental property of the wavelet transform: its ability to analyze the image at different resolution levels, with edge information associated with each sub-band. The desired bit-rate control is achieved by applying the restoration to a limited number of bit-planes, subject to optimal smoothing. The proposed method adapts itself to the preference of the medical expert; a single parameter can be used to balance the preservation of (expert-dependent) relevant details against the degree of noise reduction. The scheme relies on the fact that noise commonly manifests itself as a fine-grained structure in the image, and the wavelet transform allows the restoration strategy to adapt itself according to the directional features of edges. The proposed approach shows promising results in terms of error reduction when compared with the unrestored case. It also has the capability to adapt to situations where the noise level in the image varies, and to the changing requirements of medical experts. The proposed approach has implications for the restoration of medical images in teleradiology systems, and the scheme is computationally efficient.

  5. HARDI denoising using nonlocal means on S2

    NASA Astrophysics Data System (ADS)

    Kuurstra, Alan; Dolui, Sudipto; Michailovich, Oleg

    2012-02-01

    Diffusion MRI (dMRI) is a unique imaging modality for in vivo delineation of the anatomical structure of white matter in the brain. In particular, high angular resolution diffusion imaging (HARDI) is a specific instance of dMRI which is known to excel at detecting multiple neural fibers within a single voxel. Unfortunately, the angular resolution of HARDI is inversely proportional to SNR, which makes the denoising of HARDI data of particular practical importance. Since HARDI signals are effectively band-limited, denoising can be accomplished by means of linear filtering. However, the spatial dependency of diffusivity in brain tissue makes it impossible to find a single set of linear filter parameters that is optimal for all types of diffusion signals; hence, adaptive filtering is required. In this paper, we propose a new type of non-local means (NLM) filtering which possesses the required adaptivity. As opposed to similar methods in the field, however, the proposed NLM filtering is applied in the spherical domain of spatial orientations. Moreover, the filter uses an original definition of adaptive weights, designed to be invariant both to spatial rotations and to the particular sampling scheme in use. We also provide a detailed description of the proposed filtering procedure and its efficient implementation, as well as experimental results with synthetic data. We demonstrate that our filter has substantially better adaptivity than a number of alternative methods.

  6. Explorative Learning and Functional Inferences on a Five-Step Means-Means-End Problem in Goffin’s Cockatoos (Cacatua goffini)

    PubMed Central

    Auersperg, Alice M. I.; Kacelnik, Alex; von Bayern, Auguste M. P.

    2013-01-01

    To investigate cognitive operations underlying sequential problem solving, we confronted ten Goffin’s cockatoos with a baited box locked by five different inter-locking devices. Subjects were either naïve or had watched a conspecific demonstration, and either faced all devices at once or incrementally. One naïve subject solved the problem without demonstration and with all locks present within the first five sessions (each consisting of one trial of up to 20 minutes), while five others did so after social demonstrations or incremental experience. Performance was aided by species-specific traits including neophilia, a haptic modality and persistence. Most birds showed a ratchet-like progress, rarely failing to solve a stage once they had done it once. In most transfer tests subjects reacted flexibly and sensitively to alterations of the locks’ sequencing and functionality, as expected from the presence of predictive inferences about mechanical interactions between the locks. PMID:23844247

  7. System-Level Insights into the Cellular Interactome of a Non-Model Organism: Inferring, Modelling and Analysing Functional Gene Network of Soybean (Glycine max)

    PubMed Central

    Xu, Yungang; Guo, Maozu; Zou, Quan; Liu, Xiaoyan; Wang, Chunyu; Liu, Yang

    2014-01-01

    The cellular interactome, in which genes and/or their products interact on several levels, forming transcriptional regulatory, protein interaction, metabolic, and signal transduction networks, among others, has attracted decades of research focus. However, any one specific type of network can hardly explain the various interactive activities among genes. These networks characterize different interaction relationships, implying their unique intrinsic properties and defects, and covering different slices of biological information. Functional gene networks (FGNs), consolidated interaction networks that model a fuzzy and more generalized notion of gene-gene relations, have been proposed to combine heterogeneous networks with the goal of identifying functional modules supported by multiple interaction types. There are as yet no successful precedents of FGNs for sparsely studied non-model organisms, such as soybean (Glycine max), due to the absence of sufficient heterogeneous interaction data. We present an alternative solution for inferring the FGNs of soybean (SoyFGNs), in a pioneering study on the soybean interactome, which is also applicable to other organisms. SoyFGNs exhibit the typical characteristics of biological networks: scale-free, small-world architecture and modularization. Verified by co-expression and KEGG pathways, SoyFGNs are more extensive and accurate than an orthology network derived from Arabidopsis. As a case study, network-guided disease-resistance gene discovery indicates that SoyFGNs can support system-level studies of gene functions and interactions. This work suggests that inferring and modelling the interactome of a non-model plant are feasible. It will speed up the discovery and definition of the functions and interactions of other genes that control important functions, such as nitrogen fixation and protein or lipid synthesis. The efforts of the study are the basis of our further comprehensive studies on the soybean functional interactome at the genome

  8. The application study of wavelet packet transformation in the de-noising of dynamic EEG data.

    PubMed

    Li, Yifeng; Zhang, Lihui; Li, Baohui; Wei, Xiaoyang; Yan, Guiding; Geng, Xichen; Jin, Zhao; Xu, Yan; Wang, Haixia; Liu, Xiaoyan; Lin, Rong; Wang, Quan

    2015-01-01

    This paper briefly describes the basic principle of wavelet packet analysis and, on this basis, introduces the general principle of wavelet packet transformation for signal de-noising. Dynamic EEG data recorded under +Gz acceleration were de-noised using the wavelet packet transformation, and the de-noising effects obtained with different thresholds were compared. The study verifies the validity and application value of the wavelet packet threshold method for de-noising dynamic EEG data under +Gz acceleration. PMID:26405863
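
    A minimal Python sketch of wavelet-packet threshold de-noising, using the PyWavelets WaveletPacket API, is shown below. The wavelet, decomposition level, and universal threshold (with the noise level taken from the highest-frequency node) are illustrative assumptions rather than the thresholds compared in the paper.

        import numpy as np
        import pywt

        def wp_denoise(eeg, wavelet='db4', level=4, mode='soft'):
            # Decompose into a full wavelet-packet tree, threshold every
            # terminal node, and reconstruct the signal.
            wp = pywt.WaveletPacket(data=eeg, wavelet=wavelet, maxlevel=level)
            nodes = wp.get_level(level, order='freq')
            # Noise estimate from the highest-frequency band (an assumption).
            sigma = np.median(np.abs(nodes[-1].data)) / 0.6745
            thr = sigma * np.sqrt(2 * np.log(len(eeg)))
            for node in nodes:
                node.data = pywt.threshold(node.data, thr, mode=mode)
            return wp.reconstruct(update=False)[:len(eeg)]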

  9. Application of the dual-tree complex wavelet transform in biomedical signal denoising.

    PubMed

    Wang, Fang; Ji, Zhong

    2014-01-01

    In biomedical signal processing, Gibbs oscillation and severe frequency aliasing may occur when using the traditional discrete wavelet transform (DWT). Herein, a new denoising algorithm based on the dual-tree complex wavelet transform (DTCWT) is presented. Electrocardiogram (ECG) signals and heart sound signals are denoised based on the DTCWT, and the results show that the method is effective. The signal-to-noise ratio (SNR) and the mean square error (MSE) are used to compare the denoising effect. Results of a paired-samples t-test show that the new method removes noise more thoroughly and better retains the boundary and texture of the signal. PMID:24211889

  10. Using fMRI non-local means denoising to uncover activation in sub-cortical structures at 1.5 T for guided HARDI tractography

    PubMed Central

    Bernier, Michaël; Chamberland, Maxime; Houde, Jean-Christophe; Descoteaux, Maxime; Whittingstall, Kevin

    2014-01-01

    In recent years, there has been ever-increasing interest in combining functional magnetic resonance imaging (fMRI) and diffusion magnetic resonance imaging (dMRI) for better understanding the link between cortical activity and connectivity, respectively. However, it is challenging to detect and validate fMRI activity in key sub-cortical areas such as the thalamus, given that they are prone to susceptibility artifacts due to the partial volume effects (PVE) of surrounding tissues (the GM/WM interface). This is especially true on relatively low-field clinical MR systems (e.g., 1.5 T). We propose to overcome this limitation by using a spatial denoising technique used in structural MRI and more recently in diffusion MRI called non-local means (NLM) denoising, which uses a patch-based approach to suppress noise locally. To test this, we measured fMRI in 20 healthy subjects performing three block-based tasks: eyes open/closed (EOC) and left and right finger tapping (FTL, FTR). Overall, we found that NLM yielded more thalamic activity than traditional denoising methods. To validate our pipeline, we also investigated known structural connectivity passing through the thalamus using HARDI tractography: the optic radiations, related to the EOC task, and the cortico-spinal tract (CST) for FTL and FTR. To do so, we reconstructed the tracts using functionally based thalamic and cortical ROIs to initiate tractography seeds in a two-level coarse-to-fine fashion. We applied this method at the single-subject level, which allowed us to see the structural connections underlying fMRI thalamic activity. In summary, we propose a new fMRI processing pipeline which uses a recent spatial denoising technique (NLM) to successfully detect sub-cortical activity, validated using an advanced dMRI seeding strategy in single subjects at 1.5 T. PMID:25309391
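
    NLM denoising of the kind used in this pipeline is readily available; the sketch below applies scikit-image's implementation to a hypothetical (randomly generated) EPI volume. The patch sizes and filtering strength are illustrative assumptions rather than the study's settings.

        import numpy as np
        from skimage.restoration import denoise_nl_means, estimate_sigma

        volume = np.random.rand(64, 64, 32)   # hypothetical EPI volume

        # Estimate the noise level, then average similar patches; this would
        # be applied volume-wise before the usual fMRI statistics.
        sigma = float(np.mean(estimate_sigma(volume)))
        den = denoise_nl_means(volume, patch_size=3, patch_distance=5,
                               h=0.8 * sigma, fast_mode=True)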

  11. A Function for Representing the Biological Challenge to Respiration Posed by Ocean Acidification and the Geochemical Consequences Inferred

    NASA Astrophysics Data System (ADS)

    Peltzer, E. T.; Brewer, P. G.

    2008-12-01

    Increasing levels of dissolved total CO2 in the ocean, from the invasion of fossil fuel CO2 via the atmosphere, are widely believed to pose challenges to marine life on several fronts. This is most often expressed as a concern about the resulting lower pH and its impact on calcification in marine organisms (coral reefs, calcareous phytoplankton, etc.). These concerns are real, but calcification is by no means the only process affected, nor is the fossil fuel CO2 signal the only geochemical driver of the rapidly emerging deep-sea biological stress. Physical climate change is reducing deep-sea ventilation rates, thereby leading to increasing oxygen deficits and concomitant increased respiratory CO2. We seek to understand the combined effects of the downward penetration of the fossil fuel signal and the emergence of the depleted-O2/increased-respiratory-CO2 signal at depth. As a first step, we provide a simple function to capture the changing oceanic state. The most basic thermodynamic equation for the functioning of marine animals can be written as Corg + O2 → CO2, and this yields the simple Gibbs free energy relation ΔG° = −RT ln(fCO2/([Corg]·fO2)), in which the ratio of pO2 to pCO2 emerges as the dominant factor. From this we construct a simple Respiration Index, RI = log10(pO2/pCO2), which is linear in energy, and we map this function for key oceanic regions, illustrating the expansion of oceanic dead zones. The formal thermodynamic limit for aerobic life is RI = 0; in practice, field data show that at RI ≈ 0.7 microbes turn to electron acceptors other than O2, and denitrification begins to occur. This likely represents the lowest limit for the long-term functioning of higher animals, and the zone RI = 0.7 to 1 appears to present challenges to the basic functioning of many marine species. In addition, there are large regions of the ocean where denitrification already occurs, and these zones will expand greatly in size as the combined
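
    The index itself is a one-line computation; below is a small Python helper with purely illustrative partial-pressure values.

        import math

        def respiration_index(p_o2, p_co2):
            # RI = log10(pO2/pCO2); both partial pressures in the same units.
            return math.log10(p_o2 / p_co2)

        # Illustrative values only: RI well above 1 is comfortably aerobic,
        # RI between 0.7 and 1 is challenging, and below ~0.7 denitrification
        # takes over as described in the abstract.
        print(respiration_index(0.20, 0.004))  # -> ~1.7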

  12. Bayesian inference in physics

    NASA Astrophysics Data System (ADS)

    von Toussaint, Udo

    2011-07-01

    Bayesian inference provides a consistent method for the extraction of information from physics experiments even in ill-conditioned circumstances. The approach provides a unified rationale for data analysis, which both justifies many of the commonly used analysis procedures and reveals some of the implicit underlying assumptions. This review summarizes the general ideas of Bayesian probability theory with emphasis on the application to the evaluation of experimental data. As case studies for Bayesian parameter estimation techniques, examples are discussed ranging from extra-solar planet detection to the deconvolution of apparatus functions for improving energy resolution, and to change-point estimation in time series. Special attention is paid to the numerical techniques suited for Bayesian analysis, with a focus on recent developments of Markov chain Monte Carlo algorithms for high-dimensional integration problems. Bayesian model comparison, the quantitative ranking of models for the explanation of a given data set, is illustrated with examples collected from cosmology, mass spectroscopy, and surface physics, covering problems such as background subtraction and automated outlier detection. Additionally, Bayesian inference techniques for the design and optimization of future experiments are introduced. Experiments, instead of being merely passive recording devices, can now be designed to adapt to measured data and to change the measurement strategy on the fly to maximize the information gained. The key concepts and numerical tools which provide the means of designing such inference chains, and the crucial aspects of data fusion, are summarized, and some of the expected implications are highlighted.

  13. What is it like to have type-2 blindsight? Drawing inferences from residual function in type-1 blindsight.

    PubMed

    Kentridge, Robert W

    2015-03-01

    Controversy surrounds the question of whether the experience sometimes elicited by visual stimuli in blindsight (type-2 blindsight) is visual in nature or whether it is some sort of non-visual experience. The suggestion that the experience is visual seems, at face value, to make sense. I argue here, however, that the residual abilities found in type-1 blindsight (blindsight in which stimuli elicit no conscious experience) are not aspects of normal vision with consciousness deleted, but are instead fragments of visual processes that, in themselves, would not be intelligible as visual experiences. If type-2 blindsight is a conscious manifestation of this residual function, then it is not obvious that type-2 blindsight would be phenomenally like vision. PMID:25301438

  14. Inferences regarding the diet of extinct hominins: structural and functional trends in dental and mandibular morphology within the hominin clade

    PubMed Central

    Lucas, Peter W; Constantino, Paul J; Wood, Bernard A

    2008-01-01

    This contribution investigates the evolution of diet in the Pan–Homo and hominin clades. It does this by focusing on 12 variables (nine dental and three mandibular) for which data are available about extant chimpanzees, modern humans and most extinct hominins. Previous analyses of this type have approached the interpretation of dental and gnathic function by focusing on the identification of the food consumed (i.e. fruits, leaves, etc.) rather than on the physical properties (i.e. hardness, toughness, etc.) of those foods, and they have not specifically addressed the role that the physical properties of foods play in determining dental adaptations. We take the available evidence for the 12 variables, and set out what the expression of each of those variables is in extant chimpanzees, the earliest hominins, archaic hominins, megadont archaic hominins, and an inclusive grouping made up of transitional hominins and pre-modern Homo. We then present hypotheses about what the states of these variables would be in the last common ancestor of the Pan–Homo clade and in the stem hominin. We review the physical properties of food and suggest how these physical properties can be used to investigate the functional morphology of the dentition. We show what aspects of anterior tooth morphology are critical for food preparation (e.g. peeling fruit) prior to its ingestion, which features of the postcanine dentition (e.g. overall and relative size of the crowns) are related to the reduction in the particle size of food, and how information about the macrostructure (e.g. enamel thickness) and microstructure (e.g. extent and location of enamel prism decussation) of the enamel cap might be used to make predictions about the types of foods consumed by extinct hominins. Specifically, we show how thick enamel can protect against the generation and propagation of cracks in the enamel that begin at the enamel–dentine junction and move towards the outer enamel surface. PMID:18380867

  15. Forecasting performance of denoising signal by Wavelet and Fourier Transforms using SARIMA model

    NASA Astrophysics Data System (ADS)

    Ismail, Mohd Tahir; Mamat, Siti Salwana; Hamzah, Firdaus Mohamad; Karim, Samsul Ariffin Abdul

    2014-07-01

    The goal of this research is to determine the forecasting performance of denoised signals. Monthly rainfall and the monthly number of rain days over a 20-year period (1990-2009) from the Bayan Lepas station are used as the case study. The Fast Fourier Transform (FFT) and Wavelet Transform (WT) are used in this research to denoise the signals. The denoised data obtained by the Fast Fourier Transform and the Wavelet Transform are then analyzed with a seasonal ARIMA model. The best-fitted model is determined by the minimum value of the MSE. The results indicate that the Wavelet Transform is more effective than the Fast Fourier Transform in denoising the monthly rainfall and number of rain days signals.
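
    A minimal sketch of this pipeline on synthetic monthly data: de-noise with a crude FFT low-pass, then fit a seasonal ARIMA with statsmodels and compare fits by MSE. The (1,0,1)x(1,1,1,12) orders, the cutoff fraction, and the synthetic series are illustrative assumptions; the wavelet branch would replace fft_denoise with a wavelet thresholding step as in the earlier sketches.

        import numpy as np
        from statsmodels.tsa.statespace.sarimax import SARIMAX

        def fft_denoise(x, keep=0.15):
            # Crude FFT de-noising: keep only the lowest fraction of frequencies.
            spec = np.fft.rfft(x)
            spec[int(len(spec) * keep):] = 0
            return np.fft.irfft(spec, n=len(x))

        # Synthetic 20-year monthly series with an annual cycle plus noise.
        rain = 100 + 50*np.sin(2*np.pi*np.arange(240)/12) + 10*np.random.randn(240)
        smooth = fft_denoise(rain)

        res = SARIMAX(smooth, order=(1, 0, 1),
                      seasonal_order=(1, 1, 1, 12)).fit(disp=False)
        print(res.mse)                 # compare MSE across de-noising methods
        print(res.forecast(steps=12))  # 12-month-ahead forecast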

  16. On high-order denoising models and fast algorithms for vector-valued images.

    PubMed

    Brito-Loeza, Carlos; Chen, Ke

    2010-06-01

    Variational techniques for gray-scale image denoising have been deeply investigated for many years; however, little research has been done on the vector-valued denoising case, and the few existing works are all based on total-variation regularization. It is known that total-variation models for denoising gray-scale images suffer from the staircasing effect, and there is no reason to suggest this effect does not carry over to the vector-valued models. High-order models, on the contrary, do not exhibit staircasing. In this paper, we introduce three high-order and curvature-based denoising models for vector-valued images. Their properties are analyzed and a fast multigrid algorithm for the numerical solution is provided. AMS subject classifications: 68U10, 65F10, 65K10. PMID:20172828

  17. Image denoising via Bayesian estimation of local variance with Maxwell density prior

    NASA Astrophysics Data System (ADS)

    Kittisuwan, Pichid

    2015-10-01

    The need for efficient image denoising methods has grown with the massive production of digital images and movies of all kinds. The distortion of images by additive white Gaussian noise (AWGN) is common during its processing and transmission. This paper is concerned with dual-tree complex wavelet-based image denoising using Bayesian techniques. Indeed, one of the cruxes of the Bayesian image denoising algorithms is to estimate the local variance of the image. Here, we employ maximum a posteriori (MAP) estimation to calculate local observed variance with Maxwell density prior for local observed variance and Gaussian distribution for noisy wavelet coefficients. Evidently, our selection of prior distribution is motivated by analytical and computational tractability. The experimental results show that the proposed method yields good denoising results.

  18. [Ultrasound image de-noising based on nonlinear diffusion of complex wavelet transform].

    PubMed

    Hou, Wen; Wu, Yiquan

    2012-04-01

    Ultrasound images are easily corrupted by speckle noise, which limits their further application in medical diagnosis. An image de-noising method combining the dual-tree complex wavelet transform (DT-CWT) with nonlinear diffusion is proposed in this paper. First, an image is decomposed by the DT-CWT. Then adaptive-contrast-factor diffusion and total-variation diffusion are applied to the high-frequency and low-frequency components, respectively. Finally the image is synthesized. Experimental results are given, with comparisons against de-noising methods based on combining wavelet shrinkage with total-variation diffusion and on combining wavelet/multiwavelet with nonlinear diffusion. The proposed method based on DT-CWT and nonlinear diffusion obtains superior results: it both removes speckle noise and preserves the original edges and textural features more efficiently. PMID:22616185

  19. [An improved wavelet threshold algorithm for ECG denoising].

    PubMed

    Liu, Xiuling; Qiao, Lei; Yang, Jianli; Dong, Bin; Wang, Hongrui

    2014-06-01

    Due to signal characteristics and environmental factors, electrocardiogram (ECG) signals are usually contaminated by noise in the course of acquisition, so eliminating noise in ECG signals is crucial for intelligent ECG analysis. On the basis of the wavelet transform, the threshold parameters were improved and a more appropriate threshold expression was proposed. The discrete wavelet coefficients were processed using the improved threshold parameters, and the denoised signal was recovered through the inverse discrete wavelet transform, preserving more of the original signal content. The MIT-BIH arrhythmia database was used to validate the method. Simulation results showed that the improved method achieves a better denoising effect than the traditional ones. PMID:25219225
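
    The abstract does not reproduce the paper's improved threshold expression, so the sketch below substitutes one commonly used level-dependent variant (thr_j = thr/ln(j+1)) inside a standard PyWavelets denoising loop; the wavelet, level, and soft-thresholding mode are likewise illustrative assumptions.

        import numpy as np
        import pywt

        def ecg_denoise(ecg, wavelet='db6', level=5):
            # Level-dependent wavelet thresholding for ECG (a sketch, not the
            # paper's exact expression, which the abstract does not give).
            coeffs = pywt.wavedec(ecg, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745
            base = sigma * np.sqrt(2 * np.log(len(ecg)))
            for j in range(1, len(coeffs)):
                # Vary the threshold across levels via the ln(j+1) divisor.
                coeffs[j] = pywt.threshold(coeffs[j], base / np.log(j + 1),
                                           mode='soft')
            return pywt.waverec(coeffs, wavelet)[:len(ecg)]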

  20. The research and application of double mean weighting denoising algorithm

    NASA Astrophysics Data System (ADS)

    Fang, Hao; Xiong, Feng

    2015-12-01

    In image processing and pattern recognition applications, the precision of image preprocessing has a great influence on subsequent image processing and analysis. This paper describes a novel local double-mean weighted algorithm (hereinafter referred to as the D-M algorithm) for image denoising. First, the absolute differences between the current pixel and the pixels in its neighborhood are computed; these absolute values are then sorted, and the mean of the closer half of the pixels is taken; finally, a weighting coefficient is applied to this mean. A large number of experiments show that the algorithm not only introduces a degree of robustness but also improves denoising performance significantly.
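
    The algorithm description above is terse, so the following Python sketch encodes one plausible reading of it; the 3x3 neighborhood, the 'closer half' selection, and the blending weight alpha are all interpretive assumptions rather than the authors' specification.

        import numpy as np

        def dm_denoise(img, alpha=0.7):
            # For each interior pixel: sort the 8 neighbours by absolute
            # difference to the pixel, average the closer half, and blend
            # that mean with the pixel by the weighting coefficient alpha.
            out = img.astype(float).copy()
            h, w = img.shape
            for i in range(1, h - 1):
                for j in range(1, w - 1):
                    nb = img[i-1:i+2, j-1:j+2].astype(float).ravel()
                    nb = np.delete(nb, 4)                       # drop centre pixel
                    order = np.argsort(np.abs(nb - img[i, j]))  # sort by |difference|
                    half_mean = nb[order[:len(nb) // 2]].mean()
                    out[i, j] = alpha * half_mean + (1 - alpha) * img[i, j]
            return out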

  1. A novel de-noising method for B ultrasound images

    NASA Astrophysics Data System (ADS)

    Tian, Da-Yong; Mo, Jia-qing; Yu, Yin-Feng; Lv, Xiao-Yi; Yu, Xiao; Jia, Zhen-Hong

    2015-12-01

    B-mode ultrasound is a kind of ultrasonic imaging that has become an indispensable diagnostic method in clinical medicine. However, the presence of speckle noise in ultrasound images greatly reduces image quality and interferes with diagnostic accuracy. Therefore, constructing a method that eliminates speckle noise effectively while preserving image details is the target of current ultrasonic image de-noising research. This paper aims to remove the inherent speckle noise of B ultrasound images. The proposed novel algorithm is based on both wavelet transformation and data fusion of B ultrasound images, and it achieves a smaller mean squared error (MSE) and greater signal-to-noise ratio (SNR) than competing algorithms. The method effectively removes speckle noise from B ultrasound images while preserving details and edge information, producing better visual effects.

  2. Wavelet denoising of multiframe optical coherence tomography data

    PubMed Central

    Mayer, Markus A.; Borsdorf, Anja; Wagner, Martin; Hornegger, Joachim; Mardin, Christian Y.; Tornow, Ralf P.

    2012-01-01

    We introduce a novel speckle noise reduction algorithm for OCT images. Contrary to present approaches, the algorithm does not rely on simple averaging of multiple image frames or on denoising the final averaged image. Instead it uses wavelet decompositions of the single frames for local noise and structure estimation. Based on this analysis, the wavelet detail coefficients are weighted, averaged and reconstructed. At a signal-to-noise gain of about 100%, we observe only a minor sharpness decrease, measured as a 10.5% reduction in full width at half maximum. While a similar signal-to-noise gain would require averaging of 29 frames, we achieve this result using only 8 frames as input to the algorithm. A possible application of the proposed algorithm is preprocessing in retinal structure segmentation algorithms, to allow a better differentiation between real tissue information and unwanted speckle noise. PMID:22435103

  3. Denoised Wigner distribution deconvolution via low-rank matrix completion.

    PubMed

    Lee, Justin; Barbastathis, George

    2016-09-01

    Wigner distribution deconvolution (WDD) is a decades-old method for recovering phase from intensity measurements. Although the technique offers an elegant linear solution to the quadratic phase retrieval problem, it has seen limited adoption due to its high computational/memory requirements and the fact that the technique often exhibits high noise sensitivity. Here, we propose a method for noise suppression in WDD via low-rank noisy matrix completion. Our technique exploits the redundancy of an object's phase space to denoise its WDD reconstruction. We show in model calculations that our technique outperforms other WDD algorithms as well as modern iterative methods for phase retrieval such as ptychography. Our results suggest that a class of phase retrieval techniques relying on regularized direct inversion of ptychographic datasets (instead of iterative reconstruction techniques) can provide accurate quantitative phase information in the presence of high levels of noise. PMID:27607616

  4. Exploiting the self-similarity in ERP images by nonlocal means for single-trial denoising.

    PubMed

    Strauss, Daniel J; Teuber, Tanja; Steidl, Gabriele; Corona-Strauss, Farah I

    2013-07-01

    Event related potentials (ERPs) represent a noninvasive and widely available means to analyze neural correlates of sensory and cognitive processing. Recent developments in neural and cognitive engineering have proposed completely new application fields for this well-established measurement technique when advanced single-trial processing is used. We have recently shown that 2-D diffusion filtering methods from image processing can be used for the denoising of ERP single-trials in matrix representations, also called ERP images. In contrast to conventional 1-D transient ERP denoising techniques, the 2-D restoration of ERP images allows for an integration of regularities over multiple stimulations into the denoising process. Advanced anisotropic image restoration methods may require directional information for the ERP denoising process. This is especially true if there is a lack of a priori knowledge about possible traces in ERP images. However, due to the use of event-related experimental paradigms, ERP images are characterized by a high degree of self-similarity over the individual trials. In this paper, we propose the simple and easy-to-apply nonlocal means method for ERP image denoising in order to exploit this self-similarity rather than focusing on the edge-based extraction of directional information. Using measured and simulated ERP data, we compare our method to conventional approaches in ERP denoising. It is concluded that the self-similarity in ERP images can be exploited for single-trial ERP denoising by the proposed approach. This method might be promising for a variety of evoked and event-related potential applications, including nonstationary paradigms such as changing exogenous stimulus characteristics or endogenous states during the experiment. As presented, the proposed approach is for the a posteriori denoising of single-trial sequences. PMID:23060344

  5. Denoising of Multi-Modal Images with PCA Self-Cross Bilateral Filter

    NASA Astrophysics Data System (ADS)

    Qiu, Yu; Urahama, Kiichi

    We present the PCA self-cross bilateral filter for denoising multi-modal images. We first apply principal component analysis to the input multi-modal images. We next smooth the first principal component with a preliminary filter and use it as a supplementary image for cross bilateral filtering of the input images. Among several preliminary filters, the undecimated wavelet transform is useful for effective denoising of various multi-modal images such as color, multi-lighting and medical images.

  6. Water quality functioning of lowland permeable catchments: inferences from an intensive study of the River Kennet and upper River Thames.

    PubMed

    Neal, Colin; Jarvie, Helen P; Wade, Andrew J; Whitehead, Paul G

    2002-01-23

    This paper brings together information on the water quality functioning of the River Kennet and other parts of the upper River Thames in the south east of England. The Kennet represents a groundwater fed riverine environment impacted by agricultural and sewage sources of nutrient pollution. Descriptions of the general water quality of the area, nutrient sources, sinks and within river processes are provided together with biological responses to driving issues of agriculture, sewage treatment and climatic change. Models are developed and applied to assess the key processes involved for a highly dynamic system and to provide initial estimates of the likely responses to environmental change. Furthermore, the economic aspects of pollution control are reviewed, together with legislation issues, which are presented within the context of a landmark case known as the 'Axford Inquiry', the implications of which extend to regional and national dimensions. The paper concludes with a discussion on the present state of knowledge, key issues and future research on the science and management of groundwater fed nutrient impacted riverine systems. PMID:11846085

  7. Functional Inference of Methylenetetrahydrofolate Reductase Gene Polymorphisms on Enzyme Stability as a Potential Risk Factor for Down Syndrome in Croatia

    PubMed Central

    Vraneković, Jadranka; Babić Božović, Ivana; Starčević Čizmarević, Nada; Buretić-Tomljanović, Alena; Ristić, Smiljana; Petrović, Oleg; Kapović, Miljenko; Brajenović-Milić, Bojana

    2010-01-01

    Understanding the biochemical structure and function of the methylenetetrahydrofolate reductase gene (MTHFR) provides new evidence in elucidating the risk of having a child with Down syndrome (DS) in association with two common MTHFR polymorphisms, C677T and A1298C. The aim of this study was to evaluate the risk for DS according to the presence of MTHFR C677T and A1298C polymorphisms as well as the stability of the enzyme configuration. This study included mothers from Croatia with a liveborn DS child (n = 102) or DS pregnancy (n = 9) and mothers with a healthy child (n = 141). MTHFR C677T and A1298C polymorphisms were assessed by PCR-RFLP. Allele/genotype frequencies differences were determined using χ2 test. Odds ratio and the 95% confidence intervals were calculated to evaluate the effects of different alleles/genotypes. No statistically significant differences were found between the frequencies of allele/genotype or genotype combinations of the MTHFR C677T and A1298C polymorphisms in the case and the control groups. Additionally, the observed frequencies of the stable (677CC/1298AA, 677CC/1298AC, 677CC/1298CC) and unstable (677CT/1298AA, 677CT/1298AC, 677TT/1298AA) enzyme configurations were not significantly different. We found no evidence to support the possibility that MTHFR polymorphisms and the stability of the enzyme configurations were associated with risk of having a child with DS in Croatian population. PMID:20592453

  8. Hybrid de-noising approach for fiber optic gyroscopes combining improved empirical mode decomposition and forward linear prediction algorithms

    NASA Astrophysics Data System (ADS)

    Shen, Chong; Cao, Huiliang; Li, Jie; Tang, Jun; Zhang, Xiaoming; Shi, Yunbo; Yang, Wei; Liu, Jun

    2016-03-01

    A noise reduction algorithm based on an improved empirical mode decomposition (EMD) and forward linear prediction (FLP) is proposed for the fiber optic gyroscope (FOG). Referred to as the EMD-FLP algorithm, it was developed to decompose the FOG outputs into a number of intrinsic mode functions (IMFs) after which mode manipulations are performed to select noise-only IMFs, mixed IMFs, and residual IMFs. The FLP algorithm is then employed to process the mixed IMFs, from which the refined IMFs components are reconstructed to produce the final de-noising results. This hybrid approach is applied to, and verified using, both simulated signals and experimental FOG outputs. The results from the applications show that the method eliminates noise more effectively than the conventional EMD or FLP methods and decreases the standard deviations of the FOG outputs after de-noising from 0.17 to 0.026 under sweep frequency vibration and from 0.22 to 0.024 under fixed frequency vibration.
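
    A rough Python sketch of this decompose/partition/predict pipeline is below, assuming the PyEMD (EMD-signal) package for the decomposition; the fixed split into noise-only, mixed, and residual IMFs and the least-squares FLP order are illustrative stand-ins for the paper's mode-manipulation criteria.

        import numpy as np
        from PyEMD import EMD   # assumes the EMD-signal (PyEMD) package

        def flp(x, order=4):
            # Forward linear prediction: least-squares fit of x[n] on its past.
            X = np.column_stack([x[i:len(x) - order + i] for i in range(order)])
            a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
            return np.concatenate([x[:order], X @ a])

        def emd_flp_denoise(signal, noise_only=2, mixed=2):
            # Discard the first `noise_only` IMFs, FLP-filter the next `mixed`
            # IMFs, and keep the rest; the split is fixed here for illustration,
            # whereas the paper selects it by mode manipulation.
            imfs = EMD().emd(signal)
            parts = []
            for k, imf in enumerate(imfs):
                if k < noise_only:
                    continue
                parts.append(flp(imf) if k < noise_only + mixed else imf)
            return np.sum(parts, axis=0)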

  9. Hybrid de-noising approach for fiber optic gyroscopes combining improved empirical mode decomposition and forward linear prediction algorithms.

    PubMed

    Shen, Chong; Cao, Huiliang; Li, Jie; Tang, Jun; Zhang, Xiaoming; Shi, Yunbo; Yang, Wei; Liu, Jun

    2016-03-01

    A noise reduction algorithm based on an improved empirical mode decomposition (EMD) and forward linear prediction (FLP) is proposed for the fiber optic gyroscope (FOG). Referred to as the EMD-FLP algorithm, it was developed to decompose the FOG outputs into a number of intrinsic mode functions (IMFs) after which mode manipulations are performed to select noise-only IMFs, mixed IMFs, and residual IMFs. The FLP algorithm is then employed to process the mixed IMFs, from which the refined IMFs components are reconstructed to produce the final de-noising results. This hybrid approach is applied to, and verified using, both simulated signals and experimental FOG outputs. The results from the applications show that the method eliminates noise more effectively than the conventional EMD or FLP methods and decreases the standard deviations of the FOG outputs after de-noising from 0.17 to 0.026 under sweep frequency vibration and from 0.22 to 0.024 under fixed frequency vibration. PMID:27036770

  10. Hydrous state of the subducting Philippine Sea plate inferred from receiver function image using onshore and offshore data

    NASA Astrophysics Data System (ADS)

    Akuhara, Takeshi; Mochizuki, Kimihiro

    2015-12-01

    Exploring the hydrous state of subducting oceanic crust is intriguing because it is considered to affect the strength of megathrust faults that cause various types of earthquakes; however, its state beneath offshore regions remains unclear. In this study, we investigated fluid contents along the subducting Philippine Sea plate around the Kii Peninsula by receiver function (RF) analysis using data from both on-land stations and ocean bottom seismometers (OBSs). The vertical component of OBS records contains dominant water reverberations, and thus, conventional methods fail to estimate RFs correctly. We therefore developed a method to calculate RFs that removes such reverberations. The RFs calculated by our method showed considerable improvement for later phase identification, compared with those obtained using a conventional method. Resultant RF amplitudes suggest the existence of low-velocity zones directly beneath the plate interface of both onshore and offshore regions. We interpreted this as evidence of hydrous oceanic crust, which extends from 5 km to 35 km depth to the plate interface. Reduction of RF amplitudes beneath the Kii Peninsula suggests that dehydration of the oceanic crust increases the seismic velocity, and the accompanying densification makes the plate interface permeable. This permeable plate interface may characterize the location of non-volcanic tremors. This contrasts with long-term slow slip events because it is believed that they occur along the sealed plate interface. Comparison between the plate geometry and local earthquakes reveals the paucity of earthquakes in the oceanic crust below a certain depth, which provides further insight into the dehydration process in the oceanic crust.

  11. Melt infiltration of the lower lithosphere beneath the Tanzania craton and the Albertine rift inferred from S receiver functions

    NASA Astrophysics Data System (ADS)

    Wölbern, Ingo; Rümpker, Georg; Link, Klemens; Sodoudi, Forough

    2012-08-01

    The transition between the lithosphere and the asthenosphere is subject to numerous contemporary studies as its nature is still poorly understood. The thickest lithosphere is associated with old cratons and platforms and it has been shown that seismic investigations may fail to image the lithosphere-asthenosphere boundary in these areas. Instead, several recent studies have proposed a mid-lithospheric discontinuity of unknown origin existing under several cratons. In this study we investigate the Tanzania craton in East Africa which is enclosed by the eastern and western branches of the East African Rift System. We present evidence from S receiver functions for two consecutive discontinuities at depths of 50-100 km and 140-200 km, which correspond to significant S wave velocity reductions under the Tanzania craton and the Albert and Edward rift segments. By comparison with synthetic waveforms we show that the lower discontinuity coincides with the LAB exhibiting velocity reductions of 6-9%. The shallower interface reveals a velocity drop that varies from 12% beneath the craton to 24% below the Albert-Edward rift. It is interpreted as an infiltration front marking the upper boundary of altered lithosphere due to ascending asthenospheric melts. This is corroborated by computing S velocity variations based on xenolith samples which exhibit a dense system of crystallized veins acting as pathways of the infiltrating melt. Mineral assemblages in these veins are rich in phlogopite and pyroxenite which can explain the reduced shear wave velocities. Melt infiltration represents a suitable mechanism to form a mid-lithospheric discontinuity within cratonic lithosphere that is underlain by anomalously hot mantle.

  12. Improving wavelet denoising based on an in-depth analysis of the camera color processing

    NASA Astrophysics Data System (ADS)

    Seybold, Tamara; Plichta, Mathias; Stechele, Walter

    2015-02-01

    While denoising is an extensively studied task in signal processing research, most denoising methods are designed and evaluated using readily processed image data, e.g. the well-known Kodak data set. The noise model is usually additive white Gaussian noise (AWGN). This kind of test data does not correspond to today's real-world image data taken with a digital camera. Using such unrealistic data to test, optimize and compare denoising algorithms may lead to incorrect parameter tuning or suboptimal choices in research on real-time camera denoising algorithms. In this paper we derive a precise analysis of the noise characteristics for the different steps in the color processing. Based on real camera noise measurements and simulation of the processing steps, we obtain a good approximation of the noise characteristics. We further show how this approximation can be used in standard wavelet denoising methods. We improve wavelet hard thresholding and bivariate thresholding based on our noise analysis results. Both the visual quality and objective quality metrics show the advantage of the proposed method. As the method is implemented using look-up tables that are calculated before the denoising step, it has very low computational complexity and can process HD video sequences in real time on an FPGA.

  13. Patch-based and multiresolution optimum bilateral filters for denoising images corrupted by Gaussian noise

    NASA Astrophysics Data System (ADS)

    Kishan, Harini; Seelamantula, Chandra Sekhar

    2015-09-01

    We propose optimal bilateral filtering techniques for Gaussian noise suppression in images. To achieve maximum denoising performance via optimal filter parameter selection, we adopt Stein's unbiased risk estimate (SURE)-an unbiased estimate of the mean-squared error (MSE). Unlike MSE, SURE is independent of the ground truth and can be used in practical scenarios where the ground truth is unavailable. In our recent work, we derived SURE expressions in the context of the bilateral filter and proposed SURE-optimal bilateral filter (SOBF). We selected the optimal parameters of SOBF using the SURE criterion. To further improve the denoising performance of SOBF, we propose variants of SOBF, namely, SURE-optimal multiresolution bilateral filter (SMBF), which involves optimal bilateral filtering in a wavelet framework, and SURE-optimal patch-based bilateral filter (SPBF), where the bilateral filter parameters are optimized on small image patches. Using SURE guarantees automated parameter selection. The multiresolution and localized denoising in SMBF and SPBF, respectively, yield superior denoising performance when compared with the globally optimal SOBF. Experimental validations and comparisons show that the proposed denoisers perform on par with some state-of-the-art denoising techniques.
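
    The paper derives closed-form SURE expressions for the bilateral filter; as a generic stand-in, the sketch below uses a Monte-Carlo SURE estimate (in the style of Ramani et al.) to select the range parameter of scikit-image's bilateral filter without access to the ground truth. The candidate parameters, noise level, and flat test image are illustrative assumptions.

        import numpy as np
        from skimage.restoration import denoise_bilateral

        def mc_sure(noisy, sigma, denoiser, eps=1e-3):
            # Monte-Carlo SURE: estimates the MSE risk without the ground
            # truth, using a random probe to approximate the denoiser's
            # divergence (a stand-in for the paper's closed-form SURE).
            n = noisy.size
            b = np.random.randn(*noisy.shape)
            f = denoiser(noisy)
            div = np.sum(b * (denoiser(noisy + eps * b) - f)) / eps
            return (np.sum((f - noisy) ** 2) / n - sigma ** 2
                    + 2 * sigma ** 2 * div / n)

        sigma = 0.1
        noisy = np.clip(0.5 + sigma * np.random.randn(64, 64), 0.0, 1.0)
        candidates = (0.05, 0.1, 0.2, 0.4)
        risks = [mc_sure(noisy, sigma,
                         lambda y, s=s: denoise_bilateral(y, sigma_color=s,
                                                          sigma_spatial=3))
                 for s in candidates]
        print('SURE-selected sigma_color:', candidates[int(np.argmin(risks))])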

  14. Seismic velocity discontinuities in the crust and uppermost mantle beneath the Tokyo metropolitan area inferred from receiver function analysis

    NASA Astrophysics Data System (ADS)

    Igarashi, T.; Sakai, S.; Hirata, N.

    2010-12-01

    We apply receiver function (RF) analyses to estimate the seismic velocity structure and seismic velocity discontinuities in the crust and uppermost mantle beneath the Tokyo metropolitan area, central Japan. Destructive earthquakes have often occurred in this area at various locations, including within the subducting Philippine Sea plate (PSP), the subducting Pacific plate (PAP), and the inland crust. Investigating the crustal structure and the configurations of the subducting plates is key to understanding the stress and strain concentration process and is important for mitigating future earthquake disasters. RF analysis is widely used to estimate velocity discontinuities in the crust and mantle beneath individual seismic stations. However, the crustal structure beneath the Kanto plain could not previously be analyzed for lack of suitable seismic stations. Since 2007, comprehensive surveys have been conducted under the Special Project for Earthquake Disaster Mitigation in the Tokyo Metropolitan area, and the Metropolitan Seismic Observation network (MeSO-net) was constructed under this project. In this study, we used a grid search to find the velocity structure model giving the best correlation between the observed RF at each station and synthetic ones; the synthetic RFs were calculated from many assumed one-dimensional velocity structures, each consisting of four layers. We further constructed vertical cross-sections of depth-converted RF images, transforming lapse time to depth using the estimated structure models. MeSO-net data and telemetric seismographic network data operated by NIED, JMA and ERI were used. We selected events with magnitudes greater than or equal to 5.0 and epicentral distances between 30 and 90 degrees based on USGS catalogues. As a result, we clarify the spatial distribution of crustal S-wave velocities. The Boso Peninsula and Kanto plain are covered by thick low-velocity sediment layers. We image standard velocity distributions in the deep crust of the Boso Peninsula

  15. Hardware Design and Implementation of a Wavelet De-Noising Procedure for Medical Signal Preprocessing

    PubMed Central

    Chen, Szi-Wen; Chen, Yuan-Ho

    2015-01-01

    In this paper, a discrete wavelet transform (DWT) based de-noising method and its application to noise reduction for medical signal preprocessing are introduced. This work focuses on the hardware realization of a real-time wavelet de-noising procedure. The proposed de-noising circuit mainly consists of three modules: a DWT circuit, a thresholding circuit, and an inverse DWT (IDWT) circuit. We also propose a novel adaptive thresholding scheme and incorporate it into the wavelet de-noising procedure. Performance was then evaluated on the architectural designs of both the software and the hardware. In addition, the de-noising circuit was implemented by downloading the Verilog codes to a field programmable gate array (FPGA) based platform so that its noise-reduction ability could be further validated in actual practice. Simulation results produced by applying a set of simulated noise-contaminated electrocardiogram (ECG) signals to the de-noising circuit showed that the circuit not only meets the requirement of real-time processing, but also achieves satisfactory noise reduction while well preserving the sharp features of the ECG signals. The proposed de-noising circuit was further synthesized using the Synopsys Design Compiler with an Artisan Taiwan Semiconductor Manufacturing Company (TSMC, Hsinchu, Taiwan) 40 nm standard cell library. The integrated circuit (IC) synthesis simulation results showed that the proposed design can achieve a clock frequency of 200 MHz, with a power consumption of only 17.4 mW when operated at that frequency. PMID:26501290

  16. Hardware design and implementation of a wavelet de-noising procedure for medical signal preprocessing.

    PubMed

    Chen, Szi-Wen; Chen, Yuan-Ho

    2015-01-01

    In this paper, a discrete wavelet transform (DWT) based de-noising method and its application to noise reduction for medical signal preprocessing are introduced. This work focuses on the hardware realization of a real-time wavelet de-noising procedure. The proposed de-noising circuit mainly consists of three modules: a DWT circuit, a thresholding circuit, and an inverse DWT (IDWT) circuit. We also propose a novel adaptive thresholding scheme and incorporate it into the wavelet de-noising procedure. Performance was then evaluated on the architectural designs of both the software and the hardware. In addition, the de-noising circuit was implemented by downloading the Verilog codes to a field programmable gate array (FPGA) based platform so that its noise-reduction ability could be further validated in actual practice. Simulation results produced by applying a set of simulated noise-contaminated electrocardiogram (ECG) signals to the de-noising circuit showed that the circuit not only meets the requirement of real-time processing, but also achieves satisfactory noise reduction while well preserving the sharp features of the ECG signals. The proposed de-noising circuit was further synthesized using the Synopsys Design Compiler with an Artisan Taiwan Semiconductor Manufacturing Company (TSMC, Hsinchu, Taiwan) 40 nm standard cell library. The integrated circuit (IC) synthesis simulation results showed that the proposed design can achieve a clock frequency of 200 MHz, with a power consumption of only 17.4 mW when operated at that frequency. PMID:26501290

  17. The Middle Miocene Ape Pierolapithecus catalaunicus Exhibits Extant Great Ape-Like Morphometric Affinities on Its Patella: Inferences on Knee Function and Evolution

    PubMed Central

    Pina, Marta; Almécija, Sergio; Alba, David M.; O'Neill, Matthew C.; Moyà-Solà, Salvador

    2014-01-01

    The mosaic nature of the Miocene ape postcranium hinders the reconstruction of the positional behavior and locomotion of these taxa based on isolated elements only. The fossil great ape Pierolapithecus catalaunicus (IPS 21350 skeleton; 11.9 Ma) exhibits a relatively wide and shallow thorax with moderate hand length and phalangeal curvature, dorsally-oriented metacarpophalangeal joints, and loss of ulnocarpal articulation. This evidence reveals enhanced orthograde postures without modern ape-like below-branch suspensory adaptations. Therefore, it has been proposed that natural selection enhanced vertical climbing (and not suspension per se) in Pierolapithecus catalaunicus. Although limb long bones are not available for this species, its patella (IPS 21350.37) can potentially provide insights into its knee function and thus on the complexity of its total morphological pattern. Here we provide a detailed description and morphometric analyses of IPS 21350.37, which are based on four external dimensions intended to capture the overall patellar shape. Our results reveal that the patella of Pierolapithecus is similar to that of extant great apes: proximodistally short, mediolaterally broad and anteroposteriorly thin. Previous biomechanical studies of the anthropoid knee based on the same measurements proposed that the modern great ape patella reflects a mobile knee joint while the long, narrow and thick patella of platyrrhine and especially cercopithecoid monkeys would increase the quadriceps moment arm in knee extension during walking, galloping, climbing and leaping. The patella of Pierolapithecus differs not only from that of monkeys and hylobatids, but also from that of basal hominoids (e.g., Proconsul and Nacholapithecus), which display slightly thinner patellae than extant great apes (the previously-inferred plesiomorphic hominoid condition). If patellar shape in Pierolapithecus is related to modern great ape-like knee function, our results suggest that increased

  18. Convergence analysis of a finite element skull model of Herpestes javanicus (Carnivora, Mammalia): implications for robust comparative inferences of biomechanical function.

    PubMed

    Tseng, Zhijie Jack; Flynn, John J

    2015-01-21

    biomechanical attributes from these simulations are used to infer form-function linkage. PMID:25445190

  19. The Middle Miocene ape Pierolapithecus catalaunicus exhibits extant great ape-like morphometric affinities on its patella: inferences on knee function and evolution.

    PubMed

    Pina, Marta; Almécija, Sergio; Alba, David M; O'Neill, Matthew C; Moyà-Solà, Salvador

    2014-01-01

    The mosaic nature of the Miocene ape postcranium hinders the reconstruction of the positional behavior and locomotion of these taxa based on isolated elements only. The fossil great ape Pierolapithecus catalaunicus (IPS 21350 skeleton; 11.9 Ma) exhibits a relatively wide and shallow thorax with moderate hand length and phalangeal curvature, dorsally-oriented metacarpophalangeal joints, and loss of ulnocarpal articulation. This evidence reveals enhanced orthograde postures without modern ape-like below-branch suspensory adaptations. Therefore, it has been proposed that natural selection enhanced vertical climbing (and not suspension per se) in Pierolapithecus catalaunicus. Although limb long bones are not available for this species, its patella (IPS 21350.37) can potentially provide insights into its knee function and thus on the complexity of its total morphological pattern. Here we provide a detailed description and morphometric analyses of IPS 21350.37, which are based on four external dimensions intended to capture the overall patellar shape. Our results reveal that the patella of Pierolapithecus is similar to that of extant great apes: proximodistally short, mediolaterally broad and anteroposteriorly thin. Previous biomechanical studies of the anthropoid knee based on the same measurements proposed that the modern great ape patella reflects a mobile knee joint while the long, narrow and thick patella of platyrrhine and especially cercopithecoid monkeys would increase the quadriceps moment arm in knee extension during walking, galloping, climbing and leaping. The patella of Pierolapithecus differs not only from that of monkeys and hylobatids, but also from that of basal hominoids (e.g., Proconsul and Nacholapithecus), which display slightly thinner patellae than extant great apes (the previously-inferred plesiomorphic hominoid condition). If patellar shape in Pierolapithecus is related to modern great ape-like knee function, our results suggest that increased

  20. Inference or Observation?

    ERIC Educational Resources Information Center

    Finson, Kevin D.

    2010-01-01

    Learning about what inferences are, and what a good inference is, will help students become more scientifically literate and better understand the nature of science in inquiry. Students in K-4 should be able to give explanations about what they investigate (NSTA 1997) and that includes doing so through inferring. This article provides some tips…

  1. Multiadaptive Bionic Wavelet Transform: Application to ECG Denoising and Baseline Wandering Reduction

    NASA Astrophysics Data System (ADS)

    Sayadi, Omid; Shamsollahi, Mohammad B.

    2007-12-01

    We present a new modified wavelet transform, called the multiadaptive bionic wavelet transform (MABWT), that can be applied to ECG signals in order to remove noise from them under a wide range of noise variations. By using the definition of the bionic wavelet transform and adaptively determining both the center frequency of each scale and the corresponding adaptation function, the desired signal decomposition is achieved. Applying a newly proposed thresholding rule succeeds in denoising the ECG. Moreover, by using the multiadaptation scheme, low-frequency noise interference on the baseline of the ECG is removed as a direct task. The method was tested extensively with real and simulated ECG signals and showed high noise-reduction performance, comparable to that of the wavelet transform (WT). Quantitative evaluation of the proposed algorithm shows that the average SNR improvement of the MABWT is 1.82 dB more than the WT-based results in the best case. The procedure also proves largely advantageous over wavelet-based methods for baseline wander cancellation, covering both DC components and baseline drifts.

  2. The discriminative bilateral filter: an enhanced denoising filter for electron microscopy data.

    PubMed

    Pantelic, Radosav S; Rothnagel, Rosalba; Huang, Chang-Yi; Muller, David; Woolford, David; Landsberg, Michael J; McDowall, Alasdair; Pailthorpe, Bernard; Young, Paul R; Banks, Jasmine; Hankamer, Ben; Ericksson, Geoffery

    2006-09-01

    Advances in three-dimensional (3D) electron microscopy (EM) and image processing are providing considerable improvements in the resolution of subcellular volumes, macromolecular assemblies and individual proteins. However, the recovery of high-frequency information from biological samples is hindered by specimen sensitivity to beam damage. Low dose electron cryo-microscopy conditions afford reduced beam damage but typically yield images with reduced contrast and low signal-to-noise ratios (SNRs). Here, we describe the properties of a new discriminative bilateral (DBL) filter that is based upon the bilateral filter implementation of Jiang et al. (Jiang, W., Baker, M.L., Wu, Q., Bajaj, C., Chiu, W., 2003. Applications of a bilateral denoising filter in biological electron microscopy. J. Struct. Biol. 128, 82-97.). In contrast to the latter, the DBL filter can distinguish between object edges and high-frequency noise pixels through the use of an additional photometric exclusion function. As a result, high-frequency noise pixels are smoothed, yet object edge detail is preserved. In the present study, we show that the DBL filter effectively reduces noise in low SNR single particle data as well as cellular tomograms of stained plastic sections. The properties of the DBL filter are discussed in terms of its usefulness for single particle analysis and for pre-processing cellular tomograms ahead of image segmentation. PMID:16774838

  3. A correlated empirical mode decomposition method for partial discharge signal denoising

    NASA Astrophysics Data System (ADS)

    Tang, Ya-Wen; Tai, Cheng-Chi; Su, Ching-Chau; Chen, Chien-Yi; Chen, Jiann-Fuh

    2010-08-01

    Empirical mode decomposition (EMD) is a signal processing method used to extract intrinsic mode functions (IMFs) from a complicated signal. For a measurement with two or more correlated inputs, finding and capturing the correlated IMFs is a critical challenge. In this paper, a new correlated EMD method is proposed, in which the cross-correlation method is employed to determine the dependence between IMFs. To verify its feasibility, an analysis was performed on simulated test signals and on measured partial discharge (PD) signals collected from several acoustic emission (AE) sensors. At the surface of the gas-insulated transmission line, the PD signal arrives at the AE sensors with varying time delays and distinct mechanical vibrations. Following abnormality detection using the standard-deviation variation, the correlated-EMD method was applied to the PD signal and the background signal of each sensor; the correlated-EMD calculation was applied twice to the signals for the purpose of noise elimination, and the unwanted low-frequency IMFs induced by the EMD calculations were excluded. The experimental results show that the correlated-EMD method performs well in both selecting and denoising the correlated IMFs, and they further provide a basis for analyzing correlated-input applications in which the signal is entirely induced by the disturbance.
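
    A simplified stand-in for the correlated-IMF selection step is sketched below, assuming the third-party PyEMD package for the decomposition; rho_min is an illustrative threshold, not a value from the paper:

        import numpy as np
        from PyEMD import EMD  # assumed third-party EMD implementation

        def correlated_part(sig_a, sig_b, rho_min=0.5):
            imfs_a = EMD()(sig_a)
            imfs_b = EMD()(sig_b)
            keep = []
            for ia in imfs_a:
                # Keep an IMF of sensor A if it correlates with any IMF of sensor B.
                best = max(abs(np.corrcoef(ia, ib)[0, 1]) for ib in imfs_b)
                if best >= rho_min:
                    keep.append(ia)
            return np.sum(keep, axis=0) if keep else np.zeros_like(sig_a)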

  4. Gaussian mixture model-based gradient field reconstruction for infrared image detail enhancement and denoising

    NASA Astrophysics Data System (ADS)

    Zhao, Fan; Zhao, Jian; Zhao, Wenda; Qu, Feng

    2016-05-01

    Infrared images are characterized by low signal-to-noise ratio and low contrast. The edge details are therefore easily immersed in the background and noise, making infrared image edge detail enhancement and denoising very difficult. This article proposes a novel method of Gaussian mixture model-based gradient field reconstruction, which enhances image edge details while suppressing noise. First, by analyzing the gradient histogram of the noisy infrared image, a Gaussian mixture model is adopted to fit the distribution of the gradient histogram, dividing the image information into three parts corresponding to faint details, noise and the edges of clear targets, respectively. Then, a piecewise function is constructed based on the characteristics of the image to increase the gradients of faint details and suppress the gradients of noise. Finally, an anisotropic diffusion constraint is added when reconstructing the enhanced image from the transformed gradient field, to further suppress noise. The experimental results show that, compared with existing methods, the proposed method effectively enhances infrared image edge details while suppressing noise. In addition, it can be used to effectively enhance other types of images, such as visible and medical images.
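
    A sketch of the gradient-histogram modeling step with scikit-learn follows; the mapping of mixture components to noise, faint-detail and edge classes, and the gain values, are assumptions for illustration:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def gradient_gain(img, boost=1.8, cut=0.4):
            gy, gx = np.gradient(img.astype(float))
            mag = np.hypot(gx, gy).reshape(-1, 1)
            gmm = GaussianMixture(n_components=3, random_state=0).fit(mag)
            order = np.argsort(gmm.means_.ravel())  # assumed: noise < faint details < edges
            labels = gmm.predict(mag)
            gain = np.ones(mag.shape[0])
            gain[labels == order[0]] = cut    # suppress noise gradients
            gain[labels == order[1]] = boost  # boost faint-detail gradients
            return gain.reshape(img.shape)    # multiply into (gx, gy) before reconstruction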

  5. Segmentation based denoising of PET images: an iterative approach via regional means and affinity propagation.

    PubMed

    Xu, Ziyue; Bagci, Ulas; Seidel, Jurgen; Thomasson, David; Solomon, Jeff; Mollura, Daniel J

    2014-01-01

    Delineation and noise removal play a significant role in clinical quantification of PET images. Conventionally, these two tasks are considered independent; however, denoising can improve the performance of boundary delineation by enhancing the SNR while preserving the structural continuity of local regions. On the other hand, we postulate that segmentation can help the denoising process by constraining the smoothing criteria locally. Herein, we present a novel iterative approach for simultaneous PET image denoising and segmentation. The proposed algorithm applies a generalized Anscombe transformation prior to a non-local means based noise removal scheme and affinity propagation based delineation. For the non-local means denoising, we propose a new regional means approach in which we automatically and efficiently extract the appropriate subset of image voxels by incorporating the class information from the affinity propagation based segmentation. PET images after denoising are further utilized for refinement of the segmentation in an iterative manner. Qualitative and quantitative results demonstrate that the proposed framework successfully removes the noise from PET images while preserving the structures, and improves the segmentation accuracy. PMID:25333180
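
    The variance-stabilization step can be sketched as follows; this is the plain Anscombe transform with the simple algebraic inverse, whereas the paper uses the generalized form for mixed Poisson-Gaussian noise:

        import numpy as np

        def anscombe(x):
            # Makes Poisson noise approximately unit-variance Gaussian.
            return 2.0 * np.sqrt(x + 3.0 / 8.0)

        def inverse_anscombe(y):
            # Simple algebraic inverse; exact unbiased inverses further reduce bias.
            return (y / 2.0) ** 2 - 3.0 / 8.0

        # Typical use with any Gaussian denoiser g:
        #   out = inverse_anscombe(g(anscombe(img)))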

  6. Total variation-regularized weighted nuclear norm minimization for hyperspectral image mixed denoising

    NASA Astrophysics Data System (ADS)

    Wu, Zhaojun; Wang, Qiang; Wu, Zhenghua; Shen, Yi

    2016-01-01

    Many nuclear norm minimization (NNM)-based methods have been proposed for hyperspectral image (HSI) mixed denoising due to the low-rank (LR) characteristics of clean HSI. However, the NNM-based methods regularize each eigenvalue equally, which is unsuitable for the denoising problem, where each eigenvalue carries a specific physical meaning and should be regularized differently. Moreover, the NNM-based methods exploit only the high spectral correlation, ignoring the local structure of the HSI and resulting in spatial distortions. To address these problems, a total variation (TV)-regularized weighted nuclear norm minimization (TWNNM) method is proposed. To obtain the desired denoising performance, two issues are addressed. First, to exploit the high spectral correlation, the HSI is restricted to be LR, and different eigenvalues are minimized with different weights based on the WNNM. Second, to preserve the local structure of the HSI, TV regularization is incorporated, and the alternating direction method of multipliers is used to solve the resulting optimization problem. Both simulated and real data experiments demonstrate that the proposed TWNNM approach produces superior denoising results for the mixed noise case in comparison with several state-of-the-art denoising methods.
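
    The weighted singular value shrinkage at the heart of WNNM can be sketched as below (numpy only); the C*sqrt(n)/(s_i + eps) weight rule is a common heuristic, assumed here rather than taken from the paper:

        import numpy as np

        def wnnm_shrink(X, C=1.0, eps=1e-6):
            # X: matrix of grouped pixels x spectral bands (Casorati-style matrix).
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            w = C * np.sqrt(X.shape[1]) / (s + eps)  # small singular values get large weights
            s_new = np.maximum(s - w, 0.0)           # weighted soft thresholding
            return (U * s_new) @ Vt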

  7. The use of ensemble empirical mode decomposition as a novel denoising technique

    NASA Astrophysics Data System (ADS)

    Gaci, Said; Hachay, Olga; Zaourar, Naima

    2016-04-01

    Denoising is of high importance in geophysical data processing. This paper suggests a new denoising technique based on ensemble empirical mode decomposition (EEMD), which we compare with discrete wavelet transform (DWT) thresholding. First, both methods were implemented on synthetic signals with diverse waveforms ('blocks', 'heavy sine', 'Doppler', and 'mishmash'). The EEMD denoising method proved the most effective for the 'blocks', 'heavy sine' and 'mishmash' signals for all the considered signal-to-noise ratio (SNR) values, whereas the results obtained using DWT thresholding were the most reliable for the 'Doppler' signal; the difference between the mean square error (MSE) values calculated for the two methods is slight and decreases as the SNR gets smaller. Second, the denoising methods were applied to real seismic traces recorded in the Algerian Sahara, where the proposed technique outperforms DWT thresholding. In conclusion, the EEMD technique can provide a powerful tool for denoising seismic signals. Keywords: ensemble empirical mode decomposition (EEMD), discrete wavelet transform (DWT), seismic signal.

  8. Denoising of PET images by combining wavelets and curvelets for improved preservation of resolution and quantitation.

    PubMed

    Le Pogam, A; Hanzouli, H; Hatt, M; Cheze Le Rest, C; Visvikis, D

    2013-12-01

    Denoising of Positron Emission Tomography (PET) images is a challenging task due to the inherent low signal-to-noise ratio (SNR) of the acquired data. A pre-processing denoising step may facilitate and improve the results of further steps such as segmentation, quantification or textural feature characterization. Various recent denoising techniques have been introduced, and most state-of-the-art methods are based on filtering in the wavelet domain. However, the wavelet transform suffers from some limitations due to its non-optimal processing of edge discontinuities. More recently, a new multiscale geometric approach has been proposed, namely the curvelet transform, which extends the wavelet transform to account for directional properties in the image. In order to address the issue of resolution loss associated with standard denoising, we considered a strategy combining the complementary wavelet and curvelet transforms. We compared different figures of merit (e.g. SNR increase, noise decrease in homogeneous regions, resolution loss, and intensity bias) on simulated and clinical datasets between the proposed combined approach and the wavelet-only and curvelet-only filtering techniques. The three methods led to an increase of the SNR. Regarding quantitative accuracy, however, the wavelet- and curvelet-only denoising approaches led to larger biases in intensity and contrast than the proposed combined algorithm. This approach could become an alternative to the filters currently used after image reconstruction in clinical systems, such as the Gaussian filter. PMID:23837964

  9. Edge-preserving image denoising via group coordinate descent on the GPU

    PubMed Central

    McGaffin, Madison G.; Fessler, Jeffrey A.

    2015-01-01

    Image denoising is a fundamental operation in image processing, and its applications range from the direct (photographic enhancement) to the technical (as a subproblem in image reconstruction algorithms). In many applications, the number of pixels has continued to grow, while the serial execution speed of computational hardware has begun to stall. New image processing algorithms must exploit the power offered by massively parallel architectures like graphics processing units (GPUs). This paper describes a family of image denoising algorithms well-suited to the GPU. The algorithms iteratively perform a set of independent, parallel one-dimensional pixel-update subproblems. To match GPU memory limitations, they perform these pixel updates in place and store only the noisy data, the denoised image and the problem parameters. The algorithms can handle a wide range of edge-preserving roughness penalties, including differentiable convex penalties and anisotropic total variation (TV). Both algorithms use the majorize-minimize (MM) framework to solve the one-dimensional pixel update subproblem. Results from a large 2D image denoising problem and a 3D medical imaging denoising problem demonstrate that the proposed algorithms converge rapidly in terms of both iterations and run-time. PMID:25675454
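
    A 1D numpy analogue of the MM pixel update with a Huber penalty is sketched below; even/odd pixels are updated alternately, mirroring the independent parallel subproblems, and all parameter values are illustrative:

        import numpy as np

        def huber_weight(t, delta):
            # psi'(t)/t for the Huber penalty: 1 in the quadratic region, delta/|t| outside.
            a = np.abs(t)
            return np.where(a <= delta, 1.0, delta / np.maximum(a, 1e-12))

        def mm_denoise_1d(y, beta=1.0, delta=0.05, iters=100):
            x = y.astype(float).copy()
            for _ in range(iters):
                for parity in (0, 1):  # even/odd updates are mutually independent
                    xl, xr = np.roll(x, 1), np.roll(x, -1)  # circular boundary for brevity
                    wl = huber_weight(x - xl, delta)
                    wr = huber_weight(x - xr, delta)
                    # Closed-form minimizer of the quadratic MM majorizer per pixel.
                    xnew = (y + beta * (wl * xl + wr * xr)) / (1.0 + beta * (wl + wr))
                    idx = np.arange(x.size) % 2 == parity
                    x[idx] = xnew[idx]
            return x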

  10. Evaluation of Wavelet Denoising Methods for Small-Scale Joint Roughness Estimation Using Terrestrial Laser Scanning

    NASA Astrophysics Data System (ADS)

    Bitenc, M.; Kieffer, D. S.; Khoshelham, K.

    2015-08-01

    The precision of Terrestrial Laser Scanning (TLS) data depends mainly on the inherent random range error, which hinders extraction of small details from TLS measurements. New post-processing algorithms have been developed that reduce or eliminate the noise and therefore enable modelling of details at a smaller scale than one would traditionally expect. The aim of this research is to find the optimum denoising method such that the corrected TLS data provide a reliable estimation of small-scale rock joint roughness. Two wavelet-based denoising methods are considered, namely the Discrete Wavelet Transform (DWT) and the Stationary Wavelet Transform (SWT), in combination with different thresholding procedures. The question is which technique provides more accurate roughness estimates considering (i) the wavelet transform (SWT or DWT), (ii) the thresholding method (fixed-form or penalised low) and (iii) the thresholding mode (soft or hard). The performance of the denoising methods is tested by two analyses, namely method noise and method sensitivity to noise. The reference data are precise Advanced TOpometric Sensor (ATOS) measurements obtained on a 20 × 30 cm rock joint sample, which are corrupted by different levels of noise for the second analysis. Such controlled noise-level experiments make it possible to evaluate the methods' performance for the different amounts of noise that might be present in TLS data. Qualitative visual checks of the denoised surfaces and quantitative parameters such as grid height and roughness are considered in a comparative analysis of the denoising methods. Results indicate that the preferred method for realistic roughness estimation is DWT with penalised low hard thresholding.

  11. Robust 4D Flow Denoising Using Divergence-Free Wavelet Transform

    PubMed Central

    Ong, Frank; Uecker, Martin; Tariq, Umar; Hsiao, Albert; Alley, Marcus T; Vasanawala, Shreyas S.; Lustig, Michael

    2014-01-01

    Purpose To investigate four-dimensional flow denoising using the divergence-free wavelet (DFW) transform and compare its performance with existing techniques. Theory and Methods DFW is a vector-wavelet that provides a sparse representation of flow in a generally divergence-free field and can be used to enforce “soft” divergence-free conditions when discretization and partial voluming result in numerical nondivergence-free components. Efficient denoising is achieved by appropriate shrinkage of divergence-free wavelet and nondivergence-free coefficients. SureShrink and cycle spinning are investigated to further improve denoising performance. Results DFW denoising was compared with existing methods on simulated and phantom data and was shown to yield better noise reduction overall while being robust to segmentation errors. The processing was applied to in vivo data and was demonstrated to improve visualization while preserving quantifications of flow data. Conclusion DFW denoising of four-dimensional flow data was shown to reduce noise levels in flow data both quantitatively and visually. PMID:24549830

  12. ECG signals denoising using wavelet transform and independent component analysis

    NASA Astrophysics Data System (ADS)

    Liu, Manjin; Hui, Mei; Liu, Ming; Dong, Liquan; Zhao, Zhu; Zhao, Yuejin

    2015-08-01

    A method for denoising two-channel exercise electrocardiogram (ECG) signals based on the wavelet transform and independent component analysis is proposed in this paper. First, two channels of exercise ECG signals are acquired. We decompose these two channels into eight layers and sum the useful wavelet coefficients separately, obtaining two ECG signals free of baseline drift and other interference components. However, they still contain electrode-movement noise, power-frequency interference and other interferences. Second, these two processed channels, together with one manually constructed channel, are further processed with independent component analysis, yielding the separated ECG signal from which the residual noise is removed effectively. Finally, a comparative experiment is made between the same two exercise ECG channels processed directly with independent component analysis and the proposed method, which shows that the signal-to-noise ratio (SNR) increases by 21.916 and the root mean square error (RMSE) decreases by 2.522, proving that the proposed method has high reliability.
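
    The ICA stage can be sketched with scikit-learn's FastICA as follows, using synthetic placeholders for the two preprocessed channels and the constructed reference channel:

        import numpy as np
        from sklearn.decomposition import FastICA

        fs = 250.0
        t = np.arange(0.0, 10.0, 1.0 / fs)
        ecg1 = np.sin(2 * np.pi * 1.2 * t)            # placeholder preprocessed channel 1
        ecg2 = np.sin(2 * np.pi * 1.2 * t + 0.3)      # placeholder preprocessed channel 2
        ref = np.sign(np.sin(2 * np.pi * 50.0 * t))   # manually constructed reference channel
        X = np.c_[ecg1 + 0.2 * ref, ecg2 + 0.2 * ref, ref]

        ica = FastICA(n_components=3, random_state=0)
        sources = ica.fit_transform(X)  # columns are the estimated independent components
        # Keep the component(s) most correlated with the ECG channels; discard the rest.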

  13. A fast-convergence POCS seismic denoising and reconstruction method

    NASA Astrophysics Data System (ADS)

    Ge, Zi-Jian; Li, Jing-Ye; Pan, Shu-Lin; Chen, Xiao-Hong

    2015-06-01

    The efficiency, precision, and denoising capability of reconstruction algorithms are critical to seismic data processing. Based on the Fourier-domain projection onto convex sets (POCS) algorithm, we propose an inversely proportional threshold model that defines the optimum threshold, in which the threshold decays faster than an exponential threshold in the large-coefficient section and more slowly in the small-coefficient section. Thus, the computational efficiency of POCS seismic reconstruction improves greatly without affecting the reconstruction precision of weak reflections. To improve the flexibility of the inversely proportional threshold, we obtain the optimal threshold by using an adjustable dependent variable in the denominator of the inversely proportional threshold model. For random noise attenuation while completing the missing traces in seismic data reconstruction, we present a weighted reinsertion strategy based on a data-driven model obtained from the percentage of the data-driven threshold at each iteration in the threshold section. We apply the proposed POCS reconstruction method to 3D synthetic and field data. The results suggest that the inversely proportional threshold model improves the computational efficiency and precision compared with the traditional threshold models; furthermore, the proposed reinsertion weight strategy increases the SNR of the reconstructed data.
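
    A compact 1D sketch of Fourier-domain POCS with an inversely proportional threshold schedule follows; the schedule form tau_k = tau_max/(1 + c(k-1)) paraphrases the paper's model and is an assumption here:

        import numpy as np

        def pocs_reconstruct(data, mask, n_iter=50, c=1.0):
            # data: trace with zeros at missing samples; mask: 1 observed, 0 missing.
            tau_max = np.abs(np.fft.fft(data)).max()
            x = data.astype(float).copy()
            for k in range(1, n_iter + 1):
                tau = tau_max / (1.0 + c * (k - 1))   # inversely proportional decay
                S = np.fft.fft(x)
                S[np.abs(S) < tau] = 0.0              # keep only strong Fourier coefficients
                x = np.real(np.fft.ifft(S))
                x = mask * data + (1 - mask) * x      # reinsert the observed samples
            return x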

  14. Computed tomography perfusion imaging denoising using Gaussian process regression

    NASA Astrophysics Data System (ADS)

    Zhu, Fan; Carpenter, Trevor; Rodriguez Gonzalez, David; Atkinson, Malcolm; Wardlaw, Joanna

    2012-06-01

    Brain perfusion weighted images acquired using dynamic contrast studies have an important clinical role in acute stroke diagnosis and treatment decisions. However, computed tomography (CT) images suffer from low contrast-to-noise ratios (CNR) as a consequence of the limited radiation exposure of the patient, so methods for improving the CNR are valuable. The majority of existing approaches for denoising CT images are optimized for 3D (spatial) information, including spatial decimation (spatially weighted mean filters) and techniques based on wavelet and curvelet transforms. However, perfusion imaging data is 4D, as it also contains temporal information. Our approach uses Gaussian process regression (GPR), which takes advantage of the temporal information, to reduce the noise level. Over the entire image, GPR gains a 99% CNR improvement over the raw images and also improves the quality of haemodynamic maps, allowing a better identification of edges and detailed information. At the level of individual voxels, GPR provides a stable baseline, helps us to identify key parameters from tissue time-concentration curves and reduces the oscillations in the curve. GPR is superior to the comparable techniques used in this study.
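
    A single-voxel sketch with scikit-learn is given below; the kernel choice and its hyperparameters are illustrative assumptions:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        t = np.linspace(0.0, 60.0, 40)[:, None]            # acquisition times in seconds
        curve = np.exp(-(t.ravel() - 20.0) ** 2 / 50.0)    # idealized bolus-passage curve
        noisy = curve + 0.1 * np.random.randn(t.size)      # low-CNR observation

        kernel = RBF(length_scale=5.0) + WhiteKernel(noise_level=0.01)
        gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, noisy)
        denoised = gpr.predict(t)   # the posterior mean serves as the denoised curve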

  15. Computed tomography perfusion imaging denoising using gaussian process regression.

    PubMed

    Zhu, Fan; Carpenter, Trevor; Rodriguez Gonzalez, David; Atkinson, Malcolm; Wardlaw, Joanna

    2012-06-21

    Brain perfusion weighted images acquired using dynamic contrast studies have an important clinical role in acute stroke diagnosis and treatment decisions. However, computed tomography (CT) images suffer from low contrast-to-noise ratios (CNR) as a consequence of the limited radiation exposure of the patient, so methods for improving the CNR are valuable. The majority of existing approaches for denoising CT images are optimized for 3D (spatial) information, including spatial decimation (spatially weighted mean filters) and techniques based on wavelet and curvelet transforms. However, perfusion imaging data is 4D, as it also contains temporal information. Our approach uses Gaussian process regression (GPR), which takes advantage of the temporal information, to reduce the noise level. Over the entire image, GPR gains a 99% CNR improvement over the raw images and also improves the quality of haemodynamic maps, allowing a better identification of edges and detailed information. At the level of individual voxels, GPR provides a stable baseline, helps us to identify key parameters from tissue time-concentration curves and reduces the oscillations in the curve. GPR is superior to the comparable techniques used in this study. PMID:22617159

  16. Generalized non-local means filtering for image denoising

    NASA Astrophysics Data System (ADS)

    Dolui, Sudipto; Salgado Patarroyo, Iván. C.; Michailovich, Oleg V.

    2014-02-01

    Non-local means (NLM) filtering has been shown to outperform alternative denoising methodologies under the model of additive white Gaussian noise contamination. Recently, several theoretical frameworks have been developed to extend this class of algorithms to more general types of noise statistics. However, many of these frameworks are specifically designed for a single noise contamination model, and are far from optimal across varying noise statistics. The NLM filtering techniques rely on the definition of a similarity measure, which quantifies the similarity of two neighbourhoods along with their respective centroids. The key to the unification of the NLM filter for different noise statistics lies in the definition of a universal similarity measure which is guaranteed to provide favourable performance irrespective of the statistics of the noise. Accordingly, the main contribution of this work is to provide a rigorous statistical framework to derive such a universal similarity measure, while highlighting some of its theoretical and practical favourable characteristics. Additionally, the closed form expressions of the proposed similarity measure are provided for a number of important noise scenarios and the practical utility of the proposed similarity measure is demonstrated through numerical experiments.

  17. Inferring biotic interactions from proxies.

    PubMed

    Morales-Castilla, Ignacio; Matias, Miguel G; Gravel, Dominique; Araújo, Miguel B

    2015-06-01

    Inferring biotic interactions from functional, phylogenetic and geographical proxies remains one of the great challenges in ecology. We propose a conceptual framework to infer the backbone of biotic interaction networks within regional species pools. First, interacting groups are identified to order links and remove forbidden interactions between species. Second, additional links are removed by examining the geographical context in which species co-occur. Third, hypotheses are proposed to establish interaction probabilities between species. We illustrate the framework using published food webs in terrestrial and marine systems. We conclude that preliminary descriptions of the web of life can be made by careful integration of data with theory. PMID:25922148

  18. Wavelet based de-noising of breath air absorption spectra profiles for improved classification by principal component analysis

    NASA Astrophysics Data System (ADS)

    Kistenev, Yu. V.; Shapovalov, A. V.; Borisov, A. V.; Vrazhnov, D. A.; Nikolaev, V. V.; Nikiforova, O. Yu.

    2015-11-01

    Comparison results for different mother wavelets used to de-noise model and experimental data, represented by absorption spectra profiles of exhaled air, are presented. The impact of wavelet de-noising on the classification quality achieved by principal component analysis is also discussed.

  19. Inference engine using optical array logic

    NASA Astrophysics Data System (ADS)

    Iwata, Masaya; Tanida, Jun; Ichioka, Yoshiki

    1990-07-01

    An implementation method for an inference engine using optical array logic is presented. Optical array logic is a technique for parallel neighborhood operations using spatial coding and 2-D correlation. For efficient execution of inference in artificial intelligence problems, large amounts of data must be searched effectively. To meet this demand, a template matching technique is applied to the inference operation. By introducing a new data-conversion function, the inference operation can be implemented with optical array logic, which utilizes the parallelism of optical techniques.

  20. Statistical inference and string theory

    NASA Astrophysics Data System (ADS)

    Heckman, Jonathan J.

    2015-09-01

    In this paper, we expose some surprising connections between string theory and statistical inference. We consider a large collective of agents sweeping out a family of nearby statistical models for an M-dimensional manifold of statistical fitting parameters. When the agents making nearby inferences align along a d-dimensional grid, we find that the pooled probability that the collective reaches a correct inference is the partition function of a nonlinear sigma model in d dimensions. Stability under perturbations to the original inference scheme requires the agents of the collective to distribute along two dimensions. Conformal invariance of the sigma model corresponds to the condition of a stable inference scheme, directly leading to the Einstein field equations for classical gravity. By summing over all possible arrangements of the agents in the collective, we reach a string theory. We also use this perspective to quantify how much an observer can hope to learn about the internal geometry of a superstring compactification. Finally, we present some brief speculative remarks on applications to the AdS/CFT correspondence and Lorentzian signature space-times.

  1. ECG denoising and fiducial point extraction using an extended Kalman filtering framework with linear and nonlinear phase observations.

    PubMed

    Akhbari, Mahsa; Shamsollahi, Mohammad B; Jutten, Christian; Armoundas, Antonis A; Sayadi, Omid

    2016-02-01

    In this paper we propose an efficient method for denoising and extracting the fiducial points (FPs) of ECG signals. The method is based on a nonlinear dynamic model which uses Gaussian functions to model ECG waveforms. For estimating the model parameters, we use an extended Kalman filter (EKF). In this framework, called EKF25, all the parameters of the Gaussian functions as well as the ECG waveforms (P-wave, QRS complex and T-wave) in the ECG dynamical model are considered as state variables. In this paper, the dynamic time warping method is used to estimate the nonlinear ECG phase observation, and we compare this new approach with linear phase observation models. Using linear and nonlinear EKF25 for ECG denoising, and nonlinear EKF25 for fiducial point extraction and ECG interval analysis, are the main contributions of this paper. Performance comparison with other EKF-based techniques shows that the proposed method results in a higher output SNR, with an average SNR improvement of 12 dB for an input SNR of -8 dB. To evaluate the FP extraction performance, we compare the proposed method with a method based on a partially collapsed Gibbs sampler and an established EKF-based method. The mean absolute error and the root mean square error of all FPs, across all databases, are 14 ms and 22 ms, respectively, for our proposed method, with an advantage when using a nonlinear phase observation. These errors are significantly smaller than the errors obtained with other methods. For ECG interval analysis, with an absolute mean error and a root mean square error of about 22 ms and 29 ms, the proposed method achieves better accuracy and smaller variability with respect to other methods. PMID:26767425

  2. Comparative study of ECG signal denoising by wavelet thresholding in empirical and variational mode decomposition domains.

    PubMed

    Lahmiri, Salim

    2014-09-01

    Hybrid denoising models based on combining empirical mode decomposition (EMD) and discrete wavelet transform (DWT) were found to be effective in removing additive Gaussian noise from electrocardiogram (ECG) signals. Recently, variational mode decomposition (VMD) has been proposed as a multiresolution technique that overcomes some of the limits of the EMD. Two ECG denoising approaches are compared. The first is based on denoising in the EMD domain by DWT thresholding, whereas the second is based on noise reduction in the VMD domain by DWT thresholding. Using signal-to-noise ratio and mean of squared errors as performance measures, simulation results show that the VMD-DWT approach outperforms the conventional EMD-DWT. In addition, a non-local means approach used as a reference technique provides better results than the VMD-DWT approach. PMID:26609387

  3. Raman spectroscopy de-noising based on EEMD combined with VS-LMS algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Xiao; Xu, Liang; Mo, Jia-qing; Lü, Xiao-yi

    2016-01-01

    This paper proposes a novel de-noising algorithm based on ensemble empirical mode decomposition (EEMD) and the variable step size least mean square (VS-LMS) adaptive filter. The noise in the high-frequency part of the spectrum is removed through EEMD, and the VS-LMS algorithm is then utilized for overall de-noising. The EEMD combined with the VS-LMS algorithm can not only preserve the detail and envelope of the effective signal, but also improve system stability. The method is validated on pure R6G, whose measured Raman spectrum has a signal-to-noise ratio (SNR) lower than 10 dB. The de-noising superiority of the proposed method for Raman spectra is verified by three evaluation criteria: SNR, root mean square error (RMSE) and the correlation coefficient ρ.
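
    The VS-LMS stage can be sketched as follows (numpy only); the error-driven step-size update follows the common Kwong-Johnston form, which is an assumption here:

        import numpy as np

        def vs_lms(d, x, order=8, alpha=0.97, gamma=1e-3, mu_min=1e-4, mu_max=0.05):
            # d: primary (noisy) signal; x: noise reference; returns the error signal,
            # which serves as the cleaned output in adaptive noise cancellation.
            n = len(d)
            w = np.zeros(order)
            mu = mu_max
            e = np.zeros(n)
            for i in range(order, n):
                u = x[i - order:i][::-1]   # most recent reference samples
                y = w @ u                  # filter output (noise estimate)
                e[i] = d[i] - y            # error = denoised sample
                mu = np.clip(alpha * mu + gamma * e[i] ** 2, mu_min, mu_max)
                w = w + 2 * mu * e[i] * u  # LMS weight update with variable step size
            return e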

  4. A multiscale products technique for denoising of DNA capillary electrophoresis signals

    NASA Astrophysics Data System (ADS)

    Gao, Qingwei; Lu, Yixiang; Sun, Dong; Zhang, Dexiang

    2013-06-01

    Since noise degrades the accuracy and precision of DNA capillary electrophoresis (CE) analysis, signal denoising is important to facilitate the postprocessing of CE data. In this paper, a new denoising algorithm based on the dyadic wavelet transform using multiscale products is applied to remove the noise in DNA CE signals. The adjacent-scale wavelet coefficients are first multiplied to amplify the significant features of the CE signal while diluting noise. Then, noise is suppressed by applying a multiscale threshold to the multiscale products instead of directly to the wavelet coefficients. Finally, the noise-free CE signal is recovered from the thresholded coefficients by using the inverse dyadic wavelet transform. We compare the performance of the proposed algorithm with other denoising methods applied to synthetic CE and real CE signals. Experimental results show that the new scheme achieves better removal of noise while preserving the shape of the peaks corresponding to the analytes in the sample.
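
    A sketch of the multiscale-products rule follows, using pywt's stationary wavelet transform as a shift-invariant stand-in for the dyadic wavelet transform; the product-threshold rule shown is an illustrative assumption:

        import numpy as np
        import pywt

        def msp_denoise(sig, wavelet="db2", level=3, k=1.5):
            # pywt.swt requires len(sig) to be a multiple of 2**level.
            coeffs = pywt.swt(sig, wavelet, level=level)   # list of (cA, cD) pairs
            details = [cD for _, cD in coeffs]
            new_details = []
            for j, cD in enumerate(details):
                nb = details[j + 1] if j + 1 < len(details) else details[j - 1]
                prod = cD * nb                     # adjacent-scale product
                thr = k * np.median(np.abs(prod))  # per-scale threshold (assumed rule)
                new_details.append(np.where(np.abs(prod) >= thr, cD, 0.0))
            rec = [(cA, d) for (cA, _), d in zip(coeffs, new_details)]
            return pywt.iswt(rec, wavelet)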

  5. Comparative study of ECG signal denoising by wavelet thresholding in empirical and variational mode decomposition domains

    PubMed Central

    2014-01-01

    Hybrid denoising models based on combining empirical mode decomposition (EMD) and discrete wavelet transform (DWT) were found to be effective in removing additive Gaussian noise from electrocardiogram (ECG) signals. Recently, variational mode decomposition (VMD) has been proposed as a multiresolution technique that overcomes some of the limits of the EMD. Two ECG denoising approaches are compared. The first is based on denoising in the EMD domain by DWT thresholding, whereas the second is based on noise reduction in the VMD domain by DWT thresholding. Using signal-to-noise ratio and mean of squared errors as performance measures, simulation results show that the VMD-DWT approach outperforms the conventional EMD–DWT. In addition, a non-local means approach used as a reference technique provides better results than the VMD-DWT approach. PMID:26609387

  6. Automated wavelet denoising of photoacoustic signals for circulating melanoma cell detection and burn image reconstruction.

    PubMed

    Holan, Scott H; Viator, John A

    2008-06-21

    Photoacoustic image reconstruction may involve hundreds of point measurements, each of which contributes unique information about the subsurface absorbing structures under study. For backprojection imaging, two or more point measurements of photoacoustic waves induced by irradiating a biological sample with laser light are used to produce an image of the acoustic source. Each of these measurements must undergo some signal processing, such as denoising or system deconvolution. In order to process the numerous signals, we have developed an automated wavelet algorithm for denoising signals. We appeal to the discrete wavelet transform for denoising photoacoustic signals generated in a dilute melanoma cell suspension and in thermally coagulated blood. We used 5, 9, 45 and 270 melanoma cells in the laser beam path as test concentrations. For the burn phantom, we used coagulated blood in a 1.6 mm silicone tube submerged in Intralipid. Although these two targets were chosen as typical applications for photoacoustic detection and imaging, they are of independent interest. The denoising employs level-independent universal thresholding. In order to accommodate non-radix-2 signals, we considered a maximal overlap discrete wavelet transform (MODWT). For the lower melanoma cell concentrations, as the signal-to-noise ratio approached 1, denoising allowed better peak finding. For coagulated blood, the signals were denoised to yield a clean photoacoustic signal, resulting in an improvement of 22% in the reconstructed image. The entire signal processing technique was automated so that minimal user intervention was needed to reconstruct the images. Such an algorithm may be used for image reconstruction and signal extraction for applications such as burn depth imaging, depth profiling of vascular lesions in skin and the detection of single cancer cells in blood samples. PMID:18495977
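
    The level-independent universal thresholding step can be sketched as below, using the ordinary DWT via pywt as a simplified stand-in for the MODWT used in the paper:

        import numpy as np
        import pywt

        def universal_denoise(sig, wavelet="sym8", level=5):
            coeffs = pywt.wavedec(sig, wavelet, level=level)
            # Noise sigma from the finest detail band via the MAD estimator.
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745
            thr = sigma * np.sqrt(2.0 * np.log(len(sig)))  # universal threshold
            coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)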

  7. Class of Fibonacci-Daubechies-4-Haar wavelets with applicability to ECG denoising

    NASA Astrophysics Data System (ADS)

    Smith, Christopher B.; Agaian, Sos S.

    2004-05-01

    The presented paper introduces a new class of wavelets that includes the simplest Haar wavelet (Daubechies-2) as well as the Daubechies-4 wavelet. This class is shown to have several properties similar to the Daubechies wavelets. In application, the new class of wavelets has been shown to effectively denoise ECG signals. In addition, the paper introduces a new polynomial soft threshold technique for denoising through wavelet shrinkage. The polynomial soft threshold technique is able to represent a wide class of polynomial behaviors, including classical soft thresholding.
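
    One plausible polynomial shrinkage family is sketched below; this specific form, which recovers soft thresholding at p = 1 and the non-negative garrote at p = 2, is an illustrative assumption rather than the exact polynomial threshold defined in the paper:

        import numpy as np

        def poly_soft_threshold(w, thr, p=3):
            # Shrinkage interpolating between soft (p=1) and hard (large p) thresholding.
            a = np.abs(w)
            shrink = np.where(a <= thr,
                              0.0,
                              a - thr * (thr / np.maximum(a, 1e-12)) ** (p - 1))
            return np.sign(w) * shrink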

  8. Variable-order fractional numerical differentiation for noisy signals by wavelet denoising

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Ming; Wei, Yan-Qiao; Liu, Da-Yan; Boutat, Driss; Chen, Xiu-Kai

    2016-04-01

    In this paper, a numerical method is proposed to estimate the variable-order fractional derivatives of an unknown signal in noisy environment. Firstly, the wavelet denoising process is adopted to reduce the noise effect for the signal. Secondly, polynomials are constructed to fit the denoised signal in a set of overlapped subintervals of a considered interval. Thirdly, the variable-order fractional derivatives of these fitting polynomials are used as the estimations of the unknown ones, where the values obtained near the boundaries of each subinterval are ignored in the overlapped parts. Finally, numerical examples are presented to demonstrate the efficiency and robustness of the proposed method.

  9. MR images denoising using DCT-based unbiased nonlocal means filter

    NASA Astrophysics Data System (ADS)

    Zheng, Xiuqing; Hu, Jinrong; Zhou, Jiuliu

    2013-03-01

    The non-local means (NLM) filter has been proven to be an efficient feature-preserving denoising method and can be applied to remove noise in magnetic resonance (MR) images. To suppress noise more efficiently, we present a novel NLM filter that uses a low-pass filtered and low-dimensional version of the neighborhood for calculating the similarity weights. The discrete cosine transform (DCT) is used as a smoothing kernel, allowing both improvements in similarity estimation and computational speed-up. Experimental results show that the proposed filter achieves better denoising performance on MR images compared with other filters, such as the recently proposed NLM filter and the unbiased NLM (UNLM) filter.

  10. Physical limits of inference

    NASA Astrophysics Data System (ADS)

    Wolpert, David H.

    2008-07-01

    We show that physical devices that perform observation, prediction, or recollection share an underlying mathematical structure. We call devices with that structure “inference devices”. We present a set of existence and impossibility results concerning inference devices. These results hold independent of the precise physical laws governing our universe. In a limited sense, the impossibility results establish that Laplace was wrong to claim that even in a classical, non-chaotic universe the future can be unerringly predicted, given sufficient knowledge of the present. Alternatively, these impossibility results can be viewed as a non-quantum-mechanical “uncertainty principle”. The mathematics of inference devices has close connections to the mathematics of Turing Machines (TMs). In particular, the impossibility results for inference devices are similar to the Halting theorem for TMs. Furthermore, one can define an analog of Universal TMs (UTMs) for inference devices. We call those analogs “strong inference devices”. We use strong inference devices to define the “inference complexity” of an inference task, which is the analog of the Kolmogorov complexity of computing a string. A task-independent bound is derived on how much the inference complexity of an inference task can differ for two different inference devices. This is analogous to the “encoding” bound governing how much the Kolmogorov complexity of a string can differ between two UTMs used to compute that string. However no universe can contain more than one strong inference device. So whereas the Kolmogorov complexity of a string is arbitrary up to specification of the UTM, there is no such arbitrariness in the inference complexity of an inference task. We informally discuss the philosophical implications of these results, e.g., for whether the universe “is” a computer. We also derive some graph-theoretic properties governing any set of multiple inference devices. We also present an

  11. INFERRING THE ECCENTRICITY DISTRIBUTION

    SciTech Connect

    Hogg, David W.; Bovy, Jo; Myers, Adam D.

    2010-12-20

    Standard maximum-likelihood estimators for binary-star and exoplanet eccentricities are biased high, in the sense that the estimated eccentricity tends to be larger than the true eccentricity. As with most non-trivial observables, a simple histogram of estimated eccentricities is not a good estimate of the true eccentricity distribution. Here, we develop and test a hierarchical probabilistic method for performing the relevant meta-analysis, that is, inferring the true eccentricity distribution, taking as input the likelihood functions for the individual star eccentricities, or samplings of the posterior probability distributions for the eccentricities (under a given, uninformative prior). The method is a simple implementation of a hierarchical Bayesian model; it can also be seen as a kind of heteroscedastic deconvolution. It can be applied to any quantity measured with finite precision (other orbital parameters, or indeed any astronomical measurements of any kind, including magnitudes, distances, or photometric redshifts) so long as the measurements have been communicated as a likelihood function or a posterior sampling.

  12. Inferring the Eccentricity Distribution

    NASA Astrophysics Data System (ADS)

    Hogg, David W.; Myers, Adam D.; Bovy, Jo

    2010-12-01

    Standard maximum-likelihood estimators for binary-star and exoplanet eccentricities are biased high, in the sense that the estimated eccentricity tends to be larger than the true eccentricity. As with most non-trivial observables, a simple histogram of estimated eccentricities is not a good estimate of the true eccentricity distribution. Here, we develop and test a hierarchical probabilistic method for performing the relevant meta-analysis, that is, inferring the true eccentricity distribution, taking as input the likelihood functions for the individual star eccentricities, or samplings of the posterior probability distributions for the eccentricities (under a given, uninformative prior). The method is a simple implementation of a hierarchical Bayesian model; it can also be seen as a kind of heteroscedastic deconvolution. It can be applied to any quantity measured with finite precision—other orbital parameters, or indeed any astronomical measurements of any kind, including magnitudes, distances, or photometric redshifts—so long as the measurements have been communicated as a likelihood function or a posterior sampling.

  13. Inference in 'poor' languages

    SciTech Connect

    Petrov, S.

    1996-10-01

    Languages with a solvable implication problem but without complete and consistent systems of inference rules ('poor' languages) are considered. The problem of the existence of a finite, complete and consistent inference rule system for a 'poor' language is stated independently of the language or rule syntax. Several properties of the problem are proved. An application of the results to the language of join dependencies is given.

  14. Biomedical image and signal de-noising using dual tree complex wavelet transform

    NASA Astrophysics Data System (ADS)

    Rizi, F. Yousefi; Noubari, H. Ahmadi; Setarehdan, S. K.

    2011-10-01

    The dual tree complex wavelet transform (DTCWT) is a form of discrete wavelet transform which generates complex coefficients by using a dual tree of wavelet filters to obtain their real and imaginary parts. The purposes of de-noising are reducing the noise level and improving the signal-to-noise ratio (SNR) without distorting the signal or image. This paper proposes a method for removing white Gaussian noise from ECG signals and biomedical images. The discrete wavelet transform (DWT) is very valuable in a wide range of de-noising problems. However, it has limitations such as oscillations of the coefficients at a singularity, lack of directional selectivity in higher dimensions, aliasing and consequent shift variance. The complex wavelet transform (CWT) strategy that we focus on in this paper is Kingsbury's and Selesnick's dual tree CWT (DTCWT), which outperforms the critically decimated DWT in a range of applications such as de-noising. Each complex wavelet is oriented along one of six possible directions, and the magnitude of each complex wavelet has a smooth bell shape. In the final part of this paper, we present biomedical image and signal de-noising by means of thresholding the magnitude of the wavelet coefficients.

  15. Adaptive Tensor-Based Principal Component Analysis for Low-Dose CT Image Denoising

    PubMed Central

    Ai, Danni; Yang, Jian; Fan, Jingfan; Cong, Weijian; Wang, Yongtian

    2015-01-01

    Computed tomography (CT) has revolutionized diagnostic radiology but involves large radiation doses that directly impact image quality. In this paper, we propose an adaptive tensor-based principal component analysis (AT-PCA) algorithm for low-dose CT image denoising. Pixels in the image are represented by their nearby neighbors and are modeled as a patch. Adaptive searching windows are calculated to find similar patches as training groups for further processing. Tensor-based PCA is used to obtain transformation matrices, and coefficients are sequentially shrunk by the linear minimum mean square error. Reconstructed patches are obtained, and a denoised image is finally achieved by aggregating all of these patches. The experimental results on the standard test image show that the best results are obtained with two denoising rounds according to six quantitative measures. For the experiment on the clinical images, the proposed AT-PCA method can suppress the noise, enhance the edge, and improve the image quality more effectively than the NLM and K-SVD denoising methods. PMID:25993566
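
    The core patch-PCA-shrinkage loop can be sketched as follows. This is a generic, illustrative implementation of the idea (fixed patches, global PCA, LMMSE shrinkage), not the authors' adaptive search windows or tensor formulation, and all parameter values are assumptions.

      # Patch-based PCA denoising with LMMSE coefficient shrinkage.
      import numpy as np

      def patch_pca_denoise(img, ps=8, sigma=20.0):
          H, W = img.shape
          # Collect all overlapping patches as rows of a data matrix.
          X = np.asarray([img[i:i + ps, j:j + ps].ravel()
                          for i in range(H - ps + 1)
                          for j in range(W - ps + 1)], dtype=float)
          mean = X.mean(axis=0)
          Xc = X - mean
          # PCA via eigendecomposition of the patch covariance.
          evals, evecs = np.linalg.eigh(Xc.T @ Xc / Xc.shape[0])
          coef = Xc @ evecs
          # LMMSE shrinkage: signal variance = total variance - noise variance.
          sig_var = np.maximum(evals - sigma ** 2, 0.0)
          coef *= sig_var / (sig_var + sigma ** 2)
          Xd = coef @ evecs.T + mean
          # Aggregate overlapping patches back into the image by averaging.
          out = np.zeros_like(img, dtype=float)
          cnt = np.zeros_like(out)
          k = 0
          for i in range(H - ps + 1):
              for j in range(W - ps + 1):
                  out[i:i + ps, j:j + ps] += Xd[k].reshape(ps, ps)
                  cnt[i:i + ps, j:j + ps] += 1.0
                  k += 1
          return out / cnt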

  16. a Universal De-Noising Algorithm for Ground-Based LIDAR Signal

    NASA Astrophysics Data System (ADS)

    Ma, Xin; Xiang, Chengzhi; Gong, Wei

    2016-06-01

    Ground-based lidar, working as an effective remote sensing tool, plays an irreplaceable role in the study of the atmosphere, since it can provide the atmospheric vertical profile. However, the appearance of noise in a lidar signal is unavoidable, which leads to difficulties and complexities when searching for more information. Every de-noising method has its own characteristics and limitations, since the lidar signal varies as the atmosphere changes. In this paper, a universal de-noising algorithm is proposed to enhance the SNR of a ground-based lidar signal, based on signal segmentation and reconstruction. The signal segmentation, serving as the keystone of the algorithm, divides the lidar signal into three different parts, which are processed by different de-noising methods according to their own characteristics. The signal reconstruction is a relatively simple procedure that splices the signal sections end to end. Finally, a series of tests on simulated signals and a real dual field-of-view lidar signal show the feasibility of the universal de-noising algorithm.

  17. Dual tree complex wavelet transform based denoising of optical microscopy images.

    PubMed

    Bal, Ufuk

    2012-12-01

    Photon shot noise is the main noise source of optical microscopy images and can be modeled by a Poisson process. Several discrete wavelet transform based methods have been proposed in the literature for denoising images corrupted by Poisson noise. However, the discrete wavelet transform (DWT) has disadvantages such as shift variance, aliasing, and lack of directional selectivity. To overcome these problems, a dual tree complex wavelet transform is used in our proposed denoising algorithm. Our denoising algorithm is based on the assumption that, in the Poisson noise case, threshold values for wavelet coefficients can be estimated from the approximation coefficients. Our proposed method was compared with one of the state-of-the-art denoising algorithms. Better results were obtained by using the proposed algorithm in terms of image quality metrics. Furthermore, the contrast enhancement effect of the proposed method on collagen fiber images is examined. Our method allows fast and efficient enhancement of images obtained under low light intensity conditions. PMID:23243573

  18. Translation invariant directional framelet transform combined with Gabor filters for image denoising.

    PubMed

    Shi, Yan; Yang, Xiaoyuan; Guo, Yuhua

    2014-01-01

    This paper is devoted to the study of a directional lifting transform for wavelet frames. A nonsubsampled lifting structure is developed to maintain translation invariance, as it is an important property in image denoising. Then, the directionality of the lifting-based tight frame is explicitly discussed, followed by a specific translation invariant directional framelet transform (TIDFT). The TIDFT has two framelets ψ1, ψ2 with vanishing moments of orders two and one, respectively, which are able to detect singularities in a given direction set. It provides an efficient and sparse representation for images containing rich textures, along with fast implementation and perfect reconstruction. In addition, an adaptive block-wise orientation estimation method based on Gabor filters is presented instead of the conventional minimization of residuals. Furthermore, the TIDFT is utilized to exploit the capability of image denoising, incorporating the MAP estimator for multivariate exponential distribution. Consequently, the TIDFT is able to eliminate the noise effectively while preserving the textures simultaneously. Experimental results show that the TIDFT outperforms some other frame-based denoising methods, such as contourlet and shearlet, and is competitive with the state-of-the-art denoising approaches. PMID:24215934

  19. Fast and Memory-Efficient Topological Denoising of 2D and 3D Scalar Fields.

    PubMed

    Günther, David; Jacobson, Alec; Reininghaus, Jan; Seidel, Hans-Peter; Sorkine-Hornung, Olga; Weinkauf, Tino

    2014-12-01

    Data acquisition, numerical inaccuracies, and sampling often introduce noise in measurements and simulations. Removing this noise is often necessary for efficient analysis and visualization of this data, yet many denoising techniques change the minima and maxima of a scalar field. For example, the extrema can appear or disappear, spatially move, and change their value. This can lead to wrong interpretations of the data, e.g., when the maximum temperature over an area is falsely reported as being a few degrees cooler because the denoising method is unaware of these features. Recently, a topological denoising technique based on a global energy optimization was proposed, which allows the topology-controlled denoising of 2D scalar fields. While this method preserves the minima and maxima, it is constrained by the size of the data. We extend this work to large 2D data and medium-sized 3D data by introducing a novel domain decomposition approach. It allows processing small patches of the domain independently while still avoiding the introduction of new critical points. Furthermore, we propose an iterative refinement of the solution, which decreases the optimization energy compared to the previous approach and therefore gives smoother results that are closer to the input. We illustrate our technique on synthetic and real-world 2D and 3D data sets that highlight potential applications. PMID:26356972

  20. A multi-scale non-local means algorithm for image de-noising

    NASA Astrophysics Data System (ADS)

    Nercessian, Shahan; Panetta, Karen A.; Agaian, Sos S.

    2012-06-01

    A highly studied problem in image processing, and in electrical engineering in general, is the recovery of a true signal from its noisy version. Images can be corrupted by noise during their acquisition or transmission stages. As noisy images are visually very poor in quality and complicate further processing stages of computer vision systems, it is imperative to develop algorithms which effectively remove noise from images. In practice, it is difficult to remove the noise while simultaneously retaining the edge structures within the image. Accordingly, many de-noising algorithms attempt to intelligently smooth the image while still preserving its details. Recently, a non-local means (NLM) de-noising algorithm was introduced, which exploits the redundant nature of images to achieve image de-noising. The algorithm was shown to outperform current de-noising standards, including Gaussian filtering, anisotropic diffusion, total variation minimization, and multi-scale transform coefficient thresholding. However, the NLM algorithm was developed in the spatial domain, and therefore does not leverage multi-scale transforms, which provide a framework in which signals can be better distinguished from noise. Accordingly, in this paper, a multi-scale NLM (MS-NLM) algorithm is proposed, which combines the advantages of the NLM algorithm and multi-scale image processing techniques. Experimental results via computer simulations illustrate that the MS-NLM algorithm outperforms the NLM, both visually and quantitatively.
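
    As a reference point, the baseline spatial-domain NLM step that MS-NLM extends takes only a few lines with scikit-image; the multi-scale variant would apply the same filter to individual transform subbands. The package choice and all parameter values here are our assumptions, not the authors' code.

      # Baseline spatial-domain NLM using scikit-image.
      import numpy as np
      from skimage.restoration import denoise_nl_means, estimate_sigma

      rng = np.random.default_rng(2)
      clean = np.zeros((128, 128))
      clean[32:96, 32:96] = 1.0                      # toy piecewise image
      noisy = clean + 0.15 * rng.standard_normal(clean.shape)

      sigma = estimate_sigma(noisy)                  # noise level estimate
      denoised = denoise_nl_means(noisy, h=1.15 * sigma, sigma=sigma,
                                  patch_size=5, patch_distance=6,
                                  fast_mode=True)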

  1. Random wavelet transforms, algebraic geometric coding, and their applications in signal compression and de-noising

    SciTech Connect

    Bieleck, T.; Song, L.M.; Yau, S.S.T.; Kwong, M.K.

    1995-07-01

    The concepts of random wavelet transforms and discrete random wavelet transforms are introduced. It is shown that these transforms can lead to simultaneous compression and de-noising of signals that have been corrupted with fractional noises. Potential applications of algebraic geometric coding theory to encode the ensuing data are also discussed.

  2. Texture preservation in de-noising UAV surveillance video through multi-frame sampling

    NASA Astrophysics Data System (ADS)

    Wang, Yi; Fevig, Ronald A.; Schultz, Richard R.

    2009-02-01

    Image de-noising is a widely-used technology in modern real-world surveillance systems. Methods can seldom achieve both de-noising and texture preservation very well without direct knowledge of the noise model. Most of the neighborhood fusion-based de-noising methods tend to over-smooth the images, which causes a significant loss of detail. Recently, a new non-local means method has been developed, which is based on the similarities among the different pixels. This technique results in good preservation of the textures; however, it also causes some artifacts. In this paper, we utilize the scale-invariant feature transform (SIFT) [1] method to find the corresponding regions between different images, and then reconstruct the de-noised images by a weighted sum of these corresponding regions. Both hard and soft criteria are chosen in order to minimize the artifacts. Experiments applied to real unmanned aerial vehicle thermal infrared surveillance video show that our method is superior to popular methods in the literature.

  3. An NMR log echo data de-noising method based on the wavelet packet threshold algorithm

    NASA Astrophysics Data System (ADS)

    Meng, Xiangning; Xie, Ranhong; Li, Changxi; Hu, Falong; Li, Chaoliu; Zhou, Cancan

    2015-12-01

    To improve the de-noising effects for low signal-to-noise ratio (SNR) nuclear magnetic resonance (NMR) log echo data, this paper applies the wavelet packet threshold algorithm to the data. The principle of the algorithm is elaborated in detail. By comparing the properties of a series of wavelet packet bases and their relevance to the NMR log echo train signal, ‘sym7’ is found to be the optimal wavelet packet basis for de-noising the NMR log echo train signal. A new method is presented to determine the optimal wavelet packet decomposition scale: within the scope of its maximum, the global and local optimal decomposition scales are determined using the modulus maxima and Shannon entropy minimum criteria, respectively. The results of applying the method to simulated and actual NMR log echo data indicate that, compared with the wavelet threshold algorithm, the wavelet packet threshold algorithm shows higher decomposition accuracy and a better de-noising effect, and is much more suitable for de-noising low-SNR NMR log echo data.
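
    A minimal version of the wavelet-packet thresholding step, using the paper's 'sym7' basis, can be written with PyWavelets as below. The modulus-maxima/Shannon-entropy scale selection is replaced by a fixed decomposition level for brevity, and the threshold rule is an assumption.

      # Wavelet-packet soft-thresholding sketch ('sym7', fixed level).
      import numpy as np
      import pywt

      def wp_denoise(x, wavelet='sym7', level=4):
          wp = pywt.WaveletPacket(data=x, wavelet=wavelet,
                                  mode='symmetric', maxlevel=level)
          nodes = wp.get_level(level, order='natural')
          # Noise estimate from the highest-frequency leaf (MAD estimator).
          sigma = np.median(np.abs(nodes[-1].data)) / 0.6745
          thr = sigma * np.sqrt(2.0 * np.log(len(x)))
          for node in nodes:
              node.data = pywt.threshold(node.data, thr, mode='soft')
          return wp.reconstruct(update=True)[:len(x)]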

  4. Denoising of hyperspectral images by best multilinear rank approximation of a tensor

    NASA Astrophysics Data System (ADS)

    Marin-McGee, Maider; Velez-Reyes, Miguel

    2010-04-01

    The hyperspectral image cube can be modeled as a three dimensional array. Tensors and the tools of multilinear algebra provide a natural framework to deal with this type of mathematical object. Singular value decomposition (SVD) and its variants have been used by the HSI community for denoising of hyperspectral imagery. Denoising of HSI using SVD is achieved by finding a low rank approximation of a matrix representation of the hyperspectral image cube. This paper investigates similar concepts in hyperspectral denoising by using a low multilinear rank approximation of the given HSI tensor representation. The Best Multilinear Rank Approximation (BMRA) of a given tensor A seeks a lower multilinear rank tensor B that is as close as possible to A in the Frobenius norm. Different numerical methods to compute the BMRA, using the Alternating Least Squares (ALS) method and Newton-type methods over products of Grassmann manifolds, are presented. The effects of the multilinear rank, the numerical method used to compute the BMRA, and different parameter choices in those methods are studied. Results show that comparable accuracy is achievable with both ALS and Newton-type methods. Also, classification results using the filtered tensor are better than those obtained either with denoising using SVD or MNF.
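
    A truncated higher-order SVD (HOSVD) gives a standard, non-iterative starting point for such a low multilinear rank approximation; ALS or Newton-type iterations then refine it toward the true BMRA. The sketch below, including the toy cube and the ranks, is illustrative only.

      # Truncated HOSVD: project an HSI cube onto low multilinear rank.
      import numpy as np

      def unfold(T, mode):
          return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

      def truncated_hosvd(T, ranks):
          B = T
          for mode, r in enumerate(ranks):
              # Leading left singular vectors of the mode unfolding.
              U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
              U = U[:, :r]
              # Project this mode onto the span of U.
              B = np.moveaxis(np.tensordot(U @ U.T, B,
                                           axes=([1], [mode])), 0, mode)
          return B

      # Toy rows x cols x bands cube with low-rank structure plus noise.
      rng = np.random.default_rng(3)
      A = np.einsum('ia,jb,kc,abc->ijk',
                    rng.standard_normal((40, 5)), rng.standard_normal((40, 5)),
                    rng.standard_normal((30, 3)), rng.standard_normal((5, 5, 3)))
      denoised = truncated_hosvd(A + 0.1 * rng.standard_normal(A.shape),
                                 ranks=(5, 5, 3))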

  5. Subject-specific patch-based denoising for contrast-enhanced cardiac MR images

    NASA Astrophysics Data System (ADS)

    Ma, Lorraine; Ebrahimi, Mehran; Pop, Mihaela

    2016-03-01

    Many patch-based techniques in imaging, e.g., Non-local means denoising, require tuning parameters to yield optimal results. In real-world applications, e.g., denoising of MR images, ground truth is not generally available and the process of choosing an appropriate set of parameters is a challenge. Recently, Zhu et al. proposed a method to define an image quality measure, called Q, that does not require ground truth. In this manuscript, we evaluate the effect of various parameters of the NL-means denoising on this quality metric Q. Our experiments are based on the late-gadolinium enhancement (LGE) cardiac MR images that are inherently noisy. Our described exhaustive evaluation approach can be used in tuning parameters of patch-based schemes. Even in the case that an estimation of optimal parameters is provided using another existing approach, our described method can be used as a secondary validation step. Our preliminary results suggest that denoising parameters should be case-specific rather than generic.

  6. Denoising Algorithm for the Pixel-Response Non-Uniformity Correction of a Scientific CMOS Under Low Light Conditions

    NASA Astrophysics Data System (ADS)

    Hu, Changmiao; Bai, Yang; Tang, Ping

    2016-06-01

    We present a denoising algorithm for the pixel-response non-uniformity correction of a scientific complementary metal-oxide-semiconductor (CMOS) image sensor, which captures images under extremely low-light conditions. By analyzing integrating sphere experimental data, we present a pixel-by-pixel flat-field denoising algorithm to remove this fixed pattern noise, which occurs under low-light conditions and at high pixel-response readouts. The response of the CMOS image sensor imaging system to a uniform radiance field shows a high level of spatial uniformity after the denoising algorithm has been applied.
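
    A generic pixel-by-pixel flat-field correction, of the kind such algorithms refine, looks as follows; the paper's method is derived from integrating-sphere measurements, and the frame names here are illustrative.

      # Pixel-by-pixel gain (flat-field) correction for fixed-pattern noise.
      import numpy as np

      def flat_field_correct(raw, flat, dark):
          gain = flat.astype(float) - dark     # per-pixel response to a
          gain /= gain.mean()                  # uniform field, unit mean
          return (raw.astype(float) - dark) / np.clip(gain, 1e-6, None)

      # Usage: corrected = flat_field_correct(raw_frame, mean_flat, mean_dark)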

  7. Environment-dependent denoising autoencoder for distant-talking speech recognition

    NASA Astrophysics Data System (ADS)

    Ueda, Yuma; Wang, Longbiao; Kai, Atsuhiko; Ren, Bo

    2015-12-01

    In this paper, we propose an environment-dependent denoising autoencoder (DAE) and automatic environment identification based on a deep neural network (DNN) with blind reverberation estimation for robust distant-talking speech recognition. Recently, DAEs have been shown to be effective in many noise reduction and reverberation suppression applications because higher-level representations and increased flexibility of the feature mapping function can be learned. However, a DAE is not adequate in mismatched training and test environments. In a conventional DAE, parameters are trained using pairs of reverberant speech and clean speech under various acoustic conditions (that is, an environment-independent DAE). To address the above problem, we propose two environment-dependent DAEs to reduce the influence of mismatches between training and test environments. In the first approach, we train various DAEs using speech from different acoustic environments, and the DAE for the condition that best matches the test condition is automatically selected (that is, a two-step environment-dependent DAE). To improve environment identification performance, we propose a DNN that uses both reverberant speech and estimated reverberation. In the second approach, we add estimated reverberation features to the input of the DAE (that is, a one-step environment-dependent DAE or a reverberation-aware DAE). The proposed method is evaluated using speech in simulated and real reverberant environments. Experimental results show that the environment-dependent DAE outperforms the environment-independent one in both simulated and real reverberant environments. For the two-step environment-dependent DAE, the performance of environment identification based on the proposed DNN approach is also better than that of the conventional DNN approach, in which only reverberant speech is used and reverberation is not blindly estimated. Finally, the one-step environment-dependent DAE significantly outperforms the two-step environment-dependent DAE.

  8. Segmentation of confocal Raman microspectroscopic imaging data using edge-preserving denoising and clustering.

    PubMed

    Alexandrov, Theodore; Lasch, Peter

    2013-06-18

    Over the past decade, confocal Raman microspectroscopic (CRM) imaging has matured into a useful analytical tool to obtain spatially resolved chemical information on the molecular composition of biological samples and has found its way into histopathology, cytology, and microbiology. A CRM imaging data set is a hyperspectral image in which Raman intensities are represented as a function of three coordinates: a spectral coordinate λ encoding the wavelength and two spatial coordinates x and y. Understanding CRM imaging data is challenging because of its complexity, size, and moderate signal-to-noise ratio. Spatial segmentation of CRM imaging data is a way to reveal regions of interest and is traditionally performed using nonsupervised clustering which relies on spectral domain-only information with the main drawback being the high sensitivity to noise. We present a new pipeline for spatial segmentation of CRM imaging data which combines preprocessing in the spectral and spatial domains with k-means clustering. Its core is the preprocessing routine in the spatial domain, edge-preserving denoising (EPD), which exploits the spatial relationships between Raman intensities acquired at neighboring pixels. Additionally, we propose to use both spatial correlation to identify Raman spectral features colocalized with defined spatial regions and confidence maps to assess the quality of spatial segmentation. For CRM data acquired from midsagittal Syrian hamster (Mesocricetus auratus) brain cryosections, we show how our pipeline benefits from the complex spatial-spectral relationships inherent in the CRM imaging data. EPD significantly improves the quality of spatial segmentation that allows us to extract the underlying structural and compositional information contained in the Raman microspectra. PMID:23701523

  9. Median Modified Wiener Filter for nonlinear adaptive spatial denoising of protein NMR multidimensional spectra

    PubMed Central

    Cannistraci, Carlo Vittorio; Abbas, Ahmed; Gao, Xin

    2015-01-01

    Denoising multidimensional NMR-spectra is a fundamental step in NMR protein structure determination. The state-of-the-art method uses wavelet-denoising, which may suffer when applied to non-stationary signals affected by Gaussian-white-noise mixed with strong impulsive artifacts, like those in multi-dimensional NMR-spectra. Regrettably, Wavelet's performance depends on a combinatorial search of wavelet shapes and parameters; and multi-dimensional extension of wavelet-denoising is highly non-trivial, which hampers its application to multidimensional NMR-spectra. Here, we endorse a diverse philosophy of denoising NMR-spectra: less is more! We consider spatial filters that have only one parameter to tune: the window-size. We propose, for the first time, the 3D extension of the median-modified-Wiener-filter (MMWF), an adaptive variant of the median-filter, and also its novel variation named MMWF*. We test the proposed filters and the Wiener-filter, an adaptive variant of the mean-filter, on a benchmark set that contains 16 two-dimensional and three-dimensional NMR-spectra extracted from eight proteins. Our results demonstrate that the adaptive spatial filters significantly outperform their non-adaptive versions. The performance of the new MMWF* on 2D/3D-spectra is even better than wavelet-denoising. Noticeably, MMWF* produces stable high performance almost invariant for diverse window-size settings: this signifies a consistent advantage in the implementation of automatic pipelines for protein NMR-spectra analysis. PMID:25619991

  11. Linguistic Markers of Inference Generation While Reading.

    PubMed

    Clinton, Virginia; Carlson, Sarah E; Seipel, Ben

    2016-06-01

    Words can be informative linguistic markers of psychological constructs. The purpose of this study is to examine associations between word use and the process of making meaningful connections to a text while reading (i.e., inference generation). To achieve this purpose, think-aloud data from third-fifth grade students ([Formula: see text]) reading narrative texts were hand-coded for inferences. These data were also processed with a computer text analysis tool, Linguistic Inquiry and Word Count, for percentages of word use in the following categories: cognitive mechanism words, nonfluencies, and nine types of function words. Findings indicate that cognitive mechanisms were an independent, positive predictor of connections to background knowledge (i.e., elaborative inference generation) and nonfluencies were an independent, negative predictor of connections within the text (i.e., bridging inference generation). Function words did not provide unique variance towards predicting inference generation. These findings are discussed in the context of a cognitive reflection model and the differences between bridging and elaborative inference generation. In addition, potential practical implications for intelligent tutoring systems and computer-based methods of inference identification are presented. PMID:25833811

  12. Network Plasticity as Bayesian Inference

    PubMed Central

    Legenstein, Robert; Maass, Wolfgang

    2015-01-01

    General results from statistical learning theory suggest understanding not only brain computations, but also brain plasticity, as probabilistic inference. But a model for that has been missing. We propose that inherently stochastic features of synaptic plasticity and spine motility enable cortical networks of neurons to carry out probabilistic inference by sampling from a posterior distribution of network configurations. This model provides a viable alternative to existing models that propose convergence of parameters to maximum likelihood values. It explains how priors on weight distributions and connection probabilities can be merged optimally with learned experience, how cortical networks can generalize learned information so well to novel experiences, and how they can compensate continuously for unforeseen disturbances of the network. The resulting new theory of network plasticity explains from a functional perspective a number of experimental data on stochastic aspects of synaptic plasticity that previously appeared to be quite puzzling. PMID:26545099

  13. A real-time de-noising method applied for transient and weak biomolecular interaction analysis in surface plasmon resonance biosensing

    NASA Astrophysics Data System (ADS)

    Zhan, Shuyue; Shi, Chunfei; Ou, Huichao; Song, Hong; Wang, Xiaoping

    2016-03-01

    Surface plasmon resonance (SPR) biosensing technology is likely to become a label-free technology for transient and weak biomolecular interaction analysis (BIA); however, it needs improvement with regard to high-speed and high-resolution measurement. We studied a real-time de-noising (RD) data processing method for SPR sensorgrams based on the moving average; it can immediately distinguish ultra-weak signals during the experiment and can display a low-noise sensorgram in real time. A flow injection analysis experiment and a CM5 sensorchip affinity experiment were designed to evaluate the characteristics of the RD method. The high noise suppression ability and low signal distortion risk of the RD method have been demonstrated. The RD method does not significantly distort signals of the sensorgram in the molecular affinity experiment, and K_D values obtained with the RD method essentially coincide with those of the raw sensorgram, with a higher signal-to-noise ratio (SNR). Meanwhile, by using the RD method to de-noise a sensorgram with an ultralow SNR, which is closer to the conditions of transient and weak molecular interactions, the kinetic constant can be analyzed more accurately, which cannot be achieved with the raw sensorgram. The crucial function and significance of the RD method lie primarily in extending the measurement limit of SPR sensing.
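
    The causal moving average that such a real-time method builds on uses only past samples, so it can run during acquisition. A minimal sketch follows; the window length is an illustrative choice, not the paper's.

      # Causal moving average: each output depends only on past samples.
      import numpy as np

      def causal_moving_average(x, window=25):
          c = np.cumsum(np.insert(np.asarray(x, dtype=float), 0, 0.0))
          out = np.empty(len(x))
          for n in range(len(x)):
              lo = max(0, n - window + 1)
              out[n] = (c[n + 1] - c[lo]) / (n + 1 - lo)
          return out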

  14. De-noising of microwave satellite soil moisture time series

    NASA Astrophysics Data System (ADS)

    Su, Chun-Hsu; Ryu, Dongryeol; Western, Andrew; Wagner, Wolfgang

    2013-04-01

    Technology) ASCAT data sets to identify two types of errors that are spectrally distinct. Based on a semi-empirical model of soil moisture dynamics, we consider possible digital filter designs to improve the accuracy of their soil moisture products by reducing systematic periodic errors and stochastic noise. We describe a methodology to design bandstop filters to remove artificial resonances, and a Wiener filter to remove stochastic white noise present in the satellite data. Utility of these filters is demonstrated by comparing de-noised data against in-situ observations from ground monitoring stations in the Murrumbidgee Catchment (Smith et al., 2012), southeast Australia.

    References:
    Albergel, C., de Rosnay, P., Gruhier, C., Muñoz Sabater, J., Hasenauer, S., Isaksen, L., Kerr, Y. H., & Wagner, W. (2012). Evaluation of remotely sensed and modelled soil moisture products using global ground-based in situ observations. Remote Sensing of Environment, 118, 215-226.
    Scipal, K., Holmes, T., de Jeu, R., Naeimi, V., & Wagner, W. (2008). A possible solution for the problem of estimating the error structure of global soil moisture data sets. Geophysical Research Letters, 35, L24403.
    Smith, A. B., Walker, J. P., Western, A. W., Young, R. I., Ellett, K. M., Pipunic, R. C., Grayson, R. B., Siriwardena, L., Chiew, F. H. S., & Richter, H. (2012). The Murrumbidgee soil moisture network data set. Water Resources Research, 48, W07701.
    Su, C.-H., Ryu, D., Young, R., Western, A. W., & Wagner, W. (2012). Inter-comparison of microwave satellite soil moisture retrievals over Australia. Submitted to Remote Sensing of Environment.

  15. Application of Wavelet Based Denoising for T-Wave Alternans Analysis in High Resolution ECG Maps

    NASA Astrophysics Data System (ADS)

    Janusek, D.; Kania, M.; Zaczek, R.; Zavala-Fernandez, H.; Zbieć, A.; Opolski, G.; Maniewski, R.

    2011-01-01

    T-wave alternans (TWA) allows for identification of patients at an increased risk of ventricular arrhythmia. A stress test, which increases heart rate in a controlled manner, is used for TWA measurement. However, TWA detection and analysis are often disturbed by muscular interference. An evaluation of wavelet based denoising methods was performed to find the optimal algorithm for TWA analysis. ECG signals recorded in twelve patients with cardiac disease were analyzed. In seven of them a significant T-wave alternans magnitude was detected. The application of a wavelet based denoising method in the pre-processing stage increased the T-wave alternans magnitude as well as the number of BSPM signals in which TWA was detected.

  16. Parameters optimization for wavelet denoising based on normalized spectral angle and threshold constraint machine learning

    NASA Astrophysics Data System (ADS)

    Li, Hao; Ma, Yong; Liang, Kun; Tian, Yong; Wang, Rui

    2012-01-01

    Wavelet parameters (e.g., wavelet type, level of decomposition) affect the performance of the wavelet denoising algorithm in hyperspectral applications. Current studies select the best wavelet parameters for a single spectral curve by comparing similarity criteria such as the spectral angle (SA). However, a method to find the best parameters for a spectral library that contains multiple spectra has not been studied. In this paper, a criterion named normalized spectral angle (NSA) is proposed. By comparing NSA values, the best combination of parameters for a spectral library can be selected. Moreover, a fast algorithm based on threshold constraints and machine learning is developed to reduce the time of a full search. After several iterations of learning, the combination of parameters that consistently surpasses a threshold is selected. The experiments showed that, by using the NSA criterion, the SA values decreased significantly, and the fast algorithm saved 80% of the time consumption while the denoising performance was not noticeably impaired.
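
    The spectral angle underlying the NSA criterion is simple to compute, and a library-level score can then be formed by aggregating over spectra. The exact normalization used in the paper may differ from the plain mean shown in this sketch.

      # Spectral angle (SA) and a simple library-level aggregate.
      import numpy as np

      def spectral_angle(a, b):
          cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
          return np.arccos(np.clip(cos, -1.0, 1.0))   # radians

      def mean_spectral_angle(library_ref, library_denoised):
          return np.mean([spectral_angle(r, d)
                          for r, d in zip(library_ref, library_denoised)])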

  17. Prognostics of Lithium-Ion Batteries Based on Wavelet Denoising and DE-RVM.

    PubMed

    Zhang, Chaolong; He, Yigang; Yuan, Lifeng; Xiang, Sheng; Wang, Jinping

    2015-01-01

    Lithium-ion batteries are widely used in many electronic systems. Therefore, it is important, yet very difficult, to estimate a lithium-ion battery's remaining useful life (RUL). One important reason is that the measured battery capacity data are often subject to different levels of noise pollution. In this paper, a novel battery capacity prognostics approach is presented to estimate the RUL of lithium-ion batteries. Wavelet denoising is performed with different thresholds in order to weaken the strong noise and remove the weak noise. A relevance vector machine (RVM) improved by the differential evolution (DE) algorithm is utilized to estimate the battery RUL based on the denoised data. Experiments on the battery 5 and battery 18 capacity prognostics cases validate that the proposed approach can closely predict the trend of the battery capacity trajectory and accurately estimate the battery RUL. PMID:26413090

  19. R-L Method and BLS-GSM Denoising for Penumbra Image Reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Mei; Li, Yang; Sheng, Liang; Li, Chunhua; Wei, Fuli; Peng, Bodong

    2013-12-01

    When the neutron yield is very low, reconstruction of a coded penumbra image is rather difficult. In this paper, low-yield (10^9) 14 MeV neutron penumbra imaging was simulated by the Monte Carlo method. The Richardson-Lucy (R-L) iteration method was proposed, incorporating Bayesian least squares-Gaussian scale mixture (BLS-GSM) wavelet denoising for the simulated image. The optimal number of R-L iterations was determined through a large number of tests. The results show that, compared with the Wiener method and median filter denoising, this method is better at suppressing background noise, the correlation coefficient Rsr between the reconstructed and the real images is larger, and the reconstruction result is better.
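
    The R-L update itself is a short multiplicative iteration; a minimal NumPy/SciPy version is sketched below, with the BLS-GSM denoising stage omitted and the PSF and iteration count left as illustrative inputs.

      # Minimal Richardson-Lucy deconvolution loop.
      import numpy as np
      from scipy.signal import fftconvolve

      def richardson_lucy(observed, psf, n_iter=30):
          est = np.full_like(observed, observed.mean(), dtype=float)
          psf_mirror = psf[::-1, ::-1]
          for _ in range(n_iter):
              conv = fftconvolve(est, psf, mode='same')
              ratio = observed / np.clip(conv, 1e-12, None)
              est *= fftconvolve(ratio, psf_mirror, mode='same')
          return est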

  20. A new performance evaluation scheme for jet engine vibration signal denoising

    NASA Astrophysics Data System (ADS)

    Sadooghi, Mohammad Saleh; Esmaeilzadeh Khadem, Siamak

    2016-08-01

    Denoising of a cargo-plane jet engine compressor vibration signal is investigated in this article. The discrete wavelet transform and two families of thresholding, Donoho-Johnstone and parameter-method thresholding, are applied to the vibration signal. Eighty-four combinations of wavelet thresholding and mother wavelet are evaluated. A new performance evaluation scheme for the optimal selection of the mother wavelet and thresholding method combination is proposed in this paper, which makes a trade-off between four performance criteria: signal to noise ratio, percentage root mean square difference, cross-correlation, and mean square error. The Dmeyer mother wavelet (dmey) combined with rigorous SURE thresholding has the maximum trade-off value and was selected as the most appropriate combination for denoising the signal. It was shown that an inappropriate combination leads to loss of data. The higher performance of the proposed trade-off with respect to the individual criteria was also demonstrated graphically.

  1. Convex optimization-based windowed Fourier filtering with multiple windows for wrapped-phase denoising.

    PubMed

    Yatabe, Kohei; Oikawa, Yasuhiro

    2016-06-10

    The windowed Fourier filtering (WFF), defined as a thresholding operation in the windowed Fourier transform (WFT) domain, is a successful method for denoising a phase map and analyzing a fringe pattern. However, it has some shortcomings, such as extremely high redundancy, which results in high computational cost, and difficulty in selecting an appropriate window size. In this paper, an extension of WFF for denoising a wrapped-phase map is proposed. It is formulated as a convex optimization problem using Gabor frames instead of WFT. Two Gabor frames with differently sized windows are used simultaneously so that the above-mentioned issues are resolved. In addition, a differential operator is combined with a Gabor frame in order to preserve discontinuity of the underlying phase map better. Some numerical experiments demonstrate that the proposed method is able to reconstruct a wrapped-phase map, even for a severely contaminated situation. PMID:27409020

  2. A study of infrared spectroscopy de-noising based on LMS adaptive filter

    NASA Astrophysics Data System (ADS)

    Mo, Jia-qing; Lv, Xiao-yi; Yu, Xiao

    2015-12-01

    Infrared spectroscopy has been widely used, but spectra often contain considerable noise, so the spectral characteristics of the sample are seriously affected. Therefore, de-noising is very important in spectrum analysis and processing. In this study, the least mean square (LMS) adaptive filter was applied to infrared spectroscopy for the first time. The LMS adaptive filter algorithm can preserve the detail and envelope of the effective signal when applied to infrared spectra of breast cancer whose signal-to-noise ratio (SNR) is lower than 10 dB; the results were contrasted with those of the wavelet transform and ensemble empirical mode decomposition (EEMD). Three evaluation standards (SNR, root mean square error (RMSE), and the correlation coefficient (ρ)) fully demonstrated the de-noising advantages of the LMS adaptive filter for infrared spectroscopy of breast cancer.
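
    A textbook LMS filter of the kind applied here fits in a few lines; the filter order and step size are illustrative and must be tuned per spectrum. For denoising a single spectrum, the reference input is often a delayed copy of the noisy signal (an adaptive line enhancer), which is an assumption on our part rather than the paper's setup.

      # Textbook LMS adaptive filter.
      import numpy as np

      def lms_filter(d, x, order=8, mu=0.01):
          """Adapt weights w so that the filtered x tracks d."""
          w = np.zeros(order)
          y = np.zeros(len(d))
          for n in range(order, len(d)):
              window = x[n - order:n][::-1]   # most recent sample first
              y[n] = w @ window
              e = d[n] - y[n]                 # estimation error
              w += 2.0 * mu * e * window      # LMS weight update
          return y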

  3. Denoising preterm EEG by signal decomposition and adaptive filtering: a comparative study.

    PubMed

    Navarro, X; Porée, F; Beuchée, A; Carrault, G

    2015-03-01

    Electroencephalography (EEG) from preterm infant monitoring systems is usually contaminated by several sources of noise that have to be removed in order to correctly interpret signals and perform automated analysis reliably. Band-pass and adaptive filters (AF) continue to be systematically applied, but their efficacy may decrease when facing preterm EEG patterns such as the tracé alternant and slow delta-waves. In this paper, we propose the combination of EEG decomposition with AF to improve the overall denoising process. Using artificially contaminated signals from real EEGs, we compared the quality of filtered signals applying different decomposition techniques: the discrete wavelet transform, the empirical mode decomposition (EMD) and a recent improved version, the complete ensemble EMD with adaptive noise. Simulations demonstrate that introducing EMD-based techniques prior to AF can reduce the root mean squared errors in denoised EEGs by up to 30%. PMID:25659233

  4. The Application of Wavelet-Domain Hidden Markov Tree Model in Diabetic Retinal Image Denoising

    PubMed Central

    Cui, Dong; Liu, Minmin; Hu, Lei; Liu, Keju; Guo, Yongxin; Jiao, Qing

    2015-01-01

    The wavelet-domain Hidden Markov Tree Model can properly describe the dependence and correlation of fundus angiographic images' wavelet coefficients among scales. Based on the construction of Hidden Markov Tree Models and Gaussian Mixture Models for fundus angiographic images, this paper applied the expectation-maximization algorithm to estimate the wavelet coefficients of original fundus angiographic images and Bayesian estimation to achieve the goal of fundus angiographic image denoising. As shown in the experimental results, compared with other algorithms such as the mean filter and the median filter, this method effectively improved the peak signal to noise ratio of fundus angiographic images after denoising and preserved the details of vascular edges in fundus angiographic images. PMID:26628926

  5. Projection domain denoising method based on dictionary learning for low-dose CT image reconstruction.

    PubMed

    Zhang, Haiyan; Zhang, Liyi; Sun, Yunshan; Zhang, Jingyu

    2015-01-01

    Reducing X-ray tube current is one of the widely used methods for decreasing the radiation dose. Unfortunately, the signal-to-noise ratio (SNR) of the projection data degrades simultaneously. To improve the quality of reconstructed images, a dictionary learning based penalized weighted least-squares (PWLS) approach is proposed for sinogram denoising. The weighted least-squares term accounts for the statistical characteristics of the noise, and the penalty models the sparsity of the sinogram based on dictionary learning. The CT image is then reconstructed from the denoised sinogram using the filtered back projection (FBP) algorithm. The proposed method is particularly suitable for projection data with low SNR. Experimental results show that the proposed method can produce high-quality CT images even when the SNR of the projection data declines sharply. PMID:26409424

  6. Speech signal denoising with wavelet-transforms and the mean opinion score characterizing the filtering quality

    NASA Astrophysics Data System (ADS)

    Yaseen, Alauldeen S.; Pavlov, Alexey N.; Hramov, Alexander E.

    2016-03-01

    Speech signal processing is widely used to reduce noise impact in acquired data. During the last decades, wavelet-based filtering techniques have often been applied in communication systems due to their advantages in signal denoising as compared with Fourier-based methods. In this study we consider applications of a 1-D double density complex wavelet transform (1D-DDCWT) and compare the results with those of the standard 1-D discrete wavelet transform (1D-DWT). The performances of the considered techniques are compared using the mean opinion score (MOS), the primary metric for the quality of the processed signals. A two-dimensional extension of this approach can be used for effective image denoising.

  7. Mining Spatial-Temporal Patterns and Structural Sparsity for Human Motion Data Denoising.

    PubMed

    Feng, Yinfu; Ji, Mingming; Xiao, Jun; Yang, Xiaosong; Zhang, Jian J; Zhuang, Yueting; Li, Xuelong

    2015-12-01

    Motion capture is an important technique with a wide range of applications in areas such as computer vision, computer animation, film production, and medical rehabilitation. Even with the professional motion capture systems, the acquired raw data mostly contain inevitable noises and outliers. To denoise the data, numerous methods have been developed, while this problem still remains a challenge due to the high complexity of human motion and the diversity of real-life situations. In this paper, we propose a data-driven-based robust human motion denoising approach by mining the spatial-temporal patterns and the structural sparsity embedded in motion data. We first replace the regularly used entire pose model with a much fine-grained partlet model as feature representation to exploit the abundant local body part posture and movement similarities. Then, a robust dictionary learning algorithm is proposed to learn multiple compact and representative motion dictionaries from the training data in parallel. Finally, we reformulate the human motion denoising problem as a robust structured sparse coding problem in which both the noise distribution information and the temporal smoothness property of human motion have been jointly taken into account. Compared with several state-of-the-art motion denoising methods on both the synthetic and real noisy motion data, our method consistently yields better performance than its counterparts. The outputs of our approach are much more stable than that of the others. In addition, it is much easier to setup the training dataset of our method than that of the other data-driven-based methods. PMID:25561602

  8. Wavelet-domain TI Wiener-like filtering for complex MR data denoising.

    PubMed

    Hu, Kai; Cheng, Qiaocui; Gao, Xieping

    2016-10-01

    Magnetic resonance (MR) images are affected by random noises, which degrade many image processing and analysis tasks. It has been shown that the noise in magnitude MR images follows a Rician distribution. Unlike additive Gaussian noise, this noise is signal-dependent, and consequently difficult to reduce, especially in low signal-to-noise ratio (SNR) images. Wirestam et al. in [20] proposed a Wiener-like filtering technique in the wavelet domain to reduce noise before construction of the magnitude MR image. Based on Wirestam's study, we propose a wavelet-domain translation-invariant (TI) Wiener-like filtering algorithm for noise reduction in complex MR data. The proposed denoising algorithm offers the following improvements over Wirestam's method: (1) we introduce the TI property into the Wiener-like filtering in the wavelet domain to suppress artifacts caused by translations of the signal; (2) we integrate a Stein's Unbiased Risk Estimator (SURE) thresholding step with two Wiener-like filters to make the hard-thresholding scale adaptive; and (3) the first Wiener-like filtering is used to filter the original noisy image, in which the noise obeys a Gaussian distribution, and it provides more reasonable results. The proposed algorithm is applied to denoise the real and imaginary parts of complex MR images. To evaluate our proposed algorithm, we conduct extensive denoising experiments using T1-weighted simulated MR images, diffusion-weighted (DW) phantom and in vivo data. We compare our algorithm with other popular denoising methods. The results demonstrate that our algorithm outperforms others in terms of both efficiency and robustness. PMID:27238055
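
    The translation-invariance ingredient can be emulated generically by cycle spinning: denoise shifted copies of the data and average the unshifted results. The sketch below assumes an arbitrary base filter denoise(img); it illustrates the TI idea, not the paper's wavelet-domain implementation.

      # Cycle spinning: approximate translation invariance for any filter.
      import numpy as np

      def cycle_spin_denoise(img, denoise, max_shift=4):
          acc = np.zeros_like(img, dtype=float)
          count = 0
          for dy in range(max_shift):
              for dx in range(max_shift):
                  shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
                  out = denoise(shifted)
                  acc += np.roll(np.roll(out, -dy, axis=0), -dx, axis=1)
                  count += 1
          return acc / count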

  9. Feasibility study of dose reduction in digital breast tomosynthesis using non-local denoising algorithms

    NASA Astrophysics Data System (ADS)

    Vieira, Marcelo A. C.; de Oliveira, Helder C. R.; Nunes, Polyana F.; Borges, Lucas R.; Bakic, Predrag R.; Barufaldi, Bruno; Acciavatti, Raymond J.; Maidment, Andrew D. A.

    2015-03-01

    The main purpose of this work is to study the ability of denoising algorithms to reduce the radiation dose in Digital Breast Tomosynthesis (DBT) examinations. Clinical use of DBT is normally performed in "combo-mode", in which, in addition to DBT projections, a 2D mammogram is taken with the standard radiation dose. As a result, patients have been exposed to radiation doses higher than those used in digital mammography. Thus, efforts to reduce the radiation dose in DBT examinations are of great interest. However, a decrease in dose leads to an increased quantum noise level and a related decrease in image quality. This work is aimed at addressing this problem by the use of denoising techniques, which could allow for dose reduction while keeping the image quality acceptable. We have studied two state-of-the-art denoising techniques for filtering the quantum noise due to the reduced dose in DBT projections: Non-local Means (NLM) and Block-matching 3D (BM3D). We acquired DBT projections at different dose levels of an anthropomorphic physical breast phantom with inserted simulated microcalcifications. Then, we found the optimal filtering parameters at which the denoising algorithms recover, from reduced-dose projections, the quality of DBT images acquired with the standard radiation dose. Results using objective image quality assessment metrics showed that the BM3D algorithm achieved better noise adjustment (mean difference in peak signal to noise ratio < 0.1 dB) and less blurring (mean difference in image sharpness ~ 6%) than NLM for the projections acquired with lower radiation doses.

  10. Locally optimized non-local means denoising for low-dose X-ray backscatter imagery.

    PubMed

    Tracey, Brian H; Miller, Eric L; Wu, Yue; Alvino, Christopher; Schiefele, Markus; Al-Kofahi, Omar

    2014-01-01

    While recent years have seen considerable progress in image denoising, the leading techniques have been developed for digital photographs or other images that can have very different characteristics than those encountered in X-ray applications. In particular here we examine X-ray backscatter (XBS) images collected by airport security systems, where images are piecewise smooth and edge information is typically more correlated with objects while texture is dominated by statistical noise in the detected signal. In this paper, we show how multiple estimates for a denoised XBS image can be combined using a variational approach, giving a solution that enhances edge contrast by trading off gradient penalties against data fidelity terms. We demonstrate the approach by combining several estimates made using the non-local means (NLM) algorithm, a widely used patch-based denoising method. The resulting improvements hold the potential for improving automated analysis of low-SNR X-ray imagery and can be applied in other applications where edge information is of interest. PMID:25265919

  11. Denoising of brain MRI images using modified PDE based on pixel similarity

    NASA Astrophysics Data System (ADS)

    Jin, Renchao; Song, Enmin; Zhang, Lijuan; Min, Zhifang; Xu, Xiangyang; Huang, Chih-Cheng

    2008-03-01

    Although various image denoising methods, such as PDE-based algorithms, have made remarkable progress in the past years, the trade-off between noise reduction and edge preservation is still an interesting and difficult problem in the field of image processing and analysis. A new image denoising algorithm, using a modified PDE model based on pixel similarity, is proposed to deal with this problem. The pixel similarity measures the similarity between two pixels, from which the neighboring consistency of the center pixel can be calculated. Informally, if a pixel is not consistent enough with its surrounding pixels, it can be considered as noise, but an extremely strong inconsistency suggests an edge. The pixel similarity is a probability measure; its value is between 0 and 1. According to the neighboring consistency of the pixel, a diffusion control factor can be determined by a simple thresholding rule. The factor is incorporated into the primary partial differential equation as an adjusting factor for controlling the speed of diffusion for different types of pixels. An evaluation of the proposed algorithm on simulated brain MRI images was carried out. The initial experimental results showed that the new algorithm smooths the MRI images more effectively while better preserving edges, and achieves a higher peak signal to noise ratio (PSNR), compared with several existing denoising algorithms.
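
    The diffusion scheme being modified is the classic Perona-Malik iteration, sketched below with a conventional edge-stopping conductance; the paper replaces this conductance with a factor derived from its pixel-similarity measure and thresholding rule, which we do not reproduce here. All parameter values are illustrative.

      # Perona-Malik-style diffusion with an edge-stopping conductance.
      import numpy as np

      def diffuse(img, n_iter=20, kappa=15.0, dt=0.2):
          u = img.astype(float)
          g = lambda d: np.exp(-(d / kappa) ** 2)   # conductance function
          for _ in range(n_iter):
              dN = np.roll(u, -1, axis=0) - u       # neighbor differences
              dS = np.roll(u, 1, axis=0) - u
              dE = np.roll(u, -1, axis=1) - u
              dW = np.roll(u, 1, axis=1) - u
              u += dt * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
          return u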

  12. AMA- and RWE- Based Adaptive Kalman Filter for Denoising Fiber Optic Gyroscope Drift Signal.

    PubMed

    Yang, Gongliu; Liu, Yuanyuan; Li, Ming; Song, Shunguang

    2015-01-01

    An improved double-factor adaptive Kalman filter called AMA-RWE-DFAKF is proposed to denoise the fiber optic gyroscope (FOG) drift signal in both static and dynamic conditions. The first factor is the Kalman gain, updated by random weighting estimation (RWE) of the covariance matrix of the innovation sequence at any time to ensure the lowest noise level of the output, although the inertia of the KF response increases in dynamic conditions. To decrease the inertia, the second factor is the covariance matrix of the predicted state vector, adjusted by RWE only when discontinuities are detected by an adaptive moving average (AMA). The AMA-RWE-DFAKF is applied to denoising FOG static and dynamic signals, and its performance is compared with the conventional KF (CKF), the RWE-based adaptive KF with gain correction (RWE-AKFG), and the AMA- and RWE-based dual mode adaptive KF (AMA-RWE-DMAKF). Results of Allan variance on the static signal and root mean square error (RMSE) on the dynamic signal show that the proposed algorithm outperforms all the considered methods in denoising the FOG signal. PMID:26512665
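
    For orientation, a scalar random-walk Kalman filter with a simple innovation-based re-estimation of the measurement noise is sketched below. It is a crude stand-in for the AMA/RWE machinery described above, and all tuning constants are illustrative.

      # Scalar Kalman filter with innovation-based noise adaptation.
      import numpy as np

      def adaptive_kf(z, q=1e-6, r0=1e-2, win=50):
          x, p, r = z[0], 1.0, r0
          innovations = []
          out = np.empty(len(z))
          for k, zk in enumerate(z):
              p += q                        # predict (random-walk state)
              nu = zk - x                   # innovation
              innovations.append(nu)
              if len(innovations) > win:
                  innovations.pop(0)
                  # Innovation variance satisfies var(nu) = P + R.
                  r = max(np.var(innovations) - p, 1e-8)
              kg = p / (p + r)              # Kalman gain
              x += kg * nu
              p *= 1.0 - kg
              out[k] = x
          return out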

  14. Self-adapting denoising, alignment and reconstruction in electron tomography in materials science.

    PubMed

    Printemps, Tony; Mula, Guido; Sette, Daniele; Bleuet, Pierre; Delaye, Vincent; Bernier, Nicolas; Grenier, Adeline; Audoit, Guillaume; Gambacorti, Narciso; Hervé, Lionel

    2016-01-01

    An automatic procedure for electron tomography is presented. This procedure is adapted for specimens that can be fashioned into a needle-shaped sample and has been evaluated on inorganic samples. It consists of self-adapting denoising, automatic and accurate alignment including detection and correction of tilt axis, and 3D reconstruction. We propose the exploitation of a large amount of information of an electron tomography acquisition to achieve robust and automatic mixed Poisson-Gaussian noise parameter estimation and denoising using undecimated wavelet transforms. The alignment is made by mixing three techniques, namely (i) cross-correlations between neighboring projections, (ii) common line algorithm to get a precise shift correction in the direction of the tilt axis and (iii) intermediate reconstructions to precisely determine the tilt axis and shift correction in the direction perpendicular to that axis. Mixing alignment techniques turns out to be very efficient and fast. Significant improvements are highlighted in both simulations and real data reconstructions of porous silicon in high angle annular dark field mode and agglomerated silver nanoparticles in incoherent bright field mode. 3D reconstructions obtained with minimal user-intervention present fewer artefacts and less noise, which permits easier and more reliable segmentation and quantitative analysis. After careful sample preparation and data acquisition, the denoising procedure, alignment and reconstruction can be achieved within an hour for a 3D volume of about a hundred million voxels, which is a step toward a more routine use of electron tomography. PMID:26413937

  15. Image Pretreatment Tools I: Algorithms for Map Denoising and Background Subtraction Methods.

    PubMed

    Cannistraci, Carlo Vittorio; Alessio, Massimo

    2016-01-01

    One of the critical steps in two-dimensional electrophoresis (2-DE) image pre-processing is denoising, which might aggressively affect either spot detection or pixel-based methods. The Median Modified Wiener Filter (MMWF), a new nonlinear adaptive spatial filter, proved to be a good denoising approach to use in practice with 2-DE. MMWF is suitable for global denoising and simultaneously removes spikes and Gaussian noise, with its best setting invariant to the type of noise. The second critical step arises from the fact that 2-DE gel images may contain high levels of background, generated by the laboratory experimental procedures, that must be subtracted for accurate measurements of the proteomic optical density signals. Here we discuss an efficient mathematical method for background estimation, which is suitable to work even before the 2-DE image spot detection, and is based on the 3D mathematical morphology (3DMM) theory. PMID:26611410

  16. A New Image Denoising Algorithm that Preserves Structures of Astronomical Data

    NASA Astrophysics Data System (ADS)

    Bressert, Eli; Edmonds, P.; Kowal Arcand, K.

    2007-05-01

    We have processed numerous x-ray data sets using several well-known algorithms, such as Gaussian and adaptive smoothing, for public image releases. These algorithms are used to denoise/smooth images while retaining the overall structure of observed objects. Recently, a new PDE-based algorithm and program, provided by Dr. David Tschumperle and referred to as GREYCstoration, has been tested and is in the process of being implemented by the Chandra EPO imaging group. Results of GREYCstoration will be presented and compared to the currently used methods for x-ray and multiple-wavelength images. What distinguishes Tschumperle's algorithm from those currently used by the EPO imaging group is its ability to strongly preserve the main structures of an image while reducing noise. In addition to denoising images, GREYCstoration can be used to erase artifacts accumulated during the observation and mosaicing stages. GREYCstoration produces results that are comparable to, and in some cases preferable over, those of the current denoising/smoothing algorithms. Results from our early stages of testing will provide insight into the algorithm's initial capabilities on multiple-wavelength astronomy data sets.

  17. Image Denoising With Edge-Preserving and Segmentation Based on Mask NHA.

    PubMed

    Hosotani, Fumitaka; Inuzuka, Yuya; Hasegawa, Masaya; Hirobayashi, Shigeki; Misawa, Tadanobu

    2015-12-01

    In this paper, we propose a zero-mean white Gaussian noise removal method using high-resolution frequency analysis. It is difficult to separate the original image component from the noise component using the discrete Fourier transform or the discrete cosine transform, because sidelobes occur in the results. The 2D non-harmonic analysis (2D NHA) is a high-resolution frequency analysis technique that improves noise-removal accuracy thanks to its sidelobe-reduction feature. However, spectra generated by NHA are distorted because the image signal is non-stationary. In this paper, we therefore analyze each region of homogeneous texture in the noisy image. The non-uniform regions produced by segmentation are analyzed with an extended 2D NHA method called Mask NHA. In an experiment on a simulated image, Mask NHA denoising attained a higher peak signal-to-noise ratio (PSNR) than state-of-the-art methods when a suitable segmentation of the input image was available, even though parameter optimization was incomplete. This experimental result exhibits the upper limit on the PSNR attainable by our Mask NHA denoising method; its performance is expected to approach this limit as the segmentation method improves. PMID:26513792

  18. Adaptive non-local means filtering based on local noise level for CT denoising

    NASA Astrophysics Data System (ADS)

    Li, Zhoubo; Yu, Lifeng; Trzasko, Joshua D.; Fletcher, Joel G.; McCollough, Cynthia H.; Manduca, Armando

    2012-03-01

    Radiation dose from CT scans is an increasing health concern in the practice of radiology. Higher-dose scans can produce clearer images with high diagnostic quality but may increase the potential risk of radiation-induced cancer or other side effects. Lowering the radiation dose alone generally produces a noisier image and may degrade diagnostic performance. Recently, CT dose reduction based on non-local means (NLM) filtering for noise reduction has yielded promising results. However, traditional NLM denoising operates under the assumption that image noise is spatially uniform, while in CT images the noise level varies significantly within and across slices. Therefore, applying NLM filtering to CT data with a global filtering strength cannot achieve optimal denoising performance. In this work, we have developed a technique for efficiently estimating the local noise level in CT images, and have modified the NLM algorithm to adapt to local variations in noise level. The local noise level estimation technique matches the true noise distribution determined from multiple repeated scans of a phantom object very well. The modified NLM algorithm provides more effective denoising of CT data throughout a volume, and may allow significant lowering of radiation dose. Both the noise map calculation and the adaptive NLM filtering can be performed in times that allow integration with the clinical workflow.
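The sketch below conveys the general idea of noise-adaptive NLM filtering, assuming a per-pixel noise map is already available; the patch and search sizes and the scaling of the filtering strength are illustrative choices, not the authors' implementation.

```python
# Sketch: non-local means with a spatially varying filtering strength
# h(x) proportional to a per-pixel noise-level map, instead of a single
# global h. Plain loops for clarity; slow but runnable.
import numpy as np

def adaptive_nlm(img, sigma_map, patch=3, search=7, h_scale=0.8):
    pad = search // 2 + patch // 2
    padded = np.pad(img.astype(float), pad, mode='reflect')
    out = np.zeros(img.shape, dtype=float)
    pr, sr = patch // 2, search // 2
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            ci, cj = i + pad, j + pad
            ref = padded[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
            # Local filtering strength tied to the local noise estimate.
            h2 = (h_scale * sigma_map[i, j]) ** 2 * patch * patch
            wsum, acc = 0.0, 0.0
            for di in range(-sr, sr + 1):
                for dj in range(-sr, sr + 1):
                    cand = padded[ci + di - pr:ci + di + pr + 1,
                                  cj + dj - pr:cj + dj + pr + 1]
                    d2 = np.sum((ref - cand) ** 2)
                    w = np.exp(-d2 / max(h2, 1e-12))
                    wsum += w
                    acc += w * padded[ci + di, cj + dj]
            out[i, j] = acc / wsum
    return out
```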

  19. The Bayes Inference Engine

    SciTech Connect

    Hanson, K.M.; Cunningham, G.S.

    1996-04-01

    The authors are developing a computer application, called the Bayes Inference Engine, to provide the means to make inferences about models of physical reality within a Bayesian framework. The construction of complex nonlinear models is achieved by a fully object-oriented design. The models are represented by a data-flow diagram that may be manipulated by the analyst through a graphical programming environment. Maximum a posteriori solutions are achieved using a general, gradient-based optimization algorithm. The application incorporates a new technique of estimating and visualizing the uncertainties in specific aspects of the model.

  20. Diagnostic accuracy of late iodine enhancement on cardiac computed tomography with a denoise filter for the evaluation of myocardial infarction.

    PubMed

    Matsuda, Takuya; Kido, Teruhito; Itoh, Toshihide; Saeki, Hideyuki; Shigemi, Susumu; Watanabe, Kouki; Kido, Tomoyuki; Aono, Shoji; Yamamoto, Masaya; Matsuda, Takeshi; Mochizuki, Teruhito

    2015-12-01

    We evaluated the image quality and diagnostic performance of late iodine enhancement (LIE) in dual-source computed tomography (DSCT) with low kilovoltage peak (kVp) images and a denoise filter for the detection of acute myocardial infarction (AMI), in comparison with late gadolinium enhancement (LGE) magnetic resonance imaging (MRI). The Hospital Ethics Committee approved the study protocol. Before discharge, 19 patients who received percutaneous coronary intervention after AMI underwent DSCT and 1.5 T MRI. Immediately after coronary computed tomography (CT) angiography, contrast medium was administered at a slow injection rate. LIE-CT scans were acquired via dual-energy CT and reconstructed as 100-kVp, 140-kVp, and mixed images. An iterative three-dimensional edge-preserving smoothing filter was applied to the 100-kVp images to obtain denoised 100-kVp images. The mixed, 140-kVp, 100-kVp, and denoised 100-kVp images were assessed using the contrast-to-noise ratio (CNR), and their diagnostic performance in comparison with MRI and their infarcted volumes were evaluated. Three hundred four segments of 19 patients were evaluated. Fifty-three segments showed LGE in MRI. The median CNR of the mixed, 140-kVp, 100-kVp, and denoised 100-kVp images was 3.49, 1.21, 3.57, and 6.08, respectively. The median CNR was significantly higher in the denoised 100-kVp images than in the other three image types (P < 0.05). The denoised 100-kVp images showed the highest diagnostic accuracy and sensitivity. The percentage of myocardium in the four CT image types was significantly correlated with the respective MRI findings. The use of a denoise filter with low-kVp images can improve CNR, sensitivity, and accuracy in LIE-CT. PMID:26202159
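For reference, the CNR figure of merit compared above is commonly computed as a difference of region means over the background noise standard deviation; the sketch below assumes that convention and schematic ROI arrays, and is not tied to this study's exact ROI definitions.

```python
# Sketch: contrast-to-noise ratio between an enhancing region and a
# remote (background) region, one common convention.
import numpy as np

def cnr(target_roi, remote_roi):
    return (np.mean(target_roi) - np.mean(remote_roi)) / np.std(remote_roi)
```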

  1. Wavelet-based denoising of the Fourier metric in real-time wavefront correction for single molecule localization microscopy

    NASA Astrophysics Data System (ADS)

    Tehrani, Kayvan Forouhesh; Mortensen, Luke J.; Kner, Peter

    2016-03-01

    Wavefront sensorless schemes for the correction of aberrations induced by biological specimens require a time-invariant property of an image as a measure of fitness. Image intensity cannot be used as a metric for Single Molecule Localization (SML) microscopy because the intensity of blinking fluorophores follows exponential statistics. Therefore a robust intensity-independent metric is required. We previously reported a Fourier Metric (FM) that is relatively intensity independent. The Fourier metric has been successfully tested with two machine learning algorithms, a Genetic Algorithm and Particle Swarm Optimization, for wavefront correction about 50 μm deep inside the Central Nervous System (CNS) of Drosophila. However, since the spatial frequencies that need to be optimized fall into regions of the Optical Transfer Function (OTF) that are more susceptible to noise, adding a level of denoising can improve performance. Here we present wavelet-based approaches to lower the noise level and produce a more consistent metric. We compare the performance of different wavelet families, such as Daubechies, biorthogonal, and reverse biorthogonal wavelets of different degrees and orders, for the pre-processing of images.

  2. Fuzzy logic-based approach to wavelet denoising of 3D images produced by time-of-flight cameras.

    PubMed

    Jovanov, Ljubomir; Pižurica, Aleksandra; Philips, Wilfried

    2010-10-25

    In this paper we present a new denoising method for the depth images of a 3D imaging sensor based on the time-of-flight principle. We propose novel ways to use the luminance-like information produced by a time-of-flight camera along with the depth images. Firstly, we propose a wavelet-based method for estimating the noise level in depth images, using luminance information. The underlying idea is that luminance carries information about the power of the optical signal reflected from the scene and is hence related to the signal-to-noise ratio for every pixel within the depth image. In this way, we can efficiently solve the difficult problem of estimating the non-stationary noise within the depth images. Secondly, we use luminance information to better restore object boundaries masked with noise in the depth images. Information from the luminance images is introduced into the estimation formula through the use of fuzzy membership functions. In particular, we take the correlation between the measured depth and luminance into account, and the fact that edges (object boundaries) present in the depth image are likely to occur in the luminance image as well. The results on real 3D images show a significant improvement over the state-of-the-art in the field. PMID:21164605
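A standard wavelet-domain noise estimator (Donoho's median-absolute-deviation rule) conveys the flavor of the first step; the actual method is luminance-guided and spatially varying, so the global estimate below is only a simplified stand-in.

```python
# Sketch: global noise-level estimate from the finest-scale diagonal
# wavelet coefficients, sigma = MAD / 0.6745 (Donoho's rule).
import numpy as np
import pywt

def estimate_sigma(img):
    _, (_, _, hh) = pywt.dwt2(img, 'db2')   # finest-scale diagonal band
    return np.median(np.abs(hh)) / 0.6745
```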

  3. PDE-based Non-Linear Diffusion Techniques for Denoising Scientific and Industrial Images: An Empirical Study

    SciTech Connect

    Weeratunga, S K; Kamath, C

    2001-12-20

    Removing noise from data is often the first step in data analysis. Denoising techniques should not only reduce the noise, but do so without blurring or changing the location of the edges. Many approaches have been proposed to accomplish this; in this paper, they focus on one such approach, namely the use of non-linear diffusion operators. This approach has been studied extensively from a theoretical viewpoint ever since the 1987 work of Perona and Malik showed that non-linear filters outperformed the more traditional linear Canny edge detector. They complement this theoretical work by investigating the performance of several isotropic diffusion operators on test images from scientific domains. They explore the effects of various parameters, such as the choice of diffusivity function, explicit and implicit methods for the discretization of the PDE, and approaches for the spatial discretization of the non-linear operator. They also compare these schemes with simple spatial filters and the more complex wavelet-based shrinkage techniques. The empirical results show that, with an appropriate choice of parameters, diffusion-based schemes can be as effective as competitive techniques.
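As a concrete example of the class of methods studied, here is a compact explicit-scheme Perona-Malik diffusion with the exponential diffusivity g(|∇I|) = exp(-(|∇I|/K)^2); the step size, edge threshold K, and iteration count are illustrative choices.

```python
# Sketch: explicit Perona-Malik anisotropic diffusion. np.roll gives
# periodic boundaries, which is adequate for a short illustration.
import numpy as np

def perona_malik(img, n_iter=20, K=15.0, dt=0.2):
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / K) ** 2)      # edge-stopping diffusivity
    for _ in range(n_iter):
        # Finite differences toward the four neighbors.
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```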

  4. Spectral and geographical variability in the oceanic response to atmospheric pressure fluctuations, as inferred from “dynamic barometer” Green's functions

    NASA Astrophysics Data System (ADS)

    Dey, N.; Dickman, S. R.

    2010-09-01

    A decade ago, a novel theoretical approach was developed (Dickman, 1998) for determining the dynamic response of the oceans to atmospheric pressure variations, a response nicknamed the "dynamic barometer" (DB), and the effects of that response on Earth's rotation. This approach employed a generalized, spherical harmonic ocean tide model to compute oceanic Green's functions, the oceans' fluid dynamic response to unit-amplitude pressure forcing on various spatial and temporal scales, and then construct rotational Green's functions, representing the rotational effects of that response. When combined with the observed atmospheric pressure field, the rotational Green's functions would yield the effects of the DB on Earth's rotation. The Green's functions reflect in some way the geographical and spectral sensitivity of the oceans to atmospheric pressure forcing. We have formulated a measure of that sensitivity using a simple combination of rotational Green's functions. We find that the DB response of the oceans to atmospheric pressure forcing depends significantly on geographic location and on frequency. Compared to the inverted barometer (IB) (the traditional static model), the DB effects differ slightly at long periods but become very different at shorter periods. Among all the responses, the prograde polar motion effects are the most dynamic, with large portions of the North Atlantic and some of the North Pacific no larger than one third of IB, but most of the Southern Hemisphere oceans at least 50% greater than IB.

  5. INFERENCES FROM ROSSI TRACES

    SciTech Connect

    KENNETH M. HANSON; JANE M. BOOKER

    2000-09-08

    The authors present an uncertainty analysis of data taken using the Rossi technique, in which the horizontal oscilloscope sweep is driven sinusoidally in time while the vertical axis follows the signal amplitude. The analysis is done within a Bayesian framework. Complete inferences are obtained by utilizing the Markov chain Monte Carlo technique, which produces random samples from the posterior probability distribution expressed in terms of the parameters.

  6. Active inference and learning.

    PubMed

    Friston, Karl; FitzGerald, Thomas; Rigoli, Francesco; Schwartenbeck, Philipp; O'Doherty, John; Pezzulo, Giovanni

    2016-09-01

    This paper offers an active inference account of choice behaviour and learning. It focuses on the distinction between goal-directed and habitual behaviour and how they contextualise each other. We show that habits emerge naturally (and autodidactically) from sequential policy optimisation when agents are equipped with state-action policies. In active inference, behaviour has explorative (epistemic) and exploitative (pragmatic) aspects that are sensitive to ambiguity and risk respectively, where epistemic (ambiguity-resolving) behaviour enables pragmatic (reward-seeking) behaviour and the subsequent emergence of habits. Although goal-directed and habitual policies are usually associated with model-based and model-free schemes, we find the more important distinction is between belief-free and belief-based schemes. The underlying (variational) belief updating provides a comprehensive (if metaphorical) process theory for several phenomena, including the transfer of dopamine responses, reversal learning, habit formation and devaluation. Finally, we show that active inference reduces to a classical (Bellman) scheme, in the absence of ambiguity. PMID:27375276

  7. Scene Construction, Visual Foraging, and Active Inference

    PubMed Central

    Mirza, M. Berk; Adams, Rick A.; Mathys, Christoph D.; Friston, Karl J.

    2016-01-01

    This paper describes an active inference scheme for visual searches and the perceptual synthesis entailed by scene construction. Active inference assumes that perception and action minimize variational free energy, where actions are selected to minimize the free energy expected in the future. This assumption generalizes risk-sensitive control and expected utility theory to include epistemic value; namely, the value (or salience) of information inherent in resolving uncertainty about the causes of ambiguous cues or outcomes. Here, we apply active inference to saccadic searches of a visual scene. We consider the (difficult) problem of categorizing a scene, based on the spatial relationship among visual objects where, crucially, visual cues are sampled myopically through a sequence of saccadic eye movements. This means that evidence for competing hypotheses about the scene has to be accumulated sequentially, calling upon both prediction (planning) and postdiction (memory). Our aim is to highlight some simple but fundamental aspects of the requisite functional anatomy; namely, the link between approximate Bayesian inference under mean field assumptions and functional segregation in the visual cortex. This link rests upon the (neurobiologically plausible) process theory that accompanies the normative formulation of active inference for Markov decision processes. In future work, we hope to use this scheme to model empirical saccadic searches and identify the prior beliefs that underwrite intersubject variability in the way people forage for information in visual scenes (e.g., in schizophrenia). PMID:27378899

  8. Scene Construction, Visual Foraging, and Active Inference.

    PubMed

    Mirza, M Berk; Adams, Rick A; Mathys, Christoph D; Friston, Karl J

    2016-01-01

    This paper describes an active inference scheme for visual searches and the perceptual synthesis entailed by scene construction. Active inference assumes that perception and action minimize variational free energy, where actions are selected to minimize the free energy expected in the future. This assumption generalizes risk-sensitive control and expected utility theory to include epistemic value; namely, the value (or salience) of information inherent in resolving uncertainty about the causes of ambiguous cues or outcomes. Here, we apply active inference to saccadic searches of a visual scene. We consider the (difficult) problem of categorizing a scene, based on the spatial relationship among visual objects where, crucially, visual cues are sampled myopically through a sequence of saccadic eye movements. This means that evidence for competing hypotheses about the scene has to be accumulated sequentially, calling upon both prediction (planning) and postdiction (memory). Our aim is to highlight some simple but fundamental aspects of the requisite functional anatomy; namely, the link between approximate Bayesian inference under mean field assumptions and functional segregation in the visual cortex. This link rests upon the (neurobiologically plausible) process theory that accompanies the normative formulation of active inference for Markov decision processes. In future work, we hope to use this scheme to model empirical saccadic searches and identify the prior beliefs that underwrite intersubject variability in the way people forage for information in visual scenes (e.g., in schizophrenia). PMID:27378899

  9. Radiation dose reduction in computed tomography (CT) using a new implementation of wavelet denoising in low tube current acquisitions

    NASA Astrophysics Data System (ADS)

    Tao, Yinghua; Brunner, Stephen; Tang, Jie; Speidel, Michael; Rowley, Howard; VanLysel, Michael; Chen, Guang-Hong

    2011-03-01

    Radiation dose reduction remains at the forefront of research in computed tomography. X-ray tube parameters such as tube current can be lowered to reduce dose; however, images become prohibitively noisy when the tube current is too low. Wavelet denoising is one of many noise reduction techniques. However, traditional wavelet techniques tend to create an artificial noise texture, due to the nonuniform denoising across the image, which is undesirable from a diagnostic perspective. This work presents a new implementation of wavelet denoising that achieves noise reduction while still preserving spatial resolution. Further, the proposed method has the potential to improve those unnatural noise textures. The technique was tested on both phantom and animal datasets (a Catphan phantom and a time-resolved swine heart scan) acquired on a GE Discovery VCT scanner. A number of tube currents were used to investigate the potential for dose reduction.
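For context, a generic wavelet soft-thresholding baseline of the kind this work improves on looks like the sketch below; the single universal threshold applied uniformly across the image is precisely the kind of choice that produces the artificial noise texture discussed above.

```python
# Sketch: baseline wavelet denoising with a universal soft threshold
# (VisuShrink-style); wavelet family and level are illustrative.
import numpy as np
import pywt

def wavelet_denoise(img, wavelet='db4', level=3):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # Noise estimate from the finest diagonal subband (MAD rule).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    t = sigma * np.sqrt(2 * np.log(img.size))        # universal threshold
    new = [coeffs[0]] + [tuple(pywt.threshold(c, t, mode='soft')
                               for c in lvl) for lvl in coeffs[1:]]
    # Crop in case the reconstruction is padded for odd sizes.
    return pywt.waverec2(new, wavelet)[:img.shape[0], :img.shape[1]]
```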

  10. SVD and Hankel matrix based de-noising approach for ball bearing fault detection and its assessment using artificial faults

    NASA Astrophysics Data System (ADS)

    Golafshan, Reza; Yuce Sanliturk, Kenan

    2016-03-01

    Ball bearings remain one of the most crucial components in industrial machines, and due to their critical role, it is of great importance to monitor their condition under operation. However, due to the background noise in acquired signals, it is not always possible to identify probable faults. This incapability makes the de-noising process one of the most essential steps in the field of Condition Monitoring (CM) and fault detection. In the present study, a Singular Value Decomposition (SVD) and Hankel matrix based de-noising process is successfully applied to ball bearing time-domain vibration signals, as well as to their spectra, to eliminate the background noise and improve the reliability of the fault detection process. Test cases conducted using experimental as well as simulated vibration signals demonstrate the effectiveness of the proposed de-noising approach for ball bearing fault detection.
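A minimal sketch of the SVD/Hankel idea: embed the 1-D signal in a Hankel matrix, keep the dominant singular values, and return to a signal by anti-diagonal averaging. The embedding dimension and the rank are illustrative assumptions, not the paper's tuned values.

```python
# Sketch: rank-truncated SVD de-noising via a Hankel embedding of a
# 1-D vibration signal. Requires NumPy >= 1.20 for sliding_window_view.
import numpy as np

def hankel_svd_denoise(x, window=None, rank=4):
    x = np.asarray(x, dtype=float)
    n = len(x)
    window = window or n // 2
    rows = n - window + 1
    # Rows are x[i:i+window], i.e., a Hankel-structured matrix.
    H = np.lib.stride_tricks.sliding_window_view(x, window)
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hk = (U[:, :rank] * s[:rank]) @ Vt[:rank]        # low-rank approximation
    # Average along anti-diagonals to recover a 1-D signal.
    out = np.zeros(n)
    counts = np.zeros(n)
    for i in range(rows):
        out[i:i + window] += Hk[i]
        counts[i:i + window] += 1
    return out / counts
```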

  11. Performance evaluation and optimization of BM4D-AV denoising algorithm for cone-beam CT images

    NASA Astrophysics Data System (ADS)

    Huang, Kuidong; Tian, Xiaofei; Zhang, Dinghua; Zhang, Hua

    2015-12-01

    The broadening application of cone-beam computed tomography (CBCT) in medical diagnostics and nondestructive testing necessitates advanced denoising algorithms for its 3D images. The block-matching and four-dimensional filtering algorithm with adaptive variance (BM4D-AV) is applied to 3D image denoising in this research. To optimize it, the key filtering parameters of the BM4D-AV algorithm are first assessed on simulated CBCT images, and a table of optimized filtering parameters is obtained. Then, considering the complexity of the noise in realistic CBCT images, possible noise standard deviations in BM4D-AV are evaluated to establish the selection principle for realistic denoising. The results of the corresponding experiments demonstrate that the BM4D-AV algorithm with optimized parameters yields an excellent denoising effect on realistic 3D CBCT images.

  12. Towards Context Sensitive Information Inference.

    ERIC Educational Resources Information Center

    Song, D.; Bruza, P. D.

    2003-01-01

    Discusses information inference from a psychologistic stance and proposes an information inference mechanism that makes inferences via computations of information flow through an approximation of a conceptual space. Highlights include cognitive economics of information processing; context sensitivity; and query models for information retrieval.…

  13. Scattering Properties of Jovian Tropospheric Cloud Particles Inferred from Cassini/ISS: Mie Scattering Phase Function and Particle Size in South Tropical Zone III

    NASA Astrophysics Data System (ADS)

    Sato, T.; Satoh, T.; Kasaba, Y.

    2010-12-01

    Three distinct cloud layers are predicted by an equilibrium cloud condensation model (ECCM) of Jupiter. An ammonia ice (NH3) cloud, an ammonia hydrosulfide (NH4SH) cloud, and a water ice (H2O) cloud would be based at altitudes corresponding to pressures of about 0.7, 2.2, and 6 bars, respectively. However, there are significant gaps in our knowledge of the vertical cloud structure, despite continuing effort by numerous ground-based, space-based, and in-situ observations and theory. Methane (CH4) is considered to have a globally uniform altitude distribution because it does not condense in the Jovian atmosphere. Therefore, it is possible to derive the vertical cloud structure and the optical properties of clouds (i.e., optical thickness and single scattering albedo) by observing reflected sunlight in CH4 bands (727, 890 nm) and in the continuum over visible to near-infrared spectral ranges. Since we need to consider multiple scattering by clouds, it is essential to know the scattering properties (e.g., the scattering phase function) of clouds for the determination of the vertical cloud structure. However, we cannot derive those from ground-based and Earth-orbit observations because of the limited range of solar phase angles viewable from the Earth. Most previous studies have therefore used the scattering phase function deduced from the Pioneer 10/IPP data (blue: 440 nm, red: 640 nm) [Tomasko et al., 1978]. There are two shortcomings in the Pioneer scattering phase function. One is that the red-band phase function has to be used as a substitute in analyses of imaging photometry in the CH4 bands (centered at 727 and 890 nm), although cloud properties should depend on wavelength. The other is that the red pass band of IPP was so broad (595-720 nm) that this phase function shows only wavelength-averaged scattering properties of clouds. To provide a new reference scattering phase function with wavelength dependency, we have analyzed the Cassini/ISS data in BL1 (451 nm), CB1 (619

  14. Automatic brain MR image denoising based on texture feature-based artificial neural networks.

    PubMed

    Chang, Yu-Ning; Chang, Herng-Hua

    2015-01-01

    Noise is one of the main sources of quality deterioration, not only for visual inspection but also for computerized processing in brain magnetic resonance (MR) image analysis, such as tissue classification, segmentation, and registration. Accordingly, noise removal in brain MR images is important for a wide variety of subsequent processing applications. However, most existing denoising algorithms require laborious tuning of parameters that are often sensitive to specific image features and textures. Automating these parameters through artificial intelligence techniques would be highly beneficial. In the present study, an artificial neural network associated with image texture feature analysis is proposed to establish a predictable parameter model and automate the denoising procedure. In the proposed approach, a total of 83 image attributes were extracted in four categories: (1) basic image statistics, (2) gray-level co-occurrence matrix (GLCM), (3) gray-level run-length matrix (GLRLM), and (4) Tamura texture features. To obtain a ranking of the discriminative power of these texture features, a paired-samples t-test was applied to each individual image feature computed in every image. Subsequently, the sequential forward selection (SFS) method was used to select the best texture features according to this ranking. The selected optimal features were then incorporated into a back-propagation neural network to establish the predictable parameter model. A wide variety of MR images with various scenarios were adopted to evaluate the performance of the proposed framework. Experimental results indicated that the new automated system accurately predicted the bilateral filtering parameters and effectively removed the noise in a number of MR images. Compared to a manually tuned filtering process, our approach not only produced better denoised results but also saved significant processing time. PMID:26405887

  15. Real-time wavelet denoising with edge enhancement for medical x-ray imaging

    NASA Astrophysics Data System (ADS)

    Luo, Gaoyong; Osypiw, David; Hudson, Chris

    2006-02-01

    X-ray images visualized in real time play an important role in clinical applications. Real-time system design requires that images with the highest perceptual quality be acquired while minimizing the x-ray dose to the patient, which can result in severe noise that must be reduced. Approaches based on the wavelet transform have been widely used for noise reduction. However, by removing noise, high-frequency components belonging to edges, which hold important structural information of an image, are also removed, leading to blurred features. This paper presents a new method of x-ray image denoising based on fast lifting wavelet thresholding for general noise reduction, combined with spatial filtering for further denoising that uses a derivative model to preserve edges. General denoising is achieved by estimating the level of the contaminating noise and employing an adaptive thresholding scheme with variance analysis. The soft thresholding scheme removes the overall noise, including noise attached to edges. A new edge identification method using an approximation of the spatial gradient at each pixel location is developed, together with a spatial filter that smooths noise in homogeneous areas while preserving important structures. Fine noise reduction is applied only to the non-edge parts, such that edges are preserved and enhanced. Experimental results demonstrate that the method performs well, both visually and in terms of quantitative performance measures, on clinical x-ray images contaminated by natural and artificial noise. The proposed algorithm, with fast computation and low complexity, provides a potential solution for real-time applications.

  16. 2-D Continuous Wavelet Transform for ESPI phase-maps denoising

    NASA Astrophysics Data System (ADS)

    Escalante, Nivia; Villa, Jesús; de la Rosa, Ismael; de la Rosa, Enrique; González-Ramírez, Efrén; Gutiérrez, Osvaldo; Olvera, Carlos; Araiza, María

    2013-09-01

    In this work we introduce a 2-D Continuous Wavelet Transform (2-D CWT) method for denoising ESPI phase maps. Multiresolution analysis with 2-D wavelets can provide high directional sensitivity and high anisotropy, characteristics well suited to this task. In particular, the 2-D CWT method using Gabor atoms (Gabor mother wavelets), which naturally model phase fringes, performs well against noise and preserves the phase fringes. We describe the theoretical basis of the proposed technique and show experimental results with real and simulated ESPI phase maps. As the results verify, the proposed method is robust and effective.

  17. Enhanced optical coherence tomography imaging using a histogram-based denoising algorithm

    NASA Astrophysics Data System (ADS)

    Kim, Keo-Sik; Park, Hyoung-Jun; Kang, Hyun Seo

    2015-11-01

    A histogram-based denoising algorithm was developed to effectively reduce ghost-artifact noise and enhance the quality of an optical coherence tomography (OCT) imaging system used to guide surgical instruments. The noise signal is iteratively detected by comparing the histogram of the ensemble average of all A-scans, and the ghost artifacts included in the noisy signal are removed separately from the raw signals using a polynomial curve-fitting method. The devised algorithm was simulated with various noisy OCT images, and >87% of the ghost-artifact noise was removed regardless of its location. Our results show the feasibility of selectively and effectively removing ghost-artifact noise.

  18. Non Local Spatial and Angular Matching: Enabling higher spatial resolution diffusion MRI datasets through adaptive denoising.

    PubMed

    St-Jean, Samuel; Coupé, Pierrick; Descoteaux, Maxime

    2016-08-01

    Diffusion magnetic resonance imaging (MRI) datasets suffer from low Signal-to-Noise Ratio (SNR), especially at high b-values. Data acquired at high b-values contain relevant information and are now of great interest for microstructural and connectomics studies. High noise levels bias the measurements due to the non-Gaussian nature of the noise, which in turn can lead to a false and biased estimation of the diffusion parameters. Additionally, the use of in-plane acceleration techniques during the acquisition leads to a spatially varying noise distribution, which depends on the parallel acceleration method implemented on the scanner. This paper proposes a novel diffusion MRI denoising technique that can be used on all existing data, without adding to the scanning time. We first apply a statistical framework to convert both stationary and non-stationary Rician and noncentral chi distributed noise to Gaussian distributed noise, effectively removing the bias. We then introduce a spatially and angularly adaptive denoising technique, the Non Local Spatial and Angular Matching (NLSAM) algorithm. Each volume is first decomposed into small 4D overlapping patches, thus capturing the spatial and angular structure of the diffusion data, and a dictionary of atoms is learned on those patches. A local sparse decomposition is then found by bounding the reconstruction error with the local noise variance. We compare against three other state-of-the-art denoising methods and show quantitative local and connectivity results on a synthetic phantom and on an in-vivo high-resolution dataset. Overall, our method restores perceptual information, removes the noise bias in common diffusion metrics, restores the coherence of the extracted peaks, and improves the reproducibility of tractography on the synthetic dataset. On the 1.2 mm high-resolution in-vivo dataset, our denoising improves the visual quality of the data and reduces the number of spurious tracts when compared to the noisy acquisition. Our

  19. Denoising of human speech using combined acoustic and em sensor signal processing

    SciTech Connect

    Ng, L C; Burnett, G C; Holzrichter, J F; Gable, T J

    1999-11-29

    Low-power EM radar-like sensors have made it possible to measure properties of the human speech production system in real time, without acoustic interference. This greatly enhances the quality and quantity of information for many speech-related applications. See Holzrichter, Burnett, Ng, and Lea, J. Acoust. Soc. Am. 103 (1) 622 (1998). By using combined glottal-EM-sensor and acoustic signals, segments of voiced, unvoiced, and no-speech can be reliably defined. Real-time denoising filters can be constructed to remove noise from the user's corresponding speech signal.

  20. Image denoising with 2D scale-mixing complex wavelet transforms.

    PubMed

    Remenyi, Norbert; Nicolis, Orietta; Nason, Guy; Vidakovic, Brani

    2014-12-01

    This paper introduces an image denoising procedure based on a 2D scale-mixing complex-valued wavelet transform. Both the minimal (unitary) and redundant (maximum overlap) versions of the transform are used. The covariance structure of white noise in wavelet domain is established. Estimation is performed via empirical Bayesian techniques, including versions that preserve the phase of the complex-valued wavelet coefficients and those that do not. The new procedure exhibits excellent quantitative and visual performance, which is demonstrated by simulation on standard test images. PMID:25312931

  1. A Method for Ventricular Late Potentials Detection Using Time-Frequency Representation and Wavelet Denoising

    PubMed Central

    Gadaleta, Matteo; Giorgio, Agostino

    2012-01-01

    This study proposes a method for ventricular late potential (VLP) detection using time-frequency representation and wavelet denoising in high-resolution electrocardiography (HRECG). The analysis is performed both with signal-averaged electrocardiography (SAECG) and in real time. A comparison between the temporal and the time-frequency analysis is also reported. In the first analysis the standard parameters QRSd, LAS40, and RMS40 were used; in the second, the normalized energy in the time-frequency domain was calculated. The algorithm was tested by adding artificial VLPs to real ECGs. PMID:22957271

  2. Modeling diffusion-weighted MRI as a spatially variant Gaussian mixture: Application to image denoising

    PubMed Central

    Gonzalez, Juan Eugenio Iglesias; Thompson, Paul M.; Zhao, Aishan; Tu, Zhuowen

    2011-01-01

    Purpose: This work describes a spatially variant mixture model constrained by a Markov random field to model high angular resolution diffusion imaging (HARDI) data. Mixture models suit HARDI well because the attenuation by diffusion is inherently a mixture. The goal is to create a general model that can be used in different applications. This study focuses on image denoising and segmentation (primarily the former). Methods: HARDI signal attenuation data are used to train a Gaussian mixture model in which the mean vectors and covariance matrices are assumed to be independent of spatial locations, whereas the mixture weights are allowed to vary at different lattice positions. Spatial smoothness of the data is ensured by imposing a Markov random field prior on the mixture weights. The model is trained in an unsupervised fashion using the expectation maximization algorithm. The number of mixture components is determined using the minimum message length criterion from information theory. Once the model has been trained, it can be fitted to a noisy diffusion MRI volume by maximizing the posterior probability of the underlying noiseless data in a Bayesian framework, recovering a denoised version of the image. Moreover, the fitted probability maps of the mixture components can be used as features for posterior image segmentation. Results: The model-based denoising algorithm proposed here was compared on real data with three other approaches that are commonly used in the literature: Gaussian filtering, anisotropic diffusion, and Rician-adapted nonlocal means. The comparison shows that, at low signal-to-noise ratio, when these methods falter, our algorithm considerably outperforms them. When tractography is performed on the model-fitted data rather than on the noisy measurements, the quality of the output improves substantially. Finally, ventricle and caudate nucleus segmentation experiments also show the potential usefulness of the mixture probability maps for

  3. A new denoising method in high-dimensional PCA-space

    NASA Astrophysics Data System (ADS)

    Do, Quoc Bao; Beghdadi, Azeddine; Luong, Marie

    2012-03-01

    Kernel-design-based methods such as the bilateral filter (BIL) and the non-local means (NLM) filter are known as some of the most attractive approaches for denoising. We propose in this paper a new noise filtering method inspired by BIL, the NLM filter, and principal component analysis (PCA). The main idea is to perform BIL in a multidimensional PCA space using an anisotropic kernel. The filtered multidimensional signal is then transformed back into the image spatial domain to yield the desired enhanced image. In this work, it is demonstrated that the proposed method is a generalization of kernel-design-based methods. The obtained results are highly promising.
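For orientation, the plain bilateral filter that serves as the starting point is sketched below; the paper's extension to a multidimensional PCA space with an anisotropic kernel is not shown, and the sigma values and window radius here are illustrative.

```python
# Sketch: classical bilateral filter combining a spatial Gaussian and a
# range (intensity) Gaussian. np.roll wraps at the borders, which is
# acceptable for a short illustration.
import numpy as np

def bilateral(img, sigma_s=2.0, sigma_r=0.1, radius=3):
    img = img.astype(float)
    out = np.zeros_like(img)
    norm = np.zeros_like(img)
    for di in range(-radius, radius + 1):
        for dj in range(-radius, radius + 1):
            shifted = np.roll(np.roll(img, di, axis=0), dj, axis=1)
            w = (np.exp(-(di * di + dj * dj) / (2 * sigma_s ** 2))
                 * np.exp(-((shifted - img) ** 2) / (2 * sigma_r ** 2)))
            out += w * shifted
            norm += w
    return out / norm
```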

  4. Gene-network inference by message passing

    NASA Astrophysics Data System (ADS)

    Braunstein, A.; Pagnani, A.; Weigt, M.; Zecchina, R.

    2008-01-01

    The inference of gene-regulatory processes from gene-expression data belongs to the major challenges of computational systems biology. Here we address the problem from a statistical-physics perspective and develop a message-passing algorithm which is able to infer sparse, directed and combinatorial regulatory mechanisms. Using the replica technique, the algorithmic performance can be characterized analytically for artificially generated data. The algorithm is applied to genome-wide expression data of baker's yeast under various environmental conditions. We find clear cases of combinatorial control, and enrichment in common functional annotations of regulated genes and their regulators.

  5. Mental state inference using visual control parameters.

    PubMed

    Oztop, Erhan; Wolpert, Daniel; Kawato, Mitsuo

    2005-02-01

    Although we can often infer the mental states of others by observing their actions, there are currently no computational models of this remarkable ability. Here we develop a computational model of mental state inference that builds upon a generic visuomanual feedback controller and implements mental simulation and mental state inference functions using circuitry that subserves sensorimotor control. Our goals are (1) to show that control mechanisms developed for manual manipulation are readily endowed with visual and predictive processing capabilities, thus allowing a natural extension to the understanding of movements performed by others; and (2) to explain how cortical regions, in particular the parietal and premotor cortices, may be involved in such a dual mechanism. To analyze the model, we simulate tasks in which an observer watches an actor performing either a reaching or a grasping movement. The observer's goal is to estimate the 'mental state' of the actor: the goal of the reaching movement or the intention of the agent performing the grasping movement. We show that the motor modules of the observer can be used in a 'simulation mode' to infer the mental state of the actor. Simulations with different grasping and non-straight-line reaching strategies show that the mental state inference model is applicable to complex movements. Moreover, we simulate deceptive reaching, where an actor imposes false beliefs about his own mental state on an observer. The simulations show that computational elements developed for sensorimotor control are effective in inferring the mental states of others. The parallels between the model and the cortical organization of movement suggest that primates might have developed a similar resource utilization strategy for action understanding, leading to testable predictions about the brain mechanisms of mental state inference. PMID:15653289

  6. Functional interactions between OCA2 and the protein complexes BLOC-1, BLOC-2, and AP-3 inferred from epistatic analyses of mouse coat pigmentation.

    PubMed

    Hoyle, Diego J; Rodriguez-Fernandez, Imilce A; Dell'angelica, Esteban C

    2011-04-01

    The biogenesis of melanosomes is a multistage process that requires the function of cell-type-specific and ubiquitously expressed proteins. OCA2, the product of the gene defective in oculocutaneous albinism type 2, is a melanosomal membrane protein with restricted expression pattern and a potential role in the trafficking of other proteins to melanosomes. The ubiquitous protein complexes AP-3, BLOC-1, and BLOC-2, which contain as subunits the products of genes defective in various types of Hermansky-Pudlak syndrome, have been likewise implicated in trafficking to melanosomes. We have tested for genetic interactions between mutant alleles causing deficiency in OCA2 (pink-eyed dilution unstable), AP-3 (pearl), BLOC-1 (pallid), and BLOC-2 (cocoa) in C57BL/6J mice. The pallid allele was epistatic to pink-eyed dilution, and the latter behaved as a semi-dominant phenotypic enhancer of cocoa and, to a lesser extent, of pearl. These observations suggest functional links between OCA2 and these three protein complexes involved in melanosome biogenesis. PMID:21392365

  7. Visual Inference Programming

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin; Timucin, Dogan; Rabbette, Maura; Curry, Charles; Allan, Mark; Lvov, Nikolay; Clanton, Sam; Pilewskie, Peter

    2002-01-01

    The goal of visual inference programming is to develop a software framework for data analysis and to provide machine learning algorithms for interactive data exploration and visualization. The topics include: 1) Intelligent Data Understanding (IDU) framework; 2) Challenge problems; 3) What's new here; 4) Framework features; 5) Wiring diagram; 6) Generated script; 7) Results of script; 8) Initial algorithms; 9) Independent Component Analysis for instrument diagnosis; 10) Output sensory mapping virtual joystick; 11) Output sensory mapping typing; 12) Closed-loop feedback mu-rhythm control; 13) Closed-loop training; 14) Data sources; and 15) Algorithms. This paper is in viewgraph form.

  8. THE ABUNDANCES OF HYDROCARBON FUNCTIONAL GROUPS IN THE INTERSTELLAR MEDIUM INFERRED FROM LABORATORY SPECTRA OF HYDROGENATED AND METHYLATED POLYCYCLIC AROMATIC HYDROCARBONS

    SciTech Connect

    Steglich, M.; Jäger, C.; Huisken, F.; Friedrich, M.; Plass, W.; Räder, H.-J.; Müllen, K.; Henning, Th.

    2013-10-01

    Infrared (IR) absorption spectra of individual polycyclic aromatic hydrocarbons (PAHs) containing methyl (-CH3), methylene (CH2), or diamond-like CH groups, and IR spectra of mixtures of methylated and hydrogenated PAHs prepared by gas-phase condensation, were measured at room temperature (as grains in pellets) and at low temperature (isolated in Ne matrices). In addition, the PAH blends were subjected to an in-depth molecular structure analysis by means of high-performance liquid chromatography, nuclear magnetic resonance spectroscopy, and matrix-assisted laser desorption/ionization time-of-flight mass spectrometry. Supported by calculations at the density functional theory level, the laboratory results were applied to analyze in detail the aliphatic absorption complex of the diffuse interstellar medium at 3.4 μm and to determine the abundances of hydrocarbon functional groups. Assuming that the PAHs are mainly locked in grains, aliphatic CHx groups (x = 1, 2, 3) would contribute approximately in equal quantities to the 3.4 μm feature (N_CHx/N_H ≈ 10^-5 to 2 × 10^-5). The abundances, however, may be two to four times lower if a major contribution to the 3.4 μm feature comes from molecules in the gas phase. Aromatic =CH groups seem to be almost absent from some lines of sight, but can be nearly as abundant as each of the aliphatic components in other directions (N_=CH/N_H ≲ 2 × 10^-5; upper value for grains). Due to comparatively low binding energies, astronomical IR emission sources do not display such heavy excess hydrogenation. At best, especially in protoplanetary nebulae, CH2 groups bound to aromatic molecules, i.e., excess hydrogens on the molecular periphery only, can survive the presence of a nearby star.

  9. Source mechanism of long-period events at Kusatsu-Shirane Volcano, Japan, inferred from waveform inversion of the effective excitation functions

    USGS Publications Warehouse

    Nakano, M.; Kumagai, H.; Chouet, B.A.

    2003-01-01

    We investigate the source mechanism of long-period (LP) events observed at Kusatsu-Shirane Volcano, Japan, based on waveform inversions of their effective excitation functions. The effective excitation function, which represents the apparent excitation observed at individual receivers, is estimated by applying an autoregressive filter to the LP waveform. Assuming a point source, we apply this method to seven LP events whose waveforms are characterized by simply decaying and nearly monochromatic oscillations with frequencies in the range 1-3 Hz. The results of the waveform inversions show dominant volumetric change components accompanied by single force components, common to all the events analyzed, suggesting a repeated activation of a sub-horizontal crack located 300 m beneath the summit crater lakes. Based on these results, we propose a model of the source process of LP seismicity, in which a gradual buildup of steam pressure in a hydrothermal crack in response to magmatic heat causes repeated discharges of steam from the crack. The rapid discharge of fluid causes the collapse of the fluid-filled crack and excites acoustic oscillations of the crack, which produce the characteristic waveforms observed in the LP events. The presence of a single force synchronous with the collapse of the crack is interpreted as the release of gravitational energy that occurs as the slug of steam ejected from the crack ascends toward the surface and is replaced by cooler water flowing downward in a fluid-filled conduit linking the crack and the base of the crater lake.

  10. Parameter inference with estimated covariance matrices

    NASA Astrophysics Data System (ADS)

    Sellentin, Elena; Heavens, Alan F.

    2016-02-01

    When inferring parameters from a Gaussian-distributed data set by computing a likelihood, a covariance matrix is needed that describes the data errors and their correlations. If the covariance matrix is not known a priori, it may be estimated and thereby becomes a random object with some intrinsic uncertainty itself. We show how to infer parameters in the presence of such an estimated covariance matrix, by marginalizing over the true covariance matrix, conditioned on its estimated value. This leads to a likelihood function that is no longer Gaussian, but rather an adapted version of a multivariate t-distribution, which has the same numerical complexity as the multivariate Gaussian. As expected, marginalization over the true covariance matrix improves inference when compared with Hartlap et al.'s method, which uses an unbiased estimate of the inverse covariance matrix but still assumes that the likelihood is Gaussian.
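Up to normalization, the modified likelihood described here has a closed form that is as cheap to evaluate as the Gaussian one; the sketch below assumes a residual vector (data minus model), a covariance matrix estimated from n_sims independent simulations, and omits parameter-independent constant terms.

```python
# Sketch: log-likelihood for an estimated covariance matrix, following
# the multivariate-t-like form L ∝ (1 + chi^2/(N-1))^(-N/2) described
# above (constants and the |C_hat| normalization omitted).
import numpy as np

def log_like_estimated_cov(residual, cov_hat, n_sims):
    chi2 = residual @ np.linalg.solve(cov_hat, residual)
    return -0.5 * n_sims * np.log1p(chi2 / (n_sims - 1))
```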

  11. Single board system for fuzzy inference

    NASA Technical Reports Server (NTRS)

    Symon, James R.; Watanabe, Hiroyuki

    1991-01-01

    The very large scale integration (VLSI) implementation of a fuzzy logic inference mechanism allows the use of rule-based control and decision making in demanding real-time applications. Researchers designed a full-custom VLSI inference engine. The chip was fabricated using CMOS technology. The chip consists of 688,000 transistors, of which 476,000 are used for RAM memory. The fuzzy logic inference engine board system incorporates the custom-designed integrated circuit into a standard VMEbus environment. The fuzzy logic system uses Transistor-Transistor Logic (TTL) parts to provide the interface between the fuzzy chip and a standard, double-height VMEbus backplane, allowing the chip to perform application process control through the VMEbus host. High-level C-language functions hide details of the hardware system interface from the applications-level programmer. The first version of the board was installed on a robot at Oak Ridge National Laboratory in January of 1990.

  12. Deep Learning for Population Genetic Inference

    PubMed Central

    Sheehan, Sara; Song, Yun S.

    2016-01-01

    Given genomic variation data from multiple individuals, computing the likelihood of complex population genetic models is often infeasible. To circumvent this problem, we introduce a novel likelihood-free inference framework by applying deep learning, a powerful modern technique in machine learning. Deep learning makes use of multilayer neural networks to learn a feature-based function from the input (e.g., hundreds of correlated summary statistics of data) to the output (e.g., population genetic parameters of interest). We demonstrate that deep learning can be effectively employed for population genetic inference and learning informative features of data. As a concrete application, we focus on the challenging problem of jointly inferring natural selection and demography (in the form of a population size change history). Our method is able to separate the global nature of demography from the local nature of selection, without sequential steps for these two factors. Studying demography and selection jointly is motivated by Drosophila, where pervasive selection confounds demographic analysis. We apply our method to 197 African Drosophila melanogaster genomes from Zambia to infer both their overall demography, and regions of their genome under selection. We find many regions of the genome that have experienced hard sweeps, and fewer under selection on standing variation (soft sweep) or balancing selection. Interestingly, we find that soft sweeps and balancing selection occur more frequently closer to the centromere of each chromosome. In addition, our demographic inference suggests that previously estimated bottlenecks for African Drosophila melanogaster are too extreme. PMID:27018908

  13. Computationally efficient Bayesian inference for inverse problems.

    SciTech Connect

    Marzouk, Youssef M.; Najm, Habib N.; Rahn, Larry A.

    2007-10-01

    Bayesian statistics provides a foundation for inference from noisy and incomplete data, a natural mechanism for regularization in the form of prior information, and a quantitative assessment of uncertainty in the inferred results. Inverse problems - representing indirect estimation of model parameters, inputs, or structural components - can be fruitfully cast in this framework. Complex and computationally intensive forward models arising in physical applications, however, can render a Bayesian approach prohibitive. This difficulty is compounded by high-dimensional model spaces, as when the unknown is a spatiotemporal field. We present new algorithmic developments for Bayesian inference in this context, showing strong connections with the forward propagation of uncertainty. In particular, we introduce a stochastic spectral formulation that dramatically accelerates the Bayesian solution of inverse problems via rapid evaluation of a surrogate posterior. We also explore dimensionality reduction for the inference of spatiotemporal fields, using truncated spectral representations of Gaussian process priors. These new approaches are demonstrated on scalar transport problems arising in contaminant source inversion and in the inference of inhomogeneous material or transport properties. We also present a Bayesian framework for parameter estimation in stochastic models, where intrinsic stochasticity may be intermingled with observational noise. Evaluation of a likelihood function may not be analytically tractable in these cases, and thus several alternative Markov chain Monte Carlo (MCMC) schemes, operating on the product space of the observations and the parameters, are introduced.

  14. Deep Learning for Population Genetic Inference.

    PubMed

    Sheehan, Sara; Song, Yun S

    2016-03-01

    Given genomic variation data from multiple individuals, computing the likelihood of complex population genetic models is often infeasible. To circumvent this problem, we introduce a novel likelihood-free inference framework by applying deep learning, a powerful modern technique in machine learning. Deep learning makes use of multilayer neural networks to learn a feature-based function from the input (e.g., hundreds of correlated summary statistics of data) to the output (e.g., population genetic parameters of interest). We demonstrate that deep learning can be effectively employed for population genetic inference and learning informative features of data. As a concrete application, we focus on the challenging problem of jointly inferring natural selection and demography (in the form of a population size change history). Our method is able to separate the global nature of demography from the local nature of selection, without sequential steps for these two factors. Studying demography and selection jointly is motivated by Drosophila, where pervasive selection confounds demographic analysis. We apply our method to 197 African Drosophila melanogaster genomes from Zambia to infer both their overall demography, and regions of their genome under selection. We find many regions of the genome that have experienced hard sweeps, and fewer under selection on standing variation (soft sweep) or balancing selection. Interestingly, we find that soft sweeps and balancing selection occur more frequently closer to the centromere of each chromosome. In addition, our demographic inference suggests that previously estimated bottlenecks for African Drosophila melanogaster are too extreme. PMID:27018908

  15. Circular inferences in schizophrenia.

    PubMed

    Jardri, Renaud; Denève, Sophie

    2013-11-01

    A considerable number of recent experimental and computational studies suggest that subtle impairments of excitatory to inhibitory balance or regulation are involved in many neurological and psychiatric conditions. The current paper aims to relate, specifically and quantitatively, excitatory to inhibitory imbalance with psychotic symptoms in schizophrenia. Considering that the brain constructs hierarchical causal models of the external world, we show that the failure to maintain the excitatory to inhibitory balance results in hallucinations as well as in the formation and subsequent consolidation of delusional beliefs. Indeed, the consequence of excitatory to inhibitory imbalance in a hierarchical neural network is equated to a pathological form of causal inference called 'circular belief propagation'. In circular belief propagation, bottom-up sensory information and top-down predictions are reverberated, i.e. prior beliefs are misinterpreted as sensory observations and vice versa. As a result, these predictions are counted multiple times. Circular inference explains the emergence of erroneous percepts, the patient's overconfidence when facing probabilistic choices, the learning of 'unshakable' causal relationships between unrelated events and a paradoxical immunity to perceptual illusions, which are all known to be associated with schizophrenia. PMID:24065721

  16. Moment inference from tomograms

    USGS Publications Warehouse

    Day-Lewis, F. D.; Chen, Y.; Singha, K.

    2007-01-01

    Time-lapse geophysical tomography can provide valuable qualitative insights into hydrologic transport phenomena associated with aquifer dynamics, tracer experiments, and engineered remediation. Increasingly, tomograms are used to infer the spatial and/or temporal moments of solute plumes; these moments provide quantitative information about transport processes (e.g., advection, dispersion, and rate-limited mass transfer) and controlling parameters (e.g., permeability, dispersivity, and rate coefficients). The reliability of moments calculated from tomograms is, however, poorly understood because classic approaches to image appraisal (e.g., the model resolution matrix) are not directly applicable to moment inference. Here, we present a semi-analytical approach to construct a moment resolution matrix based on (1) the classic model resolution matrix and (2) image reconstruction from orthogonal moments. Numerical results for radar and electrical-resistivity imaging of solute plumes demonstrate that moment values calculated from tomograms depend strongly on plume location within the tomogram, survey geometry, regularization criteria, and measurement error. Copyright 2007 by the American Geophysical Union.
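    As a reminder of what the moments are, the sketch below computes the zeroth, first, and second spatial moments (total mass, centroid, and spread) of a pixelated plume image; the grid spacing and synthetic plume are assumptions, and the paper's moment resolution matrix analysis is not reproduced.

      # Spatial moments of a plume from a pixelated tomogram: a minimal sketch.
      import numpy as np

      dx = dy = 0.5                              # cell size (m), assumed
      x, y = np.meshgrid(np.arange(40) * dx, np.arange(40) * dy, indexing="ij")
      plume = np.exp(-((x - 8) ** 2 / 4 + (y - 12) ** 2 / 9))  # synthetic image

      m0 = plume.sum() * dx * dy                 # zeroth moment: total mass
      xc = (x * plume).sum() * dx * dy / m0      # first moments: centroid
      yc = (y * plume).sum() * dx * dy / m0
      sxx = ((x - xc) ** 2 * plume).sum() * dx * dy / m0  # second central
      syy = ((y - yc) ** 2 * plume).sum() * dx * dy / m0  # moments: spread
      print(f"mass={m0:.2f} centroid=({xc:.2f},{yc:.2f}) "
            f"spread=({sxx:.2f},{syy:.2f})")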

  17. Design of a functional calcium channel protein: inferences about an ion channel-forming motif derived from the primary structure of voltage-gated calcium channels.

    PubMed Central

    Grove, A.; Tomich, J. M.; Iwamoto, T.; Montal, M.

    1993-01-01

    To identify sequence-specific motifs associated with the formation of an ionic pore, we systematically evaluated the channel-forming activity of synthetic peptides with sequence of predicted transmembrane segments of the voltage-gated calcium channel. The amino acid sequence of voltage-gated, dihydropyridine (DHP)-sensitive calcium channels suggests the presence in each of four homologous repeats (I-IV) of six segments (S1-S6) predicted to form membrane-spanning, alpha-helical structures. Only peptides representing amphipathic segments S2 or S3 form channels in lipid bilayers. To generate a functional calcium channel based on a four-helix bundle motif, four-helix bundle proteins representing IVS2 (T4CaIVS2) or IVS3 (T4CaIVS3) were synthesized. Both proteins form cation-selective channels, but with distinct characteristics: the single-channel conductance in 50 mM BaCl2 is 3 pS and 10 pS. For T4CaIVS3, the conductance saturates with increasing concentration of divalent cation. The dissociation constants for Ba2+, Ca2+, and Sr2+ are 13.6 mM, 17.7 mM, and 15.0 mM, respectively. The conductance of T4CaIVS2 does not saturate up to 150 mM salt. Whereas T4CaIVS3 is blocked by μM Ca2+ and Cd2+, T4CaIVS2 is not blocked by divalent cations. Only T4CaIVS3 is modulated by enantiomers of the DHP derivative BayK 8644, demonstrating sequence requirement for specific drug action. Thus, only T4CaIVS3 exhibits pore properties characteristic also of authentic calcium channels. The designed functional calcium channel may provide insights into fundamental mechanisms of ionic permeation and drug action, information that may in turn further our understanding of molecular determinants underlying authentic pore structures. PMID:7505682

  18. The Abundances of Hydrocarbon Functional Groups in the Interstellar Medium Inferred from Laboratory Spectra of Hydrogenated and Methylated Polycyclic Aromatic Hydrocarbons

    NASA Astrophysics Data System (ADS)

    Steglich, M.; Jäger, C.; Huisken, F.; Friedrich, M.; Plass, W.; Räder, H.-J.; Müllen, K.; Henning, Th.

    2013-10-01

    Infrared (IR) absorption spectra of individual polycyclic aromatic hydrocarbons (PAHs) containing methyl (-CH3), methylene (-CH2-), or diamond-like (>CH-) groups and IR spectra of mixtures of methylated and hydrogenated PAHs prepared by gas-phase condensation were measured at room temperature (as grains in pellets) and at low temperature (isolated in Ne matrices). In addition, the PAH blends were subjected to an in-depth molecular structure analysis by means of high-performance liquid chromatography, nuclear magnetic resonance spectroscopy, and matrix-assisted laser desorption/ionization time-of-flight mass spectrometry. Supported by calculations at the density functional theory level, the laboratory results were applied to analyze in detail the aliphatic absorption complex of the diffuse interstellar medium at 3.4 μm and to determine the abundances of hydrocarbon functional groups. Assuming that the PAHs are mainly locked in grains, aliphatic CHx groups (x = 1, 2, 3) would contribute approximately in equal quantities to the 3.4 μm feature (N_CHx/N_H ≈ 10^-5-2 × 10^-5). The abundances, however, may be two to four times lower if a major contribution to the 3.4 μm feature comes from molecules in the gas phase. Aromatic (=CH-) groups seem to be almost absent from some lines of sight, but can be nearly as abundant as each of the aliphatic components in other directions (N_=CH/N_H ≲ 2 × 10^-5, upper value for grains). Due to comparatively low binding energies, astronomical IR emission sources do not display such heavy excess hydrogenation. At best, especially in protoplanetary nebulae, -CH2- groups bound to aromatic molecules, i.e., excess hydrogens on the molecular periphery only, can survive the presence of a nearby star.

  19. Armored geckos: A histological investigation of osteoderm development in Tarentola (Phyllodactylidae) and Gekko (Gekkonidae) with comments on their regeneration and inferred function.

    PubMed

    Vickaryous, M K; Meldrum, G; Russell, A P

    2015-11-01

    Osteoderms are bone-rich organs found in the dermis of many scleroglossan lizards sensu lato, but are only known for two genera of gekkotans (geckos): Tarentola and Gekko. Here, we investigate their sequence of appearance, mode of development, structural diversity and ability to regenerate following tail loss. Osteoderms were present in all species of Tarentola sampled (Tarentola annularis, T. mauritanica, T. americana, T. crombei, T. chazaliae) as well as Gekko gecko, but not G. smithii. Gekkotan osteoderms first appear within the integument dorsal to the frontal bone or within the supraocular scales. They then manifest as mineralized structures in other positions across the head. In Tarentola and G. gecko, discontinuous clusters subsequently form dorsal to the pelvis/base of the tail, and then dorsal to the pectoral apparatus. Gekkotan osteoderm formation begins once the dermis is fully formed. Early bone deposition appears to involve populations of fibroblast-like cells, which are gradually replaced by more rounded osteoblasts. In T. annularis and T. mauritanica, an additional skeletal tissue is deposited across the superficial surface of the osteoderm. This tissue is vitreous, avascular, cell-poor, lacks intrinsic collagen, and is herein identified as osteodermine. We also report that following tail loss, both T. annularis and T. mauritanica are capable of regenerating osteoderms, including osteodermine, in the regenerated part of the tail. We propose that osteoderms serve roles in defense against combative prey and intraspecific aggression, along with anti-predation functions. PMID:26248595

  20. Fractional Diffusion, Low Exponent Lévy Stable Laws, and ‘Slow Motion’ Denoising of Helium Ion Microscope Nanoscale Imagery

    PubMed Central

    Carasso, Alfred S.; Vladár, András E.

    2012-01-01

    Helium ion microscopes (HIM) are capable of acquiring images with better than 1 nm resolution, and HIM images are particularly rich in morphological surface details. However, such images are generally quite noisy. A major challenge is to denoise these images while preserving delicate surface information. This paper presents a powerful slow motion denoising technique, based on solving linear fractional diffusion equations forward in time. The method is easily implemented computationally, using fast Fourier transform (FFT) algorithms. When applied to actual HIM images, the method is found to reproduce the essential surface morphology of the sample with high fidelity. In contrast, such highly sophisticated methodologies as Curvelet Transform denoising, and Total Variation denoising using split Bregman iterations, are found to eliminate vital fine scale information, along with the noise. Image Lipschitz exponents are a useful image metrology tool for quantifying the fine structure content in an image. In this paper, this tool is applied to rank order the above three distinct denoising approaches, in terms of their texture preserving properties. In several denoising experiments on actual HIM images, it was found that fractional diffusion smoothing performed noticeably better than split Bregman TV, which in turn, performed slightly better than Curvelet denoising. PMID:26900518
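    The following minimal Python sketch shows the core of such a scheme: the image spectrum is damped by the FFT multiplier of a fractional (low-exponent) diffusion equation marched forward in small time steps. The exponent, rate, and step count are illustrative assumptions, not the paper's calibrated values, and the toy image stands in for a HIM micrograph.

      # 'Slow motion' fractional diffusion smoothing via FFT: a minimal sketch.
      import numpy as np

      rng = np.random.default_rng(2)
      clean = np.kron(rng.random((16, 16)), np.ones((16, 16)))  # toy "surface"
      noisy = clean + 0.2 * rng.standard_normal(clean.shape)

      ny, nx = noisy.shape
      ky = np.fft.fftfreq(ny)[:, None]
      kx = np.fft.fftfreq(nx)[None, :]
      ksq = kx**2 + ky**2

      beta, lam, dt, steps = 0.3, 2.0, 0.05, 10    # low Levy exponent 2*beta
      decay = np.exp(-lam * dt * ksq**beta)        # per-step diffusion multiplier
      u_hat = np.fft.fft2(noisy)
      for _ in range(steps):
          u_hat *= decay       # an inverse FFT here would give one 'slow
                               # motion' frame for visual inspection
      denoised = np.real(np.fft.ifft2(u_hat))
      print("noise std before/after:",
            np.std(noisy - clean), np.std(denoised - clean))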

  1. Can Change in Prolonged Walking Be Inferred From a Short Test of Gait Speed Among Older Adults Who Are Initially Well-Functioning?

    PubMed Central

    Neogi, Tuhina; King, Wendy C.; LaValley, Michael P.; Kritchevsky, Stephen B.; Nevitt, Michael C.; Harris, Tamara B.; Ferrucci, Luigi; Simonsick, Eleanor M.; Satterfield, Suzanne; Strotmeyer, Elsa S.; Zhang, Yuqing

    2014-01-01

    Background The ability to walk for short and prolonged periods of time is often measured with separate walking tests. It is unclear whether decline in the 2-minute walk coincides with decline in a shorter 20-m walk among older adults. Objective The aim of this study was to describe patterns of change in the 20-m walk and 2-minute walk over 8 years among a large cohort of older adults. Should change be similar between tests of walking ability, separate retesting of prolonged walking may need to be reconsidered. Design A longitudinal, observational cohort study was conducted. Methods Data were from 1,893 older adults who were well-functioning (≥70 years of age). The 20-m walk and 2-minute walk were repeatedly measured over 8 years to measure change during short and prolonged periods of walking, respectively. Change was examined using a dual group-based trajectory model (dual model), and agreement between walking trajectories was quantified with a weighted kappa statistic. Results Three trajectory groups for the 20-m walk and 2-minute walk were identified. More than 86% of the participants were in similar trajectory groups for both tests from the dual model. There was high chance-corrected agreement (kappa=.84; 95% confidence interval=.82, .86) between the 20-m walk and 2-minute walk trajectory groups. Limitations One-third of the original Health, Aging and Body Composition (Health ABC) study cohort was excluded from analysis due to missing clinic visits, followed by being excluded for health reasons for performing the 2-minute walk, limiting generalizability to healthy older adults. Conclusions Patterns of change in the 2-minute walk are similar to those in the 20-m walk. Thus, separate retesting of the 2-minute walk may need to be reconsidered to gauge change in prolonged walking. PMID:24786943

  2. On the necessity of dissecting sequence similarity scores into segment-specific contributions for inferring protein homology, function prediction and annotation

    PubMed Central

    2014-01-01

    Background Protein sequence similarities to any types of non-globular segments (coiled coils, low complexity regions, transmembrane regions, long loops, etc. where either positional sequence conservation is the result of a very simple, physically induced pattern or rather integral sequence properties are critical) are pertinent sources for mistaken homologies. Regretfully, these considerations regularly escape attention in large-scale annotation studies since, often, there is no substitute to manual handling of these cases. Quantitative criteria are required to suppress events of function annotation transfer as a result of false homology assignments. Results The sequence homology concept is based on the similarity comparison between the structural elements, the basic building blocks for conferring the overall fold of a protein. We propose to dissect the total similarity score into fold-critical and other, remaining contributions and suggest that, for a valid homology statement, the fold-relevant score contribution should at least be significant on its own. As part of the article, we provide the DissectHMMER software program for dissecting HMMER2/3 scores into segment-specific contributions. We show that DissectHMMER reproduces HMMER2/3 scores with sufficient accuracy and that it is useful in automated decisions about homology for instructive sequence examples. To generalize the dissection concept for cases without 3D structural information, we find that a dissection based on alignment quality is an appropriate surrogate. The approach was applied to a large-scale study of SMART and PFAM domains in the space of seed sequences and in the space of UniProt/SwissProt. Conclusions Sequence similarity core dissection with regard to fold-critical and other contributions systematically suppresses false hits and, additionally, recovers previously obscured homology relationships such as the one between aquaporins and formate/nitrite transporters that, so far, was only

  3. Medical image denoising via optimal implementation of non-local means on hybrid parallel architecture.

    PubMed

    Nguyen, Tuan-Anh; Nakib, Amir; Nguyen, Huy-Nam

    2016-06-01

    The non-local means denoising filter has been established as a gold standard for the image denoising problem in general, and particularly in medical imaging, due to its efficiency. However, its computation time has limited its use in real-world applications, especially in medical imaging. In this paper, a distributed version on a hybrid parallel architecture is proposed to solve the computation-time problem, together with a new method to compute the filter coefficients that takes the neighborhood of the current voxel more accurately into account. In terms of implementation, our key contribution consists in reducing the number of shared-memory accesses. Tests of the proposed method were performed on the BrainWeb database for different levels of noise. Performance and sensitivity were quantified in terms of speedup, peak signal-to-noise ratio, execution time, and the number of floating-point operations. The obtained results demonstrate the efficiency of the proposed method. Moreover, the implementation is compared with that of other techniques recently published in the literature. PMID:27084318
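    For reference, the sketch below is a plain, sequential nonlocal means filter on a small 2D image; it shows the patch comparison and weighted averaging that the paper parallelizes, while the 3D extension, the hybrid CPU/GPU distribution, and the shared-memory optimizations are not reproduced. Patch size, search radius, and the decay parameter h are assumptions.

      # Nonlocal means on a small 2D image: a minimal sequential sketch.
      import numpy as np

      def nlm(img, f=1, t=5, h=0.15):
          """f: patch radius, t: search radius, h: weight decay."""
          pad = np.pad(img, f + t, mode="reflect")
          out = np.zeros_like(img)
          for i in range(img.shape[0]):
              for j in range(img.shape[1]):
                  ic, jc = i + f + t, j + f + t
                  ref = pad[ic - f:ic + f + 1, jc - f:jc + f + 1]
                  weights, values = [], []
                  for di in range(-t, t + 1):
                      for dj in range(-t, t + 1):
                          patch = pad[ic + di - f:ic + di + f + 1,
                                      jc + dj - f:jc + dj + f + 1]
                          d2 = np.mean((ref - patch) ** 2)  # patch similarity
                          weights.append(np.exp(-d2 / h**2))
                          values.append(pad[ic + di, jc + dj])
                  w = np.array(weights)
                  out[i, j] = np.dot(w, values) / w.sum()   # weighted average
          return out

      rng = np.random.default_rng(3)
      noisy = np.tile([[0.2, 0.8]], (16, 8)) + 0.1 * rng.standard_normal((16, 16))
      print("denoised centre pixel:", nlm(noisy)[8, 8])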

  4. A New Approach to Inverting and De-Noising Backscatter from Lidar Observations

    NASA Astrophysics Data System (ADS)

    Marais, Willem; Hen Hu, Yu; Holz, Robert; Eloranta, Edwin

    2016-06-01

    Atmospheric lidar observations provide a unique capability to directly observe the vertical profile of cloud and aerosol scattering properties and have proven to be an important capability for the atmospheric science community. For this reason NASA and ESA have put a major emphasis on developing both space- and ground-based lidar instruments. Measurement noise (solar background and detector noise) has proven to be a significant limitation and is typically reduced by temporal and vertical averaging. This approach has serious drawbacks: it sacrifices spatial information and can introduce biases due to the non-linear relationship between the signal and the retrieved scattering properties. This paper investigates a new approach to de-noising and retrieving cloud and aerosol backscatter properties from lidar observations that leverages a technique developed for medical imaging to de-blur and de-noise images; accuracy is defined as the error between the true and inverted photon rates. In this way, non-linear bias errors can be mitigated and spatial information preserved.

  5. Bearing fault diagnosis based on variational mode decomposition and total variation denoising

    NASA Astrophysics Data System (ADS)

    Zhang, Suofeng; Wang, Yanxue; He, Shuilong; Jiang, Zhansi

    2016-07-01

    Feature extraction plays an essential role in bearing fault detection. However, measured vibration signals are complex and non-stationary in nature, and the impulsive signatures of a rolling bearing are usually immersed in stochastic noise. Hence, a novel hybrid fault diagnosis approach is developed in this work for denoising and non-stationary feature extraction, combining variational mode decomposition (VMD) with majorization-minimization-based total variation denoising (TV-MM). The TV-MM approach is utilized to remove stochastic noise from the raw signal and to enhance the corresponding characteristics. Since the parameter λ is very important in TV-MM, a weighted kurtosis index is also proposed in this work to determine an appropriate λ. The performance of the proposed hybrid approach is assessed through the analysis of simulated and practical bearing vibration signals. Results demonstrate that the proposed approach has superior capability to detect rolling bearing faults from vibration signals.
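    For the TV-MM stage, the sketch below implements the standard majorization-minimization iteration for 1D total variation denoising, x = argmin 0.5||y - x||^2 + λ||Dx||_1; the dense linear solve is for clarity only, and the VMD stage and the kurtosis-weighted choice of λ described above are not reproduced.

      # 1D total variation denoising by majorization-minimization: a sketch.
      import numpy as np

      def tv_mm(y, lam=1.0, iters=50, eps=1e-8):
          n = y.size
          D = np.diff(np.eye(n), axis=0)         # first-difference operator
          x = y.copy()
          for _ in range(iters):
              w = np.abs(D @ x) + eps            # majorizer weights |D x_k|
              # x = y - D^T ((1/lam) diag(w) + D D^T)^{-1} D y
              F = np.diag(w) / lam + D @ D.T
              x = y - D.T @ np.linalg.solve(F, D @ y)
          return x

      rng = np.random.default_rng(4)
      steps = np.repeat([0.0, 2.0, 1.0], 60)     # piecewise-constant signal
      noisy = steps + 0.3 * rng.standard_normal(steps.size)
      print("max error after TV-MM:", np.abs(tv_mm(noisy, lam=2.0) - steps).max())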

  6. Noise reduction of time domain electromagnetic data: Application of a combined wavelet denoising method

    NASA Astrophysics Data System (ADS)

    Ji, Yanju; Li, Dongsheng; Yuan, Guiyang; Lin, Jun; Du, Shangyu; Xie, Lijun; Wang, Yuan

    2016-06-01

    A denoising method based on wavelet analysis is presented for the removal of noise (background noise and random spikes) from time-domain electromagnetic (TEM) data. The method combines two signal processing techniques: wavelet thresholding and the stationary wavelet transform. First, wavelet thresholding is used to remove background noise from the TEM data. Then the data are divided into a series of details and approximations using the stationary wavelet transform. Random spikes in the details are identified by zero-reference data and an adaptive energy detector, and the corresponding details are processed to suppress the spikes. The denoised TEM data are reconstructed via the inverse stationary wavelet transform using the processed details at each level and the approximations at the highest level. The proposed method has been verified on synthetic TEM data, whose signal-to-noise ratio increased from 10.97 dB to 24.37 dB. The method was also applied to noise suppression of field data collected at Hengsha Island, China. The section images show that the noise is suppressed effectively and the resolution of the deep anomaly is clearly improved.
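    The wavelet-threshold stage can be sketched in a few lines with the PyWavelets package; the decay curve, wavelet, and universal threshold below are assumptions, and the spike detection against zero-reference data and the stationary-wavelet step are not reproduced.

      # Wavelet-threshold denoising of a decaying signal: a minimal sketch.
      import numpy as np
      import pywt

      rng = np.random.default_rng(5)
      t = np.linspace(0.01, 1.0, 1024)
      signal = np.exp(-5 * t)                        # idealized TEM decay
      noisy = signal + 0.02 * rng.standard_normal(t.size)

      coeffs = pywt.wavedec(noisy, "db4", level=5)   # multilevel DWT
      sigma = np.median(np.abs(coeffs[-1])) / 0.6745 # noise scale, finest level
      thr = sigma * np.sqrt(2 * np.log(noisy.size))  # universal threshold
      coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                              for c in coeffs[1:]]
      denoised = pywt.waverec(coeffs, "db4")
      print("SNR gain (dB):",
            10 * np.log10(np.sum((noisy - signal) ** 2)
                          / np.sum((denoised - signal) ** 2)))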

  7. Efficient and Robust Nonlocal Means Denoising of MR Data Based on Salient Features Matching

    PubMed Central

    Tristán-Vega, Antonio; García-Pérez, Verónica; Aja-Fernández, Santiago; Westin, Carl-Fredrik

    2014-01-01

    The nonlocal means (NLM) filter has become a popular approach for denoising medical images due to its excellent performance. However, its heavy computational load has been an important shortcoming preventing its use. NLM works by averaging pixels in nonlocal vicinities, weighting them depending on their similarity with the pixel of interest. This similarity is assessed from the squared difference between corresponding pixels inside local patches centered at the locations compared. Our proposal is to reduce the computational load of this comparison by checking only a subset of salient features associated with the pixels, which suffice to estimate the actual difference as computed in the original NLM approach. The speedup achieved with respect to the original implementation is over one order of magnitude and, compared with more recent NLM improvements for MRI denoising, our method is nearly twice as fast. At the same time, we show in both synthetic and in vivo experiments that computing appropriate salient features makes the estimation of NLM weights more robust to noise. Consequently, we are able to improve on the outcomes achieved with recent state-of-the-art techniques for a wide range of realistic signal-to-noise-ratio scenarios such as diffusion MRI. Finally, the statistical characterization of the computed features allows one to dispense with some of the heuristics commonly used for parameter tuning. PMID:21906832

  8. An MRI denoising method using image data redundancy and local SNR estimation.

    PubMed

    Golshan, Hosein M; Hasanzadeh, Reza P R; Yousefzadeh, Shahrokh C

    2013-09-01

    This paper presents an LMMSE-based method for the three-dimensional (3D) denoising of MR images assuming a Rician noise model. Conventionally, the LMMSE method estimates the noiseless signal values using the observed MR data samples within local neighborhoods. This is inefficient, because 3D MR data intrinsically include many similar samples that can be used to improve the estimation. To overcome this problem, we model MR data as random fields and establish a principled way of choosing samples not only from a local neighborhood but also from a large portion of the given data. To find similar samples within the MR data, an effective similarity measure based on the local statistical moments of the images is presented. The parameters of the proposed filter are chosen automatically from the estimated local signal-to-noise ratio. To further enhance the denoising performance, a recursive version of the introduced approach is also presented. The proposed filter is compared with related state-of-the-art filters using both synthetic and real MR datasets. The experimental results demonstrate the superior performance of our proposal in removing noise and preserving the anatomical structures of MR images. PMID:23668996
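    For orientation, the sketch below applies a local (neighborhood-based) LMMSE estimator for Rician magnitude data, following the closed form popularized by Aja-Fernández and colleagues; the paper's contribution of selecting similar samples nonlocally via local moments is not reproduced, and the noise level is assumed known.

      # Local LMMSE estimation for Rician MR magnitude data: a minimal sketch.
      import numpy as np
      from scipy.ndimage import uniform_filter

      def lmmse_rician(M, sigma, size=5):
          M2 = uniform_filter(M**2, size)            # local E[M^2]
          M4 = uniform_filter(M**4, size)            # local E[M^4]
          K = 1.0 - (4 * sigma**2 * (M2 - sigma**2)) / np.maximum(M4 - M2**2, 1e-12)
          A2 = M2 - 2 * sigma**2 + np.clip(K, 0, 1) * (M**2 - M2)
          return np.sqrt(np.maximum(A2, 0))          # estimated noiseless signal

      rng = np.random.default_rng(6)
      A = np.tile(np.linspace(0.5, 2.0, 64), (64, 1))          # true signal
      sigma = 0.2
      M = np.hypot(A + sigma * rng.standard_normal(A.shape),   # Rician magnitude
                   sigma * rng.standard_normal(A.shape))
      print("RMSE before/after:",
            np.sqrt(np.mean((M - A) ** 2)),
            np.sqrt(np.mean((lmmse_rician(M, sigma) - A) ** 2)))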

  9. Localization and de-noising seismic signals on SASW measurement by wavelet transform

    NASA Astrophysics Data System (ADS)

    Golestani, Alireza; S. Kolbadi, S. Mahdi; Heshmati, Ali Akbar

    2013-11-01

    The spectral analysis of surface waves (SASW) method is a nondestructive in situ testing method used to determine the dynamic properties of soil sites and pavement systems. Phase information and dispersion characteristics of a wave propagating through these systems play a significant role in the processing of recorded data. Inversion of the dispersive phase data provides information on the variation of shear-wave velocity with depth. In the case of sanded residual soil, however, it is not easy to produce a reliable phase spectrum curve: natural noise and human interference during surface-wave data generation make obtaining a reliable phase spectrum a complex issue for geologists. In this paper, a time-frequency analysis based on the complex Gaussian derivative wavelet was applied to detect and localize events that are not identifiable by conventional signal processing methods. The performance of the discrete wavelet transform (DWT) in reducing the noise of these recorded seismic signals was then evaluated, with particular attention to the influence of the choice of decomposition level on the efficiency of the process. The method employs various wavelet thresholding techniques, which provide many options for controllable de-noising at each level of signal decomposition, and it requires far less computation time than the continuous wavelet transform. According to the results, the proposed method is powerful for visualizing the spectrum range of interest in seismic signals and for de-noising at low decomposition levels.

  10. Statistics of Natural Stochastic Textures and Their Application in Image Denoising.

    PubMed

    Zachevsky, Ido; Zeevi, Yehoshua Y Josh

    2016-05-01

    Natural stochastic textures (NSTs), characterized by their fine details, are prone to corruption by artifacts introduced during the image acquisition process by the combined effect of blur and noise. While many successful algorithms exist for image restoration and enhancement, the restoration of natural textures and textured images based on suitable statistical models still leaves room for improvement. We examine the statistical properties of NSTs using three image databases. We show that the Gaussian distribution is suitable for many NSTs, while other natural textures can be properly represented by a model that separates the image into two layers: one contains the structural elements of smooth areas and edges, while the other contains the statistically Gaussian textural details. Based on these statistical properties, an algorithm for denoising natural images containing NSTs is proposed, using a patch-based fractional Brownian motion model and regularization by means of anisotropic diffusion. It is illustrated that this algorithm successfully recovers both missing textural details and the structural attributes that characterize natural images. The algorithm is compared with classical as well as state-of-the-art denoising algorithms. PMID:27045423

  11. Foetal phonocardiographic signal denoising based on non-negative matrix factorization.

    PubMed

    Chourasia, V S; Tiwari, A K; Gangopadhyay, R; Akant, K A

    2012-01-01

    Foetal phonocardiography (fPCG) is a non-invasive, cost-effective and simple technique for antenatal care. The fPCG signals contain vital information of diagnostic importance regarding the foetal health. However, the fPCG signal is usually contaminated by various noises and thus requires robust signal processing to denoise the signal. The main aim of this paper is to develop a methodology for removal of unwanted noise from the fPCG signal. The proposed methodology utilizes the non-negative matrix factorization (NMF) algorithm. The developed methodology is tested on both simulated and real-time fPCG signals. The performance of the developed methodology has been evaluated in terms of the gain in signal-to-noise ratio (SNR) achieved through the process of denoising. In particular, using the NMF algorithm, a substantial improvement in SNR of the fPCG signals in the range of 12-30 dB has been achieved, providing a high quality assessment of foetal well-being. PMID:22136609

  12. Denoising of X-ray pulsar observed profile in the undecimated wavelet domain

    NASA Astrophysics Data System (ADS)

    Xue, Meng-fan; Li, Xiao-ping; Fu, Ling-zhong; Liu, Xiu-ping; Sun, Hai-feng; Shen, Li-rong

    2016-01-01

    The low intensity of the X-ray pulsar signal and the strong X-ray background radiation lead to a low signal-to-noise ratio (SNR) of the X-ray pulsar observed profile obtained through epoch folding, especially when the observation time is not long enough. This makes denoising of the observed profile necessary. In this paper, the statistical characteristics of the X-ray pulsar signal are studied, and a signal-dependent noise model is established for the observed profile. Based on this, a profile noise reduction method is developed that performs local linear minimum mean square error filtering in the undecimated wavelet domain. The detail wavelet coefficients are rescaled by multiplying their amplitudes by a locally adaptive factor, namely the local variance ratio of the noiseless coefficients to the noisy ones. All the nonstationary statistics needed by the algorithm are calculated from the observed profile, without a priori information. The results of experiments, carried out on simulated data obtained from a ground-based simulation system and on real data from the Rossi X-Ray Timing Explorer satellite, indicate that the proposed method is excellent in both noise suppression and preservation of peak sharpness, and that it clearly outperforms four widely accepted and used wavelet denoising methods in terms of SNR, Pearson correlation coefficient, and root mean square error.

  13. GPU-Based Block-Wise Nonlocal Means Denoising for 3D Ultrasound Images

    PubMed Central

    Hou, Wenguang; Zhang, Xuming; Ding, Mingyue

    2013-01-01

    Speckle suppression plays an important role in improving ultrasound (US) image quality. While many algorithms have been proposed for 2D US image denoising with remarkable filtering quality, relatively little work has been done on 3D ultrasound speckle suppression, where the whole volume rather than a single frame needs to be considered; the most crucial problem with 3D US denoising is that the computational complexity increases tremendously. The nonlocal means (NLM) filter provides an effective method for speckle suppression in US images. In this paper, a programmable graphics-processing-unit- (GPU-) based fast NLM filter is proposed for 3D ultrasound speckle reduction. A Gamma-distribution noise model, which is able to reliably capture image statistics for log-compressed ultrasound images, was used for the 3D block-wise NLM filter within a Bayesian framework. The most significant aspect of our method is the adoption of the powerful data-parallel computing capability of the GPU to improve overall efficiency. Experimental results demonstrate that the proposed method can enormously accelerate the algorithm. PMID:24348747

  14. Research on infrared-image denoising algorithm based on the noise analysis of the detector

    NASA Astrophysics Data System (ADS)

    Liu, Songtao; Zhou, Xiaodong; Shen, Tongsheng; Han, Yanli

    2005-01-01

    Since conventional denoising algorithms do not account for the characteristics of a specific detector, they are not very effective at removing the various noises contained in low signal-to-noise-ratio infrared images. In this paper, a new approach to infrared image denoising is proposed, based on noise analysis of the detector, using an L-model infrared multi-element detector as an example. According to the noise analysis of this detector, the emphasis is placed on filtering white noise and fractal noise in the preprocessing phase. Wavelet analysis is a good tool for analyzing 1/f processes: a 1/f process can be viewed approximately as white noise in the wavelet domain, since its wavelet coefficients are stationary and uncorrelated. Thus, if the wavelet transform is adopted, the problem of removing both white noise and fractal noise reduces to the single problem of removing white noise. To address this problem, a new wavelet-domain adaptive Wiener filtering algorithm is presented. The filtering effect of our method is compared quantitatively and qualitatively with those of the traditional median filter, mean filter, and wavelet thresholding algorithm. The results show that our method reduces the various noises effectively and noticeably raises the signal-to-noise ratio.
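    A generic wavelet-domain adaptive Wiener shrinkage can be sketched as follows: each detail coefficient is scaled by an empirical Wiener gain computed from its local variance. The Haar basis, window size, and noise level are illustrative assumptions; the detector-specific noise model described above is not reproduced.

      # Wavelet-domain adaptive Wiener shrinkage: a minimal sketch.
      import numpy as np
      import pywt
      from scipy.ndimage import uniform_filter

      def wiener_shrink(band, sigma, size=5):
          local_var = uniform_filter(band**2, size)  # local coefficient power
          gain = np.maximum(local_var - sigma**2, 0) / np.maximum(local_var, 1e-12)
          return gain * band                         # empirical Wiener gain

      rng = np.random.default_rng(7)
      img = np.kron(rng.random((8, 8)), np.ones((8, 8)))  # toy IR frame
      noisy = img + 0.1 * rng.standard_normal(img.shape)

      cA, (cH, cV, cD) = pywt.dwt2(noisy, "haar")
      sigma_w = 0.1                                  # assumed noise std
      den = pywt.idwt2((cA, tuple(wiener_shrink(b, sigma_w)
                                  for b in (cH, cV, cD))), "haar")
      print("MSE before/after:", np.mean((noisy - img) ** 2),
            np.mean((den - img) ** 2))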

  15. Estimating uncertainty of inference for validation

    SciTech Connect

    Booker, Jane M; Langenbrunner, James R; Hemez, Francois M; Ross, Timothy J

    2010-09-30

    We present a validation process based upon the concept that validation is an inference-making activity. This has always been true, but the association has not been as important before as it is now. Previously, theory had been confirmed by more data, and predictions were possible based on data. The process today is to infer from theory to code and from code to prediction, making the role of prediction somewhat automatic, and a machine function. Validation is defined as determining the degree to which a model and code are an accurate representation of experimental test data. Embedded in validation is the intention to use the computer code to predict. To predict is to accept the conclusion that an observable final state will manifest; therefore, prediction is an inference whose goodness relies on the validity of the code. Quantifying the uncertainty of a prediction amounts to quantifying the uncertainty of validation, and this involves the characterization of uncertainties inherent in theory/models/codes and the corresponding data. An introduction to inference making and its associated uncertainty is provided as a foundation for the validation problem. A mathematical construction for estimating the uncertainty in the validation inference is then presented, including a possibility distribution constructed to represent the inference uncertainty for validation under uncertainty. The estimation of inference uncertainty for validation is illustrated using data and calculations from Inertial Confinement Fusion (ICF). The ICF measurements of neutron yield and ion temperature were obtained for direct-drive inertial fusion capsules at the Omega laser facility. The glass capsules, containing the fusion gas, were systematically selected with the intent of establishing a reproducible baseline of high-yield 10^13-10^14 neutron output. The deuterium-tritium ratio in these experiments was varied to study its influence upon yield. This paper on validation inference is the

  16. BIE: Bayesian Inference Engine

    NASA Astrophysics Data System (ADS)

    Weinberg, Martin D.

    2013-12-01

    The Bayesian Inference Engine (BIE) is an object-oriented library of tools written in C++ designed explicitly to enable Bayesian update and model comparison for astronomical problems. To facilitate "what if" exploration, BIE provides a command-line interface (written with Bison and Flex) to run input scripts. The output of the code is a simulation of the Bayesian posterior distribution, from which summary statistics (e.g., moments) and confidence intervals can be determined. All of these quantities are fundamentally integrals, and the Markov chain approach produces variates θ distributed according to P(θ|D), so moments are trivially obtained by summing over the ensemble of variates.

  17. Bayesian inference in geomagnetism

    NASA Technical Reports Server (NTRS)

    Backus, George E.

    1988-01-01

    The inverse problem in empirical geomagnetic modeling is investigated, with critical examination of recently published studies. Particular attention is given to the use of Bayesian inference (BI) to select the damping parameter lambda in the uniqueness portion of the inverse problem. The mathematical bases of BI and stochastic inversion are explored, with consideration of bound-softening problems and resolution in linear Gaussian BI. The problem of estimating the radial magnetic field B(r) at the earth core-mantle boundary from surface and satellite measurements is then analyzed in detail, with specific attention to the selection of lambda in the studies of Gubbins (1983) and Gubbins and Bloxham (1985). It is argued that the selection method is inappropriate and leads to lambda values much larger than those that would result if a reasonable bound on the heat flow at the CMB were assumed.

  18. Bayesian Inference of Tumor Hypoxia

    NASA Astrophysics Data System (ADS)

    Gunawan, R.; Tenti, G.; Sivaloganathan, S.

    2009-12-01

    automatically that the inference from the data could not be summarized by just two numbers, but the full posterior probability density function (pdf) had to be used.

  19. A formal model of interpersonal inference

    PubMed Central

    Moutoussis, Michael; Trujillo-Barreto, Nelson J.; El-Deredy, Wael; Dolan, Raymond J.; Friston, Karl J.

    2014-01-01

    Introduction: We propose that active Bayesian inference—a general framework for decision-making—can equally be applied to interpersonal exchanges. Social cognition, however, entails special challenges. We address these challenges through a novel formulation of a formal model and demonstrate its psychological significance. Method: We review relevant literature, especially with regards to interpersonal representations, formulate a mathematical model and present a simulation study. The model accommodates normative models from utility theory and places them within the broader setting of Bayesian inference. Crucially, we endow people's prior beliefs, into which utilities are absorbed, with preferences of self and others. The simulation illustrates the model's dynamics and furnishes elementary predictions of the theory. Results: (1) Because beliefs about self and others inform both the desirability and plausibility of outcomes, in this framework interpersonal representations become beliefs that have to be actively inferred. This inference, akin to “mentalizing” in the psychological literature, is based upon the outcomes of interpersonal exchanges. (2) We show how some well-known social-psychological phenomena (e.g., self-serving biases) can be explained in terms of active interpersonal inference. (3) Mentalizing naturally entails Bayesian updating of how people value social outcomes. Crucially this includes inference about one's own qualities and preferences. Conclusion: We inaugurate a Bayes optimal framework for modeling intersubject variability in mentalizing during interpersonal exchanges. Here, interpersonal representations are endowed with explicit functional and affective properties. We suggest the active inference framework lends itself to the study of psychiatric conditions where mentalizing is distorted. PMID:24723872

  20. Bayes factors and multimodel inference

    USGS Publications Warehouse

    Link, W.A.; Barker, R.J.

    2009-01-01

    Multimodel inference has two main themes: model selection, and model averaging. Model averaging is a means of making inference conditional on a model set, rather than on a selected model, allowing formal recognition of the uncertainty associated with model choice. The Bayesian paradigm provides a natural framework for model averaging, and provides a context for evaluation of the commonly used AIC weights. We review Bayesian multimodel inference, noting the importance of Bayes factors. Noting the sensitivity of Bayes factors to the choice of priors on parameters, we define and propose nonpreferential priors as offering a reasonable standard for objective multimodel inference.
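    As a worked example of a Bayes factor, consider binomial data under two models of a coin: M0 fixes p = 0.5, while M1 places a uniform Beta(1,1) prior on p, so both marginal likelihoods have closed forms. The counts below are invented for illustration; the example also hints at the prior sensitivity noted above, since replacing Beta(1,1) with another prior changes the Bayes factor.

      # Bayes factor for a fair coin (M0) vs. a free bias (M1): a worked example.
      import numpy as np
      from scipy.stats import binom
      from scipy.special import comb, betaln

      k, n = 62, 100                                 # observed heads / tosses

      m0 = binom.pmf(k, n, 0.5)                      # evidence under M0
      # Evidence under M1: Beta-Binomial marginal, C(n,k) * B(k+1, n-k+1).
      m1 = comb(n, k) * np.exp(betaln(k + 1, n - k + 1))

      print("Bayes factor M1 vs M0:", m1 / m0)       # >1 favors the free bias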

  1. Noise Level Estimation for Model Selection in Kernel PCA Denoising.

    PubMed

    Varon, Carolina; Alzate, Carlos; Suykens, Johan A K

    2015-11-01

    One of the main challenges in unsupervised learning is to find suitable values for the model parameters. In kernel principal component analysis (kPCA), for example, these are the number of components, the kernel, and its parameters. This paper presents a model selection criterion based on distance distributions (MDD). This criterion can be used to find the number of components and the σ² parameter of radial basis function kernels by means of spectral comparison between information and noise. The noise content is estimated from the statistical moments of the distribution of distances in the original dataset. This allows for a type of randomization of the dataset, without actually having to permute the data points or generate artificial datasets. After comparing the eigenvalues computed from the estimated noise with the ones from the input dataset, information is retained and maximized by a set of model parameters. In addition to the model selection criterion, this paper proposes a modification to the fixed-size method and uses the incomplete Cholesky factorization, both of which are used to solve kPCA in large-scale applications. These two approaches, together with the model selection criterion MDD, were tested on toy examples and real-life applications, and it is shown that they outperform other known algorithms. PMID:25608316

  2. [Denoising and assessing method of additive noise in the ultraviolet spectrum of SO2 in flue gas].

    PubMed

    Zhou, Tao; Sun, Chang-Ku; Liu, Bin; Zhao, Yu-Mei

    2009-11-01

    The problem of denoising and assessing the ultraviolet spectrum of SO2 in flue gas was studied based on differential optical absorption spectroscopy (DOAS). The denoising procedure for additive noise in the spectrum was divided into two parts: reducing the additive noise and enhancing the useful signal. To obtain the absorption feature of the measured gas, a multi-resolution preprocessing of the original spectrum was adopted for denoising by the discrete wavelet transform (DWT). Signal energy operators at different scales were used to choose the denoising threshold and separate the useful signal from the noise. On the other hand, because there is no sudden change in flue-gas spectra over time, the useful signal component was enhanced by exploiting the signal's time dependence. The standard absorption cross-section was used to build an ideal absorption spectrum at the measured gas temperature and pressure, and this ideal spectrum was used as the desired signal, instead of the original spectrum, in the assessment to compute the signal-to-noise ratio (SNR). Validation experiments were performed in two environments: in the laboratory and in the field. In the laboratory, SO2 was measured several times with a system using the method described above; the average deviation was less than 1.5%, and the repeatability was less than 1%. Short-range experimental data were better than large-range data. In the field, at a power plant whose flue-gas concentration varied over a large range, the maximum deviation of the method was 2.31% across 18 groups of comparison data. The experimental results show that the denoising effect on field spectra was better than on laboratory spectra, and that the method can effectively improve the SNR of spectra severely polluted by additive noise. PMID:20101989

  3. Interferometric side-scan sonar signal denoised by wavelets

    NASA Astrophysics Data System (ADS)

    Sintes, Christophe R.; Legris, Michel; Solaiman, Basel

    2003-04-01

    This paper concerns the ability of side-scan sonar to determine bathymetry. New side-scan sonars, which image the sea bottom at high definition, estimate the relief with the same definition as conventional sonar images by using an interferometric multisensor system. The drawbacks concern the accuracy of, and errors in, the numerical altitude model. Interferometric methods use a phase difference to determine a time delay between two sensors. The phase difference belongs to a finite interval (-π, +π), but the time delay between two sensors does not: the phase is wrapped modulo 2π. The sonar used here is designed for the vernier technique, which allows this ambiguity to be removed. The difficulty comes from interferometric noise, which generates errors in the 2π-ambiguity estimation derived from the vernier. The traditional way to reduce the noise impact on the interferometric signal is to average the data, but this does not preserve the resolution of the bathymetric estimation. This paper presents an attempt to improve the accuracy and resolution of the interferometric signal through a wavelet-based method of image despeckling. Traditionally, despeckling is performed on the logarithm of the absolute value of the signal; here, the proposed interferometric despeckling is applied directly to the interferometric signal by integrating information, guided by the despeckled image. This multiscale analysis corresponds to an auto-adaptive average filtering. A variant of this method, based on this assumption, uses the identity function to reconstruct the signal. In the presented results, phase despeckling considerably improves the quality of the interferometric signal in terms of signal-to-noise ratio, without important degradation of resolution.

  4. Causal Inference in Retrospective Studies.

    ERIC Educational Resources Information Center

    Holland, Paul W.; Rubin, Donald B.

    1988-01-01

    The problem of drawing causal inferences from retrospective case-controlled studies is considered. A model for causal inference in prospective studies is applied to retrospective studies. Limitations of case-controlled studies are formulated concerning relevant parameters that can be estimated in such studies. A coffee-drinking/myocardial…

  5. Improving Inferences from Multiple Methods.

    ERIC Educational Resources Information Center

    Shotland, R. Lance; Mark, Melvin M.

    1987-01-01

    Multiple evaluation methods (MEMs) can cause an inferential challenge, although there are strategies to strengthen inferences. Practical and theoretical issues involved in the use by social scientists of MEMs, three potential problems in drawing inferences from MEMs, and short- and long-term strategies for alleviating these problems are outlined.…

  6. Causal Inference and Developmental Psychology

    ERIC Educational Resources Information Center

    Foster, E. Michael

    2010-01-01

    Causal inference is of central importance to developmental psychology. Many key questions in the field revolve around improving the lives of children and their families. These include identifying risk factors that if manipulated in some way would foster child development. Such a task inherently involves causal inference: One wants to know whether…

  7. Learning to Observe "and" Infer

    ERIC Educational Resources Information Center

    Hanuscin, Deborah L.; Park Rogers, Meredith A.

    2008-01-01

    Researchers describe the need for students to have multiple opportunities and social interaction to learn about the differences between observation and inference and their role in developing scientific explanations (Harlen 2001; Simpson 2000). Helping children develop their skills of observation and inference in science while emphasizing the…

  8. Social Inference Through Technology

    NASA Astrophysics Data System (ADS)

    Oulasvirta, Antti

    Awareness cues are computer-mediated, real-time indicators of people’s undertakings, whereabouts, and intentions. Already in the mid-1970s, UNIX users could use commands such as “finger” and “talk” to find out who was online and to chat. The small icons in instant messaging (IM) applications that indicate coconversants’ presence in the discussion space are the successors of “finger” output. Similar indicators can be found in online communities, media-sharing services, Internet relay chat (IRC), and location-based messaging applications. But presence and availability indicators are only the tip of the iceberg. Technological progress has enabled richer, more accurate, and more intimate indicators. For example, there are mobile services that allow friends to query and follow each other’s locations. Remote monitoring systems developed for health care allow relatives and doctors to assess the wellbeing of homebound patients (see, e.g., Tang and Venables 2000). But users also utilize cues that have not been deliberately designed for this purpose. For example, online gamers pay attention to other characters’ behavior to infer what the other players are like “in real life.” There is a common denominator underlying these examples: shared activities rely on the technology’s representation of the remote person. The other human being is not physically present but present only through a narrow technological channel.

  9. Inference from aging information.

    PubMed

    de Oliveira, Evaldo Araujo; Caticha, Nestor

    2010-06-01

    For many learning tasks the duration of the data collection can be greater than the time scale for changes of the underlying data distribution. The question we ask is how to include the information that data are aging. Ad hoc methods to achieve this include the use of validity windows that prevent the learning machine from making inferences based on old data. This introduces the problem of how to define the size of validity windows. In this brief, a new adaptive, Bayesian-inspired algorithm is presented for learning drifting concepts. It uses the analogy of validity windows in an adaptive Bayesian way to incorporate changes in the data distribution over time. We apply a theoretical approach based on information geometry to the classification problem and measure its performance in simulations. The uncertainty about the appropriate size of the memory windows is dealt with in a Bayesian manner by integrating over the distribution of the adaptive window size. Thus, the posterior distribution of the weights may develop algebraic tails. The learning algorithm results from tracking the mean and variance of the posterior distribution of the weights. It was found that the algebraic tails of this posterior distribution give the learning algorithm the ability to cope with an evolving environment by permitting the escape from local traps. PMID:20421181

  10. Spectral likelihood expansions for Bayesian inference

    NASA Astrophysics Data System (ADS)

    Nagel, Joseph B.; Sudret, Bruno

    2016-03-01

    A spectral approach to Bayesian inference is presented. It pursues the emulation of the posterior probability density. The starting point is a series expansion of the likelihood function in terms of orthogonal polynomials. From this spectral likelihood expansion all statistical quantities of interest can be calculated semi-analytically. The posterior is formally represented as the product of a reference density and a linear combination of polynomial basis functions. Both the model evidence and the posterior moments are related to the expansion coefficients. This formulation avoids Markov chain Monte Carlo simulation and allows one to make use of linear least squares instead. The pros and cons of spectral Bayesian inference are discussed and demonstrated on the basis of simple applications from classical statistics and inverse modeling.
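    A scalar version of the idea can be sketched quickly: with a standard Gaussian prior, the likelihood is projected onto probabilists' Hermite polynomials (orthogonal under the prior) by linear least squares, after which the evidence and posterior mean follow from the first two coefficients. The Gaussian likelihood, expansion order, and sample size are assumptions for illustration.

      # Spectral likelihood expansion for a scalar parameter: a minimal sketch.
      import numpy as np
      from numpy.polynomial.hermite_e import hermevander

      rng = np.random.default_rng(8)
      y_obs, sig = 1.2, 0.5
      like = lambda th: np.exp(-0.5 * ((y_obs - th) / sig) ** 2)

      theta = rng.normal(0.0, 1.0, 5000)     # samples from the N(0,1) prior
      V = hermevander(theta, 10)             # He_0..He_10 at the samples
      coeff, *_ = np.linalg.lstsq(V, like(theta), rcond=None)

      # Under the prior, E[He_j He_k] = k! delta_jk, so the evidence is
      # Z = E[L] = c_0 and E[theta L] = c_1; the posterior mean is c_1/c_0.
      print("evidence ~", coeff[0], "posterior mean ~", coeff[1] / coeff[0])
      # analytic check: posterior mean = y_obs / (1 + sig**2) = 0.96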

  11. The application of wavelet shrinkage denoising to magnetic Barkhausen noise measurements

    SciTech Connect

    Thomas, James

    2014-02-18

    The application of magnetic Barkhausen noise (MBN) as a non-destructive method of defect detection has proliferated throughout the manufacturing community. Instrument technology and measurement methodology have matured commensurately as applications have moved from R&D labs to the fully automated manufacturing environment. These new applications present a new set of challenges, including a bevy of error sources. A significant obstacle in many industrial applications is a decrease in signal-to-noise ratio due to (i) environmental EMI and (ii) compromises in sensor design for the purposes of automation. The stochastic nature of MBN presents a challenge to any method of noise reduction. An application of wavelet shrinkage denoising is proposed as a method of decreasing extraneous noise in MBN measurements. The method is tested and yields marked improvement on measurements subject to EMI and grounding noise, and even on measurements taken in ideal conditions.

  12. A blind detection scheme based on modified wavelet denoising algorithm for wireless optical communications

    NASA Astrophysics Data System (ADS)

    Li, Ruijie; Dang, Anhong

    2015-10-01

    This paper investigates a detection scheme without channel state information for wireless optical communication (WOC) systems in a turbulence-induced fading channel. The proposed scheme can effectively diminish the additive noise caused by background radiation and the photodetector, as well as the intensity scintillation caused by turbulence. The additive noise can be mitigated significantly using the modified wavelet threshold denoising algorithm, and the intensity scintillation can then be attenuated by exploiting the temporal correlation of the WOC channel. Moreover, to improve the performance beyond that of the maximum likelihood decision, the maximum a posteriori probability (MAP) criterion is considered. Compared with the conventional blind detection algorithm, simulation results show that the proposed detection scheme improves the signal-to-noise ratio (SNR) performance by about 4.38 dB when the bit error rate and scintillation index (SI) are 1×10^-6 and 0.02, respectively.

  13. A Self-Alignment Algorithm for SINS Based on Gravitational Apparent Motion and Sensor Data Denoising

    PubMed Central

    Liu, Yiting; Xu, Xiaosu; Liu, Xixiang; Yao, Yiqing; Wu, Liang; Sun, Jin

    2015-01-01

    Initial alignment is always a key topic and difficult to achieve in an inertial navigation system (INS). In this paper a novel self-initial alignment algorithm is proposed using gravitational apparent motion vectors at three different moments and vector-operation. Simulation and analysis showed that this method easily suffers from the random noise contained in accelerometer measurements which are used to construct apparent motion directly. Aiming to resolve this problem, an online sensor data denoising method based on a Kalman filter is proposed and a novel reconstruction method for apparent motion is designed to avoid the collinearity among vectors participating in the alignment solution. Simulation, turntable tests and vehicle tests indicate that the proposed alignment algorithm can fulfill initial alignment of strapdown INS (SINS) under both static and swinging conditions. The accuracy can either reach or approach the theoretical values determined by sensor precision under static or swinging conditions. PMID:25923932

  14. The EM Method in a Probabilistic Wavelet-Based MRI Denoising.

    PubMed

    Martin-Fernandez, Marcos; Villullas, Sergio

    2015-01-01

    Human body heat emission and other external causes can interfere with magnetic resonance image acquisition and produce noise. In this kind of image, the noise, when no signal is present, is Rayleigh distributed, and its wavelet coefficients can be approximately modeled by a Gaussian distribution; noiseless magnetic resonance images can be modeled by a Laplacian distribution in the wavelet domain. This paper proposes a new magnetic resonance image denoising method based on these properties. The method performs shrinkage of wavelet coefficients based on the conditional probability of their being noise or detail. The parameters involved in this filtering approach are calculated by means of the expectation-maximization (EM) method, which avoids the need for an estimator of the noise variance. The efficiency of the proposed filter is studied and compared with other important filtering techniques, such as Nowak's, Donoho-Johnstone's, Awate-Whitaker's, and nonlocal means filters, on different 2D and 3D images. PMID:26089959

  15. Implemented Wavelet Packet Tree based Denoising Algorithm in Bus Signals of a Wearable Sensorarray

    NASA Astrophysics Data System (ADS)

    Schimmack, M.; Nguyen, S.; Mercorelli, P.

    2015-11-01

    This paper introduces a thermosensing embedded system with a sensor bus that uses wavelets for noise location and denoising. Following the filter-bank principle, the measured signal is separated into two bands, low and high frequency, and the proposed algorithm identifies the defined noise in these two bands. With the wavelet packet transform as a method of the discrete wavelet transform, it is able to decompose and reconstruct bus input signals of a sensor network. Using a seminorm, the noise of a sequence can be detected and located, so that the wavelet basis can be rearranged. This in particular allows for the elimination of any incoherent parts that make up the unavoidable measurement noise of the bus signals. The proposed method was built on wavelet algorithms from the WaveLab 850 library of Stanford University (USA). This work gives insight into the workings of wavelet transformation.

  17. Adaptive low-rank approximation and denoised Monte Carlo approach for high-dimensional Lindblad equations

    NASA Astrophysics Data System (ADS)

    Le Bris, C.; Rouchon, P.; Roussel, J.

    2015-12-01

    We present a twofold contribution to the numerical simulation of Lindblad equations. First, an adaptive numerical approach to approximating Lindblad equations by low-rank dynamics is described: a deterministic low-rank approximation of the density operator is computed, and its rank is adjusted dynamically using an on-the-fly estimator of the error committed when reducing the dimension. Second, when the intrinsic dimension of the Lindblad equation is too high to allow for such a deterministic approximation, we combine classical ensemble averages of quantum Monte Carlo trajectories with a denoising technique. Specifically, a variance reduction method is developed that uses a low-rank dynamics as a control variate. Numerical tests on quantum collapse and revivals show the efficiency of each approach, along with the complementarity of the two approaches.
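
    The control-variate idea behind the second contribution can be shown on a toy scalar expectation (not the quantum-trajectory setting of the paper): a correlated quantity with known mean is subtracted from the Monte Carlo samples to cancel part of their variance. The target E[exp(X)] with X standard normal is a purely illustrative assumption.

      import numpy as np

      rng = np.random.default_rng(0)
      x = rng.standard_normal(200_000)
      f = np.exp(x)                       # target: E[exp(X)] = exp(0.5)
      g = x                               # control variate with known mean 0
      c = np.cov(f, g)[0, 1] / np.var(g)  # estimated optimal coefficient
      plain = f.mean()                    # plain Monte Carlo estimate
      cv = (f - c * (g - 0.0)).mean()     # control-variate estimate, lower variance
      print(plain, cv, np.exp(0.5))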

  18. A Self-Alignment Algorithm for SINS Based on Gravitational Apparent Motion and Sensor Data Denoising.

    PubMed

    Liu, Yiting; Xu, Xiaosu; Liu, Xixiang; Yao, Yiqing; Wu, Liang; Sun, Jin

    2015-01-01

    Initial alignment is a key task, and a difficult one, in an inertial navigation system (INS). In this paper a novel self-alignment algorithm is proposed that uses gravitational apparent motion vectors at three different moments together with vector operations. Simulation and analysis show that this method is easily degraded by the random noise in the accelerometer measurements that are used to construct the apparent motion directly. To resolve this problem, an online sensor data denoising method based on a Kalman filter is proposed, and a novel reconstruction method for the apparent motion is designed to avoid collinearity among the vectors participating in the alignment solution. Simulations, turntable tests and vehicle tests indicate that the proposed algorithm can accomplish the initial alignment of a strapdown INS (SINS) under both static and swinging conditions, with accuracy that reaches or approaches the theoretical values determined by sensor precision. PMID:25923932
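
    The abstract does not give the filter design, but an online Kalman denoiser for a slowly varying accelerometer channel is commonly built on a random-walk state model. The sketch below is such a minimal scalar filter; the process and measurement variances q and r are illustrative placeholders that would be tuned to the sensor, not values from the paper.

      import numpy as np

      def kalman_denoise(z, q=1e-5, r=1e-2):
          # State model: x_k = x_{k-1} + w_k (variance q);
          # measurement:  z_k = x_k + v_k (variance r).
          x, p = float(z[0]), 1.0
          out = np.empty(len(z))
          for k, zk in enumerate(z):
              p = p + q             # predict step inflates the uncertainty
              g = p / (p + r)       # Kalman gain
              x = x + g * (zk - x)  # correct with the innovation
              p = (1.0 - g) * p
              out[k] = x
          return out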

  19. MMW and THz images denoising based on adaptive CBM3D

    NASA Astrophysics Data System (ADS)

    Dai, Li; Zhang, Yousai; Li, Yuanjiang; Wang, Haoxiang

    2014-04-01

    Over the past decades, millimeter wave and terahertz radiation have received much interest, as advances in emission and detection technologies have allowed the wide application of millimeter wave and terahertz imaging. This paper focuses on the problem that such images suffer from stripe noise, blocking artifacts and other interference. A new kind of nonlocal averaging method is put forward: Gaussian noise of a suitable level is added to resonate with the image, and adaptive color block-matching 3D filtering (CBM3D) is used to denoise. Experimental results demonstrate that the method improves the visual quality and removes interference at the same time, making image analysis and target detection easier.

  20. Using wavelet denoising and mathematical morphology in the segmentation technique applied to blood cells images.

    PubMed

    Boix, Macarena; Cantó, Begoña

    2013-04-01

    Accurate image segmentation is used in medical diagnosis since this technique is a noninvasive pre-processing step for biomedical treatment. In this work we present an efficient segmentation method for medical image analysis; in particular, blood cells can be segmented with it. To that end, we combine the wavelet transform with morphological operations, and wavelet thresholding is used to eliminate the noise and prepare the image for suitable segmentation. In the wavelet denoising step we determine the best wavelet as the one whose segmentation yields the largest area in the cell. We study different wavelet families and conclude that the wavelet db1 is the best, and that it can serve for later work on blood pathologies. The proposed method generates good results when applied to several images. Finally, the proposed algorithm, implemented in the MATLAB environment, is verified on selected blood cell images. PMID:23458301
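
    A minimal sketch of such a pipeline, assuming scikit-image and SciPy are available: db1 wavelet shrinkage followed by morphological opening, hole filling and labeling. The global mean threshold stands in for the paper's (unspecified) segmentation rule and is an assumption of this example.

      import numpy as np
      from scipy import ndimage
      from skimage.restoration import denoise_wavelet

      def segment_blood_cells(image):
          # Wavelet shrinkage with the db1 basis favoured by the paper.
          den = denoise_wavelet(image, wavelet="db1", mode="soft")
          mask = den > den.mean()                            # crude foreground rule
          mask = ndimage.binary_opening(mask, iterations=2)  # remove speckle islands
          mask = ndimage.binary_fill_holes(mask)             # close cell interiors
          labels, n_cells = ndimage.label(mask)              # one label per cell
          return labels, n_cells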

  1. Geometric moment based nonlocal-means filter for ultrasound image denoising

    NASA Astrophysics Data System (ADS)

    Dou, Yangchao; Zhang, Xuming; Ding, Mingyue; Chen, Yimin

    2011-06-01

    Speckle noise is inevitable in ultrasound images, and despeckling is an important processing step. The original nonlocal means (NLM) filter can remove speckle noise and protect texture information effectively when the image corruption is relatively low, but when the noise is strong, NLM produces fictitious texture that degrades its denoising performance. In this paper, a novel NLM filter is proposed that introduces geometric moments into the patch comparison. Although geometric moments are not orthogonal moments, they are popular for their simplicity, and their restoration ability had not previously been demonstrated. Results on synthetic data and real ultrasound images show that the proposed method achieves better despeckling performance than other state-of-the-art methods.
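
    For reference, the baseline NLM computation that the paper modifies looks as follows; the geometric-moment variant would replace the squared patch difference with a distance between patch moment vectors. This brute-force sketch assumes a small grayscale image and illustrative parameter values.

      import numpy as np

      def nlm_denoise(img, patch=3, search=7, h=0.1):
          # Each pixel becomes a similarity-weighted average over a search window,
          # with weights from mean squared patch differences.
          pad, s = patch // 2, search // 2
          p = np.pad(img, pad + s, mode="reflect")
          out = np.zeros_like(img, dtype=float)
          for i in range(img.shape[0]):
              for j in range(img.shape[1]):
                  ci, cj = i + pad + s, j + pad + s
                  ref = p[ci - pad:ci + pad + 1, cj - pad:cj + pad + 1]
                  wsum = acc = 0.0
                  for di in range(-s, s + 1):
                      for dj in range(-s, s + 1):
                          cand = p[ci + di - pad:ci + di + pad + 1,
                                   cj + dj - pad:cj + dj + pad + 1]
                          w = np.exp(-np.mean((ref - cand) ** 2) / (h * h))
                          wsum += w
                          acc += w * p[ci + di, cj + dj]
                  out[i, j] = acc / wsum
          return out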

  2. Identifiability and inference of pathway motifs by epistasis analysis.

    PubMed

    Phenix, Hilary; Perkins, Theodore; Kærn, Mads

    2013-06-01

    The accuracy of genetic network inference is limited by the assumptions used to determine if one hypothetical model is better than another in explaining experimental observations. Most previous work on epistasis analysis-in which one attempts to infer pathway relationships by determining equivalences among traits following mutations-has been based on Boolean or linear models. Here, we delineate the ultimate limits of epistasis-based inference by systematically surveying all two-gene network motifs and use symbolic algebra with arbitrary regulation functions to examine trait equivalences. Our analysis divides the motifs into equivalence classes, where different genetic perturbations result in indistinguishable experimental outcomes. We demonstrate that this partitioning can reveal important information about network architecture, and show, using simulated data, that it greatly improves the accuracy of genetic network inference methods. Because of the minimal assumptions involved, equivalence partitioning has broad applicability for gene network inference. PMID:23822501

  3. Identifiability and inference of pathway motifs by epistasis analysis

    NASA Astrophysics Data System (ADS)

    Phenix, Hilary; Perkins, Theodore; Kærn, Mads

    2013-06-01

    The accuracy of genetic network inference is limited by the assumptions used to determine if one hypothetical model is better than another in explaining experimental observations. Most previous work on epistasis analysis—in which one attempts to infer pathway relationships by determining equivalences among traits following mutations—has been based on Boolean or linear models. Here, we delineate the ultimate limits of epistasis-based inference by systematically surveying all two-gene network motifs and use symbolic algebra with arbitrary regulation functions to examine trait equivalences. Our analysis divides the motifs into equivalence classes, where different genetic perturbations result in indistinguishable experimental outcomes. We demonstrate that this partitioning can reveal important information about network architecture, and show, using simulated data, that it greatly improves the accuracy of genetic network inference methods. Because of the minimal assumptions involved, equivalence partitioning has broad applicability for gene network inference.

  4. Bootstrapped DEPICT for error estimation in PET functional imaging.

    PubMed

    Kukreja, Sunil L; Gunn, Roger N

    2004-03-01

    Basis pursuit denoising is a new approach for data-driven estimation of parametric images from dynamic positron emission tomography (PET) data. At present, this kinetic modeling technique does not allow for the estimation of the errors on the parameters. These estimates are useful when performing subsequent statistical analysis, such as inference across a group of subjects, or when applying partial volume correction algorithms. The difficulty with calculating the error estimates is a consequence of using an overcomplete dictionary of kinetic basis functions. In this paper, a bootstrap approach for the estimation of parameter errors from dynamic PET data is presented. We show that the bootstrap can be used successfully to compute parameter errors on a region-of-interest or parametric-image basis. Validation studies evaluate the method's performance on simulated and measured PET data ([(11)C]Diprenorphine-opiate receptor and [(11)C]Raclopride-dopamine D(2) receptor). The method is presented in the context of PET neuroreceptor binding studies; however, it has general applicability to a wide range of PET/SPET radiotracers in neurology, oncology and cardiology. PMID:15006677
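
    A minimal residual-bootstrap sketch in the spirit of this approach: fit a kinetic curve, resample the residuals, refit, and take the spread of the refitted parameters as the error estimate. The single-exponential model and SciPy's curve_fit are stand-ins; the paper's basis pursuit machinery is not reproduced here.

      import numpy as np
      from scipy.optimize import curve_fit

      def model(t, a, b):
          return a * np.exp(-b * t)  # stand-in kinetic curve, not DEPICT's basis

      def bootstrap_errors(t, y, n_boot=500, seed=0):
          rng = np.random.default_rng(seed)
          theta, _ = curve_fit(model, t, y, p0=(y.max(), 0.1))
          resid = y - model(t, *theta)
          reps = [curve_fit(model, t,
                            model(t, *theta) + rng.choice(resid, resid.size),
                            p0=theta)[0]
                  for _ in range(n_boot)]
          # Point estimates and their bootstrap standard errors.
          return theta, np.std(reps, axis=0)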

  5. Ensemble Inference and Inferability of Gene Regulatory Networks

    PubMed Central

    Ud-Dean, S. M. Minhaz; Gunawan, Rudiyanto

    2014-01-01

    The inference of gene regulatory network (GRN) from gene expression data is an unsolved problem of great importance. This inference has been stated, though not proven, to be underdetermined implying that there could be many equivalent (indistinguishable) solutions. Motivated by this fundamental limitation, we have developed new framework and algorithm, called TRaCE, for the ensemble inference of GRNs. The ensemble corresponds to the inherent uncertainty associated with discriminating direct and indirect gene regulations from steady-state data of gene knock-out (KO) experiments. We applied TRaCE to analyze the inferability of random GRNs and the GRNs of E. coli and yeast from single- and double-gene KO experiments. The results showed that, with the exception of networks with very few edges, GRNs are typically not inferable even when the data are ideal (unbiased and noise-free). Finally, we compared the performance of TRaCE with top performing methods of DREAM4 in silico network inference challenge. PMID:25093509

  6. Design Methodology of a New Wavelet Basis Function for Fetal Phonocardiographic Signals

    PubMed Central

    Chourasia, Vijay S.; Tiwari, Anil Kumar

    2013-01-01

    Fetal phonocardiography (fPCG) based antenatal care system is economical and has a potential to use for long-term monitoring due to noninvasive nature of the system. The main limitation of this technique is that noise gets superimposed on the useful signal during its acquisition and transmission. Conventional filtering may result into loss of valuable diagnostic information from these signals. This calls for a robust, versatile, and adaptable denoising method applicable in different operative circumstances. In this work, a novel algorithm based on wavelet transform has been developed for denoising of fPCG signals. Successful implementation of wavelet theory in denoising is heavily dependent on selection of suitable wavelet basis function. This work introduces a new mother wavelet basis function for denoising of fPCG signals. The performance of newly developed wavelet is found to be better when compared with the existing wavelets. For this purpose, a two-channel filter bank, based on characteristics of fPCG signal, is designed. The resultant denoised fPCG signals retain the important diagnostic information contained in the original fPCG signal. PMID:23766693

  7. Retinal optical coherence tomography image enhancement via shrinkage denoising using double-density dual-tree complex wavelet transform

    NASA Astrophysics Data System (ADS)

    Chitchian, Shahab; Mayer, Markus A.; Boretsky, Adam R.; van Kuijk, Frederik J.; Motamedi, Massoud

    2012-11-01

    Image enhancement of retinal structures in optical coherence tomography (OCT) scans through denoising has the potential to aid in the diagnosis of several eye diseases. In this paper, a locally adaptive denoising algorithm using the double-density dual-tree complex wavelet transform, a combination of the double-density wavelet transform and the dual-tree complex wavelet transform, is applied to reduce speckle noise in OCT images of the retina. The algorithm overcomes the main limitation of the commonly used multiple-frame averaging technique, namely the limited number of frames that can be recorded due to eye movements, by providing comparable image quality in an order of magnitude less acquisition time than the averaging method. In addition, improvements in image quality metrics and a 5 dB increase in signal-to-noise ratio are attained.

  8. Retinal optical coherence tomography image enhancement via shrinkage denoising using double-density dual-tree complex wavelet transform

    PubMed Central

    Mayer, Markus A.; Boretsky, Adam R.; van Kuijk, Frederik J.; Motamedi, Massoud

    2012-01-01

    Image enhancement of retinal structures in optical coherence tomography (OCT) scans through denoising has the potential to aid in the diagnosis of several eye diseases. In this paper, a locally adaptive denoising algorithm using the double-density dual-tree complex wavelet transform, a combination of the double-density wavelet transform and the dual-tree complex wavelet transform, is applied to reduce speckle noise in OCT images of the retina. The algorithm overcomes the main limitation of the commonly used multiple-frame averaging technique, namely the limited number of frames that can be recorded due to eye movements, by providing comparable image quality in an order of magnitude less acquisition time than the averaging method. In addition, improvements in image quality metrics and a 5 dB increase in signal-to-noise ratio are attained. PMID:23117804

  9. Retinal optical coherence tomography image enhancement via shrinkage denoising using double-density dual-tree complex wavelet transform.

    PubMed

    Chitchian, Shahab; Mayer, Markus A; Boretsky, Adam R; van Kuijk, Frederik J; Motamedi, Massoud

    2012-11-01

    Image enhancement of retinal structures in optical coherence tomography (OCT) scans through denoising has the potential to aid in the diagnosis of several eye diseases. In this paper, a locally adaptive denoising algorithm using the double-density dual-tree complex wavelet transform, a combination of the double-density wavelet transform and the dual-tree complex wavelet transform, is applied to reduce speckle noise in OCT images of the retina. The algorithm overcomes the main limitation of the commonly used multiple-frame averaging technique, namely the limited number of frames that can be recorded due to eye movements, by providing comparable image quality in an order of magnitude less acquisition time than the averaging method. In addition, improvements in image quality metrics and a 5 dB increase in signal-to-noise ratio are attained. PMID:23117804

  10. The empirical accuracy of uncertain inference models

    NASA Technical Reports Server (NTRS)

    Vaughan, David S.; Yadrick, Robert M.; Perrin, Bruce M.; Wise, Ben P.

    1987-01-01

    Uncertainty is a pervasive feature of the domains in which expert systems are designed to function. Research designed to test uncertain inference methods for accuracy and robustness, in accordance with standard engineering practice, is reviewed. Several studies were conducted to assess how well various methods perform on problems constructed so that correct answers are known, and to find out which underlying features of a problem cause strong or weak performance. For each method studied, situations were identified in which performance deteriorates dramatically. Over a broad range of problems, some well-known methods do only about as well as a simple linear regression model, and often much worse than a simple independence probability model. The results indicate that some commercially available expert system shells should be used with caution, because the uncertain inference models that they implement can yield rather inaccurate results.

  11. Bayesian Inference: with ecological applications

    USGS Publications Warehouse

    Link, William A.; Barker, Richard J.

    2010-01-01

    This text provides a mathematically rigorous yet accessible and engaging introduction to Bayesian inference, with relevant examples that will be of interest to biologists working in the fields of ecology, wildlife management and environmental studies, as well as students of advanced undergraduate statistics. It opens the door to Bayesian inference, taking advantage of modern computational efficiencies and easily accessible software to evaluate complex hierarchical models.

  12. Pathway network inference from gene expression data

    PubMed Central

    2014-01-01

    Background The development of high-throughput omics technologies enabled genome-wide measurements of the activity of cellular elements and provides the analytical resources for the progress of the Systems Biology discipline. Analysis and interpretation of gene expression data have evolved from the gene to the pathway and interaction level, i.e. from the detection of differentially expressed genes to the establishment of gene interaction networks and the identification of enriched functional categories. Still, the understanding of biological systems requires a further level of analysis that addresses the characterization of the interaction between functional modules. Results We present a novel computational methodology to study the functional interconnections among the molecular elements of a biological system. The PANA approach uses high-throughput genomics measurements and a functional annotation scheme to extract an activity profile from each functional block (or pathway), followed by machine-learning methods to infer the relationships between these functional profiles. The result is a global, interconnected network of pathways that represents the functional cross-talk within the molecular system. We have applied this approach to describe the functional transcriptional connections during the yeast cell cycle and to identify pathways that change their connectivity in a disease condition, using an Alzheimer example. Conclusions PANA is a useful tool for deepening our understanding of the functional interdependences that operate within complex biological systems. We show the approach is algorithmically consistent and that the inferred network is well supported by the available functional data. The method allows the dissection of the molecular basis of the functional connections, and we describe the different regulatory mechanisms that explain the network's topology obtained for the yeast cell cycle data. PMID:25032889

  13. De-noising and retrieving algorithm of Mie lidar data based on the particle filter and the Fernald method.

    PubMed

    Li, Chen; Pan, Zengxin; Mao, Feiyue; Gong, Wei; Chen, Shihua; Min, Qilong

    2015-10-01

    The signal-to-noise ratio (SNR) of an atmospheric lidar decreases rapidly with range, so maintaining high retrieval accuracy at the far end is difficult. To address this, many de-noising algorithms have been developed; in particular, an effective algorithm has been proposed that simultaneously retrieves lidar data and obtains a de-noised signal by combining the ensemble Kalman filter (EnKF) and the Fernald method. That algorithm enhances the retrieval accuracy and effective measurement range of a lidar relative to the Fernald method, but sometimes introduces a shift (bias) in the near range as a result of over-smoothing by the EnKF. This study proposes a new scheme that avoids this phenomenon by using a particle filter (PF) instead of the EnKF in the de-noising algorithm. Synthetic experiments show that the PF performs better than the EnKF and Fernald methods: the root mean square error of the PF is 52.55% and 38.14% of that of the Fernald and EnKF methods, respectively, and the PF increases the SNR by 44.36% and 11.57% over the Fernald and EnKF methods, respectively. In experiments with real signals, the relative bias of the EnKF in the near range is 5.72%, which the PF reduces to 2.15%. Furthermore, the PF also significantly suppresses random noise in the far range. An extensive application of the PF method can be useful in determining the local and global properties of aerosols. PMID:26480164
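
    A bootstrap particle filter for the simplest version of this setup, a random-walk signal observed in Gaussian noise, can be written in a few lines. The state model, noise variances and resampling scheme below are illustrative assumptions, not the paper's lidar-specific formulation (which couples the filter to the Fernald retrieval).

      import numpy as np

      def pf_denoise(z, n=500, q=0.01, r=0.1, seed=0):
          rng = np.random.default_rng(seed)
          particles = z[0] + rng.normal(0.0, np.sqrt(r), n)
          est = np.empty(len(z))
          for k, zk in enumerate(z):
              particles += rng.normal(0.0, np.sqrt(q), n)   # propagate the state
              w = np.exp(-0.5 * (zk - particles) ** 2 / r)  # Gaussian likelihood
              w /= w.sum()
              est[k] = w @ particles                        # posterior-mean estimate
              particles = particles[rng.choice(n, n, p=w)]  # multinomial resampling
          return est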

  14. Active inference, communication and hermeneutics.

    PubMed

    Friston, Karl J; Frith, Christopher D

    2015-07-01

    Hermeneutics refers to interpretation and translation of text (typically ancient scriptures) but also applies to verbal and non-verbal communication. In a psychological setting it nicely frames the problem of inferring the intended content of a communication. In this paper, we offer a solution to the problem of neural hermeneutics based upon active inference. In active inference, action fulfils predictions about how we will behave (e.g., predicting we will speak). Crucially, these predictions can be used to predict both self and others--during speaking and listening respectively. Active inference mandates the suppression of prediction errors by updating an internal model that generates predictions--both at fast timescales (through perceptual inference) and slower timescales (through perceptual learning). If two agents adopt the same model, then--in principle--they can predict each other and minimise their mutual prediction errors. Heuristically, this ensures they are singing from the same hymn sheet. This paper builds upon recent work on active inference and communication to illustrate perceptual learning using simulated birdsongs. Our focus here is the neural hermeneutics implicit in learning, where communication facilitates long-term changes in generative models that are trying to predict each other. In other words, communication induces perceptual learning and enables others to (literally) change our minds and vice versa. PMID:25957007

  15. Causal inference and developmental psychology.

    PubMed

    Foster, E Michael

    2010-11-01

    Causal inference is of central importance to developmental psychology. Many key questions in the field revolve around improving the lives of children and their families. These include identifying risk factors that if manipulated in some way would foster child development. Such a task inherently involves causal inference: One wants to know whether the risk factor actually causes outcomes. Random assignment is not possible in many instances, and for that reason, psychologists must rely on observational studies. Such studies identify associations, and causal interpretation of such associations requires additional assumptions. Research in developmental psychology generally has relied on various forms of linear regression, but this methodology has limitations for causal inference. Fortunately, methodological developments in various fields are providing new tools for causal inference-tools that rely on more plausible assumptions. This article describes the limitations of regression for causal inference and describes how new tools might offer better causal inference. This discussion highlights the importance of properly identifying covariates to include (and exclude) from the analysis. This discussion considers the directed acyclic graph for use in accomplishing this task. With the proper covariates having been chosen, many of the available methods rely on the assumption of "ignorability." The article discusses the meaning of ignorability and considers alternatives to this assumption, such as instrumental variables estimation. Finally, the article considers the use of the tools discussed in the context of a specific research question, the effect of family structure on child development. PMID:20677855

  16. Active inference, communication and hermeneutics☆

    PubMed Central

    Friston, Karl J.; Frith, Christopher D.

    2015-01-01

    Hermeneutics refers to interpretation and translation of text (typically ancient scriptures) but also applies to verbal and non-verbal communication. In a psychological setting it nicely frames the problem of inferring the intended content of a communication. In this paper, we offer a solution to the problem of neural hermeneutics based upon active inference. In active inference, action fulfils predictions about how we will behave (e.g., predicting we will speak). Crucially, these predictions can be used to predict both self and others – during speaking and listening respectively. Active inference mandates the suppression of prediction errors by updating an internal model that generates predictions – both at fast timescales (through perceptual inference) and slower timescales (through perceptual learning). If two agents adopt the same model, then – in principle – they can predict each other and minimise their mutual prediction errors. Heuristically, this ensures they are singing from the same hymn sheet. This paper builds upon recent work on active inference and communication to illustrate perceptual learning using simulated birdsongs. Our focus here is the neural hermeneutics implicit in learning, where communication facilitates long-term changes in generative models that are trying to predict each other. In other words, communication induces perceptual learning and enables others to (literally) change our minds and vice versa. PMID:25957007

  17. Edge preserved enhancement of medical images using adaptive fusion-based denoising by shearlet transform and total variation algorithm

    NASA Astrophysics Data System (ADS)

    Gupta, Deep; Anand, Radhey Shyam; Tyagi, Barjeev

    2013-10-01

    Edge-preserved enhancement is of great interest in medical images. Noise present in medical images affects the quality, contrast resolution and, most importantly, texture information, and can also make post-processing difficult. An enhancement approach using an adaptive fusion algorithm is proposed that utilizes the features of the shearlet transform (ST) and the total variation (TV) approach. In the proposed method, three differently denoised images are fused adaptively: one processed with the TV method, one with shearlet denoising, and one carrying edge information recovered from the remnant of the TV method and processed with the ST. The resulting enhanced images help to improve the visibility and detectability of medical images. For the proposed method, different weights are evaluated from the variance maps of the individual denoised images and the edge information extracted from the remnant of the TV approach. The performance of the proposed method is evaluated by conducting various experiments on both standard images and different medical images such as computed tomography, magnetic resonance, and ultrasound. Experiments show that the proposed method provides an improvement not only in noise reduction but also in the preservation of more edges and image details compared to the others.
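
    A much-simplified two-branch sketch of the fusion idea (with no shearlet branch): TV denoising splits the image into a smooth part and a remnant, and a local-variance map of the remnant weights how much edge detail is re-injected. The scikit-image TV filter, window size and weighting rule are assumptions of this example, not the paper's three-branch scheme.

      import numpy as np
      from scipy import ndimage
      from skimage.restoration import denoise_tv_chambolle

      def fused_enhance(img, weight=0.1, win=7):
          tv = denoise_tv_chambolle(img, weight=weight)  # smooth branch
          remnant = img - tv                             # noise plus edges lost by TV
          m1 = ndimage.uniform_filter(remnant, win)
          m2 = ndimage.uniform_filter(remnant ** 2, win)
          local_var = np.clip(m2 - m1 ** 2, 0, None)     # local variance map
          w = local_var / (local_var.max() + 1e-12)      # high where edges survive
          return tv + w * remnant                        # adaptive edge re-injection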

  18. Denoising of B1+ field maps for noise-robust image reconstruction in electrical properties tomography

    SciTech Connect

    Michel, Eric; Hernandez, Daniel; Cho, Min Hyoung; Lee, Soo Yeol

    2014-10-15

    Purpose: To validate the use of adaptive nonlinear filters in reconstructing conductivity and permittivity images from the noisy B1+ maps in electrical properties tomography (EPT). Methods: In EPT, electrical property images are computed by taking the Laplacian of the B1+ maps. To mitigate the noise amplification in computing the Laplacian, the authors applied adaptive nonlinear denoising filters to the measured complex B1+ maps. After the denoising process, they computed the Laplacian by central differences. They performed EPT experiments on phantoms and a human brain at 3 T along with corresponding EPT simulations on finite-difference time-domain models. They evaluated the EPT images by comparing them with the ones obtained by previous EPT reconstruction methods. Results: In both the EPT simulations and experiments, the nonlinear filtering greatly improved the EPT image quality when evaluated in terms of the mean and standard deviation of the electrical property values at the regions of interest. The proposed method also improved the overall similarity between the reconstructed conductivity images and the true shapes of the conductivity distribution. Conclusions: The nonlinear denoising enabled us to obtain better-quality EPT images of the phantoms and the human brain at 3 T.
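
    After denoising, the EPT reconstruction itself is a pointwise formula on the Laplacian of the complex B1+ map: under the standard homogeneous-Helmholtz assumption, sigma = Im(lap(B)/B) / (mu0 * omega). The sketch below computes the Laplacian by central differences, as in the paper, though the 2-D grid and wrap-around boundary handling are simplifications assumed for the example.

      import numpy as np

      MU0 = 4e-7 * np.pi  # vacuum permeability

      def ept_conductivity(b1, dx, omega):
          # Central-difference Laplacian of the (denoised) complex B1+ map.
          lap = (np.roll(b1, 1, 0) + np.roll(b1, -1, 0) +
                 np.roll(b1, 1, 1) + np.roll(b1, -1, 1) - 4.0 * b1) / dx ** 2
          # Homogeneous-Helmholtz EPT: lap(B)/B = -mu0*w^2*eps + i*mu0*w*sigma.
          return np.imag(lap / b1) / (MU0 * omega)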

  19. Light field image denoising using a linear 4D frequency-hyperfan all-in-focus filter

    NASA Astrophysics Data System (ADS)

    Dansereau, Donald G.; Bongiorno, Daniel L.; Pizarro, Oscar; Williams, Stefan B.

    2013-02-01

    Imaging in low light is problematic as sensor noise can dominate the imagery, and increasing the illumination or aperture size is not always effective or practical. Computational photography offers a promising solution in the form of the light field camera, which by capturing redundant information offers an opportunity for elegant noise rejection. We show that the light field of a Lambertian scene has a 4D hyperfan-shaped frequency-domain region of support at the intersection of a dual-fan and a hypercone. By designing and implementing a filter with an appropriately shaped passband, we accomplish denoising with a single all-in-focus linear filter. Drawing examples from the Stanford Light Field Archive and images captured using a commercially available lenselet-based plenoptic camera, we demonstrate that the hyperfan outperforms competing methods including synthetic focus, fan-shaped antialiasing filters, and a range of modern nonlinear image and video denoising techniques. We show the hyperfan preserves depth of field, making it a single-step all-in-focus denoising filter suitable for general-purpose light field rendering. We include results for different noise types and levels, over a variety of metrics, and in real-world scenarios. Finally, we show that the hyperfan's performance scales with aperture count.

  20. Inferring the temperature dependence of population parameters: the effects of experimental design and inference algorithm.

    PubMed

    Palamara, Gian Marco; Childs, Dylan Z; Clements, Christopher F; Petchey, Owen L; Plebani, Marco; Smith, Matthew J

    2014-12-01

    Understanding and quantifying the temperature dependence of population parameters, such as intrinsic growth rate and carrying capacity, is critical for predicting the ecological responses to environmental change. Many studies provide empirical estimates of such temperature dependencies, but a thorough investigation of the methods used to infer them has not been performed yet. We created artificial population time series using a stochastic logistic model parameterized with the Arrhenius equation, so that activation energy drives the temperature dependence of population parameters. We simulated different experimental designs and used different inference methods, varying the likelihood functions and other aspects of the parameter estimation methods. Finally, we applied the best performing inference methods to real data for the species Paramecium caudatum. The relative error of the estimates of activation energy varied between 5% and 30%. The fraction of habitat sampled played the most important role in determining the relative error; sampling at least 1% of the habitat kept it below 50%. We found that methods that simultaneously use all time series data (direct methods) and methods that estimate population parameters separately for each temperature (indirect methods) are complementary. Indirect methods provide a clearer insight into the shape of the functional form describing the temperature dependence of population parameters; direct methods enable a more accurate estimation of the parameters of such functional forms. Using both methods, we found that growth rate and carrying capacity of Paramecium caudatum scale with temperature according to different activation energies. Our study shows how careful choice of experimental design and inference methods can increase the accuracy of the inferred relationships between temperature and population parameters. The comparison of estimation methods provided here can increase the accuracy of model predictions, with important

  1. Physics of Inference

    NASA Astrophysics Data System (ADS)

    Toroczkai, Zoltan

    Jaynes's maximum entropy method provides a family of principled models that allow the prediction of a system's properties as constrained by empirical data (observables). However, their use is often hindered by the degeneracy problem characterized by spontaneous symmetry breaking, where predictions fail. Here we show that degeneracy appears when the corresponding density of states function is not log-concave, which is typically the consequence of nonlinear relationships between the constraining observables. We illustrate this phenomenon on several examples, including from complex networks, combinatorics and classical spin systems (e.g., Blume-Emery-Griffiths lattice-spin models). Exploiting these nonlinear relationships we then propose a solution to the degeneracy problem for a large class of systems via transformations that render the density of states function log-concave. The effectiveness of the method is demonstrated on real-world network data. Finally, we discuss the implications of these findings on the relationship between the geometrical properties of the density of states function and phase transitions in spin systems. Supported in part by Grant No. FA9550-12-1-0405 from AFOSR/DARPA and by Grant No. HDTRA 1-09-1-0039 from DTRA.

  2. Optimal inference with suboptimal models: Addiction and active Bayesian inference

    PubMed Central

    Schwartenbeck, Philipp; FitzGerald, Thomas H.B.; Mathys, Christoph; Dolan, Ray; Wurst, Friedrich; Kronbichler, Martin; Friston, Karl

    2015-01-01

    When casting behaviour as active (Bayesian) inference, optimal inference is defined with respect to an agent’s beliefs – based on its generative model of the world. This contrasts with normative accounts of choice behaviour, in which optimal actions are considered in relation to the true structure of the environment – as opposed to the agent’s beliefs about worldly states (or the task). This distinction shifts an understanding of suboptimal or pathological behaviour away from aberrant inference as such, to understanding the prior beliefs of a subject that cause them to behave less ‘optimally’ than our prior beliefs suggest they should behave. Put simply, suboptimal or pathological behaviour does not speak against understanding behaviour in terms of (Bayes optimal) inference, but rather calls for a more refined understanding of the subject’s generative model upon which their (optimal) Bayesian inference is based. Here, we discuss this fundamental distinction and its implications for understanding optimality, bounded rationality and pathological (choice) behaviour. We illustrate our argument using addictive choice behaviour in a recently described ‘limited offer’ task. Our simulations of pathological choices and addictive behaviour also generate some clear hypotheses, which we hope to pursue in ongoing empirical work. PMID:25561321

  3. Use of Empirical Mode Decomposition based Denoised NDVI in Extended Three-Temperature Model to estimate Evapotranspiration in Northeast Indian Ecosystems

    NASA Astrophysics Data System (ADS)

    Padhee, S. K.

    2015-12-01

    Evapotranspiration (ET) is an essential component of the energy balance and of water budgeting methods, and its precise assessment is crucial for estimating various hydrological parameters. Traditional point-estimation methods for ET offer quantitative analysis but lack spatial distribution. The use of Remote Sensing (RS) data with good spatial, spectral and temporal resolution and broad spatial coverage gives such estimates some advantages; however, approaches that require a data-rich environment demand time and resources. Estimating spatially distributed soil evaporation (Es) and transpiration from the canopy (Ec) with RS data, then combining them to give the total ET, is a simpler approach for accurate macro-scale estimates of the ET flux. The 'Extended Three Temperature Model' (Extended 3T Model) is an established model based on this approach, and it computes ET and its partition into Es and Ec within the same algorithm. A case study was conducted using the Extended 3T Model and MODIS products for the Brahmaputra river basin within Northeast India for the years 2000-2010. The model requires the land surface temperature (Ts), which was separated into the surface temperature of dry soil (Tsm) and the surface temperature of vegetation (Tcm) as decided by the fractional vegetation cover (f), a derivative of the vegetation index (NDVI). The NDVI time series, which is nonlinear and nonstationary, can be decomposed by the Empirical Mode Decomposition (EMD) into components called intrinsic mode functions (IMFs), based on inherent temporal scales. The highest-frequency component, which was found to represent noise, was subtracted from the original NDVI series to obtain the denoised product from which f was derived. The separated land surface temperatures (Tsm and Tcm) were used to calculate Es and Ec, followed by estimation of the total ET. The spatiotemporal
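
    The denoising step described here is a one-liner once an EMD implementation is available. The sketch below assumes the third-party PyEMD package (distributed on PyPI as EMD-signal) and simply subtracts the first, highest-frequency IMF from the NDVI series, as in the abstract.

      import numpy as np
      from PyEMD import EMD  # assumed third-party package (pip install EMD-signal)

      def denoise_ndvi(ndvi):
          imfs = EMD().emd(ndvi)  # IMFs ordered from highest to lowest frequency
          return ndvi - imfs[0]   # drop the noise-dominated first IMF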

  4. Non parametric denoising methods based on wavelets: Application to electron microscopy images in low exposure time

    NASA Astrophysics Data System (ADS)

    Soumia, Sid Ahmed; Messali, Zoubeida; Ouahabi, Abdeldjalil; Trepout, Sylvain; Messaoudi, Cedric; Marco, Sergio

    2015-01-01

    The 3D reconstruction of Cryo-Transmission Electron Microscopy (Cryo-TEM) and Energy Filtering TEM (EFTEM) images is hampered by the noisy nature of these images, which makes their alignment difficult. This noise arises from the interaction between the frozen hydrated biological samples and the electron beam when the specimen is exposed to radiation at a high exposure time. This sensitivity to the electron beam has led specialists to acquire the specimen projection images at very low exposure time, which creates a new problem: an extremely low signal-to-noise ratio (SNR). This paper investigates the problem of denoising TEM images acquired at very low exposure time. Our main objective is to enhance the quality of TEM images so as to improve the alignment process, which will in turn improve the three-dimensional tomographic reconstructions. We performed multiple tests on TEM images acquired at exposure times of 0.5 s, 0.2 s, 0.1 s and 1 s (i.e. with different values of SNR), equipped with gold beads to assist the assessment step. We propose a structure to combine multiple noisy copies of the TEM images, based on four different denoising methods: soft and hard wavelet thresholding; the bilateral filter, a nonlinear technique able to maintain edges neatly; and a Bayesian approach in the wavelet domain, in which context modeling is used to estimate the parameter for each coefficient. To ensure a high signal-to-noise ratio, we verified that the appropriate wavelet family is used at the appropriate level, choosing the "sym8" wavelet at level 3 as the most appropriate parameters. For the bilateral filter, many tests were done to determine the proper filter parameters, represented by the size of the filter, the range parameter and the

  5. Non parametric denoising methods based on wavelets: Application to electron microscopy images in low exposure time

    SciTech Connect

    Soumia, Sid Ahmed; Messali, Zoubeida; Ouahabi, Abdeldjalil; Trepout, Sylvain; Messaoudi, Cedric; Marco, Sergio

    2015-01-13

    The 3D reconstruction of Cryo-Transmission Electron Microscopy (Cryo-TEM) and Energy Filtering TEM (EFTEM) images is hampered by the noisy nature of these images, which makes their alignment difficult. This noise arises from the interaction between the frozen hydrated biological samples and the electron beam when the specimen is exposed to radiation at a high exposure time. This sensitivity to the electron beam has led specialists to acquire the specimen projection images at very low exposure time, which creates a new problem: an extremely low signal-to-noise ratio (SNR). This paper investigates the problem of denoising TEM images acquired at very low exposure time. Our main objective is to enhance the quality of TEM images so as to improve the alignment process, which will in turn improve the three-dimensional tomographic reconstructions. We performed multiple tests on TEM images acquired at exposure times of 0.5 s, 0.2 s, 0.1 s and 1 s (i.e. with different values of SNR), equipped with gold beads to assist the assessment step. We propose a structure to combine multiple noisy copies of the TEM images, based on four different denoising methods: soft and hard wavelet thresholding; the bilateral filter, a nonlinear technique able to maintain edges neatly; and a Bayesian approach in the wavelet domain, in which context modeling is used to estimate the parameter for each coefficient. To ensure a high signal-to-noise ratio, we verified that the appropriate wavelet family is used at the appropriate level, choosing the "sym8" wavelet at level 3 as the most appropriate parameters. For the bilateral filter, many tests were done to determine the proper filter parameters, represented by the size of the filter, the range parameter and the
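
    Two of the four methods, soft and hard wavelet thresholding with the quoted sym8/level-3 setting, can be sketched with PyWavelets as below. The VisuShrink threshold and the plain averaging used to combine the low-exposure copies are assumptions of this example, not the paper's exact combination rule.

      import numpy as np
      import pywt

      def wavelet_shrink(frame, mode="soft", wavelet="sym8", level=3):
          coeffs = pywt.wavedec2(frame, wavelet, level=level)
          sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745  # MAD noise estimate
          thr = sigma * np.sqrt(2.0 * np.log(frame.size))     # VisuShrink threshold
          shrunk = [coeffs[0]] + [tuple(pywt.threshold(d, thr, mode=mode)
                                        for d in band) for band in coeffs[1:]]
          return pywt.waverec2(shrunk, wavelet)

      def combine_copies(frames, mode="soft"):
          # Denoise each low-exposure copy, then average the reconstructions.
          return np.mean([wavelet_shrink(f, mode=mode) for f in frames], axis=0)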

  6. Quality of Computationally Inferred Gene Ontology Annotations

    PubMed Central

    Škunca, Nives; Altenhoff, Adrian; Dessimoz, Christophe

    2012-01-01

    Gene Ontology (GO) has established itself as the undisputed standard for protein function annotation. Most annotations are inferred electronically, i.e. without individual curator supervision, but they are widely considered unreliable. At the same time, we crucially depend on those automated annotations, as most newly sequenced genomes are non-model organisms. Here, we introduce a methodology to systematically and quantitatively evaluate electronic annotations. By exploiting changes in successive releases of the UniProt Gene Ontology Annotation database, we assessed the quality of electronic annotations in terms of specificity, reliability, and coverage. Overall, we not only found that electronic annotations have significantly improved in recent years, but also that their reliability now rivals that of annotations inferred by curators when they use evidence other than experiments from primary literature. This work provides the means to identify the subset of electronic annotations that can be relied upon—an important outcome given that >98% of all annotations are inferred without direct curation. PMID:22693439

  7. Evolutionary inferences from the analysis of exchangeability

    PubMed Central

    Hendry, Andrew P.; Kaeuffer, Renaud; Crispo, Erika; Peichel, Catherine L.; Bolnick, Daniel I.

    2013-01-01

    Evolutionary inferences are usually based on statistical models that compare mean genotypes and phenotypes (or their frequencies) among populations. An alternative is to use the actual distribution of genotypes and phenotypes to infer the "exchangeability" of individuals among populations. We illustrate this approach by using discriminant functions on principal components to classify individuals among paired lake and stream populations of threespine stickleback in each of six independent watersheds. Classification based on neutral and non-neutral microsatellite markers was highest to the population of origin and next-highest to populations in the same watershed. These patterns are consistent with the influence of historical contingency (separate colonization of each watershed) and subsequent gene flow (within but not between watersheds). In comparison to this low genetic exchangeability, ecological (diet) and morphological (trophic and armor traits) exchangeability was relatively high – particularly among populations from similar habitats. These patterns reflect the role of natural selection in driving parallel adaptive changes when independent populations colonize similar habitats. Importantly, however, substantial non-parallelism was also evident. Our results show that analyses based on exchangeability can confirm inferences based on statistical analyses of means or frequencies, while also refining insights into the drivers of – and constraints on – evolutionary diversification. PMID:24299398

  8. CAUSAL INFERENCE IN BIOLOGY NETWORKS WITH INTEGRATED BELIEF PROPAGATION

    PubMed Central

    CHANG, RUI; KARR, JONATHAN R; SCHADT, ERIC E

    2014-01-01

    Inferring causal relationships among molecular and higher-order phenotypes is a critical step in elucidating the complexity of living systems. Here we propose a novel method for inferring causality that is no longer constrained by the conditional dependency arguments that limit the ability of statistical causal inference methods to resolve causal relationships within sets of graphical models that are Markov equivalent. Our method utilizes Bayesian belief propagation to infer the responses of perturbation events on molecular traits given a hypothesized graph structure. A distance measure between the inferred response distribution and the observed data is defined to assess the 'fitness' of the hypothesized causal relationships. To test our algorithm, we infer causal relationships within equivalence classes of gene networks, in which the possible functional interactions are assumed to be nonlinear, given synthetic microarray and RNA sequencing data. We also apply our method to infer causality in a real metabolic network with a v-structure and a feedback loop. We show that our method can recapitulate the causal structure and recover the feedback loop from steady-state data alone, which conventional methods cannot. PMID:25592596

  9. Thermodynamics of cellular statistical inference

    NASA Astrophysics Data System (ADS)

    Lang, Alex; Fisher, Charles; Mehta, Pankaj

    2014-03-01

    Successful organisms must be capable of accurately sensing the surrounding environment in order to locate nutrients and evade toxins or predators. However, single cell organisms face a multitude of limitations on their accuracy of sensing. Berg and Purcell first examined the canonical example of statistical limitations to cellular learning of a diffusing chemical and established a fundamental limit to statistical accuracy. Recent work has shown that the Berg and Purcell learning limit can be exceeded using Maximum Likelihood Estimation. Here, we recast the cellular sensing problem as a statistical inference problem and discuss the relationship between the efficiency of an estimator and its thermodynamic properties. We explicitly model a single non-equilibrium receptor and examine the constraints on statistical inference imposed by noisy biochemical networks. Our work shows that cells must balance sample number, specificity, and energy consumption when performing statistical inference. These tradeoffs place significant constraints on the practical implementation of statistical estimators in a cell.

  10. Causal inference from observational data.

    PubMed

    Listl, Stefan; Jürges, Hendrik; Watt, Richard G

    2016-10-01

    Randomized controlled trials have long been considered the 'gold standard' for causal inference in clinical research. In the absence of randomized experiments, identification of reliable intervention points to improve oral health is often perceived as a challenge. But other fields of science, such as social science, have always been challenged by ethical constraints to conducting randomized controlled trials. Methods have been established to make causal inference using observational data, and these methods are becoming increasingly relevant in clinical medicine, health policy and public health research. This study provides an overview of state-of-the-art methods specifically designed for causal inference in observational data, including difference-in-differences (DiD) analyses, instrumental variables (IV), regression discontinuity designs (RDD) and fixed-effects panel data analysis. The described methods may be particularly useful in dental research, not least because of the increasing availability of routinely collected administrative data and electronic health records ('big data'). PMID:27111146

  11. We infer light in space.

    PubMed

    Schirillo, James A

    2013-10-01

    In studies of lightness and color constancy, the terms lightness and brightness refer to the qualia corresponding to perceived surface reflectance and perceived luminance, respectively. However, what has rarely been considered is the fact that the volume of space containing surfaces appears neither empty, void, nor black, but filled with light. Helmholtz (1866/1962) came closest to describing this phenomenon when discussing inferred illumination, but previous theoretical treatments have fallen short by restricting their considerations to the surfaces of objects. The present work is among the first to explore how we infer the light present in empty space. It concludes with several research examples supporting the theory that humans can infer the differential levels and chromaticities of illumination in three-dimensional space. PMID:23435628

  12. Randomized parcellation based inference.

    PubMed

    Da Mota, Benoit; Fritsch, Virgile; Varoquaux, Gaël; Banaschewski, Tobias; Barker, Gareth J; Bokde, Arun L W; Bromberg, Uli; Conrod, Patricia; Gallinat, Jürgen; Garavan, Hugh; Martinot, Jean-Luc; Nees, Frauke; Paus, Tomas; Pausova, Zdenka; Rietschel, Marcella; Smolka, Michael N; Ströhle, Andreas; Frouin, Vincent; Poline, Jean-Baptiste; Thirion, Bertrand

    2014-04-01

    Neuroimaging group analyses are used to relate inter-subject signal differences observed in brain imaging with behavioral or genetic variables and to assess risk factors of brain diseases. The lack of stability and sensitivity of current voxel-based analysis schemes may however lead to non-reproducible results. We introduce a new approach to overcome the limitations of standard methods, in which active voxels are detected according to a consensus on several random parcellations of the brain images, while a permutation test controls the false positive risk. On both synthetic and real data, this approach shows higher sensitivity, better accuracy and higher reproducibility than state-of-the-art methods. In a neuroimaging-genetic application, we find that it succeeds in detecting a significant association between a genetic variant next to the COMT gene and the BOLD signal in the left thalamus, for a functional Magnetic Resonance Imaging contrast associated with incorrect responses of the subjects in a Stop Signal Task protocol. PMID:24262376

  13. LOWER LEVEL INFERENCE CONTROL IN STATISTICAL DATABASE SYSTEMS

    SciTech Connect

    Lipton, D.L.; Wong, H.K.T.

    1984-02-01

    An inference is the process of transforming unclassified data values into confidential data values. Most previous research in inference control has studied the use of statistical aggregates to deduce individual records; however, several other types of inference are also possible. Unknown functional dependencies may be apparent to users who have 'expert' knowledge about the characteristics of a population, and some correlations between attributes may be concluded from 'commonly known' facts about the world. To counter these threats, security managers should use random sampling of databases of similar populations, as well as expert systems. 'Expert' users of the database system may also form inferences from the variable performance of the user interface: by observing on-line turn-around time, accounting statistics, the error messages received, and the point at which an interactive protocol sequence fails, one may obtain information about the frequency distributions of attribute values and the validity of data object names. At the back end of a database system, improved software engineering practices will reduce opportunities to bypass functional units of the database system. The term 'data object' should be expanded to incorporate those data object types that generate new classes of threats. The security of databases and database systems must be recognized as separate but related problems. Thus, by increased awareness of lower level inferences, system security managers may effectively nullify the threat posed by them.

  14. Bayesian inferences about the self (and others): A review

    PubMed Central

    Moutoussis, Michael; Fearon, Pasco; El-Deredy, Wael; Dolan, Raymond J.; Friston, Karl J.

    2014-01-01

    Viewing the brain as an organ of approximate Bayesian inference can help us understand how it represents the self. We suggest that inferred representations of the self have a normative function: to predict and optimise the likely outcomes of social interactions. Technically, we cast this predict-and-optimise as maximising the chance of favourable outcomes through active inference. Here the utility of outcomes can be conceptualised as prior beliefs about final states. Actions based on interpersonal representations can therefore be understood as minimising surprise – under the prior belief that one will end up in states with high utility. Interpersonal representations thus serve to render interactions more predictable, while the affective valence of interpersonal inference renders self-perception evaluative. Distortions of self-representation contribute to major psychiatric disorders such as depression, personality disorder and paranoia. The approach we review may therefore operationalise the study of interpersonal representations in pathological states. PMID:24583455

  15. Eight challenges in phylodynamic inference

    PubMed Central

    Frost, Simon D.W.; Pybus, Oliver G.; Gog, Julia R.; Viboud, Cecile; Bonhoeffer, Sebastian; Bedford, Trevor

    2015-01-01

    The field of phylodynamics, which attempts to enhance our understanding of infectious disease dynamics using pathogen phylogenies, has made great strides in the past decade. Basic epidemiological and evolutionary models are now well characterized with inferential frameworks in place. However, significant challenges remain in extending phylodynamic inference to more complex systems. These challenges include accounting for evolutionary complexities such as changing mutation rates, selection, reassortment, and recombination, as well as epidemiological complexities such as stochastic population dynamics, host population structure, and different patterns at the within-host and between-host scales. An additional challenge exists in making efficient inferences from an ever increasing corpus of sequence data. PMID:25843391

  16. Bayesian Inference in Satellite Gravity Inversion

    NASA Technical Reports Server (NTRS)

    Kis, K. I.; Taylor, Patrick T.; Wittmann, G.; Kim, Hyung Rae; Torony, B.; Mayer-Guerr, T.

    2005-01-01

    To solve a geophysical inverse problem means applying measurements to determine the parameters of the selected model. Here the inverse problem is formulated as Bayesian inference, with Gaussian probability density functions applied in Bayes's equation. The CHAMP satellite gravity data are determined at an altitude of 400 kilometers over the southern part of the Pannonian basin. The interpretation model is a right vertical cylinder, and the parameters of the model are obtained from a minimization problem solved by the Simplex method.
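
    For reference, Bayes's equation with Gaussian densities for both the prior on the model parameters m and the data likelihood takes the familiar textbook form below; the notation (forward operator G, data and prior covariances C_d and C_m, prior mean m_0) is assumed for illustration, not taken from the record.

      p(m \mid d) \;=\; \frac{p(d \mid m)\, p(m)}{p(d)}
      \;\propto\;
      \exp\!\Big(-\tfrac{1}{2}\,(d - G(m))^{\mathsf{T}} C_d^{-1} (d - G(m))\Big)\,
      \exp\!\Big(-\tfrac{1}{2}\,(m - m_0)^{\mathsf{T}} C_m^{-1} (m - m_0)\Big)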

  17. Developing a denoising filter for electron microscopy and tomography data in the cloud

    PubMed Central

    Starosolski, Zbigniew; Szczepanski, Marek; Wahle, Manuel; Rusu, Mirabela

    2012-01-01

    The low radiation conditions and the predominantly phase-object image formation of cryo-electron microscopy (cryo-EM) result in extremely high noise levels and low contrast in the recorded micrographs. The process of single particle or tomographic 3D reconstruction does not completely eliminate this noise and is even capable of introducing new sources of noise during alignment or when correcting for instrument parameters. The recently developed Digital Paths Supervised Variance (DPSV) denoising filter uses local variance information to control regional noise in a robust and adaptive manner. The performance of the DPSV filter was evaluated in this review qualitatively and quantitatively using simulated and experimental data from cryo-EM and tomography in two and three dimensions. We also assessed the benefit of filtering experimental reconstructions for visualization purposes and for enhancing the accuracy of feature detection. The DPSV filter eliminates high-frequency noise artifacts (density gaps), which would normally preclude the accurate segmentation of tomography reconstructions or the detection of alpha-helices in single-particle reconstructions. This collaborative software development project was carried out entirely by virtual interactions among the authors using publicly available development and file sharing tools. PMID:23066432
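
    The published DPSV algorithm is more elaborate, but the underlying principle, letting local variance control the strength of regional smoothing, can be sketched generically. The function name and all parameters below are illustrative, not the published filter.

      import numpy as np
      from scipy.ndimage import uniform_filter, gaussian_filter

      def variance_adaptive_denoise(img, size=5, sigma=2.0):
          # Local mean and variance in a size x size window.
          mean = uniform_filter(img, size)
          mean_sq = uniform_filter(img * img, size)
          local_var = np.maximum(mean_sq - mean * mean, 0.0)
          # Weight near 1 where variance is high (likely structure, keep
          # detail) and near 0 in flat noisy regions (smooth strongly).
          w = local_var / (local_var + local_var.mean() + 1e-12)
          smooth = gaussian_filter(img, sigma)
          return w * img + (1.0 - w) * smooth

      noisy = np.random.default_rng(1).normal(0.0, 1.0, (64, 64))
      out = variance_adaptive_denoise(noisy)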

  18. Joint non-Gaussian denoising and superresolving of raw high frame rate videos.

    PubMed

    Jinli Suo; Yue Deng; Liheng Bian; Qionghai Dai

    2014-03-01

    High frame rate cameras capture sharp videos of highly dynamic scenes by trading off signal-to-noise ratio and image resolution, so combined super-resolving and denoising is crucial for enhancing high speed videos and extending their applications. The solution is nontrivial due to the fact that the two deteriorations co-occur during capturing and the noise is nonlinearly dependent on signal strength. To handle this problem, we propose conducting noise separation and super resolution under a unified optimization framework, which models both spatiotemporal priors of high quality videos and signal-dependent noise. Mathematically, we align the frames along the temporal axis and pursue the solution under the following three criteria: 1) the sharp noise-free image stack is low rank, with some missing pixels denoting occlusions; 2) the noise follows a given nonlinear noise model; and 3) the recovered sharp image can be reconstructed well with sparse coefficients and an overcomplete dictionary learned from high quality natural images. Computationally, we propose to obtain the final result by solving a convex optimization using modern local linearization techniques. In the experiments, we validate the proposed approach on both synthetic and real captured data. PMID:24723520
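
    One ingredient of such a framework, the low-rank prior on the aligned frame stack, is typically enforced with singular-value soft-thresholding, the proximal step of convex low-rank recovery. A minimal sketch follows; the alignment, the nonlinear noise model and the dictionary sparse coding from the paper are omitted, and the threshold is illustrative.

      import numpy as np

      def svt(stack, tau):
          """Soft-threshold the singular values of a (pixels x frames) matrix."""
          u, s, vt = np.linalg.svd(stack, full_matrices=False)
          s = np.maximum(s - tau, 0.0)
          return (u * s) @ vt

      # Toy aligned stack: 1024 pixels observed over 20 frames.
      frames = np.random.default_rng(2).normal(size=(1024, 20))
      low_rank = svt(frames, tau=5.0)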

  19. Optimizing the De-Noise Neural Network Model for GPS Time-Series Monitoring of Structures

    PubMed Central

    Kaloop, Mosbeh R.; Hu, Jong Wan

    2015-01-01

    The Global Positioning System (GPS) has recently been used widely for structures and other applications. Notwithstanding, GPS accuracy still suffers from the errors afflicting the measurements, particularly for the short-period displacement of structural components. Previously, multi-filter methods were utilized to remove the displacement errors. This paper aims at a novel application of neural network prediction models to improve GPS monitoring time series data. Four learning algorithms are applied with neural network solutions: back-propagation, cascade-forward back-propagation, adaptive filter and extended Kalman filter, to determine which model can be recommended. Noise simulation and a bridge's short-period GPS monitoring displacement component, sampled at 1 Hz, are used to validate the four models and the previous method. The results show that the adaptive neural network filter is suggested for de-noising the observations, specifically for the GPS displacement components of structures. This model is also expected to have a significant influence on the design of structures in terms of low-frequency responses and measurement content. PMID:26402687
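
    As an illustration of the adaptive-filter idea recommended above, here is a minimal least-mean-squares (LMS) sketch on a synthetic displacement series. This is not the authors' network; the filter order, step size and test signal are invented.

      import numpy as np

      def lms_denoise(x, order=8, mu=0.01):
          w = np.zeros(order)
          y = np.zeros_like(x)
          for n in range(order, len(x)):
              u = x[n - order:n]        # past samples as the filter input
              y[n] = w @ u              # one-step prediction = denoised value
              e = x[n] - y[n]           # prediction error drives adaptation
              w += 2 * mu * e * u       # LMS weight update
          return y

      t = np.linspace(0, 10, 1000)                       # 10 s at 100 Hz (toy)
      signal = np.sin(2 * np.pi * 0.5 * t)               # slow displacement
      noisy = signal + np.random.default_rng(3).normal(0.0, 0.3, t.size)
      denoised = lms_denoise(noisy)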

  20. Marginalised Stacked Denoising Autoencoders for Robust Representation of Real-Time Multi-View Action Recognition.

    PubMed

    Gu, Feng; Flórez-Revuelta, Francisco; Monekosso, Dorothy; Remagnino, Paolo

    2015-01-01

    Multi-view action recognition has gained great interest in video surveillance, human computer interaction, and multimedia retrieval, where multiple cameras of different types are deployed to provide complementary fields of view. Fusion of multiple camera views evidently leads to more robust decisions on both tracking multiple targets and analysing complex human activities, especially where there are occlusions. In this paper, we incorporate the marginalised stacked denoising autoencoders (mSDA) algorithm to further improve the bag of words (BoWs) representation in terms of robustness and usefulness for multi-view action recognition. The resulting representations are fed into three simple fusion strategies as well as a multiple kernel learning algorithm at the classification stage. Based on the internal evaluation, the codebook size of BoWs and the number of layers of mSDA may not significantly affect recognition performance. According to results on three multi-view benchmark datasets, the proposed framework improves recognition performance across all three datasets and achieves record recognition performance, beating the state-of-the-art algorithms in the literature. It is also capable of performing real-time action recognition at a frame rate ranging from 33 to 45 frames per second, which could be further improved by using more powerful machines in future applications. PMID:26193271
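
    The appeal of mSDA is that each marginalised denoising layer has a closed-form solution, so no gradient training is needed. A sketch of one layer following the standard mDA equations is given below; the bias feature of the usual formulation is omitted and the bag-of-words input is simulated.

      import numpy as np

      def mda_layer(X, p=0.5):
          """X: features x samples. p: feature corruption probability."""
          d = X.shape[0]
          q = np.full(d, 1.0 - p)                 # per-feature survival probs
          S = X @ X.T                             # scatter matrix
          # Expected correlations under feature dropout (standard mDA form):
          Q = S * np.outer(q, q)
          np.fill_diagonal(Q, q * np.diag(S))     # diagonal survives singly
          P = S * q[np.newaxis, :]
          W = P @ np.linalg.pinv(Q)               # closed-form reconstruction map
          return np.tanh(W @ X)                   # nonlinearity between layers

      X = np.random.default_rng(4).random((100, 500))  # toy BoW: 100 terms, 500 clips
      h1 = mda_layer(X)
      h2 = mda_layer(h1)     # stacking layers yields the mSDA representation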

  3. Simultaneous denoising and reconstruction of 5-D seismic data via damped rank-reduction method

    NASA Astrophysics Data System (ADS)

    Chen, Yangkang; Zhang, Dong; Jin, Zhaoyu; Chen, Xiaohong; Zu, Shaohuan; Huang, Weilin; Gan, Shuwei

    2016-09-01

    The Cadzow rank-reduction method can be effectively utilized in simultaneously denoising and reconstructing 5-D seismic data that depend on four spatial dimensions. The classic version of the Cadzow rank-reduction method arranges the 4-D spatial data into a level-four block Hankel/Toeplitz matrix and then applies truncated singular value decomposition (TSVD) for rank reduction. When the observed data are extremely noisy, which is often the feature of real seismic data, traditional TSVD is not adequate for attenuating the noise and reconstructing the signals. The reconstructed data tend to contain a significant amount of residual noise using the traditional TSVD method, which can be explained by the fact that the reconstructed data space is a mixture of both signal subspace and noise subspace. In order to better decompose the block Hankel matrix into signal and noise components, we introduced a damping operator into the traditional TSVD formula, which we call the damped rank-reduction method. The damped rank-reduction method can obtain a perfect reconstruction performance even when the observed data have extremely low signal-to-noise ratio. The feasibility of the improved 5-D seismic data reconstruction method was validated via both 5-D synthetic and field data examples. We present a comprehensive analysis of the data examples and derive valuable experience and guidelines for better utilizing the proposed method in practice. Since the proposed method is convenient to implement and can achieve immediate improvement, we suggest its wide application in the industry.
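
    A sketch of the damping idea on a generic low-rank-plus-noise matrix: keep the leading K singular values, but shrink each retained one so the noise energy mixed into the signal subspace is attenuated. A toy matrix stands in for the level-four block Hankel matrix, and the shrinkage shown is one common form of the damping operator, not necessarily the paper's exact formula.

      import numpy as np

      def damped_tsvd(M, K, N=3):
          u, s, vt = np.linalg.svd(M, full_matrices=False)
          noise_level = s[K] if K < s.size else 0.0   # first discarded sigma
          s_k = s[:K]
          # Damp each retained singular value; larger N = weaker damping.
          damp = 1.0 - (noise_level / np.maximum(s_k, 1e-12)) ** N
          s_d = s_k * np.maximum(damp, 0.0)
          return (u[:, :K] * s_d) @ vt[:K]

      rng = np.random.default_rng(5)
      clean = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 150))  # rank-3 signal
      noisy = clean + rng.normal(0.0, 1.0, clean.shape)
      recon = damped_tsvd(noisy, K=3, N=3)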

  4. Towards denoising XMCD movies of fast magnetization dynamics using extended Kalman filter.

    PubMed

    Kopp, M; Harmeling, S; Schütz, G; Schölkopf, B; Fähnle, M

    2015-01-01

    The Kalman filter is a well-established approach for extracting information on the time-dependent state of a system from noisy observations. It was developed in the context of the Apollo project to track the deviation of the true trajectory of a rocket from the desired trajectory. Afterwards it was applied to many different systems with small numbers of components of the respective state vector (typically about 10), and in all cases the equation of motion for the state vector was known exactly. Fast dissipative magnetization dynamics is often investigated by x-ray magnetic circular dichroism (XMCD) movies, which are often very noisy. In this situation the number of components of the state vector is extremely large (about 10^5), and the equation of motion for the dissipative magnetization dynamics (especially the values of the material parameters of this equation) is not well known. In the present paper it is shown by theoretical considerations that there is nevertheless no problem in principle with using the Kalman filter to denoise XMCD movies of fast dissipative magnetization dynamics. PMID:25461588
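
    To illustrate the filter itself, here is a scalar Kalman recursion denoising a single pixel's intensity trace through the movie frames. The real problem couples about 10^5 state components with an uncertain equation of motion; a random-walk state model and invented variances stand in for that here.

      import numpy as np

      def kalman_denoise(z, q=1e-4, r=0.05):
          """z: noisy observations; q: process variance; r: measurement variance."""
          x, p = z[0], 1.0                 # state estimate and its variance
          out = np.empty_like(z)
          for k, zk in enumerate(z):
              p = p + q                    # predict (random-walk model)
              g = p / (p + r)              # Kalman gain
              x = x + g * (zk - x)         # update with the new frame
              p = (1.0 - g) * p
              out[k] = x
          return out

      t = np.linspace(0.0, 1.0, 500)
      truth = np.exp(-t / 0.3) * np.cos(2 * np.pi * 5 * t)  # decaying oscillation
      noisy = truth + np.random.default_rng(6).normal(0.0, 0.2, t.size)
      filtered = kalman_denoise(noisy)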

  5. Feature-preserving mesh denoising via normal guided quadric error metrics

    NASA Astrophysics Data System (ADS)

    Yu, Jinze; Wei, Mingqiang; Qin, Jing; Wu, Jianhuang; Heng, Pheng-Ann

    2014-11-01

    While modern optical and laser 3D scanners can generate high-accuracy mesh models, largely avoiding the noise they introduce, which prohibits practical applications, still comes at high cost. Thus, optimizing noisy meshes while preserving their geometric details is necessary for production, yet remains challenging work. In this paper we propose a novel and efficient two-stage feature-preserving mesh denoising framework which can remove noise while preserving the fine features of a surface mesh. We improve the feature-preserving capability of our vertex updating scheme by employing an extension of the quadric error metric (QEM), which can track and minimize updating errors and hence preserve both the overall shape and the detailed features of a mesh. We further leverage vertex normals to guide the vertex updating process, as the normal field of a mesh reflects the geometry of the underlying surface. In addition, to obtain a more accurate normal field to guide vertex updating, we develop an improved normal filter by integrating the advantages of existing filters. Compared with traditional gradient-descent-based schemes, our method performs better on challenging regions with rich geometric features. Moreover, a local entropy metric is proposed to measure the stability of a mesh and the effectiveness of vertex updating algorithms. Qualitative and quantitative experiments demonstrate that our approach can effectively remove noise from noisy meshes while preserving or recovering the geometric features of the original objects.
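
    The QEM building block can be sketched compactly: each neighbouring face contributes the quadric of its supporting plane, and the updated vertex minimises the summed quadric error. The normals and points below are illustrative; the paper's normal filtering and entropy metric are omitted.

      import numpy as np

      def qem_vertex(normals, points):
          """Vertex minimising summed squared distance to planes (n, p)."""
          A = np.zeros((3, 3))
          b = np.zeros(3)
          for n, p in zip(normals, points):
              n = n / np.linalg.norm(n)
              d = -n @ p                       # plane equation: n.v + d = 0
              A += np.outer(n, n)              # accumulate plane quadrics
              b += d * n
          # Minimise v^T A v + 2 b^T v + const  ->  solve A v = -b.
          return np.linalg.lstsq(A, -b, rcond=None)[0]

      normals = [np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.2]),
                 np.array([0.0, 1.0, 0.1])]
      points = [np.array([0.0, 0.0, 0.5])] * 3
      print(qem_vertex(normals, points))       # -> approx. (0, 0, 0.5)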

  6. The effective image denoising method for MEMS based IR image arrays

    NASA Astrophysics Data System (ADS)

    Dong, Liquan; Liu, Xiaohua; Zhao, Yuejin; Liu, Ming; Hui, Mei; Zhou, Xiaoxiao

    2008-12-01

    MEMS have become viable systems for uncooled infrared imaging in recent years. They offer advantages due to their simplicity, low cost and scalability to high-resolution FPAs without a prohibitive increase in cost. An uncooled thermal detector array with low NETD is designed and fabricated using MEMS bimaterial microcantilever structures that bend in response to thermal change, and the IR images of objects obtained by these FPAs are read out by an optical method. The IR images are processed by a sparse-representation-based image denoising and inpainting algorithm that generalizes the K-means clustering process to adapt dictionaries for sparse signal representations; the processed image quality is improved markedly. Extensive computation and analysis were carried out by applying the discussed algorithm to simulated data and in applications on real data. The experimental results demonstrate that better RMSE and a higher peak signal-to-noise ratio (PSNR) can be obtained compared with traditional methods. Finally we discuss the factors that determine the ultimate performance of the FPA, and we indicate that one of the unique advantages of the present approach is its scalability to larger imaging arrays.
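
    A sketch of the sparse-representation step using a fixed overcomplete DCT dictionary and scikit-learn's orthogonal matching pursuit: each noisy patch is approximated by a few dictionary atoms. The dictionary learning itself (the K-means generalisation the abstract refers to) is omitted, and all sizes and values are illustrative.

      import numpy as np
      from sklearn.linear_model import orthogonal_mp

      def dct_dictionary(patch=8, atoms=11):
          """Overcomplete 2D DCT dictionary: (patch*patch) x (atoms*atoms)."""
          k = np.arange(patch)[:, None] * np.arange(atoms)[None, :]
          D1 = np.cos(np.pi * k / atoms)
          D1 -= D1.mean(axis=0, keepdims=True)
          D1[:, 0] = 1.0 / np.sqrt(patch)          # restore the DC atom
          D1 /= np.linalg.norm(D1, axis=0)         # unit-norm columns
          return np.kron(D1, D1)

      D = dct_dictionary()
      rng = np.random.default_rng(7)
      patch = rng.normal(0.0, 0.1, 64) + D[:, 5]   # noisy patch with one atom
      code = orthogonal_mp(D, patch, n_nonzero_coefs=4)  # sparse code
      denoised_patch = D @ code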

  7. Generalized average of signals (GAS) - a new method for denoising and phase detection

    NASA Astrophysics Data System (ADS)

    Malek, J.; Kolinsky, P.; Strunc, J.; Valenta, J.

    2007-12-01

    A novel method called Generalized Average of Signals (GAS) was developed and tested during the last two years (Málek et al., in press). This method is designed for processing seismograms from dense seismic arrays and is convenient mainly for denoising and weak phase detection. The main idea of the GAS method is based on non-linear stacking of seismograms in the frequency domain, which considerably improves the signal-to-noise ratio of coherent seismograms. Several synthetic tests of the GAS method are presented and the results are compared with the PWS method of Schimmel and Paulssen (1997). Moreover, examples of application to real data are presented. These examples were chosen to show the broad applicability of the method in experiments of different scales. The first shows identification of S-waves on seismograms from shallow seismics. The second concerns identification of converted waves from local earthquakes registered at the WEBNET local network in western Bohemia. Finally, the third depicts identification of PKIKP onsets on seismograms of teleseismic earthquakes. Schimmel, M., Paulssen, H. (1997): Noise reduction and detection of weak, coherent signals through phase-weighted stacks. Geophys. J. Int. 130, 497-505. Málek, J., Kolínský, P., Strunc, J. and Valenta, J. (2007): Generalized average of signals (GAS) - a new method for detection of very weak waves in seismograms. Acta Geodyn. et Geomater., in press.
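
    The PWS reference method is compact enough to sketch: the linear stack is weighted by the coherence of instantaneous phases across the array, so incoherent noise is suppressed while weak coherent arrivals survive. (GAS itself, nonlinear stacking in the frequency domain, is described in Málek et al.; the trace simulation below is invented.)

      import numpy as np
      from scipy.signal import hilbert

      def pws(traces, nu=2.0):
          """Phase-weighted stack. traces: array (n_traces, n_samples)."""
          analytic = hilbert(traces, axis=1)
          phase = analytic / np.abs(analytic)           # unit phasors
          coherence = np.abs(phase.mean(axis=0)) ** nu  # in [0, 1]
          return traces.mean(axis=0) * coherence

      rng = np.random.default_rng(8)
      t = np.linspace(0, 1, 1000)
      wavelet = np.exp(-((t - 0.5) / 0.01) ** 2)        # weak coherent arrival
      traces = wavelet + rng.normal(0.0, 0.5, (20, t.size))
      stacked = pws(traces)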

  8. Computer-assisted counting of retinal cells by automatic segmentation after TV denoising

    PubMed Central

    2013-01-01

    Background: Quantitative evaluation of mosaics of photoreceptors and neurons is essential in studies on development, aging and degeneration of the retina. Manual counting of samples is a time-consuming procedure, while attempts at automation are subject to various restrictions from biological and preparation variability, leading to both over- and underestimation of cell numbers. Here we present an adaptive algorithm to overcome many of these problems. Digital micrographs were obtained from cone photoreceptor mosaics visualized by anti-opsin immunocytochemistry in retinal wholemounts from a variety of mammalian species including primates. Segmentation of photoreceptors (from background, debris, blood vessels, other cell types) was performed by a procedure based on Rudin-Osher-Fatemi total variation (TV) denoising. Once 3 parameters are manually adjusted based on a sample, similarly structured images can be batch processed. The module is implemented in MATLAB and fully documented online. Results: The object recognition procedure was tested on samples with a typical range of signal and background variations. We obtained results with error ratios of less than 10% in 16 of 18 samples and a mean error of less than 6% compared to manual counts. Conclusions: The presented method provides a traceable module for automated acquisition of retinal cell density data. Remaining errors, including addition of background items and splitting or merging of objects, might be further reduced by introducing additional parameters. The module may be integrated into extended environments with features such as 3D acquisition and recognition. PMID:24138794
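
    A sketch of the TV-denoise-then-threshold pipeline, with scikit-image's Chambolle solver in place of the authors' MATLAB module; the weight and threshold below are illustrative stand-ins for their manually adjusted parameters.

      import numpy as np
      from skimage.restoration import denoise_tv_chambolle

      rng = np.random.default_rng(9)
      img = np.zeros((128, 128))
      img[40:60, 40:60] = 1.0                      # a bright "cell"
      noisy = img + rng.normal(0.0, 0.3, img.shape)

      # ROF-style TV denoising smooths noise while keeping sharp edges,
      # which makes the subsequent global threshold far more reliable.
      smoothed = denoise_tv_chambolle(noisy, weight=0.2)
      mask = smoothed > 0.5                        # simple segmentation
      n_object_pixels = int(mask.sum())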

  9. Directional denoising and line enhancement for device segmentation in real time fluoroscopic imaging

    NASA Astrophysics Data System (ADS)

    Wagner, Martin; Royalty, Kevin; Oberstar, Erick; Strother, Charles; Mistretta, Charles

    2015-03-01

    Purpose: The purpose of this work is to improve the segmentation of interventional devices (e.g. guidewires) in fluoroscopic images. This is required for real-time 3D reconstruction from two angiographic views, where noise can cause severe reconstruction artifacts and incomplete reconstruction. The proposed method reduces the noise while enhancing the thin line structures of the device in images with subtracted background. Methods: A two-step approach is presented here. The first step estimates, for each pixel and a given number of directions, a measure for the probability that the point is part of a line segment in the corresponding direction. This can be done efficiently using binary masks. In the second step, a directional filter kernel is applied for pixels that are assumed to be part of a line; for all other pixels a mean filter is used. Results: The proposed algorithm was able to achieve an average contrast to noise ratio (CNR) of 6.3 compared to the bilateral filter with 5.8. For the device segmentation using global thresholding, the number of missing or wrong pixels is reduced to 25 % compared to 40 % using the bilateral approach. Conclusion: The proposed algorithm is a simple and efficient approach, which can easily be parallelized for use on modern graphics processing units. It improves the segmentation of the device compared to other denoising methods, and therefore reduces artifacts and increases the quality of the reconstruction without notably increasing the delay in real-time applications.
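
    A compact sketch of the two-step idea with four orientations: oriented binary masks score how likely each pixel lies on a line in each direction; likely line pixels are smoothed along their best direction, all others with a plain mean filter. The masks, threshold and sizes are illustrative, not the paper's exact kernels.

      import numpy as np
      from scipy.ndimage import convolve, uniform_filter

      def line_kernels(size=5):
          horiz = np.zeros((size, size))
          horiz[size // 2, :] = 1.0
          vert = horiz.T.copy()
          diag = np.eye(size)
          anti = np.fliplr(np.eye(size))
          return [k / k.sum() for k in (horiz, vert, diag, anti)]

      def directional_denoise(img, thresh=0.6):
          # Directional means act as the line-probability measure.
          responses = np.stack([convolve(img, k) for k in line_kernels()])
          best = responses.max(axis=0)            # strongest direction
          smooth = uniform_filter(img, 5)         # isotropic fallback
          line_mask = best > thresh * img.max()   # crude line detection
          # Line pixels keep the along-line average; others get the mean filter.
          return np.where(line_mask, best, smooth)

      rng = np.random.default_rng(10)
      img = rng.normal(0.0, 0.1, (128, 128))
      img[:, 64] += 1.0                           # a vertical "guidewire"
      out = directional_denoise(img)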

  10. Randomized denoising autoencoders for smaller and efficient imaging based AD clinical trials

    PubMed Central

    Ithapu, Vamsi K.; Singh, Vikas; Okonkwo, Ozioma; Johnson, Sterling C.

    2015-01-01

    There is a growing body of research devoted to designing imaging-based biomarkers that identify Alzheimer's disease (AD) in its prodromal stage using statistical machine learning methods. Recently several authors investigated how clinical trials for AD can be made more efficient (i.e., require a smaller sample size) using predictive measures from such classification methods. In this paper, we explain why predictive measures given by such SVM-type objectives may be less than ideal for use in the setting described above. We give a solution based on a novel deep learning model, randomized denoising autoencoders (rDA), which regresses on training labels y while also accounting for the variance, a property which is very useful for clinical trial design. Our results give strong improvements in sample size estimates over strategies based on multi-kernel learning. Also, rDA predictions appear to correlate more accurately with stages of disease. Separately, our formulation empirically shows how deep architectures can be applied in the large d, small n regime, the default situation in medical imaging. This result is of independent interest. PMID:25485413

  11. Inferring differentiation pathways from gene expression

    PubMed Central

    Costa, Ivan G.; Roepcke, Stefan; Hafemeister, Christoph; Schliep, Alexander

    2008-01-01

    Motivation: The regulation of proliferation and differentiation of embryonic and adult stem cells into mature cells is central to developmental biology. Gene expression measured in distinguishable developmental stages helps to elucidate underlying molecular processes. In previous work we showed that functional gene modules, which act distinctly in the course of development, can be represented by a mixture of trees. In general, the similarities in the gene expression programs of cell populations reflect the similarities in the differentiation path. Results: We propose a novel model for gene expression profiles and an unsupervised learning method to estimate developmental similarity and infer differentiation pathways. We assess the performance of our model on simulated data and compare it, with favorable results, to related methods. We also infer differentiation pathways and predict functional modules in gene expression data of lymphoid development. Conclusions: We demonstrate for the first time how, in principle, the incorporation of structural knowledge about the dependence structure helps to reveal differentiation pathways and potentially relevant functional gene modules from microarray datasets. Our method applies in any area of developmental biology where it is possible to obtain cells of distinguishable differentiation stages. Availability: The implementation of our method (GPL license), data and additional results are available at http://algorithmics.molgen.mpg.de/Supplements/InfDif/ Contact: filho@molgen.mpg.de, schliep@molgen.mpg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:18586709

  12. A fast method for video deblurring based on a combination of gradient methods and denoising algorithms in Matlab and C environments

    NASA Astrophysics Data System (ADS)

    Mirzadeh, Zeynab; Mehri, Razieh; Rabbani, Hossein

    2010-01-01

    In this paper, degraded video with blur and noise is enhanced using an algorithm based on an iterative procedure. In this algorithm we first estimate the clean data and blur function using the Newton optimization method, and then the estimation procedure is improved using appropriate denoising methods. These noise reduction techniques are based on local statistics of the clean data and blur function. For the estimated blur function we use the LPA-ICI (local polynomial approximation - intersection of confidence intervals) method, which uses an anisotropic window around each point and obtains the enhanced data by employing a Wiener filter in this local window. Similarly, to improve the quality of the estimated clean video, we first transform the data to the wavelet domain and then improve our estimation using a maximum a posteriori (MAP) estimator with a local Laplace prior. This procedure (initial estimation and improvement of the estimation by denoising) is iterated, and finally the clean video is obtained. The implementation of this algorithm is slow in the MATLAB environment, so it is not suitable for online applications. However, MATLAB can run functions written in C; the files which hold the source for these functions are called MEX-files, and MEX functions allow system-specific APIs to be called to extend MATLAB's abilities. So, in this paper, to speed up our algorithm, the code written in MATLAB is sectioned, the elapsed time for each section is measured, and the slow sections (which use 60% of the complete running time) are selected. These slow sections are then translated to C++ and linked to MATLAB. In fact, the high load of information in images and of processed data in the "for" loops of the relevant code makes MATLAB an unsuitable candidate for writing such programs. The code for our video deblurring algorithm written in MATLAB contains eight "for" loops, which consume 60% of the total execution time of the entire program, and so the runtime should be substantially reduced by porting them to C++.

  13. cosmoabc: Likelihood-free inference for cosmology

    NASA Astrophysics Data System (ADS)

    Ishida, Emille E. O.; Vitenti, Sandro D. P.; Penna-Lima, Mariana; Trindade, Arlindo M.; Cisewski, Jessi; de Souza, Rafael; Cameron, Ewan; Busti, Vinicius C.

    2015-05-01

    Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogs. cosmoabc is a Python Approximate Bayesian Computation (ABC) sampler featuring a Population Monte Carlo variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code can be coupled to an external simulator to allow incorporation of arbitrary distance and prior functions. When coupled with the numcosmo library, it has been used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy clusters number counts without computing the likelihood function.
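
    The core ABC loop is small enough to sketch: draw parameters from the prior, forward-simulate mock data, and keep draws whose distance to the observation falls below a tolerance. cosmoabc adds the Population Monte Carlo scheme and adaptive importance sampling on top of this; the toy "cluster counts" model, prior and tolerance below are invented.

      import numpy as np

      rng = np.random.default_rng(11)
      obs = rng.poisson(50.0, size=100)            # pretend observed counts

      def simulate(rate):
          """Forward simulator producing a mock catalog for a parameter draw."""
          return rng.poisson(rate, size=obs.size)

      def distance(a, b):
          """User-supplied distance between observed and synthetic catalogs."""
          return abs(a.mean() - b.mean())

      accepted = []
      while len(accepted) < 500:
          rate = rng.uniform(10.0, 100.0)          # draw from a flat prior
          if distance(simulate(rate), obs) < 1.0:  # tolerance epsilon
              accepted.append(rate)

      posterior = np.array(accepted)               # approximate posterior sample
      print(posterior.mean(), posterior.std())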

  14. Science Shorts: Observation versus Inference

    ERIC Educational Resources Information Center

    Leager, Craig R.

    2008-01-01

    When you observe something, how do you know for sure what you are seeing, feeling, smelling, or hearing? Asking students to think critically about their encounters with the natural world will help to strengthen their understanding and application of the science-process skills of observation and inference. In the following lesson, students make…

  15. Sample Size and Correlational Inference

    ERIC Educational Resources Information Center

    Anderson, Richard B.; Doherty, Michael E.; Friedrich, Jeff C.

    2008-01-01

    In 4 studies, the authors examined the hypothesis that the structure of the informational environment makes small samples more informative than large ones for drawing inferences about population correlations. The specific purpose of the studies was to test predictions arising from the signal detection simulations of R. B. Anderson, M. E. Doherty,…

  16. Word Learning as Bayesian Inference

    ERIC Educational Resources Information Center

    Xu, Fei; Tenenbaum, Joshua B.

    2007-01-01

    The authors present a Bayesian framework for understanding how adults and children learn the meanings of words. The theory explains how learners can generalize meaningfully from just one or a few positive examples of a novel word's referents, by making rational inductive inferences that integrate prior knowledge about plausible word meanings with…

  17. The mechanisms of temporal inference

    NASA Technical Reports Server (NTRS)

    Fox, B. R.; Green, S. R.

    1987-01-01

    The properties of a temporal language are determined by its constituent elements: the temporal objects which it can represent, the attributes of those objects, the relationships between them, the axioms which define the default relationships, and the rules which define the statements that can be formulated. The methods of inference which can be applied to a temporal language are derived in part from a small number of axioms which define the meaning of equality and order and how those relationships can be propagated. More complex inferences involve detailed analysis of the stated relationships. Perhaps the most challenging area of temporal inference is reasoning over disjunctive temporal constraints. Simple forms of disjunction do not sufficiently increase the expressive power of a language while unrestricted use of disjunction makes the analysis NP-hard. In many cases a set of disjunctive constraints can be converted to disjunctive normal form and familiar methods of inference can be applied to the conjunctive sub-expressions. This process itself is NP-hard but it is made more tractable by careful expansion of a tree-structured search space.
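
    The DNF expansion described above is a Cartesian product over the disjuncts; each resulting combination is an ordinary, disjunction-free constraint set to which the familiar conjunctive inference methods apply. A sketch with a toy "<" (strictly-before) relation and a crude cycle check as the conjunctive consistency test:

      from itertools import product
      from collections import defaultdict

      # Each disjunctive constraint lists its alternatives; the product
      # enumerates the conjunctive sub-problems. The exponential number of
      # combinations is exactly why unrestricted disjunction is NP-hard.
      constraints = [
          [("A", "<", "B"), ("B", "<", "A")],   # A before B, or B before A
          [("B", "<", "C")],                    # B strictly before C
          [("C", "<", "A"), ("A", "<", "C")],
      ]

      def consistent(conj):
          """Conjunctive test: the '<' relation must remain acyclic."""
          g = defaultdict(list)
          for a, _, b in conj:
              g[a].append(b)
          def reaches(x, y, seen=frozenset()):
              return x == y or any(reaches(z, y, seen | {x})
                                   for z in g[x] if z not in seen)
          return not any(reaches(b, a) for a, _, b in conj)

      total = 1
      for alternatives in constraints:
          total *= len(alternatives)
      feasible = [c for c in product(*constraints) if consistent(c)]
      print(len(feasible), "of", total, "combinations are consistent")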

  18. Perceptual Inference and Autistic Traits

    ERIC Educational Resources Information Center

    Skewes, Joshua C; Jegindø, Else-Marie; Gebauer, Line

    2015-01-01

    Autistic people are better at perceiving details. Major theories explain this in terms of bottom-up sensory mechanisms or in terms of top-down cognitive biases. Recently, it has become possible to link these theories within a common framework. This framework assumes that perception is implicit neural inference, combining sensory evidence with…

  19. Improving Explanatory Inferences from Assessments

    ERIC Educational Resources Information Center

    Diakow, Ronli Phyllis

    2013-01-01

    This dissertation comprises three papers that propose, discuss, and illustrate models to make improved inferences about research questions regarding student achievement in education. Addressing the types of questions common in educational research today requires three different "extensions" to traditional educational assessment: (1)…

  20. Classical low-pass filter and real-time wavelet-based denoising technique implemented on a DSP: a comparison study

    NASA Astrophysics Data System (ADS)

    Dolabdjian, Ch.; Fadili, J.; Huertas Leyva, E.

    2002-11-01

    We have implemented a real-time numerical denoising algorithm, using the Discrete Wavelet Transform (DWT), on a TMS320C3x Digital Signal Processor (DSP). We also compared this post-processing approach, from theoretical and practical viewpoints, to a more classical low-pass filter. The comparison was carried out using an ECG-type (electrocardiogram) signal. The denoising approach is an elegant and extremely fast alternative to the classical class of linear filters. It is particularly adapted to non-stationary signals such as those encountered in biological applications, and it substantially improves detection of such signals over Fourier-based techniques. This processing step is a vital element in our acquisition chain using high-sensitivity magnetic sensors, and should enhance detection of cardiac-type magnetic signals or of magnetic particles in movement.
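
    A sketch of the DWT denoising scheme on an ECG-like test signal, with PyWavelets standing in for the fixed-point DSP implementation; the wavelet choice, decomposition level and universal threshold are typical but illustrative.

      import numpy as np
      import pywt

      rng = np.random.default_rng(12)
      t = np.linspace(0.0, 1.0, 1024)
      ecg_like = np.exp(-((t - 0.5) / 0.01) ** 2)       # a spiky "QRS" pulse
      noisy = ecg_like + rng.normal(0.0, 0.1, t.size)

      # Decompose, soft-threshold the detail coefficients, reconstruct.
      coeffs = pywt.wavedec(noisy, "db4", level=5)
      sigma = np.median(np.abs(coeffs[-1])) / 0.6745    # noise estimate (MAD)
      thresh = sigma * np.sqrt(2.0 * np.log(noisy.size))  # universal threshold
      coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, "soft")
                              for c in coeffs[1:]]
      denoised = pywt.waverec(coeffs, "db4")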