Science.gov

Sample records for denoising inferred functional

  1. Denoising inferred functional association networks obtained by gene fusion analysis

    PubMed Central

    Kamburov, Atanas; Goldovsky, Leon; Freilich, Shiri; Kapazoglou, Aliki; Kunin, Victor; Enright, Anton J; Tsaftaris, Athanasios; Ouzounis, Christos A

    2007-01-01

    Background: Gene fusion detection – also known as the 'Rosetta Stone' method – involves the identification of fused composite genes in a set of reference genomes, which indicates potential interactions between their un-fused counterpart genes in query genomes. The precision of this method typically improves with an ever-increasing number of reference genomes.

    Results: In order to explore the usefulness and scope of this approach for protein interaction prediction, and to generate a high-quality, non-redundant set of interacting pairs of proteins across a wide taxonomic range, we exhaustively performed gene fusion analysis for 184 genomes using an efficient variant of a previously developed protocol. By analyzing interaction graphs and applying a threshold that limits the maximum number of possible interactions within the largest graph components, we show that we can reduce the number of implausible interactions due to the detection of promiscuous domains. With this generally applicable approach, we generate a robust set of over 2 million distinct and testable interactions encompassing 696,894 proteins in 184 species or strains, most of which have never been the subject of high-throughput experimental proteomics. We investigate the cumulative effect of increasing numbers of genomes on the fidelity and quantity of predictions, and show that, for large numbers of genomes, predictions do not become saturated but continue to grow linearly for the majority of species. We also examine the percentage of component (and composite) proteins in relation to the number of genes, and further validate the functional categories that are highly represented in this robust set of detected genome-wide interactions.

    Conclusion: We illustrate the phylogenetic and functional diversity of gene fusion events across genomes, and their usefulness for accurate prediction of protein interaction and function. PMID:18081932
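
    The graph-thresholding step described above lends itself to a small illustration. The sketch below, assuming the networkx package and a hypothetical list of predicted protein pairs, builds the inferred interaction graph and drops high-degree hubs – a simplified stand-in for the paper's component-size criterion for suppressing promiscuous domains.

      import networkx as nx

      def filter_promiscuous(pairs, max_degree=50):
          # Build the inferred interaction graph and drop hub proteins
          # whose degree exceeds max_degree; the cutoff is illustrative,
          # standing in for the paper's component-size criterion.
          g = nx.Graph()
          g.add_edges_from(pairs)
          hubs = [n for n, d in g.degree() if d > max_degree]
          g.remove_nodes_from(hubs)
          return g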

  2. Bayesian Inference for Neighborhood Filters With Application in Denoising.

    PubMed

    Huang, Chao-Tsung

    2015-11-01

    Range-weighted neighborhood filters are useful and popular for their edge-preserving property and simplicity, but they were originally proposed as intuitive tools. Previous works needed to connect them to other tools or models for indirect property reasoning or parameter estimation. In this paper, we introduce a unified empirical Bayesian framework to do both directly. A neighborhood noise model is proposed to reason about and infer the Yaroslavsky, bilateral, and modified non-local means filters by joint maximum a posteriori and maximum likelihood estimation. Then, the essential parameter, the range variance, can be estimated via model fitting to the empirical distribution of an observable chi scale mixture variable. An algorithm based on expectation-maximization and quasi-Newton optimization is devised to perform the model fitting efficiently. Finally, we apply this framework to the problem of color-image denoising. A recursive fitting and filtering scheme is proposed to improve the image quality. Extensive experiments are performed for a variety of configurations, including different kernel functions, filter types and support sizes, color channel numbers, and noise types. The results show that the proposed framework can fit noisy images well and that the range variance can be estimated successfully and efficiently. PMID:26259244
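
    As a concrete reference point for the filters the paper analyses, here is a minimal range-weighted neighborhood (Yaroslavsky-type) filter in NumPy. The function name and default range_var are illustrative; the paper's contribution is precisely the principled estimation of this range-variance parameter, not the filter itself.

      import numpy as np

      def yaroslavsky_filter(img, radius=3, range_var=0.01):
          # Range-weighted neighborhood average: weights decay with the
          # squared intensity difference; range_var is the parameter the
          # paper estimates by model fitting.
          img = np.asarray(img, float)
          H, W = img.shape
          pad = np.pad(img, radius, mode='reflect')
          num = np.zeros((H, W))
          den = np.zeros((H, W))
          for dy in range(-radius, radius + 1):
              for dx in range(-radius, radius + 1):
                  nb = pad[radius + dy:radius + dy + H,
                           radius + dx:radius + dx + W]
                  w = np.exp(-(nb - img) ** 2 / (2.0 * range_var))
                  num += w * nb
                  den += w
          return num / den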

  3. Point Set Denoising Using Bootstrap-Based Radial Basis Function

    PubMed Central

    Ramli, Ahmad; Abd. Majid, Ahmad

    2016-01-01

    This paper examines the application of bootstrap test error estimation for radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue with point set models generated by 3D scanning devices, and hence point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, which is applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest neighbour search and then projects the point set onto the approximated thin-plate spline surface. The denoising process is thereby achieved while the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study. PMID:27315105
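
    Assuming SciPy's RBFInterpolator with the thin-plate-spline kernel, a simplified reading of the bootstrap error estimate for one smoothing value might look like the sketch below; resampled indices are deduplicated to keep the interpolation matrix nonsingular, which departs from a textbook bootstrap, and the function name is ours.

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      def bootstrap_error(pts, vals, smoothing, n_boot=20, rng=None):
          # Refit the thin-plate-spline RBF on resampled points and score
          # it against the full data; a simplified stand-in for the
          # paper's bootstrap test error estimate.
          rng = rng if rng is not None else np.random.default_rng(0)
          errs = []
          for _ in range(n_boot):
              idx = np.unique(rng.integers(0, len(pts), len(pts)))
              f = RBFInterpolator(pts[idx], vals[idx],
                                  kernel='thin_plate_spline',
                                  smoothing=smoothing)
              errs.append(np.mean((f(pts) - vals) ** 2))
          return float(np.mean(errs))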

  4. A Neuro-Fuzzy Inference System Combining Wavelet Denoising, Principal Component Analysis, and Sequential Probability Ratio Test for Sensor Monitoring

    SciTech Connect

    Na, Man Gyun; Oh, Seungrohk

    2002-11-15

    A neuro-fuzzy inference system combined with wavelet denoising, principal component analysis (PCA), and sequential probability ratio test (SPRT) methods has been developed to monitor a given sensor using the information from other sensors. The parameters of the neuro-fuzzy inference system that estimates the sensor signal are optimized by a genetic algorithm and a least-squares algorithm. The wavelet denoising technique was applied to remove noise components from the input signals to the neuro-fuzzy system. By reducing the dimension of the input space without losing a significant amount of information, PCA was used to shorten the time necessary to train the neuro-fuzzy system, to simplify the structure of the neuro-fuzzy inference system, and to ease the selection of its input signals. Using the residuals between the estimated and the measured signals, the SPRT is applied to detect whether the sensors have degraded. The proposed sensor-monitoring algorithm was verified through applications to the pressurizer water level, pressurizer pressure, and hot-leg temperature sensors in pressurized water reactors.
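
    The SPRT decision stage lends itself to a compact sketch. Below is a standard Wald SPRT for a Gaussian mean shift in the residuals, in NumPy; the drift magnitude m1 and the error rates are illustrative parameters, not values from the paper.

      import numpy as np

      def sprt(residuals, sigma, m1, alpha=0.01, beta=0.01):
          # Wald SPRT: H0 residuals ~ N(0, sigma^2) (healthy sensor)
          # versus H1 residuals ~ N(m1, sigma^2) (degraded sensor).
          lo = np.log(beta / (1.0 - alpha))
          hi = np.log((1.0 - beta) / alpha)
          llr = 0.0
          for r in residuals:
              # log-likelihood ratio increment for a Gaussian mean shift
              llr += (m1 * r - 0.5 * m1 ** 2) / sigma ** 2
              if llr >= hi:
                  return 'degraded'
              if llr <= lo:
                  return 'healthy'
          return 'undecided'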

  5. A New Adaptive Diffusive Function for Magnetic Resonance Imaging Denoising Based on Pixel Similarity

    PubMed Central

    Heydari, Mostafa; Karami, Mohammad Reza

    2015-01-01

    Although there are many methods for image denoising, partial differential equation (PDE)-based denoising has attracted much attention in medical image processing, such as magnetic resonance imaging (MRI). The main advantage of the PDE-based denoising approach lies in its ability to smooth images in a nonlinear way, effectively removing noise while preserving edges through anisotropic diffusion controlled by the diffusive function. This function was first introduced by Perona and Malik (P-M) in their model. They proposed two functions that are the most frequently used in PDE-based methods. Since these functions consider only the gradient information of a diffused pixel, they cannot remove noise in images with a low signal-to-noise ratio (SNR). In this paper we propose a modified diffusive function with fractional power, based on pixel similarity, to improve the P-M model at low SNR. We also show that our proposed function stabilizes the P-M method. As the experimental results show, our proposed function, a modified version of the P-M functions, effectively improves the SNR and preserves edges better than the P-M functions at low SNR. PMID:26955563
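
    For context, the baseline the authors modify is the classical Perona-Malik scheme. A minimal NumPy sketch with the standard diffusive function g(s) = 1/(1 + (s/kappa)^2) follows; the paper's fractional-power, similarity-based function is not reproduced here, and boundary handling is simplified to periodic wrapping for brevity.

      import numpy as np

      def perona_malik(img, n_iter=20, kappa=0.1, lam=0.2):
          # Classical P-M diffusion with g(s) = 1 / (1 + (s / kappa)^2);
          # lam <= 0.25 keeps the explicit 4-neighbour scheme stable.
          u = np.asarray(img, float).copy()

          def g(d):
              return 1.0 / (1.0 + (d / kappa) ** 2)

          for _ in range(n_iter):
              dn = np.roll(u, -1, axis=0) - u
              ds = np.roll(u, 1, axis=0) - u
              de = np.roll(u, -1, axis=1) - u
              dw = np.roll(u, 1, axis=1) - u
              u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
          return u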

  6. [Research on ECG de-noising method based on ensemble empirical mode decomposition and wavelet transform using improved threshold function].

    PubMed

    Ye, Linlin; Yang, Dan; Wang, Xu

    2014-06-01

    A de-noising method for electrocardiogram (ECG) signals based on ensemble empirical mode decomposition (EEMD) and wavelet threshold de-noising theory is proposed. We decomposed noisy ECG signals with EEMD into a series of intrinsic mode functions (IMFs), then selected IMFs and reconstructed them to de-noise the ECG. The processed ECG signals were filtered again with a wavelet transform using an improved threshold function. In the experiments, the MIT-BIH ECG database was used to evaluate the performance of the proposed method against de-noising based on EEMD alone and on wavelet transform with the improved threshold function alone, in terms of signal-to-noise ratio (SNR) and mean square error (MSE). The results showed that ECG waveforms de-noised with the proposed method were smooth and that the amplitudes of ECG features did not attenuate. In conclusion, the method discussed in this paper can de-noise ECG signals while preserving the characteristics of the original signal. PMID:25219236
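
    A hedged sketch of the wavelet-threshold stage using PyWavelets: plain soft shrinkage with the universal threshold stands in for the paper's improved threshold function, and the EEMD stage is omitted.

      import numpy as np
      import pywt

      def wavelet_denoise(sig, wavelet='db6', level=4):
          # Universal-threshold soft shrinkage of the detail coefficients;
          # the noise scale is estimated from the finest-level details.
          coeffs = pywt.wavedec(sig, wavelet, level=level)
          sigma = np.median(np.abs(coeffs[-1])) / 0.6745
          thr = sigma * np.sqrt(2.0 * np.log(len(sig)))
          coeffs[1:] = [pywt.threshold(c, thr, mode='soft')
                        for c in coeffs[1:]]
          return pywt.waverec(coeffs, wavelet)[:len(sig)]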

  7. Image denoising in bidimensional empirical mode decomposition domain: the role of Student's probability distribution function.

    PubMed

    Lahmiri, Salim

    2016-03-01

    Hybridisation of the bi-dimensional empirical mode decomposition (BEMD) with denoising techniques has been proposed in the literature as an effective approach for image denoising. In this Letter, the Student's probability density function is introduced in the computation of the mean envelope of the data during the BEMD sifting process to make it robust to values that are far from the mean. The resulting BEMD is denoted tBEMD. In order to show the effectiveness of the tBEMD, several image denoising techniques in the tBEMD domain are employed; namely, fourth order partial differential equation (PDE), linear complex diffusion process (LCDP), non-linear complex diffusion process (NLCDP), and the discrete wavelet transform (DWT). Two biomedical images and a standard digital image were considered for experiments. The original images were corrupted with additive Gaussian noise at three different levels. Based on peak signal-to-noise ratio, the experimental results show that PDE, LCDP, NLCDP, and DWT all perform better in the tBEMD than in the classical BEMD domain. It is also found that tBEMD is faster than classical BEMD when the noise level is low. When it is high, the computational cost in terms of processing time is similar. The effectiveness of the presented approach makes it promising for clinical applications. PMID:27222723

  8. Functional network inference of the suprachiasmatic nucleus.

    PubMed

    Abel, John H; Meeker, Kirsten; Granados-Fuentes, Daniel; St John, Peter C; Wang, Thomas J; Bales, Benjamin B; Doyle, Francis J; Herzog, Erik D; Petzold, Linda R

    2016-04-19

    In the mammalian suprachiasmatic nucleus (SCN), noisy cellular oscillators communicate within a neuronal network to generate precise system-wide circadian rhythms. Although the intracellular genetic oscillator and intercellular biochemical coupling mechanisms have been examined previously, the network topology driving synchronization of the SCN has not been elucidated. This network has been particularly challenging to probe, due to its oscillatory components and slow coupling timescale. In this work, we investigated the SCN network at a single-cell resolution through a chemically induced desynchronization. We then inferred functional connections in the SCN by applying the maximal information coefficient statistic to bioluminescence reporter data from individual neurons while they resynchronized their circadian cycling. Our results demonstrate that the functional network of circadian cells associated with resynchronization has small-world characteristics, with a node degree distribution that is exponential. We show that hubs of this small-world network are preferentially located in the central SCN, with sparsely connected shells surrounding these cores. Finally, we used two computational models of circadian neurons to validate our predictions of network structure. PMID:27044085

  9. Green Channel Guiding Denoising on Bayer Image

    PubMed Central

    Zhang, Maojun

    2014-01-01

    Denoising is an indispensable function for digital cameras. Because noise is diffused during demosaicking, denoising ought to work directly on Bayer data. The difficulty of denoising a Bayer image is the interlaced mosaic pattern of red, green, and blue. The guided filter is a novel, time-efficient explicit filter kernel that can incorporate additional information from a guidance image, but it has not yet been applied to Bayer images. In this work, we observe that the green channel of the Bayer mosaic is higher in both sampling rate and signal-to-noise ratio (SNR) than the red and blue channels. Therefore the green channel can be used to guide denoising. This kind of guidance integrates the different color channels together. Experiments on both actual and simulated Bayer images indicate that the green channel works well as the guidance signal, and that the proposed method is competitive with other popular filter-kernel denoising methods. PMID:24741370
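
    The guided filter itself is standard (He et al.), so a compact NumPy/SciPy version is sketched below; applying it to a Bayer mosaic, with an interpolated green plane as the guide, is left implicit, and the radius and eps defaults are illustrative.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def guided_filter(guide, src, radius=4, eps=1e-3):
          # Smooth `src` using edge structure borrowed from `guide` via
          # a local linear model src ~ a * guide + b in each window.
          box = lambda x: uniform_filter(x, size=2 * radius + 1)
          mean_g, mean_s = box(guide), box(src)
          cov_gs = box(guide * src) - mean_g * mean_s
          var_g = box(guide * guide) - mean_g ** 2
          a = cov_gs / (var_g + eps)   # local linear coefficient
          b = mean_s - a * mean_g
          return box(a) * guide + box(b)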

  10. Functional neuroanatomy of intuitive physical inference.

    PubMed

    Fischer, Jason; Mikhael, John G; Tenenbaum, Joshua B; Kanwisher, Nancy

    2016-08-23

    To engage with the world – to understand the scene in front of us, plan actions, and predict what will happen next – we must have an intuitive grasp of the world's physical structure and dynamics. How do the objects in front of us rest on and support each other, how much force would be required to move them, and how will they behave when they fall, roll, or collide? Despite the centrality of physical inferences in daily life, little is known about the brain mechanisms recruited to interpret the physical structure of a scene and predict how physical events will unfold. Here, in a series of fMRI experiments, we identified a set of cortical regions that are selectively engaged when people watch and predict the unfolding of physical events – a "physics engine" in the brain. These brain regions are selective to physical inferences relative to nonphysical but otherwise highly similar scenes and tasks. However, these regions are not exclusively engaged in physical inferences per se or, indeed, even in scene understanding; they overlap with the domain-general "multiple demand" system, especially the parts of that system involved in action planning and tool use, pointing to a close relationship between the cognitive and neural mechanisms involved in parsing the physical content of a scene and preparing an appropriate action. PMID:27503892

  11. Denoising of high-resolution single-particle electron-microscopy density maps by their approximation using three-dimensional Gaussian functions.

    PubMed

    Jonić, S; Vargas, J; Melero, R; Gómez-Blanco, J; Carazo, J M; Sorzano, C O S

    2016-06-01

    Cryo-electron microscopy (cryo-EM) of frozen-hydrated preparations of isolated macromolecular complexes is the method of choice to obtain the structure of complexes that cannot be easily studied by other experimental methods due to their flexibility or large size. An increasing number of macromolecular structures are currently being obtained at subnanometer resolution but the interpretation of structural details in such EM-derived maps is often difficult because of noise at these high-frequency signal components that reduces their contrast. In this paper, we show that the method for EM density-map approximation using Gaussian functions can be used for denoising of single-particle EM maps of high (typically subnanometer) resolution. We show its denoising performance using simulated and experimental EM density maps of several complexes. PMID:27085420

  12. Automatic Denoising of Functional MRI Data: Combining Independent Component Analysis and Hierarchical Fusion of Classifiers

    PubMed Central

    Salimi-Khorshidi, Gholamreza; Douaud, Gwenaëlle; Beckmann, Christian F; Glasser, Matthew F; Griffanti, Ludovica; Smith, Stephen M

    2014-01-01

    Many sources of fluctuation contribute to the fMRI signal, and this makes identifying the effects that are truly related to the underlying neuronal activity difficult. Independent component analysis (ICA) – one of the most widely used techniques for the exploratory analysis of fMRI data – has been shown to be a powerful technique for identifying various sources of neuronally-related and artefactual fluctuation in fMRI data (both with the application of external stimuli and with the subject “at rest”). ICA decomposes fMRI data into patterns of activity (a set of spatial maps and their corresponding time series) that are statistically independent and add linearly to explain voxel-wise time series. Given the set of ICA components, if the components representing “signal” (brain activity) can be distinguished from the “noise” components (effects of motion, non-neuronal physiology, scanner artefacts and other nuisance sources), the latter can then be removed from the data, providing an effective cleanup of structured noise. Manual classification of components is labour intensive and requires expertise; hence, a fully automatic noise detection algorithm that can reliably detect various types of noise sources (in both task and resting fMRI) is desirable. In this paper, we introduce FIX (“FMRIB’s ICA-based X-noiseifier”), which provides an automatic solution for denoising fMRI data via accurate classification of ICA components. For each ICA component FIX generates a large number of distinct spatial and temporal features, each describing a different aspect of the data (e.g., what proportion of temporal fluctuations are at high frequencies). The set of features is then fed into a multi-level classifier (built around several different classifiers). Once trained through the hand-classification of a sufficient number of training datasets, the classifier can then automatically classify new datasets. The noise components can then be subtracted from (or regressed out of
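
    One of the temporal features FIX computes is the proportion of fluctuation at high frequencies. A plausible implementation of such a feature (not FIX's actual definition) is sketched below, with the TR and cutoff as illustrative assumptions.

      import numpy as np

      def high_freq_fraction(ts, tr=0.72, cutoff_hz=0.1):
          # Fraction of spectral power above cutoff_hz in an ICA
          # component time series; tr and cutoff_hz are illustrative.
          freqs = np.fft.rfftfreq(len(ts), d=tr)
          power = np.abs(np.fft.rfft(ts - np.mean(ts))) ** 2
          return power[freqs > cutoff_hz].sum() / power.sum()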

  13. Nonparametric inference on median residual life function.

    PubMed

    Jeong, Jong-Hyeon; Jung, Sin-Ho; Costantino, Joseph P

    2008-03-01

    A simple approach to the estimation of the median residual lifetime is proposed for a single group by inverting a function of the Kaplan-Meier estimators. A test statistic is proposed to compare two median residual lifetimes at any fixed time point. The test statistic does not involve estimation of the underlying probability density function of failure times under censoring. Extensive simulation studies are performed to validate the proposed test statistic in terms of type I error probabilities and powers at various time points. One of the oldest data sets from the National Surgical Adjuvant Breast and Bowel Project (NSABP), which has more than a quarter century of follow-up, is used to illustrate the method. The analysis results indicate that, without systematic post-operative therapy, a significant difference in median residual lifetimes between node-negative and node-positive breast cancer patients persists for about 10 years after surgery. The new estimates of the median residual lifetime could serve as a baseline for physicians to explain any incremental effects of post-operative treatments in terms of delaying breast cancer recurrence or prolonging remaining lifetimes of breast cancer patients. PMID:17501936
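
    The inversion described above can be sketched directly. Assuming no tied failure times, the snippet below computes the Kaplan-Meier curve and the smallest u with S(t0 + u) <= S(t0)/2; the function names are ours, and the paper's test statistic is not reproduced.

      import numpy as np

      def km_survival(times, events):
          # Kaplan-Meier survival estimate, assuming no tied event times;
          # events is 1 for an observed failure, 0 for censoring.
          times = np.asarray(times, float)
          events = np.asarray(events, float)
          order = np.argsort(times)
          t, e = times[order], events[order]
          at_risk = len(t) - np.arange(len(t))
          s = np.cumprod(1.0 - e / at_risk)
          return t, s

      def median_residual_life(times, events, t0):
          # Smallest u with S(t0 + u) <= S(t0) / 2, by inverting the
          # Kaplan-Meier curve.
          t, s = km_survival(times, events)
          s_t0 = s[t <= t0][-1] if np.any(t <= t0) else 1.0
          later = (t > t0) & (s <= s_t0 / 2.0)
          return t[later][0] - t0 if np.any(later) else np.inf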

  14. Functional inferences of environmental coccolithovirus biodiversity.

    PubMed

    Nissimov, Jozef I; Jones, Mark; Napier, Johnathan A; Munn, Colin B; Kimmance, Susan A; Allen, Michael J

    2013-10-01

    The cosmopolitan calcifying alga Emiliania huxleyi is one of the most abundant bloom forming coccolithophore species in the oceans and plays an important role in global biogeochemical cycling. Coccolithoviruses are a major cause of coccolithophore bloom termination and have been studied in laboratory, mesocosm and open ocean studies. However, little is known about the dynamic interactions between the host and its viruses, and less is known about the natural diversity and role of functionally important genes within natural coccolithovirus communities. Here, we investigate the temporal and spatial distribution of coccolithoviruses by the use of molecular fingerprinting techniques PCR, DGGE and genomic sequencing. The natural biodiversity of the virus genes encoding the major capsid protein (MCP) and serine palmitoyltransferase (SPT) were analysed in samples obtained from the Atlantic Meridional Transect (AMT), the North Sea and the L4 site in the Western Channel Observatory. We discovered nine new coccolithovirus genotypes across the AMT and L4 site, with the majority of MCP sequences observed at the deep chlorophyll maximum layer of the sampled sites on the transect. We also found four new SPT gene variations in the North Sea and at L4. Their translated fragments and the full protein sequence of SPT from laboratory strains EhV-86 and EhV-99B1 were modelled and revealed that the theoretical fold differs among strains. Variation identified in the structural distance between the two domains of the SPT protein may have an impact on the catalytic capabilities of its active site. In summary, the combined use of 'standard' markers (i.e. MCP), in combination with metabolically relevant markers (i.e. SPT) are useful in the study of the phylogeny and functional biodiversity of coccolithoviruses, and can provide an interesting intracellular insight into the evolution of these viruses and their ability to infect and replicate within their algal hosts. PMID:24006045

  15. Classical methods for interpreting objective function minimization as intelligent inference

    SciTech Connect

    Golden, R.M.

    1996-12-31

    Most recognition algorithms and neural networks can be formally viewed as seeking a minimum value of an appropriate objective function during either classification or learning phases. The goal of this paper is to argue that in order to show a recognition algorithm is making intelligent inferences, it is not sufficient to show that the recognition algorithm is computing (or trying to compute) the global minimum of some objective function. One must explicitly define a "relational system" for the recognition algorithm or neural network which identifies: (i) the sample space, (ii) the relevant sigma-field of events generated by the sample space, and (iii) the "relation" for that relational system. Only when such a "relational system" is properly defined is it possible to formally establish the sense in which computing the global minimum of an objective function is an intelligent inference.

  16. Denoising PCR-amplified metagenome data

    PubMed Central

    2012-01-01

    Background: PCR amplification and high-throughput sequencing theoretically enable the characterization of the finest-scale diversity in natural microbial and viral populations, but each of these methods introduces random errors that are difficult to distinguish from genuine biological diversity. Several approaches have been proposed to denoise these data but lack either speed or accuracy.

    Results: We introduce a new denoising algorithm that we call DADA (Divisive Amplicon Denoising Algorithm). Without training data, DADA infers both the sample genotypes and error parameters that produced a metagenome data set. We demonstrate performance on control data sequenced on Roche’s 454 platform, and compare the results to the most accurate denoising software currently available, AmpliconNoise.

    Conclusions: DADA is more accurate and over an order of magnitude faster than AmpliconNoise. It eliminates the need for training data to establish error parameters, fully utilizes sequence-abundance information, and enables inclusion of context-dependent PCR error rates. It should be readily extensible to other sequencing platforms such as Illumina. PMID:23113967

  17. Local thresholding de-noise speech signal

    NASA Astrophysics Data System (ADS)

    Luo, Haitao

    2013-07-01

    De-noise speech signal if it is noisy. Construct a wavelet according to Daubechies' method, and derive a wavelet packet from the constructed scaling and wavelet functions. Decompose the noisy speech signal by wavelet packet. Develop algorithms to detect beginning and ending point of speech. Construct polynomial function for local thresholding. Apply different strategies to de-noise and compress the decomposed terminal nodes coefficients. Reconstruct the wavelet packet tree. Re-build audio file using reconstructed data and compare the effectiveness of different strategies.
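
    A minimal PyWavelets rendering of the decompose/threshold/reconstruct loop described above, with a single global soft threshold standing in for the paper's polynomial local thresholding; the wavelet choice and threshold value are illustrative.

      import pywt

      def wp_denoise(x, wavelet='db8', maxlevel=4, thr=0.05):
          # Decompose into a wavelet packet tree, soft-threshold every
          # terminal node, then reconstruct the signal.
          wp = pywt.WaveletPacket(data=x, wavelet=wavelet,
                                  maxlevel=maxlevel)
          for node in wp.get_level(maxlevel, order='natural'):
              node.data = pywt.threshold(node.data, thr, mode='soft')
          return wp.reconstruct(update=False)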

  18. Receiver function deconvolution using transdimensional hierarchical Bayesian inference

    NASA Astrophysics Data System (ADS)

    Kolb, J. M.; Lekić, V.

    2014-06-01

    Teleseismic waves can convert from shear to compressional (Sp) or compressional to shear (Ps) across impedance contrasts in the subsurface. Deconvolving the parent waveforms (P for Ps or S for Sp) from the daughter waveforms (S for Ps or P for Sp) generates receiver functions which can be used to analyse velocity structure beneath the receiver. Though a variety of deconvolution techniques have been developed, they are all adversely affected by background and signal-generated noise. In order to take into account the unknown noise characteristics, we propose a method based on transdimensional hierarchical Bayesian inference in which both the noise magnitude and noise spectral character are parameters in calculating the likelihood probability distribution. We use a reversible-jump implementation of a Markov chain Monte Carlo algorithm to find an ensemble of receiver functions whose relative fits to the data have been calculated while simultaneously inferring the values of the noise parameters. Our noise parametrization is determined from pre-event noise so that it approximates observed noise characteristics. We test the algorithm on synthetic waveforms contaminated with noise generated from a covariance matrix obtained from observed noise. We show that the method retrieves easily interpretable receiver functions even in the presence of high noise levels. We also show that we can obtain useful estimates of noise amplitude and frequency content. Analysis of the ensemble solutions produced by our method can be used to quantify the uncertainties associated with individual receiver functions as well as with individual features within them, providing an objective way for deciding which features warrant geological interpretation. This method should make possible more robust inferences on subsurface structure using receiver function analysis, especially in areas of poor data coverage or under noisy station conditions.
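
    The key idea of treating the noise as an inferred parameter can be shown with a fixed-dimension Metropolis sketch (the paper's sampler is reversible-jump and transdimensional, which this deliberately omits); forward, theta0, and the proposal scale are placeholders.

      import numpy as np

      def mh_with_noise_param(data, forward, theta0, sigma0,
                              n_steps=5000, step=0.01, rng=None):
          # Metropolis sampler in which the noise standard deviation is
          # itself sampled, as in hierarchical Bayesian inversion.
          rng = rng if rng is not None else np.random.default_rng()
          theta, sigma = np.asarray(theta0, float), float(sigma0)

          def logpost(th, sg):
              r = data - forward(th)
              return (-0.5 * np.sum(r ** 2) / sg ** 2
                      - len(data) * np.log(sg))

          lp, samples = logpost(theta, sigma), []
          for _ in range(n_steps):
              th_new = theta + step * rng.standard_normal(theta.shape)
              sg_new = abs(sigma + step * rng.standard_normal())
              lp_new = logpost(th_new, sg_new)
              if np.log(rng.random()) < lp_new - lp:
                  theta, sigma, lp = th_new, sg_new, lp_new
              samples.append((theta.copy(), sigma))
          return samples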

  19. Beyond the bounds of orthology: functional inference from metagenomic context.

    PubMed

    Vey, Gregory; Moreno-Hagelsieb, Gabriel

    2010-07-01

    The effectiveness of the computational inference of function by genomic context is bounded by the diversity of known microbial genomes. Although metagenomes offer access to previously inaccessible organisms, their fragmentary nature prevents the conventional establishment of the orthologous relationships required for reliably predicting functional interactions. We introduce a protocol for the prediction of functional interactions using data sources without information about orthologous relationships. To illustrate this process, we use the Sargasso Sea metagenome to construct a functional interaction network for the Escherichia coli K12 genome. We identify two reliability metrics, target intergenic distance and source interaction count, and apply them to selectively filter the predictions retained to construct the network of functional interactions. The resulting network contains 2297 nodes and 10,072 edges with a positive predictive value of 0.80. The metagenome yielded 8423 functional interactions beyond those found using only the genomic orthologs as a data source. This amounted to a 134% increase in the total number of functional interactions predicted by combining the metagenome and the genomic orthologs versus the genomic orthologs alone. In the absence of detectable orthologous relationships, it remains feasible to derive a reliable set of predicted functional interactions. This offers a strategy for harnessing other metagenomes and homologs in general. Because metagenomes allow access to previously unreachable microorganisms, this will expand the universe of known functional interactions, furthering our understanding of functional organization. PMID:20419183

  20. Network inference from functional experimental data (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Desrosiers, Patrick; Labrecque, Simon; Tremblay, Maxime; Bélanger, Mathieu; De Dorlodot, Bertrand; Côté, Daniel C.

    2016-03-01

    Functional connectivity maps of neuronal networks are critical tools to understand how neurons form circuits, how information is encoded and processed by neurons, how memory is shaped, and how these basic processes are altered under pathological conditions. Current light microscopy makes it possible to observe the calcium or electrical activity of thousands of neurons simultaneously, yet assessing comprehensive connectivity maps directly from such data remains a non-trivial analytical task. There exist simple statistical methods, such as cross-correlation and Granger causality, but they only detect linear interactions between neurons. Other, more involved inference methods inspired by information theory, such as mutual information and transfer entropy, identify connections between neurons more accurately but also require more computational resources. We carried out a comparative study of common connectivity inference methods. The relative accuracy and computational cost of each method was determined via simulated fluorescence traces generated with realistic computational models of interacting neurons in networks of different topologies (clustered or non-clustered) and sizes (10-1000 neurons). To bridge the computational and experimental work, we observed the intracellular calcium activity of live hippocampal neuronal cultures infected with the fluorescent calcium marker GCaMP6f. The spontaneous activity of the networks, consisting of 50-100 neurons per field of view, was recorded at 20 to 50 Hz on a microscope controlled by homemade software. We implemented all connectivity inference methods in the software, which rapidly loads calcium fluorescence movies, segments the images, extracts the fluorescence traces, and assesses the functional connections (with strengths and directions) between each pair of neurons. We used this software to assess, in real time, the functional connectivity from real calcium imaging data in basal conditions, under plasticity protocols, and epileptic
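
    The simplest of the benchmarked methods, thresholded pairwise correlation, fits in a few lines of NumPy; the threshold value is illustrative, and the information-theoretic estimators the paper also tests are not shown.

      import numpy as np

      def xcorr_connectivity(traces, threshold=0.3):
          # traces: (n_neurons, n_samples) fluorescence matrix; returns
          # the Pearson correlation matrix and a thresholded adjacency.
          z = traces - traces.mean(axis=1, keepdims=True)
          z = z / z.std(axis=1, keepdims=True)
          corr = z @ z.T / traces.shape[1]
          adj = (np.abs(corr) > threshold) & ~np.eye(len(corr),
                                                     dtype=bool)
          return corr, adj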

  21. Visualization of group inference data in functional neuroimaging.

    PubMed

    Gläscher, Jan

    2009-01-01

    While thresholded statistical parametric maps can convey an accurate account of the location and spatial extent of an effect in functional neuroimaging studies, their use is somewhat limited for characterizing more complex experimental effects, such as interactions in a factorial design. The resulting necessity for plotting the underlying data has long been recognized. Statistical Parametric Mapping (SPM) is a widely used software package for analyzing functional neuroimaging data that offers a variety of options for visualizing data from first level analyses. However, nowadays the thrust of the statistical inference lies at the second level, thus allowing for population inference. Unfortunately, the options for visualizing data from second level analyses are quite sparse. rfxplot is a new toolbox designed to alleviate this problem by providing a comprehensive array of options for plotting data from within second level analyses in SPM. These include graphs of average effect sizes (across subjects), averaged fitted responses, and event-related blood oxygen level-dependent (BOLD) time courses. All data are retrieved from the underlying first level analyses, and voxel selection can be tailored to the maximum effect in each subject within a defined search volume. All plots can be easily configured via a graphical user interface as well as non-interactively via a script. The large variety of plot options renders rfxplot suitable both for data exploration and for producing high-quality figures for publications. PMID:19140033

  22. Explanation and inference: mechanistic and functional explanations guide property generalization

    PubMed Central

    Lombrozo, Tania; Gwynne, Nicholas Z.

    2014-01-01

    The ability to generalize from the known to the unknown is central to learning and inference. Two experiments explore the relationship between how a property is explained and how that property is generalized to novel species and artifacts. The experiments contrast the consequences of explaining a property mechanistically, by appeal to parts and processes, with the consequences of explaining the property functionally, by appeal to functions and goals. The findings suggest that properties that are explained functionally are more likely to be generalized on the basis of shared functions, with a weaker relationship between mechanistic explanations and generalization on the basis of shared parts and processes. The influence of explanation type on generalization holds even though all participants are provided with the same mechanistic and functional information, and whether an explanation type is freely generated (Experiment 1), experimentally provided (Experiment 2), or experimentally induced (Experiment 2). The experiments also demonstrate that explanations and generalizations of a particular type (mechanistic or functional) can be experimentally induced by providing sample explanations of that type, with a comparable effect whether the sample explanations come from the same domain or from a different one. These results suggest that explanations serve as a guide to generalization, and contribute to a growing body of work supporting the value of distinguishing mechanistic and functional explanations. PMID:25309384

  23. MR image denoising method for brain surface 3D modeling

    NASA Astrophysics Data System (ADS)

    Zhao, De-xin; Liu, Peng-jie; Zhang, De-gan

    2014-11-01

    Three-dimensional (3D) modeling of medical images is a critical part of surgical simulation. In this paper, we focus on magnetic resonance (MR) image denoising for brain modeling reconstruction, and exploit a practical solution. We attempt to remove the noise existing in the MR imaging signal while preserving the image characteristics. A wavelet-based adaptive curve shrinkage function is presented in the spherical coordinate system. Comparative experiments show that the denoising method preserves image details better and enhances the coefficients of contours. Using these denoised images, the 3D visualization of the brain is given through a surface triangle mesh model, which demonstrates the effectiveness of the proposed method.

  24. Denoising ECG signal based on ensemble empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Zhi-dong, Zhao; Liu, Juan; Wang, Sheng-tao

    2011-10-01

    The electrocardiogram (ECG) has been used extensively for the detection of heart disease. Frequently the signal is corrupted by various kinds of noise, such as muscle noise, electromyogram (EMG) interference, and instrument noise. In this paper, a new ECG denoising method is proposed based on the recently developed ensemble empirical mode decomposition (EEMD). The noisy ECG signal is decomposed into a series of intrinsic mode functions (IMFs). The statistically significant information content is built from an empirical energy model of the IMFs. Noisy ECG signals collected from clinical recordings were processed using the method. The results show that, in contrast with traditional methods, the novel denoising method can achieve optimal denoising of the ECG signal.
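
    Assuming the third-party PyEMD package (pip install EMD-signal), the EEMD decomposition-and-partial-reconstruction idea can be sketched as follows; which IMFs to drop is decided in the paper by an empirical energy model, here replaced by simply discarding the first two as a placeholder.

      import numpy as np
      from PyEMD import EEMD  # third-party package: EMD-signal

      def eemd_denoise(sig, drop=2):
          # Decompose with EEMD and rebuild from the retained IMFs; the
          # choice of dropping the first `drop` (noisiest) IMFs is a
          # placeholder for the paper's empirical energy criterion.
          imfs = EEMD().eemd(np.asarray(sig, float))
          return imfs[drop:].sum(axis=0)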

  25. Medical-Legal Inferences From Functional Neuroimaging Evidence.

    PubMed

    Mayberg

    1996-07-01

    Positron emission tomography (PET) and single-photon emission tomography (SPECT) are validated functional imaging techniques for the in vivo measurement of many neurophysiological and neurochemical parameters. Research studies of patients with a broad range of neurological and psychiatric illness have been published. Reproducible and specific patterns of altered cerebral blood flow and glucose metabolism, however, have been demonstrated and confirmed for only a limited number of specific illnesses. The association of functional scan patterns with specific deficits is less conclusive. Correlations of regional abnormalities with clinical symptoms such as motor weakness, aphasia, and visual spatial dysfunction are the most reproducible but are more poorly localized than lesion-deficit studies would suggest. Findings are even less consistent for nonlocalizing behavioral symptoms such as memory difficulties, poor concentration, irritability, or chronic pain, and no reliable patterns have been demonstrated. In a forensic context, homicidal and sadistic tendencies, aberrant sexual drive, violent impulsivity, psychopathic and sociopathic personality traits, as well as impaired judgement and poor insight, have no known PET or SPECT patterns, and their presence in an individual with any PET or SPECT scan finding cannot be inferred or concluded. Furthermore, the reliable prediction of any specific neurological, psychiatric, or behavioral deficits from specific scan findings has not been demonstrated. Unambiguous results from experiments designed to specifically examine the causative relationships between regional brain dysfunction and these types of complex behaviors are needed before any introduction of functional scans into the courts can be considered scientifically justified or legally admissible. PMID:10320420

  26. Astronomical image denoising using dictionary learning

    NASA Astrophysics Data System (ADS)

    Beckouche, S.; Starck, J. L.; Fadili, J.

    2013-08-01

    Astronomical images suffer from the constant presence of multiple defects that are consequences of the atmospheric conditions and of the intrinsic properties of the acquisition equipment. One of the most frequent defects in astronomical imaging is the presence of additive noise, which makes a denoising step mandatory before processing data. During the last decade, a particular modeling scheme, based on sparse representations, has drawn the attention of an ever growing community of researchers. Sparse representations offer a promising framework for many image and signal processing tasks, especially denoising and restoration applications. At first, harmonics, wavelets, similar bases, and overcomplete representations were considered as candidate domains in which to seek the sparsest representation. A new generation of algorithms, based on data-driven dictionaries, has evolved rapidly and now competes with off-the-shelf fixed dictionaries. Although designing a dictionary relies on guessing the representative elementary forms and functions, the framework of dictionary learning offers the possibility of constructing the dictionary using the data themselves, which provides a more flexible setup for sparse modeling and allows us to build more sophisticated dictionaries. In this paper, we introduce the centered dictionary learning (CDL) method and we study its performance for astronomical image denoising. We show how CDL outperforms wavelet or classic dictionary learning denoising techniques on astronomical images, and we give a comparison of the effects of these different algorithms on the photometry of the denoised images. The current version of the code is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/556/A132

  27. Improved Rotating Kernel Transformation Based Contourlet Domain Image Denoising Framework

    PubMed Central

    Guo, Qing; Dong, Fangmin; Ren, Xuhong; Feng, Shiyu; Gao, Bruce Zhi

    2016-01-01

    A contourlet domain image denoising framework based on a novel Improved Rotating Kernel Transformation is proposed, in which the differences between subbands in the contourlet domain are taken into account. In detail: (1) a novel Improved Rotating Kernel Transformation (IRKT) is proposed to calculate the direction statistic of the image; the validity of the IRKT is verified by comparing the extracted edge information with that of a state-of-the-art edge detection algorithm. (2) The direction statistic represents the difference between subbands and is introduced into threshold-function-based contourlet domain denoising approaches in the form of weights, yielding the novel framework. The proposed framework is used to improve the contourlet soft-thresholding (CTSoft) and contourlet bivariate-thresholding (CTB) algorithms. The denoising results on conventional test images and Optical Coherence Tomography (OCT) medical images show that the proposed methods improve the existing contourlet-based thresholding denoising algorithms, especially for the medical images. PMID:27148597

  28. Image denoising filter based on patch-based difference refinement

    NASA Astrophysics Data System (ADS)

    Park, Sang Wook; Kang, Moon Gi

    2012-06-01

    In the denoising literature, much research has built on the nonlocal means (NLM) filter, with many variations and improvements regarding the weight function and parameter optimization. Here, an NLM filter with patch-based difference (PBD) refinement is presented. PBD refinement, which is the weighted average of the PBD values, is performed with respect to the difference images of all the locations in a refinement kernel. With refined and denoised PBD values, a pattern-adaptive smoothing threshold and noise-suppressed NLM filter weights are calculated. Owing to the refinement of the PBD values, the patterns are divided into flat regions and texture regions by comparing the sorted values in the PBD domain to a threshold value that includes the noise standard deviation. Then, two different smoothing thresholds are utilized for denoising each region, and finally the NLM filter is applied. Experimental results of the proposed scheme are shown in comparison with several state-of-the-art NLM-based denoising methods.
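
    For orientation, the baseline NLM that the patch-based-difference refinement builds on is available in scikit-image; the call below is that library's stock filter with illustrative parameters, not the authors' refined method.

      import numpy as np
      from skimage.restoration import denoise_nl_means, estimate_sigma

      def nlm_baseline(noisy):
          # Stock scikit-image NLM with an estimated noise level; the
          # h, patch_size, and patch_distance choices are illustrative.
          sigma = estimate_sigma(noisy)
          return denoise_nl_means(noisy, h=0.8 * sigma, sigma=sigma,
                                  patch_size=5, patch_distance=6)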

  29. Nonlocal Markovian models for image denoising

    NASA Astrophysics Data System (ADS)

    Salvadeo, Denis H. P.; Mascarenhas, Nelson D. A.; Levada, Alexandre L. M.

    2016-01-01

    Currently, the state-of-the-art methods for image denoising are patch-based approaches. Redundant information present in nonlocal regions (patches) of the image is considered for better image modeling, resulting in improved filtering quality. In this respect, nonlocal Markov random field (MRF) models are proposed by redefining the energy functions of classical MRF models to adopt a nonlocal approach. With the new energy functions, the pairwise pixel interaction is weighted according to the similarities between the patches corresponding to each pair. A maximum pseudolikelihood estimation of the spatial dependency parameter (β) for these models is also presented. For evaluation, these models are used as a priori models in a maximum a posteriori estimation to denoise additive white Gaussian noise in images. Finally, the results display a notable improvement in both quantitative and qualitative terms in comparison with the local MRFs.

  30. Constructing a Flexible Likelihood Function for Spectroscopic Inference

    NASA Astrophysics Data System (ADS)

    Czekala, Ian; Andrews, Sean M.; Mandel, Kaisey S.; Hogg, David W.; Green, Gregory M.

    2015-10-01

    We present a modular, extensible likelihood framework for spectroscopic inference based on synthetic model spectra. The subtraction of an imperfect model from a continuously sampled spectrum introduces covariance between adjacent datapoints (pixels) into the residual spectrum. For the high signal-to-noise data with large spectral range that is commonly employed in stellar astrophysics, that covariant structure can lead to dramatically underestimated parameter uncertainties (and, in some cases, biases). We construct a likelihood function that accounts for the structure of the covariance matrix, utilizing the machinery of Gaussian process kernels. This framework specifically addresses the common problem of mismatches in model spectral line strengths (with respect to data) due to intrinsic model imperfections (e.g., in the atomic/molecular databases or opacity prescriptions) by developing a novel local covariance kernel formalism that identifies and self-consistently downweights pathological spectral line “outliers.” By fitting many spectra in a hierarchical manner, these local kernels provide a mechanism to learn about and build data-driven corrections to synthetic spectral libraries. An open-source software implementation of this approach is available at http://iancze.github.io/Starfish, including a sophisticated probabilistic scheme for spectral interpolation when using model libraries that are sparsely sampled in the stellar parameters. We demonstrate some salient features of the framework by fitting the high-resolution V-band spectrum of WASP-14, an F5 dwarf with a transiting exoplanet, and the moderate-resolution K-band spectrum of Gliese 51, an M5 field dwarf.
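
    The core of the framework, evaluating a Gaussian likelihood under a dense pixel-pixel covariance, can be sketched with a squared-exponential kernel standing in for the paper's global and local line kernels; the amplitude, length scale, and jitter values are illustrative.

      import numpy as np
      from scipy.linalg import cho_factor, cho_solve

      def log_likelihood(resid, wave, amp=1e-3, scale=2.0, jitter=1e-6):
          # Gaussian log-likelihood of a residual spectrum under a dense
          # squared-exponential covariance between pixels.
          dx = wave[:, None] - wave[None, :]
          K = amp * np.exp(-0.5 * (dx / scale) ** 2)
          K[np.diag_indices_from(K)] += jitter
          c, low = cho_factor(K)
          alpha = cho_solve((c, low), resid)
          logdet = 2.0 * np.sum(np.log(np.diag(c)))
          return -0.5 * (resid @ alpha + logdet
                         + len(resid) * np.log(2.0 * np.pi))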

  31. CT reconstruction via denoising approximate message passing

    NASA Astrophysics Data System (ADS)

    Perelli, Alessandro; Lexa, Michael A.; Can, Ali; Davies, Mike E.

    2016-05-01

    In this paper, we adapt and apply a compressed sensing based reconstruction algorithm to the problem of computed tomography reconstruction for luggage inspection. Specifically, we propose a variant of the denoising generalized approximate message passing (D-GAMP) algorithm and compare its performance to the performance of traditional filtered back projection and to a penalized weighted least squares (PWLS) based reconstruction method. D-GAMP is an iterative algorithm that at each iteration estimates the conditional probability of the image given the measurements and employs a non-linear "denoising" function which implicitly imposes an image prior. Results on real baggage show that D-GAMP is well-suited to limited-view acquisitions.

  32. A New Adaptive Image Denoising Method

    NASA Astrophysics Data System (ADS)

    Biswas, Mantosh; Om, Hari

    2016-03-01

    In this paper, a new adaptive image denoising method is proposed that follows the soft-thresholding technique. In our method, a new threshold function is also proposed, which is determined by taking various combinations of the noise level, noise-free signal variance, subband size, and decomposition level. It is simple and adaptive, as it depends on data-driven parameter estimation in each subband. State-of-the-art denoising methods, viz. VisuShrink, SureShrink, BayesShrink, WIDNTF, and IDTVWT, are not able to modify the coefficients efficiently enough to provide good image quality. Our method removes noise from the noisy image significantly and provides better visual quality.

  33. Structure-based inference of molecular functions of proteins of unknown function from Berkeley Structural Genomics Center

    SciTech Connect

    Shin, Dong Hae; Hou, Jingtong; Chandonia, John-Marc; Das, Debanu; Choi, In-Geol; Kim, Rosalind; Kim, Sung-Hou

    2007-09-02

    Advances in sequence genomics have resulted in the accumulation of a huge number of protein sequences derived from genome sequences. However, the functions of a large portion of them cannot be inferred using current methods for detecting sequence homology to proteins of known function. Three-dimensional structure can have an important impact in providing inference of molecular function (physical and chemical function) of a protein of unknown function. Structural genomics centers worldwide have been determining many 3-D structures of proteins of unknown function, and their possible molecular functions have been inferred based on their structures. Combined with bioinformatics and enzymatic assay tools, the successful acceleration of the process of protein structure determination through high-throughput pipelines enables the rapid functional annotation of a large fraction of hypothetical proteins. We present a brief summary of the process we used at the Berkeley Structural Genomics Center to infer the molecular functions of proteins of unknown function.

  34. Research and Implementation of Heart Sound Denoising

    NASA Astrophysics Data System (ADS)

    Liu, Feng; Wang, Yutai; Wang, Yanxiang

    Heart sound is one of the most important physiological signals. However, the acquisition of heart sound signals can be interfered with by many external factors. Heart sound is a weak electrical signal, and even weak external noise may lead to misjudgment of the pathological and physiological information in the signal, and thus to misdiagnosis. As a result, removing the noise mixed with the heart sound is a key task. In this paper, a systematic study and analysis of heart sound denoising based on MATLAB is presented. The method first transforms noisy heart sound signals into the wavelet domain through the wavelet transform and decomposes the signals at multiple levels. Then, for the detail coefficients, soft thresholding is applied using wavelet threshold denoising to eliminate noise, so that signal denoising is significantly improved. The reconstructed signals are obtained by stepwise coefficient reconstruction from the processed detail coefficients. Lastly, 50 Hz power-frequency and 35 Hz mechanical and electrical interference signals are eliminated using a notch filter.
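
    The final notch-filtering step maps directly onto SciPy; the sketch below removes 50 Hz mains interference, with the sampling rate and quality factor as illustrative assumptions (the 35 Hz component would be handled by a second notch).

      from scipy.signal import iirnotch, filtfilt

      def remove_mains(x, fs=2000.0, f0=50.0, q=30.0):
          # Zero-phase 50 Hz notch; fs and q are illustrative values,
          # not taken from the paper.
          b, a = iirnotch(f0, q, fs=fs)
          return filtfilt(b, a, x)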

  35. Birdsong Denoising Using Wavelets.

    PubMed

    Priyadarshani, Nirosha; Marsland, Stephen; Castro, Isabel; Punchihewa, Amal

    2016-01-01

    Automatic recording of birdsong is becoming the preferred way to monitor and quantify bird populations worldwide. Programmable recorders allow recordings to be obtained at all times of day and year for extended periods of time. Consequently, there is a critical need for robust automated birdsong recognition. One prominent obstacle to achieving this is low signal to noise ratio in unattended recordings. Field recordings are often very noisy: birdsong is only one component in a recording, which also includes noise from the environment (such as wind and rain), other animals (including insects), and human-related activities, as well as noise from the recorder itself. We describe a method of denoising using a combination of the wavelet packet decomposition and band-pass or low-pass filtering, and present experiments that demonstrate an order of magnitude improvement in noise reduction over natural noisy bird recordings. PMID:26812391

  36. Study on an improved wavelet shift-invariant threshold denoising for pulsed laser induced glucose photoacoustic signals

    NASA Astrophysics Data System (ADS)

    Wang, Zhengzi; Ren, Zhong; Liu, Guodong

    2015-10-01

    Noninvasive measurement of blood glucose concentration has become a research hotspot worldwide because it is convenient, rapid, and non-destructive. Blood glucose monitoring based on the photoacoustic technique has attracted much attention because the detected signals are ultrasonic rather than optical. During acquisition, however, the glucose photoacoustic signals are inevitably polluted by factors such as the pulsed laser, electronic noise, and ambient noise. These disturbances impair the measurement accuracy of the glucose concentration, so denoising the glucose photoacoustic signals is a key task. In this paper, a wavelet shift-invariant threshold denoising method is improved, and a novel wavelet threshold function is proposed. For the novel wavelet threshold function, two threshold values and two different factors are set, and the function is continuous with a high-order derivative, so it can be regarded as a compromise between wavelet soft-threshold and hard-threshold denoising. Simulation results show that, compared with other wavelet threshold denoising methods, the improved wavelet shift-invariant threshold denoising achieves a higher signal-to-noise ratio (SNR) and a smaller root-mean-square error (RMSE), and gives a better overall denoising effect. It therefore has potential value for denoising glucose photoacoustic signals.
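
    The flavor of a soft/hard compromise threshold function can be sketched in a few lines; this generic one-threshold form is not the paper's two-threshold, two-factor function, only the simplest member of the same family.

      import numpy as np

      def soft_hard_compromise(w, thr, alpha=0.5):
          # alpha = 1 reproduces soft thresholding, alpha = 0 hard
          # thresholding; intermediate alpha interpolates between them.
          return np.where(np.abs(w) > thr,
                          np.sign(w) * (np.abs(w) - alpha * thr), 0.0)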

  37. Adaptively Tuned Iterative Low Dose CT Image Denoising

    PubMed Central

    Hashemi, SayedMasoud; Paul, Narinder S.; Beheshti, Soosan; Cobbold, Richard S. C.

    2015-01-01

    Improving image quality is a critical objective in low dose computed tomography (CT) imaging and is the primary focus of CT image denoising. State-of-the-art CT denoising algorithms are mainly based on iterative minimization of an objective function, in which the performance is controlled by regularization parameters. To achieve the best results, these should be chosen carefully. However, the parameter selection is typically performed in an ad hoc manner, which can cause the algorithms to converge slowly or become trapped in a local minimum. To overcome these issues a noise confidence region evaluation (NCRE) method is used, which evaluates the denoising residuals iteratively and compares their statistics with those produced by additive noise. It then updates the parameters at the end of each iteration to achieve a better match to the noise statistics. By combining NCRE with the fundamentals of block matching and 3D filtering (BM3D) approach, a new iterative CT image denoising method is proposed. It is shown that this new denoising method improves the BM3D performance in terms of both the mean square error and a structural similarity index. Moreover, simulations and patient results show that this method preserves the clinically important details of low dose CT images together with a substantial noise reduction. PMID:26089972
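
    A heavily simplified rendering of the NCRE idea, matching the residual statistics to the assumed noise level by rescaling a regularization weight; denoiser is a placeholder callable, and the real method evaluates noise confidence regions rather than a single standard deviation.

      import numpy as np

      def tune_regularization(noisy, denoiser, sigma, lam=0.1,
                              tol=0.05, n_iter=20):
          # Adjust the regularization weight until the denoising
          # residual has roughly the assumed noise standard deviation;
          # `denoiser(image, lam)` is a placeholder callable.
          est = noisy
          for _ in range(n_iter):
              est = denoiser(noisy, lam)
              resid_std = np.std(noisy - est)
              if abs(resid_std - sigma) <= tol * sigma:
                  break
              # residual too small -> smooth more; too large -> less
              lam *= sigma / max(resid_std, 1e-12)
          return est, lam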

  38. Bayesian Inference for Functional Dynamics Exploring in fMRI Data.

    PubMed

    Guo, Xuan; Liu, Bing; Chen, Le; Chen, Guantao; Pan, Yi; Zhang, Jing

    2016-01-01

    This paper aims to review state-of-the-art Bayesian-inference-based methods applied to functional magnetic resonance imaging (fMRI) data. Particularly, we focus on one specific long-standing challenge in the computational modeling of fMRI datasets: how to effectively explore typical functional interactions from fMRI time series and the corresponding boundaries of temporal segments. Bayesian inference is a method of statistical inference which has been shown to be a powerful tool to encode dependence relationships among the variables with uncertainty. Here we provide an introduction to a group of Bayesian-inference-based methods for fMRI data analysis, which were designed to detect magnitude or functional connectivity change points and to infer their functional interaction patterns based on corresponding temporal boundaries. We also provide a comparison of three popular Bayesian models, that is, Bayesian Magnitude Change Point Model (BMCPM), Bayesian Connectivity Change Point Model (BCCPM), and Dynamic Bayesian Variable Partition Model (DBVPM), and give a summary of their applications. We envision that more delicate Bayesian inference models will be emerging and play increasingly important roles in modeling brain functions in the years to come. PMID:27034708

  1. [DR image denoising based on Laplace-Impact mixture model].

    PubMed

    Feng, Guo-Dong; He, Xiang-Bin; Zhou, He-Qin

    2009-07-01

    A novel DR image denoising algorithm based on a Laplace-Impact mixture model in the dual-tree complex wavelet domain is proposed in this paper. It uses local variance to build the probability density function of the Laplace-Impact model, which fits the distribution of high-frequency subband coefficients well. Within the Laplace-Impact framework, this paper describes a novel method for image denoising based on designing minimum mean squared error (MMSE) estimators, which relies on the strong correlation between amplitudes of nearby coefficients. The experimental results show that the algorithm proposed in this paper outperforms several state-of-the-art denoising methods, such as Bayes least squares with a Gaussian scale mixture and the Laplace prior. PMID:19938519

  2. Role of Utility and Inference in the Evolution of Functional Information

    PubMed Central

    Sharov, Alexei A.

    2009-01-01

    Functional information means an encoded network of functions in living organisms from molecular signaling pathways to an organism’s behavior. It is represented by two components: code and an interpretation system, which together form a self-sustaining semantic closure. Semantic closure allows some freedom between components because small variations of the code are still interpretable. The interpretation system consists of inference rules that control the correspondence between the code and the function (phenotype) and determines the shape of the fitness landscape. The utility factor operates at multiple time scales: short-term selection drives evolution towards higher survival and reproduction rate within a given fitness landscape, and long-term selection favors those fitness landscapes that support adaptability and lead to evolutionary expansion of certain lineages. Inference rules make short-term selection possible by shaping the fitness landscape and defining possible directions of evolution, but they are under control of the long-term selection of lineages. Communication normally occurs within a set of agents with compatible interpretation systems, which I call communication system. Functional information cannot be directly transferred between communication systems with incompatible inference rules. Each biological species is a genetic communication system that carries unique functional information together with inference rules that determine evolutionary directions and constraints. This view of the relation between utility and inference can resolve the conflict between realism/positivism and pragmatism. Realism overemphasizes the role of inference in evolution of human knowledge because it assumes that logic is embedded in reality. Pragmatism substitutes usefulness for truth and therefore ignores the advantage of inference. The proposed concept of evolutionary pragmatism rejects the idea that logic is embedded in reality; instead, inference rules are

  3. Craniofacial biomechanics and functional and dietary inferences in hominin paleontology.

    PubMed

    Grine, Frederick E; Judex, Stefan; Daegling, David J; Ozcivici, Engin; Ungar, Peter S; Teaford, Mark F; Sponheimer, Matt; Scott, Jessica; Scott, Robert S; Walker, Alan

    2010-04-01

    Finite element analysis (FEA) is a potentially powerful tool by which the mechanical behaviors of different skeletal and dental designs can be investigated, and, as such, has become increasingly popular for biomechanical modeling and inferring the behavior of extinct organisms. However, the use of FEA to extrapolate from characterization of the mechanical environment to questions of trophic or ecological adaptation in a fossil taxon is both challenging and perilous. Here, we consider the problems and prospects of FEA applications in paleoanthropology, and provide a critical examination of one such study of the trophic adaptations of Australopithecus africanus. This particular FEA is evaluated with regard to 1) the nature of the A. africanus cranial composite, 2) model validation, 3) decisions made with respect to model parameters, 4) adequacy of data presentation, and 5) interpretation of the results. Each suggests that the results reflect methodological decisions as much as any underlying biological significance. Notwithstanding these issues, this model yields predictions that follow from the posited emphasis on premolar use by A. africanus. These predictions are tested with data from the paleontological record, including a phylogenetically-informed consideration of relative premolar size, and postcanine microwear fabrics and antemortem enamel chipping. In each instance, the data fail to conform to predictions from the model. This model thus serves to emphasize the need for caution in the application of FEA in paleoanthropological enquiry. Theoretical models can be instrumental in the construction of testable hypotheses; but ultimately, the studies that serve to test these hypotheses - rather than data from the models - should remain the source of information pertaining to hominin paleobiology and evolution. PMID:20227747

  4. Iterative denoising of ghost imaging.

    PubMed

    Yao, Xu-Ri; Yu, Wen-Kai; Liu, Xue-Feng; Li, Long-Zhen; Li, Ming-Fei; Wu, Ling-An; Zhai, Guang-Jie

    2014-10-01

    We present a new technique to denoise ghost imaging (GI) in which conventional intensity correlation GI and an iteration process have been combined to give an accurate estimate of the actual noise affecting image quality. The blurring influence of the speckle areas in the beam is reduced in the iteration by setting a threshold. It is shown that with an appropriate choice of threshold value, the quality of the iterative GI reconstructed image is much better than that of differential GI for the same number of measurements. This denoising method thus offers a very effective approach to promote the implementation of GI in real applications. PMID:25322001
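
    A toy Python sketch of intensity-correlation GI with a generic residual-driven refinement; the paper's exact thresholded iteration is not reproduced, and the step size and positivity clip are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        obj = np.zeros((32, 32)); obj[8:24, 12:20] = 1.0     # toy object
        patterns = rng.random((4000, 32, 32))                # speckle realizations
        bucket = (patterns * obj).sum(axis=(1, 2))           # single-pixel detector

        def correlate(b, I):
            # conventional GI estimate: G(x) = <B I(x)> - <B><I(x)>
            return (b[:, None, None] * I).mean(0) - b.mean() * I.mean(0)

        est = np.zeros_like(obj)
        for _ in range(10):
            resid = bucket - (patterns * est).sum(axis=(1, 2))
            est = np.clip(est + 0.5 * correlate(resid, patterns) / patterns.var(), 0, None)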

  5. Photogrammetric DSM denoising

    NASA Astrophysics Data System (ADS)

    Nex, F.; Gerke, M.

    2014-08-01

    Image matching techniques can nowadays provide very dense point clouds and are often considered a valid alternative to LiDAR point clouds. However, photogrammetric point clouds are often characterized by a higher level of random noise compared to LiDAR data and by the presence of large outliers. These problems limit the practical use of photogrammetric data for many applications, and an effective way to enhance the generated point cloud has yet to be found. In this paper we concentrate on the restoration of Digital Surface Models (DSM) computed from dense image matching point clouds. A photogrammetric DSM, i.e. a 2.5D representation of the surface, is still one of the major products derived from point clouds. Four different algorithms devoted to DSM denoising are presented: a standard median filter approach, a bilateral filter, a variational approach (TGV: Total Generalized Variation), as well as a newly developed algorithm, which is embedded into a Markov Random Field (MRF) framework and optimized through graph-cuts. The ability of each algorithm to recover the original DSM has been quantitatively evaluated. To this end, a synthetic DSM was generated and different typologies of noise were added to mimic the typical errors of photogrammetric DSMs. The evaluation reveals that standard filters like the median and edge-preserving smoothing through a bilateral filter cannot sufficiently remove typical errors occurring in a photogrammetric DSM. The TGV-based approach removes random noise much better, but large areas with outliers still remain. Our own method, which explicitly models the degradation properties of these DSMs, outperforms the others in all aspects.
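
    The evaluation protocol is easy to mimic: build a synthetic DSM, add Gaussian noise plus sparse outliers, and score each filter by RMSE against the clean surface. A minimal sketch for the median filter alone (sizes and noise levels are arbitrary assumptions):

        import numpy as np
        from scipy.ndimage import median_filter

        rng = np.random.default_rng(1)
        x, _ = np.meshgrid(np.linspace(0, 10, 128), np.linspace(0, 10, 128))
        dsm = 0.5 * x                                        # sloping terrain
        dsm[40:80, 40:80] += 10.0                            # a building
        noisy = dsm + rng.normal(0, 0.3, dsm.shape)          # random matching noise
        spikes = rng.random(dsm.shape) < 0.01
        noisy[spikes] += rng.normal(0, 20, spikes.sum())     # large outliers

        for size in (3, 5, 7):
            rmse = np.sqrt(((median_filter(noisy, size) - dsm) ** 2).mean())
            print(f"median {size}x{size}: RMSE = {rmse:.3f} m")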

  6. Generalised partition functions: inferences on phase space distributions

    NASA Astrophysics Data System (ADS)

    Treumann, Rudolf A.; Baumjohann, Wolfgang

    2016-06-01

    It is demonstrated that the statistical mechanical partition function can be used to construct various different forms of phase space distributions. This indicates that its structure is not restricted to the Gibbs-Boltzmann factor prescription, which is based on counting statistics. With the widely used replacement of the Boltzmann factor by a generalised Lorentzian (also known as the q-deformed exponential function, where κ = 1/|q - 1|, with κ, q ∈ R), both the kappa-Bose and kappa-Fermi partition functions are obtained in quite a straightforward way, from which the conventional Bose and Fermi distributions follow for κ → ∞. For κ ≠ ∞ these are subject to the restriction that they can be used only at temperatures far from zero. They thus, as shown earlier, have little value for quantum physics. This is reasonable, because physical κ systems imply strong correlations which are absent at zero temperature, where apart from stochastics all dynamical interactions are frozen. In the classical large-temperature limit one obtains physically reasonable κ distributions which depend on energy and momentum as well as on the chemical potential. Looking for other functional dependencies, we examine whether Bessel functions can be used to obtain valid distributions. Again, and for the same reason, no Fermi and Bose distributions exist in the low temperature limit. However, a classical Bessel-Boltzmann distribution can be constructed, which is a Bessel-modified Lorentzian distribution. Whether it makes any physical sense remains an open question; this is not investigated here. The choice of Bessel functions is motivated solely by their convergence properties and not by reference to any physical demands. This result suggests that the Gibbs-Boltzmann partition function is fundamental not only to Gibbs-Boltzmann statistics but also to a large class of generalised Lorentzian distributions, as well as to the corresponding nonextensive statistical mechanics.
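
    Schematically, the substitution at the heart of this construction replaces the Boltzmann factor by a generalised Lorentzian; in one common convention (the paper's exact exponents may differ):

        e^{-\beta\epsilon} \;\longrightarrow\; \Bigl(1 + \tfrac{\beta\epsilon}{\kappa}\Bigr)^{-(\kappa+1)}, \qquad \kappa = \tfrac{1}{|q-1|},

    which recovers e^{-\beta\epsilon} as \kappa \to \infty. The kappa-Fermi/Bose occupation numbers then take the schematic form

        n_\pm(\epsilon) = \Bigl[\bigl(1 + \tfrac{\beta(\epsilon-\mu)}{\kappa}\bigr)^{\kappa+1} \pm 1\Bigr]^{-1},

    with + for fermions and - for bosons, again reducing to the conventional distributions for \kappa \to \infty.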

  7. On the functional equivalence of fuzzy inference systems and spline-based networks.

    PubMed

    Hunt, K J; Haas, R; Brown, M

    1995-06-01

    The conditions under which spline-based networks are functionally equivalent to the Takagi-Sugeno model of fuzzy inference are formally established. We consider a generalized form of basis function network whose basis functions are splines. The result admits a wide range of fuzzy membership functions which are commonly encountered in fuzzy systems design. We use the theoretical background of functional equivalence to develop a hybrid fuzzy-spline net for inverse dynamic modeling of a hydraulically driven robot manipulator. PMID:7496588

  8. Electrocardiogram signal denoising based on a new improved wavelet thresholding.

    PubMed

    Han, Guoqiang; Xu, Zhijun

    2016-08-01

    Good quality electrocardiogram (ECG) recordings are utilized by physicians for the interpretation and identification of physiological and pathological phenomena. In general, ECG signals may be mixed with various noises, such as baseline wander, power line interference, and electromagnetic interference, during the gathering and recording process. As ECG signals are non-stationary physiological signals, the wavelet transform has proved to be an effective tool for discarding noise from corrupted signals. A new compromising threshold function, a sigmoid-function-based thresholding scheme, is adopted for processing ECG signals. Compared with other methods, such as hard/soft thresholding or other existing thresholding functions, the new algorithm has many advantages for noise reduction in ECG signals: it completely overcomes the discontinuity at ±T of hard thresholding and reduces the fixed deviation of soft thresholding. The improved wavelet thresholding denoising is shown to be more efficient than existing algorithms for ECG signal denoising. The signal-to-noise ratio, mean square error, and percent root mean square difference are calculated as quantitative measures of denoising performance. The experimental results reveal that the P, Q, R, and S waves of the denoised ECG signals coincide with those of the original signals when the proposed method is employed. PMID:27587134
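
    A minimal Python sketch of a sigmoid-gated threshold of this kind; the published function's exact form and parameters are not reproduced, and rho (the gate steepness) is an illustrative assumption.

        import numpy as np

        def sigmoid_threshold(w, T, rho=10.0):
            # Logistic gate centred at |w| = T: continuous everywhere (unlike
            # hard thresholding) and asymptotically unbiased for large |w|
            # (unlike soft thresholding, which shifts everything by T).
            gate = 1.0 / (1.0 + np.exp(-rho * (np.abs(w) - T)))
            return w * gate

        print(sigmoid_threshold(np.linspace(-3, 3, 13), T=1.0))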

  10. Structure and function of the mammalian middle ear. II: Inferring function from structure.

    PubMed

    Mason, Matthew J

    2016-02-01

    Anatomists and zoologists who study middle ear morphology are often interested to know what the structure of an ear can reveal about the auditory acuity and hearing range of the animal in question. This paper represents an introduction to middle ear function targeted towards biological scientists with little experience in the field of auditory acoustics. Simple models of impedance matching are first described, based on the familiar concepts of the area and lever ratios of the middle ear. However, using the Mongolian gerbil Meriones unguiculatus as a test case, it is shown that the predictions made by such 'ideal transformer' models are generally not consistent with measurements derived from recent experimental studies. Electrical analogue models represent a better way to understand some of the complex, frequency-dependent responses of the middle ear: these have been used to model the effects of middle ear subcavities, and the possible function of the auditory ossicles as a transmission line. The concepts behind such models are explained here, again aimed at those with little background knowledge. Functional inferences based on middle ear anatomy are more likely to be valid at low frequencies. Acoustic impedance at low frequencies is dominated by compliance; expanded middle ear cavities, found in small desert mammals including gerbils, jerboas and the sengi Macroscelides, are expected to improve low-frequency sound transmission, as long as the ossicular system is not too stiff. PMID:26100915
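
    For orientation, the 'ideal transformer' picture referred to above predicts a middle ear pressure gain given by the product of the area and lever ratios (the textbook simplification the paper argues is generally not borne out experimentally):

        \frac{p_{\text{vestibule}}}{p_{\text{ear canal}}} \;\approx\; \frac{A_{\text{tympanic membrane}}}{A_{\text{stapes footplate}}} \cdot \frac{l_{\text{malleus}}}{l_{\text{incus}}}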

  11. Autocorrelation based denoising of manatee vocalizations using the undecimated discrete wavelet transform.

    PubMed

    Gur, Berke M; Niezrecki, Christopher

    2007-07-01

    Recent interest in West Indian manatee (Trichechus manatus latirostris) vocalizations has been primarily induced by an effort to reduce manatee mortality rates due to watercraft collisions. A warning system based on passive acoustic detection of manatee vocalizations is desired. The success and feasibility of such a system depend on effective denoising of the vocalizations in the presence of high levels of background noise. In the last decade, simple and effective wavelet domain nonlinear denoising methods have emerged as an alternative to linear estimation methods. However, the denoising performance of these methods degrades considerably with decreasing signal-to-noise ratio (SNR), so they are not suited for denoising manatee vocalizations, for which the typical SNR is below 0 dB. Manatee vocalizations possess a strong harmonic content and a slowly decaying autocorrelation function. In this paper, an efficient denoising scheme that exploits both the autocorrelation function of manatee vocalizations and the effectiveness of nonlinear wavelet-transform-based denoising algorithms is introduced. The suggested wavelet-based denoising algorithm is shown to outperform linear filtering methods, extending the detection range of vocalizations. PMID:17614478
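
    A generic sketch of undecimated (shift-invariant) wavelet denoising with a universal threshold, assuming the PyWavelets package; the autocorrelation weighting the paper adds on top is not reproduced here.

        import numpy as np
        import pywt  # assumes the PyWavelets package

        def swt_denoise(x, wavelet="db8", level=4):
            pad = (-len(x)) % (2 ** level)        # SWT needs a multiple of 2^level
            xp = np.pad(x, (0, pad), mode="edge")
            coeffs = pywt.swt(xp, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1][1])) / 0.6745   # MAD estimate, finest scale
            t = sigma * np.sqrt(2 * np.log(len(xp)))
            coeffs = [(ca, pywt.threshold(cd, t, mode="soft")) for ca, cd in coeffs]
            return pywt.iswt(coeffs, wavelet)[:len(x)]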

  12. Locally Based Kernel PLS Regression De-noising with Application to Event-Related Potentials

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Tino, Peter

    2002-01-01

    Our approach exploits the close relation between signal de-noising and regression problems that estimate functions reflecting the dependency between a set of inputs and dependent outputs corrupted by some level of noise.

  13. Crustal structure beneath northeast India inferred from receiver function modeling

    NASA Astrophysics Data System (ADS)

    Borah, Kajaljyoti; Bora, Dipok K.; Goyal, Ayush; Kumar, Raju

    2016-09-01

    We estimated the crustal shear velocity structure beneath ten broadband seismic stations of northeast India by using the H-Vp/Vs stacking method and the Neighbourhood Algorithm (NA), a non-linear direct search approach, followed by joint inversion of Rayleigh wave group velocities and receiver functions calculated from teleseismic earthquake data. Results show significant variations of thickness, shear velocity (Vs) and Vp/Vs ratio in the crust of the study region. The inverted shear wave velocity models show crustal thickness variations of 32-36 km in the Shillong Plateau (north), 36-40 km in the Assam Valley and ∼44 km in the Lesser Himalaya (south). The average Vp/Vs ratio in the Shillong Plateau is lower (1.73-1.77) than in the Assam Valley and Lesser Himalaya (∼1.80). The average crustal shear velocity beneath the study region varies from 3.4 to 3.5 km/s. The sediment structure beneath the Shillong Plateau and Assam Valley shows a 1-2 km thick sediment layer with low Vs (2.5-2.9 km/s) and a high Vp/Vs ratio (1.8-2.1), while it is of greater thickness (4 km) with similar Vs and higher Vp/Vs (∼2.5) at RUP (Lesser Himalaya). Both the Shillong Plateau and Assam Valley show a thick upper and middle crust (10-20 km) and a thin (4-9 km) lower crust. The average Vp/Vs ratios suggest that the crust is felsic-to-intermediate beneath the Shillong Plateau and intermediate-to-mafic beneath the Assam Valley. Results show that lower crustal rocks beneath the Shillong Plateau and Assam Valley lie between mafic granulite and mafic garnet granulite.
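
    For reference, the kinematic relation underlying H-Vp/Vs (H-kappa) stacking is the predicted delay of the Moho Ps conversion behind the direct P arrival, for ray parameter p:

        t_{Ps} \;=\; H\left(\sqrt{\frac{1}{V_s^{2}} - p^{2}} \;-\; \sqrt{\frac{1}{V_p^{2}} - p^{2}}\right)

    The stack then searches a grid of (H, Vp/Vs) pairs for the combination that maximises the summed receiver-function amplitude at the predicted arrival times.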

  14. Empirical mode decomposition based background removal and de-noising in polarization interference imaging spectrometer.

    PubMed

    Zhang, Chunmin; Ren, Wenyi; Mu, Tingkui; Fu, Lili; Jia, Chenling

    2013-02-11

    Based on empirical mode decomposition (EMD), background removal and de-noising procedures are implemented for data taken by a polarization interference imaging spectrometer (PIIS). Numerical simulation shows that the data processing methods are effective. The assumption that the noise mostly resides in the first intrinsic mode function is verified, and the parameters in the EMD thresholding de-noising methods are determined. For comparison, wavelet- and windowed-Fourier-transform-based thresholding de-noising methods are introduced. The de-noised results are evaluated by the SNR, spectral resolution and peak value of the de-noised spectra. All the methods are used to suppress the effects of Gaussian and Poisson noise. The de-noising efficiency is higher for spectra contaminated by Gaussian noise. The interferogram obtained by the PIIS is processed by the proposed methods, and both the background-free interferogram and the noise-free spectrum are recovered effectively. The adaptive and robust EMD-based methods are effective for background removal and de-noising in the PIIS. PMID:23481716

  15. Pragmatic Inference Abilities in Individuals with Asperger Syndrome or High-Functioning Autism. A Review

    ERIC Educational Resources Information Center

    Loukusa, Soile; Moilanen, Irma

    2009-01-01

    This review summarizes studies involving pragmatic language comprehension and inference abilities in individuals with Asperger syndrome or high-functioning autism. Systematic searches of three electronic databases, selected journals, and reference lists identified 20 studies meeting the inclusion criteria. These studies were evaluated in terms of:…

  16. Dynamic Denoising of Tracking Sequences

    PubMed Central

    Michailovich, Oleg; Tannenbaum, Allen

    2009-01-01

    In this paper, we describe an approach to the problem of simultaneously enhancing image sequences and tracking the objects of interest represented by the latter. The enhancement part of the algorithm is based on Bayesian wavelet denoising, which has been chosen due to its exceptional ability to incorporate diverse a priori information into the process of image recovery. In particular, we demonstrate that, in dynamic settings, useful statistical priors can come both from some reasonable assumptions on the properties of the image to be enhanced as well as from the images that have already been observed before the current scene. Using such priors forms the main contribution of the present paper which is the proposal of the dynamic denoising as a tool for simultaneously enhancing and tracking image sequences. Within the proposed framework, the previous observations of a dynamic scene are employed to enhance its present observation. The mechanism that allows the fusion of the information within successive image frames is Bayesian estimation, while transferring the useful information between the images is governed by a Kalman filter that is used for both prediction and estimation of the dynamics of tracked objects. Therefore, in this methodology, the processes of target tracking and image enhancement “collaborate” in an interlacing manner, rather than being applied separately. The dynamic denoising is demonstrated on several examples of SAR imagery. The results demonstrated in this paper indicate a number of advantages of the proposed dynamic denoising over “static” approaches, in which the tracking images are enhanced independently of each other. PMID:18482881
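
    The Kalman machinery referred to here is the standard predict/update cycle; below is a generic textbook step in Python, not the paper's full Bayesian-wavelet pipeline.

        import numpy as np

        def kalman_step(x, P, z, F, H, Q, R):
            x, P = F @ x, F @ P @ F.T + Q              # predict dynamics
            S = H @ P @ H.T + R                        # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
            x = x + K @ (z - H @ x)                    # update with measurement z
            P = (np.eye(len(x)) - K @ H) @ P
            return x, P

        # constant-velocity tracking in 1-D: state = [position, velocity]
        F = np.array([[1.0, 1.0], [0.0, 1.0]]); H = np.array([[1.0, 0.0]])
        Q, R = 0.01 * np.eye(2), np.array([[0.5]])
        x, P = np.zeros(2), np.eye(2)
        for z in (0.9, 2.1, 2.9, 4.2):
            x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
        print(x)   # roughly [4, 1]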

  17. Function Formula Oriented Construction of Bayesian Inference Nets for Diagnosis of Cardiovascular Disease

    PubMed Central

    Sekar, Booma Devi; Dong, Mingchui

    2014-01-01

    An intelligent cardiovascular disease (CVD) diagnosis system using hemodynamic parameters (HDPs) derived from the sphygmogram (SPG) signal is presented to support the emerging patient-centric healthcare models. To replicate the clinical approach of diagnosis through a staged decision process, Bayesian inference nets (BIN) are adapted. New approaches to construct a hierarchical multistage BIN using defined function formulas and a method employing fuzzy logic (FL) technology to quantify inference nodes with dynamic values of statistical parameters are proposed. The suggested methodology is validated by constructing hierarchical Bayesian fuzzy inference nets (HBFIN) to diagnose various heart pathologies from the deduced HDPs. The preliminary diagnostic results show that the proposed methodology has salient validity and effectiveness in the diagnosis of cardiovascular disease. PMID:25247174

  19. A Model-Based Analysis to Infer the Functional Content of a Gene List

    PubMed Central

    Newton, Michael A.; He, Qiuling; Kendziorski, Christina

    2012-01-01

    An important challenge in statistical genomics concerns integrating experimental data with exogenous information about gene function. A number of statistical methods are available to address this challenge, but most do not accommodate complexities in the functional record. To infer activity of a functional category (e.g., a gene ontology term), most methods use gene-level data on that category, but do not use other functional properties of the same genes. Not doing so creates undue errors in inference. Recent developments in model-based category analysis aim to overcome this difficulty, but in attempting to do so they are faced with serious computational problems. This paper investigates statistical properties and the structure of posterior computation in one such model for the analysis of functional category data. We examine the graphical structures underlying posterior computation in the original parameterization and in a new parameterization aimed at leveraging elements of the model. We characterize identifiability of the underlying activation states, describe a new prior distribution, and introduce approximations that aim to support numerical methods for posterior inference. PMID:22499692

  20. Vikodak - A Modular Framework for Inferring Functional Potential of Microbial Communities from 16S Metagenomic Datasets

    PubMed Central

    Nagpal, Sunil; Haque, Mohammed Monzoorul; Mande, Sharmila S.

    2016-01-01

    Background The overall metabolic/functional potential of any given environmental niche is a function of the sum total of genes/proteins/enzymes that are encoded and expressed by the various interacting microbes residing in that niche. Consequently, prior (collated) information pertaining to the genes and enzymes encoded by the resident microbes can aid in indirectly (re)constructing/inferring the metabolic/functional potential of a given microbial community (given its taxonomic abundance profile). In this study, we present Vikodak—a multi-modular package that is based on the above assumption and automates inferring and/or comparing the functional characteristics of an environment using taxonomic abundance profiles generated from one or more environmental sample datasets. With the underlying assumptions of co-metabolism and independent contributions of different microbes in a community, a concerted effort has been made to accommodate microbial co-existence patterns in the various modules incorporated in Vikodak. Results Validation experiments on over 1400 metagenomic samples have confirmed the utility of Vikodak in (a) deciphering enzyme abundance profiles of any KEGG metabolic pathway, (b) functional resolution of distinct metagenomic environments, (c) inferring patterns of functional interaction between resident microbes, and (d) automating statistical comparison of functional features of studied microbiomes. Novel features incorporated in Vikodak also facilitate automatic removal of false positives and spurious functional predictions. Conclusions With novel provisions for comprehensive functional analysis, inclusion of microbial co-existence pattern based algorithms, automated inter-environment comparisons, in-depth analysis of individual metabolic pathways and greater flexibility at the user end, Vikodak is expected to be an important value addition to the family of existing tools for 16S based function prediction. Availability and Implementation A web implementation of Vikodak
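
    The core assumption reduces to a matrix product: per-function potential is taxon abundance weighted by each taxon's gene content. A toy Python illustration (all numbers invented, not Vikodak's actual data structures):

        import numpy as np

        # rows: taxa, columns: functions (e.g. KEGG pathways); entries: gene counts
        gene_content = np.array([[4.0, 0.0, 2.0],
                                 [1.0, 3.0, 0.0],
                                 [0.0, 2.0, 5.0]])
        abundance = np.array([0.5, 0.3, 0.2])   # 16S-derived taxon fractions

        print(abundance @ gene_content)         # inferred functional potential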

  1. Empirical Mode Decomposition Technique with Conditional Mutual Information for Denoising Operational Sensor Data

    SciTech Connect

    Omitaomu, Olufemi A; Protopopescu, Vladimir A; Ganguly, Auroop R

    2011-01-01

    A new approach is developed for denoising signals using the Empirical Mode Decomposition (EMD) technique and the Information-theoretic method. The EMD technique is applied to decompose a noisy sensor signal into the so-called intrinsic mode functions (IMFs). These functions are of the same length and in the same time domain as the original signal. Therefore, the EMD technique preserves varying frequency in time. Assuming the given signal is corrupted by high-frequency Gaussian noise implies that most of the noise should be captured by the first few modes. Therefore, our proposition is to separate the modes into high-frequency and low-frequency groups. We applied an information-theoretic method, namely mutual information, to determine the cut-off for separating the modes. A denoising procedure is applied only to the high-frequency group using a shrinkage approach. Upon denoising, this group is combined with the original low-frequency group to obtain the overall denoised signal. We illustrate our approach with simulated and real-world data sets. The results are compared to two popular denoising techniques in the literature, namely discrete Fourier transform (DFT) and discrete wavelet transform (DWT). We found that our approach performs better than DWT and DFT in most cases, and comparatively to DWT in some cases in terms of: (i) mean square error, (ii) recomputed signal-to-noise ratio, and (iii) visual quality of the denoised signals.
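
    A sketch of the high-frequency/low-frequency split, assuming the PyEMD package; the mutual-information cut-off is replaced by a fixed placeholder, and the shrinkage uses a standard universal threshold rather than the paper's exact rule.

        import numpy as np
        from PyEMD import EMD   # assumes the PyEMD package (pip install EMD-signal)

        def emd_denoise(signal, n_noise_imfs=2):   # placeholder for the MI cut-off
            imfs = EMD().emd(signal)               # fastest oscillations come first
            out = np.zeros_like(signal)
            for k, imf in enumerate(imfs):
                if k < n_noise_imfs:               # shrink only the high-frequency group
                    t = np.median(np.abs(imf)) / 0.6745 * np.sqrt(2 * np.log(len(signal)))
                    imf = np.sign(imf) * np.maximum(np.abs(imf) - t, 0.0)
                out += imf
            return out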

  2. Approximation Of Multi-Valued Inverse Functions Using Clustering And Sugeno Fuzzy Inference

    NASA Technical Reports Server (NTRS)

    Walden, Maria A.; Bikdash, Marwan; Homaifar, Abdollah

    1998-01-01

    Finding the inverse of a continuous function can be challenging and computationally expensive when the inverse function is multi-valued. Difficulties may be compounded when the function itself is difficult to evaluate. We show that we can use fuzzy-logic approximators such as Sugeno inference systems to compute the inverse on-line. To do so, a fuzzy clustering algorithm can be used in conjunction with a discriminating function to split the function data into branches for the different values of the forward function. These data sets are then fed into a recursive least-squares learning algorithm that finds the proper coefficients of the Sugeno approximators; each Sugeno approximator finds one value of the inverse function. Discussions about the accuracy of the approximation will be included.
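
    A compressed sketch of the branch-splitting idea on y = x^2, whose inverse is two-valued; the sign of x plays the role of the discriminating function, and ordinary least squares stands in for the recursive least-squares fit of the Sugeno consequents.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        x = rng.uniform(-2, 2, 2000)
        y = x ** 2                                 # forward function

        models = []
        for branch in (x >= 0, x < 0):             # discriminating function: sign(x)
            models.append(LinearRegression().fit(y[branch, None], x[branch]))

        # both inverse values for y0 = 2.25 (true answers: +1.5 and -1.5)
        print([m.predict([[2.25]])[0] for m in models])   # roughly +/-1.4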

  3. Determination of optimal wavelet denoising parameters for red edge feature extraction from hyperspectral data

    NASA Astrophysics Data System (ADS)

    Shafri, Helmi Z. M.; Yusof, Mohd R. M.

    2009-05-01

    A study of wavelet denoising of hyperspectral reflectance data, specifically the red edge position (REP) and its first derivative, is presented in this paper. A synthetic data set was created using a sigmoid to simulate the red edge feature. The sigmoid was injected with Gaussian white noise to simulate noisy reflectance data from handheld spectroradiometers. The use of synthetic data enables better quantification and statistical study of the effects of wavelet denoising on the features of hyperspectral data, specifically the REP. The simulation study helps to identify the most suitable wavelet parameters for denoising and demonstrates the applicability of wavelet-based denoising in hyperspectral sensing of vegetation. The suitability of the thresholding rules and mother wavelets used in wavelet denoising is evaluated by comparing the denoised sigmoid function with the clean sigmoid, in terms of the shift in the inflection point meant to represent the REP, and also the overall change in the denoised signal compared with the clean one. The VisuShrink soft threshold was used with rescaling based on the noise estimate, in conjunction with wavelets of the Daubechies, Symlet and Coiflet families. It was found that, for the VisuShrink threshold with single-level noise estimate rescaling, the Daubechies 9 and Symlet 8 wavelets produced the least distortion in the location of the sigmoid inflection point and the overall curve. The selected mother wavelets were then used to denoise oil palm reflectance data to enable determination of the red edge position by locating the peak of the first derivative.
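
    A compact PyWavelets version of the VisuShrink procedure used here: soft thresholding at the universal threshold, with the noise estimate taken from the finest-level detail coefficients. The Symlet 8 choice follows the paper's finding; the sigmoid signal and noise level are made up for the demo.

        import numpy as np
        import pywt  # assumes the PyWavelets package

        def visushrink(signal, wavelet="sym8", level=5):
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # single-level noise estimate
            t = sigma * np.sqrt(2.0 * np.log(len(signal)))   # universal threshold
            coeffs[1:] = [pywt.threshold(c, t, mode="soft") for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)

        x = np.linspace(680, 760, 512)                       # wavelength grid, nm
        clean = 1 / (1 + np.exp(-(x - 720) / 5))             # sigmoid "red edge"
        noisy = clean + np.random.default_rng(0).normal(0, 0.05, x.size)
        shift = np.argmax(np.gradient(visushrink(noisy))) - np.argmax(np.gradient(clean))
        print("REP shift (samples):", shift)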

  4. INTEGRATING EVOLUTIONARY AND FUNCTIONAL APPROACHES TO INFER ADAPTATION AT SPECIFIC LOCI

    PubMed Central

    Storz, Jay F.; Wheat, Christopher W.

    2010-01-01

    Inferences about adaptation at specific loci are often exclusively based on the static analysis of DNA sequence variation. Ideally, population-genetic evidence for positive selection serves as a stepping-off point for experimental studies to elucidate the functional significance of the putatively adaptive variation. We argue that inferences about adaptation at specific loci are best achieved by integrating the indirect, retrospective insights provided by population-genetic analyses with the more direct, mechanistic insights provided by functional experiments. Integrative studies of adaptive genetic variation may sometimes be motivated by experimental insights into molecular function, which then provide the impetus to perform population genetic tests to evaluate whether the functional variation is of adaptive significance. In other cases, studies may be initiated by genome scans of DNA variation to identify candidate loci for recent adaptation. Results of such analyses can then motivate experimental efforts to test whether the identified candidate loci do in fact contribute to functional variation in some fitness-related phenotype. Functional studies can provide corroborative evidence for positive selection at particular loci, and can potentially reveal specific molecular mechanisms of adaptation. PMID:20500215

  5. Analysis and selection of the methods for fruit image denoise

    NASA Astrophysics Data System (ADS)

    Gui, Jiangsheng; Ma, Benxue; Rao, Xiuqin; Ying, Yibin

    2007-09-01

    Applications of machine vision to the automated inspection and sorting of fruits have been widely studied. Preprocessing of the fruit image is needed when it contains much noise. Many image denoising methods in the literature can achieve good results, but selecting among them is a difficult problem. In this research, total variation (TV) and a shock filter with a diffusion function were introduced and, together with six other commonly used denoising methods, were tested on different noise types. The results demonstrated that when the noise type was Gaussian or random and the SNR of the original image was over 8, the TV method achieved the best restoration; when the SNR of the original image was under 8, the Wiener filter gave the best restoration; and when the noise type was salt-and-pepper, the median filter achieved the best restoration.
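
    The selection rule these results suggest is easy to encode; a sketch mirroring the reported findings (the SNR threshold of 8 comes from the abstract, while filter parameters and the scikit-image TV implementation are assumptions):

        from scipy.ndimage import median_filter
        from scipy.signal import wiener
        from skimage.restoration import denoise_tv_chambolle   # assumes scikit-image

        def choose_denoiser(img, noise_type, snr=None):
            if noise_type == "salt_pepper":
                return median_filter(img, size=3)
            if snr is not None and snr >= 8:        # Gaussian/random noise, SNR over 8
                return denoise_tv_chambolle(img, weight=0.1)
            return wiener(img, mysize=5)            # low-SNR fallback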

  6. Denoising Medical Images using Calculus of Variations

    PubMed Central

    Kohan, Mahdi Nakhaie; Behnam, Hamid

    2011-01-01

    We propose a method for medical image denoising using the calculus of variations and local variance estimation by shaped windows. This method reduces additive noise while preserving small patterns and edges. A pyramid structure-texture decomposition of images is used to separate noise and texture components based on local variance measures. The experimental results show that the proposed method yields visual improvement as well as better SNR, RMSE and PSNR than common medical image denoising methods. Experimental results on denoising a sample magnetic resonance image show that SNR, PSNR and RMSE were improved by 19, 9 and 21 percent, respectively. PMID:22606674

  7. Vector anisotropic filter for multispectral image denoising

    NASA Astrophysics Data System (ADS)

    Ben Said, Ahmed; Foufou, Sebti; Hadjidj, Rachid

    2015-04-01

    In this paper, we propose an approach to extend the application of anisotropic Gaussian filtering to multispectral image denoising. We study the case of images corrupted with additive Gaussian noise and use the sparse matrix transform for noise covariance matrix estimation. Specifically, we show that if an image has low local variability, we can assume that in the noisy image the local variability originates from the noise variance only. We apply the proposed approach to the denoising of multispectral images corrupted by noise and compare it with existing methods. Results demonstrate an improvement in denoising performance.

  8. Using evolutionary sequence variation to make inferences about protein structure and function

    NASA Astrophysics Data System (ADS)

    Colwell, Lucy

    2015-03-01

    The evolutionary trajectory of a protein through sequence space is constrained by its function. Collections of sequence homologs record the outcomes of millions of evolutionary experiments in which the protein evolves according to these constraints. The explosive growth in the number of available protein sequences raises the possibility of using the natural variation present in homologous protein sequences to infer these constraints and thus identify residues that control different protein phenotypes. Because in many cases phenotypic changes are controlled by more than one amino acid, the mutations that separate one phenotype from another may not be independent, requiring us to understand the correlation structure of the data. To address this we build a maximum entropy probability model for the protein sequence. The parameters of the inferred model are constrained by the statistics of a large sequence alignment. Pairs of sequence positions with the strongest interactions accurately predict contacts in protein tertiary structure, enabling all atom structural models to be constructed. We describe development of a theoretical inference framework that enables the relationship between the amount of available input data and the reliability of structural predictions to be better understood.
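
    In its simplest mean-field form, the maximum-entropy couplings are read off the inverse covariance of the one-hot-encoded alignment. A bare-bones sketch of that approximation (real pipelines add sequence reweighting and average-product correction, which are omitted here):

        import numpy as np

        def mean_field_couplings(msa_onehot, reg=0.1):
            # msa_onehot: (n_sequences, n_positions * n_amino_acids) binary matrix
            X = msa_onehot - msa_onehot.mean(axis=0)
            C = (X.T @ X) / len(X) + reg * np.eye(X.shape[1])   # regularized covariance
            return -np.linalg.inv(C)   # strong entries suggest contacting positions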

  9. Nonlocal means denoising of ECG signals.

    PubMed

    Tracey, Brian H; Miller, Eric L

    2012-09-01

    Patch-based methods have attracted significant attention in recent years within the field of image processing for a variety of problems, including denoising, inpainting, and super-resolution interpolation. Despite their prevalence for processing 2-D signals, they have received little attention in the 1-D signal processing literature. In this letter, we explore the application of one such method, the nonlocal means (NLM) approach, to the denoising of biomedical signals. Using ECG as an example, we demonstrate that a straightforward NLM-based denoising scheme provides signal-to-noise ratio improvements very similar to state-of-the-art wavelet-based methods, while giving a ~3× or greater reduction in metrics measuring distortion of the denoised waveform. PMID:22829361
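
    A self-contained Python sketch of 1-D nonlocal means in the spirit described here; the patch size, search window, and bandwidth h are arbitrary assumptions (h is usually tied to the noise level).

        import numpy as np

        def nlm_1d(x, patch=7, search=50, h=0.5):
            half = patch // 2
            xp = np.pad(x, half, mode="reflect")
            P = np.lib.stride_tricks.sliding_window_view(xp, patch)  # one patch per sample
            out = np.empty_like(x, dtype=float)
            for i in range(len(x)):
                lo, hi = max(0, i - search), min(len(x), i + search + 1)
                d2 = ((P[lo:hi] - P[i]) ** 2).mean(axis=1)           # patch distances
                w = np.exp(-d2 / h ** 2)                             # similarity weights
                out[i] = (w * x[lo:hi]).sum() / w.sum()
            return out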

  10. Impact of Prematurity and Perinatal Antibiotics on the Developing Intestinal Microbiota: A Functional Inference Study

    PubMed Central

    Arboleya, Silvia; Sánchez, Borja; Solís, Gonzalo; Fernández, Nuria; Suárez, Marta; Hernández-Barranco, Ana M.; Milani, Christian; Margolles, Abelardo; de los Reyes-Gavilán, Clara G.; Ventura, Marco; Gueimonde, Miguel

    2016-01-01

    Background: The microbial colonization of the neonatal gut provides a critical stimulus for normal maturation and development. This process of early microbiota establishment, known to be affected by several factors, constitutes an important determinant of later health. Methods: We studied the establishment of the microbiota in preterm and full-term infants and the impact of perinatal antibiotics upon this process in premature babies. To this end, 16S rRNA gene sequence-based microbiota assessment was performed at the phylum level and functional inference analyses were conducted. Moreover, the levels of the main intestinal microbial metabolites, the short-chain fatty acids (SCFA) acetate, propionate and butyrate, were measured by gas chromatography with flame ionization/mass spectrometry detection. Results: Prematurity affects microbiota composition at the phylum level, leading to increases in Proteobacteria and reductions of other intestinal microorganisms. Perinatal antibiotic use further affected the microbiota of the preterm infant. These changes involved a concomitant alteration in the levels of intestinal SCFA. Moreover, functional inference analyses allowed for identifying metabolic pathways potentially affected by prematurity and perinatal antibiotics use. Conclusion: A deficiency or delay in the establishment of normal microbiota function seems to be present in preterm infants. Perinatal antibiotic use, such as intrapartum prophylaxis, affected the early life microbiota establishment in preterm newborns, which may have consequences for later health. PMID:27136545

  11. Inferring the functional effect of gene expression changes in signaling pathways.

    PubMed

    Sebastián-León, Patricia; Carbonell, José; Salavert, Francisco; Sanchez, Rubén; Medina, Ignacio; Dopazo, Joaquín

    2013-07-01

    Signaling pathways constitute a valuable source of information that allows interpreting the way in which alterations in gene activities affect particular cell functionalities. Web tools are available that allow viewing and editing pathways, as well as representing experimental data on them. However, few methods exist that identify the signaling circuits within a pathway associated with the biological problem studied, and none of them provides a convenient graphical web interface. We present PATHiWAYS, a web-based signaling pathway visualization system that infers changes in signaling that affect cell functionality from measurements of gene expression values in typical expression microarray case-control experiments. A simple probabilistic model of the pathway is used to estimate the probabilities of signal transmission from any receptor to any final effector molecule (taking into account the pathway topology), using the individual probabilities of gene product presence/absence inferred from gene expression values. Significant changes in these probabilities allow linking the different cell functionalities triggered by the pathway to the biological problem studied. PATHiWAYS is available at: http://pathiways.babelomics.org/. PMID:23748960
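
    The probabilistic backbone is simple: if each node of a signaling circuit carries an independent presence probability inferred from expression, the probability that the signal reaches the effector along a linear chain is their product. A toy version (real pathways are graphs, not chains, and PATHiWAYS handles the full topology):

        import numpy as np

        def chain_signal_probability(node_presence_probs):
            # receptor -> ... -> effector along one linear circuit
            return float(np.prod(node_presence_probs))

        print(chain_signal_probability([0.95, 0.80, 0.60]))   # 0.456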

  12. Image denoising in mixed Poisson-Gaussian noise.

    PubMed

    Luisier, Florian; Blu, Thierry; Unser, Michael

    2011-03-01

    We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy. PMID:20840902

  13. Inferring deep-brain activity from cortical activity using functional near-infrared spectroscopy

    PubMed Central

    Liu, Ning; Cui, Xu; Bryant, Daniel M.; Glover, Gary H.; Reiss, Allan L.

    2015-01-01

    Functional near-infrared spectroscopy (fNIRS) is an increasingly popular technology for studying brain function because it is non-invasive, non-irradiating and relatively inexpensive. Further, fNIRS potentially allows measurement of hemodynamic activity with high temporal resolution (milliseconds) and in naturalistic settings. However, in comparison with other imaging modalities, namely fMRI, fNIRS has a significant drawback: limited sensitivity to hemodynamic changes in deep-brain regions. To overcome this limitation, we developed a computational method to infer deep-brain activity using fNIRS measurements of cortical activity. Using simultaneous fNIRS and fMRI, we measured brain activity in 17 participants as they completed three cognitive tasks. A support vector regression (SVR) learning algorithm was used to predict activity in twelve deep-brain regions using information from surface fNIRS measurements. We compared these predictions against actual fMRI-measured activity using Pearson’s correlation to quantify prediction performance. To provide a benchmark for comparison, we also used fMRI measurements of cortical activity to infer deep-brain activity. When using fMRI-measured activity from the entire cortex, we were able to predict deep-brain activity in the fusiform cortex with an average correlation coefficient of 0.80 and in all deep-brain regions with an average correlation coefficient of 0.67. The top 15% of predictions using fNIRS signal achieved an accuracy of 0.7. To our knowledge, this study is the first to investigate the feasibility of using cortical activity to infer deep-brain activity. This new method has the potential to extend fNIRS applications in cognitive and clinical neuroscience research. PMID:25798327
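
    The regression step itself is conventional; a scikit-learn sketch on synthetic stand-in data (the study used real simultaneous fNIRS/fMRI recordings, and the shapes below are invented):

        import numpy as np
        from scipy.stats import pearsonr
        from sklearn.svm import SVR

        rng = np.random.default_rng(0)
        X = rng.normal(size=(600, 40))               # 40 surface-channel time courses
        y = X @ rng.normal(size=40) + rng.normal(0, 2.0, 600)   # deep ROI = mix + noise

        model = SVR(kernel="linear", C=1.0).fit(X[:400], y[:400])
        r, _ = pearsonr(model.predict(X[400:]), y[400:])
        print(f"prediction accuracy (Pearson r): {r:.2f}")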

  14. Inferring deep biosphere function and diversity through (near) surface biosphere portals (Invited)

    NASA Astrophysics Data System (ADS)

    Meyer-Dombard, D. R.; Cardace, D.; Woycheese, K. M.; Swingley, W.; Schubotz, F.; Shock, E.

    2013-12-01

    The consideration of surface expressions of the deep subsurface, such as springs, remains one of the most economically viable means to query the deep biosphere's diversity and function. Hot spring source pools are ideal portals for accessing and inferring the taxonomic and functional diversity of related deep subsurface microbial communities. Consideration of the geochemical composition of deep vs. surface fluids provides context for interpretation of community function. Further, parallel assessment of 16S rRNA data, metagenomic sequencing, and isotopic compositions of biomass in surface springs allows inference of the functional capacities of subsurface ecosystems. Springs in Yellowstone National Park (YNP), the Philippines, and Turkey are considered here, incorporating near-surface, transition, and surface ecosystems to identify 'legacy' taxa and functions of the deep biosphere. We find that source pools often support functional capacity suited to subsurface ecosystems. For example, in hot ecosystems, source pools are strictly chemosynthetic, and surface environments with measurable dissolved oxygen may contain evidence of community functions more favorable under anaerobic conditions. Metagenomic reads from a YNP ecosystem indicate the genetic capacity for sulfate reduction at high temperature. However, inorganic sulfate reduction is only minimally energy-yielding in these surface environments, suggesting the potential that sulfate reduction is a 'legacy' function of deeper biosphere ecosystems. Carbon fixation tactics shift with increased surface exposure of the thermal fluids. Genes related to the rTCA cycle and the acetyl co-A pathway are most prevalent in highest temperature, anaerobic sites. At lower temperature sites, fewer total carbon fixation genes were observed, perhaps indicating an increase in heterotrophic metabolism with increased surface exposure. In hydrogen and methane rich springs in the Philippines and Turkey, methanogenic taxa dominate source

  15. GLMdenoise: a fast, automated technique for denoising task-based fMRI data.

    PubMed

    Kay, Kendrick N; Rokem, Ariel; Winawer, Jonathan; Dougherty, Robert F; Wandell, Brian A

    2013-01-01

    In task-based functional magnetic resonance imaging (fMRI), researchers seek to measure fMRI signals related to a given task or condition. In many circumstances, measuring this signal of interest is limited by noise. In this study, we present GLMdenoise, a technique that improves signal-to-noise ratio (SNR) by entering noise regressors into a general linear model (GLM) analysis of fMRI data. The noise regressors are derived by conducting an initial model fit to determine voxels unrelated to the experimental paradigm, performing principal components analysis (PCA) on the time-series of these voxels, and using cross-validation to select the optimal number of principal components to use as noise regressors. Due to the use of data resampling, GLMdenoise requires and is best suited for datasets involving multiple runs (where conditions repeat across runs). We show that GLMdenoise consistently improves cross-validation accuracy of GLM estimates on a variety of event-related experimental datasets and is accompanied by substantial gains in SNR. To promote practical application of methods, we provide MATLAB code implementing GLMdenoise. Furthermore, to help compare GLMdenoise to other denoising methods, we present the Denoise Benchmark (DNB), a public database and architecture for evaluating denoising methods. The DNB consists of the datasets described in this paper, a code framework that enables automatic evaluation of a denoising method, and implementations of several denoising methods, including GLMdenoise, the use of motion parameters as noise regressors, ICA-based denoising, and RETROICOR/RVHRCOR. Using the DNB, we find that GLMdenoise performs best out of all of the denoising methods we tested. PMID:24381539
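
    A compressed Python sketch of the central trick: use an initial GLM fit to flag task-unrelated voxels, then append the leading principal components of their time series to the design matrix. Here the R-squared cut and PC count are fixed for brevity; GLMdenoise selects the number of PCs by cross-validation across runs.

        import numpy as np

        def add_noise_regressors(Y, X, n_pcs=5):
            # Y: (timepoints, voxels) data; X: (timepoints, conditions) design
            beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
            r2 = 1 - (Y - X @ beta).var(axis=0) / Y.var(axis=0)
            noise = Y[:, r2 <= np.percentile(r2, 20)]          # task-unrelated voxels
            noise = (noise - noise.mean(0)) / noise.std(0)
            U = np.linalg.svd(noise, full_matrices=False)[0]   # principal time courses
            return np.hstack([X, U[:, :n_pcs]])

        rng = np.random.default_rng(1)
        X, Y = rng.normal(size=(200, 3)), rng.normal(size=(200, 500))
        print(add_noise_regressors(Y, X).shape)                # (200, 8)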

  16. LncRNA ontology: inferring lncRNA functions based on chromatin states and expression patterns

    PubMed Central

    Li, Yongsheng; Chen, Hong; Pan, Tao; Jiang, Chunjie; Zhao, Zheng; Wang, Zishan; Zhang, Jinwen; Xu, Juan; Li, Xia

    2015-01-01

    Accumulating evidence suggests that long non-coding RNAs (lncRNAs) perform important functions. Genome-wide chromatin states are a rich source of information about cellular state, yielding insights beyond what is typically obtained by transcriptome profiling. We propose an integrative method for genome-wide functional prediction of lncRNAs that combines chromatin-state data with gene expression patterns. We first validated the method using protein-coding genes with known function annotations. Our validation results indicated that the integrative method performs better than co-expression analysis and is accurate across different conditions. Next, by applying the integrative model genome-wide, we predicted probable functions for more than 97% of human lncRNAs. The putative functions inferred by our method match those previously annotated through the targets of the lncRNAs. Moreover, the linkage from the cellular processes influenced by cancer-associated lncRNAs to the cancer hallmarks provided a “lncRNA point-of-view” on tumor biology. Our approach provides a functional annotation of lncRNAs, which we developed into a web-based application, LncRNA Ontology, to provide visualization, analysis, and downloading of lncRNA putative functions. PMID:26485761

  17. Inferring modules of functionally interacting proteins using the Bond Energy Algorithm

    PubMed Central

    Watanabe, Ryosuke LA; Morett, Enrique; Vallejo, Edgar E

    2008-01-01

    Background Non-homology-based methods such as phylogenetic profiles are effective for predicting functional relationships between proteins with no considerable sequence or structure similarity. Those methods rely heavily on traditional similarity metrics defined on pairs of phylogenetic patterns. Proteins do not interact exclusively in pairs, as the final biological function of a protein in the cellular context is often carried out by a group of proteins. In order to accurately infer modules of functionally interacting proteins, the consideration of not only direct but also indirect relationships is required. In this paper, we used the Bond Energy Algorithm (BEA) to predict functionally related groups of proteins. With BEA we create clusters of phylogenetic profiles based on the associations of the surrounding elements of the analyzed data, using a metric that considers linked relationships among elements in the data set. Results Using phylogenetic profiles obtained from the Cluster of Orthologous Groups of Proteins (COG) database, we conducted a series of clustering experiments using BEA to predict (upper level) relationships between profiles. We evaluated our results by comparing them with COG's functional categories and, further, with the experimentally determined functional relationships between proteins provided by the DIP and ECOCYC databases. Our results demonstrate that BEA is capable of predicting meaningful modules of functionally related proteins. BEA outperforms traditionally used clustering methods, such as k-means and hierarchical clustering, by predicting functional relationships between proteins with higher accuracy. Conclusion This study shows that the linked relationships of phylogenetic profiles obtained by BEA are useful for detecting functional associations between profiles and extending functional modules not found by traditional methods. BEA is capable of detecting relationships among phylogenetic patterns by linking them through a common element shared in

  18. Denoising and dimensionality reduction of genomic data

    NASA Astrophysics Data System (ADS)

    Capobianco, Enrico

    2005-05-01

    Genomics represents a challenging research field for many quantitative scientists, and recently a vast variety of statistical techniques and machine learning algorithms have been proposed and inspired by cross-disciplinary work with computational and systems biologists. In genomic applications, the researcher deals with noisy and complex high-dimensional feature spaces; a wealth of genes whose expression levels are experimentally measured, can often be observed for just a few time points, thus limiting the available samples. This unbalanced combination suggests that it might be hard for standard statistical inference techniques to come up with good general solutions, likewise for machine learning algorithms to avoid heavy computational work. Thus, one naturally turns to two major aspects of the problem: sparsity and intrinsic dimensionality. These two aspects are studied in this paper, where for both denoising and dimensionality reduction, a very efficient technique, i.e., Independent Component Analysis, is used. The numerical results are very promising, and lead to a very good quality of gene feature selection, due to the signal separation power enabled by the decomposition technique. We investigate how the use of replicates can improve these results, and deal with noise through a stabilization strategy which combines the estimated components and extracts the most informative biological information from them. Exploiting the inherent level of sparsity is a key issue in genetic regulatory networks, where the connectivity matrix needs to account for the real links among genes and discard many redundancies. Most experimental evidence suggests that real gene-gene connections represent indeed a subset of what is usually mapped onto either a huge gene vector or a typically dense and highly structured network. Inferring gene network connectivity from the expression levels represents a challenging inverse problem that is at present stimulating key research in biomedical
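
    A scikit-learn sketch of ICA-based denoising of an expression matrix: decompose, rank components by non-Gaussianity, and reconstruct from the most informative ones. The component count and the kurtosis ranking are assumptions standing in for the paper's stabilization strategy, and the data are random placeholders.

        import numpy as np
        from scipy.stats import kurtosis
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(0)
        E = rng.normal(size=(2000, 12))          # genes x samples (toy data)

        ica = FastICA(n_components=5, random_state=0)
        S = ica.fit_transform(E)                 # gene-wise source components
        keep = np.argsort(-np.abs(kurtosis(S, axis=0)))[:3]
        E_denoised = S[:, keep] @ ica.mixing_[:, keep].T + ica.mean_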

  19. Improved extreme value weighted sparse representational image denoising with random perturbation

    NASA Astrophysics Data System (ADS)

    Xuan, Shibin; Han, Yulan

    2015-11-01

    Research into the removal of mixed noise is a hot topic in the field of image denoising. Currently, weighted encoding with sparse nonlocal regularization represents an excellent mixed noise removal method. To make the fitting function closer to the requirements of a robust estimation technique, an extreme value technique is used that allows the fitting function to satisfy three conditions of robust estimation on a larger interval. Moreover, a random disturbance sequence is integrated into the denoising model to prevent the iterative solving process from falling into local optima. A Radon transform-based noise detection algorithm and an adaptive median filter are used to obtain a high-quality initial solution for the iterative procedure of the image denoising model. Experimental results indicate that this improved method efficiently enhances the weighted encoding with sparse nonlocal regularization model. The proposed method can effectively remove mixed noise from corrupted images, while better preserving the edges and details of the processed image.

  20. Brain imaging and cognitive neuroscience. Toward strong inference in attributing function to structure.

    PubMed

    Sarter, M; Berntson, G G; Cacioppo, J T

    1996-01-01

    Cognitive neuroscience has emerged from the neurosciences and cognitive psychology as a scientific discipline that aims at the determination of "how brain function gives rise to mental activity" (S. M. Kosslyn & L. M. Shin, 1992, p. 146). While research in cognitive neuroscience combines many levels of neuroscientific and psychological analyses, modern imaging techniques that monitor brain activity during behavioral or cognitive operations have significantly contributed to the emergence of this discipline. The conclusions deduced from these studies are inherently localizationistic in nature; in other words, they describe cognitive functions as being localized in focal brain regions (brain activity in a defined brain region, phi, is involved in specific cognitive function, psi). A broad discussion about the virtues and limitations of such conclusions may help avoid the emergence of a mentalistic localizationism (i.e., the attribution of mentalistic concepts such as happiness, morality, or consciousness to brain structure) and illustrates the importance of a convergence with information generated by different research strategies (such as, for example, evidence generated by studies in which the effects of experimental manipulations of local neuronal processes on cognitive functions are assessed). Progress in capitalizing on brain-imaging studies to investigate questions of the form "brain structure or event phi is associated with cognitive function psi" may be impeded because of the way in which inferences are typically formulated in the brain imaging literature. A conceptual framework to advance the interpretation of data describing the relationships between cognitive phenomena and brain structure activity is provided. PMID:8585670

  1. On the inference of function from structure using biomechanical modelling and simulation of extinct organisms.

    PubMed

    Hutchinson, John R

    2012-02-23

    Biomechanical modelling and simulation techniques offer some hope for unravelling the complex inter-relationships of structure and function perhaps even for extinct organisms, but have their limitations owing to this complexity and the many unknown parameters for fossil taxa. Validation and sensitivity analysis are two indispensable approaches for quantifying the accuracy and reliability of such models or simulations. But there are other subtleties in biomechanical modelling that include investigator judgements about the level of simplicity versus complexity in model design or how uncertainty and subjectivity are dealt with. Furthermore, investigator attitudes toward models encompass a broad spectrum between extreme credulity and nihilism, influencing how modelling is conducted and perceived. Fundamentally, more data and more testing of methodology are required for the field to mature and build confidence in its inferences. PMID:21666064

  2. Pragmatic inferences in high-functioning adults with autism and Asperger syndrome.

    PubMed

    Pijnacker, Judith; Hagoort, Peter; Buitelaar, Jan; Teunisse, Jan-Pieter; Geurts, Bart

    2009-04-01

    Although people with autism spectrum disorders (ASD) often have severe problems with pragmatic aspects of language, little is known about their pragmatic reasoning. We carried out a behavioral study on high-functioning adults with autistic disorder (n = 11) and Asperger syndrome (n = 17) and matched controls (n = 28) to investigate whether they are capable of deriving scalar implicatures, which are generally considered to be pragmatic inferences. Participants were presented with underinformative sentences like "Some sparrows are birds". This sentence is logically true, but pragmatically inappropriate if the scalar implicature "Not all sparrows are birds" is derived. The present findings indicate that the combined ASD group was just as likely as controls to derive scalar implicatures, yet there was a difference between participants with autistic disorder and Asperger syndrome, suggesting a potential differentiation between these disorders in pragmatic reasoning. Moreover, our results suggest that verbal intelligence is a constraint for task performance in autistic disorder but not in Asperger syndrome. PMID:19052858

  3. Geodesic denoising for optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Shahrian Varnousfaderani, Ehsan; Vogl, Wolf-Dieter; Wu, Jing; Gerendas, Bianca S.; Simader, Christian; Langs, Georg; Waldstein, Sebastian M.; Schmidt-Erfurth, Ursula

    2016-03-01

    Optical coherence tomography (OCT) is an optical signal acquisition method capturing micrometer-resolution, cross-sectional three-dimensional images. OCT images are used widely in ophthalmology to diagnose and monitor retinal diseases such as age-related macular degeneration (AMD) and glaucoma. While OCT allows the visualization of retinal structures such as vessels and retinal layers, image quality and contrast are reduced by speckle noise, obfuscating small, low-intensity structures and structural boundaries. Existing denoising methods for OCT images may remove clinically significant image features such as texture and boundaries of anomalies. In this paper, we propose a novel patch-based denoising method, Geodesic Denoising. The method reduces noise in OCT images while preserving clinically significant, although small, pathological structures, such as fluid-filled cysts in diseased retinas. Our method selects optimal image patch distribution representations based on geodesic patch similarity to noisy samples. Patch distributions are then randomly sampled to build a set of best-matching candidates for every noisy sample, and the denoised value is computed as a geodesic weighted average of the best candidate samples. Our method is evaluated qualitatively on real pathological OCT scans and quantitatively on a proposed set of ground-truth, noise-free synthetic OCT scans with artificially added noise and pathologies. Experimental results show that the performance of our method is comparable with state-of-the-art denoising methods while outperforming them in preserving critical, clinically relevant structures.

  4. Machinery vibration signal denoising based on learned dictionary and sparse representation

    NASA Astrophysics Data System (ADS)

    Guo, Liang; Gao, Hongli; Li, Jun; Huang, Haifeng; Zhang, Xiaochen

    2015-07-01

    Mechanical vibration signal denoising has been an important problem for machine damage assessment and health monitoring. Wavelet transform and sparse reconstruction are powerful and practical methods, but they are based on fixed basis functions or atoms. In this paper, a novel method is presented in which the atoms used to represent signals are learned from the raw signal, and, in order to satisfy the requirements of real-time signal processing, an online dictionary learning algorithm is adopted. Orthogonal matching pursuit is applied to select the most relevant atoms from the dictionary. Finally, the denoised signal is computed from the sparse vector and the learned dictionary. A simulated signal and a real bearing fault signal are used to evaluate the improved performance of the proposed method through comparison with several denoising algorithms, and its computational efficiency is demonstrated by an illustrative runtime example. The results show that the proposed method outperforms current algorithms in both denoising quality and computational efficiency.
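
    A minimal sketch of the learned-dictionary-plus-OMP pipeline, assuming scikit-learn's batch dictionary learner (the paper uses an online learner) and a toy 1-D signal in place of a real vibration signal; overlap averaging is omitted:

      import numpy as np
      from sklearn.decomposition import MiniBatchDictionaryLearning
      from sklearn.linear_model import OrthogonalMatchingPursuit

      rng = np.random.default_rng(0)
      t = np.linspace(0, 1, 2048)
      signal = np.sin(2 * np.pi * 50 * t) + 0.3 * rng.normal(size=t.size)  # noisy tone

      # Slice the signal into overlapping patches and learn atoms from the raw data.
      w = 64
      patches = np.stack([signal[i:i + w] for i in range(0, signal.size - w, 8)])
      dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0, random_state=0)
      D = dico.fit(patches).components_     # learned dictionary: 32 atoms of length 64

      # Sparse-code every patch against the learned atoms and reconstruct.
      omp = OrthogonalMatchingPursuit(n_nonzero_coefs=4, fit_intercept=False)
      codes = omp.fit(D.T, patches.T).coef_   # one sparse code per patch
      denoised_patches = codes @ D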

  5. Image-Specific Prior Adaptation for Denoising.

    PubMed

    Lu, Xin; Lin, Zhe; Jin, Hailin; Yang, Jianchao; Wang, James Z

    2015-12-01

    Image priors are essential to many image restoration applications, including denoising, deblurring, and inpainting. Existing methods use either priors from the given image (internal) or priors from a separate collection of images (external). We find through statistical analysis that unifying the internal and external patch priors may yield a better patch prior. We propose a novel prior learning algorithm that combines the strength of both internal and external priors. In particular, we first learn a generic Gaussian mixture model from a collection of training images and then adapt the model to the given image by simultaneously adding additional components and refining the component parameters. We apply this image-specific prior to image denoising. The experimental results show that our approach yields better or competitive denoising results in terms of both the peak signal-to-noise ratio and structural similarity. PMID:26316129

  6. Multiresolution Bilateral Filtering for Image Denoising

    PubMed Central

    Zhang, Ming; Gunturk, Bahadir K.

    2008-01-01

    The bilateral filter is a nonlinear filter that does spatial averaging without smoothing edges; it has shown to be an effective image denoising technique. An important issue with the application of the bilateral filter is the selection of the filter parameters, which affect the results significantly. There are two main contributions of this paper. The first contribution is an empirical study of the optimal bilateral filter parameter selection in image denoising applications. The second contribution is an extension of the bilateral filter: multiresolution bilateral filter, where bilateral filtering is applied to the approximation (low-frequency) subbands of a signal decomposed using a wavelet filter bank. The multiresolution bilateral filter is combined with wavelet thresholding to form a new image denoising framework, which turns out to be very effective in eliminating noise in real noisy images. Experimental results with both simulated and real data are provided. PMID:19004705
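
    A minimal one-level sketch of the idea, assuming PyWavelets for the filter bank and OpenCV's bilateral filter; the threshold and filter parameters are illustrative, whereas the paper studies how to choose them:

      import numpy as np
      import pywt
      import cv2

      rng = np.random.default_rng(0)
      img = np.clip(0.5 + 0.1 * rng.normal(size=(256, 256)), 0, 1).astype(np.float32)

      # Decompose; bilateral-filter the approximation (low-frequency) subband,
      # soft-threshold the detail subbands, then reconstruct.
      cA, (cH, cV, cD) = pywt.dwt2(img, 'db4')
      cA = cv2.bilateralFilter(cA.astype(np.float32), d=5, sigmaColor=0.1, sigmaSpace=2)
      thr = 0.05
      cH, cV, cD = (pywt.threshold(c, thr, mode='soft') for c in (cH, cV, cD))
      denoised = pywt.idwt2((cA, (cH, cV, cD)), 'db4')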

  7. Echocardiogram enhancement using supervised manifold denoising.

    PubMed

    Wu, Hui; Huynh, Toan T; Souvenir, Richard

    2015-08-01

    This paper presents data-driven methods for echocardiogram enhancement. Existing denoising algorithms typically rely on a single noise model, and do not generalize to the composite noise sources typically found in real-world echocardiograms. Our methods leverage the low-dimensional intrinsic structure of echocardiogram videos. We assume that echocardiogram images are noisy samples from an underlying manifold parametrized by cardiac motion and denoise images via back-projection onto a learned (non-linear) manifold. Our methods incorporate synchronized side information (e.g., electrocardiography), which is often collected alongside the visual data. We evaluate the proposed methods on a synthetic data set and real-world echocardiograms. Quantitative results show improved performance of our methods over recent image despeckling methods and video denoising methods, and a visual analysis of real-world data shows noticeable image enhancement, even in the challenging case of noise due to dropout artifacts. PMID:26072166

  8. An image denoising application using shearlets

    NASA Astrophysics Data System (ADS)

    Sevindir, Hulya Kodal; Yazici, Cuneyt

    2013-10-01

    Medical imaging is a multidisciplinary field related to computer science, electrical/electronic engineering, physics, mathematics and medicine. There has been a dramatic increase in the variety, availability and resolution of medical imaging devices over the last half century. For proper medical imaging, highly trained technicians and clinicians are needed to correctly extract clinically pertinent information from medical data. Artificial systems must be designed to analyze medical data sets either partially or even fully automatically to fulfil this need. For this purpose there has been much ongoing research into finding optimal representations in image processing and computer vision [1, 18]. Medical images almost always contain artefacts, and it is crucial to remove these artefacts to obtain reliable results. Out of the many methods for denoising images, in this paper two denoising methods, wavelets and shearlets, have been applied to mammography images. Comparing these two methods, shearlets give better results for denoising such data.

  9. Low-rank separated representation surrogates of high-dimensional stochastic functions: Application in Bayesian inference

    SciTech Connect

    Validi, AbdoulAhad

    2014-03-01

    This study introduces a non-intrusive approach in the context of low-rank separated representation to construct a surrogate of high-dimensional stochastic functions, e.g., PDEs/ODEs, in order to decrease the computational cost of Markov Chain Monte Carlo simulations in Bayesian inference. The surrogate model is constructed via a regularized alternating least-squares regression with Tikhonov regularization, using a roughening matrix that computes the gradient of the solution, in conjunction with a perturbation-based error indicator to detect optimal model complexities. The model approximates a vector of a continuous solution at discrete values of a physical variable. The required number of random realizations to achieve a successful approximation linearly depends on the function dimensionality. The computational cost of the model construction is quadratic in the number of random inputs, which potentially tackles the curse of dimensionality in high-dimensional stochastic functions. Furthermore, this vector-valued separated representation-based model, in comparison to the available scalar-valued case, leads to a significant reduction in the cost of approximation by an order of magnitude equal to the vector size. The performance of the method is studied through its application to three numerical examples including a 41-dimensional elliptic PDE and a 21-dimensional cavity flow.

  10. LC-MS/MS based proteomic analysis and functional inference of hypothetical proteins in Desulfovibrio vulgaris

    SciTech Connect

    Zhang, Weiwen; Culley, David E.; Gritsenko, Marina A.; Moore, Ronald J.; Nie, Lei; Scholten, Johannes C.; Petritis, Konstantinos; Strittmatter, Eric F.; Camp, David G.; Smith, Richard D.; Brockman, Fred J.

    2006-11-03

    In a previous study, the whole-genome gene expression profiles of D. vulgaris in response to oxidative stress and heat shock were determined. The results showed that 24-28% of the responsive genes encoded hypothetical proteins that have not been experimentally characterized or whose function cannot be deduced by simple sequence comparison. To further explore the protective mechanisms employed by D. vulgaris against oxidative stress and heat shock, an attempt was made in this study to infer the functions of these hypothetical proteins by phylogenomic profiling along with detailed sequence comparison against various publicly available databases. By this approach we were able to assign possible functions to 25 responsive hypothetical proteins. The findings included that DVU0725, induced by oxidative stress, may be involved in lipopolysaccharide biosynthesis, implying that alteration of the lipopolysaccharide on the cell surface might serve as a mechanism against oxidative stress in D. vulgaris. In addition, two responsive proteins, DVU0024, encoding a putative transcriptional regulator, and DVU1670, encoding a predicted redox protein, shared co-evolution patterns with rubrerythrin in Archaeoglobus fulgidus and Clostridium perfringens, respectively, implying that they might be part of the stress response and protective systems in D. vulgaris. The study demonstrated that phylogenomic profiling is a useful tool for the interpretation of experimental genomics data, and also provided further insight into the cellular response to oxidative stress and heat shock in D. vulgaris.

  11. Doppler ultrasound signal denoising based on wavelet frames.

    PubMed

    Zhang, Y; Wang, Y; Wang, W; Liu, B

    2001-05-01

    A novel approach was proposed to denoise the Doppler ultrasound signal. Using this method, wavelet coefficients of the Doppler signal at multiple scales were first obtained using the discrete wavelet frame analysis. Then, a soft thresholding-based denoising algorithm was employed to deal with these coefficients to get the denoised signal. In the simulation experiments, the SNR improvements and the maximum frequency estimation precision were studied for the denoised signal. From the simulation and clinical studies, it was concluded that the performance of this discrete wavelet frame (DWF) approach is higher than that of the standard (critically sampled) wavelet transform (DWT) for the Doppler ultrasound signal denoising. PMID:11381694
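
    A minimal sketch of the scheme, assuming PyWavelets' stationary wavelet transform as the discrete wavelet frame, a stand-in sinusoid for the Doppler signal, and the usual universal threshold with a median-based noise estimate:

      import numpy as np
      import pywt

      rng = np.random.default_rng(0)
      t = np.linspace(0, 1, 1024)
      clean = np.sin(2 * np.pi * 5 * t)              # stand-in for a Doppler signal
      noisy = clean + 0.2 * rng.normal(size=t.size)

      coeffs = pywt.swt(noisy, 'db4', level=4)       # undecimated (frame) transform
      sigma = np.median(np.abs(coeffs[-1][1])) / 0.6745   # noise level, finest detail
      thr = sigma * np.sqrt(2 * np.log(noisy.size))       # universal threshold
      coeffs = [(cA, pywt.threshold(cD, thr, mode='soft')) for cA, cD in coeffs]
      denoised = pywt.iswt(coeffs, 'db4')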

  12. Parallel object-oriented, denoising system using wavelet multiresolution analysis

    DOEpatents

    Kamath, Chandrika; Baldwin, Chuck H.; Fodor, Imola K.; Tang, Nu A.

    2005-04-12

    The present invention provides a data de-noising system utilizing processors and wavelet denoising techniques. Data is read and displayed in different formats. The data is partitioned into regions and the regions are distributed onto the processors. Communication requirements among the processors are determined according to the wavelet denoising technique and the partitioning of the data. The data is transformed onto different multiresolution levels with the wavelet transform according to the wavelet denoising technique and the communication requirements, the transformed data containing wavelet coefficients. The denoised data is then transformed back into its original format for reading and display.

  13. Magnetic resonance image denoising using multiple filters

    NASA Astrophysics Data System (ADS)

    Ai, Danni; Wang, Jinjuan; Miwa, Yuichi

    2013-07-01

    We introduced and compared ten denoising filters, all proposed during the last fifteen years. In particular, the state-of-the-art denoising algorithms NLM and BM3D have attracted much attention, and several extensions have been proposed to improve the noise reduction based on these two algorithms. On the other hand, optimal dictionaries, sparse representations and appropriate shapes of the transform's support are also considered for image denoising. The comparison among the various filters is implemented by measuring the SNR of a phantom image and the denoising effectiveness on a clinical image. The computational time is also evaluated.

  14. A total variation denoising algorithm for hyperspectral data

    NASA Astrophysics Data System (ADS)

    Li, Ting; Chen, Xiao-mei; Xue, Bo; Li, Qian-qian; Ni, Guo-qiang

    2010-11-01

    Since noise can undermine the effectiveness of information extracted from hyperspectral imagery, noise reduction is a prerequisite for many classification-based applications of hyperspectral imagery. In this paper, an effective three-dimensional total variation denoising algorithm for hyperspectral imagery is introduced. First, a three-dimensional objective function for the total variation denoising model is derived from the classical two-dimensional TV algorithms. Since the noise of hyperspectral imagery shows different characteristics in the spatial and spectral domains, the objective function is further improved by using separate spatial and spectral terms with their own regularization parameters, which adjust the trade-off between the two. Then, the improved objective function is discretized by approximating gradients with local differences, optimized by a quadratic convex function and finally solved by a majorization-minimization based iteration algorithm. The performance of the new algorithm is evaluated on a set of Hyperion images acquired over a desert-dominated area in 2007. Experimental results show that, with properly chosen parameter values, the new approach removes indentations and restores the spectral absorption peaks more effectively, while achieving an improvement in signal-to-noise ratio similar to that of the minimum noise fraction (MNF) method.
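
    In symbols, a spatial-spectral total variation objective of the kind described (the notation here is assumed, not taken from the paper) can be written as

      \min_{u}\ \tfrac{1}{2}\|u - f\|_2^2
        + \lambda_s \sum_{i,j,k} \sqrt{(\Delta_x u)_{ijk}^2 + (\Delta_y u)_{ijk}^2}
        + \lambda_\sigma \sum_{i,j,k} \big|(\Delta_z u)_{ijk}\big|

    where f is the noisy data cube, u the denoised estimate, \Delta_x, \Delta_y, \Delta_z are finite differences along the two spatial axes and the spectral axis, and \lambda_s, \lambda_\sigma are the separate spatial and spectral regularization parameters.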

  15. Analysis the application of several denoising algorithm in the astronomical image denoising

    NASA Astrophysics Data System (ADS)

    Jiang, Chao; Geng, Ze-xun; Bao, Yong-qiang; Wei, Xiao-feng; Pan, Ying-feng

    2014-02-01

    Image denoising is an important preprocessing method and one of the frontiers of computer graphics and computer vision. Astronomical target imaging is highly vulnerable to atmospheric turbulence and noise interference; in order to reconstruct a high-quality image of the target, we need to restore the high-frequency signal of the image, but noise also lies in the high-frequency band, so noise is amplified in the reconstruction process. To avoid this phenomenon, incorporating image denoising into the reconstruction process is a feasible solution. This paper mainly studies the principles of four classic denoising algorithms: TV, BLS-GSM, NLM and BM3D. We use simulated data for image denoising to analyze the performance of the four algorithms. Experiments demonstrate that all four algorithms can remove the noise, and that BM3D not only achieves high denoising quality but also the highest efficiency.

  16. Optimization of wavelet- and curvelet-based denoising algorithms by multivariate SURE and GCV

    NASA Astrophysics Data System (ADS)

    Mortezanejad, R.; Gholami, A.

    2016-06-01

    One of the most crucial challenges in seismic data processing is the reduction of noise in the data or improving the signal-to-noise ratio (SNR). Wavelet- and curvelet-based denoising algorithms have become popular to address random noise attenuation for seismic sections. Wavelet basis, thresholding function, and threshold value are three key factors of such algorithms, having a profound effect on the quality of the denoised section. Therefore, given a signal, it is necessary to optimize the denoising operator over these factors to achieve the best performance. In this paper a general denoising algorithm is developed as a multi-variant (variable) filter which performs in multi-scale transform domains (e.g. wavelet and curvelet). In the wavelet domain this general filter is a function of the type of wavelet, characterized by its smoothness, thresholding rule, and threshold value, while in the curvelet domain it is only a function of thresholding rule and threshold value. Also, two methods, Stein’s unbiased risk estimate (SURE) and generalized cross validation (GCV), evaluated using a Monte Carlo technique, are utilized to optimize the algorithm in both wavelet and curvelet domains for a given seismic signal. The best wavelet function is selected from a family of fractional B-spline wavelets. The optimum thresholding rule is selected from general thresholding functions which contain the most well known thresholding functions, and the threshold value is chosen from a set of possible values. The results obtained from numerical tests show high performance of the proposed method in both wavelet and curvelet domains in comparison to conventional methods when denoising seismic data.
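
    A minimal sketch of one ingredient, SURE-based threshold selection for soft thresholding (assuming unit-variance Gaussian noise on the coefficients, as in SureShrink; the data are a toy stand-in):

      import numpy as np

      def sure_soft(x, t):
          """SURE of soft thresholding at t for coefficients x with sigma = 1."""
          n = x.size
          return n - 2 * np.sum(np.abs(x) <= t) + np.sum(np.minimum(np.abs(x), t) ** 2)

      rng = np.random.default_rng(0)
      coeffs = np.concatenate([rng.normal(size=900),      # pure-noise coefficients
                               5 + rng.normal(size=100)])  # a few signal coefficients
      candidates = np.sort(np.abs(coeffs))
      risks = np.array([sure_soft(coeffs, t) for t in candidates])
      t_opt = candidates[np.argmin(risks)]   # threshold minimizing the estimated risk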

  17. Inferring muscle functional roles of the ostrich pelvic limb during walking and running using computer optimization.

    PubMed

    Rankin, Jeffery W; Rubenson, Jonas; Hutchinson, John R

    2016-05-01

    Owing to their cursorial background, ostriches (Struthio camelus) walk and run with high metabolic economy, can reach very fast running speeds and quickly execute cutting manoeuvres. These capabilities are believed to be a result of their ability to coordinate muscles to take advantage of specialized passive limb structures. This study aimed to infer the functional roles of ostrich pelvic limb muscles during gait. Existing gait data were combined with a newly developed musculoskeletal model to generate simulations of ostrich walking and running that predict muscle excitations, force and mechanical work. Consistent with previous avian electromyography studies, predicted excitation patterns showed that individual muscles tended to be excited primarily during only stance or swing. Work and force estimates show that ostrich gaits are partially hip-driven with the bi-articular hip-knee muscles driving stance mechanics. Conversely, the knee extensors acted as brakes, absorbing energy. The digital extensors generated large amounts of both negative and positive mechanical work, with increased magnitudes during running, providing further evidence that ostriches make extensive use of tendinous elastic energy storage to improve economy. The simulations also highlight the need to carefully consider non-muscular soft tissues that may play a role in ostrich gait. PMID:27146688

  18. Inferring muscle functional roles of the ostrich pelvic limb during walking and running using computer optimization

    PubMed Central

    Rubenson, Jonas

    2016-01-01

    Owing to their cursorial background, ostriches (Struthio camelus) walk and run with high metabolic economy, can reach very fast running speeds and quickly execute cutting manoeuvres. These capabilities are believed to be a result of their ability to coordinate muscles to take advantage of specialized passive limb structures. This study aimed to infer the functional roles of ostrich pelvic limb muscles during gait. Existing gait data were combined with a newly developed musculoskeletal model to generate simulations of ostrich walking and running that predict muscle excitations, force and mechanical work. Consistent with previous avian electromyography studies, predicted excitation patterns showed that individual muscles tended to be excited primarily during only stance or swing. Work and force estimates show that ostrich gaits are partially hip-driven with the bi-articular hip–knee muscles driving stance mechanics. Conversely, the knee extensors acted as brakes, absorbing energy. The digital extensors generated large amounts of both negative and positive mechanical work, with increased magnitudes during running, providing further evidence that ostriches make extensive use of tendinous elastic energy storage to improve economy. The simulations also highlight the need to carefully consider non-muscular soft tissues that may play a role in ostrich gait. PMID:27146688

  19. Inference of the cold dark matter substructure mass function at z = 0.2 using strong gravitational lenses

    NASA Astrophysics Data System (ADS)

    Vegetti, S.; Koopmans, L. V. E.; Auger, M. W.; Treu, T.; Bolton, A. S.

    2014-08-01

    We present the results of a search for galaxy substructures in a sample of 11 gravitational lens galaxies from the Sloan Lens ACS Survey by Bolton et al. We find no significant detection of mass clumps, except for a luminous satellite in the system SDSS J0956+5110. We use these non-detections, in combination with a previous detection in the system SDSS J0946+1006, to derive constraints on the substructure mass function in massive early-type host galaxies with an average redshift of ~0.2 and an average velocity dispersion <σ_eff> ~ 270 km s^-1. We perform a Bayesian inference on the substructure mass function, within a median region of about 32 kpc^2 around the Einstein radius (~4.2 kpc). We infer a mean projected substructure mass fraction f = 0.0076^{+0.0208}_{-0.0052} at the 68 per cent confidence level and a substructure mass function slope α < 2.93 at the 95 per cent confidence level for a uniform prior probability density on α. For a Gaussian prior based on cold dark matter (CDM) simulations, we infer f = 0.0064^{+0.0080}_{-0.0042} and a slope of α = 1.90^{+0.098}_{-0.098} at the 68 per cent confidence level. Since only one substructure was detected in the full sample, we have little information on the mass function slope, which is therefore poorly constrained (i.e. the Bayes factor shows no positive preference for either of the two models). The inferred fraction is consistent with the expectations from CDM simulations and with inference from flux ratio anomalies at the 68 per cent confidence level.

  20. Efficient bias correction for magnetic resonance image denoising.

    PubMed

    Mukherjee, Partha Sarathi; Qiu, Peihua

    2013-05-30

    Magnetic resonance imaging (MRI) is a popular radiology technique that is used for visualizing detailed internal structure of the body. Observed MRI images are generated by the inverse Fourier transformation from received frequency signals of a magnetic resonance scanner system. Previous research has demonstrated that random noise involved in the observed MRI images can be described adequately by the so-called Rician noise model. Under that model, the observed image intensity at a given pixel is a nonlinear function of the true image intensity and of two independent zero-mean random variables with the same normal distribution. Because of such a complicated noise structure in the observed MRI images, denoised images by conventional denoising methods are usually biased, and the bias could reduce image contrast and negatively affect subsequent image analysis. Therefore, it is important to address the bias issue properly. To this end, several bias-correction procedures have been proposed in the literature. In this paper, we study the Rician noise model and the corresponding bias-correction problem systematically and propose a new and more effective bias-correction formula based on the regression analysis and Monte Carlo simulation. Numerical studies show that our proposed method works well in various applications. PMID:23074149
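
    In symbols, the Rician observation model described above and the classical bias correction it motivates (the paper develops a more refined, regression-based formula; only the conventional correction is shown here, with notation assumed) are

      M = \sqrt{(S + n_1)^2 + n_2^2}, \qquad n_1, n_2 \sim \mathcal{N}(0, \sigma^2)\ \text{i.i.d.},

      \mathbb{E}[M^2] = S^2 + 2\sigma^2 \quad\Rightarrow\quad \hat{S} = \sqrt{\max(M^2 - 2\sigma^2,\ 0)},

    where S is the true intensity, M the observed magnitude, and σ the noise level.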

  1. Ladar range image denoising by a nonlocal probability statistics algorithm

    NASA Astrophysics Data System (ADS)

    Xia, Zhi-Wei; Li, Qi; Xiong, Zhi-Peng; Wang, Qi

    2013-01-01

    According to the characteristics of range images from coherent ladar and on the basis of nonlocal means (NLM), a nonlocal probability statistics (NLPS) algorithm is proposed in this paper. The difference is that NLM performs denoising using the mean of the conditional probability distribution function (PDF) while NLPS uses the maximum of the marginal PDF. In the algorithm, similar blocks are found by a block-matching operation and form a group. Pixels in the group are analyzed by probability statistics and the gray value with maximum probability is used as the estimated value of the current pixel. Simulated range images of coherent ladar with different carrier-to-noise ratios and a real range image of coherent ladar with 8 gray levels are denoised by this algorithm, and the results are compared with those of the median filter, multitemplate order mean filter, NLM, median nonlocal mean filter and its incorporation of anatomical side information, and the unsupervised information-theoretic adaptive filter. The range abnormality noise and Gaussian noise in range images of coherent ladar are effectively suppressed by NLPS.

  2. Dissociable functions of reward inference in the lateral prefrontal cortex and the striatum

    PubMed Central

    Tanaka, Shingo; Pan, Xiaochuan; Oguchi, Mineki; Taylor, Jessica E.; Sakagami, Masamichi

    2015-01-01

    In a complex and uncertain world, how do we select appropriate behavior? One possibility is that we choose actions that are highly reinforced by their probabilistic consequences (model-free processing). However, we may instead plan actions prior to their actual execution by predicting their consequences (model-based processing). It has been suggested that the brain contains multiple yet distinct systems involved in reward prediction. Several studies have tried to allocate model-free and model-based systems to the striatum and the lateral prefrontal cortex (LPFC), respectively. Although there is much support for this hypothesis, recent research has revealed discrepancies. To understand the nature of the reward prediction systems in the LPFC and the striatum, a series of single-unit recording experiments were conducted. LPFC neurons were found to infer the reward associated with the stimuli even when the monkeys had not yet learned the stimulus-reward (SR) associations directly. Striatal neurons seemed to predict the reward for each stimulus only after directly experiencing the SR contingency. However, the one exception was “Exclusive Or” situations in which striatal neurons could predict the reward without direct experience. Previous single-unit studies in monkeys have reported that neurons in the LPFC encode category information, and represent reward information specific to a group of stimuli. Here, as an extension of these, we review recent evidence that a group of LPFC neurons can predict reward specific to a category of visual stimuli defined by relevant behavioral responses. We suggest that the functional difference in reward prediction between the LPFC and the striatum is that while LPFC neurons can utilize abstract code, striatal neurons can code individual associations between stimuli and reward but cannot utilize abstract code. PMID:26236266

  3. Dissociable functions of reward inference in the lateral prefrontal cortex and the striatum.

    PubMed

    Tanaka, Shingo; Pan, Xiaochuan; Oguchi, Mineki; Taylor, Jessica E; Sakagami, Masamichi

    2015-01-01

    In a complex and uncertain world, how do we select appropriate behavior? One possibility is that we choose actions that are highly reinforced by their probabilistic consequences (model-free processing). However, we may instead plan actions prior to their actual execution by predicting their consequences (model-based processing). It has been suggested that the brain contains multiple yet distinct systems involved in reward prediction. Several studies have tried to allocate model-free and model-based systems to the striatum and the lateral prefrontal cortex (LPFC), respectively. Although there is much support for this hypothesis, recent research has revealed discrepancies. To understand the nature of the reward prediction systems in the LPFC and the striatum, a series of single-unit recording experiments were conducted. LPFC neurons were found to infer the reward associated with the stimuli even when the monkeys had not yet learned the stimulus-reward (SR) associations directly. Striatal neurons seemed to predict the reward for each stimulus only after directly experiencing the SR contingency. However, the one exception was "Exclusive Or" situations in which striatal neurons could predict the reward without direct experience. Previous single-unit studies in monkeys have reported that neurons in the LPFC encode category information, and represent reward information specific to a group of stimuli. Here, as an extension of these, we review recent evidence that a group of LPFC neurons can predict reward specific to a category of visual stimuli defined by relevant behavioral responses. We suggest that the functional difference in reward prediction between the LPFC and the striatum is that while LPFC neurons can utilize abstract code, striatal neurons can code individual associations between stimuli and reward but cannot utilize abstract code. PMID:26236266

  4. A New Adaptive Image Denoising Method Based on Neighboring Coefficients

    NASA Astrophysics Data System (ADS)

    Biswas, Mantosh; Om, Hari

    2016-03-01

    Many good techniques have been discussed for image denoising, including NeighShrink, the improved adaptive wavelet denoising method based on neighboring coefficients (IAWDMBNC), the improved wavelet shrinkage technique for image denoising (IWST), the local adaptive Wiener filter (LAWF), wavelet packet thresholding using median and Wiener filters (WPTMWF), and the adaptive image denoising method based on thresholding (AIDMT). These techniques are based on a local statistical description of the neighboring coefficients in a window. These methods, however, do not give good image quality, since their thresholds cannot modify and remove too many small wavelet coefficients simultaneously. In this paper, a new image denoising method is proposed that shrinks the noisy coefficients using an adaptive threshold. Our method overcomes these drawbacks and performs better than the NeighShrink, IAWDMBNC, IWST, LAWF, WPTMWF, and AIDMT denoising methods.
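
    For context, the NeighShrink-style rule underlying several of these methods shrinks each wavelet coefficient d_{jk} by the energy of its neighborhood window B_{jk} (notation assumed):

      \hat{d}_{jk} = d_{jk}\,\max\!\Big(0,\ 1 - \frac{\lambda^2}{S_{jk}^2}\Big),
      \qquad S_{jk}^2 = \sum_{(m,n) \in B_{jk}} d_{mn}^2,
      \qquad \lambda = \sigma\sqrt{2\ln n},

    so a coefficient survives (after scaling) only when its neighborhood energy exceeds the squared universal threshold; the proposed method replaces this fixed threshold with an adaptive one.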

  5. Performance comparison of denoising filters for source camera identification

    NASA Astrophysics Data System (ADS)

    Cortiana, A.; Conotter, V.; Boato, G.; De Natale, F. G. B.

    2011-02-01

    Source identification for digital content is one of the main branches of digital image forensics. It relies on the extraction of the photo-response non-uniformity (PRNU) noise as a unique intrinsic fingerprint that efficiently characterizes the digital device which generated the content. Such noise is estimated as the difference between the content and its de-noised version obtained via denoising filter processing. This paper proposes a performance comparison of different denoising filters for source identification purposes. In particular, results achieved with a sophisticated 3D filter are presented and discussed with respect to state-of-the-art denoising filters previously employed in such a context.
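
    A minimal sketch of the PRNU pipeline, assuming a wavelet denoiser from scikit-image stands in for the filters compared in the paper, and simulated flat-field images stand in for real camera output:

      import numpy as np
      from skimage.restoration import denoise_wavelet

      def noise_residual(img):
          """PRNU-style residual: the image minus its denoised version."""
          return img - denoise_wavelet(img, rescale_sigma=True)

      # Estimate a fingerprint by averaging residuals over several images from
      # the same (here simulated) camera, then correlate a test residual with it.
      rng = np.random.default_rng(0)
      flats = [np.clip(0.5 + 0.01 * rng.normal(size=(128, 128)), 0, 1)
               for _ in range(8)]
      fingerprint = np.mean([noise_residual(f) for f in flats], axis=0)
      score = np.corrcoef(noise_residual(flats[0]).ravel(),
                          fingerprint.ravel())[0, 1]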

  6. Postprocessing of Compressed Images via Sequential Denoising

    NASA Astrophysics Data System (ADS)

    Dar, Yehuda; Bruckstein, Alfred M.; Elad, Michael; Giryes, Raja

    2016-07-01

    In this work we propose a novel postprocessing technique for compression-artifact reduction. Our approach is based on posing this task as an inverse problem, with a regularization that leverages existing state-of-the-art image denoising algorithms. We rely on the recently proposed Plug-and-Play Prior framework, suggesting the solution of general inverse problems via the Alternating Direction Method of Multipliers (ADMM), leading to a sequence of Gaussian denoising steps. A key feature in our scheme is a linearization of the compression-decompression process, so as to get a formulation that can be optimized. In addition, we supply a thorough analysis of this linear approximation for several basic compression procedures. The proposed method is suitable for diverse compression techniques that rely on transform coding. Specifically, we demonstrate impressive gains in image quality for several leading compression methods - JPEG, JPEG2000, and HEVC.

  7. Postprocessing of Compressed Images via Sequential Denoising.

    PubMed

    Dar, Yehuda; Bruckstein, Alfred M; Elad, Michael; Giryes, Raja

    2016-07-01

    In this paper, we propose a novel postprocessing technique for compression-artifact reduction. Our approach is based on posing this task as an inverse problem, with a regularization that leverages existing state-of-the-art image denoising algorithms. We rely on the recently proposed Plug-and-Play Prior framework, suggesting the solution of general inverse problems via the alternating direction method of multipliers, leading to a sequence of Gaussian denoising steps. A key feature in our scheme is a linearization of the compression-decompression process, so as to get a formulation that can be optimized. In addition, we supply a thorough analysis of this linear approximation for several basic compression procedures. The proposed method is suitable for diverse compression techniques that rely on transform coding. In particular, we demonstrate impressive gains in image quality for several leading compression methods - JPEG, JPEG2000, and HEVC. PMID:27214878
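
    A minimal sketch of the Plug-and-Play ADMM idea behind both records above (assumptions: a generic linear degradation operator H instead of the paper's linearized codec, a TV denoiser from scikit-image standing in for a state-of-the-art denoiser, and illustrative step sizes; the x-update is approximated by a few gradient steps rather than an exact solve):

      import numpy as np
      from scipy.ndimage import gaussian_filter
      from skimage.restoration import denoise_tv_chambolle

      def pnp_admm(y, H, Ht, rho=1.0, iters=20):
          """min_x 0.5||Hx - y||^2 + R(x), with R implied by the plugged-in denoiser."""
          x = Ht(y); v = x.copy(); u = np.zeros_like(x)
          for _ in range(iters):
              for _ in range(5):   # inexact x-update on the quadratic term
                  x = x - 0.1 * (Ht(H(x) - y) + rho * (x - v + u))
              v = denoise_tv_chambolle(x + u, weight=0.1)   # denoiser as prior step
              u = u + x - v                                 # dual update
          return v

      # Example: H is a Gaussian blur (symmetric kernel, treated as self-adjoint).
      blur = lambda z: gaussian_filter(z, sigma=1.5)
      rng = np.random.default_rng(0)
      clean = np.zeros((64, 64)); clean[24:40, 24:40] = 1.0
      y = blur(clean) + 0.02 * rng.normal(size=clean.shape)
      x_hat = pnp_admm(y, blur, blur)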

  8. Adaptive Image Denoising by Mixture Adaptation.

    PubMed

    Luo, Enming; Chan, Stanley H; Nguyen, Truong Q

    2016-10-01

    We propose an adaptive learning procedure to learn patch-based image priors for image denoising. The new algorithm, called the expectation-maximization (EM) adaptation, takes a generic prior learned from a generic external database and adapts it to the noisy image to generate a specific prior. Different from existing methods that combine internal and external statistics in ad hoc ways, the proposed algorithm is rigorously derived from a Bayesian hyper-prior perspective. There are two contributions of this paper. First, we provide full derivation of the EM adaptation algorithm and demonstrate methods to improve the computational complexity. Second, in the absence of the latent clean image, we show how EM adaptation can be modified based on pre-filtering. The experimental results show that the proposed adaptation algorithm yields consistently better denoising results than the one without adaptation and is superior to several state-of-the-art algorithms. PMID:27416593

  9. Simultaneous denoising and compression of multispectral images

    NASA Astrophysics Data System (ADS)

    Hagag, Ahmed; Amin, Mohamed; Abd El-Samie, Fathi E.

    2013-01-01

    A new technique for denoising and compression of multispectral satellite images to remove the effect of noise on the compression process is presented. One type of multispectral image has been considered: Landsat Enhanced Thematic Mapper Plus. The discrete wavelet transform (DWT), the dual-tree DWT, and a simple Huffman coder are used in the compression process. Simulation results show that the proposed technique is more effective than other traditional compression-only techniques.

  10. Infrared image denoising by nonlocal means filtering

    NASA Astrophysics Data System (ADS)

    Dee-Noor, Barak; Stern, Adrian; Yitzhaky, Yitzhak; Kopeika, Natan

    2012-05-01

    The recently introduced non-local means (NLM) image denoising technique broke the traditional paradigm according to which image pixels are processed by their surroundings. The non-local means technique has been demonstrated to outperform state-of-the-art denoising techniques when applied to images in the visible. This technique is even more powerful when applied to low-contrast images, which makes it attractive for denoising infrared (IR) images. In this work we investigate the performance of NLM applied to infrared images. We also present a new technique designed to speed up the NLM filtering process. The main drawback of NLM is the large computational time required by the process of searching for similar patches. Several techniques have been developed during the last years to reduce this computational burden. Here we present a new technique designed to reduce the computational cost while sustaining the optimal filtering results of the NLM technique. We show that the new technique, which we call Multi-Resolution Search NLM (MRS-NLM), significantly reduces the computational cost of the filtering process, and we present a study of its performance on IR images.

  11. A phylogeny-based benchmarking test for orthology inference reveals the limitations of function-based validation.

    PubMed

    Trachana, Kalliopi; Forslund, Kristoffer; Larsson, Tomas; Powell, Sean; Doerks, Tobias; von Mering, Christian; Bork, Peer

    2014-01-01

    Accurate orthology prediction is crucial for many applications in the post-genomic era. The lack of broadly accepted benchmark tests precludes a comprehensive analysis of orthology inference. So far, functional annotation between orthologs serves as a performance proxy. However, this violates the fundamental principle of orthology as an evolutionary definition, while it is often not applicable due to limited experimental evidence for most species. Therefore, we constructed high quality "gold standard" orthologous groups that can serve as a benchmark set for orthology inference in bacterial species. Herein, we used this dataset to demonstrate 1) why a manually curated, phylogeny-based dataset is more appropriate for benchmarking orthology than other popular practices and 2) how it guides database design and parameterization through careful error quantification. More specifically, we illustrate how function-based tests often fail to identify false assignments, misjudging the true performance of orthology inference methods. We also examined how our dataset can instruct the selection of a "core" species repertoire to improve detection accuracy. We conclude that including more genomes at the proper evolutionary distances can influence the overall quality of orthology detection. The curated gene families, called Reference Orthologous Groups, are publicly available at http://eggnog.embl.de/orthobench2. PMID:25369365

  12. A Phylogeny-Based Benchmarking Test for Orthology Inference Reveals the Limitations of Function-Based Validation

    PubMed Central

    Larsson, Tomas; Powell, Sean; Doerks, Tobias; von Mering, Christian

    2014-01-01

    Accurate orthology prediction is crucial for many applications in the post-genomic era. The lack of broadly accepted benchmark tests precludes a comprehensive analysis of orthology inference. So far, functional annotation between orthologs serves as a performance proxy. However, this violates the fundamental principle of orthology as an evolutionary definition, while it is often not applicable due to limited experimental evidence for most species. Therefore, we constructed high quality "gold standard" orthologous groups that can serve as a benchmark set for orthology inference in bacterial species. Herein, we used this dataset to demonstrate 1) why a manually curated, phylogeny-based dataset is more appropriate for benchmarking orthology than other popular practices and 2) how it guides database design and parameterization through careful error quantification. More specifically, we illustrate how function-based tests often fail to identify false assignments, misjudging the true performance of orthology inference methods. We also examined how our dataset can instruct the selection of a “core” species repertoire to improve detection accuracy. We conclude that including more genomes at the proper evolutionary distances can influence the overall quality of orthology detection. The curated gene families, called Reference Orthologous Groups, are publicly available at http://eggnog.embl.de/orthobench2. PMID:25369365

  13. Inference of S-system models of genetic networks by solving one-dimensional function optimization problems.

    PubMed

    Kimura, S; Araki, D; Matsumura, K; Okada-Hatakeyama, M

    2012-02-01

    Voit and Almeida have proposed the decoupling approach as a method for inferring the S-system models of genetic networks. The decoupling approach defines the inference of a genetic network as a problem requiring the solutions of sets of algebraic equations. The computation can be accomplished in a very short time, as the approach estimates S-system parameters without solving any of the differential equations. Yet the defined algebraic equations are non-linear, which sometimes prevents us from finding reasonable S-system parameters. In this study, we propose a new technique to overcome this drawback of the decoupling approach. This technique transforms the problem of solving each set of algebraic equations into a one-dimensional function optimization problem. The computation can still be accomplished in a relatively short time, as the problem is transformed by solving a linear programming problem. We confirm the effectiveness of the proposed approach through numerical experiments. PMID:22155075
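
    For reference, the S-system form whose parameters are estimated in this approach is

      \frac{dX_i}{dt} = \alpha_i \prod_{j=1}^{n} X_j^{g_{ij}} \;-\; \beta_i \prod_{j=1}^{n} X_j^{h_{ij}},
      \qquad i = 1, \dots, n,

    where X_i is the expression level of gene i, α_i and β_i are rate constants, and g_{ij}, h_{ij} are kinetic orders; the decoupling approach estimates these parameters one equation at a time from the data, without integrating the differential equations.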

  14. Experimental wavelet based denoising for indoor infrared wireless communications.

    PubMed

    Rajbhandari, Sujan; Ghassemlooy, Zabih; Angelova, Maia

    2013-06-01

    This paper reports on experimental wavelet denoising techniques carried out for the first time for a number of modulation schemes for indoor optical wireless communications in the presence of fluorescent light interference. The experimental results are verified using computer simulations, clearly illustrating the advantage of the wavelet denoising technique in comparison to high-pass filtering for all baseband modulation schemes. PMID:23736631

  15. Denoising and deblurring of Fourier transform infrared spectroscopic imaging data

    NASA Astrophysics Data System (ADS)

    Nguyen, Tan H.; Reddy, Rohith K.; Walsh, Michael J.; Schulmerich, Matthew; Popescu, Gabriel; Do, Minh N.; Bhargava, Rohit

    2012-03-01

    Fourier transform infrared (FT-IR) spectroscopic imaging is a powerful tool to obtain chemical information from images of heterogeneous, chemically diverse samples. Significant advances in instrumentation and data processing in the recent past have led to improved instrument design and relatively widespread use of FT-IR imaging, in a variety of systems ranging from biomedical tissue to polymer composites. Various techniques for improving signal to noise ratio (SNR), data collection time and spatial resolution have been proposed previously. In this paper we present an integrated framework that addresses all these factors comprehensively. We utilize the low-rank nature of the data and model the instrument point spread function to denoise data, and then simultaneously deblurr and estimate unknown information from images, using a Bayesian variational approach. We show that more spatial detail and improved image quality can be obtained using the proposed framework. The proposed technique is validated through experiments on a standard USAF target and on prostate tissue specimens.

  16. A connection between score matching and denoising autoencoders.

    PubMed

    Vincent, Pascal

    2011-07-01

    Denoising autoencoders have been previously shown to be competitive alternatives to restricted Boltzmann machines for unsupervised pretraining of each layer of a deep architecture. We show that a simple denoising autoencoder training criterion is equivalent to matching the score (with respect to the data) of a specific energy-based model to that of a nonparametric Parzen density estimator of the data. This yields several useful insights. It defines a proper probabilistic model for the denoising autoencoder technique, which makes it in principle possible to sample from them or rank examples by their energy. It suggests a different way to apply score matching that is related to learning to denoise and does not require computing second derivatives. It justifies the use of tied weights between the encoder and decoder and suggests ways to extend the success of denoising autoencoders to a larger family of energy-based models. PMID:21492012
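
    In symbols (notation assumed), with Gaussian corruption \tilde{x} = x + \epsilon, \epsilon \sim \mathcal{N}(0, \sigma^2 I), the denoising criterion and its score-matching reading are

      \mathcal{L}(r) = \mathbb{E}_{x,\tilde{x}}\big[\,\|r(\tilde{x}) - x\|^2\,\big],
      \qquad r^{*}(\tilde{x}) \;\approx\; \tilde{x} + \sigma^2\,\nabla_{\tilde{x}} \log p_{\sigma}(\tilde{x}),

    where p_σ is the Parzen (kernel-smoothed) density of the data: the optimal reconstruction displaces each corrupted point along the score of p_σ, which is what ties denoising to score matching.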

  17. Dual-domain denoising in three dimensional magnetic resonance imaging

    PubMed Central

    Peng, Jing; Zhou, Jiliu; Wu, Xi

    2016-01-01

    Denoising is a crucial preprocessing procedure for three-dimensional magnetic resonance imaging (3D MRI). Existing denoising methods are predominantly implemented in a single domain, ignoring information in other domains. However, denoising methods are becoming increasingly complex, making analysis and implementation challenging. The present study aimed to develop a dual-domain image denoising (DDID) algorithm for 3D MRI that encapsulates information from the spatial and transform domains. In the present study, the DDID method was used to distinguish signal from noise in the spatial and frequency domains, after which robust, accurate noise estimation was introduced for iterative filtering, which is simple and computationally efficient. In addition, the proposed method was compared quantitatively and qualitatively with existing methods for synthetic and in vivo MRI datasets. The results of the present study suggested that the novel DDID algorithm performed well and provided competitive results, as compared with existing MRI denoising filters. PMID:27446257

  18. Combining interior and exterior characteristics for remote sensing image denoising

    NASA Astrophysics Data System (ADS)

    Peng, Ni; Sun, Shujin; Wang, Runsheng; Zhong, Ping

    2016-04-01

    Remote sensing image denoising faces many challenges since a remote sensing image usually covers a wide area and thus contains complex contents. Using the patch-based statistical characteristics is a flexible method to improve the denoising performance. There are usually two kinds of statistical characteristics available: interior and exterior characteristics. Different statistical characteristics have their own strengths to restore specific image contents. Combining different statistical characteristics to use their strengths together may have the potential to improve denoising results. This work proposes a method combining statistical characteristics to adaptively select statistical characteristics for different image contents. The proposed approach is implemented through a new characteristics selection criterion learned over training data. Moreover, with the proposed combination method, this work develops a denoising algorithm for remote sensing images. Experimental results show that our method can make full use of the advantages of interior and exterior characteristics for different image contents and thus improve the denoising performance.

  19. Denoising portal images by means of wavelet techniques

    NASA Astrophysics Data System (ADS)

    Gonzalez Lopez, Antonio Francisco

    Portal images are used in radiotherapy for the verification of patient positioning. The distinguishing feature of this image type lies in its formation process: the same beam used for patient treatment is used for image formation. The high energy of the photons used in radiotherapy strongly limits the quality of portal images: Low contrast between tissues, low spatial resolution and low signal to noise ratio. This Thesis studies the enhancement of these images, in particular denoising of portal images. The statistical properties of portal images and noise are studied: power spectra, statistical dependencies between image and noise and marginal, joint and conditional distributions in the wavelet domain. Later, various denoising methods are applied to noisy portal images. Methods operating in the wavelet domain are the basis of this Thesis. In addition, the Wiener filter and the non local means filter (NLM), operating in the image domain, are used as a reference. Other topics studied in this Thesis are spatial resolution, wavelet processing and image processing in dosimetry in radiotherapy. In this regard, the spatial resolution of portal imaging systems is studied; a new method for determining the spatial resolution of the imaging equipments in digital radiology is presented; the calculation of the power spectrum in the wavelet domain is studied; reducing uncertainty in film dosimetry is investigated; a method for the dosimetry of small radiation fields with radiochromic film is presented; the optimal signal resolution is determined, as a function of the noise level and the quantization step, in the digitization process of films and the useful optical density range is set, as a function of the required uncertainty level, for a densitometric system. Marginal distributions of portal images are similar to those of natural images. This also applies to the statistical relationships between wavelet coefficients, intra-band and inter-band. These facts result in a better

  20. OPTICAL COHERENCE TOMOGRAPHY HEART TUBE IMAGE DENOISING BASED ON CONTOURLET TRANSFORM.

    PubMed

    Guo, Qing; Sun, Shuifa; Dong, Fangmin; Gao, Bruce Z; Wang, Rui

    2012-01-01

    Optical Coherence Tomography (OCT) is gradually becoming a very important imaging technology in the biomedical field for its noninvasive, nondestructive and real-time properties. However, the interpretation and application of OCT images are limited by ubiquitous noise. In this paper, a denoising algorithm based on the contourlet transform for OCT heart tube images is proposed. A bivariate function is constructed to model the joint probability density function (pdf) of a coefficient and its cousin in the contourlet domain. A bivariate shrinkage function is deduced to denoise the image by maximum a posteriori (MAP) estimation. Three metrics, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and equivalent number of looks (ENL), are used to evaluate the image denoised using the proposed algorithm. The results show that the signal-to-noise ratio is improved while the edges of objects are preserved by the proposed algorithm. Systematic comparisons with other conventional algorithms, such as the mean filter, median filter, RKT filter, Lee filter, as well as the bivariate shrinkage function for the wavelet-based algorithm, are conducted. The advantage of the proposed algorithm over these methods is illustrated. PMID:25364626
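
    The bivariate MAP shrinkage rule of this family (originally derived by Sendur and Selesnick for wavelet parent-child pairs; here the cousin coefficient plays the second role, and the notation is assumed) is

      \hat{w}_1 = \frac{\Big(\sqrt{w_1^2 + w_2^2} \;-\; \frac{\sqrt{3}\,\sigma_n^2}{\sigma}\Big)_{+}}{\sqrt{w_1^2 + w_2^2}}\; w_1,

    where w_1 is the noisy coefficient, w_2 its cousin, σ_n^2 the noise variance, σ the marginal signal standard deviation, and (x)_+ = max(x, 0).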

  1. Musculoskeletal ultrasound image denoising using Daubechies wavelets

    NASA Astrophysics Data System (ADS)

    Gupta, Rishu; Elamvazuthi, I.; Vasant, P.

    2012-11-01

    Among the various existing medical imaging modalities, ultrasound holds particular promise because of its ready availability and its use of non-ionizing radiation. In this paper we denoise ultrasound images using Daubechies wavelets and analyze the results with peak signal-to-noise ratio (PSNR) and the coefficient of correlation as performance indices. Daubechies wavelets db1 to db6 are applied, at decomposition levels 1 to 3, to four different ultrasound bone-fracture images. The resultant images are shown for visual inspection, and the PSNR and coefficient-of-correlation values are plotted for quantitative analysis.
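
    The two performance indices used above can be computed as in the following sketch (the 8-bit peak value is an assumption about the image range).

      import numpy as np

      def psnr(reference, test, peak=255.0):
          """Peak signal-to-noise ratio in dB."""
          mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
          return 10.0 * np.log10(peak ** 2 / mse)

      def corr_coeff(reference, test):
          """Pearson coefficient of correlation between two images."""
          return np.corrcoef(reference.ravel(), test.ravel())[0, 1]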

  2. A novel partial volume effects correction technique integrating deconvolution associated with denoising within an iterative PET image reconstruction

    SciTech Connect

    Merlin, Thibaut; Visvikis, Dimitris; Fernandez, Philippe; Lamare, Frederic

    2015-02-15

    Purpose: Partial volume effect (PVE) plays an important role in both qualitative and quantitative PET image accuracy, especially for small structures. A previously proposed voxelwise PVE correction method applied on PET reconstructed images involves the use of Lucy–Richardson deconvolution incorporating wavelet-based denoising to limit the associated propagation of noise. The aim of this study is to incorporate the deconvolution, coupled with the denoising step, directly inside the iterative reconstruction process to further improve PVE correction. Methods: The list-mode ordered subset expectation maximization (OSEM) algorithm has been modified accordingly, with the Lucy–Richardson deconvolution algorithm applied to the current estimate of the image at each reconstruction iteration. Acquisitions of the NEMA NU2-2001 IQ phantom were performed on a GE DRX PET/CT system to study the impact of incorporating the deconvolution inside the reconstruction [with and without the point spread function (PSF) model] in comparison to its application postreconstruction and to standard iterative reconstruction incorporating the PSF model. The impact of the denoising step was also evaluated. Images were semiquantitatively assessed by studying the trade-off between intensity recovery and the noise level in the background, estimated as relative standard deviation. Qualitative assessments of the developed methods were additionally performed on clinical cases. Results: Incorporating the deconvolution without denoising within the reconstruction achieved superior intensity recovery in comparison to both standard OSEM reconstruction integrating a PSF model and application of the deconvolution algorithm in a postreconstruction process. The addition of the denoising step made it possible to limit the SNR degradation while preserving the intensity recovery. Conclusions: This study demonstrates the feasibility of incorporating the Lucy–Richardson deconvolution associated with a
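
    The study embeds the deconvolution inside list-mode OSEM; as a much simpler illustration of the two ingredients, the post-reconstruction variant (Lucy–Richardson deconvolution followed by a wavelet shrinkage step) might look as follows, assuming scikit-image and PyWavelets; the Gaussian PSF and all parameter values are placeholders.

      import numpy as np
      import pywt
      from skimage.restoration import richardson_lucy  # `iterations` in older versions

      def gaussian_psf(size=9, sigma=1.5):
          ax = np.arange(size) - size // 2
          g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
          psf = np.outer(g, g)
          return psf / psf.sum()

      def deconvolve_then_denoise(img, n_iter=20):
          """Richardson-Lucy deconvolution, then soft wavelet shrinkage."""
          deconv = richardson_lucy(img, gaussian_psf(), num_iter=n_iter)
          coeffs = pywt.wavedec2(deconv, "db2", level=2)
          sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
          thr = sigma * np.sqrt(2.0 * np.log(img.size))
          out = [coeffs[0]] + [tuple(pywt.threshold(d, thr, mode="soft") for d in c)
                               for c in coeffs[1:]]
          return pywt.waverec2(out, "db2")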

  3. Minimum entropy approach to denoising time-frequency distributions

    NASA Astrophysics Data System (ADS)

    Aviyente, Selin; Williams, William J.

    2001-11-01

    Signals used in time-frequency analysis are usually corrupted by noise. Therefore, denoising the time-frequency representation is a necessity for producing readable time-frequency images. Denoising is defined as the operation of smoothing a noisy signal or image to produce a noise-free representation. Linear smoothing of time-frequency distributions (TFDs) suppresses noise at the expense of considerable smearing of the signal components. For this reason, nonlinear denoising has been preferred; a common example of a nonlinear denoising method is wavelet thresholding. In this paper, we introduce an entropy-based approach to denoising time-frequency distributions. This new approach uses the spectrogram decomposition of time-frequency kernels proposed by Cunningham and Williams. In order to denoise the time-frequency distribution, we combine the spectrograms with the smallest entropy values, thus ensuring that each spectrogram is well concentrated on the time-frequency plane and contains as little noise as possible. Renyi entropy is used as the measure to quantify the complexity of each spectrogram. The threshold for the number of spectrograms to combine is chosen adaptively based on the trade-off between entropy and variance. The denoised time-frequency distributions for several signals are shown to demonstrate the effectiveness of the method, and the improvement in performance is quantitatively evaluated.
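
    A sketch of the entropy measure itself: the order-3 Renyi entropy of a spectrogram treated as a two-dimensional distribution (SciPy assumed; the order and the spectrogram settings are illustrative).

      import numpy as np
      from scipy.signal import spectrogram

      def renyi_entropy(x, fs, alpha=3):
          """Order-alpha Renyi entropy of the spectrogram of x."""
          _, _, S = spectrogram(x, fs=fs)
          P = S / S.sum()                     # normalize to a distribution
          return np.log2(np.sum(P ** alpha)) / (1.0 - alpha)

    Candidate spectrograms would then be ranked by this value and the smallest-entropy ones combined, as described above.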

  4. Denoising-enhancing images on elastic manifolds.

    PubMed

    Ratner, Vadim; Zeevi, Yehoshua Y

    2011-08-01

    The conflicting demands for simultaneous low-pass and high-pass processing, required in image denoising and enhancement, still present an outstanding challenge, although a great deal of progress has been made by means of adaptive diffusion-type algorithms. To further advance such processing methods and algorithms, we introduce a family of second-order (in time) partial differential equations. These equations describe the motion of a thin elastic sheet in a damping environment. They are also derived by a variational approach in the context of image processing. The new operator enables better edge preservation in denoising applications by offering an adaptive lowpass filter, which preserves high-frequency components in the pass-band better than the adaptive diffusion filter, while offering slower error propagation across edges. We explore the action of this powerful operator in the context of image processing and exploit for this purpose the wealth of knowledge accumulated in physics and mathematics about the action and behavior of this operator. The resulting methods are further generalized for color and/or texture image processing, by embedding images in multidimensional manifolds. A specific application of the proposed new approach to superresolution is outlined. PMID:21342847

  5. Optimal wavelet denoising for smart biomonitor systems

    NASA Astrophysics Data System (ADS)

    Messer, Sheila R.; Agzarian, John; Abbott, Derek

    2001-03-01

    Future smart-systems promise many benefits for biomedical diagnostics. The ideal is for simple portable systems that display and interpret information from smart integrated probes or MEMS-based devices. In this paper, we will discuss a step towards this vision with a heart bio-monitor case study. An electronic stethoscope is used to record heart sounds and the problem of extracting noise from the signal is addressed via the use of wavelets and averaging. In our example of heartbeat analysis, phonocardiograms (PCGs) have many advantages in that they may be replayed and analysed for spectral and frequency information. Many sources of noise may pollute a PCG including foetal breath sounds if the subject is pregnant, lung and breath sounds, environmental noise and noise from contact between the recording device and the skin. Wavelets can be employed to denoise the PCG. The signal is decomposed by a discrete wavelet transform. Due to the efficient decomposition of heart signals, their wavelet coefficients tend to be much larger than those due to noise. Thus, coefficients below a certain level are regarded as noise and are thresholded out. The signal can then be reconstructed without significant loss of information in the signal. The questions that this study attempts to answer are which wavelet families, levels of decomposition, and thresholding techniques best remove the noise in a PCG. The use of averaging in combination with wavelet denoising is also addressed. Possible applications of the Hilbert Transform to heart sound analysis are discussed.

  6. Denoising solar radiation data using coiflet wavelets

    SciTech Connect

    Karim, Samsul Ariffin Abdul Janier, Josefina B. Muthuvalu, Mohana Sundaram; Hasan, Mohammad Khatim; Sulaiman, Jumat; Ismail, Mohd Tahir

    2014-10-24

    Signal denoising and smoothing play an important role in processing signals obtained from experiments or from data collection through observation. Collected data are usually a mixture of the true signal and noise, which may originate from the measurement apparatus or from human error in handling the data. Normally, before the data are used for further processing, this unwanted noise needs to be filtered out, and one of the efficient methods for doing so is the wavelet transform. Because the received solar radiation data fluctuate over time, they contain unwanted oscillations, namely noise, which must be filtered out before the data are used to develop a mathematical model. In order to apply wavelet-transform (WT) denoising, thresholding values need to be calculated. In this paper a new thresholding approach is proposed. The coiflet2 wavelet (with four vanishing moments) is utilized for our purpose. The numerical results show clearly that the new thresholding approach gives better results than the existing approach, namely the global thresholding value.

  7. Fault Detection of a Roller-Bearing System through the EMD of a Wavelet Denoised Signal

    PubMed Central

    Ahn, Jong-Hyo; Kwak, Dae-Ho; Koh, Bong-Hwan

    2014-01-01

    This paper investigates fault detection of a roller-bearing system using a wavelet denoising scheme and the proper orthogonal values (POVs) of an intrinsic mode function (IMF) covariance matrix. The IMFs of the bearing vibration signal are obtained through empirical mode decomposition (EMD). A signal-screening process in the wavelet domain eliminates noise-corrupted portions that may lead to inaccurate prognosis of bearing conditions. We segmented the denoised bearing signal into several intervals and decomposed each of them into IMFs. The first IMF of each segment is collected to form a covariance matrix for calculating the POVs. We show that covariance matrices from healthy and damaged bearings exhibit different POV profiles, which can serve as a damage-sensitive feature. We also illustrate the conventional approach to feature extraction, observing the kurtosis value of the measured signal, to compare with the functionality of the proposed technique. The study demonstrates the feasibility of wavelet-based denoising, and shows through laboratory experiments that tracking the proper orthogonal values of the covariance matrix of the IMFs can be an effective and reliable measure for monitoring bearing faults. PMID:25196008
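
    A sketch of the damage feature: proper orthogonal values are the eigenvalues of the covariance matrix built from the first IMF of each segment (the PyEMD package, the segment count, and the length equalization are assumptions).

      import numpy as np
      from PyEMD import EMD  # the "EMD-signal" package on PyPI

      def proper_orthogonal_values(signal, n_segments=8):
          """Eigenvalues (POVs) of the covariance of first-IMF segments."""
          segments = np.array_split(signal, n_segments)
          first_imfs = [EMD().emd(seg)[0] for seg in segments]
          n = min(len(imf) for imf in first_imfs)   # equalize lengths
          X = np.vstack([imf[:n] for imf in first_imfs])
          return np.sort(np.linalg.eigvalsh(np.cov(X)))[::-1]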

  8. Inferring gene function from evolutionary change in signatures of translation efficiency

    PubMed Central

    2014-01-01

    Background The genetic code is redundant, meaning that most amino acids can be encoded by more than one codon. Highly expressed genes tend to use optimal codons to increase the accuracy and speed of translation. Thus, codon usage biases provide a signature of the relative expression levels of genes, which can, uniquely, be quantified across the domains of life. Results Here we describe a general statistical framework to exploit this phenomenon and to systematically associate genes with environments and phenotypic traits through changes in codon adaptation. By inferring evolutionary signatures of translation efficiency in 911 bacterial and archaeal genomes while controlling for confounding effects of phylogeny and inter-correlated phenotypes, we linked 187 gene families to 24 diverse phenotypic traits. A series of experiments in Escherichia coli revealed that 13 of 15, 19 of 23, and 3 of 6 gene families with changes in codon adaptation in aerotolerant, thermophilic, or halophilic microbes, respectively, confer specific resistance to hydrogen peroxide, heat, and high salinity. Further, we demonstrate experimentally that changes in codon optimality alone are sufficient to enhance stress resistance. Finally, we present evidence that multiple genes with altered codon optimality in aerobes confer oxidative stress resistance by controlling the levels of iron and NAD(P)H. Conclusions Taken together, these results provide experimental evidence for a widespread connection between changes in translation efficiency and phenotypic adaptation. As the number of sequenced genomes increases, this novel genomic context method for linking genes to phenotypes based on sequence alone will become increasingly useful. PMID:24580753

  9. Effect of taxonomic resolution on ecological and palaeoecological inference - a test using testate amoeba water table depth transfer functions

    NASA Astrophysics Data System (ADS)

    Mitchell, Edward A. D.; Lamentowicz, Mariusz; Payne, Richard J.; Mazei, Yuri

    2014-05-01

    Sound taxonomy is a major requirement for quantitative environmental reconstruction using biological data. Transfer function performance should theoretically be expected to decrease with reduced taxonomic resolution. However, for many groups of organisms taxonomy is imperfect and species-level identification is not always possible. We conducted numerical experiments on five testate amoeba water table depth (DWT) transfer function data sets. We sequentially reduced the number of taxonomic groups by successively merging morphologically similar species and removing inconspicuous species. We then assessed how these changes affected model performance and palaeoenvironmental reconstruction using two fossil data sets. Model performance decreased with decreasing taxonomic resolution, but this had only limited effects on the patterns of inferred DWT, at least for detecting major dry/wet shifts. Higher-resolution taxonomy may, however, still be useful for detecting more subtle changes, or for reconstructed shifts to be significant.

  10. A Genome-Scale Investigation of How Sequence, Function, and Tree-Based Gene Properties Influence Phylogenetic Inference

    PubMed Central

    Shen, Xing-Xing; Salichos, Leonidas; Rokas, Antonis

    2016-01-01

    Molecular phylogenetic inference is inherently dependent on choices in both methodology and data. Many insightful studies have shown how choices in methodology, such as the model of sequence evolution or optimality criterion used, can strongly influence inference. In contrast, much less is known about the impact of choices in the properties of the data, typically genes, on phylogenetic inference. We investigated the relationships between 52 gene properties (24 sequence-based, 19 function-based, and 9 tree-based) with each other and with three measures of phylogenetic signal in two assembled data sets of 2,832 yeast and 2,002 mammalian genes. We found that most gene properties, such as evolutionary rate (measured through the percent average of pairwise identity across taxa) and total tree length, were highly correlated with each other. Similarly, several gene properties, such as gene alignment length, Guanine-Cytosine content, and the proportion of tree distance on internal branches divided by relative composition variability (treeness/RCV), were strongly correlated with phylogenetic signal. Analysis of partial correlations between gene properties and phylogenetic signal, in which gene evolutionary rate and alignment length were simultaneously controlled, showed similar patterns of correlations, albeit weaker in strength. Examination of the relative importance of each gene property on phylogenetic signal identified gene alignment length, along with the number of parsimony-informative sites and variable sites, as the most important predictors. Interestingly, the subsets of gene properties that optimally predicted phylogenetic signal differed considerably across our three phylogenetic measures and two data sets; however, gene alignment length and RCV were consistently included as predictors of all three phylogenetic measures in both yeasts and mammals. These results suggest that a handful of sequence-based gene properties are reliable predictors of phylogenetic signal.

  11. Tectonomagmatic origin of Precambrian rocks of Mexico and Argentina inferred from multi-dimensional discriminant-function based discrimination diagrams

    NASA Astrophysics Data System (ADS)

    Pandarinath, Kailasa

    2014-12-01

    Several new multi-dimensional tectonomagmatic discrimination diagrams, employing log-ratio variables of chemical elements and a probability-based procedure, have been developed during the last 10 years for basic-ultrabasic, intermediate and acid igneous rocks. Numerous extensive evaluations of these newly developed diagrams have indicated their successful application in determining the original tectonic setting of younger and older, as well as seawater- and hydrothermally-altered, volcanic rocks. In the present study, these diagrams were applied to Precambrian rocks of Mexico (southern and north-eastern) and Argentina. The study indicated the original tectonic setting of Precambrian rocks from the Oaxaca Complex of southern Mexico as follows: (1) a dominant rift (within-plate) setting for rocks of 1117-988 Ma age; (2) a dominant rift and less-dominant arc setting for rocks of 1157-1130 Ma age; and (3) a combined tectonic setting of collision and rift for the Etla Granitoid Pluton (917 Ma age). The diagrams indicated the original tectonic setting of the Precambrian rocks from north-eastern Mexico as: (1) a dominant arc setting for the rocks of 988 Ma age; and (2) an arc and collision setting for the rocks of 1200-1157 Ma age. Similarly, the diagrams indicated the dominant original tectonic setting for the Precambrian rocks from Argentina as: (1) a within-plate (continental rift-ocean island) and continental rift (CR) setting for the rocks of 800 Ma and 845 Ma age, respectively; and (2) an arc setting for the rocks of 1174-1169 Ma and 1212-1188 Ma age. The inferred tectonic settings for these Precambrian rocks are, in general, in accordance with the tectonic settings reported in the literature, though some diagrams yield inconsistent inferences of tectonic setting. The present study confirms the importance of these newly developed discriminant-function-based diagrams in inferring the original tectonic setting of

  12. A Genome-Scale Investigation of How Sequence, Function, and Tree-Based Gene Properties Influence Phylogenetic Inference.

    PubMed

    Shen, Xing-Xing; Salichos, Leonidas; Rokas, Antonis

    2016-01-01

    Molecular phylogenetic inference is inherently dependent on choices in both methodology and data. Many insightful studies have shown how choices in methodology, such as the model of sequence evolution or optimality criterion used, can strongly influence inference. In contrast, much less is known about the impact of choices in the properties of the data, typically genes, on phylogenetic inference. We investigated the relationships between 52 gene properties (24 sequence-based, 19 function-based, and 9 tree-based) with each other and with three measures of phylogenetic signal in two assembled data sets of 2,832 yeast and 2,002 mammalian genes. We found that most gene properties, such as evolutionary rate (measured through the percent average of pairwise identity across taxa) and total tree length, were highly correlated with each other. Similarly, several gene properties, such as gene alignment length, Guanine-Cytosine content, and the proportion of tree distance on internal branches divided by relative composition variability (treeness/RCV), were strongly correlated with phylogenetic signal. Analysis of partial correlations between gene properties and phylogenetic signal, in which gene evolutionary rate and alignment length were simultaneously controlled, showed similar patterns of correlations, albeit weaker in strength. Examination of the relative importance of each gene property on phylogenetic signal identified gene alignment length, along with the number of parsimony-informative sites and variable sites, as the most important predictors. Interestingly, the subsets of gene properties that optimally predicted phylogenetic signal differed considerably across our three phylogenetic measures and two data sets; however, gene alignment length and RCV were consistently included as predictors of all three phylogenetic measures in both yeasts and mammals. These results suggest that a handful of sequence-based gene properties are reliable predictors of phylogenetic signal.

  13. Bayesian inverse modeling of vadose zone hydraulic properties in a layered soil profile with data-driven likelihood function inference

    NASA Astrophysics Data System (ADS)

    Over, M. W.; Wollschlaeger, U.; Osorio-Murillo, C. A.; Ames, D. P.; Rubin, Y.

    2013-12-01

    Good estimates for water retention and hydraulic conductivity functions are essential for accurate modeling of the nonlinear water dynamics of unsaturated soils. Parametric mathematical models for these functions are utilized in numerical applications of vadose zone dynamics; therefore, characterization of the model parameters to represent in situ soil properties is the goal of many inversion or calibration techniques. A critical, statistical challenge of existing approaches is the subjective, user-definition of a likelihood function or objective function - a step known to introduce bias in the results. We present a methodology for Bayesian inversion where the likelihood function is inferred directly from the simulation data, which eliminates subjectivity. Additionally, our approach assumes that there is no one parameterization that is appropriate for soils, but rather that the parameters are randomly distributed. This introduces the familiar concept from groundwater hydrogeology of structural models into vadose zone applications, but without attempting to apply geostatistics, which is extremely difficult in unsaturated problems. We validate our robust statistical approach on field data obtained during a multi-layer, natural boundary condition experiment and compare with previous optimizations using the same data. Our confidence intervals for the water retention and hydraulic conductivity functions as well as joint posterior probability distributions of the Mualem-van Genuchten parameters compare well with the previous work. The entire analysis was carried out using the free, open-source MAD# software available at http://mad.codeplex.com/.
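
    For reference, the Mualem-van Genuchten parameterization whose parameters are inverted above can be written compactly as follows (a generic sketch; parameter values in any call are illustrative).

      import numpy as np

      def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
          """Water retention theta(h) for suction head h > 0."""
          m = 1.0 - 1.0 / n
          se = (1.0 + (alpha * h) ** n) ** (-m)    # effective saturation
          return theta_r + (theta_s - theta_r) * se

      def mualem_conductivity(se, k_s, n):
          """Mualem-van Genuchten conductivity from effective saturation."""
          m = 1.0 - 1.0 / n
          return k_s * np.sqrt(se) * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2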

  14. Inferring Functional Interaction and Transition Patterns via Dynamic Bayesian Variable Partition Models

    PubMed Central

    Zhang, Jing; Li, Xiang; Li, Cong; Lian, Zhichao; Huang, Xiu; Zhong, Guocheng; Zhu, Dajiang; Li, Kaiming; Jin, Changfeng; Hu, Xintao; Han, Junwei; Guo, Lei; Hu, Xiaoping; Li, Lingjiang; Liu, Tianming

    2014-01-01

    Multivariate connectivity and functional dynamics have been of wide interest in the neuroimaging field, and a variety of methods have been developed to study functional interactions and dynamics. In contrast, the temporal dynamic transitions of multivariate functional interactions among brain networks, in particular in the resting state, have been much less explored. This paper presents a novel dynamic Bayesian variable partition model (DBVPM) that simultaneously considers and models multivariate functional interactions and their dynamics via a unified Bayesian framework. The basic idea is to detect the temporal boundaries of piecewise quasi-stable functional interaction patterns, which are then modeled by representative signature patterns and whose temporal transitions are characterized by finite-state transition machines. Results on both simulated and experimental datasets demonstrated the effectiveness and accuracy of the DBVPM in delineating temporally transitioning functional interaction patterns. The application of DBVPM to a post-traumatic stress disorder (PTSD) dataset revealed substantially different multivariate functional interaction signatures and temporal transitions in the default mode and emotion networks of PTSD patients, in comparison with those in healthy controls. This result demonstrates the utility of DBVPM in elucidating salient features that cannot be revealed by static pair-wise functional connectivity analysis. PMID:24222313

  15. INFERRING FUNCTIONAL NETWORK-BASED SIGNATURES VIA STRUCTURALLY-WEIGHTED LASSO MODEL

    PubMed Central

    Zhu, Dajiang; Shen, Dinggang; Liu, Tianming

    2014-01-01

    Most current research approaches for functional/effective connectivity analysis focus on pair-wise connectivity and cannot deal with network-scale functional interactions. In this paper, we propose a structurally-weighted LASSO (SW-LASSO) regression model to represent the functional interaction among multiple regions of interest (ROIs) based on resting-state fMRI (R-fMRI) data. Structural connectivity constraints derived from diffusion tensor imaging (DTI) data guide the selection of the weights, which adjust the penalty levels of the different coefficients corresponding to different ROIs. Using the Default Mode Network (DMN) as a test-bed, our results indicate that the learned SW-LASSO has a good capability of differentiating Mild Cognitive Impairment (MCI) subjects from their normal controls and has promising potential to characterize brain function among different conditions, thus serving as a functional network-based signature. PMID:25002915
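
    A structurally weighted LASSO can be reduced to a standard LASSO by rescaling columns, as in this sketch (scikit-learn assumed; the random weights stand in for the paper's DTI-derived weights).

      import numpy as np
      from sklearn.linear_model import Lasso

      def weighted_lasso(X, y, weights, alpha=0.1):
          """min ||y - Xb||^2 + alpha * sum_j w_j |b_j|: dividing column j
          by w_j turns the weighted penalty into a standard one."""
          Xw = X / weights                    # weights: shape (n_features,)
          model = Lasso(alpha=alpha, fit_intercept=False).fit(Xw, y)
          return model.coef_ / weights

      rng = np.random.default_rng(1)
      X = rng.normal(size=(100, 10))
      y = X[:, 0] - 2 * X[:, 3] + rng.normal(scale=0.1, size=100)
      w = rng.uniform(0.5, 2.0, size=10)      # stand-in for DTI weights
      beta = weighted_lasso(X, y, w)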

  16. A new method for mobile phone image denoising

    NASA Astrophysics Data System (ADS)

    Jin, Lianghai; Jin, Min; Li, Xiang; Xu, Xiangyang

    2015-12-01

    Images captured by mobile phone cameras via pipeline processing usually contain various kinds of noise, especially granular noise with different shapes and sizes in both the luminance and chrominance channels. In the chrominance channels, noise is closely related to image brightness. To improve image quality, this paper presents a new method for denoising such mobile phone images. The proposed scheme converts the noisy RGB image to luminance and chrominance images, which are then denoised by a common filtering framework. The framework processes a noisy pixel by first excluding the neighborhood pixels that significantly deviate from the (vector) median and then utilizing the remaining neighborhood pixels to restore the current pixel. In the framework, the strength of chrominance-image denoising is controlled by image brightness. The experimental results show that the proposed method clearly outperforms other representative denoising methods in terms of both objective measures and visual evaluation.
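
    A toy version of such a common filtering framework, with the median-deviation exclusion step (the rejection quantile is an assumption, and the paper's brightness-adaptive strength control is not reproduced):

      import numpy as np

      def vector_median_denoise(img, radius=1, keep_frac=0.5):
          """img: float array (H, W, C). At each pixel, find the vector
          median of the window, drop the neighbors farthest from it, and
          average the rest."""
          H, W, C = img.shape
          out = img.copy()
          for i in range(radius, H - radius):
              for j in range(radius, W - radius):
                  win = img[i - radius:i + radius + 1,
                            j - radius:j + radius + 1].reshape(-1, C)
                  d = np.linalg.norm(win[:, None] - win[None, :], axis=2).sum(1)
                  vm = win[np.argmin(d)]                  # vector median
                  dist = np.linalg.norm(win - vm, axis=1)
                  keep = win[dist <= np.quantile(dist, keep_frac)]
                  out[i, j] = keep.mean(axis=0)
          return out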

  17. Denoising time-domain induced polarisation data using wavelet techniques

    NASA Astrophysics Data System (ADS)

    Deo, Ravin N.; Cull, James P.

    2016-05-01

    Time-domain induced polarisation (TDIP) methods are routinely used for near-surface evaluations in quasi-urban environments harbouring networks of buried civil infrastructure. A conventional technique for improving the signal-to-noise ratio in such environments is analogue or digital low-pass filtering followed by stacking and rectification; however, this induces large distortions in the processed data. In this study, we have conducted the first application of wavelet-based denoising techniques to the processing of raw TDIP data. Our investigation included laboratory and field measurements to better understand the advantages and limitations of this technique. It was found that the distortions arising from conventional filtering can be largely avoided with the use of wavelet-based denoising. With recent advances in full-waveform acquisition and analysis, the incorporation of wavelet denoising can further enhance surveying capabilities. In this work, we present the rationale for utilising wavelet denoising methods and discuss some important implications, which can positively influence TDIP methods.

  18. Denoising of chaotic signal using independent component analysis and empirical mode decomposition with circulate translating

    NASA Astrophysics Data System (ADS)

    Wen-Bo, Wang; Xiao-Dong, Zhang; Yuchan, Chang; Xiang-Li, Wang; Zhao, Wang; Xi, Chen; Lei, Zheng

    2016-01-01

    In this paper, a new method to reduce the noise within chaotic signals, based on independent component analysis (ICA) and empirical mode decomposition (EMD), is proposed. The basic idea is first to decompose the chaotic signal and construct multidimensional input vectors on the basis of EMD and its translation invariance; secondly, to perform independent component analysis on the input vectors, which amounts to a self-adapting denoising of the intrinsic mode functions (IMFs) of the chaotic signal; and finally, to compose the denoised chaotic signal from the IMFs. Experiments were carried out on a Lorenz chaotic signal contaminated with different levels of Gaussian noise and on the monthly observed chaotic sunspot sequence. The results show that the proposed method is effective in denoising chaotic signals. Moreover, it can correct the center point in the phase space effectively, bringing it closer to the real track of the chaotic attractor. Project supported by the National Science and Technology, China (Grant No. 2012BAJ15B04), the National Natural Science Foundation of China (Grant Nos. 41071270 and 61473213), the Natural Science Foundation of Hubei Province, China (Grant No. 2015CFB424), the State Key Laboratory Foundation of Satellite Ocean Environment Dynamics, China (Grant No. SOED1405), the Hubei Provincial Key Laboratory Foundation of Metallurgical Industry Process System Science, China (Grant No. Z201303), and the Hubei Key Laboratory Foundation of Transportation Internet of Things, Wuhan University of Technology, China (Grant No. 2015III015-B02).
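
    A simplified sketch of the EMD-then-ICA idea (PyEMD and scikit-learn assumed); the circulate-translating construction is omitted, and the rule used here for flagging noisy components, low lag-1 autocorrelation, is an assumption rather than the paper's criterion.

      import numpy as np
      from PyEMD import EMD                      # "EMD-signal" on PyPI
      from sklearn.decomposition import FastICA

      def emd_ica_denoise(x, autocorr_min=0.9):
          """Decompose into IMFs, unmix with ICA, zero noise-like
          components, and rebuild the signal."""
          imfs = EMD().emd(x)                    # shape (n_imfs, len(x))
          ica = FastICA(n_components=imfs.shape[0], random_state=0)
          sources = ica.fit_transform(imfs.T)    # (len(x), n_imfs)
          for k in range(sources.shape[1]):
              s = sources[:, k]
              if abs(np.corrcoef(s[:-1], s[1:])[0, 1]) < autocorr_min:
                  sources[:, k] = 0.0            # treat as noise
          return ica.inverse_transform(sources).T.sum(axis=0)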

  19. Wavelet-based denoising method for real phonocardiography signal recorded by mobile devices in noisy environment.

    PubMed

    Gradolewski, Dawid; Redlarski, Grzegorz

    2014-09-01

    The main obstacle in the development of intelligent auto-diagnosis medical systems based on the analysis of phonocardiography (PCG) signals is noise. The noise can be caused by digestive and respiration sounds, movements, or even signals from the surrounding environment, and it is characterized by a wide frequency and intensity spectrum. This spectrum overlaps the heart-tone spectrum, which makes the problem of PCG signal filtering complex. The most common methods for filtering such signals are wavelet denoising algorithms. In previous studies, in order to determine the optimum wavelet denoising parameters, the disturbances were simulated by Gaussian white noise. However, this paper shows that such noise has a variable character. Therefore, the purpose of this paper is the adaptation of a wavelet denoising algorithm for the filtration of real PCG signal disturbances from signals recorded by mobile devices in a noisy environment. The best results were obtained for the coif5 wavelet at the 10th decomposition level with the use of the minimaxi threshold selection algorithm and the mln rescaling function. The performance of the algorithm was tested on four pathological heart sounds: early systolic murmur, ejection click, late systolic murmur and pansystolic murmur. PMID:25038586

  20. Twofold processing for denoising ultrasound medical images.

    PubMed

    Kishore, P V V; Kumar, K V V; Kumar, D Anil; Prasad, M V D; Goutham, E N D; Rahul, R; Krishna, C B S Vamsi; Sandeep, Y

    2015-01-01

    Ultrasound (US) medical imaging non-invasively pictures the inside of the human body for disease diagnostics. Speckle noise attacks ultrasound images, degrading their visual quality. A twofold processing algorithm is proposed in this work to reduce this multiplicative speckle noise. The first fold applies block-based thresholding, both hard (BHT) and soft (BST), to pixels in the wavelet domain with 8, 16, 32 and 64 non-overlapping block sizes. This first-fold process reduces speckle well but also blurs the object of interest. The second-fold process then restores object boundaries and texture with adaptive wavelet fusion: the degraded object in the block-thresholded US image is restored through wavelet-coefficient fusion of the object in the original US image and in the block-thresholded US image. Fusion rules and wavelet decomposition levels are made adaptive for each block using gradient histograms with the normalized differential mean (NDF), to introduce the highest level of contrast between the denoised pixels and the object pixels in the resultant image. The proposed twofold methods are thus named adaptive NDF block fusion with hard and soft thresholding (ANBF-HT and ANBF-ST). The results indicate a visually interesting quality improvement with the proposed twofold processing, where the first fold removes noise and the second fold restores object properties. Peak signal-to-noise ratio (PSNR), normalized cross-correlation coefficient (NCC), edge strength (ES), image quality index (IQI) and structural similarity index (SSIM) measure the quantitative quality of the twofold processing technique. The proposed method is validated by comparison with anisotropic diffusion (AD), total variational filtering (TVF) and empirical mode decomposition (EMD) for the enhancement of US images. The US images were provided by the AMMA hospital radiology labs at Vijayawada, India. PMID:26697285

  1. Denoising Two-Photon Calcium Imaging Data

    PubMed Central

    Malik, Wasim Q.; Schummers, James; Sur, Mriganka; Brown, Emery N.

    2011-01-01

    Two-photon calcium imaging is now an important tool for in vivo imaging of biological systems. By enabling neuronal population imaging with subcellular resolution, this modality offers an approach for gaining a fundamental understanding of brain anatomy and physiology. Proper analysis of calcium imaging data requires denoising, that is, separating the signal from complex physiological noise. To analyze two-photon brain imaging data, we present a signal-plus-colored-noise model in which the signal is represented as a harmonic regression and the correlated noise as an autoregressive process. We provide an efficient cyclic descent algorithm to compute approximate maximum likelihood parameter estimates by combining a weighted least-squares procedure with the Burg algorithm. We use the Akaike information criterion to guide selection of the harmonic regression and autoregressive model orders. Our flexible yet parsimonious modeling approach reliably separates stimulus-evoked fluorescence response from background activity and noise, assesses goodness of fit, and estimates confidence intervals and signal-to-noise ratio. This refined separation leads to appreciably enhanced image contrast for individual cells, including clear delineation of subcellular details and network activity. The application of our approach to in vivo imaging data recorded in the ferret primary visual cortex demonstrates that our method yields substantially denoised signal estimates. We also provide a general Volterra series framework for deriving this and other signal-plus-correlated-noise models for imaging. This approach to analyzing two-photon calcium imaging data may be readily adapted to other computational biology problems which apply correlated noise models. PMID:21687727
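
    A one-pass simplification of the model (the paper's cyclic descent with weighted least squares, the Burg algorithm, and AIC-guided order selection are replaced here by ordinary least squares and the Yule-Walker equations):

      import numpy as np
      from scipy.linalg import toeplitz

      def harmonic_plus_ar(y, fs, f0, n_harmonics=2, ar_order=2):
          """Least-squares harmonic regression at fundamental f0, then
          Yule-Walker AR coefficients fitted to the residuals."""
          t = np.arange(len(y)) / fs
          cols = [np.ones_like(t)]
          for k in range(1, n_harmonics + 1):
              cols += [np.cos(2 * np.pi * k * f0 * t),
                       np.sin(2 * np.pi * k * f0 * t)]
          X = np.column_stack(cols)
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)
          resid = y - X @ beta
          r = np.array([resid[: len(resid) - k] @ resid[k:]
                        for k in range(ar_order + 1)]) / len(resid)
          ar = np.linalg.solve(toeplitz(r[:ar_order]), r[1 : ar_order + 1])
          return beta, ar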

  2. Denoising, deconvolving, and decomposing photon observations. Derivation of the D3PO algorithm

    NASA Astrophysics Data System (ADS)

    Selig, Marco; Enßlin, Torsten A.

    2015-02-01

    The analysis of astronomical images is a non-trivial task. The D3PO algorithm addresses the inference problem of denoising, deconvolving, and decomposing photon observations. Its primary goal is the simultaneous but individual reconstruction of the diffuse and point-like photon flux given a single photon count image, where the fluxes are superimposed. In order to discriminate between these morphologically different signal components, a probabilistic algorithm is derived in the language of information field theory based on a hierarchical Bayesian parameter model. The signal inference exploits prior information on the spatial correlation structure of the diffuse component and the brightness distribution of the spatially uncorrelated point-like sources. A maximum a posteriori solution and a solution minimizing the Gibbs free energy of the inference problem using variational Bayesian methods are discussed. Since the derivation of the solution is not dependent on the underlying position space, the implementation of the D3PO algorithm uses the nifty package to ensure applicability to various spatial grids and at any resolution. The fidelity of the algorithm is validated by the analysis of simulated data, including a realistic high energy photon count image showing a 32 × 32 arcmin² observation with a spatial resolution of 0.1 arcmin. In all tests the D3PO algorithm successfully denoised, deconvolved, and decomposed the data into a diffuse and a point-like signal estimate for the respective photon flux components. A copy of the code is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/574/A74

  3. Ecological Inference

    NASA Astrophysics Data System (ADS)

    King, Gary; Rosen, Ori; Tanner, Martin A.

    2004-09-01

    This collection of essays brings together a diverse group of scholars to survey the latest strategies for solving ecological inference problems in various fields. The last half-decade has witnessed an explosion of research in ecological inference--the process of trying to infer individual behavior from aggregate data. Although uncertainties and information lost in aggregation make ecological inference one of the most problematic types of research to rely on, these inferences are required in many academic fields, as well as by legislatures and the Courts in redistricting, by business in marketing research, and by governments in policy analysis.

  4. Preliminary Results of the Lithospheric Structure Beneath the Aeolian Archipelago (Italy) Inferred from Teleseismic Receiver Functions

    NASA Astrophysics Data System (ADS)

    Musumeci, C.; Martinez-Arevalo, C.; de Lis Mancilla, F.; Patanè, D.

    2009-12-01

    The Aeolian archipelago (Italy) represents an approximately one-million-year-old volcanic arc related to the subduction of the Ionian oceanic plate beneath the Calabrian continental crust. The objective of this work is to develop a better understanding of the regional structure of the whole archipelago. The crustal structure under each station was obtained by applying the P-receiver function technique to the teleseismic P-coda data recorded by the broadband seismic network (10 stations) installed by the Istituto Nazionale di Geofisica e Vulcanologia (INGV-CT). Receiver functions were computed using the Extended-Time Multitaper Frequency Domain Cross-Correlation Receiver Function (ET-MTRF) method. The preliminary results suggest a very similar lithospheric structure below all the islands of the Aeolian archipelago, with the exception of Stromboli. The boundary between the subducting oceanic crust of the Ionian plate and the Tyrrhenian mantle is clearly observed below all the stations.

  5. Phylogenetic Gaussian Process Model for the Inference of Functionally Important Regions in Protein Tertiary Structures

    PubMed Central

    Huang, Yi-Fei; Golding, G. Brian

    2014-01-01

    A critical question in biology is the identification of functionally important amino acid sites in proteins. Because functionally important sites are under stronger purifying selection, site-specific substitution rates tend to be lower than usual at these sites. A large number of phylogenetic models have been developed to estimate site-specific substitution rates in proteins and the extraordinarily low substitution rates have been used as evidence of function. Most of the existing tools, e.g. Rate4Site, assume that site-specific substitution rates are independent across sites. However, site-specific substitution rates may be strongly correlated in the protein tertiary structure, since functionally important sites tend to be clustered together to form functional patches. We have developed a new model, GP4Rate, which incorporates the Gaussian process model with the standard phylogenetic model to identify slowly evolved regions in protein tertiary structures. GP4Rate uses the Gaussian process to define a nonparametric prior distribution of site-specific substitution rates, which naturally captures the spatial correlation of substitution rates. Simulations suggest that GP4Rate can potentially estimate site-specific substitution rates with a much higher accuracy than Rate4Site and tends to report slowly evolved regions rather than individual sites. In addition, GP4Rate can estimate the strength of the spatial correlation of substitution rates from the data. By applying GP4Rate to a set of mammalian B7-1 genes, we found a highly conserved region which coincides with experimental evidence. GP4Rate may be a useful tool for the in silico prediction of functionally important regions in the proteins with known structures. PMID:24453956

  6. STATISTICAL INFERENCE PROCEDURES FOR PROBABILITY SELECTION FUNCTIONS IN LONG-TERM MONITORING PROGRAMS

    EPA Science Inventory

    This report develops the theory and illustrates the use of selection functions to describe changes over time in the distributions of environmentally important variables at sites sampled as part of environmental monitoring programs. The first part of the report provides a review of...

  7. Microbial manipulation of immune function for asthma prevention: inferences from clinical trials.

    PubMed

    Yoo, Jennifer; Tcheurekdjian, Haig; Lynch, Susan V; Cabana, Michael; Boushey, Homer A

    2007-07-01

    The "hygiene hypothesis" proposes that the increase in allergic diseases in developing countries reflects a decrease in infections during childhood. Cohort studies suggest, however, that the risks of asthma are increased in children who suffer severe illness from a viral respiratory infection in infancy. This apparent inconsistency can be reconciled through consideration of epidemiologic, clinical, and animal studies. The elements of this line of reasoning are that viral infections can predispose to organ-specific expression of allergic sensitization, and that the severity of illness is shaped by the maturity of immune function, which in turn is influenced by previous contact with bacteria and viruses, whether pathogenic or not. Clinical studies of children and interventional studies of animals indeed suggest that the exposure to microbes through the gastrointestinal tract powerfully shapes immune function. Intestinal microbiota differ in infants who later develop allergic diseases, and feeding Lactobacillus casei to infants at risk has been shown to reduce their rate of developing eczema. This has prompted studies of feeding probiotics as a primary prevention strategy for asthma. We propose that the efficacy of this approach depends on its success in inducing maturation of immune function important in defense against viral infection, rather than on its effectiveness in preventing allergic sensitization. It follows that the endpoints of studies of feeding probiotics to infants at risk for asthma should include not simply tests of responsiveness to allergens, but also assessment of intestinal flora, immune function, and the clinical response to respiratory viral infection. PMID:17607013

  8. Pragmatic Inferences in High-Functioning Adults with Autism and Asperger Syndrome

    ERIC Educational Resources Information Center

    Pijnacker, Judith; Hagoort, Peter; Buitelaar, Jan; Teunisse, Jan-Pieter; Geurts, Bart

    2009-01-01

    Although people with autism spectrum disorders (ASD) often have severe problems with pragmatic aspects of language, little is known about their pragmatic reasoning. We carried out a behavioral study on high-functioning adults with autistic disorder (n = 11) and Asperger syndrome (n = 17) and matched controls (n = 28) to investigate whether they…

  9. Using Functional Behavioral Assessment Data to Infer Learning Histories and Guide Interventions: A Consultation Case Study

    ERIC Educational Resources Information Center

    Parker, Megan; Skinner, Christopher; Booher, Joshua

    2010-01-01

    A teacher requested behavioral consultation services to address a first-grade student's disruptive behavior. Functional behavior assessment (FBA) suggested the behavior was being reinforced by "negative" teacher attention (e.g., reprimands, redirections, response cost). Based on this analysis, the teacher and consultant posited that this student…

  10. Inference for the median residual life function in sequential multiple assignment randomized trials

    PubMed Central

    Kidwell, Kelley M.; Ko, Jin H.; Wahed, Abdus S.

    2014-01-01

    In survival analysis, the median residual lifetime is often used as a summary measure to assess treatment effectiveness; it is not clear, however, how such a quantity could be estimated for a given dynamic treatment regimen using data from sequential randomized clinical trials. We propose a method to estimate a dynamic treatment regimen-specific median residual life (MERL) function from sequential multiple assignment randomized trials. We present the MERL estimator, which is based on inverse probability weighting, as well as two variance estimates for the MERL estimator. One variance estimate follows from the 2002 survival-function-based variance estimate of Lunceford, Davidian and Tsiatis, and the other uses the sandwich estimator. The MERL estimator is evaluated, and its two variance estimates are compared, through simulation studies showing that the estimator and both variance estimates produce approximately unbiased results in large samples. To demonstrate our methods, the estimator has been applied to data from a sequentially randomized leukemia clinical trial. PMID:24254496

  11. PrOnto database: GO term functional dissimilarity inferred from biological data

    PubMed Central

    Chapple, Charles E.; Herrmann, Carl; Brun, Christine

    2015-01-01

    Moonlighting proteins are defined by their involvement in multiple, unrelated functions. The computational prediction of such proteins requires a formal method of assessing the similarity of cellular processes, for example, by identifying dissimilar Gene Ontology (GO) terms. While many measures of GO term similarity exist, most depend on abstract mathematical analyses of the structure of the GO tree and do not necessarily represent the underlying biology. Here, we propose two metrics of GO term functional dissimilarity derived from biological information, one based on protein annotations and the other on interactions between proteins. They have been collected in the PrOnto database, a novel tool which can be of particular use for the identification of moonlighting proteins. The database can be queried via a web-based interface which is freely available at http://tagc.univ-mrs.fr/pronto. PMID:26089836

  12. Simple Math is Enough: Two Examples of Inferring Functional Associations from Genomic Data

    NASA Technical Reports Server (NTRS)

    Liang, Shoudan

    2003-01-01

    Non-random features in genomic data are usually biologically meaningful. The key is to choose the feature well; having a p-value-based score prioritizes the findings. If two proteins share an unusually large number of common interaction partners, they tend to be involved in the same biological process. We used this finding to predict the functions of 81 unannotated proteins in yeast.
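
    The abstract does not name the null model; under the standard random-overlap (hypergeometric) assumption, a p-value for the shared-partner feature can be computed as follows (SciPy assumed; the example numbers are hypothetical).

      from scipy.stats import hypergeom

      def shared_partner_pvalue(n_total, deg_a, deg_b, shared):
          """P(X >= shared) for the overlap of two partner sets of sizes
          deg_a and deg_b drawn from n_total proteins."""
          return hypergeom.sf(shared - 1, n_total, deg_a, deg_b)

      # e.g. two proteins with 40 and 60 partners among ~6000 yeast proteins
      p = shared_partner_pvalue(6000, 40, 60, shared=10)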

  13. Inferring functional connectivity in MRI using Bayesian network structure learning with a modified PC algorithm.

    PubMed

    Iyer, Swathi P; Shafran, Izhak; Grayson, David; Gates, Kathleen; Nigg, Joel T; Fair, Damien A

    2013-07-15

    Resting state functional connectivity MRI (rs-fcMRI) is a popular technique used to gauge the functional relatedness between regions in the brain for typical and special populations. Most of the work to date determines this relationship by using Pearson's correlation on BOLD fMRI timeseries. However, it has been recognized that there are at least two key limitations to this method. First, it is not possible to resolve direct and indirect connections/influences. Second, the direction of information flow between the regions cannot be differentiated. In the current paper, we follow up on recent work by Smith et al. (2011) and apply the PC algorithm to both simulated data and empirical data to determine whether these two factors can be discerned with group-average, as opposed to single-subject, functional connectivity data. When applied to simulated individual subjects, the algorithm performs well in determining indirect and direct connections but fails in determining directionality. However, when applied at the group level, the PC algorithm gives strong results for both indirect and direct connections and for the direction of information flow. Applying the algorithm to empirical data, using a diffusion-weighted imaging (DWI) structural connectivity matrix as the baseline, the PC algorithm outperformed direct correlations. We conclude that, under certain conditions, the PC algorithm leads to an improved estimate of brain network structure compared to traditional connectivity analysis based on correlations. PMID:23501054
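
    The direct-versus-indirect distinction can be illustrated, far more simply than with the PC algorithm's conditional-independence tests and orientation rules, by partial correlations from the inverse covariance (a generic NumPy sketch, not the paper's method):

      import numpy as np

      def partial_correlations(timeseries):
          """timeseries: (n_timepoints, n_rois). Partial correlations keep
          direct dependencies and suppress indirect, shared-neighbor ones."""
          prec = np.linalg.pinv(np.cov(timeseries.T))
          d = np.sqrt(np.diag(prec))
          pcorr = -prec / np.outer(d, d)
          np.fill_diagonal(pcorr, 1.0)
          return pcorr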

  14. Epigenetic regulation of human placental function and pregnancy outcome: considerations for causal inference.

    PubMed

    Januar, Vania; Desoye, Gernot; Novakovic, Boris; Cvitic, Silvija; Saffery, Richard

    2015-10-01

    Epigenetic mechanisms, often defined as regulating gene activity independently of underlying DNA sequence, are crucial for healthy development. The sum total of epigenetic marks within a cell or tissue (the epigenome) is sensitive to environmental influence, and disruption of the epigenome in utero has been associated with adverse pregnancy outcomes. Not surprisingly, given its multifaceted functions and important role in regulating pregnancy outcome, the placenta shows unique epigenetic features. Interestingly however, many of these are only otherwise seen in human malignancy (the pseudomalignant placental epigenome). Epigenetic variation in the placenta is now emerging as a candidate mediator of environmental influence on placental functioning and a key regulator of pregnancy outcome. However, replication of findings is generally lacking, most likely due to small sample sizes and a lack of standardization of analytical approaches. Defining DNA methylation "signatures" in the placenta associated with maternal and fetal outcomes offers tremendous potential to improve pregnancy outcomes, but care must be taken in interpretation of findings. Future placental epigenetic research would do well to address the issues present in epigenetic epidemiology more generally, including careful consideration of sample size, potentially confounding factors, issues of tissue heterogeneity, reverse causation, and the role of genetics in modulating epigenetic profile. The importance of animal or in vitro models in establishing a functional role of epigenetic variation identified in human beings, which is key to establishing causation, should not be underestimated. PMID:26428498

  15. The luminosity function at z ∼ 8 from 97 Y-band dropouts: Inferences about reionization

    SciTech Connect

    Schmidt, Kasper B.; Treu, Tommaso; Kelly, Brandon C.; Trenti, Michele; Bradley, Larry D.; Stiavelli, Massimo; Oesch, Pascal A.; Shull, J. Michael

    2014-05-01

    We present the largest search to date for Y-band dropout galaxies (z ∼ 8 Lyman break galaxies, LBGs) based on 350 arcmin² of Hubble Space Telescope observations in the V, Y, J, and H bands from the Brightest of Reionizing Galaxies (BoRG) survey. In addition to previously published data, the BoRG13 data set presented here includes approximately 50 arcmin² of new data and deeper observations of two previous BoRG pointings, from which we present 9 new z ∼ 8 LBG candidates, bringing the total number of BoRG Y-band dropouts to 38 with 25.5 ≤ m_J ≤ 27.6 (AB system). We introduce a new Bayesian formalism for estimating the galaxy luminosity function, which does not require binning (and thus smearing) of the data and includes a likelihood based on the formally correct binomial distribution as opposed to the often-used approximate Poisson distribution. We demonstrate the utility of the new method on a sample of 97 Y-band dropouts that combines the bright BoRG galaxies with the fainter sources published in Bouwens et al. from the Hubble Ultra Deep Field and Early Release Science programs. We show that the z ∼ 8 luminosity function is well described by a Schechter function over its full dynamic range, with a characteristic magnitude M⋆ = −20.15 (+0.29/−0.38), a faint-end slope α = −1.87 (±0.26), and a number density log10 φ⋆ [Mpc⁻³] = −3.24 (+0.25/−0.24). Integrated down to M = −17.7, this luminosity function yields a luminosity density log10 ε [erg s⁻¹ Hz⁻¹ Mpc⁻³] = 25.52 (±0.05). Our luminosity function analysis is consistent with previously published determinations within 1σ. The error analysis suggests that uncertainties on the faint-end slope are still too large to draw a firm conclusion about its evolution with redshift. We use our statistical framework to discuss the implication of our study for the physics of
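
    Using the best-fit values quoted above, the Schechter form in magnitudes can be evaluated as follows (a sketch; the magnitude grid is arbitrary).

      import numpy as np

      def schechter_mag(M, M_star=-20.15, alpha=-1.87, log_phi_star=-3.24):
          """Schechter luminosity function in magnitudes [Mpc^-3 mag^-1]."""
          x = 10.0 ** (0.4 * (M_star - M))
          return (0.4 * np.log(10) * 10.0 ** log_phi_star
                  * x ** (alpha + 1) * np.exp(-x))

      M = np.linspace(-22.5, -17.7, 50)
      phi = schechter_mag(M)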

  16. Function of pretribosphenic and tribosphenic mammalian molars inferred from 3D animation.

    PubMed

    Schultz, Julia A; Martin, Thomas

    2014-10-01

    Appearance of the tribosphenic molar in the Late Jurassic (160 Ma) is a crucial innovation for food processing in mammalian evolution. This molar type is characterized by a protocone, a talonid basin and a two-phased chewing cycle, all of which are apomorphic. In this functional study on the teeth of Late Jurassic Dryolestes leiriensis and the living marsupial Monodelphis domestica, we demonstrate that pretribosphenic and tribosphenic molars show fundamental differences of food reduction strategies, representing a shift in dental function during the transition of tribosphenic mammals. By using the Occlusal Fingerprint Analyser (OFA), we simulated the chewing motions of the pretribosphenic Dryolestes that represents an evolutionary precursor condition to such tribosphenic mammals as Monodelphis. Animation of chewing path and detection of collisional contacts between virtual models of teeth suggests that Dryolestes differs from the classical two-phased chewing movement of tribosphenidans, due to the narrowing of the interdental space in cervical (crown-root transition) direction, the inclination angle of the hypoflexid groove, and the unicuspid talonid. The pretribosphenic chewing cycle is equivalent to phase I of the tribosphenic chewing cycle, but the former lacks phase II of the tribosphenic chewing. The new approach can analyze the chewing cycle of the jaw by using polygonal 3D models of tooth surfaces, in a way that is complementary to the electromyography and strain gauge studies of muscle function of living animals. The technique allows alignment and scaling of isolated fossil teeth and utilizes the wear facet orientation and striation of the teeth to reconstruct the chewing path of extinct mammals. PMID:25091547

  17. Function of pretribosphenic and tribosphenic mammalian molars inferred from 3D animation

    NASA Astrophysics Data System (ADS)

    Schultz, Julia A.; Martin, Thomas

    2014-10-01

    Appearance of the tribosphenic molar in the Late Jurassic (160 Ma) is a crucial innovation for food processing in mammalian evolution. This molar type is characterized by a protocone, a talonid basin and a two-phased chewing cycle, all of which are apomorphic. In this functional study on the teeth of Late Jurassic Dryolestes leiriensis and the living marsupial Monodelphis domestica, we demonstrate that pretribosphenic and tribosphenic molars show fundamental differences of food reduction strategies, representing a shift in dental function during the transition of tribosphenic mammals. By using the Occlusal Fingerprint Analyser (OFA), we simulated the chewing motions of the pretribosphenic Dryolestes that represents an evolutionary precursor condition to such tribosphenic mammals as Monodelphis. Animation of chewing path and detection of collisional contacts between virtual models of teeth suggests that Dryolestes differs from the classical two-phased chewing movement of tribosphenidans, due to the narrowing of the interdental space in cervical (crown-root transition) direction, the inclination angle of the hypoflexid groove, and the unicuspid talonid. The pretribosphenic chewing cycle is equivalent to phase I of the tribosphenic chewing cycle, but the former lacks phase II of the tribosphenic chewing. The new approach can analyze the chewing cycle of the jaw by using polygonal 3D models of tooth surfaces, in a way that is complementary to the electromyography and strain gauge studies of muscle function of living animals. The technique allows alignment and scaling of isolated fossil teeth and utilizes the wear facet orientation and striation of the teeth to reconstruct the chewing path of extinct mammals.

  18. Inferring cortical function in the mouse visual system through large-scale systems neuroscience.

    PubMed

    Hawrylycz, Michael; Anastassiou, Costas; Arkhipov, Anton; Berg, Jim; Buice, Michael; Cain, Nicholas; Gouwens, Nathan W; Gratiy, Sergey; Iyer, Ramakrishnan; Lee, Jung Hoon; Mihalas, Stefan; Mitelut, Catalin; Olsen, Shawn; Reid, R Clay; Teeter, Corinne; de Vries, Saskia; Waters, Jack; Zeng, Hongkui; Koch, Christof

    2016-07-01

    The scientific mission of the Project MindScope is to understand neocortex, the part of the mammalian brain that gives rise to perception, memory, intelligence, and consciousness. We seek to quantitatively evaluate the hypothesis that neocortex is a relatively homogeneous tissue, with smaller functional modules that perform a common computational function replicated across regions. We here focus on the mouse as a mammalian model organism with genetics, physiology, and behavior that can be readily studied and manipulated in the laboratory. We seek to describe the operation of cortical circuitry at the computational level by comprehensively cataloging and characterizing its cellular building blocks along with their dynamics and their cell type-specific connectivities. The project is also building large-scale experimental platforms (i.e., brain observatories) to record the activity of large populations of cortical neurons in behaving mice subject to visual stimuli. A primary goal is to understand the series of operations from visual input in the retina to behavior by observing and modeling the physical transformations of signals in the corticothalamic system. We here focus on the contribution that computer modeling and theory make to this long-term effort. PMID:27382147

  19. Bayesian nonparametric inference on quantile residual life function: Application to breast cancer data.

    PubMed

    Park, Taeyoung; Jeong, Jong-Hyeon; Lee, Jae Won

    2012-08-15

    There is often an interest in estimating a residual life function as a summary measure of survival data. For ease in presentation of the potential therapeutic effect of a new drug, investigators may summarize survival data in terms of the remaining life years of patients. Under heavy right censoring, however, some reasonably high quantiles (e.g., median) of a residual lifetime distribution cannot always be estimated via a popular nonparametric approach on the basis of the Kaplan-Meier estimator. To overcome the difficulties in dealing with heavily censored survival data, this paper develops a Bayesian nonparametric approach that takes advantage of a fully model-based but highly flexible probabilistic framework. We use a Dirichlet process mixture of Weibull distributions to avoid strong parametric assumptions on the unknown failure time distribution, making it possible to estimate any quantile residual life function under heavy censoring. Posterior computation through Markov chain Monte Carlo is straightforward and efficient because of conjugacy properties and partial collapse. We illustrate the proposed methods by using both simulated data and heavily censored survival data from a recent breast cancer clinical trial conducted by the National Surgical Adjuvant Breast and Bowel Project. PMID:22437758

  20. Functional morphology of the hallucal metatarsal with implications for inferring grasping ability in extinct primates.

    PubMed

    Goodenberger, Katherine E; Boyer, Doug M; Orr, Caley M; Jacobs, Rachel L; Femiani, John C; Patel, Biren A

    2015-03-01

    Primate evolutionary morphologists have argued that selection for life in a fine branch niche resulted in grasping specializations that are reflected in the hallucal metatarsal (Mt1) morphology of extant "prosimians", while a transition to use of relatively larger, horizontal substrates explains the apparent loss of such characters in anthropoids. Accordingly, these morphological characters (Mt1 torsion, peroneal process length and thickness, and physiological abduction angle) have been used to reconstruct grasping ability and locomotor mode in the earliest fossil primates. Although these characters are prominently featured in debates on the origin and subsequent radiation of Primates, questions remain about their functional significance. This study examines the relationship between these morphological characters of the Mt1 and a novel metric of pedal grasping ability for a large number of extant taxa in a phylogenetic framework. Results indicate greater Mt1 torsion in taxa that engage in hallucal grasping and in those that utilize relatively small substrates more frequently. This study provides evidence that Carpolestes simpsoni has a torsion value more similar to grasping primates than to any scandentian. The results also show that taxa that habitually grasp vertical substrates are distinguished from other taxa in having relatively longer peroneal processes. Furthermore, a longer peroneal process is also correlated with calcaneal elongation, a metric previously found to reflect leaping proclivity. A more refined understanding of the functional associations between Mt1 morphology and behavior in extant primates enhances the potential for using these morphological characters to comprehend primate (locomotor) evolution. PMID:25378276

  1. Inferring cortical function in the mouse visual system through large-scale systems neuroscience

    PubMed Central

    Hawrylycz, Michael; Anastassiou, Costas; Arkhipov, Anton; Berg, Jim; Buice, Michael; Cain, Nicholas; Gouwens, Nathan W.; Gratiy, Sergey; Iyer, Ramakrishnan; Lee, Jung Hoon; Mihalas, Stefan; Mitelut, Catalin; Olsen, Shawn; Reid, R. Clay; Teeter, Corinne; de Vries, Saskia; Waters, Jack; Zeng, Hongkui; Koch, Christof

    2016-01-01

    The scientific mission of the Project MindScope is to understand neocortex, the part of the mammalian brain that gives rise to perception, memory, intelligence, and consciousness. We seek to quantitatively evaluate the hypothesis that neocortex is a relatively homogeneous tissue, with smaller functional modules that perform a common computational function replicated across regions. We here focus on the mouse as a mammalian model organism with genetics, physiology, and behavior that can be readily studied and manipulated in the laboratory. We seek to describe the operation of cortical circuitry at the computational level by comprehensively cataloging and characterizing its cellular building blocks along with their dynamics and their cell type-specific connectivities. The project is also building large-scale experimental platforms (i.e., brain observatories) to record the activity of large populations of cortical neurons in behaving mice subject to visual stimuli. A primary goal is to understand the series of operations from visual input in the retina to behavior by observing and modeling the physical transformations of signals in the corticothalamic system. We here focus on the contribution that computer modeling and theory make to this long-term effort. PMID:27382147

  2. Bayesian inference in an item response theory model with a generalized student t link function

    NASA Astrophysics Data System (ADS)

    Azevedo, Caio L. N.; Migon, Helio S.

    2012-10-01

    In this paper we introduce a new item response theory (IRT) model with a generalized Student t link function with unknown degrees of freedom (df), named the generalized t-link (GtL) IRT model. In this model we consider only the difficulty parameter in the item response function. GtL is an alternative to the two-parameter logit and probit models, since the degrees of freedom (df) play a role similar to the discrimination parameter. However, the behavior of the GtL curves differs from that of the two-parameter models and the usual Student t link, since in GtL the curves obtained for different df's can cross the probit curves at more than one latent trait level. The GtL model has properties similar to those of generalized linear mixed models, such as the existence of sufficient statistics and easy parameter interpretation. Also, many techniques of parameter estimation, model fit assessment and residual analysis developed for those models can be used for the GtL model. We develop fully Bayesian estimation and model fit assessment tools through a Metropolis-Hastings step within a Gibbs sampling algorithm. We conduct a prior sensitivity analysis concerning the degrees of freedom. The simulation study indicates that the algorithm recovers all parameters properly. In addition, some Bayesian model fit assessment tools are considered. Finally, a real data set is analyzed using our approach and other usual models. The results indicate that our model fits the data better than the two-parameter models.

  3. Crustal structure beneath the Japanese Islands inferred from receiver function analysis using similar earthquakes

    NASA Astrophysics Data System (ADS)

    Igarashi, Toshihiro

    2016-04-01

    The stress concentration and strain accumulation process due to inter-plate coupling of the subducting plate should have a large effect on inland shallow earthquakes that occur in the overriding plate. Information on the crustal structure and the crustal thickness is important for understanding this process. In this study, I applied receiver function analysis using similar earthquakes to estimate the crustal velocity structures beneath the Japanese Islands. Because similar earthquakes occur repeatedly at almost the same place, they are useful for extracting information on the spatial distribution and temporal changes of seismic velocity structures beneath the seismic stations. I used data from a telemetric seismographic network covering the Japanese Islands and moderate-sized similar earthquakes that occurred in the Southern Hemisphere at epicentral distances between 30 and 90 degrees over about 26 years from October 1989. Data analysis was performed separately before and after the 2011 Tohoku-Oki earthquake. To identify the spatial distribution of crustal structure, I searched for the best-correlated model between an observed receiver function at each station and synthetic ones by using a grid search method. As a result, I clarified the spatial distribution of the crustal velocity structures. The spatial patterns of velocities from the ground surface to 5 km depth correspond with basement depth models, although the velocities are slower than those of tomography models. They indicate thick sediment layers in several plain and basin areas. The crustal velocity perturbations are consistent with existing tomography models. The active volcanoes correspond to low-velocity zones from the upper crust to the crust-mantle transition. A comparison of the crustal structure before and after the 2011 Tohoku-Oki earthquake suggests that the northeastern Japan arc changed to lower velocities in some areas. Such velocity changes might be due to other effects such as changes of

  4. Effect of denoising on supervised lung parenchymal clusters

    NASA Astrophysics Data System (ADS)

    Jayamani, Padmapriya; Raghunath, Sushravya; Rajagopalan, Srinivasan; Karwoski, Ronald A.; Bartholmai, Brian J.; Robb, Richard A.

    2012-03-01

    Denoising is a critical preconditioning step for quantitative analysis of medical images. Despite promises of more consistent diagnosis, denoising techniques are seldom explored in clinical settings. While this may be attributed to the esoteric nature of the parameter-sensitive algorithms, the lack of quantitative measures of their efficacy in enhancing clinical decision making is a primary cause of physician apathy. This paper addresses this issue by exploring the effect of denoising on the integrity of supervised lung parenchymal clusters. Multiple Volumes of Interest (VOIs) were selected across multiple high-resolution CT scans to represent samples of different patterns (normal, emphysema, ground glass, honeycombing and reticular). The VOIs were labeled through consensus of four radiologists. The original datasets were filtered by multiple denoising techniques (median filtering, anisotropic diffusion, bilateral filtering and non-local means) and the corresponding filtered VOIs were extracted. A plurality of cluster indices based on multiple histogram-based pair-wise similarity measures were used to assess the quality of supervised clusters in the original and filtered space. The resultant rank orders were analyzed using the Borda criteria to find the denoising-similarity measure combination that has the best cluster quality. Our exhaustive analysis reveals (a) for a number of similarity measures, the cluster quality is inferior in the filtered space; and (b) for measures that benefit from denoising, simple median filtering outperforms non-local means and bilateral filtering. Our study suggests the need to judiciously choose, if required, a denoising technique that does not deteriorate the integrity of supervised clusters.
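
    The Borda aggregation step above combines the rank orders produced by the different similarity measures into one consensus ranking. As a minimal illustration (not the study's code; all configuration names are hypothetical), a classic Borda count over per-measure rankings looks like this:

```python
# Hedged sketch: Borda-count aggregation of per-measure rank orders,
# one plausible reading of the "Borda criteria" step described above.
from collections import defaultdict

def borda_aggregate(rankings):
    """rankings: list of orderings of candidates (best first).
    Returns candidates sorted by total Borda score (higher is better)."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, candidate in enumerate(ranking):
            scores[candidate] += n - 1 - position  # best place earns n-1 points
    return sorted(scores, key=scores.get, reverse=True)

# Example: three similarity measures ranking four denoiser configurations.
rank_by_measure = [
    ["median", "nlm", "bilateral", "anisotropic"],
    ["median", "bilateral", "nlm", "anisotropic"],
    ["nlm", "median", "anisotropic", "bilateral"],
]
print(borda_aggregate(rank_by_measure))  # e.g. ['median', 'nlm', ...]
```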

  5. Sediment thickness beneath the Indo-Gangetic Plain and Siwalik Himalaya inferred from receiver function modelling

    NASA Astrophysics Data System (ADS)

    Borah, Kajaljyoti; Kanna, Nagaraju; Rai, S. S.; Prakasam, K. S.

    2015-03-01

    The Indo-Gangetic Plain and the adjoining Siwalik Himalaya are among the seismically most vulnerable regions due to the high density of human population and the presence of thick sediments that amplify the seismic waves of earthquakes in the region. We investigate the sedimentary structure and crustal thickness of the region through joint inversion of the receiver function time series at 14 broadband seismograph locations and the available Rayleigh wave velocity data for the region. Results show significant variability of sedimentary layer thickness, from 1.0 to 2.0 km beneath the Delhi region to 2.0-5.0 km beneath the Indo-Gangetic Plain and the Siwalik Himalaya. As we progress from the Delhi region to the Indo-Gangetic Plain, we observe a decrease in the shear velocity in the sedimentary layer from ∼2.0 km/s to ∼1.3 km/s, while the layer thickness increases progressively from ∼1.0 km in the south to 2.0-5.0 km in the north. The average S-velocity in the sedimentary layer beneath the Siwalik Himalaya is ∼2.1 km/s. Crustal thickness varies from ∼42 km in the Delhi region and ∼48 km in the Indo-Gangetic Plain to ∼50 km in the western part of the Siwalik Himalaya and ∼60 km in the Kumaon region of the Siwalik Himalaya.

  6. The SWELLS survey - VI. Hierarchical inference of the initial mass functions of bulges and discs

    NASA Astrophysics Data System (ADS)

    Brewer, Brendon J.; Marshall, Philip J.; Auger, Matthew W.; Treu, Tommaso; Dutton, Aaron A.; Barnabè, Matteo

    2014-01-01

    The long-standing assumption that the stellar initial mass function (IMF) is universal has recently been challenged by a number of observations. Several studies have shown that a `heavy' IMF (e.g. with a Salpeter-like abundance of low-mass stars and thus normalization) is preferred for massive early-type galaxies, while this IMF is inconsistent with the properties of less massive, later-type galaxies. These discoveries motivate the hypothesis that the IMF may vary (possibly very slightly) across galaxies and across components of individual galaxies (e.g. bulges versus discs). In this paper, we use a sample of 19 late-type strong gravitational lenses from the Sloan WFC Edge-on Late-type Lens Survey (SWELLS) to investigate the IMFs of the bulges and discs in late-type galaxies. We perform a joint analysis of the galaxies' total masses (constrained by strong gravitational lensing) and stellar masses (constrained by optical and near-infrared colours in the context of a stellar population synthesis model, up to an IMF normalization parameter). Using minimal assumptions apart from the physical constraint that the total stellar mass m* within any aperture must be less than the total mass mtot within the aperture, we find that the bulges of the galaxies cannot have IMFs heavier (i.e. implying high mass per unit luminosity) than Salpeter, while the disc IMFs are not well constrained by this data set. We also discuss the necessity for hierarchical modelling when combining incomplete information about multiple astronomical objects. This modelling approach allows us to place upper limits on the size of any departures from universality. More data, including spatially resolved kinematics (as in Paper V) and stellar population diagnostics over a range of bulge and disc masses, are needed to robustly quantify how the IMF varies within galaxies.

  7. Seismic Discontinuities within the Crust and Mantle Beneath Indonesia as Inferred from P Receiver Functions

    NASA Astrophysics Data System (ADS)

    Woelbern, I.; Rumpker, G.

    2015-12-01

    Indonesia is situated at the southern margin of SE Asia, which comprises an assemblage of Gondwana-derived continental terranes, suture zones and volcanic arcs. The formation of SE Asia is believed to have started in the Early Devonian. Its complex history involves the opening and closure of three distinct Tethys oceans, each accompanied by the rifting of continental fragments. We apply the receiver function technique to data of the temporary MERAMEX network operated in Central Java from May to October 2004 by the GeoForschungsZentrum Potsdam. The network consisted of 112 mobile stations with a spacing of about 10 km covering the full width of the island between the southern and northern coastlines. The tectonic history is reflected in a complex crustal structure of Central Java, exhibiting strong topography of the Moho discontinuity related to different tectonic units. A discontinuity of negative impedance contrast is observed throughout the mid-crust, interpreted as the top of a low-velocity layer which shows no depth correlation with the Moho interface. Converted phases generated at greater depth beneath Indonesia indicate the existence of multiple seismic discontinuities within the upper mantle and even below. The strongest signal originates from the base of the mantle transition zone, i.e. the 660 km discontinuity. The phase related to the 410 km discontinuity is less pronounced, but clearly identifiable as well. The derived thickness of the mantle transition zone is in good agreement with the IASP91 velocity model. Additional phases are observed at roughly 33 s and 90 s relative to the P onset, corresponding to about 300 km and 920 km depth, respectively. A signal of reversed polarity indicates the top of a low-velocity layer at about 370 km depth overlying the mantle transition zone.

  8. Complex geometry of the subducted Pacific slab inferred from receiver function

    NASA Astrophysics Data System (ADS)

    Zhang, Ruiqing; Wu, Qingju; Zhang, Guangcheng

    2014-05-01

    In recent years, slab tear has received considerable attention and has been reported at many arc-arc junctures in Pacific plate subduction zones. From 2009 to 2011, we deployed two portable experiments equipped with CMG-3ESPC seismometers and REFTEK-130B recorders in NE China. The two linear seismic arrays were designed nearly parallel, each containing about 60 seismic stations and extending about 1200 km from west to east, spanning all surface geological terrains of NE China. The southern array was set up first and operated continuously for over two years, while the northern deployment worked for only about one year. Using the teleseismic data collected by these two arrays, we calculate P receiver functions to map topographic variation of the upper mantle discontinuities. Our sampled region is located where the juncture between the subducting Kuril and Japan slabs reaches the 660-km discontinuity. Distinct variation of the 660-km discontinuity is mapped beneath the region. A deeper-than-normal 660-km discontinuity is observed locally in the southeastern part of our sampled region. The depression of the 660-km discontinuity may result from an oceanic lithospheric slab deflected in the mantle transition zone, in good agreement with the results of earlier tomographic and other seismic studies in this region. The northeastern portion of our sampled region, however, does not clearly show deflection of the slab. The variation of the topography of the 660-km discontinuity in our sampled regions may indicate a complex geometry of the subducted Pacific slab.

  9. Why are dunkels sticky? Preschoolers infer functionality and intentional creation for artifact properties learned from generic language.

    PubMed

    Cimpian, Andrei; Cadena, Cristina

    2010-10-01

    Artifacts pose a potential learning problem for children because the mapping between their features and their functions is often not transparent. In solving this problem, children are likely to rely on a number of information sources (e.g., others' actions, affordances). We argue that children's sensitivity to nuances in the language used to describe artifacts is an important, but so far unacknowledged, piece of this puzzle. Specifically, we hypothesize that children are sensitive to whether an unfamiliar artifact's features are highlighted using generic (e.g., "Dunkels are sticky") or non-generic (e.g., "This dunkel is sticky") language. Across two studies, older, but not younger, preschoolers who heard such features introduced via generic statements inferred that they are a functional part of the artifact's design more often than children who heard the same features introduced via non-generic statements. The ability to pick up on this linguistic cue may expand considerably the amount of conceptual information about artifacts that children derive from conversations with adults. PMID:20656283

  10. Load identification approach based on basis pursuit denoising algorithm

    NASA Astrophysics Data System (ADS)

    Ginsberg, D.; Ruby, M.; Fritzen, C. P.

    2015-07-01

    The information of the external loads is of great interest in many fields of structural analysis, such as structural health monitoring (SHM) systems or the assessment of damage after extreme events. However, in most cases it is not possible to measure the external forces directly, so they need to be reconstructed. Load reconstruction refers to the problem of estimating an input to a dynamic system when the system output and the impulse response functions are the known quantities. Generally, this leads to a so-called ill-posed inverse problem, which involves solving an underdetermined linear system of equations. For most practical applications it can be assumed that the applied loads are not arbitrarily distributed in time and space; at least some specific characteristics of the external excitation are known a priori. In this contribution, this knowledge was used to develop a more suitable force reconstruction method, which identifies the time history and the force location simultaneously while employing significantly fewer sensors than other reconstruction approaches. The properties of the external force are used to transform the ill-posed problem into a sparse recovery task. The sparse solution is acquired by solving a minimization problem known as basis pursuit denoising (BPDN). The possibility of reconstructing loads from noisy structural measurement signals is demonstrated by considering two frequently occurring loading conditions: harmonic excitation and impact events, separately and combined. First, a simulation study of a simple plate structure is carried out, and thereafter an experimental investigation of a real beam is performed.
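
    Basis pursuit denoising solves min_x 0.5*||Ax - b||² + λ||x||₁ for a sparse x. As a hedged sketch under generic assumptions (the paper's solver and system matrices are not specified; A here is an illustrative stand-in for an impulse-response dictionary), plain ISTA suffices:

```python
# Hedged sketch: BPDN via the iterative shrinkage-thresholding algorithm.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def bpdn_ista(A, b, lam, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)           # gradient of the quadratic term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy demo: recover a sparse "force" vector from noisy underdetermined data.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))
x_true = np.zeros(200); x_true[[10, 77, 150]] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.05 * rng.standard_normal(50)
x_hat = bpdn_ista(A, b, lam=0.1)
```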

  11. Hybrid regularizers-based adaptive anisotropic diffusion for image denoising.

    PubMed

    Liu, Kui; Tan, Jieqing; Ai, Liefu

    2016-01-01

    To eliminate the staircasing effect for total variation filter and synchronously avoid the edges blurring for fourth-order PDE filter, a hybrid regularizers-based adaptive anisotropic diffusion is proposed for image denoising. In the proposed model, the [Formula: see text]-norm is considered as the fidelity term and the regularization term is composed of a total variation regularization and a fourth-order filter. The two filters can be adaptively selected according to the diffusion function. When the pixels locate at the edges, the total variation filter is selected to filter the image, which can preserve the edges. When the pixels belong to the flat regions, the fourth-order filter is adopted to smooth the image, which can eliminate the staircase artifacts. In addition, the split Bregman and relaxation approach are employed in our numerical algorithm to speed up the computation. Experimental results demonstrate that our proposed model outperforms the state-of-the-art models cited in the paper in both the qualitative and quantitative evaluations. PMID:27047730
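
    The abstract elides the exact fidelity norm ("[Formula: see text]"). As an illustration only, and not the paper's exact model, hybrid TV/fourth-order energies of this family typically take a form such as

```latex
E(u) \;=\; \frac{\lambda}{2}\int_{\Omega} (u - f)^{2}\,dx
\;+\; \int_{\Omega} \Big[\, g(x)\,\lvert\nabla u\rvert
\;+\; \bigl(1 - g(x)\bigr)\,\lvert\Delta u\rvert^{2} \Big]\,dx,
\qquad g(x) \in [0,1],
```

    where f is the noisy image and g is an edge indicator driven by the diffusion function: g near 1 at edges selects the total variation term (edge preservation), while g near 0 in flat regions selects the fourth-order term (staircase suppression). The L2 fidelity is shown for concreteness only.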

  12. Application of time-resolved glucose concentration photoacoustic signals based on an improved wavelet denoising

    NASA Astrophysics Data System (ADS)

    Ren, Zhong; Liu, Guodong; Huang, Zhen

    2014-10-01

    Real-time monitoring of blood glucose concentration (BGC) is an important procedure in controlling diabetes mellitus and preventing complications for diabetic patients. Noninvasive measurement of BGC has become a research hotspot because it can avoid physical and psychological harm. Photoacoustic spectroscopy is a well-established, hybrid and alternative technique used to determine the BGC. According to the theory of the photoacoustic technique, the blood is irradiated by a pulsed laser with nanosecond repetition time and micro-joule power; photoacoustic signals containing the BGC information are generated through the thermo-elastic mechanism, and the BGC level can then be interpreted from the photoacoustic signal via data analysis. In practice, however, the time-resolved photoacoustic signals of BGC are polluted by a variety of noises, e.g., the interference of background sounds and the multiple components of blood. The quality of the photoacoustic signal of BGC directly impacts the precision of BGC measurement. An improved wavelet denoising method was therefore proposed to eliminate the noises contained in BGC photoacoustic signals. To overcome the shortcomings of traditional wavelet threshold denoising, an improved dual-threshold wavelet function is proposed in this paper. Simulation results illustrate that the denoising result of this improved wavelet method is better than that of the traditional soft and hard threshold functions. To verify the feasibility of the improved function, actual photoacoustic BGC signals were tested; the results demonstrate that the signal-to-noise ratio (SNR) with the improved function increases by about 40-80%, and the root-mean-square error (RMSE) decreases by about 38.7-52.8%.
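
    The paper's dual-threshold function is not reproduced in the abstract; as a hedged baseline that such an improvement builds on, a standard wavelet threshold denoiser can be sketched with PyWavelets (soft thresholding stands in for the dual-threshold rule):

```python
# Hedged sketch: baseline wavelet threshold denoising with PyWavelets.
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=5, mode="soft"):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Universal threshold estimated from the finest-scale detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode=mode)
                              for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)

# Example: a synthetic photoacoustic-like transient in additive noise.
t = np.linspace(0, 1, 2048)
clean = np.exp(-((t - 0.3) / 0.01) ** 2) * np.sin(2 * np.pi * 200 * t)
noisy = clean + 0.1 * np.random.default_rng(1).standard_normal(t.size)
recovered = wavelet_denoise(noisy)
```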

  13. Fractional domain varying-order differential denoising method

    NASA Astrophysics Data System (ADS)

    Zhang, Yan-Shan; Zhang, Feng; Li, Bing-Zhao; Tao, Ran

    2014-10-01

    Removal of noise is an important step in the image restoration process, and it remains a challenging problem in image processing. Denoising is a process used to remove noise from a corrupted image while retaining edges and other detailed features as much as possible. Recently, denoising in the fractional domain has become a hot research topic. The fractional-order anisotropic diffusion method, which produces a less blocky effect and preserves edges in image denoising, has received much interest in the literature. Based on this method, we propose a new method for image denoising, in which a fractional-varying-order differential, rather than a constant-order differential, is used. The theoretical analysis and experimental results show that compared with the state-of-the-art fractional-order anisotropic diffusion method, the proposed fractional-varying-order differential denoising model can preserve structure and texture well while quickly removing noise, and yields good visual effects and a better peak signal-to-noise ratio.
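
    For background (standard material, not taken from the paper), the fractional-order differential in such diffusion schemes is commonly discretized via the Grünwald-Letnikov definition; a varying-order scheme replaces the constant order α with a spatially varying α(x):

```latex
D^{\alpha} f(x) \;\approx\; \frac{1}{h^{\alpha}}
\sum_{k=0}^{K} (-1)^{k} \binom{\alpha}{k}\, f(x - k h),
\qquad
\binom{\alpha}{k} \;=\; \frac{\Gamma(\alpha+1)}{\Gamma(k+1)\,\Gamma(\alpha-k+1)},
```

    which reduces to the familiar finite differences when α is an integer.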

  14. GPU-accelerated denoising of 3D magnetic resonance images

    SciTech Connect

    Howison, Mark; Wes Bethel, E.

    2014-05-29

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. In practice, applying these filtering operations requires setting multiple parameters. This study was designed to provide better guidance to practitioners for choosing the most appropriate parameters by answering two questions: what parameters yield the best denoising results in practice? And what tuning is necessary to achieve optimal performance on a modern GPU? To answer the first question, we use two different metrics, mean squared error (MSE) and mean structural similarity (MSSIM), to compare denoising quality against a reference image. Surprisingly, the best improvement in structural similarity with the bilateral filter is achieved with a small stencil size that lies within the range of real-time execution on an NVIDIA Tesla M2050 GPU. Moreover, inappropriate choices for parameters, especially scaling parameters, can yield very poor denoising performance. To answer the second question, we perform an autotuning study to empirically determine optimal memory tiling on the GPU. The variation in these results suggests that such tuning is an essential step in achieving real-time performance. These results have important implications for the real-time application of denoising to MR images in clinical settings that require fast turn-around times.
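
    The two quality metrics named above are straightforward to reproduce; a minimal sketch with scikit-image (illustrative only, not the study's code):

```python
# Hedged sketch: scoring a denoised image against a reference with MSE and
# (M)SSIM, the two metrics named in the study.
import numpy as np
from skimage.metrics import mean_squared_error, structural_similarity

def score_denoising(reference, denoised):
    mse = mean_squared_error(reference, denoised)
    mssim = structural_similarity(reference, denoised,
                                  data_range=reference.max() - reference.min())
    return mse, mssim

# Example with synthetic data standing in for an MR slice.
rng = np.random.default_rng(2)
reference = rng.random((128, 128))
denoised = reference + 0.01 * rng.standard_normal((128, 128))
print(score_denoising(reference, denoised))
```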

  15. Wavelet Denoising of Mobile Radiation Data

    SciTech Connect

    Campbell, D B

    2008-10-31

    The FY08 phase of this project investigated the merits of video fusion as a method for mitigating the false alarms encountered by vehicle borne detection systems in an effort to realize performance gains associated with wavelet denoising. The fusion strategy exploited the significant correlations which exist between data obtained from radiation detectors and video systems with coincident fields of view. The additional information provided by optical systems can greatly increase the capabilities of these detection systems by reducing the burden of false alarms and through the generation of actionable information. The investigation into the use of wavelet analysis techniques as a means of filtering the gross-counts signal obtained from moving radiation detectors showed promise for vehicle borne systems. However, the applicability of these techniques to man-portable systems is limited due to minimal gains in performance over the rapid feedback available to system operators under walking conditions. Furthermore, the fusion of video holds significant promise for systems operating from vehicles or systems organized into stationary arrays; however, the added complexity and hardware required by this technique renders it infeasible for man-portable systems.

  16. POGs2: A Web Portal to Facilitate Cross-Species Inferences About Protein Architecture and Function in Plants

    PubMed Central

    Tomcal, Michael; Stiffler, Nicholas; Barkan, Alice

    2013-01-01

    The Putative orthologous Groups 2 Database (POGs2) (http://pogs.uoregon.edu/) integrates information about the inferred proteomes of four plant species (Arabidopsis thaliana, Zea mays, Oryza sativa, and Populus trichocarpa) in a display that facilitates comparisons among orthologs and extrapolation of annotations among species. A single-page view collates key functional data for members of each Putative Orthologous Group (POG): graphical representations of InterPro domains, predicted and established intracellular locations, and imported gene descriptions. The display incorporates POGs predicted by two different algorithms as well as gene trees, allowing users to evaluate the validity of POG memberships. The web interface provides ready access to sequences and alignments of POG members, as well as sequences, alignments, and domain architectures of closely-related paralogs. A simple and flexible search interface permits queries by BLAST and by any combination of gene identifier, keywords, domain names, InterPro identifiers, and intracellular location. The concurrent display of domain architectures for orthologous proteins highlights errors in gene models and false-negatives in domain predictions. The POGs2 layout is also useful for exploring candidate genes identified by transposon tagging, QTL mapping, map-based cloning, and proteomics, and for navigating between orthologous groups that belong to the same gene family. PMID:24340041

  17. POGs2: a web portal to facilitate cross-species inferences about protein architecture and function in plants.

    PubMed

    Tomcal, Michael; Stiffler, Nicholas; Barkan, Alice

    2013-01-01

    The Putative orthologous Groups 2 Database (POGs2) (http://pogs.uoregon.edu/) integrates information about the inferred proteomes of four plant species (Arabidopsis thaliana, Zea mays, Oryza sativa, and Populus trichocarpa) in a display that facilitates comparisons among orthologs and extrapolation of annotations among species. A single-page view collates key functional data for members of each Putative Orthologous Group (POG): graphical representations of InterPro domains, predicted and established intracellular locations, and imported gene descriptions. The display incorporates POGs predicted by two different algorithms as well as gene trees, allowing users to evaluate the validity of POG memberships. The web interface provides ready access to sequences and alignments of POG members, as well as sequences, alignments, and domain architectures of closely-related paralogs. A simple and flexible search interface permits queries by BLAST and by any combination of gene identifier, keywords, domain names, InterPro identifiers, and intracellular location. The concurrent display of domain architectures for orthologous proteins highlights errors in gene models and false-negatives in domain predictions. The POGs2 layout is also useful for exploring candidate genes identified by transposon tagging, QTL mapping, map-based cloning, and proteomics, and for navigating between orthologous groups that belong to the same gene family. PMID:24340041

  18. Non-local MRI denoising using random sampling.

    PubMed

    Hu, Jinrong; Zhou, Jiliu; Wu, Xi

    2016-09-01

    In this paper, we propose a random sampling non-local means (SNLM) algorithm to eliminate noise in 3D MRI datasets. Non-local means (NLM) algorithms have been implemented efficiently for MRI denoising, but are always limited by high computational complexity. Compared to conventional methods, which raster through the entire search window when computing similarity weights, the proposed SNLM algorithm randomly selects a small subset of voxels, which dramatically decreases the computational burden while achieving competitive denoising results. Moreover, a structure tensor, which encapsulates high-order information, was introduced as an optimal sampling pattern for further improvement. Numerical experiments demonstrated that the proposed SNLM method achieves a good balance between denoising quality and computational efficiency. At a relative sampling ratio of ξ=0.05, SNLM can remove noise as effectively as full NLM, while the running time is reduced to 1/20 of NLM's. PMID:27114338
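
    A minimal 1D sketch of the core idea, with hypothetical parameter names and without the paper's structure-tensor-guided sampling: similarity weights are computed over a random subset of the search window instead of all of it.

```python
# Hedged sketch: non-local means with randomly sampled candidate positions.
import numpy as np

def sampled_nlm_1d(x, patch=3, search=20, ratio=0.05, h=0.1, seed=0):
    rng = np.random.default_rng(seed)
    pad = np.pad(x, patch, mode="reflect")
    out = np.empty_like(x, dtype=float)
    for i in range(len(x)):
        p_i = pad[i:i + 2 * patch + 1]           # patch centered at i
        lo, hi = max(0, i - search), min(len(x), i + search + 1)
        candidates = rng.choice(np.arange(lo, hi),
                                size=max(1, int(ratio * (hi - lo))),
                                replace=False)   # random subset of the window
        weights, values = [], []
        for j in candidates:
            p_j = pad[j:j + 2 * patch + 1]
            d2 = np.mean((p_i - p_j) ** 2)       # patch distance
            weights.append(np.exp(-d2 / h ** 2))
            values.append(x[j])
        out[i] = np.dot(weights, values) / np.sum(weights)
    return out
```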

  19. A new study on mammographic image denoising using multiresolution techniques

    NASA Astrophysics Data System (ADS)

    Dong, Min; Guo, Ya-Nan; Ma, Yi-De; Ma, Yu-run; Lu, Xiang-yu; Wang, Ke-ju

    2015-12-01

    Mammography is the simplest and most effective technology for early detection of breast cancer. However, lesion areas of the breast are difficult to detect because mammograms are contaminated with noise. This work discusses various multiresolution denoising techniques, including the classical methods based on the wavelet and contourlet transforms as well as emerging multiresolution methods. A new denoising method based on the dual-tree contourlet transform (DCT) is proposed; the DCT possesses the advantages of approximate shift invariance, directionality and anisotropy. The proposed denoising method is applied to mammograms, and the experimental results show that the emerging multiresolution method succeeds in maintaining edges and texture details, obtaining better performance than the other methods both in visual effect and in terms of the Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR) and Structural Similarity (SSIM) values.

  20. Sinogram denoising via simultaneous sparse representation in learned dictionaries

    NASA Astrophysics Data System (ADS)

    Karimi, Davood; Ward, Rabab K.

    2016-05-01

    Reducing the radiation dose in computed tomography (CT) is highly desirable but it leads to excessive noise in the projection measurements. This can significantly reduce the diagnostic value of the reconstructed images. Removing the noise in the projection measurements is, therefore, essential for reconstructing high-quality images, especially in low-dose CT. In recent years, two new classes of patch-based denoising algorithms proved superior to other methods in various denoising applications. The first class is based on sparse representation of image patches in a learned dictionary. The second class is based on the non-local means method. Here, the image is searched for similar patches and the patches are processed together to find their denoised estimates. In this paper, we propose a novel denoising algorithm for cone-beam CT projections. The proposed method has similarities to both these algorithmic classes but is more effective and much faster. In order to exploit both the correlation between neighboring pixels within a projection and the correlation between pixels in neighboring projections, the proposed algorithm stacks noisy cone-beam projections together to form a 3D image and extracts small overlapping 3D blocks from this 3D image for processing. We propose a fast algorithm for clustering all extracted blocks. The central assumption in the proposed algorithm is that all blocks in a cluster have a joint-sparse representation in a well-designed dictionary. We describe algorithms for learning such a dictionary and for denoising a set of projections using this dictionary. We apply the proposed algorithm on simulated and real data and compare it with three other algorithms. Our results show that the proposed algorithm outperforms some of the best denoising algorithms, while also being much faster.

  1. Sinogram denoising via simultaneous sparse representation in learned dictionaries.

    PubMed

    Karimi, Davood; Ward, Rabab K

    2016-05-01

    Reducing the radiation dose in computed tomography (CT) is highly desirable but it leads to excessive noise in the projection measurements. This can significantly reduce the diagnostic value of the reconstructed images. Removing the noise in the projection measurements is, therefore, essential for reconstructing high-quality images, especially in low-dose CT. In recent years, two new classes of patch-based denoising algorithms proved superior to other methods in various denoising applications. The first class is based on sparse representation of image patches in a learned dictionary. The second class is based on the non-local means method. Here, the image is searched for similar patches and the patches are processed together to find their denoised estimates. In this paper, we propose a novel denoising algorithm for cone-beam CT projections. The proposed method has similarities to both these algorithmic classes but is more effective and much faster. In order to exploit both the correlation between neighboring pixels within a projection and the correlation between pixels in neighboring projections, the proposed algorithm stacks noisy cone-beam projections together to form a 3D image and extracts small overlapping 3D blocks from this 3D image for processing. We propose a fast algorithm for clustering all extracted blocks. The central assumption in the proposed algorithm is that all blocks in a cluster have a joint-sparse representation in a well-designed dictionary. We describe algorithms for learning such a dictionary and for denoising a set of projections using this dictionary. We apply the proposed algorithm on simulated and real data and compare it with three other algorithms. Our results show that the proposed algorithm outperforms some of the best denoising algorithms, while also being much faster. PMID:27055224

  2. Entropic Inference

    NASA Astrophysics Data System (ADS)

    Caticha, Ariel

    2011-03-01

    In this tutorial we review the essential arguments behind entropic inference. We focus on the epistemological notion of information and its relation to the Bayesian beliefs of rational agents. The problem of updating from a prior to a posterior probability distribution is tackled through an eliminative induction process that singles out the logarithmic relative entropy as the unique tool for inference. The resulting method of Maximum relative Entropy (ME) includes as special cases both MaxEnt and Bayes' rule, and therefore unifies the two themes of these workshops—the Maximum Entropy and the Bayesian methods—into a single general inference scheme.
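
    In symbols, the update rule described above selects the posterior p* that maximizes the logarithmic relative entropy with respect to the prior q, subject to the constraints that encode the new information:

```latex
p^{*} \;=\; \arg\max_{p}\; S[p\,|\,q],
\qquad
S[p\,|\,q] \;=\; -\int dx\; p(x)\,\log\frac{p(x)}{q(x)}.
```

    With expectation-value constraints this recovers MaxEnt; with data constraints it recovers Bayes' rule, which is the unification the tutorial describes.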

  3. GPU-Accelerated Denoising in 3D (GD3D)

    Energy Science and Technology Software Center (ESTSC)

    2013-10-01

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. This software addresses two facets of this promising application: what tuning is necessary to achieve optimal performance on a modern GPU? And what parameters yield the best denoising results in practice? To answer the first question, the software performs an autotuning step to empirically determine optimal memory blocking on the GPU. To answer the second, it performs a sweep of algorithm parameters to determine the combination that best reduces the mean squared error relative to a noiseless reference image.

  4. Simultaneous de-noising in phase contrast tomography

    NASA Astrophysics Data System (ADS)

    Koehler, Thomas; Roessl, Ewald

    2012-07-01

    In this work, we investigate methods for de-noising of tomographic differential phase contrast and absorption contrast images. We exploit the fact that in grating-based differential phase contrast imaging (DPCI), first, several images are acquired simultaneously in exactly the same geometry, and second, these different images can show very different contrast-to-noise-ratios. These features of grating-based DPCI are used to generalize the conventional bilateral filter. Experiments using simulations show a superior de-noising performance of the generalized algorithm compared with the conventional one.

  5. Image denoising with the dual-tree complex wavelet transform

    NASA Astrophysics Data System (ADS)

    Yaseen, Alauldeen S.; Pavlova, Olga N.; Pavlov, Alexey N.; Hramov, Alexander E.

    2016-04-01

    The purpose of this study is to compare image denoising techniques based on real and complex wavelet-transforms. Possibilities provided by the classical discrete wavelet transform (DWT) with hard and soft thresholding are considered, and influences of the wavelet basis and image resizing are discussed. The quality of image denoising for the standard 2-D DWT and the dual-tree complex wavelet transform (DT-CWT) is studied. It is shown that DT-CWT outperforms 2-D DWT at the appropriate selection of the threshold level.
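
    The 2-D DWT baseline in this comparison can be sketched with PyWavelets; a DT-CWT variant could be built analogously on a dedicated implementation such as the third-party dtcwt package (the sketch below is illustrative, not the study's code):

```python
# Hedged sketch: 2-D DWT denoising with hard or soft thresholding.
import pywt

def dwt2_denoise(image, wavelet="db4", level=3, thresh=0.1, mode="soft"):
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    new_coeffs = [coeffs[0]]                     # keep the approximation band
    for detail_level in coeffs[1:]:              # (cH, cV, cD) per level
        new_coeffs.append(tuple(pywt.threshold(d, thresh, mode=mode)
                                for d in detail_level))
    return pywt.waverec2(new_coeffs, wavelet)
```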

  6. The NIFTy way of Bayesian signal inference

    NASA Astrophysics Data System (ADS)

    Selig, Marco

    2014-12-01

    We introduce NIFTy, "Numerical Information Field Theory", a software package for the development of Bayesian signal inference algorithms that operate independently from any underlying spatial grid and its resolution. A large number of Bayesian and Maximum Entropy methods for 1D signal reconstruction, 2D imaging, as well as 3D tomography, appear formally similar, but one often finds individualized implementations that are neither flexible nor easily transferable. Signal inference in the framework of NIFTy can be done in an abstract way, such that algorithms, prototyped in 1D, can be applied to real world problems in higher-dimensional settings. NIFTy as a versatile library is applicable and already has been applied in 1D, 2D, 3D and spherical settings. A recent application is the D3PO algorithm targeting the non-trivial task of denoising, deconvolving, and decomposing photon observations in high energy astronomy.

  7. The NIFTY way of Bayesian signal inference

    SciTech Connect

    Selig, Marco

    2014-12-05

    We introduce NIFTY, 'Numerical Information Field Theory', a software package for the development of Bayesian signal inference algorithms that operate independently from any underlying spatial grid and its resolution. A large number of Bayesian and Maximum Entropy methods for 1D signal reconstruction, 2D imaging, as well as 3D tomography, appear formally similar, but one often finds individualized implementations that are neither flexible nor easily transferable. Signal inference in the framework of NIFTY can be done in an abstract way, such that algorithms, prototyped in 1D, can be applied to real world problems in higher-dimensional settings. NIFTY as a versatile library is applicable and already has been applied in 1D, 2D, 3D and spherical settings. A recent application is the D³PO algorithm targeting the non-trivial task of denoising, deconvolving, and decomposing photon observations in high energy astronomy.

  8. Comparison of automatic denoising methods for phonocardiograms with extraction of signal parameters via the Hilbert Transform

    NASA Astrophysics Data System (ADS)

    Messer, Sheila R.; Agzarian, John; Abbott, Derek

    2001-05-01

    Phonocardiograms (PCGs) have many advantages over traditional auscultation (listening to the heart) because they may be replayed, may be analyzed for spectral and frequency content, and frequencies inaudible to the human ear may be recorded. However, various sources of noise may pollute a PCG, including lung sounds, environmental noise and noise generated from contact between the recording device and the skin. Because PCG signals are known to be nonlinear and it is often not possible to determine their noise content, traditional de-noising methods may not be effectively applied. However, other methods including wavelet de-noising, wavelet packet de-noising and averaging can be employed to de-noise the PCG. This study examines and compares these de-noising methods, addressing such questions as which de-noising method gives a better SNR, how much signal information is lost as a result of the de-noising process, and the appropriate uses of the different methods, down to such specifics as which wavelets and decomposition levels give the best results in wavelet and wavelet packet de-noising. In general, wavelet and wavelet packet de-noising performed roughly equally, with optimal de-noising occurring at 3-5 levels of decomposition. Averaging also proved a highly useful de-noising technique; however, in some cases averaging is not appropriate. The Hilbert Transform is used to illustrate the results of the de-noising process and to extract instantaneous features including instantaneous amplitude, frequency, and phase.
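
    The instantaneous features named at the end are obtained from the analytic signal; a minimal sketch with SciPy (illustrative, not the study's code):

```python
# Hedged sketch: instantaneous amplitude, phase, and frequency via the
# Hilbert transform (analytic signal).
import numpy as np
from scipy.signal import hilbert

def instantaneous_features(x, fs):
    analytic = hilbert(x)
    amplitude = np.abs(analytic)                      # instantaneous amplitude
    phase = np.unwrap(np.angle(analytic))             # instantaneous phase
    frequency = np.diff(phase) / (2.0 * np.pi) * fs   # instantaneous frequency
    return amplitude, phase, frequency

# Example: a 50 Hz tone whose recovered instantaneous frequency is ~50 Hz.
fs = 4000
t = np.arange(0, 1.0, 1.0 / fs)
amp, ph, freq = instantaneous_features(np.sin(2 * np.pi * 50 * t), fs)
```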

  9. THE PANCHROMATIC HUBBLE ANDROMEDA TREASURY. IV. A PROBABILISTIC APPROACH TO INFERRING THE HIGH-MASS STELLAR INITIAL MASS FUNCTION AND OTHER POWER-LAW FUNCTIONS

    SciTech Connect

    Weisz, Daniel R.; Fouesneau, Morgan; Dalcanton, Julianne J.; Clifton Johnson, L.; Beerman, Lori C.; Williams, Benjamin F.; Hogg, David W.; Foreman-Mackey, Daniel T.; Rix, Hans-Walter; Gouliermis, Dimitrios; Dolphin, Andrew E.; Lang, Dustin; Bell, Eric F.; Gordon, Karl D.; Kalirai, Jason S.; Skillman, Evan D.

    2013-01-10

    We present a probabilistic approach for inferring the parameters of the present-day power-law stellar mass function (MF) of a resolved young star cluster. This technique (1) fully exploits the information content of a given data set; (2) can account for observational uncertainties in a straightforward way; (3) assigns meaningful uncertainties to the inferred parameters; (4) avoids the pitfalls associated with binning data; and (5) can be applied to virtually any resolved young cluster, laying the groundwork for a systematic study of the high-mass stellar MF (M ≳ 1 M ⊙). Using simulated clusters and Markov Chain Monte Carlo sampling of the probability distribution functions, we show that estimates of the MF slope, α, are unbiased and that the uncertainty, Δα, depends primarily on the number of observed stars and on the range of stellar masses they span, assuming that the uncertainties on individual masses and the completeness are both well characterized. Using idealized mock data, we compute the theoretical precision, i.e., lower limits, on α, and provide an analytic approximation for Δα as a function of the observed number of stars and mass range. Comparison with literature studies shows that ~3/4 of quoted uncertainties are smaller than the theoretical lower limit. By correcting these uncertainties to the theoretical lower limits, we find that the literature studies yield ⟨α⟩ = 2.46, with a 1σ dispersion of 0.35 dex. We verify that it is impossible for a power-law MF to obtain meaningful constraints on the upper mass limit of the initial mass function, beyond the lower bound of the most massive star actually observed. We show that avoiding substantial biases in the MF slope requires (1) including the MF as a prior when deriving individual stellar mass estimates, (2) modeling the uncertainties in the individual stellar masses, and (3) fully characterizing and then explicitly modeling the

  10. The Panchromatic Hubble Andromeda Treasury. IV. A Probabilistic Approach to Inferring the High-mass Stellar Initial Mass Function and Other Power-law Functions

    NASA Astrophysics Data System (ADS)

    Weisz, Daniel R.; Fouesneau, Morgan; Hogg, David W.; Rix, Hans-Walter; Dolphin, Andrew E.; Dalcanton, Julianne J.; Foreman-Mackey, Daniel T.; Lang, Dustin; Johnson, L. Clifton; Beerman, Lori C.; Bell, Eric F.; Gordon, Karl D.; Gouliermis, Dimitrios; Kalirai, Jason S.; Skillman, Evan D.; Williams, Benjamin F.

    2013-01-01

    We present a probabilistic approach for inferring the parameters of the present-day power-law stellar mass function (MF) of a resolved young star cluster. This technique (1) fully exploits the information content of a given data set; (2) can account for observational uncertainties in a straightforward way; (3) assigns meaningful uncertainties to the inferred parameters; (4) avoids the pitfalls associated with binning data; and (5) can be applied to virtually any resolved young cluster, laying the groundwork for a systematic study of the high-mass stellar MF (M ≳ 1 M ⊙). Using simulated clusters and Markov Chain Monte Carlo sampling of the probability distribution functions, we show that estimates of the MF slope, α, are unbiased and that the uncertainty, Δα, depends primarily on the number of observed stars and on the range of stellar masses they span, assuming that the uncertainties on individual masses and the completeness are both well characterized. Using idealized mock data, we compute the theoretical precision, i.e., lower limits, on α, and provide an analytic approximation for Δα as a function of the observed number of stars and mass range. Comparison with literature studies shows that ~3/4 of quoted uncertainties are smaller than the theoretical lower limit. By correcting these uncertainties to the theoretical lower limits, we find that the literature studies yield ⟨α⟩ = 2.46, with a 1σ dispersion of 0.35 dex. We verify that it is impossible for a power-law MF to obtain meaningful constraints on the upper mass limit of the initial mass function, beyond the lower bound of the most massive star actually observed. We show that avoiding substantial biases in the MF slope requires (1) including the MF as a prior when deriving individual stellar mass estimates, (2) modeling the uncertainties in the individual stellar masses, and (3) fully characterizing and then explicitly modeling the completeness for stars of a given mass. The precision on MF
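
    The core ingredient of such an approach is the unbinned likelihood of a truncated power law; a hedged sketch that omits the paper's treatment of mass uncertainties and completeness:

```python
# Hedged sketch: maximum-likelihood slope of p(m) ∝ m^(-alpha) on
# [m_lo, m_hi]; the paper samples this kind of likelihood with MCMC.
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_like(alpha, masses, m_lo, m_hi):
    # Normalization of the truncated power law (valid for alpha != 1).
    norm = (1.0 - alpha) / (m_hi ** (1.0 - alpha) - m_lo ** (1.0 - alpha))
    return -(len(masses) * np.log(norm) - alpha * np.sum(np.log(masses)))

# Mock cluster with alpha = 2.35, drawn by inverse-transform sampling.
rng = np.random.default_rng(3)
alpha_true, m_lo, m_hi = 2.35, 1.0, 100.0
a1 = 1.0 - alpha_true
u = rng.random(500)
masses = (m_lo ** a1 + u * (m_hi ** a1 - m_lo ** a1)) ** (1.0 / a1)

fit = minimize_scalar(neg_log_like, bounds=(1.1, 4.0), method="bounded",
                      args=(masses, m_lo, m_hi))
print(fit.x)  # close to 2.35 for a 500-star mock cluster
```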

  11. Adaptive redundant multiwavelet denoising with improved neighboring coefficients for gearbox fault detection

    NASA Astrophysics Data System (ADS)

    Chen, Jinglong; Zi, Yanyang; He, Zhengjia; Wang, Xiaodong

    2013-07-01

    Gearbox fault detection under strong background noise is a challenging task. It is feasible to make the fault feature distinct through multiwavelet denoising. In addition to the advantage of multi-resolution analysis, a multiwavelet with several scaling functions and wavelet functions can detect the different fault features effectively. However, fixed basis functions not related to the given signal may lower the accuracy of fault detection. Moreover, the multiwavelet transform may result in Gibbs phenomena in the reconstruction step. Furthermore, both the traditional term-by-term threshold and neighboring coefficients do not consider the direct spatial dependency of wavelet coefficients at adjacent scales. To overcome these deficiencies, adaptive redundant multiwavelet (ARM) denoising with improved neighboring coefficients (NeighCoeff) is proposed. ARM is based on the symmetric multiwavelet lifting scheme (SMLS), taking kurtosis—partial envelope spectrum entropy as the evaluation objective and genetic algorithms as the optimization method. Considering the intra-scale and inter-scale dependency of wavelet coefficients, the improved NeighCoeff method is developed and incorporated into ARM. The proposed method is applied to both the simulated signal and practical gearbox vibration signals under different conditions. The results show its effectiveness and reliability for gearbox fault detection.

  12. A non-gradient-based energy minimization approach to the image denoising problem

    NASA Astrophysics Data System (ADS)

    Lukić, Tibor; Žunić, Joviša

    2014-09-01

    A common approach to denoising images is to minimize an energy function combining a quadratic data fidelity term with a total variation-based regularization. The total variation, comprising the gradient magnitude function, originally comes from mathematical analysis and is defined on a continuous domain only. When working in a discrete domain (e.g. when dealing with digital images), the accuracy in the gradient computation is limited by the applied image resolution. In this paper we propose a new approach, where the gradient magnitude function is replaced with an operator with similar properties (i.e. it also expresses the intensity variation in a neighborhood of the considered point), but is concurrently applicable in both continuous and discrete space. This operator is the shape elongation measure, one of the shape descriptors intensively used in shape-based image processing and computer vision tasks. The experiments provided in this paper confirm the capability of the proposed approach for providing high-quality reconstructions. Based on the performance comparison of a number of test images, we can say that the new method outperforms the energy minimization-based denoising methods often used in the literature for method comparison.
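
    The shape elongation measure referred to above is conventionally computed from central second moments; a minimal sketch over a grayscale patch (the denoising energy built on it is not reproduced here):

```python
# Hedged sketch: moment-based shape elongation of an image patch.
# The measure is >= 1; larger values indicate a more elongated intensity
# distribution in the patch.
import numpy as np

def elongation(patch):
    ys, xs = np.indices(patch.shape)
    total = patch.sum()
    xc, yc = (xs * patch).sum() / total, (ys * patch).sum() / total
    mu20 = ((xs - xc) ** 2 * patch).sum()     # central second moments
    mu02 = ((ys - yc) ** 2 * patch).sum()
    mu11 = ((xs - xc) * (ys - yc) * patch).sum()
    root = np.sqrt((mu20 - mu02) ** 2 + 4.0 * mu11 ** 2)
    return (mu20 + mu02 + root) / (mu20 + mu02 - root)
```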

  13. Discrete shearlet transform on GPU with applications in anomaly detection and denoising

    NASA Astrophysics Data System (ADS)

    Gibert, Xavier; Patel, Vishal M.; Labate, Demetrio; Chellappa, Rama

    2014-12-01

    Shearlets have emerged in recent years as one of the most successful methods for the multiscale analysis of multidimensional signals. Unlike wavelets, shearlets form a pyramid of well-localized functions defined not only over a range of scales and locations, but also over a range of orientations and with highly anisotropic supports. As a result, shearlets are much more effective than traditional wavelets in handling the geometry of multidimensional data, and this was exploited in a wide range of applications from image and signal processing. However, despite their desirable properties, the wider applicability of shearlets is limited by the computational complexity of current software implementations. For example, denoising a single 512 × 512 image using a current implementation of the shearlet-based shrinkage algorithm can take between 10 s and 2 min, depending on the number of CPU cores, and much longer processing times are required for video denoising. On the other hand, due to the parallel nature of the shearlet transform, it is possible to use graphics processing units (GPU) to accelerate its implementation. In this paper, we present an open source stand-alone implementation of the 2D discrete shearlet transform using CUDA C++ as well as GPU-accelerated MATLAB implementations of the 2D and 3D shearlet transforms. We have instrumented the code so that we can analyze the running time of each kernel under different GPU hardware. In addition to denoising, we describe a novel application of shearlets for detecting anomalies in textured images. In this application, computation times can be reduced by a factor of 50 or more, compared to multicore CPU implementations.

  14. Denoising of 4D Cardiac Micro-CT Data Using Median-Centric Bilateral Filtration

    PubMed Central

    Clark, D.; Johnson, G.A.; Badea, C.T.

    2012-01-01

    Bilateral filtration has proven an effective tool for denoising CT data. The classic filter utilizes Gaussian domain and range weighting functions in 2D. More recently, other distributions have yielded more accurate results in specific applications, and the bilateral filtration framework has been extended to higher dimensions. In this study, brute-force optimization is employed to evaluate the use of several alternative distributions for both domain and range weighting: Andrew's Sine Wave, El Fallah Ford, Gaussian, Flat, Lorentzian, Huber's Minimax, Tukey's Bi-weight, and Cosine. Two variations on the classic bilateral filter which use median filtration to reduce bias in range weights are also investigated: median-centric and hybrid bilateral filtration. Using the 4D MOBY mouse phantom reconstructed with noise (stdev. ~ 65 HU), hybrid bilateral filtration, a combination of the classic and median-centric filters, with Flat domain and range weighting is shown to provide optimal denoising results (PSNRs: 31.69, classic; 31.58, median-centric; 32.25, hybrid). To validate these phantom studies, the optimal filters are also applied to in vivo, 4D cardiac micro-CT data acquired in the mouse. In a constant region of the left ventricle, hybrid bilateral filtration with Flat domain and range weighting is shown to provide optimal smoothing (stdev: original, 72.2 HU; classic, 20.3 HU; median-centric, 24.1 HU; hybrid, 15.9 HU). While the optimal results were obtained using 4D filtration, the 3D hybrid filter is ultimately recommended for denoising 4D cardiac micro-CT data because it is more computationally tractable and less prone to artifacts (MOBY PSNR: 32.05; left ventricle stdev: 20.5 HU). PMID:24386540
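
    For readers unfamiliar with the classic filter being extended here, the following is a minimal grayscale sketch of bilateral filtration with Gaussian domain and range weights; the parameter values are illustrative, and the alternative distributions studied above would replace the two np.exp kernels.

```python
import numpy as np

def bilateral_filter(img, sigma_d=2.0, sigma_r=30.0, radius=4):
    """Classic 2D bilateral filter: Gaussian domain and range weighting."""
    H, W = img.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    domain = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_d**2))   # spatial kernel
    padded = np.pad(img.astype(float), radius, mode='reflect')
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            window = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-(window - img[i, j])**2 / (2.0 * sigma_r**2))
            w = domain * rng                                  # combined weight
            out[i, j] = np.sum(w * window) / np.sum(w)
    return out
```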

  15. A procedure for denoising dual-axis swallowing accelerometry signals.

    PubMed

    Sejdić, Ervin; Steele, Catriona M; Chau, Tom

    2010-01-01

    Dual-axis swallowing accelerometry is an emerging tool for the assessment of dysphagia (swallowing difficulties). These signals, however, can be very noisy as a result of physiological and motion artifacts. In this note, we propose a novel scheme for denoising those signals, i.e. a computationally efficient search for the optimal denoising threshold within a reduced wavelet subspace. To determine a viable subspace, the algorithm relies on the minimum value of the estimated upper bound for the reconstruction error. A numerical analysis using synthetic test signals demonstrated that the proposed scheme is computationally more efficient than minimum noiseless description length (MNDL)-based denoising. It also yields smaller reconstruction errors than the MNDL, SURE, and Donoho denoising methods. When applied to dual-axis swallowing accelerometry signals, the proposed scheme exhibits improved performance for dry, wet, and wet chin-tuck swallows. These results are important for the further development of medical devices based on dual-axis swallowing accelerometry signals. PMID:19940343

  16. A wavelet multiscale denoising algorithm for magnetic resonance (MR) images

    NASA Astrophysics Data System (ADS)

    Yang, Xiaofeng; Fei, Baowei

    2011-02-01

    Based on the Radon transform, a wavelet multiscale denoising method is proposed for MR images. The approach explicitly accounts for the Rician nature of MR data. Based on noise statistics, we apply the Radon transform to the original MR images and use a Gaussian noise model to process the MR sinogram image. A translation-invariant wavelet transform is employed to decompose the MR sinogram into multiple scales in order to effectively denoise the images. Based on the nature of Rician noise, we estimate the noise variance at different scales. Finally, we apply the inverse Radon transform to the denoised sinogram in order to reconstruct the original MR images. Phantom images, simulated brain MR images, and human brain MR images were used to validate our method. The experimental results show the superiority of the proposed scheme over traditional methods. Our method can reduce Rician noise while preserving the key image details and features. The wavelet denoising method can have wide applications in MRI as well as other imaging modalities.
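
    A rough sketch of the described pipeline is given below, assuming scikit-image for the Radon/inverse-Radon steps and PyWavelets for the thresholding. For brevity it uses a decimated DWT and a single global soft threshold on a square input image, whereas the paper uses a translation-invariant transform with scale-dependent, Rician-aware noise estimates.

```python
import numpy as np
import pywt
from skimage.transform import radon, iradon

def radon_wavelet_denoise(img, wavelet='db4', level=3):
    """Project to the sinogram domain, wavelet-threshold, reconstruct."""
    theta = np.linspace(0.0, 180.0, max(img.shape), endpoint=False)
    sino = radon(img, theta=theta)
    coeffs = pywt.wavedec2(sino, wavelet, level=level)
    # robust noise estimate from the finest diagonal detail band
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    t = sigma * np.sqrt(2.0 * np.log(sino.size))        # universal threshold
    den = [coeffs[0]] + [tuple(pywt.threshold(d, t, 'soft') for d in lvl)
                         for lvl in coeffs[1:]]
    rec_sino = pywt.waverec2(den, wavelet)[:sino.shape[0], :sino.shape[1]]
    return iradon(rec_sino, theta=theta, output_size=img.shape[0])
```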

  17. Perceptual inference.

    PubMed

    Aggelopoulos, Nikolaos C

    2015-08-01

    Perceptual inference refers to the ability to infer sensory stimuli from predictions that result from internal neural representations built through prior experience. Methods of Bayesian statistical inference and decision theory model cognition adequately by using error sensing either in guiding action or in "generative" models that predict the sensory information. In this framework, perception can be seen as a process qualitatively distinct from sensation, a process of information evaluation using previously acquired and stored representations (memories) that is guided by sensory feedback. The stored representations can be utilised as internal models of sensory stimuli, enabling long-term associations, for example in operant conditioning. Evidence for perceptual inference is contributed by such phenomena as the cortical co-localisation of object perception with object memory, the invariance of the responses of some neurons to variations in the stimulus, as well as situations in which perception can be dissociated from sensation. In the context of perceptual inference, sensory areas of the cerebral cortex that have been facilitated by a priming signal may be regarded as comparators in a closed feedback loop, similar to the better-known motor reflexes in the sensorimotor system. The adult cerebral cortex can be regarded as similar to a servomechanism, using sensory feedback to correct internal models and producing predictions of the outside world on the basis of past experience. PMID:25976632

  18. Crustal anisotropy in northeastern Tibetan Plateau inferred from receiver functions: Rock textures caused by metamorphic fluids and lower crust flow?

    NASA Astrophysics Data System (ADS)

    Liu, Zhen; Park, Jeffrey; Rye, Danny M.

    2015-10-01

    The crust of Tibetan Plateau may have formed via shortening/thickening or large-scale underthrusting, and subsequently modified via lower crust channel flows and volatile-mediated regional metamorphism. The amplitude and distribution of crustal anisotropy record the history of continental deformation, offering clues to its formation and later modification. In this study, we first investigate the back-azimuth dependence of Ps converted phases using multitaper receiver functions (RFs). We analyze teleseismic data for 35 temporary broadband stations in the ASCENT experiment located in northeastern Tibet. We stack receiver functions after a moving-window moveout correction. Major features of RFs include: 1) Ps arrivals at 8-10 s on the radial components, suggesting a 70-90-km crustal thickness in the study area; 2) two-lobed back-azimuth variation for intra-crustal Ps phases in the upper crust (< 20 km), consistent with tilted symmetry axis anisotropy or dipping interfaces; 3) significant Ps arrivals with four-lobed back-azimuth variation distributed in distinct layers in the middle and lower crust (up to 60 km), corresponding to (sub)horizontal-axis anisotropy; and 4) weak or no evidence of azimuthal anisotropy in the lowermost crust. To study the anisotropy, we compare the observed RF stacks with one-dimensional reflectivity synthetic seismograms in anisotropic media, and fit major features by "trial and error" forward modeling. Crustal anisotropy offers few clues on plateau formation, but strong evidence of ongoing deformation and metamorphism. We infer strong horizontal-axis anisotropy concentrated in the middle and lower crust, which could be explained by vertically aligned sheet silicates, open cracks filled with magma or other fluid, vertical vein structures or by 1-10-km-scale chimney structures that have focused metamorphic fluids. Simple dynamic models encounter difficulty in generating vertically aligned sheet silicates. Instead, we interpret our data to

  19. The Hilbert-Huang Transform-Based Denoising Method for the TEM Response of a PRBS Source Signal

    NASA Astrophysics Data System (ADS)

    Hai, Li; Guo-qiang, Xue; Pan, Zhao; Hua-sen, Zhong; Khan, Muhammad Younis

    2016-05-01

    The denoising process is critical in processing transient electromagnetic (TEM) sounding data. For the full-waveform pseudo-random binary sequence (PRBS) response, an inadequate noise estimate may result in an erroneous interpretation. We consider the Hilbert-Huang transform (HHT) and its application to suppress the noise in the PRBS response. The focus is on the thresholding scheme used to suppress the noise and on the analysis of the signal based on its Hilbert time-frequency representation. The method first decomposes the signal into intrinsic mode functions and then, inspired by the thresholding scheme in wavelet analysis, applies an adaptive interval thresholding that sets to zero all components of the intrinsic mode functions that are lower than a threshold related to the noise level. The algorithm is based on the characteristics of the PRBS response. The HHT-based denoising scheme is tested on synthetic and field data with different noise levels. The results show that the proposed method has good capability in denoising and detail preservation.
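
    The interval-thresholding step can be sketched as below: assuming the intrinsic mode functions (IMFs) have already been produced by any EMD implementation, whole zero-crossing intervals whose extremum falls below a noise-derived threshold are zeroed before the signal is rebuilt. The threshold rule here is a simplified, generic stand-in for the authors' PRBS-specific rule.

```python
import numpy as np

def imf_interval_threshold(imfs, noise_scale=1.0):
    """Interval thresholding of IMFs; `imfs` is an (n_imfs, n_samples) array."""
    n = imfs.shape[1]
    out = np.zeros(n)
    for imf in imfs:
        sigma = np.median(np.abs(imf)) / 0.6745            # robust noise level
        t = noise_scale * sigma * np.sqrt(2.0 * np.log(n)) # universal threshold
        keep = imf.copy()
        # zero entire zero-crossing intervals whose extremum is below t
        zc = np.where(np.diff(np.signbit(imf)))[0]
        bounds = np.concatenate(([0], zc + 1, [n]))
        for a, b in zip(bounds[:-1], bounds[1:]):
            if np.max(np.abs(imf[a:b])) < t:
                keep[a:b] = 0.0
        out += keep                                        # rebuild the signal
    return out
```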

  1. Enhancement of signal denoising and multiple fault signatures detecting in rotating machinery using dual-tree complex wavelet transform

    NASA Astrophysics Data System (ADS)

    Wang, Yanxue; He, Zhengjia; Zi, Yanyang

    2010-01-01

    In order to enhance the desired features related to a particular type of machine fault, a technique based on the dual-tree complex wavelet transform (DTCWT) is proposed in this paper. Numerical simulations demonstrate that the DTCWT enjoys better shift invariance and reduced spectral aliasing than the second-generation wavelet transform (SGWT) and empirical mode decomposition. These advantages arise from the relationship between the two dual-tree wavelet basis functions, rather than from matching a single wavelet basis function to the signal being analyzed. Since noise inevitably exists in measured signals, an enhanced vibration-signal denoising algorithm incorporating DTCWT with NeighCoeff shrinkage is also developed. Denoising results for vibration signals from a cracked gear indicate that the proposed method can effectively remove noise and retain the valuable information as much as possible, compared with DWT- and SGWT-based NeighCoeff shrinkage denoising methods. Extraction of the comprehensive signatures embedded in the vibration signals is of practical importance for identifying the roots of a fault, especially combined faults. In the case of multiple-feature detection, diagnosis results for rolling element bearings with combined faults and for actual industrial equipment confirm that the proposed DTCWT-based method is a powerful and versatile tool that consistently outperforms SGWT and the fast kurtogram, which are widely used at present. Moreover, the proposed method is well suited to on-line surveillance and diagnosis owing to its good robustness and efficient algorithm.

  2. Multitaper Spectral Analysis and Wavelet Denoising Applied to Helioseismic Data

    NASA Technical Reports Server (NTRS)

    Komm, R. W.; Gu, Y.; Hill, F.; Stark, P. B.; Fodor, I. K.

    1999-01-01

    Estimates of solar normal mode frequencies from helioseismic observations can be improved by using Multitaper Spectral Analysis (MTSA) to estimate spectra from the time series, then applying wavelet denoising to the log spectra. MTSA leads to a power spectrum estimate with reduced variance and better leakage properties than the conventional periodogram. Under the assumption of stationarity and mild regularity conditions, the log multitaper spectrum has a statistical distribution that is approximately Gaussian, so wavelet denoising is asymptotically an optimal method to reduce the noise in the estimated spectra. We find that a single m-ν spectrum benefits greatly from MTSA followed by wavelet denoising, and that wavelet denoising by itself can be used to improve m-averaged spectra. We compare estimates using two different 5-taper estimates (Slepian and sine tapers) and the periodogram estimate, for GONG time series at selected angular degrees l. We compare those three spectra with and without wavelet denoising, both visually and in terms of the mode parameters estimated from the pre-processed spectra using the GONG peak-fitting algorithm. The two multitaper estimates give equivalent results. The number of modes fitted well by the GONG algorithm is 20% to 60% larger (depending on l and the temporal frequency) when applied to the multitaper estimates than when applied to the periodogram. The estimated mode parameters (frequency, amplitude and width) are comparable for the three power spectrum estimates, except for modes with very small mode widths (a few frequency bins), where the multitaper spectra broadened the modes compared with the periodogram. We tested the influence of the number of tapers used and found that narrow modes at low n values are broadened to the extent that they can no longer be fit if the number of tapers is too large. For helioseismic time series of this length and temporal resolution, the optimal number of tapers is less than 10.
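
    A minimal version of the multitaper estimation step, using the Slepian (DPSS) tapers available in SciPy, is sketched below; the wavelet denoising of the log spectrum and the GONG peak fitting are beyond its scope, and the NW/K values are illustrative.

```python
import numpy as np
from scipy.signal import windows

def multitaper_psd(x, fs=1.0, NW=4, K=7):
    """Average K Slepian-tapered eigenspectra into one PSD estimate."""
    x = np.asarray(x, dtype=float)
    n = x.size
    tapers = windows.dpss(n, NW, Kmax=K)          # K orthogonal Slepian tapers
    eigspec = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    psd = eigspec.mean(axis=0) / fs               # reduced-variance estimate
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, psd
```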

  3. A fast non-local image denoising algorithm

    NASA Astrophysics Data System (ADS)

    Dauwe, A.; Goossens, B.; Luong, H. Q.; Philips, W.

    2008-02-01

    In this paper we propose several improvements to the original non-local means algorithm introduced by Buades et al., which obtains state-of-the-art denoising results. The strength of this algorithm is to exploit the repetitive character of the image in order to denoise it, unlike conventional denoising algorithms, which typically operate in a local neighbourhood. Due to the enormous number of weight computations, the original algorithm has a high computational cost. One improvement in image quality over the original algorithm is to ignore the contributions from dissimilar windows. Even though their weights are very small at first sight, the estimated pixel value can be severely biased by the many small contributions; this adverse influence can be eliminated by setting the corresponding weights to zero. Using a preclassification based on the first three statistical moments, only contributions from similar neighborhoods are computed. To decide whether a window is similar or dissimilar, we derive thresholds for images corrupted with additive white Gaussian noise. Our accelerated approach is further optimized by taking advantage of the symmetry in the weights, which roughly halves the computation time, and by using a lookup table to speed up the weight computations. Compared to the original algorithm, our proposed method produces images with higher PSNR and better visual quality in less computation time. It even outperforms state-of-the-art wavelet denoising techniques in both visual quality and PSNR for images containing many repetitive structures such as textures: the denoised images are much sharper and contain fewer artifacts. The proposed optimizations can also be applied to other image processing tasks that employ the concept of repetitive structures, such as intra-frame super-resolution or detection of digital image forgery.
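
    The sketch below illustrates the preclassification idea on a plain (slow) pixelwise non-local means: candidate patches whose first moment differs too much from the reference patch receive zero weight. Only the first moment is used and the rejection tolerance is illustrative; the paper derives thresholds for the first three moments under AWGN and adds symmetry and lookup-table optimizations not shown here.

```python
import numpy as np

def nlm_preclassified(img, h=10.0, patch=3, search=7, mom_tol=0.2):
    """Pixelwise NLM with a crude first-moment patch preclassification."""
    p, s = patch // 2, search // 2
    pad = np.pad(img.astype(float), p + s, mode='reflect')
    H, W = img.shape
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            ci, cj = i + p + s, j + p + s
            ref = pad[ci - p:ci + p + 1, cj - p:cj + p + 1]
            wsum = vsum = 0.0
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    cand = pad[ci + di - p:ci + di + p + 1,
                               cj + dj - p:cj + dj + p + 1]
                    if abs(cand.mean() - ref.mean()) > mom_tol * h:
                        continue                  # dissimilar window: weight 0
                    d2 = np.mean((cand - ref) ** 2)
                    w = np.exp(-d2 / h**2)
                    wsum += w
                    vsum += w * pad[ci + di, cj + dj]
            out[i, j] = vsum / wsum if wsum > 0 else img[i, j]
    return out
```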

  4. Dictionary-based image denoising for dual energy computed tomography

    NASA Astrophysics Data System (ADS)

    Mechlem, Korbinian; Allner, Sebastian; Mei, Kai; Pfeiffer, Franz; Noël, Peter B.

    2016-03-01

    Compared to conventional computed tomography (CT), dual energy CT allows for improved material decomposition by conducting measurements at two distinct energy spectra. Since radiation exposure is a major concern in clinical CT, there is a need for tools to reduce the noise level in images while preserving diagnostic information. One way to achieve this goal is the application of image-based denoising algorithms after an analytical reconstruction has been performed. We have developed a modified dictionary denoising algorithm for dual energy CT aimed at exploiting the high spatial correlation between images obtained from different energy spectra. Both the low- and high-energy images are partitioned into small patches, which are subsequently normalized. Combined patches with improved signal-to-noise ratio are formed by a weighted addition of corresponding normalized patches from both images. Assuming that corresponding low- and high-energy image patches are related by a linear transformation, the signal in both patches adds coherently while the noise does not. Conventional dictionary denoising is then performed on the combined patches. Compared to conventional dictionary denoising and bilateral filtering, our algorithm achieved superior performance in terms of qualitative and quantitative image quality measures. We demonstrate, in simulation studies, that this approach can produce 2D histograms of the high- and low-energy reconstruction which are characterized by significantly improved material features and separation. Moreover, in comparison to other approaches that attempt denoising without simultaneously using both energy signals, superior similarity to the ground truth can be achieved with our proposed algorithm.
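
    Under the stated assumption that corresponding patches are related by a linear transformation, the combination-plus-dictionary step might look like the sketch below, using scikit-learn's dictionary learning on small images. The mean-removal normalization, the fixed weight w, and re-adding the low-energy patch means are simplifications of the paper's procedure, and all names are illustrative.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)

def combined_patch_denoise(low, high, w=0.5, psize=(8, 8), n_atoms=64):
    """Weighted addition of normalized low/high-energy patches, then
    dictionary denoising of the combined patches (simplified sketch)."""
    pl = extract_patches_2d(low.astype(float), psize)
    ph = extract_patches_2d(high.astype(float), psize)
    means = pl.mean(axis=(1, 2), keepdims=True)
    combined = (w * (pl - means)
                + (1 - w) * (ph - ph.mean(axis=(1, 2), keepdims=True)))
    X = combined.reshape(len(combined), -1)
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0)
    code = dico.fit(X).transform(X)               # sparse codes over the atoms
    rec = (code @ dico.components_).reshape(combined.shape) + means
    return reconstruct_from_patches_2d(rec, low.shape)
```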

  5. MicroRNA-Target Network Inference and Local Network Enrichment Analysis Identify Two microRNA Clusters with Distinct Functions in Head and Neck Squamous Cell Carcinoma

    PubMed Central

    Sass, Steffen; Pitea, Adriana; Unger, Kristian; Hess, Julia; Mueller, Nikola S.; Theis, Fabian J.

    2015-01-01

    MicroRNAs represent ~22 nt long endogenous small RNA molecules that have been experimentally shown to regulate gene expression post-transcriptionally. One main interest in miRNA research is the investigation of their functional roles, which can typically be accomplished by identification of mi-/mRNA interactions and functional annotation of target gene sets. We here present a novel method, "miRlastic", which infers miRNA-target interactions using transcriptomic data as well as prior knowledge, and performs functional annotation of target genes by exploiting the local structure of the inferred network. For the network inference, we applied linear regression modeling with elastic net regularization on matched microRNA and messenger RNA expression profiling data to perform feature selection on prior knowledge from sequence-based target prediction resources. The novelty of miRlastic inference originates in predicting data-driven intra-transcriptome regulatory relationships through feature selection. With synthetic data, we showed that miRlastic outperformed commonly used methods and was suitable even for low sample sizes. To gain insight into the functional role of miRNAs and to determine joint functional properties of miRNA clusters, we introduced a local enrichment analysis procedure. The principle of this procedure lies in identifying regions of high functional similarity by evaluating the shortest paths between genes in the network. We can finally assign functional roles to the miRNAs by taking their regulatory relationships into account. We thoroughly evaluated miRlastic on a cohort of head and neck cancer (HNSCC) patients provided by The Cancer Genome Atlas. We inferred an mi-/mRNA regulatory network for human papilloma virus (HPV)-associated miRNAs in HNSCC. The resulting network was best enriched for experimentally validated miRNA-target interactions when compared to common methods. Finally, the local enrichment step identified two functional clusters of mi
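
    The core inference step described above, elastic-net regression of each mRNA on prior-permitted miRNAs, can be sketched with scikit-learn as below. The array names, the binary prior mask, and the fixed regularization settings are illustrative; miRlastic itself tunes these and post-processes the coefficients.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

def infer_mirna_targets(mirna_expr, mrna_expr, prior_mask,
                        alpha=0.1, l1_ratio=0.5):
    """For each mRNA, regress its expression on the expression of miRNAs
    allowed by sequence-based priors; nonzero coefficients suggest regulation.

    mirna_expr: (n_samples, n_mirna), mrna_expr: (n_samples, n_mrna),
    prior_mask: (n_mrna, n_mirna) boolean matrix of candidate interactions.
    """
    n_mrna, n_mirna = prior_mask.shape
    coefs = np.zeros((n_mrna, n_mirna))
    for g in range(n_mrna):
        idx = np.where(prior_mask[g])[0]          # candidate miRNAs from priors
        if idx.size == 0:
            continue
        model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio)
        model.fit(mirna_expr[:, idx], mrna_expr[:, g])
        coefs[g, idx] = model.coef_               # feature-selected relationships
    return coefs
```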

  6. Functional characterization of somatic mutations in cancer using network-based inference of protein activity | Office of Cancer Genomics

    Cancer.gov

    Identifying the multiple dysregulated oncoproteins that contribute to tumorigenesis in a given patient is crucial for developing personalized treatment plans. However, accurate inference of aberrant protein activity in biological samples is still challenging as genetic alterations are only partially predictive and direct measurements of protein activity are generally not feasible.

  7. A hybrid fault diagnosis method based on second generation wavelet de-noising and local mean decomposition for rotating machinery.

    PubMed

    Liu, Zhiwen; He, Zhengjia; Guo, Wei; Tang, Zhangchun

    2016-03-01

    In order to extract the fault features of large-scale power equipment from strong background noise, a hybrid fault diagnosis method based on second generation wavelet de-noising (SGWD) and local mean decomposition (LMD) is proposed in this paper. In this method, a second generation wavelet transform (SGWT) de-noising algorithm using neighboring coefficients is employed as a pretreatment to remove noise from rotating machinery vibration signals, by virtue of its good effect in enhancing the signal-to-noise ratio (SNR). Then, the LMD method is used to decompose the de-noised signals into several product functions (PFs). The PF corresponding to the faulty feature signal is selected according to a correlation-coefficient criterion. Finally, the frequency spectrum is analyzed by applying the FFT to the selected PF. The proposed method is applied to analyze vibration signals collected from an experimental gearbox and a real locomotive rolling bearing. The results demonstrate that the proposed method achieves better performance, such as a higher SNR and faster convergence, than the normal LMD method. PMID:26753616
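
    The PF-selection rule is simple enough to state in a few lines: given the product functions from LMD and the de-noised signal, keep the PF with the largest absolute correlation coefficient. The sketch below assumes the PFs are stacked in a 2-D array; the helper name is illustrative.

```python
import numpy as np

def select_pf(pfs, reference):
    """Pick the product function most correlated with the de-noised signal.

    pfs: (n_pf, n_samples) array of product functions from LMD.
    reference: the de-noised signal, length n_samples.
    """
    corrs = [abs(np.corrcoef(pf, reference)[0, 1]) for pf in pfs]
    best = int(np.argmax(corrs))                  # correlation-coefficient criterion
    return pfs[best], corrs
```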

  8. Customized maximal-overlap multiwavelet denoising with data-driven group threshold for condition monitoring of rolling mill drivetrain

    NASA Astrophysics Data System (ADS)

    Chen, Jinglong; Wan, Zhiguo; Pan, Jun; Zi, Yanyang; Wang, Yu; Chen, Binqiang; Sun, Hailiang; Yuan, Jing; He, Zhengjia

    2016-02-01

    Timely fault identification of a rolling mill drivetrain is significant for guaranteeing product quality and realizing long-term safe operation, so a condition monitoring system for the rolling mill drivetrain is designed and developed. However, because compound-fault and weak-fault feature information is usually submerged in heavy background noise, this task remains challenging. This paper enables fault identification of rolling mill drivetrains by proposing a customized maximal-overlap multiwavelet denoising method. The effectiveness of a wavelet denoising method mainly relies on the appropriate selection of the wavelet basis, the transform strategy, and the threshold rule. First, in order to realize exact matching and accurate detection of fault features, a customized multiwavelet basis function is constructed via a symmetric lifting scheme, and the vibration signal is then processed by the maximal-overlap multiwavelet transform. Next, based on the spatial dependency of multiwavelet transform coefficients, a spatial neighboring-coefficient, data-driven group-threshold shrinkage strategy is developed for the denoising process, choosing the optimal group length and threshold via the minimum of Stein's Unbiased Risk Estimate. The effectiveness of the proposed method is first demonstrated through compound fault identification of the reduction gearbox on a rolling mill. It is then applied to weak fault identification of a dedusting-fan bearing on a rolling mill, and the results support its feasibility.
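
    The threshold-selection idea can be illustrated with the classical SURE rule for soft thresholding, sketched below for a single coefficient array; the paper extends this to a grouped, data-driven neighborhood threshold, which is not reproduced here.

```python
import numpy as np

def sure_soft_threshold(coeffs, sigma=1.0):
    """Choose a soft threshold by minimizing Stein's Unbiased Risk Estimate.

    For unit-variance noise, SURE(t) = n - 2*#{|x| <= t} + sum(min(|x|, t)^2).
    """
    x = np.abs(np.asarray(coeffs, dtype=float).ravel()) / sigma
    n = x.size
    candidates = np.sort(x)
    risks = [n - 2 * np.sum(x <= t) + np.sum(np.minimum(x, t) ** 2)
             for t in candidates]
    t_best = candidates[int(np.argmin(risks))] * sigma
    # apply the selected soft threshold
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t_best, 0.0)
```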

  9. A de-noising algorithm to improve SNR of segmented gamma scanner for spectrum analysis

    NASA Astrophysics Data System (ADS)

    Li, Huailiang; Tuo, Xianguo; Shi, Rui; Zhang, Jinzhao; Henderson, Mark Julian; Courtois, Jérémie; Yan, Minhao

    2016-05-01

    An improved-threshold, shift-invariant wavelet transform de-noising algorithm for high-resolution gamma-ray spectroscopy is proposed to optimize the threshold function of the wavelet transform and reduce the pseudo-Gibbs artificial fluctuations in the de-noised signal. The algorithm was applied to a segmented gamma scanning system for large samples, in which high continuum levels caused by Compton scattering are routinely encountered. De-noising of gamma-ray spectra measured by the segmented gamma scanning system was evaluated for the improved, shift-invariant, and traditional wavelet transform algorithms. The improved wavelet transform method significantly enhanced the figure of merit, the root mean square error, the peak area, and the sample attenuation correction in the segmented gamma scanning assays. Spectrum analysis also showed that the gamma energy spectrum can be viewed as a superposition of a low-frequency signal and high-frequency noise, and that a smoothed spectrum is appropriate for straightforward automated quantitative analysis.
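
    Shift invariance is what suppresses the pseudo-Gibbs fluctuations; a compact way to approximate it is cycle spinning, sketched below with PyWavelets: denoise several circularly shifted copies of the spectrum and average the unshifted results. The paper's improved threshold function is replaced here by plain soft thresholding, and the wavelet, level, and shift count are illustrative.

```python
import numpy as np
import pywt

def shift_invariant_denoise(spectrum, wavelet='sym8', level=5, shifts=8):
    """Cycle-spinning wavelet de-noising of a 1-D spectrum."""
    x = np.asarray(spectrum, dtype=float)
    n = x.size
    finest = pywt.wavedec(x, wavelet, level=level)[-1]
    sigma = np.median(np.abs(finest)) / 0.6745      # robust noise estimate
    t = sigma * np.sqrt(2.0 * np.log(n))            # universal threshold
    acc = np.zeros(n)
    for s in range(shifts):
        c = pywt.wavedec(np.roll(x, s), wavelet, level=level)
        c = [c[0]] + [pywt.threshold(d, t, 'soft') for d in c[1:]]
        rec = pywt.waverec(c, wavelet)[:n]
        acc += np.roll(rec, -s)                     # undo the shift, accumulate
    return acc / shifts
```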

  10. Shearlet-based total variation diffusion for denoising.

    PubMed

    Easley, Glenn R; Labate, Demetrio; Colonna, Flavia

    2009-02-01

    We propose a shearlet formulation of the total variation (TV) method for denoising images. Shearlets have been mathematically proven to represent distributed discontinuities such as edges better than traditional wavelets and are a suitable tool for edge characterization. Common approaches in combining wavelet-like representations such as curvelets with TV or diffusion methods aim at reducing Gibbs-type artifacts after obtaining a nearly optimal estimate. We show that it is possible to obtain much better estimates from a shearlet representation by constraining the residual coefficients using a projected adaptive total variation scheme in the shearlet domain. We also analyze the performance of a shearlet-based diffusion method. Numerical examples demonstrate that these schemes are highly effective at denoising complex images and outperform a related method based on the use of the curvelet transform. Furthermore, the shearlet-TV scheme requires far fewer iterations than similar competitors. PMID:19095539

  11. Statistical denoising of signals in the S-transform domain

    NASA Astrophysics Data System (ADS)

    Weishi, Man; Jinghuai, Gao

    2009-06-01

    In this paper, the suppression of stochastic noise in the S-transform (ST) and generalized S-transform (GST) domains is discussed. First, the mean power spectrum (MPS) of white noise is derived in the ST and GST domains. The results show that the MPS varies linearly with frequency in the ST and GST domains (with a Gaussian window). Second, the local power spectrum (LPS) of red noise is studied by employing the Monte Carlo method in the two domains. The results suggest that the LPS of Gaussian red noise, suitably normalized, follows a chi-square distribution with two degrees of freedom. On the basis of the difference between the LPS distributions of signal and noise, a denoising method based on hypothesis testing is presented. The effectiveness of the method is confirmed by tests on synthetic seismic data and a chirp signal.

  12. Examining Alternatives to Wavelet Denoising for Astronomical Source Finding

    NASA Astrophysics Data System (ADS)

    Jurek, R.; Brown, S.

    2012-08-01

    The Square Kilometre Array and its pathfinders ASKAP and MeerKAT will produce prodigious amounts of data that necessitate automated source finding. The performance of automated source finders can be improved by pre-processing a dataset. In preparation for the WALLABY and DINGO surveys, we have used a test HI datacube constructed from actual Westerbork Telescope noise and WHISP HI galaxies to test the real-world improvement that linear smoothing, the Duchamp source finder's wavelet denoising, iterative median smoothing, and mathematical morphology subtraction provide for intensity-threshold source finding on spectral-line datasets. To compare these pre-processing methods we have generated completeness-reliability performance curves for each method across a range of input parameters. We find that iterative median smoothing produces the best source-finding results for ASKAP HI spectral-line observations, but wavelet denoising is a safer pre-processing technique. In this paper we also present our implementations of iterative median smoothing and mathematical morphology subtraction.
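
    Iterative median smoothing is straightforward to reproduce, for example with SciPy as below; the kernel size, iteration count, and stopping rule are illustrative rather than the exact settings tuned for the WALLABY and DINGO tests.

```python
import numpy as np
from scipy.ndimage import median_filter

def iterative_median_smooth(cube, size=3, n_iter=3):
    """Repeatedly median-filter a (spectral-line) data cube, stopping early
    if the result stabilizes."""
    out = cube.astype(float)
    for _ in range(n_iter):
        new = median_filter(out, size=size)
        if np.allclose(new, out):                 # converged: stop iterating
            break
        out = new
    return out
```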

  13. Wavelet-based ultrasound image denoising: performance analysis and comparison.

    PubMed

    Rizi, F Yousefi; Noubari, H Ahmadi; Setarehdan, S K

    2011-01-01

    Ultrasound images are generally affected by multiplicative speckle noise, which is mainly due to the coherent nature of the scattering phenomenon. Speckle noise filtering is thus a critical pre-processing step in medical ultrasound imaging, provided that the diagnostic features of interest are not lost. A comparative study of the performance of alternative wavelet-based ultrasound image denoising methods is presented in this article. In particular, contourlet and curvelet techniques, together with dual-tree complex, real, and double-density wavelet transform denoising methods, were applied to real ultrasound images and the results were quantitatively compared. The results show that the curvelet-based method performs better than the other methods and can effectively reduce most of the speckle noise content of a given image. PMID:22255196

  14. Comparative study of wavelet denoising in myoelectric control applications.

    PubMed

    Sharma, Tanu; Veer, Karan

    2016-04-01

    Here, wavelet analysis is investigated to improve the quality of the myoelectric signal before its use in prosthetic design. Effective surface electromyogram (SEMG) signals were estimated by first decomposing the obtained signal using the wavelet transform and then analysing the decomposed coefficients with threshold methods. With an appropriate choice of wavelet, it is possible to reduce interference noise effectively in the SEMG signal. The most effective wavelet for SEMG denoising is chosen by calculating the root mean square value and the signal power. The combined root mean square and signal power results show that wavelet db4 performs the best denoising among the wavelets considered. Furthermore, time domain and frequency domain methods were applied to the SEMG signal to investigate the effect of muscle-force contraction on the signal. It was found that, during sustained contractions, the mean frequency (MNF) and median frequency (MDF) increase as muscle force levels increase. PMID:26887581
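
    The two spectral descriptors tracked in the force-contraction analysis are easy to compute from a periodogram, as sketched below; the sampling rate and segment handling are illustrative.

```python
import numpy as np
from scipy.signal import periodogram

def mnf_mdf(semg, fs=1000.0):
    """Mean frequency (MNF) and median frequency (MDF) of an SEMG segment."""
    f, p = periodogram(semg, fs=fs)
    mnf = np.sum(f * p) / np.sum(p)               # spectral centroid
    cum = np.cumsum(p)
    mdf = f[np.searchsorted(cum, cum[-1] / 2.0)]  # frequency splitting power in half
    return mnf, mdf
```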

  15. Undecimated Wavelet Transforms for Image De-noising

    SciTech Connect

    Gyaourova, A; Kamath, C; Fodor, I K

    2002-11-19

    A few different approaches exist for computing the undecimated wavelet transform. In this work we construct three undecimated schemes and evaluate their performance for image noise reduction. We use standard wavelet-based de-noising techniques and compare the performance of our algorithms with the original undecimated wavelet transform, as well as with the decimated wavelet transform. Our experiments show that the proposed algorithms achieve a better noise-removal/blurring ratio.

  16. Spatio-Temporal Multiscale Denoising of Fluoroscopic Sequence.

    PubMed

    Amiot, Carole; Girard, Catherine; Chanussot, Jocelyn; Pescatore, Jeremie; Desvignes, Michel

    2016-06-01

    In the past 20 years, a wide range of complex fluoroscopically guided procedures have shown considerable growth. Biologic effects of the exposure (radiation-induced burns, cancer) motivate reducing the dose during the intervention, for the safety of patients and medical staff. However, when the dose is reduced, image quality decreases, with a high level of noise and very low contrast. Efficient restoration and denoising algorithms should overcome this drawback. We propose a spatio-temporal filter operating in a multiscale space. This filter relies on first-order, motion-compensated, recursive temporal denoising. Temporal high-frequency content is first detected and then matched over time to allow strong denoising along the temporal axis. We study this filter in the curvelet domain and in the dual-tree complex wavelet domain, and compare the results to state-of-the-art methods. Quantitative and qualitative analysis on both synthetic and real fluoroscopic sequences demonstrates that the proposed filter enables a substantial dose reduction. PMID:26812705
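
    Stripped of the multiscale decomposition and motion compensation, the temporal backbone of such a filter is a first-order recursive average, as sketched below; the blending weight alpha is illustrative.

```python
import numpy as np

def recursive_temporal_denoise(frames, alpha=0.25):
    """First-order recursive temporal filter over a (T, H, W) sequence:
    y_t = alpha * x_t + (1 - alpha) * y_{t-1}."""
    out = np.empty_like(frames, dtype=float)
    out[0] = frames[0]
    for t in range(1, len(frames)):
        out[t] = alpha * frames[t] + (1.0 - alpha) * out[t - 1]
    return out
```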

  17. Comparison of de-noising techniques for FIRST images

    SciTech Connect

    Fodor, I K; Kamath, C

    2001-01-22

    Data obtained through scientific observations are often contaminated by noise and artifacts from various sources. As a result, a first step in mining these data is to isolate the signal of interest by minimizing the effects of the contaminations. Once the data have been cleaned or de-noised, data mining can proceed as usual. In this paper, we describe our work in denoising astronomical images from the Faint Images of the Radio Sky at Twenty-Centimeters (FIRST) survey. We are mining this survey to detect radio-emitting galaxies with a bent-double morphology. This task is made difficult by the noise in the images caused by the processing of the sensor data. We compare three different approaches to de-noising: thresholding of wavelet coefficients advocated in the statistical community, traditional filtering methods used in the image processing community, and a simple thresholding scheme proposed by FIRST astronomers. While each approach has its merits and pitfalls, we found that for our purpose the simple thresholding scheme worked relatively well for the FIRST dataset.

  18. Denoising Sparse Images from GRAPPA using the Nullspace Method (DESIGN)

    PubMed Central

    Weller, Daniel S.; Polimeni, Jonathan R.; Grady, Leo; Wald, Lawrence L.; Adalsteinsson, Elfar; Goyal, Vivek K

    2011-01-01

    To accelerate magnetic resonance imaging using uniformly undersampled (nonrandom) parallel imaging beyond what is achievable with GRAPPA alone, the Denoising of Sparse Images from GRAPPA using the Nullspace method (DESIGN) is developed. The trade-off between denoising and smoothing the GRAPPA solution is studied for different levels of acceleration. Several brain images reconstructed from uniformly undersampled k-space data using DESIGN are compared against reconstructions using existing methods in terms of difference images (a qualitative measure), PSNR, and noise amplification (g-factors) as measured using the pseudo-multiple replica method. Effects of smoothing, including contrast loss, are studied in synthetic phantom data. In the experiments presented, the contrast loss and spatial resolution are competitive with existing methods. Results for several brain images demonstrate significant improvements over GRAPPA at high acceleration factors in denoising performance with limited blurring or smoothing artifacts. In addition, the measured g-factors suggest that DESIGN mitigates noise amplification better than both GRAPPA and L1 SPIR-iT (the latter limited here by uniform undersampling). PMID:22213069

  19. Biases in the inferred mass-to-light ratio of globular clusters: no need for variations in the stellar mass function

    NASA Astrophysics Data System (ADS)

    Shanahan, Rosemary L.; Gieles, Mark

    2015-03-01

    From a study of the integrated light properties of 200 globular clusters (GCs) in M31, Strader et al. found that the mass-to-light ratios are lower than what is expected from simple stellar population models with a `canonical' stellar initial mass function (IMF), with the discrepancy being larger at high metallicities. We use dynamical multimass models, that include a prescription for equipartition, to quantify the bias in the inferred dynamical mass as the result of the assumption that light follows mass. For a universal IMF and a metallicity-dependent present-day mass function, we find that the inferred mass from integrated light properties systematically underestimates the true mass, and that the bias is more important at high metallicities, as was found for the M31 GCs. We show that mass segregation and a flattening of the mass function have opposing effects of similar magnitude on the mass inferred from integrated properties. This makes the mass-to-light ratio as derived from integrated properties an inadequate probe of the low-mass end of the stellar mass function. There is, therefore, no need for variations in the IMF, nor the need to invoke depletion of low-mass stars, to explain the observations. Finally, we find that the retention fraction of stellar-mass black holes (BHs) is an equally important parameter in understanding the mass segregation bias. We speculatively put forward the idea that kinematical data of GCs can in fact be used to constrain the total mass in stellar-mass BHs in GCs.

  20. Adaptive nonlocal means filtering based on local noise level for CT denoising

    SciTech Connect

    Li, Zhoubo; Trzasko, Joshua D.; Lake, David S.; Blezek, Daniel J.; Manduca, Armando; Yu, Lifeng; Fletcher, Joel G.; McCollough, Cynthia H.

    2014-01-15

    Purpose: To develop and evaluate an image-domain noise reduction method based on a modified nonlocal means (NLM) algorithm that is adaptive to the local noise level of CT images, and to implement this method in a time frame consistent with clinical workflow. Methods: A computationally efficient technique for local noise estimation directly from CT images was developed. A forward projection, based on a 2D fan-beam approximation, was used to generate the projection data, with a noise model incorporating the effects of the bowtie filter and automatic exposure control. The noise propagation from projection data to images was analytically derived. The analytical noise map was validated using repeated scans of a phantom. A 3D NLM denoising algorithm was modified to adapt its denoising strength locally based on this noise map. The performance of this adaptive NLM filter was evaluated in phantom studies in terms of in-plane and cross-plane high-contrast spatial resolution, noise power spectrum (NPS), subjective low-contrast spatial resolution using the American College of Radiology (ACR) accreditation phantom, and objective low-contrast spatial resolution using a channelized Hotelling model observer (CHO). A graphics processing unit (GPU) implementation of the noise map calculation and the adaptive NLM filtering was developed to meet the demands of clinical workflow. Adaptive NLM was piloted on lower-dose scans in clinical practice. Results: The local noise level estimate matches the noise distribution determined from multiple repeated scans of a phantom, as demonstrated by small variations in the ratio map between the analytical noise map and the one calculated from repeated scans. The phantom studies demonstrated that the adaptive NLM filter can reduce noise substantially without degrading the high-contrast spatial resolution, as illustrated by modulation transfer function and slice sensitivity profile results. The NPS results show that adaptive NLM denoising preserves the
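
    A coarse, tile-wise approximation of noise-adaptive NLM can be put together from scikit-image, as below: the filtering strength in each tile is set from the local noise estimate. The per-tile adaptation, the tile size, and the h = 1.15*sigma rule of thumb are simplifications; the paper adapts the strength from an analytically derived per-location noise map.

```python
import numpy as np
from skimage.restoration import denoise_nl_means

def adaptive_nlm(img, noise_map, tile=64):
    """Tile-wise NLM whose strength follows a precomputed local noise map."""
    out = img.astype(float).copy()
    H, W = img.shape
    for i in range(0, H, tile):
        for j in range(0, W, tile):
            sl = (slice(i, min(i + tile, H)), slice(j, min(j + tile, W)))
            sigma = float(np.mean(noise_map[sl]))   # local noise level
            out[sl] = denoise_nl_means(img[sl], h=1.15 * sigma, sigma=sigma,
                                       patch_size=5, patch_distance=6)
    return out
```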

  1. Denoised and texture enhanced MVCT to improve soft tissue conspicuity

    SciTech Connect

    Sheng, Ke Qi, Sharon X.; Gou, Shuiping; Wu, Jiaolong

    2014-10-15

    Purpose: MVCT images have been used in TomoTherapy treatment to align patients based on bony anatomy, but their usefulness for soft tissue registration, delineation, and adaptive radiation therapy is limited due to insignificant photoelectric interaction components and the presence of noise resulting from the low detector quantum efficiency of megavoltage x-rays. Algebraic reconstruction with sparsity regularizers and local denoising methods have not significantly improved soft tissue conspicuity. The authors aim to utilize a nonlocal means denoising method and texture enhancement to recover the soft tissue information in MVCT (DeTECT). Methods: A block matching 3D (BM3D) algorithm was adapted to reduce the noise while keeping the texture information of the MVCT images. Following image denoising, a saliency map was created to further enhance the visual conspicuity of low-contrast structures. In this study, BM3D and saliency maps were applied to MVCT images of a CT imaging quality phantom, a head and neck patient, and four prostate patients. Following these steps, the contrast-to-noise ratios (CNRs) were quantified. Results: By applying BM3D denoising and the saliency map, postprocessed MVCT images show remarkable improvements in image contrast without compromising resolution. For the head and neck patient, the difficult-to-see lymph nodes and vein in the carotid space in the original MVCT image became conspicuous in DeTECT. For the prostate patients, the ambiguous boundary between the bladder and the prostate in the original MVCT was clarified. The CNRs of phantom low-contrast inserts were improved from 1.48 and 3.8 to 13.67 and 16.17, respectively. The CNRs of two regions-of-interest were improved from 1.5 and 3.17 to 3.14 and 15.76, respectively, for the head and neck patient. DeTECT also increased the CNR of the prostate from 0.13 to 1.46 for the four prostate patients. The results are substantially better than a local denoising method using anisotropic diffusion.

  2. Comparative analysis on some spatial-domain filters for fringe pattern denoising.

    PubMed

    Wang, Haixia; Kemao, Qian

    2011-04-20

    Fringe patterns produced by various optical interferometric techniques encode information such as shape, deformation, and refractive index. Noise affects further processing of the fringe patterns. Denoising is often needed before fringe pattern demodulation. Filtering along the fringe orientation is an effective option. Such filters include coherence enhancing diffusion, spin filtering with curve windows, second-order oriented partial-differential equations, and the regularized quadratic cost function for oriented fringe pattern filtering. These filters are analyzed to establish the relationships among them. Theoretical analysis shows that the four filters are largely equivalent to each other. Quantitative results are given on simulated fringe patterns to validate the theoretical analysis and to compare the performance of these filters. PMID:21509060

  3. Blind Deblurring and Denoising of Images Corrupted by Unidirectional Object Motion Blur and Sensor Noise.

    PubMed

    Zhang, Yi; Hirakawa, Keigo

    2016-09-01

    Low-light photography suffers from blur and noise. In this paper, we propose a novel method to recover a dense estimate of the spatially varying blur kernel as well as a denoised and deblurred image from a single noisy, object-motion-blurred image. The proposed method takes advantage of the sparse representation of the double discrete wavelet transform (a generative model of image blur that simplifies the wavelet analysis of a blurred image) and of a Bayesian perspective that models the prior distribution of the latent sharp wavelet coefficients and a likelihood function that makes the noise handling explicit. We demonstrate the effectiveness of the proposed method on moderately noisy and severely blurred images using simulated and real camera data. PMID:27337717

  4. Blind source separation based x-ray image denoising from an image sequence

    NASA Astrophysics Data System (ADS)

    Yu, Chun-Yu; Li, Yan; Fei, Bin; Li, Wei-Liang

    2015-09-01

    Blind source separation (BSS) based x-ray image denoising from an image sequence is proposed. Without prior knowledge, the useful image signal can be separated from an x-ray image sequence, since the original images are assumed to be different combinations of a stable image signal and random image noise. BSS algorithms such as fixed-point independent component analysis and second-order statistics singular value decomposition are used and compared with multi-frame averaging, a common algorithm for improving an image's signal-to-noise ratio (SNR). Denoising performance is evaluated in terms of SNR, standard deviation, entropy, and runtime. The analysis indicates that BSS is applicable to image denoising: the denoised image's quality improves as more frames are included in the x-ray image sequence, but at greater computational cost, so a trade-off between denoising performance and runtime governs the number of frames included in the sequence.
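
    With scikit-learn, the separation step can be sketched as below: the frames are treated as mixtures observed at every pixel, FastICA unmixes them, and the component most correlated with the frame average is taken as the stable image signal. Component scale and sign are ambiguous in ICA, so the returned image would need rescaling for display; the names and the selection rule are illustrative.

```python
import numpy as np
from sklearn.decomposition import FastICA

def bss_denoise_sequence(frames, n_components=2):
    """Separate a stable image signal from an (n_frames, H, W) sequence."""
    n, h, w = frames.shape
    X = frames.reshape(n, -1).astype(float)
    ica = FastICA(n_components=n_components)      # n_components <= n_frames
    S = ica.fit_transform(X.T)                    # (pixels, components)
    mean_img = X.mean(axis=0)                     # multi-frame average reference
    corrs = [abs(np.corrcoef(S[:, k], mean_img)[0, 1])
             for k in range(S.shape[1])]
    return S[:, int(np.argmax(corrs))].reshape(h, w)
```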

  5. Nonlocal two dimensional denoising of frequency specific chirp evoked ABR single trials.

    PubMed

    Schubert, J Kristof; Teuber, Tanja; Steidl, Gabriele; Strauss, Daniel J; Corona-Strauss, Farah I

    2012-01-01

    Recently, we have shown that denoising evoked potential (EP) images is possible using two-dimensional diffusion filtering methods. This restoration allows regularities over multiple stimulations to be integrated into the denoising process. In the present work we propose the nonlocal means (NLM) method for EP image denoising. The EP images were constructed using auditory brainstem responses (ABR) collected in young healthy subjects using frequency-specific and broadband chirp stimulations. It is concluded that the NLM method is more efficient than conventional approaches in EP image denoising, especially in the case of ABRs, where the relevant information can easily be masked by ongoing EEG activity, i.e., the signals suffer from a rather low signal-to-noise ratio (SNR). The proposed approach is intended for a posteriori denoising of single trials after the experiment, not for real-time applications. PMID:23366439

  6. A New Method for Nonlocal Means Image Denoising Using Multiple Images

    PubMed Central

    Wang, Xingzheng; Wang, Haoqian; Yang, Jiangfeng; Zhang, Yongbing

    2016-01-01

    The basic principle of nonlocal means is to denoise a pixel using the weighted average of the neighbourhood pixels, where the weight is decided by the similarity of these pixels. The key issues in the nonlocal means method are how to select similar patches and how to design their weights. This paper makes two main contributions. The first is that we use two images to denoise each pixel: the two noisy images have the same noise deviation and, instead of using only one image, we calculate the weights from both. After the first denoising process, we obtain a pre-denoised image and a residual image. The second contribution is combining the nonlocal property between the residual image and the pre-denoised image. The improved nonlocal means method pays more attention to similarity than the original one, which turns out to be very effective in eliminating Gaussian noise. Experimental results with simulated data are provided. PMID:27459293

  7. Phase-aware candidate selection for time-of-flight depth map denoising

    NASA Astrophysics Data System (ADS)

    Hach, Thomas; Seybold, Tamara; Böttcher, Hendrik

    2015-03-01

    This paper presents a new pre-processing algorithm for Time-of-Flight (TOF) depth map denoising. Typically, denoising algorithms use the raw depth map as it comes from the sensor, and systematic artifacts due to the measurement principle are not taken into account, which degrades the denoising results. For phase-measurement TOF sensing, a major artifact is observed as salt-and-pepper noise caused by the measurement's ambiguity. Our pre-processing algorithm is able to isolate and unwrap affected pixels by exploiting the physical behavior of the capturing system, yielding Gaussian noise. Applying this pre-processing before the denoising step clearly improves the parameter estimation for the denoising filter together with its final results.

  8. The performance and reliability of wavelet denoising for Doppler ultrasound fetal heart rate signal preprocessing.

    PubMed

    Papadimitriou, S; Papadopoulos, V; Gatzounas, D; Tzigounis, V; Bezerianos, A

    1997-01-01

    The present paper deals with the performance and reliability of a wavelet denoising method for Doppler ultrasound fetal heart rate (FHR) recordings. It provides strong evidence that the denoising process extracts the actual noise components. The analysis is approached with three methods. First, the power spectrum of the denoised FHR displays more clearly a 1/f^α scaling law, characteristic of fractal time series. Second, rescaled range analysis reveals a Hurst exponent in the range of 0.7-0.8, corresponding to a long-memory persistent process; moreover, the variance of the Hurst exponent across time scales is smaller for the denoised signal. Third, a chaotic attractor reconstructed with the embedding dimension technique becomes evident in the denoised signals, while it is completely obscured in the unfiltered ones. PMID:10179728
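
    For reference, a bare-bones rescaled range (R/S) estimate of the Hurst exponent looks like the sketch below: the R/S statistic is averaged over non-overlapping windows at each scale, and the exponent is the slope of log(R/S) against log(window size). The minimum window size is illustrative, and the series is assumed long enough for at least two scales.

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Hurst exponent via rescaled range (R/S) analysis of a 1-D series."""
    x = np.asarray(x, dtype=float)
    n = x.size
    sizes, rs = [], []
    size = min_chunk
    while size <= n // 2:
        vals = []
        for start in range(0, n - size + 1, size):
            seg = x[start:start + size]
            dev = np.cumsum(seg - seg.mean())     # cumulative deviation
            r = dev.max() - dev.min()             # range
            s = seg.std()                         # scale
            if s > 0:
                vals.append(r / s)
        sizes.append(size)
        rs.append(np.mean(vals))
        size *= 2
    # H is the slope of log(R/S) vs log(window size)
    return np.polyfit(np.log(sizes), np.log(rs), 1)[0]
```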

  9. Denoising the Speaking Brain: Toward a Robust Technique for Correcting Artifact-Contaminated fMRI Data under Severe Motion

    PubMed Central

    Xu, Yisheng; Tong, Yunxia; Liu, Siyuan; Chow, Ho Ming; AbdulSabur, Nuria Y.; Mattay, Govind S.; Braun, Allen R.

    2014-01-01

    A comprehensive set of methods based on spatial independent component analysis (sICA) is presented as a robust technique for artifact removal, applicable to a broad range of functional magnetic resonance imaging (fMRI) experiments that have been plagued by motion-related artifacts. Although the applications of sICA for fMRI denoising have been studied previously, three fundamental elements of this approach had not been established: 1) a mechanistically-based ground truth for component classification; 2) a general framework for evaluating the performance and generalizability of automated classifiers; 3) a reliable method for validating the effectiveness of denoising. Here we perform a thorough investigation of these issues and demonstrate the power of our technique by resolving the problem of severe imaging artifacts associated with continuous overt speech production. As a key methodological feature, a dual-mask sICA method is proposed to isolate a variety of imaging artifacts by directly revealing their extracerebral spatial origins. It also plays an important role in understanding the mechanistic properties of noise components in conjunction with temporal measures of physical or physiological motion. The potential of a spatially-based machine learning classifier and the general criteria for feature selection have both been examined, in order to maximize the performance and generalizability of automated component classification. The effectiveness of denoising is quantitatively validated by comparing the activation maps of fMRI with those of positron emission tomography acquired under the same task conditions. The general applicability of this technique is further demonstrated by the successful reduction of the distance-dependent effect of head motion on resting-state functional connectivity. PMID:25225001

  10. Feature-Preserving Mesh Denoising via Anisotropic Surface Fitting

    PubMed Central

    Yu, Zeyun

    2012-01-01

    We propose in this paper a robust surface mesh denoising method that can effectively remove mesh noise while faithfully preserving sharp features. This method utilizes surface fitting and projection techniques. Sharp features are preserved in the surface fitting algorithm by considering an anisotropic neighborhood of each vertex detected by the normal-weighted distance. In addition, to handle the mesh with a high level of noise, we perform a pre-filtering of surface normals prior to the neighborhood searching. A number of experimental results and comparisons demonstrate the excellent performance of our method in preserving important surface geometries while filtering mesh noise. PMID:22328806

  11. Denoising in digital speckle pattern interferometry using wave atoms.

    PubMed

    Federico, Alejandro; Kaufmann, Guillermo H

    2007-05-15

    We present an effective method for speckle noise removal in digital speckle pattern interferometry, which is based on a wave-atom thresholding technique. Wave atoms are a variant of 2D wavelet packets with a parabolic scaling relation and improve the sparse representation of fringe patterns when compared with traditional expansions. The performance of the denoising method is analyzed by using computer-simulated fringes, and the results are compared with those produced by wavelet and curvelet thresholding techniques. An application of the proposed method to reduce speckle noise in experimental data is also presented. PMID:17440544

  12. Improving Students' Ability to Intuitively Infer Resistance from Magnitude of Current and Potential Difference Information: A Functional Learning Approach

    ERIC Educational Resources Information Center

    Chasseigne, Gerard; Giraudeau, Caroline; Lafon, Peggy; Mullet, Etienne

    2011-01-01

    The study examined the knowledge of the functional relations between potential difference, magnitude of current, and resistance among seventh graders, ninth graders, 11th graders (in technical schools), and college students. It also tested the efficiency of a learning device named "functional learning" derived from cognitive psychology on the…

  13. A method for predicting DCT-based denoising efficiency for grayscale images corrupted by AWGN and additive spatially correlated noise

    NASA Astrophysics Data System (ADS)

    Rubel, Aleksey S.; Lukin, Vladimir V.; Egiazarian, Karen O.

    2015-03-01

    Results of denoising based on the discrete cosine transform are obtained for a wide class of images corrupted by additive noise. Three types of noise are analyzed: additive white Gaussian noise and additive spatially correlated Gaussian noise with middle and high correlation levels. The TID2013 image database and some additional images are used as test images. A conventional DCT filter and BM3D are used as the denoising techniques. Denoising efficiency is described by the PSNR and PSNR-HVS-M metrics. Within the hard-thresholding denoising mechanism, DCT-spectrum coefficient statistics are used to characterize images and, subsequently, the denoising efficiency for them. Denoising-efficiency results are fitted to these statistics and efficient approximations are obtained. It is shown that the obtained approximations provide high accuracy in predicting denoising efficiency.
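
    The kind of DCT-based hard-thresholding filter whose efficiency is being predicted can be sketched as below (non-overlapping 8x8 blocks, SciPy's dctn/idctn). Practical DCT filters use overlapping blocks with aggregation, and the k*sigma threshold here is a conventional stand-in rather than the paper's setting.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_hard_threshold(img, sigma, block=8, k=2.7):
    """Blockwise DCT denoising: zero small-magnitude spectrum coefficients
    (hard thresholding at k*sigma), preserving each block's DC term."""
    H, W = img.shape
    out = img.astype(float).copy()
    for i in range(0, H - block + 1, block):
        for j in range(0, W - block + 1, block):
            c = dctn(out[i:i + block, j:j + block], norm='ortho')
            dc = c[0, 0]
            c[np.abs(c) < k * sigma] = 0.0        # hard threshold
            c[0, 0] = dc                          # keep the block mean
            out[i:i + block, j:j + block] = idctn(c, norm='ortho')
    return out
```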

  14. Experimental and theoretical analysis of wavelet-based denoising filter for echocardiographic images.

    PubMed

    Kang, S C; Hong, S H

    2001-01-01

    One of the most important requirements for diagnostic echocardiographic images is to reduce speckle noise and thereby improve image quality. In this paper we propose a simple and effective filter design for image denoising and contrast enhancement based on a multiscale wavelet denoising method. Wavelet threshold algorithms replace wavelet coefficients with small magnitude by zero and keep or shrink the other coefficients. This is basically a local procedure, since wavelet coefficients characterize the local regularity of a function. We first estimate the distribution of noise within the echocardiographic image and then apply an appropriately fitted wavelet threshold algorithm. A common way of estimating the speckle noise level in coherent imaging is to calculate the mean-to-standard-deviation ratio of the pixel intensity, often termed the Equivalent Number of Looks (ENL), over a uniform image area. Unfortunately, we found this measure not very robust, mainly because of the difficulty of identifying a uniform area in a real image. For this reason, we only use the S/MSE ratio here, which corresponds to the standard SNR in the case of additive noise. We have simulated some echocardiographic images with specialized hardware for real-time application; processing a 512*512 image takes about 1 min. Our experiments show that the optimal threshold level depends on the spectral content of the image. High spectral content tends to inflate the noise standard deviation estimated at the finest level of the DWT. As a result, a lower threshold parameter is required to obtain the optimal S/MSE. The standard WCS theory predicts a threshold that depends on the number of signal samples only. PMID:11604864

  15. Making Inferences: Comprehension of Physical Causality, Intentionality, and Emotions in Discourse by High-Functioning Older Children, Adolescents, and Adults with Autism

    ERIC Educational Resources Information Center

    Bodner, Kimberly E.; Engelhardt, Christopher R.; Minshew, Nancy J.; Williams, Diane L.

    2015-01-01

    Studies investigating inferential reasoning in autism spectrum disorder (ASD) have focused on the ability to make socially-related inferences or inferences more generally. Important variables for intervention planning such as whether inferences depend on physical experiences or the nature of social information have received less consideration. A…

  16. Decoding the Role of the Insula in Human Cognition: Functional Parcellation and Large-Scale Reverse Inference

    PubMed Central

    Yarkoni, Tal; Khaw, Mel Win; Sanfey, Alan G.

    2013-01-01

    Recent work has indicated that the insula may be involved in goal-directed cognition, switching between networks, and the conscious awareness of affect and somatosensation. However, these findings have been limited by the insula’s remarkably high base rate of activation and considerable functional heterogeneity. The present study used a relatively unbiased data-driven approach combining resting-state connectivity-based parcellation of the insula with large-scale meta-analysis to understand how the insula is anatomically organized based on functional connectivity patterns as well as the consistency and specificity of the associated cognitive functions. Our findings support a tripartite subdivision of the insula and reveal that the patterns of functional connectivity in the resting-state analysis appear to be relatively conserved across tasks in the meta-analytic coactivation analysis. The function of the networks was meta-analytically “decoded” using the Neurosynth framework and revealed that while the dorsoanterior insula is more consistently involved in human cognition than ventroanterior and posterior networks, each parcellated network is specifically associated with a distinct function. Collectively, this work suggests that the insula is instrumental in integrating disparate functional systems involved in processing affect, sensory-motor processing, and general cognition and is well suited to provide an interface between feelings, cognition, and action. PMID:22437053

  17. Comparison of f2/f1 ratio functions in rabbit and gerbil: Ear-canal DPOAEs vs noninvasively inferred intracochlear DPs

    NASA Astrophysics Data System (ADS)

    Martin, Glen K.; Stagner, Barden B.; Dong, Wei; Lonsbury-Martin, Brenda L.

    2015-12-01

    The properties of distortion product otoacoustic emissions (DPOAEs), i.e., distortion products (DPs) measured in the ear canal, have been thoroughly described. However, considerably less is known about the behavior of intracochlear DPs (iDPs). Detailed comparisons of DPOAEs to iDPs would provide valuable insights on the extent to which ear-canal DPOAEs mirror iDPs. Prior studies described a technique whereby the behavior of iDPs could be inferred by interacting a probe tone (f3) with the iDP of interest to produce a 'secondary' DPOAE (DPOAE′). The behavior of DPOAE′ was then used to deduce the characteristics of the iDP. In the present study, this method was used in rabbits and gerbils to simultaneously compare DPOAE f2/f1-ratio functions to their iDP counterparts. The 2f1-f2 and 2f2-f1 DPOAEs were collected with f1 and f2 primary-tone levels varied from 35-75 dB SPL, and with a 50-dB SPL f3 placed at a DP/f3 ratio of 1.25 to evoke a DPOAE′ at 2f3-(2f1-f2) or 2f3-(2f2-f1). Control experiments demonstrated little effect of the f3-probe tone on DPOAE-ratio functions. Substitution experiments were performed to determine any suppressive effects of the f1 and f2 primaries on the generation of DPOAE′, as well as to infer the intracochlear level of the iDP once the DPOAE′ was corrected for suppression. Results showed that at low primary-tone levels, 2f1-f2 DPOAE f2/f1-ratio functions peaked around f2/f1=1.25, and exhibited an inverted U-shaped function. In contrast, simultaneously measured 2f1-f2 iDP-ratio functions peaked at f2/f1≈1. Similar growth of the inferred iDP was obtained for higher-level primaries when the ratio functions were corrected for suppressive effects. At these higher levels, DPOAE-ratio functions leveled off and no longer showed the steep reduction at narrow f2/f1 ratios. Overall, noninvasive estimates of 2f1-f2 iDP-ratio functions agreed with reports of similar functions directly measured for 2f1-f2 DPs on the basilar membrane (BM) or in…

  18. Evaluation of Effectiveness of Wavelet Based Denoising Schemes Using ANN and SVM for Bearing Condition Classification

    PubMed Central

    G. S., Vijay; H. S., Kumar; Pai P., Srinivasa; N. S., Sriram; Rao, Raj B. K. N.

    2012-01-01

    Wavelet-based denoising has proven its ability to denoise bearing vibration signals by improving the signal-to-noise ratio (SNR) and reducing the root-mean-square error (RMSE). In this paper, seven wavelet-based denoising schemes are evaluated based on the performance of an Artificial Neural Network (ANN) and a Support Vector Machine (SVM) for bearing condition classification. The work consists of two parts. In the first part, a synthetic signal simulating a defective bearing vibration signal with Gaussian noise was subjected to these denoising schemes, and the best scheme in terms of SNR and RMSE was identified. In the second part, vibration signals collected from a customized Rolling Element Bearing (REB) test rig for four bearing conditions were subjected to the same schemes. Several time- and frequency-domain features were extracted from the denoised signals, from which a few sensitive features were selected using Fisher's Criterion (FC). The extracted features were used to train and test the ANN and the SVM. The best denoising scheme identified by the classification performance of the ANN and the SVM was the same as the one obtained using the synthetic signal. PMID:23213323
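
    A compact version of the first (synthetic-signal) part of this evaluation loop is sketched below; the impulse period, resonance frequency, noise level, and wavelet are invented for illustration and are not taken from the paper.

      import numpy as np
      import pywt

      def snr_db(clean, est):
          return 10.0 * np.log10(np.sum(clean ** 2) / np.sum((clean - est) ** 2))

      rng = np.random.default_rng(0)
      fs, n = 12_000, 4096
      clean = np.zeros(n)
      for k in range(0, n, 400):                  # periodic bearing-defect impacts
          seg = np.arange(n - k) / fs
          clean[k:] += np.exp(-800.0 * seg) * np.sin(2 * np.pi * 3000.0 * seg)
      noisy = clean + 0.3 * rng.standard_normal(n)

      coeffs = pywt.wavedec(noisy, 'db8', level=4)
      sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # MAD noise estimate
      thr = sigma * np.sqrt(2.0 * np.log(n))
      den = pywt.waverec([coeffs[0]] + [pywt.threshold(c, thr, mode='soft')
                                        for c in coeffs[1:]], 'db8')[:n]

      print(f"SNR: {snr_db(clean, noisy):.1f} dB -> {snr_db(clean, den):.1f} dB")
      print(f"RMSE after denoising: {np.sqrt(np.mean((clean - den) ** 2)):.4f}")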

  19. A comparison of de-noising methods for differential phase shift and associated rainfall estimation

    NASA Astrophysics Data System (ADS)

    Hu, Zhiqun; Liu, Liping; Wu, Linlin; Wei, Qing

    2015-04-01

    The measured differential phase shift ΦDP is known to be a noisy, unstable polarimetric radar variable, such that the quality of ΦDP data has a direct impact on the estimation of the specific differential phase KDP and, subsequently, on KDP-based rainfall estimation. Over the past decades, many ΦDP de-noising methods have been developed; however, the de-noising effects of these methods and their impact on KDP-based rainfall estimation lack comprehensive comparative analysis. In this study, simulated noisy ΦDP data were generated and de-noised by several methods, including finite-impulse response (FIR), Kalman, wavelet, traditional mean, and median filters. The biases of KDP derived from simulated and observed ΦDP radial profiles were compared after de-noising by these methods. The results suggest that the more sophisticated FIR, Kalman, and wavelet methods have a better de-noising effect than the traditional filters. After ΦDP was de-noised, the accuracy of KDP-based rainfall estimation increased significantly, based on the analysis of three actual rainfall events. The improvement was most apparent when KDP was estimated from ΦDP de-noised by the Kalman, FIR, and wavelet methods and the average rainfall was heavier than 5 mm h⁻¹. However, the improvement was not significant when the precipitation intensity increased beyond 10 mm h⁻¹. The performance of the wavelet filter was found to be the most stable of these filters.
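
    The filters compared in the study are all standard; a small sketch of three of them on a simulated ΦDP radial profile follows, with KDP recovered as half the range derivative. The range grid, noise level, and filter settings are assumptions made for illustration.

      import numpy as np
      from scipy.signal import firwin, filtfilt, medfilt

      rng = np.random.default_rng(1)
      r = np.arange(0, 60, 0.25)                               # range gates, km
      phidp_true = 2.0 * np.cumsum(np.maximum(0, np.sin(r / 8.0)) * 0.25)  # deg
      phidp_noisy = phidp_true + rng.normal(0.0, 3.0, r.size)  # ~3 deg phase noise

      smooth_mean = np.convolve(phidp_noisy, np.ones(9) / 9, mode='same')  # mean filter
      smooth_med = medfilt(phidp_noisy, kernel_size=9)                     # median filter
      taps = firwin(21, 0.08)                                  # low-pass FIR design
      smooth_fir = filtfilt(taps, [1.0], phidp_noisy)          # zero-phase FIR filtering

      kdp = 0.5 * np.gradient(smooth_fir, r)                   # KDP = 0.5 * dPhiDP/dr, deg/km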

  20. Multiresolution generalized N dimension PCA for ultrasound image denoising

    PubMed Central

    2014-01-01

    Background Ultrasound images are usually affected by speckle noise, which is a type of random multiplicative noise. Thus, reducing speckle and improving visual image quality are vital to obtaining better diagnoses. Method In this paper, a novel noise reduction method for medical ultrasound images, called multiresolution generalized N-dimension PCA (MR-GND-PCA), is presented. In this method, a Gaussian pyramid and multiscale image stacks at each level are built first. GND-PCA, a multilinear subspace learning method, is used for denoising. The levels are then combined into the final denoised image via a Laplacian pyramid. Results The proposed method was tested with synthetically speckled and real ultrasound images, and quality evaluation metrics, including MSE, SNR and PSNR, were used to evaluate its performance. Conclusion Experimental results show that the proposed method achieved the lowest noise interference and improved image quality by reducing noise and preserving structure. Our method is also robust for images with much higher levels of speckle noise. For clinical images, the results show that MR-GND-PCA can reduce speckle and preserve resolvable details. PMID:25096917
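
    GND-PCA itself is a multilinear method, but the multiresolution scaffolding can be illustrated with an ordinary patch-based PCA standing in for it; everything below (pyramid depth, patch size, variance kept) is an assumption made for the sketch.

      import numpy as np
      from scipy.ndimage import gaussian_filter, zoom

      def pyr_down(img):
          return zoom(gaussian_filter(img, 1.0), 0.5, order=1)

      def pyr_up(img, shape):
          return zoom(img, (shape[0] / img.shape[0], shape[1] / img.shape[1]), order=1)

      def pca_denoise(img, patch=8, keep=0.90):
          """Project patches onto the principal components explaining `keep`
          of the variance (classical PCA as a stand-in for GND-PCA)."""
          h, w = img.shape
          ph, pw = h // patch * patch, w // patch * patch
          X = (img[:ph, :pw]
               .reshape(ph // patch, patch, pw // patch, patch)
               .transpose(0, 2, 1, 3).reshape(-1, patch * patch))
          mu = X.mean(axis=0)
          U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
          k = int(np.searchsorted(np.cumsum(S ** 2) / np.sum(S ** 2), keep)) + 1
          Xd = (U[:, :k] * S[:k]) @ Vt[:k] + mu
          out = img.astype(float).copy()
          out[:ph, :pw] = (Xd.reshape(ph // patch, pw // patch, patch, patch)
                             .transpose(0, 2, 1, 3).reshape(ph, pw))
          return out

      def mr_denoise(img, levels=3):
          gauss = [img.astype(float)]
          for _ in range(levels - 1):
              gauss.append(pyr_down(gauss[-1]))
          den = [pca_denoise(g) for g in gauss]
          out = den[-1]                            # coarse-to-fine recombination
          for lvl in range(levels - 2, -1, -1):
              lap = den[lvl] - pyr_up(gauss[lvl + 1], den[lvl].shape)
              out = lap + pyr_up(out, den[lvl].shape)
          return out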

  1. Noise distribution and denoising of current density images

    PubMed Central

    Beheshti, Mohammadali; Foomany, Farbod H.; Magtibay, Karl; Jaffray, David A.; Krishnan, Sridhar; Nanthakumar, Kumaraswamy; Umapathy, Karthikeyan

    2015-01-01

    Current density imaging (CDI) is a magnetic resonance (MR) imaging technique that can be used to study current pathways inside tissue. The current distribution is measured indirectly as phase changes. The inherent noise in the MR imaging technique degrades the accuracy of the phase measurements, leading to imprecise estimates of the current variations; the outcome can be affected significantly, especially at a low signal-to-noise ratio (SNR). We show that the residual noise distribution of the phase is Gaussian-like and that the noise in CDI images can be approximated as Gaussian, a finding that matches experimental results. We further investigated this finding by performing a comparative analysis with denoising techniques, using two CDI datasets with two different currents (20 and 45 mA). We found that the block-matching and three-dimensional (BM3D) technique outperforms the other techniques when applied to the current density (J). The minimum gain in noise power by BM3D applied to J, compared with the next best technique in the analysis, was found to be around 2 dB per pixel. We characterize the noise profile in CDI images and provide insights on the performance of different denoising techniques when applied at two different stages of current density reconstruction. PMID:26158100
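
    The figure of merit quoted above, gain in noise power between two denoisers, can be computed as follows; the function name and the use of a clean reference image are assumptions made for illustration.

      import numpy as np

      def noise_power_gain_db(reference, estimate_a, estimate_b):
          """Gain in residual noise power of method A over method B, in dB;
          positive values mean method A leaves less noise behind."""
          mse_a = np.mean((reference - estimate_a) ** 2)
          mse_b = np.mean((reference - estimate_b) ** 2)
          return 10.0 * np.log10(mse_b / mse_a)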

  2. GPU-based cone-beam reconstruction using wavelet denoising

    NASA Astrophysics Data System (ADS)

    Jin, Kyungchan; Park, Jungbyung; Park, Jongchul

    2012-03-01

    The scattering-noise artifact that results from low-dose projections in repetitive cone-beam CT (CBCT) scans decreases image quality and lessens the accuracy of diagnosis. To improve the image quality of low-dose CT imaging, statistical filtering is effective for noise reduction; however, applying filtering and enhancement throughout the entire reconstruction process may be challenging because of its high computational cost. The general reconstruction algorithm for CBCT data is filtered back-projection, which for a volume of 512×512×512 takes up to a few minutes on a standard system. To speed up reconstruction, the massively parallel architecture of current graphics processing units (GPUs) is a suitable platform for accelerating the mathematical calculations. In this paper, we focus on accelerating wavelet denoising and Feldkamp-Davis-Kress (FDK) back-projection using parallel processing on the GPU, utilizing the compute unified device architecture (CUDA) platform, and implement CBCT reconstruction based on the CUDA technique. Finally, we evaluate our implementation on clinical tooth datasets. The resulting implementation of wavelet denoising is able to process a 1024×1024 image within 2 ms, excluding the data-loading process, and our GPU-based CBCT implementation reconstructs a 512×512×512 volume from 400 projections in less than 1 minute.
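
    To give a flavor of how the per-projection arithmetic maps onto a GPU array library, here is a NumPy/CuPy sketch of the ramp-filtering step that precedes FDK back-projection; this is not the authors' CUDA kernel, and the CPU fallback is an assumption.

      import numpy as np
      try:
          import cupy as xp                      # GPU path when CuPy is available
      except ImportError:
          xp = np                                # transparent CPU fallback

      def ramp_filter(projections):
          """Apply the |f| ramp filter to each detector row in Fourier space.
          `projections` has shape (n_views, rows, cols)."""
          n = projections.shape[-1]
          ramp = xp.abs(xp.fft.fftfreq(n))
          spec = xp.fft.fft(xp.asarray(projections), axis=-1)
          return xp.real(xp.fft.ifft(spec * ramp, axis=-1))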

  3. Denoising Stimulated Raman Spectroscopic Images by Total Variation Minimization

    PubMed Central

    Liao, Chien-Sheng; Choi, Joon Hee; Zhang, Delong; Chan, Stanley H.; Cheng, Ji-Xin

    2016-01-01

    High-speed coherent Raman scattering imaging is opening a new avenue to unveiling the cellular machinery by visualizing the spatio-temporal dynamics of target molecules or intracellular organelles. By extracting signals from the laser at MHz modulation frequency, current stimulated Raman scattering (SRS) microscopy has reached shot noise limited detection sensitivity. The laser-based local oscillator in SRS microscopy not only generates high levels of signal, but also delivers a large shot noise which degrades image quality and spectral fidelity. Here, we demonstrate a denoising algorithm that removes the noise in both spatial and spectral domains by total variation minimization. The signal-to-noise ratio of SRS spectroscopic images was improved by up to 57 times for diluted dimethyl sulfoxide solutions and by 15 times for biological tissues. Weak Raman peaks of target molecules originally buried in the noise were unraveled. Coupling the denoising algorithm with multivariate curve resolution allowed discrimination of fat stores from protein-rich organelles in C. elegans. Together, our method significantly improved detection sensitivity without frame averaging, which can be useful for in vivo spectroscopic imaging. PMID:26955400
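
    Total-variation denoising of a spectroscopic image stack of this kind is available off the shelf; below is a minimal scikit-image sketch in which the stack shape and the TV weight are placeholders rather than the paper's settings.

      import numpy as np
      from skimage.restoration import denoise_tv_chambolle

      # SRS stack: (rows, cols, wavenumber channels) - placeholder random data
      stack = np.random.rand(128, 128, 50).astype(np.float32)

      # Running 3-D TV over the whole array couples the two spatial axes and the
      # spectral axis, so noise is suppressed in both domains at once.
      denoised = denoise_tv_chambolle(stack, weight=0.1)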

  4. Real-time image denoising algorithm in teleradiology systems

    NASA Astrophysics Data System (ADS)

    Gupta, Pradeep Kumar; Kanhirodan, Rajan

    2006-02-01

    Denoising of medical images in the wavelet domain has potential application in transmission technologies such as teleradiology. This technique becomes all the more attractive when we consider progressive transmission in a teleradiology system. The transmitted images are corrupted mainly by noisy channels. In this paper, we present a new real-time image denoising scheme based on limited restoration of bit-planes of wavelet coefficients. The proposed scheme exploits the fundamental property of the wavelet transform: its ability to analyze an image at different resolution levels, together with the edge information associated with each sub-band. The desired bit-rate control is achieved by applying the restoration to a limited number of bit-planes, subject to optimal smoothing. The proposed method adapts itself to the preference of the medical expert; a single parameter can be used to balance the preservation of (expert-dependent) relevant details against the degree of noise reduction. The scheme relies on the fact that noise commonly manifests itself as fine-grained structure in an image, and the wavelet transform allows the restoration strategy to adapt itself to the directional features of edges. The proposed approach shows promising results in terms of error reduction when compared with the unrestored case. It also has the capability to adapt to situations where the noise level in the image varies, and to the changing requirements of medical experts. The applicability of the proposed approach has implications for the restoration of medical images in teleradiology systems. The proposed scheme is computationally efficient.

  5. HARDI denoising using nonlocal means on S²

    NASA Astrophysics Data System (ADS)

    Kuurstra, Alan; Dolui, Sudipto; Michailovich, Oleg

    2012-02-01

    Diffusion MRI (dMRI) is a unique imaging modality for in vivo delineation of the anatomical structure of white matter in the brain. In particular, high angular resolution diffusion imaging (HARDI) is a specific instance of dMRI which is known to excel in the detection of multiple neural fibers within a single voxel. Unfortunately, the angular resolution of HARDI is known to be inversely proportional to SNR, which makes the problem of denoising HARDI data of particular practical importance. Since HARDI signals are effectively band-limited, denoising can be accomplished by means of linear filtering. However, the spatial dependency of diffusivity in brain tissue makes it impossible to find a single set of linear filter parameters which is optimal for all types of diffusion signals. Hence, adaptive filtering is required. In this paper, we propose a new type of non-local means (NLM) filtering which possesses the required adaptivity property. As opposed to similar methods in the field, however, the proposed NLM filtering is applied in the spherical domain of spatial orientations. Moreover, the filter uses an original definition of adaptive weights, which are designed to be invariant both to spatial rotations and to the particular sampling scheme in use. We provide a detailed description of the proposed filtering procedure and its efficient implementation, as well as experimental results with synthetic data. We demonstrate that our filter has substantially better adaptivity as compared to a number of alternative methods.
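
    The spherical-domain weights are the novelty here and are specific to HARDI, but the underlying non-local means machinery is standard; below is a sketch of conventional image-domain NLM with scikit-image, to fix ideas (all parameters are illustrative).

      import numpy as np
      from skimage.restoration import denoise_nl_means, estimate_sigma

      noisy = np.random.rand(96, 96).astype(np.float32)      # placeholder image
      sigma = float(np.mean(estimate_sigma(noisy)))          # wavelet-based noise estimate

      # NLM averages pixels whose surrounding patches look alike, which is what
      # gives the filter its adaptivity to local structure.
      denoised = denoise_nl_means(noisy, patch_size=5, patch_distance=6,
                                  h=0.8 * sigma, fast_mode=True)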

  6. Explorative Learning and Functional Inferences on a Five-Step Means-Means-End Problem in Goffin’s Cockatoos (Cacatua goffini)

    PubMed Central

    Auersperg, Alice M. I.; Kacelnik, Alex; von Bayern, Auguste M. P.

    2013-01-01

    To investigate cognitive operations underlying sequential problem solving, we confronted ten Goffin’s cockatoos with a baited box locked by five different inter-locking devices. Subjects were either naïve or had watched a conspecific demonstration, and either faced all devices at once or incrementally. One naïve subject solved the problem without demonstration and with all locks present within the first five sessions (each consisting of one trial of up to 20 minutes), while five others did so after social demonstrations or incremental experience. Performance was aided by species-specific traits including neophilia, a haptic modality and persistence. Most birds showed a ratchet-like progress, rarely failing to solve a stage once they had done it once. In most transfer tests subjects reacted flexibly and sensitively to alterations of the locks’ sequencing and functionality, as expected from the presence of predictive inferences about mechanical interactions between the locks. PMID:23844247

  7. System-Level Insights into the Cellular Interactome of a Non-Model Organism: Inferring, Modelling and Analysing Functional Gene Network of Soybean (Glycine max)

    PubMed Central

    Xu, Yungang; Guo, Maozu; Zou, Quan; Liu, Xiaoyan; Wang, Chunyu; Liu, Yang

    2014-01-01

    The cellular interactome, in which genes and/or their products interact on several levels, forming transcriptional regulatory, protein interaction, metabolic, and signal transduction networks, etc., has attracted decades of research attention. However, any single type of network alone can hardly explain the various interactive activities among genes. These networks characterize different interaction relationships, implying their unique intrinsic properties and defects, and covering different slices of biological information. The functional gene network (FGN), a consolidated interaction network that models a fuzzy and more generalized notion of gene-gene relations, has been proposed to combine heterogeneous networks with the goal of identifying functional modules supported by multiple interaction types. There are as yet no successful precedents of FGNs for sparsely studied non-model organisms, such as soybean (Glycine max), owing to the absence of sufficient heterogeneous interaction data. We present an alternative solution for inferring the FGNs of soybean (SoyFGNs), in a pioneering study on the soybean interactome that is also applicable to other organisms. SoyFGNs exhibit the typical characteristics of biological networks: scale-free, small-world architecture and modularization. Verified by co-expression and KEGG pathways, SoyFGNs are more extensive and accurate than an orthology network derived from Arabidopsis. As a case study, network-guided disease-resistance gene discovery indicates that SoyFGNs can support system-level studies of gene functions and interactions. This work suggests that inferring and modelling the interactome of a non-model plant are feasible. It will speed up the discovery and definition of the functions and interactions of other genes that control important functions, such as nitrogen fixation and protein or lipid synthesis. The efforts of the study are the basis of our further comprehensive studies on the soybean functional interactome at the genome…

  8. The application study of wavelet packet transformation in the de-noising of dynamic EEG data.

    PubMed

    Li, Yifeng; Zhang, Lihui; Li, Baohui; Wei, Xiaoyang; Yan, Guiding; Geng, Xichen; Jin, Zhao; Xu, Yan; Wang, Haixia; Liu, Xiaoyan; Lin, Rong; Wang, Quan

    2015-01-01

    This paper briefly describes the basic principle of wavelet packet analysis and, on this basis, introduces the general principle of wavelet packet transformation for signal de-noising. Dynamic EEG data recorded under +Gz acceleration were de-noised using the wavelet packet transform, and the de-noising effects of different thresholds were compared. The study verifies the validity and application value of the wavelet packet threshold method for the de-noising of dynamic EEG data under +Gz acceleration. PMID:26405863
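
    A generic wavelet-packet thresholding of a 1-D signal looks as follows in PyWavelets; the wavelet, depth, and threshold rule are not specified in this record, so the values below are assumptions.

      import numpy as np
      import pywt

      def wp_denoise(sig, wavelet='db4', maxlevel=4):
          """Soft-threshold all terminal wavelet-packet nodes."""
          wp = pywt.WaveletPacket(data=sig, wavelet=wavelet, mode='symmetric',
                                  maxlevel=maxlevel)
          nodes = wp.get_level(maxlevel, order='freq')
          # Heuristic MAD noise estimate from the highest-frequency packet
          sigma = np.median(np.abs(nodes[-1].data)) / 0.6745
          thr = sigma * np.sqrt(2.0 * np.log(len(sig)))
          for node in nodes:
              node.data = pywt.threshold(node.data, thr, mode='soft')
          return wp.reconstruct(update=False)[:len(sig)]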

  9. Application of the dual-tree complex wavelet transform in biomedical signal denoising.

    PubMed

    Wang, Fang; Ji, Zhong

    2014-01-01

    In biomedical signal processing, Gibbs oscillation and severe frequency aliasing may occur when using the traditional discrete wavelet transform (DWT). Herein, a new denoising algorithm based on the dual-tree complex wavelet transform (DTCWT) is presented. Electrocardiogram (ECG) signals and heart sound signals are denoised with the DTCWT. The signal-to-noise ratio (SNR) and the mean square error (MSE) are used to compare the denoising effect, and the results show that the DTCWT is effective. Results of the paired-samples t-test show that the new method removes noise more thoroughly and better retains the boundary and texture of the signal. PMID:24211889
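
    Assuming the third-party dtcwt Python package, the idea can be sketched as soft thresholding of the complex coefficient magnitudes; the near shift invariance of the DTCWT is what suppresses the Gibbs-like artifacts of the critically sampled DWT.

      import numpy as np
      import dtcwt  # pip install dtcwt (assumed third-party package)

      def dtcwt_denoise(sig, nlevels=5):
          """Shrink DTCWT coefficient magnitudes while keeping their phase."""
          t = dtcwt.Transform1d()
          pyr = t.forward(sig, nlevels=nlevels)
          sigma = np.median(np.abs(pyr.highpasses[0])) / 0.6745  # rough MAD estimate
          thr = sigma * np.sqrt(2.0 * np.log(sig.size))
          shrunk = []
          for band in pyr.highpasses:
              mag = np.abs(band)
              gain = np.maximum(mag - thr, 0.0) / np.maximum(mag, 1e-12)
              shrunk.append(band * gain)
          return t.inverse(dtcwt.Pyramid(pyr.lowpass, tuple(shrunk))).ravel()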

  10. Using fMRI non-local means denoising to uncover activation in sub-cortical structures at 1.5 T for guided HARDI tractography

    PubMed Central

    Bernier, Michaël; Chamberland, Maxime; Houde, Jean-Christophe; Descoteaux, Maxime; Whittingstall, Kevin

    2014-01-01

    In recent years, there has been ever-increasing interest in combining functional magnetic resonance imaging (fMRI) and diffusion magnetic resonance imaging (dMRI) for a better understanding of the link between cortical activity and connectivity, respectively. However, it is challenging to detect and validate fMRI activity in key sub-cortical areas such as the thalamus, given that they are prone to susceptibility artifacts due to the partial volume effects (PVE) of surrounding tissues (GM/WM interface). This is especially true on relatively low-field clinical MR systems (e.g., 1.5 T). We propose to overcome this limitation by using a spatial denoising technique used in structural MRI and more recently in diffusion MRI called non-local means (NLM) denoising, which uses a patch-based approach to suppress the noise locally. To test this, we measured fMRI in 20 healthy subjects performing three block-based tasks: eyes-open/closed (EOC) and left/right finger tapping (FTL, FTR). Overall, we found that NLM yielded more thalamic activity compared to traditional denoising methods. In order to validate our pipeline, we also investigated known structural connectivity passing through the thalamus using HARDI tractography: the optic radiations, related to the EOC task, and the cortico-spinal tract (CST) for FTL and FTR. To do so, we reconstructed the tracts using functionally based thalamic and cortical ROIs to initiate tractography seeds in a two-level coarse-to-fine fashion. We applied this method at the single-subject level, which allowed us to see the structural connections underlying fMRI thalamic activity. In summary, we propose a new fMRI processing pipeline which uses a recent spatial denoising technique (NLM) to successfully detect sub-cortical activity, validated using an advanced dMRI seeding strategy in single subjects at 1.5 T. PMID:25309391