Science.gov

Sample records for denoising inferred functional

  1. Point Set Denoising Using Bootstrap-Based Radial Basis Function.

    PubMed

    Liew, Khang Jie; Ramli, Ahmad; Abd Majid, Ahmad

    2016-01-01

This paper examines the application of a bootstrap test error estimation of radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue of the point set model that is generated from 3D scanning devices, and hence, point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, which is applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest neighbour search and then projects the point set to the approximated thin-plate spline surface. Therefore, the denoising process is achieved, and the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study. PMID:27315105
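
    As a concrete illustration of the smoothing step, the sketch below fits a thin-plate spline to a point's k nearest neighbours and projects the point onto it, treating the patch as a height field z(x, y) for simplicity. The paper's bootstrap error estimation is replaced by a plain hold-out search over candidate smoothing parameters; the helper name and all parameters are illustrative, not the authors' implementation.

```python
# Minimal kNN + thin-plate-spline projection sketch (SciPy, scikit-learn).
import numpy as np
from scipy.interpolate import Rbf
from sklearn.neighbors import NearestNeighbors

def denoise_point(points, i, k=30, smooth_candidates=(0.01, 0.1, 1.0)):
    """Project point i of an (n, 3) array onto a thin-plate spline fitted
    to its k nearest neighbours (hypothetical helper)."""
    nbrs = NearestNeighbors(n_neighbors=k).fit(points)
    _, idx = nbrs.kneighbors(points[i:i + 1])
    x, y, z = points[idx[0]].T
    # hold-out search over the smoothing parameter (bootstrap stand-in)
    train, test = np.arange(0, k, 2), np.arange(1, k, 2)
    errs = []
    for s in smooth_candidates:
        f = Rbf(x[train], y[train], z[train], function='thin_plate', smooth=s)
        errs.append(np.mean((f(x[test], y[test]) - z[test]) ** 2))
    best = smooth_candidates[int(np.argmin(errs))]
    f = Rbf(x, y, z, function='thin_plate', smooth=best)
    # keep (x, y), replace the noisy height by the spline value there
    return np.array([points[i, 0], points[i, 1], float(f(points[i, 0], points[i, 1]))])
```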

  3. Voxel-Wise Functional Connectomics Using Arterial Spin Labeling Functional Magnetic Resonance Imaging: The Role of Denoising.

    PubMed

    Liang, Xiaoyun; Connelly, Alan; Calamante, Fernando

    2015-11-01

The objective of this study was to investigate voxel-wise functional connectomics using arterial spin labeling (ASL) functional magnetic resonance imaging (fMRI). Since the ASL signal has an intrinsically low signal-to-noise ratio (SNR), the role of denoising is evaluated; in particular, a novel denoising method, the dual-tree complex wavelet transform (DT-CWT) combined with the nonlocal means (NLM) algorithm, is implemented and evaluated. Simulations were conducted to evaluate the performance of the proposed method in denoising images and in detecting functional networks from noisy data (including the accuracy and sensitivity of detection). In addition, denoising was applied to in vivo ASL datasets, followed by network analysis using graph theoretical approaches. Efficiency cost was used to evaluate the performance of denoising in detecting functional networks from in vivo ASL fMRI data. Simulations showed that denoising is effective in detecting voxel-wise functional networks from low-SNR data and/or from data with a small total number of time points. The capability of denoised voxel-wise functional connectivity analysis was also demonstrated with in vivo data. We concluded that denoising is important for voxel-wise functional connectivity using ASL fMRI and that the proposed DT-CWT-NLM method should be a useful ASL preprocessing step.
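
    A hedged sketch of the DT-CWT plus NLM combination is shown below, using the `dtcwt` and `scikit-image` packages; the level-wise shrinkage rule and all parameters are illustrative stand-ins rather than the authors' settings.

```python
# DT-CWT shrinkage followed by non-local means (illustrative parameters).
import numpy as np
import dtcwt
from skimage.restoration import denoise_nl_means, estimate_sigma

def dtcwt_nlm(frame, nlevels=4, k=1.5):
    t = dtcwt.Transform2d()
    p = t.forward(frame.astype(float), nlevels=nlevels)
    for hp in p.highpasses:                 # complex coefficients per level
        mag = np.abs(hp)
        thr = k * np.median(mag)            # crude level-wise threshold (assumption)
        hp *= np.maximum(mag - thr, 0.0) / np.maximum(mag, 1e-12)
    rec = t.inverse(p)                      # wavelet-denoised frame
    sigma = estimate_sigma(rec)             # then non-local means on top
    return denoise_nl_means(rec, h=0.8 * sigma, patch_size=5,
                            patch_distance=6, fast_mode=True)
```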

  4. A Neuro-Fuzzy Inference System Combining Wavelet Denoising, Principal Component Analysis, and Sequential Probability Ratio Test for Sensor Monitoring

    SciTech Connect

    Na, Man Gyun; Oh, Seungrohk

    2002-11-15

A neuro-fuzzy inference system combined with wavelet denoising, principal component analysis (PCA), and sequential probability ratio test (SPRT) methods has been developed to monitor a relevant sensor using the information of other sensors. The parameters of the neuro-fuzzy inference system that estimates the relevant sensor signal are optimized by a genetic algorithm and a least-squares algorithm. The wavelet denoising technique was applied to remove noise components from the input signals to the neuro-fuzzy system. PCA was used to reduce the dimension of the input space without losing a significant amount of information, thereby shortening the time necessary to train the neuro-fuzzy system, simplifying the structure of the neuro-fuzzy inference system, and easing the selection of its input signals. The SPRT is then applied to the residual signals between the estimated and measured signals to detect whether the sensors are degraded. The proposed sensor-monitoring algorithm was verified through applications to the pressurizer water level, the pressurizer pressure, and the hot-leg temperature sensors in pressurized water reactors.
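
    The SPRT stage is easy to isolate. Below is a minimal Wald SPRT on estimation residuals, testing a healthy zero-mean Gaussian hypothesis against a degraded mean-shift alternative; the shift m, noise sigma, and error rates are illustrative assumptions, not values from the paper.

```python
# Wald's sequential probability ratio test on sensor residuals.
import numpy as np

def sprt(residuals, m=0.5, sigma=0.2, alpha=0.01, beta=0.01):
    """H0: residual ~ N(0, sigma^2) (healthy); H1: N(m, sigma^2) (degraded)."""
    a = np.log(beta / (1 - alpha))          # lower (accept H0) boundary
    b = np.log((1 - beta) / alpha)          # upper (accept H1) boundary
    llr = 0.0
    for t, r in enumerate(residuals):
        llr += (m / sigma ** 2) * (r - m / 2.0)   # Gaussian log-likelihood ratio
        if llr >= b:
            return 'degraded', t
        if llr <= a:
            return 'healthy', t
    return 'undecided', len(residuals)
```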

  5. [Research on ECG de-noising method based on ensemble empirical mode decomposition and wavelet transform using improved threshold function].

    PubMed

    Ye, Linlin; Yang, Dan; Wang, Xu

    2014-06-01

A de-noising method for electrocardiogram (ECG) signals based on ensemble empirical mode decomposition (EEMD) and wavelet threshold de-noising theory is proposed. Noisy ECG signals were decomposed with EEMD into a series of intrinsic mode functions (IMFs); selected IMFs were then reconstructed to de-noise the ECG. The processed ECG signals were filtered again with a wavelet transform using an improved threshold function. In the experiments, the MIT-BIH ECG database was used to evaluate the performance of the proposed method against de-noising based on EEMD alone and on the wavelet transform with the improved threshold function alone, in terms of signal-to-noise ratio (SNR) and mean square error (MSE). The results showed that the ECG waveforms de-noised with the proposed method were smooth and the amplitudes of the ECG features did not attenuate. In conclusion, the method discussed in this paper can de-noise ECG signals while preserving the characteristics of the original signal. PMID:25219236
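
    A rough sketch of this two-stage pipeline is given below, assuming the PyEMD (EMD-signal) and PyWavelets packages; the IMF selection rule and the universal soft threshold stand in for the paper's selection criteria and its improved threshold function.

```python
# EEMD decomposition, IMF selection, then wavelet threshold de-noising.
import numpy as np
import pywt
from PyEMD import EEMD

def denoise_ecg(sig, drop=1, wavelet='sym8', level=4):
    imfs = EEMD().eemd(sig)              # ensemble empirical mode decomposition
    partial = imfs[drop:].sum(axis=0)    # discard the noisiest high-frequency IMFs
    coeffs = pywt.wavedec(partial, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise scale estimate
    thr = sigma * np.sqrt(2 * np.log(len(sig)))          # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:len(sig)]
```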

  7. Image denoising in bidimensional empirical mode decomposition domain: the role of Student's probability distribution function.

    PubMed

    Lahmiri, Salim

    2016-03-01

    Hybridisation of the bi-dimensional empirical mode decomposition (BEMD) with denoising techniques has been proposed in the literature as an effective approach for image denoising. In this Letter, the Student's probability density function is introduced in the computation of the mean envelope of the data during the BEMD sifting process to make it robust to values that are far from the mean. The resulting BEMD is denoted tBEMD. In order to show the effectiveness of the tBEMD, several image denoising techniques in tBEMD domain are employed; namely, fourth order partial differential equation (PDE), linear complex diffusion process (LCDP), non-linear complex diffusion process (NLCDP), and the discrete wavelet transform (DWT). Two biomedical images and a standard digital image were considered for experiments. The original images were corrupted with additive Gaussian noise with three different levels. Based on peak-signal-to-noise ratio, the experimental results show that PDE, LCDP, NLCDP, and DWT all perform better in the tBEMD than in the classical BEMD domain. It is also found that tBEMD is faster than classical BEMD when the noise level is low. When it is high, the computational cost in terms of processing time is similar. The effectiveness of the presented approach makes it promising for clinical applications. PMID:27222723

  8. Research on biochemical spectrum denoising based on a novel wavelet threshold function and an improved translation-invariance method

    NASA Astrophysics Data System (ADS)

    Ren, Zhong; Liu, Guodong; Zeng, Lvming; Huang, Zhen; Huang, Shuanggen

    2008-12-01

In this paper, an improved wavelet threshold denoising method combined with translation invariance (TI) is adopted to remove the noise present in biochemical spectra. Meanwhile, a novel wavelet threshold function and an optimal threshold determination algorithm are proposed. The new function is continuous and has high-order derivatives, so it can overcome the oscillation phenomena generated by the classical threshold functions and decrease the error of the reconstructed spectrum. It is therefore superior to frequency-domain filtering methods, the soft- and hard-threshold functions proposed by D. L. Donoho, and the semisoft-threshold function proposed by Gao. The experimental results show that the improved TI wavelet threshold (TI-WT) denoising method can effectively eliminate the pseudo-Gibbs phenomena generated by the traditional wavelet thresholding method. At the same time, the improved wavelet threshold function and the TI-WT method yield a lower root-mean-square error (RMSE) and a higher signal-to-noise ratio (SNR) than frequency-domain filtering and classical soft- and hard-threshold denoising, with the SNR increasing from 17.3200 to 32.5609 and the RMSE decreasing from 4.0244 to 0.6257. Moreover, the improved denoising method not only makes the spectrum smooth, but also effectively preserves the edge characteristics of the original spectrum.
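
    For concreteness, the semisoft (firm) rule referenced above can be written in a few lines of NumPy; the paper's own novel threshold function is not reproduced here. Soft thresholding is recovered as t2 grows large, hard thresholding as t2 approaches t1.

```python
# Gao's semisoft (firm) threshold; requires 0 < t1 < t2.
import numpy as np

def semisoft(w, t1, t2):
    """Zero below t1, identity above t2, linear in between (continuous)."""
    w = np.asarray(w, dtype=float)
    mid = np.sign(w) * t2 * (np.abs(w) - t1) / (t2 - t1)
    out = np.where(np.abs(w) > t2, w, mid)
    return np.where(np.abs(w) <= t1, 0.0, out)
```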

  9. Nonlinear denoising of functional magnetic resonance imaging time series with wavelets

    NASA Astrophysics Data System (ADS)

    Stausberg, Sven; Lehnertz, Klaus

    2009-04-01

In functional magnetic resonance imaging (fMRI) the blood oxygenation level dependent (BOLD) effect is used to identify and delineate neuronal activity. The sensitivity of an fMRI-based detection of neuronal activation, however, strongly depends on the relative levels of signal and noise in the time series data, and a large number of different artifact and noise sources interfere with the weak signal changes of the BOLD response. Thus, noise reduction is important to allow an accurate estimation of single activation-related BOLD signals across brain regions. Techniques employed so far include filtering in the time or frequency domain, which, however, does not take into account possible nonlinearities of the BOLD response. We here evaluate a previously proposed method for nonlinear denoising of short and transient signals, which combines the wavelet transform with techniques from nonlinear time series analysis. We adapt the method to the problem at hand and show that successful noise reduction and, more importantly, preservation of the shape of individual BOLD signals can be achieved even in the presence of in-band noise.

  10. Functional network inference of the suprachiasmatic nucleus.

    PubMed

    Abel, John H; Meeker, Kirsten; Granados-Fuentes, Daniel; St John, Peter C; Wang, Thomas J; Bales, Benjamin B; Doyle, Francis J; Herzog, Erik D; Petzold, Linda R

    2016-04-19

In the mammalian suprachiasmatic nucleus (SCN), noisy cellular oscillators communicate within a neuronal network to generate precise system-wide circadian rhythms. Although the intracellular genetic oscillator and intercellular biochemical coupling mechanisms have been examined previously, the network topology driving synchronization of the SCN has not been elucidated. This network has been particularly challenging to probe, due to its oscillatory components and slow coupling timescale. In this work, we investigated the SCN network at a single-cell resolution through a chemically induced desynchronization. We then inferred functional connections in the SCN by applying the maximal information coefficient statistic to bioluminescence reporter data from individual neurons while they resynchronized their circadian cycling. Our results demonstrate that the functional network of circadian cells associated with resynchronization has small-world characteristics, with a node degree distribution that is exponential. We show that hubs of this small-world network are preferentially located in the central SCN, with sparsely connected shells surrounding these cores. Finally, we used two computational models of circadian neurons to validate our predictions of network structure. PMID:27044085
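
    The inference step can be sketched as pairwise maximal information coefficient (MIC) scores over per-neuron time series, assuming the `minepy` package; in practice the edge threshold would come from a shuffle-based null distribution rather than the constant used here.

```python
# Pairwise MIC adjacency from bioluminescence traces (illustrative threshold).
import numpy as np
from minepy import MINE

def mic_network(traces, threshold=0.8):
    """traces: (n_cells, n_timepoints) array; returns symmetric adjacency."""
    n = traces.shape[0]
    adj = np.zeros((n, n), dtype=bool)
    mine = MINE(alpha=0.6, c=15)
    for i in range(n):
        for j in range(i + 1, n):
            mine.compute_score(traces[i], traces[j])
            adj[i, j] = adj[j, i] = mine.mic() > threshold
    return adj
```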

  12. Functional neuroanatomy of intuitive physical inference.

    PubMed

    Fischer, Jason; Mikhael, John G; Tenenbaum, Joshua B; Kanwisher, Nancy

    2016-08-23

To engage with the world, that is, to understand the scene in front of us, plan actions, and predict what will happen next, we must have an intuitive grasp of the world's physical structure and dynamics. How do the objects in front of us rest on and support each other, how much force would be required to move them, and how will they behave when they fall, roll, or collide? Despite the centrality of physical inferences in daily life, little is known about the brain mechanisms recruited to interpret the physical structure of a scene and predict how physical events will unfold. Here, in a series of fMRI experiments, we identified a set of cortical regions that are selectively engaged when people watch and predict the unfolding of physical events: a "physics engine" in the brain. These brain regions are selective to physical inferences relative to nonphysical but otherwise highly similar scenes and tasks. However, these regions are not exclusively engaged in physical inferences per se or, indeed, even in scene understanding; they overlap with the domain-general "multiple demand" system, especially the parts of that system involved in action planning and tool use, pointing to a close relationship between the cognitive and neural mechanisms involved in parsing the physical content of a scene and preparing an appropriate action. PMID:27503892

  13. Denoising of high-resolution single-particle electron-microscopy density maps by their approximation using three-dimensional Gaussian functions.

    PubMed

    Jonić, S; Vargas, J; Melero, R; Gómez-Blanco, J; Carazo, J M; Sorzano, C O S

    2016-06-01

Cryo-electron microscopy (cryo-EM) of frozen-hydrated preparations of isolated macromolecular complexes is the method of choice to obtain the structure of complexes that cannot be easily studied by other experimental methods due to their flexibility or large size. An increasing number of macromolecular structures are currently being obtained at subnanometer resolution, but the interpretation of structural details in such EM-derived maps is often difficult because noise in the high-frequency signal components reduces their contrast. In this paper, we show that the method for EM density-map approximation using Gaussian functions can be used for denoising single-particle EM maps of high (typically subnanometer) resolution. We show its denoising performance using simulated and experimental EM density maps of several complexes.

  14. Iterative Regularization Denoising Method Based on OSV Model for BioMedical Image Denoising

    NASA Astrophysics Data System (ADS)

    Guan-nan, Chen; Rong, Chen; Zu-fang, Huang; Ju-qiang, Lin; Shang-yuan, Feng; Yong-zeng, Li; Zhong-jian, Teng

    2011-01-01

Biomedical image denoising algorithms based on gradient-dependent energy functionals often compromise image features such as textures and fine details. This paper proposes an iterative regularization denoising method based on the OSV model for biomedical images. With iterative regularization, the oscillating patterns of texture and detail are added back to fit and compute the original OSV model, and the iterative behavior avoids excessive smoothing while denoising, preserving textures and details to a certain extent. In addition, the iterative procedure is formulated, and the convergence of the proposed algorithm is proved. Experimental results show that the proposed method achieves better results for biomedical image denoising, preserving both textures and details.

  15. Saddlepoint distribution function approximations in biostatistical inference.

    PubMed

    Kolassa, J E

    2003-01-01

    Applications of saddlepoint approximations to distribution functions are reviewed. Calculations are provided for marginal distributions and conditional distributions. These approximations are applied to problems of testing and generating confidence intervals, particularly in canonical exponential families.
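
    As a worked example of such an approximation (ours, not the paper's), the Lugannani-Rice formula for the distribution function of a sum of n iid Exp(1) variables can be checked against the exact Gamma(n) CDF:

```python
# Lugannani-Rice saddlepoint CDF approximation for S_n = sum of n iid Exp(1).
import numpy as np
from scipy.stats import norm, gamma

def lr_cdf_exp_sum(x, n):
    """K(s) = -n*log(1 - s); the saddlepoint solves K'(s) = x (needs x != n)."""
    s = 1.0 - n / x                              # from K'(s) = n / (1 - s) = x
    w = np.sign(s) * np.sqrt(2.0 * (s * x + n * np.log(1.0 - s)))
    u = s * np.sqrt(n / (1.0 - s) ** 2)          # s * sqrt(K''(s))
    return norm.cdf(w) + norm.pdf(w) * (1.0 / w - 1.0 / u)

print(lr_cdf_exp_sum(12.0, 10))   # ~0.7577
print(gamma(10).cdf(12.0))        # ~0.7576 (exact)
```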

  16. Nonparametric inference on median residual life function.

    PubMed

    Jeong, Jong-Hyeon; Jung, Sin-Ho; Costantino, Joseph P

    2008-03-01

    A simple approach to the estimation of the median residual lifetime is proposed for a single group by inverting a function of the Kaplan-Meier estimators. A test statistic is proposed to compare two median residual lifetimes at any fixed time point. The test statistic does not involve estimation of the underlying probability density function of failure times under censoring. Extensive simulation studies are performed to validate the proposed test statistic in terms of type I error probabilities and powers at various time points. One of the oldest data sets from the National Surgical Adjuvant Breast and Bowel Project (NSABP), which has more than a quarter century of follow-up, is used to illustrate the method. The analysis results indicate that, without systematic post-operative therapy, a significant difference in median residual lifetimes between node-negative and node-positive breast cancer patients persists for about 10 years after surgery. The new estimates of the median residual lifetime could serve as a baseline for physicians to explain any incremental effects of post-operative treatments in terms of delaying breast cancer recurrence or prolonging remaining lifetimes of breast cancer patients. PMID:17501936
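
    A minimal sketch of this estimator, inverting the Kaplan-Meier curve at half its value at the chosen time point, is shown below; it assumes the `lifelines` package, and the helper is illustrative.

```python
# Median residual life at t0 from a Kaplan-Meier fit.
import numpy as np
from lifelines import KaplanMeierFitter

def median_residual_life(durations, events, t0):
    kmf = KaplanMeierFitter().fit(durations, events)
    sf = kmf.survival_function_              # DataFrame indexed by time
    times = sf.index.values
    surv = sf.iloc[:, 0].values
    s_t0 = np.interp(t0, times, surv)        # S(t0)
    # smallest u with S(t0 + u) <= S(t0) / 2
    later = times[(times >= t0) & (surv <= s_t0 / 2.0)]
    return later[0] - t0 if later.size else np.inf   # inf if never reached
```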

  17. Creators' intentions bias judgments of function independently from causal inferences.

    PubMed

    Chaigneau, Sergio E; Castillo, Ramón D; Martínez, Luis

    2008-10-01

Participants learned about novel artifacts that were created for function X, but later used for function Y. When asked to rate the extent to which X and Y were a given artifact's function, participants consistently rated X higher than Y. In Experiments 1 and 2, participants were also asked to rate artifacts' efficiency to perform X and Y. This allowed us to test whether participants' preference for X was mediated by causal inferences. Experiment 1 showed that participants did not infer that intentionally created artifacts performed X more efficiently than Y. Experiment 2 showed participants did not infer that only an efficient (but not an inefficient) artifact provided evidence of intentional creation. Causal inferences involving efficiency did not account for participants' preferences. In Experiment 3, in contrast, when the creator changed her mind about an artifact's function (i.e., from X to Y), the preference for the original function tended to disappear. Creators' intentions were the basis for participants' preference. Results are discussed relative to essentialist theories.

  18. Comments on "Functional equivalence between radial basis function networks and fuzzy inference systems".

    PubMed

    Anderson, H C; Lotfi, A; Westphal, L C; Jang, J R

    1998-01-01

    The above paper claims that under a set of minor restrictions radial basis function networks and fuzzy inference systems are functionally equivalent. The purpose of this letter is to show that this set of restrictions is incomplete and that, when it is completed, the said functional equivalence applies only to a small range of fuzzy inference systems. In addition, a modified set of restrictions is proposed which is applicable for a much wider range of fuzzy inference systems.

  19. Differential Expression and Network Inferences through Functional Data Modeling

    PubMed Central

    Telesca, Donatello; Inoue, Lurdes Y.T.; Neira, Mauricio; Etzioni, Ruth; Gleave, Martin; Nelson, Colleen

    2010-01-01

Time-course microarray data consist of mRNA expression from a common set of genes collected at different time points. Such data are thought to reflect underlying biological processes developing over time. In this article we propose a model that allows us to examine differential expression and gene network relationships using time course microarray data. We model each gene expression profile as a random functional transformation of the scale, amplitude and phase of a common curve. Inferences about the gene-specific amplitude parameters allow us to examine differential gene expression. Inferences about measures of functional similarity based on estimated time transformation functions allow us to examine gene networks while accounting for features of the gene expression profiles. We discuss applications to simulated data as well as to microarray data on prostate cancer progression. PMID:19053995

  20. Receiver function deconvolution using transdimensional hierarchical Bayesian inference

    NASA Astrophysics Data System (ADS)

    Kolb, J. M.; Lekić, V.

    2014-06-01

    Teleseismic waves can convert from shear to compressional (Sp) or compressional to shear (Ps) across impedance contrasts in the subsurface. Deconvolving the parent waveforms (P for Ps or S for Sp) from the daughter waveforms (S for Ps or P for Sp) generates receiver functions which can be used to analyse velocity structure beneath the receiver. Though a variety of deconvolution techniques have been developed, they are all adversely affected by background and signal-generated noise. In order to take into account the unknown noise characteristics, we propose a method based on transdimensional hierarchical Bayesian inference in which both the noise magnitude and noise spectral character are parameters in calculating the likelihood probability distribution. We use a reversible-jump implementation of a Markov chain Monte Carlo algorithm to find an ensemble of receiver functions whose relative fits to the data have been calculated while simultaneously inferring the values of the noise parameters. Our noise parametrization is determined from pre-event noise so that it approximates observed noise characteristics. We test the algorithm on synthetic waveforms contaminated with noise generated from a covariance matrix obtained from observed noise. We show that the method retrieves easily interpretable receiver functions even in the presence of high noise levels. We also show that we can obtain useful estimates of noise amplitude and frequency content. Analysis of the ensemble solutions produced by our method can be used to quantify the uncertainties associated with individual receiver functions as well as with individual features within them, providing an objective way for deciding which features warrant geological interpretation. This method should make possible more robust inferences on subsurface structure using receiver function analysis, especially in areas of poor data coverage or under noisy station conditions.

  1. Raman spectral data denoising based on wavelet analysis

    NASA Astrophysics Data System (ADS)

    Chen, Chen; Peng, Fei; Cheng, Qinghua; Xu, Dahai

    2008-12-01

As one kind of molecular scattering spectroscopy, Raman spectroscopy (RS) is characterized by frequency shifts that carry information about the molecule. RS has broad applications in biological, chemical, environmental and industrial fields. But signals in Raman spectral analysis are often noisy, which greatly hinders accurate analytical results, so the de-noising of RS signals is an important part of spectral analysis. The wavelet transform has become established, alongside the Fourier transform, as a data-processing method in analytical fields; its main applications are de-noising, compression, variable reduction, and signal suppression. In de-noising Raman spectroscopy, wavelets are chosen to construct the de-noising function because of their excellent properties. In this paper, a biorthogonal (bior) wavelet is adopted to remove the noise in Raman spectra. It eliminates noise effectively, with satisfying results. This method can provide a basis for practical de-noising of Raman spectra.

  2. Network inference from functional experimental data (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Desrosiers, Patrick; Labrecque, Simon; Tremblay, Maxime; Bélanger, Mathieu; De Dorlodot, Bertrand; Côté, Daniel C.

    2016-03-01

Functional connectivity maps of neuronal networks are critical tools for understanding how neurons form circuits, how information is encoded and processed by neurons, how memory is shaped, and how these basic processes are altered under pathological conditions. Current light microscopy allows the observation of calcium or electrical activity from thousands of neurons simultaneously, yet assessing comprehensive connectivity maps directly from such data remains a non-trivial analytical task. There exist simple statistical methods, such as cross-correlation and Granger causality, but they only detect linear interactions between neurons. Other more involved inference methods inspired by information theory, such as mutual information and transfer entropy, identify connections between neurons more accurately but also require more computational resources. We carried out a comparative study of common connectivity inference methods. The relative accuracy and computational cost of each method was determined via simulated fluorescence traces generated with realistic computational models of interacting neurons in networks of different topologies (clustered or non-clustered) and sizes (10-1000 neurons). To bridge the computational and experimental work, we observed the intracellular calcium activity of live hippocampal neuronal cultures infected with the fluorescent calcium marker GCaMP6f. The spontaneous activity of the networks, consisting of 50-100 neurons per field of view, was recorded at 20 to 50 Hz on a microscope controlled by homemade software. We implemented all connectivity inference methods in the software, which rapidly loads calcium fluorescence movies, segments the images, extracts the fluorescence traces, and assesses the functional connections (with strengths and directions) between each pair of neurons. We used this software to assess, in real time, the functional connectivity from real calcium imaging data in basal conditions, under plasticity protocols, and in epileptic conditions.
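
    The simplest of the compared approaches, pairwise lagged cross-correlation, makes a convenient baseline sketch (the lag window and threshold below are illustrative):

```python
# Directed adjacency from lagged cross-correlation of fluorescence traces.
import numpy as np

def xcorr_network(traces, max_lag=10, threshold=0.3):
    """traces: (n_cells, T); edge i->j if j looks like a delayed copy of i."""
    n, T = traces.shape
    z = (traces - traces.mean(1, keepdims=True)) / traces.std(1, keepdims=True)
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # correlation of i at time t with j at time t + lag, lag > 0
            corrs = [np.dot(z[i, :T - lag], z[j, lag:]) / (T - lag)
                     for lag in range(1, max_lag + 1)]
            adj[i, j] = max(corrs) > threshold
    return adj
```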

  3. Beyond the bounds of orthology: functional inference from metagenomic context.

    PubMed

    Vey, Gregory; Moreno-Hagelsieb, Gabriel

    2010-07-01

    The effectiveness of the computational inference of function by genomic context is bounded by the diversity of known microbial genomes. Although metagenomes offer access to previously inaccessible organisms, their fragmentary nature prevents the conventional establishment of orthologous relationships required for reliably predicting functional interactions. We introduce a protocol for the prediction of functional interactions using data sources without information about orthologous relationships. To illustrate this process, we use the Sargasso Sea metagenome to construct a functional interaction network for the Escherichia coli K12 genome. We identify two reliability metrics, target intergenic distance and source interaction count, and apply them to selectively filter the predictions retained to construct the network of functional interactions. The resulting network contains 2297 nodes with 10 072 edges with a positive predictive value of 0.80. The metagenome yielded 8423 functional interactions beyond those found using only the genomic orthologs as a data source. This amounted to a 134% increase in the total number of functional interactions that are predicted by combining the metagenome and the genomic orthologs versus the genomic orthologs alone. In the absence of detectable orthologous relationships it remains feasible to derive a reliable set of predicted functional interactions. This offers a strategy for harnessing other metagenomes and homologs in general. Because metagenomes allow access to previously unreachable microorganisms, this will result in expanding the universe of known functional interactions thus furthering our understanding of functional organization. PMID:20419183

  5. Inference of gene regulation functions from dynamic transcriptome data

    PubMed Central

    Hillenbrand, Patrick; Maier, Kerstin C; Cramer, Patrick; Gerland, Ulrich

    2016-01-01

    To quantify gene regulation, a function is required that relates transcription factor binding to DNA (input) to the rate of mRNA synthesis from a target gene (output). Such a ‘gene regulation function’ (GRF) generally cannot be measured because the experimental titration of inputs and simultaneous readout of outputs is difficult. Here we show that GRFs may instead be inferred from natural changes in cellular gene expression, as exemplified for the cell cycle in the yeast S. cerevisiae. We develop this inference approach based on a time series of mRNA synthesis rates from a synchronized population of cells observed over three cell cycles. We first estimate the functional form of how input transcription factors determine mRNA output and then derive GRFs for target genes in the CLB2 gene cluster that are expressed during G2/M phase. Systematic analysis of additional GRFs suggests a network architecture that rationalizes transcriptional cell cycle oscillations. We find that a transcription factor network alone can produce oscillations in mRNA expression, but that additional input from cyclin oscillations is required to arrive at the native behaviour of the cell cycle oscillator. DOI: http://dx.doi.org/10.7554/eLife.12188.001 PMID:27652904

  6. A Neuroeconomics Approach to Inferring Utility Functions in Sensorimotor Control

    PubMed Central

    2004-01-01

    Making choices is a fundamental aspect of human life. For over a century experimental economists have characterized the decisions people make based on the concept of a utility function. This function increases with increasing desirability of the outcome, and people are assumed to make decisions so as to maximize utility. When utility depends on several variables, indifference curves arise that represent outcomes with identical utility that are therefore equally desirable. Whereas in economics utility is studied in terms of goods and services, the sensorimotor system may also have utility functions defining the desirability of various outcomes. Here, we investigate the indifference curves when subjects experience forces of varying magnitude and duration. Using a two-alternative forced-choice paradigm, in which subjects chose between different magnitude–duration profiles, we inferred the indifference curves and the utility function. Such a utility function defines, for example, whether subjects prefer to lift a 4-kg weight for 30 s or a 1-kg weight for a minute. The measured utility function depends nonlinearly on the force magnitude and duration and was remarkably conserved across subjects. This suggests that the utility function, a central concept in economics, may be applicable to the study of sensorimotor control. PMID:15383835

  7. Computational approaches for inferring the functions of intrinsically disordered proteins

    PubMed Central

    Varadi, Mihaly; Vranken, Wim; Guharoy, Mainak; Tompa, Peter

    2015-01-01

    Intrinsically disordered proteins (IDPs) are ubiquitously involved in cellular processes and often implicated in human pathological conditions. The critical biological roles of these proteins, despite not adopting a well-defined fold, encouraged structural biologists to revisit their views on the protein structure-function paradigm. Unfortunately, investigating the characteristics and describing the structural behavior of IDPs is far from trivial, and inferring the function(s) of a disordered protein region remains a major challenge. Computational methods have proven particularly relevant for studying IDPs: on the sequence level their dependence on distinct characteristics determined by the local amino acid context makes sequence-based prediction algorithms viable and reliable tools for large scale analyses, while on the structure level the in silico integration of fundamentally different experimental data types is essential to describe the behavior of a flexible protein chain. Here, we offer an overview of the latest developments and computational techniques that aim to uncover how protein function is connected to intrinsic disorder. PMID:26301226

  8. Progress on Bayesian Inference of the Fast Ion Distribution Function

    NASA Astrophysics Data System (ADS)

    Stagner, L.; Heidbrink, W. W.; Chen, X.; Salewski, W.; Grierson, B. A.

    2013-10-01

    The fast-ion distribution function (DF) has a complicated dependence on several phase-space variables. The standard analysis procedure in energetic particle research is to compute the DF theoretically, use that DF in forward modeling to predict diagnostic signals, then compare with measured data. However, when theory and experiment disagree (for one or more diagnostics), it is unclear how to proceed. Bayesian statistics provides a framework to infer the DF, quantify errors, and reconcile discrepant diagnostic measurements. Diagnostic errors and weight functions that describe the phase space sensitivity of the measurements are incorporated into Bayesian likelihood probabilities. Prior probabilities describe physical constraints. This poster will show reconstructions of classically described, low-power, MHD-quiescent distribution functions from actual FIDA measurements. A description of the full weight functions will also be shown. This work is supported in part by the US Department of Energy under SC-G903402, DE-FC02-04ER54698 and DE-AC02-09CH11466.

  9. [DR image denoising based on Laplace-Impact mixture model].

    PubMed

    Feng, Guo-Dong; He, Xiang-Bin; Zhou, He-Qin

    2009-07-01

A novel DR image denoising algorithm based on a Laplace-Impact mixture model in the dual-tree complex wavelet domain is proposed in this paper. It uses local variance to build the probability density function of the Laplace-Impact model, which fits the distribution of high-frequency subband coefficients well. Within the Laplace-Impact framework, this paper describes a novel method for image denoising based on designing minimum mean squared error (MMSE) estimators, which relies on the strong correlation between amplitudes of nearby coefficients. The experimental results show that the algorithm proposed in this paper outperforms several state-of-the-art denoising methods, such as Bayes least squares with a Gaussian scale mixture and the Laplace prior.

  10. Study on De-noising Technology of Radar Life Signal

    NASA Astrophysics Data System (ADS)

    Yang, Xiu-Fang; Wang, Lian-Huan; Ma, Jiang-Fei; Wang, Pei-Pei

    2016-05-01

Radar detection is a novel life-detection technology that can be applied to medical monitoring, anti-terrorism, disaster relief, street fighting, etc. As the radar life signal is very weak, it is often submerged in noise. Because of the non-stationarity and randomness of these clutter signals, efficient de-noising is necessary before extracting and separating the useful signal. This paper improves the theoretical continuous-wave model of the radar life signal, performs de-noising by introducing the lifting wavelet transform, and determines the best threshold function by comparing the de-noising effects of different threshold functions. The results indicate that both the SNR and MSE of the signal are better than with traditional methods when the lifting wavelet transform and the new improved soft-threshold de-noising function are used.

  11. Medical-Legal Inferences From Functional Neuroimaging Evidence.

    PubMed

    Mayberg

    1996-07-01

Positron emission tomography (PET) and single-photon emission tomography (SPECT) are validated functional imaging techniques for the in vivo measurement of many neurophysiological and neurochemical parameters. Research studies of patients with a broad range of neurological and psychiatric illness have been published. Reproducible and specific patterns of altered cerebral blood flow and glucose metabolism, however, have been demonstrated and confirmed for only a limited number of specific illnesses. The association of functional scan patterns with specific deficits is less conclusive. Correlations of regional abnormalities with clinical symptoms such as motor weakness, aphasia, and visual spatial dysfunction are the most reproducible but are more poorly localized than lesion-deficit studies would suggest. Findings are even less consistent for nonlocalizing behavioral symptoms such as memory difficulties, poor concentration, irritability, or chronic pain, and no reliable patterns have been demonstrated. In a forensic context, homicidal and sadistic tendencies, aberrant sexual drive, violent impulsivity, psychopathic and sociopathic personality traits, as well as impaired judgement and poor insight, have no known PET or SPECT patterns, and their presence in an individual with any PET or SPECT scan finding cannot be inferred or concluded. Furthermore, the reliable prediction of any specific neurological, psychiatric, or behavioral deficits from specific scan findings has not been demonstrated. Unambiguous results from experiments designed to specifically examine the causative relationships between regional brain dysfunction and these types of complex behaviors are needed before any introduction of functional scans into the courts can be considered scientifically justified or legally admissible. PMID:10320420

  12. Functional equivalence between radial basis function networks and fuzzy inference systems.

    PubMed

    Jang, J R; Sun, C T

    1993-01-01

    It is shown that, under some minor restrictions, the functional behavior of radial basis function networks (RBFNs) and that of fuzzy inference systems are actually equivalent. This functional equivalence makes it possible to apply what has been discovered (learning rule, representational power, etc.) for one of the models to the other, and vice versa. It is of interest to observe that two models stemming from different origins turn out to be functionally equivalent.
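
    The equivalence is easy to verify numerically: with shared Gaussian widths and product inference (among the stated restrictions), a normalised Gaussian RBF network and a zero-order Sugeno fuzzy system compute identical outputs. The centres, widths, and weights below are arbitrary.

```python
# Normalised Gaussian RBF network vs. zero-order Sugeno fuzzy system.
import numpy as np

centers = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
sigma = 0.7
weights = np.array([0.2, -1.3, 0.8])     # RBF weights == rule consequents

def rbfn(x):
    act = np.exp(-np.sum((x - centers) ** 2, axis=1) / (2 * sigma ** 2))
    return np.dot(weights, act) / np.sum(act)      # normalised RBF output

def sugeno(x):
    # firing strengths: product of per-dimension Gaussian memberships
    mu = np.prod(np.exp(-(x - centers) ** 2 / (2 * sigma ** 2)), axis=1)
    return np.sum(weights * mu) / np.sum(mu)       # weighted-average defuzzification

x = np.array([0.3, 0.8])
assert np.isclose(rbfn(x), sugeno(x))              # identical by construction
```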

  13. Image denoising filter based on patch-based difference refinement

    NASA Astrophysics Data System (ADS)

    Park, Sang Wook; Kang, Moon Gi

    2012-06-01

In the denoising literature, research based on the nonlocal means (NLM) filter has been carried out, with many variations and improvements regarding the weight function and parameter optimization. Here, an NLM filter with patch-based difference (PBD) refinement is presented. PBD refinement, which is the weighted average of the PBD values, is performed with respect to the difference images of all the locations in a refinement kernel. With refined and denoised PBD values, a pattern-adaptive smoothing threshold and noise-suppressed NLM filter weights are calculated. Owing to the refinement of the PBD values, the patterns are divided into flat regions and texture regions by comparing the sorted values in the PBD domain to the threshold value including the noise standard deviation. Then, two different smoothing thresholds are utilized for denoising each region, and the NLM filter is applied finally. Experimental results of the proposed scheme are shown in comparison with several state-of-the-art NLM-based denoising methods.

  14. Extending the functional equivalence of radial basis function networks and fuzzy inference systems.

    PubMed

    Hunt, K J; Haas, R; Murray-Smith, R

    1996-01-01

We establish the functional equivalence of a generalized class of Gaussian radial basis function (RBF) networks and the full Takagi-Sugeno model (1983) of fuzzy inference. This generalizes an existing result which applies to the standard Gaussian RBF network and a restricted form of the Takagi-Sugeno fuzzy system. The more general framework allows the removal of some of the restrictive conditions of the previous result.

  15. Nonlocal Markovian models for image denoising

    NASA Astrophysics Data System (ADS)

    Salvadeo, Denis H. P.; Mascarenhas, Nelson D. A.; Levada, Alexandre L. M.

    2016-01-01

Currently, the state-of-the-art methods for image denoising are patch-based approaches. Redundant information present in nonlocal regions (patches) of the image is considered for better image modeling, resulting in an improved quality of filtering. In this respect, nonlocal Markov random field (MRF) models are proposed by redefining the energy functions of classical MRF models to adopt a nonlocal approach. With the new energy functions, the pairwise pixel interaction is weighted according to the similarities between the patches corresponding to each pair. Also, a maximum pseudolikelihood estimation of the spatial dependency parameter (β) for these models is presented here. For evaluating this proposal, these models are used as an a priori model in a maximum a posteriori estimation to denoise additive white Gaussian noise in images. Finally, results display a notable improvement in both quantitative and qualitative terms in comparison with the local MRFs.

  16. CONSTRUCTING A FLEXIBLE LIKELIHOOD FUNCTION FOR SPECTROSCOPIC INFERENCE

    SciTech Connect

    Czekala, Ian; Andrews, Sean M.; Mandel, Kaisey S.; Green, Gregory M.; Hogg, David W.

    2015-10-20

    We present a modular, extensible likelihood framework for spectroscopic inference based on synthetic model spectra. The subtraction of an imperfect model from a continuously sampled spectrum introduces covariance between adjacent datapoints (pixels) into the residual spectrum. For the high signal-to-noise data with large spectral range that is commonly employed in stellar astrophysics, that covariant structure can lead to dramatically underestimated parameter uncertainties (and, in some cases, biases). We construct a likelihood function that accounts for the structure of the covariance matrix, utilizing the machinery of Gaussian process kernels. This framework specifically addresses the common problem of mismatches in model spectral line strengths (with respect to data) due to intrinsic model imperfections (e.g., in the atomic/molecular databases or opacity prescriptions) by developing a novel local covariance kernel formalism that identifies and self-consistently downweights pathological spectral line “outliers.” By fitting many spectra in a hierarchical manner, these local kernels provide a mechanism to learn about and build data-driven corrections to synthetic spectral libraries. An open-source software implementation of this approach is available at http://iancze.github.io/Starfish, including a sophisticated probabilistic scheme for spectral interpolation when using model libraries that are sparsely sampled in the stellar parameters. We demonstrate some salient features of the framework by fitting the high-resolution V-band spectrum of WASP-14, an F5 dwarf with a transiting exoplanet, and the moderate-resolution K-band spectrum of Gliese 51, an M5 field dwarf.
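
    A hedged sketch of this covariance construction: a stationary global kernel plus a localised kernel centred on a mis-modelled line, entering a zero-mean Gaussian likelihood for the residual spectrum. The kernel forms and parameters below are illustrative, not Starfish's exact choices.

```python
# Global + local covariance kernels for a residual-spectrum likelihood.
import numpy as np
from scipy.stats import multivariate_normal

def residual_cov(wave, amp, length, line_centers, line_amp, line_width, noise):
    d = wave[:, None] - wave[None, :]
    C = amp ** 2 * np.exp(-0.5 * d ** 2 / length ** 2)   # global kernel
    for mu in line_centers:                              # local "outlier" kernels
        r = np.exp(-0.5 * (wave - mu) ** 2 / line_width ** 2)
        C += line_amp ** 2 * np.outer(r, r)
    return C + noise ** 2 * np.eye(wave.size)            # per-pixel noise term

wave = np.linspace(5100.0, 5200.0, 200)                  # wavelength grid (angstroms)
C = residual_cov(wave, 0.01, 2.0, [5150.0], 0.05, 0.5, 0.005)
resid = np.zeros_like(wave)                              # data - model (placeholder)
print(multivariate_normal(cov=C).logpdf(resid))          # Gaussian log-likelihood
```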

  17. On the Inference of Functional Circadian Networks Using Granger Causality

    PubMed Central

    Pourzanjani, Arya; Herzog, Erik D.; Petzold, Linda R.

    2015-01-01

Being able to infer one-way direct connections in an oscillatory network such as the suprachiasmatic nucleus (SCN) of the mammalian brain using time series data is difficult but crucial to understanding network dynamics. Although techniques have been developed for inferring networks from time series data, there have been no attempts to adapt these techniques to infer directional connections in oscillatory time series, while accurately distinguishing between direct and indirect connections. In this paper, an adaptation of Granger causality is proposed, called Adaptive Frequency Granger Causality (AFGC), that allows for the inference of circadian networks and oscillatory networks in general. Additionally, an extension of this method, called LASSO AFGC, is proposed to infer networks with large numbers of cells. The method was validated using simulated data from several different networks. For the smaller networks the method was able to identify all one-way direct connections without identifying connections that were not present. For larger networks of up to twenty cells the method shows excellent performance in identifying true and false connections; this is quantified by an area under the curve (AUC) of 96.88%. We note that this method, like other Granger causality-based methods, is based on the detection of high-frequency signals propagating between cell traces. Thus it requires a relatively high sampling rate and a network that can propagate high-frequency signals. PMID:26413748
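
    Classical pairwise Granger causality, which AFGC adapts to oscillatory signals, can be sketched with statsmodels; the lag order and significance level below are illustrative.

```python
# Pairwise Granger-causal network from cell traces (statsmodels).
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

def granger_network(traces, maxlag=5, alpha=0.01):
    """traces: (n_cells, T); edge i->j if i Granger-causes j."""
    n = traces.shape[0]
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            pair = np.column_stack([traces[j], traces[i]])   # tests col 2 -> col 1
            res = grangercausalitytests(pair, maxlag=maxlag, verbose=False)
            pmin = min(res[lag][0]['ssr_ftest'][1] for lag in res)
            adj[i, j] = pmin < alpha
    return adj
```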

  19. Multicomponent MR Image Denoising

    PubMed Central

    Manjón, José V.; Thacker, Neil A.; Lull, Juan J.; Garcia-Martí, Gracian; Martí-Bonmatí, Luís; Robles, Montserrat

    2009-01-01

Magnetic Resonance images are normally corrupted by random noise from the measurement process, complicating the automatic feature extraction and analysis of clinical data. For this reason, denoising methods have traditionally been applied to improve MR image quality. Many of these methods use the information of a single image without taking into consideration the intrinsic multicomponent nature of MR images. In this paper we propose a new filter to reduce random noise in multicomponent MR images by spatially averaging similar pixels using information from all available image components to perform the denoising process. The proposed algorithm also uses a local Principal Component Analysis decomposition as a postprocessing step to remove more noise by using information not only in the spatial domain but also in the intercomponent domain, achieving higher noise reduction without significantly affecting the original image resolution. The proposed method has been compared with similar state-of-the-art methods over synthetic and real clinical multicomponent MR images, showing improved performance in all cases analyzed. PMID:19888431

  20. Structure-based inference of molecular functions of proteins of unknown function from Berkeley Structural Genomics Center

    SciTech Connect

Shin, Dong Hae; Hou, Jingtong; Chandonia, John-Marc; Das, Debanu; Choi, In-Geol; Kim, Rosalind; Kim, Sung-Hou

    2007-09-02

    Advances in sequence genomics have resulted in an accumulation of a huge number of protein sequences derived from genome sequences. However, the functions of a large portion of them cannot be inferred based on the current methods of sequence homology detection to proteins of known functions. Three-dimensional structure can have an important impact in providing inference of molecular function (physical and chemical function) of a protein of unknown function. Structural genomics centers worldwide have been determining many 3-D structures of the proteins of unknown functions, and possible molecular functions of them have been inferred based on their structures. Combined with bioinformatics and enzymatic assay tools, the successful acceleration of the process of protein structure determination through high throughput pipelines enables the rapid functional annotation of a large fraction of hypothetical proteins. We present a brief summary of the process we used at the Berkeley Structural Genomics Center to infer molecular functions of proteins of unknown function.

  1. Study on an improved wavelet shift-invariant threshold denoising for pulsed laser induced glucose photoacoustic signals

    NASA Astrophysics Data System (ADS)

    Wang, Zhengzi; Ren, Zhong; Liu, Guodong

    2015-10-01

Noninvasive measurement of blood glucose concentration has become a research hotspot worldwide due to its convenience, rapidity and non-destructiveness. Blood glucose monitoring based on the photoacoustic technique has attracted much attention because the detected signals are ultrasonic rather than optical. But during acquisition, the photoacoustic signals of glucose are inevitably polluted by several factors, such as the pulsed laser, electronic noise and ambient noise. These disturbances impact the measurement accuracy of the glucose concentration, so de-noising the glucose photoacoustic signals is a key task. In this paper, a wavelet shift-invariant threshold de-noising method is improved, and a novel wavelet threshold function is proposed. For the novel wavelet threshold function, two threshold values and two different factors are set; the function is continuous with high-order derivatives and can be regarded as a compromise between wavelet soft-threshold and hard-threshold de-noising. Simulation results illustrate that, compared with other wavelet threshold de-noising methods, the improved shift-invariant threshold de-noising achieves a higher signal-to-noise ratio (SNR), a smaller root-mean-square error (RMSE), and a better overall de-noising effect. Therefore, the improved method has potential value for de-noising glucose photoacoustic signals.

  2. Birdsong Denoising Using Wavelets

    PubMed Central

    Priyadarshani, Nirosha; Marsland, Stephen; Castro, Isabel; Punchihewa, Amal

    2016-01-01

    Automatic recording of birdsong is becoming the preferred way to monitor and quantify bird populations worldwide. Programmable recorders allow recordings to be obtained at all times of day and year for extended periods of time. Consequently, there is a critical need for robust automated birdsong recognition. One prominent obstacle to achieving this is low signal to noise ratio in unattended recordings. Field recordings are often very noisy: birdsong is only one component in a recording, which also includes noise from the environment (such as wind and rain), other animals (including insects), and human-related activities, as well as noise from the recorder itself. We describe a method of denoising using a combination of the wavelet packet decomposition and band-pass or low-pass filtering, and present experiments that demonstrate an order of magnitude improvement in noise reduction over natural noisy bird recordings. PMID:26812391
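
    A sketch of this recipe, wavelet-packet shrinkage followed by a band-pass filter over the song band, is given below using PyWavelets and SciPy; the wavelet, level, band edges, and threshold rule are illustrative choices, not the paper's tuned settings.

```python
# Wavelet-packet soft thresholding plus band-pass filtering.
import numpy as np
import pywt
from scipy.signal import butter, sosfiltfilt

def denoise_song(sig, fs, level=5, band=(1000.0, 8000.0)):
    wp = pywt.WaveletPacket(sig, wavelet='dmey', maxlevel=level)
    nodes = wp.get_level(level)
    sigma = np.median(np.abs(nodes[-1].data)) / 0.6745   # crude noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(sig)))
    for node in nodes:
        node.data = pywt.threshold(node.data, thr, mode='soft')
    clean = wp.reconstruct(update=False)[:len(sig)]
    sos = butter(4, band, btype='bandpass', fs=fs, output='sos')
    return sosfiltfilt(sos, clean)
```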

  3. Adaptively Tuned Iterative Low Dose CT Image Denoising.

    PubMed

    Hashemi, SayedMasoud; Paul, Narinder S; Beheshti, Soosan; Cobbold, Richard S C

    2015-01-01

    Improving image quality is a critical objective in low dose computed tomography (CT) imaging and is the primary focus of CT image denoising. State-of-the-art CT denoising algorithms are mainly based on iterative minimization of an objective function, in which the performance is controlled by regularization parameters. To achieve the best results, these should be chosen carefully. However, the parameter selection is typically performed in an ad hoc manner, which can cause the algorithms to converge slowly or become trapped in a local minimum. To overcome these issues, a noise confidence region evaluation (NCRE) method is used, which evaluates the denoising residuals iteratively and compares their statistics with those produced by additive noise. It then updates the parameters at the end of each iteration to achieve a better match to the noise statistics. By combining NCRE with the fundamentals of the block matching and 3D filtering (BM3D) approach, a new iterative CT image denoising method is proposed. It is shown that this new denoising method improves the BM3D performance in terms of both the mean square error and a structural similarity index. Moreover, simulations and patient results show that this method preserves the clinically important details of low dose CT images together with a substantial noise reduction. PMID:26089972
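
    The NCRE idea can be pictured as a loop that adjusts the denoiser until the removed residual is statistically consistent with the assumed additive noise. The sketch below is a heavily simplified stand-in: a Gaussian filter replaces BM3D, and a single residual statistic (its standard deviation versus a known or estimated noise level sigma_n) replaces the confidence-region test.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter  # stand-in for BM3D

    def tune_denoiser(noisy, sigma_n, strengths=np.linspace(0.5, 3.0, 26)):
        """Pick the filter strength whose residual best matches the
        assumed noise statistics, then return the denoised image."""
        best_s, best_gap = None, np.inf
        for s in strengths:
            residual = noisy - gaussian_filter(noisy, s)
            gap = abs(residual.std() - sigma_n)   # distance from noise stats
            if gap < best_gap:
                best_s, best_gap = s, gap
        return best_s, gaussian_filter(noisy, best_s)
    ```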

  4. Adaptively Tuned Iterative Low Dose CT Image Denoising

    PubMed Central

    Hashemi, SayedMasoud; Paul, Narinder S.; Beheshti, Soosan; Cobbold, Richard S. C.

    2015-01-01

    Improving image quality is a critical objective in low dose computed tomography (CT) imaging and is the primary focus of CT image denoising. State-of-the-art CT denoising algorithms are mainly based on iterative minimization of an objective function, in which the performance is controlled by regularization parameters. To achieve the best results, these should be chosen carefully. However, the parameter selection is typically performed in an ad hoc manner, which can cause the algorithms to converge slowly or become trapped in a local minimum. To overcome these issues, a noise confidence region evaluation (NCRE) method is used, which evaluates the denoising residuals iteratively and compares their statistics with those produced by additive noise. It then updates the parameters at the end of each iteration to achieve a better match to the noise statistics. By combining NCRE with the fundamentals of the block matching and 3D filtering (BM3D) approach, a new iterative CT image denoising method is proposed. It is shown that this new denoising method improves the BM3D performance in terms of both the mean square error and a structural similarity index. Moreover, simulations and patient results show that this method preserves the clinically important details of low dose CT images together with a substantial noise reduction. PMID:26089972

  5. Creators' Intentions Bias Judgments of Function Independently from Causal Inferences

    ERIC Educational Resources Information Center

    Chaigneau, Sergio E.; Castillo, Ramon D.; Martinez, Luis

    2008-01-01

    Participants learned about novel artifacts that were created for function X, but later used for function Y. When asked to rate the extent to which X and Y were a given artifact's function, participants consistently rated X higher than Y. In Experiments 1 and 2, participants were also asked to rate artifacts' efficiency to perform X and Y. This…

  6. Bayesian Inference for Functional Dynamics Exploring in fMRI Data.

    PubMed

    Guo, Xuan; Liu, Bing; Chen, Le; Chen, Guantao; Pan, Yi; Zhang, Jing

    2016-01-01

    This paper aims to review state-of-the-art Bayesian-inference-based methods applied to functional magnetic resonance imaging (fMRI) data. In particular, we focus on one specific long-standing challenge in the computational modeling of fMRI datasets: how to effectively explore typical functional interactions from fMRI time series and the corresponding boundaries of temporal segments. Bayesian inference is a method of statistical inference that has been shown to be a powerful tool for encoding dependence relationships among variables under uncertainty. Here we provide an introduction to a group of Bayesian-inference-based methods for fMRI data analysis, which were designed to detect magnitude or functional connectivity change points and to infer their functional interaction patterns based on the corresponding temporal boundaries. We also provide a comparison of three popular Bayesian models, that is, the Bayesian Magnitude Change Point Model (BMCPM), the Bayesian Connectivity Change Point Model (BCCPM), and the Dynamic Bayesian Variable Partition Model (DBVPM), and give a summary of their applications. We envision that more sophisticated Bayesian inference models will emerge and play increasingly important roles in modeling brain functions in the years to come.
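
    As a toy illustration of the magnitude change point idea only (none of the three reviewed models is reproduced here), the posterior over a single mean-shift location in a Gaussian series can be computed by scoring every split; segment means are profiled at their maximum-likelihood values rather than fully marginalised:

    ```python
    import numpy as np

    def change_point_posterior(y, sigma=1.0):
        """Posterior over the location of a single mean change point,
        assuming a known noise level sigma and a flat prior on locations."""
        y = np.asarray(y, dtype=float)
        n = len(y)
        logp = np.full(n, -np.inf)
        for k in range(2, n - 1):               # change occurs before index k
            a, b = y[:k], y[k:]
            rss = ((a - a.mean()) ** 2).sum() + ((b - b.mean()) ** 2).sum()
            logp[k] = -rss / (2.0 * sigma ** 2)
        p = np.exp(logp - logp.max())
        return p / p.sum()
    ```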

  7. Bayesian Inference for Functional Dynamics Exploring in fMRI Data.

    PubMed

    Guo, Xuan; Liu, Bing; Chen, Le; Chen, Guantao; Pan, Yi; Zhang, Jing

    2016-01-01

    This paper aims to review state-of-the-art Bayesian-inference-based methods applied to functional magnetic resonance imaging (fMRI) data. In particular, we focus on one specific long-standing challenge in the computational modeling of fMRI datasets: how to effectively explore typical functional interactions from fMRI time series and the corresponding boundaries of temporal segments. Bayesian inference is a method of statistical inference that has been shown to be a powerful tool for encoding dependence relationships among variables under uncertainty. Here we provide an introduction to a group of Bayesian-inference-based methods for fMRI data analysis, which were designed to detect magnitude or functional connectivity change points and to infer their functional interaction patterns based on the corresponding temporal boundaries. We also provide a comparison of three popular Bayesian models, that is, the Bayesian Magnitude Change Point Model (BMCPM), the Bayesian Connectivity Change Point Model (BCCPM), and the Dynamic Bayesian Variable Partition Model (DBVPM), and give a summary of their applications. We envision that more sophisticated Bayesian inference models will emerge and play increasingly important roles in modeling brain functions in the years to come. PMID:27034708

  8. Bayesian Inference for Functional Dynamics Exploring in fMRI Data

    PubMed Central

    Guo, Xuan; Liu, Bing; Chen, Le; Chen, Guantao; Pan, Yi; Zhang, Jing

    2016-01-01

    This paper aims to review state-of-the-art Bayesian-inference-based methods applied to functional magnetic resonance imaging (fMRI) data. In particular, we focus on one specific long-standing challenge in the computational modeling of fMRI datasets: how to effectively explore typical functional interactions from fMRI time series and the corresponding boundaries of temporal segments. Bayesian inference is a method of statistical inference that has been shown to be a powerful tool for encoding dependence relationships among variables under uncertainty. Here we provide an introduction to a group of Bayesian-inference-based methods for fMRI data analysis, which were designed to detect magnitude or functional connectivity change points and to infer their functional interaction patterns based on the corresponding temporal boundaries. We also provide a comparison of three popular Bayesian models, that is, the Bayesian Magnitude Change Point Model (BMCPM), the Bayesian Connectivity Change Point Model (BCCPM), and the Dynamic Bayesian Variable Partition Model (DBVPM), and give a summary of their applications. We envision that more sophisticated Bayesian inference models will emerge and play increasingly important roles in modeling brain functions in the years to come. PMID:27034708

  9. Role of Utility and Inference in the Evolution of Functional Information

    PubMed Central

    Sharov, Alexei A.

    2009-01-01

    Functional information means an encoded network of functions in living organisms from molecular signaling pathways to an organism’s behavior. It is represented by two components: code and an interpretation system, which together form a self-sustaining semantic closure. Semantic closure allows some freedom between components because small variations of the code are still interpretable. The interpretation system consists of inference rules that control the correspondence between the code and the function (phenotype) and determines the shape of the fitness landscape. The utility factor operates at multiple time scales: short-term selection drives evolution towards higher survival and reproduction rate within a given fitness landscape, and long-term selection favors those fitness landscapes that support adaptability and lead to evolutionary expansion of certain lineages. Inference rules make short-term selection possible by shaping the fitness landscape and defining possible directions of evolution, but they are under control of the long-term selection of lineages. Communication normally occurs within a set of agents with compatible interpretation systems, which I call communication system. Functional information cannot be directly transferred between communication systems with incompatible inference rules. Each biological species is a genetic communication system that carries unique functional information together with inference rules that determine evolutionary directions and constraints. This view of the relation between utility and inference can resolve the conflict between realism/positivism and pragmatism. Realism overemphasizes the role of inference in evolution of human knowledge because it assumes that logic is embedded in reality. Pragmatism substitutes usefulness for truth and therefore ignores the advantage of inference. The proposed concept of evolutionary pragmatism rejects the idea that logic is embedded in reality; instead, inference rules are

  10. Craniofacial biomechanics and functional and dietary inferences in hominin paleontology.

    PubMed

    Grine, Frederick E; Judex, Stefan; Daegling, David J; Ozcivici, Engin; Ungar, Peter S; Teaford, Mark F; Sponheimer, Matt; Scott, Jessica; Scott, Robert S; Walker, Alan

    2010-04-01

    Finite element analysis (FEA) is a potentially powerful tool by which the mechanical behaviors of different skeletal and dental designs can be investigated, and, as such, has become increasingly popular for biomechanical modeling and inferring the behavior of extinct organisms. However, the use of FEA to extrapolate from characterization of the mechanical environment to questions of trophic or ecological adaptation in a fossil taxon is both challenging and perilous. Here, we consider the problems and prospects of FEA applications in paleoanthropology, and provide a critical examination of one such study of the trophic adaptations of Australopithecus africanus. This particular FEA is evaluated with regard to 1) the nature of the A. africanus cranial composite, 2) model validation, 3) decisions made with respect to model parameters, 4) adequacy of data presentation, and 5) interpretation of the results. Each suggests that the results reflect methodological decisions as much as any underlying biological significance. Notwithstanding these issues, this model yields predictions that follow from the posited emphasis on premolar use by A. africanus. These predictions are tested with data from the paleontological record, including a phylogenetically-informed consideration of relative premolar size, and postcanine microwear fabrics and antemortem enamel chipping. In each instance, the data fail to conform to predictions from the model. This model thus serves to emphasize the need for caution in the application of FEA in paleoanthropological enquiry. Theoretical models can be instrumental in the construction of testable hypotheses; but ultimately, the studies that serve to test these hypotheses - rather than data from the models - should remain the source of information pertaining to hominin paleobiology and evolution. PMID:20227747

  11. Craniofacial biomechanics and functional and dietary inferences in hominin paleontology.

    PubMed

    Grine, Frederick E; Judex, Stefan; Daegling, David J; Ozcivici, Engin; Ungar, Peter S; Teaford, Mark F; Sponheimer, Matt; Scott, Jessica; Scott, Robert S; Walker, Alan

    2010-04-01

    Finite element analysis (FEA) is a potentially powerful tool by which the mechanical behaviors of different skeletal and dental designs can be investigated, and, as such, has become increasingly popular for biomechanical modeling and inferring the behavior of extinct organisms. However, the use of FEA to extrapolate from characterization of the mechanical environment to questions of trophic or ecological adaptation in a fossil taxon is both challenging and perilous. Here, we consider the problems and prospects of FEA applications in paleoanthropology, and provide a critical examination of one such study of the trophic adaptations of Australopithecus africanus. This particular FEA is evaluated with regard to 1) the nature of the A. africanus cranial composite, 2) model validation, 3) decisions made with respect to model parameters, 4) adequacy of data presentation, and 5) interpretation of the results. Each suggests that the results reflect methodological decisions as much as any underlying biological significance. Notwithstanding these issues, this model yields predictions that follow from the posited emphasis on premolar use by A. africanus. These predictions are tested with data from the paleontological record, including a phylogenetically-informed consideration of relative premolar size, and postcanine microwear fabrics and antemortem enamel chipping. In each instance, the data fail to conform to predictions from the model. This model thus serves to emphasize the need for caution in the application of FEA in paleoanthropological enquiry. Theoretical models can be instrumental in the construction of testable hypotheses; but ultimately, the studies that serve to test these hypotheses - rather than data from the models - should remain the source of information pertaining to hominin paleobiology and evolution.

  12. Generalised partition functions: inferences on phase space distributions

    NASA Astrophysics Data System (ADS)

    Treumann, Rudolf A.; Baumjohann, Wolfgang

    2016-06-01

    It is demonstrated that the statistical mechanical partition function can be used to construct various different forms of phase space distributions. This indicates that its structure is not restricted to the Gibbs-Boltzmann factor prescription, which is based on counting statistics. With the widely used replacement of the Boltzmann factor by a generalised Lorentzian (also known as the q-deformed exponential function, where κ = 1/|q - 1|, with κ, q ∈ R), both the kappa-Bose and kappa-Fermi partition functions are obtained in quite a straightforward way, from which the conventional Bose and Fermi distributions follow for κ → ∞. For κ ≠ ∞ these are subject to the restriction that they can be used only at temperatures far from zero. They thus, as shown earlier, have little value for quantum physics. This is reasonable, because physical κ systems imply strong correlations which are absent at zero temperature where, apart from stochastics, all dynamical interactions are frozen. In the classical large-temperature limit one obtains physically reasonable κ distributions which depend on energy (respectively momentum) as well as on the chemical potential. Looking for other functional dependencies, we examine whether Bessel functions can be used to obtain valid distributions. Again, and for the same reason, no Fermi and Bose distributions exist in the low-temperature limit. However, a classical Bessel-Boltzmann distribution can be constructed which is a Bessel-modified Lorentzian distribution. Whether it makes any physical sense remains an open question. This is not investigated here. The choice of Bessel functions is motivated solely by their convergence properties and not by reference to any physical demands. This result suggests that the Gibbs-Boltzmann partition function is fundamental not only to Gibbs-Boltzmann but also to a large class of generalised Lorentzian distributions as well as to the corresponding nonextensive statistical mechanics.
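
    For concreteness, the replacement referred to above can be written out; this is the standard Tsallis q-deformed exponential, with the κ parametrisation stated in the abstract:

    ```latex
    \[
      e_q(x) = \bigl[\,1 + (1-q)\,x\,\bigr]^{\frac{1}{1-q}},
      \qquad \lim_{q\to 1} e_q(x) = e^{x},
    \]
    \[
      e^{-\beta\epsilon} \;\longrightarrow\;
      \left(1 + \frac{\beta\epsilon}{\kappa}\right)^{-\kappa},
      \qquad \kappa = \frac{1}{|q-1|},
    \]
    ```

    so the ordinary Boltzmann factor is recovered in the limit κ → ∞.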

  13. The use of gene clusters to infer functional coupling.

    SciTech Connect

    Overbeek, R.; Fonstein, M.; D'Souza, M.; Pusch, G. D.; Mathematics and Computer Science; Integrated Genomics; Univ. of Chicago

    1999-03-01

    Previously, we presented evidence that it is possible to predict functional coupling between genes based on conservation of gene clusters between genomes. With the rapid increase in the availability of prokaryotic sequence data, it has become possible to verify and apply the technique. In this paper, we extend our characterization of the parameters that determine the utility of the approach, and we generalize the approach in a way that supports detection of common classes of functionally coupled genes (e.g., transport and signal transduction clusters). Now that the analysis includes over 30 complete or nearly complete genomes, it has become clear that this approach will play a significant role in supporting efforts to assign functionality to the remaining uncharacterized genes in sequenced genomes.
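
    A minimal sketch of the underlying scoring idea, with hypothetical input data (a real pipeline works from orthologous gene families and strand-aware distances on fully annotated genomes): conserved chromosomal proximity of a gene pair across many genomes is taken as evidence of functional coupling.

    ```python
    from itertools import combinations

    # Hypothetical input: genome -> {gene_family: chromosomal position (bp)}
    def coupling_scores(genomes, max_gap=300):
        """Count, for each gene-family pair, the genomes in which the two
        genes lie within max_gap bp of each other."""
        scores = {}
        for positions in genomes.values():
            for fam_a, fam_b in combinations(sorted(positions), 2):
                if abs(positions[fam_a] - positions[fam_b]) <= max_gap:
                    pair = (fam_a, fam_b)
                    scores[pair] = scores.get(pair, 0) + 1
        return scores

    genomes = {
        "genome1": {"trpA": 1000, "trpB": 1200, "rpoB": 90000},
        "genome2": {"trpA": 5000, "trpB": 5250, "rpoB": 40000},
    }
    print(coupling_scores(genomes))  # {('trpA', 'trpB'): 2}
    ```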

  14. The Use of Gene Clusters to Infer Functional Coupling

    NASA Astrophysics Data System (ADS)

    Overbeek, Ross; Fonstein, Michael; D'Souza, Mark; Pusch, Gordon D.; Maltsev, Natalia

    1999-03-01

    Previously, we presented evidence that it is possible to predict functional coupling between genes based on conservation of gene clusters between genomes. With the rapid increase in the availability of prokaryotic sequence data, it has become possible to verify and apply the technique. In this paper, we extend our characterization of the parameters that determine the utility of the approach, and we generalize the approach in a way that supports detection of common classes of functionally coupled genes (e.g., transport and signal transduction clusters). Now that the analysis includes over 30 complete or nearly complete genomes, it has become clear that this approach will play a significant role in supporting efforts to assign functionality to the remaining uncharacterized genes in sequenced genomes.

  15. Bayesian spatiotemporal inference in functional magnetic resonance imaging.

    PubMed

    Gössl, C; Auer, D P; Fahrmeir, L

    2001-06-01

    Mapping of the human brain by means of functional magnetic resonance imaging (fMRI) is an emerging field in cognitive and clinical neuroscience. Current techniques to detect activated areas of the brain mostly proceed in two steps. First, conventional methods of correlation, regression, and time series analysis are used to assess activation by a separate, pixelwise comparison of the fMRI signal time courses to the reference function of a presented stimulus. Spatial aspects caused by correlations between neighboring pixels are considered in a separate second step, if at all. The aim of this article is to present hierarchical Bayesian approaches that allow one to simultaneously incorporate temporal and spatial dependencies between pixels directly in the model formulation. For reasons of computational feasibility, models have to be comparatively parsimonious, without oversimplifying. We introduce parametric and semiparametric spatial and spatiotemporal models that proved appropriate and illustrate their performance applied to visual fMRI data.

  16. Study on an improved wavelet threshold denoising for the time-resolved photoacoustic signals of the glucose solution

    NASA Astrophysics Data System (ADS)

    Ren, Zhong; Liu, Guodong; Huang, Zhen

    2015-08-01

    Although time-domain, frequency-domain, and wavelet-domain methods can be used for signal denoising or filtering, each has limitations for denoising blood glucose photoacoustic signals. In this paper, an improved wavelet threshold denoising method is used to remove the noise from blood glucose photoacoustic signals. To overcome some drawbacks of classical wavelet threshold denoising, an improved wavelet threshold function is proposed. In simulation experiments, the denoising results of the improved wavelet threshold function are compared with those of other functions. The improved function is then applied to denoising the time-resolved photoacoustic signals of glucose solutions. The experimental results verify that denoising with the improved wavelet threshold function is effective. The improved threshold function is more flexible than others because it uses two threshold values and two factors. It therefore has potential value in the denoising of blood glucose photoacoustic signals.

  17. Inferring gene expression dynamics via functional regression analysis

    PubMed Central

    Müller, Hans-Georg; Chiou, Jeng-Min; Leng, Xiaoyan

    2008-01-01

    Background Temporal gene expression profiles characterize the time-dynamics of expression of specific genes and are increasingly collected in current gene expression experiments. In the analysis of experiments where gene expression is obtained over the life cycle, it is of interest to relate temporal patterns of gene expression associated with different developmental stages to each other to study patterns of long-term developmental gene regulation. We use tools from functional data analysis to study dynamic changes by relating temporal gene expression profiles of different developmental stages to each other. Results We demonstrate that functional regression methodology can pinpoint relationships that exist between temporal gene expression profiles for different life cycle phases and incorporates dimension reduction as needed for these high-dimensional data. By applying these tools, gene expression profiles for pupa and adult phases are found to be strongly related to the profiles of the same genes obtained during the embryo phase. Moreover, one can distinguish between gene groups that exhibit relationships with positive and others with negative associations between later life and embryonal expression profiles. Specifically, we find a positive relationship in expression for muscle development related genes, and a negative relationship for strictly maternal genes for Drosophila, using temporal gene expression profiles. Conclusion Our findings point to specific reactivation patterns of gene expression during the Drosophila life cycle which differ in characteristic ways between various gene groups. Functional regression emerges as a useful tool for relating gene expression patterns from different developmental stages, and avoids the problems with large numbers of parameters and multiple testing that affect alternative approaches. PMID:18226220

  18. Structure and function of the mammalian middle ear. II: Inferring function from structure.

    PubMed

    Mason, Matthew J

    2016-02-01

    Anatomists and zoologists who study middle ear morphology are often interested to know what the structure of an ear can reveal about the auditory acuity and hearing range of the animal in question. This paper represents an introduction to middle ear function targeted at biological scientists with little experience in the field of auditory acoustics. Simple models of impedance matching are first described, based on the familiar concepts of the area and lever ratios of the middle ear. However, using the Mongolian gerbil Meriones unguiculatus as a test case, it is shown that the predictions made by such 'ideal transformer' models are generally not consistent with measurements derived from recent experimental studies. Electrical analogue models represent a better way to understand some of the complex, frequency-dependent responses of the middle ear: these have been used to model the effects of middle ear subcavities, and the possible function of the auditory ossicles as a transmission line. The concepts behind such models are explained here, again aimed at those with little background knowledge. Functional inferences based on middle ear anatomy are more likely to be valid at low frequencies. Acoustic impedance at low frequencies is dominated by compliance; expanded middle ear cavities, found in small desert mammals including gerbils, jerboas and the sengi Macroscelides, are expected to improve low-frequency sound transmission, as long as the ossicular system is not too stiff.
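
    The area and lever ratios mentioned above combine into the familiar ideal-transformer relations; in standard textbook notation (not the paper's), with tympanic membrane area A_t, stapes footplate area A_f, and malleus/incus lever arms l_m and l_i:

    ```latex
    \[
      \frac{p_{\text{oval}}}{p_{\text{tymp}}} \approx \frac{A_t}{A_f}\,\frac{l_m}{l_i},
      \qquad
      \frac{Z_{\text{tymp}}}{Z_{\text{oval}}}
        \approx \left(\frac{A_f}{A_t}\right)^{2}\left(\frac{l_i}{l_m}\right)^{2}.
    \]
    ```

    The second relation shows how the large cochlear input impedance is scaled down at the eardrum; the abstract's point is that real ears deviate from these frequency-independent ideal-transformer predictions.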

  19. Electrocardiogram signal denoising based on a new improved wavelet thresholding

    NASA Astrophysics Data System (ADS)

    Han, Guoqiang; Xu, Zhijun

    2016-08-01

    Good-quality electrocardiogram (ECG) recordings are utilized by physicians for the interpretation and identification of physiological and pathological phenomena. In general, ECG signals may be contaminated by various noises, such as baseline wander, power line interference, and electromagnetic interference, during the gathering and recording process. As ECG signals are non-stationary physiological signals, the wavelet transform has proven to be an effective tool for discarding noise from corrupted signals. A new compromise threshold function, a sigmoid-function-based thresholding scheme, is adopted in processing ECG signals. Compared with other methods, such as hard/soft thresholding or other existing thresholding functions, the new algorithm has many advantages for noise reduction in ECG signals. It overcomes the discontinuity at ±T of hard thresholding and reduces the fixed deviation of soft thresholding. The improved wavelet threshold denoising is shown to be more efficient than existing algorithms for ECG signal denoising. The signal-to-noise ratio, mean square error, and percent root mean square difference are calculated as quantitative tools to verify the denoising performance. The experimental results reveal that the P, Q, R, and S waves of the ECG signals after denoising coincide with those of the original ECG signals.

  20. Electrocardiogram signal denoising based on a new improved wavelet thresholding.

    PubMed

    Han, Guoqiang; Xu, Zhijun

    2016-08-01

    Good-quality electrocardiogram (ECG) recordings are utilized by physicians for the interpretation and identification of physiological and pathological phenomena. In general, ECG signals may be contaminated by various noises, such as baseline wander, power line interference, and electromagnetic interference, during the gathering and recording process. As ECG signals are non-stationary physiological signals, the wavelet transform has proven to be an effective tool for discarding noise from corrupted signals. A new compromise threshold function, a sigmoid-function-based thresholding scheme, is adopted in processing ECG signals. Compared with other methods, such as hard/soft thresholding or other existing thresholding functions, the new algorithm has many advantages for noise reduction in ECG signals. It overcomes the discontinuity at ±T of hard thresholding and reduces the fixed deviation of soft thresholding. The improved wavelet threshold denoising is shown to be more efficient than existing algorithms for ECG signal denoising. The signal-to-noise ratio, mean square error, and percent root mean square difference are calculated as quantitative tools to verify the denoising performance. The experimental results reveal that the P, Q, R, and S waves of the ECG signals after denoising coincide with those of the original ECG signals. PMID:27587134

  1. Autocorrelation based denoising of manatee vocalizations using the undecimated discrete wavelet transform.

    PubMed

    Gur, Berke M; Niezrecki, Christopher

    2007-07-01

    Recent interest in West Indian manatee (Trichechus manatus latirostris) vocalizations has been driven primarily by an effort to reduce manatee mortality due to watercraft collisions. A warning system based on passive acoustic detection of manatee vocalizations is desired. The success and feasibility of such a system depend on effective denoising of the vocalizations in the presence of high levels of background noise. In the last decade, simple and effective wavelet-domain nonlinear denoising methods have emerged as an alternative to linear estimation methods. However, the denoising performance of these methods degrades considerably with decreasing signal-to-noise ratio (SNR), so they are not suited to denoising manatee vocalizations, for which the typical SNR is below 0 dB. Manatee vocalizations possess strong harmonic content and a slowly decaying autocorrelation function. In this paper, an efficient denoising scheme is introduced that exploits both the autocorrelation function of manatee vocalizations and the effectiveness of nonlinear wavelet-transform-based denoising algorithms. The suggested wavelet-based denoising algorithm is shown to outperform linear filtering methods, extending the detection range of vocalizations.
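
    A minimal sketch of the undecimated (stationary) wavelet denoising stage using PyWavelets; the autocorrelation-based weighting the paper couples with it is omitted, and the wavelet, level, and universal threshold are illustrative assumptions:

    ```python
    import numpy as np
    import pywt

    def udwt_denoise(x, wavelet="db8", level=5):
        """Undecimated wavelet denoising with soft thresholding. The input
        is padded so its length is a multiple of 2**level, as pywt.swt
        requires."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        pad = (-n) % (2 ** level)
        xp = np.pad(x, (0, pad), mode="symmetric")
        coeffs = pywt.swt(xp, wavelet, level=level)
        # Finest-scale detail coefficients give a robust noise estimate.
        sigma = np.median(np.abs(coeffs[-1][1])) / 0.6745
        t = sigma * np.sqrt(2.0 * np.log(len(xp)))
        coeffs = [(cA, pywt.threshold(cD, t, mode="soft")) for cA, cD in coeffs]
        return pywt.iswt(coeffs, wavelet)[:n]
    ```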

  2. Autocorrelation based denoising of manatee vocalizations using the undecimated discrete wavelet transform.

    PubMed

    Gur, Berke M; Niezrecki, Christopher

    2007-07-01

    Recent interest in West Indian manatee (Trichechus manatus latirostris) vocalizations has been driven primarily by an effort to reduce manatee mortality due to watercraft collisions. A warning system based on passive acoustic detection of manatee vocalizations is desired. The success and feasibility of such a system depend on effective denoising of the vocalizations in the presence of high levels of background noise. In the last decade, simple and effective wavelet-domain nonlinear denoising methods have emerged as an alternative to linear estimation methods. However, the denoising performance of these methods degrades considerably with decreasing signal-to-noise ratio (SNR), so they are not suited to denoising manatee vocalizations, for which the typical SNR is below 0 dB. Manatee vocalizations possess strong harmonic content and a slowly decaying autocorrelation function. In this paper, an efficient denoising scheme is introduced that exploits both the autocorrelation function of manatee vocalizations and the effectiveness of nonlinear wavelet-transform-based denoising algorithms. The suggested wavelet-based denoising algorithm is shown to outperform linear filtering methods, extending the detection range of vocalizations. PMID:17614478

  3. Parameter optimization for image denoising based on block matching and 3D collaborative filtering

    NASA Astrophysics Data System (ADS)

    Pedada, Ramu; Kugu, Emin; Li, Jiang; Yue, Zhanfeng; Shen, Yuzhong

    2009-02-01

    Clinical MRI images are generally corrupted by random noise during acquisition, which blurs subtle structural features. Many denoising methods have been proposed to remove noise from corrupted images, at the expense of distorting structural features. There is therefore always a compromise between removing noise and preserving structural information. For a specific denoising method, it is crucial to tune it so that the best tradeoff can be obtained. In this paper, we define several cost functions to assess the quality of noise removal and of the structural information preserved in the denoised image. The Strength Pareto Evolutionary Algorithm 2 (SPEA2) is utilized to simultaneously optimize the cost functions by modifying parameters associated with the denoising methods. The effectiveness of the algorithm is demonstrated by applying the proposed optimization procedure to enhance image denoising results using block matching and 3D collaborative filtering. Experimental results show that the proposed optimization algorithm can significantly improve the performance of image denoising methods in terms of noise removal and structural information preservation.
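
    At its core, the method searches parameter space for settings that are not dominated in either objective. The sketch below shows only the Pareto selection step over precomputed costs; SPEA2 itself additionally evolves the candidate set and assigns density-based fitness, neither of which is reproduced here:

    ```python
    def pareto_front(candidates):
        """candidates: list of (params, noise_cost, structure_cost).
        Returns the non-dominated subset (lower cost is better in both)."""
        front = []
        for i, (_, n_i, s_i) in enumerate(candidates):
            dominated = any(
                n_j <= n_i and s_j <= s_i and (n_j < n_i or s_j < s_i)
                for j, (_, n_j, s_j) in enumerate(candidates) if j != i
            )
            if not dominated:
                front.append(candidates[i])
        return front
    ```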

  4. Locally Based Kernel PLS Regression De-noising with Application to Event-Related Potentials

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Tino, Peter

    2002-01-01

    Our approach exploits the close relationship between signal de-noising and regression problems concerned with estimating functions that reflect the dependency between a set of inputs and dependent outputs corrupted by some level of noise.

  5. Crustal structure beneath northeast India inferred from receiver function modeling

    NASA Astrophysics Data System (ADS)

    Borah, Kajaljyoti; Bora, Dipok K.; Goyal, Ayush; Kumar, Raju

    2016-09-01

    We estimated the crustal shear velocity structure beneath ten broadband seismic stations of northeast India by using the H-Vp/Vs stacking method and a non-linear direct search approach, the Neighbourhood Algorithm (NA), followed by joint inversion of Rayleigh wave group velocities and receiver functions calculated from teleseismic earthquake data. Results show significant variations of thickness, shear velocity (Vs), and Vp/Vs ratio in the crust of the study region. The inverted shear wave velocity models show crustal thickness variations of 32-36 km in the Shillong Plateau (north), 36-40 km in the Assam Valley, and ∼44 km in the Lesser Himalaya (south). The average Vp/Vs ratio in the Shillong Plateau is lower (1.73-1.77) than in the Assam Valley and Lesser Himalaya (∼1.80). The average crustal shear velocity beneath the study region varies from 3.4 to 3.5 km/s. The sediment structure beneath the Shillong Plateau and Assam Valley shows a 1-2 km thick sediment layer with low Vs (2.5-2.9 km/s) and a high Vp/Vs ratio (1.8-2.1), while a greater thickness (4 km) with similar Vs and higher Vp/Vs (∼2.5) is observed at RUP (Lesser Himalaya). Both the Shillong Plateau and the Assam Valley show thick upper and middle crust (10-20 km) and thin (4-9 km) lower crust. The average Vp/Vs ratios suggest that the crust is felsic-to-intermediate beneath the Shillong Plateau and intermediate-to-mafic beneath the Assam Valley. Results show that the lower crustal rocks beneath the Shillong Plateau and Assam Valley lie between mafic granulite and mafic garnet granulite.
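
    The H-Vp/Vs stacking step rests on the standard moveout relation for the Moho-converted Ps phase (Zhu and Kanamori style), in which the Ps delay time for a ray with ray parameter p constrains crustal thickness H jointly with Vp and Vs:

    ```latex
    \[
      t_{Ps} \;=\; H\left(\sqrt{\frac{1}{V_s^{2}} - p^{2}}
                       \;-\; \sqrt{\frac{1}{V_p^{2}} - p^{2}}\right).
    \]
    ```

    Because a single delay time trades H off against Vp/Vs, the method stacks the amplitudes of several converted phases (Ps and its crustal multiples) over a grid of (H, Vp/Vs) values to resolve both parameters.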

  6. Pragmatic Inference Abilities in Individuals with Asperger Syndrome or High-Functioning Autism. A Review

    ERIC Educational Resources Information Center

    Loukusa, Soile; Moilanen, Irma

    2009-01-01

    This review summarizes studies involving pragmatic language comprehension and inference abilities in individuals with Asperger syndrome or high-functioning autism. Systematic searches of three electronic databases, selected journals, and reference lists identified 20 studies meeting the inclusion criteria. These studies were evaluated in terms of:…

  7. Function formula oriented construction of Bayesian inference nets for diagnosis of cardiovascular disease.

    PubMed

    Sekar, Booma Devi; Dong, Mingchui

    2014-01-01

    An intelligent cardiovascular disease (CVD) diagnosis system using hemodynamic parameters (HDPs) derived from the sphygmogram (SPG) signal is presented to support emerging patient-centric healthcare models. To replicate the clinical approach of diagnosis through a staged decision process, Bayesian inference nets (BIN) are adapted. New approaches are proposed for constructing a hierarchical multistage BIN using defined function formulas, together with a method employing fuzzy logic (FL) to quantify inference nodes with dynamic values of statistical parameters. The suggested methodology is validated by constructing hierarchical Bayesian fuzzy inference nets (HBFIN) to diagnose various heart pathologies from the deduced HDPs. Preliminary diagnostic results show that the proposed methodology is valid and effective in the diagnosis of cardiovascular disease.

  8. Event-related functional magnetic resonance imaging: modelling, inference and optimization.

    PubMed Central

    Josephs, O; Henson, R N

    1999-01-01

    Event-related functional magnetic resonance imaging is a recent and popular technique for detecting haemodynamic responses to brief stimuli or events. However, the design of event-related experiments requires careful consideration of numerous issues of measurement, modelling and inference. Here we review these issues, with particular emphasis on the use of basis functions within a general linear modelling framework to model and make inferences about the haemodynamic response. With these models in mind, we then consider how the properties of functional magnetic resonance imaging data determine the optimal experimental design for a specific hypothesis, in terms of stimulus ordering and interstimulus interval. Finally, we illustrate various event-related models with examples from recent studies. PMID:10466147
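
    A minimal sketch of the basis-function modelling described above: a stimulus train is convolved with a canonical double-gamma haemodynamic response and its temporal derivative to form GLM regressors. The gamma-shape constants are common SPM-like defaults, assumed here for illustration rather than taken from the paper:

    ```python
    import numpy as np
    from scipy.stats import gamma

    def hrf(t):
        """Canonical double-gamma haemodynamic response (SPM-like constants)."""
        peak = gamma.pdf(t, 6)          # positive response peaking ~5 s
        undershoot = gamma.pdf(t, 16)   # late undershoot
        h = peak - undershoot / 6.0
        return h / h.max()

    def design_matrix(onsets, n_scans, tr=2.0):
        """Convolve a stimulus train (onsets in seconds) with the HRF and
        its temporal derivative: two basis functions of the GLM."""
        t = np.arange(0, 32, tr)
        stim = np.zeros(n_scans)
        stim[(np.asarray(onsets) / tr).astype(int)] = 1.0
        x1 = np.convolve(stim, hrf(t))[:n_scans]
        x2 = np.convolve(stim, np.gradient(hrf(t)))[:n_scans]
        return np.column_stack([x1, x2, np.ones(n_scans)])  # + intercept
    ```

    Fitting the GLM then reduces to ordinary least squares on this matrix, with inference performed on the estimated basis-function weights.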

  9. Vikodak - A Modular Framework for Inferring Functional Potential of Microbial Communities from 16S Metagenomic Datasets

    PubMed Central

    Nagpal, Sunil; Haque, Mohammed Monzoorul; Mande, Sharmila S.

    2016-01-01

    Background The overall metabolic/functional potential of any given environmental niche is a function of the sum total of genes/proteins/enzymes that are encoded and expressed by various interacting microbes residing in that niche. Consequently, prior (collated) information pertaining to the genes and enzymes encoded by the resident microbes can aid in indirectly (re)constructing/inferring the metabolic/functional potential of a given microbial community (given its taxonomic abundance profile). In this study, we present Vikodak, a multi-modular package that is based on the above assumption and automates inferring and/or comparing the functional characteristics of an environment using taxonomic abundance generated from one or more environmental sample datasets. With the underlying assumptions of co-metabolism and independent contributions of different microbes in a community, a concerted effort has been made to accommodate microbial co-existence patterns in the various modules incorporated in Vikodak. Results Validation experiments on over 1400 metagenomic samples have confirmed the utility of Vikodak in (a) deciphering enzyme abundance profiles of any KEGG metabolic pathway, (b) functional resolution of distinct metagenomic environments, (c) inferring patterns of functional interaction between resident microbes, and (d) automating statistical comparison of functional features of the studied microbiomes. Novel features incorporated in Vikodak also facilitate automatic removal of false positives and spurious functional predictions. Conclusions With novel provisions for comprehensive functional analysis, inclusion of microbial co-existence pattern based algorithms, automated inter-environment comparisons, in-depth analysis of individual metabolic pathways, and greater flexibility at the user end, Vikodak is expected to be an important value addition to the family of existing tools for 16S-based function prediction. Availability and Implementation A web implementation of Vikodak

  10. Research on denoising in WDM laser inter-satellites communication system

    NASA Astrophysics Data System (ADS)

    Wen, Chuanhua; Su, Yang; Li, Yuquan; Zhou, Li

    2006-09-01

    This paper proposes a wavelet-analysis method for de-noising at the receiver in a WDM inter-satellite laser communication system. Background noises such as galactic noise and sunlight reduce the received power. The noisy signal is decomposed using wavelets and wavelet packets into wavelet coefficients, and the lower-order coefficients are removed by applying a soft threshold. The de-noised signal is obtained by reconstruction from the remaining coefficients. We evaluate different analyzing wavelets for de-noising at the receiver in inter-satellite laser communication. Simulation results indicate that the wavelet de-noising method, used with different analyzing wavelets, improves the signal-to-noise ratio (SNR) by about 2 dB when the signal frequency is 1.5 GHz.

  11. Bayesian functional integral method for inferring continuous data from discrete measurements.

    PubMed

    Heuett, William J; Miller, Bernard V; Racette, Susan B; Holloszy, John O; Chow, Carson C; Periwal, Vipul

    2012-02-01

    Inference of the insulin secretion rate (ISR) from C-peptide measurements as a quantification of pancreatic β-cell function is clinically important in diseases related to reduced insulin sensitivity and insulin action. ISR derived from C-peptide concentration is an example of nonparametric Bayesian model selection where a proposed ISR time-course is considered to be a "model". An inferred value of inaccessible continuous variables from discrete observable data is often problematic in biology and medicine, because it is a priori unclear how robust the inference is to the deletion of data points, and a closely related question, how much smoothness or continuity the data actually support. Predictions weighted by the posterior distribution can be cast as functional integrals as used in statistical field theory. Functional integrals are generally difficult to evaluate, especially for nonanalytic constraints such as positivity of the estimated parameters. We propose a computationally tractable method that uses the exact solution of an associated likelihood function as a prior probability distribution for a Markov-chain Monte Carlo evaluation of the posterior for the full model. As a concrete application of our method, we calculate the ISR from actual clinical C-peptide measurements in human subjects with varying degrees of insulin sensitivity. Our method demonstrates the feasibility of functional integral Bayesian model selection as a practical method for such data-driven inference, allowing the data to determine the smoothing timescale and the width of the prior probability distribution on the space of models. In particular, our model comparison method determines the discrete time-step for interpolation of the unobservable continuous variable that is supported by the data. Attempts to go to finer discrete time-steps lead to less likely models. PMID:22325261

  12. Relevant modes selection method based on Spearman correlation coefficient for laser signal denoising using empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Duan, Yabo; Song, Chengtian

    2016-10-01

    Empirical mode decomposition (EMD) is a recently proposed method for denoising nonlinear and non-stationary laser signals. A noisy signal is broken down using EMD into oscillatory components called intrinsic mode functions (IMFs). Thresholding-based denoising and correlation-based partial reconstruction of IMFs are the two main research directions for EMD-based denoising. Similar to other decomposition-based denoising approaches, EMD-based denoising methods require a reliable threshold to determine which IMFs are noise components and which are noise-free components. In this work, we propose a new approach in which each IMF is first denoised using EMD interval thresholding (EMD-IT), and a robust thresholding process based on the Spearman correlation coefficient is then used for relevant mode selection. The proposed method tackles the problem using a thresholding-based denoising approach coupled with partial reconstruction of the relevant IMFs. Other traditional denoising methods, including correlation-based EMD partial reconstruction (EMD-Correlation) and discrete Fourier transform and wavelet-based methods, are investigated to provide a comparison with the proposed technique. Simulation and test results demonstrate the superior performance of the proposed method when compared with the other methods.
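
    A minimal sketch of the correlation-based mode selection using the PyEMD package (pip install EMD-signal); the per-IMF interval-thresholding (EMD-IT) step is omitted, and the cutoff rho_min is an illustrative assumption:

    ```python
    import numpy as np
    from PyEMD import EMD
    from scipy.stats import spearmanr

    def emd_spearman_denoise(x, rho_min=0.1):
        """Partial reconstruction from relevant IMFs: decompose the signal,
        rank each IMF by its Spearman correlation with the input, and keep
        only the modes whose correlation exceeds rho_min."""
        x = np.asarray(x, dtype=float)
        imfs = EMD().emd(x)
        keep = [imf for imf in imfs if abs(spearmanr(imf, x)[0]) >= rho_min]
        return np.sum(keep, axis=0) if keep else np.zeros_like(x)
    ```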

  13. Empirical Mode Decomposition Technique with Conditional Mutual Information for Denoising Operational Sensor Data

    SciTech Connect

    Omitaomu, Olufemi A; Protopopescu, Vladimir A; Ganguly, Auroop R

    2011-01-01

    A new approach is developed for denoising signals using the Empirical Mode Decomposition (EMD) technique and the Information-theoretic method. The EMD technique is applied to decompose a noisy sensor signal into the so-called intrinsic mode functions (IMFs). These functions are of the same length and in the same time domain as the original signal. Therefore, the EMD technique preserves varying frequency in time. Assuming the given signal is corrupted by high-frequency Gaussian noise implies that most of the noise should be captured by the first few modes. Therefore, our proposition is to separate the modes into high-frequency and low-frequency groups. We applied an information-theoretic method, namely mutual information, to determine the cut-off for separating the modes. A denoising procedure is applied only to the high-frequency group using a shrinkage approach. Upon denoising, this group is combined with the original low-frequency group to obtain the overall denoised signal. We illustrate our approach with simulated and real-world data sets. The results are compared to two popular denoising techniques in the literature, namely discrete Fourier transform (DFT) and discrete wavelet transform (DWT). We found that our approach performs better than DWT and DFT in most cases, and comparatively to DWT in some cases in terms of: (i) mean square error, (ii) recomputed signal-to-noise ratio, and (iii) visual quality of the denoised signals.

  14. Integrating evolutionary and functional approaches to infer adaptation at specific loci.

    PubMed

    Storz, Jay F; Wheat, Christopher W

    2010-09-01

    Inferences about adaptation at specific loci are often exclusively based on the static analysis of DNA sequence variation. Ideally, population-genetic evidence for positive selection serves as a stepping-off point for experimental studies to elucidate the functional significance of the putatively adaptive variation. We argue that inferences about adaptation at specific loci are best achieved by integrating the indirect, retrospective insights provided by population-genetic analyses with the more direct, mechanistic insights provided by functional experiments. Integrative studies of adaptive genetic variation may sometimes be motivated by experimental insights into molecular function, which then provide the impetus to perform population genetic tests to evaluate whether the functional variation is of adaptive significance. In other cases, studies may be initiated by genome scans of DNA variation to identify candidate loci for recent adaptation. Results of such analyses can then motivate experimental efforts to test whether the identified candidate loci do in fact contribute to functional variation in some fitness-related phenotype. Functional studies can provide corroborative evidence for positive selection at particular loci, and can potentially reveal specific molecular mechanisms of adaptation.

  15. Using evolutionary sequence variation to make inferences about protein structure and function

    NASA Astrophysics Data System (ADS)

    Colwell, Lucy

    2015-03-01

    The evolutionary trajectory of a protein through sequence space is constrained by its function. Collections of sequence homologs record the outcomes of millions of evolutionary experiments in which the protein evolves according to these constraints. The explosive growth in the number of available protein sequences raises the possibility of using the natural variation present in homologous protein sequences to infer these constraints and thus identify residues that control different protein phenotypes. Because in many cases phenotypic changes are controlled by more than one amino acid, the mutations that separate one phenotype from another may not be independent, requiring us to understand the correlation structure of the data. To address this we build a maximum entropy probability model for the protein sequence. The parameters of the inferred model are constrained by the statistics of a large sequence alignment. Pairs of sequence positions with the strongest interactions accurately predict contacts in protein tertiary structure, enabling all atom structural models to be constructed. We describe development of a theoretical inference framework that enables the relationship between the amount of available input data and the reliability of structural predictions to be better understood.
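
    A minimal mean-field sketch of such a pairwise maximum-entropy (Potts) model: invert the regularised covariance of a one-hot-encoded alignment and score residue pairs by coupling strength. Real pipelines add sequence reweighting and average-product correction, both omitted here:

    ```python
    import numpy as np

    def mf_dca_scores(msa, q=21, pc=0.5):
        """Mean-field couplings of a pairwise maximum-entropy (Potts) model.
        msa: (n_seq, n_pos) integer alignment with residues coded 0..q-1.
        Returns (n_pos, n_pos) coupling strengths; strong pairs are
        predicted structural contacts."""
        n_seq, n_pos = msa.shape
        onehot = np.eye(q)[msa][:, :, :-1].reshape(n_seq, -1)  # drop one state (gauge)
        f = onehot.mean(axis=0)
        cov = onehot.T @ onehot / n_seq - np.outer(f, f)
        cov += pc * np.eye(cov.shape[0])           # pseudocount regularisation
        J = -np.linalg.inv(cov)                    # mean-field approximation
        J = J.reshape(n_pos, q - 1, n_pos, q - 1)
        scores = np.sqrt((J ** 2).sum(axis=(1, 3)))  # Frobenius norm per pair
        np.fill_diagonal(scores, 0.0)
        return scores
    ```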

  16. Statistical limitations in functional neuroimaging. II. Signal detection and statistical inference.

    PubMed Central

    Petersson, K M; Nichols, T E; Poline, J B; Holmes, A P

    1999-01-01

    The field of functional neuroimaging (FNI) methodology has developed into a mature but evolving area of knowledge and its applications have been extensive. A general problem in the analysis of FNI data is finding a signal embedded in noise. This is sometimes called signal detection. Signal detection theory focuses in general on issues relating to the optimization of conditions for separating the signal from noise. When methods from probability theory and mathematical statistics are directly applied in this procedure it is also called statistical inference. In this paper we briefly discuss some aspects of signal detection theory relevant to FNI and, in addition, some common approaches to statistical inference used in FNI. Low-pass filtering in relation to functional-anatomical variability and some effects of filtering on signal detection of interest to FNI are discussed. Also, some general aspects of hypothesis testing and statistical inference are discussed. This includes the need for characterizing the signal in data when the null hypothesis is rejected, the problem of multiple comparisons that is central to FNI data analysis, omnibus tests and some issues related to statistical power in the context of FNI. In turn, random field, scale space, non-parametric and Monte Carlo approaches are reviewed, representing the most common approaches to statistical inference used in FNI. Complementary to these issues an overview and discussion of non-inferential descriptive methods, common statistical models and the problem of model selection is given in a companion paper. In general, model selection is an important prelude to subsequent statistical inference. The emphasis in both papers is on the assumptions and inherent limitations of the methods presented. Most of the methods described here generally serve their purposes well when the inherent assumptions and limitations are taken into account. Significant differences in results between different methods are most apparent in

  17. De novo inference of protein function from coarse-grained dynamics.

    PubMed

    Bhadra, Pratiti; Pal, Debnath

    2014-10-01

    Inference of the molecular function of proteins is the fundamental task in the quest to understand cellular processes. The task is getting increasingly difficult, with thousands of new proteins discovered each day. The difficulty arises primarily from the lack of a high-throughput experimental technique for assessing protein molecular function, a lacuna that computational approaches are trying hard to fill. The latter, too, face a major bottleneck in the absence of clear evidence based on evolutionary information. Here we propose a de novo approach to annotating protein molecular function through a structural dynamics match for a pair of segments from two dissimilar proteins, which may share even <10% sequence identity. To screen these matches, corresponding 1 µs coarse-grained (CG) molecular dynamics trajectories were used to compute normalized root-mean-square-fluctuation graphs and select mobile segments, which were thereafter matched for all pairs using unweighted three-dimensional autocorrelation vectors. Our in-house custom-built forcefield (FF), extensively validated against dynamics information obtained from experimental nuclear magnetic resonance data, was used to generate the CG dynamics trajectories. The test for correspondence between the dynamics signatures of protein segments and function revealed an 87% true-positive rate and a 93.5% true-negative rate on a dataset of 60 experimentally validated proteins, including moonlighting proteins and those with novel functional motifs. A negative-control test against 315 proteins with unique folds/functions gave >99% true recall. A blind prediction on a novel protein appears consistent with the additional evidence retrieved for it. This is the first proof of principle of the generalized use of structural dynamics for inferring protein molecular function, leveraging our custom-built CG FF.
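
    The RMSF computation at the heart of the mobile-segment selection is straightforward; a sketch for an aligned CG trajectory stored as a NumPy array (normalisation by the maximum is an assumption, since the abstract does not define its normalisation):

    ```python
    import numpy as np

    def normalized_rmsf(traj):
        """traj: (n_frames, n_beads, 3) coordinates, already aligned to a
        common reference frame. Returns per-bead RMSF scaled to [0, 1]."""
        mean_pos = traj.mean(axis=0)                      # (n_beads, 3)
        sq_disp = ((traj - mean_pos) ** 2).sum(axis=2)    # (n_frames, n_beads)
        rmsf = np.sqrt(sq_disp.mean(axis=0))              # (n_beads,)
        return rmsf / rmsf.max()
    ```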

  18. Impact of Prematurity and Perinatal Antibiotics on the Developing Intestinal Microbiota: A Functional Inference Study

    PubMed Central

    Arboleya, Silvia; Sánchez, Borja; Solís, Gonzalo; Fernández, Nuria; Suárez, Marta; Hernández-Barranco, Ana M.; Milani, Christian; Margolles, Abelardo; de los Reyes-Gavilán, Clara G.; Ventura, Marco; Gueimonde, Miguel

    2016-01-01

    Background: The microbial colonization of the neonatal gut provides a critical stimulus for normal maturation and development. This process of early microbiota establishment, known to be affected by several factors, constitutes an important determinant for later health. Methods: We studied the establishment of the microbiota in preterm and full-term infants and the impact of perinatal antibiotics upon this process in premature babies. To this end, 16S rRNA gene sequence-based microbiota assessment was performed at phylum level and functional inference analyses were conducted. Moreover, the levels of the main intestinal microbial metabolites, the short-chain fatty acids (SCFA) acetate, propionate and butyrate, were measured by Gas-Chromatography Flame ionization/Mass spectrometry detection. Results: Prematurity affects microbiota composition at phylum level, leading to increases of Proteobacteria and reduction of other intestinal microorganisms. Perinatal antibiotic use further affected the microbiota of the preterm infant. These changes involved a concomitant alteration in the levels of intestinal SCFA. Moreover, functional inference analyses allowed for identifying metabolic pathways potentially affected by prematurity and perinatal antibiotics use. Conclusion: A deficiency or delay in the establishment of normal microbiota function seems to be present in preterm infants. Perinatal antibiotic use, such as intrapartum prophylaxis, affected the early life microbiota establishment in preterm newborns, which may have consequences for later health. PMID:27136545

  19. Inferring functional relationships and causal network structure from gene expression profiles.

    PubMed

    Nagarajan, Radhakrishnan; Upreti, Meenakshi

    2011-01-01

    Inferring functional relationships and network structure from observed gene expression profiles can provide novel insight into the working of the genes as a system or network, as opposed to independent entities. Such networks may also represent possible causal relationships between a given set of genes, and hence can prove to be a convenient abstraction of the underlying signaling mechanism. The discovery of functional relationships from observed gene expression profiles does not rely on prior literature, and is hence useful in identifying undocumented relationships between a given set of genes. Several techniques have been proposed in the literature. The present study investigates the choice of Granger causality (GC) and its extensions in modeling the network structure between a given pair of genes from their expression profiles. The impact of noise variance on GC relationships is investigated. VAR parameter estimation is proposed to obtain a finer insight into the functional relationships inferred using GC tests. The results are presented on synthetic networks generated from known vector-autoregressive (VAR) models and on networks from cell-cycle gene expression profiles that can be modeled as a first-order bivariate VAR.
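
    A small runnable illustration of pairwise GC testing and VAR parameter estimation with statsmodels, on synthetic profiles in which gene B drives gene A at lag one:

    ```python
    import numpy as np
    from statsmodels.tsa.api import VAR
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(0)
    b = rng.normal(size=200)
    a = np.roll(b, 1) + 0.5 * rng.normal(size=200)  # A follows B with lag 1
    data = np.column_stack([a, b])  # tests whether column 2 (B) causes column 1 (A)

    res = grangercausalitytests(data, maxlag=2, verbose=False)
    f_stat, p_value, _, _ = res[1][0]["ssr_ftest"]
    print(f"lag-1 F = {f_stat:.1f}, p = {p_value:.3g}")

    # VAR parameter estimates give a finer view of the inferred coupling.
    fit = VAR(data).fit(1)
    print(fit.params)
    ```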

  20. Inferring deep biosphere function and diversity through (near) surface biosphere portals (Invited)

    NASA Astrophysics Data System (ADS)

    Meyer-Dombard, D. R.; Cardace, D.; Woycheese, K. M.; Swingley, W.; Schubotz, F.; Shock, E.

    2013-12-01

    The consideration of surface expressions of the deep subsurface, such as springs, remains one of the most economically viable means to query the deep biosphere's diversity and function. Hot spring source pools are ideal portals for accessing and inferring the taxonomic and functional diversity of related deep subsurface microbial communities. Consideration of the geochemical composition of deep vs. surface fluids provides context for interpretation of community function. Further, parallel assessment of 16S rRNA data, metagenomic sequencing, and isotopic compositions of biomass in surface springs allows inference of the functional capacities of subsurface ecosystems. Springs in Yellowstone National Park (YNP), the Philippines, and Turkey are considered here, incorporating near-surface, transition, and surface ecosystems to identify 'legacy' taxa and functions of the deep biosphere. We find that source pools often support functional capacity suited to subsurface ecosystems. For example, in hot ecosystems, source pools are strictly chemosynthetic, and surface environments with measureable dissolved oxygen may contain evidence of community functions more favorable under anaerobic conditions. Metagenomic reads from a YNP ecosystem indicate the genetic capacity for sulfate reduction at high temperature. However, inorganic sulfate reduction is only minimally energy-yielding in these surface environments, suggesting the potential that sulfate reduction is a 'legacy' function of deeper biosphere ecosystems. Carbon fixation tactics shift with increased surface exposure of the thermal fluids. Genes related to the rTCA cycle and the acetyl co-A pathway are most prevalent in highest temperature, anaerobic sites. At lower temperature sites, fewer total carbon fixation genes were observed, perhaps indicating an increase in heterotrophic metabolism with increased surface exposure. In hydrogen and methane rich springs in the Philippines and Turkey, methanogenic taxa dominate source

  1. LncRNA ontology: inferring lncRNA functions based on chromatin states and expression patterns

    PubMed Central

    Li, Yongsheng; Chen, Hong; Pan, Tao; Jiang, Chunjie; Zhao, Zheng; Wang, Zishan; Zhang, Jinwen; Xu, Juan; Li, Xia

    2015-01-01

    Accumulating evidence suggests that long non-coding RNAs (lncRNAs) perform important functions. Genome-wide chromatin states are a rich source of information about cellular state, yielding insights beyond what is typically obtained by transcriptome profiling. We propose an integrative method for genome-wide functional prediction of lncRNAs by combining chromatin state data with gene expression patterns. We first validated the method using protein-coding genes with known function annotations. Our validation results indicated that our integrative method performs better than co-expression analysis and is accurate across different conditions. Next, by applying the integrative model genome-wide, we predicted probable functions for more than 97% of human lncRNAs. The putative functions inferred by our method match those previously annotated through the targets of lncRNAs. Moreover, the linkage from the cellular processes influenced by cancer-associated lncRNAs to the cancer hallmarks provided a “lncRNA point-of-view” on tumor biology. Our approach provides a functional annotation of lncRNAs, which we developed into a web-based application, LncRNA Ontology, to provide visualization, analysis, and downloading of lncRNA putative functions. PMID:26485761

  2. [Wavelet analysis and its application in denoising the spectrum of hyperspectral image].

    PubMed

    Zhou, Dan; Wang, Qin-Jun; Tian, Qing-Jiu; Lin, Qi-Zhong; Fu, Wen-Xue

    2009-07-01

    In order to remove the sawtooth noise in hyperspectral remote sensing spectra and improve the accuracy of information extraction using spectra, this study used vegetation spectra from the USGS (United States Geological Survey) spectral library to simulate the performance of wavelet denoising. These spectra were measured by a custom-modified, computer-controlled Beckman spectrometer at the USGS Denver Spectroscopy Lab; the wavelength accuracy is about 5 nm in the NIR and 2 nm in the visible. In the experiment, noise with a signal-to-noise ratio (SNR) of 30 was first added to the spectrum and then removed by the wavelet denoising approach. To find the optimal parameter combinations, the SNR, mean squared error (MSE), spectral angle (SA), and an integrated evaluation coefficient eta were used to evaluate the denoising effects. Denoising quality is directly proportional to SNR, and inversely proportional to MSE, SA, and eta. Denoising results show that the sawtooth noise in the noisy spectrum was essentially eliminated, and the denoised spectrum coincides closely with the original spectrum, maintaining good spectral curve characteristics. Evaluation results show that the optimal denoising is achieved by first decomposing the noisy spectrum into 3-7 levels using db12, db10, sym9, and sym6 wavelets, then processing the wavelet transform coefficients with soft-threshold functions, and finally estimating the thresholds by the heursure threshold selection rule, rescaling with a single noise-level estimate based on the first-level coefficients. However, this approach depends on the noise level; for different noise levels the optimal parameter combination also differs.
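
    A minimal sketch of this pipeline with PyWavelets follows; the median-based noise estimate and the universal threshold stand in for the paper's heursure rule, and the wavelet and level are merely examples from the reported range:

        import numpy as np
        import pywt

        def wavelet_denoise(spectrum, wavelet="db10", level=5):
            coeffs = pywt.wavedec(spectrum, wavelet, level=level)
            # Noise sigma from the finest detail coefficients (median rule).
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745
            thr = sigma * np.sqrt(2 * np.log(len(spectrum)))
            out = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                                 for c in coeffs[1:]]
            return pywt.waverec(out, wavelet)[:len(spectrum)]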

  3. Denoising and dimensionality reduction of genomic data

    NASA Astrophysics Data System (ADS)

    Capobianco, Enrico

    2005-05-01

    Genomics represents a challenging research field for many quantitative scientists, and recently a vast variety of statistical techniques and machine learning algorithms have been proposed, inspired by cross-disciplinary work with computational and systems biologists. In genomic applications, the researcher deals with noisy, complex, high-dimensional feature spaces; a wealth of genes whose expression levels are experimentally measured can often be observed for just a few time points, thus limiting the available samples. This unbalanced combination suggests that it may be hard for standard statistical inference techniques to come up with good general solutions, and likewise for machine learning algorithms to avoid heavy computational work. Thus, one naturally turns to two major aspects of the problem: sparsity and intrinsic dimensionality. Both aspects are studied in this paper, where a very efficient technique, Independent Component Analysis (ICA), is used for both denoising and dimensionality reduction. The numerical results are very promising and lead to very good quality of gene feature selection, owing to the signal separation power of the decomposition technique. We investigate how the use of replicates can improve these results, and deal with noise through a stabilization strategy that combines the estimated components and extracts the most informative biological information from them. Exploiting the inherent level of sparsity is a key issue in genetic regulatory networks, where the connectivity matrix needs to account for the real links among genes and discard many redundancies. Most experimental evidence suggests that real gene-gene connections represent a subset of what is usually mapped onto either a huge gene vector or a typically dense and highly structured network. Inferring gene network connectivity from expression levels represents a challenging inverse problem that is at present stimulating key research in biomedical
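
    A minimal sketch of ICA-based denoising with scikit-learn is given below; the kurtosis ranking is only a stand-in for the stabilization strategy mentioned above, and all shapes and names are illustrative:

        import numpy as np
        from scipy.stats import kurtosis
        from sklearn.decomposition import FastICA

        X = np.random.rand(50, 200)        # 50 time points x 200 genes (toy)
        ica = FastICA(n_components=10, random_state=0)
        S = ica.fit_transform(X)           # estimated independent components

        # Keep the most non-Gaussian components (proxy for "informative").
        keep = np.argsort(np.abs(kurtosis(S, axis=0)))[-5:]
        S_clean = np.zeros_like(S)
        S_clean[:, keep] = S[:, keep]
        X_denoised = ica.inverse_transform(S_clean)   # back to gene space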

  4. Improved extreme value weighted sparse representational image denoising with random perturbation

    NASA Astrophysics Data System (ADS)

    Xuan, Shibin; Han, Yulan

    2015-11-01

    Research into the removal of mixed noise is a hot topic in the field of image denoising. Currently, weighted encoding with sparse nonlocal regularization represents an excellent mixed-noise removal method. To make the fitting function closer to the requirements of a robust estimation technique, an extreme value technique is used that allows the fitting function to satisfy three conditions of robust estimation on a larger interval. Moreover, a random disturbance sequence is integrated into the denoising model to prevent the iterative solving process from falling into local optima. A Radon transform-based noise detection algorithm and an adaptive median filter are used to obtain a high-quality initial solution for the iterative procedure of the image denoising model. Experimental results indicate that this improved method efficiently enhances the weighted encoding with sparse nonlocal regularization model. The proposed method can effectively remove mixed noise from corrupted images, while better preserving the edges and details of the processed image.

  5. On the inference of function from structure using biomechanical modelling and simulation of extinct organisms.

    PubMed

    Hutchinson, John R

    2012-02-23

    Biomechanical modelling and simulation techniques offer some hope for unravelling the complex inter-relationships of structure and function, perhaps even for extinct organisms, but have their limitations owing to this complexity and the many unknown parameters for fossil taxa. Validation and sensitivity analysis are two indispensable approaches for quantifying the accuracy and reliability of such models or simulations. But there are other subtleties in biomechanical modelling, including investigator judgements about the level of simplicity versus complexity in model design, and how uncertainty and subjectivity are dealt with. Furthermore, investigator attitudes toward models span a broad spectrum between extreme credulity and nihilism, influencing how modelling is conducted and perceived. Fundamentally, more data and more testing of methodology are required for the field to mature and build confidence in its inferences. PMID:21666064

  6. Experimental evidence validating the computational inference of functional associations from gene fusion events: a critical survey.

    PubMed

    Promponas, Vasilis J; Ouzounis, Christos A; Iliopoulos, Ioannis

    2014-05-01

    More than a decade ago, a number of methods were proposed for the inference of protein interactions, using whole-genome information from gene clusters, gene fusions and phylogenetic profiles. This structural and evolutionary view of entire genomes has provided a valuable approach for the functional characterization of proteins, especially those without sequence similarity to proteins of known function. Furthermore, this view has raised the real possibility of detecting functional associations of genes and their corresponding proteins for any entire genome sequence. Yet, despite these exciting developments, there have been relatively few cases of real use of these methods outside the computational biology field, as reflected by citation analysis. These methods have the potential to be used in high-throughput experimental settings in functional genomics and proteomics to validate results with very high accuracy and good coverage. In this critical survey, we provide a comprehensive overview of the 30 most prominent examples of single pairwise protein interaction cases in small-scale studies, where protein interactions have either been detected by gene fusion or have yielded additional corroborating evidence from biochemical observations. Our conclusion is that, with the derivation of a validated gold-standard corpus and better data integration with large-scale experiments, gene fusion detection can truly become a valuable tool for large-scale experimental biology.

  7. Experimental evidence validating the computational inference of functional associations from gene fusion events: a critical survey

    PubMed Central

    Promponas, Vasilis J.; Ouzounis, Christos A.; Iliopoulos, Ioannis

    2014-01-01

    More than a decade ago, a number of methods were proposed for the inference of protein interactions, using whole-genome information from gene clusters, gene fusions and phylogenetic profiles. This structural and evolutionary view of entire genomes has provided a valuable approach for the functional characterization of proteins, especially those without sequence similarity to proteins of known function. Furthermore, this view has raised the real possibility of detecting functional associations of genes and their corresponding proteins for any entire genome sequence. Yet, despite these exciting developments, there have been relatively few cases of real use of these methods outside the computational biology field, as reflected by citation analysis. These methods have the potential to be used in high-throughput experimental settings in functional genomics and proteomics to validate results with very high accuracy and good coverage. In this critical survey, we provide a comprehensive overview of the 30 most prominent examples of single pairwise protein interaction cases in small-scale studies, where protein interactions have either been detected by gene fusion or have yielded additional corroborating evidence from biochemical observations. Our conclusion is that, with the derivation of a validated gold-standard corpus and better data integration with large-scale experiments, gene fusion detection can truly become a valuable tool for large-scale experimental biology. PMID:23220349

  8. Application of wavelet analysis in laser Doppler vibration signal denoising

    NASA Astrophysics Data System (ADS)

    Lan, Yu-fei; Xue, Hui-feng; Li, Xin-liang; Liu, Dan

    2010-10-01

    A large number of experiments show that external disturbances, overly rough measurement surfaces, and other factors cause the vibration signal detected by the laser Doppler technique to contain complex information with a low SNR, so that Doppler frequency shifts can go unmeasured and the Doppler phase cannot be demodulated. This paper first analyzes the laser Doppler signal model and its features in vibration testing, and then studies the three most commonly used wavelet denoising techniques: the modulus-maxima wavelet denoising method, the spatial correlation denoising method, and the wavelet threshold denoising method. We experiment with vibration signals and implement the three methods in MATLAB simulation. Processing results show that the wavelet modulus-maxima method has an advantage at low laser Doppler vibration SNR for signals mixed with white noise that contain many singularities; the spatial correlation method is more suitable for denoising laser Doppler vibration signals whose noise level is not very high, and has better edge reconstruction capacity; and the wavelet threshold method has a wide range of adaptability, computational efficiency, and good denoising effect. Specifically, in the wavelet threshold denoising method, we estimate the original noise variance by the spatial correlation method, use an adaptive threshold, and make certain amendments in practice. Tests show that, compared with conventional threshold denoising, this method is more effective in extracting the features of the laser Doppler vibration signal.

  9. Geodesic denoising for optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Shahrian Varnousfaderani, Ehsan; Vogl, Wolf-Dieter; Wu, Jing; Gerendas, Bianca S.; Simader, Christian; Langs, Georg; Waldstein, Sebastian M.; Schmidt-Erfurth, Ursula

    2016-03-01

    Optical coherence tomography (OCT) is an optical signal acquisition method capturing micrometer-resolution, cross-sectional three-dimensional images. OCT images are used widely in ophthalmology to diagnose and monitor retinal diseases such as age-related macular degeneration (AMD) and glaucoma. While OCT allows the visualization of retinal structures such as vessels and retinal layers, image quality and contrast are reduced by speckle noise, obfuscating small, low-intensity structures and structural boundaries. Existing denoising methods for OCT images may remove clinically significant image features such as texture and boundaries of anomalies. In this paper, we propose a novel patch-based denoising method, Geodesic Denoising. The method reduces noise in OCT images while preserving clinically significant, although small, pathological structures, such as fluid-filled cysts in diseased retinas. Our method selects optimal image patch distribution representations based on geodesic patch similarity to noisy samples. Patch distributions are then randomly sampled to build a set of best matching candidates for every noisy sample, and the denoised value is computed based on a geodesic weighted average of the best candidate samples. Our method is evaluated qualitatively on real pathological OCT scans and quantitatively on a proposed set of ground truth, noise-free synthetic OCT scans with artificially added noise and pathologies. Experimental results show that the performance of our method is comparable with state-of-the-art denoising methods, while outperforming them in preserving critical clinically relevant structures.

  10. LC-MS/MS based proteomic analysis and functional inference of hypothetical proteins in Desulfovibrio vulgaris

    SciTech Connect

    Zhang, Weiwen; Culley, David E.; Gritsenko, Marina A.; Moore, Ronald J.; Nie, Lei; Scholten, Johannes C.; Petritis, Konstantinos; Strittmatter, Eric F.; Camp, David G.; Smith, Richard D.; Brockman, Fred J.

    2006-11-03

    In a previous study, the whole-genome gene expression profiles of D. vulgaris in response to oxidative stress and heat shock were determined. The results showed that 24-28% of the responsive genes encoded hypothetical proteins that have not been experimentally characterized or whose function cannot be deduced by simple sequence comparison. To further explore the protective mechanisms employed by D. vulgaris against oxidative stress and heat shock, this study attempts to infer the functions of these hypothetical proteins by phylogenomic profiling, along with detailed sequence comparison against various publicly available databases. By this approach we were able to assign possible functions to 25 responsive hypothetical proteins. The findings included that DVU0725, induced by oxidative stress, may be involved in lipopolysaccharide biosynthesis, implying that alteration of lipopolysaccharide on the cell surface might serve as a mechanism against oxidative stress in D. vulgaris. In addition, two responsive proteins, DVU0024, encoding a putative transcriptional regulator, and DVU1670, encoding a predicted redox protein, shared co-evolution patterns with rubrerythrin in Archaeoglobus fulgidus and Clostridium perfringens, respectively, implying that they might be part of the stress response and protective systems in D. vulgaris. The study demonstrated that phylogenomic profiling is a useful tool for the interpretation of experimental genomics data, and also provided further insight into the cellular response to oxidative stress and heat shock in D. vulgaris.

  11. Low-rank separated representation surrogates of high-dimensional stochastic functions: Application in Bayesian inference

    SciTech Connect

    Validi, AbdoulAhad

    2014-03-01

    This study introduces a non-intrusive approach, in the context of low-rank separated representation, to construct a surrogate of high-dimensional stochastic functions, e.g., PDEs/ODEs, in order to decrease the computational cost of Markov chain Monte Carlo simulations in Bayesian inference. The surrogate model is constructed via a regularized alternating least-squares regression with Tikhonov regularization, using a roughening matrix that computes the gradient of the solution, in conjunction with a perturbation-based error indicator to detect the optimal model complexity. The model approximates a vector of a continuous solution at discrete values of a physical variable. The required number of random realizations to achieve a successful approximation depends linearly on the function dimensionality. The computational cost of model construction is quadratic in the number of random inputs, which potentially tackles the curse of dimensionality in high-dimensional stochastic functions. Furthermore, this vector-valued separated-representation-based model, in comparison to the available scalar-valued case, leads to a significant reduction in the cost of approximation, by an order of magnitude equal to the vector size. The performance of the method is studied through its application to three numerical examples, including a 41-dimensional elliptic PDE and a 21-dimensional cavity flow.

  12. Pulsar Signal Denoising Method Based on Laplace Distribution in No-subsampling Wavelet Packet Domain

    NASA Astrophysics Data System (ADS)

    Wenbo, Wang; Yanchao, Zhao; Xiangli, Wang

    2016-11-01

    In order to improve the denoising of pulsar signals, a new denoising method is proposed in the no-subsampling (undecimated) wavelet packet domain based on a local Laplace prior model. First, we characterize the distribution of the wavelet packet coefficients of the true, noise-free pulsar signal and construct a Laplace probability density function model for them. Then, we estimate the denoised wavelet packet coefficients from the noisy coefficients under the maximum a posteriori criterion. Finally, we obtain the denoised pulsar signal through no-subsampling wavelet packet reconstruction of the estimated coefficients. The experimental results show that the proposed method performs better when calculating the pulsar time of arrival than the translation-invariant wavelet denoising method.
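
    Under a Laplacian prior with Gaussian noise, the MAP estimate reduces to soft thresholding with threshold sqrt(2)*sigma_n^2/sigma_x; a minimal per-subband sketch (omitting the undecimated wavelet packet transform and the local statistics the paper uses) is:

        import numpy as np

        def laplace_map_shrink(coeffs, sigma_n):
            # Signal std by variance subtraction from the noisy coefficients.
            sigma_x = np.sqrt(max(np.var(coeffs) - sigma_n**2, 1e-12))
            thr = np.sqrt(2) * sigma_n**2 / sigma_x
            return np.sign(coeffs) * np.maximum(np.abs(coeffs) - thr, 0.0)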

  13. Multiresolution Bilateral Filtering for Image Denoising

    PubMed Central

    Zhang, Ming; Gunturk, Bahadir K.

    2008-01-01

    The bilateral filter is a nonlinear filter that performs spatial averaging without smoothing edges; it has been shown to be an effective image denoising technique. An important issue with the application of the bilateral filter is the selection of the filter parameters, which affect the results significantly. There are two main contributions of this paper. The first is an empirical study of optimal bilateral filter parameter selection in image denoising applications. The second is an extension of the bilateral filter: the multiresolution bilateral filter, in which bilateral filtering is applied to the approximation (low-frequency) subbands of a signal decomposed using a wavelet filter bank. The multiresolution bilateral filter is combined with wavelet thresholding to form a new image denoising framework, which turns out to be very effective in eliminating noise in real noisy images. Experimental results with both simulated and real data are provided. PMID:19004705
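
    A minimal sketch of the multiresolution idea follows (bilateral filtering of the approximation subband only; the paper additionally applies wavelet thresholding to the detail subbands, and all parameter values here are assumptions):

        import pywt
        from skimage.restoration import denoise_bilateral

        def multires_bilateral(image, wavelet="db4", level=1):
            coeffs = pywt.wavedec2(image, wavelet, level=level)
            a = coeffs[0]
            lo, hi = a.min(), a.max()
            a01 = (a - lo) / (hi - lo)           # scale subband to [0, 1]
            a01 = denoise_bilateral(a01, sigma_color=0.1, sigma_spatial=2)
            coeffs[0] = a01 * (hi - lo) + lo     # undo the scaling
            return pywt.waverec2(coeffs, wavelet)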

  14. An image denoising application using shearlets

    NASA Astrophysics Data System (ADS)

    Sevindir, Hulya Kodal; Yazici, Cuneyt

    2013-10-01

    Medical imaging is a multidisciplinary field related to computer science, electrical/electronic engineering, physics, mathematics, and medicine. There has been a dramatic increase in the variety, availability, and resolution of medical imaging devices over the last half century. Proper medical imaging requires highly trained technicians and clinicians to correctly extract clinically pertinent information from medical data. To meet this need, artificial systems must be designed to analyze medical data sets either partially or even fully automatically. For this purpose there has been much ongoing research into finding optimal representations in image processing and computer vision [1, 18]. Medical images almost always contain artefacts, and it is crucial to remove these artefacts to obtain reliable results. Among the many methods for denoising images, in this paper two denoising methods, wavelets and shearlets, are applied to mammography images. Comparing the two methods, shearlets give better results for denoising such data.

  15. Adaptive Fourier decomposition based ECG denoising.

    PubMed

    Wang, Ze; Wan, Feng; Wong, Chi Man; Zhang, Liming

    2016-10-01

    A novel ECG denoising method is proposed based on the adaptive Fourier decomposition (AFD). The AFD decomposes a signal according to its energy distribution, making the algorithm suitable for separating a pure ECG signal from noise with overlapping frequency ranges but different energy distributions. A stop criterion for the iterative decomposition process in the AFD is calculated on the basis of the estimated signal-to-noise ratio (SNR) of the noisy signal. The proposed AFD-based method is validated on synthetic ECG signals generated by an ECG model and on real ECG signals from the MIT-BIH Arrhythmia Database, both with additive white Gaussian noise. Simulation results show that the proposed method performs better on denoising and QRS detection in comparison with major ECG denoising schemes based on the wavelet transform, the Stockwell transform, the empirical mode decomposition, and the ensemble empirical mode decomposition.

  16. Study on torpedo fuze signal denoising method based on WPT

    NASA Astrophysics Data System (ADS)

    Zhao, Jun; Sun, Changcun; Zhang, Tao; Ren, Zhiliang

    2013-07-01

    Torpedo fuze signal denoising is important for ensuring reliable fuze operation. Based on the good denoising characteristics of the wavelet packet transform (WPT), this paper uses the WPT to denoise the fuze signal under complex background interference, and a simulation of the denoising results is performed in MATLAB. The simulation results show that the WPT denoising method can effectively eliminate the background noise present in the torpedo fuze target signal, with high precision and little distortion, advancing the reliability of torpedo fuze operation.

  17. Parallel object-oriented, denoising system using wavelet multiresolution analysis

    DOEpatents

    Kamath, Chandrika; Baldwin, Chuck H.; Fodor, Imola K.; Tang, Nu A.

    2005-04-12

    The present invention provides a data denoising system utilizing parallel processors and wavelet denoising techniques. Data are read and displayed in different formats. The data are partitioned into regions, and the regions are distributed onto the processors. Communication requirements are determined among the processors according to the wavelet denoising technique and the partitioning of the data. The data are transformed onto different multiresolution levels with the wavelet transform, according to the wavelet denoising technique and the communication requirements, the transformed data containing the wavelet coefficients. The denoised data are then transformed back into the original format for reading and display.

  18. Integrating gene and protein expression data with genome-scale metabolic networks to infer functional pathways

    PubMed Central

    2013-01-01

    Background. The study of cellular metabolism in the context of high-throughput -omics data has allowed us to decipher novel mechanisms of importance in biotechnology and health. To continue this progress, it is essential to efficiently integrate experimental data into metabolic modeling. Results. We present here an in-silico framework to infer relevant metabolic pathways for a particular phenotype under study based on its gene/protein expression data. This framework is based on the Carbon Flux Path (CFP) approach, a mixed-integer linear program that expands classical path-finding techniques by considering additional biophysical constraints. In particular, the objective function of the CFP approach is amended to account for gene/protein expression data and to influence the obtained paths. This approach is termed integrative Carbon Flux Path (iCFP). We show that gene/protein expression data also influence the stoichiometric balancing of CFPs, which provides a more accurate picture of active metabolic pathways. This is illustrated in both a theoretical and a real scenario. Finally, we apply this approach to find novel pathways relevant to the regulation of acetate overflow metabolism in Escherichia coli. As a result, several targets that could be relevant for a better understanding of the phenomenon leading to impaired acetate overflow are proposed. Conclusions. A novel mathematical framework that determines functional pathways based on gene/protein expression data is presented and validated. We show that our approach is able to provide new insights into complex biological scenarios such as acetate overflow in Escherichia coli. PMID:24314206

  19. Magnetic resonance image denoising using multiple filters

    NASA Astrophysics Data System (ADS)

    Ai, Danni; Wang, Jinjuan; Miwa, Yuichi

    2013-07-01

    We introduced and compared ten denoising filters, all proposed during the last fifteen years. In particular, the state-of-the-art denoising algorithms NLM and BM3D have attracted much attention, and several extensions have been proposed to improve noise reduction based on these two algorithms. On the other hand, optimal dictionaries, sparse representations, and appropriate shapes of the transform's support are also considered for image denoising. The comparison among the various filters is implemented by measuring the SNR of a phantom image and the denoising effectiveness on a clinical image. The computational time is also evaluated.

  20. A Protein Domain Co-Occurrence Network Approach for Predicting Protein Function and Inferring Species Phylogeny

    PubMed Central

    Wang, Zheng; Zhang, Xue-Cheng; Le, Mi Ha; Xu, Dong; Stacey, Gary; Cheng, Jianlin

    2011-01-01

    Protein Domain Co-occurrence Network (DCN) is a biological network that has not been fully studied. We analyzed the properties of the DCNs of H. sapiens, S. cerevisiae, C. elegans, D. melanogaster, and 15 plant genomes. These DCNs have the hallmark features of scale-free networks. We investigated the possibility of using DCNs to predict protein and domain functions. Based on our experiment conducted on 66 randomly selected proteins, the best of the top 3 predictions made by our DCN-based aggregated neighbor-counting method achieved a semantic similarity score of 0.81 to the actual Gene Ontology terms of the proteins. Moreover, the top 3 predictions using neighbor-counting, χ2, and a SVM-based method achieved an accuracy of 66%, 59%, and 61%, respectively, when used to predict specific Gene Ontology terms of human target domains. These predictions on average had a semantic similarity score of 0.82, 0.80, and 0.79 to the actual Gene Ontology terms, respectively. We also used DCNs to predict whether a domain is an enzyme domain, and our SVM-based and neighbor-inference methods correctly classified 79% and 77% of the target domains, respectively. When using DCNs to classify a target domain into one of the six enzyme classes, we found that, as long as there is one EC number available in the neighboring domains, our SVM-based and neighbor-counting methods correctly classified 92.4% and 91.9% of the target domains, respectively. Furthermore, we benchmarked the performance of using DCNs to infer species phylogenies on six different combinations of 398 single-chromosome prokaryotic genomes. The phylogenetic tree of 54 prokaryotic taxa generated by our DCN-alignment-based method achieved a 93.45% similarity score compared to Bergey's taxonomy. In summary, our studies show that genome-wide DCNs contain rich information that can be effectively used to decipher protein function and reveal the evolutionary relationships among species. PMID:21455299

  1. Analysis the application of several denoising algorithm in the astronomical image denoising

    NASA Astrophysics Data System (ADS)

    Jiang, Chao; Geng, Ze-xun; Bao, Yong-qiang; Wei, Xiao-feng; Pan, Ying-feng

    2014-02-01

    Image denoising is an important preprocessing step and one of the frontiers of computer graphics and computer vision. Astronomical target imaging is highly vulnerable to atmospheric turbulence and noise interference. To reconstruct a high-quality image of the target, the high-frequency signal of the image must be restored; but noise also occupies the high frequencies, so noise is amplified in the reconstruction process. To avoid this phenomenon, incorporating image denoising into the reconstruction process is a feasible solution. This paper mainly researches the principles of four classic denoising algorithms: TV, BLS-GSM, NLM, and BM3D. We use simulated data to analyze the performance of the four algorithms. Experiments demonstrate that all four algorithms can remove the noise, and that BM3D not only achieves high denoising quality but also has the highest efficiency.
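
    Two of the four algorithms are available directly in scikit-image, which makes a quick check easy to reproduce (BM3D is available through the third-party bm3d package, BLS-GSM is not shown, and the noise level and filter parameters below are illustrative):

        from skimage import data, img_as_float
        from skimage.util import random_noise
        from skimage.restoration import denoise_tv_chambolle, denoise_nl_means
        from skimage.metrics import peak_signal_noise_ratio

        clean = img_as_float(data.camera())
        noisy = random_noise(clean, var=0.01)     # additive Gaussian noise

        results = {
            "TV":  denoise_tv_chambolle(noisy, weight=0.1),
            "NLM": denoise_nl_means(noisy, h=0.08, patch_size=5,
                                    patch_distance=6),
        }
        for name, img in results.items():
            print(name, peak_signal_noise_ratio(clean, img))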

  2. Phydbac2: improved inference of gene function using interactive phylogenomic profiling and chromosomal location analysis.

    PubMed

    Enault, François; Suhre, Karsten; Poirot, Olivier; Abergel, Chantal; Claverie, Jean-Michel

    2004-07-01

    Phydbac (phylogenomic display of bacterial genes) implemented a method of phylogenomic profiling using a distance measure based on normalized BLAST scores. This method was able to increase the predictive power of phylogenomic profiling by about 25% compared to the classical approach based on Hamming distances. Here we present a major extension of Phydbac (named here Phydbac2) that extends both the concept and the functionality of the original web service. While phylogenomic profiles remain the central focus of Phydbac2, it now integrates chromosomal proximity and gene fusion analyses as two additional, non-similarity-based indicators for inferring pairwise gene functional relationships. Moreover, all presently available (January 2004) fully sequenced bacterial genomes and those of three lower eukaryotes are now included in the profiling process, thus increasing the initial number of reference genomes from 71 in Phydbac to 150 in Phydbac2. Using the KEGG metabolic pathway database as a benchmark, we show that the predictive power of Phydbac2 is improved by 27% over the previous version. This gain is accounted for, on the one hand, by the increased number of reference genomes (11%) and, on the other hand, by the inclusion of chromosomal proximity in the distance measure (16%). The expanded functionality of Phydbac2 now allows the user to query more than 50 different genomes, including at least one member of each major bacterial group, most major pathogens and potential bio-terrorism agents. The search for co-evolving genes based on consensus profiles from multiple organisms, the display of Phydbac2 profiles side by side with COG information, the inclusion of KEGG metabolic pathway maps, the production of chromosomal proximity maps, and the possibility of collecting and processing results from different Phydbac queries in a common shopping cart are the main new features of Phydbac2. The Phydbac2 web server is available at http://igs-server.cnrs-mrs.fr/phydbac/.
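
    The classical Hamming-distance profiling that Phydbac improves upon can be illustrated in a few lines (toy presence/absence profiles; real profiles span all reference genomes, and Phydbac uses normalized BLAST-score distances instead):

        import numpy as np

        # Rows: genes; columns: 10 reference genomes (1 = homolog present).
        geneA = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])
        geneB = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1])
        geneC = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 1])

        def hamming(p, q):
            return int(np.sum(p != q))

        print(hamming(geneA, geneB))   # 1  -> similar profiles, likely linked
        print(hamming(geneA, geneC))   # 10 -> dissimilar, likely unrelated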

  3. Inference of gene function based on gene fusion events: the rosetta-stone method.

    PubMed

    Suhre, Karsten

    2007-01-01

    The method described in this chapter can be used to infer putative functional links between two proteins. The basic idea is based on the principle of "guilt by association". It is assumed that two proteins which are found to be transcribed as a single transcript in one (or several) genomes are likely to be functionally linked, for example by acting in the same metabolic pathway or by forming a multiprotein complex. This method is of particular interest for studying genes that exhibit no, or only remote, homologies with already well-characterized proteins. Combined with other non-homology-based methods, gene fusion events may yield valuable information for hypothesis building on protein function, and may guide experimental characterization of the target protein, for example by suggesting potential ligands or binding partners. This chapter uses the FusionDB database (http://www.igs.cnrs-mrs.fr/FusionDB/) as its source of information. FusionDB characterizes a large number of gene fusion events by means of multiple sequence alignments. Orthologous genes are included to yield a comprehensive view of the structure of a gene fusion event. Phylogenetic tree reconstruction is provided to evaluate the history of a gene fusion event, and three-dimensional protein structure information is used, where available, to further characterize the nature of the gene fusion. For genes that are not comprised in FusionDB, some instructions are given on how to generate similar information based solely on publicly available web tools, which are listed here.

  4. Inconsistent Denoising and Clustering Algorithms for Amplicon Sequence Data.

    PubMed

    Koskinen, Kaisa; Auvinen, Petri; Björkroth, K Johanna; Hultman, Jenni

    2015-08-01

    Natural microbial communities have been studied for decades using the 16S rRNA gene as a marker. In recent years, the application of second-generation sequencing technologies has revolutionized our understanding of the structure and function of microbial communities in complex environments. Using these highly parallel techniques, a detailed description of community characteristics is constructed, and even the rare biosphere can be detected. The new approaches carry numerous advantages and lack many features that skewed results obtained with traditional techniques, but we still face serious bias and a lack of reliable comparability between produced results. Here, we contrasted publicly available amplicon sequence data analysis algorithms using two data sets, one with a defined clone-based structure and one from a well-studied food spoilage community. We aimed to assess which software and parameters produce results that best resemble the benchmark community, how large the differences between methods are, and whether these differences are statistically significant. The results suggest that commonly accepted denoising and clustering methods, used in different combinations, produce significantly different outcomes: the clustering method greatly affects the number of operational taxonomic units (OTUs), while the denoising algorithm has more influence on taxonomic affiliations. The magnitude of the OTU number difference was up to 40-fold, and the disparity between results seemed highly dependent on community structure and diversity. Statistically significant differences in taxonomies between methods were seen even at the phylum level. However, the application of an effective denoising method seemed to even out the differences produced by clustering. PMID:25525895

  5. Inferring muscle functional roles of the ostrich pelvic limb during walking and running using computer optimization

    PubMed Central

    Rubenson, Jonas

    2016-01-01

    Owing to their cursorial background, ostriches (Struthio camelus) walk and run with high metabolic economy, can reach very fast running speeds and quickly execute cutting manoeuvres. These capabilities are believed to be a result of their ability to coordinate muscles to take advantage of specialized passive limb structures. This study aimed to infer the functional roles of ostrich pelvic limb muscles during gait. Existing gait data were combined with a newly developed musculoskeletal model to generate simulations of ostrich walking and running that predict muscle excitations, force and mechanical work. Consistent with previous avian electromyography studies, predicted excitation patterns showed that individual muscles tended to be excited primarily during only stance or swing. Work and force estimates show that ostrich gaits are partially hip-driven with the bi-articular hip–knee muscles driving stance mechanics. Conversely, the knee extensors acted as brakes, absorbing energy. The digital extensors generated large amounts of both negative and positive mechanical work, with increased magnitudes during running, providing further evidence that ostriches make extensive use of tendinous elastic energy storage to improve economy. The simulations also highlight the need to carefully consider non-muscular soft tissues that may play a role in ostrich gait. PMID:27146688

  6. Changes in Obesity Odds Ratio among Iranian Adults, since 2000: Quadratic Inference Functions Method

    PubMed Central

    Etemad, Koorosh; Seifi, Behjat; Mohammad, Kazem; Biglarian, Akbar; Koohpayehzadeh, Jalil

    2016-01-01

    Background. Monitoring changes in obesity prevalence by risk factors is relevant to public health programs that focus on reducing or preventing obesity. The purpose of this paper was to study trends in obesity odds ratios (ORs) for individuals aged 20 years and older in Iran using a new statistical methodology. Methods. Data were collected by the National Surveys in Iran from 2000 through 2011. Since the responses of the members of each cluster are correlated, the quadratic inference functions (QIF) method was used to model the relationship between the odds of obesity and risk factors. Results. During the study period, the prevalence of obesity increased from 12% to 22%. Using the QIF method and a model selection criterion for stepwise regression analysis, we found that while obesity prevalence generally increased in both sexes and across all age, employment, residence, and smoking levels, obesity ORs appear to have changed since 2000. Conclusions. Because obesity is one of the main risk factors for many diseases, awareness of these differences by factor allows the development of targets for prevention and early intervention. PMID:27803729

  7. Comparative internal anatomy of Staurozoa (Cnidaria), with functional and evolutionary inferences

    PubMed Central

    Collins, Allen G.; Hirano, Yayoi M.; Mills, Claudia E.

    2016-01-01

    Comparative efforts to understand the body plan evolution of stalked jellyfishes are scarce. Most characters, and particularly internal anatomy, have neither been explored for the class Staurozoa, nor broadly applied in its taxonomy and classification. Recently, a molecular phylogenetic hypothesis was derived for Staurozoa, allowing for the first broad histological comparative study of staurozoan taxa. This study uses comparative histology to describe the body plans of nine staurozoan species, inferring functional and evolutionary aspects of internal morphology based on the current phylogeny of Staurozoa. We document rarely studied structures, such as ostia between radial pockets, intertentacular lobules, gametoducts, pad-like adhesive structures, and white spots of nematocysts (the last four newly proposed putative synapomorphies for Staurozoa). Two different regions of nematogenesis are documented. This work falsifies the view that the peduncle region of stauromedusae only retains polypoid characters; metamorphosis from stauropolyp to stauromedusa occurs both at the apical region (calyx) and basal region (peduncle). Intertentacular lobules, observed previously in only a small number of species, are shown to be widespread. Similarly, gametoducts were documented in all analyzed genera, both in males and females, thereby elucidating gamete release. Finally, ostia connecting adjacent gastric radial pockets appear to be universal for Staurozoa. Detailed histological studies of medusozoan polyps and medusae are necessary to further understand the relationships between staurozoan features and those of other medusozoan cnidarians. PMID:27812408

  8. Fracture in teeth: a diagnostic for inferring bite force and tooth function.

    PubMed

    Lee, James J-W; Constantino, Paul J; Lucas, Peter W; Lawn, Brian R

    2011-11-01

    Teeth are brittle and highly susceptible to cracking. We propose that observations of such cracking can be used as a diagnostic tool for predicting bite force and inferring tooth function in living and fossil mammals. Laboratory tests on model tooth structures and extracted human teeth in simulated biting identify the principal fracture modes in enamel. Examination of museum specimens reveals the presence of similar fractures in a wide range of vertebrates, suggesting that cracks extended during ingestion or mastication. The use of 'fracture mechanics' from materials engineering provides elegant relations for quantifying critical bite forces in terms of characteristic tooth size and enamel thickness. The role of enamel microstructure in determining how cracks initiate and propagate within the enamel (and beyond) is discussed. The picture emerges of teeth as damage-tolerant structures, full of internal weaknesses and defects and yet able to contain the expansion of seemingly precarious cracks and fissures within the enamel shell. How the findings impact on dietary pressures forms an undercurrent of the study.

  9. Optimization of wavelet- and curvelet-based denoising algorithms by multivariate SURE and GCV

    NASA Astrophysics Data System (ADS)

    Mortezanejad, R.; Gholami, A.

    2016-06-01

    One of the most crucial challenges in seismic data processing is the reduction of noise in the data, i.e., improving the signal-to-noise ratio (SNR). Wavelet- and curvelet-based denoising algorithms have become popular for random noise attenuation in seismic sections. The wavelet basis, the thresholding function, and the threshold value are three key factors of such algorithms, having a profound effect on the quality of the denoised section. Therefore, given a signal, it is necessary to optimize the denoising operator over these factors to achieve the best performance. In this paper a general denoising algorithm is developed as a multivariate (multi-variable) filter operating in multi-scale transform domains (e.g. wavelet and curvelet). In the wavelet domain this general filter is a function of the type of wavelet, characterized by its smoothness, the thresholding rule, and the threshold value, while in the curvelet domain it is a function of the thresholding rule and the threshold value only. Two methods, Stein’s unbiased risk estimate (SURE) and generalized cross-validation (GCV), evaluated using a Monte Carlo technique, are utilized to optimize the algorithm in both the wavelet and curvelet domains for a given seismic signal. The best wavelet function is selected from a family of fractional B-spline wavelets. The optimal thresholding rule is selected from a family of general thresholding functions that contains the most well-known thresholding functions, and the threshold value is chosen from a set of possible values. The results obtained from numerical tests show high performance of the proposed method in both the wavelet and curvelet domains in comparison to conventional methods when denoising seismic data.
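
    For soft thresholding with a known noise level sigma, SURE has the closed Donoho-Johnstone form used in such optimizations; a direct grid-search sketch (the paper's Monte Carlo evaluation and GCV variant are omitted):

        import numpy as np

        def sure_soft(coeffs, t, sigma):
            n = coeffs.size
            return (n * sigma**2
                    - 2 * sigma**2 * np.sum(np.abs(coeffs) <= t)
                    + np.sum(np.minimum(coeffs**2, t**2)))

        def best_threshold(coeffs, sigma):
            grid = np.unique(np.abs(coeffs))
            risks = [sure_soft(coeffs, t, sigma) for t in grid]
            return grid[int(np.argmin(risks))]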

  10. Dichoptic Metacontrast Masking Functions to Infer Transmission Delay in Optic Neuritis

    PubMed Central

    Bruchmann, Maximilian; Korsukewitz, Catharina; Krämer, Julia; Wiendl, Heinz; Meuth, Sven G.

    2016-01-01

    Optic neuritis (ON) has detrimental effects on the transmission of neuronal signals generated at the earliest stages of visual information processing. Both the amount and the speed of transmitted visual signals are impaired. Measurements of visual evoked potentials (VEP) are often implemented in clinical routine. However, the specificity of VEPs is limited because multiple cortical areas are involved in the generation of P1 potentials, including feedback signals from higher cortical areas. Here, we show that dichoptic metacontrast masking can be used to estimate the temporal delay caused by ON. A group of 15 patients with unilateral ON, nine of whom had sufficient visual acuity and volunteered to participate, and a group of healthy control subjects (N = 8) were presented with flashes of gray disks to one eye and flashes of gray annuli to the corresponding retinal location of the other eye. By asking subjects to report the subjective visibility of the target (i.e. the disk) while varying the stimulus onset asynchrony (SOA) between disk and annulus, we obtained typical U-shaped masking functions. From these functions we inferred the critical SOAmax at which the mask (i.e. the annulus) optimally suppressed the visibility of the target. The ON-associated transmission delay was estimated by comparing SOAmax between conditions in which the disk had been presented to the affected eye and the mask to the other eye, and vice versa. SOAmax differed on average by 28 ms, suggesting a reduction in transmission speed in the affected eye. Compared to previously reported methods for assessing the perceptual consequences of altered neuronal transmission speed, the presented method is more accurate, as it is not limited by the observers’ ability to judge subtle variations in perceived synchrony. PMID:27711139
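
    The critical SOAmax can be located by fitting a parabola to the U-shaped visibility data and taking its vertex; the ratings below are invented for illustration:

        import numpy as np

        soa = np.array([-80, -40, 0, 40, 80, 120])            # ms
        visibility = np.array([0.9, 0.7, 0.4, 0.3, 0.5, 0.8])

        a, b, c = np.polyfit(soa, visibility, 2)
        soa_max = -b / (2 * a)      # vertex of the parabola = strongest masking
        print(f"SOAmax ~ {soa_max:.0f} ms")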

  11. Equivalence between Step Selection Functions and Biased Correlated Random Walks for Statistical Inference on Animal Movement

    PubMed Central

    Duchesne, Thierry; Fortin, Daniel; Rivest, Louis-Paul

    2015-01-01

    Animal movement has a fundamental impact on population and community structure and dynamics. Biased correlated random walks (BCRW) and step selection functions (SSF) are commonly used to study movements. Because no studies have contrasted the parameters and the statistical properties of their estimators for models constructed under these two Lagrangian approaches, it remains unclear whether or not they allow for similar inference. First, we used the Weak Law of Large Numbers to demonstrate that the log-likelihood function for estimating the parameters of BCRW models can be approximated by the log-likelihood of SSFs. Second, we illustrated the link between the two approaches by fitting BCRW with maximum likelihood and with SSF to simulated movement data in virtual environments and to the trajectory of bison (Bison bison L.) trails in natural landscapes. Using simulated and empirical data, we found that the parameters of a BCRW estimated directly from maximum likelihood and by fitting an SSF were remarkably similar. Movement analysis is increasingly used as a tool for understanding the influence of landscape properties on animal distribution. In the rapidly developing field of movement ecology, management and conservation biologists must decide which method they should implement to accurately assess the determinants of animal movement. We showed that BCRW and SSF can provide similar insights into the environmental features influencing animal movements. Both techniques have advantages. BCRW has already been extended to allow for multi-state modeling. Unlike BCRW, however, SSF can be estimated using most statistical packages, it can simultaneously evaluate habitat selection and movement biases, and can easily integrate a large number of movement taxes at multiple scales. SSF thus offers a simple, yet effective, statistical technique to identify movement taxis. PMID:25898019

  12. Fuzzy logic recursive change detection for tracking and denoising of video sequences

    NASA Astrophysics Data System (ADS)

    Zlokolica, Vladimir; De Geyter, Matthias; Schulte, Stefan; Pizurica, Aleksandra; Philips, Wilfried; Kerre, Etienne

    2005-03-01

    In this paper we propose a fuzzy logic recursive scheme for motion detection and temporal filtering that can deal with Gaussian noise and unsteady illumination conditions in both the temporal and the spatial direction. Our focus is on applications concerning tracking and denoising of image sequences. We process an input noisy sequence with fuzzy logic motion detection in order to determine the degree of motion confidence. The proposed motion detector combines the membership degrees appropriately using defined fuzzy rules, where the membership degree of motion for each pixel in a 2D sliding window is determined by the proposed membership function. Both the fuzzy membership function and the fuzzy rules are defined in such a way that the performance of the motion detector is optimized in terms of its robustness to noise and unsteady lighting conditions. We perform tracking and recursive adaptive temporal filtering simultaneously, where the amount of filtering is inversely proportional to the confidence in the existence of motion. Finally, temporally filtered frames are further processed by the proposed spatial filter in order to obtain the denoised image sequence. The main contribution of this paper is the robust novel fuzzy recursive scheme for motion detection and temporal filtering. We evaluate the proposed motion detection algorithm using two criteria: robustness to noise and to changing illumination conditions, and motion blur in temporal recursive denoising. Additionally, we make comparisons in terms of noise reduction with other state-of-the-art video denoising techniques.
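
    The core of such a scheme is a recursive temporal filter whose gain follows the motion confidence; a skeleton with the fuzzy motion detector abstracted into a per-pixel confidence map (the names and the gain law are assumptions):

        import numpy as np

        def recursive_temporal_filter(frames, motion_confidence, k_min=0.1):
            """Average heavily where m ~ 0 (static), preserve where m ~ 1."""
            out = [frames[0].astype(float)]
            for t in range(1, len(frames)):
                m = motion_confidence[t]        # per-pixel, in [0, 1]
                k = k_min + (1.0 - k_min) * m   # more motion -> less averaging
                out.append(k * frames[t] + (1.0 - k) * out[-1])
            return out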

  13. Dissociable functions of reward inference in the lateral prefrontal cortex and the striatum

    PubMed Central

    Tanaka, Shingo; Pan, Xiaochuan; Oguchi, Mineki; Taylor, Jessica E.; Sakagami, Masamichi

    2015-01-01

    In a complex and uncertain world, how do we select appropriate behavior? One possibility is that we choose actions that are highly reinforced by their probabilistic consequences (model-free processing). However, we may instead plan actions prior to their actual execution by predicting their consequences (model-based processing). It has been suggested that the brain contains multiple yet distinct systems involved in reward prediction. Several studies have tried to allocate model-free and model-based systems to the striatum and the lateral prefrontal cortex (LPFC), respectively. Although there is much support for this hypothesis, recent research has revealed discrepancies. To understand the nature of the reward prediction systems in the LPFC and the striatum, a series of single-unit recording experiments were conducted. LPFC neurons were found to infer the reward associated with the stimuli even when the monkeys had not yet learned the stimulus-reward (SR) associations directly. Striatal neurons seemed to predict the reward for each stimulus only after directly experiencing the SR contingency. However, the one exception was “Exclusive Or” situations in which striatal neurons could predict the reward without direct experience. Previous single-unit studies in monkeys have reported that neurons in the LPFC encode category information, and represent reward information specific to a group of stimuli. Here, as an extension of these, we review recent evidence that a group of LPFC neurons can predict reward specific to a category of visual stimuli defined by relevant behavioral responses. We suggest that the functional difference in reward prediction between the LPFC and the striatum is that while LPFC neurons can utilize abstract code, striatal neurons can code individual associations between stimuli and reward but cannot utilize abstract code. PMID:26236266

  14. Ladar range image denoising by a nonlocal probability statistics algorithm

    NASA Astrophysics Data System (ADS)

    Xia, Zhi-Wei; Li, Qi; Xiong, Zhi-Peng; Wang, Qi

    2013-01-01

    Based on the characteristics of coherent ladar range images and on nonlocal means (NLM), a nonlocal probability statistics (NLPS) algorithm is proposed in this paper. The difference is that NLM performs denoising using the mean of the conditional probability distribution function (PDF), while NLPS uses the maximum of the marginal PDF. In the algorithm, similar blocks are found by block matching and form a group. Pixels in the group are analyzed by probability statistics, and the gray value with maximum probability is used as the estimate of the current pixel. Simulated coherent ladar range images with different carrier-to-noise ratios and a real 8-gray-scale coherent ladar range image are denoised by this algorithm, and the results are compared with those of the median filter, the multi-template order mean filter, NLM, the median nonlocal mean filter and its incorporation of anatomical side information, and the unsupervised information-theoretic adaptive filter. Range abnormality noise and Gaussian noise in coherent ladar range images are effectively suppressed by NLPS.

  15. Efficient bias correction for magnetic resonance image denoising.

    PubMed

    Mukherjee, Partha Sarathi; Qiu, Peihua

    2013-05-30

    Magnetic resonance imaging (MRI) is a popular radiology technique that is used for visualizing detailed internal structure of the body. Observed MRI images are generated by the inverse Fourier transformation from received frequency signals of a magnetic resonance scanner system. Previous research has demonstrated that the random noise involved in observed MRI images can be described adequately by the so-called Rician noise model. Under that model, the observed image intensity at a given pixel is a nonlinear function of the true image intensity and of two independent zero-mean random variables with the same normal distribution. Because of this complicated noise structure, images denoised by conventional methods are usually biased, and the bias can reduce image contrast and negatively affect subsequent image analysis. Therefore, it is important to address the bias issue properly. To this end, several bias-correction procedures have been proposed in the literature. In this paper, we study the Rician noise model and the corresponding bias-correction problem systematically, and propose a new and more effective bias-correction formula based on regression analysis and Monte Carlo simulation. Numerical studies show that our proposed method works well in various applications. PMID:23074149
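
    For reference, the conventional first-order correction (not the paper's new regression-based formula) subtracts the Rician noise floor from the squared magnitude, using E[M^2] = A^2 + 2*sigma^2:

        import numpy as np

        def rician_correct(magnitude, sigma):
            """Bias-corrected intensity estimate for a magnitude MR image."""
            return np.sqrt(np.maximum(magnitude**2 - 2.0 * sigma**2, 0.0))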

  16. A New Adaptive Image Denoising Method Based on Neighboring Coefficients

    NASA Astrophysics Data System (ADS)

    Biswas, Mantosh; Om, Hari

    2016-03-01

    Many good techniques have been proposed for image denoising, including NeighShrink, the improved adaptive wavelet denoising method based on neighboring coefficients (IAWDMBNC), the improved wavelet shrinkage technique for image denoising (IWST), the local adaptive Wiener filter (LAWF), wavelet packet thresholding using median and Wiener filters (WPTMWF), and the adaptive image denoising method based on thresholding (AIDMT). These techniques are based on a local statistical description of the neighboring coefficients in a window. However, because of their thresholds, these methods cannot modify and remove too many small wavelet coefficients simultaneously, and therefore do not give good image quality. In this paper, a new image denoising method is proposed that shrinks the noisy coefficients using an adaptive threshold. Our method overcomes these drawbacks and performs better than the NeighShrink, IAWDMBNC, IWST, LAWF, WPTMWF, and AIDMT denoising methods.
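
    For comparison, the NeighShrink baseline mentioned above scales each wavelet coefficient by (1 - lambda^2/S^2)+, where S^2 is the sum of squared coefficients in its 3x3 neighbourhood and lambda is the universal threshold; a minimal single-subband sketch:

        import numpy as np
        from scipy.ndimage import uniform_filter

        def neighshrink(subband, sigma, n_total):
            lam2 = 2.0 * sigma**2 * np.log(n_total)        # lambda^2
            S2 = uniform_filter(subband**2, size=3) * 9.0  # 3x3 sum of squares
            factor = np.maximum(1.0 - lam2 / np.maximum(S2, 1e-12), 0.0)
            return subband * factor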

  17. Neural representation of swallowing is retained with age. A functional neuroimaging study validated by classical and Bayesian inference.

    PubMed

    Windel, Anne-Sophie; Mihai, Paul Glad; Lotze, Martin

    2015-06-01

    We investigated the neural representation of swallowing in two age groups for a total of 51 healthy participants (seniors: average age 64 years; young adults: average age 24 years) using high spatial resolution functional magnetic resonance imaging (fMRI). Two statistical comparisons (classical and Bayesian inference) revealed no significant differences between the groups, apart from higher cortical activation in the seniors in frontal pole 1 of Brodmann's Area 10 under Bayesian inference. Compared with the young participants, seniors showed longer reaction times and higher skin conductance responses (SCR) during swallowing. We found a positive association between SCR and fMRI activation only among seniors, in areas processing sensorimotor performance, arousal, and emotional perception. The results indicate that the highly automated swallowing network retains its functionality with age. However, seniors with higher SCR during swallowing appear to also engage areas involved in attention control and emotional regulation, possibly suggesting increased attentional and emotional demands during task performance.

  18. Performance comparison of denoising filters for source camera identification

    NASA Astrophysics Data System (ADS)

    Cortiana, A.; Conotter, V.; Boato, G.; De Natale, F. G. B.

    2011-02-01

    Source identification for digital content is one of the main branches of digital image forensics. It relies on the extraction of the photo-response non-uniformity (PRNU) noise as a unique intrinsic fingerprint that efficiently characterizes the digital device which generated the content. Such noise is estimated as the difference between the content and its de-noised version obtained via denoising filter processing. This paper proposes a performance comparison of different denoising filters for source identification purposes. In particular, results achieved with a sophisticated 3D filter are presented and discussed with respect to state-of-the-art denoising filters previously employed in such a context.
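
    The pipeline being benchmarked is simple to state: the camera fingerprint is the average of noise residuals from several images, and attribution is a correlation test between a test residual and the fingerprint. A sketch, with denoise standing in for whichever filter is under comparison:

        import numpy as np

        def prnu_fingerprint(images, denoise):
            # Camera fingerprint: average of noise residuals, where each
            # residual is an image minus its denoised version.
            return np.mean([img - denoise(img) for img in images], axis=0)

        def same_camera_score(image, fingerprint, denoise):
            # Normalized correlation between the test residual and the
            # fingerprint; high values support attribution to the camera.
            w = (image - denoise(image)).ravel()
            f = fingerprint.ravel()
            w = w - w.mean()
            f = f - f.mean()
            return float(w @ f / (np.linalg.norm(w) * np.linalg.norm(f) + 1e-12))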

  19. Adaptive Image Denoising by Mixture Adaptation

    NASA Astrophysics Data System (ADS)

    Luo, Enming; Chan, Stanley H.; Nguyen, Truong Q.

    2016-10-01

    We propose an adaptive learning procedure to learn patch-based image priors for image denoising. The new algorithm, called Expectation-Maximization (EM) adaptation, takes a generic prior learned from an external database and adapts it to the noisy image to generate a specific prior. Different from existing methods that combine internal and external statistics in ad hoc ways, the proposed algorithm is rigorously derived from a Bayesian hyper-prior perspective. This paper makes two contributions: first, we provide a full derivation of the EM adaptation algorithm and demonstrate ways to reduce its computational cost; second, in the absence of the latent clean image, we show how EM adaptation can be modified to work with a pre-filtered image. Experimental results show that the proposed adaptation algorithm consistently yields better denoising results than the generic prior without adaptation and is superior to several state-of-the-art algorithms.

  20. Infrared image denoising by nonlocal means filtering

    NASA Astrophysics Data System (ADS)

    Dee-Noor, Barak; Stern, Adrian; Yitzhaky, Yitzhak; Kopeika, Natan

    2012-05-01

    The recently introduced non-local means (NLM) image denoising technique broke the traditional paradigm according to which image pixels are processed by their surroundings. Non-local means was demonstrated to outperform state-of-the-art denoising techniques when applied to images in the visible range. The technique is even more powerful when applied to low-contrast images, which makes it attractive for denoising infrared (IR) images. In this work we investigate the performance of NLM applied to infrared images. The main drawback of NLM is the large computational time required to search for similar patches, and several techniques have been developed in recent years to reduce this burden. Here we present a new technique designed to reduce the computational cost while sustaining the filtering quality of NLM. We show that the new technique, which we call Multi-Resolution Search NLM (MRS-NLM), significantly reduces the computational cost of the filtering process, and we present a study of its performance on IR images.
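
    For reference, the baseline estimate that MRS-NLM accelerates is the patch-weighted mean below; a multi-resolution search would restrict this exhaustive candidate loop. The patch radius, search radius, and smoothing parameter h are free choices, not values from the paper.

        import numpy as np

        def nlm_pixel(pad, i, j, r=1, s=10, h=10.0):
            # Classic NLM estimate of the pixel at (i, j) of a padded image:
            # a mean of search-window pixels weighted by Gaussian-kernelized
            # patch distances to the reference patch.  The image must be
            # padded by at least r + s pixels on each side.
            ref = pad[i - r:i + r + 1, j - r:j + r + 1]
            num = den = 0.0
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    blk = pad[i + di - r:i + di + r + 1,
                              j + dj - r:j + dj + r + 1]
                    w = np.exp(-np.mean((blk - ref) ** 2) / h ** 2)
                    num += w * pad[i + di, j + dj]
                    den += w
            return num / den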

  1. A phylogeny-based benchmarking test for orthology inference reveals the limitations of function-based validation.

    PubMed

    Trachana, Kalliopi; Forslund, Kristoffer; Larsson, Tomas; Powell, Sean; Doerks, Tobias; von Mering, Christian; Bork, Peer

    2014-01-01

    Accurate orthology prediction is crucial for many applications in the post-genomic era. The lack of broadly accepted benchmark tests precludes a comprehensive analysis of orthology inference. So far, functional annotation between orthologs serves as a performance proxy. However, this violates the fundamental principle of orthology as an evolutionary definition, while it is often not applicable due to limited experimental evidence for most species. Therefore, we constructed high quality "gold standard" orthologous groups that can serve as a benchmark set for orthology inference in bacterial species. Herein, we used this dataset to demonstrate 1) why a manually curated, phylogeny-based dataset is more appropriate for benchmarking orthology than other popular practices and 2) how it guides database design and parameterization through careful error quantification. More specifically, we illustrate how function-based tests often fail to identify false assignments, misjudging the true performance of orthology inference methods. We also examined how our dataset can instruct the selection of a "core" species repertoire to improve detection accuracy. We conclude that including more genomes at the proper evolutionary distances can influence the overall quality of orthology detection. The curated gene families, called Reference Orthologous Groups, are publicly available at http://eggnog.embl.de/orthobench2. PMID:25369365

  2. Lidar signal de-noising by singular value decomposition

    NASA Astrophysics Data System (ADS)

    Wang, Huanxue; Liu, Jianguo; Zhang, Tianshu

    2014-11-01

    Signal de-noising remains an important problem in lidar signal processing. This paper presents a de-noising method based on singular value decomposition. Experimental results on simulated and real lidar signals show that the proposed algorithm not only improves the signal-to-noise ratio effectively but also preserves more detail.
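
    The abstract does not spell out how the decomposition is applied to a 1-D return. A common construction, sketched here under that assumption, embeds the signal in a Hankel (trajectory) matrix, truncates the singular spectrum, and averages the anti-diagonals back into a signal; the window length and rank are illustrative choices.

        import numpy as np

        def svd_denoise(signal, window=128, rank=4):
            # Hankel embedding + truncated SVD + anti-diagonal averaging,
            # as in singular spectrum analysis.
            signal = np.asarray(signal, dtype=float)
            N = signal.size
            H = np.lib.stride_tricks.sliding_window_view(signal, window)
            U, s, Vt = np.linalg.svd(H, full_matrices=False)
            Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]
            out = np.zeros(N)
            cnt = np.zeros(N)
            for k in range(N - window + 1):
                out[k:k + window] += Hr[k]
                cnt[k:k + window] += 1
            return out / cnt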

  3. Combining interior and exterior characteristics for remote sensing image denoising

    NASA Astrophysics Data System (ADS)

    Peng, Ni; Sun, Shujin; Wang, Runsheng; Zhong, Ping

    2016-04-01

    Remote sensing image denoising faces many challenges since a remote sensing image usually covers a wide area and thus contains complex contents. Using patch-based statistical characteristics is a flexible way to improve denoising performance. Two kinds of statistical characteristics are usually available: interior and exterior. Different statistical characteristics have their own strengths in restoring specific image contents, so combining them may have the potential to improve denoising results. This work proposes a combination method that adaptively selects statistical characteristics for different image contents. The proposed approach is implemented through a new characteristics-selection criterion learned over training data. Moreover, with the proposed combination method, this work develops a denoising algorithm for remote sensing images. Experimental results show that our method makes full use of the respective advantages of interior and exterior characteristics for different image contents and thus improves the denoising performance.

  4. Denoising portal images by means of wavelet techniques

    NASA Astrophysics Data System (ADS)

    Gonzalez Lopez, Antonio Francisco

    Portal images are used in radiotherapy for the verification of patient positioning. The distinguishing feature of this image type lies in its formation process: the same beam used for patient treatment is used for image formation. The high energy of the photons used in radiotherapy strongly limits the quality of portal images: low contrast between tissues, low spatial resolution and low signal-to-noise ratio. This Thesis studies the enhancement of these images, in particular the denoising of portal images. The statistical properties of portal images and noise are studied: power spectra, statistical dependencies between image and noise, and marginal, joint and conditional distributions in the wavelet domain. Later, various denoising methods are applied to noisy portal images. Methods operating in the wavelet domain are the basis of this Thesis. In addition, the Wiener filter and the non-local means filter (NLM), operating in the image domain, are used as a reference. Other topics studied in this Thesis are spatial resolution, wavelet processing and image processing in dosimetry in radiotherapy. In this regard, the spatial resolution of portal imaging systems is studied; a new method for determining the spatial resolution of imaging equipment in digital radiology is presented; the calculation of the power spectrum in the wavelet domain is studied; reducing uncertainty in film dosimetry is investigated; a method for the dosimetry of small radiation fields with radiochromic film is presented; the optimal signal resolution, as a function of the noise level and the quantization step, is determined for the digitization of films; and the useful optical density range is set, as a function of the required uncertainty level, for a densitometric system. Marginal distributions of portal images are similar to those of natural images. This also applies to the statistical relationships between wavelet coefficients, intra-band and inter-band. These facts result in a better

  5. Bayesian wavelet-based image denoising using the Gauss-Hermite expansion.

    PubMed

    Rahman, S M Mahbubur; Ahmad, M Omair; Swamy, M N S

    2008-10-01

    The probability density functions (PDFs) of the wavelet coefficients play a key role in many wavelet-based image processing algorithms, such as denoising. The conventional PDFs usually have a limited number of parameters that are calculated from the first few moments only. Consequently, such PDFs cannot be made to fit very well with the empirical PDF of the wavelet coefficients of an image. As a result, a shrinkage function utilizing any of these density functions provides a substandard denoising performance. In order for the probabilistic model of the image wavelet coefficients to incorporate an appropriate number of parameters that depend on the higher order moments, a PDF using a series expansion in terms of the Hermite polynomials, which are orthogonal with respect to the standard Gaussian weight function, is introduced. A modification of the series function is introduced so that only a finite number of terms is needed to model the image wavelet coefficients, while ensuring that the resulting PDF is non-negative. It is shown that the proposed PDF matches the empirical one better than some of the standard ones, such as the generalized Gaussian or Bessel K-form PDF. A Bayesian image denoising technique is then proposed, wherein the new PDF is exploited to statistically model the subband as well as the local neighboring image wavelet coefficients. Experimental results on several test images demonstrate that the proposed denoising method, in both the subband-adaptive and locally adaptive conditions, provides a performance better than that of most of the methods that use PDFs with a limited number of parameters.

  6. OPTICAL COHERENCE TOMOGRAPHY HEART TUBE IMAGE DENOISING BASED ON CONTOURLET TRANSFORM.

    PubMed

    Guo, Qing; Sun, Shuifa; Dong, Fangmin; Gao, Bruce Z; Wang, Rui

    2012-01-01

    Optical coherence tomography (OCT) has gradually become an important imaging technology in the biomedical field owing to its noninvasive, nondestructive and real-time properties. However, the interpretation and application of OCT images are limited by ubiquitous noise. In this paper, a denoising algorithm based on the contourlet transform is proposed for OCT heart tube images. A bivariate function is constructed to model the joint probability density function (pdf) of a coefficient and its cousin in the contourlet domain. A bivariate shrinkage function is derived to denoise the image by maximum a posteriori (MAP) estimation. Three metrics, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and equivalent number of looks (ENL), are used to evaluate the images denoised by the proposed algorithm. The results show that the signal-to-noise ratio is improved while object edges are preserved by the proposed algorithm. Systematic comparisons with other conventional algorithms, such as the mean filter, median filter, RKT filter, Lee filter, as well as the bivariate shrinkage function for the wavelet-based algorithm, are conducted. The advantage of the proposed algorithm over these methods is illustrated. PMID:25364626
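
    The derived shrinkage is in the family of the well-known Sendur–Selesnick bivariate MAP rule, sketched below for a coefficient y1 and its cousin y2. The paper's contourlet-domain function will differ in its details, so treat this as a generic sketch.

        import numpy as np

        def bivariate_shrink(y1, y2, sigma_n, sigma):
            # Joint MAP shrinkage of a noisy coefficient y1 given its cousin
            # y2; sigma_n is the noise standard deviation and sigma the local
            # signal standard deviation.
            R = np.sqrt(y1 ** 2 + y2 ** 2)
            gain = np.maximum(R - np.sqrt(3.0) * sigma_n ** 2 / sigma, 0.0)
            return y1 * gain / np.maximum(R, 1e-12)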

  7. Applications of wavelet analysis in differential propagation phase shift data de-noising

    NASA Astrophysics Data System (ADS)

    Hu, Zhiqun; Liu, Liping

    2014-07-01

    Using numerical simulation data of the forward differential propagation phase shift (ϕDP) of polarimetric radar, the principle and the steps of noise reduction by wavelet analysis are introduced in detail. Benefiting from multiscale analysis, various types of noise can be identified according to their characteristics at different scales and suppressed at different resolutions by a penalty threshold strategy, through which a fixed threshold value is applied; a default threshold strategy, through which the threshold value is determined by the noise intensity; or a ϕDP penalty threshold strategy, through which a special value is designed for ϕDP de-noising. Then, a hard- or soft-threshold function, depending on the de-noising purpose, is selected to reconstruct the signal. Combining the three noise suppression strategies and the two signal reconstruction functions, and without loss of generality, two schemes are presented to verify the de-noising effect with dbN wavelets: (1) the penalty threshold strategy with the soft threshold function (PSS); (2) the ϕDP penalty threshold strategy with the soft threshold function (PPSS). Furthermore, the wavelet de-noising is compared with the mean, median, Kalman, and finite impulse response (FIR) methods on simulation data and two actual cases. The results suggest that both schemes perform well, especially when ϕDP data are simultaneously polluted by noise of various scales and types. A slight difference is that the PSS method retains more detail, while the PPSS smooths the signal more successfully.
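
    The two reconstruction functions referred to above are the standard ones, sketched here with the threshold t supplied by whichever strategy (penalty, default, or ϕDP penalty) is in use:

        import numpy as np

        def hard_threshold(w, t):
            # Keep coefficients above the threshold unchanged; zero the rest.
            return np.where(np.abs(w) > t, w, 0.0)

        def soft_threshold(w, t):
            # Shrink surviving coefficients toward zero for a smoother result.
            return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)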

  8. A novel partial volume effects correction technique integrating deconvolution associated with denoising within an iterative PET image reconstruction

    SciTech Connect

    Merlin, Thibaut; Visvikis, Dimitris; Fernandez, Philippe; Lamare, Frederic

    2015-02-15

    Purpose: Partial volume effect (PVE) plays an important role in both qualitative and quantitative PET image accuracy, especially for small structures. A previously proposed voxelwise PVE correction method applied on PET reconstructed images involves the use of Lucy–Richardson deconvolution incorporating wavelet-based denoising to limit the associated propagation of noise. The aim of this study is to incorporate the deconvolution, coupled with the denoising step, directly inside the iterative reconstruction process to further improve PVE correction. Methods: The list-mode ordered subset expectation maximization (OSEM) algorithm has been modified accordingly, with the Lucy–Richardson deconvolution algorithm applied to the current estimate of the image at each reconstruction iteration. Acquisitions of the NEMA NU2-2001 IQ phantom were performed on a GE DRX PET/CT system to study the impact of incorporating the deconvolution inside the reconstruction [with and without the point spread function (PSF) model] in comparison to its application postreconstruction and to standard iterative reconstruction incorporating the PSF model. The impact of the denoising step was also evaluated. Images were semiquantitatively assessed by studying the trade-off between intensity recovery and the noise level in the background, estimated as relative standard deviation. Qualitative assessments of the developed methods were additionally performed on clinical cases. Results: Incorporating the deconvolution without denoising within the reconstruction achieved superior intensity recovery in comparison to both standard OSEM reconstruction integrating a PSF model and application of the deconvolution algorithm in a postreconstruction process. The addition of the denoising step made it possible to limit the SNR degradation while preserving the intensity recovery. Conclusions: This study demonstrates the feasibility of incorporating the Lucy–Richardson deconvolution associated with a
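
    As a point of reference for the deconvolution step, a plain Richardson–Lucy (Lucy–Richardson) iteration is sketched below. In the modified OSEM the same multiplicative update would be applied to the current image estimate at each reconstruction iteration; the wavelet denoising step is omitted here.

        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy(observed, psf, n_iter=10, eps=1e-12):
            # Multiplicative RL update for a 2-D image and 2-D PSF:
            # est <- est * (mirrored_psf * (observed / (psf * est))).
            observed = np.asarray(observed, dtype=float)
            est = np.full_like(observed, observed.mean())
            psf_mirror = psf[::-1, ::-1]
            for _ in range(n_iter):
                blur = fftconvolve(est, psf, mode='same')
                est = est * fftconvolve(observed / (blur + eps),
                                        psf_mirror, mode='same')
            return est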

  9. Wavelet-based fMRI analysis: 3-D denoising, signal separation, and validation metrics

    PubMed Central

    Khullar, Siddharth; Michael, Andrew; Correa, Nicolle; Adali, Tulay; Baum, Stefi A.; Calhoun, Vince D.

    2010-01-01

    We present a novel integrated wavelet-domain framework (w-ICA) for 3-D de-noising of functional magnetic resonance imaging (fMRI) data, followed by source separation analysis using independent component analysis (ICA) in the wavelet domain. We propose a 3-D wavelet-based multi-directional de-noising scheme in which each volume in a 4-D fMRI data set is sub-sampled using the axial, sagittal and coronal geometries to obtain three different slice-by-slice representations of the same data. The filtered intensity value of an arbitrary voxel is computed as an expected value of the de-noised wavelet coefficients corresponding to the three viewing geometries for each sub-band. This results in a robust set of de-noised wavelet coefficients for each voxel. Given the decorrelated nature of these de-noised wavelet coefficients, it is possible to obtain more accurate source estimates using ICA in the wavelet domain. The contributions of this work can be realized as two modules. First, the analysis module, where we combine the new 3-D wavelet denoising approach with the better signal separation properties of ICA in the wavelet domain to yield an activation component that corresponds closely to the true underlying signal and is maximally independent with respect to other components. Second, we propose and describe two novel shape metrics for post-ICA comparisons between activation regions obtained through different frameworks. We verified our method using simulated as well as real fMRI data and compared our results against the conventional scheme (Gaussian smoothing + spatial ICA: s-ICA). The results show significant improvements based on two important features: (1) preservation of the shape of the activation region (shape metrics) and (2) receiver operating characteristic (ROC) curves. It was observed that the proposed framework was able to preserve the actual activation shape in a consistent manner even for very high noise levels, in addition to significant reduction in false

  10. Denoising solar radiation data using coiflet wavelets

    SciTech Connect

    Karim, Samsul Ariffin Abdul; Janier, Josefina B.; Muthuvalu, Mohana Sundaram; Hasan, Mohammad Khatim; Sulaiman, Jumat; Ismail, Mohd Tahir

    2014-10-24

    Signal denoising and smoothing play an important role in processing signals obtained from experiments or data collection. Collected data are usually a mixture of the true signal and error or noise, which may come from the measuring apparatus or from human error in handling the data. Normally, before the data are used for further processing, the unwanted noise needs to be filtered out. One efficient method for filtering the data is the wavelet transform. Because the received solar radiation data fluctuate over time, they contain unwanted oscillations, i.e., noise, which must be filtered out before the data are used to develop a mathematical model. In order to apply denoising using the wavelet transform (WT), the thresholding values need to be calculated. In this paper a new thresholding approach is proposed. The coiflet2 wavelet with vanishing moment 4 is utilized for our purpose. The numerical results clearly show that the new thresholding approach gives better results than the existing approach, namely the global thresholding value.

  11. [A novel denoising approach to SVD filtering based on DCT and PCA in CT image].

    PubMed

    Feng, Fuqiang; Wang, Jun

    2013-10-01

    Because of various effects of the imaging mechanism, noise is inevitably introduced in the medical CT imaging process. Noise in the images greatly degrades their quality and brings difficulties to clinical diagnosis. This paper presents a new method to improve the performance of singular value decomposition (SVD) filtering in CT images. A filter based on SVD can effectively analyze the characteristics of the image in the horizontal and/or vertical directions. According to the features of CT images, the discrete cosine transform (DCT) can be used to extract the region of interest and shield the uninteresting region, realizing the extraction of the structural characteristics of the image. SVD filtering is then applied to the image after DCT, and an adaptive weighting function is constructed for image reconstruction. The proposed denoising approach was applied to CT image denoising, and the experimental results showed that the new method effectively improves the performance of SVD filtering.

  12. Multiresolution parametric estimation of transparent motions and denoising of fluoroscopic images.

    PubMed

    Auvray, Vincent; Liénard, Jean; Bouthemy, Patrick

    2005-01-01

    We describe a novel multiresolution parametric framework to estimate the transparent motions typically present in X-ray exams. Assuming the presence of two transparent layers, it computes two affine velocity fields by minimizing an appropriate objective function with an incremental Gauss-Newton technique. We have designed a realistic simulation scheme of fluoroscopic image sequences to validate our method on data with ground truth and different levels of noise. An experiment on real clinical images is also reported. We then exploit this transparent-motion estimation method to denoise two-layer image sequences using a motion-compensated estimation method. In accordance with theory, we show that we reach a denoising factor of 2/3 in a few iterations without introducing any local artifacts in the image sequence.

  13. Computational Approaches to Spatial Orientation: From Transfer Functions to Dynamic Bayesian Inference

    PubMed Central

    MacNeilage, Paul R.; Ganesan, Narayan; Angelaki, Dora E.

    2008-01-01

    Spatial orientation is the sense of body orientation and self-motion relative to the stationary environment, fundamental to normal waking behavior and control of everyday motor actions including eye movements, postural control, and locomotion. The brain achieves spatial orientation by integrating visual, vestibular, and somatosensory signals. Over the past years, considerable progress has been made toward understanding how these signals are processed by the brain using multiple computational approaches that include frequency domain analysis, the concept of internal models, observer theory, Bayesian theory, and Kalman filtering. Here we put these approaches in context by examining the specific questions that can be addressed by each technique and some of the scientific insights that have resulted. We conclude with a recent application of particle filtering, a probabilistic simulation technique that aims to generate the most likely state estimates by incorporating internal models of sensor dynamics and physical laws and noise associated with sensory processing as well as prior knowledge or experience. In this framework, priors for low angular velocity and linear acceleration can explain the phenomena of velocity storage and frequency segregation, both of which have been modeled previously using arbitrary low-pass filtering. How Kalman and particle filters may be implemented by the brain is an emerging field. Unlike past neurophysiological research that has aimed to characterize mean responses of single neurons, investigations of dynamic Bayesian inference should attempt to characterize population activities that constitute probabilistic representations of sensory and prior information. PMID:18842952

  14. Effect of taxonomic resolution on ecological and palaeoecological inference - a test using testate amoeba water table depth transfer functions

    NASA Astrophysics Data System (ADS)

    Mitchell, Edward A. D.; Lamentowicz, Mariusz; Payne, Richard J.; Mazei, Yuri

    2014-05-01

    Sound taxonomy is a major requirement for quantitative environmental reconstruction using biological data. Transfer function performance should theoretically be expected to decrease with reduced taxonomic resolution. However, for many groups of organisms taxonomy is imperfect and species-level identification is not always possible. We conducted numerical experiments on five testate amoeba water table depth (DWT) transfer function data sets. We sequentially reduced the number of taxonomic groups by successively merging morphologically similar species and removing inconspicuous species. We then assessed how these changes affected model performance and palaeoenvironmental reconstruction using two fossil data sets. Model performance decreased with decreasing taxonomic resolution, but this had only limited effects on the patterns of inferred DWT, at least for detecting major dry/wet shifts. Higher-resolution taxonomy may, however, still be useful for detecting more subtle changes, or for reconstructed shifts to be significant.

  15. A Genome-Scale Investigation of How Sequence, Function, and Tree-Based Gene Properties Influence Phylogenetic Inference

    PubMed Central

    Shen, Xing-Xing; Salichos, Leonidas; Rokas, Antonis

    2016-01-01

    Molecular phylogenetic inference is inherently dependent on choices in both methodology and data. Many insightful studies have shown how choices in methodology, such as the model of sequence evolution or optimality criterion used, can strongly influence inference. In contrast, much less is known about the impact of choices in the properties of the data, typically genes, on phylogenetic inference. We investigated the relationships between 52 gene properties (24 sequence-based, 19 function-based, and 9 tree-based) with each other and with three measures of phylogenetic signal in two assembled data sets of 2,832 yeast and 2,002 mammalian genes. We found that most gene properties, such as evolutionary rate (measured through the percent average of pairwise identity across taxa) and total tree length, were highly correlated with each other. Similarly, several gene properties, such as gene alignment length, Guanine-Cytosine content, and the proportion of tree distance on internal branches divided by relative composition variability (treeness/RCV), were strongly correlated with phylogenetic signal. Analysis of partial correlations between gene properties and phylogenetic signal, in which gene evolutionary rate and alignment length were simultaneously controlled, showed similar patterns of correlations, albeit weaker in strength. Examination of the relative importance of each gene property for phylogenetic signal identified gene alignment length, along with the number of parsimony-informative sites and variable sites, as the most important predictors. Interestingly, the subsets of gene properties that optimally predicted phylogenetic signal differed considerably across our three phylogenetic measures and two data sets; however, gene alignment length and RCV were consistently included as predictors of all three phylogenetic measures in both yeasts and mammals. These results suggest that a handful of sequence-based gene properties are reliable predictors of phylogenetic signal

  16. A Genome-Scale Investigation of How Sequence, Function, and Tree-Based Gene Properties Influence Phylogenetic Inference.

    PubMed

    Shen, Xing-Xing; Salichos, Leonidas; Rokas, Antonis

    2016-01-01

    Molecular phylogenetic inference is inherently dependent on choices in both methodology and data. Many insightful studies have shown how choices in methodology, such as the model of sequence evolution or optimality criterion used, can strongly influence inference. In contrast, much less is known about the impact of choices in the properties of the data, typically genes, on phylogenetic inference. We investigated the relationships between 52 gene properties (24 sequence-based, 19 function-based, and 9 tree-based) with each other and with three measures of phylogenetic signal in two assembled data sets of 2,832 yeast and 2,002 mammalian genes. We found that most gene properties, such as evolutionary rate (measured through the percent average of pairwise identity across taxa) and total tree length, were highly correlated with each other. Similarly, several gene properties, such as gene alignment length, Guanine-Cytosine content, and the proportion of tree distance on internal branches divided by relative composition variability (treeness/RCV), were strongly correlated with phylogenetic signal. Analysis of partial correlations between gene properties and phylogenetic signal, in which gene evolutionary rate and alignment length were simultaneously controlled, showed similar patterns of correlations, albeit weaker in strength. Examination of the relative importance of each gene property for phylogenetic signal identified gene alignment length, along with the number of parsimony-informative sites and variable sites, as the most important predictors. Interestingly, the subsets of gene properties that optimally predicted phylogenetic signal differed considerably across our three phylogenetic measures and two data sets; however, gene alignment length and RCV were consistently included as predictors of all three phylogenetic measures in both yeasts and mammals. These results suggest that a handful of sequence-based gene properties are reliable predictors of phylogenetic signal

  17. Emergence and evolution of modern molecular functions inferred from phylogenomic analysis of ontological data.

    PubMed

    Kim, Kyung Mo; Caetano-Anollés, Gustavo

    2010-07-01

    The biological processes that characterize the phenotypes of a living system are embodied in the function of molecules and hold the key to evolutionary history, delimiting natural selection and change. These processes and functions provide direct insight into the emergence, development, and organization of cellular life. However, detailed molecular functions make up a network-like hierarchy of relationships that tells little of evolutionary links between structure and function in biology. For example, Gene Ontology terms represent widely-used vocabularies of processes and functions with evolutionary relationships that are implicit but not defined. Here, we uncover patterns of global evolutionary history in ontological terms associated with the sequence of 38 genomes. These patterns unfold the metabolic origins of modern molecular functions and major biological transitions in evolution toward complex life. Phylogenies reveal the primordial appearance of hydrolases and transferases, with ATPase, GTPase, and helicase activities being the most ancient. This indicates that ancient catalysts were crucial for binding and transport, the emergence of nucleic acids and protein biopolymers, and the communication of primordial cells with the environment. Finally, the history of biological processes showed that cellular biopolymer metabolic processes preceded biopolymer biosynthesis and essential processes related to macromolecular formation, directly challenging the existence of an RNA world. Phylogenomic systematization of biological function takes the structure and function paradigm to a completely new level of abstraction, demonstrating a "metabolic first" origin of life. The approach uncovers patterns in the morphing of function that are unprecedented and necessary for systematic views in biology. PMID:20418223

  18. Comparative population genomics: power and principles for the inference of functionality.

    PubMed

    Lawrie, David S; Petrov, Dmitri A

    2014-04-01

    The availability of sequenced genomes from multiple related organisms allows the detection and localization of functional genomic elements based on the idea that such elements evolve more slowly than neutral sequences. Although such comparative genomics methods have proven useful in discovering functional elements and ascertaining levels of functional constraint in the genome as a whole, here we outline limitations intrinsic to this approach that cannot be overcome by sequencing more species. We argue that it is essential to supplement comparative genomics with ultra-deep sampling of populations from closely related species to enable substantially more powerful genomic scans for functional elements. The convergence of sequencing technology and population genetics theory has made such projects feasible and has exciting implications for functional genomics.

  19. A new method for mobile phone image denoising

    NASA Astrophysics Data System (ADS)

    Jin, Lianghai; Jin, Min; Li, Xiang; Xu, Xiangyang

    2015-12-01

    Images captured by mobile phone cameras via pipeline processing usually contain various kinds of noise, especially granular noise with different shapes and sizes in both the luminance and chrominance channels. In the chrominance channels, noise is closely related to image brightness. To improve image quality, this paper presents a new method for denoising such mobile phone images. The proposed scheme converts the noisy RGB image to luminance and chrominance images, which are then denoised by a common filtering framework. The common filtering framework processes a noisy pixel by first excluding the neighborhood pixels that significantly deviate from the (vector) median and then utilizing the remaining neighborhood pixels to restore the current pixel. In the framework, the strength of chrominance image denoising is controlled by image brightness. The experimental results show that the proposed method clearly outperforms several representative denoising methods in terms of both objective measures and visual evaluation.
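
    The common filtering framework can be sketched per neighborhood window. The deviation test below uses a median-absolute-deviation rule as a stand-in for the paper's unspecified criterion, and the brightness-dependent strength control is not modeled.

        import numpy as np

        def filter_window(win, k=1.5):
            # Restore the center pixel of a window: discard neighbors that
            # deviate from the window median by more than k median absolute
            # deviations, then average the surviving neighbors.
            med = np.median(win)
            mad = np.median(np.abs(win - med)) + 1e-12
            keep = win[np.abs(win - med) <= k * mad]
            return keep.mean() if keep.size else med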

  20. Denoising time-domain induced polarisation data using wavelet techniques

    NASA Astrophysics Data System (ADS)

    Deo, Ravin N.; Cull, James P.

    2016-05-01

    Time-domain induced polarisation (TDIP) methods are routinely used for near-surface evaluations in quasi-urban environments harbouring networks of buried civil infrastructure. A conventional technique for improving the signal-to-noise ratio in such environments is analogue or digital low-pass filtering followed by stacking and rectification; however, this induces large distortions in the processed data. In this study, we have conducted the first application of wavelet-based denoising techniques to processing raw TDIP data. Our investigation included laboratory and field measurements to better understand the advantages and limitations of this technique. It was found that the distortions arising from conventional filtering can be largely avoided with the use of wavelet-based denoising. With recent advances in full-waveform acquisition and analysis, the incorporation of wavelet denoising can further enhance surveying capabilities. In this work, we present the rationale for utilising wavelet denoising methods and discuss some important implications, which can positively influence TDIP methods.

  1. Edge-preserving image denoising via optimal color space projection.

    PubMed

    Lian, Nai-Xiang; Zagorodnov, Vitali; Tan, Yap-Peng

    2006-09-01

    Denoising of color images can be done on each color component independently. Recent work has shown that exploiting strong correlation between high-frequency content of different color components can improve the denoising performance. We show that for typical color images high correlation also means similarity, and propose to exploit this strong intercolor dependency using an optimal luminance/color-difference space projection. Experimental results confirm that performing denoising on the projected color components yields superior denoising performance, both in peak signal-to-noise ratio and visual quality sense, compared to that of existing solutions. We also develop a novel approach to estimate directly from the noisy image data the image and noise statistics, which are required to determine the optimal projection.

  2. Denoising of chaotic signal using independent component analysis and empirical mode decomposition with circulate translating

    NASA Astrophysics Data System (ADS)

    Wen-Bo, Wang; Xiao-Dong, Zhang; Yuchan, Chang; Xiang-Li, Wang; Zhao, Wang; Xi, Chen; Lei, Zheng

    2016-01-01

    In this paper, a new method to reduce noise in chaotic signals based on ICA (independent component analysis) and EMD (empirical mode decomposition) is proposed. The basic idea is to first decompose the chaotic signal and construct multidimensional input vectors based on EMD and its translation invariance. Second, independent component analysis is performed on the input vectors, which amounts to a self-adapting denoising of the intrinsic mode functions (IMFs) of the chaotic signal. Finally, the denoised IMFs are combined into the new denoised chaotic signal. Experiments were carried out on the Lorenz chaotic signal contaminated with different Gaussian noises and on the monthly observed chaotic sunspot sequence. The results show that the proposed method is effective for denoising chaotic signals. Moreover, it can effectively correct the center point in the phase space, bringing it closer to the real track of the chaotic attractor. Project supported by the National Science and Technology, China (Grant No. 2012BAJ15B04), the National Natural Science Foundation of China (Grant Nos. 41071270 and 61473213), the Natural Science Foundation of Hubei Province, China (Grant No. 2015CFB424), the State Key Laboratory Foundation of Satellite Ocean Environment Dynamics, China (Grant No. SOED1405), the Hubei Provincial Key Laboratory Foundation of Metallurgical Industry Process System Science, China (Grant No. Z201303), and the Hubei Key Laboratory Foundation of Transportation Internet of Things, Wuhan University of Technology, China (Grant No. 2015III015-B02).

  3. Genome-scale co-evolutionary inference identifies functions and clients of bacterial Hsp90.

    PubMed

    Press, Maximilian O; Li, Hui; Creanza, Nicole; Kramer, Günter; Queitsch, Christine; Sourjik, Victor; Borenstein, Elhanan

    2013-01-01

    The molecular chaperone Hsp90 is essential in eukaryotes, in which it facilitates the folding of developmental regulators and signal transduction proteins known as Hsp90 clients. In contrast, Hsp90 is not essential in bacteria, and a broad characterization of its molecular and organismal function is lacking. To enable such characterization, we used a genome-scale phylogenetic analysis to identify genes that co-evolve with bacterial Hsp90. We find that genes whose gain and loss were coordinated with Hsp90 throughout bacterial evolution tended to function in flagellar assembly, chemotaxis, and bacterial secretion, suggesting that Hsp90 may aid assembly of protein complexes. To add to the limited set of known bacterial Hsp90 clients, we further developed a statistical method to predict putative clients. We validated our predictions by demonstrating that the flagellar protein FliN and the chemotaxis kinase CheA behaved as Hsp90 clients in Escherichia coli, confirming the predicted role of Hsp90 in chemotaxis and flagellar assembly. Furthermore, normal Hsp90 function is important for wild-type motility and/or chemotaxis in E. coli. This novel function of bacterial Hsp90 agreed with our subsequent finding that Hsp90 is associated with a preference for multiple habitats and may therefore face a complex selection regime. Taken together, our results reveal previously unknown functions of bacterial Hsp90 and open avenues for future experimental exploration by implicating Hsp90 in the assembly of membrane protein complexes and adaptation to novel environments. PMID:23874229

  4. Denoising, deconvolving, and decomposing photon observations. Derivation of the D3PO algorithm

    NASA Astrophysics Data System (ADS)

    Selig, Marco; Enßlin, Torsten A.

    2015-02-01

    The analysis of astronomical images is a non-trivial task. The D3PO algorithm addresses the inference problem of denoising, deconvolving, and decomposing photon observations. Its primary goal is the simultaneous but individual reconstruction of the diffuse and point-like photon flux given a single photon count image, where the fluxes are superimposed. In order to discriminate between these morphologically different signal components, a probabilistic algorithm is derived in the language of information field theory based on a hierarchical Bayesian parameter model. The signal inference exploits prior information on the spatial correlation structure of the diffuse component and the brightness distribution of the spatially uncorrelated point-like sources. A maximum a posteriori solution and a solution minimizing the Gibbs free energy of the inference problem using variational Bayesian methods are discussed. Since the derivation of the solution is not dependent on the underlying position space, the implementation of the D3PO algorithm uses the nifty package to ensure applicability to various spatial grids and at any resolution. The fidelity of the algorithm is validated by the analysis of simulated data, including a realistic high energy photon count image showing a 32 × 32 arcmin^2 observation with a spatial resolution of 0.1 arcmin. In all tests the D3PO algorithm successfully denoised, deconvolved, and decomposed the data into a diffuse and a point-like signal estimate for the respective photon flux components. A copy of the code is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/574/A74

  5. Twofold processing for denoising ultrasound medical images.

    PubMed

    Kishore, P V V; Kumar, K V V; Kumar, D Anil; Prasad, M V D; Goutham, E N D; Rahul, R; Krishna, C B S Vamsi; Sandeep, Y

    2015-01-01

    Medical ultrasound (US) imaging non-invasively pictures the inside of the human body for disease diagnostics. Speckle noise corrupts ultrasound images, degrading their visual quality. A twofold processing algorithm is proposed in this work to reduce this multiplicative speckle noise. The first fold applies block-based thresholding, both hard (BHT) and soft (BST), to pixels in the wavelet domain with non-overlapping block sizes of 8, 16, 32 and 64. This first fold is effective at reducing speckle but also blurs the object of interest. The second fold restores object boundaries and texture with adaptive wavelet fusion. The restoration of the degraded object in the block-thresholded US image is carried out through wavelet-coefficient fusion of the object in the original US image and in the block-thresholded US image. Fusion rules and wavelet decomposition levels are made adaptive for each block using gradient histograms with the normalized differential mean (NDF) to introduce the highest level of contrast between the denoised pixels and the object pixels in the resultant image. The proposed twofold methods are thus named adaptive NDF block fusion with hard and soft thresholding (ANBF-HT and ANBF-ST). The results indicate a clear visual quality improvement from the proposed twofold processing, where the first fold removes noise and the second fold restores object properties. Peak signal-to-noise ratio (PSNR), normalized cross-correlation coefficient (NCC), edge strength (ES), image quality index (IQI) and structural similarity index (SSIM) measure the quantitative quality of the twofold processing technique. Validation of the proposed method is done by comparing with anisotropic diffusion (AD), total variational filtering (TVF) and empirical mode decomposition (EMD) for the enhancement of US images. The US images were provided by the AMMA hospital radiology labs at Vijayawada, India. PMID:26697285

  6. Binary black hole merger rates inferred from luminosity function of ultra-luminous X-ray sources

    NASA Astrophysics Data System (ADS)

    Inoue, Yoshiyuki; Tanaka, Yasuyuki T.; Isobe, Naoki

    2016-10-01

    The Advanced Laser Interferometer Gravitational-Wave Observatory (aLIGO) has detected direct signals of gravitational waves (GWs) from GW150914. The event was a merger of binary black holes whose masses are 36^{+5}_{-4} M_⊙ and 29^{+4}_{-4} M_⊙. Such binary systems are expected to be directly evolved from stellar binary systems or formed by dynamical interactions of black holes in dense stellar environments. Here we derive the binary black hole merger rate based on the nearby ultra-luminous X-ray source (ULX) luminosity function (LF) under the assumption that binary black holes evolve through X-ray emitting phases. We obtain the binary black hole merger rate as 5.8 (t_{ULX}/0.1 Myr)^{-1} λ^{-0.6} exp(-0.30 λ) Gpc^{-3} yr^{-1}, where t_{ULX} is the typical duration of the ULX phase and λ is the Eddington ratio in luminosity. This is coincident with the event rate inferred from the detection of GW150914 as well as the predictions based on binary population synthesis models. Although we are currently unable to constrain the Eddington ratio of ULXs in luminosity due to the uncertainties of our models and measured binary black hole merger event rates, further X-ray and GW data will allow us to narrow down the range of the Eddington ratios of ULXs. We also find the cumulative merger rate for the mass range of 5 M_⊙ ≤ M_{BH} ≤ 100 M_⊙ inferred from the ULX LF is consistent with that estimated by the aLIGO collaboration considering various astrophysical conditions such as the mass function of black holes.

  7. Pragmatic Inferences in High-Functioning Adults with Autism and Asperger Syndrome

    ERIC Educational Resources Information Center

    Pijnacker, Judith; Hagoort, Peter; Buitelaar, Jan; Teunisse, Jan-Pieter; Geurts, Bart

    2009-01-01

    Although people with autism spectrum disorders (ASD) often have severe problems with pragmatic aspects of language, little is known about their pragmatic reasoning. We carried out a behavioral study on high-functioning adults with autistic disorder (n = 11) and Asperger syndrome (n = 17) and matched controls (n = 28) to investigate whether they…

  8. Brain Imaging and Cognitive Neuroscience: Toward Strong Inference in Attributing Function to Structure.

    ERIC Educational Resources Information Center

    Sarter, Martin; And Others

    1996-01-01

    Cognitive neuroscience is a scientific discipline that aims to determine how brain function gives rise to mental activity. Modern imaging techniques have contributed significantly to the emergence of this discipline. A conceptual framework is presented to help interpret data describing the relationships between cognitive phenomena and brain…

  9. The cost of misremembering: Inferring the loss function in visual working memory.

    PubMed

    Sims, Chris R

    2015-01-01

    Visual working memory (VWM) is a highly limited storage system. A basic consequence of this fact is that visual memories cannot perfectly encode or represent the veridical structure of the world. However, in natural tasks, some memory errors might be more costly than others. This raises the intriguing possibility that the nature of memory error reflects the costs of committing different kinds of errors. Many existing theories assume that visual memories are noise-corrupted versions of afferent perceptual signals. However, this additive noise assumption oversimplifies the problem. Implicit in the behavioral phenomena of visual working memory is the concept of a loss function: a mathematical entity that describes the relative cost to the organism of making different types of memory errors. An optimally efficient memory system is one that minimizes the expected loss according to a particular loss function, while subject to a constraint on memory capacity. This paper describes a novel theoretical framework for characterizing visual working memory in terms of its implicit loss function. Using inverse decision theory, the empirical loss function is estimated from the results of a standard delayed recall visual memory experiment. These results are compared to the predicted behavior of a visual working memory system that is optimally efficient for a previously identified natural task, gaze correction following saccadic error. Finally, the approach is compared to alternative models of visual working memory, and shown to offer a superior account of the empirical data across a range of experimental datasets. PMID:25740875

  10. The cost of misremembering: Inferring the loss function in visual working memory.

    PubMed

    Sims, Chris R

    2015-03-04

    Visual working memory (VWM) is a highly limited storage system. A basic consequence of this fact is that visual memories cannot perfectly encode or represent the veridical structure of the world. However, in natural tasks, some memory errors might be more costly than others. This raises the intriguing possibility that the nature of memory error reflects the costs of committing different kinds of errors. Many existing theories assume that visual memories are noise-corrupted versions of afferent perceptual signals. However, this additive noise assumption oversimplifies the problem. Implicit in the behavioral phenomena of visual working memory is the concept of a loss function: a mathematical entity that describes the relative cost to the organism of making different types of memory errors. An optimally efficient memory system is one that minimizes the expected loss according to a particular loss function, while subject to a constraint on memory capacity. This paper describes a novel theoretical framework for characterizing visual working memory in terms of its implicit loss function. Using inverse decision theory, the empirical loss function is estimated from the results of a standard delayed recall visual memory experiment. These results are compared to the predicted behavior of a visual working memory system that is optimally efficient for a previously identified natural task, gaze correction following saccadic error. Finally, the approach is compared to alternative models of visual working memory, and shown to offer a superior account of the empirical data across a range of experimental datasets.

  11. Inference for the median residual life function in sequential multiple assignment randomized trials

    PubMed Central

    Kidwell, Kelley M.; Ko, Jin H.; Wahed, Abdus S.

    2014-01-01

    In survival analysis, median residual lifetime is often used as a summary measure to assess treatment effectiveness; it is not clear, however, how such a quantity could be estimated for a given dynamic treatment regimen using data from sequential randomized clinical trials. We propose a method to estimate a dynamic treatment regimen-specific median residual life (MERL) function from sequential multiple assignment randomized trials. We present the MERL estimator, which is based on inverse probability weighting, as well as two variance estimates for the MERL estimator. One variance estimate follows from Lunceford, Davidian and Tsiatis’ 2002 survival function-based variance estimate and the other uses the sandwich estimator. The MERL estimator is evaluated, and its two variance estimates are compared through simulation studies, showing that the estimator and both variance estimates produce approximately unbiased results in large samples. To demonstrate our methods, the estimator has been applied to data from a sequentially randomized leukemia clinical trial. PMID:24254496

  12. Inference for the median residual life function in sequential multiple assignment randomized trials.

    PubMed

    Kidwell, Kelley M; Ko, Jin H; Wahed, Abdus S

    2014-04-30

    In survival analysis, median residual lifetime is often used as a summary measure to assess treatment effectiveness; it is not clear, however, how such a quantity could be estimated for a given dynamic treatment regimen using data from sequential randomized clinical trials. We propose a method to estimate a dynamic treatment regimen-specific median residual life (MERL) function from sequential multiple assignment randomized trials. We present the MERL estimator, which is based on inverse probability weighting, as well as two variance estimates for the MERL estimator. One variance estimate follows from Lunceford, Davidian and Tsiatis' 2002 survival function-based variance estimate and the other uses the sandwich estimator. The MERL estimator is evaluated, and its two variance estimates are compared through simulation studies, showing that the estimator and both variance estimates produce approximately unbiased results in large samples. To demonstrate our methods, the estimator has been applied to data from a sequentially randomized leukemia clinical trial. PMID:24254496

  13. Simple Math is Enough: Two Examples of Inferring Functional Associations from Genomic Data

    NASA Technical Reports Server (NTRS)

    Liang, Shoudan

    2003-01-01

    Non-random features in genomic data are usually biologically meaningful. The key is to choose the feature well. Having a p-value-based score prioritizes the findings. If two proteins share an unusually large number of common interaction partners, they tend to be involved in the same biological process. We used this finding to predict the functions of 81 un-annotated proteins in yeast.
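
    The "simple math" for the shared-partner feature is a hypergeometric tail probability. A toy computation follows; all counts are made up for illustration.

        from scipy.stats import hypergeom

        # Proteome of N proteins; protein A has n1 interaction partners and
        # protein B has n2; they share k partners.  The p-value is the chance
        # of an overlap at least this large if partners were drawn at random.
        N, n1, n2, k = 6000, 40, 55, 8
        p = hypergeom.sf(k - 1, N, n1, n2)   # P(X >= k)
        print(p)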

  14. Denoising traffic collision data using ensemble empirical mode decomposition (EEMD) and its application for constructing continuous risk profile (CRP).

    PubMed

    Kim, Nam-Seog; Chung, Koohong; Ahn, Seongchae; Yu, Jeong Whon; Choi, Keechoo

    2014-10-01

    Filtering out the noise in traffic collision data is essential in reducing false positive rates (i.e., requiring safety investigation of sites where it is not needed) and can assist government agencies in better allocating limited resources. Previous studies have demonstrated that denoising traffic collision data is possible when a true known high collision concentration location (HCCL) list exists to calibrate the parameters of a denoising method. However, such a list is often not readily available in practice. To this end, the present study introduces an innovative approach for denoising traffic collision data using the Ensemble Empirical Mode Decomposition (EEMD) method, which is widely used for analyzing nonlinear and nonstationary data. The present study describes how to transform the traffic collision data before the data can be decomposed using the EEMD method to obtain a set of intrinsic mode functions (IMFs) and a residue. The attributes of the IMFs were then carefully examined to denoise the data and to construct Continuous Risk Profiles (CRPs). The findings from comparing the resulting CRPs with CRPs in which the noise was filtered out using two different empirically calibrated weighted moving-window lengths are also documented, and the results and recommendations for future research are discussed.
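
    A minimal sketch of the decomposition-and-reconstruction idea, assuming the third-party PyEMD package (pip name EMD-signal) and a fixed choice of how many leading high-frequency IMFs to treat as noise; the paper instead calibrates this choice by examining the IMF attributes.

        import numpy as np
        from PyEMD import EEMD  # third-party: pip install EMD-signal

        def eemd_denoise(signal, n_noisy_imfs=2, trials=100):
            # Decompose with ensemble EMD, then rebuild the signal without
            # the first (highest-frequency) IMFs, which typically carry most
            # of the noise.
            imfs = EEMD(trials=trials).eemd(np.asarray(signal, dtype=float))
            return imfs[n_noisy_imfs:].sum(axis=0)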

  15. Inferring functional connectivity in MRI using Bayesian network structure learning with a modified PC algorithm.

    PubMed

    Iyer, Swathi P; Shafran, Izhak; Grayson, David; Gates, Kathleen; Nigg, Joel T; Fair, Damien A

    2013-07-15

    Resting state functional connectivity MRI (rs-fcMRI) is a popular technique used to gauge the functional relatedness between regions in the brain for typical and special populations. Most of the work to date determines this relationship by using Pearson's correlation on BOLD fMRI timeseries. However, it has been recognized that there are at least two key limitations to this method. First, it is not possible to resolve the direct and indirect connections/influences. Second, the direction of information flow between the regions cannot be differentiated. In the current paper, we follow up on recent work by Smith et al. (2011) and apply the PC algorithm to both simulated data and empirical data to determine whether these two factors can be discerned with group-average, as opposed to single-subject, functional connectivity data. When applied to simulated individual subjects, the algorithm performs well in determining indirect and direct connections but fails to determine directionality. However, when applied at the group level, the PC algorithm gives strong results for both indirect and direct connections and the direction of information flow. Applying the algorithm to empirical data, using a diffusion-weighted imaging (DWI) structural connectivity matrix as the baseline, the PC algorithm outperformed the direct correlations. We conclude that, under certain conditions, the PC algorithm leads to an improved estimate of brain network structure compared to the traditional connectivity analysis based on correlations.
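
    A minimal sketch of the edge-removal (skeleton) phase that the PC algorithm builds on, using Fisher-z tests of marginal and first-order partial correlations; full implementations search larger conditioning sets and then orient edges. This toy is not the modified algorithm of the paper.

    ```python
    import itertools
    import numpy as np
    from scipy.stats import norm

    def fisher_z_pval(r, n, k):
        """P-value for a (partial) correlation r with k conditioning variables."""
        r = np.clip(r, -0.999999, 0.999999)
        z = np.sqrt(n - k - 3) * 0.5 * np.log((1 + r) / (1 - r))
        return 2 * norm.sf(abs(z))

    def pc_skeleton(data, alpha=0.01):
        """Remove edges that fail marginal or first-order partial-correlation tests."""
        n, p = data.shape
        corr = np.corrcoef(data, rowvar=False)
        adj = ~np.eye(p, dtype=bool)
        for i, j in itertools.combinations(range(p), 2):
            if fisher_z_pval(corr[i, j], n, 0) > alpha:     # order 0
                adj[i, j] = adj[j, i] = False
                continue
            for k in range(p):                              # order 1
                if k in (i, j):
                    continue
                r = ((corr[i, j] - corr[i, k] * corr[j, k])
                     / np.sqrt((1 - corr[i, k] ** 2) * (1 - corr[j, k] ** 2)))
                if fisher_z_pval(r, n, 1) > alpha:
                    adj[i, j] = adj[j, i] = False
                    break
        return adj

    rng = np.random.default_rng(0)
    x = rng.standard_normal(1000)
    y = x + 0.5 * rng.standard_normal(1000)
    z = y + 0.5 * rng.standard_normal(1000)
    print(pc_skeleton(np.column_stack([x, y, z])))   # the x-z edge should vanish
    ```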

  16. Function of pretribosphenic and tribosphenic mammalian molars inferred from 3D animation.

    PubMed

    Schultz, Julia A; Martin, Thomas

    2014-10-01

    Appearance of the tribosphenic molar in the Late Jurassic (160 Ma) is a crucial innovation for food processing in mammalian evolution. This molar type is characterized by a protocone, a talonid basin and a two-phased chewing cycle, all of which are apomorphic. In this functional study on the teeth of Late Jurassic Dryolestes leiriensis and the living marsupial Monodelphis domestica, we demonstrate that pretribosphenic and tribosphenic molars show fundamental differences of food reduction strategies, representing a shift in dental function during the transition of tribosphenic mammals. By using the Occlusal Fingerprint Analyser (OFA), we simulated the chewing motions of the pretribosphenic Dryolestes that represents an evolutionary precursor condition to such tribosphenic mammals as Monodelphis. Animation of chewing path and detection of collisional contacts between virtual models of teeth suggests that Dryolestes differs from the classical two-phased chewing movement of tribosphenidans, due to the narrowing of the interdental space in cervical (crown-root transition) direction, the inclination angle of the hypoflexid groove, and the unicuspid talonid. The pretribosphenic chewing cycle is equivalent to phase I of the tribosphenic chewing cycle, but the former lacks phase II of the tribosphenic chewing. The new approach can analyze the chewing cycle of the jaw by using polygonal 3D models of tooth surfaces, in a way that is complementary to the electromyography and strain gauge studies of muscle function of living animals. The technique allows alignment and scaling of isolated fossil teeth and utilizes the wear facet orientation and striation of the teeth to reconstruct the chewing path of extinct mammals.

  17. Function of pretribosphenic and tribosphenic mammalian molars inferred from 3D animation

    NASA Astrophysics Data System (ADS)

    Schultz, Julia A.; Martin, Thomas

    2014-10-01

    Appearance of the tribosphenic molar in the Late Jurassic (160 Ma) is a crucial innovation for food processing in mammalian evolution. This molar type is characterized by a protocone, a talonid basin and a two-phased chewing cycle, all of which are apomorphic. In this functional study on the teeth of Late Jurassic Dryolestes leiriensis and the living marsupial Monodelphis domestica, we demonstrate that pretribosphenic and tribosphenic molars show fundamental differences of food reduction strategies, representing a shift in dental function during the transition of tribosphenic mammals. By using the Occlusal Fingerprint Analyser (OFA), we simulated the chewing motions of the pretribosphenic Dryolestes that represents an evolutionary precursor condition to such tribosphenic mammals as Monodelphis. Animation of chewing path and detection of collisional contacts between virtual models of teeth suggests that Dryolestes differs from the classical two-phased chewing movement of tribosphenidans, due to the narrowing of the interdental space in cervical (crown-root transition) direction, the inclination angle of the hypoflexid groove, and the unicuspid talonid. The pretribosphenic chewing cycle is equivalent to phase I of the tribosphenic chewing cycle, but the former lacks phase II of the tribosphenic chewing. The new approach can analyze the chewing cycle of the jaw by using polygonal 3D models of tooth surfaces, in a way that is complementary to the electromyography and strain gauge studies of muscle function of living animals. The technique allows alignment and scaling of isolated fossil teeth and utilizes the wear facet orientation and striation of the teeth to reconstruct the chewing path of extinct mammals.

  18. Inferring cortical function in the mouse visual system through large-scale systems neuroscience.

    PubMed

    Hawrylycz, Michael; Anastassiou, Costas; Arkhipov, Anton; Berg, Jim; Buice, Michael; Cain, Nicholas; Gouwens, Nathan W; Gratiy, Sergey; Iyer, Ramakrishnan; Lee, Jung Hoon; Mihalas, Stefan; Mitelut, Catalin; Olsen, Shawn; Reid, R Clay; Teeter, Corinne; de Vries, Saskia; Waters, Jack; Zeng, Hongkui; Koch, Christof

    2016-07-01

    The scientific mission of the Project MindScope is to understand neocortex, the part of the mammalian brain that gives rise to perception, memory, intelligence, and consciousness. We seek to quantitatively evaluate the hypothesis that neocortex is a relatively homogeneous tissue, with smaller functional modules that perform a common computational function replicated across regions. We here focus on the mouse as a mammalian model organism with genetics, physiology, and behavior that can be readily studied and manipulated in the laboratory. We seek to describe the operation of cortical circuitry at the computational level by comprehensively cataloging and characterizing its cellular building blocks along with their dynamics and their cell type-specific connectivities. The project is also building large-scale experimental platforms (i.e., brain observatories) to record the activity of large populations of cortical neurons in behaving mice subject to visual stimuli. A primary goal is to understand the series of operations from visual input in the retina to behavior by observing and modeling the physical transformations of signals in the corticothalamic system. We here focus on the contribution that computer modeling and theory make to this long-term effort. PMID:27382147

  20. Bayesian nonparametric inference on quantile residual life function: Application to breast cancer data.

    PubMed

    Park, Taeyoung; Jeong, Jong-Hyeon; Lee, Jae Won

    2012-08-15

    There is often an interest in estimating a residual life function as a summary measure of survival data. For ease in presentation of the potential therapeutic effect of a new drug, investigators may summarize survival data in terms of the remaining life years of patients. Under heavy right censoring, however, some reasonably high quantiles (e.g., median) of a residual lifetime distribution cannot always be estimated via a popular nonparametric approach on the basis of the Kaplan-Meier estimator. To overcome the difficulties in dealing with heavily censored survival data, this paper develops a Bayesian nonparametric approach that takes advantage of a fully model-based but highly flexible probabilistic framework. We use a Dirichlet process mixture of Weibull distributions to avoid strong parametric assumptions on the unknown failure time distribution, making it possible to estimate any quantile residual life function under heavy censoring. Posterior computation through Markov chain Monte Carlo is straightforward and efficient because of conjugacy properties and partial collapse. We illustrate the proposed methods by using both simulated data and heavily censored survival data from a recent breast cancer clinical trial conducted by the National Surgical Adjuvant Breast and Bowel Project. PMID:22437758
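
    For a survival function S, the p-th quantile residual life at time t0 solves S(t0 + u) = (1 - p) S(t0) for u. A minimal sketch under a toy assumption, a single fitted Weibull rather than the paper's Dirichlet process mixture:

    ```python
    import numpy as np
    from scipy.optimize import brentq
    from scipy.stats import weibull_min

    def quantile_residual_life(surv, t0, p=0.5, upper=1e4):
        """Solve S(t0 + u) = (1 - p) * S(t0) for the residual time u."""
        target = (1.0 - p) * surv(t0)
        return brentq(lambda u: surv(t0 + u) - target, 0.0, upper)

    dist = weibull_min(c=1.5, scale=10.0)   # one Weibull component, for illustration
    print("median residual life at t0 = 5:",
          quantile_residual_life(dist.sf, 5.0))
    ```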

  1. Inferring cortical function in the mouse visual system through large-scale systems neuroscience

    PubMed Central

    Hawrylycz, Michael; Anastassiou, Costas; Arkhipov, Anton; Berg, Jim; Buice, Michael; Cain, Nicholas; Gouwens, Nathan W.; Gratiy, Sergey; Iyer, Ramakrishnan; Lee, Jung Hoon; Mihalas, Stefan; Mitelut, Catalin; Olsen, Shawn; Reid, R. Clay; Teeter, Corinne; de Vries, Saskia; Waters, Jack; Zeng, Hongkui; Koch, Christof

    2016-01-01

    The scientific mission of the Project MindScope is to understand neocortex, the part of the mammalian brain that gives rise to perception, memory, intelligence, and consciousness. We seek to quantitatively evaluate the hypothesis that neocortex is a relatively homogeneous tissue, with smaller functional modules that perform a common computational function replicated across regions. We here focus on the mouse as a mammalian model organism with genetics, physiology, and behavior that can be readily studied and manipulated in the laboratory. We seek to describe the operation of cortical circuitry at the computational level by comprehensively cataloging and characterizing its cellular building blocks along with their dynamics and their cell type-specific connectivities. The project is also building large-scale experimental platforms (i.e., brain observatories) to record the activity of large populations of cortical neurons in behaving mice subject to visual stimuli. A primary goal is to understand the series of operations from visual input in the retina to behavior by observing and modeling the physical transformations of signals in the corticothalamic system. We here focus on the contribution that computer modeling and theory make to this long-term effort. PMID:27382147

  2. EcID. A database for the inference of functional interactions in E. coli.

    PubMed

    Andres Leon, Eduardo; Ezkurdia, Iakes; García, Beatriz; Valencia, Alfonso; Juan, David

    2009-01-01

    The EcID database (Escherichia coli Interaction Database) provides a framework for the integration of information on functional interactions extracted from the following sources: EcoCyc (metabolic pathways, protein complexes and regulatory information), KEGG (metabolic pathways), MINT and IntAct (protein interactions). It also includes information on protein complexes from the two E. coli high-throughput pull-down experiments and potential interactions extracted from the literature using the web services associated with the iHOP text-mining system. Additionally, EcID incorporates results of various prediction methods, including two protein interaction prediction methods based on genomic information (Phylogenetic Profiles and Gene Neighbourhoods) and three methods based on the analysis of co-evolution (Mirror Tree, In Silico 2 Hybrid and Context Mirror). EcID assigns to each prediction a specifically developed confidence score. The two main features that make EcID different from other systems are the combination of co-evolution-based predictions with the experimental data, and the introduction of E. coli-specific information, such as gene regulation information from EcoCyc. The possibilities offered by the combination of the EcID database information are illustrated with a prediction of potential functions for a group of poorly characterized genes related to yeaG. EcID is available online at http://ecid.bioinfo.cnio.es.

  3. Effect of denoising on supervised lung parenchymal clusters

    NASA Astrophysics Data System (ADS)

    Jayamani, Padmapriya; Raghunath, Sushravya; Rajagopalan, Srinivasan; Karwoski, Ronald A.; Bartholmai, Brian J.; Robb, Richard A.

    2012-03-01

    Denoising is a critical preconditioning step for quantitative analysis of medical images. Despite promises for more consistent diagnosis, denoising techniques are seldom explored in clinical settings. While this may be attributed to the esoteric nature of the parameter-sensitive algorithms, the lack of quantitative measures of their efficacy in enhancing clinical decision making is a primary cause of physician apathy. This paper addresses this issue by exploring the effect of denoising on the integrity of supervised lung parenchymal clusters. Multiple Volumes of Interest (VOIs) were selected across multiple high resolution CT scans to represent samples of different patterns (normal, emphysema, ground glass, honeycombing and reticular). The VOIs were labeled through consensus of four radiologists. The original datasets were filtered by multiple denoising techniques (median filtering, anisotropic diffusion, bilateral filtering and non-local means) and the corresponding filtered VOIs were extracted. A plurality of cluster indices based on multiple histogram-based pair-wise similarity measures were used to assess the quality of supervised clusters in the original and filtered space. The resultant rank orders were analyzed using the Borda criteria to find the denoising-similarity measure combination that has the best cluster quality. Our exhaustive analysis reveals (a) for a number of similarity measures, the cluster quality is inferior in the filtered space; and (b) for measures that benefit from denoising, a simple median filtering outperforms non-local means and bilateral filtering. Our study suggests the need to judiciously choose, if required, a denoising technique that does not deteriorate the integrity of supervised clusters.

  4. Complex geometry of the subducted Pacific slab inferred from receiver function

    NASA Astrophysics Data System (ADS)

    Zhang, Ruiqing; Wu, Qingju; Zhang, Guangcheng

    2014-05-01

    In recent years, slab tear has received considerable attention and has been reported in many arc-arc junctures in Pacific plate subduction zones. From 2009 to 2011, we deployed two portable experiments equipped with CMG-3ESPC seismometers and REFTEK-130B recorders in NE China. The two linear seismic arrays were designed to be nearly parallel; each contained about 60 seismic stations and extended about 1200 km from west to east, spanning all surface geological terrains of NE China. The southern array was set up first and operated continuously for over two years, while the northern deployment recorded for only about one year. Using the teleseismic data collected by these two arrays, we calculate P receiver functions to map topographic variation of the upper mantle discontinuities. Our sampled region is located where the juncture between the subducting Kuril and Japan slabs reaches the 660-km discontinuity. Distinct variation of the 660-km discontinuity is mapped beneath the region. A deeper-than-normal 660-km discontinuity is observed locally in the southeastern part of our sampled region. The depression of the 660-km discontinuity may result from an oceanic lithospheric slab deflected in the mantle transition zone, in good agreement with the results of earlier tomographic and other seismic studies in this region. The northeastern portion of our sampled region, however, does not clearly show the deflection of the slab. The variation of the topography of the 660-km discontinuity across our sampled regions may indicate a complex geometry of the subducted Pacific slab.

  5. Inferring genome-wide functional modulatory network: a case study on NF-κB/RelA transcription factor.

    PubMed

    Li, Xueling; Zhu, Min; Brasier, Allan R; Kudlicki, Andrzej S

    2015-04-01

    How different pathways lead to the activation of a specific transcription factor (TF) with specific effects is not fully understood. We model context-specific transcriptional regulation as a modulatory network: triplets composed of a TF, target gene, and modulator. Modulators usually affect the activity of a specific TF at the posttranscriptional level in a target gene-specific action mode. This action may be classified as enhancement, attenuation, or inversion of either activation or inhibition. As a case study, we inferred, from a large collection of expression profiles, all potential modulations of NF-κB/RelA. The predicted modulators include many proteins previously not reported as physically binding to RelA but with relevant functions, such as RNA processing, cell cycle, mitochondrion, ubiquitin-dependent proteolysis, and chromatin modification. Modulators from different processes exert specific prevalent action modes on distinct pathways. Modulators from noncoding RNA, RNA-binding proteins, TFs, and kinases modulate NF-κB/RelA activity with specific action modes consistent with their molecular functions and modulation level. The modulatory networks of NF-κB/RelA in the contexts of epithelial-mesenchymal transition (EMT) and burn injury have different modulators: for EMT, these include proteins involved in the extracellular matrix (FBN1), cytoskeletal regulation (ACTN1), and tumor suppression (FOXP1), as well as metastasis-associated lung adenocarcinoma transcript 1 (MALAT1), a long intergenic non-protein-coding RNA; for burn injury, TXNIP, GAPDH, PKM2, IFIT5, LDHA, NID1, and TPP1.

  6. Why are dunkels sticky? Preschoolers infer functionality and intentional creation for artifact properties learned from generic language.

    PubMed

    Cimpian, Andrei; Cadena, Cristina

    2010-10-01

    Artifacts pose a potential learning problem for children because the mapping between their features and their functions is often not transparent. In solving this problem, children are likely to rely on a number of information sources (e.g., others' actions, affordances). We argue that children's sensitivity to nuances in the language used to describe artifacts is an important, but so far unacknowledged, piece of this puzzle. Specifically, we hypothesize that children are sensitive to whether an unfamiliar artifact's features are highlighted using generic (e.g., "Dunkels are sticky") or non-generic (e.g., "This dunkel is sticky") language. Across two studies, older, but not younger, preschoolers who heard such features introduced via generic statements inferred that they are a functional part of the artifact's design more often than children who heard the same features introduced via non-generic statements. The ability to pick up on this linguistic cue may expand considerably the amount of conceptual information about artifacts that children derive from conversations with adults. PMID:20656283

  7. Receiver-Function Stacking Methods to Infer Crustal Anisotropic Structure with Application to the Turkish-Anatolian Plateau

    NASA Astrophysics Data System (ADS)

    Kaviani, A.; Rumpker, G.

    2015-12-01

    To account for the presence of seismic anisotropy within the crust and to estimate the relevant parameters, we first discuss a robust technique for the analysis of shear-wave splitting in layered anisotropic media by using converted shear phases. We use a combined approach that involves time-shifting and stacking of radial receiver functions and energy-minimization of transverse receiver functions to constrain the splitting parameters (i.e. the fast-polarization direction and the delay time) for an anisotropic layer. In multi-layered anisotropic media, the splitting parameters for the individual layers can be inferred by a layer-stripping approach, where the splitting effects due to shallower layers on converted phases from deeper discontinuities are successively corrected. The effect of anisotropy on the estimates of crustal thickness and average bulk Vp/Vs ratio can be significant. Recently, we extended the approach of Zhu & Kanamori (2000) to include P-to-S converted waves and their crustal reverberations generated in the anisotropic case. The anisotropic parameters of the medium are first estimated using the splitting analysis of the Ps-phase as described above. Then, a grid-search is performed over layer thickness and Vp/Vs ratio, while accounting for all relevant arrivals (up to 20 phases) in the anisotropic medium. We apply these techniques to receiver-function data from seismological stations across the Turkish-Anatolian Plateau to study seismic anisotropy in the crust and its relationship to crustal tectonics. Preliminary results reveal significant crustal anisotropy and indicate that the strength and direction of the anisotropy vary across the main tectonic boundaries. We also improve the estimates of the crustal thickness and the bulk Vp/Vs ratio by accounting for the presence of crustal anisotropy beneath the station. Reference: Zhu, L. & H. Kanamori (2000), Moho depth variation in southern California from teleseismic receiver functions, J. Geophys. Res.

  8. Stacked Denoising Autoencoders Applied to Star/Galaxy Classification

    NASA Astrophysics Data System (ADS)

    Qin, H. R.; Lin, J. M.; Wang, J. Y.

    2016-05-01

    In recent years, deep learning has become more and more popular because it is adaptable and achieves high accuracy with complex model structures, but it has seen little use in astronomy. To address the problem that the star/galaxy classification accuracy is high on the bright source set but low on the faint source set of the Sloan Digital Sky Survey (SDSS), we introduce the deep learning method of SDA (stacked denoising autoencoders) together with dropout, which can greatly improve robustness and anti-noise performance. We randomly selected bright and faint source sets with spectroscopic measurements from DR12 and DR7 and preprocessed them. Afterwards, we randomly selected training and testing sets, without replacement, from the bright and faint sets. Finally, we used the training sets to train SDA models for SDSS-DR7 and SDSS-DR12. We compared the testing results with those of the Library for Support Vector Machines (LibSVM), J48, Logistic Model Trees (LMT), Support Vector Machine (SVM), Logistic Regression, and Decision Stump algorithms on the SDSS-DR12 testing set, and with those of six kinds of decision trees on the SDSS-DR7 testing set. The simulations show that SDA has a better classification accuracy than the other machine learning algorithms. When the completeness function is used as the test parameter, the test accuracy rate is improved by about 15% on the faint set of SDSS-DR7.
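
    A minimal sketch of a denoising autoencoder with dropout in PyTorch, in the spirit of the SDA described above; the layer sizes, corruption level, 5-band input, and training loop are illustrative assumptions, and the stacked layer-wise pretraining and classifier head are not reproduced.

    ```python
    import torch
    import torch.nn as nn

    class DenoisingAutoencoder(nn.Module):
        def __init__(self, n_in=5, hidden=(64, 32)):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(n_in, hidden[0]), nn.ReLU(), nn.Dropout(0.2),
                nn.Linear(hidden[0], hidden[1]), nn.ReLU(), nn.Dropout(0.2),
            )
            self.decoder = nn.Sequential(
                nn.Linear(hidden[1], hidden[0]), nn.ReLU(),
                nn.Linear(hidden[0], n_in),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    model = DenoisingAutoencoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.randn(256, 5)                      # stand-in photometric features
    for _ in range(100):                         # denoising pretraining loop
        noisy = x + 0.1 * torch.randn_like(x)    # corrupt the input
        loss = nn.functional.mse_loss(model(noisy), x)
        opt.zero_grad()
        loss.backward()
        opt.step()
    # The trained encoder would then feed a small classifier head for the
    # star/galaxy decision (not shown).
    ```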

  9. Load identification approach based on basis pursuit denoising algorithm

    NASA Astrophysics Data System (ADS)

    Ginsberg, D.; Ruby, M.; Fritzen, C. P.

    2015-07-01

    The information about external loads is of great interest in many fields of structural analysis, such as structural health monitoring (SHM) systems or the assessment of damage after extreme events. However, in most cases it is not possible to measure the external forces directly, so they need to be reconstructed. Load reconstruction refers to the problem of estimating an input to a dynamic system when the system output and the impulse response functions are the known quantities. Generally, this leads to a so-called ill-posed inverse problem, which involves solving an underdetermined linear system of equations. For most practical applications it can be assumed that the applied loads are not arbitrarily distributed in time and space; at least some specific characteristics of the external excitation are known a priori. In this contribution, this knowledge was used to develop a more suitable force reconstruction method, which allows identifying the time history and the force location simultaneously while employing significantly fewer sensors compared to other reconstruction approaches. The properties of the external force are used to transform the ill-posed problem into a sparse recovery task. The sparse solution is acquired by solving a minimization problem known as basis pursuit denoising (BPDN). The possibility of reconstructing loads based on noisy structural measurement signals is demonstrated by considering two frequently occurring loading conditions: harmonic excitation and impact events, separately and combined. First, a simulation study of a simple plate structure is carried out, and thereafter an experimental investigation of a real beam is performed.
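
    A minimal sketch of the sparse recovery idea, posed as l1-regularized least squares and solved with scikit-learn's Lasso (a standard BPDN-style solver, not the paper's implementation). The Toeplitz matrix standing in for the impulse-response model, the two impact events, and the regularization strength are all illustrative assumptions.

    ```python
    import numpy as np
    from scipy.linalg import toeplitz
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n = 300
    h = np.exp(-np.arange(n) / 20.0) * np.sin(np.arange(n) / 3.0)  # toy impulse response
    A = toeplitz(h, np.zeros(n))               # convolution (response) matrix

    f_true = np.zeros(n)
    f_true[[50, 180]] = [3.0, -2.0]            # two short impact events
    y = A @ f_true + 0.05 * rng.standard_normal(n)   # noisy sensor record

    # BPDN-style recovery: minimize 0.5*||A f - y||^2 + lam*||f||_1
    est = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50000).fit(A, y)
    print("recovered impact samples:", np.nonzero(np.abs(est.coef_) > 0.5)[0])
    ```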

  10. Application of time-resolved glucose concentration photoacoustic signals based on an improved wavelet denoising

    NASA Astrophysics Data System (ADS)

    Ren, Zhong; Liu, Guodong; Huang, Zhen

    2014-10-01

    Real-time monitoring of blood glucose concentration (BGC) is an important procedure in controlling diabetes mellitus and preventing complications for diabetic patients. Noninvasive measurement of BGC has become a research hotspot because it avoids physical and psychological harm. Photoacoustic spectroscopy is a well-established, hybrid and alternative technique used to determine the BGC. According to the theory of the photoacoustic technique, when the blood is irradiated by a pulsed laser with nanosecond repetition time and microjoule power, photoacoustic signals containing the BGC information are generated through the thermoelastic mechanism, and the BGC level can then be interpreted from the photoacoustic signal via data analysis. In practice, however, the time-resolved photoacoustic signals of BGC are polluted by various noises, e.g., interference from background sounds and the multiple components of blood. The quality of the photoacoustic signal directly impacts the precision of BGC measurement. An improved wavelet denoising method was therefore proposed to eliminate the noise contained in BGC photoacoustic signals. To overcome the shortcomings of traditional wavelet threshold denoising, an improved dual-threshold wavelet function is proposed in this paper. Simulation results illustrate that the denoising result of this improved wavelet method is better than that of the traditional soft and hard threshold functions. To verify the feasibility of this improved function, actual photoacoustic BGC signals were tested; the tests demonstrate that the signal-to-noise ratio (SNR) obtained with the improved function increases by about 40-80%, and its root-mean-square error (RMSE) decreases by about 38.7-52.8%.
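
    For context, a minimal sketch of conventional wavelet threshold denoising with the PyWavelets package, using the standard universal threshold with a MAD noise estimate; the paper's improved dual-threshold wavelet function is not reproduced here, and the synthetic pulse is an illustrative stand-in for a photoacoustic signal.

    ```python
    import numpy as np
    import pywt

    def wavelet_denoise(signal, wavelet="db4", level=4, mode="soft"):
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        # Noise level from the median absolute deviation of the finest details.
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        thr = sigma * np.sqrt(2.0 * np.log(len(signal)))  # universal threshold
        coeffs[1:] = [pywt.threshold(c, thr, mode=mode) for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 1024)
    clean = np.exp(-200 * (t - 0.3) ** 2)            # stand-in pulse shape
    noisy = clean + 0.1 * rng.standard_normal(t.size)
    denoised = wavelet_denoise(noisy)
    ```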

  11. Remote sensing image denoising application by generalized morphological component analysis

    NASA Astrophysics Data System (ADS)

    Yu, Chong; Chen, Xiong

    2014-12-01

    In this paper, we introduce a remote sensing image denoising method based on generalized morphological component analysis (GMCA). This novel algorithm extends the morphological component analysis (MCA) algorithm to the blind source separation framework. The iterative thresholding strategy adopted by the GMCA algorithm first works on the most significant features in the image, and then progressively incorporates smaller features to finely tune the parameters of the whole model. A mathematical analysis of the computational complexity of the GMCA algorithm is provided. Several comparison experiments with state-of-the-art denoising algorithms are reported. In order to make a quantitative assessment of the algorithms in the experiments, the Peak Signal to Noise Ratio (PSNR) index and the Structural Similarity (SSIM) index are calculated to assess the denoising effect from the gray-level fidelity aspect and the structure-level fidelity aspect, respectively. Quantitative analysis of the experimental results, consistent with the visual quality of the denoised images, shows that the GMCA algorithm is highly effective for remote sensing image denoising: the recovered image is visually hard to distinguish from the original noiseless image.

  12. GPU-accelerated denoising of 3D magnetic resonance images

    SciTech Connect

    Howison, Mark; Wes Bethel, E.

    2014-05-29

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. In practice, applying these filtering operations requires setting multiple parameters. This study was designed to provide better guidance to practitioners for choosing the most appropriate parameters by answering two questions: what parameters yield the best denoising results in practice? And what tuning is necessary to achieve optimal performance on a modern GPU? To answer the first question, we use two different metrics, mean squared error (MSE) and mean structural similarity (MSSIM), to compare denoising quality against a reference image. Surprisingly, the best improvement in structural similarity with the bilateral filter is achieved with a small stencil size that lies within the range of real-time execution on an NVIDIA Tesla M2050 GPU. Moreover, inappropriate choices for parameters, especially scaling parameters, can yield very poor denoising performance. To answer the second question, we perform an autotuning study to empirically determine optimal memory tiling on the GPU. The variation in these results suggests that such tuning is an essential step in achieving real-time performance. These results have important implications for the real-time application of denoising to MR images in clinical settings that require fast turn-around times.
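
    A minimal sketch of the two comparison metrics used above, computed with scikit-image; the arrays stand in for a denoised slice and its noiseless reference.

    ```python
    import numpy as np
    from skimage.metrics import mean_squared_error, structural_similarity

    rng = np.random.default_rng(0)
    reference = rng.random((128, 128))                       # noiseless slice
    denoised = reference + 0.01 * rng.standard_normal((128, 128))

    print("MSE  :", mean_squared_error(reference, denoised))
    print("MSSIM:", structural_similarity(
        reference, denoised, data_range=denoised.max() - denoised.min()))
    ```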

  13. Evaluation of denoising algorithms for biological electron tomography.

    PubMed

    Narasimha, Rajesh; Aganj, Iman; Bennett, Adam E; Borgnia, Mario J; Zabransky, Daniel; Sapiro, Guillermo; McLaughlin, Steven W; Milne, Jacqueline L S; Subramaniam, Sriram

    2008-10-01

    Tomograms of biological specimens derived using transmission electron microscopy can be intrinsically noisy due to the use of low electron doses, the presence of a "missing wedge" in most data collection schemes, and inaccuracies arising during 3D volume reconstruction. Before tomograms can be interpreted reliably, for example, by 3D segmentation, it is essential that the data be suitably denoised using procedures that can be individually optimized for specific data sets. Here, we implement a systematic procedure to compare various nonlinear denoising techniques on tomograms recorded at room temperature and at cryogenic temperatures, and establish quantitative criteria to select a denoising approach that is most relevant for a given tomogram. We demonstrate that using an appropriate denoising algorithm facilitates robust segmentation of tomograms of HIV-infected macrophages and Bdellovibrio bacteria obtained from specimens at room and cryogenic temperatures, respectively. We validate this strategy of automated segmentation of optimally denoised tomograms by comparing its performance with manual extraction of key features from the same tomograms.

  14. Wavelet Denoising of Mobile Radiation Data

    SciTech Connect

    Campbell, D B

    2008-10-31

    The FY08 phase of this project investigated the merits of video fusion as a method for mitigating the false alarms encountered by vehicle borne detection systems in an effort to realize performance gains associated with wavelet denoising. The fusion strategy exploited the significant correlations which exist between data obtained from radiation detectors and video systems with coincident fields of view. The additional information provided by optical systems can greatly increase the capabilities of these detection systems by reducing the burden of false alarms and through the generation of actionable information. The investigation into the use of wavelet analysis techniques as a means of filtering the gross-counts signal obtained from moving radiation detectors showed promise for vehicle borne systems. However, the applicability of these techniques to man-portable systems is limited due to minimal gains in performance over the rapid feedback available to system operators under walking conditions. Furthermore, the fusion of video holds significant promise for systems operating from vehicles or systems organized into stationary arrays; however, the added complexity and hardware required by this technique renders it infeasible for man-portable systems.

  15. Comparison of Generalized Estimating Equations and Quadratic Inference Functions in superior versus inferior Ahmed Glaucoma Valve implantation

    PubMed Central

    Khajeh-Kazemi, Razieh; Golestan, Banafsheh; Mohammad, Kazem; Mahmoudi, Mahmoud; Nedjat, Saharnaz; Pakravan, Mohammad

    2011-01-01

    BACKGROUND: The celebrated generalized estimating equations (GEE) approach is often used in longitudinal data analysis. While this method behaves robustly against misspecification of the working correlation structure, it has some limitations regarding the efficiency of estimators, goodness-of-fit tests and model selection criteria. The quadratic inference functions (QIF) approach is a new statistical methodology that overcomes these limitations. METHODS: We applied QIF and GEE in comparing superior versus inferior Ahmed glaucoma valve (AGV) implantation. With our focus on the efficiency of estimation and the use of model selection criteria, we compared the effect of implant location on intraocular pressure (IOP) in refractory glaucoma patients. We modeled the relationship between IOP and implant location, patient's sex and age, best corrected visual acuity, history of cataract surgery, preoperative IOP and months after surgery, assuming an unstructured working correlation. RESULTS: 63 eyes of 63 patients were included in this study, 28 eyes in the inferior group and 35 eyes in the superior group. The GEE analysis revealed that preoperative IOP has a significant effect on IOP (p = 0.011). However, QIF showed that preoperative IOP, months after surgery and squared months are significantly associated with IOP after surgery (p < 0.05). Overall, estimates from QIF are more efficient than those from GEE (RE = 1.272). CONCLUSIONS: In the case of an unstructured working correlation, QIF is more efficient than GEE. There was no considerable difference between the two locations; our results confirm previously published work indicating that it is better for glaucoma patients to undergo superior AGV implantation. PMID:22091239

  16. A new study on mammographic image denoising using multiresolution techniques

    NASA Astrophysics Data System (ADS)

    Dong, Min; Guo, Ya-Nan; Ma, Yi-De; Ma, Yu-run; Lu, Xiang-yu; Wang, Ke-ju

    2015-12-01

    Mammography is the simplest and most effective technology for early detection of breast cancer. However, lesion areas of the breast are difficult to detect because mammograms are contaminated by noise. This work discusses various multiresolution denoising techniques, including classical wavelet- and contourlet-based methods as well as emerging multiresolution methods. A new denoising method based on the dual-tree contourlet transform (DCT) is proposed; the DCT offers approximate shift invariance, directionality and anisotropy. The proposed denoising method is applied to mammograms, and the experimental results show that the emerging multiresolution method succeeds in maintaining edges and texture details and obtains better performance than the other methods, both in visual effect and in terms of the Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR) and Structure Similarity (SSIM) values.

  17. Non-local MRI denoising using random sampling.

    PubMed

    Hu, Jinrong; Zhou, Jiliu; Wu, Xi

    2016-09-01

    In this paper, we propose a random sampling non-local means (SNLM) algorithm to eliminate noise in 3D MRI datasets. Non-local means (NLM) algorithms have been implemented efficiently for MRI denoising, but are always limited by high computational complexity. Compared to conventional methods, which raster through the entire search window when computing similarity weights, the proposed SNLM algorithm randomly selects a small subset of voxels, which dramatically decreases the computational burden while achieving competitive denoising results. Moreover, the structure tensor, which encapsulates high-order information, was introduced as an optimal sampling pattern for further improvement. Numerical experiments demonstrated that the proposed SNLM method achieves a good balance between denoising quality and computational efficiency. At a relative sampling ratio of ξ = 0.05, SNLM removes noise as effectively as full NLM while reducing the running time to 1/20 of NLM's. PMID:27114338

  18. Total Variation Denoising and Support Localization of the Gradient

    NASA Astrophysics Data System (ADS)

    Chambolle, A.; Duval, V.; Peyré, G.; Poon, C.

    2016-10-01

    This paper describes the geometrical properties of the solutions to the total variation denoising method. A folklore statement is that this method is able to restore sharp edges but, at the same time, might introduce some staircasing (i.e., "fake" edges) in flat areas. Quite surprisingly, numerical evidence aside, almost no theoretical results are available to back up these claims. The first contribution of this paper is a precise mathematical definition of the "extended support" (associated to the noise-free image) of TV denoising. This is intuitively the region which is unstable and will suffer from the staircasing effect. Our main result shows that the TV denoising method indeed restores a piecewise constant image outside a small tube surrounding the extended support. Furthermore, the radius of this tube shrinks toward zero as the noise level vanishes, and in some cases an upper bound on the convergence rate is given.
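
    A minimal sketch of TV denoising with scikit-image's Chambolle-type solver; the staircasing discussed above can be provoked by raising `weight` on a smooth ramp image. The image and the weight value are illustrative.

    ```python
    import numpy as np
    from skimage.restoration import denoise_tv_chambolle

    rng = np.random.default_rng(0)
    ramp = np.tile(np.linspace(0.0, 1.0, 128), (128, 1))  # smooth gradient image
    noisy = ramp + 0.1 * rng.standard_normal(ramp.shape)
    tv = denoise_tv_chambolle(noisy, weight=0.2)  # strong weight: visible staircasing
    ```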

  19. The NIFTY way of Bayesian signal inference

    SciTech Connect

    Selig, Marco

    2014-12-05

    We introduce NIFTY, 'Numerical Information Field Theory', a software package for the development of Bayesian signal inference algorithms that operate independently from any underlying spatial grid and its resolution. A large number of Bayesian and Maximum Entropy methods for 1D signal reconstruction, 2D imaging, as well as 3D tomography, appear formally similar, but one often finds individualized implementations that are neither flexible nor easily transferable. Signal inference in the framework of NIFTY can be done in an abstract way, such that algorithms, prototyped in 1D, can be applied to real world problems in higher-dimensional settings. NIFTY as a versatile library is applicable and already has been applied in 1D, 2D, 3D and spherical settings. A recent application is the D³PO algorithm targeting the non-trivial task of denoising, deconvolving, and decomposing photon observations in high energy astronomy.

  20. Sinogram denoising via simultaneous sparse representation in learned dictionaries

    NASA Astrophysics Data System (ADS)

    Karimi, Davood; Ward, Rabab K.

    2016-05-01

    Reducing the radiation dose in computed tomography (CT) is highly desirable but it leads to excessive noise in the projection measurements. This can significantly reduce the diagnostic value of the reconstructed images. Removing the noise in the projection measurements is, therefore, essential for reconstructing high-quality images, especially in low-dose CT. In recent years, two new classes of patch-based denoising algorithms proved superior to other methods in various denoising applications. The first class is based on sparse representation of image patches in a learned dictionary. The second class is based on the non-local means method. Here, the image is searched for similar patches and the patches are processed together to find their denoised estimates. In this paper, we propose a novel denoising algorithm for cone-beam CT projections. The proposed method has similarities to both these algorithmic classes but is more effective and much faster. In order to exploit both the correlation between neighboring pixels within a projection and the correlation between pixels in neighboring projections, the proposed algorithm stacks noisy cone-beam projections together to form a 3D image and extracts small overlapping 3D blocks from this 3D image for processing. We propose a fast algorithm for clustering all extracted blocks. The central assumption in the proposed algorithm is that all blocks in a cluster have a joint-sparse representation in a well-designed dictionary. We describe algorithms for learning such a dictionary and for denoising a set of projections using this dictionary. We apply the proposed algorithm on simulated and real data and compare it with three other algorithms. Our results show that the proposed algorithm outperforms some of the best denoising algorithms, while also being much faster.
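
    A much-simplified sketch of the patch-based, learned-dictionary idea (single 2D image, no 3D block stacking, no clustering or joint sparsity), using scikit-learn's dictionary learning with orthogonal matching pursuit for the sparse codes. All sizes and the synthetic "projection" are illustrative.

    ```python
    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.feature_extraction.image import (extract_patches_2d,
                                                  reconstruct_from_patches_2d)

    rng = np.random.default_rng(0)
    proj = rng.random((64, 64))                     # stand-in noisy projection
    patches = extract_patches_2d(proj, (6, 6))      # all overlapping patches
    X = patches.reshape(len(patches), -1)
    mean = X.mean(axis=1, keepdims=True)
    X = X - mean                                    # code patch fluctuations only

    dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0,
                                       random_state=0).fit(X)
    code = dico.transform(X)                        # sparse codes (OMP by default)
    denoised_patches = (code @ dico.components_ + mean).reshape(patches.shape)
    denoised = reconstruct_from_patches_2d(denoised_patches, proj.shape)
    ```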

  1. Dictionary Pair Learning on Grassmann Manifolds for Image Denoising.

    PubMed

    Zeng, Xianhua; Bian, Wei; Liu, Wei; Shen, Jialie; Tao, Dacheng

    2015-11-01

    Image denoising is a fundamental problem in computer vision and image processing that holds considerable practical importance for real-world applications. The traditional patch-based and sparse coding-driven image denoising methods convert 2D image patches into 1D vectors for further processing. Thus, these methods inevitably break down the inherent 2D geometric structure of natural images. To overcome this limitation of previous image denoising methods, we propose a 2D image denoising model, namely, the dictionary pair learning (DPL) model, and we design a corresponding algorithm called the DPL on the Grassmann-manifold (DPLG) algorithm. The DPLG algorithm first learns an initial dictionary pair (i.e., the left and right dictionaries) by employing a subspace partition technique on the Grassmann manifold, wherein the refined dictionary pair is obtained through sub-dictionary pair merging. The DPLG obtains a sparse representation by encoding each image patch only with the selected sub-dictionary pair. The non-zero elements of the sparse representation are further smoothed by the graph Laplacian operator to remove the noise. Consequently, the DPLG algorithm not only preserves the inherent 2D geometric structure of natural images but also performs manifold smoothing in the 2D sparse coding space. Experimental evaluations on benchmark images and the Berkeley segmentation data sets demonstrate that the DPLG algorithm improves the structural similarity values of the perceptual visual quality of denoised images and produces peak signal-to-noise ratio values competitive with popular image denoising algorithms.

  2. Application of Wavelet Analysis Technique in the Signal Denoising of Life Sign Detection

    NASA Astrophysics Data System (ADS)

    Zhen, Zhang; Fang, LIU.

    In life sign detection, the radar echo signal is very weak and hard to extract. To solve this problem, weak life-signal denoising based on the wavelet transform is studied. Through a study of the wavelet threshold denoising method, its use for weak life-signal denoising against a strong noise background, and verification by Matlab simulation, the results show that the wavelet threshold denoising method can effectively remove noise from a weak life signal and is an effective denoising and extraction method for weak life signals.

  3. Denoising seismic data using wavelet methods: a comparison study

    NASA Astrophysics Data System (ADS)

    Hloupis, G.; Vallianatos, F.

    2009-04-01

    In order to derive onset times, amplitudes, or other useful characteristics from a seismogram, the usual denoising procedure involves a linear pass-band filter. This family of filters is zero-phase, which is desirable for phase properties, but its efficiency is reduced when transients exist near the seismic signals. An alternative is the Wiener filter, which focuses on minimizing the mean square error between the recorded and expected signals; its main disadvantage is the assumption that signal and noise are stationary. This assumption does not hold for seismic signals, motivating denoising solutions that do not assume stationarity. Solutions based on the Wavelet Transform have proved effective for denoising problems across several areas. Here we present recent WT denoising methods (WDM) that are later applied to seismic sequences of the Seismological Network of Crete. Wavelet denoising schemes have proved to be well adapted to several types of signals; for non-stationary signals such as seismograms, the use of linear and non-linear wavelet denoising methods seems promising. The contribution of this study is a comparison of wavelet denoising methods suitable for seismic signals, which previous studies have shown to be superior to appropriate conventional filtering techniques. The importance of wavelet denoising methods rests on two facts: they recover seismic signals with fewer artifacts than conventional filters (for high-SNR seismograms), and at the same time they can provide satisfactory representations for detecting the earthquake's primary arrival in low-SNR seismograms or microearthquakes. The latter is very important for the possible development of an automatic procedure for the regular daily detection of small or non-regional earthquakes, especially when the number of stations is quite large. Initially, their performance is measured over a database of synthetic seismic signals in order to evaluate the better wavelet

  4. GPU-Accelerated Denoising in 3D (GD3D)

    2013-10-01

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. This software addresses two facets of this promising application: what tuning is necessary to achieve optimal performance on a modern GPU? And what parameters yield the best denoising results in practice? To answer the first question, the software performs an autotuning step to empirically determine optimal memory blocking on the GPU. To answer the second, it performs a sweep of algorithm parameters to determine the combination that best reduces the mean squared error relative to a noiseless reference image.

  5. THE PANCHROMATIC HUBBLE ANDROMEDA TREASURY. IV. A PROBABILISTIC APPROACH TO INFERRING THE HIGH-MASS STELLAR INITIAL MASS FUNCTION AND OTHER POWER-LAW FUNCTIONS

    SciTech Connect

    Weisz, Daniel R.; Fouesneau, Morgan; Dalcanton, Julianne J.; Clifton Johnson, L.; Beerman, Lori C.; Williams, Benjamin F.; Hogg, David W.; Foreman-Mackey, Daniel T.; Rix, Hans-Walter; Gouliermis, Dimitrios; Dolphin, Andrew E.; Lang, Dustin; Bell, Eric F.; Gordon, Karl D.; Kalirai, Jason S.; Skillman, Evan D.

    2013-01-10

    We present a probabilistic approach for inferring the parameters of the present-day power-law stellar mass function (MF) of a resolved young star cluster. This technique (1) fully exploits the information content of a given data set; (2) can account for observational uncertainties in a straightforward way; (3) assigns meaningful uncertainties to the inferred parameters; (4) avoids the pitfalls associated with binning data; and (5) can be applied to virtually any resolved young cluster, laying the groundwork for a systematic study of the high-mass stellar MF (M ≳ 1 M☉). Using simulated clusters and Markov Chain Monte Carlo sampling of the probability distribution functions, we show that estimates of the MF slope, α, are unbiased and that the uncertainty, Δα, depends primarily on the number of observed stars and on the range of stellar masses they span, assuming that the uncertainties on individual masses and the completeness are both well characterized. Using idealized mock data, we compute the theoretical precision, i.e., lower limits, on α, and provide an analytic approximation for Δα as a function of the observed number of stars and mass range. Comparison with literature studies shows that ~3/4 of quoted uncertainties are smaller than the theoretical lower limit. By correcting these uncertainties to the theoretical lower limits, we find that the literature studies yield ⟨α⟩ = 2.46, with a 1σ dispersion of 0.35 dex. We verify that it is impossible for a power-law MF to obtain meaningful constraints on the upper mass limit of the initial mass function, beyond the lower bound of the most massive star actually observed. We show that avoiding substantial biases in the MF slope requires (1) including the MF as a prior when deriving individual stellar mass estimates, (2) modeling the uncertainties in the individual stellar masses, and (3) fully characterizing and then explicitly modeling the
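
    For orientation, a minimal sketch of the simplest probabilistic treatment: the closed-form maximum-likelihood slope for a pure power law above a completeness limit, with its analytic uncertainty. This deliberately omits the photometric-error and completeness modeling that the paper shows is essential, and the numbers are synthetic.

    ```python
    import numpy as np

    def powerlaw_mle(masses, m_min):
        """MLE slope for dN/dM ~ M**(-alpha) above m_min, with its 1-sigma error."""
        m = masses[masses >= m_min]
        alpha = 1.0 + len(m) / np.sum(np.log(m / m_min))
        sigma = (alpha - 1.0) / np.sqrt(len(m))
        return alpha, sigma

    rng = np.random.default_rng(0)
    u = rng.random(500)
    masses = 1.0 * (1.0 - u) ** (-1.0 / 1.35)   # draws with true alpha = 2.35
    print(powerlaw_mle(masses, 1.0))
    ```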

  6. An association of platelet indices with blood pressure in Beijing adults: Applying quadratic inference function for a longitudinal study.

    PubMed

    Yang, Kun; Tao, Lixin; Mahara, Gehendra; Yan, Yan; Cao, Kai; Liu, Xiangtong; Chen, Sipeng; Xu, Qin; Liu, Long; Wang, Chao; Huang, Fangfang; Zhang, Jie; Yan, Aoshuang; Ping, Zhao; Guo, Xiuhua

    2016-09-01

    The quadratic inference function (QIF) method has become more acceptable for correlated data because of its advantages over generalized estimating equations (GEE). This study aimed to evaluate the relationship between platelet indices and blood pressure using the QIF method, which has not been studied extensively in real data settings. A population-based longitudinal study was conducted in Beijing from 2007 to 2012, with a median follow-up of 6 years. A total of 6515 cases from 3 Beijing hospitals, aged between 20 and 65 years at baseline, who underwent routine physical examinations every year, were enrolled to explore the association between platelet indices and blood pressure by the QIF method. The original continuous platelet indices were categorized into 4 levels (Q1-Q4) using the three quartiles P25, P50, and P75 as cut points. GEE was performed to make a comparison with QIF. After adjusting for age, usage of drugs, and other confounding factors, mean platelet volume was negatively associated with diastolic blood pressure (DBP) (Equation is included in full-text article.) in males and positively linked with systolic blood pressure (SBP) (Equation is included in full-text article.). Platelet distribution width was negatively associated with SBP (Equation is included in full-text article.). Blood platelet count was associated with DBP (Equation is included in full-text article.) in males. Adults in Beijing with prolonged exposure to extreme values of platelet indices have an elevated risk of future hypertension, and evidence is provided supporting the use of some platelet indices for early diagnosis of high blood pressure. PMID:27684843

  7. Discrete shearlet transform on GPU with applications in anomaly detection and denoising

    NASA Astrophysics Data System (ADS)

    Gibert, Xavier; Patel, Vishal M.; Labate, Demetrio; Chellappa, Rama

    2014-12-01

    Shearlets have emerged in recent years as one of the most successful methods for the multiscale analysis of multidimensional signals. Unlike wavelets, shearlets form a pyramid of well-localized functions defined not only over a range of scales and locations, but also over a range of orientations and with highly anisotropic supports. As a result, shearlets are much more effective than traditional wavelets in handling the geometry of multidimensional data, and this was exploited in a wide range of applications from image and signal processing. However, despite their desirable properties, the wider applicability of shearlets is limited by the computational complexity of current software implementations. For example, denoising a single 512 × 512 image using a current implementation of the shearlet-based shrinkage algorithm can take between 10 s and 2 min, depending on the number of CPU cores, and much longer processing times are required for video denoising. On the other hand, due to the parallel nature of the shearlet transform, it is possible to use graphics processing units (GPU) to accelerate its implementation. In this paper, we present an open source stand-alone implementation of the 2D discrete shearlet transform using CUDA C++ as well as GPU-accelerated MATLAB implementations of the 2D and 3D shearlet transforms. We have instrumented the code so that we can analyze the running time of each kernel under different GPU hardware. In addition to denoising, we describe a novel application of shearlets for detecting anomalies in textured images. In this application, computation times can be reduced by a factor of 50 or more, compared to multicore CPU implementations.

  8. Impedance cardiography signal denoising using discrete wavelet transform.

    PubMed

    Chabchoub, Souhir; Mansouri, Sofienne; Salah, Ridha Ben

    2016-09-01

    Impedance cardiography (ICG) is a non-invasive technique for diagnosing cardiovascular diseases. During acquisition, the ICG signal is often affected by several kinds of noise which distort the determination of the hemodynamic parameters; as a result, the ICG waveform cannot be recognized correctly and the diagnosis of cardiovascular disease becomes inaccurate. The aim of this work is to choose the most suitable method for denoising the ICG signal. To this end, different wavelet families are used to denoise the ICG signal. The Haar, Daubechies (db2, db4, db6, and db8), Symlet (sym2, sym4, sym6, sym8) and Coiflet (coif2, coif3, coif4, coif5) wavelet families are tested and evaluated in order to select the most suitable denoising method. The wavelet family with the best performance is compared with two other denoising methods: one based on Savitzky-Golay filtering and the other based on median filtering. Each method is evaluated by means of the signal-to-noise ratio (SNR), the root mean square error (RMSE) and the percent difference root mean square (PRD). The results show that the Daubechies wavelet family (db8) has superior noise-reduction performance in comparison to the other methods. PMID:27376722
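
    A minimal sketch of the three figures of merit named above, for a clean reference signal x and a denoised estimate y; the definitions follow the usual conventions, which may differ in detail from those used in the paper.

    ```python
    import numpy as np

    def snr_db(x, y):
        """Signal-to-noise ratio of estimate y against clean reference x, in dB."""
        return 10.0 * np.log10(np.sum(x ** 2) / np.sum((x - y) ** 2))

    def rmse(x, y):
        """Root mean square error."""
        return np.sqrt(np.mean((x - y) ** 2))

    def prd(x, y):
        """Percent difference root mean square."""
        return 100.0 * np.sqrt(np.sum((x - y) ** 2) / np.sum(x ** 2))
    ```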

  9. Pixon Based Image Denoising Scheme by Preserving Exact Edge Locations

    NASA Astrophysics Data System (ADS)

    Srikrishna, Atluri; Reddy, B. Eswara; Pompapathi, Manasani

    2016-09-01

    Denoising an image is an essential step in many image processing applications. A major concern in any image denoising algorithm is to keep interesting structures of the image, such as abrupt changes in image intensity values (edges). In this paper, an efficient algorithm for image denoising is proposed that restores the original image from a noisy image using diffusion equations in the pixon domain. The process consists mainly of two steps. In the first step, the pixons of the noisy image are obtained using a K-means clustering process; the next step applies diffusion equations to the pixonal model of the image to obtain new intensity values for the restored image. The process has been applied to a variety of standard images, and the objective fidelity has been compared with existing algorithms. The experimental results show that the proposed algorithm performs better, preserving edge details as measured by the Figure of Merit and yielding an improved Peak Signal-to-Noise Ratio value. The proposed method thus provides a denoising technique that preserves exact edge locations.

  10. MicroRNA-Target Network Inference and Local Network Enrichment Analysis Identify Two microRNA Clusters with Distinct Functions in Head and Neck Squamous Cell Carcinoma

    PubMed Central

    Sass, Steffen; Pitea, Adriana; Unger, Kristian; Hess, Julia; Mueller, Nikola S.; Theis, Fabian J.

    2015-01-01

    MicroRNAs represent ~22 nt long endogenous small RNA molecules that have been experimentally shown to regulate gene expression post-transcriptionally. One main interest in miRNA research is the investigation of their functional roles, which can typically be accomplished by identification of mi-/mRNA interactions and functional annotation of target gene sets. We here present a novel method “miRlastic”, which infers miRNA-target interactions using transcriptomic data as well as prior knowledge and performs functional annotation of target genes by exploiting the local structure of the inferred network. For the network inference, we applied linear regression modeling with elastic net regularization on matched microRNA and messenger RNA expression profiling data to perform feature selection on prior knowledge from sequence-based target prediction resources. The novelty of miRlastic inference originates in predicting data-driven intra-transcriptome regulatory relationships through feature selection. With synthetic data, we showed that miRlastic outperformed commonly used methods and was suitable even for low sample sizes. To gain insight into the functional role of miRNAs and to determine joint functional properties of miRNA clusters, we introduced a local enrichment analysis procedure. The principle of this procedure lies in identifying regions of high functional similarity by evaluating the shortest paths between genes in the network. We can finally assign functional roles to the miRNAs by taking their regulatory relationships into account. We thoroughly evaluated miRlastic on a cohort of head and neck cancer (HNSCC) patients provided by The Cancer Genome Atlas. We inferred an mi-/mRNA regulatory network for human papilloma virus (HPV)-associated miRNAs in HNSCC. The resulting network was most strongly enriched for experimentally validated miRNA-target interactions when compared to common methods. Finally, the local enrichment step identified two functional clusters of miRNAs.
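
    A minimal sketch of the elastic-net inference step described above, assuming scikit-learn and NumPy; the simulated expression data, candidate list and parameter values are hypothetical stand-ins for matched miRNA/mRNA profiles and a sequence-based target database.

```python
# Hypothetical sketch: per-mRNA elastic net regression on candidate
# regulator miRNAs, keeping negative coefficients as putative
# repressive miRNA-target interactions.
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
n_samples, n_mirna = 60, 40
X = rng.standard_normal((n_samples, n_mirna))          # miRNA expression
# One mRNA repressed by miRNAs 3 and 17 (hypothetical ground truth).
y = -1.5 * X[:, 3] - 0.8 * X[:, 17] + 0.3 * rng.standard_normal(n_samples)

prior_candidates = [1, 3, 9, 17, 25]    # from sequence-based predictions
model = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X[:, prior_candidates], y)

for mirna, coef in zip(prior_candidates, model.coef_):
    if coef < 0:    # miRNAs repress targets, so keep negative weights
        print(f"miRNA {mirna} -> mRNA: coefficient {coef:.2f}")
```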

  11. The Hilbert-Huang Transform-Based Denoising Method for the TEM Response of a PRBS Source Signal

    NASA Astrophysics Data System (ADS)

    Hai, Li; Guo-qiang, Xue; Pan, Zhao; Hua-sen, Zhong; Khan, Muhammad Younis

    2016-08-01

    The denoising process is critical in processing transient electromagnetic (TEM) sounding data. For the full waveform pseudo-random binary sequence (PRBS) response, an inadequate noise estimation may result in an erroneous interpretation. We consider the Hilbert-Huang transform (HHT) and its application to suppress the noise in the PRBS response. The focus is on the thresholding scheme to suppress the noise and on the analysis of the signal based on its Hilbert time-frequency representation. The method first decomposes the signal into intrinsic mode functions, and then, inspired by the thresholding scheme in wavelet analysis, an adaptive interval thresholding is conducted to set to zero all the components of the intrinsic mode functions which are lower than a threshold related to the noise level. The algorithm is based on the characteristics of the PRBS response. The HHT-based denoising scheme is tested on synthetic and field data with different noise levels. The results show that the proposed method has a good capability for denoising and detail preservation.
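
    A minimal sketch of the HHT-style denoising idea, assuming the third-party PyEMD package (pip install EMD-signal) for the decomposition; the simple amplitude threshold below is a simplified stand-in for the paper's adaptive interval thresholding.

```python
# Hypothetical sketch: decompose into IMFs, zero small-amplitude IMF
# samples, and reconstruct. PyEMD is an assumed dependency.
import numpy as np
from PyEMD import EMD

def hht_denoise(signal, k=3.0):
    imfs = EMD()(signal)                 # rows: IMFs, last row ~ trend
    out = imfs[-1].copy()                # keep the low-frequency trend
    for imf in imfs[:-1]:
        sigma = np.median(np.abs(imf)) / 0.6745   # robust noise level
        out += np.where(np.abs(imf) > k * sigma, imf, 0.0)
    return out

t = np.linspace(0, 1, 2000)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * np.random.randn(t.size)
print("residual std:", np.std(hht_denoise(noisy) - clean))
```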

  12. Functional characterization of somatic mutations in cancer using network-based inference of protein activity | Office of Cancer Genomics

    Cancer.gov

    Identifying the multiple dysregulated oncoproteins that contribute to tumorigenesis in a given patient is crucial for developing personalized treatment plans. However, accurate inference of aberrant protein activity in biological samples is still challenging as genetic alterations are only partially predictive and direct measurements of protein activity are generally not feasible.

  13. Multitaper Spectral Analysis and Wavelet Denoising Applied to Helioseismic Data

    NASA Technical Reports Server (NTRS)

    Komm, R. W.; Gu, Y.; Hill, F.; Stark, P. B.; Fodor, I. K.

    1999-01-01

    Estimates of solar normal mode frequencies from helioseismic observations can be improved by using Multitaper Spectral Analysis (MTSA) to estimate spectra from the time series, then using wavelet denoising of the log spectra. MTSA leads to a power spectrum estimate with reduced variance and better leakage properties than the conventional periodogram. Under the assumption of stationarity and mild regularity conditions, the log multitaper spectrum has a statistical distribution that is approximately Gaussian, so wavelet denoising is asymptotically an optimal method to reduce the noise in the estimated spectra. We find that a single m-ν spectrum benefits greatly from MTSA followed by wavelet denoising, and that wavelet denoising by itself can be used to improve m-averaged spectra. We compare estimates using two different 5-taper estimates (Slepian and sine tapers) and the periodogram estimate, for GONG time series at selected angular degrees l. We compare those three spectra with and without wavelet denoising, both visually and in terms of the mode parameters estimated from the pre-processed spectra using the GONG peak-fitting algorithm. The two multitaper estimates give equivalent results. The number of modes fitted well by the GONG algorithm is 20% to 60% larger (depending on l and the temporal frequency) when applied to the multitaper estimates than when applied to the periodogram. The estimated mode parameters (frequency, amplitude and width) are comparable for the three power spectrum estimates, except for modes with very small mode widths (a few frequency bins), where the multitaper spectra broadened the modes compared with the periodogram. We tested the influence of the number of tapers used and found that narrow modes at low n values are broadened to the extent that they can no longer be fit if the number of tapers is too large. For helioseismic time series of this length and temporal resolution, the optimal number of tapers is less than 10.
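
    A minimal multitaper spectrum estimate with Slepian (DPSS) tapers, sketching the MTSA step described above; SciPy and NumPy assumed, and the taper count and bandwidth parameter are illustrative.

```python
# Hypothetical sketch: average eigenspectra over DPSS tapers; the log
# spectrum is then approximately Gaussian and suited to wavelet denoising.
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, n_tapers=5, nw=3.0):
    tapers = dpss(len(x), NW=nw, Kmax=n_tapers)       # shape (K, N)
    specs = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    return specs.mean(axis=0)     # lower variance than one periodogram

x = np.sin(2 * np.pi * 0.1 * np.arange(4096)) + np.random.randn(4096)
log_psd = np.log(multitaper_psd(x))   # input to a wavelet denoising stage
```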

  14. Dictionary-based image denoising for dual energy computed tomography

    NASA Astrophysics Data System (ADS)

    Mechlem, Korbinian; Allner, Sebastian; Mei, Kai; Pfeiffer, Franz; Noël, Peter B.

    2016-03-01

    Compared to conventional computed tomography (CT), dual energy CT allows for improved material decomposition by conducting measurements at two distinct energy spectra. Since radiation exposure is a major concern in clinical CT, there is a need for tools to reduce the noise level in images while preserving diagnostic information. One way to achieve this goal is the application of image-based denoising algorithms after an analytical reconstruction has been performed. We have developed a modified dictionary denoising algorithm for dual energy CT aimed at exploiting the high spatial correlation between images obtained from different energy spectra. Both the low- and high-energy images are partitioned into small patches which are subsequently normalized. Combined patches with improved signal-to-noise ratio are formed by a weighted addition of corresponding normalized patches from both images. Assuming that corresponding low- and high-energy image patches are related by a linear transformation, the signal in both patches is added coherently while noise is neglected. Conventional dictionary denoising is then performed on the combined patches. Compared to conventional dictionary denoising and bilateral filtering, our algorithm achieved superior performance in terms of qualitative and quantitative image quality measures. We demonstrate, in simulation studies, that this approach can produce 2D histograms of the high- and low-energy reconstructions which are characterized by significantly improved material features and separation. Moreover, in comparison to other approaches that attempt denoising without simultaneously using both energy signals, superior similarity to the ground truth can be found with our proposed algorithm.

  15. Customized maximal-overlap multiwavelet denoising with data-driven group threshold for condition monitoring of rolling mill drivetrain

    NASA Astrophysics Data System (ADS)

    Chen, Jinglong; Wan, Zhiguo; Pan, Jun; Zi, Yanyang; Wang, Yu; Chen, Binqiang; Sun, Hailiang; Yuan, Jing; He, Zhengjia

    2016-02-01

    Timely fault identification for a rolling mill drivetrain is significant for guaranteeing product quality and realizing long-term safe operation, so a condition monitoring system for rolling mill drivetrains has been designed and developed. However, because compound-fault and weak-fault feature information is usually submerged in heavy background noise, this task still faces challenges. This paper provides a possibility for fault identification of rolling mill drivetrains by proposing a customized maximal-overlap multiwavelet denoising method. The effectiveness of a wavelet denoising method mainly relies on the appropriate selection of the wavelet basis, the transform strategy and the threshold rule. First, in order to realize exact matching and accurate detection of fault features, a customized multiwavelet basis function is constructed via a symmetric lifting scheme and the vibration signal is then processed by the maximal-overlap multiwavelet transform. Next, based on the spatial dependency of multiwavelet transform coefficients, a spatial neighboring coefficient data-driven group threshold shrinkage strategy is developed for the denoising process, choosing the optimal group length and threshold via the minimum of Stein's Unbiased Risk Estimate. The effectiveness of the proposed method is first demonstrated through compound fault identification of a reduction gearbox on a rolling mill. It is then applied to weak fault identification of a dedusting fan bearing on a rolling mill, and the results support its feasibility.
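
    A minimal sketch of threshold selection by Stein's Unbiased Risk Estimate for soft thresholding, the criterion named above; this scalar (non-group) version in plain NumPy is illustrative, not the authors' group-threshold implementation.

```python
# Hypothetical sketch: pick the soft threshold minimizing SURE.
import numpy as np

def sure_threshold(coeffs, sigma):
    x = np.abs(coeffs)
    n = x.size
    best_t, best_risk = 0.0, np.inf
    for t in np.sort(x):
        # SURE(t) = n*sigma^2 - 2*sigma^2*#{|x|<=t} + sum(min(|x|,t)^2)
        risk = (n * sigma**2 - 2 * sigma**2 * np.sum(x <= t)
                + np.sum(np.minimum(x, t) ** 2))
        if risk < best_risk:
            best_t, best_risk = t, risk
    return best_t

coeffs = np.random.randn(1024) * 0.5
coeffs[:20] += 5.0                       # a few large "signal" coefficients
t = sure_threshold(coeffs, sigma=0.5)
denoised = np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)
```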

  16. A hybrid fault diagnosis method based on second generation wavelet de-noising and local mean decomposition for rotating machinery.

    PubMed

    Liu, Zhiwen; He, Zhengjia; Guo, Wei; Tang, Zhangchun

    2016-03-01

    In order to extract fault features of large-scale power equipment from strong background noise, a hybrid fault diagnosis method based on second generation wavelet de-noising (SGWD) and the local mean decomposition (LMD) is proposed in this paper. In this method, a de-noising algorithm of the second generation wavelet transform (SGWT) using neighboring coefficients is employed as a pretreatment to remove noise from rotating machinery vibration signals, by virtue of its good effect in enhancing the signal-to-noise ratio (SNR). Then, the LMD method is used to decompose the de-noised signals into several product functions (PFs). The PF corresponding to the faulty feature signal is selected according to the correlation coefficient criterion. Finally, the frequency spectrum is analyzed by applying the FFT to the selected PF. The proposed method is applied to analyze the vibration signals collected from an experimental gearbox and a real locomotive rolling bearing. The results demonstrate that the proposed method offers better performance, such as higher SNR and faster convergence, than the standard LMD method.

  17. Translation-invariant multiwavelet denoising using improved neighbouring coefficients and its application on rolling bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Hailiang, Sun; Yanyang, Zi; Zhengjia, He; Xiaodong, Wang; Jing, Yuan

    2011-07-01

    The deficiencies of conventional neighbouring-coefficient denoising are its fixed neighbouring window size and its global threshold; as a result, it cannot accurately represent the locally concentrated energy of signals collected in engineering applications. An improved neighbouring-coefficient scheme, named Neighbouring Coefficients Dependent on Level (NCDL), is proposed. The size of the neighbouring window varies with the decomposition level and the threshold is chosen according to the neighbourhood. A translation-invariant method can effectively weaken visual artifacts, for example Gibbs phenomena in the neighbourhood of discontinuities. Multiwavelets have two or more scaling and wavelet functions. Compared with scalar wavelets, multiwavelets offer several excellent properties such as symmetry, orthogonality, compact support and a higher order of vanishing moments. A novel denoising method - translation-invariant multiwavelet denoising with improved neighbouring coefficients - is presented. A simulated signal demonstrates the validity of the presented method. The method is then applied to the fault diagnosis of a locomotive rolling bearing. The results show that the presented method can effectively extract the fault characteristic frequency of a slight scrape on the outer race of the rolling bearing.
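
    A minimal translation-invariant (cycle-spinning) denoising sketch, which averages wavelet-thresholded reconstructions over circular shifts; PyWavelets and NumPy assumed. It uses a scalar wavelet and a global threshold for brevity, whereas the paper uses multiwavelets with level-dependent neighbouring windows.

```python
# Hypothetical sketch: cycle spinning to suppress Gibbs-like artifacts.
import numpy as np
import pywt

def ti_denoise(x, wavelet="sym4", level=4, n_shifts=8):
    out = np.zeros_like(x, dtype=float)
    for s in range(n_shifts):
        shifted = np.roll(x, s)
        coeffs = pywt.wavedec(shifted, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        thr = sigma * np.sqrt(2 * np.log(len(x)))
        coeffs[1:] = [pywt.threshold(c, thr, "soft") for c in coeffs[1:]]
        rec = pywt.waverec(coeffs, wavelet)[: len(x)]
        out += np.roll(rec, -s)          # undo the shift and accumulate
    return out / n_shifts
```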

  19. A de-noising algorithm to improve SNR of segmented gamma scanner for spectrum analysis

    NASA Astrophysics Data System (ADS)

    Li, Huailiang; Tuo, Xianguo; Shi, Rui; Zhang, Jinzhao; Henderson, Mark Julian; Courtois, Jérémie; Yan, Minhao

    2016-05-01

    An improved-threshold shift-invariant wavelet transform de-noising algorithm for high-resolution gamma-ray spectroscopy is proposed to optimize the threshold function of the wavelet transform and to reduce the signal distortion caused by pseudo-Gibbs artificial fluctuations. The algorithm was applied to a segmented gamma scanning system for large samples, in which high continuum levels caused by Compton scattering are routinely encountered. De-noising of gamma-ray spectra measured by the segmented gamma scanning system was evaluated for the improved, the shift-invariant and the traditional wavelet transform algorithms. The improved wavelet transform method yielded significantly enhanced performance in terms of the figure of merit, the root mean square error, the peak area, and the sample attenuation correction in the segmented gamma scanning assays. Spectrum analysis also showed that the gamma energy spectrum can be viewed as a superposition of a low-frequency signal and high-frequency noise. Moreover, the smoothed spectrum is suitable for straightforward automated quantitative analysis.

  20. Improved deadzone modeling for bivariate wavelet shrinkage-based image denoising

    NASA Astrophysics Data System (ADS)

    DelMarco, Stephen

    2016-05-01

    Modern image processing performed on-board low Size, Weight, and Power (SWaP) platforms must provide high performance while simultaneously reducing memory footprint, power consumption, and computational complexity. Image preprocessing, along with downstream image exploitation algorithms such as object detection and recognition, and georegistration, places a heavy burden on power and processing resources. Image preprocessing often includes image denoising to improve data quality for downstream exploitation algorithms. High-performance image denoising is typically performed in the wavelet domain, where noise generally spreads out and the wavelet transform compactly captures high information-bearing image characteristics. In this paper, we improve the modeling fidelity of a previously-developed, computationally-efficient wavelet-based denoising algorithm. The modeling improvements enhance denoising performance without significantly increasing computational cost, thus making the approach suitable for low-SWaP platforms. Specifically, this paper presents modeling improvements to the Sendur-Selesnick model (SSM), which implements a bivariate wavelet shrinkage denoising algorithm that exploits the interscale dependency between wavelet coefficients. We formulate optimization problems for the parameters controlling deadzone size, which leads to improved denoising performance. Two formulations are provided: one with a simple, closed form solution, which we use for numerical result generation, and a second integral equation formulation involving elliptic integrals. We generate image denoising performance results over different image sets drawn from public domain imagery, and investigate the effect of wavelet filter tap length on denoising performance. We demonstrate a denoising performance improvement when using the enhanced modeling relative to the baseline SSM model.
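
    For reference, a minimal NumPy sketch of the baseline Sendur-Selesnick bivariate shrinkage rule that the paper refines (without the deadzone optimization proposed here); the local signal variance estimate is assumed to come from a neighbourhood average.

```python
# Hypothetical sketch: bivariate shrinkage of a child coefficient y1
# using its parent y2 at the next coarser scale.
import numpy as np

def bivariate_shrink(y1, y2, sigma_n, sigma_s):
    # sigma_n: noise std; sigma_s: locally estimated signal std.
    r = np.sqrt(y1**2 + y2**2)
    gain = np.maximum(r - np.sqrt(3.0) * sigma_n**2 / sigma_s, 0.0)
    return np.where(r > 0, gain / np.maximum(r, 1e-12), 0.0) * y1
```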

  1. Energy-based wavelet de-noising of hydrologic time series.

    PubMed

    Sang, Yan-Fang; Liu, Changming; Wang, Zhonggen; Wen, Jun; Shang, Lunyu

    2014-01-01

    De-noising is a substantial issue in hydrologic time series analysis, but it is a difficult task due to the limitations of existing methods. In this paper an energy-based wavelet de-noising method is proposed. It removes noise by comparing the energy distribution of the series with a background energy distribution established from a Monte-Carlo test. Differing from the wavelet threshold de-noising (WTD) method, which is based on thresholding wavelet coefficients, the proposed method is based on the energy distribution of the series. It can distinguish noise from deterministic components in a series, and the uncertainty of the de-noising result can be quantitatively estimated using a proper confidence interval, which WTD cannot do. Analysis of both synthetic and observed series verified the comparable power of the proposed method and WTD, but the de-noising process of the former is more easily operated. The results also indicate the influence of three key factors (wavelet choice, decomposition level choice and noise content) on wavelet de-noising. The wavelet should be carefully chosen when using the proposed method. The suitable decomposition level for wavelet de-noising should correspond to the deterministic sub-signal of the series with the smallest temporal scale. If too much noise is included in a series, an accurate de-noising result cannot be obtained by either the proposed method or WTD; however, such a series would show purely random rather than autocorrelated behavior, so de-noising would no longer be needed.
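
    A minimal sketch of the Monte-Carlo background step described above, assuming PyWavelets and NumPy: build the energy distribution of pure-noise wavelet levels, then flag levels of the (standardized) observed series whose energy exceeds a noise quantile. The standardization and the 95% quantile are illustrative choices.

```python
# Hypothetical sketch: energy-based significance test per wavelet level.
import numpy as np
import pywt

def noise_energy_quantiles(n, wavelet="db4", level=5, trials=500, q=0.95):
    energies = []
    for _ in range(trials):
        coeffs = pywt.wavedec(np.random.randn(n), wavelet, level=level)
        energies.append([np.sum(c**2) for c in coeffs[1:]])
    return np.quantile(np.array(energies), q, axis=0)

x = np.sin(2 * np.pi * np.arange(1024) / 64) + np.random.randn(1024)
x = (x - x.mean()) / x.std()            # crude unit-variance standardization
obs = [np.sum(c**2) for c in pywt.wavedec(x, "db4", level=5)[1:]]
background = noise_energy_quantiles(1024)
signal_levels = [i for i, (e, b) in enumerate(zip(obs, background)) if e > b]
print("levels carrying deterministic energy:", signal_levels)
```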

  2. Examining Alternatives to Wavelet Denoising for Astronomical Source Finding

    NASA Astrophysics Data System (ADS)

    Jurek, R.; Brown, S.

    2012-08-01

    The Square Kilometre Array and its pathfinders ASKAP and MeerKAT will produce prodigious amounts of data that necessitate automated source finding. The performance of automated source finders can be improved by pre-processing a dataset. In preparation for the WALLABY and DINGO surveys, we have used a test HI datacube constructed from actual Westerbork Telescope noise and WHISP HI galaxies to test the real-world improvement of linear smoothing, the Duchamp source finder's wavelet denoising, iterative median smoothing and mathematical morphology subtraction on intensity-threshold source finding of spectral line datasets. To compare these pre-processing methods we have generated completeness-reliability performance curves for each method over a range of input parameters. We find that iterative median smoothing produces the best source finding results for ASKAP HI spectral line observations, but wavelet denoising is a safer pre-processing technique. In this paper we also present our implementations of iterative median smoothing and mathematical morphology subtraction.
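
    A minimal sketch of iterative median smoothing as a pre-processing step, assuming SciPy and NumPy; the kernel size, iteration cap and stopping rule are illustrative, not the survey pipeline's settings.

```python
# Hypothetical sketch: median-filter a datacube until it stops changing,
# then apply a simple intensity threshold for source finding.
import numpy as np
from scipy.ndimage import median_filter

def iterative_median(cube, size=3, max_iter=10, tol=1e-3):
    out = cube.copy()
    for _ in range(max_iter):
        smoothed = median_filter(out, size=size)
        if np.abs(smoothed - out).max() < tol * np.abs(out).max():
            return smoothed
        out = smoothed
    return out

cube = np.random.randn(32, 32, 64)            # stand-in for an HI datacube
mask = iterative_median(cube) > 3.0           # intensity-threshold detection
```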

  3. Diffusion weighted image denoising using overcomplete local PCA.

    PubMed

    Manjón, José V; Coupé, Pierrick; Concha, Luis; Buades, Antonio; Collins, D Louis; Robles, Montserrat

    2013-01-01

    Diffusion Weighted Images (DWI) normally show a low Signal to Noise Ratio (SNR) due to the presence of noise from the measurement process that complicates and biases the estimation of quantitative diffusion parameters. In this paper, a new denoising methodology is proposed that takes into consideration the multicomponent nature of multi-directional DWI datasets such as those employed in diffusion imaging. This new filter reduces random noise in multicomponent DWI by locally shrinking less significant Principal Components using an overcomplete approach. The proposed method is compared with state-of-the-art methods using synthetic and real clinical MR images, showing improved performance in terms of denoising quality and estimation of diffusion parameters. PMID:24019889
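
    A minimal sketch of the core local-PCA shrinkage step, in plain NumPy: stack the diffusion volumes over one neighbourhood, keep principal components whose eigenvalues exceed a noise-derived threshold, and reconstruct. The patch bookkeeping and threshold factor are simplified; the published filter aggregates overlapping (overcomplete) patches.

```python
# Hypothetical sketch: PCA eigenvalue shrinkage for one DWI patch.
import numpy as np

def lpca_patch(patch, sigma, factor=2.3):
    # patch: (n_voxels, n_directions) matrix from one local neighbourhood;
    # factor is an illustrative eigenvalue threshold multiplier.
    mean = patch.mean(axis=0)
    u, s, vt = np.linalg.svd(patch - mean, full_matrices=False)
    eigvals = s**2 / patch.shape[0]
    keep = eigvals > factor * sigma**2
    return (u[:, keep] * s[keep]) @ vt[keep] + mean

patch = np.random.randn(125, 30) * 0.1        # 5x5x5 voxels, 30 directions
restored = lpca_patch(patch, sigma=0.1)
```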

  5. Denoising Sparse Images from GRAPPA using the Nullspace Method (DESIGN)

    PubMed Central

    Weller, Daniel S.; Polimeni, Jonathan R.; Grady, Leo; Wald, Lawrence L.; Adalsteinsson, Elfar; Goyal, Vivek K

    2011-01-01

    To accelerate magnetic resonance imaging using uniformly undersampled (nonrandom) parallel imaging beyond what is achievable with GRAPPA alone, the Denoising of Sparse Images from GRAPPA using the Nullspace method (DESIGN) is developed. The trade-off between denoising and smoothing the GRAPPA solution is studied for different levels of acceleration. Several brain images reconstructed from uniformly undersampled k-space data using DESIGN are compared against reconstructions using existing methods in terms of difference images (a qualitative measure), PSNR, and noise amplification (g-factors) as measured using the pseudo-multiple replica method. Effects of smoothing, including contrast loss, are studied in synthetic phantom data. In the experiments presented, the contrast loss and spatial resolution are competitive with existing methods. Results for several brain images demonstrate significant improvements over GRAPPA at high acceleration factors in denoising performance with limited blurring or smoothing artifacts. In addition, the measured g-factors suggest that DESIGN mitigates noise amplification better than both GRAPPA and L1 SPIR-iT (the latter limited here by uniform undersampling). PMID:22213069

  6. Streak image denoising and segmentation using adaptive Gaussian guided filter.

    PubMed

    Jiang, Zhuocheng; Guo, Baoping

    2014-09-10

    In streak tube imaging lidar (STIL), streak images are obtained using a CCD camera. However, noise in the captured streak images can greatly affect the quality of reconstructed 3D contrast and range images. The greatest challenge for streak image denoising is reducing the noise while preserving details. In this paper, we propose an adaptive Gaussian guided filter (AGGF) for noise removal and detail enhancement of streak images. The proposed algorithm is based on a guided filter (GF) and part of an adaptive bilateral filter (ABF). In the AGGF, the details are enhanced by optimizing the offset parameter. AGGF-denoised streak images are significantly sharper than those denoised by the GF. Moreover, the AGGF is a fast linear time algorithm achieved by recursively implementing a Gaussian filter kernel. Experimentally, AGGF demonstrates its capacity to preserve edges and thin structures and outperforms the existing bilateral filter and domain transform filter in terms of both visual quality and peak signal-to-noise ratio performance.
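
    For context, a minimal grayscale guided filter (the base filter that the AGGF extends), in NumPy/SciPy with box filtering and self-guidance; the radius and regularization are illustrative.

```python
# Hypothetical sketch: classic guided filter via box (uniform) filters.
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-2):
    size = 2 * radius + 1
    mean_i = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    cov_ip = uniform_filter(guide * src, size) - mean_i * mean_p
    var_i = uniform_filter(guide * guide, size) - mean_i * mean_i
    a = cov_ip / (var_i + eps)          # per-pixel linear coefficients
    b = mean_p - a * mean_i
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

img = np.random.rand(128, 128)
smoothed = guided_filter(img, img)      # self-guidance denoising
```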

  8. Comparison of de-noising techniques for FIRST images

    SciTech Connect

    Fodor, I K; Kamath, C

    2001-01-22

    Data obtained through scientific observations are often contaminated by noise and artifacts from various sources. As a result, a first step in mining these data is to isolate the signal of interest by minimizing the effects of the contamination. Once the data have been cleaned or de-noised, data mining can proceed as usual. In this paper, we describe our work in denoising astronomical images from the Faint Images of the Radio Sky at Twenty-Centimeters (FIRST) survey. We are mining this survey to detect radio-emitting galaxies with a bent-double morphology. This task is made difficult by the noise in the images caused by the processing of the sensor data. We compare three different approaches to de-noising: thresholding of wavelet coefficients advocated in the statistical community, traditional filtering methods used in the image processing community, and a simple thresholding scheme proposed by FIRST astronomers. While each approach has its merits and pitfalls, we found that, for our purpose, the simple thresholding scheme worked relatively well for the FIRST dataset.
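
    A minimal sketch of the wavelet-coefficient thresholding approach compared above, applied to a 2D image with PyWavelets; the hard threshold rule and noise estimate are conventional illustrative choices.

```python
# Hypothetical sketch: hard-threshold the detail subbands of a 2D image.
import numpy as np
import pywt

def wavelet_threshold_2d(img, wavelet="db4", level=3):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745   # finest diagonal
    thr = sigma * np.sqrt(2 * np.log(img.size))
    new = [coeffs[0]] + [tuple(pywt.threshold(d, thr, "hard")
                               for d in detail) for detail in coeffs[1:]]
    return pywt.waverec2(new, wavelet)[: img.shape[0], : img.shape[1]]

img = np.random.rand(256, 256)
cleaned = wavelet_threshold_2d(img)
```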

  9. Baseline Adaptive Wavelet Thresholding Technique for sEMG Denoising

    NASA Astrophysics Data System (ADS)

    Bartolomeo, L.; Zecca, M.; Sessa, S.; Lin, Z.; Mukaeda, Y.; Ishii, H.; Takanishi, Atsuo

    2011-06-01

    The surface electromyography (sEMG) signal is affected by different sources of noise: current technology is considerably robust to interference from the power line or cable motion artifacts, but there are still many limitations with baseline and movement artifact noise. In particular, these sources have frequency spectra that overlap the low-frequency components of the sEMG spectrum; therefore, a standard all-bandwidth filtering could alter important information. Wavelet denoising has been demonstrated to be a powerful solution for processing white Gaussian noise in biological signals. In this paper we introduce a new technique for denoising the sEMG signal: using the baseline of the signal before the task, we estimate the thresholds to apply in the wavelet thresholding procedure. The experiments were performed on ten healthy subjects, by placing the electrodes on the Extensor Carpi Ulnaris and Triceps Brachii of the right upper and lower arm, and performing a flexion and extension of the right wrist. An Inertial Measurement Unit, developed in our group, was used to recognize the movements of the hands and to segment the exercise and the pre-task baseline. Finally, we show better performance of the proposed method in terms of noise cancellation and signal distortion, quantified by a newly suggested indicator of denoising quality, compared to the standard Donoho technique.

  10. Adaptive nonlocal means filtering based on local noise level for CT denoising

    SciTech Connect

    Li, Zhoubo; Trzasko, Joshua D.; Lake, David S.; Blezek, Daniel J.; Manduca, Armando; Yu, Lifeng; Fletcher, Joel G.; McCollough, Cynthia H.

    2014-01-15

    Purpose: To develop and evaluate an image-domain noise reduction method based on a modified nonlocal means (NLM) algorithm that is adaptive to the local noise level of CT images, and to implement this method in a time frame consistent with clinical workflow. Methods: A computationally efficient technique for local noise estimation directly from CT images was developed. A forward projection, based on a 2D fan-beam approximation, was used to generate the projection data, with a noise model incorporating the effects of the bowtie filter and automatic exposure control. The noise propagation from projection data to images was analytically derived. The analytical noise map was validated using repeated scans of a phantom. A 3D NLM denoising algorithm was modified to adapt its denoising strength locally based on this noise map. The performance of this adaptive NLM filter was evaluated in phantom studies in terms of in-plane and cross-plane high-contrast spatial resolution, noise power spectrum (NPS), subjective low-contrast spatial resolution using the American College of Radiology (ACR) accreditation phantom, and objective low-contrast spatial resolution using a channelized Hotelling model observer (CHO). Graphics processing unit (GPU) implementations of the noise map calculation and the adaptive NLM filtering were developed to meet the demands of clinical workflow. Adaptive NLM was piloted on lower dose scans in clinical practice. Results: The local noise level estimate matches the noise distribution determined from multiple repeated scans of a phantom, as demonstrated by small variations in the ratio map between the analytical noise map and the one calculated from repeated scans. The phantom studies demonstrated that the adaptive NLM filter can reduce noise substantially without degrading the high-contrast spatial resolution, as illustrated by modulation transfer function and slice sensitivity profile results. The NPS results show that adaptive NLM denoising preserves the
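
    A minimal sketch of noise-adaptive NLM, assuming scikit-image and NumPy: denoise tiles with a filter strength tied to a per-tile value from a precomputed noise map. Tiling (with its seam artifacts) and the constant stand-in noise map are simplifications; the published method derives the map analytically from the projection model.

```python
# Hypothetical sketch: tile-wise NLM with locally chosen strength h.
import numpy as np
from skimage.restoration import denoise_nl_means

def adaptive_nlm(img, noise_map, tile=64):
    out = np.zeros_like(img, dtype=float)
    for i in range(0, img.shape[0], tile):
        for j in range(0, img.shape[1], tile):
            sl = np.s_[i:i + tile, j:j + tile]
            sigma = float(np.mean(noise_map[sl]))
            out[sl] = denoise_nl_means(img[sl].astype(float), patch_size=5,
                                       patch_distance=6, h=1.15 * sigma,
                                       sigma=sigma)
    return out

img = np.random.rand(256, 256)
noise_map = np.full_like(img, 0.05)     # stand-in for the analytical map
denoised = adaptive_nlm(img, noise_map)
```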

  11. A unified variational approach to denoising and bias correction in MR.

    PubMed

    Fan, Ayres; Wells, William M; Fisher, John W; Cetin, Müjdat; Haker, Steven; Mulkern, Robert; Tempany, Clare; Willsky, Alan S

    2003-07-01

    We propose a novel bias correction method for magnetic resonance (MR) imaging that uses complementary body coil and surface coil images. The former are spatially homogeneous but have low signal intensity; the latter provide excellent signal response but have large bias fields. We present a variational framework where we optimize an energy functional to estimate the bias field and the underlying image using both observed images. The energy functional contains smoothness-enforcing regularization for both the image and the bias field. We present extensions of our basic framework to a variety of imaging protocols. We solve the optimization problem using a computationally efficient numerical algorithm based on coordinate descent, preconditioned conjugate gradient, half-quadratic regularization, and multigrid techniques. We show qualitative and quantitative results demonstrating the effectiveness of the proposed method in producing debiased and denoised MR images. PMID:15344454

  12. Bayesian inference on multiscale models for poisson intensity estimation: applications to photon-limited image denoising.

    PubMed

    Lefkimmiatis, Stamatios; Maragos, Petros; Papandreou, George

    2009-08-01

    We present an improved statistical model for analyzing Poisson processes, with applications to photon-limited imaging. We build on previous work, adopting a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities (rates) in adjacent scales are modeled as mixtures of conjugate parametric distributions. Our main contributions include: 1) a rigorous and robust regularized expectation-maximization (EM) algorithm for maximum-likelihood estimation of the rate-ratio density parameters directly from the noisy observed Poisson data (counts); 2) extension of the method to work under a multiscale hidden Markov tree model (HMT) which couples the mixture label assignments in consecutive scales, thus modeling interscale coefficient dependencies in the vicinity of image edges; 3) exploration of a 2-D recursive quad-tree image representation, involving Dirichlet-mixture rate-ratio densities, instead of the conventional separable binary-tree image representation involving beta-mixture rate-ratio densities; and 4) a novel multiscale image representation, which we term Poisson-Haar decomposition, that better models the image edge structure, thus yielding improved performance. Experimental results on standard images with artificially simulated Poisson noise and on real photon-limited images demonstrate the effectiveness of the proposed techniques.

  13. Denoised and texture enhanced MVCT to improve soft tissue conspicuity

    SciTech Connect

    Sheng, Ke; Qi, Sharon X.; Gou, Shuiping; Wu, Jiaolong

    2014-10-15

    Purpose: MVCT images have been used in TomoTherapy treatment to align patients based on bony anatomies but their usefulness for soft tissue registration, delineation, and adaptive radiation therapy is limited due to insignificant photoelectric interaction components and the presence of noise resulting from low detector quantum efficiency of megavoltage x-rays. Algebraic reconstruction with sparsity regularizers as well as local denoising methods has not significantly improved the soft tissue conspicuity. The authors aim to utilize a nonlocal means denoising method and texture enhancement to recover the soft tissue information in MVCT (DeTECT). Methods: A block matching 3D (BM3D) algorithm was adapted to reduce the noise while keeping the texture information of the MVCT images. Following image denoising, a saliency map was created to further enhance visual conspicuity of low contrast structures. In this study, BM3D and saliency maps were applied to MVCT images of a CT imaging quality phantom, a head and neck patient, and four prostate patients. Following these steps, the contrast-to-noise ratios (CNRs) were quantified. Results: By applying BM3D denoising and a saliency map, postprocessed MVCT images show remarkable improvements in imaging contrast without compromising resolution. For the head and neck patient, the difficult-to-see lymph nodes and vein in the carotid space in the original MVCT image became conspicuous in DeTECT. For the prostate patients, the ambiguous boundary between the bladder and the prostate in the original MVCT was clarified. The CNRs of phantom low contrast inserts were improved from 1.48 and 3.8 to 13.67 and 16.17, respectively. The CNRs of two regions-of-interest were improved from 1.5 and 3.17 to 3.14 and 15.76, respectively, for the head and neck patient. DeTECT also increased the CNR of the prostate from 0.13 to 1.46 for the four prostate patients. The results are substantially better than a local denoising method using anisotropic diffusion

  14. Denoising Intra-voxel Axon Fiber Orientations by Means of ECQMMF Method

    NASA Astrophysics Data System (ADS)

    Ramirez-Manzanares, Alonso; Rivera, Mariano; Gee, James C.

    Diffusion weighted magnetic resonance imaging is widely used in the study of the structure of the fiber pathways in brain white matter. In this work we present a new method for denoising intra-voxel axon fiber tracks. In order to improve local (voxelwise) estimations, we use the general-purpose segmentation method called Entropy-Controlled Quadratic Markov Measure Field Models. Our proposal is capable of spatially regularizing multiple axon fiber orientations (intra-voxel orientations). In order to provide the best possible local axon orientations to our spatial regularization procedure, we evaluate two optimization methods for fitting a Diffusion Basis Function model. We present qualitative results on real human diffusion weighted MRI data where the ground truth is not available, and we quantitatively validate our results with synthetic experiments.

  15. Blind source separation based x-ray image denoising from an image sequence.

    PubMed

    Yu, Chun-Yu; Li, Yan; Fei, Bin; Li, Wei-Liang

    2015-09-01

    Blind source separation (BSS) based x-ray image denoising from an image sequence is proposed. Without a priori knowledge, the useful image signal can be separated from an x-ray image sequence, since the original images are assumed to be different combinations of a stable image signal and random image noise. BSS algorithms such as fixed-point independent component analysis and second-order statistics singular value decomposition are used and compared with multi-frame averaging, a common algorithm for improving an image's signal-to-noise ratio (SNR). Denoising performance is evaluated in terms of SNR, standard deviation, entropy, and runtime. The analysis indicates that BSS is applicable to image denoising; the denoised image's quality improves as more frames are included in the x-ray image sequence, but at greater computational cost. There is therefore a trade-off between denoising performance and runtime, i.e., the number of frames included in an image sequence should be just sufficient. PMID:26429442
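
    A minimal NumPy sketch of the second-order-statistics idea: with frames stacked as rows, the dominant singular component approximates the stable image while the remaining components carry noise, compared against plain multi-frame averaging. The frame count and noise level are illustrative.

```python
# Hypothetical sketch: SVD-based separation of the stable image signal
# from a noisy frame sequence.
import numpy as np

rng = np.random.default_rng(1)
signal = rng.random((128, 128))                      # stable underlying image
frames = signal + 0.3 * rng.standard_normal((16, 128, 128))

X = frames.reshape(16, -1)                           # one frame per row
u, s, vt = np.linalg.svd(X, full_matrices=False)
rank1 = (u[:, :1] * s[0]) @ vt[:1]                   # dominant component
bss_image = rank1.mean(axis=0).reshape(128, 128)
avg_image = X.mean(axis=0).reshape(128, 128)         # multi-frame averaging

for name, est in {"SVD": bss_image, "averaging": avg_image}.items():
    print(name, "RMSE:", np.sqrt(np.mean((est - signal) ** 2)))
```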

  17. Phase-aware candidate selection for time-of-flight depth map denoising

    NASA Astrophysics Data System (ADS)

    Hach, Thomas; Seybold, Tamara; Böttcher, Hendrik

    2015-03-01

    This paper presents a new pre-processing algorithm for Time-of-Flight (TOF) depth map denoising. Typically, denoising algorithms use the raw depth map as it comes from the sensor. Systematic artifacts due to the measurement principle are not taken into account, which degrades the denoising results. For phase-measurement TOF sensing, a major artifact is observed as salt-and-pepper noise caused by the measurement's ambiguity. Our pre-processing algorithm is able to isolate and unwrap affected pixels by exploiting the physical behavior of the capturing system, yielding Gaussian noise. Applying this pre-processing method before the denoising step clearly improves the parameter estimation for the denoising filter, together with its final results.

  18. A New Method for Nonlocal Means Image Denoising Using Multiple Images

    PubMed Central

    Wang, Xingzheng; Wang, Haoqian; Yang, Jiangfeng; Zhang, Yongbing

    2016-01-01

    The basic principle of nonlocal means is to denoise a pixel using the weighted average of the neighbourhood pixels, where the weight is decided by the similarity of these pixels. The key issue of the nonlocal means method is how to select similar patches and design their weights. There are two main contributions of this paper. The first contribution is that we use two images to denoise the pixel; these two noisy images have the same noise standard deviation. Instead of using only one image, we calculate the weight from two noisy images. After the first denoising process, we get a pre-denoised image and a residual image. The second contribution is combining the nonlocal property between the residual image and the pre-denoised image. The improved nonlocal means method pays more attention to similarity than the original one, which turns out to be very effective in eliminating Gaussian noise. Experimental results with simulated data are provided. PMID:27459293

  19. Improving Students' Ability to Intuitively Infer Resistance from Magnitude of Current and Potential Difference Information: A Functional Learning Approach

    ERIC Educational Resources Information Center

    Chasseigne, Gerard; Giraudeau, Caroline; Lafon, Peggy; Mullet, Etienne

    2011-01-01

    The study examined the knowledge of the functional relations between potential difference, magnitude of current, and resistance among seventh graders, ninth graders, 11th graders (in technical schools), and college students. It also tested the efficiency of a learning device named "functional learning" derived from cognitive psychology on the…

  20. Making Inferences: Comprehension of Physical Causality, Intentionality, and Emotions in Discourse by High-Functioning Older Children, Adolescents, and Adults with Autism

    ERIC Educational Resources Information Center

    Bodner, Kimberly E.; Engelhardt, Christopher R.; Minshew, Nancy J.; Williams, Diane L.

    2015-01-01

    Studies investigating inferential reasoning in autism spectrum disorder (ASD) have focused on the ability to make socially-related inferences or inferences more generally. Important variables for intervention planning such as whether inferences depend on physical experiences or the nature of social information have received less consideration. A…

  1. Denoising in digital speckle pattern interferometry using wave atoms.

    PubMed

    Federico, Alejandro; Kaufmann, Guillermo H

    2007-05-15

    We present an effective method for speckle noise removal in digital speckle pattern interferometry, which is based on a wave-atom thresholding technique. Wave atoms are a variant of 2D wavelet packets with a parabolic scaling relation and improve the sparse representation of fringe patterns when compared with traditional expansions. The performance of the denoising method is analyzed by using computer-simulated fringes, and the results are compared with those produced by wavelet and curvelet thresholding techniques. An application of the proposed method to reduce speckle noise in experimental data is also presented.

  2. Decoding the Role of the Insula in Human Cognition: Functional Parcellation and Large-Scale Reverse Inference

    PubMed Central

    Yarkoni, Tal; Khaw, Mel Win; Sanfey, Alan G.

    2013-01-01

    Recent work has indicated that the insula may be involved in goal-directed cognition, switching between networks, and the conscious awareness of affect and somatosensation. However, these findings have been limited by the insula’s remarkably high base rate of activation and considerable functional heterogeneity. The present study used a relatively unbiased data-driven approach combining resting-state connectivity-based parcellation of the insula with large-scale meta-analysis to understand how the insula is anatomically organized based on functional connectivity patterns as well as the consistency and specificity of the associated cognitive functions. Our findings support a tripartite subdivision of the insula and reveal that the patterns of functional connectivity in the resting-state analysis appear to be relatively conserved across tasks in the meta-analytic coactivation analysis. The function of the networks was meta-analytically “decoded” using the Neurosynth framework and revealed that while the dorsoanterior insula is more consistently involved in human cognition than ventroanterior and posterior networks, each parcellated network is specifically associated with a distinct function. Collectively, this work suggests that the insula is instrumental in integrating disparate functional systems involved in processing affect, sensory-motor processing, and general cognition and is well suited to provide an interface between feelings, cognition, and action. PMID:22437053

  3. Comparison of f2/f1 ratio functions in rabbit and gerbil: Ear-canal DPOAEs vs noninvasively inferred intracochlear DPs

    NASA Astrophysics Data System (ADS)

    Martin, Glen K.; Stagner, Barden B.; Dong, Wei; Lonsbury-Martin, Brenda L.

    2015-12-01

    The properties of distortion product otoacoustic emissions (DPOAEs), i.e., distortion products (DPs) measured in the ear canal, have been thoroughly described. However, considerably less is known about the behavior of intracochlear DPs (iDPs). Detailed comparisons of DPOAEs to iDPs would provide valuable insights on the extent to which ear-canal DPOAEs mirror iDPs. Prior studies described a technique whereby the behavior of iDPs could be inferred by interacting a probe tone (f3) with the iDP of interest to produce a "secondary" DPOAE (DPOAE′). The behavior of DPOAE′ was then used to deduce the characteristics of the iDP. In the present study, this method was used in rabbits and gerbils to simultaneously compare DPOAE f2/f1-ratio functions to their iDP counterparts. The 2f1-f2 and 2f2-f1 DPOAEs were collected with f1 and f2 primary-tone levels varied from 35-75 dB SPL, and with a 50-dB SPL f3 placed at a DP/f3 ratio of 1.25 to evoke a DPOAE′ at 2f3-(2f1-f2) or 2f3-(2f2-f1). Control experiments demonstrated little effect of the f3-probe tone on DPOAE-ratio functions. Substitution experiments were performed to determine any suppressive effects of the f1 and f2 primaries on the generation of DPOAE′, as well as to infer the intracochlear level of the iDP once the DPOAE′ was corrected for suppression. Results showed that at low primary-tone levels, 2f1-f2 DPOAE f2/f1-ratio functions peaked around f2/f1=1.25, and exhibited an inverted U-shaped function. In contrast, simultaneously measured 2f1-f2 iDP-ratio functions peaked at f2/f1≈1. Similar growth of the inferred iDP was obtained for higher-level primaries when the ratio functions were corrected for suppressive effects. At these higher levels, DPOAE-ratio functions leveled off and no longer showed the steep reduction at narrow f2/f1 ratios. Overall, noninvasive estimates of 2f1-f2 iDP-ratio functions agreed with reports of similar functions directly measured for 2f1-f2 DPs on the basilar membrane (BM) or in

  4. A method for predicting DCT-based denoising efficiency for grayscale images corrupted by AWGN and additive spatially correlated noise

    NASA Astrophysics Data System (ADS)

    Rubel, Aleksey S.; Lukin, Vladimir V.; Egiazarian, Karen O.

    2015-03-01

    Results of denoising based on the discrete cosine transform for a wide class of images corrupted by additive noise are obtained. Three types of noise are analyzed: additive white Gaussian noise and additive spatially correlated Gaussian noise with middle and high correlation levels. The TID2013 image database and some additional images are taken as test images. A conventional DCT filter and BM3D are used as denoising techniques. Denoising efficiency is described by the PSNR and PSNR-HVS-M metrics. Within the hard-thresholding denoising mechanism, DCT-spectrum coefficient statistics are used to characterize images and, subsequently, the denoising efficiency for them. Denoising efficiency results are fitted against these statistics and efficient approximations are obtained. It is shown that the obtained approximations provide high accuracy in predicting denoising efficiency.
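
    A minimal sketch of the hard-thresholding DCT filter analysed above, assuming SciPy and NumPy; non-overlapping 8x8 blocks and the conventional 2.7-sigma threshold factor are used for brevity, whereas practical DCT filters average overlapping blocks.

```python
# Hypothetical sketch: block-wise DCT hard thresholding.
import numpy as np
from scipy.fft import dctn, idctn

def dct_denoise(img, sigma, block=8):
    out = np.zeros_like(img, dtype=float)
    for i in range(0, img.shape[0] - block + 1, block):
        for j in range(0, img.shape[1] - block + 1, block):
            spec = dctn(img[i:i + block, j:j + block].astype(float),
                        norm="ortho")
            dc = spec[0, 0]
            spec[np.abs(spec) < 2.7 * sigma] = 0.0
            spec[0, 0] = dc              # always keep the DC term
            out[i:i + block, j:j + block] = idctn(spec, norm="ortho")
    return out

img = 255 * np.random.rand(64, 64)
denoised = dct_denoise(img, sigma=10.0)
```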

  5. OFMspert - Inference of operator intentions in supervisory control using a blackboard architecture. [operator function model expert system

    NASA Technical Reports Server (NTRS)

    Jones, Patricia S.; Mitchell, Christine M.; Rubin, Kenneth S.

    1988-01-01

    The authors propose an architecture for an expert system that can function as an operator's associate in the supervisory control of a complex dynamic system. Called OFMspert (operator function model (OFM) expert system), the architecture uses the operator function modeling methodology as the basis for its design. The authors put emphasis on the understanding capabilities, i.e., the intent-inferencing property, of an operator's associate. They define the generic structure of OFMspert, particularly those features that support intent inferencing. They also describe the implementation and validation of OFMspert in GT-MSOCC (Georgia Tech-Multisatellite Operations Control Center), a laboratory domain designed to support research in human-computer interaction and decision aiding in complex, dynamic systems.

  6. Vibration Sensor Data Denoising Using a Time-Frequency Manifold for Machinery Fault Diagnosis

    PubMed Central

    He, Qingbo; Wang, Xiangxiang; Zhou, Qiang

    2014-01-01

    Vibration sensor data from a mechanical system are often associated with important measurement information useful for machinery fault diagnosis. However, in practice the existence of background noise makes it difficult to identify the fault signature from the sensing data. This paper introduces the time-frequency manifold (TFM) concept into sensor data denoising and proposes a novel denoising method for reliable machinery fault diagnosis. The TFM signature reflects the intrinsic time-frequency structure of a non-stationary signal. The proposed method intends to realize data denoising by synthesizing the TFM using time-frequency synthesis and phase space reconstruction (PSR) synthesis. Due to the merits of the TFM in noise suppression and resolution enhancement, the denoised signal would have satisfactory denoising effects, as well as inherent time-frequency structure keeping. Moreover, this paper presents a clustering-based statistical parameter to evaluate the proposed method, and also presents a new diagnostic approach, called frequency probability time series (FPTS) spectral analysis, to show its effectiveness in fault diagnosis. The proposed TFM-based data denoising method has been employed to deal with a set of vibration sensor data from defective bearings, and the results verify that for machinery fault diagnosis the method is superior to two traditional denoising methods. PMID:24379045

  8. Structural Plasticity Denoises Responses and Improves Learning Speed

    PubMed Central

    Spiess, Robin; George, Richard; Cook, Matthew; Diehl, Peter U.

    2016-01-01

    Despite an abundance of computational models for learning of synaptic weights, there has been relatively little research on structural plasticity, i.e., the creation and elimination of synapses. Especially, it is not clear how structural plasticity works in concert with spike-timing-dependent plasticity (STDP) and what advantages their combination offers. Here we present a fairly large-scale functional model that uses leaky integrate-and-fire neurons, STDP, homeostasis, recurrent connections, and structural plasticity to learn the input encoding, the relation between inputs, and to infer missing inputs. Using this model, we compare the error and the amount of noise in the network's responses with and without structural plasticity and the influence of structural plasticity on the learning speed of the network. Using structural plasticity during learning shows good results for learning the representation of input values, i.e., structural plasticity strongly reduces the noise of the response by preventing spikes with a high error. For inferring missing inputs we see similar results, with responses having less noise if the network was trained using structural plasticity. Additionally, using structural plasticity with pruning significantly decreased the time to learn weights suitable for inference. Presumably, this is due to the clearer signal containing less spikes that misrepresent the desired value. Therefore, this work shows that structural plasticity is not only able to improve upon the performance using STDP without structural plasticity but also speeds up learning. Additionally, it addresses the practical problem of limited resources for connectivity that is not only apparent in the mammalian neocortex but also in computer hardware or neuromorphic (brain-inspired) hardware by efficiently pruning synapses without losing performance.

  10. Structural Plasticity Denoises Responses and Improves Learning Speed.

    PubMed

    Spiess, Robin; George, Richard; Cook, Matthew; Diehl, Peter U

    2016-01-01

    Despite an abundance of computational models for the learning of synaptic weights, there has been relatively little research on structural plasticity, i.e., the creation and elimination of synapses. In particular, it is not clear how structural plasticity works in concert with spike-timing-dependent plasticity (STDP) and what advantages their combination offers. Here we present a fairly large-scale functional model that uses leaky integrate-and-fire neurons, STDP, homeostasis, recurrent connections, and structural plasticity to learn the input encoding, the relation between inputs, and to infer missing inputs. Using this model, we compare the error and the amount of noise in the network's responses with and without structural plasticity, and the influence of structural plasticity on the learning speed of the network. Using structural plasticity during learning shows good results for learning the representation of input values, i.e., structural plasticity strongly reduces the noise of the response by preventing spikes with a high error. For inferring missing inputs we see similar results, with responses having less noise if the network was trained using structural plasticity. Additionally, using structural plasticity with pruning significantly decreased the time needed to learn weights suitable for inference, presumably because the cleaner signal contains fewer spikes that misrepresent the desired value. This work therefore shows that structural plasticity not only improves on the performance of STDP alone but also speeds up learning. Additionally, it addresses the practical problem of limited resources for connectivity, which is apparent not only in the mammalian neocortex but also in computer hardware and neuromorphic (brain-inspired) hardware, by efficiently pruning synapses without losing performance. PMID:27660610
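
    The abstract above combines several mechanisms. As a concrete illustration, the sketch below simulates a single leaky integrate-and-fire neuron with pair-based STDP plus a periodic structural-plasticity step that prunes persistently weak synapses and regrows new ones; homeostasis and recurrence are omitted, and every parameter value is an illustrative assumption rather than a setting from the paper.

```python
# Minimal sketch (not the authors' model): one LIF neuron, pair-based STDP,
# and a periodic structural-plasticity step. All values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N, T, dt = 100, 20000, 1e-3              # synapses, time steps, step (s)
tau_m, v_th = 20e-3, 1.0                 # membrane constant, spike threshold
tau_pre, a_plus, a_minus = 20e-3, 0.01, 0.012

w = rng.uniform(0.0, 0.5, N)             # synaptic weights
alive = np.ones(N, dtype=bool)           # structural state of each synapse
x_pre, x_post, v = np.zeros(N), 0.0, 0.0

for t in range(T):
    pre = (rng.random(N) < 20 * dt) & alive   # 20 Hz Poisson inputs
    v += dt / tau_m * (-v) + 0.1 * w[pre].sum()
    x_pre = x_pre * np.exp(-dt / tau_pre) + pre
    x_post *= np.exp(-dt / tau_pre)
    if v >= v_th:                        # postsynaptic spike
        v = 0.0
        w[alive] += a_plus * x_pre[alive]     # pre-before-post: LTP
        x_post += 1.0
    w[pre] -= a_minus * x_post                # post-before-pre: LTD
    np.clip(w, 0.0, 1.0, out=w)
    if t % 2000 == 1999:                 # structural plasticity step
        prune = alive & (w < 0.05)       # drop persistently weak synapses
        alive[prune] = False
        dead = np.flatnonzero(~alive)
        regrow = rng.choice(dead, size=min(prune.sum(), dead.size), replace=False)
        alive[regrow], w[regrow] = True, 0.1  # new synapses start weak

print(f"{alive.sum()}/{N} synapses survive, mean weight {w[alive].mean():.3f}")
```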

  11. Denoising Stimulated Raman Spectroscopic Images by Total Variation Minimization

    PubMed Central

    Liao, Chien-Sheng; Choi, Joon Hee; Zhang, Delong; Chan, Stanley H.; Cheng, Ji-Xin

    2016-01-01

    High-speed coherent Raman scattering imaging is opening a new avenue to unveiling the cellular machinery by visualizing the spatio-temporal dynamics of target molecules or intracellular organelles. By extracting signals from the laser at MHz modulation frequencies, current stimulated Raman scattering (SRS) microscopy has reached shot-noise-limited detection sensitivity. The laser-based local oscillator in SRS microscopy not only generates high levels of signal, but also delivers large shot noise, which degrades image quality and spectral fidelity. Here, we demonstrate a denoising algorithm that removes the noise in both the spatial and spectral domains by total variation minimization. The signal-to-noise ratio of SRS spectroscopic images was improved by up to 57 times for diluted dimethyl sulfoxide solutions and by 15 times for biological tissues. Weak Raman peaks of target molecules originally buried in the noise were revealed. Coupling the denoising algorithm with multivariate curve resolution allowed discrimination of fat stores from protein-rich organelles in C. elegans. Together, our method significantly improves detection sensitivity without frame averaging, which can be useful for in vivo spectroscopic imaging. PMID:26955400
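
    Where the record above applies total variation minimization jointly in the spatial and spectral domains, a minimal stand-in can be written with scikit-image's Chambolle TV denoiser applied to a (bands, height, width) stack; the synthetic data and the weight value are illustrative assumptions, not the authors' implementation.

```python
# A minimal stand-in for spectral-spatial denoising by total variation
# minimization, assuming scikit-image; data and weight are illustrative.
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)
clean = np.zeros((40, 64, 64))                 # (bands, height, width)
clean[15:25, 20:44, 20:44] = 1.0               # a weak Raman "peak"
noisy = clean + rng.normal(0, 0.5, clean.shape)

# Treating the band axis as a third dimension couples the spectral and
# spatial domains in a single TV penalty.
denoised = denoise_tv_chambolle(noisy, weight=0.3)
print(f"RMS error: {np.sqrt(np.mean((noisy - clean) ** 2)):.3f} -> "
      f"{np.sqrt(np.mean((denoised - clean) ** 2)):.3f}")
```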

  12. GPU-based cone-beam reconstruction using wavelet denoising

    NASA Astrophysics Data System (ADS)

    Jin, Kyungchan; Park, Jungbyung; Park, Jongchul

    2012-03-01

    The scattering-noise artifact arising from low-dose projections in repetitive cone-beam CT (CBCT) scans decreases image quality and lessens diagnostic accuracy. To improve the image quality of low-dose CT imaging, statistical filtering is effective for noise reduction. However, performing image filtering and enhancement throughout the entire reconstruction process can be challenging because of its high computational demands. The standard reconstruction algorithm for CBCT data is filtered back-projection, which for a 512×512×512 volume takes up to a few minutes on a standard system. To speed up reconstruction, the massively parallel architecture of current graphics processing units (GPUs) is a suitable platform for accelerating the mathematical calculations. In this paper, we focus on accelerating wavelet denoising and Feldkamp-Davis-Kress (FDK) back-projection using parallel processing on the GPU: we utilize the compute unified device architecture (CUDA) platform and implement CBCT reconstruction based on CUDA. Finally, we evaluate our implementation on clinical tooth data sets. The resulting implementation of wavelet denoising is able to process a 1024×1024 image within 2 ms, excluding the data-loading process, and our GPU-based CBCT implementation reconstructs a 512×512×512 volume from 400 projections in less than 1 minute.
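
    The wavelet-denoising step in such a pipeline can be sketched on the CPU as follows, assuming PyWavelets; the GPU/CUDA acceleration that is the point of the paper is out of scope here, and the db4 wavelet, decomposition level, and universal threshold are illustrative choices.

```python
# CPU sketch of per-projection wavelet denoising before FDK back-projection,
# assuming PyWavelets; wavelet, level, and threshold are illustrative.
import numpy as np
import pywt

rng = np.random.default_rng(0)
proj = rng.normal(0.0, 0.1, (1024, 1024))      # stand-in noisy projection

coeffs = pywt.wavedec2(proj, "db4", level=3)   # 2-D multilevel DWT
sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745        # robust noise estimate
thr = sigma * np.sqrt(2.0 * np.log(proj.size))            # universal threshold
den = [coeffs[0]] + [tuple(pywt.threshold(c, thr, mode="soft") for c in lvl)
                     for lvl in coeffs[1:]]
proj_dn = pywt.waverec2(den, "db4")
print(proj_dn.shape)                           # denoised projection, ready for FDK
```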

  13. Optimization of dynamic measurement of receptor kinetics by wavelet denoising.

    PubMed

    Alpert, Nathaniel M; Reilhac, Anthonin; Chio, Tat C; Selesnick, Ivan

    2006-04-01

    The most important technical limitation affecting dynamic measurements with PET is low signal-to-noise ratio (SNR). Several reports have suggested that wavelet processing of receptor kinetic data in the human brain can improve the SNR of parametric images of binding potential (BP). However, it is difficult to fully assess these reports because objective standards have not been developed to measure the tradeoff between accuracy (e.g. degradation of resolution) and precision. This paper employs a realistic simulation method that includes all major elements affecting image formation. The simulation was used to derive an ensemble of dynamic PET ligand (11C-raclopride) experiments that was subjected to wavelet processing. A method for optimizing wavelet denoising is presented and used to analyze the simulated experiments. Using optimized wavelet denoising, SNR of the four-dimensional PET data increased by about a factor of two and SNR of three-dimensional BP maps increased by about a factor of 1.5. Analysis of the difference between the processed and unprocessed means for the 4D concentration data showed that more than 80% of voxels in the ensemble mean of the wavelet processed data deviated by less than 3%. These results show that a 1.5x increase in SNR can be achieved with little degradation of resolution. This corresponds to injecting about twice the radioactivity, a maneuver that is not possible in human studies without saturating the PET camera and/or exposing the subject to more than permitted radioactivity.

  14. Sparsity-based Poisson denoising with dictionary learning.

    PubMed

    Giryes, Raja; Elad, Michael

    2014-12-01

    The problem of Poisson denoising appears in various imaging applications, such as low-light photography, medical imaging, and microscopy. In cases of high SNR, several transformations exist to convert the Poisson noise into additive, independent and identically distributed Gaussian noise, for which many effective algorithms are available. However, in a low-SNR regime, these transformations are significantly less accurate, and a strategy that relies directly on the true noise statistics is required. Salmon et al. took this route, proposing a patch-based exponential image representation model based on a Gaussian mixture model, leading to state-of-the-art results. In this paper, we propose to harness sparse-representation modeling of the image patches, adopting the same exponential idea. Our scheme uses a greedy pursuit with a bootstrapping-based stopping condition and dictionary learning within the denoising process. The reconstruction performance of the proposed scheme is competitive with leading methods at high SNR, and achieves state-of-the-art results in cases of low SNR. PMID:25312930
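
    The variance-stabilizing route mentioned above can be made concrete with the Anscombe transform, which maps Poisson counts to approximately unit-variance Gaussian data; the quick check below shows the approximation holding at high counts and degrading at low counts, which is exactly the low-SNR regime the paper targets.

```python
# The Anscombe transform A(x) = 2*sqrt(x + 3/8) maps Poisson counts to
# roughly unit-variance Gaussian data; its accuracy collapses at low counts.
import numpy as np

def anscombe(x):
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

rng = np.random.default_rng(0)
for peak in (50.0, 1.0):                       # high- vs low-SNR regime
    counts = rng.poisson(peak, 100000)
    print(f"mean count {peak}: stabilized std = {anscombe(counts).std():.3f}"
          " (ideal: 1.000)")
```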

  15. Microarray image enhancement by denoising using stationary wavelet transform.

    PubMed

    Wang, X H; Istepanian, Robert S H; Song, Yong Hua

    2003-12-01

    Microarray imaging is considered an important tool for large-scale analysis of gene expression. The accuracy of the gene expression measurements depends on the experiment itself and on further image processing. It is well known that noise introduced during the experiment greatly affects their accuracy, and how to eliminate its effect constitutes a challenging problem in microarray analysis. Traditionally, statistical methods are used to estimate the noise while the microarray images are being processed. In this paper, we present a new approach to dealing with the noise inherent in the microarray image processing procedure: denoising the images before further processing using the stationary wavelet transform (SWT). The translation-invariant characteristic of the SWT is particularly useful in image denoising. Testing on sample microarray images shows enhanced image quality. The results also show superior performance compared with the conventional discrete wavelet transform and the widely used adaptive Wiener filter.
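
    A minimal sketch of shift-invariant denoising with the stationary wavelet transform, assuming PyWavelets, might look as follows; the wavelet, level, and threshold are illustrative, not the study's settings.

```python
# Shift-invariant denoising with the stationary wavelet transform (SWT),
# assuming PyWavelets; wavelet, level, and threshold are illustrative.
import numpy as np
import pywt

rng = np.random.default_rng(0)
img = np.zeros((256, 256))
img[96:160, 96:160] = 1.0                       # a bright spot
noisy = img + rng.normal(0, 0.2, img.shape)

level = 2                                       # sides must divide by 2**level
coeffs = pywt.swt2(noisy, "db2", level=level)   # undecimated transform
den = [(cA, tuple(pywt.threshold(d, 0.4, mode="soft") for d in details))
       for cA, details in coeffs]
restored = pywt.iswt2(den, "db2")
print(f"MSE: {np.mean((noisy - img) ** 2):.4f} -> "
      f"{np.mean((restored - img) ** 2):.4f}")
```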

  16. Noise distribution and denoising of current density images

    PubMed Central

    Beheshti, Mohammadali; Foomany, Farbod H.; Magtibay, Karl; Jaffray, David A.; Krishnan, Sridhar; Nanthakumar, Kumaraswamy; Umapathy, Karthikeyan

    2015-01-01

    Current density imaging (CDI) is a magnetic resonance (MR) imaging technique that can be used to study current pathways inside tissue. The current distribution is measured indirectly as phase changes. The inherent noise in the MR imaging technique degrades the accuracy of the phase measurements, leading to imprecise current variations; the outcome can be affected significantly, especially at a low signal-to-noise ratio (SNR). We show that the residual noise distribution of the phase is Gaussian-like and that the noise in CDI images can be approximated as Gaussian, a finding that matches experimental results. We further investigated this finding through a comparative analysis with denoising techniques, using two CDI datasets with two different currents (20 and 45 mA). We found that the block-matching and three-dimensional (BM3D) technique outperforms other techniques when applied to the current density (J). The minimum gain in noise power by BM3D applied to J, compared with the next best technique in the analysis, was found to be around 2 dB per pixel. We characterize the noise profile in CDI images and provide insights into the performance of different denoising techniques when applied at two different stages of current density reconstruction. PMID:26158100

  17. HARDI denoising using nonlocal means on S2

    NASA Astrophysics Data System (ADS)

    Kuurstra, Alan; Dolui, Sudipto; Michailovich, Oleg

    2012-02-01

    Diffusion MRI (dMRI) is a unique imaging modality for in vivo delineation of the anatomical structure of white matter in the brain. In particular, high angular resolution diffusion imaging (HARDI) is a specific instance of dMRI which is known to excel in the detection of multiple neural fibers within a single voxel. Unfortunately, the angular resolution of HARDI is known to be inversely proportional to SNR, which makes the problem of denoising HARDI data of particular practical importance. Since HARDI signals are effectively band-limited, denoising can be accomplished by means of linear filtering. However, the spatial dependency of diffusivity in brain tissue makes it impossible to find a single set of linear filter parameters that is optimal for all types of diffusion signals; hence, adaptive filtering is required. In this paper, we propose a new type of non-local means (NLM) filtering which possesses the required adaptivity. As opposed to similar methods in the field, however, the proposed NLM filtering is applied in the spherical domain of spatial orientations. Moreover, the filter uses an original definition of adaptive weights, which are designed to be invariant both to spatial rotations and to the particular sampling scheme in use. We provide a detailed description of the proposed filtering procedure and its efficient implementation, together with experimental results on synthetic data. We demonstrate that our filter has substantially better adaptivity than a number of alternative methods.
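
    The core non-local means idea referenced above can be illustrated in one dimension: each sample is replaced by a weighted average of all samples, with weights w_ij = exp(-||P_i - P_j||^2 / h^2) computed from patch similarity. The toy below only demonstrates this weighting; the paper's filter operates on the sphere S2 with rotation- and sampling-invariant weights, which is not reproduced here.

```python
# Toy 1-D non-local means: weights from patch similarity,
# w_ij = exp(-||P_i - P_j||^2 / h^2).
import numpy as np

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 200))
x = clean + rng.normal(0, 0.3, 200)

r, h = 3, 0.8                                   # patch radius, bandwidth
pad = np.pad(x, r, mode="reflect")
patches = np.stack([pad[i:i + 2 * r + 1] for i in range(x.size)])
d2 = ((patches[:, None, :] - patches[None, :, :]) ** 2).mean(axis=2)
w = np.exp(-d2 / h ** 2)                        # patch-similarity weights
x_nlm = (w @ x) / w.sum(axis=1)
print(f"residual std: {np.std(x - clean):.3f} -> {np.std(x_nlm - clean):.3f}")
```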

  18. Why Are "Dunkels" Sticky? Preschoolers Infer Functionality and Intentional Creation for Artifact Properties Learned from Generic Language

    ERIC Educational Resources Information Center

    Cimpian, Andrei; Cadena, Cristina

    2010-01-01

    Artifacts pose a potential learning problem for children because the mapping between their features and their functions is often not transparent. In solving this problem, children are likely to rely on a number of information sources (e.g., others' actions, affordances). We argue that children's sensitivity to nuances in the language used to…

  19. Automatic Denoising and Unmixing in Hyperspectral Image Processing

    NASA Astrophysics Data System (ADS)

    Peng, Honghong

    This thesis addresses two important aspects of hyperspectral image processing: automatic denoising and unmixing. The first part of the thesis is devoted to a novel automatic optimized vector bilateral filter denoising algorithm, while the remainder concerns nonnegative matrix factorization with deterministic annealing for unsupervised unmixing of remote sensing hyperspectral images. The need for automatic hyperspectral image processing has been driven by the development of potent hyperspectral systems, with hundreds of narrow contiguous bands spanning the visible to the long-wave infrared range of the electromagnetic spectrum. Due to the large volume of raw data generated by such sensors, automatic processing in the hyperspectral image processing chain is preferred, to minimize human workload and achieve optimal results. Two of the most researched processing steps in this automation effort are hyperspectral image denoising, an important preprocessing step for almost all remote sensing tasks, and unsupervised unmixing, which decomposes the pixel spectra into a collection of endmember spectral signatures and their corresponding abundance fractions. Two new methodologies are introduced in this thesis to tackle the automatic processing problems described above. Vector bilateral filtering has been shown to provide a good tradeoff between noise removal and edge degradation when applied to multispectral/hyperspectral image denoising. It has also been demonstrated to provide dynamic range enhancement of bands with impaired signal-to-noise ratios. Typical vector bilateral filtering usage does not employ parameters that have been determined to satisfy optimality criteria. This thesis therefore introduces an approach for selecting the parameters of a vector bilateral filter through an optimization procedure rather than by ad hoc means. The approach is based on posing the filtering problem as one of nonlinear estimation and minimizing Stein's unbiased risk estimate (SURE).

  20. The application study of wavelet packet transformation in the de-noising of dynamic EEG data.

    PubMed

    Li, Yifeng; Zhang, Lihui; Li, Baohui; Wei, Xiaoyang; Yan, Guiding; Geng, Xichen; Jin, Zhao; Xu, Yan; Wang, Haixia; Liu, Xiaoyan; Lin, Rong; Wang, Quan

    2015-01-01

    This paper briefly describes the basic principle of wavelet packet analysis and, on this basis, introduces the general principle of wavelet packet transformation for signal de-noising. Dynamic EEG data recorded under +Gz acceleration were de-noised using the wavelet packet transformation, and the de-noising effects obtained with different thresholds were compared. The study verifies the validity and practical value of the wavelet packet threshold method for de-noising dynamic EEG data under +Gz acceleration. PMID:26405863
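
    A minimal sketch of wavelet-packet threshold de-noising for a 1-D EEG-like trace, assuming PyWavelets, is shown below; the decomposition depth and threshold value are illustrative assumptions, not the study's settings.

```python
# Wavelet-packet threshold de-noising of a 1-D EEG-like trace, assuming
# PyWavelets; depth and threshold are illustrative.
import numpy as np
import pywt

rng = np.random.default_rng(0)
fs = 256
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, t.size)  # 10 Hz alpha + noise

wp = pywt.WaveletPacket(data=eeg, wavelet="db4", mode="symmetric", maxlevel=4)
for node in wp.get_level(4, order="natural"):
    node.data = pywt.threshold(node.data, value=0.3, mode="soft")
denoised = wp.reconstruct(update=False)[: eeg.size]
print(f"std: {eeg.std():.3f} -> {denoised.std():.3f}")
```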

  1. Using fMRI non-local means denoising to uncover activation in sub-cortical structures at 1.5 T for guided HARDI tractography

    PubMed Central

    Bernier, Michaël; Chamberland, Maxime; Houde, Jean-Christophe; Descoteaux, Maxime; Whittingstall, Kevin

    2014-01-01

    In recent years, there has been ever-increasing interest in combining functional magnetic resonance imaging (fMRI) and diffusion magnetic resonance imaging (dMRI) for better understanding the link between cortical activity and connectivity, respectively. However, it is challenging to detect and validate fMRI activity in key sub-cortical areas such as the thalamus, given that they are prone to susceptibility artifacts due to the partial volume effects (PVE) of surrounding tissues (the GM/WM interface). This is especially true on relatively low-field clinical MR systems (e.g., 1.5 T). We propose to overcome this limitation by using a spatial denoising technique used in structural MRI and, more recently, in diffusion MRI: non-local means (NLM) denoising, which uses a patch-based approach to suppress the noise locally. To test this, we measured fMRI in 20 healthy subjects performing three block-based tasks: eyes-open/closed (EOC) and left/right finger tapping (FTL, FTR). Overall, we found that NLM yielded more thalamic activity compared to traditional denoising methods. In order to validate our pipeline, we also investigated known structural connectivity going through the thalamus using HARDI tractography: the optic radiations, related to the EOC task, and the cortico-spinal tract (CST) for FTL and FTR. To do so, we reconstructed the tracts using functionally based thalamic and cortical ROIs to initiate tractography seeds in a two-level coarse-to-fine fashion. We applied this method at the single-subject level, which allowed us to see the structural connections underlying fMRI thalamic activity. In summary, we propose a new fMRI processing pipeline which uses a recent spatial denoising technique (NLM) to successfully detect sub-cortical activity, validated using an advanced dMRI seeding strategy in single subjects at 1.5 T. PMID:25309391

  2. A Function for Representing the Biological Challenge to Respiration Posed by Ocean Acidification and the Geochemical Consequences Inferred

    NASA Astrophysics Data System (ADS)

    Peltzer, E. T.; Brewer, P. G.

    2008-12-01

    Increasing levels of dissolved total CO2 in the ocean from the invasion of fossil fuel CO2 via the atmosphere are widely believed to pose challenges to marine life on several fronts. This is most often expressed as a concern about the resulting lower pH and its impact on calcification in marine organisms (coral reefs, calcareous phytoplankton, etc.). These concerns are real, but calcification is by no means the only process affected, nor is the fossil fuel CO2 signal the only geochemical driver of the rapidly emerging deep-sea biological stress. Physical climate change is reducing deep-sea ventilation rates, thereby leading to increasing oxygen deficits and concomitant increases in respiratory CO2. We seek to understand the combined effects of the downward penetration of the fossil fuel signal and the emergence of the depleted-O2/increased respiratory CO2 signal at depth. As a first step, we provide a simple function to capture the changing oceanic state. The most basic thermodynamic equation for the functioning of marine animals can be written as Corg + O2 → CO2, and this yields the simple Gibbs free energy equation ΔG° = −RT · ln([fCO2]/([Corg]·[fO2])), in which the ratio of pO2 to pCO2 emerges as the dominant factor. From this we construct a simple Respiration Index, RI = log10(pO2/pCO2), which is linear in energy, and we map this function for key oceanic regions, illustrating the expansion of oceanic dead zones. The formal thermodynamic limit for aerobic life is RI = 0; in practice, field data show that at RI ≈ 0.7 microbes turn to electron acceptors other than O2, and denitrification begins to occur. This likely represents the lowest limit for the long-term functioning of higher animals, and the zone RI = 0.7 to 1 appears to present challenges to the basic functioning of many marine species. In addition, there are large regions of the ocean where denitrification already occurs, and these zones will expand greatly in size as the combined
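
    The Respiration Index defined above is simple enough to compute directly; the partial-pressure values below are invented for illustration and are not data from the study.

```python
# Worked example of the Respiration Index RI = log10(pO2/pCO2);
# the pressure values are invented for illustration.
import math

def respiration_index(pO2, pCO2):
    return math.log10(pO2 / pCO2)

print(respiration_index(200.0, 0.5))   # ~2.60: ample aerobic margin
print(respiration_index(5.0, 2.0))     # ~0.40: below the ~0.7 denitrification onset
```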

  3. Lithospheric Shear Velocity Models Beneath Continental Margins in Antarctica Inferred From Genetic Algorithm Inversion for Teleseismic Receiver Functions

    NASA Astrophysics Data System (ADS)

    Kanao, M.; Shibutani, T.

    2005-12-01

    Seismic shear velocity models of the crust and uppermost mantle were studied by teleseismic receiver function analysis beneath permanent stations of the Federation of Digital Seismographic Networks (FDSN) at Antarctic continental margins. In order to eliminate the dependency on the starting model, a non-linear genetic algorithm (GA) was introduced into the time-domain inversion of the receiver functions. A multitude of velocity models with an acceptable fit to the receiver function waveforms were generated during the inversion, and a stable model was produced by taking a weighted average of the best 1,000 models encountered in the course of the GA. The shear velocity model beneath MAW (67.6S, 62.9E) has a sharp Moho boundary at 44 km depth that might have been involved in a reworking metamorphic event of the adjacent Archaean Napier Complex. A fairly sharp Moho was identified at about 28 km depth beneath DRV (66.7S, 140.0E), with a middle-grade variation of the crustal velocities that might have been caused by the Early Proterozoic metamorphism. A similarly sharp Moho was found at 40 km beneath SYO (69.0S, 39.6E); this Moho depth is consistent with that from refraction/wide-angle reflection surveys around the station. The fairly complicated velocity variations within the crust may be related to the lithology of granulite-facies metamorphic rocks in the shallow crust associated with Pan-African events. The broad low-velocity zones at about 30 km depth, with a transitional crust-mantle boundary, at VNDA (77.5S, 161.9E) might be caused by the rift system beside the Transantarctic Mountains. As for the Antarctic Peninsula, a very broad Moho was found at around 36 km depth near PMSA (64.8S, 64.0W). The evidence of velocity variations within the crust reflects the tectonic histories of the terrains where these permanent stations are located.

  4. Making inferences: Comprehension of physical causality, intentionality, and emotions in discourse by high-functioning older children, adolescents, and adults with autism

    PubMed Central

    Bodner, Kimberly E.; Engelhardt, Christopher R.; Minshew, Nancy J.

    2015-01-01

    Studies investigating inferential reasoning in autism spectrum disorder (ASD) have focused on the ability to make socially-related inferences or inferences more generally. Important variables for intervention planning such as whether inferences depend on physical experience or the nature of social information have received less consideration. A measure of bridging inferences of physical causation, mental states, and emotional states was administered to older children, adolescents, and adults with and without ASD. The ASD group had more difficulty making inferences, particularly related to emotional understanding. Results suggest that individuals with ASD may not have the stored experiential knowledge that specific inferences depend upon or have difficulties accessing relevant experiences due to linguistic limitations. Further research is needed to tease these elements apart. PMID:25821925

  5. Making Inferences: Comprehension of Physical Causality, Intentionality, and Emotions in Discourse by High-Functioning Older Children, Adolescents, and Adults with Autism.

    PubMed

    Bodner, Kimberly E; Engelhardt, Christopher R; Minshew, Nancy J; Williams, Diane L

    2015-09-01

    Studies investigating inferential reasoning in autism spectrum disorder (ASD) have focused on the ability to make socially-related inferences or inferences more generally. Important variables for intervention planning such as whether inferences depend on physical experiences or the nature of social information have received less consideration. A measure of bridging inferences of physical causation, mental states, and emotional states was administered to older children, adolescents, and adults with and without ASD. The ASD group had more difficulty making inferences, particularly related to emotional understanding. Results suggest that individuals with ASD may not have the stored experiential knowledge that specific inferences depend upon or have difficulties accessing relevant experiences due to linguistic limitations. Further research is needed to tease these elements apart.

  6. Inferences regarding the diet of extinct hominins: structural and functional trends in dental and mandibular morphology within the hominin clade.

    PubMed

    Lucas, Peter W; Constantino, Paul J; Wood, Bernard A

    2008-04-01

    This contribution investigates the evolution of diet in the Pan-Homo and hominin clades. It does this by focusing on 12 variables (nine dental and three mandibular) for which data are available about extant chimpanzees, modern humans and most extinct hominins. Previous analyses of this type have approached the interpretation of dental and gnathic function by focusing on the identification of the food consumed (i.e. fruits, leaves, etc.) rather than on the physical properties (i.e. hardness, toughness, etc.) of those foods, and they have not specifically addressed the role that the physical properties of foods play in determining dental adaptations. We take the available evidence for the 12 variables, and set out what the expression of each of those variables is in extant chimpanzees, the earliest hominins, archaic hominins, megadont archaic hominins, and an inclusive grouping made up of transitional hominins and pre-modern Homo. We then present hypotheses about what the states of these variables would be in the last common ancestor of the Pan-Homo clade and in the stem hominin. We review the physical properties of food and suggest how these physical properties can be used to investigate the functional morphology of the dentition. We show what aspects of anterior tooth morphology are critical for food preparation (e.g. peeling fruit) prior to its ingestion, which features of the postcanine dentition (e.g. overall and relative size of the crowns) are related to the reduction in the particle size of food, and how information about the macrostructure (e.g. enamel thickness) and microstructure (e.g. extent and location of enamel prism decussation) of the enamel cap might be used to make predictions about the types of foods consumed by extinct hominins. Specifically, we show how thick enamel can protect against the generation and propagation of cracks in the enamel that begin at the enamel-dentine junction and move towards the outer enamel surface.

  7. Inferences regarding the diet of extinct hominins: structural and functional trends in dental and mandibular morphology within the hominin clade

    PubMed Central

    Lucas, Peter W; Constantino, Paul J; Wood, Bernard A

    2008-01-01

    This contribution investigates the evolution of diet in the Pan–Homo and hominin clades. It does this by focusing on 12 variables (nine dental and three mandibular) for which data are available about extant chimpanzees, modern humans and most extinct hominins. Previous analyses of this type have approached the interpretation of dental and gnathic function by focusing on the identification of the food consumed (i.e. fruits, leaves, etc.) rather than on the physical properties (i.e. hardness, toughness, etc.) of those foods, and they have not specifically addressed the role that the physical properties of foods play in determining dental adaptations. We take the available evidence for the 12 variables, and set out what the expression of each of those variables is in extant chimpanzees, the earliest hominins, archaic hominins, megadont archaic hominins, and an inclusive grouping made up of transitional hominins and pre-modern Homo. We then present hypotheses about what the states of these variables would be in the last common ancestor of the Pan–Homo clade and in the stem hominin. We review the physical properties of food and suggest how these physical properties can be used to investigate the functional morphology of the dentition. We show what aspects of anterior tooth morphology are critical for food preparation (e.g. peeling fruit) prior to its ingestion, which features of the postcanine dentition (e.g. overall and relative size of the crowns) are related to the reduction in the particle size of food, and how information about the macrostructure (e.g. enamel thickness) and microstructure (e.g. extent and location of enamel prism decussation) of the enamel cap might be used to make predictions about the types of foods consumed by extinct hominins. Specifically, we show how thick enamel can protect against the generation and propagation of cracks in the enamel that begin at the enamel–dentine junction and move towards the outer enamel surface. PMID:18380867

  8. Functional Inference of Methylenetetrahydrofolate Reductase Gene Polymorphisms on Enzyme Stability as a Potential Risk Factor for Down Syndrome in Croatia

    PubMed Central

    Vraneković, Jadranka; Babić Božović, Ivana; Starčević Čizmarević, Nada; Buretić-Tomljanović, Alena; Ristić, Smiljana; Petrović, Oleg; Kapović, Miljenko; Brajenović-Milić, Bojana

    2010-01-01

    Understanding the biochemical structure and function of the methylenetetrahydrofolate reductase gene (MTHFR) provides new evidence for elucidating the risk of having a child with Down syndrome (DS) in association with two common MTHFR polymorphisms, C677T and A1298C. The aim of this study was to evaluate the risk for DS according to the presence of the MTHFR C677T and A1298C polymorphisms as well as the stability of the enzyme configuration. This study included mothers from Croatia with a liveborn DS child (n = 102) or DS pregnancy (n = 9) and mothers with a healthy child (n = 141). MTHFR C677T and A1298C polymorphisms were assessed by PCR-RFLP. Differences in allele/genotype frequencies were determined using the χ2 test. Odds ratios and 95% confidence intervals were calculated to evaluate the effects of the different alleles/genotypes. No statistically significant differences were found between the frequencies of alleles/genotypes or genotype combinations of the MTHFR C677T and A1298C polymorphisms in the case and control groups. Additionally, the observed frequencies of the stable (677CC/1298AA, 677CC/1298AC, 677CC/1298CC) and unstable (677CT/1298AA, 677CT/1298AC, 677TT/1298AA) enzyme configurations were not significantly different. We found no evidence to support the possibility that MTHFR polymorphisms and the stability of the enzyme configurations are associated with the risk of having a child with DS in the Croatian population. PMID:20592453
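
    The odds-ratio computation used in case-control analyses like this one follows a standard closed form; the sketch below shows it with invented 2x2 counts, not the study's data.

```python
# Odds ratio and 95% CI from a 2x2 table; counts are invented.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b: carrier/non-carrier cases; c, d: carrier/non-carrier controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)     # SE of ln(OR)
    return or_, math.exp(math.log(or_) - z * se), math.exp(math.log(or_) + z * se)

or_, lo, hi = odds_ratio_ci(40, 71, 45, 96)
print(f"OR = {or_:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")   # CI straddles 1: no association
```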

  9. Seismic Imaging Beneath the Kanto Plain, Japan, Inferred from S-wavevector Receiver Functions Obtained at Virtual Subsurface Receivers

    NASA Astrophysics Data System (ADS)

    Murakoshi, T.; Takenaka, H.

    2013-12-01

    This study describes seismic images of the crust and uppermost mantle beneath the Kanto plain, Japan, obtained by S-wavevector receiver function (SWV-RF) analysis at subsurface receivers. The SWV-RF is the time series obtained by deconvolving the upgoing SV-wave component with the upgoing P-wave component. The method for ground-surface records was originally introduced by Reading et al. (2003, GRL). For deep borehole and/or ocean-bottom records, Takenaka and Murakoshi (2010, AGU) proposed the SWV-RF at a subsurface station, which is obtained from the seismograms observed at that station using the structure model from the surface down to the receiver level. This method has the great advantage of overcoming a problem that otherwise blurs seismic images beneath very thick sedimentary basins: surface records include strong reverberations within the sedimentary layers. Takenaka and Murakoshi (2012, AGU) applied the method to teleseismic waveform records observed not only at deep borehole stations but also at shallow borehole and ground-surface stations in the Kanto plain, Japan. To obtain clear and continuous seismic images, we increased the number of events used for the SWV-RFs over the period from April 2004 to July 2013, to almost three times the number in Takenaka and Murakoshi (2012, AGU). We will show the three-dimensional seismic features of the crustal and deeper structures beneath the Kanto plain, Japan, derived from vertical cross-sections of the depth-converted SWV-RFs.
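
    At the heart of any receiver-function method is a deconvolution of one component by another; a generic frequency-domain, water-level-stabilized version on synthetic traces is sketched below. This illustrates the operation only and is not the authors' processing chain.

```python
# Generic water-level deconvolution on synthetic traces: divide the SV
# spectrum by the P spectrum, stabilized by a spectral floor.
import numpy as np

rng = np.random.default_rng(0)
n = 1024
p = np.zeros(n); p[100] = 1.0                   # impulsive P arrival
rf_true = np.zeros(n); rf_true[[0, 60, 140]] = [0.5, 0.2, 0.1]  # converted phases
sv = np.convolve(p, rf_true)[:n] + rng.normal(0, 0.01, n)

P, SV = np.fft.rfft(p), np.fft.rfft(sv)
floor = 0.01 * np.max(np.abs(P) ** 2)           # water level
rf = np.fft.irfft(SV * np.conj(P) / np.maximum(np.abs(P) ** 2, floor), n)
print("largest peaks at samples:", sorted(np.argsort(rf)[-3:]))
```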

  10. Denoising human cardiac diffusion tensor magnetic resonance images using sparse representation combined with segmentation.

    PubMed

    Bao, L J; Zhu, Y M; Liu, W Y; Croisille, P; Pu, Z B; Robini, M; Magnin, I E

    2009-03-21

    Cardiac diffusion tensor magnetic resonance imaging (DT-MRI) is noise sensitive, and the noise can induce numerous systematic errors in subsequent parameter calculations. This paper proposes a sparse representation-based method for denoising cardiac DT-MRI images. The method first generates a dictionary of multiple bases according to the features of the observed image. A segmentation algorithm based on a nonstationary degree detector is then introduced so that the selection of atoms from the dictionary adapts to the image's features. Denoising is achieved by gradually approximating the underlying image using the atoms selected from the generated dictionary. The results on both simulated images and real cardiac DT-MRI images from ex vivo human hearts show that the proposed denoising method performs better than conventional denoising techniques, preserving image contrast and fine structures. PMID:19218737
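
    The sparse-representation machinery described above can be approximated with off-the-shelf tools: the sketch below learns a single dictionary from noisy patches and reconstructs from sparse codes, assuming scikit-learn; the paper's multi-basis dictionary and segmentation-adaptive atom selection are not reproduced.

```python
# Patch-based sparse-representation denoising with a learned dictionary,
# assuming scikit-learn; a single dictionary stands in for the paper's
# multi-basis, segmentation-adaptive scheme.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)

rng = np.random.default_rng(0)
img = np.kron(rng.random((8, 8)), np.ones((8, 8)))    # 64x64 piecewise-flat image
noisy = img + rng.normal(0, 0.1, img.shape)

patches = extract_patches_2d(noisy, (6, 6))
X = patches.reshape(len(patches), -1)
mean = X.mean(axis=1, keepdims=True)
dico = MiniBatchDictionaryLearning(n_components=50, alpha=1.0,
                                   transform_algorithm="omp",
                                   transform_n_nonzero_coefs=3, random_state=0)
code = dico.fit(X - mean).transform(X - mean)         # sparse approximation
recon = (code @ dico.components_ + mean).reshape(patches.shape)
restored = reconstruct_from_patches_2d(recon, img.shape)
print(f"MSE: {np.mean((noisy - img) ** 2):.4f} -> "
      f"{np.mean((restored - img) ** 2):.4f}")
```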

  11. Forecasting performance of denoising signal by Wavelet and Fourier Transforms using SARIMA model

    NASA Astrophysics Data System (ADS)

    Ismail, Mohd Tahir; Mamat, Siti Salwana; Hamzah, Firdaus Mohamad; Karim, Samsul Ariffin Abdul

    2014-07-01

    The goal of this research is to determine the forecasting performance achieved on denoised signals. Monthly rainfall and the monthly number of rain days over a duration of 20 years (1990-2009) from the Bayan Lepas station are used as the case study. The Fast Fourier Transform (FFT) and the Wavelet Transform (WT) are used to obtain the denoised signals. The denoised data obtained by the FFT and WT are then analyzed with a seasonal ARIMA model, and the best-fitting model is determined by the minimum value of the MSE. The results indicate that the Wavelet Transform is an effective method for denoising the monthly rainfall and number-of-rain-days signals compared with the Fast Fourier Transform.
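
    The FFT side of such a comparison can be sketched in a few lines: transform the series, zero the high-frequency coefficients, and invert. The synthetic series and cutoff below are illustrative assumptions, and the subsequent seasonal ARIMA fit (e.g., via statsmodels) is not shown.

```python
# Fourier denoising of a monthly series: keep low frequencies, invert.
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(240)                               # 20 years of monthly data
rain = 100 + 40 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 15, 240)

spec = np.fft.rfft(rain)
spec[30:] = 0.0                                       # illustrative cutoff
rain_dn = np.fft.irfft(spec, n=240)
print(f"removed-component std: {np.std(rain - rain_dn):.1f}")
# The denoised series would then be fitted with a seasonal ARIMA model
# (e.g., statsmodels' SARIMAX) and competing models compared by MSE.
```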

  12. Exploiting the self-similarity in ERP images by nonlocal means for single-trial denoising.

    PubMed

    Strauss, Daniel J; Teuber, Tanja; Steidl, Gabriele; Corona-Strauss, Farah I

    2013-07-01

    Event-related potentials (ERPs) represent a noninvasive and widely available means to analyze neural correlates of sensory and cognitive processing. Recent developments in neural and cognitive engineering have proposed completely new application fields for this well-established measurement technique when advanced single-trial processing is used. We have recently shown that 2-D diffusion filtering methods from image processing can be used for the denoising of ERP single-trials in matrix representations, also called ERP images. In contrast to conventional 1-D transient ERP denoising techniques, the 2-D restoration of ERP images allows for an integration of regularities over multiple stimulations into the denoising process. Advanced anisotropic image restoration methods may require directional information for the ERP denoising process, especially if a priori knowledge about possible traces in ERP images is lacking. However, due to the use of event-related experimental paradigms, ERP images are characterized by a high degree of self-similarity over the individual trials. In this paper, we propose the simple and easy-to-apply nonlocal means method for ERP image denoising in order to exploit this self-similarity rather than focusing on the edge-based extraction of directional information. Using measured and simulated ERP data, we compare our method to conventional approaches in ERP denoising. It is concluded that the self-similarity in ERP images can be exploited for single-trial ERP denoising by the proposed approach. This method might be promising for a variety of evoked and event-related potential applications, including nonstationary paradigms such as changing exogenous stimulus characteristics or endogenous states during the experiment. As presented, the proposed approach is for the a posteriori denoising of single-trial sequences.
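
    A minimal version of NLM denoising applied to an ERP image (trials x time) can be written with scikit-image, as below; the simulated evoked component and the filter settings are illustrative assumptions.

```python
# Non-local means on an ERP image (trials x time), assuming scikit-image;
# the simulated evoked component and settings are illustrative.
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
erp = np.exp(-((t - 0.3) ** 2) / 0.002)               # evoked component
trials = erp + rng.normal(0, 0.8, (60, 256))          # highly self-similar rows

sigma = estimate_sigma(trials)
den = denoise_nl_means(trials, h=0.8 * sigma, sigma=sigma,
                       patch_size=5, patch_distance=11)
print(f"MSE per trial: {np.mean((trials - erp) ** 2):.3f} -> "
      f"{np.mean((den - erp) ** 2):.3f}")
```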

  13. [An improved wavelet threshold algorithm for ECG denoising].

    PubMed

    Liu, Xiuling; Qiao, Lei; Yang, Jianli; Dong, Bin; Wang, Hongrui

    2014-06-01

    Owing to acquisition conditions and environmental factors, electrocardiogram (ECG) signals are usually contaminated by noise in the course of signal acquisition, so eliminating this noise is crucial for intelligent ECG analysis. On the basis of the wavelet transform, the threshold parameters were improved and a more appropriate threshold expression was proposed. The discrete wavelet coefficients were processed using the improved threshold parameters, and the denoised signal was reconstructed through the inverse discrete wavelet transform, preserving more of the original signal's coefficients. The MIT-BIH arrhythmia database was used to validate the method. Simulation results showed that the improved method achieves a better denoising effect than traditional ones. PMID:25219225
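
    Papers in this line typically propose a compromise between hard and soft thresholding; the function below is one generic such compromise (continuous like soft thresholding, but shrinking large coefficients less), offered as an illustration rather than the exact expression from this paper.

```python
# A generic compromise threshold: continuous like soft thresholding but
# shrinking large coefficients less; not the paper's exact expression.
import numpy as np
import pywt

def improved_threshold(c, thr, alpha=0.5):
    """alpha=1 recovers soft thresholding, alpha=0 approaches hard."""
    return np.where(np.abs(c) > thr, np.sign(c) * (np.abs(c) - alpha * thr), 0.0)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
ecg = np.zeros_like(t)
for b in np.arange(0.1, 1.0, 0.15):                   # crude R-peak train
    ecg += np.exp(-((t - b) ** 2) / 2e-5)
noisy = ecg + rng.normal(0, 0.1, t.size)

coeffs = pywt.wavedec(noisy, "db6", level=5)
thr = np.median(np.abs(coeffs[-1])) / 0.6745 * np.sqrt(2 * np.log(t.size))
den = [coeffs[0]] + [improved_threshold(c, thr) for c in coeffs[1:]]
rec = pywt.waverec(den, "db6")[: t.size]
print(f"MSE: {np.mean((noisy - ecg) ** 2):.5f} -> {np.mean((rec - ecg) ** 2):.5f}")
```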

  14. Non-local mean denoising in diffusion tensor space

    PubMed Central

    SU, BAIHAI; LIU, QIANG; CHEN, JIE; WU, XI

    2014-01-01

    The aim of the present study was to present a novel non-local mean (NLM) method to denoise diffusion tensor imaging (DTI) data in the tensor space. Compared with the original NLM method, which uses intensity similarity to weigh the voxel, the proposed method weighs the voxel using tensor similarity measures in the diffusion tensor space. Euclidean distance with rotational invariance, and Riemannian distance and Log-Euclidean distance with affine invariance were implemented to compare the geometric and orientation features of the diffusion tensor comprehensively. The accuracy and efficacy of the proposed novel NLM method using these three similarity measures in DTI space, along with unbiased novel NLM in diffusion-weighted image space, were compared quantitatively and qualitatively in the present study. PMID:25009599
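
    The Log-Euclidean distance mentioned above has a compact closed form, d(A, B) = ||log(A) - log(B)||_F for symmetric positive-definite tensors; the sketch below computes it with SciPy's matrix logarithm for two invented diffusion tensors.

```python
# Log-Euclidean distance between SPD diffusion tensors,
# d(A, B) = ||logm(A) - logm(B)||_F, via SciPy; tensors are invented.
import numpy as np
from scipy.linalg import logm

def log_euclidean_distance(A, B):
    return np.linalg.norm(logm(A) - logm(B), ord="fro")

A = np.diag([1.7e-3, 0.3e-3, 0.3e-3])   # strongly anisotropic tensor (mm^2/s)
B = np.diag([0.9e-3, 0.8e-3, 0.7e-3])   # nearly isotropic tensor
print(f"d_LE(A, B) = {log_euclidean_distance(A, B):.3f}")
# In an NLM scheme, this distance replaces the intensity difference when
# forming the voxel weights, w = exp(-d_LE(A, B)**2 / h**2).
```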

  15. A novel de-noising method for B ultrasound images

    NASA Astrophysics Data System (ADS)

    Tian, Da-Yong; Mo, Jia-qing; Yu, Yin-Feng; Lv, Xiao-Yi; Yu, Xiao; Jia, Zhen-Hong

    2015-12-01

    B-mode ultrasound is a kind of ultrasonic imaging that has become an indispensable diagnostic method in clinical medicine. However, the presence of speckle noise in ultrasound images greatly reduces image quality and interferes with the accuracy of diagnosis. Constructing a method that eliminates speckle noise effectively while preserving image details is therefore the target of current ultrasonic image de-noising research. This paper aims to remove the inherent speckle noise of B ultrasound images. The proposed novel algorithm is based on both wavelet transformation and data fusion of B ultrasound images, and achieves a smaller mean squared error (MSE) and greater signal-to-noise ratio (SNR) than the compared algorithms. The method effectively removes speckle noise from B ultrasound images while preserving details and edge information well, producing better visual results.

  16. Denoised Wigner distribution deconvolution via low-rank matrix completion.

    PubMed

    Lee, Justin; Barbastathis, George

    2016-09-01

    Wigner distribution deconvolution (WDD) is a decades-old method for recovering phase from intensity measurements. Although the technique offers an elegant linear solution to the quadratic phase retrieval problem, it has seen limited adoption due to its high computational/memory requirements and the fact that the technique often exhibits high noise sensitivity. Here, we propose a method for noise suppression in WDD via low-rank noisy matrix completion. Our technique exploits the redundancy of an object's phase space to denoise its WDD reconstruction. We show in model calculations that our technique outperforms other WDD algorithms as well as modern iterative methods for phase retrieval such as ptychography. Our results suggest that a class of phase retrieval techniques relying on regularized direct inversion of ptychographic datasets (instead of iterative reconstruction techniques) can provide accurate quantitative phase information in the presence of high levels of noise. PMID:27607616
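
    The low-rank principle behind this approach can be illustrated without the completion machinery: a matrix whose clean part is low-rank is denoised by truncating its SVD. Rank selection and missing-entry handling, which proper matrix completion adds, are omitted in the sketch below.

```python
# Low-rank denoising by truncated SVD; rank selection and missing-entry
# handling from full matrix completion are omitted.
import numpy as np

rng = np.random.default_rng(0)
U, V = rng.normal(size=(200, 5)), rng.normal(size=(5, 200))
clean = U @ V                                   # rank-5 "phase space" matrix
noisy = clean + rng.normal(0, 0.5, clean.shape)

u, s, vt = np.linalg.svd(noisy, full_matrices=False)
k = 5                                           # assumed rank
denoised = (u[:, :k] * s[:k]) @ vt[:k]
print(f"relative error: noisy {np.linalg.norm(noisy - clean) / np.linalg.norm(clean):.3f}, "
      f"denoised {np.linalg.norm(denoised - clean) / np.linalg.norm(clean):.3f}")
```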

  17. Simultaneous Denoising, Deconvolution, and Demixing of Calcium Imaging Data.

    PubMed

    Pnevmatikakis, Eftychios A; Soudry, Daniel; Gao, Yuanjun; Machado, Timothy A; Merel, Josh; Pfau, David; Reardon, Thomas; Mu, Yu; Lacefield, Clay; Yang, Weijian; Ahrens, Misha; Bruno, Randy; Jessell, Thomas M; Peterka, Darcy S; Yuste, Rafael; Paninski, Liam

    2016-01-20

    We present a modular approach for analyzing calcium imaging recordings of large neuronal ensembles. Our goal is to simultaneously identify the locations of the neurons, demix spatially overlapping components, and denoise and deconvolve the spiking activity from the slow dynamics of the calcium indicator. Our approach relies on a constrained nonnegative matrix factorization that expresses the spatiotemporal fluorescence activity as the product of a spatial matrix that encodes the spatial footprint of each neuron in the optical field and a temporal matrix that characterizes the calcium concentration of each neuron over time. This framework is combined with a novel constrained deconvolution approach that extracts estimates of neural activity from fluorescence traces, to create a spatiotemporal processing algorithm that requires minimal parameter tuning. We demonstrate the general applicability of our method by applying it to in vitro and in vivo multi-neuronal imaging data, whole-brain light-sheet imaging data, and dendritic imaging data. PMID:26774160
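
    The spatial x temporal factorization at the core of the approach can be illustrated with plain NMF from scikit-learn, as below; the constraints, calcium-dynamics deconvolution, and demixing logic of the actual method are not reproduced, and the toy movie is an invented example.

```python
# Plain NMF factorization of a toy fluorescence movie into spatial
# footprints and temporal traces, assuming scikit-learn; the constrained,
# deconvolving version of the paper is not reproduced.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_pixels, n_frames, n_cells = 400, 300, 3
A = rng.random((n_pixels, n_cells)) ** 8              # sparse spatial footprints
C = np.abs(rng.normal(0, 1, (n_cells, n_frames))).cumsum(axis=1) * 0.01
Y = A @ C + 0.01 * rng.random((n_pixels, n_frames))   # movie = space x time

model = NMF(n_components=n_cells, init="nndsvda", max_iter=500, random_state=0)
A_hat = model.fit_transform(Y)                        # spatial matrix
C_hat = model.components_                             # temporal matrix
print(f"reconstruction error: {model.reconstruction_err_:.3f}")
```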

  18. Wavelet denoising of multiframe optical coherence tomography data

    PubMed Central

    Mayer, Markus A.; Borsdorf, Anja; Wagner, Martin; Hornegger, Joachim; Mardin, Christian Y.; Tornow, Ralf P.

    2012-01-01

    We introduce a novel speckle noise reduction algorithm for OCT images. Contrary to existing approaches, the algorithm does not rely on simple averaging of multiple image frames or on denoising the final averaged image. Instead, it uses wavelet decompositions of the single frames for a local noise and structure estimation. Based on this analysis, the wavelet detail coefficients are weighted, averaged, and reconstructed. At a signal-to-noise gain of about 100%, we observe only a minor sharpness decrease, measured as a full-width-half-maximum reduction of 10.5%. While a similar signal-to-noise gain would require averaging of 29 frames, we achieve this result using only 8 frames as input to the algorithm. A possible application of the proposed algorithm is preprocessing in retinal structure segmentation algorithms, to allow a better differentiation between real tissue information and unwanted speckle noise. PMID:22435103

  19. MRI noise estimation and denoising using non-local PCA.

    PubMed

    Manjón, José V; Coupé, Pierrick; Buades, Antonio

    2015-05-01

    This paper proposes a novel method for MRI denoising that exploits both the sparseness and self-similarity properties of MR images. The proposed method is a two-stage approach that first filters the noisy image using a non-local PCA thresholding strategy, automatically estimating the local noise level present in the image, and second uses this filtered image as a guide image within a rotationally invariant non-local means filter. The method internally estimates the amount of local noise present in the images, which enables applying it automatically to images with spatially varying noise levels and also locally correcting the bias induced by Rician noise. The proposed approach has been compared with related state-of-the-art methods, showing competitive results in all the studied cases. PMID:25725303

  20. Locally adaptive bilateral clustering for universal image denoising

    NASA Astrophysics Data System (ADS)

    Toh, K. K. V.; Mat Isa, N. A.

    2012-12-01

    This paper presents a novel and efficient locally adaptive denoising method based on clustering pixels into regions of similar geometric and radiometric structure. Clustering is performed by adaptively segmenting pixels in the local kernel based on their augmented variational series. Noise pixels are then restored by selectively considering the radiometric and spatial properties of every pixel in the formed clusters. The proposed method is exceedingly robust in conveying reliable local structural information, even in the presence of noise. As a result, it substantially outperforms other state-of-the-art methods in terms of image restoration and computational cost. We support our claims with ample simulated and real data experiments. The relatively fast runtime in extensive simulations also suggests that the proposed method is suitable for a variety of image-based products, whether embedded in image-capturing devices or applied as image-enhancement software.

  1. Hybrid de-noising approach for fiber optic gyroscopes combining improved empirical mode decomposition and forward linear prediction algorithms.

    PubMed

    Shen, Chong; Cao, Huiliang; Li, Jie; Tang, Jun; Zhang, Xiaoming; Shi, Yunbo; Yang, Wei; Liu, Jun

    2016-03-01

    A noise reduction algorithm based on an improved empirical mode decomposition (EMD) and forward linear prediction (FLP) is proposed for the fiber optic gyroscope (FOG). Referred to as the EMD-FLP algorithm, it was developed to decompose the FOG outputs into a number of intrinsic mode functions (IMFs) after which mode manipulations are performed to select noise-only IMFs, mixed IMFs, and residual IMFs. The FLP algorithm is then employed to process the mixed IMFs, from which the refined IMFs components are reconstructed to produce the final de-noising results. This hybrid approach is applied to, and verified using, both simulated signals and experimental FOG outputs. The results from the applications show that the method eliminates noise more effectively than the conventional EMD or FLP methods and decreases the standard deviations of the FOG outputs after de-noising from 0.17 to 0.026 under sweep frequency vibration and from 0.22 to 0.024 under fixed frequency vibration. PMID:27036770
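
    A stripped-down version of the EMD stage can be sketched with the third-party PyEMD package (installable as EMD-signal), which is an assumption here; the FLP refinement of the mixed IMFs is replaced by simply discarding the fastest, typically noise-dominated IMFs.

```python
# EMD-based de-noising sketch, assuming the PyEMD package
# (pip install EMD-signal); the FLP stage is replaced by simply
# discarding the fastest IMFs.
import numpy as np
from PyEMD import EMD

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
drift = 0.5 * np.sin(2 * np.pi * 2 * t)          # slow gyro signal
raw = drift + rng.normal(0, 0.2, t.size)         # plus sensor noise

imfs = EMD().emd(raw)                            # intrinsic mode functions
denoised = imfs[2:].sum(axis=0)                  # drop the two fastest IMFs
print(f"std: raw {raw.std():.3f} -> de-noised {denoised.std():.3f}")
```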

  2. Patch-based and multiresolution optimum bilateral filters for denoising images corrupted by Gaussian noise

    NASA Astrophysics Data System (ADS)

    Kishan, Harini; Seelamantula, Chandra Sekhar

    2015-09-01

    We propose optimal bilateral filtering techniques for Gaussian noise suppression in images. To achieve maximum denoising performance via optimal filter parameter selection, we adopt Stein's unbiased risk estimate (SURE), an unbiased estimate of the mean-squared error (MSE). Unlike the MSE, SURE is independent of the ground truth and can be used in practical scenarios where the ground truth is unavailable. In our recent work, we derived SURE expressions in the context of the bilateral filter and proposed the SURE-optimal bilateral filter (SOBF), whose optimal parameters are selected using the SURE criterion. To further improve the denoising performance of SOBF, we propose variants of SOBF, namely, the SURE-optimal multiresolution bilateral filter (SMBF), which involves optimal bilateral filtering in a wavelet framework, and the SURE-optimal patch-based bilateral filter (SPBF), where the bilateral filter parameters are optimized on small image patches. Using SURE guarantees automated parameter selection. The multiresolution and localized denoising in SMBF and SPBF, respectively, yield superior denoising performance when compared with the globally optimal SOBF. Experimental validations and comparisons show that the proposed denoisers perform on par with some state-of-the-art denoising techniques.
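
    Bilateral parameter selection can be illustrated by a grid search, as below with scikit-image; for brevity the search is scored against the ground truth (an oracle MSE), whereas SURE, whose derivation for the bilateral filter is beyond this sketch, removes exactly that dependence.

```python
# Bilateral parameter selection by grid search, assuming scikit-image;
# scored here against ground truth (oracle MSE), which SURE would replace.
import numpy as np
from skimage.restoration import denoise_bilateral

rng = np.random.default_rng(0)
img = np.zeros((96, 96)); img[24:72, 24:72] = 1.0
noisy = np.clip(img + rng.normal(0, 0.15, img.shape), 0, 1)

results = []
for sc in (0.05, 0.1, 0.2):                      # sigma_color candidates
    for ss in (1, 2, 4):                         # sigma_spatial candidates
        out = denoise_bilateral(noisy, sigma_color=sc, sigma_spatial=ss)
        results.append((np.mean((out - img) ** 2), sc, ss))
mse, sc, ss = min(results)
print(f"best sigma_color={sc}, sigma_spatial={ss}, MSE={mse:.5f}")
```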

  3. Inferring novel lncRNA-disease associations based on a random walk model of a lncRNA functional similarity network.

    PubMed

    Sun, Jie; Shi, Hongbo; Wang, Zhenzhen; Zhang, Changjian; Liu, Lin; Wang, Letian; He, Weiwei; Hao, Dapeng; Liu, Shulin; Zhou, Meng

    2014-08-01

    Accumulating evidence demonstrates that long non-coding RNAs (lncRNAs) play important roles in the development and progression of complex human diseases, and predicting novel human lncRNA-disease associations is a challenging and urgently needed task, especially at a time when increasing amounts of lncRNA-related biological data are available. In this study, we propose a global network-based computational framework, RWRlncD, to infer potential human lncRNA-disease associations by implementing the random walk with restart method on a lncRNA functional similarity network. The performance of RWRlncD was evaluated on experimentally verified lncRNA-disease associations, using leave-one-out cross-validation. We achieved an area under the ROC curve of 0.822, demonstrating the excellent performance of RWRlncD; significantly, the performance is robust to different parameter selections. Highly ranked predicted lncRNA-disease associations in case studies of prostate cancer and Alzheimer's disease were manually confirmed by literature mining, providing evidence of the good performance and potential value of the RWRlncD method in predicting lncRNA-disease associations.
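
    The random walk with restart at the center of such frameworks iterates p(t+1) = (1 - r) * W * p(t) + r * p0 to convergence, with W a column-normalized similarity matrix and p0 seeding the known disease-associated lncRNAs; the toy network below is illustrative and is not the RWRlncD network.

```python
# Random walk with restart on a toy lncRNA similarity network:
# p <- (1 - r) * W @ p + r * p0, with W column-normalized.
import numpy as np

A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)   # functional similarity graph
W = A / A.sum(axis=0)                          # column-normalized transitions

r = 0.7                                        # restart probability
p0 = np.array([1.0, 0, 0, 0, 0])               # seed: known associated lncRNA
p = p0.copy()
for _ in range(200):
    p_new = (1 - r) * W @ p + r * p0
    if np.abs(p_new - p).sum() < 1e-12:
        break
    p = p_new
print("relevance scores:", p.round(4))         # used to rank candidates
```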

  4. Convergence analysis of a finite element skull model of Herpestes javanicus (Carnivora, Mammalia): implications for robust comparative inferences of biomechanical function.

    PubMed

    Tseng, Zhijie Jack; Flynn, John J

    2015-01-21

    biomechanical attributes from these simulations are used to infer form-function linkage. PMID:25445190

  5. An efficient method for nonnegatively constrained Total Variation-based denoising of medical images corrupted by Poisson noise.

    PubMed

    Landi, G; Piccolomini, E Loli

    2012-01-01

    Medical images obtained with emission processes are corrupted by noise of Poisson type. In this paper, the denoising problem is modeled in a Bayesian statistical setting by a nonnegatively constrained minimization problem, where the objective function consists of a data-fitting term, the Kullback-Leibler divergence, plus a regularization term, the Total Variation function, weighted by a regularization parameter. The aim of the paper is to propose an efficient numerical method for the solution of this constrained problem. The method is a Newton projection method in which the inner system is solved by the Conjugate Gradient method, preconditioned and implemented efficiently for this specific application. Numerical results on simulated and real medical images prove the effectiveness of the method, both in accuracy and in computational cost.

  6. Hardware Design and Implementation of a Wavelet De-Noising Procedure for Medical Signal Preprocessing

    PubMed Central

    Chen, Szi-Wen; Chen, Yuan-Ho

    2015-01-01

    In this paper, a discrete wavelet transform (DWT) based de-noising method and its application to noise reduction for medical signal preprocessing are introduced. This work focuses on the hardware realization of a real-time wavelet de-noising procedure. The proposed de-noising circuit mainly consists of three modules: a DWT module, a thresholding module, and an inverse DWT (IDWT) module. We also propose a novel adaptive thresholding scheme, incorporated into our wavelet de-noising procedure. Performance was then evaluated on both the software and hardware architectural designs. In addition, the de-noising circuit was implemented by downloading the Verilog codes to a field programmable gate array (FPGA) based platform so that its ability in noise reduction could be further validated in actual practice. Simulation results obtained by applying a set of simulated noise-contaminated electrocardiogram (ECG) signals to the de-noising circuit showed that the circuit not only desirably meets the requirement of real-time processing, but also achieves satisfactory noise reduction while well preserving the sharp features of the ECG signals. The proposed de-noising circuit was further synthesized using the Synopsys Design Compiler with an Artisan Taiwan Semiconductor Manufacturing Company (TSMC, Hsinchu, Taiwan) 40 nm standard cell library. The integrated circuit (IC) synthesis simulation results showed that the proposed design achieves a clock frequency of 200 MHz with a power consumption of only 17.4 mW when operated at 200 MHz. PMID:26501290

  7. Hardware design and implementation of a wavelet de-noising procedure for medical signal preprocessing.

    PubMed

    Chen, Szi-Wen; Chen, Yuan-Ho

    2015-01-01

    In this paper, a discrete wavelet transform (DWT) based de-noising method and its application to noise reduction for medical signal preprocessing are introduced. This work focuses on the hardware realization of a real-time wavelet de-noising procedure. The proposed de-noising circuit mainly consists of three modules: a DWT module, a thresholding module, and an inverse DWT (IDWT) module. We also propose a novel adaptive thresholding scheme, incorporated into our wavelet de-noising procedure. Performance was then evaluated on both the software and hardware architectural designs. In addition, the de-noising circuit was implemented by downloading the Verilog codes to a field programmable gate array (FPGA) based platform so that its ability in noise reduction could be further validated in actual practice. Simulation results obtained by applying a set of simulated noise-contaminated electrocardiogram (ECG) signals to the de-noising circuit showed that the circuit not only desirably meets the requirement of real-time processing, but also achieves satisfactory noise reduction while well preserving the sharp features of the ECG signals. The proposed de-noising circuit was further synthesized using the Synopsys Design Compiler with an Artisan Taiwan Semiconductor Manufacturing Company (TSMC, Hsinchu, Taiwan) 40 nm standard cell library. The integrated circuit (IC) synthesis simulation results showed that the proposed design achieves a clock frequency of 200 MHz with a power consumption of only 17.4 mW when operated at 200 MHz. PMID:26501290

  9. Inferring horizontal gene transfer.

    PubMed

    Ravenhall, Matt; Škunca, Nives; Lassalle, Florent; Dessimoz, Christophe

    2015-05-01

    Horizontal or Lateral Gene Transfer (HGT or LGT) is the transmission of portions of genomic DNA between organisms through a process decoupled from vertical inheritance. In the presence of HGT events, different fragments of the genome are the result of different evolutionary histories. This can therefore complicate the investigations of evolutionary relatedness of lineages and species. Also, as HGT can bring into genomes radically different genotypes from distant lineages, or even new genes bearing new functions, it is a major source of phenotypic innovation and a mechanism of niche adaptation. For example, of particular relevance to human health is the lateral transfer of antibiotic resistance and pathogenicity determinants, leading to the emergence of pathogenic lineages. Computational identification of HGT events relies upon the investigation of sequence composition or evolutionary history of genes. Sequence composition-based ("parametric") methods search for deviations from the genomic average, whereas evolutionary history-based ("phylogenetic") approaches identify genes whose evolutionary history significantly differs from that of the host species. The evaluation and benchmarking of HGT inference methods typically rely upon simulated genomes, for which the true history is known. On real data, different methods tend to infer different HGT events, and as a result it can be difficult to ascertain all but simple and clear-cut HGT events. PMID:26020646

  10. Inferring Horizontal Gene Transfer

    PubMed Central

    Lassalle, Florent; Dessimoz, Christophe

    2015-01-01

    Horizontal or Lateral Gene Transfer (HGT or LGT) is the transmission of portions of genomic DNA between organisms through a process decoupled from vertical inheritance. In the presence of HGT events, different fragments of the genome are the result of different evolutionary histories. This can therefore complicate the investigations of evolutionary relatedness of lineages and species. Also, as HGT can bring into genomes radically different genotypes from distant lineages, or even new genes bearing new functions, it is a major source of phenotypic innovation and a mechanism of niche adaptation. For example, of particular relevance to human health is the lateral transfer of antibiotic resistance and pathogenicity determinants, leading to the emergence of pathogenic lineages [1]. Computational identification of HGT events relies upon the investigation of sequence composition or evolutionary history of genes. Sequence composition-based ("parametric") methods search for deviations from the genomic average, whereas evolutionary history-based ("phylogenetic") approaches identify genes whose evolutionary history significantly differs from that of the host species. The evaluation and benchmarking of HGT inference methods typically rely upon simulated genomes, for which the true history is known. On real data, different methods tend to infer different HGT events, and as a result it can be difficult to ascertain all but simple and clear-cut HGT events. PMID:26020646
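
    A "parametric" screen in the sense described above can be prototyped in a few lines: flag genomic windows whose composition deviates strongly from the genome-wide average. The sketch below uses GC content with an illustrative window size and 3-sigma cutoff; real tools use richer composition statistics (e.g., k-mer frequencies).

        import numpy as np

        def gc_outlier_windows(genome: str, window: int = 5000, z_cut: float = 3.0):
            """Return (start, end, gc) for windows with anomalous GC content."""
            seq = np.frombuffer(genome.upper().encode(), dtype="S1")
            gc = np.isin(seq, [b"G", b"C"]).astype(float)
            n = len(gc) // window
            win_gc = gc[: n * window].reshape(n, window).mean(axis=1)
            z = (win_gc - win_gc.mean()) / win_gc.std()
            return [(i * window, (i + 1) * window, float(win_gc[i]))
                    for i in np.flatnonzero(np.abs(z) > z_cut)]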

  11. Inferring biotic interactions from proxies.

    PubMed

    Morales-Castilla, Ignacio; Matias, Miguel G; Gravel, Dominique; Araújo, Miguel B

    2015-06-01

    Inferring biotic interactions from functional, phylogenetic and geographical proxies remains a major challenge in ecology. We propose a conceptual framework to infer the backbone of biotic interaction networks within regional species pools. First, interacting groups are identified to order links and remove forbidden interactions between species. Second, additional links are removed by examining the geographical context in which species co-occur. Third, hypotheses are proposed to establish interaction probabilities between species. We illustrate the framework using published food webs in terrestrial and marine systems. We conclude that preliminary descriptions of the web of life can be made by careful integration of data with theory.
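
    The three filtering steps of the framework translate naturally into set operations. The toy sketch below renders them on a three-species pool; the group rules, co-occurrence records and the placeholder probability are invented for illustration only.

        import itertools

        species_group = {"fox": "predator", "hare": "prey", "grass": "plant"}
        allowed = {("predator", "prey"), ("prey", "plant")}   # step 1: group rules
        co_occurs = {("fox", "hare"), ("hare", "grass")}      # step 2: geography

        links = []
        for a, b in itertools.permutations(species_group, 2):
            if (species_group[a], species_group[b]) in allowed and (a, b) in co_occurs:
                links.append((a, b, 0.5))                     # step 3: prior probability
        print(links)  # [('fox', 'hare', 0.5), ('hare', 'grass', 0.5)]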

  12. Multiadaptive Bionic Wavelet Transform: Application to ECG Denoising and Baseline Wandering Reduction

    NASA Astrophysics Data System (ADS)

    Sayadi, Omid; Shamsollahi, Mohammad B.

    2007-12-01

    We present a new modified wavelet transform, called the multiadaptive bionic wavelet transform (MABWT), that can be applied to ECG signals in order to remove noise from them under a wide range of noise variations. By using the definition of the bionic wavelet transform and adaptively determining both the center frequency of each scale and the T-function, the problem of desired signal decomposition is solved. Applying a newly proposed thresholding rule works successfully in denoising the ECG. Moreover, by using the multiadaptation scheme, lowpass noisy interference effects on the baseline of the ECG are removed as a direct task. The method was extensively tested with real and simulated ECG signals, showing high noise-reduction performance, comparable to that of the wavelet transform (WT). Quantitative evaluation of the proposed algorithm shows that the average SNR improvement of MABWT is 1.82 dB more than the WT-based results in the best case. The procedure has also proved largely advantageous over wavelet-based methods for baseline wandering cancellation, including both DC components and baseline drifts.

  13. Gaussian mixture model-based gradient field reconstruction for infrared image detail enhancement and denoising

    NASA Astrophysics Data System (ADS)

    Zhao, Fan; Zhao, Jian; Zhao, Wenda; Qu, Feng

    2016-05-01

    Infrared images are characterized by low signal-to-noise ratio and low contrast. Edge details are therefore easily immersed in the background and noise, making infrared image edge-detail enhancement and denoising difficult. This article proposes a novel method of Gaussian mixture model-based gradient field reconstruction, which enhances image edge details while suppressing noise. First, by analyzing the gradient histogram of the noisy infrared image, a Gaussian mixture model is adopted to fit the distribution of the gradient histogram, dividing the image information into three parts corresponding to faint details, noise, and the edges of clear targets, respectively. Then, a piecewise function is constructed based on the characteristics of the image to increase the gradients of faint details and suppress the gradients of noise. Finally, an anisotropic diffusion constraint is added while visualizing the enhanced image from the transformed gradient field to further suppress noise. The experimental results show that the method effectively enhances infrared image edge details while suppressing noise, compared with the existing methods. In addition, it can be used to effectively enhance other types of images, such as visible and medical images.
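
    The first step above (fitting a three-component mixture to the gradient-magnitude distribution and splitting pixels into faint-detail, noise and strong-edge classes) can be sketched with scikit-learn. Reading the components off their ordered means follows the abstract; everything else here is an illustrative assumption.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def gradient_classes(image, seed=0):
            gy, gx = np.gradient(image.astype(float))
            mag = np.hypot(gx, gy).reshape(-1, 1)
            gmm = GaussianMixture(n_components=3, random_state=seed).fit(mag)
            labels = gmm.predict(mag).reshape(image.shape)
            # Relabel components so 0/1/2 correspond to increasing mean gradient
            # (read as faint details < noise < clear edges, per the abstract).
            order = np.argsort(gmm.means_.ravel())
            remap = np.empty(3, dtype=int)
            remap[order] = np.arange(3)
            return remap[labels]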

  14. The use of ensemble empirical mode decomposition as a novel denoising technique

    NASA Astrophysics Data System (ADS)

    Gaci, Said; Hachay, Olga; Zaourar, Naima

    2016-04-01

    Denoising is of high importance in geophysical data processing. This paper suggests a new denoising technique based on ensemble empirical mode decomposition (EEMD) and compares it with discrete wavelet transform (DWT) thresholding. First, both methods were implemented on synthetic signals with diverse waveforms ('blocks', 'heavy sine', 'Doppler', and 'mishmash'). The EEMD denoising method proved the most efficient for the 'blocks', 'heavy sine' and 'mishmash' signals for all considered signal-to-noise ratio (SNR) values. However, the results obtained using DWT thresholding were the most reliable for the 'Doppler' signal, and the difference between the mean square error (MSE) values of the two methods is slight and shrinks as the SNR decreases. Second, the denoising methods were applied to real seismic traces recorded in the Algerian Sahara, where the proposed technique outperformed DWT thresholding. In conclusion, the EEMD technique can provide a powerful tool for denoising seismic signals.
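
    A common EEMD denoising recipe, matching the idea above, is to decompose the signal into intrinsic mode functions (IMFs) and discard the first, noisiest ones before summing the remainder. The sketch assumes the third-party PyEMD package (pip package "EMD-signal"); the number of IMFs to drop and the trial count are tuning choices.

        import numpy as np
        from PyEMD import EEMD  # third-party: pip install EMD-signal

        def eemd_denoise(signal, drop_imfs=2, trials=100):
            eemd = EEMD(trials=trials)
            imfs = eemd.eemd(np.asarray(signal, dtype=float))
            # IMFs are ordered from highest to lowest frequency, so the first
            # few carry most of the noise; sum the rest as the denoised signal.
            return imfs[drop_imfs:].sum(axis=0)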

  15. Edge-preserving image denoising via group coordinate descent on the GPU

    PubMed Central

    McGaffin, Madison G.; Fessler, Jeffrey A.

    2015-01-01

    Image denoising is a fundamental operation in image processing, and its applications range from the direct (photographic enhancement) to the technical (as a subproblem in image reconstruction algorithms). In many applications, the number of pixels has continued to grow, while the serial execution speed of computational hardware has begun to stall. New image processing algorithms must exploit the power offered by massively parallel architectures like graphics processing units (GPUs). This paper describes a family of image denoising algorithms well-suited to the GPU. The algorithms iteratively perform a set of independent, parallel one-dimensional pixel-update subproblems. To match GPU memory limitations, they perform these pixel updates in-place and only store the noisy data, denoised image, and problem parameters. The algorithms can handle a wide range of edge-preserving roughness penalties, including differentiable convex penalties and anisotropic total variation (TV). Both algorithms use the majorize-minimize (MM) framework to solve the one-dimensional pixel update subproblem. Results from a large 2D image denoising problem and a 3D medical imaging denoising problem demonstrate that the proposed algorithms converge rapidly in terms of both iteration and run-time. PMID:25675454

  16. Total variation-regularized weighted nuclear norm minimization for hyperspectral image mixed denoising

    NASA Astrophysics Data System (ADS)

    Wu, Zhaojun; Wang, Qiang; Wu, Zhenghua; Shen, Yi

    2016-01-01

    Many nuclear norm minimization (NNM)-based methods have been proposed for hyperspectral image (HSI) mixed denoising due to the low-rank (LR) characteristics of clean HSI. However, the NNM-based methods regularize each eigenvalue equally, which is unsuitable for the denoising problem, where each eigenvalue carries a specific physical meaning and should be regularized differently. Moreover, the NNM-based methods only exploit the high spectral correlation, while ignoring the local structure of the HSI, resulting in spatial distortions. To address these problems, a total variation (TV)-regularized weighted nuclear norm minimization (TWNNM) method is proposed. To obtain the desired denoising performance, two issues are addressed. First, to exploit the high spectral correlation, the HSI is restricted to be LR, and different eigenvalues are minimized with different weights based on the WNNM. Second, to preserve the local structure of the HSI, the TV regularization is incorporated, and the alternating direction method of multipliers is used to solve the resulting optimization problem. Both simulated and real data experiments demonstrate that the proposed TWNNM approach produces superior denoising results for the mixed noise case in comparison with several state-of-the-art denoising methods.
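
    The core numerical step in WNNM-style methods is a weighted singular-value shrinkage: large singular values (signal) receive small weights and are barely shrunk, small ones (noise) receive large weights and are suppressed. A minimal sketch, with the common inverse-proportional weighting as an assumption:

        import numpy as np

        def weighted_svt(X, c=1.0, eps=1e-6):
            """Weighted singular value thresholding of a matrix X."""
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            w = c / (s + eps)                  # big singular value -> small weight
            s_shrunk = np.maximum(s - w, 0.0)  # shrink each value by its weight
            return (U * s_shrunk) @ Vt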

  17. Edge-preserving image denoising via group coordinate descent on the GPU.

    PubMed

    McGaffin, Madison Gray; Fessler, Jeffrey A

    2015-04-01

    Image denoising is a fundamental operation in image processing, and its applications range from the direct (photographic enhancement) to the technical (as a subproblem in image reconstruction algorithms). In many applications, the number of pixels has continued to grow, while the serial execution speed of computational hardware has begun to stall. New image processing algorithms must exploit the power offered by massively parallel architectures like graphics processing units (GPUs). This paper describes a family of image denoising algorithms well-suited to the GPU. The algorithms iteratively perform a set of independent, parallel 1D pixel-update subproblems. To match GPU memory limitations, they perform these pixel updates in-place and only store the noisy data, denoised image, and problem parameters. The algorithms can handle a wide range of edge-preserving roughness penalties, including differentiable convex penalties and anisotropic total variation. Both algorithms use the majorize-minimize framework to solve the 1D pixel update subproblem. Results from a large 2D image denoising problem and a 3D medical imaging denoising problem demonstrate that the proposed algorithms converge rapidly in terms of both iteration and run-time.

  18. Evaluation of Wavelet Denoising Methods for Small-Scale Joint Roughness Estimation Using Terrestrial Laser Scanning

    NASA Astrophysics Data System (ADS)

    Bitenc, M.; Kieffer, D. S.; Khoshelham, K.

    2015-08-01

    The precision of Terrestrial Laser Scanning (TLS) data depends mainly on the inherent random range error, which hinders the extraction of small details from TLS measurements. New post-processing algorithms have been developed that reduce or eliminate the noise and therefore enable modelling details at a smaller scale than one would traditionally expect. The aim of this research is to find the optimum denoising method such that the corrected TLS data provide a reliable estimation of small-scale rock joint roughness. Two wavelet-based denoising methods are considered, namely the Discrete Wavelet Transform (DWT) and the Stationary Wavelet Transform (SWT), in combination with different thresholding procedures. The question is which technique provides more accurate roughness estimates, considering (i) the wavelet transform (SWT or DWT), (ii) the thresholding method (fixed-form or penalised low) and (iii) the thresholding mode (soft or hard). The performance of the denoising methods is tested by two analyses, namely method noise and method sensitivity to noise. The reference data are precise Advanced TOpometric Sensor (ATOS) measurements obtained on a 20 × 30 cm rock joint sample, which, for the second analysis, are corrupted by different levels of noise. Such controlled-noise experiments make it possible to evaluate the methods' performance for different amounts of noise, which might be present in TLS data. Qualitative visual checks of denoised surfaces and quantitative parameters such as grid height and roughness are considered in a comparative analysis of the denoising methods. Results indicate that the preferred method for realistic roughness estimation is DWT with penalised low hard thresholding.
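
    Both transforms compared above are available in PyWavelets, so the DWT/SWT trade-off is easy to experiment with. A minimal stationary-wavelet variant with the fixed-form (universal) threshold and hard thresholding is sketched below; parameters are illustrative, not the study's settings.

        import numpy as np
        import pywt

        def swt_denoise(profile, wavelet="db4", level=3):
            # pywt.swt needs the length to be a multiple of 2**level.
            n = (len(profile) // 2 ** level) * 2 ** level
            coeffs = pywt.swt(np.asarray(profile[:n], dtype=float), wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1][1])) / 0.6745   # finest detail band
            t = sigma * np.sqrt(2.0 * np.log(n))                # fixed-form threshold
            den = [(cA, pywt.threshold(cD, t, mode="hard")) for cA, cD in coeffs]
            return pywt.iswt(den, wavelet)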

  19. Robust 4D Flow Denoising Using Divergence-Free Wavelet Transform

    PubMed Central

    Ong, Frank; Uecker, Martin; Tariq, Umar; Hsiao, Albert; Alley, Marcus T; Vasanawala, Shreyas S.; Lustig, Michael

    2014-01-01

    Purpose To investigate four-dimensional flow denoising using the divergence-free wavelet (DFW) transform and compare its performance with existing techniques. Theory and Methods DFW is a vector-wavelet that provides a sparse representation of flow in a generally divergence-free field and can be used to enforce “soft” divergence-free conditions when discretization and partial voluming result in numerical nondivergence-free components. Efficient denoising is achieved by appropriate shrinkage of divergence-free wavelet and nondivergence-free coefficients. SureShrink and cycle spinning are investigated to further improve denoising performance. Results DFW denoising was compared with existing methods on simulated and phantom data and was shown to yield better noise reduction overall while being robust to segmentation errors. The processing was applied to in vivo data and was demonstrated to improve visualization while preserving quantifications of flow data. Conclusion DFW denoising of four-dimensional flow data was shown to reduce noise levels in flow data both quantitatively and visually. PMID:24549830

  20. Iterative weighted maximum likelihood denoising with probabilistic patch-based weights.

    PubMed

    Deledalle, Charles-Alban; Denis, Loïc; Tupin, Florence

    2009-12-01

    Image denoising is an important problem in image processing since noise may interfere with visual or automatic interpretation. This paper presents a new approach for image denoising in the case of a known uncorrelated noise model. The proposed filter is an extension of the nonlocal means (NL means) algorithm introduced by Buades et al., which performs a weighted average of the values of similar pixels. Pixel similarity is defined in NL means as the Euclidean distance between patches (rectangular windows centered on each of two pixels). In this paper, a more general and statistically grounded similarity criterion is proposed which depends on the noise distribution model. The denoising process is expressed as a weighted maximum likelihood estimation problem where the weights are derived in a data-driven way. These weights can be iteratively refined based on both the similarity between noisy patches and the similarity of patches extracted from the previous estimate. We show that this iterative process noticeably improves the denoising performance, especially in the case of low signal-to-noise ratio images such as synthetic aperture radar (SAR) images. Numerical experiments illustrate that the technique can be successfully applied to the classical case of additive Gaussian noise but also to cases such as multiplicative speckle noise. The proposed denoising technique seems to improve on state-of-the-art performance in the latter case.
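
    For reference, the baseline NL-means estimator that this weighted maximum likelihood view generalizes can be written in a few lines. The sketch below is a deliberately naive (slow, readable) implementation with Gaussian-weighted Euclidean patch distances; the patch size, search window and decay parameter h are tuning choices.

        import numpy as np

        def nl_means(img, patch=3, search=7, h=0.1):
            p, s = patch // 2, search // 2
            pad = np.pad(img.astype(float), p + s, mode="reflect")
            out = np.zeros_like(img, dtype=float)
            H, W = img.shape
            for i in range(H):
                for j in range(W):
                    ci, cj = i + p + s, j + p + s
                    ref = pad[ci - p:ci + p + 1, cj - p:cj + p + 1]
                    wsum = acc = 0.0
                    for di in range(-s, s + 1):
                        for dj in range(-s, s + 1):
                            ni, nj = ci + di, cj + dj
                            cand = pad[ni - p:ni + p + 1, nj - p:nj + p + 1]
                            w = np.exp(-np.mean((ref - cand) ** 2) / h ** 2)
                            wsum += w
                            acc += w * pad[ni, nj]
                    out[i, j] = acc / wsum  # weighted average of similar pixels
            return out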

  1. Simultaneous Fusion and Denoising of Panchromatic and Multispectral Satellite Images

    NASA Astrophysics Data System (ADS)

    Ragheb, Amr M.; Osman, Heba; Abbas, Alaa M.; Elkaffas, Saleh M.; El-Tobely, Tarek A.; Khamis, S.; Elhalawany, Mohamed E.; Nasr, Mohamed E.; Dessouky, Moawad I.; Al-Nuaimy, Waleed; Abd El-Samie, Fathi E.

    2012-12-01

    To identify objects in satellite images, multispectral (MS) images with high spectral resolution and low spatial resolution, and panchromatic (Pan) images with high spatial resolution and low spectral resolution, need to be fused. Several fusion methods such as the intensity-hue-saturation (IHS), the discrete wavelet transform, the discrete wavelet frame transform (DWFT), and principal component analysis have been proposed in recent years to obtain images with both high spectral and spatial resolutions. In this paper, a hybrid fusion method for satellite images comprising both the IHS transform and the DWFT is proposed. This method tries to achieve the highest possible spectral and spatial resolutions with as little distortion in the fused image as possible. A comparison study between the proposed hybrid method and the traditional methods is presented in this paper. Different MS and Pan images from Landsat-5, Spot, Landsat-7, and IKONOS satellites are used in this comparison. The effect of noise on the proposed hybrid fusion method as well as the traditional fusion methods is studied. Experimental results show the superiority of the proposed hybrid method to the traditional methods. The results also show that a wavelet denoising step is required when fusion is performed at low signal-to-noise ratios.

  2. ECG signals denoising using wavelet transform and independent component analysis

    NASA Astrophysics Data System (ADS)

    Liu, Manjin; Hui, Mei; Liu, Ming; Dong, Liquan; Zhao, Zhu; Zhao, Yuejin

    2015-08-01

    A method for denoising two-channel exercise electrocardiogram (ECG) signals based on the wavelet transform and independent component analysis is proposed in this paper. First of all, two-channel exercise ECG signals are acquired. We decompose these two channels into eight layers and sum the useful wavelet coefficients separately, obtaining two channels free of baseline drift and other interference components. However, they still contain electrode movement noise, power frequency interference and other interference. Secondly, we take these two processed channels together with one manually constructed channel and apply independent component analysis, obtaining the separated ECG signal; the residual noise is removed effectively. Finally, a comparative experiment is made between processing the two exercise ECG channels directly with independent component analysis and using the method proposed in this paper, which shows that the signal-to-noise ratio (SNR) increases by 21.916 and the root mean square error (RMSE) decreases by 2.522, proving that the proposed method has high reliability.

  3. A fast-convergence POCS seismic denoising and reconstruction method

    NASA Astrophysics Data System (ADS)

    Ge, Zi-Jian; Li, Jing-Ye; Pan, Shu-Lin; Chen, Xiao-Hong

    2015-06-01

    The efficiency, precision, and denoising capabilities of reconstruction algorithms are critical to seismic data processing. Based on the Fourier-domain projection onto convex sets (POCS) algorithm, we propose an inversely proportional threshold model that defines the optimum threshold, in which the descent rate is larger than in the exponential threshold in the large-coefficient section and slower than in the exponential threshold in the small-coefficient section. Thus, the computation efficiency of the POCS seismic reconstruction greatly improves without affecting the reconstructed precision of weak reflections. To improve the flexibility of the inversely proportional threshold, we obtain the optimal threshold by using an adjustable dependent variable in the denominator of the inversely proportional threshold model. For random noise attenuation by completing the missing traces in seismic data reconstruction, we present a weighted reinsertion strategy based on the data-driven model that can be obtained by using the percentage of the data-driven threshold in each iteration in the threshold section. We apply the proposed POCS reconstruction method to 3D synthetic and field data. The results suggest that the inversely proportional threshold model improves the computational efficiency and precision compared with the traditional threshold models; furthermore, the proposed reinserting weight strategy increases the SNR of the reconstructed data.
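
    A minimal Fourier-domain POCS loop with trace reinsertion makes the ingredients above concrete. The paper's inversely proportional threshold model and data-driven reinsertion weights are not reproduced exactly; the schedule and the hard reinsertion below are illustrative stand-ins.

        import numpy as np

        def pocs_reconstruct(data, mask, n_iter=50, a=9.0):
            """data: observed 2D gather (zeros at missing traces); mask: 1 where observed."""
            x = data.copy()
            tau_max = 0.9 * np.max(np.abs(np.fft.fft2(data)))
            for k in range(n_iter):
                tau = tau_max / (1.0 + a * k / (n_iter - 1))  # inversely proportional decay
                X = np.fft.fft2(x)
                X[np.abs(X) < tau] = 0.0                      # hard threshold in Fourier domain
                x = mask * data + (1 - mask) * np.real(np.fft.ifft2(X))  # reinsert observed traces
            return x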

  4. Unsupervised dealiasing and denoising of color-Doppler data.

    PubMed

    Muth, Stéphan; Dort, Sarah; Sebag, Igal A; Blais, Marie-Josée; Garcia, Damien

    2011-08-01

    Color Doppler imaging (CDI) is the premier modality for analyzing blood flow in clinical practice. In the prospect of producing new CDI-based tools, we developed a fast unsupervised denoiser and dealiaser (DeAN) algorithm for color Doppler raw data. The proposed technique uses robust and automated image post-processing techniques that make the DeAN clinically compliant. The DeAN includes three consecutive advanced and hands-off numerical tools: (1) statistical region merging segmentation, (2) a recursive dealiasing process, and (3) regularized robust smoothing. The performance of the DeAN was evaluated using Monte Carlo simulations on mock Doppler data corrupted by aliasing and inhomogeneous noise. Fifty aliased Doppler images of the left ventricle acquired with a clinical ultrasound scanner were also analyzed. The analytical study demonstrated that color Doppler data can be reconstructed with high accuracy despite the presence of strong corruption. The normalized RMS error on the numerical data was less than 8% even with a signal-to-noise ratio as low as 10 dB. The algorithm also allowed us to recover highly reliable Doppler flows in clinical data. The DeAN is fast, accurate and not observer-dependent. Preliminary results showed that it is also directly applicable to 3-D data. This will offer the possibility of developing new tools to better decipher blood flow dynamics in cardiovascular diseases.

  5. Computed tomography perfusion imaging denoising using Gaussian process regression

    NASA Astrophysics Data System (ADS)

    Zhu, Fan; Carpenter, Trevor; Rodriguez Gonzalez, David; Atkinson, Malcolm; Wardlaw, Joanna

    2012-06-01

    Brain perfusion weighted images acquired using dynamic contrast studies have an important clinical role in acute stroke diagnosis and treatment decisions. However, computed tomography (CT) images suffer from low contrast-to-noise ratios (CNR) as a consequence of limiting the patient's exposure to radiation, so methods for improving the CNR are valuable. The majority of existing approaches for denoising CT images are optimized for 3D (spatial) information, including spatial decimation (spatially weighted mean filters) and techniques based on wavelet and curvelet transforms. However, perfusion imaging data are 4D, as they also contain temporal information. Our approach uses Gaussian process regression (GPR), which takes advantage of the temporal information, to reduce the noise level. Over the entire image, GPR gains a 99% CNR improvement over the raw images and also improves the quality of haemodynamic maps, allowing a better identification of edges and detailed information. At the level of individual voxels, GPR provides a stable baseline, helps us to identify key parameters from tissue time-concentration curves and reduces the oscillations in the curve. GPR is superior to the comparable techniques used in this study.
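
    Per voxel, the idea reduces to regressing one noisy time-concentration curve and reading back the posterior mean. A minimal sketch with scikit-learn, where the RBF-plus-white-noise kernel and all parameter values are assumptions of this illustration rather than the paper's settings:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        t = np.linspace(0, 60, 40)                      # acquisition times (s)
        rng = np.random.default_rng(0)
        y = np.exp(-(t - 20) ** 2 / 50) + 0.1 * rng.standard_normal(t.size)  # noisy curve

        kernel = 1.0 * RBF(length_scale=5.0) + WhiteKernel(noise_level=0.01)
        gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t[:, None], y)
        denoised = gpr.predict(t[:, None])              # posterior mean as denoised curve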

  6. Denoising of Ultrasound Cervix Image Using Improved Anisotropic Diffusion Filter

    PubMed Central

    Rose, R Jemila; Allwin, S

    2015-01-01

    Objective: The purpose of this study was to evaluate an improved oriented speckle reducing anisotropic diffusion (IADF) filter that suppresses speckle noise in ultrasound B-mode images and shows better results than previous filters such as anisotropic diffusion, wavelet denoising and local statistics. Methods: The clinical ultrasound images of the cervix were obtained with an ATL HDI 5000 ultrasound machine from the Regional Cancer Centre, Medical College campus, Thiruvananthapuram. The images were organized and stored in bmp format with dimensions of 256 × 256 and processed with the improved oriented speckle reducing anisotropic diffusion filter. For analysis, 24 ultrasound cervix images were tested and the performance measured. Results: The filter provides quality metrics of maximum peak signal-to-noise ratio (PSNR) of 31 dB, structural similarity index map (SSIM) of 0.88 and edge preservation accuracy of 88%. Conclusion: The IADF filter is the optimal method and is capable of strong speckle suppression with low computational complexity. PMID:26624591

  7. Generalized non-local means filtering for image denoising

    NASA Astrophysics Data System (ADS)

    Dolui, Sudipto; Salgado Patarroyo, Iván. C.; Michailovich, Oleg V.

    2014-02-01

    Non-local means (NLM) filtering has been shown to outperform alternative denoising methodologies under the model of additive white Gaussian noise contamination. Recently, several theoretical frameworks have been developed to extend this class of algorithms to more general types of noise statistics. However, many of these frameworks are specifically designed for a single noise contamination model, and are far from optimal across varying noise statistics. The NLM filtering techniques rely on the definition of a similarity measure, which quantifies the similarity of two neighbourhoods along with their respective centroids. The key to the unification of the NLM filter for different noise statistics lies in the definition of a universal similarity measure which is guaranteed to provide favourable performance irrespective of the statistics of the noise. Accordingly, the main contribution of this work is to provide a rigorous statistical framework to derive such a universal similarity measure, while highlighting some of its theoretical and practical favourable characteristics. Additionally, the closed form expressions of the proposed similarity measure are provided for a number of important noise scenarios and the practical utility of the proposed similarity measure is demonstrated through numerical experiments.

  8. Wavelet based de-noising of breath air absorption spectra profiles for improved classification by principal component analysis

    NASA Astrophysics Data System (ADS)

    Kistenev, Yu. V.; Shapovalov, A. V.; Borisov, A. V.; Vrazhnov, D. A.; Nikolaev, V. V.; Nikiforova, O. Yu.

    2015-11-01

    Results are presented comparing different mother wavelets used for de-noising model and experimental data consisting of absorption spectra profiles of exhaled air. The impact of wavelet de-noising on the quality of classification by principal component analysis is also discussed.

  9. Efficient denoising algorithms for large experimental datasets and their applications in Fourier transform ion cyclotron resonance mass spectrometry

    PubMed Central

    Chiron, Lionel; van Agthoven, Maria A.; Kieffer, Bruno; Rolando, Christian; Delsuc, Marc-André

    2014-01-01

    Modern scientific research produces datasets of increasing size and complexity that require dedicated numerical methods to be processed. In many cases, the analysis of spectroscopic data involves the denoising of raw data before any further processing. Current efficient denoising algorithms require the singular value decomposition of a matrix with a size that scales up as the square of the data length, preventing their use on very large datasets. Taking advantage of recent progress on random projection and probabilistic algorithms, we developed a simple and efficient method for the denoising of very large datasets. Based on the QR decomposition of a matrix randomly sampled from the data, this approach allows a gain of nearly three orders of magnitude in processing time compared with classical singular value decomposition denoising. This procedure, called urQRd (uncoiled random QR denoising), strongly reduces the computer memory footprint and allows the denoising algorithm to be applied to virtually unlimited data size. The efficiency of these numerical tools is demonstrated on experimental data from high-resolution broadband Fourier transform ion cyclotron resonance mass spectrometry, which has applications in proteomics and metabolomics. We show that robust denoising is achieved in 2D spectra whose interpretation is severely impaired by scintillation noise. These denoising procedures can be adapted to many other data analysis domains where the size and/or the processing time are crucial. PMID:24390542
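
    The abstract's recipe (randomly sketch the data matrix, orthogonalize with a QR step, and project) can be condensed into a short routine. Below is a hedged, illustrative rendering for a 1D signal embedded in a Hankel matrix, with rank k as a tuning choice; it follows the general randomized-projection idea rather than the authors' exact urQRd implementation.

        import numpy as np

        def random_qr_denoise(signal, k=10, seed=0):
            x = np.asarray(signal, dtype=float)
            n = len(x)
            m = n // 2
            # Hankel embedding: H[i, j] = x[i + j]
            H = np.lib.stride_tricks.sliding_window_view(x, n - m + 1)[:m]
            omega = np.random.default_rng(seed).standard_normal((H.shape[1], k))
            Q, _ = np.linalg.qr(H @ omega)     # orthonormal basis from a random sketch
            Hd = Q @ (Q.T @ H)                 # rank-k projection of the Hankel matrix
            out = np.zeros(n)
            cnt = np.zeros(n)
            for i in range(Hd.shape[0]):       # average anti-diagonals back to 1D
                idx = i + np.arange(Hd.shape[1])
                out[idx] += Hd[i]
                cnt[idx] += 1
            return out / cnt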

  10. Comparison of generalized estimating equations and quadratic inference functions using data from the National Longitudinal Survey of Children and Youth (NLSCY) database

    PubMed Central

    Odueyungbo, Adefowope; Browne, Dillon; Akhtar-Danesh, Noori; Thabane, Lehana

    2008-01-01

    Background: The generalized estimating equations (GEE) technique is often used in longitudinal data modeling, where investigators are interested in population-averaged effects of covariates on responses of interest. GEE involves specifying a model relating covariates to outcomes and a plausible correlation structure between responses at different time periods. While GEE parameter estimates are consistent irrespective of the true underlying correlation structure, the method has some limitations, including challenges with model selection due to the lack of absolute goodness-of-fit tests to aid comparisons among several plausible models. The quadratic inference functions (QIF) method extends the capabilities of GEE while also addressing some of its limitations. Methods: We conducted a comparative study between GEE and QIF via an illustrative example, using data from the "National Longitudinal Survey of Children and Youth (NLSCY)" database. The NLSCY dataset consists of long-term, population-based survey data collected since 1994, and is designed to evaluate the determinants of developmental outcomes in Canadian children. We modeled the relationship between hyperactivity-inattention and gender, age, family functioning, maternal depression symptoms, household income adequacy, maternal immigration status and maternal educational level using GEE and QIF. The bases for comparison were: (1) ease of model selection; (2) sensitivity of results to different working correlation matrices; and (3) efficiency of parameter estimates. Results: The sample included 795,858 respondents (50.3% male; 12% immigrant; 6% from dysfunctional families). QIF analysis reveals that male gender (odds ratio [OR] = 1.73; 95% confidence interval [CI] = 1.10 to 2.71), family dysfunction (OR = 2.84; 95% CI, 1.58 to 5.11), and maternal depression (OR = 2.49; 95% CI, 1.60 to 2.60) are significantly associated with higher odds of hyperactivity-inattention. The results remained robust under GEE modeling
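
    The two GEE ingredients the abstract highlights, a mean model and a working correlation structure for repeated measures, map directly onto statsmodels. The sketch below fits a logistic GEE with an exchangeable working correlation to synthetic data; the variable names are invented, not NLSCY fields.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n_children, n_waves = 200, 3
        df = pd.DataFrame({
            "child": np.repeat(np.arange(n_children), n_waves),
            "male": np.repeat(rng.integers(0, 2, n_children), n_waves),
            "depress": rng.normal(size=n_children * n_waves),
        })
        df["hyper"] = rng.binomial(1, 0.3, len(df))         # binary outcome

        model = sm.GEE.from_formula(
            "hyper ~ male + depress", groups="child", data=df,
            family=sm.families.Binomial(),
            cov_struct=sm.cov_struct.Exchangeable())        # working correlation
        print(model.fit().summary())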

  11. Image Denoising via Bayesian Estimation of Statistical Parameter Using Generalized Gamma Density Prior in Gaussian Noise Model

    NASA Astrophysics Data System (ADS)

    Kittisuwan, Pichid

    2015-03-01

    The application of image processing in industry has shown remarkable success over the last decade, for example, in security and telecommunication systems. The denoising of natural image corrupted by Gaussian noise is a classical problem in image processing. So, image denoising is an indispensable step during image processing. This paper is concerned with dual-tree complex wavelet-based image denoising using Bayesian techniques. One of the cruxes of the Bayesian image denoising algorithms is to estimate the statistical parameter of the image. Here, we employ maximum a posteriori (MAP) estimation to calculate local observed variance with generalized Gamma density prior for local observed variance and Laplacian or Gaussian distribution for noisy wavelet coefficients. Evidently, our selection of prior distribution is motivated by efficient and flexible properties of generalized Gamma density. The experimental results show that the proposed method yields good denoising results.
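
    The heart of such Bayesian schemes is a shrinkage rule driven by a local variance estimate. As a much simplified stand-in for the paper's dual-tree complex wavelet transform and generalized Gamma prior, the sketch below applies a locally adaptive Wiener-style MAP shrinkage to ordinary DWT detail subbands:

        import numpy as np
        import pywt
        from scipy.ndimage import uniform_filter

        def local_shrink_denoise(img, sigma, wavelet="db4", level=2, win=7):
            coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
            out = [coeffs[0]]                          # keep approximation band
            for details in coeffs[1:]:
                shrunk = []
                for c in details:
                    v = uniform_filter(c * c, size=win)             # local raw variance
                    gain = np.maximum(v - sigma ** 2, 0.0) / np.maximum(v, 1e-12)
                    shrunk.append(c * gain)                         # Wiener-style MAP gain
                out.append(tuple(shrunk))
            return pywt.waverec2(out, wavelet)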

  12. ECG denoising and fiducial point extraction using an extended Kalman filtering framework with linear and nonlinear phase observations.

    PubMed

    Akhbari, Mahsa; Shamsollahi, Mohammad B; Jutten, Christian; Armoundas, Antonis A; Sayadi, Omid

    2016-02-01

    In this paper, we propose an efficient method for denoising ECG signals and extracting their fiducial points (FPs). The method is based on a nonlinear dynamic model which uses Gaussian functions to model ECG waveforms. For estimating the model parameters, we use an extended Kalman filter (EKF). In this framework, called EKF25, all the parameters of the Gaussian functions as well as the ECG waveforms (P-wave, QRS complex and T-wave) in the ECG dynamical model are considered as state variables. In this paper, the dynamic time warping method is used to estimate the nonlinear ECG phase observation. We compare this new approach with linear phase observation models. Using linear and nonlinear EKF25 for ECG denoising and nonlinear EKF25 for fiducial point extraction and ECG interval analysis are the main contributions of this paper. Performance comparison with other EKF-based techniques shows that the proposed method results in higher output SNR, with an average SNR improvement of 12 dB for an input SNR of -8 dB. To evaluate the FP extraction performance, we compare the proposed method with a method based on a partially collapsed Gibbs sampler and an established EKF-based method. The mean absolute error and the root mean square error of all FPs, across all databases, are 14 ms and 22 ms, respectively, for our proposed method, with an advantage when using a nonlinear phase observation. These errors are significantly smaller than errors obtained with other methods. For ECG interval analysis, with an absolute mean error and a root mean square error of about 22 ms and 29 ms, the proposed method achieves better accuracy and smaller variability with respect to other methods. PMID:26767425

  14. Computed Tomography Images De-noising using a Novel Two Stage Adaptive Algorithm

    PubMed Central

    Fadaee, Mojtaba; Shamsi, Mousa; Saberkari, Hamidreza; Sedaaghi, Mohammad Hossein

    2015-01-01

    In this paper, an optimal algorithm is presented for de-noising of medical images. The presented algorithm is based on an improved version of local pixel grouping and principal component analysis. In the local pixel grouping algorithm, block matching based on the L2 norm is utilized, which improves matching performance. To evaluate the performance of our proposed algorithm, the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) criteria have been used, which measure, respectively, the signal-to-noise ratio of the image and the structural similarity between two images. The proposed algorithm has two stages: de-noising and cleanup. The cleanup stage is carried out iteratively, being alternately repeated until the two conditions based on PSNR and SSIM are satisfied. Implementation results show that the presented algorithm has a significant superiority in de-noising. Furthermore, the SSIM and PSNR values are higher in comparison to other methods. PMID:26955565

  15. Comparative study of ECG signal denoising by wavelet thresholding in empirical and variational mode decomposition domains.

    PubMed

    Lahmiri, Salim

    2014-09-01

    Hybrid denoising models based on combining empirical mode decomposition (EMD) and discrete wavelet transform (DWT) were found to be effective in removing additive Gaussian noise from electrocardiogram (ECG) signals. Recently, variational mode decomposition (VMD) has been proposed as a multiresolution technique that overcomes some of the limits of the EMD. Two ECG denoising approaches are compared. The first is based on denoising in the EMD domain by DWT thresholding, whereas the second is based on noise reduction in the VMD domain by DWT thresholding. Using signal-to-noise ratio and mean of squared errors as performance measures, simulation results show that the VMD-DWT approach outperforms the conventional EMD-DWT. In addition, a non-local means approach used as a reference technique provides better results than the VMD-DWT approach. PMID:26609387

  16. Improved DCT-based nonlocal means filter for MR images denoising.

    PubMed

    Hu, Jinrong; Pu, Yifei; Wu, Xi; Zhang, Yi; Zhou, Jiliu

    2012-01-01

    The nonlocal means (NLM) filter has been proven to be an efficient feature-preserving denoising method and can be applied to remove noise in magnetic resonance (MR) images. To suppress noise more efficiently, we present a novel NLM filter based on the discrete cosine transform (DCT). Instead of computing similarity weights using the gray level information directly, the proposed method calculates similarity weights in the DCT subspace of the neighborhood. Due to promising characteristics of the DCT, such as low data correlation and high energy compaction, the proposed filter is naturally endowed with more accurate estimation of weights and thus enhances denoising effectively. The performance of the proposed filter is evaluated qualitatively and quantitatively together with two other NLM filters, namely, the original NLM filter and the unbiased NLM (UNLM) filter. Experimental results demonstrate that the proposed filter achieves better denoising performance in MRI compared to the others.

  17. Total variation versus wavelet-based methods for image denoising in fluorescence lifetime imaging microscopy

    PubMed Central

    Chang, Ching-Wei; Mycek, Mary-Ann

    2014-01-01

    We report the first application of wavelet-based denoising (noise removal) methods to time-domain box-car fluorescence lifetime imaging microscopy (FLIM) images and compare the results to novel total variation (TV) denoising methods. Methods were tested first on artificial images and then applied to low-light live-cell images. Relative to undenoised images, TV methods could improve lifetime precision up to 10-fold in artificial images, while preserving the overall accuracy of lifetime and amplitude values of a single-exponential decay model and improving local lifetime fitting in live-cell images. Wavelet-based methods were at least 4-fold faster than TV methods, but could introduce significant inaccuracies in recovered lifetime values. The denoising methods discussed can potentially enhance a variety of FLIM applications, including live-cell, in vivo animal, or endoscopic imaging studies, especially under challenging imaging conditions such as low-light or fast video-rate imaging. PMID:22415891
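
    Both denoising families compared above are available off the shelf in scikit-image, which makes this kind of TV-versus-wavelet comparison easy to reproduce on synthetic data. Parameter values below are illustrative, not those of the study.

        import numpy as np
        from skimage.restoration import denoise_tv_chambolle, denoise_wavelet

        rng = np.random.default_rng(0)
        clean = np.tile(np.linspace(0.0, 1.0, 128), (128, 1))   # synthetic image
        noisy = clean + 0.1 * rng.standard_normal(clean.shape)

        tv_out = denoise_tv_chambolle(noisy, weight=0.1)        # total variation
        wl_out = denoise_wavelet(noisy, mode="soft")            # wavelet shrinkage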

  18. Automated wavelet denoising of photoacoustic signals for circulating melanoma cell detection and burn image reconstruction.

    PubMed

    Holan, Scott H; Viator, John A

    2008-06-21

    Photoacoustic image reconstruction may involve hundreds of point measurements, each of which contributes unique information about the subsurface absorbing structures under study. For backprojection imaging, two or more point measurements of photoacoustic waves induced by irradiating a biological sample with laser light are used to produce an image of the acoustic source. Each of these measurements must undergo some signal processing, such as denoising or system deconvolution. In order to process the numerous signals, we have developed an automated wavelet algorithm for denoising signals. We appeal to the discrete wavelet transform for denoising photoacoustic signals generated in a dilute melanoma cell suspension and in thermally coagulated blood. We used 5, 9, 45 and 270 melanoma cells in the laser beam path as test concentrations. For the burn phantom, we used coagulated blood in a 1.6 mm silicone tube submerged in Intralipid. Although these two targets were chosen as typical applications for photoacoustic detection and imaging, they are of independent interest. The denoising employs level-independent universal thresholding. In order to accommodate non-radix-2 signals, we considered a maximal overlap discrete wavelet transform (MODWT). For the lower melanoma cell concentrations, as the signal-to-noise ratio approached 1, denoising allowed better peak finding. For coagulated blood, the signals were denoised to yield a clean photoacoustic signal, resulting in an improvement of 22% in the reconstructed image. The entire signal processing technique was automated so that minimal user intervention was needed to reconstruct the images. Such an algorithm may be used for image reconstruction and signal extraction for applications such as burn depth imaging, depth profiling of vascular lesions in skin and the detection of single cancer cells in blood samples. PMID:18495977

  19. NOTE: Automated wavelet denoising of photoacoustic signals for circulating melanoma cell detection and burn image reconstruction

    NASA Astrophysics Data System (ADS)

    Holan, Scott H.; Viator, John A.

    2008-06-01

    Photoacoustic image reconstruction may involve hundreds of point measurements, each of which contributes unique information about the subsurface absorbing structures under study. For backprojection imaging, two or more point measurements of photoacoustic waves induced by irradiating a biological sample with laser light are used to produce an image of the acoustic source. Each of these measurements must undergo some signal processing, such as denoising or system deconvolution. In order to process the numerous signals, we have developed an automated wavelet algorithm for denoising signals. We appeal to the discrete wavelet transform for denoising photoacoustic signals generated in a dilute melanoma cell suspension and in thermally coagulated blood. We used 5, 9, 45 and 270 melanoma cells in the laser beam path as test concentrations. For the burn phantom, we used coagulated blood in a 1.6 mm silicone tube submerged in Intralipid. Although these two targets were chosen as typical applications for photoacoustic detection and imaging, they are of independent interest. The denoising employs level-independent universal thresholding. In order to accommodate non-radix-2 signals, we considered a maximal overlap discrete wavelet transform (MODWT). For the lower melanoma cell concentrations, as the signal-to-noise ratio approached 1, denoising allowed better peak finding. For coagulated blood, the signals were denoised to yield a clean photoacoustic signal, resulting in an improvement of 22% in the reconstructed image. The entire signal processing technique was automated so that minimal user intervention was needed to reconstruct the images. Such an algorithm may be used for image reconstruction and signal extraction for applications such as burn depth imaging, depth profiling of vascular lesions in skin and the detection of single cancer cells in blood samples.

  20. Class of Fibonacci-Daubechies-4-Haar wavelets with applicability to ECG denoising

    NASA Astrophysics Data System (ADS)

    Smith, Christopher B.; Agaian, Sos S.

    2004-05-01

    The presented paper introduces a new class of wavelets that includes the simplest Haar wavelet (Daubechies-2) as well as the Daubechies-4 wavelet. This class is shown to have several properties similar to the Daubechies wavelets. In application, the new class of wavelets has been shown to effectively denoise ECG signals. In addition, the paper introduces a new polynomial soft threshold technique for denoising through wavelet shrinkage. The polynomial soft threshold technique is able to represent a wide class of polynomial behaviors, including classical soft thresholding.
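
    The exact polynomial soft threshold rule is not spelled out in the abstract; as a hedged illustration of the idea of shaping the shrinkage curve polynomially, the function below interpolates between ordinary soft thresholding (p = 1) and nearly hard thresholding (large p):

        import numpy as np

        def poly_threshold(x, t, p=2.0):
            x = np.asarray(x, dtype=float)
            mag = np.abs(x)
            out = np.zeros_like(x)
            keep = mag > t
            # Shrink by t * (t / |x|)**(p - 1): p = 1 recovers soft thresholding,
            # while large p leaves big coefficients almost untouched (hard-like).
            out[keep] = np.sign(x[keep]) * (mag[keep] - t * (t / mag[keep]) ** (p - 1))
            return out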

  1. Evaluating image denoising methods in myocardial perfusion single photon emission computed tomography (SPECT) imaging

    NASA Astrophysics Data System (ADS)

    Skiadopoulos, S.; Karatrantou, A.; Korfiatis, P.; Costaridou, L.; Vassilakos, P.; Apostolopoulos, D.; Panayiotakis, G.

    2009-10-01

    The statistical nature of single photon emission computed tomography (SPECT) imaging, due to the Poisson noise effect, results in degraded image quality, especially in the case of lesions of low signal-to-noise ratio (SNR). A variety of well-established single-scale denoising methods applied to projection raw images have been incorporated in SPECT imaging applications, while multi-scale denoising methods with promising performance have been proposed. In this paper, a comparative evaluation study is performed between a multi-scale platelet denoising method and the well-established Butterworth filter applied as a pre- and post-processing step on images reconstructed without and/or with attenuation correction. Quantitative evaluation was carried out employing (i) a cardiac phantom containing two different-size cold defects, utilized in two experiments conducted to simulate conditions without and with photon attenuation from myocardial surrounding tissue, and (ii) a pilot-verified clinical dataset of 15 patients with ischemic defects. Image noise, defect contrast, SNR and defect contrast-to-noise ratio (CNR) metrics were computed for both phantom and patient defects. In addition, an observer preference study was carried out for the clinical dataset, based on rankings from two nuclear medicine clinicians. Under conditions without photon attenuation, the platelet and Butterworth post-processing methods outperformed Butterworth pre-processing for large-size defects, while for small-size defects, as well as under photon attenuation conditions, all methods demonstrated similar denoising performance. Under both attenuation conditions, the platelet method showed improved performance with respect to defect contrast, SNR and defect CNR in the case of images reconstructed without attenuation correction, although not statistically significant (p > 0.05). Quantitative as well as preference results obtained from clinical data showed similar performance of the

  2. [Near infrared spectra (NIR) analysis of octane number by wavelet denoising-derivative method].

    PubMed

    Tian, Gao-you; Yuan, Hong-fu; Chu, Xiao-li; Liu, Hui-ying; Lu, Wan-zhen

    2005-04-01

    Differentiation can correct baseline effects but also increases the noise level, while the wavelet transform has been proven an efficient tool for de-noising. This paper addresses the application of the wavelet transform and derivatives in the NIR analysis of octane number (RON). The derivative parameters, as well as their effects on the noise level and the analytic accuracy of RON, have been studied in detail. The results show that the derivative can correct baseline effects and increase the analytic accuracy, but noise in the derivative spectra is highly detrimental to the analysis of RON. Wavelet de-noising can increase the S/N and improve the analytical accuracy.
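
    The order of operations suggested above, de-noise in the wavelet domain first and differentiate afterwards, is easy to prototype. The sketch below combines a universal-threshold DWT de-noise with a Savitzky-Golay first derivative; the wavelet, window and polynomial order are illustrative choices, not the paper's settings.

        import numpy as np
        import pywt
        from scipy.signal import savgol_filter

        def denoise_then_differentiate(spectrum, wavelet="sym8", level=4):
            coeffs = pywt.wavedec(spectrum, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate
            t = sigma * np.sqrt(2.0 * np.log(len(spectrum)))      # universal threshold
            coeffs[1:] = [pywt.threshold(c, t, mode="soft") for c in coeffs[1:]]
            smooth = pywt.waverec(coeffs, wavelet)[: len(spectrum)]
            return savgol_filter(smooth, window_length=11, polyorder=2, deriv=1)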

  3. MR images denoising using DCT-based unbiased nonlocal means filter

    NASA Astrophysics Data System (ADS)

    Zheng, Xiuqing; Hu, Jinrong; Zhou, Jiuliu

    2013-03-01

    The non-local means (NLM) filter has been proven to be an efficient feature-preserving denoising method and can be applied to remove noise in magnetic resonance (MR) images. To suppress noise more efficiently, we present a novel NLM filter that uses a low-pass filtered, low-dimensional version of the neighbourhood for calculating the similarity weights. The discrete cosine transform (DCT) is used as a smoothing kernel, allowing both improvements in similarity estimation and computational speed-up. Experimental results show that the proposed filter achieves better denoising performance on MR images compared to other filters, such as the recently proposed NLM filter and the unbiased NLM (UNLM) filter.

  4. Inference in `poor` languages

    SciTech Connect

    Petrov, S.

    1996-10-01

    Languages with a solvable implication problem but without complete and consistent systems of inference rules ('poor' languages) are considered. The problem of the existence of a finite complete and consistent inference rule system for a 'poor' language is stated independently of the language or rule syntax. Several properties of the problem are proved. An application of the results to the language of join dependencies is given.

  5. Biomedical image and signal de-noising using dual tree complex wavelet transform

    NASA Astrophysics Data System (ADS)

    Rizi, F. Yousefi; Noubari, H. Ahmadi; Setarehdan, S. K.

    2011-10-01

    The dual tree complex wavelet transform (DTCWT) is a form of discrete wavelet transform which generates complex coefficients by using a dual tree of wavelet filters to obtain their real and imaginary parts. The purpose of de-noising is to reduce the noise level and improve the signal-to-noise ratio (SNR) without distorting the signal or image. This paper proposes a method for removing white Gaussian noise from ECG signals and biomedical images. The discrete wavelet transform (DWT) is very valuable in a large scope of de-noising problems. However, it has limitations such as oscillations of the coefficients at a singularity, lack of directional selectivity in higher dimensions, aliasing and consequent shift variance. The complex wavelet transform (CWT) strategy that we focus on in this paper is Kingsbury's and Selesnick's dual tree CWT (DTCWT), which outperforms the critically decimated DWT in a range of applications such as de-noising. Each complex wavelet is oriented along one of six possible directions, and the magnitude of each complex wavelet has a smooth bell shape. In the final part of this paper, we present biomedical image and signal de-noising by means of thresholding the magnitude of the wavelet coefficients.

  6. Texture preservation in de-noising UAV surveillance video through multi-frame sampling

    NASA Astrophysics Data System (ADS)

    Wang, Yi; Fevig, Ronald A.; Schultz, Richard R.

    2009-02-01

    Image de-noising is a widely-used technology in modern real-world surveillance systems. Methods can seldom do both de-noising and texture preservation very well without a direct knowledge of the noise model. Most of the neighborhood fusion-based de-noising methods tend to over-smooth the images, which causes a significant loss of detail. Recently, a new non-local means method has been developed, which is based on the similarities among the different pixels. This technique results in good preservation of the textures; however, it also causes some artifacts. In this paper, we utilize the scale-invariant feature transform (SIFT) [1] method to find the corresponding region between different images, and then reconstruct the de-noised images by a weighted sum of these corresponding regions. Both hard and soft criteria are chosen in order to minimize the artifacts. Experiments applied to real unmanned aerial vehicle thermal infrared surveillance video show that our method is superior to popular methods in the literature.

  7. Incrementing data quality of multi-frequency echograms using the Adaptive Wiener Filter (AWF) denoising algorithm

    NASA Astrophysics Data System (ADS)

    Peña, M.

    2016-10-01

    Achieving an acceptable signal-to-noise ratio (SNR) can be difficult when working in sparsely populated waters and/or when species have low scattering, such as fluid-filled animals. The increasing use of higher frequencies and the study of deeper depths in fisheries acoustics, as well as the use of commercial vessels, are raising the need for good denoising algorithms. The use of a lower Sv threshold to remove noise or unwanted targets is not suitable in many cases and increases the relative background noise component in the echogram, demanding more effectiveness from denoising algorithms. The Adaptive Wiener Filter (AWF) denoising algorithm is presented in this study. The technique is based on the AWF commonly used in digital photography and video enhancement. The algorithm first increases the quality of the data with a variance-dependent smoothing, before estimating the noise level as the envelope of the Sv minima. The AWF denoising algorithm outperforms existing algorithms in the presence of Gaussian, speckle and salt-and-pepper noise, although impulse noise needs to be removed first. Cleaned echograms present homogeneous echotraces with outlined edges.
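
    SciPy ships a local-statistics adaptive Wiener filter of the same family. It lacks the Sv-minima noise-envelope estimate described above, but gives a quick baseline on an echogram array; the fake data and window size are illustrative.

        import numpy as np
        from scipy.signal import wiener

        rng = np.random.default_rng(1)
        echogram = rng.normal(-70.0, 3.0, size=(200, 500))   # fake Sv values (dB)
        denoised = wiener(echogram, mysize=(5, 5))           # local adaptive Wiener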

  8. Translation invariant directional framelet transform combined with Gabor filters for image denoising.

    PubMed

    Shi, Yan; Yang, Xiaoyuan; Guo, Yuhua

    2014-01-01

    This paper is devoted to the study of a directional lifting transform for wavelet frames. A nonsubsampled lifting structure is developed to maintain the translation invariance as it is an important property in image denoising. Then, the directionality of the lifting-based tight frame is explicitly discussed, followed by a specific translation invariant directional framelet transform (TIDFT). The TIDFT has two framelets ψ1, ψ2 with vanishing moments of order two and one respectively, which are able to detect singularities in a given direction set. It provides an efficient and sparse representation for images containing rich textures along with properties of fast implementation and perfect reconstruction. In addition, an adaptive block-wise orientation estimation method based on Gabor filters is presented instead of the conventional minimization of residuals. Furthermore, the TIDFT is utilized to exploit the capability of image denoising, incorporating the MAP estimator for multivariate exponential distribution. Consequently, the TIDFT is able to eliminate the noise effectively while preserving the textures simultaneously. Experimental results show that the TIDFT outperforms some other frame-based denoising methods, such as contourlet and shearlet, and is competitive to the state-of-the-art denoising approaches.

  9. Fast and Memory-Efficient Topological Denoising of 2D and 3D Scalar Fields.

    PubMed

    Günther, David; Jacobson, Alec; Reininghaus, Jan; Seidel, Hans-Peter; Sorkine-Hornung, Olga; Weinkauf, Tino

    2014-12-01

    Data acquisition, numerical inaccuracies, and sampling often introduce noise in measurements and simulations. Removing this noise is often necessary for efficient analysis and visualization of this data, yet many denoising techniques change the minima and maxima of a scalar field. For example, extrema can appear or disappear, move spatially, and change their value. This can lead to wrong interpretations of the data, e.g., when the maximum temperature over an area is falsely reported as being a few degrees cooler because the denoising method is unaware of these features. Recently, a topological denoising technique based on a global energy optimization was proposed, which allows the topology-controlled denoising of 2D scalar fields. While this method preserves the minima and maxima, it is constrained by the size of the data. We extend this work to large 2D data and medium-sized 3D data by introducing a novel domain decomposition approach. It allows processing small patches of the domain independently while still avoiding the introduction of new critical points. Furthermore, we propose an iterative refinement of the solution, which decreases the optimization energy compared to the previous approach and therefore gives smoother results that are closer to the input. We illustrate our technique on synthetic and real-world 2D and 3D data sets that highlight potential applications. PMID:26356972

  10. Subject-specific patch-based denoising for contrast-enhanced cardiac MR images

    NASA Astrophysics Data System (ADS)

    Ma, Lorraine; Ebrahimi, Mehran; Pop, Mihaela

    2016-03-01

    Many patch-based techniques in imaging, e.g., non-local means denoising, require tuning parameters to yield optimal results. In real-world applications, e.g., denoising of MR images, ground truth is not generally available, and choosing an appropriate set of parameters is a challenge. Recently, Zhu et al. proposed an image quality measure, called Q, that does not require ground truth. In this manuscript, we evaluate the effect of various parameters of NL-means denoising on this quality metric Q. Our experiments are based on late-gadolinium enhancement (LGE) cardiac MR images, which are inherently noisy. The exhaustive evaluation approach described here can be used to tune the parameters of patch-based schemes. Even when an estimate of the optimal parameters is provided by another existing approach, the described method can be used as a secondary validation step. Our preliminary results suggest that denoising parameters should be case-specific rather than generic.
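
    The exhaustive evaluation can be organized as a simple parameter sweep. The sketch below uses scikit-image's NLM implementation; `quality_q` is a hypothetical placeholder where an implementation of Zhu et al.'s no-reference metric Q would be plugged in.

    ```python
    import itertools
    import numpy as np
    from skimage.restoration import denoise_nl_means

    def quality_q(img):
        """Hypothetical stand-in for the no-reference quality metric Q."""
        gy, gx = np.gradient(img)
        return float(np.hypot(gx, gy).mean())   # placeholder score only

    def sweep_nlm(noisy, hs=(0.05, 0.1, 0.2), patches=(5, 7), dists=(6, 11)):
        """Score every NLM parameter combination and rank by the metric."""
        results = []
        for h, p, d in itertools.product(hs, patches, dists):
            den = denoise_nl_means(noisy, h=h, patch_size=p, patch_distance=d)
            results.append(((h, p, d), quality_q(den)))
        return sorted(results, key=lambda r: r[1], reverse=True)
    ```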

  11. a Universal De-Noising Algorithm for Ground-Based LIDAR Signal

    NASA Astrophysics Data System (ADS)

    Ma, Xin; Xiang, Chengzhi; Gong, Wei

    2016-06-01

    Ground-based lidar, working as an effective remote sensing tool, plays an irreplaceable role in the study of the atmosphere, since it can provide the atmospheric vertical profile. However, the appearance of noise in a lidar signal is unavoidable, which leads to difficulties and complexities when searching for more information. Every de-noising method has its own characteristics and limitations, since the lidar signal varies as the atmosphere changes. In this paper, a universal de-noising algorithm is proposed to enhance the SNR of a ground-based lidar signal, based on signal segmentation and reconstruction. The signal segmentation, serving as the keystone of the algorithm, divides the lidar signal into three different parts, which are processed by different de-noising methods according to their own characteristics. The signal reconstruction is a relatively simple procedure that splices the signal sections end to end. Finally, a series of tests on simulated signals and a real dual field-of-view lidar signal shows the feasibility of the universal de-noising algorithm.

  12. Group-sparse representation with dictionary learning for medical image denoising and fusion.

    PubMed

    Li, Shutao; Yin, Haitao; Fang, Leyuan

    2012-12-01

    Recently, sparse representation has attracted a lot of interest in various areas. However, standard sparse representation does not consider the intrinsic structure, i.e., that the nonzero elements occur in clusters, called group sparsity. Furthermore, there is no dictionary learning method for group sparse representation that considers the geometrical structure of the space spanned by the atoms. In this paper, we propose a novel dictionary learning method, called Dictionary Learning with Group Sparsity and Graph Regularization (DL-GSGR). First, the geometrical structure of the atoms is modeled as a graph regularization. Then, combining group sparsity and graph regularization, DL-GSGR is presented and solved by alternating group sparse coding and dictionary updating. In this way, the group coherence of the learned dictionary can be kept small enough that any signal can be group sparse coded effectively. Finally, group sparse representation with DL-GSGR is applied to 3-D medical image denoising and image fusion. Specifically, in 3-D medical image denoising, a 3-D processing mechanism (using the similarity among nearby slices) and temporal regularization (to preserve the correlations across nearby slices) are exploited. The experimental results on 3-D image denoising and image fusion demonstrate the superiority of our proposed denoising and fusion approaches.
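
    As a point of reference, plain patch-based dictionary-learning denoising (without the paper's group sparsity or graph regularization) can be sketched with scikit-learn; the patch size, atom count, and sparsity level below are illustrative.

    ```python
    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.feature_extraction.image import (extract_patches_2d,
                                                  reconstruct_from_patches_2d)

    def dictionary_denoise(noisy, patch=7, n_atoms=100):
        """Learn a dictionary on noisy patches, sparse-code them with OMP,
        and rebuild the image by averaging the overlapping reconstructions."""
        patches = extract_patches_2d(noisy, (patch, patch)).astype(float)
        flat = patches.reshape(len(patches), -1)
        mean = flat.mean(axis=1, keepdims=True)
        flat -= mean                               # code the fluctuations only
        dl = MiniBatchDictionaryLearning(n_components=n_atoms,
                                         transform_algorithm="omp",
                                         transform_n_nonzero_coefs=3)
        dl.fit(flat[::10])                         # fit on a patch subsample
        code = dl.transform(flat)
        recon = (code @ dl.components_) + mean
        return reconstruct_from_patches_2d(recon.reshape(patches.shape),
                                           noisy.shape)
    ```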

  13. A multi-scale non-local means algorithm for image de-noising

    NASA Astrophysics Data System (ADS)

    Nercessian, Shahan; Panetta, Karen A.; Agaian, Sos S.

    2012-06-01

    A highly studied problem in image processing, and in electrical engineering in general, is the recovery of a true signal from its noisy version. Images can be corrupted by noise during their acquisition or transmission stages. As noisy images are visually very poor in quality and complicate further processing stages of computer vision systems, it is imperative to develop algorithms that effectively remove noise from images. In practice, it is difficult to remove the noise while simultaneously retaining the edge structures within the image. Accordingly, many de-noising algorithms have been proposed that attempt to intelligently smooth the image while still preserving its details. Recently, a non-local means (NLM) de-noising algorithm was introduced, which exploits the redundant nature of images to achieve de-noising. The algorithm was shown to outperform current de-noising standards, including Gaussian filtering, anisotropic diffusion, total variation minimization, and multi-scale transform coefficient thresholding. However, the NLM algorithm was developed in the spatial domain and therefore does not leverage the benefit of multi-scale transforms, which provide a framework in which signals can be better distinguished from noise. Accordingly, in this paper, a multi-scale NLM (MS-NLM) algorithm is proposed, which combines the advantages of the NLM algorithm and multi-scale image processing techniques. Experimental results via computer simulations illustrate that the MS-NLM algorithm outperforms the NLM, both visually and quantitatively.
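
    One plausible reading of the combination, sketched under our own assumptions rather than taken from the paper: transform the image into a wavelet multi-scale representation and run NLM on each subband before reconstructing. The wavelet, level, and `h` are illustrative.

    ```python
    import pywt
    from skimage.restoration import denoise_nl_means

    def ms_nlm(noisy, wavelet="db2", level=2, h=0.1):
        """Denoise the approximation and every detail subband with NLM,
        then invert the wavelet transform."""
        coeffs = pywt.wavedec2(noisy, wavelet, level=level)
        out = [denoise_nl_means(coeffs[0], h=h)]
        for cH, cV, cD in coeffs[1:]:
            out.append(tuple(denoise_nl_means(c, h=h) for c in (cH, cV, cD)))
        rec = pywt.waverec2(out, wavelet)
        return rec[:noisy.shape[0], :noisy.shape[1]]   # crop padding, if any
    ```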

  14. Adaptive Tensor-Based Principal Component Analysis for Low-Dose CT Image Denoising

    PubMed Central

    Ai, Danni; Yang, Jian; Fan, Jingfan; Cong, Weijian; Wang, Yongtian

    2015-01-01

    Computed tomography (CT) has revolutionized diagnostic radiology but involves large radiation doses that directly impact image quality. In this paper, we propose an adaptive tensor-based principal component analysis (AT-PCA) algorithm for low-dose CT image denoising. Pixels in the image are represented by their nearby neighbors and are modeled as a patch. Adaptive search windows are calculated to find similar patches as training groups for further processing. Tensor-based PCA is used to obtain transformation matrices, and coefficients are sequentially shrunk by the linear minimum mean square error. Reconstructed patches are obtained, and a denoised image is finally achieved by aggregating all of these patches. The experimental results on the standard test image show that the best results are obtained with two denoising rounds according to six quantitative measures. For the experiment on clinical images, the proposed AT-PCA method can suppress the noise, enhance the edges, and improve the image quality more effectively than the NLM and K-SVD denoising methods. PMID:25993566
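
    A simplified, global-PCA version of the patch-PCA-plus-LMMSE-shrinkage idea is sketched below; the paper's adaptive search windows, tensor formulation, and second denoising round are omitted, and `noise_var` is assumed known.

    ```python
    import numpy as np
    from sklearn.feature_extraction.image import (extract_patches_2d,
                                                  reconstruct_from_patches_2d)

    def pca_shrink_denoise(noisy, patch=6, noise_var=0.01):
        """PCA the patch ensemble, shrink each component with an LMMSE-style
        gain, and aggregate the overlapping patches."""
        P = extract_patches_2d(noisy, (patch, patch)).astype(float)
        X = P.reshape(len(P), -1)
        mu = X.mean(axis=0)
        Xc = X - mu
        evals, evecs = np.linalg.eigh(Xc.T @ Xc / len(Xc))  # patch covariance
        coef = Xc @ evecs
        gain = np.maximum(evals - noise_var, 0) / np.maximum(evals, 1e-12)
        Xd = (coef * gain) @ evecs.T + mu
        return reconstruct_from_patches_2d(Xd.reshape(P.shape), noisy.shape)
    ```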

  15. Denoising of hyperspectral images by best multilinear rank approximation of a tensor

    NASA Astrophysics Data System (ADS)

    Marin-McGee, Maider; Velez-Reyes, Miguel

    2010-04-01

    The hyperspectral image cube can be modeled as a three-dimensional array. Tensors and the tools of multilinear algebra provide a natural framework to deal with this type of mathematical object. The singular value decomposition (SVD) and its variants have been used by the HSI community for denoising of hyperspectral imagery. Denoising of HSI using the SVD is achieved by finding a low-rank approximation of a matrix representation of the hyperspectral image cube. This paper investigates similar concepts in hyperspectral denoising by using a low multilinear rank approximation of the given HSI tensor representation. The Best Multilinear Rank Approximation (BMRA) of a given tensor A is the problem of finding a lower multilinear rank tensor B that is as close as possible to A in the Frobenius norm. Different numerical methods to compute the BMRA, using the Alternating Least Squares (ALS) method and Newton-type methods over products of Grassmann manifolds, are presented. The effects of the multilinear rank, the numerical method used to compute the BMRA, and different parameter choices in those methods are studied. Results show that comparable performance is achievable with both ALS and Newton-type methods. Also, classification results using the filtered tensor are better than those obtained with either SVD-based denoising or the MNF.
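
    The truncated higher-order SVD gives the standard inexpensive approximation to the BMRA and is the usual initializer for the ALS and Newton iterations discussed in the record; a minimal numpy sketch follows, with the target multilinear ranks as inputs.

    ```python
    import numpy as np

    def unfold(T, mode):
        """Matricize a tensor along one mode."""
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    def mode_mult(T, M, mode):
        """Multiply tensor T by matrix M along the given mode."""
        return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1),
                           0, mode)

    def truncated_hosvd(T, ranks):
        """Project each mode onto its leading singular subspace and expand
        back: a rank-(r1, r2, r3) approximation of the HSI cube."""
        factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
                   for m, r in enumerate(ranks)]
        core = T
        for m, U in enumerate(factors):
            core = mode_mult(core, U.T, m)
        approx = core
        for m, U in enumerate(factors):
            approx = mode_mult(approx, U, m)
        return approx
    ```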

  16. Denoising peptide tandem mass spectra for spectral libraries: a Bayesian approach.

    PubMed

    Shao, Wenguang; Lam, Henry

    2013-07-01

    With the rapid accumulation of data from shotgun proteomics experiments, it has become feasible to build comprehensive and high-quality spectral libraries of tandem mass spectra of peptides. A spectral library condenses experimental data into a retrievable format and can be used to aid peptide identification by spectral library searching. A key step in spectral library building is spectrum denoising, which is best accomplished by merging multiple replicates of the same peptide ion into a consensus spectrum. However, this approach cannot be applied to "singleton spectra," for which only one observed spectrum is available for the peptide ion. We developed a method, based on a Bayesian classifier, for denoising peptide tandem mass spectra. The classifier accounts for relationships between peaks, can be trained on the fly from consensus spectra, and can be immediately applied to denoise singleton spectra, without hard-coded knowledge about peptide fragmentation. A linear regression model was also trained to predict the number of useful "signal" peaks in a spectrum, thereby obviating the need for arbitrary thresholds for peak filtering. This Bayesian approach accumulates weak evidence systematically to boost the discrimination power between signal and noise peaks, and produces readily interpretable conditional probabilities that offer valuable insights into peptide fragmentation behaviors. By cross-validation, spectra denoised by this method were shown to retain more signal peaks, and to have higher spectral similarities to replicates, than those filtered by intensity only.

  17. Anisotropic feature inferred from receiver functions and S-wave splitting in and around the high strain rate zone, central Japan

    NASA Astrophysics Data System (ADS)

    Shiomi, K.; Takeda, T.; Sekiguchi, S.

    2012-12-01

    Recent dense GPS observations revealed a high strain rate zone (HSRZ) crossing central Japan. In the HSRZ, an E-W compressive stress field is observed, and large earthquakes with M>6 occur frequently. In this study, we try to reveal the depth-dependent anisotropic features in this region by using teleseismic receiver functions (RFs) and S-wave splitting information. As targets, we select the NIED Hi-net stations N.TGWH and N.TSTH, which are located inside and outside the HSRZ, respectively. For the RF analysis, we choose M>5.5 teleseismic events from October 2000 to November 2011. Low-pass filters with fc = 1 and 2 Hz are applied to estimate the RFs. In the radial RFs, we find clear positive phase arrivals at 4 to 4.5 s delay time for both stations. Since this delay time corresponds to a velocity discontinuity at about 35 km depth, these phases may be converted phases generated at the Moho discontinuity. In the back-azimuth paste-ups of the transverse RFs, we can find polarity changes of later phases at 4 to 4.5 s delay time at the N.TSTH station. This polarity change occurs for the directions N0E (north), N180E (south), and N270E (west). Although we have no data in the N90E (east) direction, this feature implies that anisotropic rocks may exist around the Moho. In order to check this feature, we consider a 6-layered subsurface model and compare synthetic RFs with the observations. The first three layers represent thick sediments and the upper crust, including a dipping velocity interface. The fourth, fifth, and sixth layers correspond to the mid crust, lower crust, and uppermost mantle, respectively. The best model infers that the mid and lower crust beneath the N.TSTH station should have strong anisotropy whose fast axis points N-S, though the fast axis in the uppermost mantle seems to show an E-W direction. Moreover, to explain the observations, the symmetry axes in the lower crust and the uppermost mantle should dip about 20 degrees. To check

  18. Environment-dependent denoising autoencoder for distant-talking speech recognition

    NASA Astrophysics Data System (ADS)

    Ueda, Yuma; Wang, Longbiao; Kai, Atsuhiko; Ren, Bo

    2015-12-01

    In this paper, we propose an environment-dependent denoising autoencoder (DAE) and automatic environment identification based on a deep neural network (DNN) with blind reverberation estimation for robust distant-talking speech recognition. Recently, DAEs have been shown to be effective in many noise reduction and reverberation suppression applications because higher-level representations and increased flexibility of the feature mapping function can be learned. However, a DAE is not adequate when training and test environments are mismatched. In a conventional DAE, parameters are trained using pairs of reverberant speech and clean speech under various acoustic conditions (that is, an environment-independent DAE). To address the above problem, we propose two environment-dependent DAEs to reduce the influence of mismatches between training and test environments. In the first approach, we train various DAEs using speech from different acoustic environments, and the DAE for the condition that best matches the test condition is automatically selected (that is, a two-step environment-dependent DAE). To improve environment identification performance, we propose a DNN that uses both reverberant speech and estimated reverberation. In the second approach, we add estimated reverberation features to the input of the DAE (that is, a one-step environment-dependent DAE, or a reverberation-aware DAE). The proposed method is evaluated using speech in simulated and real reverberant environments. Experimental results show that the environment-dependent DAE outperforms the environment-independent one in both simulated and real reverberant environments. For the two-step environment-dependent DAE, the performance of environment identification based on the proposed DNN approach is also better than that of the conventional DNN approach, in which only reverberant speech is used and reverberation is not blindly estimated. In addition, the one-step environment-dependent DAE significantly outperforms the two
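
    A minimal feed-forward DAE of the kind described here can be sketched in PyTorch; the feature dimension, depth, and training loop are illustrative, and the environment-dependent variants would either train one such model per acoustic condition or concatenate estimated reverberation features to `noisy`.

    ```python
    import torch
    import torch.nn as nn

    class DAE(nn.Module):
        """Map noisy/reverberant feature frames to clean ones."""
        def __init__(self, dim=40, hidden=256):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden), nn.ReLU(),
                                     nn.Linear(hidden, dim))

        def forward(self, x):
            return self.net(x)

    def train_dae(model, noisy, clean, epochs=10, lr=1e-3):
        """noisy, clean: paired (n_frames, dim) tensors from one environment."""
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(noisy), clean)
            loss.backward()
            opt.step()
        return model
    ```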

  19. Segmentation of confocal Raman microspectroscopic imaging data using edge-preserving denoising and clustering.

    PubMed

    Alexandrov, Theodore; Lasch, Peter

    2013-06-18

    Over the past decade, confocal Raman microspectroscopic (CRM) imaging has matured into a useful analytical tool to obtain spatially resolved chemical information on the molecular composition of biological samples and has found its way into histopathology, cytology, and microbiology. A CRM imaging data set is a hyperspectral image in which Raman intensities are represented as a function of three coordinates: a spectral coordinate λ encoding the wavelength and two spatial coordinates x and y. Understanding CRM imaging data is challenging because of its complexity, size, and moderate signal-to-noise ratio. Spatial segmentation of CRM imaging data is a way to reveal regions of interest and is traditionally performed using unsupervised clustering, which relies on spectral-domain information only, its main drawback being a high sensitivity to noise. We present a new pipeline for spatial segmentation of CRM imaging data which combines preprocessing in the spectral and spatial domains with k-means clustering. Its core is the preprocessing routine in the spatial domain, edge-preserving denoising (EPD), which exploits the spatial relationships between Raman intensities acquired at neighboring pixels. Additionally, we propose to use both spatial correlation to identify Raman spectral features colocalized with defined spatial regions and confidence maps to assess the quality of the spatial segmentation. For CRM data acquired from midsagittal Syrian hamster (Mesocricetus auratus) brain cryosections, we show how our pipeline benefits from the complex spatial-spectral relationships inherent in the CRM imaging data. EPD significantly improves the quality of the spatial segmentation, which allows us to extract the underlying structural and compositional information contained in the Raman microspectra. PMID:23701523

  20. Median Modified Wiener Filter for nonlinear adaptive spatial denoising of protein NMR multidimensional spectra

    PubMed Central

    Cannistraci, Carlo Vittorio; Abbas, Ahmed; Gao, Xin

    2015-01-01

    Denoising multidimensional NMR spectra is a fundamental step in NMR protein structure determination. The state-of-the-art method uses wavelet denoising, which may suffer when applied to non-stationary signals affected by Gaussian white noise mixed with strong impulsive artifacts, like those in multidimensional NMR spectra. Regrettably, wavelet performance depends on a combinatorial search of wavelet shapes and parameters, and the multi-dimensional extension of wavelet denoising is highly non-trivial, which hampers its application to multidimensional NMR spectra. Here, we endorse a different philosophy of denoising NMR spectra: less is more! We consider spatial filters that have only one parameter to tune: the window size. We propose, for the first time, the 3D extension of the median-modified Wiener filter (MMWF), an adaptive variant of the median filter, and also its novel variation named MMWF*. We test the proposed filters and the Wiener filter, an adaptive variant of the mean filter, on a benchmark set that contains 16 two-dimensional and three-dimensional NMR spectra extracted from eight proteins. Our results demonstrate that the adaptive spatial filters significantly outperform their non-adaptive versions. The performance of the new MMWF* on 2D/3D spectra is even better than that of wavelet denoising. Noticeably, MMWF* produces stably high performance, almost invariant across diverse window-size settings: this signifies a consistent advantage in the implementation of automatic pipelines for protein NMR spectra analysis. PMID:25619991

  1. Linguistic Markers of Inference Generation While Reading.

    PubMed

    Clinton, Virginia; Carlson, Sarah E; Seipel, Ben

    2016-06-01

    Words can be informative linguistic markers of psychological constructs. The purpose of this study is to examine associations between word use and the process of making meaningful connections to a text while reading (i.e., inference generation). To achieve this purpose, think-aloud data from third- through fifth-grade students ([Formula: see text]) reading narrative texts were hand-coded for inferences. These data were also processed with a computer text analysis tool, Linguistic Inquiry and Word Count, for percentages of word use in the following categories: cognitive mechanism words, nonfluencies, and nine types of function words. Findings indicate that cognitive mechanism words were an independent, positive predictor of connections to background knowledge (i.e., elaborative inference generation) and nonfluencies were an independent, negative predictor of connections within the text (i.e., bridging inference generation). Function words did not provide unique variance toward predicting inference generation. These findings are discussed in the context of a cognitive reflection model and the differences between bridging and elaborative inference generation. In addition, potential practical implications for intelligent tutoring systems and computer-based methods of inference identification are presented.

  2. Network Plasticity as Bayesian Inference

    PubMed Central

    Legenstein, Robert; Maass, Wolfgang

    2015-01-01

    General results from statistical learning theory suggest understanding not only brain computations, but also brain plasticity, as probabilistic inference. However, a model for that has been missing. We propose that inherently stochastic features of synaptic plasticity and spine motility enable cortical networks of neurons to carry out probabilistic inference by sampling from a posterior distribution of network configurations. This model provides a viable alternative to existing models that propose convergence of parameters to maximum likelihood values. It explains how priors on weight distributions and connection probabilities can be merged optimally with learned experience, how cortical networks can generalize learned information so well to novel experiences, and how they can compensate continuously for unforeseen disturbances of the network. The resulting new theory of network plasticity explains from a functional perspective a number of experimental data on stochastic aspects of synaptic plasticity that previously appeared to be quite puzzling. PMID:26545099

  3. An open-source Matlab code package for improved rank-reduction 3D seismic data denoising and reconstruction

    NASA Astrophysics Data System (ADS)

    Chen, Yangkang; Huang, Weilin; Zhang, Dong; Chen, Wei

    2016-10-01

    Simultaneous seismic data denoising and reconstruction is currently a popular research subject in modern reflection seismology. Traditional rank-reduction based 3D seismic data denoising and reconstruction algorithms cause strong residual noise in the reconstructed data, which affects subsequent processing and interpretation tasks. In this paper, we propose an improved rank-reduction method by modifying the truncated singular value decomposition (TSVD) formula used in the traditional method. The proposed approach can achieve nearly perfect reconstruction performance even in the case of low signal-to-noise ratio (SNR). The proposed algorithm is tested on one synthetic example and one field data example. Considering that seismic data interpolation and denoising source packages are seldom in the public domain, we also provide a program template for the rank-reduction based simultaneous denoising and reconstruction algorithm in an open-source Matlab package.
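
    The baseline the paper improves on is plain truncated-SVD rank reduction; a 1-D Hankel-matrix sketch follows (our illustration; the paper's contribution replaces the plain truncation with a modified TSVD formula, which is not reproduced here).

    ```python
    import numpy as np
    from scipy.linalg import hankel

    def tsvd_denoise_trace(x, rank=3):
        """Embed a trace in a Hankel matrix, keep the leading singular
        triplets, and recover the trace by anti-diagonal averaging."""
        n = len(x)
        L = n // 2 + 1
        H = hankel(x[:L], x[L - 1:])
        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        Hk = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        out = np.zeros(n)
        counts = np.zeros(n)
        for i in range(Hk.shape[0]):
            for j in range(Hk.shape[1]):
                out[i + j] += Hk[i, j]
                counts[i + j] += 1
        return out / counts
    ```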

  4. The study of real-time denoising algorithm based on parallel computing for the MEMS IR imager

    NASA Astrophysics Data System (ADS)

    Gong, Cheng; Hui, Mei; Dong, Liquan; Zhao, Yuejin

    2011-11-01

    In recent years, MEMS-based optical readout infrared imaging technology has become a research hotspot. Studies show that the MEMS-based optical readout infrared imager features a high frame rate. Considering the high data throughput and the computational complexity of the denoising algorithm, it is difficult to ensure real-time image processing. In order to improve the processing speed and achieve real-time operation, we conducted a study of a denoising algorithm based on parallel computing using an FPGA (Field Programmable Gate Array). In this paper, we analyze the imaging characteristics of the MEMS-based optical readout infrared imager and design parallel computing methods for real-time denoising using a hardware description language. The experiment shows that the parallel computing denoising algorithm can improve the infrared image processing speed to meet the real-time requirement.

  5. A real-time de-noising method applied for transient and weak biomolecular interaction analysis in surface plasmon resonance biosensing

    NASA Astrophysics Data System (ADS)

    Zhan, Shuyue; Shi, Chunfei; Ou, Huichao; Song, Hong; Wang, Xiaoping

    2016-03-01

    Surface plasmon resonance (SPR) biosensing technology will likely become a label-free technology for transient and weak biomolecular interaction analysis (BIA); however, it needs improvement with regard to high-speed and high-resolution measurement. We studied a real-time de-noising (RD) data processing method for SPR sensorgrams based on a moving average; it can immediately distinguish ultra-weak signals during the experiment and can display a low-noise sensorgram in real time. A flow injection analysis experiment and a CM5 sensorchip affinity experiment were designed to evaluate the characteristics of the RD method. The high noise suppression ability and low signal distortion risk of the RD method were demonstrated. The RD method does not significantly distort signals of the sensorgram in the molecular affinity experiment, and K_D values obtained with the RD method essentially coincide with those of the raw sensorgram while offering a higher signal-to-noise ratio (SNR). Meanwhile, by using the RD method to denoise sensorgrams with an ultralow SNR, closer to the conditions of transient and weak molecular interactions, the kinetic constants can be analyzed more accurately, which cannot be achieved with the raw sensorgram. The crucial function and significance of the RD method lie primarily in extending the measurement limit of SPR sensing.
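
    A causal moving average is what makes the live display possible while data are still streaming; a minimal sketch (window length illustrative) is shown below. The window size trades noise suppression against distortion of fast kinetic transients.

    ```python
    from collections import deque

    class RealTimeDenoiser:
        """Emit a smoothed sample for every raw sample as it arrives."""
        def __init__(self, window=25):
            self.buf = deque(maxlen=window)
            self.total = 0.0

        def update(self, sample):
            if len(self.buf) == self.buf.maxlen:
                self.total -= self.buf[0]      # drop the oldest contribution
            self.buf.append(sample)
            self.total += sample
            return self.total / len(self.buf)
    ```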

  6. De-noising of microwave satellite soil moisture time series

    NASA Astrophysics Data System (ADS)

    Su, Chun-Hsu; Ryu, Dongryeol; Western, Andrew; Wagner, Wolfgang

    2013-04-01

    Technology) ASCAT data sets to identify two types of errors that are spectrally distinct. Based on a semi-empirical model of soil moisture dynamics, we consider possible digital filter designs to improve the accuracy of their soil moisture products by reducing systematic periodic errors and stochastic noise. We describe a methodology to design bandstop filters to remove artificial resonances, and a Wiener filter to remove stochastic white noise present in the satellite data. The utility of these filters is demonstrated by comparing de-noised data against in-situ observations from ground monitoring stations in the Murrumbidgee Catchment (Smith et al., 2012), southeast Australia. Albergel, C., de Rosnay, P., Gruhier, C., Muñoz Sabater, J., Hasenauer, S., Isaksen, L., Kerr, Y. H., & Wagner, W. (2012). Evaluation of remotely sensed and modelled soil moisture products using global ground-based in situ observations. Remote Sensing of Environment, 118, 215-226. Scipal, K., Holmes, T., de Jeu, R., Naeimi, V., & Wagner, W. (2008). A possible solution for the problem of estimating the error structure of global soil moisture data sets. Geophysical Research Letters, 35, L24403. Smith, A. B., Walker, J. P., Western, A. W., Young, R. I., Ellett, K. M., Pipunic, R. C., Grayson, R. B., Siriwardena, L., Chiew, F. H. S., & Richter, H. (2012). The Murrumbidgee soil moisture network data set. Water Resources Research, 48, W07701. Su, C.-H., Ryu, D., Young, R., Western, A. W., & Wagner, W. (2012). Inter-comparison of microwave satellite soil moisture retrievals over Australia. Submitted to Remote Sensing of Environment.

  7. A Wiener-Wavelet-Based filter for de-noising satellite soil moisture retrievals

    NASA Astrophysics Data System (ADS)

    Massari, Christian; Brocca, Luca; Ciabatta, Luca; Moramarco, Tommaso; Su, Chun-Hsu; Ryu, Dongryeol; Wagner, Wolfgang

    2014-05-01

    The reduction of noise in microwave satellite soil moisture (SM) retrievals is of paramount importance for practical applications, especially for those associated with the study of climate changes, droughts, floods, and other related hydrological processes. So far, Fourier-based methods have been used for de-noising satellite SM retrievals by filtering either the observed emissivity time series (Du, 2012) or the retrieved SM observations (Su et al., 2013). This contribution introduces an alternative approach based on a Wiener-Wavelet-Based filtering (WWB) technique, which uses the entropy-based wavelet de-noising method developed by Sang et al. (2009) to design both a causal and a non-causal version of the filter. WWB is used as a post-retrieval processing tool to enhance the quality of observations derived from (i) the Advanced Microwave Scanning Radiometer for the Earth observing system (AMSR-E), (ii) the Advanced SCATterometer (ASCAT), and (iii) the Soil Moisture and Ocean Salinity (SMOS) satellite. The method is tested on three pilot sites located in Spain (Remedhus Network), Greece (Hydrological Observatory of Athens), and Australia (Oznet network), respectively. Different quantitative criteria are used to judge the goodness of the de-noising technique. Results show that WWB (i) is able to improve both the correlation and the root mean squared differences between satellite retrievals and in situ soil moisture observations, and (ii) effectively separates random noise from deterministic components of the retrieved signals. Moreover, the use of WWB de-noised data in place of raw observations within a hydrological application confirms the usefulness of the proposed filtering technique. Du, J. (2012). A method to improve satellite soil moisture retrievals based on Fourier analysis. Geophys. Res. Lett., 39, L15404, doi:10.1029/2012GL052435. Su, C.-H., Ryu, D., Western, A. W., & Wagner, W. (2013). De-noising of passive and active microwave satellite soil moisture time

  8. On Bayesian Inductive Inference & Predictive Estimation

    NASA Technical Reports Server (NTRS)

    Cheeseman, Peter; Stutz, John; Smelyanskiy, Vadim

    2004-01-01

    We investigate Bayesian inference and the Principle of Maximum Entropy (PME) as methods for doing inference under uncertainty. This investigation proceeds primarily through concrete examples that have been previously investigated in the literature. We find that it is possible to do Bayesian inference and PME inference using the same information, despite claims to the contrary, but that the results are not directly comparable. This is because Bayesian inference yields a probability density function (pdf) over the unknown model parameters, whereas PME yields point estimates. If mean estimates are extracted from the Bayesian pdfs, the resulting parameter estimates can differ radically from the PME values and also from the Maximum Likelihood values. We conclude that these differences are due to the Bayesian inference not assuming anything beyond the given prior probabilities and the data, whereas PME implicitly assumes that the given constraints are the only constraints that are operating. Since this assumption can be wrong, PME values may have to be revised when subsequent data shows evidence for more constraints. The entropy concentration result previously "proved" by E. T. Jaynes is shown to be in error. Further, we show that PME is a generalized form of independence assumption, and so can be a very powerful method of inference when the variables being investigated are largely independent of each other.
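
    The PME side of the contrast is easy to make concrete with Jaynes' constrained-die example: given only a mean constraint, PME returns a single exponential-family distribution, whereas Bayesian inference would return a posterior pdf over the face probabilities. The sketch below is our illustration; the target mean and bracketing interval are arbitrary.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def maxent_die(mean_target=4.5):
        """PME point estimate for a die constrained only by its mean:
        p_i proportional to exp(lam * i), with lam matching the mean."""
        faces = np.arange(1, 7)

        def mean_of(lam):
            w = np.exp(lam * faces)
            return (faces * w).sum() / w.sum()

        lam = brentq(lambda l: mean_of(l) - mean_target, -5.0, 5.0)
        w = np.exp(lam * faces)
        return w / w.sum()
    ```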

  9. The Bayes Inference Engine

    SciTech Connect

    Hanson, K.M.; Cunningham, G.S.

    1996-04-01

    The authors are developing a computer application, called the Bayes Inference Engine, to provide the means to make inferences about models of physical reality within a Bayesian framework. The construction of complex nonlinear models is achieved by a fully object-oriented design. The models are represented by a data-flow diagram that may be manipulated by the analyst through a graphical programming environment. Maximum a posteriori solutions are achieved using a general, gradient-based optimization algorithm. The application incorporates a new technique of estimating and visualizing the uncertainties in specific aspects of the model.

  10. Application of Wavelet Based Denoising for T-Wave Alternans Analysis in High Resolution ECG Maps

    NASA Astrophysics Data System (ADS)

    Janusek, D.; Kania, M.; Zaczek, R.; Zavala-Fernandez, H.; Zbieć, A.; Opolski, G.; Maniewski, R.

    2011-01-01

    T-wave alternans (TWA) allows for the identification of patients at increased risk of ventricular arrhythmia. A stress test, which increases heart rate in a controlled manner, is used for TWA measurement. However, TWA detection and analysis are often disturbed by muscular interference. Wavelet-based denoising methods were evaluated to find the optimal algorithm for TWA analysis. ECG signals recorded in twelve patients with cardiac disease were analyzed. In seven of them, a significant T-wave alternans magnitude was detected. The application of a wavelet-based denoising method in the pre-processing stage increases the T-wave alternans magnitude as well as the number of BSPM signals in which TWA was detected.
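
    The generic wavelet-shrinkage step evaluated in studies like this one is short; the sketch below applies the universal threshold with a MAD noise estimate (a textbook recipe, not the specific method selected in this record).

    ```python
    import numpy as np
    import pywt

    def wavelet_denoise(signal, wavelet="db4", level=4):
        """Soft-threshold detail coefficients with the universal threshold."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # MAD noise estimate
        thr = sigma * np.sqrt(2 * np.log(len(signal)))
        denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                                  for c in coeffs[1:]]
        return pywt.waverec(denoised, wavelet)
    ```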

  11. Prognostics of Lithium-Ion Batteries Based on Wavelet Denoising and DE-RVM.

    PubMed

    Zhang, Chaolong; He, Yigang; Yuan, Lifeng; Xiang, Sheng; Wang, Jinping

    2015-01-01

    Lithium-ion batteries are widely used in many electronic systems. Therefore, it is very important, yet very difficult, to estimate a lithium-ion battery's remaining useful life (RUL). One important reason is that the measured battery capacity data are often subject to different levels of noise pollution. In this paper, a novel battery capacity prognostics approach is presented to estimate the RUL of lithium-ion batteries. Wavelet denoising is performed with different thresholds in order to weaken the strong noise and remove the weak noise. A relevance vector machine (RVM) improved by the differential evolution (DE) algorithm is utilized to estimate the battery RUL based on the denoised data. Experiments on the battery 5 and battery 18 capacity prognostics cases validate that the proposed approach can closely predict the trend of the battery capacity trajectory and accurately estimate the battery RUL.

  12. Parameters optimization for wavelet denoising based on normalized spectral angle and threshold constraint machine learning

    NASA Astrophysics Data System (ADS)

    Li, Hao; Ma, Yong; Liang, Kun; Tian, Yong; Wang, Rui

    2012-01-01

    Wavelet parameters (e.g., wavelet type, level of decomposition) affect the performance of wavelet denoising algorithms in hyperspectral applications. Current studies select the best wavelet parameters for a single spectral curve by comparing similarity criteria such as the spectral angle (SA). However, a method to find the best parameters for a spectral library that contains multiple spectra has not been studied. In this paper, a criterion named the normalized spectral angle (NSA) is proposed. By comparing NSA values, the best combination of parameters for a spectral library can be selected. Moreover, a fast algorithm based on a threshold constraint and machine learning is developed to reduce the time of a full search. After several iterations of learning, the combination of parameters that consistently surpasses a threshold is selected. The experiments proved that by using the NSA criterion, the SA values decreased significantly, and that the fast algorithm saved 80% of the time consumption while the denoising performance was not noticeably impaired.
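
    The spectral angle is standard; one plausible reading of the library-level NSA criterion, hedged as our own interpretation rather than the paper's exact definition, averages per-spectrum angles and normalizes by the maximum possible angle.

    ```python
    import numpy as np

    def spectral_angle(a, b):
        """Angle (radians) between a reference and a denoised spectrum."""
        cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    def normalized_spectral_angle(refs, tests):
        """Average per-spectrum angle over a library, normalized to [0, 1]."""
        angles = [spectral_angle(r, t) for r, t in zip(refs, tests)]
        return float(np.mean(angles) / np.pi)
    ```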

  13. A wavelet denoising approach for signal action isolation in the ear canal.

    PubMed

    Vaidyanathan, Ravi; Wang, Shouyan; Gupta, Lalit

    2008-01-01

    The goal of this work was to develop and implement a new filtering strategy to denoise acoustic signals in the ear canal resulting from voluntary movement of the tongue (as a method of generating control input), as well as from other active actions (speech, eating, drinking, smoking) and passive actions (swallowing, adjusting the jaw, physiological activity). The strategy is based on a wavelet shrinkage denoising approach that separates rhythmic bursting activity from white noise representing sustained tonic activity. While past work has addressed the discrimination of voluntary TMEP signals from one another, no work has addressed acoustic artefact rejection within the ear. The results described here, combined with our past work in isolating critical components of tongue-movement ear pressure (TMEP) signals, provide a basis for discriminating voluntary and involuntary actions of the tongue by monitoring pressure in the ear. At this time, the system has worked in real time for assistive device control.

  14. A new performance evaluation scheme for jet engine vibration signal denoising

    NASA Astrophysics Data System (ADS)

    Sadooghi, Mohammad Saleh; Esmaeilzadeh Khadem, Siamak

    2016-08-01

    Denoising of a cargo-plane jet engine compressor vibration signal is investigated in this article. The discrete wavelet transform and two families of thresholding methods, Donoho-Johnstone and parameter-method thresholding, are applied to the vibration signal. Eighty-four combinations of wavelet thresholding method and mother wavelet are evaluated. A new performance evaluation scheme for the optimal selection of the mother wavelet and thresholding method combination is proposed in this paper, which makes a trade-off among four performance criteria: signal-to-noise ratio, percentage root-mean-square difference, cross-correlation, and mean square error. The Dmeyer mother wavelet (dmey) combined with rigorous SURE thresholding has the maximum trade-off value and was selected as the most appropriate combination for denoising the signal. It was shown that an inappropriate combination leads to loss of data. The higher performance of the proposed trade-off with respect to the individual criteria was also demonstrated graphically.
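
    The four criteria entering the trade-off are all one-liners; the sketch below computes them (how the paper weights them into a single trade-off value is not reproduced here).

    ```python
    import numpy as np

    def denoise_scores(clean, denoised):
        """SNR, PRD, cross-correlation, and MSE for a denoised signal."""
        err = clean - denoised
        return {
            "snr": 10 * np.log10(np.sum(clean ** 2) / np.sum(err ** 2)),
            "prd": 100 * np.sqrt(np.sum(err ** 2) / np.sum(clean ** 2)),
            "xcorr": np.corrcoef(clean, denoised)[0, 1],
            "mse": np.mean(err ** 2),
        }
    ```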

  15. Prognostics of Lithium-Ion Batteries Based on Wavelet Denoising and DE-RVM

    PubMed Central

    Zhang, Chaolong; He, Yigang; Yuan, Lifeng; Xiang, Sheng; Wang, Jinping

    2015-01-01

    Lithium-ion batteries are widely used in many electronic systems. Therefore, it is very important, yet very difficult, to estimate a lithium-ion battery's remaining useful life (RUL). One important reason is that the measured battery capacity data are often subject to different levels of noise pollution. In this paper, a novel battery capacity prognostics approach is presented to estimate the RUL of lithium-ion batteries. Wavelet denoising is performed with different thresholds in order to weaken the strong noise and remove the weak noise. A relevance vector machine (RVM) improved by the differential evolution (DE) algorithm is utilized to estimate the battery RUL based on the denoised data. Experiments on the battery 5 and battery 18 capacity prognostics cases validate that the proposed approach can closely predict the trend of the battery capacity trajectory and accurately estimate the battery RUL. PMID:26413090

  16. Projection domain denoising method based on dictionary learning for low-dose CT image reconstruction.

    PubMed

    Zhang, Haiyan; Zhang, Liyi; Sun, Yunshan; Zhang, Jingyu

    2015-01-01

    Reducing the X-ray tube current is one of the widely used methods for decreasing the radiation dose. Unfortunately, the signal-to-noise ratio (SNR) of the projection data degrades simultaneously. To improve the quality of reconstructed images, a dictionary learning based penalized weighted least-squares (PWLS) approach is proposed for sinogram denoising. The weighted least-squares term accounts for the statistical characteristics of the noise, and the penalty models the sparsity of the sinogram based on dictionary learning. The CT image is then reconstructed from the denoised sinogram using the filtered back projection (FBP) algorithm. The proposed method is particularly suitable for projection data with low SNR. Experimental results show that the proposed method can produce high-quality CT images even when the signal-to-noise ratio of the projection data declines sharply.
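
    The denoise-then-FBP pipeline can be prototyped end to end with scikit-image; here a plain Gaussian smoother stands in for the dictionary-based PWLS sinogram denoiser, so the sketch shows the data flow, not the paper's estimator.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage.transform import radon, iradon

    def fbp_from_denoised_sinogram(image, n_views=180, sigma=1.0):
        """Simulate a noisy sinogram, denoise it, reconstruct with FBP."""
        theta = np.linspace(0.0, 180.0, n_views, endpoint=False)
        sino = radon(image, theta=theta, circle=False)
        noisy = sino + np.random.normal(0, 0.05 * sino.max(), sino.shape)
        denoised = gaussian_filter(noisy, sigma)   # stand-in for PWLS step
        return iradon(denoised, theta=theta, circle=False)
    ```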

  17. Wavelet-domain TI Wiener-like filtering for complex MR data denoising.

    PubMed

    Hu, Kai; Cheng, Qiaocui; Gao, Xieping

    2016-10-01

    Magnetic resonance (MR) images are affected by random noise, which degrades many image processing and analysis tasks. It has been shown that the noise in magnitude MR images follows a Rician distribution. Unlike additive Gaussian noise, this noise is signal-dependent and consequently difficult to reduce, especially in low signal-to-noise ratio (SNR) images. Wirestam et al. in [20] proposed a Wiener-like filtering technique in the wavelet domain to reduce noise before construction of the magnitude MR image. Based on Wirestam's study, we propose a wavelet-domain translation-invariant (TI) Wiener-like filtering algorithm for noise reduction in complex MR data. The proposed denoising algorithm offers the following improvements over Wirestam's method: (1) we introduce the TI property into the Wiener-like filtering in the wavelet domain to suppress artifacts caused by translations of the signal; (2) we integrate Stein's Unbiased Risk Estimator (SURE) thresholding with two Wiener-like filters to make the hard-thresholding scale adaptive; and (3) the first Wiener-like filter is used to filter the original noisy image, in which the noise obeys a Gaussian distribution, and it provides more reasonable results. The proposed algorithm is applied to denoise the real and imaginary parts of complex MR images. To evaluate our proposed algorithm, we conduct extensive denoising experiments using T1-weighted simulated MR images, diffusion-weighted (DW) phantom data, and in vivo data. We compare our algorithm with other popular denoising methods. The results demonstrate that our algorithm outperforms the others in terms of both efficiency and robustness. PMID:27238055

  18. Wavelet-domain TI Wiener-like filtering for complex MR data denoising.

    PubMed

    Hu, Kai; Cheng, Qiaocui; Gao, Xieping

    2016-10-01

    Magnetic resonance (MR) images are affected by random noise, which degrades many image processing and analysis tasks. It has been shown that the noise in magnitude MR images follows a Rician distribution. Unlike additive Gaussian noise, this noise is signal-dependent and consequently difficult to reduce, especially in low signal-to-noise ratio (SNR) images. Wirestam et al. in [20] proposed a Wiener-like filtering technique in the wavelet domain to reduce noise before construction of the magnitude MR image. Based on Wirestam's study, we propose a wavelet-domain translation-invariant (TI) Wiener-like filtering algorithm for noise reduction in complex MR data. The proposed denoising algorithm offers the following improvements over Wirestam's method: (1) we introduce the TI property into the Wiener-like filtering in the wavelet domain to suppress artifacts caused by translations of the signal; (2) we integrate Stein's Unbiased Risk Estimator (SURE) thresholding with two Wiener-like filters to make the hard-thresholding scale adaptive; and (3) the first Wiener-like filter is used to filter the original noisy image, in which the noise obeys a Gaussian distribution, and it provides more reasonable results. The proposed algorithm is applied to denoise the real and imaginary parts of complex MR images. To evaluate our proposed algorithm, we conduct extensive denoising experiments using T1-weighted simulated MR images, diffusion-weighted (DW) phantom data, and in vivo data. We compare our algorithm with other popular denoising methods. The results demonstrate that our algorithm outperforms the others in terms of both efficiency and robustness.
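
    The translation-invariance idea itself, denoise shifted copies and average the unshifted results, is available off the shelf in scikit-image via cycle spinning; the snippet below illustrates that property with a generic wavelet denoiser, not the Wiener-like filter of this record.

    ```python
    from skimage.restoration import cycle_spin, denoise_wavelet

    def ti_wavelet_denoise(image, max_shifts=3):
        """Average wavelet denoising over shifts to suppress shift artifacts."""
        return cycle_spin(image, func=denoise_wavelet, max_shifts=max_shifts)
    ```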

  19. Feasibility study of dose reduction in digital breast tomosynthesis using non-local denoising algorithms

    NASA Astrophysics Data System (ADS)

    Vieira, Marcelo A. C.; de Oliveira, Helder C. R.; Nunes, Polyana F.; Borges, Lucas R.; Bakic, Predrag R.; Barufaldi, Bruno; Acciavatti, Raymond J.; Maidment, Andrew D. A.

    2015-03-01

    The main purpose of this work is to study the ability of denoising algorithms to reduce the radiation dose in Digital Breast Tomosynthesis (DBT) examinations. Clinical use of DBT is normally performed in "combo-mode", in which, in addition to DBT projections, a 2D mammogram is taken with the standard radiation dose. As a result, patients have been exposed to radiation doses higher than those used in digital mammography. Thus, efforts to reduce the radiation dose in DBT examinations are of great interest. However, a decrease in dose leads to an increased quantum noise level and a related decrease in image quality. This work is aimed at addressing this problem by the use of denoising techniques, which could allow for dose reduction while keeping the image quality acceptable. We have studied two state-of-the-art denoising techniques for filtering the quantum noise due to the reduced dose in DBT projections: Non-local Means (NLM) and Block-matching 3D (BM3D). We acquired DBT projections at different dose levels of an anthropomorphic physical breast phantom with inserted simulated microcalcifications. We then determined the optimal filtering parameters for which the denoising algorithms are capable of recovering the quality of DBT images acquired at the standard radiation dose. Results using objective image quality assessment metrics showed that the BM3D algorithm achieved better noise adjustment (mean difference in peak signal-to-noise ratio < 0.1 dB) and less blurring (mean difference in image sharpness ~ 6%) than NLM for the projections acquired with lower radiation doses.

  20. AMA- and RWE- Based Adaptive Kalman Filter for Denoising Fiber Optic Gyroscope Drift Signal

    PubMed Central

    Yang, Gongliu; Liu, Yuanyuan; Li, Ming; Song, Shunguang

    2015-01-01

    An improved double-factor adaptive Kalman filter called AMA-RWE-DFAKF is proposed to denoise fiber optic gyroscope (FOG) drift signal in both static and dynamic conditions. The first factor is the Kalman gain, updated by random weighting estimation (RWE) of the covariance matrix of the innovation sequence at any time to ensure the lowest noise level of the output, although the inertia of the KF response increases in dynamic conditions. To decrease the inertia, the second factor is the covariance matrix of the predicted state vector, adjusted by RWE only when discontinuities are detected by the adaptive moving average (AMA). The AMA-RWE-DFAKF is applied for denoising FOG static and dynamic signals, and its performance is compared with the conventional KF (CKF), the RWE-based adaptive KF with gain correction (RWE-AKFG), and the AMA- and RWE-based dual-mode adaptive KF (AMA-RWE-DMAKF). Results of Allan variance on the static signal and root mean square error (RMSE) on the dynamic signal show that the proposed algorithm outperforms all the considered methods in denoising the FOG signal. PMID:26512665

  1. Neurochip based on light-addressable potentiometric sensor with wavelet transform de-noising*

    PubMed Central

    Liu, Qing-jun; Ye, Wei-wei; Yu, Hui; Hu, Ning; Du, Li-ping; Wang, Ping

    2010-01-01

    Neurochips based on the light-addressable potentiometric sensor (LAPS), whose sensing elements are excitable cells, can monitor the electrophysiological properties of cultured neuron networks with the cellular signals well analyzed. Here we report a neurochip in which rat pheochromocytoma (PC12) cells are hybridized with a LAPS, and a method for de-noising the signals based on the wavelet transform. Cells were cultured on the LAPS for several days to form networks, and we then used the LAPS system to detect the extracellular potentials, with the signals de-noised according to decomposition in the time-frequency space. The signal was decomposed into various scales, and the coefficients were processed based on the properties of each layer. Finally, the signal was reconstructed from the new coefficients. The results show that after de-noising, baseline drift is removed and the signal-to-noise ratio is increased. This suggests that the neurochip of PC12 cells coupled to a LAPS is stable and suitable for long-term, non-invasive measurement of cell electrophysiological properties with the wavelet transform, taking advantage of its time-frequency localization analysis to reduce noise. PMID:20443210

  2. A criterion for signal-based selection of wavelets for denoising intrafascicular nerve recordings.

    PubMed

    Kamavuako, Ernest Nlandu; Jensen, Winnie; Yoshida, Ken; Kurstjens, Mathijs; Farina, Dario

    2010-02-15

    In this paper we propose a novel method for denoising intrafascicular nerve signals with the aim of improving action potential (AP) detection. The method is based on the stationary wavelet transform and thresholding of the wavelet coefficients. Since the choice of the mother wavelet substantially impacts the performance, a criterion is proposed for selecting the optimal wavelet. The selection criterion was based on the root mean square of the average of the output signal triggered by the detected APs. The mother wavelet was parameterized through the scaling filter, which allowed optimization through the proposed criterion. The method was tested on simulated signals and on experimental neural recordings. Experimental signals were recorded from the tibial branch of the sciatic nerve of three anaesthetized New Zealand white rabbits during controlled muscle stretches. The simulation results showed that the proposed method had an equivalent effect on AP detection performance (percentage of correct detections at 6 dB signal-to-noise ratio, mean+/-SD, 95.3+/-5.2%) to the a posteriori choice of the best wavelet (96.1+/-3.6%). Moreover, AP detection after the proposed denoising method resulted in a correlation of 0.94+/-0.02 between the estimated spike rate and the muscle length. Therefore, the study proposes an effective method for selecting the optimal mother wavelet for denoising neural signals with the aim of improving AP detection.

  3. Wavelet-based adaptive denoising and baseline correction for MALDI TOF MS.

    PubMed

    Shin, Hyunjin; Sampat, Mehul P; Koomen, John M; Markey, Mia K

    2010-06-01

    Proteomic profiling by MALDI TOF mass spectrometry (MS) is an effective method for identifying biomarkers from human serum/plasma, but the process is complicated by the presence of noise in the spectra. In MALDI TOF MS, the major noise source is chemical noise, which is defined as the interference from matrix material and its clusters. Because chemical noise is nonstationary and nonwhite, wavelet-based denoising is more effective than conventional noise reduction schemes based on Fourier analysis. However, current wavelet-based denoising methods for mass spectrometry do not fully consider the characteristics of chemical noise. In this article, we propose new wavelet-based high-frequency noise reduction and baseline correction methods that were designed based on the discrete stationary wavelet transform. The high-frequency noise reduction algorithm adaptively estimates the time-varying threshold for each frequency subband from multiple realizations of chemical noise and removes noise from mass spectra of samples using the estimated thresholds. The baseline correction algorithm computes the monotonically decreasing baseline in the highest approximation of the wavelet domain. The experimental results demonstrate that our algorithms effectively remove artifacts in mass spectra that are due to chemical noise while preserving informative features as compared to commonly used denoising methods.

  4. AMA- and RWE- Based Adaptive Kalman Filter for Denoising Fiber Optic Gyroscope Drift Signal.

    PubMed

    Yang, Gongliu; Liu, Yuanyuan; Li, Ming; Song, Shunguang

    2015-10-23

    An improved double-factor adaptive Kalman filter called AMA-RWE-DFAKF is proposed to denoise fiber optic gyroscope (FOG) drift signal in both static and dynamic conditions. The first factor is the Kalman gain, updated by random weighting estimation (RWE) of the covariance matrix of the innovation sequence at any time to ensure the lowest noise level of the output, although the inertia of the KF response increases in dynamic conditions. To decrease the inertia, the second factor is the covariance matrix of the predicted state vector, adjusted by RWE only when discontinuities are detected by the adaptive moving average (AMA). The AMA-RWE-DFAKF is applied for denoising FOG static and dynamic signals, and its performance is compared with the conventional KF (CKF), the RWE-based adaptive KF with gain correction (RWE-AKFG), and the AMA- and RWE-based dual-mode adaptive KF (AMA-RWE-DMAKF). Results of Allan variance on the static signal and root mean square error (RMSE) on the dynamic signal show that the proposed algorithm outperforms all the considered methods in denoising the FOG signal.
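
    A single-factor sketch of the idea, a scalar random-walk Kalman filter whose measurement-noise estimate is refreshed from recent innovations, is given below; it is our simplified stand-in for the RWE update, and the `q` and `window` values are illustrative.

    ```python
    import numpy as np

    def adaptive_kf_denoise(z, q=1e-6, window=50):
        """Denoise a drift signal with an innovation-adaptive scalar KF."""
        x, p = z[0], 1.0
        r = np.var(z[:window])                 # initial measurement noise
        innovations, out = [], []
        for zi in z:
            p += q                             # predict (near-constant state)
            nu = zi - x                        # innovation
            innovations.append(nu)
            if len(innovations) >= window:     # refresh R from innovations
                r = max(np.var(innovations[-window:]) - p, 1e-12)
            k = p / (p + r)                    # Kalman gain
            x += k * nu
            p *= (1 - k)
            out.append(x)
        return np.asarray(out)
    ```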

  5. A New Image Denoising Algorithm that Preserves Structures of Astronomical Data

    NASA Astrophysics Data System (ADS)

    Bressert, Eli; Edmonds, P.; Kowal Arcand, K.

    2007-05-01

    We have processed numerous X-ray data sets using several well-known algorithms, such as Gaussian and adaptive smoothing, for public image releases. These algorithms are used to denoise/smooth images while retaining the overall structure of the observed objects. Recently, a new PDE algorithm and program, provided by Dr. David Tschumperle and referred to as GREYCstoration, has been tested and is in the process of being implemented by the Chandra EPO imaging group. Results of GREYCstoration will be presented and compared to the currently used methods for X-ray and multiple-wavelength images. What demarcates Tschumperle's algorithm from the algorithms currently used by the EPO imaging group is its ability to strongly preserve the main structures of an image while reducing noise. In addition to denoising images, GREYCstoration can be used to erase artifacts accumulated during the observation and mosaicing stages. GREYCstoration produces results that are comparable to, and in some cases preferable to, those of the currently used denoising/smoothing algorithms. From our early stages of testing, the results of the new algorithm provide insight into its initial capabilities on multiple-wavelength astronomy data sets.
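
    GREYCstoration belongs to the PDE diffusion family; the classic member of that family, Perona-Malik edge-stopping diffusion, is easy to sketch and conveys why such methods preserve structure while smoothing (GREYCstoration itself uses more sophisticated curvature-driven tensor diffusion). The `kappa` and `dt` values below are illustrative.

    ```python
    import numpy as np

    def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
        """Diffuse strongly where gradients are small, weakly across edges."""
        u = img.astype(float).copy()
        for _ in range(n_iter):
            dn = np.roll(u, -1, axis=0) - u    # differences to 4 neighbours
            ds = np.roll(u, 1, axis=0) - u
            de = np.roll(u, -1, axis=1) - u
            dw = np.roll(u, 1, axis=1) - u
            cn = np.exp(-(dn / kappa) ** 2)    # edge-stopping conductances
            cs = np.exp(-(ds / kappa) ** 2)
            ce = np.exp(-(de / kappa) ** 2)
            cw = np.exp(-(dw / kappa) ** 2)
            u += dt * (cn * dn + cs * ds + ce * de + cw * dw)
        return u
    ```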

  6. Denoising of arterial and venous Doppler signals using discrete wavelet transform: effect on clinical parameters.

    PubMed

    Tokmakçi, Mahmut; Erdoğan, Nuri

    2009-05-01

    In this paper, the effects of a wavelet-transform-based denoising strategy on clinical Doppler parameters are analyzed. The study scheme included: (a) acquisition of arterial and venous Doppler signals by sampling the audio output of an ultrasound scanner from 20 healthy volunteers; (b) noise reduction via decomposition of the signals through the discrete wavelet transform; (c) spectral analysis of noisy and noise-free signals with the short-time Fourier transform; (d) curve fitting to spectrograms; (e) calculation of clinical Doppler parameters; (f) statistical comparison of parameters obtained from noisy and noise-free signals. The decomposition level was selected as the highest level at which the maximum power spectral density and its corresponding frequency were preserved. In all subjects, noise-free spectrograms had a smoother trace with fewer ripples. In both arterial and venous spectrograms, denoising resulted in a significant decrease in the maximum (systolic) and mean frequencies, with no statistical difference in the minimum (diastolic) frequency. In arterial signals, this led to a significant decrease in calculated parameters such as the Systolic/Diastolic Velocity Ratio, Resistivity Index, Pulsatility Index, and Acceleration Time. The Acceleration Index did not change significantly. Despite successful denoising, the effects of wavelet decomposition on high-frequency components of the Doppler signal should be validated by comparison with reference data or through clinical investigations. PMID:19470316

  7. Adaptive non-local means filtering based on local noise level for CT denoising

    NASA Astrophysics Data System (ADS)

    Li, Zhoubo; Yu, Lifeng; Trzasko, Joshua D.; Fletcher, Joel G.; McCollough, Cynthia H.; Manduca, Armando

    2012-03-01

    Radiation dose from CT scans is an increasing health concern in the practice of radiology. Higher dose scans can produce clearer images with high diagnostic quality, but may increase the potential risk of radiation-induced cancer or other side effects. Lowering radiation dose alone generally produces a noisier image and may degrade diagnostic performance. Recently, CT dose reduction based on non-local means (NLM) filtering for noise reduction has yielded promising results. However, traditional NLM denoising operates under the assumption that image noise is spatially uniform, whereas in CT images the noise level varies significantly within and across slices. Therefore, applying NLM filtering to CT data using a global filtering strength cannot achieve optimal denoising performance. In this work, we have developed a technique for efficiently estimating the local noise level for CT images, and have modified the NLM algorithm to adapt to local variations in noise level. The local noise level estimates match very well the true noise distribution determined from multiple repeated scans of a phantom object. The modified NLM algorithm provides more effective denoising of CT data throughout a volume, and may allow significant lowering of radiation dose. Both the noise map calculation and the adaptive NLM filtering can be performed in times that allow integration with the clinical workflow.
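
    A simplified version of such locally adaptive NLM filtering can be sketched with scikit-image: estimate the noise level per tile and scale the filtering strength h accordingly. The tile size, overlap, and the h = 0.8*sigma rule below are illustrative assumptions, not the authors' calibrated noise map.

        import numpy as np
        from skimage.restoration import denoise_nl_means, estimate_sigma

        def adaptive_nlm(image, tile=64, patch_size=5, patch_distance=6):
            """Tile-wise NLM whose strength h follows a local noise estimate."""
            out = np.zeros_like(image, dtype=float)
            for i in range(0, image.shape[0], tile):
                for j in range(0, image.shape[1], tile):
                    # overlap tiles slightly so patch searches see context
                    i0, j0 = max(i - 8, 0), max(j - 8, 0)
                    i1 = min(i + tile + 8, image.shape[0])
                    j1 = min(j + tile + 8, image.shape[1])
                    block = image[i0:i1, j0:j1]
                    sigma = max(float(estimate_sigma(block)), 1e-6)  # local noise
                    den = denoise_nl_means(block, patch_size=patch_size,
                                           patch_distance=patch_distance,
                                           h=0.8 * sigma, sigma=sigma)
                    out[i:i + tile, j:j + tile] = den[i - i0:i - i0 + tile,
                                                      j - j0:j - j0 + tile]
            return out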

  8. Diagnostic accuracy of late iodine enhancement on cardiac computed tomography with a denoise filter for the evaluation of myocardial infarction.

    PubMed

    Matsuda, Takuya; Kido, Teruhito; Itoh, Toshihide; Saeki, Hideyuki; Shigemi, Susumu; Watanabe, Kouki; Kido, Tomoyuki; Aono, Shoji; Yamamoto, Masaya; Matsuda, Takeshi; Mochizuki, Teruhito

    2015-12-01

    We evaluated the image quality and diagnostic performance of late iodine enhancement (LIE) in dual-source computed tomography (DSCT) with low kilo-voltage peak (kVp) images and a denoise filter for the detection of acute myocardial infarction (AMI) in comparison with late gadolinium enhancement (LGE) magnetic resonance imaging (MRI). The Hospital Ethics Committee approved the study protocol. Before discharge, 19 patients who received percutaneous coronary intervention after AMI underwent DSCT and 1.5 T MRI. Immediately after coronary computed tomography (CT) angiography, contrast medium was administered at a slow injection rate. LIE-CT scans were acquired via dual-energy CT and reconstructed as 100-, 140-kVp, and mixed images. An iterative three-dimensional edge-preserved smoothing filter was applied to the 100-kVp images to obtain denoised 100-kVp images. The mixed, 140-kVp, 100-kVp, and denoised 100-kVp images were assessed using contrast-to-noise ratio (CNR), and their diagnostic performance in comparison with MRI and infarcted volumes were evaluated. Three hundred four segments of 19 patients were evaluated. Fifty-three segments showed LGE in MRI. The median CNR of the mixed, 140-, 100-kVp and denoised 100-kVp images was 3.49, 1.21, 3.57, and 6.08, respectively. The median CNR was significantly higher in the denoised 100-kVp images than in the other three images (P < 0.05). The denoised 100-kVp images showed the highest diagnostic accuracy and sensitivity. The percentage of myocardium in the four CT image types was significantly correlated with the respective MRI findings. The use of a denoise filter with a low-kVp image can improve CNR, sensitivity, and accuracy in LIE-CT.
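
    The contrast-to-noise ratio used to compare the image types is, in one common formulation, the absolute difference of two ROI means divided by the noise standard deviation; the abstract does not spell out its exact ROI choices, so the following is only a generic sketch.

        import numpy as np

        def cnr(roi_lesion, roi_remote, roi_noise):
            """Generic contrast-to-noise ratio between two tissue ROIs,
            with noise taken as the SD of a reference ROI."""
            return abs(np.mean(roi_lesion) - np.mean(roi_remote)) / np.std(roi_noise)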

  10. Wavelet-based denoising of the Fourier metric in real-time wavefront correction for single molecule localization microscopy

    NASA Astrophysics Data System (ADS)

    Tehrani, Kayvan Forouhesh; Mortensen, Luke J.; Kner, Peter

    2016-03-01

    Wavefront sensorless schemes for correction of aberrations induced by biological specimens require a time-invariant property of an image as a measure of fitness. Image intensity cannot be used as a metric for Single Molecule Localization (SML) microscopy because the intensity of blinking fluorophores follows exponential statistics. Therefore a robust intensity-independent metric is required. We previously reported a Fourier Metric (FM) that is relatively intensity independent. The Fourier metric has been successfully tested on two machine learning algorithms, a Genetic Algorithm and Particle Swarm Optimization, for wavefront correction about 50 μm deep inside the Central Nervous System (CNS) of Drosophila. However, since the spatial frequencies that need to be optimized fall into regions of the Optical Transfer Function (OTF) that are more susceptible to noise, adding a level of denoising can improve performance. Here we present wavelet-based approaches to lower the noise level and produce a more consistent metric. We compare the performance of different wavelet families, such as Daubechies, biorthogonal, and reverse biorthogonal wavelets of different degrees and orders, for pre-processing of the images.

  11. PDE-based Non-Linear Diffusion Techniques for Denoising Scientific and Industrial Images: An Empirical Study

    SciTech Connect

    Weeratunga, S K; Kamath, C

    2001-12-20

    Removing noise from data is often the first step in data analysis. Denoising techniques should not only reduce the noise, but do so without blurring or changing the location of the edges. Many approaches have been proposed to accomplish this; in this paper, they focus on one such approach, namely the use of non-linear diffusion operators. This approach has been studied extensively from a theoretical viewpoint ever since the 1987 work of Perona and Malik showed that non-linear filters outperformed the more traditional linear Canny edge detector. They complement this theoretical work by investigating the performance of several isotropic diffusion operators on test images from scientific domains. They explore the effects of various parameters, such as the choice of diffusivity function, explicit and implicit methods for the discretization of the PDE, and approaches for the spatial discretization of the non-linear operator. They also compare these schemes with simple spatial filters and the more complex wavelet-based shrinkage techniques. The empirical results show that, with an appropriate choice of parameters, diffusion-based schemes can be as effective as competitive techniques.
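
    The scheme at the heart of this family is Perona-Malik diffusion with explicit time stepping; a compact sketch is given below, with the exponential diffusivity, periodic boundaries via np.roll for brevity, and parameters assuming intensities scaled to [0, 1].

        import numpy as np

        def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
            """Explicit-time-step Perona-Malik diffusion with the
            exponential diffusivity g(d) = exp(-(d/kappa)^2)."""
            u = np.asarray(img, dtype=float).copy()
            g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping function
            for _ in range(n_iter):
                dn = np.roll(u, 1, axis=0) - u        # differences to 4 neighbours
                ds = np.roll(u, -1, axis=0) - u
                de = np.roll(u, -1, axis=1) - u
                dw = np.roll(u, 1, axis=1) - u
                u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
            return u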

  12. Spectral and geographical variability in the oceanic response to atmospheric pressure fluctuations, as inferred from “dynamic barometer” Green's functions

    NASA Astrophysics Data System (ADS)

    Dey, N.; Dickman, S. R.

    2010-09-01

    A decade ago, a novel theoretical approach was developed (Dickman, 1998) for determining the dynamic response of the oceans to atmospheric pressure variations, a response nicknamed the "dynamic barometer" (DB), and the effects of that response on Earth's rotation. This approach employed a generalized, spherical harmonic ocean tide model to compute oceanic Green's functions, the oceans' fluid dynamic response to unit-amplitude pressure forcing on various spatial and temporal scales, and then to construct rotational Green's functions representing the rotational effects of that response. When combined with the observed atmospheric pressure field, the rotational Green's functions would yield the effects of the DB on Earth's rotation. The Green's functions reflect the geographical and spectral sensitivity of the oceans to atmospheric pressure forcing. We have formulated a measure of that sensitivity using a simple combination of rotational Green's functions. We find that the DB response of the oceans to atmospheric pressure forcing depends significantly on geographic location and on frequency. Compared to the inverted barometer (IB) (the traditional static model), the DB effects differ slightly at long periods but become very different at shorter periods. Among all the responses, the prograde polar motion effects are the most dynamic, with the response over large portions of the North Atlantic and some of the North Pacific no larger than one third of IB, but over most of the Southern Hemisphere oceans at least 50% greater than IB.

  13. Scene Construction, Visual Foraging, and Active Inference

    PubMed Central

    Mirza, M. Berk; Adams, Rick A.; Mathys, Christoph D.; Friston, Karl J.

    2016-01-01

    This paper describes an active inference scheme for visual searches and the perceptual synthesis entailed by scene construction. Active inference assumes that perception and action minimize variational free energy, where actions are selected to minimize the free energy expected in the future. This assumption generalizes risk-sensitive control and expected utility theory to include epistemic value; namely, the value (or salience) of information inherent in resolving uncertainty about the causes of ambiguous cues or outcomes. Here, we apply active inference to saccadic searches of a visual scene. We consider the (difficult) problem of categorizing a scene, based on the spatial relationship among visual objects where, crucially, visual cues are sampled myopically through a sequence of saccadic eye movements. This means that evidence for competing hypotheses about the scene has to be accumulated sequentially, calling upon both prediction (planning) and postdiction (memory). Our aim is to highlight some simple but fundamental aspects of the requisite functional anatomy; namely, the link between approximate Bayesian inference under mean field assumptions and functional segregation in the visual cortex. This link rests upon the (neurobiologically plausible) process theory that accompanies the normative formulation of active inference for Markov decision processes. In future work, we hope to use this scheme to model empirical saccadic searches and identify the prior beliefs that underwrite intersubject variability in the way people forage for information in visual scenes (e.g., in schizophrenia). PMID:27378899

  14. Scene Construction, Visual Foraging, and Active Inference.

    PubMed

    Mirza, M Berk; Adams, Rick A; Mathys, Christoph D; Friston, Karl J

    2016-01-01

    This paper describes an active inference scheme for visual searches and the perceptual synthesis entailed by scene construction. Active inference assumes that perception and action minimize variational free energy, where actions are selected to minimize the free energy expected in the future. This assumption generalizes risk-sensitive control and expected utility theory to include epistemic value; namely, the value (or salience) of information inherent in resolving uncertainty about the causes of ambiguous cues or outcomes. Here, we apply active inference to saccadic searches of a visual scene. We consider the (difficult) problem of categorizing a scene, based on the spatial relationship among visual objects where, crucially, visual cues are sampled myopically through a sequence of saccadic eye movements. This means that evidence for competing hypotheses about the scene has to be accumulated sequentially, calling upon both prediction (planning) and postdiction (memory). Our aim is to highlight some simple but fundamental aspects of the requisite functional anatomy; namely, the link between approximate Bayesian inference under mean field assumptions and functional segregation in the visual cortex. This link rests upon the (neurobiologically plausible) process theory that accompanies the normative formulation of active inference for Markov decision processes. In future work, we hope to use this scheme to model empirical saccadic searches and identify the prior beliefs that underwrite intersubject variability in the way people forage for information in visual scenes (e.g., in schizophrenia). PMID:27378899

  15. Towards Context Sensitive Information Inference.

    ERIC Educational Resources Information Center

    Song, D.; Bruza, P. D.

    2003-01-01

    Discusses information inference from a psychologistic stance and proposes an information inference mechanism that makes inferences via computations of information flow through an approximation of a conceptual space. Highlights include cognitive economics of information processing; context sensitivity; and query models for information retrieval.…

  16. Multimodel inference and adaptive management

    USGS Publications Warehouse

    Rehme, S.E.; Powell, L.A.; Allen, C.R.

    2011-01-01

    Ecology is an inherently complex science coping with correlated variables, nonlinear interactions and multiple scales of pattern and process, making it difficult for experiments to result in clear, strong inference. Natural resource managers, policy makers, and stakeholders rely on science to provide timely and accurate management recommendations. However, the time necessary to untangle the complexities of interactions within ecosystems is often far greater than the time available to make management decisions. One method of coping with this problem is multimodel inference. Multimodel inference assesses uncertainty by calculating likelihoods among multiple competing hypotheses, but multimodel inference results are often equivocal. Despite this, there may be pressure for ecologists to provide management recommendations regardless of the strength of their study’s inference. We reviewed papers in the Journal of Wildlife Management (JWM) and the journal Conservation Biology (CB) to quantify the prevalence of multimodel inference approaches, the resulting inference (weak versus strong), and how authors dealt with the uncertainty. Thirty-eight percent and 14%, respectively, of articles in the JWM and CB used multimodel inference approaches. Strong inference was rarely observed, with only 7% of JWM and 20% of CB articles resulting in strong inference. We found the majority of weak inference papers in both journals (59%) gave specific management recommendations. Model selection uncertainty was ignored in most recommendations for management. We suggest that adaptive management is an ideal method to resolve uncertainty when research results in weak inference.
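
    In information-theoretic flavours of multimodel inference, the "likelihoods among multiple competing hypotheses" are typically summarized as Akaike weights; a minimal computation is sketched below (the AIC values are invented for illustration).

        import numpy as np

        def akaike_weights(aic):
            """Akaike weights: each candidate model's relative likelihood,
            normalized over the model set."""
            delta = np.asarray(aic, dtype=float) - np.min(aic)   # AIC differences
            rel = np.exp(-0.5 * delta)                           # relative likelihoods
            return rel / rel.sum()

        print(akaike_weights([102.3, 103.1, 110.9]))   # -> ~[0.59, 0.40, 0.01]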

  17. SYMBOLIC INFERENCE OF XENOBIOTIC METABOLISM

    PubMed Central

    MCSHAN, D.C.; UPDADHAYAYA, M.; SHAH, I.

    2009-01-01

    We present a new symbolic computational approach to elucidate the biochemical networks of living systems de novo and we apply it to an important biomedical problem: xenobiotic metabolism. A crucial issue in analyzing and modeling a living organism is understanding its biochemical network beyond what is already known. Our objective is to use the available metabolic information in a representational framework that enables the inference of novel biochemical knowledge and whose results can be validated experimentally. We describe a symbolic computational approach consisting of two parts. First, biotransformation rules are inferred from the molecular graphs of compounds in enzyme-catalyzed reactions. Second, these rules are recursively applied to different compounds to generate novel metabolic networks, containing new biotransformations and new metabolites. Using data for 456 generic reactions and 825 generic compounds from KEGG we were able to extract 110 biotransformation rules, which generalize a subset of known biocatalytic functions. We tested our approach by applying these rules to ethanol, a common substance of abuse and to furfuryl alcohol, a xenobiotic organic solvent, which is absent in metabolic databases. In both cases our predictions on the fate of ethanol and furfuryl alcohol are consistent with the literature on the metabolism of these compounds. PMID:14992532

  18. Bayesian inference for OPC modeling

    NASA Astrophysics Data System (ADS)

    Burbine, Andrew; Sturtevant, John; Fryer, David; Smith, Bruce W.

    2016-03-01

    The use of optical proximity correction (OPC) demands increasingly accurate models of the photolithographic process. Model building and inference techniques in the data science community have seen great strides in the past two decades which make better use of available information. This paper aims to demonstrate the predictive power of Bayesian inference as a method for parameter selection in lithographic models by quantifying the uncertainty associated with model inputs and wafer data. Specifically, the method combines the model builder's prior information about each modelling assumption with the maximization of each observation's likelihood as a Student's t-distributed random variable. Through the use of a Markov chain Monte Carlo (MCMC) algorithm, a model's parameter space is explored to find the most credible parameter values. During parameter exploration, the parameters' posterior distributions are generated by applying Bayes' rule, using a likelihood function and the a priori knowledge supplied. The MCMC algorithm used, an affine invariant ensemble sampler (AIES), is implemented by initializing many walkers which semi-independently explore the space. The convergence of these walkers to global maxima of the likelihood volume determines the parameter values' highest-density intervals (HDIs), revealing champion models. We show that this method of parameter selection provides insights into the data that traditional methods do not, and outline continued experiments to vet the method.
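
    The affine invariant ensemble sampler described here is the algorithm implemented by the emcee package, so the workflow can be sketched on a toy problem; the two-parameter model, noise scale, and priors below are invented for illustration and are not the authors' lithographic model.

        import numpy as np
        import emcee

        # Toy regression stand-in: fit a slope and intercept with a
        # Student's-t likelihood (nu = 4), mirroring the paper's
        # heavy-tailed treatment of observation noise.
        rng = np.random.default_rng(1)
        x = np.linspace(0.0, 1.0, 50)
        y = 2.0 * x + 0.5 + 0.1 * rng.standard_t(df=4, size=50)

        def log_prob(theta):
            m, b = theta
            if abs(m) > 10.0 or abs(b) > 10.0:       # flat prior with bounds
                return -np.inf
            r = (y - (m * x + b)) / 0.1
            return -2.5 * np.sum(np.log1p(r ** 2 / 4.0))   # t(4) log-likelihood

        nwalkers, ndim = 32, 2
        p0 = rng.normal([2.0, 0.5], 0.1, size=(nwalkers, ndim))
        sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
        sampler.run_mcmc(p0, 2000, progress=False)
        post = sampler.get_chain(discard=500, flat=True)   # posterior draws / HDIs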

  19. Dopamine, affordance and active inference.

    PubMed

    Friston, Karl J; Shiner, Tamara; FitzGerald, Thomas; Galea, Joseph M; Adams, Rick; Brown, Harriet; Dolan, Raymond J; Moran, Rosalyn; Stephan, Klaas Enno; Bestmann, Sven

    2012-01-01

    The role of dopamine in behaviour and decision-making is often cast in terms of reinforcement learning and optimal decision theory. Here, we present an alternative view that frames the physiology of dopamine in terms of Bayes-optimal behaviour. In this account, dopamine controls the precision or salience of (external or internal) cues that engender action. In other words, dopamine balances bottom-up sensory information and top-down prior beliefs when making hierarchical inferences (predictions) about cues that have affordance. In this paper, we focus on the consequences of changing tonic levels of dopamine firing using simulations of cued sequential movements. Crucially, the predictions driving movements are based upon a hierarchical generative model that infers the context in which movements are made. This means that we can confuse agents by changing the context (order) in which cues are presented. These simulations provide a (Bayes-optimal) model of contextual uncertainty and set switching that can be quantified in terms of behavioural and electrophysiological responses. Furthermore, one can simulate dopaminergic lesions (by changing the precision of prediction errors) to produce pathological behaviours that are reminiscent of those seen in neurological disorders such as Parkinson's disease. We use these simulations to demonstrate how a single functional role for dopamine at the synaptic level can manifest in different ways at the behavioural level.

  20. Transdimensional Inference in the Geosciences

    NASA Astrophysics Data System (ADS)

    Bodin, Thomas

    2016-04-01

    An inverse problem, a task that arises in many branches of the Earth sciences, requires the values of model parameters describing the Earth to be obtained from noisy observations made at the surface. In all applications of inversion, assumptions are made about the nature of the model parametrisation and the data noise characteristics, and results can depend significantly on those assumptions. These quantities are often manually 'tuned' by means of subjective trial-and-error procedures, which prevents accurate quantification of the uncertainties in the solution. A Bayesian approach allows these assumptions to be relaxed by incorporating the relevant parameters as unknowns in the inference problem. Rather than being forced to decide on the parametrisation, the level of data noise, and the weights between data types in advance, as is often the case in an optimization framework, these choices can be informed by the data themselves. Probabilistic sampling techniques such as transdimensional Markov chain Monte Carlo allow sampling over complex posterior probability density functions, thus providing information on constraints, trade-offs, and uncertainty in the unknowns. This presentation reviews transdimensional inference and its application to different problems, ranging from geochemistry to solid Earth geophysics.

  1. Inference is bliss: using evolutionary relationship to guide categorical inferences.

    PubMed

    Novick, Laura R; Catley, Kefyn M; Funk, Daniel J

    2011-01-01

    Three experiments, adopting an evolutionary biology perspective, investigated subjects' inferences about living things. Subjects were told that different enzymes help regulate cell function in two taxa and asked which enzyme a third taxon most likely uses. Experiment 1 and its follow-up, with college students, used triads involving amphibians, reptiles, and mammals (reptiles and mammals are most closely related evolutionarily) and plants, fungi, and animals (fungi are more closely related to animals than to plants). Experiment 2, with 10th graders, also included triads involving mammals, birds, and snakes/crocodilians (birds and snakes/crocodilians are most closely related). Some subjects received cladograms (hierarchical diagrams) depicting the evolutionary relationships among the taxa. The effect of providing cladograms depended on students' background in biology. The results illuminate students' misconceptions concerning common taxa and constraints on their willingness to override faulty knowledge when given appropriate evolutionary evidence. Implications for introducing tree thinking into biology curricula are discussed. PMID:21463358

  2. Performance evaluation and optimization of BM4D-AV denoising algorithm for cone-beam CT images

    NASA Astrophysics Data System (ADS)

    Huang, Kuidong; Tian, Xiaofei; Zhang, Dinghua; Zhang, Hua

    2015-12-01

    The broadening application of cone-beam computed tomography (CBCT) in medical diagnostics and nondestructive testing necessitates advanced denoising algorithms for its 3D images. The block-matching and four-dimensional filtering algorithm with adaptive variance (BM4D-AV) is applied to 3D image denoising in this research. To optimize it, the key filtering parameters of the BM4D-AV algorithm are first assessed on simulated CBCT images, and a table of optimized filtering parameters is obtained. Then, considering the complexity of the noise in realistic CBCT images, possible noise standard deviations in BM4D-AV are evaluated to establish the selection principle for realistic denoising. The results of the corresponding experiments demonstrate that the BM4D-AV algorithm with optimized parameters provides an excellent denoising effect on realistic 3D CBCT images.

  3. A new modified differential evolution algorithm scheme-based linear frequency modulation radar signal de-noising

    NASA Astrophysics Data System (ADS)

    Dawood Al-Dabbagh, Mohanad; Dawoud Al-Dabbagh, Rawaa; Raja Abdullah, R. S. A.; Hashim, F.

    2015-06-01

    This study investigates the development of a new optimization technique based on the differential evolution (DE) algorithm for linear frequency modulation radar signal de-noising. As the standard DE algorithm is a fixed-length optimizer, it is not suitable for solving signal de-noising problems that call for variability. A modified crossover scheme called rand-length crossover was designed to fit the proposed variable-length DE, and the new DE algorithm is referred to as the random variable-length crossover differential evolution (rvlx-DE) algorithm. The measurement results demonstrate a highly efficient capability for target detection, in terms of frequency response and peak forming, isolated from noise distortion. The modified method showed significant improvements in performance over traditional de-noising techniques.

  4. Scattering Properties of Jovian Tropospheric Cloud Particles Inferred from Cassini/ISS: Mie Scattering Phase Function and Particle Size in South Tropical Zone III

    NASA Astrophysics Data System (ADS)

    Sato, T.; Satoh, T.; Kasaba, Y.

    2010-12-01

    Three distinct cloud layers are predicted by an equilibrium cloud condensation model (ECCM) of Jupiter. An ammonia (NH3) ice cloud, an ammonium hydrosulfide (NH4SH) cloud, and a water (H2O) ice cloud would be based at altitudes corresponding to pressures of about 0.7, 2.2, and 6 bars, respectively. However, there are significant gaps in our knowledge of the vertical cloud structure, despite continuing efforts by numerous ground-based, space-based, and in-situ observations and theory. The altitude distribution of methane (CH4) is considered globally uniform because it does not condense in the Jovian atmosphere. Therefore, it is possible to derive the vertical cloud structure and the optical properties of clouds (i.e., optical thickness and single-scattering albedo) by observing reflected sunlight in CH4 bands (727, 890 nm) and in the continuum at visible to near-infrared wavelengths. Since multiple scattering by clouds must be considered, it is essential to know the scattering properties (e.g., the scattering phase function) of clouds to determine the vertical cloud structure. However, these cannot be derived from ground-based and Earth-orbit observations because of the limited range of solar phase angles viewable from the Earth. Consequently, most previous studies have used the scattering phase function deduced from the Pioneer 10/IPP data (blue: 440 nm, red: 640 nm) [Tomasko et al., 1978]. There are two shortcomings in the Pioneer scattering phase function. One is that the scattering phase function at red wavelengths must be used as a substitute in analyses of imaging photometry in the CH4 bands (centers: 727 and 890 nm), although cloud properties should depend on wavelength. The other is that the red pass band of IPP was so broad (595-720 nm) that this scattering phase function represents only wavelength-averaged scattering properties of clouds. To provide a new reference scattering phase function with wavelength dependency, we have analyzed the Cassini/ISS data in BL1 (451 nm), CB1 (619

  5. Gene-network inference by message passing

    NASA Astrophysics Data System (ADS)

    Braunstein, A.; Pagnani, A.; Weigt, M.; Zecchina, R.

    2008-01-01

    The inference of gene-regulatory processes from gene-expression data belongs to the major challenges of computational systems biology. Here we address the problem from a statistical-physics perspective and develop a message-passing algorithm which is able to infer sparse, directed and combinatorial regulatory mechanisms. Using the replica technique, the algorithmic performance can be characterized analytically for artificially generated data. The algorithm is applied to genome-wide expression data of baker's yeast under various environmental conditions. We find clear cases of combinatorial control, and enrichment in common functional annotations of regulated genes and their regulators.

  6. Denoising techniques combined to Monte Carlo simulations for the prediction of high-resolution portal images in radiotherapy treatment verification

    NASA Astrophysics Data System (ADS)

    Lazaro, D.; Barat, E.; Le Loirec, C.; Dautremer, T.; Montagu, T.; Guérin, L.; Batalla, A.

    2013-05-01

    This work investigates the possibility of combining Monte Carlo (MC) simulations with a denoising algorithm for the accurate prediction of images acquired using amorphous silicon (a-Si) electronic portal imaging devices (EPIDs). An accurate MC model of the Siemens OptiVue1000 EPID was first developed using the penelope code, integrating a non-uniform backscatter model. Two existing denoising algorithms were then applied to simulated portal images, namely the iterative reduction of noise (IRON) method and the locally adaptive Savitzky-Golay (LASG) method. A third denoising method, based on a nonparametric Bayesian framework and called DPGLM (Dirichlet process generalized linear model), was also developed. The performance of the IRON, LASG and DPGLM methods, in terms of smoothing capabilities and computation time, was compared for portal images computed for different values of the RMS pixel noise (up to 10%) in three different configurations: a heterogeneous phantom irradiated by a non-conformal 15 × 15 cm2 field, a conformal beam from a pelvis treatment plan, and an IMRT beam from a prostate treatment plan. For all configurations, DPGLM outperforms both IRON and LASG by providing better smoothing performance and demonstrating better robustness with respect to noise. Additionally, no parameter tuning is required by DPGLM, which makes the denoising step very generic and easy to handle for any portal image. Concerning the computation time, the denoising of 1024 × 1024 images takes about 1 h 30 min, 2 h, and 5 min using DPGLM, IRON, and LASG, respectively. This paper shows the feasibility of predicting, within a few hours and at the same resolution as real images, accurate portal images by combining MC simulations with the DPGLM denoising algorithm.

  7. Wavelet Transform-Based De-Noising for Two-Photon Imaging of Synaptic Ca2+ Transients

    PubMed Central

    Tigaret, Cezar M.; Tsaneva-Atanasova, Krasimira; Collingridge, Graham L.; Mellor, Jack R.

    2013-01-01

    Postsynaptic Ca2+ transients triggered by neurotransmission at excitatory synapses are a key signaling step for the induction of synaptic plasticity and are typically recorded in tissue slices using two-photon fluorescence imaging with Ca2+-sensitive dyes. The signals generated are small, with very low peak signal/noise ratios (pSNRs) that make detailed analysis problematic. Here, we implement a wavelet-based de-noising algorithm (PURE-LET) to enhance the signal/noise ratio of Ca2+ fluorescence transients evoked by single synaptic events under physiological conditions. Using simulated Ca2+ transients with defined noise levels, we analyzed the ability of the PURE-LET algorithm to retrieve the underlying signal. Fitting single Ca2+ transients with an exponential rise and decay model revealed a distortion of τrise but improved accuracy and reliability of τdecay and peak amplitude after PURE-LET de-noising compared to raw signals. The PURE-LET de-noising algorithm also provided a ∼30-dB gain in pSNR, compared to a ∼16-dB gain after an optimized binomial filter. The higher pSNR provided by PURE-LET de-noising increased discrimination accuracy between successes and failures of synaptic transmission (as measured by the occurrence of synaptic Ca2+ transients) by ∼20% relative to an optimized binomial filter. Furthermore, in contrast to the binomial filter, PURE-LET de-noising required no optimization to reduce arbitrary bias. In conclusion, the de-noising of fluorescent Ca2+ transients using PURE-LET enhances detection and characterization of Ca2+ responses at central excitatory synapses. PMID:23473483

  8. Visual Inference Programming

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin; Timucin, Dogan; Rabbette, Maura; Curry, Charles; Allan, Mark; Lvov, Nikolay; Clanton, Sam; Pilewskie, Peter

    2002-01-01

    The goal of visual inference programming is to develop a software framework for data analysis and to provide machine learning algorithms for interactive data exploration and visualization. The topics include: 1) Intelligent Data Understanding (IDU) framework; 2) Challenge problems; 3) What's new here; 4) Framework features; 5) Wiring diagram; 6) Generated script; 7) Results of script; 8) Initial algorithms; 9) Independent Component Analysis for instrument diagnosis; 10) Output sensory mapping virtual joystick; 11) Output sensory mapping typing; 12) Closed-loop feedback mu-rhythm control; 13) Closed-loop training; 14) Data sources; and 15) Algorithms. This paper is in viewgraph form.

  9. Graphical inference for Infovis.

    PubMed

    Wickham, Hadley; Cook, Dianne; Hofmann, Heike; Buja, Andreas

    2010-01-01

    How do we know if what we see is really there? When visualizing data, how do we avoid falling into the trap of apophenia where we see patterns in random noise? Traditionally, infovis has been concerned with discovering new relationships, and statistics with preventing spurious relationships from being reported. We pull these opposing poles closer with two new techniques for rigorous statistical inference of visual discoveries. The "Rorschach" helps the analyst calibrate their understanding of uncertainty and "line-up" provides a protocol for assessing the significance of visual discoveries, protecting against the discovery of spurious structure.
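
    A bare-bones "line-up" can be generated by hiding the real plot among null plots produced by permutation; the panel layout and the permutation null below are illustrative choices, not the authors' full protocol.

        import numpy as np
        import matplotlib.pyplot as plt

        # Hide the real scatter plot among 19 decoys generated under the null
        # hypothesis of no x-y association (y permuted). If a viewer cannot
        # single out the real panel, the visible pattern is not credible.
        rng = np.random.default_rng(0)
        x = rng.normal(size=100)
        y = 0.3 * x + rng.normal(size=100)        # weak true relationship
        real = int(rng.integers(20))              # position of the real plot
        fig, axes = plt.subplots(4, 5, figsize=(10, 8))
        for idx, ax in enumerate(axes.flat):
            yy = y if idx == real else rng.permutation(y)
            ax.scatter(x, yy, s=5)
            ax.set_xticks([]); ax.set_yticks([])
        fig.savefig("lineup.png")
        print("the real data are in panel", real)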

  10. Functional interactions between OCA2 and the protein complexes BLOC-1, BLOC-2, and AP-3 inferred from epistatic analyses of mouse coat pigmentation.

    PubMed

    Hoyle, Diego J; Rodriguez-Fernandez, Imilce A; Dell'angelica, Esteban C

    2011-04-01

    The biogenesis of melanosomes is a multistage process that requires the function of cell-type-specific and ubiquitously expressed proteins. OCA2, the product of the gene defective in oculocutaneous albinism type 2, is a melanosomal membrane protein with restricted expression pattern and a potential role in the trafficking of other proteins to melanosomes. The ubiquitous protein complexes AP-3, BLOC-1, and BLOC-2, which contain as subunits the products of genes defective in various types of Hermansky-Pudlak syndrome, have been likewise implicated in trafficking to melanosomes. We have tested for genetic interactions between mutant alleles causing deficiency in OCA2 (pink-eyed dilution unstable), AP-3 (pearl), BLOC-1 (pallid), and BLOC-2 (cocoa) in C57BL/6J mice. The pallid allele was epistatic to pink-eyed dilution, and the latter behaved as a semi-dominant phenotypic enhancer of cocoa and, to a lesser extent, of pearl. These observations suggest functional links between OCA2 and these three protein complexes involved in melanosome biogenesis.

  11. Terrestrial Laser Scanner Data Denoising by Dictionary Learning of Sparse Coding

    NASA Astrophysics Data System (ADS)

    Smigiel, E.; Alby, E.; Grussenmeyer, P.

    2013-07-01

    Point cloud processing is basically a signal processing issue. The huge amount of data collected with terrestrial laser scanners or photogrammetry techniques faces the classical questions of signal and image processing. Among others, denoising and compression are questions which have to be addressed in this context. That is why one should turn to signal theory, which can guide good practice and inspire new ideas from the latest developments in the field. The literature has shown for decades how strong and dynamic the theoretical field is, and how efficient the derived algorithms have become. About ten years ago, a new technique appeared: known as compressive sensing or compressive sampling, it is based on sparsity, an interesting characteristic of many natural signals. Based on this concept, many denoising and compression techniques have demonstrated their efficiency. Sparsity can also be seen as redundancy removal from natural signals. Combined with incoherent measurements, compressive sensing uses the idea that redundancy can be removed at the very earliest stage: instead of sampling the signal at a high rate and removing redundancy in a second stage, the acquisition stage itself may be run with redundancy removal. This paper presents some theoretical aspects of these ideas, starting with simple mathematics. The idea of compressive sensing for a terrestrial laser scanner is then examined as a potential research question, and finally a denoising scheme based on dictionary learning of sparse coding is tested. Both the theoretical discussion and the obtained results show that it is worth staying close to signal processing theory and its community to benefit from its latest developments.
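
    The final experiment, denoising via dictionary learning of sparse coding, has a close 2-D analogue in scikit-learn: learn the dictionary on noisy patches and reconstruct each patch from a few atoms via OMP. The patch size, atom count, and sparsity level below are illustrative, and the point-cloud setting of the paper is not reproduced.

        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning
        from sklearn.feature_extraction.image import (extract_patches_2d,
                                                      reconstruct_from_patches_2d)

        def dictionary_denoise(noisy, patch=(6, 6), n_atoms=128):
            """Classical sparse-coding denoising of a 2-D image; keep the
            image small, since all patches are coded (brute force)."""
            train = extract_patches_2d(noisy, patch, max_patches=5000, random_state=0)
            X = train.reshape(len(train), -1)
            dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                               transform_algorithm="omp",
                                               transform_n_nonzero_coefs=4,
                                               random_state=0)
            dico.fit(X - X.mean(axis=1, keepdims=True))   # learn atoms on noisy data
            allp = extract_patches_2d(noisy, patch)
            Xa = allp.reshape(len(allp), -1)
            mean = Xa.mean(axis=1, keepdims=True)
            recon = dico.transform(Xa - mean) @ dico.components_ + mean
            return reconstruct_from_patches_2d(recon.reshape(allp.shape), noisy.shape)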

  12. Automatic brain MR image denoising based on texture feature-based artificial neural networks.

    PubMed

    Chang, Yu-Ning; Chang, Herng-Hua

    2015-01-01

    Noise is one of the main sources of quality deterioration not only for visual inspection but also in computerized processing in brain magnetic resonance (MR) image analysis such as tissue classification, segmentation and registration. Accordingly, noise removal in brain MR images is important for a wide variety of subsequent processing applications. However, most existing denoising algorithms require laborious tuning of parameters that are often sensitive to specific image features and textures. Automation of these parameters through artificial intelligence techniques will be highly beneficial. In the present study, an artificial neural network associated with image texture feature analysis is proposed to establish a predictable parameter model and automate the denoising procedure. In the proposed approach, a total of 83 image attributes were extracted based on four categories: 1) basic image statistics, 2) gray-level co-occurrence matrix (GLCM), 3) gray-level run-length matrix (GLRLM), and 4) Tamura texture features. To obtain the ranking of discrimination in these texture features, a paired-samples t-test was applied to each individual image feature computed in every image. Subsequently, the sequential forward selection (SFS) method was used to select the best texture features according to the ranking of discrimination. The selected optimal features were further incorporated into a back propagation neural network to establish a predictable parameter model. A wide variety of MR images with various scenarios were adopted to evaluate the performance of the proposed framework. Experimental results indicated that this new automation system accurately predicted the bilateral filtering parameters and effectively removed the noise in a number of MR images. Compared to the manually tuned filtering process, our approach not only produced better denoising results but also saved significant processing time.

  14. Computationally efficient Bayesian inference for inverse problems.

    SciTech Connect

    Marzouk, Youssef M.; Najm, Habib N.; Rahn, Larry A.

    2007-10-01

    Bayesian statistics provides a foundation for inference from noisy and incomplete data, a natural mechanism for regularization in the form of prior information, and a quantitative assessment of uncertainty in the inferred results. Inverse problems - representing indirect estimation of model parameters, inputs, or structural components - can be fruitfully cast in this framework. Complex and computationally intensive forward models arising in physical applications, however, can render a Bayesian approach prohibitive. This difficulty is compounded by high-dimensional model spaces, as when the unknown is a spatiotemporal field. We present new algorithmic developments for Bayesian inference in this context, showing strong connections with the forward propagation of uncertainty. In particular, we introduce a stochastic spectral formulation that dramatically accelerates the Bayesian solution of inverse problems via rapid evaluation of a surrogate posterior. We also explore dimensionality reduction for the inference of spatiotemporal fields, using truncated spectral representations of Gaussian process priors. These new approaches are demonstrated on scalar transport problems arising in contaminant source inversion and in the inference of inhomogeneous material or transport properties. We also present a Bayesian framework for parameter estimation in stochastic models, where intrinsic stochasticity may be intermingled with observational noise. Evaluation of a likelihood function may not be analytically tractable in these cases, and thus several alternative Markov chain Monte Carlo (MCMC) schemes, operating on the product space of the observations and the parameters, are introduced.

  15. Deep Learning for Population Genetic Inference

    PubMed Central

    Sheehan, Sara; Song, Yun S.

    2016-01-01

    Given genomic variation data from multiple individuals, computing the likelihood of complex population genetic models is often infeasible. To circumvent this problem, we introduce a novel likelihood-free inference framework by applying deep learning, a powerful modern technique in machine learning. Deep learning makes use of multilayer neural networks to learn a feature-based function from the input (e.g., hundreds of correlated summary statistics of data) to the output (e.g., population genetic parameters of interest). We demonstrate that deep learning can be effectively employed for population genetic inference and learning informative features of data. As a concrete application, we focus on the challenging problem of jointly inferring natural selection and demography (in the form of a population size change history). Our method is able to separate the global nature of demography from the local nature of selection, without sequential steps for these two factors. Studying demography and selection jointly is motivated by Drosophila, where pervasive selection confounds demographic analysis. We apply our method to 197 African Drosophila melanogaster genomes from Zambia to infer both their overall demography, and regions of their genome under selection. We find many regions of the genome that have experienced hard sweeps, and fewer under selection on standing variation (soft sweep) or balancing selection. Interestingly, we find that soft sweeps and balancing selection occur more frequently closer to the centromere of each chromosome. In addition, our demographic inference suggests that previously estimated bottlenecks for African Drosophila melanogaster are too extreme. PMID:27018908

  16. Deep Learning for Population Genetic Inference.

    PubMed

    Sheehan, Sara; Song, Yun S

    2016-03-01

    Given genomic variation data from multiple individuals, computing the likelihood of complex population genetic models is often infeasible. To circumvent this problem, we introduce a novel likelihood-free inference framework by applying deep learning, a powerful modern technique in machine learning. Deep learning makes use of multilayer neural networks to learn a feature-based function from the input (e.g., hundreds of correlated summary statistics of data) to the output (e.g., population genetic parameters of interest). We demonstrate that deep learning can be effectively employed for population genetic inference and learning informative features of data. As a concrete application, we focus on the challenging problem of jointly inferring natural selection and demography (in the form of a population size change history). Our method is able to separate the global nature of demography from the local nature of selection, without sequential steps for these two factors. Studying demography and selection jointly is motivated by Drosophila, where pervasive selection confounds demographic analysis. We apply our method to 197 African Drosophila melanogaster genomes from Zambia to infer both their overall demography, and regions of their genome under selection. We find many regions of the genome that have experienced hard sweeps, and fewer under selection on standing variation (soft sweep) or balancing selection. Interestingly, we find that soft sweeps and balancing selection occur more frequently closer to the centromere of each chromosome. In addition, our demographic inference suggests that previously estimated bottlenecks for African Drosophila melanogaster are too extreme. PMID:27018908
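
    The core idea, regressing from summary statistics to parameters with a neural network instead of evaluating an intractable likelihood, can be illustrated on a toy simulator; the normal model (rather than a population-genetic one) and the small sklearn MLP (rather than the authors' deep architecture) are stand-ins.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Map summary statistics of simulated data to the generating parameter.
        rng = np.random.default_rng(0)
        theta = rng.uniform(-2.0, 2.0, size=5000)                  # parameters
        sims = rng.normal(theta[:, None], 1.0, size=(5000, 100))   # simulated data
        stats = np.column_stack([sims.mean(axis=1), sims.std(axis=1),
                                 np.median(sims, axis=1)])          # summaries
        net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
        net.fit(stats[:4000], theta[:4000])
        print(np.corrcoef(net.predict(stats[4000:]), theta[4000:])[0, 1])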

  18. A method for the dynamic analysis of the heart using a Lyapounov based denoising algorithm.

    PubMed

    Nascimento, Jacinto C; Sanches, João M; Marques, Jorge S

    2006-01-01

    Heart tracking in ultrasound sequences is a difficult task due to speckle noise, low SNR and lack of contrast. Therefore it is usually difficult to obtain robust estimates of the heart cavities since feature detectors produce a large number of outliers. This paper presents an algorithm which combines two main operations: i) a novel denoising algorithm based on the Lyapounov equation and ii) a robust tracker, recently proposed by the authors, based on a model of the outlier features. Experimental results are provided, showing that the proposed algorithm is computationally efficient and leads to accurate estimates of the left ventricle during the cardiac cycle.

  19. Denoising of human speech using combined acoustic and em sensor signal processing

    SciTech Connect

    Ng, L C; Burnett, G C; Holzrichter, J F; Gable, T J

    1999-11-29

    Low-power EM radar-like sensors have made it possible to measure properties of the human speech production system in real time, without acoustic interference. This greatly enhances the quality and quantity of information for many speech-related applications. See Holzrichter, Burnett, Ng, and Lea, J. Acoust. Soc. Am. 103 (1) 622 (1998). By using combined glottal-EM-sensor and acoustic signals, segments of voiced, unvoiced, and non-speech can be reliably defined. Real-time denoising filters can be constructed to remove noise from the user's corresponding speech signal.

  20. [Adaptive de-noising of ECG signal based on stationary wavelet transform].

    PubMed

    Dong, Hong-sheng; Zhang, Ai-hua; Hao, Xiao-hong

    2009-03-01

    To address the limitations of wavelet-threshold de-noising methods, we propose an algorithm combining the stationary wavelet transform with adaptive filtering. The stationary wavelet transform effectively suppresses the Gibbs phenomena of the traditional DWT, and an adaptive filter is introduced at the high-scale wavelet coefficients of the stationary wavelet transform. This removes baseline wander while preserving the shape of the low-frequency, low-amplitude P wave, T wave, and ST segment of the ECG signal, which is important for subsequent analysis of ECG feature information.
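
    A rough PyWavelets rendering of the SWT pipeline is sketched below: threshold the detail bands and zero the coarsest approximation to remove baseline wander. Plain universal soft thresholding stands in for the paper's adaptive filter, and the wavelet, level, and threshold rule are illustrative assumptions.

        import numpy as np
        import pywt

        def swt_denoise_ecg(ecg, wavelet="sym8", level=4):
            """Stationary-wavelet-transform de-noising of a 1-D ECG trace."""
            n = len(ecg)
            pad = (-n) % (2 ** level)          # SWT needs length divisible by 2^level
            x = np.pad(np.asarray(ecg, dtype=float), (0, pad), mode="edge")
            coeffs = pywt.swt(x, wavelet, level=level)   # [(cA_L, cD_L), ..., (cA_1, cD_1)]
            sigma = np.median(np.abs(coeffs[-1][1])) / 0.6745   # noise from finest band
            thr = sigma * np.sqrt(2.0 * np.log(len(x)))
            coeffs = [(np.zeros_like(cA) if i == 0 else cA,     # drop coarsest approx
                       pywt.threshold(cD, thr, mode="soft"))
                      for i, (cA, cD) in enumerate(coeffs)]
            return pywt.iswt(coeffs, wavelet)[:n]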

  1. New methods for MRI denoising based on sparseness and self-similarity.

    PubMed

    Manjón, José V; Coupé, Pierrick; Buades, Antonio; Louis Collins, D; Robles, Montserrat

    2012-01-01

    This paper proposes two new methods for the three-dimensional denoising of magnetic resonance images that exploit the sparseness and self-similarity properties of the images. The proposed methods are based on a three-dimensional moving-window discrete cosine transform hard thresholding and a three-dimensional rotationally invariant version of the well-known nonlocal means filter. The proposed approaches were compared with related state-of-the-art methods and produced very competitive results. Both methods run in less than a minute, making them usable in most clinical and research settings. PMID:21570894
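
    The first ingredient, moving-window DCT hard thresholding, is easy to sketch with SciPy; non-overlapping 3-D blocks are used for brevity (the paper's moving window overlaps and averages), and the block size and threshold are illustrative.

        import numpy as np
        from scipy.fft import dctn, idctn

        def dct_hard_threshold(vol, block=8, lam=3.0, sigma=0.05):
            """Block-wise 3-D DCT hard thresholding of a noisy volume;
            partial edge blocks are simply transformed at their smaller size."""
            out = np.zeros_like(vol, dtype=float)
            for i in range(0, vol.shape[0], block):
                for j in range(0, vol.shape[1], block):
                    for k in range(0, vol.shape[2], block):
                        cube = vol[i:i+block, j:j+block, k:k+block]
                        c = dctn(cube, norm="ortho")
                        c[np.abs(c) < lam * sigma] = 0.0    # hard threshold
                        out[i:i+block, j:j+block, k:k+block] = idctn(c, norm="ortho")
            return out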

  2. [A fast non-local means algorithm for denoising of computed tomography images].

    PubMed

    Kang, Changqing; Cao, Wenping; Fang, Lei; Hua, Li; Cheng, Hong

    2012-11-01

    A fast non-local means image denoising algorithm is presented, based on the single-motif character of existing computed tomography images in medical archiving systems. The algorithm is carried out in two steps: preprocessing and actual processing. In the preprocessing stage, a database of sample neighborhoods is created using a locality-sensitive-hashing data structure. The CT image noise is then removed by the non-local means algorithm, with the sample neighborhoods accessed rapidly through locality sensitive hashing. The experimental results showed that the proposed algorithm could greatly reduce the execution time compared to NLM, while effectively preserving image edges and details.

  3. Video Denoising by Fuzzy Directional Filter Using the DSP EVM DM642

    NASA Astrophysics Data System (ADS)

    Gallegos-Funes, Francisco J.; Kravchenko, Victor; Ponomaryov, Volodymyr; Rosales-Silva, Alberto

    We present a new 3D fuzzy directional (3D-FD) algorithm for the denoising of colour video sequences corrupted by impulsive noise. The proposed approach estimates motion levels and noise in neighbouring video frames, permitting the preservation of edges, fine details, and chromaticity characteristics in the video sequences. Experimental results show that noise in these sequences can be efficiently removed by the proposed 3D-FD filter, and that the method outperforms other state-of-the-art filters of comparable complexity on video sequences. Finally, hardware requirements are evaluated, permitting real-time implementation on the DSP EVM DM642.

  4. A new denoising method in high-dimensional PCA-space

    NASA Astrophysics Data System (ADS)

    Do, Quoc Bao; Beghdadi, Azeddine; Luong, Marie

    2012-03-01

    Kernel-design-based methods, such as the bilateral filter (BIL) and the non-local means (NLM) filter, are known as some of the most attractive approaches for denoising. We propose in this paper a new noise filtering method inspired by BIL, NLM filters, and principal component analysis (PCA). The main idea is to perform the BIL in a multidimensional PCA space using an anisotropic kernel. The filtered multidimensional signal is then transformed back into the image spatial domain to yield the desired enhanced image. In this work, it is demonstrated that the proposed method is a generalization of kernel-design-based methods. The obtained results are highly promising.
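
    A brute-force sketch of filtering in a PCA space of patches is given below: each pixel is replaced by a weighted average of pixels whose patches lie nearby in a low-dimensional PCA embedding. This keeps only the range kernel (closer to NLM than to the paper's full anisotropic bilateral kernel), and the patch size, component count, and bandwidth are illustrative.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.feature_extraction.image import extract_patches_2d

        def pca_space_filter(img, patch=5, n_comp=6, h=0.15):
            """Range-kernel filtering in PCA space of patches; O(N^2) per
            image, so suitable for small images only."""
            pad = patch // 2
            P = extract_patches_2d(np.pad(img, pad, mode="reflect"),
                                   (patch, patch)).reshape(img.size, -1)
            Z = PCA(n_components=n_comp).fit_transform(P)   # PCA-space coordinates
            centers = P[:, P.shape[1] // 2]                 # central pixel of each patch
            out = np.empty(img.size)
            for n in range(img.size):
                w = np.exp(-((Z - Z[n]) ** 2).sum(axis=1) / (2.0 * h * h))
                out[n] = (w * centers).sum() / w.sum()
            return out.reshape(img.shape)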

  5. Source mechanism of long-period events at Kusatsu-Shirane Volcano, Japan, inferred from waveform inversion of the effective excitation functions

    USGS Publications Warehouse

    Nakano, M.; Kumagai, H.; Chouet, B.A.

    2003-01-01

    We investigate the source mechanism of long-period (LP) events observed at Kusatsu-Shirane Volcano, Japan, based on waveform inversions of their effective excitation functions. The effective excitation function, which represents the apparent excitation observed at individual receivers, is estimated by applying an autoregressive filter to the LP waveform. Assuming a point source, we apply this method to seven LP events whose waveforms are characterized by simple decaying and nearly monochromatic oscillations with frequencies in the range 1-3 Hz. The results of the waveform inversions show dominant volumetric change components accompanied by single force components, common to all the events analyzed, suggesting a repeated activation of a sub-horizontal crack located 300 m beneath the summit crater lakes. Based on these results, we propose a model of the source process of LP seismicity, in which a gradual buildup of steam pressure in a hydrothermal crack in response to magmatic heat causes repeated discharges of steam from the crack. The rapid discharge of fluid causes the collapse of the fluid-filled crack and excites acoustic oscillations of the crack, which produce the characteristic waveforms observed in the LP events. The presence of a single force synchronous with the collapse of the crack is interpreted as the release of gravitational energy that occurs as the slug of steam ejected from the crack ascends toward the surface and is replaced by cooler water flowing downward in a fluid-filled conduit linking the crack and the base of the crater lake. © 2003 Elsevier Science B.V. All rights reserved.

  6. THE ABUNDANCES OF HYDROCARBON FUNCTIONAL GROUPS IN THE INTERSTELLAR MEDIUM INFERRED FROM LABORATORY SPECTRA OF HYDROGENATED AND METHYLATED POLYCYCLIC AROMATIC HYDROCARBONS

    SciTech Connect

    Steglich, M.; Jäger, C.; Huisken, F.; Friedrich, M.; Plass, W.; Räder, H.-J.; Müllen, K.; Henning, Th.

    2013-10-01

    Infrared (IR) absorption spectra of individual polycyclic aromatic hydrocarbons (PAHs) containing methyl (-CH3), methylene (CH2), or diamond-like CH groups and IR spectra of mixtures of methylated and hydrogenated PAHs prepared by gas-phase condensation were measured at room temperature (as grains in pellets) and at low temperature (isolated in Ne matrices). In addition, the PAH blends were subjected to an in-depth molecular structure analysis by means of high-performance liquid chromatography, nuclear magnetic resonance spectroscopy, and matrix-assisted laser desorption/ionization time-of-flight mass spectrometry. Supported by calculations at the density functional theory level, the laboratory results were applied to analyze in detail the aliphatic absorption complex of the diffuse interstellar medium at 3.4 μm and to determine the abundances of hydrocarbon functional groups. Assuming that the PAHs are mainly locked in grains, aliphatic CHx groups (x = 1, 2, 3) would contribute approximately in equal quantities to the 3.4 μm feature (N_CHx/N_H ≈ 10^-5 to 2 × 10^-5). The abundances, however, may be two to four times lower if a major contribution to the 3.4 μm feature comes from molecules in the gas phase. Aromatic ≅CH groups seem to be almost absent from some lines of sight, but can be nearly as abundant as each of the aliphatic components in other directions (N_≅CH/N_H ≲ 2 × 10^-5; upper value for grains). Due to comparatively low binding energies, astronomical IR emission sources do not display such heavy excess hydrogenation. At best, especially in protoplanetary nebulae, CH2 groups bound to aromatic molecules, i.e., excess hydrogens on the molecular periphery only, can survive the presence of a nearby star.

  7. Circular inferences in schizophrenia.

    PubMed

    Jardri, Renaud; Denève, Sophie

    2013-11-01

    A considerable number of recent experimental and computational studies suggest that subtle impairments of excitatory to inhibitory balance or regulation are involved in many neurological and psychiatric conditions. The current paper aims to relate, specifically and quantitatively, excitatory to inhibitory imbalance with psychotic symptoms in schizophrenia. Considering that the brain constructs hierarchical causal models of the external world, we show that the failure to maintain the excitatory to inhibitory balance results in hallucinations as well as in the formation and subsequent consolidation of delusional beliefs. Indeed, the consequence of excitatory to inhibitory imbalance in a hierarchical neural network is equated to a pathological form of causal inference called 'circular belief propagation'. In circular belief propagation, bottom-up sensory information and top-down predictions are reverberated, i.e. prior beliefs are misinterpreted as sensory observations and vice versa. As a result, these predictions are counted multiple times. Circular inference explains the emergence of erroneous percepts, the patient's overconfidence when facing probabilistic choices, the learning of 'unshakable' causal relationships between unrelated events and a paradoxical immunity to perceptual illusions, which are all known to be associated with schizophrenia. PMID:24065721
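
    The double-counting at the heart of circular belief propagation can be illustrated with a toy Bayesian update: if the same bottom-up likelihood message is reverberated and counted k times, weak evidence drives the posterior toward certainty. A minimal numerical sketch (the likelihood ratio of 1.5 is an arbitrary illustrative value, not a quantity from the paper):

      def posterior(prior, likelihood_ratio, count):
          """Posterior when the same bottom-up message is counted `count` times."""
          odds = (prior / (1 - prior)) * likelihood_ratio ** count
          return odds / (1 + odds)

      # Weak sensory evidence (likelihood ratio 1.5) and a neutral prior:
      for k in (1, 2, 4, 8):
          print(k, round(posterior(0.5, 1.5, k), 3))
      # prints 0.6, 0.692, 0.835, 0.962: reverberated message counting
      # turns weak evidence into near-certainty (an 'unshakable' belief).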

  8. Moment inference from tomograms

    USGS Publications Warehouse

    Day-Lewis, F. D.; Chen, Y.; Singha, K.

    2007-01-01

    Time-lapse geophysical tomography can provide valuable qualitative insights into hydrologic transport phenomena associated with aquifer dynamics, tracer experiments, and engineered remediation. Increasingly, tomograms are used to infer the spatial and/or temporal moments of solute plumes; these moments provide quantitative information about transport processes (e.g., advection, dispersion, and rate-limited mass transfer) and controlling parameters (e.g., permeability, dispersivity, and rate coefficients). The reliability of moments calculated from tomograms is, however, poorly understood because classic approaches to image appraisal (e.g., the model resolution matrix) are not directly applicable to moment inference. Here, we present a semi-analytical approach to construct a moment resolution matrix based on (1) the classic model resolution matrix and (2) image reconstruction from orthogonal moments. Numerical results for radar and electrical-resistivity imaging of solute plumes demonstrate that moment values calculated from tomograms depend strongly on plume location within the tomogram, survey geometry, regularization criteria, and measurement error. Copyright 2007 by the American Geophysical Union.
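
    For concreteness, the spatial moments in question are straightforward to compute from a tomogram-derived 2-D concentration field; a minimal sketch follows. The paper's contribution, the moment resolution matrix that quantifies how reliable such estimates are, is not reproduced here.

      import numpy as np

      def plume_moments(c, x, y):
          # c: 2-D concentration field sampled on grid vectors x (rows), y (cols)
          X, Y = np.meshgrid(x, y, indexing='ij')
          m0 = c.sum()                                     # total mass
          xc, yc = (X * c).sum() / m0, (Y * c).sum() / m0  # centroid (advection)
          sxx = ((X - xc) ** 2 * c).sum() / m0             # spread (dispersion)
          syy = ((Y - yc) ** 2 * c).sum() / m0
          return m0, (xc, yc), (sxx, syy)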

  9. Circular inferences in schizophrenia.

    PubMed

    Jardri, Renaud; Denève, Sophie

    2013-11-01

    A considerable number of recent experimental and computational studies suggest that subtle impairments of excitatory to inhibitory balance or regulation are involved in many neurological and psychiatric conditions. The current paper aims to relate, specifically and quantitatively, excitatory to inhibitory imbalance with psychotic symptoms in schizophrenia. Considering that the brain constructs hierarchical causal models of the external world, we show that the failure to maintain the excitatory to inhibitory balance results in hallucinations as well as in the formation and subsequent consolidation of delusional beliefs. Indeed, the consequence of excitatory to inhibitory imbalance in a hierarchical neural network is equated to a pathological form of causal inference called 'circular belief propagation'. In circular belief propagation, bottom-up sensory information and top-down predictions are reverberated, i.e. prior beliefs are misinterpreted as sensory observations and vice versa. As a result, these predictions are counted multiple times. Circular inference explains the emergence of erroneous percepts, the patient's overconfidence when facing probabilistic choices, the learning of 'unshakable' causal relationships between unrelated events and a paradoxical immunity to perceptual illusions, which are all known to be associated with schizophrenia.

  10. Crustal structure and configuration of the subducting Philippine Sea plate beneath the Pacific coast industrial zone in Japan inferred from receiver function analysis

    NASA Astrophysics Data System (ADS)

    Igarashi, T.; Iidaka, T.; Sakai, S.; Hirata, N.

    2012-12-01

    We apply receiver function (RF) analyses to estimate the crustal structure and configuration of the subducting Philippine Sea (PHS) plate beneath the Pacific coast industrial zone stretching from Tokyo to Fukuoka in Japan. Destructive earthquakes have often occurred at the interface of the PHS plate, and seismic activity around the Tokyo metropolitan area increased after the 2011 Tohoku earthquake (Mw 9.0). Investigation of the crustal structure is key to understanding the stress concentration and strain accumulation process, and information on the configuration of the subducting plate is important for mitigating future earthquake disasters. In this study, we searched for the velocity structure model best correlating the observed receiver function at each station with synthetic ones, using a grid search method. Synthetic RFs were calculated from many assumed one-dimensional velocity structures consisting of four layers with positive velocity steps. Observed receiver functions were stacked without considering back azimuth or epicentral distance. We further constructed vertical cross-sections of depth-converted RF images, transforming the lapse time of the time series to depth using the estimated structure models. We used telemetric seismographic network data covering the Japanese Islands, including the Metropolitan Seismic Observation network, which was constructed under the Special Project for Earthquake Disaster Mitigation in the Tokyo Metropolitan Area and is maintained by the Special Project for Reducing Vulnerability to Urban Mega Earthquake Disasters. We selected events with magnitudes greater than or equal to 5.0 and epicentral distances between 30 and 90 degrees based on USGS catalogues. As a result, we clarify the spatial distribution of crustal S-wave velocities. The estimated average one-dimensional S-wave velocity structure is approximately equal to the JMA2011 structural model, although the velocity from the ground surface to 5 km depth is slow. In particular

  11. The Abundances of Hydrocarbon Functional Groups in the Interstellar Medium Inferred from Laboratory Spectra of Hydrogenated and Methylated Polycyclic Aromatic Hydrocarbons

    NASA Astrophysics Data System (ADS)

    Steglich, M.; Jäger, C.; Huisken, F.; Friedrich, M.; Plass, W.; Räder, H.-J.; Müllen, K.; Henning, Th.

    2013-10-01

    Infrared (IR) absorption spectra of individual polycyclic aromatic hydrocarbons (PAHs) containing methyl (-CH3), methylene (CH2), or diamond-like CH groups and IR spectra of mixtures of methylated and hydrogenated PAHs prepared by gas-phase condensation were measured at room temperature (as grains in pellets) and at low temperature (isolated in Ne matrices). In addition, the PAH blends were subjected to an in-depth molecular structure analysis by means of high-performance liquid chromatography, nuclear magnetic resonance spectroscopy, and matrix-assisted laser desorption/ionization time-of-flight mass spectrometry. Supported by calculations at the density functional theory level, the laboratory results were applied to analyze in detail the aliphatic absorption complex of the diffuse interstellar medium at 3.4 μm and to determine the abundances of hydrocarbon functional groups. Assuming that the PAHs are mainly locked in grains, aliphatic CHx groups (x = 1, 2, 3) would contribute approximately in equal quantities to the 3.4 μm feature (N_CHx/N_H ≈ 10^-5 to 2 × 10^-5). The abundances, however, may be two to four times lower if a major contribution to the 3.4 μm feature comes from molecules in the gas phase. Aromatic ≅CH groups seem to be almost absent from some lines of sight, but can be nearly as abundant as each of the aliphatic components in other directions (N_≅CH/N_H ≲ 2 × 10^-5; upper value for grains). Due to comparatively low binding energies, astronomical IR emission sources do not display such heavy excess hydrogenation. At best, especially in protoplanetary nebulae, CH2 groups bound to aromatic molecules, i.e., excess hydrogens on the molecular periphery only, can survive the presence of a nearby star.

  12. Armored geckos: A histological investigation of osteoderm development in Tarentola (Phyllodactylidae) and Gekko (Gekkonidae) with comments on their regeneration and inferred function.

    PubMed

    Vickaryous, M K; Meldrum, G; Russell, A P

    2015-11-01

    Osteoderms are bone-rich organs found in the dermis of many scleroglossan lizards sensu lato, but are only known for two genera of gekkotans (geckos): Tarentola and Gekko. Here, we investigate their sequence of appearance, mode of development, structural diversity and ability to regenerate following tail loss. Osteoderms were present in all species of Tarentola sampled (Tarentola annularis, T. mauritanica, T. americana, T. crombei, T. chazaliae) as well as Gekko gecko, but not G. smithii. Gekkotan osteoderms first appear within the integument dorsal to the frontal bone or within the supraocular scales. They then manifest as mineralized structures in other positions across the head. In Tarentola and G. gecko, discontinuous clusters subsequently form dorsal to the pelvis/base of the tail, and then dorsal to the pectoral apparatus. Gekkotan osteoderm formation begins once the dermis is fully formed. Early bone deposition appears to involve populations of fibroblast-like cells, which are gradually replaced by more rounded osteoblasts. In T. annularis and T. mauritanica, an additional skeletal tissue is deposited across the superficial surface of the osteoderm. This tissue is vitreous, avascular, cell-poor, lacks intrinsic collagen, and is herein identified as osteodermine. We also report that following tail loss, both T. annularis and T. mauritanica are capable of regenerating osteoderms, including osteodermine, in the regenerated part of the tail. We propose that osteoderms serve roles in defense against combative prey and intraspecific aggression, along with anti-predation functions.

  13. Armored geckos: A histological investigation of osteoderm development in Tarentola (Phyllodactylidae) and Gekko (Gekkonidae) with comments on their regeneration and inferred function.

    PubMed

    Vickaryous, M K; Meldrum, G; Russell, A P

    2015-11-01

    Osteoderms are bone-rich organs found in the dermis of many scleroglossan lizards sensu lato, but are only known for two genera of gekkotans (geckos): Tarentola and Gekko. Here, we investigate their sequence of appearance, mode of development, structural diversity and ability to regenerate following tail loss. Osteoderms were present in all species of Tarentola sampled (Tarentola annularis, T. mauritanica, T. americana, T. crombei, T. chazaliae) as well as Gekko gecko, but not G. smithii. Gekkotan osteoderms first appear within the integument dorsal to the frontal bone or within the supraocular scales. They then manifest as mineralized structures in other positions across the head. In Tarentola and G. gecko, discontinuous clusters subsequently form dorsal to the pelvis/base of the tail, and then dorsal to the pectoral apparatus. Gekkotan osteoderm formation begins once the dermis is fully formed. Early bone deposition appears to involve populations of fibroblast-like cells, which are gradually replaced by more rounded osteoblasts. In T. annularis and T. mauritanica, an additional skeletal tissue is deposited across the superficial surface of the osteoderm. This tissue is vitreous, avascular, cell-poor, lacks intrinsic collagen, and is herein identified as osteodermine. We also report that following tail loss, both T. annularis and T. mauritanica are capable of regenerating osteoderms, including osteodermine, in the regenerated part of the tail. We propose that osteoderms serve roles in defense against combative prey and intraspecific aggression, along with anti-predation functions. PMID:26248595

  14. Estimating uncertainty of inference for validation

    SciTech Connect

    Booker, Jane M; Langenbrunner, James R; Hemez, Francois M; Ross, Timothy J

    2010-09-30

    We present a validation process based upon the concept that validation is an inference-making activity. This has always been true, but the association has not been as important before as it is now. Previously, theory had been confirmed by more data, and predictions were possible based on data. The process today is to infer from theory to code and from code to prediction, making the role of prediction somewhat automatic, and a machine function. Validation is defined as determining the degree to which a model and code is an accurate representation of experimental test data. Embedded in validation is the intention to use the computer code to predict. To predict is to accept the conclusion that an observable final state will manifest; therefore, prediction is an inference whose goodness relies on the validity of the code. Quantifying the uncertainty of a prediction amounts to quantifying the uncertainty of validation, and this involves the characterization of uncertainties inherent in theory/models/codes and the corresponding data. An introduction to inference making and its associated uncertainty is provided as a foundation for the validation problem. A mathematical construction for estimating the uncertainty in the validation inference is then presented, including a possibility distribution constructed to represent the inference uncertainty for validation under uncertainty. The estimation of inference uncertainty for validation is illustrated using data and calculations from Inertial Confinement Fusion (ICF). The ICF measurements of neutron yield and ion temperature were obtained for direct-drive inertial fusion capsules at the Omega laser facility. The glass capsules, containing the fusion gas, were systematically selected with the intent of establishing a reproducible baseline of high-yield 10^13-10^14 neutron output. The deuterium-tritium ratio in these experiments was varied to study its influence upon yield. This paper on validation inference is the

  15. Fractional Diffusion, Low Exponent Lévy Stable Laws, and ‘Slow Motion’ Denoising of Helium Ion Microscope Nanoscale Imagery

    PubMed Central

    Carasso, Alfred S.; Vladár, András E.

    2012-01-01

    Helium ion microscopes (HIM) are capable of acquiring images with better than 1 nm resolution, and HIM images are particularly rich in morphological surface details. However, such images are generally quite noisy. A major challenge is to denoise these images while preserving delicate surface information. This paper presents a powerful slow motion denoising technique, based on solving linear fractional diffusion equations forward in time. The method is easily implemented computationally, using fast Fourier transform (FFT) algorithms. When applied to actual HIM images, the method is found to reproduce the essential surface morphology of the sample with high fidelity. In contrast, such highly sophisticated methodologies as Curvelet Transform denoising, and Total Variation denoising using split Bregman iterations, are found to eliminate vital fine scale information, along with the noise. Image Lipschitz exponents are a useful image metrology tool for quantifying the fine structure content in an image. In this paper, this tool is applied to rank order the above three distinct denoising approaches, in terms of their texture preserving properties. In several denoising experiments on actual HIM images, it was found that fractional diffusion smoothing performed noticeably better than split Bregman TV, which in turn, performed slightly better than Curvelet denoising. PMID:26900518
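
    Because the smoothing is a linear fractional diffusion solved forward in time, it reduces to multiplying each Fourier mode by exp(-t|ω|^(2β)). A minimal sketch of this idea, with illustrative values of β and t rather than the paper's settings:

      import numpy as np

      def fractional_diffusion_denoise(img, beta=0.2, t=0.5):
          # Solve u_t = -(-Laplacian)^beta u forward to time t, exactly,
          # via the FFT: each Fourier mode decays by exp(-t * |omega|^(2*beta)).
          H, W = img.shape
          ky = np.fft.fftfreq(H)[:, None]
          kx = np.fft.fftfreq(W)[None, :]
          omega2 = (2 * np.pi) ** 2 * (kx ** 2 + ky ** 2)
          decay = np.exp(-t * omega2 ** beta)   # beta < 1: gentle 'slow motion' smoothing
          return np.real(np.fft.ifft2(np.fft.fft2(img) * decay))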

  16. Bayesian inference in geomagnetism

    NASA Technical Reports Server (NTRS)

    Backus, George E.

    1988-01-01

    The inverse problem in empirical geomagnetic modeling is investigated, with critical examination of recently published studies. Particular attention is given to the use of Bayesian inference (BI) to select the damping parameter lambda in the uniqueness portion of the inverse problem. The mathematical bases of BI and stochastic inversion are explored, with consideration of bound-softening problems and resolution in linear Gaussian BI. The problem of estimating the radial magnetic field B(r) at the earth core-mantle boundary from surface and satellite measurements is then analyzed in detail, with specific attention to the selection of lambda in the studies of Gubbins (1983) and Gubbins and Bloxham (1985). It is argued that the selection method is inappropriate and leads to lambda values much larger than those that would result if a reasonable bound on the heat flow at the CMB were assumed.

  17. BIE: Bayesian Inference Engine

    NASA Astrophysics Data System (ADS)

    Weinberg, Martin D.

    2013-12-01

    The Bayesian Inference Engine (BIE) is an object-oriented library of tools written in C++ designed explicitly to enable Bayesian update and model comparison for astronomical problems. To facilitate "what if" exploration, BIE provides a command line interface (written with Bison and Flex) to run input scripts. The output of the code is a simulation of the Bayesian posterior distribution, from which summary statistics (e.g., moments or confidence intervals) can be determined. All of these quantities are fundamentally integrals, and the Markov Chain approach produces variates θ distributed according to P(θ|D), so moments are trivially obtained by summing over the ensemble of variates.

  18. Statistics of Natural Stochastic Textures and Their Application in Image Denoising.

    PubMed

    Zachevsky, Ido; Zeevi, Yehoshua Y Josh

    2016-05-01

    Natural stochastic textures (NSTs), characterized by their fine details, are prone to corruption by artifacts introduced during the image acquisition process by the combined effect of blur and noise. While many successful algorithms exist for image restoration and enhancement, the restoration of natural textures and textured images based on suitable statistical models still leaves room for improvement. We examine the statistical properties of NST using three image databases. We show that the Gaussian distribution is suitable for many NST, while other natural textures can be properly represented by a model that separates the image into two layers; one of these layers contains the structural elements of smooth areas and edges, while the other contains the statistically Gaussian textural details. Based on these statistical properties, an algorithm for the denoising of natural images containing NST is proposed, using a patch-based fractional Brownian motion model and regularization by means of anisotropic diffusion. It is illustrated that this algorithm successfully recovers both missing textural details and structural attributes that characterize natural images. The algorithm is compared with classical as well as state-of-the-art denoising algorithms. PMID:27045423
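
    The anisotropic diffusion used here for regularization is, in its classic Perona-Malik form, only a few lines of code; the fBm patch model is a separate component not sketched here. Parameter values below are illustrative:

      import numpy as np

      def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
          # Classic Perona-Malik anisotropic diffusion (periodic borders via
          # np.roll, for brevity). dt <= 0.25 keeps the explicit scheme stable.
          u = img.astype(float).copy()
          g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping function
          for _ in range(n_iter):
              dn = np.roll(u, -1, axis=0) - u
              ds = np.roll(u, 1, axis=0) - u
              de = np.roll(u, -1, axis=1) - u
              dw = np.roll(u, 1, axis=1) - u
              u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
          return u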

  19. Medical image denoising via optimal implementation of non-local means on hybrid parallel architecture.

    PubMed

    Nguyen, Tuan-Anh; Nakib, Amir; Nguyen, Huy-Nam

    2016-06-01

    The non-local means denoising filter has been established as a gold standard for the image denoising problem in general, and particularly in medical imaging, due to its efficiency. However, its computation time has limited its use in real-world applications, especially in medical imaging. In this paper, a distributed version on a parallel hybrid architecture is proposed to solve the computation-time problem, and a new method to compute the filter coefficients is also proposed, focusing on enhancing the filter parameters by taking the neighborhood of the current voxel into account more accurately. In terms of implementation, our key contribution consists in reducing the number of shared memory accesses. Tests of the proposed method were performed on the BrainWeb database for different levels of noise. Performance and sensitivity were quantified in terms of speedup, peak signal-to-noise ratio, execution time, and the number of floating-point operations. The obtained results demonstrate the efficiency of the proposed method. Moreover, the implementation is compared to that of other techniques recently published in the literature. PMID:27084318
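
    For reference, the per-pixel computation being parallelized is the standard NLM weighted average. A single-pixel, single-threaded sketch (patch radius f, search radius t and smoothing parameter h are illustrative, not the paper's values):

      import numpy as np

      def nlm_pixel(img, i, j, f=3, t=7, h=0.08):
          # Denoised value of pixel (i, j): weighted average over a search
          # window of radius t, with weights from patch (radius f) similarity.
          pad = np.pad(img, f + t, mode='reflect')
          ii, jj = i + f + t, j + f + t
          ref = pad[ii - f:ii + f + 1, jj - f:jj + f + 1]
          num = den = 0.0
          for di in range(-t, t + 1):
              for dj in range(-t, t + 1):
                  cand = pad[ii + di - f:ii + di + f + 1,
                             jj + dj - f:jj + dj + f + 1]
                  w = np.exp(-((ref - cand) ** 2).mean() / h ** 2)
                  num += w * pad[ii + di, jj + dj]
                  den += w
          return num / den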

  20. Novel example-based method for super-resolution and denoising of medical images.

    PubMed

    Dinh-Hoan Trinh; Luong, Marie; Dibos, Francoise; Rocchisani, Jean-Marie; Canh-Duong Pham; Nguyen, Truong Q

    2014-04-01

    In this paper, we propose a novel example-based method for denoising and super-resolution of medical images. The objective is to estimate a high-resolution image from a single noisy low-resolution image, with the help of a given database of high- and low-resolution image patch pairs. Denoising and super-resolution are performed on each image patch. For each input low-resolution patch, its high-resolution version is estimated by finding a nonnegative sparse linear representation of the input patch over the low-resolution patches from the database, where the coefficients of the representation depend strongly on the similarity between the input patch and the sample patches in the database. The problem of finding the nonnegative sparse linear representation is modeled as a nonnegative quadratic programming problem. The proposed method is especially useful for noise-corrupted, low-resolution images. Experimental results show that the proposed method outperforms other state-of-the-art super-resolution methods while effectively removing noise.
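
    A heavily simplified sketch of the per-patch step, using plain nonnegative least squares as a stand-in for the paper's similarity-weighted nonnegative quadratic program. D_low and D_high are hypothetical matrices whose paired columns hold the database's low- and high-resolution patches:

      import numpy as np
      from scipy.optimize import nnls

      def sr_patch(lr_patch, D_low, D_high):
          # D_low (m x K) and D_high (M x K): paired low-/high-res patch columns.
          coeffs, _ = nnls(D_low, lr_patch)    # argmin ||D_low a - p||, a >= 0
          coeffs /= coeffs.sum() + 1e-12       # normalize the contributions
          return D_high @ coeffs               # estimated high-resolution patch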

  1. Bearing fault diagnosis based on variational mode decomposition and total variation denoising

    NASA Astrophysics Data System (ADS)

    Zhang, Suofeng; Wang, Yanxue; He, Shuilong; Jiang, Zhansi

    2016-07-01

    Feature extraction plays an essential role in bearing fault detection. However, measured vibration signals are complex and non-stationary in nature, and the impulsive signatures of a rolling bearing are usually immersed in stochastic noise. Hence, a novel hybrid fault diagnosis approach is developed for denoising and non-stationary feature extraction in this work, which combines variational mode decomposition (VMD) with majorization-minimization-based total variation denoising (TV-MM). The TV-MM approach is utilized to remove stochastic noise in the raw signal and to enhance the corresponding characteristics. Since the parameter λ is very important in TV-MM, a weighted kurtosis index is also proposed in this work to determine an appropriate λ for TV-MM. The performance of the proposed hybrid approach is evaluated through the analysis of simulated and practical bearing vibration signals. Results demonstrate that the proposed approach has superior capability to detect roller bearing faults from vibration signals.
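
    The TV-MM ingredient admits a compact sketch: minimizing 0.5||y - x||² + λ||Dx||₁ by majorization-minimization yields the iteration x = y - Dᵀz, with z solved from a banded linear system. The version below is a standard 1-D MM iteration under the assumption of a fixed λ, rather than the paper's weighted-kurtosis rule for choosing it:

      import numpy as np
      from scipy.sparse import diags
      from scipy.sparse.linalg import spsolve

      def tv_mm(y, lam, n_iter=30, eps=1e-8):
          # Minimize 0.5*||y - x||^2 + lam*||Dx||_1 by majorization-minimization.
          N = len(y)
          D = diags([-np.ones(N - 1), np.ones(N - 1)], [0, 1], shape=(N - 1, N))
          DDt = (D @ D.T).tocsc()
          x = y.copy()
          for _ in range(n_iter):
              Lam = diags(np.abs(D @ x) + eps)              # majorizer weights |Dx|
              z = spsolve((Lam / lam + DDt).tocsc(), D @ y) # banded linear system
              x = y - D.T @ z
          return x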

  2. A unified framework for Bayesian denoising for several medical and biological imaging modalities.

    PubMed

    Sanches, João M; Nascimento, Jacinto C; Marques, Jorge S

    2007-01-01

    Multiplicative noise is often present in several medical and biological imaging modalities, such as MRI, ultrasound, PET/SPECT and fluorescence microscopy. Removing noise while preserving details is not a trivial task. Bayesian algorithms have been used to tackle this problem. They succeed in accomplishing the task, but at a computational burden that grows with image dimensionality. Therefore, a significant effort has been made to accomplish this tradeoff, i.e., to develop fast and reliable algorithms that remove noise without distorting relevant clinical information. This paper provides a new unified framework for Bayesian denoising of images corrupted with additive and multiplicative noise. This makes it possible to deal with additive white Gaussian noise and with multiplicative noise described by Poisson and Rayleigh distributions, respectively. The proposed algorithm is based on the maximum a posteriori (MAP) criterion, and edge-preserving priors are used to avoid the distortion of relevant image details. The denoising task is performed by an iterative scheme based on a Sylvester/Lyapunov equation. This approach makes it possible to use fast and efficient algorithms from the literature, developed in the context of control theory, to solve the Sylvester/Lyapunov equation. Experimental results with synthetic and real data attest to the performance of the proposed technique, and competitive results are achieved when compared to state-of-the-art methods.
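
    The computational core is the solution of a Sylvester equation AX + XB = Q at each iteration, for which SciPy exposes a standard (Bartels-Stewart) solver. The matrices below are random placeholders, not the data-fidelity and prior terms of the paper:

      import numpy as np
      from scipy.linalg import solve_sylvester

      # Illustrative matrices only; the paper's A, B, Q encode the MAP
      # data-fidelity and edge-preserving prior terms, not reproduced here.
      rng = np.random.default_rng(0)
      A = rng.standard_normal((4, 4)); A = A @ A.T + 4 * np.eye(4)
      B = rng.standard_normal((3, 3)); B = B @ B.T + 4 * np.eye(3)
      Q = rng.standard_normal((4, 3))
      X = solve_sylvester(A, B, Q)           # solves A X + X B = Q
      print(np.allclose(A @ X + X @ B, Q))   # True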

  3. Medical image denoising via optimal implementation of non-local means on hybrid parallel architecture.

    PubMed

    Nguyen, Tuan-Anh; Nakib, Amir; Nguyen, Huy-Nam

    2016-06-01

    The non-local means denoising filter has been established as a gold standard for the image denoising problem in general, and particularly in medical imaging, due to its efficiency. However, its computation time has limited its use in real-world applications, especially in medical imaging. In this paper, a distributed version on a parallel hybrid architecture is proposed to solve the computation-time problem, and a new method to compute the filter coefficients is also proposed, focusing on enhancing the filter parameters by taking the neighborhood of the current voxel into account more accurately. In terms of implementation, our key contribution consists in reducing the number of shared memory accesses. Tests of the proposed method were performed on the BrainWeb database for different levels of noise. Performance and sensitivity were quantified in terms of speedup, peak signal-to-noise ratio, execution time, and the number of floating-point operations. The obtained results demonstrate the efficiency of the proposed method. Moreover, the implementation is compared to that of other techniques recently published in the literature.

  4. Wavelet-based noise-model driven denoising algorithm for differential phase contrast mammography.

    PubMed

    Arboleda, Carolina; Wang, Zhentian; Stampanoni, Marco

    2013-05-01

    Traditional mammography can be positively complemented by phase contrast and scattering x-ray imaging, because these can detect subtle differences in the electron density of a material and measure the local small-angle scattering power generated by microscopic density fluctuations in the specimen, respectively. The grating-based x-ray interferometry technique can produce absorption, differential phase contrast (DPC) and scattering signals of the sample in parallel, and works well with conventional x-ray sources; thus, it constitutes a promising method for more reliable breast cancer screening and diagnosis. Recently, our team showed that this novel technology can provide images superior to conventional mammography, using it to image whole native breast samples directly after mastectomy. The images acquired show high potential, but the noise level associated with the DPC and scattering signals is significant, so it must be removed in order to improve image quality and visualization. The noise models of the three signals have been investigated and the noise variance can be computed. In this work, a wavelet-based denoising algorithm using these noise models is proposed. It was evaluated with both simulated and experimental mammography data. The outcomes demonstrate that our method offers good denoising quality while simultaneously preserving the edges and important structural features; therefore, it can help improve diagnosis and enable further post-processing techniques such as fusion of the three acquired signals.
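
    A minimal sketch of noise-model-driven wavelet shrinkage: the threshold is set from a known noise standard deviation, standing in for the variance computed from the DPC and scattering noise models. The wavelet, decomposition level and the universal-threshold rule are illustrative choices, not the paper's algorithm:

      import numpy as np
      import pywt

      def wavelet_denoise(img, sigma, wavelet='db4', level=3):
          # Soft-threshold the detail coefficients; `sigma` stands in for the
          # noise standard deviation supplied by the signal's noise model.
          coeffs = pywt.wavedec2(img, wavelet, level=level)
          thr = sigma * np.sqrt(2 * np.log(img.size))   # universal threshold
          out = [coeffs[0]] + [
              tuple(pywt.threshold(c, thr, mode='soft') for c in detail)
              for detail in coeffs[1:]
          ]
          return pywt.waverec2(out, wavelet)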

  5. Nonlocal transform-domain filter for volumetric data denoising and reconstruction.

    PubMed

    Maggioni, Matteo; Katkovnik, Vladimir; Egiazarian, Karen; Foi, Alessandro

    2013-01-01

    We present an extension of the BM3D filter to volumetric data. The proposed algorithm, BM4D, implements the grouping and collaborative filtering paradigm, where mutually similar d-dimensional patches are stacked together in a (d+1)-dimensional array and jointly filtered in transform domain. While in BM3D the basic data patches are blocks of pixels, in BM4D we utilize cubes of voxels, which are stacked into a 4-D "group." The 4-D transform applied on the group simultaneously exploits the local correlation present among voxels in each cube and the nonlocal correlation between the corresponding voxels of different cubes. Thus, the spectrum of the group is highly sparse, leading to very effective separation of signal and noise through coefficient shrinkage. After inverse transformation, we obtain estimates of each grouped cube, which are then adaptively aggregated at their original locations. We evaluate the algorithm on denoising of volumetric data corrupted by Gaussian and Rician noise, as well as on reconstruction of volumetric phantom data with non-zero phase from noisy and incomplete Fourier-domain (k-space) measurements. Experimental results demonstrate the state-of-the-art denoising performance of BM4D, and its effectiveness when exploited as a regularizer in volumetric data reconstruction. PMID:22868570

  6. Denoising of X-ray pulsar observed profile in the undecimated wavelet domain

    NASA Astrophysics Data System (ADS)

    Xue, Meng-fan; Li, Xiao-ping; Fu, Ling-zhong; Liu, Xiu-ping; Sun, Hai-feng; Shen, Li-rong

    2016-01-01

    The low intensity of the X-ray pulsar signal and the strong X-ray background radiation lead to a low signal-to-noise ratio (SNR) of the X-ray pulsar observed profile obtained through epoch folding, especially when the observation time is not long enough. This makes denoising of the observed profile necessary. In this paper, the statistical characteristics of the X-ray pulsar signal are studied, and a signal-dependent noise model is established for the observed profile. Based on this, a profile noise reduction method is developed that performs local linear minimum mean square error filtering in the undecimated wavelet domain. The detail wavelet coefficients are rescaled by multiplying their amplitudes by a locally adaptive factor, namely the local variance ratio of the noiseless coefficients to the noisy ones. All the nonstationary statistics needed in the algorithm are calculated from the observed profile, without a priori information. The results of experiments, carried out on simulated data obtained from a ground-based simulation system and on real data obtained by the Rossi X-ray Timing Explorer satellite, indicate that the proposed method is excellent in both noise suppression and preservation of peak sharpness, and that it clearly outperforms four widely used wavelet denoising methods in terms of SNR, Pearson correlation coefficient and root mean square error.
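
    The core shrinkage rule is easy to state: each detail coefficient of the stationary (undecimated) wavelet transform is scaled by max(σ²_local - σ²_noise, 0)/σ²_local. A sketch assuming a constant noise variance rather than the paper's signal-dependent model:

      import numpy as np
      import pywt
      from scipy.ndimage import uniform_filter1d

      def llmmse_swt_denoise(profile, noise_var, wavelet='db4', level=3, win=16):
          # Note: len(profile) must be divisible by 2**level for pywt.swt.
          coeffs = pywt.swt(profile, wavelet, level=level)
          out = []
          for cA, cD in coeffs:
              local_var = uniform_filter1d(cD ** 2, size=win)  # local 2nd moment
              gain = np.maximum(local_var - noise_var, 0.0) \
                     / np.maximum(local_var, 1e-12)            # LLMMSE shrinkage
              out.append((cA, gain * cD))
          return pywt.iswt(out, wavelet)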

  7. Simultaneous seismic data interpolation and denoising with a new adaptive method based on dreamlet transform

    NASA Astrophysics Data System (ADS)

    Wang, Benfeng; Wu, Ru-Shan; Chen, Xiaohong; Li, Jingye

    2015-05-01

    Interpolation and random noise removal are prerequisites for multichannel techniques, because irregularity and random noise in the observed data can degrade their performance. The Projection Onto Convex Sets (POCS) method handles seismic data interpolation well when the data's signal-to-noise ratio (SNR) is high, but it has difficulty in noisy situations because it re-inserts the noisy observed seismic data in each iteration. The weighted POCS method can weaken the noise effects, but its performance depends on the choice of weight factors and remains unsatisfactory. Thus, a new weighted POCS method is derived from the Iterative Hard Threshold (IHT) point of view, and, in order to eliminate random noise, a new adaptive method is proposed to achieve simultaneous seismic data interpolation and denoising based on the dreamlet transform. The POCS method, the weighted POCS method and the proposed method are compared in simultaneous seismic data interpolation and denoising, which demonstrates the validity of the proposed method. The recovered SNRs confirm that the proposed adaptive method is the most effective of the three. Numerical examples on synthetic and real data demonstrate the validity of the proposed adaptive method.
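
    The plain POCS iteration being improved upon alternates a sparsity-promoting threshold in a transform domain with re-insertion of the observed traces. A minimal Fourier-domain sketch (the paper uses the dreamlet transform and a weighted, noise-aware re-insertion instead):

      import numpy as np

      def pocs_interpolate(obs, mask, n_iter=50, keep=0.05):
          # obs: data with zeros at missing traces; mask: True where observed.
          x = obs.copy()
          for _ in range(n_iter):
              spec = np.fft.fft2(x)
              thr = np.quantile(np.abs(spec), 1 - keep)  # keep largest coefficients
              spec[np.abs(spec) < thr] = 0.0             # sparsity projection
              x = np.real(np.fft.ifft2(spec))
              x[mask] = obs[mask]                        # data-consistency projection
          return x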

  8. Efficient and Robust Nonlocal Means Denoising of MR Data Based on Salient Features Matching

    PubMed Central

    Tristán-Vega, Antonio; García-Pérez, Verónica; Aja-Fernández, Santiago; Westin, Carl-Fredrik

    2014-01-01

    The Nonlocal Means (NLM) filter has become a popular approach for denoising medical images due to its excellent performance. However, its heavy computational load has been an important shortcoming preventing its use. NLM works by averaging pixels in nonlocal vicinities, weighting them depending on their similarity with the pixel of interest. This similarity is assessed from the squared difference between corresponding pixels inside local patches centered at the locations compared. Our proposal is to reduce the computational load of this comparison by checking only a subset of salient features associated with the pixels, which suffice to estimate the actual difference as computed in the original NLM approach. The speedup achieved with respect to the original implementation is over one order of magnitude and, compared to more recent NLM improvements for MRI denoising, our method is nearly twice as fast. At the same time, we show in both synthetic and in vivo experiments that computing appropriate salient features makes the estimation of NLM weights more robust to noise. Consequently, we are able to improve on the outcomes achieved with recent state-of-the-art techniques for a wide range of realistic signal-to-noise ratio scenarios such as diffusion MRI. Finally, the statistical characterization of the computed features makes it possible to dispense with some of the heuristics commonly used for parameter tuning. PMID:21906832

  9. TDOA Matrices: Algebraic Properties and Their Application to Robust Denoising With Missing Data

    NASA Astrophysics Data System (ADS)

    Velasco, Jose; Pizarro, Daniel; Macias-Guarasa, Javier; Asaei, Afsaneh

    2016-10-01

    Measuring the Time Difference of Arrival (TDOA) between a set of sensors is the basic setup for many applications, such as localization or signal beamforming. This paper presents the set of TDOA matrices, which are built from noise-free TDOA measurements and do not require knowledge of the sensor array geometry. We prove that TDOA matrices are rank-two and have a special SVD decomposition that leads to a compact linear parametric representation. These properties are applied in this paper to perform denoising, by finding the TDOA matrix closest to the matrix composed of noisy measurements. The paper shows that this problem admits a closed-form solution for TDOA measurements contaminated with Gaussian noise, which extends to the case of missing data. The paper also proposes a novel robust denoising method that is resistant to outliers and missing data, inspired by recent advances in robust low-rank estimation. Experiments on synthetic and real datasets show improvements in TDOA-based localization, both in terms of TDOA estimation accuracy and localization error.
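
    The structure being exploited is easy to state: a noise-free TDOA matrix M with entries M_ij = t_i - t_j is skew-symmetric and rank two. A denoising sketch that projects a noisy matrix onto that structure by skew-symmetrization and SVD truncation (the paper derives a closed-form solution; this is only an illustration):

      import numpy as np

      def denoise_tdoa(M_noisy):
          # Project onto the proven structure: skew-symmetric and rank two.
          S = 0.5 * (M_noisy - M_noisy.T)            # nearest skew-symmetric matrix
          U, s, Vt = np.linalg.svd(S)
          S2 = U[:, :2] @ np.diag(s[:2]) @ Vt[:2]    # nearest rank-2 approximation
          taus = S2.mean(axis=1)                     # delays relative to array mean
          return S2, taus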

  10. A New Approach to Inverting and De-Noising Backscatter from Lidar Observations

    NASA Astrophysics Data System (ADS)

    Marais, Willem; Hen Hu, Yu; Holz, Robert; Eloranta, Edwin

    2016-06-01

    Atmospheric lidar observations provide a unique capability to directly observe the vertical profile of cloud and aerosol scattering properties and have proven to be an important capability for the atmospheric science community. For this reason NASA and ESA have put a major emphasis on developing both space- and ground-based lidar instruments. Measurement noise (solar background and detector noise) has proven to be a significant limitation and is typically reduced by temporal and vertical averaging. This approach has serious drawbacks, as it substantially reduces spatial information and can introduce biases due to the non-linear relationship between the signal and the retrieved scattering properties. This paper investigates a new approach to de-noising and retrieving cloud and aerosol backscatter properties from lidar observations that leverages a technique developed for medical imaging to de-blur and de-noise images; the accuracy is defined as the error between the true and inverted photon rates. Hence non-linear bias errors can be mitigated and spatial information can be preserved.

  11. [A De-Noising Algorithm for Fluorescence Detection Signal of Mineral Oil in Water by SWT].

    PubMed

    Wang, Yu-tian; Cheng, Peng-fei; Hou, Pei-guo; Yang, Zhe

    2015-05-01

    Fluorescence analysis is an important means of detecting mineral oil pollutants in water because of its high sensitivity, selectivity, ease of design, etc. Noise generated by the photodetector affects the sensitivity of a fluorescence detection system, so eliminating noise from the fluorescence signal has been an active research issue. For the fluorescence signal, boundary issues arise as the length of the decomposition branches increases. The dbN wavelet family can flexibly balance these border issues, retaining the useful signal while removing noise; the de-noising effects of the dbN family members are compared, and the db7 wavelet is chosen as optimal. The noisy fluorescence signal is decomposed into 5 levels via the db7-based stationary wavelet transform, and the thresholds are chosen adaptively based on wavelet entropy theory. The denoised fluorescence signal is obtained after reconstruction from the threshold-quantized approximation and detail coefficients. Compared with the DWT, the signal de-noised via the SWT has the advantages of information integrity and time-translation invariance.

  12. Bearing fault diagnosis based on variational mode decomposition and total variation denoising

    NASA Astrophysics Data System (ADS)

    Zhang, Suofeng; Wang, Yanxue; He, Shuilong; Jiang, Zhansi

    2016-07-01

    Feature extraction plays an essential role in bearing fault detection. However, measured vibration signals are complex and non-stationary in nature, and the impulsive signatures of a rolling bearing are usually immersed in stochastic noise. Hence, a novel hybrid fault diagnosis approach is developed for denoising and non-stationary feature extraction in this work, which combines variational mode decomposition (VMD) with majorization-minimization-based total variation denoising (TV-MM). The TV-MM approach is utilized to remove stochastic noise in the raw signal and to enhance the corresponding characteristics. Since the parameter λ is very important in TV-MM, a weighted kurtosis index is also proposed in this work to determine an appropriate λ for TV-MM. The performance of the proposed hybrid approach is evaluated through the analysis of simulated and practical bearing vibration signals. Results demonstrate that the proposed approach has superior capability to detect roller bearing faults from vibration signals.

  13. Statistics of Natural Stochastic Textures and Their Application in Image Denoising.

    PubMed

    Zachevsky, Ido; Zeevi, Yehoshua Y Josh

    2016-05-01

    Natural stochastic textures (NSTs), characterized by their fine details, are prone to corruption by artifacts introduced during the image acquisition process by the combined effect of blur and noise. While many successful algorithms exist for image restoration and enhancement, the restoration of natural textures and textured images based on suitable statistical models still leaves room for improvement. We examine the statistical properties of NST using three image databases. We show that the Gaussian distribution is suitable for many NST, while other natural textures can be properly represented by a model that separates the image into two layers; one of these layers contains the structural elements of smooth areas and edges, while the other contains the statistically Gaussian textural details. Based on these statistical properties, an algorithm for the denoising of natural images containing NST is proposed, using a patch-based fractional Brownian motion model and regularization by means of anisotropic diffusion. It is illustrated that this algorithm successfully recovers both missing textural details and structural attributes that characterize natural images. The algorithm is compared with classical as well as state-of-the-art denoising algorithms.

  14. A formal model of interpersonal inference

    PubMed Central

    Moutoussis, Michael; Trujillo-Barreto, Nelson J.; El-Deredy, Wael; Dolan, Raymond J.; Friston, Karl J.

    2014-01-01

    Introduction: We propose that active Bayesian inference—a general framework for decision-making—can equally be applied to interpersonal exchanges. Social cognition, however, entails special challenges. We address these challenges through a novel formulation of a formal model and demonstrate its psychological significance. Method: We review relevant literature, especially with regards to interpersonal representations, formulate a mathematical model and present a simulation study. The model accommodates normative models from utility theory and places them within the broader setting of Bayesian inference. Crucially, we endow people's prior beliefs, into which utilities are absorbed, with preferences of self and others. The simulation illustrates the model's dynamics and furnishes elementary predictions of the theory. Results: (1) Because beliefs about self and others inform both the desirability and plausibility of outcomes, in this framework interpersonal representations become beliefs that have to be actively inferred. This inference, akin to “mentalizing” in the psychological literature, is based upon the outcomes of interpersonal exchanges. (2) We show how some well-known social-psychological phenomena (e.g., self-serving biases) can be explained in terms of active interpersonal inference. (3) Mentalizing naturally entails Bayesian updating of how people value social outcomes. Crucially this includes inference about one's own qualities and preferences. Conclusion: We inaugurate a Bayes optimal framework for modeling intersubject variability in mentalizing during interpersonal exchanges. Here, interpersonal representations are endowed with explicit functional and affective properties. We suggest the active inference framework lends itself to the study of psychiatric conditions where mentalizing is distorted. PMID:24723872

  15. Bayes factors and multimodel inference

    USGS Publications Warehouse

    Link, W.A.; Barker, R.J.; Thomson, David L.; Cooch, Evan G.; Conroy, Michael J.

    2009-01-01

    Multimodel inference has two main themes: model selection, and model averaging. Model averaging is a means of making inference conditional on a model set, rather than on a selected model, allowing formal recognition of the uncertainty associated with model choice. The Bayesian paradigm provides a natural framework for model averaging, and provides a context for evaluation of the commonly used AIC weights. We review Bayesian multimodel inference, noting the importance of Bayes factors. Noting the sensitivity of Bayes factors to the choice of priors on parameters, we define and propose nonpreferential priors as offering a reasonable standard for objective multimodel inference.

  16. [Denoising and assessing method of additive noise in the ultraviolet spectrum of SO2 in flue gas].

    PubMed

    Zhou, Tao; Sun, Chang-Ku; Liu, Bin; Zhao, Yu-Mei

    2009-11-01

    The problem of denoising the spectrum of SO2 in flue gas, and of assessing the result, was studied based on DOAS. The denoising procedure for the additive noise in the spectrum was divided into two parts: reducing the additive noise and enhancing the useful signal. When obtaining the absorption feature of the measured gas, a multi-resolution preprocessing of the original spectrum by DWT (discrete wavelet transform) was adopted for denoising. Signal energy operators at different scales were used to choose the denoising threshold and separate the useful signal from the noise. On the other hand, because there is no sudden change in the flue gas spectra in a time series, the useful signal component was enhanced according to the signal's time dependence. The standard absorption cross-section, together with the measured gas temperature and pressure, was used to build an ideal absorption spectrum. This ideal spectrum was used as the desired signal, instead of the original spectrum, in the assessment method to compute the SNR (signal-to-noise ratio). Verification tests were carried out in two different environments: in the laboratory and in the field. In the laboratory, SO2 was measured several times with a system using the method described above. The average deviation was less than 1.5%, while the repeatability was less than 1%, and data from the short-range experiments were better than those from the large-range experiments. At a power plant site, where the flue gas concentration varied over a large range, the maximum deviation of the method was 2.31% across 18 groups of comparison data. The experimental results show that the denoising effect on the field spectra was better than on the laboratory spectra, meaning the method can effectively improve the SNR of spectra seriously polluted by additive noise. PMID:20101989

  17. Noise Level Estimation for Model Selection in Kernel PCA Denoising.

    PubMed

    Varon, Carolina; Alzate, Carlos; Suykens, Johan A K

    2015-11-01

    One of the main challenges in unsupervised learning is to find suitable values for the model parameters. In kernel principal component analysis (kPCA), for example, these are the number of components, the kernel, and its parameters. This paper presents a model selection criterion based on distance distributions (MDD). This criterion can be used to find the number of components and the σ² parameter of radial basis function kernels by means of spectral comparison between information and noise. The noise content is estimated from the statistical moments of the distribution of distances in the original dataset. This allows for a type of randomization of the dataset without actually having to permute the data points or generate artificial datasets. After comparing the eigenvalues computed from the estimated noise with the ones from the input dataset, information is retained and maximized by a set of model parameters. In addition to the model selection criterion, this paper proposes a modification to the fixed-size method and uses the incomplete Cholesky factorization, both of which are used to solve kPCA in large-scale applications. These two approaches, together with the MDD model selection criterion, were tested on toy examples and real-life applications, and it is shown that they outperform other known algorithms. PMID:25608316

  18. Learning to Observe "and" Infer

    ERIC Educational Resources Information Center

    Hanuscin, Deborah L.; Park Rogers, Meredith A.

    2008-01-01

    Researchers describe the need for students to have multiple opportunities and social interaction to learn about the differences between observation and inference and their role in developing scientific explanations (Harlen 2001; Simpson 2000). Helping children develop their skills of observation and inference in science while emphasizing the…

  19. Feature Inference Learning and Eyetracking

    ERIC Educational Resources Information Center

    Rehder, Bob; Colner, Robert M.; Hoffman, Aaron B.

    2009-01-01

    Besides traditional supervised classification learning, people can learn categories by inferring the missing features of category members. It has been proposed that feature inference learning promotes learning a category's internal structure (e.g., its typical features and interfeature correlations) whereas classification promotes the learning of…

  20. Improving Inferences from Multiple Methods.

    ERIC Educational Resources Information Center

    Shotland, R. Lance; Mark, Melvin M.

    1987-01-01

    Multiple evaluation methods (MEMs) can cause an inferential challenge, although there are strategies to strengthen inferences. Practical and theoretical issues involved in the use by social scientists of MEMs, three potential problems in drawing inferences from MEMs, and short- and long-term strategies for alleviating these problems are outlined.…

  1. Causal Inference in Retrospective Studies.

    ERIC Educational Resources Information Center

    Holland, Paul W.; Rubin, Donald B.

    1988-01-01

    The problem of drawing causal inferences from retrospective case-controlled studies is considered. A model for causal inference in prospective studies is applied to retrospective studies. Limitations of case-controlled studies are formulated concerning relevant parameters that can be estimated in such studies. A coffee-drinking/myocardial…

  2. Causal Inference and Developmental Psychology

    ERIC Educational Resources Information Center

    Foster, E. Michael

    2010-01-01

    Causal inference is of central importance to developmental psychology. Many key questions in the field revolve around improving the lives of children and their families. These include identifying risk factors that if manipulated in some way would foster child development. Such a task inherently involves causal inference: One wants to know whether…

  3. Identifiability and inference of pathway motifs by epistasis analysis

    NASA Astrophysics Data System (ADS)

    Phenix, Hilary; Perkins, Theodore; Kærn, Mads

    2013-06-01

    The accuracy of genetic network inference is limited by the assumptions used to determine if one hypothetical model is better than another in explaining experimental observations. Most previous work on epistasis analysis—in which one attempts to infer pathway relationships by determining equivalences among traits following mutations—has been based on Boolean or linear models. Here, we delineate the ultimate limits of epistasis-based inference by systematically surveying all two-gene network motifs and use symbolic algebra with arbitrary regulation functions to examine trait equivalences. Our analysis divides the motifs into equivalence classes, where different genetic perturbations result in indistinguishable experimental outcomes. We demonstrate that this partitioning can reveal important information about network architecture, and show, using simulated data, that it greatly improves the accuracy of genetic network inference methods. Because of the minimal assumptions involved, equivalence partitioning has broad applicability for gene network inference.

  4. Identifiability and inference of pathway motifs by epistasis analysis.

    PubMed

    Phenix, Hilary; Perkins, Theodore; Kærn, Mads

    2013-06-01

    The accuracy of genetic network inference is limited by the assumptions used to determine if one hypothetical model is better than another in explaining experimental observations. Most previous work on epistasis analysis-in which one attempts to infer pathway relationships by determining equivalences among traits following mutations-has been based on Boolean or linear models. Here, we delineate the ultimate limits of epistasis-based inference by systematically surveying all two-gene network motifs and use symbolic algebra with arbitrary regulation functions to examine trait equivalences. Our analysis divides the motifs into equivalence classes, where different genetic perturbations result in indistinguishable experimental outcomes. We demonstrate that this partitioning can reveal important information about network architecture, and show, using simulated data, that it greatly improves the accuracy of genetic network inference methods. Because of the minimal assumptions involved, equivalence partitioning has broad applicability for gene network inference. PMID:23822501

  5. Comparison of JADE and canonical correlation analysis for ECG de-noising.

    PubMed

    Kuzilek, Jakub; Kremen, Vaclav; Lhotska, Lenka

    2014-01-01

    This paper explores the differences between two blind source separation methods in the context of ECG de-noising. The first is joint approximate diagonalization of eigenmatrices (JADE), which is based on estimating the fourth-order cross-cumulant tensor and diagonalizing it. The second is the statistical method known as canonical correlation analysis (CCA), which is based on estimating correlation matrices between two multidimensional variables. Both methods were used within a scheme that combines the blind source separation algorithm with a decision tree. The evaluation was made on a large database of 382 long-term ECG signals and the results were examined. The biggest difference was found for 50 Hz power-line interference, where the CCA algorithm completely failed; the main strength of CCA thus lies in estimating unstructured noise within the ECG. The JADE algorithm has higher computational complexity, so CCA performed faster when estimating the components.
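
    The CCA variant can be sketched compactly: canonical correlation between a multichannel recording and a one-sample-delayed copy of it yields components ordered by autocorrelation, and weakly autocorrelated (noise-like) components are discarded before back-projection. A minimal sketch under that standard BSS-CCA formulation; the threshold below is an arbitrary illustrative value, and the decision-tree stage of the paper is not shown:

      import numpy as np

      def cca_denoise(X, keep=0.2):
          # X: channels x samples. CCA with a one-sample-delayed copy orders
          # components by autocorrelation; noise-like components are zeroed.
          X = X - X.mean(axis=1, keepdims=True)
          Y = np.roll(X, 1, axis=1)
          Cxx, Cyy, Cxy = X @ X.T, Y @ Y.T, X @ Y.T
          M = np.linalg.solve(Cxx, Cxy) @ np.linalg.solve(Cyy, Cxy.T)
          rho2, W = np.linalg.eig(M)                 # squared canonical correlations
          order = np.argsort(-rho2.real)
          W = W[:, order].real
          rho = np.sqrt(np.clip(rho2.real[order], 0.0, 1.0))
          S = W.T @ X                                # source estimates
          S[rho < keep] = 0.0                        # drop weakly autocorrelated sources
          return np.linalg.pinv(W.T) @ S             # back-project to channels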

  6. The application of wavelet shrinkage denoising to magnetic Barkhausen noise measurements

    SciTech Connect

    Thomas, James

    2014-02-18

    The application of Magnetic Barkhausen Noise (MBN) as a non-destructive method of defect detection has proliferated throughout the manufacturing community. Instrument technology and measurement methodology have matured commensurately as applications have moved from R and D labs to the fully automated manufacturing environment. These new applications present a new set of challenges, including a bevy of error sources. A significant obstacle in many industrial applications is a decrease in signal-to-noise ratio due to (i) environmental EMI and (ii) compromises in sensor design made for the purposes of automation. The stochastic nature of MBN presents a challenge to any method of noise reduction. An application of wavelet shrinkage denoising is proposed as a method of decreasing extraneous noise in MBN measurements. The method is tested and yields marked improvement on measurements subject to EMI and grounding noise, and even on measurements made in ideal conditions.

  7. A Self-Alignment Algorithm for SINS Based on Gravitational Apparent Motion and Sensor Data Denoising

    PubMed Central

    Liu, Yiting; Xu, Xiaosu; Liu, Xixiang; Yao, Yiqing; Wu, Liang; Sun, Jin

    2015-01-01

    Initial alignment is a key problem in an inertial navigation system (INS), and a difficult one to achieve. In this paper a novel self-initial-alignment algorithm is proposed that uses gravitational apparent motion vectors at three different moments together with vector operations. Simulation and analysis showed that this method easily suffers from the random noise contained in the accelerometer measurements, which are used to construct the apparent motion directly. To resolve this problem, an online sensor data denoising method based on a Kalman filter is proposed, and a novel reconstruction method for the apparent motion is designed to avoid collinearity among the vectors participating in the alignment solution. Simulation, turntable tests and vehicle tests indicate that the proposed alignment algorithm can fulfill initial alignment of a strapdown INS (SINS) under both static and swinging conditions. The accuracy can either reach or approach the theoretical values determined by sensor precision under static or swinging conditions. PMID:25923932

  8. A Self-Alignment Algorithm for SINS Based on Gravitational Apparent Motion and Sensor Data Denoising.

    PubMed

    Liu, Yiting; Xu, Xiaosu; Liu, Xixiang; Yao, Yiqing; Wu, Liang; Sun, Jin

    2015-01-01

    Initial alignment is a key problem in an inertial navigation system (INS), and a difficult one to achieve. In this paper a novel self-initial-alignment algorithm is proposed that uses gravitational apparent motion vectors at three different moments together with vector operations. Simulation and analysis showed that this method easily suffers from the random noise contained in the accelerometer measurements, which are used to construct the apparent motion directly. To resolve this problem, an online sensor data denoising method based on a Kalman filter is proposed, and a novel reconstruction method for the apparent motion is designed to avoid collinearity among the vectors participating in the alignment solution. Simulation, turntable tests and vehicle tests indicate that the proposed alignment algorithm can fulfill initial alignment of a strapdown INS (SINS) under both static and swinging conditions. The accuracy can either reach or approach the theoretical values determined by sensor precision under static or swinging conditions. PMID:25923932
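
    A minimal sketch of the kind of online Kalman-filter denoising the two records above describe, assuming a scalar random-walk state model for the accelerometer channel; the process and measurement variances (q, r) are illustrative and would need tuning for a real SINS sensor.

    ```python
    import numpy as np

    def kalman_denoise(z, q=1e-4, r=1e-1):
        """Scalar Kalman filter: x_k = x_{k-1} + w_k, z_k = x_k + v_k."""
        x, p = z[0], 1.0
        out = np.empty_like(z)
        for k, zk in enumerate(z):
            p = p + q                      # predict (state uncertainty grows)
            K = p / (p + r)                # Kalman gain
            x = x + K * (zk - x)           # update with the innovation
            p = (1.0 - K) * p
            out[k] = x
        return out

    rng = np.random.default_rng(2)
    true_accel = 9.81 * np.ones(1000) + 0.001 * np.arange(1000)
    measured = true_accel + 0.3 * rng.standard_normal(1000)
    smoothed = kalman_denoise(measured)
    ```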

  9. Comparison of JADE and canonical correlation analysis for ECG de-noising.

    PubMed

    Kuzilek, Jakub; Kremen, Vaclav; Lhotska, Lenka

    2014-01-01

    This paper explores the differences between two blind source separation methods in the context of ECG de-noising. The first method is joint approximate diagonalization of eigenmatrices (JADE), which is based on estimating the fourth-order cross-cumulant tensor and diagonalizing it. The second is the statistical method known as canonical correlation analysis (CCA), which is based on estimating correlation matrices between two multidimensional variables. Both methods were used within a framework that combines the blind source separation algorithm with a decision tree. The evaluation was performed on a large database of 382 long-term ECG signals and the results were examined. The biggest difference was found for 50 Hz power-line interference, where the CCA algorithm failed completely. The main strength of CCA thus lies in estimating unstructured noise within the ECG. The JADE algorithm has higher computational complexity, so CCA performed faster when estimating the components. PMID:25570833

  10. Feasibility of blind source separation methods for the denoising of dense-array EEG.

    PubMed

    Taheri, N; Kachenoura, A; Ansari-Asl, K; Karfoul, A; Senhadji, L; Albera, L; Merlet, I

    2015-01-01

    High-density electroencephalographic recordings have recently been shown to provide useful information during the pre-surgical evaluation of patients suffering from drug-resistant epilepsy. However, these recordings can be particularly obscured by noise and artifacts. This paper focuses on the denoising of dense-array EEG data (e.g. 257 channels) contaminated with muscle artifacts. In this context, we compared the efficiency of several Independent Component Analysis (ICA) methods, namely SOBI, SOBIrob, PICA, InfoMax, two different implementations of FastICA, COM2, ERICA, and SIMBEC, as well as that of Canonical Correlation Analysis (CCA). We evaluated the performance using the Normalized Mean Square Error (NMSE) criterion and calculated the numerical complexity. Quantitative results obtained on realistic simulated data show that some of the ICA methods, as well as CCA, can properly remove muscular artifacts from dense-array EEG. PMID:26737361

  11. Use of Split Bregman denoising for iterative reconstruction in fluorescence diffuse optical tomography.

    PubMed

    Chamorro-Servent, Judit; Abascal, Juan F P J; Aguirre, Juan; Arridge, Simon; Correia, Teresa; Ripoll, Jorge; Desco, Manuel; Vaquero, Juan J

    2013-07-01

    Fluorescence diffuse optical tomography (fDOT) is a noninvasive imaging technique that makes it possible to quantify the spatial distribution of fluorescent tracers in small animals. fDOT image reconstruction is commonly performed by means of iterative methods such as the algebraic reconstruction technique (ART). The useful results yielded by more advanced l1-regularized techniques for signal recovery and image reconstruction, together with the recent publication of the Split Bregman (SB) procedure, led us to propose a new approach to the fDOT inverse problem, namely, ART-SB. This method alternates a cost-efficient reconstruction step (ART iteration) with a denoising filtering step based on minimization of the total variation of the image using the SB method, which can be solved efficiently and quickly. We applied this method to simulated and experimental fDOT data and found that ART-SB provides substantial benefits over conventional ART.
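
    A minimal sketch of the ART-plus-denoising alternation described above. For brevity, the total-variation step here is a few explicit gradient iterations on a smoothed TV energy rather than the Split Bregman solver used in the paper; the toy projection problem is illustrative.

    ```python
    import numpy as np

    def art_sweep(x, A, b, lam=0.2):
        """One Kaczmarz sweep: row-by-row algebraic reconstruction update."""
        for i in range(A.shape[0]):
            ai = A[i]
            x += lam * (b[i] - ai @ x) / (ai @ ai) * ai
        return x

    def tv_smooth(img, n_iter=20, step=0.1, eps=1e-6):
        """Gradient descent on a smoothed total-variation energy."""
        u = img.copy()
        for _ in range(n_iter):
            gx = np.diff(u, axis=1, append=u[:, -1:])
            gy = np.diff(u, axis=0, append=u[-1:, :])
            mag = np.sqrt(gx**2 + gy**2 + eps)
            div = (gx / mag - np.roll(gx / mag, 1, axis=1)
                   + gy / mag - np.roll(gy / mag, 1, axis=0))
            u += step * div                 # curvature flow smooths the image
        return u

    # toy problem: random projections of a small square phantom
    rng = np.random.default_rng(3)
    n = 16
    truth = np.zeros((n, n)); truth[5:11, 5:11] = 1.0
    A = rng.standard_normal((200, n * n))
    b = A @ truth.ravel()
    x = np.zeros(n * n)
    for _ in range(10):                     # ART / TV-denoise alternation
        x = art_sweep(x, A, b)
        x = tv_smooth(x.reshape(n, n)).ravel()
    ```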

  12. A blind detection scheme based on modified wavelet denoising algorithm for wireless optical communications

    NASA Astrophysics Data System (ADS)

    Li, Ruijie; Dang, Anhong

    2015-10-01

    This paper investigates a detection scheme without channel state information for wireless optical communication (WOC) systems in a turbulence-induced fading channel. The proposed scheme can effectively diminish the additive noise caused by background radiation and the photodetector, as well as the intensity scintillation caused by turbulence. The additive noise can be mitigated significantly using the modified wavelet threshold denoising algorithm, and the intensity scintillation can then be attenuated by exploiting the temporal correlation of the WOC channel. Moreover, to improve the performance beyond that of the maximum likelihood decision, the maximum a posteriori probability (MAP) criterion is considered. Compared with the conventional blind detection algorithm, simulation results show that the proposed detection scheme can improve the signal-to-noise ratio (SNR) performance by about 4.38 dB when the bit error rate and scintillation index (SI) are 1×10⁻⁶ and 0.02, respectively.

  13. The EM Method in a Probabilistic Wavelet-Based MRI Denoising

    PubMed Central

    2015-01-01

    Human body heat emission and other external causes can interfere with magnetic resonance image acquisition and produce noise. In this kind of image, the noise, when no signal is present, is Rayleigh distributed, and its wavelet coefficients can be approximately modeled by a Gaussian distribution. Noiseless magnetic resonance images can be modeled by a Laplacian distribution in the wavelet domain. This paper proposes a new magnetic resonance image denoising method built on these observations. The method performs shrinkage of wavelet coefficients based on the conditional probability of their being noise or detail. The parameters involved in this filtering approach are calculated by means of the expectation maximization (EM) method, which avoids the need for an estimator of the noise variance. The efficiency of the proposed filter is studied and compared with other important filtering techniques, such as Nowak's, Donoho-Johnstone's, Awate-Whitaker's, and nonlocal means filters, in different 2D and 3D images. PMID:26089959

  14. Three-dimensional fuzzy-directional processing to impulse video color denoising in real time environment

    NASA Astrophysics Data System (ADS)

    Rosales-Silva, Alberto J.; Ponomaryov, Volodymyr; Gallegos-Funes, Francisco

    2009-05-01

    A robust three-dimensional scheme using fuzzy and directional techniques is presented for denoising video color images contaminated by impulsive random noise. This scheme estimates noise and movement levels in a local area, detecting edges and fine details in an image video sequence. The proposed approach preserves the chromaticity properties of multidimensional and multichannel images. The algorithm was specially designed to reduce computational load, and its performance is quantified using objective criteria, such as Peak Signal-to-Noise Ratio, Mean Absolute Error and Normalized Color Difference, as well as subjective visual assessment. The novel filter shows superior performance against other well-known algorithms found in the literature. Real-time analysis is realized on a Digital Signal Processor (DSP) to demonstrate its processing capability. The DSP, designed by Texas Instruments for multichannel processing in multitask processes, makes it possible to improve the performance of several tasks at the same time, enhancing processing time and reducing computational load on dedicated hardware.

  15. Denoising NMR time-domain signal by singular-value decomposition accelerated by graphics processing units.

    PubMed

    Man, Pascal P; Bonhomme, Christian; Babonneau, Florence

    2014-01-01

    We present a post-processing method that decreases NMR spectrum noise without line-shape distortion, thereby increasing the signal-to-noise (S/N) ratio of a spectrum. This method, called the Cadzow enhancement procedure, is based on the singular-value decomposition of the time-domain signal. We also provide software whose execution takes a few seconds for typical data when run on a modern graphics-processing unit. We tested this procedure not only on the low-sensitivity nucleus (29)Si in hybrid materials but also on the low gyromagnetic ratio, quadrupolar nucleus (87)Sr in the reference sample Sr(NO3)2. Improving the spectrum S/N ratio facilitates the determination of the T/Q ratio of hybrid materials. The procedure is also applicable to simulated spectra, resulting in shorter simulation durations for powder averaging. An estimate of the number of singular values needed for denoising is also provided. PMID:24880899
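
    A minimal numpy sketch of the Cadzow enhancement idea on a synthetic time-domain signal: embed the signal in a Hankel matrix, truncate the SVD to the presumed number of spectral components, and average the anti-diagonals, repeating a few passes. The rank and pass count are illustrative parameters; the GPU acceleration discussed in the paper is not shown.

    ```python
    import numpy as np

    def cadzow(fid, rank, n_pass=3):
        """One-channel Cadzow denoising by truncated Hankel SVD."""
        n = fid.size
        L = n // 2 + 1
        for _ in range(n_pass):
            H = np.array([fid[i:i + n - L + 1] for i in range(L)])  # Hankel
            U, s, Vt = np.linalg.svd(H, full_matrices=False)
            H = (U[:, :rank] * s[:rank]) @ Vt[:rank]                # truncate
            # anti-diagonal averaging back to a 1-D signal
            out = np.zeros(n, dtype=H.dtype)
            cnt = np.zeros(n)
            for i in range(L):
                out[i:i + n - L + 1] += H[i]
                cnt[i:i + n - L + 1] += 1
            fid = out / cnt
        return fid

    rng = np.random.default_rng(4)
    t = np.arange(512)
    clean = (np.exp((2j * np.pi * 0.05 - 0.01) * t)
             + 0.5 * np.exp((2j * np.pi * 0.12 - 0.005) * t))
    noisy = clean + 0.3 * (rng.standard_normal(512)
                           + 1j * rng.standard_normal(512))
    denoised = cadzow(noisy, rank=2)     # rank = number of spectral lines
    ```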

  16. Local denoising of digital speckle pattern interferometry fringes by multiplicative correlation and weighted smoothing splines.

    PubMed

    Federico, Alejandro; Kaufmann, Guillermo H

    2005-05-10

    We evaluate the use of smoothing splines with a weighted roughness measure for local denoising of the correlation fringes produced in digital speckle pattern interferometry. In particular, we also evaluate the performance of the multiplicative correlation operation between two speckle patterns that is proposed as an alternative procedure to generate the correlation fringes. It is shown that the application of a normalization algorithm to the smoothed correlation fringes reduces the excessive bias generated in the previous filtering stage. The evaluation is carried out by use of computer-simulated fringes that are generated for different average speckle sizes and intensities of the reference beam, including decorrelation effects. A comparison with filtering methods based on the continuous wavelet transform is also presented. Finally, the performance of the smoothing method in processing experimental data is illustrated.

  17. Geometric moment based nonlocal-means filter for ultrasound image denoising

    NASA Astrophysics Data System (ADS)

    Dou, Yangchao; Zhang, Xuming; Ding, Mingyue; Chen, Yimin

    2011-06-01

    Speckle noise is inevitable in ultrasound images, so despeckling is an important processing step. The original nonlocal means (NLM) filter can remove speckle noise and protect texture information effectively when the image corruption degree is relatively low, but when the noise is strong, NLM produces fictitious texture information, which degrades its denoising performance. In this paper, a novel nonlocal means filter is proposed that introduces geometric moments into the NLM framework. Although geometric moments are not orthogonal moments, they are popular for their concision, and their restoration ability had not previously been demonstrated. Results on synthetic data and real ultrasound images show that the proposed method achieves better despeckling performance than other state-of-the-art methods.
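
    A minimal brute-force sketch of the classical NLM filter that the proposed method builds on (patch-similarity weights); the geometric-moment extension itself is not reproduced here, and the patch, search-window and bandwidth parameters are illustrative.

    ```python
    import numpy as np

    def nlm(img, patch=3, search=7, h=0.1):
        """Brute-force nonlocal means on a small grayscale image."""
        p, s = patch // 2, search // 2
        pad = np.pad(img, p + s, mode="reflect")
        out = np.zeros_like(img)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                ci, cj = i + p + s, j + p + s
                ref = pad[ci - p:ci + p + 1, cj - p:cj + p + 1]
                w_sum = v_sum = 0.0
                for di in range(-s, s + 1):
                    for dj in range(-s, s + 1):
                        ni, nj = ci + di, cj + dj
                        cand = pad[ni - p:ni + p + 1, nj - p:nj + p + 1]
                        d2 = np.mean((ref - cand) ** 2)
                        w = np.exp(-d2 / h**2)     # patch-similarity weight
                        w_sum += w
                        v_sum += w * pad[ni, nj]
                out[i, j] = v_sum / w_sum
        return out

    rng = np.random.default_rng(5)
    img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0
    noisy = img + 0.2 * rng.standard_normal(img.shape)
    denoised = nlm(noisy)
    ```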

  18. Using wavelet denoising and mathematical morphology in the segmentation technique applied to blood cells images.

    PubMed

    Boix, Macarena; Cantó, Begoña

    2013-04-01

    Accurate image segmentation is used in medical diagnosis since this technique is a noninvasive pre-processing step for biomedical treatment. In this work we present an efficient segmentation method for medical image analysis; in particular, blood cells can be segmented with it. To that end, we combine the wavelet transform with morphological operations. Moreover, the wavelet thresholding technique is used to eliminate noise and prepare the image for suitable segmentation. In the wavelet denoising step we determine the best wavelet as the one that yields a segmentation with the largest area inside the cell. We study different wavelet families and conclude that the wavelet db1 is the best, and that it can serve for future work on blood pathologies. The proposed method generates good results when applied to several images. Finally, the proposed algorithm, implemented in the MATLAB environment, is verified on selected blood cell images.

  19. The empirical accuracy of uncertain inference models

    NASA Technical Reports Server (NTRS)

    Vaughan, David S.; Yadrick, Robert M.; Perrin, Bruce M.; Wise, Ben P.

    1987-01-01

    Uncertainty is a pervasive feature of the domains in which expert systems are designed to function. Research designed to test uncertain inference methods for accuracy and robustness, in accordance with standard engineering practice, is reviewed. Several studies were conducted to assess how well various methods perform on problems constructed so that correct answers are known, and to find out what underlying features of a problem cause strong or weak performance. For each method studied, situations were identified in which performance deteriorates dramatically. Over a broad range of problems, some well known methods do only about as well as a simple linear regression model, and often much worse than a simple independence probability model. The results indicate that some commercially available expert system shells should be used with caution, because the uncertain inference models that they implement can yield rather inaccurate results.

  20. Inference for current leukemia free survival

    PubMed Central

    Liu, Leiyan; Logan, Brent

    2009-01-01

    Donor lymphocyte infusion (DLI) for patients who relapse following an allogeneic stem cell transplant has proved remarkably durable. Because of the potential for second remissions with DLI, the current leukemia free survival (CLFS), which is the probability that a patient has not failed the entire course of the treatment, is becoming of interest to clinical investigators. Based on either a multistate Markov model or a linear combination of Kaplan–Meier estimators, we explore regression models for the CLFS. We focus on the two sample problem and we develop confidence bands for the CLFS or for differences in CLFS as well as a Kolmogorov type hypothesis test using a re-sampling technique. We also examine the use of pseudo-values to make inference on the direct effects of covariates on the CLFS function and we develop a score test for the equality of two CLFS. We illustrate these inference methods on a bone marrow transplant dataset. PMID:18663574

  1. Parameter inference for biochemical systems that undergo a Hopf bifurcation.

    PubMed

    Kirk, Paul D W; Toni, Tina; Stumpf, Michael P H

    2008-07-01

    The increasingly widespread use of parametric mathematical models to describe biological systems means that the ability to infer model parameters is of great importance. In this study, we consider parameter inferability in nonlinear ordinary differential equation models that undergo a bifurcation, focusing on a simple but generic biochemical reaction model. We systematically investigate the shape of the likelihood function for the model's parameters, analyzing the changes that occur as the model undergoes a Hopf bifurcation. We demonstrate that there exists an intrinsic link between inference and the parameters' impact on the modeled system's dynamical stability, which we hope will motivate further research in this area.

  2. Bayesian Inference: with ecological applications

    USGS Publications Warehouse

    Link, William A.; Barker, Richard J.

    2010-01-01

    This text provides a mathematically rigorous yet accessible and engaging introduction to Bayesian inference, with relevant examples that will be of interest to biologists working in the fields of ecology, wildlife management and environmental studies, as well as students in advanced undergraduate statistics. The text opens the door to Bayesian inference, taking advantage of modern computational efficiencies and easily accessible software to evaluate complex hierarchical models.

  3. Pathway network inference from gene expression data

    PubMed Central

    2014-01-01

    Background The development of high-throughput omics technologies enabled genome-wide measurements of the activity of cellular elements and provides the analytical resources for the progress of the Systems Biology discipline. Analysis and interpretation of gene expression data has evolved from the gene to the pathway and interaction level, i.e. from the detection of differentially expressed genes, to the establishment of gene interaction networks and the identification of enriched functional categories. Still, the understanding of biological systems requires a further level of analysis that addresses the characterization of the interaction between functional modules. Results We present a novel computational methodology to study the functional interconnections among the molecular elements of a biological system. The PANA approach uses high-throughput genomics measurements and a functional annotation scheme to extract an activity profile from each functional block -or pathway- followed by machine-learning methods to infer the relationships between these functional profiles. The result is a global, interconnected network of pathways that represents the functional cross-talk within the molecular system. We have applied this approach to describe the functional transcriptional connections during the yeast cell cycle and to identify pathways that change their connectivity in a disease condition using an Alzheimer's disease example. Conclusions PANA is a useful tool for deepening our understanding of the functional interdependences that operate within complex biological systems. We show that the approach is algorithmically consistent and that the inferred network is well supported by the available functional data. The method allows the dissection of the molecular basis of the functional connections, and we describe the different regulatory mechanisms that explain the network's topology obtained for the yeast cell cycle data. PMID:25032889

  4. Active inference, communication and hermeneutics.

    PubMed

    Friston, Karl J; Frith, Christopher D

    2015-07-01

    Hermeneutics refers to interpretation and translation of text (typically ancient scriptures) but also applies to verbal and non-verbal communication. In a psychological setting it nicely frames the problem of inferring the intended content of a communication. In this paper, we offer a solution to the problem of neural hermeneutics based upon active inference. In active inference, action fulfils predictions about how we will behave (e.g., predicting we will speak). Crucially, these predictions can be used to predict both self and others--during speaking and listening respectively. Active inference mandates the suppression of prediction errors by updating an internal model that generates predictions--both at fast timescales (through perceptual inference) and slower timescales (through perceptual learning). If two agents adopt the same model, then--in principle--they can predict each other and minimise their mutual prediction errors. Heuristically, this ensures they are singing from the same hymn sheet. This paper builds upon recent work on active inference and communication to illustrate perceptual learning using simulated birdsongs. Our focus here is the neural hermeneutics implicit in learning, where communication facilitates long-term changes in generative models that are trying to predict each other. In other words, communication induces perceptual learning and enables others to (literally) change our minds and vice versa.

  5. Active inference, communication and hermeneutics☆

    PubMed Central

    Friston, Karl J.; Frith, Christopher D.

    2015-01-01

    Hermeneutics refers to interpretation and translation of text (typically ancient scriptures) but also applies to verbal and non-verbal communication. In a psychological setting it nicely frames the problem of inferring the intended content of a communication. In this paper, we offer a solution to the problem of neural hermeneutics based upon active inference. In active inference, action fulfils predictions about how we will behave (e.g., predicting we will speak). Crucially, these predictions can be used to predict both self and others – during speaking and listening respectively. Active inference mandates the suppression of prediction errors by updating an internal model that generates predictions – both at fast timescales (through perceptual inference) and slower timescales (through perceptual learning). If two agents adopt the same model, then – in principle – they can predict each other and minimise their mutual prediction errors. Heuristically, this ensures they are singing from the same hymn sheet. This paper builds upon recent work on active inference and communication to illustrate perceptual learning using simulated birdsongs. Our focus here is the neural hermeneutics implicit in learning, where communication facilitates long-term changes in generative models that are trying to predict each other. In other words, communication induces perceptual learning and enables others to (literally) change our minds and vice versa. PMID:25957007

  6. Active inference, communication and hermeneutics.

    PubMed

    Friston, Karl J; Frith, Christopher D

    2015-07-01

    Hermeneutics refers to interpretation and translation of text (typically ancient scriptures) but also applies to verbal and non-verbal communication. In a psychological setting it nicely frames the problem of inferring the intended content of a communication. In this paper, we offer a solution to the problem of neural hermeneutics based upon active inference. In active inference, action fulfils predictions about how we will behave (e.g., predicting we will speak). Crucially, these predictions can be used to predict both self and others--during speaking and listening respectively. Active inference mandates the suppression of prediction errors by updating an internal model that generates predictions--both at fast timescales (through perceptual inference) and slower timescales (through perceptual learning). If two agents adopt the same model, then--in principle--they can predict each other and minimise their mutual prediction errors. Heuristically, this ensures they are singing from the same hymn sheet. This paper builds upon recent work on active inference and communication to illustrate perceptual learning using simulated birdsongs. Our focus here is the neural hermeneutics implicit in learning, where communication facilitates long-term changes in generative models that are trying to predict each other. In other words, communication induces perceptual learning and enables others to (literally) change our minds and vice versa. PMID:25957007

  7. Inferring the temperature dependence of population parameters: the effects of experimental design and inference algorithm

    PubMed Central

    Palamara, Gian Marco; Childs, Dylan Z; Clements, Christopher F; Petchey, Owen L; Plebani, Marco; Smith, Matthew J

    2014-01-01

    Understanding and quantifying the temperature dependence of population parameters, such as intrinsic growth rate and carrying capacity, is critical for predicting the ecological responses to environmental change. Many studies provide empirical estimates of such temperature dependencies, but a thorough investigation of the methods used to infer them has not been performed yet. We created artificial population time series using a stochastic logistic model parameterized with the Arrhenius equation, so that activation energy drives the temperature dependence of population parameters. We simulated different experimental designs and used different inference methods, varying the likelihood functions and other aspects of the parameter estimation methods. Finally, we applied the best performing inference methods to real data for the species Paramecium caudatum. The relative error of the estimates of activation energy varied between 5% and 30%. The fraction of habitat sampled played the most important role in determining the relative error; sampling at least 1% of the habitat kept it below 50%. We found that methods that simultaneously use all time series data (direct methods) and methods that estimate population parameters separately for each temperature (indirect methods) are complementary. Indirect methods provide a clearer insight into the shape of the functional form describing the temperature dependence of population parameters; direct methods enable a more accurate estimation of the parameters of such functional forms. Using both methods, we found that growth rate and carrying capacity of Paramecium caudatum scale with temperature according to different activation energies. Our study shows how careful choice of experimental design and inference methods can increase the accuracy of the inferred relationships between temperature and population parameters. The comparison of estimation methods provided here can increase the accuracy of model predictions, with important

  8. Inferring the temperature dependence of population parameters: the effects of experimental design and inference algorithm.

    PubMed

    Palamara, Gian Marco; Childs, Dylan Z; Clements, Christopher F; Petchey, Owen L; Plebani, Marco; Smith, Matthew J

    2014-12-01

    Understanding and quantifying the temperature dependence of population parameters, such as intrinsic growth rate and carrying capacity, is critical for predicting the ecological responses to environmental change. Many studies provide empirical estimates of such temperature dependencies, but a thorough investigation of the methods used to infer them has not been performed yet. We created artificial population time series using a stochastic logistic model parameterized with the Arrhenius equation, so that activation energy drives the temperature dependence of population parameters. We simulated different experimental designs and used different inference methods, varying the likelihood functions and other aspects of the parameter estimation methods. Finally, we applied the best performing inference methods to real data for the species Paramecium caudatum. The relative error of the estimates of activation energy varied between 5% and 30%. The fraction of habitat sampled played the most important role in determining the relative error; sampling at least 1% of the habitat kept it below 50%. We found that methods that simultaneously use all time series data (direct methods) and methods that estimate population parameters separately for each temperature (indirect methods) are complementary. Indirect methods provide a clearer insight into the shape of the functional form describing the temperature dependence of population parameters; direct methods enable a more accurate estimation of the parameters of such functional forms. Using both methods, we found that growth rate and carrying capacity of Paramecium caudatum scale with temperature according to different activation energies. Our study shows how careful choice of experimental design and inference methods can increase the accuracy of the inferred relationships between temperature and population parameters. The comparison of estimation methods provided here can increase the accuracy of model predictions, with important
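
    A minimal sketch of the data-generating model described in the two records above, stochastic logistic growth with an Arrhenius-scaled growth rate, together with a naive "indirect" estimate of the activation energy obtained by fitting a growth rate at each temperature and regressing its logarithm on inverse temperature. All parameter values are illustrative.

    ```python
    import numpy as np

    K_B = 8.617e-5                          # Boltzmann constant, eV/K

    def arrhenius(r0, E, T, T0=293.15):
        """Growth rate scaled by activation energy E (eV) relative to T0."""
        return r0 * np.exp(-E / K_B * (1.0 / T - 1.0 / T0))

    def simulate_logistic(r, K=1000.0, n0=10.0, steps=200, dt=0.1,
                          sigma=0.05, rng=None):
        rng = rng or np.random.default_rng()
        n = np.empty(steps); n[0] = n0
        for t in range(1, steps):
            growth = r * n[t - 1] * (1.0 - n[t - 1] / K)
            noise = sigma * n[t - 1] * np.sqrt(dt) * rng.standard_normal()
            n[t] = max(n[t - 1] + growth * dt + noise, 1.0)
        return n

    rng = np.random.default_rng(6)
    temps = np.array([288.0, 293.0, 298.0, 303.0])
    E_true = 0.65                            # eV
    r_hat = []
    for T in temps:
        n = simulate_logistic(arrhenius(0.5, E_true, T), rng=rng)
        # indirect method: fit the early exponential phase (n << K)
        slope = np.polyfit(np.arange(30) * 0.1, np.log(n[:30]), 1)[0]
        r_hat.append(slope)
    # Arrhenius regression: log r = const - (E / k_B) * (1 / T)
    E_est = -np.polyfit(1.0 / temps, np.log(r_hat), 1)[0] * K_B
    ```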

  9. Optimal fuzzy inference for short-term load forecasting

    SciTech Connect

    Mori, Hiroyuki; Kobayashi, Hidenori

    1996-02-01

    This paper proposes an optimal fuzzy inference method for short-term load forecasting. The proposed method constructs an optimal structure of the simplified fuzzy inference that minimizes model errors and the number of the membership functions to grasp nonlinear behavior of power system short-term loads. The model is identified by simulated annealing and the steepest descent method. The proposed method is demonstrated in examples.

  10. Optimal fuzzy inference for short-term load forecasting

    SciTech Connect

    Mori, Hiroyuki; Kobayashi, Hidenori

    1995-12-31

    This paper proposes an optimal fuzzy inference method for short-term load forecasting. The proposed method constructs an optimal structure of the simplified fuzzy inference that minimizes model errors and the number of the membership functions to grasp nonlinear behavior of power system short-term loads. The model is identified by simulated annealing and the steepest descent method. The proposed method is demonstrated in examples.
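
    A minimal sketch of a simplified fuzzy inference system of the kind the two records above refer to: triangular membership functions, product firing strengths, and weighted-average defuzzification (a zero-order Sugeno form). The rule base and shapes are illustrative; the papers' actual contribution, optimizing the structure by simulated annealing and the steepest descent method, is not shown.

    ```python
    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function rising on [a, b], falling on [b, c]."""
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    def fuzzy_forecast(temp, hour):
        """Toy 2-input load forecast mapping (temp, hour) to a load level."""
        mem_temp = {"cool": tri(temp, -10, 5, 20), "hot": tri(temp, 15, 30, 45)}
        mem_hour = {"night": tri(hour, -6, 3, 12), "day": tri(hour, 8, 14, 20)}
        # rule consequents: one constant load level per rule (zero-order Sugeno)
        rules = [("cool", "night", 40.0), ("cool", "day", 70.0),
                 ("hot", "night", 55.0), ("hot", "day", 95.0)]
        w = np.array([mem_temp[t] * mem_hour[h] for t, h, _ in rules])
        y = np.array([load for _, _, load in rules])
        return float(w @ y / w.sum()) if w.sum() > 0 else float(y.mean())

    print(fuzzy_forecast(temp=28.0, hour=13))   # hot afternoon -> high load
    ```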

  11. Physics of Inference

    NASA Astrophysics Data System (ADS)

    Toroczkai, Zoltan

    Jaynes's maximum entropy method provides a family of principled models that allow the prediction of a system's properties as constrained by empirical data (observables). However, their use is often hindered by the degeneracy problem characterized by spontaneous symmetry breaking, where predictions fail. Here we show that degeneracy appears when the corresponding density of states function is not log-concave, which is typically the consequence of nonlinear relationships between the constraining observables. We illustrate this phenomenon on several examples, including from complex networks, combinatorics and classical spin systems (e.g., Blume-Emery-Griffiths lattice-spin models). Exploiting these nonlinear relationships we then propose a solution to the degeneracy problem for a large class of systems via transformations that render the density of states function log-concave. The effectiveness of the method is demonstrated on real-world network data. Finally, we discuss the implications of these findings on the relationship between the geometrical properties of the density of states function and phase transitions in spin systems. Supported in part by Grant No. FA9550-12-1-0405 from AFOSR/DARPA and by Grant No. HDTRA 1-09-1-0039 from DTRA.
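
    In standard notation, the construction the abstract refers to can be sketched as follows; the symbols follow common usage rather than the talk itself.

    ```latex
    \[
      p(x) \;=\; \frac{e^{-\lambda f(x)}}{Z(\lambda)},
      \qquad
      Z(\lambda) \;=\; \sum_{E} g(E)\, e^{-\lambda E},
    \]
    where $g(E)$ is the density of states of the constrained observable
    $E = f(x)$. The multiplier is fixed by the constraint
    $-\,\partial_{\lambda} \log Z(\lambda) = F$, and the induced distribution
    over $E$,
    \[
      P(E) \;\propto\; e^{\,S(E) - \lambda E}, \qquad S(E) = \log g(E),
    \]
    has a single sharp maximum precisely when $S(E)$ is concave (i.e.\ $g$ is
    log-concave); a non-concave $S$ yields coexisting maxima, which is the
    degeneracy described above.
    ```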

  12. Optimal inference with suboptimal models: Addiction and active Bayesian inference

    PubMed Central

    Schwartenbeck, Philipp; FitzGerald, Thomas H.B.; Mathys, Christoph; Dolan, Ray; Wurst, Friedrich; Kronbichler, Martin; Friston, Karl

    2015-01-01

    When casting behaviour as active (Bayesian) inference, optimal inference is defined with respect to an agent’s beliefs – based on its generative model of the world. This contrasts with normative accounts of choice behaviour, in which optimal actions are considered in relation to the true structure of the environment – as opposed to the agent’s beliefs about worldly states (or the task). This distinction shifts an understanding of suboptimal or pathological behaviour away from aberrant inference as such, to understanding the prior beliefs of a subject that cause them to behave less ‘optimally’ than our prior beliefs suggest they should behave. Put simply, suboptimal or pathological behaviour does not speak against understanding behaviour in terms of (Bayes optimal) inference, but rather calls for a more refined understanding of the subject’s generative model upon which their (optimal) Bayesian inference is based. Here, we discuss this fundamental distinction and its implications for understanding optimality, bounded rationality and pathological (choice) behaviour. We illustrate our argument using addictive choice behaviour in a recently described ‘limited offer’ task. Our simulations of pathological choices and addictive behaviour also generate some clear hypotheses, which we hope to pursue in ongoing empirical work. PMID:25561321

  13. De-noising and retrieving algorithm of Mie lidar data based on the particle filter and the Fernald method.

    PubMed

    Li, Chen; Pan, Zengxin; Mao, Feiyue; Gong, Wei; Chen, Shihua; Min, Qilong

    2015-10-01

    The signal-to-noise ratio (SNR) of an atmospheric lidar decreases rapidly as range increases, so maintaining high accuracy when retrieving lidar data at the far end is difficult. To address this problem, many de-noising algorithms have been developed; in particular, an effective de-noising algorithm has been proposed that simultaneously retrieves lidar data and obtains a de-noised signal by combining the ensemble Kalman filter (EnKF) with the Fernald method. This algorithm enhances the retrieval accuracy and effective measurement range of a lidar based on the Fernald method, but sometimes leads to a shift (bias) in the near range as a result of over-smoothing caused by the EnKF. This study proposes a new scheme that avoids this phenomenon by using a particle filter (PF) instead of the EnKF in the de-noising algorithm. Synthetic experiments show that the PF performs better than the EnKF and Fernald methods: the root mean square errors of the PF are 52.55% and 38.14% of those of the Fernald and EnKF methods, and the PF increases the SNR by 44.36% and 11.57% relative to the Fernald and EnKF methods, respectively. For experiments with real signals, the relative bias of the EnKF is 5.72%, which the PF reduces to 2.15% in the near range. Furthermore, the PF also significantly suppresses random noise in the far range. Extensive application of the PF method can be useful in determining the local and global properties of aerosols.
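
    A minimal sketch of a bootstrap particle filter of the generic kind used above (predict with a random-walk model, weight by a Gaussian likelihood, resample); the lidar-specific coupling to the Fernald retrieval is not reproduced, and the noise scales are illustrative.

    ```python
    import numpy as np

    def particle_filter(z, n_particles=500, q=0.05, r=0.5, rng=None):
        """Bootstrap PF for a random-walk state observed in Gaussian noise."""
        rng = rng or np.random.default_rng()
        particles = z[0] + r * rng.standard_normal(n_particles)
        est = np.empty_like(z)
        for k, zk in enumerate(z):
            particles += q * rng.standard_normal(n_particles)   # predict
            w = np.exp(-0.5 * ((zk - particles) / r) ** 2)       # weight
            w /= w.sum()
            est[k] = w @ particles                               # MMSE estimate
            idx = rng.choice(n_particles, n_particles, p=w)      # resample
            particles = particles[idx]
        return est

    rng = np.random.default_rng(7)
    true_sig = np.cumsum(0.05 * rng.standard_normal(400)) + 5.0
    noisy = true_sig + 0.5 * rng.standard_normal(400)
    denoised = particle_filter(noisy, rng=rng)
    ```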

  14. Denoising of Ictal EEG Data Using Semi-Blind Source Separation Methods Based on Time-Frequency Priors.

    PubMed

    Hajipour Sardouie, Sepideh; Bagher Shamsollahi, Mohammad; Albera, Laurent; Merlet, Isabelle

    2015-05-01

    Removing muscle activity from ictal ElectroEncephaloGram (EEG) data is an essential preprocessing step in diagnosis and study of epileptic disorders. Indeed, at the very beginning of seizures, ictal EEG has a low amplitude and its morphology in the time domain is quite similar to muscular activity. Contrary to the time domain, ictal signals have specific characteristics in the time-frequency domain. In this paper, we use the time-frequency signature of ictal discharges as a priori information on the sources of interest. To extract the time-frequency signature of ictal sources, we use the Canonical Correlation Analysis (CCA) method. Then, we propose two time-frequency based semi-blind source separation approaches, namely the Time-Frequency-Generalized EigenValue Decomposition (TF-GEVD) and the Time-Frequency-Denoising Source Separation (TF-DSS), for the denoising of ictal signals based on these time-frequency signatures. The performance of the proposed methods is compared with that of CCA and Independent Component Analysis (ICA) approaches for the denoising of simulated ictal EEGs and of real ictal data. The results show the superiority of the proposed methods in comparison with CCA and ICA. PMID:25095269
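
    A minimal sketch of the GEVD step underlying TF-GEVD: given a mask marking the samples where the time-frequency signature of interest is active, jointly diagonalize the covariance restricted to those samples against the overall covariance, keep the components with the largest generalized eigenvalues, and back-project. The construction of the mask from CCA-derived signatures is not shown; all names are illustrative.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def gevd_denoise(X, mask, n_keep=2):
        """X: (channels, samples); mask: boolean over samples where the
        source of interest is active (e.g., from a time-frequency prior)."""
        Xc = X - X.mean(axis=1, keepdims=True)
        C_sig = np.cov(Xc[:, mask])          # covariance under the prior
        C_all = np.cov(Xc)                   # overall covariance
        evals, V = eigh(C_sig, C_all)        # generalized eigendecomposition
        V = V[:, ::-1]                       # most "signal-like" first
        S = V.T @ Xc                         # estimated sources
        S[n_keep:] = 0.0                     # keep only the leading components
        return np.linalg.pinv(V.T) @ S       # back-projection

    rng = np.random.default_rng(8)
    t = np.arange(4000) / 200.0
    burst = np.where((t > 10) & (t < 15), np.sin(2 * np.pi * 8 * t), 0.0)
    X = np.vstack([burst, 0.5 * burst]) + 0.4 * rng.standard_normal((2, t.size))
    clean = gevd_denoise(X, (t > 10) & (t < 15), n_keep=1)
    ```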

  15. Multi-step damped multichannel singular spectrum analysis for simultaneous reconstruction and denoising of 3D seismic data

    NASA Astrophysics Data System (ADS)

    Zhang, Dong; Chen, Yangkang; Huang, Weilin; Gan, Shuwei

    2016-10-01

    Multichannel singular spectrum analysis (MSSA) is an effective approach for simultaneous seismic data reconstruction and denoising. MSSA utilizes truncated singular value decomposition (TSVD) to decompose the noisy signal into a signal subspace and a noise subspace, and a weighted projection onto convex sets (POCS)-like method to reconstruct the missing data in an appropriately constructed block Hankel matrix at each frequency slice. However, some residual noise remains in the signal subspace due to two major factors: the deficiency of traditional TSVD and the observed noisy data that is iteratively inserted during the weighted POCS-like iterations. In this paper, we first extend the recently proposed damped MSSA (DMSSA) for random noise attenuation, which is more powerful in distinguishing between signal and noise, to simultaneous reconstruction and denoising. Then, combined with DMSSA, we propose a multi-step strategy, named multi-step damped MSSA (MS-DMSSA), to efficiently reduce the noise inserted during the POCS-like iterations, and thus improve the final performance of simultaneous reconstruction and denoising. Application of the MS-DMSSA approach to 3D synthetic and field seismic data demonstrates better performance compared with the conventional MSSA approach.

  16. Denoising of B₁⁺ field maps for noise-robust image reconstruction in electrical properties tomography

    SciTech Connect

    Michel, Eric; Hernandez, Daniel; Cho, Min Hyoung; Lee, Soo Yeol

    2014-10-15

    Purpose: To validate the use of adaptive nonlinear filters in reconstructing conductivity and permittivity images from the noisy B₁⁺ maps in electrical properties tomography (EPT). Methods: In EPT, electrical property images are computed by taking the Laplacian of the B₁⁺ maps. To mitigate the noise amplification in computing the Laplacian, the authors applied adaptive nonlinear denoising filters to the measured complex B₁⁺ maps. After the denoising process, they computed the Laplacian by central differences. They performed EPT experiments on phantoms and a human brain at 3 T along with corresponding EPT simulations on finite-difference time-domain models. They evaluated the EPT images comparing them with the ones obtained by previous EPT reconstruction methods. Results: In both the EPT simulations and experiments, the nonlinear filtering greatly improved the EPT image quality when evaluated in terms of the mean and standard deviation of the electrical property values at the regions of interest. The proposed method also improved the overall similarity between the reconstructed conductivity images and the true shapes of the conductivity distribution. Conclusions: The nonlinear denoising enabled us to obtain better-quality EPT images of the phantoms and the human brain at 3 T.
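
    A minimal sketch of the reconstruction chain the paper hardens against noise: denoise the complex B₁⁺ map, then take a central-difference Laplacian, from which conductivity follows as σ ≈ Im(∇²B₁⁺/B₁⁺)/(μ₀ω) in the standard Helmholtz-based EPT approximation. Here a generic Gaussian filter stands in for the paper's adaptive nonlinear filters, and the constants and toy map are illustrative.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, laplace

    MU0 = 4e-7 * np.pi
    OMEGA = 2 * np.pi * 128e6            # approx. Larmor frequency at 3 T

    def ept_conductivity(b1, sigma_px=2.0, dx=1e-3):
        """Denoise B1+ (real and imaginary parts separately), then take a
        central-difference Laplacian and form the conductivity map."""
        b1_s = (gaussian_filter(b1.real, sigma_px)
                + 1j * gaussian_filter(b1.imag, sigma_px))
        lap = (laplace(b1_s.real) + 1j * laplace(b1_s.imag)) / dx**2
        return np.imag(lap / b1_s) / (MU0 * OMEGA)

    rng = np.random.default_rng(9)
    phase = 0.01 * np.add.outer(np.arange(64), np.arange(64))
    b1 = np.exp(1j * phase) + 0.01 * (rng.standard_normal((64, 64))
                                      + 1j * rng.standard_normal((64, 64)))
    cond_map = ept_conductivity(b1)
    ```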

  17. Edge preserved enhancement of medical images using adaptive fusion-based denoising by shearlet transform and total variation algorithm

    NASA Astrophysics Data System (ADS)

    Gupta, Deep; Anand, Radhey Shyam; Tyagi, Barjeev

    2013-10-01

    Edge-preserving enhancement is of great interest in medical imaging. Noise present in medical images affects the quality, contrast resolution and, most importantly, texture information, and can also make post-processing difficult. An enhancement approach using an adaptive fusion algorithm is proposed which utilizes the features of the shearlet transform (ST) and the total variation (TV) approach. In the proposed method, three different denoised images, one processed with the TV method, one with shearlet denoising, and one with edge information recovered from the remnant of the TV method and processed with the ST, are fused adaptively. The enhanced images produced by the proposed method have improved visibility and detectability. For the proposed method, different weights are evaluated from the variance maps of the individual denoised images and from the edge information extracted from the remnant of the TV approach. The performance of the proposed method is evaluated by conducting various experiments on both standard images and different medical images such as computed tomography, magnetic resonance, and ultrasound. Experiments show that the proposed method provides an improvement not only in noise reduction but also in the preservation of more edges and image details compared to the other methods.

  18. Study of real-time image denoising and hole-filling for micro-cantilever IR FPA imaging system

    NASA Astrophysics Data System (ADS)

    Feng, Yun; Zhao, Yuejin; Dong, Liquan; Liu, Ming; Liu, Xiaohua; Li, Xiaomeng; Zhao, Zhu; Yu, Xiaomei; Hui, Mei; Wu, Hong

    2014-10-01

    This paper proposes and experimentally demonstrates a new denoising and hole-filling algorithm, based on discrete-point removal and bilinear interpolation, for the bi-material cantilever FPA infrared imaging system. In practice, because of the limitations of the FPA manufacturing process and the optical readout system, the quality of the obtained images is often unsatisfactory: a great deal of noise and many holes appear in the images, which restricts the application of the infrared imaging system. After analyzing the causes of the noise and holes, an algorithm is presented to improve the quality of the infrared images. First, statistical characteristics such as probability histograms of the noisy images are analyzed in detail, and the IR images are denoised by removing discrete points. Second, the holes are filled by bilinear interpolation. In this step, the reference points are found through a partial-derivative method instead of simply using the edge points of the holes; this detects the true support points effectively and brings the filled values much closer to the true ones. Finally, the algorithm is applied successfully to different infrared images. Experimental results show that the IR images can be denoised effectively and the SNRs are improved substantially. Meanwhile, the filling ratios of target holes reach as high as 95% and good visual quality is achieved. The algorithm has the advantages of high speed, good precision and easy implementation, making it a highly efficient real-time image processing algorithm for bi-material micro-cantilever FPA infrared imaging systems.
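
    A minimal sketch of the hole-filling idea: treat hole pixels as missing and fill them by averaging row-wise and column-wise linear interpolation from valid neighbors. This is a simple stand-in for the paper's reference-point search via partial derivatives; the mask and image are synthetic.

    ```python
    import numpy as np

    def fill_holes(img, hole_mask):
        """Fill masked pixels by averaging row-wise and column-wise linear
        interpolation from valid neighbors."""
        def interp_1d(line, bad):
            good = ~bad
            if good.sum() < 2:
                return line                  # not enough support; leave as-is
            x = np.arange(line.size)
            out = line.copy()
            out[bad] = np.interp(x[bad], x[good], line[good])
            return out

        rows = np.array([interp_1d(img[i], hole_mask[i])
                         for i in range(img.shape[0])])
        cols = np.array([interp_1d(img[:, j], hole_mask[:, j])
                         for j in range(img.shape[1])]).T
        out = img.copy()
        out[hole_mask] = 0.5 * (rows + cols)[hole_mask]
        return out

    rng = np.random.default_rng(10)
    img = np.add.outer(np.linspace(0, 1, 40), np.linspace(0, 1, 40))
    mask = rng.random(img.shape) < 0.05      # 5% dead pixels / holes
    repaired = fill_holes(np.where(mask, 0.0, img), mask)
    ```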

  19. Evolutionary inferences from the analysis of exchangeability

    PubMed Central

    Hendry, Andrew P.; Kaeuffer, Renaud; Crispo, Erika; Peichel, Catherine L.; Bolnick, Daniel I.

    2013-01-01

    Evolutionary inferences are usually based on statistical models that compare mean genotypes and phenotypes (or their frequencies) among populations. An alternative is to use the actual distribution of genotypes and phenotypes to infer the “exchangeability” of individuals among populations. We illustrate this approach by using discriminant functions on principal components to classify individuals among paired lake and stream populations of threespine stickleback in each of six independent watersheds. Classification based on neutral and non-neutral microsatellite markers was highest to the population of origin and next-highest to populations in the same watershed. These patterns are consistent with the influence of historical contingency (separate colonization of each watershed) and subsequent gene flow (within but not between watersheds). In comparison to this low genetic exchangeability, ecological (diet) and morphological (trophic and armor traits) exchangeability was relatively high – particularly among populations from similar habitats. These patterns reflect the role of natural selection in driving parallel adaptive changes when independent populations colonize similar habitats. Importantly, however, substantial non-parallelism was also evident. Our results show that analyses based on exchangeability can confirm inferences based on statistical analyses of means or frequencies, while also refining insights into the drivers of – and constraints on – evolutionary diversification. PMID:24299398
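
    A minimal sketch of the classification machinery described above (discriminant functions on principal components), assuming scikit-learn; the cross-validated misclassification rate then serves as a crude exchangeability measure. The two "populations" here are synthetic.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_predict
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(11)
    # two synthetic "populations" of 10-dimensional trait vectors
    lake = rng.normal(0.0, 1.0, size=(60, 10)); lake[:, 0] += 1.5
    stream = rng.normal(0.0, 1.0, size=(60, 10))
    X = np.vstack([lake, stream])
    y = np.array([0] * 60 + [1] * 60)

    clf = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
    pred = cross_val_predict(clf, X, y, cv=5)
    # misclassification rate ~ exchangeability of individuals across groups
    exchangeability = (pred != y).mean()
    print(f"cross-population exchangeability: {exchangeability:.2f}")
    ```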

  20. Causal inference in biology networks with integrated belief propagation.

    PubMed

    Chang, Rui; Karr, Jonathan R; Schadt, Eric E

    2015-01-01

    Inferring causal relationships among molecular and higher-order phenotypes is a critical step in elucidating the complexity of living systems. Here we propose a novel method for inferring causality that is no longer constrained by the conditional dependency arguments that limit the ability of statistical causal inference methods to resolve causal relationships within sets of graphical models that are Markov equivalent. Our method utilizes Bayesian belief propagation to infer the responses of perturbation events on molecular traits given a hypothesized graph structure. A distance measure between the inferred response distribution and the observed data is defined to assess the 'fitness' of the hypothesized causal relationships. To test our algorithm, we infer causal relationships within equivalence classes of gene networks in which the possible functional interactions are assumed to be nonlinear, given synthetic microarray and RNA sequencing data. We also apply our method to infer causality in a real metabolic network containing a v-structure and a feedback loop. We show that our method can recapitulate the causal structure and recover the feedback loop from steady-state data alone, which conventional methods cannot. PMID:25592596

  1. Use of Empirical Mode Decomposition based Denoised NDVI in Extended Three-Temperature Model to estimate Evapotranspiration in Northeast Indian Ecosystems

    NASA Astrophysics Data System (ADS)

    Padhee, S. K.

    2015-12-01

    Evapotranspiration (ET) is an essential component of the energy balance and water budgeting methods, and its precise assessment is crucial for the estimation of various hydrological parameters. Traditional point-estimation methods for ET computation offer quantitative analysis but lack spatial distribution. The use of Remote Sensing (RS) data with good spatial, spectral and temporal resolution and broad spatial coverage can provide estimates with some advantages; however, approaches that require a data-rich environment demand time and resources. The estimation of spatially distributed soil evaporation (Es) and transpiration from the canopy (Ec) from RS data, followed by their combination to give total ET, is a simpler approach for accurate estimates of ET flux at the macro-scale. The 'Extended Three-Temperature Model' (Extended 3T Model) is an established model based on this approach and is capable of computing ET and its partition into Es and Ec within the same algorithm. A case study was conducted using the Extended 3T Model and MODIS products for the Brahmaputra river basin in Northeast India for the years 2000-2010. The model requires the land surface temperature (Ts), which was separated into the surface temperature of dry soil (Tsm) and the surface temperature of vegetation (Tcm) using a derivative of the vegetation index (NDVI) called fractional vegetation cover (f). The NDVI time series, which is nonlinear and nonstationary, can be decomposed by Empirical Mode Decomposition (EMD) into components called intrinsic mode functions (IMFs) based on inherent temporal scales. The highest-frequency component, which was found to represent noise, was subtracted from the original NDVI series to obtain the denoised product from which f was derived. The separated land surface temperatures (Tsm and Tcm) were used to calculate Es and Ec, followed by estimation of the total ET. The spatiotemporal
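
    A minimal sketch of the denoising step described above, assuming the PyEMD package (distributed on PyPI as EMD-signal): decompose the NDVI series into IMFs and subtract the highest-frequency one. The synthetic series and its noise level are illustrative.

    ```python
    import numpy as np
    from PyEMD import EMD   # assumes the EMD-signal package is installed

    rng = np.random.default_rng(12)
    t = np.arange(0, 11 * 23)                   # ~11 years of 16-day composites
    ndvi = (0.45 + 0.25 * np.sin(2 * np.pi * t / 23.0)
            + 0.05 * rng.standard_normal(t.size))

    imfs = EMD().emd(ndvi)                      # IMFs, fastest oscillation first
    ndvi_denoised = ndvi - imfs[0]              # drop the highest-frequency IMF
    ```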

  2. Multiple model inference.

    SciTech Connect

    Swiler, Laura Painton; Urbina, Angel

    2010-07-01

    This paper compares three approaches for model selection: classical least squares methods, information theoretic criteria, and Bayesian approaches. Least squares methods are not model selection methods although one can select the model that yields the smallest sum-of-squared error function. Information theoretic approaches balance overfitting with model accuracy by incorporating terms that penalize more parameters with a log-likelihood term to reflect goodness of fit. Bayesian model selection involves calculating the posterior probability that each model is correct, given experimental data and prior probabilities that each model is correct. As part of this calculation, one often calibrates the parameters of each model and this is included in the Bayesian calculations. Our approach is demonstrated on a structural dynamics example with models for energy dissipation and peak force across a bolted joint. The three approaches are compared and the influence of the log-likelihood term in all approaches is discussed.
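
    A minimal sketch of the information-theoretic branch of the comparison: fit nested least-squares models and score them with AIC and BIC, whose penalty terms trade goodness of fit against parameter count. The Gaussian-error forms of the criteria are assumed and the data are synthetic.

    ```python
    import numpy as np

    def fit_poly(x, y, degree):
        """Least-squares polynomial fit; returns RSS and parameter count."""
        coeffs = np.polyfit(x, y, degree)
        rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
        return rss, degree + 1

    def aic_bic(rss, k, n):
        """Gaussian-error information criteria: n*log(RSS/n) + penalty."""
        return (n * np.log(rss / n) + 2 * k,
                n * np.log(rss / n) + k * np.log(n))

    rng = np.random.default_rng(13)
    x = np.linspace(0, 1, 50)
    y = 1.0 + 2.0 * x + 0.1 * rng.standard_normal(50)   # truly linear
    for d in (1, 2, 5):
        rss, k = fit_poly(x, y, d)
        aic, bic = aic_bic(rss, k, x.size)
        print(f"degree {d}: AIC={aic:.1f}  BIC={bic:.1f}")  # degree 1 wins
    ```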

  3. Inference---A Python Package for Astrostatistics

    NASA Astrophysics Data System (ADS)

    Loredo, T. J.; Connors, A.; Oliphant, T. E.

    2004-08-01

    Python is an object-oriented ``very high level language'' that is easy to learn, actively supported, and freely available for a large variety of computing platforms. It possesses sophisticated scientific computing capabilities thanks to ongoing work by a community of scientists and engineers who maintain a suite of open source scientific packages. Key contributions come from the STScI group maintaining PyRAF, a Python environment for running IRAF tasks. Python's main scientific computing packages are the Numeric and numarray packages implementing efficient array and image processing, and the SciPy package implementing a wide variety of general-use algorithms including optimization, root finding, special functions, numerical integration, and basic statistical tasks. We describe the Inference package, a collection of tools for carrying out advanced astrostatistical analyses that is about to be released as a supplement to SciPy. The Inference package has two main parts. First is a Parametric Inference Engine that offers a unified environment for analysis of parametric models with a variety of methods, including minimum χ2, maximum likelihood, and Bayesian methods. Several common analysis tasks are available with simple syntax (e.g., optimization, multidimensional exploration and integration, simulation); its parameter syntax is reminiscent of that of SHERPA. Second, the package includes a growing library of diverse, specialized astrostatistical methods in a variety of domains including time series, spectrum and survey analysis, and basic image analysis. Where possible, a variety of methods are available for a given problem, enabling users to explore alternative methods in a unified environment, with the guidance of significant documentation. The Inference project is supported by NASA AISRP grant NAG5-12082.

  4. Children's Category-Based Inferences Affect Classification

    ERIC Educational Resources Information Center

    Ross, Brian H.; Gelman, Susan A.; Rosengren, Karl S.

    2005-01-01

    Children learn many new categories and make inferences about these categories. Much work has examined how children make inferences on the basis of category knowledge. However, inferences may also affect what is learned about a category. Four experiments examine whether category-based inferences during category learning influence category knowledge…

  5. Causal inference from observational data.

    PubMed

    Listl, Stefan; Jürges, Hendrik; Watt, Richard G

    2016-10-01

    Randomized controlled trials have long been considered the 'gold standard' for causal inference in clinical research. In the absence of randomized experiments, identification of reliable intervention points to improve oral health is often perceived as a challenge. But other fields of science, such as social science, have always been challenged by ethical constraints to conducting randomized controlled trials. Methods have been established to make causal inference using observational data, and these methods are becoming increasingly relevant in clinical medicine, health policy and public health research. This study provides an overview of state-of-the-art methods specifically designed for causal inference in observational data, including difference-in-differences (DiD) analyses, instrumental variables (IV), regression discontinuity designs (RDD) and fixed-effects panel data analysis. The described methods may be particularly useful in dental research, not least because of the increasing availability of routinely collected administrative data and electronic health records ('big data'). PMID:27111146
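
    A minimal sketch of the difference-in-differences estimator mentioned above, in its simplest two-group, two-period form: the treatment effect is the change in the treated group's mean outcome minus the change in the control group's. The data are synthetic, with a common time trend of +0.5 and a true effect of +1.5.

    ```python
    import numpy as np

    rng = np.random.default_rng(14)
    n = 200
    # outcomes before/after for control and treated groups (synthetic)
    ctrl_pre  = rng.normal(10.0, 1.0, n)
    ctrl_post = rng.normal(10.5, 1.0, n)    # common time trend +0.5
    trt_pre   = rng.normal(9.0, 1.0, n)
    trt_post  = rng.normal(11.0, 1.0, n)    # trend +0.5 plus effect +1.5

    did = ((trt_post.mean() - trt_pre.mean())
           - (ctrl_post.mean() - ctrl_pre.mean()))
    print(f"DiD estimate of the treatment effect: {did:.2f}")  # ~1.5
    ```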

  6. We infer light in space.

    PubMed

    Schirillo, James A

    2013-10-01

    In studies of lightness and color constancy, the terms lightness and brightness refer to the qualia corresponding to perceived surface reflectance and perceived luminance, respectively. However, what has rarely been considered is the fact that the volume of space containing surfaces appears neither empty, void, nor black, but filled with light. Helmholtz (1866/1962) came closest to describing this phenomenon when discussing inferred illumination, but previous theoretical treatments have fallen short by restricting their considerations to the surfaces of objects. The present work is among the first to explore how we infer the light present in empty space. It concludes with several research examples supporting the theory that humans can infer the differential levels and chromaticities of illumination in three-dimensional space. PMID:23435628

  7. Inferring Diversity: Life After Shannon

    NASA Astrophysics Data System (ADS)

    Giffin, Adom

    The diversity of a community that cannot be fully counted must be inferred. The two preeminent inference methods are the MaxEnt method, which uses information in the form of constraints and Bayes' rule which uses information in the form of data. It has been shown that these two methods are special cases of the method of Maximum (relative) Entropy (ME). We demonstrate how this method can be used as a measure of diversity that not only reproduces the features of Shannon's index but exceeds them by allowing more types of information to be included in the inference. A specific example is solved in detail. Additionally, the entropy that is found is the same form as the thermodynamic entropy.
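
    For reference, a minimal sketch of the Shannon index that the abstract takes as its baseline, together with its effective-species (Hill number) form; the counts are illustrative.

    ```python
    import numpy as np

    def shannon_index(counts):
        """Shannon diversity H = -sum p_i log p_i over observed proportions."""
        p = np.asarray(counts, dtype=float)
        p = p[p > 0] / p.sum()
        return -(p * np.log(p)).sum()

    counts = [50, 30, 10, 5, 5]          # individuals per species
    H = shannon_index(counts)
    effective_species = np.exp(H)        # Hill number of order 1
    ```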

  8. Non parametric denoising methods based on wavelets: Application to electron microscopy images in low exposure time

    SciTech Connect

    Soumia, Sid Ahmed; Messali, Zoubeida; Ouahabi, Abdeldjalil; Trepout, Sylvain E-mail: cedric.messaoudi@curie.fr Messaoudi, Cedric E-mail: cedric.messaoudi@curie.fr Marco, Sergio E-mail: cedric.messaoudi@curie.fr

    2015-01-13

    The 3D reconstruction of Cryo-Transmission Electron Microscopy (Cryo-TEM) and Energy-Filtering TEM (EFTEM) images is hampered by the noisy nature of these images, which makes their alignment difficult. This noise arises from the interaction between the frozen hydrated biological samples and the electron beam when the specimen is exposed to radiation with a high exposure time. This sensitivity to the electron beam has led specialists to acquire the specimen projection images at very low exposure times, which gives rise to a new problem: an extremely low signal-to-noise ratio (SNR). This paper investigates the problem of denoising TEM images acquired at very low exposure time. Our main objective is to enhance the quality of TEM images to improve the alignment process, which will in turn improve the three-dimensional tomographic reconstructions. We performed multiple tests on TEM images acquired at different exposure times of 0.5 s, 0.2 s, 0.1 s and 1 s (i.e., with different SNR values) and equipped with gold beads to assist in the assessment step. We propose a structure to combine multiple noisy copies of the TEM images, based on four different denoising methods: soft and hard wavelet thresholding, the bilateral filter as a non-linear technique able to preserve edges neatly, and a Bayesian approach in the wavelet domain in which context modeling is used to estimate the parameter for each coefficient. To ensure a high signal-to-noise ratio, we verified that we were using an appropriate wavelet family at an appropriate level, choosing the "sym8" wavelet at level 3 as the most suitable parameter. For the bilateral filter, many tests were performed to determine the proper filter parameters, represented by the size of the filter, the range parameter and the

  9. LOWER LEVEL INFERENCE CONTROL IN STATISTICAL DATABASE SYSTEMS

    SciTech Connect

    Lipton, D.L.; Wong, H.K.T.

    1984-02-01

    An inference is the process of transforming unclassified data values into confidential data values. Most previous research in inference control has studied the use of statistical aggregates to deduce individual records. However, several other types of inference are also possible. Unknown functional dependencies may be apparent to users who have 'expert' knowledge about the characteristics of a population. Some correlations between attributes may be concluded from 'commonly known' facts about the world. To counter these threats, security managers should use random sampling of databases of similar populations, as well as expert systems. 'Expert' users of the database system may form inferences from the variable performance of the user interface. Users may observe on-line turn-around time, accounting statistics, the error messages received, and the point at which an interactive protocol sequence fails. From this information one may obtain information about the frequency distributions of attribute values and the validity of data object names. At the back end of a database system, improved software engineering practices will reduce opportunities to bypass functional units of the database system. The term 'data object' should be expanded to incorporate those data object types which generate new classes of threats. The security of databases and database systems must be recognized as separate but related problems. Thus, by increased awareness of lower-level inferences, system security managers may effectively nullify the threat posed by lower-level inferences.

  10. Bayesian inferences about the self (and others): A review

    PubMed Central

    Moutoussis, Michael; Fearon, Pasco; El-Deredy, Wael; Dolan, Raymond J.; Friston, Karl J.

    2014-01-01

    Viewing the brain as an organ of approximate Bayesian inference can help us understand how it represents the self. We suggest that inferred representations of the self have a normative function: to predict and optimise the likely outcomes of social interactions. Technically, we cast this predict-and-optimise as maximising the chance of favourable outcomes through active inference. Here the utility of outcomes can be conceptualised as prior beliefs about final states. Actions based on interpersonal representations can therefore be understood as minimising surprise – under the prior belief that one will end up in states with high utility. Interpersonal representations thus serve to render interactions more predictable, while the affective valence of interpersonal inference renders self-perception evaluative. Distortions of self-representation contribute to major psychiatric disorders such as depression, personality disorder and paranoia. The approach we review may therefore operationalise the study of interpersonal representations in pathological states. PMID:24583455

  11. Perception, illusions and Bayesian inference.

    PubMed

    Nour, Matthew M; Nour, Joseph M

    2015-01-01

    Descriptive psychopathology makes a distinction between veridical perception and illusory perception. In both cases a perception is tied to a sensory stimulus, but in illusions the perception is of a false object. This article re-examines this distinction in light of new work in theoretical and computational neurobiology, which views all perception as a form of Bayesian statistical inference that combines sensory signals with prior expectations. Bayesian perceptual inference can solve the 'inverse optics' problem of veridical perception and provides a biologically plausible account of a number of illusory phenomena, suggesting that veridical and illusory perceptions are generated by precisely the same inferential mechanisms.
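
    The core claim above, that percepts combine sensory signals with prior expectations, reduces in the simplest Gaussian case to a precision-weighted average. The sketch below is a minimal illustration under that assumption; all numbers are invented for the example.

```python
# Minimal sketch of Gaussian prior-likelihood fusion (Bayesian perception).
import numpy as np

def fuse(prior_mu, prior_var, obs, obs_var):
    """Posterior mean/variance for a Gaussian prior and Gaussian likelihood."""
    w = prior_var / (prior_var + obs_var)          # weight on the observation
    post_mu = prior_mu + w * (obs - prior_mu)
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    return post_mu, post_var

# A strong prior plus a noisy signal pulls the percept toward the expectation,
# which is one way an "illusion" can arise from optimal inference.
print(fuse(prior_mu=0.0, prior_var=0.1, obs=1.0, obs_var=1.0))
```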

  12. Bayesian Inference in Satellite Gravity Inversion

    NASA Technical Reports Server (NTRS)

    Kis, K. I.; Taylor, Patrick T.; Wittmann, G.; Kim, Hyung Rae; Torony, B.; Mayer-Guerr, T.

    2005-01-01

    To solve a geophysical inverse problem means applying measurements to determine the parameters of the selected model. Here the inverse problem is formulated as Bayesian inference, with Gaussian probability density functions applied in Bayes' equation. The CHAMP satellite gravity data are determined at an altitude of 400 kilometers over the southern part of the Pannonian Basin. The interpretation model is a right vertical cylinder. The parameters of the model are obtained from a minimization problem solved by the simplex method.
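
    A minimal sketch of the same workflow follows: with Gaussian data and prior pdfs, the maximum a posteriori model minimizes a quadratic misfit, which a derivative-free simplex (Nelder-Mead) search can handle. The toy forward model and all numbers below are illustrative assumptions, not the record's cylinder formula.

```python
# Minimal sketch: MAP estimation of model parameters via Nelder-Mead simplex.
import numpy as np
from scipy.optimize import minimize

def forward(params, x):
    depth, radius = params
    # Toy stand-in for the gravity anomaly of a buried vertical cylinder
    return radius**2 / (x**2 + depth**2)

x = np.linspace(-5, 5, 41)
true = forward((2.0, 1.5), x)
data = true + 0.01 * np.random.default_rng(1).standard_normal(x.size)

prior_mu, prior_sd, data_sd = np.array([1.0, 1.0]), 2.0, 0.01

def neg_log_posterior(params):
    misfit = np.sum((forward(params, x) - data) ** 2) / (2 * data_sd**2)
    penalty = np.sum((params - prior_mu) ** 2) / (2 * prior_sd**2)
    return misfit + penalty

map_est = minimize(neg_log_posterior, x0=[1.0, 1.0], method="Nelder-Mead")
print(map_est.x)   # recovers values close to (2.0, 1.5)
```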

  13. Inferring differentiation pathways from gene expression

    PubMed Central

    Costa, Ivan G.; Roepcke, Stefan; Hafemeister, Christoph; Schliep, Alexander

    2008-01-01

    Motivation: The regulation of proliferation and differentiation of embryonic and adult stem cells into mature cells is central to developmental biology. Gene expression measured in distinguishable developmental stages helps to elucidate underlying molecular processes. In previous work we showed that functional gene modules, which act distinctly in the course of development, can be represented by a mixture of trees. In general, the similarities in the gene expression programs of cell populations reflect the similarities in the differentiation path. Results: We propose a novel model for gene expression profiles and an unsupervised learning method to estimate developmental similarity and infer differentiation pathways. We assess the performance of our model on simulated data and compare it, with favorable results, to related methods. We also infer differentiation pathways and predict functional modules in gene expression data of lymphoid development. Conclusions: We demonstrate for the first time how, in principle, the incorporation of structural knowledge about the dependence structure helps to reveal differentiation pathways and potentially relevant functional gene modules from microarray datasets. Our method applies in any area of developmental biology where it is possible to obtain cells of distinguishable differentiation stages. Availability: The implementation of our method (GPL license), data and additional results are available at http://algorithmics.molgen.mpg.de/Supplements/InfDif/ Contact: filho@molgen.mpg.de, schliep@molgen.mpg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:18586709

  14. cosmoabc: Likelihood-free inference for cosmology

    NASA Astrophysics Data System (ADS)

    Ishida, Emille E. O.; Vitenti, Sandro D. P.; Penna-Lima, Mariana; Trindade, Arlindo M.; Cisewski, Jessi; de Souza, Rafael; Cameron, Ewan; Busti, Vinicius C.

    2015-05-01

    Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogs. cosmoabc is a Python ABC sampler featuring a Population Monte Carlo variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code can be coupled to an external simulator to allow incorporation of arbitrary distance and prior functions. When coupled with the numcosmo library, it has been used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy cluster number counts without computing the likelihood function.
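
    The rejection form of ABC underlying samplers like cosmoabc is easy to sketch (cosmoabc itself uses a Population Monte Carlo variant, which is not reproduced here). The simulator, summary statistic, prior, and tolerance below are illustrative assumptions.

```python
# Minimal sketch of likelihood-free rejection ABC.
import numpy as np

rng = np.random.default_rng(42)
obs = rng.poisson(lam=4.0, size=200)             # the "observed catalog"

def simulate(lam):                               # forward simulator
    return rng.poisson(lam=lam, size=200)

def distance(a, b):                              # summary statistic: means
    return abs(a.mean() - b.mean())

posterior = []
while len(posterior) < 1000:
    lam = rng.uniform(0.1, 10.0)                 # draw from a flat prior
    if distance(simulate(lam), obs) < 0.1:       # accept within tolerance
        posterior.append(lam)

print(np.mean(posterior), np.std(posterior))     # concentrates near 4.0
```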

  15. Image interpolation and denoising for division of focal plane sensors using Gaussian processes.

    PubMed

    Gilboa, Elad; Cunningham, John P; Nehorai, Arye; Gruev, Viktor

    2014-06-16

    Image interpolation and denoising are important techniques in image processing. These methods are inherent to digital image acquisition, as most digital cameras are composed of a 2D grid of heterogeneous imaging sensors. Current polarization imaging sensors employ four different pixelated polarization filters, commonly referred to as division-of-focal-plane polarization sensors. The sensors capture only partial information of the true scene, leading to a loss of spatial resolution as well as inaccuracy of the captured polarization information. Interpolation is a standard technique to recover the missing information and increase the accuracy of the captured polarization information. Here we focus specifically on Gaussian process regression as a way to perform statistical image interpolation, where estimates of sensor noise are used to improve the accuracy of the estimated pixel information. We further exploit the inherent grid structure of this data to create a fast exact algorithm that operates in O(N^(3/2)) (vs. the naive O(N^3)), thus making the Gaussian process method computationally tractable for image data. This modeling advance and the enabling computational advance combine to produce significant improvements over previously published interpolation methods for polarimeters, which is most pronounced in cases of low signal-to-noise ratio (SNR). We provide the comprehensive mathematical model as well as experimental results of the GP interpolation performance for division-of-focal-plane polarimeters. PMID:24977618
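
    A minimal sketch of noise-aware Gaussian process regression for filling in a missing sample follows; the record's fast O(N^(3/2)) grid algorithm is not reproduced, and the kernel, noise level, and 1D setting are illustrative assumptions.

```python
# Minimal sketch: GP regression with a sensor-noise term used to infer a
# missing pixel value along a 1D line of observations.
import numpy as np

def rbf(a, b, length=1.0, amp=1.0):
    d = a[:, None] - b[None, :]
    return amp * np.exp(-0.5 * (d / length) ** 2)

x_obs = np.array([0.0, 1.0, 2.0, 4.0, 5.0])       # known pixel positions
y_obs = np.sin(x_obs) + 0.05 * np.random.default_rng(0).standard_normal(5)
x_new = np.array([3.0])                            # missing pixel to infer

noise_var = 0.05 ** 2                              # sensor-noise estimate
K = rbf(x_obs, x_obs) + noise_var * np.eye(len(x_obs))
k_star = rbf(x_new, x_obs)

alpha = np.linalg.solve(K, y_obs)
mean = k_star @ alpha                              # posterior mean at x_new
var = rbf(x_new, x_new) - k_star @ np.linalg.solve(K, k_star.T)
print(mean, np.diag(var))
```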

  16. Generalized average of signals (GAS) - a new method for denoising and phase detection

    NASA Astrophysics Data System (ADS)

    Malek, J.; Kolinsky, P.; Strunc, J.; Valenta, J.

    2007-12-01

    A novel method called Generalized Average of Signals (GAS) was developed and tested during the last two years (Málek et al., in press). The method is designed for processing seismograms from dense seismic arrays and is suited mainly to denoising and weak-phase detection. The main idea of the GAS method is non-linear stacking of seismograms in the frequency domain, which considerably improves the signal-to-noise ratio of coherent seismograms. Several synthetic tests of the GAS method are presented, and the results are compared with the PWS method of Schimmel and Paulssen (1997). Moreover, examples of application to real data are presented. These examples were chosen to show the broad applicability of the method in experiments of different scales. The first shows identification of S-waves in seismograms from shallow seismics. The second concerns identification of converted waves from local earthquakes registered at the WEBNET local network in western Bohemia. Finally, the third depicts identification of PKIKP onsets in seismograms of teleseismic earthquakes. Schimmel, M., Paulssen, H. (1997): Noise reduction and detection of weak, coherent signals through phase-weighted stacks. Geophys. J. Int. 130, 497-505. Málek, J., Kolínský, P., Strunc, J. and Valenta, J. (2007): Generalized average of signals (GAS) - a new method for detection of very weak waves in seismograms. Acta Geodyn. et Geomater., in press.
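
    The phase-weighted stack cited above is a close, well-documented relative of the GAS idea and is simple to sketch: coherent arrivals survive the stack because their instantaneous phases align across traces. The test traces below are illustrative assumptions.

```python
# Minimal sketch of a phase-weighted stack (after Schimmel & Paulssen, 1997).
import numpy as np
from scipy.signal import hilbert

def phase_weighted_stack(traces, nu=2.0):
    """traces: (n_traces, n_samples) array; nu sharpens the coherence weight."""
    analytic = hilbert(traces, axis=1)
    phases = analytic / np.abs(analytic)            # unit phasors
    coherence = np.abs(phases.mean(axis=0)) ** nu   # ~1 where phases align
    return traces.mean(axis=0) * coherence

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
signal = np.exp(-((t - 0.5) / 0.02) ** 2)           # weak common arrival
traces = 0.3 * signal + rng.standard_normal((20, t.size))
stacked = phase_weighted_stack(traces)              # the arrival stands out
```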

  17. Robust Cell Detection and Segmentation in Histopathological Images Using Sparse Reconstruction and Stacked Denoising Autoencoders

    PubMed Central

    Su, Hai; Xing, Fuyong; Kong, Xiangfei; Xie, Yuanpu; Zhang, Shaoting; Yang, Lin

    2016-01-01

    Computer-aided diagnosis (CAD) is a promising tool for accurate and consistent diagnosis and prognosis. Cell detection and segmentation are essential steps for CAD. These tasks are challenging due to variations in cell shapes, touching cells, and cluttered backgrounds. In this paper, we present a cell detection and segmentation algorithm using sparse reconstruction with trivial templates and a stacked denoising autoencoder (sDAE). The sparse reconstruction handles shape variations by representing a testing patch as a linear combination of shapes in the learned dictionary. Trivial templates are used to model the touching parts. The sDAE, trained with the original data and their structured labels, is used for cell segmentation. To the best of our knowledge, this is the first study to apply sparse reconstruction and an sDAE with structured labels to cell detection and segmentation. The proposed method is extensively tested on two datasets containing more than 3000 cells obtained from brain tumor and lung cancer images. Our algorithm achieves the best performance compared with other state-of-the-art methods.

  18. Optimizing the De-Noise Neural Network Model for GPS Time-Series Monitoring of Structures.

    PubMed

    Kaloop, Mosbeh R; Hu, Jong Wan

    2015-09-22

    The Global Positioning System (GPS) has recently come into wide use in structural and other applications. Notwithstanding, GPS accuracy still suffers from the errors afflicting the measurements, particularly for the short-period displacement of structural components. Previously, multi-filter methods were utilized to remove the displacement errors. This paper applies neural network prediction models in a novel way to improve GPS monitoring time-series data. Four learning algorithms are applied with neural network solutions: back-propagation, cascade-forward back-propagation, adaptive filter, and extended Kalman filter, to determine which model can be recommended. Noise simulation and a bridge's short-period GPS-monitored displacement component, sampled at 1 Hz, are used to validate the four models and the previous method. The results show that the adaptive neural network filter is recommended for de-noising the observations, specifically for the GPS displacement components of structures. This model is also expected to have a significant influence on the design of structures in the low-frequency responses and measurement contents.

  19. Quantitative improvement in geology interpretation from remotely sensed data by denoising the haze

    NASA Astrophysics Data System (ADS)

    Yang, Hai-ping; Liu, Xiu-guo; Liu, Fu-jiang; Wu, Guo-ping; Shi, Jin-ping; Mei, Lin-lu

    2008-12-01

    The development of remote geological interpretation technology has boomed in recent years. However, there is a significant obstacle to extracting geological information from remote sensing imagery: the presence of clouds and their shadows. Diverse techniques have been proposed to solve the problem, including filtering algorithms and multi-temporal cloud-removal algorithms. This paper presents a modified solution for denoising the haze, based on ETM+ imagery. First, a wavelet transform is applied to the Band1, Band2 and Band3 imagery to determine the clear region and different levels of cloud regions. Then all pixels of the ETM+ imagery are classified into specific cover types after cluster analysis of Band4, Band5 and Band7. Finally, mean reflectance matching is performed in the first three bands separately according to the different cover types in both the clear region and the cloud region. The method is implemented in IDL. The results show that this modified method not only can quantitatively determine the cloud area but also can remove cloud from the imagery efficiently. Moreover, compared with the homomorphic filtering method, the experimental results of the proposed method are much more satisfactory for geology interpretation.

  20. Towards denoising XMCD movies of fast magnetization dynamics using extended Kalman filter.

    PubMed

    Kopp, M; Harmeling, S; Schütz, G; Schölkopf, B; Fähnle, M

    2015-01-01

    The Kalman filter is a well-established approach for obtaining information on the time-dependent state of a system from noisy observations. It was developed in the context of the Apollo project to track the deviation of a rocket's true trajectory from the desired trajectory. Afterwards it was applied to many different systems with small numbers of components in the respective state vector (typically about 10). In all those cases the equation of motion for the state vector was known exactly. Fast dissipative magnetization dynamics is often investigated by x-ray magnetic circular dichroism (XMCD) movies, which are often very noisy. In this situation the number of components of the state vector is extremely large (about 10^5), and the equation of motion for the dissipative magnetization dynamics (especially the values of the material parameters in this equation) is not well known. In the present paper it is shown by theoretical considerations that there is nevertheless no problem in principle with using the Kalman filter to denoise XMCD movies of fast dissipative magnetization dynamics.
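
    For readers unfamiliar with the recursion, the sketch below applies a scalar Kalman filter as a denoiser: the same predict/update cycle discussed above, though far from the 10^5-component state of an XMCD movie. The random-walk model and noise variances are illustrative assumptions.

```python
# Minimal sketch: scalar Kalman filter used as a denoiser.
import numpy as np

def kalman_denoise(z, q=1e-4, r=0.25):
    """Random-walk state model x_k = x_{k-1} + w, observation z_k = x_k + v.
    q and r are the process and measurement noise variances."""
    x, p = z[0], 1.0
    out = np.empty_like(z)
    for k, zk in enumerate(z):
        p = p + q                    # predict: variance grows by process noise
        g = p / (p + r)              # Kalman gain
        x = x + g * (zk - x)         # update with the innovation
        p = (1 - g) * p
        out[k] = x
    return out

rng = np.random.default_rng(3)
truth = np.cumsum(0.01 * rng.standard_normal(400))
noisy = truth + 0.5 * rng.standard_normal(400)
smooth = kalman_denoise(noisy)
```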

  1. Extended Kalman smoother with differential evolution technique for denoising of ECG signal.

    PubMed

    Panigrahy, D; Sahu, P K

    2016-09-01

    The electrocardiogram (ECG) signal carries a great deal of information on the physiology of the heart. In practice, noise from various sources interferes with the ECG signal, so noise cancellation is required to recover the correct physiological information. In this paper, the effectiveness of an extended Kalman smoother (EKS) with the differential evolution (DE) technique for noise cancellation of the ECG signal is investigated. DE is used as an automatic parameter-selection method to select ten optimized components of the ECG signal, which are used to synthesize an ECG signal matching the real one. These parameters are used by the EKS for the development of the state equation and also for initialization of the EKS parameters. The EKS framework is used for denoising the single-channel ECG signal. The effectiveness of the proposed noise cancellation technique has been evaluated by adding white and colored Gaussian noise and real muscle-artifact noise at different SNRs to visually clean ECG signals from the MIT-BIH arrhythmia database. The proposed technique shows better signal-to-noise ratio (SNR) improvement, lower mean square error (MSE), and lower percent of distortion (PRD) compared with other well-known methods. PMID:27542170
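
    The role of DE as an automatic parameter selector can be sketched with a single synthetic wave component (the record optimizes ten). The Gaussian template, bounds, and cost below are illustrative assumptions, not the authors' ECG model.

```python
# Minimal sketch: differential evolution fitting one Gaussian wave component.
import numpy as np
from scipy.optimize import differential_evolution

t = np.linspace(0, 1, 200)
target = 1.2 * np.exp(-((t - 0.4) / 0.05) ** 2)          # the "real" wave
target += 0.05 * np.random.default_rng(7).standard_normal(t.size)

def wave(params):
    amp, center, width = params
    return amp * np.exp(-((t - center) / width) ** 2)

def cost(params):                                        # squared error
    return np.sum((wave(params) - target) ** 2)

bounds = [(0.1, 3.0), (0.0, 1.0), (0.01, 0.2)]           # amp, center, width
result = differential_evolution(cost, bounds, seed=0)
print(result.x)                                          # near (1.2, 0.4, 0.05)
```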

  2. Marginalised Stacked Denoising Autoencoders for Robust Representation of Real-Time Multi-View Action Recognition.

    PubMed

    Gu, Feng; Flórez-Revuelta, Francisco; Monekosso, Dorothy; Remagnino, Paolo

    2015-01-01

    Multi-view action recognition has gained great interest in video surveillance, human-computer interaction, and multimedia retrieval, where multiple cameras of different types are deployed to provide complementary fields of view. Fusion of multiple camera views evidently leads to more robust decisions on both tracking multiple targets and analysing complex human activities, especially where there are occlusions. In this paper, we incorporate the marginalised stacked denoising autoencoder (mSDA) algorithm to further improve the bag-of-words (BoWs) representation in terms of robustness and usefulness for multi-view action recognition. The resulting representations are fed into three simple fusion strategies, as well as a multiple kernel learning algorithm, at the classification stage. Based on the internal evaluation, the codebook size of BoWs and the number of layers of mSDA may not significantly affect recognition performance. According to results on three multi-view benchmark datasets, the proposed framework improves recognition performance across all three datasets and delivers record recognition performance, beating the state-of-the-art algorithms in the literature. It is also capable of performing real-time action recognition at 33 to 45 frames per second, which could be further improved by using more powerful machines in future applications. PMID:26193271
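
    One layer of the marginalised denoising autoencoder on which mSDA stacks has a closed-form solution (Chen et al., 2012). The sketch below is a generic reconstruction of that published recipe, not the authors' code; the corruption probability and data are illustrative choices.

```python
# Minimal sketch of one marginalised denoising autoencoder (mDA) layer.
import numpy as np

def mda_layer(X, p=0.5):
    """X: (d, n) data matrix. p: feature corruption probability.
    Returns the layer's hidden representation and the linear map W."""
    d, n = X.shape
    Xb = np.vstack([X, np.ones((1, n))])      # append a constant bias row
    q = np.full(d + 1, 1.0 - p); q[-1] = 1.0  # the bias is never corrupted
    S = Xb @ Xb.T                             # scatter matrix
    Q = S * np.outer(q, q)                    # E[x_tilde x_tilde^T], off-diag
    np.fill_diagonal(Q, q * np.diag(S))       # corrected diagonal terms
    P = S * q[None, :]                        # E[x x_tilde^T]
    W = np.linalg.solve(Q.T, P[:d, :].T).T    # closed-form W = P Q^{-1}
    return np.tanh(W @ Xb), W                 # squashing nonlinearity

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 100))            # 20-dim features, 100 samples
H1, _ = mda_layer(X)                          # stack layers by feeding H1 back in
```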

  3. Integration of the denoising, inpainting and local harmonic Bz algorithm for MREIT imaging of intact animals

    NASA Astrophysics Data System (ADS)

    Jeon, Kiwan; Kim, Hyung Joong; Lee, Chang-Ock; Seo, Jin Keun; Woo, Eung Je

    2010-12-01

    Conductivity imaging based on the current-injection MRI technique has been developed in magnetic resonance electrical impedance tomography (MREIT). Current injected through a pair of surface electrodes induces a magnetic flux density distribution inside an imaging object, which results in additional magnetic field inhomogeneity. We can extract phase changes related to the current injection and obtain an image of the induced magnetic flux density. Without rotating the object inside the bore, we can measure only one component Bz of the magnetic flux density B = (Bx, By, Bz). Based on a relation between the internal conductivity distribution and the Bz data subject to multiple current injections, one may reconstruct cross-sectional conductivity images. As the image reconstruction algorithm, we have been using the harmonic Bz algorithm in numerous experimental studies. Performing conductivity imaging of intact animal and human subjects, we encountered technical difficulties originating from MR signal void phenomena in local regions of bones, lungs and gas-filled tubular organs. Measured Bz data inside such problematic regions contain an excessive amount of noise that deteriorates the conductivity image quality. To alleviate this technical problem, we applied hybrid methods incorporating ramp-preserving denoising, harmonic inpainting with isotropic diffusion, and ROI imaging using the local harmonic Bz algorithm. These methods allow us to produce conductivity images of intact animals with the best achievable quality. We suggest guidelines for choosing a hybrid method depending on the overall noise level and the existence of distinct problematic regions of MR signal void.

  4. Marginalised Stacked Denoising Autoencoders for Robust Representation of Real-Time Multi-View Action Recognition.

    PubMed

    Gu, Feng; Flórez-Revuelta, Francisco; Monekosso, Dorothy; Remagnino, Paolo

    2015-07-16

    Multi-view action recognition has gained great interest in video surveillance, human-computer interaction, and multimedia retrieval, where multiple cameras of different types are deployed to provide complementary fields of view. Fusion of multiple camera views evidently leads to more robust decisions on both tracking multiple targets and analysing complex human activities, especially where there are occlusions. In this paper, we incorporate the marginalised stacked denoising autoencoder (mSDA) algorithm to further improve the bag-of-words (BoWs) representation in terms of robustness and usefulness for multi-view action recognition. The resulting representations are fed into three simple fusion strategies, as well as a multiple kernel learning algorithm, at the classification stage. Based on the internal evaluation, the codebook size of BoWs and the number of layers of mSDA may not significantly affect recognition performance. According to results on three multi-view benchmark datasets, the proposed framework improves recognition performance across all three datasets and delivers record recognition performance, beating the state-of-the-art algorithms in the literature. It is also capable of performing real-time action recognition at 33 to 45 frames per second, which could be further improved by using more powerful machines in future applications.

  5. Marginalised Stacked Denoising Autoencoders for Robust Representation of Real-Time Multi-View Action Recognition

    PubMed Central

    Gu, Feng; Flórez-Revuelta, Francisco; Monekosso, Dorothy; Remagnino, Paolo

    2015-01-01

    Multi-view action recognition has gained great interest in video surveillance, human-computer interaction, and multimedia retrieval, where multiple cameras of different types are deployed to provide complementary fields of view. Fusion of multiple camera views evidently leads to more robust decisions on both tracking multiple targets and analysing complex human activities, especially where there are occlusions. In this paper, we incorporate the marginalised stacked denoising autoencoder (mSDA) algorithm to further improve the bag-of-words (BoWs) representation in terms of robustness and usefulness for multi-view action recognition. The resulting representations are fed into three simple fusion strategies, as well as a multiple kernel learning algorithm, at the classification stage. Based on the internal evaluation, the codebook size of BoWs and the number of layers of mSDA may not significantly affect recognition performance. According to results on three multi-view benchmark datasets, the proposed framework improves recognition performance across all three datasets and delivers record recognition performance, beating the state-of-the-art algorithms in the literature. It is also capable of performing real-time action recognition at 33 to 45 frames per second, which could be further improved by using more powerful machines in future applications. PMID:26193271

  6. Sub-band denoising and spline curve fitting method for hemodynamic measurement in perfusion MRI

    NASA Astrophysics Data System (ADS)

    Lin, Hong-Dun; Huang, Hsiao-Ling; Hsu, Yuan-Yu; Chen, Chi-Chen; Chen, Ing-Yi; Wu, Liang-Chi; Liu, Ren-Shyan; Lin, Kang-Ping

    2003-05-01

    In clinical research, non-invasive MR perfusion imaging is capable of investigating brain perfusion phenomena via various hemodynamic measurements, such as cerebral blood volume (CBV), cerebral blood flow (CBF), and mean transit time (MTT). These hemodynamic parameters are useful in diagnosing brain disorders such as stroke, infarction and peri-infarct ischemia by further semi-quantitative analysis. However, the accuracy of quantitative analysis is usually affected by poor signal-to-noise image quality. In this paper, we propose a hemodynamic measurement method based upon sub-band denoising and spline curve fitting to improve image quality and thereby obtain better hemodynamic quantitative analysis results. Ten sets of perfusion MRI data and corresponding PET images were used to validate the performance. For quantitative comparison, we evaluated the gray/white matter CBF ratio. The resulting semi-quantitative mean gray-to-white-matter CBF ratio was 2.10 +/- 0.34, comparable to that of the PET technique with less than 1% difference on average. Furthermore, the method features excellent noise reduction and boundary preservation in image processing, and a short hemodynamic measurement time.
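
    The spline-fitting step can be sketched with a smoothing spline on a noisy concentration-time curve, from which hemodynamic quantities are read off the smooth fit. The curve shape, smoothing factor, and CBV proxy below are illustrative assumptions.

```python
# Minimal sketch: smoothing-spline fit of a noisy perfusion time curve.
import numpy as np
from scipy.interpolate import UnivariateSpline

t = np.linspace(0, 60, 120)                       # seconds
bolus = 30 * (t / 12) ** 2 * np.exp(-t / 6)       # gamma-variate-like curve
noisy = bolus + 2.0 * np.random.default_rng(5).standard_normal(t.size)

fit = UnivariateSpline(t, noisy, s=len(t) * 4.0)  # s controls smoothness
smooth = fit(t)
cbv_proxy = np.sum(smooth) * (t[1] - t[0])        # area under the fitted curve
print(cbv_proxy)
```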

  7. Toward automated denoising of single molecular Förster resonance energy transfer data.

    PubMed

    Lee, Hao-Chih; Lin, Bo-Lin; Chang, Wei-Hau; Tu, I-Ping

    2012-01-01

    A wide-field two-channel fluorescence microscope is a powerful tool, as it allows for the study of the conformational dynamics of hundreds to thousands of immobilized single molecules via Förster resonance energy transfer (FRET) signals. To date, the data reduction from a movie to a final set of meaningful single-molecule FRET (smFRET) traces has involved human inspection and intervention at several critical steps, greatly hampering efficiency at the post-imaging stage. To facilitate the data reduction from smFRET movies to smFRET traces and to address the noise-limited issues, we developed a statistical denoising system aimed at fully automated processing. This data reduction system embeds several novel approaches. First, for background subtraction, a high-order singular value decomposition (HOSVD) method is employed to extract spatial and temporal features. Second, to register and map the two color channels, the spots representing bleed-through from the donor channel to the acceptor channel are used. Finally, correlation analysis and a likelihood-ratio statistic for change-point detection (CPD) are developed to study the two channels simultaneously, resolve FRET states, and report the dwell time of each state. The performance of our method has been checked using both simulated and real data.
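
    The change-point flavor of the pipeline's final step can be sketched for the simplest case: a single mean shift in a Gaussian trace, located with a likelihood-ratio scan. The FRET-like trace and the unit-variance assumption below are illustrative.

```python
# Minimal sketch: likelihood-ratio scan for one mean-shift change point.
import numpy as np

def best_change_point(x):
    """Return the split k maximizing the log-likelihood ratio of a two-mean
    model over a one-mean model (unit variance assumed; argmax is unaffected
    by the true scale)."""
    n = len(x)
    total_sse = np.sum((x - x.mean()) ** 2)
    best_k, best_llr = None, -np.inf
    for k in range(2, n - 2):
        sse = np.sum((x[:k] - x[:k].mean()) ** 2) \
            + np.sum((x[k:] - x[k:].mean()) ** 2)
        llr = 0.5 * (total_sse - sse)          # Gaussian LLR, sigma = 1
        if llr > best_llr:
            best_k, best_llr = k, llr
    return best_k, best_llr

rng = np.random.default_rng(2)
trace = np.concatenate([0.3 + 0.1 * rng.standard_normal(150),   # low-FRET
                        0.8 + 0.1 * rng.standard_normal(100)])  # high-FRET
print(best_change_point(trace))               # change detected near 150
```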

  8. Optimizing the De-Noise Neural Network Model for GPS Time-Series Monitoring of Structures.

    PubMed

    Kaloop, Mosbeh R; Hu, Jong Wan

    2015-01-01

    The Global Positioning System (GPS) has recently come into wide use in structural and other applications. Notwithstanding, GPS accuracy still suffers from the errors afflicting the measurements, particularly for the short-period displacement of structural components. Previously, multi-filter methods were utilized to remove the displacement errors. This paper applies neural network prediction models in a novel way to improve GPS monitoring time-series data. Four learning algorithms are applied with neural network solutions: back-propagation, cascade-forward back-propagation, adaptive filter, and extended Kalman filter, to determine which model can be recommended. Noise simulation and a bridge's short-period GPS-monitored displacement component, sampled at 1 Hz, are used to validate the four models and the previous method. The results show that the adaptive neural network filter is recommended for de-noising the observations, specifically for the GPS displacement components of structures. This model is also expected to have a significant influence on the design of structures in the low-frequency responses and measurement contents. PMID:26402687

  9. Simultaneous denoising and reconstruction of 5-D seismic data via damped rank-reduction method

    NASA Astrophysics Data System (ADS)

    Chen, Yangkang; Zhang, Dong; Jin, Zhaoyu; Chen, Xiaohong; Zu, Shaohuan; Huang, Weilin; Gan, Shuwei

    2016-09-01

    The Cadzow rank-reduction method can be effectively utilized in simultaneously denoising and reconstructing 5-D seismic data that depend on four spatial dimensions. The classic version of the Cadzow rank-reduction method arranges the 4-D spatial data into a level-four block Hankel/Toeplitz matrix and then applies truncated singular value decomposition (TSVD) for rank reduction. When the observed data are extremely noisy, as is often the case for real seismic data, traditional TSVD is not adequate for attenuating the noise and reconstructing the signals. The data reconstructed with the traditional TSVD method tend to contain a significant amount of residual noise, which can be explained by the fact that the reconstructed data space is a mixture of both signal subspace and noise subspace. In order to better decompose the block Hankel matrix into signal and noise components, we introduce a damping operator into the traditional TSVD formula, which we call the damped rank-reduction method. The damped rank-reduction method can achieve excellent reconstruction performance even when the observed data have an extremely low signal-to-noise ratio. The feasibility of the improved 5-D seismic data reconstruction method was validated via both 5-D synthetic and field data examples. We present a comprehensive analysis of the data examples and derive practical guidelines for better utilizing the proposed method. Since the proposed method is convenient to implement and can achieve immediate improvement, we suggest its wide application in the industry.
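
    The rank-reduction idea is easy to demonstrate on a single trace: embed it in a Hankel matrix, truncate the SVD to the expected rank, and average the anti-diagonals back into a signal. The damping step below follows the spirit of the damped variant under assumed parameters; the exact 5-D block-Hankel operator is the record's. All signals are illustrative.

```python
# Minimal sketch: Hankel-matrix TSVD denoising with an optional damping step.
import numpy as np

def hankel_denoise(x, rank=2, damp=None):
    n = len(x); L = n // 2 + 1
    H = np.lib.stride_tricks.sliding_window_view(x, n - L + 1)  # Hankel rows
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    s_kept = s[:rank].copy()
    if damp is not None:                       # damp residual noise energy
        s_kept *= 1.0 - (s[rank] / s_kept) ** damp
    Hr = (U[:, :rank] * s_kept) @ Vt[:rank]
    # Average anti-diagonals to map the low-rank matrix back to a signal
    out = np.zeros(n); cnt = np.zeros(n)
    for i in range(Hr.shape[0]):
        out[i:i + Hr.shape[1]] += Hr[i]
        cnt[i:i + Hr.shape[1]] += 1
    return out / cnt

t = np.linspace(0, 1, 256)
clean = np.sin(2 * np.pi * 5 * t)              # a pure sinusoid is rank 2
noisy = clean + 0.7 * np.random.default_rng(4).standard_normal(t.size)
denoised = hankel_denoise(noisy, rank=2, damp=4)
```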

  10. Randomized denoising autoencoders for smaller and efficient imaging based AD clinical trials

    PubMed Central

    Ithapu, Vamsi K.; Singh, Vikas; Okonkwo, Ozioma; Johnson, Sterling C.

    2015-01-01

    There is a growing body of research devoted to designing imaging-based biomarkers that identify Alzheimer's disease (AD) in its prodromal stage using statistical machine learning methods. Recently, several authors investigated how clinical trials for AD can be made more efficient (i.e., use a smaller sample size) using predictive measures from such classification methods. In this paper, we explain why predictive measures given by such SVM-type objectives may be less than ideal for use in the setting described above. We give a solution based on a novel deep learning model, randomized denoising autoencoders (rDA), which regresses on training labels y while also accounting for the variance, a property which is very useful for clinical trial design. Our results give strong improvements in sample size estimates over strategies based on multi-kernel learning. Also, rDA predictions appear to correlate more accurately with stages of disease. Separately, our formulation empirically shows how deep architectures can be applied in the large d, small n regime, the default situation in medical imaging. This result is of independent interest. PMID:25485413

  11. Towards denoising XMCD movies of fast magnetization dynamics using extended Kalman filter.

    PubMed

    Kopp, M; Harmeling, S; Schütz, G; Schölkopf, B; Fähnle, M

    2015-01-01

    The Kalman filter is a well-established approach for obtaining information on the time-dependent state of a system from noisy observations. It was developed in the context of the Apollo project to track the deviation of a rocket's true trajectory from the desired trajectory. Afterwards it was applied to many different systems with small numbers of components in the respective state vector (typically about 10). In all those cases the equation of motion for the state vector was known exactly. Fast dissipative magnetization dynamics is often investigated by x-ray magnetic circular dichroism (XMCD) movies, which are often very noisy. In this situation the number of components of the state vector is extremely large (about 10^5), and the equation of motion for the dissipative magnetization dynamics (especially the values of the material parameters in this equation) is not well known. In the present paper it is shown by theoretical considerations that there is nevertheless no problem in principle with using the Kalman filter to denoise XMCD movies of fast dissipative magnetization dynamics. PMID:25461588

  12. Optimizing the De-Noise Neural Network Model for GPS Time-Series Monitoring of Structures

    PubMed Central

    Kaloop, Mosbeh R.; Hu, Jong Wan

    2015-01-01

    The Global Positioning System (GPS) has recently come into wide use in structural and other applications. Notwithstanding, GPS accuracy still suffers from the errors afflicting the measurements, particularly for the short-period displacement of structural components. Previously, multi-filter methods were utilized to remove the displacement errors. This paper applies neural network prediction models in a novel way to improve GPS monitoring time-series data. Four learning algorithms are applied with neural network solutions: back-propagation, cascade-forward back-propagation, adaptive filter, and extended Kalman filter, to determine which model can be recommended. Noise simulation and a bridge's short-period GPS-monitored displacement component, sampled at 1 Hz, are used to validate the four models and the previous method. The results show that the adaptive neural network filter is recommended for de-noising the observations, specifically for the GPS displacement components of structures. This model is also expected to have a significant influence on the design of structures in the low-frequency responses and measurement contents. PMID:26402687

  13. Extended Kalman smoother with differential evolution technique for denoising of ECG signal.

    PubMed

    Panigrahy, D; Sahu, P K

    2016-09-01

    The electrocardiogram (ECG) signal carries a great deal of information on the physiology of the heart. In practice, noise from various sources interferes with the ECG signal, so noise cancellation is required to recover the correct physiological information. In this paper, the effectiveness of an extended Kalman smoother (EKS) with the differential evolution (DE) technique for noise cancellation of the ECG signal is investigated. DE is used as an automatic parameter-selection method to select ten optimized components of the ECG signal, which are used to synthesize an ECG signal matching the real one. These parameters are used by the EKS for the development of the state equation and also for initialization of the EKS parameters. The EKS framework is used for denoising the single-channel ECG signal. The effectiveness of the proposed noise cancellation technique has been evaluated by adding white and colored Gaussian noise and real muscle-artifact noise at different SNRs to visually clean ECG signals from the MIT-BIH arrhythmia database. The proposed technique shows better signal-to-noise ratio (SNR) improvement, lower mean square error (MSE), and lower percent of distortion (PRD) compared with other well-known methods.

  14. Computer-assisted counting of retinal cells by automatic segmentation after TV denoising

    PubMed Central

    2013-01-01

    Background Quantitative evaluation of mosaics of photoreceptors and neurons is essential in studies on development, aging and degeneration of the retina. Manual counting of samples is a time-consuming procedure, while attempts at automation are subject to various restrictions arising from biological and preparation variability, leading to both over- and underestimation of cell numbers. Here we present an adaptive algorithm to overcome many of these problems. Digital micrographs were obtained from cone photoreceptor mosaics visualized by anti-opsin immunocytochemistry in retinal wholemounts from a variety of mammalian species, including primates. Segmentation of photoreceptors (from background, debris, blood vessels and other cell types) was performed by a procedure based on Rudin-Osher-Fatemi total variation (TV) denoising. Once three parameters are manually adjusted based on a sample, similarly structured images can be batch processed. The module is implemented in MATLAB and fully documented online. Results The object recognition procedure was tested on samples with a typical range of signal and background variations. We obtained results with error ratios of less than 10% in 16 of 18 samples and a mean error of less than 6% compared to manual counts. Conclusions The presented method provides a traceable module for automated acquisition of retinal cell density data. Remaining errors, including the addition of background items and the splitting or merging of objects, might be further reduced by the introduction of additional parameters. The module may be integrated into extended environments with features such as 3D acquisition and recognition. PMID:24138794
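
    The denoise-then-segment pipeline can be sketched with scikit-image's Chambolle solver for the Rudin-Osher-Fatemi model (a standard substitute, not the authors' MATLAB module). The synthetic micrograph, TV weight, and threshold below are illustrative assumptions.

```python
# Minimal sketch: TV denoising followed by threshold segmentation and counting.
import numpy as np
from skimage.restoration import denoise_tv_chambolle
from skimage.measure import label

rng = np.random.default_rng(6)
img = np.zeros((128, 128))
for cy, cx in rng.integers(10, 118, size=(25, 2)):          # fake cell blobs
    yy, xx = np.ogrid[:128, :128]
    img[(yy - cy) ** 2 + (xx - cx) ** 2 < 9] = 1.0
noisy = img + 0.4 * rng.standard_normal(img.shape)

smooth = denoise_tv_chambolle(noisy, weight=0.2)            # ROF-type denoise
mask = smooth > 0.5                                          # global threshold
print(label(mask).max(), "objects counted")                  # ~25 expected
```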

  15. Perceptual Inference and Autistic Traits

    ERIC Educational Resources Information Center

    Skewes, Joshua C; Jegindø, Else-Marie; Gebauer, Line

    2015-01-01

    Autistic people are better at perceiving details. Major theories explain this in terms of bottom-up sensory mechanisms or in terms of top-down cognitive biases. Recently, it has become possible to link these theories within a common framework. This framework assumes that perception is implicit neural inference, combining sensory evidence with…

  16. Science Shorts: Observation versus Inference

    ERIC Educational Resources Information Center

    Leager, Craig R.

    2008-01-01

    When you observe something, how do you know for sure what you are seeing, feeling, smelling, or hearing? Asking students to think critically about their encounters with the natural world will help to strengthen their understanding and application of the science-process skills of observation and inference. In the following lesson, students make…

  17. Sample Size and Correlational Inference

    ERIC Educational Resources Information Center

    Anderson, Richard B.; Doherty, Michael E.; Friedrich, Jeff C.

    2008-01-01

    In 4 studies, the authors examined the hypothesis that the structure of the informational environment makes small samples more informative than large ones for drawing inferences about population correlations. The specific purpose of the studies was to test predictions arising from the signal detection simulations of R. B. Anderson, M. E. Doherty,…

  18. Improving Explanatory Inferences from Assessments

    ERIC Educational Resources Information Center

    Diakow, Ronli Phyllis

    2013-01-01

    This dissertation comprises three papers that propose, discuss, and illustrate models to make improved inferences about research questions regarding student achievement in education. Addressing the types of questions common in educational research today requires three different "extensions" to traditional educational assessment: (1)…

  19. A fast method for video deblurring based on a combination of gradient methods and denoising algorithms in Matlab and C environments

    NASA Astrophysics Data System (ADS)

    Mirzadeh, Zeynab; Mehri, Razieh; Rabbani, Hossein

    2010-01-01

    In this paper degraded video with blur and noise is enhanced using an iterative algorithm. In this algorithm we first estimate the clean data and blur function using the Newton optimization method, and then the estimation is improved using appropriate denoising methods. These noise reduction techniques are based on local statistics of the clean data and blur function. For the estimated blur function we use the LPA-ICI (local polynomial approximation - intersection of confidence intervals) method, which uses an anisotropic window around each point and obtains the enhanced data by employing a Wiener filter in this local window. Similarly, to improve the quality of the estimated clean video, we first transform the data to the wavelet domain and then improve our estimate using a maximum a posteriori (MAP) estimator and a local Laplace prior. This procedure (initial estimation and improvement of the estimate by denoising) is iterated, and finally the clean video is obtained. The implementation of this algorithm is slow in the MATLAB environment, so it is not suitable for online applications. However, MATLAB can run functions written in C; the files which hold the source for these functions are called MEX-files, and MEX functions allow system-specific APIs to be called to extend MATLAB's abilities. So, to speed up our algorithm, the MATLAB code was divided into sections, the elapsed time of each section was measured, and the slow sections (which use 60% of the complete running time) were selected. These slow sections were then translated to C++ and linked to MATLAB. In fact, the high volume of image data processed in the "for" loops of the relevant code makes MATLAB an unsuitable candidate for writing such programs. The MATLAB code for our video deblurring algorithm contains eight "for" loops. These eight loops use 60% of the total execution time of the entire program and so the runtime should be

  20. Inferences are for doing: the impact of approach and avoidance states on the generation of spontaneous trait inferences.

    PubMed

    Crawford, Matthew T; McCarthy, Randy J; Kjærstad, Hanne Lie; Skowronski, John J

    2013-03-01

    Spontaneous trait inferences (STIs) are ubiquitous and occur when perceivers spontaneously infer actor traits from actor behaviors. Previous research has elucidated the processes underlying STIs, but little work has focused on the functions of STIs. This article proposes that these unintentional early inferences serve a general approach or avoidance function. Two studies are reported in which external approach and avoidance motivations elicited via flexion-extension (Study 1) or physical warmth (Study 2) affect the encoding of trait-implying behavioral statements in a valence-matching manner. The results suggest that somatic states can act as cues that affect unintentional social information processing independently of the actual experience of the psychological states associated with those somatic states. Implications for a functional perspective on STIs are discussed.