2011-01-01
Background Gene regulatory networks play essential roles in living organisms to control growth, keep internal metabolism running and respond to external environmental changes. Understanding the connections and the activity levels of regulators is important for research on gene regulatory networks. While relevance score based algorithms that reconstruct gene regulatory networks from transcriptome data can infer genome-wide gene regulatory networks, they are unfortunately prone to false positive results. Transcription factor activities (TFAs) quantitatively reflect the ability of the transcription factor to regulate target genes. However, classic relevance score based gene regulatory network reconstruction algorithms use models that do not include the TFA layer, thus missing a key regulatory element. Results This work integrates TFA prediction algorithms with relevance score based network reconstruction algorithms to reconstruct gene regulatory networks with improved accuracy over classic relevance score based algorithms. This method is called Gene expression and Transcription factor activity based Relevance Network (GTRNetwork). Different combinations of TFA prediction algorithms and relevance score functions have been applied to find the most efficient combination. When the integrated GTRNetwork method was applied to E. coli data, the reconstructed genome-wide gene regulatory network predicted 381 new regulatory links. The reconstructed gene regulatory network, including the predicted new regulatory links, shows promising biological significance. Many of the new links are verified by known TF binding site information, and many other links can be verified from the literature and databases such as EcoCyc. The reconstructed gene regulatory network is applied to a recent transcriptome analysis of E. coli during isobutanol stress. In addition to the 16 significantly changed TFAs detected in the original paper, another 7 significantly changed TFAs have been detected by using our reconstructed network. Conclusions The GTRNetwork algorithm introduces the hidden TFA layer into classic relevance score-based gene regulatory network reconstruction processes. Integrating the TFA biological information with regulatory network reconstruction algorithms significantly improves detection of new links and reduces the rate of false positives. The application of GTRNetwork on E. coli gene transcriptome data gives a set of potential regulatory links with promising biological significance for isobutanol stress and other conditions. PMID:21668997
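The GTRNetwork pipeline itself is not reproduced here, but the relevance-score step it builds on can be sketched as a plain correlation-based relevance network. The following Python sketch is illustrative only: the expression matrix, regulator list, threshold and all names are assumptions, not the paper's implementation; in GTRNetwork the regulator profiles would be predicted TFAs rather than the regulators' own expression levels.

```python
import numpy as np
import pandas as pd

def relevance_network(expr, regulators, threshold=0.5):
    """Score each (regulator, target) pair by absolute Pearson correlation
    and keep links whose relevance score exceeds the threshold.
    `expr` is a samples x genes DataFrame."""
    links = []
    for tf in regulators:
        for gene in expr.columns:
            if gene == tf:
                continue
            score = abs(np.corrcoef(expr[tf], expr[gene])[0, 1])
            if score >= threshold:
                links.append((tf, gene, round(score, 3)))
    return pd.DataFrame(links, columns=["regulator", "target", "score"])

# Toy usage: g1 is driven by tfA, the other genes are independent noise
rng = np.random.default_rng(0)
expr = pd.DataFrame(rng.normal(size=(50, 4)), columns=["tfA", "g1", "g2", "g3"])
expr["g1"] = 0.8 * expr["tfA"] + 0.3 * rng.normal(size=50)
print(relevance_network(expr, regulators=["tfA"]))
```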
Photoacoustic image reconstruction via deep learning
NASA Astrophysics Data System (ADS)
Antholzer, Stephan; Haltmeier, Markus; Nuster, Robert; Schwab, Johannes
2018-02-01
Applying standard algorithms to sparse data problems in photoacoustic tomography (PAT) yields low-quality images containing severe under-sampling artifacts. To some extent, these artifacts can be reduced by iterative image reconstruction algorithms, which allow prior knowledge such as smoothness, total variation (TV) or sparsity constraints to be included. These algorithms tend to be time consuming as the forward and adjoint problems have to be solved repeatedly. Further, iterative algorithms have additional drawbacks. For example, the reconstruction quality strongly depends on a-priori model assumptions about the objects to be recovered, which are often not strictly satisfied in practical applications. To overcome these issues, in this paper, we develop direct and efficient reconstruction algorithms based on deep learning. As opposed to iterative algorithms, we apply a convolutional neural network, whose parameters are trained before the reconstruction process based on a set of training data. For actual image reconstruction, a single evaluation of the trained network yields the desired result. Our presented numerical results (using two different network architectures) demonstrate that the proposed deep learning approach reconstructs images with a quality comparable to state-of-the-art iterative reconstruction methods.
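The exact network architectures are not given in the abstract, so the following is only a rough Python (PyTorch) sketch of the general idea of a learned, direct post-processing step: a small residual CNN that maps an artifact-laden initial reconstruction to a corrected image. Layer sizes, training data and names are placeholders.

```python
import torch
import torch.nn as nn

class ArtifactRemovalCNN(nn.Module):
    """Maps an initial (e.g. back-projected) PAT image to a corrected image
    by learning the residual between the two."""
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # residual connection

# One training step on dummy data (shapes and values are illustrative)
model = ArtifactRemovalCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
noisy = torch.randn(4, 1, 64, 64)   # sparse-data reconstructions
clean = torch.randn(4, 1, 64, 64)   # reference reconstructions
opt.zero_grad()
loss = nn.functional.mse_loss(model(noisy), clean)
loss.backward()
opt.step()
```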
Harnessing Diversity towards the Reconstructing of Large Scale Gene Regulatory Networks
Yamanaka, Ryota; Kitano, Hiroaki
2013-01-01
Elucidating gene regulatory networks (GRNs) from large scale experimental data remains a central challenge in systems biology. Recently, numerous techniques, particularly consensus driven approaches combining different algorithms, have become a potentially promising strategy to infer accurate GRNs. Here, we develop a novel consensus inference algorithm, TopkNet, which can integrate multiple algorithms to infer GRNs. Comprehensive performance benchmarking on a cloud computing framework demonstrated that (i) a simple strategy to combine many algorithms does not always lead to performance improvement compared to the cost of consensus and (ii) TopkNet, integrating only high-performance algorithms, provides a significant performance improvement compared to the best individual algorithms and community prediction. These results suggest that a priori determination of high-performance algorithms is key to reconstructing an unknown regulatory network. Similarity among gene-expression datasets can be useful to determine potential optimal algorithms for reconstruction of unknown regulatory networks, i.e., if the expression data associated with a known regulatory network are similar to those associated with an unknown regulatory network, the optimal algorithms determined for the known regulatory network can be repurposed to infer the unknown regulatory network. Based on this observation, we developed a quantitative measure of similarity among gene-expression datasets and demonstrated that, if similarity between the two expression datasets is high, TopkNet integrating algorithms that are optimal for the known dataset performs well on the unknown dataset. The consensus framework, TopkNet, together with the similarity measure proposed in this study provides a powerful strategy towards harnessing the wisdom of the crowds in the reconstruction of unknown regulatory networks. PMID:24278007
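TopkNet's internals are not reproduced here; as a hedged illustration of the general consensus idea, combining edge rankings from several inference algorithms, a simple Borda-style rank aggregation can be sketched as follows. The function names, the averaging rule and the top-k cutoff are illustrative choices.

```python
import numpy as np

def consensus_ranking(score_matrices, k=None):
    """Combine edge scores from several algorithms by averaging per-edge ranks.
    Each element of `score_matrices` is an (n x n) array of edge confidences."""
    n = score_matrices[0].shape[0]
    rank_sum = np.zeros((n, n))
    for scores in score_matrices:
        flat = scores.flatten()
        ranks = np.empty_like(flat)
        ranks[np.argsort(-flat)] = np.arange(flat.size)  # higher score -> smaller rank
        rank_sum += ranks.reshape(n, n)
    consensus = rank_sum / len(score_matrices)
    if k is not None:
        cutoff = np.sort(consensus, axis=None)[k - 1]
        return consensus <= cutoff   # boolean adjacency of the top-k consensus edges
    return consensus

# Toy usage: three algorithms scoring a 4-gene network, keep the top-5 edges
rng = np.random.default_rng(1)
mats = [rng.random((4, 4)) for _ in range(3)]
print(consensus_ranking(mats, k=5).astype(int))
```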
Logsdon, Benjamin A.; Mezey, Jason
2010-01-01
Cellular gene expression measurements contain regulatory information that can be used to discover novel network relationships. Here, we present a new algorithm for network reconstruction powered by the adaptive lasso, a theoretically and empirically well-behaved method for selecting the regulatory features of a network. Any algorithms designed for network discovery that make use of directed probabilistic graphs require perturbations, produced by either experiments or naturally occurring genetic variation, to successfully infer unique regulatory relationships from gene expression data. Our approach makes use of appropriately selected cis-expression Quantitative Trait Loci (cis-eQTL), which provide a sufficient set of independent perturbations for maximum network resolution. We compare the performance of our network reconstruction algorithm to four other approaches: the PC-algorithm, QTLnet, the QDG algorithm, and the NEO algorithm, all of which have been used to reconstruct directed networks among phenotypes leveraging QTL. We show that the adaptive lasso can outperform these algorithms for networks of ten genes and ten cis-eQTL, and is competitive with the QDG algorithm for networks with thirty genes and thirty cis-eQTL, with rich topologies and hundreds of samples. Using this novel approach, we identify unique sets of directed relationships in Saccharomyces cerevisiae when analyzing genome-wide gene expression data for an intercross between a wild strain and a lab strain. We recover novel putative network relationships between a tyrosine biosynthesis gene (TYR1), and genes involved in endocytosis (RCY1), the spindle checkpoint (BUB2), sulfonate catabolism (JLP1), and cell-cell communication (PRM7). Our algorithm provides a synthesis of feature selection methods and graphical model theory that has the potential to reveal new directed regulatory relationships from the analysis of population level genetic and gene expression data. PMID:21152011
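The full eQTL-based procedure is not shown here, but the adaptive lasso step can be sketched with the standard reweighting trick, in which an initial ridge fit supplies the adaptive weights. All names and parameter values below are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

def adaptive_lasso(X, y, alpha=0.1, gamma=1.0):
    """Adaptive lasso via feature rescaling: w_j = 1/|beta_init_j|^gamma,
    then an ordinary lasso on X_j / w_j yields adaptively penalized coefficients."""
    beta_init = Ridge(alpha=1e-3).fit(X, y).coef_
    w = 1.0 / (np.abs(beta_init) ** gamma + 1e-8)
    X_scaled = X / w                       # column-wise rescaling
    lasso = Lasso(alpha=alpha).fit(X_scaled, y)
    return lasso.coef_ / w                 # transform back to the original scale

# Toy usage: which of 5 candidate regulators drive the target y?
rng = np.random.default_rng(2)
X = rng.normal(size=(100, 5))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.1 * rng.normal(size=100)
print(np.round(adaptive_lasso(X, y), 2))
```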
2011-01-01
Background Network inference methods reconstruct mathematical models of molecular or genetic networks directly from experimental data sets. We have previously reported a mathematical method which is exclusively data-driven, does not involve any heuristic decisions within the reconstruction process, and delivers all possible alternative minimal networks in terms of simple place/transition Petri nets that are consistent with a given discrete time series data set. Results We fundamentally extended the previously published algorithm to consider catalysis and inhibition of the reactions that occur in the underlying network. The results of the reconstruction algorithm are encoded in the form of an extended Petri net involving control arcs. This allows the consideration of processes involving mass flow and/or regulatory interactions. As a non-trivial test case, the phosphate regulatory network of enterobacteria was reconstructed using in silico-generated time-series data sets on wild-type and in silico mutants. Conclusions The new exact algorithm reconstructs extended Petri nets from time series data sets by finding all alternative minimal networks that are consistent with the data. It suggested alternative molecular mechanisms for certain reactions in the network. The algorithm is useful to combine data from wild-type and mutant cells and may potentially integrate physiological, biochemical, pharmacological, and genetic data in the form of a single model. PMID:21762503
A neural network approach for image reconstruction in electron magnetic resonance tomography.
Durairaj, D Christopher; Krishna, Murali C; Murugesan, Ramachandran
2007-10-01
An object-oriented, artificial neural network (ANN)-based application system for the reconstruction of two-dimensional spatial images in electron magnetic resonance (EMR) tomography is presented. The standard back propagation algorithm is utilized to train a three-layer sigmoidal feed-forward, supervised ANN to perform the image reconstruction. The network learns the relationship between the 'ideal' images that are reconstructed using the filtered back projection (FBP) technique and the corresponding projection data (sinograms). The input layer of the network is provided with a training set that contains projection data from various phantoms as well as in vivo objects, acquired from an EMR imager. Twenty-five different network configurations are investigated to test the generalization ability of the network. The trained ANN then reconstructs two-dimensional temporal spatial images that present the distribution of free radicals in biological systems. Image reconstruction by the trained neural network shows better time complexity than conventional iterative reconstruction algorithms such as the multiplicative algebraic reconstruction technique (MART). The network is further explored for image reconstruction from 'noisy' EMR data and the results show better performance than the FBP method. The network is also tested for its ability to reconstruct from limited-angle EMR data sets.
NASA Astrophysics Data System (ADS)
Pennington, Robert S.; Van den Broek, Wouter; Koch, Christoph T.
2014-05-01
We have reconstructed third-dimension specimen information from convergent-beam electron diffraction (CBED) patterns simulated using the stacked-Bloch-wave method. By reformulating the stacked-Bloch-wave formalism as an artificial neural network and optimizing with resilient back propagation, we demonstrate specimen orientation reconstructions with depth resolutions down to 5 nm. To show our algorithm's ability to analyze realistic data, we also discuss and demonstrate our algorithm reconstructing from noisy data and using a limited number of CBED disks. Applicability of this reconstruction algorithm to other specimen parameters is discussed.
Reconstruction of financial networks for robust estimation of systemic risk
NASA Astrophysics Data System (ADS)
Mastromatteo, Iacopo; Zarinelli, Elia; Marsili, Matteo
2012-03-01
In this paper we estimate the propagation of liquidity shocks through interbank markets when the information about the underlying credit network is incomplete. We show that techniques such as maximum entropy currently used to reconstruct credit networks severely underestimate the risk of contagion by assuming a trivial (fully connected) topology, a type of network structure which can be very different from the one empirically observed. We propose an efficient message-passing algorithm to explore the space of possible network structures and show that a correct estimation of the network degree of connectedness leads to more reliable estimations for systemic risk. Such an algorithm is also able to produce maximally fragile structures, providing a practical upper bound for the risk of contagion when the actual network structure is unknown. We test our algorithm on ensembles of synthetic data encoding some features of real financial networks (sparsity and heterogeneity), finding that more accurate estimations of risk can be achieved. Finally we find that this algorithm can be used to control the amount of information that regulators need to require from banks in order to sufficiently constrain the reconstruction of financial networks.
F-MAP: A Bayesian approach to infer the gene regulatory network using external hints
Shahdoust, Maryam; Mahjub, Hossein; Sadeghi, Mehdi
2017-01-01
Common topological features of the gene regulatory networks of related species suggest that the network of one species can be reconstructed using additional information from the gene expression profiles of related species. We present an algorithm, named F-MAP, to reconstruct the gene regulatory network by applying knowledge about gene interactions from related species. Our algorithm sets up a Bayesian framework to estimate the precision matrix of a microarray gene expression dataset of one species, in order to infer the Gaussian graphical model of the network. The conjugate Wishart prior is used, and the information from related species is applied to estimate the hyperparameters of the prior distribution by means of factor analysis. Applying the proposed algorithm to six related species of Drosophila shows that the precision of the reconstructed networks is improved considerably compared to that of networks constructed by other Bayesian approaches. PMID:28938012
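The factor-analysis step that sets the hyperparameters from related species is not shown here, but the conjugate Wishart update at the core of such a Gaussian graphical model can be sketched as follows, assuming zero-mean expression data. The prior scale matrix, degrees of freedom and data are illustrative.

```python
import numpy as np

def posterior_precision(X, V0, nu0):
    """Posterior mean of the precision matrix under a conjugate Wishart prior
    W(V0, nu0) for zero-mean Gaussian data X (samples x genes)."""
    n = X.shape[0]
    S = X.T @ X                              # scatter matrix
    V_post = np.linalg.inv(np.linalg.inv(V0) + S)
    nu_post = nu0 + n
    return nu_post * V_post                  # E[Lambda | X]

# Toy usage: identity-like prior vs. posterior estimate for 4 genes
rng = np.random.default_rng(3)
p = 4
X = rng.normal(size=(60, p))
prec = posterior_precision(X, V0=np.eye(p) / 5.0, nu0=p + 2)
print(np.round(prec, 2))   # small off-diagonal entries suggest missing edges
```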
P-Finder: Reconstruction of Signaling Networks from Protein-Protein Interactions and GO Annotations.
Young-Rae Cho; Yanan Xin; Speegle, Greg
2015-01-01
Because most complex genetic diseases are caused by defects of cell signaling, illuminating a signaling cascade is essential for understanding their mechanisms. We present three novel computational algorithms to reconstruct signaling networks between a starting protein and an ending protein using genome-wide protein-protein interaction (PPI) networks and gene ontology (GO) annotation data. A signaling network is represented as a directed acyclic graph in a merged form of multiple linear pathways. An advanced semantic similarity metric is applied for weighting PPIs as the preprocessing of all three methods. The first algorithm repeatedly extends the list of nodes based on path frequency towards an ending protein. The second algorithm repeatedly appends edges based on the occurrence of network motifs which indicate the link patterns more frequently appearing in a PPI network than in a random graph. The last algorithm uses the information propagation technique which iteratively updates edge orientations based on the path strength and merges the selected directed edges. Our experimental results demonstrate that the proposed algorithms achieve higher accuracy than previous methods when they are tested on well-studied pathways of S. cerevisiae. Furthermore, we introduce an interactive web application tool, called P-Finder, to visualize reconstructed signaling networks.
NASA Astrophysics Data System (ADS)
Xia, Y.; Tian, J.; d'Angelo, P.; Reinartz, P.
2018-05-01
3D reconstruction of plants is difficult, as the complex leaf distribution greatly increases the difficulty of dense matching. Semi-Global Matching has been successfully applied to recover the depth information of a scene, but may perform variably when different matching cost algorithms are used. In this paper, two matching cost computation algorithms, the Census transform and an algorithm using a convolutional neural network, are tested for plant reconstruction based on Semi-Global Matching. High resolution close-range photogrammetric images from a handheld camera are used for the experiment. The disparity maps generated with the two selected matching cost methods are comparable and of acceptable quality, which shows the good performance of the Census transform and the potential of neural networks to improve dense matching.
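The Semi-Global Matching pipeline itself is not reproduced, but the Census transform matching cost mentioned above can be sketched as follows; the window radius, image names and the wrap-around shift used for the disparity are simplifications for illustration.

```python
import numpy as np

def census_transform(img, radius=2):
    """Encode each pixel by comparing it with its (2r+1)^2 - 1 neighbours."""
    h, w = img.shape
    size = 2 * radius + 1
    codes = np.zeros((h, w), dtype=np.uint64)
    padded = np.pad(img, radius, mode="edge")
    bit = 0
    for dy in range(size):
        for dx in range(size):
            if dy == radius and dx == radius:
                continue
            neighbour = padded[dy:dy + h, dx:dx + w]
            codes |= (neighbour < img).astype(np.uint64) << np.uint64(bit)
            bit += 1
    return codes

def census_cost(left, right, disparity):
    """Hamming distance between census codes of the left image and the right
    image shifted by `disparity` (a per-pixel matching cost for SGM)."""
    cl, cr = census_transform(left), census_transform(right)
    xor = cl ^ np.roll(cr, disparity, axis=1)
    return np.array([[bin(v).count("1") for v in row] for row in xor])

# Toy usage: the right image is the left image shifted by 4 pixels
rng = np.random.default_rng(10)
left = rng.random((32, 32))
right = np.roll(left, -4, axis=1)
print(census_cost(left, right, disparity=4).mean())   # near zero at the true shift
```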
NASA Astrophysics Data System (ADS)
Tomiwa, K. G.
2017-09-01
The search for new physics in the H → γγ+met channel relies on how well the missing transverse energy is reconstructed. The MET algorithm used by the ATLAS experiment in turn uses input variables such as photons and jets, which depend on the reconstruction of the primary vertex. This document presents the performance of di-photon vertex reconstruction algorithms (the hardest vertex method and the Neural Network method). Comparing the performance of these algorithms on the nominal Standard Model sample and the Beyond Standard Model sample, we find that the Neural Network method of primary vertex selection performs better overall than the hardest vertex method.
A Distributed Compressive Sensing Scheme for Event Capture in Wireless Visual Sensor Networks
NASA Astrophysics Data System (ADS)
Hou, Meng; Xu, Sen; Wu, Weiling; Lin, Fei
2018-01-01
Image signals acquired by a wireless visual sensor network can be used for specific event capture. This event capture is realized by image processing at the sink node. A distributed compressive sensing scheme is used for the transmission of these image signals from the camera nodes to the sink node. A measurement scheme and a joint reconstruction algorithm for these image signals are proposed in this paper. Taking advantage of the spatial correlation between images within a sensing area, the cluster head node, which acts as the image decoder, can accurately co-reconstruct these image signals. The subjective visual quality and the reconstruction error rate are used for the evaluation of reconstructed image quality. Simulation results show that the joint reconstruction algorithm achieves higher image quality at the same image compressive rate than the independent reconstruction algorithm.
Inferring Gene Regulatory Networks by Singular Value Decomposition and Gravitation Field Algorithm
Zheng, Ming; Wu, Jia-nan; Huang, Yan-xin; Liu, Gui-xia; Zhou, You; Zhou, Chun-guang
2012-01-01
Reconstruction of gene regulatory networks (GRNs) is of utmost interest and has become a challenging computational problem in systems biology. However, every existing inference algorithm from gene expression profiles has its own advantages and disadvantages. In particular, the effectiveness and efficiency of previous algorithms are not high enough. In this work, we propose a novel inference algorithm for gene expression data based on a differential equation model. In this algorithm, two methods are combined for inferring GRNs. Before reconstructing GRNs, singular value decomposition is used to decompose the gene expression data, determine the solution space, and obtain all candidate solutions of GRNs. Within this generated family of candidate solutions, a modified gravitation field algorithm is used to infer GRNs by optimizing the criteria of the differential equation model and searching for the best network structure. The proposed algorithm is validated on both a simulated scale-free network and a real benchmark gene regulatory network from a network database. Both a Bayesian method and the traditional differential equation model were also used to infer GRNs, and the results were compared with those of the proposed algorithm. A genetic algorithm and simulated annealing were also used to evaluate the gravitation field algorithm. The cross-validation results confirmed the effectiveness of our algorithm, which significantly outperforms previous algorithms. PMID:23226565
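As a hedged illustration of the first step, the following sketch uses the SVD to obtain one particular solution of a linear expression model and a basis of the null space that parameterizes the whole family of candidate networks. The notation (dx/dt ≈ A x with X and X_dot as genes-by-time matrices) is generic rather than the authors' exact formulation.

```python
import numpy as np

def svd_solution_family(X, X_dot, tol=1e-10):
    """For the linear model X_dot ≈ A @ X, return a particular connectivity
    matrix A0 (minimum-norm least squares via the pseudoinverse) and a basis
    N of the null space of X^T; every A0 + C @ N.T with arbitrary C fits the
    data equally well, which defines the candidate-solution family."""
    U, s, Vt = np.linalg.svd(X.T, full_matrices=True)
    rank = int(np.sum(s > tol))
    A0 = X_dot @ np.linalg.pinv(X)   # particular (minimum-norm) solution
    N = Vt[rank:].T                  # null-space basis of X^T (may be empty)
    return A0, N

# Toy usage: 4 genes, 3 time samples -> underdetermined, non-trivial null space
rng = np.random.default_rng(4)
X = rng.normal(size=(4, 3))          # genes x time points
X_dot = rng.normal(size=(4, 3))
A0, N = svd_solution_family(X, X_dot)
print(A0.shape, N.shape)
```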
Chen, Chi-Kan
2017-07-26
The identification of genetic regulatory networks (GRNs) provides insights into complex cellular processes. A class of recurrent neural networks (RNNs) captures the dynamics of GRNs. Algorithms combining the RNN and machine learning schemes were proposed to reconstruct small-scale GRNs using gene expression time series. We present new GRN reconstruction methods with neural networks. The RNN is extended to a class of recurrent multilayer perceptrons (RMLPs) with latent nodes. Our methods contain two steps: the edge rank assignment step and the network construction step. The former assigns ranks to all possible edges by a recursive procedure based on the estimated weights of wires of the RNN/RMLP (RE-RNN/RE-RMLP), and the latter constructs a network consisting of top-ranked edges under which the optimized RNN simulates the gene expression time series. Particle swarm optimization (PSO) is applied to optimize the parameters of the RNNs and RMLPs in a two-step algorithm. The proposed RE-RNN-RNN and RE-RMLP-RNN algorithms are tested on synthetic and experimental gene expression time series of small GRNs of about 10 genes. The experimental time series are from studies of yeast cell cycle regulated genes and E. coli DNA repair genes. The unstable estimation of the RNN using experimental time series with limited data points can lead to fairly arbitrary predicted GRNs. Our methods incorporate the RNN and RMLP into a two-step structure learning procedure. Results show that RE-RMLP, which uses the RMLP with a suitable number of latent nodes to reduce the parameter dimension, often yields more accurate edge ranks than RE-RNN, which uses the regularized RNN, on short simulated time series. When the networks derived by RE-RMLP-RNN using different numbers of latent nodes in step one are combined by a weighted majority voting rule to infer the GRN, the method performs consistently and outperforms published algorithms for GRN reconstruction on most benchmark time series. The framework of two-step algorithms can potentially incorporate different nonlinear differential equation models to reconstruct the GRN.
MIIC online: a web server to reconstruct causal or non-causal networks from non-perturbative data.
Sella, Nadir; Verny, Louis; Uguzzoni, Guido; Affeldt, Séverine; Isambert, Hervé
2018-07-01
We present a web server running the MIIC algorithm, a network learning method combining constraint-based and information-theoretic frameworks to reconstruct causal, non-causal or mixed networks from non-perturbative data, without the need for an a priori choice on the class of reconstructed network. Starting from a fully connected network, the algorithm first removes dispensable edges by iteratively subtracting the most significant information contributions from indirect paths between each pair of variables. The remaining edges are then filtered based on their confidence assessment or oriented based on the signature of causality in observational data. MIIC online server can be used for a broad range of biological data, including possible unobserved (latent) variables, from single-cell gene expression data to protein sequence evolution and outperforms or matches state-of-the-art methods for either causal or non-causal network reconstruction. MIIC online can be freely accessed at https://miic.curie.fr. Supplementary data are available at Bioinformatics online.
Michailidis, George
2014-01-01
Reconstructing transcriptional regulatory networks is an important task in functional genomics. Data obtained from experiments that perturb genes by knockouts or RNA interference contain useful information for addressing this reconstruction problem. However, such data can be limited in size and/or are expensive to acquire. On the other hand, observational data of the organism in steady state (e.g., wild-type) are more readily available, but their informational content is inadequate for the task at hand. We develop a computational approach to appropriately utilize both data sources for estimating a regulatory network. The proposed approach is based on a three-step algorithm to estimate the underlying directed but cyclic network, that uses as input both perturbation screens and steady state gene expression data. In the first step, the algorithm determines causal orderings of the genes that are consistent with the perturbation data, by combining an exhaustive search method with a fast heuristic that in turn couples a Monte Carlo technique with a fast search algorithm. In the second step, for each obtained causal ordering, a regulatory network is estimated using a penalized likelihood based method, while in the third step a consensus network is constructed from the highest scored ones. Extensive computational experiments show that the algorithm performs well in reconstructing the underlying network and clearly outperforms competing approaches that rely only on a single data source. Further, it is established that the algorithm produces a consistent estimate of the regulatory network. PMID:24586224
Tomše, Petra; Jensterle, Luka; Rep, Sebastijan; Grmek, Marko; Zaletel, Katja; Eidelberg, David; Dhawan, Vijay; Ma, Yilong; Trošt, Maja
2017-09-01
To evaluate the reproducibility of the expression of the Parkinson's Disease Related Pattern (PDRP) across multiple sets of 18F-FDG-PET brain images reconstructed with different reconstruction algorithms. 18F-FDG-PET brain imaging was performed in two independent cohorts of Parkinson's disease (PD) patients and normal controls (NC). The Slovenian cohort (20 PD patients, 20 NC) was scanned with a Siemens Biograph mCT camera and reconstructed using FBP, FBP+TOF, OSEM, OSEM+TOF, OSEM+PSF and OSEM+PSF+TOF. The American cohort (20 PD patients, 7 NC) was scanned with a GE Advance camera and reconstructed using 3DRP, FORE-FBP and FORE-Iterative. Expressions of two previously-validated PDRP patterns (PDRP-Slovenia and PDRP-USA) were calculated. We compared the ability of the PDRP to discriminate PD patients from NC, the differences and correlation between the corresponding subject scores, and ROC analysis results across the different reconstruction algorithms. The expression of the PDRP-Slovenia and PDRP-USA networks was significantly elevated in PD patients compared to NC (p<0.0001), regardless of reconstruction algorithm. PDRP expression strongly correlated between all studied algorithms and the reference algorithm (r⩾0.993, p<0.0001). Average differences in the PDRP expression among different algorithms varied within 0.73 and 0.08 of the reference value for PDRP-Slovenia and PDRP-USA, respectively. ROC analysis confirmed high similarity in sensitivity, specificity and AUC among all studied reconstruction algorithms. These results show that the expression of PDRP is reproducible across a variety of reconstruction algorithms of 18F-FDG-PET brain images. PDRP is capable of providing a robust metabolic biomarker of PD for multicenter 18F-FDG-PET images acquired in the context of differential diagnosis or clinical trials. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ma, Chuang; Chen, Han-Shuang; Lai, Ying-Cheng; Zhang, Hai-Feng
2018-02-01
Complex networks hosting binary-state dynamics arise in a variety of contexts. In spite of previous works, to fully reconstruct the network structure from observed binary data remains challenging. We articulate a statistical inference based approach to this problem. In particular, exploiting the expectation-maximization (EM) algorithm, we develop a method to ascertain the neighbors of any node in the network based solely on binary data, thereby recovering the full topology of the network. A key ingredient of our method is the maximum-likelihood estimation of the probabilities associated with actual or nonexistent links, and we show that the EM algorithm can distinguish the two kinds of probability values without any ambiguity, insofar as the length of the available binary time series is reasonably long. Our method does not require any a priori knowledge of the detailed dynamical processes, is parameter-free, and is capable of accurate reconstruction even in the presence of noise. We demonstrate the method using combinations of distinct types of binary dynamical processes and network topologies, and provide a physical understanding of the underlying reconstruction mechanism. Our statistical inference based reconstruction method contributes an additional piece to the rapidly expanding "toolbox" of data based reverse engineering of complex networked systems.
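The full likelihood over binary dynamics is beyond a short sketch, but the final separation step described above, splitting estimated link probabilities into "existing link" and "nonexistent link" groups, can be illustrated with a plain two-component EM on synthetic values. This is a generic Gaussian-mixture stand-in, not the paper's estimator, and all numbers are synthetic.

```python
import numpy as np

def em_two_groups(p, iters=100):
    """Two-component 1D Gaussian-mixture EM used to split estimated link
    probabilities `p` into a low group (no link) and a high group (link)."""
    mu = np.array([p.min(), p.max()])
    sigma = np.array([p.std(), p.std()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each value
        dens = pi * np.exp(-0.5 * ((p[:, None] - mu) / sigma) ** 2) / sigma
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update mixture weights, means and standard deviations
        nk = resp.sum(axis=0)
        pi = nk / len(p)
        mu = (resp * p[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (p[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
    return resp[:, 1] > 0.5   # True where a link is inferred

# Synthetic example: 80 candidate links, 20 of them real
rng = np.random.default_rng(5)
p_est = np.concatenate([rng.normal(0.1, 0.05, 60), rng.normal(0.7, 0.1, 20)])
print(em_two_groups(p_est).sum(), "links inferred")
```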
Road detection in SAR images using a tensor voting algorithm
NASA Astrophysics Data System (ADS)
Shen, Dajiang; Hu, Chun; Yang, Bing; Tian, Jinwen; Liu, Jian
2007-11-01
In this paper, the problem of the detection of road networks in Synthetic Aperture Radar (SAR) images is addressed. Most previous methods extract roads by detecting lines and then reconstructing the network. Traditional algorithms used in the reconstruction process, such as MRFs, GA and Level Set, are iterative. The tensor voting methodology we propose is non-iterative and insensitive to initialization. Furthermore, the only free parameter is the size of the neighborhood, which is related to the scale. The algorithm we present is verified to be effective when applied to road extraction from real Radarsat images.
Nemenman, Ilya; Escola, G Sean; Hlavacek, William S; Unkefer, Pat J; Unkefer, Clifford J; Wall, Michael E
2007-12-01
We investigate the ability of algorithms developed for reverse engineering of transcriptional regulatory networks to reconstruct metabolic networks from high-throughput metabolite profiling data. For benchmarking purposes, we generate synthetic metabolic profiles based on a well-established model for red blood cell metabolism. A variety of data sets are generated, accounting for different properties of real metabolic networks, such as experimental noise, metabolite correlations, and temporal dynamics. These data sets are made available online. We use ARACNE, a mainstream algorithm for reverse engineering of transcriptional regulatory networks from gene expression data, to predict metabolic interactions from these data sets. We find that the performance of ARACNE on metabolic data is comparable to that on gene expression data.
SubspaceEM: A Fast Maximum-a-posteriori Algorithm for Cryo-EM Single Particle Reconstruction
Dvornek, Nicha C.; Sigworth, Fred J.; Tagare, Hemant D.
2015-01-01
Single particle reconstruction methods based on the maximum-likelihood principle and the expectation-maximization (E–M) algorithm are popular because of their ability to produce high resolution structures. However, these algorithms are computationally very expensive, requiring a network of computational servers. To overcome this computational bottleneck, we propose a new mathematical framework for accelerating maximum-likelihood reconstructions. The speedup is by orders of magnitude and the proposed algorithm produces similar quality reconstructions compared to the standard maximum-likelihood formulation. Our approach uses subspace approximations of the cryo-electron microscopy (cryo-EM) data and projection images, greatly reducing the number of image transformations and comparisons that are computed. Experiments using simulated and actual cryo-EM data show that speedup in overall execution time compared to traditional maximum-likelihood reconstruction reaches factors of over 300. PMID:25839831
Fiber Orientation Estimation Guided by a Deep Network.
Ye, Chuyang; Prince, Jerry L
2017-09-01
Diffusion magnetic resonance imaging (dMRI) is currently the only tool for noninvasively imaging the brain's white matter tracts. The fiber orientation (FO) is a key feature computed from dMRI for tract reconstruction. Because the number of FOs in a voxel is usually small, dictionary-based sparse reconstruction has been used to estimate FOs. However, accurate estimation of complex FO configurations in the presence of noise can still be challenging. In this work we explore the use of a deep network for FO estimation in a dictionary-based framework and propose an algorithm named Fiber Orientation Reconstruction guided by a Deep Network (FORDN). FORDN consists of two steps. First, we use a smaller dictionary encoding coarse basis FOs to represent diffusion signals. To estimate the mixture fractions of the dictionary atoms, a deep network is designed to solve the sparse reconstruction problem. Second, the coarse FOs inform the final FO estimation, where a larger dictionary encoding a dense basis of FOs is used and a weighted ℓ1-norm regularized least squares problem is solved to encourage FOs that are consistent with the network output. FORDN was evaluated and compared with state-of-the-art algorithms that estimate FOs using sparse reconstruction on simulated and typical clinical dMRI data. The results demonstrate the benefit of using a deep network for FO estimation.
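The deep-network stage is not shown here, but the second-step problem, a weighted ℓ1-norm regularized least squares fit over a larger FO dictionary, can be sketched with a plain ISTA (proximal gradient) loop. In the actual method the weights would be derived from the network output, whereas in this illustration they are set by hand.

```python
import numpy as np

def weighted_l1_ls(D, y, w, lam=0.1, iters=500):
    """Solve min_x 0.5*||D x - y||^2 + lam * sum_i w_i |x_i| with ISTA.
    Smaller w_i (e.g. for FOs favoured by the network) are penalized less."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = D.T @ (D @ x - y)
        z = x - grad / L
        thresh = lam * w / L
        x = np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)  # soft-thresholding
    return x

# Toy usage: 30-atom dictionary, 2 active atoms, lower weights on atoms 3 and 17
rng = np.random.default_rng(6)
D = rng.normal(size=(20, 30))
x_true = np.zeros(30); x_true[[3, 17]] = [1.0, -0.5]
y = D @ x_true + 0.01 * rng.normal(size=20)
w = np.ones(30); w[[3, 17]] = 0.1
print(np.nonzero(np.round(weighted_l1_ls(D, y, w), 2))[0])
```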
Systematic assignment of thermodynamic constraints in metabolic network models
Kümmel, Anne; Panke, Sven; Heinemann, Matthias
2006-01-01
Background The availability of genome sequences for many organisms enabled the reconstruction of several genome-scale metabolic network models. Currently, significant efforts are put into the automated reconstruction of such models. For this, several computational tools have been developed that particularly assist in identifying and compiling the organism-specific lists of metabolic reactions. In contrast, the last step of the model reconstruction process, which is the definition of the thermodynamic constraints in terms of reaction directionalities, still needs to be done manually. No computational method exists that allows for an automated and systematic assignment of reaction directions in genome-scale models. Results We present an algorithm that – based on thermodynamics, network topology and heuristic rules – automatically assigns reaction directions in metabolic models such that the reaction network is thermodynamically feasible with respect to the production of energy equivalents. It first exploits all available experimentally derived Gibbs energies of formation to identify irreversible reactions. As these thermodynamic data are not available for all metabolites, in a next step, further reaction directions are assigned on the basis of network topology considerations and thermodynamics-based heuristic rules. Briefly, the algorithm identifies reaction subsets from the metabolic network that are able to convert low-energy co-substrates into their high-energy counterparts and thus net produce energy. Our algorithm aims at disabling such thermodynamically infeasible cyclic operation of reaction subnetworks by assigning reaction directions based on a set of thermodynamics-derived heuristic rules. We demonstrate our algorithm on a genome-scale metabolic model of E. coli. The introduced systematic direction assignment yielded 130 irreversible reactions (out of 920 total reactions), which corresponds to about 70% of all irreversible reactions that are required to disable thermodynamically infeasible energy production. Conclusion Although not being fully comprehensive, our algorithm for systematic reaction direction assignment could define a significant number of irreversible reactions automatically with low computational effort. We envision that the presented algorithm is a valuable part of a computational framework that assists the automated reconstruction of genome-scale metabolic models. PMID:17123434
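A minimal sketch of the first step, using Gibbs energies of formation to flag clearly irreversible reactions, might look as follows. The metabolites, formation energies and the ±30 kJ/mol uncertainty band are illustrative values, not numbers from the paper.

```python
# Stoichiometric coefficients: negative for substrates, positive for products.
FORMATION_KJ_PER_MOL = {"A": -400.0, "B": -470.0, "atp": -2290.0, "adp": -2265.0}

def reaction_delta_g(stoichiometry, formation_energies):
    """Standard reaction Gibbs energy as the stoichiometry-weighted sum of
    formation energies; returns None if any value is unavailable."""
    if any(m not in formation_energies for m in stoichiometry):
        return None
    return sum(coef * formation_energies[m] for m, coef in stoichiometry.items())

def direction(delta_g, uncertainty=30.0):
    """Assign a direction only when delta_g lies clearly outside the uncertainty band."""
    if delta_g is None or abs(delta_g) <= uncertainty:
        return "reversible / undetermined"
    return "irreversible forward" if delta_g < 0 else "irreversible backward"

# Toy reaction: A + ATP -> B + ADP
phosphorylation = {"A": -1, "atp": -1, "B": 1, "adp": 1}
dg = reaction_delta_g(phosphorylation, FORMATION_KJ_PER_MOL)
print(dg, direction(dg))
```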
Semi-Supervised Multi-View Learning for Gene Network Reconstruction
Ceci, Michelangelo; Pio, Gianvito; Kuzmanovski, Vladimir; Džeroski, Sašo
2015-01-01
The task of gene regulatory network reconstruction from high-throughput data is receiving increasing attention in recent years. As a consequence, many inference methods for solving this task have been proposed in the literature. It has been recently observed, however, that no single inference method performs optimally across all datasets. It has also been shown that the integration of predictions from multiple inference methods is more robust and shows high performance across diverse datasets. Inspired by this research, in this paper, we propose a machine learning solution which learns to combine predictions from multiple inference methods. While this approach adds additional complexity to the inference process, we expect it would also carry substantial benefits. These would come from the automatic adaptation to patterns on the outputs of individual inference methods, so that it is possible to identify regulatory interactions more reliably when these patterns occur. This article demonstrates the benefits (in terms of accuracy of the reconstructed networks) of the proposed method, which exploits an iterative, semi-supervised ensemble-based algorithm. The algorithm learns to combine the interactions predicted by many different inference methods in the multi-view learning setting. The empirical evaluation of the proposed algorithm on a prokaryotic model organism (E. coli) and on a eukaryotic model organism (S. cerevisiae) clearly shows improved performance over the state of the art methods. The results indicate that gene regulatory network reconstruction for the real datasets is more difficult for S. cerevisiae than for E. coli. The software, all the datasets used in the experiments and all the results are available for download at the following link: http://figshare.com/articles/Semi_supervised_Multi_View_Learning_for_Gene_Network_Reconstruction/1604827. PMID:26641091
Improved Maximum Parsimony Models for Phylogenetic Networks.
Van Iersel, Leo; Jones, Mark; Scornavacca, Celine
2018-05-01
Phylogenetic networks are well suited to represent evolutionary histories comprising reticulate evolution. Several methods aiming at reconstructing explicit phylogenetic networks have been developed in the last two decades. In this article, we propose a new definition of maximum parsimony for phylogenetic networks that permits to model biological scenarios that cannot be modeled by the definitions currently present in the literature (namely, the "hardwired" and "softwired" parsimony). Building on this new definition, we provide several algorithmic results that lay the foundations for new parsimony-based methods for phylogenetic network reconstruction.
Predicting missing links in complex networks based on common neighbors and distance
Yang, Jinxuan; Zhang, Xiao-Dong
2016-01-01
The algorithms based on the common neighbors metric to predict missing links in complex networks are very popular, but most of these algorithms do not account for missing links between nodes with no common neighbors. In some cases, especially when node pairs have few common neighbors, these methods are not accurate enough to reconstruct networks. We propose in this paper a new algorithm based on common neighbors and distance to improve the accuracy of link prediction. Our proposed algorithm is remarkably effective at predicting missing links between nodes with no common neighbors and performs better than most existing methods for a variety of real-world networks without increasing complexity. PMID:27905526
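The paper's exact scoring function is not reproduced here; the sketch below combines the common-neighbour count with a shortest-path term in the spirit described above, so that node pairs without common neighbours still receive a non-zero score. The combination rule and weight are illustrative choices.

```python
import networkx as nx

def cn_distance_score(G, u, v, alpha=0.5):
    """Score a candidate link (u, v) by the number of common neighbours plus
    a term rewarding short paths, so that node pairs with no common neighbours
    still receive a non-zero score when they are close in the graph."""
    cn = len(list(nx.common_neighbors(G, u, v)))
    try:
        d = nx.shortest_path_length(G, u, v)
    except nx.NetworkXNoPath:
        return cn
    return cn + alpha / d

# Toy usage: rank all non-edges of a small benchmark graph
G = nx.karate_club_graph()
candidates = [(u, v) for u in G for v in G if u < v and not G.has_edge(u, v)]
ranked = sorted(candidates, key=lambda e: cn_distance_score(G, *e), reverse=True)
print(ranked[:5])   # top-5 predicted missing links
```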
The accurate reconstruction of gene regulatory networks from large scale molecular profile datasets represents one of the grand challenges of Systems Biology. The Algorithm for the Reconstruction of Accurate Cellular Networks (ARACNe) represents one of the most effective tools to accomplish this goal. However, the initial Fixed Bandwidth (FB) implementation is both inefficient and unable to deal with sample sets providing largely uneven coverage of the probability density space.
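ARACNe's adaptive-kernel MI estimator is not reproduced here, but the core idea, scoring all gene pairs by mutual information and pruning indirect edges with the data processing inequality (DPI), can be sketched as follows. The histogram MI estimate and the zero tolerance are simplifications.

```python
import numpy as np

def mutual_info(x, y, bins=8):
    """Simple histogram estimate of the mutual information between two profiles."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1, keepdims=True), pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def aracne_like(expr, eps=0.0):
    """Build an MI matrix and apply the DPI: in every triangle, drop the
    weakest edge, which is assumed to be the indirect interaction."""
    n = expr.shape[1]
    mi = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            mi[i, j] = mi[j, i] = mutual_info(expr[:, i], expr[:, j])
    keep = mi > 0
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(n):
                if k in (i, j):
                    continue
                if mi[i, j] < min(mi[i, k], mi[j, k]) - eps:
                    keep[i, j] = keep[j, i] = False
    return mi * keep

# Toy chain A -> B -> C: the indirect A-C edge should be pruned by the DPI
rng = np.random.default_rng(7)
a = rng.normal(size=500); b = a + 0.3 * rng.normal(size=500)
c = b + 0.3 * rng.normal(size=500)
print(np.round(aracne_like(np.column_stack([a, b, c])), 2))
```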
Eo, Taejoon; Jun, Yohan; Kim, Taeseong; Jang, Jinseong; Lee, Ho-Joon; Hwang, Dosik
2018-04-06
To demonstrate accurate MR image reconstruction from undersampled k-space data using cross-domain convolutional neural networks (CNNs). METHODS: Cross-domain CNNs consist of 3 components: (1) a deep CNN operating on the k-space (KCNN), (2) a deep CNN operating on the image domain (ICNN), and (3) interleaved data consistency operations. These components are alternately applied, and each CNN is trained to minimize the loss between the reconstructed and corresponding fully sampled k-spaces. The final reconstructed image is obtained by forward-propagating the undersampled k-space data through the entire network. Performances of K-net (KCNN with inverse Fourier transform), I-net (ICNN with interleaved data consistency), and various combinations of the 2 different networks were tested. The test results indicated that K-net and I-net have different advantages/disadvantages in terms of tissue-structure restoration. Consequently, the combination of K-net and I-net is superior to single-domain CNNs. Three MR data sets, the T2 fluid-attenuated inversion recovery (T2-FLAIR) set from the Alzheimer's Disease Neuroimaging Initiative and 2 data sets acquired at our local institute (T2-FLAIR and T1-weighted), were used to evaluate the performance of 7 conventional reconstruction algorithms and the proposed cross-domain CNNs, which hereafter is referred to as KIKI-net. KIKI-net outperforms conventional algorithms with mean improvements of 2.29 dB in peak SNR and 0.031 in structure similarity. KIKI-net exhibits superior performance over state-of-the-art conventional algorithms in terms of restoring tissue structures and removing aliasing artifacts. The results demonstrate that KIKI-net is applicable up to a reduction factor of 3 to 4 based on variable-density Cartesian undersampling. © 2018 International Society for Magnetic Resonance in Medicine.
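The KCNN and ICNN components are not reproduced here, but the interleaved data consistency operation, re-imposing the measured k-space samples onto the current image estimate, can be sketched as follows. The sampling mask, image sizes and data are synthetic.

```python
import numpy as np

def data_consistency(image, measured_kspace, mask):
    """Replace the k-space values of the current image estimate with the
    measured values wherever the undersampling mask is 1, then return to
    the image domain."""
    k = np.fft.fft2(image)
    k = np.where(mask, measured_kspace, k)
    return np.fft.ifft2(k)

# Synthetic example: retain every 3rd phase-encoding line of a random "image"
rng = np.random.default_rng(8)
img_true = rng.normal(size=(64, 64))
mask = np.zeros((64, 64), dtype=bool); mask[::3, :] = True
measured = np.fft.fft2(img_true) * mask
img_est = rng.normal(size=(64, 64))            # e.g. the output of a CNN stage
img_dc = data_consistency(img_est, measured, mask)
print(np.allclose(np.fft.fft2(img_dc)[mask], measured[mask]))  # True
```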
Reverse engineering and analysis of large genome-scale gene networks
Aluru, Maneesha; Zola, Jaroslaw; Nettleton, Dan; Aluru, Srinivas
2013-01-01
Reverse engineering the whole-genome networks of complex multicellular organisms continues to remain a challenge. While simpler models easily scale to large number of genes and gene expression datasets, more accurate models are compute intensive limiting their scale of applicability. To enable fast and accurate reconstruction of large networks, we developed Tool for Inferring Network of Genes (TINGe), a parallel mutual information (MI)-based program. The novel features of our approach include: (i) B-spline-based formulation for linear-time computation of MI, (ii) a novel algorithm for direct permutation testing and (iii) development of parallel algorithms to reduce run-time and facilitate construction of large networks. We assess the quality of our method by comparison with ARACNe (Algorithm for the Reconstruction of Accurate Cellular Networks) and GeneNet and demonstrate its unique capability by reverse engineering the whole-genome network of Arabidopsis thaliana from 3137 Affymetrix ATH1 GeneChips in just 9 min on a 1024-core cluster. We further report on the development of a new software Gene Network Analyzer (GeNA) for extracting context-specific subnetworks from a given set of seed genes. Using TINGe and GeNA, we performed analysis of 241 Arabidopsis AraCyc 8.0 pathways, and the results are made available through the web. PMID:23042249
Compressed sensing based missing nodes prediction in temporal communication network
NASA Astrophysics Data System (ADS)
Cheng, Guangquan; Ma, Yang; Liu, Zhong; Xie, Fuli
2018-02-01
The reconstruction of complex network topology is of great theoretical and practical significance. Most research so far focuses on the prediction of missing links. There are many mature algorithms for link prediction which have achieved good results, but research on the prediction of missing nodes has just begun. In this paper, we propose an algorithm for missing node prediction in complex networks. We detect the position of missing nodes based on their neighbor nodes under the theory of compressed sensing, and extend the algorithm to the case of multiple missing nodes using spectral clustering. Experiments on real public network datasets and simulated datasets show that our algorithm can detect the locations of hidden nodes effectively with high precision.
Context-specific metabolic networks are consistent with experiments.
Becker, Scott A; Palsson, Bernhard O
2008-05-16
Reconstructions of cellular metabolism are publicly available for a variety of different microorganisms and some mammalian genomes. To date, these reconstructions are "genome-scale" and strive to include all reactions implied by the genome annotation, as well as those with direct experimental evidence. Clearly, many of the reactions in a genome-scale reconstruction will not be active under particular conditions or in a particular cell type. Methods to tailor these comprehensive genome-scale reconstructions into context-specific networks will aid predictive in silico modeling for a particular situation. We present a method called Gene Inactivity Moderated by Metabolism and Expression (GIMME) to achieve this goal. The GIMME algorithm uses quantitative gene expression data and one or more presupposed metabolic objectives to produce the context-specific reconstruction that is most consistent with the available data. Furthermore, the algorithm provides a quantitative inconsistency score indicating how consistent a set of gene expression data is with a particular metabolic objective. We show that this algorithm produces results consistent with biological experiments and intuition for adaptive evolution of bacteria, rational design of metabolic engineering strains, and human skeletal muscle cells. This work represents progress towards producing constraint-based models of metabolism that are specific to the conditions where the expression profiling data is available.
Neural network Hilbert transform based filtered backprojection for fast inline x-ray inspection
NASA Astrophysics Data System (ADS)
Janssens, Eline; De Beenhouwer, Jan; Van Dael, Mattias; De Schryver, Thomas; Van Hoorebeke, Luc; Verboven, Pieter; Nicolai, Bart; Sijbers, Jan
2018-03-01
X-ray imaging is an important tool for quality control since it makes it possible to inspect the interior of products in a non-destructive way. Conventional x-ray imaging, however, is slow and expensive. Inline x-ray inspection, on the other hand, can pave the way towards fast and individual quality control, provided that a sufficiently high throughput can be achieved at a minimal cost. To meet these criteria, an inline inspection acquisition geometry is proposed where the object moves and rotates on a conveyor belt while it passes a fixed source and detector. Moreover, for this acquisition geometry, a new neural-network-based reconstruction algorithm is introduced: the neural network Hilbert transform based filtered backprojection. The proposed algorithm is evaluated on both simulated and real inline x-ray data and has been shown to generate high-quality reconstructions of 400 × 400 pixels within 200 ms, thereby meeting the high-throughput criteria.
NASA Astrophysics Data System (ADS)
Bae, Kyung-Hoon; Lee, Jungjoon; Kim, Eun-Soo
2008-06-01
In this paper, a variable disparity estimation (VDE)-based intermediate view reconstruction (IVR) in dynamic flow allocation (DFA) over an Ethernet passive optical network (EPON)-based access network is proposed. In the proposed system, the stereoscopic images are estimated by a variable block-matching algorithm (VBMA), and they are transmitted to the receiver through DFA over EPON. This scheme improves a priority-based access network by converting it to a flow-based access network with a new access mechanism and scheduling algorithm, and then 16-view images are synthesized by the IVR using VDE. Some experimental results indicate that the proposed system improves the peak signal-to-noise ratio (PSNR) by as much as 4.86 dB and reduces the processing time to 3.52 s. Additionally, the network service provider can provide upper limits of transmission delays by the flow. The modeling and simulation results, including mathematical analyses, from this scheme are also provided.
Integrating Genetic and Functional Genomic Data to Elucidate Common Disease Traits
NASA Astrophysics Data System (ADS)
Schadt, Eric
2005-03-01
The reconstruction of genetic networks in mammalian systems is one of the primary goals in biological research, especially as such reconstructions relate to elucidating not only common, polygenic human diseases, but living systems more generally. Here I present a statistical procedure for inferring causal relationships between gene expression traits and more classic clinical traits, including complex disease traits. This procedure has been generalized to the gene network reconstruction problem, where naturally occurring genetic variations in segregating mouse populations are used as a source of perturbations to elucidate tissue-specific gene networks. Differences in the extent of genetic control between genders and among four different tissues are highlighted. I also demonstrate that the networks derived from expression data in segregating mouse populations using the novel network reconstruction algorithm are able to capture causal associations between genes that result in increased predictive power, compared to more classically reconstructed networks derived from the same data. This approach to causal inference in large segregating mouse populations over multiple tissues not only elucidates fundamental aspects of transcriptional control, it also allows for the objective identification of key drivers of common human diseases.
SCENERY: a web application for (causal) network reconstruction from cytometry data
Papoutsoglou, Georgios; Athineou, Giorgos; Lagani, Vincenzo; Xanthopoulos, Iordanis; Schmidt, Angelika; Éliás, Szabolcs; Tegnér, Jesper
2017-01-01
Flow and mass cytometry technologies can probe proteins as biological markers in thousands of individual cells simultaneously, providing unprecedented opportunities for reconstructing networks of protein interactions through machine learning algorithms. The network reconstruction (NR) problem has been well-studied by the machine learning community. However, the potentials of available methods remain largely unknown to the cytometry community, mainly due to their intrinsic complexity and the lack of comprehensive, powerful and easy-to-use NR software implementations specific for cytometry data. To bridge this gap, we present Single CEll NEtwork Reconstruction sYstem (SCENERY), a web server featuring several standard and advanced cytometry data analysis methods coupled with NR algorithms in a user-friendly, on-line environment. In SCENERY, users may upload their data and set their own study design. The server offers several data analysis options categorized into three classes of methods: data (pre)processing, statistical analysis and NR. The server also provides interactive visualization and download of results as ready-to-publish images or multimedia reports. Its core is modular and based on the widely-used and robust R platform allowing power users to extend its functionalities by submitting their own NR methods. SCENERY is available at scenery.csd.uoc.gr or http://mensxmachina.org/en/software/. PMID:28525568
Three-dimensional slum urban reconstruction in Envisat and Google Earth Egypt
NASA Astrophysics Data System (ADS)
Marghany, M.; Genderen, J. v.
2014-02-01
This study aims to investigate the capability of ENVISAT ASAR satellite and Google Earth data for three-dimensional (3-D) slum urban reconstruction in a developing country such as Egypt. The main objective of this work is to utilize an automatic 3-D detection algorithm for urban slums in ENVISAT ASAR and Google Earth images acquired over Cairo, Egypt, using the fuzzy B-spline algorithm. The results show that the fuzzy algorithm is the best indicator for chaotic urban slums, as it can discriminate them from their surrounding environment. The combination of the fuzzy and B-spline algorithms is then used to reconstruct the 3-D structure of urban slums. The results show that urban slums, the road network, and infrastructures are perfectly discriminated. It can therefore be concluded that the fuzzy algorithm is an appropriate algorithm for automatic detection of chaotic urban slums in ENVISAT ASAR and Google Earth data.
Real time network traffic monitoring for wireless local area networks based on compressed sensing
NASA Astrophysics Data System (ADS)
Balouchestani, Mohammadreza
2017-05-01
A wireless local area network (WLAN) is an important type of wireless network that connects different wireless nodes in a local area. WLANs suffer from important problems such as network load balancing, high energy consumption, and heavy sampling load. This paper presents a new network traffic approach based on Compressed Sensing (CS) for improving the quality of WLANs. The proposed architecture reduces the Data Delay Probability (DDP) to 15%, which is a good record for WLANs. The proposed architecture increases Data Throughput (DT) by 22% and the Signal-to-Noise (S/N) ratio by 17%, which provides a good basis for establishing high-quality local area networks. This architecture enables continuous data acquisition and compression of WLAN signals that are suitable for a variety of other wireless networking applications. At the transmitter side of each wireless node, an analog-CS framework is applied at the sensing step, before the analog-to-digital converter, in order to generate the compressed version of the input signal. At the receiver side of each wireless node, a reconstruction algorithm is applied in order to reconstruct the original signals from the compressed signals with high probability and sufficient accuracy. The proposed algorithm outperforms existing algorithms by achieving a good level of Quality of Service (QoS). This ability allows the Bit Error Rate (BER) at each wireless node to be reduced by 15%.
Zhang, Wanhong; Zhou, Tong
2015-01-01
Motivation Identifying gene regulatory networks (GRNs) which consist of a large number of interacting units has become a problem of paramount importance in systems biology. Situations exist extensively in which causal interacting relationships among these units are required to be reconstructed from measured expression data and other a priori information. Though numerous classical methods have been developed to unravel the interactions of GRNs, these methods either have high computational complexity or low estimation accuracy. Note that great similarities exist between identifying the genes that directly regulate a specific gene and reconstructing a sparse vector, which often amounts to determining the number, location and magnitude of nonzero entries of an unknown vector by solving an underdetermined system of linear equations y = Φx. Based on these similarities, we propose a novel sparse reconstruction framework to identify the structure of a GRN, so as to increase the accuracy of causal regulation estimates as well as to reduce their computational complexity. Results In this paper, a sparse reconstruction framework is proposed on the basis of steady-state experiment data to identify GRN structure. Different from traditional methods, this approach is well suited to large-scale underdetermined problems of inferring a sparse vector. We investigate how to combine noisy steady-state experiment data and a sparse reconstruction algorithm to identify causal relationships. The efficiency of this method is tested on an artificial linear network, a mitogen-activated protein kinase (MAPK) pathway network and the in silico networks of the DREAM challenges. The performance of the suggested approach is compared with two state-of-the-art algorithms, the widely adopted total least-squares (TLS) method and the available results of the DREAM project. The results show that, at a lower computational cost, the proposed method can significantly enhance estimation accuracy and greatly reduce false positive and negative errors. Furthermore, numerical calculations demonstrate that the proposed algorithm may have faster convergence speed and smaller fluctuation than other methods when either estimation error or estimation bias is considered. PMID:26207991
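As a hedged illustration of the sparse-reconstruction framing y = Φx described above, a basic orthogonal matching pursuit (OMP) recovery of an underdetermined sparse vector is sketched below; this is a generic solver, not the authors' specific algorithm, and the problem dimensions are arbitrary.

```python
import numpy as np

def omp(Phi, y, n_nonzero):
    """Orthogonal matching pursuit: greedily pick the column of Phi most
    correlated with the residual, then refit the support by least squares."""
    residual, support = y.copy(), []
    x = np.zeros(Phi.shape[1])
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coeffs
    x[support] = coeffs
    return x

# Underdetermined toy problem: 60 candidate regulators, 25 measurements, 3 true
rng = np.random.default_rng(9)
Phi = rng.normal(size=(25, 60))
x_true = np.zeros(60); x_true[[5, 20, 41]] = [1.0, -2.0, 0.5]
y = Phi @ x_true
print(np.nonzero(omp(Phi, y, n_nonzero=3))[0])   # expect indices 5, 20, 41
```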
Empirical Bayes conditional independence graphs for regulatory network recovery.
Mahdi, Rami; Madduri, Abishek S; Wang, Guoqing; Strulovici-Barel, Yael; Salit, Jacqueline; Hackett, Neil R; Crystal, Ronald G; Mezey, Jason G
2012-08-01
Computational inference methods that make use of graphical models to extract regulatory networks from gene expression data can have difficulty reconstructing dense regions of a network, a consequence of both computational complexity and unreliable parameter estimation when sample size is small. As a result, identification of hub genes is of special difficulty for these methods. We present a new algorithm, Empirical Light Mutual Min (ELMM), for large network reconstruction that has properties well suited for recovery of graphs with high-degree nodes. ELMM reconstructs the undirected graph of a regulatory network using empirical Bayes conditional independence testing with a heuristic relaxation of independence constraints in dense areas of the graph. This relaxation allows only one gene of a pair with a putative relation to be aware of the network connection, an approach that is aimed at easing multiple testing problems associated with recovering densely connected structures. Using in silico data, we show that ELMM has better performance than commonly used network inference algorithms including GeneNet, ARACNE, FOCI, GENIE3 and GLASSO. We also apply ELMM to reconstruct a network among 5492 genes expressed in human lung airway epithelium of healthy non-smokers, healthy smokers and individuals with chronic obstructive pulmonary disease assayed using microarrays. The analysis identifies dense sub-networks that are consistent with known regulatory relationships in the lung airway and also suggests novel hub regulatory relationships among a number of genes that play roles in oxidative stress and secretion. Software for running ELMM is made available at http://mezeylab.cb.bscb.cornell.edu/Software.aspx. ramimahdi@yahoo.com or jgm45@cornell.edu Supplementary data are available at Bioinformatics online.
The Convolutional Visual Network for Identification and Reconstruction of NOvA Events
DOE Office of Scientific and Technical Information (OSTI.GOV)
Psihas, Fernanda
In 2016 the NOvA experiment released results for the observation of oscillations in the νμ and νe channels as well as νe cross section measurements using neutrinos from Fermilab's NuMI beam. These and other measurements in progress rely on the accurate identification and reconstruction of the neutrino flavor and energy recorded by our detectors. This presentation describes the first application of convolutional neural network technology for event identification and reconstruction in particle detectors like NOvA. The Convolutional Visual Network (CVN) Algorithm was developed for identification, categorization, and reconstruction of NOvA events. It increased the selection efficiency of the νe appearance signal by 40% and studies show potential impact to the νμ disappearance analysis.
Yoon, Young-Gyu; Dai, Peilun; Wohlwend, Jeremy; Chang, Jae-Byum; Marblestone, Adam H.; Boyden, Edward S.
2017-01-01
We here introduce and study the properties, via computer simulation, of a candidate automated approach to algorithmic reconstruction of dense neural morphology, based on simulated data of the kind that would be obtained via two emerging molecular technologies—expansion microscopy (ExM) and in-situ molecular barcoding. We utilize a convolutional neural network to detect neuronal boundaries from protein-tagged plasma membrane images obtained via ExM, as well as a subsequent supervoxel-merging pipeline guided by optical readout of information-rich, cell-specific nucleic acid barcodes. We attempt to use conservative imaging and labeling parameters, with the goal of establishing a baseline case that points to the potential feasibility of optical circuit reconstruction, leaving open the possibility of higher-performance labeling technologies and algorithms. We find that, even with these conservative assumptions, an all-optical approach to dense neural morphology reconstruction may be possible via the proposed algorithmic framework. Future work should explore both the design-space of chemical labels and barcodes, as well as algorithms, to ultimately enable routine, high-performance optical circuit reconstruction. PMID:29114215
NASA Astrophysics Data System (ADS)
Helama, S.; Makarenko, N. G.; Karimova, L. M.; Kruglun, O. A.; Timonen, M.; Holopainen, J.; Meriläinen, J.; Eronen, M.
2009-03-01
Tree-rings tell of past climates. To do so, tree-ring chronologies comprising numerous climate-sensitive living-tree and subfossil time-series need to be "transferred" into palaeoclimate estimates using transfer functions. The purpose of this study is to compare different types of transfer functions, especially linear and nonlinear algorithms. Accordingly, multiple linear regression (MLR), linear scaling (LSC) and artificial neural networks (ANN, nonlinear algorithm) were compared. Transfer functions were built using a regional tree-ring chronology and instrumental temperature observations from Lapland (northern Finland and Sweden). In addition, conventional MLR was compared with a hybrid model whereby climate was reconstructed separately for short- and long-period timescales prior to combining the bands of timescales into a single hybrid model. The fidelity of the different reconstructions was validated against instrumental climate data. The reconstructions by MLR and ANN showed reliable reconstruction capabilities over the instrumental period (AD 1802-1998). LSC failed to reach reasonable verification statistics and did not qualify as a reliable reconstruction: this was due mainly to exaggeration of the low-frequency climatic variance. Over this instrumental period, the reconstructed low-frequency amplitudes of climate variability were rather similar for MLR and ANN. Notably greater differences between the models were found over the actual reconstruction period (AD 802-1801). A marked temperature decline, as reconstructed by MLR, from the Medieval Warm Period (AD 931-1180) to the Little Ice Age (AD 1601-1850), was evident in all the models. This decline was approx. 0.5°C as reconstructed by MLR. Different ANN based palaeotemperatures showed simultaneous cooling of 0.2 to 0.5°C, depending on the algorithm. The hybrid MLR did not seem to provide further benefit above conventional MLR in our sample. The robustness of the conventional MLR over the calibration, verification and reconstruction periods qualified it as a reasonable transfer function for our forest-limit (i.e., timberline) dataset. ANN appears to be a potential tool for other environments and/or proxies having more complex and noisier climatic relationships.
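The following minimal sketch illustrates the multiple linear regression (MLR) transfer-function idea on synthetic data: temperature is calibrated against two ring-width chronologies over part of an "instrumental" period and verified on the withheld remainder. The data, periods and variable names are placeholders, not the Lapland dataset.

```python
# Sketch of an MLR transfer function: calibrate tree-ring predictors against
# instrumental temperature, then verify on a withheld period. Synthetic data.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1802, 1999)                 # pretend instrumental period
n = years.size

# Two synthetic ring-width chronologies weakly related to temperature.
temperature = 0.3 * np.sin(2 * np.pi * (years - 1802) / 70.0) + 0.1 * rng.normal(size=n)
rings = np.column_stack([
    0.8 * temperature + 0.3 * rng.normal(size=n),
    0.5 * temperature + 0.4 * rng.normal(size=n),
])

calib = years < 1900                           # calibration period
verif = ~calib                                 # verification period

# Least-squares fit of temperature on the chronologies (plus intercept).
X = np.column_stack([np.ones(calib.sum()), rings[calib]])
beta, *_ = np.linalg.lstsq(X, temperature[calib], rcond=None)

# Reconstruct over the verification period and score the reconstruction.
Xv = np.column_stack([np.ones(verif.sum()), rings[verif]])
recon = Xv @ beta
ss_res = np.sum((temperature[verif] - recon) ** 2)
ss_tot = np.sum((temperature[verif] - temperature[verif].mean()) ** 2)
print("verification R^2:", 1.0 - ss_res / ss_tot)
```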
Compressed sensing of ECG signal for wireless system with new fast iterative method.
Tawfic, Israa; Kayhan, Sema
2015-12-01
Recent experiments in wireless body area networks (WBAN) show that compressive sensing (CS) is a promising tool to compress the electrocardiogram (ECG) signal. The performance of CS depends on the algorithms used to reconstruct the original signal exactly or approximately. In this paper, we present two methods that work in the absence and presence of noise: Least Support Orthogonal Matching Pursuit (LS-OMP) and Least Support Denoising-Orthogonal Matching Pursuit (LSD-OMP). The algorithms achieve correct support recovery without requiring sparsity knowledge. We derive improved restricted isometry property (RIP) based conditions over the best known results. The basic procedures consist of observational and analytical study of different ECG signals downloaded from the PhysioBank ATM. Experimental results show that significant performance in terms of reconstruction quality and compression rate can be obtained by these two newly proposed algorithms, which can help the specialist gather the necessary information from the patient in less time when used in Magnetic Resonance Imaging (MRI) applications, or reconstruct the patient data after it is sent through the network. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
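For orientation, the sketch below implements plain Orthogonal Matching Pursuit (OMP) on synthetic data; the LS-OMP and LSD-OMP variants above modify the support-selection and denoising steps, which are not reproduced here.

```python
# Plain Orthogonal Matching Pursuit (OMP) as a baseline for CS reconstruction
# of a sparse signal from y = Phi @ x.
import numpy as np

def omp(Phi, y, n_nonzero):
    """Greedy OMP: recover a sparse x from y = Phi @ x."""
    residual = y.copy()
    support = []
    x_hat = np.zeros(Phi.shape[1])
    for _ in range(n_nonzero):
        # Pick the column most correlated with the current residual.
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Re-fit all selected coefficients by least squares.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 256, 80, 6                        # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
Phi = rng.normal(size=(m, n)) / np.sqrt(m)  # random sensing matrix
y = Phi @ x

x_rec = omp(Phi, y, k)
print("max reconstruction error:", np.max(np.abs(x_rec - x)))
```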
Honey bee-inspired algorithms for SNP haplotype reconstruction problem
NASA Astrophysics Data System (ADS)
PourkamaliAnaraki, Maryam; Sadeghi, Mehdi
2016-03-01
Reconstructing haplotypes from SNP fragments is an important problem in computational biology. There has been a lot of interest in this field because haplotypes have been shown to contain promising data for disease association research. It has been proved that haplotype reconstruction under the Minimum Error Correction model is an NP-hard problem. Therefore, several methods such as clustering techniques, evolutionary algorithms, neural networks and swarm intelligence approaches have been proposed in order to solve this problem in an appropriate time. In this paper, we focus on various evolutionary clustering techniques and try to find an efficient technique for solving the haplotype reconstruction problem. It can be inferred from our experiments that the clustering methods relying on the behaviour of honey bee colonies in nature, specifically the bees algorithm and artificial bee colony methods, are expected to result in more efficient solutions. An application program of the methods is available at the following link. http://www.bioinf.cs.ipm.ir/software/haprs/
SCENERY: a web application for (causal) network reconstruction from cytometry data.
Papoutsoglou, Georgios; Athineou, Giorgos; Lagani, Vincenzo; Xanthopoulos, Iordanis; Schmidt, Angelika; Éliás, Szabolcs; Tegnér, Jesper; Tsamardinos, Ioannis
2017-07-03
Flow and mass cytometry technologies can probe proteins as biological markers in thousands of individual cells simultaneously, providing unprecedented opportunities for reconstructing networks of protein interactions through machine learning algorithms. The network reconstruction (NR) problem has been well-studied by the machine learning community. However, the potentials of available methods remain largely unknown to the cytometry community, mainly due to their intrinsic complexity and the lack of comprehensive, powerful and easy-to-use NR software implementations specific for cytometry data. To bridge this gap, we present Single CEll NEtwork Reconstruction sYstem (SCENERY), a web server featuring several standard and advanced cytometry data analysis methods coupled with NR algorithms in a user-friendly, on-line environment. In SCENERY, users may upload their data and set their own study design. The server offers several data analysis options categorized into three classes of methods: data (pre)processing, statistical analysis and NR. The server also provides interactive visualization and download of results as ready-to-publish images or multimedia reports. Its core is modular and based on the widely-used and robust R platform allowing power users to extend its functionalities by submitting their own NR methods. SCENERY is available at scenery.csd.uoc.gr or http://mensxmachina.org/en/software/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
A deep convolutional neural network using directional wavelets for low-dose X-ray CT reconstruction.
Kang, Eunhee; Min, Junhong; Ye, Jong Chul
2017-10-01
Due to the potential risk of inducing cancer, radiation exposure by X-ray CT devices should be reduced for routine patient scanning. However, in low-dose X-ray CT, severe artifacts typically occur due to photon starvation, beam hardening, and other causes, all of which decrease the reliability of the diagnosis. Thus, a high-quality reconstruction method from low-dose X-ray CT data has become a major research topic in the CT community. Conventional model-based de-noising approaches are, however, computationally very expensive, and image-domain de-noising approaches cannot readily remove CT-specific noise patterns. To tackle these problems, we want to develop a new low-dose X-ray CT algorithm based on a deep-learning approach. We propose an algorithm which uses a deep convolutional neural network (CNN) which is applied to the wavelet transform coefficients of low-dose CT images. More specifically, using a directional wavelet transform to extract the directional component of artifacts and exploit the intra- and inter- band correlations, our deep network can effectively suppress CT-specific noise. In addition, our CNN is designed with a residual learning architecture for faster network training and better performance. Experimental results confirm that the proposed algorithm effectively removes complex noise patterns from CT images derived from a reduced X-ray dose. In addition, we show that the wavelet-domain CNN is efficient when used to remove noise from low-dose CT compared to existing approaches. Our results were rigorously evaluated by several radiologists at the Mayo Clinic and won second place at the 2016 "Low-Dose CT Grand Challenge." To the best of our knowledge, this work is the first deep-learning architecture for low-dose CT reconstruction which has been rigorously evaluated and proven to be effective. In addition, the proposed algorithm, in contrast to existing model-based iterative reconstruction (MBIR) methods, has considerable potential to benefit from large data sets. Therefore, we believe that the proposed algorithm opens a new direction in the area of low-dose CT research. © 2017 American Association of Physicists in Medicine.
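As a purely architectural sketch of the wavelet-domain idea above (a small residual CNN applied to wavelet detail sub-bands), the code below assumes PyTorch and PyWavelets; the network is untrained, is not the authors' architecture, and the layer counts and wavelet choice are illustrative.

```python
# Architectural sketch: apply a small residual CNN to the detail sub-bands of
# a CT slice obtained with a 2-D wavelet transform, then reassemble the image.
# Untrained; for illustration of the data flow only.
import numpy as np
import pywt
import torch
import torch.nn as nn

class SubbandResidualCNN(nn.Module):
    def __init__(self, channels=3, width=32, depth=4):
        super().__init__()
        layers = [nn.Conv2d(channels, width, 3, padding=1), nn.ReLU()]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU()]
        layers += [nn.Conv2d(width, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return x + self.body(x)          # residual learning on the sub-bands

# One-level 2-D wavelet transform of a synthetic "low-dose" slice.
slice_ld = np.random.rand(256, 256).astype(np.float32)
cA, (cH, cV, cD) = pywt.dwt2(slice_ld, "db3")

details = torch.from_numpy(np.stack([cH, cV, cD]).astype(np.float32))[None]  # (1, 3, H, W)
model = SubbandResidualCNN()
with torch.no_grad():
    processed = model(details)[0].numpy()

# Reassemble the image from the (processed) sub-bands.
recon = pywt.idwt2((cA, tuple(processed)), "db3")
print(recon.shape)
```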
Deep Convolutional Framelet Denoising for Low-Dose CT via Wavelet Residual Network.
Kang, Eunhee; Chang, Won; Yoo, Jaejun; Ye, Jong Chul
2018-06-01
Model-based iterative reconstruction algorithms for low-dose X-ray computed tomography (CT) are computationally expensive. To address this problem, we recently proposed a deep convolutional neural network (CNN) for low-dose X-ray CT and won the second place in 2016 AAPM Low-Dose CT Grand Challenge. However, some of the textures were not fully recovered. To address this problem, here we propose a novel framelet-based denoising algorithm using wavelet residual network which synergistically combines the expressive power of deep learning and the performance guarantee from the framelet-based denoising algorithms. The new algorithms were inspired by the recent interpretation of the deep CNN as a cascaded convolution framelet signal representation. Extensive experimental results confirm that the proposed networks have significantly improved performance and preserve the detail texture of the original images.
Carré, Clément; Mas, André; Krouk, Gabriel
2017-01-01
Inferring transcriptional gene regulatory networks from transcriptomic datasets is a key challenge of systems biology, with potential impacts ranging from medicine to agronomy. Several techniques are presently used to experimentally assay transcription factor-to-target relationships, defining important information about real gene regulatory network connections. These techniques include classical ChIP-seq, yeast one-hybrid, or more recently, DAP-seq or target technologies. These techniques are usually used to validate algorithm predictions. Here, we developed a reverse engineering approach based on mathematical and computer simulation to evaluate the impact that this prior knowledge on gene regulatory networks may have on training machine learning algorithms. First, we developed a gene regulatory network-simulating engine called FRANK (Fast Randomizing Algorithm for Network Knowledge) that is able to simulate large gene regulatory networks (containing 10^4 genes) with characteristics of gene regulatory networks observed in vivo. FRANK also generates stable or oscillatory gene expression directly produced by the simulated gene regulatory networks. The development of FRANK leads to important general conclusions concerning the design of large and stable gene regulatory networks harboring scale free properties (built ex nihilo). In combination with a supervised (accepting prior knowledge) support vector machine algorithm, we (i) address biologically oriented questions concerning our capacity to accurately reconstruct gene regulatory networks and in particular demonstrate that prior-knowledge structure is crucial for accurate learning, and (ii) draw conclusions to inform experimental design to perform learning able to solve gene regulatory networks in the future. By demonstrating that our predictions concerning the influence of the prior-knowledge structure on support vector machine learning capacity hold true on real data (Escherichia coli K14 network reconstruction using network and transcriptomic data), we show that the formalism used to build FRANK can to some extent be a reasonable model for gene regulatory networks in real cells.
Prediction of missing links and reconstruction of complex networks
NASA Astrophysics Data System (ADS)
Zhang, Cheng-Jun; Zeng, An
2016-04-01
Predicting missing links in complex networks is of great significance from both theoretical and practical points of view: it not only helps us understand the evolution of real systems but also relates to many applications in social, biological and online systems. In this paper, we study the features of different simple link prediction methods, revealing that they may lead to the distortion of networks' structural and dynamical properties. Moreover, we find that high prediction accuracy does not necessarily correspond to high performance in preserving the network properties when link prediction methods are used to reconstruct networks. Our work highlights the importance of considering the feedback effect of the link prediction methods on network properties when designing the algorithms.
High-speed parallel implementation of a modified PBR algorithm on DSP-based EH topology
NASA Astrophysics Data System (ADS)
Rajan, K.; Patnaik, L. M.; Ramakrishna, J.
1997-08-01
Algebraic Reconstruction Technique (ART) is an age-old method used for solving the problem of three-dimensional (3-D) reconstruction from projections in electron microscopy and radiology. In medical applications, direct 3-D reconstruction is at the forefront of investigation. The simultaneous iterative reconstruction technique (SIRT) is an ART-type algorithm with the potential of generating in a few iterations tomographic images of a quality comparable to that of convolution backprojection (CBP) methods. Pixel-based reconstruction (PBR) is similar to SIRT reconstruction, and it has been shown that PBR algorithms give better quality pictures compared to those produced by SIRT algorithms. In this work, we propose a few modifications to the PBR algorithms. The modified algorithms are shown to give better quality pictures compared to PBR algorithms. The PBR algorithm and the modified PBR algorithms are highly compute intensive. Not many attempts have been made to reconstruct objects in the true 3-D sense because of the high computational overhead. In this study, we have developed parallel two-dimensional (2-D) and 3-D reconstruction algorithms based on modified PBR. We attempt to solve the two problems encountered by the PBR and modified PBR algorithms, i.e., the long computational time and the large memory requirements, by parallelizing the algorithm on a multiprocessor system. We investigate the possible task and data partitioning schemes by exploiting the potential parallelism in the PBR algorithm subject to minimizing the memory requirement. We have implemented an extended hypercube (EH) architecture for the high-speed execution of the 3-D reconstruction algorithm using the commercially available fast floating point digital signal processor (DSP) chips as the processing elements (PEs) and dual-port random access memories (DPR) as channels between the PEs. We discuss and compare the performances of the PBR algorithm on an IBM 6000 RISC workstation, on a Silicon Graphics Indigo 2 workstation, and on an EH system. The results show that an EH(3,1) using DSP chips as PEs executes the modified PBR algorithm about 100 times faster than an IBM 6000 RISC workstation. We have executed the algorithms on a 4-node IBM SP2 parallel computer. The results show that the execution time of the algorithm on an EH(3,1) is better than that of a 4-node IBM SP2 system. The speed-up of an EH(3,1) system with eight PEs and one network controller is approximately 7.85.
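A generic SIRT-style iteration for A·x = p is sketched below on a tiny dense system; the modified PBR update and the DSP/extended-hypercube parallelization described above are not reproduced, and the system matrix is a random nonnegative stand-in.

```python
# Generic SIRT-style iteration for tomographic reconstruction A @ x = p.
import numpy as np

def sirt(A, p, n_iter=200):
    # Row/column sums used as normalizations (small epsilon for safety).
    row_sum = A.sum(axis=1) + 1e-12
    col_sum = A.sum(axis=0) + 1e-12
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        residual = (p - A @ x) / row_sum       # normalized projection error
        x = x + (A.T @ residual) / col_sum     # back-project and update all pixels
    return x

rng = np.random.default_rng(0)
x_true = rng.random(64)                  # "image" of 64 pixels
A = rng.random((120, 64))                # 120 rays with nonnegative weights
p = A @ x_true                           # noiseless projections

x_rec = sirt(A, p)
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```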
Reconstruction of in-plane strain maps using hybrid dense sensor network composed of sensing skin
NASA Astrophysics Data System (ADS)
Downey, Austin; Laflamme, Simon; Ubertini, Filippo
2016-12-01
The authors have recently developed a soft-elastomeric capacitive (SEC)-based thin film sensor for monitoring strain on mesosurfaces. Arranged in a network configuration, the sensing system is analogous to a biological skin, where local strain can be monitored over a global area. Under plane stress conditions, the sensor output contains the additive measurement of the two principal strain components over the monitored surface. In applications where the evaluation of strain maps is useful, in structural health monitoring for instance, such signal must be decomposed into linear strain components along orthogonal directions. Previous work has led to an algorithm that enabled such decomposition by leveraging a dense sensor network configuration with the addition of assumed boundary conditions. Here, we significantly improve the algorithm’s accuracy by leveraging mature off-the-shelf solutions to create a hybrid dense sensor network (HDSN) to improve on the boundary condition assumptions. The system’s boundary conditions are enforced using unidirectional RSGs and assumed virtual sensors. Results from an extensive experimental investigation demonstrate the good performance of the proposed algorithm and its robustness with respect to sensors’ layout. Overall, the proposed algorithm is seen to effectively leverage the advantages of a hybrid dense network for application of the thin film sensor to reconstruct surface strain fields over large surfaces.
Two algorithms for neural-network design and training with application to channel equalization.
Sweatman, C Z; Mulgrew, B; Gibson, G J
1998-01-01
We describe two algorithms for designing and training neural-network classifiers. The first, the linear programming slab algorithm (LPSA), is motivated by the problem of reconstructing digital signals corrupted by passage through a dispersive channel and by additive noise. It constructs a multilayer perceptron (MLP) to separate two disjoint sets by using linear programming methods to identify network parameters. The second, the perceptron learning slab algorithm (PLSA), avoids the computational costs of linear programming by using an error-correction approach to identify parameters. Both algorithms operate in highly constrained parameter spaces and are able to exploit symmetry in the classification problem. Using these algorithms, we develop a number of procedures for the adaptive equalization of a complex linear 4-quadrature amplitude modulation (QAM) channel, and compare their performance in a simulation study. Results are given for both stationary and time-varying channels, the latter based on the COST 207 GSM propagation model.
Wu, Kai; Liu, Jing; Wang, Shuai
2016-01-01
Evolutionary games (EG) model a common type of interactions in various complex, networked, natural and social systems. Given such a system with only profit sequences being available, reconstructing the interacting structure of EG networks is fundamental to understand and control its collective dynamics. Existing approaches used to handle this problem, such as the lasso, a convex optimization method, need a user-defined constant to control the tradeoff between the natural sparsity of networks and measurement error (the difference between observed data and simulated data). However, a shortcoming of these approaches is that it is not easy to determine these key parameters which can maximize the performance. In contrast to these approaches, we first model the EG network reconstruction problem as a multiobjective optimization problem (MOP), and then develop a framework which involves multiobjective evolutionary algorithm (MOEA), followed by solution selection based on knee regions, termed as MOEANet, to solve this MOP. We also design an effective initialization operator based on the lasso for MOEA. We apply the proposed method to reconstruct various types of synthetic and real-world networks, and the results show that our approach is effective to avoid the above parameter selecting problem and can reconstruct EG networks with high accuracy. PMID:27886244
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S; Sen, Satyabrata; Berry, M. L.
The Domestic Nuclear Detection Office's (DNDO) Intelligence Radiation Sensors Systems (IRSS) program supported the development of networks of commercial-off-the-shelf (COTS) radiation counters for detecting, localizing, and identifying low-level radiation sources. Under this program, a series of indoor and outdoor tests were conducted with multiple source strengths and types, different background profiles, and various types of source and detector movements. Following the tests, network algorithms were replayed in various re-constructed scenarios using sub-networks. These measurements and algorithm traces together provide a rich collection of highly valuable datasets for testing current and next generation radiation network algorithms, including the ones (to be) developed by broader R&D communities such as distributed detection, information fusion, and sensor networks. From this multi-terabyte IRSS database, we distilled out and packaged the first batch of canonical datasets for public release. They include measurements from ten indoor and two outdoor tests which represent increasingly challenging baseline scenarios for robustly testing radiation network algorithms.
Zhang, Zhilin; Jung, Tzyy-Ping; Makeig, Scott; Rao, Bhaskar D
2013-02-01
Fetal ECG (FECG) telemonitoring is an important branch in telemedicine. The design of a telemonitoring system via a wireless body area network with low energy consumption for ambulatory use is highly desirable. As an emerging technique, compressed sensing (CS) shows great promise in compressing/reconstructing data with low energy consumption. However, due to some specific characteristics of raw FECG recordings such as nonsparsity and strong noise contamination, current CS algorithms generally fail in this application. This paper proposes to use the block sparse Bayesian learning framework to compress/reconstruct nonsparse raw FECG recordings. Experimental results show that the framework can reconstruct the raw recordings with high quality. Especially, the reconstruction does not destroy the interdependence relation among the multichannel recordings. This ensures that the independent component analysis decomposition of the reconstructed recordings has high fidelity. Furthermore, the framework allows the use of a sparse binary sensing matrix with much fewer nonzero entries to compress recordings. Particularly, each column of the matrix can contain only two nonzero entries. This shows that the framework, compared to other algorithms such as current CS algorithms and wavelet algorithms, can greatly reduce code execution in CPU in the data compression stage.
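The compression stage described above can be illustrated in a few lines of numpy: build a sparse binary sensing matrix with exactly two nonzero entries per column and compress a multichannel recording with one matrix multiply. The block sparse Bayesian learning reconstruction itself is not shown; the matrix sizes and signals are synthetic placeholders.

```python
# Sketch of the compression stage: a sparse binary sensing matrix with exactly
# two nonzero entries per column compresses each channel by a matrix multiply.
import numpy as np

def sparse_binary_sensing_matrix(m, n, nonzeros_per_column=2, seed=0):
    rng = np.random.default_rng(seed)
    Phi = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=nonzeros_per_column, replace=False)
        Phi[rows, j] = 1.0
    return Phi

n_samples, n_compressed, n_channels = 512, 128, 4
Phi = sparse_binary_sensing_matrix(n_compressed, n_samples)

# Synthetic multichannel "recording" (rows: time samples, columns: channels).
t = np.linspace(0, 1, n_samples)
recording = np.column_stack([np.sin(2 * np.pi * (3 + c) * t) for c in range(n_channels)])

compressed = Phi @ recording     # (128, 4): only additions are needed per sample
print(compressed.shape, "compression ratio:", n_samples / n_compressed)
```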
Passing messages between biological networks to refine predicted interactions.
Glass, Kimberly; Huttenhower, Curtis; Quackenbush, John; Yuan, Guo-Cheng
2013-01-01
Regulatory network reconstruction is a fundamental problem in computational biology. There are significant limitations to such reconstruction using individual datasets, and increasingly people attempt to construct networks using multiple, independent datasets obtained from complementary sources, but methods for this integration are lacking. We developed PANDA (Passing Attributes between Networks for Data Assimilation), a message-passing model using multiple sources of information to predict regulatory relationships, and used it to integrate protein-protein interaction, gene expression, and sequence motif data to reconstruct genome-wide, condition-specific regulatory networks in yeast as a model. The resulting networks were not only more accurate than those produced using individual data sets and other existing methods, but they also captured information regarding specific biological mechanisms and pathways that were missed using other methodologies. PANDA is scalable to higher eukaryotes, applicable to specific tissue or cell type data and conceptually generalizable to include a variety of regulatory, interaction, expression, and other genome-scale data. An implementation of the PANDA algorithm is available at www.sourceforge.net/projects/panda-net.
View-interpolation of sparsely sampled sinogram using convolutional neural network
NASA Astrophysics Data System (ADS)
Lee, Hoyeon; Lee, Jongha; Cho, Suengryong
2017-02-01
Sparse-view sampling and its associated iterative image reconstruction in computed tomography have been actively investigated. The sparse-view CT technique is a viable option for low-dose CT, particularly in cone-beam CT (CBCT) applications, when combined with advanced iterative image reconstruction, albeit with varying degrees of image artifacts. One of the artifacts that may occur in sparse-view CT is the streak artifact in the reconstructed images. Another approach that has been investigated for sparse-view CT imaging uses interpolation methods to fill in the missing view data and then reconstructs the image with an analytic reconstruction algorithm. In this study, we developed an interpolation method using a convolutional neural network (CNN), one of the widely used deep-learning methods, to find missing projection data, and compared its performance with the other interpolation techniques.
Mapping chemicals in air using an environmental CAT scanning system: evaluation of algorithms
NASA Astrophysics Data System (ADS)
Samanta, A.; Todd, L. A.
A new technique is being developed which creates near real-time maps of chemical concentrations in air for environmental and occupational applications. This technique, which we call Environmental CAT Scanning, combines the real-time measuring technique of open-path Fourier transform infrared spectroscopy with the mapping capabilities of computed tomography to produce two-dimensional concentration maps. With this system, a network of open-path measurements is obtained over an area; the measurements are then processed with a tomographic algorithm to reconstruct the concentrations. This research focused on the process of evaluating and selecting appropriate reconstruction algorithms, for use in the field, using test concentration data from both computer simulation and laboratory chamber studies. Four algorithms were tested using three types of data: (1) experimental open-path data from studies that used a prototype open-path Fourier transform/computed tomography system in an exposure chamber; (2) synthetic open-path data generated from maps created by kriging point samples taken in the chamber studies (in 1); and (3) synthetic open-path data generated using a chemical dispersion model to create time series maps. The iterative algorithms used to reconstruct the concentration data were: Algebraic Reconstruction Technique without Weights (ART1), Algebraic Reconstruction Technique with Weights (ARTW), Maximum Likelihood with Expectation Maximization (MLEM) and Multiplicative Algebraic Reconstruction Technique (MART). Maps were evaluated quantitatively and qualitatively. In general, MART and MLEM performed best, followed by ARTW and ART1. However, algorithm performance varied under different contaminant scenarios. This study showed the importance of using a variety of maps, particularly those generated using dispersion models. The time series maps provided a more rigorous test of the algorithms and allowed distinctions to be made among the algorithms. A comprehensive evaluation of algorithms for the environmental application of tomography requires the use of a battery of test concentration data before field implementation, which models reality and tests the limits of the algorithms.
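Of the algorithms compared above, MLEM has a particularly compact form; the sketch below applies a generic MLEM iteration to nonnegative path-integral data, with a random nonnegative system matrix standing in for the open-path beam geometry.

```python
# Generic MLEM iteration for nonnegative path-integral data y = A @ x.
import numpy as np

def mlem(A, y, n_iter=100):
    x = np.ones(A.shape[1])                           # strictly positive start
    sensitivity = A.sum(axis=0) + 1e-12
    for _ in range(n_iter):
        forward = A @ x + 1e-12
        x = x * (A.T @ (y / forward)) / sensitivity   # multiplicative update
    return x

rng = np.random.default_rng(2)
x_true = rng.random(36)                  # 6x6 grid of concentrations, flattened
A = rng.random((60, 36))                 # 60 beam paths crossing the grid
y = A @ x_true                           # noiseless path-integral measurements

x_rec = mlem(A, y)
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```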
Super-resolution reconstruction of MR image with a novel residual learning network algorithm
NASA Astrophysics Data System (ADS)
Shi, Jun; Liu, Qingping; Wang, Chaofeng; Zhang, Qi; Ying, Shihui; Xu, Haoyu
2018-04-01
Spatial resolution is one of the key parameters of magnetic resonance imaging (MRI). The image super-resolution (SR) technique offers an alternative approach to improve the spatial resolution of MRI due to its simplicity. Convolutional neural networks (CNN)-based SR algorithms have achieved state-of-the-art performance, in which the global residual learning (GRL) strategy is now commonly used due to its effectiveness for learning image details for SR. However, the partial loss of image details usually happens in a very deep network due to the degradation problem. In this work, we propose a novel residual learning-based SR algorithm for MRI, which combines both multi-scale GRL and shallow network block-based local residual learning (LRL). The proposed LRL module works effectively in capturing high-frequency details by learning local residuals. One simulated MRI dataset and two real MRI datasets have been used to evaluate our algorithm. The experimental results show that the proposed SR algorithm achieves superior performance to all of the other compared CNN-based SR algorithms in this work.
Revealing how network structure affects accuracy of link prediction
NASA Astrophysics Data System (ADS)
Yang, Jin-Xuan; Zhang, Xiao-Dong
2017-08-01
Link prediction plays an important role in network reconstruction and network evolution. The network structure affects the accuracy of link prediction, which is an interesting problem. In this paper we use common neighbors and the Gini coefficient to reveal the relation between them, which can provide a good reference for the choice of a suitable link prediction algorithm according to the network structure. Moreover, the statistical analysis reveals correlation between the common neighbors index, Gini coefficient index and other indices to describe the network structure, such as Laplacian eigenvalues, clustering coefficient, degree heterogeneity, and assortativity of network. Furthermore, a new method to predict missing links is proposed. The experimental results show that the proposed algorithm yields better prediction accuracy and robustness to the network structure than existing currently used methods for a variety of real-world networks.
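A minimal sketch of the two ingredients discussed above, using networkx on a synthetic scale-free graph: common-neighbor scores for candidate links and the Gini coefficient of the degree sequence.

```python
# Common-neighbor link prediction scores and the Gini coefficient of the
# degree sequence, computed on a synthetic Barabasi-Albert graph.
import networkx as nx
import numpy as np

def gini(values):
    v = np.sort(np.asarray(values, dtype=float))
    n = v.size
    cum = np.cumsum(v)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

G = nx.barabasi_albert_graph(200, 3, seed=0)

# Common-neighbor score for every currently unconnected pair.
scores = [(u, v, len(list(nx.common_neighbors(G, u, v))))
          for u, v in nx.non_edges(G)]
scores.sort(key=lambda t: t[2], reverse=True)

print("top predicted links:", scores[:5])
print("degree Gini coefficient:", round(gini([d for _, d in G.degree()]), 3))
```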
Efficient content-based low-altitude images correlated network and strips reconstruction
NASA Astrophysics Data System (ADS)
He, Haiqing; You, Qi; Chen, Xiaoyong
2017-01-01
The manual intervention method is widely used to reconstruct strips for further aerial triangulation in low-altitude photogrammetry. Clearly, such a method is not suited to fully automatic photogrammetric data processing. In this paper, we explore a content-based approach for strip reconstruction that requires no manual intervention or external information. Feature descriptors of local spatial patterns are extracted by SIFT to construct a vocabulary tree, in which these features are encoded with the TF-IDF numerical statistical algorithm to generate a new representation for each low-altitude image. An image-correlation network is then reconstructed by similarity measures, image matching and geometric graph theory. Finally, strips are reconstructed automatically by tracing straight lines and growing adjacent images gradually. Experimental results show that the proposed approach is highly effective in automatically rearranging strips of low-altitude images and can provide rough relative orientation for further aerial triangulation.
A deep learning-based reconstruction of cosmic ray-induced air showers
NASA Astrophysics Data System (ADS)
Erdmann, M.; Glombitza, J.; Walz, D.
2018-01-01
We describe a method of reconstructing air showers induced by cosmic rays using deep learning techniques. We simulate an observatory consisting of ground-based particle detectors with fixed locations on a regular grid. The detector's responses to traversing shower particles are signal amplitudes as a function of time, which provide information on transverse and longitudinal shower properties. In order to take advantage of convolutional network techniques specialized in local pattern recognition, we convert all information to the image-like grid of the detectors. In this way, multiple features, such as arrival times of the first particles and optimized characterizations of time traces, are processed by the network. The reconstruction quality of the cosmic ray arrival direction turns out to be competitive with an analytic reconstruction algorithm. The reconstructed shower direction, energy and shower depth show the expected improvement in resolution for higher cosmic ray energy.
Inferring Nonlinear Gene Regulatory Networks from Gene Expression Data Based on Distance Correlation
Guo, Xiaobo; Zhang, Ye; Hu, Wenhao; Tan, Haizhu; Wang, Xueqin
2014-01-01
Nonlinear dependence is general in regulation mechanism of gene regulatory networks (GRNs). It is vital to properly measure or test nonlinear dependence from real data for reconstructing GRNs and understanding the complex regulatory mechanisms within the cellular system. A recently developed measurement called the distance correlation (DC) has been shown powerful and computationally effective in nonlinear dependence for many situations. In this work, we incorporate the DC into inferring GRNs from the gene expression data without any underling distribution assumptions. We propose three DC-based GRNs inference algorithms: CLR-DC, MRNET-DC and REL-DC, and then compare them with the mutual information (MI)-based algorithms by analyzing two simulated data: benchmark GRNs from the DREAM challenge and GRNs generated by SynTReN network generator, and an experimentally determined SOS DNA repair network in Escherichia coli. According to both the receiver operator characteristic (ROC) curve and the precision-recall (PR) curve, our proposed algorithms significantly outperform the MI-based algorithms in GRNs inference. PMID:24551058
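The distance correlation itself is straightforward to compute; the sketch below is a plain numpy implementation applied to a synthetic nonlinearly dependent gene pair, not the CLR-DC/MRNET-DC/REL-DC pipelines.

```python
# Plain numpy implementation of the distance correlation (DC) between two
# expression profiles, contrasted with the Pearson correlation.
import numpy as np

def distance_correlation(x, y):
    x = np.asarray(x, dtype=float)[:, None]
    y = np.asarray(y, dtype=float)[:, None]
    a = np.abs(x - x.T)                      # pairwise distance matrices
    b = np.abs(y - y.T)
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()   # double centering
    B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
    dcov2 = (A * B).mean()
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return np.sqrt(dcov2 / denom) if denom > 0 else 0.0

rng = np.random.default_rng(0)
g1 = rng.normal(size=200)
g2 = np.cos(g1) + 0.1 * rng.normal(size=200)   # nonlinear dependence on g1
print("DC(g1, g2):     ", round(distance_correlation(g1, g2), 3))   # clearly > 0
print("Pearson(g1, g2):", round(np.corrcoef(g1, g2)[0, 1], 3))      # near zero
```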
Reconstruction of stochastic temporal networks through diffusive arrival times
NASA Astrophysics Data System (ADS)
Li, Xun; Li, Xiang
2017-06-01
Temporal networks have opened a new dimension in defining and quantification of complex interacting systems. Our ability to identify and reproduce time-resolved interaction patterns is, however, limited by the restricted access to empirical individual-level data. Here we propose an inverse modelling method based on first-arrival observations of the diffusion process taking place on temporal networks. We describe an efficient coordinate-ascent implementation for inferring stochastic temporal networks that builds in particular but not exclusively on the null model assumption of mutually independent interaction sequences at the dyadic level. The results of benchmark tests applied on both synthesized and empirical network data sets confirm the validity of our algorithm, showing the feasibility of statistically accurate inference of temporal networks only from moderate-sized samples of diffusion cascades. Our approach provides an effective and flexible scheme for the temporally augmented inverse problems of network reconstruction and has potential in a broad variety of applications.
Joint reconstruction of multiview compressed images.
Thirumalai, Vijayaraghavan; Frossard, Pascal
2013-05-01
Distributed representation of correlated multiview images is an important problem that arises in vision sensor networks. This paper concentrates on the joint reconstruction problem where the distributively compressed images are decoded together in order to take benefit from the image correlation. We consider a scenario where the images captured at different viewpoints are encoded independently using common coding solutions (e.g., JPEG) with a balanced rate distribution among different cameras. A central decoder first estimates the inter-view image correlation from the independently compressed data. The joint reconstruction is then cast as a constrained convex optimization problem that reconstructs total-variation (TV) smooth images, which comply with the estimated correlation model. At the same time, we add constraints that force the reconstructed images to be as close as possible to their compressed versions. We show through experiments that the proposed joint reconstruction scheme outperforms independent reconstruction in terms of image quality, for a given target bit rate. In addition, the decoding performance of our algorithm compares advantageously to state-of-the-art distributed coding schemes based on motion learning and on the DISCOVER algorithm.
NASA Astrophysics Data System (ADS)
Liu, Sha; Liu, Shi; Tong, Guowei
2017-11-01
In industrial areas, temperature distribution information provides powerful data support for improving system efficiency, reducing pollutant emission, ensuring safe operation, etc. As a noninvasive measurement technology, acoustic tomography (AT) has been widely used to measure temperature distributions, where the efficiency of the reconstruction algorithm is crucial for the reliability of the measurement results. Different from traditional reconstruction techniques, in this paper a two-phase reconstruction method is proposed to improve the reconstruction accuracy (RA). In the first phase, the measurement domain is discretized by a coarse square grid to reduce the number of unknown variables and mitigate the ill-posed nature of the AT inverse problem. By taking into consideration the inaccuracy of the measured time-of-flight data, a new cost function is constructed to improve the robustness of the estimation, and a grey wolf optimizer is used to solve the proposed cost function to obtain the temperature distribution on the coarse grid. In the second phase, an Adaboost.RT based BP neural network algorithm is developed for predicting the temperature distribution on the refined grid in accordance with the temperature distribution data estimated in the first phase. Numerical simulations and experimental measurement results validate the superiority of the proposed reconstruction algorithm in improving the robustness and RA.
Model-Free Reconstruction of Excitatory Neuronal Connectivity from Calcium Imaging Signals
Stetter, Olav; Battaglia, Demian; Soriano, Jordi; Geisel, Theo
2012-01-01
A systematic assessment of global neural network connectivity through direct electrophysiological assays has remained technically infeasible, even in simpler systems like dissociated neuronal cultures. We introduce an improved algorithmic approach based on Transfer Entropy to reconstruct structural connectivity from network activity monitored through calcium imaging. We focus in this study on the inference of excitatory synaptic links. Based on information theory, our method requires no prior assumptions on the statistics of neuronal firing and neuronal connections. The performance of our algorithm is benchmarked on surrogate time series of calcium fluorescence generated by the simulated dynamics of a network with known ground-truth topology. We find that the functional network topology revealed by Transfer Entropy depends qualitatively on the time-dependent dynamic state of the network (bursting or non-bursting). Thus by conditioning with respect to the global mean activity, we improve the performance of our method. This allows us to focus the analysis to specific dynamical regimes of the network in which the inferred functional connectivity is shaped by monosynaptic excitatory connections, rather than by collective synchrony. Our method can discriminate between actual causal influences between neurons and spurious non-causal correlations due to light scattering artifacts, which inherently affect the quality of fluorescence imaging. Compared to other reconstruction strategies such as cross-correlation or Granger Causality methods, our method based on improved Transfer Entropy is remarkably more accurate. In particular, it provides a good estimation of the excitatory network clustering coefficient, allowing for discrimination between weakly and strongly clustered topologies. Finally, we demonstrate the applicability of our method to analyses of real recordings of in vitro disinhibited cortical cultures where we suggest that excitatory connections are characterized by an elevated level of clustering compared to a random graph (although not extreme) and can be markedly non-local. PMID:22927808
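As a simplified illustration, the sketch below estimates a histogram-based transfer entropy TE(X→Y) for binarized activity traces; the generalized, state-conditioned Transfer Entropy used in the study (conditioning on the global bursting level, fluorescence modelling, same-bin contributions) is not reproduced.

```python
# Histogram-based transfer entropy TE(X -> Y) = I(y_{t+1}; x_t | y_t) for
# binary time series; a strongly simplified stand-in for generalized TE.
import numpy as np

def transfer_entropy(x, y):
    x, y = np.asarray(x, int), np.asarray(y, int)
    yt1, yt, xt = y[1:], y[:-1], x[:-1]
    te = 0.0
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                p_abc = np.mean((yt1 == a) & (yt == b) & (xt == c))
                p_bc = np.mean((yt == b) & (xt == c))
                p_ab = np.mean((yt1 == a) & (yt == b))
                p_b = np.mean(yt == b)
                if min(p_abc, p_bc, p_ab, p_b) > 0:
                    te += p_abc * np.log2(p_abc * p_b / (p_bc * p_ab))
    return te

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 5000)                       # driver neuron (binarized)
y = np.empty_like(x)
y[0] = 0
flips = rng.random(x.size - 1) < 0.05
y[1:] = np.where(flips, 1 - x[:-1], x[:-1])        # y copies x's past with 5% noise

print("TE(x -> y):", round(transfer_entropy(x, y), 3))   # clearly positive
print("TE(y -> x):", round(transfer_entropy(y, x), 3))   # near zero
```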
Zheng, Guangyong; Xu, Yaochen; Zhang, Xiujun; Liu, Zhi-Ping; Wang, Zhuo; Chen, Luonan; Zhu, Xin-Guang
2016-12-23
A gene regulatory network (GRN) represents interactions of genes inside a cell or tissue, in which vertices and edges stand for genes and their regulatory interactions respectively. Reconstruction of gene regulatory networks, in particular genome-scale networks, is essential for comparative exploration of different species and mechanistic investigation of biological processes. Currently, most network inference methods are computationally intensive; they are usually effective for small-scale tasks (e.g., networks with a few hundred genes), but have difficulty constructing GRNs at genome scale. Here, we present a software package for gene regulatory network reconstruction at a genomic level, in which gene interaction is measured by the conditional mutual information measurement using a parallel computing framework (so the package is named CMIP). The package is a greatly improved implementation of our previous PCA-CMI algorithm. In CMIP, we provide not only an automatic threshold determination method but also an effective parallel computing framework for network inference. Performance tests on benchmark datasets show that the accuracy of CMIP is comparable to most current network inference methods. Moreover, running tests on synthetic datasets demonstrate that CMIP can handle large datasets, especially genome-wide datasets, within an acceptable time period. In addition, successful application on a real genomic dataset confirms the practical applicability of the package. This new software package provides a powerful tool for genomic network reconstruction to the biological community. The software can be accessed at http://www.picb.ac.cn/CMIP/ .
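The conditional mutual information at the core of such methods has a closed form under a Gaussian approximation; the standalone sketch below computes CMI(X;Y|Z) from covariance determinants on synthetic data and is not the CMIP code.

```python
# Gaussian-approximation conditional mutual information:
# CMI(X;Y|Z) = 0.5 * ln( |C_xz| * |C_yz| / (|C_z| * |C_xyz|) )
import numpy as np

def gaussian_cmi(x, y, z):
    x, y = np.asarray(x), np.asarray(y)
    z = np.atleast_2d(z)
    if z.shape[0] != x.size:                 # accept z as (n,) or (n, k)
        z = z.T
    def det_cov(*cols):
        return np.linalg.det(np.cov(np.column_stack(cols), rowvar=False))
    c_xz = det_cov(x, z)
    c_yz = det_cov(y, z)
    c_z = np.linalg.det(np.atleast_2d(np.cov(z, rowvar=False)))
    c_xyz = det_cov(x, y, z)
    return 0.5 * np.log(c_xz * c_yz / (c_z * c_xyz))

rng = np.random.default_rng(0)
z = rng.normal(size=2000)                        # a common regulator
x = 0.9 * z + 0.2 * rng.normal(size=2000)        # two targets driven only by z
y = 0.9 * z + 0.2 * rng.normal(size=2000)
print("CMI(x; y | z):       ", round(gaussian_cmi(x, y, z), 4))  # near zero: no direct edge
# Conditioning on an unrelated variable does not remove the x-y dependence.
print("CMI(x; y | unrelated):", round(gaussian_cmi(x, y, rng.normal(size=2000)), 4))
```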
Deng, Wenping; Zhang, Kui; Liu, Sanzhen; Zhao, Patrick; Xu, Shizhong; Wei, Hairong
2018-04-30
Joint reconstruction of multiple gene regulatory networks (GRNs) using gene expression data from multiple tissues/conditions is very important for understanding common and tissue/condition-specific regulation. However, there are currently no computational models and methods available for directly constructing such multiple GRNs that not only share some common hub genes but also possess tissue/condition-specific regulatory edges. In this paper, we proposed a new graphical Gaussian model for joint reconstruction of multiple gene regulatory networks (JRmGRN), which highlights hub genes, using gene expression data from several tissues/conditions. Under the framework of the Gaussian graphical model, the JRmGRN method constructs the GRNs by maximizing a penalized log-likelihood function. We formulated it as a convex optimization problem, and then solved it with an alternating direction method of multipliers (ADMM) algorithm. The performance of JRmGRN was first evaluated with synthetic data and the results showed that JRmGRN outperformed several other methods for reconstruction of GRNs. We also applied our method to real Arabidopsis thaliana RNA-seq data from two light regime conditions in comparison with other methods, and both common hub genes and some condition-specific hub genes were identified with higher accuracy and precision. JRmGRN is available as an R program from: https://github.com/wenpingd. hairong@mtu.edu. Proof of theorem, derivation of algorithm and supplementary data are available at Bioinformatics online.
Lee, Wei-Po; Hsiao, Yu-Ting; Hwang, Wei-Che
2014-01-16
To improve the tedious task of reconstructing gene networks through testing experimentally the possible interactions between genes, it becomes a trend to adopt the automated reverse engineering procedure instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks by the evolutionary algorithm, it is necessary to address two important issues: premature convergence and high computational cost. To tackle the former problem and to enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel model evolutionary algorithms. To overcome the latter and to speed up the computation, it is advocated to adopt the mechanism of cloud computing as a promising solution: most popular is the method of MapReduce programming model, a fault-tolerant framework to implement parallel algorithms for inferring large gene networks. This work presents a practical framework to infer large gene networks, by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use a well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results have been analyzed. They show that our parallel approach can be successfully used to infer networks with desired behaviors and the computation time can be largely reduced. Parallel population-based algorithms can effectively determine network parameters and they perform better than the widely-used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. By coupling the parallel model population-based optimization method and the parallel computational framework, high quality solutions can be obtained within relatively short time. This integrated approach is a promising way for inferring large networks.
An ECG signals compression method and its validation using NNs.
Fira, Catalina Monica; Goras, Liviu
2008-04-01
This paper presents a new algorithm for electrocardiogram (ECG) signal compression based on local extreme extraction, adaptive hysteretic filtering and Lempel-Ziv-Welch (LZW) coding. The algorithm has been verified using eight of the most frequent normal and pathological types of cardiac beats and a multi-layer perceptron (MLP) neural network trained with original cardiac patterns and tested with reconstructed ones. Aspects regarding the possibility of using principal component analysis (PCA) for cardiac pattern classification have been investigated as well. A new compression measure called "quality score," which takes into account both the reconstruction errors and the compression ratio, is proposed.
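To make the compression idea concrete, the sketch below extracts local extrema from a toy ECG-like signal, reconstructs by interpolation, and reports a quality-score-style ratio of compression ratio to percentage RMS error. It is an illustration only, not the authors' algorithm; the hysteresis threshold `eps` and the test signal are assumptions.

```python
# Illustrative local-extrema compression and a quality-score-like figure.
import numpy as np

def local_extrema(x, eps=0.05):
    """Keep samples that are local maxima/minima and differ from the last
    kept sample by more than eps (a crude stand-in for adaptive hysteresis)."""
    idx = [0]
    for i in range(1, len(x) - 1):
        is_ext = (x[i] - x[i - 1]) * (x[i + 1] - x[i]) <= 0
        if is_ext and abs(x[i] - x[idx[-1]]) > eps:
            idx.append(i)
    idx.append(len(x) - 1)
    return np.array(idx)

t = np.linspace(0, 2, 720)
ecg = np.sin(2 * np.pi * 2 * t) + 0.3 * np.sin(2 * np.pi * 20 * t)  # toy signal
keep = local_extrema(ecg)
recon = np.interp(np.arange(len(ecg)), keep, ecg[keep])

cr = len(ecg) / len(keep)                                            # compression ratio
prd = 100 * np.sqrt(np.sum((ecg - recon) ** 2) / np.sum(ecg ** 2))   # % RMS error
print(f"CR = {cr:.1f}, PRD = {prd:.1f}%, quality-score-like ratio = {cr / prd:.2f}")
```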
Cellular neural networks, the Navier-Stokes equation, and microarray image reconstruction.
Zineddin, Bachar; Wang, Zidong; Liu, Xiaohui
2011-11-01
Although the last decade has witnessed a great deal of improvement in microarray technology, major developments are still needed in all of its main stages, including image processing. Some hardware implementations of microarray image processing have been proposed in the literature and proved to be promising alternatives to the currently available software systems. However, the main drawback of those approaches is that they do not quantify the gene spot realistically, that is, without assumptions about the image surface. Our aim in this paper is to present a new image-reconstruction algorithm using a cellular neural network that solves the Navier-Stokes equation. This algorithm offers a robust method for estimating the background signal within the gene-spot region. The MATCNN toolbox for Matlab is used to test the proposed method. Quantitative comparisons, in terms of objective criteria, are carried out between our approach and some other available methods. It is shown that the proposed algorithm gives highly accurate and realistic measurements in a fully automated manner within a remarkably efficient time.
The HEP.TrkX Project: deep neural networks for HL-LHC online and offline tracking
NASA Astrophysics Data System (ADS)
Farrell, Steven; Anderson, Dustin; Calafiura, Paolo; Cerati, Giuseppe; Gray, Lindsey; Kowalkowski, Jim; Mudigonda, Mayur; Prabhat; Spentzouris, Panagiotis; Spiropoulou, Maria; Tsaris, Aristeidis; Vlimant, Jean-Roch; Zheng, Stephan
2017-08-01
Particle track reconstruction in dense environments such as the detectors of the High Luminosity Large Hadron Collider (HL-LHC) is a challenging pattern recognition problem. Traditional tracking algorithms such as the combinatorial Kalman Filter have been used with great success in LHC experiments for years. However, these state-of-the-art techniques are inherently sequential and scale poorly with the expected increases in detector occupancy in the HL-LHC conditions. The HEP.TrkX project is a pilot project with the aim to identify and develop cross-experiment solutions based on machine learning algorithms for track reconstruction. Machine learning algorithms bring a lot of potential to this problem thanks to their capability to model complex non-linear data dependencies, to learn effective representations of high-dimensional data through training, and to parallelize easily on high-throughput architectures such as GPUs. This contribution will describe our initial explorations into this relatively unexplored idea space. We will discuss the use of recurrent (LSTM) and convolutional neural networks to find and fit tracks in toy detector data.
Multivariate system of polarization tomography of biological crystals birefringence networks
NASA Astrophysics Data System (ADS)
Zabolotna, N. I.; Pavlov, S. V.; Ushenko, A. G.; Sobko, O. V.; Savich, V. O.
2014-08-01
The results of optical modeling of biological tissues polycrystalline multilayer networks have been presented. Algorithms of reconstruction of parameter distributions were determined that describe the linear and circular birefringence. For the separation of the manifestations of these mechanisms we propose a method of space-frequency filtering. Criteria for differentiation of benign and malignant tissues of the women reproductive sphere were found.
Preciat Gonzalez, German A; El Assal, Lemmer R P; Noronha, Alberto; Thiele, Ines; Haraldsdóttir, Hulda S; Fleming, Ronan M T
2017-06-14
The mechanism of each chemical reaction in a metabolic network can be represented as a set of atom mappings, each of which relates an atom in a substrate metabolite to an atom of the same element in a product metabolite. Genome-scale metabolic network reconstructions typically represent biochemistry at the level of reaction stoichiometry. However, a more detailed representation at the underlying level of atom mappings opens the possibility for a broader range of biological, biomedical and biotechnological applications than with stoichiometry alone. Complete manual acquisition of atom mapping data for a genome-scale metabolic network is a laborious process. However, many algorithms exist to predict atom mappings. How do their predictions compare to each other and to manually curated atom mappings? For more than four thousand metabolic reactions in the latest human metabolic reconstruction, Recon 3D, we compared the atom mappings predicted by six atom mapping algorithms. We also compared these predictions to those obtained by manual curation of atom mappings for over five hundred reactions distributed among all top level Enzyme Commission number classes. Five of the evaluated algorithms had similarly high prediction accuracy of over 91% when compared to manually curated atom mapped reactions. On average, the accuracy of the prediction was highest for reactions catalysed by oxidoreductases and lowest for reactions catalysed by ligases. In addition to prediction accuracy, the algorithms were evaluated on their accessibility, their advanced features, such as the ability to identify equivalent atoms, and their ability to map hydrogen atoms. In addition to prediction accuracy, we found that software accessibility and advanced features were fundamental to the selection of an atom mapping algorithm in practice.
Empirical Modeling of the Plasmasphere Dynamics Using Neural Networks
NASA Astrophysics Data System (ADS)
Zhelavskaya, I. S.; Shprits, Y.; Spasojevic, M.
2017-12-01
We present a new empirical model for reconstructing the global dynamics of the cold plasma density distribution based only on solar wind data and geomagnetic indices. Utilizing the density database obtained using the NURD (Neural-network-based Upper hybrid Resonance Determination) algorithm for the period of October 1, 2012 - July 1, 2016, in conjunction with solar wind data and geomagnetic indices, we develop a neural network model that is capable of globally reconstructing the dynamics of the cold plasma density distribution for 2 ≤ L ≤ 6 and all local times. We validate and test the model by measuring its performance on independent datasets withheld from the training set and by comparing the model predicted global evolution with global images of He+ distribution in the Earth's plasmasphere from the IMAGE Extreme UltraViolet (EUV) instrument. We identify the parameters that best quantify the plasmasphere dynamics by training and comparing multiple neural networks with different combinations of input parameters (geomagnetic indices, solar wind data, and different durations of their time history). We demonstrate results of both local and global plasma density reconstruction. This study illustrates how global dynamics can be reconstructed from local in-situ observations by using machine learning techniques.
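As a toy illustration of the modelling idea (not the authors' NURD-based model), the sketch below trains a small feed-forward network to map geomagnetic and solar-wind drivers to a plasma-density-like quantity. All inputs and the "true" relation are synthetic assumptions.

```python
# Hedged toy example: regress synthetic log-density on driver parameters.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
kp = rng.uniform(0, 9, n)                  # geomagnetic index (synthetic)
vsw = rng.uniform(300, 800, n)             # solar wind speed (synthetic)
L = rng.uniform(2, 6, n)                   # L-shell
# Synthetic "true" log-density: decreases with L, Kp and solar wind speed
log_ne = 3.5 - 0.4 * L - 0.05 * kp - 0.001 * (vsw - 400) + rng.normal(0, 0.1, n)

X = np.column_stack([kp, vsw, L])
X_tr, X_te, y_tr, y_te = train_test_split(X, log_ne, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("held-out R^2:", round(model.score(X_te, y_te), 3))
```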
Metabolic Network Modeling of Microbial Communities
Biggs, Matthew B.; Medlock, Gregory L.; Kolling, Glynis L.
2015-01-01
Genome-scale metabolic network reconstructions and constraint-based analysis are powerful methods that have the potential to make functional predictions about microbial communities. Current use of genome-scale metabolic networks to characterize the metabolic functions of microbial communities includes species compartmentalization, separating species-level and community-level objectives, dynamic analysis, the “enzyme-soup” approach, multi-scale modeling, and others. There are many challenges inherent to the field, including a need for tools that accurately assign high-level omics signals to individual community members, new automated reconstruction methods that rival manual curation, and novel algorithms for integrating omics data and engineering communities. As technologies and modeling frameworks improve, we expect that there will be proportional advances in the fields of ecology, health science, and microbial community engineering. PMID:26109480
Designing synthetic networks in silico: a generalised evolutionary algorithm approach.
Smith, Robert W; van Sluijs, Bob; Fleck, Christian
2017-12-02
Evolution has led to the development of biological networks that are shaped by environmental signals. Elucidating, understanding and then reconstructing important network motifs is one of the principal aims of Systems & Synthetic Biology. Consequently, previous research has focused on finding optimal network structures and reaction rates that respond to pulses or produce stable oscillations. In this work we present a generalised in silico evolutionary algorithm that simultaneously finds network structures and reaction rates (genotypes) that can satisfy multiple defined objectives (phenotypes). The key step to our approach is to translate a schema/binary-based description of biological networks into systems of ordinary differential equations (ODEs). The ODEs can then be solved numerically to provide dynamic information about an evolved network's functionality. Initially we benchmark algorithm performance by finding optimal networks that can recapitulate concentration time-series data and perform parameter optimisation on oscillatory dynamics of the Repressilator. We go on to show the utility of our algorithm by finding new designs for robust synthetic oscillators, and by performing multi-objective optimisation to find a set of oscillators and feed-forward loops that are optimal at balancing different system properties. In sum, our results not only confirm and build on previous observations but also provide new designs of synthetic oscillators for experimental construction. In this work we have presented and tested an evolutionary algorithm that can design a biological network to produce a desired output. Given that previous designs of synthetic networks have been limited to subregions of network- and parameter-space, the use of our evolutionary optimisation algorithm will enable Synthetic Biologists to construct new systems with the potential to display a wider range of complex responses.
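The sketch below illustrates the genotype-to-ODE translation step in a minimal way, under assumptions of my own (a signed interaction matrix read as a Hill-type regulatory ODE); it is not the authors' encoding, but it shows how any candidate network can be integrated and scored against target time-series data.

```python
# Minimal sketch: translate a signed interaction matrix into an ODE system.
import numpy as np
from scipy.integrate import solve_ivp

def make_rhs(W, k_deg=1.0, n_hill=2.0):
    def rhs(t, x):
        dx = np.zeros_like(x)
        for i in range(len(x)):
            act = 1.0
            for j, w in enumerate(W[i]):
                h = x[j] ** n_hill / (1.0 + x[j] ** n_hill)
                if w > 0:            # activation by gene j
                    act *= h
                elif w < 0:          # repression by gene j
                    act *= (1.0 - h)
            dx[i] = act - k_deg * x[i]
        return dx
    return rhs

# A 3-gene ring of repressions: 3 represses 1, 1 represses 2, 2 represses 3
W = np.array([[0, 0, -1],
              [-1, 0, 0],
              [0, -1, 0]])
sol = solve_ivp(make_rhs(W), (0, 50), [0.1, 0.2, 0.3], dense_output=True)
print("final state:", np.round(sol.y[:, -1], 3))
```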
Reconstruction and Simulation of Neocortical Microcircuitry.
Markram, Henry; Muller, Eilif; Ramaswamy, Srikanth; Reimann, Michael W; Abdellah, Marwan; Sanchez, Carlos Aguado; Ailamaki, Anastasia; Alonso-Nanclares, Lidia; Antille, Nicolas; Arsever, Selim; Kahou, Guy Antoine Atenekeng; Berger, Thomas K; Bilgili, Ahmet; Buncic, Nenad; Chalimourda, Athanassia; Chindemi, Giuseppe; Courcol, Jean-Denis; Delalondre, Fabien; Delattre, Vincent; Druckmann, Shaul; Dumusc, Raphael; Dynes, James; Eilemann, Stefan; Gal, Eyal; Gevaert, Michael Emiel; Ghobril, Jean-Pierre; Gidon, Albert; Graham, Joe W; Gupta, Anirudh; Haenel, Valentin; Hay, Etay; Heinis, Thomas; Hernando, Juan B; Hines, Michael; Kanari, Lida; Keller, Daniel; Kenyon, John; Khazen, Georges; Kim, Yihwa; King, James G; Kisvarday, Zoltan; Kumbhar, Pramod; Lasserre, Sébastien; Le Bé, Jean-Vincent; Magalhães, Bruno R C; Merchán-Pérez, Angel; Meystre, Julie; Morrice, Benjamin Roy; Muller, Jeffrey; Muñoz-Céspedes, Alberto; Muralidhar, Shruti; Muthurasa, Keerthan; Nachbaur, Daniel; Newton, Taylor H; Nolte, Max; Ovcharenko, Aleksandr; Palacios, Juan; Pastor, Luis; Perin, Rodrigo; Ranjan, Rajnish; Riachi, Imad; Rodríguez, José-Rodrigo; Riquelme, Juan Luis; Rössert, Christian; Sfyrakis, Konstantinos; Shi, Ying; Shillcock, Julian C; Silberberg, Gilad; Silva, Ricardo; Tauheed, Farhan; Telefont, Martin; Toledo-Rodriguez, Maria; Tränkler, Thomas; Van Geit, Werner; Díaz, Jafet Villafranca; Walker, Richard; Wang, Yun; Zaninetta, Stefano M; DeFelipe, Javier; Hill, Sean L; Segev, Idan; Schürmann, Felix
2015-10-08
We present a first-draft digital reconstruction of the microcircuitry of somatosensory cortex of juvenile rat. The reconstruction uses cellular and synaptic organizing principles to algorithmically reconstruct detailed anatomy and physiology from sparse experimental data. An objective anatomical method defines a neocortical volume of 0.29 ± 0.01 mm(3) containing ~31,000 neurons, and patch-clamp studies identify 55 layer-specific morphological and 207 morpho-electrical neuron subtypes. When digitally reconstructed neurons are positioned in the volume and synapse formation is restricted to biological bouton densities and numbers of synapses per connection, their overlapping arbors form ~8 million connections with ~37 million synapses. Simulations reproduce an array of in vitro and in vivo experiments without parameter tuning. Additionally, we find a spectrum of network states with a sharp transition from synchronous to asynchronous activity, modulated by physiological mechanisms. The spectrum of network states, dynamically reconfigured around this transition, supports diverse information processing strategies. VIDEO ABSTRACT. Copyright © 2015 Elsevier Inc. All rights reserved.
3D reconstruction of synapses with deep learning based on EM Images
NASA Astrophysics Data System (ADS)
Xiao, Chi; Rao, Qiang; Zhang, Dandan; Chen, Xi; Han, Hua; Xie, Qiwei
2017-03-01
Recently, due to the rapid development of the electron microscope (EM) with its high resolution, stacks delivered by EM can be used to analyze a variety of components that are critical to understanding brain function. Since synaptic study is essential in neurobiology and can be carried out on EM stacks, automated routines for the reconstruction of synapses based on EM images can become a very useful tool for analyzing large volumes of brain tissue and providing the ability to understand the mechanisms of the brain. In this article, we propose a novel automated method to realize 3D reconstruction of synapses for Automated Tape-collecting Ultra Microtome Scanning Electron Microscopy (ATUM-SEM) with deep learning. Unlike other reconstruction algorithms, which employ a classifier to segment synaptic clefts directly, we utilize a deep learning method together with a segmentation algorithm to obtain synaptic clefts and improve the accuracy of the reconstruction. The proposed method contains five parts: (1) using a modified Moving Least Squares (MLS) deformation algorithm and Scale Invariant Feature Transform (SIFT) features to register adjacent sections, (2) adopting the Faster Region Convolutional Neural Network (Faster R-CNN) algorithm to detect synapses, (3) utilizing a screening method which takes context cues of synapses into consideration to reduce the false positive rate, (4) combining a practical morphology algorithm with a suitable fitting function to segment synaptic clefts and optimize their shape, (5) applying a plugin in FIJI to show the final 3D visualization of synapses. Experimental results on ATUM-SEM images demonstrate the effectiveness of our proposed method.
Mass Conservation and Inference of Metabolic Networks from High-Throughput Mass Spectrometry Data
Bandaru, Pradeep; Bansal, Mukesh
2011-01-01
We present a step towards the metabolome-wide computational inference of cellular metabolic reaction networks from metabolic profiling data, such as mass spectrometry. The reconstruction is based on identification of irreducible statistical interactions among the metabolite activities using the ARACNE reverse-engineering algorithm and on constraining possible metabolic transformations to satisfy the conservation of mass. The resulting algorithms are validated on synthetic data from an abridged computational model of Escherichia coli metabolism. Precision rates upwards of 50% are routinely observed for identification of full metabolic reactions, and recalls upwards of 20% are also seen. PMID:21314454
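The sketch below shows the core ARACNE idea referenced above in a stripped-down form: histogram-based mutual information between profiles, followed by the data processing inequality (DPI), which prunes the weakest edge of each triangle. It is a generic illustration with synthetic data, not the authors' mass-conservation-constrained pipeline.

```python
# Generic ARACNE-style sketch: mutual information plus DPI pruning.
import numpy as np
from itertools import combinations

def mutual_info(x, y, bins=8):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))

def aracne(data, eps=0.0):
    """data: (n_samples, n_variables). Returns the MI matrix with DPI-pruned edges."""
    n = data.shape[1]
    mi = np.zeros((n, n))
    for i, j in combinations(range(n), 2):
        mi[i, j] = mi[j, i] = mutual_info(data[:, i], data[:, j])
    keep = mi.copy()
    for i, j, k in combinations(range(n), 3):
        # In each triangle, drop the weakest edge (data processing inequality)
        trio = [(mi[i, j], i, j), (mi[j, k], j, k), (mi[i, k], i, k)]
        w, a, b = min(trio)
        if w < max(t[0] for t in trio) - eps:
            keep[a, b] = keep[b, a] = 0.0
    return keep

rng = np.random.default_rng(2)
a = rng.normal(size=300)
b = a + 0.3 * rng.normal(size=300)       # b depends on a
c = b + 0.3 * rng.normal(size=300)       # c depends on b (only indirectly on a)
net = aracne(np.column_stack([a, b, c]))
print(np.round(net, 2))                   # the indirect a--c edge should be pruned
```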
NASA Astrophysics Data System (ADS)
Waldmann, I. P.
2016-04-01
Here, we introduce the RobERt (Robotic Exoplanet Recognition) algorithm for the classification of exoplanetary emission spectra. Spectral retrieval of exoplanetary atmospheres frequently requires the preselection of molecular/atomic opacities to be defined by the user. In the era of open-source, automated, and self-sufficient retrieval algorithms, manual input should be avoided. User dependent input could, in worst-case scenarios, lead to incomplete models and biases in the retrieval. The RobERt algorithm is based on deep-belief neural (DBN) networks trained to accurately recognize molecular signatures for a wide range of planets, atmospheric thermal profiles, and compositions. Reconstructions of the learned features, also referred to as the “dreams” of the network, indicate good convergence and an accurate representation of molecular features in the DBN. Using these deep neural networks, we work toward retrieval algorithms that themselves understand the nature of the observed spectra, are able to learn from current and past data, and make sensible qualitative preselections of atmospheric opacities to be used for the quantitative stage of the retrieval process.
A program to compute the soft Robinson-Foulds distance between phylogenetic networks.
Lu, Bingxin; Zhang, Louxin; Leong, Hon Wai
2017-03-14
Over the past two decades, phylogenetic networks have been studied to model reticulate evolutionary events. The relationships among phylogenetic networks, phylogenetic trees and clusters serve as the basis for reconstruction and comparison of phylogenetic networks. To understand these relationships, two problems are raised: the tree containment problem, which asks whether a phylogenetic tree is displayed in a phylogenetic network, and the cluster containment problem, which asks whether a cluster is represented at a node in a phylogenetic network. Both problems are NP-complete. A fast exponential-time algorithm for the cluster containment problem on arbitrary networks is developed and implemented in C. The resulting program is further extended into a computer program for fast computation of the Soft Robinson-Foulds distance between phylogenetic networks. Two computer programs are developed to facilitate the reconstruction and validation of phylogenetic network models in evolutionary and comparative genomics. Our simulation tests indicate that they are fast enough for use in practice. Additionally, our simulation data indicate that the distribution of the Soft Robinson-Foulds distance between phylogenetic networks is unlikely to be normal.
A convolutional neural network neutrino event classifier
Aurisano, A.; Radovic, A.; Rocco, D.; ...
2016-09-01
Here, convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network), identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.
Gogoshin, Grigoriy; Boerwinkle, Eric; Rodin, Andrei S
2017-04-01
Bayesian network (BN) reconstruction is a prototypical systems biology data analysis approach that has been successfully used to reverse engineer and model networks reflecting different layers of biological organization (ranging from genetic to epigenetic to cellular pathway to metabolomic). It is especially relevant in the context of modern (ongoing and prospective) studies that generate heterogeneous high-throughput omics datasets. However, there are both theoretical and practical obstacles to the seamless application of BN modeling to such big data, including computational inefficiency of optimal BN structure search algorithms, ambiguity in data discretization, mixing data types, imputation and validation, and, in general, limited scalability in both reconstruction and visualization of BNs. To overcome these and other obstacles, we present BNOmics, an improved algorithm and software toolkit for inferring and analyzing BNs from omics datasets. BNOmics aims at comprehensive systems biology-type data exploration, including both generating new biological hypothesis and testing and validating the existing ones. Novel aspects of the algorithm center around increasing scalability and applicability to varying data types (with different explicit and implicit distributional assumptions) within the same analysis framework. An output and visualization interface to widely available graph-rendering software is also included. Three diverse applications are detailed. BNOmics was originally developed in the context of genetic epidemiology data and is being continuously optimized to keep pace with the ever-increasing inflow of available large-scale omics datasets. As such, the software scalability and usability on the less than exotic computer hardware are a priority, as well as the applicability of the algorithm and software to the heterogeneous datasets containing many data types-single-nucleotide polymorphisms and other genetic/epigenetic/transcriptome variables, metabolite levels, epidemiological variables, endpoints, and phenotypes, etc.
NASA Astrophysics Data System (ADS)
Hong, Junseok; Kim, Yong Ha; Chung, Jong-Kyun; Ssessanga, Nicholas; Kwak, Young-Sil
2017-03-01
In South Korea, there are about 80 Global Positioning System (GPS) monitoring stations providing total electron content (TEC) every 10 min, which can be accessed through the Korea Astronomy and Space Science Institute (KASI) for scientific use. We applied the computerized ionospheric tomography (CIT) algorithm to the TEC dataset from this GPS network for monitoring the regional ionosphere over South Korea. The algorithm utilizes the multiplicative algebraic reconstruction technique (MART) with an initial condition from the latest International Reference Ionosphere model (IRI-2016). In order to reduce the number of unknown variables, the vertical profiles of electron density are expressed as a linear combination of empirical orthonormal functions (EOFs) that were derived from the IRI empirical profiles. Although the number of receiver sites is much smaller than in Japan, the CIT algorithm yielded a reasonable structure of the ionosphere over South Korea. We verified the CIT results with NmF2 from ionosondes in Icheon and Jeju and also with GPS TEC at the center of South Korea. In addition, the total time required for the CIT calculation was only about 5 min, enabling the exploration of the vertical ionospheric structure in near real time.
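For readers unfamiliar with MART, the following is a toy sketch of the multiplicative update on a small synthetic linear system (rows of A standing in for ray-path contributions, y for slant TEC measurements). The geometry, relaxation parameter and sizes are assumptions, not the KASI/IRI setup.

```python
# Toy MART (multiplicative algebraic reconstruction technique) sketch.
import numpy as np

def mart(A, y, x0, n_sweeps=50, relax=0.5):
    x = x0.astype(float).copy()
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            proj = A[i] @ x
            if proj <= 0:
                continue
            # Multiplicative update keeps x positive (physical densities)
            ratio = y[i] / proj
            x *= ratio ** (relax * A[i] / A[i].max())
    return x

rng = np.random.default_rng(3)
x_true = rng.uniform(1, 5, 20)             # "true" densities
A = rng.uniform(0, 1, (40, 20))            # synthetic ray geometry
y = A @ x_true                             # noiseless measurements
x_rec = mart(A, y, x0=np.ones(20))
print("relative error:",
      round(float(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)), 3))
```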
Benchmarking Procedures for High-Throughput Context Specific Reconstruction Algorithms
Pacheco, Maria P.; Pfau, Thomas; Sauter, Thomas
2016-01-01
Recent progress in high-throughput data acquisition has shifted the focus from data generation to processing and understanding of how to integrate collected information. Context specific reconstruction based on generic genome scale models like ReconX or HMR has the potential to become a diagnostic and treatment tool tailored to the analysis of specific individuals. The respective computational algorithms require a high level of predictive power, robustness and sensitivity. Although multiple context specific reconstruction algorithms were published in the last 10 years, only a fraction of them is suitable for model building based on human high-throughput data. Among other reasons, this might be due to problems arising from the limitation to only one metabolic target function or arbitrary thresholding. This review describes and analyses common validation methods used for testing model building algorithms. Two major methods can be distinguished: consistency testing and comparison based testing. The first is concerned with robustness against noise, e.g., missing data due to the impossibility to distinguish between the signal and the background of non-specific binding of probes in a microarray experiment, and whether distinct sets of input expressed genes, corresponding to, e.g., different tissues, yield distinct models. The latter covers methods comparing sets of functionalities, comparison with existing networks or additional databases. We test those methods on several available algorithms and deduce properties of these algorithms that can be compared with future developments. The set of tests performed can therefore serve as a benchmarking procedure for future algorithms. PMID:26834640
Megchelenbrink, Wout; Huynen, Martijn; Marchiori, Elena
2014-01-01
Constraint-based models of metabolic networks are typically underdetermined, because they contain more reactions than metabolites. Therefore the solutions to this system do not consist of unique flux rates for each reaction, but rather a space of possible flux rates. By uniformly sampling this space, an estimated probability distribution for each reaction's flux in the network can be obtained. However, sampling a high dimensional network is time-consuming. Furthermore, the constraints imposed on the network give rise to an irregularly shaped solution space. Therefore more tailored, efficient sampling methods are needed. We propose an efficient sampling algorithm (called optGpSampler), which implements the Artificial Centering Hit-and-Run algorithm in a different manner than the sampling algorithm implemented in the COBRA Toolbox for metabolic network analysis, here called gpSampler. Results of extensive experiments on different genome-scale metabolic networks show that optGpSampler is up to 40 times faster than gpSampler. Application of existing convergence diagnostics on small network reconstructions indicate that optGpSampler converges roughly ten times faster than gpSampler towards similar sampling distributions. For networks of higher dimension (i.e. containing more than 500 reactions), we observed significantly better convergence of optGpSampler and a large deviation between the samples generated by the two algorithms. optGpSampler for Matlab and Python is available for non-commercial use at: http://cs.ru.nl/~wmegchel/optGpSampler/.
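The following is a minimal hit-and-run sketch over box constraints only; the stoichiometric equality constraints (S v = 0) that make real flux spaces irregular, and the artificial centering refinement used by optGpSampler, are omitted. Bounds and dimensions are hypothetical.

```python
# Minimal hit-and-run sampler over a box-shaped "flux" region (illustration only).
import numpy as np

def hit_and_run(lb, ub, n_samples=1000, seed=0):
    rng = np.random.default_rng(seed)
    x = (lb + ub) / 2.0                        # start at the box centre
    samples = []
    for _ in range(n_samples):
        d = rng.normal(size=len(x))
        d /= np.linalg.norm(d)
        # Feasible step sizes t so that lb <= x + t*d <= ub in every coordinate
        with np.errstate(divide="ignore", invalid="ignore"):
            t_lo = np.where(d != 0, (lb - x) / d, -np.inf)
            t_hi = np.where(d != 0, (ub - x) / d, np.inf)
        t_min = np.max(np.minimum(t_lo, t_hi))
        t_max = np.min(np.maximum(t_lo, t_hi))
        x = x + rng.uniform(t_min, t_max) * d
        samples.append(x.copy())
    return np.array(samples)

lb = np.zeros(5)                               # e.g. irreversible reactions
ub = np.array([10.0, 5.0, 8.0, 2.0, 6.0])      # hypothetical flux bounds
S = hit_and_run(lb, ub, n_samples=5000)
print("estimated flux means:", np.round(S.mean(axis=0), 2))
```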
NASA Astrophysics Data System (ADS)
Ushenko, V. A.; Sidor, M. I.; Marchuk, Yu F.; Pashkovskaya, N. V.; Andreichuk, D. R.
2015-03-01
We report a model of Mueller-matrix description of optical anisotropy of protein networks in biological tissues with allowance for the linear birefringence and dichroism. The model is used to construct the reconstruction algorithms of coordinate distributions of phase shifts and the linear dichroism coefficient. In the statistical analysis of such distributions, we have found the objective criteria of differentiation between benign and malignant tissues of the female reproductive system. From the standpoint of evidence-based medicine, we have determined the operating characteristics (sensitivity, specificity and accuracy) of the Mueller-matrix reconstruction method of optical anisotropy parameters and demonstrated its effectiveness in the differentiation of benign and malignant tumours.
An Algorithm for the Mixed Transportation Network Design Problem
Liu, Xinyu; Chen, Qun
2016-01-01
This paper proposes an optimization algorithm, the dimension-down iterative algorithm (DDIA), for solving a mixed transportation network design problem (MNDP), which is generally expressed as a mathematical program with equilibrium constraints (MPEC). The upper level of the MNDP aims to optimize the network performance via both the expansion of the existing links and the addition of new candidate links, whereas the lower level is a traditional Wardrop user equilibrium (UE) problem. The idea of the proposed solution algorithm (DDIA) is to reduce the dimensions of the problem. A group of variables (discrete/continuous) is fixed to optimize another group of variables (continuous/discrete) alternately; then, the problem is transformed into solving a series of CNDPs (continuous network design problems) and DNDPs (discrete network design problems) repeatedly until the problem converges to the optimal solution. The advantage of the proposed algorithm is that its solution process is very simple and easy to apply. Numerical examples show that for the MNDP without budget constraint, the optimal solution can be found within a few iterations with DDIA. For the MNDP with budget constraint, however, the result depends on the selection of initial values, which leads to different optimal solutions (i.e., different local optimal solutions). Some thoughts are given on how to derive meaningful initial values, such as by considering the budgets of new and reconstruction projects separately. PMID:27626803
An Online Dictionary Learning-Based Compressive Data Gathering Algorithm in Wireless Sensor Networks
Wang, Donghao; Wan, Jiangwen; Chen, Junying; Zhang, Qiang
2016-01-01
To adapt to sense signals of enormous diversities and dynamics, and to decrease the reconstruction errors caused by ambient noise, a novel online dictionary learning method-based compressive data gathering (ODL-CDG) algorithm is proposed. The proposed dictionary is learned from a two-stage iterative procedure, alternately changing between a sparse coding step and a dictionary update step. The self-coherence of the learned dictionary is introduced as a penalty term during the dictionary update procedure. The dictionary is also constrained with sparse structure. It’s theoretically demonstrated that the sensing matrix satisfies the restricted isometry property (RIP) with high probability. In addition, the lower bound of necessary number of measurements for compressive sensing (CS) reconstruction is given. Simulation results show that the proposed ODL-CDG algorithm can enhance the recovery accuracy in the presence of noise, and reduce the energy consumption in comparison with other dictionary based data gathering methods. PMID:27669250
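A schematic two-stage dictionary learning loop in the spirit described above (sparse coding followed by a dictionary update) is sketched below. It is a generic illustration, not the ODL-CDG algorithm: the self-coherence penalty and the sparse-structure constraint on the dictionary are omitted, and all data are synthetic.

```python
# Generic sparse-coding / dictionary-update loop (MOD-style update, OMP coding).
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(4)
n_features, n_atoms, n_signals, sparsity = 32, 48, 200, 4

# Synthetic training signals that are sparse in an unknown dictionary
D_true = rng.normal(size=(n_features, n_atoms))
D_true /= np.linalg.norm(D_true, axis=0)
codes = np.zeros((n_atoms, n_signals))
for j in range(n_signals):
    idx = rng.choice(n_atoms, sparsity, replace=False)
    codes[idx, j] = rng.normal(size=sparsity)
X = D_true @ codes

D = rng.normal(size=(n_features, n_atoms))
D /= np.linalg.norm(D, axis=0)
for it in range(20):
    # Stage 1: sparse coding with a fixed dictionary (orthogonal matching pursuit)
    C = orthogonal_mp(D, X, n_nonzero_coefs=sparsity)
    # Stage 2: dictionary update (method-of-optimal-directions least squares)
    D = X @ C.T @ np.linalg.pinv(C @ C.T)
    D /= np.linalg.norm(D, axis=0) + 1e-12
print("relative reconstruction error:",
      round(float(np.linalg.norm(X - D @ C) / np.linalg.norm(X)), 3))
```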
Emad, Amin; Milenkovic, Olgica
2014-01-01
We introduce a novel algorithm for inference of causal gene interactions, termed CaSPIAN (Causal Subspace Pursuit for Inference and Analysis of Networks), which is based on coupling compressive sensing and Granger causality techniques. The core of the approach is to discover sparse linear dependencies between shifted time series of gene expressions using a sequential list-version of the subspace pursuit reconstruction algorithm and to estimate the direction of gene interactions via Granger-type elimination. The method is conceptually simple and computationally efficient, and it allows for dealing with noisy measurements. Its performance as a stand-alone platform without biological side-information was tested on simulated networks, on the synthetic IRMA network in Saccharomyces cerevisiae, and on data pertaining to the human HeLa cell network and the SOS network in E. coli. The results produced by CaSPIAN are compared to the results of several related algorithms, demonstrating significant improvements in inference accuracy of documented interactions. These findings highlight the importance of Granger causality techniques for reducing the number of false-positives, as well as the influence of noise and sampling period on the accuracy of the estimates. In addition, the performance of the method was tested in conjunction with biological side information of the form of sparse “scaffold networks”, to which new edges were added using available RNA-seq or microarray data. These biological priors aid in increasing the sensitivity and precision of the algorithm in the small sample regime. PMID:24622336
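A rough sketch of the causal-inference idea follows: for each target gene, a sparse set of lagged predictors is selected by orthogonal matching pursuit, used here as a stand-in for the subspace pursuit step described above. The planted link, sparsity level and data are assumptions, not CaSPIAN itself, and the Granger-type elimination step is omitted.

```python
# Sparse selection of lagged regulators (OMP as a stand-in for subspace pursuit).
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(9)
T, G = 200, 10
X = rng.normal(size=(T, G))
# Plant a causal link: gene 3 at time t depends on gene 0 at time t-1
X[1:, 3] += 0.8 * X[:-1, 0]

lagged, targets = X[:-1, :], X[1:, :]
for g in range(G):
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=2)
    omp.fit(lagged, targets[:, g])
    parents = np.flatnonzero(omp.coef_)
    if g == 3:
        print("inferred lagged parents of gene 3 (expect gene 0):", parents)
```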
Reconstruction of biological pathways and metabolic networks from in silico labeled metabolites.
Hadadi, Noushin; Hafner, Jasmin; Soh, Keng Cher; Hatzimanikatis, Vassily
2017-01-01
Reaction atom mappings track the positional changes of all of the atoms between the substrates and the products as they undergo the biochemical transformation. However, information on atom transitions in the context of metabolic pathways is not widely available in the literature. The understanding of metabolic pathways at the atomic level is of great importance, as it can deconvolute the overlapping catabolic/anabolic pathways resulting in the observed metabolic phenotype. The automated identification of atom transitions within a metabolic network is a very challenging task, since the degree of complexity of metabolic networks dramatically increases when we move from metabolite-level studies to atom-level studies. Despite being studied extensively in various approaches, the field of atom mapping of metabolic networks lacks an automated approach that (i) accounts for the information of the reaction mechanism for atom mapping and (ii) is extendable from individual atom-mapped reactions to atom-mapped reaction networks. Here, we introduce a computational framework, iAM.NICE (in silico Atom Mapped Network Integrated Computational Explorer), for the systematic atom-level reconstruction of metabolic networks from in silico labelled substrates. iAM.NICE is, to our knowledge, the first automated atom-mapping algorithm that is based on the underlying enzymatic biotransformation mechanisms; its application goes beyond individual reactions, and it can be used for the reconstruction of atom-mapped metabolic networks. We illustrate the applicability of our method through the reconstruction of atom-mapped reactions of the KEGG database, and we provide an example of an atom-level representation of the core metabolic network of E. coli. Copyright © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Sungjun Lim; Nowak, Michael R; Yoonsuck Choe
2016-08-01
We present a novel, parallelizable algorithm capable of automatically reconstructing and calculating anatomical statistics of cerebral vascular networks embedded in large volumes of Rat Nissl-stained data. In this paper, we report the results of our method using Rattus somatosensory cortical data acquired using Knife-Edge Scanning Microscopy. Our algorithm performs the reconstruction task with averaged precision, recall, and F2-score of 0.978, 0.892, and 0.902 respectively. Calculated anatomical statistics show some conformance to values previously reported. The results that can be obtained from our method are expected to help explicate the relationship between the structural organization of the microcirculation and normal (and abnormal) cerebral functioning.
Regional influences on reconstructed global mean sea level
NASA Astrophysics Data System (ADS)
Natarov, Svetlana I.; Merrifield, Mark A.; Becker, Janet M.; Thompson, Phillip R.
2017-04-01
Reconstructions of global mean sea level (GMSL) based on tide gauge measurements tend to exhibit common multidecadal rate fluctuations over the twentieth century. GMSL rate changes may result from physical drivers, such as changes in radiative forcing or land water storage. Alternatively, these fluctuations may represent artifacts due to sampling limitations inherent in the historical tide gauge network. In particular, a high percentage of tide gauges used in reconstructions, especially prior to the 1950s, are from Europe and North America in the North Atlantic region. Here a GMSL reconstruction based on the reduced space optimal interpolation algorithm is deconstructed, with the contributions of individual tide gauge stations quantified and assessed regionally. It is demonstrated that the North Atlantic region has a disproportionate influence on reconstructed GMSL rate fluctuations prior to the 1950s, notably accounting for a rate minimum in the 1920s and contributing to a rate maximum in the 1950s. North Atlantic coastal sea level fluctuations related to wind-driven ocean volume redistribution likely contribute to these estimated GMSL rate inflections. The findings support previous claims that multidecadal rate changes in GMSL reconstructions are likely related to the geographic distribution of tide gauge stations within a sparse global network.
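To make the reduced space reconstruction idea concrete, the sketch below fits the leading EOF amplitudes of a synthetic field to a sparse set of "tide gauge" points by least squares and compares the reconstructed domain mean with the truth. It is a toy illustration; the covariance weighting of the actual optimal interpolation and all geophysical detail are omitted.

```python
# Toy reduced-space (EOF-based) reconstruction from sparse point observations.
import numpy as np

rng = np.random.default_rng(5)
n_t, n_x = 200, 300                           # time steps, grid points
# Synthetic sea-level field built from two spatial modes plus noise
modes = np.stack([np.sin(np.linspace(0, 3 * np.pi, n_x)),
                  np.cos(np.linspace(0, 2 * np.pi, n_x))])
amps = np.column_stack([np.sin(np.linspace(0, 8, n_t)),
                        0.5 * np.cos(np.linspace(0, 5, n_t))])
field = amps @ modes + 0.05 * rng.normal(size=(n_t, n_x))

clim = field.mean(axis=0)
anom = field - clim
# EOFs (leading spatial patterns); in practice these come from a well-observed era
_, _, Vt = np.linalg.svd(anom, full_matrices=False)
eofs = Vt[:2]

gauges = rng.choice(n_x, 15, replace=False)   # sparse "tide gauge" locations
recon = np.empty(n_t)
for t in range(n_t):
    # Fit the EOF amplitudes to the sparse observations by least squares
    alpha, *_ = np.linalg.lstsq(eofs[:, gauges].T, anom[t, gauges], rcond=None)
    recon[t] = (alpha @ eofs).mean()          # reconstructed domain-mean anomaly

truth = anom.mean(axis=1)
print("correlation with true domain mean:", round(np.corrcoef(recon, truth)[0, 1], 3))
```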
Kaltdorf, Martin; Dandekar, Thomas; Naseem, Muhammad
2017-01-01
In order to increase our understanding of biological dependencies in plant immune signaling pathways, the known interactions involved in plant immune networks are modeled. This allows computational analysis to predict the functions of growth-related hormones in plant-pathogen interaction. The SQUAD (Standardized Qualitative Dynamical Systems) algorithm first determines stable system states in the network and then uses them to compute continuous dynamical system states. Our reconstructed Boolean model encompassing hormone immune networks of Arabidopsis thaliana (Arabidopsis) and pathogenicity factors injected by the model pathogen Pseudomonas syringae pv. tomato DC3000 (Pst DC3000) can be exploited to determine the impact of growth hormones in plant immunity. We describe a detailed working protocol for how to use the modified SQUAD package, exemplified by the contrasting effects of auxin and cytokinins in shaping plant-pathogen interaction.
Track vertex reconstruction with neural networks at the first level trigger of Belle II
NASA Astrophysics Data System (ADS)
Neuhaus, Sara; Skambraks, Sebastian; Kiesling, Christian
2017-08-01
The track trigger is one of the main components of the Belle II first level trigger, taking input from the Central Drift Chamber (CDC). It consists of several stages, first combining hits to track segments, followed by a 2D track finding in the transverse plane and finally a 3D track reconstruction. The results of the track trigger are the track multiplicity, the momentum vector of each track and the longitudinal displacement of the origin or production vertex of each track ("z-vertex"). The latter allows to reject background tracks from outside of the interaction region and thus to suppress a large fraction of the machine background. This contribution focuses on the track finding stage using Hough transforms and on the z-vertex reconstruction with neural networks. We describe the algorithms and show performance studies on simulated events.
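The sketch below illustrates the Hough-transform track-finding step in its simplest form: a straight-line (theta, rho) transform applied to synthetic transverse-plane hits. The real trigger works on CDC track segments with a curvature parametrisation and a neural z-vertex estimate, so treat the geometry and parameters here as assumptions for illustration only.

```python
# Toy 2-D straight-line Hough transform for track candidates.
import numpy as np

rng = np.random.default_rng(6)
# Two synthetic straight tracks from the origin plus random noise hits
tracks = []
for a in [0.4, 1.7]:
    r = np.linspace(5.0, 100.0, 40)
    tracks.append(np.column_stack((r * np.cos(a), r * np.sin(a))))
hits = np.vstack(tracks + [rng.uniform(-100, 100, size=(30, 2))])

# Hough accumulator over (theta, rho): each hit votes for all lines through it
thetas = np.linspace(0, np.pi, 180, endpoint=False)   # theta = line normal angle
rho_max, n_rho = 150.0, 200
acc = np.zeros((len(thetas), n_rho), dtype=int)
for x, y in hits:
    rho = x * np.cos(thetas) + y * np.sin(thetas)
    bins = np.clip(((rho + rho_max) / (2 * rho_max) * n_rho).astype(int), 0, n_rho - 1)
    acc[np.arange(len(thetas)), bins] += 1

# Peaks in the accumulator correspond to track candidates; the two injected
# tracks should dominate the top entries
for idx in np.argsort(acc, axis=None)[::-1][:3]:
    ti, ri = np.unravel_index(idx, acc.shape)
    print(f"candidate: normal angle = {thetas[ti]:.2f} rad, votes = {acc[ti, ri]}")
```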
Ab initio nanostructure determination
NASA Astrophysics Data System (ADS)
Gujarathi, Saurabh
Reconstruction of complex structures is an inverse problem arising in virtually all areas of science and technology, from protein structure determination to bulk heterostructure solar cells and the structure of nanoparticles. This problem is cast as a complex network problem where the edges in a network have weights equal to the Euclidean distance between their endpoints. A method, called Tribond, for the reconstruction of the locations of the nodes of the network given only the edge weights of the Euclidean network is presented. The timing results indicate that, in two dimensions, the algorithm's cost is a low-order polynomial in the number of nodes in the network. With this implementation, Euclidean networks of about one thousand nodes in two dimensions are reconstructed in approximately twenty-four hours on a desktop computer. In three dimensions, the computational cost of reconstruction is a higher-order polynomial in the number of nodes, and reconstruction of small Euclidean networks in three dimensions is shown. If a starting network of size five is assumed to be given, then for a network of size 100 the remaining reconstruction can be done in about two hours on a desktop computer. In situations where we have less precise data, modifications of the method may be necessary and are discussed. A related problem in one dimension, known as the Optimal Golomb Ruler (OGR), is also studied. A statistical-physics Hamiltonian to describe the OGR problem is introduced, and the first-order phase transition from a symmetric low-constraint phase to a complex symmetry-broken phase at high constraint is studied. Despite the fact that the Hamiltonian is not disordered, the asymmetric phase is highly irregular with geometric frustration. The phase diagram is obtained, and it is seen that even at a very low temperature T there is a phase transition at a finite, non-zero value of the constraint parameter gamma/mu. Analytic calculations for the scaling of the density and free energy of the ruler are performed and compared with those from the mean-field approach. A scaling law is also derived for the length of the OGR, which is consistent with Erdos' conjecture and with numerical results.
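For context on the distance-geometry flavour of the problem, the sketch below shows classical multidimensional scaling, which recovers point positions (up to rotation and reflection) when the full, assigned matrix of pairwise Euclidean distances is known. Tribond itself works from the much harder unassigned list of edge weights; this is only the easy, assigned version of the problem on synthetic points.

```python
# Classical MDS: recover coordinates from a complete pairwise distance matrix.
import numpy as np

rng = np.random.default_rng(7)
pts = rng.uniform(0, 10, size=(30, 2))                  # hidden 2-D node positions
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

# Double-centre the squared distances and take the top eigenvectors
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
w, V = np.linalg.eigh(B)
coords = V[:, -2:] * np.sqrt(np.maximum(w[-2:], 0))     # 2-D embedding

# Check: the reconstructed distances match the originals
D_rec = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
print("max distance error:", round(float(np.abs(D - D_rec).max()), 6))
```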
NASA Astrophysics Data System (ADS)
Ge, Yuanzheng; Chen, Bin; liu, Liang; Qiu, Xiaogang; Song, Hongbin; Wang, Yong
2018-02-01
Individual-based computational environment provides an effective solution to study complex social events by reconstructing scenarios. Challenges remain in reconstructing the virtual scenarios and reproducing the complex evolution. In this paper, we propose a framework to reconstruct a synthetic computational environment, reproduce the epidemic outbreak, and evaluate management interventions in a virtual university. The reconstructed computational environment includes 4 fundamental components: the synthetic population, behavior algorithms, multiple social networks, and geographic campus environment. In the virtual university, influenza H1N1 transmission experiments are conducted, and gradually enhanced interventions are evaluated and compared quantitatively. The experiment results indicate that the reconstructed virtual environment provides a solution to reproduce complex emergencies and evaluate policies to be executed in the real world.
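In the spirit of the framework above, the following is a small agent-based epidemic sketch on a crude random-mixing contact structure. The population size, contact rate and transmission/recovery probabilities are assumptions for illustration, not the paper's synthetic-university model or its intervention scenarios.

```python
# Toy agent-based SIR simulation with random daily contacts.
import numpy as np

rng = np.random.default_rng(8)
n = 2000                                      # synthetic "students"
k = 8                                         # contacts per infected agent per day
beta, gamma = 0.05, 0.2                       # per-contact infection, recovery prob.

state = np.zeros(n, dtype=int)                # 0 = S, 1 = I, 2 = R
state[rng.choice(n, 5, replace=False)] = 1    # initial cases

history = []
for day in range(120):
    infected = np.where(state == 1)[0]
    for i in infected:
        contacts = rng.choice(n, k, replace=False)
        sus = contacts[state[contacts] == 0]
        state[sus[rng.random(len(sus)) < beta]] = 1   # new infections
    state[(state == 1) & (rng.random(n) < gamma)] = 2  # recoveries
    history.append(int((state == 1).sum()))

print("peak daily prevalence:", max(history),
      "final attack rate:", round((state == 2).sum() / n, 2))
```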
NASA Astrophysics Data System (ADS)
Marghany, Maged
2014-06-01
A critical challenge in urban areas is slums. In fact, they are considered a source of crime and disease due to poor-quality housing, unsanitary conditions, poor infrastructure, and insecure occupancy. The poor in dense urban slums are the most vulnerable to infection due to (i) inadequate and restricted access to safe drinking water and sufficient quantities of water for personal hygiene; (ii) the lack of removal and treatment of excreta; and (iii) the lack of removal of solid waste. This study aims to investigate the capability of ENVISAT ASAR satellite data and Google Earth data for three-dimensional (3-D) reconstruction of urban slums in developing countries such as Egypt. The main objective of this work is to apply an automatic 3-D detection algorithm to urban slums in ENVISAT ASAR and Google Earth images acquired over Cairo, Egypt, using a fuzzy B-spline algorithm. The results show that the fuzzy algorithm is the best indicator of chaotic urban slums, as it can discriminate them from their surrounding environment. The combination of fuzzy and B-spline algorithms is then used to reconstruct the urban slums in 3-D. The results show that urban slums, the road network, and infrastructure are perfectly discriminated. It can therefore be concluded that the fuzzy algorithm is an appropriate algorithm for automatic detection of chaotic urban slums in ENVISAT ASAR and Google Earth data.
Advancements to the planogram frequency–distance rebinning algorithm
Champley, Kyle M; Raylman, Raymond R; Kinahan, Paul E
2010-01-01
In this paper we consider the task of image reconstruction in positron emission tomography (PET) with the planogram frequency–distance rebinning (PFDR) algorithm. The PFDR algorithm is a rebinning algorithm for PET systems with panel detectors. The algorithm is derived in the planogram coordinate system which is a native data format for PET systems with panel detectors. A rebinning algorithm averages over the redundant four-dimensional set of PET data to produce a three-dimensional set of data. Images can be reconstructed from this rebinned three-dimensional set of data. This process enables one to reconstruct PET images more quickly than reconstructing directly from the four-dimensional PET data. The PFDR algorithm is an approximate rebinning algorithm. We show that implementing the PFDR algorithm followed by the (ramp) filtered backprojection (FBP) algorithm in linogram coordinates from multiple views reconstructs a filtered version of our image. We develop an explicit formula for this filter which can be used to achieve exact reconstruction by means of a modified FBP algorithm applied to the stack of rebinned linograms and can also be used to quantify the errors introduced by the PFDR algorithm. This filter is similar to the filter in the planogram filtered backprojection algorithm derived by Brasse et al. The planogram filtered backprojection and exact reconstruction with the PFDR algorithm require complete projections which can be completed with a reprojection algorithm. The PFDR algorithm is similar to the rebinning algorithm developed by Kao et al. By expressing the PFDR algorithm in detector coordinates, we provide a comparative analysis between the two algorithms. Numerical experiments using both simulated data and measured data from a positron emission mammography/tomography (PEM/PET) system are performed. Images are reconstructed by PFDR+FBP (PFDR followed by 2D FBP reconstruction), PFDRX (PFDR followed by the modified FBP algorithm for exact reconstruction) and planogram filtered backprojection image reconstruction algorithms. We show that the PFDRX algorithm produces images that are nearly as accurate as images reconstructed with the planogram filtered backprojection algorithm and more accurate than images reconstructed with the PFDR+FBP algorithm. Both the PFDR+FBP and PFDRX algorithms provide a dramatic improvement in computation time over the planogram filtered backprojection algorithm. PMID:20436790
Automatic inference of multicellular regulatory networks using informative priors.
Sun, Xiaoyun; Hong, Pengyu
2009-01-01
To fully understand the mechanisms governing animal development, computational models and algorithms are needed to enable quantitative studies of the underlying regulatory networks. We developed a mathematical model based on dynamic Bayesian networks to model multicellular regulatory networks that govern cell differentiation processes. A machine-learning method was developed to automatically infer such a model from heterogeneous data. We show that the model inference procedure can be greatly improved by incorporating interaction data across species. The proposed approach was applied to C. elegans vulval induction to reconstruct a model capable of simulating C. elegans vulval induction under 73 different genetic conditions.
Brouwer, Darren H
2013-01-01
An algorithm is presented for solving the structures of silicate network materials such as zeolites or layered silicates from solid-state (29)Si double-quantum NMR data for situations in which the crystallographic space group is not known. The algorithm is explained and illustrated in detail using a hypothetical two-dimensional network structure as a working example. The algorithm involves an atom-by-atom structure building process in which candidate partial structures are evaluated according to their agreement with Si-O-Si connectivity information, symmetry restraints, and fits to (29)Si double quantum NMR curves followed by minimization of a cost function that incorporates connectivity, symmetry, and quality of fit to the double quantum curves. The two-dimensional network material is successfully reconstructed from hypothetical NMR data that can be reasonably expected to be obtained for real samples. This advance in "NMR crystallography" is expected to be important for structure determination of partially ordered silicate materials for which diffraction provides very limited structural information. Copyright © 2013 Elsevier Inc. All rights reserved.
de Santos-Sierra, Daniel; Sendiña-Nadal, Irene; Leyva, Inmaculada; Almendral, Juan A; Ayali, Amir; Anava, Sarit; Sánchez-Ávila, Carmen; Boccaletti, Stefano
2015-06-01
Large-scale phase-contrast images taken at high resolution throughout the life of a cultured neuronal network are analyzed by a graph-based unsupervised segmentation algorithm with a very low computational cost, scaling linearly with the image size. The processing automatically retrieves the whole network structure, an object whose mathematical representation is a matrix in which nodes are identified neurons or clusters of neurons, and links are the reconstructed connections between them. The algorithm is also able to extract any other relevant morphological information characterizing neurons and neurites. More importantly, and at variance with other segmentation methods that require fluorescence imaging from immunocytochemistry techniques, our non-invasive measurements enable us to perform a longitudinal analysis during the maturation of a single culture. Such an analysis provides a way of identifying the main physical processes underlying the self-organization of the neuronal ensemble into a complex network, and drives the formulation of a phenomenological model that is nonetheless able to describe qualitatively the overall scenario observed during the culture growth. © 2014 International Society for Advancement of Cytometry.
NASA Astrophysics Data System (ADS)
La Foy, Roderick; Vlachos, Pavlos
2011-11-01
An optimally designed MLOS tomographic reconstruction algorithm for use in 3D PIV and PTV applications is analyzed. Using a set of optimized reconstruction parameters, the reconstructions produced by the MLOS algorithm are shown to be comparable to reconstructions produced by the MART algorithm for a range of camera geometries, camera numbers, and particle seeding densities. The resultant velocity field error calculated using PIV and PTV algorithms is further minimized by applying both pre- and post-processing to the reconstructed data sets.
Reconstruction of Tissue-Specific Metabolic Networks Using CORDA
Schultz, André; Qutub, Amina A.
2016-01-01
Human metabolism involves thousands of reactions and metabolites. To interpret this complexity, computational modeling becomes an essential experimental tool. One of the most popular techniques to study human metabolism as a whole is genome scale modeling. A key challenge to applying genome scale modeling is identifying critical metabolic reactions across diverse human tissues. Here we introduce a novel algorithm called Cost Optimization Reaction Dependency Assessment (CORDA) to build genome scale models in a tissue-specific manner. CORDA performs more efficiently computationally, shows better agreement to experimental data, and displays better model functionality and capacity when compared to previous algorithms. CORDA also returns reaction associations that can greatly assist in any manual curation to be performed following the automated reconstruction process. Using CORDA, we developed a library of 76 healthy and 20 cancer tissue-specific reconstructions. These reconstructions identified which metabolic pathways are shared across diverse human tissues. Moreover, we identified changes in reactions and pathways that are differentially included and present different capacity profiles in cancer compared to healthy tissues, including up-regulation of folate metabolism, the down-regulation of thiamine metabolism, and tight regulation of oxidative phosphorylation. PMID:26942765
Kim, Hyungjin; Park, Chang Min; Lee, Myunghee; Park, Sang Joon; Song, Yong Sub; Lee, Jong Hyuk; Hwang, Eui Jin; Goo, Jin Mo
2016-01-01
To identify the impact of reconstruction algorithms on CT radiomic features of pulmonary tumors and to reveal and compare the intra- and inter-reader and inter-reconstruction algorithm variability of each feature. Forty-two patients (M:F = 19:23; mean age, 60.43±10.56 years) with 42 pulmonary tumors (22.56±8.51 mm) underwent contrast-enhanced CT scans, which were reconstructed with filtered back projection and a commercial iterative reconstruction algorithm (levels 3 and 5). Two readers independently segmented the whole tumor volume. Fifteen radiomic features were extracted and compared among reconstruction algorithms. Intra- and inter-reader variability and inter-reconstruction algorithm variability were calculated using coefficients of variation (CVs) and then compared. Among the 15 features, 5 first-order tumor intensity features and 4 gray level co-occurrence matrix (GLCM)-based features showed significant differences (p<0.05) among reconstruction algorithms. As for the variability, effective diameter, sphericity, entropy, and GLCM entropy were the most robust features (CV≤5%). Inter-reader variability was larger than intra-reader or inter-reconstruction algorithm variability for 9 features. However, for entropy, homogeneity, and 4 GLCM-based features, inter-reconstruction algorithm variability was significantly greater than inter-reader variability (p<0.013). Most of the radiomic features were significantly affected by the reconstruction algorithms. Inter-reconstruction algorithm variability was greater than inter-reader variability for entropy, homogeneity, and GLCM-based features.
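Assuming per-tumor feature values are available for each reader and each reconstruction setting, the coefficient-of-variation comparison described above can be computed along the following lines; the feature values and groupings in this sketch are hypothetical.

```python
import numpy as np

def coefficient_of_variation(values):
    """CV (%) of repeated measurements of one radiomic feature for one tumor."""
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()

# Hypothetical entropy values for one tumor across three reconstructions
# (FBP, iterative level 3, iterative level 5), same reader -> inter-reconstruction CV
inter_recon = coefficient_of_variation([4.82, 4.75, 4.69])
# Hypothetical entropy values from two readers on the same FBP reconstruction -> inter-reader CV
inter_reader = coefficient_of_variation([4.82, 4.97])
print(f"inter-reconstruction CV: {inter_recon:.1f}%, inter-reader CV: {inter_reader:.1f}%")
```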
Large-scale urban point cloud labeling and reconstruction
NASA Astrophysics Data System (ADS)
Zhang, Liqiang; Li, Zhuqiang; Li, Anjian; Liu, Fangyu
2018-04-01
The large number of object categories and the many overlapping or closely neighboring objects in large-scale urban scenes pose great challenges for point cloud classification. In this paper, a novel framework is proposed for classification and reconstruction of airborne laser scanning point cloud data. To label point clouds, we present a rectified-linear-unit neural network, named ReLu-NN, in which rectified linear units (ReLU) rather than the traditional sigmoid are taken as the activation function in order to speed up convergence. Since the features of the point cloud are sparse, we reduce the number of active neurons with dropout to avoid over-fitting during training. The set of feature descriptors for each 3D point is encoded through self-taught learning and forms a discriminative feature representation that is taken as the input of the ReLu-NN. The segmented building points are consolidated through an edge-aware point set resampling algorithm, and then they are reconstructed into 3D lightweight models using the 2.5D contouring method (Zhou and Neumann, 2010). Compared with deep learning approaches, the ReLu-NN can easily classify unorganized point clouds without rasterizing the data, and it does not need a large number of training samples. Most of the parameters in the network are learned, and thus the intensive parameter tuning cost is significantly reduced. Experimental results on various datasets demonstrate that the proposed framework achieves better performance than other related algorithms in terms of classification accuracy and reconstruction quality.
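The exact architecture of ReLu-NN is not specified in this summary. The sketch below is only a generic per-point classifier in the spirit described, ReLU activations plus dropout applied to a hand-crafted feature descriptor; all layer sizes, feature dimensions, and class counts are placeholders, and PyTorch is assumed.

```python
import torch
import torch.nn as nn

class PointClassifier(nn.Module):
    """Per-point classifier: feature descriptor in, class scores out.
    ReLU activations and dropout, roughly in the spirit of the ReLu-NN above."""
    def __init__(self, n_features=48, n_classes=6, hidden=128, p_drop=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x)

# Hypothetical usage: one training step on random stand-in descriptors
model = PointClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
features = torch.randn(1024, 48)                 # 1024 points, 48-d descriptors
labels = torch.randint(0, 6, (1024,))
loss = nn.CrossEntropyLoss()(model(features), labels)
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```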
An algebra-based method for inferring gene regulatory networks.
Vera-Licona, Paola; Jarrah, Abdul; Garcia-Puente, Luis David; McGee, John; Laubenbacher, Reinhard
2014-03-26
The inference of gene regulatory networks (GRNs) from experimental observations is at the heart of systems biology. This includes the inference of both the network topology and its dynamics. While there are many algorithms available to infer the network topology from experimental data, less emphasis has been placed on methods that infer network dynamics. Furthermore, since the network inference problem is typically underdetermined, it is essential to have the option of incorporating prior knowledge about the network into the inference process, along with an effective description of the search space of dynamic models. Finally, it is also important to have an understanding of how a given inference method is affected by experimental and other noise in the data used. This paper contains a novel inference algorithm using the algebraic framework of Boolean polynomial dynamical systems (BPDS), meeting all these requirements. The algorithm takes as input time series data, including those from network perturbations, such as knock-out mutant strains and RNAi experiments. It allows for the incorporation of prior biological knowledge while being robust to significant levels of noise in the data used for inference. It uses an evolutionary algorithm for local optimization with an encoding of the mathematical models as BPDS. The BPDS framework allows an effective representation of the search space for algebraic dynamic models that improves computational performance. The algorithm is validated with both simulated and experimental microarray expression profile data. Robustness to noise is tested using a published mathematical model of the segment polarity gene network in Drosophila melanogaster. Benchmarking of the algorithm is done by comparison with a spectrum of state-of-the-art network inference methods on data from the synthetic IRMA network to demonstrate that our method has good precision and recall for the network reconstruction task, while also predicting several of the dynamic patterns present in the network. Boolean polynomial dynamical systems provide a powerful modeling framework for the reverse engineering of gene regulatory networks that enables a rich mathematical structure on the model search space. A C++ implementation of the method, distributed under the LGPL license, is available, together with the source code, at http://www.paola-vera-licona.net/Software/EARevEng/REACT.html. PMID:24669835
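As a toy illustration of the modeling framework named above (the Boolean polynomial dynamical system, not the inference algorithm itself), each node's update rule can be written as a polynomial over GF(2), where AND is multiplication, XOR is addition, NOT x is 1 + x, and OR(x, y) is x + y + xy. The three-gene rules below are hypothetical.

```python
# Toy Boolean polynomial dynamical system over GF(2) (arithmetic mod 2).
def step(state):
    x1, x2, x3 = state
    new_x1 = x1 * 0 + x2 * x3 % 2        # x1 turns on when x2 AND x3 are on
    new_x2 = (1 + x3) % 2                # x2 is repressed by x3 (NOT x3)
    new_x3 = (x1 + x2 + x1 * x2) % 2     # x3 follows x1 OR x2
    return (new_x1, new_x2, new_x3)

# Iterate from one initial condition until the trajectory repeats (limit cycle or fixed point)
state, seen = (1, 0, 1), []
while state not in seen:
    seen.append(state)
    state = step(state)
print("trajectory:", seen, "-> re-enters at", state)
```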
NASA Astrophysics Data System (ADS)
Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua
2016-07-01
Based on an analysis of the cosine light field, which has a determined analytic expression, and of the pseudo-inverse method, the object is illuminated by a preset light field defined by a determined discrete Fourier transform measurement matrix, and the object image is reconstructed by the pseudo-inverse method. The analytic expression of this computational ghost imaging algorithm based on a discrete Fourier transform measurement matrix is deduced theoretically and compared with the algorithm of compressive computational ghost imaging based on a random measurement matrix. The reconstruction process and the reconstruction error are analyzed, and simulations are performed to verify the theoretical analysis. When the number of sampling measurements is similar to the number of object pixels, the rank of the discrete Fourier transform matrix is the same as that of the random measurement matrix, the PSNR of the images reconstructed by the FGI and PGI algorithms is similar, and the reconstruction error of the traditional CGI algorithm is lower than that of the images reconstructed by the FGI and PGI algorithms. As the number of sampling measurements decreases, the PSNR of the image reconstructed by the FGI algorithm decreases slowly, while the PSNR of the images reconstructed by the PGI and CGI algorithms decreases sharply. The reconstruction time of the FGI algorithm is lower than that of the other algorithms and is not affected by the number of sampling measurements. The FGI algorithm can effectively filter out random white noise through a low-pass filter and realize denoising during reconstruction, with a higher denoising capability than the CGI algorithm. The FGI algorithm can thus improve both the reconstruction accuracy and the reconstruction speed of computational ghost imaging.
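The paper's exact construction of the illumination patterns is not given in this summary. Assuming one row of the discrete Fourier transform matrix per measurement, the pseudo-inverse reconstruction step can be sketched as follows; the complex-valued patterns here ignore the physical non-negativity of real light fields and are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32 * 32                                  # flattened 32x32 object
x = rng.random(n)                            # stand-in object reflectivities

F = np.fft.fft(np.eye(n)) / np.sqrt(n)       # unitary DFT "measurement matrix"
rows = rng.choice(n, size=int(0.8 * n), replace=False)
A = F[rows, :]                               # keep 80% of the patterns (under-sampling)

y = A @ x                                    # one bucket value per illumination pattern
x_hat = np.real(np.linalg.pinv(A) @ y)       # pseudo-inverse (minimum-norm) reconstruction
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```

With full sampling the pseudo-inverse recovers the object exactly; under-sampling yields the minimum-norm estimate, whose error grows as measurements are removed, consistent with the trend described above.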
Accelerated Optical Projection Tomography Applied to In Vivo Imaging of Zebrafish
Correia, Teresa; Yin, Jun; Ramel, Marie-Christine; Andrews, Natalie; Katan, Matilda; Bugeon, Laurence; Dallman, Margaret J.; McGinty, James; Frankel, Paul; French, Paul M. W.; Arridge, Simon
2015-01-01
Optical projection tomography (OPT) provides a non-invasive 3-D imaging modality that can be applied to longitudinal studies of live disease models, including in zebrafish. Current limitations include the requirement of a minimum number of angular projections for reconstruction of reasonable OPT images using filtered back projection (FBP), which is typically several hundred, leading to acquisition times of several minutes. It is highly desirable to decrease the number of required angular projections to decrease both the total acquisition time and the light dose to the sample. This is particularly important to enable longitudinal studies, which involve measurements of the same fish at different time points. In this work, we demonstrate that the use of an iterative algorithm to reconstruct sparsely sampled OPT data sets can provide useful 3-D images with 50 or fewer projections, thereby significantly decreasing the minimum acquisition time and light dose while maintaining image quality. A transgenic zebrafish embryo with fluorescent labelling of the vasculature was imaged to acquire densely sampled (800 projections) and under-sampled data sets of transmitted and fluorescence projection images. The under-sampled OPT data sets were reconstructed using an iterative total variation-based image reconstruction algorithm and compared against FBP reconstructions of the densely sampled data sets. To illustrate the potential for quantitative analysis following rapid OPT data acquisition, a Hessian-based method was applied to automatically segment the reconstructed images to select the vasculature network. Results showed that 3-D images of the zebrafish embryo and its vasculature of sufficient visual quality for quantitative analysis can be reconstructed using the iterative algorithm from only 32 projections—achieving up to 28 times improvement in imaging speed and leading to total acquisition times of a few seconds. PMID:26308086
Autofocusing in digital holography using deep learning
NASA Astrophysics Data System (ADS)
Ren, Zhenbo; Xu, Zhimin; Lam, Edmund Y.
2018-02-01
In digital holography, it is critical to know the reconstruction distance in order to reconstruct the multi-sectional object. This autofocusing problem is traditionally solved by reconstructing a stack of in-focus and out-of-focus images and using some focus metric, such as entropy or variance, to calculate the sharpness of each reconstructed image. The distance corresponding to the sharpest image is then taken as the focal position. This method is effective but computationally demanding and time-consuming: to obtain an accurate estimate, one has to reconstruct many images, and sometimes a coarse search must be followed by a refinement. To overcome this problem, we propose to use deep learning, i.e., a convolutional neural network (CNN), for autofocusing. Autofocusing is viewed as a classification problem in which the true distance is treated as a label, so estimating the distance is equivalent to labeling a hologram correctly. To train such an algorithm, a total of 1000 holograms are captured under identical conditions (exposure time, incident angle, object), varying only the distance. There are 5 labels corresponding to 5 distances. These data are randomly split into three datasets to train, validate and test a CNN. Experimental results show that the trained network is capable of predicting the distance without reconstructing any image or knowing any physical parameters of the setup. The prediction time of this method is far less than that of traditional autofocusing methods.
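The network architecture, hologram size, and training details used in the paper are not given here. The sketch below is a generic PyTorch CNN that classifies a hologram patch into one of five distance labels, matching only the high-level setup described; all sizes are placeholders.

```python
import torch
import torch.nn as nn

class FocusNet(nn.Module):
    """Toy CNN that maps a hologram patch to one of five distance labels."""
    def __init__(self, n_labels=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_labels)   # assumes 64x64 input patches

    def forward(self, x):
        h = self.features(x)
        return self.classifier(h.flatten(1))

# Hypothetical usage: classify a batch of 64x64 hologram patches
model = FocusNet()
patches = torch.randn(8, 1, 64, 64)
logits = model(patches)
predicted_label = logits.argmax(dim=1)     # index of the predicted focal distance
print(predicted_label)
```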
ROBNCA: robust network component analysis for recovering transcription factor activities.
Noor, Amina; Ahmad, Aitzaz; Serpedin, Erchin; Nounou, Mohamed; Nounou, Hazem
2013-10-01
Network component analysis (NCA) is an efficient method of reconstructing the transcription factor activity (TFA), which makes use of the gene expression data and prior information available about transcription factor (TF)-gene regulations. Most of the contemporary algorithms either exhibit the drawback of inconsistency and poor reliability, or suffer from prohibitive computational complexity. In addition, the existing algorithms do not possess the ability to counteract the presence of outliers in the microarray data. Hence, robust and computationally efficient algorithms are needed to enable practical applications. We propose ROBust Network Component Analysis (ROBNCA), a novel iterative algorithm that explicitly models the possible outliers in the microarray data. An attractive feature of the ROBNCA algorithm is the derivation of a closed form solution for estimating the connectivity matrix, which was not available in prior contributions. The ROBNCA algorithm is compared with FastNCA and the non-iterative NCA (NI-NCA). ROBNCA estimates the TF activity profiles as well as the TF-gene control strength matrix with a much higher degree of accuracy than FastNCA and NI-NCA, irrespective of varying noise, correlation and/or amount of outliers in case of synthetic data. The ROBNCA algorithm is also tested on Saccharomyces cerevisiae data and Escherichia coli data, and it is observed to outperform the existing algorithms. The run time of the ROBNCA algorithm is comparable with that of FastNCA, and is hundreds of times faster than NI-NCA. The ROBNCA software is available at http://people.tamu.edu/∼amina/ROBNCA
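ROBNCA's outlier model and closed-form connectivity update are beyond this summary. The sketch below shows only the basic NCA decomposition E ≈ AS, where A carries a known zero pattern (the prior TF-gene connectivity) and S holds the transcription factor activities, fitted here by a naive alternating least-squares loop rather than by ROBNCA itself.

```python
import numpy as np

def nca_als(E, mask, n_iter=200, rng=None):
    """Naive alternating least squares for E ≈ A @ S,
    where mask (genes x TFs) fixes which entries of A may be nonzero."""
    rng = rng or np.random.default_rng(0)
    n_genes, n_tfs = mask.shape
    A = mask * rng.standard_normal(mask.shape)
    for _ in range(n_iter):
        S = np.linalg.lstsq(A, E, rcond=None)[0]           # update TF activities
        for g in range(n_genes):                           # update each gene's control strengths
            idx = np.flatnonzero(mask[g])
            if idx.size:
                A[g, idx] = np.linalg.lstsq(S[idx].T, E[g], rcond=None)[0]
    return A, S

# Synthetic test: 30 genes, 4 TFs, 20 samples, known sparse connectivity
rng = np.random.default_rng(1)
mask = (rng.random((30, 4)) < 0.4).astype(float)
A_true = mask * rng.standard_normal((30, 4))
S_true = rng.standard_normal((4, 20))
E = A_true @ S_true + 0.01 * rng.standard_normal((30, 20))
A_hat, S_hat = nca_als(E, mask)
print("residual:", np.linalg.norm(E - A_hat @ S_hat) / np.linalg.norm(E))
```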
NASA Astrophysics Data System (ADS)
Bennett, C.; Dunne, J. F.; Trimby, S.; Richardson, D.
2017-02-01
A recurrent non-linear autoregressive with exogenous input (NARX) neural network is proposed, and a suitable fully-recurrent training methodology is adapted and tuned, for reconstructing cylinder pressure in multi-cylinder IC engines using measured crank kinematics. This type of indirect sensing is important for cost effective closed-loop combustion control and for On-Board Diagnostics. The challenge addressed is to accurately predict cylinder pressure traces within the cycle under generalisation conditions: i.e. using data not previously seen by the network during training. This involves direct construction and calibration of a suitable inverse crank dynamic model, which owing to singular behaviour at top-dead-centre (TDC), has proved difficult via physical model construction, calibration, and inversion. The NARX architecture is specialised and adapted to cylinder pressure reconstruction, using a fully-recurrent training methodology which is needed because the alternatives are too slow and unreliable for practical network training on production engines. The fully-recurrent Robust Adaptive Gradient Descent (RAGD) algorithm, is tuned initially using synthesised crank kinematics, and then tested on real engine data to assess the reconstruction capability. Real data is obtained from a 1.125 l, 3-cylinder, in-line, direct injection spark ignition (DISI) engine involving synchronised measurements of crank kinematics and cylinder pressure across a range of steady-state speed and load conditions. The paper shows that a RAGD-trained NARX network using both crank velocity and crank acceleration as input information, provides fast and robust training. By using the optimum epoch identified during RAGD training, acceptably accurate cylinder pressures, and especially accurate location-of-peak-pressure, can be reconstructed robustly under generalisation conditions, making it the most practical NARX configuration and recurrent training methodology for use on production engines.
Data-Driven Neural Network Model for Robust Reconstruction of Automobile Casting
NASA Astrophysics Data System (ADS)
Lin, Jinhua; Wang, Yanjie; Li, Xin; Wang, Lu
2017-09-01
In computer vision systems, it is a challenging task to robustly reconstruct the complex 3D geometries of automobile castings: 3D scanning data are usually corrupted by noise and the scanning resolution is low, which normally leads to incomplete matching and drift. In order to solve these problems, a data-driven local geometric learning model is proposed to achieve robust reconstruction of automobile castings. To reduce the interference of sensor noise and remain compatible with incomplete scanning data, a 3D convolutional neural network is established to match the local geometric features of automobile castings. The proposed neural network combines the geometric feature representation with a correlation metric function to robustly match local correspondences. We use the truncated distance field (TDF) around each key point to represent the 3D surface of the casting geometry, so that the model can be embedded directly into 3D space to learn the geometric feature representation. Finally, the training labels are generated automatically for deep learning based on an existing RGB-D reconstruction algorithm, which yields the same global key matching descriptor. The experimental results show that the matching accuracy of our network is 92.2% for automobile castings, and the closed-loop rate is about 74.0% when the matching tolerance threshold τ is 0.2. The matching descriptors performed well, retaining 81.6% matching accuracy at a 95% closed-loop rate. For sparse casting geometries where initial matching fails, the 3D object can still be reconstructed robustly by training the key descriptors. Our method performs 3D reconstruction robustly for complex automobile castings.
Jones matrix polarization-correlation mapping of biological crystals networks
NASA Astrophysics Data System (ADS)
Ushenko, O. G.; Ushenko, Yu. O.; Pidkamin, L. Y.; Sidor, M. I.; Vanchuliak, O.; Motrich, A. V.; Gorsky, M. P.; Meglinskiy, I.; Marchuk, Yu. F.
2017-08-01
An optical model for the Jones-matrix description of the mechanisms of optical anisotropy of polycrystalline films of human bile, namely optical activity and birefringence, has been proposed. An algorithm for reconstructing the distributions of the corresponding parameters, the optical rotation angles and phase shifts of the indicated anisotropy types, has been elaborated. Objective criteria for differentiating bile films taken from healthy donors and from patients with cholelithiasis by means of statistical analysis of such distributions have been determined. The operational characteristics (sensitivity, specificity, and accuracy) of the Jones-matrix method of reconstructing optical anisotropy parameters were defined.
Research on compressive sensing reconstruction algorithm based on total variation model
NASA Astrophysics Data System (ADS)
Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin
2017-12-01
Compressed sensing breaks through the Nyquist sampling theorem and provides a strong theoretical foundation that allows compressive sampling of image signals to be carried out during acquisition. In imaging procedures based on compressed sensing theory, it not only reduces the storage space but also greatly reduces the demand for detector resolution. By exploiting the sparsity of the image signal and solving the mathematical model of the inverse reconstruction, super-resolution imaging can be realized. The reconstruction algorithm is the most critical part of compressed sensing and to a large extent determines the accuracy of the reconstructed image. A reconstruction algorithm based on the total variation (TV) model is well suited to compressive reconstruction of two-dimensional images and better preserves edge information. In order to verify the performance of the algorithm, the reconstruction results of the TV-based algorithm are simulated and analyzed under different coding modes to verify the stability of the algorithm. This paper also compares and analyzes typical reconstruction algorithms under the same coding mode. On the basis of the minimum total variation algorithm, an augmented Lagrangian term is added and the optimal value is solved by the alternating direction method. Experimental results show that, compared with traditional classical algorithms, the TV-based reconstruction algorithm has great advantages: at low measurement rates it can quickly and accurately recover the target image.
NASA Astrophysics Data System (ADS)
Jin, Dakai; Lu, Jia; Zhang, Xiaoliu; Chen, Cheng; Bai, ErWei; Saha, Punam K.
2017-03-01
Osteoporosis is associated with increased fracture risk. Recent advancement in the area of in vivo imaging allows segmentation of trabecular bone (TB) microstructures, which is a known key determinant of bone strength and fracture risk. An accurate biomechanical modelling of TB micro-architecture provides a comprehensive summary measure of bone strength and fracture risk. In this paper, a new direct TB biomechanical modelling method using nonlinear manifold-based volumetric reconstruction of the trabecular network is presented. It is accomplished in two sequential modules. The first module reconstructs a nonlinear manifold-based volumetric representation of TB networks from three-dimensional digital images. Specifically, it starts with the fuzzy digital segmentation of a TB network and computes its surface and curve skeletons. An individual trabecula is identified as a topological segment in the curve skeleton. Using geometric analysis, smoothing and optimization techniques, the algorithm generates smooth, curved, and continuous representations of individual trabeculae glued at their junctions. The method also generates a geometrically consistent TB volume at junctions. In the second module, a direct computational biomechanical stress-strain analysis is applied to the reconstructed TB volume to predict mechanical measures. The accuracy of the method was examined using micro-CT imaging of cadaveric distal tibia specimens (N = 12). A high linear correlation (r = 0.95) between TB volume computed using the new manifold-modelling algorithm and that directly derived from the voxel-based micro-CT images was observed. Young's modulus (YM) was computed using direct mechanical analysis on the TB manifold model over a cubical volume of interest (VOI), and its correlation with the YM computed using micro-CT based conventional finite-element analysis over the same VOI was examined. A moderate linear correlation (r = 0.77) was observed between the two YM measures. These preliminary results show the accuracy of the new nonlinear manifold modelling algorithm for TB and demonstrate the feasibility of a new direct mechanical stress-strain analysis on a nonlinear manifold model of a highly complex biological structure.
Automation on the generation of genome-scale metabolic models.
Reyes, R; Gamermann, D; Montagud, A; Fuente, D; Triana, J; Urchueguía, J F; de Córdoba, P Fernández
2012-12-01
Nowadays, the reconstruction of genome-scale metabolic models is a non-automated and interactive process based on decision making. This lengthy process usually requires a full year of one person's work in order to satisfactorily collect, analyze, and validate the list of all metabolic reactions present in a specific organism. In order to write this list, one has to go manually through a huge amount of genomic, metabolomic, and physiological information. Currently, there is no optimal algorithm that allows one to automatically go through all this information and generate the models while taking into account probabilistic criteria of uniqueness and completeness that a biologist would consider. This work presents the automation of a methodology for the reconstruction of genome-scale metabolic models for any organism. The methodology is the automated version of the steps implemented manually for the reconstruction of the genome-scale metabolic model of a photosynthetic organism, Synechocystis sp. PCC6803. The steps for the reconstruction are implemented in a computational platform (COPABI) that generates the models from the probabilistic algorithms that have been developed. To validate the robustness of the developed algorithm, the metabolic models of several organisms generated by the platform have been studied together with published models that have been manually curated. Network properties of the models, such as connectivity and average shortest path length, have been compared and analyzed.
The Applied Mathematics for Power Systems (AMPS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chertkov, Michael
2012-07-24
Increased deployment of new technologies, e.g., renewable generation and electric vehicles, is rapidly transforming electrical power networks by crossing previously distinct spatiotemporal scales and invalidating many traditional approaches for designing, analyzing, and operating power grids. This trend is expected to accelerate over the coming years, bringing the disruptive challenge of complexity, but also opportunities to deliver unprecedented efficiency and reliability. Our Applied Mathematics for Power Systems (AMPS) Center will discover, enable, and solve emerging mathematics challenges arising in power systems and, more generally, in complex engineered networks. We will develop foundational applied mathematics resulting in rigorous algorithms and simulation toolboxes for modern and future engineered networks. The AMPS Center deconstruction/reconstruction approach 'deconstructs' complex networks into sub-problems within non-separable spatiotemporal scales, a missing step in 20th century modeling of engineered networks. These sub-problems are addressed within the appropriate AMPS foundational pillar - complex systems, control theory, and optimization theory - and merged or 'reconstructed' at their boundaries into more general mathematical descriptions of complex engineered networks where important new questions are formulated and attacked. These two steps, iterated multiple times, will bridge the growing chasm between the legacy power grid and its future as a complex engineered network.
Reverse Engineering Validation using a Benchmark Synthetic Gene Circuit in Human Cells
Kang, Taek; White, Jacob T.; Xie, Zhen; Benenson, Yaakov; Sontag, Eduardo; Bleris, Leonidas
2013-01-01
Multi-component biological networks are often understood incompletely, in large part due to the lack of reliable and robust methodologies for network reverse engineering and characterization. As a consequence, developing automated and rigorously validated methodologies for unraveling the complexity of biomolecular networks in human cells remains a central challenge to life scientists and engineers. Today, when it comes to experimental and analytical requirements, there exists a great deal of diversity in reverse engineering methods, which renders the independent validation and comparison of their predictive capabilities difficult. In this work we introduce an experimental platform customized for the development and verification of reverse engineering and pathway characterization algorithms in mammalian cells. Specifically, we stably integrate a synthetic gene network in human kidney cells and use it as a benchmark for validating reverse engineering methodologies. The network, which is orthogonal to endogenous cellular signaling, contains a small set of regulatory interactions that can be used to quantify the reconstruction performance. By performing successive perturbations to each modular component of the network and comparing protein and RNA measurements, we study the conditions under which we can reliably reconstruct the causal relationships of the integrated synthetic network. PMID:23654266
NASA Technical Reports Server (NTRS)
Sayood, K.; Chen, Y. C.; Wang, X.
1992-01-01
During this reporting period we have worked on three somewhat different problems. These are modeling of video traffic in packet networks, low rate video compression, and the development of a lossy + lossless image compression algorithm, which might have some application in browsing algorithms. The lossy + lossless scheme is an extension of work previously done under this grant. It provides a simple technique for incorporating browsing capability. The low rate coding scheme is also a simple variation on the standard discrete cosine transform (DCT) coding approach. In spite of its simplicity, the approach provides surprisingly high quality reconstructions. The modeling approach is borrowed from the speech recognition literature, and seems to be promising in that it provides a simple way of obtaining an idea about the second order behavior of a particular coding scheme. Details about these are presented.
A neural network technique for remeshing of bone microstructure.
Fischer, Anath; Holdstein, Yaron
2012-01-01
Today, there is major interest within the biomedical community in developing accurate noninvasive means for the evaluation of bone microstructure and bone quality. Recent improvements in 3D imaging technology, among them the development of micro-CT and micro-MRI scanners, allow in-vivo 3D high-resolution scanning and reconstruction of large specimens or even whole bone models. Thus, the tendency today is to evaluate bone features using 3D assessment techniques rather than traditional 2D methods. For this purpose, high-quality meshing methods are required. However, the 3D meshes produced by current commercial systems are usually of low quality with respect to analysis and rapid prototyping. 3D model reconstruction of bone is difficult due to the complexity of bone microstructure. The small bone features lead to a great deal of neighborhood ambiguity near each vertex. The relatively new neural network method for mesh reconstruction has the potential to create or remesh 3D models accurately and quickly. A neural network (NN), which resembles an artificial intelligence (AI) algorithm, is a set of interconnected neurons, where each neuron is capable of making an autonomous arithmetic calculation and is affected by its surrounding neurons through the structure of the network. This paper proposes an extension of the growing neural gas (GNG) neural network technique for remeshing a triangular manifold mesh that represents bone microstructure. This method has the advantage of reconstructing the surface of a genus-n freeform object without a priori knowledge regarding the original object, its topology, or its shape.
Tabu Search enhances network robustness under targeted attacks
NASA Astrophysics Data System (ADS)
Sun, Shi-wen; Ma, Yi-lin; Li, Rui-qi; Wang, Li; Xia, Cheng-yi
2016-03-01
We focus on the optimization of network robustness with respect to intentional attacks on high-degree nodes. Given an existing network, this problem can be considered as a typical single-objective combinatorial optimization problem. Based on the heuristic Tabu Search optimization algorithm, a link-rewiring method is applied to reconstruct the network while keeping the degree of every node unchanged. Through numerical simulations, BA scale-free network and two real-world networks are investigated to verify the effectiveness of the proposed optimization method. Meanwhile, we analyze how the optimization affects other topological properties of the networks, including natural connectivity, clustering coefficient and degree-degree correlation. The current results can help to improve the robustness of existing complex real-world systems, as well as to provide some insights into the design of robust networks.
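The Tabu Search bookkeeping (tabu list, aspiration criteria) is not detailed in this summary. The sketch below shows only the underlying move, a degree-preserving double-edge swap, accepted greedily when it improves a Schneider-style robustness score under repeated removal of the highest-degree node; networkx is assumed available and all sizes are illustrative.

```python
import random
import networkx as nx

def robustness(G):
    """Mean largest-component fraction while repeatedly removing the highest-degree node."""
    H = G.copy()
    n, total = G.number_of_nodes(), 0.0
    for _ in range(n - 1):
        node = max(H.degree, key=lambda kv: kv[1])[0]
        H.remove_node(node)
        if H.number_of_nodes():
            total += max(len(c) for c in nx.connected_components(H)) / n
    return total / n

def rewire_for_robustness(G, n_trials=200, rng=None):
    """Greedy degree-preserving double-edge swaps kept only when robustness improves."""
    rng = rng or random.Random(0)
    best = robustness(G)
    for _ in range(n_trials):
        (a, b), (c, d) = rng.sample(list(G.edges()), 2)
        if len({a, b, c, d}) < 4 or G.has_edge(a, d) or G.has_edge(c, b):
            continue
        G.remove_edges_from([(a, b), (c, d)]); G.add_edges_from([(a, d), (c, b)])
        new = robustness(G)
        if new > best:
            best = new
        else:                                   # revert the swap
            G.remove_edges_from([(a, d), (c, b)]); G.add_edges_from([(a, b), (c, d)])
    return G, best

G = nx.barabasi_albert_graph(100, 3, seed=1)
print("before:", robustness(G))
G, score = rewire_for_robustness(G)
print("after: ", score)
```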
Sun, Jiedi; Yu, Yang; Wen, Jiangtao
2017-01-01
Remote monitoring of bearing conditions using wireless sensor networks (WSNs) is a developing trend in the industrial field. In complicated industrial environments, WSNs face three main constraints: low energy, limited memory, and low operational capability. Conventional data-compression methods, which concentrate on data compression only, cannot overcome these limitations. Aiming at these problems, this paper proposes a compressed data acquisition and reconstruction scheme based on Compressed Sensing (CS), a novel signal-processing technique, and applies it to bearing-condition monitoring via a WSN. The compressed data acquisition is realized by projection transformation and can greatly reduce the data volume that the nodes need to process and transmit. The reconstruction of the original signals is achieved in the host computer by complicated algorithms. Bearing vibration signals not only exhibit sparsity, but also have specific structures. This paper introduces the block sparse Bayesian learning (BSBL) algorithm, which utilizes the block property and inherent structures of the signals to reconstruct the CS sparsity coefficients of transform domains and further recover the original signals. By using BSBL, CS reconstruction can be improved remarkably. Experiments and analyses showed that the BSBL method has good performance and is suitable for practical bearing-condition monitoring. PMID:28635623
GPU implementation of prior image constrained compressed sensing (PICCS)
NASA Astrophysics Data System (ADS)
Nett, Brian E.; Tang, Jie; Chen, Guang-Hong
2010-04-01
The Prior Image Constrained Compressed Sensing (PICCS) algorithm (Med. Phys. 35, pg. 660, 2008) has been applied to several computed tomography applications with both standard CT systems and flat-panel based systems designed for guiding interventional procedures and radiation therapy treatment delivery. The PICCS algorithm typically utilizes a prior image which is reconstructed via the standard Filtered Backprojection (FBP) reconstruction algorithm. The algorithm then iteratively solves for the image volume that matches the measured data, while simultaneously assuring the image is similar to the prior image. The PICCS algorithm has demonstrated utility in several applications including: improved temporal resolution reconstruction, 4D respiratory phase specific reconstructions for radiation therapy, and cardiac reconstruction from data acquired on an interventional C-arm. One disadvantage of the PICCS algorithm, just as other iterative algorithms, is the long computation times typically associated with reconstruction. In order for an algorithm to gain clinical acceptance reconstruction must be achievable in minutes rather than hours. In this work the PICCS algorithm has been implemented on the GPU in order to significantly reduce the reconstruction time of the PICCS algorithm. The Compute Unified Device Architecture (CUDA) was used in this implementation.
Pant, Jeevan K; Krishnan, Sridhar
2014-04-01
A new algorithm for the reconstruction of electrocardiogram (ECG) signals and a dictionary learning algorithm for enhancing its reconstruction performance for a class of signals are proposed. The signal reconstruction algorithm is based on minimizing the lp pseudo-norm of the second-order difference of the signal, referred to as the lp(2d) pseudo-norm. The optimization involved is carried out using a sequential conjugate-gradient algorithm. The dictionary learning algorithm uses an iterative procedure in which a signal reconstruction step and a dictionary update step are repeated until a convergence criterion is satisfied. The signal reconstruction step is implemented using the proposed signal reconstruction algorithm, and the dictionary update step is implemented using the linear least-squares method. Extensive simulation results demonstrate that the proposed algorithm yields improved reconstruction performance for temporally correlated ECG signals relative to the state-of-the-art lp(1d)-regularized least-squares and Bayesian learning based algorithms. Also, for a known class of signals, the reconstruction performance of the proposed algorithm can be improved by applying it in conjunction with a dictionary obtained using the proposed dictionary learning algorithm.
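The sequential conjugate-gradient solver is not reproduced here. The sketch below only forms the second-order difference operator and evaluates a smoothed version of the lp(2d) pseudo-norm that the algorithm minimizes; the smoothing constant and the value of p are assumptions, since the summary does not state them.

```python
import numpy as np

def second_order_difference(n):
    """(n-2) x n matrix with rows [1, -2, 1] computing the discrete second difference."""
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i + 3] = (1.0, -2.0, 1.0)
    return D2

def lp2d_pseudo_norm(x, p=0.5, eps=1e-8):
    """Smoothed lp pseudo-norm of the second-order difference of x."""
    d = second_order_difference(len(x)) @ x
    return np.sum((d ** 2 + eps) ** (p / 2))

# A smooth (ECG-like) signal has a much smaller lp(2d) value than white noise
t = np.linspace(0, 1, 500)
smooth = np.sin(2 * np.pi * 5 * t)
noisy = np.random.default_rng(0).standard_normal(500)
print(lp2d_pseudo_norm(smooth), lp2d_pseudo_norm(noisy))
```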
Zhao, Zi-Fang; Li, Xue-Zhu; Wan, You
2017-12-01
The local field potential (LFP) is a signal reflecting the electrical activity of neurons surrounding the electrode tip. Synchronization between LFP signals provides important details about how neural networks are organized. Synchronization between two distant brain regions is hard to detect using linear synchronization algorithms like correlation and coherence. Synchronization likelihood (SL) is a non-linear synchronization-detecting algorithm widely used in studies of neural signals from two distant brain areas. One drawback of non-linear algorithms is the heavy computational burden. In the present study, we proposed a graphic processing unit (GPU)-accelerated implementation of an SL algorithm with optional 2-dimensional time-shifting. We tested the algorithm with both artificial data and raw LFP data. The results showed that this method revealed detailed information from original data with the synchronization values of two temporal axes, delay time and onset time, and thus can be used to reconstruct the temporal structure of a neural network. Our results suggest that this GPU-accelerated method can be extended to other algorithms for processing time-series signals (like EEG and fMRI) using similar recording techniques.
The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation.
Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut
2014-06-01
Analysis methods of electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome. A maneuver with step increase of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton-Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) reconstruction algorithm with a circular forward model (GR(C)) and (4) GREIT with individual thorax geometry (GR(T)). Individual thorax contours were automatically determined from the routine computed tomography images. Five indices were calculated on the resulting EIT images respectively: (a) the ratio between tidal and deep inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) center of gravity; (d) the global inhomogeneity index and (e) ventilation delay at mid-dorsal regions. No significant differences were found in all examined indices among the four reconstruction algorithms (p > 0.2, Kruskal-Wallis test). The examined algorithms used for EIT image reconstruction do not influence the selected indices derived from the EIT image analysis. Indices that validated for images with one reconstruction algorithm are also valid for other reconstruction algorithms.
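Of the indices listed above, the global inhomogeneity index is commonly defined as the summed absolute deviation of lung-pixel tidal impedance changes from their median, normalized by their sum; the sketch below uses that common definition (an assumption, since the exact variant used in the study is not restated here) on a toy tidal image.

```python
import numpy as np

def global_inhomogeneity_index(tidal_image, lung_mask):
    """GI index under the common definition: summed absolute deviation of lung-pixel
    tidal impedance changes from their median, normalized by their sum."""
    vals = tidal_image[lung_mask]
    return np.sum(np.abs(vals - np.median(vals))) / np.sum(vals)

# Toy 32x32 tidal image: homogeneous vs. heterogeneous ventilation within the lung mask
rng = np.random.default_rng(0)
mask = np.zeros((32, 32), dtype=bool); mask[8:24, 4:28] = True
homogeneous = np.where(mask, 1.0, 0.0)
heterogeneous = np.where(mask, rng.uniform(0.2, 1.8, (32, 32)), 0.0)
print(global_inhomogeneity_index(homogeneous, mask),
      global_inhomogeneity_index(heterogeneous, mask))
```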
Filtered refocusing: a volumetric reconstruction algorithm for plenoptic-PIV
NASA Astrophysics Data System (ADS)
Fahringer, Timothy W.; Thurow, Brian S.
2016-09-01
A new algorithm for reconstruction of 3D particle fields from plenoptic image data is presented. The algorithm is based on the technique of computational refocusing with the addition of a post reconstruction filter to remove the out of focus particles. This new algorithm is tested in terms of reconstruction quality on synthetic particle fields as well as a synthetically generated 3D Gaussian ring vortex. Preliminary results indicate that the new algorithm performs as well as the MART algorithm (used in previous work) in terms of the reconstructed particle position accuracy, but produces more elongated particles. The major advantage to the new algorithm is the dramatic reduction in the computational cost required to reconstruct a volume. It is shown that the new algorithm takes 1/9th the time to reconstruct the same volume as MART while using minimal resources. Experimental results are presented in the form of the wake behind a cylinder at a Reynolds number of 185.
Investigation of iterative image reconstruction in three-dimensional optoacoustic tomography
Wang, Kun; Su, Richard; Oraevsky, Alexander A; Anastasio, Mark A
2012-01-01
Iterative image reconstruction algorithms for optoacoustic tomography (OAT), also known as photoacoustic tomography, have the ability to improve image quality over analytic algorithms due to their ability to incorporate accurate models of the imaging physics, instrument response, and measurement noise. However, to date, there have been few reported attempts to employ advanced iterative image reconstruction algorithms for improving image quality in three-dimensional (3D) OAT. In this work, we implement and investigate two iterative image reconstruction methods for use with a 3D OAT small animal imager: namely, a penalized least-squares (PLS) method employing a quadratic smoothness penalty and a PLS method employing a total variation norm penalty. The reconstruction algorithms employ accurate models of the ultrasonic transducer impulse responses. Experimental data sets are employed to compare the performances of the iterative reconstruction algorithms to that of a 3D filtered backprojection (FBP) algorithm. By use of quantitative measures of image quality, we demonstrate that the iterative reconstruction algorithms can mitigate image artifacts and preserve spatial resolution more effectively than FBP algorithms. These features suggest that the use of advanced image reconstruction algorithms can improve the effectiveness of 3D OAT while reducing the amount of data required for biomedical applications. PMID:22864062
Tang, Jie; Nett, Brian E; Chen, Guang-Hong
2009-10-07
Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing the image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit including the relative root mean square error and a quality factor which accounts for the noise performance and the spatial resolution were introduced to objectively evaluate reconstruction performance. A comparison is presented between the three algorithms for a constant undersampling factor comparing different algorithms at several dose levels. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions of the measurements from our studies are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose to more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.
Mohamed Salleh, Faridah Hani; Arif, Shereena Mohd; Zainudin, Suhaila; Firdaus-Raih, Mohd
2015-12-01
A gene regulatory network (GRN) is a large and complex network consisting of interacting elements that, over time, affect each other's state. The dynamics of complex gene regulatory processes are difficult to understand using intuitive approaches alone. To overcome this problem, we propose an algorithm for inferring the regulatory interactions from knock-out data using a Gaussian model combined with the Pearson correlation coefficient (PCC). Several problems relating to GRN construction are outlined in this paper. We demonstrated the ability of our proposed method to (1) predict the presence of regulatory interactions between genes, (2) their directionality and (3) their states (activation or suppression). The algorithm was applied to networks of 10 and 50 genes from the DREAM3 datasets and networks of 10 genes from the DREAM4 datasets. The predicted networks were evaluated based on AUROC and AUPR. We discovered that high false positive rates were generated by our GRN prediction methods because indirect regulations were wrongly predicted as true relationships. We achieved satisfactory results as the majority of sub-networks achieved AUROC values above 0.5. Copyright © 2015 Elsevier Ltd. All rights reserved.
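A minimal sketch of scoring candidate regulatory links with the Pearson correlation coefficient is given below; the expression matrix, the injected dependency, and the 0.7 cut-off are synthetic placeholders rather than the DREAM benchmark data or the authors' Gaussian model.

```python
# Hedged sketch of PCC-based link scoring on a toy expression matrix.
import numpy as np

rng = np.random.default_rng(2)
n_genes, n_samples = 10, 50
expr = rng.normal(size=(n_genes, n_samples))
expr[3] = 0.8 * expr[0] + 0.2 * rng.normal(size=n_samples)   # make gene 3 track gene 0

pcc = np.corrcoef(expr)                  # gene-by-gene correlation matrix
np.fill_diagonal(pcc, 0.0)

threshold = 0.7
links = [(i, j, round(pcc[i, j], 2))
         for i in range(n_genes) for j in range(i + 1, n_genes)
         if abs(pcc[i, j]) > threshold]
print(links)   # the sign of the PCC hints at activation (+) versus suppression (-)
```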
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niemkiewicz, J; Palmiotti, A; Miner, M
2014-06-01
Purpose: Metal in patients creates streak artifacts in CT images. When used for radiation treatment planning, these artifacts make it difficult to identify internal structures and affect radiation dose calculations, which depend on HU numbers for inhomogeneity correction. This work quantitatively evaluates a new metal artifact reduction (MAR) CT image reconstruction algorithm (GE Healthcare CT-0521-04.13-EN-US DOC1381483) when metal is present. Methods: A Gammex Model 467 Tissue Characterization phantom was used. CT images were taken of this phantom on a GE Optima580RT CT scanner with and without steel and titanium plugs using both the standard and MAR reconstruction algorithms. HU values were compared pixel by pixel to determine if the MAR algorithm altered the HUs of normal tissues when no metal is present, and to evaluate the effect of using the MAR algorithm when metal is present. Also, CT images of patients with internal metal objects using standard and MAR reconstruction algorithms were compared. Results: Comparing the standard and MAR reconstructed images of the phantom without metal, 95.0% of pixels were within ±35 HU and 98.0% of pixels were within ±85 HU. Also, the MAR reconstruction algorithm showed significant improvement in maintaining HUs of non-metallic regions in the images taken of the phantom with metal. HU Gamma analysis (2%, 2mm) of metal vs. non-metal phantom imaging using standard reconstruction resulted in an 84.8% pass rate compared to 96.6% for the MAR reconstructed images. CT images of patients with metal show significant artifact reduction when reconstructed with the MAR algorithm. Conclusion: CT imaging using the MAR reconstruction algorithm provides improved visualization of internal anatomy and more accurate HUs when metal is present compared to the standard reconstruction algorithm. MAR reconstructed CT images provide qualitative and quantitative improvements over current reconstruction algorithms, thus improving radiation treatment planning accuracy.
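The pixel-by-pixel HU comparison described in the Results can be expressed compactly; the sketch below uses synthetic arrays in place of the standard and MAR reconstructions and simply reports the fraction of pixels within the ±35 HU and ±85 HU tolerances.

```python
# Illustrative sketch of a pixel-by-pixel HU comparison; the two "reconstructions" are
# synthetic arrays, not images from the scanner or phantom used in the study.
import numpy as np

rng = np.random.default_rng(3)
standard = rng.normal(0.0, 30.0, size=(256, 256))          # stand-in standard reconstruction (HU)
mar = standard + rng.normal(0.0, 15.0, size=(256, 256))    # stand-in MAR reconstruction (HU)

diff = np.abs(mar - standard)
for tol in (35, 85):
    frac = 100.0 * np.mean(diff <= tol)
    print(f"pixels within ±{tol} HU: {frac:.1f}%")
```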
Feng, Yanqiu; Song, Yanli; Wang, Cong; Xin, Xuegang; Feng, Qianjin; Chen, Wufan
2013-10-01
To develop and test a new algorithm for fast direct Fourier transform (DrFT) reconstruction of MR data on non-Cartesian trajectories composed of lines with equally spaced points. The DrFT, which is normally used as a reference in evaluating the accuracy of other reconstruction methods, can reconstruct images directly from non-Cartesian MR data without interpolation. However, DrFT reconstruction involves substantially intensive computation, which makes the DrFT impractical for clinical routine applications. In this article, the Chirp transform algorithm was introduced to accelerate the DrFT reconstruction of radial and Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction (PROPELLER) MRI data located on the trajectories that are composed of lines with equally spaced points. The performance of the proposed Chirp transform algorithm-DrFT algorithm was evaluated by using simulation and in vivo MRI data. After implementing the algorithm on a graphics processing unit, the proposed Chirp transform algorithm-DrFT algorithm achieved an acceleration of approximately one order of magnitude, and the speed-up factor was further increased to approximately three orders of magnitude compared with the traditional single-thread DrFT reconstruction. Implementation of the Chirp transform algorithm-DrFT algorithm on the graphics processing unit can efficiently calculate the DrFT reconstruction of the radial and PROPELLER MRI data. Copyright © 2012 Wiley Periodicals, Inc.
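The chirp-transform trick behind this acceleration rewrites the DFT-like sum over equally spaced points as a convolution via the identity n*k = (n^2 + k^2 - (k-n)^2)/2, so FFTs can be used even when the output frequencies do not match the input grid. The following self-contained sketch (a plain Bluestein-style implementation, not the authors' GPU DrFT code) verifies the identity against an ordinary FFT.

```python
# Hedged illustration of the chirp-transform idea: the equally spaced DFT sum is turned
# into a convolution and evaluated with FFTs.
import numpy as np

def chirp_dft(x, m):
    """Evaluate X[k] = sum_n x[n] * exp(-2j*pi*n*k/m) for k = 0..m-1 via convolution."""
    n = len(x)
    w = np.exp(-1j * np.pi / m)                      # square root of the m-th root of unity
    k = np.arange(max(n, m))
    chirp = w ** (k ** 2)
    size = 1 << int(np.ceil(np.log2(n + m - 1)))     # FFT length for linear convolution
    a = np.zeros(size, dtype=complex)
    a[:n] = x * chirp[:n]                            # pre-multiplied input
    b = np.zeros(size, dtype=complex)
    idx = np.arange(-(n - 1), m)
    b[idx % size] = np.conj(w ** (idx ** 2))         # chirp kernel, wrapped for circular conv.
    conv = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b))
    return chirp[:m] * conv[:m]                      # post-multiplication gives the DFT values

x = np.random.default_rng(4).normal(size=64) + 0j
print(np.allclose(chirp_dft(x, 64), np.fft.fft(x)))  # True: matches the ordinary FFT
```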
The Pixon Method for Data Compression Image Classification, and Image Reconstruction
NASA Technical Reports Server (NTRS)
Puetter, Richard; Yahil, Amos
2002-01-01
As initially proposed, this program had three goals: (1) continue to develop the highly successful Pixon method for image reconstruction and support other scientists in implementing this technique for their applications; (2) develop image compression techniques based on the Pixon method; and (3) develop artificial intelligence algorithms for image classification based on the Pixon approach for simplifying neural networks. Subsequent to proposal review, the scope of the program was greatly reduced, and it was decided to investigate the ability of the Pixon method to provide superior restorations of images compressed with standard image compression schemes, specifically JPEG-compressed images.
Bellot, Pau; Olsen, Catharina; Salembier, Philippe; Oliveras-Vergés, Albert; Meyer, Patrick E
2015-09-29
In the last decade, a great number of methods for reconstructing gene regulatory networks from expression data have been proposed. However, very few tools and datasets allow those methods to be evaluated accurately and reproducibly. Hence, we propose here a new tool, able to perform a systematic, yet fully reproducible, evaluation of transcriptional network inference methods. Our open-source and freely available Bioconductor package aggregates a large set of tools to assess the robustness of network inference algorithms against different simulators, topologies, sample sizes and noise intensities. The benchmarking framework that uses various datasets highlights the specialization of some methods toward network types and data. As a result, it is possible to identify the techniques that have broad overall performances.
MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions
NASA Astrophysics Data System (ADS)
Novosad, Philip; Reader, Andrew J.
2016-06-01
Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [18F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral/kernel model can also be used for effective post-reconstruction denoising, through the use of an EM-like image-space algorithm. Finally, we applied the proposed algorithm to reconstruction of real high-resolution dynamic [11C]SCH23390 data, showing promising results.
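To illustrate the kernel-method ingredient on a toy scale, the sketch below forms kernel spatial basis functions from a 1-D stand-in for the co-registered MR image: each pixel's basis row is a Gaussian similarity to its k nearest neighbours in MR feature space, and the PET image is then expressed as the kernel matrix times a coefficient vector. The neighbourhood size, kernel width, and data are assumptions for illustration only, not the BrainWeb phantom or the paper's settings.

```python
# Hedged 1-D sketch of MR-derived kernel spatial basis functions.
import numpy as np

rng = np.random.default_rng(5)
mr = np.concatenate([np.zeros(32), np.ones(32)]) + 0.05 * rng.normal(size=64)  # toy MR profile

k, sigma = 8, 0.1
n = mr.size
K = np.zeros((n, n))
for i in range(n):
    d = np.abs(mr - mr[i])                       # feature-space distances to pixel i
    nbrs = np.argsort(d)[:k]                     # its k nearest neighbours (including itself)
    K[i, nbrs] = np.exp(-d[nbrs] ** 2 / (2 * sigma ** 2))
K /= K.sum(axis=1, keepdims=True)                # row-normalised kernel matrix

alpha = rng.random(n)                            # kernel coefficient image
pet_image = K @ alpha                            # PET image on the MR-derived kernel basis
print(pet_image.shape)
```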
Genetic Network Programming with Reconstructed Individuals
NASA Astrophysics Data System (ADS)
Ye, Fengming; Mabu, Shingo; Wang, Lutao; Eto, Shinji; Hirasawa, Kotaro
A lot of research on evolutionary computation has been done and some significant classical methods such as Genetic Algorithm (GA), Genetic Programming (GP), Evolutionary Programming (EP), and Evolution Strategies (ES) have been studied. Recently, a new approach named Genetic Network Programming (GNP) has been proposed. GNP can evolve itself and find the optimal solution. It is based on the idea of Genetic Algorithm and uses the data structure of directed graphs. Many papers have demonstrated that GNP can deal with complex problems in dynamic environments very efficiently and effectively. As a result, GNP is recently getting more and more attention and is used in many different areas such as data mining, extracting trading rules of stock markets, elevator supervised control systems, etc., and GNP has obtained some outstanding results. Focusing on GNP's distinguished expression ability of the graph structure, this paper proposes a method named Genetic Network Programming with Reconstructed Individuals (GNP-RI). The aim of GNP-RI is to balance the exploitation and exploration of GNP, that is, to strengthen the exploitation ability by using the exploited information extensively during the evolution process of GNP and finally obtain better performance than GNP. In the proposed method, the worse individuals are reconstructed and enhanced by the elite information before undergoing genetic operations (mutation and crossover). The enhancement of worse individuals mimics the maturing phenomenon in nature, where bad individuals can become smarter after receiving a good education. In this paper, GNP-RI is applied to the tile-world problem, which is an excellent benchmark for evaluating the proposed architecture. The performance of GNP-RI is compared with that of the conventional GNP. The simulation results show some advantages of GNP-RI, demonstrating its superiority over the conventional GNP.
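The reconstruction step for the worse individuals can be pictured with a toy genetic-algorithm population: before crossover and mutation, the worst individuals inherit part of the elite individual's genome. The binary representation and the 50% inheritance rate below are invented for illustration and are not the GNP-RI graph-node structure or settings.

```python
# Toy sketch of the "reconstructed individuals" idea on a binary GA population.
import numpy as np

rng = np.random.default_rng(6)
pop = rng.integers(0, 2, size=(20, 32))             # toy binary population
fitness = pop.sum(axis=1)                           # toy fitness: number of ones

elite = pop[np.argmax(fitness)]
worst = np.argsort(fitness)[:5]                     # indices of the five worst individuals
mask = rng.random((5, 32)) < 0.5                    # genes to inherit from the elite
pop[worst] = np.where(mask, elite, pop[worst])      # reconstruction step before GA operators
print(pop[worst].sum(axis=1))                       # their toy fitness typically improves
```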
Deep linear autoencoder and patch clustering-based unified one-dimensional coding of image and video
NASA Astrophysics Data System (ADS)
Li, Honggui
2017-09-01
This paper proposes a unified one-dimensional (1-D) coding framework of image and video, which depends on a deep learning neural network and image patch clustering. First, an improved K-means clustering algorithm for image patches is employed to obtain the compact inputs of the deep artificial neural network. Second, for the purpose of best reconstructing the original image patches, deep linear autoencoder (DLA), a linear version of the classical deep nonlinear autoencoder, is introduced to achieve the 1-D representation of image blocks. Under the circumstances of 1-D representation, DLA is capable of attaining zero reconstruction error, which is impossible for the classical nonlinear dimensionality reduction methods. Third, a unified 1-D coding infrastructure for image, intraframe, interframe, multiview video, three-dimensional (3-D) video, and multiview 3-D video is built by incorporating different categories of videos into the inputs of the patch clustering algorithm. Finally, the results of simulation experiments show that the proposed methods can simultaneously attain a higher compression ratio and peak signal-to-noise ratio than those of the state-of-the-art methods in low-bitrate transmission situations.
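Because the encoder and decoder are linear, the minimum-squared-error solution of a linear autoencoder spans the same subspace as the leading principal components, so an SVD gives it directly; the sketch below shows this on toy flattened patches, with the code length chosen arbitrarily.

```python
# Hedged sketch of a linear autoencoder on flattened patches via SVD/PCA.
import numpy as np

rng = np.random.default_rng(7)
patches = rng.normal(size=(500, 64))                # 500 flattened 8x8 toy patches
mean = patches.mean(axis=0)
X = patches - mean

d = 16                                              # length of the 1-D code
_, _, Vt = np.linalg.svd(X, full_matrices=False)
encode = lambda p: (p - mean) @ Vt[:d].T            # linear "encoder"
decode = lambda c: c @ Vt[:d] + mean                # linear "decoder"

recon = decode(encode(patches))
print("reconstruction MSE:", np.mean((recon - patches) ** 2))
```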
Reconstructing targetable pathways in lung cancer by integrating diverse omics data
Balbin, O. Alejandro; Prensner, John R.; Sahu, Anirban; Yocum, Anastasia; Shankar, Sunita; Malik, Rohit; Fermin, Damian; Dhanasekaran, Saravana M.; Chandler, Benjamin; Thomas, Dafydd; Beer, David G.; Cao, Xuhong; Nesvizhskii, Alexey I.; Chinnaiyan, Arul M.
2014-01-01
Global ‘multi-omics’ profiling of cancer cells harbours the potential for characterizing the signaling networks associated with specific oncogenes. Here we profile the transcriptome, proteome and phosphoproteome in a panel of non-small cell lung cancer (NSCLC) cell lines in order to reconstruct targetable networks associated with KRAS dependency. We develop a two-step bioinformatics strategy addressing the challenge of integrating these disparate data sets. We first define an ‘abundance-score’ combining transcript, protein and phospho-protein abundances to nominate differentially abundant proteins and then use the Prize Collecting Steiner Tree algorithm to identify functional sub-networks. We identify three modules centered on KRAS and MET, LCK and PAK1 and β-Catenin. We validate activation of these proteins in KRAS-dependent (KRAS-Dep) cells and perform functional studies defining LCK as a critical gene for cell proliferation in KRAS-Dep but not KRAS-independent NSCLCs. These results suggest that LCK is a potential druggable target protein in KRAS-Dep lung cancers. PMID:24135919
JCell--a Java-based framework for inferring regulatory networks from time series data.
Spieth, C; Supper, J; Streichert, F; Speer, N; Zell, A
2006-08-15
JCell is a Java-based application for reconstructing gene regulatory networks from experimental data. The framework provides several algorithms to identify genetic and metabolic dependencies based on experimental data conjoint with mathematical models to describe and simulate regulatory systems. Owing to the modular structure, researchers can easily implement new methods. JCell is a pure Java application with additional scripting capabilities and thus widely usable, e.g. on parallel or cluster computers. The software is freely available for download at http://www-ra.informatik.uni-tuebingen.de/software/JCell.
Ping, Bo; Su, Fenzhen; Meng, Yunshan
2016-01-01
In this study, an improved Data INterpolating Empirical Orthogonal Functions (DINEOF) algorithm for determination of missing values in a spatio-temporal dataset is presented. Compared with the ordinary DINEOF algorithm, the iterative reconstruction procedure until convergence based on every fixed EOF to determine the optimal EOF mode is not necessary, and the convergence criterion is only reached once in the improved DINEOF algorithm. Moreover, in the ordinary DINEOF algorithm, after optimal EOF mode determination, the initial matrix with missing data will be iteratively reconstructed based on the optimal EOF mode until the reconstruction is convergent. However, the optimal EOF mode may not be the best EOF for some reconstructed matrices generated in the intermediate steps. Hence, instead of using a single EOF to fill in the missing data, in the improved algorithm, the optimal EOFs for reconstruction are variable (because the optimal EOFs are variable, the improved algorithm is called the VE-DINEOF algorithm in this study). To validate the accuracy of the VE-DINEOF algorithm, a sea surface temperature (SST) data set is reconstructed by using the DINEOF, I-DINEOF (proposed in 2015) and VE-DINEOF algorithms. Four parameters (Pearson correlation coefficient, signal-to-noise ratio, root-mean-square error, and mean absolute difference) are used as measures of reconstruction accuracy. Compared with the DINEOF and I-DINEOF algorithms, the VE-DINEOF algorithm can significantly enhance the accuracy of reconstruction and shorten the computational time.
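A minimal sketch of DINEOF-style gap filling is shown below: the missing entries are initialised, then repeatedly replaced by a truncated-SVD (EOF) reconstruction until the update stops changing. The synthetic low-rank field, the 20% missing mask, and the fixed two-mode truncation are assumptions for illustration; the VE-DINEOF refinements discussed above are not reproduced here.

```python
# Hedged sketch of EOF-based iterative gap filling on a synthetic low-rank field.
import numpy as np

rng = np.random.default_rng(8)
t = np.arange(100)
field = np.outer(np.sin(t / 10), np.ones(40)) + 0.3 * np.outer(np.cos(t / 5), np.linspace(0, 1, 40))
mask = rng.random(field.shape) < 0.2                # 20% of entries are "missing"
data = np.where(mask, np.nan, field)

filled = np.where(mask, 0.0, data)                  # initial guess for the gaps
for _ in range(50):
    U, s, Vt = np.linalg.svd(filled, full_matrices=False)
    recon = (U[:, :2] * s[:2]) @ Vt[:2]             # keep the two leading EOF modes
    new = np.where(mask, recon, data)               # only the gaps are updated
    if np.max(np.abs(new - filled)) < 1e-8:
        break
    filled = new

print("gap RMSE:", np.sqrt(np.mean((filled[mask] - field[mask]) ** 2)))
```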
Gerhard, Felipe; Kispersky, Tilman; Gutierrez, Gabrielle J.; Marder, Eve; Kramer, Mark; Eden, Uri
2013-01-01
Identifying the structure and dynamics of synaptic interactions between neurons is the first step to understanding neural network dynamics. The presence of synaptic connections is traditionally inferred through the use of targeted stimulation and paired recordings or by post-hoc histology. More recently, causal network inference algorithms have been proposed to deduce connectivity directly from electrophysiological signals, such as extracellularly recorded spiking activity. Usually, these algorithms have not been validated on a neurophysiological data set for which the actual circuitry is known. Recent work has shown that traditional network inference algorithms based on linear models typically fail to identify the correct coupling of a small central pattern generating circuit in the stomatogastric ganglion of the crab Cancer borealis. In this work, we show that point process models of observed spike trains can guide inference of relative connectivity estimates that match the known physiological connectivity of the central pattern generator up to a choice of threshold. We elucidate the necessary steps to derive faithful connectivity estimates from a model that incorporates the spike train nature of the data. We then apply the model to measure changes in the effective connectivity pattern in response to two pharmacological interventions, which affect both intrinsic neural dynamics and synaptic transmission. Our results provide the first successful application of a network inference algorithm to a circuit for which the actual physiological synapses between neurons are known. The point process methodology presented here generalizes well to larger networks and can describe the statistics of neural populations. In general we show that advanced statistical models allow for the characterization of effective network structure, deciphering underlying network dynamics and estimating information-processing capabilities. PMID:23874181
Markov prior-based block-matching algorithm for superdimension reconstruction of porous media
NASA Astrophysics Data System (ADS)
Li, Yang; He, Xiaohai; Teng, Qizhi; Feng, Junxi; Wu, Xiaohong
2018-04-01
A superdimension reconstruction algorithm is used for the reconstruction of three-dimensional (3D) structures of a porous medium based on a single two-dimensional image. The algorithm borrows the concepts of "blocks," "learning," and "dictionary" from learning-based superresolution reconstruction and applies them to the 3D reconstruction of a porous medium. In the neighborhood-matching process of the conventional superdimension reconstruction algorithm, the Euclidean distance is used as a criterion, although it may not really reflect the structural correlation between adjacent blocks in an actual situation. Hence, in this study, regular items are adopted as prior knowledge in the reconstruction process, and a Markov prior-based block-matching algorithm for superdimension reconstruction is developed for more accurate reconstruction. The algorithm simultaneously takes into consideration the probabilistic relationship between the already reconstructed blocks in three different perpendicular directions (x, y, and z) and the block to be reconstructed, and the maximum value of the probability product of the blocks to be reconstructed (as found in the dictionary for the three directions) is adopted as the basis for the final block selection. Using this approach, the problem of an imprecise spatial structure caused by a point simulation can be overcome. The problem of artifacts in the reconstructed structure is also addressed through the addition of hard data and by neighborhood matching. To verify the improved reconstruction accuracy of the proposed method, the statistical and morphological features of the results from the proposed method and traditional superdimension reconstruction method are compared with those of the target system. The proposed superdimension reconstruction algorithm is confirmed to enable a more accurate reconstruction of the target system while also eliminating artifacts.
NASA Astrophysics Data System (ADS)
Ren, Zhong; Liu, Guodong; Huang, Zhen
2012-11-01
Image reconstruction is a key step in medical imaging (MI), and the performance of the reconstruction algorithm determines the quality and resolution of the reconstructed image. Although several algorithms have been used, the filtered back-projection (FBP) algorithm is still the classical and most commonly used algorithm in clinical MI. In the FBP algorithm, filtering of the original projection data is a key step in order to overcome artifacts in the reconstructed image. Since the simple use of classical filters, such as the Shepp-Logan (SL) and Ram-Lak (RL) filters, has some drawbacks and limitations in practice, especially for projection data polluted by non-stationary random noise, an improved wavelet denoising combined with the parallel-beam FBP algorithm is used to enhance the quality of the reconstructed image in this paper. In the experiments, the reconstruction results were compared between the improved wavelet denoising and other methods (direct FBP, mean filter combined with FBP, and median filter combined with FBP). To determine the optimum reconstruction effect, different algorithms and different wavelet bases combined with three filters were respectively tested. Experimental results show that the reconstruction effect of the improved FBP algorithm is better than that of the others. Comparing the results of the different algorithms based on two evaluation standards, i.e. mean-square error (MSE) and peak signal-to-noise ratio (PSNR), it was found that the reconstruction effect of the improved FBP based on db2 and the Hanning filter at decomposition scale 2 was best: its MSE value was lower and its PSNR value was higher than those of the others. Therefore, this improved FBP algorithm has potential value in medical imaging.
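The wavelet-denoising step applied to a noisy projection profile before filtered back-projection might look like the following sketch, which assumes the PyWavelets package, a db2 basis at decomposition scale 2 (as in the abstract), and a common universal-threshold heuristic; the projection data and threshold rule are illustrative stand-ins rather than the paper's exact pipeline.

```python
# Hedged sketch of wavelet denoising of a toy projection profile (PyWavelets assumed).
import numpy as np
import pywt

rng = np.random.default_rng(9)
clean = np.exp(-np.linspace(-3, 3, 256) ** 2)            # toy projection profile
noisy = clean + 0.05 * rng.normal(size=clean.size)

coeffs = pywt.wavedec(noisy, 'db2', level=2)             # db2 decomposition, scale 2
sigma = np.median(np.abs(coeffs[-1])) / 0.6745           # noise estimate from finest details
thr = sigma * np.sqrt(2 * np.log(noisy.size))            # universal threshold
den_coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
denoised = pywt.waverec(den_coeffs, 'db2')[:noisy.size]

mse = np.mean((denoised - clean) ** 2)
psnr = 10 * np.log10(clean.max() ** 2 / mse)
print(f"MSE = {mse:.2e}, PSNR = {psnr:.1f} dB")
```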
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matenine, D; Cote, G; Mascolo-Fortin, J
2016-06-15
Purpose: Iterative reconstruction algorithms in computed tomography (CT) require a fast method for computing the intersections between the photons’ trajectories and the object, also called ray-tracing or system matrix computation. This work evaluates different ways to store the system matrix, aiming to reconstruct dense image grids in reasonable time. Methods: We propose an optimized implementation of the Siddon’s algorithm using graphics processing units (GPUs) with a novel data storage scheme. The algorithm computes a part of the system matrix on demand, typically, for one projection angle. The proposed method was enhanced with accelerating options: storage of larger subsets of the system matrix, systematic reuse of data via geometric symmetries, an arithmetic-rich parallel code and code configuration via machine learning. It was tested on geometries mimicking a cone beam CT acquisition of a human head. To realistically assess the execution time, the ray-tracing routines were integrated into a regularized Poisson-based reconstruction algorithm. The proposed scheme was also compared to a different approach, where the system matrix is fully pre-computed and loaded at reconstruction time. Results: Fast ray-tracing of realistic acquisition geometries, which often lack spatial symmetry properties, was enabled via the proposed method. Ray-tracing interleaved with projection and backprojection operations required significant additional time. In most cases, ray-tracing was shown to use about 66 % of the total reconstruction time. In absolute terms, tracing times varied from 3.6 s to 7.5 min, depending on the problem size. The presence of geometrical symmetries allowed for non-negligible ray-tracing and reconstruction time reduction. Arithmetic-rich parallel code and machine learning permitted a modest reconstruction time reduction, in the order of 1 %. Conclusion: Partial system matrix storage permitted the reconstruction of higher 3D image grid sizes and larger projection datasets at the cost of additional time, when compared to the fully pre-computed approach. This work was supported in part by the Fonds de recherche du Quebec - Nature et technologies (FRQ-NT). The authors acknowledge partial support by the CREATE Medical Physics Research Training Network grant of the Natural Sciences and Engineering Research Council of Canada (Grant No. 432290).
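A much-simplified, CPU-only 2-D sketch of Siddon-style ray tracing is given below: the parametric crossings of the grid lines split the ray into segments, and each segment's length is assigned to the pixel containing its midpoint, yielding one sparse row of the system matrix. The geometry is a toy unit-spaced grid; the GPU storage scheme discussed above is not reproduced.

```python
# Simplified 2-D Siddon-style ray tracing on a toy unit-spaced pixel grid.
import numpy as np

def siddon_2d(p0, p1, nx, ny):
    """Return {(i, j): intersection length} for the ray p0 -> p1, one sparse system-matrix row."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    ax = (np.arange(nx + 1) - p0[0]) / d[0] if d[0] != 0 else np.array([])
    ay = (np.arange(ny + 1) - p0[1]) / d[1] if d[1] != 0 else np.array([])
    alphas = np.unique(np.clip(np.concatenate(([0.0, 1.0], ax, ay)), 0.0, 1.0))
    lengths = {}
    for a0, a1 in zip(alphas[:-1], alphas[1:]):
        mid = p0 + 0.5 * (a0 + a1) * d                     # midpoint identifies the pixel
        i, j = int(np.floor(mid[0])), int(np.floor(mid[1]))
        if 0 <= i < nx and 0 <= j < ny:
            lengths[(i, j)] = (a1 - a0) * np.linalg.norm(d)
    return lengths

print(siddon_2d((-1.0, 0.5), (4.0, 3.5), nx=3, ny=3))
```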
Inversion of 2-D DC resistivity data using rapid optimization and minimal complexity neural network
NASA Astrophysics Data System (ADS)
Singh, U. K.; Tiwari, R. K.; Singh, S. B.
2010-02-01
The backpropagation (BP) artificial neural network (ANN) optimization technique based on the steepest descent algorithm is known for its poor performance and does not ensure global convergence. Nonlinear and complex DC resistivity data require an efficient ANN model and more intensive optimization procedures for better results and interpretations. Improvements in the computational ANN modeling process are described with the goals of enhancing the optimization process and reducing ANN model complexity. Well-established optimization methods, such as the radial basis algorithm (RBA) and the Levenberg-Marquardt algorithm (LMA), have frequently been used to deal with complexity and nonlinearity in such complex geophysical records. We examined here the efficiency of trained LMA and RB networks by using 2-D synthetic resistivity data and then finally applied them to actual field vertical electrical resistivity sounding (VES) data collected from the Puga Valley, Jammu and Kashmir, India. The resulting ANN reconstruction resistivity results are compared with the results of existing inversion approaches, with which they are in good agreement. The depths and resistivity structures obtained by the ANN methods also correlate well with the known drilling results and geologic boundaries. The application of the above ANN algorithms proves to be robust and could be used for fast estimation of resistive structures for other complex earth models also.
Su, Ri-Qi; Wang, Wen-Xu; Wang, Xiao; Lai, Ying-Cheng
2016-01-01
Given a complex geospatial network with nodes distributed in a two-dimensional region of physical space, can the locations of the nodes be determined and their connection patterns be uncovered based solely on data? We consider the realistic situation where time series/signals can be collected from a single location. A key challenge is that the signals collected are necessarily time delayed, due to the varying physical distances from the nodes to the data collection centre. To meet this challenge, we develop a compressive-sensing-based approach enabling reconstruction of the full topology of the underlying geospatial network and more importantly, accurate estimate of the time delays. A standard triangularization algorithm can then be employed to find the physical locations of the nodes in the network. We further demonstrate successful detection of a hidden node (or a hidden source or threat), from which no signal can be obtained, through accurate detection of all its neighbouring nodes. As a geospatial network has the feature that a node tends to connect with geophysically nearby nodes, the localized region that contains the hidden node can be identified. PMID:26909187
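The compressive-sensing recovery at the core of this approach can be sketched with iterative soft thresholding (ISTA) on a toy under-determined system: a sparse vector is recovered from far fewer measurements than unknowns. The measurement matrix, sparsity level, and step size below are illustrative assumptions, not the paper's network-reconstruction formulation.

```python
# Hedged sketch of sparse recovery from under-determined measurements via ISTA.
import numpy as np

rng = np.random.default_rng(10)
m, n, k = 40, 100, 5                                 # fewer measurements than unknowns
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = A @ x_true                                       # noiseless compressive measurements

lam = 0.01                                           # l1 weight
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(2000):
    z = x - step * A.T @ (A @ x - y)                 # gradient step on the data term
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold (l1 prox)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```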
Face sketch recognition based on edge enhancement via deep learning
NASA Astrophysics Data System (ADS)
Xie, Zhenzhu; Yang, Fumeng; Zhang, Yuming; Wu, Congzhong
2017-11-01
In this paper, we address the face sketch recognition problem. Firstly, we utilize the eigenface algorithm to convert a sketch image into a synthesized sketch face image. Subsequently, considering the low-level vision problem in the synthesized face sketch image, a super-resolution reconstruction algorithm based on a CNN (convolutional neural network) is employed to improve the visual effect. To be specific, we use a lightweight super-resolution structure to learn a residual mapping instead of directly mapping the feature maps from the low-level space to high-level patch representations, which makes the networks easier to optimize and lowers the computational complexity. Finally, we adopt the LDA (linear discriminant analysis) algorithm to perform face sketch recognition on the synthesized face images before and after super resolution, respectively. Extensive experiments on the face sketch database (CUFS) from CUHK demonstrate that the recognition rate of the SVM (support vector machine) algorithm improves from 65% to 69% and the recognition rate of the LDA algorithm improves from 69% to 75%. What is more, the synthesized face image after super resolution not only better describes image details such as hair, nose and mouth, but also effectively improves the recognition accuracy.
Li, Yanqiu; Liu, Shi; Inaki, Schlaberg H.
2017-01-01
Accuracy and speed of algorithms play an important role in the reconstruction of temperature field measurements by acoustic tomography. Existing algorithms are based on static models which only consider the measurement information. A dynamic model of three-dimensional temperature reconstruction by acoustic tomography is established in this paper. A dynamic algorithm is proposed considering both acoustic measurement information and the dynamic evolution information of the temperature field. An objective function is built which fuses measurement information and the space constraint of the temperature field with its dynamic evolution information. Robust estimation is used to extend the objective function. The method combines a tunneling algorithm and a local minimization technique to solve the objective function. Numerical simulations show that the image quality and noise immunity of the dynamic reconstruction algorithm are better when compared with static algorithms such as least square method, algebraic reconstruction technique and standard Tikhonov regularization algorithms. An effective method is provided for temperature field reconstruction by acoustic tomography. PMID:28895930
CT cardiac imaging: evolution from 2D to 3D backprojection
NASA Astrophysics Data System (ADS)
Tang, Xiangyang; Pan, Tinsu; Sasaki, Kosuke
2004-04-01
The state-of-the-art multiple detector-row CT, which usually employs fan beam reconstruction algorithms by approximating a cone beam geometry into a fan beam geometry, has been well recognized as an important modality for cardiac imaging. At present, the multiple detector-row CT is evolving into volumetric CT, in which cone beam reconstruction algorithms are needed to combat cone beam artifacts caused by large cone angle. An ECG-gated cardiac cone beam reconstruction algorithm based upon the so-called semi-CB geometry is implemented in this study. To get the highest temporal resolution, only the projection data corresponding to 180° plus the cone angle are row-wise rebinned into the semi-CB geometry for three-dimensional reconstruction. Data extrapolation is utilized to extend the z-coverage of the ECG-gated cardiac cone beam reconstruction algorithm approaching the edge of a CT detector. A helical body phantom is used to evaluate the ECG-gated cone beam reconstruction algorithm's z-coverage and capability of suppressing cone beam artifacts. Furthermore, two sets of cardiac data scanned by a multiple detector-row CT scanner at 16 x 1.25 (mm) and normalized pitch 0.275 and 0.3, respectively, are used to evaluate the ECG-gated CB reconstruction algorithm's imaging performance. As a reference, the images reconstructed by a fan beam reconstruction algorithm for multiple detector-row CT are also presented. The qualitative evaluation shows that the ECG-gated cone beam reconstruction algorithm outperforms its fan beam counterpart from the perspective of cone beam artifact suppression and z-coverage while the temporal resolution is well maintained. Consequently, the scan speed can be increased to reduce the contrast agent amount and injection time, and to improve patient comfort and x-ray dose efficiency. Based upon this comparison, it is believed that, with the transition of multiple detector-row CT into volumetric CT, ECG-gated cone beam reconstruction algorithms will provide better image quality for CT cardiac applications.
Image reconstruction through thin scattering media by simulated annealing algorithm
NASA Astrophysics Data System (ADS)
Fang, Longjie; Zuo, Haoyi; Pang, Lin; Yang, Zuogang; Zhang, Xicheng; Zhu, Jianhua
2018-07-01
An approach for reconstructing the image of an object behind thin scattering media by phase modulation is proposed. The optimized phase mask is achieved by modulating the scattered light using a simulated annealing algorithm. The correlation coefficient is exploited as a fitness function to evaluate the quality of the reconstructed image. The reconstructed images optimized by the simulated annealing algorithm and a genetic algorithm are compared in detail. The experimental results show that our proposed method achieves better definition and higher speed than the genetic algorithm.
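A toy version of the simulated-annealing phase-mask optimisation is sketched below, with a random complex transmission vector standing in for the thin scattering medium and the focal intensity standing in for the correlation-coefficient fitness used in the paper; the perturbation size and cooling schedule are arbitrary choices.

```python
# Toy simulated-annealing optimisation of a phase mask through a random medium.
import numpy as np

rng = np.random.default_rng(11)
n_seg = 64                                             # phase-modulator segments
T = (rng.normal(size=n_seg) + 1j * rng.normal(size=n_seg)) / np.sqrt(2)  # toy medium

def fitness(phase):
    return np.abs(T @ np.exp(1j * phase)) ** 2         # intensity at the target point

phase = np.zeros(n_seg)
current = fitness(phase)
temp = 1.0
for _ in range(5000):
    cand = phase.copy()
    cand[rng.integers(n_seg)] += rng.normal(scale=0.5)              # perturb one segment
    f = fitness(cand)
    if f > current or rng.random() < np.exp((f - current) / temp):  # Metropolis acceptance
        phase, current = cand, f
    temp *= 0.999                                                   # geometric cooling
print("final focal intensity:", current)
```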
A Novel Algorithm for Detecting Protein Complexes with the Breadth First Search
Tang, Xiwei; Wang, Jianxin; Li, Min; He, Yiming; Pan, Yi
2014-01-01
Most biological processes are carried out by protein complexes. A substantial number of false positives in the protein-protein interaction (PPI) data can compromise the utility of the datasets for complex reconstruction. In order to reduce the impact of such discrepancies, a number of data integration and affinity scoring schemes have been devised. These methods encode the reliabilities (confidence) of physical interactions between pairs of proteins. The challenge now is to identify novel and meaningful protein complexes from the weighted PPI network. To address this problem, a novel protein complex mining algorithm, ClusterBFS (Cluster with Breadth-First Search), is proposed. Based on the weighted density, ClusterBFS detects protein complexes in the weighted network by the breadth-first search algorithm, which originates from a given seed protein used as the starting point. The experimental results show that ClusterBFS performs significantly better than the other computational approaches in terms of the identification of protein complexes. PMID:24818139
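A minimal sketch of the seed-and-grow idea is shown below: starting from a seed protein, a breadth-first search admits a neighbour only while the cluster's weighted density stays above a cut-off. The toy weighted PPI network, the density definition, and the 0.5 threshold are assumptions for illustration, not the ClusterBFS settings.

```python
# Minimal seed-and-grow complex mining by BFS with a weighted-density cut-off.
from collections import deque

w = {("A", "B"): 0.9, ("B", "C"): 0.8, ("A", "C"): 0.7, ("C", "D"): 0.3, ("D", "E"): 0.9}
adj = {}
for (u, v), s in w.items():                       # build a weighted adjacency map
    adj.setdefault(u, {})[v] = s
    adj.setdefault(v, {})[u] = s

def weighted_density(nodes):
    ns = list(nodes)
    total = sum(adj[u].get(v, 0.0) for a, u in enumerate(ns) for v in ns[a + 1:])
    k = len(ns)
    return 2.0 * total / (k * (k - 1)) if k > 1 else 0.0

def cluster_bfs(seed, min_density=0.5):
    cluster, queue = {seed}, deque([seed])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, {}):
            if v not in cluster and weighted_density(cluster | {v}) >= min_density:
                cluster.add(v)
                queue.append(v)
    return cluster

print(cluster_bfs("A"))      # {'A', 'B', 'C'}; adding D would drop the density below 0.5
```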
NASA Astrophysics Data System (ADS)
Zubaidi, Salah L.; Dooley, Jayne; Alkhaddar, Rafid M.; Abdellatif, Mawada; Al-Bugharbee, Hussein; Ortega-Martorell, Sandra
2018-06-01
Valid and dependable water demand prediction is a major element of the effective and sustainable expansion of municipal water infrastructures. This study provides a novel approach to quantifying water demand through the assessment of climatic factors, using a combination of a pretreatment signal technique, a hybrid particle swarm optimisation algorithm and an artificial neural network (PSO-ANN). The Singular Spectrum Analysis (SSA) technique was adopted to decompose and reconstruct water consumption in relation to six weather variables, to create a seasonal and stochastic time series. The results revealed that SSA is a powerful technique, capable of decomposing the original time series into many independent components including trend, oscillatory behaviours and noise. In addition, the PSO-ANN algorithm was shown to be a reliable prediction model, outperforming the hybrid Backtracking Search Algorithm BSA-ANN in terms of fitness function (RMSE). The findings of this study also support the view that water demand is driven by climatological variables.
Time-of-flight PET image reconstruction using origin ensembles.
Wülker, Christian; Sitek, Arkadiusz; Prevrhal, Sven
2015-03-07
The origin ensemble (OE) algorithm is a novel statistical method for minimum-mean-square-error (MMSE) reconstruction of emission tomography data. This method allows one to perform reconstruction entirely in the image domain, i.e. without the use of forward and backprojection operations. We have investigated the OE algorithm in the context of list-mode (LM) time-of-flight (TOF) PET reconstruction. In this paper, we provide a general introduction to MMSE reconstruction, and a statistically rigorous derivation of the OE algorithm. We show how to efficiently incorporate TOF information into the reconstruction process, and how to correct for random coincidences and scattered events. To examine the feasibility of LM-TOF MMSE reconstruction with the OE algorithm, we applied MMSE-OE and standard maximum-likelihood expectation-maximization (ML-EM) reconstruction to LM-TOF phantom data with a count number typically registered in clinical PET examinations. We analyzed the convergence behavior of the OE algorithm, and compared reconstruction time and image quality to that of the EM algorithm. In summary, during the reconstruction process, MMSE-OE contrast recovery (CRV) remained approximately the same, while background variability (BV) gradually decreased with an increasing number of OE iterations. The final MMSE-OE images exhibited lower BV and a slightly lower CRV than the corresponding ML-EM images. The reconstruction time of the OE algorithm was approximately 1.3 times longer. At the same time, the OE algorithm can inherently provide a comprehensive statistical characterization of the acquired data. This characterization can be utilized for further data processing, e.g. in kinetic analysis and image registration, making the OE algorithm a promising approach in a variety of applications.
Self-organizing adaptive map: autonomous learning of curves and surfaces from point samples.
Piastra, Marco
2013-05-01
Competitive Hebbian Learning (CHL) (Martinetz, 1993) is a simple and elegant method for estimating the topology of a manifold from point samples. The method has been adopted in a number of self-organizing networks described in the literature and has given rise to related studies in the fields of geometry and computational topology. Recent results from these fields have shown that a faithful reconstruction can be obtained using the CHL method only for curves and surfaces. Within these limitations, these findings constitute a basis for defining a CHL-based, growing self-organizing network that produces a faithful reconstruction of an input manifold. The SOAM (Self-Organizing Adaptive Map) algorithm adapts its local structure autonomously in such a way that it can match the features of the manifold being learned. The adaptation process is driven by the defects arising when the network structure is inadequate, which cause a growth in the density of units. Regions of the network undergo a phase transition and change their behavior whenever a simple, local condition of topological regularity is met. The phase transition is eventually completed across the entire structure and the adaptation process terminates. In specific conditions, the structure thus obtained is homeomorphic to the input manifold. During the adaptation process, the network also has the capability to focus on the acquisition of input point samples in critical regions, with a substantial increase in efficiency. The behavior of the network has been assessed experimentally with typical data sets for surface reconstruction, including suboptimal conditions, e.g. with undersampling and noise. Copyright © 2012 Elsevier Ltd. All rights reserved.
Fourier polarimetry of the birefringence distribution of myocardium tissue
NASA Astrophysics Data System (ADS)
Ushenko, O. G.; Dubolazov, O. V.; Ushenko, V. O.; Gorsky, M. P.; Soltys, I. V.; Olar, O. V.
2015-11-01
The results of optical modeling of biological tissues polycrystalline multilayer networks have been presented. Algorithms of reconstruction of parameter distributions were determined that describe the linear and circular birefringence. For the separation of the manifestations of these mechanisms we propose a method of space-frequency filtering. Criteria for differentiation of causes of death due to coronary heart disease (CHD) and acute coronary insufficiency (ACI) were found.
Methods and means of laser polarimetry microscopy of optically anisotropic biological layers
NASA Astrophysics Data System (ADS)
Ushenko, A. G.; Dubolazov, A. V.; Ushenko, V. A.; Ushenko, Yu. A.; Sakhnovskiy, M. Y.; Olar, O. I.
2016-09-01
The results of optical modeling of biological tissues polycrystalline multilayer networks have been presented. Algorithms of reconstruction of parameter distributions were determined that describe the linear and circular birefringence. For the separation of the manifestations of these mechanisms we propose a method of space-frequency filtering. Criteria for differentiation of benign and malignant tissues of the women reproductive sphere were found.
Methods and means of Stokes-polarimetry microscopy of optically anisotropic biological layers
NASA Astrophysics Data System (ADS)
Ushenko, A. G.; Dubolazov, A. V.; Ushenko, V. A.; Ushenko, Yu. A.; Sakhnovskiy, M. Yu.; Sidor, M.; Prydiy, O. G.; Olar, O. I.; Lakusta, I. I.
2016-12-01
The results of optical modeling of biological tissues polycrystalline multilayer networks have been presented. Algorithms of reconstruction of parameter distributions were determined that describe the linear and circular birefringence. For the separation of the manifestations of these mechanisms we propose a method of space-frequency filtering. Criteria for differentiation of benign and malignant tissues of the women reproductive sphere were found.
NASA Astrophysics Data System (ADS)
Ham, Woonchul; Song, Chulgyu; Lee, Kangsan; Roh, Seungkuk
2016-05-01
In this paper, we propose a new image reconstruction algorithm that considers the geometric information of the acoustic sources and the sensor detector, and we review the two-step reconstruction algorithm previously proposed based on the geometrical information of the ROI (region of interest), considering the finite size of the acoustic sensor element. In the new image reconstruction algorithm, not only is the mathematical analysis very simple, but its software implementation is also very easy because the FFT is not needed. We verify the effectiveness of the proposed reconstruction algorithm through simulation results obtained using the MATLAB k-Wave toolbox.
Inference of Spatio-Temporal Functions Over Graphs via Multikernel Kriged Kalman Filtering
NASA Astrophysics Data System (ADS)
Ioannidis, Vassilis N.; Romero, Daniel; Giannakis, Georgios B.
2018-06-01
Inference of space-time varying signals on graphs emerges naturally in a plethora of network science related applications. A frequently encountered challenge pertains to reconstructing such dynamic processes, given their values over a subset of vertices and time instants. The present paper develops a graph-aware kernel-based kriged Kalman filter that accounts for the spatio-temporal variations, and offers efficient online reconstruction, even for dynamically evolving network topologies. The kernel-based learning framework bypasses the need for statistical information by capitalizing on the smoothness that graph signals exhibit with respect to the underlying graph. To address the challenge of selecting the appropriate kernel, the proposed filter is combined with a multi-kernel selection module. Such a data-driven method selects a kernel attuned to the signal dynamics on-the-fly within the linear span of a pre-selected dictionary. The novel multi-kernel learning algorithm exploits the eigenstructure of Laplacian kernel matrices to reduce computational complexity. Numerical tests with synthetic and real data demonstrate the superior reconstruction performance of the novel approach relative to state-of-the-art alternatives.
The algorithm of central axis in surface reconstruction
NASA Astrophysics Data System (ADS)
Zhao, Bao Ping; Zhang, Zheng Mei; Cai Li, Ji; Sun, Da Ming; Cao, Hui Ying; Xing, Bao Liang
2017-09-01
Reverse engineering is an important technical means of product imitation and new product development. Its core technology, surface reconstruction, is a current focus of research. Among the various surface reconstruction algorithms, reconstruction based on the central (medial) axis is an important method. The medial axis algorithms used for the various reconstructions are summarized, the problems existing in each method are pointed out, and the aspects that need to be improved are identified. The subsequent stages of surface reconstruction and the development of the axial direction are also discussed.
A reconstruction algorithm for helical CT imaging on PI-planes.
Liang, Hongzhu; Zhang, Cishen; Yan, Ming
2006-01-01
In this paper, a Feldkamp type approximate reconstruction algorithm is presented for helical cone-beam Computed Tomography. To effectively suppress artifacts due to large cone angle scanning, it is proposed to reconstruct the object point-wise on unique customized tilted PI-planes which are close to the data collecting helices of the corresponding points. Such a reconstruction scheme can considerably suppress the artifacts in cone-angle scanning. Computer simulations show that the proposed algorithm can provide improved imaging performance compared with the existing approximate cone-beam reconstruction algorithms.
Lung evolution as a cipher for physiology
Torday, J. S.; Rehan, V. K.
2009-01-01
In the postgenomic era, we need an algorithm to readily translate genes into physiologic principles. The failure to advance biomedicine is due to the false hope raised in the wake of the Human Genome Project (HGP) by the promise of systems biology as a ready means of reconstructing physiology from genes. Like the atom in physics, the cell, not the gene, is the smallest completely functional unit of biology. Trying to reassemble gene regulatory networks without accounting for this fundamental feature of evolution will result in a genomic atlas, but not an algorithm for functional genomics. For example, the evolution of the lung can be “deconvoluted” by applying cell-cell communication mechanisms to all aspects of lung biology: development, homeostasis, and regeneration/repair. Gene regulatory networks common to these processes predict ontogeny, phylogeny, and the disease-related consequences of failed signaling. This algorithm elucidates characteristics of vertebrate physiology as a cascade of emergent and contingent cellular adaptational responses. By reducing complex physiological traits to gene regulatory networks and arranging them hierarchically in a self-organizing map, like the periodic table of elements in physics, the first principles of physiology will emerge. PMID:19366785
Fast tomographic methods for the tokamak ISTTOK
NASA Astrophysics Data System (ADS)
Carvalho, P. J.; Thomsen, H.; Gori, S.; Toussaint, U. v.; Weller, A.; Coelho, R.; Neto, A.; Pereira, T.; Silva, C.; Fernandes, H.
2008-04-01
The achievement of long duration, alternating current discharges on the tokamak ISTTOK requires a real-time plasma position control system. The plasma position determination based on the magnetic probes system has been found to be inadequate during the current inversion due to the reduced plasma current. A tomography diagnostic has therefore been installed to supply the required feedback to the control system. Several tomographic methods are available for soft X-ray or bolometric tomography, among which the Cormack and neural network methods stand out due to their inherent speed of up to 1000 reconstructions per second with currently available technology. This paper discusses the application of these algorithms on fusion devices while comparing the performance and reliability of the results. It has been found that although the Cormack based inversion proved to be faster, the neural network reconstruction has fewer artifacts and is more accurate.
Costa, Cecília M; Silva, Ittalo S; de Sousa, Rafael D; Hortegal, Renato A; Regis, Carlos Danilo M
Myocardial infarction is one of the leading causes of death worldwide. As it is life threatening, it requires immediate and precise treatment. Due to this, a growing number of research efforts and innovations in the field of biomedical signal processing are in high demand. This paper proposes the association of Reconstructed Phase Space and Artificial Neural Networks for Vectorcardiography Myocardial Infarction Recognition. The algorithm yields its best results for a box size of 10 × 10 and the combination of four parameters: box counting (Vx), box counting (Vz), self-similarity method (Vx) and self-similarity method (Vy), with sensitivity = 92%, specificity = 96% and accuracy = 94%. The topographic diagnosis presented different performances for different types of infarction, with better results for anterior wall infarctions and less accurate results for inferior infarctions. Copyright © 2018 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Chen, Hu; Zhang, Yi; Zhou, Jiliu; Wang, Ge
2017-09-01
Given the potential risk of X-ray radiation to the patient, low-dose CT has attracted considerable interest in the medical imaging field. Currently, the mainstream low-dose CT methods include vendor-specific sinogram domain filtration and iterative reconstruction algorithms, but they need to access raw data whose formats are not transparent to most users. Due to the difficulty of modeling the statistical characteristics in the image domain, the existing methods for directly processing reconstructed images cannot eliminate image noise very well while keeping structural details. Inspired by the idea of deep learning, here we combine the autoencoder, deconvolution network, and shortcut connections into the residual encoder-decoder convolutional neural network (RED-CNN) for low-dose CT imaging. After patch-based training, the proposed RED-CNN achieves a competitive performance relative to state-of-the-art methods. Especially, our method has been favorably evaluated in terms of noise suppression and structural preservation.
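A hedged PyTorch sketch in the spirit of RED-CNN is given below: stacked convolutions, mirrored deconvolutions, shortcut connections that add encoder features back in before the decoder's ReLU, and a residual shortcut from the input. It assumes the PyTorch library, and the depth, channel counts, and kernel sizes are illustrative and much smaller than a practical implementation.

```python
# Hedged residual encoder-decoder CNN sketch (toy layer sizes, PyTorch assumed).
import torch
import torch.nn as nn

class REDCNNSketch(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.enc1 = nn.Conv2d(1, ch, kernel_size=5, padding=2)
        self.enc2 = nn.Conv2d(ch, ch, kernel_size=5, padding=2)
        self.dec2 = nn.ConvTranspose2d(ch, ch, kernel_size=5, padding=2)
        self.dec1 = nn.ConvTranspose2d(ch, 1, kernel_size=5, padding=2)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        e1 = self.relu(self.enc1(x))
        e2 = self.relu(self.enc2(e1))
        d2 = self.relu(self.dec2(e2) + e1)      # shortcut from the matching encoder layer
        return self.relu(self.dec1(d2) + x)     # residual shortcut from the noisy input

low_dose = torch.randn(1, 1, 64, 64)            # toy low-dose CT patch
print(REDCNNSketch()(low_dose).shape)           # torch.Size([1, 1, 64, 64])
```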
NASA Astrophysics Data System (ADS)
Lohvithee, Manasavee; Biguri, Ander; Soleimani, Manuchehr
2017-12-01
There are a number of powerful total variation (TV) regularization methods that have great promise in limited data cone-beam CT reconstruction with an enhancement of image quality. These promising TV methods require careful selection of the image reconstruction parameters, for which there are no well-established criteria. This paper presents a comprehensive evaluation of parameter selection in a number of major TV-based reconstruction algorithms. An appropriate way of selecting the values for each individual parameter has been suggested. Finally, a new adaptive-weighted projection-controlled steepest descent (AwPCSD) algorithm is presented, which implements the edge-preserving function for CBCT reconstruction with limited data. The proposed algorithm shows significant robustness compared to three other existing algorithms: ASD-POCS, AwASD-POCS and PCSD. The proposed AwPCSD algorithm is able to preserve the edges of the reconstructed images better with fewer sensitive parameters to tune.
Analytic TOF PET reconstruction algorithm within DIRECT data partitioning framework
Matej, Samuel; Daube-Witherspoon, Margaret E.; Karp, Joel S.
2016-01-01
Iterative reconstruction algorithms are routinely used for clinical practice; however, analytic algorithms are relevant candidates for quantitative research studies due to their linear behavior. While iterative algorithms also benefit from the inclusion of accurate data and noise models, the widespread use of TOF scanners with less sensitivity to noise and data imperfections makes analytic algorithms even more promising. In our previous work we have developed a novel iterative reconstruction approach (DIRECT: Direct Image Reconstruction for TOF) providing a convenient TOF data partitioning framework and leading to very efficient reconstructions. In this work we have expanded DIRECT to include an analytic TOF algorithm with confidence weighting incorporating models of both TOF and spatial resolution kernels. Feasibility studies using simulated and measured data demonstrate that analytic-DIRECT with appropriate resolution and regularization filters is able to provide matched bias vs. variance performance to iterative TOF reconstruction with a matched resolution model. PMID:27032968
Analytic TOF PET reconstruction algorithm within DIRECT data partitioning framework
NASA Astrophysics Data System (ADS)
Matej, Samuel; Daube-Witherspoon, Margaret E.; Karp, Joel S.
2016-05-01
Iterative reconstruction algorithms are routinely used in clinical practice; however, analytic algorithms are relevant candidates for quantitative research studies due to their linear behavior. While iterative algorithms also benefit from the inclusion of accurate data and noise models, the widespread use of time-of-flight (TOF) scanners, with their lower sensitivity to noise and data imperfections, makes analytic algorithms even more promising. In our previous work we have developed a novel iterative reconstruction approach (DIRECT: direct image reconstruction for TOF) providing a convenient TOF data partitioning framework and leading to very efficient reconstructions. In this work we have expanded DIRECT to include an analytic TOF algorithm with confidence weighting incorporating models of both TOF and spatial resolution kernels. Feasibility studies using simulated and measured data demonstrate that analytic-DIRECT with appropriate resolution and regularization filters is able to provide matched bias versus variance performance to iterative TOF reconstruction with a matched resolution model.
Theory and algorithms for image reconstruction on chords and within regions of interest
NASA Astrophysics Data System (ADS)
Zou, Yu; Pan, Xiaochuan; Sidky, Emil Y.
2005-11-01
We introduce a formula for image reconstruction on a chord of a general source trajectory. We subsequently develop three algorithms for exact image reconstruction on a chord from data acquired with the general trajectory. Interestingly, two of the developed algorithms can accommodate data containing transverse truncations. The widely used helical trajectory and other trajectories discussed in literature can be interpreted as special cases of the general trajectory, and the developed theory and algorithms are thus directly applicable to reconstructing images exactly from data acquired with these trajectories. For instance, chords on a helical trajectory are equivalent to the n-PI-line segments. In this situation, the proposed algorithms become the algorithms that we proposed previously for image reconstruction on PI-line segments. We have performed preliminary numerical studies, which include the study on image reconstruction on chords of two-circle trajectory, which is nonsmooth, and on n-PI lines of a helical trajectory, which is smooth. Quantitative results of these studies verify and demonstrate the proposed theory and algorithms.
Optimisation algorithms for ECG data compression.
Haugland, D; Heber, J G; Husøy, J H
1997-07-01
The use of exact optimisation algorithms for compressing digital electrocardiograms (ECGs) is demonstrated. As opposed to traditional time-domain methods, which use heuristics to select a small subset of representative signal samples, the problem of selecting the subset is formulated in rigorous mathematical terms. This approach makes it possible to derive algorithms guaranteeing the smallest possible reconstruction error when a bounded selection of signal samples is interpolated. The proposed model resembles well-known network models and is solved by a cubic dynamic programming algorithm. When applied to standard test problems, the algorithm produces a compressed representation for which the distortion is about one-half of that obtained by traditional time-domain compression techniques at reasonable compression ratios. This illustrates that, in terms of the accuracy of decoded signals, existing time-domain heuristics for ECG compression may be far from what is theoretically achievable. The paper is an attempt to bridge this gap.
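A minimal sketch of the underlying idea, assuming piecewise-linear interpolation between retained samples and a squared-error distortion; the paper's network-model formulation and its cubic dynamic program are not reproduced, and this unoptimized version trades speed for clarity:

```python
import numpy as np

def interp_cost(x, i, j):
    """Squared error of linearly interpolating x[i..j] from its two endpoints."""
    if j - i < 2:
        return 0.0
    t = np.linspace(0.0, 1.0, j - i + 1)
    approx = x[i] + t * (x[j] - x[i])
    return float(np.sum((x[i + 1:j] - approx[1:-1]) ** 2))

def optimal_samples(x, m):
    """Pick m sample indices (first and last included) minimizing the total
    interpolation error, via a straightforward dynamic program."""
    n = len(x)
    cost = np.full((m, n), np.inf)
    prev = np.zeros((m, n), dtype=int)
    cost[0, 0] = 0.0
    for k in range(1, m):
        for j in range(k, n):
            for i in range(k - 1, j):
                c = cost[k - 1, i] + interp_cost(x, i, j)
                if c < cost[k, j]:
                    cost[k, j] = c
                    prev[k, j] = i
    # Backtrack from the mandatory last sample.
    idx, k, j = [n - 1], m - 1, n - 1
    while k > 0:
        j = prev[k, j]
        idx.append(j)
        k -= 1
    return idx[::-1], cost[m - 1, n - 1]

ecg = np.sin(np.linspace(0, 6 * np.pi, 200)) + 0.05 * np.random.randn(200)
samples, err = optimal_samples(ecg, m=20)
print(samples[:5], round(err, 4))
```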
A fast 4D cone beam CT reconstruction method based on the OSC-TV algorithm.
Mascolo-Fortin, Julia; Matenine, Dmitri; Archambault, Louis; Després, Philippe
2018-01-01
Four-dimensional cone beam computed tomography allows for temporally resolved imaging with useful applications in radiotherapy, but raises particular challenges in terms of image quality and computation time. The purpose of this work is to develop a fast and accurate 4D algorithm by adapting a GPU-accelerated ordered subsets convex algorithm (OSC), combined with the total variation minimization regularization technique (TV). Different initialization schemes were studied to adapt the OSC-TV algorithm to 4D reconstruction: each respiratory phase was initialized either with a 3D reconstruction or a blank image. Reconstruction algorithms were tested on a dynamic numerical phantom and on a clinical dataset. 4D iterations were implemented for a cluster of 8 GPUs. All developed methods allowed for an adequate visualization of the respiratory movement and compared favorably to the McKinnon-Bates and adaptive steepest descent projection onto convex sets algorithms, while the 4D reconstructions initialized from a prior 3D reconstruction led to better overall image quality. The most suitable adaptation of OSC-TV to 4D CBCT was found to be a combination of a prior FDK reconstruction and a 4D OSC-TV reconstruction with a reconstruction time of 4.5 minutes. This relatively short reconstruction time could facilitate a clinical use.
Compressively sampled MR image reconstruction using generalized thresholding iterative algorithm
NASA Astrophysics Data System (ADS)
Elahi, Sana; Kaleem, Muhammad; Omer, Hammad
2018-01-01
Compressed sensing (CS) is an emerging area of interest in Magnetic Resonance Imaging (MRI). CS is used for the reconstruction of images from a very limited number of samples in k-space, which significantly reduces the MRI data acquisition time. One important requirement for signal recovery in CS is the use of an appropriate non-linear reconstruction algorithm. It is a challenging task to choose a reconstruction algorithm that would accurately reconstruct the MR images from the under-sampled k-space data. Various algorithms have been used to solve the system of non-linear equations for better image quality and reconstruction speed in CS. In the recent past, the iterative soft thresholding algorithm (ISTA) has been introduced in CS-MRI. This algorithm directly cancels the incoherent artifacts produced because of the undersampling in k-space. This paper introduces an improved iterative algorithm based on a p-thresholding technique for CS-MRI image reconstruction. The use of a p-thresholding function promotes sparsity in the image, which is a key factor for CS-based image reconstruction. The p-thresholding based iterative algorithm is a modification of ISTA and minimizes non-convex functions. It has been shown that the proposed p-thresholding iterative algorithm can be used effectively to recover a fully sampled image from the under-sampled data in MRI. The performance of the proposed method is verified using simulated and actual MRI data taken at St. Mary's Hospital, London. The quality of the reconstructed images is measured in terms of peak signal-to-noise ratio (PSNR), artifact power (AP), and structural similarity index measure (SSIM). The proposed approach shows improved performance when compared to other iterative algorithms based on log thresholding, soft thresholding and hard thresholding techniques at different reduction factors.
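A minimal numpy sketch of an ISTA-style iteration with a generalized p-thresholding operator for a randomly undersampled single-coil Cartesian acquisition; the exact thresholding function, step size and sparsifying transform of the paper are not reproduced, and image-domain sparsity is assumed here for brevity:

```python
import numpy as np

def p_threshold(x, lam, p=0.5):
    """Generalized (p < 1) shrinkage applied elementwise; p = 1 recovers
    ordinary soft thresholding."""
    mag = np.maximum(np.abs(x), 1e-12)
    shrunk = np.maximum(mag - lam * mag ** (p - 1), 0.0)
    return shrunk * x / mag

def ista_recon(kspace, mask, lam=0.01, p=0.5, n_iter=50):
    """Alternate a data-consistency step in k-space with a thresholding step
    that suppresses the incoherent undersampling artifacts."""
    x = np.fft.ifft2(kspace * mask)
    for _ in range(n_iter):
        resid = mask * (np.fft.fft2(x) - kspace)   # residual on sampled lines
        x = x - np.fft.ifft2(resid)                 # enforce data consistency
        x = p_threshold(x, lam, p)                  # promote sparsity
    return x

img = np.zeros((64, 64)); img[20:30, 20:30] = 1.0        # sparse toy image
mask = (np.random.rand(64, 64) < 0.4).astype(float)      # 40% random sampling
recon = ista_recon(np.fft.fft2(img) * mask, mask)
print(np.abs(recon).max())
```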
NASA Astrophysics Data System (ADS)
Fotin, Sergei V.; Yin, Yin; Haldankar, Hrishikesh; Hoffmeister, Jeffrey W.; Periaswamy, Senthil
2016-03-01
Computer-aided detection (CAD) has been used in screening mammography for many years and is likely to be utilized for digital breast tomosynthesis (DBT). Higher detection performance is desirable as it may have an impact on radiologists' decisions and clinical outcomes. Recently, algorithms based on deep convolutional architectures have been shown to achieve state-of-the-art performance in object classification and detection. Similarly, we trained a deep convolutional neural network directly on patches sampled from two-dimensional mammography and reconstructed DBT volumes and compared its performance to a conventional CAD algorithm that is based on computation and classification of hand-engineered features. The detection performance was evaluated on an independent test set of 344 DBT reconstructions (GE SenoClaire 3D, iterative reconstruction algorithm) containing 328 suspicious and 115 malignant soft tissue densities including masses and architectural distortions. Detection sensitivity was measured on a region of interest (ROI) basis at the rate of five detection marks per volume. Moving from the conventional to the deep learning approach resulted in an increase of ROI sensitivity from 0.832 ± 0.040 to 0.893 ± 0.033 for suspicious ROIs, and from 0.852 ± 0.065 to 0.930 ± 0.046 for malignant ROIs. These results indicate the high utility of deep feature learning in the analysis of DBT data and the high potential of the method for broader medical image analysis tasks.
Wang, Gang; Zhao, Zhikai; Ning, Yongjie
2018-05-28
With the application of the Internet of Things (IoT) in coal mines, mobile measurement devices such as intelligent mine lamps are generating increasing amounts of moving measurement data. How to transmit these large amounts of mobile measurement data effectively has become an urgent problem. This paper presents a compressed sensing algorithm for the large amount of coal mine IoT moving measurement data based on a multi-hop network and total variation. Taking gas data in the mobile measurement data as an example, two network models for the transmission of gas data flow, namely single-hop and multi-hop transmission modes, are investigated in depth, and a gas data compressed sensing collection model is built based on a multi-hop network. To utilize the sparse characteristics of gas data, the concept of total variation is introduced and a high-efficiency gas data compression and reconstruction method based on Total Variation Sparsity based on Multi-Hop (TVS-MH) is proposed. According to the simulation results, by using the proposed method, the moving measurement data flow from an underground distributed mobile network can be acquired and transmitted efficiently.
Sparse dynamical Boltzmann machine for reconstructing complex networks with binary dynamics
NASA Astrophysics Data System (ADS)
Chen, Yu-Zhong; Lai, Ying-Cheng
2018-03-01
Revealing the structure and dynamics of complex networked systems from observed data is a problem of current interest. Is it possible to develop a completely data-driven framework to decipher the network structure and different types of dynamical processes on complex networks? We develop a model named sparse dynamical Boltzmann machine (SDBM) as a structural estimator for complex networks that host binary dynamical processes. The SDBM attains its topology according to that of the original system and is capable of simulating the original binary dynamical process. We develop a fully automated method based on compressive sensing and a clustering algorithm to construct the SDBM. We demonstrate, for a variety of representative dynamical processes on model and real world complex networks, that the equivalent SDBM can recover the network structure of the original system and simulates its dynamical behavior with high precision.
High resolution x-ray CMT: Reconstruction methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, J.K.
This paper qualitatively discusses the primary characteristics of methods for reconstructing tomographic images from a set of projections. These reconstruction methods can be categorized as either "analytic" or "iterative" techniques. Analytic algorithms are derived from the formal inversion of equations describing the imaging process, while iterative algorithms incorporate a model of the imaging process and provide a mechanism to iteratively improve image estimates. Analytic reconstruction algorithms are typically computationally more efficient than iterative methods; however, analytic algorithms are available for a relatively limited set of imaging geometries and situations. Thus, the framework of iterative reconstruction methods is better suited for high-accuracy tomographic reconstruction codes.
Xu, Q; Yang, D; Tan, J; Anastasio, M
2012-06-01
The purpose of this work is to improve image quality and reduce imaging dose in CBCT for radiation therapy applications, and to realize near real-time image reconstruction by using a fast-convergence iterative algorithm accelerated on multiple GPUs. An iterative image reconstruction that sought to minimize a weighted least-squares cost function with total variation (TV) regularization was employed to mitigate projection data incompleteness and noise. To achieve rapid 3D image reconstruction (< 1 min), a highly optimized multiple-GPU implementation of the algorithm was developed. The convergence rate and reconstruction accuracy were evaluated using a modified 3D Shepp-Logan digital phantom and a Catphan-600 physical phantom. The reconstructed images were compared with the clinical FDK reconstruction results. Digital phantom studies showed that only 15 iterations and 60 iterations are needed to achieve algorithm convergence for the 360-view and 60-view cases, respectively. The RMSE was reduced to 10^-4 and 10^-2, respectively, by using 15 iterations for each case. Our algorithm required 5.4 s to complete one iteration for the 60-view case using one Tesla C2075 GPU. The few-view study indicated that our iterative algorithm has great potential to reduce the imaging dose and preserve good image quality. For the physical Catphan studies, the images obtained from the iterative algorithm possessed better spatial resolution and higher SNRs than those obtained by use of a clinical FDK reconstruction algorithm. We have developed a fast-convergence iterative algorithm for CBCT image reconstruction. The developed algorithm yielded images with better spatial resolution and higher SNR than those produced by a commercial FDK tool. In addition, the few-view study showed that the iterative algorithm has great potential for significantly reducing imaging dose. We expect that the developed reconstruction approach will facilitate applications including IGART and patient daily CBCT-based treatment localization. © 2012 American Association of Physicists in Medicine.
Xiaodong Zhuge; Palenstijn, Willem Jan; Batenburg, Kees Joost
2016-01-01
In this paper, we present a novel iterative reconstruction algorithm for discrete tomography (DT) named total variation regularized discrete algebraic reconstruction technique (TVR-DART) with automated gray value estimation. This algorithm is more robust and automated than the original DART algorithm, and is aimed at imaging of objects consisting of only a few different material compositions, each corresponding to a different gray value in the reconstruction. By exploiting two types of prior knowledge of the scanned object simultaneously, TVR-DART solves the discrete reconstruction problem within an optimization framework inspired by compressive sensing to steer the current reconstruction toward a solution with the specified number of discrete gray values. The gray values and the thresholds are estimated as the reconstruction improves through iterations. Extensive experiments from simulated data, experimental μCT, and electron tomography data sets show that TVR-DART is capable of providing more accurate reconstruction than existing algorithms under noisy conditions from a small number of projection images and/or from a small angular range. Furthermore, the new algorithm requires less effort on parameter tuning compared with the original DART algorithm. With TVR-DART, we aim to provide the tomography society with an easy-to-use and robust algorithm for DT.
Lopez-Martin, Manuel; Carro, Belen; Sanchez-Esguevillas, Antonio; Lloret, Jaime
2017-08-26
The purpose of a Network Intrusion Detection System is to detect intrusive, malicious activities or policy violations in a host or host's network. In current networks, such systems are becoming more important as the number and variety of attacks increase along with the volume and sensitiveness of the information exchanged. This is of particular interest to Internet of Things networks, where an intrusion detection system will be critical as its economic importance continues to grow, making it the focus of future intrusion attacks. In this work, we propose a new network intrusion detection method that is appropriate for an Internet of Things network. The proposed method is based on a conditional variational autoencoder with a specific architecture that integrates the intrusion labels inside the decoder layers. The proposed method is less complex than other unsupervised methods based on a variational autoencoder and it provides better classification results than other familiar classifiers. More important, the method can perform feature reconstruction, that is, it is able to recover missing features from incomplete training datasets. We demonstrate that the reconstruction accuracy is very high, even for categorical features with a high number of distinct values. This work is unique in the network intrusion detection field, presenting the first application of a conditional variational autoencoder and providing the first algorithm to perform feature recovery.
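A toy PyTorch sketch of the architectural idea described above: a conditional variational autoencoder in which the intrusion label is injected into the decoder, enabling reconstruction (and hence recovery) of input features. Layer sizes, feature and label counts are illustrative assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CVAE(nn.Module):
    """Toy conditional VAE: the label is concatenated to the latent code
    before the decoder layers."""
    def __init__(self, n_features=32, n_labels=5, latent=8):
        super().__init__()
        self.enc = nn.Linear(n_features, 64)
        self.mu = nn.Linear(64, latent)
        self.logvar = nn.Linear(64, latent)
        self.dec1 = nn.Linear(latent + n_labels, 64)   # labels enter the decoder
        self.dec2 = nn.Linear(64, n_features)

    def forward(self, x, y_onehot):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        h_dec = F.relu(self.dec1(torch.cat([z, y_onehot], dim=1)))
        return self.dec2(h_dec), mu, logvar                       # feature reconstruction

def cvae_loss(x, x_rec, mu, logvar):
    # Reconstruction error plus KL divergence to the standard normal prior.
    rec = F.mse_loss(x_rec, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

x = torch.randn(16, 32)
y = F.one_hot(torch.randint(0, 5, (16,)), num_classes=5).float()
x_rec, mu, logvar = CVAE()(x, y)
print(cvae_loss(x, x_rec, mu, logvar).item())
```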
Fusion of sensor geometry into additive strain fields measured with sensing skin
NASA Astrophysics Data System (ADS)
Downey, Austin; Sadoughi, Mohammadkazem; Laflamme, Simon; Hu, Chao
2018-07-01
Recently, numerous studies have been conducted on flexible skin-like membranes for the cost-effective monitoring of large-scale structures. The authors have proposed a large-area electronic consisting of a soft elastomeric capacitor (SEC) that transduces a structure's strain into a measurable change in capacitance. Arranged in a network configuration, SECs deployed onto the surface of a structure could be used to reconstruct strain maps. Several regression methods have been recently developed with the purpose of reconstructing such maps, but all these algorithms assume that each SEC-measured strain is located at the sensor's geometric center. This assumption may not be realistic since an SEC measures the average strain value of the whole area covered by the sensor. One solution is to reduce the size of each SEC, but this would also increase the number of sensors required to cover the large-scale structure, and therefore the power and data acquisition requirements. Instead, this study proposes an algorithm that accounts for the sensor's strain-averaging behavior by adjusting the strain measurements and constructing a full-field strain map using the kriging interpolation method. The proposed algorithm fuses the geometry of an SEC sensor into the strain map reconstruction in order to adaptively adjust the average kriging-estimated strain of the area monitored by the sensor to the signal. Results show that by considering the sensor geometry, in addition to the sensor signal and location, the proposed strain map adjustment algorithm is capable of producing more accurate full-field strain maps than the traditional spatial interpolation method that considers only signal and location.
van Pelt, Jaap; Carnell, Andrew; de Ridder, Sander; Mansvelder, Huibert D.; van Ooyen, Arjen
2010-01-01
Neurons make synaptic connections at locations where axons and dendrites are sufficiently close in space. Typically, the required proximity is based on the dimensions of dendritic spines and axonal boutons. Based on this principle, one can search for such locations in networks formed by reconstructed neurons or computer-generated neurons. Candidate synapses are then located where axons and dendrites are within a given criterion distance from each other. Both experimentally reconstructed and model-generated neurons are usually represented morphologically by piecewise-linear structures (line pieces or cylinders). Proximity tests are then performed on all pairs of line pieces from both axonal and dendritic branches. Applying only a test on the distance between line pieces may result in local clusters of synaptic sites when more than one pair of nearby line pieces from axonal and dendritic branches is sufficiently close, and may introduce a dependency on the length scale of the individual line pieces. The present paper describes a new algorithm for defining locations of candidate synapses which is based on the crossing requirement of a line piece pair, while the length of the orthogonal distance between the line pieces is subjected to the distance criterion for testing 3D proximity. PMID:21160548
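A minimal sketch of the proximity test described above: the closest points between an axonal and a dendritic line piece are computed, the orthogonal distance is checked against the criterion, and the crossing requirement demands that the closest points lie in the interior of both pieces. This is an interpretation of the described test, not the authors' implementation; segments are assumed non-degenerate:

```python
import numpy as np

def segment_distance(p1, q1, p2, q2, eps=1e-9):
    """Closest-point distance between 3D segments p1-q1 and p2-q2, returning
    the distance and the segment parameters (s, t) of the closest points."""
    d1, d2, r = q1 - p1, q2 - p2, p1 - p2
    a, e, f = d1 @ d1, d2 @ d2, d2 @ r
    b, c = d1 @ d2, d1 @ r
    denom = a * e - b * b
    s = np.clip((b * f - c * e) / denom, 0.0, 1.0) if denom > eps else 0.0
    t = (b * s + f) / e if e > eps else 0.0
    if t < 0.0:
        t, s = 0.0, np.clip(-c / a, 0.0, 1.0)
    elif t > 1.0:
        t, s = 1.0, np.clip((b - c) / a, 0.0, 1.0)
    return float(np.linalg.norm((p1 + s * d1) - (p2 + t * d2))), s, t

def is_candidate_synapse(axon_piece, dendrite_piece, criterion=2.0):
    d, s, t = segment_distance(*axon_piece, *dendrite_piece)
    # Crossing requirement: the closest points must lie strictly inside both
    # line pieces, so a crossing pair yields a single candidate site.
    return d <= criterion and 0.0 < s < 1.0 and 0.0 < t < 1.0

axon = (np.array([0., 0., 0.]), np.array([10., 0., 0.]))
dend = (np.array([5., -3., 1.]), np.array([5., 3., 1.]))
print(is_candidate_synapse(axon, dend))  # True: closest approach of 1 unit, interior to both
```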
NASA Astrophysics Data System (ADS)
Bae, Kyung-hoon; Park, Changhan; Kim, Eun-soo
2008-03-01
In this paper, intermediate view reconstruction (IVR) using an adaptive disparity search algorithm (ASDA) is proposed for real-time 3-dimensional (3D) processing. The proposed algorithm can reduce the processing time of disparity estimation by selecting an adaptive disparity search range, and can also increase the quality of 3D imaging. That is, by adaptively predicting the mutual correlation between a stereo image pair using the proposed algorithm, the bandwidth of the stereo input image pair can be compressed to the level of a conventional 2D image, and a predicted image can be effectively reconstructed using a reference image and disparity vectors. Experiments on the stereo sequences 'Pot Plant' and 'IVO' show that the proposed algorithm improves the PSNR of a reconstructed image by about 4.8 dB compared with conventional algorithms, and reduces the synthesizing time of a reconstructed image to about 7.02 s.
NASA Astrophysics Data System (ADS)
Mickevicius, Nikolai J.; Paulson, Eric S.
2017-04-01
The purpose of this work is to investigate the effects of undersampling and reconstruction algorithm on the total processing time and image quality of respiratory phase-resolved 4D MRI data. Specifically, the goal is to obtain quality 4D-MRI data with a combined acquisition and reconstruction time of five minutes or less, which we reasoned would be satisfactory for pre-treatment 4D-MRI in online MRI-gRT. A 3D stack-of-stars, self-navigated, 4D-MRI acquisition was used to scan three healthy volunteers at three image resolutions and two scan durations. The NUFFT, CG-SENSE, SPIRiT, and XD-GRASP reconstruction algorithms were used to reconstruct each dataset on a high performance reconstruction computer. The overall image quality, reconstruction time, artifact prevalence, and motion estimates were compared. The CG-SENSE and XD-GRASP reconstructions provided superior image quality over the other algorithms. The combination of a 3D SoS sequence and parallelized reconstruction algorithms using computing hardware more advanced than those typically seen on product MRI scanners, can result in acquisition and reconstruction of high quality respiratory correlated 4D-MRI images in less than five minutes.
Acceleration of the direct reconstruction of linear parametric images using nested algorithms.
Wang, Guobao; Qi, Jinyi
2010-03-07
Parametric imaging using dynamic positron emission tomography (PET) provides important information for biological research and clinical diagnosis. Indirect and direct methods have been developed for reconstructing linear parametric images from dynamic PET data. Indirect methods are relatively simple and easy to implement because the image reconstruction and kinetic modeling are performed in two separate steps. Direct methods estimate parametric images directly from raw PET data and are statistically more efficient. However, the convergence rate of direct algorithms can be slow due to the coupling between the reconstruction and kinetic modeling. Here we present two fast gradient-type algorithms for direct reconstruction of linear parametric images. The new algorithms decouple the reconstruction and linear parametric modeling at each iteration by employing the principle of optimization transfer. Convergence speed is accelerated by running more sub-iterations of linear parametric estimation because the computation cost of the linear parametric modeling is much less than that of the image reconstruction. Computer simulation studies demonstrated that the new algorithms converge much faster than the traditional expectation maximization (EM) and the preconditioned conjugate gradient algorithms for dynamic PET.
Axial Cone-Beam Reconstruction by Weighted BPF/DBPF and Orthogonal Butterfly Filtering.
Tang, Shaojie; Tang, Xiangyang
2016-09-01
The backprojection-filtration (BPF) and the derivative backprojection filtered (DBPF) algorithms, in which Hilbert filtering is the common algorithmic feature, are originally derived for exact helical reconstruction from cone-beam (CB) scan data and axial reconstruction from fan beam data, respectively. These two algorithms can be heuristically extended for image reconstruction from axial CB scan data, but induce severe artifacts in images located away from the central plane, determined by the circular source trajectory. We propose an algorithmic solution herein to eliminate the artifacts. The solution is an integration of three-dimensional (3-D) weighted axial CB-BPF/DBPF algorithm with orthogonal butterfly filtering, namely axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering. Using the computer simulated Forbild head and thoracic phantoms that are rigorous in inspecting the reconstruction accuracy, and an anthropomorphic thoracic phantom with projection data acquired by a CT scanner, we evaluate the performance of the proposed algorithm. Preliminary results show that the orthogonal butterfly filtering can eliminate the severe streak artifacts existing in the images reconstructed by the 3-D weighted axial CB-BPF/DBPF algorithm located at off-central planes. Integrated with orthogonal butterfly filtering, the 3-D weighted CB-BPF/DBPF algorithm can perform at least as well as the 3-D weighted CB-FBP algorithm in image reconstruction from axial CB scan data. The proposed 3-D weighted axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering can be an algorithmic solution for CT imaging in extensive clinical and preclinical applications.
Wang, Yi Kan; Hurley, Daniel G.; Schnell, Santiago; Print, Cristin G.; Crampin, Edmund J.
2013-01-01
We develop a new regression algorithm, cMIKANA, for inference of gene regulatory networks from combinations of steady-state and time-series gene expression data. Using simulated gene expression datasets to assess the accuracy of reconstructing gene regulatory networks, we show that steady-state and time-series data sets can successfully be combined to identify gene regulatory interactions using the new algorithm. Inferring gene networks from combined data sets was found to be advantageous when using noisy measurements collected with either lower sampling rates or a limited number of experimental replicates. We illustrate our method by applying it to a microarray gene expression dataset from human umbilical vein endothelial cells (HUVECs) which combines time series data from treatment with growth factor TNF and steady state data from siRNA knockdown treatments. Our results suggest that the combination of steady-state and time-series datasets may provide better prediction of RNA-to-RNA interactions, and may also reveal biological features that cannot be identified from dynamic or steady state information alone. Finally, we consider the experimental design of genomics experiments for gene regulatory network inference and show that network inference can be improved by incorporating steady-state measurements with time-series data. PMID:23967277
Bayesian reconstruction of gravitational wave bursts using chirplets
NASA Astrophysics Data System (ADS)
Millhouse, Margaret; Cornish, Neil J.; Littenberg, Tyson
2018-05-01
The LIGO-Virgo Collaboration uses a variety of techniques to detect and characterize gravitational waves. One approach is to use templates—models for the signals derived from Einstein's equations. Another approach is to extract the signals directly from the coherent response of the detectors in the LIGO-Virgo network. Both approaches played an important role in the first gravitational wave detections. Here we extend the BayesWave analysis algorithm, which reconstructs gravitational wave signals using a collection of continuous wavelets, to use a generalized wavelet family, known as chirplets, that have time-evolving frequency content. Since generic gravitational wave signals have frequency content that evolves in time, a collection of chirplets provides a more compact representation of the signal, resulting in more accurate waveform reconstructions, especially for low signal-to-noise events, and events that occupy a large time-frequency volume.
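A minimal sketch of a chirplet as a sine-Gaussian whose instantaneous frequency evolves linearly in time, which is one common parameterization; the exact chirplet family and priors used in BayesWave are not reproduced here:

```python
import numpy as np

def chirplet(t, amp=1.0, t0=0.0, f0=100.0, fdot=50.0, tau=0.05, phi0=0.0):
    """Gaussian-envelope sinusoid whose instantaneous frequency
    f0 + fdot*(t - t0) evolves linearly in time; fdot = 0 recovers an
    ordinary sine-Gaussian wavelet."""
    dt = t - t0
    envelope = amp * np.exp(-dt**2 / (2.0 * tau**2))
    phase = 2.0 * np.pi * (f0 * dt + 0.5 * fdot * dt**2) + phi0
    return envelope * np.cos(phase)

t = np.linspace(-0.2, 0.2, 4096)
h = chirplet(t, f0=150.0, fdot=400.0)   # frequency sweeps upward through t0
print(h.shape, np.abs(h).max())
```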
Knowledge Discovery in Spectral Data by Means of Complex Networks
Zanin, Massimiliano; Papo, David; Solís, José Luis González; Espinosa, Juan Carlos Martínez; Frausto-Reyes, Claudio; Anda, Pascual Palomares; Sevilla-Escoboza, Ricardo; Boccaletti, Stefano; Menasalvas, Ernestina; Sousa, Pedro
2013-01-01
In the last decade, complex networks have widely been applied to the study of many natural and man-made systems, and to the extraction of meaningful information from the interaction structures created by genes and proteins. Nevertheless, less attention has been devoted to metabonomics, due to the lack of a natural network representation of spectral data. Here we define a technique for reconstructing networks from spectral data sets, where nodes represent spectral bins, and pairs of them are connected when their intensities follow a pattern associated with a disease. The structural analysis of the resulting network can then be used to feed standard data-mining algorithms, for instance for the classification of new (unlabeled) subjects. Furthermore, we show how the structure of the network is resilient to the presence of external additive noise, and how it can be used to extract relevant knowledge about the development of the disease. PMID:24957895
Knowledge discovery in spectral data by means of complex networks.
Zanin, Massimiliano; Papo, David; Solís, José Luis González; Espinosa, Juan Carlos Martínez; Frausto-Reyes, Claudio; Anda, Pascual Palomares; Sevilla-Escoboza, Ricardo; Jaimes-Reategui, Rider; Boccaletti, Stefano; Menasalvas, Ernestina; Sousa, Pedro
2013-03-11
In the last decade, complex networks have widely been applied to the study of many natural and man-made systems, and to the extraction of meaningful information from the interaction structures created by genes and proteins. Nevertheless, less attention has been devoted to metabonomics, due to the lack of a natural network representation of spectral data. Here we define a technique for reconstructing networks from spectral data sets, where nodes represent spectral bins, and pairs of them are connected when their intensities follow a pattern associated with a disease. The structural analysis of the resulting network can then be used to feed standard data-mining algorithms, for instance for the classification of new (unlabeled) subjects. Furthermore, we show how the structure of the network is resilient to the presence of external additive noise, and how it can be used to extract relevant knowledge about the development of the disease.
NASA Astrophysics Data System (ADS)
Chen, Xiao-jun; Dong, Li-zhi; Wang, Shuai; Yang, Ping; Xu, Bing
2017-11-01
In quadri-wave lateral shearing interferometry (QWLSI), when the intensity distribution of the incident light wave is non-uniform, part of the information of the intensity distribution will couple with the wavefront derivatives to cause wavefront reconstruction errors. In this paper, we propose two algorithms to reduce the influence of a non-uniform intensity distribution on wavefront reconstruction. Our simulation results demonstrate that the reconstructed amplitude distribution (RAD) algorithm can effectively reduce the influence of the intensity distribution on the wavefront reconstruction and that the collected amplitude distribution (CAD) algorithm can almost eliminate it.
Discovering time-lagged rules from microarray data using gene profile classifiers
2011-01-01
Background Gene regulatory networks have an essential role in every process of life. In this regard, the amount of genome-wide time series data is becoming increasingly available, providing the opportunity to discover the time-delayed gene regulatory networks that govern the majority of these molecular processes. Results This paper aims at reconstructing gene regulatory networks from multiple genome-wide microarray time series datasets. In this sense, a new model-free algorithm called GRNCOP2 (Gene Regulatory Network inference by Combinatorial OPtimization 2), which is a significant evolution of the GRNCOP algorithm, was developed using combinatorial optimization of gene profile classifiers. The method is capable of inferring potential time-delay relationships with any span of time between genes from various time series datasets given as input. The proposed algorithm was applied to time series data composed of twenty yeast genes that are highly relevant for the cell-cycle study, and the results were compared against several related approaches. The outcomes have shown that GRNCOP2 outperforms the contrasted methods in terms of the proposed metrics, and that the results are consistent with previous biological knowledge. Additionally, a genome-wide study on multiple publicly available time series data was performed. In this case, the experimentation has exhibited the soundness and scalability of the new method which inferred highly-related statistically-significant gene associations. Conclusions A novel method for inferring time-delayed gene regulatory networks from genome-wide time series datasets is proposed in this paper. The method was carefully validated with several publicly available data sets. The results have demonstrated that the algorithm constitutes a usable model-free approach capable of predicting meaningful relationships between genes, revealing the time-trends of gene regulation. PMID:21524308
Integrative network alignment reveals large regions of global network similarity in yeast and human.
Kuchaiev, Oleksii; Przulj, Natasa
2011-05-15
High-throughput methods for detecting molecular interactions have produced large sets of biological network data with much more yet to come. Analogous to sequence alignment, efficient and reliable network alignment methods are expected to improve our understanding of biological systems. Unlike sequence alignment, network alignment is computationally intractable. Hence, devising efficient network alignment heuristics is currently a foremost challenge in computational biology. We introduce a novel network alignment algorithm, called Matching-based Integrative GRAph ALigner (MI-GRAAL), which can integrate any number and type of similarity measures between network nodes (e.g. proteins), including, but not limited to, any topological network similarity measure, sequence similarity, functional similarity and structural similarity. Hence, we resolve the ties in similarity measures and find a combination of similarity measures yielding the largest contiguous (i.e. connected) and biologically sound alignments. MI-GRAAL exposes the largest functional, connected regions of protein-protein interaction (PPI) network similarity to date: surprisingly, it reveals that 77.7% of proteins in the baker's yeast high-confidence PPI network participate in such a subnetwork that is fully contained in the human high-confidence PPI network. This is the first demonstration that species as diverse as yeast and human contain so large, continuous regions of global network similarity. We apply MI-GRAAL's alignments to predict functions of un-annotated proteins in yeast, human and bacteria validating our predictions in the literature. Furthermore, using network alignment scores for PPI networks of different herpes viruses, we reconstruct their phylogenetic relationship. This is the first time that phylogeny is exactly reconstructed from purely topological alignments of PPI networks. Supplementary files and MI-GRAAL executables: http://bio-nets.doc.ic.ac.uk/MI-GRAAL/.
Low dose reconstruction algorithm for differential phase contrast imaging.
Wang, Zhentian; Huang, Zhifeng; Zhang, Li; Chen, Zhiqiang; Kang, Kejun; Yin, Hongxia; Wang, Zhenchang; Stampanoni, Marco
2011-01-01
Differential phase contrast imaging computed tomography (DPCI-CT) is a novel x-ray inspection method to reconstruct the distribution of refraction index rather than the attenuation coefficient in weakly absorbing samples. In this paper, we propose an iterative reconstruction algorithm for DPCI-CT which benefits from the new compressed sensing theory. We first realize a differential algebraic reconstruction technique (DART) by discretizing the projection process of the differential phase contrast imaging into a linear partial derivative matrix. In this way the compressed sensing reconstruction problem of DPCI reconstruction can be transformed to a resolved problem in the transmission imaging CT. Our algorithm has the potential to reconstruct the refraction index distribution of the sample from highly undersampled projection data. Thus it can significantly reduce the dose and inspection time. The proposed algorithm has been validated by numerical simulations and actual experiments.
NASA Astrophysics Data System (ADS)
Krysta, M.; Kusmierczyk-Michulec, J.; Nikkinen, M.; Carter, J. A.
2011-12-01
In order to support its mission of monitoring compliance with the treaty banning nuclear explosions, the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) operates four global networks of, respectively, seismic, infrasound, hydroacoustic sensors and air samplers accompanied with radionuclide detectors. The role of the International Data Centre (IDC) of CTBTO is to associate the signals detected in the monitoring networks with the physical phenomena which emitted these signals, by forming events. One of the aspects of associating detections with emitters is the problem of inferring the sources of radionuclides from the detections made at CTBTO radionuclide network stations. This task is particularly challenging because the average transport distance between a release point and detectors is large. Complex processes of turbulent diffusion are responsible for efficient mixing and consequently for decreasing the information content of detections with an increasing distance from the source. The problem is generally addressed in a two-step process. In the first step, an atmospheric transport model establishes a link between the detections and the regions of possible source location. In the second step this link is inverted to infer source information from the detections. In this presentation, we will discuss enhancements of the presently used regression-based inversion algorithm to reconstruct a source of radionuclides. To this aim, modern inversion algorithms accounting for prior information and appropriately regularizing an under-determined reconstruction problem will be briefly introduced. Emphasis will be on the CTBTO context and the choice of inversion methods. An illustration of the first tests will be provided using a framework of twin experiments, i.e. fictitious detections in the CTBTO radionuclide network generated with an atmospheric transport model.
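A minimal sketch of the kind of regularized, prior-informed inversion mentioned above, assuming a linear source-receptor sensitivity matrix from an atmospheric transport model and simple Tikhonov regularization; the operational algorithm is not reproduced, and all data below form a synthetic twin experiment:

```python
import numpy as np

def tikhonov_source(A, y, lam=1e-2, x_prior=None):
    """Regularized least-squares source estimate:
    minimize ||A x - y||^2 + lam * ||x - x_prior||^2."""
    n = A.shape[1]
    x_prior = np.zeros(n) if x_prior is None else x_prior
    lhs = A.T @ A + lam * np.eye(n)
    rhs = A.T @ y + lam * x_prior
    return np.linalg.solve(lhs, rhs)

# Twin experiment: a fictitious release in one grid cell, observed by a sparse
# detector network through a (random placeholder) source-receptor matrix.
rng = np.random.default_rng(0)
A = rng.random((12, 200)) * 1e-3          # 12 detections, 200 candidate cells
x_true = np.zeros(200); x_true[57] = 5e3  # release strength in cell 57
y = A @ x_true + rng.normal(0, 1e-2, 12)
x_hat = tikhonov_source(A, y, lam=1e-3)
# With only 12 detections the problem is strongly under-determined, so the
# estimate localizes the source only approximately.
print(int(np.argmax(x_hat)))
```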
Optimization-based reconstruction for reduction of CBCT artifact in IGRT
NASA Astrophysics Data System (ADS)
Xia, Dan; Zhang, Zheng; Paysan, Pascal; Seghers, Dieter; Brehm, Marcus; Munro, Peter; Sidky, Emil Y.; Pelizzari, Charles; Pan, Xiaochuan
2016-04-01
Kilo-voltage cone-beam computed tomography (CBCT) plays an important role in image guided radiation therapy (IGRT) by providing 3D spatial information of the tumor that is potentially useful for optimizing treatment planning. In current IGRT CBCT systems, reconstructed images obtained with analytic algorithms, such as the FDK algorithm and its variants, may contain artifacts. In an attempt to compensate for the artifacts, we investigate optimization-based reconstruction algorithms such as the ASD-POCS algorithm for potentially reducing artifacts in IGRT CBCT images. In this study, using data acquired with a physical phantom and a patient subject, we demonstrate that the ASD-POCS reconstruction can significantly reduce artifacts observed in clinical reconstructions. Moreover, patient images reconstructed by use of the ASD-POCS algorithm indicate a contrast level of soft tissue improved over that of the clinical reconstruction. We have also performed reconstructions from sparse-view data, and observe that, for current clinical imaging conditions, ASD-POCS reconstructions from data collected at one half of the current clinical projection views appear to show image quality, in terms of spatial and soft-tissue-contrast resolution, higher than that of the corresponding clinical reconstructions.
A density based algorithm to detect cavities and holes from planar points
NASA Astrophysics Data System (ADS)
Zhu, Jie; Sun, Yizhong; Pang, Yueyong
2017-12-01
Delaunay-based shape reconstruction algorithms are widely used in approximating the shape from planar points. However, these algorithms cannot ensure the optimality of varied reconstructed cavity boundaries and hole boundaries. This inadequate reconstruction can be primarily attributed to the lack of an efficient mathematical formulation for the two structures (hole and cavity). In this paper, we develop an efficient algorithm for generating cavities and holes from planar points. The algorithm yields the final boundary based on an iterative removal of triangles from the Delaunay triangulation. The algorithm is divided into two main steps, namely, rough and refined shape reconstruction. The rough shape reconstruction performed by the algorithm is controlled by a relative parameter. Based on the rough result, the refined shape reconstruction mainly aims to detect holes and pure cavities. A cavity or hole is conceptualized as a structure with a low-density region surrounded by a high-density region. With this structure, cavities and holes are characterized by a mathematical measure called the compactness of a point, formed from the length variation of the edges incident to the point in the Delaunay triangulation. The boundaries of cavities and holes are then found by locating a sharp change of compactness over the point set. The experimental comparison with other shape reconstruction approaches shows that the proposed algorithm is able to accurately yield the boundaries of cavities and holes with varying point set densities and distributions.
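A minimal sketch of a point-wise compactness measure on a Delaunay triangulation, assuming compactness is taken as the relative variation (coefficient of variation) of incident edge lengths; the paper's exact formulation and the iterative triangle-removal step are not reproduced:

```python
import numpy as np
from scipy.spatial import Delaunay

def point_compactness(points):
    """For each point, the coefficient of variation of its incident Delaunay
    edge lengths: low values indicate locally uniform density, high values
    indicate a point on a density gradient (e.g. a cavity or hole rim)."""
    tri = Delaunay(points)
    incident = [set() for _ in range(len(points))]
    for simplex in tri.simplices:
        for i in simplex:
            for j in simplex:
                if i != j:
                    incident[i].add(j)
    comp = np.zeros(len(points))
    for i, nbrs in enumerate(incident):
        lengths = np.linalg.norm(points[list(nbrs)] - points[i], axis=1)
        comp[i] = lengths.std() / lengths.mean()
    return comp

pts = np.random.rand(500, 2)
pts = pts[np.linalg.norm(pts - 0.5, axis=1) > 0.2]   # punch a hole in the middle
c = point_compactness(pts)
print(round(float(c.mean()), 3), round(float(c.max()), 3))
```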
Low Dose CT Reconstruction via Edge-preserving Total Variation Regularization
Tian, Zhen; Jia, Xun; Yuan, Kehong; Pan, Tinsu; Jiang, Steve B.
2014-01-01
High radiation dose in CT scans increases the lifetime risk of cancer and has become a major clinical concern. Recently, iterative reconstruction algorithms with Total Variation (TV) regularization have been developed to reconstruct CT images from highly undersampled data acquired at low mAs levels in order to reduce the imaging dose. Nonetheless, low contrast structures tend to be smoothed out by the TV regularization, posing a great challenge for the TV method. To solve this problem, in this work we develop an iterative CT reconstruction algorithm with edge-preserving TV regularization to reconstruct CT images from highly undersampled data obtained at low mAs levels. The CT image is reconstructed by minimizing an energy consisting of an edge-preserving TV norm and a data fidelity term posed by the x-ray projections. The edge-preserving TV term is proposed to preferentially perform smoothing only on the non-edge part of the image in order to better preserve the edges, which is realized by introducing a penalty weight to the original total variation norm. During the reconstruction process, the pixels at edges are gradually identified and given small penalty weights. Our iterative algorithm is implemented on GPU to improve its speed. We test our reconstruction algorithm on a digital NCAT phantom, a physical chest phantom, and a Catphan phantom. Reconstruction results from a conventional FBP algorithm and a TV regularization method without the edge-preserving penalty are also presented for comparison purposes. The experimental results illustrate that both the TV-based algorithm and our edge-preserving TV algorithm outperform the conventional FBP algorithm in suppressing the streaking artifacts and image noise in the low-dose context. Our edge-preserving algorithm is superior to the TV-based algorithm in that it can preserve more information of low contrast structures and therefore maintain acceptable spatial resolution. PMID:21860076
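A minimal numpy sketch of the weighted (edge-preserving) TV idea described above: pixels with strong gradients receive small penalty weights so smoothing acts preferentially on non-edge regions. The weight function, step size and iteration count here are illustrative assumptions, and the data-fidelity term and GPU implementation are omitted:

```python
import numpy as np

def grad_xy(u):
    gx = np.diff(u, axis=0, append=u[-1:, :])
    gy = np.diff(u, axis=1, append=u[:, -1:])
    return gx, gy

def edge_weights(u, delta=0.05):
    """Small penalty weight at strong gradients (edges), full weight elsewhere."""
    gx, gy = grad_xy(u)
    return np.exp(-(np.hypot(gx, gy) / delta) ** 2)

def weighted_tv_step(u, weights, step=0.1, eps=1e-8):
    """One explicit gradient-descent step on the weighted isotropic TV norm
    sum_ij w_ij * |grad u|_ij (descent direction is the weighted divergence)."""
    gx, gy = grad_xy(u)
    mag = np.sqrt(gx**2 + gy**2 + eps)
    px, py = weights * gx / mag, weights * gy / mag
    div = (np.diff(px, axis=0, prepend=px[:1, :]) +
           np.diff(py, axis=1, prepend=py[:1, :]))
    return u + step * div

img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0
noisy = img + 0.1 * np.random.randn(64, 64)
w = edge_weights(noisy)
smoothed = noisy.copy()
for _ in range(20):
    smoothed = weighted_tv_step(smoothed, w)
print(round(float(np.abs(noisy - img).mean()), 4),
      round(float(np.abs(smoothed - img).mean()), 4))
```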
Shi, Ximin; Li, Nan; Ding, Haiyan; Dang, Yonghong; Hu, Guilan; Liu, Shuai; Cui, Jie; Zhang, Yue; Li, Fang; Zhang, Hui; Huo, Li
2018-01-01
Kinetic modeling of dynamic 11C-acetate PET imaging provides quantitative information for myocardium assessment. The quality and quantitation of PET images are known to be dependent on PET reconstruction methods. This study aims to investigate the impacts of reconstruction algorithms on the quantitative analysis of dynamic 11C-acetate cardiac PET imaging. Suspected alcoholic cardiomyopathy patients (N = 24) underwent 11C-acetate dynamic PET imaging after a low-dose CT scan. PET images were reconstructed using four algorithms: filtered backprojection (FBP), ordered subsets expectation maximization (OSEM), OSEM with time-of-flight (TOF), and OSEM with both time-of-flight and point-spread-function (TPSF). Standardized uptake values (SUVs) at different time points were compared among images reconstructed using the four algorithms. Time-activity curves (TACs) in myocardium and blood pools of ventricles were generated from the dynamic image series. Kinetic parameters K1 and k2 were derived using a 1-tissue-compartment model for kinetic modeling of cardiac flow from 11C-acetate PET images. Significant image quality improvement was found in the images reconstructed using iterative OSEM-type algorithms (OSEM, TOF, and TPSF) compared with FBP. However, no statistical differences in SUVs were observed among the four reconstruction methods at the selected time points. Kinetic parameters K1 and k2 also exhibited no statistical difference among the four reconstruction algorithms in terms of mean value and standard deviation. However, in the correlation analysis, OSEM reconstruction presented relatively higher residuals in correlation with FBP reconstruction compared with TOF and TPSF reconstruction, and TOF and TPSF reconstruction were highly correlated with each other. All the tested reconstruction algorithms performed similarly for quantitative analysis of 11C-acetate cardiac PET imaging. TOF and TPSF yielded highly consistent kinetic parameter results with superior image quality compared with FBP; OSEM was relatively less reliable. Both TOF and TPSF are recommended for cardiac 11C-acetate kinetic analysis.
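A minimal sketch of deriving K1 and k2 with a 1-tissue-compartment model, under the assumption that the tissue curve is the plasma input convolved with K1*exp(-k2*t); frame weighting, decay correction and blood-volume terms used in practice are omitted, and all numbers below are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(0.0, 30.0, 0.5)   # frame mid-times in minutes
dt = t[1] - t[0]

def one_tissue_model(t, K1, k2, cp):
    """C_T(t) = K1 * exp(-k2 * t) convolved with the plasma input C_p(t)."""
    irf = K1 * np.exp(-k2 * t)
    return np.convolve(cp, irf)[: len(t)] * dt

# Synthetic input function and noisy tissue curve with known K1, k2.
cp = 10.0 * t * np.exp(-t / 1.5)
ct_true = one_tissue_model(t, 0.8, 0.15, cp)
ct_meas = ct_true + np.random.normal(0, 0.05, len(t))

popt, _ = curve_fit(lambda tt, K1, k2: one_tissue_model(tt, K1, k2, cp),
                    t, ct_meas, p0=[0.5, 0.1], bounds=(0, [5.0, 2.0]))
print("K1 = %.3f, k2 = %.3f" % tuple(popt))   # should land near 0.8 and 0.15
```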
Dereplication, Aggregation and Scoring Tool (DAS Tool) v1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
SIEBER, CHRISTIAN
Communities of uncultivated microbes are critical to ecosystem function and microorganism health, and a key objective of metagenomic studies is to analyze organism-specific metabolic pathways and reconstruct community interaction networks. This requires accurate assignment of genes to genomes, yet existing binning methods often fail to predict a reasonable number of genomes and report many bins of low quality and completeness. Furthermore, the performance of existing algorithms varies between samples and biotypes. Here, we present a dereplication, aggregation and scoring strategy, DAS Tool, that combines the strengths of a flexible set of established binning algorithms. DAS Tool applied to a constructed community generated more accurate bins than any automated method. Further, when applied to samples of different complexity, including soil, natural oil seeps, and the human gut, DAS Tool recovered substantially more near-complete genomes than any single binning method alone, including three genomes from a novel lineage. The ability to reconstruct many near-complete genomes from metagenomics data will greatly advance genome-centric analyses of ecosystems.
Sinogram-based adaptive iterative reconstruction for sparse view x-ray computed tomography
NASA Astrophysics Data System (ADS)
Trinca, D.; Zhong, Y.; Wang, Y.-Z.; Mamyrbayev, T.; Libin, E.
2016-10-01
With the availability of more powerful computing processors, iterative reconstruction algorithms have recently been successfully implemented as an approach to achieving significant dose reduction in X-ray CT. In this paper, we propose an adaptive iterative reconstruction algorithm for X-ray CT, which is shown to provide results comparable to those obtained by proprietary algorithms, both in terms of reconstruction accuracy and execution time. The proposed algorithm is thus provided for free to the scientific community, for regular use, and for possible further optimization.
Axial Cone Beam Reconstruction by Weighted BPF/DBPF and Orthogonal Butterfly Filtering
Tang, Shaojie; Tang, Xiangyang
2016-01-01
Goal: The backprojection-filtration (BPF) and the derivative backprojection filtered (DBPF) algorithms, in which Hilbert filtering is the common algorithmic feature, are originally derived for exact helical reconstruction from cone beam (CB) scan data and axial reconstruction from fan beam data, respectively. These two algorithms can be heuristically extended for image reconstruction from axial CB scan data, but induce severe artifacts in images located away from the central plane determined by the circular source trajectory. We propose an algorithmic solution herein to eliminate the artifacts. Methods: The solution is an integration of the three-dimensional (3D) weighted axial CB-BPF/DBPF algorithm with orthogonal butterfly filtering, namely axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering. Using the computer simulated Forbild head and thoracic phantoms that are rigorous in inspecting reconstruction accuracy and an anthropomorphic thoracic phantom with projection data acquired by a CT scanner, we evaluate performance of the proposed algorithm. Results: Preliminary results show that the orthogonal butterfly filtering can eliminate the severe streak artifacts existing in the images reconstructed by the 3D weighted axial CB-BPF/DBPF algorithm located at off-central planes. Conclusion: Integrated with orthogonal butterfly filtering, the 3D weighted CB-BPF/DBPF algorithm can perform at least as well as the 3D weighted CB-FBP algorithm in image reconstruction from axial CB scan data. Significance: The proposed 3D weighted axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering can be an algorithmic solution for CT imaging in extensive clinical and preclinical applications. PMID:26660512
Azimuth-invariant mueller-matrix differentiation of the optical anisotropy of biological tissues
NASA Astrophysics Data System (ADS)
Ushenko, V. A.; Sidor, M. I.; Marchuk, Yu. F.; Pashkovskaya, N. V.; Andreichuk, D. R.
2014-07-01
A Mueller-matrix model is proposed for analysis of the optical anisotropy of protein networks of optically thin nondepolarizing layers of biological tissues with allowance for birefringence and dichroism. The model is used to construct algorithms for reconstruction of coordinate distributions of phase shifts and coefficient of linear dichroism. Objective criteria for differentiation of benign and malignant tissues of female genitals are formulated in the framework of the statistical analysis of such distributions. Approaches of evidence-based medicine are used to determine the working characteristics (sensitivity, specificity, and accuracy) of the Mueller-matrix method for the reconstruction of the parameters of optical anisotropy and show its efficiency in the differentiation of benign and malignant tumors.
Liu, Ruolin; Dickerson, Julie
2017-11-01
We propose a novel method and software tool, Strawberry, for transcript reconstruction and quantification from RNA-Seq data under the guidance of genome alignment and independent of gene annotation. Strawberry consists of two modules: assembly and quantification. The novelty of Strawberry is that the two modules use different optimization frameworks but utilize the same data graph structure, which allows a highly efficient, expandable and accurate algorithm for dealing large data. The assembly module parses aligned reads into splicing graphs, and uses network flow algorithms to select the most likely transcripts. The quantification module uses a latent class model to assign read counts from the nodes of splicing graphs to transcripts. Strawberry simultaneously estimates the transcript abundances and corrects for sequencing bias through an EM algorithm. Based on simulations, Strawberry outperforms Cufflinks and StringTie in terms of both assembly and quantification accuracies. Under the evaluation of a real data set, the estimated transcript expression by Strawberry has the highest correlation with Nanostring probe counts, an independent experiment measure for transcript expression. Strawberry is written in C++14, and is available as open source software at https://github.com/ruolin/strawberry under the MIT license.
Ahmad, Moiz; Balter, Peter; Pan, Tinsu
2011-10-01
Data sufficiency is a major problem in four-dimensional cone-beam computed tomography (4D-CBCT) on linear accelerator-integrated scanners for image-guided radiotherapy. Scan times must be in the range of 4-6 min to avoid undersampling artifacts. Various image reconstruction algorithms have been proposed to accommodate undersampled data acquisitions, but these algorithms are computationally expensive, may require long reconstruction times, and may require algorithm parameters to be optimized. The authors present a novel reconstruction method, 4D volume-of-interest (4D-VOI) reconstruction, which suppresses undersampling artifacts and resolves lung tumor motion for undersampled 1-min scans. The 4D-VOI reconstruction is much less computationally expensive than other 4D-CBCT algorithms. The 4D-VOI method uses respiration-correlated projection data to reconstruct a four-dimensional (4D) image inside a VOI containing the moving tumor, and uncorrelated projection data to reconstruct a three-dimensional (3D) image outside the VOI. Anatomical motion is resolved inside the VOI and blurred outside the VOI. The authors acquired a 1-min scan of an anthropomorphic chest phantom containing a moving water-filled sphere. The authors also used previously acquired 1-min scans for two lung cancer patients who had received CBCT-guided radiation therapy. The same raw data were used to test and compare the 4D-VOI reconstruction with the standard 4D reconstruction and the McKinnon-Bates (MB) reconstruction algorithms. Both the 4D-VOI and the MB reconstructions suppress nearly all the streak artifacts compared with the standard 4D reconstruction, but the 4D-VOI has 3-8 times greater contrast-to-noise ratio than the MB reconstruction. In the dynamic chest phantom study, the 4D-VOI and the standard 4D reconstructions both resolved a moving sphere with an 18 mm displacement. The 4D-VOI reconstruction shows a motion blur of only 3 mm, whereas the MB reconstruction shows a motion blur of 13 mm. With graphics processing unit hardware used to accelerate computations, the 4D-VOI reconstruction required a 40-s reconstruction time. 4D-VOI reconstruction effectively reduces undersampling artifacts and resolves lung tumor motion in 4D-CBCT. The 4D-VOI reconstruction is computationally inexpensive compared with more sophisticated iterative algorithms. Compared with these algorithms, our 4D-VOI reconstruction is an attractive alternative in 4D-CBCT for reconstructing target motion without generating numerous streak artifacts.
Ahmad, Moiz; Balter, Peter; Pan, Tinsu
2011-01-01
Purpose: Data sufficiency is a major problem in four-dimensional cone-beam computed tomography (4D-CBCT) on linear accelerator-integrated scanners for image-guided radiotherapy. Scan times must be in the range of 4–6 min to avoid undersampling artifacts. Various image reconstruction algorithms have been proposed to accommodate undersampled data acquisitions, but these algorithms are computationally expensive, may require long reconstruction times, and may require algorithm parameters to be optimized. The authors present a novel reconstruction method, 4D volume-of-interest (4D-VOI) reconstruction, which suppresses undersampling artifacts and resolves lung tumor motion for undersampled 1-min scans. The 4D-VOI reconstruction is much less computationally expensive than other 4D-CBCT algorithms. Methods: The 4D-VOI method uses respiration-correlated projection data to reconstruct a four-dimensional (4D) image inside a VOI containing the moving tumor, and uncorrelated projection data to reconstruct a three-dimensional (3D) image outside the VOI. Anatomical motion is resolved inside the VOI and blurred outside the VOI. The authors acquired a 1-min scan of an anthropomorphic chest phantom containing a moving water-filled sphere. The authors also used previously acquired 1-min scans for two lung cancer patients who had received CBCT-guided radiation therapy. The same raw data were used to test and compare the 4D-VOI reconstruction with the standard 4D reconstruction and the McKinnon-Bates (MB) reconstruction algorithms. Results: Both the 4D-VOI and the MB reconstructions suppress nearly all the streak artifacts compared with the standard 4D reconstruction, but the 4D-VOI has 3–8 times greater contrast-to-noise ratio than the MB reconstruction. In the dynamic chest phantom study, the 4D-VOI and the standard 4D reconstructions both resolved a moving sphere with an 18 mm displacement. The 4D-VOI reconstruction shows a motion blur of only 3 mm, whereas the MB reconstruction shows a motion blur of 13 mm. With graphics processing unit hardware used to accelerate computations, the 4D-VOI reconstruction required a 40-s reconstruction time. Conclusions: 4D-VOI reconstruction effectively reduces undersampling artifacts and resolves lung tumor motion in 4D-CBCT. The 4D-VOI reconstruction is computationally inexpensive compared with more sophisticated iterative algorithms. Compared with these algorithms, our 4D-VOI reconstruction is an attractive alternative in 4D-CBCT for reconstructing target motion without generating numerous streak artifacts. PMID:21992381
Reconstruction of the experimentally supported human protein interactome: what can we learn?
Klapa, Maria I; Tsafou, Kalliopi; Theodoridis, Evangelos; Tsakalidis, Athanasios; Moschonas, Nicholas K
2013-10-02
Understanding the topology and dynamics of the human protein-protein interaction (PPI) network will significantly contribute to biomedical research, therefore its systematic reconstruction is required. Several meta-databases integrate source PPI datasets, but the protein node sets of their networks vary depending on the PPI data combined. Due to this inherent heterogeneity, the way in which the human PPI network expands via multiple dataset integration has not been comprehensively analyzed. We aim at assembling the human interactome in a global structured way and exploring it to gain insights of biological relevance. First, we defined the UniProtKB manually reviewed human "complete" proteome as the reference protein-node set and then we mined five major source PPI datasets for direct PPIs exclusively between the reference proteins. We updated the protein and publication identifiers and normalized all PPIs to the UniProt identifier level. The reconstructed interactome covers approximately 60% of the human proteome and has a scale-free structure. No apparent differentiating gene functional classification characteristics were identified for the unrepresented proteins. The source dataset integration augments the network mainly in PPIs. Polyubiquitin emerged as the highest-degree node, but the inclusion of most of its identified PPIs may be reconsidered. The high number (>300) of connections of the subsequent fifteen proteins correlates well with their essential biological role. According to the power-law network structure, the unrepresented proteins should mainly have up to four connections with equally poorly-connected interactors. Reconstructing the human interactome based on the a priori definition of the protein nodes enabled us to identify the currently included part of the human "complete" proteome, and discuss the role of the proteins within the network topology with respect to their function. As the network expansion has to comply with the scale-free theory, we suggest that the core of the human interactome has essentially emerged. Thus, it could be employed in systems biology and biomedical research, despite the considerable number of currently unrepresented proteins. The latter are probably involved in specialized physiological conditions, justifying the scarcity of related PPI information, and their identification can assist in designing relevant functional experiments and targeted text mining algorithms.
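The topological claims above (scale-free degree distribution, identification of hub proteins) reduce to degree statistics on the assembled graph. The snippet below sketches that check on a small Barabasi-Albert random graph standing in for the reconstructed interactome; it is an illustrative stand-in, not the authors' dataset or pipeline.

```python
import networkx as nx
from collections import Counter

# Stand-in for the reconstructed interactome: a Barabasi-Albert graph is
# scale-free by construction, which makes it a convenient toy example.
graph = nx.barabasi_albert_graph(n=2000, m=2, seed=0)

degrees = [d for _, d in graph.degree()]
hubs = sorted(graph.degree(), key=lambda item: item[1], reverse=True)[:5]
histogram = Counter(degrees)

low_degree_fraction = sum(v for k, v in histogram.items() if k <= 4) / graph.number_of_nodes()
print("top hub nodes (node, degree):", hubs)
print("fraction of nodes with degree <= 4:", round(low_degree_fraction, 3))
```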
UV Reconstruction Algorithm And Diurnal Cycle Variability
NASA Astrophysics Data System (ADS)
Curylo, Aleksander; Litynska, Zenobia; Krzyscin, Janusz; Bogdanska, Barbara
2009-03-01
UV reconstruction is a method of estimation of surface UV with the use of available actinometrical and aerological measurements. UV reconstruction is necessary for the study of long-term UV change. A typical series of UV measurements is not longer than 15 years, which is too short for trend estimation. The essential problem in the reconstruction algorithm is the good parameterization of clouds. In our previous algorithm we used an empirical relation between Cloud Modification Factor (CMF) in global radiation and CMF in UV. The CMF is defined as the ratio between measured and modelled irradiances. Clear sky irradiance was calculated with a solar radiative transfer model. In the proposed algorithm, the time variability of global radiation during the diurnal cycle is used as an additional source of information. For elaborating an improved reconstruction algorithm relevant data from Legionowo [52.4 N, 21.0 E, 96 m a.s.l], Poland were collected with the following instruments: NILU-UV multi channel radiometer, Kipp&Zonen pyranometer, radiosonde profiles of ozone, humidity and temperature. The proposed algorithm has been used for reconstruction of UV at four Polish sites: Mikolajki, Kolobrzeg, Warszawa-Bielany and Zakopane since the early 1960s. Krzyscin's reconstruction of total ozone has been used in the calculations.
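The cloud parameterization described above can be written compactly: with the CMF defined as the ratio of measured to clear-sky modelled irradiance, reconstructed UV is the clear-sky UV scaled by an empirical function of the CMF in global radiation. The sketch below assumes a simple power-law mapping between the two CMFs; the exponent and the clear-sky values are purely illustrative placeholders, not the coefficients used by the authors.

```python
import numpy as np

def reconstruct_uv(global_measured, global_clear_sky, uv_clear_sky, exponent=1.2):
    """Estimate surface UV from pyranometer data and clear-sky model output.

    CMF_global = measured / modelled global irradiance; the UV cloud
    modification factor is approximated here as CMF_global ** exponent
    (a hypothetical empirical relation, for illustration only).
    """
    cmf_global = np.clip(global_measured / global_clear_sky, 0.0, 1.2)
    cmf_uv = cmf_global ** exponent
    return cmf_uv * uv_clear_sky

# Hourly example values (W m^-2 for global radiation, arbitrary units for UV).
global_measured = np.array([450.0, 300.0, 620.0])
global_clear = np.array([700.0, 690.0, 650.0])
uv_clear = np.array([2.1, 2.0, 1.9])
print(reconstruct_uv(global_measured, global_clear, uv_clear))
```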
Reconstruction of three-dimensional ultrasound images based on cyclic Savitzky-Golay filters
NASA Astrophysics Data System (ADS)
Toonkum, Pollakrit; Suwanwela, Nijasri C.; Chinrungrueng, Chedsada
2011-01-01
We present a new algorithm for reconstructing a three-dimensional (3-D) ultrasound image from a series of two-dimensional B-scan ultrasound slices acquired in the mechanical linear scanning framework. Unlike most existing 3-D ultrasound reconstruction algorithms, which have been developed and evaluated in the freehand scanning framework, the new algorithm has been designed to capitalize on the regularity pattern of mechanical linear scanning, where all the B-scan slices are precisely parallel and evenly spaced. The new reconstruction algorithm, referred to as the cyclic Savitzky-Golay (CSG) reconstruction filter, is an improvement on the original Savitzky-Golay filter in two respects: First, it is extended to accept a 3-D array of data as the filter input instead of a one-dimensional data sequence. Second, it incorporates the cyclic indicator function in its least-squares objective function so that the CSG algorithm can simultaneously perform both smoothing and interpolating tasks. The performance of the CSG reconstruction filter, compared to that of most existing reconstruction algorithms, in generating a 3-D synthetic test image and a clinical 3-D carotid artery bifurcation in the mechanical linear scanning framework is also reported.
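For readers unfamiliar with the underlying filter, the conventional one-dimensional, non-cyclic Savitzky-Golay smoother that the CSG filter extends is available off the shelf. The snippet below shows that baseline on a noisy 1-D signal; it illustrates the classical filter only, not the authors' 3-D cyclic variant, and the window and polynomial order are arbitrary choices.

```python
import numpy as np
from scipy.signal import savgol_filter

# Baseline 1-D Savitzky-Golay smoothing (the CSG filter generalizes this idea
# to 3-D voxel arrays and adds a cyclic interpolation term).
x = np.linspace(0, 2 * np.pi, 200)
noisy = np.sin(x) + 0.2 * np.random.default_rng(0).normal(size=x.size)

smoothed = savgol_filter(noisy, window_length=21, polyorder=3)
print("mean absolute residual vs. clean signal:", float(np.abs(smoothed - np.sin(x)).mean()))
```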
NASA Astrophysics Data System (ADS)
Ushenko, V. O.; Boichuk, T. M.; Bachinskiy, V. T.; Vanchuliak, O. Ya.; Minzer, O. P.; Dubolazov, O. V.; Marchuk, Yu. F.; Olar, O. I.
2015-08-01
The results of optical modeling of the polycrystalline multilayer networks of biological tissues are presented. Algorithms were determined for reconstructing the distributions of the parameters that describe linear and circular birefringence. To separate the manifestations of these two mechanisms, we propose a method of space-frequency filtering. Criteria were found for differentiating benign and malignant tissues of the female reproductive system.
Experimental scheme and restoration algorithm of block compression sensing
NASA Astrophysics Data System (ADS)
Zhang, Linxia; Zhou, Qun; Ke, Jun
2018-01-01
Compressed sensing (CS) can use the sparseness of a target to obtain its image with much less data than that required by the Nyquist sampling theorem. In this paper, we study the hardware implementation of a block compressed sensing system and its reconstruction algorithms. Different block sizes are used. Two algorithms, orthogonal matching pursuit (OMP) and total variation (TV) minimization, are used to obtain good reconstructions. The influence of block size on reconstruction is also discussed.
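A minimal sketch of the sparse-recovery step is shown below, using the orthogonal matching pursuit implementation in scikit-learn on a random Gaussian sensing matrix. The signal length, sparsity level and measurement count are illustrative and do not correspond to the block sizes or hardware settings reported in the paper.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(1)

# Sparse recovery toy problem: y = Phi @ x with a k-sparse x (one image block).
n, m, k = 256, 96, 8                         # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)

phi = rng.normal(size=(m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
y = phi @ x_true

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(phi, y)
x_hat = omp.coef_
print("support recovered:", np.array_equal(np.flatnonzero(x_hat), np.flatnonzero(x_true)))
```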
Identifying Defects with Guided Algorithms in Bragg Coherent Diffractive Imaging
Ulvestad, A.; Nashed, Y.; Beutier, G.; ...
2017-08-30
Crystallographic defects such as dislocations can significantly alter material properties and functionality. However, imaging these imperfections during operation remains challenging due to the short length scales involved and the reactive environments of interest. Bragg coherent diffractive imaging (BCDI) has emerged as a powerful tool capable of identifying dislocations, twin domains, and other defects in 3D detail with nanometer spatial resolution within nanocrystals and grains in reactive environments. However, BCDI relies on phase retrieval algorithms that can fail to accurately reconstruct the defect network. Here, we use numerical simulations to explore different guided phase retrieval algorithms for imaging defective crystals using BCDI. We explore different defect types, defect densities, Bragg peaks, and guided algorithm fitness metrics as a function of signal-to-noise ratio. Based on these results, we offer a general prescription for phasing of defective crystals with no a priori knowledge.
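The phase-retrieval engine that BCDI depends on can be illustrated with the classic error-reduction iteration, which alternately enforces the measured Fourier magnitude and a support constraint; guided algorithms essentially run many such reconstructions and select among them with a fitness metric. The sketch below is the plain error-reduction loop on a synthetic 2-D object, not the guided scheme studied in the paper, and the object and support are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic complex object with a compact support (a stand-in for a nanocrystal).
obj = np.zeros((64, 64), dtype=complex)
obj[24:40, 20:44] = 1.0 + 0.3j * rng.random((16, 24))
support = np.abs(obj) > 0
measured_magnitude = np.abs(np.fft.fft2(obj))    # ideal, noise-free diffraction amplitudes

# Error-reduction iterations: enforce the measured Fourier magnitude, then the support.
estimate = rng.random(obj.shape) * support
for _ in range(300):
    spectrum = np.fft.fft2(estimate)
    spectrum = measured_magnitude * np.exp(1j * np.angle(spectrum))
    estimate = np.fft.ifft2(spectrum)
    estimate = estimate * support                # zero everything outside the support

error = (np.linalg.norm(np.abs(np.fft.fft2(estimate)) - measured_magnitude)
         / np.linalg.norm(measured_magnitude))
print("relative Fourier-magnitude error:", round(float(error), 4))
```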
Mikhaylova, E; Kolstein, M; De Lorenzo, G; Chmeissani, M
2014-07-01
A novel positron emission tomography (PET) scanner design based on a room-temperature pixelated CdTe solid-state detector is being developed within the framework of the Voxel Imaging PET (VIP) Pathfinder project [1]. The simulation results show a great potential of the VIP to produce high-resolution images even in extremely challenging conditions such as the screening of a human head [2]. With an unprecedentedly high channel density (450 channels/cm³), image reconstruction is a challenge. Therefore optimization is needed to find the best algorithm in order to correctly exploit the promising detector potential. The following reconstruction algorithms are evaluated: 2-D Filtered Backprojection (FBP), Ordered Subset Expectation Maximization (OSEM), List-Mode OSEM (LM-OSEM), and the Origin Ensemble (OE) algorithm. The evaluation is based on the comparison of a true image phantom with a set of reconstructed images obtained by each algorithm. This is achieved by calculation of image quality merit parameters such as the bias, the variance and the mean square error (MSE). A systematic optimization of each algorithm is performed by varying the reconstruction parameters, such as the cutoff frequency of the noise filters and the number of iterations. The region of interest (ROI) analysis of the reconstructed phantom is also performed for each algorithm and the results are compared. Additionally, the performance of the image reconstruction methods is compared by calculating the modulation transfer function (MTF). The reconstruction time is also taken into account to choose the optimal algorithm. The analysis is based on GAMOS [3] simulation including the expected CdTe and electronic specifics.
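The figure-of-merit comparison described here reduces to simple voxel-wise statistics between the known phantom and each reconstruction. A minimal sketch of the bias, variance and MSE computation is given below; the arrays stand in for the true phantom and a set of reconstructed replicates and are purely illustrative.

```python
import numpy as np

def quality_metrics(true_image, reconstructions):
    """Bias, variance and MSE of a set of reconstructed replicates
    against the known phantom (all arrays share the same shape)."""
    recon_stack = np.stack(reconstructions)      # shape: (n_replicates, ny, nx)
    mean_recon = recon_stack.mean(axis=0)
    bias = (mean_recon - true_image).mean()
    variance = recon_stack.var(axis=0).mean()
    mse = ((recon_stack - true_image) ** 2).mean()
    return bias, variance, mse

rng = np.random.default_rng(0)
phantom = np.zeros((64, 64))
phantom[20:44, 20:44] = 1.0
replicates = [phantom + rng.normal(scale=0.1, size=phantom.shape) for _ in range(10)]
print(quality_metrics(phantom, replicates))
```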
NASA Astrophysics Data System (ADS)
Bosch, Carl; Degirmenci, Soysal; Barlow, Jason; Mesika, Assaf; Politte, David G.; O'Sullivan, Joseph A.
2016-05-01
X-ray computed tomography reconstruction for medical, security and industrial applications has evolved through 40 years of experience with rotating gantry scanners using analytic reconstruction techniques such as filtered back projection (FBP). In parallel, research into statistical iterative reconstruction algorithms has evolved to apply to sparse view scanners in nuclear medicine, low data rate scanners in Positron Emission Tomography (PET) [5, 7, 10] and more recently to reduce exposure to ionizing radiation in conventional X-ray CT scanners. Multiple approaches to statistical iterative reconstruction have been developed based primarily on variations of expectation maximization (EM) algorithms. The primary benefit of EM algorithms is the guarantee of convergence that is maintained when iterative corrections are made within the limits of convergent algorithms. The primary disadvantage, however is that strict adherence to correction limits of convergent algorithms extends the number of iterations and ultimate timeline to complete a 3D volumetric reconstruction. Researchers have studied methods to accelerate convergence through more aggressive corrections [1], ordered subsets [1, 3, 4, 9] and spatially variant image updates. In this paper we describe the development of an AM reconstruction algorithm with accelerated convergence for use in a real-time explosive detection application for aviation security. By judiciously applying multiple acceleration techniques and advanced GPU processing architectures, we are able to perform 3D reconstruction of scanned passenger baggage at a rate of 75 slices per second. Analysis of the results on stream of commerce passenger bags demonstrates accelerated convergence by factors of 8 to 15, when comparing images from accelerated and strictly convergent algorithms.
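The ordered-subsets acceleration mentioned above applies an EM-type update to one subset of projections at a time, so each pass over the data produces several image updates instead of one. The sketch below is a generic ML-EM/OS-EM update on a dense toy system matrix, not the GPU-accelerated AM code described in the paper; the matrix, data and subset count are random placeholders.

```python
import numpy as np

def osem(system_matrix, measurements, n_subsets=4, n_iterations=10):
    """Generic ordered-subsets EM update (a minimal sketch on a dense matrix)."""
    n_rays, n_voxels = system_matrix.shape
    x = np.ones(n_voxels)
    subsets = np.array_split(np.arange(n_rays), n_subsets)
    for _ in range(n_iterations):
        for rows in subsets:
            a_sub = system_matrix[rows]
            forward = a_sub @ x
            forward[forward == 0] = 1e-12            # guard against division by zero
            ratio = measurements[rows] / forward
            sensitivity = a_sub.sum(axis=0)
            sensitivity[sensitivity == 0] = 1e-12
            x = x * (a_sub.T @ ratio) / sensitivity   # multiplicative EM-type update
    return x

rng = np.random.default_rng(0)
a = rng.random((120, 30))           # toy system matrix (rays x voxels)
x_true = rng.random(30)
y = a @ x_true                      # noiseless toy measurements
print(np.round(osem(a, y), 2)[:10])
```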
2014-09-01
The goal of this research is to develop an optimized system design and associated image reconstruction algorithms for a hybrid three-dimensional (3D) breast imaging system. The work (i) developed time-of-flight extraction algorithms to perform USCT, (ii) is developing image reconstruction algorithms for USCT, and (iii) developed ...
Reduced projection angles for binary tomography with particle aggregation.
Al-Rifaie, Mohammad Majid; Blackwell, Tim
This paper extends the particle aggregate reconstruction technique (PART), a reconstruction algorithm for binary tomography based on the movement of particles. PART supposes that pixel values are particles, and that particles diffuse through the image, staying together in regions of uniform pixel value known as aggregates. In this work, a variation of this algorithm is proposed, with a focus on reducing the number of projections and on whether this impacts the reconstruction of images. The algorithm is tested on three phantoms of varying sizes and numbers of forward projections and compared to filtered back projection, a random search algorithm and SART, a standard algebraic reconstruction method. It is shown that the proposed algorithm outperforms the aforementioned algorithms on small numbers of projections. This potentially makes the algorithm attractive in scenarios where collecting less projection data is unavoidable.
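For readers who want to reproduce the reference methods on a phantom, both filtered back projection and SART are available in scikit-image; the snippet below contrasts them on the Shepp-Logan phantom with a deliberately small number of projection angles. It illustrates the baselines only, not the PART algorithm itself, and the phantom size and angle count are arbitrary.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, iradon_sart, resize

# Few-view reconstruction baselines: FBP vs. one and two SART iterations.
image = resize(shepp_logan_phantom(), (128, 128))
theta = np.linspace(0.0, 180.0, 20, endpoint=False)      # deliberately few projections
sinogram = radon(image, theta=theta)

fbp = iradon(sinogram, theta=theta)
sart1 = iradon_sart(sinogram, theta=theta)
sart2 = iradon_sart(sinogram, theta=theta, image=sart1)  # a second SART pass

for name, rec in [("FBP", fbp), ("SART x1", sart1), ("SART x2", sart2)]:
    print(name, "RMSE:", np.sqrt(((rec - image) ** 2).mean()).round(4))
```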
Kim, Ye-seul; Park, Hye-suk; Lee, Haeng-Hwa; Choi, Young-Wook; Choi, Jae-Gu; Kim, Hak Hee; Kim, Hee-Joung
2016-02-01
Digital breast tomosynthesis (DBT) is a recently developed system for three-dimensional imaging that offers the potential to reduce the false positives of mammography by preventing tissue overlap. Many qualitative evaluations of digital breast tomosynthesis were previously performed by using a phantom with an unrealistic model and with heterogeneous background and noise, which is not representative of real breasts. The purpose of the present work was to compare reconstruction algorithms for DBT by using various breast phantoms; validation was also performed by using patient images. DBT was performed by using a prototype unit that was optimized for very low exposures and rapid readout. Three algorithms were compared: a back-projection (BP) algorithm, a filtered BP (FBP) algorithm, and an iterative expectation maximization (EM) algorithm. To compare the algorithms, three types of breast phantoms (homogeneous background phantom, heterogeneous background phantom, and anthropomorphic breast phantom) were evaluated, and clinical images were also reconstructed by using the different reconstruction algorithms. The in-plane image quality was evaluated based on the line profile and the contrast-to-noise ratio (CNR), and out-of-plane artifacts were evaluated by means of the artifact spread function (ASF). Parenchymal texture features of contrast and homogeneity were computed based on reconstructed images of an anthropomorphic breast phantom. The clinical images were studied to validate the effect of the reconstruction algorithms. The results showed that the CNRs of masses reconstructed by using the EM algorithm were slightly higher than those obtained by using the BP algorithm, whereas the FBP algorithm yielded much lower CNR due to its high fluctuations of background noise. The FBP algorithm provides the best conspicuity for larger calcifications by enhancing their contrast and sharpness more than the other algorithms; however, in the case of small and low-contrast microcalcifications, the FBP algorithm reduced detectability due to its increased noise. The EM algorithm yielded high conspicuity for both microcalcifications and masses and yielded better ASFs in terms of the full width at half maximum. Texture analysis showed higher contrast and lower homogeneity for the FBP algorithm than for the other algorithms. The patient images reconstructed using the EM algorithm showed high visibility of low-contrast masses with clear borders. In this study, we compared three reconstruction algorithms by using various kinds of breast phantoms and patient cases. Future work using these algorithms and considering the type of the breast and the acquisition techniques used (e.g., angular range, dose distribution) should include the use of actual patients or patient-like phantoms to increase the potential for practical applications.
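The in-plane figure of merit used here, the contrast-to-noise ratio, is straightforward to compute from region-of-interest statistics. The sketch below assumes rectangular signal and background ROIs defined by hand on a synthetic slice, which is an illustrative simplification of the evaluation described above.

```python
import numpy as np

def cnr(image, signal_roi, background_roi):
    """Contrast-to-noise ratio: |mean(signal) - mean(background)| / std(background).
    ROIs are (row_slice, col_slice) tuples chosen by the analyst."""
    signal = image[signal_roi]
    background = image[background_roi]
    return abs(signal.mean() - background.mean()) / background.std()

rng = np.random.default_rng(0)
slice_image = rng.normal(loc=100.0, scale=5.0, size=(256, 256))
slice_image[100:140, 100:140] += 20.0                      # synthetic mass
print(cnr(slice_image,
          (slice(100, 140), slice(100, 140)),              # signal ROI
          (slice(10, 50), slice(10, 50))))                 # background ROI
```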
Wind reconstruction algorithm for Viking Lander 1
NASA Astrophysics Data System (ADS)
Kynkäänniemi, Tuomas; Kemppinen, Osku; Harri, Ari-Matti; Schmidt, Walter
2017-06-01
The wind measurement sensors of Viking Lander 1 (VL1) were only fully operational for the first 45 sols of the mission. We have developed an algorithm for reconstructing the wind measurement data after the wind measurement sensor failures. The algorithm for wind reconstruction enables the processing of wind data during the complete VL1 mission. The heater element of the quadrant sensor, which provided an auxiliary measurement for wind direction, failed during the 45th sol of the VL1 mission. Additionally, one of the wind sensors of VL1 broke down during sol 378. Regardless of the failures, it was still possible to reconstruct the wind measurement data, because the failed components of the sensors did not prevent the determination of the wind direction and speed, as some of the components of the wind measurement setup remained intact for the complete mission. This article concentrates on presenting the wind reconstruction algorithm and methods for validating its operation. The algorithm enables the reconstruction of wind measurements for the complete VL1 mission, extending the number of available sols from 350 to 2245.
Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia
2013-02-01
The objective of this study was to reduce metal-induced streak artifact on oral and maxillofacial x-ray computed tomography (CT) images by developing the fast statistical image reconstruction system using iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices. So, first, images were reconstructed using the same projection data of an artifact-free image. Second, images were processed by the successive iterative restoration method where projection data were generated from reconstructed image in sequence. Besides the maximum likelihood-expectation maximization algorithm, the ordered subset-expectation maximization algorithm (OS-EM) was examined. Also, small region of interest (ROI) setting and reverse processing were applied for improving performance. Both algorithms reduced artifacts instead of slightly decreasing gray levels. The OS-EM and small ROI reduced the processing duration without apparent detriments. Sequential and reverse processing did not show apparent effects. Two alternatives in iterative reconstruction methods were effective for artifact reduction. The OS-EM algorithm and small ROI setting improved the performance. Copyright © 2012 Elsevier Inc. All rights reserved.
Edge-oriented dual-dictionary guided enrichment (EDGE) for MRI-CT image reconstruction.
Li, Liang; Wang, Bigong; Wang, Ge
2016-01-01
In this paper, we formulate the joint/simultaneous X-ray CT and MRI image reconstruction. In particular, a novel algorithm is proposed for MRI image reconstruction from highly under-sampled MRI data and CT images. It consists of two steps. First, a training dataset is generated from a series of well-registered MRI and CT images on the same patients. Then, an initial MRI image of a patient can be reconstructed via edge-oriented dual-dictionary guided enrichment (EDGE) based on the training dataset and a CT image of the patient. Second, an MRI image is reconstructed using the dictionary learning (DL) algorithm from highly under-sampled k-space data and the initial MRI image. Our algorithm can establish a one-to-one correspondence between the two imaging modalities, and obtain a good initial MRI estimation. Both noise-free and noisy simulation studies were performed to evaluate and validate the proposed algorithm. The results with different under-sampling factors show that the proposed algorithm performed significantly better than those reconstructed using the DL algorithm from MRI data alone.
Sparsity-constrained PET image reconstruction with learned dictionaries
NASA Astrophysics Data System (ADS)
Tang, Jing; Yang, Bao; Wang, Yanhua; Ying, Leslie
2016-09-01
PET imaging plays an important role in scientific and clinical measurement of biochemical and physiological processes. Model-based PET image reconstruction such as the iterative expectation maximization algorithm seeking the maximum likelihood solution leads to increased noise. The maximum a posteriori (MAP) estimate removes divergence at higher iterations. However, a conventional smoothing prior or a total-variation (TV) prior in a MAP reconstruction algorithm causes over smoothing or blocky artifacts in the reconstructed images. We propose to use dictionary learning (DL) based sparse signal representation in the formation of the prior for MAP PET image reconstruction. The dictionary to sparsify the PET images in the reconstruction process is learned from various training images including the corresponding MR structural image and a self-created hollow sphere. Using simulated and patient brain PET data with corresponding MR images, we study the performance of the DL-MAP algorithm and compare it quantitatively with a conventional MAP algorithm, a TV-MAP algorithm, and a patch-based algorithm. The DL-MAP algorithm achieves improved bias and contrast (or regional mean values) at comparable noise to what the other MAP algorithms acquire. The dictionary learned from the hollow sphere leads to similar results as the dictionary learned from the corresponding MR image. Achieving robust performance in various noise-level simulation and patient studies, the DL-MAP algorithm with a general dictionary demonstrates its potential in quantitative PET imaging.
Local ROI Reconstruction via Generalized FBP and BPF Algorithms along More Flexible Curves.
Yu, Hengyong; Ye, Yangbo; Zhao, Shiying; Wang, Ge
2006-01-01
We study the local region-of-interest (ROI) reconstruction problem, also referred to as the local CT problem. Our scheme includes two steps: (a) the local truncated normal-dose projections are extended to a global dataset by combining a few global low-dose projections; (b) the ROI is reconstructed by either the generalized filtered backprojection (FBP) or backprojection-filtration (BPF) algorithm. The simulation results show that both the FBP and BPF algorithms can reconstruct satisfactory results, with image quality in the ROI comparable to that of the corresponding global CT reconstruction.
Loh, Su-Yi; Jahans-Price, Thomas; Greenwood, Michael P; Greenwood, Mingkwan; Hoe, See-Ziau; Konopacka, Agnieszka; Campbell, Colin; Murphy, David; Hindmarch, Charles C T
2017-01-01
The supraoptic nucleus (SON) is a group of neurons in the hypothalamus responsible for the synthesis and secretion of the peptide hormones vasopressin and oxytocin. Following physiological cues, such as dehydration, salt-loading and lactation, the SON undergoes a function-related plasticity that we have previously described in the rat at the transcriptome level. Using the unsupervised graphical lasso (Glasso) algorithm, we reconstructed a putative network from 500 plastic SON genes in which genes are the nodes and the edges are the inferred interactions. The most active nodal gene identified within the network was Caprin2. Caprin2 encodes an RNA-binding protein that we have previously shown to be vital for the functioning of osmoregulatory neuroendocrine neurons in the SON of the rat hypothalamus. To test the validity of the Glasso network, we either overexpressed or knocked down Caprin2 transcripts in differentiated rat pheochromocytoma PC12 cells and showed that these manipulations had significant opposite effects on the levels of putative target mRNAs. These studies suggest that the predictive power of the Glasso algorithm within an in vivo system is accurate, and identifies biological targets that may be important to the functional plasticity of the SON.
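The graphical lasso step can be reproduced with standard tooling: it estimates a sparse inverse covariance (precision) matrix from the expression matrix, and nonzero off-diagonal entries are read as network edges. The sketch below uses scikit-learn's GraphicalLasso on random data standing in for the 500-gene SON expression matrix; the sample size, gene count and regularization strength are placeholders.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
n_samples, n_genes = 40, 12                  # stand-in for the SON expression matrix
expression = rng.normal(size=(n_samples, n_genes))
expression[:, 1] += 0.8 * expression[:, 0]   # inject one correlated gene pair

model = GraphicalLasso(alpha=0.2).fit(expression)
precision = model.precision_

# Edges of the inferred network: nonzero off-diagonal precision entries.
edges = [(i, j) for i in range(n_genes) for j in range(i + 1, n_genes)
         if abs(precision[i, j]) > 1e-6]
print("inferred edges:", edges)
```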
Fast algorithm for wavefront reconstruction in XAO/SCAO with pyramid wavefront sensor
NASA Astrophysics Data System (ADS)
Shatokhina, Iuliia; Obereder, Andreas; Ramlau, Ronny
2014-08-01
We present a fast wavefront reconstruction algorithm developed for an extreme adaptive optics system equipped with a pyramid wavefront sensor on a 42m telescope. The method is called the Preprocessed Cumulative Reconstructor with domain decomposition (P-CuReD). The algorithm is based on the theoretical relationship between pyramid and Shack-Hartmann wavefront sensor data. The algorithm consists of two consecutive steps - a data preprocessing, and an application of the CuReD algorithm, which is a fast method for wavefront reconstruction from Shack-Hartmann sensor data. The closed loop simulation results show that the P-CuReD method provides the same reconstruction quality and is significantly faster than an MVM.
Miniature modified Faraday cup for micro electron beams
Teruya, Alan T.; Elmer, John W.; Palmer, Todd A.; Walton, Chris C.
2008-05-27
A micro beam Faraday cup assembly includes a refractory metal layer with an odd number of thin, radially positioned traces in this refractory metal layer. Some of the radially positioned traces are located at the edge of the micro modified Faraday cup body and some of the radially positioned traces are located in the central portion of the micro modified Faraday cup body. Each set of traces is connected to a separate data acquisition channel to form multiple independent diagnostic networks. The data obtained from the two diagnostic networks are combined and inputted into a computed tomography algorithm to reconstruct the beam shape, size, and power density distribution.
A BPF-type algorithm for CT with a curved PI detector.
Tang, Jie; Zhang, Li; Chen, Zhiqiang; Xing, Yuxiang; Cheng, Jianping
2006-08-21
Helical cone-beam CT is used widely nowadays because of its rapid scan speed and efficient utilization of x-ray dose. Recently, an exact reconstruction algorithm for helical cone-beam CT was proposed (Zou and Pan 2004a Phys. Med. Biol. 49 941-59). The algorithm is referred to as a backprojection-filtering (BPF) algorithm. This BPF algorithm for a helical cone-beam CT with a flat-panel detector (FPD-HCBCT) requires minimum data within the Tam-Danielsson window and can naturally address the problem of ROI reconstruction from data truncated in both longitudinal and transversal directions. In practical CT systems, detectors are expensive and always take a very important position in the total cost. Hence, we work on an exact reconstruction algorithm for a CT system with a detector of the smallest size, i.e., a curved PI detector fitting the Tam-Danielsson window. The reconstruction algorithm is derived following the framework of the BPF algorithm. Numerical simulations are done to validate our algorithm in this study.
Henrion, Sebastian; Spoor, Cees W; Pieters, Remco P M; Müller, Ulrike K; van Leeuwen, Johan L
2015-07-07
Images of underwater objects are distorted by refraction at the water-glass-air interfaces, and these distortions can lead to substantial errors when reconstructing the objects' position and shape. So far, aquatic locomotion studies have minimized refraction in their experimental setups and used the direct linear transform (DLT) algorithm to reconstruct position information, which does not model refraction explicitly. Here we present a refraction-corrected ray-tracing algorithm (RCRT) that reconstructs position information using Snell's law. We validated this reconstruction by calculating the 3D reconstruction error, the difference between the actual and reconstructed position of a marker. We found that the reconstruction error is small (typically less than 1%). Compared with the DLT algorithm, the RCRT has overall lower reconstruction errors, especially outside the calibration volume, and errors are essentially insensitive to camera position and orientation and to the number and position of the calibration points. To demonstrate the effectiveness of the RCRT, we tracked an anatomical marker on a seahorse recorded with four cameras to reconstruct the swimming trajectory for six different camera configurations. The RCRT algorithm is accurate and robust; it allows cameras to be oriented at large angles of incidence and facilitates the development of accurate tracking algorithms to quantify aquatic manoeuvres.
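The core of such a refraction-corrected ray tracer is the vector form of Snell's law applied at each interface. The sketch below refracts a single ray at a flat air-water interface; the refractive indices are standard textbook values and the geometry is a toy case, not the calibrated multi-camera setup described in the paper.

```python
import numpy as np

def refract(direction, normal, n1, n2):
    """Refract a unit ray direction at an interface whose unit normal points
    toward the incident medium (vector form of Snell's law)."""
    d = direction / np.linalg.norm(direction)
    n = normal / np.linalg.norm(normal)
    cos_i = -d @ n
    ratio = n1 / n2
    sin2_t = ratio ** 2 * (1.0 - cos_i ** 2)
    if sin2_t > 1.0:
        raise ValueError("total internal reflection")
    cos_t = np.sqrt(1.0 - sin2_t)
    return ratio * d + (ratio * cos_i - cos_t) * n

# Ray entering water (n ~ 1.333) from air (n ~ 1.0) at 30 degrees from the normal.
incident = np.array([np.sin(np.radians(30.0)), 0.0, -np.cos(np.radians(30.0))])
surface_normal = np.array([0.0, 0.0, 1.0])        # points up, into the air
refracted = refract(incident, surface_normal, 1.0, 1.333)
print("refracted angle (deg):", np.degrees(np.arcsin(np.linalg.norm(refracted[:2]))))
```

The printed angle (about 22 degrees) matches the scalar form of Snell's law, sin(30 deg)/1.333, which is a quick sanity check on the vector implementation.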
Wisdom of crowds for robust gene network inference
Marbach, Daniel; Costello, James C.; Küffner, Robert; Vega, Nicci; Prill, Robert J.; Camacho, Diogo M.; Allison, Kyle R.; Kellis, Manolis; Collins, James J.; Stolovitzky, Gustavo
2012-01-01
Reconstructing gene regulatory networks from high-throughput data is a long-standing problem. Through the DREAM project (Dialogue on Reverse Engineering Assessment and Methods), we performed a comprehensive blind assessment of over thirty network inference methods on Escherichia coli, Staphylococcus aureus, Saccharomyces cerevisiae, and in silico microarray data. We characterize performance, data requirements, and inherent biases of different inference approaches offering guidelines for both algorithm application and development. We observe that no single inference method performs optimally across all datasets. In contrast, integration of predictions from multiple inference methods shows robust and high performance across diverse datasets. Thereby, we construct high-confidence networks for E. coli and S. aureus, each comprising ~1700 transcriptional interactions at an estimated precision of 50%. We experimentally test 53 novel interactions in E. coli, of which 23 were supported (43%). Our results establish community-based methods as a powerful and robust tool for the inference of transcriptional gene regulatory networks. PMID:22796662
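The community-integration idea amounts to a simple rank aggregation: each inference method ranks all candidate regulator-target edges, and the consensus network orders edges by their average rank. The snippet below sketches that aggregation for three hypothetical methods scoring the same five edges; the scores are invented for illustration and do not come from the DREAM data.

```python
import numpy as np

# Confidence scores from three hypothetical inference methods for five candidate
# edges (higher = more confident). Rows: methods, columns: edges.
scores = np.array([[0.90, 0.10, 0.55, 0.30, 0.70],
                   [0.20, 0.15, 0.80, 0.25, 0.60],
                   [0.85, 0.05, 0.40, 0.50, 0.65]])

# Rank edges within each method (rank 1 = most confident), then average the ranks.
ranks = scores.shape[1] - scores.argsort(axis=1).argsort(axis=1)
consensus = ranks.mean(axis=0)
print("consensus ordering (best edge index first):", np.argsort(consensus))
```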
Sidky, Emil Y.; Jørgensen, Jakob H.; Pan, Xiaochuan
2012-01-01
The primal-dual optimization algorithm developed in Chambolle and Pock (CP), 2011 is applied to various convex optimization problems of interest in computed tomography (CT) image reconstruction. This algorithm allows for rapid prototyping of optimization problems for the purpose of designing iterative image reconstruction algorithms for CT. The primal-dual algorithm is briefly summarized in the article, and its potential for prototyping is demonstrated by explicitly deriving CP algorithm instances for many optimization problems relevant to CT. An example application modeling breast CT with low-intensity X-ray illumination is presented. PMID:22538474
NASA Astrophysics Data System (ADS)
Gaci, Said; Hachay, Olga; Zaourar, Naima
2017-04-01
One of the key elements in hydrocarbon reservoirs characterization is the S-wave velocity (Vs). Since the traditional estimating methods often fail to accurately predict this physical parameter, a new approach that takes into account its non-stationary and non-linear properties is needed. In this view, a prediction model based on complete ensemble empirical mode decomposition (CEEMD) and a multiple layer perceptron artificial neural network (MLP ANN) is suggested to compute Vs from P-wave velocity (Vp). Using a fine-to-coarse reconstruction algorithm based on CEEMD, the Vp log data is decomposed into a high frequency (HF) component, a low frequency (LF) component and a trend component. Then, different combinations of these components are used as inputs of the MLP ANN algorithm for estimating Vs log. Applications on well logs taken from different geological settings illustrate that the predicted Vs values using MLP ANN with the combinations of HF, LF and trend in inputs are more accurate than those obtained with the traditional estimating methods. Keywords: S-wave velocity, CEEMD, multilayer perceptron neural networks.
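The regression step of the workflow, predicting Vs from decomposed Vp components, can be prototyped with an off-the-shelf multilayer perceptron. The sketch below trains scikit-learn's MLPRegressor on synthetic log samples standing in for the CEEMD components; the feature construction, network size and data are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for the decomposed Vp inputs (high-frequency, low-frequency,
# trend components) and the target Vs log; a real workflow would use CEEMD outputs.
n_samples = 500
features = rng.normal(size=(n_samples, 3))
vs = 0.6 * features[:, 2] + 0.3 * features[:, 1] + 0.1 * rng.normal(size=n_samples)

x_train, x_test, y_train, y_test = train_test_split(features, vs, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
mlp.fit(x_train, y_train)
print("held-out R^2:", round(mlp.score(x_test, y_test), 3))
```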
Defining an Essence of Structure Determining Residue Contacts in Proteins
Sathyapriya, R.; Duarte, Jose M.; Stehr, Henning; Filippis, Ioannis; Lappe, Michael
2009-01-01
The network of native non-covalent residue contacts determines the three-dimensional structure of a protein. However, not all contacts are of equal structural significance, and little knowledge exists about a minimal, yet sufficient, subset required to define the global features of a protein. Characterisation of this “structural essence” has remained elusive so far: no algorithmic strategy has been devised to-date that could outperform a random selection in terms of 3D reconstruction accuracy (measured as the Ca RMSD). It is not only of theoretical interest (i.e., for design of advanced statistical potentials) to identify the number and nature of essential native contacts—such a subset of spatial constraints is very useful in a number of novel experimental methods (like EPR) which rely heavily on constraint-based protein modelling. To derive accurate three-dimensional models from distance constraints, we implemented a reconstruction pipeline using distance geometry. We selected a test-set of 12 protein structures from the four major SCOP fold classes and performed our reconstruction analysis. As a reference set, series of random subsets (ranging from 10% to 90% of native contacts) are generated for each protein, and the reconstruction accuracy is computed for each subset. We have developed a rational strategy, termed “cone-peeling” that combines sequence features and network descriptors to select minimal subsets that outperform the reference sets. We present, for the first time, a rational strategy to derive a structural essence of residue contacts and provide an estimate of the size of this minimal subset. Our algorithm computes sparse subsets capable of determining the tertiary structure at approximately 4.8 Å Ca RMSD with as little as 8% of the native contacts (Ca-Ca and Cb-Cb). At the same time, a randomly chosen subset of native contacts needs about twice as many contacts to reach the same level of accuracy. This “structural essence” opens new avenues in the fields of structure prediction, empirical potentials and docking. PMID:19997489
Accounting for hardware imperfections in EIT image reconstruction algorithms.
Hartinger, Alzbeta E; Gagnon, Hervé; Guardo, Robert
2007-07-01
Electrical impedance tomography (EIT) is a non-invasive technique for imaging the conductivity distribution of a body section. Different types of EIT images can be reconstructed: absolute, time difference and frequency difference. Reconstruction algorithms are sensitive to many errors which translate into image artefacts. These errors generally result from incorrect modelling or inaccurate measurements. Every reconstruction algorithm incorporates a model of the physical set-up which must be as accurate as possible since any discrepancy with the actual set-up will cause image artefacts. Several methods have been proposed in the literature to improve the model realism, such as creating anatomical-shaped meshes, adding a complete electrode model and tracking changes in electrode contact impedances and positions. Absolute and frequency difference reconstruction algorithms are particularly sensitive to measurement errors and generally assume that measurements are made with an ideal EIT system. Real EIT systems have hardware imperfections that cause measurement errors. These errors translate into image artefacts since the reconstruction algorithm cannot properly discriminate genuine measurement variations produced by the medium under study from those caused by hardware imperfections. We therefore propose a method for eliminating these artefacts by integrating a model of the system hardware imperfections into the reconstruction algorithms. The effectiveness of the method has been evaluated by reconstructing absolute, time difference and frequency difference images with and without the hardware model from data acquired on a resistor mesh phantom. Results have shown that artefacts are smaller for images reconstructed with the model, especially for frequency difference imaging.
NASA Astrophysics Data System (ADS)
Melli, S. Ali; Wahid, Khan A.; Babyn, Paul; Cooper, David M. L.; Gopi, Varun P.
2016-12-01
Synchrotron X-ray Micro Computed Tomography (Micro-CT) is an imaging technique which is increasingly used for non-invasive in vivo preclinical imaging. However, it often requires a large number of projections from many different angles to reconstruct high-quality images leading to significantly high radiation doses and long scan times. To utilize this imaging technique further for in vivo imaging, we need to design reconstruction algorithms that reduce the radiation dose and scan time without reduction of reconstructed image quality. This research is focused on using a combination of gradient-based Douglas-Rachford splitting and discrete wavelet packet shrinkage image denoising methods to design an algorithm for reconstruction of large-scale reduced-view synchrotron Micro-CT images with acceptable quality metrics. These quality metrics are computed by comparing the reconstructed images with a high-dose reference image reconstructed from 1800 equally spaced projections spanning 180°. Visual and quantitative-based performance assessment of a synthetic head phantom and a femoral cortical bone sample imaged in the biomedical imaging and therapy bending magnet beamline at the Canadian Light Source demonstrates that the proposed algorithm is superior to the existing reconstruction algorithms. Using the proposed reconstruction algorithm to reduce the number of projections in synchrotron Micro-CT is an effective way to reduce the overall radiation dose and scan time which improves in vivo imaging protocols.
NASA Astrophysics Data System (ADS)
Garay, Michael J.; Davis, Anthony B.; Diner, David J.
2016-12-01
We present initial results using computed tomography to reconstruct the three-dimensional structure of an aerosol plume from passive observations made by the Multi-angle Imaging SpectroRadiometer (MISR) instrument on NASA's Terra satellite. MISR views the Earth from nine different angles at four visible and near-infrared wavelengths. Adopting the 672 nm channel, we treat each view as an independent measure of aerosol optical thickness along the line of sight at 1.1 km resolution. A smoke plume over dark water is selected as it provides a more tractable lower boundary condition for the retrieval. A tomographic algorithm is used to reconstruct the horizontal and vertical aerosol extinction field for one along-track slice from the path of all camera rays passing through a regular grid. The results compare well with ground-based lidar observations from a nearby Micropulse Lidar Network site.
Vertical structure of medium-scale traveling ionospheric disturbances
NASA Astrophysics Data System (ADS)
Ssessanga, Nicholas; Kim, Yong Ha; Kim, Eunsol
2015-11-01
We develop an algorithm of computerized ionospheric tomography (CIT) to infer information on the vertical and horizontal structuring of electron density during nighttime medium-scale traveling ionospheric disturbances (MSTIDs). To facilitate digital CIT we have adopted total electron contents (TEC) from a dense Global Positioning System (GPS) receiver network, GEONET, which contains more than 1000 receivers. A multiplicative algebraic reconstruction technique was utilized with a calibrated IRI-2012 model as an initial solution. The reconstructed F2 peak layer varied in altitude with average peak-to-peak amplitude of ~52 km. In addition, the F2 peak layer anticorrelated with TEC variations. This feature supports a theory in which nighttime MSTID is composed of oscillating electric fields due to conductivity variations. Moreover, reconstructed TEC variations over two stations were reasonably close to variations directly derived from the measured TEC data set. Our tomographic analysis may thus help understand three-dimensional structure of MSTIDs in a quantitative way.
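The multiplicative algebraic reconstruction technique referred to above updates each voxel by the measured-to-modelled ratio raised to a relaxation-weighted power, one ray at a time, starting from a background model. A generic MART sketch on a dense toy geometry is given below; the ray matrix, relaxation factor and background values are placeholders rather than GEONET- or IRI-specific quantities.

```python
import numpy as np

def mart(ray_matrix, tec, x0, relaxation=0.2, n_sweeps=20):
    """Multiplicative ART: each ray multiplies the voxels it crosses by the
    measured/modelled ratio raised to a relaxation-weighted power."""
    x = x0.astype(float).copy()
    a = ray_matrix
    for _ in range(n_sweeps):
        for i in range(a.shape[0]):
            predicted = a[i] @ x
            if predicted <= 0:
                continue
            ratio = tec[i] / predicted
            x *= ratio ** (relaxation * a[i] / a[i].max())
    return x

rng = np.random.default_rng(0)
a = rng.random((60, 25)) * (rng.random((60, 25)) > 0.6)   # sparse toy ray geometry
x_true = rng.random(25) + 0.5
y = a @ x_true                                            # noiseless slant TEC
x0 = np.full(25, x_true.mean())                           # background model (e.g. from IRI)
print("relative error:", np.linalg.norm(mart(a, y, x0) - x_true) / np.linalg.norm(x_true))
```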
NASA Astrophysics Data System (ADS)
Luo, Shouhua; Shen, Tao; Sun, Yi; Li, Jing; Li, Guang; Tang, Xiangyang
2018-04-01
In high resolution (microscopic) CT applications, the scan field of view should cover the entire specimen or sample to allow complete data acquisition and image reconstruction. However, truncation may occur in projection data and results in artifacts in reconstructed images. In this study, we propose a low resolution image constrained reconstruction algorithm (LRICR) for interior tomography in microscopic CT at high resolution. In general, the multi-resolution acquisition based methods can be employed to solve the data truncation problem if the project data acquired at low resolution are utilized to fill up the truncated projection data acquired at high resolution. However, most existing methods place quite strict restrictions on the data acquisition geometry, which greatly limits their utility in practice. In the proposed LRICR algorithm, full and partial data acquisition (scan) at low and high resolutions, respectively, are carried out. Using the image reconstructed from sparse projection data acquired at low resolution as the prior, a microscopic image at high resolution is reconstructed from the truncated projection data acquired at high resolution. Two synthesized digital phantoms, a raw bamboo culm and a specimen of mouse femur, were utilized to evaluate and verify performance of the proposed LRICR algorithm. Compared with the conventional TV minimization based algorithm and the multi-resolution scout-reconstruction algorithm, the proposed LRICR algorithm shows significant improvement in reduction of the artifacts caused by data truncation, providing a practical solution for high quality and reliable interior tomography in microscopic CT applications. The proposed LRICR algorithm outperforms the multi-resolution scout-reconstruction method and the TV minimization based reconstruction for interior tomography in microscopic CT.
Ma, Ren; Zhou, Xiaoqing; Zhang, Shunqi; Yin, Tao; Liu, Zhipeng
2016-12-21
In this study we present a three-dimensional (3D) reconstruction algorithm for magneto-acoustic tomography with magnetic induction (MAT-MI) based on the characteristics of the ultrasound transducer. The algorithm is investigated to solve the blur problem of the MAT-MI acoustic source image, which is caused by the ultrasound transducer and the scanning geometry. First, we established a transducer model matrix using measured data from the real transducer. With reference to the S-L model used in the computed tomography algorithm, a 3D phantom model of electrical conductivity is set up. Both sphere scanning and cylinder scanning geometries are adopted in the computer simulation. Then, using finite element analysis, the distribution of the eddy current and the acoustic source as well as the acoustic pressure can be obtained with the transducer model matrix. Next, using singular value decomposition, the inverse transducer model matrix together with the reconstruction algorithm are worked out. The acoustic source and the conductivity images are reconstructed using the proposed algorithm. Comparisons between an ideal point transducer and the realistic transducer are made to evaluate the algorithms. Finally, an experiment is performed using a graphite phantom. We found that images of the acoustic source reconstructed using the proposed algorithm are a better match than those using the previous one, the correlation coefficient of sphere scanning geometry is 98.49% and that of cylinder scanning geometry is 94.96%. Comparison between the ideal point transducer and the realistic transducer shows that the correlation coefficients are 90.2% in sphere scanning geometry and 86.35% in cylinder scanning geometry. The reconstruction of the graphite phantom experiment also shows a higher resolution using the proposed algorithm. We conclude that the proposed reconstruction algorithm, which considers the characteristics of the transducer, can obviously improve the resolution of the reconstructed image. This study can be applied to analyse the effect of the position of the transducer and the scanning geometry on imaging. It may provide a more precise method to reconstruct the conductivity distribution in MAT-MI.
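The deconvolution step, inverting the transducer model via singular value decomposition, can be sketched with a truncated-SVD pseudo-inverse. The blur matrix, truncation threshold and source vector below are illustrative placeholders, not the measured transducer response or scanning geometry used by the authors.

```python
import numpy as np

def svd_pseudo_inverse_solve(model_matrix, measurements, rel_tol=1e-3):
    """Recover the acoustic-source vector with a truncated-SVD pseudo-inverse
    of the transducer model matrix."""
    u, s, vt = np.linalg.svd(model_matrix, full_matrices=False)
    keep = s > rel_tol * s[0]                 # discard tiny singular values
    s_inv = np.zeros_like(s)
    s_inv[keep] = 1.0 / s[keep]
    return vt.T @ (s_inv * (u.T @ measurements))

rng = np.random.default_rng(0)
n = 50
# Toy transducer model: a Gaussian smoothing (blurring) response along the scan axis.
rows = np.arange(n)
model = np.exp(-0.5 * ((rows[:, None] - rows[None, :]) / 2.0) ** 2)
source_true = np.zeros(n)
source_true[[15, 30]] = 1.0
blurred = model @ source_true + 1e-4 * rng.normal(size=n)

recovered = svd_pseudo_inverse_solve(model, blurred)
print("recovered peak positions:", np.sort(np.argsort(recovered)[-2:]))
```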
Simulation and performance of an artificial retina for 40 MHz track reconstruction
Abba, A.; Bedeschi, F.; Citterio, M.; ...
2015-03-05
We present the results of a detailed simulation of the artificial retina pattern-recognition algorithm, designed to reconstruct events with hundreds of charged-particle tracks in pixel and silicon detectors at LHCb at the LHC crossing frequency of 40 MHz. The performance of the artificial retina algorithm is assessed using the official Monte Carlo samples of the LHCb experiment. We found the performance of the retina pattern-recognition algorithm to be comparable with that of the full LHCb reconstruction algorithm.
A Survey of the Use of Iterative Reconstruction Algorithms in Electron Microscopy
Otón, J.; Vilas, J. L.; Kazemi, M.; Melero, R.; del Caño, L.; Cuenca, J.; Conesa, P.; Gómez-Blanco, J.; Marabini, R.; Carazo, J. M.
2017-01-01
One of the key steps in Electron Microscopy is the tomographic reconstruction of a three-dimensional (3D) map of the specimen being studied from a set of two-dimensional (2D) projections acquired at the microscope. This tomographic reconstruction may be performed with different reconstruction algorithms that can be grouped into several large families: direct Fourier inversion methods, back-projection methods, Radon methods, or iterative algorithms. In this review, we focus on the latter family of algorithms, explaining the mathematical rationale behind the different algorithms in this family as they have been introduced in the field of Electron Microscopy. We cover their use in Single Particle Analysis (SPA) as well as in Electron Tomography (ET). PMID:29312997
Network design and analysis for multi-enzyme biocatalysis.
Blaß, Lisa Katharina; Weyler, Christian; Heinzle, Elmar
2017-08-10
As more and more biological reaction data become available, the full exploration of the enzymatic potential for the synthesis of valuable products opens up exciting new opportunities but is becoming increasingly complex. The manual design of multi-step biosynthesis routes involving enzymes from different organisms is very challenging. To harness the full enzymatic potential, we developed a computational tool for the directed design of biosynthetic production pathways for multi-step catalysis with in vitro enzyme cascades, cell hydrolysates and permeabilized cells. We present a method which encompasses the reconstruction of a genome-scale pan-organism metabolic network, path-finding and the ranking of the resulting pathway candidates for proposing suitable synthesis pathways. The network is based on reaction and reaction pair data from the Kyoto Encyclopedia of Genes and Genomes (KEGG) and the thermodynamics calculator eQuilibrator. The pan-organism network is especially useful for finding the most suitable pathway to a target metabolite from a thermodynamic or economic standpoint. However, our method can be used with any network reconstruction, e.g. for a specific organism. We implemented a path-finding algorithm based on a mixed-integer linear program (MILP) which takes into account both topology and stoichiometry of the underlying network. Unlike other methods we do not specify a single starting metabolite, but our algorithm searches for pathways starting from arbitrary start metabolites to a target product of interest. Using a set of biochemical ranking criteria including pathway length, thermodynamics and other biological characteristics such as number of heterologous enzymes or cofactor requirement, it is possible to obtain well-designed meaningful pathway alternatives. In addition, a thermodynamic profile, the overall reactant balance and potential side reactions as well as an SBML file for visualization are generated for each pathway alternative. We present an in silico tool for the design of multi-enzyme biosynthetic production pathways starting from a pan-organism network. The method is highly customizable and each module can be adapted to the focus of the project at hand. This method is directly applicable for (i) in vitro enzyme cascades, (ii) cell hydrolysates and (iii) permeabilized cells.
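The path-finding idea, searching a reconstructed reaction network for routes that end at a target metabolite and ranking the candidates, can be illustrated in a much simplified, purely topological form; the authors use a stoichiometry-aware MILP rather than the graph search shown here. All metabolite, reaction and enzyme names below are invented.

```python
import networkx as nx

# Toy metabolite graph: nodes are metabolites, edges are enzymatic conversions
# labelled with the catalysing enzyme (all names are hypothetical).
network = nx.DiGraph()
network.add_edge("glucose", "intermediate_A", enzyme="E1")
network.add_edge("intermediate_A", "intermediate_B", enzyme="E2")
network.add_edge("intermediate_B", "target_product", enzyme="E3")
network.add_edge("glucose", "intermediate_C", enzyme="E4")
network.add_edge("intermediate_C", "target_product", enzyme="E5")

# Enumerate candidate routes to the target and rank them by length
# (a crude stand-in for the biochemical ranking criteria described above).
routes = sorted(nx.all_simple_paths(network, "glucose", "target_product"), key=len)
for route in routes:
    enzymes = [network.edges[u, v]["enzyme"] for u, v in zip(route, route[1:])]
    print(" -> ".join(route), "| enzymes:", enzymes)
```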
Bisognin, Andrea; Sales, Gabriele; Coppe, Alessandro; Bortoluzzi, Stefania; Romualdi, Chiara
2012-01-01
MAGIA2 (http://gencomp.bio.unipd.it/magia2) is an update, extension and evolution of the MAGIA web tool. It is dedicated to the integrated analysis of in silico target prediction, microRNA (miRNA) and gene expression data for the reconstruction of post-transcriptional regulatory networks. miRNAs are fundamental post-transcriptional regulators of several key biological and pathological processes. As miRNAs act prevalently through target degradation, their expression profiles are expected to be inversely correlated to those of the target genes. Low specificity of target prediction algorithms makes integration approaches an interesting solution for target prediction refinement. MAGIA2 performs this integrative approach supporting different association measures, multiple organisms and almost all target predictions algorithms. Nevertheless, miRNAs activity should be viewed as part of a more complex scenario where regulatory elements and their interactors generate a highly connected network and where gene expression profiles are the result of different levels of regulation. The updated MAGIA2 tries to dissect this complexity by reconstructing mixed regulatory circuits involving either miRNA or transcription factor (TF) as regulators. Two types of circuits are identified: (i) a TF that regulates both a miRNA and its target and (ii) a miRNA that regulates both a TF and its target. PMID:22618880
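The integration principle, that a genuine miRNA-target pair should show anti-correlated expression, can be expressed as a simple filter over predicted pairs. The sketch below computes Spearman correlations for hypothetical miRNA and target expression profiles and keeps predicted pairs with strong negative correlation; the threshold, profiles and pair names are invented for illustration.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_samples = 20

# Hypothetical expression profiles across matched samples.
mirna = rng.normal(size=n_samples)
target_repressed = -0.8 * mirna + 0.3 * rng.normal(size=n_samples)   # behaves like a true target
target_unrelated = rng.normal(size=n_samples)                        # predicted but spurious

predicted_pairs = {"miR-X -> geneA": target_repressed,
                   "miR-X -> geneB": target_unrelated}

for pair, target in predicted_pairs.items():
    rho, pval = spearmanr(mirna, target)
    kept = rho < -0.5 and pval < 0.05          # illustrative anti-correlation filter
    print(pair, "rho=%.2f" % rho, "kept" if kept else "discarded")
```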
NASA Astrophysics Data System (ADS)
Berta, Maristella; Bellomo, Lucio; Griffa, Annalisa; Gatimu Magaldi, Marcello; Marmain, Julien; Molcard, Anne; Taillandier, Vincent
2013-04-01
The Lagrangian assimilation algorithm LAVA (LAgrangian Variational Analysis) is customized for coastal areas in the framework of the TOSCA (Tracking Oil Spills & Coastal Awareness network) Project, to improve the response to maritime accidents in the Mediterranean Sea. LAVA assimilates drifters' trajectories into velocity fields which may come from either coastal radars or numerical models. In the present study, LAVA is applied to the coastal area in front of Toulon (France). Surface currents are available from a WERA radar network (2 km spatial resolution, every 20 minutes) and from the GLAZUR model (1/64° spatial resolution, every hour). The cluster of drifters considered consists of 7 buoys, transmitting every 15 minutes for a period of 5 days. Three assimilation cases are considered: i) correction of the radar velocity field, ii) correction of the model velocity field and iii) reconstruction of the velocity field from drifters only. It is found that drifters' trajectories compare well with the ones obtained by the radar, and the correction to the radar velocity field is therefore minimal. In contrast, observed and numerical trajectories separate rapidly, and the correction to the model velocity field is substantial. For the reconstruction from drifters only, the velocity fields obtained are similar to the radar ones, but limited to the neighborhood of the drifter paths.
Photogrammetric point cloud compression for tactical networks
NASA Astrophysics Data System (ADS)
Madison, Andrew C.; Massaro, Richard D.; Wayant, Clayton D.; Anderson, John E.; Smith, Clint B.
2017-05-01
We report progress toward the development of a compression schema suitable for use in the Army's Common Operating Environment (COE) tactical network. The COE facilitates the dissemination of information across all Warfighter echelons through the establishment of data standards and networking methods that coordinate the readout and control of a multitude of sensors in a common operating environment. When integrated with a robust geospatial mapping functionality, the COE enables force tracking, remote surveillance, and heightened situational awareness to Soldiers at the tactical level. Our work establishes a point cloud compression algorithm through image-based deconstruction and photogrammetric reconstruction of three-dimensional (3D) data that is suitable for dissemination within the COE. An open source visualization toolkit was used to deconstruct 3D point cloud models based on ground mobile light detection and ranging (LiDAR) into a series of images and associated metadata that can be easily transmitted on a tactical network. Stereo photogrammetric reconstruction is then conducted on the received image stream to reveal the transmitted 3D model. The reported method boasts nominal compression ratios typically on the order of 250 while retaining tactical information and accurate georegistration. Our work advances the scope of persistent intelligence, surveillance, and reconnaissance through the development of 3D visualization and data compression techniques relevant to the tactical operations environment.
Material Interface Reconstruction in VisIt
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meredith, J S
In this paper, we first survey a variety of approaches to material interface reconstruction and their applicability to visualization, and we investigate the details of the current reconstruction algorithm in the VisIt scientific analysis and visualization tool. We then provide a novel implementation of the original VisIt algorithm that makes use of a wide range of the finite element zoo during reconstruction. This approach results in dramatic improvements in quality and performance without sacrificing the strengths of the VisIt algorithm as it relates to visualization.
NASA Astrophysics Data System (ADS)
Liu, Hao; Li, Kangda; Wang, Bing; Tang, Hainie; Gong, Xiaohui
2017-01-01
A quantized block compressive sensing (QBCS) framework, which incorporates the universal measurement, quantization/inverse quantization, entropy coder/decoder, and iterative projected Landweber reconstruction, is summarized. Under the QBCS framework, this paper presents an improved reconstruction algorithm for aerial imagery, QBCS with entropy-aware projected Landweber (QBCS-EPL), which leverages a full-image sparse transform without Wiener filtering and an entropy-aware thresholding model for wavelet-domain image denoising. By analyzing the functional relation between the soft-thresholding factors and the entropy-based bitrates for different quantization methods, the proposed model can remove wavelet-domain noise more effectively than bivariate shrinkage and achieve better image reconstruction quality. Regarding the overall performance of QBCS reconstruction, experimental results demonstrate that the proposed QBCS-EPL algorithm significantly outperforms several existing algorithms. With the experiment-driven methodology, the QBCS-EPL algorithm obtains better reconstruction quality at a relatively moderate computational cost, which makes it more desirable for aerial imagery applications.
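The core of such a reconstruction is the projected Landweber iteration with transform-domain thresholding. The sketch below is a minimal, generic version of that loop, assuming a random Gaussian measurement matrix and a fixed soft threshold in the 2-D DCT domain; it does not reproduce the block measurement, quantization stage, or the entropy-aware threshold selection that distinguishes QBCS-EPL.

```python
# Illustrative projected Landweber iteration with transform-domain soft
# thresholding, the generic building block behind QBCS-style reconstruction.
# Phi is a random Gaussian matrix and the sparsifying transform is a 2-D DCT;
# both are simplifications of the paper's setup.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
n = 32                              # image is n x n
m = 400                             # number of measurements (undersampled)
x_true = np.zeros((n, n))
x_true[8:24, 8:24] = 1.0            # simple blocky test image

Phi = rng.standard_normal((m, n * n)) / np.sqrt(n * n)
y = Phi @ x_true.ravel()

x = np.zeros(n * n)
step = 0.5                          # Landweber relaxation parameter
lam = 0.02                          # soft-threshold level (fixed, not entropy-aware)
for _ in range(200):
    x = x + step * Phi.T @ (y - Phi @ x)          # Landweber (gradient) step
    coeffs = dctn(x.reshape(n, n), norm="ortho")  # move to the sparse domain
    coeffs = np.sign(coeffs) * np.maximum(np.abs(coeffs) - lam, 0.0)  # soft threshold
    x = idctn(coeffs, norm="ortho").ravel()       # back to the image domain

print("relative error:", np.linalg.norm(x - x_true.ravel()) / np.linalg.norm(x_true))
```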
NASA Astrophysics Data System (ADS)
Wu, Wei; Zhao, Dewei; Zhang, Huan
2015-12-01
Super-resolution image reconstruction is an effective method to improve image quality and has important research significance in the field of image processing. However, the choice of dictionary directly affects the efficiency of image reconstruction. Sparse representation theory is introduced into the problem of nearest-neighbor selection. Building on sparse-representation-based super-resolution reconstruction, a super-resolution image reconstruction algorithm based on a multi-class dictionary is analyzed. This method avoids the redundancy of training a single over-complete dictionary, makes the sub-dictionaries more representative, and replaces the traditional Euclidean distance computation to improve the quality of the whole image reconstruction. In addition, non-local self-similarity regularization is introduced to address the ill-posed problem. Experimental results show that the algorithm achieves much better results than the state-of-the-art algorithm in terms of both PSNR and visual perception.
Wind speed time series reconstruction using a hybrid neural genetic approach
NASA Astrophysics Data System (ADS)
Rodriguez, H.; Flores, J. J.; Puig, V.; Morales, L.; Guerra, A.; Calderon, F.
2017-11-01
Currently, electric energy is used in practically all modern human activities. Most of the energy produced comes from fossil fuels, causing irreversible damage to the environment. Lately, there has been an effort by nations to produce energy using clean methods, such as solar and wind energy, among others. Wind energy is one of the cleanest alternatives. However, wind speed is not constant, which makes the planning and operation of electric power systems difficult. Knowing in advance the amount of raw material (wind speed) available for energy production allows us to estimate the energy to be generated by the power plant, helping maintenance planning, operational management, and the optimization of operational cost. For these reasons, forecasting wind speed becomes a necessary task. The forecast process involves the use of past observations of the variable to forecast (wind speed). To measure wind speed, weather stations use devices called anemometers, but due to poor maintenance, connection errors, or natural wear, they may produce false or missing data. In this work, a hybrid methodology is proposed that uses a compact genetic algorithm with an artificial neural network to reconstruct wind speed time series. The proposed methodology reconstructs the time series using an ANN defined by a compact genetic algorithm.
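The sketch below illustrates only the compact-GA component under simplifying assumptions: the individuals are bit masks selecting which lagged samples of a synthetic series feed a least-squares predictor (standing in for the ANN), and the probability-vector update is the standard winner/loser rule.

```python
# Minimal sketch of a compact genetic algorithm (cGA) selecting which lagged
# samples of a (synthetic) wind-speed series feed a simple linear predictor.
# The ANN of the hybrid method is replaced here by least-squares regression
# purely to keep the example short; the cGA probability-vector update is the
# part being illustrated.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(2000)
series = 5 + 2 * np.sin(2 * np.pi * t / 144) + 0.3 * rng.standard_normal(t.size)

max_lag = 12

def fitness(mask):
    """Negative one-step prediction error using the lags enabled in `mask`."""
    lags = np.flatnonzero(mask) + 1
    if lags.size == 0:
        return -np.inf
    X = np.column_stack([series[max_lag - l:-l] for l in lags])
    y = series[max_lag:]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return -np.mean((X @ w - y) ** 2)

p = np.full(max_lag, 0.5)   # probability vector over lag-selection bits
n_virtual = 50              # virtual population size (update step 1/n_virtual)
for _ in range(300):
    a = (rng.random(max_lag) < p).astype(int)
    b = (rng.random(max_lag) < p).astype(int)
    winner, loser = (a, b) if fitness(a) >= fitness(b) else (b, a)
    p += (winner - loser) / n_virtual      # shift probabilities toward the winner
    p = np.clip(p, 0.0, 1.0)

print("selected lags:", np.flatnonzero(p > 0.5) + 1)
```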
Volume reconstruction optimization for tomo-PIV algorithms applied to experimental data
NASA Astrophysics Data System (ADS)
Martins, Fabio J. W. A.; Foucaut, Jean-Marc; Thomas, Lionel; Azevedo, Luis F. A.; Stanislas, Michel
2015-08-01
Tomographic PIV is a three-component volumetric velocity measurement technique based on the tomographic reconstruction of a particle distribution imaged by multiple camera views. In essence, the performance and accuracy of this technique are highly dependent on the parametric adjustment and the reconstruction algorithm used. Although synthetic data have been widely employed to optimize experiments, the resulting reconstructed volumes might not have optimal quality. The purpose of the present study is to offer quality indicators that can be applied to data samples in order to improve the quality of velocity results obtained by the tomo-PIV technique. The proposed methodology can potentially lead to a significant reduction in the time required to optimize a tomo-PIV reconstruction, while also yielding better-quality velocity results. Tomo-PIV data provided by a six-camera turbulent boundary-layer experiment were used to optimize the reconstruction algorithms according to this methodology. Velocity statistics obtained by the optimized BIMART, SMART and MART algorithms were compared with hot-wire anemometer data, and velocity measurement uncertainties were computed. Results indicated that the BIMART and SMART algorithms produced reconstructed volumes of quality equivalent to the standard MART, with the benefit of reduced computational time.
NASA Astrophysics Data System (ADS)
Humphries, T.; Winn, J.; Faridani, A.
2017-08-01
Recent work in CT image reconstruction has seen increasing interest in the use of total variation (TV) and related penalties to regularize problems involving reconstruction from undersampled or incomplete data. Superiorization is a recently proposed heuristic which provides an automatic procedure to ‘superiorize’ an iterative image reconstruction algorithm with respect to a chosen objective function, such as TV. Under certain conditions, the superiorized algorithm is guaranteed to find a solution that is as satisfactory as any found by the original algorithm with respect to satisfying the constraints of the problem; this solution is also expected to be superior with respect to the chosen objective. Most work on superiorization has used reconstruction algorithms which assume a linear measurement model, which in the case of CT corresponds to data generated from a monoenergetic x-ray beam. Many CT systems generate x-rays from a polyenergetic spectrum, however, in which the measured data represent an integral of object attenuation over all energies in the spectrum. This inconsistency with the linear model produces the well-known beam hardening artifacts, which impair analysis of CT images. In this work we superiorize an iterative algorithm for reconstruction from polyenergetic data, using both TV and an anisotropic TV (ATV) penalty. We apply the superiorized algorithm in numerical phantom experiments modeling both sparse-view and limited-angle scenarios. In our experiments, the superiorized algorithm successfully finds solutions which are as constraints-compatible as those found by the original algorithm, with significantly reduced TV and ATV values. The superiorized algorithm thus produces images with greatly reduced sparse-view and limited angle artifacts, which are also largely free of the beam hardening artifacts that would be present if a superiorized version of a monoenergetic algorithm were used.
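A minimal sketch of the superiorization pattern itself follows: a feasibility-seeking basic step (here a simple Landweber-type update on a toy linear system, i.e. a monoenergetic stand-in) is preceded at each iteration by a TV-nonascending perturbation whose size shrinks geometrically. The phantom, system matrix, and parameter values are illustrative only and do not reproduce the polyenergetic algorithm of the paper.

```python
# Schematic superiorization loop: a basic iterative step (a Landweber/ART-like
# update for a toy linear system) is interleaved with TV-reducing
# perturbations whose size shrinks geometrically.
import numpy as np

rng = np.random.default_rng(0)
n = 16
x_true = np.zeros((n, n)); x_true[4:12, 4:12] = 1.0
A = rng.standard_normal((180, n * n)) / n          # toy system matrix
b = A @ x_true.ravel()

def tv(img):
    return (np.abs(np.diff(img, axis=0)).sum() +
            np.abs(np.diff(img, axis=1)).sum())

def tv_subgradient(img):
    g = np.zeros_like(img)
    dx = np.sign(np.diff(img, axis=0)); g[:-1] -= dx; g[1:] += dx
    dy = np.sign(np.diff(img, axis=1)); g[:, :-1] -= dy; g[:, 1:] += dy
    return g

x = np.zeros(n * n)
beta, kappa = 1.0, 0.995                           # perturbation size and decay
for k in range(500):
    # superiorization: try a TV-nonascending perturbation before the basic step
    g = tv_subgradient(x.reshape(n, n)).ravel()
    if np.linalg.norm(g) > 0:
        cand = x - beta * g / np.linalg.norm(g)
        if tv(cand.reshape(n, n)) <= tv(x.reshape(n, n)):
            x = cand
    beta *= kappa
    # basic algorithm step (feasibility-seeking with respect to the data)
    x = x + 0.5 * A.T @ (b - A @ x)

print("TV of result:", tv(x.reshape(n, n)))
```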
A BPF-FBP tandem algorithm for image reconstruction in reverse helical cone-beam CT
Cho, Seungryong; Xia, Dan; Pellizzari, Charles A.; Pan, Xiaochuan
2010-01-01
Purpose: Reverse helical cone-beam computed tomography (CBCT) is a scanning configuration for potential applications in image-guided radiation therapy in which an accurate anatomic image of the patient is needed for image-guidance procedures. The authors previously developed an algorithm for image reconstruction from nontruncated data of an object that is completely within the reverse helix. The purpose of this work is to develop an image reconstruction approach for reverse helical CBCT of a long object that extends out of the reverse helix and therefore constitutes data truncation. Methods: The proposed approach comprises two reconstruction steps. In the first step, a chord-based backprojection-filtration (BPF) algorithm reconstructs a volumetric image of an object from the original cone-beam data. Because there exists a chordless region in the middle of the reverse helix, the image obtained in the first step contains an unreconstructed central-gap region. In the second step, the gap region is reconstructed by use of a Pack-Noo-formula-based filtered backprojection (FBP) algorithm from the modified cone-beam data obtained by subtracting from the original cone-beam data the reprojection of the image reconstructed in the first step. Results: The authors have performed numerical studies to validate the proposed approach in image reconstruction from reverse helical cone-beam data. The results confirm that the proposed approach can reconstruct accurate images of a long object without suffering from data-truncation artifacts or cone-angle artifacts. Conclusions: They developed and validated a BPF-FBP tandem algorithm to reconstruct images of a long object from reverse helical cone-beam data. The chord-based BPF algorithm was utilized for converting the long-object problem into a short-object problem. The proposed approach is applicable to other scanning configurations such as reduced circular sinusoidal trajectories. PMID:20175463
Jia, Xun; Lou, Yifei; Li, Ruijiang; Song, William Y; Jiang, Steve B
2010-04-01
Cone-beam CT (CBCT) plays an important role in image guided radiation therapy (IGRT). However, the large radiation dose from serial CBCT scans in most IGRT procedures raises a clinical concern, especially for pediatric patients who are essentially excluded from receiving IGRT for this reason. The goal of this work is to develop a fast GPU-based algorithm to reconstruct CBCT from undersampled and noisy projection data so as to lower the imaging dose. The CBCT is reconstructed by minimizing an energy functional consisting of a data fidelity term and a total variation regularization term. The authors developed a GPU-friendly version of the forward-backward splitting algorithm to solve this model. A multigrid technique is also employed. It is found that 20-40 x-ray projections are sufficient to reconstruct images with satisfactory quality for IGRT. The reconstruction time ranges from 77 to 130 s on an NVIDIA Tesla C1060 (NVIDIA, Santa Clara, CA) GPU card, depending on the number of projections used, which is estimated to be about 100 times faster than similar iterative reconstruction approaches. Moreover, phantom studies indicate that the algorithm enables the CBCT to be reconstructed under a scanning protocol with as low as 0.1 mA s/projection. Compared with the currently widely used full-fan head-and-neck scanning protocol of approximately 360 projections with 0.4 mA s/projection, it is estimated that an overall 36-72 times dose reduction is achieved by our fast CBCT reconstruction algorithm. This work indicates that the developed GPU-based CBCT reconstruction algorithm is capable of lowering imaging dose considerably. The high computation efficiency of this algorithm makes the iterative CBCT reconstruction approach applicable in real clinical environments.
2010-01-01
Background The reconstruction of protein complexes from the physical interactome of organisms serves as a building block towards understanding the higher level organization of the cell. Over the past few years, several independent high-throughput experiments have helped to catalogue an enormous amount of physical protein interaction data from organisms such as yeast. However, these individual datasets show a lack of correlation with each other and also contain a substantial number of false positives (noise). Over these years, several affinity scoring schemes have also been devised to improve the qualities of these datasets. Therefore, the challenge now is to detect meaningful as well as novel complexes from protein interaction (PPI) networks derived by combining datasets from multiple sources and by making use of these affinity scoring schemes. In the attempt towards tackling this challenge, the Markov Clustering algorithm (MCL) has proved to be a popular and reasonably successful method, mainly due to its scalability, robustness, and ability to work on scored (weighted) networks. However, MCL produces many noisy clusters, which either do not match known complexes or have additional proteins that reduce the accuracies of correctly predicted complexes. Results Inspired by recent experimental observations by Gavin and colleagues on the modularity structure in yeast complexes and the distinctive properties of "core" and "attachment" proteins, we develop a core-attachment based refinement method coupled to MCL for reconstruction of yeast complexes from scored (weighted) PPI networks. We combine physical interactions from two recent "pull-down" experiments to generate an unscored PPI network. We then score this network using available affinity scoring schemes to generate multiple scored PPI networks. The evaluation of our method (called MCL-CAw) on these networks shows that: (i) MCL-CAw derives a larger number of yeast complexes with better accuracies than MCL, particularly in the presence of natural noise; (ii) Affinity scoring can effectively reduce the impact of noise on MCL-CAw and thereby improve the quality (precision and recall) of its predicted complexes; (iii) MCL-CAw responds well to most available scoring schemes. We discuss several instances where MCL-CAw was successful in deriving meaningful complexes, and where it missed a few proteins or whole complexes due to affinity scoring of the networks. We compare MCL-CAw with several recent complex detection algorithms on unscored and scored networks, and assess the relative performance of the algorithms on these networks. Further, we study the impact of augmenting physical datasets with computationally inferred interactions for complex detection. Finally, we analyse the essentiality of proteins within predicted complexes to understand a possible correlation between protein essentiality and their ability to form complexes. Conclusions We demonstrate that core-attachment based refinement in MCL-CAw improves the predictions of MCL on yeast PPI networks. We show that affinity scoring improves the performance of MCL-CAw. PMID:20939868
NASA Astrophysics Data System (ADS)
Tang, Xiangyang
2003-05-01
In multi-slice helical CT, the single-tilted-plane-based reconstruction algorithm has been proposed to combat helical and cone beam artifacts by tilting a reconstruction plane to fit a helical source trajectory optimally. Furthermore, to improve the noise characteristics or dose efficiency of the single-tilted-plane-based reconstruction algorithm, the multi-tilted-plane-based reconstruction algorithm has been proposed, in which the reconstruction plane deviates from the globally optimized pose due to an extra rotation about the third axis. As a result, the capability of suppressing helical and cone beam artifacts in the multi-tilted-plane-based reconstruction algorithm is compromised. An optimized tilted-plane-based reconstruction algorithm is proposed in this paper, in which a matched view weighting strategy is employed to optimize both the capability of suppressing helical and cone beam artifacts and the noise characteristics. A helical body phantom is employed to quantitatively evaluate the imaging performance of the matched view weighting approach by tabulating artifact index and noise characteristics, showing that matched view weighting improves both the helical artifact suppression and the noise characteristics or dose efficiency significantly in comparison to the case in which non-matched view weighting is applied. Finally, it is believed that the matched view weighting approach is of practical importance in the development of multi-slice helical CT, because it maintains the computational structure of fan beam filtered backprojection and demands no extra computational cost.
NASA Astrophysics Data System (ADS)
Gok, Gokhan; Mosna, Zbysek; Arikan, Feza; Arikan, Orhan; Erdem, Esra
2016-07-01
Ionospheric observation is essentially accomplished by specialized radar systems called ionosondes. The time delay between the transmitted and received signals versus frequency is measured by the ionosondes, and the received signals are processed to generate ionogram plots, which show the time delay or reflection height of signals with respect to the transmitted frequency. The critical frequencies of ionospheric layers and the virtual heights, which provide useful information about ionospheric structure, can be extracted from ionograms. Ionograms also indicate the amount of variability or disturbance in the ionosphere. With special inversion algorithms and tomographic methods, electron density profiles can also be estimated from ionograms. Although structural pictures of the ionosphere in the vertical direction can be observed from ionosonde measurements, some errors may arise due to inaccuracies in signal propagation, modeling, data processing and tomographic reconstruction algorithms. Recently the IONOLAB group (www.ionolab.org) developed a new algorithm for effective and accurate extraction of ionospheric parameters and reconstruction of the electron density profile from ionograms. The electron density reconstruction algorithm applies advanced optimization techniques to calculate the parameters of any analytical function that defines electron density with respect to height, using ionogram measurement data. The process of reconstructing electron density with respect to height is known as ionogram scaling or true height analysis. The IONOLAB-RAY algorithm is a tool to investigate the propagation path and parameters of HF waves in the ionosphere. The algorithm models wave propagation using a ray representation under the geometrical optics approximation. In the algorithm, the structural ionospheric characteristics are represented as realistically as possible, including anisotropy, inhomogeneity and time dependence, in a 3-D voxel structure. The algorithm is also used for various purposes, including calculation of actual height and generation of ionograms. In this study, the performance of the electron density reconstruction algorithm of the IONOLAB group and the standard electron density profile algorithms of ionosondes are compared with IONOLAB-RAY wave propagation simulation in near vertical incidence. The electron density reconstruction and parameter extraction algorithms of ionosondes are validated against the IONOLAB-RAY results for both quiet and disturbed ionospheric states in Central Europe, using ionosonde stations such as Pruhonice and Juliusruh. It is observed that the IONOLAB ionosonde parameter extraction and electron density reconstruction algorithm performs significantly better than the standard algorithms, especially for disturbed ionospheric conditions. IONOLAB-RAY provides an efficient and reliable tool to investigate and validate ionosonde electron density reconstruction algorithms, especially in the determination of the reflection height (true height) of signals and the critical parameters of the ionosphere. This study is supported by TUBITAK 114E541, 115E915 and joint TUBITAK 114E092 and AS CR 14/001 projects.
NASA Astrophysics Data System (ADS)
Zhou, Meiling; Singh, Alok Kumar; Pedrini, Giancarlo; Osten, Wolfgang; Min, Junwei; Yao, Baoli
2018-03-01
We present a tunable output-frequency filter (TOF) algorithm to reconstruct the object from noisy experimental data under low-power partially coherent illumination, such as an LED, when imaging through scattering media. In the iterative algorithm, we employ Gaussian functions with different filter windows at different stages of the iteration process to reduce corruption from experimental noise and to search for a global minimum in the reconstruction. In comparison with the conventional iterative phase retrieval algorithm, we demonstrate that the proposed TOF algorithm achieves consistent and reliable reconstruction in the presence of experimental noise. Moreover, spatial resolution and distinctive features are retained in the reconstruction, since the filter is applied only to the region outside the object. The feasibility of the proposed method is proved by experimental results.
Three-dimensional dictionary-learning reconstruction of (23)Na MRI data.
Behl, Nicolas G R; Gnahm, Christine; Bachert, Peter; Ladd, Mark E; Nagel, Armin M
2016-04-01
To reduce noise and artifacts in (23)Na MRI with a Compressed Sensing reconstruction and a learned dictionary as sparsifying transform. A three-dimensional dictionary-learning compressed sensing reconstruction algorithm (3D-DLCS) for the reconstruction of undersampled 3D radial (23)Na data is presented. The dictionary used as the sparsifying transform is learned with a K-singular-value-decomposition (K-SVD) algorithm. The reconstruction parameters are optimized on simulated data, and the quality of the reconstructions is assessed with peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). The performance of the algorithm is evaluated in phantom and in vivo (23)Na MRI data of seven volunteers and compared with nonuniform fast Fourier transform (NUFFT) and other Compressed Sensing reconstructions. The reconstructions of simulated data have maximal PSNR and SSIM for an undersampling factor (USF) of 10 with numbers of averages equal to the USF. For 10-fold undersampling, the PSNR is increased by 5.1 dB compared with the NUFFT reconstruction, and the SSIM by 24%. These results are confirmed by phantom and in vivo (23)Na measurements in the volunteers that show markedly reduced noise and undersampling artifacts in the case of 3D-DLCS reconstructions. The 3D-DLCS algorithm enables precise reconstruction of undersampled (23)Na MRI data with markedly reduced noise and artifact levels compared with NUFFT reconstruction. Small structures are well preserved. © 2015 Wiley Periodicals, Inc.
Region of Interest Imaging for a General Trajectory with the Rebinned BPF Algorithm*
Bian, Junguo; Xia, Dan; Sidky, Emil Y; Pan, Xiaochuan
2010-01-01
The back-projection-filtration (BPF) algorithm has been applied to image reconstruction for cone-beam configurations with general source trajectories. The BPF algorithm can reconstruct 3-D region-of-interest (ROI) images from data containing truncations. However, like many other existing algorithms for cone-beam configurations, the BPF algorithm involves a back-projection with a spatially varying weighting factor, which can result in the non-uniform noise levels in reconstructed images and increased computation time. In this work, we propose a BPF algorithm to eliminate the spatially varying weighting factor by using a rebinned geometry for a general scanning trajectory. This proposed BPF algorithm has an improved noise property, while retaining the advantages of the original BPF algorithm such as minimum data requirement. PMID:20617122
Objective performance assessment of five computed tomography iterative reconstruction algorithms.
Omotayo, Azeez; Elbakri, Idris
2016-11-22
Iterative algorithms are gaining clinical acceptance in CT. We performed objective phantom-based image quality evaluation of five commercial iterative reconstruction algorithms available on four different multi-detector CT (MDCT) scanners at different dose levels as well as the conventional filtered back-projection (FBP) reconstruction. Using the Catphan500 phantom, we evaluated image noise, contrast-to-noise ratio (CNR), modulation transfer function (MTF) and noise-power spectrum (NPS). The algorithms were evaluated over a CTDIvol range of 0.75-18.7 mGy on four major MDCT scanners: GE DiscoveryCT750HD (algorithms: ASIR™ and VEO™); Siemens Somatom Definition AS+ (algorithm: SAFIRE™); Toshiba Aquilion64 (algorithm: AIDR3D™); and Philips Ingenuity iCT256 (algorithm: iDose4™). Images were reconstructed using FBP and the respective iterative algorithms on the four scanners. Use of iterative algorithms decreased image noise and increased CNR, relative to FBP. In the dose range of 1.3-1.5 mGy, noise reduction using iterative algorithms was in the range of 11%-51% on GE DiscoveryCT750HD, 10%-52% on Siemens Somatom Definition AS+, 49%-62% on Toshiba Aquilion64, and 13%-44% on Philips Ingenuity iCT256. The corresponding CNR increase was in the range 11%-105% on GE, 11%-106% on Siemens, 85%-145% on Toshiba and 13%-77% on Philips respectively. Most algorithms did not affect the MTF, except for VEO™ which produced an increase in the limiting resolution of up to 30%. A shift in the peak of the NPS curve towards lower frequencies and a decrease in NPS amplitude were obtained with all iterative algorithms. VEO™ required long reconstruction times, while all other algorithms produced reconstructions in real time. Compared to FBP, iterative algorithms reduced image noise and increased CNR. The iterative algorithms available on different scanners achieved different levels of noise reduction and CNR increase while spatial resolution improvements were obtained only with VEO™. This study is useful in that it provides performance assessment of the iterative algorithms available from several mainstream CT manufacturers.
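For reference, the sketch below shows how the basic phantom metrics used in such comparisons (noise, CNR, and a simple 2-D noise-power spectrum) can be computed. The image arrays and ROI coordinates are synthetic placeholders rather than Catphan measurements.

```python
# Sketch of the phantom-based metrics used in such comparisons: image noise
# (standard deviation in a uniform ROI), CNR between a contrast insert and
# background, and a simple noise-power spectrum estimate from repeated
# uniform ROIs. Images and ROI coordinates are placeholders.
import numpy as np

def noise_and_cnr(img, bg_roi, insert_roi):
    """bg_roi / insert_roi are (slice_y, slice_x) tuples into a uniform
    background region and a contrast insert of the phantom."""
    bg = img[bg_roi]
    insert = img[insert_roi]
    noise = bg.std()
    cnr = abs(insert.mean() - bg.mean()) / noise
    return noise, cnr

def nps_2d(uniform_rois, pixel_size_mm):
    """2-D noise-power spectrum from a stack of N uniform ROIs (N, ny, nx)."""
    rois = uniform_rois - uniform_rois.mean(axis=(1, 2), keepdims=True)
    n, ny, nx = rois.shape
    spectra = np.abs(np.fft.fft2(rois)) ** 2
    return spectra.mean(axis=0) * pixel_size_mm ** 2 / (ny * nx)

# usage with synthetic data standing in for FBP / iterative reconstructions
rng = np.random.default_rng(0)
img = 50 + 5 * rng.standard_normal((256, 256))
img[100:140, 100:140] += 30                      # fake contrast insert
print(noise_and_cnr(img, (np.s_[10:60], np.s_[10:60]), (np.s_[105:135], np.s_[105:135])))
rois = np.stack([50 + 5 * rng.standard_normal((64, 64)) for _ in range(20)])
print(nps_2d(rois, pixel_size_mm=0.5).shape)
```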
Inferring gene ontologies from pairwise similarity data
Kramer, Michael; Dutkowski, Janusz; Yu, Michael; Bafna, Vineet; Ideker, Trey
2014-01-01
Motivation: While the manually curated Gene Ontology (GO) is widely used, inferring a GO directly from -omics data is a compelling new problem. Recognizing that ontologies are a directed acyclic graph (DAG) of terms and hierarchical relations, algorithms are needed that: (i) analyze a full matrix of gene-gene pairwise similarities from -omics data; (ii) infer true hierarchical structure in these data rather than enforcing hierarchy as a computational artifact; and (iii) respect biological pleiotropy, by which a term in the hierarchy can relate to multiple higher level terms. Methods addressing these requirements are just beginning to emerge; none has been evaluated for GO inference. Methods: We consider two algorithms [Clique Extracted Ontology (CliXO), LocalFitness] that uniquely satisfy these requirements, compared with methods including standard clustering. CliXO is a new approach that finds maximal cliques in a network induced by progressive thresholding of a similarity matrix. We evaluate each method's ability to reconstruct the GO biological process ontology from a similarity matrix based on (a) semantic similarities for GO itself or (b) three -omics datasets for yeast. Results: For task (a) using semantic similarity, CliXO accurately reconstructs GO (>99% precision, recall) and outperforms other approaches (<20% precision, <20% recall). For task (b) using -omics data, CliXO outperforms other methods using two -omics datasets and achieves ~30% precision and recall using YeastNet v3, similar to an earlier approach (Network Extracted Ontology) and better than LocalFitness or standard clustering (20-25% precision, recall). Conclusion: This study provides an algorithmic foundation for building gene ontologies by capturing hierarchical and pleiotropic structure embedded in biomolecular data. Contact: tideker@ucsd.edu PMID:24932003
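The central CliXO step, collecting maximal cliques while progressively lowering a similarity threshold, can be sketched with networkx as below; the term-merging and DAG-construction bookkeeping of the full algorithm is omitted, and the similarity matrix is random rather than derived from -omics data.

```python
# Sketch of the core CliXO idea: sweep a similarity threshold from high to
# low, and at each level collect maximal cliques of the thresholded graph as
# candidate ontology terms. The full CliXO algorithm additionally merges and
# nests terms across thresholds; that bookkeeping is omitted here.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
genes = [f"g{i}" for i in range(8)]
sim = rng.random((8, 8)); sim = (sim + sim.T) / 2; np.fill_diagonal(sim, 1.0)

terms = []                        # (threshold, frozenset of genes)
for thr in np.arange(0.9, 0.3, -0.1):
    g = nx.Graph()
    g.add_nodes_from(genes)
    for i in range(len(genes)):
        for j in range(i + 1, len(genes)):
            if sim[i, j] >= thr:
                g.add_edge(genes[i], genes[j])
    for clique in nx.find_cliques(g):          # maximal cliques at this level
        if len(clique) >= 2:
            term = frozenset(clique)
            if term not in {t for _, t in terms}:
                terms.append((thr, term))

for thr, term in terms:
    print(f"{thr:.1f}", sorted(term))
```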
PI-line-based image reconstruction in helical cone-beam computed tomography with a variable pitch.
Zou, Yu; Pan, Xiaochuan; Xia, Dan; Wang, Ge
2005-08-01
Current applications of helical cone-beam computed tomography (CT) involve primarily a constant pitch, where the translating speed of the table and the rotation speed of the source-detector remain constant. However, situations do exist where it may be more desirable to use a helical scan with a variable translating speed of the table, leading to a variable pitch. One such application could arise in helical cone-beam CT fluoroscopy for the determination of vascular structures through real-time imaging of contrast bolus arrival. Most of the existing reconstruction algorithms have been developed only for helical cone-beam CT with constant pitch, including the backprojection-filtration (BPF) and filtered-backprojection (FBP) algorithms that we proposed previously. It is possible to generalize some of these algorithms to reconstruct images exactly for helical cone-beam CT with a variable pitch. In this work, we generalize our BPF and FBP algorithms to reconstruct images directly from data acquired in helical cone-beam CT with a variable pitch. We have also performed a preliminary numerical study to demonstrate and verify the generalization of the two algorithms. The results of the study confirm that our generalized BPF and FBP algorithms can yield exact reconstruction in helical cone-beam CT with a variable pitch. It should be pointed out that our generalized BPF algorithm is the only algorithm capable of exactly reconstructing region-of-interest images from data containing transverse truncations.
A Novel Image Compression Algorithm for High Resolution 3D Reconstruction
NASA Astrophysics Data System (ADS)
Siddeq, M. M.; Rodrigues, M. A.
2014-06-01
This research presents a novel algorithm to compress high-resolution images for accurate structured light 3D reconstruction. Structured light images contain a pattern of light and shadows projected on the surface of the object, which are captured by the sensor at very high resolutions. Our algorithm is concerned with compressing such images to a high degree with minimum loss without adversely affecting 3D reconstruction. The Compression Algorithm starts with a single level discrete wavelet transform (DWT) for decomposing an image into four sub-bands. The sub-band LL is transformed by DCT yielding a DC-matrix and an AC-matrix. The Minimize-Matrix-Size Algorithm is used to compress the AC-matrix while a DWT is applied again to the DC-matrix resulting in LL2, HL2, LH2 and HH2 sub-bands. The LL2 sub-band is transformed by DCT, while the Minimize-Matrix-Size Algorithm is applied to the other sub-bands. The proposed algorithm has been tested with images of different sizes within a 3D reconstruction scenario. The algorithm is demonstrated to be more effective than JPEG2000 and JPEG concerning higher compression rates with equivalent perceived quality and the ability to more accurately reconstruct the 3D models.
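A rough sketch of the front end of this pipeline is given below, assuming an 8x8 block size for the DCT stage and using PyWavelets and SciPy; the Minimize-Matrix-Size coder is not reproduced and is replaced by crude uniform quantization purely to complete the example.

```python
# Sketch of the front end of the described pipeline: one DWT level, then a
# block DCT of the LL band split into a DC-matrix (one DC coefficient per
# block) and an AC-matrix, followed by a second DWT on the DC-matrix.
# Requires PyWavelets and SciPy; the 8x8 block size is an assumption.
import numpy as np
import pywt
from scipy.fft import dctn

def blockwise_dct(band, bs=8):
    h, w = (band.shape[0] // bs) * bs, (band.shape[1] // bs) * bs
    band = band[:h, :w]
    dc = np.zeros((h // bs, w // bs))
    ac = []
    for i in range(0, h, bs):
        for j in range(0, w, bs):
            block = dctn(band[i:i + bs, j:j + bs], norm="ortho")
            dc[i // bs, j // bs] = block[0, 0]     # DC coefficient
            ac.append(block.ravel()[1:])           # remaining AC coefficients
    return dc, np.array(ac)

img = np.random.default_rng(0).random((256, 256)) * 255
LL, (LH, HL, HH) = pywt.dwt2(img, "db1")           # first DWT level
dc_matrix, ac_matrix = blockwise_dct(LL)

LL2, (LH2, HL2, HH2) = pywt.dwt2(dc_matrix, "db1") # second DWT on the DC-matrix
ac_quantized = np.round(ac_matrix / 16).astype(np.int16)  # crude stand-in coder
print(dc_matrix.shape, ac_matrix.shape, LL2.shape)
```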
Region-of-interest image reconstruction in circular cone-beam microCT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cho, Seungryong; Bian, Junguo; Pelizzari, Charles A.
2007-12-15
Cone-beam microcomputed tomography (microCT) is one of the most popular choices for small animal imaging which is becoming an important tool for studying animal models with transplanted diseases. Region-of-interest (ROI) imaging techniques in CT, which can reconstruct an ROI image from the projection data set of the ROI, can be used not only for reducing imaging-radiation exposure to the subject and scatters to the detector but also for potentially increasing spatial resolution of the reconstructed images. Increasing spatial resolution in microCT images can facilitate improved accuracy in many assessment tasks. A method proposed previously for increasing CT image spatial resolution entails the exploitation of the geometric magnification in cone-beam CT. Due to finite detector size, however, this method can lead to data truncation for a large geometric magnification. The Feldkamp-Davis-Kress (FDK) algorithm yields images with artifacts when truncated data are used, whereas the recently developed backprojection filtration (BPF) algorithm is capable of reconstructing ROI images without truncation artifacts from truncated cone-beam data. We apply the BPF algorithm to reconstructing ROI images from truncated data of three different objects acquired by our circular cone-beam microCT system. Reconstructed images by use of the FDK and BPF algorithms from both truncated and nontruncated cone-beam data are compared. The results of the experimental studies demonstrate that, from certain truncated data, the BPF algorithm can reconstruct ROI images with quality comparable to that reconstructed from nontruncated data. In contrast, the FDK algorithm yields ROI images with truncation artifacts. Therefore, an implication of the studies is that, when truncated data are acquired with a configuration of a large geometric magnification, the BPF algorithm can be used for effective enhancement of the spatial resolution of an ROI image.
NASA Astrophysics Data System (ADS)
Yang, Fuqiang; Zhang, Dinghua; Huang, Kuidong; Gao, Zongzhao; Yang, YaFei
2018-02-01
Based on the discrete algebraic reconstruction technique (DART), this study aims to address and test a new improved algorithm applied to incomplete projection data to generate a high quality reconstruction image by reducing the artifacts and noise in computed tomography. For the incomplete projections, an augmented Lagrangian based on compressed sensing is first used in the initial reconstruction for segmentation of the DART to get higher contrast graphics for boundary and non-boundary pixels. Then, the block matching 3D filtering operator was used to suppress the noise and to improve the gray distribution of the reconstructed image. Finally, simulation studies on the polychromatic spectrum were performed to test the performance of the new algorithm. Study results show a significant improvement in the signal-to-noise ratios (SNRs) and average gradients (AGs) of the images reconstructed from incomplete data. The SNRs and AGs of the new images reconstructed by DART-ALBM were on average 30%-40% and 10% higher than the images reconstructed by DART algorithms. Since the improved DART-ALBM algorithm has a better robustness to limited-view reconstruction, which not only makes the edge of the image clear but also makes the gray distribution of non-boundary pixels better, it has the potential to improve image quality from incomplete projections or sparse projections.
Convex Accelerated Maximum Entropy Reconstruction
Worley, Bradley
2016-01-01
Maximum entropy (MaxEnt) spectral reconstruction methods provide a powerful framework for spectral estimation of nonuniformly sampled datasets. Many methods exist within this framework, usually defined based on the magnitude of a Lagrange multiplier in the MaxEnt objective function. An algorithm is presented here that utilizes accelerated first-order convex optimization techniques to rapidly and reliably reconstruct nonuniformly sampled NMR datasets using the principle of maximum entropy. This algorithm – called CAMERA for Convex Accelerated Maximum Entropy Reconstruction Algorithm – is a new approach to spectral reconstruction that exhibits fast, tunable convergence in both constant-aim and constant-lambda modes. A high-performance, open source NMR data processing tool is described that implements CAMERA, and brief comparisons to existing reconstruction methods are made on several example spectra. PMID:26894476
NASA Astrophysics Data System (ADS)
Tang, Shaojie; Tang, Xiangyang
2016-03-01
Axial cone beam (CB) computed tomography (CT) reconstruction is still the most desirable in clinical applications. As the potential candidates with analytic form for the task, the backprojection-filtration (BPF) and the derivative backprojection filtered (DBPF) algorithms, in which Hilbert filtering is the common algorithmic feature, were originally derived for exact helical and axial reconstruction from CB and fan beam projection data, respectively. These two algorithms have been heuristically extended for axial CB reconstruction via adoption of virtual PI-line segments. Unfortunately, however, streak artifacts are induced along the Hilbert filtering direction, since these algorithms are no longer accurate on the virtual PI-line segments. We have proposed to cascade the extended BPF/DBPF algorithm with orthogonal butterfly filtering for image reconstruction (namely axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering), in which the orientation-specific artifacts caused by the post-BP Hilbert transform can be eliminated, at the possible expense of losing the BPF/DBPF's capability of dealing with projection data truncation. Our preliminary results have shown that this is not the case in practice. Hence, in this work, we carry out an algorithmic analysis and experimental study to investigate the performance of the axial CB-BPF/DBPF cascaded with adequately oriented orthogonal butterfly filtering for three-dimensional (3D) reconstruction in a region of interest (ROI).
Shaker, S B; Dirksen, A; Laursen, L C; Maltbaek, N; Christensen, L; Sander, U; Seersholm, N; Skovgaard, L T; Nielsen, L; Kok-Jensen, A
2004-07-01
To study the short-term reproducibility of lung density measurements by multi-slice computed tomography (CT) using three different radiation doses and three reconstruction algorithms. Twenty-five patients with smoker's emphysema and 25 patients with alpha1-antitrypsin deficiency underwent 3 scans at 2-week intervals. Low-dose protocol was applied, and images were reconstructed with bone, detail, and soft algorithms. Total lung volume (TLV), 15th percentile density (PD-15), and relative area at -910 Hounsfield units (RA-910) were obtained from the images using Pulmo-CMS software. Reproducibility of PD-15 and RA-910 and the influence of radiation dose, reconstruction algorithm, and type of emphysema were then analysed. The overall coefficient of variation of volume adjusted PD-15 for all combinations of radiation dose and reconstruction algorithm was 3.7%. The overall standard deviation of volume-adjusted RA-910 was 1.7% (corresponding to a coefficient of variation of 6.8%). Radiation dose, reconstruction algorithm, and type of emphysema had no significant influence on the reproducibility of PD-15 and RA-910. However, bone algorithm and very low radiation dose result in overestimation of the extent of emphysema. Lung density measurement by CT is a sensitive marker for quantitating both subtypes of emphysema. A CT-protocol with radiation dose down to 16 mAs and soft or detail reconstruction algorithm is recommended.
Resource Optimization Scheme for Multimedia-Enabled Wireless Mesh Networks
Ali, Amjad; Ahmed, Muhammad Ejaz; Piran, Md. Jalil; Suh, Doug Young
2014-01-01
Wireless mesh networking is a promising technology that can support numerous multimedia applications. Multimedia applications have stringent quality of service (QoS) requirements, i.e., bandwidth, delay, jitter, and packet loss ratio. Enabling such QoS-demanding applications over wireless mesh networks (WMNs) requires QoS-provisioning routing protocols, which lead to the network resource underutilization problem. Moreover, random topology deployment leaves some network resources unused. Therefore, resource optimization is one of the most critical design issues in multi-hop, multi-radio WMNs enabled with multimedia applications. Resource optimization has been studied extensively in the literature for wireless ad hoc and sensor networks, but existing studies have not considered the resource underutilization caused by QoS-provisioning routing and random topology deployment. Finding a QoS-provisioned path in wireless mesh networks is an NP-complete problem. In this paper, we propose a novel Integer Linear Programming (ILP) optimization model to reconstruct the optimal connected mesh backbone topology with a minimum number of links and relay nodes which satisfies the given end-to-end QoS demands for multimedia traffic and identification of extra resources, while maintaining redundancy. We further propose a polynomial-time heuristic algorithm called Link and Node Removal Considering Residual Capacity and Traffic Demands (LNR-RCTD). Simulation studies prove that our heuristic algorithm provides near-optimal results and saves about 20% of resources from being wasted by QoS-provisioning routing and random topology deployment. PMID:25111241
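The flavor of such a removal heuristic can be sketched as a greedy loop that drops a link only if the backbone stays connected and every traffic demand can still be routed with sufficient capacity. The capacities, demands, and removal order below are toy values, and the check is far simpler than the LNR-RCTD bookkeeping.

```python
# Toy greedy link-removal loop in the spirit of LNR-RCTD: remove a mesh link
# only if the backbone stays connected and each traffic demand can still be
# routed over a path whose minimum residual link capacity meets the demand.
# Capacities, demands and the removal order are illustrative placeholders.
import networkx as nx

g = nx.Graph()
edges = [("a", "b", 30), ("b", "c", 30), ("a", "c", 10),
         ("c", "d", 30), ("b", "d", 15), ("a", "d", 10)]
g.add_weighted_edges_from(edges, weight="capacity")
demands = [("a", "d", 20), ("b", "c", 10)]      # (src, dst, required bandwidth)

def demands_satisfied(graph):
    for src, dst, need in demands:
        ok = False
        for path in nx.all_simple_paths(graph, src, dst):
            caps = [graph[u][v]["capacity"] for u, v in zip(path, path[1:])]
            if min(caps) >= need:
                ok = True
                break
        if not ok:
            return False
    return True

# try removing low-capacity links first
for u, v, data in sorted(g.edges(data=True), key=lambda e: e[2]["capacity"]):
    g.remove_edge(u, v)
    if not (nx.is_connected(g) and demands_satisfied(g)):
        g.add_edge(u, v, **data)                # removal breaks QoS: put it back
print(sorted(g.edges()))
```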
Masoli, Stefano; Rizza, Martina F; Sgritta, Martina; Van Geit, Werner; Schürmann, Felix; D'Angelo, Egidio
2017-01-01
In realistic neuronal modeling, once the ionic channel complement has been defined, the maximum ionic conductance (Gi-max) values need to be tuned in order to match the firing pattern revealed by electrophysiological recordings. Recently, selection/mutation genetic algorithms have been proposed to efficiently and automatically tune these parameters. Nonetheless, since similar firing patterns can be achieved through different combinations of Gi-max values, it is not clear how well these algorithms approximate the corresponding properties of real cells. Here we have evaluated the issue by exploiting a unique opportunity offered by the cerebellar granule cell (GrC), which is electrotonically compact and has therefore allowed the direct experimental measurement of ionic currents. Previous models were constructed using empirical tuning of Gi-max values to match the original data set. Here, by using repetitive discharge patterns as a template, the optimization procedure yielded models that closely approximated the experimental Gi-max values. These models, in addition to repetitive firing, captured additional features, including inward rectification, near-threshold oscillations, and resonance, which were not used as features. Thus, parameter optimization using genetic algorithms provided an efficient modeling strategy for reconstructing the biophysical properties of neurons and for the subsequent reconstruction of large-scale neuronal network models.
Exact BPF and FBP algorithms for nonstandard saddle curves.
Yu, Hengyong; Zhao, Shiying; Ye, Yangbo; Wang, Ge
2005-11-01
A hot topic in cone-beam CT research is exact cone-beam reconstruction from a general scanning trajectory. Particularly, a nonstandard saddle curve attracts attention, as this construct allows the continuous periodic scanning of a volume-of-interest (VOI). Here we evaluate two algorithms for reconstruction from data collected along a nonstandard saddle curve, which are in the filtered backprojection (FBP) and backprojection filtration (BPF) formats, respectively. Both the algorithms are implemented in a chord-based coordinate system. Then, a rebinning procedure is utilized to transform the reconstructed results into the natural coordinate system. The simulation results demonstrate that the FBP algorithm produces better image quality than the BPF algorithm, while both the algorithms exhibit similar noise characteristics.
Use of the Hotelling observer to optimize image reconstruction in digital breast tomosynthesis
Sánchez, Adrian A.; Sidky, Emil Y.; Pan, Xiaochuan
2015-01-01
We propose an implementation of the Hotelling observer that can be applied to the optimization of linear image reconstruction algorithms in digital breast tomosynthesis. The method is based on considering information within a specific region of interest, and it is applied to the optimization of algorithms for detectability of microcalcifications. Several linear algorithms are considered: simple back-projection, filtered back-projection, back-projection filtration, and Λ-tomography. The optimized algorithms are then evaluated through the reconstruction of phantom data. The method appears robust across algorithms and parameters and leads to the generation of algorithm implementations which subjectively appear optimized for the task of interest. PMID:26702408
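A bare-bones version of the region-of-interest Hotelling computation is sketched below: from samples of signal-present and signal-absent ROIs one forms the mean difference and pooled covariance and evaluates the detectability SNR^2 = dg^T K^{-1} dg. The synthetic ROIs stand in for reconstructions produced by the algorithm and parameters under comparison.

```python
# Sketch of a ROI-based Hotelling observer figure of merit: from samples of
# signal-present and signal-absent reconstructed ROIs, form the mean
# difference and pooled covariance and compute the detectability
# SNR^2 = dg^T K^{-1} dg. The sample images below are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_samples, roi = 500, 8                       # 8x8 pixel region of interest
signal = np.zeros((roi, roi)); signal[3:5, 3:5] = 0.5   # small "calcification"

absent = rng.standard_normal((n_samples, roi * roi))
present = absent + signal.ravel()             # same noise realizations + signal

dg = present.mean(axis=0) - absent.mean(axis=0)
K = 0.5 * (np.cov(present, rowvar=False) + np.cov(absent, rowvar=False))
K += 1e-6 * np.eye(roi * roi)                 # regularize the covariance estimate
template = np.linalg.solve(K, dg)             # Hotelling template
snr2 = dg @ template
print("Hotelling detectability SNR^2:", snr2)
```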
NASA Technical Reports Server (NTRS)
Whitmore, S. A.
1985-01-01
The dynamics model and data sources used to perform air-data reconstruction are discussed, as well as the Kalman filter. The need for adaptive determination of the noise statistics of the process is indicated. The filter innovations are presented as a means of developing the adaptive criterion, which is based on the true mean and covariance of the filter innovations. A method for the numerical approximation of the mean and covariance of the filter innovations is presented. The algorithm as developed is applied to air-data reconstruction for the space shuttle, and data obtained from the third landing are presented. To verify the performance of the adaptive algorithm, the reconstruction is also performed using a constant covariance Kalman filter. The results of the reconstructions are compared, and the adaptive algorithm exhibits better performance.
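The innovations-based adaptation can be illustrated with a scalar toy filter, shown below: a moving window of innovations provides a sample innovation covariance from which the measurement-noise covariance R is re-estimated at each step. The random-walk model and numerical values are illustrative and are not the shuttle air-data formulation.

```python
# Small scalar illustration of the innovations-based adaptation idea: run a
# Kalman filter, keep a moving window of innovations, and re-estimate the
# measurement-noise covariance R from the sample innovation covariance.
import numpy as np
from collections import deque

rng = np.random.default_rng(0)
true_R = 4.0                       # actual (unknown) measurement noise variance
Q = 0.01                           # process noise variance
x_true, n_steps = 0.0, 500

x_est, P, R = 0.0, 1.0, 1.0        # filter starts with a wrong R
window = deque(maxlen=50)          # recent innovations
for _ in range(n_steps):
    x_true += rng.normal(0.0, np.sqrt(Q))          # random-walk truth
    z = x_true + rng.normal(0.0, np.sqrt(true_R))  # noisy measurement

    # predict (identity dynamics)
    P = P + Q
    # innovation and its running sample covariance
    nu = z - x_est
    window.append(nu)
    if len(window) == window.maxlen:
        C_hat = np.mean(np.array(window) ** 2)     # sample innovation covariance
        R = max(C_hat - P, 1e-3)                   # adapt R: C ~= H P H' + R
    # update
    S = P + R
    K = P / S
    x_est = x_est + K * nu
    P = (1 - K) * P

print("adapted R:", round(R, 2), "(true value:", true_R, ")")
```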
Jiang, Xiaolei; Zhang, Li; Zhang, Ran; Yin, Hongxia; Wang, Zhenchang
2015-01-01
X-ray grating interferometry offers a novel framework for the study of weakly absorbing samples. Three kinds of information, that is, the attenuation, differential phase contrast (DPC), and dark-field images, can be obtained after a single scanning, providing additional and complementary information to the conventional attenuation image. Phase shifts of X-rays are measured by the DPC method; hence, DPC-CT reconstructs refraction indexes rather than attenuation coefficients. In this work, we propose an explicit filtering based low-dose differential phase reconstruction algorithm, which enables reconstruction from reduced scanning without artifacts. The algorithm adopts a differential algebraic reconstruction technique (DART) with the explicit filtering based sparse regularization rather than the commonly used total variation (TV) method. Both the numerical simulation and the biological sample experiment demonstrate the feasibility of the proposed algorithm.
Reconstruction of network topology using status-time-series data
NASA Astrophysics Data System (ADS)
Pandey, Pradumn Kumar; Badarla, Venkataramana
2018-01-01
Uncovering the heterogeneous connection pattern of a networked system from the available status-time-series (STS) data of a dynamical process on the network is of great interest in network science and known as a reverse engineering problem. Dynamical processes on a network are affected by the structure of the network. The dependency between the diffusion dynamics and structure of the network can be utilized to retrieve the connection pattern from the diffusion data. Information of the network structure can help to devise the control of dynamics on the network. In this paper, we consider the problem of network reconstruction from the available status-time-series (STS) data using matrix analysis. The proposed method of network reconstruction from the STS data is tested successfully under susceptible-infected-susceptible (SIS) diffusion dynamics on real-world and computer-generated benchmark networks. High accuracy and efficiency of the proposed reconstruction procedure from the status-time-series data define the novelty of the method. Our proposed method outperforms compressed sensing theory (CST) based method of network reconstruction using STS data. Further, the same procedure of network reconstruction is applied to the weighted networks. The ordering of the edges in the weighted networks is identified with high accuracy.
NASA Astrophysics Data System (ADS)
Ammendola, R.; Biagioni, A.; Fiorini, M.; Frezza, O.; Lonardo, A.; Lamanna, G.; Lo Cicero, F.; Martinelli, M.; Neri, I.; Paolucci, P. S.; Pastorelli, E.; Piandani, R.; Pontisso, L.; Rossetti, D.; Simula, F.; Sozzi, M.; Tosoratto, L.; Vicini, P.
2016-03-01
A GPU-based low level (L0) trigger is currently integrated in the experimental setup of the RICH detector of the NA62 experiment to assess the feasibility of building more refined physics-related trigger primitives and thus improve the trigger discriminating power. To ensure the real-time operation of the system, a dedicated data transport mechanism has been implemented: an FPGA-based Network Interface Card (NaNet-10) receives data from the detectors and forwards them with low, predictable latency to the memory of the GPU performing the trigger algorithms. Results of the ring-shaped hit pattern reconstruction are reported and discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bai, T; UT Southwestern Medical Center, Dallas, TX; Yan, H
2014-06-15
Purpose: To develop a 3D dictionary learning based statistical reconstruction algorithm on graphic processing units (GPU), to improve the quality of low-dose cone beam CT (CBCT) imaging with high efficiency. Methods: A 3D dictionary containing 256 small volumes (atoms) of 3x3x3 voxels was trained from a high quality volume image. During reconstruction, we utilized a Cholesky decomposition based orthogonal matching pursuit algorithm to find a sparse representation on this dictionary basis for each patch in the reconstructed image, in order to regularize the image quality. To accelerate the time-consuming sparse coding in the 3D case, we implemented our algorithm in a parallel fashion by taking advantage of the tremendous computational power of the GPU. Evaluations are performed based on a head-neck patient case. FDK reconstruction with the full dataset of 364 projections is used as the reference. We compared the proposed 3D dictionary learning based method with a tight frame (TF) based one using a subset of 121 projections. The image qualities under different resolutions in the z-direction, with or without statistical weighting, are also studied. Results: Compared to the TF-based CBCT reconstruction, our experiments indicated that 3D dictionary learning based CBCT reconstruction is able to recover finer structures, to remove more streaking artifacts, and is less susceptible to blocky artifacts. It is also observed that the statistical reconstruction approach is sensitive to inconsistency between the forward and backward projection operations in parallel computing. Using a high spatial resolution along the z direction helps improve the algorithm's robustness. Conclusion: The 3D dictionary learning based CBCT reconstruction algorithm is able to sense structural information while suppressing noise, and hence to achieve high quality reconstruction. The GPU realization of the whole algorithm offers a significant efficiency enhancement, making this algorithm more feasible for potential clinical application. A high z-resolution is preferred to stabilize statistical iterative reconstruction. This work was supported in part by NIH (1R01CA154747-01), NSFC (No. 61172163), the Research Fund for the Doctoral Program of Higher Education of China (No. 20110201110011), and the China Scholarship Council.
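For illustration only, the sketch below mimics the patch-based regularization step with scikit-learn: a 256-atom dictionary of 3x3x3 patches is learned from a reference volume and patches of the current reconstruction are sparse-coded with OMP. It is not the authors' GPU/Cholesky implementation, and the volumes and parameter values are placeholders.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

def extract_patches_3d(volume, size=3, stride=3):
    """Collect non-overlapping size^3 patches as rows of a 2-D array."""
    zs, ys, xs = volume.shape
    patches = []
    for z in range(0, zs - size + 1, stride):
        for y in range(0, ys - size + 1, stride):
            for x in range(0, xs - size + 1, stride):
                patches.append(volume[z:z+size, y:y+size, x:x+size].ravel())
    return np.asarray(patches)

# Train a 256-atom dictionary from a high-quality reference volume (placeholder data).
reference = np.random.rand(30, 30, 30)
train_patches = extract_patches_3d(reference)
dico = MiniBatchDictionaryLearning(n_components=256, batch_size=64, random_state=0)
dictionary = dico.fit(train_patches).components_          # shape (256, 27)

# Sparse-code patches of the current reconstruction with OMP (few atoms per patch).
current = np.random.rand(30, 30, 30)                       # placeholder reconstruction
codes = sparse_encode(extract_patches_3d(current), dictionary,
                      algorithm='omp', n_nonzero_coefs=4)
denoised_patches = codes @ dictionary                       # regularized patch estimates
```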
NASA Astrophysics Data System (ADS)
Choi, Doo-Won; Jeon, Min-Gyu; Cho, Gyeong-Rae; Kamimoto, Takahiro; Deguchi, Yoshihiro; Doh, Deog-Hee
2016-02-01
Performance improvement was attained in data reconstructions of 2-dimensional tunable diode laser absorption spectroscopy (TDLAS). A Multiplicative Algebraic Reconstruction Technique (MART) algorithm was adopted for data reconstruction. The data obtained in an experiment for the measurement of temperature and concentration fields of gas flows were used. The measurement theory is based upon the Beer-Lambert law, and the measurement system consists of a tunable laser, collimators, detectors, and an analyzer. Methane was used as the fuel for combustion with air in a Bunsen-type burner. The data used for the reconstruction are the optical signals of 8 laser beams passed through a cross-section of the methane flame. The performance of the MART algorithm in data reconstruction was validated and compared with that obtained by an Algebraic Reconstruction Technique (ART) algorithm.
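A minimal sketch of the standard multiplicative ART update assumed above, in which each cell value is scaled by the ratio of measured to estimated line integrals raised to the path-length weight; the geometry matrix, relaxation factor, and iteration count are illustrative, not the authors' settings.

```python
import numpy as np

def mart(A, p, n_iter=50, relax=1.0, x0=None):
    """A: (n_rays, n_cells) path-length matrix, p: measured line integrals."""
    n_rays, n_cells = A.shape
    x = np.ones(n_cells) if x0 is None else x0.astype(float).copy()
    for _ in range(n_iter):
        for i in range(n_rays):
            est = A[i] @ x
            if est <= 0 or p[i] <= 0:
                continue                                   # skip rays with no signal
            x *= (p[i] / est) ** (relax * A[i])            # multiplicative per-ray update
    return x
```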
A multiresolution approach to iterative reconstruction algorithms in X-ray computed tomography.
De Witte, Yoni; Vlassenbroeck, Jelle; Van Hoorebeke, Luc
2010-09-01
In computed tomography, the application of iterative reconstruction methods in practical situations is impeded by their high computational demands. Especially in high resolution X-ray computed tomography, where reconstruction volumes contain a high number of volume elements (several giga voxels), this computational burden prevents their actual breakthrough. Besides the large amount of calculations, iterative algorithms require the entire volume to be kept in memory during reconstruction, which quickly becomes cumbersome for large data sets. To overcome this obstacle, we present a novel multiresolution reconstruction, which greatly reduces the required amount of memory without significantly affecting the reconstructed image quality. It is shown that, combined with an efficient implementation on a graphical processing unit, the multiresolution approach enables the application of iterative algorithms in the reconstruction of large volumes at an acceptable speed using only limited resources.
NASA Astrophysics Data System (ADS)
Closas, Pau; Guillamon, Antoni
2017-12-01
This paper deals with the problem of inferring the signals and parameters that cause neural activity to occur. While the ultimate challenge is to unveil the brain's connectivity, here we focus on a microscopic view of the problem, where single neurons (potentially connected to a network of peers) are at the core of our study. The only observations available are noisy, sampled voltage traces obtained from intracellular recordings. We design algorithms and inference methods using the tools provided by stochastic filtering that allow a probabilistic interpretation and treatment of the problem. Using particle filtering, we are able to reconstruct traces of voltages and estimate the time course of auxiliary variables. By extending the algorithm through particle MCMC (PMCMC) methodology, we are able to estimate hidden physiological parameters as well, such as intrinsic conductances or reversal potentials. Last, but not least, the method is applied to estimate synaptic conductances arriving at a target cell, thus reconstructing the synaptic excitatory/inhibitory input traces. Notably, the performance of these estimates achieves the theoretical lower bounds even in spiking regimes.
NASA Astrophysics Data System (ADS)
Chvetsov, Alevei V.; Sandison, George A.; Schwartz, Jeffrey L.; Rengan, Ramesh
2015-11-01
The main objective of this article is to improve the stability of reconstruction algorithms for estimation of radiobiological parameters using serial tumor imaging data acquired during radiation therapy. Serial images of tumor response to radiation therapy represent a complex summation of several exponential processes, such as treatment-induced cell inactivation, tumor growth rates, and the rate of cell loss. Accurate assessment of treatment response requires separation of these processes because they define the radiobiological determinants of treatment response and, correspondingly, tumor control probability. However, the estimation of radiobiological parameters from imaging data can be considered an inverse ill-posed problem, because a sum of several exponentials produces a Fredholm integral equation of the first kind, which is ill posed. Therefore, the stability of reconstruction of radiobiological parameters presents a problem even for the simplest models of tumor response. To study the stability of the parameter reconstruction problem, we used a set of serial CT imaging data for head and neck cancer and the simplest case of a two-level cell population model of tumor response. Inverse reconstruction was performed using a simulated annealing algorithm to minimize a least-squares objective function. Results show that the reconstructed values of cell surviving fractions and cell doubling time exhibit significant nonphysical fluctuations if no stabilization algorithms are applied. However, after applying a stabilization algorithm based on variational regularization, the reconstruction produces statistical distributions for survival fractions and doubling time that are comparable to published in vitro data. This algorithm is an advance over our previous work, where only cell surviving fractions were reconstructed. We conclude that variational regularization allows for an increase in the number of free parameters in our model, which enables development of more advanced parameter reconstruction algorithms.
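As a hedged illustration of the inverse-fitting step, the sketch below fits a simple two-compartment (surviving plus clearing) volume model to serial volume measurements by simulated annealing with SciPy's dual_annealing. The model form, data, bounds, and parameter names are assumptions, not the authors' two-level cell population model, and no regularization term is included.

```python
import numpy as np
from scipy.optimize import dual_annealing

t_days = np.array([0, 7, 14, 21, 28, 35], dtype=float)      # imaging times (placeholder)
v_obs = np.array([1.0, 0.8, 0.55, 0.4, 0.3, 0.27])           # relative tumor volume (placeholder)

def model(theta, t):
    s, td, tc = theta                 # surviving fraction, doubling time, clearance time
    viable = s * 2.0 ** (t / td)              # regrowth of surviving cells
    doomed = (1.0 - s) * np.exp(-t / tc)      # clearance of inactivated cells
    return viable + doomed

def objective(theta):
    return np.sum((model(theta, t_days) - v_obs) ** 2)       # least-squares misfit

bounds = [(1e-3, 1.0), (1.0, 100.0), (1.0, 100.0)]
result = dual_annealing(objective, bounds, seed=0)
print(result.x)   # reconstructed (surviving fraction, doubling time, clearance time)
```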
Ramani, Sathish; Liu, Zhihao; Rosen, Jeffrey; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.
2012-01-01
Regularized iterative reconstruction algorithms for imaging inverse problems require selection of appropriate regularization parameter values. We focus on the challenging problem of tuning regularization parameters for nonlinear algorithms for the case of additive (possibly complex) Gaussian noise. Generalized cross-validation (GCV) and (weighted) mean-squared error (MSE) approaches (based on Stein's Unbiased Risk Estimate, SURE) need the Jacobian matrix of the nonlinear reconstruction operator (representative of the iterative algorithm) with respect to the data. We derive the desired Jacobian matrix for two types of nonlinear iterative algorithms: a fast variant of the standard iterative reweighted least-squares method and the contemporary split-Bregman algorithm, both of which can accommodate a wide variety of analysis- and synthesis-type regularizers. The proposed approach iteratively computes two weighted SURE-type measures, Predicted-SURE and Projected-SURE (which require knowledge of the noise variance σ²), and GCV (which does not need σ²) for these algorithms. We apply the methods to image restoration and to magnetic resonance image (MRI) reconstruction using total variation (TV) and an analysis-type ℓ1-regularization. We demonstrate through simulations and experiments with real data that minimizing Predicted-SURE and Projected-SURE consistently leads to near-MSE-optimal reconstructions. We also observed that minimizing GCV yields reconstruction results that are near-MSE-optimal for image restoration and slightly sub-optimal for MRI. Theoretical derivations in this work related to Jacobian matrix evaluations can be extended, in principle, to other types of regularizers and reconstruction algorithms. PMID:22531764
Huang, Hsuan-Ming; Hsiao, Ing-Tsung
2017-01-01
Over the past decade, image quality in low-dose computed tomography has been greatly improved by various compressive sensing- (CS-) based reconstruction methods. However, these methods have some disadvantages, including high computational cost and slow convergence rate. Many different speed-up techniques for CS-based reconstruction algorithms have been developed. The purpose of this paper is to propose a fast reconstruction framework that combines a CS-based reconstruction algorithm with several speed-up techniques. First, total difference minimization (TDM) was implemented using soft-threshold filtering (STF). Second, we combined TDM-STF with the ordered subsets transmission (OSTR) algorithm to accelerate the convergence. To further speed up the convergence of the proposed method, we applied the power factor and the fast iterative shrinkage thresholding algorithm to OSTR and TDM-STF, respectively. Results obtained from simulation and phantom studies showed that many speed-up techniques can be combined to greatly improve the convergence speed of a CS-based reconstruction algorithm. More importantly, the increase in computation time (≤10%) was minor compared to the acceleration provided by the proposed method. In this paper, we have presented a CS-based reconstruction framework that combines several acceleration techniques. Both simulation and phantom studies provide evidence that the proposed method has the potential to satisfy the requirement of fast image reconstruction in practical CT.
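A minimal sketch of two of the generic building blocks named above, written in their standard textbook forms rather than the paper's full OSTR-based pipeline: the soft-thresholding operator used in shrinkage filtering and the FISTA extrapolation step used to accelerate convergence.

```python
import numpy as np

def soft_threshold(z, lam):
    """Classic soft-thresholding operator used in shrinkage-based filtering."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def fista_momentum(x_new, x_old, t_old):
    """One FISTA extrapolation step: returns the look-ahead point and the updated t."""
    t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t_old ** 2))
    y = x_new + ((t_old - 1.0) / t_new) * (x_new - x_old)
    return y, t_new
```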
Comparison of SeaWinds Backscatter Imaging Algorithms
Long, David G.
2017-01-01
This paper compares the performance and tradeoffs of various backscatter imaging algorithms for the SeaWinds scatterometer when multiple passes over a target are available. Reconstruction methods are compared with conventional gridding algorithms. In particular, the performance and tradeoffs of conventional ‘drop in the bucket’ (DIB) gridding at the intrinsic sensor resolution are compared to high-spatial-resolution imaging algorithms such as fine-resolution DIB (fDIB) and the scatterometer image reconstruction (SIR) algorithm that generate enhanced-resolution backscatter images. Various options for each algorithm are explored, including computation in both linear and dB space. The effects of sampling density and reconstruction quality versus time are explored. Both simulated and actual data results are considered. The results demonstrate the effectiveness of high-resolution reconstruction using SIR as well as its limitations and the limitations of DIB and fDIB. PMID:28828143
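A hedged sketch of drop-in-the-bucket gridding in its generic form: every measurement whose footprint center falls in an output cell is accumulated and averaged, optionally in linear power rather than dB. Grid edges, variable names, and the averaging choice are illustrative and not SeaWinds-specific.

```python
import numpy as np

def dib_grid(lat, lon, sigma0_db, lat_edges, lon_edges, average_in_db=True):
    """Average sigma-0 measurements into lat/lon cells (drop-in-the-bucket gridding)."""
    values = sigma0_db if average_in_db else 10.0 ** (sigma0_db / 10.0)   # dB vs linear
    acc = np.zeros((len(lat_edges) - 1, len(lon_edges) - 1))
    cnt = np.zeros_like(acc)
    i = np.digitize(lat, lat_edges) - 1
    j = np.digitize(lon, lon_edges) - 1
    ok = (i >= 0) & (i < acc.shape[0]) & (j >= 0) & (j < acc.shape[1])
    np.add.at(acc, (i[ok], j[ok]), values[ok])
    np.add.at(cnt, (i[ok], j[ok]), 1)
    with np.errstate(invalid='ignore'):
        img = acc / cnt                                   # empty cells become NaN
        return img if average_in_db else 10.0 * np.log10(img)
```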
A modified sparse reconstruction method for three-dimensional synthetic aperture radar image
NASA Astrophysics Data System (ADS)
Zhang, Ziqiang; Ji, Kefeng; Song, Haibo; Zou, Huanxin
2018-03-01
There is increasing interest in three-dimensional Synthetic Aperture Radar (3-D SAR) imaging from observed sparse scattering data. However, the existing 3-D sparse imaging method requires long computing times and large storage capacity. In this paper, we propose a modified method for sparse 3-D SAR imaging. The method processes the collection of noisy SAR measurements, usually collected over nonlinear flight paths, and outputs 3-D SAR imagery. First, the 3-D sparse reconstruction problem is transformed into a series of 2-D slice reconstruction problems by range compression. Then the slices are reconstructed by a modified SL0 (smoothed l0 norm) reconstruction algorithm. The improved algorithm uses a hyperbolic tangent function instead of the Gaussian function to approximate the l0 norm and uses the Newton direction instead of the steepest descent direction, which speeds up the convergence of the SL0 algorithm. Finally, numerical simulation results are given to demonstrate the effectiveness of the proposed algorithm. It is shown that our method, compared with the existing 3-D sparse imaging method, performs better in both reconstruction quality and reconstruction time.
GREIT: a unified approach to 2D linear EIT reconstruction of lung images.
Adler, Andy; Arnold, John H; Bayford, Richard; Borsic, Andrea; Brown, Brian; Dixon, Paul; Faes, Theo J C; Frerichs, Inéz; Gagnon, Hervé; Gärber, Yvo; Grychtol, Bartłomiej; Hahn, Günter; Lionheart, William R B; Malik, Anjum; Patterson, Robert P; Stocks, Janet; Tizzard, Andrew; Weiler, Norbert; Wolf, Gerhard K
2009-06-01
Electrical impedance tomography (EIT) is an attractive method for clinically monitoring patients during mechanical ventilation, because it can provide a non-invasive continuous image of pulmonary impedance which indicates the distribution of ventilation. However, most clinical and physiological research in lung EIT is done using older and proprietary algorithms; this is an obstacle to interpretation of EIT images because the reconstructed images are not well characterized. To address this issue, we develop a consensus linear reconstruction algorithm for lung EIT, called GREIT (Graz consensus Reconstruction algorithm for EIT). This paper describes the unified approach to linear image reconstruction developed for GREIT. The framework for the linear reconstruction algorithm consists of (1) detailed finite element models of a representative adult and neonatal thorax, (2) consensus on the performance figures of merit for EIT image reconstruction and (3) a systematic approach to optimize a linear reconstruction matrix to desired performance measures. Consensus figures of merit, in order of importance, are (a) uniform amplitude response, (b) small and uniform position error, (c) small ringing artefacts, (d) uniform resolution, (e) limited shape deformation and (f) high resolution. Such figures of merit must be attained while maintaining small noise amplification and small sensitivity to electrode and boundary movement. This approach represents the consensus of a large and representative group of experts in EIT algorithm design and clinical applications for pulmonary monitoring. All software and data to implement and test the algorithm have been made available under an open source license which allows free research and commercial use.
Reconstruction of the experimentally supported human protein interactome: what can we learn?
2013-01-01
Background Understanding the topology and dynamics of the human protein-protein interaction (PPI) network will significantly contribute to biomedical research, therefore its systematic reconstruction is required. Several meta-databases integrate source PPI datasets, but the protein node sets of their networks vary depending on the PPI data combined. Due to this inherent heterogeneity, the way in which the human PPI network expands via multiple dataset integration has not been comprehensively analyzed. We aim at assembling the human interactome in a global structured way and exploring it to gain insights of biological relevance. Results First, we defined the UniProtKB manually reviewed human “complete” proteome as the reference protein-node set and then we mined five major source PPI datasets for direct PPIs exclusively between the reference proteins. We updated the protein and publication identifiers and normalized all PPIs to the UniProt identifier level. The reconstructed interactome covers approximately 60% of the human proteome and has a scale-free structure. No apparent differentiating gene functional classification characteristics were identified for the unrepresented proteins. The source dataset integration augments the network mainly in PPIs. Polyubiquitin emerged as the highest-degree node, but the inclusion of most of its identified PPIs may be reconsidered. The high number (>300) of connections of the subsequent fifteen proteins correlates well with their essential biological role. According to the power-law network structure, the unrepresented proteins should mainly have up to four connections with equally poorly-connected interactors. Conclusions Reconstructing the human interactome based on the a priori definition of the protein nodes enabled us to identify the currently included part of the human “complete” proteome, and discuss the role of the proteins within the network topology with respect to their function. As the network expansion has to comply with the scale-free theory, we suggest that the core of the human interactome has essentially emerged. Thus, it could be employed in systems biology and biomedical research, despite the considerable number of currently unrepresented proteins. The latter are probably involved in specialized physiological conditions, justifying the scarcity of related PPI information, and their identification can assist in designing relevant functional experiments and targeted text mining algorithms. PMID:24088582
A combined reconstruction-classification method for diffuse optical tomography.
Hiltunen, P; Prince, S J D; Arridge, S
2009-11-07
We present a combined classification and reconstruction algorithm for diffuse optical tomography (DOT). DOT is a nonlinear ill-posed inverse problem. Therefore, some regularization is needed. We present a mixture of Gaussians prior, which regularizes the DOT reconstruction step. During each iteration, the parameters of a mixture model are estimated. These associate each reconstructed pixel with one of several classes based on the current estimate of the optical parameters. This classification is exploited to form a new prior distribution to regularize the reconstruction step and update the optical parameters. The algorithm can be described as an iteration between an optimization scheme with zeroth-order variable mean and variance Tikhonov regularization and an expectation-maximization scheme for estimation of the model parameters. We describe the algorithm in a general Bayesian framework. Results from simulated test cases and phantom measurements show that the algorithm enhances the contrast of the reconstructed images with good spatial accuracy. The probabilistic classifications of each image contain only a few misclassified pixels.
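A hedged sketch of the classification step only: fit a mixture of Gaussians to the current optical-parameter estimates and build per-pixel class-based prior mean and variance images. scikit-learn's GaussianMixture stands in for the EM step described above; the class count and input data are placeholders.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def classification_prior(mu_a, n_classes=3):
    """mu_a: current absorption estimates per pixel (1-D array)."""
    gmm = GaussianMixture(n_components=n_classes, random_state=0)
    labels = gmm.fit_predict(mu_a.reshape(-1, 1))
    prior_mean = gmm.means_.ravel()[labels]        # per-pixel class mean
    prior_var = gmm.covariances_.ravel()[labels]   # per-pixel class variance
    return prior_mean, prior_var

# The (mean, variance) images would then enter a variable-mean Tikhonov penalty,
# e.g. sum((x - prior_mean)**2 / prior_var), in the next reconstruction step.
```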
Jahans-Price, Thomas; Greenwood, Michael P.; Greenwood, Mingkwan; Hoe, See-Ziau; Konopacka, Agnieszka
2017-01-01
The supraoptic nucleus (SON) is a group of neurons in the hypothalamus responsible for the synthesis and secretion of the peptide hormones vasopressin and oxytocin. Following physiological cues, such as dehydration, salt-loading and lactation, the SON undergoes a function-related plasticity that we have previously described in the rat at the transcriptome level. Using the unsupervised graphical lasso (Glasso) algorithm, we reconstructed a putative network from 500 plastic SON genes in which genes are the nodes and the edges are the inferred interactions. The most active nodal gene identified within the network was Caprin2. Caprin2 encodes an RNA-binding protein that we have previously shown to be vital for the functioning of osmoregulatory neuroendocrine neurons in the SON of the rat hypothalamus. To test the validity of the Glasso network, we either overexpressed or knocked down Caprin2 transcripts in differentiated rat pheochromocytoma PC12 cells and showed that these manipulations had significant opposite effects on the levels of putative target mRNAs. These studies suggest that the predictive power of the Glasso algorithm within an in vivo system is accurate, and identifies biological targets that may be important to the functional plasticity of the SON. PMID:29279858
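A minimal sketch of graphical-lasso network inference with scikit-learn, using placeholder expression data and an arbitrary sparsity level; the nonzero off-diagonal entries of the estimated precision matrix are read as putative gene-gene interactions. This is not the study's actual 500-gene analysis.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

expr = np.random.randn(80, 30)                  # samples x genes (placeholder data)
model = GraphicalLasso(alpha=0.3).fit(expr)
prec = model.precision_                         # sparse inverse covariance matrix
edges = (np.abs(prec) > 1e-6) & ~np.eye(prec.shape[0], dtype=bool)
print(edges.sum() // 2, "inferred gene-gene interactions")
```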
Reconstruction of a digital core containing clay minerals based on a clustering algorithm.
He, Yanlong; Pu, Chunsheng; Jing, Cheng; Gu, Xiaoyu; Chen, Qingdong; Liu, Hongzhi; Khan, Nasir; Dong, Qiaoling
2017-10-01
It is difficult to obtain core samples and the information needed for digital core reconstruction of mature sandstone reservoirs around the world, especially for unconsolidated sandstone reservoirs. Although two-dimensional data-based reconstruction methods are well suited to simulating the microstructure of sandstone reservoirs, the reconstruction and division of clay minerals play a vital role in digital core reconstruction, and accurately reconstructing the various clay minerals in digital cores remains a research challenge. In the present work, the content of clay minerals was considered on the basis of two-dimensional information about the reservoir. After application of the hybrid method, and in comparison with a model reconstructed by the process-based method, the output was a digital core containing clay clusters without labels for the clusters' number, size, and texture. The statistics and geometry of the reconstructed model were similar to those of the reference model. In addition, the Hoshen-Kopelman algorithm was used to label the connected, unclassified clay clusters in the initial model, and the number and size of the clay clusters were recorded. The K-means clustering algorithm was then applied to divide the labeled, large connected clusters into smaller clusters on the basis of differences in cluster characteristics. According to the clay minerals' characteristics, such as types, textures, and distributions, the digital core containing clay minerals was reconstructed by means of the clustering algorithm and a judgment of the clay clusters' structure. The distributions and textures of the clay minerals in the digital core were reasonable. The clustering algorithm improved the digital core reconstruction and provides an alternative method for simulating different clay minerals in digital cores.
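A hedged sketch of the clustering stage: label connected clay voxels (SciPy's connected-component labelling stands in for the Hoshen-Kopelman algorithm) and split overly large clusters with K-means on voxel coordinates. The binary clay volume, size threshold, and split rule are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage
from sklearn.cluster import KMeans

clay = np.random.rand(40, 40, 40) < 0.1                  # placeholder binary clay phase
labels, n_clusters = ndimage.label(clay)                  # connected clay clusters
sizes = ndimage.sum(clay, labels, index=range(1, n_clusters + 1))

refined = labels.copy()
next_label = n_clusters + 1
for lab, size in zip(range(1, n_clusters + 1), sizes):
    if size > 200:                                        # split overly large clusters
        coords = np.argwhere(labels == lab)
        k = int(size // 200) + 1
        sub = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(coords)
        for s in range(1, k):                             # keep sub-cluster 0 under the old label
            refined[tuple(coords[sub == s].T)] = next_label
            next_label += 1
```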
Ceccarelli, Michele; Cerulo, Luigi; Santone, Antonella
2014-10-01
Reverse engineering of gene regulatory relationships from genomics data is a crucial task for dissecting the complex underlying regulatory mechanisms occurring in a cell. From a computational point of view, the reconstruction of gene regulatory networks is an underdetermined problem, as the number of possible solutions is typically large compared to the number of available independent data points. Many possible solutions can fit the available data, explaining the data equally well, but only one of them can be the biologically true solution. Several strategies have been proposed in the literature to reduce the search space and/or extend the amount of independent information. In this paper we propose a novel algorithm based on formal methods, mathematically rigorous techniques widely adopted in engineering to specify and verify complex software and hardware systems. Starting from a formal specification of gene regulatory hypotheses, we are able to mathematically prove whether or not a time-course experiment satisfies the formal specification, thereby determining whether a gene regulation exists. The method is able to detect both the direction and the sign (inhibition/activation) of regulations, whereas most methods in the literature are limited to undirected and/or unsigned relationships. We empirically evaluated the approach on experimental and synthetic datasets in terms of precision and recall. In most cases we observed high levels of accuracy, outperforming the current state of the art, although the computational cost increases exponentially with the size of the network. We made the tool implementing the algorithm available at the following url: http://www.bioinformatics.unisannio.it. Copyright © 2014 Elsevier Inc. All rights reserved.
Comparison Study of Three Different Image Reconstruction Algorithms for MAT-MI
Xia, Rongmin; Li, Xu
2010-01-01
We report a theoretical study on magnetoacoustic tomography with magnetic induction (MAT-MI). Based on the description of the signal generation mechanism using Green's function, an acoustic dipole model is proposed to describe the acoustic source excited by the Lorentz force. Using Green's function, three kinds of reconstruction algorithms based on different models of the acoustic source (potential energy, vectored acoustic pressure, and divergence of the Lorentz force) are derived, and corresponding numerical simulations were conducted to compare these three reconstruction algorithms. The computer simulation results indicate that the potential energy method and the vectored pressure method can directly reconstruct the Lorentz force distribution and give a more accurate reconstruction of the electrical conductivity. PMID:19846363
An automated 3D reconstruction method of UAV images
NASA Astrophysics Data System (ADS)
Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping
2015-10-01
In this paper, a novel fully automated 3D reconstruction approach based on images from a low-altitude unmanned aerial vehicle (UAV) system is presented, which requires neither prior camera calibration nor any other external prior knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combination of images. A high-resolution digital surface model of the study area is produced from the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction of low-altitude UAV images, and has great potential for the acquisition of spatial information for large-scale mapping, being especially suitable for rapid response and precise modelling in disaster emergencies.
Energy Analysis of Decoders for Rakeness-Based Compressed Sensing of ECG Signals.
Pareschi, Fabio; Mangia, Mauro; Bortolotti, Daniele; Bartolini, Andrea; Benini, Luca; Rovatti, Riccardo; Setti, Gianluca
2017-12-01
In recent years, compressed sensing (CS) has proved to be effective in lowering the power consumption of sensing nodes in biomedical signal processing devices. This is due to the fact that CS is capable of reducing the amount of data to be transmitted while ensuring correct reconstruction of the acquired waveforms. Rakeness-based CS has been introduced to further reduce the amount of transmitted data by exploiting the uneven distribution of the sensed signal energy. Yet, so far no thorough analysis exists on the impact of its adoption on CS decoder performance. The latter point is of great importance, since body-area sensor network architectures may include intermediate gateway nodes that receive and reconstruct signals to provide local services before relaying data to a remote server. In this paper, we fill this gap by showing that rakeness-based design also improves reconstruction performance. We quantify these findings in the case of ECG signals and when a variety of reconstruction algorithms are used either on a low-power microcontroller or on a heterogeneous mobile computing platform.
High-resolution reconstruction for terahertz imaging.
Xu, Li-Min; Fan, Wen-Hui; Liu, Jia
2014-11-20
We present a high-resolution (HR) reconstruction model and algorithms for terahertz imaging, taking advantage of super-resolution methodology and algorithms. The algorithms used include a projection onto convex sets approach, an iterative backprojection approach, Lucy-Richardson iteration, and 2D wavelet decomposition reconstruction. Using the first two HR reconstruction methods, we successfully obtain HR terahertz images with improved definition and lower noise from four low-resolution (LR) 22×24 terahertz images taken with our homemade THz-TDS system under the same experimental conditions with a 1.0 mm pixel size. Using the last two HR reconstruction methods, we transform one relatively LR terahertz image into an HR terahertz image with decreased noise. This indicates the potential of HR reconstruction methods in terahertz imaging with pulsed and continuous-wave terahertz sources.
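A hedged sketch of one of the named approaches, the Lucy-Richardson iteration, written out directly with SciPy; the placeholder image and the assumed Gaussian point-spread function are illustrative, not data from the cited THz-TDS system.

```python
import numpy as np
from scipy.signal import fftconvolve

image = np.random.rand(22, 24) + 0.1               # placeholder LR terahertz image
x, y = np.mgrid[-3:4, -3:4]
psf = np.exp(-(x**2 + y**2) / 2.0)
psf /= psf.sum()                                    # assumed Gaussian blur kernel

est = np.full_like(image, image.mean())             # flat initial estimate
for _ in range(30):                                  # Richardson-Lucy iterations
    blurred = fftconvolve(est, psf, mode='same')
    ratio = image / np.maximum(blurred, 1e-12)
    est *= fftconvolve(ratio, psf[::-1, ::-1], mode='same')
```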
Bayesian reconstruction of projection reconstruction NMR (PR-NMR).
Yoon, Ji Won
2014-11-01
Projection reconstruction nuclear magnetic resonance (PR-NMR) is a technique for generating multidimensional NMR spectra. A small number of projections from lower-dimensional NMR spectra are used to reconstruct the multidimensional NMR spectra. In our previous work, it was shown that multidimensional NMR spectra are efficiently reconstructed using peak-by-peak based reversible jump Markov chain Monte Carlo (RJMCMC) algorithm. We propose an extended and generalized RJMCMC algorithm replacing a simple linear model with a linear mixed model to reconstruct close NMR spectra into true spectra. This statistical method generates samples in a Bayesian scheme. Our proposed algorithm is tested on a set of six projections derived from the three-dimensional 700 MHz HNCO spectrum of a protein HasA. Copyright © 2014 Elsevier Ltd. All rights reserved.
Constructing an integrated gene similarity network for the identification of disease genes.
Tian, Zhen; Guo, Maozu; Wang, Chunyu; Xing, LinLin; Wang, Lei; Zhang, Yin
2017-09-20
Discovering novel genes that are involved in human diseases is a challenging task in biomedical research. In recent years, several computational approaches have been proposed to prioritize candidate disease genes. Most of these methods are mainly based on protein-protein interaction (PPI) networks. However, since these PPI networks contain false positives and cover less than half of known human genes, their reliability and coverage are low. Therefore, it is highly desirable to fuse multiple sources of genomic data to construct a credible gene similarity network and then infer disease genes on the whole genomic scale. We propose a novel method, named RWRB, to infer the causal genes of diseases of interest. First, we construct five individual gene (protein) similarity networks based on multiple genomic data of human genes. Then, an integrated gene similarity network (IGSN) is reconstructed based on the similarity network fusion (SNF) method. Finally, we employ the random walk with restart algorithm on the phenotype-gene bilayer network, which combines the phenotype similarity network, the IGSN, and the phenotype-gene association network, to prioritize candidate disease genes. We investigate the effectiveness of RWRB through leave-one-out cross-validation in inferring phenotype-gene relationships. Results show that RWRB is more accurate than state-of-the-art methods on most evaluation metrics. Further analysis shows that the success of RWRB benefits from the IGSN, which has wider coverage and higher reliability compared with current PPI networks. Moreover, we conduct a comprehensive case study for Alzheimer's disease and predict some novel disease genes that are supported by the literature. RWRB is an effective and reliable algorithm for prioritizing candidate disease genes on the genomic scale. Software and supplementary information are available at http://nclab.hit.edu.cn/~tianzhen/RWRB/ .
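A minimal random-walk-with-restart sketch on a gene similarity network, in the standard form p <- (1-r) P p + r p0; the similarity matrix and seed genes are placeholders, not the RWRB networks.

```python
import numpy as np

def rwr(W, seeds, restart=0.7, tol=1e-8, max_iter=1000):
    """W: symmetric similarity matrix; seeds: indices of known disease genes."""
    n = W.shape[0]
    col_sums = W.sum(axis=0)
    P = W / np.where(col_sums == 0, 1.0, col_sums)   # column-normalized transition matrix
    p0 = np.zeros(n)
    p0[seeds] = 1.0 / len(seeds)                      # restart distribution on seed genes
    p = p0.copy()
    for _ in range(max_iter):
        p_next = (1.0 - restart) * (P @ p) + restart * p0
        if np.abs(p_next - p).sum() < tol:
            break
        p = p_next
    return p   # stationary scores used to rank candidate disease genes
```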
Simultaneous motion estimation and image reconstruction (SMEIR) for 4D cone-beam CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Jing; Gu, Xuejun
2013-10-15
Purpose: Image reconstruction and motion model estimation in four-dimensional cone-beam CT (4D-CBCT) are conventionally handled as two sequential steps. Due to the limited number of projections at each phase, the image quality of 4D-CBCT is degraded by view aliasing artifacts, and the accuracy of subsequent motion modeling is decreased by the inferior 4D-CBCT. The objective of this work is to enhance both the image quality of 4D-CBCT and the accuracy of motion model estimation with a novel strategy enabling simultaneous motion estimation and image reconstruction (SMEIR). Methods: The proposed SMEIR algorithm consists of two alternating steps: (1) model-based iterative image reconstruction to obtain a motion-compensated primary CBCT (m-pCBCT) and (2) motion model estimation to obtain an optimal set of deformation vector fields (DVFs) between the m-pCBCT and other 4D-CBCT phases. The motion-compensated image reconstruction is based on the simultaneous algebraic reconstruction technique (SART) coupled with total variation minimization. During the forward- and backprojection of SART, measured projections from an entire set of 4D-CBCT are used for reconstruction of the m-pCBCT by utilizing the updated DVF. The DVF is estimated by matching the forward projection of the deformed m-pCBCT and measured projections of other phases of 4D-CBCT. The performance of the SMEIR algorithm is quantitatively evaluated on a 4D NCAT phantom. The quality of reconstructed 4D images and the accuracy of tumor motion trajectory are assessed by comparing with those resulting from conventional sequential 4D-CBCT reconstructions (FDK and total variation minimization) and motion estimation (demons algorithm). The performance of the SMEIR algorithm is further evaluated by reconstructing a lung cancer patient 4D-CBCT. Results: Image quality of 4D-CBCT is greatly improved by the SMEIR algorithm in both phantom and patient studies. When all projections are used to reconstruct a 3D-CBCT by FDK, motion-blurring artifacts are present, leading to a 24.4% relative reconstruction error in the NCAT phantom. View aliasing artifacts are present in 4D-CBCT reconstructed by FDK from 20 projections, with a relative error of 32.1%. When total variation minimization is used to reconstruct 4D-CBCT, the relative error is 18.9%. Image quality of 4D-CBCT is substantially improved by using the SMEIR algorithm and the relative error is reduced to 7.6%. The maximum error (MaxE) of tumor motion determined from the DVF obtained by demons registration on an FDK-reconstructed 4D-CBCT is 3.0, 2.3, and 7.1 mm along left–right (L-R), anterior–posterior (A-P), and superior–inferior (S-I) directions, respectively. From the DVF obtained by demons registration on 4D-CBCT reconstructed by total variation minimization, the MaxE of tumor motion is reduced to 1.5, 0.5, and 5.5 mm along L-R, A-P, and S-I directions. From the DVF estimated by the SMEIR algorithm, the MaxE of tumor motion is further reduced to 0.8, 0.4, and 1.5 mm along L-R, A-P, and S-I directions, respectively. Conclusions: The proposed SMEIR algorithm is able to estimate a motion model and reconstruct motion-compensated 4D-CBCT. The SMEIR algorithm improves image reconstruction accuracy of 4D-CBCT and tumor motion trajectory estimation accuracy as compared to conventional sequential 4D-CBCT reconstruction and motion estimation.
Regularization iteration imaging algorithm for electrical capacitance tomography
NASA Astrophysics Data System (ADS)
Tong, Guowei; Liu, Shi; Chen, Hongyan; Wang, Xueyao
2018-03-01
The image reconstruction method plays a crucial role in real-world applications of the electrical capacitance tomography technique. In this study, a new cost function that simultaneously considers the sparsity and low-rank properties of the imaging targets is proposed to improve the quality of the reconstructed images, in which the image reconstruction task is converted into an optimization problem. Within the framework of the split Bregman algorithm, an iterative scheme that splits a complicated optimization problem into several simpler sub-tasks is developed to solve the proposed cost function efficiently, in which the fast iterative shrinkage-thresholding algorithm is introduced to accelerate convergence. Numerical experiment results verify the effectiveness of the proposed algorithm in improving reconstruction precision and robustness.
Reconstructing cortical current density by exploring sparseness in the transform domain
NASA Astrophysics Data System (ADS)
Ding, Lei
2009-05-01
In the present study, we have developed a novel electromagnetic source imaging approach to reconstruct extended cortical sources by means of cortical current density (CCD) modeling and a novel EEG imaging algorithm which explores sparseness in cortical source representations through the use of L1-norm in objective functions. The new sparse cortical current density (SCCD) imaging algorithm is unique since it reconstructs cortical sources by attaining sparseness in a transform domain (the variation map of cortical source distributions). While large variations are expected to occur along boundaries (sparseness) between active and inactive cortical regions, cortical sources can be reconstructed and their spatial extents can be estimated by locating these boundaries. We studied the SCCD algorithm using numerous simulations to investigate its capability in reconstructing cortical sources with different extents and in reconstructing multiple cortical sources with different extent contrasts. The SCCD algorithm was compared with two L2-norm solutions, i.e. weighted minimum norm estimate (wMNE) and cortical LORETA. Our simulation data from the comparison study show that the proposed sparse source imaging algorithm is able to accurately and efficiently recover extended cortical sources and is promising to provide high-accuracy estimation of cortical source extents.
NASA Astrophysics Data System (ADS)
Chen, Buxin; Zhang, Zheng; Sidky, Emil Y.; Xia, Dan; Pan, Xiaochuan
2017-11-01
Optimization-based algorithms for image reconstruction in multispectral (or photon-counting) computed tomography (MCT) remain a topic of active research. The challenge of optimization-based image reconstruction in MCT stems from the inherently non-linear data model that can lead to a non-convex optimization program for which no mathematically exact solver seems to exist for achieving globally optimal solutions. In this work, based upon a non-linear data model, we design a non-convex optimization program, derive its first-order-optimality conditions, and propose an algorithm to solve the program for image reconstruction in MCT. In addition to consideration of image reconstruction for the standard scan configuration, the emphasis is on investigating the algorithm’s potential for enabling non-standard scan configurations with no or minimal hardware modification to existing CT systems, which has potential practical implications for lowered hardware cost, enhanced scanning flexibility, and reduced imaging dose/time in MCT. Numerical studies are carried out for verification of the algorithm and its implementation, and for a preliminary demonstration and characterization of the algorithm in reconstructing images and in enabling non-standard configurations with varying scanning angular range and/or x-ray illumination coverage in MCT.
NASA Astrophysics Data System (ADS)
Nunez, Jorge; Llacer, Jorge
1993-10-01
This paper describes a general Bayesian iterative algorithm with an entropy prior for image reconstruction. It solves the cases of both pure Poisson data and Poisson data with Gaussian readout noise. The algorithm maintains positivity of the solution; it includes case-specific prior information (default map) and flatfield corrections; it removes background and can be accelerated to be faster than the Richardson-Lucy algorithm. In order to determine the hyperparameter that balances the entropy and likelihood terms in the Bayesian approach, we have used a likelihood cross-validation technique. Cross-validation is more robust than other methods because it is less demanding in terms of the knowledge of exact data characteristics and of the point-spread function. We have used the algorithm to successfully reconstruct images obtained in different space- and ground-based imaging situations. It has been possible to recover most of the originally intended capabilities of the Hubble Space Telescope (HST) wide field and planetary camera (WFPC) and faint object camera (FOC) from images obtained in their present state. Semireal simulations for the future wide field planetary camera 2 show that even after the repair of the spherical aberration problem, image reconstruction can play a key role in improving the resolution of the cameras, well beyond the design of the Hubble instruments. We also show that ground-based images can be reconstructed successfully with the algorithm. A technique which consists of dividing the CCD observations into two frames, with one-half the exposure time each, emerges as a recommended procedure for the utilization of the described algorithms. We have compared our technique with two commonly used reconstruction algorithms: the Richardson-Lucy and the Cambridge maximum entropy algorithms.
Optimization of Stereo Matching in 3D Reconstruction Based on Binocular Vision
NASA Astrophysics Data System (ADS)
Gai, Qiyang
2018-01-01
Stereo matching is one of the key steps of 3D reconstruction based on binocular vision. In order to improve the convergence speed and accuracy of 3D reconstruction based on binocular vision, this paper adopts a combination of the polar constraint and the ant colony algorithm. By using the line constraint to reduce the search range, an ant colony algorithm is used to optimize the stereo matching feature search function within the reduced search range. Through the establishment of an analysis model of the ant colony stereo matching optimization process, a globally optimized solution of stereo matching in 3D reconstruction based on a binocular vision system is realized. The simulation results show that, by combining the advantages of the polar constraint and the ant colony algorithm, the stereo matching range of 3D reconstruction based on binocular vision is simplified, and the convergence speed and accuracy of the stereo matching process are improved.
Exact BPF and FBP algorithms for nonstandard saddle curves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu Hengyong; Zhao Shiying; Ye Yangbo
2005-11-15
A hot topic in cone-beam CT research is exact cone-beam reconstruction from a general scanning trajectory. In particular, a nonstandard saddle curve attracts attention, as this construct allows the continuous periodic scanning of a volume-of-interest (VOI). Here we evaluate two algorithms for reconstruction from data collected along a nonstandard saddle curve, which are in the filtered backprojection (FBP) and backprojection filtration (BPF) formats, respectively. Both algorithms are implemented in a chord-based coordinate system. Then, a rebinning procedure is utilized to transform the reconstructed results into the natural coordinate system. The simulation results demonstrate that the FBP algorithm produces better image quality than the BPF algorithm, while both algorithms exhibit similar noise characteristics.
Ionospheric-thermospheric UV tomography: 1. Image space reconstruction algorithms
NASA Astrophysics Data System (ADS)
Dymond, K. F.; Budzien, S. A.; Hei, M. A.
2017-03-01
We present and discuss two algorithms of the class known as Image Space Reconstruction Algorithms (ISRAs) that we are applying to the solution of large-scale ionospheric tomography problems. ISRAs have several desirable features that make them useful for ionospheric tomography. In addition to producing nonnegative solutions, ISRAs are amenable to sparse-matrix formulations and are fast, stable, and robust. We present the results of our studies of two types of ISRA: the Least Squares Positive Definite and the Richardson-Lucy algorithms. We compare their performance to the Multiplicative Algebraic Reconstruction and Conjugate Gradient Least Squares algorithms. We then discuss the use of regularization in these algorithms and present our new approach based on regularization to a partial differential equation.
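For context, a minimal sketch of the basic ISRA multiplicative update in its standard form (a nonnegative solution of A x ≈ y); the system matrix, data, and iteration count are placeholders, and this is not the authors' ionospheric implementation.

```python
import numpy as np

def isra(A, y, n_iter=200, eps=1e-12):
    """Image space reconstruction algorithm: x <- x * (A^T y) / (A^T A x)."""
    x = np.ones(A.shape[1])
    Aty = A.T @ y                              # fixed numerator term
    for _ in range(n_iter):
        denom = A.T @ (A @ x)
        x *= Aty / np.maximum(denom, eps)      # multiplicative, nonnegativity-preserving update
    return x
```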
Deep learning methods to guide CT image reconstruction and reduce metal artifacts
NASA Astrophysics Data System (ADS)
Gjesteby, Lars; Yang, Qingsong; Xi, Yan; Zhou, Ye; Zhang, Junping; Wang, Ge
2017-03-01
The rapidly-rising field of machine learning, including deep learning, has inspired applications across many disciplines. In medical imaging, deep learning has been primarily used for image processing and analysis. In this paper, we integrate a convolutional neural network (CNN) into the computed tomography (CT) image reconstruction process. Our first task is to monitor the quality of CT images during iterative reconstruction and decide when to stop the process according to an intelligent numerical observer instead of using a traditional stopping rule, such as a fixed error threshold or a maximum number of iterations. After training on ground truth images, the CNN was successful in guiding an iterative reconstruction process to yield high-quality images. Our second task is to improve a sinogram to correct for artifacts caused by metal objects. A large number of interpolation and normalization-based schemes were introduced for metal artifact reduction (MAR) over the past four decades. The NMAR algorithm is considered a state-of-the-art method, although residual errors often remain in the reconstructed images, especially in cases of multiple metal objects. Here we merge NMAR with deep learning in the projection domain to achieve additional correction in critical image regions. Our results indicate that deep learning can be a viable tool to address CT reconstruction challenges.
A survey on evolutionary algorithm based hybrid intelligence in bioinformatics.
Li, Shan; Kang, Liying; Zhao, Xing-Ming
2014-01-01
With the rapid advance in genomics, proteomics, metabolomics, and other types of omics technologies during the past decades, a tremendous amount of data related to molecular biology has been produced. It is becoming a big challenge for bioinformaticians to analyze and interpret these data with conventional intelligent techniques, for example, support vector machines. Recently, hybrid intelligent methods, which integrate several standard intelligent approaches, have become more and more popular due to their robustness and efficiency. Specifically, hybrid intelligent approaches based on evolutionary algorithms (EAs) are widely used in various fields due to the efficiency and robustness of EAs. In this review, we give an introduction to the applications of hybrid intelligent methods, in particular those based on evolutionary algorithms, in bioinformatics. In particular, we focus on their applications to three common problems that arise in bioinformatics, that is, feature selection, parameter estimation, and reconstruction of biological networks.
A Two-Stage Reconstruction Processor for Human Detection in Compressive Sensing CMOS Radar.
Tsao, Kuei-Chi; Lee, Ling; Chu, Ta-Shun; Huang, Yuan-Hao
2018-04-05
Complementary metal-oxide-semiconductor (CMOS) radar has recently gained much research attention because small and low-power CMOS devices are very suitable for deploying sensing nodes in a low-power wireless sensing system. This study focuses on the signal processing of a wireless CMOS impulse radar system that can detect humans and objects in a home-care internet-of-things sensing system. The challenges of low-power CMOS radar systems are the weakness of the human signal and the high computational complexity of the target detection algorithm. The compressive sensing-based detection algorithm can relax the computational costs by avoiding the use of matched filters and reducing the analog-to-digital converter bandwidth requirement. Orthogonal matching pursuit (OMP) is one of the popular signal reconstruction algorithms for compressive sensing radar; however, its complexity is still very high because the high resolution of human respiration leads to high-dimensional signal reconstruction. Thus, this paper proposes a two-stage reconstruction algorithm for compressive sensing radar. The proposed algorithm not only reduces complexity by 75% relative to the OMP algorithm but also achieves better positioning performance, especially in noisy environments. This study also designed and implemented the algorithm using a Virtex-7 FPGA chip (Xilinx, San Jose, CA, USA). The proposed reconstruction processor can support a 256 × 13 real-time radar image display with a throughput of 28.2 frames per second.
DART: a practical reconstruction algorithm for discrete tomography.
Batenburg, Kees Joost; Sijbers, Jan
2011-09-01
In this paper, we present an iterative reconstruction algorithm for discrete tomography, called discrete algebraic reconstruction technique (DART). DART can be applied if the scanned object is known to consist of only a few different compositions, each corresponding to a constant gray value in the reconstruction. Prior knowledge of the gray values for each of the compositions is exploited to steer the current reconstruction towards a reconstruction that contains only these gray values. Based on experiments with both simulated CT data and experimental μCT data, it is shown that DART is capable of computing more accurate reconstructions from a small number of projection images, or from a small angular range, than alternative methods. It is also shown that DART can deal effectively with noisy projection data and that the algorithm is robust with respect to errors in the estimation of the gray values.
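A hedged sketch of one DART-style cycle built around a generic SIRT-type update: segment the current image to the known gray values, keep only boundary pixels free, and update those free pixels against the residual data. The gray values, boundary rule, thresholds, and the small sirt() helper are illustrative assumptions, not the published DART implementation.

```python
import numpy as np
from scipy import ndimage

def sirt(A, p, x, n_iter=10, relax=0.5):
    """Small SIRT-style solver used as the continuous update inside the DART cycle."""
    row = np.maximum(A.sum(axis=1), 1e-12)
    col = np.maximum(A.sum(axis=0), 1e-12)
    for _ in range(n_iter):
        x = x + relax * (A.T @ ((p - A @ x) / row)) / col
    return x

def dart_cycle(A, p, x, grey_values, shape):
    """One DART-style cycle: segment, free only boundary pixels, update them."""
    grey = np.asarray(sorted(grey_values), dtype=float)
    thresholds = (grey[:-1] + grey[1:]) / 2.0
    seg = grey[np.digitize(x, thresholds)]                  # segment to known gray values
    img = seg.reshape(shape)
    boundary = ndimage.grey_dilation(img, size=3) != ndimage.grey_erosion(img, size=3)
    free = boundary.ravel()                                 # only boundary pixels stay free
    x_new = seg.copy()
    residual = p - A[:, ~free] @ seg[~free]                 # data explained by fixed pixels
    x_new[free] = sirt(A[:, free], residual, x[free])
    return x_new
```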
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rit, Simon, E-mail: simon.rit@creatis.insa-lyon.fr; Clackdoyle, Rolf; Keuschnigg, Peter
Purpose: A new cone-beam CT scanner for image-guided radiotherapy (IGRT) can independently rotate the source and the detector along circular trajectories. Existing reconstruction algorithms are not suitable for this scanning geometry. The authors propose and evaluate a three-dimensional (3D) filtered-backprojection reconstruction for this situation. Methods: The source and the detector trajectories are tuned to image a field-of-view (FOV) that is offset with respect to the center-of-rotation. The new reconstruction formula is derived from the Feldkamp algorithm and results in a similar three-step algorithm: projection weighting, ramp filtering, and weighted backprojection. Simulations of a Shepp-Logan digital phantom were used to evaluate the new algorithm with a 10 cm-offset FOV. A real cone-beam CT image with an 8.5 cm-offset FOV was also obtained from projections of an anthropomorphic head phantom. Results: The quality of the cone-beam CT images reconstructed using the new algorithm was similar to that of images reconstructed with the Feldkamp algorithm, which is used in conventional cone-beam CT. The real image of the head phantom exhibited comparable image quality to that of existing systems. Conclusions: The authors have proposed a 3D filtered-backprojection reconstruction for scanners with independent source and detector rotations that is practical and effective. This algorithm forms the basis for exploiting the scanner’s unique capabilities in IGRT protocols.
Reconstruction of networks from one-step data by matching positions
NASA Astrophysics Data System (ADS)
Wu, Jianshe; Dang, Ni; Jiao, Yang
2018-05-01
It is a challenge to estimate the topology of a network from short time-series data. In this paper, matching positions is developed to reconstruct the topology of a network from only one-step data. We consider a general network model of coupled agents, in which the phase transformation of each node is determined by its neighbors. From the phase transformation information from one step to the next, the connections of the tail vertices are first reconstructed by matching positions. By removing the already reconstructed vertices and repeatedly reconstructing the connections of the tail vertices, the topology of the entire network is reconstructed. For sparse scale-free networks with more than ten thousand nodes, we almost recover the actual topology using only the one-step data in simulations.
Accurate 3D reconstruction by a new PDS-OSEM algorithm for HRRT
NASA Astrophysics Data System (ADS)
Chen, Tai-Been; Horng-Shing Lu, Henry; Kim, Hang-Keun; Son, Young-Don; Cho, Zang-Hee
2014-03-01
State-of-the-art high resolution research tomography (HRRT) provides high resolution PET images with full 3D human brain scanning. However, a short time frame in dynamic studies causes many problems related to the low counts in the acquired data. The PDS-OSEM algorithm was proposed to reconstruct the HRRT image with a high signal-to-noise ratio that provides accurate information for dynamic data. The new algorithm was evaluated on simulated images, empirical phantoms, and real human brain data. Meanwhile, the time-activity curve was adopted to compare the reconstruction performance of the PDS-OSEM and OP-OSEM algorithms on dynamic data. According to simulated and empirical studies, the PDS-OSEM algorithm reconstructs images with higher quality, higher accuracy, less noise, and a lower average sum of squared errors than OP-OSEM. The presented algorithm is useful for providing quality images under the condition of low count rates in dynamic studies with a short scan time.
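For orientation, a sketch of a generic OSEM update in its standard ordered-subsets EM form (not the PDS-OSEM variant itself); the system matrix, measured data, and subset partition are placeholders.

```python
import numpy as np

def osem(A, y, n_subsets=4, n_iter=5, eps=1e-12):
    """Ordered-subsets EM: multiplicative update over cyclic subsets of projection rays."""
    n_rays, n_vox = A.shape
    subsets = np.array_split(np.arange(n_rays), n_subsets)
    x = np.ones(n_vox)
    for _ in range(n_iter):
        for idx in subsets:
            As, ys = A[idx], y[idx]
            sens = np.maximum(As.sum(axis=0), eps)           # subset sensitivity image
            ratio = ys / np.maximum(As @ x, eps)              # measured / estimated projections
            x *= (As.T @ ratio) / sens
    return x
```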
NASA Astrophysics Data System (ADS)
Gilbert, B. K.; Robb, R. A.; Chu, A.; Kenue, S. K.; Lent, A. H.; Swartzlander, E. E., Jr.
1981-02-01
Rapid advances during the past ten years of several forms of computer-assisted tomography (CT) have resulted in the development of numerous algorithms to convert raw projection data into cross-sectional images. These reconstruction algorithms are either 'iterative,' in which a large matrix algebraic equation is solved by successive approximation techniques; or 'closed form'. Continuing evolution of the closed form algorithms has allowed the newest versions to produce excellent reconstructed images in most applications. This paper will review several computer software and special-purpose digital hardware implementations of closed form algorithms, either proposed during the past several years by a number of workers or actually implemented in commercial or research CT scanners. The discussion will also cover a number of recently investigated algorithmic modifications which reduce the amount of computation required to execute the reconstruction process, as well as several new special-purpose digital hardware implementations under development in laboratories at the Mayo Clinic.
NASA Astrophysics Data System (ADS)
Nouizi, F.; Erkol, H.; Luk, A.; Marks, M.; Unlu, M. B.; Gulsen, G.
2016-10-01
We previously introduced photo-magnetic imaging (PMI), an imaging technique that illuminates the medium under investigation with near-infrared light and measures the induced temperature increase using magnetic resonance thermometry (MRT). Using a multiphysics solver combining photon migration and heat diffusion, PMI models the spatiotemporal distribution of temperature variation and recovers high resolution optical absorption images using these temperature maps. In this paper, we present a new fast non-iterative reconstruction algorithm for PMI. This new algorithm uses analytic methods during the resolution of the forward problem and the assembly of the sensitivity matrix. We validate our new analytic-based algorithm against the first-generation finite element method (FEM) based reconstruction algorithm previously developed by our team. The validation is performed first using synthetic data and then using real MRT-measured temperature maps. Our new method accelerates the reconstruction process 30-fold when compared to a single iteration of the FEM-based algorithm.
A BPF-FBP tandem algorithm for image reconstruction in reverse helical cone-beam CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cho, Seungryong; Xia, Dan; Pellizzari, Charles A.
2010-01-15
Purpose: Reverse helical cone-beam computed tomography (CBCT) is a scanning configuration for potential applications in image-guided radiation therapy in which an accurate anatomic image of the patient is needed for image-guidance procedures. The authors previously developed an algorithm for image reconstruction from nontruncated data of an object that is completely within the reverse helix. The purpose of this work is to develop an image reconstruction approach for reverse helical CBCT of a long object that extends out of the reverse helix and therefore constitutes data truncation. Methods: The proposed approach comprises two reconstruction steps. In the first step, a chord-based backprojection-filtration (BPF) algorithm reconstructs a volumetric image of an object from the original cone-beam data. Because there exists a chordless region in the middle of the reverse helix, the image obtained in the first step contains an unreconstructed central-gap region. In the second step, the gap region is reconstructed by use of a Pack-Noo-formula-based filtered-backprojection (FBP) algorithm from the modified cone-beam data obtained by subtracting from the original cone-beam data the reprojection of the image reconstructed in the first step. Results: The authors have performed numerical studies to validate the proposed approach in image reconstruction from reverse helical cone-beam data. The results confirm that the proposed approach can reconstruct accurate images of a long object without suffering from data-truncation artifacts or cone-angle artifacts. Conclusions: They developed and validated a BPF-FBP tandem algorithm to reconstruct images of a long object from reverse helical cone-beam data. The chord-based BPF algorithm was utilized for converting the long-object problem into a short-object problem. The proposed approach is applicable to other scanning configurations such as reduced circular sinusoidal trajectories.
Resolution recovery for Compton camera using origin ensemble algorithm.
Andreyev, A; Celler, A; Ozsahin, I; Sitek, A
2016-08-01
Compton cameras (CCs) use electronic collimation to reconstruct the images of activity distribution. Although this approach can greatly improve imaging efficiency, due to the complex geometry of the CC imaging principle, image reconstruction with the standard iterative algorithms, such as ordered subset expectation maximization (OSEM), can be very time-consuming, even more so if resolution recovery (RR) is implemented. We have previously shown that the origin ensemble (OE) algorithm can be used for the reconstruction of the CC data. Here we propose a method of extending our OE algorithm to include RR. To validate the proposed algorithm we used Monte Carlo simulations of a CC composed of multiple layers of pixelated CZT detectors and designed for imaging small animals. A series of CC acquisitions of small hot spheres and the Derenzo phantom placed in air were simulated. Images obtained from (a) the exact data, (b) blurred data but reconstructed without resolution recovery, and (c) blurred and reconstructed with resolution recovery were compared. Furthermore, the reconstructed contrast-to-background ratios were investigated using the phantom with nine spheres placed in a hot background. Our simulations demonstrate that the proposed method allows for the recovery of the resolution loss that is due to imperfect accuracy of event detection. Additionally, tests of camera sensitivity corresponding to different detector configurations demonstrate that the proposed CC design has sensitivity comparable to PET. When the same number of events was considered, the computation time per iteration increased only by a factor of 2 when OE reconstruction with the resolution recovery correction was performed relative to the original OE algorithm. We estimate that the addition of resolution recovery to the OSEM would increase reconstruction times by 2-3 orders of magnitude per iteration. The results of our tests demonstrate the improvement of image resolution provided by the OE reconstructions with resolution recovery. The quality of images and their contrast are similar to those obtained from the OE reconstructions from scans simulated with perfect energy and spatial resolutions.
Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-01-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with the total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators arising from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We theoretically prove the convergence of the preconditioned alternating projection algorithm. In numerical experiments, the performance of our algorithm, with an appropriately selected preconditioning matrix, is compared with that of the conventional MAP expectation-maximization (MAP-EM) algorithm with a TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms the EM-TV in all aspects, including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in convergence speed while providing comparable image quality. PMID:23271835
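One way to see the role of the two ingredients highlighted above, an EM-type preconditioner and a TV proximity operator, is a simple scheme that alternates a Poisson EM update with a TV denoising step. The sketch below is a loose illustration of that pairing, not the published PAPA iteration; the system matrix, data and TV solver (scikit-image's Chambolle denoiser) are stand-in assumptions.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def em_tv(y, A, shape, n_iters=30, tv_weight=0.05, eps=1e-12):
    """Alternate a Poisson EM update with a TV denoising (proximity-like) step."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0]) + eps             # sensitivity image A^T 1
    for _ in range(n_iters):
        x *= (A.T @ (y / (A @ x + eps))) / sens        # EM (likelihood) update
        x = denoise_tv_chambolle(x.reshape(shape), weight=tv_weight).ravel()  # TV step
        x = np.maximum(x, 0.0)                         # keep the non-negativity constraint
    return x.reshape(shape)

rng = np.random.default_rng(0)
shape = (16, 16)
x_true = np.zeros(shape); x_true[4:12, 4:12] = 5.0
A = rng.random((600, x_true.size)) / 50.0
y = rng.poisson(A @ x_true.ravel())
print(np.round(em_tv(y, A, shape)[8, 2:14], 1))
```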
Functional validation and comparison framework for EIT lung imaging.
Grychtol, Bartłomiej; Elke, Gunnar; Meybohm, Patrick; Weiler, Norbert; Frerichs, Inéz; Adler, Andy
2014-01-01
Electrical impedance tomography (EIT) is an emerging clinical tool for monitoring ventilation distribution in mechanically ventilated patients, for which many image reconstruction algorithms have been suggested. We propose an experimental framework to assess such algorithms with respect to their ability to correctly represent well-defined physiological changes. We defined a set of clinically relevant ventilation conditions and induced them experimentally in 8 pigs by controlling three ventilator settings (tidal volume, positive end-expiratory pressure and the fraction of inspired oxygen). In this way, large and discrete shifts in global and regional lung air content were elicited. We use the framework to compare twelve 2D EIT reconstruction algorithms, including backprojection (the original and still most frequently used algorithm), GREIT (a more recent consensus algorithm for lung imaging), truncated singular value decomposition (TSVD), several variants of the one-step Gauss-Newton approach and two iterative algorithms. We consider the effects of using a 3D finite element model, assuming non-uniform background conductivity, noise modeling, reconstructing for electrode movement, total variation (TV) reconstruction, robust error norms, smoothing priors, and using difference vs. normalized difference data. Our results indicate that, while variation in appearance of images reconstructed from the same data is not negligible, clinically relevant parameters do not vary considerably among the advanced algorithms. Among the analysed algorithms, several advanced algorithms perform well, while some others are significantly worse. Given its vintage and ad-hoc formulation, backprojection works surprisingly well, supporting the validity of previous studies in lung EIT.
NASA Astrophysics Data System (ADS)
Çayören, M.; Akduman, I.; Yapar, A.; Crocco, L.
2010-03-01
The reference list should have included the conference communications [1] and [2], wherein we introduced the algorithm described in this paper. Note that a less complete description of the algorithm was given in [1]. However, the example considering a bean-shaped target is the same in the two papers and it is reused in this paper by kind permission of the Applied Computational Electromagnetics Society. References [1] Crocco L, Akduman I, Çayören M and Yapar A 2007 A new method for shape reconstruction of perfectly conducting targets The 23rd Annual Review of Progress in Applied Computational Electromagnetics (Verona, Italy) [2] Çayören M, Akduman I, Yapar A and Crocco L 2007 A new algorithm for the shape reconstruction of perfectly conducting objects Progress in Electromagnetics Research Symposium (PIERS) (Beijing, PRC)
NASA Astrophysics Data System (ADS)
Mushkin, I.; Solomon, S.
2017-10-01
We study the inverse contagion problem (ICP). As opposed to the direct contagion problem, in which the network structure is known and the question is when each node will be contaminated, in the inverse problem the links of the network are unknown but a sequence of contagion histories (the times when each node was contaminated) is observed. We consider two versions of the ICP: The strong problem (SICP), which is the reconstruction of the network and has been studied before, and the weak problem (WICP), which requires "only" the prediction (at each time step) of the nodes that will be contaminated at the next time step (this is often the real life situation in which a contagion is observed and predictions are made in real time). Moreover, our focus is on analyzing the increasing accuracy of the solution, as a function of the number of contagion histories already observed. For simplicity, we discuss the simplest (deterministic and synchronous) contagion dynamics and the simplest solution algorithm, which we have applied to different network types. The main result of this paper is that the complex problem of the convergence of the ICP for a network can be reduced to an individual property of pairs of nodes: the "false link difficulty". By definition, given a pair of unlinked nodes i and j, the difficulty of the false link (i,j) is the probability that in a random contagion history, the nodes i and j are not contaminated at the same time step (or at consecutive time steps). In other words, the "false link difficulty" of a non-existing network link is the probability that the observations during a random contagion history would not rule out that link. This probability is relatively straightforward to calculate, and in most instances relies only on the relative positions of the two nodes (i,j) and not on the entire network structure. We have observed the distribution of false link difficulty for various network types, estimated it theoretically and compared it (successfully) with the numerical simulations. Based on it, we estimated analytically the convergence of the ICP solution (as a function of the number of contagion histories observed), and found it to be in perfect agreement with simulation results. Finally, the most important insight we obtained is that SICP and WICP have quite different properties: if one is interested only in the operational aspect of predicting how contagion will spread, the links which are most difficult to decide about are the least influential on contagion dynamics. In other words, the parts of the network which are harder to reconstruct are also least important for predicting the contagion dynamics, up to the point where a (large) constant number of false links in the network (i.e. non-convergence of the network reconstruction procedure) implies a zero rate of the node contagion prediction errors (perfect convergence of the WICP). Thus, the contagion prediction problem (WICP) difficulty is very different from the network reconstruction problem (SICP), insofar as links which are difficult to reconstruct are quite harmless in terms of contagion prediction capability (WICP).
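The false-link difficulty defined above lends itself to a direct Monte Carlo estimate: for a candidate non-link (i, j), count the fraction of contagion histories in which i and j are not contaminated at the same or consecutive time steps. The sketch below generates histories with a simple deterministic, synchronous spread on a toy graph; the dynamics, graph model and parameters are illustrative assumptions rather than the paper's exact setup.

```python
import random
import networkx as nx

def contagion_history(G, seeds, max_steps=20):
    """Synchronous spread: a node is contaminated once any neighbour is. Returns node -> contamination time."""
    times = {s: 0 for s in seeds}
    frontier = set(seeds)
    for t in range(1, max_steps + 1):
        new = {v for u in frontier for v in G[u] if v not in times}
        for v in new:
            times[v] = t
        frontier = new
        if not frontier:
            break
    return times

def false_link_difficulty(G, i, j, n_histories=500):
    """Estimated probability that, in a random contagion history, i and j are NOT
    contaminated at the same or consecutive time steps (the paper's definition)."""
    nodes = list(G.nodes)
    hard = 0
    for _ in range(n_histories):
        times = contagion_history(G, seeds=[random.choice(nodes)])
        ti, tj = times.get(i), times.get(j)
        if ti is None or tj is None or abs(ti - tj) > 1:
            hard += 1
    return hard / n_histories

G = nx.barabasi_albert_graph(200, 2, seed=1)
i, j = next((u, v) for u in G for v in G if u < v and not G.has_edge(u, v))
print(false_link_difficulty(G, i, j))
```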
A Multiobjective Sparse Feature Learning Model for Deep Neural Networks.
Gong, Maoguo; Liu, Jia; Li, Hao; Cai, Qing; Su, Linzhi
2015-12-01
Hierarchical deep neural networks are currently popular learning models for imitating the hierarchical architecture of the human brain. Single-layer feature extractors are the building blocks of deep networks. Sparse feature learning models are popular models that can learn useful representations. However, most of those models need a user-defined constant to control the sparsity of the representations. In this paper, we propose a multiobjective sparse feature learning model based on the autoencoder. The parameters of the model are learnt by simultaneously optimizing two objectives, the reconstruction error and the sparsity of the hidden units, to find a reasonable compromise between them automatically. We design a multiobjective induced learning procedure for this model based on a multiobjective evolutionary algorithm. In the experiments, we demonstrate that the learning procedure is effective and that the proposed multiobjective model can learn useful sparse features.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, S; Wang, W; Tang, X
2014-06-15
Purpose: With the major benefit of dealing with data truncation for ROI reconstruction, the algorithm of differentiated backprojection followed by Hilbert filtering (DBPF) was originally derived for image reconstruction from parallel- or fan-beam data. To extend its application to axial CB scans, we proposed the integration of the DBPF algorithm with 3-D weighting. In this work, we further propose the incorporation of Butterfly filtering into the 3-D weighted axial CB-DBPF algorithm and conduct an evaluation to verify its performance. Methods: Given an axial scan, tomographic images are reconstructed by the DBPF algorithm with 3-D weighting, in which streak artifacts exist along the direction of Hilbert filtering. Recognizing this orientation-specific behavior, a pair of orthogonal Butterfly filters is applied to the images reconstructed with horizontal and vertical Hilbert filtering, correspondingly. In addition, the Butterfly filtering can also be utilized for streak artifact suppression in scenarios wherein only partial scan data with an angular range as small as 270° are available. Results: Preliminary data show that, with the correspondingly applied Butterfly filtering, the streak artifacts existing in the images reconstructed by the 3-D weighted DBPF algorithm can be suppressed to an unnoticeable level. Moreover, the Butterfly filtering also works in partial scan scenarios, though the 3-D weighting scheme may have to be dropped because sufficient projection data are not available. Conclusion: As an algorithmic step, the incorporation of Butterfly filtering enables the DBPF algorithm for CB image reconstruction from data acquired along either a full or partial axial scan.
2017-01-01
Objective Electrical Impedance Tomography (EIT) is a powerful non-invasive technique for imaging applications. The goal is to estimate the electrical properties of living tissues by measuring the potential at the boundary of the domain. Being safe with respect to patient health, non-invasive, and having no known hazards, EIT is an attractive and promising technology. However, it suffers from a particular technical difficulty, which consists of solving a nonlinear inverse problem in real time. Several nonlinear approaches have been proposed as a replacement for the linear solver, but in practice very few are capable of stable, high-quality, and real-time EIT imaging because of their very low robustness to errors and inaccurate modeling, or because they require considerable computational effort. Methods In this paper, a post-processing technique based on an artificial neural network (ANN) is proposed to obtain a nonlinear solution to the inverse problem, starting from a linear solution. While common reconstruction methods based on ANNs estimate the solution directly from the measured data, the method proposed here enhances the solution obtained from a linear solver. Conclusion Applying a linear reconstruction algorithm before applying an ANN reduces the effects of noise and modeling errors. Hence, this approach significantly reduces the error associated with solving 2D inverse problems using machine-learning-based algorithms. Significance This work presents radical enhancements in the stability of nonlinear methods for biomedical EIT applications. PMID:29206856
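As a rough illustration of the post-processing idea described above, refining a linear reconstruction with a neural network rather than mapping raw measurements to images directly, the sketch below trains a small multilayer perceptron to map linearly reconstructed images to their targets. The data are synthetic random placeholders and the network size is arbitrary; no EIT forward model is involved.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Placeholder training set: each row is a (flattened) linear reconstruction,
# each target row is the corresponding ground-truth conductivity image.
n_samples, n_pixels = 500, 16 * 16
truth = rng.random((n_samples, n_pixels))
linear_recon = truth + 0.2 * rng.standard_normal((n_samples, n_pixels))  # linear solve = truth + error

# Post-processing network: learns to map linear solutions to refined solutions.
net = MLPRegressor(hidden_layer_sizes=(256,), max_iter=500, random_state=0)
net.fit(linear_recon, truth)

refined = net.predict(linear_recon[:1])   # refine a new linear reconstruction
print(refined.shape)
```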
SU-E-I-01: Iterative CBCT Reconstruction with a Feature-Preserving Penalty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lyu, Q; Li, B; Southern Medical University, Guangzhou
2015-06-15
Purpose: Low-dose CBCT is desired in various clinical applications. Iterative image reconstruction algorithms have shown advantages in suppressing noise in low-dose CBCT. However, due to the smoothness constraint enforced during the reconstruction process, edges may be blurred and image features may be lost in the reconstructed image. In this work, we proposed a new penalty design to preserve image features in images reconstructed by iterative algorithms. Methods: Low-dose CBCT is reconstructed by minimizing the penalized weighted least-squares (PWLS) objective function. Binary Robust Independent Elementary Features (BRIEF) of the image were integrated into the penalty of PWLS. BRIEF is a general-purpose point descriptor that can be used to identify important features of an image. In this work, the BRIEF distance of two neighboring pixels was used to weigh the smoothing parameter in PWLS. For pixels with a large BRIEF distance, a weaker smoothness constraint is enforced. Image features are better preserved through such a design. The performance of the PWLS algorithm with the BRIEF penalty was evaluated on a CatPhan 600 phantom. Results: The quality of the image reconstructed by the proposed PWLS-BRIEF algorithm is superior to that of the conventional PWLS method and the standard FDK method. At matched noise level, edges in the PWLS-BRIEF reconstructed image are better preserved. Conclusion: This study demonstrated that the proposed PWLS-BRIEF algorithm has great potential for preserving image features in low-dose CBCT.
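A generic PWLS objective with pairwise penalty weights has the form (y - Ax)^T W (y - Ax) + beta * sum_jk w_jk (x_j - x_k)^2, and lowering w_jk across detected feature edges weakens the smoothing there. The sketch below uses an identity forward operator (a denoising special case) and a hand-made edge weight standing in for the BRIEF-distance weighting; all parameters are illustrative assumptions, not the authors' CBCT implementation.

```python
import numpy as np

def pwls_denoise(y, W, edge_w, beta=1.0, step=0.1, n_iters=500):
    """Minimize (y - x)^T W (y - x) + beta * sum_j edge_w[j] * (x[j+1] - x[j])^2 by gradient descent.
    The forward operator is the identity here purely to keep the toy example small."""
    x = y.copy()
    for _ in range(n_iters):
        grad_fit = -2.0 * (W @ (y - x))        # data-fidelity gradient
        d = np.diff(x)                         # neighbouring differences x[j+1] - x[j]
        grad_pen = np.zeros_like(x)
        grad_pen[:-1] += -2.0 * edge_w * d
        grad_pen[1:] += 2.0 * edge_w * d
        x -= step * (grad_fit + beta * grad_pen)
    return x

rng = np.random.default_rng(0)
n = 64
x_true = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])   # signal with one feature edge
y = x_true + 0.2 * rng.standard_normal(n)
W = np.eye(n)                                                  # statistical weights (identity here)
edge_w = np.ones(n - 1)
edge_w[n // 2 - 1] = 0.01   # small penalty weight across the detected edge (stands in for BRIEF weighting)
print(np.round(pwls_denoise(y, W, edge_w), 2))
```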
Banjak, Hussein; Grenier, Thomas; Epicier, Thierry; Koneti, Siddardha; Roiban, Lucian; Gay, Anne-Sophie; Magnin, Isabelle; Peyrin, Françoise; Maxim, Voichita
2018-06-01
Fast tomography in Environmental Transmission Electron Microscopy (ETEM) is of great interest for in situ experiments, where it allows the 3D real-time evolution of nanomaterials to be observed under operating conditions. In this context, we are working on speeding up the acquisition step to a few seconds, mainly with applications to nanocatalysts. In order to accomplish such rapid acquisitions of the required tilt series of projections, a modern 4K high-speed camera is used, which can capture up to 100 images per second in a 2K binning mode. However, due to the fast rotation of the sample during the tilt procedure, noise and blur effects may occur in many projections, which in turn leads to poor-quality reconstructions. Blurred projections make classical reconstruction algorithms inappropriate and require the use of prior information. In this work, a regularized algebraic reconstruction algorithm named SIRT-FISTA-TV is proposed. The performance of this algorithm on blurred data is studied by means of a numerical blur introduced into simulated image series to mimic possible mechanical instabilities/drifts during fast acquisitions. We also present reconstruction results from noisy data to show the robustness of the algorithm to noise. Finally, we show reconstructions with experimental datasets and demonstrate the interest of fast tomography with an ultra-fast acquisition performed under environmental conditions, i.e. gas and temperature, in the ETEM. Compared to the classically used SIRT and SART approaches, our proposed SIRT-FISTA-TV reconstruction algorithm provides higher quality tomograms, allowing easier segmentation of the reconstructed volume for better final processing and analysis. Copyright © 2018 Elsevier B.V. All rights reserved.
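The name SIRT-FISTA-TV suggests a FISTA-accelerated loop whose gradient step resembles an algebraic (SIRT-style) update and whose proximal step is a TV denoising. The sketch below shows that generic structure on a small 2D toy problem, using a plain least-squares gradient and scikit-image's Chambolle denoiser as the TV step; the projector, phantom and parameters are simplified assumptions and not the authors' implementation.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def fista_tv(y, A, shape, n_iters=50, step=None, tv_weight=0.05):
    """FISTA: gradient step on ||Ax - y||^2 followed by a TV proximal (denoising) step."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iters):
        grad = A.T @ (A @ z - y)                     # least-squares gradient (stands in for SIRT update)
        x_new = z - step * grad
        x_new = denoise_tv_chambolle(x_new.reshape(shape),
                                     weight=tv_weight).ravel()   # TV proximal step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t ** 2))
        z = x_new + (t - 1.0) / t_new * (x_new - x)  # FISTA momentum
        x, t = x_new, t_new
    return x.reshape(shape)

rng = np.random.default_rng(0)
shape = (16, 16)
x_true = np.zeros(shape); x_true[4:12, 4:12] = 1.0   # piecewise-constant phantom
A = rng.standard_normal((400, x_true.size)) / 20.0   # stand-in for a projection operator
y = A @ x_true.ravel() + 0.01 * rng.standard_normal(400)
rec = fista_tv(y, A, shape)
print(np.round(rec[8], 2))
```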
Wavelet-based edge correlation incorporated iterative reconstruction for undersampled MRI.
Hu, Changwei; Qu, Xiaobo; Guo, Di; Bao, Lijun; Chen, Zhong
2011-09-01
Undersampling k-space is an effective way to decrease acquisition time for MRI. However, aliasing artifacts introduced by undersampling may blur the edges of magnetic resonance images, which often contain important information for clinical diagnosis. Moreover, k-space data is often contaminated by noise signals of unknown intensity. To better preserve the edge features while suppressing the aliasing artifacts and noise, we present a new wavelet-based algorithm for undersampled MRI reconstruction. The algorithm solves the image reconstruction as a standard optimization problem including an ℓ2 data fidelity term and an ℓ1 sparsity regularization term. Rather than manually setting the regularization parameter for the ℓ1 term, which is directly related to the threshold, an automatically estimated threshold adaptive to the noise intensity is introduced in our proposed algorithm. In addition, a prior matrix based on edge correlation in the wavelet domain is incorporated into the regularization term. Compared with the nonlinear conjugate gradient descent algorithm, the iterative shrinkage/thresholding algorithm, the fast iterative soft-thresholding algorithm and the iterative thresholding algorithm using an exponentially decreasing threshold, the proposed algorithm yields reconstructions with better edge recovery and noise suppression. Copyright © 2011 Elsevier Inc. All rights reserved.
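The core of such wavelet-regularized reconstructions is an iterative shrinkage loop: soft-threshold the wavelet coefficients, then restore consistency with the acquired k-space samples. The sketch below shows that loop for a toy image with a random undersampling mask using PyWavelets; the wavelet, fixed threshold and sampling pattern are illustrative assumptions, and the edge-correlation prior and adaptive threshold from the paper are not implemented.

```python
import numpy as np
import pywt

def wavelet_ista_recon(kspace, mask, n_iters=50, threshold=0.02, wavelet="haar"):
    """Alternate soft-thresholding of wavelet coefficients with k-space data consistency."""
    img = np.abs(np.fft.ifft2(kspace))               # zero-filled initial estimate
    for _ in range(n_iters):
        # Soft-threshold the wavelet coefficients (the l1 proximal step).
        coeffs = pywt.wavedec2(img, wavelet, level=3)
        coeffs = [coeffs[0]] + [
            tuple(pywt.threshold(d, threshold, mode="soft") for d in detail)
            for detail in coeffs[1:]
        ]
        img = pywt.waverec2(coeffs, wavelet)
        # Data consistency: keep the measured k-space samples where they were acquired.
        k = np.fft.fft2(img)
        k[mask] = kspace[mask]
        img = np.abs(np.fft.ifft2(k))
    return img

rng = np.random.default_rng(0)
x_true = np.zeros((64, 64)); x_true[16:48, 16:48] = 1.0
mask = rng.random((64, 64)) < 0.4                    # random 40% undersampling mask
kspace = np.fft.fft2(x_true) * mask
print(np.round(wavelet_ista_recon(kspace, mask)[32, 30:36], 2))
```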
M-AMST: an automatic 3D neuron tracing method based on mean shift and adapted minimum spanning tree.
Wan, Zhijiang; He, Yishan; Hao, Ming; Yang, Jian; Zhong, Ning
2017-03-29
Understanding the working mechanism of the brain is one of the grandest challenges for modern science. Toward this end, the BigNeuron project was launched to gather a worldwide community to establish a big data resource and a set of state-of-the-art single neuron reconstruction algorithms. Many groups contributed their own algorithms to the project, including our mean shift and minimum spanning tree (M-MST) method. Although M-MST is intuitive and easy to implement, the MST considers only the spatial information of the single neuron and ignores the shape information, which might lead to less precise connections between some neuron segments. In this paper, we propose an improved algorithm, namely M-AMST, in which a rotating sphere model based on coordinate transformation is used to improve the weight calculation method in M-MST. Two experiments are designed to illustrate, respectively, the effect of the adapted minimum spanning tree algorithm and the adaptability of M-AMST in reconstructing a variety of neuron image datasets. In experiment 1, taking the reconstruction of APP2 as the reference, we produce the four difference scores (entire structure average (ESA), different structure average (DSA), percentage of different structure (PDS) and max distance of neurons' nodes (MDNN)) by comparing the neuron reconstruction of APP2 with those of the other 5 competing algorithms. The results show that M-AMST obtains lower difference scores than M-MST in ESA, PDS and MDNN. Meanwhile, M-AMST is better than N-MST in ESA and MDNN. This indicates that utilizing the adapted minimum spanning tree algorithm, which takes the shape information of the neuron into account, can achieve better neuron reconstructions. In experiment 2, 7 neuron image datasets are reconstructed and the four difference scores are calculated by comparing the gold standard reconstruction with the reconstructions produced by the 6 competing algorithms. Comparing the four difference scores of M-AMST and the other 5 algorithms, we conclude that M-AMST achieves the best difference score in 3 datasets and the second-best difference score in the other 2 datasets. We develop a pathway extraction method using a rotating sphere model based on coordinate transformation to improve the weight calculation approach in MST. The experimental results show that M-AMST, which utilizes the adapted minimum spanning tree algorithm taking the shape information of the neuron into account, can achieve better neuron reconstructions. Moreover, M-AMST is able to produce good neuron reconstructions for a variety of image datasets.
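A minimal way to see how shape information can enter an MST-based tracer is to blend the Euclidean distance between candidate neuron nodes with a local direction-mismatch term before computing the spanning tree. The sketch below does exactly that with SciPy; the direction-mismatch weight is a stand-in assumption and is not the rotating-sphere model used by M-AMST.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

def shape_aware_mst(points, directions, alpha=0.5):
    """MST over neuron nodes with edge weights mixing distance and local direction mismatch."""
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=2)
    # Direction mismatch: 1 - |cos angle| between the local fibre directions of the two nodes.
    cos = np.abs(np.einsum("id,jd->ij", directions, directions))
    weights = dist * (1.0 + alpha * (1.0 - cos))
    np.fill_diagonal(weights, 0.0)
    return minimum_spanning_tree(csr_matrix(weights))

rng = np.random.default_rng(0)
pts = rng.random((30, 3))
dirs = rng.standard_normal((30, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
tree = shape_aware_mst(pts, dirs)
print(tree.nnz)   # number of edges in the spanning tree (n - 1)
```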
Unsupervised building detection from irregularly spaced LiDAR and aerial imagery
NASA Astrophysics Data System (ADS)
Shorter, Nicholas Sven
As more data sources containing 3-D information are becoming available, an increased interest in 3-D imaging has emerged. Among these is the 3-D reconstruction of buildings and other man-made structures. A necessary preprocessing step is the detection and isolation of individual buildings that subsequently can be reconstructed in 3-D using various methodologies. Applications for both building detection and reconstruction have commercial use for urban planning, network planning for mobile communication (cell phone tower placement), spatial analysis of air pollution and noise nuisances, microclimate investigations, geographical information systems, security services and change detection from areas affected by natural disasters. Building detection and reconstruction are also used in the military for automatic target recognition and in entertainment for virtual tourism. Previously proposed building detection and reconstruction algorithms solely utilized aerial imagery. With the advent of Light Detection and Ranging (LiDAR) systems providing elevation data, current algorithms explore using captured LiDAR data as an additional feasible source of information. Additional sources of information can lead to automating techniques (alleviating their need for manual user intervention) as well as increasing their capabilities and accuracy. Several building detection approaches surveyed in the open literature have fundamental weaknesses that hinder their use; such as requiring multiple data sets from different sensors, mandating certain operations to be carried out manually, and limited functionality to only being able to detect certain types of buildings. In this work, a building detection system is proposed and implemented which strives to overcome the limitations seen in existing techniques. The developed framework is flexible in that it can perform building detection from just LiDAR data (first or last return), or just nadir, color aerial imagery. If data from both LiDAR and aerial imagery are available, then the algorithm will use them both for improved accuracy. Additionally, the proposed approach does not employ severely limiting assumptions thus enabling the end user to apply the approach to a wider variety of different building types. The proposed approach is extensively tested using real data sets and it is also compared with other existing techniques. Experimental results are presented.
Shading correction assisted iterative cone-beam CT reconstruction
NASA Astrophysics Data System (ADS)
Yang, Chunlin; Wu, Pengwei; Gong, Shutao; Wang, Jing; Lyu, Qihui; Tang, Xiangyang; Niu, Tianye
2017-11-01
Recent advances in total variation (TV) technology enable accurate CT image reconstruction from highly under-sampled and noisy projection data. The standard iterative reconstruction algorithms, which work well in conventional CT imaging, fail to perform as expected in cone beam CT (CBCT) applications, wherein the non-ideal physics issues, including scatter and beam hardening, are more severe. These physics issues result in large areas of shading artifacts and cause deterioration to the piecewise constant property assumed in reconstructed images. To overcome this obstacle, we incorporate a shading correction scheme into low-dose CBCT reconstruction and propose a clinically acceptable and stable three-dimensional iterative reconstruction method that is referred to as the shading correction assisted iterative reconstruction. In the proposed method, we modify the TV regularization term by adding a shading compensation image to the reconstructed image to compensate for the shading artifacts while leaving the data fidelity term intact. This compensation image is generated empirically, using image segmentation and low-pass filtering, and updated in the iterative process whenever necessary. When the compensation image is determined, the objective function is minimized using the fast iterative shrinkage-thresholding algorithm accelerated on a graphics processing unit. The proposed method is evaluated using CBCT projection data of the Catphan© 600 phantom and two pelvis patients. Compared with the iterative reconstruction without shading correction, the proposed method reduces the overall CT number error from around 200 HU to around 25 HU and increases the spatial uniformity by 20 percent, given the same number of sparsely sampled projections. A clinically acceptable and stable iterative reconstruction algorithm for CBCT is proposed in this paper. Differing from the existing algorithms, this algorithm incorporates a shading correction scheme into the low-dose CBCT reconstruction and achieves a more stable optimization path and a more clinically acceptable reconstructed image. The proposed method does not rely on prior information and is thus practically attractive for low-dose CBCT imaging applications in the clinic.
Tomography by iterative convolution - Empirical study and application to interferometry
NASA Technical Reports Server (NTRS)
Vest, C. M.; Prikryl, I.
1984-01-01
An algorithm for computer tomography has been developed that is applicable to reconstruction from data having incomplete projections because an opaque object blocks some of the probing radiation as it passes through the object field. The algorithm is based on iteration between the object domain and the projection (Radon transform) domain. Reconstructions are computed during each iteration by the well-known convolution method. Although it is demonstrated that this algorithm does not converge, an empirically justified criterion for terminating the iteration when the most accurate estimate has been computed is presented. The algorithm has been studied by using it to reconstruct several different object fields with several different opaque regions. It also has been used to reconstruct aerodynamic density fields from interferometric data recorded in wind tunnel tests.
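The object-domain/projection-domain iteration described above can be sketched with scikit-image's radon and iradon transforms: reconstruct by convolution (FBP), forward project, re-insert the measured rays and repeat, filling the rays blocked by the opaque object. The mask, phantom and fixed iteration count below are illustrative assumptions; the paper's empirical termination criterion is not implemented.

```python
import numpy as np
from skimage.transform import radon, iradon

def iterative_convolution_recon(sinogram, known_mask, theta, n_iters=10):
    """Iterate between the projection domain (restore measured rays) and the object
    domain (FBP / convolution reconstruction), filling blocked projection regions."""
    current = sinogram.copy()
    image = iradon(current, theta=theta)
    for _ in range(n_iters):
        reproj = radon(image, theta=theta)               # forward project current estimate
        current = np.where(known_mask, sinogram, reproj) # keep measured rays, fill blocked ones
        image = iradon(current, theta=theta)
    return image

phantom = np.zeros((64, 64)); phantom[20:44, 20:44] = 1.0
theta = np.linspace(0.0, 180.0, 90, endpoint=False)
sino = radon(phantom, theta=theta)
mask = np.ones_like(sino, dtype=bool); mask[25:40, :] = False   # rays blocked by an opaque object
rec = iterative_convolution_recon(sino * mask, mask, theta)
print(rec.shape)
```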
Khor, Joo Moy; Tizzard, Andrew; Demosthenous, Andreas; Bayford, Richard
2014-06-01
Electrical impedance tomography (EIT) could be significantly advantageous to continuous monitoring of lung development in newborn and, in particular, preterm infants as it is non-invasive and safe to use within the intensive care unit. It has been demonstrated that accurate boundary form of the forward model is important to minimize artefacts in reconstructed electrical impedance images. This paper presents the outcomes of initial investigations for acquiring patient-specific thorax boundary information using a network of flexible sensors that imposes no restrictions on the patient's normal breathing and movements. The investigations include: (1) description of the basis of the reconstruction algorithms, (2) tests to determine a minimum number of bend sensors, (3) validation of two approaches to reconstruction and (4) an example of a commercially available bend sensor and its performance. Simulation results using ideal sensors show that, in the worst case, a total shape error of less than 6% with respect to its total perimeter can be achieved.
Research of centroiding algorithms for extended and elongated spot of sodium laser guide star
NASA Astrophysics Data System (ADS)
Shao, Yayun; Zhang, Yudong; Wei, Kai
2016-10-01
Laser guide stars (LGSs) increase the sky coverage of astronomical adaptive optics systems. However, the spot array obtained by Shack-Hartmann wave front sensors (WFSs) becomes extended and elongated, due to the thickness and size limitation of the sodium LGS, which affects the accuracy of the wave front reconstruction algorithm. In this paper, we compare three different centroiding algorithms, the Center-of-Gravity (CoG), weighted CoG (WCoG) and Intensity Weighted Centroid (IWC), and evaluate their accuracies for various extended and elongated spots. In addition, we compare the reconstructed image data from the three algorithms with theoretical results, and show that WCoG and IWC are the best wave front reconstruction algorithms for extended and elongated spots among all the algorithms considered.
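The three centroid estimators compared above reduce to different weightings of the same first-moment computation. The sketch below uses common textbook definitions (a fixed Gaussian window for WCoG and squared intensity for IWC) on a toy elongated spot; these weight choices and the spot model are assumptions, not necessarily the exact forms used in the paper.

```python
import numpy as np

def centroids(spot, weight):
    """Return CoG, weighted CoG and intensity-weighted centroid of a 2D spot image."""
    y, x = np.indices(spot.shape)
    cog = (np.sum(x * spot) / np.sum(spot), np.sum(y * spot) / np.sum(spot))
    w = spot * weight                                   # WCoG: fixed weight map (e.g. Gaussian window)
    wcog = (np.sum(x * w) / np.sum(w), np.sum(y * w) / np.sum(w))
    iw = spot ** 2                                      # IWC: intensity used as its own weight
    iwc = (np.sum(x * iw) / np.sum(iw), np.sum(y * iw) / np.sum(iw))
    return cog, wcog, iwc

# Toy elongated spot: anisotropic Gaussian plus background noise.
yy, xx = np.mgrid[0:32, 0:32]
spot = np.exp(-((xx - 18.0) ** 2 / 30.0 + (yy - 14.0) ** 2 / 8.0)) \
       + 0.01 * np.random.default_rng(0).random((32, 32))
window = np.exp(-((xx - 16.0) ** 2 + (yy - 16.0) ** 2) / 50.0)    # fixed Gaussian weighting window
print(centroids(spot, window))
```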
RMP: Reduced-set matching pursuit approach for efficient compressed sensing signal reconstruction.
Abdel-Sayed, Michael M; Khattab, Ahmed; Abu-Elyazeed, Mohamed F
2016-11-01
Compressed sensing enables the acquisition of sparse signals at a rate that is much lower than the Nyquist rate. Compressed sensing initially adopted ℓ1 minimization for signal reconstruction, which is computationally expensive. Several greedy recovery algorithms have been recently proposed for signal reconstruction at a lower computational complexity compared to the optimal ℓ1 minimization, while maintaining a good reconstruction accuracy. In this paper, the Reduced-set Matching Pursuit (RMP) greedy recovery algorithm is proposed for compressed sensing. Unlike existing approaches which either select too many or too few values per iteration, RMP aims at selecting the most sufficient number of correlation values per iteration, which improves both the reconstruction time and error. Furthermore, RMP prunes the estimated signal, and hence, excludes the incorrectly selected values. The RMP algorithm achieves a higher reconstruction accuracy at a significantly lower computational complexity compared to existing greedy recovery algorithms. It is even superior to ℓ1 minimization in terms of the normalized time-error product, a new metric introduced to measure the trade-off between the reconstruction time and error. RMP's superior performance is illustrated with both noiseless and noisy samples.
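A minimal sketch of the greedy select-fit-prune structure the abstract describes: correlate the residual with the dictionary, keep a handful of best-matching atoms, least-squares fit on the enlarged support, then prune back to the sparsity level. This is not the published RMP pseudocode; the per-iteration selection size and stopping rule are assumptions.

```python
import numpy as np

def greedy_recover(y, Phi, sparsity, select_per_iter=4, n_iters=20, tol=1e-8):
    """Greedy sparse recovery: iteratively select atoms by correlation, least-squares fit, prune."""
    m, n = Phi.shape
    support = np.array([], dtype=int)
    x = np.zeros(n)
    residual = y.copy()
    for _ in range(n_iters):
        corr = np.abs(Phi.T @ residual)
        new = np.argsort(corr)[-select_per_iter:]                   # best-matching columns this iteration
        support = np.union1d(support, new)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)  # fit on the enlarged support
        keep = support[np.argsort(np.abs(coef))[-sparsity:]]        # prune to the sparsity level
        coef, *_ = np.linalg.lstsq(Phi[:, keep], y, rcond=None)
        x = np.zeros(n); x[keep] = coef
        support = keep
        residual = y - Phi @ x
        if np.linalg.norm(residual) < tol:
            break
    return x

rng = np.random.default_rng(0)
m, n, k = 40, 128, 5
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n); x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = greedy_recover(Phi @ x_true, Phi, sparsity=k)
print(np.linalg.norm(x_hat - x_true))   # recovery error
```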
Yi, Faliu; Moon, Inkyu; Javidi, Bahram
2017-10-01
In this paper, we present two models for automatically extracting red blood cells (RBCs) from RBCs holographic images based on a deep learning fully convolutional neural network (FCN) algorithm. The first model, called FCN-1, only uses the FCN algorithm to carry out RBCs prediction, whereas the second model, called FCN-2, combines the FCN approach with the marker-controlled watershed transform segmentation scheme to achieve RBCs extraction. Both models achieve good segmentation accuracy. In addition, the second model has much better performance in terms of cell separation than traditional segmentation methods. In the proposed methods, the RBCs phase images are first numerically reconstructed from RBCs holograms recorded with off-axis digital holographic microscopy. Then, some RBCs phase images are manually segmented and used as training data to fine-tune the FCN. Finally, each pixel in new input RBCs phase images is predicted into either foreground or background using the trained FCN models. The RBCs prediction result from the first model is the final segmentation result, whereas the result from the second model is used as the internal markers of the marker-controlled transform algorithm for further segmentation. Experimental results show that the given schemes can automatically extract RBCs from RBCs phase images and much better RBCs separation results are obtained when the FCN technique is combined with the marker-controlled watershed segmentation algorithm.
Yu, Hua; Jiao, Bingke; Lu, Lu; Wang, Pengfei; Chen, Shuangcheng; Liang, Chengzhi; Liu, Wei
2018-01-01
Accurately reconstructing gene co-expression networks is of great importance for uncovering the genetic architecture underlying complex and various phenotypes. The recent availability of high-throughput RNA-seq sequencing has made genome-wide detection and quantification of novel, rare and low-abundance transcripts practical. However, its potential merits in reconstructing gene co-expression networks have still not been well explored. Using massive-scale RNA-seq samples, we have designed an ensemble pipeline, called NetMiner, for building a genome-scale, high-quality Gene Co-expression Network (GCN) by integrating three frequently used inference algorithms. We constructed an RNA-seq-based GCN in rice, a monocot species. The quality of the network obtained by our method was verified and evaluated using curated gene functional association data sets, and it clearly outperformed that of each single method. In addition, the powerful capability of the network to associate genes with functions and agronomic traits was shown by enrichment analysis and case studies. In particular, we demonstrated the potential value of our proposed method to predict the biological roles of unknown protein-coding genes, long non-coding RNA (lncRNA) genes and circular RNA (circRNA) genes. Our results provided a valuable and highly reliable data source to select key candidate genes for subsequent experimental validation. To facilitate identification of novel genes regulating important biological processes and phenotypes in other plants or animals, we have published the source code of NetMiner, making it freely available at https://github.com/czllab/NetMiner.
NASA Astrophysics Data System (ADS)
Ma, Ming; Wang, Huafeng; Liu, Yan; Zhang, Hao; Gu, Xianfeng; Liang, Zhengrong
2014-03-01
Cone-beam computed tomography (CBCT) has attracted growing interest among researchers in image reconstruction. The mAs level of the X-ray tube current, in practical application of CBCT, is mitigated in order to reduce the CBCT dose. The lowering of the X-ray tube current, however, results in the degradation of image quality. Thus, low-dose CBCT image reconstruction is in effect a noise problem. To acquire clinically acceptable image quality while keeping the X-ray tube current as low as achievable, some penalized weighted least-squares (PWLS)-based image reconstruction algorithms have been developed. One representative strategy in previous work is to model the prior information for solution regularization using an anisotropic penalty term. To enhance edge preservation and noise suppression at a finer scale, a novel algorithm combining the local binary pattern (LBP) with penalized weighted least-squares (PWLS), called the LBP-PWLS-based image reconstruction algorithm, is proposed in this work. The proposed LBP-PWLS-based algorithm adaptively encourages strong diffusion on the local spot/flat region around a voxel and less diffusion on edge/corner ones by adjusting the penalty in the cost function, after the LBP is utilized to classify the region around each voxel as spot, flat or edge. The LBP-PWLS-based reconstruction algorithm was evaluated using sinogram data acquired by a clinical CT scanner from the CatPhan® 600 phantom. Experimental results on the noise-resolution tradeoff measurement and other quantitative measurements demonstrated its feasibility and effectiveness in edge preserving and noise suppressing in comparison with a previous PWLS reconstruction algorithm.
Kole, J S; Beekman, F J
2006-02-21
Statistical reconstruction methods offer possibilities to improve image quality as compared with analytical methods, but current reconstruction times prohibit routine application in clinical and micro-CT. In particular, for cone-beam x-ray CT, the use of graphics hardware has been proposed to accelerate the forward and back-projection operations, in order to reduce reconstruction times. In the past, wide application of this texture hardware mapping approach was hampered owing to limited intrinsic accuracy. Recently, however, floating point precision has become available in the latest generation commodity graphics cards. In this paper, we utilize this feature to construct a graphics hardware accelerated version of the ordered subset convex reconstruction algorithm. The aims of this paper are (i) to study the impact of using graphics hardware acceleration for statistical reconstruction on the reconstructed image accuracy and (ii) to measure the speed increase one can obtain by using graphics hardware acceleration. We compare the unaccelerated algorithm with the graphics hardware accelerated version, and for the latter we consider two different interpolation techniques. A simulation study of a micro-CT scanner with a mathematical phantom shows that at almost preserved reconstructed image accuracy, speed-ups of a factor 40 to 222 can be achieved, compared with the unaccelerated algorithm, and depending on the phantom and detector sizes. Reconstruction from physical phantom data reconfirms the usability of the accelerated algorithm for practical cases.
NASA Astrophysics Data System (ADS)
Zeng, Rongping; Badano, Aldo; Myers, Kyle J.
2017-04-01
We showed in our earlier work that the choice of reconstruction methods does not affect the optimization of DBT acquisition parameters (angular span and number of views) using simulated breast phantom images in detecting lesions with a channelized Hotelling observer (CHO). In this work we investigate whether the model-observer based conclusion is valid when using humans to interpret images. We used previously generated DBT breast phantom images and recruited human readers to find the optimal geometry settings associated with two reconstruction algorithms, filtered back projection (FBP) and simultaneous algebraic reconstruction technique (SART). The human reader results show that image quality trends as a function of the acquisition parameters are consistent between FBP and SART reconstructions. The consistent trends confirm that the optimization of DBT system geometry is insensitive to the choice of reconstruction algorithm. The results also show that humans perform better in SART reconstructed images than in FBP reconstructed images. In addition, we applied CHOs with three commonly used channel models, Laguerre-Gauss (LG) channels, square (SQR) channels and sparse difference-of-Gaussian (sDOG) channels. We found that LG channels predict human performance trends better than SQR and sDOG channel models for the task of detecting lesions in tomosynthesis backgrounds. Overall, this work confirms that the choice of reconstruction algorithm is not critical for optimizing DBT system acquisition parameters.
Reconstructing householder vectors from Tall-Skinny QR
Ballard, Grey Malone; Demmel, James; Grigori, Laura; ...
2015-08-05
The Tall-Skinny QR (TSQR) algorithm is more communication efficient than the standard Householder algorithm for QR decomposition of matrices with many more rows than columns. However, TSQR produces a different representation of the orthogonal factor and therefore requires more software development to support the new representation. Further, implicitly applying the orthogonal factor to the trailing matrix in the context of factoring a square matrix is more complicated and costly than with the Householder representation. We show how to perform TSQR and then reconstruct the Householder vector representation with the same asymptotic communication efficiency and little extra computational cost. We demonstrate the high performance and numerical stability of this algorithm both theoretically and empirically. The new Householder reconstruction algorithm allows us to design more efficient parallel QR algorithms, with significantly lower latency cost compared to Householder QR and lower bandwidth and latency costs compared with the Communication-Avoiding QR (CAQR) algorithm. Experiments on supercomputers demonstrate the benefits of the communication cost improvements: in particular, our experiments show substantial improvements over tuned library implementations for tall-and-skinny matrices. Furthermore, we also provide algorithmic improvements to the Householder QR and CAQR algorithms, and we investigate several alternatives to the Householder reconstruction algorithm that sacrifice guarantees on numerical stability in some cases in order to obtain higher performance.
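The communication pattern of TSQR can be illustrated with a two-level reduction: factor row blocks independently, stack the small R factors and factor the stack again. The sketch below shows that structure with NumPy on a single node; the block count is arbitrary and the Householder-vector reconstruction step that the paper contributes is not shown.

```python
import numpy as np

def tsqr(A, n_blocks=4):
    """Tall-Skinny QR: QR each row block, then QR the stacked R factors."""
    m, n = A.shape
    blocks = np.array_split(A, n_blocks, axis=0)
    Qs, Rs = zip(*(np.linalg.qr(B) for B in blocks))        # local factorizations (parallel in practice)
    Q2, R = np.linalg.qr(np.vstack(Rs))                     # reduce the stacked n x n R factors
    # Assemble the global Q by applying the second-level Q blocks to the local Qs.
    Q = np.vstack([Qs[i] @ Q2[i * n:(i + 1) * n] for i in range(len(blocks))])
    return Q, R

rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 8))
Q, R = tsqr(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(8)))
```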
Extraction of Martian valley networks from digital topography
NASA Technical Reports Server (NTRS)
Stepinski, T. F.; Collier, M. L.
2004-01-01
We have developed a novel method for delineating valley networks on Mars. The valleys are inferred from digital topography by an autonomous computer algorithm as drainage networks, instead of being manually mapped from images. Individual drainage basins are precisely defined and reconstructed to restore flow continuity disrupted by craters. Drainage networks are extracted from their underlying basins using the contributing area threshold method. We demonstrate that such drainage networks coincide with mapped valley networks, verifying that valley networks are indeed drainage systems. Our procedure is capable of delineating and analyzing valley networks with unparalleled speed and consistency. We have applied this method to 28 Noachian locations on Mars exhibiting prominent valley networks. All extracted networks have a planar morphology similar to that of terrestrial river networks. They are characterized by a drainage density of approximately 0.1/km, which is low in comparison to the drainage density of terrestrial river networks. Slopes of "streams" in Martian valley networks decrease downstream at a slower rate than slopes of streams in terrestrial river networks. This analysis, based on a sizable data set of valley networks, reveals that although valley networks have some features pointing to their origin by precipitation-fed runoff erosion, their quantitative characteristics suggest that precipitation intensity and/or longevity of past pluvial climate were inadequate to develop mature drainage basins on Mars.
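The contributing-area-threshold step mentioned above can be illustrated with a small D8 flow-routing sketch: route each cell to its steepest-descent neighbour, accumulate areas from high to low elevation, and keep cells whose contributing area exceeds a threshold. The synthetic DEM and threshold below are toy assumptions; crater reconstruction and basin delineation from the paper are not included.

```python
import numpy as np

def d8_flow_accumulation(dem):
    """D8 flow routing: each cell drains to its steepest-descent neighbour; accumulate contributing areas."""
    rows, cols = dem.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    receiver = -np.ones((rows, cols, 2), dtype=int)
    for r in range(rows):
        for c in range(cols):
            best, drop = None, 0.0
            for dr, dc in offsets:
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    d = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
                    if d > drop:
                        best, drop = (rr, cc), d
            if best:
                receiver[r, c] = best
    area = np.ones((rows, cols))
    order = np.dstack(np.unravel_index(np.argsort(dem, axis=None)[::-1], dem.shape))[0]
    for r, c in order:                       # process cells from high to low elevation
        rr, cc = receiver[r, c]
        if rr >= 0:
            area[rr, cc] += area[r, c]
    return area

dem = np.add.outer(np.linspace(10, 0, 20), np.abs(np.linspace(-3, 3, 20)))  # a tilted valley
area = d8_flow_accumulation(dem)
network = area > 15                          # contributing-area threshold defines the drainage network
print(network.sum())
```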
Interval-based reconstruction for uncertainty quantification in PET
NASA Astrophysics Data System (ADS)
Kucharczak, Florentin; Loquin, Kevin; Buvat, Irène; Strauss, Olivier; Mariano-Goulart, Denis
2018-02-01
A new directed interval-based tomographic reconstruction algorithm, called non-additive interval based expectation maximization (NIBEM), is presented. It uses non-additive modeling of the forward operator that provides intervals instead of single-valued projections. The detailed approach is an extension of the maximum likelihood expectation maximization algorithm based on intervals. The main motivation for this extension is that the resulting intervals have appealing properties for estimating the statistical uncertainty associated with the reconstructed activity values. After reviewing previously published theoretical concepts related to interval-based projectors, this paper describes the NIBEM algorithm and gives examples that highlight the properties and advantages of this interval valued reconstruction.
Banakh, V A; Marakasov, D A
2007-08-01
Reconstruction of a wind profile based on the statistics of plane-wave intensity fluctuations in a turbulent atmosphere is considered. The algorithm for wind profile retrieval from the spatiotemporal spectrum of plane-wave weak intensity fluctuations is described, and the results of end-to-end computer experiments on wind profiling based on the developed algorithm are presented. It is shown that the reconstructing algorithm allows retrieval of a wind profile from turbulent plane-wave intensity fluctuations with acceptable accuracy.
NASA Astrophysics Data System (ADS)
Ouillon, G.; Ducorbier, C.; Sornette, D.
2008-01-01
We propose a new pattern recognition method that is able to reconstruct the three-dimensional structure of the active part of a fault network using the spatial location of earthquakes. The method is a generalization of the so-called dynamic clustering (or k-means) method, which partitions a set of data points into clusters using a global minimization criterion on the variance of the hypocenter locations about their center of mass. The new method improves on the original k-means method by taking into account the full spatial covariance tensor of each cluster in order to partition the data set into fault-like, anisotropic clusters. Given a catalog of seismic events, the output is the optimal set of plane segments that fits the spatial structure of the data. Each plane segment is fully characterized by its location, size, and orientation. The main tunable parameter is the accuracy of the earthquake locations, which fixes the resolution, i.e., the residual variance of the fit. The resolution determines the number of fault segments needed to describe the earthquake catalog: the better the resolution, the finer the structure of the reconstructed fault segments. The algorithm successfully reconstructs the fault segments of synthetic earthquake catalogs. Applied to the real catalog constituted of a subset of the aftershock sequence of the 28 June 1992 Landers earthquake in southern California, the reconstructed plane segments fully agree with faults already known on geological maps or with blind faults that appear quite obvious in longer-term catalogs. Future improvements of the method are discussed, as well as its potential use in the multiscale study of the inner structure of fault zones.
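A simplified, flat-plane variant of the anisotropic clustering idea can be sketched as follows: assign each hypocenter to the nearest plane segment by perpendicular distance, then refit each segment's center and normal from its cluster covariance (the smallest-variance eigenvector). The segment count, synthetic catalog and iteration scheme are toy assumptions, not the authors' full procedure with resolution-controlled segment numbers.

```python
import numpy as np

def fit_plane_segments(points, n_segments=2, n_iters=20, seed=0):
    """Anisotropic k-means: assign hypocenters to planes, refit each plane from its cluster covariance."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), n_segments, replace=False)]
    normals = rng.standard_normal((n_segments, 3))
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    for _ in range(n_iters):
        # Assign each point to the plane (center, normal) with the smallest perpendicular distance.
        dists = np.abs(np.einsum("kd,nkd->nk", normals, points[:, None, :] - centers[None, :, :]))
        labels = np.argmin(dists, axis=1)
        for k in range(n_segments):
            cluster = points[labels == k]
            if len(cluster) < 3:
                continue
            centers[k] = cluster.mean(axis=0)
            eigvals, eigvecs = np.linalg.eigh(np.cov(cluster.T))
            normals[k] = eigvecs[:, 0]        # smallest-variance direction = plane normal
    return centers, normals, labels

rng = np.random.default_rng(1)
plane1 = np.c_[rng.uniform(0, 10, 200), rng.uniform(0, 10, 200), 0.1 * rng.standard_normal(200)]
plane2 = np.c_[0.1 * rng.standard_normal(200), rng.uniform(0, 10, 200), rng.uniform(0, 10, 200)]
centers, normals, labels = fit_plane_segments(np.vstack([plane1, plane2]))
print(np.round(np.abs(normals), 2))
```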
Chang, Hing-Chiu; Guhaniyogi, Shayan; Chen, Nan-kuei
2014-01-01
Purpose We report a series of techniques to reliably eliminate artifacts in interleaved echo-planar imaging (EPI) based diffusion weighted imaging (DWI). Methods First, we integrate the previously reported multiplexed sensitivity encoding (MUSE) algorithm with a new adaptive Homodyne partial-Fourier reconstruction algorithm, so that images reconstructed from interleaved partial-Fourier DWI data are free from artifacts even in the presence of either a) motion-induced k-space energy peak displacement, or b) susceptibility field gradient induced fast phase changes. Second, we generalize the previously reported single-band MUSE framework to multi-band MUSE, so that both through-plane and in-plane aliasing artifacts in multi-band multi-shot interleaved DWI data can be effectively eliminated. Results The new adaptive Homodyne-MUSE reconstruction algorithm reliably produces high-quality and high-resolution DWI, eliminating residual artifacts in images reconstructed with previously reported methods. Furthermore, the generalized MUSE algorithm is compatible with multi-band and high-throughput DWI. Conclusion The integration of the multi-band and adaptive Homodyne-MUSE algorithms significantly improves the spatial-resolution, image quality, and scan throughput of interleaved DWI. We expect that the reported reconstruction framework will play an important role in enabling high-resolution DWI for both neuroscience research and clinical uses. PMID:24925000
Dai, Erpeng; Zhang, Zhe; Ma, Xiaodong; Dong, Zijing; Li, Xuesong; Xiong, Yuhui; Yuan, Chun; Guo, Hua
2018-03-23
To study the effects of 2D navigator distortion and noise level on interleaved EPI (iEPI) DWI reconstruction, using either the image- or k-space-based method. The 2D navigator acquisition was adjusted by reducing its echo spacing in the readout direction and undersampling in the phase encoding direction. A POCS-based reconstruction using the image-space sampling function (IRIS) algorithm (POCSIRIS) was developed to reduce the impact of navigator distortion. POCSIRIS was then compared with the original IRIS algorithm and a SPIRiT-based k-space algorithm, under different navigator distortion and noise levels. Reducing the navigator distortion can improve the reconstruction of iEPI DWI. The proposed POCSIRIS and SPIRiT-based algorithms are more tolerant of different navigator distortion levels, compared to the original IRIS algorithm. SPIRiT may be hindered by the low SNR of the navigator. Multi-shot iEPI DWI reconstruction can be improved by reducing the 2D navigator distortion. Different reconstruction methods show variable sensitivity to navigator distortion or noise levels. Furthermore, the findings can be valuable in applications such as simultaneous multi-slice accelerated iEPI DWI and multi-slab diffusion imaging. © 2018 International Society for Magnetic Resonance in Medicine.
Event-by-event PET image reconstruction using list-mode origin ensembles algorithm
NASA Astrophysics Data System (ADS)
Andreyev, Andriy
2016-03-01
There is a great demand for real time or event-by-event (EBE) image reconstruction in emission tomography. Ideally, as soon as an event has been detected by the acquisition electronics, it should be used in the image reconstruction software. This would greatly speed up the image reconstruction, since most of the data would be processed and reconstructed while the patient is still undergoing the scan. Unfortunately, the current industry standard is that the reconstruction of the image does not start until all the data for the current image frame have been acquired. Implementing an EBE reconstruction for the MLEM family of algorithms is possible, but not straightforward, as multiple (computationally expensive) updates to the image estimate are required. In this work, an alternative Origin Ensembles (OE) image reconstruction algorithm for PET imaging is converted to EBE mode, and we investigate whether it is a viable alternative for real-time image reconstruction. In the OE algorithm, all acquired events are seen as points located somewhere along the corresponding lines of response (LORs), together forming a point cloud. Iteratively, through a multitude of quasi-random shifts guided by the likelihood function, the point cloud converges to a reflection of the actual radiotracer distribution with a degree of accuracy similar to MLEM. New data can be naturally added into the point cloud. Preliminary results with simulated data show little difference between regular reconstruction and EBE mode, proving the feasibility of the proposed approach.
Munsell, B C; Wu, G; Fridriksson, J; Thayer, K; Mofrad, N; Desisto, N; Shen, D; Bonilha, L
2017-09-09
Impaired confrontation naming is a common symptom of temporal lobe epilepsy (TLE). The neurobiological mechanisms underlying this impairment are poorly understood but may indicate a structural disorganization of broadly distributed neuronal networks that support naming ability. Importantly, naming is frequently impaired in other neurological disorders, and by contrasting the neuronal structures supporting naming in TLE with other diseases, it will become possible to elucidate the common systems supporting naming. We aimed to evaluate the neuronal networks that support naming in TLE by using a machine learning algorithm intended to predict naming performance in subjects with medication-refractory TLE using only the structural brain connectome reconstructed from diffusion tensor imaging. A connectome-based prediction framework was developed using network properties from anatomically defined brain regions across the entire brain, which were used in a multi-task machine learning algorithm followed by support vector regression. Nodal eigenvector centrality, a measure of regional network integration, predicted approximately 60% of the variance in naming. The nodes with the highest regression weight were bilaterally distributed among perilimbic sub-networks involving mainly the medial and lateral temporal lobe regions. In the context of emerging evidence regarding the role of large structural networks that support language processing, our results suggest intact naming relies on the integration of sub-networks, as opposed to being dependent on isolated brain areas. In the case of TLE, these sub-networks may be disproportionately indicative of naming processes that depend on semantic integration from memory and lexical retrieval, as opposed to multi-modal perception or motor speech production. Copyright © 2017. Published by Elsevier Inc.
Bowsher, Clive G
2011-02-15
Understanding the encoding and propagation of information by biochemical reaction networks and the relationship of such information processing properties to modular network structure is of fundamental importance in the study of cell signalling and regulation. However, a rigorous, automated approach for general biochemical networks has not been available, and high-throughput analysis has therefore been out of reach. Modularization Identification by Dynamic Independence Algorithms (MIDIA) is a user-friendly, extensible R package that performs automated analysis of how information is processed by biochemical networks. An important component is the algorithm's ability to identify exact network decompositions based on both the mass action kinetics and informational properties of the network. These modularizations are visualized using a tree structure from which important dynamic conditional independence properties can be directly read. Only partial stoichiometric information needs to be used as input to MIDIA, and neither simulations nor knowledge of rate parameters are required. When applied to a signalling network, for example, the method identifies the routes and species involved in the sequential propagation of information between its multiple inputs and outputs. These routes correspond to the relevant paths in the tree structure and may be further visualized using the Input-Output Path Matrix tool. MIDIA remains computationally feasible for the largest network reconstructions currently available and is straightforward to use with models written in Systems Biology Markup Language (SBML). The package is distributed under the GNU General Public License and is available, together with a link to browsable Supplementary Material, at http://code.google.com/p/midia. Further information is at www.maths.bris.ac.uk/~macgb/Software.html.
ECG-gated interventional cardiac reconstruction for non-periodic motion.
Rohkohl, Christopher; Lauritsch, Günter; Biller, Lisa; Hornegger, Joachim
2010-01-01
The 3-D reconstruction of cardiac vasculature using C-arm CT is an active and challenging field of research. In interventional environments patients often have arrhythmic heart signals or cannot hold their breath during the complete data acquisition. This important group of patients cannot be reconstructed with current approaches, which strongly depend on a high degree of cardiac motion periodicity to work properly. In last year's MICCAI contribution, a first algorithm was presented that can estimate non-periodic 4-D motion patterns. However, to some degree that algorithm still depends on periodicity, as it requires a prior image which is obtained using a simple ECG-gated reconstruction. In this work we aim to provide a solution to this problem by developing a motion compensated ECG-gating algorithm. It is built upon a 4-D time-continuous affine motion model which is capable of compactly describing highly non-periodic motion patterns. A stochastic optimization scheme is derived which minimizes the error between the measured projection data and the forward projection of the motion compensated reconstruction. For evaluation, the algorithm is applied to 5 datasets of the left coronary arteries of patients who ignored the breath-hold command and/or had arrhythmic heart signals during the data acquisition. By applying the developed algorithm the average visibility of the vessel segments could be increased by 27%. The results show that the proposed algorithm provides excellent reconstruction quality in cases where classical approaches fail. The algorithm is highly parallelizable and a clinically feasible runtime of under 4 minutes is achieved using modern graphics card hardware.
Preconditioned alternating projection algorithms for maximum a posteriori ECT reconstruction
NASA Astrophysics Data System (ADS)
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-11-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators arising from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization of the solution via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We theoretically prove the convergence of PAPA. In numerical experiments, the performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with the performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with a TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms EM-TV in all aspects, including convergence speed, noise in the reconstructed images and image quality. It also outperforms the nested EM-TV in convergence speed while providing comparable image quality.
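To make the alternating structure concrete, the hedged Python sketch below alternates an EM (sensitivity-preconditioned) data-fidelity step with a TV proximity step approximated by Chambolle TV denoising from scikit-image. It mimics the flavour of MAP EM-TV schemes but is not the authors' exact PAPA fixed-point iteration; the toy system matrix, phantom and weights are invented for the example.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(1)

# Schematic sketch only (not the exact PAPA fixed-point scheme): an
# EM-preconditioned data-fidelity step alternated with a TV proximity step,
# the latter approximated here by Chambolle TV denoising.
side = 32
n_pix, n_rays = side * side, 2000
A = (rng.random((n_rays, n_pix)) < 0.01).astype(float)   # toy ECT system matrix
x_true = np.zeros((side, side)); x_true[10:20, 12:25] = 5.0
y = rng.poisson(A @ x_true.ravel() + 0.1)                 # Poisson-distributed counts

x = np.ones(n_pix)
sens = A.T @ np.ones(n_rays) + 1e-12                      # EM preconditioner (sensitivity)
for _ in range(30):
    x = x * (A.T @ (y / (A @ x + 1e-12))) / sens          # EM data step
    x = denoise_tv_chambolle(x.reshape(side, side), weight=0.05).ravel()  # TV prox step
```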
A Two-Stage Reconstruction Processor for Human Detection in Compressive Sensing CMOS Radar
Tsao, Kuei-Chi; Lee, Ling; Chu, Ta-Shun
2018-01-01
Complementary metal-oxide-semiconductor (CMOS) radar has recently gained much research attention because small and low-power CMOS devices are very suitable for deploying sensing nodes in a low-power wireless sensing system. This study focuses on the signal processing of a wireless CMOS impulse radar system that can detect humans and objects in the home-care internet-of-things sensing system. The challenges of low-power CMOS radar systems are the weakness of human signals and the high computational complexity of the target detection algorithm. The compressive sensing-based detection algorithm can reduce the computational cost by avoiding the utilization of matched filters and reducing the analog-to-digital converter bandwidth requirement. Orthogonal matching pursuit (OMP) is one of the popular signal reconstruction algorithms for compressive sensing radar; however, its complexity is still very high because the high resolution of human respiration leads to high-dimensional signal reconstruction. Thus, this paper proposes a two-stage reconstruction algorithm for compressive sensing radar. The proposed algorithm not only reduces complexity by 75% relative to the OMP algorithm but also achieves better positioning performance than the OMP algorithm, especially in noisy environments. This study also designed and implemented the algorithm using a Virtex-7 FPGA chip (Xilinx, San Jose, CA, USA). The proposed reconstruction processor can support the 256×13 real-time radar image display with a throughput of 28.2 frames per second. PMID:29621170
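Since OMP is the baseline the two-stage processor improves upon, a plain OMP reference implementation in Python/NumPy is sketched below; the range-profile dimensions and sparsity level are illustrative assumptions, not the radar system's parameters.

```python
import numpy as np

def omp(Phi, y, k):
    """Plain orthogonal matching pursuit: recover a k-sparse vector x from
    compressive measurements y = Phi @ x (illustrative baseline only; the
    paper's two-stage processor restructures this search to cut its cost)."""
    residual = y.copy()
    support = []
    x_hat = np.zeros(Phi.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))   # most correlated atom
        if j not in support:
            support.append(j)
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coeffs
    x_hat[support] = coeffs
    return x_hat

# Toy range profile: 256 range bins, two reflectors, 64 random measurements.
rng = np.random.default_rng(2)
n, m, k = 256, 64, 2
x = np.zeros(n); x[[40, 170]] = [1.0, 0.6]
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x
print(np.flatnonzero(omp(Phi, y, k)))   # expected: indices near 40 and 170
```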
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, B; Tan, Y; Tsai, W
2014-06-15
Purpose: Radiogenomics promises the ability to study cancer tumor genotype from the phenotype obtained through radiographic imaging. However, little attention has been paid to the sensitivity of image features, the image-based biomarkers, to imaging acquisition techniques. This study explores the impact of CT dose, slice thickness and reconstruction algorithm on measuring image features using a thorax phantom. Methods: Twenty-four phantom lesions of known volume (1 and 2 mm), shape (spherical, elliptical, lobular and spicular) and density (-630, -10 and +100 HU) were scanned on a GE VCT at four doses (25, 50, 100, and 200 mAs). For each scan, six image series were reconstructed at three slice thicknesses of 5, 2.5 and 1.25 mm with continuous intervals, using the lung and standard reconstruction algorithms. The lesions were segmented with an in-house 3D algorithm. Fifty (50) image features representing lesion size, shape, edge, and density distribution/texture were computed. A regression method was employed to analyze the effect of CT dose, slice thickness and reconstruction algorithm on these features, adjusting for 3 confounding factors (size, density and shape of the phantom lesions). Results: The coefficients of CT dose, slice thickness and reconstruction algorithm are presented in Table 1 in the supplementary material. No significant difference was found between the image features calculated on low dose CT scans (25 mAs and 50 mAs). About 50% of texture features were found statistically different between low doses and high doses (100 and 200 mAs). Significant differences were found for almost all features when calculated on 1.25 mm, 2.5 mm, and 5 mm slice thickness images. Reconstruction algorithms significantly affected all density-based image features, but not morphological features. Conclusions: There is a great need to standardize CT imaging protocols for radiogenomics studies because CT dose, slice thickness and reconstruction algorithm impact quantitative image features to various degrees, as our study has shown.
Level-set-based reconstruction algorithm for EIT lung images: first clinical results.
Rahmati, Peyman; Soleimani, Manuchehr; Pulletz, Sven; Frerichs, Inéz; Adler, Andy
2012-05-01
We show the first clinical results using the level-set-based reconstruction algorithm for electrical impedance tomography (EIT) data. The level-set-based reconstruction method (LSRM) allows the reconstruction of non-smooth interfaces between image regions, which are typically smoothed by traditional voxel-based reconstruction methods (VBRMs). We develop a time difference formulation of the LSRM for 2D images. The proposed reconstruction method is applied to reconstruct clinical EIT data of a slow flow inflation pressure-volume manoeuvre in lung-healthy and adult lung-injury patients. Images from the LSRM and the VBRM are compared. The results show comparable reconstructed images, but with an improved ability to reconstruct sharp conductivity changes in the distribution of lung ventilation using the LSRM.
4D-PET reconstruction using a spline-residue model with spatial and temporal roughness penalties
NASA Astrophysics Data System (ADS)
Ralli, George P.; Chappell, Michael A.; McGowan, Daniel R.; Sharma, Ricky A.; Higgins, Geoff S.; Fenwick, John D.
2018-05-01
4D reconstruction of dynamic positron emission tomography (dPET) data can improve the signal-to-noise ratio in reconstructed image sequences by fitting smooth temporal functions to the voxel time-activity-curves (TACs) during the reconstruction, though the optimal choice of function remains an open question. We propose a spline-residue model, which describes TACs as weighted sums of convolutions of the arterial input function with cubic B-spline basis functions. Convolution with the input function constrains the spline-residue model at early time-points, potentially enhancing noise suppression in early time-frames, while still allowing a wide range of TAC descriptions over the entire imaged time-course, thus limiting bias. Spline-residue based 4D-reconstruction is compared to that of a conventional (non-4D) maximum a posteriori (MAP) algorithm, and to 4D-reconstructions based on adaptive-knot cubic B-splines, the spectral model and an irreversible two-tissue compartment (‘2C3K’) model. 4D reconstructions were carried out using a nested-MAP algorithm including spatial and temporal roughness penalties. The algorithms were tested using Monte-Carlo simulated scanner data, generated for a digital thoracic phantom with uptake kinetics based on a dynamic [18F]-fluoromisonidazole scan of a non-small cell lung cancer patient. For every algorithm, parametric maps were calculated by fitting each voxel TAC within a sub-region of the reconstructed images with the 2C3K model. Compared to conventional MAP reconstruction, spline-residue-based 4D reconstruction achieved >50% improvements for five of the eight combinations of the four kinetic parameters that were mapped and the bias and noise measures used to analyse them, and produced better results for 5/8 combinations than any of the other reconstruction algorithms studied, while spectral model-based 4D reconstruction produced the best results for 2/8. 2C3K model-based 4D reconstruction generated the most biased parametric maps. Inclusion of a temporal roughness penalty function improved the performance of 4D reconstruction based on the cubic B-spline, spectral and spline-residue models.
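A hedged sketch of the spline-residue TAC model itself (not of the full 4D reconstruction) is given below in Python: the arterial input function is convolved with each cubic B-spline basis function to form a residue basis, and the basis weights of a simulated noisy TAC are fitted by least squares. The time grid, knot placement and input function are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.interpolate import BSpline

dt = 1.0                                   # seconds per sample (assumed)
t = np.arange(0, 3600.0, dt)               # 60 min imaged time-course
aif = (t / 60.0) * np.exp(-t / 300.0)      # toy arterial input function

degree = 3
knots = np.concatenate([[0.0] * degree,
                        np.linspace(0.0, t[-1], 8),
                        [t[-1]] * degree])
n_basis = len(knots) - degree - 1

# Residue basis: each column is the AIF convolved with one cubic B-spline.
basis = np.zeros((len(t), n_basis))
for i in range(n_basis):
    coeffs = np.zeros(n_basis); coeffs[i] = 1.0
    b_i = np.nan_to_num(BSpline(knots, coeffs, degree, extrapolate=False)(t))
    basis[:, i] = np.convolve(aif, b_i)[:len(t)] * dt

# Fit the weights of a noisy simulated TAC by least squares (in the paper this
# fit is embedded inside the 4D MAP reconstruction itself).
w_true = np.abs(np.random.default_rng(3).standard_normal(n_basis)) * 1e-3
tac = basis @ w_true
tac_noisy = tac + 0.02 * tac.max() * np.random.default_rng(4).standard_normal(len(t))
w_fit, *_ = np.linalg.lstsq(basis, tac_noisy, rcond=None)
```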
NASA Technical Reports Server (NTRS)
Mareboyana, Manohar; Le Moigne-Stewart, Jacqueline; Bennett, Jerome
2016-01-01
In this paper, we demonstrate a simple algorithm that projects low resolution (LR) images differing in subpixel shifts onto a high resolution (HR), also called super resolution (SR), grid. The algorithm is very effective in both accuracy and time efficiency. A number of spatial interpolation techniques used in the projection (nearest neighbor, inverse-distance weighted averages, Radial Basis Functions (RBF), etc.) yield comparable results. For best accuracy, reconstructing the SR image by a factor of two requires four LR images differing in four independent subpixel shifts. The algorithm has two steps: (i) registration of the low resolution images and (ii) shifting the low resolution images to align with the reference image and projecting them onto the high resolution grid based on the shift of each low resolution image, using different interpolation techniques. Experiments are conducted by simulating low resolution images through subpixel shifts and subsampling of an original high resolution image, and then reconstructing the high resolution image from the simulated low resolution images. Reconstruction accuracy is compared using the mean squared error between the original high resolution image and the reconstructed image. The algorithm was tested on remote sensing images and found to outperform previously proposed techniques such as the Iterative Back Projection (IBP), Maximum Likelihood (ML), and Maximum a posteriori (MAP) algorithms. The algorithm is robust and not overly sensitive to registration inaccuracies.
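As a hedged illustration of the projection step (ii), the Python sketch below maps the pixel centres of several LR frames, offset by their (assumed known) subpixel shifts, onto the HR grid and interpolates the scattered samples with SciPy's griddata. The frames, shifts and the Delaunay-based 'linear' interpolant are stand-ins; nearest-neighbour, inverse-distance or RBF interpolation would slot into the same place.

```python
import numpy as np
from scipy.interpolate import griddata

def project_to_hr(lr_frames, shifts, factor=2):
    """Scatter the pixel centres of each shifted LR frame onto HR coordinates
    and interpolate them onto the regular HR grid (registration assumed done)."""
    pts, vals = [], []
    for img, (dy, dx) in zip(lr_frames, shifts):
        h, w = img.shape
        yy, xx = np.mgrid[0:h, 0:w].astype(float)
        pts.append(np.column_stack([(yy + dy).ravel() * factor,
                                    (xx + dx).ravel() * factor]))
        vals.append(img.ravel())
    pts = np.vstack(pts); vals = np.concatenate(vals)
    H, W = lr_frames[0].shape[0] * factor, lr_frames[0].shape[1] * factor
    gy, gx = np.mgrid[0:H, 0:W]
    hr = griddata(pts, vals, (gy, gx), method='linear')   # Delaunay-based interpolation
    return np.nan_to_num(hr, nan=float(vals.mean()))

# Four LR frames with independent half-pixel shifts, as the text suggests for
# a factor-of-two reconstruction.
rng = np.random.default_rng(5)
truth = rng.random((64, 64))
shifts = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]
lr_frames = [truth[::2, ::2] for _ in shifts]   # placeholder LR frames (real data would be shifted acquisitions)
hr = project_to_hr(lr_frames, shifts, factor=2)
```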
Link prediction boosted psychiatry disorder classification for functional connectivity network
NASA Astrophysics Data System (ADS)
Li, Weiwei; Mei, Xue; Wang, Hao; Zhou, Yu; Huang, Jiashuang
2017-02-01
The functional connectivity network (FCN) is an effective tool for psychiatric disorder classification, representing the cross-correlation of regional blood oxygenation level dependent signals. However, an FCN is often incomplete, suffering from missing and spurious edges. To accurately classify psychiatric disorders and healthy controls with incomplete FCNs, we first 'repair' the FCN with link prediction, and then extract the clustering coefficients as features to build a weak classifier for every FCN. Finally, we apply a boosting algorithm to combine these weak classifiers to improve classification accuracy. Our method was tested on three psychiatric disorder datasets, covering Alzheimer's Disease, Schizophrenia and Attention Deficit Hyperactivity Disorder. The experimental results show our method not only significantly improves the classification accuracy, but also efficiently reconstructs the incomplete FCN.
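A minimal, hedged sketch of this pipeline in Python is given below: a thresholded connectivity matrix is "repaired" with a simple common-neighbours link predictor, nodal clustering coefficients become the feature vector, and an AdaBoost ensemble combines the weak learners. The synthetic data, threshold and the particular link predictor are stand-ins for the paper's choices.

```python
import numpy as np
import networkx as nx
from sklearn.ensemble import AdaBoostClassifier

def repair_and_featurize(adj, n_add=10):
    """Add the most plausible missing links, then return nodal clustering
    coefficients as the feature vector for this subject."""
    G = nx.from_numpy_array(adj)
    scores = [(len(list(nx.common_neighbors(G, u, v))), u, v)
              for u in G for v in G if u < v and not G.has_edge(u, v)]
    for _, u, v in sorted(scores, reverse=True)[:n_add]:
        G.add_edge(u, v)
    return np.array(list(nx.clustering(G).values()))

rng = np.random.default_rng(6)
n_subj, n_roi = 60, 30
X, y = [], []
for s in range(n_subj):
    label = s % 2                                    # 0 = control, 1 = patient (toy labels)
    corr = rng.random((n_roi, n_roi)); corr = (corr + corr.T) / 2
    adj = (corr > 0.8).astype(float); np.fill_diagonal(adj, 0)
    X.append(repair_and_featurize(adj)); y.append(label)

# Boosting combines the weak (decision-stump) classifiers built on the features.
clf = AdaBoostClassifier(n_estimators=50).fit(np.array(X), y)
```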
Distributed nuclear medicine applications using World Wide Web and Java technology.
Knoll, P; Höll, K; Mirzaei, S; Koriska, K; Köhn, H
2000-01-01
At present, medical applications applying World Wide Web (WWW) technology are mainly used to view static images and to retrieve some information. The Java platform is a relatively new way of computing, especially designed for network computing and distributed applications, which enables interactive connection between user and information via the WWW. The Java 2 Software Development Kit (SDK), including the Java2D API, Java Remote Method Invocation (RMI) technology, Object Serialization and the Java Advanced Imaging (JAI) extension, was used to achieve a robust, platform-independent and network-centric solution. Medical image processing software based on this technology is presented, and adequate performance of Java is demonstrated by an iterative reconstruction algorithm for single photon emission computed tomography (SPECT).
Functional Validation and Comparison Framework for EIT Lung Imaging
Meybohm, Patrick; Weiler, Norbert; Frerichs, Inéz; Adler, Andy
2014-01-01
Introduction Electrical impedance tomography (EIT) is an emerging clinical tool for monitoring ventilation distribution in mechanically ventilated patients, for which many image reconstruction algorithms have been suggested. We propose an experimental framework to assess such algorithms with respect to their ability to correctly represent well-defined physiological changes. We defined a set of clinically relevant ventilation conditions and induced them experimentally in 8 pigs by controlling three ventilator settings (tidal volume, positive end-expiratory pressure and the fraction of inspired oxygen). In this way, large and discrete shifts in global and regional lung air content were elicited. Methods We use the framework to compare twelve 2D EIT reconstruction algorithms, including backprojection (the original and still most frequently used algorithm), GREIT (a more recent consensus algorithm for lung imaging), truncated singular value decomposition (TSVD), several variants of the one-step Gauss-Newton approach and two iterative algorithms. We consider the effects of using a 3D finite element model, assuming non-uniform background conductivity, noise modeling, reconstructing for electrode movement, total variation (TV) reconstruction, robust error norms, smoothing priors, and using difference vs. normalized difference data. Results and Conclusions Our results indicate that, while variation in appearance of images reconstructed from the same data is not negligible, clinically relevant parameters do not vary considerably among the advanced algorithms. Among the analysed algorithms, several advanced algorithms perform well, while some others are significantly worse. Given its vintage and ad-hoc formulation backprojection works surprisingly well, supporting the validity of previous studies in lung EIT. PMID:25110887
Biomedical sensor design using analog compressed sensing
NASA Astrophysics Data System (ADS)
Balouchestani, Mohammadreza; Krishnan, Sridhar
2015-05-01
The main drawback of current healthcare systems is the location-specific nature of the system due to the use of fixed/wired biomedical sensors. Since biomedical sensors are usually driven by a battery, power consumption is the most important factor determining the life of a biomedical sensor. They are also restricted by size, cost, and transmission capacity. Therefore, it is important to reduce the load of sampling by merging the sampling and compression steps to reduce the storage usage, transmission times, and power consumption in order to expand the current healthcare systems to Wireless Healthcare Systems (WHSs). In this work, we present an implementation of a low-power biomedical sensor using an analog Compressed Sensing (CS) framework for sparse biomedical signals that addresses both the energy and telemetry bandwidth constraints of wearable and wireless Body-Area Networks (BANs). This architecture enables continuous data acquisition and compression of biomedical signals that are suitable for a variety of diagnostic and treatment purposes. At the transmitter side, an analog-CS framework is applied at the sensing step before the Analog to Digital Converter (ADC) in order to generate the compressed version of the input analog bio-signal. At the receiver side, a reconstruction algorithm based on the Restricted Isometry Property (RIP) condition is applied in order to reconstruct the original bio-signals from the compressed bio-signals with high probability and sufficient accuracy. We examine the proposed algorithm with healthy and neuropathy surface Electromyography (sEMG) signals. The proposed algorithm achieves a good Average Recognition Rate (ARR) of 93% and a reconstruction accuracy of 98.9%. In addition, the proposed architecture reduces the total computation time from 32 to 11.5 seconds at a sampling rate of 29% of the Nyquist rate, with a Percentage Residual Difference (PRD) of 26% and a Root Mean Squared Error (RMSE) of 3%.
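The receiver-side recovery can be illustrated with a standard l1 solver. The Python sketch below uses iterative soft-thresholding (ISTA) to recover a DCT-sparse toy signal from random projections taken at roughly 29% of the Nyquist-rate sample count; the signal model, measurement matrix and solver are generic stand-ins for the RIP-based reconstruction described above, not the paper's implementation.

```python
import numpy as np
from scipy.fft import dct, idct

def ista(Phi, y, lam=0.05, n_iter=300):
    """Iterative soft-thresholding: a standard l1 reconstruction used here as
    a stand-in for the RIP-based recovery stage."""
    L = np.linalg.norm(Phi, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)
        x = x - grad / L
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)   # soft threshold
    return x

# Toy sEMG-like signal that is sparse in the DCT domain; the sensor keeps only
# m random projections (the "analog CS" front end), the receiver recovers it.
rng = np.random.default_rng(7)
n, m = 512, 150                                # ~29% of the Nyquist-rate samples
coeffs = np.zeros(n); coeffs[rng.choice(n, 20, replace=False)] = rng.standard_normal(20)
signal = idct(coeffs, norm='ortho')
Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random measurement matrix
y = Phi @ signal

Psi = dct(np.eye(n), norm='ortho', axis=0)      # sparsifying (DCT) basis
rec_coeffs = ista(Phi @ Psi.T, y)               # sense in the sparse domain
rec_signal = idct(rec_coeffs, norm='ortho')
```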
Benedict, Matthew N.; Mundy, Michael B.; Henry, Christopher S.; Chia, Nicholas; Price, Nathan D.
2014-01-01
Genome-scale metabolic models provide a powerful means to harness information from genomes to deepen biological insights. With exponentially increasing sequencing capacity, there is an enormous need for automated reconstruction techniques that can provide more accurate models in a short time frame. Current methods for automated metabolic network reconstruction rely on gene and reaction annotations to build draft metabolic networks and algorithms to fill gaps in these networks. However, automated reconstruction is hampered by database inconsistencies, incorrect annotations, and gap filling largely without considering genomic information. Here we develop an approach for applying genomic information to predict alternative functions for genes and estimate their likelihoods from sequence homology. We show that computed likelihood values were significantly higher for annotations found in manually curated metabolic networks than those that were not. We then apply these alternative functional predictions to estimate reaction likelihoods, which are used in a new gap filling approach called likelihood-based gap filling to predict more genomically consistent solutions. To validate the likelihood-based gap filling approach, we applied it to models where essential pathways were removed, finding that likelihood-based gap filling identified more biologically relevant solutions than parsimony-based gap filling approaches. We also demonstrate that models gap filled using likelihood-based gap filling provide greater coverage and genomic consistency with metabolic gene functions compared to parsimony-based approaches. Interestingly, despite these findings, we found that likelihoods did not significantly affect consistency of gap filled models with Biolog and knockout lethality data. This indicates that the phenotype data alone cannot necessarily be used to discriminate between alternative solutions for gap filling and therefore, that the use of other information is necessary to obtain a more accurate network. All described workflows are implemented as part of the DOE Systems Biology Knowledgebase (KBase) and are publicly available via API or command-line web interface. PMID:25329157
Reconstruction of Complex Network based on the Noise via QR Decomposition and Compressed Sensing.
Li, Lixiang; Xu, Dafei; Peng, Haipeng; Kurths, Jürgen; Yang, Yixian
2017-11-08
It is generally known that the states of network nodes are stable and have strong correlations in a linear network system. We find that, without a control input, compressed sensing cannot succeed in reconstructing complex networks in which the node states are generated by the linear network system. However, noise can drive the dynamics between nodes to break the stability of the system state. Therefore, a new method integrating QR decomposition and compressed sensing is proposed to solve the reconstruction problem of complex networks with the assistance of input noise. The state matrix of the system is decomposed by QR decomposition. We construct the measurement matrix with the aid of Gaussian noise so that the sparse input matrix can be reconstructed by compressed sensing. We also discover that noise can build a bridge between the dynamics and the topological structure. Experiments show that the proposed method reconstructs four model networks and six real networks more accurately and more efficiently than compressed sensing alone. In addition, the proposed method can reconstruct not only sparse complex networks but also dense complex networks.
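A hedged sketch of the underlying identification problem is given below in Python: a noise-driven linear network is simulated and each sparse row of its adjacency matrix is recovered by l1 (lasso) regression on consecutive state snapshots. This is plain sparse regression standing in for the paper's coupling of QR decomposition with compressed sensing; it illustrates the role of the driving noise rather than the authors' exact construction.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(8)
n_nodes, n_steps = 30, 200

# Sparse random adjacency matrix of a stable linear network (toy example).
A_true = (rng.random((n_nodes, n_nodes)) < 0.1) * rng.uniform(0.1, 0.3, (n_nodes, n_nodes))
np.fill_diagonal(A_true, 0.5)
A_true = A_true / (1.2 * np.abs(np.linalg.eigvals(A_true)).max())   # enforce stability

X = np.zeros((n_steps, n_nodes))
X[0] = rng.standard_normal(n_nodes)
for t in range(1, n_steps):
    # the driving noise breaks the stationarity of the states and makes recovery possible
    X[t] = A_true @ X[t - 1] + 0.1 * rng.standard_normal(n_nodes)

A_hat = np.zeros_like(A_true)
for i in range(n_nodes):
    model = Lasso(alpha=1e-3, fit_intercept=False, max_iter=5000)
    model.fit(X[:-1], X[1:, i])                  # x_i(t+1) ~ sum_j A_ij x_j(t)
    A_hat[i] = model.coef_
```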
Anisotropic conductivity imaging with MREIT using equipotential projection algorithm.
Değirmenci, Evren; Eyüboğlu, B Murat
2007-12-21
Magnetic resonance electrical impedance tomography (MREIT) combines magnetic flux or current density measurements obtained by magnetic resonance imaging (MRI) and surface potential measurements to reconstruct images of true conductivity with high spatial resolution. Most biological tissues have anisotropic conductivity; therefore, anisotropy should be taken into account in conductivity image reconstruction. Almost all of the MREIT reconstruction algorithms proposed to date assume an isotropic conductivity distribution. In this study, a novel MREIT image reconstruction algorithm is proposed to image anisotropic conductivity. Relative anisotropic conductivity values are reconstructed iteratively, using only current density measurements without any potential measurement. In order to obtain true conductivity values, a single potential or conductivity measurement is sufficient to determine the scaling factor. The proposed technique is evaluated on simulated data for isotropic and anisotropic conductivity distributions, with and without measurement noise. Simulation results show that images of both anisotropic and isotropic conductivity distributions can be reconstructed successfully.
NASA Astrophysics Data System (ADS)
Chen, Xueli; Yang, Defu; Qu, Xiaochao; Hu, Hao; Liang, Jimin; Gao, Xinbo; Tian, Jie
2012-06-01
Bioluminescence tomography (BLT) has been successfully applied to the detection and therapeutic evaluation of solid cancers. However, the existing BLT reconstruction algorithms are not accurate enough for cavity cancer detection because they neglect the void problem. Motivated by the ability of the hybrid radiosity-diffusion model (HRDM) to describe light propagation in cavity organs, an HRDM-based BLT reconstruction algorithm is proposed for the specific problem of cavity cancer detection. HRDM has been applied to optical tomography but has been limited to simple and regular geometries because of the complexity of coupling the boundary between the scattering and void regions. In the proposed algorithm, HRDM was first applied to three-dimensional complicated and irregular geometries and then employed as the forward light transport model to describe bioluminescent light propagation in tissues. Combining HRDM with a sparse reconstruction strategy, the cavity cancer cells labeled with bioluminescent probes can be more accurately reconstructed. Compared with the diffusion equation based reconstruction algorithm, the essentiality and superiority of the HRDM-based algorithm were demonstrated with simulation, phantom and animal studies. An in vivo gastric cancer-bearing nude mouse experiment was conducted, whose results revealed the ability and feasibility of the HRDM-based algorithm in the biomedical application of gastric cancer detection.
Three-Dimensional Weighting in Cone Beam FBP Reconstruction and Its Transformation Over Geometries.
Tang, Shaojie; Huang, Kuidong; Cheng, Yunyong; Niu, Tianye; Tang, Xiangyang
2018-06-01
With a substantially increased number of detector rows in multidetector CT (MDCT), axial scanning with projection data acquired along a circular source trajectory has become the method of choice in an increasing number of clinical applications. Recognizing the practical relevance of image reconstruction directly from projection data acquired in the native cone beam (CB) geometry, especially in scenarios wherein the highest achievable in-plane resolution is desirable, we present a three-dimensional (3-D) weighted CB-FBP algorithm in such a geometry in this paper. We start the algorithm's derivation in the cone-parallel geometry. Via a change of variables, taking the Jacobian into account and making heuristic and empirical assumptions, we arrive at the formulas for 3-D weighted image reconstruction in the native CB geometry. Using projection data simulated by computer and acquired by an MDCT scanner, we evaluate and verify the performance of the proposed algorithm for image reconstruction directly from projection data acquired in the native CB geometry. The preliminary data show that the proposed algorithm performs as well as the 3-D weighted CB-FBP algorithm in the cone-parallel geometry. The proposed algorithm is anticipated to find its utility in extensive clinical and preclinical applications wherein the reconstruction of images in the native CB geometry, i.e., the geometry for data acquisition, is of relevance.
Park, D Y; Fessler, J A; Yost, M G; Levine, S P
2000-03-01
Computed tomographic (CT) reconstructions of air contaminant concentration fields were conducted in a room-sized chamber employing a single open-path Fourier transform infrared (OP-FTIR) instrument and a combination of 52 flat mirrors and 4 retroreflectors. A total of 56 beam path data were repeatedly collected for around 1 hr while maintaining a stable concentration gradient. The plane of the room was divided into 195 pixels (13 x 15) for reconstruction. The algebraic reconstruction technique (ART) failed to reconstruct the original concentration gradient patterns for most cases. These poor results were caused by the "highly underdetermined condition" in which the number of unknown values (195 pixels) exceeds that of known data (56 path integral concentrations) in the experimental setting. A new CT algorithm, called the penalized weighted least-squares (PWLS), was applied to remedy this condition. The peak locations were correctly positioned in the PWLS-CT reconstructions. A notable feature of the PWLS-CT reconstructions was a significant reduction of the highly irregular noise peaks found in the ART-CT reconstructions. However, the peak heights were slightly reduced in the PWLS-CT reconstructions due to the nature of the PWLS algorithm. PWLS could converge on the original concentration gradient even when a fairly high error was embedded into some experimentally measured path integral concentrations. It was also found in the simulation tests that the PWLS algorithm was very robust with respect to random errors in the path integral concentrations. This beam geometry and the use of a single OP-FTIR scanning system, in combination with the PWLS algorithm, constitute a system applicable to both environmental and industrial settings.
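A minimal PWLS sketch for a problem of this shape is given below in Python: 56 path-integral measurements over a 13 x 15 pixel grid are inverted by minimizing a weighted least-squares data term plus a first-difference roughness penalty. The beam layout, weights and penalty strength are random placeholders, not the chamber's actual geometry.

```python
import numpy as np

rng = np.random.default_rng(9)
ny, nx, n_paths = 13, 15, 56
n_pix = ny * nx

A = rng.random((n_paths, n_pix)) * (rng.random((n_paths, n_pix)) < 0.2)  # toy path-length matrix
x_true = np.zeros((ny, nx)); x_true[4:8, 5:10] = 1.0                      # concentration peak
y = A @ x_true.ravel() + 0.01 * rng.standard_normal(n_paths)

W = np.diag(1.0 / (0.01 + y.clip(min=0)))        # inverse-variance style weights

# Roughness penalty R = D'D, with D the first-difference operator on the grid.
rows = []
for i in range(ny):
    for j in range(nx):
        p = i * nx + j
        if j + 1 < nx:
            d = np.zeros(n_pix); d[p] = 1; d[p + 1] = -1; rows.append(d)
        if i + 1 < ny:
            d = np.zeros(n_pix); d[p] = 1; d[p + nx] = -1; rows.append(d)
D = np.array(rows)
R = D.T @ D

beta = 1.0
# argmin_x (y - Ax)' W (y - Ax) + beta * x' R x  has a closed-form solution:
x_pwls = np.linalg.solve(A.T @ W @ A + beta * R, A.T @ W @ y).reshape(ny, nx)
```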
Jini service to reconstruct tomographic data
NASA Astrophysics Data System (ADS)
Knoll, Peter; Mirzaei, S.; Koriska, K.; Koehn, H.
2002-06-01
A number of imaging systems rely on the reconstruction of a 3-dimensional model from its projections through the process of computed tomography (CT). In medical imaging, for example, magnetic resonance imaging (MRI), positron emission tomography (PET), and single photon emission computed tomography (SPECT) acquire two-dimensional projections of a three-dimensional object. In order to calculate the 3-dimensional representation of the object, i.e. its voxel distribution, several reconstruction algorithms have been developed. Currently, mainly two reconstruction approaches are in use: filtered back projection (FBP) and iterative methods. Although the quality of iteratively reconstructed SPECT slices is better than that of FBP slices, such iterative algorithms are rarely used for clinical routine studies because of their low availability and increased reconstruction time. We used Jini and a self-developed iterative reconstruction algorithm to design and implement a Jini reconstruction service. With this service, the physician selects the patient study from a database and a Jini client automatically discovers the registered Jini reconstruction services in the department's Intranet. After downloading the proxy object of this Jini service, the SPECT acquisition data are reconstructed. The resulting transaxial slices are visualized using a Jini slice viewer, which can be used for various imaging modalities.
Image reconstruction from few-view CT data by gradient-domain dictionary learning.
Hu, Zhanli; Liu, Qiegen; Zhang, Na; Zhang, Yunwan; Peng, Xi; Wu, Peter Z; Zheng, Hairong; Liang, Dong
2016-05-21
Decreasing the number of projections is an effective way to reduce the radiation dose delivered to patients in medical computed tomography (CT) imaging. However, incomplete projection data for CT reconstruction will result in artifacts and distortions. In this paper, a novel dictionary learning algorithm operating in the gradient domain (Grad-DL) is proposed for few-view CT reconstruction. Specifically, the dictionaries are trained from the horizontal and vertical gradient images, respectively, and the desired image is subsequently reconstructed from the sparse representations of both gradients by solving a least-squares problem. Since the gradient images are sparser than the image itself, the proposed approach can lead to sparser representations than conventional DL methods in the image domain, and thus a better reconstruction quality is achieved. To evaluate the proposed Grad-DL algorithm, both qualitative and quantitative studies were employed through computer simulations as well as real data experiments on fan-beam and cone-beam geometries. The results show that the proposed algorithm can yield better images than the existing algorithms.
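The gradient-domain dictionary learning step can be sketched with scikit-learn, as below: a dictionary is trained on patches of a horizontal-gradient image and the same gradient image is then sparse-coded and reassembled from the coded patches. The synthetic image, patch size and sparsity level are assumptions, and the final least-squares recovery of the CT image from both coded gradients is omitted here.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

rng = np.random.default_rng(12)
img = np.zeros((64, 64)); img[16:48, 20:44] = 1.0
img += 0.05 * rng.standard_normal((64, 64))          # noisy synthetic "CT" image
grad_h = np.diff(img, axis=1)                         # horizontal gradient (sparser than img)

# Train a dictionary on gradient-domain patches.
patches = extract_patches_2d(grad_h, (8, 8), max_patches=2000, random_state=0)
X = patches.reshape(len(patches), -1)
X = X - X.mean(axis=1, keepdims=True)                 # remove patch means before learning
dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, batch_size=64,
                                   transform_algorithm='omp',
                                   transform_n_nonzero_coefs=4,
                                   random_state=0).fit(X)

# Sparse-code the gradient image and rebuild it from the coded patches.
test_patches = extract_patches_2d(grad_h, (8, 8))
Xt = test_patches.reshape(len(test_patches), -1)
means = Xt.mean(axis=1, keepdims=True)
codes = dico.transform(Xt - means)
approx = (codes @ dico.components_ + means).reshape(test_patches.shape)
grad_h_coded = reconstruct_from_patches_2d(approx, grad_h.shape)
```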
NASA Astrophysics Data System (ADS)
Dafu, Shen; Leihong, Zhang; Dong, Liang; Bei, Li; Yi, Kang
2017-07-01
The purpose of this study is to improve reconstruction precision and to better reproduce the color of spectral image surfaces. A new spectral reflectance reconstruction algorithm based on iterative thresholding combined with a weighted principal component space is presented in this paper, with the principal components weighted by visual features serving as the sparse basis. Different numbers of color cards are selected as training samples, a multispectral image is the testing sample, and the color differences of the reconstructions are compared. The channel response values are obtained with a Mega Vision high-accuracy, multi-channel imaging system. The results show that spectral reconstruction based on the weighted principal component space outperforms that based on the traditional principal component space. Therefore, the color difference obtained using the compressive-sensing algorithm with weighted principal component analysis is smaller than that obtained with traditional principal component analysis, and better consistency between the reconstructed color and human visual perception is achieved.
Simultaneous deblurring and iterative reconstruction of CBCT for image guided brain radiosurgery
NASA Astrophysics Data System (ADS)
Hashemi, SayedMasoud; Song, William Y.; Sahgal, Arjun; Lee, Young; Huynh, Christopher; Grouza, Vladimir; Nordström, Håkan; Eriksson, Markus; Dorenlot, Antoine; Régis, Jean Marie; Mainprize, James G.; Ruschin, Mark
2017-04-01
One of the limiting factors in cone-beam CT (CBCT) image quality is system blur, caused by detector response, x-ray source focal spot size, azimuthal blurring, and reconstruction algorithm. In this work, we develop a novel iterative reconstruction algorithm that improves spatial resolution by explicitly accounting for image unsharpness caused by different factors in the reconstruction formulation. While the model-based iterative reconstruction techniques use prior information about the detector response and x-ray source, our proposed technique uses a simple measurable blurring model. In our reconstruction algorithm, denoted as simultaneous deblurring and iterative reconstruction (SDIR), the blur kernel can be estimated using the modulation transfer function (MTF) slice of the CatPhan phantom or any other MTF phantom, such as wire phantoms. The proposed image reconstruction formulation includes two regularization terms: (1) total variation (TV) and (2) nonlocal regularization, solved with a split Bregman augmented Lagrangian iterative method. The SDIR formulation preserves edges, eases the parameter adjustments to achieve both high spatial resolution and low noise variances, and reduces the staircase effect caused by regular TV-penalized iterative algorithms. The proposed algorithm is optimized for a point-of-care head CBCT unit for image-guided radiosurgery and is tested with CatPhan phantom, an anthropomorphic head phantom, and 6 clinical brain stereotactic radiosurgery cases. Our experiments indicate that SDIR outperforms the conventional filtered back projection and TV penalized simultaneous algebraic reconstruction technique methods (represented by adaptive steepest-descent POCS algorithm, ASD-POCS) in terms of MTF and line pair resolution, and retains the favorable properties of the standard TV-based iterative reconstruction algorithms in improving the contrast and reducing the reconstruction artifacts. It improves the visibility of the high contrast details in bony areas and the brain soft-tissue. For example, the results show the ventricles and some brain folds become visible in SDIR reconstructed images and the contrast of the visible lesions is effectively improved. The line-pair resolution was improved from 12 line-pair/cm in FBP to 14 line-pair/cm in SDIR. Adjusting the parameters of the ASD-POCS to achieve 14 line-pair/cm caused the noise variance to be higher than the SDIR. Using these parameters for ASD-POCS, the MTF of FBP and ASD-POCS were very close and equal to 0.7 mm-1 which was increased to 1.2 mm-1 by SDIR, at half maximum.
Priori mask guided image reconstruction (p-MGIR) for ultra-low dose cone-beam computed tomography
NASA Astrophysics Data System (ADS)
Park, Justin C.; Zhang, Hao; Chen, Yunmei; Fan, Qiyong; Kahler, Darren L.; Liu, Chihray; Lu, Bo
2015-11-01
Recently, the compressed sensing (CS) based iterative reconstruction method has received attention because of its ability to reconstruct cone beam computed tomography (CBCT) images with good quality using sparsely sampled or noisy projections, thus enabling dose reduction. However, some challenges remain. In particular, there is always a tradeoff between image resolution and noise/streak artifact reduction based on the amount of regularization weighting that is applied uniformly across the CBCT volume. The purpose of this study is to develop a novel low-dose CBCT reconstruction algorithm framework called priori mask guided image reconstruction (p-MGIR) that allows reconstruction of high-quality low-dose CBCT images while preserving the image resolution. In p-MGIR, the unknown CBCT volume was mathematically modeled as a combination of two regions: (1) where anatomical structures are complex, and (2) where intensities are relatively uniform. The priori mask, which is the key concept of the p-MGIR algorithm, was defined as the matrix that distinguishes between the two separate CBCT regions where the resolution needs to be preserved and where streak or noise needs to be suppressed. We then alternately updated each part of image by solving two sub-minimization problems iteratively, where one minimization was focused on preserving the edge information of the first part while the other concentrated on the removal of noise/artifacts from the latter part. To evaluate the performance of the p-MGIR algorithm, a numerical head-and-neck phantom, a Catphan 600 physical phantom, and a clinical head-and-neck cancer case were used for analysis. The results were compared with the standard Feldkamp-Davis-Kress as well as conventional CS-based algorithms. Examination of the p-MGIR algorithm showed that high-quality low-dose CBCT images can be reconstructed without compromising the image resolution. For both phantom and the patient cases, the p-MGIR is able to achieve a clinically-reasonable image with 60 projections. Therefore, a clinically-viable, high-resolution head-and-neck CBCT image can be obtained while cutting the dose by 83%. Moreover, the image quality obtained using p-MGIR is better than the quality obtained using other algorithms. In this work, we propose a novel low-dose CBCT reconstruction algorithm called p-MGIR. It can be potentially used as a CBCT reconstruction algorithm with low dose scan requests
Motion and positional error correction for cone beam 3D-reconstruction with mobile C-arms.
Bodensteiner, C; Darolti, C; Schumacher, H; Matthäus, L; Schweikard, A
2007-01-01
CT-images acquired by mobile C-arm devices can contain artefacts caused by positioning errors. We propose a data driven method based on iterative 3D-reconstruction and 2D/3D-registration to correct projection data inconsistencies. With a 2D/3D-registration algorithm, transformations are computed to align the acquired projection images to a previously reconstructed volume. In an iterative procedure, the reconstruction algorithm uses the results of the registration step. This algorithm also reduces small motion artefacts within 3D-reconstructions. Experiments with simulated projections from real patient data show the feasibility of the proposed method. In addition, experiments with real projection data acquired with an experimental robotised C-arm device have been performed with promising results.
NASA Astrophysics Data System (ADS)
Huang, Xiaokun; Zhang, You; Wang, Jing
2017-03-01
Four-dimensional (4D) cone-beam computed tomography (CBCT) enables motion tracking of anatomical structures and removes artifacts introduced by motion. However, the imaging time/dose of 4D-CBCT is substantially longer/higher than that of traditional 3D-CBCT. We previously developed a simultaneous motion estimation and image reconstruction (SMEIR) algorithm to reconstruct high-quality 4D-CBCT from a limited number of projections and thereby reduce the imaging time/dose. However, the accuracy of SMEIR is limited in reconstructing low-contrast regions with fine structure details. In this study, we incorporate biomechanical modeling into the SMEIR algorithm (SMEIR-Bio) to improve the reconstruction accuracy in low-contrast regions with fine details. The efficacy of SMEIR-Bio is evaluated using 11 lung patient cases and compared to that of the original SMEIR algorithm. Qualitative and quantitative comparisons showed that SMEIR-Bio greatly enhances the accuracy of the reconstructed 4D-CBCT volume in low-contrast regions, which can potentially benefit multiple clinical applications including treatment outcome analysis.
A preliminary investigation of ROI-image reconstruction with the rebinned BPF algorithm
NASA Astrophysics Data System (ADS)
Bian, Junguo; Xia, Dan; Yu, Lifeng; Sidky, Emil Y.; Pan, Xiaochuan
2008-03-01
The back-projection filtration (BPF) algorithm is capable of reconstructing ROI images from truncated data acquired with a wide class of general trajectories. However, it has been observed that, similar to other algorithms for convergent beam geometries, the BPF algorithm involves a spatially varying weighting factor in the backprojection step. This weighting factor can not only increase the computation load, but also amplify the noise in reconstructed images. The weighting factor can be eliminated by appropriately rebinning the measured cone-beam data into fan-parallel-beam data. Such an appropriate data rebinning not only removes the weighting factor, but also retains other favorable properties of the BPF algorithm. In this work, we conduct a preliminary study of the rebinned BPF algorithm and its noise properties. Specifically, we consider an application in which the detector and source can move in several directions to achieve ROI data acquisition. The combined motion of the detector and source generally forms a complex trajectory. We investigate image reconstruction within an ROI from data acquired in this kind of application.
Statistical reconstruction for cosmic ray muon tomography.
Schultz, Larry J; Blanpied, Gary S; Borozdin, Konstantin N; Fraser, Andrew M; Hengartner, Nicolas W; Klimenko, Alexei V; Morris, Christopher L; Orum, Chris; Sossong, Michael J
2007-08-01
Highly penetrating cosmic ray muons constantly shower the earth at a rate of about 1 muon per cm2 per minute. We have developed a technique which exploits the multiple Coulomb scattering of these particles to perform nondestructive inspection without the use of artificial radiation. In prior work [1]-[3], we have described heuristic methods for processing muon data to create reconstructed images. In this paper, we present a maximum likelihood/expectation maximization tomographic reconstruction algorithm designed for the technique. This algorithm borrows much from techniques used in medical imaging, particularly emission tomography, but the statistics of muon scattering dictates differences. We describe the statistical model for multiple scattering, derive the reconstruction algorithm, and present simulated examples. We also propose methods to improve the robustness of the algorithm to experimental errors and events departing from the statistical model.
Spectral CT metal artifact reduction with an optimization-based reconstruction algorithm
NASA Astrophysics Data System (ADS)
Gilat Schmidt, Taly; Barber, Rina F.; Sidky, Emil Y.
2017-03-01
Metal objects cause artifacts in computed tomography (CT) images. This work investigated the feasibility of a spectral CT method to reduce metal artifacts. Spectral CT acquisition combined with optimization-based reconstruction is proposed to reduce artifacts by modeling the physical effects that cause metal artifacts and by providing the flexibility to selectively remove corrupted spectral measurements in the spectral-sinogram space. The proposed Constrained `One-Step' Spectral CT Image Reconstruction (cOSSCIR) algorithm directly estimates the basis material maps while enforcing convex constraints. The incorporation of constraints on the reconstructed basis material maps is expected to mitigate undersampling effects that occur when corrupted data is excluded from reconstruction. The feasibility of the cOSSCIR algorithm to reduce metal artifacts was investigated through simulations of a pelvis phantom. The cOSSCIR algorithm was investigated with and without the use of a third basis material representing metal. The effects of excluding data corrupted by metal were also investigated. The results demonstrated that the proposed cOSSCIR algorithm reduced metal artifacts and improved CT number accuracy. For example, CT number error in a bright shading artifact region was reduced from 403 HU in the reference filtered backprojection reconstruction to 33 HU using the proposed algorithm in simulation. In the dark shading regions, the error was reduced from 1141 HU to 25 HU. Of the investigated approaches, decomposing the data into three basis material maps and excluding the corrupted data demonstrated the greatest reduction in metal artifacts.
Wu, Junfeng; Dai, Fang; Hu, Gang; Mou, Xuanqin
2018-04-18
Excessive radiation exposure in computed tomography (CT) scans increases the chance of developing cancer and has become a major clinical concern. Recently, statistical iterative reconstruction (SIR) with l0-norm dictionary learning regularization has been developed to reconstruct CT images from low-dose and few-view datasets in order to reduce radiation dose. Nonetheless, the sparse regularization term adopted in this approach is the l0-norm, which cannot guarantee the global convergence of the proposed algorithm. To address this problem, in this study we introduced the l1-norm dictionary learning penalty into the SIR framework for low-dose CT image reconstruction, and developed an alternating minimization algorithm to minimize the associated objective function, which transforms the CT image reconstruction problem into a sparse coding subproblem and an image updating subproblem. During the image updating process, an efficient model function approach based on the balancing principle is applied to choose the regularization parameters. The proposed alternating minimization algorithm was evaluated first using real projection data of a sheep lung CT perfusion and then using numerical simulations based on a sheep lung CT image and a chest image. Both visual assessment and quantitative comparison in terms of root mean square error (RMSE) and structural similarity (SSIM) index demonstrated that the new image reconstruction algorithm yielded performance similar to the l0-norm dictionary learning penalty and outperformed the conventional filtered backprojection (FBP) and total variation (TV) minimization algorithms.
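The alternation between the two subproblems can be sketched as follows; this is a simplified NumPy illustration on a vectorized image rather than on image patches, and A/At (forward/adjoint projectors), the dictionary D, and all step sizes are placeholders rather than details from the paper:

    import numpy as np

    def soft_threshold(z, t):
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def alternating_min(x0, A, At, b, D, lam, mu, n_outer=10, n_inner=5, step=1e-3):
        x = x0.copy()
        Lh = np.linalg.norm(D, 2) ** 2                     # Lipschitz constant for the coding step
        for _ in range(n_outer):
            # (1) sparse coding subproblem: l1-penalized fit of x in the dictionary D
            alpha = np.zeros(D.shape[1])
            for _ in range(n_inner):
                alpha = soft_threshold(alpha - (D.T @ (D @ alpha - x)) / Lh, lam / Lh)
            # (2) image updating subproblem: balance projection fidelity against D @ alpha
            grad = At(A(x) - b) + mu * (x - D @ alpha)
            x = x - step * grad
        return x

In the actual method the sparse coding is performed patch-wise with a learned dictionary, and the regularization parameters are chosen by the balancing-principle model function approach rather than fixed in advance.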
NASA Astrophysics Data System (ADS)
Li, Dongming; Zhang, Lijuan; Wang, Ting; Liu, Huan; Yang, Jinhua; Chen, Guifen
2016-11-01
To improve the quality of adaptive optics (AO) images, we study an AO image restoration algorithm based on wavefront reconstruction technology and an adaptive total variation (TV) method in this paper. Firstly, wavefront reconstruction using Zernike polynomials is used to obtain an initial estimate of the point spread function (PSF). Then, we develop our proposed iterative solution for AO image restoration, addressing the joint deconvolution issue. Image restoration experiments are performed to verify the restoration effect of our proposed algorithm. The experimental results show that, compared with the RL-IBD algorithm and the Wiener-IBD algorithm, the GMG measures (for a real AO image) from our algorithm are increased by 36.92% and 27.44%, respectively, the computation time is decreased by 7.2% and 3.4%, respectively, and the estimation accuracy is significantly improved.
NASA Astrophysics Data System (ADS)
Yan, Mingfei; Hu, Huasi; Otake, Yoshie; Taketani, Atsushi; Wakabayashi, Yasuo; Yanagimachi, Shinzo; Wang, Sheng; Pan, Ziheng; Hu, Guang
2018-05-01
Thermal neutron computed tomography (CT) is a useful tool for visualizing two-phase flow due to its high imaging contrast and the strong penetrability of neutrons through tube walls made of metallic material. A novel approach for two-phase flow CT reconstruction based on an improved adaptive genetic algorithm with sparsity constraint (IAGA-SC) is proposed in this paper. In the algorithm, the neighborhood mutation operator is used to ensure the continuity of the reconstructed object. The adaptive crossover probability P_c and mutation probability P_m are improved to help the adaptive genetic algorithm (AGA) achieve the global optimum. The reconstructed results for projection data obtained from Monte Carlo simulation indicate that the comprehensive performance of the IAGA-SC algorithm exceeds the adaptive steepest descent-projection onto convex sets (ASD-POCS) algorithm in restoring typical and complex flow regimes. It especially shows great advantages in restoring simply connected flow regimes and the shape of objects. In addition, a CT experiment for two-phase flow phantoms was conducted on an accelerator-driven neutron source to verify the performance of the developed IAGA-SC algorithm.
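The abstract does not spell out the adaptation rule, so the sketch below uses the classic Srinivas-Patnaik adaptive scheme purely to illustrate how crossover and mutation probabilities can be tied to fitness (all constants are illustrative):

    def adaptive_rates(f_parent, f_max, f_avg, k1=1.0, k2=0.5, k3=1.0, k4=0.5):
        # Individuals already fitter than average are crossed over / mutated less aggressively,
        # preserving good solutions while the rest of the population keeps exploring.
        if f_max == f_avg:                      # degenerate population: fall back to the floor rates
            return k3, k4
        if f_parent >= f_avg:
            pc = k1 * (f_max - f_parent) / (f_max - f_avg)
            pm = k2 * (f_max - f_parent) / (f_max - f_avg)
        else:
            pc, pm = k3, k4
        return pc, pm

A neighborhood-restricted mutation operator, as mentioned in the abstract, would then act on top of these rates so that mutations mostly toggle pixels adjacent to the current object, keeping the reconstructed flow regime spatially continuous.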
Compressed sensing with gradient total variation for low-dose CBCT reconstruction
NASA Astrophysics Data System (ADS)
Seo, Chang-Woo; Cha, Bo Kyung; Jeon, Seongchae; Huh, Young; Park, Justin C.; Lee, Byeonghun; Baek, Junghee; Kim, Eunyoung
2015-06-01
This paper describes the improvement of convergence speed obtained with gradient total variation (GTV) in compressed sensing (CS) for low-dose cone-beam computed tomography (CBCT) reconstruction. We derive a fast algorithm for constrained total variation (TV)-based reconstruction from a minimum number of noisy projections. To achieve this we combine the GTV with a TV-norm regularization term to promote sparsity in the X-ray attenuation characteristics of the human body. The GTV is derived from the TV and is more computationally efficient and faster to converge to a desired solution. The numerical algorithm is simple and achieves relatively fast convergence. We apply a gradient projection algorithm that seeks a solution iteratively in the direction of the projected gradient while enforcing non-negativity of the found solution. In comparison with the Feldkamp, Davis, and Kress (FDK) and conventional TV algorithms, the proposed GTV algorithm converged in ≤18 iterations, whereas the original TV algorithm needed at least 34 iterations, when reconstructing the chest phantom images from 50% fewer projections than the FDK algorithm. Future investigation includes improving imaging quality, particularly regarding X-ray cone-beam scatter and motion artifacts in CBCT reconstruction.
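A minimal sketch of such a projected-gradient iteration for a TV-regularized, non-negative least-squares problem is shown below; the smoothed TV gradient, crude boundary handling, fixed step size and placeholder A/At projector callables are illustrative assumptions rather than the GTV formulation itself:

    import numpy as np

    def tv_grad(x, eps=1e-8):
        # Gradient of a smoothed isotropic TV term for a 2-D image (schematic boundary handling).
        dx = np.diff(x, axis=1, append=x[:, -1:])
        dy = np.diff(x, axis=0, append=x[-1:, :])
        mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
        px, py = dx / mag, dy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        return -div

    def gradient_projection(x0, A, At, b, lam, step, n_iter=50):
        # Minimize 0.5*||Ax - b||^2 + lam*TV(x) subject to x >= 0 by projected gradient descent.
        x = x0.copy()
        for _ in range(n_iter):
            g = At(A(x) - b) + lam * tv_grad(x)
            x = np.maximum(x - step * g, 0.0)   # project onto the non-negative orthant
        return x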
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Qiaofeng; Sawatzky, Alex; Anastasio, Mark A., E-mail: anastasio@wustl.edu
Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets.
Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A.
2016-01-01
Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets. PMID:27036582
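For readers unfamiliar with the baseline being accelerated in these records, a textbook FISTA iteration (Beck-Teboulle style) for an l1-penalized least-squares problem is sketched below; the papers replace the plain gradient step with an OS-SART-preconditioned subproblem and GPU implementations, neither of which is reproduced here. A/At are placeholder forward/adjoint projector callables and L an estimate of the Lipschitz constant of the data-fidelity gradient:

    import numpy as np

    def soft(z, t):
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def fista(A, At, b, lam, L, x0, n_iter=100):
        # Minimizes 0.5*||Ax - b||^2 + lam*||x||_1 with Nesterov-style momentum.
        x_prev = x0.copy()
        y = x0.copy()
        t = 1.0
        for _ in range(n_iter):
            x = soft(y - At(A(y) - b) / L, lam / L)          # proximal gradient step
            t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = x + ((t - 1.0) / t_next) * (x - x_prev)      # extrapolation (momentum) step
            x_prev, t = x, t_next
        return x_prev

The extrapolation sequence t is what gives FISTA its O(1/k^2) decay of the objective, the accelerated rate the abstracts refer to.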
Comparison and analysis of nonlinear algorithms for compressed sensing in MRI.
Yu, Yeyang; Hong, Mingjian; Liu, Feng; Wang, Hua; Crozier, Stuart
2010-01-01
Compressed sensing (CS) theory has been recently applied in Magnetic Resonance Imaging (MRI) to accelerate the overall imaging process. In the CS implementation, various algorithms have been used to solve the nonlinear equation system for better image quality and reconstruction speed. However, there are no explicit criteria for an optimal CS algorithm selection in the practical MRI application. A systematic and comparative study of those commonly used algorithms is therefore essential for the implementation of CS in MRI. In this work, three typical algorithms, namely the Gradient Projection for Sparse Reconstruction (GPSR) algorithm, the interior-point algorithm (l1_ls), and the Stagewise Orthogonal Matching Pursuit (StOMP) algorithm, are compared and investigated in three different imaging scenarios: brain, angiogram and phantom imaging. The algorithms' performances are characterized in terms of image quality and reconstruction speed. The results show that the performance of the CS algorithms is case sensitive; overall, the StOMP algorithm offers the best solution in imaging quality, while the GPSR algorithm is the most efficient one among the three methods. In the next step, the algorithm performances and characteristics will be experimentally explored. It is hoped that this research will further support the applications of CS in MRI.
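Of the three solvers compared, StOMP is the simplest to sketch. The version below operates on an explicit sensing matrix for clarity (in CS-MRI the matrix-vector products would be undersampled Fourier and sparsifying transforms), and the stage count and threshold rule are illustrative assumptions:

    import numpy as np

    def stomp(A, y, n_stages=10, thresh=2.0):
        # Stagewise OMP: at each stage keep every column whose correlation with the
        # residual exceeds a threshold, then re-fit by least squares on the support.
        m, n = A.shape
        support = np.zeros(n, dtype=bool)
        x = np.zeros(n)
        r = y.copy()
        for _ in range(n_stages):
            c = A.T @ r
            sigma = np.linalg.norm(r) / np.sqrt(m)          # rough noise scale of the residual
            new = np.abs(c) > thresh * sigma
            if not new.any():
                break
            support |= new
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            x = np.zeros(n)
            x[support] = coef
            r = y - A @ x
        return x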
Solomon, Justin; Mileto, Achille; Nelson, Rendon C; Roy Choudhury, Kingshuk; Samei, Ehsan
2016-04-01
To determine if radiation dose and reconstruction algorithm affect the computer-based extraction and analysis of quantitative imaging features in lung nodules, liver lesions, and renal stones at multi-detector row computed tomography (CT). Retrospective analysis of data from a prospective, multicenter, HIPAA-compliant, institutional review board-approved clinical trial was performed by extracting 23 quantitative imaging features (size, shape, attenuation, edge sharpness, pixel value distribution, and texture) of lesions on multi-detector row CT images of 20 adult patients (14 men, six women; mean age, 63 years; range, 38-72 years) referred for known or suspected focal liver lesions, lung nodules, or kidney stones. Data were acquired between September 2011 and April 2012. All multi-detector row CT scans were performed at two different radiation dose levels; images were reconstructed with filtered back projection, adaptive statistical iterative reconstruction, and model-based iterative reconstruction (MBIR) algorithms. A linear mixed-effects model was used to assess the effect of radiation dose and reconstruction algorithm on extracted features. Among the 23 imaging features assessed, radiation dose had a significant effect on five, three, and four of the features for liver lesions, lung nodules, and renal stones, respectively (P < .002 for all comparisons). Adaptive statistical iterative reconstruction had a significant effect on three, one, and one of the features for liver lesions, lung nodules, and renal stones, respectively (P < .002 for all comparisons). MBIR reconstruction had a significant effect on nine, 11, and 15 of the features for liver lesions, lung nodules, and renal stones, respectively (P < .002 for all comparisons). Of note, the measured size of lung nodules and renal stones with MBIR was significantly different than those for the other two algorithms (P < .002 for all comparisons). Although lesion texture was significantly affected by the reconstruction algorithm used (average of 3.33 features affected by MBIR throughout lesion types; P < .002, for all comparisons), no significant effect of the radiation dose setting was observed for all but one of the texture features (P = .002-.998). Radiation dose settings and reconstruction algorithms affect the extraction and analysis of quantitative imaging features in lesions at multi-detector row CT.
Chen, Bo; Bian, Zhaoying; Zhou, Xiaohui; Chen, Wensheng; Ma, Jianhua; Liang, Zhengrong
2018-04-12
Total variation (TV) minimization for sparse-view x-ray computed tomography (CT) reconstruction has been widely explored to reduce radiation dose. However, due to the piecewise constant assumption of the TV model, the reconstructed images often suffer from over-smoothness at the image edges. To mitigate this drawback of TV minimization, we present a Mumford-Shah total variation (MSTV) minimization algorithm in this paper. The presented MSTV model is derived by integrating TV minimization and Mumford-Shah segmentation. Subsequently, a penalized weighted least-squares (PWLS) scheme with MSTV is developed for sparse-view CT reconstruction. For simplicity, the proposed algorithm is named 'PWLS-MSTV'. To evaluate the performance of the present PWLS-MSTV algorithm, both qualitative and quantitative studies were conducted using a digital XCAT phantom and a physical phantom. Experimental results show that the present PWLS-MSTV algorithm has noticeable gains over the existing algorithms in terms of noise reduction, contrast-to-noise ratio and edge preservation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grzetic, S; Weldon, M; Noa, K
Purpose: This study compares the newly released MaxFOV Revision 1 EFOV reconstruction algorithm for the GE RT590 to the older WideView EFOV algorithm. Two radiotherapy overlays, from Q-fix and Diacor, are included in our analysis. Hounsfield Units (HU) generated with the WideView algorithm varied in the extended field (beyond 50 cm), and the scanned object's border varied from slice to slice. A validation of HU consistency between the two reconstruction algorithms is performed. Methods: A CatPhan 504 and a CIRS062 Electron Density Phantom were scanned on a GE RT590 CT-Simulator. The phantoms were positioned in multiple locations within the scan field of view so that some of the density plugs were outside the 50 cm reconstruction circle. Images were reconstructed using both the WideView and MaxFOV algorithms. The HU for each scan were characterized both as averages over a volume and in profile. Results: HU values are consistent between the two algorithms. Low-density material will have a slight increase in HU value and high-density material will have a slight decrease in HU value as the distance from the sweet spot increases. Border inconsistencies and shading artifacts are still present with the MaxFOV reconstruction on the Q-fix overlay but not the Diacor overlay (it should be noted that the Q-fix overlay is not currently GE-certified). HU values for water outside the 50 cm FOV are within 40 HU of reconstructions at the sweet spot of the scanner. CatPhan HU profiles show improvement with the MaxFOV algorithm as it approaches the scanner edge. Conclusion: The new MaxFOV algorithm improves the contour border for objects outside of the standard FOV when using a GE-approved tabletop. Air cavities outside of the standard FOV create inconsistent object borders. HU consistency is within GE specifications and the accuracy of the phantom edge improves. Further adjustments to the algorithm are being investigated by GE.
PSF reconstruction for Compton-based prompt gamma imaging
NASA Astrophysics Data System (ADS)
Jan, Meei-Ling; Lee, Ming-Wei; Huang, Hsuan-Ming
2018-02-01
Compton-based prompt gamma (PG) imaging has been proposed for in vivo range verification in proton therapy. However, several factors degrade the image quality of PG images, some of which are due to inherent properties of a Compton camera such as spatial resolution and energy resolution. Moreover, Compton-based PG imaging has a spatially variant resolution loss. In this study, we investigate the performance of the list-mode ordered subset expectation maximization algorithm with a shift-variant point spread function (LM-OSEM-SV-PSF) model. We also evaluate how well the PG images reconstructed using an SV-PSF model reproduce the distal falloff of the proton beam. The SV-PSF parameters were estimated from simulation data of point sources at various positions. Simulated PGs were produced in a water phantom irradiated with a proton beam. Compared to the LM-OSEM algorithm, the LM-OSEM-SV-PSF algorithm improved the quality of the reconstructed PG images and the estimation of PG falloff positions. In addition, the 4.44 and 5.25 MeV PG emissions can be accurately reconstructed using the LM-OSEM-SV-PSF algorithm. However, for the 2.31 and 6.13 MeV PG emissions, the LM-OSEM-SV-PSF reconstruction provides limited improvement. We also found that the LM-OSEM algorithm followed by a shift-variant Richardson-Lucy deconvolution could reconstruct images with quality visually similar to the LM-OSEM-SV-PSF-reconstructed images, while requiring shorter computation time.
Evaluating low pass filters on SPECT reconstructed cardiac orientation estimation
NASA Astrophysics Data System (ADS)
Dwivedi, Shekhar
2009-02-01
Low pass filters can affect the quality of clinical SPECT images by smoothing. Appropriate filter and parameter selection leads to optimum smoothing, which in turn leads to better quantification followed by correct diagnosis and accurate interpretation by the physician. This study aims at evaluating low pass filters on SPECT reconstruction algorithms. The criterion for evaluating the filters is the estimation of the SPECT-reconstructed cardiac azimuth and elevation angles. The low pass filters studied are Butterworth, Gaussian, Hamming, Hanning and Parzen. Experiments are conducted using three reconstruction algorithms, FBP (filtered back projection), MLEM (maximum likelihood expectation maximization) and OSEM (ordered subsets expectation maximization), on four gated cardiac patient projections (two patients with stress and rest projections). Each filter is applied with varying cutoff and order for each reconstruction algorithm (only Butterworth is used for MLEM and OSEM). The azimuth and elevation angles are calculated from the reconstructed volume, and the variation observed in the angles with varying filter parameters is reported. Our results demonstrate that the behavior of the Hamming, Hanning and Parzen filters (used with FBP) with varying cutoff is similar for all the datasets. The Butterworth filter (cutoff > 0.4) behaves in a similar fashion for all the datasets using all the algorithms, whereas with OSEM for a cutoff < 0.4 it fails to generate the cardiac orientation due to oversmoothing, and gives an unstable response with FBP and MLEM. This study on evaluating the effect of low pass filter cutoff and order on cardiac orientation using three different reconstruction algorithms provides an interesting insight into optimal selection of filter parameters.
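A minimal sketch of how such a low-pass window enters FBP: each projection row is filtered in the frequency domain by the ramp filter multiplied by, for example, a Butterworth response, so lowering the cutoff or raising the order smooths the reconstruction (the cutoff and order defaults below are arbitrary):

    import numpy as np

    def ramp_butterworth(n_det, cutoff=0.5, order=5):
        # Combined frequency response: ramp |f| times a Butterworth low-pass window.
        f = np.fft.fftfreq(n_det)                    # signed spatial frequencies, Nyquist = 0.5
        ramp = np.abs(f)
        butter = 1.0 / (1.0 + (np.abs(f) / cutoff) ** (2 * order))
        return ramp * butter

    def filter_projection(proj, cutoff=0.5, order=5):
        # Filter each projection row before backprojection.
        H = ramp_butterworth(proj.shape[-1], cutoff, order)
        return np.real(np.fft.ifft(np.fft.fft(proj, axis=-1) * H, axis=-1))

A Hamming, Hanning, Gaussian or Parzen window would simply replace the Butterworth factor in ramp_butterworth.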
Kim, Hyungjin; Park, Chang Min; Song, Yong Sub; Lee, Sang Min; Goo, Jin Mo
2014-05-01
To evaluate the influence of radiation dose settings and reconstruction algorithms on the measurement accuracy and reproducibility of semi-automated pulmonary nodule volumetry. CT scans were performed on a chest phantom containing various nodules (10 and 12 mm; +100, -630 and -800 HU) at 120 kVp with tube current-time settings of 10, 20, 50, and 100 mAs. Each CT was reconstructed using filtered back projection (FBP), iDose(4) and iterative model reconstruction (IMR). Semi-automated volumetry was performed by two radiologists using commercial volumetry software for nodules at each CT dataset. Noise, contrast-to-noise ratio and signal-to-noise ratio of CT images were also obtained. The absolute percentage measurement errors and differences were then calculated for volume and mass. The influence of radiation dose and reconstruction algorithm on measurement accuracy, reproducibility and objective image quality metrics was analyzed using generalized estimating equations. Measurement accuracy and reproducibility of nodule volume and mass were not significantly associated with CT radiation dose settings or reconstruction algorithms (p>0.05). Objective image quality metrics of CT images were superior with IMR compared with FBP or iDose(4) at all radiation dose settings (p<0.05). Semi-automated nodule volumetry can be applied to low- or ultralow-dose chest CT with use of a novel iterative reconstruction algorithm without losing measurement accuracy and reproducibility. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Minimal-scan filtered backpropagation algorithms for diffraction tomography.
Pan, X; Anastasio, M A
1999-12-01
The filtered backpropagation (FBPP) algorithm, originally developed by Devaney [Ultrason. Imaging 4, 336 (1982)], has been widely used for reconstructing images in diffraction tomography. It is generally known that the FBPP algorithm requires scattered data from a full angular range of 2π for exact reconstruction of a generally complex-valued object function. However, we reveal that one needs scattered data only over the angular range 0 ≤ φ ≤ 3π/2 for exact reconstruction of a generally complex-valued object function. Using this insight, we develop and analyze a family of minimal-scan filtered backpropagation (MS-FBPP) algorithms, which, unlike the FBPP algorithm, use scattered data acquired from view angles over the range 0 ≤ φ ≤ 3π/2. We show analytically that these MS-FBPP algorithms are mathematically identical to the FBPP algorithm. We also perform computer simulation studies for validation, demonstration, and comparison of these MS-FBPP algorithms. The numerical results in these simulation studies corroborate our theoretical assertions.
CUDA-based high-performance computing of the S-BPF algorithm with no-waiting pipelining
NASA Astrophysics Data System (ADS)
Deng, Lin; Yan, Bin; Chang, Qingmei; Han, Yu; Zhang, Xiang; Xi, Xiaoqi; Li, Lei
2015-10-01
The backprojection-filtration (BPF) algorithm has become a good solution for local reconstruction in cone-beam computed tomography (CBCT). However, the reconstruction speed of BPF is a severe limitation for clinical applications. The selective-backprojection filtration (S-BPF) algorithm is developed to improve the parallel performance of BPF by selective backprojection. Furthermore, the general-purpose graphics processing unit (GP-GPU) is a popular tool for accelerating the reconstruction. Much work has been performed aiming at the optimization of the cone-beam back-projection. As the cone-beam back-projection process becomes faster, the data transportation holds a much bigger time proportion in the reconstruction than before. This paper focuses on minimizing the total reconstruction time with the S-BPF algorithm by hiding the data transportation among hard disk, CPU and GPU. Based on an analysis of the S-BPF algorithm, several strategies are implemented: (1) asynchronous calls are used to overlap the execution of the CPU and GPU, (2) an innovative strategy is applied to obtain the DBP image so as to hide the transport time effectively, (3) two streams for data transportation and calculation are synchronized by cudaEvent in the inverse of the finite Hilbert transform on the GPU. Our main contribution is a smart reconstruction of the S-BPF algorithm with the GPU calculating continuously and no data transportation time cost. A 512³ volume is reconstructed in less than 0.7 second on a single Tesla-based K20 GPU from 182 projection views with 512² pixels per projection. The time cost of our implementation is about half of that without the overlap behavior.
Shi, Junwei; Zhang, Bin; Liu, Fei; Luo, Jianwen; Bai, Jing
2013-09-15
For the ill-posed fluorescent molecular tomography (FMT) inverse problem, the L1 regularization can protect the high-frequency information like edges while effectively reduce the image noise. However, the state-of-the-art L1 regularization-based algorithms for FMT reconstruction are expensive in memory, especially for large-scale problems. An efficient L1 regularization-based reconstruction algorithm based on nonlinear conjugate gradient with restarted strategy is proposed to increase the computational speed with low memory consumption. The reconstruction results from phantom experiments demonstrate that the proposed algorithm can obtain high spatial resolution and high signal-to-noise ratio, as well as high localization accuracy for fluorescence targets.
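A restarted nonlinear conjugate gradient for an l1-type penalty fits in a few lines and only keeps a handful of image-sized vectors in memory, which is the advantage highlighted above. The smoothed-l1 surrogate, the fixed step size (used here instead of a line search) and the restart period are illustrative assumptions, not details from the paper:

    import numpy as np

    def ncg_l1(A, At, b, lam, x0, n_iter=200, restart=30, eps=1e-6, step=1e-2):
        # Polak-Ribiere nonlinear CG on 0.5*||Ax - b||^2 + lam*sum(sqrt(x^2 + eps)).
        def grad(x):
            return At(A(x) - b) + lam * x / np.sqrt(x ** 2 + eps)
        x = x0.copy()
        g = grad(x)
        d = -g
        for k in range(1, n_iter + 1):
            x = x + step * d
            g_new = grad(x)
            if k % restart == 0:
                beta = 0.0                               # restart: fall back to steepest descent
            else:
                beta = max(0.0, float(g_new @ (g_new - g)) / max(float(g @ g), 1e-30))
            d = -g_new + beta * d
            g = g_new
        return x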
NASA Astrophysics Data System (ADS)
Ramlau, R.; Saxenhuber, D.; Yudytskiy, M.
2014-07-01
The problem of atmospheric tomography arises in ground-based telescope imaging with adaptive optics (AO), where one aims to compensate in real time for the rapidly changing optical distortions in the atmosphere. Many of these systems depend on a sufficient reconstruction of the turbulence profiles in order to obtain a good correction. Due to steadily growing telescope sizes, there is a strong increase in the computational load for atmospheric reconstruction with current methods, first and foremost the MVM. In this paper we present and compare three novel iterative reconstruction methods. The first iterative approach is the Finite Element-Wavelet Hybrid Algorithm (FEWHA), which combines wavelet-based techniques and conjugate gradient schemes to efficiently and accurately tackle the problem of atmospheric reconstruction. The method is extremely fast, highly flexible and yields superior quality. Another novel iterative reconstruction algorithm is the three-step approach, which decouples the problem into the reconstruction of the incoming wavefronts, the reconstruction of the turbulent layers (atmospheric tomography) and the computation of the best mirror correction (fitting step). For the atmospheric tomography problem within the three-step approach, the Kaczmarz algorithm and the gradient-based method have been developed. We present a detailed comparison of our reconstructors both in terms of quality and speed performance in the context of a Multi-Object Adaptive Optics (MOAO) system for the E-ELT setting on OCTOPUS, the ESO end-to-end simulation tool.
Chen, Delei; Goris, Bart; Bleichrodt, Folkert; Mezerji, Hamed Heidari; Bals, Sara; Batenburg, Kees Joost; de With, Gijsbertus; Friedrich, Heiner
2014-12-01
In electron tomography, the fidelity of the 3D reconstruction strongly depends on the employed reconstruction algorithm. In this paper, the properties of SIRT, TVM and DART reconstructions are studied with respect to having only a limited number of electrons available for imaging and applying different angular sampling schemes. A well-defined realistic model is generated, which consists of tubular domains within a matrix having slab geometry. Subsequently, the electron tomography workflow is simulated, from calculated tilt-series over experimental effects to reconstruction. In comparison with the model, the fidelity of each reconstruction method is evaluated qualitatively and quantitatively based on global and local edge profiles and the resolvable distance between particles. Results show that the performance of all reconstruction methods declines with the total electron dose. Overall, the SIRT algorithm is the most stable method and insensitive to changes in angular sampling. The TVM algorithm yields significantly sharper edges in the reconstruction, but the edge positions are strongly influenced by the tilt scheme and the tubular objects become thinned. The DART algorithm markedly suppresses the elongation artifacts along the beam direction and moreover segments the reconstruction, which can be considered a significant advantage for quantification. Finally, no advantage of TVM or DART in dealing with fewer projections was observed. Copyright © 2014 Elsevier B.V. All rights reserved.
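Of the three algorithms compared, SIRT has the simplest update rule; a minimal dense-matrix sketch (A is a placeholder system matrix, and the relaxation factor and iteration count are arbitrary) is:

    import numpy as np

    def sirt(A, b, x0, n_iter=50, relax=1.0):
        # SIRT with the usual row/column normalizations:
        # x <- x + relax * C * A^T * R * (b - A x), R = diag(1/row sums), C = diag(1/col sums).
        row_sums = A.sum(axis=1)
        col_sums = A.sum(axis=0)
        R = np.where(row_sums > 0, 1.0 / row_sums, 0.0)
        C = np.where(col_sums > 0, 1.0 / col_sums, 0.0)
        x = x0.copy()
        for _ in range(n_iter):
            x = x + relax * C * (A.T @ (R * (b - A @ x)))
        return x

TVM adds a total-variation penalty to this kind of data-consistency update, and DART alternates reconstruction with discrete segmentation, which is what produces the sharper edges and the segmented output noted above.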
Thuillard, Marc; Fraix-Burnet, Didier
2015-01-01
This article presents an innovative approach to phylogenies based on the reduction of multistate characters to binary-state characters. We show that the reduction-to-binary-characters approach can be applied to both character- and distance-based phylogenies and provides a unifying framework to explain simply and intuitively the similarities and differences between distance- and character-based phylogenies. Building on these results, this article gives a possible explanation of why phylogenetic trees obtained from a distance matrix or a set of characters are often quite reasonable despite lateral transfers of genetic material between taxa. In the presence of lateral transfers, outer planar networks furnish a better description of evolution than phylogenetic trees. We present a polynomial-time reconstruction algorithm for perfect outer planar networks with a fixed number of states, characters, and lateral transfers.
Kwon, Young-Hoo; Casebolt, Jeffrey B
2006-01-01
One of the most serious obstacles to accurate quantification of the underwater motion of a swimmer's body is image deformation caused by refraction. Refraction occurs at the water-air interface plane (glass) owing to the density difference. Camera calibration-reconstruction algorithms commonly used in aquatic research do not have the capability to correct this refraction-induced nonlinear image deformation and produce large reconstruction errors. The aim of this paper is to provide a thorough review of: the nature of the refraction-induced image deformation and its behaviour in underwater object-space plane reconstruction; the intrinsic shortcomings of the Direct Linear Transformation (DLT) method in underwater motion analysis; experimental conditions that interact with refraction; and alternative algorithms and strategies that can be used to improve the calibration-reconstruction accuracy. Although it is impossible to remove the refraction error completely in conventional camera calibration-reconstruction methods, it is possible to improve the accuracy to some extent by manipulating experimental conditions or calibration frame characteristics. Alternative algorithms, such as the localized DLT and the double-plane method are also available for error reduction. The ultimate solution for the refraction problem is to develop underwater camera calibration and reconstruction algorithms that have the capability to correct refraction.
Kwon, Young-Hoo; Casebolt, Jeffrey B
2006-07-01
One of the most serious obstacles to accurate quantification of the underwater motion of a swimmer's body is image deformation caused by refraction. Refraction occurs at the water-air interface plane (glass) owing to the density difference. Camera calibration-reconstruction algorithms commonly used in aquatic research do not have the capability to correct this refraction-induced nonlinear image deformation and produce large reconstruction errors. The aim of this paper is to provide a thorough review of: the nature of the refraction-induced image deformation and its behaviour in underwater object-space plane reconstruction; the intrinsic shortcomings of the Direct Linear Transformation (DLT) method in underwater motion analysis; experimental conditions that interact with refraction; and alternative algorithms and strategies that can be used to improve the calibration-reconstruction accuracy. Although it is impossible to remove the refraction error completely in conventional camera calibration-reconstruction methods, it is possible to improve the accuracy to some extent by manipulating experimental conditions or calibration frame characteristics. Alternative algorithms, such as the localized DLT and the double-plane method are also available for error reduction. The ultimate solution for the refraction problem is to develop underwater camera calibration and reconstruction algorithms that have the capability to correct refraction.
A Genetic Algorithm for the Generation of Packetization Masks for Robust Image Communication
Zapata-Quiñones, Katherine; Duran-Faundez, Cristian; Gutiérrez, Gilberto; Lecuire, Vincent; Arredondo-Flores, Christopher; Jara-Lipán, Hugo
2017-01-01
Image interleaving has proven to be an effective way to improve the robustness of image communication systems when resource limitations make reliable protocols unsuitable (e.g., in wireless camera sensor networks); however, the search for optimal interleaving patterns is scarcely tackled in the literature. In 2008, Rombaut et al. presented an interesting approach introducing a packetization mask generator based on Simulated Annealing (SA), including a cost function which allows assessing the suitability of a packetization pattern while avoiding extensive simulations. In this work, we present a complementary study of the non-trivial problem of generating optimal packetization patterns. We propose a genetic algorithm as an alternative to the cited work, adopting the mentioned cost function, and then compare it to the SA approach and a torus automorphism interleaver. In addition, we validate the cost function and provide results concerning its implications for the quality of reconstructed images. Several scenarios based on visual sensor network applications were tested in a computer application. Results in terms of the selected cost function and the image quality metric PSNR show that our algorithm presents results similar to the other approaches. Finally, we discuss the obtained results and comment on open research challenges. PMID:28452934
Common-mask guided image reconstruction (c-MGIR) for enhanced 4D cone-beam computed tomography.
Park, Justin C; Zhang, Hao; Chen, Yunmei; Fan, Qiyong; Li, Jonathan G; Liu, Chihray; Lu, Bo
2015-12-07
Compared to 3D cone beam computed tomography (3D CBCT), the image quality of commercially available four-dimensional (4D) CBCT is severely impaired due to the insufficient amount of projection data available for each phase. Since the traditional Feldkamp-Davis-Kress (FDK)-based algorithm is infeasible for reconstructing high quality 4D CBCT images with limited projections, investigators have developed several compressed-sensing (CS)-based algorithms to improve image quality. The aim of this study is to develop a novel algorithm which can provide better image quality than the FDK and other CS-based algorithms with limited projections. We named this algorithm 'the common mask guided image reconstruction' (c-MGIR). In c-MGIR, the unknown CBCT volume is mathematically modeled as a combination of phase-specific motion vectors and phase-independent static vectors. The common-mask matrix, which is the key concept behind the c-MGIR algorithm, separates the common static part across all phase images from the possible moving part in each phase image. The moving part and the static part of the volumes are then alternately updated by solving two sub-minimization problems iteratively. As the novel mathematical transformation allows the static volume and moving volumes to be updated (during each iteration) with global projections and the 'well' solved static volume respectively, the algorithm is able to reduce the noise and under-sampling artifact (an issue faced by other algorithms) to the maximum extent. To evaluate the performance of our proposed c-MGIR, we utilized imaging data from both numerical phantoms and a lung cancer patient. The quality of the images reconstructed with c-MGIR was compared with (1) the standard FDK algorithm, (2) the conventional total variation (CTV) based algorithm, (3) the prior image constrained compressed sensing (PICCS) algorithm, and (4) the motion-map constrained image reconstruction (MCIR) algorithm, respectively. To improve the efficiency of the algorithm, the code was implemented with a graphics processing unit for parallel processing purposes. Root mean square errors (RMSE) between the ground truth and reconstructed volumes of the numerical phantom were in the descending order of FDK, CTV, PICCS, MCIR, and c-MGIR for all phases. Specifically, the means and standard deviations of the RMSE of FDK, CTV, PICCS, MCIR and c-MGIR for all phases were 42.64 ± 6.5%, 3.63 ± 0.83%, 1.31 ± 0.09%, 0.86 ± 0.11% and 0.52 ± 0.02%, respectively. The image quality of the patient case also indicated the superiority of c-MGIR compared to the other algorithms. The results indicated that clinically viable 4D CBCT images can be reconstructed while requiring no more projection data than a typical clinical 3D CBCT scan. This makes c-MGIR a potential online reconstruction algorithm for 4D CBCT, which can provide much better image quality than other available algorithms, while requiring less dose and potentially less scanning time.
Characterizing isolated attosecond pulses with angular streaking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Siqi; Guo, Zhaoheng; Coffee, Ryan N.
Here, we present a reconstruction algorithm for isolated attosecond pulses, which exploits the phase dependent energy modulation of a photoelectron ionized in the presence of a strong laser field. The energy modulation due to a circularly polarized laser field is manifest strongly in the angle-resolved photoelectron momentum distribution, allowing for complete reconstruction of the temporal and spectral profile of an attosecond burst. We show that this type of reconstruction algorithm is robust against counting noise and suitable for single-shot experiments. This algorithm holds potential for a variety of applications for attosecond pulse sources.
Characterizing isolated attosecond pulses with angular streaking
Li, Siqi; Guo, Zhaoheng; Coffee, Ryan N.; ...
2018-02-12
Here, we present a reconstruction algorithm for isolated attosecond pulses, which exploits the phase dependent energy modulation of a photoelectron ionized in the presence of a strong laser field. The energy modulation due to a circularly polarized laser field is manifest strongly in the angle-resolved photoelectron momentum distribution, allowing for complete reconstruction of the temporal and spectral profile of an attosecond burst. We show that this type of reconstruction algorithm is robust against counting noise and suitable for single-shot experiments. This algorithm holds potential for a variety of applications for attosecond pulse sources.
Image-based 3D reconstruction and virtual environmental walk-through
NASA Astrophysics Data System (ADS)
Sun, Jifeng; Fang, Lixiong; Luo, Ying
2001-09-01
We present a 3D reconstruction method which combines geometry-based modeling, image-based modeling and rendering techniques. The first component is an interactive geometry modeling method which recovers the basic geometry of the photographed scene. The second component is a model-based stereo algorithm. We discuss the image processing problems and algorithms of walking through a virtual space, and then design and implement a high-performance multi-threaded wandering algorithm. The applications range from architectural planning and archaeological reconstruction to virtual environments and cinematic special effects.
Accelerating Advanced MRI Reconstructions on GPUs
Stone, S.S.; Haldar, J.P.; Tsao, S.C.; Hwu, W.-m.W.; Sutton, B.P.; Liang, Z.-P.
2008-01-01
Computational acceleration on graphics processing units (GPUs) can make advanced magnetic resonance imaging (MRI) reconstruction algorithms attractive in clinical settings, thereby improving the quality of MR images across a broad spectrum of applications. This paper describes the acceleration of such an algorithm on NVIDIA's Quadro FX 5600. The reconstruction of a 3D image with 128³ voxels achieves up to 180 GFLOPS and requires just over one minute on the Quadro, while reconstruction on a quad-core CPU is twenty-one times slower. Furthermore, relative to the true image, the error exhibited by the advanced reconstruction is only 12%, while conventional reconstruction techniques incur error of 42%. PMID:21796230
Accelerating Advanced MRI Reconstructions on GPUs.
Stone, S S; Haldar, J P; Tsao, S C; Hwu, W-M W; Sutton, B P; Liang, Z-P
2008-10-01
Computational acceleration on graphics processing units (GPUs) can make advanced magnetic resonance imaging (MRI) reconstruction algorithms attractive in clinical settings, thereby improving the quality of MR images across a broad spectrum of applications. This paper describes the acceleration of such an algorithm on NVIDIA's Quadro FX 5600. The reconstruction of a 3D image with 128³ voxels achieves up to 180 GFLOPS and requires just over one minute on the Quadro, while reconstruction on a quad-core CPU is twenty-one times slower. Furthermore, relative to the true image, the error exhibited by the advanced reconstruction is only 12%, while conventional reconstruction techniques incur error of 42%.
Iterative image reconstruction for PROPELLER-MRI using the nonuniform fast fourier transform.
Tamhane, Ashish A; Anastasio, Mark A; Gui, Minzhi; Arfanakis, Konstantinos
2010-07-01
To investigate an iterative image reconstruction algorithm using the nonuniform fast Fourier transform (NUFFT) for PROPELLER (Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction) MRI. Numerical simulations, as well as experiments on a phantom and a healthy human subject were used to evaluate the performance of the iterative image reconstruction algorithm for PROPELLER, and compare it with that of conventional gridding. The trade-off between spatial resolution, signal to noise ratio, and image artifacts, was investigated for different values of the regularization parameter. The performance of the iterative image reconstruction algorithm in the presence of motion was also evaluated. It was demonstrated that, for a certain range of values of the regularization parameter, iterative reconstruction produced images with significantly increased signal to noise ratio, reduced artifacts, for similar spatial resolution, compared with gridding. Furthermore, the ability to reduce the effects of motion in PROPELLER-MRI was maintained when using the iterative reconstruction approach. An iterative image reconstruction technique based on the NUFFT was investigated for PROPELLER MRI. For a certain range of values of the regularization parameter, the new reconstruction technique may provide PROPELLER images with improved image quality compared with conventional gridding. (c) 2010 Wiley-Liss, Inc.
Iterative Image Reconstruction for PROPELLER-MRI using the NonUniform Fast Fourier Transform
Tamhane, Ashish A.; Anastasio, Mark A.; Gui, Minzhi; Arfanakis, Konstantinos
2013-01-01
Purpose To investigate an iterative image reconstruction algorithm using the non-uniform fast Fourier transform (NUFFT) for PROPELLER (Periodically Rotated Overlapping parallEL Lines with Enhanced Reconstruction) MRI. Materials and Methods Numerical simulations, as well as experiments on a phantom and a healthy human subject were used to evaluate the performance of the iterative image reconstruction algorithm for PROPELLER, and compare it to that of conventional gridding. The trade-off between spatial resolution, signal to noise ratio, and image artifacts, was investigated for different values of the regularization parameter. The performance of the iterative image reconstruction algorithm in the presence of motion was also evaluated. Results It was demonstrated that, for a certain range of values of the regularization parameter, iterative reconstruction produced images with significantly increased SNR, reduced artifacts, for similar spatial resolution, compared to gridding. Furthermore, the ability to reduce the effects of motion in PROPELLER-MRI was maintained when using the iterative reconstruction approach. Conclusion An iterative image reconstruction technique based on the NUFFT was investigated for PROPELLER MRI. For a certain range of values of the regularization parameter the new reconstruction technique may provide PROPELLER images with improved image quality compared to conventional gridding. PMID:20578028
Anatomy assisted PET image reconstruction incorporating multi-resolution joint entropy
NASA Astrophysics Data System (ADS)
Tang, Jing; Rahmim, Arman
2015-01-01
A promising approach in PET image reconstruction is to incorporate high resolution anatomical information (measured from MR or CT) taking the anato-functional similarity measures such as mutual information or joint entropy (JE) as the prior. These similarity measures only classify voxels based on intensity values, while neglecting structural spatial information. In this work, we developed an anatomy-assisted maximum a posteriori (MAP) reconstruction algorithm wherein the JE measure is supplied by spatial information generated using wavelet multi-resolution analysis. The proposed wavelet-based JE (WJE) MAP algorithm involves calculation of derivatives of the subband JE measures with respect to individual PET image voxel intensities, which we have shown can be computed very similarly to how the inverse wavelet transform is implemented. We performed a simulation study with the BrainWeb phantom creating PET data corresponding to different noise levels. Realistically simulated T1-weighted MR images provided by BrainWeb modeling were applied in the anatomy-assisted reconstruction with the WJE-MAP algorithm and the intensity-only JE-MAP algorithm. Quantitative analysis showed that the WJE-MAP algorithm performed similarly to the JE-MAP algorithm at low noise level in the gray matter (GM) and white matter (WM) regions in terms of noise versus bias tradeoff. When noise increased to medium level in the simulated data, the WJE-MAP algorithm started to surpass the JE-MAP algorithm in the GM region, which is less uniform with smaller isolated structures compared to the WM region. In the high noise level simulation, the WJE-MAP algorithm presented clear improvement over the JE-MAP algorithm in both the GM and WM regions. In addition to the simulation study, we applied the reconstruction algorithms to real patient studies involving DPA-173 PET data and Florbetapir PET data with corresponding T1-MPRAGE MRI images. Compared to the intensity-only JE-MAP algorithm, the WJE-MAP algorithm resulted in comparable regional mean values to those from the maximum likelihood algorithm while reducing noise. Achieving robust performance in various noise-level simulation and patient studies, the WJE-MAP algorithm demonstrates its potential in clinical quantitative PET imaging.
Krohn, Thomas; Birmes, Anita; Winz, Oliver H; Drude, Natascha I; Mottaghy, Felix M; Behrendt, Florian F; Verburg, Frederik A
2017-04-01
To investigate whether the numbers of lymph node metastases and coeliac ganglia delineated on [68Ga]PSMA-HBED-CC PET/CT scans differ among datasets generated using different reconstruction algorithms. Data were reconstructed using the BLOB-OS-TF, BLOB-OS and 3D-RAMLA algorithms. All reconstructions were assessed by two nuclear medicine physicians for the number of pelvic/paraaortal lymph node metastases as well as the number of coeliac ganglia. Standardized uptake values (SUV) were also calculated in different regions. At least one [68Ga]PSMA-HBED-CC PET/CT-positive pelvic or paraaortal lymph node metastasis was found in 49 and 35 patients using the BLOB-OS-TF algorithm, in 42 and 33 patients using the BLOB-OS algorithm, and in 41 and 31 patients using the 3D-RAMLA algorithm, respectively, and a positive ganglion was found in 92, 59 and 24 of 100 patients using the three algorithms, respectively. Quantitatively, the SUVmean and SUVmax were significantly higher with the BLOB-OS algorithm than with either the BLOB-OS-TF or the 3D-RAMLA algorithm in all measured regions (p < 0.001 for all comparisons). The differences between the SUVs with the BLOB-OS-TF and 3D-RAMLA algorithms were not significant in the aorta (SUVmean, p = 0.93; SUVmax, p = 0.97) but were significant in all other regions (p < 0.001 in all cases). The SUVmean ganglion/gluteus ratio was significantly higher with the BLOB-OS-TF algorithm than with either the BLOB-OS or the 3D-RAMLA algorithm and was significantly higher with the BLOB-OS than with the 3D-RAMLA algorithm (p < 0.001 in all cases). The results of [68Ga]PSMA-HBED-CC PET/CT are affected by the reconstruction algorithm used. The highest number of lesions and physiological structures will be visualized using a modern algorithm employing time-of-flight information.
Hudson, H M; Ma, J; Green, P
1994-01-01
Many algorithms for medical image reconstruction adopt versions of the expectation-maximization (EM) algorithm. In this approach, parameter estimates are obtained which maximize a complete data likelihood or penalized likelihood, in each iteration. Implicitly (and sometimes explicitly) penalized algorithms require smoothing of the current reconstruction in the image domain as part of their iteration scheme. In this paper, we discuss alternatives to EM which adapt Fisher's method of scoring (FS) and other methods for direct maximization of the incomplete data likelihood. Jacobi and Gauss-Seidel methods for non-linear optimization provide efficient algorithms applying FS in tomography. One approach uses smoothed projection data in its iterations. We investigate the convergence of Jacobi and Gauss-Seidel algorithms with clinical tomographic projection data.
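For reference, the EM baseline that this abstract contrasts with Fisher-scoring alternatives is the familiar multiplicative MLEM update; a minimal dense-matrix sketch (A is a placeholder system matrix of detector bins by voxels, y the measured counts):

    import numpy as np

    def mlem(A, y, x0, n_iter=50):
        # Multiplicative EM update: preserves non-negativity and increases the
        # Poisson likelihood at every pass.
        sens = A.sum(axis=0)                       # sensitivity image, sum_i a_ij
        x = x0.copy()
        for _ in range(n_iter):
            proj = A @ x
            ratio = np.where(proj > 0, y / proj, 0.0)
            x = x / np.maximum(sens, 1e-12) * (A.T @ ratio)
        return x

Fisher scoring instead takes Newton-like steps using the expected information matrix, and the Jacobi and Gauss-Seidel variants discussed above are ways of applying those steps without forming and inverting the full matrix.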
Hsieh, Jiang; Nilsen, Roy A.; McOlash, Scott M.
2006-01-01
A three-dimensional (3D) weighted helical cone beam filtered backprojection (CB-FBP) algorithm (namely, the original 3D weighted helical CB-FBP algorithm) has already been proposed to reconstruct images from projection data acquired along a helical trajectory in angular ranges up to [0, 2π]. However, an overscan is usually employed in the clinic to reconstruct tomographic images with superior noise characteristics at the most challenging anatomic structures, such as head and spine, extremity imaging, and CT angiography as well. To obtain the best achievable noise characteristics or dose efficiency in a helical overscan, we extended the 3D weighted helical CB-FBP algorithm to handle helical pitches that are smaller than 1:1 (namely, the extended 3D weighted helical CB-FBP algorithm). By decomposing a helical overscan with an angular range of [0, 2π + Δβ] into a union of full scans corresponding to an angular range of [0, 2π], the extended 3D weighting function is a summation of all 3D weighting functions corresponding to each full scan. An experimental evaluation shows that the extended 3D weighted helical CB-FBP algorithm can improve the noise characteristics or dose efficiency of the 3D weighted helical CB-FBP algorithm at a helical pitch smaller than 1:1, while its reconstruction accuracy and computational efficiency are maintained. It is believed that such an efficient CB reconstruction algorithm that can provide superior noise characteristics or dose efficiency at low helical pitches may find extensive applications in CT medical imaging. PMID:23165031
Comparison of Reconstruction and Control algorithms on the ESO end-to-end simulator OCTOPUS
NASA Astrophysics Data System (ADS)
Montilla, I.; Béchet, C.; Lelouarn, M.; Correia, C.; Tallon, M.; Reyes, M.; Thiébaut, É.
Extremely Large Telescopes are very challenging concerning their Adaptive Optics requirements. Their diameters, the specifications demanded by the science for which they are being designed, and the planned use of Extreme Adaptive Optics systems imply a huge increase in the number of degrees of freedom in the deformable mirrors. It is necessary to study new reconstruction algorithms to implement the real-time control in Adaptive Optics at the required speed. We have studied the performance, applied to the case of the European ELT, of three different algorithms: the matrix-vector multiplication (MVM) algorithm, considered as a reference; the Fractal Iterative Method (FrIM); and the Fourier Transform Reconstructor (FTR). The algorithms have been tested on ESO's OCTOPUS software, which simulates the atmosphere, the deformable mirror, the sensor and the closed-loop control. The MVM is the default reconstruction and control method implemented in OCTOPUS, but it scales as O(N^2) operations per loop, so it is not considered a fast algorithm for wave-front reconstruction and control on an Extremely Large Telescope. The two other methods are the fast algorithms studied in the E-ELT Design Study. The performance, as well as their response in the presence of noise and under various atmospheric conditions, has been compared using a Single Conjugate Adaptive Optics configuration for a 42 m diameter ELT, with a total of 5402 actuators. These comparisons, made on a common simulator, highlight the pros and cons of the various methods and give us a better understanding of the type of reconstruction algorithm that an ELT demands.
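For orientation, the MVM reconstructor used as the reference above amounts to applying a precomputed control matrix to the measured wavefront slopes once per loop cycle. The numpy sketch below uses toy matrix sizes and a plain pseudo-inverse reconstructor; it only illustrates the O(N^2) matrix-vector step and is not the OCTOPUS implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes; the E-ELT SCAO case discussed above has ~5402 actuators.
n_slopes, n_act = 400, 200

# Interaction (poke) matrix D: actuator commands -> sensor slopes.
D = rng.standard_normal((n_slopes, n_act))

# Reconstructor R: typically a (regularized) pseudo-inverse of D, computed offline.
R = np.linalg.pinv(D)

# Real-time loop: one matrix-vector multiplication per frame, hence O(N^2) cost.
slopes = rng.standard_normal(n_slopes)   # measured wavefront slopes
commands = R @ slopes                    # deformable-mirror commands
```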
Inference of neuronal network spike dynamics and topology from calcium imaging data
Lütcke, Henry; Gerhard, Felipe; Zenke, Friedemann; Gerstner, Wulfram; Helmchen, Fritjof
2013-01-01
Two-photon calcium imaging enables functional analysis of neuronal circuits by inferring action potential (AP) occurrence (“spike trains”) from cellular fluorescence signals. It remains unclear how experimental parameters such as signal-to-noise ratio (SNR) and acquisition rate affect spike inference and whether additional information about network structure can be extracted. Here we present a simulation framework for quantitatively assessing how well spike dynamics and network topology can be inferred from noisy calcium imaging data. For simulated AP-evoked calcium transients in neocortical pyramidal cells, we analyzed the quality of spike inference as a function of SNR and data acquisition rate using a recently introduced peeling algorithm. Given experimentally attainable values of SNR and acquisition rate, neural spike trains could be reconstructed accurately and with up to millisecond precision. We then applied statistical neuronal network models to explore how remaining uncertainties in spike inference affect estimates of network connectivity and topological features of network organization. We define the experimental conditions suitable for inferring whether the network has a scale-free structure and determine how well hub neurons can be identified. Our findings provide a benchmark for future calcium imaging studies that aim to reliably infer neuronal network properties. PMID:24399936
Angelis, G I; Reader, A J; Markiewicz, P J; Kotasidis, F A; Lionheart, W R; Matthews, J C
2013-08-07
Recent studies have demonstrated the benefits of a resolution model within iterative reconstruction algorithms in an attempt to account for effects that degrade the spatial resolution of the reconstructed images. However, these algorithms suffer from slower convergence rates, compared to algorithms where no resolution model is used, due to the additional need to solve an image deconvolution problem. In this paper, a recently proposed algorithm, which decouples the tomographic and image deconvolution problems within an image-based expectation maximization (EM) framework, was evaluated. This separation is convenient, because more computational effort can be placed on the image deconvolution problem and therefore accelerate convergence. Since the computational cost of solving the image deconvolution problem is relatively small, multiple image-based EM iterations do not significantly increase the overall reconstruction time. The proposed algorithm was evaluated using 2D simulations, as well as measured 3D data acquired on the high-resolution research tomograph. Results showed that bias reduction can be accelerated by interleaving multiple iterations of the image-based EM algorithm solving the resolution model problem, with a single EM iteration solving the tomographic problem. Significant improvements were observed particularly for voxels that were located on the boundaries between regions of high contrast within the object being imaged and for small regions of interest, where resolution recovery is usually more challenging. Minor differences were observed using the proposed nested algorithm, compared to the single iteration normally performed, when an optimal number of iterations is performed for each algorithm. However, using the proposed nested approach convergence is significantly accelerated, enabling reconstruction using far fewer tomographic iterations (up to 70% fewer iterations for small regions). Nevertheless, the optimal number of nested image-based EM iterations is hard to define and should be selected according to the given application.
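The interleaving described above can be pictured as one tomographic EM update of the blurred image followed by several cheap image-space deconvolution (Richardson-Lucy) updates. The sketch below is a loose schematic under simplified assumptions (a dense system matrix A, an isotropic Gaussian point spread function, hypothetical iteration counts), not the algorithm evaluated in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def em_tomo_update(A, y, z, eps=1e-12):
    """Standard EM update of the blurred image z from the measured sinogram y."""
    ratio = y / np.maximum(A @ z, eps)
    return z * (A.T @ ratio) / np.maximum(A.T @ np.ones_like(y), eps)

def rl_deconv_update(x, z, sigma, eps=1e-12):
    """One image-space Richardson-Lucy step fitting blur(x) to the target z."""
    blurred = gaussian_filter(x, sigma)
    return x * gaussian_filter(z / np.maximum(blurred, eps), sigma)

def nested_em(A, y, shape, sigma=1.0, n_outer=20, n_inner=5):
    """Interleave one tomographic EM update with n_inner deconvolution updates."""
    x = np.ones(shape)                    # current deblurred image estimate
    for _ in range(n_outer):
        z = gaussian_filter(x, sigma)     # blurred image implied by x
        z = em_tomo_update(A, y, z.ravel()).reshape(shape)
        for _ in range(n_inner):          # cheap image-based EM iterations
            x = rl_deconv_update(x, z, sigma)
    return x
```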
Functional brain networks reconstruction using group sparsity-regularized learning.
Zhao, Qinghua; Li, Will X Y; Jiang, Xi; Lv, Jinglei; Lu, Jianfeng; Liu, Tianming
2018-06-01
Investigating functional brain networks and patterns using sparse representation of fMRI data has received significant interest in the neuroimaging community. It has been reported that sparse representation is effective in reconstructing concurrent and interactive functional brain networks. To date, most data-driven network reconstruction approaches rarely take anatomical structures, which are the substrate of brain function, into consideration. Furthermore, it has rarely been explored whether structured sparse representation with anatomical guidance could facilitate functional network reconstruction. To address this problem, in this paper, we propose to reconstruct brain networks utilizing the structure guided group sparse regression (S2GSR) in which 116 anatomical regions from the AAL template, as prior knowledge, are employed to guide the network reconstruction when performing sparse representation of whole-brain fMRI data. Specifically, we extract fMRI signals from standard space aligned with the AAL template. Then, by learning a global over-complete dictionary and using the learned dictionary as a set of features (regressors), the group structured regression employs anatomical structures as group information to regress whole-brain signals. Finally, the decomposition coefficients matrix is mapped back to the brain volume to represent functional brain networks and patterns. We use the publicly available Human Connectome Project (HCP) Q1 dataset as the test bed, and the experimental results indicate that the proposed anatomically guided structured sparse representation is effective in reconstructing concurrent functional brain networks.
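Group-structured sparse regression of the kind described, with anatomical regions acting as groups over the dictionary regressors, can be sketched with proximal gradient descent and block soft-thresholding. The code below is a generic group-lasso sketch for a single signal with hypothetical sizes and group definitions; it is not the S2GSR implementation.

```python
import numpy as np

def group_sparse_regression(D, y, groups, lam=0.1, step=None, n_iter=200):
    """Minimize 0.5*||y - D c||^2 + lam * sum_g ||c_g||_2 by proximal gradient."""
    c = np.zeros(D.shape[1])
    if step is None:
        step = 1.0 / np.linalg.norm(D, 2) ** 2       # 1 / Lipschitz constant of the data term
    for _ in range(n_iter):
        c = c - step * (D.T @ (D @ c - y))           # gradient step on the data term
        for g in groups:                             # block soft-thresholding per group
            norm_g = np.linalg.norm(c[g])
            if norm_g > 0:
                c[g] *= max(0.0, 1.0 - step * lam / norm_g)
    return c

# toy usage: 8 dictionary atoms split into 2 "anatomical" groups
rng = np.random.default_rng(1)
D = rng.standard_normal((50, 8))
y = D[:, :4] @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.01 * rng.standard_normal(50)
groups = [np.arange(0, 4), np.arange(4, 8)]
coef = group_sparse_regression(D, y, groups)
```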
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naseri, M; Rajabi, H; Wang, J
Purpose: Respiration causes lesion smearing, image blurring and quality degradation, affecting lesion contrast and the ability to define correct lesion size. The spatial resolution of current multi-pinhole SPECT (MPHS) scanners is sub-millimeter. Therefore, the effect of motion is more noticeable than in a conventional SPECT scanner. Gated imaging aims to reduce motion artifacts. A major issue in gating is the lack of statistics, and individual reconstructed frames are noisy. The increased noise in each frame deteriorates the quantitative accuracy of the MPHS images. The objective of this work is to enhance the image quality in 4D-MPHS imaging by 4D image reconstruction. Methods: The new algorithm requires deformation vector fields (DVFs) that are calculated by non-rigid Demons registration. The algorithm is based on the motion-incorporated version of the ordered subset expectation maximization (OSEM) algorithm. This iterative algorithm is capable of making full use of all projections to reconstruct each individual frame. To evaluate the performance of the proposed algorithm, a simulation study was conducted. A fast ray tracing method was used to generate MPHS projections of a 4D digital mouse phantom with a small tumor in the liver in eight different respiratory phases. To evaluate the potential of the 4D-OSEM algorithm, the tumor to liver activity ratio was compared with other image reconstruction methods including 3D-MPHS and post-reconstruction registration with Demons-derived DVFs. Results: Image quality of 4D-MPHS is greatly improved by the 4D-OSEM algorithm. When all projections are used to reconstruct a 3D-MPHS, motion blurring artifacts are present, leading to overestimation of the tumor size and 24% tumor contrast underestimation. This error is reduced to 16% and 10% for the post-reconstruction registration method and 4D-OSEM, respectively. Conclusion: The 4D-OSEM method can be used for motion correction in 4D-MPHS. The statistics and quantification are improved since all projection data are combined together to update the image.
SU-F-P-56: On a New Approach to Reconstruct the Patient Dose From Phantom Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bangtsson, E; Vries, W de
Purpose: The development of complex radiation treatment schemes emphasizes the need for advanced QA analysis methods to ensure patient safety. One such tool is the Delta4 DVH Anatomy software, where the patient dose is reconstructed from phantom measurements. Deviations in the measured dose are transferred to the patient anatomy and their clinical impact is evaluated in situ. Results from the original algorithm revealed weaknesses that may introduce artefacts in the reconstructed dose. These can lead to false negatives or obscure the effects of minor dose deviations from delivery failures. Here, we will present results from a new patient dose reconstruction algorithm. Methods: The main steps of the new algorithm are: (1) the dose delivered to a phantom is measured in a number of detector positions. (2) The measured dose is compared to an internally calculated dose distribution evaluated in said positions. The so-obtained dose difference is (3) used to calculate an energy fluence difference. This entity is (4) used as input to a patient dose correction calculation routine. Finally, the patient dose is reconstructed by adding said patient dose correction to the planned patient dose. The internal dose calculation in steps (2) and (4) is based on the Pencil Beam algorithm. Results: The new patient dose reconstruction algorithm has been tested on a number of patients, and the standard metrics dose deviation (DDev), distance-to-agreement (DTA) and Gamma index are improved when compared to the original algorithm. In one case, the Gamma index (3%/3 mm) increased from 72.9% to 96.6%. Conclusion: The patient dose reconstruction algorithm is improved. This leads to a reduction in non-physical artefacts in the reconstructed patient dose. As a consequence, the possibility to detect deviations in the dose that is delivered to the patient is improved. An increase in Gamma index for the PTV can be seen. The corresponding author is an employee of ScandiDos.
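As a rough illustration of the five-step pipeline listed above, the 1-D sketch below walks through the same sequence of operations with a toy Gaussian kernel standing in for the Pencil Beam dose engine and a deliberately crude step (3). Every numeric choice and the kernel itself are assumptions for illustration only; this is not the Delta4 DVH Anatomy software.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def dose_from_fluence(fluence, sigma=2.0):
    """Toy stand-in for the Pencil Beam dose calculation (1-D, Gaussian kernel)."""
    return gaussian_filter1d(fluence, sigma)

# (1) dose measured by the phantom detectors (simulated here with a 3% delivery error)
planned_fluence = np.ones(100)
measured_dose = dose_from_fluence(0.97 * planned_fluence)

# (2) internally calculated dose at the same detector positions
calculated_dose = dose_from_fluence(planned_fluence)
dose_difference = measured_dose - calculated_dose

# (3) energy fluence difference derived from the dose difference
#     (crudely approximated here as proportional to the dose difference)
fluence_difference = dose_difference

# (4) patient dose correction computed from the fluence difference
patient_dose_correction = dose_from_fluence(fluence_difference)

# (5) reconstructed patient dose = planned patient dose + correction
planned_patient_dose = dose_from_fluence(planned_fluence)
reconstructed_patient_dose = planned_patient_dose + patient_dose_correction
```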
A smoothed two- and three-dimensional interface reconstruction method
Mosso, Stewart; Garasi, Christopher; Drake, Richard
2008-04-22
The Patterned Interface Reconstruction algorithm reduces the discontinuity between material interfaces in neighboring computational elements. This smoothing improves the accuracy of the reconstruction for smooth bodies. The method can be used in two- and three-dimensional Cartesian and unstructured meshes. Planar interfaces will be returned for planar volume fraction distributions. Finally, the algorithm is second-order accurate for smooth volume fraction distributions.
BPF-type region-of-interest reconstruction for parallel translational computed tomography.
Wu, Weiwen; Yu, Hengyong; Wang, Shaoyu; Liu, Fenglin
2017-01-01
The objective of this study is to present and test a new ultra-low-cost linear scan based tomography architecture. Similar to linear tomosynthesis, the source and detector are translated in opposite directions and the data acquisition system targets a region-of-interest (ROI) to acquire data for image reconstruction. This kind of tomographic architecture was named parallel translational computed tomography (PTCT). In previous studies, filtered backprojection (FBP)-type algorithms were developed to reconstruct images from PTCT. However, the reconstructed ROI images from truncated projections have severe truncation artefacts. In order to overcome this limitation, in this study we propose two backprojection filtering (BPF)-type algorithms named MP-BPF and MZ-BPF to reconstruct ROI images from truncated PTCT data. A weight function is constructed to deal with data redundancy for multi-linear translation modes. Extensive numerical simulations are performed to evaluate the proposed MP-BPF and MZ-BPF algorithms for PTCT in fan-beam geometry. Qualitative and quantitative results demonstrate that the proposed BPF-type algorithms can not only more accurately reconstruct ROI images from truncated projections but also generate high-quality images for the entire image support in some circumstances.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andreyev, A.
Purpose: Compton cameras (CCs) use electronic collimation to reconstruct images of the activity distribution. Although this approach can greatly improve imaging efficiency, due to the complex geometry of the CC principle, image reconstruction with standard iterative algorithms, such as ordered subset expectation maximization (OSEM), can be very time-consuming, even more so if resolution recovery (RR) is implemented. We have previously shown that the origin ensemble (OE) algorithm can be used for the reconstruction of CC data. Here we propose a method of extending our OE algorithm to include RR. Methods: To validate the proposed algorithm we used Monte Carlo simulations of a CC composed of multiple layers of pixelated CZT detectors and designed for imaging small animals. A series of CC acquisitions of small hot spheres and the Derenzo phantom placed in air were simulated. Images obtained from (a) the exact data, (b) blurred data reconstructed without resolution recovery, and (c) blurred data reconstructed with resolution recovery were compared. Furthermore, the reconstructed contrast-to-background ratios were investigated using the phantom with nine spheres placed in a hot background. Results: Our simulations demonstrate that the proposed method allows for the recovery of the resolution loss that is due to imperfect accuracy of event detection. Additionally, tests of camera sensitivity corresponding to different detector configurations demonstrate that the proposed CC design has sensitivity comparable to PET. When the same number of events was considered, the computation time per iteration increased only by a factor of 2 when OE reconstruction with the resolution recovery correction was performed relative to the original OE algorithm. We estimate that the addition of resolution recovery to OSEM would increase reconstruction times by 2–3 orders of magnitude per iteration. Conclusions: The results of our tests demonstrate the improvement of image resolution provided by the OE reconstructions with resolution recovery. The quality of images and their contrast are similar to those obtained from the OE reconstructions from scans simulated with perfect energy and spatial resolutions.
Measuring the performance of super-resolution reconstruction algorithms
NASA Astrophysics Data System (ADS)
Dijk, Judith; Schutte, Klamer; van Eekeren, Adam W. M.; Bijl, Piet
2012-06-01
For many military operations situational awareness is of great importance. This situational awareness and related tasks such as Target Acquisition can be acquired using cameras, of which the resolution is an important characteristic. Super-resolution reconstruction algorithms can be used to improve the effective sensor resolution. In order to judge these algorithms and the conditions under which they operate best, performance evaluation methods are necessary. This evaluation, however, is not straightforward for several reasons. First of all, frequency-based evaluation techniques alone will not provide a correct answer, due to the fact that they are unable to discriminate between structure-related and noise-related effects. Secondly, most super-resolution packages perform additional image enhancement techniques such as noise reduction and edge enhancement. As these algorithms improve the results, they cannot be evaluated separately. Thirdly, a single high-resolution ground truth is rarely available. Therefore, evaluation of the differences in high resolution between the estimated high-resolution image and its ground truth is not straightforward. Fourthly, different artifacts can occur due to super-resolution reconstruction, which are not known beforehand and hence are difficult to evaluate. In this paper we present a set of new evaluation techniques to assess super-resolution reconstruction algorithms. Some of these evaluation techniques are derived from processing on dedicated (synthetic) imagery. Other evaluation techniques can be evaluated on both synthetic and natural images (real camera data). The result is a balanced set of evaluation algorithms that can be used to assess the performance of super-resolution reconstruction algorithms.
Lettau, Michael; Bendszus, Martin; Hähnel, Stefan
2013-06-01
Our aim was to evaluate the in vitro visualization of different carotid artery stents on angiographic CT (ACT). Of particular interest was the influence of stent orientation relative to the angiography system, assessed by measuring the artificial lumen narrowing (ALN) caused by the stent material within the stented vessel segment, to determine whether ACT can be used to detect restenosis within the stent. ACT appearances of 17 carotid artery stents of different designs and sizes (4.0 to 11.0 mm) were investigated in vitro. Stents were placed in different orientations to the angiography system. Standard algorithm image reconstruction and stent-optimized algorithm image reconstruction were performed. For each stent, ALN was calculated. With standard algorithm image reconstruction, ALN ranged from 19.0 to 43.6 %. With stent-optimized algorithm image reconstruction, ALN was significantly lower and ranged from 8.2 to 18.7 %. Stent struts could be visualized in all stents. Differences in ALN between the different stent orientations to the angiography system were not significant. ACT evaluation of vessel patency after stent placement is possible but is impaired by ALN. Stent orientation to the angiography system did not significantly influence ALN. Stent-optimized algorithm image reconstruction decreases ALN but further research is required to define the visibility of in-stent stenosis depending on image reconstruction.
NASA Astrophysics Data System (ADS)
Hosani, E. Al; Zhang, M.; Abascal, J. F. P. J.; Soleimani, M.
2016-11-01
Electrical capacitance tomography (ECT) is an imaging technology used to reconstruct the permittivity distribution within the sensing region. So far, ECT has been used primarily to image non-conductive media, since if the conductivity of the imaged object is high, the capacitance measuring circuit will be almost short-circuited by the conductive path and a clear image cannot be produced using the standard image reconstruction approaches. This paper tackles the problem of imaging metallic samples using conventional ECT systems by investigating the two main aspects of image reconstruction algorithms, namely the forward problem and the inverse problem. For the forward problem, two different methods to model the region of high conductivity in ECT are presented. For the inverse problem, three different algorithms to reconstruct the high-contrast images are examined. The first two methods are the linear single-step Tikhonov method and the iterative total variation regularization method, and use two sets of ECT data to reconstruct the image in time-difference mode. The third method, namely the level set method, uses absolute ECT measurements and was developed using a metallic forward model. The results indicate that the applications of conventional ECT systems can be extended to metal samples using the suggested algorithms and forward model, especially using a level set algorithm to find the boundary of the metal.
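The linear single-step Tikhonov method mentioned above produces a time-difference image in one shot from a precomputed sensitivity (Jacobian) matrix. The sketch below illustrates that closed-form step with hypothetical matrix sizes (66 independent capacitance measurements for a 12-electrode sensor, a 32x32 pixel grid) and a random Jacobian; it is not the code used in the paper.

```python
import numpy as np

def tikhonov_single_step(J, dc, alpha=1e-3):
    """Time-difference ECT image: g = (J^T J + alpha I)^{-1} J^T dc."""
    n_pixels = J.shape[1]
    return np.linalg.solve(J.T @ J + alpha * np.eye(n_pixels), J.T @ dc)

# toy usage: 66 capacitance measurements, 32x32 pixel grid
rng = np.random.default_rng(2)
J = rng.standard_normal((66, 32 * 32))           # sensitivity (Jacobian) matrix
dc = rng.standard_normal(66)                     # normalized capacitance change
g = tikhonov_single_step(J, dc).reshape(32, 32)  # permittivity change image
```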
NASA Astrophysics Data System (ADS)
Jiang, Kaili; Zhu, Jun; Tang, Bin
2017-12-01
Periodic nonuniform sampling occurs in many applications, and the Nyquist folding receiver (NYFR) is an efficient, low-complexity, and broadband spectrum sensing architecture. In this paper, we first derive that the radio frequency (RF) sample clock function of the NYFR is periodically nonuniform. Then, the classical results of periodic nonuniform sampling are applied to the NYFR. We extend the spectral reconstruction algorithm of the time-series decomposition model to the subsampling case by using the spectrum characteristics of the NYFR. The subsampling case is common for broadband spectrum surveillance. Finally, we take an LFM signal with large bandwidth as an example to verify the proposed algorithm and compare the spectral reconstruction algorithm with the orthogonal matching pursuit (OMP) algorithm.
2011-01-01
Background Inferring regulatory interactions between genes from transcriptomics time-resolved data, yielding reverse engineered gene regulatory networks, is of paramount importance to systems biology and bioinformatics studies. Accurate methods to address this problem can ultimately provide a deeper insight into the complexity, behavior, and functions of the underlying biological systems. However, the large number of interacting genes coupled with short and often noisy time-resolved read-outs of the system renders the reverse engineering a challenging task. Therefore, the development and assessment of methods which are computationally efficient, robust against noise, applicable to short time series data, and preferably capable of reconstructing the directionality of the regulatory interactions remains a pressing research problem with valuable applications. Results Here we perform the largest systematic analysis of a set of similarity measures and scoring schemes within the scope of the relevance network approach which are commonly used for gene regulatory network reconstruction from time series data. In addition, we define and analyze several novel measures and schemes which are particularly suitable for short transcriptomics time series. We also compare the considered 21 measures and 6 scoring schemes according to their ability to correctly reconstruct such networks from short time series data by calculating summary statistics based on the corresponding specificity and sensitivity. Our results demonstrate that rank and symbol based measures have the highest performance in inferring regulatory interactions. In addition, the proposed scoring scheme by asymmetric weighting has been shown to be valuable in reducing the number of false positive interactions. On the other hand, Granger causality as well as information-theoretic measures, frequently used in inference of regulatory networks, show low performance on the short time series analyzed in this study. Conclusions Our study is intended to serve as a guide for choosing a particular combination of similarity measures and scoring schemes suitable for reconstruction of gene regulatory networks from short time series data. We show that further improvement of algorithms for reverse engineering can be obtained if one considers measures that are rooted in the study of symbolic dynamics or ranks, in contrast to the application of common similarity measures which do not consider the temporal character of the employed data. Moreover, we establish that the asymmetric weighting scoring scheme together with symbol based measures (for low noise level) and rank based measures (for high noise level) are the most suitable choices. PMID:21771321
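As a concrete illustration of the relevance-network approach with a rank-based measure of the kind favored here, the sketch below builds an adjacency matrix by thresholding Spearman correlations between all gene pairs. The expression matrix, threshold, and sizes are hypothetical, and the scoring schemes (such as asymmetric weighting) benchmarked in the study are not included.

```python
import numpy as np
from scipy.stats import spearmanr

def relevance_network(expr, threshold=0.8):
    """expr: (time points x genes) matrix; returns a boolean adjacency matrix."""
    rho, _ = spearmanr(expr)                 # rank-based similarity, genes x genes
    adjacency = np.abs(rho) >= threshold     # keep strong (anti-)correlations
    np.fill_diagonal(adjacency, False)       # no self-links
    return adjacency

# toy usage: 10 time points, 5 genes, one deliberately correlated pair
rng = np.random.default_rng(3)
expr = rng.standard_normal((10, 5))
expr[:, 1] = expr[:, 0] + 0.1 * rng.standard_normal(10)
A = relevance_network(expr)
```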
NASA Astrophysics Data System (ADS)
Ingacheva, Anastasia; Chukalina, Marina; Khanipov, Timur; Nikolaev, Dmitry
2018-04-01
Motion blur caused by camera vibration is a common source of degradation in photographs. In this paper we study the problem of finding the point spread function (PSF) of a blurred image using the tomography technique. The PSF reconstruction result strongly depends on the particular tomography technique used. We present a tomography algorithm with regularization adapted specifically for this task. We use the algebraic reconstruction technique (ART algorithm) as the starting algorithm and introduce regularization. We use the conjugate gradient method for numerical implementation of the proposed approach. The algorithm is tested using a dataset which contains 9 kernels extracted from real photographs by the Adobe corporation, where the point spread function is known. We also investigate the influence of noise on the quality of image reconstruction and how the number of projections influences the magnitude of the reconstruction error.
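Regularized algebraic reconstruction solved with conjugate gradients, as outlined above, can be read as minimizing ||Wf - p||^2 + lambda*||f||^2 over the PSF f. The sketch below applies scipy's CG solver to the corresponding normal equations with a random toy system matrix; it is illustrative only and not the authors' implementation.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def regularized_reconstruction(W, p, lam=1e-2):
    """Solve (W^T W + lam I) f = W^T p with conjugate gradients."""
    n = W.shape[1]
    normal_op = LinearOperator((n, n),
                               matvec=lambda f: W.T @ (W @ f) + lam * f)
    f, info = cg(normal_op, W.T @ p, maxiter=500)
    return f

# toy usage: 40 projection measurements of an 8x8 PSF kernel
rng = np.random.default_rng(4)
W = rng.standard_normal((40, 64))        # projection (system) matrix
true_psf = np.zeros(64); true_psf[27] = 1.0
p = W @ true_psf + 0.01 * rng.standard_normal(40)
psf = regularized_reconstruction(W, p).reshape(8, 8)
```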
Improving Cancer Detection and Dose Efficiency in Dedicated Breast Cancer CT
2010-02-01
source trajectory and data truncation, which can however be solved with the back-projection filtration (BPF) algorithm [6,7]. I have used the BPF ... high to low radiation dose levels. I have investigated noise properties in images reconstructed by use of FDK and BPF algorithms at different noise ... analytic algorithms such as the FDK and BPF algorithms are applied to sparse-view data, the reconstructed images will contain artifacts such as streaks
Hofmann, Hannes G; Keck, Benjamin; Rohkohl, Christopher; Hornegger, Joachim
2011-01-01
Interventional reconstruction of 3-D volumetric data from C-arm CT projections is a computationally demanding task. Hardware optimization is not an option but mandatory for interventional image processing and, in particular, for image reconstruction due to the high demands on performance. Several groups have published fast analytical 3-D reconstruction on highly parallel hardware such as GPUs to mitigate this issue. The authors show that the performance of modern CPU-based systems is of the same order as that of current GPUs for static 3-D reconstruction and outperforms them for a recent motion compensated (3-D+time) image reconstruction algorithm. This work investigates two algorithms: static 3-D reconstruction as well as a recent motion compensated algorithm. The evaluation was performed using a standardized reconstruction benchmark, RABBITCT, to get comparable results, and two additional clinical data sets. The authors demonstrate for a parametric B-spline motion estimation scheme that the derivative computation, which requires many write operations to memory, performs poorly on the GPU and can highly benefit from modern CPU architectures with large caches. Moreover, on a 32-core Intel Xeon server system, the authors achieve linear scaling with the number of cores used and reconstruction times almost in the same range as current GPUs. Algorithmic innovations in the field of motion compensated image reconstruction may lead to a shift back to CPUs in the future. For analytical 3-D reconstruction, the authors show that the gap between GPUs and CPUs has become smaller. It can be performed in less than 20 s (on-the-fly) using a 32-core server.
Network reconstructions with partially available data
NASA Astrophysics Data System (ADS)
Zhang, Chaoyang; Chen, Yang; Hu, Gang
2017-06-01
Many practical systems in the natural and social sciences can be described by dynamical networks. Day by day we measure and accumulate huge amounts of data from these networks, which can be used to further our understanding of the world. The structures of the networks producing these data are often unknown. Consequently, understanding the structures of these networks from available data turns out to be one of the central issues in interdisciplinary fields, which is called the network reconstruction problem. In this paper, we consider problems of network reconstruction using partially available data and some situations where data availability is not sufficient for conventional network reconstruction. Furthermore, we propose to infer a subnetwork when only data from that subnetwork are available and the other nodes of the entire network are hidden; to depict group-group interactions in networks when only averages over groups of node variables are available; and to perform network reconstruction with known data of node variables only when networks are driven by both unknown internal fast-varying noises and unknown external slowly-varying signals. All these situations are expected to be common in practical systems, and the methods and results may be useful for real-world applications.
NASA Astrophysics Data System (ADS)
Ham, Woonchul; Song, Chulgyu
2017-05-01
In this paper, we propose a new three-dimensional stereo image reconstruction algorithm for a photoacoustic medical imaging system. We also introduce and discuss a new theoretical algorithm based on the physical concept of the Radon transform. The key concept of the proposed algorithm is to evaluate the possibility that an acoustic source exists within a searching region by using the geometric distance between each sensor element of the acoustic detector and the corresponding searching region, denoted by a grid. We derive the mathematical equation for the magnitude of this existence possibility, which can be used to implement the proposed algorithm. We derive the mathematical equations of the proposed algorithm for the one-dimensional sensing array case as well as the two-dimensional sensing array case. k-Wave simulation data are used to compare the image quality of the proposed algorithm with that of a conventional algorithm, in which the FFT must be used. From the k-Wave MATLAB simulation results, we confirm the effectiveness of the proposed reconstruction algorithm.
Strategy on energy saving reconstruction of distribution networks based on life cycle cost
NASA Astrophysics Data System (ADS)
Chen, Xiaofei; Qiu, Zejing; Xu, Zhaoyang; Xiao, Chupeng
2017-08-01
Because the funds for actual distribution network reconstruction projects are often limited, the cost-benefit model and the decision-making method are crucial for distribution network energy-saving reconstruction projects. From the perspective of life cycle cost (LCC), the research life cycle is first determined for the energy-saving reconstruction of distribution networks with multiple devices. Then, a new life cycle cost-benefit model for energy-saving reconstruction of distribution networks is developed, in which the modification schemes include distribution transformer replacement, line replacement and reactive power compensation. For the operation loss cost and maintenance cost, an operation cost model considering the influence of seasonal load characteristics and a segmental maintenance cost model for transformers are proposed. Finally, aiming at the highest energy-saving profit per LCC, a decision-making method is developed that also considers financial and technical constraints. The model and method are applied to a real distribution network reconstruction, and the results show that they are effective.
NASA Astrophysics Data System (ADS)
Szadkowski, Zbigniew; Głas, Dariusz; Pytel, Krzysztof; Wiedeński, Michał
2017-06-01
Neutrinos play a fundamental role in the understanding of the origin of ultrahigh-energy cosmic rays. They interact through charged and neutral currents in the atmosphere, generating extensive air showers. However, the very low rate of events potentially generated by neutrinos is a significant challenge for detection techniques and requires both sophisticated algorithms and high-resolution hardware. Air showers initiated by protons and muon neutrinos at various altitudes, angles, and energies were simulated in CORSIKA and the Auger OffLine event reconstruction platforms, giving analog-to-digital converter (ADC) patterns in Auger water Cherenkov detectors on the ground. The proton interaction cross section is high, so “old” proton showers start their development early in the atmosphere. In contrast to this, neutrinos can generate “young” showers deep in the atmosphere relatively close to the detectors. Differences between “old” proton and “young” neutrino showers are visible in the attenuation factors of the ADC waveforms. For the separation of “old” proton and “young” neutrino ADC traces, many three-layer artificial neural networks (ANNs) were tested. They were trained in MATLAB (in a dedicated way: only “old” proton and “young” neutrino showers were used as patterns) on simulated ADC traces according to the Levenberg-Marquardt algorithm. Unexpectedly, the recognition efficiency is found to be almost independent of the size of the networks. The ANN trigger based on a selected 8-6-1 network was tested in the Cyclone V E FPGA 5CEFA9F31I7, the heart of prototype front-end boards developed for testing new algorithms in the Pierre Auger surface detectors.
Cone-beam reconstruction for the two-circles-plus-one-line trajectory
NASA Astrophysics Data System (ADS)
Lu, Yanbin; Yang, Jiansheng; Emerson, John W.; Mao, Heng; Zhou, Tie; Si, Yuanzheng; Jiang, Ming
2012-05-01
The Kodak Image Station In-Vivo FX has an x-ray module with cone-beam configuration for radiographic imaging but lacks the functionality of tomography. To introduce x-ray tomography into the system, we choose the two-circles-plus-one-line trajectory by mounting one translation motor and one rotation motor. We establish a reconstruction algorithm by applying the M-line reconstruction method. Numerical studies and preliminary physical phantom experiment demonstrate the feasibility of the proposed design and reconstruction algorithm.
Fully 3D refraction correction dosimetry system.
Manjappa, Rakesh; Makki, S Sharath; Kumar, Rajesh; Vasu, Ram Mohan; Kanhirodan, Rajan
2016-02-21
The irradiation of selective regions in a polymer gel dosimeter results in an increase in optical density and refractive index (RI) at those regions. An optical tomography-based dosimeter depends on the rayline path through the dosimeter to estimate and reconstruct the dose distribution. The refraction of light passing through a dose region results in artefacts in the reconstructed images. These refraction errors are dependent on the scanning geometry and collection optics. We developed a fully 3D image reconstruction algorithm, algebraic reconstruction technique-refraction correction (ART-rc), that corrects for the refractive index mismatches present in a gel dosimeter scanner not only at the boundary, but also for any rayline refraction due to multiple dose regions inside the dosimeter. In this study, simulation and experimental studies have been carried out to reconstruct a 3D dose volume using 2D CCD measurements taken for various views. The study also focuses on the effectiveness of using different refractive-index matching media surrounding the gel dosimeter. Since the optical density is assumed to be low for a dosimeter, filtered backprojection is routinely used for reconstruction. We carry out the reconstructions using the conventional algebraic reconstruction technique (ART) and the refractive index corrected ART (ART-rc) algorithms. Reconstructions based on the FDK algorithm for cone-beam tomography have also been carried out for comparison. Line scanners and point detectors are used to obtain reconstructions plane by plane. Rays passing through a dose region with an RI mismatch do not reach the detector in the same plane, depending on the angle of incidence and RI. In the fully 3D scanning setup using 2D array detectors, light rays that undergo refraction are still collected and hence can still be accounted for in the reconstruction algorithm. It is found that, for the central region of the dosimeter, the usable radius using the ART-rc algorithm with water as the RI-matched medium is 71.8%, an increase of 6.4% compared to that achieved using the conventional ART algorithm. Smaller-diameter dosimeters are scanned in dry air by using a wide-angle lens that collects refracted light. The images reconstructed using cone-beam geometry are seen to deteriorate in some planes, as those regions are not scanned. Refraction correction is important and needs to be taken into consideration to achieve quantitatively accurate dose reconstructions. Refraction modeling is crucial in array-based scanners as it is not possible to identify refracted rays in the sinogram space.
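For context, the core ART update that ART-rc builds on is the classical Kaczmarz row-action step; the refraction-corrected ray tracing that is the paper's actual contribution is omitted here. A minimal sketch with a hypothetical ray matrix follows.

```python
import numpy as np

def art_sweep(A, b, x, relax=0.5):
    """One ART (Kaczmarz) sweep: project x onto each ray equation in turn."""
    for i in range(A.shape[0]):
        a_i = A[i]
        denom = a_i @ a_i
        if denom > 0:
            x = x + relax * (b[i] - a_i @ x) / denom * a_i
    return x

# toy usage: 3 rays through a 4-voxel "dose" image
A = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0, 0.0]])
b = np.array([2.0, 1.0, 1.5])
x = np.zeros(4)
for _ in range(20):
    x = art_sweep(A, b, x)
```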
Parallel heterogeneous architectures for efficient OMP compressive sensing reconstruction
NASA Astrophysics Data System (ADS)
Kulkarni, Amey; Stanislaus, Jerome L.; Mohsenin, Tinoosh
2014-05-01
Compressive Sensing (CS) is a novel scheme in which a signal that is sparse in a known transform domain can be reconstructed using fewer samples. The signal reconstruction techniques are computationally intensive and have sluggish performance, which makes them impractical for real-time processing applications. The paper presents novel architectures for the Orthogonal Matching Pursuit algorithm, one of the popular CS reconstruction algorithms. We show the implementation results of the proposed architectures on FPGA, ASIC and on a custom many-core platform. For the FPGA and ASIC implementations, a novel thresholding method is used to reduce the processing time for the optimization problem by at least 25%. For the custom many-core platform, efficient parallelization techniques are applied to reconstruct signals with varying signal lengths N and sparsity m. The algorithm is divided into three kernels. Each kernel is parallelized to reduce execution time, whereas efficient reuse of the matrix operators allows us to reduce area. Matrix operations are efficiently parallelized by taking advantage of blocked algorithms. For demonstration purposes, all architectures reconstruct a 256-length signal with maximum sparsity of 8 using 64 measurements. Implementation on a Xilinx Virtex-5 FPGA requires 27.14 μs to reconstruct the signal using basic OMP, whereas with the thresholding method it requires 18 μs. The ASIC implementation reconstructs the signal in 13 μs. However, our custom many-core, operating at 1.18 GHz, takes 18.28 μs to complete. Our results show that, compared to previously published work on the same algorithm and matrix size, the proposed FPGA and ASIC architectures perform 1.3x and 1.8x faster, respectively. Also, the proposed many-core implementation performs 3000x faster than the CPU and 2000x faster than the GPU.
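For reference, the OMP recovery loop that these hardware architectures accelerate can be written in a few lines of numpy. The sketch below mirrors the paper's demonstration setting (N = 256, sparsity 8, 64 measurements) with a random sensing matrix; it is a software illustration of the algorithm, not the FPGA/ASIC/many-core design.

```python
import numpy as np

def omp(Phi, y, sparsity):
    """Orthogonal Matching Pursuit: recover a sparse x from y = Phi x."""
    residual = y.copy()
    support = []
    x = np.zeros(Phi.shape[1])
    for _ in range(sparsity):
        # pick the column most correlated with the current residual
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares fit on the current support, then update the residual
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x[support] = coef
    return x

# toy usage: length-256 signal, sparsity 8, 64 measurements
rng = np.random.default_rng(5)
Phi = rng.standard_normal((64, 256)) / np.sqrt(64)
x_true = np.zeros(256)
x_true[rng.choice(256, 8, replace=False)] = rng.standard_normal(8)
y = Phi @ x_true
x_hat = omp(Phi, y, sparsity=8)
```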
Application of composite dictionary multi-atom matching in gear fault diagnosis.
Cui, Lingli; Kang, Chenhui; Wang, Huaqing; Chen, Peng
2011-01-01
The sparse decomposition based on matching pursuit is an adaptive sparse expression method for signals. This paper proposes a composite dictionary multi-atom matching decomposition and reconstruction algorithm, and introduces threshold de-noising into the reconstruction algorithm. Based on the structural characteristics of gear fault signals, a composite dictionary combining the impulse time-frequency dictionary and the Fourier dictionary was constituted, and a genetic algorithm was applied to search for the best matching atom. The analysis results of gear fault simulation signals indicated the effectiveness of the hard threshold, and the impulse or harmonic characteristic components could be separately extracted. Meanwhile, the robustness of the composite dictionary multi-atom matching algorithm at different noise levels was investigated. To address the effect of data length on the calculation efficiency of the algorithm, an improved segmented decomposition and reconstruction algorithm was proposed, and the calculation efficiency of the decomposition algorithm was significantly enhanced. In addition, it is shown that the multi-atom matching algorithm was superior to the single-atom matching algorithm in both calculation efficiency and algorithm robustness. Finally, the above algorithm was applied to gear fault engineering signals, and achieved good results.
An object-oriented simulator for 3D digital breast tomosynthesis imaging system.
Seyyedi, Saeed; Cengiz, Kubra; Kamasak, Mustafa; Yildirim, Isa
2013-01-01
Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast to detect breast cancer. Projections obtained with an X-ray source moving in a limited angle interval are used to reconstruct a 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back projection algorithm has traditionally been used to reconstruct images from projections. Iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were later developed. Recently, compressed sensing based methods have been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for the 3D digital breast tomosynthesis (DBT) imaging system using the C++ programming language. The simulator is capable of implementing different iterative and compressed sensing based reconstruction methods on 3D digital tomosynthesis data sets and phantom models. A user-friendly graphical user interface (GUI) helps users to select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods including the algebraic reconstruction technique (ART) and total variation regularized reconstruction techniques (ART+TV) are presented. Reconstruction results of the methods are compared both visually and quantitatively by evaluating the performance of the methods using mean structural similarity (MSSIM) values.
NASA Astrophysics Data System (ADS)
Chen, Jiaoxuan; Zhang, Maomao; Liu, Yinyan; Chen, Jiaoliao; Li, Yi
2017-03-01
Electrical capacitance tomography (ECT) is a promising technique applied in many fields. However, the solutions for ECT are not unique and are highly sensitive to measurement noise. To preserve the shape of the reconstructed object and tolerate noisy data, a Rudin-Osher-Fatemi (ROF) model with total variation regularization is applied to image reconstruction in ECT. Two numerical methods, the simplified augmented Lagrangian (SAL) and the accelerated alternating direction method of multipliers (AADMM), are introduced to try to solve the above-mentioned problems in ECT. The effects of the parameters, the number of iterations for the different algorithms, and the noise level in the capacitance data are discussed. Both simulation and experimental tests were carried out to validate the feasibility of the proposed algorithms, compared to the Landweber iteration (LI) algorithm. The results show that the SAL and AADMM algorithms can handle a high level of noise and that the AADMM algorithm outperforms the other algorithms in identifying the object from its background.
Exploring Normalization and Network Reconstruction Methods using In Silico and In Vivo Models
Abstract: Lessons learned from the recent DREAM competitions include: The search for the best network reconstruction method continues, and we need more complete datasets with ground truth from more complex organisms. It has become obvious that the network reconstruction methods t...
Deep learning with domain adaptation for accelerated projection-reconstruction MR.
Han, Yoseob; Yoo, Jaejun; Kim, Hak Hee; Shin, Hee Jung; Sung, Kyunghyun; Ye, Jong Chul
2018-09-01
The radial k-space trajectory is a well-established sampling trajectory used in conjunction with magnetic resonance imaging. However, the radial k-space trajectory requires a large number of radial lines for high-resolution reconstruction. Increasing the number of radial lines causes longer acquisition times, making it more difficult for routine clinical use. On the other hand, if we reduce the number of radial lines, streaking artifact patterns are unavoidable. To solve this problem, we propose a novel deep learning approach with domain adaptation to restore high-resolution MR images from under-sampled k-space data. The proposed deep network removes the streaking artifacts from the artifact-corrupted images. To address the situation given the limited available data, we propose a domain adaptation scheme that employs a pre-trained network using a large number of X-ray computed tomography (CT) or synthesized radial MR datasets, which is then fine-tuned with only a few radial MR datasets. The proposed method outperforms existing compressed sensing algorithms, such as the total variation and PR-FOCUSS methods. In addition, the calculation time is several orders of magnitude faster than the total variation and PR-FOCUSS methods. Moreover, we found that pre-training using CT or MR data from a similar organ is more important than pre-training using data from the same modality for a different organ. We demonstrate the possibility of domain adaptation when only a limited amount of MR data is available. The proposed method surpasses the existing compressed sensing algorithms in terms of image quality and computation time. © 2018 International Society for Magnetic Resonance in Medicine.
Gu, Xiangping; Zhou, Xiaofeng; Sun, Yanjing
2018-02-28
Compressive sensing (CS)-based data gathering is a promising method to reduce energy consumption in wireless sensor networks (WSNs). Traditional CS-based data-gathering approaches require a large number of sensor nodes to participate in each CS measurement task, resulting in high energy consumption, and do not guarantee load balance. In this paper, we propose a sparser analysis that depends on modified diffusion wavelets, which exploit sensor readings' spatial correlation in WSNs. In particular, a novel data-gathering scheme with joint routing and CS is presented. A modified ant colony algorithm is adopted, where next hop node selection takes a node's residual energy and path length into consideration simultaneously. Moreover, in order to speed up the coverage rate and avoid the local optimum of the algorithm, an improved pheromone impact factor is put forward. More importantly, theoretical proof is given that the equivalent sensing matrix generated can satisfy the restricted isometry property (RIP). The simulation results demonstrate that the modified diffusion wavelets' sparsity affects the sensor signal and has better reconstruction performance than DFT. Furthermore, our data gathering with joint routing and CS can dramatically reduce the energy consumption of WSNs, balance the load, and prolong the network lifetime in comparison to state-of-the-art CS-based methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, X.D.; Tsui, B.M.W.; Gregoriou, G.K.
The goal of the investigation was to study the effectiveness of the corrective reconstruction methods in cardiac SPECT using a realistic phantom and to qualitatively and quantitatively evaluate the reconstructed images using bull's-eye plots. A 3D mathematical phantom which realistically models the anatomical structures of the cardiac-torso region of patients was used. The phantom allows simulation of both the attenuation distribution and the uptake of radiopharmaceuticals in different organs. Also, the phantom can be easily modified to simulate different genders and variations in patient anatomy. Two-dimensional projection data were generated from the phantom and included the effects of attenuation and detector response blurring. The reconstruction methods used in the study included the conventional filtered backprojection (FBP) with no attenuation compensation, and the first-order Chang algorithm, an iterative filtered backprojection algorithm (IFBP), the weighted least square conjugate gradient algorithm and the ML-EM algorithm with non-uniform attenuation compensation. The transaxial reconstructed images were rearranged into short-axis slices from which bull's-eye plots of the count density distribution in the myocardium were generated.
Pan-sharpening via compressed superresolution reconstruction and multidictionary learning
NASA Astrophysics Data System (ADS)
Shi, Cheng; Liu, Fang; Li, Lingling; Jiao, Licheng; Hao, Hongxia; Shang, Ronghua; Li, Yangyang
2018-01-01
In recent compressed sensing (CS)-based pan-sharpening algorithms, pan-sharpening performance is affected by two key problems. One is that there are always errors between the high-resolution panchromatic (HRP) image and the linear weighted high-resolution multispectral (HRM) image, resulting in loss of spatial and spectral information. The other is that the dictionary construction process depends on training samples that are not ground truth. These problems have limited the application of CS-based pan-sharpening algorithms. To solve these two problems, we propose a pan-sharpening algorithm via compressed superresolution reconstruction and multidictionary learning. Through a two-stage implementation, the compressed superresolution reconstruction model effectively reduces the error between the HRP and the linear weighted HRM images. Meanwhile, the multidictionary with ridgelets and curvelets is learned for both stages of the superresolution reconstruction process. Since ridgelets and curvelets can better capture structural and directional characteristics, a better reconstruction result can be obtained. Experiments are done on QuickBird and IKONOS satellite images. The results indicate that the proposed algorithm is competitive compared with recent CS-based pan-sharpening methods and other well-known methods.
MR image reconstruction via guided filter.
Huang, Heyan; Yang, Hang; Wang, Kang
2018-04-01
Magnetic resonance imaging (MRI) reconstruction from the smallest possible set of Fourier samples has been a difficult problem in the medical imaging field. In our paper, we present a new approach based on a guided filter for an efficient MRI recovery algorithm. The guided filter is an edge-preserving smoothing operator and has better behavior near edges than the bilateral filter. Our reconstruction method consists of two steps. First, we propose two cost functions that can be computed efficiently and thus obtain two different images. Second, the guided filter is used with these two images for efficient edge-preserving filtering: one image is used as the guidance image, and the other is used as the image to be filtered in the guided filter. In our reconstruction algorithm, we can obtain more detail by introducing the guided filter. We compare our reconstruction algorithm with some competitive MRI reconstruction techniques in terms of PSNR and visual quality. Simulation results are given to show the performance of our new method.
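The guided filter referenced above has a standard closed form based on local means and variances of the guidance and input images. The single-channel numpy/scipy sketch below implements those standard equations with a box window; the window radius and regularization value are illustrative assumptions, and this is not necessarily the authors' exact implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, radius=4, eps=1e-3):
    """Edge-preserving smoothing of p guided by I (single-channel images)."""
    size = 2 * radius + 1
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    corr_Ip = uniform_filter(I * p, size)
    corr_II = uniform_filter(I * I, size)
    var_I = corr_II - mean_I * mean_I
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)           # local linear model q = a*I + b
    b = mean_p - a * mean_I
    mean_a = uniform_filter(a, size)
    mean_b = uniform_filter(b, size)
    return mean_a * I + mean_b

# toy usage: one image acts as guidance, the other as the image to be filtered
rng = np.random.default_rng(6)
guidance = rng.random((64, 64))
noisy = guidance + 0.1 * rng.standard_normal((64, 64))
q = guided_filter(guidance, noisy)
```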
Direct 4D reconstruction of parametric images incorporating anato-functional joint entropy.
Tang, Jing; Kuwabara, Hiroto; Wong, Dean F; Rahmim, Arman
2010-08-07
We developed an anatomy-guided 4D closed-form algorithm to directly reconstruct parametric images from projection data for (nearly) irreversible tracers. Conventional methods consist of individually reconstructing 2D/3D PET data, followed by graphical analysis on the sequence of reconstructed image frames. The proposed direct reconstruction approach maintains the simplicity and accuracy of the expectation-maximization (EM) algorithm by extending the system matrix to include the relation between the parametric images and the measured data. A closed-form solution was achieved using a different hidden complete-data formulation within the EM framework. Furthermore, the proposed method was extended to maximum a posteriori reconstruction via incorporation of MR image information, taking the joint entropy between MR and parametric PET features as the prior. Using realistic simulated noisy [(11)C]-naltrindole PET and MR brain images/data, the quantitative performance of the proposed methods was investigated. Significant improvements in terms of noise versus bias performance were demonstrated when performing direct parametric reconstruction, and additionally upon extending the algorithm to its Bayesian counterpart using the MR-PET joint entropy measure.
The role of advanced reconstruction algorithms in cardiac CT
Halliburton, Sandra S.; Tanabe, Yuki; Partovi, Sasan
2017-01-01
Non-linear iterative reconstruction (IR) algorithms have been increasingly incorporated into clinical cardiac CT protocols at institutions around the world. Multiple IR algorithms are available commercially from various vendors. IR algorithms decrease image noise and are primarily used to enable lower radiation dose protocols. IR can also be used to improve image quality for imaging of obese patients, coronary atherosclerotic plaques, coronary stents, and myocardial perfusion. In this article, we will review the various applications of IR algorithms in cardiac imaging and evaluate how they have changed practice. PMID:29255694
Efficient iterative image reconstruction algorithm for dedicated breast CT
NASA Astrophysics Data System (ADS)
Antropova, Natalia; Sanchez, Adrian; Reiser, Ingrid S.; Sidky, Emil Y.; Boone, John; Pan, Xiaochuan
2016-03-01
Dedicated breast computed tomography (bCT) is currently being studied as a potential screening method for breast cancer. The X-ray exposure is set low to achieve an average glandular dose comparable to that of mammography, yielding projection data that contains high levels of noise. Iterative image reconstruction (IIR) algorithms may be well-suited for the system since they potentially reduce the effects of noise in the reconstructed images. However, IIR outcomes can be difficult to control since the algorithm parameters do not directly correspond to the image properties. Also, IIR algorithms are computationally demanding and have optimal parameter settings that depend on the size and shape of the breast and positioning of the patient. In this work, we design an efficient IIR algorithm with meaningful parameter specifications that can be used on a large, diverse sample of bCT cases. The flexibility and efficiency of this method come from having the final image produced by a linear combination of two separately reconstructed images - one containing gray level information and the other with enhanced high frequency components. Both of the images result from a few iterations of separate IIR algorithms. The proposed algorithm depends on two parameters, both of which have a well-defined impact on image quality. The algorithm is applied to numerous bCT cases from a dedicated bCT prototype system developed at University of California, Davis.
Computational optical tomography using 3-D deep convolutional neural networks
NASA Astrophysics Data System (ADS)
Nguyen, Thanh; Bui, Vy; Nehmetallah, George
2018-04-01
Deep convolutional neural networks (DCNNs) deliver outstanding results in many image processing areas, such as super-resolution, deconvolution, image classification, denoising, and segmentation. Here, we develop for the first time, to our knowledge, a method to perform 3-D computational optical tomography using a 3-D DCNN. A simulated 3-D phantom dataset was first constructed and converted to a dataset of phase objects imaged on a spatial light modulator. For each phase image in the dataset, the corresponding diffracted intensity image was experimentally recorded on a CCD. We then experimentally demonstrate the ability of the developed 3-D DCNN algorithm to solve the inverse problem by reconstructing the 3-D index of refraction distributions of test phantoms in the dataset from their corresponding diffraction patterns.
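A toy 3-D convolutional network along the lines sketched above might look as follows (PyTorch); the layer count, channel widths and volume size are illustrative and are not the architecture used in the paper.

```python
import torch
import torch.nn as nn

class Tiny3DCNN(nn.Module):
    """Toy 3-D convolutional regressor; structure is illustrative only."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):             # x: (batch, 1, D, H, W) intensity volume
        return self.net(x)            # predicted refractive-index volume

model = Tiny3DCNN()
dummy = torch.rand(2, 1, 32, 32, 32)
print(model(dummy).shape)             # torch.Size([2, 1, 32, 32, 32])
```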
Peng, Kuan; He, Ling; Zhu, Ziqiang; Tang, Jingtian; Xiao, Jiaying
2013-12-01
Compared with commonly used analytical reconstruction methods, the frequency-domain finite element method (FEM) based approach has proven to be an accurate and flexible algorithm for photoacoustic tomography. However, the FEM-based algorithm is computationally demanding, especially for three-dimensional cases. To enhance the algorithm's efficiency, in this work a parallel computational strategy is implemented in the framework of the FEM-based reconstruction algorithm using a graphics-processing-unit (GPU) parallel framework named the compute unified device architecture (CUDA). A series of simulation experiments is carried out to test the accuracy and the accelerating effect of the improved method. The results obtained indicate that the parallel calculation does not change the accuracy of the reconstruction algorithm, while its computational cost is significantly reduced, by a factor of 38.9 with a GTX 580 graphics card, using the improved method.
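The GPU-offloading pattern can be sketched with CuPy: assemble a (hypothetical) FEM system on the CPU, solve it on the GPU, and copy the result back. This is only a stand-in for the paper's CUDA implementation of the FEM-based reconstruction, and it requires a CUDA-capable GPU to run.

```python
import numpy as np
import cupy as cp   # requires a CUDA-capable GPU

def solve_fem_system_gpu(K, b):
    """Offload a dense linear solve to the GPU; K and b are hypothetical assembled FEM terms."""
    K_gpu = cp.asarray(K)
    b_gpu = cp.asarray(b)
    x_gpu = cp.linalg.solve(K_gpu, b_gpu)      # same call pattern as numpy.linalg.solve
    return cp.asnumpy(x_gpu)

# CPU reference for comparison (identical result up to floating-point differences):
K = np.random.rand(2000, 2000) + 2000 * np.eye(2000)   # well-conditioned toy system
b = np.random.rand(2000)
x_cpu = np.linalg.solve(K, b)
x_gpu = solve_fem_system_gpu(K, b)
print(np.max(np.abs(x_cpu - x_gpu)))
```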
Ogawa, Takahiro; Haseyama, Miki
2013-03-01
A missing texture reconstruction method based on an error reduction (ER) algorithm, including a novel estimation scheme for Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring errors converged in the ER algorithm, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. Then, the Fourier transform magnitude of the target patch is estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitude and phase to reconstruct the missing areas.
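A minimal error-reduction loop of the kind described above might be sketched as follows. The target Fourier magnitude is passed in as a given, standing in for the magnitude estimated from similar known patches; the patch size and iteration count are arbitrary.

```python
import numpy as np

def error_reduction(patch, known_mask, target_magnitude, n_iter=200):
    """Fill missing pixels by alternating magnitude and spatial constraints (ER sketch).
    `target_magnitude` stands in for the magnitude estimated from similar known patches."""
    x = patch.copy()
    x[~known_mask] = patch[known_mask].mean()             # crude initialization of missing areas
    for _ in range(n_iter):
        F = np.fft.fft2(x)
        F = target_magnitude * np.exp(1j * np.angle(F))   # enforce the Fourier magnitude
        x = np.real(np.fft.ifft2(F))
        x[known_mask] = patch[known_mask]                 # keep the known intensities
    return x

# Toy usage: pretend the true magnitude is known (it would come from similar patches).
true = np.random.rand(16, 16)
mask = np.ones((16, 16), dtype=bool); mask[6:10, 6:10] = False
filled = error_reduction(np.where(mask, true, 0.0), mask, np.abs(np.fft.fft2(true)))
```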
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsui, B.M.W.; Frey, E.C.; Lalush, D.S.
1996-12-31
We investigated methods to accurately reconstruct 180° truncated TCT and SPECT projection data obtained from a right-angle dual-camera SPECT system for myocardial SPECT with attenuation compensation. The 180° data reconstruction methods would permit substantial savings in transmission data acquisition time. Simulation data from the 3D MCAT phantom and clinical data from large patients were used in the evaluation study. Different transmission reconstruction methods, including the FBP, transmission ML-EM, transmission ML-SA, and BIT algorithms with and without using the body contour as support, were used in the TCT image reconstructions. The accuracy of both the TCT and attenuation compensated SPECT images was evaluated for different degrees of truncation and noise levels. We found that using the FBP reconstructed TCT images resulted in higher count density in the left ventricular (LV) wall of the attenuation compensated SPECT images. The LV wall count densities obtained using the iteratively reconstructed TCT images with and without support were similar to each other and were more accurate than those obtained using the FBP. However, the TCT images obtained with support show fewer image artifacts than those without support. Among the iterative reconstruction algorithms, the ML-SA algorithm provides the most accurate reconstruction but is the slowest. The BIT algorithm is the fastest but shows the most image artifacts. We conclude that accurate attenuation compensated images can be obtained with truncated 180° data from large patients using a right-angle dual-camera SPECT system.
Using hybridization networks to retrace the evolution of Indo-European languages.
Willems, Matthieu; Lord, Etienne; Laforest, Louise; Labelle, Gilbert; Lapointe, François-Joseph; Di Sciullo, Anna Maria; Makarenkov, Vladimir
2016-09-06
Curious parallels between the processes of species and language evolution have been observed by many researchers. Retracing the evolution of Indo-European (IE) languages remains one of the most intriguing intellectual challenges in historical linguistics. Most IE language studies use the traditional phylogenetic tree model to represent the evolution of natural languages, thus not taking into account reticulate evolutionary events, such as language hybridization and word borrowing, which can be associated with species hybridization and horizontal gene transfer, respectively. More recently, implicit evolutionary networks, such as split graphs and minimal lateral networks, have been used to account for reticulate evolution in linguistics. The striking parallels between the evolution of species and that of natural languages allowed us to apply three computational biology methods for the reconstruction of phylogenetic networks to model the evolution of IE languages. We show how the transfer of methods between the two disciplines can be achieved, making the necessary methodological adaptations. Considering basic vocabulary data from Dyen's well-known lexical database, which contains word forms in 84 IE languages for the meanings of a 200-meaning Swadesh list, we adapt a recently developed computational biology algorithm for building explicit hybridization networks to study the evolution of IE languages and compare our findings to the results provided by the split graph and galled network methods. We conclude that explicit phylogenetic networks can be successfully used to identify donors and recipients of lexical material as well as the degree of influence of each donor language on the corresponding recipient languages. We show that our algorithm is well suited to detect reticulate relationships among languages, and present some historical and linguistic justification for the results obtained. Our findings could be further refined if relevant syntactic, phonological and morphological data could be analyzed along with the available lexical data.
Alam, Daniel; Ali, Yaseen; Klem, Christopher; Coventry, Daniel
2016-11-01
Orbito-malar reconstruction after oncological resection represents one of the most challenging facial reconstructive procedures. Until the last few decades, rehabilitation was typically prosthesis based with a limited role for surgery. The advent of microsurgical techniques allowed large-volume tissue reconstitution from a distant donor site, revolutionizing the potential approaches to these defects. The authors report a novel surgery-based algorithm and a classification scheme for complete midface reconstruction with a foundation in the Gillies principles of like-to-like reconstruction and with a significant role of computer-aided virtual planning. With this approach, the authors have been able to achieve significantly better patient outcomes. Copyright © 2016 Elsevier Inc. All rights reserved.
Two-dimensional wavefront reconstruction based on double-shearing and least squares fitting
NASA Astrophysics Data System (ADS)
Liang, Peiying; Ding, Jianping; Zhu, Yangqing; Dong, Qian; Huang, Yuhua; Zhu, Zhen
2017-06-01
A two-dimensional wavefront reconstruction method based on double-shearing and least squares fitting is proposed in this paper. Four one-dimensional phase estimates of the measured wavefront, corresponding to the two shears and the two orthogonal directions, can be calculated from the differential phase, which solves the problem of the missing spectrum; the two-dimensional wavefront is then reconstructed by least squares fitting. Numerical simulations of the proposed algorithm are carried out to verify the feasibility of the method. The influence of noise at different shear amounts and intensities on the accuracy of the reconstruction is studied and compared with results from the algorithm based on single-shearing and least squares fitting. Finally, a two-grating lateral shearing interference experiment is carried out to verify the wavefront reconstruction algorithm based on double-shearing and least squares fitting.
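The least-squares step can be illustrated with a one-dimensional sketch: stack the difference operators for two shears and solve for the phase with the piston term pinned to zero. The shear amounts and the test profile below are illustrative; the full method works on two-dimensional wavefronts along both orthogonal directions.

```python
import numpy as np

def shear_matrix(n, s):
    """Difference operator for shear s: (n - s) x n, rows give phi[i+s] - phi[i]."""
    A = np.zeros((n - s, n))
    A[np.arange(n - s), np.arange(n - s)] = -1.0
    A[np.arange(n - s), np.arange(n - s) + s] = 1.0
    return A

def reconstruct_from_two_shears(d1, d2, s1, s2):
    """Least-squares 1-D phase from two shear-difference measurements (piston fixed to 0)."""
    n = len(d1) + s1
    A = np.vstack([shear_matrix(n, s1), shear_matrix(n, s2)])
    b = np.concatenate([d1, d2])
    # Remove the unobservable piston term by dropping the first column (phi[0] := 0).
    phi_rest, *_ = np.linalg.lstsq(A[:, 1:], b, rcond=None)
    return np.concatenate([[0.0], phi_rest])

# Quick self-check with a synthetic phase profile (phi[0] happens to be 0 here):
phi = np.sin(np.linspace(0, 2 * np.pi, 64))
d1 = shear_matrix(64, 2) @ phi
d2 = shear_matrix(64, 3) @ phi
phi_hat = reconstruct_from_two_shears(d1, d2, 2, 3)
print(np.max(np.abs(phi_hat - phi)))   # ~0 up to numerical precision
```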
Zhang, Shang; Dong, Yuhan; Fu, Hongyan; Huang, Shao-Lun; Zhang, Lin
2018-02-22
The miniaturization of spectrometers can broaden the application areas of spectrometry, which has huge academic and industrial value. Among various miniaturization approaches, filter-based miniaturization is a promising implementation that utilizes broadband filters with distinct transmission functions. Mathematically, filter-based spectral reconstruction can be modeled as solving a system of linear equations. In this paper, we propose an algorithm for spectral reconstruction based on sparse optimization and dictionary learning. To verify the feasibility of the reconstruction algorithm, we design and implement a simple prototype of a filter-based miniature spectrometer. The experimental results demonstrate that sparse optimization is well applicable to spectral reconstruction whether the spectra are directly sparse or not. As for non-directly sparse spectra, their sparsity can be enhanced by dictionary learning. In conclusion, the proposed approach has bright application prospects for fabricating a practical miniature spectrometer. PMID:29470406
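The linear-system view of filter-based spectral reconstruction can be sketched with an L1-regularized solver; the filter matrix, regularization weight and test spectrum below are invented, and the dictionary-learning step used for non-sparse spectra is omitted.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_filters, n_wavelengths = 40, 200                  # illustrative sizes
A = rng.random((n_filters, n_wavelengths))          # hypothetical filter transmission matrix

x_true = np.zeros(n_wavelengths)                    # sparse test spectrum (a few lines)
x_true[[30, 80, 150]] = [1.0, 0.6, 0.8]
y = A @ x_true + 0.01 * rng.standard_normal(n_filters)   # simulated filter readings

# Sparse recovery of the spectrum from far fewer measurements than unknowns.
lasso = Lasso(alpha=1e-3, positive=True, max_iter=50_000)
lasso.fit(A, y)
x_hat = lasso.coef_
print("support recovered near indices:", np.nonzero(x_hat > 0.1)[0])
```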
Analog "neuronal" networks in early vision.
Koch, C; Marroquin, J; Yuille, A
1986-01-01
Many problems in early vision can be formulated in terms of minimizing a cost function. Examples are shape from shading, edge detection, motion analysis, structure from motion, and surface interpolation. As shown by Poggio and Koch [Poggio, T. & Koch, C. (1985) Proc. R. Soc. London, Ser. B 226, 303-323], quadratic variational problems, an important subset of early vision tasks, can be "solved" by linear, analog electrical, or chemical networks. However, in the presence of discontinuities, the cost function is nonquadratic, raising the question of designing efficient algorithms for computing the optimal solution. Recently, Hopfield and Tank [Hopfield, J. J. & Tank, D. W. (1985) Biol. Cybern. 52, 141-152] have shown that networks of nonlinear analog "neurons" can be effective in computing the solution of optimization problems. We show how these networks can be generalized to solve the nonconvex energy functionals of early vision. We illustrate this approach by implementing a specific analog network, solving the problem of reconstructing a smooth surface from sparse data while preserving its discontinuities. These results suggest a novel computational strategy for solving early vision problems in both biological and real-time artificial vision systems. PMID:3459172
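The quadratic special case of such an energy can be minimized with a simple iterative sketch like the one below, which interpolates a surface from sparse samples under a membrane smoothness term; the discontinuity-preserving line process discussed in the paper is omitted, and the smoothness weight is arbitrary.

```python
import numpy as np

def membrane_interpolate(data, mask, lam=10.0, n_iter=500):
    """Minimize sum_known (u - d)^2 + lam * sum |grad u|^2 by Jacobi-style sweeps.
    This is the quadratic (no-discontinuity) case; boundaries wrap for simplicity."""
    u = np.where(mask, data, data[mask].mean())
    for _ in range(n_iter):
        nb = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
              np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
        u = np.where(mask, (data + 4 * lam * nb) / (1 + 4 * lam), nb)
    return u

# Toy usage with 10% randomly known samples of a smooth surface:
xx, yy = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
surface = np.sin(3 * xx) * np.cos(2 * yy)
known = np.random.rand(64, 64) < 0.1
recon = membrane_interpolate(np.where(known, surface, 0.0), known)
```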
NASA Astrophysics Data System (ADS)
Li, Min; Yuan, Yunbin; Wang, Ningbo; Li, Zishen; Liu, Xifeng; Zhang, Xiao
2018-07-01
This paper presents a quantitative comparison of several widely used interpolation algorithms, i.e., Ordinary Kriging (OrK), Universal Kriging (UnK), planar fit and Inverse Distance Weighting (IDW), based on a grid-based single-shell ionosphere model over China. The experimental data were collected from the Crustal Movement Observation Network of China (CMONOC) and the International GNSS Service (IGS), covering the days of year 60-90 in 2015. The quality of these interpolation algorithms was assessed by cross-validation in terms of both the ionospheric correction performance and Single-Frequency (SF) Precise Point Positioning (PPP) accuracy on an epoch-by-epoch basis. The results indicate that the interpolation models perform better at mid-latitudes than low latitudes. For the China region, the performance of OrK and UnK is relatively better than the planar fit and IDW model for estimating ionospheric delay and positioning. In addition, the computational efficiencies of the IDW and planar fit models are better than those of OrK and UnK.
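Of the four interpolators compared above, IDW is the simplest to sketch; the station coordinates, VTEC values and the distance power below are purely illustrative.

```python
import numpy as np

def idw(obs_xy, obs_val, query_xy, power=2.0):
    """Inverse Distance Weighting: weight = 1 / distance**power (power is a tunable choice)."""
    d = np.linalg.norm(obs_xy[None, :, :] - query_xy[:, None, :], axis=2)
    d = np.maximum(d, 1e-12)                         # avoid division by zero at exact hits
    w = 1.0 / d ** power
    return (w * obs_val[None, :]).sum(axis=1) / w.sum(axis=1)

# Hypothetical VTEC values (TECU) at station locations (lon, lat):
stations = np.array([[100.0, 30.0], [105.0, 32.0], [110.0, 35.0], [102.0, 28.0]])
vtec = np.array([12.3, 14.1, 16.7, 11.8])
grid = np.array([[104.0, 31.0], [108.0, 33.0]])
print(idw(stations, vtec, grid))
```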
Superiorization-based multi-energy CT image reconstruction
Yang, Q; Cong, W; Wang, G
2017-01-01
The recently-developed superiorization approach is efficient and robust for solving various constrained optimization problems. This methodology can be applied to multi-energy CT image reconstruction with the regularization in terms of the prior rank, intensity and sparsity model (PRISM). In this paper, we propose a superiorized version of the simultaneous algebraic reconstruction technique (SART) based on the PRISM model. Then, we compare the proposed superiorized algorithm with the Split-Bregman algorithm in numerical experiments. The results show that both the Superiorized-SART and the Split-Bregman algorithms generate good results with weak noise and reduced artefacts. PMID:28983142
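A generic superiorized-SART loop (not the PRISM-regularized multi-energy version of the paper) alternates a feasibility-seeking SART sweep with a small perturbation toward lower total variation; the step-size schedule and toy geometry below are assumptions.

```python
import numpy as np

def sart_step(x, A, y, relax=1.0):
    """One SART sweep (all rays at once); row/column sums normalize the update."""
    row_sums = np.maximum(A.sum(axis=1), 1e-12)
    col_sums = np.maximum(A.sum(axis=0), 1e-12)
    return x + relax * (A.T @ ((y - A @ x) / row_sums)) / col_sums

def tv_descent_direction(img, eps=1e-6):
    """Normalized negative gradient of a smoothed total-variation term."""
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    mag = np.sqrt(gx**2 + gy**2 + eps)
    # Negative TV gradient equals the divergence of the normalized gradient field.
    d = (gx / mag - np.roll(gx / mag, 1, axis=1)) + (gy / mag - np.roll(gy / mag, 1, axis=0))
    n = np.linalg.norm(d)
    return d / n if n > 0 else d

def superiorized_sart(A, y, shape, n_iter=30, beta0=0.5, gamma=0.9):
    x = np.zeros(A.shape[1])
    for k in range(n_iter):
        img = x.reshape(shape)
        x = (img + beta0 * gamma**k * tv_descent_direction(img)).ravel()  # perturb toward lower TV
        x = np.clip(sart_step(x, A, y), 0, None)                          # feasibility-seeking step
    return x.reshape(shape)

# Toy usage: 32x32 image, random matrix standing in for a real CT system matrix.
rng = np.random.default_rng(0)
A = rng.random((400, 32 * 32))
truth = np.zeros((32, 32)); truth[8:24, 8:24] = 1.0
recon = superiorized_sart(A, A @ truth.ravel(), (32, 32))
```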
MR fingerprinting reconstruction with Kalman filter.
Zhang, Xiaodi; Zhou, Zechen; Chen, Shiyang; Chen, Shuo; Li, Rui; Hu, Xiaoping
2017-09-01
Magnetic resonance fingerprinting (MR fingerprinting or MRF) is a newly introduced quantitative magnetic resonance imaging technique, which enables simultaneous multi-parameter mapping in a single acquisition with improved time efficiency. The current MRF reconstruction method is based on dictionary matching, which may be limited by the discrete and finite nature of the dictionary and the computational cost associated with dictionary construction, storage and matching. In this paper, we describe a reconstruction method based on the Kalman filter for MRF, which avoids the use of a dictionary to obtain continuous MR parameter measurements. Within this Kalman filter framework, the Bloch equation of the inversion-recovery balanced steady-state free-precession (IR-bSSFP) MRF sequence was derived to predict signal evolution, and the acquired signal was entered to update the prediction. The algorithm can gradually estimate the accurate MR parameters during the recursive calculation. Single-pixel and numerical brain phantom simulations were implemented with the Kalman filter, and the results were compared with those from the dictionary matching reconstruction algorithm to demonstrate the feasibility and assess the performance of the Kalman filter algorithm. The results demonstrated that the Kalman filter algorithm is applicable for MRF reconstruction, eliminating the need for a pre-defined dictionary and obtaining continuous MR parameters, in contrast to the dictionary matching algorithm. Copyright © 2017 Elsevier Inc. All rights reserved.
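A generic linear Kalman predict/update cycle is sketched below; in the MRF setting the transition and measurement models would come from a linearized IR-bSSFP Bloch-equation signal model rather than the placeholder matrices used here.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.
    F: state transition, H: measurement model, Q/R: process/measurement noise covariances."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy usage: random-walk state observed directly with noise.
x, P = np.zeros(1), np.eye(1)
for z in [1.2, 0.9, 1.1]:
    x, P = kalman_step(x, P, np.array([z]), np.eye(1), np.eye(1),
                       0.01 * np.eye(1), 0.1 * np.eye(1))
```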
LOR-interleaving image reconstruction for PET imaging with fractional-crystal collimation
NASA Astrophysics Data System (ADS)
Li, Yusheng; Matej, Samuel; Karp, Joel S.; Metzler, Scott D.
2015-01-01
Positron emission tomography (PET) has become an important modality in medical and molecular imaging. However, in most PET applications, the resolution is still mainly limited by the physical crystal sizes or the detector’s intrinsic spatial resolution. To achieve images with better spatial resolution in a central region of interest (ROI), we have previously proposed using collimation in PET scanners. The collimator is designed to partially mask detector crystals to detect lines of response (LORs) within fractional crystals. A sequence of collimator-encoded LORs is measured with different collimation configurations. This novel collimated scanner geometry makes the reconstruction problem challenging, as both detector and collimator effects need to be modeled to reconstruct high-resolution images from collimated LORs. In this paper, we present a LOR-interleaving (LORI) algorithm, which incorporates these effects and has the advantage of reusing existing reconstruction software, to reconstruct high-resolution images for PET with fractional-crystal collimation. We also develop a 3D ray-tracing model incorporating both the collimator and crystal penetration for simulations and reconstructions of the collimated PET. By registering the collimator-encoded LORs with the collimator configurations, high-resolution LORs are restored based on the modeled transfer matrices using the non-negative least-squares method and EM algorithm. The resolution-enhanced images are then reconstructed from the high-resolution LORs using the MLEM or OSEM algorithm. For validation, we applied the LORI method to a small-animal PET scanner, A-PET, with a specially designed collimator. We demonstrate through simulated reconstructions with a hot-rod phantom and MOBY phantom that the LORI reconstructions can substantially improve spatial resolution and quantification compared to the uncollimated reconstructions. The LORI algorithm is crucial to improve overall image quality of collimated PET, which can have significant implications in preclinical and clinical ROI imaging applications.
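The LOR-restoration step can be pictured as a non-negative least-squares solve against a collimator transfer matrix, as in the sketch below; the matrix, the number of collimation configurations and the LOR counts are invented for illustration.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_fine = 60                                    # high-resolution (sub-crystal) LORs
n_meas = 4 * 20                                # 4 collimation configurations x 20 coarse LORs
T = rng.random((n_meas, n_fine))               # hypothetical collimator/crystal transfer matrix
lor_fine_true = rng.random(n_fine)             # underlying high-resolution LOR intensities
measured = T @ lor_fine_true                   # collimator-encoded measurements (stacked configs)

lor_fine_hat, _ = nnls(T, measured)            # restore high-resolution LORs (non-negative)
print("relative error:",
      np.linalg.norm(lor_fine_hat - lor_fine_true) / np.linalg.norm(lor_fine_true))
```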
On the assessment of spatial resolution of PET systems with iterative image reconstruction
NASA Astrophysics Data System (ADS)
Gong, Kuang; Cherry, Simon R.; Qi, Jinyi
2016-03-01
Spatial resolution is an important metric for performance characterization in PET systems. Measuring spatial resolution is straightforward with a linear reconstruction algorithm, such as filtered backprojection, and can be performed by reconstructing a point source scan and calculating the full-width-at-half-maximum (FWHM) along the principal directions. With the widespread adoption of iterative reconstruction methods, it is desirable to quantify the spatial resolution using an iterative reconstruction algorithm. However, the task can be difficult because the reconstruction algorithms are nonlinear and the non-negativity constraint can artificially enhance the apparent spatial resolution if a point source image is reconstructed without any background. Thus, it was recommended that a background should be added to the point source data before reconstruction for resolution measurement. However, there has been no detailed study on the effect of the point source contrast on the measured spatial resolution. Here we use point source scans from a preclinical PET scanner to investigate the relationship between measured spatial resolution and the point source contrast. We also evaluate whether the reconstruction of an isolated point source is predictive of the ability of the system to resolve two adjacent point sources. Our results indicate that when the point source contrast is below a certain threshold, the measured FWHM remains stable. Once the contrast is above the threshold, the measured FWHM monotonically decreases with increasing point source contrast. In addition, the measured FWHM also monotonically decreases with iteration number for maximum likelihood estimate. Therefore, when measuring system resolution with an iterative reconstruction algorithm, we recommend using a low-contrast point source and a fixed number of iterations.
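The FWHM measurement itself is straightforward to sketch: subtract the background, find the half-maximum level, and interpolate the crossing positions. The profile below is a synthetic Gaussian on a uniform background, not data from the scanner in the study.

```python
import numpy as np

def fwhm(profile, pixel_size=1.0):
    """FWHM of a 1-D profile via linear interpolation of the half-maximum crossings."""
    p = profile - profile.min()              # subtract the (uniform) background level
    half = p.max() / 2.0
    above = np.where(p >= half)[0]
    left, right = above[0], above[-1]
    # Linear interpolation on each side of the peak for sub-pixel crossing positions.
    xl = left - (p[left] - half) / (p[left] - p[left - 1]) if left > 0 else left
    xr = right + (p[right] - half) / (p[right] - p[right + 1]) if right < len(p) - 1 else right
    return (xr - xl) * pixel_size

profile = np.exp(-0.5 * ((np.arange(64) - 32.0) / 2.5) ** 2) + 0.1   # point source on background
print(fwhm(profile, pixel_size=0.5))     # ~ 2.355 * 2.5 * 0.5 = 2.94
```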