Science.gov

Sample records for gene-network reconstruction method

  1. Supervised classification for gene network reconstruction.

    PubMed

    Soinov, L A

    2003-12-01

One of the central problems of functional genomics is revealing gene expression networks - the relationships between genes that reflect observations of how the expression level of each gene affects those of others. Microarray data are currently a major source of information about the interplay of biochemical network participants in living cells. Various mathematical techniques, such as differential equations, Bayesian and Boolean models and several statistical methods, have been applied to expression data in attempts to extract the underlying knowledge. Unsupervised clustering methods are often considered the necessary first step in visualization and analysis of the expression data. As for supervised classification, the problem mainly addressed so far has been how to find discriminative genes separating various samples or experimental conditions. Numerous methods have been applied to identify genes that help to predict treatment outcome or to confirm a diagnosis, as well as to identify primary elements of gene regulatory circuits. However, less attention has been devoted to using supervised learning to uncover relationships between genes and/or their products. To start filling this gap, a machine-learning approach for gene network reconstruction is described here. This approach is based on building classifiers: functions that determine the state of a gene's transcription machinery from the expression levels of other genes. The method can be applied to various cases where relationships between gene expression levels could be expected. PMID:14641098
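The classifier-based idea can be sketched in a few lines: discretize a target gene's expression into up/down states and score how well each other gene, used as a one-gene threshold classifier, predicts that state. This is a toy numpy illustration of the general approach, not the paper's implementation; the data and the decision-stump classifier are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy expression matrix: rows = samples, columns = genes.
# Gene 2 is (hypothetically) driven by gene 0, so their levels correlate.
n = 200
g0 = rng.normal(size=n)
g1 = rng.normal(size=n)                 # unrelated gene
g2 = g0 + 0.3 * rng.normal(size=n)      # regulated gene
X = np.column_stack([g0, g1, g2])

target = 2
y = (X[:, target] > np.median(X[:, target])).astype(int)  # target state: up/down

# For each candidate regulator, fit a one-gene threshold classifier (a stump
# at the median) and score its accuracy in predicting the target's state.
scores = {}
for g in range(X.shape[1]):
    if g == target:
        continue
    pred = (X[:, g] > np.median(X[:, g])).astype(int)
    # max(...) allows an inverted relationship (a repressor-like link)
    acc = max(np.mean(pred == y), np.mean(pred != y))
    scores[g] = acc

best = max(scores, key=scores.get)  # the most predictive candidate regulator
```

High accuracy of a classifier built from gene g suggests a putative edge g → target; repeating this over all targets yields a candidate network.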

  2. Reconstruct modular phenotype-specific gene networks by knowledge-driven matrix factorization

    PubMed Central

    Yang, Xuerui; Zhou, Yang; Jin, Rong; Chan, Christina

    2009-01-01

Motivation: Reconstructing gene networks from microarray data has provided mechanistic information on cellular processes. A popular structure learning method, Bayesian network inference, has been used to determine network topology despite its shortcomings, namely its high computational cost when analyzing a large number of genes and its inefficiency in exploiting prior knowledge, such as co-regulation information about the genes. To address these limitations, we introduce an alternative method, the knowledge-driven matrix factorization (KMF) framework, to reconstruct phenotype-specific modular gene networks. Results: Treating gene network reconstruction as a matrix factorization problem, we first use the gene expression data to estimate a correlation matrix, and then factorize the correlation matrix to recover the gene modules and the interactions between them. Prior knowledge from Gene Ontology is integrated into the matrix factorization. We applied this KMF algorithm to hepatocellular carcinoma (HepG2) cells treated with free fatty acids (FFAs). By comparing the module networks for the different conditions, we identified the specific modules that are involved in conferring the cytotoxic phenotype induced by palmitate. Further analysis of the gene modules of the different conditions suggested individual genes that play important roles in palmitate-induced cytotoxicity. In summary, KMF can efficiently integrate gene expression data with prior knowledge, thereby providing a powerful method of reconstructing phenotype-specific gene networks and valuable insights into the mechanisms that govern the phenotype. Contact: krischan@msu.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:19542155
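The factorization step can be illustrated with a plain symmetric NMF of a toy correlation matrix; the paper's KMF additionally integrates Gene Ontology priors, which are omitted here, and the damped multiplicative update below is a standard choice rather than necessarily the authors' exact rule.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy gene-gene correlation matrix with two clear modules (block structure).
C = np.array([[1.0, 0.9, 0.1, 0.1],
              [0.9, 1.0, 0.1, 0.1],
              [0.1, 0.1, 1.0, 0.8],
              [0.1, 0.1, 0.8, 1.0]])

k = 2                                # number of modules to recover
H = rng.random((4, k)) + 0.1         # nonnegative module-membership matrix

# Damped multiplicative updates for symmetric NMF: minimize ||C - H H^T||_F^2.
for _ in range(1000):
    H *= 0.5 + 0.5 * (C @ H) / (H @ (H.T @ H) + 1e-12)

modules = H.argmax(axis=1)           # hard module assignment per gene
```

In KMF the prior knowledge enters as an extra penalty steering H toward known co-functional gene groups; interactions between modules can then be read off the residual block structure.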

  3. Hub-Centered Gene Network Reconstruction Using Automatic Relevance Determination

    PubMed Central

    Böck, Matthias; Ogishima, Soichi; Tanaka, Hiroshi; Kramer, Stefan; Kaderali, Lars

    2012-01-01

Network inference deals with the reconstruction of biological networks from experimental data. A variety of different reverse engineering techniques are available; they differ in the underlying assumptions and mathematical models used. One common problem for all approaches stems from the complexity of the task, due to the combinatorial explosion of different network topologies for increasing network size. To handle this problem, constraints are frequently used, for example on the node degree, number of edges, or constraints on regulation functions between network components. We propose to exploit topological considerations in the inference of gene regulatory networks. Such systems are often controlled by a small number of hub genes, while most other genes have only limited influence on the network's dynamics. We model gene regulation using a Bayesian network with discrete, Boolean nodes. A hierarchical prior is employed to identify hub genes. The first layer of the prior is used to regularize weights on edges emanating from one specific node. A second prior on hyperparameters controls the magnitude of the former regularization for different nodes. The net effect is that central nodes tend to form in reconstructed networks. Network reconstruction is then performed by maximization of or sampling from the posterior distribution. We evaluate our approach on simulated and real experimental data, indicating that we can reconstruct main regulatory interactions from the data. We furthermore compare our approach to other state-of-the-art methods, showing superior performance in identifying hubs. Using a large publicly available dataset of over 800 cell cycle regulated genes, we are able to identify several main hub genes. Our method may thus provide a valuable tool to identify interesting candidate genes for further study. Furthermore, the approach presented may stimulate further developments in regularization methods for network reconstruction from data. PMID:22570688
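A drastically simplified stand-in for the hub-scoring intuition: regress each gene on all others and rank genes by their total outgoing edge weight, so that a simulated hub stands out. The actual paper uses a Boolean Bayesian network with a hierarchical (ARD-style) prior; this ridge-based sketch, with invented data, only illustrates why per-node regularization lets central nodes emerge.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated data: gene 0 is a hub driving genes 2..5; gene 1 is isolated noise.
n, p = 300, 6
X = rng.normal(size=(n, p))
for j in range(2, p):
    X[:, j] = 0.8 * X[:, 0] + 0.6 * rng.normal(size=n)

# Ridge-regress each gene on all others to get candidate edge weights.
lam = 1.0
W = np.zeros((p, p))   # W[i, j]: weight of regulator i in the model of target j
for j in range(p):
    others = [i for i in range(p) if i != j]
    A = X[:, others]
    w = np.linalg.solve(A.T @ A + lam * np.eye(len(others)), A.T @ X[:, j])
    W[others, j] = w

hub_score = np.abs(W).sum(axis=1)   # total outgoing influence per gene
hub = int(hub_score.argmax())
```

In the paper's hierarchical prior, the per-node hyperparameter plays the role that the shared ridge penalty plays here, but is learned per gene, so nodes that genuinely drive many targets are regularized less and accumulate outgoing edges.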

  4. Semi-Supervised Multi-View Learning for Gene Network Reconstruction

    PubMed Central

    Ceci, Michelangelo; Pio, Gianvito; Kuzmanovski, Vladimir; Džeroski, Sašo

    2015-01-01

The task of gene regulatory network reconstruction from high-throughput data has received increasing attention in recent years. As a consequence, many inference methods for solving this task have been proposed in the literature. It has recently been observed, however, that no single inference method performs optimally across all datasets. It has also been shown that the integration of predictions from multiple inference methods is more robust and shows high performance across diverse datasets. Inspired by this research, in this paper we propose a machine learning solution which learns to combine predictions from multiple inference methods. While this approach adds complexity to the inference process, we expect it to carry substantial benefits: it adapts automatically to patterns in the outputs of individual inference methods, so that regulatory interactions can be identified more reliably when these patterns occur. This article demonstrates the benefits (in terms of accuracy of the reconstructed networks) of the proposed method, which exploits an iterative, semi-supervised ensemble-based algorithm. The algorithm learns to combine the interactions predicted by many different inference methods in the multi-view learning setting. The empirical evaluation of the proposed algorithm on a prokaryotic model organism (E. coli) and on a eukaryotic model organism (S. cerevisiae) clearly shows improved performance over the state-of-the-art methods. The results indicate that gene regulatory network reconstruction for the real datasets is more difficult for S. cerevisiae than for E. coli. The software, all the datasets used in the experiments and all the results are available for download at the following link: http://figshare.com/articles/Semi_supervised_Multi_View_Learning_for_Gene_Network_Reconstruction/1604827. PMID:26641091

  6. [A generalized chemical-kinetic method for modeling gene networks].

    PubMed

    Likhoshvaĭ, V A; Matushkin, Iu G; Ratushnyĭ, A V; Anan'ko, E A; Ignat'eva, E V; Podkolodnaia, O A

    2001-01-01

Development of methods for the mathematical simulation of biological systems, and the building of specific simulations, is an important trend in bioinformatics. Here we describe a generalized chemical-kinetic method that generates flexible and adequate simulations of various biological systems. As examples, we present simulations of two complex nonlinear gene networks: the system controlling cholesterol biosynthesis in the cell, and erythrocyte differentiation and maturation. The simulations were expressed in terms of elementary processes (biochemical reactions). Optimal sets of parameters were determined and the systems were simulated numerically under various conditions. The simulations allow us to study possible functional states of these gene networks, calculate the consequences of mutations, and define optimal strategies for their correction, including therapeutic ones. A graphical user interface for these simulations is available at http://wwwmgs.bionet.nsc.ru/systems/MGL/GeneNet/. PMID:11771132

  7. A swarm intelligence framework for reconstructing gene networks: searching for biologically plausible architectures.

    PubMed

    Kentzoglanakis, Kyriakos; Poole, Matthew

    2012-01-01

    In this paper, we investigate the problem of reverse engineering the topology of gene regulatory networks from temporal gene expression data. We adopt a computational intelligence approach comprising swarm intelligence techniques, namely particle swarm optimization (PSO) and ant colony optimization (ACO). In addition, the recurrent neural network (RNN) formalism is employed for modeling the dynamical behavior of gene regulatory systems. More specifically, ACO is used for searching the discrete space of network architectures and PSO for searching the corresponding continuous space of RNN model parameters. We propose a novel solution construction process in the context of ACO for generating biologically plausible candidate architectures. The objective is to concentrate the search effort into areas of the structure space that contain architectures which are feasible in terms of their topological resemblance to real-world networks. The proposed framework is initially applied to the reconstruction of a small artificial network that has previously been studied in the context of gene network reverse engineering. Subsequently, we consider an artificial data set with added noise for reconstructing a subnetwork of the genetic interaction network of S. cerevisiae (yeast). Finally, the framework is applied to a real-world data set for reverse engineering the SOS response system of the bacterium Escherichia coli. Results demonstrate the relative advantage of utilizing problem-specific knowledge regarding biologically plausible structural properties of gene networks over conducting a problem-agnostic search in the vast space of network architectures. PMID:21576756
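The RNN formalism used to model the system dynamics can be sketched directly: each gene's next expression level is a sigmoid of a weighted sum of current levels. The weight matrix below is a made-up repressilator-like ring of inhibitions, not a network from the paper; in the paper, ACO proposes the nonzero pattern of W and PSO fits its continuous values against the observed time series.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Discrete-time RNN model of a 3-gene system: x(t+1) = sigmoid(W x(t) + b).
# W[i, j] > 0 would mean gene j activates gene i; here each gene
# represses the next one around a ring (hypothetical topology).
W = np.array([[ 0.0, -2.0,  0.0],
              [ 0.0,  0.0, -2.0],
              [-2.0,  0.0,  0.0]])
b = np.array([1.0, 1.0, 1.0])

x = np.array([0.9, 0.1, 0.1])   # initial expression levels in [0, 1]
traj = [x]
for _ in range(50):
    x = sigmoid(W @ x + b)
    traj.append(x)
traj = np.array(traj)           # simulated expression time course
```

Fitting then amounts to choosing W and b so that `traj` reproduces the measured expression profiles, which is the continuous search space PSO explores for each ACO-proposed structure.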

  8. Snapshot of iron response in Shewanella oneidensis by gene network reconstruction

    SciTech Connect

    Yang, Yunfeng; Harris, Daniel P.; Luo, Feng; Xiong, Wenlu; Joachimiak, Marcin; Wu, Liyou; Dehal, Paramvir; Jacobsen, Janet; Yang, Zamin; Palumbo, Anthony V.; Arkin, Adam P.; Zhou, Jizhong

    2008-10-09

Background: Iron homeostasis of Shewanella oneidensis, a gamma-proteobacterium possessing high iron content, is regulated by a global transcription factor Fur. However, knowledge is incomplete about other biological pathways that respond to changes in iron concentration, as well as details of the responses. In this work, we integrate physiological, transcriptomics and genetic approaches to delineate the iron response of S. oneidensis. Results: We show that the iron response in S. oneidensis is a rapid process. Temporal gene expression profiles were examined for iron depletion and repletion, and a gene co-expression network was reconstructed. Modules of iron acquisition systems, anaerobic energy metabolism and protein degradation were the most noteworthy in the gene network. Bioinformatics analyses suggested that genes in each of the modules might be regulated by DNA-binding proteins Fur, CRP and RpoH, respectively. Closer inspection of these modules revealed a transcriptional regulator (SO2426) involved in iron acquisition and ten transcriptional factors involved in anaerobic energy metabolism. Selected genes in the network were analyzed by genetic studies. Disruption of genes encoding a putative alcaligin biosynthesis protein (SO3032) and a gene previously implicated in protein degradation (SO2017) led to severe growth deficiency under iron depletion conditions. Disruption of a novel transcriptional factor (SO1415) caused deficiency in both anaerobic iron reduction and growth with thiosulfate or TMAO as an electron acceptor, suggesting that SO1415 is required for specific branches of anaerobic energy metabolism pathways. Conclusions: Using a reconstructed gene network, we identified major biological pathways that were differentially expressed during iron depletion and repletion. Genetic studies not only demonstrated the importance of iron acquisition and protein degradation for iron depletion, but also characterized a novel transcriptional factor (SO1415) with a

  9. A Synthesis Method of Gene Networks Having Cyclic Expression Pattern Sequences by Network Learning

    NASA Astrophysics Data System (ADS)

    Mori, Yoshihiro; Kuroe, Yasuaki

Recently, the synthesis of gene networks having desired functions has become of interest to many researchers, because it is a complementary approach to understanding gene networks and could be the first step in controlling living cells. Several periodic phenomena exist in cells, e.g. the circadian rhythm, and these phenomena are considered to be generated by gene networks. We have previously proposed a synthesis method for gene networks based on gene expression, applicable to synthesizing gene networks that possess desired cyclic expression pattern sequences. That method ensures that the realized expression pattern sequences are periodic; however, it does not ensure that the corresponding solution trajectories are periodic, which may mean that the oscillations are not persistent. In this paper, to resolve this problem, we propose a synthesis method for gene networks that realizes the desired cyclic expression pattern sequences together with periodic solution trajectories. In the proposed method, persistent oscillations of the solution trajectories are achieved by specifying points through which they must pass.

  10. Reconstruction of a Functional Human Gene Network, with an Application for Prioritizing Positional Candidate Genes

    PubMed Central

    Franke, Lude; Bakel, Harm van; Fokkens, Like; de Jong, Edwin D.; Egmont-Petersen, Michael; Wijmenga, Cisca

    2006-01-01

    Most common genetic disorders have a complex inheritance and may result from variants in many genes, each contributing only weak effects to the disease. Pinpointing these disease genes within the myriad of susceptibility loci identified in linkage studies is difficult because these loci may contain hundreds of genes. However, in any disorder, most of the disease genes will be involved in only a few different molecular pathways. If we know something about the relationships between the genes, we can assess whether some genes (which may reside in different loci) functionally interact with each other, indicating a joint basis for the disease etiology. There are various repositories of information on pathway relationships. To consolidate this information, we developed a functional human gene network that integrates information on genes and the functional relationships between genes, based on data from the Kyoto Encyclopedia of Genes and Genomes, the Biomolecular Interaction Network Database, Reactome, the Human Protein Reference Database, the Gene Ontology database, predicted protein-protein interactions, human yeast two-hybrid interactions, and microarray coexpressions. We applied this network to interrelate positional candidate genes from different disease loci and then tested 96 heritable disorders for which the Online Mendelian Inheritance in Man database reported at least three disease genes. Artificial susceptibility loci, each containing 100 genes, were constructed around each disease gene, and we used the network to rank these genes on the basis of their functional interactions. By following up the top five genes per artificial locus, we were able to detect at least one known disease gene in 54% of the loci studied, representing a 2.8-fold increase over random selection. This suggests that our method can significantly reduce the cost and effort of pinpointing true disease genes in analyses of disorders for which numerous loci have been reported but for which

  11. Ensemble-Based Network Aggregation Improves the Accuracy of Gene Network Reconstruction

    PubMed Central

    Xiao, Guanghua; Xie, Yang

    2014-01-01

    Reverse engineering approaches to constructing gene regulatory networks (GRNs) based on genome-wide mRNA expression data have led to significant biological findings, such as the discovery of novel drug targets. However, the reliability of the reconstructed GRNs needs to be improved. Here, we propose an ensemble-based network aggregation approach to improving the accuracy of network topologies constructed from mRNA expression data. To evaluate the performances of different approaches, we created dozens of simulated networks from combinations of gene-set sizes and sample sizes and also tested our methods on three Escherichia coli datasets. We demonstrate that the ensemble-based network aggregation approach can be used to effectively integrate GRNs constructed from different studies – producing more accurate networks. We also apply this approach to building a network from epithelial mesenchymal transition (EMT) signature microarray data and identify hub genes that might be potential drug targets. The R code used to perform all of the analyses is available in an R package entitled “ENA”, accessible on CRAN (http://cran.r-project.org/web/packages/ENA/). PMID:25390635
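The aggregation idea can be sketched with simple rank averaging over edge scores from several inference methods; the ENA package itself uses an inverse-rank scheme, and the scores below are invented for illustration.

```python
import numpy as np

# Confidence scores for the same 5 candidate edges from three hypothetical
# inference methods (higher = more confident). Edge 0 is the "true" edge.
scores = np.array([[0.9, 0.2, 0.4, 0.1, 0.3],   # method A
                   [0.7, 0.6, 0.1, 0.2, 0.3],   # method B
                   [0.8, 0.1, 0.2, 0.5, 0.4]])  # method C

# Rank-based aggregation: convert each method's scores to within-method ranks,
# then average. Ranks make methods with different score scales comparable.
ranks = scores.argsort(axis=1).argsort(axis=1)  # 0 = lowest score
consensus = ranks.mean(axis=0)                  # aggregated edge confidence
best_edge = int(consensus.argmax())
```

Edges that every method ranks highly dominate the consensus even when no single method's raw scores are calibrated, which is why aggregated networks tend to be more accurate than any individual reconstruction.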

  12. Methods of Voice Reconstruction

    PubMed Central

    Chen, Hung-Chi; Kim Evans, Karen F.; Salgado, Christopher J.; Mardini, Samir

    2010-01-01

    This article reviews methods of voice reconstruction. Nonsurgical methods of voice reconstruction include electrolarynx, pneumatic artificial larynx, and esophageal speech. Surgical methods of voice reconstruction include neoglottis, tracheoesophageal puncture, and prosthesis. Tracheoesophageal puncture can be performed in patients with pedicled flaps such as colon interposition, jejunum, or gastric pull-up or in free flaps such as perforator flaps, jejunum, and colon flaps. Other flaps for voice reconstruction include the ileocolon flap and jejunum. Laryngeal transplantation is also reviewed. PMID:22550443

  13. How to train your microbe: methods for dynamically characterizing gene networks

    PubMed Central

    Castillo-Hair, Sebastian M.; Igoshin, Oleg A.; Tabor, Jeffrey J.

    2015-01-01

    Gene networks regulate biological processes dynamically. However, researchers have largely relied upon static perturbations, such as growth media variations and gene knockouts, to elucidate gene network structure and function. Thus, much of the regulation on the path from DNA to phenotype remains poorly understood. Recent studies have utilized improved genetic tools, hardware, and computational control strategies to generate precise temporal perturbations outside and inside of live cells. These experiments have, in turn, provided new insights into the organizing principles of biology. Here, we introduce the major classes of dynamical perturbations that can be used to study gene networks, and discuss technologies available for creating them in a wide range of microbial pathways. PMID:25677419

  15. Inferring robust gene networks from expression data by a sensitivity-based incremental evolution method

    PubMed Central

    2012-01-01

    Background Reconstructing gene regulatory networks (GRNs) from expression data is one of the most important challenges in systems biology research. Many computational models and methods have been proposed to automate the process of network reconstruction. Inferring robust networks with desired behaviours remains challenging, however. This problem is related to network dynamics but has yet to be investigated using network modeling. Results We propose an incremental evolution approach for inferring GRNs that takes network robustness into consideration and can deal with a large number of network parameters. Our approach includes a sensitivity analysis procedure to iteratively select the most influential network parameters, and it uses a swarm intelligence procedure to perform parameter optimization. We have conducted a series of experiments to evaluate the external behaviors and internal robustness of the networks inferred by the proposed approach. The results and analyses have verified the effectiveness of our approach. Conclusions Sensitivity analysis is crucial to identifying the most sensitive parameters that govern the network dynamics. It can further be used to derive constraints for network parameters in the network reconstruction process. The experimental results show that the proposed approach can successfully infer robust GRNs with desired system behaviors. PMID:22595005
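The sensitivity-analysis step can be illustrated with one-at-a-time finite differences on a toy steady-state model, a stand-in for simulating the full network; the model and parameter names are hypothetical.

```python
import numpy as np

# Toy gene-expression model: steady-state output as a function of parameters.
# k_syn: synthesis rate, k_deg: degradation rate, k_in: input strength
# (all hypothetical; in the paper this would be a full GRN simulation).
def model(theta):
    k_syn, k_deg, k_in = theta
    return k_syn * k_in / (k_deg + k_in)

theta0 = np.array([2.0, 0.5, 1.0])

# One-at-a-time finite-difference sensitivities, scaled by parameter magnitude
# so that parameters on different scales can be compared fairly.
eps = 1e-6
base = model(theta0)
sens = np.zeros(3)
for i in range(3):
    t = theta0.copy()
    t[i] += eps
    sens[i] = abs((model(t) - base) / eps) * theta0[i]

most_sensitive = int(sens.argmax())  # the parameter that most affects behaviour
```

Parameters with high scaled sensitivity are the ones the incremental-evolution procedure would prioritize for optimization, while low-sensitivity parameters can be constrained or frozen.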

  16. Investigating the Effects of Imputation Methods for Modelling Gene Networks Using a Dynamic Bayesian Network from Gene Expression Data

    PubMed Central

    CHAI, Lian En; LAW, Chow Kuan; MOHAMAD, Mohd Saberi; CHONG, Chuii Khim; CHOON, Yee Wen; DERIS, Safaai; ILLIAS, Rosli Md

    2014-01-01

Background: Gene expression data often contain missing expression values. Therefore, several imputation methods have been applied to impute these missing values, including k-nearest neighbour (kNN), local least squares (LLS), and Bayesian principal component analysis (BPCA). However, the effects of these imputation methods on the modelling of gene regulatory networks from gene expression data have rarely been investigated and analysed using a dynamic Bayesian network (DBN). Methods: In the present study, we separately imputed datasets of the Escherichia coli S.O.S. DNA repair pathway and the Saccharomyces cerevisiae cell cycle pathway with kNN, LLS, and BPCA, and subsequently used these to generate gene regulatory networks (GRNs) using a discrete DBN. We made comparisons on the basis of previous studies in order to select the gene network with the least error. Results: We found that BPCA and LLS performed better on larger networks (based on the S. cerevisiae dataset), whereas kNN performed better on smaller networks (based on the E. coli dataset). Conclusion: The results suggest that the performance of each imputation method is dependent on the size of the dataset, and this subsequently affects the modelling of the resultant GRNs using a DBN. In addition, on the basis of these results, a DBN has the capacity to discover potential edges, as well as display interactions, between genes. PMID:24876803
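Of the three imputation methods compared, kNN is the simplest to sketch: find the genes most similar to the incomplete one over its observed entries, then average their values in the missing column. A minimal numpy version with invented data:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy expression matrix (genes x samples) with one missing value (NaN).
X = rng.normal(size=(20, 8))
X[5] = X[3] + 0.1 * rng.normal(size=8)   # gene 5 closely tracks gene 3
X[5, 2] = np.nan                          # simulate a missing measurement

# kNN imputation: distance to every other gene over the observed columns,
# then average the k nearest genes' values in the missing column.
k = 3
obs = ~np.isnan(X[5])
d = np.sqrt(((X[:, obs] - X[5, obs]) ** 2).sum(axis=1))
d[5] = np.inf                             # exclude the incomplete gene itself
neighbours = np.argsort(d)[:k]
X[5, 2] = X[neighbours, 2].mean()         # imputed value
```

LLS replaces the plain average with a least-squares combination of the neighbours, and BPCA instead fits a low-rank probabilistic model of the whole matrix; both tend to help on larger datasets, consistent with the study's findings.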

  17. Line profile reconstruction: validation and comparison of reconstruction methods

    NASA Astrophysics Data System (ADS)

    Tsai, Ming-Yi; Yost, Michael G.; Wu, Chang-Fu; Hashmonay, Ram A.; Larson, Timothy V.

Currently, open path Fourier transform infrared (OP-FTIR) spectrometers have been applied in some fenceline monitoring, but their use has been limited because path-integrated concentration measurements typically only provide an estimate of the average concentration. We present a series of experiments that further explore the use of path-integrated measurements to reconstruct various pollutant distributions along a linear path. Our experiments were conducted in a ventilation chamber using an OP-FTIR instrument to monitor a tracer-gas release over a fenceline configuration. These experiments validate a line profile method (1-D reconstruction). Additionally, we expand current reconstruction techniques by applying the Bootstrap to our measurements. We compared our reconstruction results to our point samplers using the concordance correlation factor (CCF). Of the four different release types, three were successfully reconstructed with CCFs greater than 0.9. The difficult reconstruction involved a narrow release where the pollutant was limited to one segment of the segmented beam path. In general, of the three reconstruction methods employed, the average of the bootstrapped reconstructions was found to have the highest CCFs when compared to the point samplers. Furthermore, the bootstrap method was the most flexible and allowed a determination of the uncertainty surrounding our reconstructions.
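Stripped of the bootstrap machinery, 1-D line-profile reconstruction from path-integrated measurements reduces to a small linear inverse problem: each measurement is a weighted sum of segment concentrations, and a least-squares solve recovers the profile. The segmented-beam geometry below is hypothetical:

```python
import numpy as np

# Beam path split into 4 segments; each measurement integrates concentration
# over a subset of segments. Row i of A gives the path weight of measurement i
# in each segment (a made-up, full-rank geometry).
A = np.array([[1.0, 1.0, 1.0, 1.0],   # full path
              [1.0, 1.0, 0.0, 0.0],   # first half
              [0.0, 1.0, 1.0, 0.0],   # middle segments
              [0.0, 0.0, 0.0, 1.0]])  # last segment
c_true = np.array([0.0, 2.0, 5.0, 1.0])  # segment concentrations
y = A @ c_true                            # path-integrated measurements

# Reconstruct the profile by least squares (non-negativity ignored for brevity).
c_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
```

The bootstrap variant resamples the measurements, repeats this solve, and averages the reconstructions, which also yields the uncertainty estimates the abstract describes.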

  18. Method for positron emission mammography image reconstruction

    DOEpatents

    Smith, Mark Frederick

    2004-10-12

An image reconstruction method comprising accepting coincidence data from either a data file or in real time from a pair of detector heads, culling event data that is outside a desired energy range, optionally saving the desired data for each detector position or for each pair of detector pixels on the two detector heads, and then reconstructing the image either by backprojection image reconstruction or by iterative image reconstruction. In the backprojection image reconstruction mode, rays are traced between centers of lines of response (LORs), counts are then either allocated by nearest pixel interpolation or allocated by an overlap method and then corrected for geometric effects and attenuation, and the data file updated. If the iterative image reconstruction option is selected, one implementation is to compute a grid-based Siddon ray tracing, and to perform maximum likelihood expectation maximization (MLEM) computed by either: a) tracing parallel rays between subpixels on opposite detector heads; or b) tracing rays between randomized endpoint locations on opposite detector heads.
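The iterative option rests on the standard MLEM update, which can be shown on a tiny system matrix; the geometry below is invented, with the patent's ray tracing (which builds the matrix A) replaced by a hand-written matrix.

```python
import numpy as np

# Tiny system: 3 image pixels, 4 lines of response (LORs).
# A[i, j] = sensitivity of LOR i to pixel j (hypothetical geometry).
A = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.5],
              [0.5, 0.0, 1.0],
              [0.3, 0.3, 0.3]])
x_true = np.array([4.0, 1.0, 2.0])
y = A @ x_true                      # noiseless coincidence counts, for illustration

# MLEM update: x <- x / (A^T 1) * A^T (y / (A x)); preserves non-negativity.
x = np.ones(3)
sens = A.sum(axis=0)                # per-pixel sensitivity (A^T 1)
for _ in range(2000):
    x = x / sens * (A.T @ (y / (A @ x)))
```

Each iteration forward-projects the current image, compares it with the measured counts, and backprojects the ratio, so the estimate converges toward the maximum-likelihood image while staying non-negative.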

  19. The gap gene network

    PubMed Central

    2010-01-01

    Gap genes are involved in segment determination during the early development of the fruit fly Drosophila melanogaster as well as in other insects. This review attempts to synthesize the current knowledge of the gap gene network through a comprehensive survey of the experimental literature. I focus on genetic and molecular evidence, which provides us with an almost-complete picture of the regulatory interactions responsible for trunk gap gene expression. I discuss the regulatory mechanisms involved, and highlight the remaining ambiguities and gaps in the evidence. This is followed by a brief discussion of molecular regulatory mechanisms for transcriptional regulation, as well as precision and size-regulation provided by the system. Finally, I discuss evidence on the evolution of gap gene expression from species other than Drosophila. My survey concludes that studies of the gap gene system continue to reveal interesting and important new insights into the role of gene regulatory networks in development and evolution. PMID:20927566

  20. Reconstructive methods in hearing disorders - surgical methods

    PubMed Central

    Zahnert, Thomas

    2005-01-01

    Restoration of hearing is associated in many cases with resocialisation of those affected and therefore occupies an important place in a society where communication is becoming ever faster. Not all problems can be solved surgically. Even 50 years after the introduction of tympanoplasty, the hearing results are unsatisfactory and often do not reach the threshold for social hearing. The cause of this can in most cases be regarded as incomplete restoration of the mucosal function of the middle ear and tube, which leads to ventilation disorders of the ear and does not allow real vibration of the reconstructed middle ear. However, a few are also caused by the biomechanics of the reconstructed ossicular chain. There has been progress in reconstructive middle ear surgery, which applies particularly to the development of implants. Implants made of titanium, which are distinguished by outstanding biocompatibility, delicate design and by biomechanical possibilities in the reconstruction of chain function, can be regarded as a new generation. Metal implants for the first time allow a controlled close fit with the remainder of the chain and integration of micromechanical functions in the implant. Moreover, there has also been progress in microsurgery itself. This applies particularly to the operative procedures for auditory canal atresia, the restoration of the tympanic membrane and the coupling of implants. This paper gives a summary of the current state of reconstructive microsurgery paying attention to the acousto-mechanical rules. PMID:22073050

  1. PET image reconstruction using kernel method.

    PubMed

    Wang, Guobao; Qi, Jinyi

    2015-01-01

    Image reconstruction from low-count positron emission tomography (PET) projection data is challenging because the inverse problem is ill-posed. Prior information can be used to improve image quality. Inspired by the kernel methods in machine learning, this paper proposes a kernel based method that models PET image intensity in each pixel as a function of a set of features obtained from prior information. The kernel-based image model is incorporated into the forward model of PET projection data and the coefficients can be readily estimated by the maximum likelihood (ML) or penalized likelihood image reconstruction. A kernelized expectation-maximization algorithm is presented to obtain the ML estimate. Computer simulations show that the proposed approach can achieve better bias versus variance trade-off and higher contrast recovery for dynamic PET image reconstruction than the conventional maximum likelihood method with and without post-reconstruction denoising. Compared with other regularization-based methods, the kernel method is easier to implement and provides better image quality for low-count data. Application of the proposed kernel method to a 4-D dynamic PET patient dataset showed promising results. PMID:25095249
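The kernelized-EM idea above can be sketched in a few lines: the image is parameterized as x = K·alpha, and MLEM runs on the composite matrix P·K. The system sizes, the random projector P and the kernel matrix K below are all made up for illustration; this is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes and a random system (illustrative, not a real PET geometry).
n_bins, n_pix = 12, 8
P = rng.uniform(0.1, 1.0, (n_bins, n_pix))        # toy forward projector
K = np.eye(n_pix) + 0.05 * rng.uniform(size=(n_pix, n_pix))  # kernel matrix
x_true = rng.uniform(2.0, 5.0, n_pix)
y = P @ x_true                                    # noiseless projection data

# Kernelized MLEM: image x = K @ alpha, EM update on the coefficients.
PK = P @ K
alpha = np.ones(n_pix)
sens = PK.sum(axis=0)                             # sensitivity (column sums)
for _ in range(2000):
    alpha *= (PK.T @ (y / (PK @ alpha))) / sens   # multiplicative MLEM step

x_rec = K @ alpha                                 # kernelized image estimate
rel_err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
```

The multiplicative update keeps the coefficients nonnegative, which is the usual EM behavior the paper's kernelized algorithm inherits.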

  2. PET Image Reconstruction Using Kernel Method

    PubMed Central

    Wang, Guobao; Qi, Jinyi

    2014-01-01

    Image reconstruction from low-count PET projection data is challenging because the inverse problem is ill-posed. Prior information can be used to improve image quality. Inspired by the kernel methods in machine learning, this paper proposes a kernel based method that models PET image intensity in each pixel as a function of a set of features obtained from prior information. The kernel-based image model is incorporated into the forward model of PET projection data and the coefficients can be readily estimated by the maximum likelihood (ML) or penalized likelihood image reconstruction. A kernelized expectation-maximization (EM) algorithm is presented to obtain the ML estimate. Computer simulations show that the proposed approach can achieve better bias versus variance trade-off and higher contrast recovery for dynamic PET image reconstruction than the conventional maximum likelihood method with and without post-reconstruction denoising. Compared with other regularization-based methods, the kernel method is easier to implement and provides better image quality for low-count data. Application of the proposed kernel method to a 4D dynamic PET patient dataset showed promising results. PMID:25095249

  3. Gene network and pathway generation and analysis: Editorial

    SciTech Connect

    Zhao, Zhongming; Sanfilippo, Antonio P.; Huang, Kun

    2011-02-18

The past decade has witnessed an exponential growth of biological data including genomic sequences, gene annotations, expression and regulation, and protein-protein interactions. A key aim in the post-genome era is to systematically catalogue gene networks and pathways in the dynamic living cell and to apply them to the study of diseases and phenotypes. To promote research in systems biology and its application to disease studies, we organized a workshop focusing on the reconstruction and analysis of gene networks and pathways in organisms from high-throughput data collected through techniques such as microarray analysis and RNA-Seq.

  4. Bullet trajectory reconstruction - Methods, accuracy and precision.

    PubMed

    Mattijssen, Erwin J A T; Kerkhoff, Wim

    2016-05-01

Based on the spatial relation between a primary and secondary bullet defect, or on the shape and dimensions of the primary bullet defect, a bullet's trajectory prior to impact can be estimated for a shooting scene reconstruction. The accuracy and precision of the estimated trajectories will vary with variables such as the applied method of reconstruction, the (true) angle of incidence, the properties of the target material and the properties of the bullet upon impact. This study focused on the accuracy and precision of estimated bullet trajectories when different variants of the probing method, ellipse method, and lead-in method are applied to bullet defects resulting from shots at various angles of incidence on drywall, MDF and sheet metal. The results show that in most situations the best performance (accuracy and precision) is obtained with the probing method; only at the lowest angles of incidence did either the ellipse or lead-in method perform better. The data provided in this paper can be used to select the appropriate method(s) for reconstruction, to correct for systematic errors (accuracy), and to attach a value to the precision by means of a confidence interval for the specific measurement. PMID:27044032
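The ellipse method mentioned above has a simple geometric core: for an elliptical defect, the sine of the angle of incidence (measured from the impacted surface) is the ratio of the defect's minor to major axis. A minimal sketch, with invented measurements:

```python
import math

def ellipse_incidence_angle(defect_width_mm, defect_length_mm):
    """Estimate the angle of incidence (degrees, from the impacted surface)
    from the minor (width) and major (length) axes of an elliptical defect,
    using the classic relation sin(theta) = width / length."""
    if defect_length_mm < defect_width_mm:
        raise ValueError("length must be the major (longer) axis")
    return math.degrees(math.asin(defect_width_mm / defect_length_mm))

# A circular defect implies a perpendicular (90 degree) impact:
angle_perp = ellipse_incidence_angle(9.0, 9.0)
# A defect twice as long as it is wide implies a 30 degree impact:
angle_shallow = ellipse_incidence_angle(9.0, 18.0)
```

Note the paper's finding that this geometric estimate degrades relative to physical probing except at the lowest angles of incidence.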

  5. Hybrid stochastic simplifications for multiscale gene networks

    PubMed Central

    Crudu, Alina; Debussche, Arnaud; Radulescu, Ovidiu

    2009-01-01

Background Stochastic simulation of gene networks by Markov processes has important applications in molecular biology. The complexity of exact simulation algorithms scales with the number of discrete jumps to be performed. Approximate schemes reduce the computational time by reducing the number of simulated discrete events. Also, answering important questions about the relation between network topology and intrinsic noise generation and propagation should be based on general mathematical results. These general results are difficult to obtain for exact models. Results We propose a unified framework for hybrid simplifications of Markov models of multiscale stochastic gene network dynamics. We discuss several possible hybrid simplifications, and provide algorithms to obtain them from pure jump processes. In hybrid simplifications, some components are discrete and evolve by jumps, while other components are continuous. Hybrid simplifications are obtained by partial Kramers-Moyal expansion [1-3], which is equivalent to the application of the central limit theorem to a sub-model. By averaging and variable aggregation we drastically reduce simulation time and eliminate non-critical reactions. Hybrid and averaged simplifications can be used for more effective simulation algorithms and for obtaining general design principles relating noise to topology and time scales. The simplified models reproduce with good accuracy the stochastic properties of the gene networks, including waiting times in intermittence phenomena, fluctuation amplitudes and stationary distributions. The methods are illustrated on several gene network examples. Conclusion Hybrid simplifications can be used for onion-like (multi-layered) approaches to multi-scale biochemical systems, in which various descriptions are used at various scales. Sets of discrete and continuous variables are treated with different methods and are coupled together in a physically justified approach. PMID:19735554

  6. Magnetic flux reconstruction methods for shaped tokamaks

    SciTech Connect

    Tsui, Chi-Wa

    1993-12-01

The use of a variational method permits the Grad-Shafranov (GS) equation to be solved by reducing the problem of solving the 2D non-linear partial differential equation to the problem of minimizing a function of several variables. This high-speed algorithm approximately solves the GS equation given a parameterization of the plasma boundary and the current profile (p' and FF' functions). The author treats the current profile parameters as unknowns. The goal is to reconstruct the internal magnetic flux surfaces of a tokamak plasma and the toroidal current density profile from the external magnetic measurements. This is a classic problem of inverse equilibrium determination. The current profile parameters can be evaluated by several different matching procedures. Matching of magnetic flux and field at the probe locations using the Biot-Savart law and the magnetic Green's function provides a robust method of magnetic reconstruction. The matching of the poloidal magnetic field on the plasma surface provides a unique method of identifying the plasma current profile. However, the power of this method is greatly compromised by the experimental errors of the magnetic signals. The Casing Principle provides a very fast way to evaluate the plasma contribution to the magnetic signals and has the potential of being a fast matching method; its performance is hindered by the accuracy of the poloidal magnetic field computed from the equilibrium solver. A flux reconstruction package has been implemented which integrates a vacuum field solver using a filament model for the plasma, a multi-layer perceptron neural network as an interface, and the volume integration of plasma current density using Green's functions as a matching method for the current profile parameters. The flux reconstruction package is applied to compare with the ASEQ and EFIT data. The results are promising.
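The filament-model matching step can be sketched generically: probe signals respond linearly to filament currents through a Green's-function (response) matrix, so the currents follow from a least-squares fit. The response matrix, currents and noise level below are invented stand-ins for actual Biot-Savart geometry factors:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical linear response of 12 magnetic probes to 3 plasma filaments.
n_probes, n_fil = 12, 3
G = rng.uniform(0.1, 1.0, (n_probes, n_fil))   # probe response per unit current
I_true = np.array([50.0, 80.0, 30.0])          # filament currents (made up)
signals = G @ I_true + 0.01 * rng.normal(size=n_probes)  # noisy measurements

# Least-squares matching of measured signals to the filament model:
I_fit, *_ = np.linalg.lstsq(G, signals, rcond=None)
rel_err = np.linalg.norm(I_fit - I_true) / np.linalg.norm(I_true)
```

With more probes than filaments the fit is overdetermined, which is what gives the probe-matching procedure its robustness to individual signal errors.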

  7. Buffering in cyclic gene networks

    NASA Astrophysics Data System (ADS)

    Glyzin, S. D.; Kolesov, A. Yu.; Rozov, N. Kh.

    2016-06-01

We consider cyclic chains of unidirectionally coupled delay differential-difference equations that are mathematical models of artificial oscillating gene networks. We establish that the buffering phenomenon is realized in these systems for an appropriate choice of the parameters: any given finite number of stable periodic motions of a special type, the so-called traveling waves, can coexist.
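A minimal member of this model class is a ring of delayed repressors. The sketch below Euler-integrates a three-gene ring with illustrative parameter values (not taken from the paper); the update is a convex combination of the current state and a bounded production term, so trajectories provably stay in (0, beta].

```python
import numpy as np

# Ring of three unidirectionally coupled, delayed repressors:
#   x_i'(t) = beta / (1 + x_{i-1}(t - tau)^n) - x_i(t)
beta, n, tau, dt, T = 4.0, 4, 2.0, 0.01, 100.0
steps, delay = int(T / dt), int(tau / dt)
x0 = np.array([0.5, 1.0, 2.0])                # non-symmetric initial history
buf = np.zeros((delay + steps + 1, 3))        # states on [-tau, T]
buf[: delay + 1] = x0                         # constant pre-history
for k in range(steps):
    lagged = buf[k]                           # state at time t - tau
    drive = beta / (1.0 + np.roll(lagged, 1) ** n)   # x_{i-1} represses x_i
    buf[delay + k + 1] = buf[delay + k] + dt * (drive - buf[delay + k])
traj = buf[delay:]                            # trajectory on [0, T]
```

Each Euler step mixes the state with a term in (0, beta], so every component remains in that band for all time, a crude discrete analogue of the boundedness underlying the coexisting periodic motions.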

  8. Designing a parallel evolutionary algorithm for inferring gene networks on the cloud computing environment

    PubMed Central

    2014-01-01

Background To avoid the tedious task of reconstructing gene networks by experimentally testing the possible interactions between genes, it has become a trend to adopt automated reverse-engineering procedures instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks with an evolutionary algorithm, two important issues must be addressed: premature convergence and high computational cost. To tackle the former and enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel-model evolutionary algorithms. To overcome the latter and speed up the computation, the mechanism of cloud computing is a promising solution: most popular is the MapReduce programming model, a fault-tolerant framework for implementing parallel algorithms to infer large gene networks. Results This work presents a practical framework to infer large gene networks by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results analyzed. They show that our parallel approach can successfully infer networks with desired behaviors and that the computation time can be largely reduced. Conclusions Parallel population-based algorithms can effectively determine network parameters and perform better than the widely used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. 
By coupling the parallel model population-based optimization method and the parallel
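The hybrid GA-PSO idea can be sketched on a toy inference problem: fit the interaction matrix of a small linear gene model by PSO velocity updates plus a GA-style mutation of the weaker half of the swarm. Everything below (model, swarm sizes, coefficients) is illustrative, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy target: recover the 2x2 interaction matrix W of a linear gene
# model x_{t+1} = W x_t from simulated expression profiles.
W_true = np.array([[0.8, -0.3], [0.4, 0.5]])
X0 = rng.normal(size=(2, 20))
X1 = W_true @ X0

def fitness(w):
    """Squared error between observed and predicted next-step profiles."""
    return float(np.sum((X1 - w.reshape(2, 2) @ X0) ** 2))

P, D = 30, 4                        # swarm size, parameter dimension
pos = rng.uniform(-1, 1, (P, D))
vel = np.zeros((P, D))
pbest = pos.copy()
pbest_f = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
init_best = float(pbest_f.min())
for _ in range(300):
    r1, r2 = rng.uniform(size=(P, D)), rng.uniform(size=(P, D))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel                                   # PSO move
    worst = np.argsort([fitness(p) for p in pos])[P // 2:]
    pos[worst] += rng.normal(scale=0.05, size=(len(worst), D))  # GA mutation
    f = np.array([fitness(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

best_error = float(pbest_f.min())
```

Each generation is embarrassingly parallel over particles, which is what makes the MapReduce mapping described in the abstract natural.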

  9. Adaptive Models for Gene Networks

    PubMed Central

    Shin, Yong-Jun; Sayed, Ali H.; Shen, Xiling

    2012-01-01

    Biological systems are often treated as time-invariant by computational models that use fixed parameter values. In this study, we demonstrate that the behavior of the p53-MDM2 gene network in individual cells can be tracked using adaptive filtering algorithms and the resulting time-variant models can approximate experimental measurements more accurately than time-invariant models. Adaptive models with time-variant parameters can help reduce modeling complexity and can more realistically represent biological systems. PMID:22359614
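The tracking idea can be illustrated with the simplest adaptive filter, an LMS update following a slowly drifting gain in a scalar model. The signal model and step size below are illustrative stand-ins, not the paper's p53-MDM2 setup.

```python
import numpy as np

rng = np.random.default_rng(2)

# Scalar time-variant model y_t = a_t * u_t + v_t with a drifting gain.
T = 2000
a_true = 1.0 + 0.5 * np.sin(2 * np.pi * np.arange(T) / T)  # drifting gain
u = rng.normal(size=T)                                     # input signal
y = a_true * u + 0.01 * rng.normal(size=T)                 # noisy output

mu = 0.05                     # LMS step size (assumed value)
a_hat = np.empty(T)
a = 0.0
for t in range(T):
    e = y[t] - a * u[t]       # prediction error
    a += mu * e * u[t]        # stochastic-gradient (LMS) update
    a_hat[t] = a

# Compare against the best *time-invariant* gain (the sample mean),
# discarding the initial transient:
track_rmse = np.sqrt(np.mean((a_hat[200:] - a_true[200:]) ** 2))
fixed_rmse = np.sqrt(np.mean((a_true.mean() - a_true[200:]) ** 2))
```

The adaptive estimate follows the drift, while any fixed-parameter model is stuck with the average, mirroring the paper's time-variant versus time-invariant comparison.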

  10. Gene networks controlling petal organogenesis.

    PubMed

    Huang, Tengbo; Irish, Vivian F

    2016-01-01

    One of the biggest unanswered questions in developmental biology is how growth is controlled. Petals are an excellent organ system for investigating growth control in plants: petals are dispensable, have a simple structure, and are largely refractory to environmental perturbations that can alter their size and shape. In recent studies, a number of genes controlling petal growth have been identified. The overall picture of how such genes function in petal organogenesis is beginning to be elucidated. This review will focus on studies using petals as a model system to explore the underlying gene networks that control organ initiation, growth, and final organ morphology. PMID:26428062

  11. Exhaustive Search for Fuzzy Gene Networks from Microarray Data

    SciTech Connect

    Sokhansanj, B A; Fitch, J P; Quong, J N; Quong, A A

    2003-07-07

Recent technological advances in high-throughput data collection allow for the study of increasingly complex systems on the scale of the whole cellular genome and proteome. Gene network models are required to interpret these large and complex data sets. Rationally designed system perturbations (e.g. gene knock-outs, metabolite removal, etc.) can be used to iteratively refine hypothetical models, leading to a modeling-experiment cycle for high-throughput biological system analysis. We use fuzzy logic gene network models because they have greater resolution than Boolean logic models and do not require the precise parameter measurement needed for chemical kinetics-based modeling. The fuzzy gene network approach is tested by exhaustive search for network models describing cyclin gene interactions in yeast cell cycle microarray data, with preliminary success in recovering interactions predicted by previous biological knowledge and other analysis techniques. Our goal is to further develop this method in combination with experiments we are performing on bacterial regulatory networks.
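A minimal sketch of the exhaustive fuzzy-search idea: fuzzify expression into low/medium/high memberships, then score every (regulator, rule) hypothesis by how well the rule-permuted regulator levels predict the target's levels. The membership functions, synthetic profiles and rule set are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def fuzzify(x):
    """Triangular membership of values in [0, 1] in {low, med, high}."""
    low = np.clip(1 - 2 * x, 0, 1)
    high = np.clip(2 * x - 1, 0, 1)
    med = 1 - low - high
    return np.stack([low, med, high], axis=-1)

# Synthetic profiles: g2 activates the target; g1 is unrelated noise.
g1 = rng.uniform(0, 1, 50)
g2 = rng.uniform(0, 1, 50)
target = np.clip(g2 + 0.05 * rng.normal(size=50), 0, 1)

def score(regulator, rule):
    """Mean absolute error between the target's fuzzy levels and the
    regulator's levels permuted by an activator/repressor rule."""
    return float(np.abs(fuzzify(regulator)[:, rule] - fuzzify(target)).mean())

# Exhaustive search over (regulator, rule) hypotheses:
rules = {"activator": [0, 1, 2], "repressor": [2, 1, 0]}
results = {(name, kind): score(g, rules[kind])
           for name, g in [("g1", g1), ("g2", g2)]
           for kind in rules}
best = min(results, key=results.get)
```

Because the hypothesis space is enumerated explicitly, the same loop scales to the multi-regulator rule tables used in the fuzzy gene network literature, at combinatorial cost.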

  12. A Comparative Study of Different Reconstruction Schemes for a Reconstructed Discontinuous Galerkin Method on Arbitrary Grids

    SciTech Connect

    Hong Luo; Hanping Xiao; Robert Nourgaliev; Chunpei Cai

    2011-06-01

A comparative study of different reconstruction schemes for a reconstruction-based discontinuous Galerkin method, termed RDG(P1P2), is performed for compressible flow problems on arbitrary grids. The RDG method is designed to enhance the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution via a reconstruction scheme commonly used in the finite volume method. Both Green-Gauss and least-squares reconstruction methods and a least-squares recovery method are implemented to obtain a quadratic polynomial representation of the underlying discontinuous Galerkin linear polynomial solution on each cell. These three reconstruction/recovery methods are compared for a variety of compressible flow problems on arbitrary meshes to assess their accuracy and robustness. The numerical results demonstrate that all three reconstruction methods can significantly improve the accuracy of the underlying second-order DG method, although the least-squares reconstruction method provides the best performance in terms of both accuracy and robustness.
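The least-squares step at the heart of such schemes can be shown in isolation: reconstruct a cell's solution gradient from neighboring cell-center values by solving a small overdetermined system. The 2D stencil geometry below is made up, and only the generic finite-volume-style step is shown, not the RDG(P1P2) machinery.

```python
import numpy as np

# Hypothetical cell center and four neighbor centers.
center = np.array([0.0, 0.0])
neighbors = np.array([[1.0, 0.2], [-0.8, 0.9], [0.1, -1.1], [0.9, -0.7]])

def ls_gradient(u_center, u_neighbors, center, neighbors):
    """Least-squares gradient from differences to neighboring centers."""
    dX = neighbors - center          # displacement matrix (one row per neighbor)
    du = u_neighbors - u_center      # value differences
    g, *_ = np.linalg.lstsq(dX, du, rcond=None)
    return g

# For a linear field u = 3x - 2y + 1 the reconstruction is exact:
f = lambda p: 3 * p[0] - 2 * p[1] + 1
grad = ls_gradient(f(center), np.array([f(p) for p in neighbors]),
                   center, neighbors)
```

Exactness on linear fields is the consistency property these reconstruction operators are built around; the quadratic reconstruction in RDG extends the same idea to second derivatives.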

  13. A new target reconstruction method considering atmospheric refraction

    NASA Astrophysics Data System (ADS)

    Zuo, Zhengrong; Yu, Lijuan

    2015-12-01

In this paper, a new target reconstruction method that accounts for atmospheric refraction is presented to improve 3D reconstruction accuracy in long-range surveillance systems. The basic idea of the method is that the atmosphere between the camera and the target is partitioned radially into several thin layers, within each of which the density is regarded as uniform. The light propagation path from sensor to target is then traced in reverse by applying Snell's law at the interfaces between layers, and finally the average of the target positions traced from the different cameras is taken as the reconstructed position. Reconstruction experiments were carried out, and the results showed that the new method has much better reconstruction accuracy than the traditional stereoscopic reconstruction method.
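The layered Snell's-law tracing can be sketched directly; the refractive indices below are illustrative values only, not a real atmosphere model.

```python
import math

def trace_through_layers(theta0_deg, n_layers):
    """Apply Snell's law at each interface of stratified, uniform-density
    layers; returns the ray angle (measured from the layer normal) inside
    each layer, in degrees."""
    angles = [math.radians(theta0_deg)]
    for i in range(1, len(n_layers)):
        s = n_layers[i - 1] * math.sin(angles[-1]) / n_layers[i]
        angles.append(math.asin(s))
    return [math.degrees(a) for a in angles]

# Index rising toward the ground (denser air) bends the ray toward the
# normal; n * sin(theta) is conserved across every interface.
n = [1.00020, 1.00023, 1.00026, 1.00029]
angles = trace_through_layers(60.0, n)
```

Chaining this interface rule layer by layer is exactly the reverse tracing described in the abstract; the reconstructed target position then comes from intersecting the traced rays from several cameras.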

  14. Magnetic Field Configuration Models and Reconstruction Methods: a comparative study

    NASA Astrophysics Data System (ADS)

    Al-haddad, Nada; Möstl, Christian; Roussev, Ilia; Nieves-Chinchilla, Teresa; Poedts, Stefaan; Hidalgo, Miguel Angel; Marubashi, Katsuhide; Savani, Neel

    2012-07-01

This study aims to provide a reference for different magnetic field models and reconstruction methods. In order to understand the dissimilarities of those models and codes, we analyze 59 events from the CDAW list using four different magnetic field models and reconstruction techniques: force-free reconstruction (Lepping et al. (1990); Lynch et al. (2003)); magnetostatic reconstruction, referred to as Grad-Shafranov (Hu & Sonnerup (2001); Möstl et al. (2009)); cylinder reconstruction (Marubashi & Lepping (2007)); and elliptical, non-force-free reconstruction (Hidalgo et al. (2002)). The resulting parameters of the reconstructions for the 59 events are compared statistically, as well as in more detail for selected cases. The differences between the reconstruction codes are discussed, and suggestions are provided as to how to enhance them. Finally, we examine two unique cases closely to provide a comprehensive picture of the different aspects of how the fitting codes work.

  15. Spectrum reconstruction based on the constrained optimal linear inverse methods.

    PubMed

    Ren, Wenyi; Zhang, Chunmin; Mu, Tingkui; Dai, Haishan

    2012-07-01

    The dispersion effect of birefringent material results in spectrally varying Nyquist frequency for the Fourier transform spectrometer based on birefringent prism. Correct spectral information cannot be retrieved from the observed interferogram if the dispersion effect is not appropriately compensated. Some methods, such as nonuniform fast Fourier transforms and compensation method, were proposed to reconstruct the spectrum. In this Letter, an alternative constrained spectrum reconstruction method is suggested for the stationary polarization interference imaging spectrometer (SPIIS) based on the Savart polariscope. In the theoretical model of the interferogram, the noise and the total measurement error are included, and the spectrum reconstruction is performed by using the constrained optimal linear inverse methods. From numerical simulation, it is found that the proposed method is much more effective and robust than the nonconstrained spectrum reconstruction method proposed by Jian, and provides a useful spectrum reconstruction approach for the SPIIS. PMID:22743461
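The regularized-inverse idea behind such spectrum reconstruction can be sketched generically: recover a spectrum from blurred, noisy measurements d = A s + e by Tikhonov-regularized least squares. The circulant blur operator, noise level and regularization weight below are made-up stand-ins, not the SPIIS forward model or the Letter's exact constrained estimator.

```python
import numpy as np

rng = np.random.default_rng(4)

n = 40
x = np.arange(n)
col = np.exp(-0.5 * (np.minimum(x, n - x) / 2.0) ** 2)
col /= col.sum()
A = np.array([np.roll(col, i) for i in range(n)]).T   # circulant Gaussian blur

s_true = np.exp(-0.5 * ((x - 20) / 2.0) ** 2)         # true spectral line
d = A @ s_true + 0.01 * rng.normal(size=n)            # noisy measurements

def tikhonov(A, d, lam):
    """Closed-form minimizer of ||A s - d||^2 + lam * ||s||^2."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ d)

s_reg = tikhonov(A, d, 1e-3)
s_naive = np.linalg.solve(A, d)       # unregularized inverse amplifies noise

err_reg = np.linalg.norm(s_reg - s_true)
err_naive = np.linalg.norm(s_naive - s_true)
```

The comparison makes the Letter's point in miniature: the ill-posed direct inverse is dominated by amplified noise, while the constrained/regularized estimate remains stable.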

  16. Gene networks and liar paradoxes

    PubMed Central

    Isalan, Mark

    2009-01-01

    Network motifs are small patterns of connections, found over-represented in gene regulatory networks. An example is the negative feedback loop (e.g. factor A represses itself). This opposes its own state so that when ‘on’ it tends towards ‘off’ – and vice versa. Here, we argue that such self-opposition, if considered dimensionlessly, is analogous to the liar paradox: ‘This statement is false’. When ‘true’ it implies ‘false’ – and vice versa. Such logical constructs have provided philosophical consternation for over 2000 years. Extending the analogy, other network topologies give strikingly varying outputs over different dimensions. For example, the motif ‘A activates B and A. B inhibits A’ can give switches or oscillators with time only, or can lead to Turing-type patterns with both space and time (spots, stripes or waves). It is argued here that the dimensionless form reduces to a variant of ‘The following statement is true. The preceding statement is false’. Thus, merely having a static topological description of a gene network can lead to a liar paradox. Network diagrams are only snapshots of dynamic biological processes and apparent paradoxes can reveal important biological mechanisms that are far from paradoxical when considered explicitly in time and space. PMID:19722183
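The article's central point, that the "paradox" dissolves once time is made explicit, can be shown with a toy synchronous Boolean update of the self-repression motif:

```python
# Self-repression viewed dimensionlessly is contradictory ('on' implies
# 'off'); iterated as a dynamical update it is simply an oscillator.
def self_repressor(state):
    return not state          # next state = NOT(current state)

trajectory = [True]
for _ in range(6):
    trajectory.append(self_repressor(trajectory[-1]))
# The static rule has no consistent truth value, but the trajectory in
# time is perfectly well defined: it alternates.
```

This is the dimensionful resolution the text describes: the snapshot topology looks paradoxical, the process in time does not.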

  17. Gene networks and liar paradoxes.

    PubMed

    Isalan, Mark

    2009-10-01

    Network motifs are small patterns of connections, found over-represented in gene regulatory networks. An example is the negative feedback loop (e.g. factor A represses itself). This opposes its own state so that when 'on' it tends towards 'off' - and vice versa. Here, we argue that such self-opposition, if considered dimensionlessly, is analogous to the liar paradox: 'This statement is false'. When 'true' it implies 'false' - and vice versa. Such logical constructs have provided philosophical consternation for over 2000 years. Extending the analogy, other network topologies give strikingly varying outputs over different dimensions. For example, the motif 'A activates B and A. B inhibits A' can give switches or oscillators with time only, or can lead to Turing-type patterns with both space and time (spots, stripes or waves). It is argued here that the dimensionless form reduces to a variant of 'The following statement is true. The preceding statement is false'. Thus, merely having a static topological description of a gene network can lead to a liar paradox. Network diagrams are only snapshots of dynamic biological processes and apparent paradoxes can reveal important biological mechanisms that are far from paradoxical when considered explicitly in time and space. PMID:19722183

  18. High resolution x-ray CMT: Reconstruction methods

    SciTech Connect

    Brown, J.K.

    1997-02-01

This paper qualitatively discusses the primary characteristics of methods for reconstructing tomographic images from a set of projections. These reconstruction methods can be categorized as either "analytic" or "iterative" techniques. Analytic algorithms are derived from the formal inversion of equations describing the imaging process, while iterative algorithms incorporate a model of the imaging process and provide a mechanism to iteratively improve image estimates. Analytic reconstruction algorithms are typically computationally more efficient than iterative methods; however, analytic algorithms are available for a relatively limited set of imaging geometries and situations. Thus, the framework of iterative reconstruction methods is better suited for high accuracy, tomographic reconstruction codes.
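The iterative family can be illustrated with a minimal damped additive (Landweber/SIRT-style) scheme on a toy system; the geometry (five ray sums through a flattened 2x2 image) and step size are invented for illustration.

```python
import numpy as np

# Two row sums, two column sums and one diagonal sum through a 2x2 image
# flattened to a length-4 vector.
A = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 0, 1]], dtype=float)
x_true = np.array([1.0, 2.0, 3.0, 4.0])
b = A @ x_true                      # "measured" projections

# Iterative improvement: repeatedly backproject the residual, damped.
x = np.zeros(4)
for _ in range(500):
    x += 0.1 * A.T @ (b - A @ x)
```

Each pass embodies the "model plus iterative improvement" structure described above: forward-project the estimate, compare with the data, and feed the discrepancy back into the image.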

  19. Multigrid hierarchical simulated annealing method for reconstructing heterogeneous media.

    PubMed

    Pant, Lalit M; Mitra, Sushanta K; Secanell, Marc

    2015-12-01

A reconstruction methodology based on different-phase-neighbor (DPN) pixel swapping and multigrid hierarchical annealing is presented. The method performs reconstructions by starting at a coarse image and successively refining it. The DPN information is used at each refinement stage to freeze interior pixels of preformed structures. This preserves the large-scale structures in refined images and also reduces the number of pixels to be swapped, thereby decreasing the computational time necessary to reach a solution. Compared to conventional single-grid simulated annealing, this method was found to reduce the required computation time by a factor of around 70-90, with the potential of even higher speedups for larger reconstructions. The method is able to perform medium-sized (up to 300^3 voxels) three-dimensional reconstructions with multiple correlation functions in 36-47 h. PMID:26764849
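The single-grid baseline that the multigrid method accelerates can be sketched in 2D: swap unlike-phase pixel pairs under a cooling schedule to match a target two-point correlation. Image size, temperature and schedule below are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(5)

N = 24
target = np.zeros((N, N), dtype=int)
target[:, : N // 2] = 1                        # reference two-phase medium

def s2(img, max_r=8):
    """Periodic two-point probability S2(r) along rows, for phase 1."""
    return np.array([(img * np.roll(img, -r, axis=1)).mean()
                     for r in range(max_r)])

S_ref = s2(target)
img = target.flatten().copy()
rng.shuffle(img)                               # random start, same volume fraction
img = img.reshape(N, N)

def energy(im):
    return float(np.sum((s2(im) - S_ref) ** 2))

E_start = E = energy(img)
T = 1e-3
for _ in range(4000):
    a = tuple(rng.integers(N, size=2))
    b = tuple(rng.integers(N, size=2))
    if img[a] == img[b]:
        continue                               # only unlike-phase swaps count
    img[a], img[b] = img[b], img[a]            # trial swap
    E_new = energy(img)
    if E_new < E or rng.uniform() < np.exp((E - E_new) / T):
        E = E_new                              # accept (Metropolis rule)
    else:
        img[a], img[b] = img[b], img[a]        # reject: undo the swap
    T *= 0.999                                 # cooling schedule

E_final = energy(img)
```

Every swap preserves the volume fraction exactly; the multigrid/DPN contribution of the paper is to shrink the pool of swappable pixels at each refinement level so far fewer of these energy evaluations are needed.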

  20. Reconstructing ENSO - Methods, Proxy Data and Teleconnections

    NASA Astrophysics Data System (ADS)

    Wilson, R.; Cook, E.; D'Arrigo, R.; Riedwyl, N.; Evans, M.; Tudhope, A.; Allan, R.

    2009-04-01

The El Niño/Southern Oscillation (ENSO) is globally important and influences climate at interannual and decadal time-scales, with resultant links to extreme weather events and associated socio-economic problems. An understanding of the ENSO system is therefore crucial for a better understanding of how ENSO will 'react' under current global warming. Palaeoclimate reconstructions of ENSO variability allow the record to be extended beyond the relatively short instrumental period. However, due to the paucity of relevant annually resolved proxy archives (e.g. corals) in the central and eastern Pacific, reconstructions must rely on proxy data located in regions where the local climate is teleconnected with the tropical Pacific. In this study we compare three newly developed independent NINO3.4 SST reconstructions using data from (1) the central Pacific (corals), (2) the TexMex region of the United States (tree-rings), and (3) other regions in the tropics (corals and an ice-core) which are teleconnected with central Pacific SSTs in the 20th century. Although these three reconstructions are strongly calibrated and well verified, inter-proxy comparison shows a significant weakening in inter-proxy coherence in the 19th century. This breakdown in common signal could be related to insufficient data, dating errors in some of the proxy records or a breakdown in ENSO's influence on other regions. However, spectral analysis indicates that each reconstruction portrays ENSO-like spectral properties. Superposed epoch analysis also shows that each reconstruction shows a generally consistent 'El Niño-like' response to major volcanic events in the following year, while during years T+4 to T+7, 'La Niña-like' conditions prevail. These results suggest that each of the series expresses ENSO-like 'behaviour'; this 'behaviour', however, does not appear to be spatially or temporally consistent. This result may reflect published observations that there appear to be
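The superposed epoch analysis used above has simple bookkeeping: extract a window of the index around each key (e.g. volcanic) event year and average across events. The series, event years and imposed lag-1 response below are entirely synthetic, purely to show the mechanics.

```python
import numpy as np

rng = np.random.default_rng(7)

years = np.arange(1600, 1900)
index = rng.normal(size=years.size)            # synthetic annual index
events = [1641, 1695, 1755, 1815, 1835]        # hypothetical event years
for ev in events:
    index[np.searchsorted(years, ev) + 1] -= 3.0   # imposed lag +1 response

def superposed_epoch(series, years, event_years, window=(-3, 7)):
    """Composite the series over a lag window centered on each event."""
    lags = np.arange(window[0], window[1] + 1)
    rows = [series[np.searchsorted(years, ev) + lags] for ev in event_years]
    return lags, np.mean(rows, axis=0)

lags, composite = superposed_epoch(index, years, events)
# Averaging across events suppresses the background noise, so the
# composite minimum falls at the lag where the response was imposed.
```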

  1. An analytic reconstruction method for PET based on cubic splines

    NASA Astrophysics Data System (ADS)

    Kastis, George A.; Kyriakopoulou, Dimitra; Fokas, Athanasios S.

    2014-03-01

PET imaging is an important nuclear medicine modality that measures the in vivo distribution of imaging agents labeled with positron-emitting radionuclides. Image reconstruction is an essential component in tomographic medical imaging. In this study, we present the mathematical formulation and an improved numerical implementation of an analytic 2D reconstruction method called SRT (Spline Reconstruction Technique). This technique is based on the numerical evaluation of the Hilbert transform of the sinogram via an approximation in terms of 'custom made' cubic splines. It also imposes sinogram thresholding, which restricts reconstruction only to object pixels. Furthermore, by utilizing certain symmetries it achieves a reconstruction time similar to that of FBP. We have implemented SRT in the software library called STIR and have evaluated this method using simulated PET data. We present reconstructed images from several phantoms. Sinograms have been generated at various Poisson noise levels and 20 realizations of noise have been created at each level. In addition to visual comparisons of the reconstructed images, the contrast has been determined as a function of noise level. Further analysis includes the creation of line profiles when necessary, to determine resolution. Numerical simulations suggest that the SRT algorithm produces fast and accurate reconstructions at realistic noise levels. The contrast is over 95% in all phantoms examined and is independent of noise level.
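The building block of such a technique, cubic-spline interpolation of a 1D sinogram profile, can be shown standalone. The natural boundary conditions and test profile below are generic choices; this is not the paper's 'custom made' spline construction.

```python
import numpy as np

def natural_cubic_spline(xs, ys):
    """Return an evaluator for the natural cubic spline through (xs, ys)."""
    n = len(xs) - 1
    h = np.diff(xs)
    # Solve the tridiagonal system for the second derivatives M,
    # with natural boundary conditions M[0] = M[n] = 0.
    A = np.zeros((n + 1, n + 1))
    rhs = np.zeros(n + 1)
    A[0, 0] = A[n, n] = 1.0
    for i in range(1, n):
        A[i, i - 1], A[i, i], A[i, i + 1] = h[i - 1], 2 * (h[i - 1] + h[i]), h[i]
        rhs[i] = 6 * ((ys[i + 1] - ys[i]) / h[i] - (ys[i] - ys[i - 1]) / h[i - 1])
    M = np.linalg.solve(A, rhs)

    def ev(x):
        i = np.clip(np.searchsorted(xs, x) - 1, 0, n - 1)
        t = x - xs[i]
        b = (ys[i + 1] - ys[i]) / h[i] - h[i] * (2 * M[i] + M[i + 1]) / 6
        return ys[i] + b * t + M[i] / 2 * t ** 2 + (M[i + 1] - M[i]) / (6 * h[i]) * t ** 3
    return ev

# Interpolate a smooth sinogram-like profile and check midpoint accuracy.
xs = np.linspace(0, np.pi, 16)
ys = np.sin(xs)
spline = natural_cubic_spline(xs, ys)
mid = (xs[:-1] + xs[1:]) / 2
max_err = float(np.max(np.abs(spline(mid) - np.sin(mid))))
```

The spline reproduces the samples exactly and is smooth between them, which is what makes the subsequent analytic Hilbert-transform evaluation tractable in SRT.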

  2. A comparison of ancestral state reconstruction methods for quantitative characters.

    PubMed

    Royer-Carenzi, Manuela; Didier, Gilles

    2016-09-01

    Choosing an ancestral state reconstruction method among the alternatives available for quantitative characters may be puzzling. We present here a comparison of seven of them, namely the maximum likelihood, restricted maximum likelihood, generalized least squares under Brownian, Brownian-with-trend and Ornstein-Uhlenbeck models, phylogenetic independent contrasts and squared parsimony methods. A review of the relations between these methods shows that the maximum likelihood, the restricted maximum likelihood and the generalized least squares under Brownian model infer the same ancestral states and can only be distinguished by the distributions accounting for the reconstruction uncertainty which they provide. The respective accuracy of the methods is assessed over character evolution simulated under a Brownian motion with (and without) directional or stabilizing selection. We give the general form of ancestral state distributions conditioned on leaf states under the simulation models. Ancestral distributions are used first, to give a theoretical lower bound of the expected reconstruction error, and second, to develop an original evaluation scheme which is more efficient than comparing the reconstructed and the simulated states. Our simulations show that: (i) the distributions of the reconstruction uncertainty provided by the methods generally make sense (some more than others); (ii) it is essential to detect the presence of an evolutionary trend and to choose a reconstruction method accordingly; (iii) all the methods show good performances on characters under stabilizing selection; (iv) without trend or stabilizing selection, the maximum likelihood method is generally the most accurate. PMID:27234644
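A minimal special case of the ML/GLS family compared above can be written down directly: on a star phylogeny under Brownian motion, leaf values are independent with variance proportional to branch length, so the ancestral (root) estimate is the inverse-variance weighted mean. This toy ignores tree structure beyond the star case.

```python
import numpy as np

def bm_root_state(leaf_values, branch_lengths):
    """ML / GLS root state on a star phylogeny under Brownian motion:
    the inverse-branch-length (inverse-variance) weighted mean."""
    w = 1.0 / np.asarray(branch_lengths, dtype=float)
    x = np.asarray(leaf_values, dtype=float)
    return float(np.sum(w * x) / np.sum(w))

# A short branch (low variance) pulls the estimate toward its leaf:
est = bm_root_state([0.0, 10.0], [1.0, 4.0])    # weights 1 and 0.25
# Equal branch lengths reduce to the plain mean:
est_equal = bm_root_state([1.0, 2.0, 3.0], [1.0, 1.0, 1.0])
```

On general trees the same inverse-variance logic is applied recursively (Felsenstein-style pruning), and, as the abstract notes, the ML, REML and GLS-Brownian variants agree on the point estimate while differing in their uncertainty distributions.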

  3. Preconditioning methods for improved convergence rates in iterative reconstructions

    SciTech Connect

Clinthorne, N.H.; Chiao, Pingchun; Rogers, W.L.; Pan, T.S.; Stamos, J.A.

    1993-03-01

Because of the characteristics of the tomographic inversion problem, iterative reconstruction techniques often suffer from poor convergence rates, especially at high spatial frequencies. By using preconditioning methods, the convergence properties of most iterative methods can be greatly enhanced without changing their ultimate solution. To increase reconstruction speed, the authors have applied spatially-invariant preconditioning filters that can be designed using the tomographic system response and implemented using 2-D frequency-domain filtering techniques. In a sample application, the authors performed reconstructions from noiseless, simulated projection data, using preconditioned and conventional steepest-descent algorithms. The preconditioned methods demonstrated residuals that were up to a factor of 30 lower than the unassisted algorithms at the same iteration. Applications of these methods to regularized reconstructions from projection data containing Poisson noise showed similar, although not as dramatic, behavior.
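The effect of preconditioning on steepest descent can be demonstrated on a badly conditioned quadratic model problem; the diagonal (Jacobi) preconditioner below is a generic stand-in for the paper's frequency-domain filters, and the matrix is invented.

```python
import numpy as np

# Quadratic model 0.5 x'Ax - b'x with condition number 100, standing in
# for ill-conditioned tomographic normal equations.
A = np.diag([100.0, 1.0])
b = np.array([1.0, 1.0])
x_star = np.linalg.solve(A, b)

def steepest_descent(M_inv, iters=50):
    x = np.zeros(2)
    for _ in range(iters):
        r = b - A @ x               # residual (negative gradient)
        d = M_inv @ r               # (preconditioned) search direction
        dAd = d @ A @ d
        if dAd < 1e-30:             # already converged
            break
        x = x + ((r @ d) / dAd) * d # exact line search
    return float(np.linalg.norm(x - x_star))

err_plain = steepest_descent(np.eye(2))                          # no preconditioner
err_jacobi = steepest_descent(np.linalg.inv(np.diag(np.diag(A))))  # Jacobi
```

Both variants head to the same solution; the preconditioner only reshapes the search directions, which is exactly the "enhanced convergence without changing the ultimate solution" property stated in the abstract.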

  4. Reconstruction-classification method for quantitative photoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Malone, Emma; Powell, Samuel; Cox, Ben T.; Arridge, Simon

    2015-12-01

    We propose a combined reconstruction-classification method for simultaneously recovering absorption and scattering in turbid media from images of absorbed optical energy. This method exploits knowledge that optical parameters are determined by a limited number of classes to iteratively improve their estimate. Numerical experiments show that the proposed approach allows for accurate recovery of absorption and scattering in two and three dimensions, and delivers superior image quality with respect to traditional reconstruction-only approaches.
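The alternation at the heart of such reconstruction-classification schemes can be sketched in 1D: interleave a regularized linear reconstruction with a two-class (k-means-style) classification that pulls the estimate toward class means. The blur operator, class values and weights below are illustrative, not the photoacoustic forward model.

```python
import numpy as np

rng = np.random.default_rng(8)

# Piecewise-constant unknown with two "tissue classes", seen through
# a circular Gaussian blur plus noise.
n = 60
truth = np.where(np.arange(n) < n // 2, 1.0, 3.0)
kernel = np.exp(-np.linspace(-2, 2, 9) ** 2)
kernel /= kernel.sum()
A = np.array([np.roll(np.pad(kernel, (0, n - 9)), i - 4) for i in range(n)])
data = A @ truth + 0.05 * rng.normal(size=n)

# Initial reconstruction-only estimate (Tikhonov-regularized).
x = np.linalg.solve(A.T @ A + 1e-2 * np.eye(n), A.T @ data)
for _ in range(10):
    # Classification step: 2-means on the current estimate.
    c0, c1 = x.min(), x.max()
    for _ in range(20):
        labels = np.abs(x - c0) > np.abs(x - c1)
        c0, c1 = x[~labels].mean(), x[labels].mean()
    prior = np.where(labels, c1, c0)
    # Reconstruction step: data fit plus a pull toward the classified image.
    x = np.linalg.solve(A.T @ A + 0.1 * np.eye(n), A.T @ data + 0.1 * prior)

err = float(np.linalg.norm(x - truth) / np.linalg.norm(truth))
```

Exploiting the knowledge that the parameters take a limited number of class values is what sharpens the estimate beyond the reconstruction-only baseline, mirroring the abstract's comparison.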

  5. Reconstruction methods for phase-contrast tomography

    SciTech Connect

    Raven, C.

    1997-02-01

Phase contrast imaging with coherent x-rays can be divided into outline imaging and holography, depending on the wavelength {lambda}, the object size d and the object-to-detector distance r. When r << d{sup 2}/{lambda}, phase contrast occurs only in regions where the refractive index changes rapidly, i.e. at interfaces and edges in the sample. With increasing object-to-detector distance we enter the regime of holographic imaging. The image contrast outside the shadow region of the object is due to interference of the direct, undiffracted beam and a beam diffracted by the object, or, in terms of holography, the interference of a reference wave with the object wave. Both outline imaging and holography offer the possibility of obtaining three-dimensional information about the sample in conjunction with a tomographic technique, but the data treatment and the kind of information one can obtain from the reconstruction differ.

  6. New method for 3D reconstruction in digital tomosynthesis

    NASA Astrophysics Data System (ADS)

    Claus, Bernhard E. H.; Eberhard, Jeffrey W.

    2002-05-01

Digital tomosynthesis mammography is an advanced x-ray application that can provide detailed 3D information about the imaged breast. We introduce a novel reconstruction method based on simple backprojection, which yields high-contrast reconstructions with reduced artifacts at relatively low computational complexity. The first step in the proposed method is a simple backprojection in which an order-statistics-based operator (e.g., minimum) combines the backprojected images into a reconstructed slice. Accordingly, a given pixel value generally does not contribute to all slices. The percentage of slices to which a given pixel value does not contribute, as well as the associated reconstructed values, are collected. Using a form of re-projection consistency constraint, the projection images are then updated, and the order-statistics backprojection step is repeated using these enhanced projection images. In our digital mammography application, this new approach enhances the contrast of structures in the reconstruction and, in particular, recovers the loss in signal level due to reduced tissue thickness near the skinline, while keeping artifacts to a minimum. We present results obtained with the algorithm for phantom images.
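A minimal sketch of the order-statistics backprojection step (the re-projection consistency update is omitted). The shear-projection geometry and the two point objects are invented for illustration; note how the minimum operator suppresses the out-of-plane ghost that plain averaging leaves behind.

```python
import numpy as np

def project(vol, shifts):
    """Shear projections: p_t(x) = sum_z vol[z, x + t*z] (circular shifts for simplicity)."""
    nz, nx = vol.shape
    projs = []
    for t in shifts:
        p = np.zeros(nx)
        for z in range(nz):
            p += np.roll(vol[z], -t * z)
        projs.append(p)
    return np.array(projs)

def reconstruct_slice(projs, shifts, z, combine):
    """Backproject each projection onto slice z, then combine with the given operator."""
    aligned = np.array([np.roll(p, t * z) for p, t in zip(projs, shifts)])
    return combine(aligned, axis=0)

nz, nx = 8, 64
vol = np.zeros((nz, nx))
vol[2, 20] = 1.0          # bright point in slice 2
vol[5, 40] = 1.0          # bright point in slice 5 (out of plane for slice 2)
shifts = [-2, -1, 0, 1, 2]
projs = project(vol, shifts)

slice2_mean = reconstruct_slice(projs, shifts, 2, np.mean)   # conventional shift-and-add
slice2_min = reconstruct_slice(projs, shifts, 2, np.min)     # order-statistics operator
```

The mean reconstruction of slice 2 shows a ghost of the slice-5 point; the minimum-combined reconstruction keeps the in-plane point and removes the ghost entirely.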

  7. Digital Signal Processing and Control for the Study of Gene Networks.

    PubMed

    Shin, Yong-Jun

    2016-01-01

Thanks to the digital revolution, digital signal processing and control is now widely used in many areas of science and engineering. It provides practical and powerful tools to model, simulate, analyze, design, measure, and control complex and dynamic systems such as robots and aircraft. Gene networks are also complex dynamic systems which can be studied via digital signal processing and control. Unlike conventional computational methods, this approach is capable of not only modeling but also controlling gene networks, since the experimental environment today is largely digital. The overall aim of this article is to introduce digital signal processing and control as a useful tool for the study of gene networks. PMID:27102828

  9. Reconstruction of radiating sound fields using minimum energy method.

    PubMed

    Bader, Rolf

    2010-01-01

A method is presented for reconstructing the pressure field at the surface of a radiating body or source from the recordings of a microphone array. The radiation is assumed to consist of as many spherical radiators as there are microphone positions in the array. These monopoles are weighted using a parameter alpha, which broadens or narrows the overall radiation directivity and thus serves as an effective and highly intuitive descriptor of the radiation characteristics. A radiation matrix is built from these weighted monopole radiators, and for each assumed value of alpha a linear equation solver reconstructs the pressure field at the body's surface. Among these many candidate reconstructions, the correct one minimizes the reconstruction energy. The method is tested by localizing the radiation points of a Balinese suling flute, reconstructing complex radiation from a duff frame drum, and determining the radiation directivity for the first seven modes of an Uzbek tambourine. Stability with respect to measurement noise is demonstrated for the plain method, and an additional, highly effective algorithm is added for noise levels up to 0 dB. The stability of alpha in terms of minimal reconstruction energy is shown over the whole range of possible values of alpha. Additionally, the treatment of unwanted room reflections is discussed, still leading to satisfactory results in many cases. PMID:20058977
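The selection principle, solving for many candidate directivities and keeping the minimum-energy exact reconstruction, can be sketched as follows. The directivity weighting exp(-alpha*theta^2) and the 2-D geometry are hypothetical stand-ins for the paper's monopole model.

```python
import numpy as np

def radiation_matrix(src, mic, alpha):
    """Monopole transfer matrix with a hypothetical alpha-controlled directivity
    weighting exp(-alpha * theta^2) / r (theta: angle from the surface normal +y)."""
    d = mic[:, None, :] - src[None, :, :]
    r = np.linalg.norm(d, axis=2)
    theta = np.arccos(np.clip(d[..., 1] / r, -1.0, 1.0))
    return np.exp(-alpha * theta ** 2) / r

# hypothetical geometry: 4 surface sources on a line, 12 microphones on a semicircle
src = np.stack([np.linspace(-0.3, 0.3, 4), np.zeros(4)], axis=1)
ang = np.linspace(0.1, np.pi - 0.1, 12)
mic = 2.0 * np.stack([np.cos(ang), np.sin(ang)], axis=1)

alpha_true, q_true = 1.0, np.array([1.0, 0.5, 0.8, 0.2])
p = radiation_matrix(src, mic, alpha_true) @ q_true   # "measured" mic pressures

alphas = [0.2, 1.0, 5.0]
energies, residuals = [], []
for a in alphas:
    T = radiation_matrix(src, mic, a)
    q, *_ = np.linalg.lstsq(T, p, rcond=None)
    energies.append(float(q @ q))
    residuals.append(float(np.linalg.norm(T @ q - p)))

# among (near-)exact reconstructions, keep the one with minimal energy
exact = [i for i, r in enumerate(residuals) if r < 1e-8 * np.linalg.norm(p)]
best_alpha = alphas[min(exact, key=lambda i: energies[i])]
```

In this noiseless toy only the correct directivity parameter reproduces the microphone data exactly, so the minimum-energy exact solution identifies alpha.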

  10. Yeast Ancestral Genome Reconstructions: The Possibilities of Computational Methods

    NASA Astrophysics Data System (ADS)

    Tannier, Eric

In 2006, a debate arose over the ability of bioinformatics methods to reconstruct mammalian ancestral genomes. Three years later, Gordon et al. (PLoS Genetics, 5(5), 2009) chose not to use automatic methods to build the genome of a 100-million-year-old Saccharomyces cerevisiae ancestor. Their manually constructed ancestor provides a reference genome for testing whether automatic methods are indeed unable to produce confident reconstructions. Adapting several methodological frameworks to the same yeast gene order data, I discuss the possibilities, differences and similarities of the available algorithms for ancestral genome reconstruction. The methods can be classified into two types, local and global, and studying the properties of both helps to clarify what we can expect from their usage. Both types propose contiguous ancestral regions that come very close (> 95% identity) to the manually predicted ancestral yeast chromosomes, with good coverage of the extant genomes.

  11. A Comparison of Methods for Ocean Reconstruction from Sparse Observations

    NASA Astrophysics Data System (ADS)

    Streletz, G. J.; Kronenberger, M.; Weber, C.; Gebbie, G.; Hagen, H.; Garth, C.; Hamann, B.; Kreylos, O.; Kellogg, L. H.; Spero, H. J.

    2014-12-01

We present a comparison of two methods for developing reconstructions of oceanic scalar property fields from sparse scattered observations. Observed data from deep-sea core samples provide valuable information about the properties of past oceans. However, because the sample sites are distributed on the ocean floor in a sparse and irregular manner, developing a global ocean reconstruction is a difficult task. The two methods are a flow-based approximation method and a moving-least-squares-based approximation method. The flow-based method augments the process of interpolating or approximating scattered scalar data by incorporating known flow information. The scheme exploits this additional knowledge to define a non-Euclidean distance measure between points in the spatial domain, which is used to create a reconstruction of the desired scalar field; the result thus incorporates information from both the scattered samples and the known flow field. The second method does not assume a known flow field but works solely with the observed scattered samples. It is based on a modification of the moving least squares approach, a weighted least-squares approximation method that blends local approximations into a global result. The modifications target the selection of data used for these local approximations and the construction of the weighting function. The definition of distance used in the weighting function is crucial for this method, so we use a machine learning approach to determine a set of near-optimal parameters for the weighting. We have implemented both reconstruction methods and tested them on several sparse oceanographic datasets. Based upon these studies, we discuss the advantages and disadvantages of each method and suggest possible ways to combine aspects of both in order to achieve an overall high-quality reconstruction.
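The moving-least-squares half of the comparison can be sketched with a pluggable distance function, which is exactly where flow-based information (or a machine-learned weighting) would enter. The anisotropic distance below is a hypothetical stand-in for a flow-informed metric, not the paper's actual construction.

```python
import numpy as np

def mls_eval(pts, vals, q, dist, h=0.5):
    """Moving least squares: weighted local linear fit around query point q.
    `dist` is pluggable, e.g. a flow-informed, non-Euclidean distance."""
    d = np.array([dist(p, q) for p in pts])
    w = np.exp(-(d / h) ** 2)                     # weights fall off with distance
    B = np.column_stack([np.ones(len(pts)), pts - q])   # basis [1, x - qx, y - qy]
    W = np.diag(w)
    coef = np.linalg.solve(B.T @ W @ B, B.T @ W @ vals)
    return coef[0]                                # value of the local fit at q

rng = np.random.default_rng(0)
pts = rng.uniform(0, 1, size=(200, 2))            # scattered "core sample" locations
vals = 3.0 + 2.0 * pts[:, 0] - 1.0 * pts[:, 1]    # a linear scalar "property field"

euclid = lambda p, q: np.linalg.norm(p - q)
est = mls_eval(pts, vals, np.array([0.4, 0.6]), euclid)

# hypothetical anisotropic metric: separations across the "flow" cost more
flow_dist = lambda p, q: np.linalg.norm((p - q) * np.array([1.0, 4.0]))
est2 = mls_eval(pts, vals, np.array([0.4, 0.6]), flow_dist)
```

Because the basis contains linear terms, MLS reproduces a linear field exactly under any positive weighting, which makes a convenient sanity check for the distance plumbing.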

  12. Cheng's method for reconstruction of a functionally sensitive penis.

    PubMed

    Cheng, K X; Zhang, R H; Zhou, S; Jiang, K C; Eid, A E; Huang, W Y

    1997-01-01

    This article introduces a new surgical method for one-stage reconstruction of the penis. It is applied to the reconstruction of the microphallus as well as to traumatic cases with the residual stump of the amputated penis not less than 3 cm long. By transferring the original glans or the residual penile stump to the anterior portion of the newly reconstructed penile body with microsurgical techniques, we have thus rebuilt a penis with more satisfactory results in both appearance and erotic sensation. Seven patients are reported here who were operated on by this method and who have been followed up for 18 months to 10 years. The good results achieved and the method's advantages over other methods are demonstrated and discussed. PMID:8982190

  13. Digital holographic method for tomography-image reconstruction

    NASA Astrophysics Data System (ADS)

    Liu, Cheng; Yan, Changchun; Gao, Shumei

    2004-02-01

A digital holographic method for three-dimensional reconstruction of tomography images is demonstrated theoretically and experimentally. In the proposed method, a numerical hologram is first computed by calculating the total diffraction field of all transect images of a detected organ. The numerical hologram is then transferred to a conventional recording medium to generate a physical hologram. Finally, all the transect images are reconstructed in their original positions by illuminating the physical hologram with a laser, thereby forming a three-dimensional transparent image of the detected organ. Because it has a true third dimension, the image reconstructed by this method is much more vivid and accurate than those of other methods, and it may have great prospects for application in medical engineering.

  14. Path method for reconstructing images in fluorescence optical tomography

    SciTech Connect

    Kravtsenyuk, Olga V; Lyubimov, Vladimir V; Kalintseva, Natalie A

    2006-11-30

    A reconstruction method elaborated for the optical diffusion tomography of the internal structure of objects containing absorbing and scattering inhomogeneities is considered. The method is developed for studying objects with fluorescing inhomogeneities and can be used for imaging of distributions of artificial fluorophores whose aggregations indicate the presence of various diseases or pathological deviations. (special issue devoted to multiple radiation scattering in random media)

  15. Sparse Reconstruction for Bioluminescence Tomography Based on the Semigreedy Method

    PubMed Central

    Guo, Wei; Jia, Kebin; Zhang, Qian; Liu, Xueyan; Feng, Jinchao; Qin, Chenghu; Ma, Xibo; Yang, Xin; Tian, Jie

    2012-01-01

Bioluminescence tomography (BLT) is a molecular imaging modality which can three-dimensionally resolve molecular processes in small animals in vivo. The ill-posed nature of the BLT problem means that its reconstruction admits nonunique solutions and is sensitive to noise. In this paper, we propose a sparse BLT reconstruction algorithm based on a semigreedy method. To reduce the ill-posedness and computational cost, the optimal permissible source region is chosen automatically using an iterative search tree. The proposed method obtains fast and stable source reconstruction over the whole body and imposes constraints without using a regularization penalty term. Numerical simulations on a mouse atlas and in vivo mouse experiments were conducted to validate the effectiveness and potential of the method. PMID:22927887
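The paper's semigreedy method with a permissible source region is specific to BLT, but the underlying greedy-pursuit idea can be sketched with plain orthogonal matching pursuit on a spike-plus-DCT dictionary. All names and sizes here are illustrative, not the authors' formulation.

```python
import numpy as np

def omp(A, y, k):
    """Greedy pursuit (orthogonal matching pursuit): pick the atom most correlated
    with the residual, re-fit by least squares on the chosen support, repeat."""
    r, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))
        if j not in support:
            support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

# overcomplete dictionary: spikes (identity) + orthonormal DCT-II atoms
n = 64
i = np.arange(n)
C = np.cos(np.pi * (2 * i[None, :] + 1) * i[:, None] / (2 * n)) * np.sqrt(2.0 / n)
C[0] *= np.sqrt(0.5)
A = np.hstack([np.eye(n), C.T])

x_true = np.zeros(2 * n)
x_true[5], x_true[n + 3], x_true[n + 17] = 2.0, -1.0, 0.5   # a 3-sparse "source"
y = A @ x_true
x_hat = omp(A, y, 3)
```

For this low-coherence dictionary and a noiseless 3-sparse signal, exact support recovery is guaranteed, so the least-squares refit returns the source amplitudes exactly.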

  16. Matrix-based image reconstruction methods for tomography

    SciTech Connect

    Llacer, J.; Meng, J.D.

    1984-10-01

Matrix methods of image reconstruction have not, in general, been used because of the large size of practical matrices, their ill-conditioning upon inversion, and the success of Fourier-based techniques. An exception is the work done at the Lawrence Berkeley Laboratory on imaging with accelerated radioactive ions. An extension of that work to more general imaging problems shows that, with a correct formulation of the problem, positron tomography with ring geometries results in well-behaved matrices which can be used for image reconstruction with no distortion of the point response in the field of view and with flexibility in the design of the instrument. Maximum likelihood estimator (MLE) methods of reconstruction, which use system matrices tailored to specific instruments and do not need matrix inversion, are shown to produce good preliminary images. A parallel processing computer structure based on multiple inexpensive microprocessors is proposed as a system to implement the matrix-MLE methods. 14 references, 7 figures.
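The inversion-free MLE reconstruction referred to above is in the spirit of the classic MLEM multiplicative update, sketched here on a made-up 3x2 system matrix (not the Lawrence Berkeley Laboratory instrument model).

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """Maximum-likelihood EM for Poisson data y ~ A @ x: multiplicative updates
    using only forward/back projections, no matrix inversion required."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                       # column sums (sensitivity image)
    for _ in range(n_iter):
        proj = A @ x                           # forward projection
        x *= (A.T @ (y / np.maximum(proj, 1e-12))) / sens
    return x

# tiny hypothetical "system matrix": 2 pixels seen by 3 detector bins
A = np.array([[0.8, 0.1],
              [0.1, 0.8],
              [0.1, 0.1]])
x_true = np.array([4.0, 2.0])
y = A @ x_true                                 # noiseless counts
x_hat = mlem(A, y)
```

With noiseless, consistent data this toy system converges to the true activities; the update preserves positivity automatically, which is one reason MLEM-style methods suit emission tomography.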

  17. A fast-convergence POCS seismic denoising and reconstruction method

    NASA Astrophysics Data System (ADS)

    Ge, Zi-Jian; Li, Jing-Ye; Pan, Shu-Lin; Chen, Xiao-Hong

    2015-06-01

The efficiency, precision, and denoising capability of reconstruction algorithms are critical to seismic data processing. Based on the Fourier-domain projection onto convex sets (POCS) algorithm, we propose an inversely proportional threshold model that defines the optimum threshold, in which the descent rate is larger than that of the exponential threshold in the large-coefficient section and slower in the small-coefficient section. Thus, the computational efficiency of POCS seismic reconstruction improves greatly without affecting the reconstruction precision of weak reflections. To improve the flexibility of the inversely proportional threshold, we obtain the optimal threshold by using an adjustable dependent variable in the denominator of the inversely proportional threshold model. For random noise attenuation while completing the missing traces in seismic data reconstruction, we present a weighted reinsertion strategy based on a data-driven model obtained from the percentage of the data-driven threshold in each iteration. We apply the proposed POCS reconstruction method to 3D synthetic and field data. The results suggest that the inversely proportional threshold model improves the computational efficiency and precision compared with traditional threshold models, and that the proposed reinsertion weight strategy increases the SNR of the reconstructed data.
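The core POCS loop, Fourier thresholding plus reinsertion of the observed traces, can be sketched on a 1-D toy signal. The tau_max * a / (a + k) schedule below is a guessed form of an inversely proportional threshold, not the paper's exact model, and the weighted reinsertion strategy is omitted.

```python
import numpy as np

def pocs_reconstruct(d_obs, mask, n_iter=100, a=1.0):
    """POCS interpolation: iterate Fourier-domain thresholding and reinsertion of
    observed samples, with a hypothetical inversely proportional threshold decay."""
    x = d_obs.copy()
    tau_max = np.abs(np.fft.fft(d_obs)).max()
    for k in range(n_iter):
        X = np.fft.fft(x)
        tau = tau_max * a / (a + k)            # threshold shrinks with iteration
        X[np.abs(X) < tau] = 0.0               # keep only strong coefficients
        x = np.fft.ifft(X).real
        x[mask] = d_obs[mask]                  # reinsert observed samples
    return x

n = 256
t = np.arange(n)
signal = np.sin(2 * np.pi * 10 * t / n) + 0.5 * np.sin(2 * np.pi * 25 * t / n)
rng = np.random.default_rng(2)
mask = rng.uniform(size=n) < 0.6               # 60% of "traces" observed
d_obs = np.where(mask, signal, 0.0)
x_rec = pocs_reconstruct(d_obs, mask)
```

Because the signal is sparse in the Fourier domain, the decaying-threshold iteration fills in the missing samples far more accurately than leaving them at zero.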

  18. MR Image Reconstruction Using Block Matching and Adaptive Kernel Methods

    PubMed Central

    Schmidt, Johannes F. M.; Santelli, Claudio; Kozerke, Sebastian

    2016-01-01

    An approach to Magnetic Resonance (MR) image reconstruction from undersampled data is proposed. Undersampling artifacts are removed using an iterative thresholding algorithm applied to nonlinearly transformed image block arrays. Each block array is transformed using kernel principal component analysis where the contribution of each image block to the transform depends in a nonlinear fashion on the distance to other image blocks. Elimination of undersampling artifacts is achieved by conventional principal component analysis in the nonlinear transform domain, projection onto the main components and back-mapping into the image domain. Iterative image reconstruction is performed by interleaving the proposed undersampling artifact removal step and gradient updates enforcing consistency with acquired k-space data. The algorithm is evaluated using retrospectively undersampled MR cardiac cine data and compared to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT reconstruction. Evaluation of image quality and root-mean-squared-error (RMSE) reveal improved image reconstruction for up to 8-fold undersampled data with the proposed approach relative to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT. In conclusion, block matching and kernel methods can be used for effective removal of undersampling artifacts in MR image reconstruction and outperform methods using standard compressed sensing and ℓ1-regularized parallel imaging methods. PMID:27116675

  20. Tomographic fluorescence reconstruction by a spectral projected gradient pursuit method

    NASA Astrophysics Data System (ADS)

    Ye, Jinzuo; An, Yu; Mao, Yamin; Jiang, Shixin; Yang, Xin; Chi, Chongwei; Tian, Jie

    2015-03-01

In vivo fluorescence molecular imaging (FMI) has played an increasingly important role in preclinical biomedical research. Fluorescence molecular tomography (FMT) upgrades two-dimensional FMI optical information to a three-dimensional fluorescent source distribution, which can greatly facilitate applications in related studies. However, FMT presents a challenging inverse problem which is quite ill-posed and ill-conditioned, and continued efforts to develop more practical and efficient methods for FMT reconstruction are needed. In this paper, a method based on spectral projected gradient pursuit (SPGP) is proposed for FMT reconstruction. The proposed method is based on the directional pursuit framework. A mathematical strategy known as nonmonotone line search is incorporated into the SPGP method, which guarantees global convergence. In addition, the Barzilai-Borwein step length is utilized to build the new step length of the SPGP method, which speeds up the convergence of this gradient method. To evaluate the performance of the proposed method, several heterogeneous simulation experiments, including multisource cases, as well as comparative analyses have been conducted. The results demonstrated that the proposed method achieved satisfactory source localization with a bias of less than 1 mm, that its computational efficiency was one order of magnitude higher than that of the comparison method, and that the fluorescence it reconstructed had a higher contrast to the background than that of the comparison method. All the results demonstrate the potential of the proposed method for practical FMT applications.
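The spectral projected gradient machinery named above, a Barzilai-Borwein step safeguarded by a GLL-style nonmonotone line search, can be sketched on a generic nonnegative least-squares toy problem rather than the FMT forward model. The problem sizes and parameters are illustrative.

```python
import numpy as np

def spgp(A, y, n_iter=300, M=10, gamma=1e-4):
    """Spectral projected gradient for min 0.5||A x - y||^2 s.t. x >= 0,
    with the Barzilai-Borwein step and a GLL nonmonotone line search."""
    f = lambda x: 0.5 * np.sum((A @ x - y) ** 2)
    grad = lambda x: A.T @ (A @ x - y)
    x = np.zeros(A.shape[1])
    g = grad(x)
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    f_hist = [f(x)]
    for _ in range(n_iter):
        lam = 1.0
        while True:
            x_new = np.maximum(x - lam * step * g, 0.0)   # projected trial point
            d = x_new - x
            # nonmonotone acceptance: compare against the max of recent objectives
            if f(x_new) <= max(f_hist[-M:]) + gamma * (g @ d) or lam < 1e-10:
                break
            lam *= 0.5
        g_new = grad(x_new)
        s, t = x_new - x, g_new - g
        if s @ t > 1e-12:
            step = (s @ s) / (s @ t)                      # BB1 spectral step length
        x, g = x_new, g_new
        f_hist.append(f(x))
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((80, 40))
x_true = np.maximum(rng.standard_normal(40), 0.0)  # nonnegative ground truth
y = A @ x_true
x_hat = spgp(A, y)
```

The BB step adapts to local curvature, while the nonmonotone line search tolerates the occasional objective increase that makes BB iterations fast.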

  1. An Event Reconstruction Method for the Telescope Array Fluorescence Detectors

    SciTech Connect

    Fujii, T.; Ogio, S.; Yamazaki, K.; Fukushima, M.; Ikeda, D.; Sagawa, H.; Takahashi, Y.; Tameda, Y.; Hayashi, K.; Ishimori, R.; Kobayashi, Y.; Tokuno, H.; Tsunesada, Y.; Honda, K.; Tomida, T.; Udo, S.

    2011-09-22

We measure the arrival directions, energies and mass composition of ultra-high-energy cosmic rays with air fluorescence detector telescopes. The longitudinal profile of the cosmic-ray-induced extensive air shower cascade is imaged on the focal plane of the telescope camera. Here, we present an event reconstruction method for obtaining the primary-particle information from data collected by the Telescope Array Fluorescence Detectors. In particular, we report on an ''Inverse Monte Carlo (IMC)'' method in which the reconstruction process searches for an optimum solution via repeated Monte Carlo simulations that include the characteristics of all detectors, the atmospheric conditions, and the photon emission and scattering processes.

  2. Bubble reconstruction method for wire-mesh sensors measurements

    NASA Astrophysics Data System (ADS)

    Mukin, Roman V.

    2016-08-01

A new algorithm is presented for post-processing void fraction measurements from wire-mesh sensors, particularly for identifying and reconstructing bubble surfaces in a two-phase flow. The method combines the bubble recognition algorithm presented in Prasser (Nuclear Eng Des 237(15):1608, 2007) with the Poisson surface reconstruction algorithm developed in Kazhdan et al. (Poisson surface reconstruction. In: Proceedings of the fourth eurographics symposium on geometry processing 7, 2006). To verify the proposed technique, the reconstructed individual bubble shapes were compared with those obtained numerically in Sato and Ničeno (Int J Numer Methods Fluids 70(4):441, 2012), and the difference between the reconstructed and reference bubble shapes was used to estimate the accuracy of the proposed algorithm. The algorithm was then applied to void fraction measurements performed by means of wire-mesh sensors in a rod bundle geometry in Ylönen (High-resolution flow structure measurements in a rod bundle. Diss., Eidgenössische Technische Hochschule ETH Zürich, Nr. 20961, 2013). The reconstructed bubble shape yields the bubble surface area and volume, and hence its Sauter diameter d_{32} as well. The Sauter diameter proved more suitable for characterizing bubble size than the volumetric diameter d_{30}, and proved capable of capturing the bi-disperse bubble size distribution in the flow. The effect of a spacer grid was studied as well: for the given spacer grid and the flow rates considered, the bubble size frequency distribution peaks at almost the same position in all cases, approximately at d_{32} = 3.5 mm. This finding may be related to the specific geometry of the spacer grid or the air injection device used in the experiments, or even to more fundamental properties of the bubble breakup and coagulation processes. In addition, an application of the new algorithm to the reconstruction of a large air-water interface in a tube bundle is presented.
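Once a closed bubble surface mesh is available, the Sauter diameter follows directly from the mesh area and volume, d_32 = 6V/A. A sketch, where the lat-long sphere is only a test shape; a real reconstruction would supply the mesh:

```python
import numpy as np

def mesh_area_volume(verts, faces):
    """Surface area and enclosed volume of a closed, consistently oriented
    triangle mesh (signed tetrahedra via the divergence theorem)."""
    a, b, c = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
    cross = np.cross(b - a, c - a)
    area = 0.5 * np.linalg.norm(cross, axis=1).sum()
    volume = np.abs(np.einsum('ij,ij->i', a, np.cross(b, c)).sum()) / 6.0
    return area, volume

def sauter_diameter(verts, faces):
    area, volume = mesh_area_volume(verts, faces)
    return 6.0 * volume / area               # d_32 = 6 V / A

def uv_sphere(r=1.0, n_lat=40, n_lon=80):
    """Simple closed lat-long triangulation of a sphere (test geometry only)."""
    verts, faces = [], []
    for i in range(n_lat + 1):
        th = np.pi * i / n_lat
        for j in range(n_lon):
            ph = 2 * np.pi * j / n_lon
            verts.append([r * np.sin(th) * np.cos(ph),
                          r * np.sin(th) * np.sin(ph),
                          r * np.cos(th)])
    idx = lambda i, j: i * n_lon + j % n_lon
    for i in range(n_lat):
        for j in range(n_lon):
            faces.append([idx(i, j), idx(i + 1, j), idx(i + 1, j + 1)])
            faces.append([idx(i, j), idx(i + 1, j + 1), idx(i, j + 1)])
    return np.array(verts), np.array(faces)

verts, faces = uv_sphere(r=1.75e-3)          # bubble of radius 1.75 mm
d32 = sauter_diameter(verts, faces)          # for a sphere, d_32 = 2 r = 3.5 mm
```

For a sphere the Sauter diameter equals the geometric diameter, which makes this an easy self-check before applying the formula to reconstructed bubble meshes.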

  3. Reconstruction method for curvilinear structures from two views

    NASA Astrophysics Data System (ADS)

    Hoffmann, Matthias; Brost, Alexander; Jakob, Carolin; Koch, Martin; Bourier, Felix; Kurzidim, Klaus; Hornegger, Joachim; Strobel, Norbert

    2013-03-01

Minimally invasive interventions often involve tools of curvilinear shape, such as catheters and guide-wires. If the camera parameters of a fluoroscopic system or a stereoscopic endoscope are known, a 3-D reconstruction of corresponding points can be computed by triangulation. Manual identification of point correspondences is time consuming, but methods exist that automatically select corresponding points along curvilinear structures. The focus here is on the evaluation of a recently published method for catheter reconstruction from two views. A previous evaluation of this method using clinical data yielded promising results; for that evaluation, however, no 3-D ground-truth data was available, so the error could only be estimated using the forward projection of the reconstruction. In this paper, we present a more extensive evaluation of this method based on both clinical and phantom data. For the evaluation using clinical images, 36 data sets and two different catheters were available. The mean error found when reconstructing both catheters was 0.1 mm +/- 0.1 mm. To evaluate the error in 3-D, images of a phantom were acquired from 13 different angulations. A 3-D C-arm CT voxel data set of the phantom was also available, and the reconstruction error was calculated by comparing the triangulated 3-D reconstruction to this voxel data set. The evaluation yielded an average error of 1.2 mm +/- 1.2 mm for the circumferential mapping catheter and 1.3 mm +/- 1.0 mm for the ablation catheter.
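The triangulation step mentioned above, recovering a 3-D point from corresponding points in two calibrated views, can be sketched with the standard linear (DLT) method. The camera intrinsics and poses below are invented for illustration.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two views.
    x1, x2: pixel coordinates (u, v); P1, P2: 3x4 projection matrices."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                         # null vector of the homogeneous system
    return X[:3] / X[3]

# hypothetical calibrated setup: two cameras viewing the scene from different angles
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])

def camera(ry_deg, t):
    a = np.deg2rad(ry_deg)
    R = np.array([[np.cos(a), 0, np.sin(a)],
                  [0, 1, 0],
                  [-np.sin(a), 0, np.cos(a)]])
    return K @ np.hstack([R, np.array(t, dtype=float).reshape(3, 1)])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

P1 = camera(0.0, [0, 0, 5])
P2 = camera(20.0, [0, 0, 5])
X_true = np.array([0.3, -0.2, 1.0])
X_rec = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noiseless correspondences the DLT solution is exact; in practice the same machinery is applied to each automatically matched point along the catheter centerline.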

  4. Method for 3D fibre reconstruction on a microrobotic platform.

    PubMed

    Hirvonen, J; Myllys, M; Kallio, P

    2016-07-01

    Automated handling of a natural fibrous object requires a method for acquiring the three-dimensional geometry of the object, because its dimensions cannot be known beforehand. This paper presents a method for calculating the three-dimensional reconstruction of a paper fibre on a microrobotic platform that contains two microscope cameras. The method is based on detecting curvature changes in the fibre centreline, and using them as the corresponding points between the different views of the images. We test the developed method with four fibre samples and compare the results with the references measured with an X-ray microtomography device. We rotate the samples through 16 different orientations on the platform and calculate the three-dimensional reconstruction to test the repeatability of the algorithm and its sensitivity to the orientation of the sample. We also test the noise sensitivity of the algorithm, and record the mismatch rate of the correspondences provided. We use the iterative closest point algorithm to align the measured three-dimensional reconstructions with the references. The average point-to-point distances between the reconstructed fibre centrelines and the references are 20-30 μm, and the mismatch rate is low. Given the manipulation tolerance, this shows that the method is well suited to automated fibre grasping. This has also been demonstrated with actual grasping experiments. PMID:26695385

  5. Reconstructing Program Theories: Methods Available and Problems To Be Solved.

    ERIC Educational Resources Information Center

    Leeuw, Frans L.

    2003-01-01

    Discusses methods for reconstructing theories underlying programs and policies, focusing on three approaches: (1) an empirical approach that focuses on interviews, documents, and argumentational analysis; (2) an approach based on strategic assessment, group dynamics, and dialogue; and (3) an approach based on cognitive and organizational…

  6. An improved reconstruction method for cosmological density fields

    NASA Technical Reports Server (NTRS)

    Gramann, Mirt

    1993-01-01

This paper proposes improvements to existing reconstruction methods for recovering the initial linear density and velocity fields of the universe from the present large-scale density distribution. We derive the Eulerian continuity equation in the Zel'dovich approximation and show that, by applying this equation, we can trace the evolution of the gravitational potential of the universe more exactly than is possible with previous approaches based on the Zel'dovich-Bernoulli equation. The improved reconstruction method is tested using N-body simulations. Where the Zel'dovich-Bernoulli equation describes only the formation of filaments, the Zel'dovich continuity equation also follows the clustering of clumps inside the filaments. Our reconstruction method recovers the true initial gravitational potential with an rms error about 3 times smaller than previous methods. We examine the recovery of the initial distribution of Fourier components and find the scale at which the recovered phases are scrambled with respect to their true initial values. By integrating the Zel'dovich continuity equation back in time, we can improve the spatial resolution of the reconstruction by a factor of about 2.

  7. Robust Methods for Sensing and Reconstructing Sparse Signals

    ERIC Educational Resources Information Center

    Carrillo, Rafael E.

    2012-01-01

    Compressed sensing (CS) is an emerging signal acquisition framework that goes against the traditional Nyquist sampling paradigm. CS demonstrates that a sparse, or compressible, signal can be acquired using a low rate acquisition process. Since noise is always present in practical data acquisition systems, sensing and reconstruction methods are…

  8. Testing the global flow reconstruction method on coupled chaotic oscillators

    NASA Astrophysics Data System (ADS)

    Plachy, Emese; Kolláth, Zoltán

    2010-03-01

Irregular behaviour of pulsating variable stars may be caused by low-dimensional chaos. To determine the quantitative properties of the dynamics in such systems, we apply a suitable time series analysis, the global flow reconstruction method. The robustness of the reconstruction can be tested through the resulting quantities, such as the Lyapunov dimension and the Fourier frequencies. The latter are especially important, as they are directly derivable from the observed light curves. We have performed tests using coupled Rossler oscillators to investigate the possible connection between these quantities. In this paper we present our test results.
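Coupled Rossler oscillators of the kind used for such tests are easy to generate; a sketch with diffusive coupling in x and a plain RK4 integrator. The parameter values are the common a=0.2, b=0.2, c=5.7 choice, and the coupling strength is an arbitrary illustration, not the paper's setting.

```python
import numpy as np

def coupled_rossler(state, a=0.2, b=0.2, c=5.7, eps=0.05):
    """Two Rossler oscillators with diffusive coupling in the x variables."""
    x1, y1, z1, x2, y2, z2 = state
    return np.array([
        -y1 - z1 + eps * (x2 - x1),
        x1 + a * y1,
        b + z1 * (x1 - c),
        -y2 - z2 + eps * (x1 - x2),
        x2 + a * y2,
        b + z2 * (x2 - c),
    ])

def rk4(f, s0, dt, n):
    """Fixed-step 4th-order Runge-Kutta integration, returning the trajectory."""
    traj = np.empty((n + 1, len(s0)))
    traj[0] = s = np.asarray(s0, dtype=float)
    for i in range(n):
        k1 = f(s)
        k2 = f(s + dt / 2 * k1)
        k3 = f(s + dt / 2 * k2)
        k4 = f(s + dt * k3)
        s = s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[i + 1] = s
    return traj

traj = rk4(coupled_rossler, [1.0, 1.0, 0.0, 0.9, 1.1, 0.0], dt=0.01, n=20000)
x1 = traj[:, 0]   # a scalar series playing the role of an observed "light curve"
```

A single coordinate such as x1 is what a global flow reconstruction would take as input, mimicking the situation where only a light curve is observed.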

  9. 3D reconstruction methods of coronal structures by radio observations

    NASA Astrophysics Data System (ADS)

    Aschwanden, Markus J.; Bastian, T. S.; White, Stephen M.

    1992-11-01

    The ability to carry out the three dimensional (3D) reconstruction of structures in the solar corona would represent a major advance in the study of the physical properties in active regions and in flares. Methods which allow a geometric reconstruction of quasistationary coronal structures (for example active region loops) or dynamic structures (for example flaring loops) are described: stereoscopy of multi-day imaging observations by the VLA (Very Large Array); tomography of optically thin emission (in radio or soft x-rays); multifrequency band imaging by the VLA; and tracing of magnetic field lines by propagating electron beams.

  10. Method for image reconstruction of moving radionuclide source distribution

    DOEpatents

    Stolin, Alexander V.; McKisson, John E.; Lee, Seung Joon; Smith, Mark Frederick

    2012-12-18

    A method for image reconstruction of moving radionuclide distributions. Its particular embodiment is for single photon emission computed tomography (SPECT) imaging of awake animals, though its techniques are general enough to be applied to other moving radionuclide distributions as well. The invention eliminates motion and blurring artifacts for image reconstructions of moving source distributions. This opens new avenues in the area of small animal brain imaging with radiotracers, which can now be performed without the perturbing influences of anesthesia or physical restraint on the biological system.

  11. Method for reconstructing the history of pollution emissions

    SciTech Connect

    Not Available

    1986-05-01

    This paper examines methods for reconstructing the history of pollution emissions. Since very few direct measurements were made in the past, documentary evidence of releases is drawn mainly from the records of economic activity. The available data are integrated into a routing network for the flow of each pollutant, from raw materials through processing, shipment of products, consumption, and finally to waste disposal. This process, called the mass balance approach, is much like reconstructing a fossil skeleton. It was used in the pilot historical study of the pollution of the Hudson region.
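The mass-balance bookkeeping amounts to closing an input-output budget: whatever raw-material input is not accounted for by products, shipment or recycling is attributed to releases. A minimal sketch with hypothetical figures:

```python
# Toy mass balance for one pollutant in one year (hypothetical numbers, tonnes).
flows = {
    "raw_material_input": 1000.0,
    "incorporated_in_products": 650.0,
    "shipped_out_of_region": 120.0,
    "recycled": 80.0,
}

# Whatever is not accounted for by products, shipment or recycling is
# attributed to releases into the environment (waste disposal + losses).
released = flows["raw_material_input"] - (
    flows["incorporated_in_products"]
    + flows["shipped_out_of_region"]
    + flows["recycled"]
)
print(f"Estimated release: {released:.1f} t")  # Estimated release: 150.0 t
```

In practice each flow would itself be reconstructed per year and per region from economic records, and the budgets chained along the routing network.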

  12. 3D reconstruction methods of coronal structures by radio observations

    NASA Technical Reports Server (NTRS)

    Aschwanden, Markus J.; Bastian, T. S.; White, Stephen M.

    1992-01-01

    The ability to carry out the three dimensional (3D) reconstruction of structures in the solar corona would represent a major advance in the study of the physical properties in active regions and in flares. Methods which allow a geometric reconstruction of quasistationary coronal structures (for example active region loops) or dynamic structures (for example flaring loops) are described: stereoscopy of multi-day imaging observations by the VLA (Very Large Array); tomography of optically thin emission (in radio or soft x-rays); multifrequency band imaging by the VLA; and tracing of magnetic field lines by propagating electron beams.

  13. Efficient method for content reconstruction with self-embedding.

    PubMed

    Korus, Paweł; Dziech, Andrzej

    2013-03-01

    This paper presents a new model of the content reconstruction problem in self-embedding systems, based on an erasure communication channel. We explain why such a model is a good fit for this problem, and how it can be practically implemented with the use of digital fountain codes. The proposed method is based on an alternative approach to spreading the reference information over the whole image, which has recently been shown to be of critical importance in the application at hand. Our paper presents a theoretical analysis of the inherent restoration trade-offs. We analytically derive formulas for the reconstruction success bounds, and validate them experimentally with Monte Carlo simulations and a reference image authentication system. We perform an exhaustive reconstruction quality assessment, where the presented reference scheme is compared to five state-of-the-art alternatives in a common evaluation scenario. Our paper leads to important insights on how self-embedding schemes should be constructed to achieve optimal performance. The reference authentication system designed according to the presented principles allows for high-quality reconstruction, regardless of the amount of the tampered content. The average reconstruction quality, measured on 10000 natural images is 37 dB, and is achievable even when 50% of the image area becomes tampered. PMID:23193455
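The erasure-channel framing can be illustrated with the decoding mechanism that underlies digital fountain codes: reference symbols are spread as XOR combinations, and a peeling decoder repeatedly solves any combination with a single unknown. The symbol values and combination sets below are hypothetical, not the paper's actual self-embedding scheme.

```python
def peel_decode(equations, k):
    """Recover k source bytes from XOR combinations by iterative peeling:
    repeatedly find an equation with exactly one unknown and solve it."""
    eqs = [[set(idx), val] for idx, val in equations]
    known = {}
    progress = True
    while progress and len(known) < k:
        progress = False
        for eq in eqs:
            idx, val = eq
            # substitute symbols recovered so far into the equation
            solved = idx & known.keys()
            for j in solved:
                val ^= known[j]
            idx -= solved
            eq[1] = val
            if len(idx) == 1:          # degree-1 equation: solve it
                j = idx.pop()
                if j not in known:
                    known[j] = val
                    progress = True
    return [known.get(i) for i in range(k)]

# Reference block split into k = 4 source bytes (hypothetical values).
src = [0x12, 0x34, 0x56, 0x78]
# Surviving encoded symbols: XOR of chosen subsets of source bytes; the rest
# were "erased" by tampering.
encoded = [
    ((0,), src[0]),
    ((0, 1), src[0] ^ src[1]),
    ((1, 2), src[1] ^ src[2]),
    ((2, 3), src[2] ^ src[3]),
]
recovered = peel_decode(encoded, 4)
```

Decoding succeeds here because the surviving combinations form a solvable chain; the paper's analysis bounds when enough such combinations survive.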

  14. Robust H infinity-stabilization design in gene networks under stochastic molecular noises: fuzzy-interpolation approach.

    PubMed

    Chen, Bor-Sen; Chang, Yu-Te; Wang, Yu-Chao

    2008-02-01

    Molecular noises in gene networks come from intrinsic fluctuations, transmitted noise from upstream genes, and the global noise affecting all genes. Knowledge of molecular noise filtering in gene networks is crucial to understand the signal processing in gene networks and to design noise-tolerant gene circuits for synthetic biology. A nonlinear stochastic dynamic model is proposed to describe a gene network under intrinsic molecular fluctuations and extrinsic molecular noises. The stochastic molecular-noise-processing scheme of gene regulatory networks for attenuating these molecular noises is investigated from the nonlinear robust stabilization and filtering perspective. In order to improve the robust stability and noise filtering, a robust gene circuit design for gene networks is proposed based on the nonlinear robust H infinity stochastic stabilization and filtering scheme, which needs to solve a nonlinear Hamilton-Jacobi inequality. However, in order to avoid solving these complicated nonlinear stabilization and filtering problems, a fuzzy approximation method is employed to interpolate several linear stochastic gene networks at different operation points via fuzzy bases to approximate the nonlinear stochastic gene network. In this situation, the linear matrix inequality (LMI) technique can be employed to simplify the gene circuit design problems to improve robust stability and molecular-noise-filtering ability of gene networks to overcome intrinsic molecular fluctuations and extrinsic molecular noises. PMID:18270080

  15. A new method of morphological comparison for bony reconstructive surgery: maxillary reconstruction using scapular tip bone

    NASA Astrophysics Data System (ADS)

    Chan, Harley; Gilbert, Ralph W.; Pagedar, Nitin A.; Daly, Michael J.; Irish, Jonathan C.; Siewerdsen, Jeffrey H.

    2010-02-01

    Esthetic appearance is one of the most important factors in reconstructive surgery. The current practice of maxillary reconstruction uses radial forearm, fibula or iliac crest osteocutaneous flaps to recreate the three-dimensional complex structures of the palate and maxilla. However, these bone flaps lack shape similarity to the palate and result in a less satisfactory esthetic outcome. Considering similarity factors and vasculature advantages, reconstructive surgeons recently explored the use of scapular tip myo-osseous free flaps to restore the excised site. We have developed a new method that quantitatively evaluates the morphological similarity of the scapular tip bone and palate based on a diagnostic volumetric computed tomography (CT) image. This quantitative result was further interpreted as a color map rendered on the surface of a three-dimensional computer model. For surgical planning, this color interpretation could potentially assist the surgeon in orienting the bone flap for the best fit of the reconstruction site. With approval from the Research Ethics Board (REB) of the University Health Network, we conducted a retrospective analysis of CT images obtained from 10 patients. Each patient had a CT scan including the maxilla and chest on the same day. Based on this image set, we simulated total, subtotal and hemi palate reconstruction. The simulation procedure included volume segmentation, conversion of the segmented volume to a stereolithography (STL) model, manual registration, and computation of minimum geometric distances and curvature between STL models. Across the 10 patients' data, we found the overall root-mean-square (RMS) conformance was 3.71+/- 0.16 mm
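The RMS conformance figure can be approximated on point clouds as the RMS of minimum point-to-point distances after registration. The sketch below is a simplified stand-in (brute-force nearest neighbours on synthetic points, not the paper's surface-based computation):

```python
import numpy as np

def rms_conformance(source_pts, target_pts):
    """RMS of minimum point-to-point distances from source to target.
    Assumes the two point clouds are already registered."""
    # pairwise distance matrix via broadcasting: shape (n_src, n_tgt)
    d = np.linalg.norm(source_pts[:, None, :] - target_pts[None, :, :], axis=2)
    return float(np.sqrt(np.mean(d.min(axis=1) ** 2)))

rng = np.random.default_rng(0)
palate = rng.uniform(0, 10, size=(200, 3))              # synthetic target surface
scapula = palate + rng.normal(0, 0.5, size=(200, 3))    # noisy copy as a stand-in
rms = rms_conformance(scapula, palate)
```

Real meshes would use a spatial index (k-d tree) and point-to-surface rather than point-to-point distances.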

  16. A structured light method for underwater surface reconstruction

    NASA Astrophysics Data System (ADS)

    Sarafraz, Amin; Haus, Brian K.

    2016-04-01

    A new structured-light method for 3D imaging has been developed which can simultaneously estimate both the geometric shape of the water surface and the geometric shape of underwater objects. The method requires only a single image and thus can be applied to dynamic as well as static scenes. Experimental results show the utility of this method in non-invasive underwater 3D reconstruction applications. The performance of the new method is studied through a sensitivity analysis for different parameters of the suggested method.

  17. Iterative reconstruction methods for high-throughput PET tomographs.

    PubMed

    Hamill, James; Bruckbauer, Thomas

    2002-08-01

    A fast iterative method is described for processing clinical PET scans acquired in three dimensions, that is, with no inter-plane septa, using standard computers to replace dedicated processors used until the late 1990s. The method is based on sinogram resampling, Fourier rebinning, Monte Carlo scatter simulation and iterative reconstruction using the attenuation-weighted OSEM method and a projector based on a Gaussian pixel model. Resampling of measured sinogram values occurs before Fourier rebinning, to minimize parallax and geometric distortions due to the circular geometry, and also to reduce the size of the sinogram. We analyse the geometrical and statistical effects of resampling, showing that the lines of response are positioned correctly and that resampling is equivalent to about 4 mm of post-reconstruction filtering. We also present phantom and patient results. In this approach, multi-bed clinical oncology scans can be ready for diagnosis within minutes. PMID:12200928
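The core multiplicative EM update can be sketched in plain MLEM form (OSEM, as used in the paper, applies the same update to ordered subsets of the projection data). The tiny system matrix below is hypothetical; the real pipeline also involves resampling, Fourier rebinning and scatter simulation, which are omitted here.

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """Maximum-likelihood EM reconstruction:
    x <- x * (A^T (y / A x)) / (A^T 1)."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                     # sensitivity image, A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# Tiny toy system: 6 lines of response through a 4-pixel "image" (hypothetical).
rng = np.random.default_rng(1)
A = rng.uniform(0.1, 1.0, size=(6, 4))       # system matrix
x_true = np.array([0.5, 2.0, 1.0, 3.0])
y = A @ x_true                               # noise-free projection data
x_hat = mlem(A, y)
```

The update preserves non-negativity by construction, which is one reason EM-type methods are favoured for emission tomography.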

  18. Reverse engineering and analysis of large genome-scale gene networks

    PubMed Central

    Aluru, Maneesha; Zola, Jaroslaw; Nettleton, Dan; Aluru, Srinivas

    2013-01-01

    Reverse engineering the whole-genome networks of complex multicellular organisms remains a challenge. While simpler models easily scale to a large number of genes and gene expression datasets, more accurate models are compute intensive, limiting their scale of applicability. To enable fast and accurate reconstruction of large networks, we developed Tool for Inferring Network of Genes (TINGe), a parallel mutual information (MI)-based program. The novel features of our approach include: (i) B-spline-based formulation for linear-time computation of MI, (ii) a novel algorithm for direct permutation testing and (iii) development of parallel algorithms to reduce run-time and facilitate construction of large networks. We assess the quality of our method by comparison with ARACNe (Algorithm for the Reconstruction of Accurate Cellular Networks) and GeneNet and demonstrate its unique capability by reverse engineering the whole-genome network of Arabidopsis thaliana from 3137 Affymetrix ATH1 GeneChips in just 9 min on a 1024-core cluster. We further report on the development of a new software Gene Network Analyzer (GeNA) for extracting context-specific subnetworks from a given set of seed genes. Using TINGe and GeNA, we performed analysis of 241 Arabidopsis AraCyc 8.0 pathways, and the results are made available through the web. PMID:23042249
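The MI-plus-permutation-testing idea can be sketched on synthetic expression profiles. Note this toy uses a simple histogram MI estimator and a naive permutation loop; TINGe's contribution is precisely a faster B-spline estimator and a direct permutation test, neither of which is reproduced here.

```python
import numpy as np

def mutual_info(x, y, bins=8):
    """Histogram-based mutual information estimate in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def permutation_pvalue(x, y, n_perm=200, seed=0):
    """p-value of the observed MI against MI under permutations of y."""
    rng = np.random.default_rng(seed)
    obs = mutual_info(x, y)
    null = [mutual_info(x, rng.permutation(y)) for _ in range(n_perm)]
    return (1 + sum(m >= obs for m in null)) / (1 + n_perm)

rng = np.random.default_rng(42)
g1 = rng.normal(size=500)                 # "expression" of gene 1
g2 = g1 + 0.3 * rng.normal(size=500)      # co-regulated with gene 1
g3 = rng.normal(size=500)                 # unrelated gene
p12 = permutation_pvalue(g1, g2)
p13 = permutation_pvalue(g1, g3)
```

An edge would be drawn between genes 1 and 2 (small p-value) but not between genes 1 and 3.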

  19. The gridding method for image reconstruction by Fourier transformation

    SciTech Connect

    Schomberg, H.; Timmer, J.

    1995-09-01

    This paper explores a computational method for reconstructing an n-dimensional signal f from a sampled version of its Fourier transform {cflx f}. The method involves a window function {cflx w} and proceeds in three steps. First, the convolution {cflx g} = {cflx w} * {cflx f} is computed numerically on a Cartesian grid, using the available samples of {cflx f}. Then, g = wf is computed via the inverse discrete Fourier transform, and finally f is obtained as g/w. Due to the smoothing effect of the convolution, evaluating {cflx w} * {cflx f} is much less error prone than merely interpolating {cflx f}. The method was originally devised for image reconstruction in radio astronomy, but is actually applicable to a broad range of reconstructive imaging methods, including magnetic resonance imaging and computed tomography. In particular, it provides a fast and accurate alternative to the filtered backprojection. The basic method has several variants with other applications, such as the equidistant resampling of arbitrarily sampled signals or the fast computation of the Radon (Hough) transform.
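The three steps can be demonstrated in a 1D toy: nonuniform Fourier samples are convolved onto a grid with a Gaussian window, inverse-transformed, and deapodized by dividing out the window's image-domain transform. The window choice, its width, and the simple sample-density compensation below are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

N = 64
n = np.arange(N)
f_true = np.exp(-0.5 * ((n - N // 2) / 2.0) ** 2)       # test "image"

# Nonuniform samples of the signal's Fourier transform (the measured data)
rng = np.random.default_rng(3)
nu = rng.uniform(-0.5, 0.5, size=4096)                  # sample frequencies
F_samples = np.exp(-2j * np.pi * np.outer(nu, n)) @ f_true

# Step 1: convolve the samples onto a Cartesian grid with a Gaussian window
# w_hat, with a simple density compensation (divide by gridded sample density).
k_grid = (np.arange(N) - N // 2) / N                    # grid frequencies
sigma = 0.5 / N                                         # window width
d = (k_grid[None, :] - nu[:, None] + 0.5) % 1.0 - 0.5   # wrapped distance
W = np.exp(-(d / sigma) ** 2)                           # (samples, grid)
g_hat = (W.T @ F_samples) / np.maximum(W.sum(axis=0), 1e-12)

# Step 2: inverse DFT gives g = w * f (up to an overall scale)
g = np.fft.ifft(np.fft.ifftshift(g_hat))

# Step 3: divide out the image-domain window w (deapodization)
n_c = np.minimum(n, N - n)                              # wrapped coordinate
w_img = np.exp(-(np.pi * sigma * n_c) ** 2)
f_rec = np.real(g) / w_img
f_rec *= f_true.max() / f_rec.max()                     # fix the overall scale
```

The deapodization step is what distinguishes gridding from plain interpolation: the smoothing introduced in step 1 is removed exactly in the image domain.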

  20. Iterative reconstruction methods in X-ray CT.

    PubMed

    Beister, Marcel; Kolditz, Daniel; Kalender, Willi A

    2012-04-01

    Iterative reconstruction (IR) methods have recently re-emerged in transmission x-ray computed tomography (CT). They were successfully used in the early years of CT, but given up when the amount of measured data increased because of the higher computational demands of IR compared to analytical methods. The availability of large computational capacities in normal workstations and the ongoing efforts towards lower doses in CT have changed the situation; IR has become a hot topic for all major vendors of clinical CT systems in the past 5 years. This review strives to provide information on IR methods and aims at interested physicists and physicians already active in the field of CT. We give an overview on the terminology used and an introduction to the most important algorithmic concepts including references for further reading. As a practical example, details on a model-based iterative reconstruction algorithm implemented on a modern graphics adapter (GPU) are presented, followed by application examples for several dedicated CT scanners in order to demonstrate the performance and potential of iterative reconstruction methods. Finally, some general thoughts regarding the advantages and disadvantages of IR methods as well as open points for research in this field are discussed. PMID:22316498

  1. Detection of driver pathways using mutated gene network in cancer.

    PubMed

    Li, Feng; Gao, Lin; Ma, Xiaoke; Yang, Xiaofei

    2016-06-21

    Distinguishing driver pathways has been extensively studied because they are critical for understanding the development and molecular mechanisms of cancers. Most existing methods for driver pathways are based on high coverage as well as high mutual exclusivity, with the underlying assumption that mutations are exclusive. However, in many cases, mutated driver genes in the same pathways are not strictly mutually exclusive. Based on this observation, we propose an index for quantifying mutual exclusivity between gene pairs. Then, we construct a mutated gene network for detecting driver pathways by integrating the proposed index and coverage. The detection of driver pathways on the mutated gene network consists of two steps: raw pathways are obtained using a CPM method, and the final driver pathways are selected using a strict testing strategy. We apply this method to glioblastoma and breast cancers and find that our method is more accurate than state-of-the-art methods in terms of enrichment of KEGG pathways. Furthermore, the detected driver pathways intersect with well-known pathways with moderate exclusivity, which cannot be discovered using the existing algorithms. In conclusion, the proposed method provides an effective way to investigate driver pathways in cancers. PMID:27118146
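The two quantities being balanced, coverage and mutual exclusivity, can be computed directly from a binary mutation matrix. The index below (fraction of covered samples with exactly one mutated gene in the set) is a simple illustrative choice, not necessarily the paper's exact index.

```python
import numpy as np

# Binary mutation matrix: rows = samples, columns = genes (hypothetical data).
M = np.array([
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
    [1, 1, 0],
    [0, 0, 0],
    [1, 0, 0],
])

def coverage(M, genes):
    """Fraction of samples in which at least one gene of the set is mutated."""
    return M[:, genes].any(axis=1).mean()

def exclusivity(M, genes):
    """Fraction of covered samples carrying exactly one mutation in the set
    (1.0 means strictly mutually exclusive)."""
    hits = M[:, genes].sum(axis=1)
    return (hits == 1).sum() / (hits > 0).sum()

genes = [0, 1, 2]
cov = coverage(M, genes)
excl = exclusivity(M, genes)
```

Here the set covers 5 of 6 samples but is not strictly exclusive (one sample carries two mutations), the "moderate exclusivity" situation the paper targets.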

  2. Improving automated 3D reconstruction methods via vision metrology

    NASA Astrophysics Data System (ADS)

    Toschi, Isabella; Nocerino, Erica; Hess, Mona; Menna, Fabio; Sargeant, Ben; MacDonald, Lindsay; Remondino, Fabio; Robson, Stuart

    2015-05-01

    This paper aims to provide a procedure for improving automated 3D reconstruction methods via vision metrology. The 3D reconstruction problem is generally addressed using two different approaches. On the one hand, vision metrology (VM) systems try to accurately derive 3D coordinates of few sparse object points for industrial measurement and inspection applications; on the other, recent dense image matching (DIM) algorithms are designed to produce dense point clouds for surface representations and analyses. This paper strives to demonstrate a step towards narrowing the gap between traditional VM and DIM approaches. Efforts are therefore intended to (i) test the metric performance of the automated photogrammetric 3D reconstruction procedure, (ii) enhance the accuracy of the final results and (iii) obtain statistical indicators of the quality achieved in the orientation step. VM tools are exploited to integrate their main functionalities (centroid measurement, photogrammetric network adjustment, precision assessment, etc.) into the pipeline of 3D dense reconstruction. Finally, geometric analyses and accuracy evaluations are performed on the raw output of the matching (i.e. the point clouds) by adopting a metrological approach. The latter is based on the use of known geometric shapes and quality parameters derived from VDI/VDE guidelines. Tests are carried out by imaging the calibrated Portable Metric Test Object, designed and built at University College London (UCL), UK. It allows assessment of the performance of the image orientation and matching procedures within a typical industrial scenario, characterised by poor texture and known 3D/2D shapes.

  3. Methods of graph network reconstruction in personalized medicine.

    PubMed

    Danilov, A; Ivanov, Yu; Pryamonosov, R; Vassilevski, Yu

    2016-08-01

    The paper addresses methods for generation of individualized computational domains on the basis of medical imaging dataset. The computational domains will be used in one-dimensional (1D) and three-dimensional (3D)-1D coupled hemodynamic models. A 1D hemodynamic model employs a 1D network of a patient-specific vascular network with large number of vessels. The 1D network is the graph with nodes in the 3D space which bears additional geometric data such as length and radius of vessels. A 3D hemodynamic model requires a detailed 3D reconstruction of local parts of the vascular network. We propose algorithms which extend the automated segmentation of vascular and tubular structures, generation of centerlines, 1D network reconstruction, correction, and local adaptation. We consider two modes of centerline representation: (i) skeletal segments or sets of connected voxels and (ii) curved paths with corresponding radii. Individualized reconstruction of 1D networks depends on the mode of centerline representation. Efficiency of the proposed algorithms is demonstrated on several examples of 1D network reconstruction. The networks can be used in modeling of blood flows as well as other physiological processes in tubular structures. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26462139
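The 1D network data structure described, a spatial graph whose edges carry vessel length and radius, can be sketched minimally (coordinates and radii below are illustrative, not patient data):

```python
import math

# 1D vascular network: nodes are 3D points, edges carry length and radius.
nodes = {
    0: (0.0, 0.0, 0.0),
    1: (0.0, 0.0, 4.0),
    2: (1.5, 0.0, 6.0),
    3: (-1.5, 0.0, 6.0),
}
edges = [(0, 1, 0.8), (1, 2, 0.5), (1, 3, 0.5)]  # (head, tail, radius in cm)

def edge_length(a, b):
    """Euclidean length of the segment between two nodes."""
    return math.dist(nodes[a], nodes[b])

network = [
    {"nodes": (a, b), "length": edge_length(a, b), "radius": r}
    for a, b, r in edges
]
total_length = sum(e["length"] for e in network)
```

In a real pipeline the nodes and radii would come from the segmented centerlines, and this graph would feed the 1D hemodynamic solver.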

  4. Linear method of fluorescent source reconstruction in a diffusion medium.

    PubMed

    Janunts, Edgar; Pöschinger, Thomas; Brünner, Holger; Langenbucher, Achim

    2008-01-01

    A new method is described for obtaining a 2D reconstruction of a fluorescent source distribution inside a diffusion medium from planar measurements of the emission light at the surface after excitation by a plane wave. Point sources are implanted at known locations of a rectangular phantom. The forward model of the photon transport is based on the diffusion approximation of the radiative transport equation (RTE) for homogeneous media. This can be described by a hierarchical system of two time-independent RTE's, one for the excitation plane wave originating from the external light source to the medium and another one for the fluorescence emission originating from the fluorophore marker to the detector. A linear inverse source problem was solved for image reconstruction. The applicability of the theoretical method is demonstrated in some representative working examples. For optimization of the problem we used a least-squares minimization technique. PMID:18826162
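The final step, a linear inverse source problem solved by least squares, has a compact generic form: given a forward matrix mapping source strengths to surface measurements, solve the regularized normal equations. Everything below (matrix size, source placement, the small Tikhonov term) is a hypothetical stand-in for the paper's diffusion-model forward operator.

```python
import numpy as np

# Hypothetical discretized forward model: each column of A is the surface
# emission pattern produced by a unit fluorescent source at one grid location.
rng = np.random.default_rng(7)
n_detectors, n_sources = 40, 10
A = rng.uniform(0.0, 1.0, size=(n_detectors, n_sources))

x_true = np.zeros(n_sources)
x_true[[2, 7]] = [1.0, 0.5]            # two point sources at known locations
y = A @ x_true + 0.01 * rng.normal(size=n_detectors)   # noisy measurements

# Least-squares solution of the linear inverse source problem,
# with a small Tikhonov term for numerical stability.
lam = 1e-3
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_sources), A.T @ y)
```

The regularization weight trades fidelity against noise amplification; here the problem is well-posed enough that a tiny value suffices.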

  5. PET iterative reconstruction incorporating an efficient positron range correction method.

    PubMed

    Bertolli, Ottavia; Eleftheriou, Afroditi; Cecchetti, Matteo; Camarlinghi, Niccolò; Belcari, Nicola; Tsoumpas, Charalampos

    2016-02-01

    Positron range is one of the main physical effects limiting the spatial resolution of positron emission tomography (PET) images. If positrons travel inside a magnetic field, for instance inside a nuclear magnetic resonance (MR) tomograph, the mean range will be smaller but still significant. In this investigation we examined a method to correct for the positron range effect in iterative image reconstruction by including tissue-specific kernels in the forward projection operation. The correction method was implemented within STIR library (Software for Tomographic Image Reconstruction). In order to obtain the positron annihilation distribution of various radioactive isotopes in water and lung tissue, simulations were performed with the Monte Carlo package GATE [Jan et al. 2004 [1
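The tissue-specific-kernel idea, blurring each tissue region with its own positron-range kernel inside the forward projection, can be sketched in 1D. The FWHM values, the trivial projector and the Gaussian kernel shape are illustrative assumptions, not values from the paper or STIR.

```python
import numpy as np

def gaussian_kernel(fwhm_mm, voxel_mm=1.0, radius=5):
    """Discrete 1D Gaussian blurring kernel for a given positron-range FWHM."""
    sigma = fwhm_mm / (2.355 * voxel_mm)
    t = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (t / sigma) ** 2)
    return k / k.sum()

def forward_project(x, tissue, kernels, A):
    """Blur each tissue region with its own positron-range kernel before
    projecting, i.e. y = A (k_tissue * x)."""
    blurred = np.zeros_like(x)
    for label, kern in kernels.items():
        mask = tissue == label
        blurred += np.convolve(x * mask, kern, mode="same")
    return A @ blurred

# 1D toy: 32 voxels, water on the left half, lung on the right
# (hypothetical FWHM values).
n = 32
tissue = np.array(["water"] * 16 + ["lung"] * 16)
kernels = {"water": gaussian_kernel(1.0), "lung": gaussian_kernel(3.0)}
A = np.eye(n)                           # trivial projector, for illustration
x = np.zeros(n); x[8] = 1.0; x[24] = 1.0
y = forward_project(x, tissue, kernels, A)
```

The point source in lung is spread more than the one in water, which is the effect the iterative reconstruction can then deconvolve by including the same kernels in its forward model.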

  6. Optical Sensors and Methods for Underwater 3D Reconstruction

    PubMed Central

    Massot-Campos, Miquel; Oliver-Codina, Gabriel

    2015-01-01

    This paper presents a survey on optical sensors and methods for 3D reconstruction in underwater environments. The techniques to obtain range data have been listed and explained, together with the different sensor hardware that makes them possible. The literature has been reviewed, and a classification has been proposed for the existing solutions. New developments, commercial solutions and previous reviews in this topic have also been gathered and considered. PMID:26694389

  7. Optical Sensors and Methods for Underwater 3D Reconstruction.

    PubMed

    Massot-Campos, Miquel; Oliver-Codina, Gabriel

    2015-01-01

    This paper presents a survey on optical sensors and methods for 3D reconstruction in underwater environments. The techniques to obtain range data have been listed and explained, together with the different sensor hardware that makes them possible. The literature has been reviewed, and a classification has been proposed for the existing solutions. New developments, commercial solutions and previous reviews in this topic have also been gathered and considered. PMID:26694389

  8. Efficient finite element method for grating profile reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Ruming; Sun, Jiguang

    2015-12-01

    This paper concerns the reconstruction of grating profiles from scattering data. The inverse problem is formulated as an optimization problem with a regularization term. We devise an efficient finite element method (FEM) and employ a quasi-Newton method to solve it. For the direct problems, the FEM stiffness and mass matrices are assembled once at the beginning of the numerical procedure. Then only minor changes are made to the mass matrix at each iteration, which significantly saves the computation cost. Numerical examples show that the method is effective and robust.
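The outer optimization loop of such profile reconstructions is typically a (quasi-)Newton iteration on a residual between simulated and measured data. Below is a generic Gauss-Newton sketch on a toy two-parameter model; the model, its Jacobian and the starting point are hypothetical stand-ins for the FEM-based forward solver.

```python
import numpy as np

def gauss_newton(residual, jacobian, p0, n_iter=50):
    """Gauss-Newton iteration p <- p - (J^T J)^{-1} J^T r for least squares."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = residual(p)
        J = jacobian(p)
        p = p - np.linalg.solve(J.T @ J, J.T @ r)
    return p

# Toy "grating" model: measured data depend on two profile parameters
# (height h, decay w) through a smooth nonlinear map (hypothetical).
t = np.linspace(0.0, 1.0, 20)

def model(p):
    h, w = p
    return h * np.exp(-w * t)

p_true = np.array([2.0, 1.5])
data = model(p_true)                     # noise-free synthetic measurements

residual = lambda p: model(p) - data

def jacobian(p):
    h, w = p
    return np.column_stack([np.exp(-w * t), -h * t * np.exp(-w * t)])

p_hat = gauss_newton(residual, jacobian, p0=[1.0, 1.0])
```

In the paper's setting each residual evaluation is a FEM solve, which is why reusing the assembled stiffness matrix across iterations pays off.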

  9. Multiobjective H2/H∞ synthetic gene network design based on promoter libraries.

    PubMed

    Wu, Chih-Hung; Zhang, Weihei; Chen, Bor-Sen

    2011-10-01

    Some current promoter libraries have been developed for synthetic gene networks. But an efficient method to engineer a synthetic gene network with some desired behaviors by selecting adequate promoters from these promoter libraries has not been presented. Thus developing a systematic method to efficiently employ promoter libraries to improve the engineering of synthetic gene networks with desired behaviors is appealing for synthetic biologists. In this study, a synthetic gene network with intrinsic parameter fluctuations and environmental disturbances in vivo is modeled by a nonlinear stochastic system. In order to engineer a synthetic gene network with a desired behavior despite intrinsic parameter fluctuations and environmental disturbances in vivo, a multiobjective H(2)/H(∞) reference tracking (H(2) optimal tracking and H(∞) noise filtering) design is introduced. The H(2) optimal tracking can make the tracking errors between the behaviors of a synthetic gene network and the desired behaviors as small as possible from the minimum mean square error point of view, and the H(∞) noise filtering can attenuate all possible noises, from the worst-case noise effect point of view, to achieve a desired noise filtering ability. If the multiobjective H(2)/H(∞) reference tracking design is satisfied, the synthetic gene network can robustly and optimally track the desired behaviors, simultaneously. First, based on the dynamic gene regulation, the existing promoter libraries are redefined by their promoter activities so that they can be efficiently selected in the design procedure. Then a systematic method is developed to select an adequate promoter set from the redefined promoter libraries to synthesize a gene network satisfying these two design objectives. But the multiobjective H(2)/H(∞) reference tracking design problem needs to solve a difficult Hamilton-Jacobi Inequality (HJI)-constrained optimization problem. Therefore, the fuzzy approximation method is

  10. Reconstruction of Gene Networks of Iron Response in Shewanella oneidensis

    SciTech Connect

    Yang, Yunfeng; Harris, Daniel P; Luo, Feng; Joachimiak, Marcin; Wu, Liyou; Dehal, Paramvir; Jacobsen, Janet; Yang, Zamin Koo; Gao, Haichun; Arkin, Adam; Palumbo, Anthony Vito; Zhou, Jizhong

    2009-01-01

    It is of great interest to study the iron response of the γ-proteobacterium Shewanella oneidensis since it possesses a high content of iron and is capable of utilizing iron for anaerobic respiration. We report here that the iron response in S. oneidensis is a rapid process. To gain more insights into the bacterial response to iron, temporal gene expression profiles were examined for iron depletion and repletion, resulting in identification of iron-responsive biological pathways in a gene co-expression network. Iron acquisition systems, including genes unique to S. oneidensis, were rapidly and strongly induced by iron depletion, and repressed by iron repletion. Some were required for iron depletion, as exemplified by the mutational analysis of the putative siderophore biosynthesis protein SO3032. Unexpectedly, a number of genes related to anaerobic energy metabolism were repressed by iron depletion and induced by repletion, which might be due to the iron storage potential of their protein products. Other iron-responsive biological pathways include protein degradation, aerobic energy metabolism and protein synthesis. Furthermore, sequence motifs enriched in gene clusters as well as their corresponding DNA-binding proteins (Fur, CRP and RpoH) were identified, resulting in a regulatory network of iron response in S. oneidensis. Together, this work provides an overview of iron response and reveals novel features in S. oneidensis, including Shewanella-specific iron acquisition systems, and suggests the intimate relationship between anaerobic energy metabolism and iron response.
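A gene co-expression network of the kind used here links genes whose temporal expression profiles are strongly correlated. A minimal Pearson-correlation sketch (the gene names, profiles and threshold below are hypothetical; the paper's network construction is more involved):

```python
import numpy as np

def coexpression_edges(expr, gene_names, threshold=0.8):
    """Edges between genes whose expression profiles correlate above the
    threshold in absolute Pearson r (negative r captures co-repression)."""
    r = np.corrcoef(expr)                     # genes x genes correlation
    edges = []
    for i in range(len(gene_names)):
        for j in range(i + 1, len(gene_names)):
            if abs(r[i, j]) >= threshold:
                edges.append((gene_names[i], gene_names[j],
                              round(float(r[i, j]), 2)))
    return edges

# Hypothetical expression profiles over 6 time points of an iron-depletion
# series (log-scale arbitrary units).
expr = np.array([
    [1.0, 2.0, 4.0, 8.0, 9.0, 9.5],   # "sidA": induced by depletion
    [1.1, 2.2, 3.9, 7.8, 9.2, 9.4],   # "sidB": co-expressed with sidA
    [9.0, 8.0, 6.0, 3.0, 2.0, 1.5],   # anaerobic-metabolism gene: repressed
])
genes = ["sidA", "sidB", "anaero1"]
edges = coexpression_edges(expr, genes)
```

The anti-correlated edge mirrors the paper's observation that anaerobic energy metabolism genes move opposite to the iron acquisition genes.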

  11. Computational methods estimating uncertainties for profile reconstruction in scatterometry

    NASA Astrophysics Data System (ADS)

    Gross, H.; Rathsfeld, A.; Scholze, F.; Model, R.; Bär, M.

    2008-04-01

    The solution of the inverse problem in scatterometry, i.e. the determination of periodic surface structures from light diffraction patterns, is incomplete without knowledge of the uncertainties associated with the reconstructed surface parameters. With decreasing feature sizes of lithography masks, increasing demands on metrology techniques arise. Scatterometry as a non-imaging indirect optical method is applied to periodic line-space structures in order to determine geometric parameters like side-wall angles, heights, top and bottom widths and to evaluate the quality of the manufacturing process. The numerical simulation of the diffraction process is based on the finite element solution of the Helmholtz equation. The inverse problem seeks to reconstruct the grating geometry from measured diffraction patterns. Restricting the class of gratings and the set of measurements, this inverse problem can be reformulated as a non-linear operator equation in Euclidean spaces. The operator maps the grating parameters to the efficiencies of diffracted plane wave modes. We employ a Gauss-Newton type iterative method to solve this operator equation and end up minimizing the deviation of the measured efficiency or phase shift values from the simulated ones. The reconstruction properties and the convergence of the algorithm, however, are controlled by the local conditioning of the non-linear mapping and the uncertainties of the measured efficiencies or phase shifts. In particular, the uncertainties of the reconstructed geometric parameters essentially depend on the uncertainties of the input data and can be estimated by various methods. We compare the results obtained from a Monte Carlo procedure to the estimations gained from the approximative covariance matrix of the profile parameters close to the optimal solution and apply them to EUV masks illuminated by plane waves with wavelengths in the range of 13 nm.
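The two uncertainty-estimation routes compared in the abstract, a Monte Carlo procedure versus the approximate covariance matrix near the optimum, can be contrasted on a toy linear fit, where the two agree exactly in expectation. Everything below (model, noise level, sample counts) is illustrative.

```python
import numpy as np

# Toy stand-in for the scatterometry parameter problem: a linear model
# y = J p + noise, fitted by least squares.
rng = np.random.default_rng(11)
t = np.linspace(0, 1, 30)
J = np.column_stack([np.ones_like(t), t])     # design matrix (Jacobian)
p_true = np.array([0.5, 2.0])
sigma = 0.05                                   # measurement uncertainty

# Route 1: approximate covariance from the Jacobian, C = sigma^2 (J^T J)^{-1}
C = sigma ** 2 * np.linalg.inv(J.T @ J)
analytic_std = np.sqrt(np.diag(C))

# Route 2: Monte Carlo, repeat the fit with fresh noise and take the spread
fits = []
for _ in range(2000):
    y = J @ p_true + sigma * rng.normal(size=t.size)
    fits.append(np.linalg.lstsq(J, y, rcond=None)[0])
mc_std = np.std(fits, axis=0)
```

For a genuinely nonlinear operator the covariance route linearizes about the optimum, so the Monte Carlo comparison tests how far that linearization can be trusted.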

  12. Transcriptional control in the segmentation gene network of Drosophila.

    PubMed

    Schroeder, Mark D; Pearce, Michael; Fak, John; Fan, HongQing; Unnerstall, Ulrich; Emberly, Eldon; Rajewsky, Nikolaus; Siggia, Eric D; Gaul, Ulrike

    2004-09-01

    The segmentation gene network of Drosophila consists of maternal and zygotic factors that generate, by transcriptional (cross-) regulation, expression patterns of increasing complexity along the anterior-posterior axis of the embryo. Using known binding site information for maternal and zygotic gap transcription factors, the computer algorithm Ahab recovers known segmentation control elements (modules) with excellent success and predicts many novel modules within the network and genome-wide. We show that novel module predictions are highly enriched in the network and typically clustered proximal to the promoter, not only upstream, but also in intronic space and downstream. When placed upstream of a reporter gene, they consistently drive patterned blastoderm expression, in most cases faithfully producing one or more pattern elements of the endogenous gene. Moreover, we demonstrate for the entire set of known and newly validated modules that Ahab's prediction of binding sites correlates well with the expression patterns produced by the modules, revealing basic rules governing their composition. Specifically, we show that maternal factors consistently act as activators and that gap factors act as repressors, except for the bimodal factor Hunchback. Our data suggest a simple context-dependent rule for its switch from repressive to activating function. Overall, the composition of modules appears well fitted to the spatiotemporal distribution of their positive and negative input factors. Finally, by comparing Ahab predictions with different categories of transcription factor input, we confirm the global regulatory structure of the segmentation gene network, but find odd skipped behaving like a primary pair-rule gene. The study expands our knowledge of the segmentation gene network by increasing the number of experimentally tested modules by 50%. For the first time, the entire set of validated modules is analyzed for binding site composition under a uniform set of
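At the bottom of module prediction sits scoring of candidate binding sites against known factor motifs. A minimal position-weight-matrix scan is sketched below; the motif, background and scoring are generic illustrations, not Ahab's actual probabilistic model.

```python
import numpy as np

# Toy position weight matrix for a hypothetical 4-bp binding motif
# (columns A, C, G, T; base probabilities per motif position).
pwm = np.array([
    [0.7, 0.1, 0.1, 0.1],
    [0.1, 0.1, 0.7, 0.1],
    [0.1, 0.1, 0.1, 0.7],
    [0.7, 0.1, 0.1, 0.1],
])
background = np.array([0.25, 0.25, 0.25, 0.25])
log_odds = np.log2(pwm / background)
idx = {b: i for i, b in enumerate("ACGT")}

def scan(seq):
    """Log-odds score of the motif at every position of seq."""
    L = pwm.shape[0]
    return [
        sum(log_odds[j, idx[seq[p + j]]] for j in range(L))
        for p in range(len(seq) - L + 1)
    ]

scores = scan("TTAGTAACGT")
best = int(np.argmax(scores))           # predicted binding-site position
```

Module predictors like Ahab aggregate many such site scores over a window rather than thresholding single positions.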

  13. Transcriptional Control in the Segmentation Gene Network of Drosophila

    PubMed Central

    Fan, HongQing; Unnerstall, Ulrich; Emberly, Eldon; Rajewsky, Nikolaus; Siggia, Eric D

    2004-01-01

    The segmentation gene network of Drosophila consists of maternal and zygotic factors that generate, by transcriptional (cross-) regulation, expression patterns of increasing complexity along the anterior-posterior axis of the embryo. Using known binding site information for maternal and zygotic gap transcription factors, the computer algorithm Ahab recovers known segmentation control elements (modules) with excellent success and predicts many novel modules within the network and genome-wide. We show that novel module predictions are highly enriched in the network and typically clustered proximal to the promoter, not only upstream, but also in intronic space and downstream. When placed upstream of a reporter gene, they consistently drive patterned blastoderm expression, in most cases faithfully producing one or more pattern elements of the endogenous gene. Moreover, we demonstrate for the entire set of known and newly validated modules that Ahab's prediction of binding sites correlates well with the expression patterns produced by the modules, revealing basic rules governing their composition. Specifically, we show that maternal factors consistently act as activators and that gap factors act as repressors, except for the bimodal factor Hunchback. Our data suggest a simple context-dependent rule for its switch from repressive to activating function. Overall, the composition of modules appears well fitted to the spatiotemporal distribution of their positive and negative input factors. Finally, by comparing Ahab predictions with different categories of transcription factor input, we confirm the global regulatory structure of the segmentation gene network, but find odd skipped behaving like a primary pair-rule gene. The study expands our knowledge of the segmentation gene network by increasing the number of experimentally tested modules by 50%. For the first time, the entire set of validated modules is analyzed for binding site composition under a uniform set of

  14. Sediment core and glacial environment reconstruction - a method review

    NASA Astrophysics Data System (ADS)

    Bakke, Jostein; Paasche, Øyvind

    2010-05-01

    Alpine glaciers are often located in remote and high-altitude regions of the world, areas that are only rarely covered by instrumental records. Reconstructions of glaciers have therefore proven useful for understanding past climate dynamics on both shorter and longer time-scales. One major drawback of glacier reconstructions based solely on moraine chronologies - by far the most common - is that, owing to the selective preservation of moraine ridges, such records cannot exclude the possibility of multiple, unrecorded Holocene glacier advances. This problem holds regardless of whether the moraines have been dated by cosmogenic isotopes, lichenometry, or radiocarbon dating of mega-fossils buried in till or underneath the moraines themselves. To overcome this problem Karlén (1976) initially suggested that glacial erosion and the associated production of rock-flour deposited in downstream lakes could provide a continuous record of glacial fluctuations. We want to discuss the methods used to reconstruct past glacier activity based on sediments deposited in distal glacier-fed lakes. By quantifying physical properties of glacial and extra-glacial sediments deposited in catchments, and in downstream lakes and fjords, it is possible to isolate and identify past glacier activity - size and production rate - that subsequently can be used to reconstruct environmental shifts and trends. Changes in average sediment evacuation from alpine glaciers are mainly governed by glacier size and the mass-turnover gradient, which determines the deformation rate at any given time. The amount of solid precipitation (mainly winter accumulation) versus loss due to melting during the ablation season (mainly summer temperature) pushes the mass-turnover gradient in either a positive or a negative direction. A prevailing positive net balance will lead to higher sedimentation rates and vice versa, which in turn can be recorded in downstream

  15. Track and vertex reconstruction: From classical to adaptive methods

    SciTech Connect

    Strandlie, Are; Fruehwirth, Rudolf

    2010-04-15

    This paper reviews classical and adaptive methods of track and vertex reconstruction in particle physics experiments. Adaptive methods have been developed to meet the experimental challenges at high-energy colliders, in particular, the CERN Large Hadron Collider. They can be characterized by the obliteration of the traditional boundaries between pattern recognition and statistical estimation, by the competition between different hypotheses about what constitutes a track or a vertex, and by a high level of flexibility and robustness achieved with a minimum of assumptions about the data. The theoretical background of some of the adaptive methods is described, and it is shown that there is a close connection between the two main branches of adaptive methods: neural networks and deformable templates, on the one hand, and robust stochastic filters with annealing, on the other hand. As both classical and adaptive methods of track and vertex reconstruction presuppose precise knowledge of the positions of the sensitive detector elements, the paper includes an overview of detector alignment methods and a survey of the alignment strategies employed by past and current experiments.
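The competition-plus-annealing idea behind the adaptive methods can be illustrated with a toy robust track fit (all numbers, the cutoff term, and the annealing schedule here are invented for illustration, not taken from the paper): each hit receives a soft assignment weight that competes against a fixed cutoff, and the temperature is lowered so background hits are gradually frozen out of a weighted least-squares line fit.

```python
import numpy as np

rng = np.random.default_rng(9)

# Hits along a straight "track" y = 0.5*x + 1, plus random background hits
x = np.linspace(0.0, 10.0, 30)
y = 0.5 * x + 1.0 + 0.02 * rng.standard_normal(30)
x_all = np.concatenate([x, rng.uniform(0, 10, 10)])
y_all = np.concatenate([y, rng.uniform(-5, 8, 10)])

# Annealed weighted least squares: soft assignment weights compete against a
# fixed cutoff; lowering the temperature T freezes outliers out of the fit
a, b = 0.0, 0.0
cutoff = np.exp(-4.5)                 # hypothetical competition term
for T in (10.0, 1.0, 0.1, 0.01):
    for _ in range(20):
        r2 = (y_all - (a * x_all + b)) ** 2
        w = np.exp(-r2 / (2 * T)) / (np.exp(-r2 / (2 * T)) + cutoff)
        sw = np.sqrt(w)
        A = np.vstack([x_all, np.ones_like(x_all)]).T
        a, b = np.linalg.lstsq(A * sw[:, None], y_all * sw, rcond=None)[0]

assert abs(a - 0.5) < 0.05 and abs(b - 1.0) < 0.1
```

At high temperature nearly all hits contribute, giving a rough initial estimate; as T drops, hits far from the current track lose their weight smoothly rather than by a hard cut.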

  16. Reverse optimization reconstruction method in non-null aspheric interferometry

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Liu, Dong; Shi, Tu; Yang, Yongying; Chong, Shiyao; Shen, Yibing; Bai, Jian

    2015-10-01

    The aspheric non-null test permits more flexible measurements than the null test. However, precise calibration of the retrace error has always been difficult. A reverse optimization reconstruction (ROR) method is proposed for retrace-error calibration as well as aspheric figure-error extraction, based on system modeling. An optimization function is set up within the system model, in which the experimental wavefront data serve as the optimization objective and the figure error under test in the model serves as the optimization variable. The optimization is executed by reverse ray tracing in the system model until the test wavefront in the model is consistent with the experimental one. At this point, the surface figure error in the model is considered consistent with that of the real surface. With Zernike fitting, the aspheric surface figure error is then reconstructed in the form of Zernike polynomials. Numerical simulations with error considerations verify the high accuracy of the ROR method. A set of experiments is carried out to demonstrate the validity and repeatability of the ROR method. Compared with the results of a Zygo interferometer (null test), the measurement error of the ROR method is below λ/10.
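The optimize-until-consistent structure of such a model-based calibration can be sketched in a toy setting. Everything below except that structure is an assumption: the interferometer's ray-trace model is replaced by a made-up linear "retrace" operator acting on a small vector of figure-error coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8  # number of figure-error coefficients (e.g. Zernike terms)

# Stand-in system model: the measured wavefront is the figure error passed
# through a known "retrace" distortion (a fixed linear mixing, illustrative only)
A = np.eye(n) + 0.05 * rng.standard_normal((n, n))
true_figure = 0.1 * rng.standard_normal(n)
measured = A @ true_figure              # the "experimental" wavefront data

# Optimization loop: adjust the model's figure error until the modelled
# wavefront matches the experimental one (gradient descent on the misfit)
figure = np.zeros(n)
step = 0.5 / np.linalg.norm(A, 2) ** 2
for _ in range(5000):
    misfit = A @ figure - measured
    figure -= step * A.T @ misfit

assert np.allclose(figure, true_figure, atol=1e-6)
```

In the real method the forward model is a full ray trace of the interferometer, so the misfit gradient is not available in closed form and a numerical optimizer drives the loop instead.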

  17. Image reconstruction by the speckle-masking method.

    PubMed

    Weigelt, G; Wirnitzer, B

    1983-07-01

    Speckle masking is a method for reconstructing high-resolution images of general astronomical objects from stellar speckle interferograms. In speckle masking no unresolvable star is required within the isoplanatic patch of the object. We present digital applications of speckle masking to close spectroscopic double stars. The speckle interferograms were recorded with the European Southern Observatory's 3.6-m telescope. Diffraction-limited resolution (0.03 arcsec) was achieved, which is about 30 times higher than the resolution of conventional astrophotography. PMID:19718124

  18. Accelerated signal encoding and reconstruction using pixon method

    DOEpatents

    Puetter, Richard; Yahil, Amos; Pina, Robert

    2005-05-17

    The method identifies a Pixon element, which is a fundamental and indivisible unit of information, and a Pixon basis, which is the set of possible functions from which the Pixon elements are selected. The actual Pixon elements selected from this basis during the reconstruction process represent the smallest number of such units required to fit the data and the minimum number of parameters necessary to specify the image. The Pixon kernels can have arbitrary properties (e.g., shape, size, and/or position) as needed to best fit the data.

  19. Accelerated signal encoding and reconstruction using pixon method

    DOEpatents

    Puetter, Richard; Yahil, Amos

    2002-01-01

    The method identifies a Pixon element, which is a fundamental and indivisible unit of information, and a Pixon basis, which is the set of possible functions from which the Pixon elements are selected. The actual Pixon elements selected from this basis during the reconstruction process represent the smallest number of such units required to fit the data and the minimum number of parameters necessary to specify the image. The Pixon kernels can have arbitrary properties (e.g., shape, size, and/or position) as needed to best fit the data.

  20. Accelerated signal encoding and reconstruction using pixon method

    DOEpatents

    Puetter, Richard; Yahil, Amos

    2002-01-01

    The method identifies a Pixon element, which is a fundamental and indivisible unit of information, and a Pixon basis, which is the set of possible functions from which the Pixon elements are selected. The actual Pixon elements selected from this basis during the reconstruction process represent the smallest number of such units required to fit the data and the minimum number of parameters necessary to specify the image. The Pixon kernels can have arbitrary properties (e.g., shape, size, and/or position) as needed to best fit the data.

  1. Algebraic filter approach for fast approximation of nonlinear tomographic reconstruction methods

    NASA Astrophysics Data System (ADS)

    Plantagie, Linda; Batenburg, Kees Joost

    2015-01-01

    We present a computational approach for fast approximation of nonlinear tomographic reconstruction methods by filtered backprojection (FBP) methods. Algebraic reconstruction algorithms are the methods of choice in a wide range of tomographic applications, yet they require significant computation time, restricting their usefulness. We build upon recent work on the approximation of linear algebraic reconstruction methods and extend the approach to the approximation of nonlinear reconstruction methods which are common in practice. We demonstrate that if a blueprint image is available that is sufficiently similar to the scanned object, our approach can compute reconstructions that approximate iterative nonlinear methods, yet have the same speed as FBP.
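The key observation behind such filter approximations can be seen in a toy linear setting: for a linear iterative scheme such as Landweber, the entire iteration collapses into one precomputed linear operator applied directly to the data, which can then be evaluated at FBP-like cost. The matrices below are random stand-ins for a real projection geometry.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((20, 10))   # toy projection matrix (20 rays, 10 voxels)
x_true = rng.standard_normal(10)
p = W @ x_true                      # projection data

# Landweber iteration: x_{k+1} = x_k + t * W^T (p - W x_k), starting from 0
t = 1.0 / np.linalg.norm(W, 2) ** 2
x = np.zeros(10)
for _ in range(200):
    x = x + t * W.T @ (p - W @ x)

# The same 200 iterations collapse into a single precomputed linear operator F
# acting directly on the data -- the essence of the algebraic-filter idea
M = np.eye(10) - t * W.T @ W
F = t * sum(np.linalg.matrix_power(M, k) for k in range(200)) @ W.T
assert np.allclose(F @ p, x, atol=1e-6)
```

Extending this to nonlinear iterative methods, as the paper does, requires linearizing around a blueprint image; the collapsed filter is then only an approximation.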

  2. A New Method for Coronal Magnetic Field Reconstruction

    NASA Astrophysics Data System (ADS)

    Yi, Sibaek; Choe, Gwangson; Lim, Daye

    2015-08-01

    We present a new, simple, variational method for the reconstruction of coronal force-free magnetic fields based on vector magnetogram data. Our method employs vector potentials for the magnetic field description in order to ensure the divergence-free condition. As boundary conditions, it requires only the normal components of the magnetic field and current density, so that the boundary conditions are not over-specified as in many other methods. The boundary normal current distribution is fixed once and for all at initialization and does not need continual adjustment as in stress-and-relax type methods. We have tested the computational code based on our new method in problems with known solutions and with actual photospheric data. When solutions are fully given at all boundaries, the accuracy of our method is comparable to that of the best-performing methods available. When magnetic field data are given only at the photospheric boundary, our method outperforms other methods in most “figures of merit” devised by Schrijver et al. (2006). Furthermore, the residual force in the solution is at least an order of magnitude smaller than that of any other method. The method can also accommodate the source-surface boundary condition at the top boundary. It is expected to contribute to the real-time monitoring of the Sun required for future space-weather forecasts.

  3. 3D scanning modeling method application in ancient city reconstruction

    NASA Astrophysics Data System (ADS)

    Ren, Pu; Zhou, Mingquan; Du, Guoguang; Shui, Wuyang; Zhou, Pengbo

    2015-07-01

    With the development of optical engineering technology, the precision of 3D scanning equipment has increased, and its role in 3D modeling has become more prominent. This paper proposes a 3D scanning modeling method that has been successfully applied in Chinese ancient city reconstruction. On one hand, for existing architecture, an improved algorithm based on multiple scans is adopted. Firstly, two scans are coarsely rigid-registered using spherical displacers and a vertex clustering method. Secondly, a global weighted ICP (iterative closest point) method is used to achieve fine rigid registration. On the other hand, for buildings that have already disappeared, an exemplar-driven algorithm for rapid modeling is proposed. Based on 3D scanning technology and historical data, a systematic approach to 3D modeling and virtual display of the ancient city is presented.
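The inner alignment step of an ICP-style rigid registration, given a set of point correspondences, can be sketched with the Kabsch algorithm; the weighting scheme and correspondence search of the paper's global weighted ICP are omitted, and the data here are synthetic.

```python
import numpy as np

def kabsch(P, Q):
    """Rigid transform (rotation R, translation t) best mapping P onto Q,
    assuming point correspondences are known (the inner step of ICP)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

rng = np.random.default_rng(2)
P = rng.standard_normal((50, 3))                # first "scan"
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])   # rotated + translated "scan"

R, t = kabsch(P, Q)
assert np.allclose(P @ R.T + t, Q, atol=1e-8)
```

Full ICP alternates this closed-form alignment with a nearest-neighbor correspondence search until the residual stops decreasing.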

  4. A reconstruction method for gappy and noisy arterial flow data.

    PubMed

    Yakhot, Alexander; Anor, Tomer; Karniadakis, George Em

    2007-12-01

    Proper orthogonal decomposition (POD), Kriging interpolation, and smoothing are applied to reconstruct gappy and noisy data of blood flow in a carotid artery. While we have applied these techniques to clinical data, in this paper in order to rigorously evaluate their effectiveness we rely on data obtained by computational fluid dynamics (CFD). Specifically, gappy data sets are generated by removing nodal values from high-resolution 3-D CFD data (at random or in a fixed area) while noisy data sets are formed by superimposing speckle noise on the CFD results. A combined POD-Kriging procedure is applied to planar data sets mimicking coarse resolution "ultrasound-like" blood flow images. A method for locating the vessel wall boundary and for calculating the wall shear stress (WSS) is also proposed. The results show good agreement with the original CFD data. The combined POD-Kriging method, enhanced by proper smoothing if needed, holds great potential in dealing effectively with gappy and noisy data reconstruction of in vivo velocity measurements based on color Doppler ultrasound (CDUS) imaging or magnetic resonance angiography (MRA). PMID:18092738
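The gappy-POD step of such a procedure (in the spirit of Everson and Sirovich) can be sketched on synthetic data; the Kriging interpolation and smoothing stages are omitted, and the two-mode "flow" below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Snapshot matrix built from two spatial modes -> rank-2 training data
x = np.linspace(0, 2 * np.pi, 40)
snapshots = np.array([np.sin(x) * np.cos(0.3 * k) + np.cos(2 * x) * np.sin(0.3 * k)
                      for k in range(30)])
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
modes = Vt[:2]                      # POD basis (two modes suffice here)

# Gappy field: remove ~30% of the nodal values from a new snapshot
field = 0.7 * np.sin(x) + 0.2 * np.cos(2 * x)
mask = rng.random(40) > 0.3         # True where data is available

# Solve for the POD coefficients using only the available points, then
# evaluate the basis everywhere to fill in the gaps
a, *_ = np.linalg.lstsq(modes[:, mask].T, field[mask], rcond=None)
reconstructed = a @ modes
assert np.allclose(reconstructed, field, atol=1e-8)
```

With noisy data the least-squares fit no longer interpolates exactly, which is where the smoothing and Kriging stages of the paper come in.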

  5. Hysteresis in a synthetic mammalian gene network.

    PubMed

    Kramer, Beat P; Fussenegger, Martin

    2005-07-01

    Bistable and hysteretic switches, enabling cells to adopt multiple internal expression states in response to a single external input signal, have a pivotal impact on biological systems, ranging from cell-fate decisions to cell-cycle control. We have designed a synthetic hysteretic mammalian transcription network. A positive feedback loop, consisting of a transgene and transactivator (TA) cotranscribed by TA's cognate promoter, is repressed by constitutive expression of a macrolide-dependent transcriptional silencer, whose activity is modulated by the macrolide antibiotic erythromycin. The antibiotic concentration, at which a quasi-discontinuous switch of transgene expression occurs, depends on the history of the synthetic transcription circuitry. If the network components are imbalanced, a graded rather than a quasi-discontinuous signal integration takes place. These findings are consistent with a mathematical model. Synthetic gene networks, which are able to emulate natural gene expression behavior, may foster progress in future gene therapy and tissue engineering initiatives. PMID:15972812
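The history dependence described above can be reproduced with a minimal ODE model of a positive-feedback loop under a repressing input; all rate constants below are invented, and the model is far simpler than the actual synthetic circuit.

```python
import numpy as np

def steady_state(signal, x0, steps=20000, dt=0.01):
    # dx/dt = basal + feedback * x^2/(K + x^2) - signal * x  (toy parameters)
    x = x0
    for _ in range(steps):
        x += dt * (0.05 + 4.0 * x**2 / (1.0 + x**2) - signal * x)
    return x

# Sweep the repressing input up from a high-expression history,
# then back down from a low-expression history
up, down = [], []
x = steady_state(0.5, x0=5.0)
for s in np.linspace(0.5, 4.0, 15):
    x = steady_state(s, x0=x)
    up.append(x)
x = steady_state(4.0, x0=0.0)
for s in np.linspace(4.0, 0.5, 15):
    x = steady_state(s, x0=x)
    down.append(x)
down = down[::-1]   # align with ascending signal values

# In the bistable window the steady state depends on history (hysteresis)
assert any(abs(u - d) > 1.0 for u, d in zip(up, down))
```

Weakening the feedback term (the analogue of imbalancing the network components) removes the bistable window and the response becomes graded, as the abstract describes.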

  6. Next-Generation Synthetic Gene Networks

    PubMed Central

    Lu, Timothy K.; Khalil, Ahmad S.; Collins, James J.

    2009-01-01

    Synthetic biology is focused on the rational construction of biological systems based on engineering principles. During the field’s first decade of development, significant progress has been made in designing biological parts and assembling them into genetic circuits to achieve basic functionalities. These circuits have been used to construct proof-of-principle systems with promising results in industrial and medical applications. However, advances in synthetic biology have been limited by a lack of interoperable parts, techniques for dynamically probing biological systems, and frameworks for the reliable construction and operation of complex, higher-order networks. Here, we highlight challenges and goals for next-generation synthetic gene networks, in the context of potential applications in medicine, biotechnology, bioremediation, and bioenergy. PMID:20010597

  7. [Image quality evaluation of new image reconstruction methods applying the iterative reconstruction].

    PubMed

    Takata, Tadanori; Ichikawa, Katsuhiro; Hayashi, Hiroyuki; Mitsui, Wataru; Sakuta, Keita; Koshida, Haruka; Yokoi, Tomohiro; Matsubara, Kousuke; Horii, Jyunsei; Iida, Hiroji

    2012-01-01

    The purpose of this study was to evaluate the image quality of an iterative reconstruction method, iterative reconstruction in image space (IRIS), implemented in a 128-slice multi-detector computed tomography (MDCT) system, the Siemens Somatom Definition Flash (Definition). We evaluated image noise by the standard deviation (SD), as in previous studies, and in addition measured the modulation transfer function (MTF), noise power spectrum (NPS), and perceptual low-contrast detectability using a water phantom containing a low-contrast object with a 10 Hounsfield unit (HU) contrast, to evaluate whether the noise reduction of IRIS was effective. The SD and NPS were measured from images of a water phantom. The MTF was measured from images of a thin metal wire and a bar-pattern phantom with a bar contrast of 125 HU. The NPS of IRIS was lower than that of filtered back projection (FBP) in the middle- and high-frequency regions. The SD values were reduced by 21%. The MTFs of IRIS and FBP measured with the wire phantom coincided precisely. However, for the bar-pattern phantom, the MTF values of IRIS at 0.625 and 0.833 cycles/mm were lower than those of FBP. Despite the reduction of the SD and the NPS, the low-contrast detectability study indicated no significant difference between IRIS and FBP. From these results, it was demonstrated that IRIS reduces noise while exactly preserving high-contrast resolution, with slight degradation of middle-contrast resolution, and slightly improves low-contrast detectability, though not significantly. PMID:22516592
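The NPS measurement referenced above can be sketched for the 2-D case: the spectrum is the ensemble average of the squared DFT magnitude of mean-subtracted uniform-phantom ROIs, scaled by the pixel area. White noise stands in for real CT noise here, and the pixel spacing is a made-up value.

```python
import numpy as np

rng = np.random.default_rng(4)

# Stack of uniform-phantom noise ROIs (white noise stands in for CT noise)
rois = rng.standard_normal((50, 64, 64))
pixel = 0.5  # mm, hypothetical pixel spacing

# 2-D NPS: ensemble average of |DFT|^2 of mean-subtracted ROIs,
# normalized by the ROI area in frequency-consistent units
dfts = np.fft.fft2(rois - rois.mean(axis=(1, 2), keepdims=True))
nps = (np.abs(dfts) ** 2).mean(axis=0) * (pixel ** 2) / (64 * 64)

# Sanity check: for white noise, integrating the NPS recovers the pixel variance
var_from_nps = nps.sum() / (64 * 64 * pixel ** 2)
assert abs(var_from_nps - rois.var()) < 0.05
```

For real CT noise the NPS is not flat, and comparing its shape between FBP and an iterative method (as in this study) reveals at which frequencies the noise reduction acts.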

  8. [The method of the isolated reconstruction by gastropancreatoduodenal resection].

    PubMed

    Shchepotin, I B; Vasil'ev, O V; Lukashenko, A V; Rozumiĭ, D A; Priĭmak, V V

    2011-01-01

    The modification of the reconstructive stage of gastropancreatoduodenal resection aims to increase the safety of the pancreatojejunoanastomosis by minimizing the impact of aggressive substances such as bile and pancreatic juice. The modification consists of an isolated pancreatojejunoanastomosis on a Roux-en-Y intestinal loop, with the gastro- and hepaticojejunoanastomoses on a second intestinal loop, separated with the use of the stub. The method thus allows separate passage of pancreatic juice, bile, and gastric contents, excluding their impact on the other anastomoses. The described modification was performed in 6 patients. There were no cases of anastomotic insufficiency. The mean hospital stay was 10.5 days. The method proved to be effective and safe, providing good initial results. PMID:22334901

  9. Comparison of image reconstruction methods for structured illumination microscopy

    NASA Astrophysics Data System (ADS)

    Lukeš, Tomas; Hagen, Guy M.; Křížek, Pavel; Švindrych, Zdeněk.; Fliegel, Karel; Klíma, Miloš

    2014-05-01

    Structured illumination microscopy (SIM) is a recent microscopy technique that enables one to go beyond the diffraction limit using patterned illumination. The high-frequency information is encoded into the observed image through aliasing. By acquiring multiple images with different illumination patterns, the aliased components can be separated and a high-resolution image reconstructed. Here we investigate image processing methods that perform the task of high-resolution image reconstruction, namely square-law detection, scaled subtraction, super-resolution SIM (SR-SIM), and Bayesian estimation. The optical sectioning and lateral resolution improvement abilities of these algorithms were tested under various noise levels on simulated data and on fluorescence microscopy images of a pollen grain test sample and of a cultured cell stained for the actin cytoskeleton. In order to compare the performance of the algorithms, the following objective criteria were evaluated: signal-to-noise ratio (SNR), signal-to-background ratio (SBR), circular average of the power spectral density, and the S3 sharpness index. The results show that SR-SIM and Bayesian estimation combine the patterned-illumination images more effectively and provide better lateral resolution in exchange for more complex image processing. SR-SIM requires one to precisely shift the separated spectral components to their proper positions in reciprocal space. High noise levels in the raw data can cause inaccuracies in the shifts of the spectral components, which degrade the super-resolved image. Bayesian estimation has proven to be more robust to changes in noise level and illumination pattern frequency.

  10. A CLASS OF RECONSTRUCTED DISCONTINUOUS GALERKIN METHODS IN COMPUTATIONAL FLUID DYNAMICS

    SciTech Connect

    Hong Luo; Yidong Xia; Robert Nourgaliev

    2011-05-01

    A class of reconstructed discontinuous Galerkin (DG) methods is presented to solve compressible flow problems on arbitrary grids. The idea is to combine the efficiency of the reconstruction methods in finite volume methods and the accuracy of the DG methods to obtain a better numerical algorithm in computational fluid dynamics. The beauty of the resulting reconstructed discontinuous Galerkin (RDG) methods is that they provide a unified formulation for both finite volume and DG methods, and contain both classical finite volume and standard DG methods as two special cases, thus allowing a direct efficiency comparison. Both Green-Gauss and least-squares reconstruction methods and a least-squares recovery method are presented to obtain a quadratic polynomial representation of the underlying linear discontinuous Galerkin solution on each cell via a so-called in-cell reconstruction process. The devised in-cell reconstruction is designed to augment the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. These three reconstructed discontinuous Galerkin methods are used to compute a variety of compressible flow problems on arbitrary meshes to assess their accuracy. The numerical experiments demonstrate that all three reconstructed discontinuous Galerkin methods can significantly improve the accuracy of the underlying second-order DG method, with the least-squares reconstructed DG method providing the best performance in terms of accuracy, efficiency, and robustness.
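The in-cell reconstruction idea can be sketched in 1-D, far below the paper's 3-D arbitrary-grid setting: each cell stores a linear DG solution (cell mean and mean slope), and a quadratic is fitted by least squares so that its means and slopes over the cell and its two neighbours match the stored data. The setup below is a simplified illustration, not the paper's formulation.

```python
import numpy as np

def reconstruct(means, slopes, i):
    """Least-squares quadratic q(s) = a0 + a1*s + a2*s^2 on cell i
    (s measured in cell widths, centred on cell i), constrained to match
    the cell means and mean slopes of cells i-1, i, i+1."""
    rows, rhs = [], []
    for d in (-1, 0, 1):
        rows.append([1.0, d, d * d + 1.0 / 12.0])   # cell average of q
        rhs.append(means[i + d])
        rows.append([0.0, 1.0, 2.0 * d])            # cell-average slope of q
        rhs.append(slopes[i + d])
    a, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return a

# If the exact solution is u(s) = s^2, the stored linear DG data per cell at
# offset d are mean d^2 + 1/12 and slope 2d; the reconstruction then recovers
# the quadratic exactly, raising the polynomial order by one.
means = np.array([d * d + 1.0 / 12.0 for d in (-1, 0, 1)])
slopes = np.array([2.0 * d for d in (-1, 0, 1)])
a0, a1, a2 = reconstruct(means, slopes, 1)
assert np.allclose([a0, a1, a2], [0.0, 0.0, 1.0], atol=1e-10)
```

In the actual RDG methods the same principle is applied on unstructured meshes, with the choice of matching conditions (Green-Gauss, least-squares, or recovery) distinguishing the three variants.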

  11. Evolution of a Core Gene Network for Skeletogenesis in Chordates

    PubMed Central

    Hecht, Jochen; Panopoulou, Georgia; Podsiadlowski, Lars; Poustka, Albert J.; Dieterich, Christoph; Ehrich, Siegfried; Suvorova, Julia; Mundlos, Stefan; Seitz, Volkhard

    2008-01-01

    The skeleton is one of the most important features for the reconstruction of vertebrate phylogeny but few data are available to understand its molecular origin. In mammals the Runt genes are central regulators of skeletogenesis. Runx2 was shown to be essential for osteoblast differentiation, tooth development, and bone formation. Both Runx2 and Runx3 are essential for chondrocyte maturation. Furthermore, Runx2 directly regulates Indian hedgehog expression, a master coordinator of skeletal development. To clarify the correlation of Runt gene evolution and the emergence of cartilage and bone in vertebrates, we cloned the Runt genes from hagfish as representative of jawless fish (MgRunxA, MgRunxB) and from dogfish as representative of jawed cartilaginous fish (ScRunx1–3). According to our phylogenetic reconstruction the stem species of chordates harboured a single Runt gene and thereafter Runt locus duplications occurred during early vertebrate evolution. All newly isolated Runt genes were expressed in cartilage according to quantitative PCR. In situ hybridisation confirmed high MgRunxA expression in hard cartilage of hagfish. In dogfish ScRunx2 and ScRunx3 were expressed in embryonal cartilage whereas all three Runt genes were detected in teeth and placoid scales. In cephalochordates (lancelets) Runt, Hedgehog and SoxE were strongly expressed in the gill bars and expression of Runt and Hedgehog was found in endo- as well as ectodermal cells. Furthermore we demonstrate that the lancelet Runt protein binds to Runt binding sites in the lancelet Hedgehog promoter and regulates its activity. Together, these results suggest that Runt and Hedgehog were part of a core gene network for cartilage formation, which was already active in the gill bars of the common ancestor of cephalochordates and vertebrates and diversified after Runt duplications had occurred during vertebrate evolution. The similarities in expression patterns of Runt genes support the view that teeth and

  12. Vector intensity reconstruction using the data completion method.

    PubMed

    Langrenne, Christophe; Garcia, Alexandre

    2013-04-01

    This paper presents an application of the data completion method (DCM) for vector intensity reconstructions. A mobile array of 36 pressure-pressure probes (72 microphones) is used to perform measurements near a planar surface. Nevertheless, since the proposed method is based on integral formulations, DCM can be applied with any kind of geometry. This method requires the knowledge of Cauchy data (pressure and velocity) on a part of the boundary of an empty domain in order to evaluate pressure and velocity on the remaining part of the boundary. Intensity vectors are calculated in the interior domain surrounded by the measurement array. This inverse acoustic problem requires the use of a regularization method to obtain a realistic solution. An experiment in a closed wooden car trunk mock-up excited by a shaker and two loudspeakers is presented. In this case, where the volume of the mock-up is small (0.61 m(3)), standing-waves and fluid structure interactions appear and show that DCM is a powerful tool to identify sources in a confined space. PMID:23556589

  13. An iterative method for the reconstruction of the coronary arteries from rotational x-ray angiography

    NASA Astrophysics Data System (ADS)

    Hansis, Eberhard; Schäfer, Dirk; Grass, Michael; Dössel, Olaf

    2007-03-01

    Three-dimensional (3D) reconstruction of the coronary arteries offers great advantages in the diagnosis and treatment of cardiovascular diseases, compared to two-dimensional X-ray angiograms. Besides improved roadmapping, quantitative analysis of vessel lesions is possible. To perform 3D reconstruction, rotational projection data of the selectively contrast-agent-enhanced coronary arteries are acquired with simultaneous ECG recording. For the reconstruction of one cardiac phase, the corresponding projections are selected from the rotational sequence by nearest-neighbor ECG gating. This typically provides only 5-10 projections per cardiac phase. The severe angular undersampling leads to an ill-posed reconstruction problem. In this contribution, an iterative reconstruction method is presented which employs regularizations especially suited for the given reconstruction problem. The coronary arteries cover only a small fraction of the reconstruction volume. Therefore, we formulate the reconstruction problem as a minimization of the L1-norm of the reconstructed image, which results in a spatially sparse object. Two additional regularization terms are introduced: a 3D vesselness prior, which is reconstructed from vesselness-filtered projection data, and a Gibbs smoothing prior. The regularizations favor the reconstruction of the desired object, while taking care not to over-constrain the reconstruction by overly detailed a priori assumptions. Simulated projection data of a coronary artery software phantom are used to evaluate the performance of the method. Human data of clinical cases are presented to show the method's potential for clinical application.
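The L1-sparsity part of such a formulation can be sketched with ISTA (iterative shrinkage-thresholding) on a toy undersampled linear system; the vesselness and Gibbs priors are omitted, and the random matrix below merely stands in for a real projection geometry.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy undersampled system: 60 "projection" samples of a 100-voxel volume
# in which only 4 voxels (the sparse "vessel tree") are non-zero
A = rng.standard_normal((60, 100)) / np.sqrt(60)
x_true = np.zeros(100)
x_true[rng.choice(100, 4, replace=False)] = rng.uniform(1.0, 2.0, 4)
p = A @ x_true

# ISTA: gradient step on ||Ax - p||^2 followed by soft-thresholding,
# which minimizes the data term plus an L1 sparsity penalty
lam = 0.02
t = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(100)
for _ in range(5000):
    x = x - t * A.T @ (A @ x - p)
    x = np.sign(x) * np.maximum(np.abs(x) - t * lam, 0.0)

assert np.linalg.norm(x - x_true) / np.linalg.norm(x_true) < 0.2
```

The soft-thresholding step is what drives most voxels exactly to zero, mirroring the paper's assumption that the vessel tree occupies only a small fraction of the volume.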

  14. Paper-based Synthetic Gene Networks

    PubMed Central

    Pardee, Keith; Green, Alexander A.; Ferrante, Tom; Cameron, D. Ewen; DaleyKeyser, Ajay; Yin, Peng; Collins, James J.

    2014-01-01

    Synthetic gene networks have wide-ranging uses in reprogramming and rewiring organisms. To date, there has not been a way to harness the vast potential of these networks beyond the constraints of a laboratory or in vivo environment. Here, we present an in vitro paper-based platform that provides a new venue for synthetic biologists to operate, and a much-needed medium for the safe deployment of engineered gene circuits beyond the lab. Commercially available cell-free systems are freeze-dried onto paper, enabling the inexpensive, sterile and abiotic distribution of synthetic biology-based technologies for the clinic, global health, industry, research and education. For field use, we create circuits with colorimetric outputs for detection by eye, and fabricate a low-cost, electronic optical interface. We demonstrate this technology with small molecule and RNA actuation of genetic switches, rapid prototyping of complex gene circuits, and programmable in vitro diagnostics, including glucose sensors and strain-specific Ebola virus sensors. PMID:25417167

  15. Paper-based synthetic gene networks.

    PubMed

    Pardee, Keith; Green, Alexander A; Ferrante, Tom; Cameron, D Ewen; DaleyKeyser, Ajay; Yin, Peng; Collins, James J

    2014-11-01

    Synthetic gene networks have wide-ranging uses in reprogramming and rewiring organisms. To date, there has not been a way to harness the vast potential of these networks beyond the constraints of a laboratory or in vivo environment. Here, we present an in vitro paper-based platform that provides an alternate, versatile venue for synthetic biologists to operate and a much-needed medium for the safe deployment of engineered gene circuits beyond the lab. Commercially available cell-free systems are freeze dried onto paper, enabling the inexpensive, sterile, and abiotic distribution of synthetic-biology-based technologies for the clinic, global health, industry, research, and education. For field use, we create circuits with colorimetric outputs for detection by eye and fabricate a low-cost, electronic optical interface. We demonstrate this technology with small-molecule and RNA actuation of genetic switches, rapid prototyping of complex gene circuits, and programmable in vitro diagnostics, including glucose sensors and strain-specific Ebola virus sensors. PMID:25417167

  16. Analysis of Cascading Failure in Gene Networks

    PubMed Central

    Sun, Longxiao; Wang, Shudong; Li, Kaikai; Meng, Dazhi

    2012-01-01

    Understanding the functional mechanisms by which cancer-related genes contribute to the formation and development of cancers is an important research subject. Modern data-analysis methodology plays a very important role in deducing the relationship between cancers and cancer-related genes and in analyzing the functional mechanisms of the genome. In this research, we construct mutual information networks using gene expression profiles of glioma and renal tissues in normal and cancer conditions. We investigate the relationship between structure and robustness in the gene networks of the two tissues using a cascading failure model based on betweenness centrality. We define several parameters, such as the percentage of failed nodes in the network, the average size-ratio of cascading failure, and the cumulative probability of the size-ratio of cascading failure, to measure the robustness of the networks. By comparing the control group and the experiment groups, we find that the networks of the experiment groups are more robust than those of the control group. Genes that can cause large-scale failure are called structural key genes. Some of them have been confirmed to be closely related to the formation and development of glioma and renal cancer, respectively. Most of them are predicted to play important roles during the formation of glioma and renal cancer, and may be oncogenes, suppressor genes, or other candidate cancer genes in glioma and renal cancer cells. However, these studies provide little information about the detailed roles of the identified cancer genes. PMID:23248647
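A betweenness-based cascading failure model of this kind can be sketched on a toy ring network (standing in for a gene network): each node's capacity is proportional to its initial betweenness load, the highest-load node is removed, and any node whose recomputed betweenness exceeds its capacity fails in turn. The tolerance factor and the graph are invented for illustration.

```python
from collections import deque

def betweenness(adj):
    """Brandes' algorithm for unweighted graphs
    (counts ordered pairs, so undirected values are doubled)."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        preds = {v: [] for v in adj}
        order, queue = [], deque([s])
        while queue:                       # BFS: shortest-path counts
            v = queue.popleft(); order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; preds[w].append(v)
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):          # dependency accumulation
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

# Ring network of 8 "genes": every node initially carries the same load
ring = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
load0 = betweenness(ring)
assert all(abs(b - 9.0) < 1e-9 for b in load0.values())

# Capacity = (1 + tolerance) * initial load; remove one node, then let every
# node whose recomputed betweenness exceeds its capacity fail, repeatedly
capacity = {v: 1.2 * load0[v] for v in ring}
failed = {0}
while True:
    live = {v: [w for w in ring[v] if w not in failed]
            for v in ring if v not in failed}
    bc = betweenness(live)
    overloaded = {v for v in live if bc[v] > capacity[v]}
    if not overloaded:
        break
    failed |= overloaded

assert len(failed) == 4   # the single attack cascades to three more nodes
```

Removing one node turns the ring into a path, whose central nodes must carry far more shortest-path load than their capacity allows, so the failure propagates.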

  17. Evaluation of back projection methods for breast tomosynthesis image reconstruction.

    PubMed

    Zhou, Weihua; Lu, Jianping; Zhou, Otto; Chen, Ying

    2015-06-01

    Breast cancer is the most common cancer among women in the USA. Compared to mammography, digital breast tomosynthesis is a new imaging technique that may improve diagnostic accuracy by removing the ambiguities of overlapped tissues and providing 3D information about the breast. Tomosynthesis reconstruction algorithms generate 3D reconstructed slices from a few limited-angle projection images. Among the different reconstruction algorithms, back projection (BP) is the foundation of several reconstruction techniques that add deblurring, such as filtered back projection. In this paper, two BP variants, α-trimmed BP and principal component analysis-based BP, were proposed to improve image quality over that of traditional BP. Computer simulations and phantom studies demonstrated that α-trimmed BP may improve signal response performance and suppress noise in breast tomosynthesis image reconstruction. PMID:25384538
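The α-trimmed combination step can be sketched as follows: instead of averaging the back-projected values a voxel receives from all views, the α lowest and α highest values are discarded first, suppressing views corrupted by out-of-plane structures. All numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

# Back-projected values for 1000 voxels across 15 projection views;
# plain BP averages them, alpha-trimmed BP drops the extremes first
views = np.full((1000, 15), 10.0) + rng.standard_normal((1000, 15))
views[:, 0] += 40.0          # one view corrupted by an out-of-plane artifact

def alpha_trimmed_bp(v, alpha=2):
    s = np.sort(v, axis=-1)
    return s[..., alpha:-alpha].mean(axis=-1)   # drop alpha lowest/highest

plain = views.mean(axis=-1)
trimmed = alpha_trimmed_bp(views)

# The trimmed combination is far less biased by the corrupted view
assert abs(trimmed.mean() - 10.0) < abs(plain.mean() - 10.0)
```

Setting alpha=0 recovers plain BP, while large alpha approaches a per-voxel median; the trade-off is between artifact suppression and noise averaging.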

  18. Reconstructing palaeoclimatic variables from fossil pollen using boosted regression trees: comparison and synthesis with other quantitative reconstruction methods

    NASA Astrophysics Data System (ADS)

    Salonen, J. Sakari; Luoto, Miska; Alenius, Teija; Heikkilä, Maija; Seppä, Heikki; Telford, Richard J.; Birks, H. John B.

    2014-03-01

    We test and analyse a new calibration method, boosted regression trees (BRTs), in palaeoclimatic reconstructions based on fossil pollen assemblages. We apply BRTs to multiple Holocene and Lateglacial pollen sequences from northern Europe, and compare their performance with two commonly used calibration methods: weighted averaging regression (WA) and the modern-analogue technique (MAT). Using these calibration methods and fossil pollen data, we present synthetic reconstructions of Holocene summer temperature, winter temperature, and water balance changes in northern Europe. Highly consistent trends are found for summer temperature, with a distinct Holocene thermal maximum at ca 8000-4000 cal. a BP and a mean Tjja anomaly of ca +0.7 °C at 6 ka compared to 0.5 ka. We were unable to reliably reconstruct winter temperature or water balance, due to the confounding effect of summer temperature and the large between-reconstruction variability. We find BRTs to be a promising tool for quantitative reconstructions from palaeoenvironmental proxy data. BRTs show good performance in cross-validations compared with WA and MAT, can model a variety of taxon response types, find relevant predictors and incorporate interactions between predictors, and show some robustness with non-analogue fossil assemblages.
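Boosted regression trees fit an ensemble of small trees, each trained on the residuals of the ensemble built so far. A minimal numpy sketch with depth-1 trees (stumps) conveys the mechanism; production BRT implementations add deeper trees, stochastic subsampling, and shrinkage schedules, and all parameters below are illustrative.

```python
import numpy as np

def fit_stump(X, r):
    """Best single-split tree (stump) for squared-error residuals r."""
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:     # skip the degenerate all-left split
            left = X[:, j] <= t
            lv, rv = r[left].mean(), r[~left].mean()
            err = ((r - np.where(left, lv, rv)) ** 2).sum()
            if best is None or err < best[0]:
                best = (err, j, t, lv, rv)
    return best[1:]

def brt_fit(X, y, n_trees=100, lr=0.1):
    """Gradient boosting with stumps: each tree fits the current residuals."""
    base = y.mean()
    pred = np.full(len(y), base)
    stumps = []
    for _ in range(n_trees):
        j, t, lv, rv = fit_stump(X, y - pred)
        pred += lr * np.where(X[:, j] <= t, lv, rv)
        stumps.append((j, t, lv, rv))
    return base, lr, stumps

def brt_predict(model, X):
    base, lr, stumps = model
    pred = np.full(len(X), base)
    for j, t, lv, rv in stumps:
        pred += lr * np.where(X[:, j] <= t, lv, rv)
    return pred
```

In a calibration setting, the rows of `X` would be modern pollen assemblages and `y` the observed climate variable; the fitted model is then applied to fossil assemblages.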

  19. An iterative reconstruction method of complex images using expectation maximization for radial parallel MRI

    NASA Astrophysics Data System (ADS)

    Choi, Joonsung; Kim, Dongchan; Oh, Changhyun; Han, Yeji; Park, HyunWook

    2013-05-01

    In MRI (magnetic resonance imaging), signal sampling along a radial k-space trajectory is preferred in certain applications due to its distinct advantages such as robustness to motion, and the radial sampling can be beneficial for reconstruction algorithms such as parallel MRI (pMRI) due to the incoherency. For radial MRI, the image is usually reconstructed from projection data using analytic methods such as filtered back-projection or Fourier reconstruction after gridding. However, the quality of the image reconstructed by these analytic methods can be degraded when the number of acquired projection views is insufficient. In this paper, we propose a novel reconstruction method based on the expectation maximization (EM) method, where the EM algorithm is remodeled for MRI so that complex images can be reconstructed. Then, to optimize the proposed method for radial pMRI, a reconstruction method that uses the coil sensitivity information of multichannel RF coils is formulated. Experimental results from synthetic and in vivo data show that the proposed method produces better reconstructed images than the analytic methods, even from highly subsampled data, and provides monotonic convergence properties compared to the conjugate gradient based reconstruction method.

  1. High-quality image reconstruction method for ptychography with partially coherent illumination

    NASA Astrophysics Data System (ADS)

    Yu, Wei; Wang, Shouyu; Veetil, Suhas; Gao, Shumei; Liu, Cheng; Zhu, Jianqiang

    2016-06-01

    The influence of partial coherence on image reconstruction in ptychography is analyzed, and a simple method is proposed to reconstruct a clear image of a weakly scattering object under partially coherent illumination. It is demonstrated numerically and experimentally that by illuminating a weakly scattering object with a divergent radiation beam, and performing the reconstruction only from the bright-field diffraction data, the mathematical ambiguity and the corresponding reconstruction errors related to partial coherence can be remarkably suppressed, so that clear reconstructed images can be generated even under severely incoherent illumination.

  2. Simultaneous segmentation and reconstruction: A level set method approach for limited view computed tomography

    SciTech Connect

    Yoon, Sungwon; Pineda, Angel R.; Fahrig, Rebecca

    2010-05-15

    Purpose: An iterative tomographic reconstruction algorithm that simultaneously segments and reconstructs the reconstruction domain is proposed and applied to tomographic reconstructions from a sparse number of projection images. Methods: The proposed algorithm uses a two-phase level set method segmentation in conjunction with an iterative tomographic reconstruction to achieve simultaneous segmentation and reconstruction. The simultaneous segmentation and reconstruction is achieved by alternating between level set function evolutions and per-region intensity value updates. To deal with the limited number of projections, a priori information about the reconstruction is enforced via a penalized likelihood function. Specifically, a smooth function within each region (piecewise smooth function) and bounded function intensity values for each region are assumed. This a priori information is formulated into a quadratic objective function with linear bound constraints. The level set function evolutions are achieved by artificially time evolving the level set function in the negative gradient direction; the intensity value updates are achieved by using the gradient projection conjugate gradient algorithm. Results: The proposed simultaneous segmentation and reconstruction results were compared to "conventional" iterative reconstruction (with no segmentation), iterative reconstruction followed by segmentation, and filtered backprojection. Improvements of 6%-13% in the normalized root mean square error were observed when the proposed algorithm was applied to simulated projections of a numerical phantom and to real fan-beam projections of the Catphan phantom, both of which did not satisfy the a priori assumptions. Conclusions: The proposed simultaneous segmentation and reconstruction resulted in improved reconstruction image quality. The algorithm correctly segments the reconstruction space into regions, preserves sharp edges between different regions, and smoothes the noise

  3. Cerec: correlation, an accurate and practical method for occlusal reconstruction.

    PubMed

    Prévost, A P; Bouchard, Y

    2001-07-01

    The correlation technique explained here shows one of the possibilities for occlusal reconstruction offered by the Cerec approach. The various stages of this technique are described and illustrated. The most current applications are reviewed. PMID:11862885

  4. The Local Front Reconstruction Method for direct simulation of two- and three-dimensional multiphase flows

    NASA Astrophysics Data System (ADS)

    Shin, Seungwon; Yoon, Ikroh; Juric, Damir

    2011-07-01

    We present a new interface reconstruction technique, the Local Front Reconstruction Method (LFRM), for incompressible multiphase flows. This new method falls in the category of Front Tracking methods, but it shares the automatic topology-handling characteristics of the previously proposed Level Contour Reconstruction Method (LCRM). The LFRM tracks the phase interface explicitly as in Front Tracking, but there is no logical connectivity between interface elements, which greatly eases the algorithmic complexity. Topological changes such as interfacial merging or pinch-off are dealt with automatically and naturally as in the Level Contour Reconstruction Method. Here the method is described for both two- and three-dimensional flow geometries. The interfacial reconstruction technique in the LFRM differs from that in the LCRM formulation by forgoing the use of an Eulerian distance field function. Instead, the LFRM uses information from the original interface elements directly to generate the new interface in a mass-conservative way, showing significantly improved local mass conservation. Because the reconstruction procedure is carried out independently in each individual reconstruction cell after an initial localization process, an adaptive reconstruction procedure can be easily implemented to increase the accuracy while at the same time significantly decreasing the computational time required to perform the reconstruction. Several benchmarking tests are performed to validate the improved accuracy and computational efficiency as compared to the LCRM. The results demonstrate superior performance of the LFRM in maintaining detailed interfacial shapes and good local mass conservation, especially when using low-resolution Eulerian grids.

  5. Analysis of method of 3D shape reconstruction using scanning deflectometry

    NASA Astrophysics Data System (ADS)

    Novák, Jiří; Novák, Pavel; Mikš, Antonín

    2013-04-01

    This work presents a scanning deflectometric approach to solving the 3D surface reconstruction problem, based on measurements of the surface gradient of optically smooth surfaces. It is shown that the description of this problem leads to a first-order nonlinear partial differential equation (PDE), from which the surface shape can be reconstructed numerically. A method for efficiently solving this differential equation is proposed, based on transforming the PDE problem into an optimization problem. We describe different types of surface description for the shape reconstruction, and a numerical simulation of the presented method is performed. The reconstruction process is analyzed by computer simulations and illustrated with examples. The analysis confirms the robustness of the reconstruction method and its suitability for measurement and reconstruction of the 3D shape of specular surfaces.
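Reconstructing a surface from measured gradient fields can also be posed as a linear least-squares problem and solved in the Fourier domain, the classical Frankot-Chellappa method. The sketch below is this standard alternative, not the PDE/optimization scheme of the paper, and it assumes periodic boundaries and gradients expressed in pixel-index units.

```python
import numpy as np

def frankot_chellappa(gx, gy):
    """Least-squares surface from gradient fields gx = dz/dx (column
    direction) and gy = dz/dy (row direction) via the Fourier domain;
    the surface is recovered up to an additive constant, and periodic
    boundaries are assumed."""
    ny, nx = gx.shape
    wx = 2 * np.pi * np.fft.fftfreq(nx)
    wy = 2 * np.pi * np.fft.fftfreq(ny)
    WX, WY = np.meshgrid(wx, wy)
    denom = WX ** 2 + WY ** 2
    denom[0, 0] = 1.0                        # avoid division by zero at DC
    Z = (-1j * WX * np.fft.fft2(gx) - 1j * WY * np.fft.fft2(gy)) / denom
    Z[0, 0] = 0.0                            # fix the free additive constant
    return np.real(np.fft.ifft2(Z))
```

For a periodic surface with exactly representable harmonics, the reconstruction is exact up to the additive constant.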

  6. Tomographic bioluminescence imaging reconstruction via a dynamically sparse regularized global method in mouse models.

    PubMed

    Liu, Kai; Tian, Jie; Qin, Chenghu; Yang, Xin; Zhu, Shouping; Han, Dong; Wu, Ping

    2011-04-01

    Generally, the performance of tomographic bioluminescence imaging depends on several factors, such as the regularization parameters and the initial guess of the source distribution. In this paper, a global-inexact-Newton based reconstruction method, regularized by a dynamic sparse term, is presented for tomographic reconstruction. The proposed method achieves higher imaging reliability and efficiency. In vivo mouse experimental reconstructions were performed to validate the proposed method. Comparisons of the proposed method with other methods demonstrate its applicability over an entire region. Moreover, its reliable performance over a wide range of regularization parameters and initial unknown values was also investigated. Based on the in vivo experiment and a mouse atlas, the tolerance for optical property mismatch was evaluated with optical overestimation and underestimation. Additionally, the reconstruction efficiency was investigated with different sizes of mouse grids. We showed that this method is reliable for tomographic bioluminescence imaging in practical mouse experimental applications. PMID:21529085

  7. Blockwise conjugate gradient methods for image reconstruction in volumetric CT.

    PubMed

    Qiu, W; Titley-Peloquin, D; Soleimani, M

    2012-11-01

    Cone beam computed tomography (CBCT) enables volumetric image reconstruction from 2D projection data and plays an important role in image guided radiation therapy (IGRT). Filtered back projection is still the most frequently used algorithm in applications. Algebraic methods instead discretize the scanning process (forward projection) into a system of linear equations, which must then be solved to recover images from measured projection data. The conjugate gradients (CG) algorithm and its variants can be used to solve (possibly regularized) linear systems of equations Ax = b and linear least squares problems min_x ||b - Ax||_2, especially when the matrix A is very large and sparse. Their applications can be found in a general CT context, but in tomography problems (e.g. CBCT reconstruction) they have not been widely used. Hence, CBCT reconstruction using the CG-type algorithm LSQR was implemented and studied in this paper. In CBCT reconstruction, the main computational challenge is that the matrix A usually is very large, and storing it in full requires an amount of memory well beyond the reach of commodity computers. Because of these memory capacity constraints, only a small fraction of the weighting matrix A is typically used, leading to a poor reconstruction. In this paper, to overcome this difficulty, the matrix A is partitioned and stored blockwise, and blockwise matrix-vector multiplications are implemented within LSQR. This implementation allows the full weighting matrix A to be used for CBCT reconstruction without requiring more powerful hardware. Tikhonov regularization can also be implemented in this fashion and can produce significant improvement in the reconstructed images. PMID:22325240
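The blockwise idea — never forming A in full, only applying its row blocks — can be sketched with CGLS, which is mathematically equivalent to LSQR on consistent problems in exact arithmetic. The block partitioning below is an illustrative assumption, not the paper's storage scheme.

```python
import numpy as np

def block_matvec(blocks, x):
    """A @ x with A stored as a list of row blocks."""
    return np.concatenate([B @ x for B in blocks])

def block_rmatvec(blocks, y):
    """A.T @ y accumulated block by block (A never formed in full)."""
    rows = np.cumsum([0] + [B.shape[0] for B in blocks])
    return sum(B.T @ y[rows[i]:rows[i + 1]] for i, B in enumerate(blocks))

def cgls(blocks, b, n, iters=50, tol=1e-12):
    """Conjugate-gradient least squares for min ||Ax - b||, using only
    blockwise products (the same role blockwise multiplication plays
    inside LSQR in the paper)."""
    x = np.zeros(n)
    r = b - block_matvec(blocks, x)
    s = block_rmatvec(blocks, r)
    p, gamma = s.copy(), s @ s
    for _ in range(iters):
        if np.sqrt(gamma) < tol:          # normal-equations residual small
            break
        q = block_matvec(blocks, p)
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = block_rmatvec(blocks, r)
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x
```

In a real CBCT setting, each row block would correspond to a group of projection rays generated (or loaded) on demand, so peak memory is bounded by one block rather than the whole matrix.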

  8. Exploring Normalization and Network Reconstruction Methods using In Silico and In Vivo Models

    EPA Science Inventory

    Abstract: Lessons learned from the recent DREAM competitions include: The search for the best network reconstruction method continues, and we need more complete datasets with ground truth from more complex organisms. It has become obvious that the network reconstruction methods t...

  9. A combined reconstruction-classification method for diffuse optical tomography.

    PubMed

    Hiltunen, P; Prince, S J D; Arridge, S

    2009-11-01

    We present a combined classification and reconstruction algorithm for diffuse optical tomography (DOT). DOT is a nonlinear ill-posed inverse problem. Therefore, some regularization is needed. We present a mixture of Gaussians prior, which regularizes the DOT reconstruction step. During each iteration, the parameters of a mixture model are estimated. These associate each reconstructed pixel with one of several classes based on the current estimate of the optical parameters. This classification is exploited to form a new prior distribution to regularize the reconstruction step and update the optical parameters. The algorithm can be described as an iteration between an optimization scheme with zeroth-order variable mean and variance Tikhonov regularization and an expectation-maximization scheme for estimation of the model parameters. We describe the algorithm in a general Bayesian framework. Results from simulated test cases and phantom measurements show that the algorithm enhances the contrast of the reconstructed images with good spatial accuracy. The probabilistic classifications of each image contain only a few misclassified pixels. PMID:19820265
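The classification half of such an algorithm can be sketched as expectation-maximization for a two-class Gaussian mixture over reconstructed pixel values. This is a generic 1D illustration of the mixture-model step only, not the authors' coupled reconstruction-classification iteration.

```python
import numpy as np

def gmm_em(x, iters=100):
    """Two-class 1D Gaussian mixture fitted by EM.
    Returns the class means, variances, and a hard label per sample."""
    mu = np.array([x.min(), x.max()], dtype=float)   # spread initialization
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibilities of each class for each sample
        p = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) \
            / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: update mixture parameters from the responsibilities
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
        pi = nk / len(x)
    return mu, var, r.argmax(axis=1)
```

In the combined algorithm, the resulting per-pixel class parameters would then define the prior that regularizes the next reconstruction update.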

  10. On multigrid methods for image reconstruction from projections

    SciTech Connect

    Henson, V.E.; Robinson, B.T.; Limber, M.

    1994-12-31

    The sampled Radon transform of a 2D function can be represented as a continuous linear map R : L¹ → R^N. The image reconstruction problem is: given a vector b ∈ R^N, find an image (or density function) u(x, y) such that Ru = b. Since in general there are infinitely many solutions, the authors pick the solution with minimal 2-norm. Numerous proposals have been made regarding how best to discretize this problem. One can, for example, select a set of functions φ_j that span a particular subspace Ω ⊂ L¹, and model R : Ω → R^N. The subspace Ω may be chosen as a member of a sequence of subspaces whose limit is dense in L¹. One approach to the choice of Ω gives rise to a natural pixel discretization of the image space. Two possible choices of the set φ_j are the set of characteristic functions of finite-width "strips" representing energy transmission paths and the set of intersections of such strips. The authors have studied the eigenstructure of the matrices B resulting from these choices and the effect of applying a Gauss-Seidel iteration to the problem Bw = b. There exists a near null space into which the error vectors migrate with iteration, after which Gauss-Seidel iteration stalls. The authors attempt to accelerate convergence via a multilevel scheme, based on the principles of McCormick's Multilevel Projection Method (PML). Coarsening is achieved by thickening the rays, which results in a much smaller discretization of an optimal grid and a halving of the number of variables. This approach satisfies all the requirements of the PML scheme. They have observed that a multilevel approach based on this idea accelerates convergence at least to the point where noise in the data dominates.
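The smoother at the heart of the discussion is classical Gauss-Seidel for Bw = b. A minimal sketch is below; the multilevel PML acceleration itself is beyond a few lines, and the test system is an illustrative SPD matrix rather than a tomography matrix.

```python
import numpy as np

def gauss_seidel(B, b, w0, sweeps=100):
    """Classical Gauss-Seidel sweeps for B w = b. Requires a nonzero
    diagonal; convergence is guaranteed e.g. for symmetric positive
    definite or strictly diagonally dominant B."""
    w = w0.astype(float).copy()
    n = len(b)
    for _ in range(sweeps):
        for i in range(n):
            # solve row i for w[i] using the latest values of the others
            w[i] = (b[i] - B[i] @ w + B[i, i] * w[i]) / B[i, i]
    return w
```

The stalling behaviour described in the abstract corresponds to error components in the near null space of B, which plain sweeps like these reduce only very slowly; that is precisely what the multilevel correction targets.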

  11. Digital reconstructed radiography quality control with software methods

    NASA Astrophysics Data System (ADS)

    Denis, Eloise; Beaumont, Stephane; Guedon, JeanPierre

    2005-04-01

    Nowadays, most treatments for external radiotherapy are prepared with Treatment Planning Systems (TPS), which use a virtual patient generated from a set of transverse slices of the patient acquired with a CT scanner in the treatment position. In the first step of virtual simulation, the TPS is used to define a ballistic plan that provides good target coverage and the lowest possible irradiation of normal tissues. This optimization of the treatment parameters with the TPS is carried out with dedicated graphic tools that allow one to: contour the target; expand the target limits to take into account contouring uncertainties, patient set-up errors, movements of the target during treatment (internal movement of the target and external movement of the patient), and beam penumbra; determine beam orientations and define the dimensions and shapes of the beams; visualize the beams on the patient's skin and calculate characteristic points that will be tattooed on the patient to assist patient set-up before treatment; and calculate for each beam a Digital Reconstructed Radiography (DRR), obtained by projecting the 3D CT virtual patient and the beam limits onto a plane with a cone-beam geometry. These DRRs ensure correct patient positioning during treatment, essentially by alignment of bone structures compared with real radiographs taken with the treatment X-ray source under the same geometric conditions (portal imaging). DRRs are therefore crucial to the geometric accuracy of the treatment, and for this reason quality control of their computation is mandatory. Until now, this control has been performed with real test objects including special inclusions. This paper proposes numerical test objects to control the quality of DRR calculation in terms of computation time, beam angle, divergence and magnification precision, and spatial and contrast resolutions. The main advantage of the proposed method is to avoid a real test-object CT acquisition.

  12. Dictionary-Learning-Based Reconstruction Method for Electron Tomography

    PubMed Central

    LIU, BAODONG; YU, HENGYONG; VERBRIDGE, SCOTT S.; SUN, LIZHI; WANG, GE

    2014-01-01

    Summary Electron tomography usually suffers from so-called “missing wedge” artifacts caused by the limited tilt angle range. An equally sloped tomography (EST) acquisition scheme (which should be called the linogram sampling scheme) was recently applied to achieve 2.4-angstrom resolution. On the other hand, a compressive sensing inspired reconstruction algorithm, known as adaptive dictionary based statistical iterative reconstruction (ADSIR), has been reported for X-ray computed tomography. In this paper, we evaluate EST, ADSIR, and an ordered-subset simultaneous algebraic reconstruction technique (OS-SART), and compare the equally sloped (ES) and equally angled (EA) data acquisition modes. Our results show that OS-SART is comparable to EST, and that ADSIR outperforms EST and OS-SART. Furthermore, the equally sloped projection data acquisition mode has no advantage over the conventional equally angled mode in this context. PMID:25104167

  13. Reconstruction method for data protection in telemedicine systems

    NASA Astrophysics Data System (ADS)

    Buldakova, T. I.; Suyatinov, S. I.

    2015-03-01

    In this report, an approach to protecting transmitted data by creating paired symmetric keys for the sensor and the receiver is proposed. Since biosignals are unique to each person, appropriate processing of them yields the information needed to create cryptographic keys. The processing is based on reconstruction of a mathematical model that generates time series diagnostically equivalent to the initial biosignals. Information about the model is transmitted to the receiver, where the physiological time series are restored using the reconstructed model. Thus, the information about the structure and parameters of the biosystem model obtained during the reconstruction process can be used not only for diagnostics, but also for protecting transmitted data in telemedicine complexes.

  14. A new method to combine 3D reconstruction volumes for multiple parallel circular cone beam orbits

    PubMed Central

    Baek, Jongduk; Pelc, Norbert J.

    2010-01-01

    Purpose: This article presents a new reconstruction method for 3D imaging using a multiple 360° circular orbit cone beam CT system, specifically a way to combine 3D volumes reconstructed with each orbit. The main goal is to improve the noise performance in the combined image while avoiding cone beam artifacts. Methods: The cone beam projection data of each orbit are reconstructed using the FDK algorithm. When at least a portion of the total volume can be reconstructed by more than one source, the proposed combination method combines these overlap regions using weighted averaging in frequency space. The local exactness and the noise performance of the combination method were tested with computer simulations of a Defrise phantom, a FORBILD head phantom, and uniform noise in the raw data. Results: A noiseless simulation showed that the local exactness of the reconstructed volume from the source with the smallest tilt angle was preserved in the combined image. A noise simulation demonstrated that the combination method improved the noise performance compared to a single orbit reconstruction. Conclusions: In CT systems which have overlap volumes that can be reconstructed with data from more than one orbit and in which the spatial frequency content of each reconstruction can be calculated, the proposed method offers improved noise performance while keeping the local exactness of data from the source with the smallest tilt angle. PMID:21089770
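The combination step — weighted averaging of overlapping reconstructions in frequency space — can be sketched as follows. The weights here are a hypothetical per-frequency quantity (e.g. derived from inverse noise power); the paper derives its weights from the orbit geometry and the spatial frequency content of each reconstruction.

```python
import numpy as np

def combine_in_frequency(vol_a, vol_b, w_a):
    """Weighted average of two reconstructions of the same region in
    frequency space. `w_a` is the weight given to volume A and may vary
    per frequency (any array broadcastable to the FFT grid, values in
    [0, 1]); volume B receives 1 - w_a."""
    Fa = np.fft.fftn(vol_a)
    Fb = np.fft.fftn(vol_b)
    return np.real(np.fft.ifftn(w_a * Fa + (1.0 - w_a) * Fb))
```

With equal weights and independent noise in the two inputs, the combined volume's noise standard deviation drops by roughly a factor of sqrt(2), which is the noise benefit the abstract reports for the overlap regions.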

  15. L1/2 regularization based numerical method for effective reconstruction of bioluminescence tomography

    NASA Astrophysics Data System (ADS)

    Chen, Xueli; Yang, Defu; Zhang, Qitan; Liang, Jimin

    2014-05-01

    Even though bioluminescence tomography (BLT) exhibits significant potential and wide applications in macroscopic imaging of small animals in vivo, the inverse reconstruction is still a difficult problem that continues to challenge researchers in the field. The ill-posedness of the inverse reconstruction arises from insufficient measurements and modeling errors, so the inverse problem cannot be solved directly. In this study, an l1/2 regularization based numerical method was developed for effective reconstruction in BLT. In the method, the inverse reconstruction of BLT was formulated as an l1/2 regularization problem, and the weighted interior-point algorithm (WIPA) was applied to solve it by transforming it into a series of l1-regularized problems. The feasibility and effectiveness of the proposed method were demonstrated with numerical simulations on a digital mouse. Stability verification experiments further illustrated the robustness of the proposed method under different levels of Gaussian noise.
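The reweighting idea behind solving an l1/2 problem as a series of l1 problems can be sketched with ISTA as the inner l1 solver (substituted here for the paper's weighted interior-point algorithm); the regularization weight `lam` and the smoothing constant `eps` are illustrative.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal operator of the (weighted) l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l_half_reweighted(A, b, lam=0.1, outer=5, inner=100, eps=1e-6):
    """Approximate l1/2-regularized least squares by solving a sequence
    of weighted l1 problems. Weights |x_i|^(-1/2) mimic the l1/2
    penalty's stronger shrinkage of small entries."""
    x = np.zeros(A.shape[1])
    w = np.ones_like(x)
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    for _ in range(outer):
        for _ in range(inner):              # ISTA for the current weighted l1
            x = soft(x - A.T @ (A @ x - b) / L, lam * w / L)
        w = 1.0 / np.sqrt(np.abs(x) + eps)  # reweight toward l1/2
    return x
```

After the first reweighting pass, entries already driven to zero receive very large weights and stay at zero, while large entries are shrunk only mildly, which is the qualitative behaviour expected of an l1/2 penalty.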

  16. Comparison of reconstruction methods and quantitative accuracy in Siemens Inveon PET scanner

    NASA Astrophysics Data System (ADS)

    Ram Yu, A.; Kim, Jin Su; Kang, Joo Hyun; Moo Lim, Sang

    2015-04-01

    PET reconstruction is key to the quantification of PET data. To our knowledge, no comparative study of reconstruction methods has been performed to date. In this study, we compared reconstruction methods with various filters in terms of their spatial resolution, non-uniformities (NU), recovery coefficients (RCs), and spillover ratios (SORs). In addition, the linearity between the measured and true radioactivity concentrations was assessed. A Siemens Inveon PET scanner was used in this study. Spatial resolution was measured according to the NEMA standard using a 1 mm3 sized 18F point source. Image quality was assessed in terms of NU, RC, and SOR. To measure the effect of reconstruction algorithms and filters, data were reconstructed using FBP, the 3D reprojection algorithm (3DRP), ordered-subset expectation maximization 2D (OSEM 2D), and maximum a posteriori (MAP) with various filters or smoothing factors (β). To assess the linearity of reconstructed radioactivity, an image-quality phantom filled with 18F was imaged and reconstructed with FBP, OSEM, and MAP (β = 1.5 and 5 × 10-5). The highest achievable volumetric resolution was 2.31 mm3, and the highest RCs were obtained when OSEM 2D was used. SOR was 4.87% for air and 3.97% for water when OSEM 2D reconstruction was used. The measured radioactivity of the reconstructed image was proportional to the injected radioactivity below 16 MBq/ml when the FBP or OSEM 2D reconstruction methods were used. By contrast, when the MAP reconstruction method was used, the activity of the reconstructed image increased proportionally, regardless of the amount of injected radioactivity. When OSEM 2D or FBP were used, the measured radioactivity concentration was reduced by 53% compared with the true injected radioactivity for radioactivity <16 MBq/ml. The OSEM 2D reconstruction method provides the highest achievable volumetric resolution and highest RC among all the tested methods and yields a linear relation between the measured and true

  17. The point-source method for 3D reconstructions for the Helmholtz and Maxwell equations

    NASA Astrophysics Data System (ADS)

    Ben Hassen, M. F.; Erhard, K.; Potthast, R.

    2006-02-01

    We use the point-source method (PSM) to reconstruct a scattered field from its associated far field pattern. The reconstruction scheme is described and numerical results are presented for three-dimensional acoustic and electromagnetic scattering problems. We give new proofs of the algorithms, based on the Green and Stratton-Chu formulae, which are more general than with the former use of the reciprocity relation. This allows us to handle the case of limited aperture data and arbitrary incident fields. Both for 3D acoustics and electromagnetics, numerical reconstructions of the field for different settings and with noisy data are shown. For shape reconstruction in acoustics, we develop an appropriate strategy to identify areas with good reconstruction quality and combine different such regions into one joint function. Then, we show how shapes of unknown sound-soft scatterers are found as level curves of the total reconstructed field.

  18. A method for investigating system matrix properties in optimization-based CT reconstruction

    NASA Astrophysics Data System (ADS)

    Rose, Sean D.; Sidky, Emil Y.; Pan, Xiaochuan

    2016-04-01

    Optimization-based iterative reconstruction methods have shown much promise for a variety of applications in X-ray computed tomography (CT). In these reconstruction methods, the X-ray measurement is modeled as a linear mapping from a finite-dimensional image space to a finite-dimensional data space. This mapping depends on a number of factors, including the basis functions used for image representation and the method by which the matrix representing the mapping is generated. Understanding the properties of this linear mapping and how it depends on our choice of parameters is fundamental to optimization-based reconstruction. In this work, we confine our attention to a pixel basis and propose a method to investigate the effect of pixel size in optimization-based reconstruction. The proposed method provides insight into the tradeoff between higher-resolution image representation and matrix conditioning. We demonstrate this method for a particular breast CT system geometry. We find that the images obtained from accurate solution of a least squares reconstruction optimization problem have high sensitivity to pixel size within certain regimes. We propose two methods by which this sensitivity can be reduced and demonstrate their efficacy. Our results indicate that the choice of pixel size in optimization-based reconstruction can have great impact on the quality of the reconstructed image, and that understanding the properties of the linear mapping modeling the X-ray measurement can help guide this choice.

  19. Synthetic Gene Networks: De novo constructs -- in numero descriptions

    NASA Astrophysics Data System (ADS)

    Hasty, Jeff

    2007-03-01

    Uncovering the structure and function of gene regulatory networks has become one of the central challenges of the post-genomic era. Theoretical models of protein-DNA feedback loops and gene regulatory networks have long been proposed, and recently, certain qualitative features of such models have been experimentally corroborated. This talk will focus on model and experimental results that demonstrate how a naturally occurring gene network can be used as a "parts list" for synthetic network design. The model formulation leads to computational and analytical approaches relevant to nonlinear dynamics and statistical physics, and the utility of such a formulation will be demonstrated through the consideration of specific design criteria for several novel genetic devices. Fluctuations originating from small molecule-number effects will be discussed in the context of model predictions, and the experimental validation of these stochastic effects underscores the importance of internal noise in gene expression. The underlying methodology highlights the utility of engineering-based methods in the design of synthetic gene regulatory networks.
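A canonical example of such a synthetic construct is the two-gene toggle switch, in which two repressors inhibit each other's expression. A minimal deterministic sketch (forward-Euler integration, illustrative parameter values; the stochastic effects discussed in the talk are not modeled) shows the bistability:

```python
def toggle_switch(u0, v0, alpha=10.0, n=2.0, dt=0.01, steps=5000):
    """Mutual-repression toggle switch in dimensionless form:
        du/dt = alpha / (1 + v**n) - u
        dv/dt = alpha / (1 + u**n) - v
    integrated with forward Euler; bistable for sufficiently strong
    repression (alpha and Hill coefficient n are illustrative)."""
    u, v = u0, v0
    for _ in range(steps):
        du = alpha / (1.0 + v ** n) - u
        dv = alpha / (1.0 + u ** n) - v
        u, v = u + dt * du, v + dt * dv
    return u, v
```

Starting with either repressor dominant drives the system to the corresponding stable state, so the same equations settle into one of two distinct expression patterns depending only on the initial condition.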

  20. An automated 3D reconstruction method of UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping

    2015-10-01

    In this paper a novel, fully automated 3D reconstruction approach based on images from a low-altitude unmanned aerial vehicle (UAV) system is presented, which requires neither prior camera calibration nor any other external knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large-scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combinations of images. A high-resolution digital surface model of the study area is produced from the UAV point clouds by constructing a triangulated irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction from low-altitude UAV images, and has great potential for the acquisition of spatial information for large-scale mapping, being especially suitable for rapid response and precise modelling in disaster emergencies.

  1. A new nonlinear reconstruction method based on total variation regularization of neutron penumbral imaging.

    PubMed

    Qian, Weixin; Qi, Shuangxi; Wang, Wanli; Cheng, Jinming; Liu, Dongbing

    2011-09-01

    Neutron penumbral imaging is a significant diagnostic technique in laser-driven inertial confinement fusion experiments, and it is very important to develop new reconstruction methods that improve its resolution. A new nonlinear reconstruction method based on total variation (TV) regularization is proposed in this paper. A TV norm is used as the regularization term to construct a smoothing functional for penumbral image reconstruction; in this way, the penumbral image reconstruction problem is transformed into a functional minimization problem. A fixed-point iteration scheme is then introduced to solve this minimization problem. Numerical experiments show that, compared to a linear reconstruction method based on a Wiener filter, the TV-regularized nonlinear reconstruction method improves the quality of the reconstructed image, with better noise smoothing and edge preservation. It also achieves a spatial resolution of 5 μm, which is better than that of the Wiener method. PMID:21974584
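
    The TV-regularized fixed-point idea can be reduced to a 1D toy: minimize 0.5||u - f||^2 + lam*TV(u) with a lagged-diffusivity fixed-point iteration, a standard scheme of the kind the abstract describes. The 1D setting, parameter values, and function names here are illustrative assumptions, not the paper's 2D penumbral implementation.

```python
import numpy as np

def tv_denoise_1d(f, lam=0.5, eps=1e-6, iters=50):
    # Lagged-diffusivity fixed-point iteration for
    #   min_u 0.5*||u - f||^2 + lam * TV(u).
    # Each step freezes the TV weights and solves a linear system.
    n = len(f)
    u = f.copy()
    for _ in range(iters):
        g = np.diff(u)
        w = 1.0 / np.sqrt(g**2 + eps)          # small where edges are: edge-preserving
        L = np.zeros((n, n))
        for i, wi in enumerate(w):             # weighted 1D graph Laplacian
            L[i, i] += wi
            L[i + 1, i + 1] += wi
            L[i, i + 1] -= wi
            L[i + 1, i] -= wi
        u = np.linalg.solve(np.eye(n) + lam * L, f)
    return u

rng = np.random.default_rng(0)
clean = np.where(np.arange(100) < 50, 0.0, 1.0)    # a step edge
noisy = clean + 0.1 * rng.standard_normal(100)
denoised = tv_denoise_1d(noisy)
```

    The weights w are large in flat regions (strong smoothing) and small across the step (edge preserved), which is the behavior the abstract contrasts with Wiener filtering.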

  3. Method for reconstruction of shape of specular surfaces using scanning beam deflectometry

    NASA Astrophysics Data System (ADS)

    Miks, Antonin; Novak, Jiri; Novak, Pavel

    2013-07-01

    A new method is presented for reconstruction of the shape of specular surfaces using scanning beam deflectometry. A description and analysis of a deflectometric technique for 3D measurements of specular surfaces is provided, and it is shown that the surface reconstruction problem leads to a nonlinear partial differential equation. The surface shape can then be calculated by solving the derived equation. A method is proposed that makes it possible to solve the deflectometric differential equation for shape reconstruction effectively. The presented method is noncontact and, unlike interferometry, requires no reference surface.

  4. Sparse Reconstruction Techniques in Magnetic Resonance Imaging: Methods, Applications, and Challenges to Clinical Adoption.

    PubMed

    Yang, Alice C; Kretzler, Madison; Sudarski, Sonja; Gulani, Vikas; Seiberlich, Nicole

    2016-06-01

    The family of sparse reconstruction techniques, including the recently introduced compressed sensing framework, has been extensively explored to reduce scan times in magnetic resonance imaging (MRI). While there are many different methods that fall under the general umbrella of sparse reconstruction, they all rely on the idea that a priori information about the sparsity of MR images can be used to reconstruct full images from undersampled data. This review describes the basic ideas behind sparse reconstruction techniques, how they could be applied to improve MRI, and the open challenges to their general adoption in a clinical setting. The fundamental principles underlying different classes of sparse reconstruction techniques are examined, and the requirements that each makes on the undersampled data are outlined. Applications that could potentially benefit from the accelerations that sparse reconstructions provide are described, and clinical studies using sparse reconstructions are reviewed. Lastly, technical and clinical challenges to widespread implementation of sparse reconstruction techniques, including optimization, reconstruction times, artifact appearance, and comparison with current gold standards, are discussed. PMID:27003227
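
    The core idea, that sparsity allows recovery from undersampled Fourier data, can be sketched with the elementary iterative soft-thresholding algorithm (ISTA). This is a deliberately minimal stand-in for the clinical methods the review surveys; the signal size, sampling pattern, and parameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 128, 64, 5

# Ground truth: a k-sparse signal (sparsity is the a priori information).
x0 = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x0[support] = rng.choice([-1.0, 1.0], size=k)

# "Acquisition": keep only m of the n rows of the unitary DFT (undersampling).
F = np.fft.fft(np.eye(n)) / np.sqrt(n)
rows = rng.choice(n, size=m, replace=False)
A = F[rows]
y = A @ x0

# ISTA: gradient step on the data-fidelity term, then soft-thresholding.
# Step size 1 is valid because the rows of A are orthonormal (||A|| <= 1).
lam = 0.02
x = np.zeros(n, dtype=complex)
for _ in range(500):
    z = x + A.conj().T @ (y - A @ x)
    mag = np.abs(z)
    x = z * np.maximum(1.0 - lam / np.maximum(mag, 1e-12), 0.0)
```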

  5. Improvement of the background optical property reconstruction of the two-layered slab sample based on a region-stepwise-reconstruction method

    NASA Astrophysics Data System (ADS)

    Liu, Ming; Qin, Zhuanping; Jia, Mengyu; Zhao, Huijuan; Gao, Feng

    2015-03-01

    A two-layered slab is a rational simplified sample for near-infrared functional brain imaging using diffuse optical tomography (DOT). The quality of the reconstructed images is substantially affected by the accuracy of the background optical properties. In this paper, a region-stepwise-reconstruction method is proposed for reconstructing the background optical properties of a two-layered slab sample with known geometric information, based on continuous-wave (CW) DOT. The optical properties of the top and bottom layers are reconstructed separately, using different source-detector-separation groups chosen according to the depth of maximum brain sensitivity of each separation. We demonstrate the feasibility of the proposed method and investigate the applicable range of the source-detector-separation groups through numerical simulations. The results indicate that the proposed method can effectively reconstruct the background optical properties of a two-layered slab sample; the relative reconstruction errors are less than 10% when the thickness of the top layer is approximately 10 mm. The reconstruction of a target caused by brain activation is investigated with the reconstructed optical properties as well: the quantitativeness ratio of the ROI is about 80%, which is higher than that of the conventional method. The spatial resolution (R) of reconstructions with two targets is also investigated, and it demonstrates that R with the proposed method is likewise better than that with the conventional method.

  6. The New Method of Tsunami Source Reconstruction With r-Solution Inversion Method

    NASA Astrophysics Data System (ADS)

    Voronina, T. A.; Romanenko, A. A.

    2016-04-01

    Application of the r-solution method to reconstructing the initial tsunami waveform is discussed. The methodology is based on the inversion of remote measurements of water-level data, with wave propagation considered within the scope of linear shallow-water theory. The ill-posed inverse problem in question is regularized by means of a least-squares inversion using the truncated singular value decomposition, yielding an r-solution. The proposed method allows one to control the instability of the numerical solution and to obtain an acceptable result in spite of the ill-posedness of the problem. Applying this methodology to reconstruct the initial waveform of the 2013 Solomon Islands tsunami validates the conclusions drawn for synthetic data and a model tsunami source: the inversion result depends strongly on the noisiness of the data and on the azimuthal and temporal coverage of the recording stations with respect to the source area. Furthermore, it is possible to make a preliminary selection of the most informative set of available recording stations to use in the inversion process.

  7. Network Reconstruction Using Nonparametric Additive ODE Models

    PubMed Central

    Henderson, James; Michailidis, George

    2014-01-01

    Network representations of biological systems are widespread, and reconstructing unknown networks from data is a focal problem for computational biologists. For example, the series of biochemical reactions in a metabolic pathway can be represented as a network, with nodes corresponding to metabolites and edges linking reactants to products. In a different context, regulatory relationships among genes are commonly represented as directed networks with edges pointing from influential genes to their targets. Reconstructing such networks from data is a challenging problem receiving much attention in the literature. There is a particular need for approaches tailored to time-series data and not reliant on direct intervention experiments, as the former are often more readily available. In this paper, we introduce an approach to reconstructing directed networks based on dynamic systems models. Our approach generalizes commonly used ODE models based on linear or nonlinear dynamics by extending the functional class for the functions involved from parametric to nonparametric models. Concomitantly, we limit the complexity by imposing an additive structure on the estimated slope functions, so that the submodel associated with each node is a sum of univariate functions. These univariate component functions form the basis for a novel coupling metric that we define in order to quantify the strength of proposed relationships and hence rank potential edges. We show the utility of the method by reconstructing networks using simulated data from computational models for the glycolytic pathway of Lactococcus lactis and a gene network regulating the pluripotency of mouse embryonic stem cells. For purposes of comparison, we also assess reconstruction performance using gene networks from the DREAM challenges. We compare our method to those that similarly rely on dynamic systems models and use the results to attempt to disentangle the distinct roles of linearity, sparsity, and derivative

  8. A Parallel Reconstructed Discontinuous Galerkin Method for the Compressible Flows on Arbitrary Grids

    SciTech Connect

    Hong Luo; Amjad Ali; Robert Nourgaliev; Vincent A. Mousseau

    2010-01-01

    A reconstruction-based discontinuous Galerkin method is presented for the solution of the compressible Navier-Stokes equations on arbitrary grids. In this method, an in-cell reconstruction is used to obtain a higher-order polynomial representation of the underlying discontinuous Galerkin polynomial solution and an inter-cell reconstruction is used to obtain a continuous polynomial solution on the union of two neighboring, interface-sharing cells. The in-cell reconstruction is designed to enhance the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. The inter-cell reconstruction is devised to remove an interface discontinuity of the solution and its derivatives and thus to provide a simple, accurate, consistent, and robust approximation to the viscous and heat fluxes in the Navier-Stokes equations. A parallel strategy is also devised for the resulting reconstruction discontinuous Galerkin method, which is based on domain partitioning and Single Program Multiple Data (SPMD) parallel programming model. The RDG method is used to compute a variety of compressible flow problems on arbitrary meshes to demonstrate its accuracy, efficiency, robustness, and versatility. The numerical results demonstrate that this RDG method is third-order accurate at a cost slightly higher than its underlying second-order DG method, at the same time providing a better performance than the third order DG method, in terms of both computing costs and storage requirements.

  9. Simultaneous denoising and reconstruction of 5D seismic data via damped rank-reduction method

    NASA Astrophysics Data System (ADS)

    Chen, Yangkang; Zhang, Dong; Jin, Zhaoyu; Chen, Xiaohong; Zu, Shaohuan; Huang, Weilin; Gan, Shuwei

    2016-06-01

    The Cadzow rank-reduction method can be effectively utilized in simultaneously denoising and reconstructing 5D seismic data that depend on four spatial dimensions. The classic version of the Cadzow rank-reduction method arranges the 4D spatial data into a level-four block Hankel/Toeplitz matrix and then applies truncated singular value decomposition (TSVD) for rank reduction. When the observed data are extremely noisy, which is often the case for real seismic data, traditional TSVD is not adequate for attenuating the noise and reconstructing the signals. The data reconstructed with the traditional TSVD method tend to contain a significant amount of residual noise, which can be explained by the fact that the reconstructed data space is a mixture of both the signal subspace and the noise subspace. In order to better decompose the block Hankel matrix into signal and noise components, we introduce a damping operator into the traditional TSVD formula, which we call the damped rank-reduction method. The damped rank-reduction method can obtain excellent reconstruction performance even when the observed data have an extremely low signal-to-noise ratio (SNR). The feasibility of the improved 5D seismic data reconstruction method was validated on both 5D synthetic and field data examples. We present a comprehensive analysis of the data examples and derive valuable experience and guidelines for better use of the proposed method in practice. Since the proposed method is convenient to implement and can achieve immediate improvement, we suggest its wide application in the industry.
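
    A 1D analogue of the (damped) rank-reduction step can be sketched as follows: embed a noisy signal in a Hankel matrix, truncate its SVD, damp the kept singular values, and average the anti-diagonals back into a signal. The damping form used here, sigma_i * (1 - (sigma_{r+1}/sigma_i)^K), is one plausible choice assumed for illustration, as are the test signal and noise level; the paper's method operates on level-four block Hankel matrices of 4D spatial data.

```python
import numpy as np

def hankel(sig, L):
    # Build an L x (len(sig)-L+1) Hankel matrix: H[i, j] = sig[i + j].
    N = len(sig)
    return np.array([sig[i:i + N - L + 1] for i in range(L)])

def dehankel(H):
    # Average the anti-diagonals back into a 1D signal.
    L, M = H.shape
    out = np.zeros(L + M - 1)
    cnt = np.zeros(L + M - 1)
    for i in range(L):
        out[i:i + M] += H[i]
        cnt[i:i + M] += 1
    return out / cnt

def damped_tsvd(H, rank, K=4):
    # Truncated SVD with an assumed damping of the kept singular values,
    # shrinking components whose energy is close to the noise level.
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    tail = s[rank] if rank < len(s) else 0.0
    sd = s[:rank] * (1.0 - (tail / s[:rank]) ** K)
    return (U[:, :rank] * sd) @ Vt[:rank]

rng = np.random.default_rng(2)
t = np.arange(200)
clean = np.sin(2 * np.pi * 0.02 * t) + 0.5 * np.sin(2 * np.pi * 0.05 * t)
noisy = clean + 0.5 * rng.standard_normal(200)

# Two real sinusoids give a Hankel matrix of rank 4.
den = dehankel(damped_tsvd(hankel(noisy, 100), rank=4))
```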

  11. Application of Symmetry Adapted Function Method for Three-Dimensional Reconstruction of Octahedral Biological Macromolecules

    PubMed Central

    Zeng, Songjun; Liu, Hongrong; Yang, Qibin

    2010-01-01

    A method for three-dimensional (3D) reconstruction of macromolecular assemblies, the octahedral symmetry-adapted function (OSAF) method, is introduced in this paper, and a series of formulations for reconstruction by the OSAF method is derived. To verify the feasibility and advantages of the method, two octahedrally symmetric macromolecules, the heat shock protein DegP24 and red-cell L ferritin, were used as examples for reconstruction by the OSAF method. The simulation was designed as follows: 2000 randomly oriented projections of single particles with predefined Euler angles and centers of origin were generated, and then different levels of noise (signal-to-noise ratios of 0.1, 0.5, and 0.8) were added. The structures reconstructed by the OSAF method were in good agreement with the standard models, and their relative errors with respect to the standard structures were very small even at high noise levels. These results show that the OSAF method is a feasible and efficient approach to reconstructing macromolecular structures and is able to suppress the influence of noise. PMID:20150955

  12. Iterative reconstruction methods in atmospheric tomography: FEWHA, Kaczmarz and Gradient-based algorithm

    NASA Astrophysics Data System (ADS)

    Ramlau, R.; Saxenhuber, D.; Yudytskiy, M.

    2014-07-01

    The problem of atmospheric tomography arises in ground-based telescope imaging with adaptive optics (AO), where one aims to compensate in real time for the rapidly changing optical distortions in the atmosphere. Many of these systems depend on a sufficient reconstruction of the turbulence profiles in order to obtain a good correction. Due to steadily growing telescope sizes, there is a strong increase in the computational load for atmospheric reconstruction with current methods, first and foremost the MVM. In this paper we present and compare three novel iterative reconstruction methods. The first is the Finite Element-Wavelet Hybrid Algorithm (FEWHA), which combines wavelet-based techniques and conjugate gradient schemes to tackle the problem of atmospheric reconstruction efficiently and accurately. The method is extremely fast, highly flexible, and yields superior quality. Another novel iterative approach is the three-step method, which decouples the problem into the reconstruction of the incoming wavefronts, the reconstruction of the turbulent layers (atmospheric tomography), and the computation of the best mirror correction (the fitting step). For the atmospheric tomography problem within the three-step approach, the Kaczmarz algorithm and a gradient-based method have been developed. We present a detailed comparison of our reconstructors in terms of both quality and speed in the context of a Multi-Object Adaptive Optics (MOAO) system for the E-ELT, using OCTOPUS, the ESO end-to-end simulation tool.
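
    The Kaczmarz algorithm mentioned above is easy to sketch in isolation: it cyclically projects the current iterate onto the hyperplane defined by each row equation of the linear system. The small consistent random system below is an illustrative assumption, not an atmospheric-tomography operator.

```python
import numpy as np

def kaczmarz(A, b, sweeps=300):
    # Classical Kaczmarz iteration: for each row a_i, project the iterate
    # onto the hyperplane { x : a_i . x = b_i }.
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for ai, bi in zip(A, b):
            x += (bi - ai @ x) / (ai @ ai) * ai
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 20))
x_true = rng.standard_normal(20)
b = A @ x_true                      # consistent (noise-free) system

x_hat = kaczmarz(A, b)
```

    Each row update is cheap and local, which is one reason row-action methods of this kind are attractive when the full system is too large for direct solvers.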

  13. A general few-projection method for tomographic reconstruction of samples consisting of several distinct materials

    NASA Astrophysics Data System (ADS)

    Myers, Glenn R.; Thomas, C. David L.; Paganin, David M.; Gureyev, Timur E.; Clement, John G.

    2010-01-01

    We present a method for tomographic reconstruction of objects containing several distinct materials, which is capable of accurately reconstructing a sample from vastly fewer angular projections than required by conventional algorithms. The algorithm is more general than many previous discrete tomography methods, as: (i) a priori knowledge of the exact number of materials is not required; (ii) the linear attenuation coefficient of each constituent material may assume a small range of a priori unknown values. We present reconstructions from an experimental x-ray computed tomography scan of cortical bone acquired at the SPring-8 synchrotron.

  15. MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions.

    PubMed

    Novosad, Philip; Reader, Andrew J

    2016-06-21

    Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [(18)F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral

  17. Adaptive region of interest method for analytical micro-CT reconstruction.

    PubMed

    Yang, Wanneng; Xu, Xiaochun; Bi, Kun; Zeng, Shaoqun; Liu, Qian; Chen, Shangbin

    2011-01-01

    Real-time imaging is important for automatic successive inspection with micro-computerized tomography (micro-CT). Generally, the size of the detector is chosen according to the most probable size of the measured object so as to acquire all the projection data. Given sufficient imaging area and resolution of the X-ray detector, the detector is larger than the specimen's projection area, which results in redundant data in the sinogram. Real-time micro-CT is computation-intensive because of the large amounts of source and destination data, and the speed of the reconstruction algorithm cannot always meet the requirements of real-time applications. A preprocessing method called adaptive region of interest (AROI), which detects the object's boundaries automatically to focus on the active sinogram regions, is introduced into the analytical reconstruction algorithm in this paper. The AROI method reduces the volume of the reconstruction data and thus directly accelerates the reconstruction process. It is further shown that image quality is not compromised when applying AROI, while the reconstruction speed is increased as the square of the ratio of the sizes of the detector and the specimen slice. In practice, a conch reconstruction experiment indicated that the process is accelerated by 5.2 times with AROI without degrading imaging quality. The AROI method therefore improves the speed of analytical micro-CT reconstruction significantly. PMID:21422587
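
    The AROI idea, detecting the object's shadow on the detector and cropping the sinogram before reconstruction, can be sketched as follows. The synthetic disk sinogram, threshold, and function names are illustrative assumptions, not the paper's boundary-detection algorithm.

```python
import numpy as np

def adaptive_roi(sino, thresh=1e-3):
    # Find the detector bins actually hit by the object (columns with any
    # signal above the threshold) and crop the sinogram to that band.
    active = np.where(sino.max(axis=0) > thresh)[0]
    lo, hi = active[0], active[-1] + 1
    return sino[:, lo:hi], (lo, hi)

# Synthetic parallel-beam sinogram of a centred disk of radius 10 on a
# 64-bin detector: the chord length is nonzero only inside the disk's shadow,
# and a centred disk looks identical at every angle.
det = np.arange(64) - 32.0
r = 10.0
row = np.where(np.abs(det) < r,
               2.0 * np.sqrt(np.maximum(r**2 - det**2, 0.0)), 0.0)
sino = np.tile(row, (180, 1))

cropped, (lo, hi) = adaptive_roi(sino)
speedup = (sino.shape[1] / cropped.shape[1]) ** 2
```

    The squared size ratio mirrors the abstract's claim that reconstruction speed grows as the square of the detector-to-specimen size ratio.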

  18. Direct reconstruction of pharmacokinetic parameters in dynamic fluorescence molecular tomography by the augmented Lagrangian method

    NASA Astrophysics Data System (ADS)

    Zhu, Dianwen; Zhang, Wei; Zhao, Yue; Li, Changqing

    2016-03-01

    Dynamic fluorescence molecular tomography (FMT) has the potential to quantify physiological or biochemical information, known as pharmacokinetic parameters, which are important for cancer detection, drug development, and drug delivery. To image these parameters, there are indirect methods, which are easier to implement but tend to provide images with low signal-to-noise ratio, and direct methods, which model all the measurement noise together and are statistically more efficient. Direct reconstruction methods in dynamic FMT have attracted much attention recently. However, the coupling of tomographic image reconstruction with the nonlinearity of kinetic parameter estimation due to the compartment modeling has imposed a huge computational burden on the direct reconstruction of kinetic parameters. In this paper, we propose to take advantage of both the direct and indirect reconstruction ideas through a variable-splitting strategy under the augmented Lagrangian framework. Each iteration of the direct reconstruction is split into two steps: dynamic FMT image reconstruction and node-wise nonlinear least-squares fitting of the pharmacokinetic parameter images. Through numerical simulation studies, we have found that the proposed algorithm can achieve good reconstruction results within a small amount of time. This will be the first step toward combined dynamic PET and FMT imaging in the future.
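
    Variable splitting under an augmented Lagrangian can be illustrated with a generic ADMM sketch on a sparse least-squares problem. This is a deliberately simplified stand-in: the paper splits FMT image reconstruction from nonlinear pharmacokinetic fitting, whereas here the two alternating subproblems are a quadratic solve and a shrinkage step; all sizes and parameters are invented.

```python
import numpy as np

def soft(v, t):
    # Soft-thresholding (the proximal operator of the l1 norm).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, y, lam=0.1, rho=1.0, iters=200):
    # Split x = z under an augmented Lagrangian (ADMM): each iteration
    # alternates an easy smooth subproblem and an easy nonsmooth one,
    # coordinated by the scaled dual variable u.
    n = A.shape[1]
    z = np.zeros(n)
    u = np.zeros(n)
    Q = np.linalg.inv(A.T @ A + rho * np.eye(n))
    Aty = A.T @ y
    for _ in range(iters):
        x = Q @ (Aty + rho * (z - u))   # quadratic (data-fit) subproblem
        z = soft(x + u, lam / rho)      # proximal (sparsity) subproblem
        u += x - z                      # scaled dual update
    return z

rng = np.random.default_rng(6)
A = rng.standard_normal((50, 20))
x0 = np.zeros(20)
x0[[3, 7, 15]] = [1.5, -2.0, 1.0]
y = A @ x0 + 0.01 * rng.standard_normal(50)

x_hat = admm_lasso(A, y)
```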

  19. Fast wave-front reconstruction by solving the Sylvester equation with the alternating direction implicit method

    NASA Astrophysics Data System (ADS)

    Ren, Hongwu; Dekany, Richard

    2004-07-01

    Large degree-of-freedom real-time adaptive optics (AO) control requires reconstruction algorithms that are computationally efficient and readily parallelized for hardware implementation. In particular, we find that the wave-front reconstruction for the Hudgin and Fried geometries can be cast in the form of the well-known Sylvester equation using the Kronecker-product properties of matrices. We derive the filters and inverse filtering formulas for wave-front reconstruction in the two-dimensional (2-D) discrete cosine transform (DCT) domain for these two geometries, using the Hadamard-product concept of matrices and the principle of separation of variables. We introduce a recursive filtering (RF) method for wave-front reconstruction on an annular aperture, in which an embedding step converts the annular-aperture wave-front reconstruction into a square-aperture wave-front reconstruction, after which the Hudgin-geometry problem is solved on the square aperture. We apply the alternating direction implicit (ADI) method to this embedding step of the RF algorithm to solve the annular-aperture wave-front reconstruction problem efficiently, at a cost on the order of the number of degrees of freedom, O(n). Moreover, the ADI method is better suited for parallel implementation, and we describe a practical real-time implementation for AO systems of order 3,000 actuators.
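
    The reduction to a Sylvester equation rests on standard Kronecker-product identities, which a dense toy solver makes explicit. This O(n^2 m^2) direct solve is only for illustration; the paper's point is precisely that DCT filtering and ADI avoid it and reach O(n).

```python
import numpy as np

def solve_sylvester_kron(A, B, C):
    # Solve A X + X B = C via the Kronecker-product identities
    #   vec(A X) = (I kron A) vec(X),  vec(X B) = (B^T kron I) vec(X),
    # where vec stacks columns (Fortran order).
    n, m = C.shape
    K = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))
    x = np.linalg.solve(K, C.flatten(order="F"))
    return x.reshape((n, m), order="F")

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((3, 3))
X_true = rng.standard_normal((4, 3))
C = A @ X_true + X_true @ B

X = solve_sylvester_kron(A, B, C)
```

    A unique solution exists when A and -B share no eigenvalues, which holds for generic random matrices like these.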

  20. Reconstruction for 3D PET Based on Total Variation Constrained Direct Fourier Method

    PubMed Central

    Yu, Haiqing; Chen, Zhi; Zhang, Heye; Loong Wong, Kelvin Kian; Chen, Yunmei; Liu, Huafeng

    2015-01-01

    This paper presents a total variation (TV) regularized reconstruction algorithm for 3D positron emission tomography (PET). The proposed method first employs the Fourier rebinning algorithm (FORE) to rebin the 3D data into a stack of ordinary 2D sinogram data sets, which can then be reconstructed by conventional 2D reconstruction algorithms. Given the locally piecewise-constant nature of PET images, we introduce TV-based reconstruction schemes. More specifically, we formulate the 2D PET reconstruction problem as an optimization problem whose objective function consists of the TV norm of the reconstructed image and a data-fidelity term measuring the consistency between the reconstructed image and the sinogram. To solve the resulting minimization problem, we apply an efficient method called the Bregman operator splitting algorithm with variable step size (BOSVS). Experiments based on Monte Carlo simulated data and real data are conducted as validation. The results show that the proposed method produces higher accuracy than the conventional direct Fourier (DF) method (the bias of BOSVS is 70% of that of DF, and its variance is 80% of that of DF). PMID:26398232

  1. Development of three-dimensional optical correction method for reconstruction of flow field in droplet

    NASA Astrophysics Data System (ADS)

    Ko, Han Seo; Gim, Yeonghyeon; Kang, Seung-Hwan

    2015-11-01

    A three-dimensional optical correction method was developed to reconstruct droplet-based flow fields. For a numerical simulation, synthetic phantoms were reconstructed by a simultaneous multiplicative algebraic reconstruction technique using three projection images positioned at an offset angle of 45°. If the synthetic phantom lies in a conical object whose refractive index differs from that of the atmosphere, the image can be distorted because light is refracted at the surface of the conical object. Thus, the direction of the projection ray was replaced by the refracted ray occurring at the surface of the conical object. To validate the method accounting for this distortion effect, reconstruction results of the developed method were compared with the original phantom. The reconstruction result of the developed method showed smaller error than that obtained without it. The method was then applied to a Taylor cone, formed by a high voltage between a droplet and a substrate, to reconstruct the three-dimensional flow fields for analysis of the characteristics of the droplet. This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Korean government (MEST) (No. 2013R1A2A2A01068653).

  2. Reconstruction of vibroacoustic responses of a highly nonspherical structure using Helmholtz equation least-squares method.

    PubMed

    Lu, Huancai; Wu, Sean F

    2009-03-01

    The vibroacoustic responses of a highly nonspherical vibrating object are reconstructed using the Helmholtz equation least-squares (HELS) method. The objectives of this study are to examine the accuracy of reconstruction and the impacts of the various parameters involved in reconstruction using HELS. The test object is a simply supported and baffled thin plate, selected because it represents a class of structures that cannot be exactly described by the spherical Hankel functions and spherical harmonics taken as the basis functions in the HELS formulation, yet the analytic solutions for the vibroacoustic responses of a baffled plate are readily available, so the accuracy of reconstruction can be checked rigorously. The input field acoustic pressures for reconstruction are generated by the Rayleigh integral. The reconstructed normal surface velocities are validated against the benchmark values, and the out-of-plane vibration patterns at several natural frequencies are compared with the natural modes of a simply supported plate. The impacts of various parameters, such as the number of measurement points, measurement distance, location of the origin of the coordinate system, microphone spacing, and the ratio of measurement aperture size to the area of the source surface, on the resulting accuracy of reconstruction are examined. PMID:19275312

  3. Optimization of a Stochastically Simulated Gene Network Model via Simulated Annealing

    PubMed Central

    Tomshine, Jonathan; Kaznessis, Yiannis N.

    2006-01-01

    By rearranging naturally occurring genetic components, gene networks can be created that display novel functions. When designing these networks, the kinetic parameters describing DNA/protein binding are of great importance, as these parameters strongly influence the behavior of the resulting gene network. This article presents an optimization method based on simulated annealing to locate combinations of kinetic parameters that produce a desired behavior in a genetic network. Since gene expression is an inherently stochastic process, the simulation component of the simulated annealing optimization is conducted using an accurate multiscale simulation algorithm to calculate an ensemble of network trajectories at each iteration of the simulated annealing algorithm. Using the three-gene repressilator of Elowitz and Leibler as an example, we show that gene network optimizations can be conducted using a mechanistically realistic model integrated stochastically. The repressilator is optimized to give oscillations of an arbitrary specified period. These optimized designs may then provide a starting point for the selection of genetic components needed to realize an in vivo system. PMID:16920827
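    The simulated-annealing loop at the heart of such an optimization can be sketched in miniature. This toy replaces the paper's stochastic multiscale simulator with a deterministic oscillator of period 2π/k and tunes the single "kinetic parameter" k toward a target period; all numbers are illustrative assumptions.

    ```python
    import numpy as np

    # Minimal simulated-annealing sketch: Metropolis acceptance with a
    # geometric cooling schedule, tuning k so the period 2*pi/k hits a target.
    rng = np.random.default_rng(2)
    target_period = 5.0

    def cost(k):
        return (2 * np.pi / k - target_period) ** 2

    k = 10.0                                   # deliberately poor initial guess
    best_k, best_cost = k, cost(k)
    T = 1.0                                    # annealing temperature
    for _ in range(2000):
        k_new = abs(k + rng.normal(0, 0.1)) + 1e-9   # propose; keep k positive
        dc = cost(k_new) - cost(k)
        if dc < 0 or rng.random() < np.exp(-dc / T):  # Metropolis acceptance
            k = k_new
            if cost(k) < best_cost:
                best_k, best_cost = k, cost(k)
        T *= 0.995                             # geometric cooling schedule
    ```

    In the paper's setting, each `cost` evaluation would instead average over an ensemble of stochastic simulations of the network.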

  4. Reconstruction Method for Optical Tomography Based on the Linearized Bregman Iteration with Sparse Regularization

    PubMed Central

    Leng, Chengcai; Yu, Dongdong; Zhang, Shuang; An, Yu; Hu, Yifang

    2015-01-01

    Optical molecular imaging is a promising technique that has been widely used in physiology and pathology at the cellular and molecular levels, and includes modalities such as bioluminescence tomography, fluorescence molecular tomography, and Cerenkov luminescence tomography. The inverse problem is ill-posed for these modalities, which causes a nonunique solution. In this paper, we propose an effective reconstruction method based on the linearized Bregman iterative algorithm with sparse regularization (LBSR) for reconstruction. Considering the sparsity characteristics of the reconstructed sources, the sparsity can be regarded as a priori information, and sparse regularization is incorporated, which can accurately locate the position of the source. The linearized Bregman iteration method is exploited to minimize the sparse regularization problem so as to further achieve fast and accurate reconstruction results. Experimental results in a numerical simulation and an in vivo mouse demonstrate the effectiveness and potential of the proposed method. PMID:26421055
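    A generic linearized Bregman iteration for sparse recovery can be sketched as below. The random Gaussian system matrix is a stand-in for an optical-tomography forward model, and the weights are illustrative assumptions, not the paper's LBSR implementation.

    ```python
    import numpy as np

    # Linearized Bregman iteration for
    #   min  mu*||x||_1 + (1/2)||x||_2^2   s.t.  A x = b,
    # alternating a gradient step on the residual with soft-thresholding.
    rng = np.random.default_rng(3)
    m, n, k = 40, 100, 5
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, size=k, replace=False)] = 3.0 * rng.standard_normal(k)
    b = A @ x_true

    def shrink(v, t):                     # soft-thresholding operator
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    mu = 5.0                              # sparsity weight (illustrative)
    delta = 1.0 / np.linalg.norm(A, 2) ** 2   # step size bounded by 1/||A||^2
    x = np.zeros(n)
    v = np.zeros(n)
    for _ in range(5000):
        v += A.T @ (b - A @ x)            # gradient step on the residual
        x = delta * shrink(v, mu)         # shrinkage step
    ```

    The shrinkage step is what encodes the sparsity prior: components of the dual variable v below the threshold mu stay exactly zero in x.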

  5. Apparatus And Method For Reconstructing Data Using Cross-Parity Stripes On Storage Media

    DOEpatents

    Hughes, James Prescott

    2003-06-17

    An apparatus and method for reconstructing missing data using cross-parity stripes on a storage medium are provided. The apparatus and method may operate on data symbols having sizes greater than a data bit, and make use of a plurality of parity stripes for reconstructing missing data stripes. The parity symbol values in the parity stripes are used as a basis for determining the value of a missing data symbol in a data stripe. A correction matrix is shifted along the data stripes, correcting missing data symbols as it is shifted. The correction is performed from the outer data stripes toward the inner data stripes, so that previously reconstructed data symbols can be used to reconstruct other missing data symbols.
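    The underlying parity idea can be shown with a toy single-parity example (one XOR parity stripe, far simpler than the patent's multi-stripe cross-parity scheme): a lost data stripe is rebuilt from the surviving stripes plus the parity stripe.

    ```python
    from functools import reduce

    # Three data stripes and one XOR parity stripe computed column-wise.
    stripes = [b"\x01\x02\x03", b"\xff\x00\x10", b"\x7f\x7f\x7f"]
    parity = bytes(reduce(lambda a, c: a ^ c, col) for col in zip(*stripes))

    lost = 1                                           # pretend stripe 1 is gone
    survivors = [s for i, s in enumerate(stripes) if i != lost] + [parity]
    rebuilt = bytes(reduce(lambda a, c: a ^ c, col) for col in zip(*survivors))
    print(rebuilt == stripes[lost])                    # → True
    ```

    Because XOR is its own inverse, XORing all survivors with the parity cancels everything except the missing stripe.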

  7. Possible methods of reconstructing conveyor gallery span structures

    SciTech Connect

    Kolesnichenko, V.G.; Beizer, V.N.; Raskina, A.M.

    1983-01-01

    Problems of reconstruction of industrial buildings and structures are acquiring increasing national economic importance. The Makeevka Construction Engineering Institute conducted investigations of the design of the conveyor galleries at the Yasinovka and Makeevka Coke Works, which have been in operation for 20-40 years. The bearing constructions of the span structures are generally welded (in the old galleries, riveted) metal trusses. The principal trusses are predominantly discontinuous, with spans of 10-40 m, supported by hinges on intermediate lattice columns and connected through the upper and lower horizontal strips by ties (see figure, a). The height of the main trusses corresponds to the height of the galleries (2.3-3.4 m). The width of a gallery depends on the width and number of conveyors: it is 3.1-3.8 m for galleries with a single conveyor and 5.5-7.1 m for galleries with two conveyors.

  8. The feasibility of images reconstructed with the method of sieves

    SciTech Connect

    Veklerov, E.; Llacer, J.

    1989-04-01

    The concept of sieves has been applied with the Maximum Likelihood Estimator (MLE) to image reconstruction. While this makes it possible to recover smooth images consistent with the data, the degree of smoothness it provides is arbitrary. It is shown that the concept of feasibility is able to resolve this arbitrariness. By varying the values of the parameters determining the degree of smoothness, one can generate images on both sides of the feasibility region, as well as within the region. Feasible images recovered by using different sieve parameters are compared with feasible results of other procedures. One- and two-dimensional examples using both simulated and real data sets are considered. 12 refs., 3 figs., 2 tabs.

  9. Modified Method of Increasing of Reconstruction Quality of Diffractive Optical Elements Displayed with LC SLM

    NASA Astrophysics Data System (ADS)

    Krasnov, V. V.; Cheremkhin, P. A.; Erkin, I. Yu.; Evtikhiev, N. N.; Starikov, R. S.; Starikov, S. N.

    A modified method for increasing the reconstruction quality of diffractive optical elements (DOE) displayed with liquid crystal (LC) spatial light modulators (SLM) is presented. The method is based on optimizing a DOE synthesized by a conventional method, applying a direct search with a random trajectory while taking LC SLM phase fluctuations into account. A reduction of synthesis error of up to 88% is achieved.

  10. Compensation for acoustic heterogeneities in photoacoustic computed tomography using a variable temporal data truncation reconstruction method

    NASA Astrophysics Data System (ADS)

    Poudel, Joemini; Matthews, Thomas P.; Anastasio, Mark A.; Wang, Lihong V.

    2016-03-01

    Photoacoustic computed tomography (PACT) is an emerging computed imaging modality that exploits optical contrast and ultrasonic detection principles to form images of the absorbed optical energy density within tissue. If the object possesses spatially variant acoustic properties that are unaccounted for by the reconstruction algorithm, the estimated image can contain distortions. While reconstruction algorithms have recently been developed to compensate for this effect, they generally require the object's acoustic properties to be known a priori. To circumvent the need for detailed information regarding an object's acoustic properties, we previously proposed a half-time reconstruction method for PACT, which estimates the PACT image from a data set that has been temporally truncated to exclude the data components that have been strongly aberrated. In that approach, the degree of temporal truncation is the same for all measurements. However, this strategy can be improved upon when the approximate sizes and locations of strongly heterogeneous structures, such as gas voids or bones, are known. In this work, we investigate PACT reconstruction algorithms based on a variable temporal data truncation (VTDT) approach that represents a generalization of the half-time reconstruction approach. In the VTDT approach, the degree of temporal truncation for each measurement is determined by the distance between the corresponding transducer location and the nearest known bone or gas void location. Reconstructed images from a numerical phantom are employed to demonstrate the feasibility and effectiveness of the approach.
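    The per-measurement truncation rule can be sketched in a few lines: each transducer's cutoff time is the distance to the nearest known heterogeneity divided by the sound speed. Geometry, positions, and the sound speed below are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    # Hedged sketch of the VTDT truncation rule.
    c = 1500.0                                         # sound speed, m/s
    transducers = np.array([[0.05, 0.0], [0.0, 0.05], [-0.05, 0.0]])
    voids = np.array([[0.01, 0.0], [0.0, -0.02]])      # known bone/gas locations

    # pairwise distances (num_transducers x num_voids), then nearest void
    dist = np.linalg.norm(transducers[:, None, :] - voids[None, :, :], axis=2)
    t_trunc = dist.min(axis=1) / c                     # per-measurement cutoff
    ```

    Samples recorded after `t_trunc` for a given transducer are the ones that may have passed through the heterogeneity, so they are the ones excluded.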

  11. Magnetic Field Configuration Models and Reconstruction Methods for Interplanetary Coronal Mass Ejections

    NASA Astrophysics Data System (ADS)

    Al-Haddad, N.; Nieves-Chinchilla, T.; Savani, N. P.; Möstl, C.; Marubashi, K.; Hidalgo, M. A.; Roussev, I. I.; Poedts, S.; Farrugia, C. J.

    2013-05-01

    This study aims to provide a reference for different magnetic field models and reconstruction methods for interplanetary coronal mass ejections (ICMEs). To understand the differences in the outputs of these models and codes, we analyzed 59 events from the Coordinated Data Analysis Workshop (CDAW) list, using four different magnetic field models and reconstruction techniques: force-free fitting, magnetostatic reconstruction using a numerical solution to the Grad-Shafranov equation, fitting to a self-similarly expanding cylindrical configuration, and elliptical non-force-free fitting. The resulting parameters of the reconstructions for the 59 events are compared statistically and in selected case studies. The ability of a method to fit or reconstruct an event is found to vary greatly; this depends on whether the event is a magnetic cloud or not. We find that the magnitude of the axial field is relatively consistent across models, but that the axis orientation of the ejecta is not. We also find that there are a few cases with different signs of the magnetic helicity for the same event when we leave the boundaries free to vary, which illustrates that this simplest of parameters is not necessarily always clearly constrained by fitting and reconstruction models. Finally, we examine three unique cases in depth to provide a comprehensive idea of the different aspects of how the fitting and reconstruction codes work.

  12. Wide-spectrum reconstruction method for a birefringence interference imaging spectrometer.

    PubMed

    Zhang, Chunmin; Jian, Xiaohua

    2010-02-01

    We present a mathematical method used to determine the spectrum detected by a birefringence interference imaging spectrometer (BIIS). The reconstructed spectrum has good precision over a wide spectral range, 0.4-1.0 microm. This method considers the light intensity as a function of wavelength and avoids the fatal error caused by the birefringence effect in the conventional Fourier transform method. The experimental interferogram of the BIIS is processed in this new way, and the interference data and reconstructed spectrum are in good agreement, proving the method to be exact and useful. Application of this method will greatly improve the instrument performance. PMID:20125723

  13. Adaptive Forward Modeling Method for Analysis and Reconstructions of Orientation Image Map

    SciTech Connect

    Frankie Li, Shiu Fai

    2014-06-01

    IceNine is an MPI-parallel orientation reconstruction and microstructure analysis code. Its primary purpose is to reconstruct a spatially resolved orientation map given a set of diffraction images from a high-energy x-ray diffraction microscopy (HEDM) experiment (1). In particular, IceNine implements the adaptive version of the forward modeling method (2, 3). Part of IceNine is a library used for combined analysis of the microstructure with the experimentally measured diffraction signal. The library is also designed for rapid prototyping of new reconstruction and analysis algorithms. IceNine also includes a simulator that generates diffraction images from an input microstructure.

  14. Comparison of kinoform synthesis methods for image reconstruction in Fourier plane

    NASA Astrophysics Data System (ADS)

    Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Porshneva, Liudmila A.; Rodin, Vladislav G.; Starikov, Sergey N.

    2014-05-01

    A kinoform is a synthesized phase diffractive optical element that reconstructs an image when illuminated with a plane wave. Kinoforms are used in image processing systems. Iterative methods have become widespread for kinoform synthesis because of the relatively small error of the resulting intensity distribution. There are articles in which two or three iterative methods are compared, but they use only one or a few test images. The goal of this work is to compare iterative methods using many test images of different types. Images were reconstructed in the Fourier plane from synthesized kinoforms displayed on a phase-only LCOS SLM. The quality of the reconstructed images and the computational costs of the methods were analyzed. Four kinoform synthesis methods were implemented in a programming environment: the Gerchberg-Saxton algorithm (GS), the Fienup algorithm (F), the adaptive-additive algorithm (AA) and the Gerchberg-Saxton algorithm with weight coefficients (GSW). To compare these methods, 50 test images with different characteristics were used: binary and grayscale, contour and non-contour. The resolution of the images varied from 64×64 to 1024×1024, and their occupancy ranged from 0.008 to 0.89. The synthesized kinoforms had 256 phase levels, equal to the number of phase levels of the HoloEye PLUTO VIS phase-only LCOS SLM. Numerical testing showed that the AA method provides the best quality of reconstructed images; the GS, F and GSW methods showed worse results, roughly similar to each other. Among the analyzed methods, the GS method has the shortest single-iteration execution time and the F method the longest. The synthesized kinoforms were optically reconstructed using the HoloEye PLUTO VIS phase-only LCOS SLM, and the results of the optical reconstruction were compared to the numerical ones. The AA method showed slightly better results than the other methods, especially for grayscale images.
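    The simplest of the compared algorithms, Gerchberg-Saxton, can be sketched as an alternating-projection loop between the kinoform plane (phase-only constraint) and the Fourier plane (target-amplitude constraint). Resolution, target pattern and iteration count below are illustrative, not the paper's test conditions.

    ```python
    import numpy as np

    # Basic Gerchberg-Saxton loop for a phase-only kinoform reconstructed
    # in the Fourier plane.
    rng = np.random.default_rng(4)
    N = 64
    target = np.zeros((N, N))
    target[24:40, 24:40] = 1.0                  # desired Fourier-plane intensity
    target_amp = np.sqrt(target)

    phase = rng.uniform(0, 2 * np.pi, (N, N))   # random initial kinoform phase
    for _ in range(50):
        field = np.exp(1j * phase)              # phase-only (kinoform) constraint
        far = np.fft.fft2(field)
        far = target_amp * np.exp(1j * np.angle(far))  # impose target amplitude
        phase = np.angle(np.fft.ifft2(far))     # keep only the phase

    recon = np.abs(np.fft.fft2(np.exp(1j * phase))) ** 2
    ```

    The F, AA and GSW variants modify the Fourier-plane amplitude replacement step (feedback, adaptive mixing, or per-pixel weights) while keeping the same overall loop.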

  15. A Penalized Linear and Nonlinear Combined Conjugate Gradient Method for the Reconstruction of Fluorescence Molecular Tomography

    PubMed Central

    Shang, Shang; Bai, Jing; Song, Xiaolei; Wang, Hongkai; Lau, Jaclyn

    2007-01-01

    The conjugate gradient method is known to be efficient for nonlinear optimization problems with large-dimensional data. In this paper, a penalized linear and nonlinear combined conjugate gradient method for the reconstruction of fluorescence molecular tomography (FMT) is presented. The algorithm combines the linear conjugate gradient method and the nonlinear conjugate gradient method based on a restart strategy, in order to take advantage of both kinds of conjugate gradient methods and compensate for their disadvantages. A quadratic penalty method is adopted to enforce a nonnegativity constraint and reduce the ill-posedness of the problem. Simulation studies show that the presented algorithm is accurate, stable, and fast, with better performance than conventional conjugate gradient-based reconstruction algorithms. It offers an effective approach to reconstruct fluorochrome information for FMT. PMID:18354740
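    The linear half of such a scheme is plain conjugate gradients on the normal equations A^T A x = A^T b. The sketch below uses a dense random A as a stand-in for an FMT forward model and omits the penalty term; it is illustrative only.

    ```python
    import numpy as np

    # Linear conjugate gradients on the normal equations A^T A x = A^T b.
    rng = np.random.default_rng(5)
    m, n = 120, 80
    A = rng.standard_normal((m, n))
    x_true = rng.standard_normal(n)
    b = A @ x_true

    M = A.T @ A                        # symmetric positive definite
    rhs = A.T @ b
    x = np.zeros(n)
    r = rhs.copy()                     # residual for x = 0
    p = r.copy()
    for _ in range(n):                 # exact convergence in <= n steps
        Mp = M @ p
        alpha = (r @ r) / (p @ Mp)
        x += alpha * p
        r_new = r - alpha * Mp
        if np.linalg.norm(r_new) < 1e-10:
            r = r_new
            break
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    ```

    The paper's nonlinear variant replaces the fixed quadratic M with gradients of a general objective and restarts the direction p when conjugacy degrades.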

  16. On climate reconstruction using bivalves: three methods to interpret the chemical signature of a shell.

    PubMed

    Bauwens, Maite; Ohlsson, Henrik; Barbé, Kurt; Beelaerts, Veerle; Dehairs, Frank; Schoukens, Johan

    2011-11-01

    To improve our understanding of the climate process and to assess the human impact on current global warming, past climate reconstruction is essential. The chemical composition of a bivalve shell is strongly coupled to environmental variations and therefore ancient shells are potential climate archives. The nonlinear nature of the relation between environmental condition (e.g. the seawater temperature) and proxy composition makes it hard to predict the former from the latter, however. In this paper we compare the ability of three nonlinear system identification methods to reconstruct the ambient temperature from the chemical composition of a shell. The comparison shows that nonlinear multi-proxy approaches are potentially useful tools for climate reconstructions and that manifold based methods result in smoother and more precise temperature reconstruction. PMID:20888663

  17. A Fast Edge Preserving Bayesian Reconstruction Method for Parallel Imaging Applications in Cardiac MRI

    PubMed Central

    Singh, Gurmeet; Raj, Ashish; Kressler, Bryan; Nguyen, Thanh D.; Spincemaille, Pascal; Zabih, Ramin; Wang, Yi

    2010-01-01

    Among recent parallel MR imaging reconstruction advances, a Bayesian method called Edge-preserving Parallel Imaging with GRAph cut Minimization (EPIGRAM) has been demonstrated to significantly improve the signal-to-noise ratio (SNR) compared to the conventional regularized sensitivity encoding (SENSE) method. However, EPIGRAM requires a large number of iterations in proportion to the number of intensity labels in the image, making it computationally expensive for high dynamic range images. The objective of this study is to develop a Fast EPIGRAM reconstruction based on the efficient binary jump move algorithm that provides a logarithmic reduction in reconstruction time while maintaining image quality. Preliminary in vivo validation of the proposed algorithm is presented for 2D cardiac cine MR imaging and 3D coronary MR angiography at acceleration factors of 2-4. Fast EPIGRAM was found to provide similar image quality to EPIGRAM and maintain the previously reported SNR improvement over regularized SENSE, while reducing EPIGRAM reconstruction time by a factor of 25-50. PMID:20939095

  18. A two-step Hilbert transform method for 2D image reconstruction.

    PubMed

    Noo, Frédéric; Clackdoyle, Rolf; Pack, Jed D

    2004-09-01

    The paper describes a new accurate two-dimensional (2D) image reconstruction method consisting of two steps. In the first step, the backprojected image is formed after taking the derivative of the parallel projection data. In the second step, a Hilbert filtering is applied along certain lines in the differentiated backprojection (DBP) image. Formulae for performing the DBP step in fanbeam geometry are also presented. The advantage of this two-step Hilbert transform approach is that in certain situations, regions of interest (ROIs) can be reconstructed from truncated projection data. Simulation results are presented that illustrate very similar reconstructed image quality using the new method compared to standard filtered backprojection, and that show the capability to correctly handle truncated projections. In particular, a simulation is presented of a wide patient whose projections are truncated laterally yet for which highly accurate ROI reconstruction is obtained. PMID:15470913
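    The Hilbert-filtering step can be illustrated on a single line with the FFT-based analytic signal; the sinusoid below is an illustrative test signal, not projection data.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    # Discrete Hilbert transform via the analytic signal: for a pure cosine,
    # the Hilbert transform is the matching sine.
    t = np.linspace(0, 1, 1024, endpoint=False)
    f = np.cos(2 * np.pi * 8 * t)          # 8 exact cycles over the window
    h = np.imag(hilbert(f))                # Hilbert transform of f
    ```

    `scipy.signal.hilbert` returns the analytic signal f + iH(f); taking its imaginary part yields the transform applied along each reconstruction line in the DBP image.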

  19. Evaluation of time-efficient reconstruction methods in digital breast tomosynthesis.

    PubMed

    Svahn, T M; Houssami, N

    2015-07-01

    Three reconstruction algorithms for digital breast tomosynthesis were compared in this article: filtered back-projection (FBP), iterative adapted FBP and maximum likelihood-convex iterative algorithms. Quality metrics such as signal-difference-to-noise ratio, normalised line-profiles and artefact-spread function were used for evaluation of reconstructed tomosynthesis images. The iterative-based methods offered increased image quality in terms of higher detectability and reduced artefacts, which will be further examined in clinical images. PMID:25855075

  20. Novel l2,1-norm optimization method for fluorescence molecular tomography reconstruction

    PubMed Central

    Jiang, Shixin; Liu, Jie; An, Yu; Zhang, Guanglei; Ye, Jinzuo; Mao, Yamin; He, Kunshan; Chi, Chongwei; Tian, Jie

    2016-01-01

    Fluorescence molecular tomography (FMT) is a promising tomographic method in preclinical research, which enables noninvasive real-time three-dimensional (3-D) visualization for in vivo studies. The ill-posedness of the FMT reconstruction problem is one of the many challenges in the studies of FMT. In this paper, we propose an l2,1-norm optimization method using a priori information, mainly the structured sparsity of the fluorescent regions, for FMT reconstruction. Compared to standard sparsity methods, structured sparsity methods are often superior in reconstruction accuracy, since structured sparsity exploits correlations or structures of the reconstructed image. To solve the problem effectively, Nesterov's method was used to accelerate the computation. To evaluate the performance of the proposed l2,1-norm method, numerical phantom experiments and in vivo mouse experiments were conducted. The results show that the proposed method not only achieves accurate and desirable fluorescent source reconstruction, but also demonstrates enhanced robustness to noise. PMID:27375949

  1. A Reconstructed Discontinuous Galerkin Method for the Compressible Euler Equations on Arbitrary Grids

    SciTech Connect

    Hong Luo; Luquing Luo; Robert Nourgaliev; Vincent Mousseau

    2009-06-01

    A reconstruction-based discontinuous Galerkin (DG) method is presented for the solution of the compressible Euler equations on arbitrary grids. By taking advantage of handily available, yet invaluable, information, namely the derivatives, in the context of discontinuous Galerkin methods, a solution polynomial of one degree higher is reconstructed using a least-squares method. The stencils used in the reconstruction involve only the von Neumann neighborhood (face-neighboring cells) and are compact and consistent with the underlying DG method. The resulting DG method can be regarded as an improvement of a recovery-based DG method in the sense that it shares the same desirable features, such as high accuracy and efficiency, yet overcomes some of its shortcomings, such as a lack of flexibility, compactness, and robustness. The developed DG method is used to compute a variety of flow problems on arbitrary meshes to demonstrate its accuracy and efficiency. The numerical results indicate that this reconstructed DG method is able to obtain a third-order accurate solution at a slightly higher cost than the underlying second-order DG method, and provides an increase in performance over the third-order DG method in terms of computing time and storage requirements.

  2. Reconstructed imaging of acoustic cloak using time-lapse reversal method

    NASA Astrophysics Data System (ADS)

    Zhou, Chen; Cheng, Ying; Xu, Jian-yi; Li, Bo; Liu, Xiao-jun

    2014-08-01

    We proposed and investigated a solution to the inverse acoustic cloak problem, an anti-stealth technology to make cloaks visible, using the time-lapse reversal (TLR) method. The TLR method reconstructs the image of an unknown acoustic cloak by utilizing scattered acoustic waves. Compared to previous anti-stealth methods, the TLR method can determine not only the existence of a cloak but also its exact geometric information like definite shape, size, and position. Here, we present the process for TLR reconstruction based on time reversal invariance. This technology may have potential applications in detecting various types of cloaks with different geometric parameters.

  3. An adaptive total variation image reconstruction method for speckles through disordered media

    NASA Astrophysics Data System (ADS)

    Gong, Changmei; Shao, Xiaopeng; Wu, Tengfei

    2013-09-01

    Multiple scattering of light in a highly disordered medium can break the diffraction limit of a conventional optical system when combined with an image reconstruction method. Once the transmission matrix of the imaging system is obtained, the target image can be reconstructed from its speckle pattern by an image reconstruction algorithm. Nevertheless, images restored by common reconstruction algorithms such as Tikhonov regularization have a relatively low signal-to-noise ratio (SNR) due to experimental noise and reconstruction noise, greatly reducing the quality of the resulting image. In this paper, the speckle pattern of the test image is simulated by combining light propagation theory and statistical optics. Subsequently, an adaptive total variation (ATV) algorithm, TV minimization by augmented Lagrangian and alternating direction algorithms (TVAL3), is utilized to reconstruct the target image. Numerical simulation results show that the TVAL3 algorithm can effectively suppress the noise of the restored image and preserve more image details, thus greatly boosting the SNR of the restored image. They also indicate that, compared with the image formed directly by the `clean' system, the reconstructed results can overcome the diffraction limit of the `clean' system, which is conducive to the observation of cells, protein molecules and other micro/nanoscale structures in biological tissues.

  4. CuRe - A new wavefront reconstruction method for SH-WFS measurements

    NASA Astrophysics Data System (ADS)

    Obereder, Andreas; Ramlau, Ronny; Rosensteiner, Matthias; Zhariy, Mariya

    2011-09-01

    In order to fulfill the real-time requirements for AO on ELTs, one has to either invest in (very) high performance hardware or spend some effort on the development of highly efficient reconstruction algorithms for wavefront sensors. The AAO (Austrian Adaptive Optics) team is involved in deriving wavefront reconstructors for SH- and Pyramid-WFS measurements utilizing the mathematical properties of the forward operators for these wavefront sensors. At the moment, we focus mainly on direct reconstructors with complexity O(n) (where n denotes the number of subapertures of the WFS) to make the reconstruction scalable for large telescopes. In this talk we will introduce a new algorithm, the Cumulative Reconstructor (CuRe), present its properties, namely error propagation of the method and the numerical effort for the reconstruction of the incoming wavefront, as well as first results concerning the quality of the method (dependent on different noise sources). Further improvements of the algorithm, especially a domain decomposition method for enhancing reconstruction quality and improving the overall speed of the algorithm will be presented and analyzed. A speed comparison with different wavefront reconstruction algorithms will be presented to point out the enormous gain of the new CuReD (Cumulative Reconstructor with Domain Decomposition) algorithm concerning numerical performance and applicability for real life telescope adaptive optics applications. In the outlook of the talk we will present first XAO results utilizing a variant of the CuReD for the reconstruction of modulated Pyramid WFS measurements.

  5. Tests of one Brazilian facial reconstruction method using three soft tissue depth sets and familiar assessors.

    PubMed

    Fernandes, Clemente Maia S; Serra, Mônica da Costa; da Silva, Jorge Vicente Lopes; Noritomi, Pedro Yoshito; Pereira, Frederico David Alencar de Sena; Melani, Rodolfo Francisco Haltenhoff

    2012-01-10

    Facial reconstruction is a method that seeks to recreate a person's facial appearance from his/her skull. This technique can be the last resort in a forensic investigation, when identification techniques such as DNA analysis, dental records, fingerprints and radiographic comparison cannot be used to identify a body or skeletal remains. To perform facial reconstruction, data on facial soft tissue thickness are necessary. The scientific literature describes differences in facial soft tissue thickness between ethnic groups, and different databases of soft tissue thickness have been published. However, there are no literature records of facial reconstruction carried out with soft tissue data obtained from samples of Brazilian subjects, nor any reports of digital forensic facial reconstruction performed in Brazil. Two databases of soft tissue thickness have been published for the Brazilian population: one obtained from measurements performed on fresh cadavers (fresh-cadaver pattern), and another from measurements using magnetic resonance imaging (magnetic resonance pattern). This study aims to perform three different characterized digital forensic facial reconstructions (with hair, eyelashes and eyebrows) of a Brazilian subject, based on an international pattern and the two Brazilian patterns for facial soft tissue thickness, and to evaluate the reconstructions by comparing them to photographs of the individual and of nine other subjects. The DICOM data of the computed tomography (CT) scan donated by a volunteer were converted into stereolithography (STL) files and used to create the digital facial reconstructions. Once the three reconstructions were performed, they were compared to photographs of the subject whose face was reconstructed and of nine other subjects; thirty examiners participated in this recognition process. The target subject was recognized by 26.67% of the examiners in the reconstruction

  6. A novel digital tomosynthesis (DTS) reconstruction method using a deformation field map

    SciTech Connect

    Ren Lei; Zhang Junan; Thongphiew, Danthai; Godfrey, Devon J.; Jackie Wu, Q.; Zhou Sumin; Yin Fangfang

    2008-07-15

    We developed a novel digital tomosynthesis (DTS) reconstruction method using a deformation field map to optimally estimate volumetric information in DTS images. The deformation field map is solved by using prior information, a deformation model, and new projection data. Patients' previous cone-beam CT (CBCT) or planning CT data are used as the prior information, and the new patient volume to be reconstructed is considered as a deformation of the prior patient volume. The deformation field is solved by minimizing bending energy and maintaining fidelity to the new projection data using a nonlinear conjugate gradient method. The new patient DTS volume is then obtained by deforming the prior patient CBCT or CT volume according to the solved deformation field. This method is novel in that it is the first to combine deformable registration with limited-angle image reconstruction. The method was tested in 2D cases using simulated projections of a Shepp-Logan phantom, liver, and head-and-neck patient data. The accuracy of the reconstruction was evaluated by comparing both organ volume and pixel value differences between DTS and CBCT images. In the Shepp-Logan phantom study, the reconstructed pixel signal-to-noise ratio (PSNR) for the 60 deg. DTS image reached 34.3 dB. In the liver patient study, the relative error of the liver volume reconstructed using 60 deg. projections was 3.4%. The reconstructed PSNR for the 60 deg. DTS image reached 23.5 dB. In the head-and-neck patient study, the new method using 60 deg. projections was able to reconstruct the 8.1 deg. rotation of the bony structure with 0.0 deg. error. The reconstructed PSNR for the 60 deg. DTS image reached 24.2 dB. In summary, the new reconstruction method can optimally estimate the volumetric information in DTS images using 60 deg. projections. Preliminary validation of the algorithm showed that it is both technically and clinically feasible for image guidance in radiation therapy.
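
    The optimization at the heart of this approach, minimizing bending energy plus fidelity to the new data over a deformation field, can be sketched in one dimension. This is a toy illustration, not the authors' implementation: the images, smoothing weight, and step size are invented, plain gradient descent with a finite-difference gradient stands in for the nonlinear conjugate gradient solver, and the "data" are images rather than projections.

```python
import numpy as np

def energy(d, x, prior, observed, alpha):
    """Data fidelity of the warped prior plus bending energy of field d."""
    warped = np.interp(x + d, x, prior)       # deform the prior image by d
    bending = np.sum(np.diff(d, 2) ** 2)      # discrete bending energy
    return np.sum((warped - observed) ** 2) + alpha * bending

def fd_grad(f, d, eps=1e-6):
    """Central finite-difference gradient (stand-in for an analytic one)."""
    g = np.zeros_like(d)
    for i in range(d.size):
        dp, dm = d.copy(), d.copy()
        dp[i] += eps
        dm[i] -= eps
        g[i] = (f(dp) - f(dm)) / (2 * eps)
    return g

x = np.linspace(0.0, 1.0, 50)
prior = np.exp(-((x - 0.50) / 0.1) ** 2)      # "prior volume": a bump at 0.50
observed = np.exp(-((x - 0.45) / 0.1) ** 2)   # "new data": the bump shifted
f = lambda d: energy(d, x, prior, observed, alpha=1e-3)

d = np.zeros_like(x)
initial = f(d)
for _ in range(800):                          # plain gradient descent here;
    d -= 0.005 * fd_grad(f, d)                # the paper uses nonlinear CG
final = f(d)
```

    Because the true deformation in this toy is nearly a constant shift, the bending-energy term costs little at the optimum and the energy drops well below its starting value.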

  7. Reducing the effects of acoustic heterogeneity with an iterative reconstruction method from experimental data in microwave induced thermoacoustic tomography

    SciTech Connect

    Wang, Jinguo; Zhao, Zhiqin; Song, Jian; Chen, Guoping; Nie, Zaiping; Liu, Qing-Huo

    2015-05-15

    Purpose: An iterative reconstruction method has been previously reported by the authors of this paper. However, it was demonstrated only with numerical simulations, so it is essential to apply it under practical conditions. The objective of this work is to validate the capability of the iterative reconstruction method to reduce the effects of acoustic heterogeneity using experimental data in microwave induced thermoacoustic tomography. Methods: Most existing reconstruction methods must be combined with ultrasonic measurement technology to quantitatively measure the velocity distribution of the heterogeneity, which increases the system complexity. Unlike existing reconstruction methods, the iterative reconstruction method combines the time reversal mirror technique, the fast marching method, and the simultaneous algebraic reconstruction technique to iteratively estimate the velocity distribution of heterogeneous tissue using only the measured data. The estimated velocity distribution is then used to reconstruct a highly accurate image of the microwave absorption distribution. Experiments in which a target was placed in an acoustically heterogeneous environment were performed to validate the iterative reconstruction method. Results: By using the estimated velocity distribution, the target in an acoustically heterogeneous environment can be reconstructed with better shape and higher image contrast than with a homogeneous velocity distribution. Conclusions: The distortions caused by acoustic heterogeneity can be efficiently corrected by utilizing the velocity distribution estimated by the iterative reconstruction method. Its advantage over existing correction methods is that it improves the quality of the image of the microwave absorption distribution without increasing the system complexity.
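
    The simultaneous algebraic reconstruction technique (SART) used as a building block in this pipeline can be sketched generically. In this toy a small dense random matrix stands in for the "ray weight" system rather than travel times from the fast marching method; matrix sizes, the relaxation factor, and iteration count are invented for illustration.

```python
import numpy as np

def sart(A, y, n_iter=200, relax=1.0):
    """Simultaneous algebraic reconstruction technique (dense toy sketch).

    Each sweep updates all unknowns at once from the residuals of all
    rays, with row/column sums as the SART normalization weights.
    """
    row_sum = np.maximum(A.sum(axis=1), 1e-12)   # per-ray weight totals
    col_sum = np.maximum(A.sum(axis=0), 1e-12)   # per-pixel weight totals
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        residual = (y - A @ x) / row_sum
        x += relax * (A.T @ residual) / col_sum
    return x

rng = np.random.default_rng(1)
A = rng.uniform(0.1, 1.0, size=(60, 20))     # invented "ray weight" matrix
slowness_true = rng.uniform(0.5, 2.0, size=20)
y = A @ slowness_true                        # consistent travel-time-like data
slowness = sart(A, y)
rel_resid = float(np.linalg.norm(A @ slowness - y) / np.linalg.norm(y))
```

    With consistent data and 0 < relax < 2, the SART sweep drives the data residual toward zero.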

  8. An efficient MR image reconstruction method for arbitrary K-space trajectories without density compensation.

    PubMed

    Song, Jiayu; Liu, Q H

    2006-01-01

    Non-Cartesian sampling is widely used for fast magnetic resonance imaging (MRI). The well-known gridding method usually requires density compensation to adjust for the non-uniform sampling density, which is a major source of reconstruction error. Minimum-norm least-squares (MNLS) reconstruction, on the other hand, does not need density compensation, but requires intensive computation. In this paper, a new version of the MNLS reconstruction method is developed using maximum likelihood and is accelerated by incorporating novel non-uniform fast Fourier transform (NUFFT) and bi-conjugate gradient fast Fourier transform (BCG-FFT) techniques. Studies on computer-simulated phantoms and a physically scanned phantom show improved reconstruction accuracy and signal-to-noise ratio compared to the gridding method. The method is shown to be applicable to arbitrary k-space trajectories. Furthermore, we find that the method in fact performs un-blurring in the image space as an equivalent of density compensation in k-space. Equating the MNLS solution with the gridding algorithm leads to new approaches for finding optimal density compensation functions (DCF). The method has been applied to radially encoded cardiac imaging in small animals, and reconstructed dynamic images of an in vivo mouse heart are shown. PMID:17946203
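
    The claim that MNLS reconstruction needs no density compensation can be illustrated by solving the normal equations of a small nonuniform Fourier system with conjugate gradients. A dense nonuniform DFT matrix stands in for the NUFFT and plain CG for the BCG-FFT machinery; the 1-D object, jittered trajectory, and sizes are all invented for this sketch.

```python
import numpy as np

# Toy 1-D "k-space": 160 jittered (nonuniform) sample locations for a
# 64-pixel object -- a dense stand-in for an arbitrary trajectory.
rng = np.random.default_rng(5)
n, m = 64, 160
k = (np.arange(m) + rng.uniform(0.0, 1.0, m)) / m - 0.5
F = np.exp(-2j * np.pi * np.outer(k, np.arange(n)))  # nonuniform DFT matrix
image = np.zeros(n)
image[20:40] = 1.0                                   # toy object
y = F @ image                                        # simulated k-space data

# Minimum-norm least squares via CG on the normal equations
# F^H F x = F^H y -- note: no density compensation anywhere.
A = F.conj().T @ F
b = F.conj().T @ y
x = np.zeros(n, dtype=complex)
r = b - A @ x
p = r.copy()
rs = np.real(r.conj() @ r)
for _ in range(100):
    Ap = A @ p
    alpha = rs / np.real(p.conj() @ Ap)
    x += alpha * p
    r -= alpha * Ap
    rs_new = np.real(r.conj() @ r)
    p = r + (rs_new / rs) * p
    rs = rs_new
rel_err = float(np.linalg.norm(x.real - image) / np.linalg.norm(image))
```

    Because the jittered trajectory samples above the Nyquist density, the normal matrix is well conditioned and CG recovers the object accurately without any DCF.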

  9. A Reconstructed Discontinuous Galerkin Method for the Compressible Navier-Stokes Equations on Arbitrary Grids

    SciTech Connect

    Hong Luo; Luqing Luo; Robert Nourgaliev; Vincent A. Mousseau

    2010-01-01

    A reconstruction-based discontinuous Galerkin (RDG) method is presented for the solution of the compressible Navier-Stokes equations on arbitrary grids. The RDG method, originally developed for the compressible Euler equations, is extended to discretize viscous and heat fluxes in the Navier-Stokes equations using a so-called inter-cell reconstruction, where a smooth solution is locally reconstructed using a least-squares method from the underlying discontinuous DG solution. Similar to the recovery-based DG (rDG) methods, this reconstructed DG method eliminates the introduction of ad hoc penalty or coupling terms commonly found in traditional DG methods. Unlike rDG methods, this RDG method does not need to judiciously choose a proper form of a recovered polynomial, and is thus simple, flexible, and robust, and can be used on arbitrary grids. The developed RDG method is used to compute a variety of flow problems on arbitrary meshes to demonstrate its accuracy, efficiency, robustness, and versatility. The numerical results indicate that this RDG method is able to deliver the same accuracy as the well-known Bassi-Rebay II scheme at half of its computing cost for the discretization of the viscous fluxes in the Navier-Stokes equations, clearly demonstrating its superior performance over existing DG methods for solving the compressible Navier-Stokes equations.

  10. A Reconstructed Discontinuous Galerkin Method for the Compressible Navier-Stokes Equations on Arbitrary Grids

    SciTech Connect

    Hong Luo; Luqing Luo; Robert Nourgaliev; Vincent A. Mousseau

    2010-09-01

    A reconstruction-based discontinuous Galerkin (RDG) method is presented for the solution of the compressible Navier–Stokes equations on arbitrary grids. The RDG method, originally developed for the compressible Euler equations, is extended to discretize viscous and heat fluxes in the Navier–Stokes equations using a so-called inter-cell reconstruction, where a smooth solution is locally reconstructed using a least-squares method from the underlying discontinuous DG solution. Similar to the recovery-based DG (rDG) methods, this reconstructed DG method eliminates the introduction of ad hoc penalty or coupling terms commonly found in traditional DG methods. Unlike rDG methods, this RDG method does not need to judiciously choose a proper form of a recovered polynomial, and is thus simple, flexible, and robust, and can be used on arbitrary grids. The developed RDG method is used to compute a variety of flow problems on arbitrary meshes to demonstrate its accuracy, efficiency, robustness, and versatility. The numerical results indicate that this RDG method is able to deliver the same accuracy as the well-known Bassi–Rebay II scheme at half of its computing cost for the discretization of the viscous fluxes in the Navier–Stokes equations, clearly demonstrating its superior performance over existing DG methods for solving the compressible Navier–Stokes equations.

  11. Noninvasive reconstruction of cardiac transmembrane potentials using a kernelized extreme learning method

    NASA Astrophysics Data System (ADS)

    Jiang, Mingfeng; Zhang, Heng; Zhu, Lingyan; Cao, Li; Wang, Yaming; Xia, Ling; Gong, Yinglan

    2015-04-01

    Non-invasively reconstructing the cardiac transmembrane potentials (TMPs) from body surface potentials can be treated as a regression problem. The support vector regression (SVR) method is often used to solve such regression problems; however, the computational cost of the SVR training algorithm is usually high. In this paper, another learning algorithm, termed the extreme learning machine (ELM), is proposed to reconstruct the cardiac transmembrane potentials. Moreover, ELM can be extended to single-hidden-layer feedforward neural networks with a kernel matrix (kernelized ELM), which can achieve good generalization performance at a fast learning speed. Based on realistic heart-torso models, one normal and two abnormal ventricular activation cases are used for training and testing the regression model. The experimental results show that the ELM method outperforms the single SVR method in terms of TMP reconstruction accuracy and reconstruction speed. Moreover, compared with the plain ELM method, the kernelized ELM method features better approximation and generalization ability when reconstructing the TMPs.
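
    Training a kernelized ELM reduces to solving one regularized linear system in the kernel matrix, beta = (I/C + K)^(-1) T, which is what makes it fast. A minimal sketch on a toy 1-D regression, assuming an RBF kernel and invented hyperparameters (the paper's TMP data and heart-torso model are not reproduced here):

```python
import numpy as np

def kernelized_elm_fit(X, T, C=1.0, gamma=1.0):
    """Fit a kernelized ELM: solve (I/C + K) beta = T for an RBF kernel K."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    return np.linalg.solve(np.eye(len(X)) / C + K, T)

def kernelized_elm_predict(X_train, beta, X_new, gamma=1.0):
    """Evaluate the kernel expansion at new inputs."""
    d = (np.sum(X_new**2, axis=1)[:, None]
         + np.sum(X_train**2, axis=1)[None, :] - 2 * X_new @ X_train.T)
    return np.exp(-gamma * d) @ beta

# Toy regression: recover a smooth 1-D mapping from noisy samples
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
T = np.sin(3 * X[:, 0]) + 0.01 * rng.normal(size=200)
beta = kernelized_elm_fit(X, T, C=100.0, gamma=10.0)
pred = kernelized_elm_predict(X, beta, X, gamma=10.0)
rmse = float(np.sqrt(np.mean((pred - T)**2)))
```

    The single linear solve replaces the iterative quadratic-programming training of SVR, which is the source of the speed advantage reported above.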

  12. R-L Method and BLS-GSM Denoising for Penumbra Image Reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Mei; Li, Yang; Sheng, Liang; Li, Chunhua; Wei, Fuli; Peng, Bodong

    2013-12-01

    When the neutron yield is very low, reconstruction of the coded penumbra image is rather difficult. In this paper, low-yield (10^9) 14 MeV neutron penumbra imaging was simulated by the Monte Carlo method. The Richardson-Lucy (R-L) iteration method was proposed in combination with Bayesian least squares-Gaussian scale mixture model (BLS-GSM) wavelet denoising for the simulated image. The optimal number of R-L iterations was determined through a large number of tests. The results show that, compared with the Wiener method and median filter denoising, this method better suppresses background noise, the correlation coefficient Rsr between the reconstructed and real images is larger, and the reconstruction result is better.
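
    The R-L iteration itself is compact: the current estimate is multiplied by the back-projected ratio of the observed image to the blurred estimate. A 1-D toy sketch with an invented top-hat "source" and Gaussian PSF (the BLS-GSM wavelet denoising step is omitted, and no detector noise is simulated):

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=100):
    """Richardson-Lucy iteration for 1-D deconvolution (toy sketch)."""
    psf = psf / psf.sum()
    psf_flip = psf[::-1]                     # adjoint uses the flipped PSF
    estimate = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_flip, mode="same")
    return estimate

# Toy penumbra-like profile: a top-hat source blurred by a Gaussian PSF
x = np.linspace(-1, 1, 201)
truth = (np.abs(x) < 0.3).astype(float)
psf = np.exp(-0.5 * (x / 0.05) ** 2)
psf = psf / psf.sum()
observed = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(observed, psf)
err_before = float(np.mean((observed - truth) ** 2))
err_after = float(np.mean((restored - truth) ** 2))
```

    On this noiseless toy the iteration sharpens the blurred edges toward the true top-hat; with noisy low-yield data the iteration count must be limited, which is why the optimal number of iterations matters in the paper.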

  13. Runout error correction in tomographic reconstruction by intensity summation method.

    PubMed

    Kwon, Ik Hwan; Lim, Jun; Hong, Chung Ki

    2016-09-01

    An alignment method for correction of the axial and radial runout errors of the rotation stage in X-ray phase-contrast computed tomography has been developed. Only intensity information was used, without extra hardware or complicated calculation. Notably, the method, as demonstrated herein, can utilize the halo artifact to determine displacement. PMID:27577781

  14. A Reconstructed Discontinuous Galerkin Method for the Compressible Flows on Unstructured Tetrahedral Grids

    SciTech Connect

    Hong Luo; Yidong Xia; Robert Nourgaliev; Chunpei Cai

    2011-06-01

    A reconstruction-based discontinuous Galerkin (RDG) method is presented for the solution of the compressible Navier-Stokes equations on unstructured tetrahedral grids. The RDG method, originally developed for the compressible Euler equations, is extended to discretize viscous and heat fluxes in the Navier-Stokes equations using a so-called inter-cell reconstruction, where a smooth solution is locally reconstructed using a least-squares method from the underlying discontinuous DG solution. Similar to the recovery-based DG (rDG) methods, this reconstructed DG method eliminates the introduction of ad hoc penalty or coupling terms commonly found in traditional DG methods. Unlike rDG methods, this RDG method does not need to judiciously choose a proper form of a recovered polynomial, thus is simple, flexible, and robust, and can be used on unstructured grids. The preliminary results indicate that this RDG method is stable on unstructured tetrahedral grids, and provides a viable and attractive alternative for the discretization of the viscous and heat fluxes in the Navier-Stokes equations.

  15. Noise reduction in computed tomography using a multiplicative continuous-time image reconstruction method

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Yusaku; Kojima, Takeshi; Yoshinaga, Tetsuya

    2016-03-01

    In clinical X-ray computed tomography (CT), filtered back-projection, a transform method, and iterative reconstruction such as the maximum-likelihood expectation-maximization (ML-EM) method are well-known methods for reconstructing tomographic images. As an alternative, we have presented a continuous-time image reconstruction (CIR) system described by a nonlinear dynamical system, based on the idea of continuous methods for solving tomographic inverse problems. Recently, we also proposed a multiplicative CIR system described by differential equations based on the minimization of a weighted Kullback-Leibler divergence. We prove theoretically that the divergence measure decreases along the solution to the CIR system for consistent inverse problems. Given the noisy nature of projections in clinical CT, the inverse problem belongs to the category of ill-posed problems. The noise-reduction performance of the newly developed multiplicative CIR system was investigated by means of numerical experiments using a circular phantom image. Compared to the conventional CIR and ML-EM methods, the proposed CIR method has an advantage on noisy projections with lower signal-to-noise ratios in terms of the divergence measure on the actual image under the same common measure observed via the projection data. The results lead to the conclusion that the multiplicative CIR method is more effective and robust for noise reduction in CT than the ML-EM and conventional CIR methods.
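
    For reference, the discrete ML-EM iteration the paper benchmarks against is the multiplicative update x ← x · A^T(y / Ax) / A^T 1, which decreases the Kullback-Leibler divergence between the data and the forward projection. A toy sketch with an invented dense system matrix and noiseless data (not the paper's phantom or its continuous-time system):

```python
import numpy as np

def ml_em(A, y, n_iter=500):
    """ML-EM multiplicative update x <- x * A^T(y/Ax) / A^T 1 (toy sketch)."""
    x = np.ones(A.shape[1])                      # positive initial estimate
    sens = A.T @ np.ones(A.shape[0])             # sensitivity image A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)
        x = x * (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

rng = np.random.default_rng(1)
A = rng.uniform(0.1, 1.0, size=(60, 20))         # invented system matrix
x_true = rng.uniform(0.5, 2.0, size=20)
y = A @ x_true                                   # noiseless projections
x_hat = ml_em(A, y)
rel_err = float(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
rel_resid = float(np.linalg.norm(A @ x_hat - y) / np.linalg.norm(y))
```

    The update is multiplicative, so positivity of the estimate is preserved automatically, the same property the multiplicative CIR system carries over to continuous time.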

  16. The Extratropical Northern Hemisphere Temperature Reconstruction during the Last Millennium Based on a Novel Method.

    PubMed

    Xing, Pei; Chen, Xin; Luo, Yong; Nie, Suping; Zhao, Zongci; Huang, Jianbin; Wang, Shaowu

    2016-01-01

    Large-scale climate history of the past millennium reconstructed solely from tree-ring data is prone to underestimating the amplitude of low-frequency variability. In this paper, we aim to address this problem with a novel method termed "MDVM", a combination of the ensemble empirical mode decomposition (EEMD) and variance matching techniques. We compiled a set of 211 tree-ring records from the extratropical Northern Hemisphere (30-90°N) to develop a new reconstruction of annual mean temperature by the MDVM method. From this dataset, 126 records were selected to reconstruct temperature variability on decadal and longer timescales for the period 850-2000 AD. The MDVM reconstruction depicts significant low-frequency variability in the past millennium, with an evident Medieval Warm Period (MWP) over the interval 950-1150 AD and a pronounced Little Ice Age (LIA) culminating in 1450-1850 AD. In the context of this 1150-year reconstruction, the accelerating warming of the 20th century was likely unprecedented; the coldest decades appeared in the 1640s, 1600s and 1580s, whereas the warmest decades occurred in the 1990s, 1940s and 1930s. Additionally, the MDVM reconstruction covaries broadly with changes in natural radiative forcing, and in particular shows distinct footprints of multiple volcanic eruptions in the last millennium. Comparisons of our results with previous reconstructions and model simulations show the efficiency of the MDVM method in capturing low-frequency variability, particularly the much colder signals of the LIA relative to the reference period. Our results demonstrate that the MDVM method has advantages for studying large-scale, low-frequency climate signals using tree-ring data alone. PMID:26751947

  17. The Extratropical Northern Hemisphere Temperature Reconstruction during the Last Millennium Based on a Novel Method

    PubMed Central

    Xing, Pei; Chen, Xin; Luo, Yong; Nie, Suping; Zhao, Zongci; Huang, Jianbin; Wang, Shaowu

    2016-01-01

    Large-scale climate history of the past millennium reconstructed solely from tree-ring data is prone to underestimating the amplitude of low-frequency variability. In this paper, we aim to address this problem with a novel method termed “MDVM”, a combination of the ensemble empirical mode decomposition (EEMD) and variance matching techniques. We compiled a set of 211 tree-ring records from the extratropical Northern Hemisphere (30–90°N) to develop a new reconstruction of annual mean temperature by the MDVM method. From this dataset, 126 records were selected to reconstruct temperature variability on decadal and longer timescales for the period 850–2000 AD. The MDVM reconstruction depicts significant low-frequency variability in the past millennium, with an evident Medieval Warm Period (MWP) over the interval 950–1150 AD and a pronounced Little Ice Age (LIA) culminating in 1450–1850 AD. In the context of this 1150-year reconstruction, the accelerating warming of the 20th century was likely unprecedented; the coldest decades appeared in the 1640s, 1600s and 1580s, whereas the warmest decades occurred in the 1990s, 1940s and 1930s. Additionally, the MDVM reconstruction covaries broadly with changes in natural radiative forcing, and in particular shows distinct footprints of multiple volcanic eruptions in the last millennium. Comparisons of our results with previous reconstructions and model simulations show the efficiency of the MDVM method in capturing low-frequency variability, particularly the much colder signals of the LIA relative to the reference period. Our results demonstrate that the MDVM method has advantages for studying large-scale, low-frequency climate signals using tree-ring data alone. PMID:26751947
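
    The variance-matching half of the MDVM method can be sketched directly: the proxy series is rescaled so that its mean and variance match the instrumental target over a calibration interval. This toy omits the EEMD decomposition entirely and uses invented series:

```python
import numpy as np

def variance_matching(proxy, target, calib):
    """Rescale `proxy` so its mean and variance match `target` over the
    calibration indices `calib` (the variance-matching step only; the
    EEMD band separation of MDVM is omitted in this sketch)."""
    p, t = proxy[calib], target[calib]
    scale = np.std(t) / np.std(p)
    return (proxy - np.mean(p)) * scale + np.mean(t)

rng = np.random.default_rng(2)
target = np.cumsum(rng.normal(size=300)) * 0.1            # toy "instrumental"
proxy = 2.5 * target + 1.0 + 0.05 * rng.normal(size=300)  # offset, scaled proxy
calib = np.arange(200, 300)                               # calibration window
recon = variance_matching(proxy, target, calib)
```

    Because the scaling targets the variance rather than a regression slope, the calibrated series keeps the full amplitude of the proxy's low-frequency swings instead of shrinking them toward the calibration mean.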

  18. Meshless reconstruction method for fluorescence molecular tomography based on compactly supported radial basis function.

    PubMed

    An, Yu; Liu, Jie; Zhang, Guanglei; Ye, Jinzuo; Mao, Yamin; Jiang, Shixin; Shang, Wenting; Du, Yang; Chi, Chongwei; Tian, Jie

    2015-10-01

    Fluorescence molecular tomography (FMT) is a promising tool in the study of cancer, drug discovery, and disease diagnosis, enabling noninvasive and quantitative imaging of the biodistribution of fluorophores in deep tissues via image reconstruction techniques. Conventional reconstruction methods based on the finite-element method (FEM) have achieved acceptable stability and efficiency. However, some inherent shortcomings in FEM meshes, such as time consumption in mesh generation and a large discretization error, limit further biomedical application. In this paper, we propose a meshless method for reconstruction of FMT (MM-FMT) using compactly supported radial basis functions (CSRBFs). With CSRBFs, the image domain can be accurately expressed by continuous CSRBFs, avoiding the discretization error to a certain degree. After direct collocation with CSRBFs, the conventional optimization techniques, including Tikhonov, L1-norm iteration shrinkage (L1-IS), and sparsity adaptive matching pursuit, were adopted to solve the meshless reconstruction. To evaluate the performance of the proposed MM-FMT, we performed numerical heterogeneous mouse experiments and in vivo bead-implanted mouse experiments. The results suggest that the proposed MM-FMT method can reduce the position error of the reconstruction result to smaller than 0.4 mm for the double-source case, which is a significant improvement for FMT. PMID:26451513

  19. A physics-based intravascular ultrasound image reconstruction method for lumen segmentation.

    PubMed

    Mendizabal-Ruiz, Gerardo; Kakadiaris, Ioannis A

    2016-08-01

    Intravascular ultrasound (IVUS) is a medical imaging technique in which a miniaturized ultrasound transducer located at the tip of a catheter is introduced into blood vessels, providing high-resolution, cross-sectional images of their interior. Current methods for generating an IVUS image reconstruction from radio frequency (RF) data do not account for the physics involved in the interaction between the IVUS ultrasound signal and the tissues of the vessel. In this paper, we present a novel method to generate an IVUS image reconstruction based on a scattering model that treats the tissues of the vessel as a distribution of three-dimensional point scatterers. We evaluated the impact of employing the proposed IVUS image reconstruction method on segmentation of the lumen/wall interface in 40 MHz IVUS data using an existing automatic lumen segmentation method. We compared the results with those obtained using the B-mode reconstruction on 600 randomly selected frames from twelve pullback sequences acquired from rabbit aortas and different arteries of swine. Our results indicate the feasibility of employing the proposed IVUS image reconstruction for segmentation of the lumen. PMID:27235803

  20. Local and Non-local Regularization Techniques in Emission (PET/SPECT) Tomographic Image Reconstruction Methods.

    PubMed

    Ahmad, Munir; Shahzad, Tasawar; Masood, Khalid; Rashid, Khalid; Tanveer, Muhammad; Iqbal, Rabail; Hussain, Nasir; Shahid, Abubakar; Fazal-E-Aleem

    2016-06-01

    Emission tomographic image reconstruction is an ill-posed problem, due to limited and noisy data and various image-degrading effects affecting the data, and it leads to noisy reconstructions. Explicit regularization through iterative reconstruction methods is considered a better way to compensate for reconstruction-based noise. Local smoothing and edge-preserving regularization methods can reduce reconstruction-based noise, but they produce overly smoothed images or blocky artefacts in the final image because they can only exploit local image properties. Recently, non-local regularization techniques have been introduced to overcome these problems by incorporating the geometrical global continuity and connectivity present in the objective image. These techniques can overcome the drawbacks of local regularization methods; however, they also have certain limitations, such as the choice of the regularization function, the neighbourhood size, or the calibration of several empirical parameters involved. This work compares different local and non-local regularization techniques used in emission tomographic imaging in general, and emission computed tomography in particular, for improved quality of the resultant images. PMID:26714680
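
    The essential difference from local smoothing is that non-local weights compare whole patches anywhere in the image, not just adjacent pixels, so similar structures support each other across the image. A 1-D sketch of non-local-means-style weights on an invented noisy step signal (patch size and filtering parameter are arbitrary choices, not values from the paper):

```python
import numpy as np

def nonlocal_weights(img, i, patch=2, h=0.2):
    """Non-local weights for pixel i of a 1-D signal: similarity of whole
    patches anywhere in the signal, not just of adjacent pixels."""
    n = len(img)
    pad = np.pad(img, patch, mode="edge")
    ref = pad[i:i + 2 * patch + 1]             # patch around pixel i
    w = np.empty(n)
    for j in range(n):
        pj = pad[j:j + 2 * patch + 1]
        w[j] = np.exp(-np.sum((ref - pj) ** 2) / h ** 2)
    w[i] = 0.0                                 # exclude the self-weight
    return w / w.sum()

# Denoise one pixel of a noisy step signal by non-local averaging
rng = np.random.default_rng(6)
clean = np.concatenate([np.zeros(40), np.ones(40)])
noisy = clean + 0.05 * rng.normal(size=80)
i = 20                                         # a pixel on the flat segment
w = nonlocal_weights(noisy, i)
denoised_i = float(w @ noisy)
```

    Patches that straddle the step get vanishing weight, so the edge is preserved while the flat region is averaged, the behaviour local quadratic smoothing cannot achieve.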

  1. L{sub 1/2} regularization based numerical method for effective reconstruction of bioluminescence tomography

    SciTech Connect

    Chen, Xueli; Yang, Defu; Zhang, Qitan; Liang, Jimin E-mail: jimleung@mail.xidian.edu.cn

    2014-05-14

    Even though bioluminescence tomography (BLT) exhibits significant potential and wide applications in macroscopic imaging of small animals in vivo, the inverse reconstruction is still a tough problem that has plagued researchers in this area. The ill-posedness of the inverse reconstruction arises from insufficient measurements and modeling errors, so it cannot be solved directly. In this study, an l{sub 1/2} regularization based numerical method was developed for effective reconstruction of BLT. In the method, the inverse reconstruction of BLT was formulated as an l{sub 1/2} regularization problem, and the weighted interior-point algorithm (WIPA) was applied to solve it by transforming it into a series of l{sub 1} regularization problems. The feasibility and effectiveness of the proposed method were demonstrated with numerical simulations on a digital mouse. Stability verification experiments further illustrated the robustness of the proposed method under different levels of Gaussian noise.
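
    Approximating an l_{1/2} problem through a series of l1 problems can be sketched with the common reweighting trick: each outer pass solves a weighted l1 problem (here with plain ISTA), then derives new weights w_i = 1/(sqrt(|x_i|) + eps) from the current iterate. This is a generic sketch of that idea, not the authors' weighted interior-point algorithm; the sparse test problem and every parameter are invented:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the (weighted) l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def reweighted_l1(A, y, lam=0.05, n_outer=5, n_inner=200):
    """l_{1/2}-style recovery via a sequence of weighted l1 problems."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the
    x = np.zeros(A.shape[1])                   # smooth data-fidelity term
    w = np.ones(A.shape[1])
    for _ in range(n_outer):
        for _ in range(n_inner):               # ISTA on the weighted problem
            grad = A.T @ (A @ x - y)
            x = soft_threshold(x - grad / L, lam * w / L)
        w = 1.0 / (np.sqrt(np.abs(x)) + 1e-3)  # l_{1/2}-derived weights
    return x

rng = np.random.default_rng(4)
A = rng.normal(size=(40, 100))                 # invented sensing matrix
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.5, -2.0, 1.0]         # sparse "source" vector
y = A @ x_true
x_hat = reweighted_l1(A, y)
rel_err = float(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

    The reweighting penalizes small coefficients ever more heavily, pushing the solution toward the sparser behaviour of the l_{1/2} penalty while each subproblem stays convex.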

  2. Meshless reconstruction method for fluorescence molecular tomography based on compactly supported radial basis function

    NASA Astrophysics Data System (ADS)

    An, Yu; Liu, Jie; Zhang, Guanglei; Ye, Jinzuo; Mao, Yamin; Jiang, Shixin; Shang, Wenting; Du, Yang; Chi, Chongwei; Tian, Jie

    2015-10-01

    Fluorescence molecular tomography (FMT) is a promising tool in the study of cancer, drug discovery, and disease diagnosis, enabling noninvasive and quantitative imaging of the biodistribution of fluorophores in deep tissues via image reconstruction techniques. Conventional reconstruction methods based on the finite-element method (FEM) have achieved acceptable stability and efficiency. However, some inherent shortcomings in FEM meshes, such as time consumption in mesh generation and a large discretization error, limit further biomedical application. In this paper, we propose a meshless method for reconstruction of FMT (MM-FMT) using compactly supported radial basis functions (CSRBFs). With CSRBFs, the image domain can be accurately expressed by continuous CSRBFs, avoiding the discretization error to a certain degree. After direct collocation with CSRBFs, the conventional optimization techniques, including Tikhonov, L1-norm iteration shrinkage (L1-IS), and sparsity adaptive matching pursuit, were adopted to solve the meshless reconstruction. To evaluate the performance of the proposed MM-FMT, we performed numerical heterogeneous mouse experiments and in vivo bead-implanted mouse experiments. The results suggest that the proposed MM-FMT method can reduce the position error of the reconstruction result to smaller than 0.4 mm for the double-source case, which is a significant improvement for FMT.

  3. A diffusion-based truncated projection artifact reduction method for iterative digital breast tomosynthesis reconstruction

    NASA Astrophysics Data System (ADS)

    Lu, Yao; Chan, Heang-Ping; Wei, Jun; Hadjiiski, Lubomir M.

    2013-02-01

    Digital breast tomosynthesis (DBT) has strong promise to improve sensitivity for detecting breast cancer. DBT reconstruction estimates the breast tissue attenuation using projection views (PVs) acquired in a limited angular range. Because of the limited field of view (FOV) of the detector, the PVs may not completely cover the breast in the x-ray source motion direction at large projection angles. The voxels in the imaged volume cannot be updated when they are outside the FOV, thus causing a discontinuity in intensity across the FOV boundaries in the reconstructed slices, which we refer to as the truncated projection artifact (TPA). Most existing TPA reduction methods were developed for the filtered backprojection method in the context of computed tomography. In this study, we developed a new diffusion-based method to reduce TPAs during DBT reconstruction using the simultaneous algebraic reconstruction technique (SART). Our TPA reduction method compensates for the discontinuity in background intensity outside the FOV of the current PV after each PV updating in SART. The difference in voxel values across the FOV boundary is smoothly diffused to the region beyond the FOV of the current PV. Diffusion-based background intensity estimation is performed iteratively to avoid structured artifacts. The method is applicable to TPA in both the forward and backward directions of the PVs and for any number of iterations during reconstruction. The effectiveness of the new method was evaluated by comparing the visual quality of the reconstructed slices and the measured discontinuities across the TPA with and without artifact correction at various iterations. The results demonstrated that the diffusion-based intensity compensation method reduced the TPA while preserving the detailed tissue structures. The visibility of breast lesions obscured by the TPA was improved after artifact reduction.

  4. A diffusion-based truncated projection artifact reduction method for iterative digital breast tomosynthesis reconstruction.

    PubMed

    Lu, Yao; Chan, Heang-Ping; Wei, Jun; Hadjiiski, Lubomir M

    2013-02-01

    Digital breast tomosynthesis (DBT) has strong promise to improve sensitivity for detecting breast cancer. DBT reconstruction estimates the breast tissue attenuation using projection views (PVs) acquired in a limited angular range. Because of the limited field of view (FOV) of the detector, the PVs may not completely cover the breast in the x-ray source motion direction at large projection angles. The voxels in the imaged volume cannot be updated when they are outside the FOV, thus causing a discontinuity in intensity across the FOV boundaries in the reconstructed slices, which we refer to as the truncated projection artifact (TPA). Most existing TPA reduction methods were developed for the filtered backprojection method in the context of computed tomography. In this study, we developed a new diffusion-based method to reduce TPAs during DBT reconstruction using the simultaneous algebraic reconstruction technique (SART). Our TPA reduction method compensates for the discontinuity in background intensity outside the FOV of the current PV after each PV updating in SART. The difference in voxel values across the FOV boundary is smoothly diffused to the region beyond the FOV of the current PV. Diffusion-based background intensity estimation is performed iteratively to avoid structured artifacts. The method is applicable to TPA in both the forward and backward directions of the PVs and for any number of iterations during reconstruction. The effectiveness of the new method was evaluated by comparing the visual quality of the reconstructed slices and the measured discontinuities across the TPA with and without artifact correction at various iterations. The results demonstrated that the diffusion-based intensity compensation method reduced the TPA while preserving the detailed tissue structures. The visibility of breast lesions obscured by the TPA was improved after artifact reduction. PMID:23318346
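
    The compensation idea, diffusing the intensity discontinuity at the FOV boundary smoothly into the region beyond it, can be sketched in one dimension with explicit heat-equation steps. An invented step profile stands in for a reconstructed slice row; this illustrates the diffusion principle only, not the method's integration into SART updating:

```python
import numpy as np

def reduce_tpa_step(row, b, n_steps=300, dt=0.25):
    """Smooth the intensity discontinuity at FOV boundary index `b` by
    diffusing the inside boundary value into the outside region
    (a 1-D toy of diffusion-based compensation, not the full method)."""
    out = row.astype(float)
    outside = out[b:].copy()
    outside[0] = out[b - 1]                  # pin boundary to inside value
    for _ in range(n_steps):                 # explicit heat-equation steps
        lap = np.zeros_like(outside)
        lap[1:-1] = outside[2:] - 2 * outside[1:-1] + outside[:-2]
        outside[1:] += dt * lap[1:]          # keep the pinned value fixed
    out[b:] = outside
    return out

# Invented slice row with a TPA-like step at the FOV boundary (index 30)
row = np.concatenate([np.full(30, 1.0), np.full(30, 0.2)])
fixed = reduce_tpa_step(row, 30)
jump_before = abs(row[30] - row[29])
jump_after = abs(fixed[30] - fixed[29])
```

    The hard step is replaced by a smooth decay away from the boundary, which is the qualitative effect the paper's background-intensity diffusion achieves around each projection's FOV edge.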

  5. A sampling method for the reconstruction of a periodic interface in a layered medium

    NASA Astrophysics Data System (ADS)

    Sun, Guanying; Zhang, Ruming

    2016-07-01

    In this paper, we consider the inverse problem of reconstructing periodic interfaces in a two-layered medium with TM-mode. We propose a sampling-type method to recover the top periodic interface from the near-field data measured on a straight line above the total structure. Finally, numerical experiments are illustrated to show the effectiveness of the method.

  6. A Comparison of Affect Ratings Obtained with Ecological Momentary Assessment and the Day Reconstruction Method

    ERIC Educational Resources Information Center

    Dockray, Samantha; Grant, Nina; Stone, Arthur A.; Kahneman, Daniel; Wardle, Jane; Steptoe, Andrew

    2010-01-01

    Measurement of affective states in everyday life is of fundamental importance in many types of quality of life, health, and psychological research. Ecological momentary assessment (EMA) is the recognized method of choice, but the respondent burden can be high. The day reconstruction method (DRM) was developed by Kahneman and colleagues ("Science,"…

  7. Method and apparatus for reconstructing in-cylinder pressure and correcting for signal decay

    DOEpatents

    Huang, Jian

    2013-03-12

    A method comprises steps for reconstructing in-cylinder pressure data from a vibration signal collected from a vibration sensor mounted on an engine component where it can generate a signal with a high signal-to-noise ratio, and correcting the vibration signal for errors introduced by vibration signal charge decay and sensor sensitivity. The correction factors are determined as a function of estimated motoring pressure and the measured vibration signal itself with each of these being associated with the same engine cycle. Accordingly, the method corrects for charge decay and changes in sensor sensitivity responsive to different engine conditions to allow greater accuracy in the reconstructed in-cylinder pressure data. An apparatus is also disclosed for practicing the disclosed method, comprising a vibration sensor, a data acquisition unit for receiving the vibration signal, a computer processing unit for processing the acquired signal and a controller for controlling the engine operation based on the reconstructed in-cylinder pressure.

  8. Potential benefit of the CT adaptive statistical iterative reconstruction method for pediatric cardiac diagnosis

    NASA Astrophysics Data System (ADS)

    Miéville, Frédéric A.; Ayestaran, Paul; Argaud, Christophe; Rizzo, Elena; Ou, Phalla; Brunelle, Francis; Gudinchet, François; Bochud, François; Verdun, Francis R.

    2010-04-01

    Adaptive Statistical Iterative Reconstruction (ASIR) is a new imaging reconstruction technique recently introduced by General Electric (GE). This technique, when combined with a conventional filtered back-projection (FBP) approach, is able to improve image noise reduction. To quantify the benefits provided on the image quality and the dose reduction by the ASIR method with respect to pure FBP, the standard deviation (SD), the modulation transfer function (MTF), the noise power spectrum (NPS), the image uniformity and the noise homogeneity were examined. Measurements were performed on a quality control phantom when varying the CT dose index (CTDIvol) and the reconstruction kernels. A 64-MDCT was employed and raw data were reconstructed with different percentages of ASIR on a CT console dedicated to ASIR reconstruction. Three radiologists also assessed a pediatric cardiac exam reconstructed with different ASIR percentages using the visual grading analysis (VGA) method. For the standard, soft and bone reconstruction kernels, the SD is reduced when the ASIR percentage increases up to 100%, with a higher benefit at low CTDIvol. MTF medium frequencies were slightly enhanced and modifications of the NPS shape curve were observed. However, for the pediatric cardiac CT exam, VGA scores indicate an upper limit to the ASIR benefit: 40% ASIR was observed to be the best trade-off between noise reduction and clinical realism of organ images. Using phantom results, 40% ASIR corresponded to an estimated dose reduction of 30% under pediatric cardiac protocol conditions. In spite of this discrepancy between phantom and clinical results, the ASIR method is an important option when considering the reduction of radiation dose, especially for pediatric patients.
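ASIR itself is proprietary, but the reported SD-versus-percentage behaviour can be mimicked with a toy model: a linear blend of a noisy FBP-like image with a denoised stand-in. The function names and the 3×3 box filter below are illustrative assumptions, not GE's algorithm:

```python
import numpy as np

def mean3x3(img):
    """3x3 box filter with replicated edges; a crude stand-in for an
    iteratively denoised reconstruction."""
    p = np.pad(img, 1, mode="edge")
    return sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def asir_blend(fbp_img, ir_img, asir_percent):
    """Linear blend indexed by an 'ASIR percentage':
    0% -> pure FBP image, 100% -> pure iterative-style image."""
    a = asir_percent / 100.0
    return (1.0 - a) * fbp_img + a * ir_img
```

On a flat noisy phantom, the SD of the blend falls monotonically as the percentage rises, matching the qualitative trend the abstract reports for the standard, soft and bone kernels.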

  9. Revisiting the analog method to obtain uncertainty estimates for proxy surrogate reconstructions

    NASA Astrophysics Data System (ADS)

    Bothe, Oliver

    2015-04-01

    Proxy surrogate reconstructions are a computationally cheap method to combine information from spatially sparse proxy records or instrumental data series with the spatially complete fields from climate simulations to increase our knowledge about past climates. The method assumes that the analog pool includes the entire bandwidth of the state-space of the variable under consideration. As proxy records are uncertain indicators of the state of past climate variables, the analog search should ideally allow for the inclusion of the variance unexplained by the proxy indicator in the variable of interest, i.e. it should quantify the uncertainty of the reconstructions based on the signal strength in the proxy records. Up to this point, traditional implementations have not considered this uncertainty. This presentation details assumptions based on the calibration correlation of the proxies which result in an ensemble pool of analogs consistent with the proxy record at each data point and explicitly considering the noise in the proxy record. The proxy-pool of the Euro2K-reconstruction and the MPI-ESM-COSMOS ensemble of simulations of the last millennium provide the data to obtain a set of proxy surrogate field estimates for the June, July and August summer near surface air temperature of the last 750 years for the European domain. The restrictions imposed on the analog selection can result in failure to find suitable analogs. The underlying assumptions allow the construction of an uncertainty envelope for the areal mean of the field reconstructions. The ensemble of fields further highlights the ambiguity of field reconstructions constrained by a limited set of proxies. Additionally, the uncertainty envelope, its median estimate and the respective best estimate can be used to easily validate reconstructions obtained with more complex methods. That is, the proxy surrogate reconstruction estimates agree very well with the Euro2K-reconstruction over the last 750 years. They also well

  10. A gene network engineering platform for lactic acid bacteria

    PubMed Central

    Kong, Wentao; Kapuganti, Venkata S.; Lu, Ting

    2016-01-01

    Recent developments in synthetic biology have positioned lactic acid bacteria (LAB) as a major class of cellular chassis for applications. To achieve the full potential of LAB, one fundamental prerequisite is the capacity for rapid engineering of complex gene networks, such as natural biosynthetic pathways and multicomponent synthetic circuits, into which cellular functions are encoded. Here, we present a synthetic biology platform for rapid construction and optimization of large-scale gene networks in LAB. The platform involves a copy-controlled shuttle for hosting target networks and two associated strategies that enable efficient genetic editing and phenotypic validation. By using a nisin biosynthesis pathway and its variants as examples, we demonstrated multiplex, continuous editing of small DNA parts, such as ribosome-binding sites, as well as efficient manipulation of large building blocks such as genes and operons. To showcase the platform, we applied it to expand the phenotypic diversity of the nisin pathway by quickly generating a library of 63 pathway variants. We further demonstrated its utility by altering the regulatory topology of the nisin pathway for constitutive bacteriocin biosynthesis. This work demonstrates the feasibility of rapid and advanced engineering of gene networks in LAB, fostering their applications in biomedicine and other areas. PMID:26503255

  11. A gene network engineering platform for lactic acid bacteria.

    PubMed

    Kong, Wentao; Kapuganti, Venkata S; Lu, Ting

    2016-02-29

    Recent developments in synthetic biology have positioned lactic acid bacteria (LAB) as a major class of cellular chassis for applications. To achieve the full potential of LAB, one fundamental prerequisite is the capacity for rapid engineering of complex gene networks, such as natural biosynthetic pathways and multicomponent synthetic circuits, into which cellular functions are encoded. Here, we present a synthetic biology platform for rapid construction and optimization of large-scale gene networks in LAB. The platform involves a copy-controlled shuttle for hosting target networks and two associated strategies that enable efficient genetic editing and phenotypic validation. By using a nisin biosynthesis pathway and its variants as examples, we demonstrated multiplex, continuous editing of small DNA parts, such as ribosome-binding sites, as well as efficient manipulation of large building blocks such as genes and operons. To showcase the platform, we applied it to expand the phenotypic diversity of the nisin pathway by quickly generating a library of 63 pathway variants. We further demonstrated its utility by altering the regulatory topology of the nisin pathway for constitutive bacteriocin biosynthesis. This work demonstrates the feasibility of rapid and advanced engineering of gene networks in LAB, fostering their applications in biomedicine and other areas. PMID:26503255

  12. Comparison of Parallel MRI Reconstruction Methods for Accelerated 3D Fast Spin-Echo Imaging

    PubMed Central

    Xiao, Zhikui; Hoge, W. Scott; Mulkern, R.V.; Zhao, Lei; Hu, Guangshu; Kyriakos, Walid E.

    2014-01-01

    Parallel MRI (pMRI) achieves imaging acceleration by partially substituting gradient-encoding steps with spatial information contained in the component coils of the acquisition array. Variable-density subsampling in pMRI was previously shown to yield improved two-dimensional (2D) imaging in comparison to uniform subsampling, but has yet to be used routinely in clinical practice. In an effort to reduce acquisition time for 3D fast spin-echo (3D-FSE) sequences, this work explores a specific nonuniform sampling scheme for 3D imaging, subsampling along two phase-encoding (PE) directions on a rectilinear grid. We use two reconstruction methods—2D-GRAPPA-Operator and 2D-SPACE RIP—and present a comparison between them. We show that high-quality images can be reconstructed using both techniques. To evaluate the proposed sampling method and reconstruction schemes, results via simulation, phantom study, and in vivo 3D human data are shown. We find that fewer artifacts can be seen in the 2D-SPACE RIP reconstructions than in 2D-GRAPPA-Operator reconstructions, with comparable reconstruction times. PMID:18727083

  13. An extended stochastic reconstruction method for catalyst layers in proton exchange membrane fuel cells

    NASA Astrophysics Data System (ADS)

    Kang, Jinfen; Moriyama, Koji; Kim, Seung Hyun

    2016-09-01

    This paper presents an extended, stochastic reconstruction method for catalyst layers (CLs) of Proton Exchange Membrane Fuel Cells (PEMFCs). The focus is placed on the reconstruction of customized, low platinum (Pt) loading CLs where the microstructure of CLs can substantially influence the performance. The sphere-based simulated annealing (SSA) method is extended to generate the CL microstructures with specified and controllable structural properties for agglomerates, ionomer, and Pt catalysts. In the present method, the agglomerate structures are controlled by employing a trial two-point correlation function used in the simulated annealing process. An off-set method is proposed to generate more realistic ionomer structures. The variations of ionomer structures at different humidity conditions are considered to mimic the swelling effects. A method to control Pt loading, distribution, and utilization is presented. The extension of the method to consider heterogeneity in structural properties, which can be found in manufactured CL samples, is presented. Various reconstructed CLs are generated to demonstrate the capability of the proposed method. Proton transport properties of the reconstructed CLs are calculated and validated with experimental data.
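The simulated-annealing reconstruction above is driven by a two-point correlation function. A minimal FFT-based estimator for a periodic binary microstructure (a standard textbook construction, not the authors' code) can be sketched as:

```python
import numpy as np

def two_point_correlation(phase):
    """S2 of a periodic binary (0/1) microstructure via the
    Wiener-Khinchin relation: the normalised autocorrelation of the
    indicator field. S2 at zero lag equals the phase volume fraction."""
    f = np.fft.fftn(phase.astype(float))
    return np.fft.ifftn(f * np.conj(f)).real / phase.size
```

In an annealing loop, voxel swaps would be accepted or rejected according to how much they reduce the mismatch between this measured S2 and the trial (target) correlation function mentioned in the abstract.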

  14. Methods of bronchial tree reconstruction and camera distortion corrections for virtual endoscopic environments.

    PubMed

    Socha, Mirosław; Duplaga, Mariusz; Turcza, Paweł

    2004-01-01

    The use of three-dimensional visualization of anatomical structures in diagnostics and medical training is growing. The main components of virtual respiratory tract environments include reconstruction and simulation algorithms as well as correction methods of endoscope camera distortions in the case of virtually-enhanced navigation systems. Reconstruction methods usually rely on initial computer tomography (CT) image segmentation to trace contours of the tracheobronchial tree, which in turn are used in the visualization process. The main segmentation methods, including relatively simple approaches such as adaptive region-growing algorithms and more complex methods, e.g. hybrid algorithms based on region growing and mathematical morphology methods, are described in this paper. The errors and difficulties in the process of tracheobronchial tree reconstruction depend on the occurrence of distortions during CT image acquisition. They are usually related to the inability to exactly fulfil the sampling theorem's conditions. Other forms of distortions and noise, such as additive white Gaussian noise, may also appear. The impact of these distortions on the segmentation and reconstruction may be diminished through the application of appropriately selected image prefiltering, which is also demonstrated in this paper. Methods of surface rendering (ray-casting, ray-tracing techniques) and volume rendering are shown, with special focus on aspects of hardware and software implementations. Finally, methods of camera distortion correction and simulation are presented. The mathematical camera models, the scope of their applications and the types of distortions have also been indicated. PMID:15718617

  15. Performance of climate field reconstruction methods over multiple seasons and climate variables

    NASA Astrophysics Data System (ADS)

    Dannenberg, Matthew P.; Wise, Erika K.

    2013-09-01

    Studies of climate variability require long time series of data but are limited by the absence of preindustrial instrumental records. For such studies, proxy-based climate reconstructions, such as those produced from tree-ring widths, provide the opportunity to extend climatic records into preindustrial periods. Climate field reconstruction (CFR) methods are capable of producing spatially-resolved reconstructions of climate fields. We assessed the performance of three commonly used CFR methods (canonical correlation analysis, point-by-point regression, and regularized expectation maximization) over spatially-resolved fields using multiple seasons and climate variables. Warm- and cool-season geopotential height, precipitable water, and surface temperature were tested for each method using tree-ring chronologies. Spatial patterns of reconstructive skill were found to be generally consistent across each of the methods, but the robustness of the validation metrics varied by CFR method, season, and climate variable. The most robust validation metrics were achieved with geopotential height, the October through March temporal composite, and the Regularized Expectation Maximization method. While our study is limited to assessment of skill over multidecadal (rather than multi-centennial) time scales, our findings suggest that the climate variable of interest, seasonality, and spatial domain of the target field should be considered when assessing potential CFR methods for real-world applications.
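Reconstruction skill in CFR studies is commonly scored with validation statistics such as the reduction of error (RE). A minimal sketch of that metric (the function name and signature are mine, not the authors' validation code):

```python
import numpy as np

def reduction_of_error(obs, recon, calib_mean):
    """RE skill score: 1 minus the reconstruction's sum of squared
    errors divided by that of a naive prediction, namely the
    calibration-period mean. RE > 0 indicates skill above the naive
    baseline; RE = 1 is a perfect reconstruction."""
    sse = ((obs - recon) ** 2).sum()
    sse_naive = ((obs - calib_mean) ** 2).sum()
    return 1.0 - sse / sse_naive
```

Comparing such scores field point by field point is one way the spatial patterns of reconstructive skill described above can be mapped.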

  16. A Reconstructed Discontinuous Galerkin Method for the Magnetohydrodynamics on Arbitrary Grids

    NASA Astrophysics Data System (ADS)

    Halashi, Behrouz Karami

    A reconstructed discontinuous Galerkin (RDG) method based on a Hierarchical Weighted Essentially Non-oscillatory (WENO) reconstruction using a Taylor basis, designed not only to enhance the accuracy of discontinuous Galerkin methods but also to ensure the nonlinear stability of the RDG method, is developed for the solution of the magnetohydrodynamics (MHD) on arbitrary grids. In this method, a quadratic polynomial solution (P2) is first reconstructed using a Hermite WENO (HWENO) reconstruction from the underlying linear polynomial (P1) discontinuous Galerkin solution to ensure the linear stability of the RDG method and to improve the efficiency of the underlying DG method. By taking advantage of handily available and yet invaluable information, namely the derivatives in the DG formulation, the stencils used in the reconstruction involve only the Von Neumann neighborhood (adjacent face-neighboring cells) and thus are compact and consistent with the underlying DG method. The gradients (first moments) of the quadratic polynomial solution are then reconstructed using a WENO reconstruction in order to eliminate spurious oscillations in the vicinity of strong discontinuities, thus ensuring the nonlinear stability of the RDG method. Temporal discretization is done using a 4th order explicit Runge-Kutta method. The HLLD Riemann solver, introduced in the literature for one-dimensional MHD problems, is extended to three-dimensional problems on unstructured grids and used to compute the flux functions at interfaces in the present work. The divergence-free constraint is satisfied using the so-called Locally Divergence Free (LDF) approach. The LDF formulation is especially attractive in the context of DG methods, where the gradients of independent variables are handily available and only one of the computed gradients needs simply to be modified by the divergence-free constraint at the end of each time step.
The developed RDG method is used to compute a variety of fluid dynamics and

  17. Cosmic web reconstruction through density ridges: method and algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Yen-Chi; Ho, Shirley; Freeman, Peter E.; Genovese, Christopher R.; Wasserman, Larry

    2015-11-01

    The detection and characterization of filamentary structures in the cosmic web allows cosmologists to constrain parameters that dictate the evolution of the Universe. While many filament estimators have been proposed, they generally lack estimates of uncertainty, reducing their inferential power. In this paper, we demonstrate how one may apply the subspace constrained mean shift (SCMS) algorithm (Ozertem & Erdogmus 2011; Genovese et al. 2014) to uncover filamentary structure in galaxy data. The SCMS algorithm is a gradient ascent method that models filaments as density ridges, one-dimensional smooth curves that trace high-density regions within the point cloud. We also demonstrate how augmenting the SCMS algorithm with bootstrap-based methods of uncertainty estimation allows one to place uncertainty bands around putative filaments. We apply the SCMS first to the data set generated from the Voronoi model. The density ridges show strong agreement with the filaments from the Voronoi method. We then apply the SCMS method to data sets sampled from a P3M N-body simulation, with galaxy number densities consistent with SDSS and WFIRST-AFTA, and to LOWZ and CMASS data from the Baryon Oscillation Spectroscopic Survey (BOSS). To further assess the efficacy of SCMS, we compare the relative locations of BOSS filaments with galaxy clusters in the redMaPPer catalogue, and find that redMaPPer clusters are significantly closer (with p-values <10-9) to SCMS-detected filaments than to randomly selected galaxies.
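One SCMS iteration can be sketched in a few lines. This 2-D version with a Gaussian KDE and a one-dimensional ridge is my own illustration of the Ozertem & Erdogmus idea, not the authors' implementation:

```python
import numpy as np

def scms_step(x, data, h):
    """One subspace-constrained mean-shift update in 2-D: form the
    ordinary mean-shift vector, then keep only its component along
    the eigenvector of the log-density Hessian with the most negative
    eigenvalue (the across-ridge direction), so the point climbs onto
    the ridge without sliding along it."""
    d = (data - x) / h                              # (n, 2) scaled offsets
    w = np.exp(-0.5 * (d ** 2).sum(axis=1))         # Gaussian kernel weights
    p = w.sum()                                     # density, up to a constant
    ms = (w[:, None] * (data - x)).sum(axis=0) / p  # mean-shift vector
    g = (w[:, None] * d).sum(axis=0) / h            # gradient, same constant
    hess = (w[:, None, None] *
            (d[:, :, None] * d[:, None, :] - np.eye(2))).sum(axis=0) / h**2
    hess_log = hess / p - np.outer(g, g) / p**2     # Hessian of log-density
    _, vecs = np.linalg.eigh(hess_log)              # ascending eigenvalues
    v = vecs[:, [0]]                                # most negative curvature
    return x + (v @ (v.T @ ms[:, None])).ravel()
```

Iterating this step from points scattered around a noisy line drives them onto the density ridge, the behaviour the paper exploits to trace filaments.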

  18. A novel building boundary reconstruction method based on lidar data and images

    NASA Astrophysics Data System (ADS)

    Chen, Yiming; Zhang, Wuming; Zhou, Guoqing; Yan, Guangjian

    2013-09-01

    Building boundary is important for the urban mapping and real estate industry applications. The reconstruction of building boundary is also a significant but difficult step in generating city building models. As Light detection and ranging system (Lidar) can acquire large and dense point cloud data fast and easily, it has great advantages for building reconstruction. In this paper, we combine Lidar data and images to develop a novel building boundary reconstruction method. We use only one scan of Lidar data and one image to do the reconstruction. The process consists of a sequence of three steps: project boundary Lidar points to image; extract accurate boundary from image; and reconstruct boundary in Lidar points. We define a relationship between 3D points and the pixel coordinates. Then we extract the boundary in the image and use the relationship to get boundary in the point cloud. The method presented here reduces the difficulty of data acquisition effectively. The theory is not complex so it has low computational complexity. It can also be widely used in the data acquired by other 3D scanning devices to improve the accuracy. Results of the experiment demonstrate that this method has a clear advantage and high efficiency over others, particularly in the data with large point spacing.

  19. Significant impact of miRNA–target gene networks on genetics of human complex traits

    PubMed Central

    Okada, Yukinori; Muramatsu, Tomoki; Suita, Naomasa; Kanai, Masahiro; Kawakami, Eiryo; Iotchkova, Valentina; Soranzo, Nicole; Inazawa, Johji; Tanaka, Toshihiro

    2016-01-01

    The impact of microRNA (miRNA) on the genetics of human complex traits, especially in the context of miRNA-target gene networks, has not been fully assessed. Here, we developed a novel analytical method, MIGWAS, to comprehensively evaluate enrichment of genome-wide association study (GWAS) signals in miRNA–target gene networks. We applied the method to the GWAS results of the 18 human complex traits from >1.75 million subjects, and identified significant enrichment in rheumatoid arthritis (RA), kidney function, and adult height (P < 0.05/18 = 0.0028, most significant enrichment in RA with P = 1.7 × 10−4). Interestingly, these results were consistent with current literature-based knowledge of the traits on miRNA obtained through the NCBI PubMed database search (adjusted P = 0.024). Our method provided a list of miRNA and target gene pairs with excess genetic association signals, part of which included drug target genes. We identified a miRNA (miR-4728-5p) that downregulates PADI2, a novel RA risk gene considered as a promising therapeutic target (rs761426, adjusted P = 2.3 × 10−9). Our study indicated the significant impact of miRNA–target gene networks on the genetics of human complex traits, and provided resources which should contribute to drug discovery and nucleic acid medicine. PMID:26927695

  20. Significant impact of miRNA-target gene networks on genetics of human complex traits.

    PubMed

    Okada, Yukinori; Muramatsu, Tomoki; Suita, Naomasa; Kanai, Masahiro; Kawakami, Eiryo; Iotchkova, Valentina; Soranzo, Nicole; Inazawa, Johji; Tanaka, Toshihiro

    2016-01-01

    The impact of microRNA (miRNA) on the genetics of human complex traits, especially in the context of miRNA-target gene networks, has not been fully assessed. Here, we developed a novel analytical method, MIGWAS, to comprehensively evaluate enrichment of genome-wide association study (GWAS) signals in miRNA-target gene networks. We applied the method to the GWAS results of the 18 human complex traits from >1.75 million subjects, and identified significant enrichment in rheumatoid arthritis (RA), kidney function, and adult height (P < 0.05/18 = 0.0028, most significant enrichment in RA with P = 1.7 × 10−4). Interestingly, these results were consistent with current literature-based knowledge of the traits on miRNA obtained through the NCBI PubMed database search (adjusted P = 0.024). Our method provided a list of miRNA and target gene pairs with excess genetic association signals, part of which included drug target genes. We identified a miRNA (miR-4728-5p) that downregulates PADI2, a novel RA risk gene considered as a promising therapeutic target (rs761426, adjusted P = 2.3 × 10−9). Our study indicated the significant impact of miRNA-target gene networks on the genetics of human complex traits, and provided resources which should contribute to drug discovery and nucleic acid medicine. PMID:26927695

  1. A comparative study of limited-angle cone-beam reconstruction methods for breast tomosynthesis

    SciTech Connect

    Zhang Yiheng; Chan, H.-P.; Sahiner, Berkman; Wei, Jun; Goodsitt, Mitchell M.; Hadjiiski, Lubomir M.; Ge Jun; Zhou Chuan

    2006-10-15

    Digital tomosynthesis mammography (DTM) is a promising new modality for breast cancer detection. In DTM, projection-view images are acquired at a limited number of angles over a limited angular range and the imaged volume is reconstructed from the two-dimensional projections, thus providing three-dimensional structural information of the breast tissue. In this work, we investigated three representative reconstruction methods for this limited-angle cone-beam tomographic problem, including the backprojection (BP) method, the simultaneous algebraic reconstruction technique (SART) and the maximum likelihood method with the convex algorithm (ML-convex). The SART and ML-convex methods were both initialized with BP results to achieve efficient reconstruction. A second generation GE prototype tomosynthesis mammography system with a stationary digital detector was used for image acquisition. Projection-view images were acquired from 21 angles in 3° increments over a ±30° angular range. We used an American College of Radiology phantom and designed three additional phantoms to evaluate the image quality and reconstruction artifacts. In addition to visual comparison of the reconstructed images of different phantom sets, we employed the contrast-to-noise ratio (CNR), a line profile of features, an artifact spread function (ASF), a relative noise power spectrum (NPS), and a line object spread function (LOSF) to quantitatively evaluate the reconstruction results. It was found that for the phantoms with homogeneous background, the BP method resulted in less noisy tomosynthesized images and higher CNR values for masses than the SART and ML-convex methods. However, the two iterative methods provided greater contrast enhancement for both masses and calcification, sharper LOSF, and reduced interplane blurring and artifacts with better ASF behaviors for masses. For a contrast-detail phantom with heterogeneous tissue-mimicking background, the BP method had strong blurring
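The CNR figure of merit used above is straightforward to state; a mask-based sketch (the ROI definitions here are illustrative, not the authors' exact measurement protocol):

```python
import numpy as np

def cnr(image, roi_mask, bg_mask):
    """Contrast-to-noise ratio of a feature: the difference between the
    ROI and background mean intensities, normalised by the background
    standard deviation."""
    contrast = image[roi_mask].mean() - image[bg_mask].mean()
    return contrast / image[bg_mask].std()
```

For a synthetic feature of amplitude 5 on unit-variance background noise, this definition returns a CNR near 5, which is the sense in which the abstract compares BP and iterative reconstructions.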

  2. The least error method for sparse solution reconstruction

    NASA Astrophysics Data System (ADS)

    Bredies, K.; Kaltenbacher, B.; Resmerita, E.

    2016-09-01

    This work deals with a regularization method enforcing solution sparsity of linear ill-posed problems by appropriate discretization in the image space. Namely, we formulate the so-called least error method in an ℓ1 setting and perform the convergence analysis by choosing the discretization level according to an a priori rule, as well as two a posteriori rules, via the discrepancy principle and the monotone error rule, respectively. Depending on the setting, linear or sublinear convergence rates in the ℓ1-norm are obtained under a source condition yielding sparsity of the solution. A part of the study is devoted to analyzing the structure of the approximate solutions and of the involved source elements.

  3. A limited-angle CT reconstruction method based on anisotropic TV minimization

    NASA Astrophysics Data System (ADS)

    Chen, Zhiqiang; Jin, Xin; Li, Liang; Wang, Ge

    2013-04-01

    This paper presents a compressed sensing (CS)-inspired reconstruction method for limited-angle computed tomography (CT). Currently, CS-inspired CT reconstructions are often performed by minimizing the total variation (TV) of a CT image subject to data consistency. A key to obtaining high image quality is to optimize the balance between TV-based smoothing and data fidelity. In the case of the limited-angle CT problem, the strength of data consistency is angularly varying. For example, given a parallel beam of x-rays, information extracted in the Fourier domain is mostly orthogonal to the direction of x-rays, while little is probed otherwise. However, the TV minimization process is isotropic, suggesting that it is unfit for limited-angle CT. Here we introduce an anisotropic TV minimization method to address this challenge. The advantage of our approach is demonstrated in numerical simulation with both phantom and real CT images, relative to the TV-based reconstruction.
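The paper couples TV with projection data consistency; the anisotropic-TV ingredient alone can be illustrated as a denoising energy with direction-dependent weights. The weights, smoothing parameter, and step size below are my own assumptions for a standalone sketch, not the authors' reconstruction algorithm:

```python
import numpy as np

def aniso_tv_denoise(f, lam=0.3, wx=1.0, wy=0.2, n_iter=200, tau=0.05, eps=1e-2):
    """Gradient descent on 0.5*||u - f||^2 + lam*(wx*|Dx u| + wy*|Dy u|)
    with a smoothed absolute value |t| ~ sqrt(t^2 + eps). Unequal wx, wy
    penalise the two gradient directions differently -- the anisotropy."""
    u = f.astype(float).copy()
    for _ in range(n_iter):
        dx = np.diff(u, axis=1, append=u[:, -1:])   # forward differences,
        dy = np.diff(u, axis=0, append=u[-1:, :])   # replicated edges
        px = dx / np.sqrt(dx**2 + eps)              # smoothed sign of dx
        py = dy / np.sqrt(dy**2 + eps)
        # Adjoints of the forward differences (discrete negative divergence).
        dtx = np.concatenate([-px[:, :1], px[:, :-1] - px[:, 1:]], axis=1)
        dty = np.concatenate([-py[:1, :], py[:-1, :] - py[1:, :]], axis=0)
        u -= tau * ((u - f) + lam * (wx * dtx + wy * dty))
    return u
```

Making wx and wy vary with the locally available angular information, rather than keeping them fixed as here, is the spirit of the anisotropic modification the abstract describes.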

  4. Reconstruction from Uniformly Attenuated SPECT Projection Data Using the DBH Method

    SciTech Connect

    Huang, Qiu; You, Jiangsheng; Zeng, Gengsheng L.; Gullberg, Grant T.

    2008-03-20

    An algorithm was developed for the two-dimensional (2D) reconstruction of truncated and non-truncated uniformly attenuated data acquired from single photon emission computed tomography (SPECT). The algorithm is able to reconstruct data from half-scan (180°) and short-scan (180° + fan angle) acquisitions for parallel- and fan-beam geometries, respectively, as well as data from full-scan (360°) acquisitions. The algorithm is a derivative, backprojection, and Hilbert transform (DBH) method, which involves the backprojection of differentiated projection data followed by an inversion of the finite weighted Hilbert transform. The kernel of the inverse weighted Hilbert transform is solved numerically using matrix inversion. Numerical simulations confirm that the DBH method provides accurate reconstructions from half-scan and short-scan data, even when there is truncation. However, as the attenuation increases, finer data sampling is required.
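The analytic core of DBH builds on the Hilbert transform; a minimal FFT implementation of the discrete Hilbert transform (the generic building block, not the paper's finite weighted inversion) looks like:

```python
import numpy as np

def hilbert_transform(x):
    """Discrete Hilbert transform via the FFT: multiply positive
    frequencies by -1j and negative frequencies by +1j (the frequency
    response -1j*sgn(omega)); DC and Nyquist bins are zeroed."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n, dtype=complex)
    h[1:n // 2] = -1j        # positive frequencies
    h[n // 2 + 1:] = 1j      # negative frequencies
    return np.fft.ifft(spec * h).real
```

With this sign convention the Hilbert transform of cos is sin; the DBH method's extra step is inverting a *weighted, finite-interval* version of this operator, which the abstract says is done by numerical matrix inversion.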

  5. An integrand reconstruction method for three-loop amplitudes

    NASA Astrophysics Data System (ADS)

    Badger, Simon; Frellesvig, Hjalte; Zhang, Yang

    2012-08-01

    We consider the maximal cut of a three-loop four point function with massless kinematics. By applying Gröbner bases and primary decomposition we develop a method which extracts all ten propagator master integral coefficients for an arbitrary triple-box configuration via generalized unitarity cuts. As an example we present analytic results for the three loop triple-box contribution to gluon-gluon scattering in Yang-Mills with adjoint fermions and scalars in terms of three master integrals.

  6. Application of information theory methods to food web reconstruction

    USGS Publications Warehouse

    Moniz, L.J.; Cooch, E.G.; Ellner, S.P.; Nichols, J.D.; Nichols, J.M.

    2007-01-01

    In this paper we use information theory techniques on time series of abundances to determine the topology of a food web. At the outset, the food web participants (two consumers, two resources) are known; in addition we know that each consumer prefers one of the resources over the other. However, we do not know which consumer prefers which resource, and if this preference is absolute (i.e., whether or not the consumer will consume the non-preferred resource). Although the consumers and resources are identified at the beginning of the experiment, we also provide evidence that the consumers are not resources for each other, and the resources do not consume each other. We do show that there is significant mutual information between resources; the model is seasonally forced and some shared information between resources is expected. Similarly, because the model is seasonally forced, we expect shared information between consumers as they respond to the forcing of the resources. The model that we consider does include noise, and in an effort to demonstrate that these methods may be useful for data other than model output, we show the efficacy of our methods with decreasing time series size; in this particular case we obtain reasonably clear results with a time series length of 400 points. This approaches ecological time series lengths from real systems.
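The mutual-information machinery behind this analysis can be sketched with a plug-in histogram estimator (the bin count is my choice, not the paper's):

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Plug-in (histogram) estimate of the mutual information between
    two time series, in nats: sum of p(x,y)*log(p(x,y)/(p(x)p(y)))."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # skip empty cells (0*log 0 = 0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

With series of length 400, as in the abstract, a strongly coupled pair yields a markedly higher estimate than an independent pair, which is the kind of contrast used to infer who consumes whom.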

  7. A comparative study of interface reconstruction methods for multi-material ALE simulations

    SciTech Connect

    Kucharik, Milan; Garimella, Rao; Schofield, Samuel; Shashkov, Mikhail

    2009-01-01

    In this paper we compare the performance of different methods for reconstructing interfaces in multi-material compressible flow simulations. The methods compared are a material-order-dependent Volume-of-Fluid (VOF) method, a material-order-independent VOF method based on power-diagram partitioning of cells, and the Moment-of-Fluid (MOF) method. We demonstrate that the MOF method provides the most accurate tracking of interfaces, followed by the VOF method with the right material ordering. The material-order-independent VOF method performs somewhat worse than the above two, while the solutions with VOF using the wrong material order are considerably worse.
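    Full VOF/MOF reconstruction is involved, but one standard ingredient of VOF-type methods, Youngs' gradient estimate of the interface normal from the volume-fraction field, can be sketched as follows. This is a generic textbook version, not any of the codes compared in the paper:

```python
def youngs_normal(f, i, j):
    """Estimate the unit interface normal in cell (i, j) as the negative,
    centrally differenced gradient of the volume-fraction field f
    (Youngs' estimate); the normal points from the material into the void."""
    nx = -(f[i + 1][j] - f[i - 1][j]) / 2.0
    ny = -(f[i][j + 1] - f[i][j - 1]) / 2.0
    norm = (nx * nx + ny * ny) ** 0.5 or 1.0
    return (nx / norm, ny / norm)

# vertical interface: material fills the left half of a 5x5 block of cells
f = [[1.0 if i < 2 else (0.5 if i == 2 else 0.0) for j in range(5)]
     for i in range(5)]
n = youngs_normal(f, 2, 2)
assert abs(n[0] - 1.0) < 1e-12 and abs(n[1]) < 1e-12
```

    Given the normal, a PLIC-style method would then position a line in each mixed cell so that it cuts off exactly the cell's volume fraction; MOF additionally matches the material centroid.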

  8. Comparison of the calorimetric and kinematic methods of neutrino energy reconstruction in disappearance experiments

    SciTech Connect

    Ankowski, Artur M.; Benhar, Omar; Coloma, Pilar; Huber, Patrick; Jen, Chun -Min; Mariani, Camillo; Meloni, Davide; Vagnoni, Erica

    2015-10-22

    To be able to achieve their physics goals, future neutrino-oscillation experiments will need to reconstruct the neutrino energy with very high accuracy. In this work, we analyze how the energy reconstruction may be affected by realistic detection capabilities, such as energy resolutions, efficiencies, and thresholds. This allows us to estimate how well the detector performance needs to be determined a priori in order to avoid a sizable bias in the measurement of the relevant oscillation parameters. We compare the kinematic and calorimetric methods of energy reconstruction in the context of two νμ → νμ disappearance experiments operating in different energy regimes. For the calorimetric reconstruction method, we find that the detector performance has to be estimated with an O(10%) accuracy to avoid a significant bias in the extracted oscillation parameters. Thus, in the case of kinematic energy reconstruction, we observe that the results exhibit less sensitivity to an overestimation of the detector capabilities.
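    The two reconstruction strategies can be illustrated with textbook formulas: the standard quasi-elastic kinematic formula (target nucleon at rest, binding energy and the neutron-proton mass split neglected) and a simple calorimetric sum. These are generic approximations, not the detector models used in the study:

```python
import math

# illustrative constants in GeV
M_N = 0.939   # nucleon mass
M_MU = 0.106  # muon mass

def e_nu_kinematic(e_mu, cos_theta):
    """Textbook quasi-elastic estimate of E_nu from the muon energy and angle:
    E_nu = (2*M*E_mu - m_mu^2) / (2*(M - E_mu + p_mu*cos_theta))."""
    p_mu = math.sqrt(e_mu ** 2 - M_MU ** 2)
    return (2 * M_N * e_mu - M_MU ** 2) / (2 * (M_N - e_mu + p_mu * cos_theta))

def e_nu_calorimetric(e_mu, hadron_deposits):
    """Calorimetric estimate: lepton energy plus visible hadronic energy."""
    return e_mu + sum(hadron_deposits)

# a forward-going muon carries nearly the full neutrino energy
assert abs(e_nu_kinematic(0.6, 1.0) - 0.6) < 0.01
# at a wider angle the inferred neutrino energy is larger
assert e_nu_kinematic(0.6, 0.9) > 0.6
assert abs(e_nu_calorimetric(0.6, [0.10, 0.05]) - 0.75) < 1e-9
```

    The calorimetric method is only as good as the visible-energy accounting (efficiencies, thresholds), which is why the abstract ties its bias directly to how well the detector response is known.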

  10. Analysis of dental root apical morphology: a new method for dietary reconstructions in primates.

    PubMed

    Hamon, Noémie; Emonet, Edouard-Georges; Chaimanee, Yaowalak; Guy, Franck; Tafforeau, Paul; Jaeger, Jean-Jacques

    2012-06-01

    The reconstruction of paleo-diets is an important task in the study of fossil primates. Previously, paleo-diet reconstructions were performed using different methods based on extant primate models. In particular, dental microwear or isotopic analyses provided accurate reconstructions for some fossil primates. However, it is sometimes difficult or impossible to apply these methods to fossil material. Therefore, the development of new, independent methods of diet reconstruction is crucial to improve our knowledge of primate paleobiology and paleoecology. This study investigates the correlation between tooth root apical morphology and diet in primates, and its potential for paleo-diet reconstructions. Dental roots are composed of two portions: the eruptive portion, with a smooth and regular surface, and the apical penetrative portion, which displays an irregular and corrugated surface. Here, the angle formed by these two portions (aPE) and the ratio of the penetrative portion to total root length (PPI) are calculated for each mandibular tooth root. A strong correlation between these two variables and the proportion of some food types (fruits, leaves, seeds, animal matter, and vertebrates) in the diet is found, allowing the use of tooth root apical morphology as a tool for dietary reconstructions in primates. The method was then applied to the fossil hominoid Khoratpithecus piriyai, from the Late Miocene of Thailand. The paleo-diet deduced from aPE and PPI is dominated by fruits (>50%), associated with animal matter (1-25%). Leaves, vertebrates, and most probably seeds were excluded from the diet of Khoratpithecus, which is consistent with previous studies. PMID:22553124
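    The correlation analysis underlying such a calibration can be sketched with a plain Pearson coefficient. The PPI and fruit-percentage values below are hypothetical illustrations, not the paper's measurements:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# hypothetical values: penetrative-portion index vs. fruit fraction of diet
ppi = [0.20, 0.25, 0.31, 0.40, 0.47, 0.55]
fruit_pct = [10, 18, 25, 38, 51, 60]
r = pearson_r(ppi, fruit_pct)
assert r > 0.95  # strong positive association in this toy data
```

    With such a calibration in hand, a regression fitted on extant species can be inverted to predict diet proportions from the aPE/PPI values measured on a fossil root.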

  11. Optical tomography reconstruction algorithm with the finite element method: An optimal approach with regularization tools

    SciTech Connect

    Balima, O.; Favennec, Y.; Rousse, D.

    2013-10-15

    Highlights: •New strategies to improve the accuracy of the reconstruction through mesh and finite element parameterization. •Use of gradient filtering through an alternative inner product within the adjoint method. •An integral form of the cost function is used to make the reconstruction compatible with all finite element formulations, continuous and discontinuous. •A gradient-based algorithm with the adjoint method is used for the reconstruction. -- Abstract: Optical tomography is mathematically treated as a non-linear inverse problem in which the optical properties of the probed medium are recovered by minimizing the errors between the experimental measurements and their predictions with a numerical model at the locations of the detectors. Because of the ill-posed behavior of the inverse problem, some regularization tools must be employed, and Tikhonov-type penalization is the most commonly used in optical tomography applications. This paper introduces an optimized approach for optical tomography reconstruction with the finite element method. An integral form of the cost function is used to take into account the surfaces of the detectors and to make the reconstruction compatible with all finite element formulations, continuous and discontinuous. Within a gradient-based algorithm where the adjoint method is used to compute the gradient of the cost function, an alternative inner product is employed for preconditioning the reconstruction algorithm. Moreover, an appropriate re-parameterization of the optical properties is performed. These regularization strategies are compared with the classical Tikhonov penalization. It is shown that both the re-parameterization and the use of the Sobolev cost function gradient are efficient for solving such an ill-posed inverse problem.
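    The Tikhonov-regularized, gradient-based minimization at the heart of such reconstructions can be sketched on a toy linear problem. This is a generic sketch; the paper works with a finite element forward model, an adjoint gradient, and a Sobolev inner product rather than the plain Euclidean one used here:

```python
def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def tikhonov_gd(A, b, lam=0.1, step=0.05, iters=2000):
    """Gradient descent on J(x) = ||Ax - b||^2 + lam*||x||^2,
    with gradient 2*A^T*(Ax - b) + 2*lam*x."""
    n = len(A[0])
    x = [0.0] * n
    At = [[A[i][j] for i in range(len(A))] for j in range(n)]  # transpose of A
    for _ in range(iters):
        r = [ax - bi for ax, bi in zip(matvec(A, x), b)]       # residual Ax - b
        g = [2 * atr + 2 * lam * xi for atr, xi in zip(matvec(At, r), x)]
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

A = [[2.0, 0.0], [0.0, 1.0]]
b = [4.0, 3.0]
x = tikhonov_gd(A, b)
# closed form for a diagonal A: x_i = a_i*b_i / (a_i^2 + lam)
assert abs(x[0] - 8.0 / 4.1) < 1e-6 and abs(x[1] - 3.0 / 1.1) < 1e-6
```

    Replacing the Euclidean gradient `g` by a smoothed (Sobolev) gradient is exactly the kind of preconditioning-by-inner-product the abstract describes.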

  12. A regional method for craniofacial reconstruction based on coordinate adjustments and a new fusion strategy.

    PubMed

    Deng, Qingqiong; Zhou, Mingquan; Wu, Zhongke; Shui, Wuyang; Ji, Yuan; Wang, Xingce; Liu, Ching Yiu Jessica; Huang, Youliang; Jiang, Haiyan

    2016-02-01

    Craniofacial reconstruction recreates a facial outlook from the cranium, based on the relationship between the face and the skull, to assist identification. However, craniofacial structures are very complex, and this relationship is not the same in different craniofacial regions. Several regional methods have recently been proposed: they segment the face and skull into regions, learn the relationship of each region independently, and then estimate the facial regions for a given skull and glue them together to generate a face. Most of these regional methods use vertex coordinates to represent the regions, and they define a uniform coordinate system for all of the regions. Consequently, the inconsistency in the positions of regions between different individuals is not eliminated before learning the relationships between the face and skull regions, and this reduces the accuracy of the craniofacial reconstruction. To solve this problem, an improved regional method involving two types of coordinate adjustments is proposed in this paper. One is a global coordinate adjustment performed on the skulls and faces to eliminate inconsistencies in the position and pose of the heads; the other is a local coordinate adjustment performed on the skull and face regions to eliminate inconsistencies in the positions of these regions. After these two coordinate adjustments, partial least squares regression (PLSR) is used to estimate the relationship between the face region and the skull region. To obtain a more accurate reconstruction, a new fusion strategy is also proposed that maintains the reconstructed feature regions when gluing the facial regions together. This is based on the observation that the feature regions usually have smaller reconstruction errors than the rest of the face. The results demonstrate that the coordinate adjustments and the new fusion strategy significantly improve the accuracy of the craniofacial reconstruction.

  13. A continuous surface reconstruction method on point cloud captured from a 3D surface photogrammetry system

    SciTech Connect

    Liu, Wenyang; Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J.; Sawant, Amit; Ruan, Dan

    2015-11-15

    Purpose: To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT). Methods: The authors have developed a level-set based surface reconstruction method for point clouds captured by a surface photogrammetry system (VisionRT). The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. In contrast to explicit/discrete meshing-type schemes, this continuous representation is particularly advantageous for subsequent surface registration and motion tracking, as it eliminates the need to maintain the explicit point correspondences required by discrete models. The authors solve the proposed formulation with an efficient narrowband evolution scheme. They evaluated the method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiments, the authors generated a series of surfaces, each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched areas exhibited different degrees of noise and missing data, since VisionRT has difficulty detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations and quantitatively evaluated the reconstructed surfaces by comparing them against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiments, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground truth, they qualitatively validated the reconstructed surfaces by comparing the local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient. Results: On phantom point clouds, their method
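    The RMSE criterion used for the phantom comparison is a simple point-wise distance measure; a minimal sketch over corresponding 3D points (the paper's evaluation involves surface correspondence, which is omitted here):

```python
import math

def rmse(points, reference):
    """Root-mean-square error between corresponding 3D points."""
    sq = [sum((p - q) ** 2 for p, q in zip(a, b))
          for a, b in zip(points, reference)]
    return math.sqrt(sum(sq) / len(sq))

recon = [(0.0, 0.0, 0.1), (1.0, 0.0, -0.1)]
ref = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
assert abs(rmse(recon, ref) - 0.1) < 1e-12
```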

  14. Reconstruction of conductivity using the dual-loop method with one injection current in MREIT.

    PubMed

    Lee, Tae Hwi; Nam, Hyun Soo; Lee, Min Gi; Kim, Yong Jung; Woo, Eung Je; Kwon, Oh In

    2010-12-21

    Magnetic resonance electrical impedance tomography (MREIT) aims to visualize the internal current density and conductivity of an electrically conductive object. Injecting current through surface electrodes, we measure one component of the induced internal magnetic flux density using an MRI scanner. In order to reconstruct the conductivity distribution inside the imaging object, most algorithms in MREIT have required multiple magnetic flux density data sets obtained by injecting at least two independent currents. In this paper, we propose a direct method to reconstruct the internal isotropic conductivity from one component of the magnetic flux density data, obtained by injecting one current into the imaging object through a single pair of surface electrodes. First, the proposed method reconstructs a projected current density, which is the current uniquely determined by the measured one-component magnetic flux density. Then, using a relation between voltage potential and current based on Kirchhoff's voltage law, the method combines two loops around each pixel to derive an implicit matrix system for determining the internal conductivity. Results from numerical simulations demonstrate that the proposed algorithm stably determines the conductivity distribution in an imaging slice. We also compare the conductivity distribution reconstructed using the proposed method with that of a conventional method in agarose gel phantom experiments. PMID:21098919

  15. Gaining insight into food webs reconstructed by the inverse method

    NASA Astrophysics Data System (ADS)

    Kones, Julius K.; Soetaert, Karline; van Oevelen, Dick; Owino, John O.; Mavuti, Kenneth

    2006-04-01

    The use of the inverse method to analyze flow patterns of organic components in ecological systems has had wide application in ecological modeling. In this approach, infinitely many sets of food web flows satisfy the biological constraints and describe the same food web, and from these one (parsimonious) solution is drawn. Here we address two questions: (1) is there justification for the use of the parsimonious solution, or is there a better alternative, and (2) can we use the infinitely many solutions that describe the same food web to gain more insight into the system? We reassess two published food webs, from the Gulf of Riga in the Baltic Sea and the Takapoto Atoll lagoon in the South Pacific. A finite number of random food web solutions is first generated using a Monte Carlo simulation technique. Using the Wilcoxon signed-ranks test, we cannot find significant differences between the parsimonious solution and the average of the random solutions. However, as the food web composed of the average flows has more attractive properties, the choice of the parsimonious solution to describe underdetermined food webs is challenged. We further demonstrate the use of the factor analysis technique to characterize flows that are closely related in the food web. Through this process, sub-food webs are extracted within the plausible set of food webs, a property that can be utilized to gain insight into the sampling strategy for further constraining the model.

  16. Wisdom of crowds for robust gene network inference

    PubMed Central

    Marbach, Daniel; Costello, James C.; Küffner, Robert; Vega, Nicci; Prill, Robert J.; Camacho, Diogo M.; Allison, Kyle R.; Kellis, Manolis; Collins, James J.; Stolovitzky, Gustavo

    2012-01-01

    Reconstructing gene regulatory networks from high-throughput data is a long-standing problem. Through the DREAM project (Dialogue on Reverse Engineering Assessment and Methods), we performed a comprehensive blind assessment of over thirty network inference methods on Escherichia coli, Staphylococcus aureus, Saccharomyces cerevisiae, and in silico microarray data. We characterize performance, data requirements, and inherent biases of different inference approaches offering guidelines for both algorithm application and development. We observe that no single inference method performs optimally across all datasets. In contrast, integration of predictions from multiple inference methods shows robust and high performance across diverse datasets. Thereby, we construct high-confidence networks for E. coli and S. aureus, each comprising ~1700 transcriptional interactions at an estimated precision of 50%. We experimentally test 53 novel interactions in E. coli, of which 23 were supported (43%). Our results establish community-based methods as a powerful and robust tool for the inference of transcriptional gene regulatory networks. PMID:22796662
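    One simple way to integrate predictions from multiple inference methods is to average each edge's rank across methods. The DREAM community predictions were aggregated in this spirit; the exact scheme below (and the toy scores) are illustrative, not the paper's procedure:

```python
def rank_scores(scores):
    """Convert an edge -> confidence-score dict into edge -> rank (1 = strongest)."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {edge: i + 1 for i, edge in enumerate(ordered)}

def community_ranking(method_scores):
    """Average each edge's rank across inference methods; lower mean rank = better.
    Edges missing from a method are given a worst-case rank."""
    edges = set().union(*method_scores)
    ranked = [rank_scores(s) for s in method_scores]
    avg = {e: sum(r.get(e, len(edges) + 1) for r in ranked) / len(ranked)
           for e in edges}
    return sorted(edges, key=avg.get)

# three hypothetical methods scoring three candidate regulatory edges
m1 = {("A", "B"): 0.9, ("B", "C"): 0.2, ("A", "C"): 0.5}
m2 = {("A", "B"): 0.7, ("B", "C"): 0.6, ("A", "C"): 0.1}
m3 = {("A", "B"): 0.8, ("B", "C"): 0.3, ("A", "C"): 0.4}
assert community_ranking([m1, m2, m3])[0] == ("A", "B")
```

    The intuition matches the abstract's finding: individual methods have complementary biases, so an edge ranked highly by many methods is more likely to be a true interaction than any single method's top pick.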

  17. Novel iterative reconstruction method for optimal dose usage in redundant CT - acquisitions

    NASA Astrophysics Data System (ADS)

    Bruder, H.; Raupach, R.; Allmendinger, T.; Kappler, S.; Sunnegardh, J.; Stierstorfer, K.; Flohr, T.

    2014-03-01

    In CT imaging, a variety of applications exist where reconstructions are SNR and/or resolution limited. However, if the measured data provide redundant information, composite image data with high SNR can be computed. Generally, these composite image volumes will compromise spectral information and/or spatial resolution and/or temporal resolution. This brings us to the idea of transferring the high SNR of the composite image data to low-SNR (but high-resolution) `source' image data. It was shown that the SNR of CT image data can be improved using iterative reconstruction [1]. We present a novel iterative reconstruction method enabling optimal dose usage of redundant CT measurements of the same body region. The generalized update equation is formulated in image space, without further reference to raw data after the initial reconstruction of the source and composite image data. The update equation consists of a linear combination of the previous update, a correction term constrained by the source data, and a regularization prior initialized by the composite data. The efficiency of the method is demonstrated for different applications: (i) Spectral imaging: we have analysed material decomposition data from dual-energy data of our photon-counting prototype scanner; the material images can be significantly improved by transferring the good noise statistics of the 20 keV threshold image data to each of the material images. (ii) Multi-phase liver imaging: reconstructions of multi-phase liver data can be optimized by utilizing the noise statistics of combined data from all measured phases. (iii) Helical reconstruction with optimized temporal resolution: splitting the reconstruction of redundant helical acquisition data into a short-scan reconstruction with a Tam window optimizes the temporal resolution; the reconstruction of the full helical data is then used to optimize the SNR. (iv) Cardiac imaging: the optimal phase image (`best phase') can be improved by transferring all applied over
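    The structure of such an image-space update, a blend of a source-data correction and a composite-data prior, can be caricatured on a toy example. The coefficients and the fixed-point behavior below are illustrative only, not the paper's generalized update equation:

```python
def iterate(source, composite, alpha=0.5, beta=0.3, iters=50):
    """Schematic image-space update: pull the estimate toward the
    high-resolution source data while regularizing toward the
    high-SNR composite image."""
    x = list(composite)  # initialize with the composite (high-SNR) image
    for _ in range(iters):
        x = [xi + alpha * (si - xi) + beta * (ci - xi)
             for xi, si, ci in zip(x, source, composite)]
    return x

source = [1.0, 2.0, 3.0]     # high-resolution, noisy
composite = [1.2, 1.9, 3.1]  # high-SNR, lower resolution
x = iterate(source, composite)
# the fixed point is the alpha/beta-weighted blend of source and composite
expect = [(0.5 * s + 0.3 * c) / 0.8 for s, c in zip(source, composite)]
assert all(abs(a - b) < 1e-9 for a, b in zip(x, expect))
```

    The weighting controls how much composite-image noise statistics are transferred into the source image, which is the trade-off the abstract exploits across its four applications.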

  18. Comparisons between real and complex Gauss wavelet transform methods of three-dimensional shape reconstruction

    NASA Astrophysics Data System (ADS)

    Xu, Luopeng; Dan, Youquan; Wang, Qingyuan

    2015-10-01

    The continuous wavelet transform (CWT) provides an expandable spatial and frequency window that overcomes the poor localization of the Fourier transform and the windowed Fourier transform. The CWT is widely applied in non-stationary signal analysis, including optical 3D shape reconstruction, with remarkable performance. In optical 3D surface measurement, the performance of the CWT for optical fringe pattern phase reconstruction usually depends on the choice of wavelet function. A large class of wavelet functions for the CWT, such as the Mexican Hat, Morlet, DOG, and Gabor wavelets, can be generated from the Gauss wavelet function. However, application of the Gauss wavelet transform (GWT) method (i.e., CWT with a Gauss wavelet function) in optical profilometry has so far rarely been reported. In this paper, the method using GWT for optical fringe pattern phase reconstruction is presented first, and comparisons between real and complex GWT methods are discussed in detail. Examples of numerical simulations are also given and analyzed. The results show that both the real GWT method combined with a Hilbert transform and the complex GWT method can realize three-dimensional surface reconstruction, and that the performance of the reconstruction generally depends on the frequency-domain appearance of the Gauss wavelet functions. For optical fringe patterns whose phase varies strongly with position, the real GWT performs better than the complex one, because the complex Gauss-series wavelets exhibit frequency sidelobes. Finally, experiments are carried out, and the experimental results agree well with our theoretical analysis.

  19. New Image Reconstruction Methods for Accelerated Quantitative Parameter Mapping and Magnetic Resonance Angiography

    NASA Astrophysics Data System (ADS)

    Velikina, J. V.; Samsonov, A. A.

    2016-02-01

    Advanced MRI techniques often require sampling in additional (non-spatial) dimensions, such as time or parametric dimensions, which significantly lengthens scan time. Our purpose was to develop novel iterative image reconstruction methods that reduce the amount of acquired data in such applications by using prior knowledge about the signal in the extra dimensions. Efforts were made to accelerate two applications, namely, time-resolved contrast-enhanced MR angiography and T1 mapping. Our results demonstrate that significant acceleration (up to 27x) may be achieved using the proposed iterative reconstruction techniques.

  20. The high sensitivity of the maximum likelihood estimator method of tomographic image reconstruction

    SciTech Connect

    Llacer, J.; Veklerov, E.

    1987-01-01

    Positron Emission Tomography (PET) images obtained by the MLE iterative method of image reconstruction converge towards strongly deteriorated versions of the original source image. The image deterioration is caused by an excessive attempt by the algorithm to match the projection data with high counts, and this effect can be modulated. By comparing a source image with reconstructions by filtered backprojection and by the MLE algorithm, we show that the MLE images can have noise similar to the filtered backprojection images in regions of high activity, and very low noise, comparable to the source image, in regions of low activity, if the iterative procedure is stopped at an appropriate point.
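    The MLE iteration referred to here is the classic MLEM multiplicative update, x_j <- (x_j/s_j) * sum_i A_ij * y_i/(Ax)_i. A minimal sketch on a tiny noiseless system, where the iteration recovers the source (the noise-amplification behavior the abstract describes appears only with noisy counts):

```python
def mlem(A, y, iters=100):
    """MLE-EM update for emission tomography on a small dense system A x = y."""
    m, n = len(A), len(A[0])
    x = [1.0] * n
    s = [sum(A[i][j] for i in range(m)) for j in range(n)]  # sensitivities
    for _ in range(iters):
        proj = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        x = [x[j] / s[j] * sum(A[i][j] * y[i] / proj[i] for i in range(m))
             for j in range(n)]
    return x

# noiseless 2x2 system: MLEM converges to the true source
A = [[1.0, 0.2], [0.2, 1.0]]
true_x = [3.0, 1.0]
y = [sum(a * t for a, t in zip(row, true_x)) for row in A]
x = mlem(A, y)
assert all(abs(a - b) < 1e-6 for a, b in zip(x, true_x))
```

    With noisy `y`, running this update far past the useful point over-fits the counts, which is exactly why the abstract advocates stopping the iteration at an appropriate point.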

  1. Image reconstruction of muon tomographic data using a density-based clustering method

    NASA Astrophysics Data System (ADS)

    Perry, Kimberly B.

    Muons are subatomic particles capable of reaching the Earth's surface before decaying. When these particles collide with an object that has a high atomic number (Z), their path of travel changes substantially. Tracking muon movement through shielded containers can indicate what types of materials lie inside. This thesis proposes using a density-based clustering algorithm called OPTICS to perform image reconstructions using muon tomographic data. The results show that this method is capable of detecting high-Z materials quickly, and can also produce detailed reconstructions with large amounts of data.
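    OPTICS itself requires a full reachability-distance analysis, but the density-based idea can be illustrated with a minimal DBSCAN-style sketch (a simplification of OPTICS, not the thesis implementation): dense knots of muon-scattering vertices indicate high-Z material, while sparse points are noise.

```python
def density_clusters(points, eps=1.5, min_pts=3):
    """Minimal DBSCAN-style grouping: grow clusters from points whose
    eps-neighborhood holds at least min_pts points; the rest is noise (-1)."""
    def neighbors(i):
        return [j for j, q in enumerate(points)
                if sum((a - b) ** 2 for a, b in zip(points[i], q)) <= eps ** 2]
    labels = [None] * len(points)
    cid = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seed = neighbors(i)
        if len(seed) < min_pts:
            labels[i] = -1  # noise (may be adopted by a later cluster)
            continue
        labels[i] = cid
        stack = [j for j in seed if labels[j] is None or labels[j] == -1]
        while stack:
            j = stack.pop()
            if labels[j] == -1:       # border point: join, do not expand
                labels[j] = cid
                continue
            if labels[j] is not None:
                continue
            labels[j] = cid
            nb = neighbors(j)
            if len(nb) >= min_pts:    # core point: keep expanding
                stack.extend(k for k in nb if labels[k] is None or labels[k] == -1)
        cid += 1
    return labels

# two dense blobs of scattering vertices plus one isolated noise point
pts = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11), (50, 50)]
labels = density_clusters(pts)
assert labels[6] == -1
assert labels[0] == labels[1] == labels[2]
assert labels[3] == labels[4] == labels[5]
assert labels[0] != labels[3]
```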

  2. Inverse Heat Conduction Methods in the CHAR Code for Aerothermal Flight Data Reconstruction

    NASA Technical Reports Server (NTRS)

    Oliver, A Brandon; Amar, Adam J.

    2016-01-01

    Reconstruction of flight aerothermal environments often requires the solution of an inverse heat transfer problem, which is an ill-posed problem of specifying boundary conditions from discrete measurements in the interior of the domain. This paper will present the algorithms implemented in the CHAR code for use in reconstruction of EFT-1 flight data and future testing activities. Implementation nuances will be discussed, and alternative hybrid methods that are permitted by the implementation will be described. Results will be presented for a number of one-dimensional and multi-dimensional problems.

  4. Reconceptualizing vulnerability: deconstruction and reconstruction as a postmodern feminist analytical research method.

    PubMed

    Glass, Nel; Davis, Kierrynn

    2004-01-01

    Nursing research informed by postmodern feminist perspectives has prompted many debates in recent times. While this is so, nurse researchers who have been tempted to break new ground have had few examples of appropriate analytical methods for a research design informed by the above perspectives. This article presents a deconstructive/reconstructive secondary analysis of a postmodern feminist ethnography in order to provide an analytical exemplar. In doing so, previous notions of vulnerability as a negative state have been challenged and reconstructed. PMID:15206680

  5. Iterative reconstruction method for three-dimensional non-cartesian parallel MRI

    NASA Astrophysics Data System (ADS)

    Jiang, Xuguang

    Parallel magnetic resonance imaging (MRI) with non-Cartesian sampling patterns is a promising technique that increases scan speed by using multiple receiver coils with reduced samples. However, reconstruction is challenging due to the increased complexity. Three reconstruction methods were evaluated: gridding, blocked uniform resampling (BURS), and non-uniform FFT (NUFFT). Computer simulations of parallel reconstruction were performed, with the root mean square error (RMSE) of the reconstructed images relative to the simulated phantom used as the image quality criterion. The gridding method showed the best RMSE performance. Two types of a priori constraints to reduce noise and artifacts were evaluated: an edge-preserving penalty, which suppresses noise and aliasing artifacts in the image while preventing over-smoothing, and an object support penalty, which reduces background noise amplification. A trust-region based step-ratio method that iteratively calculates the penalty coefficient was proposed for the penalty functions. Two methods to alleviate the computational burden were evaluated: a smaller oversampling ratio, and compression of the interpolation coefficient matrix. Their performance was individually tested using computer simulations. The edge-preserving and object support penalties were shown to give consistent improvements in RMSE, and the calculated penalty coefficients performed close to the best RMSE. An oversampling ratio as low as 1.125 was shown to affect RMSE by less than one percent for radial sampling pattern reconstruction, reducing the three-dimensional data requirement to less than 1/5 of what the conventional 2x grid needed. Interpolation matrix compression with compression ratios up to 50 percent showed small impact on RMSE. The proposed method was validated on 25 MR data sets from a GE MR scanner. Six image quality metrics were used to evaluate the performance: RMSE, normalized mutual information (NMI) and joint entropy (JE) relative to a reference

  6. Method and system for progressive mesh storage and reconstruction using wavelet-encoded height fields

    NASA Technical Reports Server (NTRS)

    Baxes, Gregory A. (Inventor); Linger, Timothy C. (Inventor)

    2011-01-01

    Systems and methods are provided for progressive mesh storage and reconstruction using wavelet-encoded height fields. A method for progressive mesh storage includes reading raster height field data, and processing the raster height field data with a discrete wavelet transform to generate wavelet-encoded height fields. In another embodiment, a method for progressive mesh storage includes reading texture map data, and processing the texture map data with a discrete wavelet transform to generate wavelet-encoded texture map fields. A method for reconstructing a progressive mesh from wavelet-encoded height field data includes determining terrain blocks, and a level of detail required for each terrain block, based upon a viewpoint. Triangle strip constructs are generated from vertices of the terrain blocks, and an image is rendered utilizing the triangle strip constructs. Software products that implement these methods are provided.
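    The progressive level-of-detail idea, storing wavelet coefficients and reconstructing coarser meshes by dropping detail bands, can be sketched in one dimension with the Haar wavelet (the patent uses general discrete wavelet transforms on 2D height fields):

```python
def haar_forward(values):
    """One level of the Haar wavelet transform: pairwise averages + details."""
    avg = [(a + b) / 2 for a, b in zip(values[0::2], values[1::2])]
    det = [(a - b) / 2 for a, b in zip(values[0::2], values[1::2])]
    return avg, det

def haar_inverse(avg, det):
    """Invert one Haar level: each (average, detail) pair restores two samples."""
    out = []
    for a, d in zip(avg, det):
        out.extend([a + d, a - d])
    return out

heights = [4.0, 6.0, 7.0, 7.0, 3.0, 1.0, 2.0, 4.0]
avg, det = haar_forward(heights)

# full detail reproduces the height field exactly
assert haar_inverse(avg, det) == heights
# a coarse level of detail: drop the details, keeping half the coefficients
coarse = haar_inverse(avg, [0.0] * len(det))
assert coarse == [5.0, 5.0, 7.0, 7.0, 2.0, 2.0, 3.0, 3.0]
```

    Streaming the averages first and the detail bands later gives exactly the progressive refinement behavior described: distant terrain blocks can be rendered from the coarse coefficients alone.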

  7. Method and system for progressive mesh storage and reconstruction using wavelet-encoded height fields

    NASA Technical Reports Server (NTRS)

    Baxes, Gregory A. (Inventor)

    2010-01-01

    Systems and methods are provided for progressive mesh storage and reconstruction using wavelet-encoded height fields. A method for progressive mesh storage includes reading raster height field data, and processing the raster height field data with a discrete wavelet transform to generate wavelet-encoded height fields. In another embodiment, a method for progressive mesh storage includes reading texture map data, and processing the texture map data with a discrete wavelet transform to generate wavelet-encoded texture map fields. A method for reconstructing a progressive mesh from wavelet-encoded height field data includes determining terrain blocks, and a level of detail required for each terrain block, based upon a viewpoint. Triangle strip constructs are generated from vertices of the terrain blocks, and an image is rendered utilizing the triangle strip constructs. Software products that implement these methods are provided.

  8. Adaptive Forward Modeling Method for Analysis and Reconstructions of Orientation Image Map

    Energy Science and Technology Software Center (ESTSC)

    2014-06-01

    IceNine is an MPI-parallel orientation reconstruction and microstructure analysis code. Its primary purpose is to reconstruct a spatially resolved orientation map given a set of diffraction images from a high-energy x-ray diffraction microscopy (HEDM) experiment (1). In particular, IceNine implements the adaptive version of the forward modeling method (2, 3). Part of IceNine is a library used for combined analysis of the microstructure with the experimentally measured diffraction signal. The library is also designed for rapid prototyping of new reconstruction and analysis algorithms. IceNine also includes a simulator of diffraction images from an input microstructure.

  9. A comparison of force reconstruction methods for a lumped mass beam

    SciTech Connect

    Bateman, V.I.; Mayes, R.L.; Carne, T.G.

    1992-11-01

    Two extensions of the force reconstruction method known as the Sum of Weighted Accelerations Technique (SWAT) are presented in this paper, and the results are compared to those obtained using SWAT. SWAT requires the use of the structure's elastic mode shapes for reconstruction of the applied force. Although based on the same theory, the two new techniques do not rely on mode shapes to reconstruct the applied force and may be applied to structures whose mode shapes are not available. One technique uses the measured force and acceleration responses with the rigid body mode shapes to calculate the scalar weighting vector, so the technique is called SWAT-CAL (SWAT using a CALibrated force input). The second technique uses only the free-decay time response of the structure with the rigid body mode shapes to calculate the scalar weighting vector and is called SWAT-TEEM (SWAT using Time Eliminated Elastic Modes).
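    The SWAT weighting idea, choosing scalar weights orthogonal to the elastic mode shapes and scaled so that the weighted rigid-body response gives total mass times acceleration, can be sketched for a three-mass toy structure. The masses and mode shapes below are illustrative, not from the paper:

```python
def solve(M, b):
    """Gauss-Jordan elimination with partial pivoting for a small dense system."""
    A = [row[:] + [bi] for row, bi in zip(M, b)]
    n = len(A)
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        A[c] = [v / A[c][c] for v in A[c]]
        for r in range(n):
            if r != c:
                A[r] = [v - A[r][c] * w for v, w in zip(A[r], A[c])]
    return [row[n] for row in A]

# three unit masses; two illustrative elastic mode shapes; rigid shape [1, 1, 1]
masses = [1.0, 1.0, 1.0]
phi_elastic = [[1.0, -2.0, 1.0], [1.0, 0.0, -1.0]]
# weights orthogonal to the elastic modes, scaled so w . rigid = total mass
M = phi_elastic + [[1.0, 1.0, 1.0]]
w = solve(M, [0.0, 0.0, sum(masses)])
assert all(abs(wi - 1.0) < 1e-12 for wi in w)

# reconstructed applied force at one instant: weighted sum of accelerations
accels = [0.4, 0.5, 0.6]
force = sum(wi * ai for wi, ai in zip(w, accels))
assert abs(force - 1.5) < 1e-9  # total mass times mean (rigid-body) acceleration
```

    Because the weights annihilate the elastic mode shapes, the weighted sum of measured accelerations filters out elastic ringing and leaves the rigid-body response, from which the net applied force follows.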
