Supervised classification for gene network reconstruction.
Soinov, L A
2003-12-01
One of the central problems of functional genomics is revealing gene expression networks - the relationships between genes that reflect observations of how the expression level of each gene affects those of others. Microarray data are currently a major source of information about the interplay of biochemical network participants in living cells. Various mathematical techniques, such as differential equations, Bayesian and Boolean models and several statistical methods, have been applied to expression data in attempts to extract the underlying knowledge. Unsupervised clustering methods are often considered the necessary first step in the visualization and analysis of expression data. As for supervised classification, the problem mainly addressed so far has been how to find discriminative genes separating various samples or experimental conditions. Numerous methods have been applied to identify genes that help to predict treatment outcome or to confirm a diagnosis, as well as to identify primary elements of gene regulatory circuits. However, less attention has been devoted to using supervised learning to uncover relationships between genes and/or their products. To start filling this gap, a machine-learning approach to gene network reconstruction is described here. This approach is based on building classifiers: functions that determine the state of a gene's transcription machinery from the expression levels of other genes. The method can be applied to various cases where relationships between gene expression levels can be expected. PMID:14641098
Reconstruct modular phenotype-specific gene networks by knowledge-driven matrix factorization
Yang, Xuerui; Zhou, Yang; Jin, Rong; Chan, Christina
2009-01-01
Motivation: Reconstructing gene networks from microarray data has provided mechanistic information on cellular processes. A popular structure learning method, Bayesian network inference, has been used to determine network topology despite its shortcomings, i.e. the high computational cost when analyzing a large number of genes and the inefficiency in exploiting prior knowledge, such as gene co-regulation information. To address these limitations, we introduce an alternative method, the knowledge-driven matrix factorization (KMF) framework, to reconstruct phenotype-specific modular gene networks. Results: Considering the reconstruction of a gene network as a matrix factorization problem, we first use the gene expression data to estimate a correlation matrix, and then factorize the correlation matrix to recover the gene modules and the interactions between them. Prior knowledge from Gene Ontology is integrated into the matrix factorization. We applied this KMF algorithm to hepatocellular carcinoma (HepG2) cells treated with free fatty acids (FFAs). By comparing the module networks for the different conditions, we identified the specific modules that are involved in conferring the cytotoxic phenotype induced by palmitate. Further analysis of the gene modules of the different conditions suggested individual genes that play important roles in palmitate-induced cytotoxicity. In summary, KMF can efficiently integrate gene expression data with prior knowledge, thereby providing a powerful method of reconstructing phenotype-specific gene networks and valuable insights into the mechanisms that govern the phenotype. Contact: krischan@msu.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:19542155
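The core of the KMF idea described above - factorizing a gene-gene correlation matrix to recover modules and their interactions - can be illustrated with a small symmetric non-negative factorization. This is an illustrative sketch only: the paper's algorithm additionally integrates Gene Ontology priors into the factorization, which is omitted here, and its exact update rules may differ.

```python
import numpy as np

def factorize_correlation(C, k, n_iter=2000, eps=1e-9, seed=0):
    """Symmetric non-negative factorization C ~ W W^T.

    Rows of W act as soft gene-to-module memberships; the k columns
    play the role of modules. Uses damped multiplicative updates.
    Illustrative sketch only -- not the paper's exact KMF algorithm,
    and without the Gene Ontology prior term.
    """
    rng = np.random.default_rng(seed)
    W = rng.random((C.shape[0], k))
    for _ in range(n_iter):
        num = C @ W
        den = W @ (W.T @ W) + eps
        W *= 0.5 + 0.5 * num / den   # damped multiplicative update
    return W

# Toy correlation matrix with two clear gene modules.
C = np.array([[1.0, 0.9, 0.0, 0.0],
              [0.9, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.8],
              [0.0, 0.0, 0.8, 1.0]])
W = factorize_correlation(C, k=2)
modules = W.argmax(axis=1)   # hard module call per gene
```

The block structure of `C` is recovered in the columns of `W`, and `W @ W.T` approximates the module-level interaction structure.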
Hub-Centered Gene Network Reconstruction Using Automatic Relevance Determination
Böck, Matthias; Ogishima, Soichi; Tanaka, Hiroshi; Kramer, Stefan; Kaderali, Lars
2012-01-01
Network inference deals with the reconstruction of biological networks from experimental data. A variety of different reverse engineering techniques are available; they differ in the underlying assumptions and mathematical models used. One common problem for all approaches stems from the complexity of the task, due to the combinatorial explosion of different network topologies for increasing network size. To handle this problem, constraints are frequently used, for example on the node degree, number of edges, or constraints on regulation functions between network components. We propose to exploit topological considerations in the inference of gene regulatory networks. Such systems are often controlled by a small number of hub genes, while most other genes have only limited influence on the network's dynamics. We model gene regulation using a Bayesian network with discrete, Boolean nodes. A hierarchical prior is employed to identify hub genes. The first layer of the prior is used to regularize weights on edges emanating from one specific node. A second prior on hyperparameters controls the magnitude of the former regularization for different nodes. The net effect is that central nodes tend to form in reconstructed networks. Network reconstruction is then performed by maximization of or sampling from the posterior distribution. We evaluate our approach on simulated and real experimental data, indicating that we can reconstruct main regulatory interactions from the data. We furthermore compare our approach to other state-of-the-art methods, showing superior performance in identifying hubs. Using a large publicly available dataset of over 800 cell cycle regulated genes, we are able to identify several main hub genes. Our method may thus provide a valuable tool to identify interesting candidate genes for further study. Furthermore, the approach presented may stimulate further developments in regularization methods for network reconstruction from data. PMID:22570688
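The hub-inducing effect of a hierarchical prior can be illustrated outside the paper's Boolean Bayesian-network setting with a simple linear toy model. Here, in the spirit of automatic relevance determination, each regulator gets one precision shared across all target genes, so a regulator whose outgoing weights are consistently large ends up weakly penalized and keeps many strong edges (a hub), while the rest are shrunk toward zero. This is an assumed, simplified stand-in for the paper's model:

```python
import numpy as np

def ard_hub_weights(X, Y, n_iter=50, eps=1e-6):
    """Toy ARD-style edge inference with a hub-inducing shared prior.

    X: (samples, regulators), Y: (samples, targets).
    alpha[j] is one precision per regulator j, shared across ALL targets:
    the second-layer hyperprior update below weakens the penalty on
    regulators with consistently large outgoing weights.
    Illustrative linear sketch, not the paper's Bayesian-network model.
    """
    p = X.shape[1]
    alpha = np.ones(p)
    for _ in range(n_iter):
        A = X.T @ X + np.diag(alpha)
        W = np.linalg.solve(A, X.T @ Y)                 # MAP edge weights
        alpha = 1.0 / (np.mean(W ** 2, axis=1) + eps)   # hyperprior update
    return W, alpha

# Regulator 0 drives every target; the other regulators are noise.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
W_true = np.zeros((5, 3))
W_true[0, :] = 2.0
Y = X @ W_true + 0.1 * rng.standard_normal((200, 3))
W_hat, alpha = ard_hub_weights(X, Y)
hub = int(np.argmin(alpha))   # weakest penalty identifies the hub
```

After a few iterations the precision on regulator 0 stabilizes near 1/4 (the inverse of its mean squared edge weight) while the noise regulators are driven to very large precisions, i.e. pruned.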
Semi-Supervised Multi-View Learning for Gene Network Reconstruction
Ceci, Michelangelo; Pio, Gianvito; Kuzmanovski, Vladimir; Džeroski, Sašo
2015-01-01
The task of gene regulatory network reconstruction from high-throughput data has received increasing attention in recent years. As a consequence, many inference methods for solving this task have been proposed in the literature. It has recently been observed, however, that no single inference method performs optimally across all datasets. It has also been shown that the integration of predictions from multiple inference methods is more robust and shows high performance across diverse datasets. Inspired by this research, in this paper we propose a machine learning solution which learns to combine predictions from multiple inference methods. While this approach adds additional complexity to the inference process, we expect it to also carry substantial benefits. These would come from the automatic adaptation to patterns in the outputs of individual inference methods, making it possible to identify regulatory interactions more reliably when these patterns occur. This article demonstrates the benefits (in terms of accuracy of the reconstructed networks) of the proposed method, which exploits an iterative, semi-supervised ensemble-based algorithm. The algorithm learns to combine the interactions predicted by many different inference methods in the multi-view learning setting. The empirical evaluation of the proposed algorithm on a prokaryotic model organism (E. coli) and on a eukaryotic model organism (S. cerevisiae) clearly shows improved performance over state-of-the-art methods. The results indicate that gene regulatory network reconstruction on real datasets is more difficult for S. cerevisiae than for E. coli. The software, all the datasets used in the experiments and all the results are available for download at the following link: http://figshare.com/articles/Semi_supervised_Multi_View_Learning_for_Gene_Network_Reconstruction/1604827. PMID:26641091
[A generalized chemical-kinetic method for modeling gene networks].
Likhoshvaĭ, V A; Matushkin, Iu G; Ratushnyĭ, A V; Anan'ko, E A; Ignat'eva, E V; Podkolodnaia, O A
2001-01-01
Development of methods for the mathematical simulation of biological systems, and the building of specific simulations, is an important trend in bioinformatics. Here we describe a generalized chemical-kinetic simulation method that generates flexible and adequate models of various biological systems. Adequate simulations of complex nonlinear gene networks - the system controlling cholesterol biosynthesis in the cell, and erythrocyte differentiation and maturation - are given as examples. The simulations were expressed in terms of unit processes - biochemical reactions. Optimal sets of parameters were determined and the systems were numerically simulated under various conditions. The simulations allow us to study the possible functional states of these gene networks, calculate the consequences of mutations, and define optimal strategies for their correction, including therapeutic ones. A graphical user interface for these simulations is available at http://wwwmgs.bionet.nsc.ru/systems/MGL/GeneNet/. PMID:11771132
Kentzoglanakis, Kyriakos; Poole, Matthew
2012-01-01
In this paper, we investigate the problem of reverse engineering the topology of gene regulatory networks from temporal gene expression data. We adopt a computational intelligence approach comprising swarm intelligence techniques, namely particle swarm optimization (PSO) and ant colony optimization (ACO). In addition, the recurrent neural network (RNN) formalism is employed for modeling the dynamical behavior of gene regulatory systems. More specifically, ACO is used for searching the discrete space of network architectures and PSO for searching the corresponding continuous space of RNN model parameters. We propose a novel solution construction process in the context of ACO for generating biologically plausible candidate architectures. The objective is to concentrate the search effort into areas of the structure space that contain architectures which are feasible in terms of their topological resemblance to real-world networks. The proposed framework is initially applied to the reconstruction of a small artificial network that has previously been studied in the context of gene network reverse engineering. Subsequently, we consider an artificial data set with added noise for reconstructing a subnetwork of the genetic interaction network of S. cerevisiae (yeast). Finally, the framework is applied to a real-world data set for reverse engineering the SOS response system of the bacterium Escherichia coli. Results demonstrate the relative advantage of utilizing problem-specific knowledge regarding biologically plausible structural properties of gene networks over conducting a problem-agnostic search in the vast space of network architectures. PMID:21576756
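The continuous half of the search described above - fitting model parameters by particle swarm optimization - can be sketched generically. The objective minimized below is a simple stand-in for the RNN fitting error; the paper's actual PSO variant, parameters, and objective may differ:

```python
import numpy as np

def pso(f, dim, n_particles=30, n_iter=200, seed=0):
    """Minimal particle swarm optimizer for a continuous objective f.

    Each particle tracks its personal best; all particles are attracted
    to the swarm's global best. Generic textbook PSO sketch, not the
    paper's specific variant.
    """
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))   # positions
    v = np.zeros((n_particles, dim))             # velocities
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()         # global best
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Stand-in objective: a sphere function in place of an RNN fitting error.
best, val = pso(lambda p: float(np.sum(p ** 2)), dim=3)
```

In the paper's framework, `f` would evaluate how well an RNN parameterization reproduces the observed temporal expression data for a candidate network structure proposed by the ACO component.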
Snapshot of iron response in Shewanella oneidensis by gene network reconstruction
Yang, Yunfeng; Harris, Daniel P.; Luo, Feng; Xiong, Wenlu; Joachimiak, Marcin; Wu, Liyou; Dehal, Paramvir; Jacobsen, Janet; Yang, Zamin; Palumbo, Anthony V.; Arkin, Adam P.; Zhou, Jizhong
2008-10-09
Background: Iron homeostasis of Shewanella oneidensis, a gamma-proteobacterium possessing high iron content, is regulated by the global transcription factor Fur. However, knowledge is incomplete about other biological pathways that respond to changes in iron concentration, as well as the details of those responses. In this work, we integrate physiological, transcriptomic and genetic approaches to delineate the iron response of S. oneidensis. Results: We show that the iron response in S. oneidensis is a rapid process. Temporal gene expression profiles were examined for iron depletion and repletion, and a gene co-expression network was reconstructed. Modules of iron acquisition systems, anaerobic energy metabolism and protein degradation were the most noteworthy in the gene network. Bioinformatics analyses suggested that genes in each of the modules might be regulated by the DNA-binding proteins Fur, CRP and RpoH, respectively. Closer inspection of these modules revealed a transcriptional regulator (SO2426) involved in iron acquisition and ten transcription factors involved in anaerobic energy metabolism. Selected genes in the network were analyzed by genetic studies. Disruption of genes encoding a putative alcaligin biosynthesis protein (SO3032) and a gene previously implicated in protein degradation (SO2017) led to severe growth deficiency under iron depletion conditions. Disruption of a novel transcription factor (SO1415) caused deficiency in both anaerobic iron reduction and growth with thiosulfate or TMAO as an electron acceptor, suggesting that SO1415 is required for specific branches of anaerobic energy metabolism pathways. Conclusions: Using a reconstructed gene network, we identified major biological pathways that were differentially expressed during iron depletion and repletion. Genetic studies not only demonstrated the importance of iron acquisition and protein degradation for iron depletion, but also characterized a novel transcription factor (SO1415) with a
A Synthesis Method of Gene Networks Having Cyclic Expression Pattern Sequences by Network Learning
NASA Astrophysics Data System (ADS)
Mori, Yoshihiro; Kuroe, Yasuaki
Recently, the synthesis of gene networks having desired functions has become of interest to many researchers, because it is a complementary approach to understanding gene networks and could be a first step toward controlling living cells. Several periodic phenomena exist in cells, e.g. the circadian rhythm, and these phenomena are considered to be generated by gene networks. We have previously proposed a synthesis method for gene networks based on gene expression, applicable to synthesizing gene networks that possess desired cyclic expression pattern sequences. That method ensures that the realized expression pattern sequences are periodic; however, it does not ensure that the corresponding solution trajectories are periodic, which may mean that the oscillations are not persistent. In this paper, to resolve this problem, we propose a synthesis method for gene networks that possess desired cyclic expression pattern sequences together with periodic corresponding solution trajectories. In the proposed method, persistent oscillation of the solution trajectories is achieved by specifying points through which the trajectories must pass.
Franke, Lude; Bakel, Harm van; Fokkens, Like; de Jong, Edwin D.; Egmont-Petersen, Michael; Wijmenga, Cisca
2006-01-01
Most common genetic disorders have a complex inheritance and may result from variants in many genes, each contributing only weak effects to the disease. Pinpointing these disease genes within the myriad of susceptibility loci identified in linkage studies is difficult because these loci may contain hundreds of genes. However, in any disorder, most of the disease genes will be involved in only a few different molecular pathways. If we know something about the relationships between the genes, we can assess whether some genes (which may reside in different loci) functionally interact with each other, indicating a joint basis for the disease etiology. There are various repositories of information on pathway relationships. To consolidate this information, we developed a functional human gene network that integrates information on genes and the functional relationships between genes, based on data from the Kyoto Encyclopedia of Genes and Genomes, the Biomolecular Interaction Network Database, Reactome, the Human Protein Reference Database, the Gene Ontology database, predicted protein-protein interactions, human yeast two-hybrid interactions, and microarray coexpressions. We applied this network to interrelate positional candidate genes from different disease loci and then tested 96 heritable disorders for which the Online Mendelian Inheritance in Man database reported at least three disease genes. Artificial susceptibility loci, each containing 100 genes, were constructed around each disease gene, and we used the network to rank these genes on the basis of their functional interactions. By following up the top five genes per artificial locus, we were able to detect at least one known disease gene in 54% of the loci studied, representing a 2.8-fold increase over random selection. This suggests that our method can significantly reduce the cost and effort of pinpointing true disease genes in analyses of disorders for which numerous loci have been reported but for which
Ensemble-Based Network Aggregation Improves the Accuracy of Gene Network Reconstruction
Xiao, Guanghua; Xie, Yang
2014-01-01
Reverse engineering approaches to constructing gene regulatory networks (GRNs) based on genome-wide mRNA expression data have led to significant biological findings, such as the discovery of novel drug targets. However, the reliability of the reconstructed GRNs needs to be improved. Here, we propose an ensemble-based network aggregation approach to improving the accuracy of network topologies constructed from mRNA expression data. To evaluate the performances of different approaches, we created dozens of simulated networks from combinations of gene-set sizes and sample sizes and also tested our methods on three Escherichia coli datasets. We demonstrate that the ensemble-based network aggregation approach can be used to effectively integrate GRNs constructed from different studies – producing more accurate networks. We also apply this approach to building a network from epithelial mesenchymal transition (EMT) signature microarray data and identify hub genes that might be potential drug targets. The R code used to perform all of the analyses is available in an R package entitled “ENA”, accessible on CRAN (http://cran.r-project.org/web/packages/ENA/). PMID:25390635
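One simple way to realize ensemble aggregation of edge scores from several inference methods is rank averaging, sketched below. This illustrates the general idea; the ENA package's exact aggregation scheme may differ:

```python
import numpy as np

def rank_scores(S):
    """Convert an edge-score matrix to ranks (1 = weakest edge)."""
    flat = S.ravel()
    ranks = np.empty(flat.size)
    ranks[np.argsort(flat)] = np.arange(1, flat.size + 1)
    return ranks.reshape(S.shape)

def aggregate_networks(score_matrices):
    """Average per-method edge ranks so that edges consistently ranked
    high by several methods rise to the top. Rank-based aggregation
    sketch in the spirit of ENA; the package's scheme may differ."""
    return np.mean([rank_scores(S) for S in score_matrices], axis=0)

# Two toy inference methods agree on edge (0, 1) but disagree elsewhere.
m1 = np.array([[0.0, 0.9], [0.8, 0.0]])
m2 = np.array([[0.0, 0.95], [0.1, 0.0]])
agg = aggregate_networks([m1, m2])
```

Working on ranks rather than raw scores sidesteps the problem that different inference methods produce scores on incomparable scales.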
Methods of Voice Reconstruction
Chen, Hung-Chi; Kim Evans, Karen F.; Salgado, Christopher J.; Mardini, Samir
2010-01-01
This article reviews methods of voice reconstruction. Nonsurgical methods of voice reconstruction include electrolarynx, pneumatic artificial larynx, and esophageal speech. Surgical methods of voice reconstruction include neoglottis, tracheoesophageal puncture, and prosthesis. Tracheoesophageal puncture can be performed in patients with pedicled flaps such as colon interposition, jejunum, or gastric pull-up or in free flaps such as perforator flaps, jejunum, and colon flaps. Other flaps for voice reconstruction include the ileocolon flap and jejunum. Laryngeal transplantation is also reviewed. PMID:22550443
How to train your microbe: methods for dynamically characterizing gene networks
Castillo-Hair, Sebastian M.; Igoshin, Oleg A.; Tabor, Jeffrey J.
2015-01-01
Gene networks regulate biological processes dynamically. However, researchers have largely relied upon static perturbations, such as growth media variations and gene knockouts, to elucidate gene network structure and function. Thus, much of the regulation on the path from DNA to phenotype remains poorly understood. Recent studies have utilized improved genetic tools, hardware, and computational control strategies to generate precise temporal perturbations outside and inside of live cells. These experiments have, in turn, provided new insights into the organizing principles of biology. Here, we introduce the major classes of dynamical perturbations that can be used to study gene networks, and discuss technologies available for creating them in a wide range of microbial pathways. PMID:25677419
2012-01-01
Background Reconstructing gene regulatory networks (GRNs) from expression data is one of the most important challenges in systems biology research. Many computational models and methods have been proposed to automate the process of network reconstruction. Inferring robust networks with desired behaviours remains challenging, however. This problem is related to network dynamics but has yet to be investigated using network modeling. Results We propose an incremental evolution approach for inferring GRNs that takes network robustness into consideration and can deal with a large number of network parameters. Our approach includes a sensitivity analysis procedure to iteratively select the most influential network parameters, and it uses a swarm intelligence procedure to perform parameter optimization. We have conducted a series of experiments to evaluate the external behaviors and internal robustness of the networks inferred by the proposed approach. The results and analyses have verified the effectiveness of our approach. Conclusions Sensitivity analysis is crucial to identifying the most sensitive parameters that govern the network dynamics. It can further be used to derive constraints for network parameters in the network reconstruction process. The experimental results show that the proposed approach can successfully infer robust GRNs with desired system behaviors. PMID:22595005
CHAI, Lian En; LAW, Chow Kuan; MOHAMAD, Mohd Saberi; CHONG, Chuii Khim; CHOON, Yee Wen; DERIS, Safaai; ILLIAS, Rosli Md
2014-01-01
Background: Gene expression data often contain missing expression values. Therefore, several imputation methods have been applied to estimate the missing values, including k-nearest neighbour (kNN), local least squares (LLS), and Bayesian principal component analysis (BPCA). However, the effects of these imputation methods on the modelling of gene regulatory networks from gene expression data have rarely been investigated and analysed using a dynamic Bayesian network (DBN). Methods: In the present study, we separately imputed datasets of the Escherichia coli S.O.S. DNA repair pathway and the Saccharomyces cerevisiae cell cycle pathway with kNN, LLS, and BPCA, and subsequently used these to generate gene regulatory networks (GRNs) using a discrete DBN. We made comparisons on the basis of previous studies in order to select the gene network with the least error. Results: We found that BPCA and LLS performed better on larger networks (based on the S. cerevisiae dataset), whereas kNN performed better on smaller networks (based on the E. coli dataset). Conclusion: The results suggest that the performance of each imputation method is dependent on the size of the dataset, and this subsequently affects the modelling of the resultant GRNs using a DBN. In addition, on the basis of these results, a DBN has the capacity to discover potential edges, as well as display interactions, between genes. PMID:24876803
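A minimal version of the first of those imputation methods, kNN, on a genes-by-samples expression matrix looks as follows. This is an illustrative sketch: published implementations typically weight neighbours by distance rather than averaging them uniformly.

```python
import numpy as np

def knn_impute(X, k=2):
    """k-nearest-neighbour imputation for a genes-x-samples matrix with
    NaNs marking missing values. Each missing entry is filled with the
    average of that sample's value over the k genes whose expression
    profiles (on mutually observed samples) are closest. Minimal,
    unweighted sketch of the kNN idea."""
    X = X.astype(float).copy()
    missing = np.isnan(X)
    for i in np.where(missing.any(axis=1))[0]:
        # Mean squared distance to every other gene over shared samples.
        dists = []
        for j in range(X.shape[0]):
            if j == i:
                continue
            shared = ~missing[i] & ~missing[j]
            if shared.any():
                d = float(np.mean((X[i, shared] - X[j, shared]) ** 2))
                dists.append((d, j))
        dists.sort()
        neighbours = [j for _, j in dists[:k]]
        for s in np.where(missing[i])[0]:
            vals = [X[j, s] for j in neighbours if not missing[j, s]]
            if vals:
                X[i, s] = np.mean(vals)
    return X

# Gene 0 is missing its value in sample 2; genes 1 and 2 track it closely.
X = np.array([[1.0, 2.0, np.nan],
              [1.1, 2.1, 3.1],
              [0.9, 1.9, 2.9],
              [5.0, 1.0, 0.0]])
filled = knn_impute(X, k=2)
```

Here the missing entry is filled with the mean of the two co-expressed genes' values (3.1 and 2.9), not with the dissimilar gene 3.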
Line profile reconstruction: validation and comparison of reconstruction methods
NASA Astrophysics Data System (ADS)
Tsai, Ming-Yi; Yost, Michael G.; Wu, Chang-Fu; Hashmonay, Ram A.; Larson, Timothy V.
Currently, open path Fourier transform infrared (OP-FTIR) spectrometers have been applied in some fenceline monitoring, but their use has been limited because path-integrated concentration measurements typically only provide an estimate of the average concentration. We present a series of experiments that further explore the use of path-integrated measurements to reconstruct various pollutant distributions along a linear path. Our experiments were conducted in a ventilation chamber using an OP-FTIR instrument to monitor a tracer-gas release over a fenceline configuration. These experiments validate a line profile method (1-D reconstruction). Additionally, we expand current reconstruction techniques by applying the Bootstrap to our measurements. We compared our reconstruction results to our point samplers using the concordance correlation factor (CCF). Of the four different release types, three were successfully reconstructed with CCFs greater than 0.9. The difficult reconstruction involved a narrow release where the pollutant was limited to one segment of the segmented beampath. In general, of the three reconstruction methods employed, the average of the bootstrapped reconstructions was found to have the highest CCFs when compared to the point samplers. Furthermore, the bootstrap method was the most flexible and allowed a determination of the uncertainty surrounding our reconstructions.
Method for positron emission mammography image reconstruction
Smith, Mark Frederick
2004-10-12
An image reconstruction method comprising accepting coincidence data either from a data file or in real time from a pair of detector heads, culling event data outside a desired energy range, optionally saving the desired data for each detector position or for each pair of detector pixels on the two detector heads, and then reconstructing the image either by backprojection image reconstruction or by iterative image reconstruction. In the backprojection image reconstruction mode, rays are traced between the centers of lines of response (LORs); counts are then allocated either by nearest-pixel interpolation or by an overlap method, corrected for geometric effects and attenuation, and the data file updated. If the iterative image reconstruction option is selected, one implementation is to compute grid-based Siddon ray tracing and to perform maximum likelihood expectation maximization (MLEM) computed by either: a) tracing parallel rays between subpixels on opposite detector heads; or b) tracing rays between randomized endpoint locations on opposite detector heads.
2010-01-01
Gap genes are involved in segment determination during the early development of the fruit fly Drosophila melanogaster as well as in other insects. This review attempts to synthesize the current knowledge of the gap gene network through a comprehensive survey of the experimental literature. I focus on genetic and molecular evidence, which provides us with an almost-complete picture of the regulatory interactions responsible for trunk gap gene expression. I discuss the regulatory mechanisms involved, and highlight the remaining ambiguities and gaps in the evidence. This is followed by a brief discussion of molecular regulatory mechanisms for transcriptional regulation, as well as precision and size-regulation provided by the system. Finally, I discuss evidence on the evolution of gap gene expression from species other than Drosophila. My survey concludes that studies of the gap gene system continue to reveal interesting and important new insights into the role of gene regulatory networks in development and evolution. PMID:20927566
Reconstructive methods in hearing disorders - surgical methods
Zahnert, Thomas
2005-01-01
Restoration of hearing is in many cases associated with the resocialization of those affected, and therefore occupies an important place in a society where communication is becoming ever faster. Not all problems can be solved surgically. Even 50 years after the introduction of tympanoplasty, hearing results remain unsatisfactory and often do not reach the threshold for social hearing. In most cases the cause is incomplete restoration of the mucosal function of the middle ear and tube, which leads to ventilation disorders of the ear and prevents real vibration of the reconstructed middle ear; some failures, however, are caused by the biomechanics of the reconstructed ossicular chain. There has been progress in reconstructive middle ear surgery, particularly in the development of implants. Titanium implants, which are distinguished by outstanding biocompatibility, delicate design, and the biomechanical possibilities they offer in reconstructing chain function, can be regarded as a new generation. Metal implants for the first time allow a controlled close fit with the remainder of the chain and the integration of micromechanical functions into the implant. There has also been progress in microsurgery itself, particularly in the operative procedures for auditory canal atresia, the restoration of the tympanic membrane, and the coupling of implants. This paper summarizes the current state of reconstructive microsurgery, paying attention to acousto-mechanical rules. PMID:22073050
PET Image Reconstruction Using Kernel Method
Wang, Guobao; Qi, Jinyi
2014-01-01
Image reconstruction from low-count PET projection data is challenging because the inverse problem is ill-posed. Prior information can be used to improve image quality. Inspired by the kernel methods in machine learning, this paper proposes a kernel based method that models PET image intensity in each pixel as a function of a set of features obtained from prior information. The kernel-based image model is incorporated into the forward model of PET projection data and the coefficients can be readily estimated by the maximum likelihood (ML) or penalized likelihood image reconstruction. A kernelized expectation-maximization (EM) algorithm is presented to obtain the ML estimate. Computer simulations show that the proposed approach can achieve better bias versus variance trade-off and higher contrast recovery for dynamic PET image reconstruction than the conventional maximum likelihood method with and without post-reconstruction denoising. Compared with other regularization-based methods, the kernel method is easier to implement and provides better image quality for low-count data. Application of the proposed kernel method to a 4D dynamic PET patient dataset showed promising results. PMID:25095249
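The kernelized EM update can be written compactly for a toy system. Here the image is modelled as x = Ka, where K is a kernel matrix built from prior-information features and a is the coefficient image; the standard MLEM multiplicative update is applied to a, and with K equal to the identity this reduces to ordinary MLEM. This is a schematic sketch, not the paper's full system model (no attenuation, normalization, or randoms/scatter terms):

```python
import numpy as np

def kernel_em(P, K, y, n_iter=1000):
    """Kernelized MLEM for the model y ~ Poisson(P K a).

    P: system (projection) matrix, K: kernel matrix from prior features,
    y: measured counts. Returns the reconstructed image x = K a.
    Schematic sketch of the kernel-EM idea only."""
    PK = P @ K
    sens = PK.sum(axis=0)          # sensitivity: back-projection of ones
    a = np.ones(K.shape[1])
    for _ in range(n_iter):
        proj = PK @ a              # forward projection
        a *= (PK.T @ (y / proj)) / sens   # multiplicative EM update
    return K @ a

# Noise-free toy system: 3 lines of response, 2 pixels, K = identity.
P = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
y = P @ x_true
x_hat = kernel_em(P, np.eye(2), y)
```

On consistent noise-free data the iteration converges to the true image; the benefit of a non-trivial K appears with noisy low-count data, where the kernel regularizes the estimate through the prior features.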
Gene network and pathway generation and analysis: Editorial
Zhao, Zhongming; Sanfilippo, Antonio P.; Huang, Kun
2011-02-18
The past decade has witnessed an exponential growth of biological data including genomic sequences, gene annotations, expression and regulation, and protein-protein interactions. A key aim in the post-genome era is to systematically catalogue gene networks and pathways in a dynamic living cell and apply them to study diseases and phenotypes. To promote research in systems biology and its application to disease studies, we organized a workshop focusing on the reconstruction and analysis of gene networks and pathways in any organism from high-throughput data collected through techniques such as microarray analysis and RNA-Seq.
Bullet trajectory reconstruction - Methods, accuracy and precision.
Mattijssen, Erwin J A T; Kerkhoff, Wim
2016-05-01
Based on the spatial relation between a primary and secondary bullet defect, or on the shape and dimensions of the primary bullet defect, a bullet's trajectory prior to impact can be estimated for a shooting scene reconstruction. The accuracy and precision of the estimated trajectories vary with factors such as the applied method of reconstruction, the (true) angle of incidence, the properties of the target material and the properties of the bullet upon impact. This study focused on the accuracy and precision of estimated bullet trajectories when different variants of the probing method, ellipse method, and lead-in method are applied to bullet defects resulting from shots at various angles of incidence on drywall, MDF and sheet metal. The results show that in most situations the best performance (accuracy and precision) is obtained with the probing method. Only at the lowest angles of incidence was performance better with either the ellipse or the lead-in method. The data provided in this paper can be used to select the appropriate method(s) for reconstruction, to correct for systematic errors (accuracy) and to state the precision by means of a confidence interval for the specific measurement. PMID:27044032
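For instance, the ellipse method mentioned above estimates the angle of incidence from the elongation of the roughly elliptical primary defect. A minimal sketch, using the standard relation sin(angle of incidence) = defect width / defect length (the measurements below are invented):

```python
import math

def impact_angle_deg(defect_width_mm, defect_length_mm):
    """Ellipse method: a bullet's circular cross-section elongates into an
    ellipse on oblique impact; sin(angle of incidence) = width / length."""
    ratio = defect_width_mm / defect_length_mm
    return math.degrees(math.asin(ratio))

# A defect twice as long as it is wide implies a 30 degree angle of incidence
angle = impact_angle_deg(5.0, 10.0)
```

Real casework corrects such geometric estimates for the systematic errors and confidence intervals the paper tabulates per material and method.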
Hybrid stochastic simplifications for multiscale gene networks
Crudu, Alina; Debussche, Arnaud; Radulescu, Ovidiu
2009-01-01
Background Stochastic simulation of gene networks by Markov processes has important applications in molecular biology. The complexity of exact simulation algorithms scales with the number of discrete jumps to be performed. Approximate schemes reduce the computational time by reducing the number of simulated discrete events. Also, answering important questions about the relation between network topology and intrinsic noise generation and propagation should be based on general mathematical results. These general results are difficult to obtain for exact models. Results We propose a unified framework for hybrid simplifications of Markov models of multiscale stochastic gene network dynamics. We discuss several possible hybrid simplifications, and provide algorithms to obtain them from pure jump processes. In hybrid simplifications, some components are discrete and evolve by jumps, while other components are continuous. Hybrid simplifications are obtained by partial Kramers-Moyal expansion [1-3] which is equivalent to the application of the central limit theorem to a sub-model. By averaging and variable aggregation we drastically reduce simulation time and eliminate non-critical reactions. Hybrid and averaged simplifications can be used for more effective simulation algorithms and for obtaining general design principles relating noise to topology and time scales. The simplified models reproduce with good accuracy the stochastic properties of the gene networks, including waiting times in intermittence phenomena, fluctuation amplitudes and stationary distributions. The methods are illustrated on several gene network examples. Conclusion Hybrid simplifications can be used for onion-like (multi-layered) approaches to multi-scale biochemical systems, in which various descriptions are used at various scales. Sets of discrete and continuous variables are treated with different methods and are coupled together in a physically justified approach. PMID:19735554
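As a point of reference for what the hybrid schemes simplify, here is the exact pure-jump (Gillespie) simulation of the simplest possible gene expression model, a birth-death process for a single transcript species. The rate constants are illustrative, not taken from the paper:

```python
import random

def ssa_birth_death(k_prod=10.0, k_deg=1.0, t_end=50.0, seed=1):
    """Exact Gillespie simulation of a one-transcript birth-death process,
    i.e. the pure jump Markov process that hybrid schemes approximate."""
    rng = random.Random(seed)
    t, n = 0.0, 0
    while True:
        a_prod, a_deg = k_prod, k_deg * n      # reaction propensities
        a_total = a_prod + a_deg
        t += rng.expovariate(a_total)          # time to next jump
        if t >= t_end:
            return n
        if rng.random() * a_total < a_prod:    # pick which reaction fires
            n += 1
        else:
            n -= 1

# The stationary copy number is Poisson with mean k_prod / k_deg = 10
avg = sum(ssa_birth_death(seed=s) for s in range(200)) / 200
```

The cost per trajectory scales with the number of jumps, which is exactly what motivates replacing fast, high-copy-number species by continuous (diffusion) variables in the hybrid simplifications.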
Magnetic flux reconstruction methods for shaped tokamaks
Tsui, Chi-Wa
1993-12-01
The use of a variational method permits the Grad-Shafranov (GS) equation to be solved by reducing the problem of solving the 2D non-linear partial differential equation to the problem of minimizing a function of several variables. This high speed algorithm approximately solves the GS equation given a parameterization of the plasma boundary and the current profile (p' and FF' functions). The author treats the current profile parameters as unknowns. The goal is to reconstruct the internal magnetic flux surfaces of a tokamak plasma and the toroidal current density profile from the external magnetic measurements. This is a classic problem of inverse equilibrium determination. The current profile parameters can be evaluated by several different matching procedures. Matching of magnetic flux and field at the probe locations using the Biot-Savart law and magnetic Green's function provides a robust method of magnetic reconstruction. The matching of poloidal magnetic field on the plasma surface provides a unique method of identifying the plasma current profile. However, the power of this method is greatly compromised by the experimental errors of the magnetic signals. The Casing Principle provides a very fast way to evaluate the plasma contribution to the magnetic signals. It has the potential of being a fast matching method. The performance of this method is hindered by the accuracy of the poloidal magnetic field computed from the equilibrium solver. A flux reconstruction package has been implemented which integrates a vacuum field solver using a filament model for the plasma, a multi-layer perceptron neural network as an interface, and the volume integration of plasma current density using Green's functions as a matching method for the current profile parameters. The flux reconstruction package is applied to compare with the ASEQ and EFIT data. The results are promising.
Buffering in cyclic gene networks
NASA Astrophysics Data System (ADS)
Glyzin, S. D.; Kolesov, A. Yu.; Rozov, N. Kh.
2016-06-01
We consider cyclic chains of unidirectionally coupled delay differential-difference equations that are mathematical models of artificial oscillating gene networks. We establish that the buffering phenomenon is realized in these systems for an appropriate choice of the parameters: any given finite number of stable periodic motions of a special type, the so-called traveling waves, coexist.
2014-01-01
Background To improve on the tedious task of reconstructing gene networks by experimentally testing the possible interactions between genes, it has become a trend to adopt an automated reverse engineering procedure instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks with an evolutionary algorithm, two important issues must be addressed: premature convergence and high computational cost. To tackle the former and enhance the performance of traditional evolutionary algorithms, parallel-model evolutionary algorithms can be used. To overcome the latter and speed up the computation, cloud computing is a promising solution, most commonly through the MapReduce programming model, a fault-tolerant framework for implementing parallel algorithms for inferring large gene networks. Results This work presents a practical framework to infer large gene networks by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results analyzed. They show that our parallel approach can be successfully used to infer networks with desired behaviors and that the computation time can be greatly reduced. Conclusions Parallel population-based algorithms can effectively determine network parameters and perform better than the widely used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. By coupling the parallel model population-based optimization method and the parallel
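The serial core of such a hybrid GA-PSO optimizer can be sketched as below. The fitness function is a stand-in for the network-model error the authors minimize (not their actual model), and all swarm constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(9)

def fitness(params):
    """Stand-in objective: squared error to a fixed 'observed' profile."""
    target = np.array([1.0, 0.5, 2.0])
    return float(((params - target) ** 2).sum())

# Hybrid GA-PSO sketch: PSO velocity updates, plus a GA-style step that
# replaces the worst half of the swarm with crossovers of the best half.
n_particles, dim, iters = 20, 3, 60
pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()
pbest_f = np.array([fitness(p) for p in pos])

for _ in range(iters):
    gbest = pbest[pbest_f.argmin()]
    vel = (0.7 * vel
           + 1.5 * rng.random((n_particles, 1)) * (pbest - pos)
           + 1.5 * rng.random((n_particles, 1)) * (gbest - pos))
    pos = pos + vel
    f = np.array([fitness(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    # GA step: uniform crossover among the best personal bests replaces
    # the positions of the worst-performing particles
    order = pbest_f.argsort()
    half = n_particles // 2
    for w in order[half:]:
        a = pbest[order[rng.integers(half)]]
        b = pbest[order[rng.integers(half)]]
        mask = rng.random(dim) < 0.5
        pos[w] = np.where(mask, a, b)

best_error = float(pbest_f.min())
```

In the paper's framework, the fitness evaluations (the expensive part for large networks) are what get distributed across Hadoop MapReduce workers.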
Adaptive Models for Gene Networks
Shin, Yong-Jun; Sayed, Ali H.; Shen, Xiling
2012-01-01
Biological systems are often treated as time-invariant by computational models that use fixed parameter values. In this study, we demonstrate that the behavior of the p53-MDM2 gene network in individual cells can be tracked using adaptive filtering algorithms and the resulting time-variant models can approximate experimental measurements more accurately than time-invariant models. Adaptive models with time-variant parameters can help reduce modeling complexity and can more realistically represent biological systems. PMID:22359614
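A minimal illustration of the idea, using an invented scalar system rather than the authors' p53-MDM2 model: a least-mean-squares (LMS) adaptive filter tracking a slowly drifting first-order coefficient, which a fixed-parameter model could only fit on average:

```python
import numpy as np

rng = np.random.default_rng(3)

# Time-variant first-order system x[t+1] = a[t]*x[t] + v[t], with a
# slowly drifting coefficient a[t] (a stand-in for a gene network whose
# parameters change over time)
T = 2000
a_true = 0.5 + 0.3 * np.sin(2 * np.pi * np.arange(T) / T)
x = np.zeros(T + 1)
for t in range(T):
    x[t + 1] = a_true[t] * x[t] + rng.normal(scale=0.1)

# Scalar LMS filter: adapt the coefficient estimate from prediction error
mu = 1.0                       # illustrative step size
a_hat = np.zeros(T)
a = 0.0
for t in range(T):
    e = x[t + 1] - a * x[t]    # one-step prediction error
    a += mu * e * x[t]         # LMS update
    a_hat[t] = a

# Mean tracking error after the initial transient
track_err = float(np.mean(np.abs(a_hat[200:] - a_true[200:])))
```

The step size trades tracking lag against gradient noise, which is the same complexity/accuracy trade-off the abstract alludes to for time-variant models.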
Gene networks controlling petal organogenesis.
Huang, Tengbo; Irish, Vivian F
2016-01-01
One of the biggest unanswered questions in developmental biology is how growth is controlled. Petals are an excellent organ system for investigating growth control in plants: petals are dispensable, have a simple structure, and are largely refractory to environmental perturbations that can alter their size and shape. In recent studies, a number of genes controlling petal growth have been identified. The overall picture of how such genes function in petal organogenesis is beginning to be elucidated. This review will focus on studies using petals as a model system to explore the underlying gene networks that control organ initiation, growth, and final organ morphology. PMID:26428062
Exhaustive Search for Fuzzy Gene Networks from Microarray Data
Sokhansanj, B A; Fitch, J P; Quong, J N; Quong, A A
2003-07-07
Recent technological advances in high-throughput data collection allow for the study of increasingly complex systems on the scale of the whole cellular genome and proteome. Gene network models are required to interpret large and complex data sets. Rationally designed system perturbations (e.g. gene knock-outs, metabolite removal, etc.) can be used to iteratively refine hypothetical models, leading to a modeling-experiment cycle for high-throughput biological system analysis. We use fuzzy logic gene network models because they have greater resolution than Boolean logic models and do not require the precise parameter measurement needed for chemical kinetics-based modeling. The fuzzy gene network approach is tested by exhaustive search for network models describing cyclin gene interactions in yeast cell cycle microarray data, with preliminary success in recovering interactions predicted by previous biological knowledge and other analysis techniques. Our goal is to further develop this method in combination with experiments we are performing on bacterial regulatory networks.
Hong Luo; Hanping Xiao; Robert Nourgaliev; Chunpei Cai
2011-06-01
A comparative study of different reconstruction schemes for a reconstruction-based discontinuous Galerkin method, termed RDG(P1P2), is performed for compressible flow problems on arbitrary grids. The RDG method is designed to enhance the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution via a reconstruction scheme commonly used in the finite volume method. Both Green-Gauss and least-squares reconstruction methods and a least-squares recovery method are implemented to obtain a quadratic polynomial representation of the underlying discontinuous Galerkin linear polynomial solution on each cell. These three reconstruction/recovery methods are compared for a variety of compressible flow problems on arbitrary meshes to assess their accuracy and robustness. The numerical results demonstrate that all three reconstruction methods can significantly improve the accuracy of the underlying second-order DG method, although the least-squares reconstruction method provides the best performance in terms of both accuracy and robustness.
A new target reconstruction method considering atmospheric refraction
NASA Astrophysics Data System (ADS)
Zuo, Zhengrong; Yu, Lijuan
2015-12-01
In this paper, a new target reconstruction method considering atmospheric refraction is presented to improve 3D reconstruction accuracy in long-range surveillance systems. The basic idea of the method is that the atmosphere between the camera and the target is partitioned radially into several thin layers, in each of which the density is regarded as uniform. The light propagation path is then traced in reverse from sensor to target by applying Snell's law at the interface between layers, and finally the average of the target positions traced from the different cameras is taken as the reconstructed position. Reconstruction experiments were carried out, and the results show that the new method has much better reconstruction accuracy than the traditional stereoscopic reconstruction method.
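The layered reverse ray trace described above reduces to repeated application of Snell's law, n₁ sin θ₁ = n₂ sin θ₂, at each layer interface. A sketch with invented layer thicknesses and refractive indices (three 1 km layers of slightly decreasing index):

```python
import math

def trace_through_layers(theta0_deg, layers):
    """Trace a ray through stacked uniform layers (thickness_m, n),
    applying Snell's law at each interface; returns the horizontal
    offset accumulated along the path."""
    n_prev = layers[0][1]
    theta = math.radians(theta0_deg)
    x = 0.0
    for thickness, n in layers:
        # refract from the previous layer into this one
        s = n_prev * math.sin(theta) / n
        theta = math.asin(min(1.0, s))
        x += thickness * math.tan(theta)
        n_prev = n
    return x

# Hypothetical layers with decreasing density (hence decreasing index)
layers = [(1000.0, 1.000293), (1000.0, 1.000250), (1000.0, 1.000200)]
bent = trace_through_layers(60.0, layers)
straight = 3000.0 * math.tan(math.radians(60.0))
```

Even these tiny index differences displace the ray by roughly a metre over 3 km at a 60 degree zenith-like angle, which is why neglecting refraction biases long-range triangulation.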
Magnetic Field Configuration Models and Reconstruction Methods: a comparative study
NASA Astrophysics Data System (ADS)
Al-haddad, Nada; Möstl, Christian; Roussev, Ilia; Nieves-Chinchilla, Teresa; Poedts, Stefaan; Hidalgo, Miguel Angel; Marubashi, Katsuhide; Savani, Neel
2012-07-01
This study aims to provide a reference for different magnetic field models and reconstruction methods. To understand the dissimilarities among these models and codes, we analyze 59 events from the CDAW list using four different magnetic field models and reconstruction techniques: force-free reconstruction (Lepping et al. (1990); Lynch et al. (2003)); magnetostatic reconstruction, referred to as Grad-Shafranov (Hu & Sonnerup (2001); Mostl et al. (2009)); cylinder reconstruction (Marubashi & Lepping (2007)); and elliptical, non-force-free reconstruction (Hidalgo et al. (2002)). The resulting parameters of the reconstructions for the 59 events are compared statistically, as well as in more detail for some cases. The differences between the reconstruction codes are discussed, and suggestions are provided on how to enhance them. Finally, we examine two unique cases in depth to provide a comprehensive picture of the different aspects of how the fitting codes work.
Spectrum reconstruction based on the constrained optimal linear inverse methods.
Ren, Wenyi; Zhang, Chunmin; Mu, Tingkui; Dai, Haishan
2012-07-01
The dispersion effect of birefringent material results in spectrally varying Nyquist frequency for the Fourier transform spectrometer based on birefringent prism. Correct spectral information cannot be retrieved from the observed interferogram if the dispersion effect is not appropriately compensated. Some methods, such as nonuniform fast Fourier transforms and compensation method, were proposed to reconstruct the spectrum. In this Letter, an alternative constrained spectrum reconstruction method is suggested for the stationary polarization interference imaging spectrometer (SPIIS) based on the Savart polariscope. In the theoretical model of the interferogram, the noise and the total measurement error are included, and the spectrum reconstruction is performed by using the constrained optimal linear inverse methods. From numerical simulation, it is found that the proposed method is much more effective and robust than the nonconstrained spectrum reconstruction method proposed by Jian, and provides a useful spectrum reconstruction approach for the SPIIS. PMID:22743461
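The constrained optimal linear inverse idea can be illustrated with generic Tikhonov-regularized least squares; this is a sketch, not the exact SPIIS operator, and the forward matrix, dispersion-like nonuniform sampling and test spectrum below are all synthetic:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic forward model: interferogram = A @ spectrum, with A a cosine
# transform sampled at nonuniform (dispersion-distorted) path differences
n = 64
opd = np.cumsum(0.8 + 0.4 * rng.random(n))    # nonuniform optical path diffs
freq = np.linspace(0.05, 0.5, n)
A = np.cos(2 * np.pi * np.outer(opd, freq))

spec_true = np.exp(-0.5 * ((freq - 0.25) / 0.03) ** 2)   # one spectral line
y = A @ spec_true + rng.normal(scale=0.05, size=n)       # noisy interferogram

# Regularized ("constrained") linear inverse: ridge solution of A s = y
lam = 1.0
spec_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

peak_shift = abs(int(np.argmax(spec_hat)) - int(np.argmax(spec_true)))
```

The regularization term plays the role of the constraint: it suppresses the noise amplification that a plain inversion of the dispersion-distorted transform would produce, at the cost of a small bias.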
Gene networks and liar paradoxes
Isalan, Mark
2009-01-01
Network motifs are small patterns of connections, found over-represented in gene regulatory networks. An example is the negative feedback loop (e.g. factor A represses itself). This opposes its own state so that when ‘on’ it tends towards ‘off’ – and vice versa. Here, we argue that such self-opposition, if considered dimensionlessly, is analogous to the liar paradox: ‘This statement is false’. When ‘true’ it implies ‘false’ – and vice versa. Such logical constructs have provided philosophical consternation for over 2000 years. Extending the analogy, other network topologies give strikingly varying outputs over different dimensions. For example, the motif ‘A activates B and A. B inhibits A’ can give switches or oscillators with time only, or can lead to Turing-type patterns with both space and time (spots, stripes or waves). It is argued here that the dimensionless form reduces to a variant of ‘The following statement is true. The preceding statement is false’. Thus, merely having a static topological description of a gene network can lead to a liar paradox. Network diagrams are only snapshots of dynamic biological processes and apparent paradoxes can reveal important biological mechanisms that are far from paradoxical when considered explicitly in time and space. PMID:19722183
High resolution x-ray CMT: Reconstruction methods
Brown, J.K.
1997-02-01
This paper qualitatively discusses the primary characteristics of methods for reconstructing tomographic images from a set of projections. These reconstruction methods can be categorized as either "analytic" or "iterative" techniques. Analytic algorithms are derived from the formal inversion of equations describing the imaging process, while iterative algorithms incorporate a model of the imaging process and provide a mechanism to iteratively improve image estimates. Analytic reconstruction algorithms are typically computationally more efficient than iterative methods; however, analytic algorithms are available for a relatively limited set of imaging geometries and situations. Thus, the framework of iterative reconstruction methods is better suited for high accuracy, tomographic reconstruction codes.
Multigrid hierarchical simulated annealing method for reconstructing heterogeneous media.
Pant, Lalit M; Mitra, Sushanta K; Secanell, Marc
2015-12-01
A reconstruction methodology based on different-phase-neighbor (DPN) pixel swapping and multigrid hierarchical annealing is presented. The method performs reconstructions by starting at a coarse image and successively refining it. The DPN information is used at each refinement stage to freeze interior pixels of preformed structures. This preserves the large-scale structures in refined images and also reduces the number of pixels to be swapped, thereby resulting in a decrease in the necessary computational time to reach a solution. Compared to conventional single-grid simulated annealing, this method was found to reduce the required computation time to achieve a reconstruction by around a factor of 70-90, with the potential of even higher speedups for larger reconstructions. The method is able to perform medium-sized (up to 300³ voxels) three-dimensional reconstructions with multiple correlation functions in 36-47 h. PMID:26764849
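The single-grid baseline that the multigrid scheme accelerates is annealing by phase swaps against a correlation-function target. A small 2D sketch (the target structure, lag range, temperature schedule and step count are all invented; no DPN freezing or grid hierarchy is included):

```python
import numpy as np

rng = np.random.default_rng(11)

def s2_row(img, max_lag=4):
    """Horizontal two-point probability S2(r) of the solid phase."""
    return np.array([(img * np.roll(img, r, axis=1)).mean()
                     for r in range(1, max_lag + 1)])

# Target microstructure: vertical stripes (strong short-range order)
n = 24
target = np.zeros((n, n), dtype=int)
target[:, ::3] = 1
s2_target = s2_row(target)

# Start from a random image with the same volume fraction
flat = target.flatten()
rng.shuffle(flat)
img = flat.reshape(n, n)

def energy(im):
    return float(((s2_row(im) - s2_target) ** 2).sum())

e = e0 = energy(img)
temp = 1e-3
for _ in range(4000):
    solid = np.argwhere(img == 1)
    void = np.argwhere(img == 0)
    i = tuple(solid[rng.integers(len(solid))])
    j = tuple(void[rng.integers(len(void))])
    img[i], img[j] = 0, 1                      # trial phase swap
    e_new = energy(img)
    if e_new > e and rng.random() > np.exp((e - e_new) / temp):
        img[i], img[j] = 1, 0                  # reject: undo the swap
    else:
        e = e_new
    temp *= 0.999                              # geometric cooling

final_energy = e
```

Every step requires re-evaluating the correlation functions, which is why freezing interior pixels and working coarse-to-fine, as the paper does, pays off so heavily in 3D.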
Reconstructing ENSO - Methods, Proxy Data and Teleconnections
NASA Astrophysics Data System (ADS)
Wilson, R.; Cook, E.; D'Arrigo, R.; Riedwyl, N.; Evans, M.; Tudhope, A.; Allan, R.
2009-04-01
The El Niño/Southern Oscillation (ENSO) is globally important and influences climate at interannual and decadal time-scales with resultant links with extreme weather events and associated socio-economic problems. An understanding of the ENSO system is therefore crucial to allow for a better understanding of how ENSO will 'react' under current global warming. Palaeoclimate reconstructions of ENSO variability allow extension prior to the relatively short instrumental record. However, due to the paucity of relevant annually resolved proxy archives (e.g. corals) in the central and eastern Pacific, reconstructions must rely on proxy data that are located in regions where the local climate is teleconnected with the tropical Pacific. In this study we compare three newly developed independent NINO3.4 SST reconstructions using data from (1) the central Pacific (corals), (2) the TexMex region of the United States (tree-rings), and (3) other regions in the tropics (corals and an ice-core) which are teleconnected with central Pacific SSTs in the 20th century. Although these three reconstructions are strongly calibrated and well verified, inter-proxy comparison shows a significant weakening in inter-proxy coherence in the 19th century. This break down in common signal could be related to insufficient data, dating errors in some of the proxy records or a break down in ENSO's influence on other regions. However, spectral analysis indicates that each reconstruction portrays ENSO-like spectral properties. Superposed epoch analysis also shows that each reconstruction shows a generally consistent 'El-Niño-like' response to major volcanic events in the following year, while during years T+4 to T+7, 'La Niña-like' conditions prevail. These results suggest that each of the series expresses ENSO-like 'behaviour' but this 'behaviour' however does not appear to be spatially or temporally consistent. This result may reflect published observations that there appear to be
An analytic reconstruction method for PET based on cubic splines
NASA Astrophysics Data System (ADS)
Kastis, George A.; Kyriakopoulou, Dimitra; Fokas, Athanasios S.
2014-03-01
PET imaging is an important nuclear medicine modality that measures in vivo distribution of imaging agents labeled with positron-emitting radionuclides. Image reconstruction is an essential component in tomographic medical imaging. In this study, we present the mathematical formulation and an improved numerical implementation of an analytic, 2D reconstruction method called the Spline Reconstruction Technique (SRT). This technique is based on the numerical evaluation of the Hilbert transform of the sinogram via an approximation in terms of 'custom made' cubic splines. It also imposes sinogram thresholding which restricts reconstruction only within object pixels. Furthermore, by utilizing certain symmetries it achieves a reconstruction time similar to that of FBP. We have implemented SRT in the software library called STIR and have evaluated this method using simulated PET data. We present reconstructed images from several phantoms. Sinograms have been generated at various Poisson noise levels and 20 realizations of noise have been created at each level. In addition to visual comparisons of the reconstructed images, the contrast has been determined as a function of noise level. Further analysis includes the creation of line profiles when necessary, to determine resolution. Numerical simulations suggest that the SRT algorithm produces fast and accurate reconstructions at realistic noise levels. The contrast is over 95% in all phantoms examined and is independent of noise level.
A comparison of ancestral state reconstruction methods for quantitative characters.
Royer-Carenzi, Manuela; Didier, Gilles
2016-09-01
Choosing an ancestral state reconstruction method among the alternatives available for quantitative characters may be puzzling. We present here a comparison of seven of them, namely the maximum likelihood, restricted maximum likelihood, generalized least squares under Brownian, Brownian-with-trend and Ornstein-Uhlenbeck models, phylogenetic independent contrasts and squared parsimony methods. A review of the relations between these methods shows that the maximum likelihood, the restricted maximum likelihood and the generalized least squares under Brownian model infer the same ancestral states and can only be distinguished by the distributions accounting for the reconstruction uncertainty which they provide. The respective accuracy of the methods is assessed over character evolution simulated under a Brownian motion with (and without) directional or stabilizing selection. We give the general form of ancestral state distributions conditioned on leaf states under the simulation models. Ancestral distributions are used first, to give a theoretical lower bound of the expected reconstruction error, and second, to develop an original evaluation scheme which is more efficient than comparing the reconstructed and the simulated states. Our simulations show that: (i) the distributions of the reconstruction uncertainty provided by the methods generally make sense (some more than others); (ii) it is essential to detect the presence of an evolutionary trend and to choose a reconstruction method accordingly; (iii) all the methods show good performances on characters under stabilizing selection; (iv) without trend or stabilizing selection, the maximum likelihood method is generally the most accurate. PMID:27234644
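In the simplest setting the ML, REML and GLS-under-Brownian estimators named above coincide with an inverse-branch-length weighted mean of the leaf states. A sketch for a star phylogeny (a real implementation handles arbitrary tree topologies, e.g. by Felsenstein-style pruning):

```python
# Under Brownian motion, each leaf state is Normal(root, sigma^2 * t_i)
# independently on a star tree, so the ML root state is the leaf average
# weighted by 1 / branch length (longer branches are less informative).
def ml_root_state(leaf_states, branch_lengths):
    w = [1.0 / t for t in branch_lengths]
    return sum(wi * x for wi, x in zip(w, leaf_states)) / sum(w)

# Three leaves; the leaf at distance 2 gets half the weight of the others
root = ml_root_state([1.0, 2.0, 4.0], [1.0, 1.0, 2.0])
```

The differences the paper analyzes (trend, Ornstein-Uhlenbeck stabilizing selection) amount to changing this underlying Gaussian model, and hence the weights and the uncertainty distribution around the estimate.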
Preconditioning methods for improved convergence rates in iterative reconstructions
Clinthorne, N.H.; Chiao, Pingchun; Rogers, W.L.; Pan, T.S.; Stamos, J.A.
1993-03-01
Because of the characteristics of the tomographic inversion problem, iterative reconstruction techniques often suffer from poor convergence rates--especially at high spatial frequencies. By using preconditioning methods, the convergence properties of most iterative methods can be greatly enhanced without changing their ultimate solution. To increase reconstruction speed, the authors have applied spatially-invariant preconditioning filters that can be designed using the tomographic system response and implemented using 2-D frequency-domain filtering techniques. In a sample application, the authors performed reconstructions from noiseless, simulated projection data, using preconditioned and conventional steepest-descent algorithms. The preconditioned methods demonstrated residuals that were up to a factor of 30 lower than the unassisted algorithms at the same iteration. Applications of these methods to regularized reconstructions from projection data containing Poisson noise showed similar, although not as dramatic, behavior.
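The effect is easy to reproduce on a small synthetic system. Here the preconditioner is idealized as the exact inverse of the system matrix (the paper instead designs spatially invariant frequency-domain filters from the tomographic system response, which only approximate this):

```python
import numpy as np

rng = np.random.default_rng(5)

# Ill-conditioned symmetric positive definite system A x = b, standing in
# for the normal equations of the tomographic inversion problem
n = 40
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigvals = np.logspace(0, -3, n)            # condition number 1000
A = Q @ np.diag(eigvals) @ Q.T
x_true = rng.standard_normal(n)
b = A @ x_true

def steepest_descent(M_inv, iters=200):
    """Preconditioned steepest descent: search direction M_inv @ residual,
    with the exact line-search step for a quadratic objective."""
    x = np.zeros(n)
    for _ in range(iters):
        r = b - A @ x
        d = M_inv @ r
        alpha = (r @ d) / (d @ (A @ d))
        x += alpha * d
    return float(np.linalg.norm(x - x_true))

err_plain = steepest_descent(np.eye(n))
err_precond = steepest_descent(np.linalg.inv(A))   # idealized preconditioner
```

With the ideal preconditioner the iteration converges immediately, while plain steepest descent stalls on the low-eigenvalue (high spatial frequency) components; practical preconditioners sit between these extremes, and both iterations share the same fixed point.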
Reconstruction-classification method for quantitative photoacoustic tomography
NASA Astrophysics Data System (ADS)
Malone, Emma; Powell, Samuel; Cox, Ben T.; Arridge, Simon
2015-12-01
We propose a combined reconstruction-classification method for simultaneously recovering absorption and scattering in turbid media from images of absorbed optical energy. This method exploits knowledge that optical parameters are determined by a limited number of classes to iteratively improve their estimate. Numerical experiments show that the proposed approach allows for accurate recovery of absorption and scattering in two and three dimensions, and delivers superior image quality with respect to traditional reconstruction-only approaches.
Reconstruction methods for phase-contrast tomography
Raven, C.
1997-02-01
Phase contrast imaging with coherent x-rays can be divided into outline imaging and holography, depending on the wavelength λ, the object size d and the object-to-detector distance r. When r << d²/λ, phase contrast occurs only in regions where the refractive index changes rapidly, i.e. at interfaces and edges in the sample. With increasing object-to-detector distance we enter the regime of holographic imaging. The image contrast outside the shadow region of the object is due to interference of the direct, undiffracted beam and a beam diffracted by the object, or, in terms of holography, the interference of a reference wave with the object wave. Both outline imaging and holography offer the possibility of obtaining three-dimensional information about the sample in conjunction with a tomographic technique, but the data treatment and the kind of information one can obtain from the reconstruction are different.
New method for 3D reconstruction in digital tomosynthesis
NASA Astrophysics Data System (ADS)
Claus, Bernhard E. H.; Eberhard, Jeffrey W.
2002-05-01
Digital tomosynthesis mammography is an advanced x-ray application that can provide detailed 3D information about the imaged breast. We introduce a novel reconstruction method based on simple backprojection, which yields high-contrast reconstructions with reduced artifacts at relatively low computational complexity. The first step in the proposed reconstruction method is a simple backprojection with an order-statistics-based operator (e.g., minimum) used for combining the backprojected images into a reconstructed slice. Accordingly, a given pixel value generally does not contribute to all slices. The percentage of slices where a given pixel value does not contribute, as well as the associated reconstructed values, are collected. Using a form of re-projection consistency constraint, one then updates the projection images and repeats the order-statistics backprojection step, now using the enhanced projection images calculated in the first step. In our digital mammography application, this new approach enhances the contrast of structures in the reconstruction and, in particular, allows recovery of the loss in signal level due to reduced tissue thickness near the skinline, while keeping artifacts to a minimum. We present results obtained with the algorithm for phantom images.
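Why an order-statistics operator such as `min` sharpens depth selectivity can be seen in a toy shift-and-add geometry (this is not the actual tomosynthesis acquisition geometry; the shifts, depths and object below are invented): a structure only survives the `min` in the slice where all shifted projections agree.

```python
import numpy as np

# Toy model: a single bright pixel at depth z0 appears shifted by s*z0
# in the projection acquired with shear s. Backprojecting a slice at
# depth z undoes the shift s*z and combines the projections.
n = 32
depths = range(5)
shears = [-2, -1, 0, 1, 2]
z0, x0 = 3, 12
projections = []
for s in shears:
    p = np.zeros(n)
    p[x0 + s * z0] = 1.0                  # pixel projects with offset s*z0
    projections.append(p)

def slice_reconstruction(z, combine):
    shifted = [np.roll(p, -s * z) for p, s in zip(projections, shears)]
    return combine(np.stack(shifted), axis=0)

# Response at the pixel's location for each candidate slice depth
focus_min = [float(slice_reconstruction(z, np.min)[x0]) for z in depths]
focus_mean = [float(slice_reconstruction(z, np.mean)[x0]) for z in depths]
```

The `min` response is nonzero only at the true depth z0 = 3, whereas plain averaging leaves out-of-focus residue (here 0.2) in every other slice, which is the blur the paper's re-projection consistency step then works to remove.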
Digital Signal Processing and Control for the Study of Gene Networks
Shin, Yong-Jun
2016-01-01
Thanks to the digital revolution, digital signal processing and control has been widely used in many areas of science and engineering today. It provides practical and powerful tools to model, simulate, analyze, design, measure, and control complex and dynamic systems such as robots and aircraft. Gene networks are also complex dynamic systems which can be studied via digital signal processing and control. Unlike conventional computational methods, this approach is capable of not only modeling but also controlling gene networks since the experimental environment is mostly digital today. The overall aim of this article is to introduce digital signal processing and control as a useful tool for the study of gene networks. PMID:27102828
Reconstruction of radiating sound fields using minimum energy method.
Bader, Rolf
2010-01-01
A method for reconstructing a pressure field at the surface of a radiating body or source is presented using recording data of a microphone array. The radiation is assumed to consist of as many spherical radiators as there are microphone positions in the array. These monopoles are weighted using a parameter alpha, which broadens or narrows the overall radiation directivity as an effective and highly intuitive parameter of the radiation characteristics. A radiation matrix is built out of these weighted monopole radiators, and for different assumed values of alpha, a linear equation solver reconstructs the pressure field at the body's surface. It appears that, among these many arbitrary reconstructions, the correct one minimizes the reconstruction energy. The method is tested by localizing the radiation points of a Balinese suling flute, reconstructing complex radiation from a duff frame drum, and determining the radiation directivity for the first seven modes of an Usbek tambourine. Stability in terms of measurement noise is demonstrated for the plain method, and an additional, highly effective algorithm is added for noise levels up to 0 dB. The stability of alpha in terms of minimal reconstruction energy is shown over the whole range of possible values for alpha. Additionally, the treatment of unwanted room reflections is discussed, still leading to satisfactory results in many cases. PMID:20058977
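A minimal numerical sketch of the minimum-energy idea: for each candidate alpha, build a monopole radiation matrix, solve for the surface pressures in a least-squares sense, and keep the solution of minimal energy. The directivity weighting `exp(-alpha*r)` used here is an illustrative assumption; the paper's actual weighting may differ:

```python
import numpy as np

def reconstruct_min_energy(mic_pressures, mic_pts, src_pts, k, alphas):
    """Minimum-energy surface reconstruction sketch.

    For each candidate directivity parameter alpha, assemble a weighted
    monopole transfer matrix G, solve G p_surf = mic_pressures in the
    least-squares sense, and keep the solution with minimal energy
    ||p_surf||^2.  k is the acoustic wavenumber.
    """
    best = None
    for alpha in alphas:
        # Pairwise mic-to-source distances, shape (n_mics, n_sources).
        r = np.linalg.norm(mic_pts[:, None, :] - src_pts[None, :, :], axis=2)
        # Weighted monopole Green's functions (weighting is an assumption).
        G = np.exp(-alpha * r) * np.exp(1j * k * r) / (4 * np.pi * r)
        p_surf, *_ = np.linalg.lstsq(G, mic_pressures, rcond=None)
        energy = np.sum(np.abs(p_surf) ** 2)
        if best is None or energy < best[0]:
            best = (energy, alpha, p_surf)
    return best  # (energy, chosen alpha, surface pressures)
```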
Yeast Ancestral Genome Reconstructions: The Possibilities of Computational Methods
NASA Astrophysics Data System (ADS)
Tannier, Eric
In 2006, a debate arose over the ability of bioinformatics methods to reconstruct mammalian ancestral genomes. Three years later, Gordon et al. (PLoS Genetics, 5(5), 2009) chose not to use automatic methods to build up the genome of a 100-million-year-old Saccharomyces cerevisiae ancestor. Their manually constructed ancestor provides a reference genome for testing whether automatic methods are indeed unable to approach confident reconstructions. Adapting several methodological frameworks to the same yeast gene order data, I discuss the possibilities, differences and similarities of the available algorithms for ancestral genome reconstruction. The methods can be classified into two types: local and global. Studying the properties of both helps to clarify what we can expect from their usage. Both types of method propose contiguous ancestral regions that come very close (> 95% identity) to the manually predicted ancestral yeast chromosomes, with good coverage of the extant genomes.
A Comparison of Methods for Ocean Reconstruction from Sparse Observations
NASA Astrophysics Data System (ADS)
Streletz, G. J.; Kronenberger, M.; Weber, C.; Gebbie, G.; Hagen, H.; Garth, C.; Hamann, B.; Kreylos, O.; Kellogg, L. H.; Spero, H. J.
2014-12-01
We present a comparison of two methods for developing reconstructions of oceanic scalar property fields from sparse scattered observations. Observed data from deep sea core samples provide valuable information regarding the properties of oceans in the past. However, because the locations of sample sites are distributed on the ocean floor in a sparse and irregular manner, developing a global ocean reconstruction is a difficult task. Our methods include a flow-based and a moving least squares-based approximation method. The flow-based method augments the process of interpolating or approximating scattered scalar data by incorporating known flow information. The scheme exploits this additional knowledge to define a non-Euclidean distance measure between points in the spatial domain. This distance measure is used to create a reconstruction of the desired scalar field on the spatial domain. The resulting reconstruction thus incorporates information from both the scattered samples and the known flow field. The second method does not assume a known flow field, but rather works solely with the observed scattered samples. It is based on a modification of the moving least squares approach, a weighted least squares approximation method that blends local approximations into a global result. The modifications target the selection of data used for these local approximations and the construction of the weighting function. The definition of distance used in the weighting function is crucial for this method, so we use a machine learning approach to determine a set of near-optimal parameters for the weighting. We have implemented both of the reconstruction methods and have tested them using several sparse oceanographic datasets. Based upon these studies, we discuss the advantages and disadvantages of each method and suggest possible ways to combine aspects of both methods in order to achieve an overall high-quality reconstruction.
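The moving least squares step of the second method can be sketched as a local weighted linear fit evaluated at each query point. The Gaussian weight and linear basis below are simplifying assumptions; the paper instead tunes the weighting with learned, possibly non-Euclidean distances:

```python
import numpy as np

def mls_estimate(query, pts, vals, h=1.0):
    """Moving least squares sketch: at each query point, fit a local
    linear model with Gaussian weights and evaluate it at the query.

    query : (m, d) query locations
    pts   : (n, d) scattered sample locations
    vals  : (n,)   scalar samples at pts
    h     : weighting bandwidth (assumed Gaussian kernel)
    """
    out = []
    for q in np.atleast_2d(query):
        d2 = np.sum((pts - q) ** 2, axis=1)
        w = np.exp(-d2 / (2 * h ** 2))
        # Weighted linear least squares in the basis [1, x - q]:
        # the constant coefficient is the model value at q itself.
        A = np.hstack([np.ones((len(pts), 1)), pts - q])
        sw = np.sqrt(w)
        coef, *_ = np.linalg.lstsq(sw[:, None] * A, sw * vals, rcond=None)
        out.append(coef[0])
    return np.array(out)
```

Because the basis contains the linear terms, the estimator reproduces linear fields exactly, a standard consistency property of MLS.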
Cheng's method for reconstruction of a functionally sensitive penis.
Cheng, K X; Zhang, R H; Zhou, S; Jiang, K C; Eid, A E; Huang, W Y
1997-01-01
This article introduces a new surgical method for one-stage reconstruction of the penis. It is applied to the reconstruction of the microphallus as well as to traumatic cases with the residual stump of the amputated penis not less than 3 cm long. By transferring the original glans or the residual penile stump to the anterior portion of the newly reconstructed penile body with microsurgical techniques, we have thus rebuilt a penis with more satisfactory results in both appearance and erotic sensation. Seven patients are reported here who were operated on by this method and who have been followed up for 18 months to 10 years. The good results achieved and the method's advantages over other methods are demonstrated and discussed. PMID:8982190
Digital holographic method for tomography-image reconstruction
NASA Astrophysics Data System (ADS)
Liu, Cheng; Yan, Changchun; Gao, Shumei
2004-02-01
A digital holographic method for three-dimensional reconstruction of tomography images is demonstrated theoretically and experimentally. In this proposed method, a numerical hologram is first computed by calculating the total diffraction field of all transect images of a detected organ. Then, the numerical hologram is transferred to the usual recording medium to generate a physical hologram. Last, all the transect images are reconstructed in their original position by illuminating the physical hologram with a laser, thereby forming a three-dimensional transparent image of the organ detected. Due to its true third dimension, the reconstructed image using this method is much more vivid and accurate than that of other methods. Potentially, it may have great prospects for application in medical engineering.
Path method for reconstructing images in fluorescence optical tomography
Kravtsenyuk, Olga V; Lyubimov, Vladimir V; Kalintseva, Natalie A
2006-11-30
A reconstruction method elaborated for the optical diffusion tomography of the internal structure of objects containing absorbing and scattering inhomogeneities is considered. The method is developed for studying objects with fluorescing inhomogeneities and can be used for imaging of distributions of artificial fluorophores whose aggregations indicate the presence of various diseases or pathological deviations. (special issue devoted to multiple radiation scattering in random media)
Matrix-based image reconstruction methods for tomography
Llacer, J.; Meng, J.D.
1984-10-01
Matrix methods of image reconstruction have not been used, in general, because of the large size of practical matrices, ill-conditioning upon inversion, and the success of Fourier-based techniques. An exception is the work that has been done at the Lawrence Berkeley Laboratory for imaging with accelerated radioactive ions. An extension of that work into more general imaging problems shows that, with a correct formulation of the problem, positron tomography with ring geometries results in well-behaved matrices which can be used for image reconstruction with no distortion of the point response in the field of view and flexibility in the design of the instrument. Maximum Likelihood Estimator (MLE) methods of reconstruction, which use system matrices tailored to specific instruments and do not need matrix inversion, are shown to result in good preliminary images. A parallel processing computer structure based on multiple inexpensive microprocessors is proposed as a system to implement the matrix-MLE methods. 14 references, 7 figures.
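Maximum-likelihood reconstruction without matrix inversion is typically realized by the multiplicative ML-EM update, which uses only forward and back projections with the system matrix. A minimal sketch of that standard iteration (not the Lawrence Berkeley implementation):

```python
import numpy as np

def mlem(A, counts, n_iter=50):
    """Classic ML-EM update for emission tomography.

    A      : (n_detectors, n_pixels) nonnegative system matrix
    counts : (n_detectors,) measured counts
    Each iteration forward-projects the estimate, compares with the
    measured counts, and back-projects the ratio; no inversion of A
    is ever required.
    """
    x = np.ones(A.shape[1])            # flat nonnegative start
    sens = A.sum(axis=0)               # detector sensitivity per pixel
    for _ in range(n_iter):
        proj = A @ x                                            # forward project
        ratio = np.where(proj > 0, counts / np.maximum(proj, 1e-12), 0.0)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)            # multiplicative update
    return x
```

The multiplicative form automatically preserves nonnegativity of the image, one reason ML-EM pairs well with the well-behaved system matrices described above.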
Sparse Reconstruction for Bioluminescence Tomography Based on the Semigreedy Method
Guo, Wei; Jia, Kebin; Zhang, Qian; Liu, Xueyan; Feng, Jinchao; Qin, Chenghu; Ma, Xibo; Yang, Xin; Tian, Jie
2012-01-01
Bioluminescence tomography (BLT) is a molecular imaging modality which can three-dimensionally resolve molecular processes in small animals in vivo. The ill-posed nature of the BLT problem means that its reconstruction admits nonunique solutions and is sensitive to noise. In this paper, we propose a sparse BLT reconstruction algorithm based on the semigreedy method. To reduce the ill-posedness and computational cost, the optimal permissible source region is automatically chosen by using an iterative search tree. The proposed method obtains fast and stable source reconstruction from the whole body and imposes constraints without using a regularization penalty term. Numerical simulations on a mouse atlas and in vivo mouse experiments were conducted to validate the effectiveness and potential of the method. PMID:22927887
A fast-convergence POCS seismic denoising and reconstruction method
NASA Astrophysics Data System (ADS)
Ge, Zi-Jian; Li, Jing-Ye; Pan, Shu-Lin; Chen, Xiao-Hong
2015-06-01
The efficiency, precision, and denoising capabilities of reconstruction algorithms are critical to seismic data processing. Based on the Fourier-domain projection onto convex sets (POCS) algorithm, we propose an inversely proportional threshold model that defines the optimum threshold, in which the descent rate is larger than in the exponential threshold in the large-coefficient section and slower than in the exponential threshold in the small-coefficient section. Thus, the computation efficiency of the POCS seismic reconstruction greatly improves without affecting the reconstructed precision of weak reflections. To improve the flexibility of the inversely proportional threshold, we obtain the optimal threshold by using an adjustable dependent variable in the denominator of the inversely proportional threshold model. For random noise attenuation by completing the missing traces in seismic data reconstruction, we present a weighted reinsertion strategy based on the data-driven model that can be obtained by using the percentage of the data-driven threshold in each iteration in the threshold section. We apply the proposed POCS reconstruction method to 3D synthetic and field data. The results suggest that the inversely proportional threshold model improves the computational efficiency and precision compared with the traditional threshold models; furthermore, the proposed reinserting weight strategy increases the SNR of the reconstructed data.
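The core POCS loop, alternating Fourier-domain thresholding with reinsertion of the observed traces, can be sketched as follows. The schedule tau_k = tau_max / (1 + eps(k-1)) is a stand-in for the paper's adjustable inversely proportional threshold model, and the hard reinsertion replaces the paper's weighted reinsertion strategy:

```python
import numpy as np

def pocs_reconstruct(data, mask, n_iter=100, eps=1.0):
    """POCS-style trace interpolation sketch.

    data : 2-D section with missing traces zero-filled
    mask : boolean, True where samples were actually observed
    Each iteration: transform, hard-threshold small Fourier
    coefficients, inverse-transform, then re-impose observed data.
    """
    x = data.copy()
    tau_max = np.abs(np.fft.fft2(data)).max()
    for k in range(1, n_iter + 1):
        X = np.fft.fft2(x)
        tau = tau_max / (1.0 + eps * (k - 1))   # inversely proportional decay
        X[np.abs(X) < tau] = 0                  # hard threshold
        x = np.real(np.fft.ifft2(X))
        x[mask] = data[mask]                    # reinsert observed samples
    return x
```

An inversely proportional schedule drops quickly at first (capturing strong reflections) and then slowly (preserving weak ones), which is the trade-off the abstract describes.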
MR Image Reconstruction Using Block Matching and Adaptive Kernel Methods
Schmidt, Johannes F. M.; Santelli, Claudio; Kozerke, Sebastian
2016-01-01
An approach to Magnetic Resonance (MR) image reconstruction from undersampled data is proposed. Undersampling artifacts are removed using an iterative thresholding algorithm applied to nonlinearly transformed image block arrays. Each block array is transformed using kernel principal component analysis where the contribution of each image block to the transform depends in a nonlinear fashion on the distance to other image blocks. Elimination of undersampling artifacts is achieved by conventional principal component analysis in the nonlinear transform domain, projection onto the main components and back-mapping into the image domain. Iterative image reconstruction is performed by interleaving the proposed undersampling artifact removal step and gradient updates enforcing consistency with acquired k-space data. The algorithm is evaluated using retrospectively undersampled MR cardiac cine data and compared to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT reconstruction. Evaluation of image quality and root-mean-squared-error (RMSE) reveal improved image reconstruction for up to 8-fold undersampled data with the proposed approach relative to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT. In conclusion, block matching and kernel methods can be used for effective removal of undersampling artifacts in MR image reconstruction and outperform methods using standard compressed sensing and ℓ1-regularized parallel imaging methods. PMID:27116675
MR Image Reconstruction Using Block Matching and Adaptive Kernel Methods.
Schmidt, Johannes F M; Santelli, Claudio; Kozerke, Sebastian
2016-01-01
An approach to Magnetic Resonance (MR) image reconstruction from undersampled data is proposed. Undersampling artifacts are removed using an iterative thresholding algorithm applied to nonlinearly transformed image block arrays. Each block array is transformed using kernel principal component analysis where the contribution of each image block to the transform depends in a nonlinear fashion on the distance to other image blocks. Elimination of undersampling artifacts is achieved by conventional principal component analysis in the nonlinear transform domain, projection onto the main components and back-mapping into the image domain. Iterative image reconstruction is performed by interleaving the proposed undersampling artifact removal step and gradient updates enforcing consistency with acquired k-space data. The algorithm is evaluated using retrospectively undersampled MR cardiac cine data and compared to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT reconstruction. Evaluation of image quality and root-mean-squared-error (RMSE) reveal improved image reconstruction for up to 8-fold undersampled data with the proposed approach relative to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT. In conclusion, block matching and kernel methods can be used for effective removal of undersampling artifacts in MR image reconstruction and outperform methods using standard compressed sensing and ℓ1-regularized parallel imaging methods. PMID:27116675
An Event Reconstruction Method for the Telescope Array Fluorescence Detectors
Fujii, T.; Ogio, S.; Yamazaki, K.; Fukushima, M.; Ikeda, D.; Sagawa, H.; Takahashi, Y.; Tameda, Y.; Hayashi, K.; Ishimori, R.; Kobayashi, Y.; Tokuno, H.; Tsunesada, Y.; Honda, K.; Tomida, T.; Udo, S.
2011-09-22
We measure arrival directions, energies and mass composition of ultra-high energy cosmic rays with air fluorescence detector telescopes. The longitudinal profile of the cosmic-ray-induced extensive air shower cascade is imaged on the focal plane of the telescope camera. Here, we show an event reconstruction method to obtain the primary information from data collected by the Telescope Array Fluorescence Detectors. In particular, we report on an "Inverse Monte Carlo (IMC)" method in which the reconstruction process searches for an optimum solution via repeated Monte Carlo simulations including the characteristics of all detectors, atmospheric conditions, photon emission and scattering processes.
Tomographic fluorescence reconstruction by a spectral projected gradient pursuit method
NASA Astrophysics Data System (ADS)
Ye, Jinzuo; An, Yu; Mao, Yamin; Jiang, Shixin; Yang, Xin; Chi, Chongwei; Tian, Jie
2015-03-01
In vivo fluorescence molecular imaging (FMI) has played an increasingly important role in preclinical biomedical research. Fluorescence molecular tomography (FMT) further upgrades the two-dimensional FMI optical information to a three-dimensional fluorescent source distribution, which can greatly facilitate applications in related studies. However, FMT presents a challenging inverse problem which is quite ill-posed and ill-conditioned. Continuous efforts to develop more practical and efficient methods for FMT reconstruction are still needed. In this paper, a method based on spectral projected gradient pursuit (SPGP) is proposed for FMT reconstruction. The proposed method is based on the directional pursuit framework. A mathematical strategy named the nonmonotone line search is associated with the SPGP method, which guarantees global convergence. In addition, the Barzilai-Borwein step length is utilized to build the new step length of the SPGP method, which speeds up the convergence of this gradient method. To evaluate the performance of the proposed method, several heterogeneous simulation experiments, including multisource cases, as well as comparative analyses have been conducted. The results demonstrated that the proposed method was able to achieve satisfactory source localization with a bias less than 1 mm; the computational efficiency of the method was one order of magnitude faster than the contrast method; and the fluorescence reconstructed by the proposed method had a higher contrast to the background than the contrast method. All the results demonstrated the potential for practical FMT applications with the proposed method.
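The Barzilai-Borwein building block named above can be sketched as a projected-gradient method for a nonnegativity-constrained least-squares problem. This minimal sketch omits the paper's directional pursuit framework and nonmonotone line search, and the nonnegativity constraint stands in for the FMT-specific constraints:

```python
import numpy as np

def spg_nonneg(A, b, n_iter=300):
    """Spectral projected gradient sketch for min ||Ax - b||^2, x >= 0.

    Uses the BB1 step length step = (s's)/(s'y), where s and y are the
    successive differences of iterates and gradients; the projection
    onto the feasible set is a simple clip at zero.
    """
    x = np.zeros(A.shape[1])
    g = A.T @ (A @ x - b)
    step = 1.0 / max(np.linalg.norm(A.T @ A, 2), 1e-12)  # safe first step
    for _ in range(n_iter):
        x_new = np.maximum(x - step * g, 0.0)   # projected gradient step
        g_new = A.T @ (A @ x_new - b)
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 1e-12:
            step = (s @ s) / sy                 # Barzilai-Borwein step length
        x, g = x_new, g_new
    return x
```

The BB step adapts to the local curvature of the quadratic without any line search, which is what makes the gradient method fast in practice.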
Bubble reconstruction method for wire-mesh sensors measurements
NASA Astrophysics Data System (ADS)
Mukin, Roman V.
2016-08-01
A new algorithm is presented for post-processing of void fraction measurements with wire-mesh sensors, particularly for identifying and reconstructing bubble surfaces in a two-phase flow. This method is a combination of the bubble recognition algorithm presented in Prasser (Nuclear Eng Des 237(15):1608, 2007) and the Poisson surface reconstruction algorithm developed in Kazhdan et al. (Poisson surface reconstruction. In: Proceedings of the fourth eurographics symposium on geometry processing 7, 2006). To verify the proposed technique, the reconstructed individual bubble shapes were compared with those obtained numerically in Sato and Ničeno (Int J Numer Methods Fluids 70(4):441, 2012). Using the difference between reconstructed and reference bubble shapes, the accuracy of the proposed algorithm was estimated. In the next step, the algorithm was applied to void fraction measurements performed in Ylönen (High-resolution flow structure measurements in a rod bundle. Diss., Eidgenössische Technische Hochschule ETH Zürich, Nr. 20961, 2013) by means of wire-mesh sensors in a rod bundle geometry. The reconstructed bubble shape yields bubble surface area and volume, and hence the Sauter diameter d_{32} as well. The Sauter diameter proved more suitable for bubble size characterization than the volumetric diameter d_{30}, and proved capable of capturing the bi-disperse bubble size distribution in the flow. The effect of a spacer grid was studied as well: for the given spacer grid and considered flow rates, the bubble size frequency distribution peaks at almost the same position for all cases, approximately at d_{32} = 3.5 mm. This finding can be related to the specific geometry of the spacer grid or the air injection device applied in the experiments, or even to more fundamental properties of the bubble breakup and coagulation processes. In addition, an application of the new algorithm for reconstruction of a large air-water interface in a tube bundle is
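The two diameter definitions used above follow directly from the reconstructed surface area and volume; for a sphere both reduce to the geometric diameter:

```python
import numpy as np

def sauter_diameter(volume, area):
    """Sauter diameter d32 = 6 V / A of a reconstructed bubble: the
    diameter of the sphere with the same volume-to-surface ratio."""
    return 6.0 * volume / area

def volumetric_diameter(volume):
    """Volume-equivalent diameter d30 = (6 V / pi)**(1/3)."""
    return (6.0 * volume / np.pi) ** (1.0 / 3.0)
```

For a non-spherical bubble d32 and d30 differ, which is why the Sauter diameter carries the surface-area information relevant to interfacial transfer.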
Reconstruction method for curvilinear structures from two views
NASA Astrophysics Data System (ADS)
Hoffmann, Matthias; Brost, Alexander; Jakob, Carolin; Koch, Martin; Bourier, Felix; Kurzidim, Klaus; Hornegger, Joachim; Strobel, Norbert
2013-03-01
Minimally invasive interventions often involve tools of curvilinear shape, like catheters and guide-wires. If the camera parameters of a fluoroscopic system or a stereoscopic endoscope are known, a 3-D reconstruction of corresponding points can be computed by triangulation. Manual identification of point correspondences is time consuming, but there exist methods that automatically select corresponding points along curvilinear structures. The focus here is on the evaluation of a recently published method for catheter reconstruction from two views. A previous evaluation of this method using clinical data yielded promising results. For that evaluation, however, no 3-D ground truth data was available, so the error could only be estimated using the forward-projection of the reconstruction. In this paper, we present a more extensive evaluation of this method based on both clinical and phantom data. For the evaluation using clinical images, 36 data sets and two different catheters were available. The mean error found when reconstructing both catheters was 0.1 mm +/- 0.1 mm. To evaluate the error in 3-D, images of a phantom were acquired from 13 different angulations. For the phantom, a 3-D C-arm CT voxel data set was also available. A reconstruction error was calculated by comparing the triangulated 3-D reconstruction result to the 3-D voxel data set. The evaluation yielded an average error of 1.2 mm +/- 1.2 mm for the circumferential mapping catheter and 1.3 mm +/- 1.0 mm for the ablation catheter.
Method for 3D fibre reconstruction on a microrobotic platform.
Hirvonen, J; Myllys, M; Kallio, P
2016-07-01
Automated handling of a natural fibrous object requires a method for acquiring the three-dimensional geometry of the object, because its dimensions cannot be known beforehand. This paper presents a method for calculating the three-dimensional reconstruction of a paper fibre on a microrobotic platform that contains two microscope cameras. The method is based on detecting curvature changes in the fibre centreline, and using them as the corresponding points between the different views of the images. We test the developed method with four fibre samples and compare the results with the references measured with an X-ray microtomography device. We rotate the samples through 16 different orientations on the platform and calculate the three-dimensional reconstruction to test the repeatability of the algorithm and its sensitivity to the orientation of the sample. We also test the noise sensitivity of the algorithm, and record the mismatch rate of the correspondences provided. We use the iterative closest point algorithm to align the measured three-dimensional reconstructions with the references. The average point-to-point distances between the reconstructed fibre centrelines and the references are 20-30 μm, and the mismatch rate is low. Given the manipulation tolerance, this shows that the method is well suited to automated fibre grasping. This has also been demonstrated with actual grasping experiments. PMID:26695385
An improved reconstruction method for cosmological density fields
NASA Technical Reports Server (NTRS)
Gramann, Mirt
1993-01-01
This paper proposes some improvements to existing reconstruction methods for recovering the initial linear density and velocity fields of the universe from the present large-scale density distribution. We derive the Eulerian continuity equation in the Zel'dovich approximation and show that, by applying this equation, we can trace the evolution of the gravitational potential of the universe more exactly than is possible with previous approaches based on the Zel'dovich-Bernoulli equation. The improved reconstruction method is tested using N-body simulations. When the Zel'dovich-Bernoulli equation describes the formation of filaments, then the Zel'dovich continuity equation also follows the clustering of clumps inside the filaments. Our reconstruction method recovers the true initial gravitational potential with an rms error about 3 times smaller than previous methods. We examine the recovery of the initial distribution of Fourier components and find the scale at which the recovered phases are scrambled with respect to their true initial values. Integrating the Zel'dovich continuity equation back in time, we can improve the spatial resolution of the reconstruction by a factor of about 2.
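For reference, the standard ingredients behind such schemes are the Zel'dovich displacement mapping and the comoving continuity equation; the scheme above evolves the latter with the Zel'dovich velocity. These are the textbook relations, not the paper's exact derivation:

```latex
% Zel'dovich approximation: particles move ballistically along the
% gradient of the initial displacement potential \psi(\mathbf{q}):
\mathbf{x}(\mathbf{q},t) = \mathbf{q} + D(t)\,\nabla_{q}\psi(\mathbf{q}),
\qquad
\mathbf{v} = \dot{D}(t)\,\nabla_{q}\psi(\mathbf{q}),
% where D(t) is the linear growth factor.  The comoving continuity
% equation links the density contrast \delta to this velocity field:
\frac{\partial \delta}{\partial t}
  + \frac{1}{a}\,\nabla\cdot\bigl[(1+\delta)\,\mathbf{v}\bigr] = 0 .
```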
Reconstructing Program Theories: Methods Available and Problems To Be Solved.
ERIC Educational Resources Information Center
Leeuw, Frans L.
2003-01-01
Discusses methods for reconstructing theories underlying programs and policies, focusing on three approaches: (1) an empirical approach that focuses on interviews, documents, and argumentational analysis; (2) an approach based on strategic assessment, group dynamics, and dialogue; and (3) an approach based on cognitive and organizational…
Robust Methods for Sensing and Reconstructing Sparse Signals
ERIC Educational Resources Information Center
Carrillo, Rafael E.
2012-01-01
Compressed sensing (CS) is an emerging signal acquisition framework that goes against the traditional Nyquist sampling paradigm. CS demonstrates that a sparse, or compressible, signal can be acquired using a low rate acquisition process. Since noise is always present in practical data acquisition systems, sensing and reconstruction methods are…
Testing the global flow reconstruction method on coupled chaotic oscillators
NASA Astrophysics Data System (ADS)
Plachy, Emese; Kolláth, Zoltán
2010-03-01
Irregular behaviour of pulsating variable stars may occur due to low-dimensional chaos. To determine the quantitative properties of the dynamics in such systems, we apply a suitable time series analysis, the global flow reconstruction method. The robustness of the reconstruction can be tested through the resultant quantities, like the Lyapunov dimension and the Fourier frequencies. The latter are especially important as they are directly derivable from the observed light curves. We have performed tests using coupled Rossler oscillators to investigate the possible connection between those quantities. In this paper we present our test results.
3D reconstruction methods of coronal structures by radio observations
NASA Astrophysics Data System (ADS)
Aschwanden, Markus J.; Bastian, T. S.; White, Stephen M.
1992-11-01
The ability to carry out the three dimensional (3D) reconstruction of structures in the solar corona would represent a major advance in the study of the physical properties in active regions and in flares. Methods which allow a geometric reconstruction of quasistationary coronal structures (for example active region loops) or dynamic structures (for example flaring loops) are described: stereoscopy of multi-day imaging observations by the VLA (Very Large Array); tomography of optically thin emission (in radio or soft x-rays); multifrequency band imaging by the VLA; and tracing of magnetic field lines by propagating electron beams.
Method for image reconstruction of moving radionuclide source distribution
Stolin, Alexander V.; McKisson, John E.; Lee, Seung Joon; Smith, Mark Frederick
2012-12-18
A method for image reconstruction of moving radionuclide distributions. Its particular embodiment is for single photon emission computed tomography (SPECT) imaging of awake animals, though its techniques are general enough to be applied to other moving radionuclide distributions as well. The invention eliminates motion and blurring artifacts for image reconstructions of moving source distributions. This opens new avenues in the area of small animal brain imaging with radiotracers, which can now be performed without the perturbing influences of anesthesia or physical restraint on the biological system.
Method for reconstructing the history of pollution emissions
Not Available
1986-05-01
This paper examines methods for reconstructing the history of pollution emissions. Since very few direct measurements were made in the past, documentary evidence of releases is drawn mainly from the records of economic activity. The available data are integrated into a routing network for the flow of each pollutant, from raw materials through processing, shipment of products, consumption, and finally to waste disposal. This process, called the mass balance approach, is much like reconstructing a fossil skeleton. This process was used in the pilot historical study of the pollution of the Hudson region.
Efficient method for content reconstruction with self-embedding.
Korus, Paweł; Dziech, Andrzej
2013-03-01
This paper presents a new model of the content reconstruction problem in self-embedding systems, based on an erasure communication channel. We explain why such a model is a good fit for this problem, and how it can be practically implemented with the use of digital fountain codes. The proposed method is based on an alternative approach to spreading the reference information over the whole image, which has recently been shown to be of critical importance in the application at hand. Our paper presents a theoretical analysis of the inherent restoration trade-offs. We analytically derive formulas for the reconstruction success bounds, and validate them experimentally with Monte Carlo simulations and a reference image authentication system. We perform an exhaustive reconstruction quality assessment, where the presented reference scheme is compared to five state-of-the-art alternatives in a common evaluation scenario. Our paper leads to important insights on how self-embedding schemes should be constructed to achieve optimal performance. The reference authentication system designed according to the presented principles allows for high-quality reconstruction, regardless of the amount of the tampered content. The average reconstruction quality, measured on 10000 natural images is 37 dB, and is achievable even when 50% of the image area becomes tampered. PMID:23193455
Chen, Bor-Sen; Chang, Yu-Te; Wang, Yu-Chao
2008-02-01
Molecular noises in gene networks come from intrinsic fluctuations, transmitted noise from upstream genes, and the global noise affecting all genes. Knowledge of molecular noise filtering in gene networks is crucial to understand the signal processing in gene networks and to design noise-tolerant gene circuits for synthetic biology. A nonlinear stochastic dynamic model is proposed to describe a gene network under intrinsic molecular fluctuations and extrinsic molecular noises. The stochastic molecular-noise-processing scheme of gene regulatory networks for attenuating these molecular noises is investigated from the nonlinear robust stabilization and filtering perspective. In order to improve the robust stability and noise filtering, a robust gene circuit design for gene networks is proposed based on the nonlinear robust H-infinity stochastic stabilization and filtering scheme, which requires solving a nonlinear Hamilton-Jacobi inequality. In order to avoid solving this complicated nonlinear stabilization and filtering problem, a fuzzy approximation method is employed to interpolate several linear stochastic gene networks at different operating points via fuzzy bases to approximate the nonlinear stochastic gene network. In this situation, linear matrix inequality (LMI) techniques can be employed to simplify the gene circuit design problem and to improve the robust stability and molecular-noise-filtering ability of gene networks against intrinsic molecular fluctuations and extrinsic molecular noises. PMID:18270080
NASA Astrophysics Data System (ADS)
Chan, Harley; Gilbert, Ralph W.; Pagedar, Nitin A.; Daly, Michael J.; Irish, Jonathan C.; Siewerdsen, Jeffrey H.
2010-02-01
Esthetic appearance is one of the most important factors in reconstructive surgery. The current practice of maxillary reconstruction uses radial forearm, fibula or iliac crest osteocutaneous flaps to recreate the three-dimensional complex structures of the palate and maxilla. However, these bone flaps lack shape similarity to the palate and yield a less satisfactory esthetic result. Considering similarity factors and vasculature advantages, reconstructive surgeons recently explored the use of scapular tip myo-osseous free flaps to restore the excised site. We have developed a new method that quantitatively evaluates the morphological similarity of the scapular tip bone and palate based on a diagnostic volumetric computed tomography (CT) image. This quantitative result was further interpreted as a color map rendered on the surface of a three-dimensional computer model. For surgical planning, this color interpretation could potentially assist the surgeon in orienting the bone flap for the best fit to the reconstruction site. With approval from the Research Ethics Board (REB) of the University Health Network, we conducted a retrospective analysis of CT images obtained from 10 patients. Each patient had a CT scan including the maxilla and chest on the same day. Based on this image set, we simulated total, subtotal and hemi-palate reconstruction. The simulation procedure included volume segmentation, conversion of the segmented volume to a stereolithography (STL) model, manual registration, and computation of minimum geometric distances and curvature between STL models. Across the 10 patients' data, we found the overall root-mean-square (RMS) conformance was 3.71 +/- 0.16 mm
A structured light method for underwater surface reconstruction
NASA Astrophysics Data System (ADS)
Sarafraz, Amin; Haus, Brian K.
2016-04-01
A new structured-light method for 3D imaging has been developed that can simultaneously estimate the geometric shape of both the water surface and underwater objects. The method requires only a single image and thus can be applied to dynamic as well as static scenes. Experimental results show the utility of this method in non-invasive underwater 3D reconstruction applications. The performance of the new method is studied through a sensitivity analysis over its different parameters.
Iterative reconstruction methods for high-throughput PET tomographs.
Hamill, James; Bruckbauer, Thomas
2002-08-01
A fast iterative method is described for processing clinical PET scans acquired in three dimensions, that is, with no inter-plane septa, using standard computers to replace dedicated processors used until the late 1990s. The method is based on sinogram resampling, Fourier rebinning, Monte Carlo scatter simulation and iterative reconstruction using the attenuation-weighted OSEM method and a projector based on a Gaussian pixel model. Resampling of measured sinogram values occurs before Fourier rebinning, to minimize parallax and geometric distortions due to the circular geometry, and also to reduce the size of the sinogram. We analyse the geometrical and statistical effects of resampling, showing that the lines of response are positioned correctly and that resampling is equivalent to about 4 mm of post-reconstruction filtering. We also present phantom and patient results. In this approach, multi-bed clinical oncology scans can be ready for diagnosis within minutes. PMID:12200928
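The iterative core of such a pipeline can be sketched as a plain MLEM loop (OSEM with a single subset reduces to MLEM). The toy system matrix below is an illustrative stand-in and omits attenuation weighting, Fourier rebinning, scatter simulation and the Gaussian pixel model described above.

```python
import numpy as np

# Toy system matrix: 3 lines of response, 2 image pixels (not a real scanner geometry).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
y = A @ x_true                        # noise-free "sinogram"

def mlem(A, y, n_iter=100):
    """Maximum-likelihood EM reconstruction (OSEM with one subset)."""
    x = np.ones(A.shape[1])           # strictly positive start image
    sens = A.T @ np.ones(A.shape[0])  # sensitivity image
    for _ in range(n_iter):
        ratio = y / (A @ x)           # measured / forward-projected
        x *= (A.T @ ratio) / sens     # multiplicative update keeps x >= 0
    return x

x_rec = mlem(A, y)
```

The multiplicative update preserves positivity automatically, which is one reason EM-type methods are preferred over unconstrained least squares in emission tomography.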
Reverse engineering and analysis of large genome-scale gene networks
Aluru, Maneesha; Zola, Jaroslaw; Nettleton, Dan; Aluru, Srinivas
2013-01-01
Reverse engineering the whole-genome networks of complex multicellular organisms continues to remain a challenge. While simpler models easily scale to large number of genes and gene expression datasets, more accurate models are compute intensive limiting their scale of applicability. To enable fast and accurate reconstruction of large networks, we developed Tool for Inferring Network of Genes (TINGe), a parallel mutual information (MI)-based program. The novel features of our approach include: (i) B-spline-based formulation for linear-time computation of MI, (ii) a novel algorithm for direct permutation testing and (iii) development of parallel algorithms to reduce run-time and facilitate construction of large networks. We assess the quality of our method by comparison with ARACNe (Algorithm for the Reconstruction of Accurate Cellular Networks) and GeneNet and demonstrate its unique capability by reverse engineering the whole-genome network of Arabidopsis thaliana from 3137 Affymetrix ATH1 GeneChips in just 9 min on a 1024-core cluster. We further report on the development of a new software Gene Network Analyzer (GeNA) for extracting context-specific subnetworks from a given set of seed genes. Using TINGe and GeNA, we performed analysis of 241 Arabidopsis AraCyc 8.0 pathways, and the results are made available through the web. PMID:23042249
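As a sketch of the MI-based scoring that drives such methods, here is a plug-in histogram MI estimator (a deliberate simplification; TINGe itself uses a B-spline formulation for linear-time MI) applied to synthetic expression profiles:

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Plug-in histogram estimate of mutual information in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                       # joint distribution
    px = pxy.sum(axis=1, keepdims=True)    # marginal of x
    py = pxy.sum(axis=0, keepdims=True)    # marginal of y
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
g1 = rng.normal(size=2000)                 # "expression profile" of gene 1
g2 = g1 + 0.1 * rng.normal(size=2000)      # tightly co-expressed partner
g3 = rng.normal(size=2000)                 # unrelated gene
```

Here `mutual_information(g1, g2)` far exceeds `mutual_information(g1, g3)`; a method like TINGe keeps an edge only when the MI survives a permutation-testing threshold.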
The gridding method for image reconstruction by Fourier transformation
Schomberg, H.; Timmer, J.
1995-09-01
This paper explores a computational method for reconstructing an n-dimensional signal f from a sampled version of its Fourier transform f̂. The method involves a window function ŵ and proceeds in three steps. First, the convolution ĝ = ŵ * f̂ is computed numerically on a Cartesian grid, using the available samples of f̂. Then, g = wf is computed via the inverse discrete Fourier transform, and finally f is obtained as g/w. Due to the smoothing effect of the convolution, evaluating ŵ * f̂ is much less error prone than merely interpolating f̂. The method was originally devised for image reconstruction in radio astronomy, but is actually applicable to a broad range of reconstructive imaging methods, including magnetic resonance imaging and computed tomography. In particular, it provides a fast and accurate alternative to filtered backprojection. The basic method has several variants with other applications, such as the equidistant resampling of arbitrarily sampled signals or the fast computation of the Radon (Hough) transform.
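The three steps can be checked on a 1D toy case. For simplicity the Fourier samples below already lie on the Cartesian grid (real gridding handles non-Cartesian samples), and the Gaussian-plus-offset window is an illustrative choice, kept nonzero so the final division is safe:

```python
import numpy as np

N = 64
n = np.arange(N)
f = np.sin(2 * np.pi * 3 * n / N) + 0.5 * np.cos(2 * np.pi * 7 * n / N)  # test signal
w = np.exp(-0.5 * ((n - N / 2) / (N / 4)) ** 2) + 0.1  # window, nonzero everywhere

F = np.fft.fft(f)   # the available samples of f-hat
W = np.fft.fft(w)   # w-hat

# Step 1: g-hat = w-hat (*) f-hat as a circular convolution on the Cartesian grid
# (the 1/N factor matches numpy's unnormalized DFT convention).
G = np.array([sum(W[m] * F[(k - m) % N] for m in range(N)) for k in range(N)]) / N

# Step 2: g = w * f via the inverse discrete Fourier transform.
g = np.fft.ifft(G)

# Step 3: divide out the window ("deapodization") to recover f.
f_rec = (g / w).real
```

With on-grid samples the identity is exact to machine precision; the practical power of gridding lies in evaluating step 1 from scattered samples, where the smoothing makes the interpolation benign.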
Iterative reconstruction methods in X-ray CT.
Beister, Marcel; Kolditz, Daniel; Kalender, Willi A
2012-04-01
Iterative reconstruction (IR) methods have recently re-emerged in transmission x-ray computed tomography (CT). They were successfully used in the early years of CT, but given up when the amount of measured data increased because of the higher computational demands of IR compared to analytical methods. The availability of large computational capacities in normal workstations and the ongoing efforts towards lower doses in CT have changed the situation; IR has become a hot topic for all major vendors of clinical CT systems in the past 5 years. This review strives to provide information on IR methods and aims at interested physicists and physicians already active in the field of CT. We give an overview on the terminology used and an introduction to the most important algorithmic concepts including references for further reading. As a practical example, details on a model-based iterative reconstruction algorithm implemented on a modern graphics adapter (GPU) are presented, followed by application examples for several dedicated CT scanners in order to demonstrate the performance and potential of iterative reconstruction methods. Finally, some general thoughts regarding the advantages and disadvantages of IR methods as well as open points for research in this field are discussed. PMID:22316498
Detection of driver pathways using mutated gene network in cancer.
Li, Feng; Gao, Lin; Ma, Xiaoke; Yang, Xiaofei
2016-06-21
Distinguishing driver pathways has been extensively studied because they are critical for understanding the development and molecular mechanisms of cancers. Most existing methods for driver pathways are based on high coverage as well as high mutual exclusivity, with the underlying assumption that mutations are exclusive. However, in many cases, mutated driver genes in the same pathways are not strictly mutually exclusive. Based on this observation, we propose an index for quantifying mutual exclusivity between gene pairs. Then, we construct a mutated gene network for detecting driver pathways by integrating the proposed index and coverage. The detection of driver pathways on the mutated gene network consists of two steps: raw pathways are obtained using a CPM method, and the final driver pathways are selected using a strict testing strategy. We apply this method to glioblastoma and breast cancers and find that our method is more accurate than state-of-the-art methods in terms of enrichment of KEGG pathways. Furthermore, the detected driver pathways intersect with well-known pathways with moderate exclusivity, which cannot be discovered using the existing algorithms. In conclusion, the proposed method provides an effective way to investigate driver pathways in cancers. PMID:27118146
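The paper's exact exclusivity index is not reproduced here, but an illustrative pairwise score on a binary mutation matrix conveys the idea: coverage counts the samples mutated in either gene, and exclusivity measures how many covered samples are mutated in exactly one of them.

```python
import numpy as np

def pair_scores(a, b):
    """Coverage and exclusivity for two genes' binary mutation profiles."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    covered = a | b
    coverage = covered.mean()                    # fraction of samples hit by either gene
    exclusivity = (a ^ b).sum() / covered.sum()  # covered samples hit by exactly one gene
    return coverage, exclusivity

# Toy profiles over 8 tumour samples (hypothetical data)
g1 = [1, 1, 1, 0, 0, 0, 0, 0]
g2 = [0, 0, 0, 1, 1, 1, 0, 0]   # mutually exclusive with g1
g3 = [1, 1, 1, 0, 0, 0, 1, 0]   # heavily overlaps g1
```

Here `pair_scores(g1, g2)` yields coverage 0.75 with exclusivity 1.0, while the overlapping pair scores far lower exclusivity; such pairwise scores can then weight the edges of a mutated gene network before module extraction.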
Improving automated 3D reconstruction methods via vision metrology
NASA Astrophysics Data System (ADS)
Toschi, Isabella; Nocerino, Erica; Hess, Mona; Menna, Fabio; Sargeant, Ben; MacDonald, Lindsay; Remondino, Fabio; Robson, Stuart
2015-05-01
This paper aims to provide a procedure for improving automated 3D reconstruction methods via vision metrology. The 3D reconstruction problem is generally addressed using two different approaches. On the one hand, vision metrology (VM) systems try to accurately derive 3D coordinates of few sparse object points for industrial measurement and inspection applications; on the other, recent dense image matching (DIM) algorithms are designed to produce dense point clouds for surface representations and analyses. This paper strives to demonstrate a step towards narrowing the gap between traditional VM and DIM approaches. Efforts are therefore intended to (i) test the metric performance of the automated photogrammetric 3D reconstruction procedure, (ii) enhance the accuracy of the final results and (iii) obtain statistical indicators of the quality achieved in the orientation step. VM tools are exploited to integrate their main functionalities (centroid measurement, photogrammetric network adjustment, precision assessment, etc.) into the pipeline of 3D dense reconstruction. Finally, geometric analyses and accuracy evaluations are performed on the raw output of the matching (i.e. the point clouds) by adopting a metrological approach. The latter is based on the use of known geometric shapes and quality parameters derived from VDI/VDE guidelines. Tests are carried out by imaging the calibrated Portable Metric Test Object, designed and built at University College London (UCL), UK. It allows assessment of the performance of the image orientation and matching procedures within a typical industrial scenario, characterised by poor texture and known 3D/2D shapes.
Methods of graph network reconstruction in personalized medicine.
Danilov, A; Ivanov, Yu; Pryamonosov, R; Vassilevski, Yu
2016-08-01
The paper addresses methods for the generation of individualized computational domains on the basis of medical imaging datasets. The computational domains will be used in one-dimensional (1D) and coupled three-dimensional (3D)-1D hemodynamic models. A 1D hemodynamic model employs a patient-specific 1D vascular network with a large number of vessels. The 1D network is a graph with nodes in 3D space which bears additional geometric data such as the length and radius of vessels. A 3D hemodynamic model requires a detailed 3D reconstruction of local parts of the vascular network. We propose algorithms which extend the automated segmentation of vascular and tubular structures, generation of centerlines, 1D network reconstruction, correction, and local adaptation. We consider two modes of centerline representation: (i) skeletal segments or sets of connected voxels and (ii) curved paths with corresponding radii. Individualized reconstruction of 1D networks depends on the mode of centerline representation. Efficiency of the proposed algorithms is demonstrated on several examples of 1D network reconstruction. The networks can be used in modeling of blood flows as well as other physiological processes in tubular structures. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26462139
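A minimal sketch of the mode-(ii) representation: a graph whose nodes carry 3D coordinates and whose edges carry radii, with vessel length derived from node positions. The class and field names are illustrative inventions, not identifiers from the paper's software.

```python
import math

class VesselNetwork:
    """Toy 1D vascular network: nodes in 3D space, edges with radii."""
    def __init__(self):
        self.nodes = {}    # node id -> (x, y, z) in mm
        self.edges = []    # (id_a, id_b, radius_mm)

    def add_node(self, nid, xyz):
        self.nodes[nid] = xyz

    def add_vessel(self, a, b, radius):
        self.edges.append((a, b, radius))

    def vessel_length(self, a, b):
        # Straight-segment length; curved centerlines would sum piecewise.
        return math.dist(self.nodes[a], self.nodes[b])

net = VesselNetwork()
net.add_node("aorta_root", (0.0, 0.0, 0.0))
net.add_node("branch_1", (30.0, 0.0, 40.0))
net.add_vessel("aorta_root", "branch_1", radius=12.5)
```

With nodes at (0, 0, 0) and (30, 0, 40) the vessel length is 50 mm; a 1D hemodynamic solver would attach flow and pressure unknowns to such a graph.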
Linear method of fluorescent source reconstruction in a diffusion medium.
Janunts, Edgar; Pöschinger, Thomas; Brünner, Holger; Langenbucher, Achim
2008-01-01
A new method is described for obtaining a 2D reconstruction of a fluorescent source distribution inside a diffusion medium from planar measurements of the emission light at the surface after excitation by a plane wave. Point sources are implanted at known locations of a rectangular phantom. The forward model of photon transport is based on the diffusion approximation of the radiative transport equation (RTE) for homogeneous media. This can be described by a hierarchical system of two time-independent RTEs, one for the excitation plane wave propagating from the external light source into the medium and another for the fluorescence emission propagating from the fluorophore marker to the detector. A linear inverse source problem was solved for image reconstruction. The applicability of the theoretical method is demonstrated in some representative working examples. The optimization was performed using a least-squares minimization technique. PMID:18826162
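The linear inverse step reduces to solving y = G s in the least-squares sense. A sketch with a stand-in Green's matrix (random and well conditioned, not a real diffusion-model kernel) and Tikhonov regularization:

```python
import numpy as np

rng = np.random.default_rng(42)
n_detectors, n_sources = 40, 10

# Stand-in for the diffusion-model Green's matrix (illustrative, not a real forward model).
G = rng.normal(size=(n_detectors, n_sources))
s_true = np.zeros(n_sources)
s_true[[2, 7]] = [1.0, 0.5]           # two point fluorophores at known grid positions
y = G @ s_true                        # noise-free surface measurements

# Tikhonov-regularized least squares: minimize ||G s - y||^2 + lam * ||s||^2
lam = 1e-6
s_rec = np.linalg.solve(G.T @ G + lam * np.eye(n_sources), G.T @ y)
```

With noisy data, `lam` would be raised to trade data fidelity against stability, since the real diffusion kernel is far more ill-conditioned than this toy matrix.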
PET iterative reconstruction incorporating an efficient positron range correction method.
Bertolli, Ottavia; Eleftheriou, Afroditi; Cecchetti, Matteo; Camarlinghi, Niccolò; Belcari, Nicola; Tsoumpas, Charalampos
2016-02-01
Positron range is one of the main physical effects limiting the spatial resolution of positron emission tomography (PET) images. If positrons travel inside a magnetic field, for instance inside a nuclear magnetic resonance (MR) tomograph, the mean range will be smaller but still significant. In this investigation we examined a method to correct for the positron range effect in iterative image reconstruction by including tissue-specific kernels in the forward projection operation. The correction method was implemented within the STIR library (Software for Tomographic Image Reconstruction). In order to obtain the positron annihilation distribution of various radioactive isotopes in water and lung tissue, simulations were performed with the Monte Carlo package GATE (Jan et al. 2004).
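Including a tissue-specific kernel in the forward projection amounts to composing the projector with a blurring step. A 1D sketch with normalized Gaussian kernels (the widths are illustrative, not measured annihilation distributions):

```python
import numpy as np

def range_blur(x, sigma):
    """Circularly convolve image x with a normalized Gaussian positron-range kernel."""
    n = x.size
    d = np.minimum(np.arange(n), n - np.arange(n))  # circular distances from 0
    k = np.exp(-0.5 * (d / sigma) ** 2)
    k /= k.sum()                                    # normalized: total counts preserved
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)))

x = np.zeros(64)
x[32] = 100.0                              # point source
blurred_water = range_blur(x, sigma=1.0)   # short positron range in water
blurred_lung = range_blur(x, sigma=4.0)    # longer range in low-density lung
```

In an iterative scheme the forward model becomes A @ range_blur(x, sigma_tissue), so the update "deconvolves" the range effect while each kernel still conserves total counts.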
Optical Sensors and Methods for Underwater 3D Reconstruction.
Massot-Campos, Miquel; Oliver-Codina, Gabriel
2015-01-01
This paper presents a survey on optical sensors and methods for 3D reconstruction in underwater environments. The techniques to obtain range data have been listed and explained, together with the different sensor hardware that makes them possible. The literature has been reviewed, and a classification has been proposed for the existing solutions. New developments, commercial solutions and previous reviews in this topic have also been gathered and considered. PMID:26694389
Efficient finite element method for grating profile reconstruction
NASA Astrophysics Data System (ADS)
Zhang, Ruming; Sun, Jiguang
2015-12-01
This paper concerns the reconstruction of grating profiles from scattering data. The inverse problem is formulated as an optimization problem with a regularization term. We devise an efficient finite element method (FEM) and employ a quasi-Newton method to solve it. For the direct problems, the FEM stiffness and mass matrices are assembled once at the beginning of the numerical procedure. Then only minor changes are made to the mass matrix at each iteration, which significantly reduces the computational cost. Numerical examples show that the method is effective and robust.
Multiobjective H2/H∞ synthetic gene network design based on promoter libraries.
Wu, Chih-Hung; Zhang, Weihei; Chen, Bor-Sen
2011-10-01
Several promoter libraries have been developed for synthetic gene networks, but an efficient method to engineer a synthetic gene network with desired behaviors by selecting adequate promoters from these libraries has not been presented. Developing a systematic method to efficiently employ promoter libraries in engineering synthetic gene networks with desired behaviors is therefore appealing for synthetic biologists. In this study, a synthetic gene network with intrinsic parameter fluctuations and environmental disturbances in vivo is modeled by a nonlinear stochastic system. In order to engineer a synthetic gene network with a desired behavior despite intrinsic parameter fluctuations and environmental disturbances in vivo, a multiobjective H2/H∞ reference tracking (H2 optimal tracking and H∞ noise filtering) design is introduced. The H2 optimal tracking makes the tracking errors between the behaviors of a synthetic gene network and the desired behaviors as small as possible from the minimum mean-square-error point of view, and the H∞ noise filtering attenuates all possible noises, from the worst-case noise-effect point of view, to achieve a desired noise filtering ability. If the multiobjective H2/H∞ reference tracking design is satisfied, the synthetic gene network can robustly and optimally track the desired behaviors simultaneously. First, based on dynamic gene regulation, the existing promoter libraries are redefined by their promoter activities so that they can be efficiently selected in the design procedure. Then a systematic method is developed to select an adequate promoter set from the redefined promoter libraries to synthesize a gene network satisfying these two design objectives. The multiobjective H2/H∞ reference tracking design problem, however, requires solving a difficult Hamilton-Jacobi inequality (HJI)-constrained optimization problem. Therefore, the fuzzy approximation method is
Reconstruction of Gene Networks of Iron Response in Shewanella oneidensis
Yang, Yunfeng; Harris, Daniel P; Luo, Feng; Joachimiak, Marcin; Wu, Liyou; Dehal, Paramvir; Jacobsen, Janet; Yang, Zamin Koo; Gao, Haichun; Arkin, Adam; Palumbo, Anthony Vito; Zhou, Jizhong
2009-01-01
It is of great interest to study the iron response of the γ-proteobacterium Shewanella oneidensis since it possesses a high content of iron and is capable of utilizing iron for anaerobic respiration. We report here that the iron response in S. oneidensis is a rapid process. To gain more insights into the bacterial response to iron, temporal gene expression profiles were examined for iron depletion and repletion, resulting in identification of iron-responsive biological pathways in a gene co-expression network. Iron acquisition systems, including genes unique to S. oneidensis, were rapidly and strongly induced by iron depletion, and repressed by iron repletion. Some were required for iron depletion, as exemplified by the mutational analysis of the putative siderophore biosynthesis protein SO3032. Unexpectedly, a number of genes related to anaerobic energy metabolism were repressed by iron depletion and induced by repletion, which might be due to the iron storage potential of their protein products. Other iron-responsive biological pathways include protein degradation, aerobic energy metabolism and protein synthesis. Furthermore, sequence motifs enriched in gene clusters as well as their corresponding DNA-binding proteins (Fur, CRP and RpoH) were identified, resulting in a regulatory network of iron response in S. oneidensis. Together, this work provides an overview of iron response and reveals novel features in S. oneidensis, including Shewanella-specific iron acquisition systems, and suggests the intimate relationship between anaerobic energy metabolism and iron response.
Computational methods estimating uncertainties for profile reconstruction in scatterometry
NASA Astrophysics Data System (ADS)
Gross, H.; Rathsfeld, A.; Scholze, F.; Model, R.; Bär, M.
2008-04-01
The solution of the inverse problem in scatterometry, i.e. the determination of periodic surface structures from light diffraction patterns, is incomplete without knowledge of the uncertainties associated with the reconstructed surface parameters. With decreasing feature sizes of lithography masks, increasing demands on metrology techniques arise. Scatterometry as a non-imaging indirect optical method is applied to periodic line-space structures in order to determine geometric parameters like side-wall angles, heights, top and bottom widths and to evaluate the quality of the manufacturing process. The numerical simulation of the diffraction process is based on the finite element solution of the Helmholtz equation. The inverse problem seeks to reconstruct the grating geometry from measured diffraction patterns. Restricting the class of gratings and the set of measurements, this inverse problem can be reformulated as a non-linear operator equation in Euclidean spaces. The operator maps the grating parameters to the efficiencies of diffracted plane wave modes. We employ a Gauss-Newton type iterative method to solve this operator equation and end up minimizing the deviation of the measured efficiency or phase shift values from the simulated ones. The reconstruction properties and the convergence of the algorithm, however, are controlled by the local conditioning of the non-linear mapping and the uncertainties of the measured efficiencies or phase shifts. In particular, the uncertainties of the reconstructed geometric parameters essentially depend on the uncertainties of the input data and can be estimated by various methods. We compare the results obtained from a Monte Carlo procedure to the estimations gained from the approximate covariance matrix of the profile parameters close to the optimal solution and apply them to EUV masks illuminated by plane waves with wavelengths in the range of 13 nm.
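The Gauss-Newton iteration and the covariance-based uncertainty estimate can be sketched on a stand-in scalar model (an exponential, not the Helmholtz FEM forward solver); the covariance of the fitted parameters is approximated from the Jacobian near the optimum:

```python
import numpy as np

t = np.linspace(0.0, 2.0, 20)
a_true, b_true = 2.0, -1.0
y = a_true * np.exp(b_true * t)           # "measured" efficiencies (noise-free stand-in)

def model(p):
    a, b = p
    return a * np.exp(b * t)

def jacobian(p):
    a, b = p
    e = np.exp(b * t)
    return np.column_stack([e, a * t * e])    # derivatives w.r.t. a and b

p = np.array([1.5, -0.8])                 # initial guess
for _ in range(50):                       # Gauss-Newton iterations
    r = y - model(p)
    J = jacobian(p)
    step, *_ = np.linalg.lstsq(J, r, rcond=None)  # solve J step ~ r in least squares
    p = p + step

# Approximate parameter covariance near the optimum: sigma^2 * (J^T J)^{-1},
# where sigma^2 is the variance of the measured data.
cov_unit = np.linalg.inv(J.T @ J)
```

The diagonal of `sigma**2 * cov_unit` then gives the parameter uncertainties that the paper compares against a full Monte Carlo estimate.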
Transcriptional control in the segmentation gene network of Drosophila.
Schroeder, Mark D; Pearce, Michael; Fak, John; Fan, HongQing; Unnerstall, Ulrich; Emberly, Eldon; Rajewsky, Nikolaus; Siggia, Eric D; Gaul, Ulrike
2004-09-01
The segmentation gene network of Drosophila consists of maternal and zygotic factors that generate, by transcriptional (cross-) regulation, expression patterns of increasing complexity along the anterior-posterior axis of the embryo. Using known binding site information for maternal and zygotic gap transcription factors, the computer algorithm Ahab recovers known segmentation control elements (modules) with excellent success and predicts many novel modules within the network and genome-wide. We show that novel module predictions are highly enriched in the network and typically clustered proximal to the promoter, not only upstream, but also in intronic space and downstream. When placed upstream of a reporter gene, they consistently drive patterned blastoderm expression, in most cases faithfully producing one or more pattern elements of the endogenous gene. Moreover, we demonstrate for the entire set of known and newly validated modules that Ahab's prediction of binding sites correlates well with the expression patterns produced by the modules, revealing basic rules governing their composition. Specifically, we show that maternal factors consistently act as activators and that gap factors act as repressors, except for the bimodal factor Hunchback. Our data suggest a simple context-dependent rule for its switch from repressive to activating function. Overall, the composition of modules appears well fitted to the spatiotemporal distribution of their positive and negative input factors. Finally, by comparing Ahab predictions with different categories of transcription factor input, we confirm the global regulatory structure of the segmentation gene network, but find odd skipped behaving like a primary pair-rule gene. The study expands our knowledge of the segmentation gene network by increasing the number of experimentally tested modules by 50%. For the first time, the entire set of validated modules is analyzed for binding site composition under a uniform set of
Sediment core and glacial environment reconstruction - a method review
NASA Astrophysics Data System (ADS)
Bakke, Jostein; Paasche, Øyvind
2010-05-01
Alpine glaciers are often located in remote and high-altitude regions of the world, areas that are only rarely covered by instrumental records. Reconstructions of glaciers have therefore proven useful for understanding past climate dynamics on both shorter and longer time-scales. One major drawback with glacier reconstructions based solely on moraine chronologies (by far the most common) is that, owing to the selective preservation of moraine ridges, such records cannot exclude the possibility of multiple Holocene glacier advances. This problem holds regardless of whether cosmogenic isotopes, lichenometry or radiocarbon dating of megafossils buried in till or underneath the moraines themselves has been used to date the moraines. To overcome this problem, Karlén (1976) suggested that glacial erosion and the associated production of rock flour deposited in downstream lakes could provide a continuous record of glacial fluctuations, hence overcoming the problem of incomplete reconstructions. We want to discuss the methods used to reconstruct past glacier activity based on sediments deposited in distal glacier-fed lakes. By quantifying physical properties of glacial and extra-glacial sediments deposited in catchments, and in downstream lakes and fjords, it is possible to isolate and identify past glacier activity - size and production rate - which subsequently can be used to reconstruct changing environmental shifts and trends. Changes in average sediment evacuation from alpine glaciers are mainly governed by glacier size and the mass turnover gradient, which determines the deformation rate at any given time. The amount of solid precipitation (mainly winter accumulation) versus loss due to melting during the ablation season (mainly summer temperature) determines whether the mass turnover gradient is positive or negative. A prevailing positive net balance will lead to higher sedimentation rates and vice versa, which in turn can be recorded in downstream
Track and vertex reconstruction: From classical to adaptive methods
Strandlie, Are; Fruehwirth, Rudolf
2010-04-15
This paper reviews classical and adaptive methods of track and vertex reconstruction in particle physics experiments. Adaptive methods have been developed to meet the experimental challenges at high-energy colliders, in particular, the CERN Large Hadron Collider. They can be characterized by the obliteration of the traditional boundaries between pattern recognition and statistical estimation, by the competition between different hypotheses about what constitutes a track or a vertex, and by a high level of flexibility and robustness achieved with a minimum of assumptions about the data. The theoretical background of some of the adaptive methods is described, and it is shown that there is a close connection between the two main branches of adaptive methods: neural networks and deformable templates, on the one hand, and robust stochastic filters with annealing, on the other hand. As both classical and adaptive methods of track and vertex reconstruction presuppose precise knowledge of the positions of the sensitive detector elements, the paper includes an overview of detector alignment methods and a survey of the alignment strategies employed by past and current experiments.
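The flavor of the adaptive methods can be caricatured by an annealed soft-assignment line fit: each hit's weight competes against an outlier hypothesis, and the temperature is lowered on a schedule. This is far simpler than a real deterministic annealing filter, and all constants below are illustrative.

```python
import numpy as np

# Hits: points on the track y = 2x, plus one noise hit at the end.
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 2.0])
ys = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 9.0])   # last hit is an outlier

def weighted_line_fit(xs, ys, w):
    """Weighted least-squares fit of y = slope*x + intercept."""
    X = np.column_stack([xs, np.ones_like(xs)])
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ ys)

slope, intercept = weighted_line_fit(xs, ys, np.ones_like(xs))  # classical fit
cut = 1.0                                    # competing "outlier" residual scale
for T in [16.0, 4.0, 1.0, 0.25]:             # annealing schedule: high T = soft
    r2 = (ys - (slope * xs + intercept)) ** 2
    # Each hit's weight competes against the outlier hypothesis at temperature T.
    w = np.exp(-r2 / (2 * T)) / (np.exp(-r2 / (2 * T)) + np.exp(-cut / (2 * T)))
    slope, intercept = weighted_line_fit(xs, ys, w)
```

As T decreases, the assignment hardens: the outlier's weight collapses toward zero and the fit converges to the five genuine hits, illustrating the competition-between-hypotheses idea described above.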
Reverse optimization reconstruction method in non-null aspheric interferometry
NASA Astrophysics Data System (ADS)
Zhang, Lei; Liu, Dong; Shi, Tu; Yang, Yongying; Chong, Shiyao; Shen, Yibing; Bai, Jian
2015-10-01
Aspheric non-null testing achieves more flexible measurements than the null test; however, precise calibration of the retrace error has always been difficult. A reverse optimization reconstruction (ROR) method is proposed for retrace-error calibration as well as for extraction of the aspheric figure error, based on system modeling. An optimization function is set up within the system model, in which the wavefront data from the experiment serve as the optimization objective while the figure error under test in the model is the optimization variable. The optimization is executed by reverse ray tracing in the system model until the test wavefront in the model is consistent with the experimental one; at this point, the surface figure error in the model is considered consistent with that of the experiment. With Zernike fitting, the aspheric surface figure error is then reconstructed in the form of Zernike polynomials. Numerical simulations, including error considerations, verify the high accuracy of the ROR method, and a set of experiments demonstrates its validity and repeatability. Compared with the results of a Zygo interferometer (null test), the measurement error of the ROR method is better than λ/10.
Accelerated signal encoding and reconstruction using pixon method
Puetter, Richard; Yahil, Amos; Pina, Robert
2005-05-17
The method identifies a Pixon element, which is a fundamental and indivisible unit of information, and a Pixon basis, which is the set of possible functions from which the Pixon elements are selected. The actual Pixon elements selected from this basis during the reconstruction process represent the smallest number of such units required to fit the data, and hence the minimum number of parameters necessary to specify the image. The Pixon kernels can have arbitrary properties (e.g., shape, size, and/or position) as needed to best fit the data.
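One highly simplified reading of the Pixon idea (illustrative only; the function below, the Gaussian kernel basis, and the 3σ acceptance rule are assumptions, not the patented method) is to assign each pixel the broadest smoothing kernel that remains consistent with the data, so the image is described with the fewest effective parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pixon_like_smooth(image, noise_sigma, scales=(0.5, 1.0, 2.0, 4.0)):
    """Illustrative Pixon-style smoothing: at each pixel, keep the broadest
    Gaussian kernel whose smoothed value still agrees with the data to
    within the noise (3-sigma rule, an assumption for this sketch)."""
    smoothed = [gaussian_filter(image, s) for s in scales]
    out = smoothed[0].copy()
    chosen = np.zeros(image.shape, dtype=int)
    # iterate small -> large so the broadest acceptable scale wins
    for k, sm in enumerate(smoothed):
        ok = np.abs(sm - image) <= 3.0 * noise_sigma
        out[ok] = sm[ok]
        chosen[ok] = k
    return out, chosen
```

In flat regions the broadest kernel is selected (few parameters), while near sharp features only narrow kernels stay within the noise band, preserving detail.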
Image reconstruction by the speckle-masking method.
Weigelt, G; Wirnitzer, B
1983-07-01
Speckle masking is a method for reconstructing high-resolution images of general astronomical objects from stellar speckle interferograms. In speckle masking no unresolvable star is required within the isoplanatic patch of the object. We present digital applications of speckle masking to close spectroscopic double stars. The speckle interferograms were recorded with the European Southern Observatory's 3.6-m telescope. Diffraction-limited resolution (0.03 arcsec) was achieved, which is about 30 times higher than the resolution of conventional astrophotography. PMID:19718124
Accelerated signal encoding and reconstruction using pixon method
Puetter, Richard; Yahil, Amos
2002-01-01
The method identifies a Pixon element, which is a fundamental and indivisible unit of information, and a Pixon basis, which is the set of possible functions from which the Pixon elements are selected. The actual Pixon elements selected from this basis during the reconstruction process represent the smallest number of such units required to fit the data, and hence the minimum number of parameters necessary to specify the image. The Pixon kernels can have arbitrary properties (e.g., shape, size, and/or position) as needed to best fit the data.
Algebraic filter approach for fast approximation of nonlinear tomographic reconstruction methods
NASA Astrophysics Data System (ADS)
Plantagie, Linda; Batenburg, Kees Joost
2015-01-01
We present a computational approach for fast approximation of nonlinear tomographic reconstruction methods by filtered backprojection (FBP) methods. Algebraic reconstruction algorithms are the methods of choice in a wide range of tomographic applications, yet they require significant computation time, restricting their usefulness. We build upon recent work on the approximation of linear algebraic reconstruction methods and extend the approach to the approximation of nonlinear reconstruction methods which are common in practice. We demonstrate that if a blueprint image is available that is sufficiently similar to the scanned object, our approach can compute reconstructions that approximate iterative nonlinear methods, yet have the same speed as FBP.
A New Method for Coronal Magnetic Field Reconstruction
NASA Astrophysics Data System (ADS)
Yi, Sibaek; Choe, Gwangson; Lim, Daye
2015-08-01
We present a new, simple, variational method for the reconstruction of coronal force-free magnetic fields from vector magnetogram data. Our method employs vector potentials for the magnetic field description in order to ensure the divergence-free condition. As boundary conditions, it requires only the normal components of the magnetic field and current density, so that the boundary conditions are not over-specified as in many other methods. The boundary normal current distribution is fixed once and for all at the start and does not need continual adjustment as in stress-and-relax type methods. We have tested the computational code based on our new method on problems with known solutions and on actual photospheric data. When solutions are fully given at all boundaries, the accuracy of our method is comparable to that of the best-performing methods available. When magnetic field data are given only at the photospheric boundary, our method outperforms other methods in most “figures of merit” devised by Schrijver et al. (2006); furthermore, the residual force in the solution is at least an order of magnitude smaller than that of any other method. It can also accommodate a source-surface boundary condition at the top boundary. Our method is expected to contribute to the real-time monitoring of the Sun required for future space-weather forecasts.
3D scanning modeling method application in ancient city reconstruction
NASA Astrophysics Data System (ADS)
Ren, Pu; Zhou, Mingquan; Du, Guoguang; Shui, Wuyang; Zhou, Pengbo
2015-07-01
With the development of optical engineering technology, 3D scanning equipment has become more precise and its role in 3D modeling more prominent. This paper proposes a 3D scanning modeling method that has been successfully applied to Chinese ancient city reconstruction. On the one hand, for existing architecture, an improved algorithm based on multiple scans is adopted: first, two scans are coarsely rigid-registered using spherical displacers and a vertex-clustering method; second, a globally weighted ICP (iterative closest points) method achieves fine rigid registration. On the other hand, for buildings that have already disappeared, an exemplar-driven algorithm for rapid modeling is proposed. Based on 3D scanning technology and historical data, a systematic approach to 3D modeling and virtual display of the ancient city is presented.
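The fine-registration stage relies on weighted ICP. Its inner update, a weighted least-squares rigid alignment of matched point pairs via the Kabsch/SVD solution, can be sketched as follows (the function name and uniform treatment of weights are assumptions; the paper's global weighting scheme is not detailed in the abstract):

```python
import numpy as np

def weighted_rigid_align(P, Q, w):
    """One weighted least-squares rigid alignment step (Kabsch/SVD), the
    core update inside an ICP loop: find R, t minimizing
    sum_i w_i * ||R @ P[i] + t - Q[i]||^2 over matched 3D point pairs."""
    w = w / w.sum()
    mp = (w[:, None] * P).sum(axis=0)        # weighted centroids
    mq = (w[:, None] * Q).sum(axis=0)
    X, Y = P - mp, Q - mq
    H = (w[:, None] * X).T @ Y               # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mq - R @ mp
    return R, t
```

A full ICP loop would alternate this step with nearest-neighbor correspondence search, updating the weights (e.g., by residual) between iterations.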
A reconstruction method for gappy and noisy arterial flow data.
Yakhot, Alexander; Anor, Tomer; Karniadakis, George Em
2007-12-01
Proper orthogonal decomposition (POD), Kriging interpolation, and smoothing are applied to reconstruct gappy and noisy data of blood flow in a carotid artery. While we have applied these techniques to clinical data, in this paper in order to rigorously evaluate their effectiveness we rely on data obtained by computational fluid dynamics (CFD). Specifically, gappy data sets are generated by removing nodal values from high-resolution 3-D CFD data (at random or in a fixed area) while noisy data sets are formed by superimposing speckle noise on the CFD results. A combined POD-Kriging procedure is applied to planar data sets mimicking coarse resolution "ultrasound-like" blood flow images. A method for locating the vessel wall boundary and for calculating the wall shear stress (WSS) is also proposed. The results show good agreement with the original CFD data. The combined POD-Kriging method, enhanced by proper smoothing if needed, holds great potential in dealing effectively with gappy and noisy data reconstruction of in vivo velocity measurements based on color Doppler ultrasound (CDUS) imaging or magnetic resonance angiography (MRA). PMID:18092738
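As a simplified sketch of the gappy-data side of such a pipeline, a gappy-POD fill can be written in a few lines (the function name is an assumption, and the paper's Kriging interpolation and smoothing stages are omitted): POD modes are learned from complete training snapshots, and the mode coefficients of a gappy field are fit to its valid entries only.

```python
import numpy as np

def gappy_pod_fill(snapshots, gappy, mask, n_modes=3):
    """Fill missing entries of `gappy` (valid where `mask` is True) using
    POD modes learned from complete snapshot columns of `snapshots`."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    Phi = U[:, :n_modes]                      # POD basis from complete data
    A = Phi[mask]                             # basis rows at measured points
    coeffs, *_ = np.linalg.lstsq(A, gappy[mask], rcond=None)
    filled = gappy.copy()
    filled[~mask] = (Phi @ coeffs)[~mask]     # reconstruct missing entries
    return filled
```

If the gappy field lies in the span of the training snapshots and enough entries survive, the fill is essentially exact; noise handling is where the Kriging/smoothing stages of the paper would enter.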
Hysteresis in a synthetic mammalian gene network.
Kramer, Beat P; Fussenegger, Martin
2005-07-01
Bistable and hysteretic switches, enabling cells to adopt multiple internal expression states in response to a single external input signal, have a pivotal impact on biological systems, ranging from cell-fate decisions to cell-cycle control. We have designed a synthetic hysteretic mammalian transcription network. A positive feedback loop, consisting of a transgene and transactivator (TA) cotranscribed by TA's cognate promoter, is repressed by constitutive expression of a macrolide-dependent transcriptional silencer, whose activity is modulated by the macrolide antibiotic erythromycin. The antibiotic concentration, at which a quasi-discontinuous switch of transgene expression occurs, depends on the history of the synthetic transcription circuitry. If the network components are imbalanced, a graded rather than a quasi-discontinuous signal integration takes place. These findings are consistent with a mathematical model. Synthetic gene networks, which are able to emulate natural gene expression behavior, may foster progress in future gene therapy and tissue engineering initiatives. PMID:15972812
Next-Generation Synthetic Gene Networks
Lu, Timothy K.; Khalil, Ahmad S.; Collins, James J.
2009-01-01
Synthetic biology is focused on the rational construction of biological systems based on engineering principles. During the field’s first decade of development, significant progress has been made in designing biological parts and assembling them into genetic circuits to achieve basic functionalities. These circuits have been used to construct proof-of-principle systems with promising results in industrial and medical applications. However, advances in synthetic biology have been limited by a lack of interoperable parts, techniques for dynamically probing biological systems, and frameworks for the reliable construction and operation of complex, higher-order networks. Here, we highlight challenges and goals for next-generation synthetic gene networks, in the context of potential applications in medicine, biotechnology, bioremediation, and bioenergy. PMID:20010597
Takata, Tadanori; Ichikawa, Katsuhiro; Hayashi, Hiroyuki; Mitsui, Wataru; Sakuta, Keita; Koshida, Haruka; Yokoi, Tomohiro; Matsubara, Kousuke; Horii, Jyunsei; Iida, Hiroji
2012-01-01
The purpose of this study was to evaluate the image quality of an iterative reconstruction method, iterative reconstruction in image space (IRIS), implemented in a 128-slice multi-detector computed tomography (MDCT) system, the Siemens Somatom Definition Flash (Definition). We evaluated image noise by the standard deviation (SD), as in previous studies, and additionally measured the modulation transfer function (MTF), the noise power spectrum (NPS), and perceptual low-contrast detectability using a water phantom containing a low-contrast object of 10 Hounsfield units (HU), to evaluate whether the noise reduction of IRIS was effective. The SD and NPS were measured from images of a water phantom. The MTF was measured from images of a thin metal wire and of a bar-pattern phantom with a bar contrast of 125 HU. The NPS of IRIS was lower than that of filtered back projection (FBP) in the middle- and high-frequency regions, and the SD values were reduced by 21%. The MTFs of IRIS and FBP measured with the wire phantom coincided precisely; however, for the bar-pattern phantom, the MTF values of IRIS at 0.625 and 0.833 cycles/mm were lower than those of FBP. Despite the reduction in SD and NPS, the low-contrast detectability study indicated no significant difference between IRIS and FBP. These results demonstrate that IRIS reduces noise while exactly preserving high-contrast resolution, with slight degradation of middle-contrast resolution, and slightly, though not significantly, improves low-contrast detectability. PMID:22516592
[The method of the isolated reconstruction by gastropancreatoduodenal resection].
Shchepotin, I B; Vasil'ev, O V; Lukashenko, A V; Rozumiĭ, D A; Priĭmak, V V
2011-01-01
The modification of the reconstructive stage of gastropancreatoduodenal resection aims to increase the security of the pancreatojejunoanastomosis by minimizing the impact of aggressive substances such as bile and pancreatic juice. The modification consists of an isolated pancreatojejunoanastomosis on a Roux-en-Y intestinal loop, with the gastro- and hepaticojejunoanastomoses on a second intestinal loop, separated by the use of a stub. The method thus allows the separate passage of pancreatic juice, bile, and gastric contents, excluding their impact on the other anastomoses. The described modification was performed in 6 patients; there were no cases of anastomotic insufficiency, and the mean hospital stay was 10.5 days. The method proved to be effective and safe, providing good initial results. PMID:22334901
Comparison of image reconstruction methods for structured illumination microscopy
NASA Astrophysics Data System (ADS)
Lukeš, Tomas; Hagen, Guy M.; Křížek, Pavel; Švindrych, Zdeněk.; Fliegel, Karel; Klíma, Miloš
2014-05-01
Structured illumination microscopy (SIM) is a recent microscopy technique that enables one to go beyond the diffraction limit using patterned illumination. The high-frequency information is encoded through aliasing into the observed image. By acquiring multiple images with different illumination patterns, the aliased components can be separated and a high-resolution image reconstructed. Here we investigate image processing methods that perform the task of high-resolution image reconstruction, namely square-law detection, scaled subtraction, super-resolution SIM (SR-SIM), and Bayesian estimation. The optical sectioning and lateral resolution improvement abilities of these algorithms were tested under various noise-level conditions on simulated data and on fluorescence microscopy images of a pollen grain test sample and of a cultured cell stained for the actin cytoskeleton. In order to compare the performance of the algorithms, the following objective criteria were evaluated: signal-to-noise ratio (SNR), signal-to-background ratio (SBR), the circular average of the power spectral density, and the S3 sharpness index. The results show that SR-SIM and Bayesian estimation combine illumination-patterned images more effectively and provide better lateral resolution in exchange for more complex image processing. SR-SIM requires one to precisely shift the separated spectral components to their proper positions in reciprocal space; high noise levels in the raw data can cause inaccuracies in these shifts, which degrade the super-resolved image. Bayesian estimation has proven to be more robust to changes in noise level and illumination pattern frequency.
A CLASS OF RECONSTRUCTED DISCONTINUOUS GALERKIN METHODS IN COMPUTATIONAL FLUID DYNAMICS
Hong Luo; Yidong Xia; Robert Nourgaliev
2011-05-01
A class of reconstructed discontinuous Galerkin (DG) methods is presented to solve compressible flow problems on arbitrary grids. The idea is to combine the efficiency of the reconstruction methods in finite volume methods and the accuracy of the DG methods to obtain a better numerical algorithm in computational fluid dynamics. The beauty of the resulting reconstructed discontinuous Galerkin (RDG) methods is that they provide a unified formulation for both finite volume and DG methods, and contain both classical finite volume and standard DG methods as two special cases, thus allowing for a direct efficiency comparison. Both Green-Gauss and least-squares reconstruction methods and a least-squares recovery method are presented to obtain a quadratic polynomial representation of the underlying linear discontinuous Galerkin solution on each cell via a so-called in-cell reconstruction process. The devised in-cell reconstruction is aimed at augmenting the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. These three reconstructed discontinuous Galerkin methods are used to compute a variety of compressible flow problems on arbitrary meshes to assess their accuracy. The numerical experiments demonstrate that all three reconstructed discontinuous Galerkin methods can significantly improve the accuracy of the underlying second-order DG method, although the least-squares reconstructed DG method provides the best performance in terms of accuracy, efficiency, and robustness.
Evolution of a Core Gene Network for Skeletogenesis in Chordates
Hecht, Jochen; Panopoulou, Georgia; Podsiadlowski, Lars; Poustka, Albert J.; Dieterich, Christoph; Ehrich, Siegfried; Suvorova, Julia; Mundlos, Stefan; Seitz, Volkhard
2008-01-01
The skeleton is one of the most important features for the reconstruction of vertebrate phylogeny but few data are available to understand its molecular origin. In mammals the Runt genes are central regulators of skeletogenesis. Runx2 was shown to be essential for osteoblast differentiation, tooth development, and bone formation. Both Runx2 and Runx3 are essential for chondrocyte maturation. Furthermore, Runx2 directly regulates Indian hedgehog expression, a master coordinator of skeletal development. To clarify the correlation of Runt gene evolution and the emergence of cartilage and bone in vertebrates, we cloned the Runt genes from hagfish as representative of jawless fish (MgRunxA, MgRunxB) and from dogfish as representative of jawed cartilaginous fish (ScRunx1–3). According to our phylogenetic reconstruction the stem species of chordates harboured a single Runt gene and thereafter Runt locus duplications occurred during early vertebrate evolution. All newly isolated Runt genes were expressed in cartilage according to quantitative PCR. In situ hybridisation confirmed high MgRunxA expression in hard cartilage of hagfish. In dogfish ScRunx2 and ScRunx3 were expressed in embryonal cartilage whereas all three Runt genes were detected in teeth and placoid scales. In cephalochordates (lancelets) Runt, Hedgehog and SoxE were strongly expressed in the gill bars and expression of Runt and Hedgehog was found in endo- as well as ectodermal cells. Furthermore we demonstrate that the lancelet Runt protein binds to Runt binding sites in the lancelet Hedgehog promoter and regulates its activity. Together, these results suggest that Runt and Hedgehog were part of a core gene network for cartilage formation, which was already active in the gill bars of the common ancestor of cephalochordates and vertebrates and diversified after Runt duplications had occurred during vertebrate evolution. The similarities in expression patterns of Runt genes support the view that teeth and
Vector intensity reconstruction using the data completion method.
Langrenne, Christophe; Garcia, Alexandre
2013-04-01
This paper presents an application of the data completion method (DCM) to vector intensity reconstruction. A mobile array of 36 pressure-pressure probes (72 microphones) is used to perform measurements near a planar surface; nevertheless, since the proposed method is based on integral formulations, DCM can be applied to any kind of geometry. The method requires knowledge of Cauchy data (pressure and velocity) on part of the boundary of an empty domain in order to evaluate pressure and velocity on the remaining part of the boundary. Intensity vectors are then calculated in the interior domain surrounded by the measurement array. This inverse acoustic problem requires a regularization method to obtain a realistic solution. An experiment in a closed wooden car-trunk mock-up excited by a shaker and two loudspeakers is presented. In this case, where the volume of the mock-up is small (0.61 m³), standing waves and fluid-structure interactions appear, showing that DCM is a powerful tool for identifying sources in a confined space. PMID:23556589
NASA Astrophysics Data System (ADS)
Hansis, Eberhard; Schäfer, Dirk; Grass, Michael; Dössel, Olaf
2007-03-01
Three-dimensional (3D) reconstruction of the coronary arteries offers great advantages in the diagnosis and treatment of cardiovascular diseases compared to two-dimensional X-ray angiograms: besides improved roadmapping, quantitative analysis of vessel lesions becomes possible. To perform 3D reconstruction, rotational projection data of the selectively contrast-agent-enhanced coronary arteries are acquired with simultaneous ECG recording. For the reconstruction of one cardiac phase, the corresponding projections are selected from the rotational sequence by nearest-neighbor ECG gating, which typically provides only 5-10 projections per cardiac phase; this severe angular undersampling leads to an ill-posed reconstruction problem. In this contribution, an iterative reconstruction method is presented that employs regularizations especially suited to the given reconstruction problem. The coronary arteries cover only a small fraction of the reconstruction volume; therefore, we formulate the reconstruction problem as a minimization of the L1-norm of the reconstructed image, which results in a spatially sparse object. Two additional regularization terms are introduced: a 3D vesselness prior, reconstructed from vesselness-filtered projection data, and a Gibbs smoothing prior. The regularizations favor the reconstruction of the desired object while taking care not to over-constrain the reconstruction by overly detailed a priori assumptions. Simulated projection data of a coronary artery software phantom are used to evaluate the performance of the method, and human data of clinical cases are presented to show the method's potential for clinical application.
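The L1-minimization idea at the core of this formulation can be illustrated with a toy iterative soft-thresholding (ISTA) solver. This is a generic sketch of the sparsity term only; the function name and problem sizes are assumptions, and the vesselness and Gibbs priors of the paper are not implemented here.

```python
import numpy as np

def ista(A, b, lam, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1,
    the L1-sparsity principle used for few-view reconstruction (toy version)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)                # gradient of the data-fit term
        z = x - g / L                        # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return x
```

With far fewer measurements than unknowns, the shrinkage step drives most coefficients to zero, mirroring how a sparse vessel tree can be recovered from 5-10 gated projections.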
Paper-based synthetic gene networks.
Pardee, Keith; Green, Alexander A; Ferrante, Tom; Cameron, D Ewen; DaleyKeyser, Ajay; Yin, Peng; Collins, James J
2014-11-01
Synthetic gene networks have wide-ranging uses in reprogramming and rewiring organisms. To date, there has not been a way to harness the vast potential of these networks beyond the constraints of a laboratory or in vivo environment. Here, we present an in vitro paper-based platform that provides an alternate, versatile venue for synthetic biologists to operate and a much-needed medium for the safe deployment of engineered gene circuits beyond the lab. Commercially available cell-free systems are freeze dried onto paper, enabling the inexpensive, sterile, and abiotic distribution of synthetic-biology-based technologies for the clinic, global health, industry, research, and education. For field use, we create circuits with colorimetric outputs for detection by eye and fabricate a low-cost, electronic optical interface. We demonstrate this technology with small-molecule and RNA actuation of genetic switches, rapid prototyping of complex gene circuits, and programmable in vitro diagnostics, including glucose sensors and strain-specific Ebola virus sensors. PMID:25417167
Analysis of Cascading Failure in Gene Networks
Sun, Longxiao; Wang, Shudong; Li, Kaikai; Meng, Dazhi
2012-01-01
Understanding the functional mechanisms by which cancer-related genes contribute to the formation and development of cancers is an important research subject. Modern data-analysis methodology plays a very important role in deducing the relationships between cancers and cancer-related genes and in analyzing the functional mechanisms of the genome. In this research, we construct mutual-information networks from gene expression profiles of glioblastoma and renal tissue under normal and cancer conditions. We investigate the relationship between structure and robustness in the gene networks of the two tissues using a cascading-failure model based on betweenness centrality. We define several parameters, such as the percentage of failed nodes in the network, the average size-ratio of a cascading failure, and the cumulative probability of the size-ratio of cascading failures, to measure the robustness of the networks. Comparing the control group with the experiment groups, we find that the networks of the experiment groups are more robust than those of the control group. A gene whose failure can cause a large-scale cascade is called a structural key gene. Some of these have been confirmed to be closely related to the formation and development of glioma and renal cancer, respectively; most are predicted to play important roles during the formation of glioma and renal cancer, perhaps as oncogenes, suppressor genes, or other cancer candidate genes in glioma and renal cancer cells. However, these studies provide little information about the detailed roles of the identified cancer genes. PMID:23248647
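A minimal version of such a betweenness-based cascading-failure model can be sketched with networkx. The function name and the linear capacity rule below are assumptions; the paper's exact load definition and tolerance parameter may differ.

```python
import networkx as nx

def cascade_fraction(G, seed_node, alpha=0.2):
    """Betweenness-based cascading-failure sketch: each node's capacity is
    (1 + alpha) times its initial betweenness load; removing a seed node
    redistributes shortest paths, and overloaded nodes fail iteratively.
    Returns the fraction of failed nodes."""
    load0 = nx.betweenness_centrality(G)
    cap = {v: (1 + alpha) * load0[v] for v in G}
    H = G.copy()
    H.remove_node(seed_node)
    failed = {seed_node}
    while True:
        load = nx.betweenness_centrality(H)
        over = [v for v in H if load[v] > cap[v] + 1e-12]
        if not over:
            break
        H.remove_nodes_from(over)
        failed.update(over)
    return len(failed) / G.number_of_nodes()
```

Removing a structural key gene (a high-betweenness hub) yields a large failed fraction, while removing a peripheral node typically stops the cascade immediately.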
Evaluation of back projection methods for breast tomosynthesis image reconstruction.
Zhou, Weihua; Lu, Jianping; Zhou, Otto; Chen, Ying
2015-06-01
Breast cancer is the most common cancer among women in the USA. Compared to mammography, digital breast tomosynthesis is a new imaging technique that may improve diagnostic accuracy by removing the ambiguities of overlapped tissues and providing 3D information about the breast. Tomosynthesis reconstruction algorithms generate 3D reconstructed slices from a few limited-angle projection images. Among the different reconstruction algorithms, back projection (BP) is the foundation of several reconstruction techniques that add deblurring, such as filtered back projection. In this paper, two BP variants, α-trimmed BP and principal component analysis-based BP, are proposed to improve image quality over that of traditional BP. Computer simulations and phantom studies demonstrated that α-trimmed BP may improve signal response and suppress noise in breast tomosynthesis image reconstruction. PMID:25384538
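The α-trimmed idea can be sketched in a hypothetical toy parallel-beam geometry (the paper's tomosynthesis geometry and exact trimming rule are not given in the abstract): for each pixel, the projection values from the different views are sorted and the extremes discarded before averaging, unlike plain BP, which averages all views.

```python
import numpy as np

def forward_project(img, thetas):
    """Nearest-bin parallel-beam projector (toy geometry, for demo only)."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    sino = np.zeros((len(thetas), n))
    for i, th in enumerate(thetas):
        t = (xs - c) * np.cos(th) + (ys - c) * np.sin(th) + c
        ti = np.clip(np.round(t).astype(int), 0, n - 1)
        np.add.at(sino[i], ti.ravel(), img.ravel())
    return sino

def alpha_trimmed_bp(sino, thetas, n, alpha=0.2):
    """Back projection with an alpha-trimmed mean across views: the most
    extreme projection values per pixel are discarded before averaging,
    suppressing streaks/outliers relative to plain BP."""
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    vals = np.zeros((len(thetas), n, n))
    for i, th in enumerate(thetas):
        t = (xs - c) * np.cos(th) + (ys - c) * np.sin(th) + c
        ti = np.clip(np.round(t).astype(int), 0, n - 1)
        vals[i] = sino[i][ti]                 # value seen by each view
    vals.sort(axis=0)
    k = int(alpha * len(thetas) / 2)          # trim k lowest and k highest
    return vals[k:len(thetas) - k].mean(axis=0)
```

Setting alpha=0 reduces this to plain BP; increasing alpha trades a little signal for robustness against outlier views.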
NASA Astrophysics Data System (ADS)
Salonen, J. Sakari; Luoto, Miska; Alenius, Teija; Heikkilä, Maija; Seppä, Heikki; Telford, Richard J.; Birks, H. John B.
2014-03-01
We test and analyse a new calibration method, boosted regression trees (BRTs), for palaeoclimatic reconstructions based on fossil pollen assemblages. We apply BRTs to multiple Holocene and Lateglacial pollen sequences from northern Europe, and compare their performance with two commonly used calibration methods: weighted averaging regression (WA) and the modern-analogue technique (MAT). Using these calibration methods and fossil pollen data, we present synthetic reconstructions of Holocene summer temperature, winter temperature, and water balance changes in northern Europe. Highly consistent trends are found for summer temperature, with a distinct Holocene thermal maximum at ca 8000-4000 cal. a BP and a mean summer (June-August) temperature anomaly of ca +0.7 °C at 6 ka relative to 0.5 ka. We were unable to reliably reconstruct winter temperature or water balance, owing to the confounding effects of summer temperature and the large between-reconstruction variability. We find BRTs to be a promising tool for quantitative reconstructions from palaeoenvironmental proxy data. BRTs show good performance in cross-validations compared with WA and MAT, can model a variety of taxon response types, find relevant predictors and incorporate interactions between predictors, and show some robustness with non-analogue fossil assemblages.
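As a minimal illustration of the boosting principle behind BRTs (not the authors' calibration pipeline, which uses deeper trees, shrinkage tuning, and many pollen-taxon predictors), a gradient-boosted stump regressor can be written in plain numpy: each stage fits a depth-1 tree to the residual of the current ensemble.

```python
import numpy as np

def fit_brt_stumps(X, y, n_trees=200, lr=0.1):
    """Toy gradient boosting with regression stumps: repeatedly fit the
    best single-split tree to the current residuals and add it, scaled
    by the learning rate, to the ensemble prediction."""
    pred = np.full(len(y), y.mean())
    stumps = []
    for _ in range(n_trees):
        r = y - pred                           # current residuals
        best = None
        for j in range(X.shape[1]):            # candidate split features
            for s in np.quantile(X[:, j], [0.25, 0.5, 0.75]):
                left = X[:, j] <= s
                if left.all() or not left.any():
                    continue
                lm, rm = r[left].mean(), r[~left].mean()
                sse = ((r[left] - lm) ** 2).sum() + ((r[~left] - rm) ** 2).sum()
                if best is None or sse < best[0]:
                    best = (sse, j, s, lm, rm)
        _, j, s, lm, rm = best
        pred += lr * np.where(X[:, j] <= s, lm, rm)
        stumps.append((j, s, lr * lm, lr * rm))
    return stumps, pred
```

Interactions between predictors, which the paper highlights as a BRT strength, would require trees deeper than stumps; the additive-residual logic is otherwise the same.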
NASA Astrophysics Data System (ADS)
Choi, Joonsung; Kim, Dongchan; Oh, Changhyun; Han, Yeji; Park, HyunWook
2013-05-01
In MRI (magnetic resonance imaging), signal sampling along a radial k-space trajectory is preferred in certain applications due to its distinct advantages, such as robustness to motion, and radial sampling can benefit reconstruction algorithms such as parallel MRI (pMRI) due to its incoherency. For radial MRI, the image is usually reconstructed from projection data using analytic methods such as filtered back-projection or Fourier reconstruction after gridding. However, the quality of the image reconstructed by these analytic methods can degrade when the number of acquired projection views is insufficient. In this paper, we propose a novel reconstruction method based on the expectation maximization (EM) method, where the EM algorithm is remodeled for MRI so that complex images can be reconstructed. Then, to optimize the proposed method for radial pMRI, a reconstruction method that uses the coil sensitivity information of multichannel RF coils is formulated. Experimental results from synthetic and in vivo data show that the proposed method produces better reconstructed images than the analytic methods, even from highly subsampled data, and provides monotonic convergence properties compared to the conjugate-gradient-based reconstruction method. PMID:23588215
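The paper remodels EM for complex-valued MRI data. As a conceptual reference point only, the classic real-valued, nonnegative ML-EM iteration can be sketched as follows (toy system matrix and assumed function name; this is not the authors' complex-valued pMRI formulation):

```python
import numpy as np

def mlem(A, b, n_iter=500):
    """Classic ML-EM multiplicative update x <- x * A^T(b / Ax) / A^T 1
    for a nonnegative linear system; each iteration monotonically
    increases the (Poisson) likelihood."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])          # sensitivity term A^T 1
    for _ in range(n_iter):
        ratio = b / np.maximum(A @ x, 1e-12)  # measured / predicted
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

The monotone-convergence property mentioned in the abstract is the hallmark of this family of updates, in contrast to conjugate-gradient iterations, which are not monotone in the likelihood.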
High-quality image reconstruction method for ptychography with partially coherent illumination
NASA Astrophysics Data System (ADS)
Yu, Wei; Wang, Shouyu; Veetil, Suhas; Gao, Shumei; Liu, Cheng; Zhu, Jianqiang
2016-06-01
The influence of partial coherence on image reconstruction in ptychography is analyzed, and a simple method is proposed to reconstruct a clear image of a weakly scattering object under partially coherent illumination. It is demonstrated numerically and experimentally that by illuminating a weakly scattering object with a divergent radiation beam, and performing the reconstruction only from the bright-field diffraction data, the mathematical ambiguity and corresponding reconstruction errors related to partial coherence can be markedly suppressed, so that clear reconstructed images can be generated even under severely incoherent illumination.
Yoon, Sungwon; Pineda, Angel R.; Fahrig, Rebecca
2010-05-15
Purpose: An iterative tomographic reconstruction algorithm that simultaneously segments and reconstructs the reconstruction domain is proposed and applied to tomographic reconstructions from a sparse number of projection images. Methods: The proposed algorithm uses a two-phase level set segmentation in conjunction with an iterative tomographic reconstruction to achieve simultaneous segmentation and reconstruction. This is achieved by alternating between level set function evolutions and per-region intensity value updates. To deal with the limited number of projections, a priori information about the reconstruction is enforced via a penalized likelihood function. Specifically, the function is assumed to be smooth within each region (piecewise smooth) and to have bounded intensity values in each region. This a priori information is formulated into a quadratic objective function with linear bound constraints. The level set function evolutions are achieved by artificially time-evolving the level set function in the negative gradient direction; the intensity value updates are achieved using the gradient projection conjugate gradient algorithm. Results: The proposed simultaneous segmentation and reconstruction results were compared to "conventional" iterative reconstruction (with no segmentation), iterative reconstruction followed by segmentation, and filtered backprojection. Improvements of 6%-13% in the normalized root mean square error were observed when the proposed algorithm was applied to simulated projections of a numerical phantom and to real fan-beam projections of the Catphan phantom, both of which did not satisfy the a priori assumptions. Conclusions: The proposed simultaneous segmentation and reconstruction resulted in improved reconstruction image quality. The algorithm correctly segments the reconstruction space into regions, preserves sharp edges between different regions, and smooths the noise.
Cerec: correlation, an accurate and practical method for occlusal reconstruction.
Prévost, A P; Bouchard, Y
2001-07-01
The correlation technique explained here shows one of the possibilities for occlusal reconstruction offered by the Cerec approach. The various stages of this technique are described and illustrated. The most current applications are reviewed. PMID:11862885
NASA Astrophysics Data System (ADS)
Shin, Seungwon; Yoon, Ikroh; Juric, Damir
2011-07-01
We present a new interface reconstruction technique, the Local Front Reconstruction Method (LFRM), for incompressible multiphase flows. This new method falls in the category of Front Tracking methods but shares the automatic topology handling characteristics of the previously proposed Level Contour Reconstruction Method (LCRM). The LFRM tracks the phase interface explicitly as in Front Tracking, but there is no logical connectivity between interface elements, which greatly eases the algorithmic complexity. Topological changes such as interfacial merging or pinch-off are dealt with automatically and naturally as in the Level Contour Reconstruction Method. Here the method is described for both two- and three-dimensional flow geometries. The interfacial reconstruction technique in the LFRM differs from that in the LCRM formulation by foregoing the use of an Eulerian distance field function. Instead, the LFRM uses information from the original interface elements directly to generate the new interface in a mass-conservative way, thus showing significantly improved local mass conservation. Because the reconstruction procedure is carried out independently in each reconstruction cell after an initial localization process, an adaptive reconstruction procedure can be easily implemented to increase the accuracy while at the same time significantly decreasing the computational time required to perform the reconstruction. Several benchmarking tests are performed to validate the improved accuracy and computational efficiency as compared to the LCRM. The results demonstrate superior performance of the LFRM in maintaining detailed interfacial shapes and good local mass conservation, especially when using low-resolution Eulerian grids.
Analysis of method of 3D shape reconstruction using scanning deflectometry
NASA Astrophysics Data System (ADS)
Novák, Jiří; Novák, Pavel; Mikš, Antonín.
2013-04-01
This work presents a scanning deflectometric approach to solving a 3D surface reconstruction problem, based on measurements of the surface gradient of optically smooth surfaces. It is shown that the description of this problem leads to a nonlinear partial differential equation (PDE) of the first order, from which the surface shape can be reconstructed numerically. A method for efficiently finding the solution of this differential equation is proposed, based on transforming the PDE-solving problem into an optimization problem. We describe different types of surface description for the shape reconstruction, and a numerical simulation of the presented method is performed. The reconstruction process is analyzed by computer simulations and presented through examples. The performed analysis confirms the robustness of the reconstruction method and its suitability for measuring and reconstructing the 3D shape of specular surfaces.
Liu, Kai; Tian, Jie; Qin, Chenghu; Yang, Xin; Zhu, Shouping; Han, Dong; Wu, Ping
2011-04-01
Generally, the performance of tomographic bioluminescence imaging depends on several factors, such as the regularization parameters and the initial guess of the source distribution. In this paper, a global-inexact-Newton based reconstruction method, regularized by a dynamic sparse term, is presented for tomographic reconstruction. The proposed method achieves higher imaging reliability and efficiency. In vivo mouse experimental reconstructions were performed to validate the proposed method. Reconstruction comparisons of the proposed method with other methods demonstrate its applicability over the entire region. Moreover, its reliable performance over a wide range of regularization parameters and initial unknown values was also investigated. Based on the in vivo experiment and a mouse atlas, the tolerance for optical property mismatch was evaluated with optical overestimation and underestimation. Additionally, the reconstruction efficiency was investigated with different sizes of mouse grids. We showed that this method is reliable for tomographic bioluminescence imaging in practical mouse experimental applications. PMID:21529085
Blockwise conjugate gradient methods for image reconstruction in volumetric CT.
Qiu, W; Titley-Peloquin, D; Soleimani, M
2012-11-01
Cone beam computed tomography (CBCT) enables volumetric image reconstruction from 2D projection data and plays an important role in image guided radiation therapy (IGRT). Filtered back projection is still the most frequently used algorithm in applications. The algorithm discretizes the scanning process (forward projection) into a system of linear equations, which must then be solved to recover images from measured projection data. The conjugate gradients (CG) algorithm and its variants can be used to solve (possibly regularized) linear systems of equations Ax = b and linear least-squares problems min_x ‖b − Ax‖_2, especially when the matrix A is very large and sparse. Their applications can be found in a general CT context, but in tomography problems (e.g. CBCT reconstruction) they have not been widely used. Hence, CBCT reconstruction using the CG-type algorithm LSQR was implemented and studied in this paper. In CBCT reconstruction, the main computational challenge is that the matrix A usually is very large, and storing it in full requires an amount of memory well beyond the reach of commodity computers. Because of these memory capacity constraints, only a small fraction of the weighting matrix A is typically used, leading to a poor reconstruction. In this paper, to overcome this difficulty, the matrix A is partitioned and stored blockwise, and blockwise matrix-vector multiplications are implemented within LSQR. This implementation allows us to use the full weighting matrix A for CBCT reconstruction without requiring more powerful hardware. Tikhonov regularization can also be implemented in this fashion, and can produce significant improvement in the reconstructed images. PMID:22325240
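The blockwise idea above can be sketched with SciPy's LSQR, which accepts a LinearOperator instead of an explicit matrix. In this toy sketch the row blocks are small arrays held in memory (in the real setting each block would be streamed from disk), and the `damp` argument plays the role of the Tikhonov term; sizes are made up:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

# Illustrative sketch: the system matrix A is kept as a list of row blocks,
# and LSQR only ever sees blockwise matrix-vector products.
rng = np.random.default_rng(1)
blocks = [rng.standard_normal((25, 30)) for _ in range(4)]  # A split into 4 row blocks
m = sum(B.shape[0] for B in blocks)

def matvec(x):                      # A @ x, one block at a time
    return np.concatenate([B @ x for B in blocks])

def rmatvec(y):                     # A.T @ y, accumulated blockwise
    out = np.zeros(30)
    i = 0
    for B in blocks:
        out += B.T @ y[i:i + B.shape[0]]
        i += B.shape[0]
    return out

A_op = LinearOperator((m, 30), matvec=matvec, rmatvec=rmatvec)
x_true = rng.standard_normal(30)
b = matvec(x_true)
x_hat = lsqr(A_op, b, damp=0.01)[0]  # damp > 0 adds Tikhonov regularization
print(np.linalg.norm(x_hat - x_true))
```

Because LSQR touches A only through `matvec` and `rmatvec`, memory use is set by the largest block rather than the full matrix.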
Exploring Normalization and Network Reconstruction Methods using In Silico and In Vivo Models
Abstract: Lessons learned from the recent DREAM competitions include: The search for the best network reconstruction method continues, and we need more complete datasets with ground truth from more complex organisms. It has become obvious that the network reconstruction methods t...
A combined reconstruction-classification method for diffuse optical tomography.
Hiltunen, P; Prince, S J D; Arridge, S
2009-11-01
We present a combined classification and reconstruction algorithm for diffuse optical tomography (DOT). DOT is a nonlinear ill-posed inverse problem. Therefore, some regularization is needed. We present a mixture of Gaussians prior, which regularizes the DOT reconstruction step. During each iteration, the parameters of a mixture model are estimated. These associate each reconstructed pixel with one of several classes based on the current estimate of the optical parameters. This classification is exploited to form a new prior distribution to regularize the reconstruction step and update the optical parameters. The algorithm can be described as an iteration between an optimization scheme with zeroth-order variable mean and variance Tikhonov regularization and an expectation-maximization scheme for estimation of the model parameters. We describe the algorithm in a general Bayesian framework. Results from simulated test cases and phantom measurements show that the algorithm enhances the contrast of the reconstructed images with good spatial accuracy. The probabilistic classifications of each image contain only a few misclassified pixels. PMID:19820265
On multigrid methods for image reconstruction from projections
Henson, V.E.; Robinson, B.T.; Limber, M.
1994-12-31
The sampled Radon transform of a 2D function can be represented as a continuous linear map R : L¹ → ℝ^N. The image reconstruction problem is: given a vector b ∈ ℝ^N, find an image (or density function) u(x, y) such that Ru = b. Since in general there are infinitely many solutions, the authors pick the solution with minimal 2-norm. Numerous proposals have been made regarding how best to discretize this problem. One can, for example, select a set of functions φ_j that span a particular subspace Ω ⊂ L¹, and model R : Ω → ℝ^N. The subspace Ω may be chosen as a member of a sequence of subspaces whose limit is dense in L¹. One approach to the choice of Ω gives rise to a natural pixel discretization of the image space. Two possible choices of the set φ_j are the set of characteristic functions of finite-width "strips" representing energy transmission paths and the set of intersections of such strips. The authors have studied the eigenstructure of the matrices B resulting from these choices and the effect of applying a Gauss-Seidel iteration to the problem Bw = b. There exists a near null space into which the error vectors migrate with iteration, after which Gauss-Seidel iteration stalls. The authors attempt to accelerate convergence via a multilevel scheme, based on the principles of McCormick's Multilevel Projection Method (PML). Coarsening is achieved by thickening the rays, which results in a much smaller discretization of an optimal grid and a halving of the number of variables. This approach satisfies all the requirements of the PML scheme. They have observed that a multilevel approach based on this idea accelerates convergence at least to the point where noise in the data dominates.
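The stalling behavior described above, fast initial error reduction followed by slow progress once only smooth "near null space" error remains, is easy to see on a toy problem. This sketch applies Gauss-Seidel to a small 1D Laplacian system, a generic stand-in for the strip-based tomography matrix B:

```python
import numpy as np

# Toy illustration of Gauss-Seidel smoothing on a small SPD system B w = b
# (a 1D Laplacian stand-in, not the actual strip-intersection matrix).
# Rough error components decay quickly; smooth ones linger, motivating multigrid.
def gauss_seidel(B, b, w, sweeps):
    n = len(b)
    for _ in range(sweeps):
        for i in range(n):
            w[i] = (b[i] - B[i, :i] @ w[:i] - B[i, i+1:] @ w[i+1:]) / B[i, i]
    return w

n = 50
B = (np.diag(2.0 * np.ones(n))
     + np.diag(-1.0 * np.ones(n - 1), 1)
     + np.diag(-1.0 * np.ones(n - 1), -1))
b = np.zeros(n)                           # exact solution is w = 0, so w itself is the error
w = np.random.default_rng(2).standard_normal(n)

e0 = np.linalg.norm(w)
w = gauss_seidel(B, b, w, 5)
e5 = np.linalg.norm(w)                    # large drop: rough error components smoothed away
w = gauss_seidel(B, b, w, 5)
e10 = np.linalg.norm(w)                   # much smaller drop: smooth error remains, iteration stalls
print(e0, e5, e10)
```

The shrinking gap between successive error norms is exactly the stall that a coarse-grid correction (here, thickened rays) is designed to break.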
Digital reconstructed radiography quality control with software methods
NASA Astrophysics Data System (ADS)
Denis, Eloise; Beaumont, Stephane; Guedon, JeanPierre
2005-04-01
Nowadays, most treatments for external radiotherapy are prepared with Treatment Planning Systems (TPS), which use a virtual patient generated from a set of transverse slices acquired with a CT scanner with the patient in treatment position [1-3]. In the first step of virtual simulation, the TPS is used to define a ballistic plan allowing good target coverage and the lowest irradiation of normal tissues. This parameter optimisation of the treatment with the TPS is realised with particular graphic tools allowing the user to: contour the target; expand the limits of the target to take into account contouring uncertainties, patient set-up errors, movements of the target during the treatment (internal movement of the target and external movement of the patient), and the beam penumbra; determine beam orientations and define the dimensions and shapes of the beams; visualize the beams on the patient's skin and calculate characteristic points that will be tattooed on the patient to assist patient set-up before treatment; and calculate for each beam a Digital Reconstructed Radiography (DRR), obtained by projecting the 3D CT virtual patient and the beam limits with a cone-beam geometry onto a plane. These DRRs allow one to verify the patient positioning during the treatment, essentially by bone structure alignment in comparison with real radiographs taken with the treatment X-ray source in the same geometric conditions (portal imaging). DRRs are therefore preponderant in ensuring the geometric accuracy of the treatment, and for this reason quality control of their computation is mandatory [4]. Until now, this control has been realised with real test objects including special inclusions [4, 5]. This paper proposes using numerical test objects to control the quality of DRR calculation in terms of computation time, beam angle, divergence and magnification precision, and spatial and contrast resolutions. The main advantage of the proposed method is to avoid a real test-object CT acquisition.
Reconstruction method for data protection in telemedicine systems
NASA Astrophysics Data System (ADS)
Buldakova, T. I.; Suyatinov, S. I.
2015-03-01
This report offers an approach to protecting transmitted data by creating paired symmetric keys for the sensor and the receiver. Since biosignals are unique to each person, appropriate processing of them yields the information needed to create cryptographic keys. The processing is based on reconstructing a mathematical model that generates time series diagnostically equivalent to the initial biosignals. Information about the model is transmitted to the receiver, where the physiological time series are restored using the reconstructed model. Thus, the information about the structure and parameters of the biosystem model obtained in the reconstruction process can be used not only for diagnostics, but also for protecting transmitted data in telemedicine complexes.
Dictionary-Learning-Based Reconstruction Method for Electron Tomography
LIU, BAODONG; YU, HENGYONG; VERBRIDGE, SCOTT S.; SUN, LIZHI; WANG, GE
2014-01-01
Electron tomography usually suffers from so-called "missing wedge" artifacts caused by the limited tilt angle range. An equally sloped tomography (EST) acquisition scheme (which should be called the linogram sampling scheme) was recently applied to achieve 2.4-angstrom resolution. On the other hand, a compressive-sensing-inspired reconstruction algorithm, known as adaptive dictionary based statistical iterative reconstruction (ADSIR), has been reported for X-ray computed tomography. In this paper, we evaluate EST, ADSIR, and an ordered-subset simultaneous algebraic reconstruction technique (OS-SART), and compare the equally sloped (ES) and equally angled (EA) data acquisition modes. Our results show that OS-SART is comparable to EST, and that ADSIR outperforms both EST and OS-SART. Furthermore, the equally sloped projection data acquisition mode has no advantage over the conventional equally angled mode in this context. PMID:25104167
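At the heart of dictionary-based reconstruction such as ADSIR is a sparse-coding step: each patch is represented as a combination of a few dictionary atoms. A minimal sketch of that step via orthogonal matching pursuit, with a random dictionary standing in for a learned one (the real method also learns the dictionary adaptively):

```python
import numpy as np

# Toy sparse-coding sketch: recover the few-atom representation of a "patch" y
# over a made-up unit-norm dictionary D using orthogonal matching pursuit.
rng = np.random.default_rng(8)
D = rng.standard_normal((128, 256))
D /= np.linalg.norm(D, axis=0)                    # unit-norm atoms
code = np.zeros(256)
support_true = rng.choice(256, 3, replace=False)
code[support_true] = [1.5, -2.0, 1.0]
y = D @ code                                      # patch built from 3 atoms

def omp(D, y, k):
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))  # best-matching atom
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef       # re-fit, then update residual
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

x_hat = omp(D, y, 3)
print(np.linalg.norm(x_hat - code))
```

In an iterative reconstruction loop, this coding step alternates with a data-fidelity update against the measured projections.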
A new method to combine 3D reconstruction volumes for multiple parallel circular cone beam orbits
Baek, Jongduk; Pelc, Norbert J.
2010-01-01
Purpose: This article presents a new reconstruction method for 3D imaging using a multiple 360° circular orbit cone beam CT system, specifically a way to combine 3D volumes reconstructed with each orbit. The main goal is to improve the noise performance in the combined image while avoiding cone beam artifacts. Methods: The cone beam projection data of each orbit are reconstructed using the FDK algorithm. When at least a portion of the total volume can be reconstructed by more than one source, the proposed combination method combines these overlap regions using weighted averaging in frequency space. The local exactness and the noise performance of the combination method were tested with computer simulations of a Defrise phantom, a FORBILD head phantom, and uniform noise in the raw data. Results: A noiseless simulation showed that the local exactness of the reconstructed volume from the source with the smallest tilt angle was preserved in the combined image. A noise simulation demonstrated that the combination method improved the noise performance compared to a single orbit reconstruction. Conclusions: In CT systems which have overlap volumes that can be reconstructed with data from more than one orbit and in which the spatial frequency content of each reconstruction can be calculated, the proposed method offers improved noise performance while keeping the local exactness of data from the source with the smallest tilt angle. PMID:21089770
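The combination step described above can be sketched schematically: transform each overlapping volume to frequency space, take a weighted average with weights summing to one, and transform back. Here the weights are simple scalars and the volumes are synthetic; the paper derives frequency-dependent weights from the orbits' spectral content:

```python
import numpy as np

# Schematic sketch of combining two overlapping reconstructions in frequency
# space. Shapes, noise level, and the uniform weights are illustrative
# assumptions; the actual method uses frequency-dependent weighting.
rng = np.random.default_rng(3)
truth = rng.standard_normal((32, 32, 32))
vol1 = truth + 0.5 * rng.standard_normal(truth.shape)   # reconstruction from orbit 1
vol2 = truth + 0.5 * rng.standard_normal(truth.shape)   # reconstruction from orbit 2

F1, F2 = np.fft.fftn(vol1), np.fft.fftn(vol2)
w1 = 0.5 * np.ones(truth.shape)          # placeholder weights with w1 + w2 = 1
combined = np.real(np.fft.ifftn(w1 * F1 + (1.0 - w1) * F2))

# Averaging independent noise improves the combined volume.
print(np.linalg.norm(combined - truth), np.linalg.norm(vol1 - truth))
```

With uniform weights this reduces to spatial averaging; the value of working in frequency space is that the weights can favor whichever orbit reconstructs a given frequency band more accurately.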
NASA Astrophysics Data System (ADS)
Chen, Xueli; Yang, Defu; Zhang, Qitan; Liang, Jimin
2014-05-01
Even though bioluminescence tomography (BLT) exhibits significant potential and wide applications in macroscopic in vivo imaging of small animals, the inverse reconstruction is still a tough problem that has plagued researchers in related areas. The ill-posedness of the inverse reconstruction arises from insufficient measurements and modeling errors, so that the inverse reconstruction cannot be solved directly. In this study, an l1/2 regularization based numerical method was developed for effective reconstruction of BLT. In the method, the inverse reconstruction of BLT was constrained as an l1/2 regularization problem, and the weighted interior-point algorithm (WIPA) was then applied to solve the problem by transforming it into the solution of a series of l1 regularizers. The feasibility and effectiveness of the proposed method were demonstrated with numerical simulations on a digital mouse. Stability verification experiments further illustrated the robustness of the proposed method for different levels of Gaussian noise.
Comparison of reconstruction methods and quantitative accuracy in Siemens Inveon PET scanner
NASA Astrophysics Data System (ADS)
Ram Yu, A.; Kim, Jin Su; Kang, Joo Hyun; Moo Lim, Sang
2015-04-01
PET reconstruction is key to the quantification of PET data. To our knowledge, no comparative study of reconstruction methods has been performed to date. In this study, we compared reconstruction methods with various filters in terms of their spatial resolution, non-uniformities (NU), recovery coefficients (RCs), and spillover ratios (SORs). In addition, the linearity between measured and true radioactivity concentrations was also assessed. A Siemens Inveon PET scanner was used in this study. Spatial resolution was measured according to the NEMA standard using a 1 mm3 18F point source. Image quality was assessed in terms of NU, RC and SOR. To measure the effect of reconstruction algorithms and filters, data were reconstructed using FBP, the 3D reprojection algorithm (3DRP), ordered subset expectation maximization 2D (OSEM 2D), and maximum a posteriori (MAP) with various filters or smoothing factors (β). To assess the linearity of reconstructed radioactivity, an image quality phantom filled with 18F was used with FBP, OSEM and MAP (β = 1.5 & 5 × 10-5). The highest achievable volumetric resolution was 2.31 mm3, and the highest RCs were obtained when OSEM 2D was used. SOR was 4.87% for air and 3.97% for water when OSEM 2D reconstruction was used. The measured radioactivity of the reconstructed image was proportional to the injected one for radioactivity below 16 MBq/ml when the FBP or OSEM 2D reconstruction methods were used. By contrast, when the MAP reconstruction method was used, the activity of the reconstructed image increased proportionally regardless of the amount of injected radioactivity. When OSEM 2D or FBP were used, the measured radioactivity concentration was reduced by 53% compared with the true injected radioactivity for radioactivity <16 MBq/ml. The OSEM 2D reconstruction method provides the highest achievable volumetric resolution and the highest RC among all the tested methods and yields a linear relation between the measured and true radioactivity concentrations.
The point-source method for 3D reconstructions for the Helmholtz and Maxwell equations
NASA Astrophysics Data System (ADS)
Ben Hassen, M. F.; Erhard, K.; Potthast, R.
2006-02-01
We use the point-source method (PSM) to reconstruct a scattered field from its associated far field pattern. The reconstruction scheme is described and numerical results are presented for three-dimensional acoustic and electromagnetic scattering problems. We give new proofs of the algorithms, based on the Green and Stratton-Chu formulae, which are more general than the former use of the reciprocity relation. This allows us to handle the case of limited aperture data and arbitrary incident fields. For both 3D acoustics and electromagnetics, numerical reconstructions of the field for different settings and with noisy data are shown. For shape reconstruction in acoustics, we develop an appropriate strategy to identify areas with good reconstruction quality and combine different such regions into one joint function. Then, we show how shapes of unknown sound-soft scatterers are found as level curves of the total reconstructed field.
A method for investigating system matrix properties in optimization-based CT reconstruction
NASA Astrophysics Data System (ADS)
Rose, Sean D.; Sidky, Emil Y.; Pan, Xiaochuan
2016-04-01
Optimization-based iterative reconstruction methods have shown much promise for a variety of applications in X-ray computed tomography (CT). In these reconstruction methods, the X-ray measurement is modeled as a linear mapping from a finite-dimensional image space to a finite-dimensional data space. This mapping depends on a number of factors, including the basis functions used for image representation [1] and the method by which the matrix representing this mapping is generated [2]. Understanding the properties of this linear mapping and how it depends on our choice of parameters is fundamental to optimization-based reconstruction. In this work, we confine our attention to a pixel basis and propose a method to investigate the effect of pixel size in optimization-based reconstruction. The proposed method provides insight into the tradeoff between higher-resolution image representation and matrix conditioning. We demonstrate this method for a particular breast CT system geometry. We find that the images obtained from accurate solution of a least-squares reconstruction optimization problem have high sensitivity to pixel size within certain regimes. We propose two methods by which this sensitivity can be reduced and demonstrate their efficacy. Our results indicate that the choice of pixel size in optimization-based reconstruction can have great impact on the quality of the reconstructed image, and that understanding the properties of the linear mapping modeling the X-ray measurement can help guide us with this choice.
Synthetic Gene Networks: De novo constructs -- in numero descriptions
NASA Astrophysics Data System (ADS)
Hasty, Jeff
2007-03-01
Uncovering the structure and function of gene regulatory networks has become one of the central challenges of the post-genomic era. Theoretical models of protein-DNA feedback loops and gene regulatory networks have long been proposed, and recently, certain qualitative features of such models have been experimentally corroborated. This talk will focus on model and experimental results that demonstrate how a naturally occurring gene network can be used as a "parts list" for synthetic network design. The model formulation leads to computational and analytical approaches relevant to nonlinear dynamics and statistical physics, and the utility of such a formulation will be demonstrated through the consideration of specific design criteria for several novel genetic devices. Fluctuations originating from small molecule-number effects will be discussed in the context of model predictions, and the experimental validation of these stochastic effects underscores the importance of internal noise in gene expression. The underlying methodology highlights the utility of engineering-based methods in the design of synthetic gene regulatory networks.
An automated 3D reconstruction method of UAV images
NASA Astrophysics Data System (ADS)
Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping
2015-10-01
In this paper a novel, fully automated 3D reconstruction approach based on images from a low-altitude unmanned aerial vehicle system (UAV) is presented, which requires no prior camera calibration or any other external prior knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large-scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combinations of images. A high-resolution digital surface model of the study area is produced based on the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction from low-altitude UAV images, and has great potential for the acquisition of spatial information in large-scale mapping, being especially suitable for rapid response and precise modelling in disaster emergencies.
Qian, Weixin; Qi, Shuangxi; Wang, Wanli; Cheng, Jinming; Liu, Dongbing
2011-09-01
Neutron penumbral imaging is a significant diagnostic technique in laser-driven inertial confinement fusion experiments. It is very important to develop new reconstruction methods to improve the resolution of neutron penumbral imaging. A new nonlinear reconstruction method based on total variation (TV) regularization is proposed in this paper. A TV norm is used as the regularization term to construct a smoothing functional for penumbral image reconstruction; in this way, the problem of penumbral image reconstruction is transformed into a functional minimization problem. In addition, a fixed-point iteration scheme is introduced to solve this minimization problem. The numerical experimental results show that, compared to a linear reconstruction method based on the Wiener filter, the TV-regularized nonlinear reconstruction method improves the quality of the reconstructed image, with better noise smoothing and edge preservation. Meanwhile, it achieves a spatial resolution of 5 μm, which is better than that of the Wiener method. PMID:21974584
Method for reconstruction of shape of specular surfaces using scanning beam deflectometry
NASA Astrophysics Data System (ADS)
Miks, Antonin; Novak, Jiri; Novak, Pavel
2013-07-01
A new method is presented for reconstructing the shape of specular surfaces using scanning beam deflectometry. A description and analysis of a deflectometric technique for 3D measurement of specular surfaces is provided, and it is shown that the surface reconstruction problem leads to a theoretical description by a nonlinear partial differential equation. The surface shape can be calculated by solving the derived equation. A method is proposed that makes it possible to solve the deflectometric differential equation for the shape reconstruction effectively. The presented method is non-contact, and no reference surface is needed as in interferometry.
Yang, Alice C; Kretzler, Madison; Sudarski, Sonja; Gulani, Vikas; Seiberlich, Nicole
2016-06-01
The family of sparse reconstruction techniques, including the recently introduced compressed sensing framework, has been extensively explored to reduce scan times in magnetic resonance imaging (MRI). While there are many different methods that fall under the general umbrella of sparse reconstructions, they all rely on the idea that a priori information about the sparsity of MR images can be used to reconstruct full images from undersampled data. This review describes the basic ideas behind sparse reconstruction techniques, how they could be applied to improve MRI, and the open challenges to their general adoption in a clinical setting. The fundamental principles underlying different classes of sparse reconstruction techniques are examined, and the requirements that each makes on the undersampled data are outlined. Applications that could potentially benefit from the accelerations that sparse reconstructions could provide are described, and clinical studies using sparse reconstructions are reviewed. Lastly, technical and clinical challenges to widespread implementation of sparse reconstruction techniques, including optimization, reconstruction times, artifact appearance, and comparison with current gold standards, are discussed. PMID:27003227
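The core idea behind all of these methods, recovering a sparse signal from undersampled linear measurements by combining a data-fidelity term with a sparsity prior, can be sketched with iterative soft-thresholding (ISTA). Sizes, the random measurement operator, λ, and the iteration count below are illustrative assumptions, not a practical MR reconstruction:

```python
import numpy as np

# Minimal ISTA sketch for sparse recovery from undersampled measurements:
# gradient step on ||Ax - b||^2, then soft-thresholding as the sparsity prior.
rng = np.random.default_rng(5)
n, m, k = 128, 60, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)       # undersampled measurement operator
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 3.0 * rng.standard_normal(k)
b = A @ x_true

lam = 0.05
L = np.linalg.norm(A, 2) ** 2                      # step size 1/L guarantees descent
x = np.zeros(n)
for _ in range(3000):
    z = x - (A.T @ (A @ x - b)) / L                # gradient step on the data term
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

In MRI the measurement operator would be an undersampled Fourier transform and the sparsity would typically be enforced in a transform domain (wavelets, finite differences), but the alternation between data consistency and thresholding is the same.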
NASA Astrophysics Data System (ADS)
Liu, Ming; Qin, Zhuanping; Jia, Mengyu; Zhao, Huijuan; Gao, Feng
2015-03-01
A two-layered slab is a rational simplified sample for near-infrared functional brain imaging using diffuse optical tomography (DOT). The quality of reconstructed images is substantially affected by the accuracy of the background optical properties. In this paper, a region-stepwise reconstruction method is proposed for reconstructing the background optical properties of a two-layered slab sample with known geometric information based on continuous-wave (CW) DOT. The optical properties of the top and bottom layers are reconstructed separately using different source-detector-separation groups, according to the depth of maximum brain sensitivity of each separation. We demonstrate the feasibility of the proposed method and investigate the applicable range of the source-detector-separation groups through numerical simulations. The simulation results indicate that the proposed method can effectively reconstruct the background optical properties of a two-layered slab sample. The relative reconstruction errors are less than 10% when the thickness of the top layer is approximately 10 mm. The reconstruction of a target caused by brain activation is also investigated using the reconstructed optical properties. The quantitativeness ratio of the ROI is about 80%, which is higher than that of the conventional method. The spatial resolution (R) of reconstructions with two targets is investigated as well, and the results demonstrate that R with the proposed method is also better than with the conventional method.
The New Method of Tsunami Source Reconstruction With r-Solution Inversion Method
NASA Astrophysics Data System (ADS)
Voronina, T. A.; Romanenko, A. A.
2016-04-01
Application of the r-solution method to reconstructing the initial tsunami waveform is discussed. This methodology is based on the inversion of remote measurements of water-level data. The wave propagation is considered within the scope of linear shallow-water theory. The ill-posed inverse problem in question is regularized by means of a least-squares inversion using the truncated Singular Value Decomposition method. As a result of the numerical process, an r-solution is obtained. The proposed method allows one to control the instability of the numerical solution and to obtain an acceptable result in spite of the ill-posedness of the problem. Applying this methodology to reconstruction of the initial waveform of the 2013 Solomon Islands tsunami validates the theoretical conclusions for synthetic data and a model tsunami source: the inversion result depends strongly on data noisiness and on the azimuthal and temporal coverage of recording stations with respect to the source area. Furthermore, it is possible to make a preliminary selection of the most informative set of the available recording stations used in the inversion process.
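The regularization step described here, a truncated SVD pseudo-inverse, can be sketched in a few lines. The Vandermonde operator and truncation level below are toy assumptions standing in for the actual tsunami-propagation operator:

```python
import numpy as np

def tsvd_solve(A, b, r):
    """Least-squares solution of A x = b keeping only the r largest
    singular values (truncated-SVD regularization, an 'r-solution')."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:r] = 1.0 / s[:r]       # invert retained singular values, discard the rest
    return Vt.T @ (s_inv * (U.T @ b))

# Toy ill-conditioned forward operator (illustrative only).
A = np.vander(np.linspace(0.0, 1.0, 8), 5, increasing=True)
x_true = np.array([1.0, -2.0, 0.5, 0.0, 0.0])
b = A @ x_true
x_r = tsvd_solve(A, b, r=3)      # regularized (truncated) solution
x_full = tsvd_solve(A, b, r=5)   # full-rank solution
```

Truncation trades a small bias for stability: the discarded small singular values are exactly the ones that would amplify measurement noise.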
Network Reconstruction Using Nonparametric Additive ODE Models
Henderson, James; Michailidis, George
2014-01-01
Network representations of biological systems are widespread and reconstructing unknown networks from data is a focal problem for computational biologists. For example, the series of biochemical reactions in a metabolic pathway can be represented as a network, with nodes corresponding to metabolites and edges linking reactants to products. In a different context, regulatory relationships among genes are commonly represented as directed networks with edges pointing from influential genes to their targets. Reconstructing such networks from data is a challenging problem receiving much attention in the literature. There is a particular need for approaches tailored to time-series data and not reliant on direct intervention experiments, as the former are often more readily available. In this paper, we introduce an approach to reconstructing directed networks based on dynamic systems models. Our approach generalizes commonly used ODE models based on linear or nonlinear dynamics by extending the functional class for the functions involved from parametric to nonparametric models. Concomitantly, we limit the complexity by imposing an additive structure on the estimated slope functions. Thus the submodel associated with each node is a sum of univariate functions. These univariate component functions form the basis for a novel coupling metric that we define in order to quantify the strength of proposed relationships and hence rank potential edges. We show the utility of the method by reconstructing networks using simulated data from computational models for the glycolytic pathway of Lactococcus lactis and a gene network regulating the pluripotency of mouse embryonic stem cells. For purposes of comparison, we also assess reconstruction performance using gene networks from the DREAM challenges. We compare our method to those that similarly rely on dynamic systems models and use the results to attempt to disentangle the distinct roles of linearity, sparsity, and derivative
Simultaneous denoising and reconstruction of 5D seismic data via damped rank-reduction method
NASA Astrophysics Data System (ADS)
Chen, Yangkang; Zhang, Dong; Jin, Zhaoyu; Chen, Xiaohong; Zu, Shaohuan; Huang, Weilin; Gan, Shuwei
2016-06-01
The Cadzow rank-reduction method can be effectively utilized in simultaneously denoising and reconstructing 5D seismic data that depend on four spatial dimensions. The classic version of the Cadzow rank-reduction method arranges the 4D spatial data into a level-four block Hankel/Toeplitz matrix and then applies truncated singular value decomposition (TSVD) for rank reduction. When the observed data are extremely noisy, which is often the case for real seismic data, traditional TSVD is not adequate for attenuating the noise and reconstructing the signals. The data reconstructed with the traditional TSVD method tend to contain a significant amount of residual noise, which can be explained by the fact that the reconstructed data space is a mixture of both signal subspace and noise subspace. In order to better decompose the block Hankel matrix into signal and noise components, we introduce a damping operator into the traditional TSVD formula, which we call the damped rank-reduction method. The damped rank-reduction method can obtain a nearly perfect reconstruction performance even when the observed data have an extremely low signal-to-noise ratio (SNR). The feasibility of the improved 5D seismic data reconstruction method was validated via both 5D synthetic and field data examples. We present a comprehensive analysis of the data examples and derive guidelines for better utilizing the proposed method in practice. Since the proposed method is convenient to implement and can achieve immediate improvement, we suggest its wide application in the industry.
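A minimal 1-D analogue of the Hankel rank-reduction idea can be sketched as follows. The damping rule shown is one plausible way of shrinking the retained singular values toward the noise level; it is not claimed to reproduce the paper's exact damping operator, and the sinusoid-plus-noise signal is a toy stand-in for 5D seismic data:

```python
import numpy as np

def damped_rank_reduce(M, rank, N=4):
    """TSVD keeping `rank` singular values, each damped by a factor based on
    the largest discarded singular value (one plausible damping form)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    tail = s[rank] if rank < len(s) else 0.0
    damped = s[:rank] * np.maximum(0.0, 1.0 - (tail / s[:rank]) ** N)
    return (U[:, :rank] * damped) @ Vt[:rank]

def hankelize(x, L):
    """Embed a 1-D signal into an L x (len(x)-L+1) Hankel matrix."""
    return x[np.arange(L)[:, None] + np.arange(len(x) - L + 1)[None, :]]

def dehankelize(H):
    """Invert the embedding by averaging along anti-diagonals."""
    L, K = H.shape
    out = np.zeros(L + K - 1)
    cnt = np.zeros(L + K - 1)
    for i in range(L):
        out[i:i + K] += H[i]
        cnt[i:i + K] += 1
    return out / cnt

# A single sinusoid gives a rank-2 Hankel matrix, so rank=2 separates
# the signal subspace from broadband noise.
rng = np.random.default_rng(1)
t = np.arange(200)
clean = np.sin(2 * np.pi * 0.05 * t)
noisy = clean + 0.5 * rng.standard_normal(200)
denoised = dehankelize(damped_rank_reduce(hankelize(noisy, 100), rank=2))
```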
Zeng, Songjun; Liu, Hongrong; Yang, Qibin
2010-01-01
A method for three-dimensional (3D) reconstruction of macromolecular assemblies, the octahedral symmetry-adapted function (OSAF) method, is introduced in this paper, and a series of formulas for reconstruction by the OSAF method is derived. To verify the feasibility and advantages of the method, two octahedrally symmetric macromolecules, the heat shock protein Degp24 and red-cell L-ferritin, were used as examples for reconstruction by the OSAF method. The simulation schedule was designed as follows: 2000 randomly oriented projections of single particles with predefined Euler angles and centers of origin were generated, and then different levels of noise, with signal-to-noise ratios (S/N) of 0.1, 0.5, and 0.8, were added. The structures reconstructed by the OSAF method were in good agreement with the standard models, and the relative errors of the reconstructed structures with respect to the standard structures were very small even at high noise levels. These results show that the OSAF method is a feasible and efficient approach to reconstructing the structures of macromolecules and is able to suppress the influence of noise. PMID:20150955
A Parallel Reconstructed Discontinuous Galerkin Method for the Compressible Flows on Arbitrary Grids
Hong Luo; Amjad Ali; Robert Nourgaliev; Vincent A. Mousseau
2010-01-01
A reconstruction-based discontinuous Galerkin (RDG) method is presented for the solution of the compressible Navier-Stokes equations on arbitrary grids. In this method, an in-cell reconstruction is used to obtain a higher-order polynomial representation of the underlying discontinuous Galerkin polynomial solution, and an inter-cell reconstruction is used to obtain a continuous polynomial solution on the union of two neighboring, interface-sharing cells. The in-cell reconstruction is designed to enhance the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. The inter-cell reconstruction is devised to remove an interface discontinuity of the solution and its derivatives and thus to provide a simple, accurate, consistent, and robust approximation to the viscous and heat fluxes in the Navier-Stokes equations. A parallel strategy is also devised for the resulting RDG method, based on domain partitioning and the Single Program Multiple Data (SPMD) parallel programming model. The RDG method is used to compute a variety of compressible flow problems on arbitrary meshes to demonstrate its accuracy, efficiency, robustness, and versatility. The numerical results demonstrate that this RDG method is third-order accurate at a cost slightly higher than that of its underlying second-order DG method, while providing better performance than the third-order DG method in terms of both computing cost and storage requirements.
NASA Astrophysics Data System (ADS)
Ramlau, R.; Saxenhuber, D.; Yudytskiy, M.
2014-07-01
The problem of atmospheric tomography arises in ground-based telescope imaging with adaptive optics (AO), where one aims to compensate in real time for the rapidly changing optical distortions in the atmosphere. Many of these systems depend on a sufficient reconstruction of the turbulence profiles in order to obtain a good correction. Due to steadily growing telescope sizes, there is a strong increase in the computational load for atmospheric reconstruction with current methods, first and foremost the MVM. In this paper we present and compare three novel iterative reconstruction methods. The first iterative approach is the Finite Element-Wavelet Hybrid Algorithm (FEWHA), which combines wavelet-based techniques and conjugate gradient schemes to efficiently and accurately tackle the problem of atmospheric reconstruction. The method is extremely fast, highly flexible and yields superior quality. Another novel iterative reconstruction algorithm is the three-step approach, which decouples the problem into the reconstruction of the incoming wavefronts, the reconstruction of the turbulent layers (atmospheric tomography) and the computation of the best mirror correction (fitting step). For the atmospheric tomography problem within the three-step approach, the Kaczmarz algorithm and a gradient-based method have been developed. We present a detailed comparison of our reconstructors in terms of both quality and speed in the context of a Multi-Object Adaptive Optics (MOAO) system for the E-ELT setting on OCTOPUS, the ESO end-to-end simulation tool.
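The Kaczmarz algorithm mentioned for the tomography step is simple to sketch: each iteration projects the current estimate onto the hyperplane defined by one measurement row. The 2 x 2 system below is a toy stand-in for a tomography operator:

```python
import numpy as np

def kaczmarz(A, b, sweeps=50):
    """Cyclic Kaczmarz iteration: successively project the iterate onto
    the hyperplane defined by each row of A."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for ai, bi in zip(A, b):
            x = x + (bi - ai @ x) / (ai @ ai) * ai
    return x

# Tiny consistent system (illustrative only).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])
x = kaczmarz(A, b)
```

For a consistent system the iterates converge linearly to the solution, at a rate set by the angles between the row hyperplanes; no matrix factorization is needed, which is why row-action methods suit very large tomography problems.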
NASA Astrophysics Data System (ADS)
Myers, Glenn R.; Thomas, C. David L.; Paganin, David M.; Gureyev, Timur E.; Clement, John G.
2010-01-01
We present a method for tomographic reconstruction of objects containing several distinct materials, which is capable of accurately reconstructing a sample from vastly fewer angular projections than required by conventional algorithms. The algorithm is more general than many previous discrete tomography methods, as: (i) a priori knowledge of the exact number of materials is not required; (ii) the linear attenuation coefficient of each constituent material may assume a small range of a priori unknown values. We present reconstructions from an experimental x-ray computed tomography scan of cortical bone acquired at the SPring-8 synchrotron.
MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions.
Novosad, Philip; Reader, Andrew J
2016-06-21
Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [(18)F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral
Adaptive region of interest method for analytical micro-CT reconstruction.
Yang, Wanneng; Xu, Xiaochun; Bi, Kun; Zeng, Shaoqun; Liu, Qian; Chen, Shangbin
2011-01-01
Real-time imaging is important in automatic successive inspection with micro-computerized tomography (micro-CT). Generally, the size of the detector is chosen according to the most probable size of the measured object, so as to acquire all the projection data. Given sufficient imaging area and imaging resolution of the X-ray detector, the detector is larger than the specimen projection area, which results in redundant data in the sinogram. The process of real-time micro-CT is computation-intensive because of the large amounts of source and destination data, and the speed of the reconstruction algorithm cannot always meet the requirements of real-time applications. A preprocessing method called adaptive region of interest (AROI), which detects the object's boundaries automatically to focus on the active sinogram regions, is introduced into the analytical reconstruction algorithm in this paper. The AROI method reduces the volume of the reconstruction data and thus directly accelerates the reconstruction process. It is further shown that image quality is not compromised when applying AROI, while the reconstruction speed is increased as the square of the ratio of the sizes of the detector and the specimen slice. In practice, a conch reconstruction experiment indicated that the process is accelerated by a factor of 5.2 with AROI and the imaging quality is not degraded. Therefore, the AROI method improves the speed of analytical micro-CT reconstruction significantly. PMID:21422587
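The boundary-detection-and-crop idea behind AROI can be sketched as follows; the threshold, sinogram size, and shadow location are illustrative assumptions:

```python
import numpy as np

def adaptive_roi(sinogram, thresh=0.0):
    """Find the tightest detector-column band that contains the object's
    shadow over all projection angles, and crop the sinogram to it."""
    cols = np.where((sinogram > thresh).any(axis=0))[0]
    lo, hi = cols[0], cols[-1] + 1
    return sinogram[:, lo:hi], (lo, hi)

# Toy sinogram: 180 angles x 512 detector bins, with the specimen's
# shadow confined to bins 200-311.
sino = np.zeros((180, 512))
sino[:, 200:312] = 1.0
cropped, (lo, hi) = adaptive_roi(sino)
```

Since filtered backprojection cost scales with the reconstructed width squared, cropping the sinogram to the object's support gives the quadratic speed-up the abstract reports.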
NASA Astrophysics Data System (ADS)
Zhu, Dianwen; Zhang, Wei; Zhao, Yue; Li, Changqing
2016-03-01
Dynamic fluorescence molecular tomography (FMT) has the potential to quantify physiological or biochemical information, known as pharmacokinetic parameters, which are important for cancer detection, drug development and delivery, etc. To image those parameters, there are indirect methods, which are easier to implement but tend to provide images with low signal-to-noise ratio, and direct methods, which model all the measurement noise together and are statistically more efficient. The direct reconstruction methods in dynamic FMT have attracted much attention recently. However, the coupling of tomographic image reconstruction and the nonlinearity of kinetic parameter estimation due to the compartment modeling has imposed a huge computational burden on the direct reconstruction of the kinetic parameters. In this paper, we propose to take advantage of both the direct and indirect reconstruction ideas through a variable splitting strategy under the augmented Lagrangian framework. Each iteration of the direct reconstruction is split into two steps: the dynamic FMT image reconstruction and the node-wise nonlinear least-squares fitting of the pharmacokinetic parameter images. Through numerical simulation studies, we have found that the proposed algorithm can achieve good reconstruction results within a small amount of time. This will be a first step toward combined dynamic PET and FMT imaging in the future.
NASA Astrophysics Data System (ADS)
Ren, Hongwu; Dekany, Richard
2004-07-01
Large degree-of-freedom real-time adaptive optics (AO) control requires reconstruction algorithms that are computationally efficient and readily parallelized for hardware implementation. In particular, we find that the wave-front reconstruction for the Hudgin and Fried geometries can be cast into the form of the well-known Sylvester equation using the Kronecker product properties of matrices. We derive the filters and inverse filtering formulas for wave-front reconstruction in the two-dimensional (2-D) Discrete Cosine Transform (DCT) domain for these two geometries using the Hadamard product concept of matrices and the principle of separable variables. We introduce a recursive filtering (RF) method for wave-front reconstruction on an annular aperture, in which an imbedding step converts the annular-aperture wave-front reconstruction into a square-aperture wave-front reconstruction, after which the Hudgin geometry problem is solved on the square aperture. We apply the Alternating Direction Implicit (ADI) method to this imbedding step of the RF algorithm to efficiently solve the annular-aperture wave-front reconstruction problem at a cost of the order of the number of degrees of freedom, O(n). Moreover, the ADI method is better suited for parallel implementation, and we describe a practical real-time implementation for AO systems of order 3,000 actuators.
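Inverse filtering in the DCT domain can be illustrated on the closely related problem of inverting a discrete Neumann Laplacian. The paper derives geometry-specific filters for the Hudgin and Fried geometries; this generic sketch only shows the transform-divide-transform mechanism, with all sizes and data assumed for illustration:

```python
import numpy as np
from scipy.fft import dctn, idctn

def poisson_dct(rhs):
    """Invert the discrete Neumann Laplacian by filtering in the 2-D DCT
    domain: one forward DCT, a pointwise division, one inverse DCT."""
    n, m = rhs.shape
    R = dctn(rhs, type=2, norm="ortho")
    ky = 2.0 * np.cos(np.pi * np.arange(n) / n) - 2.0   # 1-D Laplacian eigenvalues
    kx = 2.0 * np.cos(np.pi * np.arange(m) / m) - 2.0
    denom = ky[:, None] + kx[None, :]
    denom[0, 0] = 1.0        # the constant (piston) mode is undetermined
    P = R / denom
    P[0, 0] = 0.0
    return idctn(P, type=2, norm="ortho")

# Round trip: apply the Neumann 5-point Laplacian to a test surface,
# then recover it (up to an additive constant).
rng = np.random.default_rng(3)
phi = rng.standard_normal((16, 16))
p = np.pad(phi, 1, mode="edge")
lap = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * phi
phi_rec = poisson_dct(lap)
```

The DCT-II diagonalizes the second-difference operator with reflecting boundaries, so the whole inversion costs two fast transforms plus a pointwise divide, which is what makes such filters attractive for real-time AO.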
Reconstruction for 3D PET Based on Total Variation Constrained Direct Fourier Method
Yu, Haiqing; Chen, Zhi; Zhang, Heye; Loong Wong, Kelvin Kian; Chen, Yunmei; Liu, Huafeng
2015-01-01
This paper presents a total variation (TV) regularized reconstruction algorithm for 3D positron emission tomography (PET). The proposed method first employs the Fourier rebinning algorithm (FORE) to rebin the 3D data into a stack of ordinary 2D data sets as sinogram data. The resulting 2D sinograms are then ready to be reconstructed by conventional 2D reconstruction algorithms. Given the locally piecewise-constant nature of PET images, we introduce a total variation (TV) based reconstruction scheme. More specifically, we formulate the 2D PET reconstruction problem as an optimization problem whose objective function consists of the TV norm of the reconstructed image and a data fidelity term measuring the consistency between the reconstructed image and the sinogram. To solve the resulting minimization problem, we apply an efficient method called the Bregman operator splitting algorithm with variable step size (BOSVS). Experiments based on Monte Carlo simulated data and real data are conducted as validation. The experimental results show that the proposed method produces higher accuracy than the conventional direct Fourier (DF) method (the bias of BOSVS is 70% of that of DF, and the variance of BOSVS is 80% of that of DF). PMID:26398232
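The TV-regularized objective can be illustrated with a much simpler solver than BOSVS: plain gradient descent on a smoothed isotropic TV denoising problem (identity forward operator in place of the PET system matrix). All parameter values below are illustrative assumptions:

```python
import numpy as np

def tv_denoise(y, lam=0.2, step=0.1, iters=300, eps=1e-2):
    """Gradient descent on 0.5*||x - y||^2 + lam * smoothed isotropic TV(x).
    (A toy stand-in for the BOSVS solver used in the paper.)"""
    x = y.copy()
    for _ in range(iters):
        dx = np.diff(x, axis=0, append=x[-1:])          # forward differences
        dy = np.diff(x, axis=1, append=x[:, -1:])
        mag = np.sqrt(dx ** 2 + dy ** 2 + eps)          # smoothed gradient magnitude
        px, py = dx / mag, dy / mag
        div = px - np.roll(px, 1, axis=0) + py - np.roll(py, 1, axis=1)
        x = x - step * ((x - y) - lam * div)            # fidelity + TV gradient step
    return x

# Piecewise-constant phantom plus noise: exactly the image class the TV
# prior is suited to.
rng = np.random.default_rng(2)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0
noisy = clean + 0.3 * rng.standard_normal((64, 64))
denoised = tv_denoise(noisy)
```

The TV penalty suppresses noise while preserving the block's sharp edges, which a quadratic smoothness penalty would blur.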
NASA Astrophysics Data System (ADS)
Ko, Han Seo; Gim, Yeonghyeon; Kang, Seung-Hwan
2015-11-01
A three-dimensional optical correction method was developed to reconstruct droplet-based flow fields. For a numerical simulation, synthetic phantoms were reconstructed by a simultaneous multiplicative algebraic reconstruction technique using three projection images positioned at an offset angle of 45°. If the synthetic phantom lies in a conical object whose refractive index differs from that of the atmosphere, the image can be distorted because light is refracted at the surface of the conical object. Thus, the direction of the projection ray was replaced by the refracted ray occurring at the surface of the conical object. To validate the method accounting for this distortion effect, reconstruction results of the developed method were compared with the original phantom. As a result, the reconstruction with the method showed smaller error than that without the method. The method was applied to a Taylor cone, which is caused by a high voltage between a droplet and a substrate, to reconstruct the three-dimensional flow fields for analysis of the characteristics of the droplet. This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Korean government (MEST) (No. 2013R1A2A2A01068653).
Lu, Huancai; Wu, Sean F
2009-03-01
The vibroacoustic responses of a highly nonspherical vibrating object are reconstructed using the Helmholtz equation least-squares (HELS) method. The objectives of this study are to examine the accuracy of reconstruction and the impacts of various parameters involved in reconstruction using HELS. The test object is a simply supported and baffled thin plate. The reason for selecting this object is that it represents a class of structures that cannot be exactly described by the spherical Hankel functions and spherical harmonics, which are taken as the basis functions in the HELS formulation, yet the analytic solutions to the vibroacoustic responses of a baffled plate are readily available, so the accuracy of reconstruction can be checked rigorously. The input field acoustic pressures for reconstruction are generated by the Rayleigh integral. The reconstructed normal surface velocities are validated against the benchmark values, and the out-of-plane vibration patterns at several natural frequencies are compared with the natural modes of a simply supported plate. The impacts of various parameters, such as the number of measurement points, measurement distance, location of the origin of the coordinate system, microphone spacing, and ratio of measurement aperture size to the area of the source surface of reconstruction, on the resultant accuracy of reconstruction are examined. PMID:19275312
Optimization of a Stochastically Simulated Gene Network Model via Simulated Annealing
Tomshine, Jonathan; Kaznessis, Yiannis N.
2006-01-01
By rearranging naturally occurring genetic components, gene networks can be created that display novel functions. When designing these networks, the kinetic parameters describing DNA/protein binding are of great importance, as these parameters strongly influence the behavior of the resulting gene network. This article presents an optimization method based on simulated annealing to locate combinations of kinetic parameters that produce a desired behavior in a genetic network. Since gene expression is an inherently stochastic process, the simulation component of simulated annealing optimization is conducted using an accurate multiscale simulation algorithm to calculate an ensemble of network trajectories at each iteration of the simulated annealing algorithm. Using the three-gene repressilator of Elowitz and Leibler as an example, we show that gene network optimizations can be conducted using a mechanistically realistic model integrated stochastically. The repressilator is optimized to give oscillations of an arbitrary specified period. These optimized designs may then provide a starting-point for the selection of genetic components needed to realize an in vivo system. PMID:16920827
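The simulated annealing loop itself is straightforward to sketch. Here a deterministic quadratic cost stands in for the paper's stochastic gene-network simulation, and all parameter values are illustrative assumptions:

```python
import math
import random

def simulated_anneal(cost, x0, step=0.5, t0=1.0, cooling=0.995, iters=2000, seed=0):
    """Plain simulated annealing: Gaussian perturbations with Metropolis
    acceptance under a geometrically cooled temperature."""
    rng = random.Random(seed)
    x, fx = list(x0), cost(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(iters):
        cand = [xi + rng.gauss(0.0, step) for xi in x]
        fc = cost(cand)
        # Accept improvements always; accept worse moves with Boltzmann probability.
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling
    return best, fbest

# Hypothetical stand-in cost: squared distance of the parameter vector to
# a known target (the paper's cost is evaluated by stochastic simulation).
target = [1.0, -2.0, 0.5]
cost = lambda p: sum((a - b) ** 2 for a, b in zip(p, target))
sol, f = simulated_anneal(cost, [0.0, 0.0, 0.0])
```

In the paper's setting, each call to `cost` would run an ensemble of stochastic network trajectories and score the deviation from the desired oscillation period, which is why accepting occasional uphill moves matters: the noisy, multimodal landscape defeats plain hill climbing.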
Leng, Chengcai; Yu, Dongdong; Zhang, Shuang; An, Yu; Hu, Yifang
2015-01-01
Optical molecular imaging is a promising technique that has been widely used in physiology and pathology at the cellular and molecular levels; it includes different modalities such as bioluminescence tomography, fluorescence molecular tomography and Cerenkov luminescence tomography. The inverse problem is ill-posed for the above modalities, which causes a nonunique solution. In this paper, we propose an effective reconstruction method based on the linearized Bregman iterative algorithm with sparse regularization (LBSR). Considering the sparsity characteristics of the reconstructed sources, the sparsity can be regarded as a kind of a priori information, and sparse regularization is incorporated, which can accurately locate the position of the source. The linearized Bregman iteration method is exploited to minimize the sparse regularization problem so as to achieve fast and accurate reconstruction results. Experimental results on a numerical simulation and an in vivo mouse demonstrate the effectiveness and potential of the proposed method. PMID:26421055
Apparatus And Method For Reconstructing Data Using Cross-Parity Stripes On Storage Media
Hughes, James Prescott
2003-06-17
An apparatus and method for reconstructing missing data using cross-parity stripes on a storage medium is provided. The apparatus and method may operate on data symbols having sizes greater than a data bit. The apparatus and method makes use of a plurality of parity stripes for reconstructing missing data stripes. The parity symbol values in the parity stripes are used as a basis for determining the value of the missing data symbol in a data stripe. A correction matrix is shifted along the data stripes, correcting missing data symbols as it is shifted. The correction is performed from the outside data stripes towards the inner data stripes to thereby use previously reconstructed data symbols to reconstruct other missing data symbols.
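The basic parity primitive underlying such schemes is bytewise XOR: a single parity stripe lets any one lost data stripe be recovered. (The patent's cross-parity scheme uses multiple diagonal parity stripes and a shifted correction matrix; this sketch shows only the single-parity case.)

```python
from functools import reduce

def parity(stripes):
    """Bytewise XOR parity across equal-length stripes."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*stripes))

def reconstruct_stripe(stripes, parity_stripe, lost):
    """Recover the stripe at index `lost`: XOR the survivors with the
    parity stripe; every other stripe's contribution cancels out."""
    survivors = [s for i, s in enumerate(stripes) if i != lost]
    return parity(survivors + [parity_stripe])

data = [b"ABCD", b"EFGH", b"IJKL"]
p = parity(data)
recovered = reconstruct_stripe(data, p, lost=1)
```

Because XOR is its own inverse, XORing the parity stripe with all surviving stripes leaves exactly the missing stripe; multiple parity stripes laid along different diagonals extend the same cancellation argument to multiple simultaneous losses.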
The feasibility of images reconstructed with the method of sieves
Veklerov, E.; Llacer, J.
1989-04-01
The concept of sieves has been applied with the Maximum Likelihood Estimator (MLE) to image reconstruction. While it makes it possible to recover smooth images consistent with the data, the degree of smoothness it provides is arbitrary. It is shown that the concept of feasibility can resolve this arbitrariness. By varying the values of the parameters that determine the degree of smoothness, one can generate images on both sides of the feasibility region, as well as within the region. Feasible images recovered using different sieve parameters are compared with feasible results of other procedures. One- and two-dimensional examples using both simulated and real data sets are considered. 12 refs., 3 figs., 2 tabs.
Possible methods of reconstructing conveyor gallery span structures
Kolesnichenko, V.G.; Beizer, V.N.; Raskina, A.M.
1983-01-01
Problems of reconstruction of industrial buildings and structures are acquiring increasing national economic importance. The Makeevka Construction Engineering Institute conducted investigations of the design of the conveyor galleries at the Yasinovka and Makeevka Coke Works, which have been in operation for 20-40 years. The load-bearing constructions of the span structures are generally welded (riveted in the old galleries) metal trusses. The principal trusses are predominantly discontinuous, with spans of 10-40 m, supported by hinges on intermediate lattice columns and connected through the upper and lower horizontal strips by ties (see figure, a). The height of the main trusses corresponds to the height of the galleries (2.3-3.4 m). The width of a gallery depends on the width and number of conveyors; for galleries with a single conveyor it is 3.1-3.8 m, and 5.5-7.1 m for galleries with two conveyors.
NASA Astrophysics Data System (ADS)
Krasnov, V. V.; Cheremkhin, P. A.; Erkin, I. Yu.; Evtikhiev, N. N.; Starikov, R. S.; Starikov, S. N.
A modified method for increasing the reconstruction quality of diffractive optical elements (DOEs) displayed with liquid crystal (LC) spatial light modulators (SLMs) is presented. The method optimizes a DOE synthesized with a conventional method by applying direct search with a random trajectory while taking LC SLM phase fluctuations into account. A reduction of the synthesis error of up to 88% is achieved.
NASA Astrophysics Data System (ADS)
Poudel, Joemini; Matthews, Thomas P.; Anastasio, Mark A.; Wang, Lihong V.
2016-03-01
Photoacoustic computed tomography (PACT) is an emerging computed imaging modality that exploits optical contrast and ultrasonic detection principles to form images of the absorbed optical energy density within tissue. If the object possesses spatially variant acoustic properties that are unaccounted for by the reconstruction algorithm, the estimated image can contain distortions. While reconstruction algorithms have recently been developed to compensate for this effect, they generally require the object's acoustic properties to be known a priori. To circumvent the need for detailed information regarding an object's acoustic properties, we previously proposed a half-time reconstruction method for PACT, which estimates the image from a data set that has been temporally truncated to exclude the components that have been strongly aberrated. In that approach, the degree of temporal truncation is the same for all measurements. This strategy can be improved upon, however, when the approximate sizes and locations of strongly heterogeneous structures such as gas voids or bones are known. In this work, we investigate PACT reconstruction algorithms based on a variable temporal data truncation (VTDT) approach that generalizes the half-time approach: the degree of temporal truncation for each measurement is determined by the distance between the corresponding transducer location and the nearest known bone or gas-void location. Reconstructed images from a numerical phantom are employed to demonstrate the feasibility and effectiveness of the approach.
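The truncation rule described above — discard, for each transducer, the data arriving after the wavefront from the nearest known heterogeneity — might be sketched as follows. This is an assumed simplified form for illustration; the function name, the hard zeroing, and the single sound speed are not taken from the paper.

```python
import numpy as np

def vtdt_truncate(signals, transducer_pos, hetero_pos, c, dt):
    """Zero each transducer's signal after the arrival time of sound from
    the nearest known bone/gas-void location.

    signals: array of shape (n_transducers, n_samples)
    c: assumed sound speed; dt: sampling interval."""
    out = signals.copy()
    for i, p in enumerate(transducer_pos):
        # distance to the nearest known heterogeneous structure
        dist = min(np.linalg.norm(np.asarray(p) - np.asarray(h))
                   for h in hetero_pos)
        n_keep = int(dist / (c * dt))   # samples before the aberrated part
        out[i, n_keep:] = 0.0
    return out
```

A transducer far from all voids keeps more of its record, which is exactly how VTDT generalizes the fixed half-time cutoff.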
NASA Astrophysics Data System (ADS)
Al-Haddad, N.; Nieves-Chinchilla, T.; Savani, N. P.; Möstl, C.; Marubashi, K.; Hidalgo, M. A.; Roussev, I. I.; Poedts, S.; Farrugia, C. J.
2013-05-01
This study aims to provide a reference for different magnetic field models and reconstruction methods for interplanetary coronal mass ejections (ICMEs). To understand the differences in the outputs of these models and codes, we analyzed 59 events from the Coordinated Data Analysis Workshop (CDAW) list, using four different magnetic field models and reconstruction techniques: force-free fitting, magnetostatic reconstruction using a numerical solution to the Grad-Shafranov equation, fitting to a self-similarly expanding cylindrical configuration, and elliptical non-force-free fitting. The resulting parameters of the reconstructions for the 59 events are compared statistically and in selected case studies. The ability of a method to fit or reconstruct an event is found to vary greatly; this depends on whether the event is a magnetic cloud or not. We find that the magnitude of the axial field is relatively consistent across models, but that the axis orientation of the ejecta is not. We also find that there are a few cases with different signs of the magnetic helicity for the same event when we leave the boundaries free to vary, which illustrates that even this simplest of parameters is not necessarily always clearly constrained by fitting and reconstruction models. Finally, we examine three unique cases in depth to provide a comprehensive idea of the different aspects of how the fitting and reconstruction codes work.
Wide-spectrum reconstruction method for a birefringence interference imaging spectrometer.
Zhang, Chunmin; Jian, Xiaohua
2010-02-01
We present a mathematical method for determining the spectrum detected by a birefringence interference imaging spectrometer (BIIS). The reconstructed spectrum has good precision over a wide spectral range, 0.4-1.0 μm. The method treats the light intensity as a function of wavelength and avoids the fatal error caused by the birefringence effect in the conventional Fourier transform method. The experimental interferogram of the BIIS is processed in this new way, and the interference data and reconstructed spectrum are in good agreement, proving the method to be exact and useful. Application of this method will greatly improve the instrument's performance. PMID:20125723
Adaptive Forward Modeling Method for Analysis and Reconstructions of Orientation Image Map
Frankie Li, Shiu Fai
2014-06-01
IceNine is an MPI-parallel orientation reconstruction and microstructure analysis code. Its primary purpose is to reconstruct a spatially resolved orientation map from a set of diffraction images from a high-energy x-ray diffraction microscopy (HEDM) experiment (1). In particular, IceNine implements the adaptive version of the forward modeling method (2, 3). Part of IceNine is a library for combined analysis of the microstructure with the experimentally measured diffraction signal. The library is also designed for rapid prototyping of new reconstruction and analysis algorithms. IceNine also includes a simulator that generates diffraction images from an input microstructure.
Comparison of kinoform synthesis methods for image reconstruction in Fourier plane
NASA Astrophysics Data System (ADS)
Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Porshneva, Liudmila A.; Rodin, Vladislav G.; Starikov, Sergey N.
2014-05-01
A kinoform is a synthesized phase diffractive optical element that reconstructs an image when illuminated with a plane wave. Kinoforms are used in image-processing systems. Iterative methods have become widespread for kinoform synthesis because of the relatively small error of the resulting intensity distribution. There are articles in which two or three iterative methods are compared, but they use only one or a few test images. The goal of this work is to compare iterative methods using many test images of different types. Images were reconstructed in the Fourier plane from synthesized kinoforms displayed on a phase-only LCOS SLM, and the quality of the reconstructed images and the computational cost of the methods were analyzed. Four synthesis methods were implemented: the Gerchberg-Saxton algorithm (GS), the Fienup algorithm (F), the adaptive-additive algorithm (AA) and the Gerchberg-Saxton algorithm with weight coefficients (GSW). To compare these methods, 50 test images with different characteristics were used: binary and grayscale, contour and non-contour. Image resolution varied from 64×64 to 1024×1024, and image occupancy ranged from 0.008 to 0.89. The synthesized kinoforms had 256 phase levels, equal to the number of phase levels of the SLM LCOS HoloEye PLUTO VIS. Numerical testing showed that the AA method provides the best quality of reconstructed images; the GS, F and GSW methods showed worse results, roughly similar to one another. The execution time of a single iteration is minimal for the GS method and maximal for the F method. The synthesized kinoforms were also optically reconstructed using the phase-only LCOS SLM HoloEye PLUTO VIS, and the results were compared to the numerical ones: the AA method again showed slightly better results than the other methods, especially for grayscale images.
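As a point of reference, the basic Gerchberg-Saxton loop for a Fourier-plane kinoform alternates between enforcing unit amplitude in the kinoform plane and the target amplitude in the Fourier plane. This is a minimal sketch of GS only; the F, AA and GSW variants compared in the paper modify the amplitude-replacement step and are not reproduced here.

```python
import numpy as np

def gerchberg_saxton(target_amplitude, iterations=100, seed=0):
    """Compute a phase-only kinoform whose Fourier-plane amplitude
    approximates the target amplitude (basic GS algorithm)."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, target_amplitude.shape)
    for _ in range(iterations):
        # Fourier plane: keep the phase, impose the target amplitude
        field = np.fft.fft2(np.exp(1j * phase))
        field = target_amplitude * np.exp(1j * np.angle(field))
        # kinoform plane: phase-only element, so keep only the phase
        phase = np.angle(np.fft.ifft2(field))
    return phase
```

Each pass can only reduce (or hold) the distance to both constraint sets, which is why the residual intensity error of the reconstructed image shrinks over iterations.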
Shang, Shang; Bai, Jing; Song, Xiaolei; Wang, Hongkai; Lau, Jaclyn
2007-01-01
The conjugate gradient method is known to be efficient for nonlinear optimization problems with large-dimensional data. In this paper, a penalized linear and nonlinear combined conjugate gradient method for the reconstruction of fluorescence molecular tomography (FMT) is presented. The algorithm combines the linear and nonlinear conjugate gradient methods through a restart strategy, in order to take advantage of both and compensate for their respective disadvantages. A quadratic penalty method is adopted to enforce a nonnegativity constraint and reduce the ill-posedness of the problem. Simulation studies show that the presented algorithm is accurate, stable and fast, and that it performs better than conventional conjugate-gradient-based reconstruction algorithms. It offers an effective approach to reconstructing fluorochrome information for FMT. PMID:18354740
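The linear CG building block, applied to the least-squares normal equations (CGLS), can be sketched as follows. The paper's linear/nonlinear combination, restart strategy and quadratic nonnegativity penalty are not reproduced here; this shows only the generic linear component.

```python
import numpy as np

def cgls(A, b, iters=100, tol=1e-10):
    """Minimize ||A x - b||^2 with linear conjugate gradients applied to
    the normal equations A^T A x = A^T b (CGLS formulation)."""
    x = np.zeros(A.shape[1])
    r = A.T @ (b - A @ x)        # normal-equation residual
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = A @ p
        alpha = rs / (Ap @ Ap)
        x += alpha * p
        r -= alpha * (A.T @ Ap)
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

CGLS avoids forming A^T A explicitly, which matters for the large, dense sensitivity matrices typical of FMT.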
Singh, Gurmeet; Raj, Ashish; Kressler, Bryan; Nguyen, Thanh D.; Spincemaille, Pascal; Zabih, Ramin; Wang, Yi
2010-01-01
Among recent parallel MR imaging reconstruction advances, a Bayesian method called Edge-preserving Parallel Imaging with GRAph cut Minimization (EPIGRAM) has been demonstrated to significantly improve the signal-to-noise ratio (SNR) compared to the conventional regularized sensitivity encoding (SENSE) method. However, EPIGRAM requires a large number of iterations in proportion to the number of intensity labels in the image, making it computationally expensive for high-dynamic-range images. The objective of this study is to develop a Fast EPIGRAM reconstruction based on the efficient binary jump move algorithm that provides a logarithmic reduction in reconstruction time while maintaining image quality. Preliminary in vivo validation of the proposed algorithm is presented for 2D cardiac cine MR imaging and 3D coronary MR angiography at acceleration factors of 2-4. Fast EPIGRAM was found to provide image quality similar to EPIGRAM and to maintain the previously reported SNR improvement over regularized SENSE, while reducing EPIGRAM reconstruction time by a factor of 25-50. PMID:20939095
Bauwens, Maite; Ohlsson, Henrik; Barbé, Kurt; Beelaerts, Veerle; Dehairs, Frank; Schoukens, Johan
2011-11-01
To improve our understanding of the climate process and to assess the human impact on current global warming, past climate reconstruction is essential. The chemical composition of a bivalve shell is strongly coupled to environmental variations and therefore ancient shells are potential climate archives. The nonlinear nature of the relation between environmental condition (e.g. the seawater temperature) and proxy composition makes it hard to predict the former from the latter, however. In this paper we compare the ability of three nonlinear system identification methods to reconstruct the ambient temperature from the chemical composition of a shell. The comparison shows that nonlinear multi-proxy approaches are potentially useful tools for climate reconstructions and that manifold based methods result in smoother and more precise temperature reconstruction. PMID:20888663
A two-step Hilbert transform method for 2D image reconstruction.
Noo, Frédéric; Clackdoyle, Rolf; Pack, Jed D
2004-09-01
The paper describes a new accurate two-dimensional (2D) image reconstruction method consisting of two steps. In the first step, the backprojected image is formed after taking the derivative of the parallel projection data. In the second step, a Hilbert filtering is applied along certain lines in the differentiated backprojection (DBP) image. Formulae for performing the DBP step in fanbeam geometry are also presented. The advantage of this two-step Hilbert transform approach is that in certain situations, regions of interest (ROIs) can be reconstructed from truncated projection data. Simulation results are presented that illustrate very similar reconstructed image quality using the new method compared to standard filtered backprojection, and that show the capability to correctly handle truncated projections. In particular, a simulation is presented of a wide patient whose projections are truncated laterally yet for which highly accurate ROI reconstruction is obtained. PMID:15470913
Evaluation of time-efficient reconstruction methods in digital breast tomosynthesis.
Svahn, T M; Houssami, N
2015-07-01
Three reconstruction algorithms for digital breast tomosynthesis were compared in this article: filtered back-projection (FBP), iterative adapted FBP and maximum likelihood-convex iterative algorithms. Quality metrics such as signal-difference-to-noise ratio, normalised line-profiles and artefact-spread function were used for evaluation of reconstructed tomosynthesis images. The iterative-based methods offered increased image quality in terms of higher detectability and reduced artefacts, which will be further examined in clinical images. PMID:25855075
Novel l2,1-norm optimization method for fluorescence molecular tomography reconstruction
Jiang, Shixin; Liu, Jie; An, Yu; Zhang, Guanglei; Ye, Jinzuo; Mao, Yamin; He, Kunshan; Chi, Chongwei; Tian, Jie
2016-01-01
Fluorescence molecular tomography (FMT) is a promising tomographic method in preclinical research, which enables noninvasive real-time three-dimensional (3-D) visualization for in vivo studies. The ill-posedness of the FMT reconstruction problem is one of the many challenges in the study of FMT. In this paper, we propose an l2,1-norm optimization method using a priori information, mainly the structured sparsity of the fluorescent regions, for FMT reconstruction. Compared to standard sparsity methods, structured sparsity methods are often superior in reconstruction accuracy since they exploit correlations or structures of the reconstructed image. To solve the problem effectively, Nesterov's method was used to accelerate the computation. To evaluate the performance of the proposed l2,1-norm method, numerical phantom experiments and in vivo mouse experiments were conducted. The results show that the proposed method not only achieves accurate and desirable fluorescent source reconstruction, but also demonstrates enhanced robustness to noise. PMID:27375949
An adaptive total variation image reconstruction method for speckles through disordered media
NASA Astrophysics Data System (ADS)
Gong, Changmei; Shao, Xiaopeng; Wu, Tengfei
2013-09-01
Multiple scattering of light in a highly disordered medium can break the diffraction limit of a conventional optical system when combined with an image reconstruction method. Once the transmission matrix of the imaging system is obtained, the target image can be reconstructed from its speckle pattern by an image reconstruction algorithm. Nevertheless, images restored by common reconstruction algorithms such as Tikhonov regularization have a relatively low signal-to-noise ratio (SNR) due to experimental and reconstruction noise, which greatly reduces the quality of the result. In this paper, the speckle pattern of the test image is simulated by combining light propagation theory with statistical optics. Subsequently, an adaptive total variation (ATV) algorithm, TV minimization by augmented Lagrangian and alternating direction algorithms (TVAL3), is utilized to reconstruct the target image. Numerical simulation results show that the TVAL3 algorithm can effectively suppress the noise of the restored image and preserve more image details, thus greatly boosting the SNR of the restored image. They also indicate that, compared with the image formed directly by a 'clean' system, the reconstructed results can overcome the diffraction limit of the 'clean' system, and are therefore conducive to the observation of cells, protein molecules and other micro/nano-scale structures in biological tissues.
Hong Luo; Luquing Luo; Robert Nourgaliev; Vincent Mousseau
2009-06-01
A reconstruction-based discontinuous Galerkin (DG) method is presented for the solution of the compressible Euler equations on arbitrary grids. By taking advantage of readily available and yet invaluable information, namely the derivatives, in the context of discontinuous Galerkin methods, a solution polynomial of one degree higher is reconstructed using a least-squares method. The stencils used in the reconstruction involve only the von Neumann neighborhood (face-neighboring cells) and are compact and consistent with the underlying DG method. The resulting DG method can be regarded as an improvement of a recovery-based DG method in the sense that it shares the same desirable features, such as high accuracy and efficiency, yet overcomes some of its shortcomings, such as a lack of flexibility, compactness and robustness. The developed DG method is used to compute a variety of flow problems on arbitrary meshes to demonstrate the accuracy and efficiency of the method. The numerical results indicate that this reconstructed DG method is able to obtain a third-order accurate solution at a slightly higher cost than the second-order DG method, and offers better performance than the third-order DG method in terms of computing time and storage requirements.
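The reconstruction step — fitting one extra polynomial coefficient per cell by least squares from face-neighbor degrees of freedom — can be illustrated in one dimension. This is a simplified 1D sketch under assumed conventions (each cell carries a value and a derivative at its center), not the paper's multidimensional formulation.

```python
import numpy as np

def reconstruct_quadratic(xc, u, du, i):
    """On cell i, extend the linear DG solution u[i] + du[i]*(x - xc[i])
    to a quadratic with curvature c, fitting c by least squares to the
    values and derivatives carried by the two face neighbors."""
    rows, rhs = [], []
    for j in (i - 1, i + 1):               # von Neumann (face) neighbors
        dx = xc[j] - xc[i]
        # match the neighbor's value: u[i] + du[i]*dx + c*dx^2/2 = u[j]
        rows.append(dx ** 2 / 2.0)
        rhs.append(u[j] - u[i] - du[i] * dx)
        # match the neighbor's derivative: du[i] + c*dx = du[j]
        rows.append(dx)
        rhs.append(du[j] - du[i])
    A = np.array(rows).reshape(-1, 1)
    c, *_ = np.linalg.lstsq(A, np.array(rhs), rcond=None)
    return c[0]
```

When the underlying solution is exactly quadratic, the overdetermined system is consistent and the least-squares fit recovers the curvature exactly, which is the sense in which the reconstruction raises the order of accuracy by one.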
Reconstructed imaging of acoustic cloak using time-lapse reversal method
NASA Astrophysics Data System (ADS)
Zhou, Chen; Cheng, Ying; Xu, Jian-yi; Li, Bo; Liu, Xiao-jun
2014-08-01
We proposed and investigated a solution to the inverse acoustic cloak problem, an anti-stealth technology to make cloaks visible, using the time-lapse reversal (TLR) method. The TLR method reconstructs the image of an unknown acoustic cloak by utilizing scattered acoustic waves. Compared to previous anti-stealth methods, the TLR method can determine not only the existence of a cloak but also its exact geometric information like definite shape, size, and position. Here, we present the process for TLR reconstruction based on time reversal invariance. This technology may have potential applications in detecting various types of cloaks with different geometric parameters.
CuRe - A new wavefront reconstruction method for SH-WFS measurements
NASA Astrophysics Data System (ADS)
Obereder, Andreas; Ramlau, Ronny; Rosensteiner, Matthias; Zhariy, Mariya
2011-09-01
In order to fulfill the real-time requirements for AO on ELTs, one has to either invest in (very) high performance hardware or spend some effort on the development of highly efficient reconstruction algorithms for wavefront sensors. The AAO (Austrian Adaptive Optics) team is involved in deriving wavefront reconstructors for SH- and Pyramid-WFS measurements utilizing the mathematical properties of the forward operators for these wavefront sensors. At the moment, we focus mainly on direct reconstructors with complexity O(n) (where n denotes the number of subapertures of the WFS) to make the reconstruction scalable for large telescopes. In this talk we will introduce a new algorithm, the Cumulative Reconstructor (CuRe), present its properties, namely error propagation of the method and the numerical effort for the reconstruction of the incoming wavefront, as well as first results concerning the quality of the method (dependent on different noise sources). Further improvements of the algorithm, especially a domain decomposition method for enhancing reconstruction quality and improving the overall speed of the algorithm will be presented and analyzed. A speed comparison with different wavefront reconstruction algorithms will be presented to point out the enormous gain of the new CuReD (Cumulative Reconstructor with Domain Decomposition) algorithm concerning numerical performance and applicability for real life telescope adaptive optics applications. In the outlook of the talk we will present first XAO results utilizing a variant of the CuReD for the reconstruction of modulated Pyramid WFS measurements.
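The O(n) character of the Cumulative Reconstructor comes from building the wavefront as running sums of the measured slopes. In one dimension the idea reduces to the sketch below; this is a deliberate simplification for illustration, whereas the actual CuRe(D) works on 2D chains of subapertures with averaging and domain decomposition.

```python
import numpy as np

def cumulative_reconstruct_1d(slopes, d):
    """Reconstruct a 1D wavefront from Shack-Hartmann slope measurements
    by cumulative summation (O(n) in the number of subapertures).
    d is the subaperture spacing; the unobservable piston (mean) is
    removed from the result."""
    w = np.concatenate(([0.0], np.cumsum(slopes) * d))
    return w - w.mean()
```

A linear-ramp wavefront, which has constant slope, is recovered exactly up to piston:

```python
d = 0.5
true_w = np.arange(6) * 0.2            # w(x) = 0.4 x sampled at x = i*d
slopes = np.diff(true_w) / d           # constant slope 0.4
rec = cumulative_reconstruct_1d(slopes, d)
assert np.allclose(rec, true_w - true_w.mean())
```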
Fernandes, Clemente Maia S; Serra, Mônica da Costa; da Silva, Jorge Vicente Lopes; Noritomi, Pedro Yoshito; Pereira, Frederico David Alencar de Sena; Melani, Rodolfo Francisco Haltenhoff
2012-01-10
Facial reconstruction is a method that seeks to recreate a person's facial appearance from his/her skull. This technique can be the last resource used in a forensic investigation, when identification techniques such as DNA analysis, dental records, fingerprints and radiographic comparison cannot be used to identify a body or skeletal remains. To perform facial reconstruction, data on facial soft tissue thickness are necessary. The scientific literature has described differences in facial soft tissue thickness between ethnic groups, and different databases of soft tissue thickness have been published. There are no literature records of facial reconstruction work carried out with soft tissue data obtained from samples of Brazilian subjects, nor are there reports of digital forensic facial reconstruction performed in Brazil. Two databases of soft tissue thickness have been published for the Brazilian population: one obtained from measurements performed on fresh cadavers (fresh cadavers' pattern), and another from measurements using magnetic resonance imaging (Magnetic Resonance pattern). This study aims to perform three different characterized digital forensic facial reconstructions (with hair, eyelashes and eyebrows) of a Brazilian subject (based on an international pattern and two Brazilian patterns for facial soft tissue thickness), and to evaluate the reconstructions by comparing them to photos of the individual and nine other subjects. The DICOM data of the Computed Tomography (CT) donated by a volunteer were converted into stereolithography (STL) files and used for the creation of the digital facial reconstructions. Once the three reconstructions were performed, they were compared to photographs of the subject who had the face reconstructed and nine other subjects. Thirty examiners participated in this recognition process. The target subject was recognized by 26.67% of the examiners in the reconstruction
A novel digital tomosynthesis (DTS) reconstruction method using a deformation field map
Ren Lei; Zhang Junan; Thongphiew, Danthai; Godfrey, Devon J.; Jackie Wu, Q.; Zhou Sumin; Yin Fangfang
2008-07-15
We developed a novel digital tomosynthesis (DTS) reconstruction method using a deformation field map to optimally estimate volumetric information in DTS images. The deformation field map is solved by using prior information, a deformation model, and new projection data. Patients' previous cone-beam CT (CBCT) or planning CT data are used as the prior information, and the new patient volume to be reconstructed is considered as a deformation of the prior patient volume. The deformation field is solved by minimizing bending energy and maintaining new projection data fidelity using a nonlinear conjugate gradient method. The new patient DTS volume is then obtained by deforming the prior patient CBCT or CT volume according to the solution to the deformation field. This method is novel because it is the first method to combine deformable registration with limited angle image reconstruction. The method was tested in 2D cases using simulated projections of a Shepp-Logan phantom, liver, and head-and-neck patient data. The accuracy of the reconstruction was evaluated by comparing both organ volume and pixel value differences between DTS and CBCT images. In the Shepp-Logan phantom study, the reconstructed pixel signal-to-noise ratio (PSNR) for the 60 deg. DTS image reached 34.3 dB. In the liver patient study, the relative error of the liver volume reconstructed using 60 deg. projections was 3.4%. The reconstructed PSNR for the 60 deg. DTS image reached 23.5 dB. In the head-and-neck patient study, the new method using 60 deg. projections was able to reconstruct the 8.1 deg. rotation of the bony structure with 0.0 deg. error. The reconstructed PSNR for the 60 deg. DTS image reached 24.2 dB. In summary, the new reconstruction method can optimally estimate the volumetric information in DTS images using 60 deg. projections. Preliminary validation of the algorithm showed that it is both technically and clinically feasible for image guidance in radiation therapy.
Wang, Jinguo; Zhao, Zhiqin; Song, Jian; Chen, Guoping; Nie, Zaiping; Liu, Qing-Huo
2015-05-15
Purpose: An iterative reconstruction method has been previously reported by the authors of this paper. However, it was demonstrated solely with numerical simulations, and it is essential to apply it under practical conditions. The objective of this work is to validate the capability of the iterative reconstruction method to reduce the effects of acoustic heterogeneity using experimental data in microwave-induced thermoacoustic tomography. Methods: Most existing reconstruction methods must be combined with ultrasonic measurement technology to quantitatively measure the velocity distribution of the heterogeneity, which increases system complexity. Unlike existing methods, the iterative reconstruction method combines the time reversal mirror technique, the fast marching method and the simultaneous algebraic reconstruction technique to iteratively estimate the velocity distribution of heterogeneous tissue using only the measured data. The estimated velocity distribution is then used to reconstruct a highly accurate image of the microwave absorption distribution. Experiments in which a target is placed in an acoustically heterogeneous environment were performed to validate the method. Results: Using the estimated velocity distribution, the target in an acoustically heterogeneous environment can be reconstructed with better shape and higher image contrast than targets reconstructed with a homogeneous velocity distribution. Conclusions: The distortions caused by acoustic heterogeneity can be efficiently corrected by utilizing the velocity distribution estimated by the iterative reconstruction method. Its advantage over existing correction methods is that it improves the quality of the image of the microwave absorption distribution without increasing system complexity.
Song, Jiayu; Liu, Q H
2006-01-01
Non-Cartesian sampling is widely used for fast magnetic resonance imaging (MRI). The well-known gridding method usually requires density compensation to adjust for the non-uniform sampling density, which is a major source of reconstruction error. Minimum-norm least squares (MNLS) reconstruction, on the other hand, does not need density compensation, but requires intensive computation. In this paper, a new version of the MNLS reconstruction method is developed using maximum likelihood and is accelerated by incorporating novel non-uniform fast Fourier transform (NUFFT) and bi-conjugate gradient fast Fourier transform (BCG-FFT) techniques. Studies on computer-simulated phantoms and a physically scanned phantom show improved reconstruction accuracy and signal-to-noise ratio compared to the gridding method. The method is shown to be applicable to arbitrary k-space trajectories. Furthermore, we find that the method in fact performs un-blurring in the image space as an equivalent of density compensation in the k-space. Equating the MNLS solution with the gridding algorithm leads to new approaches for finding optimal density compensation functions (DCF). The method has been applied to radially encoded cardiac imaging in small animals, and reconstructed dynamic images of an in vivo mouse heart are shown. PMID:17946203
Hong Luo; Luqing Luo; Robert Nourgaliev; Vincent A. Mousseau
2010-09-01
A reconstruction-based discontinuous Galerkin (RDG) method is presented for the solution of the compressible Navier–Stokes equations on arbitrary grids. The RDG method, originally developed for the compressible Euler equations, is extended to discretize viscous and heat fluxes in the Navier–Stokes equations using a so-called inter-cell reconstruction, where a smooth solution is locally reconstructed using a least-squares method from the underlying discontinuous DG solution. Similar to the recovery-based DG (rDG) methods, this reconstructed DG method eliminates the introduction of ad hoc penalty or coupling terms commonly found in traditional DG methods. Unlike rDG methods, this RDG method does not need to judiciously choose a proper form of a recovered polynomial, thus is simple, flexible, and robust, and can be used on arbitrary grids. The developed RDG method is used to compute a variety of flow problems on arbitrary meshes to demonstrate its accuracy, efficiency, robustness, and versatility. The numerical results indicate that this RDG method is able to deliver the same accuracy as the well-known Bassi–Rebay II scheme, at a half of its computing costs for the discretization of the viscous fluxes in the Navier–Stokes equations, clearly demonstrating its superior performance over the existing DG methods for solving the compressible Navier–Stokes equations.
NASA Astrophysics Data System (ADS)
Jiang, Mingfeng; Zhang, Heng; Zhu, Lingyan; Cao, Li; Wang, Yaming; Xia, Ling; Gong, Yinglan
2015-04-01
Non-invasively reconstructing the cardiac transmembrane potentials (TMPs) from body surface potentials can be treated as a regression problem. The support vector regression (SVR) method is often used to solve such regression problems; however, the computational cost of the SVR training algorithm is usually high. In this paper, another learning algorithm, termed the extreme learning machine (ELM), is proposed to reconstruct the cardiac TMPs. Moreover, ELM can be extended to single-hidden-layer feedforward neural networks with a kernel matrix (kernelized ELM), which can achieve good generalization performance at a fast learning speed. Based on realistic heart-torso models, one normal and two abnormal ventricular activation cases are used for training and testing the regression model. The experimental results show that the ELM method outperforms the single SVR method in terms of TMP reconstruction accuracy and reconstruction speed. Moreover, compared with the basic ELM method, the kernelized ELM method features good approximation and generalization ability when reconstructing the TMPs.
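The extreme learning machine described above admits a very short sketch: hidden-layer weights are drawn at random and only the output weights are solved, in closed form, by least squares. This is a generic toy illustration (names and data are mine), not the authors' TMP-reconstruction code:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, Y, n_hidden=50):
    """Train a single-hidden-layer ELM: random input weights and
    biases, output weights solved in closed form by least squares."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                       # random feature map
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression: recover y = sin(x) from samples
X = np.linspace(-3, 3, 200).reshape(-1, 1)
Y = np.sin(X).ravel()
W, b, beta = elm_fit(X, Y, n_hidden=40)
rmse = np.sqrt(np.mean((Y - elm_predict(X, W, b, beta)) ** 2))
```

The kernelized variant replaces the explicit random feature map H with a kernel matrix, which is the extension the abstract refers to.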
R-L Method and BLS-GSM Denoising for Penumbra Image Reconstruction
NASA Astrophysics Data System (ADS)
Zhang, Mei; Li, Yang; Sheng, Liang; Li, Chunhua; Wei, Fuli; Peng, Bodong
2013-12-01
When the neutron yield is very low, reconstruction of a coded penumbra image is rather difficult. In this paper, low-yield (10⁹) 14 MeV neutron penumbra imaging was simulated by the Monte Carlo method. The Richardson-Lucy (R-L) iteration method, combined with Bayesian least squares-Gaussian scale mixture (BLS-GSM) wavelet denoising, was proposed for reconstructing the simulated image. The optimal number of R-L iterations was determined through extensive testing. The results show that, compared with the Wiener method and median-filter denoising, this method is better at restraining background noise, the correlation coefficient Rsr between the reconstructed and real images is larger, and the reconstruction result is better.
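For reference, the basic Richardson-Lucy iteration (without the BLS-GSM denoising step, which is the paper's addition) can be sketched in 1-D as follows; the toy spike signal and all names are mine:

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=50):
    """Basic 1-D Richardson-Lucy deconvolution (no denoising step)."""
    psf = psf / psf.sum()
    psf_flip = psf[::-1]
    estimate = np.full_like(blurred, blurred.mean())
    for _ in range(n_iter):
        conv = np.convolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(conv, 1e-12)
        estimate *= np.convolve(ratio, psf_flip, mode="same")
    return estimate

# Toy test: blur two spikes and deconvolve
x = np.zeros(64)
x[20], x[40] = 5.0, 3.0
psf = np.array([0.25, 0.5, 0.25])
y = np.convolve(x, psf, mode="same")
rec = richardson_lucy(y, psf, n_iter=200)
```

The multiplicative update keeps the estimate non-negative, which is why R-L suits low-count penumbra data.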
Runout error correction in tomographic reconstruction by intensity summation method.
Kwon, Ik Hwan; Lim, Jun; Hong, Chung Ki
2016-09-01
An alignment method for correction of the axial and radial runout errors of the rotation stage in X-ray phase-contrast computed tomography has been developed. Only intensity information was used, without extra hardware or complicated calculation. Notably, the method, as demonstrated herein, can utilize the halo artifact to determine displacement. PMID:27577781
NASA Astrophysics Data System (ADS)
Yamaguchi, Yusaku; Kojima, Takeshi; Yoshinaga, Tetsuya
2016-03-01
In clinical X-ray computed tomography (CT), filtered back-projection as a transform method and iterative reconstruction such as the maximum-likelihood expectation-maximization (ML-EM) method are well-known approaches to reconstructing tomographic images. As an alternative reconstruction method, we have presented a continuous-time image reconstruction (CIR) system described by a nonlinear dynamical system, based on the idea of continuous methods for solving tomographic inverse problems. Recently, we have also proposed a multiplicative CIR system described by differential equations based on the minimization of a weighted Kullback-Leibler divergence. We prove theoretically that the divergence measure decreases along the solution to the CIR system for consistent inverse problems. Given the noisy nature of projections in clinical CT, the inverse problem belongs to the category of ill-posed problems. The performance of a noise-reduction scheme for the newly developed CIR system was investigated by means of numerical experiments using a circular phantom image. Compared to the conventional CIR and ML-EM methods, the proposed CIR method has an advantage on noisy projections with lower signal-to-noise ratios, in terms of the divergence measure on the actual image under the same common measure observed via the projection data. The results lead to the conclusion that the multiplicative CIR method is more effective and robust for noise reduction in CT than the ML-EM and conventional CIR methods.
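The ML-EM baseline mentioned above is compact enough to sketch. The multiplicative update below is the standard one (x ← x · Aᵀ(b/Ax) / Aᵀ1); the tiny system is a toy of mine, not the paper's CT geometry:

```python
import numpy as np

def ml_em(A, b, n_iter=100):
    """Maximum-likelihood EM iteration for emission/transmission-style
    tomography: x <- x * A^T(b / Ax) / A^T 1."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])     # sensitivity (A^T 1)
    for _ in range(n_iter):
        proj = A @ x
        x *= (A.T @ (b / np.maximum(proj, 1e-12))) / sens
    return x

# Tiny consistent system: 2 pixels, 3 projection bins
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 5.0])
b = A @ x_true
x = ml_em(A, b, n_iter=100)
```

For consistent, noise-free data the iterates converge to the exact non-negative solution; the CIR systems in the abstract replace this discrete iteration with a continuous-time dynamical system.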
Hong Luo; Yidong Xia; Robert Nourgaliev; Chunpei Cai
2011-06-01
A reconstruction-based discontinuous Galerkin (RDG) method is presented for the solution of the compressible Navier-Stokes equations on unstructured tetrahedral grids. The RDG method, originally developed for the compressible Euler equations, is extended to discretize viscous and heat fluxes in the Navier-Stokes equations using a so-called inter-cell reconstruction, where a smooth solution is locally reconstructed using a least-squares method from the underlying discontinuous DG solution. Similar to the recovery-based DG (rDG) methods, this reconstructed DG method eliminates the introduction of ad hoc penalty or coupling terms commonly found in traditional DG methods. Unlike rDG methods, this RDG method does not need to judiciously choose a proper form of a recovered polynomial; it is therefore simple, flexible, and robust, and can be used on unstructured grids. The preliminary results indicate that this RDG method is stable on unstructured tetrahedral grids, and provides a viable and attractive alternative for the discretization of the viscous and heat fluxes in the Navier-Stokes equations.
Xing, Pei; Chen, Xin; Luo, Yong; Nie, Suping; Zhao, Zongci; Huang, Jianbin; Wang, Shaowu
2016-01-01
Large-scale climate history of the past millennium reconstructed solely from tree-ring data is prone to underestimate the amplitude of low-frequency variability. In this paper, we aimed to solve this problem by utilizing a novel method termed "MDVM", a combination of the ensemble empirical mode decomposition (EEMD) and variance matching techniques. We compiled a set of 211 tree-ring records from the extratropical Northern Hemisphere (30-90°N) in an effort to develop a new reconstruction of the annual mean temperature by the MDVM method. From this dataset, 126 records were screened to reconstruct temperature variability at decadal and longer scales for the period 850-2000 AD. The MDVM reconstruction depicted significant low-frequency variability in the past millennium, with an evident Medieval Warm Period (MWP) over the interval 950-1150 AD and a pronounced Little Ice Age (LIA) culminating in 1450-1850 AD. In the context of the 1150-year reconstruction, the accelerating warming in the 20th century was likely unprecedented; the coldest decades appeared in the 1640s, 1600s and 1580s, whereas the warmest decades occurred in the 1990s, 1940s and 1930s. Additionally, the MDVM reconstruction covaried broadly with changes in natural radiative forcing, and in particular showed distinct footprints of multiple volcanic eruptions in the last millennium. Comparisons of our results with previous reconstructions and model simulations showed the efficiency of the MDVM method in capturing low-frequency variability, particularly the much colder signals of the LIA relative to the reference period. Our results demonstrate that the MDVM method has advantages in studying large-scale, low-frequency climate signals using tree-ring data alone. PMID:26751947
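Of the two ingredients of MDVM, the variance matching step is simple to sketch (EEMD is omitted here). The sketch below rescales a proxy series to the mean and variance of an instrumental target over a calibration period; the synthetic series and names are mine:

```python
import numpy as np

def variance_match(proxy, target):
    """Rescale a proxy series so its mean and variance match the
    instrumental target over their common (calibration) period."""
    scaled = (proxy - proxy.mean()) / proxy.std()
    return scaled * target.std() + target.mean()

rng = np.random.default_rng(1)
target = 10.0 + 2.0 * rng.standard_normal(150)   # e.g. observed temperature
proxy = 3.0 * (target - 10.0) / 2.0 + 0.5        # same signal, proxy units
rec = variance_match(proxy, target)
```

Because the toy proxy is a positive linear transform of the target, variance matching recovers the target exactly; with real proxies only the mean and variance are matched, which is precisely how the method restores low-frequency amplitude.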
A physics-based intravascular ultrasound image reconstruction method for lumen segmentation.
Mendizabal-Ruiz, Gerardo; Kakadiaris, Ioannis A
2016-08-01
Intravascular ultrasound (IVUS) is a medical imaging technique in which a miniaturized ultrasound transducer located at the tip of a catheter is introduced into the blood vessels, providing high-resolution, cross-sectional images of their interior. Current methods for generating an IVUS image reconstruction from radio-frequency (RF) data do not account for the physics involved in the interaction between the IVUS ultrasound signal and the tissues of the vessel. In this paper, we present a novel method to generate an IVUS image reconstruction based on a scattering model that treats the tissues of the vessel as a distribution of three-dimensional point scatterers. We evaluated the impact of employing the proposed IVUS image reconstruction method in the segmentation of the lumen/wall interface on 40 MHz IVUS data using an existing automatic lumen segmentation method. We compared the results with those obtained using the B-mode reconstruction on 600 randomly selected frames from twelve pullback sequences acquired from rabbit aortas and different arteries of swine. Our results indicate the feasibility of employing the proposed IVUS image reconstruction for the segmentation of the lumen. PMID:27235803
An, Yu; Liu, Jie; Zhang, Guanglei; Ye, Jinzuo; Mao, Yamin; Jiang, Shixin; Shang, Wenting; Du, Yang; Chi, Chongwei; Tian, Jie
2015-10-01
Fluorescence molecular tomography (FMT) is a promising tool in the study of cancer, drug discovery, and disease diagnosis, enabling noninvasive and quantitative imaging of the biodistribution of fluorophores in deep tissues via image reconstruction techniques. Conventional reconstruction methods based on the finite-element method (FEM) have achieved acceptable stability and efficiency. However, some inherent shortcomings in FEM meshes, such as time consumption in mesh generation and a large discretization error, limit further biomedical application. In this paper, we propose a meshless method for reconstruction of FMT (MM-FMT) using compactly supported radial basis functions (CSRBFs). With CSRBFs, the image domain can be accurately expressed by continuous CSRBFs, avoiding the discretization error to a certain degree. After direct collocation with CSRBFs, the conventional optimization techniques, including Tikhonov, L1-norm iteration shrinkage (L1-IS), and sparsity adaptive matching pursuit, were adopted to solve the meshless reconstruction. To evaluate the performance of the proposed MM-FMT, we performed numerical heterogeneous mouse experiments and in vivo bead-implanted mouse experiments. The results suggest that the proposed MM-FMT method can reduce the position error of the reconstruction result to smaller than 0.4 mm for the double-source case, which is a significant improvement for FMT. PMID:26451513
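To illustrate the compactly supported radial basis functions underlying MM-FMT, here is a minimal 1-D collocation sketch using the Wendland C2 kernel; this is a generic CSRBF interpolation example of mine, not the FMT forward model:

```python
import numpy as np

def wendland_c2(r):
    """Wendland C2 compactly supported RBF (zero for r >= 1)."""
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

def csrbf_interpolate(centers, values, queries, support=1.0):
    """Collocate a CSRBF expansion on scattered 1-D data, then
    evaluate it at query points."""
    def kernel(a, bpts):
        r = np.abs(a[:, None] - bpts[None, :]) / support
        return wendland_c2(r)
    K = kernel(centers, centers)          # sparse-by-construction, SPD
    coeffs = np.linalg.solve(K, values)
    return kernel(queries, centers) @ coeffs

# 1-D toy: interpolate f(x) = x^2 on [0, 1]
centers = np.linspace(0.0, 1.0, 15)
values = centers**2
queries = np.linspace(0.05, 0.95, 7)
approx = csrbf_interpolate(centers, values, queries, support=0.5)
```

Compact support keeps the collocation matrix sparse, which is the property that lets a meshless method avoid FEM mesh generation.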
Ahmad, Munir; Shahzad, Tasawar; Masood, Khalid; Rashid, Khalid; Tanveer, Muhammad; Iqbal, Rabail; Hussain, Nasir; Shahid, Abubakar; Fazal-E-Aleem
2016-06-01
Emission tomographic image reconstruction is an ill-posed problem: the data are limited and noisy and are subject to various image-degrading effects, which leads to noisy reconstructions. Explicit regularization, through iterative reconstruction methods, is considered a better way to compensate for reconstruction noise. Local smoothing and edge-preserving regularization methods can reduce reconstruction-based noise. However, these methods produce overly smoothed images or blocky artefacts in the final image because they can only exploit local image properties. Recently, non-local regularization techniques have been introduced to overcome these problems by incorporating the geometrical global continuity and connectivity present in the objective image. These techniques can overcome the drawbacks of local regularization methods; however, they also have certain limitations, such as the choice of the regularization function, the neighbourhood size, or the calibration of the several empirical parameters involved. This work compares different local and non-local regularization techniques used in emission tomographic imaging in general, and in emission computed tomography in particular, with respect to the quality of the resultant images. PMID:26714680
Chen, Xueli; Yang, Defu; Zhang, Qitan; Liang, Jimin (E-mail: jimleung@mail.xidian.edu.cn)
2014-05-14
Even though bioluminescence tomography (BLT) exhibits significant potential and wide applications in macroscopic imaging of small animals in vivo, the inverse reconstruction is still a tough problem that has plagued researchers in the field. The ill-posedness of the inverse reconstruction arises from insufficient measurements and modeling errors, so the inverse reconstruction cannot be solved directly. In this study, an l1/2-regularization-based numerical method was developed for effective reconstruction in BLT. In the method, the inverse reconstruction of BLT is cast as an l1/2 regularization problem, and the weighted interior-point algorithm (WIPA) is applied to solve the problem by transforming it into the solution of a series of l1 regularizers. The feasibility and effectiveness of the proposed method were demonstrated with numerical simulations on a digital mouse. Stability verification experiments further illustrated the robustness of the proposed method for different levels of Gaussian noise.
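The WIPA scheme above reduces the l1/2 problem to a sequence of l1 subproblems. One such l1 subproblem can be sketched with plain ISTA (soft-thresholded gradient steps); the toy sparse-recovery setup and names are mine, and WIPA itself is not reproduced here:

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_l1(A, b, lam, n_iter=2000):
    """ISTA for min 0.5*||Ax - b||^2 + lam*||x||_1 -- the kind of
    l1 subproblem an l1/2 scheme solves repeatedly."""
    L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 60))       # underdetermined toy system
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]  # 3-sparse "source" vector
b = A @ x_true
x = ista_l1(A, b, lam=0.01, n_iter=2000)
```

In the l1/2 setting, each outer iteration would reweight lam per coefficient before rerunning this inner solve.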
NASA Astrophysics Data System (ADS)
Lu, Yao; Chan, Heang-Ping; Wei, Jun; Hadjiiski, Lubomir M.
2013-02-01
Digital breast tomosynthesis (DBT) has strong promise to improve sensitivity for detecting breast cancer. DBT reconstruction estimates the breast tissue attenuation using projection views (PVs) acquired in a limited angular range. Because of the limited field of view (FOV) of the detector, the PVs may not completely cover the breast in the x-ray source motion direction at large projection angles. The voxels in the imaged volume cannot be updated when they are outside the FOV, thus causing a discontinuity in intensity across the FOV boundaries in the reconstructed slices, which we refer to as the truncated projection artifact (TPA). Most existing TPA reduction methods were developed for the filtered backprojection method in the context of computed tomography. In this study, we developed a new diffusion-based method to reduce TPAs during DBT reconstruction using the simultaneous algebraic reconstruction technique (SART). Our TPA reduction method compensates for the discontinuity in background intensity outside the FOV of the current PV after each PV updating in SART. The difference in voxel values across the FOV boundary is smoothly diffused to the region beyond the FOV of the current PV. Diffusion-based background intensity estimation is performed iteratively to avoid structured artifacts. The method is applicable to TPA in both the forward and backward directions of the PVs and for any number of iterations during reconstruction. The effectiveness of the new method was evaluated by comparing the visual quality of the reconstructed slices and the measured discontinuities across the TPA with and without artifact correction at various iterations. The results demonstrated that the diffusion-based intensity compensation method reduced the TPA while preserving the detailed tissue structures. The visibility of breast lesions obscured by the TPA was improved after artifact reduction.
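The SART update that the TPA compensation is embedded in can be sketched as follows; this is the generic simultaneous update with row- and column-sum weighting on a toy system of mine, without the diffusion-based intensity compensation:

```python
import numpy as np

def sart(A, b, n_iter=500, relax=1.0):
    """Simultaneous algebraic reconstruction technique (SART):
    x <- x + relax * V^-1 A^T W (b - Ax), with row-sum weights W
    and column-sum weights V."""
    W = 1.0 / np.maximum(A.sum(axis=1), 1e-12)
    V = 1.0 / np.maximum(A.sum(axis=0), 1e-12)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x += relax * V * (A.T @ (W * (b - A @ x)))
    return x

# Toy consistent system: 3 voxels, 4 ray sums
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
b = A @ x_true
x = sart(A, b, n_iter=500)
```

In DBT, voxels outside a projection's FOV simply receive no update from that view, which is what creates the intensity discontinuity the paper's diffusion step smooths out.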
A sampling method for the reconstruction of a periodic interface in a layered medium
NASA Astrophysics Data System (ADS)
Sun, Guanying; Zhang, Ruming
2016-07-01
In this paper, we consider the inverse problem of reconstructing periodic interfaces in a two-layered medium with TM-mode. We propose a sampling-type method to recover the top periodic interface from the near-field data measured on a straight line above the total structure. Finally, numerical experiments are illustrated to show the effectiveness of the method.
ERIC Educational Resources Information Center
Dockray, Samantha; Grant, Nina; Stone, Arthur A.; Kahneman, Daniel; Wardle, Jane; Steptoe, Andrew
2010-01-01
Measurement of affective states in everyday life is of fundamental importance in many types of quality of life, health, and psychological research. Ecological momentary assessment (EMA) is the recognized method of choice, but the respondent burden can be high. The day reconstruction method (DRM) was developed by Kahneman and colleagues ("Science,"…
Method and apparatus for reconstructing in-cylinder pressure and correcting for signal decay
Huang, Jian
2013-03-12
A method comprises steps for reconstructing in-cylinder pressure data from a vibration signal collected from a vibration sensor mounted on an engine component where it can generate a signal with a high signal-to-noise ratio, and correcting the vibration signal for errors introduced by vibration signal charge decay and sensor sensitivity. The correction factors are determined as a function of estimated motoring pressure and the measured vibration signal itself with each of these being associated with the same engine cycle. Accordingly, the method corrects for charge decay and changes in sensor sensitivity responsive to different engine conditions to allow greater accuracy in the reconstructed in-cylinder pressure data. An apparatus is also disclosed for practicing the disclosed method, comprising a vibration sensor, a data acquisition unit for receiving the vibration signal, a computer processing unit for processing the acquired signal and a controller for controlling the engine operation based on the reconstructed in-cylinder pressure.
A gene network engineering platform for lactic acid bacteria
Kong, Wentao; Kapuganti, Venkata S.; Lu, Ting
2016-01-01
Recent developments in synthetic biology have positioned lactic acid bacteria (LAB) as a major class of cellular chassis for applications. To achieve the full potential of LAB, one fundamental prerequisite is the capacity for rapid engineering of complex gene networks, such as natural biosynthetic pathways and multicomponent synthetic circuits, into which cellular functions are encoded. Here, we present a synthetic biology platform for rapid construction and optimization of large-scale gene networks in LAB. The platform involves a copy-controlled shuttle for hosting target networks and two associated strategies that enable efficient genetic editing and phenotypic validation. By using a nisin biosynthesis pathway and its variants as examples, we demonstrated multiplex, continuous editing of small DNA parts, such as ribosome-binding sites, as well as efficient manipulation of large building blocks such as genes and operons. To showcase the platform, we applied it to expand the phenotypic diversity of the nisin pathway by quickly generating a library of 63 pathway variants. We further demonstrated its utility by altering the regulatory topology of the nisin pathway for constitutive bacteriocin biosynthesis. This work demonstrates the feasibility of rapid and advanced engineering of gene networks in LAB, fostering their applications in biomedicine and other areas. PMID:26503255
NASA Astrophysics Data System (ADS)
Miéville, Frédéric A.; Ayestaran, Paul; Argaud, Christophe; Rizzo, Elena; Ou, Phalla; Brunelle, Francis; Gudinchet, François; Bochud, François; Verdun, Francis R.
2010-04-01
Adaptive Statistical Iterative Reconstruction (ASIR) is an image reconstruction technique recently introduced by General Electric (GE). This technique, when combined with a conventional filtered back-projection (FBP) approach, is able to improve image noise reduction. To quantify the benefits of the ASIR method over pure FBP in terms of image quality and dose reduction, the standard deviation (SD), the modulation transfer function (MTF), the noise power spectrum (NPS), the image uniformity and the noise homogeneity were examined. Measurements were performed on a quality control phantom while varying the CT dose index (CTDIvol) and the reconstruction kernels. A 64-MDCT was employed and raw data were reconstructed with different percentages of ASIR on a CT console dedicated to ASIR reconstruction. Three radiologists also assessed a pediatric cardiac exam reconstructed with different ASIR percentages using the visual grading analysis (VGA) method. For the standard, soft and bone reconstruction kernels, the SD is reduced as the ASIR percentage increases up to 100%, with a greater benefit at low CTDIvol. MTF medium frequencies were slightly enhanced and modifications of the NPS curve shape were observed. However, for the pediatric cardiac CT exam, VGA scores indicate an upper limit to the ASIR benefit: 40% ASIR was found to be the best trade-off between noise reduction and the clinical realism of organ images. Using the phantom results, 40% ASIR corresponded to an estimated dose reduction of 30% under pediatric cardiac protocol conditions. In spite of this discrepancy between phantom and clinical results, the ASIR method is an important option for reducing radiation dose, especially for pediatric patients.
Revisiting the analog method to obtain uncertainty estimates for proxy surrogate reconstructions
NASA Astrophysics Data System (ADS)
Bothe, Oliver
2015-04-01
Proxy surrogate reconstructions are a computationally cheap method of combining information from spatially sparse proxy records or instrumental data series with the spatially complete fields from climate simulations, to increase our knowledge about past climates. The method assumes that the analog pool includes the entire bandwidth of the state space of the variable under consideration. As proxy records are uncertain indicators of the state of past climate variables, the analog search should ideally allow for the inclusion of the variance unexplained by the proxy indicator in the variable of interest, i.e. it should quantify the uncertainty of the reconstructions based on the signal strength in the proxy records. Up to this point, traditional implementations have not considered this uncertainty. This presentation details assumptions, based on the calibration correlation of the proxies, which result in an ensemble pool of analogs consistent with the proxy record at each data point while explicitly considering the noise in the proxy record. The proxy pool of the Euro2K reconstruction and the MPI-ESM-COSMOS ensemble of simulations of the last millennium provide the data to obtain a set of proxy surrogate field estimates of June-July-August summer near-surface air temperature over the last 750 years for the European domain. The restrictions imposed on the analog selection can result in a failure to find suitable analogs. The underlying assumptions allow the construction of an uncertainty envelope for the areal mean of the field reconstructions. The ensemble of fields further highlights the ambiguity of field reconstructions constrained by a limited set of proxies. Additionally, the uncertainty envelope, its median estimate and the respective best estimate can be used to easily validate reconstructions obtained with more complex methods. That is, the proxy surrogate reconstruction estimates agree very well with the Euro2K reconstruction over the last 750 years. They also well
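The basic analog search behind a proxy surrogate reconstruction is short: from a pool of simulated fields, pick the one whose values at the proxy sites are closest to the proxy record. A minimal sketch with synthetic data follows (all names are mine; the paper's uncertainty-envelope extension is not shown):

```python
import numpy as np

def best_analog(proxy_values, proxy_idx, sim_fields):
    """Return the index of the simulated field closest to the proxy
    record at the sparse proxy locations (Euclidean distance)."""
    dists = np.linalg.norm(sim_fields[:, proxy_idx] - proxy_values, axis=1)
    return int(np.argmin(dists))

rng = np.random.default_rng(5)
sim_fields = rng.standard_normal((500, 40))   # analog pool: 500 model years
proxy_idx = np.array([2, 11, 25, 33])         # sparse proxy sites
target_year = 123
proxy_values = sim_fields[target_year, proxy_idx]  # perfect pseudo-proxies
k = best_analog(proxy_values, proxy_idx, sim_fields)
```

With perfect pseudo-proxies drawn from the pool itself, the search returns the originating year; an ensemble variant would instead keep all analogs within a proxy-noise-dependent tolerance.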
Comparison of Parallel MRI Reconstruction Methods for Accelerated 3D Fast Spin-Echo Imaging
Xiao, Zhikui; Hoge, W. Scott; Mulkern, R.V.; Zhao, Lei; Hu, Guangshu; Kyriakos, Walid E.
2014-01-01
Parallel MRI (pMRI) achieves imaging acceleration by partially substituting gradient-encoding steps with spatial information contained in the component coils of the acquisition array. Variable-density subsampling in pMRI was previously shown to yield improved two-dimensional (2D) imaging in comparison to uniform subsampling, but has yet to be used routinely in clinical practice. In an effort to reduce acquisition time for 3D fast spin-echo (3D-FSE) sequences, this work explores a specific nonuniform sampling scheme for 3D imaging, subsampling along two phase-encoding (PE) directions on a rectilinear grid. We use two reconstruction methods—2D-GRAPPA-Operator and 2D-SPACE RIP—and present a comparison between them. We show that high-quality images can be reconstructed using both techniques. To evaluate the proposed sampling method and reconstruction schemes, results via simulation, phantom study, and in vivo 3D human data are shown. We find that fewer artifacts can be seen in the 2D-SPACE RIP reconstructions than in 2D-GRAPPA-Operator reconstructions, with comparable reconstruction times. PMID:18727083
Cosmic web reconstruction through density ridges: method and algorithm
NASA Astrophysics Data System (ADS)
Chen, Yen-Chi; Ho, Shirley; Freeman, Peter E.; Genovese, Christopher R.; Wasserman, Larry
2015-11-01
The detection and characterization of filamentary structures in the cosmic web allows cosmologists to constrain parameters that dictate the evolution of the Universe. While many filament estimators have been proposed, they generally lack estimates of uncertainty, reducing their inferential power. In this paper, we demonstrate how one may apply the subspace constrained mean shift (SCMS) algorithm (Ozertem & Erdogmus 2011; Genovese et al. 2014) to uncover filamentary structure in galaxy data. The SCMS algorithm is a gradient ascent method that models filaments as density ridges, one-dimensional smooth curves that trace high-density regions within the point cloud. We also demonstrate how augmenting the SCMS algorithm with bootstrap-based methods of uncertainty estimation allows one to place uncertainty bands around putative filaments. We first apply the SCMS algorithm to a data set generated from a Voronoi model; the density ridges show strong agreement with the filaments from the Voronoi method. We then apply the SCMS method to data sets sampled from a P3M N-body simulation, with galaxy number densities consistent with SDSS and WFIRST-AFTA, and to LOWZ and CMASS data from the Baryon Oscillation Spectroscopic Survey (BOSS). To further assess the efficacy of SCMS, we compare the relative locations of BOSS filaments with galaxy clusters in the redMaPPer catalogue, and find that redMaPPer clusters are significantly closer (with p-values < 10⁻⁹) to SCMS-detected filaments than to randomly selected galaxies.
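SCMS is a constrained variant of the mean-shift update, and the unconstrained building block is easy to sketch. The code below runs a Gaussian-kernel mean-shift ascent for one query point on toy data (names and data are mine); SCMS additionally projects each step onto a subspace defined by eigenvectors of the local Hessian, so that points converge to ridges rather than modes:

```python
import numpy as np

def mean_shift_point(x, data, bandwidth, n_iter=100):
    """Gaussian-kernel mean-shift ascent for one query point."""
    for _ in range(n_iter):
        w = np.exp(-np.sum((data - x) ** 2, axis=1) / (2 * bandwidth**2))
        x = (w[:, None] * data).sum(axis=0) / w.sum()
    return x

rng = np.random.default_rng(3)
# Two well-separated Gaussian blobs standing in for galaxy clusters
blob_a = rng.normal([0.0, 0.0], 0.1, size=(100, 2))
blob_b = rng.normal([5.0, 5.0], 0.1, size=(100, 2))
data = np.vstack([blob_a, blob_b])
mode = mean_shift_point(np.array([0.3, -0.2]), data, bandwidth=0.5)
```

Started near the first blob, the ascent converges to that blob's density mode; in SCMS the same update, projected, traces the one-dimensional ridge between such modes.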
NASA Astrophysics Data System (ADS)
Kang, Jinfen; Moriyama, Koji; Kim, Seung Hyun
2016-09-01
This paper presents an extended, stochastic reconstruction method for catalyst layers (CLs) of Proton Exchange Membrane Fuel Cells (PEMFCs). The focus is placed on the reconstruction of customized, low platinum (Pt) loading CLs where the microstructure of CLs can substantially influence the performance. The sphere-based simulated annealing (SSA) method is extended to generate the CL microstructures with specified and controllable structural properties for agglomerates, ionomer, and Pt catalysts. In the present method, the agglomerate structures are controlled by employing a trial two-point correlation function used in the simulated annealing process. An off-set method is proposed to generate more realistic ionomer structures. The variations of ionomer structures at different humidity conditions are considered to mimic the swelling effects. A method to control Pt loading, distribution, and utilization is presented. The extension of the method to consider heterogeneity in structural properties, which can be found in manufactured CL samples, is presented. Various reconstructed CLs are generated to demonstrate the capability of the proposed method. Proton transport properties of the reconstructed CLs are calculated and validated with experimental data.
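The trial two-point correlation function mentioned above is the target statistic driving this kind of stochastic reconstruction. A minimal sketch of the statistic itself, for a binary phase image with an axis-aligned lag and periodic wrap-around (both simplifying assumptions of this sketch, not the paper's definition):

```python
import numpy as np

def two_point_correlation(img, max_r):
    """Axis-aligned two-point correlation S2(r) of a binary microstructure.

    S2(r): probability that two points separated by r pixels along x
    (periodic boundary) both lie in the phase marked 1 in `img`.
    """
    img = img.astype(float)
    return np.array([(img * np.roll(img, r, axis=1)).mean()
                     for r in range(max_r)])
```

S2(0) equals the volume fraction of the phase, which is the usual sanity check for such estimators.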
Performance of climate field reconstruction methods over multiple seasons and climate variables
NASA Astrophysics Data System (ADS)
Dannenberg, Matthew P.; Wise, Erika K.
2013-09-01
Studies of climate variability require long time series of data but are limited by the absence of preindustrial instrumental records. For such studies, proxy-based climate reconstructions, such as those produced from tree-ring widths, provide the opportunity to extend climatic records into preindustrial periods. Climate field reconstruction (CFR) methods are capable of producing spatially-resolved reconstructions of climate fields. We assessed the performance of three commonly used CFR methods (canonical correlation analysis, point-by-point regression, and regularized expectation maximization) over spatially-resolved fields using multiple seasons and climate variables. Warm- and cool-season geopotential height, precipitable water, and surface temperature were tested for each method using tree-ring chronologies. Spatial patterns of reconstructive skill were found to be generally consistent across each of the methods, but the robustness of the validation metrics varied by CFR method, season, and climate variable. The most robust validation metrics were achieved with geopotential height, the October through March temporal composite, and the Regularized Expectation Maximization method. While our study is limited to assessment of skill over multidecadal (rather than multi-centennial) time scales, our findings suggest that the climate variable of interest, seasonality, and spatial domain of the target field should be considered when assessing potential CFR methods for real-world applications.
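Of the three CFR methods compared, point-by-point regression is the most direct to sketch: each grid cell of the climate field is regressed on the proxy (tree-ring) matrix over the calibration period, and the fitted model is applied to pre-instrumental proxy values. The sketch below is a bare-bones OLS version; real PPR additionally screens proxies by distance to each cell and validates the fit.

```python
import numpy as np

def ppr_reconstruct(proxies_cal, field_cal, proxies_past):
    """Point-by-point regression (PPR) climate field reconstruction sketch.

    proxies_cal: (years, proxies) calibration-period proxy matrix.
    field_cal:   (years, cells) instrumental climate field.
    proxies_past: proxy values for the years to reconstruct.
    """
    X = np.column_stack([np.ones(len(proxies_cal)), proxies_cal])
    beta, *_ = np.linalg.lstsq(X, field_cal, rcond=None)  # per-cell OLS fits
    Xp = np.column_stack([np.ones(len(proxies_past)), proxies_past])
    return Xp @ beta
```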
Socha, Mirosław; Duplaga, Mariusz; Turcza, Paweł
2004-01-01
The use of three-dimensional visualization of anatomical structures in diagnostics and medical training is growing. The main components of virtual respiratory tract environments include reconstruction and simulation algorithms as well as correction methods for endoscope camera distortions in the case of virtually-enhanced navigation systems. Reconstruction methods usually rely on initial computed tomography (CT) image segmentation to trace contours of the tracheobronchial tree, which in turn are used in the visualization process. The main segmentation methods, including relatively simple approaches such as adaptive region-growing algorithms and more complex methods, e.g. hybrid algorithms based on region growing and mathematical morphology, are described in this paper. The errors and difficulties in the process of tracheobronchial tree reconstruction depend on the occurrence of distortions during CT image acquisition. They are usually related to the inability to exactly fulfil the sampling theorem's conditions. Other forms of distortion and noise, such as additive white Gaussian noise, may also appear. The impact of these distortions on segmentation and reconstruction may be diminished through the application of appropriately selected image prefiltering, which is also demonstrated in this paper. Methods of surface rendering (ray-casting, ray-tracing techniques) and volume rendering are presented, with special focus on aspects of hardware and software implementations. Finally, methods of camera distortion correction and simulation are presented. The mathematical camera models, the scope of their applications, and the types of distortions have also been described. PMID:15718617
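The simpler class of segmentation methods mentioned above, adaptive region growing, can be sketched in 2-D: grow from a seed voxel and accept neighbors whose intensity stays within a tolerance of the seed. A generic sketch, not the paper's hybrid algorithm; the fixed tolerance stands in for the adaptive criterion.

```python
from collections import deque
import numpy as np

def region_grow(img, seed, tol=0.1):
    """Breadth-first region growing from `seed` over 4-neighbors.

    Accepts a neighbor when its intensity is within `tol` of the seed
    intensity; returns a boolean mask of the grown region.
    """
    mask = np.zeros(img.shape, dtype=bool)
    q = deque([seed])
    mask[seed] = True
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                    and not mask[ny, nx]
                    and abs(img[ny, nx] - img[seed]) <= tol):
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask
```

Seeded inside a dark airway-like region, the mask stops at the bright lumen wall.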
A Reconstructed Discontinuous Galerkin Method for the Magnetohydrodynamics on Arbitrary Grids
NASA Astrophysics Data System (ADS)
Halashi, Behrouz Karami
A reconstructed discontinuous Galerkin (RDG) method based on a Hierarchical Weighted Essentially Non-oscillatory (WENO) reconstruction using a Taylor basis, designed not only to enhance the accuracy of discontinuous Galerkin methods but also to ensure the nonlinear stability of the RDG method, is developed for the solution of the magnetohydrodynamics (MHD) equations on arbitrary grids. In this method, a quadratic polynomial solution (P2) is first reconstructed using a Hermite WENO (HWENO) reconstruction from the underlying linear polynomial (P1) discontinuous Galerkin solution to ensure the linear stability of the RDG method and to improve the efficiency of the underlying DG method. By taking advantage of handily available and yet invaluable information, namely the derivatives in the DG formulation, the stencils used in the reconstruction involve only the von Neumann neighborhood (adjacent face-neighboring cells) and thus are compact and consistent with the underlying DG method. The gradients (first moments) of the quadratic polynomial solution are then reconstructed using a WENO reconstruction in order to eliminate spurious oscillations in the vicinity of strong discontinuities, thus ensuring the nonlinear stability of the RDG method. Temporal discretization is done using a 4th order explicit Runge-Kutta method. The HLLD Riemann solver, introduced in the literature for one-dimensional MHD problems, is extended to three-dimensional problems on unstructured grids and used to compute the flux functions at interfaces in the present work. The divergence-free constraint is satisfied using the so-called Locally Divergence Free (LDF) approach. The LDF formulation is especially attractive in the context of DG methods, where the gradients of independent variables are handily available and only one of the computed gradients needs simply to be modified by the divergence-free constraint at the end of each time step. The developed RDG method is used to compute a variety of fluid dynamics and
A novel building boundary reconstruction method based on lidar data and images
NASA Astrophysics Data System (ADS)
Chen, Yiming; Zhang, Wuming; Zhou, Guoqing; Yan, Guangjian
2013-09-01
Building boundary is important for urban mapping and real estate industry applications. The reconstruction of building boundaries is also a significant but difficult step in generating city building models. As light detection and ranging (lidar) systems can acquire large, dense point cloud data quickly and easily, lidar has great advantages for building reconstruction. In this paper, we combine lidar data and images to develop a novel building boundary reconstruction method. We use only one scan of lidar data and one image to perform the reconstruction. The process consists of a sequence of three steps: projecting boundary lidar points into the image; extracting an accurate boundary from the image; and reconstructing the boundary in the lidar point cloud. We define a relationship between the 3D points and the pixel coordinates. We then extract the boundary in the image and use this relationship to obtain the boundary in the point cloud. The method presented here effectively reduces the difficulty of data acquisition, and its simple theory keeps the computational complexity low. It can also be widely applied to data acquired by other 3D scanning devices to improve accuracy. Results of the experiment demonstrate that this method has a clear advantage and high efficiency over others, particularly for data with large point spacing.
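The core of the first step, projecting 3-D boundary points into the image, is a standard pinhole-camera relationship. The sketch below assumes calibrated intrinsics K and extrinsics (R, t); the paper's actual point-to-pixel relationship is not given in the abstract and may differ in detail.

```python
import numpy as np

def project_to_image(points, K, R, t):
    """Project 3-D world points to pixel coordinates (pinhole model).

    points: (n, 3) array; K: 3x3 intrinsics; R, t: world-to-camera pose.
    """
    cam = points @ R.T + t            # world frame -> camera frame
    uvw = cam @ K.T                   # apply intrinsic matrix
    return uvw[:, :2] / uvw[:, 2:3]   # perspective divide -> (u, v)
```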
Significant impact of miRNA–target gene networks on genetics of human complex traits
Okada, Yukinori; Muramatsu, Tomoki; Suita, Naomasa; Kanai, Masahiro; Kawakami, Eiryo; Iotchkova, Valentina; Soranzo, Nicole; Inazawa, Johji; Tanaka, Toshihiro
2016-01-01
The impact of microRNA (miRNA) on the genetics of human complex traits, especially in the context of miRNA-target gene networks, has not been fully assessed. Here, we developed a novel analytical method, MIGWAS, to comprehensively evaluate enrichment of genome-wide association study (GWAS) signals in miRNA–target gene networks. We applied the method to the GWAS results of the 18 human complex traits from >1.75 million subjects, and identified significant enrichment in rheumatoid arthritis (RA), kidney function, and adult height (P < 0.05/18 = 0.0028, most significant enrichment in RA with P = 1.7 × 10−4). Interestingly, these results were consistent with current literature-based knowledge of the traits on miRNA obtained through the NCBI PubMed database search (adjusted P = 0.024). Our method provided a list of miRNA and target gene pairs with excess genetic association signals, part of which included drug target genes. We identified a miRNA (miR-4728-5p) that downregulates PADI2, a novel RA risk gene considered as a promising therapeutic target (rs761426, adjusted P = 2.3 × 10−9). Our study indicated the significant impact of miRNA–target gene networks on the genetics of human complex traits, and provided resources which should contribute to drug discovery and nucleic acid medicine. PMID:26927695
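The abstract does not spell out the MIGWAS statistic, but the general shape of such a test (are GWAS signals stronger in a miRNA's target genes than in random gene sets of the same size?) can be sketched as a permutation test. Everything below is an illustrative stand-in, not the MIGWAS implementation.

```python
import numpy as np

def enrichment_p(gene_pvals, target_idx, n_perm=2000, seed=0):
    """Permutation enrichment of GWAS signal in a target gene set.

    Compares the mean -log10(p) of target genes against random gene
    sets of equal size; returns a one-sided permutation p-value.
    """
    rng = np.random.default_rng(seed)
    score = -np.log10(gene_pvals)
    obs = score[target_idx].mean()
    null = np.array([
        score[rng.choice(len(score), len(target_idx), replace=False)].mean()
        for _ in range(n_perm)
    ])
    return (1 + (null >= obs).sum()) / (1 + n_perm)
```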
A comparative study of limited-angle cone-beam reconstruction methods for breast tomosynthesis
Zhang, Yiheng; Chan, H.-P.; Sahiner, Berkman; Wei, Jun; Goodsitt, Mitchell M.; Hadjiiski, Lubomir M.; Ge, Jun; Zhou, Chuan
2006-10-15
Digital tomosynthesis mammography (DTM) is a promising new modality for breast cancer detection. In DTM, projection-view images are acquired at a limited number of angles over a limited angular range and the imaged volume is reconstructed from the two-dimensional projections, thus providing three-dimensional structural information of the breast tissue. In this work, we investigated three representative reconstruction methods for this limited-angle cone-beam tomographic problem, including the backprojection (BP) method, the simultaneous algebraic reconstruction technique (SART) and the maximum likelihood method with the convex algorithm (ML-convex). The SART and ML-convex methods were both initialized with BP results to achieve efficient reconstruction. A second generation GE prototype tomosynthesis mammography system with a stationary digital detector was used for image acquisition. Projection-view images were acquired from 21 angles in 3° increments over a ±30° angular range. We used an American College of Radiology phantom and designed three additional phantoms to evaluate the image quality and reconstruction artifacts. In addition to visual comparison of the reconstructed images of different phantom sets, we employed the contrast-to-noise ratio (CNR), a line profile of features, an artifact spread function (ASF), a relative noise power spectrum (NPS), and a line object spread function (LOSF) to quantitatively evaluate the reconstruction results. It was found that for the phantoms with homogeneous background, the BP method resulted in less noisy tomosynthesized images and higher CNR values for masses than the SART and ML-convex methods. However, the two iterative methods provided greater contrast enhancement for both masses and calcification, sharper LOSF, and reduced interplane blurring and artifacts with better ASF behaviors for masses. For a contrast-detail phantom with heterogeneous tissue-mimicking background, the BP method had strong blurring
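Of the three methods compared, SART is easy to sketch as an iterative update on the linear projection system A x = b: the residual is weighted by inverse row sums (ray lengths) and backprojected with inverse column-sum normalization. A generic sketch, not the DTM implementation; the relaxation factor and iteration count are assumptions.

```python
import numpy as np

def sart(A, b, x0=None, iters=500, relax=0.5):
    """SART iteration for a nonnegative projection system A x = b.

    Update: x += relax * C^-1 A^T R^-1 (b - A x), with R = row sums
    and C = column sums of A. Converges for 0 < relax < 2 on
    consistent data with nonnegative A.
    """
    x = np.zeros(A.shape[1]) if x0 is None else np.array(x0, dtype=float)
    row = A.sum(axis=1)
    row[row == 0] = 1.0
    col = A.sum(axis=0)
    col[col == 0] = 1.0
    for _ in range(iters):
        x = x + relax * (A.T @ ((b - A @ x) / row)) / col
    return x
```

Initializing `x0` with a BP image, as the paper does, simply changes the starting point of the same update.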
The least error method for sparse solution reconstruction
NASA Astrophysics Data System (ADS)
Bredies, K.; Kaltenbacher, B.; Resmerita, E.
2016-09-01
This work deals with a regularization method enforcing solution sparsity of linear ill-posed problems by appropriate discretization in the image space. Namely, we formulate the so-called least error method in an ℓ1 setting and perform the convergence analysis by choosing the discretization level according to an a priori rule, as well as two a posteriori rules, via the discrepancy principle and the monotone error rule, respectively. Depending on the setting, linear or sublinear convergence rates in the ℓ1-norm are obtained under a source condition yielding sparsity of the solution. A part of the study is devoted to analyzing the structure of the approximate solutions and of the involved source elements.
An integrand reconstruction method for three-loop amplitudes
NASA Astrophysics Data System (ADS)
Badger, Simon; Frellesvig, Hjalte; Zhang, Yang
2012-08-01
We consider the maximal cut of a three-loop four-point function with massless kinematics. By applying Gröbner bases and primary decomposition we develop a method which extracts all ten propagator master integral coefficients for an arbitrary triple-box configuration via generalized unitarity cuts. As an example we present analytic results for the three-loop triple-box contribution to gluon-gluon scattering in Yang-Mills with adjoint fermions and scalars in terms of three master integrals.
A limited-angle CT reconstruction method based on anisotropic TV minimization
NASA Astrophysics Data System (ADS)
Chen, Zhiqiang; Jin, Xin; Li, Liang; Wang, Ge
2013-04-01
This paper presents a compressed sensing (CS)-inspired reconstruction method for limited-angle computed tomography (CT). Currently, CS-inspired CT reconstructions are often performed by minimizing the total variation (TV) of a CT image subject to data consistency. A key to obtaining high image quality is to optimize the balance between TV-based smoothing and data fidelity. In the case of the limited-angle CT problem, the strength of data consistency is angularly varying. For example, given a parallel beam of x-rays, information extracted in the Fourier domain is mostly orthogonal to the direction of x-rays, while little is probed otherwise. However, the TV minimization process is isotropic, suggesting that it is unfit for limited-angle CT. Here we introduce an anisotropic TV minimization method to address this challenge. The advantage of our approach is demonstrated in numerical simulation with both phantom and real CT images, relative to the TV-based reconstruction.
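The idea of weighting the TV term by direction can be illustrated on a toy denoising problem: smoothed gradient descent on a data-fidelity term plus a direction-weighted TV penalty, where the weights wx and wy stand in for the angularly varying data consistency of the limited-angle geometry. This is a conceptual sketch only; the paper's method operates on CT projection data through a forward projector, which is omitted here.

```python
import numpy as np

def aniso_tv_denoise(img, wx=1.0, wy=1.0, lam=0.2, step=0.1,
                     iters=300, eps=1e-4):
    """Gradient descent on 0.5||u - img||^2 + lam*(wx|Dx u| + wy|Dy u|).

    The absolute values are smoothed by eps; wx != wy makes the TV
    penalty anisotropic, smoothing one direction more than the other.
    """
    u = img.astype(float).copy()
    for _ in range(iters):
        gx = np.diff(u, axis=1, append=u[:, -1:])   # forward x-difference
        gy = np.diff(u, axis=0, append=u[-1:, :])   # forward y-difference
        px = wx * gx / np.sqrt(gx**2 + eps)         # smoothed gradient sign
        py = wy * gy / np.sqrt(gy**2 + eps)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * ((u - img) - lam * div)
    return u
```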
Reconstruction from Uniformly Attenuated SPECT Projection Data Using the DBH Method
Huang, Qiu; You, Jiangsheng; Zeng, Gengsheng L.; Gullberg, Grant T.
2008-03-20
An algorithm was developed for the two-dimensional (2D) reconstruction of truncated and non-truncated uniformly attenuated data acquired from single photon emission computed tomography (SPECT). The algorithm is able to reconstruct data from half-scan (180°) and short-scan (180° + fan angle) acquisitions for parallel- and fan-beam geometries, respectively, as well as data from full-scan (360°) acquisitions. The algorithm is a derivative, backprojection, and Hilbert transform (DBH) method, which involves the backprojection of differentiated projection data followed by an inversion of the finite weighted Hilbert transform. The kernel of the inverse weighted Hilbert transform is solved numerically using matrix inversion. Numerical simulations confirm that the DBH method provides accurate reconstructions from half-scan and short-scan data, even when there is truncation. However, as the attenuation increases, finer data sampling is required.
Application of information theory methods to food web reconstruction
Moniz, L.J.; Cooch, E.G.; Ellner, S.P.; Nichols, J.D.; Nichols, J.M.
2007-01-01
In this paper we use information theory techniques on time series of abundances to determine the topology of a food web. At the outset, the food web participants (two consumers, two resources) are known; in addition we know that each consumer prefers one of the resources over the other. However, we do not know which consumer prefers which resource, and whether this preference is absolute (i.e., whether or not the consumer will consume the non-preferred resource). Although the consumers and resources are identified at the beginning of the experiment, we also provide evidence that the consumers are not resources for each other, and the resources do not consume each other. We do show that there is significant mutual information between resources; the model is seasonally forced and some shared information between resources is expected. Similarly, because the model is seasonally forced, we expect shared information between consumers as they respond to the forcing of the resources. The model that we consider does include noise, and in an effort to demonstrate that these methods may be of use beyond model data, we show the efficacy of our methods with decreasing time series size; in this particular case we obtain reasonably clear results with a time series length of 400 points. This approaches the lengths of ecological time series from real systems.
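The basic building block of such an analysis, mutual information between two abundance series, can be sketched with a simple histogram estimator. This is the generic estimator, not the paper's specific procedure; the bin count is an assumption, and real analyses would correct for estimator bias and test significance with surrogates.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram estimate of I(X;Y) in nats for two 1-D series.

    Discretizes both series, forms the joint distribution, and sums
    p(x,y) * log(p(x,y) / (p(x) p(y))) over occupied bins.
    """
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

A series shares far more information with itself than with an independent series, which is the signature the food-web analysis exploits.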
A comparative study of interface reconstruction methods for multi-material ALE simulations
Kucharik, Milan; Garimella, Rao; Schofield, Samuel; Shashkov, Mikhail
2009-01-01
In this paper we compare the performance of different methods for reconstructing interfaces in multi-material compressible flow simulations. The methods compared are a material-order-dependent Volume-of-Fluid (VOF) method, a material-order-independent VOF method based on power diagram partitioning of cells and the Moment-of-Fluid method (MOF). We demonstrate that the MOF method provides the most accurate tracking of interfaces, followed by the VOF method with the right material ordering. The material-order-independent VOF method performs somewhat worse than the above two while the solutions with VOF using the wrong material order are considerably worse.
Ankowski, Artur M.; Benhar, Omar; Coloma, Pilar; Huber, Patrick; Jen, Chun -Min; Mariani, Camillo; Meloni, Davide; Vagnoni, Erica
2015-10-22
To be able to achieve their physics goals, future neutrino-oscillation experiments will need to reconstruct the neutrino energy with very high accuracy. In this work, we analyze how the energy reconstruction may be affected by realistic detection capabilities, such as energy resolutions, efficiencies, and thresholds. This allows us to estimate how well the detector performance needs to be determined a priori in order to avoid a sizable bias in the measurement of the relevant oscillation parameters. We compare the kinematic and calorimetric methods of energy reconstruction in the context of two νμ → νμ disappearance experiments operating in different energy regimes. For the calorimetric reconstruction method, we find that the detector performance has to be estimated with an O(10%) accuracy to avoid a significant bias in the extracted oscillation parameters. In the case of kinematic energy reconstruction, on the other hand, we observe that the results exhibit less sensitivity to an overestimation of the detector capabilities.
Analysis of dental root apical morphology: a new method for dietary reconstructions in primates.
Hamon, NoÉmie; Emonet, Edouard-Georges; Chaimanee, Yaowalak; Guy, Franck; Tafforeau, Paul; Jaeger, Jean-Jacques
2012-06-01
The reconstruction of paleo-diets is an important task in the study of fossil primates. Previously, paleo-diet reconstructions were performed using different methods based on extant primate models. In particular, dental microwear or isotopic analyses provided accurate reconstructions for some fossil primates. However, it is sometimes difficult or impossible to apply these methods to fossil material. Therefore, the development of new, independent methods of diet reconstruction is crucial to improve our knowledge of primate paleobiology and paleoecology. This study aims to investigate the correlation between tooth root apical morphology and diet in primates, and its potential for paleo-diet reconstructions. Dental roots are composed of two portions: the eruptive portion with a smooth and regular surface, and the apical penetrative portion, which displays an irregular and corrugated surface. Here, the angle formed by these two portions (aPE), and the ratio of penetrative portion over total root length (PPI), are calculated for each mandibular tooth root. A strong correlation between these two variables and the proportion of some food types (fruits, leaves, seeds, animal matter, and vertebrates) in the diet is found, allowing the use of tooth root apical morphology as a tool for dietary reconstructions in primates. The method was then applied to the fossil hominoid Khoratpithecus piriyai, from the Late Miocene of Thailand. The paleo-diet deduced from aPE and PPI is dominated by fruits (>50%), associated with animal matter (1-25%). Leaves, vertebrates and most probably seeds were excluded from the diet of Khoratpithecus, which is consistent with previous studies. PMID:22553124
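The two metrics are simple to compute once the two root portions are measured. In the sketch below each portion is reduced to a 2-D direction vector whose length is the portion length; that vector representation is an assumption of this sketch, not the published measurement protocol.

```python
import numpy as np

def root_apical_metrics(eruptive_vec, penetrative_vec):
    """aPE and PPI from vector stand-ins for the two root portions.

    aPE: angle (degrees) between the eruptive and penetrative portions.
    PPI: penetrative length over total (eruptive + penetrative) length.
    """
    e = np.asarray(eruptive_vec, dtype=float)
    p = np.asarray(penetrative_vec, dtype=float)
    cosang = e @ p / (np.linalg.norm(e) * np.linalg.norm(p))
    ape = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    ppi = np.linalg.norm(p) / (np.linalg.norm(e) + np.linalg.norm(p))
    return ape, ppi
```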
Deng, Qingqiong; Zhou, Mingquan; Wu, Zhongke; Shui, Wuyang; Ji, Yuan; Wang, Xingce; Liu, Ching Yiu Jessica; Huang, Youliang; Jiang, Haiyan
2016-02-01
Craniofacial reconstruction recreates a facial outlook from the cranium based on the relationship between the face and the skull to assist identification. But craniofacial structures are very complex, and this relationship is not the same in different craniofacial regions. Several regional methods have recently been proposed. These methods segment the face and skull into regions and learn the relationship of each region independently; the facial regions for a given skull are then estimated and finally glued together to generate a face. Most of these regional methods use vertex coordinates to represent the regions, and they define a uniform coordinate system for all of the regions. Consequently, the inconsistency in the positions of regions between different individuals is not eliminated before learning the relationships between the face and skull regions, and this reduces the accuracy of the craniofacial reconstruction. In order to solve this problem, an improved regional method involving two types of coordinate adjustment is proposed in this paper. One is a global coordinate adjustment performed on the skulls and faces to eliminate the inconsistency in the position and pose of the heads; the other is a local coordinate adjustment performed on the skull and face regions to eliminate the inconsistency in the positions of these regions. After these two coordinate adjustments, partial least squares regression (PLSR) is used to estimate the relationship between each face region and the corresponding skull region. In order to obtain a more accurate reconstruction, a new fusion strategy is also proposed in the paper to maintain the reconstructed feature regions when gluing the facial regions together. This is based on the observation that the feature regions usually have lower reconstruction errors compared to the rest of the face. The results demonstrate that the coordinate adjustments and the new fusion strategy can significantly improve the
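The PLSR step, mapping a skull-region feature matrix to a face-region feature matrix, can be sketched with a minimal NIPALS-style implementation: latent directions maximizing X/Y covariance are extracted with deflation and combined into a regression matrix. This is a generic PLSR sketch under the stated assumptions, not the paper's pipeline (which operates on adjusted vertex coordinates).

```python
import numpy as np

def pls_fit_predict(Xtr, Ytr, Xte, n_comp=2):
    """Fit PLS regression on (Xtr, Ytr) and predict responses for Xte."""
    Xm, Ym = Xtr.mean(0), Ytr.mean(0)
    X, Y = Xtr - Xm, Ytr - Ym
    W, P, Q = [], [], []
    for _ in range(n_comp):
        u, s, vt = np.linalg.svd(X.T @ Y, full_matrices=False)
        w = u[:, 0]                      # dominant covariance direction
        t = X @ w
        t /= np.linalg.norm(t)           # normalized score vector
        p, q = X.T @ t, Y.T @ t          # X- and Y-loadings
        X = X - np.outer(t, p)           # deflate both blocks
        Y = Y - np.outer(t, q)
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = (np.column_stack(v) for v in (W, P, Q))
    B = W @ np.linalg.pinv(P.T @ W) @ Q.T  # regression coefficients
    return Ym + (Xte - Xm) @ B
```

With as many components as predictors, PLSR reduces to ordinary least squares, which makes exact recovery of a noiseless linear relation a convenient sanity check.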
Balima, O.; Favennec, Y.; Rousse, D.
2013-10-15
Highlights:
• New strategies to improve the accuracy of the reconstruction through mesh and finite element parameterization.
• Use of gradient filtering through an alternative inner product within the adjoint method.
• An integral form of the cost function is used to make the reconstruction compatible with all finite element formulations, continuous and discontinuous.
• A gradient-based algorithm with the adjoint method is used for the reconstruction.
Abstract: Optical tomography is mathematically treated as a non-linear inverse problem where the optical properties of the probed medium are recovered through the minimization of the errors between the experimental measurements and their predictions with a numerical model at the locations of the detectors. Given the ill-posed behavior of the inverse problem, some regularization must be applied, and Tikhonov penalization is the most commonly used in optical tomography applications. This paper introduces an optimized approach for optical tomography reconstruction with the finite element method. An integral form of the cost function is used to take into account the surfaces of the detectors and make the reconstruction compatible with all finite element formulations, continuous and discontinuous. Through a gradient-based algorithm where the adjoint method is used to compute the gradient of the cost function, an alternative inner product is employed for preconditioning the reconstruction algorithm. Moreover, appropriate re-parameterization of the optical properties is performed. These regularization strategies are compared with the classical Tikhonov penalization. It is shown that both the re-parameterization and the use of the Sobolev cost function gradient are efficient for solving such an ill-posed inverse problem.
Liu, Wenyang; Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J.; Sawant, Amit; Ruan, Dan
2015-11-15
Purpose: To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT). Methods: The authors have developed a level-set based surface reconstruction method for point clouds captured by a surface photogrammetry system (VisionRT). The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. In contrast to explicit/discrete meshing-type schemes, this continuous representation is particularly advantageous for subsequent surface registration and motion tracking, as it eliminates the need to maintain explicit point correspondences as in discrete models. The authors solve the proposed method with an efficient narrowband evolving scheme. They evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiments, the authors generated a series of surfaces, each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched areas had different degrees of noise and missing data, since VisionRT has difficulty detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations, and quantitatively evaluated the reconstructed surfaces by comparing them against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiments, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground truth, the authors qualitatively validated the reconstructed surfaces by comparing the local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient. Results: On phantom point clouds, their method
Reconstruction of conductivity using the dual-loop method with one injection current in MREIT.
Lee, Tae Hwi; Nam, Hyun Soo; Lee, Min Gi; Kim, Yong Jung; Woo, Eung Je; Kwon, Oh In
2010-12-21
Magnetic resonance electrical impedance tomography (MREIT) aims to visualize the internal current density and conductivity of an electrically conductive object. Injecting current through surface electrodes, we measure one component of the induced internal magnetic flux density using an MRI scanner. In order to reconstruct the conductivity distribution inside the imaging object, most algorithms in MREIT have required multiple magnetic flux density data sets obtained by injecting at least two independent currents. In this paper, we propose a direct method to reconstruct the internal isotropic conductivity from one component of the magnetic flux density data obtained by injecting one current into the imaging object through a single pair of surface electrodes. First, the proposed method reconstructs a projected current density, the current uniquely determined from the measured one-component magnetic flux density. Using a relation between voltage potential and current based on Kirchhoff's voltage law, the proposed method uses a combination of two loops around each pixel to derive an implicit matrix system for determination of the internal conductivity. Results from numerical simulations demonstrate that the proposed algorithm stably determines the conductivity distribution in an imaging slice. We compare the internal conductivity distribution reconstructed using the proposed method with that of a conventional method in agarose gel phantom experiments. PMID:21098919
Gaining insight into food webs reconstructed by the inverse method
NASA Astrophysics Data System (ADS)
Kones, Julius K.; Soetaert, Karline; van Oevelen, Dick; Owino, John O.; Mavuti, Kenneth
2006-04-01
The use of the inverse method to analyze flow patterns of organic components in ecological systems has had wide application in ecological modeling. Through this approach, an infinite number of flow solutions describing the food web and satisfying biological constraints are generated, from which one (parsimonious) solution is drawn. Here we address two questions: (1) is there justification for the use of the parsimonious solution or is there a better alternative and (2) can we use the infinitely many solutions that describe the same food web to give more insight into the system? We reassess two published food webs, from the Gulf of Riga in the Baltic Sea and the Takapoto Atoll lagoon in the South Pacific. A finite number of random food web solutions is first generated using the Monte Carlo simulation technique. Using the Wilcoxon signed ranks test, we cannot find significant differences between the parsimonious solution and the average values of the finite random solutions generated. However, as the food web composed of the average flows has more attractive properties, the choice of the parsimonious solution to describe underdetermined food webs is challenged. We further demonstrate the use of the factor analysis technique to characterize flows that are closely related in the food web. Through this process sub-food webs are extracted within the plausible set of food webs, a property that can be utilized to gain insight into the sampling strategy for further constraining of the model.
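The contrast between the parsimonious (minimum-norm) solution and the average of Monte Carlo sampled solutions can be illustrated on a deliberately tiny underdetermined system. The numbers below are hypothetical; real inverse food web models involve many flows and inequality constraints.

```python
import numpy as np

# Toy underdetermined "food web": two unknown flows x1, x2 constrained only
# by a single mass balance x1 + x2 = 10, with x >= 0.
rng = np.random.default_rng(0)
total = 10.0

# Parsimonious (minimum-norm) solution: x = A^T (A A^T)^{-1} b with A = [1, 1].
A = np.array([[1.0, 1.0]])
b = np.array([total])
x_parsimonious = A.T @ np.linalg.solve(A @ A.T, b)

# Monte Carlo: sample many feasible flow vectors uniformly on the constraint set.
x1 = rng.uniform(0.0, total, size=100_000)
samples = np.column_stack([x1, total - x1])
x_average = samples.mean(axis=0)
```

Here the two answers coincide by symmetry; for realistic webs they can differ, which is exactly the comparison the paper performs with the Wilcoxon signed ranks test.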
Wisdom of crowds for robust gene network inference
Marbach, Daniel; Costello, James C.; Küffner, Robert; Vega, Nicci; Prill, Robert J.; Camacho, Diogo M.; Allison, Kyle R.; Kellis, Manolis; Collins, James J.; Stolovitzky, Gustavo
2012-01-01
Reconstructing gene regulatory networks from high-throughput data is a long-standing problem. Through the DREAM project (Dialogue on Reverse Engineering Assessment and Methods), we performed a comprehensive blind assessment of over thirty network inference methods on Escherichia coli, Staphylococcus aureus, Saccharomyces cerevisiae, and in silico microarray data. We characterize the performance, data requirements, and inherent biases of different inference approaches, offering guidelines for both algorithm application and development. We observe that no single inference method performs optimally across all datasets. In contrast, integration of predictions from multiple inference methods shows robust and high performance across diverse datasets. Thereby, we construct high-confidence networks for E. coli and S. aureus, each comprising ~1700 transcriptional interactions at an estimated precision of 50%. We experimentally test 53 novel interactions in E. coli, of which 23 were supported (43%). Our results establish community-based methods as a powerful and robust tool for the inference of transcriptional gene regulatory networks. PMID:22796662
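The "wisdom of crowds" integration step can be sketched as simple rank averaging across methods. The edge scores below are hypothetical; the actual DREAM pipeline aggregates genome-scale ranked edge lists.

```python
import numpy as np

def average_rank_integration(score_matrix):
    """Combine edge scores from several inference methods by average rank.

    score_matrix: (n_methods, n_edges); higher score = more confident edge.
    Returns a consensus score where larger values mean higher consensus rank.
    """
    n_methods, n_edges = score_matrix.shape
    ranks = np.empty_like(score_matrix, dtype=float)
    for m in range(n_methods):
        order = np.argsort(score_matrix[m])          # ascending confidence
        ranks[m, order] = np.arange(1, n_edges + 1)  # rank 1 = least confident
    return ranks.mean(axis=0)

# Three hypothetical methods scoring four candidate edges: the methods
# disagree on details, but edge 0 is consistently among the top ranked.
scores = np.array([
    [0.9, 0.2, 0.5, 0.1],
    [0.8, 0.1, 0.9, 0.3],
    [0.7, 0.4, 0.6, 0.2],
])
consensus = average_rank_integration(scores)
```

Rank averaging is robust to the differing score scales of heterogeneous methods, which is one reason community integration works well in practice.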
Novel iterative reconstruction method for optimal dose usage in redundant CT - acquisitions
NASA Astrophysics Data System (ADS)
Bruder, H.; Raupach, R.; Allmendinger, T.; Kappler, S.; Sunnegardh, J.; Stierstorfer, K.; Flohr, T.
2014-03-01
In CT imaging, a variety of applications exist where reconstructions are SNR- and/or resolution-limited. However, if the measured data provide redundant information, composite image data with high SNR can be computed. Generally, these composite image volumes will compromise spectral information and/or spatial resolution and/or temporal resolution. This suggests transferring the high SNR of the composite image data to low-SNR (but high-resolution) `source' image data. It has been shown that the SNR of CT image data can be improved using iterative reconstruction [1]. We present a novel iterative reconstruction method enabling optimal dose usage of redundant CT measurements of the same body region. The generalized update equation is formulated in image space without further reference to raw data after initial reconstruction of the source and composite image data. The update equation consists of a linear combination of the previous update, a correction term constrained by the source data, and a regularization prior initialized by the composite data. The efficiency of the method is demonstrated for different applications: (i) Spectral imaging: we analysed material decomposition data from dual-energy data of our photon-counting prototype scanner; the material images can be significantly improved by transferring the good noise statistics of the 20 keV threshold image data to each of the material images. (ii) Multi-phase liver imaging: reconstructions of multi-phase liver data can be optimized by utilizing the noise statistics of the combined data from all measured phases. (iii) Helical reconstruction with optimized temporal resolution: reconstruction of redundant helical acquisition data is split into a short-scan reconstruction with a Tam window, which optimizes the temporal resolution; the reconstruction of the full helical data is then used to optimize the SNR. (iv) Cardiac imaging: the optimal phase image (`best phase') can be improved by transferring all applied over
NASA Astrophysics Data System (ADS)
Xu, Luopeng; Dan, Youquan; Wang, Qingyuan
2015-10-01
The continuous wavelet transform (CWT) provides an adjustable spatial and frequency window that overcomes the poor localization of the Fourier transform and the windowed Fourier transform. The CWT is widely applied in non-stationary signal analysis, including optical 3D shape reconstruction, with remarkable performance. In optical 3D surface measurement, the performance of the CWT for optical fringe pattern phase reconstruction usually depends on the choice of wavelet function. A large class of CWT wavelet functions, such as the Mexican hat, Morlet, DOG and Gabor wavelets, can be generated from the Gauss wavelet function. However, application of the Gauss wavelet transform (GWT) method (i.e. the CWT with a Gauss wavelet function) in optical profilometry has so far rarely been reported. In this paper, the method of using the GWT for optical fringe pattern phase reconstruction is presented first, and the real and complex GWT methods are compared in detail. Examples of numerical simulations are also given and analyzed. The results show that both the real GWT method combined with a Hilbert transform and the complex GWT method can realize three-dimensional surface reconstruction, and that reconstruction performance generally depends on the frequency-domain behavior of the Gauss wavelet functions. For optical fringe patterns whose phase varies strongly with position, the real GWT performs better than the complex one because complex Gauss-series wavelets exhibit frequency sidelobes. Finally, experiments are carried out and the experimental results agree well with our theoretical analysis.
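A minimal sketch of fringe phase extraction with a complex Gauss (Gabor) wavelet, assuming a pure cosine fringe and a hand-picked matching frequency; the paper's full method scans scales and selects a ridge, which is omitted here.

```python
import numpy as np

# Assumed test fringe: cos(w0 * t); at the matching scale the phase of the
# complex wavelet response grows linearly as w0 * t.
dt = 0.01
t = np.arange(-5, 5, dt)
w0 = 5.0
signal = np.cos(w0 * t)

# Complex Gauss (Gabor) kernel tuned to the fringe frequency.
tk = np.arange(-3, 3, dt)
sigma = 1.0
kernel = np.exp(-tk**2 / (2 * sigma**2)) * np.exp(1j * w0 * tk)

response = np.convolve(signal, kernel, mode="same") * dt
phase = np.unwrap(np.angle(response))

# Estimate the recovered instantaneous frequency from the central region
# (the edges suffer from convolution boundary effects).
mid = slice(350, 650)
slope = np.polyfit(t[mid], phase[mid], 1)[0]
```

The negative-frequency term of the cosine is suppressed by the Gaussian envelope's spectral decay, so the extracted phase slope matches the fringe frequency.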
The high sensitivity of the maximum likelihood estimator method of tomographic image reconstruction
Llacer, J.; Veklerov, E.
1987-01-01
Positron Emission Tomography (PET) images obtained by the MLE iterative method of image reconstruction converge toward strongly deteriorated versions of the original source image. The deterioration is caused by the algorithm's excessive attempt to match the projection data with high counts, and this effect can be modulated. Comparing a source image with reconstructions by filtered backprojection and by the MLE algorithm, we show that MLE images can have noise similar to that of filtered backprojection images in regions of high activity, and very low noise, comparable to the source image, in regions of low activity, if the iterative procedure is stopped at an appropriate point.
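The MLE iteration referred to here is commonly implemented as the multiplicative MLEM update; a toy two-pixel sketch with a hypothetical system matrix (not a real PET geometry) is:

```python
import numpy as np

# A is the system matrix, y the measured projections; the multiplicative
# MLEM update drives A @ x toward y while keeping x nonnegative.
A = np.array([[1.0, 0.5],
              [0.5, 1.0]])
x_true = np.array([2.0, 3.0])
y = A @ x_true                      # noise-free projections

x = np.ones(2)                      # uniform nonnegative start
sensitivity = A.T @ np.ones(2)      # A^T 1
for _ in range(2000):
    x *= (A.T @ (y / (A @ x))) / sensitivity
```

With noisy counts, running this to convergence over-fits the projections, which is the deterioration the abstract describes; stopping the loop early acts as implicit regularization.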
Image reconstruction of muon tomographic data using a density-based clustering method
NASA Astrophysics Data System (ADS)
Perry, Kimberly B.
Muons are subatomic particles capable of reaching the Earth's surface before decaying. When these particles collide with an object that has a high atomic number (Z), their path of travel changes substantially. Tracking muon movement through shielded containers can indicate what types of materials lie inside. This thesis proposes using a density-based clustering algorithm called OPTICS to perform image reconstructions using muon tomographic data. The results show that this method is capable of detecting high-Z materials quickly, and can also produce detailed reconstructions with large amounts of data.
Glass, Nel; Davis, Kierrynn
2004-01-01
Nursing research informed by postmodern feminist perspectives has prompted many debates in recent times. While this is so, nurse researchers who have been tempted to break new ground have had few examples of appropriate analytical methods for a research design informed by the above perspectives. This article presents a deconstructive/reconstructive secondary analysis of a postmodern feminist ethnography in order to provide an analytical exemplar. In doing so, previous notions of vulnerability as a negative state have been challenged and reconstructed. PMID:15206680
Inverse Heat Conduction Methods in the CHAR Code for Aerothermal Flight Data Reconstruction
NASA Technical Reports Server (NTRS)
Oliver, A Brandon; Amar, Adam J.
2016-01-01
Reconstruction of flight aerothermal environments often requires the solution of an inverse heat transfer problem, which is an ill-posed problem of specifying boundary conditions from discrete measurements in the interior of the domain. This paper will present the algorithms implemented in the CHAR code for use in reconstruction of EFT-1 flight data and future testing activities. Implementation nuances will be discussed, and alternative hybrid-methods that are permitted by the implementation will be described. Results will be presented for a number of one-dimensional and multi-dimensional problems
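The ill-posedness of recovering a boundary condition from interior measurements can be illustrated with a simplified linear sensitivity model and Tikhonov regularization. The smoothing kernel, noise level, and regularization weight below are all assumptions for illustration, not the CHAR implementation.

```python
import numpy as np

# Interior temperatures respond to the surface flux history through a
# smoothing (lower-triangular convolution) operator G; inverting T = G q
# amplifies noise, and regularization restores a usable solution.
n = 50
t = np.arange(n)

# Hypothetical lagged, decaying sensitivity of an interior sensor to past flux.
G = np.tril(0.1 * np.exp(-0.3 * (t[:, None] - t[None, :])))

q_true = np.sin(2 * np.pi * t / n) ** 2          # assumed surface flux history
noise = 1e-4 * np.random.default_rng(1).standard_normal(n)
T = G @ q_true + noise                           # simulated interior data

lam = 1e-4                                       # Tikhonov weight (assumed)
q_est = np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ T)
```

The regularization weight trades bias against noise amplification; inverse heat conduction codes select it (or an equivalent stabilizer) far more carefully than this fixed choice.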
NASA Astrophysics Data System (ADS)
Velikina, J. V.; Samsonov, A. A.
2016-02-01
Advanced MRI techniques often require sampling in additional (non-spatial) dimensions, such as time or parametric dimensions, which significantly lengthens scan time. Our purpose was to develop novel iterative image reconstruction methods that reduce the amount of acquired data in such applications by using prior knowledge about the signal in the extra dimensions. Efforts were made to accelerate two applications, namely, time-resolved contrast-enhanced MR angiography and T1 mapping. Our results demonstrate that significant acceleration (up to 27-fold) may be achieved using the proposed iterative reconstruction techniques.
Iterative reconstruction method for three-dimensional non-cartesian parallel MRI
NASA Astrophysics Data System (ADS)
Jiang, Xuguang
Parallel magnetic resonance imaging (MRI) with non-Cartesian sampling patterns is a promising technique that increases scan speed by using multiple receiver coils with reduced sampling. However, reconstruction is challenging due to the increased complexity. Three reconstruction methods were evaluated: gridding, blocked uniform resampling (BURS) and the non-uniform FFT (NUFFT). Computer simulations of parallel reconstruction were performed, with the root mean square error (RMSE) of the reconstructed images relative to the simulated phantom used as the image quality criterion. The gridding method showed the best RMSE performance. Two types of a priori constraints for reducing noise and artifacts were evaluated: an edge-preserving penalty, which suppresses noise and aliasing artifacts in the image while preventing over-smoothing, and an object-support penalty, which reduces background noise amplification. A trust-region-based step-ratio method that iteratively calculates the penalty coefficient was proposed for the penalty functions. Two methods for alleviating the computational burden were evaluated: a smaller oversampling ratio, and interpolation coefficient matrix compression. Each was tested individually using computer simulations. The edge-preserving and object-support penalties showed consistent improvements in RMSE, and the calculated penalty coefficients performed close to the best RMSE. An oversampling ratio as low as 1.125 was shown to affect RMSE by less than one percent for radial sampling pattern reconstruction, reducing the three-dimensional data requirement to less than 1/5 of that needed by the conventional 2x grid. Interpolation matrix compression with compression ratios up to 50 percent had little impact on RMSE. The proposed method was validated on 25 MR data sets from a GE MR scanner. Six image quality metrics were used to evaluate performance: RMSE, normalized mutual information (NMI) and joint entropy (JE) relative to a reference
NASA Technical Reports Server (NTRS)
Baxes, Gregory A. (Inventor); Linger, Timothy C. (Inventor)
2011-01-01
Systems and methods are provided for progressive mesh storage and reconstruction using wavelet-encoded height fields. A method for progressive mesh storage includes reading raster height field data, and processing the raster height field data with a discrete wavelet transform to generate wavelet-encoded height fields. In another embodiment, a method for progressive mesh storage includes reading texture map data, and processing the texture map data with a discrete wavelet transform to generate wavelet-encoded texture map fields. A method for reconstructing a progressive mesh from wavelet-encoded height field data includes determining terrain blocks, and a level of detail required for each terrain block, based upon a viewpoint. Triangle strip constructs are generated from vertices of the terrain blocks, and an image is rendered utilizing the triangle strip constructs. Software products that implement these methods are provided.
NASA Technical Reports Server (NTRS)
Baxes, Gregory A. (Inventor)
2010-01-01
Systems and methods are provided for progressive mesh storage and reconstruction using wavelet-encoded height fields. A method for progressive mesh storage includes reading raster height field data, and processing the raster height field data with a discrete wavelet transform to generate wavelet-encoded height fields. In another embodiment, a method for progressive mesh storage includes reading texture map data, and processing the texture map data with a discrete wavelet transform to generate wavelet-encoded texture map fields. A method for reconstructing a progressive mesh from wavelet-encoded height field data includes determining terrain blocks, and a level of detail required for each terrain block, based upon a viewpoint. Triangle strip constructs are generated from vertices of the terrain blocks, and an image is rendered utilizing the triangle strip constructs. Software products that implement these methods are provided.
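A one-level orthonormal 2-D Haar transform is enough to sketch the progressive idea described in both patents: transmit the coarse subband first for a low level of detail, then the detail subbands to refine a terrain block to full resolution. The Haar choice here is illustrative; the patents cover wavelet-encoded height fields generally.

```python
import numpy as np

def haar2d_forward(h):
    """One orthonormal 2-D Haar level: returns (LL, LH, HL, HH) subbands."""
    a = h[0::2, 0::2]; b = h[0::2, 1::2]
    c = h[1::2, 0::2]; d = h[1::2, 1::2]
    return ((a + b + c + d) / 2, (a - b + c - d) / 2,
            (a + b - c - d) / 2, (a - b - c + d) / 2)

def haar2d_inverse(LL, LH, HL, HH):
    """Exact inverse of haar2d_forward."""
    h = np.empty((2 * LL.shape[0], 2 * LL.shape[1]))
    h[0::2, 0::2] = (LL + LH + HL + HH) / 2
    h[0::2, 1::2] = (LL - LH + HL - HH) / 2
    h[1::2, 0::2] = (LL + LH - HL - HH) / 2
    h[1::2, 1::2] = (LL - LH - HL + HH) / 2
    return h

# Progressive use on a hypothetical 8x8 height field: coarse preview from the
# LL subband alone, exact terrain once the detail subbands arrive.
rng = np.random.default_rng(0)
height = rng.random((8, 8))
LL, LH, HL, HH = haar2d_forward(height)
zeros = np.zeros_like(LL)
coarse = haar2d_inverse(LL, zeros, zeros, zeros)   # low level of detail
full = haar2d_inverse(LL, LH, HL, HH)              # exact reconstruction
```

Repeating the forward transform on LL yields the multi-resolution pyramid from which a per-block level of detail can be selected by viewpoint.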
Adaptive Forward Modeling Method for Analysis and Reconstructions of Orientation Image Map
2014-06-01
IceNine is a MPI-parallel orientation reconstruction and microstructure analysis code. It's primary purpose is to reconstruct a spatially resolved orientation map given a set of diffraction images from a high energy x-ray diffraction microscopy (HEDM) experiment (1). In particular, IceNine implements the adaptive version of the forward modeling method (2, 3). Part of IceNine is a library used to for conbined analysis of the microstructure with the experimentally measured diffraction signal. The libraries is alsomore » designed for tapid prototyping of new reconstruction and analysis algorithms. IceNine is also built with a simulator of diffraction images with an input microstructure.« less
A comparison of force reconstruction methods for a lumped mass beam
Bateman, V.I.; Mayes, R.L.; Carne, T.G.
1992-11-01
Two extensions of the force reconstruction method, the Sum of Weighted Accelerations Technique (SWAT), are presented in this paper, and the results are compared to those obtained using SWAT. SWAT requires the use of the structure's elastic mode shapes for reconstruction of the applied force. Although based on the same theory, the two new techniques do not rely on mode shapes to reconstruct the applied force and may be applied to structures whose mode shapes are not available. One technique uses the measured force and acceleration responses with the rigid body mode shapes to calculate the scalar weighting vector, so the technique is called SWAT-CAL (SWAT using a CALibrated force input). The second technique uses only the free-decay time response of the structure with the rigid body mode shapes to calculate the scalar weighting vector and is called SWAT-TEEM (SWAT using Time Eliminated Elastic Modes).
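The common core of SWAT and its extensions is a sensor weighting vector that cancels the elastic mode shapes while passing the rigid-body shape, so the weighted sum of accelerations recovers the rigid-body acceleration and hence the applied force. A toy three-sensor sketch with hypothetical mode shapes:

```python
import numpy as np

# Choose weights w so that Phi_e^T w = 0 (elastic shapes annihilated) and
# phi_r^T w = 1 (rigid-body shape passed). Then F(t) = M_total * (w . a(t)).
phi_r = np.array([1.0, 1.0, 1.0])       # rigid-body translation shape
phi_e1 = np.array([1.0, 0.0, -1.0])     # first elastic shape (toy)
phi_e2 = np.array([1.0, -2.0, 1.0])     # second elastic shape (toy)

Phi = np.column_stack([phi_r, phi_e1, phi_e2])
w = np.linalg.solve(Phi.T, np.array([1.0, 0.0, 0.0]))

# Simulated sensor accelerations: rigid part (F/M = 2.0) plus arbitrary
# elastic contributions that the weights must cancel.
a = 2.0 * phi_r + 0.7 * phi_e1 - 1.3 * phi_e2
M_total = 5.0
force = M_total * (w @ a)               # -> 10.0
```

The SWAT-CAL and SWAT-TEEM variants differ only in how the weighting vector is identified (from a calibrated force input or from free-decay responses) when the elastic shapes themselves are unavailable.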
NASA Astrophysics Data System (ADS)
Crawford, Douglas Spencer
Verification and validation of reconstructed neutron flux based on the maximum entropy method are presented in this paper. The verification is carried out by comparing the neutron flux spectrum from the maximum entropy method with Monte Carlo N-Particle 5 version 1.40 (MCNP5) and Attila-7.1.0-beta (Attila). A spherical 100% 235U critical assembly is modeled as the test case to compare the three methods. The verification error range for the maximum entropy method is 15% to 23%, with MCNP5 taken as the comparison standard. The Attila relative error for the critical assembly is 20% to 35%. Validation is accomplished by comparing with a neutron flux spectrum back-calculated from foil activation measurements performed in the GODIVA experiment (GODIVA). The error range of the reconstructed flux compared to GODIVA is 0%-10%; the error range of the MCNP5 neutron flux spectrum compared to GODIVA is 0%-20%, and the Attila error range compared to GODIVA is 0%-35%. The maximum entropy method for reconstructing flux is shown to be a fast, reliable method relative to both Monte Carlo methods (MCNP5) and 30-energy-group deterministic methods (Attila), with respect to the GODIVA experiment.
A FIB-nanotomography method for accurate 3D reconstruction of open nanoporous structures.
Mangipudi, K R; Radisch, V; Holzer, L; Volkert, C A
2016-04-01
We present an automated focused ion beam nanotomography method for nanoporous microstructures with open porosity, and apply it to reconstruct nanoporous gold (np-Au) structures with ligament sizes on the order of a few tens of nanometers. This method uses serial sectioning of a well-defined wedge-shaped geometry to determine the thickness of individual slices from the changes in the sample width in successive cross-sectional images. The pore space of a selected region of the np-Au is infiltrated with ion-beam-deposited Pt composite before serial sectioning. The cross-sectional images are binarized and stacked according to the individual slice thicknesses, and then processed using standard reconstruction methods. For the image conditions and sample geometry used here, we are able to determine the thickness of individual slices with an accuracy much smaller than a pixel. The accuracy of the new method based on actual slice thickness is assessed by comparing it with (i) a reconstruction using the same cross-sectional images but assuming a constant slice thickness, and (ii) a reconstruction using traditional FIB-tomography method employing constant slice thickness. The morphology and topology of the structures are characterized using ligament and pore size distributions, interface shape distribution functions, interface normal distributions, and genus. The results suggest that the morphology and topology of the final reconstructions are significantly influenced when a constant slice thickness is assumed. The study reveals grain-to-grain variations in the morphology and topology of np-Au. PMID:26906523
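The wedge-based slice-thickness idea can be sketched in a few lines, assuming a symmetric wedge of full apex angle 2θ so that each slice of thickness t widens the visible cross-section by 2·t·tan θ; the geometry numbers below are hypothetical, and the paper's actual calibration may differ.

```python
import numpy as np

# Per-slice thickness recovered from the width change between successive
# cross-sectional images of a wedge-shaped sample.
theta = np.deg2rad(30.0)                             # assumed wedge half-angle

t_true = np.array([9.5, 10.2, 10.0, 9.8, 10.4])      # nm, uneven slices
widths = 100.0 + np.cumsum(2 * t_true * np.tan(theta))
widths = np.concatenate([[100.0], widths])           # width before each cut

t_est = np.diff(widths) / (2 * np.tan(theta))        # recovered thicknesses
```

Because the width change is measured in-plane at full image resolution, the recovered thickness can resolve variations well below the nominal milling step, which is the basis for the sub-pixel accuracy claimed in the abstract.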
Richart, Jose; Otal, Antonio; Rodriguez, Silvia; Nicolás, Ana Isabel; DePiaggio, Marina; Santos, Manuel; Vijande, Javier; Perez-Calatayud, Jose
2015-01-01
Purpose: Perineal templates for interstitial implants, such as the MUPIT and Syed applicators, have two limitations: they lack an intracavitary component, and treatment planning must use computed tomography (CT) because neither applicator is magnetic resonance imaging (MRI) compatible. To overcome these problems, a new template named Template Benidorm (TB) has recently been developed. Titanium needles are usually reconstructed based on their own artifacts, mainly in T1-weighted sequences, using the void at the tip as the needle tip position. Nevertheless, the patient tissues surrounding the needles present heterogeneities that complicate accurate identification of these artifact patterns. The purpose of this work is to reduce the titanium needle reconstruction uncertainty for the TB case using a simple method based on the free needle lengths and typical MRI pellet markers. Material and methods: The proposed procedure consists of the inclusion of three small vitamin A pellets (hyperintense on MRI images) compressed by both applicator plates, defining the central plane of the plate arrangement. The needles used are typically 20 cm in length. For each needle, two points are selected to define its straight line. From this line and the plane equation, the intersection can be obtained, and using the free length (knowing the offset distance), the coordinates of the needle tip can be obtained. The method is applied in both T1W and T2W acquisition sequences. To evaluate the inter-observer variation of the method, three implants imaged with T1W and another three with T2W were reconstructed by two different medical physicists with experience in these reconstructions. Results and conclusions: The differences observed in the positioning were significantly smaller than 1 mm in all cases. The presented algorithm also allows the use of only the T2W sequence for both contouring and reconstruction purposes. The proposed method is robust and independent of the visibility
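The line-plane step described above reduces to elementary vector geometry; a sketch with hypothetical coordinates, where the plane is defined by the three pellet markers and the tip is placed at the known free length beyond the plane along the needle direction:

```python
import numpy as np

def needle_tip(p1, p2, plane_pts, free_length):
    """Locate a needle tip from two digitized points on its line, a plane
    defined by three marker pellets, and the free length from the plane to
    the tip measured along the needle (sign conventions are assumed)."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    q0, q1, q2 = (np.asarray(q, float) for q in plane_pts)

    n = np.cross(q1 - q0, q2 - q0)              # plane normal
    d = p2 - p1
    u = d / np.linalg.norm(d)                   # unit direction, p1 -> p2
    t = np.dot(n, q0 - p1) / np.dot(n, d)       # line-plane intersection
    hit = p1 + t * d
    return hit + free_length * u                # advance along the needle

# Synthetic check: plane z = 0, vertical needle, tip 30 mm past the plane.
tip = needle_tip([0, 0, -50], [0, 0, -40],
                 [(0, 0, 0), (1, 0, 0), (0, 1, 0)], 30.0)
```

Because only the line direction and one plane are needed, the tip position no longer depends on resolving the faint artifact void at the tip itself.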
RECONSTRUCTION OF THE CORONAL MAGNETIC FIELD USING THE CESE-MHD METHOD
Jiang Chaowei; Feng, Xueshang; Xiang, Changqing; Fan, Yuliang E-mail: fengx@spaceweather.ac.cn E-mail: fanyuliang@bao.ac.cn
2011-02-01
We present a new implementation of the MHD relaxation method for reconstruction of the nearly force-free coronal magnetic field from a photospheric vector magnetogram. A new numerical MHD scheme is proposed to solve the full MHD equations by using the spacetime conservation-element and solution-element method. The bottom boundary condition is prescribed in a similar way as in the stress-and-relax method, by changing the transverse field incrementally to match the magnetogram, and other boundaries of the computational box are set by the nonreflecting boundary conditions. Applications to the well-known benchmarks for nonlinear force-free-field reconstruction, the Low and Lou force-free equilibria, validate the method and confirm its capability for future practical application, with observed magnetograms as inputs.
A Method for 3D Histopathology Reconstruction Supporting Mouse Microvasculature Analysis.
Xu, Yiwen; Pickering, J Geoffrey; Nong, Zengxuan; Gibson, Eli; Arpino, John-Michael; Yin, Hao; Ward, Aaron D
2015-01-01
Structural abnormalities of the microvasculature can impair perfusion and function. Conventional histology provides good spatial resolution with which to evaluate the microvascular structure but affords no 3-dimensional information; this limitation could lead to misinterpretations of the complex microvessel network in health and disease. The objective of this study was to develop and evaluate an accurate, fully automated 3D histology reconstruction method to visualize the arterioles and venules within the mouse hind-limb. Sections of the tibialis anterior muscle from C57BL/J6 mice (both normal and subjected to femoral artery excision) were reconstructed using pairwise rigid and affine registrations of 5 µm-thick, paraffin-embedded serial sections digitized at 0.25 µm/pixel. Low-resolution intensity-based rigid registration was used to initialize both the nucleus landmark-based registration and the conventional high-resolution intensity-based registration method. The affine nucleus landmark-based registration was developed in this work and was compared to the conventional affine high-resolution intensity-based registration method. Target registration errors were measured between adjacent tissue sections (pairwise error), as well as with respect to a 3D reference reconstruction (accumulated error, to capture propagation of error through the stack of sections). Accumulated error measures were lower (p < 0.01) for the nucleus landmark technique and superior vasculature continuity was observed. These findings indicate that registration based on automatic extraction and correspondence of small, homologous landmarks may support accurate 3D histology reconstruction. This technique avoids the otherwise problematic "banana-into-cylinder" effect observed using conventional methods that optimize the pairwise alignment of salient structures, forcing them to be section-orthogonal. This approach will provide a valuable tool for high-accuracy 3D histology tissue reconstructions for
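Pairwise rigid alignment of corresponding nucleus landmarks can be sketched with the standard Kabsch/SVD solution; the landmark coordinates below are synthetic, and the paper's full pipeline adds affine refinement and accumulated-error control on top of this step.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid (rotation + translation) alignment of matched
    landmark sets via the Kabsch/SVD method; returns R, t with
    dst ~ src @ R.T + t."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = dc - R @ sc
    return R, t

# Synthetic "adjacent sections": nucleus centroids rotated and shifted.
rng = np.random.default_rng(0)
nuclei = rng.random((40, 3)) * 100
angle = np.deg2rad(7.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
moved = nuclei @ R_true.T + np.array([3.0, -1.5, 0.2])

R, t = rigid_register(nuclei, moved)
aligned = nuclei @ R.T + t
```

Small homologous landmarks such as nuclei constrain the fit without forcing elongated structures to lie section-orthogonal, which is how the method avoids the "banana-into-cylinder" effect.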
Benazzi, Stefano; Stansfield, Ekaterina; Kullmer, Ottmar; Fiorenza, Luca; Gruppioni, Giorgio
2009-08-01
The issue of reconstructing lost or deformed bone presents an equal challenge in the fields of paleoanthropology, bioarchaeology, forensics, and medicine. Particularly, within the disciplines of orthodontics and surgery, the main goal of reconstruction is to restore or create ex novo the proper form and function. The reconstruction of the mandibular condyle requires restoration of articulation, occlusion, and mastication from the functional side as well as the correct shape of the mandible from the esthetic point of view. Meeting all these demands is still problematic for surgeons. It is unfortunate that the collaboration between anthropologists and medical professionals is still limited. Nowadays, geometric morphometric methods (GMM) are routinely applied in shape analysis and increasingly in the reconstruction of missing data in skeletal material in paleoanthropology. Together with methods for three-dimensional (3D) digital model construction and reverse engineering, these methods could prove to be useful in surgical fields for virtual planning of operations and the production of customized biocompatible scaffolds. In this contribution, we have reconstructed the missing left condylar process of the mandible belonging to a famous Italian humanist of the 15th century, Pico della Mirandola (1463-1494) by means of 3D digital models and GMM, having first compared two methods (a simple reflection of the opposite side and the mathematical-statistical GMM approach) in a complete human mandible on which loss of the left condyle was virtually simulated. Finally, stereolithographic models of Pico's skull were prototyped providing the physical assembly of the bony skull structures with a high fitting accuracy. PMID:19645014
Finding pathway-modulating genes from a novel Ontology Fingerprint-derived gene network
Qin, Tingting; Matmati, Nabil; Tsoi, Lam C.; Mohanty, Bidyut K.; Gao, Nan; Tang, Jijun; Lawson, Andrew B.; Hannun, Yusuf A.; Zheng, W. Jim
2014-01-01
To enhance our knowledge regarding biological pathway regulation, we took an integrated approach, using the biomedical literature, ontologies, network analyses and experimental investigation to infer novel genes that could modulate biological pathways. We first constructed a novel gene network via a pairwise comparison of all yeast genes’ Ontology Fingerprints—a set of Gene Ontology terms overrepresented in the PubMed abstracts linked to a gene along with those terms’ corresponding enrichment P-values. The network was further refined using a Bayesian hierarchical model to identify novel genes that could potentially influence the pathway activities. We applied this method to the sphingolipid pathway in yeast and found that many top-ranked genes indeed displayed altered sphingolipid pathway functions, initially measured by their sensitivity to myriocin, an inhibitor of de novo sphingolipid biosynthesis. Further experiments confirmed the modulation of the sphingolipid pathway by one of these genes, PFA4, encoding a palmitoyl transferase. Comparative analysis showed that few of these novel genes could be discovered by other existing methods. Our novel gene network provides a unique and comprehensive resource to study pathway modulations and systems biology in general. PMID:25063300
Spectral/HP Element Method With Hierarchical Reconstruction for Solving Hyperbolic Conservation Laws
Xu, Zhiliang; Lin, Guang
2009-12-01
Hierarchical reconstruction (HR) has been successfully applied to prevent oscillations in solutions computed by finite volume, discontinuous Galerkin, and spectral volume schemes when solving hyperbolic conservation laws. In this paper, we demonstrate that HR can also be combined with spectral/hp element methods for solving hyperbolic conservation laws. We show that HR preserves the order of accuracy of spectral/hp element methods for smooth solutions and generates essentially non-oscillatory solution profiles for shock wave problems.
A second-order method for interface reconstruction in orthogonal coordinate systems
Colella, P.; Graves, D.T.; Greenough, J.A.
2002-01-02
The authors present a second-order algorithm for reconstructing an interface from a distribution of volume fractions in a general orthogonal coordinate system with derivatives approximated using finite differences. The method approximates the interface curve by a piecewise-linear profile. An integral formulation is used that accounts for the orthogonal coordinate system in a natural way. The authors present results obtained using this method for tracking a material interface between two compressible media in spherical coordinates.
Monitoring 3D dose distributions in proton therapy by reconstruction using an iterative method.
Kim, Young-Hak; Yoon, Changyeon; Lee, Wonho
2016-08-01
The Bragg peak of protons can be determined by measuring prompt γ-rays. In this study, prompt γ-rays detected by single-photon emission computed tomography with a geometrically optimized collimation system were reconstructed by an iterative method. The falloff position obtained by the iterative method (52.48 mm) was closest to the Bragg peak (52 mm) of an 80 MeV proton, compared with those of the back-projection (54.11 mm) and filtered back-projection (54.91 mm) methods. The iterative method also showed better image performance than the other methods. PMID:27179145
New method for the design of a phase-only computer hologram for multiplane reconstruction
NASA Astrophysics Data System (ADS)
Ying, Chao-Fu; Pang, Hui; Fan, Chang-Jiang; Zhou, Wei-Dong
2011-05-01
A new iterative method for creating a pure phase hologram to diffract light into two arbitrary two-dimensional intensity profiles in two output planes is presented. This new method combines the Gerchberg-Saxton (GS) iterative algorithm and the compensation iterative algorithm. Numerical simulation indicates that the new method outperforms the most frequently used method in accuracy when it is used to generate large-size images. A preliminary optical reconstruction experiment has been performed to verify the feasibility of our method.
NASA Astrophysics Data System (ADS)
Ye, Jinzuo; Du, Yang; An, Yu; Chi, Chongwei; Tian, Jie
2014-12-01
Fluorescence molecular tomography (FMT) is a promising imaging technique in preclinical research, enabling three-dimensional location of the specific tumor position for small animal imaging. However, FMT presents a challenging inverse problem that is quite ill-posed and ill-conditioned. Thus, the reconstruction of FMT faces various challenges in its robustness and efficiency. We present an FMT reconstruction method based on nonmonotone spectral projected gradient pursuit (NSPGP) with l1-norm optimization. At each iteration, a spectral gradient-projection method approximately minimizes a least-squares problem with an explicit one-norm constraint. A nonmonotone line search strategy is utilized to get the appropriate updating direction, which guarantees global convergence. Additionally, the Barzilai-Borwein step length is applied to build the optimal step length, further improving the convergence speed of the proposed method. Several numerical simulation studies, including multisource cases as well as comparative analyses, have been performed to evaluate the performance of the proposed method. The results indicate that the proposed NSPGP method is able to ensure the accuracy, robustness, and efficiency of FMT reconstruction. Furthermore, an in vivo experiment based on a heterogeneous mouse model was conducted, and the results demonstrated that the proposed method held the potential for practical applications of FMT.
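The explicit one-norm constraint in the NSPGP scheme above requires, at each iteration, a Euclidean projection onto an l1 ball. A minimal sort-based sketch of that projection (a generic textbook routine, not the authors' implementation; `tau` is the l1-ball radius):

```python
import numpy as np

def project_l1_ball(v, tau):
    """Euclidean projection of v onto the l1 ball {x : ||x||_1 <= tau}."""
    if np.abs(v).sum() <= tau:
        return v.copy()                     # already feasible
    u = np.sort(np.abs(v))[::-1]            # magnitudes, descending
    css = np.cumsum(u)
    k = np.arange(1, len(u) + 1)
    rho = np.max(np.where(u - (css - tau) / k > 0)[0]) + 1
    theta = (css[rho - 1] - tau) / rho      # optimal shrinkage level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)
```

A projected-gradient step for the least-squares term would then read `x = project_l1_ball(x - alpha * A.T @ (A @ x - b), tau)`, with the spectral (Barzilai-Borwein) choice of `alpha` providing the speedup described in the abstract.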
New reconstruction method for x-ray testing of multilayer printed circuit board
NASA Astrophysics Data System (ADS)
Yang, Min; Wang, Gao; Liu, Yongzhan
2010-05-01
For multilayer printed circuit board (PCB) and large-scale integrated circuit (LIC) chips, nondestructive testing of the inner structure and welding defects is very important for circuit diagram reverse design and manufacturing quality control. The traditional nondestructive testing method for this kind of plate-like object is digital radiography (DR), which can provide only images with overlapped information, so it is difficult to get a full and accurate circuit image of every layer and the position of the defects using the DR method. At the same time, traditional computed tomography scanning methods are also unable to resolve this problem. A new reconstruction method is proposed for the nondestructive testing of plate-like objects. With this method, x-rays irradiate the surface of the reconstructed object at an oblique angle, and a series of projection images is obtained while the object is rotating. Then, through a relevant preprocessing method applied to the projections and a special reconstruction algorithm, cross sections of the scanning region are finally obtained slice by slice. The experimental results prove that this method satisfactorily addresses the challenges of nondestructive testing of plate-like objects such as PCB or LIC.
Xiaodong Liu; Lijun Xuan; Hong Luo; Yidong Xia
2001-01-01
A reconstructed discontinuous Galerkin (rDG(P1P2)) method, originally introduced for the compressible Euler equations, is developed for the solution of the compressible Navier-Stokes equations on 3D hybrid grids. In this method, a piecewise quadratic polynomial solution is obtained from the underlying piecewise linear DG solution using a hierarchical Weighted Essentially Non-Oscillatory (WENO) reconstruction. The reconstructed quadratic polynomial solution is then used for the computation of the inviscid fluxes and the viscous fluxes using the second formulation of Bassi and Rebay (Bassi-Rebay II). The developed rDG(P1P2) method is used to compute a variety of flow problems to assess its accuracy, efficiency, and robustness. The numerical results demonstrate that the rDG(P1P2) method is able to achieve the designed third order of accuracy at a cost slightly higher than that of its underlying second-order DG method, outperform the third-order DG method in terms of both computing costs and storage requirements, and obtain reliable and accurate solutions for the large eddy simulation (LES) and direct numerical simulation (DNS) of compressible turbulent flows.
Reconstructing paleo- and initial landscapes using a multi-method approach in hummocky NE Germany
NASA Astrophysics Data System (ADS)
van der Meij, Marijn; Temme, Arnaud; Sommer, Michael
2016-04-01
The unknown state of the landscape at the onset of soil and landscape formation is one of the main sources of uncertainty in landscape evolution modelling. Reconstruction of these initial conditions is not straightforward due to the problems of polygenesis and equifinality: different initial landscapes can change through different sets of processes to an identical end state. Many attempts have been made to reconstruct this initial landscape. These include remote sensing, reverse modelling and the use of soil properties. However, each of these methods is only applicable on a certain spatial scale and comes with its own uncertainties. Here we present a new framework and preliminary results for reconstructing paleo-landscapes in an eroding setting, in which we combine reverse modelling, remote sensing, geochronology, historical data and present-day soil data. With the combination of these different approaches, different spatial scales can be covered and the uncertainty in the reconstructed landscape can be reduced. The study area is located in north-east Germany, where the landscape consists of a collection of small local depressions acting as closed catchments. This postglacial hummocky landscape is suitable for testing our new multi-method approach for several reasons: i) the closed catchments enable a full mass balance of erosion and deposition, due to the collection of colluvium in these depressions; ii) significant topography changes only started recently, with medieval deforestation and the recent intensification of agriculture; and iii) due to extensive previous research, a large dataset is readily available.
Choi, Tae Joon; Burm, Jin Sik; Yang, Won Yong; Kang, Sang Yoon
2016-01-01
Titanium micro-mesh implants are widely used in orbital wall reconstructions because they have several advantageous characteristics. However, the rough and irregular marginal spurs of the cut edges of the titanium mesh sheet impede the efficacious and minimally traumatic insertion of the implant, because these spurs may catch or hook the orbital soft tissue, skin, or conjunctiva during the insertion procedure. In order to prevent this problem, we developed an easy method of inserting a titanium micro-mesh, in which it is wrapped with the aseptic transparent plastic film that is used to pack surgical instruments or is attached to one side of the inner suture package. Fifty-four patients underwent orbital wall reconstruction using a transconjunctival or transcutaneous approach. The wrapped implant was easily inserted without catching or injuring the orbital soft tissue, skin, or conjunctiva. In most cases, the implant was inserted in one attempt. Postoperative computed tomographic scans showed excellent placement of the titanium micro-mesh and adequate anatomic reconstruction of the orbital walls. This wrapping insertion method may be useful for making the insertion of titanium micro-mesh implants in the reconstruction of orbital wall fractures easier and less traumatic. PMID:26848451
Improved total variation minimization method for few-view computed tomography image reconstruction
2014-01-01
Background Due to the harmful radiation dose effects for patients, minimizing the x-ray exposure risk has been an area of active research in medical computed tomography (CT) imaging. In CT, reducing the number of projection views is an effective means of reducing dose. The use of fewer projection views can also lead to reduced imaging time and minimize potential motion artifacts. However, conventional CT image reconstruction methods produce prominent streak artifacts from few-view data. Inspired by compressive sampling (CS) theory, iterative CT reconstruction algorithms have been developed and have generated impressive results. Method In this paper, we propose a few-view adaptive prior image total variation (API-TV) algorithm for CT image reconstruction. The prior image is reconstructed by a conventional analytic algorithm, such as the filtered backprojection (FBP) algorithm, from densely angular-sampled projections. Results To validate and evaluate the performance of the proposed algorithm, we carried out quantitative evaluation studies in computer simulation and physical experiments. Conclusion The results show that the API-TV algorithm can yield images with quality comparable to that obtained with existing algorithms. PMID:24903155
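Total-variation-regularized iterative CT reconstruction alternates data-fidelity updates with steps that reduce the image's total variation. The following is a minimal sketch of a smoothed-TV gradient step (an illustrative building block, not the API-TV algorithm itself; boundary handling by replication is an assumption):

```python
import numpy as np

def tv_smooth_grad(img, eps=1e-8):
    """Gradient of the (smoothed) isotropic total variation of a 2-D image."""
    dx = np.diff(img, axis=1, append=img[:, -1:])   # forward differences,
    dy = np.diff(img, axis=0, append=img[-1:, :])   # replicated boundary
    mag = np.sqrt(dx**2 + dy**2 + eps)              # eps smooths |grad| at 0
    nx, ny = dx / mag, dy / mag
    # negative divergence of the normalised gradient field
    div = (np.diff(nx, axis=1, prepend=nx[:, :1])
           + np.diff(ny, axis=0, prepend=ny[:1, :]))
    return -div

def tv_denoise_step(img, step=0.1):
    """One descent step that lowers the smoothed TV of the image."""
    return img - step * tv_smooth_grad(img)
```

In a full algorithm such steps would be interleaved with projections onto the data-consistency constraint; here the step is shown in isolation.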
Accident or homicide--virtual crime scene reconstruction using 3D methods.
Buck, Ursula; Naether, Silvio; Räss, Beat; Jackowski, Christian; Thali, Michael J
2013-02-10
The analysis and reconstruction of forensically relevant events, such as traffic accidents, criminal assaults and homicides, are based on external and internal morphological findings of the injured or deceased person. For this approach, high-tech methods are gaining increasing importance in forensic investigations. The non-contact optical 3D digitising system GOM ATOS is applied as a suitable tool for whole-body surface and wound documentation and analysis in order to identify injury-causing instruments and to reconstruct the course of events. In addition to the surface documentation, cross-sectional imaging methods deliver medical internal findings of the body. These 3D data are fused into a whole-body model of the deceased. In addition to the findings on the bodies, the injury-inflicting instruments and the incident scene are documented in 3D. The 3D data of the incident scene, generated by 3D laser scanning and photogrammetry, are also included in the reconstruction. Two cases illustrate the methods. In the first case a man was shot in his bedroom, and the main question was whether the offender shot the man intentionally or accidentally, as he declared. In the second case a woman was hit by a car driving backwards into a garage. It was unclear whether the driver drove backwards once or twice, which would indicate that he willingly injured and killed the woman. With this work, we demonstrate how 3D documentation, data merging and animation make it possible to answer reconstructive questions regarding the dynamic development of patterned injuries, and how this leads to a real data-based reconstruction of the course of events. PMID:22727689
A multi-thread scheduling method for 3D CT image reconstruction using multi-GPU.
Zhu, Yining; Zhao, Yunsong; Zhao, Xing
2012-01-01
As a whole process, we take the view that the complete reconstruction of a CT image should include both the computation part on GPUs and the data storage part on hard disks. From this point of view, we propose a Multi-Thread Scheduling (MTS) method to implement 3D CT image reconstruction, such as with the FDK algorithm, trading off computing and storage time. In this method we use multiple threads to control the GPUs and a separate thread to handle data storage, so that calculation and data storage proceed simultaneously. In addition, we use 4-channel textures to maintain symmetrical projection data in the CUDA framework, which reduces the calculation time significantly. Numerical experiments show that the time for the whole process with our method is almost the same as the data storage time alone. PMID:22635174
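The idea of dedicating a separate thread to storage, so that slice computation and disk writes overlap, can be sketched with a bounded queue. This is a generic producer-consumer pattern in Python, not the authors' CUDA implementation; `compute_slice` and `store_slice` are hypothetical user callbacks:

```python
import queue
import threading

def reconstruct_slices(num_slices, compute_slice, store_slice, depth=4):
    """Overlap slice computation with storage using a bounded queue.

    compute_slice(i) -> slice data; store_slice(i, data) persists it.
    A writer thread drains the queue while the main thread keeps computing.
    """
    q = queue.Queue(maxsize=depth)

    def writer():
        while True:
            item = q.get()
            if item is None:            # sentinel: no more slices
                break
            store_slice(*item)

    t = threading.Thread(target=writer)
    t.start()
    for i in range(num_slices):
        q.put((i, compute_slice(i)))    # blocks if the writer falls behind
    q.put(None)
    t.join()
```

The bounded `maxsize` keeps memory use flat when computation outpaces storage, which mirrors the paper's observation that total runtime is dominated by the storage time.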
A semi-automatic method for positioning a femoral bone reconstruction for strict view generation.
Milano, Federico; Ritacco, Lucas; Gomez, Adrian; Gonzalez Bernaldo de Quiros, Fernan; Risk, Marcelo
2010-01-01
In this paper we present a semi-automatic method for femoral bone positioning after 3D image reconstruction from Computed Tomography images. This serves as grounding for the definition of strict axial, longitudinal and anterior-posterior views, overcoming the problem of patient positioning biases in 2D femoral bone measuring methods. After the bone reconstruction is aligned to a standard reference frame, new tomographic slices can be generated, on which unbiased measures may be taken. This could allow not only accurate inter-patient comparisons but also intra-patient comparisons, i.e., comparisons of images of the same patient taken at different times. This method could enable medical doctors to diagnose and follow up several bone deformities more easily. PMID:21096490
Astigmatism error modification for absolute shape reconstruction using Fourier transform method
NASA Astrophysics Data System (ADS)
He, Yuhang; Li, Qiang; Gao, Bo; Liu, Ang; Xu, Kaiyuan; Wei, Xiaohong; Chai, Liqun
2014-12-01
A method is proposed to modify astigmatism errors in the absolute shape reconstruction of an optical flat using the Fourier transform method. If a transmission flat and a reflection flat are used in an absolute test, two translation measurements allow the absolute shapes to be obtained by making use of the characteristic relationship between the differential and original shapes in the spatial frequency domain. However, because the translation device cannot guarantee that the test and reference flats remain rigidly parallel to each other after the translations, a tilt error exists in the obtained differential data, which causes power and astigmatism errors in the reconstructed shapes. In order to modify the astigmatism errors, a rotation measurement is added. Based on the rotational invariance of the form of the Zernike polynomials in a circular domain, the astigmatism terms are calculated by solving polynomial-coefficient equations related to the rotation differential data, and the erroneous astigmatism terms are subsequently corrected. Computer simulation proves the validity of the proposed method.
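Extracting the astigmatism content of a sampled surface amounts to a least-squares fit against the two Zernike astigmatism basis functions. A minimal sketch on a unit-disk grid (illustrative only; the authors' method works from rotation-differential data, which is not reproduced here):

```python
import numpy as np

def fit_astigmatism(surface, mask=None):
    """Least-squares coefficients of the two Zernike astigmatism terms
    (r^2 cos 2θ and r^2 sin 2θ) for a surface sampled on the unit disk."""
    n = surface.shape[0]
    y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
    r2, th = x**2 + y**2, np.arctan2(y, x)
    if mask is None:
        mask = r2 <= 1.0                       # restrict to the unit disk
    basis = np.column_stack([(r2 * np.cos(2*th))[mask],
                             (r2 * np.sin(2*th))[mask]])
    coeffs, *_ = np.linalg.lstsq(basis, surface[mask], rcond=None)
    return coeffs
```

Subtracting `basis @ coeffs` from the masked surface would remove the fitted astigmatism, analogous to the correction step described in the abstract.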
High-contrast pattern reconstructions using a phase-seeded point CGH method.
McWilliam, Richard; Williams, Gavin L; Cowling, Joshua J; Seed, Nicholas L; Purvis, Alan
2016-03-01
A major challenge encountered in digital holography applications is the need to synthesize computer-generated holograms (CGHs) that are realizable as phase-only elements while also delivering high quality reconstruction. This trade-off is particularly acute in high-precision applications such as photolithography where contrast typically must exceed 0.6. A seeded-phase point method is proposed to address this challenge, whereby patterns composed of fine lines that intersect and form closed shapes are reconstructed with high contrast while maintaining a phase-only CGH. The method achieves superior contrast to that obtained by uniform or random seeded-phase methods while maintaining computational efficiency for large area exposures. It is also shown that binary phase modulation achieves similar contrast performance with benefits for the fabrication of simpler diffractive optical elements. PMID:26974633
An infrared image super-resolution reconstruction method based on compressive sensing
NASA Astrophysics Data System (ADS)
Mao, Yuxing; Wang, Yan; Zhou, Jintao; Jia, Haiwei
2016-05-01
Limited by the properties of the infrared detector and camera lens, infrared images often lack detail and appear indistinct. The spatial resolution needs to be improved to satisfy the requirements of practical applications. Based on compressive sensing (CS) theory, this paper presents a single-image super-resolution reconstruction (SRR) method. By jointly adopting an image degradation model, a difference-operation-based sparse transformation method and the orthogonal matching pursuit (OMP) algorithm, the image SRR problem is transformed into a sparse signal reconstruction problem in CS theory. In our work, the sparse transformation matrix is obtained by applying a difference operation to the image, and the measurement matrix is derived analytically from the imaging principle of the infrared camera. Therefore, the time consumption can be decreased compared with a redundant dictionary obtained by sample training, such as K-SVD. The experimental results show that our method achieves favorable performance and good stability with low algorithmic complexity.
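The OMP algorithm referenced above greedily selects the dictionary atom most correlated with the current residual and re-solves a small least-squares problem on the chosen support. A compact generic sketch for a sensing matrix `A` (the paper's actual measurement and sparse transformation matrices come from the infrared imaging model and are not reproduced here):

```python
import numpy as np

def omp(A, b, k):
    """Orthogonal matching pursuit: greedy k-sparse solution of A x ≈ b."""
    residual = b.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # most correlated atom
        if j not in support:
            support.append(j)
        # re-fit coefficients on the current support
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
    x[support] = coef
    return x
```

For well-conditioned problems the residual drops to zero once the true support is found, which is the stopping behaviour exploited in CS recovery.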
Zhang, Haiyan; Zhang, Liyi; Sun, Yunshan; Zhang, Jingyu
2015-01-01
Reducing the X-ray tube current is one of the widely used methods for decreasing the radiation dose. Unfortunately, the signal-to-noise ratio (SNR) of the projection data degrades simultaneously. To improve the quality of reconstructed images, a dictionary-learning-based penalized weighted least-squares (PWLS) approach is proposed for sinogram denoising. The weighted least-squares term accounts for the statistical characteristics of the noise, and the penalty models the sparsity of the sinogram based on dictionary learning. The CT image is then reconstructed from the denoised sinogram using the filtered back-projection (FBP) algorithm. The proposed method is particularly suitable for projection data with low SNR. Experimental results show that the proposed method can produce high-quality CT images when the SNR of the projection data declines sharply. PMID:26409424
A Monte Carlo based three-dimensional dose reconstruction method derived from portal dose images
Elmpt, Wouter J. C. van; Nijsten, Sebastiaan M. J. J. G.; Schiffeleers, Robert F. H.; Dekker, Andre L. A. J.; Mijnheer, Ben J.; Lambin, Philippe; Minken, Andre W. H.
2006-07-15
The verification of intensity-modulated radiation therapy (IMRT) is necessary for adequate quality control of the treatment. Pretreatment verification may trace the possible differences between the planned dose and the actual dose delivered to the patient. To estimate the impact of differences between planned and delivered photon beams, a three-dimensional (3-D) dose verification method has been developed that reconstructs the dose inside a phantom. The pretreatment procedure is based on portal dose images of the separate beams, measured with an electronic portal imaging device (EPID) without the phantom in the beam, and a 3-D dose calculation engine based on Monte Carlo calculation. Measured gray-scale portal images are converted into portal dose images. From these images the lateral scattered dose in the EPID is subtracted and the image is converted into energy fluence. Subsequently, a phase-space distribution is sampled from the energy fluence and a 3-D dose calculation in a phantom is started using a Monte Carlo dose engine. The reconstruction model is compared to film and ionization chamber measurements for various field sizes. The reconstruction algorithm is also tested for an IMRT plan using 10 MV photons delivered to a phantom and measured using films at several depths in the phantom. Depth dose curves for both 6 and 10 MV photons are reconstructed with a maximum error generally smaller than 1% at depths beyond the buildup region, and smaller than 2% for the off-axis profiles, excluding the penumbra region. The absolute dose values are reconstructed to within 1.5% for square field sizes ranging from 5 to 20 cm width. For the IMRT plan, the dose was reconstructed and compared to the dose distribution measured with film using the gamma evaluation, with a 3% and 3 mm criterion; 99% of the pixels inside the irradiated field had a gamma value smaller than one. The absolute dose at the isocenter agreed to within 1% with the dose measured with an ionization chamber.
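The gamma evaluation used above combines a dose-difference criterion (3%) with a distance-to-agreement criterion (3 mm). A one-dimensional sketch with global normalisation (illustrative only; clinical implementations interpolate the dose grids and work in 2-D or 3-D):

```python
import numpy as np

def gamma_index_1d(x, ref, meas, dose_tol=0.03, dist_tol=3.0):
    """1-D gamma evaluation: for each reference point, the minimum over
    measured points of sqrt((Δdose/dose_tol)^2 + (Δdist/dist_tol)^2).

    dose_tol is relative to the reference maximum (global normalisation);
    dist_tol is in the same units as the positions x. A point passes
    the criterion where the returned value is <= 1.
    """
    dmax = ref.max()
    gam = np.empty_like(ref, dtype=float)
    for i, (xi, di) in enumerate(zip(x, ref)):
        dd = (meas - di) / (dose_tol * dmax)   # normalised dose differences
        dx = (x - xi) / dist_tol               # normalised distances
        gam[i] = np.sqrt(dd**2 + dx**2).min()
    return gam
```

A pass rate such as the paper's 99% would then be `(gam <= 1).mean()` over the pixels inside the irradiated field.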
Sparsity reconstruction for bioluminescence tomography based on an augmented Lagrangian method
NASA Astrophysics Data System (ADS)
Guo, Wei; Jia, Kebin; Tian, Jie; Han, Dong; Liu, Xueyan; Liu, Kai; Zhang, Qian; Feng, Jinchao; Qin, Chenghu
2012-03-01
Bioluminescence imaging (BLI) is an optical molecular imaging modality for monitoring physiological and pathological activities at the molecular level. The distribution of a bioluminescent probe in small animals can be obtained three-dimensionally and quantitatively by bioluminescence tomography (BLT). Due to its ill-posed nature, BLT may yield multiple solutions and aberrant reconstructions in the presence of measurement noise and optical parameter mismatches. Among the different regularization methods, the L2-type regularization strategy is the most popular and most commonly applied; it minimizes the output-least-squares formulation with an l2-norm regularization term to stabilize the problem. However, it often imposes over-smoothing on the reconstruction results. In contrast, for many practical applications, such as early detection of tumors, the volumes of the bioluminescent sources are very small compared with the whole body. In this paper, L1 regularization is used to take full advantage of the sparsity prior and to improve both efficiency and stability. A reconstruction method based on the augmented Lagrangian approach is then proposed, which treats BLT as a constrained optimization problem and employs the Bregman iterative method to solve it. Using a "divide and conquer" approach, the optimization problem can be solved exactly and quickly by iteratively solving a sequence of unconstrained subproblems. To evaluate the performance of the proposed method in a turbid mouse geometry, simulation experiments with a heterogeneous 3D mouse atlas are conducted. In addition, physical experiments further demonstrate the potential of the proposed algorithm in practical applications.
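The building block of Bregman-type l1 solvers is the soft-thresholding (shrinkage) operator, shown here inside a basic iterative shrinkage-thresholding loop. This is a simplified relative of the authors' augmented Lagrangian scheme, not their algorithm; `A`, `b` and `lam` are placeholder names:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||·||_1 (shrinkage), the core l1 update."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=200):
    """Iterative shrinkage-thresholding for min ||Ax - b||^2 / 2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # gradient step on the data term, then shrinkage on the l1 term
        x = soft_threshold(x - (A.T @ (A @ x - b)) / L, lam / L)
    return x
```

Bregman iteration wraps such unconstrained subproblems in an outer loop that re-adds the residual to `b`, which is the "divide and conquer" structure the abstract refers to.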
Listening to the noise: random fluctuations reveal gene network parameters
Munsky, Brian; Khammash, Mustafa
2009-01-01
The cellular environment is abuzz with noise. The origin of this noise is attributed to the inherent random motion of reacting molecules that take part in gene expression and post expression interactions. In this noisy environment, clonal populations of cells exhibit cell-to-cell variability that frequently manifests as significant phenotypic differences within the cellular population. The stochastic fluctuations in cellular constituents induced by noise can be measured and their statistics quantified. We show that these random fluctuations carry within them valuable information about the underlying genetic network. Far from being a nuisance, the ever-present cellular noise acts as a rich source of excitation that, when processed through a gene network, carries its distinctive fingerprint that encodes a wealth of information about that network. We demonstrate that in some cases the analysis of these random fluctuations enables the full identification of network parameters, including those that may otherwise be difficult to measure. This establishes a potentially powerful approach for the identification of gene networks and offers a new window into the workings of these networks.
A Statistical Method for Reconstructing the Core Location of an Extensive Air Shower
NASA Astrophysics Data System (ADS)
Hedayati Kh., H.; Moradi, A.; Emami, M.
2015-09-01
Conventional methods of reconstructing extensive air showers (EASs) depend on a lateral density function which itself depends on shower size, age parameter, and core location. In the fitting procedure of a lateral density function to surface array information, the only parameter whose initial value is essential is core location. In this paper, we describe a refined version of a statistical method which can be used to find the initial trial core location of EASs with better precision than the conventional methods. In this method, we use arrival time information of secondary particles for finding not only arrival direction, but also core location.
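A simple statistical estimate of an initial trial core location is a density-weighted centroid of the triggered stations. The sketch below is a generic illustration of that idea only; the paper's refined method additionally exploits the arrival-time information of secondary particles, which is not reproduced here:

```python
import numpy as np

def initial_core_estimate(positions, densities, power=2.0):
    """Density-weighted centroid of detector positions as an initial
    trial core location.

    positions: (n, 2) station coordinates; densities: (n,) particle
    densities. Raising densities to `power` emphasises the densest
    stations, pulling the estimate toward the shower core.
    """
    w = np.asarray(densities, dtype=float) ** power
    return (np.asarray(positions, dtype=float) * w[:, None]).sum(0) / w.sum()
```

Such an estimate would seed the subsequent lateral-density-function fit, whose convergence the abstract notes is sensitive to the initial core location.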
Karnowski, Thomas Paul; Govindaswamy, Priya; Tobin Jr, Kenneth William; Chaum, Edward; Abramoff, M.D.
2008-01-01
In this work we report on a method for lesion segmentation based on the morphological reconstruction methods of Sbeh et al. We adapt the method to include segmentation of dark lesions given a vasculature segmentation. The segmentation is performed at a variety of scales determined using ground-truth data. Since the method tends to over-segment imagery, ground-truth data were used to create post-processing filters that separate nuisance blobs from true lesions. A sensitivity and specificity of 90% for the classification of blobs into nuisance and actual lesions was achieved on two data sets of 86 and 1296 images.
Reconstructing uniformly attenuated rotating slant-hole SPECT projection data using the DBH method
NASA Astrophysics Data System (ADS)
Huang, Qiu; Xu, Jingyan; Tsui, Benjamin M. W.; Gullberg, Grant T.
2009-07-01
This work applies a previously developed analytical algorithm to the reconstruction problem in a rotating multi-segment slant-hole (RMSSH) SPECT system. The RMSSH collimator has greater detection efficiency than the parallel-hole collimator with comparable spatial resolution at the expense of a limited common volume-of-view (CVOV) and is therefore suitable for detecting low-contrast lesions in breast, cardiac and brain imaging. The absorption of gamma photons in both the human breast and brain can be assumed to follow an exponential rule with a constant attenuation coefficient. In this work, the RMSSH SPECT data of a digital NCAT phantom with breast attachment are modeled as the uniformly attenuated Radon transform of the activity distribution. These data are reconstructed using an analytical algorithm called the DBH method, which is an acronym for the procedure of differentiation backprojection followed by a finite weighted inverse Hilbert transform. The projection data are first differentiated along a specific direction in the projection space and then backprojected to the image space. The result from this first step is equal to a one-dimensional finite weighted Hilbert transform of the object; this transform is then numerically inverted to obtain the reconstructed image. With the limited CVOV of the RMSSH collimator, the detector captures gamma photon emissions from the breast and from parts of the torso. The simulation results show that the DBH method is capable of exactly reconstructing the activity within a well-defined region-of-interest (ROI) within the breast if the activity is confined to the breast or if the activity outside the CVOV is uniformly attenuated for each measured projection, while a conventional filtered backprojection algorithm only reconstructs the high frequency components of the activity function in the same geometry.
NASA Astrophysics Data System (ADS)
Zhang, Yue; Sun, Xian; Thiele, Antje; Hinz, Stefan
2015-10-01
Synthetic aperture radar (SAR) systems, such as TanDEM-X, TerraSAR-X and Cosmo-SkyMed, acquire imagery with high spatial resolution (HR), making it possible to observe objects in urban areas with high detail. In this paper, we propose a new top-down framework for three-dimensional (3D) building reconstruction from HR interferometric SAR (InSAR) data. Unlike most methods proposed before, we adopt a generative model and carry out the reconstruction by maximum a posteriori (MAP) estimation through Monte Carlo methods. The reason for this strategy is that the noisiness of SAR images calls for a thorough prior model to better cope with the inherent amplitude and phase fluctuations. In the reconstruction process, according to the radar configuration and the building geometry, a 3D building hypothesis is mapped to the SAR image plane and decomposed into feature regions such as layover, corner line, and shadow. Then, the statistical properties of the intensity, interferometric phase and coherence of each region are explored respectively and included as region terms. Roofs are not directly considered, as in most cases they are mixed with the walls into the layover area. When estimating the similarity between the building hypothesis and the real data, the prior and the region terms, together with the edge term related to the contours of the layover and corner line, are taken into consideration. In the optimization step, in order to achieve convergent reconstruction outputs and avoid local extrema, special transition kernels are designed. The proposed framework is evaluated on the TanDEM-X dataset and performs well for building reconstruction.
Does thorax EIT image analysis depend on the image reconstruction method?
NASA Astrophysics Data System (ADS)
Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut
2013-04-01
Different methods have been proposed to analyze the resulting images of electrical impedance tomography (EIT) measurements during ventilation. The aim of our study was to examine whether analysis methods based on back-projection deliver the same results when applied to images based on other reconstruction algorithms. Seven mechanically ventilated patients with ARDS were examined by EIT. The thorax contours were determined from the routine CT images. EIT raw data were reconstructed offline with (1) filtered back-projection with a circular forward model (BPC); (2) the GREIT reconstruction method with a circular forward model (GREITC) and (3) GREIT with individual thorax geometry (GREITT). Three parameters were calculated on the resulting images: linearity, global ventilation distribution and regional ventilation distribution. The results of the linearity test are 5.03±2.45, 4.66±2.25 and 5.32±2.30 for BPC, GREITC and GREITT, respectively (median ± interquartile range). The differences among the three methods are not significant (p = 0.93, Kruskal-Wallis test). The proportions of ventilation in the right lung are 0.58±0.17, 0.59±0.20 and 0.59±0.25 for BPC, GREITC and GREITT, respectively (p = 0.98). The differences in the GI index based on the different reconstruction methods (0.53±0.16, 0.51±0.25 and 0.54±0.16 for BPC, GREITC and GREITT, respectively) are also not significant (p = 0.93). We conclude that the parameters developed for images generated with GREITT are comparable with those from filtered back-projection and GREITC.
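The GI index reported above summarizes ventilation heterogeneity within the lung region. A minimal sketch, assuming the commonly used definition as the normalized absolute deviation from the median lung pixel value of the tidal EIT image:

```python
import numpy as np

def gi_index(tidal_image, lung_mask):
    """Global inhomogeneity (GI) index of an EIT tidal image: summed
    absolute deviation from the median lung pixel value, normalised by
    the total tidal signal within the lung region."""
    px = tidal_image[lung_mask]
    return np.abs(px - np.median(px)).sum() / px.sum()
```

A perfectly homogeneous ventilation distribution gives a GI of 0; larger values indicate greater regional inhomogeneity, independent of the absolute impedance scale.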
NASA Astrophysics Data System (ADS)
Bernstein, Ally Leigh; Dhanantwari, Amar; Jurcova, Martina; Cheheltani, Rabee; Naha, Pratap Chandra; Ivanc, Thomas; Shefer, Efrat; Cormode, David Peter
2016-05-01
Computed tomography is a widely used medical imaging technique that has high spatial and temporal resolution. Its weakness is its low sensitivity towards contrast media. Iterative reconstruction techniques (ITER) have recently become available, which provide reduced image noise compared with traditional filtered back-projection methods (FBP); this may allow the sensitivity of CT to be improved, but the effect has not been studied in detail. We scanned phantoms containing either an iodine contrast agent or gold nanoparticles. We used a range of tube voltages and currents. We performed reconstruction with FBP, ITER and a novel, iterative, model-based reconstruction (IMR) algorithm. We found that noise decreased in an algorithm-dependent manner (FBP > ITER > IMR) for every scan and that no differences were observed in the attenuation rates of the agents. The contrast-to-noise ratio (CNR) of iodine was highest at 80 kV, whilst the CNR for gold was highest at 140 kV. The CNR of IMR images was almost tenfold higher than that of FBP images. Similar trends were found in dual-energy images formed using these algorithms. In conclusion, IMR-based reconstruction techniques will allow contrast agents to be detected with greater sensitivity, and may allow lower contrast agent doses to be used.
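The contrast-to-noise ratio driving these comparisons is conventionally the mean attenuation difference between an agent region of interest and the background, divided by the background noise; a minimal sketch (the HU values are synthetic, not the phantom data):

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-noise ratio: mean attenuation difference between the
    contrast-agent ROI and the background, divided by background noise."""
    roi, background = np.asarray(roi, float), np.asarray(background, float)
    return (roi.mean() - background.mean()) / background.std()

# Lower image noise (as from iterative reconstruction) raises CNR for
# the same attenuation difference.
noisy = cnr([110, 112, 108, 110], [50, 52, 48, 50])
quiet = cnr([110, 112, 108, 110], [50.2, 49.8, 50.1, 49.9])
print(quiet > noisy)  # True
```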
A comparison of reconstruction methods for undersampled atomic force microscopy images.
Luo, Yufan; Andersson, Sean B
2015-12-18
Non-raster scanning and undersampling of atomic force microscopy (AFM) images are techniques for improving imaging rate and reducing the amount of tip-sample interaction needed to produce an image. Generation of the final image can be done using a variety of image processing techniques based on interpolation or optimization. The choice of reconstruction method has a large impact on the quality of the recovered image, and the proper choice depends on the sample under study. In this work we compare interpolation through the use of inpainting algorithms with reconstruction based on optimization through the use of the basis pursuit algorithm commonly used for signal recovery in compressive sensing. Using four different sampling patterns found in non-raster AFM, namely row subsampling, spiral scanning, Lissajous scanning, and random scanning, we subsample data from existing images and compare reconstruction performance against the original image. The results illustrate that inpainting generally produces superior results when the image contains primarily low-frequency content, while basis pursuit is better when the images have mixed, but sparse, frequency content. Using support vector machines, we then classify images based on their frequency content and sparsity and, from this classification, develop a fast decision strategy to select a reconstruction algorithm to be used on subsampled data. The performance of the classification and decision test is demonstrated on test AFM images. PMID:26585418
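The decision strategy rests on two image features, frequency content and sparsity, which can be estimated from the 2D spectrum. Below is a hedged sketch: the feature definitions, the low-frequency radius, and the simple threshold rule (standing in for the paper's SVM classifier) are all illustrative assumptions.

```python
import numpy as np

def spectral_features(image):
    """Return (fraction of spectral energy at low frequencies, fraction of
    coefficients holding 95% of the energy) for a square image."""
    F = np.fft.fftshift(np.fft.fft2(image))
    power = np.abs(F) ** 2
    n = image.shape[0]
    c, r = n // 2, n // 8          # "low frequency" radius: illustrative choice
    yy, xx = np.ogrid[:n, :n]
    low = (yy - c) ** 2 + (xx - c) ** 2 <= r ** 2
    low_frac = power[low].sum() / power.sum()
    sorted_p = np.sort(power.ravel())[::-1]
    k = np.searchsorted(np.cumsum(sorted_p), 0.95 * power.sum()) + 1
    return low_frac, k / power.size

def choose_method(image, low_thresh=0.9):
    """Threshold stand-in for the paper's SVM-based decision test."""
    low_frac, _ = spectral_features(image)
    return "inpainting" if low_frac >= low_thresh else "basis_pursuit"

x = np.arange(64)
smooth = 2 + np.cos(2 * np.pi * x / 64)[None, :] + np.cos(2 * np.pi * x / 64)[:, None]
noise = np.random.default_rng(0).standard_normal((64, 64))
print(choose_method(smooth), choose_method(noise))  # inpainting basis_pursuit
```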
NASA Astrophysics Data System (ADS)
Preza, Chrysanthe; Miller, Michael I.; Conchello, Jose-Angel
1993-07-01
We have shown that the linear least-squares (LLS) estimate of the intensities of a 3-D object obtained from a set of optical sections is unstable due to the inversion of small and zero-valued eigenvalues of the point-spread function (PSF) operator. The LLS solution was regularized by constraining it to lie in a subspace spanned by the eigenvectors corresponding to a selected number of the largest eigenvalues. In this paper we extend the regularized LLS solution to a maximum a posteriori (MAP) solution induced by a prior formed from a 'Good's-like' smoothness penalty. This approach also yields a regularized linear estimator which reduces noise as well as edge artifacts in the reconstruction. The advantage of the linear MAP (LMAP) estimate over the current regularized LLS (RLLS) is its ability to regularize the inverse problem by smoothly penalizing components in the image associated with small eigenvalues. Computer simulations were performed using a theoretical PSF and a simple phantom to compare the two regularization techniques. It is shown that the reconstructions using the smoothness prior give superior variance and bias results compared to the RLLS reconstructions. Encouraging reconstructions obtained with the LMAP method from real microscope images of a 10 micrometer fluorescent bead and a four-cell Volvox embryo are shown.
NASA Astrophysics Data System (ADS)
Lartizien, Carole; Kinahan, Paul E.; Comtat, Claude; Lin, Michael; Swensson, Richard G.; Trebossen, Regine; Bendriem, Bernard
2000-04-01
This work presents initial results from observer detection performance studies using the same volume visualization software tools that are used in clinical PET oncology imaging. Research into the FORE+OSEM and FORE+AWOSEM statistical image reconstruction methods tailored to whole-body 3D PET oncology imaging has indicated potential improvements in image SNR compared to the currently used analytic reconstruction methods (FBP). To assess the resulting impact of these reconstruction methods on the performance of human observers in detecting and localizing tumors, we use a non-Monte Carlo technique to generate multiple statistically accurate realizations of 3D whole-body PET data, based on an extended MCAT phantom and with clinically realistic levels of statistical noise. For each realization, we add a fixed number of randomly located 1 cm diam. lesions whose contrast is varied among pre-calibrated values so that the range of true positive fractions is well sampled. The observer is told the number of tumors and, similar to the AFROC method, asked to localize all of them. The true positive fraction for the three algorithms (FBP, FORE+OSEM, FORE+AWOSEM) as a function of lesion contrast is calculated, although other protocols could be compared. A confidence level for each tumor is also recorded for incorporation into later AFROC analysis.
A Reconstruction Method Based on AL0FGD for Compressed Sensing in Border Monitoring WSN System
Wang, Yan; Wu, Xi; Li, Wenzao; Zhang, Yi; Li, Zhi; Zhou, Jiliu
2014-01-01
In this paper, to monitor the border in real time with high efficiency and accuracy, we applied compressed sensing (CS) technology to a border monitoring wireless sensor network (WSN) system and proposed a reconstruction method based on an approximate l0 norm and fast gradient descent (AL0FGD) for CS. In the front end of the system, the measurement matrix was used to sense the border information in a compressed manner, and the proposed reconstruction method was then applied to recover the border information at the monitoring terminal. To evaluate the performance of the proposed method, a helicopter sound signal was used as an example in the experimental simulation, and three other typical reconstruction algorithms, (1) the split Bregman algorithm, (2) the iterative shrinkage algorithm, and (3) smoothed approximate l0 norm (SL0), were employed for comparison. The experimental results showed that the proposed method has better performance in recovering the helicopter sound signal in most cases, which could serve as a basis for further study of the border monitoring WSN system. PMID:25461759
NASA Astrophysics Data System (ADS)
Lei, Jing; Liu, Shi
2015-12-01
Electrical capacitance tomography (ECT) is considered to be a competitive measurement method. The imaging objects in ECT measurements are often in a time-varying process, and exploiting the prior information related to this dynamic nature is important for reconstructing high-quality images. Different from existing reconstruction models, in this paper a new model is proposed to formulate the dynamic imaging problem; it incorporates the spatial correlation of the pixels by introducing the radial basis function (RBF) method, the dynamic behaviors of a time-varying imaging object, and the ECT measurement information. An objective functional that exploits the spatial correlation of the pixels, the combinational regularizer of the first-order total variation (FOTV) and the second-order total variation (SOTV), multi-scale regularization, a spatial constraint, and the temporal correlation is proposed to convert the ECT imaging task into an optimization problem. An iteration scheme based on the split Bregman iteration (SBI) method is developed for solving the proposed objective functional. Numerical simulation results validate the superiority of the proposed reconstruction method in improving the imaging quality.
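Schematically, an objective of this kind combines a data-fidelity term with first- and second-order total variation penalties; the remaining terms (multi-scale regularization, spatial constraint, temporal correlation) are abbreviated into one symbol here, and the notation is generic ECT notation rather than the paper's exact formulation:

```latex
\min_{\mathbf{g}} \;
\|\mathbf{S}\mathbf{g}-\boldsymbol{\lambda}\|_2^2
\;+\; \alpha_1 \|\nabla \mathbf{g}\|_1
\;+\; \alpha_2 \|\nabla^2 \mathbf{g}\|_1
\;+\; \mathcal{R}_{\mathrm{prior}}(\mathbf{g})
```

where g is the pixel (permittivity) image, S the sensitivity matrix and λ the capacitance measurements; the FOTV term preserves edges, while the SOTV term suppresses the staircase artifacts that FOTV alone tends to produce.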
A Reconstruction Method of Blood Flow Velocity in Left Ventricle Using Color Flow Ultrasound
Jang, Jaeseong; Ahn, Chi Young; Jeon, Kiwan; Heo, Jung; Lee, DongHak; Choi, Jung-il
2015-01-01
Vortex flow imaging is a relatively new medical imaging method for the dynamic visualization of intracardiac blood flow, a potentially useful index of cardiac dysfunction. A reconstruction method is proposed here to quantify the distribution of blood flow velocity fields inside the left ventricle from color flow images compiled from ultrasound measurements. In this paper, a 2D incompressible Navier-Stokes equation with a mass source term is proposed to utilize the measurable color flow ultrasound data in a plane along with the moving boundary condition. The proposed model reflects out-of-plane blood flows on the imaging plane through the mass source term. The boundary conditions to solve the system of equations are derived from the dimensions of the ventricle extracted from 2D echocardiography data. The performance of the proposed method is evaluated numerically using synthetic flow data acquired from simulating left ventricle flows. The numerical simulations show the feasibility and potential usefulness of the proposed method of reconstructing the intracardiac flow fields. Of particular note is the finding that the mass source term in the proposed model improves the reconstruction performance. PMID:26078773
Reconstruction of 3D structure using stochastic methods: morphology and transport properties
NASA Astrophysics Data System (ADS)
Karsanina, Marina; Gerke, Kirill; Čapek, Pavel; Vasilyev, Roman; Korost, Dmitry; Skvortsova, Elena
2013-04-01
One of the main factors defining numerous flow phenomena in rocks, soils and other porous media, including fluid and solute movements, is pore structure, e.g., pore sizes and their connectivity. Numerous numerical methods have been developed to quantify single- and multi-phase flow in such media on the microscale. Among the most popular are: 1) a wide range of finite difference/element/volume solutions of the Navier-Stokes equations and their simplifications; 2) the lattice-Boltzmann method; and 3) pore-network models, among others. Each method has some advantages and shortcomings, so that different research teams usually utilize more than one, depending on the study case. Recent progress in 3D imaging of internal structure, e.g., X-ray tomography, FIB-SEM and confocal microscopy, has made it possible to obtain digitized input pore parameters for such models; however, a trade-off between resolution and sample size is usually unavoidable. There are situations when only standard two-dimensional information on the porous structure is available, due to the high cost of tomography or its resolution limitations. However, physical modeling on the microscale requires 3D information. There are three main approaches to reconstructing porous media from 2D cut(s) or some other limited information/properties: 1) statistical methods (correlation functions and simulated annealing, multi-point statistics, entropy methods), 2) sequential methods (sphere or other granular packs) and 3) morphological methods. Stochastic reconstructions using correlation functions possess an important advantage: they provide a statistical description of the structure, which is known to have relationships with all physical properties. In addition, this method is more flexible for other applications to characterize porous media. Taking different 3D scans of natural and artificial porous materials (sandstones, soils, shales, ceramics), we choose some 2D cut/s as sources of input correlation functions. Based on different types of correlation functions
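The two-point correlation functions that serve as reconstruction input can be computed efficiently from a 2D cut via FFT autocorrelation (Wiener-Khinchin); a minimal sketch, assuming a binary pore/solid image and periodic boundaries:

```python
import numpy as np

def two_point_probability(img):
    """Two-point probability function S2(r) of a binary 0/1 image via FFT
    autocorrelation, with periodic boundaries. S2 at zero lag equals the
    phase fraction (e.g. the porosity for a pore-phase image)."""
    f = np.fft.fft2(img)
    return np.fft.ifft2(f * np.conj(f)).real / img.size

rng = np.random.default_rng(0)
pores = (rng.random((64, 64)) < 0.3).astype(float)  # synthetic 2D cut
s2 = two_point_probability(pores)
print(np.isclose(s2[0, 0], pores.mean()))  # True
```

For an uncorrelated medium S2 decays from the porosity φ at r = 0 toward φ² at large lags; deviations from that decay are exactly the structural information the simulated-annealing reconstruction matches.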
A numerical method of reconstructing the pollutant concentration field in a ventilated room.
Braconnier, R; Bonthoux, F
2007-04-01
Pollutant source emission flow rates in the workplace are typically unknown in occupational hygiene. Similarly, a restricted number of concentration measurements can provide only spatially limited information on the pollutant distribution in the room. This paper presents a numerical method to evaluate the intensities of pollutant sources and to reconstruct the associated concentration field at every point of a ventilated enclosure containing one or several pollutant sources of unknown emission rate. This reconstructed concentration field is obtained both from the geometric and ventilation characteristics of the enclosure and from a limited number of fixed-station concentration measurements. The method is currently applicable to steady situations. The predictions obtained are then compared with concentration measurements in a laboratory closed cabin under controlled ventilation. Pollutant sources generated tracer gas emissions at known flow rates. Comparisons were performed successively for three different physical configurations. PMID:17337459
A Fast Greedy Sparse Method of Current Sources Reconstruction for Ventricular Torsion Detection
NASA Astrophysics Data System (ADS)
Bing, Lu; Jiang, Shiqin; Chen, Mengpei; Zhao, Chen; Grönemeyer, D.; Hailer, B.; Van Leeuwen, P.
2015-09-01
A fast greedy sparse (FGS) method of cardiac equivalent current source reconstruction is developed for non-invasive detection and quantitative analysis of individual left ventricular torsion. The cardiac magnetic field inverse problem is solved based on a distributed source model. The analysis of real 61-channel magnetocardiogram (MCG) data demonstrates that one or two dominant current sources with larger strength can be identified efficiently by the FGS algorithm. Then, left ventricular torsion during systole is examined on the basis of the x, y and z coordinate curves and angle changes of the reconstructed dominant current sources. The advantages of this method are that it is non-invasive and visual, with higher sensitivity and resolution. It may enable the clinical detection of cardiac systolic and ejection dysfunction.
NASA Technical Reports Server (NTRS)
Yin, Lo I.; Bielefeld, Michael J.
1987-01-01
The maximum entropy method (MEM) and balanced correlation method were used to reconstruct the images of low-intensity X-ray objects obtained experimentally by means of a uniformly redundant array coded aperture system. The reconstructed images from MEM are clearly superior. However, the MEM algorithm is computationally more time-consuming because of its iterative nature. On the other hand, both the inherently two-dimensional character of images and the iterative computations of MEM suggest the use of parallel processing machines. Accordingly, computations were carried out on the massively parallel processor at Goddard Space Flight Center as well as on the serial processing machine VAX 8600, and the results are compared.
The Nagoya cosmic-ray muon spectrometer 3, part 4: Track reconstruction method
NASA Technical Reports Server (NTRS)
Shibata, S.; Kamiya, Y.; Iijima, K.; Iida, S.
1985-01-01
One of the greatest problems in measuring particle trajectories with an optical or visual detector system is the reconstruction of the trajectories in real space from their recorded images. In the Nagoya cosmic-ray muon spectrometer, muon tracks are detected by wide-gap spark chambers and their images are recorded on photographic film through an optical system of 10 mirrors and two cameras. For the spatial reconstruction, 42 parameters of the optical system should be known to determine the configuration of this system. It is almost impossible to measure this many parameters directly with the usual techniques. In order to solve this problem, the inverse transformation method was applied. In this method, all the optical parameters are determined from the locations of fiducial marks in real space and the locations of their images on the photographic film by non-linear least-squares fitting.
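The inverse-transformation idea, recovering all optical parameters by non-linear least-squares from fiducial marks and their film images, can be sketched with a toy model. Here an 8-parameter planar homography stands in for the spectrometer's 42-parameter mirror/camera system; every name and value below is illustrative, not from the paper.

```python
import numpy as np
from scipy.optimize import least_squares

def project(params, xy):
    """Map real-space points to 'film' coordinates with a homography
    (a toy stand-in for the full optical model)."""
    h = np.append(params, 1.0).reshape(3, 3)
    p = h @ np.vstack([xy.T, np.ones(len(xy))])
    return (p[:2] / p[2]).T

def residuals(params, xy_world, uv_film):
    # Misfit between predicted and recorded film locations of the marks.
    return (project(params, xy_world) - uv_film).ravel()

rng = np.random.default_rng(1)
marks = rng.uniform(-1.0, 1.0, (20, 2))          # fiducial marks in real space
true = np.array([1.1, 0.1, 0.02, -0.05, 0.9, 0.03, 0.01, -0.02])
film = project(true, marks)                      # their recorded film images
identity = np.array([1.0, 0, 0, 0, 1.0, 0, 0, 0])
fit = least_squares(residuals, identity, args=(marks, film))
print(np.allclose(fit.x, true, atol=1e-4))
```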
NASA Technical Reports Server (NTRS)
Striepe, Scott A.; Blanchard, Robert C.; Kirsch, Michael F.; Fowler, Wallace T.
2007-01-01
On January 14, 2005, ESA's Huygens probe separated from NASA's Cassini spacecraft, entered the Titan atmosphere and landed on its surface. As part of NASA Engineering Safety Center Independent Technical Assessment of the Huygens entry, descent, and landing, and an agreement with ESA, NASA provided results of all EDL analyses and associated findings to the Huygens project team prior to probe entry. In return, NASA was provided the flight data from the probe so that trajectory reconstruction could be done and simulation models assessed. Trajectory reconstruction of the Huygens entry probe at Titan was accomplished using two independent approaches: a traditional method and a POST2-based method. Results from both approaches are discussed in this paper.
Pre-Conditioning Optimization Methods and Display for Mega-Pixel DEM Reconstructions
NASA Astrophysics Data System (ADS)
Sette, A. L.; DeLuca, E. E.; Weber, M. A.; Golub, L.
2004-05-01
The Atmospheric Imaging Assembly (AIA) for the Solar Dynamics Observatory will provide an unprecedented rate of mega-pixel solar corona data. This hastens the need for faster differential emission measure (DEM) reconstruction methods, as well as scientifically useful ways of displaying this information for mega-pixel datasets. We investigate pre-conditioning methods, which optimize DEM reconstruction by making an informed initial DEM guess that takes advantage of the sharing of DEM information among the pixels in an image. In addition, we evaluate the effectiveness of different DEM image display options, including single temperature emission maps and time-progression DEM movies. This work is supported under contract SP02D4301R to the Lockheed Martin Corp.
Bergen, Tobias; Wittenberg, Thomas
2016-01-01
Endoscopic procedures form part of routine clinical practice for minimally invasive examinations and interventions. While they are beneficial for the patient, reducing surgical trauma and making convalescence times shorter, they make orientation and manipulation more challenging for the physician, due to the limited field of view through the endoscope. However, this drawback can be reduced by means of medical image processing and computer vision, using image stitching and surface reconstruction methods to expand the field of view. This paper provides a comprehensive overview of the current state of the art in endoscopic image stitching and surface reconstruction. The literature in the relevant fields of application and algorithmic approaches is surveyed. The technological maturity of the methods and current challenges and trends are analyzed. PMID:25532214
Aibar, Sara; Fontanillo, Celia; Droste, Conrad; De Las Rivas, Javier
2015-01-01
Summary: Functional Gene Networks (FGNet) is an R/Bioconductor package that generates gene networks derived from the results of functional enrichment analysis (FEA) and annotation clustering. The sets of genes enriched with specific biological terms (obtained from a FEA platform) are transformed into a network by establishing links between genes based on common functional annotations and common clusters. The network provides a new view of FEA results revealing gene modules with similar functions and genes that are related to multiple functions. In addition to building the functional network, FGNet analyses the similarity between the groups of genes and provides a distance heatmap and a bipartite network of functionally overlapping genes. The application includes an interface to directly perform FEA queries using different external tools: DAVID, GeneTerm Linker, TopGO or GAGE; and a graphical interface to facilitate the use. Availability and implementation: FGNet is available in Bioconductor, including a tutorial. URL: http://bioconductor.org/packages/release/bioc/html/FGNet.html Contact: jrivas@usal.es Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25600944
NASA Astrophysics Data System (ADS)
Jang, Jaeseong; Ahn, Chi Young; Jeon, Kiwan; Choi, Jung-il; Lee, Changhoon; Seo, Jin Keun
2015-03-01
A reconstruction method is proposed here to quantify the distribution of blood flow velocity fields inside the left ventricle from color Doppler echocardiography measurements. From the 3D incompressible Navier-Stokes equation, a 2D incompressible Navier-Stokes equation with a mass source term is derived to utilize the measurable color flow ultrasound data in a plane along with the moving boundary condition. The proposed model reflects out-of-plane blood flows on the imaging plane through the mass source term. To demonstrate the feasibility of the proposed method, we have performed numerical simulations of the forward problem and numerical analysis of the reconstruction method. First, we construct a 3D moving LV region having a specific stroke volume. To obtain synthetic intra-ventricular flows, we performed a numerical simulation of the forward problem of the Navier-Stokes equation inside the 3D moving LV, computed 3D intra-ventricular velocity fields as a solution of the forward problem, projected the 3D velocity fields onto the imaging plane and took the inner product of the 2D velocity fields on the imaging plane with the scanline-directional velocity fields to obtain synthetic scanline-directional projected velocities at each position. The proposed method utilized the 2D synthetic projected velocity data for reconstructing LV blood flow. By computing the difference between the synthetic and reconstructed flow fields, we obtained average point-wise errors of 0.06 m/s and 0.02 m/s for the u- and v-components, respectively.
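In generic notation (the paper's exact formulation may differ in details), the model augments the 2D incompressible Navier-Stokes equations with a mass source s that absorbs the out-of-plane flow crossing the imaging plane:

```latex
\begin{aligned}
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
 &= -\frac{1}{\rho}\nabla p + \nu\,\nabla^{2}\mathbf{u},\\
\nabla\cdot\mathbf{u} &= s(\mathbf{x},t),
\end{aligned}
```

where u is the in-plane velocity, p the pressure and ν the kinematic viscosity; setting s = 0 recovers the standard divergence-free case, so s carries exactly the through-plane contribution that a purely 2D model would otherwise misattribute.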
Bourantas, Christos V; Kourtis, Iraklis C; Plissiti, Marina E; Fotiadis, Dimitrios I; Katsouras, Christos S; Papafaklis, Michail I; Michalis, Lampros K
2005-12-01
The aim of this study is to describe a new method for the three-dimensional reconstruction of coronary arteries and its quantitative validation. Our approach is based on the fusion of the data provided by intravascular ultrasound images (IVUS) and biplane angiographies. A specific segmentation algorithm is used for the detection of the regions of interest in intravascular ultrasound images. A new methodology is also introduced for the accurate extraction of the catheter path. In detail, a cubic B-spline is used for approximating the catheter path in each biplane projection. Each B-spline curve is swept along the normal direction of its X-ray angiographic plane forming a surface. The intersection of the two surfaces is a 3D curve, which represents the reconstructed path. The detected regions of interest in the IVUS images are placed perpendicularly onto the path and their relative axial twist is computed using the sequential triangulation algorithm. Then, an efficient algorithm is applied to estimate the absolute orientation of the first IVUS frame. In order to obtain 3D visualization the commercial package Geomagic Studio 4.0 is used. The performance of the proposed method is assessed using a validation methodology which addresses the separate validation of each step followed for obtaining the coronary reconstruction. The performance of the segmentation algorithm was examined in 80 IVUS images. The reliability of the path extraction method was studied in vitro using a metal wire model and in vivo in a dataset of 11 patients. The performance of the sequential triangulation algorithm was tested in two gutter models and in the coronary arteries (marked with metal clips) of six cadaveric sheep hearts. Finally, the accuracy in the estimation of the first IVUS frame absolute orientation was examined in the same set of cadaveric sheep hearts. The obtained results demonstrate that the proposed reconstruction method is reliable and capable of depicting the morphology of
An Iterative Method for Improving the Quality of Reconstruction of a Three-Dimensional Surface
Vishnyakov, G.N.; Levin, G.G.; Sukhorukov, K.A.
2005-12-15
A complex image with constraints imposed on the amplitude and phase image components is processed using the Gerchberg iterative algorithm for the first time. The use of the Gerchberg iterative algorithm makes it possible to improve the quality of a three-dimensional surface profile reconstructed by the previously proposed method that is based on the multiangle projection of fringes and the joint processing of the obtained images by Fourier synthesis.
Free cross leg flap as a method of reconstruction of soft tissues defects.
Tvrdek, M; Pros, Z; Nejedlý, A; Kletenský, J; Stehlík, J
1995-01-01
The authors demonstrate on clinical cases the possibility of reconstructing soft tissues of the leg by the use of the so-called free cross-leg flap. This method appears to be convenient in cases where there are no suitable recipient vessels within reach on the injured extremity and where the defect is of such an extent that the conventional cross-leg flap does not provide a sufficient amount of tissue. PMID:7653169
Four-dimensional cone beam CT reconstruction and enhancement using a temporal nonlocal means method
Jia Xun; Tian Zhen; Lou Yifei; Sonke, Jan-Jakob; Jiang, Steve B.
2012-09-15
Purpose: Four-dimensional cone beam computed tomography (4D-CBCT) has been developed to provide respiratory phase-resolved volumetric imaging in image guided radiation therapy. Conventionally, it is reconstructed by first sorting the x-ray projections into multiple respiratory phase bins according to a breathing signal extracted either from the projection images or from some external surrogate, and then reconstructing a 3D CBCT image in each phase bin independently using the FDK algorithm. This method requires an adequate number of projections for each phase, which can be achieved using a slow gantry rotation or multiple gantry rotations. An inadequate number of projections in each phase bin results in low-quality 4D-CBCT images with obvious streaking artifacts. 4D-CBCT images at different breathing phases share a great deal of redundant information, because they represent the same anatomy captured at slightly different temporal points. Taking this redundancy along the temporal dimension into account can in principle facilitate the reconstruction when the number of projection images is inadequate. In this work, the authors propose two novel 4D-CBCT algorithms: an iterative reconstruction algorithm and an enhancement algorithm, utilizing a temporal nonlocal means (TNLM) method. Methods: The authors define a TNLM energy term for a given set of 4D-CBCT images. Minimization of this term favors those 4D-CBCT images in which any anatomical feature at one spatial point at one phase can be found at a nearby spatial point at neighboring phases. 4D-CBCT reconstruction is achieved by minimizing a total energy containing a data fidelity term and the TNLM energy term. As for the image enhancement, 4D-CBCT images generated by the FDK algorithm are enhanced by minimizing the TNLM function while keeping the enhanced images close to the FDK results. A forward-backward splitting algorithm and a Gauss-Jacobi iteration method are employed to solve the problems. The algorithms implementation on
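Schematically, and in generic nonlocal-means notation rather than the paper's exact definitions, the TNLM energy ties each voxel at phase t to similar voxels at the neighboring phases:

```latex
E_{\mathrm{TNLM}}(f) \;=\; \sum_{t} \sum_{\mathbf{x}} \sum_{\mathbf{y}\in \mathcal{N}(\mathbf{x})}
w_{t}\bigl(\mathbf{x},\mathbf{y}\bigr)\,\bigl[f_{t}(\mathbf{x}) - f_{t+1}(\mathbf{y})\bigr]^{2},
```

where f_t is the image at phase t, N(x) a spatial search window, and the weights w_t measure patch similarity as in standard nonlocal means; the reconstruction then minimizes data fidelity plus this coupling term, which is what lets sparse projections at one phase borrow information from the neighboring phases.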
NASA Astrophysics Data System (ADS)
Li, Rong; Zhao, Feng
2015-10-01
Solar-induced chlorophyll fluorescence is closely related to photosynthesis and can serve as an indicator of plant status. Several methods have been proposed to retrieve the fluorescence signal (Fs), either at specific spectral bands or within the whole fluorescence emission region. In this study, we investigated the precision of the fluorescence signal obtained through these methods under various sensor spectral characteristics. Simulated datasets generated by the SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes) model with known `true' Fs, as well as an experimental dataset, are exploited to investigate four commonly used Fs retrieval methods, namely the original Fraunhofer Line Discriminator method (FLD), the 3-band FLD (3FLD), the improved FLD (iFLD), and the Spectral Fitting Methods (SFMs). The Fluorescence Spectrum Reconstruction (FSR) method is also investigated using the simulated datasets. The sensor characteristics of spectral resolution (SR) and signal-to-noise ratio (SNR) are taken into account. According to the results, finer SR and higher SNR both lead to better accuracy. The lowest precision is obtained with the FLD method, which shows strong overestimation. Some improvement is achieved by the 3FLD method, but it still tends to overestimate. Generally, the iFLD method and the SFMs provide better accuracy. As for FSR, the shape and magnitude of the reconstructed Fs are generally consistent with the `true' Fs distributions when fine SR is exploited. With coarser SR, however, although the R2 of the retrieved Fs may be high, a large bias is likely to be obtained as well.
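The FLD principle underlying several of these methods admits a compact illustration. The band values and the assumption of constant reflectance and fluorescence across the in/out bands are simplifications for this sketch, not details taken from the abstract.

```python
def fld(e_in, e_out, l_in, l_out):
    """Standard Fraunhofer Line Discriminator (FLD) estimate of fluorescence.

    Inside a Fraunhofer line the incident irradiance E drops sharply while the
    fluorescence emission F does not, so measuring irradiance E and upwelling
    radiance L just inside ('in') and just outside ('out') the line separates
    F from reflected light. Assuming reflectance r and fluorescence F are
    constant across the two bands, L = r*E + F in each band, hence
        F = (E_out * L_in - E_in * L_out) / (E_out - E_in)
    """
    return (e_out * l_in - e_in * l_out) / (e_out - e_in)
```

With synthetic bands satisfying the constant-r, constant-F assumption exactly (say r = 0.3, F = 1.0, E_in = 20, E_out = 100), the estimate is exact; the overestimation reported for FLD arises precisely when those assumptions break down across real bands.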
Ray, J.; Lee, J.; Yadav, V.; Lefantzi, S.; Michalak, A. M.; van Bloemen Waanders, B.
2015-04-29
Atmospheric inversions are frequently used to estimate fluxes of atmospheric greenhouse gases (e.g., biospheric CO2 flux fields) at Earth's surface. These inversions typically assume that flux departures from a prior model are spatially smoothly varying, and model them using a multi-variate Gaussian. When the field being estimated is spatially rough, multi-variate Gaussian models are difficult to construct and a wavelet-based field model may be more suitable. Unfortunately, such models are very high dimensional and are most conveniently used when the estimation method can simultaneously perform data-driven model simplification (removal of model parameters that cannot be reliably estimated) and fitting. Such sparse reconstruction methods are typically not used in atmospheric inversions. In this work, we devise a sparse reconstruction method and illustrate it in an idealized atmospheric inversion problem for the estimation of fossil fuel CO2 (ffCO2) emissions in the lower 48 states of the USA. Our new method is based on stagewise orthogonal matching pursuit (StOMP), a method used to reconstruct compressively sensed images. Our adaptations bestow three properties on the sparse reconstruction procedure that are useful in atmospheric inversions. We have modified StOMP to incorporate prior information on the emission field being estimated and to enforce non-negativity on the estimated field. Finally, though based on wavelets, our method allows for the estimation of fields in non-rectangular geometries, e.g., emission fields inside geographical and political boundaries. Our idealized inversions use a recently developed multi-resolution (i.e., wavelet-based) random field model developed for ffCO2 emissions and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also
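A minimal sketch of stagewise orthogonal matching pursuit, without the authors' prior-information and non-negativity adaptations, looks like this; the threshold factor and stage count are hypothetical defaults, not values from the paper.

```python
import numpy as np

def stomp(A, y, n_stages=10, t=2.0):
    """Stagewise orthogonal matching pursuit (StOMP) sketch.

    Each stage correlates the current residual with all columns of A, keeps
    every column whose correlation exceeds t * sigma (sigma being the formal
    noise level ||r|| / sqrt(m)), merges the new columns into the active set,
    and refits the active coefficients by least squares.
    """
    m, n = A.shape
    support = np.zeros(n, dtype=bool)
    x = np.zeros(n)
    r = y.copy()
    for _ in range(n_stages):
        corr = A.T @ r
        sigma = np.linalg.norm(r) / np.sqrt(m)
        new = (np.abs(corr) > t * sigma) & ~support
        if not new.any() or np.linalg.norm(r) < 1e-12:
            break
        support |= new
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(n)
        x[support] = x_s
        r = y - A @ x
    return x
```

Because several columns can enter the active set per stage, StOMP needs far fewer stages than one-atom-at-a-time matching pursuit, which is why it suits the high-dimensional wavelet models described above.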
Quantitative comparison of reconstruction methods for intra-voxel fiber recovery from diffusion MRI.
Daducci, Alessandro; Canales-Rodríguez, Erick Jorge; Descoteaux, Maxime; Garyfallidis, Eleftherios; Gur, Yaniv; Lin, Ying-Chia; Mani, Merry; Merlet, Sylvain; Paquette, Michael; Ramirez-Manzanares, Alonso; Reisert, Marco; Reis Rodrigues, Paulo; Sepehrband, Farshid; Caruyer, Emmanuel; Choupan, Jeiran; Deriche, Rachid; Jacob, Mathews; Menegaz, Gloria; Prčkovska, Vesna; Rivera, Mariano; Wiaux, Yves; Thiran, Jean-Philippe
2014-02-01
Validation is arguably the bottleneck in the diffusion magnetic resonance imaging (MRI) community. This paper evaluates and compares 20 algorithms for recovering the local intra-voxel fiber structure from diffusion MRI data and is based on the results of the "HARDI reconstruction challenge" organized in the context of the "ISBI 2012" conference. The evaluated methods encompass a mixture of classical techniques well known in the literature, such as diffusion tensor, Q-Ball and diffusion spectrum imaging, algorithms inspired by the recent theory of compressed sensing, and brand new approaches proposed for the first time at this contest. To quantitatively compare the methods under controlled conditions, two datasets with known ground truth were synthetically generated, and two main criteria were used to evaluate the quality of the reconstructions in every voxel: correct assessment of the number of fiber populations and angular accuracy in their orientation. This comparative study investigates the behavior of every algorithm with varying experimental conditions and highlights the strengths and weaknesses of each approach. This information can be useful not only for enhancing current algorithms and developing the next generation of reconstruction methods, but also for assisting physicians in the choice of the most adequate technique for their studies. PMID:24132007
A Data-Driven Method for Building Reconstruction from LiDAR Point Clouds
NASA Astrophysics Data System (ADS)
Sajadian, M.; Arefi, H.
2014-10-01
Airborne laser scanning, commonly referred to as LiDAR, is a superior technology for three-dimensional data acquisition from Earth's surface with high speed and density. Building reconstruction, one of the main applications of LiDAR, is considered in this study. For a 3D reconstruction of the buildings, the building points must first be separated from other points, such as ground and vegetation. In this paper, a multi-agent strategy has been proposed for simultaneous extraction and segmentation of buildings from LiDAR point clouds. Height values, number of returned pulses, length of triangles, direction of normal vectors, and area are the five criteria utilized in this step. Next, the building edge points are detected using a new method named "Grid Erosion". A RANSAC-based technique has been employed for edge line extraction, and regularization constraints are applied to achieve the final lines. Finally, by modelling the roofs and walls, the 3D building model is reconstructed. The results indicate that the proposed method can successfully extract buildings from LiDAR data and generate building models automatically. A qualitative and quantitative assessment of the proposed method is also provided.
Sadigursky, David; Gobbi, Riccardo Gomes; Pereira, César Augusto Martins; Pécora, José Ricardo; Camanho, Gilberto Luis
2015-01-01
Objective: To present a biomechanical device for evaluating medial patellofemoral ligament (MPFL) reconstruction and its isometricity. Methods: An accessible biomechanical method was developed that allowed application of physiological and non-physiological forces to the knee using a mechanical arm and application of weights and counterweights, so as to enable many different evaluations, with a very accurate measurement system for distances between different structures for analysis in experiments. This article describes the assembly of this system and suggests some practical applications. Six cadaver knees were studied. The knees were prepared in a testing machine developed at the Biomechanics Laboratory of IOT–HCFMUSP, which allowed dynamic evaluation of patellar behavior, with quantification of patellar lateralization between 0° and 120°. The differences between the distances found with and without load applied to the patella were grouped according to the graft fixation angle (0°, 30°, 60° or 90°) and knee position (intact, damaged or reconstructed). Results: There was a tendency for smaller lateral displacement to occur at fixation angles greater than 30° of flexion, especially between 45° and 60° of flexion, after the reconstruction. For the other angles, there was no statistical significance. Conclusion: The method developed is a useful tool for studies on the patellofemoral joint and the MPFL, and has a very accurate measurement system for distances between different structures. It can be used in institutions with fewer resources available. PMID:27047872
The Pixon Method for Data Compression, Image Classification, and Image Reconstruction
NASA Technical Reports Server (NTRS)
Puetter, Richard; Yahil, Amos
2002-01-01
As initially proposed, this program had three goals: (1) continue to develop the highly successful Pixon method for image reconstruction and support other scientists in implementing this technique for their applications; (2) develop image compression techniques based on the Pixon method; and (3) develop artificial intelligence algorithms for image classification based on the Pixon approach for simplifying neural networks. Subsequent to proposal review, the scope of the program was greatly reduced and it was decided to investigate the ability of the Pixon method to provide superior restorations of images compressed with standard image compression schemes, specifically JPEG-compressed images.
NASA Astrophysics Data System (ADS)
Nara, T.; Koiwa, K.; Takagi, S.; Oyama, D.; Uehara, G.
2014-05-01
This paper presents an algebraic reconstruction method for dipole-quadrupole sources using magnetoencephalography data. Compared to conventional methods based on the equivalent current dipole source model, our method can more accurately reconstruct two close, oppositely directed sources. Numerical simulations show that two sources on both sides of the longitudinal fissure of the cerebrum are stably estimated. The method is verified using a quadrupolar source phantom, which is composed of two isosceles-triangle coils with parallel bases.
Cell Cycle Gene Networks Are Associated with Melanoma Prognosis
Watkins, Wendy; Araki, Hiromitsu; Tamada, Yoshinori; Muthukaruppan, Anita; Ranjard, Louis; Derkac, Eliane; Imoto, Seiya; Miyano, Satoru; Crampin, Edmund J.; Print, Cristin G.
2012-01-01
Background Our understanding of the molecular pathways that underlie melanoma remains incomplete. Although several published microarray studies of clinical melanomas have provided valuable information, we found only limited concordance between these studies. Therefore, we took an in vitro functional genomics approach to understand melanoma molecular pathways. Methodology/Principal Findings Affymetrix microarray data were generated from A375 melanoma cells treated in vitro with siRNAs against 45 transcription factors and signaling molecules. Analysis of these data using unsupervised hierarchical clustering and Bayesian gene networks identified proliferation-associated RNA clusters, which were co-ordinately expressed across the A375 cells and also across melanomas from patients. The abundance in metastatic melanomas of these cellular proliferation clusters and their putative upstream regulators was significantly associated with patient prognosis. An 8-gene classifier derived from gene network hub genes correctly classified the prognosis of 23/26 metastatic melanoma patients in a cross-validation study. Unlike the RNA clusters associated with cellular proliferation described above, co-ordinately expressed RNA clusters associated with immune response were clearly identified across melanoma tumours from patients but not across the siRNA-treated A375 cells, in which immune responses are not active. Three uncharacterised genes, which the gene networks predicted to be upstream of apoptosis- or cellular proliferation-associated RNAs, were found to significantly alter apoptosis and cell number when over-expressed in vitro. Conclusions/Significance This analysis identified co-expression of RNAs that encode functionally-related proteins, in particular, proliferation-associated RNA clusters that are linked to melanoma patient prognosis. Our analysis suggests that A375 cells in vitro may be valid models in which to study the gene expression modules that underlie some melanoma
NASA Astrophysics Data System (ADS)
Geach, M. R.; Stokes, M.; Telfer, M. W.; Mather, A. E.; Fyfe, R. M.; Lewin, S.
2014-07-01
Erosional landform features and their associated sedimentary assemblages (river terraces) often provide important records of long-term landscape evolution. However, the methods available for spatial representation of such records are typically limited to the generation of two-dimensional transects (valley long profiles and cross sections). Such transects limit the full quantification of system responses in a three-dimensional landscape (e.g., the identification of spatial changes in net sediment flux within a hydrological basin). The purpose of this paper is to explore the use of geospatial interpolation methods in the reconstruction of Quaternary landform records. This approach enables more precise quantification of terrace landform records at a range of spatial scales (from a single river reach to geological basin scales). Here we use a case study from the Tabernas basin in SE Spain to test the applicability of multiple methods of geospatial interpolation in the reconstruction of Quaternary landforms (river terrace and alluvial fan remnants). We take steps to (1) refine the terrace data sets and the methods of technique application in order to reduce modelling errors, and (2) highlight the requirements for assessing interpolation method suitability when modelling highly fragmented landform records. The results from our study show that the performance of interpolation methods varies considerably and is dependent upon the data modelled. Method performance is primarily controlled by the inherent geomorphological characteristics (surface morphology and elevation) of the data; however, the attributes of data structure are also significant. We further identify the importance of predefined model parameters (e.g., search radius) to technique performance, increasing the appreciation of these commonly neglected variables in such studies. Ultimately, the overall applicability of the interpolation process is evidenced by the close correlation of surface volume
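As one concrete instance of the kind of interpolation-method assessment described here, a sketch of inverse-distance weighting with leave-one-out error scoring follows; the sample coordinates and elevations are hypothetical, and the study evaluates a wider range of interpolators.

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    """Inverse-distance-weighted (IDW) interpolation of scattered elevations."""
    out = np.empty(len(xy_query))
    for k, q in enumerate(xy_query):
        d = np.linalg.norm(xy_known - q, axis=1)
        if np.any(d == 0):                      # query coincides with a sample
            out[k] = z_known[np.argmin(d)]
        else:
            w = 1.0 / d ** power
            out[k] = np.sum(w * z_known) / np.sum(w)
    return out

def loo_rmse(xy, z, power=2.0):
    """Leave-one-out RMSE: predict each sample from all the others.

    Comparing this score across interpolators (IDW exponents, kriging,
    splines, ...) is one simple way to rank method performance on a given
    fragmented landform data set.
    """
    errs = []
    for i in range(len(z)):
        mask = np.arange(len(z)) != i
        zi = idw(xy[mask], z[mask], xy[i:i + 1], power)[0]
        errs.append((zi - z[i]) ** 2)
    return float(np.sqrt(np.mean(errs)))
```

IDW reproduces the samples exactly at their own locations, so the leave-one-out score, not the fit at the samples, is what discriminates between parameter choices such as the power exponent or search radius.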
Benazzi, S; Stansfield, E; Milani, C; Gruppioni, G
2009-07-01
The process of forensic identification of missing individuals is frequently reliant on the superimposition of cranial remains onto an individual's picture and/or facial reconstruction. In the latter, the integrity of the skull or cranium is an important factor in successful identification. Here, we recommend the use of computerized virtual reconstruction and geometric morphometrics for the purposes of individual reconstruction and identification in forensics. We apply these methods to reconstruct a complete cranium from facial remains that allegedly belong to the famous Italian humanist of the fifteenth century, Angelo Poliziano (1454-1494). Raw data were obtained by computed tomography scans of the Poliziano face and a complete reference skull of a 37-year-old Italian male. Given that the amount of distortion of the facial remains is unknown, two reconstructions are proposed: the first calculates the average shape between the original and its reflection, and the second discards the less preserved left side of the cranium under the assumption that there is no deformation on the right. Both reconstructions perform well in the superimposition with the original preserved facial surface in a virtual environment. The reconstruction by means of averaging between the original and its reflection yielded better results during the superimposition with portraits of Poliziano. We argue that the combination of computerized virtual reconstruction and geometric morphometric methods offers a number of advantages over traditional plastic reconstruction, among which are speed, reproducibility, ease of manipulation when superimposing with pictures in a virtual environment, and control of assumptions. PMID:19294402
Setterbo, Jacob J.; Chau, Anh; Fyhrie, Patricia B.; Hubbard, Mont; Upadhyaya, Shrini K.; Symons, Jennifer E.; Stover, Susan M.
2012-01-01
Background Racetrack surface is a risk factor for racehorse injuries and fatalities. Current research indicates that race surface mechanical properties may be influenced by material composition, moisture content, temperature, and maintenance. Race surface mechanical testing in a controlled laboratory setting would allow for objective evaluation of dynamic properties of surface and factors that affect surface behavior. Objective To develop a method for reconstruction of race surfaces in the laboratory and validate the method by comparison with racetrack measurements of dynamic surface properties. Methods Track-testing device (TTD) impact tests were conducted to simulate equine hoof impact on dirt and synthetic race surfaces; tests were performed both in situ (racetrack) and using laboratory reconstructions of harvested surface materials. Clegg Hammer in situ measurements were used to guide surface reconstruction in the laboratory. Dynamic surface properties were compared between in situ and laboratory settings. Relationships between racetrack TTD and Clegg Hammer measurements were analyzed using stepwise multiple linear regression. Results Most dynamic surface property setting differences (racetrack-laboratory) were small relative to surface material type differences (dirt-synthetic). Clegg Hammer measurements were more strongly correlated with TTD measurements on the synthetic surface than the dirt surface. On the dirt surface, Clegg Hammer decelerations were negatively correlated with TTD forces. Conclusions Laboratory reconstruction of racetrack surfaces guided by Clegg Hammer measurements yielded TTD impact measurements similar to in situ values. The negative correlation between TTD and Clegg Hammer measurements confirms the importance of instrument mass when drawing conclusions from testing results. Lighter impact devices may be less appropriate for assessing dynamic surface properties compared to testing equipment designed to simulate hoof impact (TTD
Wisdom of crowds for robust gene network inference.
Marbach, Daniel; Costello, James C; Küffner, Robert; Vega, Nicole M; Prill, Robert J; Camacho, Diogo M; Allison, Kyle R; Kellis, Manolis; Collins, James J; Stolovitzky, Gustavo
2012-08-01
Reconstructing gene regulatory networks from high-throughput data is a long-standing challenge. Through the Dialogue on Reverse Engineering Assessment and Methods (DREAM) project, we performed a comprehensive blind assessment of over 30 network inference methods on Escherichia coli, Staphylococcus aureus, Saccharomyces cerevisiae and in silico microarray data. We characterize the performance, data requirements and inherent biases of different inference approaches, and we provide guidelines for algorithm application and development. We observed that no single inference method performs optimally across all data sets. In contrast, integration of predictions from multiple inference methods shows robust and high performance across diverse data sets. We thereby constructed high-confidence networks for E. coli and S. aureus, each comprising ~1,700 transcriptional interactions at a precision of ~50%. We experimentally tested 53 previously unobserved regulatory interactions in E. coli, of which 23 (43%) were supported. Our results establish community-based methods as a powerful and robust tool for the inference of transcriptional gene regulatory networks. PMID:22796662
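The community-integration idea above, combining edge predictions from multiple inference methods, can be sketched by rank averaging; the edge scores below are hypothetical, and the DREAM analysis uses more elaborate scoring and evaluation.

```python
import numpy as np

def rank_average(score_lists):
    """Integrate edge predictions from several network-inference methods.

    Each method's confidence scores are converted to per-method ranks
    (higher score -> higher rank), and the ranks are averaged across
    methods, so no single method's score scale dominates the consensus.
    """
    scores = np.asarray(score_lists, dtype=float)
    ranks = scores.argsort(axis=1).argsort(axis=1)  # 0 = lowest score
    return ranks.mean(axis=0)

# Confidence scores for four candidate regulatory edges from three
# (hypothetical) inference methods.
m1 = [0.9, 0.1, 0.5, 0.2]
m2 = [0.8, 0.3, 0.4, 0.1]
m3 = [0.4, 0.9, 0.6, 0.3]
community = rank_average([m1, m2, m3])
```

Here the first edge tops the consensus because two of the three methods rank it highest, even though the third method scores it poorly; that robustness to individual-method bias is the "wisdom of crowds" effect the paper quantifies.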
A method of dose reconstruction for moving targets compatible with dynamic treatments
Poulsen, Per Rugaard; Schmidt, Mai Lykkegaard; Keall, Paul; Worm, Esben Schjødt; Fledelius, Walther; Hoffmann, Lone
2012-01-01
Purpose: To develop a method that allows a commercial treatment planning system (TPS) to perform accurate dose reconstruction for rigidly moving targets and to validate the method in phantom measurements for a range of treatments including intensity modulated radiation therapy (IMRT), volumetric arc therapy (VMAT), and dynamic multileaf collimator (DMLC) tracking. Methods: An in-house computer program was developed to manipulate Dicom treatment plans exported from a TPS (Eclipse, Varian Medical Systems) such that target motion during treatment delivery was incorporated into the plans. For each treatment, a motion-including plan was generated by dividing the intratreatment target motion into 1 mm position bins and constructing sub-beams that represented the parts of the treatment delivered while the target was located within each position bin. For each sub-beam, the target shift was modeled by a corresponding isocenter shift. The motion-including Dicom plans were reimported into the TPS, where dose calculation resulted in motion-including target dose distributions. For experimental validation of the dose reconstruction, a thorax phantom with a movable lung-equivalent rod with a tumor insert of solid water was first CT scanned. The tumor insert was delineated as a gross tumor volume (GTV), and a planning target volume (PTV) was formed by adding margins. A conformal plan, two IMRT plans (step-and-shoot and sliding window), and a VMAT plan were generated giving minimum target doses of 95% (GTV) and 67% (PTV) of the prescription dose (3 Gy). Two conformal fields with MLC leaves perpendicular and parallel to the tumor motion, respectively, were generated for DMLC tracking. All treatment plans were delivered to the thorax phantom without tumor motion and with a sinusoidal tumor motion. The two conformal fields were delivered with and without portal image guided DMLC tracking based on an embedded gold marker. The target dose distribution was measured with a
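The binning step described above, dividing the intratreatment motion trace into 1 mm position bins whose occupancy fractions weight the isocenter-shifted sub-beams, can be sketched as follows; the motion trace below is hypothetical, and the delivery-time bookkeeping in the actual program is necessarily more detailed.

```python
import numpy as np

def position_bin_weights(trace_mm, bin_width=1.0):
    """Fraction of treatment time spent in each target-position bin.

    trace_mm: target positions (mm) sampled uniformly in time. Each bin's
    weight corresponds to the monitor-unit fraction assigned to the
    sub-beam whose isocenter is shifted by that bin's position.
    """
    lo = np.floor(np.min(trace_mm) / bin_width) * bin_width
    hi = np.ceil(np.max(trace_mm) / bin_width) * bin_width
    edges = np.arange(lo, hi + bin_width, bin_width)
    counts, _ = np.histogram(trace_mm, bins=edges)
    return edges, counts / len(trace_mm)

# A sinusoidal trace like the phantom motion used in the validation.
trace = 5.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, 1000))
edges, weights = position_bin_weights(trace)
```

For sinusoidal motion the weights pile up near the extremes of the excursion (where the target dwells longest), which is exactly why the motion-including dose distribution differs most from the static one at the field edges.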
NASA Astrophysics Data System (ADS)
Paget, A. C.; Brodzik, M. J.; Gotberg, J.; Hardman, M.; Long, D. G.
2014-12-01
Spanning over 35 years of Earth observations, satellite passive microwave sensors have generated a near-daily, multi-channel brightness temperature record of observations. Critical to describing and understanding Earth system hydrologic and cryospheric parameters, data products derived from the passive microwave record include precipitation, soil moisture, surface water, vegetation, snow water equivalent, sea ice concentration and sea ice motion. While swath data are valuable to oceanographers due to the temporal scales of ocean phenomena, gridded data are more valuable to researchers interested in derived parameters at fixed locations through time and are widely used in climate studies. We are applying recent developments in image reconstruction methods to produce a systematically reprocessed historical time series NASA MEaSUREs Earth System Data Record, at higher spatial resolutions than have previously been available, for the entire SMMR, SSM/I-SSMIS and AMSR-E record. We take advantage of recently released, recalibrated SSM/I-SSMIS swath format Fundamental Climate Data Records. Our presentation will compare and contrast the two candidate image reconstruction techniques we are evaluating: Backus-Gilbert (BG) interpolation and a radiometer version of Scatterometer Image Reconstruction (SIR). Both BG and SIR use regularization to trade off noise and resolution. We discuss our rationale for the respective algorithm parameters we have selected, compare results and computational costs, and include prototype SSM/I images at enhanced resolutions of up to 3 km. We include a sensitivity analysis for estimating sensor measurement response functions critical to both methods.
Research on image matching method of big data image of three-dimensional reconstruction
NASA Astrophysics Data System (ADS)
Zhang, Chunsen; Qiu, Zhenguo; Zhu, Shihuan; Wang, Xiqi; Xu, Xiaolei; Zhong, Sidong
2015-12-01
Image matching is the core step of three-dimensional reconstruction. With the development of computer processing technology, retrieving images to be matched from large image data sets acquired in different formats, at different scales, and at different locations places new demands on image matching. To enable three-dimensional reconstruction based on image matching over big data image sets, this paper puts forward a new, effective matching method based on the visual bag-of-words model. The main technologies include building the bag-of-words model and image matching. First, we extract SIFT feature points from the images in the database and cluster the feature descriptors to generate the bag-of-words model. We then establish inverted files based on the bag of words; the inverted files list, for each visual word, all images containing that word. Matching is performed only among images that share the same words, which improves the efficiency of image matching. Finally, we build the three-dimensional model from the matched images. Experimental results indicate that this method improves matching efficiency and is suitable for the requirements of large-scale data reconstruction.
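The pipeline (cluster descriptors into a visual vocabulary, build an inverted file, match via shared words) can be sketched with toy 2-D descriptors standing in for 128-D SIFT vectors; all data, the vocabulary size, and the k-means initialization below are hypothetical.

```python
import numpy as np

def kmeans(points, init, n_iter=10):
    """A few Lloyd iterations to build a tiny visual vocabulary."""
    centers = init.copy().astype(float)
    for _ in range(n_iter):
        d = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for k in range(len(centers)):
            if (labels == k).any():
                centers[k] = points[labels == k].mean(axis=0)
    return centers

def quantize(descs, centers):
    """Map each descriptor to its nearest visual word; return the word set."""
    d = np.linalg.norm(descs[:, None] - centers[None], axis=2)
    return set(d.argmin(axis=1).tolist())

# Toy descriptors standing in for SIFT features (hypothetical data).
images = {
    "img1": np.array([[0.0, 0.0], [0.1, 0.0], [10.0, 10.0]]),
    "img2": np.array([[0.2, 0.1], [0.3, 0.0]]),
    "img3": np.array([[5.0, -5.0], [5.1, -5.2]]),
}
all_pts = np.vstack(list(images.values()))
vocab = kmeans(all_pts, init=np.array([[0.0, 0.0], [10.0, 10.0], [5.0, -5.0]]))

# Inverted file: visual word -> images containing it.
inverted = {}
for name, descs in images.items():
    for w in quantize(descs, vocab):
        inverted.setdefault(w, set()).add(name)

def match(query_descs):
    """Score candidate images by the number of shared visual words."""
    votes = {}
    for w in quantize(query_descs, vocab):
        for name in inverted.get(w, ()):
            votes[name] = votes.get(name, 0) + 1
    return max(votes, key=votes.get)
```

The inverted file is what makes this scale: a query only touches the images listed under its own words instead of being compared against every image in the database.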
A novel reconstruction method for giant incisional hernia: Hybrid laparoscopic technique
Ozturk, G; Malya, FU; Ersavas, C; Ozdenkaya, Y; Bektasoglu, H; Cipe, G; Citgez, B; Karatepe, O
2015-01-01
BACKGROUND AND OBJECTIVES: Laparoscopic reconstruction of ventral hernia is a popular technique today. Patients with large defects present various difficulties for the laparoscopic approach. In this study, we aimed to present a new reconstruction technique that combines the laparoscopic and open approaches in giant incisional hernias. MATERIALS AND METHODS: Between January 2006 and August 2012, 28 consecutive patients operated on for incisional hernia with a defect size over 10 cm were included in this study and separated into two groups. Group 1 (n = 12) comprises patients operated on with the standard laparoscopic approach, whereas group 2 (n = 16) comprises those treated with the laparoscopic technique combined with an open approach. Patients were evaluated in terms of age, gender, body mass index (BMI), mean operation time, length of hospital stay, surgical site infection (SSI) and recurrence rate. RESULTS: There were 12 patients in group 1 and 16 patients in group 2. Mean length of hospital stay and SSI rates were similar in both groups. Postoperative seroma formation was observed in six patients in group 1 and in only one patient in group 2. Group 1 had one patient who suffered a recurrence, whereas group 2 had no recurrence. DISCUSSION: The laparoscopic technique combined with an open approach may safely be used as an alternative method for reconstruction of giant incisional hernias. PMID:26622118
Jiang, Q X; Chester, D W; Sigworth, F J
2001-01-01
We propose a new method for single-particle reconstruction, which should be generally applicable to structure determination for membrane proteins. After reconstitution into a small spherical vesicle, a membrane protein takes a particular orientation relative to the membrane normal, and its position in the projected image of the vesicle directly defines two of its three Euler angles of orientation. The spherical constraint imposed by the vesicle effectively reduces the dimensionality of the alignment search from 5 to 3 and simplifies the detection of the particle. Projection images of particles in vesicles collectively take all possible orientations and therefore cover the whole Fourier space. Analysis of images of vesicles in ice showed that the vesicle density is well described by a simple model for membrane electron scattering density. In fitting this model we found that osmotically swollen vesicles remain nearly spherical through the freezing process. These results satisfy the basic experimental requirements for spherical reconstruction. A computer simulation of particles in vesicles showed that this method provides good estimates of the two Euler angles and thus may improve single-particle reconstruction and extend it to smaller membrane proteins. PMID:11472084
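The geometric core of the spherical constraint can be sketched directly: a particle's in-plane offset from the vesicle center fixes the tilt and azimuth of the local membrane normal, up to the near/far hemisphere ambiguity. The formulas below follow from simple sphere geometry and are an illustration, not the authors' implementation.

```python
import math

def euler_from_position(x, y, R):
    """Two Euler angles of the membrane normal from a particle's position
    in the projected vesicle image (vesicle center at the origin).

    For a particle at in-plane radius r = sqrt(x^2 + y^2) on a spherical
    vesicle of radius R, the normal's tilt relative to the beam axis is
    theta = asin(r / R) and its azimuth is phi = atan2(y, x). The near/far
    hemisphere ambiguity (theta vs pi - theta) must still be resolved
    during refinement, leaving only one in-plane angle to search.
    """
    r = math.hypot(x, y)
    if r > R:
        raise ValueError("particle lies outside the projected vesicle")
    return math.asin(r / R), math.atan2(y, x)
```

This is the sense in which the vesicle reduces the alignment search from five parameters to three: two of the three Euler angles come directly from the detected position.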
Flexible 3D reconstruction method based on phase-matching in multi-sensor system.
Wu, Qingyang; Zhang, Baichun; Huang, Jinhui; Wu, Zejun; Zeng, Zeng
2016-04-01
Considering the measuring range limitation of a single sensor system, multi-sensor systems have become essential for obtaining complete image information of the object in the field of 3D image reconstruction. However, in traditional multi-sensor systems the sensors work independently, so each sensor system has to be calibrated separately, and the calibration between all the single-sensor systems is complicated and time-consuming. In this paper, we present a flexible 3D reconstruction method based on phase-matching in a multi-sensor system. While calibrating each sensor, the method simultaneously registers the data of the multi-sensor system in a unified coordinate system. After all sensors are calibrated, the whole 3D image data directly exist in the unified coordinate system, and there is no need to calibrate the positions between sensors any more. Experimental results prove that the method is simple in operation, accurate in measurement, and fast in 3D image reconstruction. PMID:27137020
Modeling DNA sequence-based cis-regulatory gene networks.
Bolouri, Hamid; Davidson, Eric H
2002-06-01
Gene network analysis requires computationally based models which represent the functional architecture of regulatory interactions, and which provide directly testable predictions. The type of model that is useful is constrained by the particular features of developmentally active cis-regulatory systems. These systems function by processing diverse regulatory inputs, generating novel regulatory outputs. A computational model which explicitly accommodates this basic concept was developed earlier for the cis-regulatory system of the endo16 gene of the sea urchin. This model represents the genetically mandated logic functions that the system executes, but also shows how time-varying kinetic inputs are processed in different circumstances into particular kinetic outputs. The same basic design features can be utilized to construct models that connect the large number of cis-regulatory elements constituting developmental gene networks. The ultimate aim of the network models discussed here is to represent the regulatory relationships among the genomic control systems of the genes in the network, and to state their functional meaning. The target site sequences of the cis-regulatory elements of these genes constitute the physical basis of the network architecture. Useful models for developmental regulatory networks must represent the genetic logic by which the system operates, but must also be capable of explaining the real time dynamics of cis-regulatory response as kinetic input and output data become available. Most importantly, however, such models must display in a direct and transparent manner fundamental network design features such as intra- and intercellular feedback circuitry; the sources of parallel inputs into each cis-regulatory element; gene battery organization; and use of repressive spatial inputs in specification and boundary formation. Successful network models lead to direct tests of key architectural features by targeted cis-regulatory analysis. PMID
An image reconstruction method from Fourier data with uncertainties on the spatial frequencies
NASA Astrophysics Data System (ADS)
Cornelio, Anastasia; Bonettini, Silvia; Prato, Marco
2013-10-01
In this paper the reconstruction of a two-dimensional image from a nonuniform sampling of its Fourier transform is considered, in the presence of uncertainties on the frequencies corresponding to the measured data. The problem therefore becomes one of blind deconvolution, in which the unknowns are both the image to be reconstructed and the exact frequencies. The availability of prior information on the image and the frequencies allows the problem to be reformulated as a constrained minimization of the least squares functional. A regularized solution of this optimization problem is achieved by early stopping of an alternating minimization scheme. In particular, a gradient projection method is employed at each step to compute an inexact solution of the minimization subproblems. The resulting algorithm is applied to some numerical examples arising in a real-world astronomical application.
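A toy version of the alternating scheme can be sketched with the frequency uncertainty simplified to a single common offset on a 1-D signal; here the signal subproblem is solved exactly by least squares rather than by the gradient projection iterations with early stopping used in the paper, and all problem sizes and grids are hypothetical.

```python
import numpy as np

def forward(freqs, delta, t):
    """Fourier-sampling matrix at the nominal frequencies shifted by delta."""
    return np.exp(-2j * np.pi * np.outer(freqs + delta, t))

def alternating_min(y, freqs, t, deltas, n_outer=5):
    """Alternate between the signal and the unknown common frequency offset.

    Signal step: with delta fixed, the model is linear, so x is obtained by
    least squares. Offset step: with x fixed, delta is chosen by a grid
    search over candidate offsets. Each step cannot increase the residual,
    mirroring the descent property of the alternating scheme in the paper.
    """
    delta = 0.0
    x = np.zeros(len(t), dtype=complex)
    for _ in range(n_outer):
        A = forward(freqs, delta, t)
        x, *_ = np.linalg.lstsq(A, y, rcond=None)
        errs = [np.linalg.norm(forward(freqs, d, t) @ x - y) for d in deltas]
        delta = float(deltas[int(np.argmin(errs))])
    return x, delta

# Hypothetical problem: 12 nominal frequencies, 8 signal samples, and data
# generated with a true offset of 0.02 that lies on the search grid.
t = np.arange(8)
freqs = np.linspace(0.0, 0.5, 12)
x_true = np.array([1.0, 0.0, 2.0, 0.0, 0.0, 0.5, 0.0, 0.0])
y = forward(freqs, 0.02, t) @ x_true
deltas = np.linspace(-0.05, 0.05, 11)
x_hat, d_hat = alternating_min(y, freqs, t, deltas)
```

In the paper the two unknowns are a 2-D image and per-datum frequency errors, and each subproblem is only solved inexactly; the alternation and the descent structure are what this sketch preserves.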
Method of producing nanopatterned articles using surface-reconstructed block copolymer films
Russell, Thomas P; Park, Soojin; Wang, Jia-Yu; Kim, Bokyung
2013-08-27
Nanopatterned surfaces are prepared by a method that includes forming a block copolymer film on a substrate, annealing and surface reconstructing the block copolymer film to create an array of cylindrical voids, depositing a metal on the surface-reconstructed block copolymer film, and heating the metal-coated block copolymer film to redistribute at least some of the metal into the cylindrical voids. When very thin metal layers and low heating temperatures are used, metal nanodots can be formed. When thicker metal layers and higher heating temperatures are used, the resulting metal structure includes nanoring-shaped voids. The nanopatterned surfaces can be transferred to the underlying substrates via etching, or used to prepare nanodot- or nanoring-decorated substrate surfaces.
Götz, Carolin; Warnke, Patrick H; Kolk, Andreas
2015-09-01
Musculoskeletal defects attributable to trauma or infection or as a result of oncologic surgery present a common challenge in reconstructive maxillofacial surgery. The autologous vascularized bone graft still represents the gold standard for salvaging these situations. Preoperative virtual planning offers great potential and provides assistance in reconstructive surgery. Nevertheless, the applicability of autologous bone transfer might be limited in medically compromised patients or by the complexity of the defect and the required size of the graft to be harvested. The development of alternative methods is urgently needed in the field of regenerative medicine to enable the regeneration of the original tissue. Since the first demonstration of de novo bone formation by regenerative strategies and the application of bone growth factors some decades ago, further progress has been achieved by tissue engineering, gene transfer, and stem cell application concepts. This review summarizes recent approaches and current developments in regenerative medicine. PMID:26297391
Bayesian network reconstruction using systems genetics data: comparison of MCMC methods.
Tasaki, Shinya; Sauerwine, Ben; Hoff, Bruce; Toyoshiba, Hiroyoshi; Gaiteri, Chris; Chaibub Neto, Elias
2015-04-01
Reconstructing biological networks using high-throughput technologies has the potential to produce condition-specific interactomes. But are these reconstructed networks a reliable source of biological interactions? Do some network inference methods offer dramatically improved performance on certain types of networks? To facilitate the use of network inference methods in systems biology, we report a large-scale simulation study comparing the ability of Markov chain Monte Carlo (MCMC) samplers to reverse engineer Bayesian networks. The MCMC samplers we investigated included foundational and state-of-the-art Metropolis-Hastings and Gibbs sampling approaches, as well as novel samplers we have designed. To enable a comprehensive comparison, we simulated gene expression and genetics data from known network structures under a range of biologically plausible scenarios. We examine the overall quality of network inference via different methods, as well as how their performance is affected by network characteristics. Our simulations reveal that network size, edge density, and strength of gene-to-gene signaling are major parameters that differentiate the performance of various samplers. Specifically, more recent samplers including our novel methods outperform traditional samplers for highly interconnected large networks with strong gene-to-gene signaling. Our newly developed samplers show comparable or superior performance to the top existing methods. Moreover, this performance gain is strongest in networks with biologically oriented topology, which indicates that our novel samplers are suitable for inferring biological networks. The performance of MCMC samplers in this simulation framework can guide the choice of methods for network reconstruction using systems genetics data. PMID:25631319
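As a concrete illustration of the kind of baseline the study benchmarks, here is a minimal Metropolis-Hastings structure sampler over DAGs with single-edge toggle proposals and a Gaussian BIC score. This is not any of the paper's specific samplers; the score, proposal, and iteration counts are assumptions for the sketch.

```python
import numpy as np

def is_dag(adj):
    # Kahn's algorithm: a graph is a DAG iff a topological sort consumes all nodes.
    n = len(adj)
    indeg = adj.sum(axis=0).astype(int)
    stack = [i for i in range(n) if indeg[i] == 0]
    seen = 0
    while stack:
        u = stack.pop()
        seen += 1
        for v in range(n):
            if adj[u, v]:
                indeg[v] -= 1
                if indeg[v] == 0:
                    stack.append(v)
    return seen == n

def bic_score(adj, X):
    # Gaussian BIC: each node is regressed on its parents (plus intercept).
    n_samp, n = X.shape
    score = 0.0
    for j in range(n):
        pa = np.flatnonzero(adj[:, j])
        H = X[:, pa] if pa.size else np.zeros((n_samp, 0))
        H = np.hstack([H, np.ones((n_samp, 1))])
        beta, *_ = np.linalg.lstsq(H, X[:, j], rcond=None)
        rss = np.sum((X[:, j] - H @ beta) ** 2)
        score += -0.5 * n_samp * np.log(rss / n_samp) - 0.5 * H.shape[1] * np.log(n_samp)
    return score

def structure_mcmc(X, n_iter=2000, rng=None):
    # Metropolis-Hastings over DAG structures: propose a single-edge toggle,
    # reject cycle-creating moves, accept with probability min(1, exp(dscore)).
    if rng is None:
        rng = np.random.default_rng(0)
    n = X.shape[1]
    adj = np.zeros((n, n), dtype=bool)
    cur = bic_score(adj, X)
    best, best_score = adj.copy(), cur
    for _ in range(n_iter):
        i, j = rng.integers(n, size=2)
        if i == j:
            continue
        prop = adj.copy()
        prop[i, j] = ~prop[i, j]
        if not is_dag(prop):
            continue
        s = bic_score(prop, X)
        if np.log(rng.random()) < s - cur:
            adj, cur = prop, s
            if cur > best_score:
                best, best_score = adj.copy(), cur
    return best
```

Note that a score-based sampler like this can only identify edges up to Markov equivalence, which is one reason the paper's simulations measure performance against known ground-truth networks.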
Ray, J.; Lee, J.; Yadav, V.; Lefantzi, S.; Michalak, A. M.; van Bloemen Waanders, B.
2014-08-20
We present a sparse reconstruction scheme that can also be used to ensure non-negativity when fitting wavelet-based random field models to limited observations in non-rectangular geometries. The method is relevant when multiresolution fields are estimated using linear inverse problems. Examples include the estimation of emission fields for many anthropogenic pollutants using atmospheric inversion or hydraulic conductivity in aquifers from flow measurements. The scheme is based on three new developments. Firstly, we extend an existing sparse reconstruction method, Stagewise Orthogonal Matching Pursuit (StOMP), to incorporate prior information on the target field. Secondly, we develop an iterative method that uses StOMP to impose non-negativity on the estimated field. Finally, we devise a method, based on compressive sensing, to limit the estimated field within an irregularly shaped domain. We demonstrate the method on the estimation of fossil-fuel CO2 (ffCO2) emissions in the lower 48 states of the US. The application uses a recently developed multiresolution random field model and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also reduces the overall computational cost by a factor of two. Further, the sparse reconstruction scheme imposes non-negativity without introducing strong nonlinearities, such as those introduced by employing log-transformed fields, and thus reaps the benefits of simplicity and computational speed that are characteristic of linear inverse problems.
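A simplified StOMP-style loop with a crude non-negativity projection can be sketched as follows. The paper's actual non-negativity scheme is iterative and more careful, and its prior-information and domain-limiting extensions are omitted; the threshold rule and stage count here are assumptions.

```python
import numpy as np

def stomp_nonneg(A, y, n_stages=10, t=2.0):
    """Simplified stagewise orthogonal matching pursuit: at each stage,
    residual correlations above t*sigma join the active set, which is then
    refit by least squares; negative coefficients are clipped to zero as a
    crude stand-in for a proper non-negativity constraint."""
    m, n = A.shape
    support = np.zeros(n, dtype=bool)
    x = np.zeros(n)
    for _ in range(n_stages):
        r = y - A @ x
        c = A.T @ r
        sigma = np.linalg.norm(r) / np.sqrt(m)   # noise-level proxy
        new = np.abs(c) > t * sigma
        if not new.any():
            break
        support |= new
        idx = np.flatnonzero(support)
        sol, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        sol = np.maximum(sol, 0.0)               # non-negativity projection
        x = np.zeros(n)
        x[idx] = sol
    return x
```

A production version would replace the clipping step with a nonnegative least-squares solve on the active set.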
Reconstruction of multiple gastric electrical wave fronts using potential-based inverse methods
NASA Astrophysics Data System (ADS)
Kim, J. H. K.; Pullan, A. J.; Cheng, L. K.
2012-08-01
One approach for non-invasively characterizing gastric electrical activity, commonly used in the field of electrocardiography, involves solving an inverse problem whereby electrical potentials on the stomach surface are directly reconstructed from dense potential measurements on the skin surface. To investigate this problem, an anatomically realistic torso model and an electrical stomach model were used to simulate potentials on the stomach and skin surfaces arising from normal gastric electrical activity. The effectiveness of the Greensite-Tikhonov and Tikhonov inverse methods was compared in the presence of 10% Gaussian noise with either 84 or 204 body surface electrodes. The stability and accuracy of the Greensite-Tikhonov method were further investigated by introducing varying levels of Gaussian signal noise or by increasing or decreasing the size of the stomach by 10%. Results showed that the reconstructed solutions were able to represent multiple propagating wave fronts, and the Greensite-Tikhonov method with 204 electrodes performed best (correlation coefficient of activation times: 90%; pacemaker localization error: 3 cm). The Greensite-Tikhonov method was stable with Gaussian noise levels up to 20% and a 10% change in stomach size. The use of 204 rather than 84 body surface electrodes improved performance; however, for all investigated cases, the Greensite-Tikhonov method outperformed the Tikhonov method.
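The Tikhonov side of the comparison reduces to a regularized normal-equations solve. This sketch shows zeroth-order Tikhonov only; the Greensite variant additionally decorrelates the data temporally (via an SVD of the measurement matrix) before applying Tikhonov, which is not shown here.

```python
import numpy as np

def tikhonov(A, b, lam, L=None):
    """Tikhonov solution of the ill-posed linear inverse problem b = A x + noise:
    argmin_x ||A x - b||^2 + lam * ||L x||^2, with L defaulting to the identity
    (zeroth-order regularization)."""
    n = A.shape[1]
    L = np.eye(n) if L is None else L
    # Regularized normal equations: (A^T A + lam L^T L) x = A^T b
    return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)
```

In the gastric problem, A would be the transfer matrix from stomach-surface to body-surface potentials and lam would be chosen by a criterion such as the L-curve.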
Zhou, Jian; Shang, Yang; Zhang, Xiaohu; Yu, Wenxian
2015-01-01
We propose a monocular trajectory intersection method to solve the problem that a monocular moving camera cannot be used for three-dimensional reconstruction of a moving object point. The necessary and sufficient condition under which this method has a unique solution is provided. An extended application of the method not only achieves reconstruction of the 3D trajectory, but also captures the orientation of the moving object, which cannot be obtained by PnP methods owing to a lack of features. This is a breakthrough improvement that develops intersection measurement from the traditional "point intersection" to "trajectory intersection" in videometrics. The trajectory of the object point can be obtained using only linear equations, without any initial value or iteration; the orientation of the object can also be calculated even under poor conditions. The condition required for this method to have a definite solution is derived from equivalence relations among the orders of the moving-trajectory equations of the object, which specifies the applicable conditions of the method. Simulation and experimental results show that it applies not only to objects moving along a straight line, a conic, or another simple trajectory, but also gives good results for more complicated trajectories, making it widely applicable. PMID:25760053
FBP and BPF reconstruction methods for circular X-ray tomography with off-center detector
Schaefer, Dirk; Grass, Michael; Haar, Peter van de
2011-05-15
Purpose: Circular scanning with an off-center planar detector is an acquisition scheme that makes it possible to save detector area while keeping a large field of view (FOV). Several filtered back-projection (FBP) algorithms have been proposed earlier. The purpose of this work is to present two newly developed back-projection filtration (BPF) variants and to evaluate their image quality compared with the existing state-of-the-art FBP methods. Methods: The first new BPF algorithm applies redundancy weighting of overlapping opposite projections before differentiation in a single projection. The second uses the Katsevich-type differentiation involving two neighboring projections, followed by redundancy weighting and back-projection. An averaging scheme is presented to mitigate streak artifacts, inherent to circular BPF algorithms, along the Hilbert filter lines in the off-center transaxial slices of the reconstructions. The image quality is assessed visually on reconstructed slices of simulated and clinical data. Quantitative evaluation studies are performed with the Forbild head phantom by calculating root-mean-squared deviations (RMSDs) from the voxelized phantom for different detector overlap settings, and by investigating the noise-resolution trade-off with a wire phantom in the full-detector and off-center scenarios. Results: The noise-resolution behavior of all off-center reconstruction methods corresponds to their full-detector performance, with the best resolution for the FDK-based methods in the given imaging geometry. With respect to RMSD and visual inspection, the proposed BPF with Katsevich-type differentiation outperforms all other methods for the smallest chosen detector overlap of about 15 mm. The best FBP method is the algorithm that is also based on the Katsevich-type differentiation and subsequent redundancy weighting. For a wider overlap of about 40-50 mm, these two algorithms produce similar results, outperforming the other three methods. The clinical
NASA Astrophysics Data System (ADS)
Li, Yue; Zhao, Yuan-meng; Deng, Chao; Zhang, Cunlin
2014-11-01
Terrorist attacks have made public safety a focus of national attention. Passive terahertz security instruments can help overcome some shortcomings of current security instruments. Terahertz waves have strong penetrating power and can pass through clothes without harming human bodies or the detected objects. However, in laboratory experiments, we found that the original terahertz images obtained by the passive terahertz technique were often too vague for detecting objects of interest. Prior studies suggest that learning-based image super-resolution reconstruction (SRR) methods can solve this problem. To our knowledge, we applied a learning-based image SRR method for the first time to single-frame passive terahertz image processing. Experimental results showed that the processed passive terahertz images were clearer, and suspicious objects were easier to identify, than in the original images. We also compared our method with three conventional methods, over which it showed a clear advantage.
Ehrhardt, Jan; Werner, Rene; Saering, Dennis; Frenzel, Thorsten; Lu Wei; Low, Daniel; Handels, Heinz
2007-02-15
Respiratory motion degrades anatomic position reproducibility and leads to issues affecting image acquisition, treatment planning, and radiation delivery. Four-dimensional (4D) computer tomography (CT) image acquisition can be used to measure the impact of organ motion and to explicitly account for respiratory motion during treatment planning and radiation delivery. Modern CT scanners can only scan a limited region of the body simultaneously and patients have to be scanned in segments consisting of multiple slices. A respiratory signal (spirometer signal or surface tracking) is used to reconstruct a 4D data set by sorting the CT scans according to the couch position and signal coherence with predefined respiratory phases. But artifacts can occur if there are no acquired data segments for exactly the same respiratory state for all couch positions. These artifacts are caused by device-dependent limitations of gantry rotation, image reconstruction times and by the variability of the patient's respiratory pattern. In this paper an optical flow based method for improved reconstruction of 4D CT data sets from multislice CT scans is presented. The optical flow between scans at neighboring respiratory states is estimated by a non-linear registration method. The calculated velocity field is then used to reconstruct a 4D CT data set by interpolating data at exactly the predefined respiratory phase. Our reconstruction method is compared with the usually used reconstruction based on amplitude sorting. The procedures described were applied to reconstruct 4D CT data sets for four cancer patients and a qualitative and quantitative evaluation of the optical flow based reconstruction method was performed. Evaluation results show a relevant reduction of reconstruction artifacts by our technique. The reconstructed 4D data sets were used to quantify organ displacements and to visualize the abdominothoracic organ motion.
NASA Astrophysics Data System (ADS)
Liu, Zhe; Zhang, Li; Jiang, Xiaolei
2012-10-01
Tomographic Gamma Scanning (TGS) is a non-destructive analysis technology based on the principles of Emission Computed Tomography (ECT). Dedicated to imaging gamma-ray emission, TGS reveals the radioactivity distributions of different radionuclides inside target objects such as nuclear waste barrels. Due to the special characteristics of the TGS imaging geometry, namely the relatively large detector cell size and the more pronounced view change across the imaging region, the line-integral projection model widely used in ECT problems is no longer applicable to radioactive-intensity image reconstruction in TGS. The alternative Monte Carlo based methods, which calculate the detection efficiency at every detecting position for each voxel, are effective and accurate but time-consuming. In this paper, we treat the geometrical detection efficiency of the detector, which depends on the detector-voxel relative position, independently from the intrinsic detection efficiency. Further, a new geometrical correction method is proposed, in which the voxel volume within the detector view is used as the projection weight in place of the track length used in the line-integral model. The geometrical detection efficiencies at different positions are expressed analytically as the volume integral over the voxel of the detector's geometrical point-source response function. Numerical simulations are performed and discussed. The results show that the proposed method reduces the reconstruction errors compared to the line-integral projection method, while offering better computational efficiency and flexibility than previous Monte Carlo methods.
A Weighted Two-Level Bregman Method with Dictionary Updating for Nonconvex MR Image Reconstruction
Peng, Xi; Liu, Jianbo; Yang, Dingcheng
2014-01-01
Nonconvex optimization has been shown to need substantially fewer measurements than l1 minimization for exact recovery under a fixed transform/overcomplete dictionary. In this work, two efficient numerical algorithms, unified under the name weighted two-level Bregman method with dictionary updating (WTBMDU), are proposed for solving lp optimization under the dictionary learning model while subjecting the fidelity to the partial measurements. By incorporating the iteratively reweighted norm into the two-level Bregman iteration method with dictionary updating scheme (TBMDU), the modified alternating direction method (ADM) efficiently solves the model with the approximated lp-norm penalty. Specifically, the algorithms converge after a relatively small number of iterations, under the formulation of iteratively reweighted l1 and l2 minimization. Experimental results on MR image simulations and real MR data, under a variety of sampling trajectories and acceleration factors, consistently demonstrate that the proposed method can efficiently reconstruct MR images from highly undersampled k-space data and offers advantages over current state-of-the-art reconstruction approaches, in terms of higher PSNR and lower HFEN values. PMID:25431583
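The iteratively reweighted idea at the core of this method can be illustrated in isolation with IRLS for an lp-penalized, equality-constrained problem; the dictionary-learning and Bregman layers of the actual algorithm are omitted, and the smoothing constant and iteration count below are assumptions.

```python
import numpy as np

def irls_lp(A, y, p=0.5, n_iter=30, eps=1e-6):
    """Iteratively reweighted least squares approximating
    min ||x||_p^p subject to A x = y (p < 1, nonconvex). Each step solves a
    weighted l2 problem: min sum_i x_i^2 / w_i s.t. A x = y, with surrogate
    weights w_i = (x_i^2 + eps)^(1 - p/2), whose closed-form solution is
    x = W A^T (A W A^T)^{-1} y."""
    x = np.linalg.pinv(A) @ y          # start from the least-norm l2 solution
    for _ in range(n_iter):
        w = (x ** 2 + eps) ** (1 - p / 2)
        AW = A * w                      # A @ diag(w), via broadcasting
        x = w * (A.T @ np.linalg.solve(AW @ A.T, y))
    return x
```

In practice eps is usually decreased over the iterations; a fixed small eps is used here to keep the sketch short.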
Chen, Quan; Zhigang, Cai; Xin, Peng; Yang, Wang; Chuanbin, Guo
2016-02-01
The treatment of complicated mandibular defects, including misshaped and missing bones, is challenging, and the success of reconstruction depends to a large extent on the formulation of a precise surgical plan. There is still no ideal preoperative method of design for reconstruction to deal with large, cross-midline, mandibular, segmental defects. We have built a virtual deformable mandibular model (VDMM) with 3-dimensional animation software. Sixteen handles were set on the model, and these could be easily controlled with a computer mouse to change the morphology of the deformable mandibular model. The computed tomographic (CT) data from 10 normal skulls was used to validate the adjustability of the VDMM. According to the positions of the mandibular fossa of the temporomandibular joint, the maxillary dental arch, and the craniomaxillofacial profile, the model could be adjusted to an ideal contour, which was coordinated with the skull. The VDMM was then adjusted further according to the morphology of the original mandible. A 3-dimensional comparison was made between the model of the deformed mandible and the original mandible. Using 16 control handles, the VDMM could be adjusted to a new outline, which was similar in shape to the original mandible. Within 3mm deviation either way, the absolute mean distribution of deviation between the contour of the deformed model and the original mandible was 92.5%. The VDMM might be useful for preoperative design of reconstruction of complicated mandibular defects. PMID:26711316
Statistical image reconstruction methods for simultaneous emission/transmission PET scans
Erdogan, H.; Fessler, J.A.
1996-12-31
Transmission scans are necessary for estimating the attenuation correction factors (ACFs) to yield quantitatively accurate PET emission images. To reduce the total scan time, post-injection transmission scans have been proposed in which one can simultaneously acquire emission and transmission data using rod sources and sinogram windowing. However, since the post-injection transmission scans are corrupted by emission coincidences, accurate correction for attenuation becomes more challenging. Conventional methods (emission subtraction) for ACF computation from post-injection scans are suboptimal and require relatively long scan times. We introduce statistical methods based on penalized-likelihood objectives to compute ACFs and then use them to reconstruct lower noise PET emission images from simultaneous transmission/emission scans. Simulations show the efficacy of the proposed methods. These methods improve image quality and SNR of the estimates as compared to conventional methods.
NASA Astrophysics Data System (ADS)
Kollár, László E.; Lucas, Gary P.; Zhang, Zhichao
2014-07-01
An analytical method is developed for the reconstruction of velocity profiles using measured potential distributions obtained around the boundary of a multi-electrode electromagnetic flow meter (EMFM). The method is based on the discrete Fourier transform (DFT), and is implemented in Matlab. The method assumes the velocity profile in a section of a pipe as a superposition of polynomials up to sixth order. Each polynomial component is defined along a specific direction in the plane of the pipe section. For a potential distribution obtained in a uniform magnetic field, this direction is not unique for quadratic and higher-order components; thus, multiple possible solutions exist for the reconstructed velocity profile. A procedure for choosing the optimum velocity profile is proposed. It is applicable for single-phase or two-phase flows, and requires measurement of the potential distribution in a non-uniform magnetic field. The potential distribution in this non-uniform magnetic field is also calculated for the possible solutions using weight values. Then, the velocity profile with the calculated potential distribution which is closest to the measured one provides the optimum solution. The reliability of the method is first demonstrated by reconstructing an artificial velocity profile defined by polynomial functions. Next, velocity profiles in different two-phase flows, based on results from the literature, are used to define the input velocity fields. In all cases, COMSOL Multiphysics is used to model the physical specifications of the EMFM and to simulate the measurements; thus, COMSOL simulations produce the potential distributions on the internal circumference of the flow pipe. These potential distributions serve as inputs for the analytical method. The reconstructed velocity profiles show satisfactory agreement with the input velocity profiles. The method described in this paper is most suitable for stratified flows and is not applicable to axisymmetric flows in
RECONSTRUCTING THE INITIAL DENSITY FIELD OF THE LOCAL UNIVERSE: METHODS AND TESTS WITH MOCK CATALOGS
Wang Huiyuan; Mo, H. J.; Yang Xiaohu; Van den Bosch, Frank C.
2013-07-20
Our research objective in this paper is to reconstruct an initial linear density field, which follows the multivariate Gaussian distribution with variances given by the linear power spectrum of the current cold dark matter model and evolves through gravitational instabilities to the present-day density field in the local universe. For this purpose, we develop a Hamiltonian Markov Chain Monte Carlo method to obtain the linear density field from a posterior probability function that consists of two components: a prior of a Gaussian density field with a given linear spectrum and a likelihood term that is given by the current density field. The present-day density field can be reconstructed from galaxy groups using the method developed in Wang et al. Using a realistic mock Sloan Digital Sky Survey DR7, obtained by populating dark matter halos in the Millennium simulation (MS) with galaxies, we show that our method can effectively and accurately recover both the amplitudes and phases of the initial, linear density field. To examine the accuracy of our method, we use N-body simulations to evolve these reconstructed initial conditions to the present day. The resimulated density field thus obtained accurately matches the original density field of the MS in the density range 0.3 ≲ ρ/ρ̄ ≲ 20 without any significant bias. In particular, the Fourier phases of the resimulated density fields are tightly correlated with those of the original simulation down to a scale corresponding to a wavenumber of ~1 h Mpc^-1, much smaller than the translinear scale, which corresponds to a wavenumber of ~0.15 h Mpc^-1.
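The engine behind this sampler, Hamiltonian Monte Carlo, can be shown on a 1-D toy posterior (Gaussian prior times Gaussian likelihood, echoing the prior-plus-likelihood structure above); the leapfrog parameters and the target are illustrative only, not the paper's high-dimensional field posterior.

```python
import numpy as np

def hmc_sample(u, grad_u, x0, n_samples, eps=0.2, n_leap=10, seed=0):
    """Minimal 1-D Hamiltonian Monte Carlo: leapfrog integration of
    Hamiltonian dynamics with potential U = -log(posterior) + const,
    followed by a Metropolis accept/reject step."""
    rng = np.random.default_rng(seed)
    x = x0
    out = np.empty(n_samples)
    for i in range(n_samples):
        p = rng.standard_normal()            # resample momentum
        x_new = x
        p_new = p - 0.5 * eps * grad_u(x)    # initial half-step for momentum
        for l in range(n_leap):
            x_new = x_new + eps * p_new
            # full momentum steps, except a half-step at the end
            p_new = p_new - (0.5 if l == n_leap - 1 else 1.0) * eps * grad_u(x_new)
        h_old = u(x) + 0.5 * p * p
        h_new = u(x_new) + 0.5 * p_new * p_new
        if np.log(rng.random()) < h_old - h_new:
            x = x_new
        out[i] = x
    return out
```

With a prior N(0, 1) and one observation y = 2 with unit noise variance, the posterior is N(1, 0.5), which the chain should reproduce.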
Kuiper, Justin J; Zimmerman, M Bridget; Pagedar, Nitin A; Carter, Keith D; Allen, Richard C; Shriver, Erin M
2016-08-01
This article compares the perception of health and beauty of patients after exenteration reconstruction with free flap, eyelid-sparing, split-thickness skin graft, or with a prosthesis. Cross-sectional evaluation was performed through a survey sent to all students enrolled at the University of Iowa Carver College of Medicine. The survey included inquiries about observer comfort, perceived patient health, difficulty of social interactions, and which patient appearance was least bothersome. Responses were scored from 0 to 4 for each method of reconstruction and an orbital prosthesis. A Friedman test was used to compare responses among each method of repair and the orbital prosthesis for each of the four questions, and if this was significant, then post-hoc pairwise comparison was performed with p values adjusted using Bonferroni's method. One hundred and thirty two students responded to the survey and 125 completed all four questions. Favorable response for all questions was highest for the orbital prosthesis and lowest for the split-thickness skin graft. Patient appearance with an orbital prosthesis had significantly higher scores compared to patient appearance with each of the other methods for all questions (p value < 0.0001). Second highest scores were for the free flap, which were higher than eyelid-sparing and significantly higher compared to split-thickness skin grafting (p value: Question 1: < 0.0001; Question 2: 0.0005; Question 3: 0.006; and Question 4: 0.019). The orbital prosthesis was the preferred post-operative appearance for the exenterated socket for each question. Free flap was the preferred appearance for reconstruction without an orbital prosthesis. Split-thickness skin graft was least preferred for all questions. PMID:27341072
Mory, Cyril; Auvray, Vincent; Zhang, Bo; Grass, Michael; Schäfer, Dirk; Chen, S. James; Carroll, John D.; Rit, Simon; Peyrin, Françoise; Douek, Philippe; Boussel, Loïc
2014-02-15
Purpose: Reconstruction of the beating heart in 3D + time in the catheter laboratory using only the available C-arm system would improve diagnosis, guidance, device sizing, and outcome control for intracardiac interventions, e.g., electrophysiology, valvular disease treatment, structural or congenital heart disease. To obtain such a reconstruction, the patient's electrocardiogram (ECG) must be recorded during the acquisition and used in the reconstruction. In this paper, the authors present a 4D reconstruction method aiming to reconstruct the heart from a single sweep 10 s acquisition. Methods: The authors introduce the 4D RecOnstructiOn using Spatial and TEmporal Regularization (short 4D ROOSTER) method, which reconstructs all cardiac phases at once, as a 3D + time volume. The algorithm alternates between a reconstruction step based on conjugate gradient and four regularization steps: enforcing positivity, averaging along time outside a motion mask that contains the heart and vessels, 3D spatial total variation minimization, and 1D temporal total variation minimization. Results: 4D ROOSTER recovers the different temporal representations of a moving Shepp and Logan phantom, and outperforms both ECG-gated simultaneous algebraic reconstruction technique and prior image constrained compressed sensing on a clinical case. It generates 3D + time reconstructions with sharp edges which can be used, for example, to estimate the patient's left ventricular ejection fraction. Conclusions: 4D ROOSTER can be applied for human cardiac C-arm CT, and potentially in other dynamic tomography areas. It can easily be adapted to other problems as regularization is decoupled from projection and back projection.
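One of the four regularization steps named above, temporal total-variation minimization, can be sketched with a smoothed-TV gradient descent on a 1-D signal. 4D ROOSTER's actual TV steps operate on full 3D + time volumes; the smoothing constant, step size, and weight below are assumptions.

```python
import numpy as np

def tv_denoise_1d(f, lam=0.3, step=0.02, n_iter=500, eps=1e-3):
    """Gradient descent on the smoothed ROF-type objective
    0.5*||u - f||^2 + lam * sum_i sqrt((u[i+1] - u[i])^2 + eps),
    a differentiable surrogate for 1-D total-variation minimization."""
    u = f.astype(float)
    for _ in range(n_iter):
        d = np.diff(u)
        w = d / np.sqrt(d * d + eps)   # derivative of the smoothed |d|
        g = np.zeros_like(u)
        g[1:] += w                     # contribution w.r.t. the right endpoint
        g[:-1] -= w                    # ... and the left endpoint
        u = u - step * ((u - f) + lam * g)
    return u

def total_variation(u):
    return np.abs(np.diff(u)).sum()
```

TV regularization favors piecewise-constant signals, which is why it suppresses noise while preserving the sharp edges the abstract mentions.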
Prioritization of Susceptibility Genes for Ectopic Pregnancy by Gene Network Analysis
Liu, Ji-Long; Zhao, Miao
2016-01-01
Ectopic pregnancy is a very dangerous complication of pregnancy, affecting 1%–2% of all reported pregnancies. Due to ethical constraints on human biopsies and the lack of suitable animal models, there has been little success in identifying functionally important genes in the pathogenesis of ectopic pregnancy. In the present study, we developed a random walk–based computational method named TM-rank to prioritize ectopic pregnancy–related genes based on text mining data and gene network information. Using a defined threshold value, we identified five top-ranked genes: VEGFA (vascular endothelial growth factor A), IL8 (interleukin 8), IL6 (interleukin 6), ESR1 (estrogen receptor 1) and EGFR (epidermal growth factor receptor). These genes are promising candidate genes that can serve as useful diagnostic biomarkers and therapeutic targets. Our approach represents a novel strategy for prioritizing disease susceptibility genes. PMID:26840308
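A random walk with restart, the standard machinery behind ranking schemes of this kind, looks like this on a toy gene network; the restart probability and column normalization are conventional choices shown for illustration, not necessarily the exact TM-rank formulation.

```python
import numpy as np

def random_walk_with_restart(W, seeds, restart=0.3, tol=1e-10):
    """Iterate r <- (1 - c) * P r + c * r0 to convergence, where P is the
    column-normalized transition matrix of the gene network W and r0 puts
    uniform mass on the seed genes (e.g. text-mining hits). The stationary
    r ranks all genes by proximity to the seeds."""
    P = W / W.sum(axis=0, keepdims=True)
    r0 = np.zeros(W.shape[0])
    r0[seeds] = 1.0 / len(seeds)
    r = r0.copy()
    while True:
        r_new = (1 - restart) * P @ r + restart * r0
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new
```

On a 4-node chain seeded at node 0, scores decay with network distance from the seed, which is the prioritization behavior exploited here.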
Zhao, Weizhao; Ginsberg, M. (Cerebral Vascular Disease Research Center); Young, T.Y. (Dept. of Electrical and Computer Engineering)
1993-12-01
Quantitative autoradiography is a powerful radio-isotopic-imaging method for neuroscientists to study local cerebral blood flow and glucose-metabolic rate at rest, in response to physiologic activation of the visual, auditory, somatosensory, and motor systems, and in pathologic conditions. Most autoradiographic studies analyze glucose utilization and blood flow in two-dimensional (2-D) coronal sections. With modern digital computer and image-processing techniques, a large number of closely spaced coronal sections can be stacked appropriately to form a three-dimensional (3-D) image. 3-D autoradiography allows investigators to observe cerebral sections and surfaces from any viewing angle. A fundamental problem in 3-D reconstruction is the alignment (registration) of the coronal sections. A new alignment method based on disparity analysis is presented which can overcome many of the difficulties encountered by previous methods. The disparity analysis method can deal with asymmetric, damaged, or tilted coronal sections under the same general framework, and it can be used to match coronal sections of different sizes and shapes. Experimental results on alignment and 3-D reconstruction are presented.
Jha, Abhinav K.; Song, Na; Caffo, Brian; Frey, Eric C.
2015-01-01
Quantitative single-photon emission computed tomography (SPECT) imaging is emerging as an important tool in clinical studies and biomedical research. There is thus a need for optimization and evaluation of systems and algorithms that are being developed for quantitative SPECT imaging. An appropriate objective method to evaluate these systems is by comparing their performance in the end task that is required in quantitative SPECT imaging, such as estimating the mean activity concentration in a volume of interest (VOI) in a patient image. This objective evaluation can be performed if the true value of the estimated parameter is known, i.e. we have a gold standard. However, very rarely is this gold standard known in human studies. Thus, no-gold-standard techniques to optimize and evaluate systems and algorithms in the absence of gold standard are required. In this work, we developed a no-gold-standard technique to objectively evaluate reconstruction methods used in quantitative SPECT when the parameter to be estimated is the mean activity concentration in a VOI. We studied the performance of the technique with realistic simulated image data generated from an object database consisting of five phantom anatomies with all possible combinations of five sets of organ uptakes, where each anatomy consisted of eight different organ VOIs. Results indicate that the method provided accurate ranking of the reconstruction methods. We also demonstrated the application of consistency checks to test the no-gold-standard output. PMID:26430292
NASA Astrophysics Data System (ADS)
Jha, Abhinav K.; Song, Na; Caffo, Brian; Frey, Eric C.
2015-03-01
Quantitative single-photon emission computed tomography (SPECT) imaging is emerging as an important tool in clinical studies and biomedical research. There is thus a need for optimization and evaluation of systems and algorithms that are being developed for quantitative SPECT imaging. An appropriate objective method to evaluate these systems is by comparing their performance in the end task that is required in quantitative SPECT imaging, such as estimating the mean activity concentration in a volume of interest (VOI) in a patient image. This objective evaluation can be performed if the true value of the estimated parameter is known, i.e. we have a gold standard. However, very rarely is this gold standard known in human studies. Thus, no-gold-standard techniques to optimize and evaluate systems and algorithms in the absence of a gold standard are required. In this work, we developed a no-gold-standard technique to objectively evaluate reconstruction methods used in quantitative SPECT when the parameter to be estimated is the mean activity concentration in a VOI. We studied the performance of the technique with realistic simulated image data generated from an object database consisting of five phantom anatomies with all possible combinations of five sets of organ uptakes, where each anatomy consisted of eight different organ VOIs. Results indicate that the method provided accurate ranking of the reconstruction methods. We also demonstrated the application of consistency checks to test the no-gold-standard output.
Han, Zhaolong; Li, Jiasong; Singh, Manmohan; Wu, Chen; Liu, Chih-hao; Wang, Shang; Idugboe, Rita; Raghunathan, Raksha; Sudheendran, Narendran; Aglyamov, Salavat R.; Twa, Michael D.; Larin, Kirill V.
2015-01-01
We present a systematic analysis of the accuracy of five different methods for extracting the biomechanical properties of soft samples using optical coherence elastography (OCE). OCE is an emerging noninvasive technique, which allows assessing the biomechanical properties of tissues with micrometer spatial resolution. However, in order to accurately extract biomechanical properties from OCE measurements, application of a proper mechanical model is required. In this study, we utilize tissue-mimicking phantoms with controlled elastic properties and investigate the feasibility of four available methods for reconstructing elasticity (Young's modulus) based on OCE measurements of an air-pulse induced elastic wave. The approaches are based on the shear wave equation (SWE), the surface wave equation (SuWE), the Rayleigh-Lamb frequency equation (RLFE), and the finite element method (FEM). Elasticity values were compared with uniaxial mechanical testing. The results show that the RLFE and the FEM are more robust in quantitatively assessing elasticity than the other simplified models. This study provides a foundation and reference for reconstructing the biomechanical properties of tissues from OCE data, which is important for the further development of noninvasive elastography methods. PMID:25860076
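For reference, the shear-wave-equation (SWE) route named above reduces to simple algebra: the shear modulus is mu = rho * c^2 for wave speed c and density rho, and Young's modulus follows as E = 2(1 + nu) * mu, with Poisson's ratio nu close to 0.5 for nearly incompressible soft tissue. A sketch (the phantom density and wave speed below are illustrative assumptions, not values from the paper):

```python
def youngs_modulus_from_shear_speed(c_s, density=1000.0, poisson=0.499):
    """Shear-wave-equation (SWE) estimate of Young's modulus.

    c_s: elastic wave group velocity in m/s; density in kg/m^3.
    mu = rho * c^2 (shear modulus), E = 2 * (1 + nu) * mu.
    For nearly incompressible soft tissue (nu ~ 0.5), E ~ 3 * rho * c^2.
    """
    mu = density * c_s ** 2              # shear modulus, Pa
    return 2.0 * (1.0 + poisson) * mu    # Young's modulus, Pa

# A 2.3 m/s wave in a ~1000 kg/m^3 phantom gives E on the order of 16 kPa.
E = youngs_modulus_from_shear_speed(2.3)
```

The simplified wave-equation models in the paper differ mainly in how the speed c is extracted and corrected, which is why the RLFE and FEM approaches prove more robust than this bare relation.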
NASA Astrophysics Data System (ADS)
Murphy, Martin J.; Todor, Dorin A.
2005-06-01
By monitoring brachytherapy seed placement and determining the actual configuration of the seeds in vivo, one can optimize the treatment plan during the process of implantation. Two or more radiographic images from different viewpoints can in principle allow one to reconstruct the configuration of implanted seeds uniquely. However, the reconstruction problem is complicated by several factors: (1) the seeds can overlap and cluster in the images; (2) the images can have distortion that varies with viewpoint when a C-arm fluoroscope is used; (3) there can be uncertainty in the imaging viewpoints; (4) the angular separation of the imaging viewpoints can be small owing to physical space constraints; (5) there can be inconsistency in the number of seeds detected in the images; and (6) the patient can move while being imaged. We propose and conceptually demonstrate a novel reconstruction method that handles all of these complications and uncertainties in a unified process. The method represents the three-dimensional seed and camera configurations as parametrized models that are adjusted iteratively to conform to the observed radiographic images. The morphed model seed configuration that best reproduces the appearance of the seeds in the radiographs is the best estimate of the actual seed configuration. All of the information needed to establish both the seed configuration and the camera model is derived from the seed images without resort to external calibration fixtures. Furthermore, by comparing overall image content rather than individual seed coordinates, the process avoids the need to establish correspondence between seed identities in the several images. The method has been shown to work robustly in simulation tests that simultaneously allow for unknown individual seed positions, uncertainties in the imaging viewpoints and variable image distortion.
Rezac, K.; Klir, D.; Kubes, P.; Kravarik, J.
2009-01-21
We present the reconstruction of neutron energy spectra from time-of-flight signals. This technique is useful in experiments where the time of neutron production is in the range of about tens or hundreds of nanoseconds. The neutron signals were obtained by common hard X-ray and neutron fast plastic scintillation detectors. The reconstruction is based on the Monte Carlo method, which we have improved by the simultaneous use of neutron detectors placed on two opposite sides of the neutron source. Although the reconstruction from detectors placed on two opposite sides is more difficult and somewhat less accurate (owing to several assumptions made when combining both sides of detection), it offers some advantages. The most important is the smaller influence of scattered neutrons on the reconstruction. Finally, we describe the estimation of the error of this reconstruction.
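The time-of-flight principle behind such reconstructions is the classical relation E = (1/2) m_n (L/t)^2, which is adequate for the few-MeV fusion neutrons typical of these experiments. A minimal sketch (the 5 m flight path and 231 ns arrival time are illustrative assumptions, not values from the paper):

```python
def neutron_energy_MeV(flight_path_m, tof_s):
    """Classical kinetic energy of a neutron from its time of flight.

    E = 1/2 * m_n * (L / t)^2, valid well below relativistic speeds.
    """
    M_N = 1.674927e-27       # neutron rest mass, kg
    J_PER_MEV = 1.602177e-13  # joules per MeV
    v = flight_path_m / tof_s
    return 0.5 * M_N * v ** 2 / J_PER_MEV

# A neutron covering 5 m in ~231 ns corresponds to ~2.45 MeV (the D-D fusion line).
E = neutron_energy_MeV(5.0, 231e-9)
```

The Monte Carlo reconstruction in the paper essentially inverts this mapping statistically, spreading each detected signal over the energies consistent with the uncertain production time.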
A Dictionary Learning Method with Total Generalized Variation for MRI Reconstruction
Lu, Hongyang; Wei, Jingbo; Wang, Yuhao; Deng, Xiaohua
2016-01-01
Reconstructing images from their noisy and incomplete measurements is always a challenge, especially for medical MR images with important details and features. This work proposes a novel dictionary learning model that integrates two sparse regularization methods: the total generalized variation (TGV) approach and adaptive dictionary learning (DL). In the proposed method, the TGV selectively regularizes different image regions at different levels to largely avoid oil-painting artifacts. At the same time, the dictionary learning adaptively represents the image features sparsely and effectively recovers details of images. The proposed model is solved by a variable splitting technique and the alternating direction method of multipliers. Extensive simulation experimental results demonstrate that the proposed method consistently recovers MR images efficiently and outperforms the current state-of-the-art approaches in terms of higher PSNR and lower HFEN values. PMID:27110235
A Dictionary Learning Method with Total Generalized Variation for MRI Reconstruction.
Lu, Hongyang; Wei, Jingbo; Liu, Qiegen; Wang, Yuhao; Deng, Xiaohua
2016-01-01
Reconstructing images from their noisy and incomplete measurements is always a challenge, especially for medical MR images with important details and features. This work proposes a novel dictionary learning model that integrates two sparse regularization methods: the total generalized variation (TGV) approach and adaptive dictionary learning (DL). In the proposed method, the TGV selectively regularizes different image regions at different levels to largely avoid oil-painting artifacts. At the same time, the dictionary learning adaptively represents the image features sparsely and effectively recovers details of images. The proposed model is solved by a variable splitting technique and the alternating direction method of multipliers. Extensive simulation experimental results demonstrate that the proposed method consistently recovers MR images efficiently and outperforms the current state-of-the-art approaches in terms of higher PSNR and lower HFEN values. PMID:27110235
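The PSNR figure of merit used in this abstract is straightforward to compute; a minimal sketch (the toy image is illustrative):

```python
import numpy as np

def psnr(reference, reconstruction, peak=None):
    """Peak signal-to-noise ratio in dB: 20 * log10(peak / RMSE).

    Higher is better. `peak` defaults to the reference maximum.
    """
    ref = np.asarray(reference, dtype=float)
    rec = np.asarray(reconstruction, dtype=float)
    mse = np.mean((ref - rec) ** 2)
    if peak is None:
        peak = ref.max()
    return 20.0 * np.log10(peak / np.sqrt(mse))

# A reconstruction off by 1 gray level everywhere on an 8-bit scale: ~48.13 dB.
img = np.full((8, 8), 128.0)
val = psnr(img, img + 1.0, peak=255.0)
```

HFEN, the other metric quoted, instead compares Laplacian-of-Gaussian filtered versions of the two images to emphasize edge and fine-detail fidelity.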
NASA Astrophysics Data System (ADS)
Beilina, Larisa
2016-08-01
We present a domain decomposition finite element/finite difference method for the solution of a hyperbolic equation. The domain decomposition is performed such that finite elements and finite differences are used in different subdomains of the computational domain: the finite difference method is used on the structured part of the computational domain and finite elements on the unstructured part. Explicit discretizations for both methods are constructed such that the finite element and the finite difference schemes coincide on the common structured overlapping layer between computational subdomains. The resulting approach can then be considered as a pure finite element scheme which avoids instabilities at the interfaces. We derive an energy estimate for the underlying hyperbolic equation with absorbing boundary conditions and illustrate the efficiency of the domain decomposition method on the reconstruction of the conductivity function in three dimensions.
Liu, Xueqi; Wang, Hong-Wei
2011-01-01
Single particle electron microscopy (EM) reconstruction has recently become a popular tool to obtain the three-dimensional (3D) structure of large macromolecular complexes. Compared to X-ray crystallography, it has some unique advantages. First, single particle EM reconstruction does not require crystallization of the protein sample, which is the bottleneck in X-ray crystallography, especially for large macromolecular complexes. Second, it does not need large amounts of protein sample. Compared with the milligrams of protein necessary for crystallization, single particle EM reconstruction only needs several micro-liters of protein solution at nano-molar concentrations, using the negative staining EM method. However, except for a few macromolecular assemblies with high symmetry, single particle EM is limited to relatively low resolution (lower than 1 nm) for many specimens, especially those without symmetry. This technique is also limited by the size of the molecules under study, i.e. 100 kDa for negatively stained specimens and 300 kDa for frozen-hydrated specimens in general. For a new sample of unknown structure, we generally use a heavy metal solution to embed the molecules by negative staining. The specimen is then examined in a transmission electron microscope to take two-dimensional (2D) micrographs of the molecules. Ideally, the protein molecules have a homogeneous 3D structure but exhibit different orientations in the micrographs. These micrographs are digitized and processed in computers as "single particles". Using two-dimensional alignment and classification techniques, homogeneous molecules in the same views are clustered into classes. Their averages enhance the signal of the molecule's 2D shapes. After we assign the particles with the proper relative orientation (Euler angles), we will be able to reconstruct the 2D particle images into a 3D virtual volume. In single particle 3D reconstruction, an essential step is to correctly assign the proper orientation
A simple method for reconstruction of severely damaged primary anterior teeth
Eshghi, Alireza; Esfahan, Raha Kowsari; Khoroushi, Maryam
2011-01-01
Restoration of severely decayed primary anterior teeth is often considered a special challenge by pedodontists. This case report presents a 5-year-old boy with a severely damaged maxillary right canine. Subsequent to root canal treatment, a reversed (upside-down) metal post was placed into the canal and a composite build-up was performed. This new method offers a simple, practical and effective procedure for reconstruction of severely decayed primary anterior teeth, which re-establishes function and esthetics for the time the tooth should be present and functional in the child's mouth. PMID:22135694
Xie, Pei-yue; Yang, Jian-feng; Xue, Bin; Lü, Juan; He, Ying-hong; Li, Ting; Ma, Xiao-long
2016-03-01
The interference imaging spectrometer is one of the most important instruments on the Chang'E-1 satellite, used to analyze the material composition and distribution of the lunar surface. At present, the spectral resolution of the level 2B scientific data obtained by existing methods is 325 cm(-1). Described as wavelength resolution, this varies across the spectrum: 7.6 nm in the first band and 29 nm in the last, which raises two problems: (1) the spectral-resolution description does not match that of the ground spectral library used for calibration and comparison; (2) the signal-to-noise ratio of the shortwave-band spectra is low because little signal enters the narrow bands. This paper discusses the relationship between wavelength resolution and the cut-off function based on the reconstruction model of the CE-1 interference imaging spectrometer. It proposes a cut-off function adjustable with wavelength or wavelength resolution, and selects an appropriate Sinc function as apodization to reconstruct any specified wavelength resolution within the band coverage. We then applied this method to CE-1 on-orbit 0B data to obtain a spectral image with 29 nm wavelength resolution. Finally, comparing the reconstruction results against the level 2 science data from the ground application system using the signal-to-noise ratio, principal component analysis and unsupervised classification, the results showed that the signal-to-noise ratio of the shortwave bands increased about 4 times and the average increased about 2.4 times, the spectrum-based classification was consistent, and the quality of the data was greatly improved. The EWSR method has the advantages that: (1) while keeping the spectral information stable, it can improve the signal-to-noise ratio of the shortwave-band spectrum at the cost of part of the spectral resolution; (2) it can achieve spectral data reconstruction which can set
Low dose dynamic CT myocardial perfusion imaging using a statistical iterative reconstruction method
Tao, Yinghua; Chen, Guang-Hong; Hacker, Timothy A.; Raval, Amish N.; Van Lysel, Michael S.; Speidel, Michael A.
2014-07-15
Purpose: Dynamic CT myocardial perfusion imaging has the potential to provide both functional and anatomical information regarding coronary artery stenosis. However, radiation dose can be potentially high due to repeated scanning of the same region. The purpose of this study is to investigate the use of statistical iterative reconstruction to improve parametric maps of myocardial perfusion derived from a low tube current dynamic CT acquisition. Methods: Four pigs underwent high (500 mA) and low (25 mA) dose dynamic CT myocardial perfusion scans with and without coronary occlusion. To delineate the affected myocardial territory, an N-13 ammonia PET perfusion scan was performed for each animal in each occlusion state. Filtered backprojection (FBP) reconstruction was first applied to all CT data sets. Then, a statistical iterative reconstruction (SIR) method was applied to data sets acquired at low dose. Image voxel noise was matched between the low dose SIR and high dose FBP reconstructions. CT perfusion maps were compared among the low dose FBP, low dose SIR and high dose FBP reconstructions. Numerical simulations of a dynamic CT scan at high and low dose (20:1 ratio) were performed to quantitatively evaluate SIR and FBP performance in terms of flow map accuracy, precision, dose efficiency, and spatial resolution. Results: For in vivo studies, the 500 mA FBP maps gave −88.4%, −96.0%, −76.7%, and −65.8% flow change in the occluded anterior region compared to the open-coronary scans (four animals). The percent changes in the 25 mA SIR maps were in good agreement, measuring −94.7%, −81.6%, −84.0%, and −72.2%. The 25 mA FBP maps gave unreliable flow measurements due to streaks caused by photon starvation (percent changes of +137.4%, +71.0%, −11.8%, and −3.5%). Agreement between 25 mA SIR and 500 mA FBP global flow was −9.7%, 8.8%, −3.1%, and 26.4%. The average variability of flow measurements in a nonoccluded region was 16.3%, 24.1%, and 937
Low dose dynamic CT myocardial perfusion imaging using a statistical iterative reconstruction method
Tao, Yinghua; Chen, Guang-Hong; Hacker, Timothy A.; Raval, Amish N.; Van Lysel, Michael S.; Speidel, Michael A.
2014-01-01
Purpose: Dynamic CT myocardial perfusion imaging has the potential to provide both functional and anatomical information regarding coronary artery stenosis. However, radiation dose can be potentially high due to repeated scanning of the same region. The purpose of this study is to investigate the use of statistical iterative reconstruction to improve parametric maps of myocardial perfusion derived from a low tube current dynamic CT acquisition. Methods: Four pigs underwent high (500 mA) and low (25 mA) dose dynamic CT myocardial perfusion scans with and without coronary occlusion. To delineate the affected myocardial territory, an N-13 ammonia PET perfusion scan was performed for each animal in each occlusion state. Filtered backprojection (FBP) reconstruction was first applied to all CT data sets. Then, a statistical iterative reconstruction (SIR) method was applied to data sets acquired at low dose. Image voxel noise was matched between the low dose SIR and high dose FBP reconstructions. CT perfusion maps were compared among the low dose FBP, low dose SIR and high dose FBP reconstructions. Numerical simulations of a dynamic CT scan at high and low dose (20:1 ratio) were performed to quantitatively evaluate SIR and FBP performance in terms of flow map accuracy, precision, dose efficiency, and spatial resolution. Results: For in vivo studies, the 500 mA FBP maps gave −88.4%, −96.0%, −76.7%, and −65.8% flow change in the occluded anterior region compared to the open-coronary scans (four animals). The percent changes in the 25 mA SIR maps were in good agreement, measuring −94.7%, −81.6%, −84.0%, and −72.2%. The 25 mA FBP maps gave unreliable flow measurements due to streaks caused by photon starvation (percent changes of +137.4%, +71.0%, −11.8%, and −3.5%). Agreement between 25 mA SIR and 500 mA FBP global flow was −9.7%, 8.8%, −3.1%, and 26.4%. The average variability of flow measurements in a nonoccluded region was 16.3%, 24.1%, and 937
Lee, H.R.
1997-11-18
A three-dimensional image reconstruction method comprises treating the object of interest as a group of elements with a size that is determined by the resolution of the projection data, e.g., as determined by the size of each pixel. One of the projections is used as a reference projection. A fictitious object is arbitrarily defined that is constrained by such reference projection. The method modifies the known structure of the fictitious object by comparing and optimizing its four projections to those of the unknown structure of the real object and continues to iterate until the optimization is limited by the residual sum of background noise. The method is composed of several sub-processes that acquire four projections from the real data and the fictitious object: generate an arbitrary distribution to define the fictitious object, optimize the four projections, generate a new distribution for the fictitious object, and enhance the reconstructed image. The sub-process for the acquisition of the four projections from the input real data is simply the function of acquiring the four projections from the data of the transmitted intensity. The transmitted intensity represents the density distribution, that is, the distribution of absorption coefficients through the object. 5 figs.
Lee, Heung-Rae
1997-01-01
A three-dimensional image reconstruction method comprises treating the object of interest as a group of elements with a size that is determined by the resolution of the projection data, e.g., as determined by the size of each pixel. One of the projections is used as a reference projection. A fictitious object is arbitrarily defined that is constrained by such reference projection. The method modifies the known structure of the fictitious object by comparing and optimizing its four projections to those of the unknown structure of the real object and continues to iterate until the optimization is limited by the residual sum of background noise. The method is composed of several sub-processes that acquire four projections from the real data and the fictitious object: generate an arbitrary distribution to define the fictitious object, optimize the four projections, generate a new distribution for the fictitious object, and enhance the reconstructed image. The sub-process for the acquisition of the four projections from the input real data is simply the function of acquiring the four projections from the data of the transmitted intensity. The transmitted intensity represents the density distribution, that is, the distribution of absorption coefficients through the object.
Wang, Shaobu; Lu, Shuai; Zhou, Ning; Lin, Guang; Elizondo, Marcelo A.; Pai, M. A.
2014-09-04
In interconnected power systems, dynamic model reduction can be applied to generators outside the area of interest to mitigate the computational cost of transient stability studies. This paper presents an approach for deriving the reduced dynamic model of the external area based on dynamic response measurements, which comprises three steps: dynamic-feature extraction, attribution and reconstruction (DEAR). In the DEAR approach, a feature extraction technique, such as singular value decomposition (SVD), is applied to the measured generator dynamics after a disturbance. Characteristic generators are then identified in the feature attribution step by matching the extracted dynamic features with the highest similarity, forming a suboptimal 'basis' of system dynamics. In the reconstruction step, generator state variables such as rotor angles and voltage magnitudes are approximated with a linear combination of the characteristic generators, resulting in a quasi-nonlinear reduced model of the original external system. The network model is unchanged in the DEAR method. Tests on several IEEE standard systems show that the proposed method achieves a better reduction ratio and smaller response errors than the traditional coherency aggregation methods.
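The extraction, attribution, and reconstruction steps can be illustrated with a toy SVD-based sketch. The function below is a loose interpretation of the DEAR idea, not the authors' implementation; the matrix shapes, the greedy cosine-similarity attribution rule, and the toy data are all assumptions:

```python
import numpy as np

def dear_reduce(X, n_char):
    """Toy sketch of feature extraction / attribution / reconstruction.

    X: (n_generators, n_samples) matrix of measured post-disturbance responses.
    1) extraction: SVD of X yields the dominant temporal modes;
    2) attribution: greedily pick the generator whose response aligns best
       with each leading mode (the "characteristic" generators);
    3) reconstruction: least-squares fit of every generator's response as a
       linear combination of the characteristic generators' responses.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    modes = Vt[:n_char]                                   # dominant features
    chosen = []
    for m in modes:
        sim = np.abs(X @ m) / np.linalg.norm(X, axis=1)   # cosine similarity
        for idx in np.argsort(-sim):
            if idx not in chosen:
                chosen.append(idx)
                break
    B = X[chosen]                                         # characteristic responses
    coef, *_ = np.linalg.lstsq(B.T, X.T, rcond=None)
    X_hat = (B.T @ coef).T                                # reduced-model output
    return chosen, X_hat

# Toy data: 6 "generators" whose responses are mixtures of 2 underlying modes,
# so a 2-generator characteristic basis should reconstruct them almost exactly.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
base = np.vstack([np.sin(2 * np.pi * 3 * t), np.exp(-2 * t)])
X = rng.normal(size=(6, 2)) @ base
chosen, X_hat = dear_reduce(X, n_char=2)
err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
```

The point of the sketch is the structure of the pipeline: because the toy responses have exact rank two, any two independent characteristic generators span them, mirroring the paper's claim that a small characteristic set suffices.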
Vehmeijer, Maarten; van Eijnatten, Maureen; Liberton, Niels; Wolff, Jan
2016-08-01
Fractures of the orbital floor are often a result of traffic accidents or interpersonal violence. To date, numerous materials and methods have been used to reconstruct the orbital floor. However, simple and cost-effective 3-dimensional (3D) printing technologies for the treatment of orbital floor fractures are still sought. This study describes a simple, precise, cost-effective method of treating orbital fractures using 3D printing technologies in combination with autologous bone. Enophthalmos and diplopia developed in a 64-year-old female patient with an orbital floor fracture. A virtual 3D model of the fracture site was generated from computed tomography images of the patient. The fracture was virtually closed using spline interpolation. Furthermore, a virtual individualized mold of the defect site was created, which was manufactured using an inkjet printer. The tangible mold was subsequently used during surgery to sculpture an individualized autologous orbital floor implant. Virtual reconstruction of the orbital floor and the resulting mold enhanced the overall accuracy and efficiency of the surgical procedure. The sculptured autologous orbital floor implant showed an excellent fit in vivo. The combination of virtual planning and 3D printing offers an accurate and cost-effective treatment method for orbital floor fractures. PMID:27137437
Iterative methods for the reconstruction of astronomical images with high dynamic range
NASA Astrophysics Data System (ADS)
Anconelli, B.; Bertero, M.; Boccacci, P.; Carbillet, M.; Lanteri, H.
2007-01-01
In most cases astronomical images contain objects with very different intensities such as bright stars combined with faint nebulae. Since the noise is mainly due to photon counting (Poisson noise), the signal-to-noise ratio may be very different in different regions of the image. Moreover, the bright and faint objects have, in general, different angular scales. These features imply that the iterative methods which are most frequently used for the reconstruction of astronomical images, namely the Richardson-Lucy Method (RLM), also known in tomography as Expectation Maximization (EM) method, and the Iterative Space Reconstruction Algorithm (ISRA) do not work well in these cases. Also standard regularization approaches do not provide satisfactory results since a kind of adaptive regularization is required, in the sense that one needs a different regularization for bright and faint objects. In this paper we analyze a number of regularization functionals with this particular kind of adaptivity and we propose a simple modification of RLM and ISRA which takes into account these regularization terms. The preliminary results on a test object are promising.
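For context, the unregularized Richardson-Lucy/EM iteration that such adaptive schemes modify is the multiplicative update f <- f * H^T(g / (H f)), with H the blurring operator. A plain 1-D sketch (toy PSF and data assumed; no regularization, adaptive or otherwise):

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=100):
    """Plain Richardson-Lucy (EM) deconvolution for Poisson-noise imaging.

    Multiplicative update keeps the estimate nonnegative. This is the
    baseline method; it has no regularization, which is the limitation
    the adaptive functionals discussed above address.
    """
    psf_flip = psf[::-1]                            # adjoint of convolution
    f = np.full_like(observed, observed.mean())     # flat positive initial guess
    for _ in range(n_iter):
        blurred = np.convolve(f, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        f = f * np.convolve(ratio, psf_flip, mode="same")
    return f

# Toy test: a bright "star" (spike) blurred by a small normalized PSF.
true = np.zeros(64)
true[32] = 100.0
psf = np.array([0.05, 0.25, 0.4, 0.25, 0.05])
g = np.convolve(true, psf, mode="same")
est = richardson_lucy(g, psf)
```

On noiseless data the iteration progressively re-concentrates the flux into the spike; on real Poisson data it also amplifies noise with increasing iterations, which motivates the regularized variants the paper analyzes.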
A Method for Accurate Reconstructions of the Upper Airway Using Magnetic Resonance Images
Xiong, Huahui; Huang, Xiaoqing; Li, Yong; Li, Jianhong; Xian, Junfang; Huang, Yaqi
2015-01-01
Objective The purpose of this study is to provide an optimized method to reconstruct the structure of the upper airway (UA) based on magnetic resonance imaging (MRI) that can faithfully show the anatomical structure with a smooth surface without artificial modifications. Methods MRI was performed on the head and neck of a healthy young male participant in the axial, coronal and sagittal planes to acquire images of the UA. The level set method was used to segment the boundary of the UA. The boundaries in the three scanning planes were registered according to the positions of crossing points and anatomical characteristics using a Matlab program. Finally, the three-dimensional (3D) NURBS (Non-Uniform Rational B-Splines) surface of the UA was constructed using the registered boundaries in all three different planes. Results A smooth 3D structure of the UA was constructed, which captured the anatomical features from the three anatomical planes, particularly the location of the anterior wall of the nasopharynx. The volume and area of every cross section of the UA can be calculated from the constructed 3D model of UA. Conclusions A complete scheme of reconstruction of the UA was proposed, which can be used to measure and evaluate the 3D upper airway accurately. PMID:26066461
NASA Astrophysics Data System (ADS)
Zhou, Huiyuan; Narayanan, Ram M.; Balasingham, Ilangko
2016-05-01
This paper addresses the detection and imaging of a small tumor underneath the inner surface of the human intestine. The proposed system consists of an around-body antenna array cooperating with a capsule carrying a radio frequency (RF) transmitter located within the human body. This paper presents a modified Levenberg-Marquardt algorithm to reconstruct the dielectric profile with this new system architecture. Each antenna around the body acts both as a transmitter and a receiver for the remaining array elements. In addition, each antenna also acts as a receiver for the capsule transmitter inside the body to collect additional data which cannot be obtained from the conventional system. In this paper, the synthetic data are collected from biological objects, which are simulated for the circular phantoms using CST Studio software. For the imaging part, the Levenberg-Marquardt algorithm, a Newton-type inversion method, is chosen to reconstruct the dielectric profile of the objects. The imaging process involves a two-part innovation. The first part is the use of a dual mesh method, which builds a dense mesh grid in the region around the transmitter and a coarse mesh for the remaining area. The second part is the modification of the Levenberg-Marquardt method to use the additional data collected from the inside transmitter. The results show that the new system with the new imaging algorithm can obtain high resolution images even for small tumors.
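The Levenberg-Marquardt core named above is a standard damped Gauss-Newton iteration; a generic sketch on a toy exponential-fitting problem (not the dual-mesh dielectric solver itself; the damping schedule and test problem are illustrative assumptions) looks like:

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, lam=1e-2, n_iter=50):
    """Generic Levenberg-Marquardt iteration for nonlinear least squares.

    Solves (J^T J + lam I) dx = -J^T r each step, then adapts the damping:
    shrink lam when the step lowers the cost, grow it when the step fails.
    """
    x = np.asarray(x0, dtype=float)
    cost = 0.5 * np.sum(residual(x) ** 2)
    for _ in range(n_iter):
        r, J = residual(x), jacobian(x)
        A = J.T @ J + lam * np.eye(x.size)
        dx = np.linalg.solve(A, -J.T @ r)
        new_cost = 0.5 * np.sum(residual(x + dx) ** 2)
        if new_cost < cost:
            x, cost, lam = x + dx, new_cost, lam * 0.5   # accept, trust more
        else:
            lam *= 10.0                                   # reject, damp harder
    return x

# Toy inverse problem: recover (a, b) in y = a * exp(b * t) from exact data.
t = np.linspace(0, 1, 20)
y = 2.0 * np.exp(1.5 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)])
p = levenberg_marquardt(res, jac, x0=[1.0, 1.0])
```

In the imaging setting the unknown x is the dielectric profile on the mesh and the Jacobian comes from the forward electromagnetic solver; the paper's modification changes which measurements enter the residual, not this basic update.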
Nien, Hung; Fessler, Jeffrey A.
2014-01-01
Augmented Lagrangian (AL) methods for solving convex optimization problems with linear constraints are attractive for imaging applications with composite cost functions due to the empirical fast convergence rate under weak conditions. However, for problems such as X-ray computed tomography (CT) image reconstruction, where the inner least-squares problem is challenging and requires iterations, AL methods can be slow. This paper focuses on solving regularized (weighted) least-squares problems using a linearized variant of AL methods that replaces the quadratic AL penalty term in the scaled augmented Lagrangian with its separable quadratic surrogate (SQS) function, leading to a simpler ordered-subsets (OS) accelerable splitting-based algorithm, OS-LALM. To further accelerate the proposed algorithm, we use a second-order recursive system analysis to design a deterministic downward continuation approach that avoids tedious parameter tuning and provides fast convergence. Experimental results show that the proposed algorithm significantly accelerates the convergence of X-ray CT image reconstruction with negligible overhead and can reduce OS artifacts when using many subsets. PMID:25248178
NASA Astrophysics Data System (ADS)
Ma, Xibo; Tian, Jie; Zhang, Bo; Zhang, Xing; Xue, Zhenwen; Dong, Di; Han, Dong
2011-03-01
Among the many optical molecular imaging modalities, bioluminescence imaging (BLI) is increasingly widely applied in tumor detection and in evaluating pharmacodynamics, toxicity, and pharmacokinetics because of its noninvasive molecular- and cellular-level detection ability, high sensitivity, and low cost in comparison with other imaging technologies. However, BLI cannot present the accurate location and intensity of inner bioluminescence sources, such as those in the bone, liver, or lung. Bioluminescence tomography (BLT) shows its advantage in determining the bioluminescence source distribution inside a small animal or phantom. Given the deficiencies of two-dimensional imaging, we developed three-dimensional tomography to reconstruct the bioluminescence source distribution in transgenic mOC-Luc mouse bone from boundary measured data. In this paper, to study osteocalcin (OC) accumulation in transgenic mOC-Luc mouse bone, a BLT reconstruction method based on a multilevel adaptive finite element (FEM) algorithm was used to localize and quantify multiple bioluminescence sources. Optical and anatomical information about the tissues is incorporated as a priori knowledge in this method, which reduces the ill-posedness of BLT. The data were acquired by a dual-modality BLT and micro-CT prototype system that we developed. Through temperature control and absolute intensity calibration, a relatively accurate intensity can be calculated. The location of the OC accumulation was reconstructed, which was consistent with the principle of bone differentiation. This result was also verified by an ex vivo experiment in a black 96-well plate using the BLI system and a chemiluminescence apparatus.
Comparison of Short-term Complications Between 2 Methods of Coracoclavicular Ligament Reconstruction
Rush, Lane N.; Lake, Nicholas; Stiefel, Eric C.; Hobgood, Edward R.; Ramsey, J. Randall; O’Brien, Michael J.; Field, Larry D.; Savoie, Felix H.
2016-01-01
Background: Numerous techniques have been used to treat acromioclavicular (AC) joint dislocation, with anatomic reconstruction of the coracoclavicular (CC) ligaments becoming a popular method of fixation. Anatomic CC ligament reconstruction is commonly performed with cortical fixation buttons (CFBs) or tendon grafts (TGs). Purpose: To report and compare short-term complications associated with AC joint stabilization procedures using CFBs or TGs. Study Design: Cohort study; Level of evidence, 3. Methods: We conducted a retrospective review of the operative treatment of AC joint injuries between April 2007 and January 2013 at 2 institutions. Thirty-eight patients who had undergone a procedure for AC joint instability were evaluated. In these 38 patients with a mean age of 36.2 years, 18 shoulders underwent fixation using the CFB technique and 20 shoulders underwent reconstruction using the TG technique. Results: The overall complication rate was 42.1% (16/38). There were 11 complications in the 18 patients in the CFB group (61.1%), including 7 construct failures resulting in a loss of reduction. The most common mode of failure was suture breakage (n = 3), followed by button migration (n = 2) and coracoid fracture (n = 2). There were 5 complications in the TG group (25%), including 3 cases of asymptomatic subluxation, 1 symptomatic suture granuloma, and 1 superficial infection. There were no instances of construct failure seen in TG fixations. CFB fixation was found to have a statistically significant increase in complications (P = .0243) and construct failure (P = .002) compared with TG fixation. Conclusion: CFB fixation was associated with a higher rate of failure and higher rate of early complications when compared with TG fixation. PMID:27504468
The Modified 3-square Flap Method for Reconstruction of Toe Syndactyly.
Iida, Naoshige; Watanabe, Ayako
2016-07-01
Bandoh reported the 3-square-flap method as a procedure for interdigital space reconstruction in patients with minor syndactyly. We recently modified this flap design so that it can be used in the treatment of toe syndactyly involving fusion of the areas distal to the proximal interphalangeal joint. With our method, the reconstructed interdigital space consists of 4 oblong flaps (A through D). Flaps A and D are designed on the dorsal side, flap B on the frontal plane of the interdigital space, and flap C on the plantar side. Flaps A, B, and C are raised immediately below the dermis in a manner that allows a small amount of fat tissue to adhere to each flap. Flap D is freed only to the degree minimally needed for transposition, leaving a thick subcutaneous pedicle. Flaps A, B, and C are each folded at 90 degrees; flap D is transposed to the proximal plane of the reconstructed digit, followed by skin suturing. In this process, suturing is avoided between flaps A and C, between flaps A and D, and between flaps B and D. From 2011 to 2015, we treated 8 patients with toe syndactyly involving fusion distal to the proximal interphalangeal joint. Cases of congenital syndactyly received surgery between the ages of 8 and 11 months. Using this technique, flap ischemia/necrosis was not observed. During the postoperative follow-up period, the interdigital space retained sufficient depth without developing any scar contracture. No case required additional surgery. PMID:27536472
NASA Astrophysics Data System (ADS)
Yao, Yibin; Tang, Jun; Kong, Jian; Zhang, Liang; Zhang, Shun
2013-12-01
Reconstructing ionospheric electron density (IED) is an ill-posed inverse problem, and classical Tikhonov regularization tends to smooth IED structures. By contrast, total variation (TV) regularization effectively resists noise and preserves discontinuities in the IED. In this paper, we regularize the inverse problem by incorporating both Tikhonov and TV regularization. A specific formulation of the proposed method, called hybrid regularization, is introduced and investigated. The method is then tested using simulated data for the actual positions of the GPS satellites and ground receivers, and also applied to the analysis of real observation data under quiescent and disturbed ionospheric conditions. Experiments demonstrate the effectiveness, validity, and reliability of the proposed method.
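A minimal 1-D sketch of the hybrid idea: a quadratic (Tikhonov) term plus a smoothed total-variation term, minimized by plain gradient descent on a denoising problem. The weights, smoothing parameter, and step size below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
clean = np.concatenate([np.zeros(50), np.ones(50)])   # piecewise-constant profile
y = clean + 0.1 * rng.standard_normal(100)            # noisy observation

alpha, beta, eps, step = 1e-3, 0.1, 0.05, 0.1

def cost(x):
    dx = np.diff(x)
    return (0.5 * np.sum((x - y) ** 2)
            + 0.5 * alpha * np.sum(x ** 2)                # Tikhonov term
            + beta * np.sum(np.sqrt(dx ** 2 + eps ** 2)))  # smoothed TV term

def grad(x):
    dx = np.diff(x)
    w = dx / np.sqrt(dx ** 2 + eps ** 2)   # derivative of smoothed |dx|
    g_tv = np.zeros_like(x)
    g_tv[:-1] -= w
    g_tv[1:] += w
    return (x - y) + alpha * x + beta * g_tv

x = y.copy()
c0 = cost(x)
for _ in range(500):
    x = x - step * grad(x)                 # plain gradient descent
```

The Tikhonov term stabilizes the inversion while the TV term lets the reconstruction keep sharp edges, which is the trade-off the hybrid scheme exploits for IED structures.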
3D shape reconstruction of medical images using a perspective shape-from-shading method
NASA Astrophysics Data System (ADS)
Yang, Lei; Han, Jiu-qiang
2008-06-01
A 3D shape reconstruction approach for medical images using a shape-from-shading (SFS) method was proposed in this paper. A new reflectance map equation for medical images was analyzed under the assumptions that the Lambertian reflectance surface was irradiated by a point light source located at the optical center and that the image was formed under perspective projection. The corresponding static Hamilton-Jacobi (H-J) equation of the reflectance map equation was established, so the shape-from-shading problem reduced to solving for the viscosity solution of the static H-J equation. Then, using a vanishing-viscosity approximation, the Lax-Friedrichs fast sweeping numerical method was used to compute the viscosity solution of the H-J equation, and a new iterative SFS algorithm was obtained. Finally, experiments on both synthetic images and real medical images were performed to illustrate the efficiency of the proposed SFS method.
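As a self-contained stand-in for the paper's Lax-Friedrichs sweeping on the perspective-SFS H-J equation, the closely related Gauss-Seidel fast sweeping scheme for the simpler eikonal equation |grad u| = 1 (whose viscosity solution is distance from the source) illustrates the alternating-sweep mechanics; the grid size and source location are assumptions for illustration:

```python
import numpy as np

n, h = 41, 1.0 / 40
u = np.full((n, n), 1e10)       # large initial guess everywhere
src = (20, 20)
u[src] = 0.0                    # boundary condition: u = 0 at the source

def local_update(a, b, f, h):
    # Upwind solve of |grad u| = f at one node, given the smaller
    # neighbor values a (x-direction) and b (y-direction)
    if abs(a - b) >= f * h:
        return min(a, b) + f * h
    return 0.5 * (a + b + np.sqrt(2.0 * f * f * h * h - (a - b) ** 2))

orders = (range(n), range(n - 1, -1, -1))
for _ in range(4):              # a few rounds over the 4 sweep orderings
    for order_i in orders:
        for order_j in orders:
            for i in order_i:
                for j in order_j:
                    if (i, j) == src:
                        continue
                    a = min(u[max(i - 1, 0), j], u[min(i + 1, n - 1), j])
                    b = min(u[i, max(j - 1, 0)], u[i, min(j + 1, n - 1)])
                    u[i, j] = min(u[i, j], local_update(a, b, 1.0, h))
```

Each of the four sweep orderings propagates characteristics in one quadrant of directions, which is why only a handful of sweeps are needed for convergence.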
Matros, Evan; Albornoz, Claudia R; Rensberger, Michael; Weimer, Katherine; Garfein, Evan S
2014-06-01
There is increased clinical use of computer-assisted design (CAD) and computer-assisted modeling (CAM) for osseous flap reconstruction, particularly in the head and neck region. Limited information exists about methods to optimize the application of this new technology and for cases in which it may be advantageous over existing methods of osseous flap shaping. A consecutive series of osseous reconstructions planned with CAD/CAM over the past 5 years was analyzed. Conceptual considerations and refinements in the CAD/CAM process were evaluated. A total of 48 reconstructions were performed using CAD/CAM. The majority of cases were performed for head and neck tumor reconstruction or related complications whereas the remainder (4%) were performed for penetrating trauma. Defect location was the mandible (85%), maxilla (12.5%), and pelvis (2%). Reconstruction was performed immediately in 73% of the cases and delayed in 27% of the cases. The mean number of osseous flap bone segments used in reconstruction was 2.41. Areas of optimization include the following: mandible cutting guide placement, osteotomy creation, alternative planning, and saw blade optimization. Identified benefits of CAD/CAM over current techniques include the following: delayed timing, anterior mandible defects, specimen distortion, osteotomy creation in three dimensions, osteotomy junction overlap, plate adaptation, and maxillary reconstruction. Experience with CAD/CAM for osseous reconstruction has identified tools for technique optimization and cases where this technology may prove beneficial over existing methods. Knowledge of these facts may contribute to improved use and mainstream adoption of CAD/CAM virtual surgical planning by reconstructive surgeons. PMID:24323480
Miyata, Y; Suzuki, T; Takechi, M; Urano, H; Ide, S
2015-07-01
For the purpose of stable plasma equilibrium control and detailed analysis, it is essential to reconstruct an accurate plasma boundary on the poloidal cross section in tokamak devices. The Cauchy condition surface (CCS) method is a numerical approach for calculating the spatial distribution of the magnetic flux outside a hypothetical surface and reconstructing the plasma boundary from the magnetic measurements located outside the plasma. The accuracy of the plasma shape reconstruction has been assessed by comparing the CCS method and an equilibrium calculation in JT-60SA with a high elongation and triangularity of plasma shape. The CCS, on which both Dirichlet and Neumann conditions are unknown, is defined as a hypothetical surface located inside the real plasma region. The accuracy of the plasma shape reconstruction is sensitive to the CCS free parameters such as the number of unknown parameters and the shape in JT-60SA. It is found that the optimum number of unknown parameters and the size of the CCS that minimizes errors in the reconstructed plasma shape are in proportion to the plasma size. Furthermore, it is shown that the accuracy of the plasma shape reconstruction is greatly improved using the optimum number of unknown parameters and shape of the CCS, and the reachable reconstruction errors in plasma shape and locations of strike points are within the target ranges in JT-60SA. PMID:26233387
Toxic Diatom Aldehydes Affect Defence Gene Networks in Sea Urchins.
Varrella, Stefano; Romano, Giovanna; Costantini, Susan; Ruocco, Nadia; Ianora, Adrianna; Bentley, Matt G; Costantini, Maria
2016-01-01
Marine organisms possess a series of cellular strategies to counteract the negative effects of toxic compounds, including the massive reorganization of gene expression networks. Here we report the modulated, dose-dependent response of genes activated by diatom polyunsaturated aldehydes (PUAs) in the sea urchin Paracentrotus lividus. PUAs are secondary metabolites derived from the oxidation of fatty acids; they induce deleterious effects on the reproduction and development of planktonic and benthic organisms that feed on these unicellular algae, and they also have anti-cancer activity. Our previous results showed that PUAs target several genes implicated in different functional processes in this sea urchin. Using interactomic Ingenuity Pathway Analysis we now show that the genes targeted by PUAs are correlated with four HUB genes, NF-κB, p53, δ-2-catenin and HIF1A, which have not been previously reported for P. lividus. We propose a working model describing hypothetical pathways potentially involved in the toxic aldehyde stress response in sea urchins. This represents the first report on gene networks affected by PUAs, opening new perspectives in understanding the cellular mechanisms underlying the response of benthic organisms to diatom exposure. PMID:26914213
An Arabidopsis gene network based on the graphical Gaussian model
Ma, Shisong; Gong, Qingqiu; Bohnert, Hans J.
2007-01-01
We describe a gene network for the Arabidopsis thaliana transcriptome based on a modified graphical Gaussian model (GGM). Through partial correlation (pcor), GGM infers coregulation patterns between gene pairs conditional on the behavior of other genes. Regularized GGM calculated pcor between gene pairs among ∼2000 input genes at a time. Regularized GGM coupled with iterative random samplings of genes was expanded into a network that covered the Arabidopsis genome (22,266 genes). This resulted in a network of 18,625 interactions (edges) among 6760 genes (nodes) with high confidence and connections representing ∼0.01% of all possible edges. When queried for selected genes, locally coherent subnetworks mainly related to metabolic functions, and stress responses emerged. Examples of networks for biochemical pathways, cell wall metabolism, and cold responses are presented. GGM displayed known coregulation pathways as subnetworks and added novel components to known edges. Finally, the network reconciled individual subnetworks in a topology joined at the whole-genome level and provided a general framework that can instruct future studies on plant metabolism and stress responses. The network model is included. PMID:17921353
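The GGM edge statistic, partial correlation, falls out of the precision (inverse covariance) matrix as pcor_ij = -P_ij / sqrt(P_ii * P_jj). A small synthetic sketch follows; the chain structure, placeholder "genes," and sample size are assumptions for illustration, not the paper's regularized, genome-scale subsampling procedure:

```python
import numpy as np

rng = np.random.default_rng(2)
# Chain dependence g0 -> g1 -> g2: g0 and g2 correlate only through g1,
# so their partial correlation conditional on g1 should vanish
g0 = rng.standard_normal(5000)
g1 = g0 + 0.5 * rng.standard_normal(5000)
g2 = g1 + 0.5 * rng.standard_normal(5000)
X = np.column_stack([g0, g1, g2])

P = np.linalg.inv(np.cov(X, rowvar=False))   # precision matrix
d = np.sqrt(np.diag(P))
pcor = -P / np.outer(d, d)                   # pcor_ij = -P_ij / sqrt(P_ii P_jj)
np.fill_diagonal(pcor, 1.0)
```

Unlike the plain correlation (which is high for all three pairs here), the partial correlation between g0 and g2 is near zero, which is how GGM distinguishes direct coregulation from indirect association.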
Genes and gene networks implicated in aggression related behaviour.
Malki, Karim; Pain, Oliver; Du Rietz, Ebba; Tosto, Maria Grazia; Paya-Cano, Jose; Sandnabba, Kenneth N; de Boer, Sietse; Schalkwyk, Leonard C; Sluyter, Frans
2014-10-01
Aggressive behaviour is a major cause of mortality and morbidity. Despite moderate heritability estimates, progress in identifying the genetic factors underlying aggressive behaviour has been limited. There are currently three genetic mouse models of high and low aggression created using selective breeding. This is the first study to offer a global transcriptomic characterization of the prefrontal cortex across all three genetic mouse models of aggression. A systems biology approach has been applied to transcriptomic data across the three pairs of selected inbred mouse strains (Turku Aggressive (TA) and Turku Non-Aggressive (TNA), Short Attack Latency (SAL) and Long Attack Latency (LAL) mice, and North Carolina Aggressive (NC900) and North Carolina Non-Aggressive (NC100)), providing novel insight into the neurobiological mechanisms and genetics underlying aggression. First, weighted gene co-expression network analysis (WGCNA) was performed to identify modules of highly correlated genes associated with aggression. Probe sets belonging to gene modules uncovered by WGCNA were carried forward for network analysis using Ingenuity Pathway Analysis (IPA). The RankProd non-parametric algorithm was then used to statistically evaluate expression differences across the genes belonging to modules significantly associated with aggression. IPA uncovered two pathways, involving NF-kB and MAPKs. The secondary RankProd analysis yielded 14 differentially expressed genes, some of which, such as Adrbk2, have previously been implicated in pathways associated with aggressive behaviour. The results highlight plausible candidate genes and gene networks implicated in aggression-related behaviour. PMID:25142712
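The first WGCNA steps the abstract mentions, soft-thresholding the gene-gene correlation matrix into a weighted adjacency and clustering it into modules, can be sketched on toy data. The soft power beta = 6 and the two-module expression matrix are assumptions for illustration; full WGCNA additionally uses topological overlap and dynamic tree cutting:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(3)
n_samples = 60
base1 = rng.standard_normal(n_samples)        # latent driver of module 1
base2 = rng.standard_normal(n_samples)        # latent driver of module 2
expr = np.column_stack(
    [base1 + 0.3 * rng.standard_normal(n_samples) for _ in range(10)]
    + [base2 + 0.3 * rng.standard_normal(n_samples) for _ in range(10)]
)                                             # samples x genes

corr = np.corrcoef(expr, rowvar=False)
adjacency = np.abs(corr) ** 6                 # soft thresholding, beta = 6
dist = 1.0 - adjacency                        # dissimilarity between genes

# Condensed (upper-triangular) distances for scipy's hierarchical clustering
iu = np.triu_indices_from(dist, k=1)
Z = linkage(dist[iu], method="average")
modules = fcluster(Z, t=2, criterion="maxclust")
```

Soft thresholding suppresses weak, noise-level correlations far more than strong ones, so the clustering step recovers the two planted co-expression modules cleanly.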
Mechanistically Consistent Reduced Models of Synthetic Gene Networks
Mier-y-Terán-Romero, Luis; Silber, Mary; Hatzimanikatis, Vassily
2013-01-01
Designing genetic networks with desired functionalities requires an accurate mathematical framework that accounts for the essential mechanistic details of the system. Here, we formulate a time-delay model of protein translation and mRNA degradation by systematically reducing a detailed mechanistic model that explicitly accounts for the ribosomal dynamics and the cleaving of mRNA by endonucleases. We exploit various technical and conceptual advantages that our time-delay model offers over the mechanistic model to probe the behavior of a self-repressing gene over wide regions of parameter space. We show that a heuristic time-delay model of protein synthesis of a commonly used form yields a notably different prediction for the parameter region where sustained oscillations occur. This suggests that such heuristics can lead to erroneous results. The functional forms that arise from our systematic reduction can be used for every system that involves transcription and translation and they could replace the commonly used heuristic time-delay models for these processes. The results from our analysis have important implications for the design of synthetic gene networks and stress that such design must be guided by a combination of heuristic models and mechanistic models that include all relevant details of the process. PMID:23663853
Comparative genomics of mammalian hibernators using gene networks.
Villanueva-Cañas, José Luis; Faherty, Sheena L; Yoder, Anne D; Albà, M Mar
2014-09-01
In recent years, the study of the molecular processes involved in mammalian hibernation has shifted from investigating a few carefully selected candidate genes to large-scale analysis of differential gene expression. The availability of high-throughput data provides an unprecedented opportunity to ask whether phylogenetically distant species show similar mechanisms of genetic control, and how these relate to particular genes and pathways involved in the hibernation phenotype. In order to address these questions, we compare 11 datasets of differentially expressed (DE) genes from two ground squirrel species, one bat species, and the American black bear, as well as a list of genes extracted from the literature that previously have been correlated with the drastic physiological changes associated with hibernation. We identify several genes that are DE in different species, indicating either ancestral adaptations or evolutionary convergence. When we use a network approach to expand the original datasets of DE genes to large gene networks using available interactome data, a higher agreement between datasets is achieved. This indicates that the same key pathways are important for activating and maintaining the hibernation phenotype. Functional-term-enrichment analysis identifies several important metabolic and mitochondrial processes that are critical for hibernation, such as fatty acid beta-oxidation and mitochondrial transport. We do not detect any enrichment of positive selection signatures in the coding sequences of genes from the networks of hibernation-associated genes, supporting the hypothesis that the genetic processes shaping the hibernation phenotype are driven primarily by changes in gene regulation. PMID:24881044
Programmable cells: Interfacing natural and engineered gene networks
NASA Astrophysics Data System (ADS)
Kobayashi, Hideki; Kærn, Mads; Araki, Michihiro; Chung, Kristy; Gardner, Timothy S.; Cantor, Charles R.; Collins, James J.
2004-06-01
Novel cellular behaviors and characteristics can be obtained by coupling engineered gene networks to the cell's natural regulatory circuitry through appropriately designed input and output interfaces. Here, we demonstrate how an engineered genetic circuit can be used to construct cells that respond to biological signals in a predetermined and programmable fashion. We employ a modular design strategy to create Escherichia coli strains where a genetic toggle switch is interfaced with: (i) the SOS signaling pathway responding to DNA damage, and (ii) a transgenic quorum sensing signaling pathway from Vibrio fischeri. The genetic toggle switch endows these strains with binary response dynamics and an epigenetic inheritance that supports a persistent phenotypic alteration in response to transient signals. These features are exploited to engineer cells that form biofilms in response to DNA-damaging agents and cells that activate protein synthesis when the cell population reaches a critical density. Our work represents a step toward the development of "plug-and-play" genetic circuitry that can be used to create cells with programmable behaviors. heterologous gene expression | synthetic biology | Escherichia coli
Cross-Tissue Regulatory Gene Networks in Coronary Artery Disease.
Talukdar, Husain A; Foroughi Asl, Hassan; Jain, Rajeev K; Ermel, Raili; Ruusalepp, Arno; Franzén, Oscar; Kidd, Brian A; Readhead, Ben; Giannarelli, Chiara; Kovacic, Jason C; Ivert, Torbjörn; Dudley, Joel T; Civelek, Mete; Lusis, Aldons J; Schadt, Eric E; Skogsberg, Josefin; Michoel, Tom; Björkegren, Johan L M
2016-03-23
Inferring molecular networks can reveal how genetic perturbations interact with environmental factors to cause common complex diseases. We analyzed genetic and gene expression data from seven tissues relevant to coronary artery disease (CAD) and identified regulatory gene networks (RGNs) and their key drivers. By integrating data from genome-wide association studies, we identified 30 CAD-causal RGNs interconnected in vascular and metabolic tissues, and we validated them with corresponding data from the Hybrid Mouse Diversity Panel. As proof of concept, by targeting the key drivers AIP, DRAP1, POLR2I, and PQBP1 in a cross-species-validated, arterial-wall RGN involving RNA-processing genes, we re-identified this RGN in THP-1 foam cells and independent data from CAD macrophages and carotid lesions. This characterization of the molecular landscape in CAD will help better define the regulation of CAD candidate genes identified by genome-wide association studies and is a first step toward achieving the goals of precision medicine. PMID:27135365
NASA Astrophysics Data System (ADS)
Pan, Qi; Liu, De-Jun; Guo, Zhi-Yong; Fang, Hua-Feng; Feng, Mu-Qun
2016-06-01
In the model of a horizontal straight pipeline of finite length, the segmentation of the pipeline elements is a significant factor in the accuracy and rapidity of the forward modeling and inversion processes, but the existing pipeline segmentation method is very time-consuming. This paper proposes a section segmentation method to study the characteristics of pipeline magnetic anomalies, and the effect of model parameters on these anomalies, as a way to enhance computational performance and accelerate the convergence of the inversion. Forward models using the piece segmentation method and the section segmentation method, both based on magnetic dipole reconstruction (MDR), are established for comparison. The results show that the magnetic anomalies calculated by the two segmentation methods are almost the same regardless of measuring height and of variations in the inclination and declination of the pipeline. In the optimized inversion procedure, the results for simulation data calculated by the two methods agree with the synthetic data from the original model, and the inversion accuracies of the burial depths are approximately equal for both. The proposed method is more computationally efficient than the piece segmentation method; in other words, the section segmentation method meets the precision requirements for detecting pipelines by magnetic anomalies while reducing the computation time of the whole process.
Separating the spectra of binary stars I. A simple method: Secondary reconstruction
NASA Astrophysics Data System (ADS)
Ferluga, S.; Floreano, L.; Bravar, U.; Bedalo, C.
1997-01-01
We present a practical method for the analysis of spectroscopic binaries, reconstructing the lines of the two components of the system. We show that the problem of separating binary spectra can be solved in an easy way under most common conditions. One pair of observations may be sufficient if taken at different orbital phases of the system, preferably at opposite quadratures. The separation procedure is discussed analytically, and a technique is described that restores the secondary lines in a few steps. An algorithm is also provided that derives the radial velocity of the secondary star by directly analysing a difference line profile obtained from the two input spectra. The efficiency of the method is tested by reconstructing both artificial line profiles and simulated binary spectra. The procedure is then applied to the eclipsing binary IZ Per, revealing for the first time its faint secondary spectrum. Based on observations performed at the Observatoire de Haute Provence (OHP) and made available through the Trieste-Aurelie-Archive (TAA).
Community Phylogenetics: Assessing Tree Reconstruction Methods and the Utility of DNA Barcodes
Boyle, Elizabeth E.; Adamowicz, Sarah J.
2015-01-01
Studies examining phylogenetic community structure have become increasingly prevalent, yet little attention has been given to the influence of the input phylogeny on metrics that describe phylogenetic patterns of co-occurrence. Here, we examine the influence of branch length, tree reconstruction method, and amount of sequence data on measures of phylogenetic community structure, as well as the phylogenetic signal (Pagel’s λ) in morphological traits, using Trichoptera larval communities from Churchill, Manitoba, Canada. We find that model-based tree reconstruction methods and the use of a backbone family-level phylogeny improve estimations of phylogenetic community structure. In addition, trees built using the barcode region of cytochrome c oxidase subunit I (COI) alone accurately predict metrics of phylogenetic community structure obtained from a multi-gene phylogeny. Input tree did not alter overall conclusions drawn for phylogenetic signal, as significant phylogenetic structure was detected in two body size traits across input trees. As the discipline of community phylogenetics continues to expand, it is important to investigate the best approaches to accurately estimate patterns. Our results suggest that emerging large datasets of DNA barcode sequences provide a vast resource for studying the structure of biological communities. PMID:26110886
NASA Astrophysics Data System (ADS)
Liu, Qi; Ge, Yi Nan; Wang, Tian Fu; Zheng, Chang Qiong; Zheng, Yi
2005-10-01
Based on two-dimensional color Doppler imaging, a multiplane transesophageal rotational scanning method is used to acquire original Doppler echocardiograms while the electrocardiogram is recorded synchronously. After filtering and interpolation, surface rendering and volume rendering are performed. By analyzing the color-bar information and the superposition principle of the color Doppler flow image, the grayscale mitral anatomical structure and the color-coded regurgitation velocity parameter were separated from the color Doppler flow images. Three-dimensional reconstruction of the mitral structure and of the regurgitation velocity distribution was implemented separately, and fused visualization of the reconstructed regurgitation velocity distribution with its corresponding 3D mitral anatomical structure was realized. This can be used to observe the position, phase, and direction of mitral regurgitation and to measure the jet length, area, volume, spatial distribution, and severity level. In addition, in patients with eccentric mitral regurgitation, this new modality overcomes the inherent limitations of two-dimensional color Doppler flow imaging by depicting the full extent of the jet trajectory: the area of eccentric regurgitation on the three-dimensional image was much larger than that on the two-dimensional image, and the variation of regurgitation area and volume is shown at different angles and different systolic phases. The study shows that three-dimensional color Doppler provides quantitative measurements of eccentric mitral regurgitation that are more accurate and reproducible than conventional color Doppler.
NASA Astrophysics Data System (ADS)
Xu, Qiaofeng; Sawatzky, Alex; Anastasio, Mark A.
2014-03-01
In this work, we report on the development of an advanced multi-channel (MC) image reconstruction algorithm for grating-based X-ray phase-contrast computed tomography (GB-XPCT). The MC reconstruction method we have developed operates by concurrently, rather than independently as is done conventionally, reconstructing tomographic images of the three object properties (absorption, small-angle scattering, refractive index). By jointly estimating the object properties by use of an appropriately defined penalized weighted least squares (PWLS) estimator, the 2nd order statistical properties of the object property sinograms, including correlations between them, can be fully exploited to improve the variance vs. resolution tradeoff of the reconstructed images as compared to existing methods. Channel-independent regularization strategies are proposed. To solve the MC reconstruction problem, we developed an advanced algorithm based on the proximal point algorithm and the augmented Lagrangian method. By use of experimental and computer-simulation data, we demonstrate that by exploiting inter-channel noise correlations, the MC reconstruction method can improve image quality in GB-XPCT.
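The PWLS idea at the core of the method can be sketched in a heavily simplified, single-channel form as a generic penalized weighted least-squares solve. The problem sizes, heteroscedastic noise model and Tikhonov penalty below are illustrative assumptions, not the authors' GB-XPCT formulation:

```python
import numpy as np

def pwls_solve(A, y, w, beta):
    """Penalized weighted least-squares estimate:
    minimize (A x - y)^T W (A x - y) + beta * ||x||^2, with W = diag(w).
    Solved in closed form via the normal equations."""
    W = np.diag(w)
    H = A.T @ W @ A + beta * np.eye(A.shape[1])
    return np.linalg.solve(H, A.T @ W @ y)

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 10))                      # toy system matrix
x_true = rng.normal(size=10)
noise_var = 0.01 + 0.09 * rng.random(40)           # per-measurement variance
y = A @ x_true + rng.normal(size=40) * np.sqrt(noise_var)
# Weights are inverse noise variances, so noisier rays count for less.
x_hat = pwls_solve(A, y, w=1.0 / noise_var, beta=1e-3)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

In the multi-channel setting of the paper, W additionally carries the inter-channel covariances, which is what allows the correlations between the absorption, scattering and refraction sinograms to be exploited.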
Reconstructive and rehabilitating methods in patients with dysphagia and nutritional disturbances
Motsch, Christiane
2005-01-01
The causes of oropharyngeal dysphagia are as diverse as the range of potential therapeutic approaches is broad. In the past two decades, methods of plastic-reconstructive surgery, in particular microsurgically revascularised tissue transfer and minimally invasive, endoscopic techniques of every hue, have substantially added to the portfolio of reconstructive surgery available for rehabilitating deglutition. Numerically, reconstructing the pharyngolaryngeal tract following resection of squamous-cell carcinomas in the oral cavity, the pharynx and the larynx has been gaining ground, as has functional deglutitive therapy performed to treat posttherapeutic sequelae. Dysphagia and malnutrition are closely interrelated. Every third patient hospitalised in Germany suffers from malnutrition; ENT tumour patients are no exception. In patients with advancing malnutrition, mortality, morbidity and the individual complication rate have all been observed to increase; a longer hospital stay, poorer tolerance of treatment, diminished immunocompetence, impaired general physical and psychical condition and, thus, a less favourable prognosis overall have also been noted. Therefore, in oncological patients, dietotherapy has to assume a key role in supportive treatment. Especially for patients who are expected to go through a long process of deglutitive rehabilitation, enteral nutrition through percutaneous endoscopically controlled gastrostomy (PEG), performed at an early stage, can provide useful and efficient support to the therapeutic efforts. Nutrition and oncology are mutually influencing fields in which, sooner or later, a change of paradigm will have to take place, i.e. gradually switching from therapy to prevention. While cancer causes malnutrition, feasible changes in feeding and nutrition-associated habits, including habitual drinking and smoking, might lower the incidence of cancer worldwide by 30
High resolution image reconstruction method for a double-plane PET system with changeable spacing
NASA Astrophysics Data System (ADS)
Gu, Xiao-Yue; Zhou, Wei; Li, Lin; Wei, Long; Yin, Peng-Fei; Shang, Lei-Min; Yun, Ming-Kai; Lu, Zhen-Rui; Huang, Xian-Chao
2016-05-01
Breast-dedicated positron emission tomography (PET) imaging techniques have been developed in recent years. Their capacity to detect millimeter-sized breast tumors has been the subject of many studies, and some have been confirmed with good results in clinical applications. With regard to biopsy application, a double-plane detector arrangement is practicable, as it offers the convenience of breast immobilization. However, the serious blurring effect of the double-plane PET, with changeable spacing for different breast sizes, should be studied. We investigated a high resolution reconstruction method applicable to a double-plane PET in which the distance between the detector planes is changeable. Geometric and blurring components were calculated in real time for different detector distances, and accurate geometric sensitivity was obtained with a new tube area model. Resolution recovery was achieved by estimating blurring effects derived from simulated single gamma response information. The results showed that the new geometric modeling yielded smoother and better-bounded sensitivity weights in the double-plane PET. The blurring component yielded contrast recovery levels that could not be reached without blurring modeling, and improved visual recovery of the smallest spheres and better delineation of structures in the reconstructed images were achieved with it. At matched resolution, statistical noise had lower voxel-level variance with blurring modeling than without. In distance-changeable double-plane PET, finite resolution modeling during reconstruction achieved resolution recovery without noise amplification. Supported by the Knowledge Innovation Project of the Chinese Academy of Sciences (KJCX2-EW-N06)
NASA Technical Reports Server (NTRS)
Spiegel, Seth C.; Huynh, H. T.; DeBonis, James R.
2015-01-01
High-order methods are quickly becoming popular for turbulent flows as the amount of computer processing power increases. The flux reconstruction (FR) method presents a unifying framework for a wide class of high-order methods including discontinuous Galerkin (DG), Spectral Difference (SD), and Spectral Volume (SV). It offers a simple, efficient, and easy way to implement nodal-based methods that are derived via the differential form of the governing equations. Whereas high-order methods have enjoyed recent success, they have been known to introduce numerical instabilities due to polynomial aliasing when applied to under-resolved nonlinear problems. Aliasing errors have been extensively studied in reference to DG methods; however, their study regarding FR methods has mostly been limited to the selection of the nodal points used within each cell. Here, we extend some of the de-aliasing techniques used for DG methods, primarily over-integration, to the FR framework. Our results show that over-integration does remove aliasing errors but may not remove all instabilities caused by insufficient resolution (for FR as well as DG).
NASA Astrophysics Data System (ADS)
Szalay, Viktor
1999-11-01
The reconstruction of a function from knowing only its values on a finite set of grid points, that is, the construction of an analytical approximation reproducing the function with good accuracy everywhere within the sampled volume, is an important problem in all branches of science. One such problem in chemical physics is the determination of an analytical representation of Born-Oppenheimer potential energy surfaces by ab initio calculations, which give the value of the potential at a finite set of grid points in configuration space. This article describes the rudiments of iterative and direct methods of potential surface reconstruction. The major new results are the derivation, numerical demonstration, and interpretation of a reconstruction formula. The reconstruction formula derived approximates the unknown function, say V, by a linear combination of functions obtained by discretizing the continuous distributed approximating functional (DAF) approximation of V over the grid of sampling. The simplest of contracted and ordinary Hermite-DAFs are shown to be sufficient for reconstruction. The linear combination coefficients can be obtained either iteratively or directly by finding the minimal norm least-squares solution of a linear system of equations. Several numerical examples of reconstructing functions of one and two variables and of very different shapes are given. The examples demonstrate the robustness and high accuracy, as well as the caveats, of the proposed method. As to the mathematical foundation of the method, it is shown that the reconstruction formula can be interpreted as, and in fact is, a frame expansion. By recognizing the relevance of frames in determining analytical approximations to potential energy surfaces, an extremely rich and beautiful toolbox of mathematics has been placed at our disposal. Thus, the simple reconstruction method derived in this paper can be refined, extended, and improved in numerous ways.
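The least-squares path described above can be sketched with Gaussian bumps centred on the sampling grid standing in for the Hermite-DAF basis; the kernel width, test function and grid below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def fit_coeffs(x_grid, v_grid, sigma):
    """Minimal-norm least-squares coefficients for an expansion of the
    sampled function in Gaussian bumps centred on the grid points."""
    # Design matrix: basis functions evaluated at the sample points.
    B = np.exp(-((x_grid[:, None] - x_grid[None, :]) ** 2) / (2 * sigma**2))
    c, *_ = np.linalg.lstsq(B, v_grid, rcond=None)
    return c

def evaluate(x, x_grid, c, sigma):
    """Evaluate the fitted expansion anywhere, including off-grid points."""
    B = np.exp(-((np.atleast_1d(x)[:, None] - x_grid[None, :]) ** 2) / (2 * sigma**2))
    return B @ c

x_grid = np.linspace(-1.0, 1.0, 41)
v_grid = np.sin(3 * x_grid)                      # stand-in for ab initio samples
c = fit_coeffs(x_grid, v_grid, sigma=0.1)
x_test = np.linspace(-0.9, 0.9, 200)             # off-grid evaluation points
err = np.max(np.abs(evaluate(x_test, x_grid, c, sigma=0.1) - np.sin(3 * x_test)))
print(err)
```

The minimal-norm least-squares solve (here via `np.linalg.lstsq`) is what keeps the expansion stable when the basis is nearly linearly dependent, which is the same role the frame interpretation plays in the paper.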
[Methods and importance of volume measurement in reconstructive and aesthetic breast surgery].
Kunos, Csaba; Gulyás, Gusztáv; Pesthy, Pál; Kovács, Eszter; Mátrai, Zoltán
2014-03-16
Volume measurement of the breast allows for better surgical planning and implant selection in breast reconstructive and symmetrization procedures. The safety and accuracy of tumor removal, in accordance with oncoplastic principles, may be improved by knowing the true breast and breast tumor volume. The authors discuss the methods of volume measurement of the breast and describe in detail a method based on digital volume measurement with magnetic resonance imaging. The volume of the breast parenchyma and the tumor was determined by processing the diagnostic magnetic resonance scans, and the difference in the volume of the two breasts was measured. Surgery was planned and implant selection was made based on the measured volume details. The authors conclude that digital volume measurement proved to be a valuable tool in the preoperative planning of volume-reducing mammaplasty, in the replacement of implants of unknown size, and in cases when breast asymmetry is treated. PMID:24613775
Defect detection for corner cracks in steel billets using a wavelet reconstruction method.
Jeon, Yong-Ju; Choi, Doo-chul; Lee, Sang Jun; Yun, Jong Pil; Kim, Sang Woo
2014-02-01
Presently, automatic inspection algorithms are widely used to ensure high-quality products and achieve high productivity in the steelmaking industry. In this paper, we propose a vision-based method for detecting corner cracks on the surface of steel billets. Because of the presence of scales composed of oxidized substances, the billet surfaces are not uniform and vary considerably with the lighting conditions. To minimize the influence of scales and improve the accuracy of detection, a detection method based on a visual inspection algorithm is proposed. Wavelet reconstruction is used to reduce the effect of scales. Texture and morphological features are used to identify the corner cracks among the defective candidates. Finally, the experimental results show that the proposed algorithm is effective in detecting corner cracks on the surfaces of the steel billets. PMID:24562019
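The role of wavelet reconstruction in suppressing fine-scale texture can be sketched with a one-level 2-D Haar transform: decompose, drop the detail bands, and invert. This is a generic sketch, not the authors' inspection pipeline, and the synthetic "defect" image is invented for illustration:

```python
import numpy as np

def haar2_level(img):
    """One level of the 2-D Haar transform (image sides must be even)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0      # approximation band
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return ll, lh, hl, hh

def haar2_inverse(ll, lh, hl, hh):
    """Exact inverse of haar2_level."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2] = ll + lh
    a[:, 1::2] = ll - lh
    d = np.empty_like(a)
    d[:, 0::2] = hl + hh
    d[:, 1::2] = hl - hh
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2, :] = a + d
    img[1::2, :] = a - d
    return img

rng = np.random.default_rng(1)
img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0                          # a bright "defect" region
noisy = img + 0.2 * rng.normal(size=img.shape)   # fine-scale "scale" texture
ll, lh, hl, hh = haar2_level(noisy)
smoothed = haar2_inverse(ll, 0 * lh, 0 * hl, 0 * hh)  # drop detail bands
print(np.std(smoothed - img), np.std(noisy - img))
```

Zeroing the detail bands averages each 2x2 block, which halves the standard deviation of uncorrelated pixel noise while leaving the coarse defect region largely intact.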
NASA Astrophysics Data System (ADS)
Bernal, E. J.; Martinod, R. M.; Betancur, G. R.; Castañeda, L. F.
2016-05-01
The present work proposes a method for the measurement of geometric parameters of rail wheels in a dynamic condition, by reconstructing the profilogram from a portion of the wheel surface wear with artificial vision. The suggested procedure can work with a two-dimensional laser displacement transducer or by processing a single image from one camera with a structured light source. These two procedures require fewer devices and simpler implementation processes and allow the use of mathematical algorithms that demand less information processing, thus generating more accurate results. Railway operators may implement this method to perform predictive maintenance on their rolling stock at a fraction of the regular cost, thus achieving better precision, availability, maintenance performance and improved safety. Results were compared to those given by commercial equipment, showing similar precision but a better cost-benefit relation.
NASA Astrophysics Data System (ADS)
Zhang, Han-Ming; Wang, Lin-Yuan; Yan, Bin; Li, Lei; Xi, Xiao-Qi; Lu, Li-Zhong
2013-07-01
Linear scan computed tomography (LCT) is of great benefit to online industrial scanning and security inspection due to its characteristics of straight-line source trajectory and high scanning speed. However, in practical applications of LCT, there are challenges to image reconstruction due to limited-angle and insufficient data. In this paper, a new reconstruction algorithm based on total-variation (TV) minimization is developed to reconstruct images from limited-angle and insufficient data in LCT. The main idea of our approach is to reformulate a TV problem as a linear equality constrained problem where the objective function is separable, and then minimize its augmented Lagrangian function by using alternating direction method (ADM) to solve subproblems. The proposed method is robust and efficient in the task of reconstruction by showing the convergence of ADM. The numerical simulations and real data reconstructions show that the proposed reconstruction method brings reasonable performance and outperforms some previous ones when applied to an LCT imaging problem.
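The split-and-ADM idea can be sketched on a 1-D total-variation denoising problem: reformulate with the equality constraint z = Dx, then alternate between an easy quadratic solve in x and a soft-threshold step in z. This is a toy stand-in for the LCT problem; the operator split and the parameter values are illustrative:

```python
import numpy as np

def tv_denoise_admm(y, lam=1.0, rho=1.0, iters=200):
    """1-D total-variation denoising by the alternating direction method:
    minimize 0.5*||x - y||^2 + lam*||D x||_1 with the split z = D x."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)        # forward-difference operator
    H = np.eye(n) + rho * D.T @ D         # x-step normal matrix (fixed)
    z = np.zeros(n - 1)
    u = np.zeros(n - 1)                   # scaled dual variable
    for _ in range(iters):
        x = np.linalg.solve(H, y + rho * D.T @ (z - u))
        w = D @ x + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)  # soft threshold
        u += D @ x - z                    # dual ascent on the constraint
    return x

rng = np.random.default_rng(2)
truth = np.repeat([0.0, 1.0, -0.5], 50)   # piecewise-constant signal
noisy = truth + 0.1 * rng.normal(size=truth.size)
x = tv_denoise_admm(noisy, lam=0.5, rho=1.0)
print(np.mean((x - truth) ** 2), np.mean((noisy - truth) ** 2))
```

The same pattern carries over to tomography: there the quadratic x-step involves the projection operator instead of the identity, but the z-step and dual update are unchanged.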
NASA Astrophysics Data System (ADS)
Quan, Guotao; Gong, Hui; Deng, Yong; Fu, Jianwei; Luo, Qingming
2011-02-01
High-speed fluorescence molecular tomography (FMT) reconstruction for 3-D heterogeneous media is still one of the most challenging problems in diffusive optical fluorescence imaging. In this paper, we propose a fast FMT reconstruction method that is based on Monte Carlo (MC) simulation and accelerated by a cluster of graphics processing units (GPUs). Based on the Message Passing Interface standard, we modified the MC code for fast FMT reconstruction, and different Green's functions representing the flux distribution in media are calculated simultaneously by different GPUs in the cluster. A load-balancing method was also developed to increase the computational efficiency. By applying the Fréchet derivative, a Jacobian matrix is formed to reconstruct the distribution of the fluorochromes using the calculated Green's functions. Phantom experiments have shown that only 10 min are required to get reconstruction results with a cluster of 6 GPUs, rather than 6 h with a cluster of multiple dual-Opteron CPU nodes. Because of the advantages of high accuracy and suitability for 3-D heterogeneous media with refractive-index-mismatched boundaries from the MC simulation, the GPU cluster-accelerated method provides a reliable approach to high-speed reconstruction for FMT imaging.
NASA Astrophysics Data System (ADS)
Gao, Junhui
2013-05-01
Overlapping grids are usually used in the numerical simulation of flows with complex geometry by high-order finite difference schemes. However, it is difficult to generate the overlapping grid and the connectivity information between adjacent blocks, especially when interpolation is required for non-coincident overlapping grids. In this study, an interface flux reconstruction (IFR) method is proposed for numerical simulation using high-order finite difference schemes with multi-block structured grids. In this method neighboring blocks share a common face, and the fluxes on each block are matched to set the boundary conditions for each interior block. This method therefore holds promise for allowing discontinuous grids on either side of an interior block interface. The proposed method is proven to be stable for the 7-point central DRP scheme coupled with 4-point and 5-point boundary closure schemes, as well as the 4th-order compact scheme coupled with a 3rd-order boundary closure scheme. Four problems are numerically solved with the developed code to validate the interface flux reconstruction method. The IFR method coupled with the 4th-order DRP or compact scheme is validated to be 4th-order accurate on one- and two-dimensional wave propagation problems. Two-dimensional pulse propagation in a mean flow is computed on a wavy mesh to demonstrate the ability of the proposed method for non-uniform grids. To demonstrate its ability for complex geometry, sound scattering by two cylinders is simulated and the numerical results are compared with analytical data, with which they agree well. Finally, the IFR method is applied to simulate viscous flow past a cylinder at Reynolds number 150 to show its capability for viscous problems. The computed pressure coefficient on the cylinder surface, the frequency of vortex shedding, and the lift and drag coefficients are presented. The numerical results are compared with the data
NASA Astrophysics Data System (ADS)
Chen, Chong; Xu, Guoliang
2012-03-01
In this paper, we present a novel and effective L2-gradient-flow-based semi-implicit finite-element method for solving a variational problem of image reconstruction. The method is applicable to several data scenarios, especially for contaminated data detected from uniformly sparse or randomly distributed projection directions. We also give a complete and rigorous proof for the convergence of the semi-implicit finite-element method, in which the convergence does not rely on the choices of the regularization parameter and the temporal step size. The experimental results show that our method has more desirable performance compared with other reconstruction methods in solving a number of challenging reconstruction problems.
Hip reconstruction osteotomy by Ilizarov method as a salvage option for abnormal hip joints.
Umer, Masood; Rashid, Haroon; Umer, Hafiz Muhammad; Raza, Hasnain
2014-01-01
Hip joint instability can be secondary to congenital hip pathologies like developmental dysplasia (DDH) or acquired such as sequel of infective or neoplastic process. An unstable hip is usually associated with loss of bone from the proximal femur, proximal migration of the femur, lower-extremity length discrepancy, abnormal gait, and pain. In this case series of 37 patients coming to our institution between May 2005 and December 2011, we report our results in treatment of unstable hip joint by hip reconstruction osteotomy using the Ilizarov method and apparatus. This includes an acute valgus and extension osteotomy of the proximal femur combined with gradual varus and distraction (if required) for realignment and lengthening at a second, more distal, femoral osteotomy. 18 males and 19 females participated in the study. There were 17 patients with DDH, 12 with sequelae of septic arthritis, 2 with tuberculous arthritis, 4 with posttraumatic arthritis, and 2 with focal proximal femoral deficiency. Outcomes were evaluated by using the Harris Hip Scoring system. At the mean follow-up of 37 months, the Harris Hip Score had significantly improved in all patients. To conclude, Ilizarov hip reconstruction can successfully improve Trendelenburg's gait. It supports the pelvis and simultaneously restores knee alignment and corrects lower-extremity length discrepancy (LLD). PMID:24895616
An Efficient Augmented Lagrangian Method for Statistical X-Ray CT Image Reconstruction
Li, Jiaojiao; Niu, Shanzhou; Huang, Jing; Bian, Zhaoying; Feng, Qianjin; Yu, Gaohang; Liang, Zhengrong; Chen, Wufan; Ma, Jianhua
2015-01-01
Statistical iterative reconstruction (SIR) for X-ray computed tomography (CT) under the penalized weighted least-squares criterion can yield significant gains over conventional analytical reconstruction from noisy measurements. However, due to the nonlinear expression of the objective function, most existing algorithms related to SIR unavoidably suffer from a heavy computation load and slow convergence rate, especially when an edge-preserving or sparsity-based penalty or regularization is incorporated. In this work, to address the above-mentioned issues, we propose an adaptive nonmonotone alternating direction algorithm in the framework of the augmented Lagrangian multiplier method, which is termed "ALM-ANAD". The algorithm effectively combines an alternating direction technique with an adaptive nonmonotone line search to minimize the augmented Lagrangian function at each iteration. To evaluate the present ALM-ANAD algorithm, both qualitative and quantitative studies were conducted by using digital and physical phantoms. Experimental results show that the present ALM-ANAD algorithm can achieve noticeable gains over the classical nonlinear conjugate gradient algorithm and the state-of-the-art split Bregman algorithm in terms of noise reduction, contrast-to-noise ratio, convergence rate, and universal quality index metrics. PMID:26495975
A new method using xenogeneic acellular dermal matrix in the reconstruction of lacrimal drainage
Chen, Li; Gong, Bo; Wu, Zhengzheng; Jetton, Jacquelyn; Chen, Rong; Qu, Chao
2014-01-01
Aims: To prospectively evaluate the reliability and efficacy of a new treatment for the reconstruction of the lacrimal duct using a new histo-engineered material, xenogeneic (bovine) acellular dermal matrix. Method: Five patients (five eyes) with partial or total absence of the lacrimal duct were included in the study. Four patients (four eyes) suffered from traumatic injuries to the lacrimal duct and one patient (one eye) had congenital absence of the lacrimal drainage system. A pedal graft of conjunctiva was taken from the fornix area and rolled into a tube structure after being attached to the acellular dermal matrix. Results: The average duration of follow-up for the patients was 7.2 months (ranging from 6 to 12 months). After surgery, the new duct in the nasal cavity could be observed above the middle turbinate by nasal endoscopy. Patency was confirmed by pressing in the area of the lacrimal sac and visualising air bubbles in the nasal cavity. Additionally, the meatus above the middle turbinate of the nasal cavity was stained and visualised after patients underwent Jones dye test 1 (JDT1). Five tear ducts proved to be effective through irrigation testing and epiphora symptoms were alleviated in all cases. Conclusions: The newly reconstructed lacrimal duct, formed by the shift of autogenous conjunctival petal and the attachment of acellular dermal matrix, was successful in all five cases and suggests a new solution for the complex lacrimal duct lesion and congenital anomalies of the lacrimal duct. PMID:25271909
3D Reconstruction from Multi-View Medical X-Ray Images - Review and Evaluation of Existing Methods
NASA Astrophysics Data System (ADS)
Hosseinian, S.; Arefi, H.
2015-12-01
The 3D concept is extremely important in clinical studies of the human body. Accurate 3D models of bony structures are currently required in clinical routine for diagnosis, patient follow-up, surgical planning, computer-assisted surgery and biomechanical applications. However, 3D conventional medical imaging techniques such as computed tomography (CT) and magnetic resonance imaging (MRI) have serious limitations, such as acquisition in non-weight-bearing positions, cost and high radiation dose (for CT). Therefore, 3D reconstruction methods from biplanar X-ray images have been taken into consideration as reliable alternative methods for achieving accurate 3D models with a low radiation dose in weight-bearing positions. Different methods based on photogrammetry have been proposed for 3D reconstruction from X-ray images, and these should be assessed. In this paper, after demonstrating the principles of 3D reconstruction from X-ray images, different existing methods of 3D reconstruction of bony structures from radiographs are classified and evaluated with various metrics, and their advantages and disadvantages are mentioned. Finally, the presented methods are compared with respect to several criteria, such as accuracy, reconstruction time and applications. Each method has advantages and disadvantages that should be weighed for a specific application.
A Grhl2-dependent gene network controls trophoblast branching morphogenesis.
Walentin, Katharina; Hinze, Christian; Werth, Max; Haase, Nadine; Varma, Saaket; Morell, Robert; Aue, Annekatrin; Pötschke, Elisabeth; Warburton, David; Qiu, Andong; Barasch, Jonathan; Purfürst, Bettina; Dieterich, Christoph; Popova, Elena; Bader, Michael; Dechend, Ralf; Staff, Anne Cathrine; Yurtdas, Zeliha Yesim; Kilic, Ergin; Schmidt-Ott, Kai M
2015-03-15
Healthy placental development is essential for reproductive success; failure of the feto-maternal interface results in pre-eclampsia and intrauterine growth retardation. We found that grainyhead-like 2 (GRHL2), a CP2-type transcription factor, is highly expressed in chorionic trophoblast cells, including basal chorionic trophoblast (BCT) cells located at the chorioallantoic interface in murine placentas. Placentas from Grhl2-deficient mouse embryos displayed defects in BCT cell polarity and basement membrane integrity at the chorioallantoic interface, as well as a severe disruption of labyrinth branching morphogenesis. Selective Grhl2 inactivation only in epiblast-derived cells rescued all placental defects but phenocopied intraembryonic defects observed in global Grhl2 deficiency, implying the importance of Grhl2 activity in trophectoderm-derived cells. ChIP-seq identified 5282 GRHL2 binding sites in placental tissue. By integrating these data with placental gene expression profiles, we identified direct and indirect Grhl2 targets and found a marked enrichment of GRHL2 binding adjacent to genes downregulated in Grhl2(-/-) placentas, which encoded known regulators of placental development and epithelial morphogenesis. These genes included that encoding the serine protease inhibitor Kunitz type 1 (Spint1), which regulates BCT cell integrity and labyrinth formation. In human placenta, we found that human orthologs of murine GRHL2 and its targets displayed co-regulation and were expressed in trophoblast cells in a similar domain as in mouse placenta. Our data indicate that a conserved Grhl2-coordinated gene network controls trophoblast branching morphogenesis, thereby facilitating development of the site of feto-maternal exchange. This might have implications for syndromes related to placental dysfunction. PMID:25758223
Using Effective Subnetworks to Predict Selected Properties of Gene Networks
Gunaratne, Gemunu H.; Gunaratne, Preethi H.; Seemann, Lars; Török, Andrei
2010-01-01
Background: Difficulties associated with implementing gene therapy are caused by the complexity of the underlying regulatory networks. The forms of interactions between the hundreds of genes, proteins, and metabolites in these networks are not known very accurately. An alternative approach is to limit consideration to genes on the network. Steady state measurements of these influence networks can be obtained from DNA microarray experiments. However, since they contain a large number of nodes, the computation of influence networks requires a prohibitively large set of microarray experiments. Furthermore, error estimates of the network make verifiable predictions impossible. Methodology/Principal Findings: Here, we propose an alternative approach. Rather than attempting to derive an accurate model of the network, we ask what questions can be addressed using lower dimensional, highly simplified models. More importantly, is it possible to use such robust features in applications? We first identify a small group of genes that can be used to affect changes in other nodes of the network. The reduced effective empirical subnetwork (EES) can be computed using steady state measurements on a small number of genetically perturbed systems. We show that the EES can be used to make predictions on expression profiles of other mutants, and to compute how to implement pre-specified changes in the steady state of the underlying biological process. These assertions are verified in a synthetic influence network. We also use previously published experimental data to compute the EES associated with an oxygen deprivation network of E. coli, and use it to predict gene expression levels on a double mutant. The predictions are significantly different from the experimental results for less than of genes. Conclusions/Significance: The constraints imposed by gene expression levels of mutants can be used to address a selected set of questions about a gene network. PMID:20949025
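The idea of recovering an effective influence matrix from steady-state measurements on a few perturbed systems can be sketched with a linear-response toy model; this is not the authors' EES computation, and all sizes, names and values below are invented for illustration:

```python
import numpy as np

# Toy model: if steady-state responses depend approximately linearly on
# small perturbations, r = A p, then perturbation/response pairs recover
# the effective influence matrix of the selected subnetwork column by column.
rng = np.random.default_rng(3)
k = 4                                           # size of the reduced subnetwork
A_true = rng.normal(size=(k, k))                # hidden effective influences
P = np.eye(k)                                   # perturb one gene at a time
R = A_true @ P + 0.001 * rng.normal(size=(k, k))  # measured responses (noisy)
A_est = R @ np.linalg.inv(P)                    # recovered influence matrix

# Predict the response of a "double mutant": perturb genes 0 and 1 together.
p_double = np.array([1.0, 1.0, 0.0, 0.0])
pred = A_est @ p_double
expected = A_true @ p_double
print(np.max(np.abs(pred - expected)))
```

The point mirrors the paper's: k perturbation experiments suffice to constrain a k-gene effective subnetwork, whereas fitting the full network would need far more data.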
A new volume conservation enforcement method for PLIC reconstruction in general convex grids
NASA Astrophysics Data System (ADS)
López, J.; Hernández, J.; Gómez, P.; Faura, F.
2016-07-01
A comprehensive study is made of methods for resolving the volume conservation enforcement problem in the PLIC reconstruction of an interface in general 3D convex grids. Different procedures to bracket the solution when solving the problem using previous standard methods are analyzed in detail. A new interpolation bracketing procedure and an improved analytical method to find the interface plane constant are proposed. These techniques are combined in a new method to enforce volume conservation, which does not require the sequential polyhedra truncation operations typically used in standard methods. The new methods have been implemented into existing geometrical routines described in López and Hernández [15], which are further improved by using more efficient formulae to compute areas and volumes of general convex 2D and 3D polytopes. Different tests using regular and irregular cell geometries are carried out to demonstrate the robustness and substantial improvement in computational efficiency of the proposed techniques, which increase the computation speed of the mentioned routines by up to 3 times for the 3D problems considered in this work.
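The volume-enforcement problem the abstract refers to can be stated compactly: given the interface normal n and a target liquid fraction f in a cell, find the plane constant c so that the truncated region {x : n·x <= c} has volume fraction f. The sketch below solves it by bracketing and bisection on a unit cube, estimating the truncated volume by dense sampling; this is only a conceptual stand-in, since production PLIC codes (including the paper's analytical method) use exact polyhedron truncation or closed-form volume formulae.

```python
import numpy as np

# Bisection-based enforcement of volume conservation for a PLIC plane in a
# unit-cube cell. volume_fraction() estimates the fraction of the cell on the
# n.x <= c side of the plane by sampling cell-centred points.
def volume_fraction(n, c, m=40):
    s = (np.arange(m) + 0.5) / m                     # sample coordinates in (0, 1)
    X, Y, Z = np.meshgrid(s, s, s, indexing="ij")
    return float(np.mean(n[0] * X + n[1] * Y + n[2] * Z <= c))

def enforce_volume(n, f, iters=60):
    # Bracket the plane constant with the cell-corner extrema of n.x, where the
    # fraction runs from 0 to 1, then bisect the monotone fraction function.
    corners = np.array([[i, j, k] for i in (0, 1) for j in (0, 1) for k in (0, 1)], float)
    dots = corners @ n
    lo, hi = dots.min(), dots.max()
    for _ in range(iters):
        c = 0.5 * (lo + hi)
        if volume_fraction(n, c) < f:
            lo = c
        else:
            hi = c
    return 0.5 * (lo + hi)

n1 = np.array([0.0, 0.0, 1.0])
c1 = enforce_volume(n1, 0.25)        # axis-aligned plane: exact answer c = 0.25
n2 = np.array([1.0, 1.0, 1.0])
c2 = enforce_volume(n2, 0.5)         # diagonal normal: by symmetry c = 1.5
```

The accuracy here is limited by the sampling resolution; the paper's point is precisely that analytical formulae and better bracketing avoid this cost.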
NASA Astrophysics Data System (ADS)
Nourgaliev, R.; Luo, H.; Weston, B.; Anderson, A.; Schofield, S.; Dunn, T.; Delplanque, J.-P.
2016-01-01
A new reconstructed Discontinuous Galerkin (rDG) method, based on orthogonal basis/test functions, is developed for fluid flows on unstructured meshes. Orthogonality of basis functions is essential for enabling robust and efficient fully-implicit Newton-Krylov based time integration. The method is designed for generic partial differential equations, including transient, hyperbolic, parabolic or elliptic operators, which arise in many multiphysics problems. We demonstrate the method's capabilities for solving compressible fluid-solid systems (in the low Mach number limit), with phase change (melting/solidification), as motivated by applications in Additive Manufacturing (AM). We focus on the method's accuracy (in both space and time), as well as robustness and solvability of the system of linear equations involved in the linearization steps of Newton-based methods. The performance of the developed method is investigated for highly-stiff problems with melting/solidification, emphasizing the advantages from tight coupling of mass, momentum and energy conservation equations, as well as orthogonality of basis functions, which leads to better conditioning of the underlying (approximate) Jacobian matrices, and rapid convergence of the Krylov-based linear solver.
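The conditioning benefit of an orthogonal modal basis can be shown in a few lines: orthonormalizing monomials on the reference interval under a quadrature inner product turns the element mass matrix into the identity, whereas raw monomials give an ill-conditioned Hilbert-like matrix. This is a generic DG ingredient, not the paper's specific rDG construction.

```python
import numpy as np

# Build an orthonormal modal basis on [-1, 1] by Gram-Schmidt on monomials
# under the Gauss-Legendre quadrature inner product <f, g> = sum_q w_q f_q g_q.
p = 4                                               # polynomial degree
xq, wq = np.polynomial.legendre.leggauss(p + 1)     # exact for degree <= 2p + 1
V = np.vander(xq, p + 1, increasing=True)           # monomials 1, x, ..., x^p at quad pts

Q = np.zeros_like(V)
for j in range(p + 1):
    v = V[:, j].copy()
    for i in range(j):
        v -= (wq * Q[:, i] * V[:, j]).sum() * Q[:, i]   # remove earlier components
    Q[:, j] = v / np.sqrt((wq * v * v).sum())           # normalize

# Mass matrix in the new basis: identity, i.e. perfectly conditioned.
M = Q.T @ np.diag(wq) @ Q
```

Because products of two degree-p polynomials are integrated exactly by the p+1-point rule, the discrete orthogonality is also exact continuous orthogonality, and the resulting columns of `Q` are (normalized) Legendre polynomials.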
The validation of made-to-measure method for reconstruction of phase space distribution functions
NASA Astrophysics Data System (ADS)
Tagawa, H.; Gouda, N.; Yano, T.; Hara, T.
2016-08-01
We investigate how accurately phase space distribution functions (DFs) in galactic models can be reconstructed by a made-to-measure (M2M) method, which constructs N-particle models of stellar systems from photometric and various kinematic data. The advantage of the M2M method is that it can be applied to various galactic models without assuming the spatial symmetries of the gravitational potentials adopted in those models, and, furthermore, numerical calculations of the orbits of the stars are not severely constrained by the capacities of computer memories. The M2M method has been applied to various galactic models. However, the degree of accuracy for the recovery of DFs derived by the M2M method in galactic models has never been investigated carefully. Therefore, we show the degree of accuracy for the recovery of the DFs for the anisotropic Plummer model and the axisymmetric Stäckel model, which have analytic solutions of the DFs. Furthermore, this study provides the dependence of the degree of accuracy for the recovery of the DFs on various parameters and on the procedure adopted in this paper. As a result, we find that the degree of accuracy for the recovery of the DFs derived by the M2M method is a few percent for the spherical target model and more than ten percent for the axisymmetric target model.
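The core of any M2M scheme is the weight adaptation: particle weights are evolved by a "force of change" until the model's binned observables match the target's. The toy below freezes the particle "orbits" (1D positions) and applies the multiplicative weight update to match target bin masses; a real M2M code also integrates orbits in a potential and temporally smooths the observables, which is omitted here.

```python
import numpy as np

# Toy made-to-measure (M2M) weight adaptation in 1D. K[j, i] = 1 if particle i
# falls in observable bin j; weights evolve as dw_i/dt = -eps * w_i * sum_j
# K[j, i] * delta_j with delta_j the relative residual of observable j.
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 2000)                 # frozen particle positions ("orbits")
edges = np.linspace(0, 1, 11)
K = np.array([(x >= a) & (x < b) for a, b in zip(edges[:-1], edges[1:])], float)

Y = np.linspace(1.0, 2.0, 10)
Y = Y / Y.sum()                             # target normalized bin masses

w = np.full(x.size, 1.0 / x.size)           # start from equal weights
eps = 0.2
for _ in range(500):
    y = K @ w                               # model observables
    delta = (y - Y) / Y                     # relative residuals
    w *= 1.0 - eps * (K.T @ delta)          # force-of-change step (explicit Euler)

res = np.max(np.abs((K @ w - Y) / Y))       # final relative observable error
```

Because each particle sits in exactly one bin, each bin's mass converges geometrically to its target for small `eps`, mirroring how M2M drives the N-body model toward the photometric/kinematic constraints.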
[Flap-reconstruction in mouth and oropharynx. A clinical comparison of methods (author's transl)].
Eitschberger, E; Weidenbecher, M
1981-10-01
The possible methods for plastic reconstruction after resection of malignant tumors in the mouth and oropharynx are reported. A total of 165 patients, operated upon between 1973 and 1980, were reviewed. The tumor was located 26 times in the tongue, 4 times within the base of the tongue, 27 times in the floor of the mouth, 10 times in the floor of the mouth and in the tongue, and 80 times in the tonsils. For reconstruction, the deltopectoral flap was used 8 times, the forehead flap 51 times, the tongue flap 42 times, and the myocutaneous sternocleidomastoideus island flap 7 times. Other methods, such as skin grafts, were applied 8 times, and in 41 cases a primary closure of the defect was possible. Complete necrosis of the flap was rare; partial dehiscences with or without a fistula were more frequent. Thus, with the forehead flap, necrosis occurred 5 times whereas dehiscence was seen in 15 cases. Even better results were achieved for the deltopectoral and tongue flaps. In contrast, the skin islands of the myocutaneous sternocleidomastoideus flaps all became necrotic, but only once did a temporary fistula develop. Of the pectoralis myocutaneous island flaps, the first two became necrotic, probably due to lack of surgical experience. Taking into account the surgical expenditure as well as the functional and cosmetic results, the methods may be ranked according to clinical value as follows: the pectoralis major myocutaneous island flap and the tongue flap share first place, followed by the myocutaneous sternocleidomastoideus island flap, with the deltopectoral and forehead flaps in third and fourth place. PMID:7287523
NASA Astrophysics Data System (ADS)
Hosseininaveh Ahmadabadian, Ali; Robson, Stuart; Boehm, Jan; Shortis, Mark
2013-04-01
Multi-View Stereo (MVS) as a low cost technique for precise 3D reconstruction can be a rival for laser scanners if the scale of the model is resolved. A fusion of stereo imaging equipment with photogrammetric bundle adjustment and MVS methods, known as photogrammetric MVS, can generate correctly scaled 3D models without using any known object distances. Although a large set of stereo images (e.g. 200 high-resolution images of a small object) contains redundant data that allows detailed and accurate 3D reconstruction, the capture and processing time increases when a vast number of high-resolution images is employed. Moreover, some parts of the object are often missing due to the lack of coverage of all areas. These problems demand a logical selection of the most suitable stereo camera views from the large image dataset. This paper presents a method for clustering and choosing optimal stereo or, optionally, single images from a large image dataset. The approach focuses on the two key steps of image clustering and iterative image selection. The method is developed within a software application called Imaging Network Designer (IND) and tested by the 3D recording of a gearbox and three metric reference objects. A comparison is made between IND and CMVS, which is a free package for selecting vantage images. The final 3D models obtained from the IND and CMVS approaches are compared with datasets generated with an MMDx Nikon Laser scanner. Results demonstrate that IND can provide a better image selection for MVS than CMVS in terms of surface coordinate uncertainty and completeness.
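The clustering step can be sketched as grouping camera positions and keeping the image nearest each cluster centre as the representative view. This is only the first of the two steps (the iterative, coverage-driven selection in IND is not modelled), and the synthetic camera layout below is invented for illustration.

```python
import numpy as np

# Cluster synthetic camera positions with a small k-means and pick one
# representative image per cluster (the camera nearest the cluster centre).
rng = np.random.default_rng(2)
centres = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
cams = np.concatenate([c + rng.normal(0, 0.5, (30, 2)) for c in centres])

def kmeans(points, k, iters=50, seed=0):
    r = np.random.default_rng(seed)
    mu = points[r.choice(len(points), k, replace=False)].copy()
    for _ in range(iters):
        lab = np.argmin(((points[:, None] - mu[None]) ** 2).sum(-1), axis=1)
        # Keep the old centre if a cluster empties out.
        mu = np.array([points[lab == j].mean(0) if np.any(lab == j) else mu[j]
                       for j in range(k)])
    return mu, lab

mu, lab = kmeans(cams, 3)
reps = [int(np.argmin(((cams - m) ** 2).sum(-1))) for m in mu]   # representative views
```

In a real pipeline the distance would also account for viewing direction and overlap, not just camera position.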
Adding versatility to the reconstruction of intraoral lining: opened pocket method.
Cinar, Can; Ogur, Simin; Arslan, Hakan; Kilic, Ali
2007-01-01
Reconstruction of a full-thickness cheek defect, especially one associated with a large lip and oral commissure defect, remains a challenge. After tumor excision, replacement of the oral mucosa is often necessary. The oral mucosa is a thin, pliable lining. Because the skin of the forearm is ideally suited for replacement of oral lining, being thin, pliable, and predominantly hairless, the radial forearm flap is the most frequently used soft-tissue flap for this purpose. In addition, the vascularity of the area allows substantial variation in the design of the flap, both in relation to its site and size. On the other hand, the radial forearm flap might be unusable on some occasions, such as in the case presented here. Thus, a search for an alternative free flap is required. We used a prefabricated scapular free flap to reconstruct a large concomitant lip and full-thickness cheek defect resulting from perioral cancer ablation. We introduce a new "opened pocket" method for reconstruction of the intraoral lining without folding the flap. Resection of the tumor resulted in a defect including 45% of the upper lip, 50% of the lower lip, and a large, full-thickness defect of the cheek. The resultant defect was temporarily closed with a split-thickness skin graft. Meanwhile, the left scapular fasciocutaneous flap was prefabricated for permanent closure of the defect. The left scapular flap was outlined horizontally, and the flap orientation for the defect was estimated. Then, the distal portion of the flap was harvested and incised to create lips and oral commissure. Afterward, the raw surface under the neo-lip regions and the base where the flap was raised was grafted with one piece from a thick, split-thickness skin graft. Fourteen days later, the patient was taken back to the operating room for reconstruction of the defect with free transfer of the prefabricated scapular fasciocutaneous flap. The grafted distal region of the flap was raised with the deep fascia located
Chen, Baiyu; Christianson, Olav; Wilson, Joshua M.; Samei, Ehsan
2014-07-15
Purpose: For nonlinear iterative image reconstructions (IR), the computed tomography (CT) noise and resolution properties can depend on the specific imaging conditions, such as lesion contrast and image noise level. Therefore, it is imperative to develop a reliable method to measure the noise and resolution properties under clinically relevant conditions. This study aimed to develop a robust methodology to measure the three-dimensional CT noise and resolution properties under such conditions and to provide guidelines to achieve desirable levels of accuracy and precision. Methods: The methodology was developed based on a previously reported CT image quality phantom. In this methodology, CT noise properties are measured in the uniform region of the phantom in terms of a task-based 3D noise-power spectrum (NPS_task). The in-plane resolution properties are measured in terms of the task transfer function (TTF) by applying a radial edge technique to the rod inserts in the phantom. The z-direction resolution properties are measured from a supplemental phantom, also in terms of the TTF. To account for the possible nonlinearity of IR, the NPS_task is measured with respect to the noise magnitude, and the TTF with respect to noise magnitude and edge contrast. To determine the accuracy and precision of the methodology, images of known noise and resolution properties were simulated. The NPS_task and TTF were measured on the simulated images and compared to the truth, with criteria established to achieve NPS_task and TTF measurements with <10% error. To demonstrate the utility of this methodology, measurements were performed on a commercial CT system using five dose levels, two slice thicknesses, and three reconstruction algorithms (filtered backprojection, FBP; iterative reconstruction in image space, IRIS; and sinogram affirmed iterative reconstruction with strength 5, SAFIRE5). Results: To achieve NPS_task measurements with <10% error, the
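The basic NPS estimator underlying such measurements is short: detrend each noise ROI by its mean, average the squared DFT magnitudes over the ensemble, and scale by pixel area. This is the generic ensemble NPS, not the paper's full task-based protocol (which also varies noise magnitude and contrast); the white-noise input and unit pixel size are assumptions for the check.

```python
import numpy as np

# Ensemble 2D noise-power-spectrum estimator:
#   NPS(u, v) = dx * dy / (Nx * Ny) * < |DFT(ROI - mean(ROI))|^2 >.
# For white noise with variance sigma^2 and unit pixels, Parseval's theorem
# implies the NPS averages to sigma^2 across frequencies.
rng = np.random.default_rng(3)
sigma, n, n_rois = 2.0, 64, 200
stack = rng.normal(0.0, sigma, (n_rois, n, n))      # ensemble of uniform-region ROIs

def nps_2d(stack, dx=1.0, dy=1.0):
    detrended = stack - stack.mean(axis=(1, 2), keepdims=True)
    spectra = np.abs(np.fft.fft2(detrended)) ** 2   # fft2 acts on the last two axes
    ny, nx = stack.shape[1:]
    return dx * dy / (nx * ny) * spectra.mean(axis=0)

nps = nps_2d(stack)
```

For CT noise the spectrum is of course not flat; the same estimator then reveals the characteristic mid-frequency peak of FBP or the low-frequency shift of iterative reconstruction.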
Xu, H
2014-06-01
Purpose: To develop and investigate whether the logarithmic barrier (LB) method can produce high-quality reconstructed CT images from sparsely-sampled noisy projection data. Methods: The objective function is typically formulated as the sum of the total variation (TV) and a data fidelity (DF) term, with a parameter λ that governs the relative weight between them. Finding the optimal value of λ is a critical step for this approach to give satisfactory results. The proposed LB method avoids λ by constructing the objective function as the sum of the TV and a log function whose argument is the DF term. Newton's method was used to solve the optimization problem. The algorithm was coded in MATLAB 2013b. Both the Shepp-Logan phantom and a patient lung CT image were used for demonstration of the algorithm. Measured data were simulated by calculating the projection data using the Radon transform. A Poisson noise model was used to account for the simulated detector noise. The iteration stopped when the difference between the current TV and the previous one was less than 1%. Results: The Shepp-Logan phantom reconstruction study shows that filtered back-projection (FBP) gives strong streak artifacts for 30 and 40 projections. Although visually the streak artifacts are less pronounced for 64 and 90 projections in FBP, the 1D pixel profiles indicate that FBP gives noisier reconstructed pixel values than LB does. A lung image reconstruction is presented. It shows that use of 64 projections gives satisfactory reconstructed image quality with regard to noise suppression and sharp edge preservation. Conclusion: This study demonstrates that the logarithmic barrier method can be used to reconstruct CT images from sparsely-sampled data. A number of projections around 64 gives a balance between over-smoothing of sharp demarcations and noise suppression. Future work may extend to CBCT reconstruction and improvement of computation speed.
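The barrier idea can be demonstrated on a 1D denoising toy: minimize a smoothed TV while a logarithmic barrier keeps the data-fidelity term below a cap, so no hand-tuned TV/fidelity weight λ is needed. The abstract's method applies Newton's method to projection data; the sketch below substitutes backtracking gradient descent on a denoising problem, and the signal, noise level, and fidelity cap are all invented for illustration.

```python
import numpy as np

# TV minimization with a log barrier on the fidelity: minimize
#   tv(x) - mu * log(cap - ||x - b||^2),
# which keeps ||x - b||^2 < cap without a TV/fidelity trade-off weight.
rng = np.random.default_rng(6)
clean = np.repeat([0.0, 1.0, 0.3], 40)
b = clean + rng.normal(0, 0.1, clean.size)          # noisy observation

beta, mu = 1e-6, 0.05
cap = 2.0 * clean.size * 0.1 ** 2                   # fidelity budget (~2 n sigma^2)

def tv(x):
    d = np.diff(x)
    return np.sum(np.sqrt(d * d + beta))            # smoothed total variation

def grad_tv(x):
    d = np.diff(x)
    t = d / np.sqrt(d * d + beta)
    g = np.zeros_like(x)
    g[:-1] -= t
    g[1:] += t
    return g

def objective(x):
    fid = np.sum((x - b) ** 2)
    return np.inf if fid >= cap else tv(x) - mu * np.log(cap - fid)

x = b.copy()                                        # feasible start: fidelity 0
for _ in range(300):
    fid = np.sum((x - b) ** 2)
    g = grad_tv(x) + 2.0 * mu * (x - b) / (cap - fid)
    step, f0 = 0.5, objective(x)
    while step > 1e-12 and objective(x - step * g) >= f0:
        step *= 0.5                                 # backtrack: stay feasible, descend
    if step > 1e-12:
        x = x - step * g
```

The barrier blows up as the fidelity approaches the cap, so the iterate denoises (TV drops) without ever over-fitting the noise, which is the behavior the λ-free formulation is after.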
An estimation method of MR signal parameters for improved image reconstruction in unilateral scanner
NASA Astrophysics Data System (ADS)
Bergman, Elad; Yeredor, Arie; Nevo, Uri
2013-12-01
Unilateral NMR devices are used in various applications including non-destructive testing and well logging, but are not used routinely for imaging. This is mainly due to the inhomogeneous magnetic field (B0) in these scanners. This inhomogeneity results in low sensitivity and further forces the use of the slow single point imaging scan scheme. Improving the measurement sensitivity is therefore an important factor as it can improve image quality and reduce imaging times. Short imaging times can facilitate the use of this affordable and portable technology for various imaging applications. This work presents a statistical signal-processing method, designed to fit the unique characteristics of imaging with a unilateral device. The method improves the imaging capabilities by improving the extraction of image information from the noisy data. This is done by the use of redundancy in the acquired MR signal and by the use of the noise characteristics. Both types of data were incorporated into a Weighted Least Squares estimation approach. The method performance was evaluated with a series of imaging acquisitions applied to phantoms. Images were extracted from each measurement with the proposed method and were compared to the conventional image reconstruction. All measurements showed a significant improvement in image quality based on the MSE criterion, with respect to gold-standard reference images. An integration of this method with further improvements may lead to a prominent reduction in imaging times, aiding the use of such scanners in imaging applications.
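The weighted least squares core is standard: with redundant measurements y = A s + noise and known per-sample noise variances, weighting by the inverse variances down-weights the noisiest samples and gives the estimate s = (AᵀWA)⁻¹AᵀWy. The measurement matrix and variance profile below are invented stand-ins, not the paper's acquisition model.

```python
import numpy as np

# Weighted least squares (WLS) estimation from redundant, heteroscedastic
# measurements: W = diag(1 / var) uses the known noise characteristics.
rng = np.random.default_rng(4)
s_true = np.array([1.5, -0.7])                   # unknown signal parameters
A = rng.normal(size=(400, 2))                    # redundant measurement model
var = rng.uniform(0.01, 1.0, 400)                # known per-sample noise variances
y = A @ s_true + rng.normal(0.0, np.sqrt(var))   # noisy acquisitions

W = np.diag(1.0 / var)
s_wls = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)    # WLS estimate
s_ols = np.linalg.lstsq(A, y, rcond=None)[0]         # unweighted baseline
```

When the noise is Gaussian with known covariance, WLS is the best linear unbiased estimator, which is why exploiting the noise characteristics improves on plain reconstruction.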
Tarsitano, Achille; Battaglia, Salvatore; Crimi, Salvatore; Ciocca, Leonardo; Scotti, Roberto; Marchetti, Claudio
2016-07-01
The design and manufacture of patient-specific mandibular reconstruction plates, particularly in combination with cutting guides, has created many new opportunities for the planning and implementation of mandibular reconstruction. Although this surgical method is being used more widely and the outcomes appear to be improved, the question of the additional cost has to be discussed. To evaluate the cost generated by the management of this technology, we studied a cohort of patients treated for mandibular neoplasms. The population was divided into two groups of 20 patients each who were undergoing a 'traditional' freehand mandibular reconstruction or a computer-aided design/computer-aided manufacturing (CAD-CAM) mandibular reconstruction. Data concerning operation time, complications, and days of hospitalisation were used to evaluate costs related to the management of these patients. The mean operating time for the CAD-CAM group was 435 min, whereas that for the freehand group was 550.5 min. The total difference in terms of average time gain was 115.5 min. No microvascular complication occurred in the CAD-CAM group; two complications (10%) were observed in patients undergoing freehand reconstructions. The mean overall lengths of hospital stay were 13.8 days for the CAD-CAM group and 17 days for the freehand group. Finally, considering that the institutional cost per minute of theatre time is €30, the money saved as a result of the time gained was €3,450. This cost corresponds approximately to the total price of the CAD-CAM surgery. In conclusion, we believe that CAD-CAM technology for mandibular reconstruction will become a widely used reconstructive method and that its cost will be covered by gains in terms of surgical time, quality of reconstruction, and reduced complications. PMID:27193477
Walking on multiple disease-gene networks to prioritize candidate genes.
Jiang, Rui
2015-06-01
Uncovering causal genes for human inherited diseases, as the primary step toward understanding the pathogenesis of these diseases, requires a combined analysis of genetic and genomic data. Although bioinformatics methods have been designed to prioritize candidate genes resulting from genetic linkage analysis or association studies, the coverage of both diseases and genes in existing methods is quite limited, thereby preventing the scan of causal genes for a significant proportion of diseases at the whole-genome level. To overcome this limitation, we propose a method named pgWalk to prioritize candidate genes by integrating multiple phenomic and genomic data. We derive three types of phenotype similarities among 7719 diseases and nine types of functional similarities among 20327 genes. Based on a pair of phenotype and gene similarities, we construct a disease-gene network and then simulate the process that a random walker wanders on such a heterogeneous network to quantify the strength of association between a candidate gene and a query disease. A weighted version of the Fisher's method with dependent correction is adopted to integrate 27 scores obtained in this way, and a final q-value is calibrated for prioritizing candidate genes. A series of validation experiments are conducted to demonstrate the superior performance of this approach. We further show the effectiveness of this method in exome sequencing studies of autism and epileptic encephalopathies. An online service and the standalone software of pgWalk can be found at http://bioinfo.au.tsinghua.edu.cn/jianglab/pgwalk. PMID:25681405
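The propagation primitive behind this kind of scoring is a random walk with restart (RWR): probability mass repeatedly diffuses from a seed node over the network, and the stationary distribution ranks candidates by association strength. pgWalk walks a heterogeneous disease-gene network and integrates 27 such scores; the sketch below runs the plain RWR on a small invented gene-gene graph with a single seed.

```python
import numpy as np

# Random walk with restart on a 5-node toy network. At each step the walker
# follows an edge with probability (1 - restart) or jumps back to the seed
# with probability restart; the stationary p ranks nodes by proximity to the seed.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], float)
P = A / A.sum(axis=0, keepdims=True)      # column-normalized transition matrix
restart = 0.3
p0 = np.array([1.0, 0, 0, 0, 0])          # seed: the query node
p = p0.copy()
for _ in range(200):
    p = (1 - restart) * P @ p + restart * p0   # converges geometrically
```

Direct neighbors of the seed (nodes 1 and 2) end up scoring higher than the distant node 4, which is exactly the guilt-by-association ranking used for candidate-gene prioritization.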
RegnANN: Reverse Engineering Gene Networks Using Artificial Neural Networks
Grimaldi, Marco; Visintainer, Roberto; Jurman, Giuseppe
2011-01-01
RegnANN is a novel method for reverse engineering gene networks based on an ensemble of multilayer perceptrons. The algorithm builds a regressor for each gene in the network, estimating its neighborhood independently. The overall network is obtained by joining all the neighborhoods. RegnANN makes no assumptions about the nature of the relationships between the variables, potentially capturing high-order and nonlinear dependencies between expression patterns. The evaluation focuses on synthetic data mimicking plausible submodules of larger networks and on biological data consisting of submodules of Escherichia coli. We consider Barabási and Erdős-Rényi topologies together with two methods for data generation. We verify the effect of factors such as network size and amount of data on the accuracy of the inference algorithm. The accuracy scores obtained with RegnANN are systematically compared with the performance of three reference algorithms: ARACNE, CLR and KELLER. Our evaluation indicates that RegnANN compares favorably with the inference methods tested. The robustness of RegnANN, its ability to discover second-order correlations and the agreement between results obtained with this new method on both synthetic and biological data are promising, and they stimulate its application to a wider range of problems. PMID:22216103
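The per-gene-regressor recipe can be shown with a simpler model: for each gene, fit a regressor predicting its expression from all other genes, take the strongest predictors as its neighborhood, and join the neighborhoods into a network. RegnANN uses an ensemble of multilayer perceptrons; ridge regression is substituted here so the example stays small and deterministic, and the 5-gene expression data are synthetic.

```python
import numpy as np

# Neighborhood estimation per gene via ridge regression, then symmetric join.
rng = np.random.default_rng(7)
n_samples, n_genes = 200, 5
E = rng.normal(size=(n_samples, n_genes))
E[:, 1] = 0.9 * E[:, 0] + 0.1 * rng.normal(size=n_samples)   # gene 1 driven by gene 0
E[:, 3] = -0.8 * E[:, 2] + 0.1 * rng.normal(size=n_samples)  # gene 3 driven by gene 2

def neighborhood(E, g, lam=1e-2, top=1):
    others = [j for j in range(E.shape[1]) if j != g]
    X, y = E[:, others], E[:, g]
    coef = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)  # ridge fit
    ranked = sorted(zip(others, np.abs(coef)), key=lambda t: -t[1])
    return [j for j, _ in ranked[:top]]           # strongest predictors of gene g

adj = np.zeros((n_genes, n_genes), bool)
for g in range(n_genes):
    for j in neighborhood(E, g):
        adj[g, j] = adj[j, g] = True              # join all neighborhoods
```

A linear regressor obviously cannot capture the high-order dependencies RegnANN targets; the MLP ensemble replaces the ridge fit for exactly that reason.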
NASA Astrophysics Data System (ADS)
Bonechi, L.; D'Alessandro, R.; Mori, N.; Viliani, L.
2015-02-01
Muon absorption radiography is an imaging technique based on the analysis of the attenuation of the cosmic-ray muon flux after traversing an object under examination. While this technique is now reaching maturity in the field of volcanology for the imaging of the innermost parts of the volcanic cones, its applicability to other fields of research has not yet been proved. In this paper we present a study concerning the application of the muon absorption radiography technique to the field of archaeology, and we propose a method for the search of underground cavities and structures hidden a few metres deep in the soil (patent [1]). An original geometric treatment of the reconstructed muon tracks, based on the comparison of the measured flux with a reference simulated flux, and the preliminary results of specific simulations are discussed in detail.
NASA Astrophysics Data System (ADS)
Nilsen, Gørill
2016-08-01
Seal hunting and whaling have played an important part of people's livelihoods throughout prehistory as evidenced by rock carvings, remains of bones, artifacts from aquatic animals and hunting tools. This paper focuses on one of the more elusive resources relating to such activities: marine mammal blubber. Although marine blubber easily decomposes, the organic material has been documented from the Mesolithic Period onwards. Of particular interest in this article are the many structures in Northern Norway from the Iron Age and in Finland on Kökar, Åland, from both the Bronze and Early Iron Ages in which these periods exhibited traits interpreted as being related to oil rendering from marine mammal blubber. The article discusses methods used in this oil production activity based on historical sources, archaeological investigations and experimental reconstruction of Iron Age slab-lined pits from Northern Norway.
Model-based near-wall reconstructions for immersed-boundary methods
NASA Astrophysics Data System (ADS)
Posa, Antonio; Balaras, Elias
2014-08-01
In immersed-boundary methods, the cost of resolving the thin boundary layers on a solid boundary at high Reynolds numbers is prohibitive. In the present work, we propose a new model-based, near-wall reconstruction to account for the lack of resolution and provide the correct wall shear stress and hydrodynamic forces. The models are analytical versions of a generalized version of the two-layer model developed by Balaras et al. (AIAA J 34:1111-1119, 1996) for large-eddy simulations. We present results for the flow around a cylinder and a sphere, where we use Cartesian and cylindrical coordinate grids. We demonstrate that the proposed treatment reproduces the wall stress very accurately on grids that are one order of magnitude coarser than those of well-resolved simulations.
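The simplest analytical stand-in for such a near-wall reconstruction is an equilibrium wall model: given the velocity at the first off-wall point, solve the log law u/u_τ = (1/κ) ln(y u_τ/ν) + B for the friction velocity and hence the wall shear stress, instead of resolving the layer. The Balaras two-layer model integrates a thin-boundary-layer ODE rather than assuming the log law; the fluid properties and off-wall sample below are invented for illustration.

```python
import math

# Recover the friction velocity u_tau from one off-wall velocity sample via
# fixed-point iteration on the log law, then form the wall shear stress.
def u_tau_log_law(u, y, nu, kappa=0.41, B=5.2, iters=50):
    ut = max(1e-8, math.sqrt(nu * u / y))     # viscous-sublayer initial guess
    for _ in range(iters):
        ut = u / (math.log(y * ut / nu) / kappa + B)   # contraction near the root
    return ut

nu = 1.5e-5                 # kinematic viscosity of air, m^2/s (assumed)
y, u = 1e-3, 10.0           # first grid point: 1 mm off the wall, 10 m/s sampled there
ut = u_tau_log_law(u, y, nu)
tau_w = 1.2 * ut ** 2       # wall shear stress with assumed rho = 1.2 kg/m^3
```

The iteration converges because the log-law map is a contraction near the root; the resulting τ_w is what the immersed-boundary scheme would feed back as the wall boundary condition on the coarse grid.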
Refined Method of Lipofilling following DIEP Breast Reconstruction: 3D Analysis of Graft Survival
Lhoest, Florence; Preud’Homme, Laurence
2015-01-01
Background: The deep inferior epigastric perforator (DIEP) flap technique gives good clinical results, but aesthetic surgical adjustments are often necessary. Lipofilling represents a good complementary method, but fat resorption within the few months after surgery limits its use. Recently, a new protocol was introduced and successfully evaluated on murine models. This study aims to evaluate this protocol following a DIEP procedure by three-dimensional analysis. Methods: Within a period of 4 months, every patient having undergone breast reconstruction with DIEP and who required a lipofilling adjustment was invited to take part in this study. All surgeries were performed using the Adip’sculpt disposable medical device MACROFILL (Laboratoires SEBBIN, Boissy-l’Aillerie, France). Fat resorption was analyzed using a three-dimensional photography system. Results: Twenty-three patients were included, with a total of 25 breasts operated on. Injections were carried out on irradiated breasts in 73% of cases, and average injection volume was 124 mL (SD = 39 mL), whereas average operating time was 68 minutes (44–96 minutes). At an average follow-up of 5 months (4–8 months), 70.9% of projection gain afforded by the lipofilling was still present. Conclusions: It is now clear that particular rules should be respected for an efficient lipofilling, particularly regarding aspiration cannula characteristics, vacuum used, and the necessity of washes and soft centrifugations. We demonstrate here that by following a specific protocol that addresses these precautions, while using material that is specifically adapted, a 70.9% fat survival rate can be achieved, even in the very unfavorable case of postirradiation DIEP breast reconstruction. PMID:26495239
Development of a synthetic gene network to modulate gene expression by mechanical forces
Kis, Zoltán; Rodin, Tania; Zafar, Asma; Lai, Zhangxing; Freke, Grace; Fleck, Oliver; Del Rio Hernandez, Armando; Towhidi, Leila; Pedrigi, Ryan M.; Homma, Takayuki; Krams, Rob
2016-01-01
The majority of (mammalian) cells in our body are sensitive to mechanical forces, but little work has been done to develop assays to monitor mechanosensor activity. Furthermore, it is currently impossible to use mechanosensor activity to drive gene expression. To address these needs, we developed the first mammalian mechanosensitive synthetic gene network to monitor endothelial cell shear stress levels and directly modulate expression of an atheroprotective transcription factor by shear stress. The technique is highly modular, easily scalable and allows graded control of gene expression by mechanical stimuli in hard-to-transfect mammalian cells. We call this new approach mechanosyngenetics. To insert the gene network into a high proportion of cells, a hybrid transfection procedure was developed that involves electroporation, plasmids replication in mammalian cells, mammalian antibiotic selection, a second electroporation and gene network activation. This procedure takes 1 week and yielded over 60% of cells with a functional gene network. To test gene network functionality, we developed a flow setup that exposes cells to linearly increasing shear stress along the length of the flow channel floor. Activation of the gene network varied logarithmically as a function of shear stress magnitude. PMID:27404994
Mueller, Kerstin; Schwemmer, Chris; Hornegger, Joachim; Zheng, Yefeng; Wang, Yang; Lauritsch, Guenter; Rohkohl, Christopher; Maier, Andreas K.; Schultz, Carl; Fahrig, Rebecca
2013-03-15
Purpose: For interventional cardiac procedures, anatomical and functional information about the cardiac chambers is of major interest. With the technology of angiographic C-arm systems it is possible to reconstruct intraprocedural three-dimensional (3D) images from 2D rotational angiographic projection data (C-arm CT). However, 3D reconstruction of a dynamic object is a fundamental problem in C-arm CT reconstruction. The 2D projections are acquired over a scan time of several seconds, thus the projection data show different states of the heart. A standard FDK reconstruction algorithm would use all acquired data for a filtered backprojection and result in a motion-blurred image. In this approach, a motion compensated reconstruction algorithm requiring knowledge of the 3D heart motion is used. The motion is estimated from a previously presented 3D dynamic surface model. This dynamic surface model results in a sparse motion vector field (MVF) defined at control points. In order to perform a motion compensated reconstruction, a dense motion vector field is required. The dense MVF is generated by interpolation of the sparse MVF. Therefore, the influence of different motion interpolation methods on the reconstructed image quality is evaluated. Methods: Four different interpolation methods, thin-plate splines (TPS), Shepard's method, a smoothed weighting function, and a simple averaging, were evaluated. The reconstruction quality was measured on phantom data, a porcine model as well as on in vivo clinical data sets. As a quality index, the 2D overlap of the forward projected motion compensated reconstructed ventricle and the segmented 2D ventricle blood pool was quantitatively measured with the Dice similarity coefficient and the mean deviation between extracted ventricle contours. For the phantom data set, the normalized root mean square error (nRMSE) and the universal quality index (UQI) were also evaluated in 3D image space. Results: The quantitative evaluation of all
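Of the four interpolation schemes compared (thin-plate splines, Shepard's method, a smoothed weighting function, and averaging), Shepard's inverse-distance weighting is the easiest to sketch: each dense-grid motion vector is a weighted average of the sparse control-point vectors, with weights proportional to inverse distance raised to a power. The 2D control points and vectors below are invented; the study interpolates a 3D cardiac motion vector field.

```python
import numpy as np

# Shepard (inverse-distance-weighted) interpolation of a sparse motion vector
# field: w_i(q) ~ 1 / d(q, p_i)^power, normalized to sum to 1 per query point.
def shepard(ctrl_pts, ctrl_vecs, query, power=2.0, eps=1e-12):
    d2 = ((query[:, None, :] - ctrl_pts[None, :, :]) ** 2).sum(-1)
    w = 1.0 / (d2 ** (power / 2.0) + eps)       # eps guards exact coincidence
    w /= w.sum(axis=1, keepdims=True)
    return w @ ctrl_vecs

ctrl = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vecs = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, -1.0], [-1.0, 0.0]])
q = np.array([[0.0, 0.0],      # on a control point: reproduces its vector
              [0.5, 0.5]])     # equidistant from all: plain average
dense = shepard(ctrl, vecs, q)
```

Shepard's method interpolates the control values exactly but produces flat spots near control points, which is one reason the study also evaluates thin-plate splines and smoothed weighting.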
Reconstructing photorealistic 3D models from image sequence using domain decomposition method
NASA Astrophysics Data System (ADS)
Xiong, Hanwei; Pan, Ming; Zhang, Xiangwei
2009-11-01
In the fields of industrial design, artistic design and heritage conservation, physical objects are usually digitalized by reverse engineering through some 3D scanning methods. Structured light and photogrammetry are two main methods to acquire 3D information, and both are expensive. Even when these expensive instruments are used, photorealistic 3D models are seldom available. In this paper, a new method to reconstruct photorealistic 3D models using a single camera is proposed. A square plate glued with coded marks is used to place the objects, and a sequence of about 20 images is taken. From the coded marks, the images are calibrated, and a snake algorithm is used to segment the object from the background. A rough 3D model is obtained using a shape-from-silhouettes algorithm. The silhouettes are decomposed into a combination of convex curves, which are used to partition the rough 3D model into convex mesh patches. For each patch, the multi-view photo-consistency constraints and smoothness regularizations are expressed as a finite element formulation, which can be resolved locally, with information exchanged along the patch boundaries. The rough model is deformed into a fine 3D model through such a domain decomposition finite element method. Textures are assigned to each element mesh, and a photorealistic 3D model is finally obtained. A toy pig is used to verify the algorithm, and the results are encouraging.
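The shape-from-silhouettes step can be illustrated by simple voxel carving: a voxel survives only if its projection falls inside the silhouette of every view. A toy sketch with two orthographic views and hypothetical circular silhouettes (not the paper's calibrated setup):

```python
# Orthographic shape-from-silhouettes (visual hull) by voxel carving.
def inside_circle(u, v, r=0.4):
    return u * u + v * v <= r * r

# Silhouettes as functions of voxel coordinates: one camera along z, one along x.
views = [lambda x, y, z: inside_circle(x, y),   # view down the z axis
         lambda x, y, z: inside_circle(y, z)]   # view down the x axis

n = 20                       # voxels per axis in the unit cube [-0.5, 0.5]^3
step = 1.0 / n
hull = []
for i in range(n):
    for j in range(n):
        for k in range(n):
            x = (i + 0.5) * step - 0.5
            y = (j + 0.5) * step - 0.5
            z = (k + 0.5) * step - 0.5
            if all(sil(x, y, z) for sil in views):   # keep only consistent voxels
                hull.append((x, y, z))

carved_fraction = len(hull) / float(n ** 3)
```

With these two perpendicular circular silhouettes the hull approximates a Steinmetz solid, roughly a third of the cube; the paper then refines such a rough model by photo-consistency.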
NASA Astrophysics Data System (ADS)
Shi, Lei; Wang, Z. J.
2015-08-01
Adjoint-based mesh adaptive methods are capable of distributing computational resources to areas which are important for predicting an engineering output. In this paper, we develop an adjoint-based h-adaptation approach based on the high-order correction procedure via reconstruction formulation (CPR) to minimize the output or functional error. A dual-consistent CPR formulation of hyperbolic conservation laws is developed and its dual consistency is analyzed. Super-convergent functional and error estimates for the output with the CPR method are obtained. Factors affecting the dual consistency, such as the solution point distribution, correction functions, boundary conditions and the discretization approach for the non-linear flux divergence term, are studied. The presented method is then used to perform simulations for the 2D Euler and Navier-Stokes equations with mesh adaptation driven by the adjoint-based error estimate. Several numerical examples demonstrate the ability of the presented method to dramatically reduce the computational cost compared with uniform grid refinement.
NASA Astrophysics Data System (ADS)
Le Touz, Nicolas; Dumoulin, Jean; Soldovieri, Francesco
2016-04-01
In this numerical study we present an approach that allows a priori information to be introduced into a method for identifying the internal thermal property field of a thick wall using infrared thermography measurements. This method is based on a coupling with an electromagnetic reconstruction method whose data are obtained from measurements of Ground Penetrating Radar (GPR) ([1], [2]). This new method aims at improving the accuracy of reconstructions performed by using only the thermal reconstruction method under quasi-periodic natural solicitation ([3], [4]). Indeed, these thermal reconstructions, without a priori information, have the disadvantage of being performed on the entire studied wall. Using information from the GPR, it becomes possible to focus on the internal zones that may contain defects. These areas are obtained by defining subdomains around remarkable points identified in the GPR reconstruction and considered as belonging to a discontinuity. For thermal reconstruction without a priori information, we need to minimize a functional equal to a quadratic residual formed from the difference between the measurements and the results of the direct model. By defining search fields around these potential defects, and thus by constraining the thermal parameters away from them, we add information to the reconstruction problem. The minimization of the functional is then modified through the contribution of these constraints. We do not seek only to minimize a residual, but to minimize the overall residual and constraints, which changes the direction followed by the optimization algorithm in the space of thermal parameters to reconstruct. Providing a priori information may then yield reconstructions with higher residuals but better-estimated thermal parameters, whether for locating potential defects or for the reconstructed values of these parameters. In particular, it is the case for air defects or more generally for defects having a
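The modified minimization described here, a data residual plus constraints that hold the thermal parameters at the background value outside the GPR-flagged zones, can be sketched on a toy 1D identification problem; all values below are hypothetical and the direct model is reduced to the identity:

```python
# Toy constrained identification: estimate a parameter profile theta from
# noisy data, penalizing deviations from the background value (1.0) outside
# the search zone suggested by the GPR reconstruction.
import random
random.seed(0)

n = 40
zone = set(range(15, 25))                  # indices flagged as a possible defect
true_theta = [1.0 + (0.5 if i in zone else 0.0) for i in range(n)]
data = [t + random.gauss(0.0, 0.02) for t in true_theta]

alpha, lr = 5.0, 0.1                       # constraint weight, descent step
theta = [1.0] * n                          # start from the background value
for _ in range(500):
    for i in range(n):
        g = 2.0 * (theta[i] - data[i])             # gradient of the residual
        if i not in zone:
            g += 2.0 * alpha * (theta[i] - 1.0)    # constraint outside the zone
        theta[i] -= lr * g
```

Inside the zone the estimate follows the data freely; outside, the constraint pulls it back toward the background, which is exactly the change of descent direction the abstract describes.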
Zhou, Yi-Jun; Yunus, Akbar; Tian, Zheng; Chen, Jiang-Tao; Wang, Chong; Xu, Lei-Lei
2016-01-01
Hemipelvic resections for primary bone tumours require reconstruction to restore weight bearing along anatomic axes. However, reconstruction of the pelvic arch remains a major surgical challenge because of the high rate of associated complications. We used the pedicle screw-rod system to reconstruct the pelvis, and the purpose of this investigation was to assess the oncological outcome, functional outcome and complication rate following this procedure. The purpose of this study was to investigate the operative indications and technique of the pedicle screw-rod system in reconstruction of the stability of the sacroiliac joint after resection of sacroiliac joint tumours. The average MSTS (Musculoskeletal Tumour Society) score was 26.5 at either three months after surgery or at the latest follow-up. Seven patients had surgery-related complications, including wound dehiscence in one, infection in two, local necrosis in four (including infection in two), sciatic nerve palsy in one and pubic symphysis subluxation in one. There was no screw loosening or deep vein thrombosis occurring in this series. Using a pedicle screw-rod after resection of a sacroiliac joint tumour is an acceptable method of pelvic reconstruction because of its reduced risk of complications and satisfactory functional outcome, as well as its feasibility of reconstruction for type IV pelvis tumour resection without elaborate preoperative customisation. Level of evidence: Level IV, therapeutic study. PMID:27095944
Mu, Zhiping; Hong, Baoming; Li, Shimin; Liu, Yi-Hwa
2009-01-01
Coded aperture imaging for two-dimensional (2D) planar objects has been investigated extensively in the past, whereas little success has been achieved in imaging 3D objects using this technique. In this article, the authors present a novel method of 3D single photon emission computerized tomography (SPECT) reconstruction for near-field coded aperture imaging. Multiangular coded aperture projections are acquired and a stack of 2D images is reconstructed separately from each of the projections. Secondary projections are subsequently generated from the reconstructed image stacks based on the geometry of parallel-hole collimation and the variable magnification of near-field coded aperture imaging. Sinograms of cross-sectional slices of 3D objects are assembled from the secondary projections, and the ordered subset expectation and maximization algorithm is employed to reconstruct the cross-sectional image slices from the sinograms. Experiments were conducted using a customized capillary tube phantom and a micro hot rod phantom. Imaged at approximately 50 cm from the detector, hot rods in the phantom with diameters as small as 2.4 mm could be discerned in the reconstructed SPECT images. These results have demonstrated the feasibility of the authors’ 3D coded aperture image reconstruction algorithm for SPECT, representing an important step in their effort to develop a high sensitivity and high resolution SPECT imaging system. PMID:19544769
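The final reconstruction step relies on ordered-subsets expectation maximization; its core update can be sketched as plain MLEM (OSEM with a single subset) on a toy two-pixel, two-bin system. The matrix and data below are illustrative, not from the article:

```python
# Minimal MLEM update: x_j <- x_j * backprojection(y / A x)_j / sensitivity_j.
def mlem(A, y, n_iter=500):
    n_pix = len(A[0])
    x = [1.0] * n_pix                                  # uniform initial image
    sens = [sum(A[i][j] for i in range(len(A))) for j in range(n_pix)]
    for _ in range(n_iter):
        yhat = [sum(A[i][j] * x[j] for j in range(n_pix)) for i in range(len(A))]
        for j in range(n_pix):
            back = sum(A[i][j] * y[i] / yhat[i] for i in range(len(A)))
            x[j] *= back / sens[j]                     # multiplicative EM update
    return x

A = [[1.0, 0.2],
     [0.2, 1.0]]                                       # tiny system matrix
x_true = [3.0, 1.0]
y = [sum(a * t for a, t in zip(row, A and [3.0, 1.0])) for row in A]  # noise-free data
x_rec = mlem(A, y)
```

On consistent noise-free data the iterates converge to the true activity; OSEM accelerates this by cycling over projection subsets.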
A new combined prior based reconstruction method for compressed sensing in 3D ultrasound imaging
NASA Astrophysics Data System (ADS)
Uddin, Muhammad S.; Islam, Rafiqul; Tahtali, Murat; Lambert, Andrew J.; Pickering, Mark R.
2015-03-01
Ultrasound (US) imaging is one of the most popular medical imaging modalities, with 3D US imaging gaining popularity recently due to its considerable advantages over 2D US imaging. However, as it is limited by long acquisition times and the huge amount of data processing it requires, methods for reducing these factors have attracted considerable research interest. Compressed sensing (CS) is one of the best candidates for accelerating the acquisition rate and reducing the data processing time without degrading image quality. However, CS is prone to introduce noise-like artefacts due to random under-sampling. To address this issue, we propose a combined prior-based reconstruction method for 3D US imaging. A Laplacian mixture model (LMM) constraint in the wavelet domain is combined with a total variation (TV) constraint to create a new regularization prior. An experimental evaluation conducted to validate our method using synthetic 3D US images shows that it performs better than other approaches in terms of both qualitative and quantitative measures.
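The structure of prior-regularized CS recovery can be sketched with ISTA, using a plain l1 (soft-thresholding) prior as a stand-in for the paper's combined LMM/TV prior; the under-determined system and 1-sparse signal below are hypothetical:

```python
# ISTA: gradient step on the data term, then proximal (soft-threshold) step
# on the sparsity prior.
def soft(v, t):
    return max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0)

def ista(A, y, lam=0.05, step=0.1, n_iter=500):
    n = len(A[0])
    x = [0.0] * n
    for _ in range(n_iter):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(len(A))]
        for j in range(n):
            g = sum(A[i][j] * r[i] for i in range(len(A)))   # gradient of data term
            x[j] = soft(x[j] - step * g, step * lam)          # prox of the prior
    return x

# 2 measurements, 3 unknowns, 1-sparse truth x_true = [0, 2, 0].
A = [[1.0, 0.5, 0.3],
     [0.2, 1.0, 0.4]]
y = [0.5 * 2.0, 1.0 * 2.0]
x_rec = ista(A, y)
```

The recovered coefficient settles near 1.96 rather than 2.0, the familiar small l1 shrinkage bias; richer priors such as the LMM aim to reduce exactly this kind of bias while still suppressing under-sampling artefacts.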
Jeong, Youngmo; Kim, Jonghyun; Yeom, Jiwoon; Lee, Chang-Kun; Lee, Byoungho
2015-12-10
In this paper, we develop a real-time depth controllable integral imaging system. With a high-frame-rate camera and a focus controllable lens, light fields from various depth ranges can be captured. According to the image plane of the light field camera, the objects in virtual and real space are recorded simultaneously. The captured light field information is converted to the elemental image in real time without pseudoscopic problems. In addition, we derive characteristics and limitations of the light field camera as a 3D broadcasting capture device using precise geometric optics. With further analysis, the implemented system provides more accurate light fields than existing devices without depth distortion. We adapt an f-number matching method at the capture and display stages to record a more exact light field and solve depth distortion, respectively. The algorithm allows the users to adjust the pixel mapping structure of the reconstructed 3D image in real time. The proposed method presents a possibility of a handheld real-time 3D broadcasting system in a cheaper and more applicable way compared to previous methods. PMID:26836855
Maximum-entropy reconstruction method for moment-based solution of the Boltzmann equation
NASA Astrophysics Data System (ADS)
Summy, Dustin; Pullin, Dale
2013-11-01
We describe a method for a moment-based solution of the Boltzmann equation. This starts with moment equations for a 10 + 9N, N = 0, 1, 2, ... moment representation. The partial-differential equations (PDEs) for these moments are unclosed, containing both higher-order moments and molecular-collision terms. These are evaluated using a maximum-entropy construction of the velocity distribution function f(c, x, t), using the known moments, within a finite-box domain of single-particle-velocity (c) space. Use of a finite domain alleviates known problems (Junk and Unterreiter, Continuum Mech. Thermodyn., 2002) concerning existence and uniqueness of the reconstruction. Unclosed moments are evaluated with quadrature while collision terms are calculated using a Monte-Carlo method. This allows integration of the moment PDEs in time. Illustrative examples will include zero-space-dimensional relaxation of f(c, t) from a Mott-Smith-like initial condition toward equilibrium and one-space-dimensional, finite Knudsen number, planar Couette flow. Comparison with results using the direct-simulation Monte-Carlo method will be presented.
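The maximum-entropy reconstruction on a finite domain can be illustrated in one velocity dimension: find f(c) = exp(l0 + l1 c) on [-1, 1] that matches given moments, by descending the convex dual with midpoint-rule quadrature. This is a toy stand-in for the paper's construction, with hypothetical target moments:

```python
import math

def maxent_1d(m0, m1, n_quad=200, lr=0.2, n_iter=3000):
    """Match moments (m0, m1) on [-1, 1] with f(c) = exp(l0 + l1*c).
    The dual gradient is (current moments - target moments)."""
    l0, l1 = 0.0, 0.0
    h = 2.0 / n_quad
    cs = [-1.0 + (i + 0.5) * h for i in range(n_quad)]   # midpoint nodes
    for _ in range(n_iter):
        f = [math.exp(l0 + l1 * c) for c in cs]
        mom0 = h * sum(f)
        mom1 = h * sum(c * fc for c, fc in zip(cs, f))
        l0 -= lr * (mom0 - m0)
        l1 -= lr * (mom1 - m1)
    return l0, l1

l0, l1 = maxent_1d(1.0, 0.2)     # unit density, positive mean velocity

# Check: recompute the moments of the reconstructed distribution.
h = 2.0 / 200
cs = [-1.0 + (i + 0.5) * h for i in range(200)]
f = [math.exp(l0 + l1 * c) for c in cs]
m0_hat = h * sum(f)
m1_hat = h * sum(c * fc for c, fc in zip(cs, f))
```

The finite interval plays the role of the paper's finite velocity box: it keeps the exponential integrable for any multipliers, sidestepping the Junk-Unterreiter existence issues on unbounded domains.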
Loomis, Eric; Grim, Gary; Wilde, Carl; Wilke, Mark; Wilson, Doug; Morgan, George; Tregillis, Ian; Clark, David; Finch, Joshua; Fittinghoff, D; Bower, D
2010-01-01
Development of analysis techniques for neutron imaging at the National Ignition Facility (NIF) is an important and difficult task for the detailed understanding of high-neutron-yield inertial confinement fusion (ICF) implosions. These methods, once developed, must provide accurate images of the hot and cold fuel so that information about the implosion, such as symmetry and areal density, can be extracted. We are currently considering multiple analysis pathways for obtaining this source distribution of neutrons given a measured pinhole image with a scintillator and camera system. One method under development involves the numerical inversion of the pinhole image using knowledge of neutron transport through the pinhole aperture from Monte Carlo simulations [E. Loomis et al. IFSA 2009]. We are currently striving to apply the technique to real data by incorporating a series of realistic effects that will be present in experimental images. These include various sources of noise, misalignment uncertainties at both the source and image planes, as well as scintillator and camera blurring. Some tests on the quality of image reconstructions have also been performed based on point resolution and Legendre mode improvement of recorded images. So far, the method has proven sufficient to overcome most of these experimental effects with continued development.
Wagner, Roland; Helin, Tapio; Obereder, Andreas; Ramlau, Ronny
2016-02-20
The imaging quality of modern ground-based telescopes such as the planned European Extremely Large Telescope is affected by atmospheric turbulence. In consequence, they heavily depend on stable and high-performance adaptive optics (AO) systems. Using measurements of incoming light from guide stars, an AO system compensates for the effects of turbulence by adjusting so-called deformable mirror(s) (DMs) in real time. In this paper, we introduce a novel reconstruction method for ground layer adaptive optics. In the literature, a common approach to this problem is to use Bayesian inference in order to model the specific noise structure appearing due to spot elongation. This approach leads to large coupled systems with high computational effort. Recently, fast solvers of linear order, i.e., with computational complexity O(n), where n is the number of DM actuators, have emerged. However, the quality of such methods typically degrades in low flux conditions. Our key contribution is to achieve the high quality of the standard Bayesian approach while at the same time maintaining the linear order speed of the recent solvers. Our method is based on performing a separate preprocessing step before applying the cumulative reconstructor (CuReD). The efficiency and performance of the new reconstructor are demonstrated using the OCTOPUS, the official end-to-end simulation environment of the ESO for extremely large telescopes. For more specific simulations we also use the MOST toolbox. PMID:26906596
Methods to mitigate data truncation artifacts in multi-contrast tomosynthesis image reconstructions
NASA Astrophysics Data System (ADS)
Garrett, John; Ge, Yongshuai; Li, Ke; Chen, Guang-Hong
2015-03-01
Differential phase contrast imaging is a promising new image modality that utilizes the refraction rather than the absorption of x-rays to image an object. A Talbot-Lau interferometer may be used to permit differential phase contrast imaging with a conventional medical x-ray source and detector. However, the gratings currently fabricated for these interferometers are often relatively small. As a result, data truncation image artifacts are often observed in a tomographic acquisition and reconstruction. When data are truncated in x-ray absorption imaging, methods have been introduced to mitigate the truncation artifacts. However, the same strategy to mitigate absorption truncation artifacts may not be appropriate for differential phase contrast or dark field tomographic imaging. In this work, several new methods to mitigate data truncation artifacts in a multi-contrast imaging system have been proposed and evaluated for tomosynthesis data acquisitions. The proposed methods were validated using experimental data acquired for a bovine udder as well as several cadaver breast specimens using a benchtop system at our facility.
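One standard absorption-CT remedy the text alludes to is smooth lateral extrapolation of a truncated projection before filtering, so the ramp filter never sees an artificial edge. A minimal sketch with a cosine taper; the projection values are hypothetical:

```python
import math

def taper_extend(proj, pad):
    """Extend a laterally truncated projection on both sides with a smooth
    cosine taper that decays from the edge value to zero over `pad` samples."""
    left, right = proj[0], proj[-1]
    lpad = [left * 0.5 * (1.0 + math.cos(math.pi * (pad - i) / pad))
            for i in range(pad)]
    rpad = [right * 0.5 * (1.0 + math.cos(math.pi * (i + 1) / pad))
            for i in range(pad)]
    return lpad + proj + rpad

p = [2.0, 3.0, 3.0, 2.5]        # truncated projection, nonzero at both edges
q = taper_extend(p, pad=8)
```

The extended signal starts and ends at zero while leaving the measured samples untouched; the abstract's point is that phase-contrast (differential) and dark-field data need different extrapolation models than this absorption-style taper.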
Chun, Se Young; Fessler, Jeffrey A.; Dewaraja, Yuni K.
2013-01-01
Quantitative SPECT techniques are important for many applications including internal emitter therapy dosimetry where accurate estimation of total target activity and activity distribution within targets are both potentially important for dose-response evaluations. We investigated non-local means (NLM) post-reconstruction filtering for accurate I-131 SPECT estimation of both total target activity and the 3D activity distribution. We first investigated activity estimation versus number of ordered-subsets expectation-maximization (OSEM) iterations. We performed simulations using the XCAT phantom with tumors containing a uniform and a non-uniform activity distribution, and measured the recovery coefficient (RC) and the root mean squared error (RMSE) to quantify total target activity and activity distribution, respectively. We observed that using more OSEM iterations is essential for accurate estimation of RC, but may or may not improve RMSE. We then investigated various post-reconstruction filtering methods to suppress noise at high iteration while preserving image details so that both RC and RMSE can be improved. Recently, NLM filtering methods have shown promising results for noise reduction. Moreover, NLM methods using high-quality side information can improve image quality further. We investigated several NLM methods with and without CT side information for I-131 SPECT imaging and compared them to conventional Gaussian filtering and to unfiltered methods. We studied four different ways of incorporating CT information in the NLM methods: two known (NLM CT-B and NLM CT-M) and two newly considered (NLM CT-S and NLM CT-H). We also evaluated the robustness of NLM filtering using CT information to erroneous CT. NLM CT-S and NLM CT-H yielded comparable RC values to unfiltered images while substantially reducing RMSE. NLM CT-S achieved −2.7 to 2.6% increase of RC compared to no filtering and NLM CT-H yielded up to 6% decrease in RC while other methods yielded lower RCs
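The NLM idea itself is compact: each value is replaced by a weighted average of values whose surrounding patches look similar, so edges survive where Gaussian smoothing would blur them. A 1D sketch with a hypothetical noisy step signal; CT side information, as in the variants studied above, would simply contribute extra terms to the patch distance:

```python
import math

def nlm_1d(sig, patch=1, h=0.2):
    """1D non-local means: weights decay with squared patch distance."""
    n = len(sig)
    out = []
    for i in range(n):
        num = den = 0.0
        for j in range(n):
            d2 = 0.0
            for k in range(-patch, patch + 1):          # compare small patches
                a = sig[min(max(i + k, 0), n - 1)]      # clamped boundaries
                b = sig[min(max(j + k, 0), n - 1)]
                d2 += (a - b) ** 2
            w = math.exp(-d2 / (h * h))
            num += w * sig[j]
            den += w
        out.append(num / den)
    return out

noisy = [0.0, 0.05, -0.05, 0.0, 1.0, 0.95, 1.05, 1.0]   # noisy two-level signal
smoothed = nlm_1d(noisy)
```

Samples on each side of the step average only with their own side, which is why NLM can suppress the noise of high-iteration OSEM while preserving the RC (edge contrast) that Gaussian filtering degrades.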
Lei Liu; Feng Zhou; Xue-Ru Bai; Ming-Liang Tao; Zi-Jing Zhang
2016-04-01
Traditionally, the factorization method is applied to reconstruct the 3D geometry of a target from its sequential inverse synthetic aperture radar images. However, this method requires performing cross-range scaling to all the sub-images and thus has a large computational burden. To tackle this problem, this paper proposes a novel method for joint cross-range scaling and 3D geometry reconstruction of steadily moving targets. In this method, we model the equivalent rotational angular velocity (RAV) by a linear polynomial with time, and set its coefficients randomly to perform sub-image cross-range scaling. Then, we generate the initial trajectory matrix of the scattering centers, and solve the 3D geometry and projection vectors by the factorization method with relaxed constraints. After that, the coefficients of the polynomial are estimated from the projection vectors to obtain the RAV. Finally, the trajectory matrix is re-scaled using the estimated rotational angle, and accurate 3D geometry is reconstructed. The two major steps, i.e., the cross-range scaling and the factorization, are performed repeatedly to achieve precise 3D geometry reconstruction. Simulation results have proved the effectiveness and robustness of the proposed method. PMID:26886991
Mosaic gene network modelling identified new regulatory mechanisms in HCV infection.
Popik, Olga V; Petrovskiy, Evgeny D; Mishchenko, Elena L; Lavrik, Inna N; Ivanisenko, Vladimir A
2016-06-15
Modelling of gene networks is widely used in systems biology to study the functioning of complex biological systems. Most of the existing mathematical modelling techniques are useful for analysis of well-studied biological processes, for which information on rates of reactions is available. However, complex biological processes such as those determining the phenotypic traits of organisms or pathological disease processes, including pathogen-host interactions, involve complicated cross-talk between interacting networks. Furthermore, the intrinsic details of the interactions between these networks are often missing. In this study, we developed an approach, which we call mosaic network modelling, that allows the combination of independent mathematical models of gene regulatory networks and, thereby, description of complex biological systems. The advantage of this approach is that it allows us to generate the integrated model despite the fact that information on molecular interactions between parts of the model (so-called mosaic fragments) might be missing. To generate a mosaic mathematical model, we used control theory and mathematical models, written in the form of a system of ordinary differential equations (ODEs). In the present study, we investigated the efficiency of this method in modelling the dynamics of more than 10,000 simulated mosaic regulatory networks consisting of two pieces. Analysis revealed that this approach was highly efficient, as the mean deviation of the dynamics of mosaic network elements from the behaviour of the initial parts of the model was less than 10%. It turned out that for construction of the control functional, data on perturbation of one or two vertices of the mosaic piece are sufficient. Further, we used the developed method to construct a mosaic gene regulatory network including hepatitis C virus (HCV) as the first piece and the tumour necrosis factor (TNF)-induced apoptosis and NF-κB induction pathways as the second piece. Thus
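The mosaic idea, joining independently specified ODE fragments so that one fragment's state drives another's production term, can be sketched with two toy gene fragments integrated together by Euler steps. The rates are hypothetical and the control functional is omitted:

```python
# Fragment A (gene x) and fragment B (gene y) were specified independently;
# the "mosaic" model couples them by letting A's state drive B's production.
def simulate(t_end=50.0, dt=0.01):
    x, y = 0.0, 0.0
    steps = int(t_end / dt)
    for _ in range(steps):
        dx = 1.0 - 0.5 * x          # fragment A: constant production, decay
        dy = 0.8 * x - 0.4 * y      # fragment B: production driven by x
        x += dt * dx
        y += dt * dy
    return x, y

x_ss, y_ss = simulate()             # steady states: x -> 2.0, y -> 4.0
```

In the paper the joint is not a fixed term but a control functional fitted from perturbation data of one or two vertices of the mosaic piece; this sketch only shows the structural idea of composing fragments without knowing their internal cross-talk.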
Muon Energy Reconstruction Through the Multiple Scattering Method in the NO$\mathrm{\nu}$A Detector
Psihas Olmedo, Silvia Fernanda
2015-01-01
Neutrino energy measurements are a crucial component in the experimental study of neutrino oscillations. These measurements are done through the reconstruction of neutrino interactions and energy measurements of their products. This thesis presents the development of a technique to reconstruct the energy of muons from neutrino interactions in the NO$\mathrm{\nu}$A detector.
Reconstruction of dynamical perturbations in optical systems by opto-mechanical simulation methods
NASA Astrophysics Data System (ADS)
Gilbergs, H.; Wengert, N.; Frenner, K.; Eberhard, P.; Osten, W.
2012-03-01
High-performance objectives pose very strict limitations on errors present in the system. External mechanical influences can induce structural vibrations in such a system, leading to small deviations of the position and tilt of the optical components inside the objective from the undisturbed system. This can have an impact on the imaging performance, causing blurred images or broadened structures in lithography processes. A concept to detect the motion of the components of an optical system is presented and demonstrated on a simulated system. The method is based on a combination of optical simulation together with mechanical simulation and inverse problem theory. On the optical side raytracing is used for the generation of wavefront data of the system in its current state. A Shack-Hartmann sensor is implemented as a model to gather this data. The sensor can capture wavefront data with high repetition rates to resolve the periodic motion of the vibrating parts. The mechanical side of the system is simulated using multibody dynamics. The system is modeled as a set of rigid bodies (lenses, mounts, barrel), represented by rigid masses connected by springs that represent the coupling between the individual parts. External excitations cause the objective to vibrate. The vibration can be characterized by the eigenmodes and eigenfrequencies of the system. Every state of the movement during the vibration can be expressed as a linear combination of the eigenmodes. The reconstruction of the system geometry from the wavefront data is an inverse problem. Therefore, Tikhonov regularization is used in the process in order to achieve more accurate reconstruction results. This method relies on a certain amount of a-priori information on the system. The mechanical properties of the system are a great source of such information. It is taken into account by performing the calculation in the coordinate system spanned by the eigenmodes of the objective and using information on the
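The Tikhonov-regularized inversion at the core of this reconstruction can be sketched on a 2x2 problem: minimize ||Am - w||^2 + alpha ||m||^2 via the normal equations, with a nearly rank-deficient matrix standing in for the wavefront-to-eigenmode sensitivity map. All numbers are hypothetical:

```python
def tikhonov_2x2(A, w, alpha):
    """Solve (A^T A + alpha I) m = A^T w in closed form for a 2x2 system."""
    n00 = A[0][0] ** 2 + A[1][0] ** 2 + alpha
    n01 = A[0][0] * A[0][1] + A[1][0] * A[1][1]
    n11 = A[0][1] ** 2 + A[1][1] ** 2 + alpha
    b0 = A[0][0] * w[0] + A[1][0] * w[1]
    b1 = A[0][1] * w[0] + A[1][1] * w[1]
    det = n00 * n11 - n01 * n01
    return [(n11 * b0 - n01 * b1) / det, (n00 * b1 - n01 * b0) / det]

A = [[1.0, 1.0],
     [1.0, 1.001]]       # nearly rank-deficient sensitivity matrix
w = [2.0, 2.001]         # wavefront data generated by true modes m = (1, 1)
m = tikhonov_2x2(A, w, alpha=1e-4)
```

The regularization damps the direction the data barely constrain; working in the eigenmode basis, as the text describes, makes that damping correspond to physically implausible vibration shapes rather than arbitrary coordinates.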
Fu, Lin; Qi, Jinyi
2010-01-01
Purpose: The quality of tomographic images is directly affected by the system model being used in image reconstruction. An accurate system matrix is desirable for high-resolution image reconstruction, but it often leads to high computation cost. In this work the authors present a maximum a posteriori reconstruction algorithm with residual correction to alleviate the tradeoff between the model accuracy and the computation efficiency in image reconstruction. Methods: Unlike conventional iterative methods that assume that the system matrix is accurate, the proposed method reconstructs an image with a simplified system matrix and then removes the reconstruction artifacts through residual correction. Since the time-consuming forward and back projection operations using the accurate system matrix are not required in every iteration, image reconstruction time can be greatly reduced. Results: The authors apply the new algorithm to high-resolution positron emission tomography reconstruction with an on-the-fly Monte Carlo (MC) based positron range model. Computer simulations show that the new method is an order of magnitude faster than the traditional MC-based method, whereas the visual quality and quantitative accuracy of the reconstructed images are much better than that obtained by using the simplified system matrix alone. Conclusions: The residual correction method can reconstruct high-resolution images and is computationally efficient. PMID:20229880
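The residual-correction loop has a simple skeleton: reconstruct with the cheap model, forward-project with the accurate model, and reconstruct the residual to cancel the artifacts the simplification left behind. A toy sketch where the simplified reconstructor is the identity and the accurate model adds a small blur (both hypothetical stand-ins):

```python
def solve_simple(y):
    """Reconstruction with the simplified system model (here: identity)."""
    return list(y)

def forward_accurate(x):
    """Accurate forward model: adds a small circular blur the simple model ignores."""
    n = len(x)
    return [0.8 * x[i] + 0.1 * x[(i - 1) % n] + 0.1 * x[(i + 1) % n]
            for i in range(n)]

x_true = [0.0, 0.0, 5.0, 0.0, 0.0]
y = forward_accurate(x_true)              # measured data

x = solve_simple(y)                       # initial image: blurred
for _ in range(20):                       # residual-correction iterations
    r = [yi - pi for yi, pi in zip(y, forward_accurate(x))]
    x = [xi + ci for xi, ci in zip(x, solve_simple(r))]
```

The expensive accurate model (in the article, an on-the-fly Monte Carlo positron range model) is applied only to form residuals, while the cheap model does the heavy reconstruction work; that is the source of the reported order-of-magnitude speedup.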
NASA Technical Reports Server (NTRS)
Gherlone, Marco; Cerracchio, Priscilla; Mattone, Massimiliano; Di Sciuva, Marco; Tessler, Alexander
2011-01-01
A robust and efficient computational method for reconstructing the three-dimensional displacement field of truss, beam, and frame structures, using measured surface-strain data, is presented. Known as shape sensing, this inverse problem has important implications for real-time actuation and control of smart structures, and for monitoring of structural integrity. The present formulation, based on the inverse Finite Element Method (iFEM), uses a least-squares variational principle involving strain measures of Timoshenko theory for stretching, torsion, bending, and transverse shear. Two inverse-frame finite elements are derived using interdependent interpolations whose interior degrees-of-freedom are condensed out at the element level. In addition, relationships between the order of kinematic-element interpolations and the number of required strain gauges are established. As an example problem, a thin-walled, circular cross-section cantilevered beam subjected to harmonic excitations in the presence of structural damping is modeled using iFEM, where, to simulate strain-gauge values and to provide reference displacements, a high-fidelity MSC/NASTRAN shell finite element model is used. Examples of low and high-frequency dynamic motion are analyzed and the solution accuracy examined with respect to various levels of discretization and the number of strain gauges.
Application of damage detection methods using passive reconstruction of impulse response functions.
Tippmann, J D; Zhu, X; Lanza di Scalea, F
2015-02-28
In structural health monitoring (SHM), using only the noise already present in a structure as the excitation source has long been an attractive goal. The advances in understanding cross-correlations in ambient noise in the past decade, as well as new understanding in damage indication and other advanced signal processing methods, have continued to drive new research into passive SHM systems. Because passive systems take advantage of the existing noise mechanisms in a structure, offshore wind turbines are a particularly attractive application due to the noise created by the various aerodynamic and wave loading conditions. Two damage detection methods using a passively reconstructed impulse response function, or Green's function, are presented. Damage detection is first studied using the reciprocity of the impulse response functions, where damage introduces new nonlinearities that break down the similarity in the causal and anticausal wave components. Damage detection and localization are then studied using a matched-field processing technique that aims to spatially locate sources that identify a change in the structure. Results from experiments conducted on an aluminium plate and wind turbine blade with simulated damage are also presented. PMID:25583863
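The passive reconstruction step rests on a classical result: the cross-correlation of diffuse noise recorded at two sensors peaks at the travel time between them, yielding a band-limited estimate of the impulse response. A toy 1D sketch with a known sample delay (synthetic data, not the experiments above):

```python
import random
random.seed(1)

n, delay = 400, 7
noise = [random.gauss(0.0, 1.0) for _ in range(n)]
rec_a = noise                                     # sensor A records the noise field
rec_b = [0.0] * delay + noise[:n - delay]         # sensor B: same field, 7 samples later

def xcorr(a, b, lag):
    """Cross-correlation of a and b at a non-negative lag."""
    return sum(a[i] * b[i + lag] for i in range(len(a) - lag))

best_lag = max(range(0, 20), key=lambda L: xcorr(rec_a, rec_b, L))
```

The correlation peak at the propagation delay is the causal arrival of the reconstructed Green's function; the reciprocity method above compares this causal part with its anticausal mirror to flag damage-induced nonlinearity.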
A reconstruction method of electron density distribution in the equatorial region of magnetosphere
NASA Astrophysics Data System (ADS)
Shastun, V. V.; Agapitov, O. V.
2015-12-01
A method for the estimation of electron density from the ratio of the wave magnetic and electric field amplitudes of whistler waves is developed. Near the geomagnetic equator, whistler wave normals are mainly close to the direction of the background magnetic field. The whistler dispersion relation in the parallel-propagation approximation is used in this method. Signals registered by the STAFF-SA instrument on board the Cluster spacecraft are used for electron density reconstruction. The Cluster spacecraft crossed the plasmasphere at all local times and in a wide range of latitudes over 10 years (2001-2010), covering well the frequency range of both plasmaspheric hiss and lower band chorus emissions in a vicinity of the geomagnetic equator. The proposed technique can be useful for supplementing plasma density statistics obtained from recent probes (such as THEMIS or Van Allen Probes), as well as for reanalysis of statistics derived from continuous measurements of only one or two components of the wave magnetic and electric fields on board spacecraft covering equatorial regions of the magnetosphere.
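A hedged sketch of the estimation principle: for parallel propagation the whistler refractive index satisfies n^2 ≈ fpe^2 / (f (fce - f)), and n = c·Bw/Ew follows from the measured field-amplitude ratio, so fpe, and hence the electron density, can be recovered. The numbers below are illustrative round-trip values, not STAFF-SA data:

```python
import math

C    = 2.998e8      # speed of light, m/s
EPS0 = 8.854e-12    # vacuum permittivity, F/m
ME   = 9.109e-31    # electron mass, kg
QE   = 1.602e-19    # elementary charge, C

def electron_density(bw_over_ew, f, fce):
    """Electron density (m^-3) from the wave B/E amplitude ratio (s/m),
    wave frequency f (Hz) and electron gyrofrequency fce (Hz)."""
    n2 = (C * bw_over_ew) ** 2                 # squared refractive index
    fpe2 = n2 * f * (fce - f)                  # parallel whistler dispersion
    wpe2 = (2.0 * math.pi) ** 2 * fpe2         # plasma frequency squared (rad/s)^2
    return EPS0 * ME * wpe2 / QE ** 2

# Round trip: start from ne = 1e7 m^-3 (10 cm^-3), f = 1 kHz, fce = 2.8 kHz.
ne_in = 1.0e7
fpe = math.sqrt(ne_in * QE ** 2 / (EPS0 * ME)) / (2.0 * math.pi)
f, fce = 1.0e3, 2.8e3
n = math.sqrt(fpe ** 2 / (f * (fce - f)))
ne_out = electron_density(n / C, f, fce)
```

The density estimate therefore needs only one magnetic and one electric wave component plus the background field, which is what makes the method suitable for reanalysis of spacecraft with incomplete field measurements.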
Wu, Ke-Zhu; Jiang, Yun-Gen; Zuo, Ying; Li, Ai-Xiu
2014-06-01
Huatuo reconstruction pill (HTRP) is a traditional Chinese medicine prescription mainly used to treat hemiplegia and the post-operative phase of brain stroke. Existing pharmacological studies have shown that HTRP can inhibit thrombosis in vitro, delay platelet adhesion, dilate blood vessels, and improve microcirculation disturbances. In this paper, we were chiefly concerned with the potential targets of HTRP and tried to identify its active components. Computer-aided drug design methods were employed to search for the active components and to explain the mechanism between the targets and the small molecules at the molecular level. The potential targets of this compound pharmaceutic were identified through relevant pharmacological studies, and three pharmacophore models, involving the platelet-activating factor (PAF) receptor, the angiotensin-converting enzyme (ACE) and the 5-hydroxytryptamine receptor (5-HT2A), were constructed with the Discotech method of Sybyl. Candidate compounds agreeing with the pharmacophore models were then obtained by virtual screening of the known ingredients of HTRP. Based on that, sequence and structure prediction of the unknown targets was carried out by homology modeling, and the resulting models were used for molecular docking with the candidate compounds. Results showed that three compounds, which may prove to be active against these targets, scored higher than the existing corresponding inhibitors after molecular docking: ferulic acid, onjixanthone I and albiflorin. These three molecules may represent significant contributors to the overall efficacy of HTRP against the disease. PMID:25172450
An interface reconstruction method based on an analytical formula for 3D arbitrary convex cells
NASA Astrophysics Data System (ADS)
Diot, Steven; François, Marianne M.
2016-01-01
In this paper, we are interested in an interface reconstruction method for 3D arbitrary convex cells that could be used in multi-material flow simulations, for instance. We assume that the interface is represented by a plane whose normal vector is known, and we focus on the volume-matching step, which consists of finding the plane constant so that the plane splits the cell according to a given volume fraction. We follow the same approach as in the authors' recent publication for 2D arbitrary convex cells in planar and axisymmetric geometries, namely we derive an analytical formula for the volume of the specific prismatoids obtained when decomposing the cell using the planes that are parallel to the interface and pass through all the cell nodes. This formula is used to bracket the interface plane constant, so that the volume-matching problem is reduced to a single prismatoid, in which the same formula is used to find the final solution. The proposed method is tested against a large number of reproducible configurations and shown to be at least five times faster.
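The bracketing idea can be sketched generically: sort the signed node distances along the plane normal (which decomposes the cell into prismatoids), locate the prismatoid containing the target volume, then solve within it. The sketch below substitutes plain bisection for the paper's analytical prismatoid formula; the function names and the callback-based interface are illustrative assumptions.

```python
from bisect import bisect_left

def match_volume(node_distances, volume_below, target, tol=1e-12):
    """Find the plane constant d such that volume_below(d) == target.

    node_distances: signed distances of the cell nodes along the plane
    normal. Sorting them decomposes the cell into prismatoids; the
    bracketing step locates the prismatoid containing the solution.
    (The paper replaces the inner root-find with an analytical formula;
    here we fall back to plain bisection as a generic sketch.)
    volume_below must be monotone non-decreasing in d.
    """
    ds = sorted(node_distances)
    vols = [volume_below(d) for d in ds]      # volumes at node planes
    i = bisect_left(vols, target)             # bracketing prismatoid
    lo = ds[max(i - 1, 0)]
    hi = ds[min(i, len(ds) - 1)]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if volume_below(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For a unit cube with normal along x, the volume below the plane x = d is simply clamp(d, 0, 1), which makes an easy correctness check.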
NASA Astrophysics Data System (ADS)
Gunga, Hanns-Christian; Suthau, Tim; Bellmann, Anke; Friedrich, Andreas; Schwanebeck, Thomas; Stoinski, Stefan; Trippel, Tobias; Kirsch, Karl; Hellwich, Olaf
2007-08-01
Both body mass and surface area are factors determining the essence of any living organism. This should also hold true for an extinct organism such as a dinosaur. The present report discusses the use of a new 3D laser scanner method to establish body masses and surface areas of an Asian elephant (Zoological Museum of Copenhagen, Denmark) and of Plateosaurus engelhardti, a prosauropod from the Upper Triassic, exhibited at the Paleontological Museum in Tübingen (Germany). This method was used to study the effect that slight changes in body shape had on body mass for P. engelhardti. It was established that body volumes varied between 0.79 m3 (slim version) and 1.14 m3 (robust version), resulting in presumable body masses of 630 and 912 kg, respectively. The total body surface areas ranged between 8.8 and 10.2 m2, of which, in both reconstructions of P. engelhardti, ~33% accounts for the thorax area alone. The main difference between the two models lies in the tail and hind limb reconstruction. The tail of the slim version has a surface area of 1.98 m2, whereas that of the robust version has a surface area of 2.73 m2. The body volumes calculated for the slim version were as follows: head 0.006 m3, neck 0.016 m3, fore limbs 0.020 m3, hind limbs 0.08 m3, thoracic cavity 0.533 m3, and tail 0.136 m3. For the robust model, the following volumes were established: head 0.01 m3, neck 0.026 m3, fore limbs 0.025 m3, hind limbs 0.18 m3, thoracic cavity 0.616 m3, and finally, tail 0.28 m3. Based on these body volumes, scaling equations were used to estimate the sizes of the organs of this extinct dinosaur.
Kryuchkov, Victor; Chumak, Vadim; Maceika, Evaldas; Anspaugh, Lynn R; Cardis, Elisabeth; Bakhanova, Elena; Golovanov, Ivan; Drozdovitch, Vladimir; Luckyanov, Nickolas; Kesminiene, Ausrele; Voillequé, Paul; Bouville, André
2009-10-01
Between 1986 and 1990, several hundred thousand workers, called "liquidators" or "clean-up workers," took part in decontamination and recovery activities within the 30-km zone around the Chernobyl nuclear power plant in Ukraine, where a major accident occurred in April 1986. The Chernobyl liquidators were mainly exposed to external ionizing radiation levels that depended primarily on their work locations and the time after the accident when the work was performed. Because individual doses were often monitored inadequately or were not monitored at all for the majority of liquidators, a new method of photon (i.e., gamma and x rays) dose assessment, called "RADRUE" (Realistic Analytical Dose Reconstruction with Uncertainty Estimation), was developed to obtain unbiased and reasonably accurate estimates for use in three epidemiologic studies of hematological malignancies and thyroid cancer among liquidators. The RADRUE program implements a time-and-motion dose-reconstruction method that is flexible and conceptually easy to understand. It includes a large exposure rate database and interpolation and extrapolation techniques to calculate exposure rates at places where liquidators lived and worked within approximately 70 km of the destroyed reactor. The RADRUE technique relies on data collected from subjects' interviews conducted by trained interviewers, and on expert dosimetrists to interpret the information and provide supplementary information, when necessary, based upon their own Chernobyl experience. The RADRUE technique was used to estimate doses from external irradiation, as well as uncertainties, to the bone marrow for 929 subjects and to the thyroid gland for 530 subjects enrolled in epidemiologic studies. Individual bone marrow dose estimates were found to range from less than 1 μGy to 3,300 mGy, with an arithmetic mean of 71 mGy. Individual thyroid dose estimates were lower and ranged from 20 μGy to 507 mGy, with an arithmetic mean of 29 mGy. The
Estep, Robert J.
2012-05-31
We have developed a dynamic image reconstruction method called MVIR (Moving Voxel Image Reconstruction) for lane detection in multilane portal monitor systems. MVIR was evaluated for use in the Fixed Site Detection System, a prototype three-lane portal monitor system for EZ-pass toll plazas. As a baseline, we compared MVIR with a static image reconstruction method in analyzing the same real and simulated data sets. Performance was judged by the distributions of image intensities for source and no-source vehicles over many trials as a function of source strength. We found that MVIR produced significantly better results in all cases. The performance difference was greatest at low count rates, where source/no-source distributions were well separated with the MVIR method, allowing reliable source vehicle identification with a low probability of false positive identifications. Static reconstruction of the same data produced overlapping distributions that made source vehicle identification unreliable. The performance of the static method was acceptable at high count rates. Both algorithms reliably identified two strong sources passing through at nearly the same time.
Baraër, F; Darsonval, V; Lejeune, F; Bochot-Hermouet, B; Rousseau, P
2013-10-01
The eyebrow is an essential anatomical area from a social point of view, so its reconstruction, in case of a skin defect, must be as meticulous as possible, with the least residual sequelae. Capillary density varies greatly from one person to another, and the different methods of restoring this area should absolutely take this into consideration. We review the various techniques of reconstruction according to the sex of the patient and the surface to cover. PMID:23896574
An efficient closed-form design method for nearly perfect reconstruction of non-uniform filter bank.
Kumar, A; Pooja, R; Singh, G K
2016-03-01
In this paper, an efficient closed-form method for the design of multi-channel nearly-perfect-reconstruction non-uniform filter banks with prescribed stopband attenuation and channel overlapping is presented. In this method, the design problem of a multi-channel non-uniform filter bank (NUFB) is reduced to the design of a prototype filter whose magnitude response at the quadrature frequency is 0.707, a property exploited to find the optimum passband edge frequency through an empirical formula instead of a single- or multi-variable optimization technique. The two main attributes used in assessing the performance of the filter bank are peak reconstruction error (PRE) and computational time (CPU time). Compared to existing methods, this method is very simple and easy to implement for NUFBs. To implement this algorithm, a Matlab program has been developed, and several examples are presented to illustrate the performance of the proposed method. PMID:26861726
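The 3 dB design condition can be illustrated concretely: tune a lowpass prototype so its magnitude at the quadrature frequency equals 1/sqrt(2) (about 0.707). The paper finds the passband edge through an empirical closed-form formula; the bisection below is only a stand-in for that step, and the windowed-sinc prototype and all function names are illustrative assumptions.

```python
import math

def lowpass_fir(num_taps, cutoff):
    """Hamming-windowed sinc lowpass prototype (cutoff in (0,1), Nyquist = 1)."""
    m = num_taps - 1
    h = []
    for k in range(num_taps):
        x = k - m / 2.0
        s = cutoff if x == 0 else math.sin(math.pi * cutoff * x) / (math.pi * x)
        w = 0.54 - 0.46 * math.cos(2.0 * math.pi * k / m)
        h.append(s * w)
    return h

def mag_at(h, omega):
    """Magnitude of the FIR frequency response at angular frequency omega."""
    re = sum(c * math.cos(omega * k) for k, c in enumerate(h))
    im = -sum(c * math.sin(omega * k) for k, c in enumerate(h))
    return math.hypot(re, im)

def tune_cutoff(num_taps, bands=2, tol=1e-10):
    """Bisect the cutoff so |H| = 1/sqrt(2) at the quadrature frequency
    pi/bands (the 3 dB condition the closed-form design targets)."""
    omega = math.pi / bands
    lo, hi = 1e-3, 1.0 - 1e-3
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mag_at(lowpass_fir(num_taps, mid), omega) < math.sqrt(0.5):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For a two-band bank the tuned cutoff lands slightly above 0.5, since the window widens the transition band around the quadrature frequency.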
Zhao, Yong-guang; Ma, Ling-ling; Li, Chuan-rong; Zhu, Xiao-hua; Tang, Ling-li
2015-07-01
Due to the lack of sufficient spectral bands in multi-spectral sensors, it is difficult to reconstruct the surface reflectance spectrum from the finite spectral information acquired by a multi-spectral instrument. Here, taking full account of the heterogeneity of pixels in remote sensing images, a method is proposed to simulate hyperspectral data from multispectral data based on a canopy radiation transfer model. This method first assumes that mixed pixels contain two types of land cover, i.e., vegetation and soil. The sensitive parameters of the Soil-Leaf-Canopy (SLC) model and a soil ratio factor were retrieved from multi-spectral data based on Look-Up Table (LUT) technology. Then, combined with the soil ratio factor, all the parameters were input into the SLC model to simulate the surface reflectance spectrum from 400 to 2400 nm. Taking a Landsat Enhanced Thematic Mapper Plus (ETM+) image as the reference image, the surface reflectance spectrum was simulated. The simulated reflectance spectrum revealed different feature information for different surface types. To test the performance of this method, the simulated reflectance spectrum was convolved with the Landsat ETM+ spectral response curves and Moderate Resolution Imaging Spectrometer (MODIS) spectral response curves to obtain simulated Landsat ETM+ and MODIS images. Finally, the simulated Landsat ETM+ and MODIS images were compared with the observed Landsat ETM+ and MODIS images. The results generally showed high correlation coefficients (Landsat: 0.90-0.99, MODIS: 0.74-0.85) between most simulated bands and observed bands, indicating that the simulated reflectance spectrum was reliable. PMID:26717721
Absolute conductivity reconstruction in magnetic induction tomography using a nonlinear method.
Soleimani, Manuchehr; Lionheart, William R B
2006-12-01
Magnetic induction tomography (MIT) attempts to image the electrical and magnetic characteristics of a target using impedance measurement data from pairs of excitation and detection coils. This inverse eddy current problem is nonlinear and also severely ill-posed, so regularization is required for a stable solution. A regularized Gauss-Newton algorithm has been implemented as a nonlinear, iterative inverse solver. In this algorithm, one needs to solve the forward problem and recalculate the Jacobian matrix at each iteration. The forward problem has been solved using an edge-based finite element method for the magnetic vector potential A and the electrical scalar potential V, the so-called A, A-V formulation. A theoretical study of the general inverse eddy current problem and a derivation, paying special attention to the boundary conditions, of an adjoint-field formula for the Jacobian are given. This efficient formula calculates the change in measured induced voltage due to a small perturbation of the conductivity in a region. It has the advantage of involving only the inner product of the electric fields when two different coils are excited, which is computationally convenient. This paper also shows that the sensitivity maps change significantly when the conductivity distribution changes, demonstrating the necessity of a nonlinear reconstruction algorithm. The performance of the inverse solver has been examined and results are presented from simulated data with added noise. PMID:17167989
Cluitmans, M J M; Peeters, R L M; Westra, R L; Volders, P G A
2015-06-01
Electrical activity at the level of the heart muscle can be noninvasively reconstructed from body-surface electrocardiograms (ECGs) and patient-specific torso-heart geometry. This modality, coined electrocardiographic imaging, could fill the gap between the noninvasive (low-resolution) 12-lead ECG and invasive (high-resolution) electrophysiology studies. Much progress has been made to establish electrocardiographic imaging, and clinical studies appear with increasing frequency. However, many assumptions and model choices are involved in its execution, and only limited validation has been performed. In this article, we will discuss the technical details, clinical applications and current limitations of commonly used methods in electrocardiographic imaging. It is important for clinicians to realise the influence of certain assumptions and model choices for correct and careful interpretation of the results. This, in combination with more extensive validation, will allow for exploitation of the full potential of noninvasive electrocardiographic imaging as a powerful clinical tool to expedite diagnosis, guide therapy and improve risk stratification. PMID:25896779
Wu, Siqi; Joseph, Antony; Hammonds, Ann S; Celniker, Susan E; Yu, Bin; Frise, Erwin
2016-04-19
Spatial gene expression patterns enable the detection of local covariability and are extremely useful for identifying local gene interactions during normal development. The abundance of spatial expression data in recent years has led to the modeling and analysis of regulatory networks. The inherent complexity of such data makes it a challenge to extract biological information. We developed staNMF, a method that combines a scalable implementation of nonnegative matrix factorization (NMF) with a new stability-driven model selection criterion. When applied to a set of Drosophila early embryonic spatial gene expression images, one of the largest datasets of its kind, staNMF identified 21 principal patterns (PP). Providing a compact yet biologically interpretable representation of Drosophila expression patterns, PP are comparable to a fate map generated experimentally by laser ablation and show exceptional promise as a data-driven alternative to manual annotations. Our analysis mapped genes to cell-fate programs and assigned putative biological roles to uncharacterized genes. Finally, we used the PP to generate local transcription factor regulatory networks. Spatially local correlation networks were constructed for six PP that span along the embryonic anterior-posterior axis. Using a two-tail 5% cutoff on correlation, we reproduced 10 of the 11 links in the well-studied gap gene network. The performance of PP with the Drosophila data suggests that staNMF provides informative decompositions and constitutes a useful computational lens through which to extract biological insight from complex and often noisy gene expression data. PMID:27071099
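The stability-driven selection idea can be caricatured as follows: run NMF from several random restarts at a candidate rank, then score how consistent the learned dictionaries are across restarts. This sketch uses plain multiplicative updates and greedy column matching in place of the paper's scalable solver and its dissimilarity criterion; all names are illustrative assumptions.

```python
import numpy as np

def nmf(X, k, n_iter=200, seed=0):
    """Plain multiplicative-update NMF (Lee-Seung): X ~ W @ H, all >= 0."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-12)
        W *= (X @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

def instability(X, k, n_runs=5):
    """staNMF-style instability score: average dissimilarity between the
    dictionaries W learned from different random restarts. For each pair
    of runs, columns are greedily matched by cosine similarity;
    1 - mean matched similarity is the score (0 = perfectly stable).
    A stable rank k gives a low score; model selection picks that k."""
    dicts = [nmf(X, k, seed=s)[0] for s in range(n_runs)]
    def colnorm(W):
        return W / (np.linalg.norm(W, axis=0, keepdims=True) + 1e-12)
    scores = []
    for i in range(n_runs):
        for j in range(i + 1, n_runs):
            C = colnorm(dicts[i]).T @ colnorm(dicts[j])
            scores.append(1.0 - C.max(axis=1).mean())
    return float(np.mean(scores))
```

On data with an exact low-rank nonnegative structure, restarts at the true rank recover nearly identical dictionaries, so the instability score stays small.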
A platform for rapid prototyping of synthetic gene networks in mammalian cells
Duportet, Xavier; Wroblewska, Liliana; Guye, Patrick; Li, Yinqing; Eyquem, Justin; Rieders, Julianne; Rimchala, Tharathorn; Batt, Gregory; Weiss, Ron
2014-01-01
Mammalian synthetic biology may provide novel therapeutic strategies, help decipher new paths for drug discovery and facilitate synthesis of valuable molecules. Yet, our capacity to genetically program cells is currently hampered by the lack of efficient approaches to streamline the design, construction and screening of synthetic gene networks. To address this problem, here we present a framework for modular and combinatorial assembly of functional (multi)gene expression vectors and their efficient and specific targeted integration into a well-defined chromosomal context in mammalian cells. We demonstrate the potential of this framework by assembling and integrating different functional mammalian regulatory networks including the largest gene circuit built and chromosomally integrated to date (6 transcription units, 27kb) encoding an inducible memory device. Using a library of 18 different circuits as a proof of concept, we also demonstrate that our method enables one-pot/single-flask chromosomal integration and screening of circuit libraries. This rapid and powerful prototyping platform is well suited for comparative studies of genetic regulatory elements, genes and multi-gene circuits as well as facile development of libraries of isogenic engineered cell lines. PMID:25378321
SU-D-207-04: GPU-Based 4D Cone-Beam CT Reconstruction Using Adaptive Meshing Method
Zhong, Z; Gu, X; Iyengar, P; Mao, W; Wang, J; Guo, X
2015-06-15
Purpose: Due to the limited number of projections at each phase, the image quality of a four-dimensional cone-beam CT (4D-CBCT) is often degraded, which decreases the accuracy of subsequent motion modeling. One of the promising methods is the simultaneous motion estimation and image reconstruction (SMEIR) approach. The objective of this work is to enhance the computational speed of the SMEIR algorithm using adaptive feature-based tetrahedral meshing and GPU-based parallelization. Methods: The first step is to generate the tetrahedral mesh based on the features of a reference phase of the 4D-CBCT, so that the deformation can be well captured and accurately diffused from the mesh vertices to the voxels of the image volume. After the mesh generation, the updated motion model and the other phases of the 4D-CBCT can be obtained by matching the 4D-CBCT projection images at each phase with the corresponding forward projections of the deformed reference phase. The entire 4D-CBCT reconstruction process is implemented on GPU, significantly increasing computational efficiency owing to its tremendous parallel computing ability. Results: A 4D XCAT digital phantom was used to test the proposed mesh-based image reconstruction algorithm. The image results show that both bone structures and the inside of the lung are well preserved and that the tumor position is well captured. Compared to the previous voxel-based CPU implementation of SMEIR, the proposed method is about 157 times faster for reconstructing a 10-phase 4D-CBCT with dimensions 256×256×150. Conclusion: The GPU-based parallel 4D-CBCT reconstruction method uses the feature-based mesh for estimating the motion model and produces image results equivalent to the previous voxel-based SMEIR approach, with significantly improved computational speed.
NASA Astrophysics Data System (ADS)
Smerdon, Jason; Coats, Sloan; Ault, Toby
2015-04-01
The spatial skill of four climate field reconstruction (CFR) methods is investigated using pseudoproxy experiments (PPEs) based on five Last Millennium (LM) and historical simulations from the Coupled and Paleo Model Intercomparison Projects Phases 5 and 3 (CMIP5/PMIP3) data archives. These simulations are used for the first time in a PPE context, the pseudoproxy frameworks of which are constructed to test a recently assembled multiproxy network and multiple CFR techniques. The experiments confirm earlier findings demonstrating consistent methodological performance across all of the employed methods and spatially dependent reconstruction errors in the derived CFRs. Spectral biases in the reconstructed fields demonstrate that reconstruction methods can alone alter the ratio of spectral power at all locations in the field, independent of whether there are spectral biases inherent in the underlying proxy series. The patterns of spectral biases are model dependent and indicate the potential for regions in the derived CFRs to be biased by changes in either low or high-frequency spectral power. CFR methods are also shown to alter the pattern of mean differences in the tropical Pacific during the Medieval Climate Anomaly (MCA) and the Little Ice Age (LIA), with some model experiments indicating that CFR methodologies enhance the statistical likelihood of achieving a larger mean difference between the MCA and LIA in the region. All of the characteristics of reconstruction performance are model dependent, indicating that CFR methods must be evaluated across multiple models and that conclusions from PPEs should be carefully connected to the spatial statistics of real-world climatic fields.
Ramani, Sathish; Liu, Zhihao; Rosen, Jeffrey; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.
2012-01-01
Regularized iterative reconstruction algorithms for imaging inverse problems require selection of appropriate regularization parameter values. We focus on the challenging problem of tuning regularization parameters for nonlinear algorithms for the case of additive (possibly complex) Gaussian noise. Generalized cross-validation (GCV) and (weighted) mean-squared error (MSE) approaches (based on Stein's Unbiased Risk Estimate— SURE) need the Jacobian matrix of the nonlinear reconstruction operator (representative of the iterative algorithm) with respect to the data. We derive the desired Jacobian matrix for two types of nonlinear iterative algorithms: a fast variant of the standard iterative reweighted least-squares method and the contemporary split-Bregman algorithm, both of which can accommodate a wide variety of analysis- and synthesis-type regularizers. The proposed approach iteratively computes two weighted SURE-type measures: Predicted-SURE and Projected-SURE (that require knowledge of noise variance σ2), and GCV (that does not need σ2) for these algorithms. We apply the methods to image restoration and to magnetic resonance image (MRI) reconstruction using total variation (TV) and an analysis-type ℓ1-regularization. We demonstrate through simulations and experiments with real data that minimizing Predicted-SURE and Projected-SURE consistently lead to near-MSE-optimal reconstructions. We also observed that minimizing GCV yields reconstruction results that are near-MSE-optimal for image restoration and slightly sub-optimal for MRI. Theoretical derivations in this work related to Jacobian matrix evaluations can be extended, in principle, to other types of regularizers and reconstruction algorithms. PMID:22531764
Schullcke, Benjamin; Gong, Bo; Krueger-Ziolek, Sabine; Soleimani, Manuchehr; Mueller-Lisse, Ullrich; Moeller, Knut
2016-01-01
Lung EIT is a functional imaging method that utilizes electrical currents to reconstruct images of conductivity changes inside the thorax. This technique is radiation free and applicable at the bedside, but lacks spatial resolution compared to morphological imaging methods such as X-ray computed tomography (CT). In this article we describe an approach for EIT image reconstruction using morphologic information obtained from other structural imaging modalities. This leads to reconstructed images of lung ventilation that can easily be superimposed with structural CT or MRI images, which facilitates image interpretation. The approach is based on a Discrete Cosine Transformation (DCT) of an image of the considered transversal thorax slice. The use of DCT enables reduction of the dimensionality of the reconstruction and ensures that only conductivity changes of the lungs are reconstructed and displayed. The DCT-based approach is well suited to fuse morphological image information with functional lung imaging at low computational costs. Results on simulated data indicate that this approach preserves the morphological structures of the lungs and avoids blurring of the solution. Images from patient measurements reveal the capabilities of the method and demonstrate benefits in possible applications. PMID:27181695
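A minimal sketch of the DCT-based dimensionality reduction, assuming a linearized EIT model v = J Δσ: the conductivity change is constrained to a lung mask and a truncated 2D DCT basis, so only the low-order coefficients are estimated (here by a Tikhonov-regularized least-squares solve). The matrix shapes, names, and the regularization choice are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dct_basis_2d(n, order):
    """First `order` x `order` 2D DCT-II basis images on an n x n grid,
    returned as columns of an (n*n, order*order) matrix."""
    k = np.arange(n)
    B1 = np.cos(np.pi * (k[:, None] + 0.5) * np.arange(order)[None, :] / n)
    B1 /= np.linalg.norm(B1, axis=0)
    return np.einsum('ip,jq->ijpq', B1, B1).reshape(n * n, order * order)

def reconstruct(J, v, lung_mask, n, order=6, lam=1e-3):
    """EIT update restricted to a lung mask and a truncated DCT basis.

    J: sensitivity (Jacobian) matrix, shape (n_meas, n*n)
    v: measured voltage-difference vector
    The conductivity change is modeled as mask * (DCT basis @ coeffs),
    so only order**2 coefficients are estimated; blurring outside the
    lungs is excluded by construction.
    """
    B = lung_mask.reshape(-1, 1) * dct_basis_2d(n, order)
    A = J @ B
    coeffs = np.linalg.solve(A.T @ A + lam * np.eye(B.shape[1]), A.T @ v)
    return (B @ coeffs).reshape(n, n)
```

When the true conductivity change lies in the masked DCT span and the measurements outnumber the coefficients, the reduced solve recovers it almost exactly.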
NASA Astrophysics Data System (ADS)
Yeo, Inhwan Jason; Jung, Jae Won; Chew, Meng; Kim, Jong Oh; Wang, Brian; Di Biase, Steven; Zhu, Yunping; Lee, Dohyung
2009-09-01
A straightforward and accurate method was developed to verify the delivery of intensity-modulated radiation therapy (IMRT) and to reconstruct the dose in a patient. The method is based on a computational algorithm that linearly describes the physical relationship between beamlets and dose-scoring voxels in a patient and the dose image from an electronic portal imaging device (EPID). The relationship is expressed in the form of dose response functions (responses) that are quantified using Monte Carlo (MC) particle transport techniques. From the dose information measured by the EPID, the received patient dose is reconstructed by inversely solving the algorithm. The unique and novel non-iterative feature of this algorithm sets it apart from many existing dose reconstruction methods in the literature. This study presents the algorithm in detail and validates it experimentally for open and IMRT fields. Responses were first calculated for each beamlet of the selected fields by MC simulation. In-phantom and exit film dosimetry were performed on a flat phantom. Using the calculated responses and the algorithm, the exit film dose was used to inversely reconstruct the in-phantom dose, which was then compared with the measured in-phantom dose. The dose comparison in the phantom for all irradiated fields showed that more than 90% of dose points passed, given criteria of a 3% dose difference and a 3 mm distance-to-agreement.
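Because the model is linear and non-iterative, its skeleton reduces to two matrix operations: recover beamlet weights from the exit image by a linear solve, then push them through the patient response matrix. The sketch below assumes precomputed Monte Carlo response matrices; the names and shapes are illustrative, not the authors' code.

```python
import numpy as np

def reconstruct_dose(R_exit, R_patient, exit_dose):
    """Non-iterative dose reconstruction from an EPID exit image.

    R_exit:    (n_pixels, n_beamlets) response of EPID pixels to unit
               beamlet fluence (precomputed, e.g. by Monte Carlo)
    R_patient: (n_voxels, n_beamlets) response of patient voxels
    exit_dose: measured EPID dose image, flattened to (n_pixels,)

    Beamlet weights are recovered by linear least squares on
    exit_dose = R_exit @ w, then mapped to voxel doses.
    """
    w, *_ = np.linalg.lstsq(R_exit, exit_dose, rcond=None)
    return R_patient @ w
```

With consistent (noise-free) data and full-column-rank responses, the recovered voxel dose equals the true one, which makes the linearity of the scheme easy to verify.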
Review of methods used in the reconstruction and rehabilitation of the maxillofacial region.
O'Fearraigh, Pádraig
2010-01-01
Maxillofacial and dental defects often have detrimental effects on patient health and appearance. A holistic approach of restoring lost dentition along with bone and soft tissue is now the standard treatment of these defects. Recent improvements in reconstructive techniques, especially osseointegration, microvascular free tissue transfer, and improvements in bone engineering, have yielded excellent functional and aesthetic outcomes. This article reviews the literature on these modern reconstructive and rehabilitation techniques. PMID:20337144
A SPECT reconstruction method for extending parallel to non-parallel geometries
NASA Astrophysics Data System (ADS)
Wen, Junhai; Liang, Zhengrong
2010-03-01
Due to its simplicity, parallel-beam geometry is usually assumed for the development of image reconstruction algorithms. The established reconstruction methodologies are then extended to fan-beam, cone-beam and other non-parallel geometries for practical application. This situation occurs for quantitative SPECT (single photon emission computed tomography) imaging in inverting the attenuated Radon transform. Novikov reported an explicit parallel-beam formula for the inversion of the attenuated Radon transform in 2000. Thereafter, a formula for fan-beam geometry was reported by Bukhgeim and Kazantsev (2002 Preprint N. 99, Sobolev Institute of Mathematics). At the same time, we presented a formula for varying focal-length fan-beam geometry. In some non-parallel geometries, however, the reconstruction formula is so implicit that an explicit form cannot be obtained. In this work, we propose a unified reconstruction framework for extending parallel-beam geometry to any non-parallel geometry using ray-driven techniques. Computer-simulation studies demonstrated the accuracy of the presented unified reconstruction framework for extending parallel-beam to non-parallel geometries in inverting the attenuated Radon transform.
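A ray-driven sampler of the attenuated Radon transform is geometry-agnostic in exactly the sense described: parallel, fan and varying-focal-length geometries differ only in how the ray origins and directions are generated. The sketch below (fixed-step marching with bilinear interpolation, detector assumed at the ray start) is a generic illustration with assumed names, not the authors' inversion formula.

```python
import numpy as np

def attenuated_ray_integral(f, mu, p0, direction, step=0.5, n_steps=200):
    """Ray-driven sample of the attenuated Radon transform.

    Marches along one ray, accumulating f(x) * exp(-integral of mu
    between x and the detector), with bilinear interpolation on the
    image grid. Points outside the grid contribute zero. The march
    starts at the detector side p0, so `atten` holds the running
    attenuation between the current point and the detector.
    """
    def interp(img, x, y):
        i, j = int(np.floor(y)), int(np.floor(x))
        if not (0 <= i < img.shape[0] - 1 and 0 <= j < img.shape[1] - 1):
            return 0.0
        dy, dx = y - i, x - j
        return ((1 - dy) * (1 - dx) * img[i, j] + (1 - dy) * dx * img[i, j + 1]
                + dy * (1 - dx) * img[i + 1, j] + dy * dx * img[i + 1, j + 1])

    p0 = np.asarray(p0, dtype=float)
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    total, atten = 0.0, 0.0
    for k in range(n_steps):
        x, y = p0 + k * step * d
        total += interp(f, x, y) * np.exp(-atten) * step
        atten += interp(mu, x, y) * step
    return total
```

With zero attenuation the result reduces to the ordinary line integral (the chord length through a constant image), and any positive attenuation map strictly decreases it.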
Three-channel dynamic photometric stereo: a new method for 4D surface