Review on statistical methods for gene network reconstruction using expression data.
Wang, Y X Rachel; Huang, Haiyan
2014-12-07
Network modeling has proven to be a fundamental tool in analyzing the inner workings of a cell. It has revolutionized our understanding of biological processes and made significant contributions to the discovery of disease biomarkers. Much effort has been devoted to reconstructing various types of biochemical networks using functional genomic datasets generated by high-throughput technologies. This paper discusses statistical methods used to reconstruct gene regulatory networks from gene expression data. In particular, we highlight progress made and challenges yet to be met in estimating gene interactions, inferring causality and modeling temporal changes in regulation behavior. As rapid advances in technology have made diverse, large-scale genomic data available, we also survey methods for incorporating these additional data to achieve better, more accurate inference of gene networks. Copyright © 2014 Elsevier Ltd. All rights reserved.
Gene Network Reconstruction using Global-Local Shrinkage Priors
Leday, Gwenaël G.R.; de Gunst, Mathisca C.M.; Kpogbezan, Gino B.; van der Vaart, Aad W.; van Wieringen, Wessel N.; van de Wiel, Mark A.
2016-01-01
Reconstructing a gene network from high-throughput molecular data is an important but challenging task, as the number of parameters to estimate easily exceeds the sample size. A conventional remedy is to regularize or penalize the model likelihood. In network models, this is often done locally in the neighbourhood of each node or gene. However, estimating the many regularization parameters is often difficult and can result in large statistical uncertainties. In this paper we propose to combine local regularization with global shrinkage of the regularization parameters to borrow strength between genes and improve inference. We employ a simple Bayesian model with non-sparse, conjugate priors to facilitate the use of fast variational approximations to posteriors. We discuss empirical Bayes estimation of the hyper-parameters of the priors, and propose a novel approach to rank-based posterior thresholding. Using extensive model- and data-based simulations, we demonstrate that the proposed inference strategy outperforms popular (sparse) methods, yields more stable edges, and is more reproducible. The proposed method, termed ShrinkNet, is then applied to glioblastoma data to investigate the interactions between genes associated with patient survival. PMID:28408966
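The local-regularization setting the paper builds on can be illustrated with node-wise ridge regression, in which each gene is regressed on all others under a shared penalty. The sketch below is a minimal illustrative baseline (the penalty value and data are hypothetical), not the ShrinkNet model itself:

```python
import numpy as np

def nodewise_ridge_network(X, lam=1.0):
    """Estimate a gene network by regressing each gene on all others
    with a single, globally shared ridge penalty lam.
    X: (n_samples, p_genes) expression matrix; returns a (p, p) matrix
    whose row j holds the coefficients for predicting gene j."""
    n, p = X.shape
    B = np.zeros((p, p))
    for j in range(p):
        others = [k for k in range(p) if k != j]
        A = X[:, others]
        # ridge solution: (A'A + lam I)^{-1} A' y
        coef = np.linalg.solve(A.T @ A + lam * np.eye(p - 1), A.T @ X[:, j])
        B[j, others] = coef
    return B

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
X[:, 1] += 0.9 * X[:, 0]          # gene 1 driven by gene 0
B = nodewise_ridge_network(X, lam=1.0)
print(abs(B[1, 0]) > abs(B[1, 3]))  # the true edge 0->1 scores highest
```

Global shrinkage, in the paper's sense, would additionally tie the per-gene penalties together through a shared prior instead of fixing a single `lam`.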
Hub-Centered Gene Network Reconstruction Using Automatic Relevance Determination
Böck, Matthias; Ogishima, Soichi; Tanaka, Hiroshi; Kramer, Stefan; Kaderali, Lars
2012-01-01
Network inference deals with the reconstruction of biological networks from experimental data. A variety of different reverse engineering techniques are available; they differ in the underlying assumptions and mathematical models used. One common problem for all approaches stems from the complexity of the task, due to the combinatorial explosion of different network topologies with increasing network size. To handle this problem, constraints are frequently used, for example on the node degree, the number of edges, or the regulation functions between network components. We propose to exploit topological considerations in the inference of gene regulatory networks. Such systems are often controlled by a small number of hub genes, while most other genes have only limited influence on the network's dynamics. We model gene regulation using a Bayesian network with discrete, Boolean nodes. A hierarchical prior is employed to identify hub genes. The first layer of the prior is used to regularize weights on edges emanating from one specific node. A second prior on hyperparameters controls the magnitude of the former regularization for different nodes. The net effect is that central nodes tend to form in reconstructed networks. Network reconstruction is then performed by maximization of or sampling from the posterior distribution. We evaluate our approach on simulated and real experimental data, indicating that we can reconstruct the main regulatory interactions from the data. We furthermore compare our approach to other state-of-the-art methods, showing superior performance in identifying hubs. Using a large publicly available dataset of over 800 cell cycle regulated genes, we are able to identify several main hub genes. Our method may thus provide a valuable tool to identify interesting candidate genes for further study. Furthermore, the approach presented may stimulate further developments in regularization methods for network reconstruction from data. PMID:22570688
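The node-wise regularization idea is related to automatic relevance determination (ARD), in which each weight receives its own data-driven precision so that irrelevant inputs are driven to zero. Below is a deliberately simplified ARD-style sketch for a single linear model (the precision update is a common approximation, not the authors' hierarchical Bayesian network formulation):

```python
import numpy as np

def ard_regression(A, y, iters=30):
    """Simplified ARD for y ~ A w: each weight w_j gets its own
    precision alpha_j, re-estimated from the current weight so that
    weights with no support in the data shrink toward zero."""
    alpha = np.ones(A.shape[1])
    for _ in range(iters):
        w = np.linalg.solve(A.T @ A + np.diag(alpha), A.T @ y)
        alpha = 1.0 / (w ** 2 + 1e-4)   # crude precision update
    return w

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 5))
# only input 0 is truly relevant
y = A @ np.array([2.0, 0.0, 0.0, 0.0, 0.0]) + rng.normal(scale=0.1, size=100)
w = ard_regression(A, y)
print(abs(w[0]) > 1.5 and np.max(np.abs(w[1:])) < 0.1)  # True: only w[0] survives
```

Applied per node, this kind of prior lets a few "hub" inputs keep large weights while the rest are suppressed, which is the intuition behind the hierarchical prior described above.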
Semi-Supervised Multi-View Learning for Gene Network Reconstruction
Ceci, Michelangelo; Pio, Gianvito; Kuzmanovski, Vladimir; Džeroski, Sašo
2015-01-01
The task of gene regulatory network reconstruction from high-throughput data has received increasing attention in recent years. As a consequence, many inference methods for solving this task have been proposed in the literature. It has recently been observed, however, that no single inference method performs optimally across all datasets. It has also been shown that the integration of predictions from multiple inference methods is more robust and shows high performance across diverse datasets. Inspired by this research, in this paper, we propose a machine learning solution which learns to combine predictions from multiple inference methods. While this approach adds complexity to the inference process, we expect it to carry substantial benefits. These would come from the automatic adaptation to patterns in the outputs of individual inference methods, making it possible to identify regulatory interactions more reliably when these patterns occur. This article demonstrates the benefits (in terms of the accuracy of the reconstructed networks) of the proposed method, which exploits an iterative, semi-supervised ensemble-based algorithm. The algorithm learns to combine the interactions predicted by many different inference methods in the multi-view learning setting. The empirical evaluation of the proposed algorithm on a prokaryotic model organism (E. coli) and on a eukaryotic model organism (S. cerevisiae) clearly shows improved performance over the state-of-the-art methods. The results indicate that gene regulatory network reconstruction for the real datasets is more difficult for S. cerevisiae than for E. coli. The software, all the datasets used in the experiments and all the results are available for download at the following link: http://figshare.com/articles/Semi_supervised_Multi_View_Learning_for_Gene_Network_Reconstruction/1604827. PMID:26641091
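The simplest way to integrate predictions from multiple inference methods, and the unsupervised baseline that learned combiners improve on, is rank averaging over per-method edge scores. A minimal sketch with hypothetical scores:

```python
import numpy as np

def rank_average(score_lists):
    """Combine edge confidence scores from several inference methods by
    averaging per-method ranks (higher combined rank = more confident edge).
    Rank aggregation is robust to methods reporting scores on different scales."""
    S = np.array(score_lists, dtype=float)      # (n_methods, n_edges)
    ranks = S.argsort(axis=1).argsort(axis=1)   # 0 = lowest score per method
    return ranks.mean(axis=0)

# three hypothetical methods scoring the same four candidate edges
m1 = [0.9, 0.1, 0.5, 0.2]
m2 = [0.8, 0.3, 0.6, 0.1]
m3 = [0.7, 0.2, 0.9, 0.4]
combined = rank_average([m1, m2, m3])
print(combined.argmax())  # 0: edge 0 ranks highest across methods
```

The semi-supervised multi-view algorithm described above goes further by learning, from known interactions, how much to trust each method's output pattern rather than weighting all methods equally.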
Snapshot of iron response in Shewanella oneidensis by gene network reconstruction
Yang, Yunfeng; Harris, Daniel P.; Luo, Feng; Xiong, Wenlu; Joachimiak, Marcin; Wu, Liyou; Dehal, Paramvir; Jacobsen, Janet; Yang, Zamin; Palumbo, Anthony V.; Arkin, Adam P.; Zhou, Jizhong
2008-10-09
Background: Iron homeostasis of Shewanella oneidensis, a gamma-proteobacterium possessing high iron content, is regulated by the global transcription factor Fur. However, knowledge is incomplete about other biological pathways that respond to changes in iron concentration, as well as details of the responses. In this work, we integrate physiological, transcriptomics and genetic approaches to delineate the iron response of S. oneidensis. Results: We show that the iron response in S. oneidensis is a rapid process. Temporal gene expression profiles were examined for iron depletion and repletion, and a gene co-expression network was reconstructed. Modules of iron acquisition systems, anaerobic energy metabolism and protein degradation were the most noteworthy in the gene network. Bioinformatics analyses suggested that genes in each of the modules might be regulated by the DNA-binding proteins Fur, CRP and RpoH, respectively. Closer inspection of these modules revealed a transcriptional regulator (SO2426) involved in iron acquisition and ten transcription factors involved in anaerobic energy metabolism. Selected genes in the network were analyzed by genetic studies. Disruption of genes encoding a putative alcaligin biosynthesis protein (SO3032) and a gene previously implicated in protein degradation (SO2017) led to severe growth deficiency under iron depletion conditions. Disruption of a novel transcription factor (SO1415) caused deficiency in both anaerobic iron reduction and growth with thiosulfate or TMAO as an electron acceptor, suggesting that SO1415 is required for specific branches of anaerobic energy metabolism pathways. Conclusions: Using a reconstructed gene network, we identified major biological pathways that were differentially expressed during iron depletion and repletion. Genetic studies not only demonstrated the importance of iron acquisition and protein degradation for iron depletion, but also characterized a novel transcription factor (SO1415) with a
A Synthesis Method of Gene Networks Having Cyclic Expression Pattern Sequences by Network Learning
NASA Astrophysics Data System (ADS)
Mori, Yoshihiro; Kuroe, Yasuaki
Recently, the synthesis of gene networks having desired functions has become of interest to many researchers, because it is a complementary approach to understanding gene networks and could be a first step toward controlling living cells. Several periodic phenomena exist in cells, e.g. the circadian rhythm, and these phenomena are considered to be generated by gene networks. We have already proposed a synthesis method for gene networks based on gene expression. The method is applicable to synthesizing gene networks possessing desired cyclic expression pattern sequences. It ensures that the realized expression pattern sequences are periodic; however, it does not ensure that the corresponding solution trajectories are periodic, which may mean that the oscillations are not persistent. In this paper, to resolve this problem, we propose a synthesis method for gene networks possessing desired cyclic expression pattern sequences whose corresponding solution trajectories are also periodic. In the proposed method, persistent oscillations of the solution trajectories are realized by specifying points through which they must pass.
Methods of Voice Reconstruction
Chen, Hung-Chi; Kim Evans, Karen F.; Salgado, Christopher J.; Mardini, Samir
2010-01-01
This article reviews methods of voice reconstruction. Nonsurgical methods of voice reconstruction include electrolarynx, pneumatic artificial larynx, and esophageal speech. Surgical methods of voice reconstruction include neoglottis, tracheoesophageal puncture, and prosthesis. Tracheoesophageal puncture can be performed in patients with pedicled flaps such as colon interposition, jejunum, or gastric pull-up or in free flaps such as perforator flaps, jejunum, and colon flaps. Other flaps for voice reconstruction include the ileocolon flap and jejunum. Laryngeal transplantation is also reviewed. PMID:22550443
Mine, Karina L.; Shulzhenko, Natalia; Yambartsev, Anatoly; Rochman, Mark; Sanson, Gerdine F. O.; Lando, Malin; Varma, Sudhir; Skinner, Jeff; Volfovsky, Natalia; Deng, Tao; Brenna, Sylvia M. F.; Carvalho, Carmen R. N.; Ribalta, Julisa C. L.; Bustin, Michael; Matzinger, Polly; Silva, Ismael D. C. G.; Lyng, Heidi; Gerbase-DeLima, Maria; Morgun, Andrey
2014-01-01
Although human papillomavirus (HPV) was identified as an etiological factor in cervical cancer, the key human gene drivers of this disease remain unknown. Here we apply an unbiased approach integrating gene expression and chromosomal aberration data. In an independent group of patients, we reconstruct and validate a gene regulatory meta-network, and identify cell cycle and antiviral genes that constitute two major sub-networks up-regulated in tumour samples. These genes are located within the same regions as chromosomal amplifications, most frequently on 3q. We propose a model in which selected chromosomal gains drive activation of antiviral genes contributing to episomal virus elimination, which synergizes with cell cycle dysregulation. These findings may help to explain the paradox of episomal HPV decline in women with invasive cancer who were previously unable to clear the virus. PMID:23651994
How to train your microbe: methods for dynamically characterizing gene networks
Castillo-Hair, Sebastian M.; Igoshin, Oleg A.; Tabor, Jeffrey J.
2015-01-01
Gene networks regulate biological processes dynamically. However, researchers have largely relied upon static perturbations, such as growth media variations and gene knockouts, to elucidate gene network structure and function. Thus, much of the regulation on the path from DNA to phenotype remains poorly understood. Recent studies have utilized improved genetic tools, hardware, and computational control strategies to generate precise temporal perturbations outside and inside of live cells. These experiments have, in turn, provided new insights into the organizing principles of biology. Here, we introduce the major classes of dynamical perturbations that can be used to study gene networks, and discuss technologies available for creating them in a wide range of microbial pathways. PMID:25677419
He, Feng; Balling, Rudi; Zeng, An-Ping
2009-11-01
Reverse engineering of gene networks aims at revealing the structure of the gene regulation network in a biological system by reasoning backward directly from experimental data. Many methods have recently been proposed for reverse engineering of gene networks using gene transcript expression data measured by microarray. While the potential of these methods has been well demonstrated, the assumptions and limitations behind them are often not clearly stated or not well understood. In this review, we first briefly explain the principles of the major methods, identify the assumptions behind them and pinpoint the limitations and possible pitfalls in applying them to real biological questions. With regard to applications, we then discuss challenges in the experimental verification of gene networks generated by reverse engineering methods. We further propose an optimal experimental design for allocating the sampling schedule and possible strategies for reducing the limitations of some of the current reverse engineering methods. Finally, we examine the perspectives for the development of reverse engineering and urge the need to move from revealing network structure to revealing the dynamics of biological systems.
2011-01-01
Background The immune response to viral infection is a temporal process, represented by a dynamic and complex network of gene and protein interactions. Here, we present a reverse engineering strategy aimed at capturing the temporal evolution of the underlying Gene Regulatory Networks (GRN). The proposed approach will be an enabling step towards comprehending the dynamic behavior of gene regulation circuitry and mapping the network structure transitions in response to pathogen stimuli. Results We applied the Time Varying Dynamic Bayesian Network (TV-DBN) method for reconstructing the gene regulatory interactions based on time series gene expression data for the mouse C57BL/6J inbred strain after infection with influenza A H1N1 (PR8) virus. Initially, 3500 differentially expressed genes were clustered with the k-means algorithm. Next, successive-in-time GRNs were built over the expression profiles of the cluster centroids. Finally, the identified GRNs were examined with several topological metrics and with available protein-protein interaction, protein-DNA interaction, transcription factor and KEGG pathway data. Conclusions Our results demonstrate the potential of the TV-DBN approach to provide valuable insights into the temporal rewiring of the lung transcriptome in response to H1N1 virus. PMID:22017961
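The first step of the pipeline above, clustering expression profiles before network inference, can be sketched with a plain k-means implementation. Synthetic rising and falling profiles stand in for real time-series data, and the centroids are seeded deterministically for the demo:

```python
import numpy as np

def kmeans(X, init, iters=20):
    """Plain k-means over gene expression time profiles (rows of X).
    'init' gives the initial centroids; returns a cluster label per gene."""
    centroids = np.array(init, dtype=float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)                       # nearest-centroid assignment
        for j in range(len(centroids)):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(0)   # centroid update
    return labels

rng = np.random.default_rng(1)
rising = rng.normal(np.linspace(0, 3, 8), 0.1, size=(20, 8))
falling = rng.normal(np.linspace(3, 0, 8), 0.1, size=(20, 8))
X = np.vstack([rising, falling])
# deterministic demo: seed the two centroids with one profile of each shape
labels = kmeans(X, init=[X[0], X[-1]])
print(labels[:20].sum(), labels[20:].sum())  # 0 20: the two shapes separate cleanly
```

In the study, the per-cluster centroid profiles, rather than all 3500 genes, are then fed into the TV-DBN reconstruction, keeping the network problem tractable.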
CHAI, Lian En; LAW, Chow Kuan; MOHAMAD, Mohd Saberi; CHONG, Chuii Khim; CHOON, Yee Wen; DERIS, Safaai; ILLIAS, Rosli Md
2014-01-01
Background: Gene expression data often contain missing expression values. Therefore, several imputation methods have been applied to estimate the missing values, including k-nearest neighbour (kNN), local least squares (LLS), and Bayesian principal component analysis (BPCA). However, the effects of these imputation methods on the modelling of gene regulatory networks from gene expression data have rarely been investigated and analysed using a dynamic Bayesian network (DBN). Methods: In the present study, we separately imputed datasets of the Escherichia coli S.O.S. DNA repair pathway and the Saccharomyces cerevisiae cell cycle pathway with kNN, LLS, and BPCA, and subsequently used these to generate gene regulatory networks (GRNs) using a discrete DBN. We made comparisons on the basis of previous studies in order to select the gene network with the least error. Results: We found that BPCA and LLS performed better on larger networks (based on the S. cerevisiae dataset), whereas kNN performed better on smaller networks (based on the E. coli dataset). Conclusion: The results suggest that the performance of each imputation method is dependent on the size of the dataset, and this subsequently affects the modelling of the resultant GRNs using a DBN. In addition, on the basis of these results, a DBN has the capacity to discover potential edges, as well as display interactions, between genes. PMID:24876803
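As a concrete illustration of the simplest of the three imputation schemes, here is a minimal kNN imputation sketch: a gene's missing value is filled with the mean of its k nearest complete genes, measured over the observed columns. The data are toy values; real LLS and BPCA implementations are considerably more involved:

```python
import numpy as np

def knn_impute(X, k=3):
    """Impute NaNs in a gene x sample matrix X: for each gene with
    missing values, average the values of its k nearest fully observed
    genes (Euclidean distance over the observed columns)."""
    X = X.astype(float).copy()
    complete = X[~np.isnan(X).any(axis=1)]          # fully observed genes
    for row in X:
        miss = np.isnan(row)
        if not miss.any():
            continue
        d = np.sqrt(((complete[:, ~miss] - row[~miss]) ** 2).sum(axis=1))
        nearest = complete[d.argsort()[:k]]
        row[miss] = nearest[:, miss].mean(axis=0)   # fill with neighbours' mean
    return X

X = np.array([[1.0, 2.0, 3.0],
              [1.1, 2.1, 2.9],
              [0.9, 1.9, 3.1],
              [1.0, 2.0, np.nan]])
filled = knn_impute(X, k=3)
print(filled[3, 2])   # filled with the three neighbours' mean, ~3.0
```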
Computational methods for image reconstruction.
Chung, Julianne; Ruthotto, Lars
2017-04-01
Reconstructing images from indirect measurements is a central problem in many applications, including the subject of this special issue, quantitative susceptibility mapping (QSM). The process of image reconstruction typically requires solving an inverse problem that is ill-posed and large-scale and thus challenging to solve. Although the research field of inverse problems is thriving and very active with diverse applications, in this part of the special issue we will focus on recent advances in inverse problems that are specific to deconvolution problems, the class of problems to which QSM belongs. We will describe analytic tools that can be used to investigate underlying ill-posedness and apply them to the QSM reconstruction problem and the related extensively studied image deblurring problem. We will discuss state-of-the-art computational tools and methods for image reconstruction, including regularization approaches and regularization parameter selection methods. We finish by outlining some of the current trends and future challenges. Copyright © 2016 John Wiley & Sons, Ltd.
Modern methods of image reconstruction.
NASA Astrophysics Data System (ADS)
Puetter, R. C.
The author reviews the image restoration or reconstruction problem in its general setting. He first discusses linear methods for solving the problem of image deconvolution, i.e. the case in which the data are a convolution of a point-spread function and an underlying unblurred image. Next, non-linear methods are introduced in the context of Bayesian estimation, including maximum likelihood and maximum entropy methods. Then, the author discusses the role of language and information theory concepts for data compression and solving the inverse problem. The concept of algorithmic information content (AIC) is introduced and is shown to be crucial to achieving optimal data compression and optimized Bayesian priors for image reconstruction. The dependence of the AIC on the selection of language then suggests how efficient coordinate systems for the inverse problem may be selected. The author also introduces pixon-based image restoration and reconstruction methods. The relation between image AIC and the Bayesian incarnation of Occam's Razor is discussed, as well as the relation of multiresolution pixon languages and image fractal dimension. Also discussed is the relation of pixons to the role played by the Heisenberg uncertainty principle in statistical physics and how pixon-based image reconstruction provides a natural extension to the Akaike information criterion for maximum likelihood. The author presents practical applications of pixon-based Bayesian estimation to the restoration of astronomical images. He discusses the effects of noise, effects of finite sampling on resolution, and special problems associated with spatially correlated noise introduced by mosaicing. Comparisons to other methods demonstrate the significant improvements afforded by pixon-based methods and illustrate the science that such performance improvements allow.
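One classical maximum-likelihood deconvolution scheme of the kind discussed here is the Richardson-Lucy iteration, which maximizes the Poisson likelihood of the blurred data. A 1-D sketch on a synthetic point-source scene (an illustration of the general technique, not the pixon method itself):

```python
import numpy as np

def richardson_lucy(blurred, psf, iters=200):
    """1-D Richardson-Lucy: maximum-likelihood deconvolution under
    Poisson noise. 'same'-mode convolutions keep array sizes aligned."""
    est = np.full_like(blurred, blurred.mean())     # flat positive start
    psf_flip = psf[::-1]
    for _ in range(iters):
        conv = np.convolve(est, psf, mode='same')
        ratio = blurred / np.maximum(conv, 1e-12)   # data / model
        est = est * np.convolve(ratio, psf_flip, mode='same')
    return est

truth = np.zeros(32); truth[10] = 4.0; truth[20] = 2.0   # two point sources
psf = np.array([0.25, 0.5, 0.25])                        # normalized blur kernel
blurred = np.convolve(truth, psf, mode='same')
est = richardson_lucy(blurred, psf)
print(est.argmax())   # 10: the brighter source is recovered at its true location
```

The multiplicative update keeps the estimate non-negative, which is one reason this family of methods is popular for photon-counting astronomical data.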
Fast iterative reconstruction method for PROPELLER MRI
NASA Astrophysics Data System (ADS)
Guo, Hongyu; Dai, Jianping; Shi, Jinquan
2009-10-01
Patient motion during scanning introduces artifacts in the reconstructed image in MRI imaging. Periodically Rotated Overlapping Parallel Lines with Enhanced Reconstruction (PROPELLER) MRI is an effective technique to correct for motion artifacts. In this paper, an iterative method that combines the preconditioned conjugate gradient (PCG) algorithm with nonuniform fast Fourier transformation (NUFFT) operations is applied to PROPELLER MRI. The drawback of this method, however, is its long reconstruction time. In order to make it viable in clinical situations, parallel optimization of the iterative method on a modern GPU using CUDA is proposed. Simulated data and in vivo data from PROPELLER MRI are reconstructed to test the method. The experimental results show that the GPU-based iterative method improves image quality compared with the gridding method, at a comparable reconstruction time.
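The core of the PCG reconstruction is conjugate gradient applied to the normal equations of a least-squares problem. The sketch below replaces the NUFFT encoding operator with a plain matrix to keep it self-contained (a toy stand-in, not the MRI operator):

```python
import numpy as np

def cg_normal(A, b, iters=50):
    """Conjugate gradient on the normal equations A^T A x = A^T b --
    the least-squares core that the PCG reconstruction iterates, with
    the (NU)FFT encoding operator replaced here by an explicit matrix."""
    x = np.zeros(A.shape[1])
    r = A.T @ b            # initial residual of the normal equations
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = A.T @ (A @ p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < 1e-12:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 10))        # hypothetical encoding matrix
x_true = rng.normal(size=10)
x = cg_normal(A, A @ x_true)         # noiseless "measurements"
print(np.allclose(x, x_true, atol=1e-5))  # True: exact recovery
```

In the GPU implementation described above, the expensive steps are the forward and adjoint NUFFT applications inside each iteration, which is why they are the natural targets for CUDA parallelization.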
Method for positron emission mammography image reconstruction
Smith, Mark Frederick
2004-10-12
An image reconstruction method comprising accepting coincidence data from either a data file or in real time from a pair of detector heads, culling event data that is outside a desired energy range, optionally saving the desired data for each detector position or for each pair of detector pixels on the two detector heads, and then reconstructing the image either by backprojection image reconstruction or by iterative image reconstruction. In the backprojection image reconstruction mode, rays are traced between centers of lines of response (LORs); counts are then either allocated by nearest pixel interpolation or allocated by an overlap method, then corrected for geometric effects and attenuation, and the data file updated. If the iterative image reconstruction option is selected, one implementation is to perform grid-based Siddon ray tracing, and to perform maximum likelihood expectation maximization (MLEM) computed by either: a) tracing parallel rays between subpixels on opposite detector heads; or b) tracing rays between randomized endpoint locations on opposite detector heads.
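The backprojection mode can be illustrated with a toy 2-D version: counts from each line of response are spread over the pixels the line crosses, using nearest-pixel allocation. The geometry and counts below are hypothetical, and corrections for geometry and attenuation are omitted:

```python
import numpy as np

def backproject(lors, counts, grid=32, samples=200):
    """Toy backprojection: for each line of response (two endpoint
    coordinates in [0,1]^2), spread its counts over the pixels the
    line passes through, using nearest-pixel allocation."""
    img = np.zeros((grid, grid))
    t = np.linspace(0.0, 1.0, samples)
    for (p0, p1), c in zip(lors, counts):
        pts = np.outer(1 - t, p0) + np.outer(t, p1)   # points along the LOR
        ij = np.clip((pts * grid).astype(int), 0, grid - 1)
        for i, j in ij:
            img[i, j] += c / samples
    return img

# two LORs crossing at the centre of the field of view
lors = [(np.array([0.0, 0.5]), np.array([1.0, 0.5])),
        (np.array([0.5, 0.0]), np.array([0.5, 1.0]))]
img = backproject(lors, counts=[100, 100])
print(np.unravel_index(img.argmax(), img.shape))  # (16, 16): the crossing pixel
```

Where many LORs intersect, backprojected intensity accumulates, which is how the source distribution emerges from the ensemble of coincidence events.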
Reconstructive methods in hearing disorders - surgical methods
Zahnert, Thomas
2005-01-01
Restoration of hearing is in many cases associated with the resocialisation of those affected and therefore occupies an important place in a society where communication is becoming ever faster. Not all problems can be solved surgically. Even 50 years after the introduction of tympanoplasty, the hearing results are unsatisfactory and often do not reach the threshold for social hearing. The cause can in most cases be attributed to incomplete restoration of the mucosal function of the middle ear and tube, which leads to ventilation disorders of the ear and prevents true vibration of the reconstructed middle ear. A few failures, however, are caused by the biomechanics of the reconstructed ossicular chain. There has been progress in reconstructive middle ear surgery, particularly in the development of implants. Implants made of titanium, which are distinguished by outstanding biocompatibility, delicate design and biomechanical possibilities in the reconstruction of chain function, can be regarded as a new generation. Metal implants for the first time allow a controlled close fit with the remainder of the chain and the integration of micromechanical functions into the implant. Moreover, there has also been progress in microsurgery itself, particularly in the operative procedures for auditory canal atresia, the restoration of the tympanic membrane and the coupling of implants. This paper summarizes the current state of reconstructive microsurgery, paying attention to acousto-mechanical rules. PMID:22073050
Shape reconstruction methods with incomplete data
NASA Astrophysics Data System (ADS)
Nakahata, K.; Kitahara, M.
2000-05-01
Linearized inverse scattering methods are applied to the shape reconstruction of defects in elastic solids. The linearized methods are based on the Born approximation in the low frequency range and the Kirchhoff approximation in the high frequency range. Experimental measurements are performed to collect scattering data from defects. The processed measurement data are fed into the two linearized methods and the shape of the defect is reconstructed by each. The importance of scattering data in the low frequency range is pointed out, not only for Born inversion but also for Kirchhoff inversion. In ultrasonic measurement of a real structure, the access points of the sensor may be limited to one side of the structure or to part of its surface. From the viewpoint of application, such incomplete scattering data are used as inputs to the shape reconstruction methods and the effect of the sensing points is discussed.
Replaying the evolutionary tape: biomimetic reverse engineering of gene networks.
Marbach, Daniel; Mattiussi, Claudio; Floreano, Dario
2009-03-01
In this paper, we suggest a new approach for reverse engineering gene regulatory networks, which consists of using a reconstruction process that is similar to the evolutionary process that created these networks. The aim is to integrate prior knowledge into the reverse-engineering procedure, thus biasing the search toward biologically plausible solutions. To this end, we propose an evolutionary method that abstracts and mimics the natural evolution of gene regulatory networks. Our method can be used with a wide range of nonlinear dynamical models. This allows us to explore novel model types such as the log-sigmoid model introduced here. We apply the biomimetic method to a gold-standard dataset from an in vivo gene network. The obtained results won a reverse engineering competition of the second DREAM conference (Dialogue on Reverse Engineering Assessments and Methods 2007, New York, NY).
Parametric reconstruction method in optical tomography.
Gu, Xuejun; Ren, Kui; Masciotti, James; Hielscher, Andreas H
2006-01-01
Optical tomography consists of reconstructing the spatial distribution of a medium's optical properties from measurements of transmitted light on the boundary of the medium. Mathematically, this problem amounts to parameter identification for the equation of radiative transfer (ERT) or its diffusion approximation (DA). However, this type of boundary-value problem is highly ill-posed, and the image reconstruction process is often unstable and non-unique. To overcome this problem, we present a parametric inverse method that considerably reduces the number of variables being reconstructed. In this way, the amount of measured data is equal to or larger than the number of unknowns. Using synthetic data, we show examples that demonstrate how this approach leads to improvements in imaging quality.
Bullet trajectory reconstruction - Methods, accuracy and precision.
Mattijssen, Erwin J A T; Kerkhoff, Wim
2016-05-01
Based on the spatial relation between a primary and secondary bullet defect or on the shape and dimensions of the primary bullet defect, a bullet's trajectory prior to impact can be estimated for a shooting scene reconstruction. The accuracy and precision of the estimated trajectories will vary depending on variables such as, the applied method of reconstruction, the (true) angle of incidence, the properties of the target material and the properties of the bullet upon impact. This study focused on the accuracy and precision of estimated bullet trajectories when different variants of the probing method, ellipse method, and lead-in method are applied on bullet defects resulting from shots at various angles of incidence on drywall, MDF and sheet metal. The results show that in most situations the best performance (accuracy and precision) is seen when the probing method is applied. Only for the lowest angles of incidence the performance was better when either the ellipse or lead-in method was applied. The data provided in this paper can be used to select the appropriate method(s) for reconstruction and to correct for systematic errors (accuracy) and to provide a value of the precision, by means of a confidence interval of the specific measurement.
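For reference, the ellipse method in its idealized form estimates the angle of incidence from the axes of the elliptical primary defect via sin(angle) = minor axis / major axis; the study above quantifies how far practice deviates from this ideal on real target materials:

```python
import math

def ellipse_method_angle(width_mm, length_mm):
    """Idealized ellipse method: for an elliptical bullet defect, the
    angle of incidence (measured from the target surface) satisfies
    sin(angle) = minor axis / major axis."""
    return math.degrees(math.asin(width_mm / length_mm))

print(round(ellipse_method_angle(5.0, 10.0), 1))  # 30.0 degrees
print(round(ellipse_method_angle(10.0, 10.0), 1)) # 90.0 degrees (perpendicular hit)
```

Deformation of the target material and of the bullet makes real defects deviate from perfect ellipses, which is why the measured accuracy and precision depend on material and angle of incidence.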
Magnetic flux reconstruction methods for shaped tokamaks
NASA Astrophysics Data System (ADS)
Tsui, Chi-Wa
1993-12-01
The use of a variational method permits the Grad-Shafranov (GS) equation to be solved by reducing the problem of solving the two dimensional nonlinear partial differential equation to the problem of minimizing a function of several variables. This high speed algorithm approximately solves the GS equation given a parameterization of the plasma boundary and the current profile (p' and FF' functions). The current profile parameters are treated as unknowns. The goal is to reconstruct the internal magnetic flux surfaces of a tokamak plasma and the toroidal current density profile from the external magnetic measurements. This is a classic problem of inverse equilibrium determination. The current profile parameters can be evaluated by several different matching procedures. Matching of magnetic flux and field at the probe locations using the Biot-Savart law and magnetic Green's function provides a robust method of magnetic reconstruction. The matching of poloidal magnetic field on the plasma surface provides a unique method of identifying the plasma current profile. However, the power of this method is greatly compromised by the experimental errors of the magnetic signals. The Casing principle provides a very fast way to evaluate the plasma contribution to the magnetic signals. It has the potential of being a fast matching method. The performance of this method is hindered by the accuracy of the poloidal magnetic field computed from the equilibrium solver. A flux reconstruction package has been implemented which integrates a vacuum field solver using a filament model for the plasma, a multilayer perceptron neural network as an interface, and the volume integration of plasma current density using Green's functions as a matching method for the current profile parameters. The flux reconstruction package is applied to compare with the ASEQ and EFIT data.
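The matching step for the current-profile parameters can be illustrated in miniature: if the probe signals depend approximately linearly on the parameters through a Green's-function matrix, the parameters follow from a least-squares fit to the measured signals. The matrix and parameter values below are hypothetical:

```python
import numpy as np

# Toy version of the matching procedure: probe signals modeled as a linear
# map G (a stand-in for the magnetic Green's-function response) applied to
# the current-profile parameters, plus measurement noise.
rng = np.random.default_rng(0)
G = rng.normal(size=(12, 3))          # 12 probes, 3 profile parameters (hypothetical)
true_params = np.array([1.0, -0.5, 2.0])
signals = G @ true_params + rng.normal(scale=0.01, size=12)   # noisy measurements

# least-squares fit recovers the profile parameters from the probe signals
fit, *_ = np.linalg.lstsq(G, signals, rcond=None)
print(fit)   # close to [1.0, -0.5, 2.0]
```

The real problem is nonlinear because the equilibrium (and hence the response) depends on the parameters, which is why the full method iterates an equilibrium solve inside the matching loop.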
Introduction: Cancer Gene Networks.
Clarke, Robert
2017-01-01
Constructing, evaluating, and interpreting gene networks generally sits within the broader field of systems biology, which continues to emerge rapidly, particularly with respect to its application to understanding the complexity of signaling in the context of cancer biology. For the purposes of this volume, we take a broad definition of systems biology. Considering an organism or disease within an organism as a system, systems biology is the study of the integrated and coordinated interactions of the network(s) of genes, their variants both natural and mutated (e.g., polymorphisms, rearrangements, alternate splicing, mutations), their proteins and isoforms, and the organic and inorganic molecules with which they interact, to execute the biochemical reactions (e.g., as enzymes, substrates, products) that reflect the function of that system. Central to systems biology, and perhaps the only approach that can effectively manage the complexity of such systems, is the building of quantitative multiscale predictive models. The predictions of the models can vary substantially depending on the nature of the model and its input-output relationships. For example, a model may predict the outcome of a specific molecular reaction(s), a cellular phenotype (e.g., alive, dead, growth arrest, proliferation, and motility), a change in the respective prevalence of cell subpopulations, or a patient or patient-subgroup outcome(s). Such models necessarily require computers. Computational modeling can be thought of as using machine learning and related tools to integrate the very high dimensional data generated from modern, high throughput omics technologies including genomics (next generation sequencing), transcriptomics (gene expression microarrays; RNA-seq), metabolomics and proteomics (ultra-high-performance liquid chromatography, mass spectrometry), and "subomic" technologies to study the kinome, methylome, and others. Mathematical modeling can be thought of as the use of ordinary
Gene network and pathway generation and analysis: Editorial
Zhao, Zhongming; Sanfilippo, Antonio P.; Huang, Kun
2011-02-18
The past decade has witnessed an exponential growth of biological data, including genomic sequences, gene annotations, expression and regulation data, and protein-protein interactions. A key aim in the post-genome era is to systematically catalogue gene networks and pathways in the dynamic living cell and apply them to study diseases and phenotypes. To promote research in systems biology and its application to disease studies, we organized a workshop focusing on the reconstruction and analysis of gene networks and pathways in any organism from high-throughput data collected through techniques such as microarray analysis and RNA-Seq.
Hybrid stochastic simplifications for multiscale gene networks
Crudu, Alina; Debussche, Arnaud; Radulescu, Ovidiu
2009-01-01
Background Stochastic simulation of gene networks by Markov processes has important applications in molecular biology. The complexity of exact simulation algorithms scales with the number of discrete jumps to be performed. Approximate schemes reduce the computational time by reducing the number of simulated discrete events. Also, answering important questions about the relation between network topology and intrinsic noise generation and propagation should be based on general mathematical results. These general results are difficult to obtain for exact models. Results We propose a unified framework for hybrid simplifications of Markov models of multiscale stochastic gene network dynamics. We discuss several possible hybrid simplifications, and provide algorithms to obtain them from pure jump processes. In hybrid simplifications, some components are discrete and evolve by jumps, while other components are continuous. Hybrid simplifications are obtained by partial Kramers-Moyal expansion [1-3], which is equivalent to the application of the central limit theorem to a sub-model. By averaging and variable aggregation we drastically reduce simulation time and eliminate non-critical reactions. Hybrid and averaged simplifications can be used for more effective simulation algorithms and for obtaining general design principles relating noise to topology and time scales. The simplified models reproduce with good accuracy the stochastic properties of the gene networks, including waiting times in intermittence phenomena, fluctuation amplitudes and stationary distributions. The methods are illustrated on several gene network examples. Conclusion Hybrid simplifications can be used for onion-like (multi-layered) approaches to multi-scale biochemical systems, in which various descriptions are used at various scales. Sets of discrete and continuous variables are treated with different methods and are coupled together in a physically justified approach. PMID:19735554
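The flavor of such a hybrid (piecewise-deterministic) simplification can be sketched on a minimal two-state gene: the promoter jumps stochastically between ON and OFF, while the protein level follows a deterministic ODE between jumps. All rate constants below are invented for illustration and are not from the paper.

```python
import numpy as np

# Hedged sketch of a piecewise-deterministic hybrid scheme: discrete
# promoter switching + continuous protein dynamics dx/dt = k*s - g*x.
# All rate values are invented.
rng = np.random.default_rng(1)
kon, koff = 0.5, 0.5      # promoter ON/OFF switching rates
k, g = 10.0, 1.0          # synthesis rate when ON, degradation rate

def simulate(t_end):
    t, s, x = 0.0, 1, 0.0          # time, promoter state, protein level
    acc = 0.0                      # time-integral of x (for the mean)
    while t < t_end:
        rate = koff if s == 1 else kon
        dt = min(rng.exponential(1.0 / rate), t_end - t)
        xinf = k * s / g                            # fixed point of the flow
        # exact integral and exact flow of the linear ODE over [t, t+dt]
        acc += xinf * dt + (x - xinf) * (1.0 - np.exp(-g * dt)) / g
        x = xinf + (x - xinf) * np.exp(-g * dt)
        t += dt
        s = 1 - s                  # promoter jump (harmless if t hit t_end)
    return acc / t_end

mean_x = simulate(5000.0)
print(round(mean_x, 2))   # theory: stationary mean k * P(on) / g = 5
```

Because the switching rates do not depend on the continuous variable, the waiting times here are exactly exponential; in general hybrid schemes the jump intensity must be integrated along the continuous flow.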
Buffering in cyclic gene networks
NASA Astrophysics Data System (ADS)
Glyzin, S. D.; Kolesov, A. Yu.; Rozov, N. Kh.
2016-06-01
We consider cyclic chains of unidirectionally coupled delay differential-difference equations that are mathematical models of artificial oscillating gene networks. We establish that the buffering phenomenon is realized in these systems for an appropriate choice of the parameters: any given finite number of stable periodic motions of a special type, the so-called traveling waves, coexist.
Crowdsourcing the nodulation gene network discovery environment.
Li, Yupeng; Jackson, Scott A
2016-05-26
The Legumes (Fabaceae) are an economically and ecologically important group of plant species with the conspicuous capacity for symbiotic nitrogen fixation in root nodules, specialized plant organs containing symbiotic microbes. With the aim of understanding the underlying molecular mechanisms leading to nodulation, many efforts are underway to identify nodulation-related genes and determine how these genes interact with each other. In order to accurately and efficiently reconstruct the nodulation gene network, a crowdsourcing platform, CrowdNodNet, was created. The platform implements the jQuery and vis.js JavaScript libraries, so that users are able to interactively visualize and edit the gene network, and easily access information about the network, e.g. gene lists, gene interactions and gene functional annotations. In addition, all the gene information is written on MediaWiki pages, enabling users to edit and contribute to the network curation. Utilizing the continuously updated, collaboratively written, and community-reviewed Wikipedia model, the platform could, in a short time, become a comprehensive knowledge base of nodulation-related pathways. The platform could also be used for other biological processes, and thus has great potential for integrating and advancing our understanding of the functional genomics and systems biology of any process for any species. The platform is available at http://crowd.bioops.info/ , and the source code can be openly accessed at https://github.com/bioops/crowdnodnet under MIT License.
Lee, Wei-Po; Hsiao, Yu-Ting; Hwang, Wei-Che
2014-01-16
Reconstructing gene networks by experimentally testing every possible interaction between genes is tedious, so automated reverse-engineering procedures are increasingly adopted instead. Several evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks with an evolutionary algorithm, two important issues must be addressed: premature convergence and high computational cost. To tackle the former and enhance the performance of traditional evolutionary algorithms, parallel-model evolutionary algorithms are advisable. To overcome the latter and speed up the computation, cloud computing is a promising solution; most popular is the MapReduce programming model, a fault-tolerant framework for implementing parallel algorithms to infer large gene networks. This work presents a practical framework to infer large gene networks by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results analyzed. They show that our parallel approach can successfully infer networks with desired behaviors and that the computation time can be largely reduced. Parallel population-based algorithms can effectively determine network parameters and perform better than the widely used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. By coupling the parallel-model population-based optimization method and the parallel computational framework, high
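A toy sketch of the hybrid GA-PSO idea (not the authors' exact operators, and without the MapReduce layer): standard particle-swarm velocity updates combined with a GA-style crossover and mutation step, applied to an invented stand-in for the network-parameter fitting objective.

```python
import numpy as np

# Toy hybrid GA-PSO sketch: PSO inertia/cognitive/social updates plus a
# GA-style arithmetic crossover + sparse mutation.  The quadratic
# objective and all hyper-parameters are illustrative assumptions.
rng = np.random.default_rng(2)
TARGET = np.array([0.5, -1.2, 2.0])        # "network parameters" to recover

def objective(x):
    return float(np.sum((x - TARGET) ** 2))

n, dim, iters = 30, 3, 200
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_f = np.array([objective(p) for p in pos])

for _ in range(iters):
    gbest = pbest[np.argmin(pbest_f)]
    # PSO step: inertia + pull toward personal and global bests
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    # GA step: arithmetic crossover with a random partner, sparse mutation
    alpha = rng.random((n, 1))
    children = alpha * pos + (1 - alpha) * pos[rng.permutation(n)]
    children += rng.normal(0, 0.1, (n, dim)) * (rng.random((n, dim)) < 0.1)
    cur_f = np.array([objective(p) for p in pos])
    child_f = np.array([objective(c) for c in children])
    better = child_f < cur_f
    pos[better], cur_f[better] = children[better], child_f[better]
    # keep personal bests (elitism)
    improved = cur_f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], cur_f[improved]

print(np.round(pbest[np.argmin(pbest_f)], 3))
```

In a MapReduce setting, the fitness evaluations (the two `objective` loops) are the natural map stage, with the best-selection steps as the reduce stage.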
Inverse polynomial reconstruction method in DCT domain
NASA Astrophysics Data System (ADS)
Dadkhahi, Hamid; Gotchev, Atanas; Egiazarian, Karen
2012-12-01
The discrete cosine transform (DCT) offers superior energy compaction properties for a large class of functions and has been employed as a standard tool in many signal and image processing applications. However, it suffers from spurious behavior in the vicinity of edge discontinuities in piecewise smooth signals. To leverage the sparse representation provided by the DCT, in this article, we derive a framework for the inverse polynomial reconstruction in the DCT expansion. It yields the expansion of a piecewise smooth signal in terms of polynomial coefficients, obtained from the DCT representation of the same signal. Taking advantage of this framework, we show that it is feasible to recover piecewise smooth signals from a relatively small number of DCT coefficients with high accuracy. Furthermore, automatic methods based on minimum description length principle and cross-validation are devised to select the polynomial orders, as a requirement of the inverse polynomial reconstruction method in practical applications. The developed framework can considerably enhance the performance of the DCT in sparse representation of piecewise smooth signals. Numerical results show that denoising and image approximation algorithms based on the proposed framework indicate significant improvements over wavelet counterparts for this class of signals.
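Both properties motivating the method, strong energy compaction for piecewise smooth signals and the spurious (Gibbs-like) error concentrating near the discontinuity, can be seen in a few lines with a self-contained orthonormal DCT-II matrix; the inverse polynomial reconstruction itself is not implemented here.

```python
import numpy as np

# Orthonormal DCT-II illustration: energy compaction of a piecewise
# smooth signal, with truncation error concentrated near the jump.
# (IPRM, which repairs this behavior, is not implemented here.)
N = 256
idx = np.arange(N)
C = np.sqrt(2.0 / N) * np.cos(np.pi * (idx[None, :] + 0.5) * idx[:, None] / N)
C[0] /= np.sqrt(2.0)                 # rows of C form an orthonormal basis

t = idx / N
signal = np.where(t < 0.5, t ** 2, 1.0 - t)   # two polynomial pieces, one jump

coeffs = C @ signal
M = 40                               # keep only the first M coefficients
recon = C.T @ np.where(idx < M, coeffs, 0.0)

frac = float((coeffs[:M] ** 2).sum() / (coeffs ** 2).sum())
err = np.abs(recon - signal)
near = float(err[(t > 0.45) & (t < 0.55)].max())   # around the discontinuity
far = float(err[t < 0.3].max())                    # smooth region
print(round(frac, 4), near > far)
```

The first 40 of 256 coefficients carry nearly all the energy, yet the pointwise error is dominated by oscillations around the jump, which is exactly the behavior the inverse polynomial reconstruction targets.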
Accelerated augmented Lagrangian method for few-view CT reconstruction
NASA Astrophysics Data System (ADS)
Wu, Junfeng; Mou, Xuanqin
2012-03-01
Recently, iterative reconstruction algorithms with total variation (TV) regularization have shown tremendous power in image reconstruction from few-view projection data, but they are much more computationally demanding. In this paper, we propose an accelerated augmented Lagrangian method (ALM) for few-view CT reconstruction with total variation regularization. Experimental phantom results demonstrate that the proposed method not only reconstructs high-quality images from few-view projection data but also converges quickly to the optimal solution.
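The multiplier/penalty iteration underlying ALM can be sketched on a small equality-constrained quadratic, a toy stand-in for the much larger TV-regularized CT problem (the accelerated variant and the TV term are not reproduced here).

```python
import numpy as np

# Augmented Lagrangian sketch for  min 0.5*||x - c||^2  s.t.  A x = b.
# A, b, c are random stand-ins for the CT operators and data.
rng = np.random.default_rng(3)
A = rng.normal(size=(3, 8))
b = rng.normal(size=3)
c = rng.normal(size=8)

rho = 5.0                       # penalty parameter
y = np.zeros(3)                 # Lagrange multipliers
x = c.copy()
for _ in range(100):
    # primal step: the augmented Lagrangian is quadratic in x (closed form)
    x = np.linalg.solve(np.eye(8) + rho * A.T @ A,
                        c - A.T @ y + rho * A.T @ b)
    # dual step: gradient ascent on the multipliers
    y += rho * (A @ x - b)

# exact answer: Euclidean projection of c onto the affine set {A x = b}
x_star = c - A.T @ np.linalg.solve(A @ A.T, A @ c - b)
print(np.allclose(x, x_star, atol=1e-8))
```

In the CT setting the primal step is itself a large subproblem (with a TV term), which is where the acceleration the paper proposes matters.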
Reverse engineering transcriptional gene networks.
Belcastro, Vincenzo; di Bernardo, Diego
2014-01-01
This chapter provides a step-by-step guide to inferring gene networks from gene expression profiles. The definition of a gene network is given in Subheading 1, where the different types of networks are discussed. The chapter then guides readers through a data-gathering process in order to build a compendium of gene expression profiles from a public repository. Gene expression profiles are then discretized, and a statistical relationship between genes, called mutual information (MI), is computed. Gene pairs with insignificant MI scores are then discarded by applying one of the described pruning steps. The retained relationships are used to build a Boolean adjacency matrix, which serves as input for a clustering algorithm that divides the network into modules (or communities). The gene network can then be used as a hypothesis generator for discovering gene function and analyzing gene signatures. Some case studies are presented, and an online web-tool called Netview is described.
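The discretize / mutual-information / threshold pipeline described above can be sketched as follows, on synthetic expression profiles; the bin count and MI cutoff are illustrative choices, not values from the chapter.

```python
import numpy as np

# Sketch of the MI network pipeline: quantile discretization, pairwise
# plug-in mutual information, and thresholding to a Boolean adjacency
# matrix.  Data, bins=8, and the 0.2-nat cutoff are invented.
rng = np.random.default_rng(4)
m = 500                                        # samples per gene
g1 = rng.normal(size=m)
g2 = g1 + 0.3 * rng.normal(size=m)             # strongly coupled to g1
g3 = rng.normal(size=m)                        # unrelated gene
expr = np.vstack([g1, g2, g3])                 # genes x samples

def discretize(x, bins=8):
    """Equal-frequency binning into integer levels 0..bins-1."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1])
    return np.digitize(x, edges)

def mutual_info(a, b, bins=8):
    """Plug-in MI (nats) from the joint histogram of two discrete vectors."""
    joint = np.histogram2d(a, b, bins=bins)[0] / len(a)
    pa = joint.sum(axis=1, keepdims=True)
    pb = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (pa @ pb)[nz])).sum())

disc = np.array([discretize(g) for g in expr])
n_genes = len(disc)
mi = np.zeros((n_genes, n_genes))
for i in range(n_genes):
    for j in range(i + 1, n_genes):
        mi[i, j] = mi[j, i] = mutual_info(disc[i], disc[j])

adj = mi > 0.2                # prune weak pairs -> Boolean adjacency matrix
print(adj.astype(int))
```

The plug-in MI estimate is biased upward for independent pairs (roughly the number of histogram cells over twice the sample size), which is why a pruning step, here a simple cutoff, is essential.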
Hybrid Method for Tokamak MHD Equilibrium Configuration Reconstruction
NASA Astrophysics Data System (ADS)
He, Hong-Da; Dong, Jia-Qi; Zhang, Jin-Hua; Jiang, Hai-Bin
2007-02-01
A hybrid method for tokamak MHD equilibrium configuration reconstruction is proposed and employed in the modified EFIT code. This method uses the free boundary tokamak equilibrium configuration reconstruction algorithm with one boundary point fixed. The results show that the position of the fixed point has explicit effects on the reconstructed divertor configurations. In particular, the separatrix of the reconstructed divertor configuration precisely passes the required position when the hybrid method is used in the reconstruction. The profiles of plasma parameters such as pressure and safety factor for reconstructed HL-2A tokamak configurations with the hybrid and the free boundary methods are compared. The possibility for applications of the method to swing the separatrix strike point on the divertor target plate is discussed.
Auricular reconstruction for microtia: A review of available methods
Baluch, Narges; Nagata, Satoru; Park, Chul; Wilkes, Gordon H; Reinisch, John; Kasrai, Leila; Fisher, David
2014-01-01
Several surgical techniques have been described for auricular reconstruction. Autologous reconstruction using costal cartilage is the most widely accepted technique of microtia repair. However, other techniques have certain indications and should be discussed with patients and families when planning for an auricular reconstruction. In the present review, the authors discuss the main surgical techniques for auricular reconstruction including autologous costal cartilage graft, Medpor (Stryker, USA) implant and prosthetic reconstruction. To further elaborate on the advantages and disadvantages of each technique, the authors invited leaders in this field, Dr Nagata, Dr Park, Dr Reinisch and Dr Wilkes, to comment on their own technique and provide examples of their methods. PMID:25152646
Gene networks controlling petal organogenesis.
Huang, Tengbo; Irish, Vivian F
2016-01-01
One of the biggest unanswered questions in developmental biology is how growth is controlled. Petals are an excellent organ system for investigating growth control in plants: petals are dispensable, have a simple structure, and are largely refractory to environmental perturbations that can alter their size and shape. In recent studies, a number of genes controlling petal growth have been identified. The overall picture of how such genes function in petal organogenesis is beginning to be elucidated. This review will focus on studies using petals as a model system to explore the underlying gene networks that control organ initiation, growth, and final organ morphology.
A new target reconstruction method considering atmospheric refraction
NASA Astrophysics Data System (ADS)
Zuo, Zhengrong; Yu, Lijuan
2015-12-01
In this paper, a new target reconstruction method considering atmospheric refraction is presented to improve 3D reconstruction accuracy in long-range surveillance systems. The basic idea of the method is that the atmosphere between the camera and the target is partitioned radially into several thin layers, within each of which the density is regarded as uniform. The light propagation path is then traced in reverse from sensor to target by applying Snell's law at the interface between layers; finally, the average of the target positions traced from the different cameras is taken as the reconstructed position. Reconstruction experiments were carried out, and the results showed that the new method has much better reconstruction accuracy than the traditional stereoscopic reconstruction method.
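The layer-by-layer application of Snell's law can be sketched as follows; the refractive-index profile is invented, and a real implementation would also track the ray's position through each layer, not just its angle.

```python
import numpy as np

# Sketch of the layered-refraction idea: bend a ray at each interface
# using Snell's law.  The index profile is an invented illustration.
def trace(theta0_deg, n_layers):
    """Exit angle (degrees from the interface normal) after the stack."""
    theta = np.radians(theta0_deg)
    for n_in, n_out in zip(n_layers[:-1], n_layers[1:]):
        s = n_in * np.sin(theta) / n_out     # Snell: n1 sin(t1) = n2 sin(t2)
        if s > 1.0:
            raise ValueError("total internal reflection")
        theta = np.arcsin(s)
    return float(np.degrees(theta))

layers = np.linspace(1.00020, 1.00030, 6)    # density rising along the path
exit_angle = trace(30.0, layers)
print(round(exit_angle, 5))                  # a hair under 30 degrees: the
                                             # ray bends toward the normal
```

Note that for a flat layered stack only the first and last indices determine the exit angle; the per-layer bookkeeping matters once positions (and hence the reconstructed target location) are tracked.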
Hong Luo; Hanping Xiao; Robert Nourgaliev; Chunpei Cai
2011-06-01
A comparative study of different reconstruction schemes for a reconstruction-based discontinuous Galerkin method, termed RDG(P1P2), is performed for compressible flow problems on arbitrary grids. The RDG method is designed to enhance the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution via a reconstruction scheme commonly used in the finite volume method. Both Green-Gauss and least-squares reconstruction methods and a least-squares recovery method are implemented to obtain a quadratic polynomial representation of the underlying discontinuous Galerkin linear polynomial solution on each cell. These three reconstruction/recovery methods are compared for a variety of compressible flow problems on arbitrary meshes to assess their accuracy and robustness. The numerical results demonstrate that all three reconstruction methods can significantly improve the accuracy of the underlying second-order DG method, although the least-squares reconstruction method provides the best performance in terms of both accuracy and robustness.
Exhaustive Search for Fuzzy Gene Networks from Microarray Data
Sokhansanj, B A; Fitch, J P; Quong, J N; Quong, A A
2003-07-07
Recent technological advances in high-throughput data collection allow for the study of increasingly complex systems on the scale of the whole cellular genome and proteome. Gene network models are required to interpret large and complex data sets. Rationally designed system perturbations (e.g. gene knock-outs, metabolite removal, etc) can be used to iteratively refine hypothetical models, leading to a modeling-experiment cycle for high-throughput biological system analysis. We use fuzzy logic gene network models because they have greater resolution than Boolean logic models and do not require the precise parameter measurement needed for chemical kinetics-based modeling. The fuzzy gene network approach is tested by exhaustive search for network models describing cyclin gene interactions in yeast cell cycle microarray data, with preliminary success in recovering interactions predicted by previous biological knowledge and other analysis techniques. Our goal is to further develop this method in combination with experiments we are performing on bacterial regulatory networks.
Interior reconstruction method based on rotation-translation scanning model.
Wang, Xianchao; Tang, Ziyue; Yan, Bin; Li, Lei; Bao, Shanglian
2014-01-01
In various applications of computed tomography (CT), it is common that the reconstructed object extends beyond the field of view (FOV), or we may intend to use a FOV which covers only the region of interest (ROI) for the sake of reducing radiation dose. These imaging situations lead to interior reconstruction problems, which are difficult cases in the reconstruction field of CT due to the truncated projection data at every view angle. In this paper, an interior reconstruction method is developed based on a rotation-translation (RT) scanning model. The method is implemented by first scanning the reconstructed region, and then scanning a small region outside the support of the reconstructed object after translating the rotation centre. The differentiated backprojection (DBP) images of the reconstruction region and of the small region outside the object can be obtained from the two scans without a data rebinning process. Finally, the projection onto convex sets (POCS) algorithm is applied to reconstruct the interior region. Numerical simulations are conducted to validate the proposed reconstruction method.
High resolution x-ray CMT: Reconstruction methods
Brown, J.K.
1997-02-01
This paper qualitatively discusses the primary characteristics of methods for reconstructing tomographic images from a set of projections. These reconstruction methods can be categorized as either "analytic" or "iterative" techniques. Analytic algorithms are derived from the formal inversion of equations describing the imaging process, while iterative algorithms incorporate a model of the imaging process and provide a mechanism to iteratively improve image estimates. Analytic reconstruction algorithms are typically computationally more efficient than iterative methods; however, analytic algorithms are available for a relatively limited set of imaging geometries and situations. Thus, the framework of iterative reconstruction methods is better suited for high-accuracy tomographic reconstruction codes.
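A minimal example of the iterative family is the Kaczmarz method (the core of the algebraic reconstruction technique, ART), which sweeps over the projection equations and projects the current estimate onto each one; the small, well-conditioned random system below is a stand-in for a real imaging geometry.

```python
import numpy as np

# Kaczmarz / ART sketch: repeatedly project the image estimate onto each
# projection equation a_i . x = b_i.  The system is an invented stand-in
# for an actual imaging model, kept well-conditioned for fast convergence.
rng = np.random.default_rng(5)
n = 20
A = np.eye(n) + 0.05 * rng.normal(size=(n, n))   # "projection" operator
x_true = rng.normal(size=n)                      # image to recover
b = A @ x_true                                   # noise-free projections

x = np.zeros(n)
for _ in range(200):                 # 200 full sweeps over the equations
    for i in range(n):
        ai = A[i]
        x += (b[i] - ai @ x) / (ai @ ai) * ai    # project onto equation i
print(round(float(np.linalg.norm(x - x_true)), 12))
```

Each update uses only one row of the system, which is why ART-style iterations accommodate arbitrary geometries and physical models that defeat analytic inversion formulas.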
Gene Network Landscape of the Ciliate Tetrahymena thermophila
Xiong, Jie; Lu, Xingyi; Chang, Yue; Liu, Yifan; Fu, Chengjie; Pearlman, Ronald E.; Miao, Wei
2011-01-01
Background Genome-wide expression data of gene microarrays can be used to infer gene networks. At a cellular level, a gene network provides a picture of the modules in which genes are densely connected, and of the hub genes, which are highly connected with other genes. A gene network is useful to identify the genes involved in the same pathway, in a protein complex or that are co-regulated. In this study, we used different methods to find gene networks in the ciliate Tetrahymena thermophila, and describe some important properties of this network, such as modules and hubs. Methodology/Principal Findings Using 67 single channel microarrays, we constructed the Tetrahymena gene network (TGN) using three methods: the Pearson correlation coefficient (PCC), the Spearman correlation coefficient (SCC) and the context likelihood of relatedness (CLR) algorithm. The accuracy and coverage of the three networks were evaluated using four conserved protein complexes in yeast. The CLR network with a Z-score threshold 3.49 was determined to be the most robust. The TGN was partitioned, and 55 modules were found. In addition, analysis of the arbitrarily determined 1200 hubs showed that these hubs could be sorted into six groups according to their expression profiles. We also investigated human disease orthologs in Tetrahymena that are missing in yeast and provide evidence indicating that some of these are involved in the same process in Tetrahymena as in human. Conclusions/Significance This study constructed a Tetrahymena gene network, provided new insights to the properties of this biological network, and presents an important resource to study Tetrahymena genes at the pathway level. PMID:21637855
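The difference between the PCC and SCC measures used above is easy to demonstrate: Spearman's rank correlation captures a monotone but non-linear co-expression relationship that depresses the Pearson score. The data and the 0.9 cutoff below are illustrative assumptions, not values from the study.

```python
import numpy as np

# PCC vs SCC sketch on synthetic "expression" rows: gene 2 is a monotone,
# non-linear function of gene 1; gene 3 is unrelated.  Threshold invented.
rng = np.random.default_rng(6)
m = 200
base = rng.normal(size=m)
expr = np.vstack([
    base,
    np.exp(base) + 0.01 * rng.normal(size=m),   # monotone, non-linear link
    rng.normal(size=m),                          # unrelated gene
])

pcc = np.corrcoef(expr)                               # Pearson on raw values

ranks = np.argsort(np.argsort(expr, axis=1), axis=1)  # rank-transform rows
scc = np.corrcoef(ranks)                              # Spearman via ranks

off = ~np.eye(3, dtype=bool)
adj_pcc = (np.abs(pcc) > 0.9) & off
adj_scc = (np.abs(scc) > 0.9) & off
print(adj_pcc.astype(int))
print(adj_scc.astype(int))
```

This is one reason network studies compare several association measures (and model-based scores such as CLR) before committing to an adjacency structure.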
Compressive measurement and feature reconstruction method for autonomous star trackers
NASA Astrophysics Data System (ADS)
Yin, Hang; Yan, Ye; Song, Xin; Yang, Yueneng
2016-12-01
Compressive sensing (CS) theory provides a framework for signal reconstruction using a sub-Nyquist sampling rate; it enables the reconstruction of a signal that is sparse or compressible from a small set of measurements. Current CS applications in the optical field mainly focus on reconstructing the original image using optimization algorithms and conduct data processing on the full-dimensional image, which cannot reduce the data processing rate. This study exploits the spatial sparsity of star images and proposes a new compressive measurement and reconstruction method that extracts the star feature from compressive data and directly reconstructs it in the original image space for attitude determination. A pixel-based folding model that preserves the star feature and enables feature reconstruction is presented to encode the original pixel locations into the superposed space. A feature reconstruction method is then proposed to extract the star centroid by compensating for distortions and to decode the centroid without reconstructing the whole image, which reduces the sampling rate and the data processing rate at the same time. Statistical results on the proportion of star distortion and false matches verify the correctness of the proposed method. The results also demonstrate its robustness and show that performance can be improved by sufficient measurement in noisy cases. Moreover, results on real star images confirm correct star centroid estimation for attitude determination and the feasibility of applying the proposed method in a star tracker.
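The CS premise, recovery of a sparse signal from far fewer random measurements than its length, can be sketched with a generic l1 solver (plain iterative soft-thresholding); this is not the paper's folding/feature method, and all problem sizes are invented.

```python
import numpy as np

# CS recovery sketch: k-sparse signal of length n from m < n random
# measurements, solved with plain ISTA for the lasso objective
# 0.5*||Ax - y||^2 + lam*||x||_1.  Sizes and lam are illustrative.
rng = np.random.default_rng(7)
n, m, k = 200, 80, 5                  # signal length, measurements, sparsity
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = 3.0 * rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)    # random measurement matrix
y = A @ x_true                               # compressive measurements

lam = 0.01                                   # l1 weight
L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of the grad
x = np.zeros(n)
for _ in range(3000):
    z = x - A.T @ (A @ x - y) / L            # gradient step on the data fit
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-threshold

rel_err = float(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
print(round(rel_err, 3))
```

The paper's contribution is precisely to avoid this kind of full-signal optimization at run time by reconstructing only the star-centroid feature from the compressed data.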
Petrovskaya, Olga V; Petrovskiy, Evgeny D; Lavrik, Inna N; Ivanisenko, Vladimir A
2017-04-01
Gene network modeling is one of the widely used approaches in systems biology. It allows for studying the function of complex genetic systems, including so-called mosaic gene networks, which consist of functionally interacting subnetworks. We studied a method for modeling mosaic gene networks based on integrating models of gene subnetworks via linear control functionals. Automatic modeling of 10,000 synthetic mosaic gene regulatory networks was carried out using computer experiments on gene knockdowns/knockouts. Structural analysis of the graphs of the generated mosaic gene regulatory networks revealed that, among the factors analyzed in the study, the most important for building accurate integrated mathematical models is data on the expression of genes corresponding to vertices with high centrality.
An image reconstruction method (IRBis) for optical/infrared interferometry
NASA Astrophysics Data System (ADS)
Hofmann, K.-H.; Weigelt, G.; Schertl, D.
2014-05-01
Aims: We present an image reconstruction method for optical/infrared long-baseline interferometry called IRBis (image reconstruction software using the bispectrum). We describe the theory and present applications to computer-simulated interferograms. Methods: The IRBis method can reconstruct an image from measured visibilities and closure phases. The applied optimization routine ASA_CG is based on conjugate gradients. The method allows the user to implement different regularizers, apply residual ratios as an additional metric for goodness-of-fit, and use previous iteration results as a prior to force convergence. Results: We present the theory of the IRBis method and several applications of the method to computer-simulated interferograms. The image reconstruction results show the dependence of the reconstructed image on the noise in the interferograms (e.g., for ten electron read-out noise and 139 to 1219 detected photons per interferogram), the regularization method, the angular resolution, and the reconstruction parameters applied. Furthermore, we present the IRBis reconstructions submitted to the interferometric imaging beauty contest 2012 initiated by the IAU Working Group on Optical/IR Interferometry and describe the performed data processing steps.
An alternative method of middle vault reconstruction.
Gassner, Holger G; Friedman, Oren; Sherris, David A; Kern, Eugene B
2006-01-01
Surgery of the nasal valves is a challenging aspect of rhinoplasty surgery. The middle nasal vault assumes an important role in certain aspects of nasal valve collapse. Techniques that address pathologies of the middle vault include the placement of spreader grafts and the butterfly graft. We present an alternative technique of middle vault reconstruction that allows simultaneous repair of nasal valve collapse and creation of a smooth dorsal profile. The surgical technique is described in detail and representative cases are discussed.
Optimization and Comparison of Different Digital Mammographic Tomosynthesis Reconstruction Methods
2007-04-01
The methods compared include the maximum-likelihood expectation-maximization (MLEM) iterative algorithm of Wu et al. and the tuned-aperture computed tomography (TACT) reconstruction methods developed by Webber (see, e.g., Karellas et al., "Comparison of tomosynthesis methods used with digital mammography," and "Evaluation of linear and nonlinear tomosynthetic reconstruction methods in digital mammography," Acad. Radiol. 8, 219-224, 2001).
Combinatorial explosion in model gene networks
NASA Astrophysics Data System (ADS)
Edwards, R.; Glass, L.
2000-09-01
The explosive growth in knowledge of the genome of humans and other organisms leaves open the question of how the functioning of genes in interacting networks is coordinated for orderly activity. One approach to this problem is to study mathematical properties of abstract network models that capture the logical structures of gene networks. The principal issue is to understand how particular patterns of activity can result from particular network structures, and what types of behavior are possible. We study idealized models in which the logical structure of the network is explicitly represented by Boolean functions that can be represented by directed graphs on n-cubes, but which are continuous in time and described by differential equations, rather than being updated synchronously via a discrete clock. The equations are piecewise linear, which allows significant analysis and facilitates rapid integration along trajectories. We first give a combinatorial solution to the question of how many distinct logical structures exist for n-dimensional networks, showing that the number increases very rapidly with n. We then outline analytic methods that can be used to establish the existence, stability and periods of periodic orbits corresponding to particular cycles on the n-cube. We use these methods to confirm the existence of limit cycles discovered in a sample of a million randomly generated structures of networks of 4 genes. Even with only 4 genes, at least several hundred different patterns of stable periodic behavior are possible, many of them surprisingly complex. We discuss ways of further classifying these periodic behaviors, showing that small mutations (reversal of one or a few edges on the n-cube) need not destroy the stability of a limit cycle. Although these networks are very simple as models of gene networks, their mathematical transparency reveals relationships between structure and behavior, and they suggest rich possibilities for orderly dynamics in such networks.
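The piecewise-linear equations described above (often called Glass networks) are simple enough to sketch in a few lines. Below is a minimal toy integrator, assuming a unit decay rate, a common threshold of 0.5 for all genes, and a hypothetical two-gene loop in which gene 0 activates gene 1 while gene 1 represses gene 0; none of these specific choices are taken from the paper.

```python
import numpy as np

def simulate_glass(F, x0, dt=0.01, steps=5000, theta=0.5):
    """Euler integration of dx_i/dt = F_i(X) - x_i, where X is the Boolean
    state vector (x_j > theta) and F maps Boolean states to production rates."""
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(steps):
        X = tuple(int(v > theta) for v in x)
        x = x + dt * (np.array(F(X), dtype=float) - x)
        traj.append(x.copy())
    return np.array(traj)

# hypothetical 2-gene negative loop: gene 0 activates gene 1, gene 1 represses gene 0
F = lambda X: (1 - X[1], X[0])
traj = simulate_glass(F, [0.2, 0.8])
```

Because production rates lie in {0, 1} and decay is linear, trajectories started in the unit box remain in it, which is what makes the analysis along n-cube cycles tractable.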
Anatomically-aided PET reconstruction using the kernel method
NASA Astrophysics Data System (ADS)
Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi
2016-09-01
This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.
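To make the kernel idea concrete, here is a small synthetic sketch of representing the image as x = Kα and running an ML-EM update on the coefficients α. The kNN Gaussian kernel built from anatomical feature vectors is one common construction, and the toy system matrix, dimensions, and count level are invented for illustration; this is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_det = 30, 60

# hypothetical anatomical feature vector for each pixel
anat = rng.normal(size=(n_pix, 3))

# kNN Gaussian kernel matrix, row-normalized (a common, but not the only, choice)
d2 = ((anat[:, None, :] - anat[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / (2 * np.median(d2)))
far = np.argsort(d2, axis=1)[:, 8:]        # zero out all but the 8 nearest neighbours
np.put_along_axis(K, far, 0.0, axis=1)
K /= K.sum(axis=1, keepdims=True)

P = rng.random((n_det, n_pix))             # toy system (projection) matrix
x_true = rng.random(n_pix)
y = rng.poisson(50 * (P @ x_true)) / 50.0  # noisy projection data

# ML-EM on the kernel coefficients: the image is x = K @ alpha
A = P @ K
alpha = np.ones(n_pix)
for _ in range(100):
    ratio = y / np.maximum(A @ alpha, 1e-12)
    alpha *= (A.T @ ratio) / A.sum(axis=0)
x_hat = K @ alpha
```

Note that the anatomical prior enters only through K, so the update retains the simple multiplicative ML-EM form and needs no segmentation of the anatomical image.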
An analytic reconstruction method for PET based on cubic splines
NASA Astrophysics Data System (ADS)
Kastis, George A.; Kyriakopoulou, Dimitra; Fokas, Athanasios S.
2014-03-01
PET imaging is an important nuclear medicine modality that measures the in vivo distribution of imaging agents labeled with positron-emitting radionuclides. Image reconstruction is an essential component of tomographic medical imaging. In this study, we present the mathematical formulation and an improved numerical implementation of an analytic 2D reconstruction method called SRT, the Spline Reconstruction Technique. This technique is based on the numerical evaluation of the Hilbert transform of the sinogram via an approximation in terms of 'custom-made' cubic splines. It also imposes sinogram thresholding, which restricts reconstruction to object pixels. Furthermore, by utilizing certain symmetries, it achieves a reconstruction time similar to that of FBP. We have implemented SRT in the software library STIR and have evaluated the method using simulated PET data. We present reconstructed images from several phantoms. Sinograms have been generated at various Poisson noise levels, and 20 noise realizations have been created at each level. In addition to visual comparisons of the reconstructed images, the contrast has been determined as a function of noise level. Further analysis includes the creation of line profiles, when necessary, to determine resolution. Numerical simulations suggest that the SRT algorithm produces fast and accurate reconstructions at realistic noise levels. The contrast is over 95% in all phantoms examined and is independent of noise level.
Alternative method for reconstruction of antihydrogen annihilation vertices
NASA Astrophysics Data System (ADS)
Amole, C.; Ashkezari, M. D.; Andresen, G. B.; Baquero-Ruiz, M.; Bertsche, W.; Bowe, P. D.; Butler, E.; Cesar, C. L.; Chapman, S.; Charlton, M.; Deller, A.; Eriksson, S.; Fajans, J.; Friesen, T.; Fujiwara, M. C.; Gill, D. R.; Gutierrez, A.; Hangst, J. S.; Hardy, W. N.; Hayano, R. S.; Hayden, M. E.; Humphries, A. J.; Hydomako, R.; Jonsell, S.; Kurchaninov, L.; Madsen, N.; Menary, S.; Nolan, P.; Olchanski, K.; Olin, A.; Povilus, A.; Pusa, P.; Robicheaux, F.; Sarid, E.; Silveira, D. M.; So, C.; Storey, J. W.; Thompson, R. I.; van der Werf, D. P.; Wurtele, J. S.; Yamazaki, Y.
2012-12-01
The ALPHA experiment, located at CERN, aims to compare the properties of antihydrogen atoms with those of hydrogen atoms. The neutral antihydrogen atoms are trapped using an octupole magnetic trap. The trap region is surrounded by a three-layer silicon detector used to reconstruct the antiproton annihilation vertices. This paper describes a method we have devised that reconstructs annihilation vertices with good resolution and is more efficient than the standard method currently used for the same purpose.
Gene networks and liar paradoxes.
Isalan, Mark
2009-10-01
Network motifs are small patterns of connections, found over-represented in gene regulatory networks. An example is the negative feedback loop (e.g. factor A represses itself). This opposes its own state so that when 'on' it tends towards 'off' - and vice versa. Here, we argue that such self-opposition, if considered dimensionlessly, is analogous to the liar paradox: 'This statement is false'. When 'true' it implies 'false' - and vice versa. Such logical constructs have provided philosophical consternation for over 2000 years. Extending the analogy, other network topologies give strikingly varying outputs over different dimensions. For example, the motif 'A activates B and A. B inhibits A' can give switches or oscillators with time only, or can lead to Turing-type patterns with both space and time (spots, stripes or waves). It is argued here that the dimensionless form reduces to a variant of 'The following statement is true. The preceding statement is false'. Thus, merely having a static topological description of a gene network can lead to a liar paradox. Network diagrams are only snapshots of dynamic biological processes and apparent paradoxes can reveal important biological mechanisms that are far from paradoxical when considered explicitly in time and space.
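The "paradox resolved in time" argument can be illustrated with a one-line ODE. The sketch below integrates a standard negative-autoregulation model, dA/dt = k/(1 + A^n) − A; the Hill-type repression term and its parameters are generic textbook assumptions, not taken from the article. The gene that "turns itself off" does not flip between true and false but settles at an intermediate steady expression level.

```python
import numpy as np

def negative_autoregulation(k=1.0, n=4, t_end=40.0, dt=0.01):
    """Euler integration of dA/dt = k / (1 + A**n) - A: A represses its own synthesis."""
    a, traj = 0.0, []
    for _ in range(int(t_end / dt)):
        a += dt * (k / (1 + a ** n) - a)
        traj.append(a)
    return np.array(traj)

traj = negative_autoregulation()
steady = traj[-1]   # the dimensionless 'liar' settles at an intermediate level
```

The steady state satisfies A = k/(1 + A^n), i.e. the self-opposition is consistent once time is made explicit, exactly the point the abstract makes.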
A novel method of anterior lumbosacral cage reconstruction.
Mathios, Dimitrios; Kaloostian, Paul Edward; Bydon, Ali; Sciubba, Daniel M; Wolinsky, Jean Paul; Gokaslan, Ziya L; Witham, Timothy F
2014-02-01
Reconstruction of the lumbosacral junction is a considerable challenge for spinal surgeons due to the unique anatomical constraints of this region as well as the vectors of force that are applied focally in this area. The standard cages, both expandable and nonexpandable, often fail to reconstitute the appropriate anatomical alignment of the lumbosacral junction. This inadequate reconstruction may predispose the patient to continued back pain and neurological symptoms as well as possible pseudarthrosis and instrumentation failure. The authors describe their preoperative planning and the technical characteristics of their novel reconstruction technique at the lumbosacral junction using a cage with adjustable caps. Based precisely on preoperative measurements that maintain the appropriate Cobb angle, they performed reconstruction of the lumbosacral junction in a series of 3 patients. All 3 patients had excellent installation of the cages used for reconstruction. Postoperative CT scans were used to radiographically confirm the appropriate reconstruction of the lumbosacral junction. All patients had a significant reduction in pain, had neurological improvement, and experienced no instrumentation failure at the time of latest follow-up. Taking into account the inherent morphology of the lumbosacral junction and carefully planning the technical characteristics of the cage installation preoperatively and intraoperatively, the authors achieved favorable clinical and radiographic outcomes in all 3 cases. Based on this small case series, this technique for reconstruction of the lumbosacral junction appears to be a safe and appropriate method of reconstruction of the anterior spinal column in this technically challenging region of the spine.
Reconstruction methods for phase-contrast tomography
Raven, C.
1997-02-01
Phase-contrast imaging with coherent X-rays can be divided into outline imaging and holography, depending on the wavelength λ, the object size d, and the object-to-detector distance r. When r ≪ d²/λ, phase contrast occurs only in regions where the refractive index changes rapidly, i.e., at interfaces and edges in the sample. With increasing object-to-detector distance we enter the regime of holographic imaging. The image contrast outside the shadow region of the object is due to interference of the direct, undiffracted beam with a beam diffracted by the object or, in terms of holography, the interference of a reference wave with the object wave. Both outline imaging and holography offer the possibility of obtaining three-dimensional information about the sample in conjunction with a tomographic technique, but the data treatment and the kind of information one can obtain from the reconstruction differ.
Yeast Ancestral Genome Reconstructions: The Possibilities of Computational Methods
NASA Astrophysics Data System (ADS)
Tannier, Eric
In 2006, a debate arose over the efficiency of bioinformatics methods for reconstructing mammalian ancestral genomes. Three years later, Gordon et al. (PLoS Genetics, 5(5), 2009) chose not to use automatic methods to build the genome of a 100-million-year-old Saccharomyces cerevisiae ancestor. Their manually constructed ancestor provides a reference genome against which to test whether automatic methods are indeed unable to produce confident reconstructions. Adapting several methodological frameworks to the same yeast gene-order data, I discuss the possibilities, differences and similarities of the available algorithms for ancestral genome reconstruction. The methods can be classified into two types, local and global, and studying the properties of both helps clarify what we can expect from their usage. Both types propose contiguous ancestral regions that come very close (>95% identity) to the manually predicted ancestral yeast chromosomes, with good coverage of the extant genomes.
A novel electron density reconstruction method for asymmetrical toroidal plasmas
Shi, N.; Ohshima, S.; Minami, T.; Nagasaki, K.; Yamamoto, S.; Mizuuchi, T.; Okada, H.; Kado, S.; Kobayashi, S.; Konoshima, S.; Sano, F.; Tanaka, K.; Ohtani, Y.; Zang, L.; Kenmochi, N.
2014-05-15
A novel reconstruction method is developed for obtaining the electron density profile from multi-channel interferometric measurements of strongly asymmetrical toroidal plasmas. It is based on a regularization technique, and a generalized cross-validation function is used to optimize the regularization parameter with the aid of singular value decomposition. The feasibility of the method was verified with simulated measurements based on a magnetic configuration of the flexible helical-axis heliotron device Heliotron J, which has an asymmetrical poloidal cross section. The successful reconstruction makes it possible to build a multi-channel far-infrared laser interferometer on this device. The advantages of this method are demonstrated by comparison with a conventional method. The factors that may affect the accuracy of the results are investigated, and an error analysis is carried out. Based on the obtained results, the proposed method is highly promising for accurately reconstructing the electron density in asymmetrical toroidal plasmas.
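The regularization-plus-GCV machinery described here is generic and can be sketched directly from the SVD. The toy below picks the Tikhonov parameter minimizing the generalized cross-validation score; the random test matrix and the simple grid search over λ are illustrative assumptions, not the paper's interferometry geometry.

```python
import numpy as np

def gcv_tikhonov(A, b, lams):
    """Tikhonov regularization with the parameter chosen by generalized cross-validation."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    r0 = np.linalg.norm(b - U @ beta) ** 2           # residual outside the range of A
    best = None
    for lam in lams:
        f = s ** 2 / (s ** 2 + lam ** 2)             # Tikhonov filter factors
        resid = r0 + np.linalg.norm((1 - f) * beta) ** 2
        gcv = resid / (A.shape[0] - f.sum()) ** 2    # GCV = ||r||^2 / tr(I - A A_lam^+)^2
        if best is None or gcv < best[0]:
            best = (gcv, lam, Vt.T @ (f * beta / s))
    return best[1], best[2]

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 20))
x_true = rng.normal(size=20)
b = A @ x_true + 0.01 * rng.normal(size=40)
lam, x_hat = gcv_tikhonov(A, b, np.logspace(-4, 1, 30))
```

Computing the filter factors from a single SVD makes the λ-scan essentially free, which is why SVD is the natural companion to GCV here.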
Assessing the Accuracy of Ancestral Protein Reconstruction Methods
Williams, Paul D; Pollock, David D; Blackburne, Benjamin P; Goldstein, Richard A
2006-01-01
The phylogenetic inference of ancestral protein sequences is a powerful technique for the study of molecular evolution, but any conclusions drawn from such studies are only as good as the accuracy of the reconstruction method. Every inference method leads to errors in the ancestral protein sequence, resulting in potentially misleading estimates of the ancestral protein's properties. To assess the accuracy of ancestral protein reconstruction methods, we performed computational population evolution simulations featuring near-neutral evolution under purifying selection, speciation, and divergence using an off-lattice protein model where fitness depends on the ability to be stable in a specified target structure. We were thus able to compare the thermodynamic properties of the true ancestral sequences with the properties of “ancestral sequences” inferred by maximum parsimony, maximum likelihood, and Bayesian methods. Surprisingly, we found that methods such as maximum parsimony and maximum likelihood that reconstruct a “best guess” amino acid at each position overestimate thermostability, while a Bayesian method that sometimes chooses less-probable residues from the posterior probability distribution does not. Maximum likelihood and maximum parsimony apparently tend to eliminate variants at a position that are slightly detrimental to structural stability simply because such detrimental variants are less frequent. Other properties of ancestral proteins might be similarly overestimated. This suggests that ancestral reconstruction studies require greater care to come to credible conclusions regarding functional evolution. Inferred functional patterns that mimic reconstruction bias should be reevaluated. PMID:16789817
Method of breast reconstruction and the development of lymphoedema.
Lee, K-T; Bang, S I; Pyon, J-K; Hwang, J H; Mun, G-H
2017-02-01
Several studies have demonstrated an association between immediate autologous or implant-based breast reconstruction and a reduced incidence of lymphoedema. However, few have focused specifically on whether the reconstruction method affects the development of lymphoedema. This study evaluated the potential impact of breast reconstruction modality on the incidence of lymphoedema. Outcomes of women with breast cancer who underwent mastectomy and immediate reconstruction using an autologous flap or a tissue expander/implant between 2008 and 2013 were reviewed. Arm or hand swelling with pertinent clinical signs of lymphoedema and excess volume compared with that of the contralateral side was diagnosed as lymphoedema. The cumulative incidence of lymphoedema was estimated by the Kaplan-Meier method. Clinicopathological factors associated with the development of lymphoedema were investigated by Cox regression analysis. A total of 429 reconstructions (214 autologous and 215 tissue expander/implant) were analysed; the mean follow-up of patients was 45·3 months. The two groups had similar characteristics, except that women in the autologous group were older, had a higher BMI, and more often had preoperative radiotherapy than women in the tissue expander/implant group. Overall, the 2-year cumulative incidence of lymphoedema was 6·8 per cent (autologous 4·2 per cent, tissue expander/implant 9·3 per cent). Multivariable analysis demonstrated that autologous reconstruction was associated with a significantly reduced risk of lymphoedema compared with that for tissue expander/implant reconstruction. Axillary dissection, a greater number of dissected lymph nodes and postoperative chemotherapy were also independent risk factors for lymphoedema. The method of breast reconstruction may affect subsequent development of lymphoedema. © 2016 BJS Society Ltd Published by John Wiley & Sons Ltd.
A Comparison of Methods for Ocean Reconstruction from Sparse Observations
NASA Astrophysics Data System (ADS)
Streletz, G. J.; Kronenberger, M.; Weber, C.; Gebbie, G.; Hagen, H.; Garth, C.; Hamann, B.; Kreylos, O.; Kellogg, L. H.; Spero, H. J.
2014-12-01
We present a comparison of two methods for developing reconstructions of oceanic scalar property fields from sparse scattered observations. Observed data from deep sea core samples provide valuable information regarding the properties of oceans in the past. However, because the locations of sample sites are distributed on the ocean floor in a sparse and irregular manner, developing a global ocean reconstruction is a difficult task. Our methods include a flow-based and a moving least squares -based approximation method. The flow-based method augments the process of interpolating or approximating scattered scalar data by incorporating known flow information. The scheme exploits this additional knowledge to define a non-Euclidean distance measure between points in the spatial domain. This distance measure is used to create a reconstruction of the desired scalar field on the spatial domain. The resulting reconstruction thus incorporates information from both the scattered samples and the known flow field. The second method does not assume a known flow field, but rather works solely with the observed scattered samples. It is based on a modification of the moving least squares approach, a weighted least squares approximation method that blends local approximations into a global result. The modifications target the selection of data used for these local approximations and the construction of the weighting function. The definition of distance used in the weighting function is crucial for this method, so we use a machine learning approach to determine a set of near-optimal parameters for the weighting. We have implemented both of the reconstruction methods and have tested them using several sparse oceanographic datasets. Based upon these studies, we discuss the advantages and disadvantages of each method and suggest possible ways to combine aspects of both methods in order to achieve an overall high-quality reconstruction.
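As a concrete, heavily simplified illustration of the moving least squares idea, the sketch below fits a weighted local linear model at each query point. The Gaussian weight, bandwidth, and linear basis are generic textbook choices, not the modified data selection, weighting, or machine-learned parameters described in the abstract.

```python
import numpy as np

def mls_reconstruct(sites, values, queries, h=0.5):
    """Moving least squares: a local linear fit with Gaussian weights at each query point."""
    out = np.empty(len(queries))
    for k, q in enumerate(queries):
        w = np.exp(-((sites - q) ** 2).sum(axis=1) / (2 * h * h))  # distance-based weights
        B = np.hstack([np.ones((len(sites), 1)), sites - q])       # local basis [1, x - q]
        G = B.T @ (w[:, None] * B)                                 # weighted normal matrix
        out[k] = np.linalg.solve(G, B.T @ (w * values))[0]         # local fit evaluated at q
    return out

rng = np.random.default_rng(2)
sites = rng.random((60, 2))                     # sparse, irregular sample locations
values = 1.0 + 2.0 * sites[:, 0] - sites[:, 1]  # a linear test field
queries = rng.random((10, 2))
recon = mls_reconstruct(sites, values, queries)
```

With a linear basis the scheme reproduces linear fields exactly; the non-Euclidean, flow-informed distance of the first method would simply replace the Euclidean distance inside the weight.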
The frequency split method for helical cone-beam reconstruction.
Shechter, G; Köhler, Th; Altman, A; Proksa, R
2004-08-01
A new approximate method for the utilization of redundant data in helical cone-beam CT is presented. It is based on the observation that the original WEDGE method provides excellent image quality if only a little more than 180° of data is used for back-projection, and that significant low-frequency artifacts appear if a larger amount of redundant data is used. This degradation is compensated by the frequency split method: the low-frequency part of the image is reconstructed using a little more than 180° of data, while the high-frequency part is reconstructed using all data. The resulting algorithm shows no cone-beam artifacts in a simulation of a 64-row scanner. It is further shown that the frequency split method hardly degrades the signal-to-noise ratio of the reconstructed images and that it behaves robustly in the presence of motion.
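The frequency-split step itself, combining the low frequencies of one reconstruction with the high frequencies of another, is easy to sketch in Fourier space. The radial cutoff and the inputs below are placeholders; the actual method applies this split to WEDGE reconstructions made from different angular ranges.

```python
import numpy as np

def frequency_split(img_low, img_high, cutoff=0.15):
    """Take frequencies below `cutoff` (cycles/pixel) from img_low, the rest from img_high."""
    f_low, f_high = np.fft.fft2(img_low), np.fft.fft2(img_high)
    fy = np.fft.fftfreq(img_low.shape[0])[:, None]
    fx = np.fft.fftfreq(img_low.shape[1])[None, :]
    mask = np.hypot(fx, fy) <= cutoff          # radial low-pass mask
    return np.fft.ifft2(np.where(mask, f_low, f_high)).real

rng = np.random.default_rng(3)
a = rng.random((32, 32))
merged = frequency_split(a, a)   # identical inputs must merge to the same image
```

Because the split is a partition of Fourier coefficients, feeding the same image into both inputs returns that image unchanged, a useful sanity check on the mask.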
Various methods of breast reconstruction after mastectomy: an economic comparison.
Elkowitz, A; Colen, S; Slavin, S; Seibert, J; Weinstein, M; Shaw, W
1993-07-01
This study is an economic comparison of various methods of breast reconstruction after mastectomy. The hospital bills of 287 patients undergoing breast reconstruction at three institutions from June of 1988 to March of 1991 were analyzed. The procedures examined included mastectomy, implant and tissue-expander reconstruction, and TRAM and latissimus pedicle flaps, as well as free TRAM and free gluteal flaps. These procedures were subdivided into those performed at the time of mastectomy and those performed at a later admission. In addition, auxiliary procedures (i.e., revision, nipple reconstruction, tissue-expander exchange, and contralateral mastopexy/reduction) were also examined. Where appropriate, these procedures were subdivided by general or local anesthesia and by inpatient or outpatient status. Data from the three institutions were converted to N.Y.U. Medical Center costs for standardization. A table is presented that summarizes the costs of each individual procedure with all the pertinent variations. In addition, a unique and novel method of analyzing the data was developed. This paper describes a menu system whereby other data regarding morbidity, mortality, and revision rates may be superimposed. With this information, the final cost of reconstruction can be extrapolated and the various methods of reconstruction can be compared. This method can be applied to almost any complex series of multiple procedures. The most salient points elucidated by this study are as follows: the savings generated by performing immediate reconstruction varies between $5092 (p < 0.05) for free gluteal flaps and $10,616 (p < 0.05) for pedicled TRAM flaps. (ABSTRACT TRUNCATED AT 250 WORDS)
Zonal matrix iterative method for wavefront reconstruction from gradient measurements.
Panagopoulou, Sophia I; Neal, Daniel R
2005-01-01
To present an alternative to Zernike-decomposition (modal) wavefront reconstruction: an iterative implicit solution of the finite-difference equations (zonal). The two reconstruction methods, modal and zonal, were compared and the advantages of each were analyzed. Although the modal (Zernike) method allows quantitative interpretation of some of the aberrations, it is cumbersome for fine detail and may lead to errors for eyes with keratoconus or other rapidly varying aberrations. The zonal method produces a very high-resolution map that can be used for identifying irregular structures. The distinction between the two methods is useful to maintain, and the solution methods are generally different. In practice, both methods are useful and, with modern computers, both zonal and lower-order modal reconstructions may be calculated rapidly. The difference between the wavefronts derived from the two methods may provide useful insight into, or interpretation of, the information.
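A minimal zonal solver can be written as a least-squares integration of the measured gradients. The sketch below uses forward differences on a rectangular grid and removes the piston term, which gradient data cannot determine; the grid size and planar test field are illustrative assumptions, and the paper's iterative implicit solver is replaced here by a direct least-squares solve.

```python
import numpy as np

def zonal_reconstruct(gx, gy):
    """Least-squares integration of x/y gradient samples on a grid (zonal reconstruction)."""
    ny, nx = gx.shape
    idx = lambda i, j: i * nx + j
    rows, cols, vals, rhs = [], [], [], []
    r = 0
    for i in range(ny):                 # forward differences in x
        for j in range(nx - 1):
            rows += [r, r]; cols += [idx(i, j + 1), idx(i, j)]; vals += [1.0, -1.0]
            rhs.append(0.5 * (gx[i, j] + gx[i, j + 1])); r += 1
    for i in range(ny - 1):             # forward differences in y
        for j in range(nx):
            rows += [r, r]; cols += [idx(i + 1, j), idx(i, j)]; vals += [1.0, -1.0]
            rhs.append(0.5 * (gy[i, j] + gy[i + 1, j])); r += 1
    D = np.zeros((r, ny * nx))
    D[rows, cols] = vals
    w = np.linalg.lstsq(D, np.asarray(rhs), rcond=None)[0]
    return (w - w.mean()).reshape(ny, nx)   # piston (constant offset) is unobservable

# a planar test wavefront: the slopes are constant, so the fit should be exact
i, j = np.mgrid[0:5, 0:6]
w_true = 3.0 * i + 2.0 * j
w_rec = zonal_reconstruct(np.full((5, 6), 2.0), np.full((5, 6), 3.0))
```

The same normal equations are what an iterative implicit scheme solves; for large grids one would use a sparse matrix and an iterative solver rather than the dense `lstsq` used here.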
Matrix-based image reconstruction methods for tomography
Llacer, J.; Meng, J.D.
1984-10-01
Matrix methods of image reconstruction have not been used, in general, because of the large size of practical matrices, ill-conditioning upon inversion, and the success of Fourier-based techniques. An exception is the work that has been done at the Lawrence Berkeley Laboratory for imaging with accelerated radioactive ions. An extension of that work to more general imaging problems shows that, with a correct formulation of the problem, positron tomography with ring geometries results in well-behaved matrices which can be used for image reconstruction with no distortion of the point response in the field of view and flexibility in the design of the instrument. Maximum likelihood estimator (MLE) methods of reconstruction, which use system matrices tailored to specific instruments and do not need matrix inversion, are shown to result in good preliminary images. A parallel processing computer structure based on multiple inexpensive microprocessors is proposed as a system to implement the matrix-MLE methods. 14 references, 7 figures.
Fast alternating projection methods for constrained tomographic reconstruction.
Liu, Li; Han, Yongxin; Jin, Mingwu
2017-01-01
The alternating projection algorithms are easy to implement and effective for large-scale complex optimization problems, such as constrained reconstruction in X-ray computed tomography (CT). A typical method uses projection onto convex sets (POCS) for data fidelity and nonnegativity constraints combined with total variation (TV) minimization (so-called TV-POCS) for sparse-view CT reconstruction. However, this type of method relies on empirically selected parameters for satisfactory reconstruction, is generally slow, and lacks convergence analysis. In this work, we use a convex feasibility set approach to address the problems associated with TV-POCS and propose a framework using full sequential alternating projections, or POCS (FS-POCS), to find the solution in the intersection of the convex constraints of bounded TV function, bounded data-fidelity error, and non-negativity. The rationale behind FS-POCS is that the mathematically optimal solution of the constrained objective function may not be the physically optimal solution. Breaking constrained reconstruction into an intersection of several feasible sets can lead to faster convergence and better quantification of reconstruction parameters in a physically meaningful way, rather than by empirical trial-and-error. In addition, for large-scale optimization problems, first-order methods are usually used. Not only is the condition for convergence of gradient-based methods derived, but a primal-dual hybrid gradient (PDHG) method is also used for fast convergence of the bounded-TV projection. The newly proposed FS-POCS is evaluated and compared with TV-POCS and another convex feasibility projection method (CPTV) using both digital phantom and pseudo-real CT data to show its superior performance in reconstruction speed, image quality, and quantification.
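The POCS principle, alternating exact projections onto convex constraint sets, can be demonstrated with just two sets: the affine set {x : Ax = b} for data consistency and the nonnegative orthant. The bounded-TV set and the PDHG solver of the paper are omitted, and the problem sizes below are arbitrary; this is a sketch of the projection idea, not FS-POCS itself.

```python
import numpy as np

def pocs(A, b, iters=500):
    """Alternate exact projections onto {x : Ax = b} and {x : x >= 0}."""
    Ap = np.linalg.pinv(A)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x - Ap @ (A @ x - b)   # projection onto the data-consistency affine set
        x = np.maximum(x, 0.0)     # projection onto the nonnegative orthant
    return x

rng = np.random.default_rng(4)
A = rng.normal(size=(10, 20))      # underdetermined toy system
x_feasible = rng.random(20)        # a strictly positive solution, so the sets intersect
b = A @ x_feasible
x_hat = pocs(A, b)
```

When the constraint sets have a nonempty intersection, the iterates converge to a point satisfying all constraints simultaneously, which is exactly the feasibility formulation the abstract advocates.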
MR Image Reconstruction Using Block Matching and Adaptive Kernel Methods
Schmidt, Johannes F. M.; Santelli, Claudio; Kozerke, Sebastian
2016-01-01
An approach to Magnetic Resonance (MR) image reconstruction from undersampled data is proposed. Undersampling artifacts are removed using an iterative thresholding algorithm applied to nonlinearly transformed image block arrays. Each block array is transformed using kernel principal component analysis where the contribution of each image block to the transform depends in a nonlinear fashion on the distance to other image blocks. Elimination of undersampling artifacts is achieved by conventional principal component analysis in the nonlinear transform domain, projection onto the main components and back-mapping into the image domain. Iterative image reconstruction is performed by interleaving the proposed undersampling artifact removal step and gradient updates enforcing consistency with acquired k-space data. The algorithm is evaluated using retrospectively undersampled MR cardiac cine data and compared to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT reconstruction. Evaluation of image quality and root-mean-squared-error (RMSE) reveal improved image reconstruction for up to 8-fold undersampled data with the proposed approach relative to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT. In conclusion, block matching and kernel methods can be used for effective removal of undersampling artifacts in MR image reconstruction and outperform methods using standard compressed sensing and ℓ1-regularized parallel imaging methods. PMID:27116675
A reconstructed discontinuous Galerkin method for magnetohydrodynamics on arbitrary grids
NASA Astrophysics Data System (ADS)
Karami Halashi, Behrouz; Luo, Hong
2016-12-01
A reconstructed discontinuous Galerkin (rDG) method, designed not only to enhance the accuracy of DG methods but also to ensure the nonlinear stability of the rDG method, is developed for solving the magnetohydrodynamics (MHD) equations on arbitrary grids. In this rDG(P1P2) method, a quadratic polynomial solution (P2) is first obtained using a Hermite Weighted Essentially Non-oscillatory (WENO) reconstruction from the underlying linear polynomial (P1) discontinuous Galerkin solution, to ensure linear stability of the rDG method and to improve the efficiency of the underlying DG method. By taking advantage of handily available and yet invaluable information, namely the first derivatives in the DG formulation, the stencils used in the reconstruction involve only the von Neumann neighborhood (adjacent face-neighboring cells) and are thus compact. The first derivatives of the quadratic polynomial solution are then reconstructed using a WENO reconstruction in order to eliminate spurious oscillations in the vicinity of strong discontinuities, thus ensuring the nonlinear stability of the rDG method. The HLLD Riemann solver, introduced in the literature for one-dimensional MHD problems, is adopted in the normal direction to compute numerical fluxes. The divergence-free constraint is satisfied using the Locally Divergence Free (LDF) approach. The developed rDG method is used to compute a variety of 2D and 3D MHD problems on arbitrary grids to demonstrate its accuracy, robustness, and non-oscillatory property. Our numerical experiments indicate that the rDG(P1P2) method is able to capture shock waves sharply, essentially without any spurious oscillations, and achieves the designed third order of accuracy: one order higher than the underlying DG method.
Tomographic fluorescence reconstruction by a spectral projected gradient pursuit method
NASA Astrophysics Data System (ADS)
Ye, Jinzuo; An, Yu; Mao, Yamin; Jiang, Shixin; Yang, Xin; Chi, Chongwei; Tian, Jie
2015-03-01
In vivo fluorescence molecular imaging (FMI) has played an increasingly important role in preclinical biomedical research. Fluorescence molecular tomography (FMT) further upgrades the two-dimensional FMI optical information to a three-dimensional fluorescent source distribution, which can greatly facilitate applications in related studies. However, FMT presents a challenging inverse problem which is quite ill-posed and ill-conditioned. Continuous efforts to develop more practical and efficient methods for FMT reconstruction are still needed. In this paper, a method based on spectral projected gradient pursuit (SPGP) is proposed for FMT reconstruction. The proposed method is based on the directional pursuit framework. A mathematical strategy named nonmonotone line search is associated with the SPGP method, which guarantees global convergence. In addition, the Barzilai-Borwein step length is utilized to build the new step length of the SPGP method, which speeds up the convergence of this gradient method. To evaluate the performance of the proposed method, several heterogeneous simulation experiments, including multisource cases as well as comparative analyses, have been conducted. The results demonstrated that the proposed method was able to achieve satisfactory source localization with a bias less than 1 mm; the computational efficiency of the method was one order of magnitude faster than that of the comparison method; and the fluorescence reconstructed by the proposed method had a higher contrast to the background than that of the comparison method. All the results demonstrated the potential of the proposed method for practical FMT applications.
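The core ingredients named above (projected gradient with a Barzilai-Borwein spectral step) can be sketched for a simple nonnegativity-constrained least-squares problem. This is a bare-bones BB variant without the paper's directional pursuit or nonmonotone line search; all names are illustrative assumptions.

```python
import numpy as np

def spg_nonneg(A, b, n_iter=200):
    """Spectral projected gradient sketch for
    min 0.5*||Ax - b||^2  subject to  x >= 0,
    with the Barzilai-Borwein spectral step length."""
    x = np.zeros(A.shape[1])
    g = A.T @ (A @ x - b)                         # gradient at x
    alpha = 1.0
    for _ in range(n_iter):
        x_new = np.maximum(x - alpha * g, 0.0)    # projected gradient step
        g_new = A.T @ (A @ x_new - b)
        s, y = x_new - x, g_new - g
        sy = s @ y
        alpha = (s @ s) / sy if sy > 1e-12 else 1.0   # BB step length
        x, g = x_new, g_new
    return x
```

The BB step adaptively mimics the inverse curvature of the objective, which is what gives this class of gradient methods its speed on ill-conditioned problems.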
Bubble reconstruction method for wire-mesh sensors measurements
NASA Astrophysics Data System (ADS)
Mukin, Roman V.
2016-08-01
A new algorithm is presented for post-processing of void fraction measurements with wire-mesh sensors, particularly for identifying and reconstructing bubble surfaces in a two-phase flow. This method is a combination of the bubble recognition algorithm presented in Prasser (Nuclear Eng Des 237(15):1608, 2007) and the Poisson surface reconstruction algorithm developed in Kazhdan et al. (Poisson surface reconstruction. In: Proceedings of the fourth eurographics symposium on geometry processing 7, 2006). To verify the proposed technique, the reconstructed individual bubble shapes were compared with those obtained numerically in Sato and Ničeno (Int J Numer Methods Fluids 70(4):441, 2012). Using the difference between the reconstructed and reference bubble shapes, the accuracy of the proposed algorithm was estimated. In the next step, the algorithm was applied to void fraction measurements performed in Ylönen (High-resolution flow structure measurements in a rod bundle. Diss., Eidgenössische Technische Hochschule ETH Zürich, Nr. 20961, 2013) by means of wire-mesh sensors in a rod bundle geometry. The reconstructed bubble shape yields the bubble surface area and volume, and hence its Sauter diameter d_{32} as well. The Sauter diameter proved more suitable for characterizing bubble size than the volumetric diameter d_{30}, as it is capable of capturing the bi-disperse bubble size distribution in the flow. The effect of a spacer grid was studied as well: for the given spacer grid and considered flow rates, the bubble size frequency distribution peaks at almost the same position in all cases, approximately at d_{32} = 3.5 mm. This finding can be related to the specific geometry of the spacer grid or the air injection device applied in the experiments, or even to more fundamental properties of the bubble breakup and coagulation processes. In addition, an application of the new algorithm for reconstruction of a large air-water interface in a tube bundle is
Method for 3D fibre reconstruction on a microrobotic platform.
Hirvonen, J; Myllys, M; Kallio, P
2016-07-01
Automated handling of a natural fibrous object requires a method for acquiring the three-dimensional geometry of the object, because its dimensions cannot be known beforehand. This paper presents a method for calculating the three-dimensional reconstruction of a paper fibre on a microrobotic platform that contains two microscope cameras. The method is based on detecting curvature changes in the fibre centreline, and using them as the corresponding points between the different views of the images. We test the developed method with four fibre samples and compare the results with the references measured with an X-ray microtomography device. We rotate the samples through 16 different orientations on the platform and calculate the three-dimensional reconstruction to test the repeatability of the algorithm and its sensitivity to the orientation of the sample. We also test the noise sensitivity of the algorithm, and record the mismatch rate of the correspondences provided. We use the iterative closest point algorithm to align the measured three-dimensional reconstructions with the references. The average point-to-point distances between the reconstructed fibre centrelines and the references are 20-30 μm, and the mismatch rate is low. Given the manipulation tolerance, this shows that the method is well suited to automated fibre grasping. This has also been demonstrated with actual grasping experiments. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
OCT image denoising and 3D reconstruction method
NASA Astrophysics Data System (ADS)
Yan, Xue-tao; Yang, Jun; Liu, Zhi-hai; Yuan, Li-bo
2007-11-01
Optical coherence tomography (OCT) is a novel, non-contact, noninvasive method for in vivo tomographic imaging with high resolution and high speed, and has therefore become an important direction in biomedical imaging. However, when an OCT system is applied to a specimen, noise and distortion appear because the speed of the system is limited, so the image needs reconstruction. This article studies a 3-D reconstruction method for OCT. It covers denoising, recovery and segmentation, the necessary image preprocessing steps. For highly scattering media such as skin specimens, it uses the transmission properties of photons, developing denoising and recovery algorithms based on an optical model of photon propagation in biological tissue to remove speckle from skin images and perform 3-D reconstruction. It proposes a dynamic average background estimation algorithm based on time-domain estimation. This method combines time-domain estimation with frequency-domain filtering to remove image noise effectively. In addition, it constructs a noise model for image recovery to avoid longitudinal distortion, depth-amplitude distortion and image blurring. Through comparison and discussion, the method improves and optimizes the algorithms to improve image quality. The article optimizes the iterative reconstruction algorithm to improve its convergence speed, and realizes 3-D reconstruction of OCT specimen data. This opens the door for further analysis and diagnosis of diseases.
Digital Signal Processing and Control for the Study of Gene Networks
NASA Astrophysics Data System (ADS)
Shin, Yong-Jun
2016-04-01
Thanks to the digital revolution, digital signal processing and control has been widely used in many areas of science and engineering today. It provides practical and powerful tools to model, simulate, analyze, design, measure, and control complex and dynamic systems such as robots and aircraft. Gene networks are also complex dynamic systems which can be studied via digital signal processing and control. Unlike conventional computational methods, this approach is capable of not only modeling but also controlling gene networks, since the experimental environment is mostly digital today. The overall aim of this article is to introduce digital signal processing and control as a useful tool for the study of gene networks. PMID:27102828
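To make the discrete-time viewpoint concrete, a gene's expression dynamics can be viewed as a first-order digital filter. The model form, parameter names and numbers below are illustrative assumptions, not taken from the article.

```python
import numpy as np

def simulate_gene(a, b, u, x0=0.0):
    """One-gene expression model viewed as a first-order digital filter:
    x[n+1] = a*x[n] + b*u[n],
    where a (0 < a < 1) lumps mRNA degradation between samples and
    u[n] is a sampled regulatory input (e.g. transcription-factor level)."""
    x = np.empty(len(u) + 1)
    x[0] = x0
    for n, un in enumerate(u):
        x[n + 1] = a * x[n] + b * un
    return x
```

Under a constant unit input the expression level settles at the filter's DC gain b/(1 - a), so standard DSP notions (gain, time constant, frequency response) carry over directly to this kind of gene model.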
Reconstructing Program Theories: Methods Available and Problems To Be Solved.
ERIC Educational Resources Information Center
Leeuw, Frans L.
2003-01-01
Discusses methods for reconstructing theories underlying programs and policies, focusing on three approaches: (1) an empirical approach that focuses on interviews, documents, and argumentational analysis; (2) an approach based on strategic assessment, group dynamics, and dialogue; and (3) an approach based on cognitive and organizational…
Robust Methods for Sensing and Reconstructing Sparse Signals
ERIC Educational Resources Information Center
Carrillo, Rafael E.
2012-01-01
Compressed sensing (CS) is an emerging signal acquisition framework that goes against the traditional Nyquist sampling paradigm. CS demonstrates that a sparse, or compressible, signal can be acquired using a low rate acquisition process. Since noise is always present in practical data acquisition systems, sensing and reconstruction methods are…
An improved reconstruction method for cosmological density fields
NASA Technical Reports Server (NTRS)
Gramann, Mirt
1993-01-01
This paper proposes some improvements to existing reconstruction methods for recovering the initial linear density and velocity fields of the universe from the present large-scale density distribution. We derive the Eulerian continuity equation in the Zel'dovich approximation and show that, by applying this equation, we can trace the evolution of the gravitational potential of the universe more exactly than is possible with previous approaches based on the Zel'dovich-Bernoulli equation. The improved reconstruction method is tested using N-body simulations. Where the Zel'dovich-Bernoulli equation describes the formation of filaments, the Zel'dovich continuity equation also follows the clustering of clumps inside the filaments. Our reconstruction method recovers the true initial gravitational potential with an rms error about 3 times smaller than previous methods. We examine the recovery of the initial distribution of Fourier components and find the scale at which the recovered phases are scrambled with respect to their true initial values. Integrating the Zel'dovich continuity equation back in time, we can improve the spatial resolution of the reconstruction by a factor of about 2.
Method for image reconstruction of moving radionuclide source distribution
Stolin, Alexander V.; McKisson, John E.; Lee, Seung Joon; Smith, Mark Frederick
2012-12-18
A method for image reconstruction of moving radionuclide distributions. Its particular embodiment is for single photon emission computed tomography (SPECT) imaging of awake animals, though its techniques are general enough to be applied to other moving radionuclide distributions as well. The invention eliminates motion and blurring artifacts for image reconstructions of moving source distributions. This opens new avenues in the area of small animal brain imaging with radiotracers, which can now be performed without the perturbing influences of anesthesia or physical restraint on the biological system.
Testing the global flow reconstruction method on coupled chaotic oscillators
NASA Astrophysics Data System (ADS)
Plachy, Emese; Kolláth, Zoltán
2010-03-01
Irregular behaviour of pulsating variable stars may occur due to low-dimensional chaos. To determine the quantitative properties of the dynamics in such systems, we apply a suitable time series analysis, the global flow reconstruction method. The robustness of the reconstruction can be tested through the resultant quantities, such as the Lyapunov dimension and the Fourier frequencies. The latter is especially important as it is directly derivable from the observed light curves. We have performed tests using coupled Rössler oscillators to investigate the possible connection between these quantities. In this paper we present our test results.
Endoscopic Skull Base Reconstruction: An Evolution of Materials and Methods.
Sigler, Aaron C; D'Anza, Brian; Lobo, Brian C; Woodard, Troy; Recinos, Pablo F; Sindwani, Raj
2017-03-31
Endoscopic skull base surgery has developed rapidly over the last decade, in large part because of the expanding armamentarium of endoscopic repair techniques. This article reviews the available technologies and techniques, including vascularized and nonvascularized flaps, synthetic grafts, sealants and glues, and multilayer reconstruction. Understanding which of these repair methods is appropriate and under what circumstances is paramount to achieving success in this challenging but rewarding field. A graduated approach to skull base reconstruction is presented to provide a systematic framework to guide selection of repair technique to ensure a successful outcome while minimizing morbidity for the patient.
3D reconstruction methods of coronal structures by radio observations
NASA Technical Reports Server (NTRS)
Aschwanden, Markus J.; Bastian, T. S.; White, Stephen M.
1992-01-01
The ability to carry out the three dimensional (3D) reconstruction of structures in the solar corona would represent a major advance in the study of the physical properties in active regions and in flares. Methods which allow a geometric reconstruction of quasistationary coronal structures (for example active region loops) or dynamic structures (for example flaring loops) are described: stereoscopy of multi-day imaging observations by the VLA (Very Large Array); tomography of optically thin emission (in radio or soft x-rays); multifrequency band imaging by the VLA; and tracing of magnetic field lines by propagating electron beams.
NASA Technical Reports Server (NTRS)
Newman, Timothy; Santhanam, Naveen; Zhang, Huijuan; Gallagher, Dennis
2003-01-01
A new method for reconstructing the global 3D distribution of plasma densities in the plasmasphere from a limited number of 2D views is presented. The method is aimed at using data from the Extreme Ultra Violet (EUV) sensor on NASA's Imager for Magnetopause-to-Aurora Global Exploration (IMAGE) satellite. Physical properties of the plasmasphere are exploited by the method to reduce the level of inaccuracy imposed by the limited number of views. The utility of the method is demonstrated on synthetic data.
Reverse engineering gene networks: Integrating genetic perturbations with dynamical modeling
Tegnér, Jesper; Yeung, M. K. Stephen; Hasty, Jeff; Collins, James J.
2003-01-01
While the fundamental building blocks of biology are being tabulated by the various genome projects, microarray technology is setting the stage for the task of deducing the connectivity of large-scale gene networks. We show how the perturbation of carefully chosen genes in a microarray experiment can be used in conjunction with a reverse engineering algorithm to reveal the architecture of an underlying gene regulatory network. Our iterative scheme identifies the network topology by analyzing the steady-state changes in gene expression resulting from the systematic perturbation of a particular node in the network. We highlight the validity of our reverse engineering approach through the successful deduction of the topology of a linear in numero gene network and a recently reported model for the segmentation polarity network in Drosophila melanogaster. Our method may prove useful in identifying and validating specific drug targets and in deconvolving the effects of chemical compounds. PMID:12730377
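The steady-state idea described above has a compact linear-algebra core: near a steady state, a linearized network dx/dt = A x + p satisfies A X + P = 0 across a set of perturbation experiments, so the connectivity follows by matrix inversion. The sketch below is a noise-free toy with a made-up 3-gene matrix, not the paper's full iterative scheme.

```python
import numpy as np

def infer_network(X, P):
    """Recover the connectivity matrix A of a linearized gene network
    dx/dt = A x + p from steady-state data: column j of X is the
    steady-state expression change under perturbation P[:, j], so
    A X + P = 0 and A = -P X^{-1} for a full-rank design."""
    return -P @ np.linalg.inv(X)

# Hypothetical 3-gene network (matrix chosen only for illustration)
A_true = np.array([[-1.0, 0.5, 0.0],
                   [0.0, -1.0, 0.8],
                   [0.6, 0.0, -1.0]])
P = np.eye(3)                       # perturb each gene in turn
X = np.linalg.solve(A_true, -P)     # resulting steady states
A_est = infer_network(X, P)         # recovers A_true exactly (noise-free)
```

With noisy expression data the inversion would be replaced by regularized regression, which is where methods like the paper's iterative scheme and the shrinkage approaches surveyed elsewhere in this collection come in.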
A fast linear reconstruction method for scanning impedance imaging.
Liu, Hongze; Hawkins, Aaron R; Schultz, Stephen M; Oliphant, Travis E
2006-01-01
Scanning electrical impedance imaging (SII) has been developed and implemented as a novel high resolution imaging modality with the potential of imaging the electrical properties of biological tissues. In this paper, a fast linear model is derived and applied to the impedance image reconstruction of scanning impedance imaging. With the help of both the deblurring concept and the reciprocity principle, this new approach leads to a calibrated approximation of the exact impedance distribution rather than a relative one from the original simplified linear method. Additionally, the method shows much less computational cost than the more straightforward nonlinear inverse method based on the forward model. The kernel function of this new approach is described and compared to the kernel of the simplified linear method. Two-dimensional impedance images of a flower petal and cancer cells are reconstructed using this method. The images reveal details not present in the measured images.
GENIES: gene network inference engine based on supervised analysis.
Kotera, Masaaki; Yamanishi, Yoshihiro; Moriya, Yuki; Kanehisa, Minoru; Goto, Susumu
2012-07-01
Gene network inference engine based on supervised analysis (GENIES) is a web server to predict unknown part of gene network from various types of genome-wide data in the framework of supervised network inference. The originality of GENIES lies in the construction of a predictive model using partially known network information and in the integration of heterogeneous data with kernel methods. The GENIES server accepts any 'profiles' of genes or proteins (e.g. gene expression profiles, protein subcellular localization profiles and phylogenetic profiles) or pre-calculated gene-gene similarity matrices (or 'kernels') in the tab-delimited file format. As a training data set to learn a predictive model, the users can choose either known molecular network information in the KEGG PATHWAY database or their own gene network data. The user can also select an algorithm of supervised network inference, choose various parameters in the method, and control the weights of heterogeneous data integration. The server provides the list of newly predicted gene pairs, maps the predicted gene pairs onto the associated pathway diagrams in KEGG PATHWAY and indicates candidate genes for missing enzymes in organism-specific metabolic pathways. GENIES (http://www.genome.jp/tools/genies/) is publicly available as one of the genome analysis tools in GenomeNet.
Parallel MR image reconstruction using augmented Lagrangian methods.
Ramani, Sathish; Fessler, Jeffrey A
2011-03-01
Magnetic resonance image (MRI) reconstruction using SENSitivity Encoding (SENSE) requires regularization to suppress noise and aliasing effects. Edge-preserving and sparsity-based regularization criteria can improve image quality, but they demand computation-intensive nonlinear optimization. In this paper, we present novel methods for regularized MRI reconstruction from undersampled sensitivity encoded data--SENSE-reconstruction--using the augmented Lagrangian (AL) framework for solving large-scale constrained optimization problems. We first formulate regularized SENSE-reconstruction as an unconstrained optimization task and then convert it to a set of (equivalent) constrained problems using variable splitting. We then attack these constrained versions in an AL framework using an alternating minimization method, leading to algorithms that can be implemented easily. The proposed methods are applicable to a general class of regularizers that includes popular edge-preserving (e.g., total-variation) and sparsity-promoting (e.g., l(1)-norm of wavelet coefficients) criteria and combinations thereof. Numerical experiments with synthetic and in vivo human data illustrate that the proposed AL algorithms converge faster than both general-purpose optimization algorithms such as nonlinear conjugate gradient (NCG) and state-of-the-art MFISTA.
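The variable-splitting strategy described above can be illustrated on the simplest sparsity-regularized problem. The sketch below is a generic scaled ADMM for the lasso, min 0.5||Ax-b||² + λ||z||₁ subject to x = z, not the paper's SENSE-specific algorithms; names and defaults are assumptions.

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, n_iter=500):
    """Variable splitting + augmented Lagrangian (scaled ADMM) for
    min 0.5*||Ax - b||^2 + lam*||z||_1  subject to  x = z."""
    n = A.shape[1]
    Q = np.linalg.inv(A.T @ A + rho * np.eye(n))   # cached for the x-update
    Atb = A.T @ b
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(n_iter):
        x = Q @ (Atb + rho * (z - u))              # quadratic subproblem
        w = x + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)  # soft-threshold
        u = u + x - z                              # dual (multiplier) update
    return z
```

The appeal, as in the abstract, is that each subproblem is easy: a linear solve for x and a closed-form shrinkage for z, alternated until the split variables agree.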
A Robust Shape Reconstruction Method for Facial Feature Point Detection.
Tan, Shuqiu; Chen, Dongyi; Guo, Chenggang; Huang, Zhiqi
2017-01-01
Facial feature point detection has seen great research advances in recent years. Numerous methods have been developed and applied in practical face analysis systems. However, it is still a quite challenging task because of the large variability in expressions and gestures and the existence of occlusions in real-world photo shoots. In this paper, we present a robust sparse reconstruction method for face alignment problems. Instead of a direct regression between the feature space and the shape space, the concept of shape increment reconstruction is introduced. Moreover, a set of coupled overcomplete dictionaries, termed the shape increment dictionary and the local appearance dictionary, are learned in a regressive manner to select robust features and fit shape increments. Additionally, to make the learned model more generalized, we select the best-matched parameter set through extensive validation tests. Experimental results on three public datasets demonstrate that the proposed method achieves better robustness than the state-of-the-art methods.
Image reconstruction method for non-synchronous THz signals
NASA Astrophysics Data System (ADS)
Oda, Naoki; Okubo, Syuichi; Sudou, Takayuki; Isoyama, Goro; Kato, Ryukou; Irizawa, Akinori; Kawase, Keigo
2014-05-01
An image reconstruction method for non-synchronous THz signals was developed for the combination of the THz free electron laser (THz-FEL) developed by Osaka University with a THz imager. The method exploits a slight time difference between the repetition period of the THz macro-pulse from the THz-FEL and the frame period of the THz imager, so that an image can be reconstructed from a predetermined number of time-sequential frames. This method was applied to the THz-FEL and another pulsed THz source, and was found to be very effective. Thermal time constants of pixels in a 320x240 microbolometer array were also evaluated with this method, using a quantum cascade laser as the THz source.
NASA Astrophysics Data System (ADS)
Chan, Harley; Gilbert, Ralph W.; Pagedar, Nitin A.; Daly, Michael J.; Irish, Jonathan C.; Siewerdsen, Jeffrey H.
2010-02-01
Esthetic appearance is one of the most important factors for reconstructive surgery. The current practice of maxillary reconstruction chooses radial forearm, fibula or iliac crest osteocutaneous flaps to recreate the three-dimensional complex structures of the palate and maxilla. However, these bone flaps lack shape similarity to the palate and result in a less satisfactory esthetic outcome. Considering similarity factors and vasculature advantages, reconstructive surgeons recently explored the use of scapular tip myo-osseous free flaps to restore the excised site. We have developed a new method that quantitatively evaluates the morphological similarity of the scapular tip bone and the palate based on a diagnostic volumetric computed tomography (CT) image. This quantitative result was further interpreted as a color map rendered on the surface of a three-dimensional computer model. For surgical planning, this color interpretation could potentially assist the surgeon in orienting the bone flap for the best fit to the reconstruction site. With approval from the Research Ethics Board (REB) of the University Health Network, we conducted a retrospective analysis of CT images obtained from 10 patients. Each patient had CT scans including the maxilla and chest on the same day. Based on this image set, we simulated total, subtotal and hemi-palate reconstruction. The simulation procedure included volume segmentation, conversion of the segmented volume to a stereolithography (STL) model, manual registration, and computation of minimum geometric distances and curvature between STL models. Across the 10 patients' data, we found the overall root-mean-square (RMS) conformance was 3.71 +/- 0.16 mm
The formulation of the control of an expression pattern in a gene network by propositional calculus.
Nakayama, Hideki; Tanaka, Hiroto; Ushio, Toshimitsu
2006-06-07
In this study we model a gene network as a continuous-time switching network. In this model, each gene has a binary state which indicates whether or not the gene is expressed. We propose a method to control a sequence of expression patterns in the gene network model by adding another continuous-time switching network. Using propositional calculus, we show that the control problem can be formulated as a mixed-integer linear programming problem with linear constraints.
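The propositional-calculus-to-MILP translation rests on standard encodings of boolean connectives as linear inequalities over binary variables. The snippet below checks the textbook encoding of z = (x AND y); it is an illustrative example of the general idea, not the paper's full formulation.

```python
def and_milp_constraints(x, y, z):
    """Standard mixed-integer linear encoding of z = (x AND y) for
    binary variables x, y, z in {0, 1}:
        z <= x,   z <= y,   z >= x + y - 1.
    Returns True iff (x, y, z) satisfies all three inequalities."""
    return z <= x and z <= y and z >= x + y - 1
```

For every binary (x, y), the only z satisfying all three inequalities is x*y, so conjunctions in an expression-pattern specification become linear constraints that an MILP solver can handle; OR and NOT have analogous encodings.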
Adaptive Kaczmarz Method for Image Reconstruction in Electrical Impedance Tomography
Li, Taoran; Kao, Tzu-Jen; Isaacson, David; Newell, Jonathan C.; Saulnier, Gary J.
2013-01-01
We present an adaptive Kaczmarz method for solving the inverse problem in electrical impedance tomography and determining the conductivity distribution inside an object from electrical measurements made on the surface. To best characterize an unknown conductivity distribution and avoid inverting the Jacobian-related term J^T J, which can be expensive in terms of computation cost and memory in large-scale problems, we propose solving the inverse problem by applying the optimal current patterns for distinguishing the actual conductivity from the conductivity estimate between each iteration of the block Kaczmarz algorithm. With a novel subset scheme, the memory-efficient reconstruction algorithm, which appropriately combines the optimal current pattern generation with the Kaczmarz method, can produce more accurate and stable solutions adaptively as compared to traditional Kaczmarz and Gauss-Newton type methods. Choices of initial current pattern estimates are discussed in the paper. Several reconstruction image metrics are used to quantitatively evaluate the performance of the simulation results. PMID:23718952
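The classic Kaczmarz iteration underlying the block and adaptive variants above is a row-action method: the current estimate is projected in turn onto the hyperplane of each equation. A minimal cyclic sketch (not the paper's adaptive/block algorithm):

```python
import numpy as np

def kaczmarz(A, b, n_sweeps=100):
    """Cyclic Kaczmarz iteration for A x = b: successively project the
    current estimate onto the hyperplane {x : a_i . x = b_i} of row i."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            ai = A[i]
            x = x + ((b[i] - ai @ x) / (ai @ ai)) * ai
    return x
```

Each update touches one row at a time, which is precisely why the method avoids forming or inverting J^T J and stays memory-efficient on large systems.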
Efficient ghost cell reconstruction for embedded boundary methods
NASA Astrophysics Data System (ADS)
Rapaka, Narsimha; Al-Marouf, Mohamad; Samtaney, Ravi
2016-11-01
A non-iterative linear reconstruction procedure for Cartesian grid embedded boundary methods is introduced. The method exploits the inherent geometrical advantage of the Cartesian grid and employs batch sorting of the ghost cells to eliminate the need for an iterative solution procedure. This reduces the computational cost of the reconstruction procedure significantly, especially for large-scale problems in a parallel environment with significant communication overhead, e.g., patch-based adaptive mesh refinement (AMR) methods. In this approach, prior computation and storage of the weighting coefficients for the neighbour cells is not required, which is particularly attractive for moving boundary problems and memory-intensive stationary boundary problems. The method utilizes a compact and unique interpolation stencil while also providing second-order spatial accuracy. It provides a single-step, direct reconstruction for the ghost cells that enforces the boundary conditions on the embedded boundary. The method is extendable to higher-order interpolations as well. Examples that demonstrate the advantages of the present approach are presented. Supported by the KAUST Office of Competitive Research Funds under Award No. URF/1/1394-01.
Iterative reconstruction methods in X-ray CT.
Beister, Marcel; Kolditz, Daniel; Kalender, Willi A
2012-04-01
Iterative reconstruction (IR) methods have recently re-emerged in transmission x-ray computed tomography (CT). They were successfully used in the early years of CT, but were abandoned as the amount of measured data grew, owing to the higher computational demands of IR compared with analytical methods. The availability of large computational capacities in normal workstations and the ongoing efforts towards lower doses in CT have changed the situation; IR has become a hot topic for all major vendors of clinical CT systems in the past 5 years. This review strives to provide information on IR methods and aims at interested physicists and physicians already active in the field of CT. We give an overview on the terminology used and an introduction to the most important algorithmic concepts including references for further reading. As a practical example, details on a model-based iterative reconstruction algorithm implemented on a modern graphics adapter (GPU) are presented, followed by application examples for several dedicated CT scanners in order to demonstrate the performance and potential of iterative reconstruction methods. Finally, some general thoughts regarding the advantages and disadvantages of IR methods as well as open points for research in this field are discussed.
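The algebraic family of IR methods discussed in such reviews can be illustrated with a SIRT-style simultaneous update. This is a generic sketch, not any vendor's algorithm; the three-ray, two-pixel system below is a hypothetical stand-in for a real CT geometry:

```python
import numpy as np

def sirt(A, b, n_iter=200):
    """SIRT/Cimmino-style simultaneous algebraic reconstruction:
    a damped back-projection of the residual at every iteration."""
    m, n = A.shape
    R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)   # row normalisation
    C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)   # column normalisation
    x = np.zeros(n)
    for _ in range(n_iter):
        x += C * (A.T @ (R * (b - A @ x)))
        x = np.maximum(x, 0.0)   # attenuation cannot be negative
    return x

# Three "rays" through two "pixels" with known attenuations [0.5, 1.0].
A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
x_true = np.array([0.5, 1.0])
x = sirt(A, A @ x_true)
```

The non-negativity projection is one example of the prior knowledge that IR can incorporate and analytical methods cannot.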
Improving automated 3D reconstruction methods via vision metrology
NASA Astrophysics Data System (ADS)
Toschi, Isabella; Nocerino, Erica; Hess, Mona; Menna, Fabio; Sargeant, Ben; MacDonald, Lindsay; Remondino, Fabio; Robson, Stuart
2015-05-01
This paper aims to provide a procedure for improving automated 3D reconstruction methods via vision metrology. The 3D reconstruction problem is generally addressed using two different approaches. On the one hand, vision metrology (VM) systems try to accurately derive 3D coordinates of a few sparse object points for industrial measurement and inspection applications; on the other hand, recent dense image matching (DIM) algorithms are designed to produce dense point clouds for surface representations and analyses. This paper strives to demonstrate a step towards narrowing the gap between traditional VM and DIM approaches. Efforts are therefore intended to (i) test the metric performance of the automated photogrammetric 3D reconstruction procedure, (ii) enhance the accuracy of the final results and (iii) obtain statistical indicators of the quality achieved in the orientation step. VM tools are exploited to integrate their main functionalities (centroid measurement, photogrammetric network adjustment, precision assessment, etc.) into the pipeline of 3D dense reconstruction. Finally, geometric analyses and accuracy evaluations are performed on the raw output of the matching (i.e. the point clouds) by adopting a metrological approach. The latter is based on the use of known geometric shapes and quality parameters derived from VDI/VDE guidelines. Tests are carried out by imaging the calibrated Portable Metric Test Object, designed and built at University College London (UCL), UK. It allows assessment of the performance of the image orientation and matching procedures within a typical industrial scenario, characterised by poor texture and known 3D/2D shapes.
Image reconstruction methods for the PBX-M pinhole camera.
Holland, A; Powell, E T; Fonck, R J
1991-09-10
We describe two methods that have been used to reconstruct the soft x-ray emission profile of the PBX-M tokamak from the projected images recorded by the PBX-M pinhole camera [Proc. Soc. Photo-Opt. Instrum. Eng. 691, 111 (1986)]. Both methods must accurately represent the shape of the reconstructed profile while also providing a degree of immunity to noise in the data. The first method is a simple least-squares fit to the data. This has the advantage of being fast and small and thus easily implemented on the PDP-11 computer used to control the video digitizer for the pinhole camera. The second method involves the application of a maximum entropy algorithm to an overdetermined system. This has the advantage of allowing the use of a default profile. This profile contains additional knowledge about the plasma shape that can be obtained from equilibrium fits to the external magnetic measurements. Additionally the reconstruction is guaranteed positive, and the fit to the data can be relaxed by specifying both the amount and distribution of noise in the image. The algorithm described has the advantage of being considerably faster for an overdetermined system than the usual Lagrange multiplier approach to finding the maximum entropy solution [J. Opt. Soc. Am. 62, 511 (1972); Rev. Sci. Instrum. 57, 1557 (1986)].
Image reconstruction methods for the PBX-M pinhole camera
Holland, A.; Powell, E.T.; Fonck, R.J.
1990-03-01
This paper describes two methods which have been used to reconstruct the soft x-ray emission profile of the PBX-M tokamak from the projected images recorded by the PBX-M pinhole camera. Both methods must accurately represent the shape of the reconstructed profile while also providing a degree of immunity to noise in the data. The first method is a simple least squares fit to the data. This has the advantage of being fast and small, and thus easily implemented on the PDP-11 computer used to control the video digitizer for the pinhole camera. The second method involves the application of a maximum entropy algorithm to an overdetermined system. This has the advantage of allowing the use of a default profile. This profile contains additional knowledge about the plasma shape which can be obtained from equilibrium fits to the external magnetic measurements. Additionally the reconstruction is guaranteed positive, and the fit to the data can be relaxed by specifying both the amount and distribution of noise in the image. The algorithm described has the advantage of being considerably faster, for an overdetermined system, than the usual Lagrange multiplier approach to finding the maximum entropy solution. 13 refs., 24 figs.
Adaptive and robust methods of reconstruction (ARMOR) for thermoacoustic tomography.
Xie, Yao; Guo, Bin; Li, Jian; Ku, Geng; Wang, Lihong V
2008-12-01
In this paper, we present new adaptive and robust methods of reconstruction (ARMOR) for thermoacoustic tomography (TAT), and study their performance for breast cancer detection. TAT is an emerging medical imaging technique that combines the merits of high contrast due to electromagnetic or laser stimulation and high resolution offered by thermal acoustic imaging. The current image reconstruction methods used for TAT, such as the delay-and-sum (DAS) approach, are data-independent and suffer from low resolution, high sidelobe levels, and poor interference rejection capabilities. The data-adaptive ARMOR methods can have much better resolution and interference rejection capabilities than their data-independent counterparts. By allowing certain uncertainties, ARMOR can be used to mitigate the amplitude and phase distortion problems encountered in TAT. The excellent performance of ARMOR is demonstrated using both simulated and experimentally measured data.
Optical Sensors and Methods for Underwater 3D Reconstruction
Massot-Campos, Miquel; Oliver-Codina, Gabriel
2015-01-01
This paper presents a survey on optical sensors and methods for 3D reconstruction in underwater environments. The techniques to obtain range data have been listed and explained, together with the different sensor hardware that makes them possible. The literature has been reviewed, and a classification has been proposed for the existing solutions. New developments, commercial solutions and previous reviews in this topic have also been gathered and considered. PMID:26694389
Ghaderi, Parviz; Marateb, Hamid R
2017-07-01
The aim of this study was to reconstruct low-quality high-density surface EMG (HDsEMG) signals, recorded with 2-D electrode arrays, using image inpainting and surface reconstruction methods. It is common that some fraction of the electrodes may provide low-quality signals. We used a variety of image inpainting methods, based on partial differential equations (PDEs), and surface reconstruction methods to reconstruct the time-averaged or instantaneous muscle activity maps of those outlier channels. Two novel reconstruction algorithms were also proposed. HDsEMG signals were recorded from the biceps femoris and brachial biceps muscles during low-to-moderate-level isometric contractions, and some of the channels (5-25%) were randomly marked as outliers. The root-mean-square error (RMSE) between the original and reconstructed maps was then calculated. Overall, the proposed Poisson and wave PDE outperformed the other methods (average RMSE 8.7 μVrms ± 6.1 μVrms and 7.5 μVrms ± 5.9 μVrms) for the time-averaged single-differential and monopolar map reconstruction, respectively. Biharmonic Spline, the discrete cosine transform, and the Poisson PDE outperformed the other methods for the instantaneous map reconstruction. The running time of the proposed Poisson and wave PDE methods, implemented using a Vectorization package, was 4.6 ± 5.7 ms and 0.6 ± 0.5 ms, respectively, for each signal epoch or time sample in each channel. The proposed reconstruction algorithms could be promising new tools for reconstructing muscle activity maps in real-time applications. Proper reconstruction methods could recover the information of low-quality recorded channels in HDsEMG signals.
Efficient finite element method for grating profile reconstruction
NASA Astrophysics Data System (ADS)
Zhang, Ruming; Sun, Jiguang
2015-12-01
This paper concerns the reconstruction of grating profiles from scattering data. The inverse problem is formulated as an optimization problem with a regularization term. We devise an efficient finite element method (FEM) and employ a quasi-Newton method to solve it. For the direct problems, the FEM stiffness and mass matrices are assembled once at the beginning of the numerical procedure. Then only minor changes are made to the mass matrix at each iteration, which significantly saves the computation cost. Numerical examples show that the method is effective and robust.
Computational methods estimating uncertainties for profile reconstruction in scatterometry
NASA Astrophysics Data System (ADS)
Gross, H.; Rathsfeld, A.; Scholze, F.; Model, R.; Bär, M.
2008-04-01
The solution of the inverse problem in scatterometry, i.e. the determination of periodic surface structures from light diffraction patterns, is incomplete without knowledge of the uncertainties associated with the reconstructed surface parameters. With decreasing feature sizes of lithography masks, increasing demands on metrology techniques arise. Scatterometry as a non-imaging indirect optical method is applied to periodic line-space structures in order to determine geometric parameters like side-wall angles, heights, top and bottom widths and to evaluate the quality of the manufacturing process. The numerical simulation of the diffraction process is based on the finite element solution of the Helmholtz equation. The inverse problem seeks to reconstruct the grating geometry from measured diffraction patterns. Restricting the class of gratings and the set of measurements, this inverse problem can be reformulated as a non-linear operator equation in Euclidean spaces. The operator maps the grating parameters to the efficiencies of diffracted plane wave modes. We employ a Gauss-Newton type iterative method to solve this operator equation and end up minimizing the deviation of the measured efficiency or phase shift values from the simulated ones. The reconstruction properties and the convergence of the algorithm, however, are controlled by the local conditioning of the non-linear mapping and the uncertainties of the measured efficiencies or phase shifts. In particular, the uncertainties of the reconstructed geometric parameters essentially depend on the uncertainties of the input data and can be estimated by various methods. We compare the results obtained from a Monte Carlo procedure to the estimations gained from the approximate covariance matrix of the profile parameters close to the optimal solution and apply them to EUV masks illuminated by plane waves with wavelengths in the range of 13 nm.
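The two uncertainty estimates compared above (Monte Carlo versus the linearised covariance near the optimum) can be sketched generically. The Jacobian, noise level, and parameter values below are hypothetical stand-ins, not scatterometry data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear forward model: "efficiencies" depend on two
# profile parameters p through a fixed Jacobian J.
J = np.array([[1.0, 0.3],
              [0.2, 1.1],
              [0.5, 0.7]])
p_true = np.array([2.0, -1.0])
sigma = 0.01                       # uncertainty of each measured efficiency

# Linearised estimate: covariance of the least-squares solution.
cov_lin = np.linalg.inv(J.T @ J) * sigma**2

# Monte Carlo estimate: refit repeatedly under simulated measurement noise.
fits = []
for _ in range(20000):
    y = J @ p_true + rng.normal(0.0, sigma, size=3)
    fits.append(np.linalg.lstsq(J, y, rcond=None)[0])
cov_mc = np.cov(np.array(fits), rowvar=False)
```

For a model that is close to linear near the optimum, the two covariance estimates agree; strong non-linearity or large input uncertainties are what make the Monte Carlo route necessary.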
Reconstruction of the Sunspot Group Number: The Backbone Method
NASA Astrophysics Data System (ADS)
Svalgaard, Leif; Schatten, Kenneth H.
2016-11-01
We have reconstructed the sunspot-group count, not by comparisons with other reconstructions and correcting those where they were deemed to be deficient, but by a re-assessment of original sources. The resulting series is a pure solar index and does not rely on input from other proxies, e.g. radionuclides, auroral sightings, or geomagnetic records. "Backboning" the data sets, our chosen method, provides substance and rigidity by using long-time observers as a stiffness character. Solar activity, as defined by the Group Number, appears to reach and sustain for extended intervals of time the same level in each of the last three centuries since 1700 and the past several decades do not seem to have been exceptionally active, contrary to what is often claimed.
Belaineh, Getachew; Sumner, David; Carter, Edward; Clapp, David
2013-01-01
Potential evapotranspiration (PET) and reference evapotranspiration (RET) data are usually critical components of hydrologic analysis. Many different equations are available to estimate PET and RET. Most of these equations, such as the Priestley-Taylor and Penman-Monteith methods, rely on detailed meteorological data collected at ground-based weather stations. Few weather stations collect enough data to estimate PET or RET using one of the more complex evapotranspiration equations. Currently, satellite data integrated with ground meteorological data are used with one of these evapotranspiration equations to accurately estimate PET and RET. However, for periods earlier than the last few decades, the historical reconstructions of PET and RET needed for many hydrologic analyses are limited by the paucity of satellite data and of some types of ground data. Air temperature stands out as the most generally available meteorological ground data type over the last century. Temperature-based approaches used with readily available historical temperature data offer the potential for long period-of-record PET and RET historical reconstructions. A challenge is the inconsistency between the more accurate, but more data intensive, methods appropriate for more recent periods and the less accurate, but less data intensive, methods appropriate to the more distant past. In this study, multiple methods are harmonized in a seamless reconstruction of historical PET and RET by quantifying and eliminating the biases of the simple Hargreaves-Samani method relative to the more complex and accurate Priestley-Taylor and Penman-Monteith methods. This harmonization process is used to generate long-term, internally consistent, spatiotemporal databases of PET and RET.
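The harmonization idea (quantify the bias of the simple method against the accurate one over an overlap period, then remove it from the historical record) can be sketched with a linear bias model. All numbers below are hypothetical stand-ins for Hargreaves-Samani and Penman-Monteith values:

```python
import numpy as np

rng = np.random.default_rng(3)

# Overlap period: a reference PET series (accurate method) and a
# simple temperature-based series with a systematic linear bias.
reference = 3.0 + rng.normal(0.0, 0.2, size=365)   # mm/day
simple = 1.2 * reference + 0.5                     # biased simple method

# Harmonisation: fit the bias relation over the overlap, then invert
# it to correct pre-satellite-era values from the simple method.
slope, intercept = np.polyfit(reference, simple, 1)
historical_simple = 1.2 * 2.8 + 0.5                # old uncorrected value
historical_harmonised = (historical_simple - intercept) / slope
```

Real harmonization would fit the bias seasonally and spatially, but the principle is the same: the corrected historical series is made statistically consistent with the modern, more accurate one.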
A New Method for Coronal Magnetic Field Reconstruction
NASA Astrophysics Data System (ADS)
Yi, Sibaek; Choe, Gwang-Son; Cho, Kyung-Suk; Kim, Kap-Sung
2017-08-01
A precise method of coronal magnetic field reconstruction (extrapolation) is an indispensable tool for understanding various solar activities. A variety of reconstruction codes have been developed so far and are available to researchers nowadays, but each has its own shortcomings. In this paper, a new efficient method for coronal magnetic field reconstruction is presented. The method imposes only the normal components of magnetic field and current density at the bottom boundary to avoid overspecification of the reconstruction problem, and employs vector potentials to guarantee divergence-freeness. In our method, the normal component of current density is imposed, not by adjusting the tangential components of A, but by adjusting its normal component. This allows us to avoid a possible numerical instability that occasionally arises in codes using A. In real reconstruction problems, the information for the lateral and top boundaries is absent. The arbitrariness of the boundary conditions imposed there, as well as various preprocessing, brings about the diversity of resulting solutions. We impose the source surface condition at the top boundary to accommodate flux imbalance, which always shows up in magnetograms. To enhance the convergence rate, we equip our code with a gradient-method type accelerator. Our code is tested on two analytical force-free solutions. When the solution is given only at the bottom boundary, our result surpasses competitors in most figures of merit devised by Schrijver et al. (2006). We have also applied our code to a real active region, NOAA 11974, in which two M-class flares and a halo CME took place. The EUV observation shows a sudden appearance of an erupting loop before the first flare. Our numerical solutions show that two entwining flux tubes exist before the flare and their shackling is released after the CME with one of them opened up. We suggest that the erupting loop is created by magnetic reconnection between
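Why vector potentials guarantee divergence-freeness can be checked numerically: a field derived from a potential is solenoidal, and when the discrete derivative operators commute the property holds to round-off. A 2-D periodic-grid illustration of the principle (not the authors' 3-D code):

```python
import numpy as np

def ddx(f):  # centred periodic difference along axis 0 (x)
    return (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) / 2.0

def ddy(f):  # centred periodic difference along axis 1 (y)
    return (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) / 2.0

# In 2-D, a scalar potential A_z gives B = (dA/dy, -dA/dx), so
# div B = d2A/dxdy - d2A/dydx = 0 exactly, because the discrete
# difference operators commute (grid-spacing factors cancel too).
x, y = np.meshgrid(np.linspace(0, 2 * np.pi, 64, endpoint=False),
                   np.linspace(0, 2 * np.pi, 64, endpoint=False),
                   indexing="ij")
A = np.sin(x) * np.cos(2 * y)
Bx, By = ddy(A), -ddx(A)
div_B = ddx(Bx) + ddy(By)
```

The divergence is zero to machine precision for any potential A, which is exactly the constraint that direct parameterisations of B must enforce by other means.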
Comparing 3D virtual methods for hemimandibular body reconstruction.
Benazzi, Stefano; Fiorenza, Luca; Kozakowski, Stephanie; Kullmer, Ottmar
2011-07-01
Reconstruction of fractured, distorted, or missing parts of the human skeleton presents an equal challenge in the fields of paleoanthropology, bioarcheology, forensics, and medicine. This is particularly important within disciplines such as orthodontics and surgery, when dealing with mandibular defects due to tumors, developmental abnormalities, or trauma. In such cases, proper restorations of both form (for esthetic purposes) and function (restoration of articulation, occlusion, and mastication) are required. Several digital approaches based on three-dimensional (3D) digital modeling, computer-aided design (CAD)/computer-aided manufacturing techniques, and more recently geometric morphometric methods have been used to solve this problem. Nevertheless, comparisons among their outcomes are rarely provided. In this contribution, three methods for hemimandibular body reconstruction have been tested. Two bone defects were virtually simulated in a 3D digital model of a human hemimandible. Accordingly, 3D digital scaffolds were obtained using the mirror copy of the unaffected hemimandible (Method 1), the thin plate spline (TPS) interpolation (Method 2), and the combination between TPS and CAD techniques (Method 3). The mirror copy of the unaffected hemimandible does not provide a suitable solution for bone restoration. The combination between TPS interpolation and CAD techniques (Method 3) produces an almost perfect-fitting 3D digital model that can be used for biocompatible custom-made scaffolds generated by rapid prototyping technologies.
Two-dimensional signal reconstruction: The correlation sampling method
Roman, H. E.
2007-12-15
An accurate approach for reconstructing a time-dependent two-dimensional signal from non-synchronized time series recorded at points located on a grid is discussed. The method, denoted as correlation sampling, improves the standard conditional sampling approach commonly employed in the study of turbulence in magnetoplasma devices. Its implementation is illustrated in the case of an artificial time-dependent signal constructed using a fractal algorithm that simulates a fluctuating surface. A statistical method is also discussed for distinguishing coherent (i.e., collective) from purely random (noisy) behavior for such two-dimensional fluctuating phenomena.
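The core step behind correlation-based sampling of non-synchronized series is aligning each series to a reference via the peak of their cross-correlation. A minimal sketch of that alignment step (a generic illustration, not the paper's full two-dimensional procedure):

```python
import numpy as np

def estimate_lag(ref, sig):
    """Estimate the delay of `sig` relative to `ref` from the peak
    of their (mean-removed) cross-correlation."""
    n = len(ref)
    corr = np.correlate(sig - sig.mean(), ref - ref.mean(), mode="full")
    return corr.argmax() - (n - 1)

rng = np.random.default_rng(4)
t = np.arange(512)
ref = np.sin(2 * np.pi * t / 64) + 0.1 * rng.normal(size=512)
sig = np.roll(ref, 7)   # the same noisy signal, delayed by 7 samples
```

Once each grid-point series is shifted by its estimated lag, samples taken at the "same" event can be averaged coherently instead of being smeared by the asynchrony.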
Skin sparing mastectomy: technique and suggested methods of reconstruction.
Farahat, Ahmed M; Hashim, Tarek; Soliman, Hussein O; Manie, Tamer M; Soliman, Osama M
2014-09-01
To demonstrate the feasibility and accessibility of performing adequate mastectomy to extirpate the breast tissue, along with en-block formal axillary dissection performed from within the same incision. We also compared different methods of immediate breast reconstruction used to fill the skin envelope to achieve the best aesthetic results. 38 patients with breast cancer underwent skin-sparing mastectomy with formal axillary clearance, through a circum-areolar incision. Immediate breast reconstruction was performed using different techniques to fill in the skin envelope. Two reconstruction groups were assigned; group 1: Autologous tissue transfer only (n=24), and group 2: implant augmentation (n=14). The techniques used included filling in the skin envelope using Extended Latissimus Dorsi flap (18 patients) and Pedicled TRAM flap (6 patients). Subpectoral implants (4 patients), a rounded implant placed under the pectoralis major muscle to augment an LD reconstructed breast. LD pocket (10 patients), an anatomical implant placed over the pectoralis major muscle within a pocket created by the LD flap. No contra-lateral procedure was performed in any of the cases to achieve symmetry. All cases underwent adequate excision of the breast tissue along with en-block complete axillary clearance (when indicated), without the need for an additional axillary incision. Eighteen patients underwent reconstruction using extended LD flaps only, six had TRAM flaps, four had augmentation using implants placed below the pectoralis muscle along with LD flaps, and ten had implants placed within the LD pocket. Breast shape, volume and contour were successfully restored in all patients. Adequate degree of ptosis was achieved, to ensure maximal symmetry. Skin Sparing mastectomy through a circum-areolar incision has proven to be a safe and feasible option for the management of breast cancer in Egyptian women, offering them adequate oncologic control and optimum cosmetic outcome through
Reverse engineering and analysis of large genome-scale gene networks.
Aluru, Maneesha; Zola, Jaroslaw; Nettleton, Dan; Aluru, Srinivas
2013-01-07
Reverse engineering the whole-genome networks of complex multicellular organisms continues to remain a challenge. While simpler models easily scale to large numbers of genes and gene expression datasets, more accurate models are compute intensive, limiting their scale of applicability. To enable fast and accurate reconstruction of large networks, we developed Tool for Inferring Network of Genes (TINGe), a parallel mutual information (MI)-based program. The novel features of our approach include: (i) B-spline-based formulation for linear-time computation of MI, (ii) a novel algorithm for direct permutation testing and (iii) development of parallel algorithms to reduce run-time and facilitate construction of large networks. We assess the quality of our method by comparison with ARACNe (Algorithm for the Reconstruction of Accurate Cellular Networks) and GeneNet and demonstrate its unique capability by reverse engineering the whole-genome network of Arabidopsis thaliana from 3137 Affymetrix ATH1 GeneChips in just 9 min on a 1024-core cluster. We further report on the development of a new software Gene Network Analyzer (GeNA) for extracting context-specific subnetworks from a given set of seed genes. Using TINGe and GeNA, we performed analysis of 241 Arabidopsis AraCyc 8.0 pathways, and the results are made available through the web.
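MI-based network inference scores each gene pair by the mutual information between their expression profiles. A plug-in histogram estimator makes the idea concrete; this is a simpler stand-in for the B-spline estimator TINGe actually uses, on synthetic "expression" vectors:

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram plug-in estimate of MI in nats between two
    expression profiles (simpler than a B-spline estimator)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(1)
a = rng.normal(size=2000)
b = a + 0.1 * rng.normal(size=2000)   # strongly co-expressed partner
c = rng.normal(size=2000)             # unrelated gene
```

In a real pipeline, each pairwise MI would then be compared against a permutation-based null (as in TINGe's direct permutation testing) to decide whether to keep the edge.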
Tensor-based dynamic reconstruction method for electrical capacitance tomography
NASA Astrophysics Data System (ADS)
Lei, J.; Mu, H. P.; Liu, Q. B.; Li, Z. H.; Liu, S.; Wang, X. Y.
2017-03-01
Electrical capacitance tomography (ECT) is an attractive visualization measurement method, in which the acquisition of high-quality images is beneficial for the understanding of the underlying physical or chemical mechanisms of the dynamic behaviors of the measurement objects. In real-world measurement environments, imaging objects are often in a dynamic process, and the exploitation of the spatial-temporal correlations related to the dynamic nature will contribute to improving the imaging quality. Different from existing imaging methods that are often used in ECT measurements, in this paper a dynamic image sequence is stacked into a third-order tensor that consists of a low rank tensor and a sparse tensor within the framework of the multiple measurement vectors model and the multi-way data analysis method. The low rank tensor models the similar spatial distribution information among frames, which is slowly changing over time, and the sparse tensor captures the perturbations or differences introduced in each frame, which is rapidly changing over time. With the assistance of Tikhonov regularization theory and the tensor-based multi-way data analysis method, a new cost function, with the considerations of the multi-frames measurement data, the dynamic evolution information of a time-varying imaging object and the characteristics of the low rank tensor and the sparse tensor, is proposed to convert the imaging task in the ECT measurement into a reconstruction problem of a third-order image tensor. An effective algorithm is developed to search for the optimal solution of the proposed cost function, and the images are reconstructed via a batching pattern. The feasibility and effectiveness of the developed reconstruction method are numerically validated.
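The low-rank-plus-sparse split of a frame stack can be illustrated in miniature: a slowly varying background is captured by a truncated SVD, and the residual concentrates the abrupt local perturbation. This is a toy decomposition, not the paper's regularized tensor algorithm:

```python
import numpy as np

# Stack of "frames": a rank-1 slowly varying background plus one
# abrupt local perturbation (the sparse component).
u = np.linspace(1.0, 2.0, 6)        # spatial profile
v = np.linspace(1.0, 2.0, 8)        # temporal evolution
frames = np.outer(u, v)             # low-rank "slow" part
frames[2, 3] += 5.0                 # sparse perturbation in one frame

# Rank-1 truncated SVD: L models the slow background, and the
# residual S concentrates the rapid change.
U, s, Vt = np.linalg.svd(frames, full_matrices=False)
L = s[0] * np.outer(U[:, 0], Vt[0])
S = frames - L
```

The full method regularizes both parts jointly (low-rank and sparsity penalties within a Tikhonov-style cost) rather than splitting them with a single hard SVD truncation as done here.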
An Assessment of Iterative Reconstruction Methods for Sparse Ultrasound Imaging
Valente, Solivan A.; Zibetti, Marcelo V. W.; Pipa, Daniel R.; Maia, Joaquim M.; Schneider, Fabio K.
2017-01-01
Ultrasonic image reconstruction using inverse problems has recently appeared as an alternative to enhance ultrasound imaging over beamforming methods. This approach depends on the accuracy of the acquisition model used to represent transducers, reflectivity, and medium physics. Iterative methods, well known in general sparse signal reconstruction, are also suited for imaging. In this paper, a discrete acquisition model is assessed by solving a linear system of equations by an ℓ1-regularized least-squares minimization, where the solution sparsity may be adjusted as desired. The paper surveys 11 variants of four well-known algorithms for sparse reconstruction, and assesses their optimization parameters with the goal of finding the best approach for iterative ultrasound imaging. The strategy for the model evaluation consists of using two distinct datasets. We first generate data from a synthetic phantom that mimics real targets inside a professional ultrasound phantom device. This dataset is contaminated with Gaussian noise with an estimated SNR, and all methods are assessed by their resulting images and performances. The model and methods are then assessed with real data collected by a research ultrasound platform when scanning the same phantom device, and results are compared with beamforming. A distinct real dataset is finally used to further validate the proposed modeling. Although high computational effort is required by iterative methods, results show that the discrete model may lead to images closer to ground-truth than traditional beamforming. However, computing capabilities of current platforms need to evolve before frame rates currently delivered by ultrasound equipment are achievable. PMID:28282862
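ISTA (iterative soft-thresholding) is one of the classic algorithms in the family such surveys cover for ℓ1-regularized least squares. A minimal sketch on a random sensing matrix standing in for the discrete acquisition model, with a synthetic sparse "reflectivity":

```python
import numpy as np

def ista(A, y, lam, n_iter=3000):
    """ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1:
    a gradient step followed by soft-thresholding."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - step * (A.T @ (A @ x - y))       # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(60, 100))            # stand-in acquisition model
x_true = np.zeros(100)
x_true[[5, 42, 77]] = [1.5, -2.0, 1.0]    # sparse reflectivity
x_hat = ista(A, A @ x_true, lam=0.1)
```

The regularization weight `lam` is the knob by which "the solution sparsity may be adjusted as desired"; accelerated variants (e.g. FISTA) address the computational cost the abstract highlights.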
Park, Gui-Yong; Cho, Hee-Eun; Lee, Byung-Il; Park, Seung-Ha
2016-01-01
Background The objective of this paper was to describe a novel technique for improving the maintenance of nipple projection in primary nipple reconstruction by using acellular dermal matrix as a strut in one of three different configurations, according to the method of prior breast reconstruction. The struts were designed to best fill the different types of dead spaces in nipple reconstruction depending on the breast reconstruction method. Methods A total of 50 primary nipple reconstructions were performed between May 2012 and May 2015. The prior breast reconstruction methods were latissimus dorsi (LD) flap (28 cases), transverse rectus abdominis myocutaneous (TRAM) flap (10 cases), or tissue expander/implant (12 cases). The nipple reconstruction technique involved the use of local flaps, including the C-V flap or star flap. A 1×2-cm acellular dermal matrix was placed into the core with O-, I-, and L-shaped struts for prior LD, TRAM, and expander/implant methods, respectively. The projection of the reconstructed nipple was measured at the time of surgery and at 3, 6, and 9 months postoperatively. Results The nine-month average maintenance of nipple projection was 73.0%±9.67% for the LD flap group using an O-strut, 72.0%±11.53% for the TRAM flap group using an I-strut, and 69.0%±10.82% for the tissue expander/implant group using an L-strut. There were no cases of infection, wound dehiscence, or flap necrosis. Conclusions The application of an acellular dermal matrix with a different kind of strut for each of 3 breast reconstruction methods is an effective addition to current techniques for improving the maintenance of long-term projection in primary nipple reconstruction. PMID:27689049
Reconstruction and analysis of hybrid composite shells using meshless methods
NASA Astrophysics Data System (ADS)
Bernardo, G. M. S.; Loja, M. A. R.
2017-06-01
The importance of focusing on the research of viable models to predict the behaviour of structures which may possess in some cases complex geometries is an issue that is growing in different scientific areas, ranging from civil and mechanical engineering to the architecture or biomedical devices fields. In these cases, the research effort to find an efficient approach to fit laser scanning point clouds to the desired surface has been increasing, leading to the possibility of modelling as-built/as-is structures and components' features. However, combining the task of surface reconstruction and the implementation of a structural analysis model is not a trivial task. Although there are works focusing on those different phases separately, there is still an effective need to find approaches able to interconnect them in an efficient way. Therefore, achieving a representative geometric model able to be subsequently submitted to a structural analysis in a similarly based platform is a fundamental step to establish an effective, expeditious processing workflow. With the present work, one presents an integrated methodology based on the use of meshless approaches, to reconstruct shells described by point clouds, and to subsequently predict their static behaviour. These methods are highly appropriate for dealing with unstructured point clouds, as they do not need to have any specific spatial or geometric requirement when implemented, depending only on the distance between the points. Details on the formulation, and a set of illustrative examples focusing on the reconstruction of cylindrical and double-curvature shells, and their further analysis, are presented.
Asymptotic approximation method of force reconstruction: Proof of concept
NASA Astrophysics Data System (ADS)
Sanchez, J.; Benaroya, H.
2017-08-01
An important problem in engineering is the determination of the system input based on the system response. This type of problem is difficult to solve as it is often ill-defined, and produces inaccurate or non-unique results. Current reconstruction techniques typically involve the employment of optimization methods or additional constraints to regularize the problem, but these methods are not without their flaws as they may be sub-optimally applied and produce inadequate results. An alternative approach is developed that draws upon concepts from control systems theory, the equilibrium analysis of linear dynamical systems with time-dependent inputs, and asymptotic approximation analysis. This paper presents the theoretical development of the proposed method. A simple application of the method is presented to demonstrate the procedure. A more complex application to a continuous system is performed to demonstrate the applicability of the method.
An improved image reconstruction method for optical intensity correlation imaging
NASA Astrophysics Data System (ADS)
Gao, Xin; Feng, Lingjie; Li, Xiyu
2016-12-01
The intensity correlation imaging method is a novel kind of interference imaging with favorable prospects in deep-space object recognition. However, restricted by the low detection signal-to-noise ratio (SNR), it is usually very difficult to obtain high-quality images of deep-space objects such as high-Earth-orbit (HEO) satellites with existing phase retrieval methods. In this paper, based on an a priori statistical intensity distribution model of the object and the characteristics of the measurement noise distribution, an improved method of Prior Information Optimization (PIO) is proposed to reduce the number of ambiguous images and accelerate the phase retrieval procedure, thus realizing fine image reconstruction. As the simulations and experiments show, compared to previous methods, our method can acquire higher-resolution images with less error in low-SNR conditions.
Detection of driver pathways using mutated gene network in cancer.
Li, Feng; Gao, Lin; Ma, Xiaoke; Yang, Xiaofei
2016-06-21
Distinguishing driver pathways has been extensively studied because they are critical for understanding the development and molecular mechanisms of cancers. Most existing methods for detecting driver pathways are based on high coverage as well as high mutual exclusivity, with the underlying assumption that mutations are exclusive. However, in many cases, mutated driver genes in the same pathways are not strictly mutually exclusive. Based on this observation, we propose an index for quantifying mutual exclusivity between gene pairs. Then, we construct a mutated gene network for detecting driver pathways by integrating the proposed index and coverage. The detection of driver pathways on the mutated gene network consists of two steps: raw pathways are obtained using a clique percolation method (CPM), and the final driver pathways are selected using a strict testing strategy. We apply this method to glioblastoma and breast cancers and find that our method is more accurate than state-of-the-art methods in terms of enrichment of KEGG pathways. Furthermore, the detected driver pathways intersect with well-known pathways with moderate exclusivity, which cannot be discovered using the existing algorithms. In conclusion, the proposed method provides an effective way to investigate driver pathways in cancers.
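The abstract above proposes its own exclusivity index without giving its formula; as a rough illustration of the underlying idea, a common pairwise mutual-exclusivity score (not necessarily the authors' index) can be computed from binary mutation profiles:

```python
import numpy as np

def pairwise_exclusivity(mut_a, mut_b):
    """Pairwise mutual-exclusivity score for two genes.

    mut_a, mut_b: boolean vectors, True where the gene is mutated in
    that sample.  Score = samples mutated in exactly one of the two
    genes divided by samples mutated in at least one (1.0 = fully
    exclusive; lower values indicate more co-occurrence).
    """
    a = np.asarray(mut_a, dtype=bool)
    b = np.asarray(mut_b, dtype=bool)
    covered = np.logical_or(a, b).sum()
    if covered == 0:
        return 0.0
    exclusive = np.logical_xor(a, b).sum()
    return exclusive / covered

# Two genes mutated in disjoint sample sets are fully exclusive:
g1 = [1, 1, 0, 0, 0, 0]
g2 = [0, 0, 1, 1, 0, 0]
print(pairwise_exclusivity(g1, g2))  # 1.0
```

A weighted graph built from such pairwise scores, with coverage folded in, is the kind of mutated gene network on which community detection can then be run.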
A Robust Shape Reconstruction Method for Facial Feature Point Detection
Huang, Zhiqi
2017-01-01
Facial feature point detection has seen great research advances in recent years. Numerous methods have been developed and applied in practical face analysis systems. However, it is still a quite challenging task because of the large variability in expressions and gestures and the existence of occlusions in real-world photographs. In this paper, we present a robust sparse reconstruction method for face alignment problems. Instead of a direct regression between the feature space and the shape space, the concept of shape increment reconstruction is introduced. Moreover, a set of coupled overcomplete dictionaries, termed the shape increment dictionary and the local appearance dictionary, are learned in a regressive manner to select robust features and fit shape increments. Additionally, to make the learned model more generalized, we select the best-matched parameter set through extensive validation tests. Experimental results on three public datasets demonstrate that the proposed method achieves better robustness than state-of-the-art methods. PMID:28316615
The impact of HGT on phylogenomic reconstruction methods.
Lapierre, Pascal; Lasek-Nesselquist, Erica; Gogarten, Johann Peter
2014-01-01
Supermatrix and supertree analyses are frequently used to more accurately recover vertical evolutionary history but debate still exists over which method provides greater reliability. Traditional methods that resolve relationships among organisms from single genes are often unreliable because of the frequent lack of strong phylogenetic signal and the presence of systematic artifacts. Methods developed to reconstruct organismal history from multiple genes can be divided into supermatrix and supertree approaches. A supermatrix analysis consists of the concatenation of multiple genes into a single, possibly partitioned alignment, from which phylogenies are reconstructed using a variety of approaches. Supertrees build consensus trees from the topological information contained within individual gene trees. Both methods are now widely used and have been demonstrated to solve previously ambiguous or unresolved phylogenies with high statistical support. However, the amount of misleading signal needed to induce erroneous phylogenies for both strategies is still unknown. Using genome simulations, we test the accuracy of supertree and supermatrix approaches in recovering the true organismal phylogeny under increased amounts of horizontally transferred genes and changes in substitution rates. Our results show that overall, supermatrix approaches are preferable when a low amount of gene transfer is suspected to be present in the dataset, while supertrees have greater reliability in the presence of a moderate amount of misleading gene transfers. In the face of very high or very low substitution rates without horizontal gene transfers, supermatrix approaches outperform supertrees as individual gene trees remain unresolved and additional sequences contribute to a congruent phylogenetic signal.
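The supermatrix construction described above amounts to concatenating per-gene alignments into one long alignment, padding taxa that lack a given gene; a minimal sketch with hypothetical data (the gap-padding convention for missing genes is assumed, not taken from the source):

```python
def build_supermatrix(gene_alignments):
    """Concatenate per-gene alignments into one supermatrix.

    gene_alignments: list of dicts mapping taxon -> aligned sequence.
    Taxa missing from a gene are padded with '-' (gaps), as is usual
    when a gene is not sampled for every organism.
    """
    taxa = sorted({t for aln in gene_alignments for t in aln})
    supermatrix = {t: [] for t in taxa}
    for aln in gene_alignments:
        length = len(next(iter(aln.values())))  # columns in this gene
        for t in taxa:
            supermatrix[t].append(aln.get(t, '-' * length))
    return {t: ''.join(parts) for t, parts in supermatrix.items()}

genes = [
    {'A': 'ATG', 'B': 'ATA', 'C': 'ATT'},
    {'A': 'CCGT', 'B': 'CCGA'},          # gene absent in taxon C
]
sm = build_supermatrix(genes)
print(sm['C'])  # ATT----
```

A supertree approach would instead build a tree per gene and combine the topologies, which is why the two strategies respond differently to horizontally transferred genes.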
Reconstruction of Gene Networks of Iron Response in Shewanella oneidensis
Yang, Yunfeng; Harris, Daniel P; Luo, Feng; Joachimiak, Marcin; Wu, Liyou; Dehal, Paramvir; Jacobsen, Janet; Yang, Zamin Koo; Gao, Haichun; Arkin, Adam; Palumbo, Anthony Vito; Zhou, Jizhong
2009-01-01
It is of great interest to study the iron response of the γ-proteobacterium Shewanella oneidensis, since it possesses a high content of iron and is capable of utilizing iron for anaerobic respiration. We report here that the iron response in S. oneidensis is a rapid process. To gain more insight into the bacterial response to iron, temporal gene expression profiles were examined for iron depletion and repletion, resulting in the identification of iron-responsive biological pathways in a gene co-expression network. Iron acquisition systems, including genes unique to S. oneidensis, were rapidly and strongly induced by iron depletion and repressed by iron repletion. Some were required for growth under iron depletion, as exemplified by the mutational analysis of the putative siderophore biosynthesis protein SO3032. Unexpectedly, a number of genes related to anaerobic energy metabolism were repressed by iron depletion and induced by repletion, which might be due to the iron storage potential of their protein products. Other iron-responsive biological pathways include protein degradation, aerobic energy metabolism and protein synthesis. Furthermore, sequence motifs enriched in gene clusters, as well as their corresponding DNA-binding proteins (Fur, CRP and RpoH), were identified, resulting in a regulatory network of the iron response in S. oneidensis. Together, this work provides an overview of the iron response and reveals novel features of S. oneidensis, including Shewanella-specific iron acquisition systems, and suggests an intimate relationship between anaerobic energy metabolism and the iron response.
Image reconstruction by the speckle-masking method.
Weigelt, G; Wirnitzer, B
1983-07-01
Speckle masking is a method for reconstructing high-resolution images of general astronomical objects from stellar speckle interferograms. In speckle masking no unresolvable star is required within the isoplanatic patch of the object. We present digital applications of speckle masking to close spectroscopic double stars. The speckle interferograms were recorded with the European Southern Observatory's 3.6-m telescope. Diffraction-limited resolution (0.03 arcsec) was achieved, which is about 30 times higher than the resolution of conventional astrophotography.
Accelerated signal encoding and reconstruction using pixon method
Puetter, Richard; Yahil, Amos; Pina, Robert
2005-05-17
The method identifies a Pixon element, which is a fundamental and indivisible unit of information, and a Pixon basis, which is the set of possible functions from which the Pixon elements are selected. The actual Pixon elements selected from this basis during the reconstruction process represent the smallest number of such units required to fit the data and the minimum number of parameters necessary to specify the image. The Pixon kernels can have arbitrary properties (e.g., shape, size, and/or position) as needed to best fit the data.
Accelerated signal encoding and reconstruction using pixon method
Puetter, Richard; Yahil, Amos
2002-01-01
The method identifies a Pixon element, which is a fundamental and indivisible unit of information, and a Pixon basis, which is the set of possible functions from which the Pixon elements are selected. The actual Pixon elements selected from this basis during the reconstruction process represent the smallest number of such units required to fit the data and the minimum number of parameters necessary to specify the image. The Pixon kernels can have arbitrary properties (e.g., shape, size, and/or position) as needed to best fit the data.
Algebraic filter approach for fast approximation of nonlinear tomographic reconstruction methods
NASA Astrophysics Data System (ADS)
Plantagie, Linda; Batenburg, Kees Joost
2015-01-01
We present a computational approach for fast approximation of nonlinear tomographic reconstruction methods by filtered backprojection (FBP) methods. Algebraic reconstruction algorithms are the methods of choice in a wide range of tomographic applications, yet they require significant computation time, restricting their usefulness. We build upon recent work on the approximation of linear algebraic reconstruction methods and extend the approach to the approximation of nonlinear reconstruction methods which are common in practice. We demonstrate that if a blueprint image is available that is sufficiently similar to the scanned object, our approach can compute reconstructions that approximate iterative nonlinear methods, yet have the same speed as FBP.
A New Method for Coronal Magnetic Field Reconstruction
NASA Astrophysics Data System (ADS)
Yi, Sibaek; Choe, Gwangson; Lim, Daye
2015-08-01
We present a new, simple, variational method for the reconstruction of coronal force-free magnetic fields based on vector magnetogram data. Our method employs vector potentials for the magnetic field description in order to ensure the divergence-free condition. As boundary conditions, it only requires the normal components of the magnetic field and current density, so that the boundary conditions are not over-specified as in many other methods. The boundary normal current distribution is fixed once and for all at initialization and does not need continual adjustment as in stress-and-relax type methods. We have tested the computational code based on our new method on problems with known solutions and on those with actual photospheric data. When solutions are fully given at all boundaries, the accuracy of our method is almost comparable to the best-performing methods available. When magnetic field data are given only at the photospheric boundary, our method outperforms other methods in most "figures of merit" devised by Schrijver et al. (2006). Furthermore, the residual force in the solution is at least an order of magnitude smaller than that of any other method. It can also accommodate the source-surface boundary condition at the top boundary. Our method is expected to contribute to the real-time monitoring of the Sun required for future space weather forecasts.
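The vector-potential trick mentioned above guarantees the divergence-free condition because div curl A = 0 identically, and the identity survives discretization whenever the difference operators along different axes commute. A small numpy check illustrates this on an arbitrary random potential (illustrative data only, not solar magnetogram data or the authors' code):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 24, 24, 24))  # arbitrary vector potential (Ax, Ay, Az)

def curl(F, h=1.0):
    """B = curl F on a uniform grid, axes ordered (x, y, z)."""
    Fx, Fy, Fz = F
    return np.stack([
        np.gradient(Fz, h, axis=1) - np.gradient(Fy, h, axis=2),  # Bx
        np.gradient(Fx, h, axis=2) - np.gradient(Fz, h, axis=0),  # By
        np.gradient(Fy, h, axis=0) - np.gradient(Fx, h, axis=1),  # Bz
    ])

def divergence(F, h=1.0):
    return (np.gradient(F[0], h, axis=0)
            + np.gradient(F[1], h, axis=1)
            + np.gradient(F[2], h, axis=2))

B = curl(A)
print(np.abs(divergence(B)).max())  # effectively zero (rounding error only)
```

Because the finite-difference operators along distinct axes commute exactly, the discrete divergence of the discrete curl cancels term by term, which is why no separate div B = 0 cleanup step is needed.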
Local motion-compensated method for high-quality 3D coronary artery reconstruction.
Liu, Bo; Bai, Xiangzhi; Zhou, Fugen
2016-12-01
The 3D reconstruction of coronary arteries from X-ray angiograms rotationally acquired on a C-arm has great clinical value. While cardiac-gated reconstruction has shown promising results, it suffers from the problem of residual motion. This work proposed a new local motion-compensated reconstruction method to handle this issue. An initial image was first reconstructed using a regularized iterative reconstruction method. Then a 3D/2D registration method was proposed to estimate the residual vessel motion. Finally, the residual motion was compensated in the final reconstruction using the extended iterative reconstruction method. Through quantitative evaluation, it was found that a high-quality 3D reconstruction could be obtained, and the result was comparable to that of the state-of-the-art method.
Post-refinement multiscale method for pin power reconstruction
Collins, B.; Seker, V.; Downar, T.; Xu, Y.
2012-07-01
The ability to accurately predict local pin powers in nuclear reactors is necessary to understand the mechanisms that cause fuel pin failure during steady-state and transient operation. In the research presented here, methods are developed to improve the local solution using high-order methods with boundary conditions from a low-order global solution. Several different core configurations were tested to determine the improvement in the local pin powers compared to the standard techniques based on diffusion theory and pin power reconstruction (PPR). The post-refinement multiscale methods use the global solution to determine boundary conditions for the local solution. The local problem is solved using either a fixed boundary source or an albedo boundary condition; this solution is 'post-refinement' and thus has no impact on the global solution. (authors)
Posttraumatic Reconstruction of the Ankle Using the Ilizarov Method
2005-01-01
Reconstruction of the ankle after trauma requires a variety of treatment strategies. Once the personality of the problem is appreciated, a tailored approach may be implemented. The Ilizarov method provides a versatile, powerful, and safe approach. It is particularly useful in the setting of infection, bone loss, poor soft tissue envelope, leg length discrepancy, bony deformity, and joint contracture. In this article, a variety of posttraumatic ankle pathologies are discussed. Treatment methods including osteotomy, arthrodesis, distraction, correction of contracture, nonunion repair, and tibia and fibula lengthening are reviewed. The use of the Ilizarov method for acute and/or gradual correction as well as the application of simultaneous treatments at multiple levels is discussed in this article. PMID:18751813
Elastography Method for Reconstruction of Nonlinear Breast Tissue Properties
Wang, Z. G.; Liu, Y.; Wang, G.; Sun, L. Z.
2009-01-01
Elastography has been developed as a quantitative approach to imaging the linear elastic properties of tissues to detect suspicious tumors. In this paper a nonlinear elastography method is introduced for the reconstruction of complex breast tissue properties. The elastic parameters are estimated by optimally minimizing the difference between the computed forces and experimental measurements. A nonlinear adjoint method is derived to calculate the gradient of the objective function, which significantly enhances the numerical efficiency and stability. Simulations are conducted on a three-dimensional heterogeneous breast phantom extracted from real imaging data, including fatty tissue, glandular tissue, and tumors. An exponential form of nonlinear material model is applied. The effect of noise is taken into account. Results demonstrate that the proposed nonlinear method opens the door toward nonlinear elastography and provides guidelines for future development and clinical application in breast cancer studies. PMID:19636362
Comparison of pulse phase and thermographic signal reconstruction processing methods
NASA Astrophysics Data System (ADS)
Oswald-Tranta, Beata; Shepard, Steven M.
2013-05-01
Active thermography data for nondestructive testing has traditionally been evaluated by either visual or numerical identification of anomalous surface temperature contrast in the IR image sequence obtained as the target sample cools in response to thermal stimulation. However, in recent years, it has been demonstrated that considerably more information about the subsurface condition of a sample can be obtained by evaluating the time history of each pixel independently. In this paper, we evaluate the capabilities of two such analysis techniques, Pulse Phase Thermography (PPT) and Thermographic Signal Reconstruction (TSR), using induction and optical flash excitation. Data sequences from optical pulse and scanned induction heating are analyzed with both methods. Results are evaluated in terms of signal-to-background ratio for a given subsurface feature. In addition to the experimental data, we present finite element simulation models with varying flaw diameter and depth, and discuss size measurement accuracy and the effect of noise on detection limits and sensitivity for both methods.
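Both techniques compared above operate on each pixel's cooling curve independently: PPT takes the phase of the Fourier transform of the time history, while TSR fits a low-order polynomial to the log-log cooling curve. A minimal sketch of the two core transforms on synthetic one-pixel curves (the simplified 1/sqrt(t) decay model and the defect deviation term are illustrative assumptions, not the paper's simulation setup):

```python
import numpy as np

# Synthetic cooling curves: the surface above a defect deviates from
# the ideal 1/sqrt(t) cooling of sound material (hypothetical model).
t = np.linspace(0.05, 5.0, 200)                      # frame times, s
sound = 1.0 / np.sqrt(t)
defect = 1.0 / np.sqrt(t) + 0.15 * np.exp(-1.0 / t)  # assumed deviation

def ppt_phase(signal, bin_=1):
    """PPT: phase of the pixel's FFT at a chosen frequency bin."""
    return np.angle(np.fft.rfft(signal))[bin_]

def tsr_coeffs(signal, t, degree=5):
    """TSR: polynomial fit of the log-log cooling curve; the fit
    coefficients (and their derivatives) are the evaluated features."""
    return np.polyfit(np.log(t), np.log(signal), degree)

print(abs(ppt_phase(sound) - ppt_phase(defect)))  # nonzero phase contrast
```

In a full implementation either transform is applied pixelwise across the IR image sequence, producing a phase image (PPT) or coefficient/derivative images (TSR) in which subsurface flaws stand out.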
Structural influence of gene networks on their inference: analysis of C3NET
2011-01-01
Background The availability of large-scale high-throughput data poses considerable challenges for functional analysis. For this reason gene network inference methods have gained considerable interest. However, our current knowledge, especially about the influence of the structure of a gene network on its inference, is limited. Results In this paper we present a comprehensive investigation of the structural influence of gene networks on the inferential characteristics of C3NET - a recently introduced gene network inference algorithm. We employ local as well as global performance metrics in combination with an ensemble approach. The results from our numerical study for various biological and synthetic network structures and simulation conditions, also comparing C3NET with other inference algorithms, lead to a multitude of theoretical and practical insights into the working behavior of C3NET. In addition, in order to facilitate the practical usage of C3NET we provide a user-friendly R package, called c3net, and describe its functionality. It is available from https://r-forge.r-project.org/projects/c3net and from the CRAN package repository. Conclusions The availability of gene network inference algorithms with known inferential properties opens a new era of large-scale screening experiments that could be equally beneficial for basic biological and biomedical research with auspicious prospects. The availability of our easy-to-use software package c3net may contribute to the popularization of such methods. Reviewers This article was reviewed by Lev Klebanov, Joel Bader and Yuriy Gusev. PMID:21696592
Comparison of image reconstruction methods for structured illumination microscopy
NASA Astrophysics Data System (ADS)
Lukeš, Tomas; Hagen, Guy M.; Křížek, Pavel; Švindrych, Zdeněk.; Fliegel, Karel; Klíma, Miloš
2014-05-01
Structured illumination microscopy (SIM) is a recent microscopy technique that enables one to go beyond the diffraction limit using patterned illumination. The high-frequency information is encoded through aliasing into the observed image. By acquiring multiple images with different illumination patterns, the aliased components can be separated and a high-resolution image reconstructed. Here we investigate image processing methods that perform the task of high-resolution image reconstruction, namely square-law detection, scaled subtraction, super-resolution SIM (SR-SIM), and Bayesian estimation. The optical sectioning and lateral resolution improvement abilities of these algorithms were tested under various noise level conditions on simulated data and on fluorescence microscopy images of a pollen grain test sample and of a cultured cell stained for the actin cytoskeleton. In order to compare the performance of the algorithms, the following objective criteria were evaluated: signal-to-noise ratio (SNR), signal-to-background ratio (SBR), the circular average of the power spectral density, and the S3 sharpness index. The results show that SR-SIM and Bayesian estimation combine illumination-patterned images more effectively and provide better lateral resolution in exchange for more complex image processing. SR-SIM requires one to precisely shift the separated spectral components to their proper positions in reciprocal space. High noise levels in the raw data can cause inaccuracies in the shifts of the spectral components, which degrade the super-resolved image. Bayesian estimation has proven to be more robust to changes in noise level and illumination pattern frequency.
Method for self reconstruction of holograms for secure communication
NASA Astrophysics Data System (ADS)
Babcock, Craig; Donkor, Eric
2017-05-01
We present the theory and experimental results behind using a 3D holographic signal for secure communications. A hologram of a complex 3D object is recorded to be used as a hard key for data encryption and decryption. The hologram is cut in half to be used at each end of the system. One piece is used for data encryption, while the other is used for data decryption. The first piece of hologram is modulated with the data to be encrypted. The hologram has an extremely complex phase distribution which encodes the data signal incident on the first piece of hologram. In order to extract the data from the modulated holographic carrier, the signal must be passed through the second hologram, removing the complex phase contributions of the first hologram. The signal beam from the first piece of hologram is used to illuminate the second piece of the same hologram, creating a self-reconstructing system. The 3D hologram's interference pattern is highly specific to the 3D object and conditions during the holographic writing process. With a sufficiently complex 3D object used to generate the holographic hard key, the data will be nearly impossible to recover without using the second piece of the same hologram. This method of producing a self-reconstructing hologram ensures that the pieces in use are from the same original hologram, providing a system hard key, making it an extremely difficult system to counterfeit.
Features of the method of large-scale paleolandscape reconstructions
NASA Astrophysics Data System (ADS)
Nizovtsev, Vyacheslav; Erman, Natalia; Graves, Irina
2017-04-01
The method of paleolandscape reconstruction was tested in the key area of the basin of the Central Dubna, located at the junction of the Taldom and Sergiev Posad districts of the Moscow region. A series of maps was created showing paleoreconstructions of the original (indigenous) living environment of the initial settlers during the main time periods of the Holocene and the features of human interaction with landscapes in the early stages of economic development of the territory (in the early and middle Holocene). The sequence of these works is as follows. 1. Comprehensive analysis of topographic maps of different scales, aerial and satellite images, archival materials of geological and hydrological surveys and prospecting of peat deposits, archaeological evidence on ancient settlements, palynological and osteological analyses, and complex landscape and archaeological studies. 2. Mapping of factual material and analysis of the spatial distribution of archaeological sites. 3. Large-scale field landscape mapping (sample areas) and compilation of maps of the modern landscape structure; on this basis, the edaphic properties of the main types of natural boundaries were analyzed and their resource base was determined. 4. Reconstruction of the lake-river system during the main periods of the Holocene; the boundaries of the restored paleolakes were determined based on the thickness and spatial extent of decay ooze. 5. On the basis of the landscape-edaphic method, paleolandscape reconstructions for the main periods of the Holocene were performed; in reconstructing the original, indigenous flora we relied on data from palynological studies conducted in the studied area or in similar landscape conditions. 6. The result was a retrospective analysis and periodization of the settlement process, economic development, and the formation of the first anthropogenically transformed landscape complexes. The reconstruction of the dynamics of the
Ethanol Modulation of Gene Networks: Implications for Alcoholism
Farris, Sean P.; Miles, Michael F.
2011-01-01
Alcoholism is a complex disease caused by a confluence of environmental and genetic factors influencing multiple brain pathways to produce a variety of behavioral sequelae, including addiction. Genetic factors contribute to over 50% of the risk for alcoholism and recent evidence points to a large number of genes with small effect sizes as the likely molecular basis for this disease. Recent progress in genomics (microarrays or RNA-Seq) and genetics has led to the identification of a large number of potential candidate genes influencing ethanol behaviors or alcoholism itself. To organize this complex information, investigators have begun to focus on the contribution of gene networks, rather than individual genes, for various ethanol-induced behaviors in animal models or behavioral endophenotypes comprising alcoholism. This chapter reviews some of the methods used for constructing gene networks from genomic data and some of the recent progress made in applying such approaches to the study of the neurobiology of ethanol. We show that rapid technology development in gathering genomic data, together with sophisticated experimental design and a growing collection of sophisticated tools are producing novel insights for understanding the molecular basis of alcoholism and that such approaches promise new opportunities for therapeutic development. PMID:21536129
A CLASS OF RECONSTRUCTED DISCONTINUOUS GALERKIN METHODS IN COMPUTATIONAL FLUID DYNAMICS
Hong Luo; Yidong Xia; Robert Nourgaliev
2011-05-01
A class of reconstructed discontinuous Galerkin (DG) methods is presented to solve compressible flow problems on arbitrary grids. The idea is to combine the efficiency of the reconstruction methods in finite volume methods and the accuracy of the DG methods to obtain a better numerical algorithm in computational fluid dynamics. The beauty of the resulting reconstructed discontinuous Galerkin (RDG) methods is that they provide a unified formulation for both finite volume and DG methods, contain both classical finite volume and standard DG methods as two special cases, and thus allow for a direct efficiency comparison. Both Green-Gauss and least-squares reconstruction methods and a least-squares recovery method are presented to obtain a quadratic polynomial representation of the underlying linear discontinuous Galerkin solution on each cell via a so-called in-cell reconstruction process. The devised in-cell reconstruction aims to augment the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. These three reconstructed discontinuous Galerkin methods are used to compute a variety of compressible flow problems on arbitrary meshes to assess their accuracy. The numerical experiments demonstrate that all three reconstructed discontinuous Galerkin methods can significantly improve the accuracy of the underlying second-order DG method, with the least-squares reconstructed DG method providing the best performance in terms of accuracy, efficiency, and robustness.
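The least-squares reconstruction named above builds a higher-order polynomial from data in neighbouring cells. Its basic building block, the finite-volume-style least-squares gradient fitted from cell-centroid differences, can be sketched as follows (hypothetical 2-D cell data; the RDG formulation generalizes this idea to raise a linear DG solution to quadratic order):

```python
import numpy as np

def ls_gradient(xc, uc, xn, un):
    """Least-squares reconstruction of the solution gradient in a cell.

    xc, uc: centroid coordinates and cell-average value of the target cell
    xn, un: centroid coordinates (rows) and averages of its neighbours
    Solves min ||dx @ grad - du||^2 over the neighbour stencil.
    """
    dx = np.asarray(xn) - np.asarray(xc)
    du = np.asarray(un) - uc
    grad, *_ = np.linalg.lstsq(dx, du, rcond=None)
    return grad

# For data sampled from a linear field u = 2x + 3y the reconstruction
# recovers the exact gradient:
xc, uc = np.array([0.0, 0.0]), 0.0
xn = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
un = np.array([2.0, 3.0, 5.0])
print(ls_gradient(xc, uc, xn, un))  # [2. 3.]
```

With more neighbours than unknowns the system is overdetermined, and the least-squares fit averages out the stencil's discretization noise, which is part of why such reconstructions are robust on arbitrary meshes.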
NASA Astrophysics Data System (ADS)
Hu, Hui
This dissertation is principally concerned with improving the performance of a prototype image-intensifier-based cone-beam volume computed tomography system by removing or partially removing two of its restricting factors, namely, the inaccuracy of the current cone-beam reconstruction algorithm and the image distortion associated with the curved detecting surface of the image intensifier. To improve the accuracy of cone-beam reconstruction, first, the currently most accurate and computationally efficient cone-beam reconstruction method, the Feldkamp algorithm, is investigated by studying the relation between an original unknown function and its Feldkamp estimate. From this study, partial knowledge of the unknown function can be derived in the Fourier domain from its Feldkamp estimate. Then, based on the Gerchberg-Papoulis algorithm, a modified iterative algorithm efficiently incorporating the Fourier knowledge as well as a priori spatial knowledge of the unknown function is devised and tested to improve the cone-beam reconstruction accuracy by postprocessing the Feldkamp estimate. Two methods are developed to remove the distortion associated with the curved surface of the image intensifier. A calibration method based on rubber-sheet remapping is designed and implemented. As an alternative, the curvature can be accounted for in the reconstruction algorithm. As an initial effort along this direction, a generalized convolution-backprojection reconstruction algorithm for fan-beam and any circular detector arrays is derived and studied.
Yeast ancestral genome reconstructions: the possibilities of computational methods II.
Chauve, Cedric; Gavranovic, Haris; Ouangraoua, Aida; Tannier, Eric
2010-09-01
Since the availability of assembled eukaryotic genomes, the first one being a budding yeast, many computational methods for the reconstruction of ancestral karyotypes and gene orders have been developed. The difficulty has always been to assess their reliability, since we often lack good knowledge of the true ancestral genomes to compare their results to, as well as good knowledge of the evolutionary mechanisms needed to test them on realistic simulated data. In this study, we propose some measures of reliability of several kinds of methods, and apply them to infer and analyse the architectures of two ancestral yeast genomes, based on the sequence of seven assembled extant ones. The pre-duplication common ancestor of S. cerevisiae and C. glabrata has been inferred manually by Gordon et al. (PLoS Genet. 2009). We show why, in this case, a good convergence of the methods is explained by some properties of the data, and why results are reliable. In another study, Jean et al. (J. Comput. Biol. 2009) proposed an ancestral architecture of the last common ancestor of S. kluyveri, K. thermotolerans, K. lactis, A. gossypii, and Z. rouxii inferred by a computational method. In this case, we show that the dataset does not seem to contain enough information to infer a reliable architecture, and we construct a higher resolution dataset which gives a good reliability on a new ancestral configuration.
Hoy, Christopher L; Durr, Nicholas J; Ben-Yakar, Adela
2011-06-01
We present a fast-updating Lissajous image reconstruction methodology that uses an increased image frame rate beyond the pattern repeat rate generally used in conventional Lissajous image reconstruction methods. The fast display rate provides increased dynamic information and reduced motion blur, as compared to conventional Lissajous reconstruction, at the cost of single-frame pixel density. Importantly, this method does not discard any information from the conventional Lissajous image reconstruction, and frames from the complete Lissajous pattern can be displayed simultaneously. We present the theoretical background for this image reconstruction methodology along with images and video taken using the algorithm in a custom-built miniaturized multiphoton microscopy system.
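The core of any Lissajous reconstruction is mapping time-ordered intensity samples onto pixels along the scan trajectory; the fast-update idea then amounts to displaying images built from short windows of the sample stream rather than waiting for the full pattern repeat. The sketch below is a generic sample-binning illustration under assumed scan frequencies, not the authors' specific algorithm.

```python
import numpy as np

def lissajous_image(samples, fx, fy, dt, npix):
    """Bin point-scanned intensity samples into an image along a Lissajous path.

    samples : 1D array of detector samples, taken every dt seconds
    fx, fy  : scan frequencies of the two axes (a frequency ratio close to 1
              traces a dense Lissajous figure)
    npix    : image side length in pixels
    """
    t = np.arange(len(samples)) * dt
    x = 0.5 * (1.0 + np.sin(2 * np.pi * fx * t))   # normalise scan to [0, 1]
    y = 0.5 * (1.0 + np.sin(2 * np.pi * fy * t))
    ix = np.minimum((x * npix).astype(int), npix - 1)
    iy = np.minimum((y * npix).astype(int), npix - 1)
    img = np.zeros((npix, npix))
    cnt = np.zeros((npix, npix))
    np.add.at(img, (iy, ix), samples)   # accumulate samples per pixel
    np.add.at(cnt, (iy, ix), 1)         # and the visit counts
    img = np.divide(img, cnt, out=np.zeros_like(img), where=cnt > 0)
    return img, cnt
```

A fast-updating display would call this on successive short slices of `samples`, trading single-frame pixel density for frame rate, exactly the trade-off the abstract describes.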
Lee, Heui Chang; Song, Bongyong; Kim, Jin Sung; Jung, James J.; Li, H. Harold; Mutic, Sasa; Park, Justin C.
2017-01-01
Compared to analytical reconstruction by Feldkamp-Davis-Kress (FDK), the simultaneous algebraic reconstruction technique (SART) offers a higher degree of flexibility in input measurements and often produces superior quality images. Due to the iterative nature of the algorithm, however, SART requires intensive computation, which has prevented its use in clinical practice. In this paper, we developed a fast-converging SART-type algorithm and showed its clinical feasibility in CBCT reconstructions. Inspired by the quasi-orthogonal nature of the x-ray projections in CBCT, we implement a simple yet much faster algorithm by computing the Barzilai-Borwein step size at each iteration. We applied this variable step-size (VS)-SART algorithm to numerical and physical phantoms as well as cancer patients for reconstruction. By connecting the SART algebraic problem to the statistical weighted least squares problem, we enhanced the reconstruction speed significantly (i.e., fewer iterations were needed). We further accelerated the reconstruction by using the parallel computing power of a GPU. PMID:28476047
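The Barzilai-Borwein step-size idea can be shown on a plain least-squares problem. This is a sketch of the BB1 step only, not the paper's weighted-least-squares SART formulation; matrix sizes and the stopping tolerance are illustrative.

```python
import numpy as np

def bb_least_squares(A, b, n_iter=200):
    """Gradient iteration on 0.5 * ||Ax - b||^2 with Barzilai-Borwein (BB1) steps."""
    x = np.zeros(A.shape[1])
    g = A.T @ (A @ x - b)                      # gradient of the least-squares cost
    alpha = 1.0 / np.linalg.norm(A, 2) ** 2    # conservative first step
    for _ in range(n_iter):
        x_new = x - alpha * g
        g_new = A.T @ (A @ x_new - b)
        if np.linalg.norm(g_new) < 1e-12:      # already converged
            return x_new
        s, y = x_new - x, g_new - g
        alpha = (s @ s) / (s @ y)              # BB1 step: a cheap quasi-Newton scale
        x, g = x_new, g_new
    return x
```

The step size costs only two inner products per iteration yet typically converges far faster than a fixed-step gradient method, which is what makes it attractive for accelerating SART-type iterations.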
Reverse engineering gene networks using singular value decomposition and robust regression
Yeung, M. K. Stephen; Tegnér, Jesper; Collins, James J.
2002-01-01
We propose a scheme to reverse-engineer gene networks on a genome-wide scale using a relatively small amount of gene expression data from microarray experiments. Our method is based on the empirical observation that such networks are typically large and sparse. It uses singular value decomposition to construct a family of candidate solutions and then uses robust regression to identify the solution with the smallest number of connections as the most likely solution. Our algorithm has O(log N) sampling complexity and O(N^4) computational complexity. We test and validate our approach in a series of in numero experiments on model gene networks. PMID:11983907
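The SVD step can be sketched for the linear network model Ẋ ≈ WX, where X holds expression levels (genes × samples) and W is the connectivity matrix. The code below computes only the particular (minimum-norm) solution via the SVD pseudoinverse; the paper's key contribution, exploring the null-space family of solutions with robust regression to pick the sparsest network, is not reproduced here.

```python
import numpy as np

def infer_connectivity(X, Xdot):
    """Least-squares connectivity W solving Xdot ≈ W X via the SVD pseudoinverse.

    X, Xdot : (n_genes, n_samples) expression levels and their time derivatives.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Pseudoinverse X^+ = V diag(1/s) U^T, with tiny singular values zeroed
    s_inv = np.where(s > 1e-10 * s.max(), 1.0 / s, 0.0)
    return Xdot @ (Vt.T * s_inv) @ U.T
```

When the number of samples exceeds the number of genes the solution is unique; in the genome-wide regime the paper targets (far fewer samples than genes), every matrix of the form W + C, with the rows of C in the left null space of X, fits equally well, and sparsity is what selects among them.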
Parallel logic gates in synthetic gene networks induced by non-Gaussian noise.
Xu, Yong; Jin, Xiaoqin; Zhang, Huiqing
2013-11-01
The recent idea of logical stochastic resonance is verified in synthetic gene networks induced by non-Gaussian noise. We realize the switching between two kinds of logic gates under optimal moderate noise intensity by varying two different tunable parameters in a single gene network. Furthermore, in order to obtain more logic operations, and thus additional information-processing capacity, we obtain two complementary logic gates in a two-dimensional toggle switch model and realize the transformation between the two logic gates by changing different parameters. These simulation results help improve the computational power and functionality of the networks.
Next-Generation Synthetic Gene Networks
Lu, Timothy K.; Khalil, Ahmad S.; Collins, James J.
2009-01-01
Synthetic biology is focused on the rational construction of biological systems based on engineering principles. During the field’s first decade of development, significant progress has been made in designing biological parts and assembling them into genetic circuits to achieve basic functionalities. These circuits have been used to construct proof-of-principle systems with promising results in industrial and medical applications. However, advances in synthetic biology have been limited by a lack of interoperable parts, techniques for dynamically probing biological systems, and frameworks for the reliable construction and operation of complex, higher-order networks. Here, we highlight challenges and goals for next-generation synthetic gene networks, in the context of potential applications in medicine, biotechnology, bioremediation, and bioenergy. PMID:20010597
Next-generation synthetic gene networks.
Lu, Timothy K; Khalil, Ahmad S; Collins, James J
2009-12-01
Synthetic biology is focused on the rational construction of biological systems based on engineering principles. During the field's first decade of development, significant progress has been made in designing biological parts and assembling them into genetic circuits to achieve basic functionalities. These circuits have been used to construct proof-of-principle systems with promising results in industrial and medical applications. However, advances in synthetic biology have been limited by a lack of interoperable parts, techniques for dynamically probing biological systems and frameworks for the reliable construction and operation of complex, higher-order networks. As these challenges are addressed, synthetic biologists will be able to construct useful next-generation synthetic gene networks with real-world applications in medicine, biotechnology, bioremediation and bioenergy.
NASA Astrophysics Data System (ADS)
Salonen, J. Sakari; Luoto, Miska; Alenius, Teija; Heikkilä, Maija; Seppä, Heikki; Telford, Richard J.; Birks, H. John B.
2014-03-01
We test and analyse a new calibration method, boosted regression trees (BRTs), in palaeoclimatic reconstructions based on fossil pollen assemblages. We apply BRTs to multiple Holocene and Lateglacial pollen sequences from northern Europe, and compare their performance with two commonly used calibration methods: weighted averaging regression (WA) and the modern-analogue technique (MAT). Using these calibration methods and fossil pollen data, we present synthetic reconstructions of Holocene summer temperature, winter temperature, and water balance changes in northern Europe. Highly consistent trends are found for summer temperature, with a distinct Holocene thermal maximum at ca 8000-4000 cal. a BP, with a mean summer (June-July-August) temperature anomaly of ca +0.7 °C at 6 ka compared to 0.5 ka. We were unable to reliably reconstruct winter temperature or water balance, due to the confounding effects of summer temperature and the large between-reconstruction variability. We find BRTs to be a promising tool for quantitative reconstructions from palaeoenvironmental proxy data. BRTs show good performance in cross-validations compared with WA and MAT, can model a variety of taxon response types, find relevant predictors and incorporate interactions between predictors, and show some robustness with non-analogue fossil assemblages.
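The weighted averaging (WA) baseline the BRTs are compared against has a very simple core: a taxon's climatic optimum is the abundance-weighted mean of the climate at the sites where it occurs, and a fossil sample's reconstruction is the abundance-weighted mean of its taxa's optima. The sketch below omits the deshrinking regression used in practice, so it is a minimal illustration rather than a full WA calibration.

```python
import numpy as np

def wa_optima(Y, env):
    """Taxon optima as abundance-weighted averages of the training climate.

    Y   : (n_samples, n_taxa) modern pollen abundances
    env : (n_samples,) observed climate variable (e.g. summer temperature)
    """
    return (Y * env[:, None]).sum(axis=0) / Y.sum(axis=0)

def wa_reconstruct(y0, optima):
    """Reconstructed climate for one fossil assemblage y0 (n_taxa,)."""
    return (y0 * optima).sum() / y0.sum()
```

Because each step is a convex combination, a WA reconstruction can never leave the range of the training climate, one reason the two weighted averages systematically shrink variance and a deshrinking step is added in real applications.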
NASA Astrophysics Data System (ADS)
Choi, Joonsung; Kim, Dongchan; Oh, Changhyun; Han, Yeji; Park, HyunWook
2013-05-01
In MRI (magnetic resonance imaging), signal sampling along a radial k-space trajectory is preferred in certain applications due to its distinct advantages, such as robustness to motion; radial sampling can also be beneficial for reconstruction algorithms such as parallel MRI (pMRI) due to the incoherence of the sampling. For radial MRI, the image is usually reconstructed from projection data using analytic methods such as filtered back-projection or Fourier reconstruction after gridding. However, the quality of the reconstructed image from these analytic methods can be degraded when the number of acquired projection views is insufficient. In this paper, we propose a novel reconstruction method based on the expectation maximization (EM) method, where the EM algorithm is remodeled for MRI so that complex images can be reconstructed. Then, to optimize the proposed method for radial pMRI, a reconstruction method that uses coil sensitivity information of multichannel RF coils is formulated. Experiment results from synthetic and in vivo data show that the proposed method produces better reconstructed images than the analytic methods, even from highly subsampled data, and provides monotonic convergence properties compared to the conjugate gradient based reconstruction method.
Patel, Niyant V.; Wagner, Douglas S.
2015-01-01
Background: Venous thromboembolism (VTE) risk models including the Davison risk score and the 2005 Caprini risk assessment model have been validated in plastic surgery patients. However, their utility and predictive value in breast reconstruction has not been well described. We sought to determine the utility of current VTE risk models in this population and the VTE rate observed in various methods of breast reconstruction. Methods: A retrospective review of breast reconstructions by a single surgeon was performed. One hundred consecutive transverse rectus abdominis myocutaneous (TRAM) patients, 100 consecutive implant patients, and 100 consecutive latissimus dorsi patients were identified over a 10-year period. Patient demographics and presence of symptomatic VTE were collected. 2005 Caprini risk scores and Davison risk scores were calculated for each patient. Results: The TRAM reconstruction group was found to have a higher VTE rate (6%) than the implant (0%) and latissimus (0%) reconstruction groups (P < 0.01). Mean Davison risk scores and 2005 Caprini scores were similar across all reconstruction groups (P > 0.1). The vast majority of patients were stratified as high risk (87.3%) by the VTE risk models. However, only TRAM reconstruction patients demonstrated significant VTE risk. Conclusions: TRAM reconstruction appears to have a significantly higher risk of VTE than both implant and latissimus reconstruction. Current risk models do not effectively stratify breast reconstruction patients at risk for VTE. The method of breast reconstruction appears to have a significant role in patients’ VTE risk. PMID:26090287
NASA Astrophysics Data System (ADS)
Wahl, E. R.
2011-12-01
significant improvement in regional fidelity has resulted from continued model development. Additional examination using a new millennium-length CCSM integration and adding European post-volcanic field reconstructions yields a more mixed picture. Finally, a rigorous experimental evaluation of the efficacy of climate field reconstruction (CFR) methods is presented, derived from the western North American temperature reconstructions. This evaluation compares the fidelity of CFRs based on real proxy predictors to those obtained by using non-informative predictors. The non-informative proxies are designed to have the same autocorrelation structure as the real proxy data, but contain no climatic information. Large ensembles of reconstructions are generated in both cases, providing estimated Monte Carlo distributions of reconstruction skill. The skill metric distributions of the real proxy-based CFRs indicate good reconstruction quality and clearly (and almost entirely) separate from the poor skill distributions generated using the non-informative proxies, in contrast to a recent similar study that suggests proxy-based reconstructions have little efficacy, but which did not evaluate CFR methods.
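The non-informative predictors described above must match the real proxies' autocorrelation structure while carrying no climate signal. One standard way to build such surrogates, used here as an illustration and not necessarily the study's exact construction, is to fit a first-order autoregressive (AR(1), "red noise") model to each proxy and simulate from it.

```python
import numpy as np

def ar1_surrogate(proxy, rng):
    """Red-noise surrogate matching the proxy's lag-1 autocorrelation and
    variance, but containing no climatic information."""
    r = np.corrcoef(proxy[:-1], proxy[1:])[0, 1]   # sample lag-1 autocorrelation
    n = len(proxy)
    sigma = np.std(proxy)
    s = np.empty(n)
    s[0] = rng.standard_normal() * sigma
    innov = rng.standard_normal(n) * sigma * np.sqrt(1.0 - r ** 2)
    for t in range(1, n):
        s[t] = r * s[t - 1] + innov[t]             # AR(1) recursion
    return s
```

Feeding large ensembles of such surrogates through the same CFR pipeline yields the Monte Carlo null distribution of skill against which the real-proxy reconstructions are compared.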
Yoon, Sungwon; Pineda, Angel R.; Fahrig, Rebecca
2010-01-01
Purpose: An iterative tomographic reconstruction algorithm that simultaneously segments and reconstructs the reconstruction domain is proposed and applied to tomographic reconstructions from a sparse number of projection images. Methods: The proposed algorithm uses a two-phase level set method segmentation in conjunction with an iterative tomographic reconstruction to achieve simultaneous segmentation and reconstruction. The simultaneous segmentation and reconstruction is achieved by alternating between level set function evolutions and per-region intensity value updates. To deal with the limited number of projections, a priori information about the reconstruction is enforced via penalized likelihood function. Specifically, smooth function within each region (piecewise smooth function) and bounded function intensity values for each region are assumed. Such a priori information is formulated into a quadratic objective function with linear bound constraints. The level set function evolutions are achieved by artificially time evolving the level set function in the negative gradient direction; the intensity value updates are achieved by using the gradient projection conjugate gradient algorithm. Results: The proposed simultaneous segmentation and reconstruction results were compared to “conventional” iterative reconstruction (with no segmentation), iterative reconstruction followed by segmentation, and filtered backprojection. Improvements of 6%–13% in the normalized root mean square error were observed when the proposed algorithm was applied to simulated projections of a numerical phantom and to real fan-beam projections of the Catphan phantom, both of which did not satisfy the a priori assumptions. Conclusions: The proposed simultaneous segmentation and reconstruction resulted in improved reconstruction image quality. The algorithm correctly segments the reconstruction space into regions, preserves sharp edges between different regions, and smoothes the noise
Yoon, Sungwon; Pineda, Angel R.; Fahrig, Rebecca
2010-05-15
Purpose: An iterative tomographic reconstruction algorithm that simultaneously segments and reconstructs the reconstruction domain is proposed and applied to tomographic reconstructions from a sparse number of projection images. Methods: The proposed algorithm uses a two-phase level set method segmentation in conjunction with an iterative tomographic reconstruction to achieve simultaneous segmentation and reconstruction. The simultaneous segmentation and reconstruction is achieved by alternating between level set function evolutions and per-region intensity value updates. To deal with the limited number of projections, a priori information about the reconstruction is enforced via penalized likelihood function. Specifically, smooth function within each region (piecewise smooth function) and bounded function intensity values for each region are assumed. Such a priori information is formulated into a quadratic objective function with linear bound constraints. The level set function evolutions are achieved by artificially time evolving the level set function in the negative gradient direction; the intensity value updates are achieved by using the gradient projection conjugate gradient algorithm. Results: The proposed simultaneous segmentation and reconstruction results were compared to "conventional" iterative reconstruction (with no segmentation), iterative reconstruction followed by segmentation, and filtered backprojection. Improvements of 6%-13% in the normalized root mean square error were observed when the proposed algorithm was applied to simulated projections of a numerical phantom and to real fan-beam projections of the Catphan phantom, both of which did not satisfy the a priori assumptions. Conclusions: The proposed simultaneous segmentation and reconstruction resulted in improved reconstruction image quality. The algorithm correctly segments the reconstruction space into regions, preserves sharp edges between different regions, and smoothes the noise
Prediction of disease genes using tissue-specified gene-gene network
2014-01-01
Background: Tissue specificity is an important aspect of many genetic diseases in the context of genetic disorders, as a disorder often affects only a few tissues. Therefore tissue specificity is important in identifying disease-gene associations. Hence this paper discusses the impact of using tissue specificity in predicting new disease-gene associations and how to use tissue specificity along with phenotype information for a particular disease. Methods: In order to find out the impact of using tissue specificity for predicting new disease-gene associations, this study proposes a novel method called tissue-specified genes to construct tissue-specific gene-gene networks for different tissue samples. Subsequently, these networks are used with phenotype details to predict disease genes using the Katz method. The proposed method was compared with three other tissue-specific network construction methods in order to check its effectiveness. Furthermore, to check the possibility of using a tissue-specific gene-gene network instead of a generic protein-protein network at all times, the results are compared with three other methods. Results: In terms of leave-one-out cross validation, calculation of the mean enrichment and ROC curves indicates that the proposed approach outperforms existing network construction methods. Furthermore, tissue-specific gene-gene networks make a more positive impact on predicting disease-gene associations than generic protein-protein interaction networks. Conclusions: Integrating tissue-specific data enables more effective prediction of known and unknown disease-gene associations for a particular disease. Hence it is better to use a tissue-specific gene-gene network whenever possible. In addition, the proposed method is a better way of constructing tissue-specific gene-gene networks. PMID:25350876
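The Katz method used for the prediction step scores each candidate gene by summing its walks of all lengths to the known disease genes, with longer walks damped geometrically. A minimal sketch, assuming a symmetric adjacency matrix and an illustrative damping factor:

```python
import numpy as np

def katz_disease_scores(A, seed, beta=0.1):
    """Rank candidate genes by Katz connectivity to known disease genes.

    A    : (n, n) symmetric adjacency matrix of the gene-gene network
    seed : (n,) indicator (or phenotype-similarity weight) of known disease genes
    beta : damping factor; must satisfy beta < 1 / spectral_radius(A)
           for the series sum_{l>=1} beta^l A^l to converge
    """
    n = A.shape[0]
    K = np.linalg.inv(np.eye(n) - beta * A) - np.eye(n)   # closed form of the series
    return K @ seed
```

Swapping in a tissue-specific adjacency for a generic protein-protein one changes only `A`, which is exactly the comparison the paper performs.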
Acerbi, Enzo; Zelante, Teresa; Narang, Vipin; Stella, Fabio
2014-12-11
Dynamic aspects of gene regulatory networks are typically investigated by measuring system variables at multiple time points. Current state-of-the-art computational approaches for reconstructing gene networks directly build on such data, making a strong assumption that the system evolves in a synchronous fashion at fixed points in time. However, nowadays omics data are being generated with increasing time course granularity. Thus, modellers now have the possibility to represent the system as evolving in continuous time and to improve the models' expressiveness. Continuous time Bayesian networks are proposed as a new approach for gene network reconstruction from time course expression data. Their performance was compared to two state-of-the-art methods: dynamic Bayesian networks and Granger causality analysis. On simulated data, the methods comparison was carried out for networks of increasing size, for measurements taken at different time granularities and for measurements unevenly spaced over time. Continuous time Bayesian networks outperformed the other methods in terms of the accuracy of regulatory interactions learnt from data for all network sizes. Furthermore, their performance degraded smoothly as the size of the network increased. Continuous time Bayesian networks were significantly better than dynamic Bayesian networks for all time granularities tested and better than Granger causality for dense time series. Both continuous time Bayesian networks and Granger causality performed robustly for unevenly spaced time series, with no significant loss of performance compared to the evenly spaced case, while the same did not hold true for dynamic Bayesian networks. The comparison included the IRMA experimental datasets, which confirmed the effectiveness of the proposed method. Continuous time Bayesian networks were then applied to elucidate the regulatory mechanisms controlling murine T helper 17 (Th17) cell differentiation and were found to be effective in
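The Granger causality baseline used in the comparison tests whether one gene's past improves prediction of another gene beyond the target's own past. The sketch below is a simplified pairwise, one-lag variance-ratio version of that test (full Granger analyses use more lags and a formal F-test), with all simulation parameters illustrative.

```python
import numpy as np

def _rss(X, t):
    """Residual sum of squares of the least-squares fit X @ beta ≈ t."""
    beta, *_ = np.linalg.lstsq(X, t, rcond=None)
    r = t - X @ beta
    return r @ r

def granger_1lag(x, y):
    """Evidence that x Granger-causes y, with one lag.

    Compares an AR(1) model of y against a model that also uses the
    previous value of x; larger values mean stronger evidence."""
    t = y[1:]
    ones = np.ones(len(t))
    rss_restricted = _rss(np.column_stack([ones, y[:-1]]), t)
    rss_full = _rss(np.column_stack([ones, y[:-1], x[:-1]]), t)
    return (rss_restricted - rss_full) / rss_full
```

The statistic is asymmetric by design: simulating a system where x drives y but not vice versa should give a much larger score for the x-to-y direction.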
New method to analyze internal disruptions with tomographic reconstructions
NASA Astrophysics Data System (ADS)
Tanzi, C. P.; de Blank, H. J.
1997-03-01
Sawtooth crashes have been investigated on the Rijnhuizen Tokamak Project (RTP) [N. J. Lopes Cardozo et al., Proceedings of the 14th International Conference on Plasma Physics and Controlled Nuclear Fusion Research, Würzburg, 1992 (International Atomic Energy Agency, Vienna, 1993), Vol. 1, p. 271]. Internal disruptions in tokamak plasmas often exhibit an m=1 poloidal mode structure prior to the collapse which can be clearly identified by means of multicamera soft x-ray diagnostics. In this paper tomographic reconstructions of such m=1 modes are analyzed with a new method, based on magnetohydrodynamic (MHD) invariants computed from the two-dimensional emissivity profiles, which quantifies the amount of profile flattening not only after the crash but also during the precursor oscillations. The results are interpreted by comparing them with two models which simulate the measurements of the m=1 redistribution of soft x-ray emissivity prior to the sawtooth crash. One model is based on the magnetic reconnection model of Kadomtsev. The other involves ideal MHD motion only. In cases where differences in magnetic topology between the two models cannot be seen in the tomograms, the analysis of profile flattening has an advantage. The analysis shows that in RTP the clearly observed m=1 displacement of some sawteeth requires the presence of convective ideal MHD motion, whereas other precursors are consistent with magnetic reconnection of up to 75% of the magnetic flux within the q=1 surface. The possibility of ideal interchange combined with enhanced cross-field transport is not excluded.
Evolution of a core gene network for skeletogenesis in chordates.
Hecht, Jochen; Stricker, Sigmar; Wiecha, Ulrike; Stiege, Asita; Panopoulou, Georgia; Podsiadlowski, Lars; Poustka, Albert J; Dieterich, Christoph; Ehrich, Siegfried; Suvorova, Julia; Mundlos, Stefan; Seitz, Volkhard
2008-03-21
The skeleton is one of the most important features for the reconstruction of vertebrate phylogeny but few data are available to understand its molecular origin. In mammals the Runt genes are central regulators of skeletogenesis. Runx2 was shown to be essential for osteoblast differentiation, tooth development, and bone formation. Both Runx2 and Runx3 are essential for chondrocyte maturation. Furthermore, Runx2 directly regulates Indian hedgehog expression, a master coordinator of skeletal development. To clarify the correlation of Runt gene evolution and the emergence of cartilage and bone in vertebrates, we cloned the Runt genes from hagfish as representative of jawless fish (MgRunxA, MgRunxB) and from dogfish as representative of jawed cartilaginous fish (ScRunx1-3). According to our phylogenetic reconstruction the stem species of chordates harboured a single Runt gene and thereafter Runt locus duplications occurred during early vertebrate evolution. All newly isolated Runt genes were expressed in cartilage according to quantitative PCR. In situ hybridisation confirmed high MgRunxA expression in hard cartilage of hagfish. In dogfish ScRunx2 and ScRunx3 were expressed in embryonal cartilage whereas all three Runt genes were detected in teeth and placoid scales. In cephalochordates (lancelets) Runt, Hedgehog and SoxE were strongly expressed in the gill bars and expression of Runt and Hedgehog was found in endo- as well as ectodermal cells. Furthermore we demonstrate that the lancelet Runt protein binds to Runt binding sites in the lancelet Hedgehog promoter and regulates its activity. Together, these results suggest that Runt and Hedgehog were part of a core gene network for cartilage formation, which was already active in the gill bars of the common ancestor of cephalochordates and vertebrates and diversified after Runt duplications had occurred during vertebrate evolution. The similarities in expression patterns of Runt genes support the view that teeth and
Evolution of a Core Gene Network for Skeletogenesis in Chordates
Hecht, Jochen; Panopoulou, Georgia; Podsiadlowski, Lars; Poustka, Albert J.; Dieterich, Christoph; Ehrich, Siegfried; Suvorova, Julia; Mundlos, Stefan; Seitz, Volkhard
2008-01-01
The skeleton is one of the most important features for the reconstruction of vertebrate phylogeny but few data are available to understand its molecular origin. In mammals the Runt genes are central regulators of skeletogenesis. Runx2 was shown to be essential for osteoblast differentiation, tooth development, and bone formation. Both Runx2 and Runx3 are essential for chondrocyte maturation. Furthermore, Runx2 directly regulates Indian hedgehog expression, a master coordinator of skeletal development. To clarify the correlation of Runt gene evolution and the emergence of cartilage and bone in vertebrates, we cloned the Runt genes from hagfish as representative of jawless fish (MgRunxA, MgRunxB) and from dogfish as representative of jawed cartilaginous fish (ScRunx1–3). According to our phylogenetic reconstruction the stem species of chordates harboured a single Runt gene and thereafter Runt locus duplications occurred during early vertebrate evolution. All newly isolated Runt genes were expressed in cartilage according to quantitative PCR. In situ hybridisation confirmed high MgRunxA expression in hard cartilage of hagfish. In dogfish ScRunx2 and ScRunx3 were expressed in embryonal cartilage whereas all three Runt genes were detected in teeth and placoid scales. In cephalochordates (lancelets) Runt, Hedgehog and SoxE were strongly expressed in the gill bars and expression of Runt and Hedgehog was found in endo- as well as ectodermal cells. Furthermore we demonstrate that the lancelet Runt protein binds to Runt binding sites in the lancelet Hedgehog promoter and regulates its activity. Together, these results suggest that Runt and Hedgehog were part of a core gene network for cartilage formation, which was already active in the gill bars of the common ancestor of cephalochordates and vertebrates and diversified after Runt duplications had occurred during vertebrate evolution. The similarities in expression patterns of Runt genes support the view that teeth and
Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method
Pereira, N F; Sitek, A
2011-01-01
Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies yields superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can outperform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated. PMID:20736496
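The maximum likelihood expectation maximization (MLEM) algorithm named above has the same multiplicative update whether the basis functions are voxels or tetrahedral mesh elements; only the system matrix changes. A generic dense-matrix sketch (real systems use sparse projectors and far larger dimensions):

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """MLEM for emission tomography with a generic linear system model.

    A : (n_bins, n_basis) nonnegative system matrix (projection of each
        basis function, voxel or tetrahedral, into each detector bin)
    y : (n_bins,) measured counts
    """
    x = np.ones(A.shape[1])                    # uniform nonnegative start
    sens = A.T @ np.ones(A.shape[0])           # sensitivity image
    for _ in range(n_iter):
        proj = A @ x                           # forward projection
        x = x / sens * (A.T @ (y / proj))      # multiplicative EM update
    return x
```

The update preserves nonnegativity automatically and increases the Poisson likelihood at every iteration, which is why it is the standard reference algorithm in evaluations like this one.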
Reduction and reconstruction methods for simulation and control of fluids
NASA Astrophysics Data System (ADS)
Ma, Zhanhua
In this thesis we develop model reduction/reconstruction methods that are applied to simulation and control of fluids. In the first part of the thesis, we focus on the development of dimension reduction methods that compute reduced-order models (of order 10^1-10^2) of systems with high-dimensional states (of order 10^5-10^8) that are typical in computational fluid dynamics. The reduced-order models are then used for feedback control design for the full systems, as the control design tools are usually applicable only to systems of order up to 10^4. First, we show that a widely used model reduction method for stable linear time-invariant (LTI) systems, the approximate balanced truncation method (also called balanced POD), yields identical reduced-order models to the Eigensystem Realization Algorithm (ERA), a well-known method in system identification. Unlike ERA, balanced POD generates sets of modes that are useful in controller/observer design and systems analysis. On the other hand, ERA is more computationally efficient and does not need data from adjoint systems, which cannot be constructed in experiments and are often costly to construct and simulate numerically. The equivalence of ERA and balanced POD leads us to further design a version of ERA that works for unstable (linear) systems with a one-dimensional unstable eigenspace and is equivalent to a recently developed version of balanced POD for unstable systems. We consider further generalization of balanced POD/ERA methods for linearized time-periodic systems around an unstable orbit. Four algorithms are presented: the lifted balanced POD/lifted ERA and the periodic balanced POD/periodic ERA. The lifting approach generates an LTI reduced-order model that updates the system once every period, and the periodic approach generates a periodic reduced-order model. By construction the lifted ERA is the most computationally efficient algorithm and it does not need adjoint data. By removing periodicity in periodic balanced
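The ERA step central to the thesis can be sketched for a single-input single-output system: stack the impulse-response Markov parameters into Hankel matrices, take an SVD, and read off a balanced reduced-order realization. This is the textbook ERA, shown here on a toy problem rather than a fluid system.

```python
import numpy as np

def era(h, r):
    """Eigensystem Realization Algorithm for a SISO impulse response.

    h : sequence of Markov parameters h_1, h_2, ..., h_{2m}
    r : reduced model order
    Returns (Ar, Br, Cr) with h_k ≈ Cr @ Ar^(k-1) @ Br.
    """
    m = len(h) // 2
    H0 = np.array([[h[i + j] for j in range(m)] for i in range(m)])       # Hankel
    H1 = np.array([[h[i + j + 1] for j in range(m)] for i in range(m)])   # shifted
    U, s, Vt = np.linalg.svd(H0)
    Ur, sr, Vr = U[:, :r], s[:r], Vt[:r, :].T
    si = np.diag(sr ** -0.5)
    Ar = si @ Ur.T @ H1 @ Vr @ si                 # balanced reduced dynamics
    Br = (np.diag(sr ** 0.5) @ Vr.T)[:, 0]        # first column of controllability
    Cr = (Ur @ np.diag(sr ** 0.5))[0, :]          # first row of observability
    return Ar, Br, Cr
```

Note that only output data (Markov parameters) are needed, which is the thesis's point about ERA avoiding adjoint simulations; applied to exact data from an order-r system, ERA recovers the impulse response exactly.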
Prediction of disease genes using tissue-specified gene-gene network.
Ganegoda, Gamage; Wang, JianXin; Wu, Fang-Xiang; Li, Min
2014-01-01
Tissue specificity is an important aspect of many genetic diseases in the context of genetic disorders, as a disorder often affects only a few tissues. Therefore tissue specificity is important in identifying disease-gene associations. Hence this paper discusses the impact of using tissue specificity in predicting new disease-gene associations and how to use tissue specificity along with phenotype information for a particular disease. In order to find out the impact of using tissue specificity for predicting new disease-gene associations, this study proposes a novel method called tissue-specified genes to construct tissue-specific gene-gene networks for different tissue samples. Subsequently, these networks are used with phenotype details to predict disease genes using the Katz method. The proposed method was compared with three other tissue-specific network construction methods in order to check its effectiveness. Furthermore, to check the possibility of using a tissue-specific gene-gene network instead of a generic protein-protein network at all times, the results are compared with three other methods. In terms of leave-one-out cross validation, calculation of the mean enrichment and ROC curves indicates that the proposed approach outperforms existing network construction methods. Furthermore, tissue-specific gene-gene networks make a more positive impact on predicting disease-gene associations than generic protein-protein interaction networks. In conclusion, integrating tissue-specific data enables more effective prediction of known and unknown disease-gene associations for a particular disease. Hence it is better to use a tissue-specific gene-gene network whenever possible. In addition, the proposed method is a better way of constructing tissue-specific gene-gene networks.
Comparison of methods for the reduction of reconstructed layers in atmospheric tomography.
Saxenhuber, Daniela; Auzinger, Günter; Louarn, Miska Le; Helin, Tapio
2017-04-01
For the new generation of extremely large telescopes (ELTs), the computational effort for adaptive optics (AO) systems is demanding even for fast reconstruction algorithms. In wide-field AO, atmospheric tomography, i.e., the reconstruction of turbulent atmospheric layers from wavefront sensor data in several directions of view, is the crucial step for an overall reconstruction. Along with the number of deformable mirrors, wavefront sensors and their resolution, as well as the guide star separation, the number of reconstruction layers contributes significantly to the numerical effort. To reduce the computational cost, a sparse reconstruction profile which still yields good reconstruction quality is needed. In this paper, we analyze existing methods and present new approaches to determine optimal layer heights and turbulence weights for the tomographic reconstruction. Two classes of methods are discussed. On the one hand, we have compression methods that downsample a given input profile to fewer layers. Among other methods, a new compression method based on discrete optimization of collecting atmospheric layers to subgroups and the compression by means of conserving turbulence moments is presented. On the other hand, we take a look at a joint optimization of tomographic reconstruction and reconstruction profile during atmospheric tomography, which is independent of any a priori information on the underlying input profile. We analyze and study the qualitative performance of these methods for different input profiles and varying fields of view in an ELT-sized multi-object AO setting on the European Southern Observatory end-to-end simulation tool OCTOPUS.
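The compression idea "by means of conserving turbulence moments" can be sketched directly: each group of input layers is collapsed to a single layer whose weight conserves the zeroth moment and whose height conserves the first moment. The profile and grouping below are invented for illustration:

```python
# Compress an atmospheric turbulence profile (layer heights h_i in meters,
# Cn2 weights w_i) to fewer layers, conserving the zeroth and first
# turbulence moments within each group of layers.

def compress(heights, weights, groups):
    out_h, out_w = [], []
    for idx in groups:
        w = sum(weights[i] for i in idx)                    # zeroth moment of the group
        h = sum(weights[i] * heights[i] for i in idx) / w   # weight-averaged height
        out_h.append(h)
        out_w.append(w)
    return out_h, out_w

heights = [0.0, 500.0, 2000.0, 4000.0, 8000.0, 12000.0]
weights = [0.50, 0.15, 0.10, 0.10, 0.10, 0.05]

# Collapse six input layers into three reconstruction layers.
groups = [(0, 1), (2, 3), (4, 5)]
h3, w3 = compress(heights, weights, groups)
```

By construction, total turbulence strength and the weighted mean altitude of each group are unchanged, which is what makes the downsampled profile usable in the tomographic reconstructor.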
Compact high order finite volume method on unstructured grids III: Variational reconstruction
NASA Astrophysics Data System (ADS)
Wang, Qian; Ren, Yu-Xin; Pan, Jianhua; Li, Wanai
2017-05-01
This paper presents a variational reconstruction for the high order finite volume method in solving the two-dimensional Navier-Stokes equations on arbitrary unstructured grids. In the variational reconstruction, an interfacial jump integration is defined to measure the jumps of the reconstruction polynomial and its spatial derivatives on each cell interface. The system of linear equations to determine the reconstruction polynomials is derived by minimizing the total interfacial jump integration in the computational domain using the variational method. On each control volume, the derived equations are implicit relations between the coefficients of the reconstruction polynomials defined on a compact stencil involving only the current cell and its direct face-neighbors. The reconstruction and time integration coupled iteration method proposed in our previous paper is used to achieve high computational efficiency. A problem-independent shock detector and the WBAP limiter are used to suppress non-physical oscillations in the simulation of flow with discontinuities. The advantages of the finite volume method using the variational reconstruction over the compact least-squares finite volume method proposed in our previous papers are higher accuracy, higher computational efficiency, more flexible boundary treatment and non-singularity of the reconstruction matrix. A number of numerical test cases are solved to verify the accuracy, efficiency and shock-capturing capability of the finite volume method using the variational reconstruction.
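The interfacial jump integration described above can be written schematically as follows (a sketch of the functional's general form; the paper's exact weights and normalization may differ):

```latex
J \;=\; \sum_{f \in \mathcal{F}} \int_{f} \sum_{k=0}^{p} \omega_k \left( \left[\!\left[ \partial^{k} u_h \right]\!\right] \right)^2 \, \mathrm{d}S ,
```

where $\mathcal{F}$ is the set of cell interfaces, $[\![\cdot]\!]$ denotes the jump across an interface, and $\omega_k$ weights the jump of the $k$-th derivative of the reconstruction polynomial $u_h$. Minimizing $J$ over the reconstruction coefficients yields the implicit, compact-stencil linear relations mentioned in the abstract.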
NASA Astrophysics Data System (ADS)
He, Jinping; Ruan, Ningjuan; Zhao, Haibo; Liu, Yuchen
2016-10-01
Remote sensing features are varied and complicated; no dictionary offers comprehensive coverage for reconstruction, so reconstruction precision is not guaranteed. To address these problems, a novel reconstruction method using multiple compressed sensing data, based on energy compensation, is proposed in this paper. The multiple measured data and multiple coding matrices compose the reconstruction equation, which is solved locally through the Orthogonal Matching Pursuit (OMP) algorithm to obtain the initial reconstruction image. Further assuming that local image patches share the same compensation gray value, a mathematical model of the compensation value is constructed by minimizing the error between the multiple estimated measured values and the actual measured values. After solving the minimization, the compensation values are added to the initial reconstruction image, yielding the final energy-compensated image. Experiments show that the energy compensation method is superior to reconstruction without compensation and that our method is well suited to remote sensing features.
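The OMP step used for the initial reconstruction can be sketched as follows; the dictionary, signal size, and sparsity level are illustrative stand-ins (an orthonormal dictionary is chosen so the toy recovery is exact), not the paper's coding matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sensing setup: an orthonormal 8x8 dictionary, so OMP
# recovery of a 2-sparse coefficient vector is exact.
D, _ = np.linalg.qr(rng.standard_normal((8, 8)))
x_true = np.zeros(8)
x_true[[1, 5]] = [2.0, -1.5]
y = D @ x_true  # measured data

def omp(D, y, n_nonzero):
    """Greedy sparse recovery: pick the most correlated atom, refit, repeat."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # Atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # Least-squares fit on the current support.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

x_hat = omp(D, y, 2)
```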
Yu, Xiangtian; Zeng, Tao; Wang, Xiangdong; Li, Guojun; Chen, Luonan
2015-06-13
In the conventional analysis of complex diseases, the control and case samples are assumed to be of great purity. However, due to the heterogeneity of disease samples, many disease genes are not consistently up-/down-regulated and are therefore under-estimated. This problem seriously hampers effective personalized diagnosis and treatment. Expression variance and expression covariance can address this problem in a network manner, but such analyses require multiple samples rather than one, which is generally not available for each individual in clinical practice. To extract the common and specific network characteristics of individual patients, a novel differential network model, the personalized dysfunctional gene network, is proposed in this paper. It simultaneously integrates genes with different features, i.e., genes with differential gene expression (DEG), genes with differential expression variance (DEVG) and gene pairs with differential expression covariance (DECG), to construct personalized dysfunctional networks. The model uses a new statistic-like measurement of differential information, a differential score (DEVC), to reconstruct the differential expression network between groups of normal and diseased samples, and further quantitatively evaluates the feature genes in the patient-specific network of each individual. This DEVC-based differential expression network (DEVC-net) has been applied to the study of prostate cancer and diabetes. (1) Characterizing the global expression change between normal and diseased samples, the differential gene networks of these diseases were found to have a new bi-coloured topological structure, where the non hub-centred sub-networks are mainly composed of genes/proteins controlling various biological processes. (2) Differential expression variance/covariance, rather than differential expression alone, provides a new source of information, and can
A novel method for the 3-D reconstruction of scoliotic ribs from frontal and lateral radiographs.
Seoud, Lama; Cheriet, Farida; Labelle, Hubert; Dansereau, Jean
2011-05-01
Among the external manifestations of scoliosis, the rib hump, which is associated with the ribs' deformities and rotations, constitutes the most disturbing aspect of the scoliotic deformity for patients. A personalized 3-D model of the rib cage is important for a better evaluation of the deformity, and hence, a better treatment planning. A novel method for the 3-D reconstruction of the rib cage, based only on two standard radiographs, is proposed in this paper. For each rib, two points are extrapolated from the reconstructed spine, and three points are reconstructed by stereo radiography. The reconstruction is then refined using a surface approximation. The method was evaluated using clinical data of 13 patients with scoliosis. A comparison was conducted between the reconstructions obtained with the proposed method and those obtained by using a previous reconstruction method based on two frontal radiographs. A first comparison criterion was the distances between the reconstructed ribs and the surface topography of the trunk, considered as the reference modality. The correlation between ribs axial rotation and back surface rotation was also evaluated. The proposed method successfully reconstructed the ribs of the 6th-12th thoracic levels. The evaluation results showed that the 3-D configuration of the new rib reconstructions is more consistent with the surface topography and provides more accurate measurements of ribs axial rotation.
Exploring Normalization and Network Reconstruction Methods using In Silico and In Vivo Models
Abstract: Lessons learned from the recent DREAM competitions include: The search for the best network reconstruction method continues, and we need more complete datasets with ground truth from more complex organisms. It has become obvious that the network reconstruction methods t...
Solution of the quasispecies model for an arbitrary gene network
NASA Astrophysics Data System (ADS)
Tannenbaum, Emmanuel; Shakhnovich, Eugene I.
2004-08-01
In this paper, we study the equilibrium behavior of Eigen’s quasispecies equations for an arbitrary gene network. We consider a genome consisting of N genes, so that the full genome sequence σ may be written as σ=σ1σ2⋯σN , where σi are sequences of individual genes. We assume a single fitness peak model for each gene, so that gene i has some “master” sequence σi,0 for which it is functioning. The fitness landscape is then determined by which genes in the genome are functioning and which are not. The equilibrium behavior of this model may be solved in the limit of infinite sequence length. The central result is that, instead of a single error catastrophe, the model exhibits a series of localization to delocalization transitions, which we term an “error cascade.” As the mutation rate is increased, the selective advantage for maintaining functional copies of certain genes in the network disappears, and the population distribution delocalizes over the corresponding sequence spaces. The network goes through a series of such transitions, as more and more genes become inactivated, until eventually delocalization occurs over the entire genome space, resulting in a final error catastrophe. This model provides a criterion for determining the conditions under which certain genes in a genome will lose functionality due to genetic drift. It also provides insight into the response of gene networks to mutagens. In particular, it suggests an approach for determining the relative importance of various genes to the fitness of an organism, in a more accurate manner than the standard “deletion set” method. The results in this paper also have implications for mutational robustness and what C.O. Wilke termed “survival of the flattest.”
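For reference, the Eigen quasispecies dynamics analyzed in the abstract are conventionally written as follows (a standard form; the paper's notation for gene-indexed sequences may differ):

```latex
\frac{dx_\sigma}{dt} \;=\; \sum_{\sigma'} Q_{\sigma\sigma'}\, f(\sigma')\, x_{\sigma'} \;-\; \bar{f}(t)\, x_\sigma,
\qquad \bar{f}(t) \;=\; \sum_{\sigma} f(\sigma)\, x_\sigma(t),
```

where $x_\sigma$ is the population fraction of genome sequence $\sigma$, $f(\sigma)$ its fitness (here determined by which genes carry their master sequences $\sigma_{i,0}$), and $Q_{\sigma\sigma'}$ the probability of mutating from $\sigma'$ to $\sigma$. The error cascade corresponds to successive delocalization transitions of the equilibrium distribution of this system as the mutation rate grows.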
Chen, Shuo; Wang, Gang; Cui, Xiaoyu; Liu, Quan
2017-01-23
Raman spectroscopy has demonstrated great potential in biomedical applications. However, spectroscopic Raman imaging is limited in the investigation of fast changing phenomena because of slow data acquisition. Our previous studies have indicated that spectroscopic Raman imaging can be significantly sped up using the approach of narrow-band imaging followed by spectral reconstruction. A multi-channel system was built to demonstrate the feasibility of fast wide-field spectroscopic Raman imaging using the approach of simultaneous narrow-band image acquisition followed by spectral reconstruction based on Wiener estimation in phantoms. To further improve the accuracy of reconstructed Raman spectra, we propose a stepwise spectral reconstruction method in this study, which can be combined with the earlier developed sequential weighted Wiener estimation to improve spectral reconstruction accuracy. The stepwise spectral reconstruction method first reconstructs the fluorescence background spectrum from narrow-band measurements and then the pure Raman narrow-band measurements can be estimated by subtracting the estimated fluorescence background from the overall narrow-band measurements. Thereafter, the pure Raman spectrum can be reconstructed from the estimated pure Raman narrow-band measurements. The result indicates that the stepwise spectral reconstruction method can improve spectral reconstruction accuracy significantly when combined with sequential weighted Wiener estimation, compared with the traditional Wiener estimation. In addition, qualitatively accurate cell Raman spectra were successfully reconstructed using the stepwise spectral reconstruction method from the narrow-band measurements acquired by a four-channel wide-field Raman spectroscopic imaging system. This method can potentially facilitate the adoption of spectroscopic Raman imaging to the investigation of fast changing phenomena.
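Traditional Wiener estimation, the baseline the abstract improves on, reconstructs a full spectrum from a few narrow-band measurements via a linear minimum-mean-square estimator built from training spectra. A minimal sketch with synthetic data (all matrices, sizes, and noise levels here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

n_bands = 16   # spectral points of the full spectrum
n_chan = 6     # narrow-band measurement channels

# Hypothetical training spectra (rows) and narrow-band response matrix A.
S_train = rng.standard_normal((40, n_bands))
A = np.abs(rng.standard_normal((n_chan, n_bands)))

# Signal covariance estimated from training spectra; small noise covariance.
C_s = np.cov(S_train, rowvar=False)
C_n = 1e-4 * np.eye(n_chan)

# Wiener matrix: W = C_s A^T (A C_s A^T + C_n)^{-1}
W = C_s @ A.T @ np.linalg.inv(A @ C_s @ A.T + C_n)

s = S_train[0]   # a spectrum to "measure"
m = A @ s        # narrow-band measurements
s_hat = W @ m    # reconstructed full spectrum
```

The stepwise method in the abstract applies this kind of estimator in stages, first to the fluorescence background and then to the background-subtracted pure Raman measurements.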
Analysis of cascading failure in gene networks.
Sun, Longxiao; Wang, Shudong; Li, Kaikai; Meng, Dazhi
2012-01-01
Investigating the functional mechanisms by which cancer-related genes act in the formation and development of cancers is an important research subject, and modern data-analysis methodology plays a central role in deducing the relationship between cancers and cancer-related genes and in analyzing the functional mechanisms of the genome. In this research, we construct mutual information networks from gene expression profiles of glioblastoma and renal tissue in normal and cancer conditions. We investigate the relationship between structure and robustness in the gene networks of the two tissues using a cascading failure model based on betweenness centrality. We define important parameters, such as the percentage of failed nodes in the network, the average size-ratio of cascading failures, and the cumulative probability of the size-ratio of cascading failures, to measure the robustness of the networks. Comparing the control group with the experiment groups, we find that the networks of the experiment groups are more robust than those of the control group. A gene that can cause large-scale failure is called a structural key gene. Some of these have been confirmed to be closely related to the formation and development of glioma and renal cancer, respectively. Most of them are predicted to play important roles during the formation of glioma and renal cancer, possibly as oncogenes, suppressor genes, or other cancer candidate genes in glioma and renal cancer cells. However, these studies provide little information about the detailed roles of the identified cancer genes.
Paper-based Synthetic Gene Networks
Pardee, Keith; Green, Alexander A.; Ferrante, Tom; Cameron, D. Ewen; DaleyKeyser, Ajay; Yin, Peng; Collins, James J.
2014-01-01
Synthetic gene networks have wide-ranging uses in reprogramming and rewiring organisms. To date, there has not been a way to harness the vast potential of these networks beyond the constraints of a laboratory or in vivo environment. Here, we present an in vitro paper-based platform that provides a new venue for synthetic biologists to operate, and a much-needed medium for the safe deployment of engineered gene circuits beyond the lab. Commercially available cell-free systems are freeze-dried onto paper, enabling the inexpensive, sterile and abiotic distribution of synthetic biology-based technologies for the clinic, global health, industry, research and education. For field use, we create circuits with colorimetric outputs for detection by eye, and fabricate a low-cost, electronic optical interface. We demonstrate this technology with small molecule and RNA actuation of genetic switches, rapid prototyping of complex gene circuits, and programmable in vitro diagnostics, including glucose sensors and strain-specific Ebola virus sensors. PMID:25417167
Paper-based synthetic gene networks.
Pardee, Keith; Green, Alexander A; Ferrante, Tom; Cameron, D Ewen; DaleyKeyser, Ajay; Yin, Peng; Collins, James J
2014-11-06
Synthetic gene networks have wide-ranging uses in reprogramming and rewiring organisms. To date, there has not been a way to harness the vast potential of these networks beyond the constraints of a laboratory or in vivo environment. Here, we present an in vitro paper-based platform that provides an alternate, versatile venue for synthetic biologists to operate and a much-needed medium for the safe deployment of engineered gene circuits beyond the lab. Commercially available cell-free systems are freeze dried onto paper, enabling the inexpensive, sterile, and abiotic distribution of synthetic-biology-based technologies for the clinic, global health, industry, research, and education. For field use, we create circuits with colorimetric outputs for detection by eye and fabricate a low-cost, electronic optical interface. We demonstrate this technology with small-molecule and RNA actuation of genetic switches, rapid prototyping of complex gene circuits, and programmable in vitro diagnostics, including glucose sensors and strain-specific Ebola virus sensors.
On multigrid methods for image reconstruction from projections
Henson, V.E.; Robinson, B.T.; Limber, M.
1994-12-31
The sampled Radon transform of a 2D function can be represented as a continuous linear map R : L¹ → R^N. The image reconstruction problem is: given a vector b ∈ R^N, find an image (or density function) u(x, y) such that Ru = b. Since in general there are infinitely many solutions, the authors pick the solution with minimal 2-norm. Numerous proposals have been made regarding how best to discretize this problem. One can, for example, select a set of functions φ_j that span a particular subspace Ω ⊂ L¹, and model R : Ω → R^N. The subspace Ω may be chosen as a member of a sequence of subspaces whose limit is dense in L¹. One approach to the choice of Ω gives rise to a natural pixel discretization of the image space. Two possible choices of the set φ_j are the set of characteristic functions of finite-width "strips" representing energy transmission paths and the set of intersections of such strips. The authors have studied the eigenstructure of the matrices B resulting from these choices and the effect of applying a Gauss-Seidel iteration to the problem Bw = b. There exists a near null space into which the error vectors migrate with iteration, after which Gauss-Seidel iteration stalls. The authors attempt to accelerate convergence via a multilevel scheme, based on the principles of McCormick's Multilevel Projection Method (PML). Coarsening is achieved by thickening the rays, which results in a much smaller discretization of an optimal grid and a halving of the number of variables. This approach satisfies all the requirements of the PML scheme. They have observed that a multilevel approach based on this idea accelerates convergence at least to the point where noise in the data dominates.
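A plain Gauss-Seidel sweep of the kind whose stalling behavior the abstract studies can be sketched as follows; the matrix here is a tiny diagonally dominant stand-in, not an actual strip-intersection matrix:

```python
import numpy as np

# Small symmetric, diagonally dominant stand-in for the strip matrix B
# (diagonal dominance guarantees Gauss-Seidel convergence).
B = np.array([
    [4.0, 1.0, 0.0],
    [1.0, 4.0, 1.0],
    [0.0, 1.0, 4.0],
])
b = np.array([1.0, 2.0, 3.0])

def gauss_seidel(B, b, iters=100):
    """Sweep through the unknowns, using updated values immediately."""
    w = np.zeros_like(b)
    for _ in range(iters):
        for i in range(len(b)):
            w[i] = (b[i] - B[i, :i] @ w[:i] - B[i, i + 1:] @ w[i + 1:]) / B[i, i]
    return w

w = gauss_seidel(B, b)
```

On the actual reconstruction matrices, smooth error components survive many such sweeps, which is what motivates the multilevel acceleration in the abstract.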
New method to analyze internal disruptions with tomographic reconstructions
Tanzi, C.P.; de Blank, H.J.
1997-03-01
Sawtooth crashes have been investigated on the Rijnhuizen Tokamak Project (RTP) [N. J. Lopes Cardozo et al., Proceedings of the 14th International Conference on Plasma Physics and Controlled Nuclear Fusion Research, Würzburg, 1992 (International Atomic Energy Agency, Vienna, 1993), Vol. 1, p. 271]. Internal disruptions in tokamak plasmas often exhibit an m=1 poloidal mode structure prior to the collapse, which can be clearly identified by means of multicamera soft x-ray diagnostics. In this paper, tomographic reconstructions of such m=1 modes are analyzed with a new method, based on magnetohydrodynamic (MHD) invariants computed from the two-dimensional emissivity profiles, which quantifies the amount of profile flattening not only after the crash but also during the precursor oscillations. The results are interpreted by comparing them with two models which simulate the measurements of the m=1 redistribution of soft x-ray emissivity prior to the sawtooth crash. One model is based on the magnetic reconnection model of Kadomtsev; the other involves ideal MHD motion only. In cases where differences in magnetic topology between the two models cannot be seen in the tomograms, the analysis of profile flattening has an advantage. The analysis shows that in RTP the clearly observed m=1 displacement of some sawteeth requires the presence of convective ideal MHD motion, whereas other precursors are consistent with magnetic reconnection of up to 75% of the magnetic flux within the q=1 surface. The possibility of ideal interchange combined with enhanced cross-field transport is not excluded. © 1997 American Institute of Physics.
Source reconstruction for neutron coded-aperture imaging: A sparse method.
Wang, Dongming; Hu, Huasi; Zhang, Fengna; Jia, Qinggang
2017-08-01
Neutron coded-aperture imaging has been developed as an important diagnostic for inertial fusion studies in recent decades. It is used to measure the distribution of neutrons produced in deuterium-tritium plasma. Source reconstruction is an essential part of the coded-aperture imaging. In this paper, we applied a sparse reconstruction method to neutron source reconstruction. This method takes advantage of the sparsity of the source image. Monte Carlo neutron transport simulations were performed to obtain the system response. An interpolation method was used while obtaining the spatially variant point spread functions on each point of the source in order to reduce the number of point spread functions that needs to be calculated by the Monte Carlo method. Source reconstructions from simulated images show that the sparse reconstruction method can result in higher signal-to-noise ratio and less distortion at a relatively high statistical noise level.
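The sparse reconstruction in the abstract exploits an l1-type prior on the source image. As an illustration of the general idea (ISTA on a toy linear system with a sparse source, not the authors' solver; the system matrix and source are synthetic):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical system response (stacked point-spread functions) and a
# sparse "source" to recover from fewer measurements than unknowns.
A = rng.standard_normal((30, 50))
x_true = np.zeros(50)
x_true[[5, 20, 33]] = [3.0, -2.0, 4.0]
y = A @ x_true

# ISTA: proximal gradient descent for  min 0.5*||Ax - y||^2 + lam*||x||_1
lam = 0.1
L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
x = np.zeros(50)
for _ in range(500):
    g = A.T @ (A @ x - y)                                   # gradient of the data term
    z = x - g / L                                           # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-threshold (prox of l1)
```

The soft-thresholding step is what drives most coefficients exactly to zero, giving the sparsity that suppresses noise and distortion in the reconstructed source.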
Reconstruction method for data protection in telemedicine systems
NASA Astrophysics Data System (ADS)
Buldakova, T. I.; Suyatinov, S. I.
2015-03-01
This report proposes an approach to protecting transmitted data by creating paired symmetric keys for the sensor and the receiver. Since biosignals are unique to each person, appropriate processing of them yields the information needed to create cryptographic keys. The processing is based on reconstructing a mathematical model that generates time series diagnostically equivalent to the initial biosignals. Information about the model is transmitted to the receiver, where the physiological time series are restored using the reconstructed model. Thus, the information about the structure and parameters of the biosystem model obtained in the reconstruction process can be used not only for diagnostics, but also for protecting transmitted data in telemedicine systems.
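One way to realize the pairing described above is to quantize the reconstructed model parameters and hash them into a shared key; this is a hypothetical sketch (the function, quantization step, and parameter values are invented), not the authors' scheme:

```python
import hashlib
import struct

def key_from_model(params, quantize=1e-3):
    """Derive a symmetric key from reconstructed model parameters.

    Quantization makes the key tolerant to small numerical differences
    between the sensor-side and receiver-side reconstructions.
    """
    q = [round(p / quantize) for p in params]
    payload = struct.pack(f"{len(q)}q", *q)
    return hashlib.sha256(payload).hexdigest()

# Hypothetical parameters of a reconstructed biosignal model.
sensor_params = [0.4213, -1.1080, 2.5001]
receiver_params = [0.42134, -1.10797, 2.50008]  # reconstructed remotely

k1 = key_from_model(sensor_params)
k2 = key_from_model(receiver_params)  # matches k1 despite small differences
```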
NASA Astrophysics Data System (ADS)
Chen, Xueli; Yang, Defu; Zhang, Qitan; Liang, Jimin
2014-05-01
Even though bioluminescence tomography (BLT) exhibits significant potential and wide applications in macroscopic in vivo imaging of small animals, the inverse reconstruction remains a tough problem that has plagued researchers in related areas. The ill-posedness of the inverse reconstruction arises from insufficient measurements and modeling errors, so it cannot be solved directly. In this study, an l1/2-regularization-based numerical method was developed for effective reconstruction of BLT. In the method, the inverse reconstruction of BLT is formulated as an l1/2 regularization problem, and the weighted interior-point algorithm (WIPA) is applied to solve it by transforming it into a series of l1 regularizers. The feasibility and effectiveness of the proposed method were demonstrated with numerical simulations on a digital mouse. Stability verification experiments further illustrated the robustness of the proposed method under different levels of Gaussian noise.
Wu, Xuecheng; Wu, Yingchun; Yang, Jing; Wang, Zhihua; Zhou, Binwu; Gréhan, Gérard; Cen, Kefa
2013-05-20
The application of a modified convolution method to reconstruct digital inline holograms of particles illuminated by an elliptical Gaussian beam is investigated. Based on an analysis of the formation of the particle hologram using the Collins formula, the convolution method is modified to compensate for the astigmatism by adding two scaling factors. Both simulated and experimental holograms of transparent droplets and opaque particles are used to test the algorithm, and the reconstructed images are compared with those obtained using FRFT reconstruction. Results show that the modified convolution method can accurately reconstruct the particle image. This method has the advantage that reconstructed images at different depth positions have the same size and resolution as the hologram. This work shows that digital inline holography has great potential for particle diagnostics in curved containers.
Wang, Jinguo; Zhao, Zhiqin; Song, Jian; Chen, Guoping; Nie, Zaiping; Liu, Qing-Huo
2015-05-01
An iterative reconstruction method has been previously reported by the authors of this paper. However, it was demonstrated solely with numerical simulations, and it is essential to apply it under practical conditions. The objective of this work is to validate the capability of the iterative reconstruction method to reduce the effects of acoustic heterogeneity using experimental data in microwave-induced thermoacoustic tomography. Most existing reconstruction methods must be combined with ultrasonic measurement technology to quantitatively measure the velocity distribution of the heterogeneity, which increases system complexity. Unlike existing methods, the iterative reconstruction method combines the time reversal mirror technique, the fast marching method, and the simultaneous algebraic reconstruction technique to iteratively estimate the velocity distribution of heterogeneous tissue using only the measured data. The estimated velocity distribution is then used to reconstruct a highly accurate image of the microwave absorption distribution. Experiments in which a target is placed in an acoustically heterogeneous environment were performed to validate the method. Using the estimated velocity distribution, the target can be reconstructed with better shape and higher image contrast than with a homogeneous velocity distribution, and the distortions caused by the acoustic heterogeneity are efficiently corrected. The advantage of the iterative reconstruction method over existing correction methods is that it improves the quality of the microwave absorption image without increasing system complexity.
Tamada, Yoshinori; Imoto, Seiya; Araki, Hiromitsu; Nagasaki, Masao; Print, Cristin; Charnock-Jones, D Stephen; Miyano, Satoru
2011-01-01
We present a novel algorithm to estimate genome-wide gene networks consisting of more than 20,000 genes from gene expression data using nonparametric Bayesian networks. Due to the difficulty of learning Bayesian network structures, existing algorithms cannot be applied to more than a few thousand genes. Our algorithm overcomes this limitation by repeatedly estimating subnetworks in parallel for genes selected by neighbor node sampling. Through numerical simulation, we confirmed that our algorithm outperformed a heuristic algorithm in a shorter time. We applied our algorithm to microarray data from human umbilical vein endothelial cells (HUVECs) treated with siRNAs, to construct a human genome-wide gene network, which we compared to a small gene network estimated for the genes extracted using a traditional bioinformatics method. The results showed that our genome-wide gene network contains many features of the small network, as well as others that could not be captured during the small network estimation. The results also revealed master-regulator genes that are not in the small network but that control many of the genes in the small network. These analyses were impossible to realize without our proposed algorithm.
Comparison of reconstruction methods and quantitative accuracy in Siemens Inveon PET scanner
NASA Astrophysics Data System (ADS)
Ram Yu, A.; Kim, Jin Su; Kang, Joo Hyun; Moo Lim, Sang
2015-04-01
PET reconstruction is key to the quantification of PET data. To our knowledge, no comparative study of reconstruction methods has been performed to date. In this study, we compared reconstruction methods with various filters in terms of their spatial resolution, non-uniformities (NU), recovery coefficients (RCs), and spillover ratios (SORs). In addition, the linearity between measured and true radioactivity concentrations was assessed. A Siemens Inveon PET scanner was used in this study. Spatial resolution was measured according to the NEMA standard using a 1 mm3 18F point source. Image quality was assessed in terms of NU, RC and SOR. To measure the effect of reconstruction algorithms and filters, data were reconstructed using FBP, the 3D reprojection algorithm (3DRP), ordered subset expectation maximization 2D (OSEM 2D), and maximum a posteriori (MAP) with various filters or smoothing factors (β). To assess the linearity of reconstructed radioactivity, an image-quality phantom filled with 18F was used with FBP, OSEM and MAP (β = 1.5 and 5 × 10-5). The highest achievable volumetric resolution was 2.31 mm3 and the highest RCs were obtained when OSEM 2D was used. SOR was 4.87% for air and 3.97% for water when OSEM 2D reconstruction was used. The measured radioactivity of the reconstructed image was proportional to the injected radioactivity below 16 MBq/ml when the FBP or OSEM 2D reconstruction methods were used. By contrast, when the MAP reconstruction method was used, the activity of the reconstructed image increased proportionally regardless of the amount of injected radioactivity. When OSEM 2D or FBP were used, the measured radioactivity concentration was reduced by 53% compared with the true injected radioactivity for radioactivity <16 MBq/ml. The OSEM 2D reconstruction method provides the highest achievable volumetric resolution and the highest RC among all the tested methods and yields a linear relation between the measured and true
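The figures of merit compared above can be illustrated on toy voxel values; the formulas below are simplified stand-ins (the NEMA NU-4 standard defines NU, RC and SOR with specific ROI protocols, and all numbers here are invented):

```python
import numpy as np

# Toy reconstructed-image voxel values (kBq/ml) for image-quality regions.
true_activity = 100.0
hot_rod = np.array([92.0, 95.0, 90.0, 93.0])      # hot-insert voxels
uniform = np.array([98.0, 101.0, 99.0, 102.0])    # uniform-region voxels
cold_water = np.array([4.0, 3.5, 4.5, 4.0])       # cold (water) insert voxels

rc = hot_rod.mean() / true_activity                     # recovery coefficient
sor = cold_water.mean() / uniform.mean()                # spillover ratio
nu = (uniform.max() - uniform.min()) / uniform.mean()   # a simple non-uniformity index
```

Partial-volume blurring pushes RC below 1 in small hot regions, while spill-in of counts raises SOR above 0 in cold regions; both depend on the reconstruction algorithm and filter, which is what the study quantifies.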
Motion correction based reconstruction method for compressively sampled cardiac MR imaging.
Ahmed, Abdul Haseeb; Qureshi, Ijaz M; Shah, Jawad Ali; Zaheer, Muhammad
2017-02-01
Respiratory motion during magnetic resonance (MR) acquisition causes strong blurring artifacts in the reconstructed images. These artifacts become more pronounced with fast imaging reconstruction techniques such as compressed sensing (CS). Recently, CS-based MR reconstruction techniques have been used to provide good-quality sparse images from highly under-sampled k-space data. To maximize the benefits of CS, it should clearly be applied to motion-corrected samples. In this paper, we propose a novel CS-based motion-corrected image reconstruction technique. First, the k-space data are assigned to different respiratory states with the help of a frequency-domain phase correlation method. Then, multiple sparsity constraints are used to provide good-quality reconstructed cardiac cine images from the highly under-sampled k-space data. The proposed method exploits the multiple sparsity constraints, in combination with a demons-based registration technique and a novel reconstruction technique, to provide the final motion-free images. The proposed method is simpler to implement in clinical settings than existing motion-corrected methods. The performance of the proposed method is examined using simulated and clinical data. Results show that it outperforms CS-based reconstruction of cardiac cine images without motion correction. Different acceleration rates are used to demonstrate the performance of the proposed method. Copyright © 2016 Elsevier Inc. All rights reserved.
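The sparsity-driven recovery step that CS-based methods like this one rely on can be sketched with a minimal iterative soft-thresholding (ISTA) loop on a toy 1-D problem. Everything below — the sampling pattern, the synthetic sparse signal, the threshold and the iteration count — is an illustrative assumption, not the authors' cardiac pipeline (which adds respiratory binning, demons registration, and multiple sparsity constraints).

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 32
x_true = np.zeros(n)
x_true[[5, 20, 41]] = [1.0, -0.7, 0.5]           # sparse "image"

rows = rng.choice(n, size=m, replace=False)      # under-sampled k-space lines
F = np.fft.fft(np.eye(n)) / np.sqrt(n)           # unitary DFT matrix
A = F[rows]                                      # measurement operator
y = A @ x_true                                   # acquired k-space samples

def soft(z, t):
    """Complex soft-thresholding: the sparsity-promoting proximal step."""
    mag = np.abs(z)
    return z * np.maximum(1.0 - t / np.maximum(mag, 1e-12), 0.0)

x = np.zeros(n, dtype=complex)
for _ in range(200):                             # ISTA: gradient step + shrinkage
    x = soft(x + A.conj().T @ (y - A @ x), 0.01)

support = set(np.argsort(-np.abs(x))[:3])        # recovered nonzero locations
```

With half of the k-space lines retained, the three nonzero locations are recovered; in practice the sparsifying transform is a wavelet or temporal transform rather than the identity.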
GFD-Net: A novel semantic similarity methodology for the analysis of gene networks.
Díaz-Montaña, Juan J; Díaz-Díaz, Norberto; Gómez-Vela, Francisco
2017-04-01
Since the popularization of biological network inference methods, it has become crucial to create methods to validate the resulting models. Here we present GFD-Net, the first methodology that applies the concept of semantic similarity to gene network analysis. GFD-Net combines the concept of semantic similarity with the use of gene network topology to analyze the functional dissimilarity of gene networks based on the Gene Ontology (GO). The main innovation of GFD-Net lies in the way that semantic similarity is used to analyze gene networks while taking into account the network topology. GFD-Net selects a functionality for each gene (specified by a GO term), weights each edge according to the dissimilarity between the nodes at its ends, and calculates a quantitative measure of the network functional dissimilarity, i.e. a quantitative value of the degree of dissimilarity between the connected genes. The robustness of GFD-Net as a gene network validation tool was demonstrated by performing a ROC analysis on several network repositories. Furthermore, a well-known network was analyzed, showing that GFD-Net can also be used to infer knowledge. The relevance of GFD-Net becomes more evident in Section "GFD-Net applied to the study of human diseases", where an example of how GFD-Net can be applied to the study of human diseases is presented. GFD-Net is available as an open-source Cytoscape app that offers a user-friendly interface to configure and execute the algorithm, as well as the ability to visualize and interact with the results (http://apps.cytoscape.org/apps/gfdnet). Copyright © 2017 Elsevier Inc. All rights reserved.
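The edge-weighting idea — score each edge by the semantic dissimilarity of the GO terms chosen for its endpoint genes, then aggregate over the network — can be sketched in a few lines. The ancestor sets, gene-to-term assignments and Jaccard similarity below are made-up stand-ins for the GO-based semantic similarity measures GFD-Net actually uses.

```python
# Toy sketch of edge-weighted functional dissimilarity on a gene network.
# All GO terms, ancestor sets and gene assignments are hypothetical.
ancestors = {
    "GO:A": {"root", "GO:A"},
    "GO:B": {"root", "GO:A", "GO:B"},   # child of GO:A
    "GO:C": {"root", "GO:C"},           # unrelated branch
}
gene_term = {"g1": "GO:A", "g2": "GO:B", "g3": "GO:C"}
edges = [("g1", "g2"), ("g2", "g3")]

def term_similarity(t1, t2):
    """Jaccard overlap of ancestor sets: a simple semantic similarity."""
    a, b = ancestors[t1], ancestors[t2]
    return len(a & b) / len(a | b)

def network_dissimilarity(edges, gene_term):
    """Mean edge dissimilarity: 0 means a functionally coherent network."""
    weights = [1.0 - term_similarity(gene_term[u], gene_term[v])
               for u, v in edges]
    return sum(weights) / len(weights)

score = network_dissimilarity(edges, gene_term)
```

Here the g1-g2 edge (parent/child terms) is weighted 1/3 and the cross-branch g2-g3 edge 3/4, so topologically adjacent genes with divergent functions drive the score up.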
A Novel Parallel Method for Speckle Masking Reconstruction Using the OpenMP
NASA Astrophysics Data System (ADS)
Li, Xuebao; Zheng, Yanfang
2016-08-01
High resolution reconstruction technology, such as speckle masking, is developed to help enhance the spatial resolution of observational images from ground-based solar telescopes. Near real-time reconstruction performance has been achieved on a high performance cluster using the Message Passing Interface (MPI). However, much time is spent in reconstructing solar subimages in such a speckle reconstruction. We design and implement a novel parallel method for speckle masking reconstruction of solar subimages on a shared memory machine using OpenMP. Real tests are performed to verify the correctness of our code. We present the details of several parallel reconstruction steps. The parallel implementation of the various modules shows a large speed increase compared to the single-thread serial implementation, and a speedup of about 2.5 is achieved in one subimage reconstruction. The timing result for reconstructing one subimage with 256×256 pixels shows a clear advantage with a greater number of threads. This novel parallel method can be valuable in real-time reconstruction of solar images, especially after porting to a high performance cluster.
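The shared-memory, per-subimage parallelism described above can be mimicked with a thread pool standing in for OpenMP worker threads: split the frame into subimages, reconstruct each independently, and reassemble. The per-tile operation below is a trivial placeholder for the actual speckle-masking reconstruction, and the tile layout is an illustrative assumption.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def reconstruct_tile(tile):
    # Stand-in for one subimage's reconstruction (the real step involves
    # bispectrum/speckle-masking processing of many short-exposure frames).
    return tile - tile.mean()

img = np.arange(64.0).reshape(8, 8)
coords = [(r, c) for r in (0, 4) for c in (0, 4)]        # 4 subimages
tiles = [img[r:r + 4, c:c + 4] for r, c in coords]

# Threads play the role of OpenMP threads on a shared-memory machine.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(reconstruct_tile, tiles))

recon = np.zeros_like(img)
for (r, c), t in zip(coords, results):                   # reassemble mosaic
    recon[r:r + 4, c:c + 4] = t
```

Because the subimages are independent, the parallel result is identical to serial processing; the speedup comes purely from concurrency.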
Image reconstruction method IRBis for optical/infrared long-baseline interferometry
NASA Astrophysics Data System (ADS)
Hofmann, Karl-Heinz; Heininger, Matthias; Schertl, Dieter; Weigelt, Gerd; Millour, Florentin; Berio, Philippe
2016-07-01
IRBis is an image reconstruction method for optical/infrared long-baseline interferometry. IRBis can reconstruct images from (a) measured visibilities and closure phases, or from (b) measured complex visibilities (i.e. the Fourier phases and visibilities). The applied optimization routine ASA_CG is based on conjugate gradients. The method allows the user to implement different regularizers, for example maximum entropy, smoothness, or total variation, and to apply residual ratios as an additional metric for goodness of fit. In addition, IRBis allows the user to change the following reconstruction parameters: (a) the FOV of the area to be reconstructed, (b) the size of the pixel grid used, (c) the size of a binary mask in image space allowing reconstructed intensities > 0 within the binary mask only, and (d) the strength of the regularization. The two main reconstruction parameters are the size of the binary mask in image space (c) and the strength of the regularization (d). Several values of these two parameters are tested within the algorithm. The quality of the different reconstructions obtained is roughly estimated by evaluating the differences between the measured data and the reconstructed image (using the reduced χ2 values and the residual ratios). The best-quality reconstruction and a few reconstructions sorted according to their quality are provided to the user as resulting reconstructions. We describe the theory of IRBis and present several applications to simulated interferometric data and data of real astronomical objects: (a) We have investigated image reconstruction experiments on MATISSE target candidates by computer simulations. We have modeled gaps in a disk of a young stellar object and have simulated interferometric data (squared visibilities and closure phases) with a signal-to-noise ratio as expected for MATISSE observations. We have performed image reconstruction experiments with this model for different flux levels of the target and
NASA Astrophysics Data System (ADS)
Althobaiti, Murad; Vavadi, Hamed; Zhu, Quing
2017-02-01
Ultrasound-guided diffuse optical tomography (DOT) is a promising imaging technique that maps hemoglobin concentrations of breast lesions to assist ultrasound (US) in cancer diagnosis and treatment monitoring. The accurate recovery of breast lesion optical properties requires an effective image reconstruction method. We introduce a reconstruction approach in which US images are encoded as prior information for regularization of the inversion matrix. The framework of this approach is based on the image reconstruction package NIRFAST. We compare this approach to the US-guided dual-zone mesh reconstruction method developed in our laboratory, which is based on the Born approximation and conjugate gradient optimization. Results were evaluated using phantoms and clinical data. This method improves classification of malignant and benign lesions by increasing the malignant-to-benign lesion absorption contrast. The results also show improvements in reconstructed lesion shapes and the spatial distribution of absorption maps.
NASA Astrophysics Data System (ADS)
Nishida, Hidetoshi
In order to reconstruct arbitrarily shaped incompressible velocity fields contaminated by noise, a new data-processing fluid dynamics (DFD) approach based upon the seamless immersed boundary method is proposed. The noisy velocity field is reconstructed by Helmholtz decomposition. The performance of DFD is demonstrated first for the reconstruction of velocity fields with noise and erroneous vectors. The seamless immersed boundary method is then incorporated into the velocity reconstruction for complicated flow geometries. Some fundamental flow fields, i.e., square cavity flows with a circular cylinder and with a square cylinder, are considered. As a result, it is concluded that the present DFD approach based upon the seamless immersed boundary method is a versatile technique for velocity reconstruction of arbitrarily shaped incompressible velocity fields with noise.
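On a periodic grid, the Helmholtz projection that keeps the divergence-free part of a velocity field (and discards curl-free "noise") can be written with FFTs. The periodic boundary and the synthetic gradient-type noise below are simplifying assumptions; the paper's DFD additionally handles immersed boundaries, which this sketch omits.

```python
import numpy as np

def divergence_free_part(u, v):
    """Project a 2-D periodic velocity field onto its divergence-free
    (solenoidal) component via a Fourier-space Helmholtz decomposition."""
    n = u.shape[0]
    k = np.fft.fftfreq(n) * n                  # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                             # avoid division by zero at DC
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    div = kx * uh + ky * vh                    # spectral divergence (up to i)
    uh -= kx * div / k2                        # subtract the gradient part
    vh -= ky * div / k2
    return np.fft.ifft2(uh).real, np.fft.ifft2(vh).real

# Sample: a solenoidal base flow plus a curl-free (gradient-type) disturbance
n = 32
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.cos(X) * np.sin(Y) + 0.3 * np.sin(X)    # second term is curl-free noise
v = -np.sin(X) * np.cos(Y) + 0.3 * np.sin(Y)
u_clean, v_clean = divergence_free_part(u, v)
```

Because the disturbance (0.3 sin X, 0.3 sin Y) is exactly a gradient field, the projection removes it and returns the incompressible base flow.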
NASA Astrophysics Data System (ADS)
Guo, Hongbo; He, Xiaowei; Liu, Muhan; Zhang, Zeyu; Hu, Zhenhua; Tian, Jie
2017-03-01
Cerenkov luminescence tomography (CLT), a promising optical molecular imaging modality, can be applied to cancer diagnosis and therapy. Most research on CLT reconstruction is based on the finite element method (FEM) framework. However, the quality of the FEM mesh grid remains a vital factor restricting the accuracy of the CLT reconstruction result. In this paper, we propose a multi-grid finite element method framework that is able to improve the accuracy of reconstruction. Meanwhile, the multilevel scheme adaptive algebraic reconstruction technique (MLS-AART), based on a modified iterative algorithm, is applied to further improve the reconstruction accuracy. In numerical simulation experiments, the feasibility of our proposed method was evaluated. Results showed that the multi-grid strategy could obtain 3D spatial information of the Cerenkov source more accurately compared with the traditional single-grid FEM.
Keller, Susanna R.; Lee, Jae K.
2017-01-01
Different computational approaches have been examined and compared for inferring network relationships from time-series genomic data on human disease mechanisms under the recent Dialogue on Reverse Engineering Assessment and Methods (DREAM) challenge. Many of these approaches infer all possible relationships among all candidate genes, often resulting in extremely crowded candidate network relationships with many more false positives than true positives. To overcome this limitation, we introduce a novel approach, Module Anchored Network Inference (MANI), that constructs networks by sequentially analyzing small adjacent building blocks (modules). Using MANI, we inferred a 7-gene adipogenesis network based on time-series gene expression data during adipocyte differentiation. MANI was also applied to infer two 10-gene networks based on time-course perturbation datasets from the DREAM3 and DREAM4 challenges. MANI inferred and distinguished serial, parallel, and time-dependent gene interactions and network cascades well in these applications, showing superior performance to other in silico network inference techniques for discovering and reconstructing gene network relationships. PMID:28197408
A method for investigating system matrix properties in optimization-based CT reconstruction
NASA Astrophysics Data System (ADS)
Rose, Sean D.; Sidky, Emil Y.; Pan, Xiaochuan
2016-04-01
Optimization-based iterative reconstruction methods have shown much promise for a variety of applications in X-ray computed tomography (CT). In these reconstruction methods, the X-ray measurement is modeled as a linear mapping from a finite-dimensional image space to a finite-dimensional data space. This mapping depends on a number of factors, including the basis functions used for image representation and the method by which the matrix representing this mapping is generated. Understanding the properties of this linear mapping, and how it depends on our choice of parameters, is fundamental to optimization-based reconstruction. In this work, we confine our attention to a pixel basis and propose a method to investigate the effect of pixel size in optimization-based reconstruction. The proposed method provides insight into the tradeoff between higher-resolution image representation and matrix conditioning. We demonstrate this method for a particular breast CT system geometry. We find that the images obtained from accurate solution of a least-squares reconstruction optimization problem have high sensitivity to pixel size within certain regimes. We propose two methods by which this sensitivity can be reduced and demonstrate their efficacy. Our results indicate that the choice of pixel size in optimization-based reconstruction can have great impact on the quality of the reconstructed image, and that understanding the properties of the linear mapping modeling the X-ray measurement can help guide this choice.
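The resolution-versus-conditioning tradeoff can be illustrated with a toy 1-D "scanner": a fixed number of detectors with a smooth response integrating a pixel-basis object. Refining the pixel grid makes neighbouring columns of the system matrix nearly dependent, so its condition number grows. The geometry and Gaussian kernel here are illustrative assumptions, not the breast CT system of the paper.

```python
import numpy as np

def system_matrix(n_pixels, n_dets=16, width=0.15):
    """Toy 1-D system matrix: each detector integrates a smooth (Gaussian)
    response against a pixel-basis expansion of the object."""
    det = np.linspace(0.0, 1.0, n_dets)
    pix = (np.arange(n_pixels) + 0.5) / n_pixels          # pixel centres
    A = np.exp(-((det[:, None] - pix[None, :]) ** 2) / width**2)
    return A / n_pixels                                   # pixel-area weighting

cond_coarse = np.linalg.cond(system_matrix(8))            # coarse pixels
cond_fine = np.linalg.cond(system_matrix(32))             # fine pixels
```

A finer grid represents the image better but leaves the least-squares solution far more sensitive to data perturbations, which is the tradeoff the paper's method probes.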
Gene Networks Underlying Chronic Sleep Deprivation in Drosophila
2014-06-15
Studies of the gene network affected by sleep deprivation and stress in the fruit fly Drosophila have revealed, in stressed flies, the involvement of axonogenesis as a process regulated by these stressors. This goes beyond the current hypothesis of sleep as functioning ...
3D Image Reconstruction: Hamiltonian Method for Phase Recovery
Blankenbecler, Richard
2003-03-13
The problem of reconstructing a positive semi-definite 3-D image from measurements of the magnitude of its 2-D Fourier transform at a series of orientations is explored. The phase of the Fourier transform is not measured. The algorithm developed here utilizes a Hamiltonian, or cost function, that at its minimum provides the solution to the stated problem. The energy function includes both data and physical constraints on the charge distribution or image.
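The Hamiltonian idea — penalize the mismatch between the measured Fourier magnitudes and those of the current image estimate, and minimize under a positivity constraint — can be sketched in 1-D with projected gradient descent. The signal, step size and iteration count are illustrative assumptions; the paper's 3-D, multi-orientation setting and its full set of physical constraints are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 32
rho_true = np.maximum(rng.normal(size=n), 0.0)   # nonnegative "charge density"
b = np.abs(np.fft.fft(rho_true))                 # measured magnitudes (no phase)

def cost(rho):
    """Hamiltonian-style data term: Fourier-magnitude mismatch."""
    return np.sum((np.abs(np.fft.fft(rho)) - b) ** 2)

def grad(rho):
    F = np.fft.fft(rho)
    mag = np.maximum(np.abs(F), 1e-12)
    # Gradient of sum (|F|-b)^2; ifft (times n) acts as the adjoint of fft.
    return 2.0 * n * np.fft.ifft((1.0 - b / mag) * F).real

rho = rng.random(n)                              # random nonnegative start
c0 = cost(rho)
for _ in range(200):
    rho = np.maximum(rho - 1e-3 * grad(rho), 0.0)  # step + positivity projection
c1 = cost(rho)
```

Descending this cost while projecting onto the positivity constraint drives the estimate toward an image consistent with the phaseless data; the full problem is nonconvex, which is why the paper builds richer constraints into the energy function.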
An automated 3D reconstruction method of UAV images
NASA Astrophysics Data System (ADS)
Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping
2015-10-01
In this paper, a novel fully automated 3D reconstruction approach based on images from a low-altitude unmanned aerial vehicle system (UAV) is presented, which does not require prior camera calibration or any other external knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large-scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combinations of images. A high-resolution digital surface model of the study area is produced based on the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction from low-altitude UAV images, and has great potential for the acquisition of spatial information in large-scale mapping, being especially suitable for rapid response and precise modelling in disaster emergencies.
Qian Weixin; Qi Shuangxi; Wang Wanli; Cheng Jinming; Liu Dongbing
2011-09-15
Neutron penumbral imaging is a significant diagnostic technique in laser-driven inertial confinement fusion experiments. It is very important to develop new reconstruction methods to improve the resolution of neutron penumbral imaging. A new nonlinear reconstruction method based on total variation (TV) regularization is proposed in this paper. In the new method, a TV norm is used as the regularization term to construct a smoothing functional for penumbral image reconstruction; in this way, the problem of penumbral image reconstruction is transformed into a functional minimization problem. In addition, a fixed-point iteration scheme is introduced to solve this minimization problem. The numerical experimental results show that, compared to a linear reconstruction method based on the Wiener filter, the TV-regularized nonlinear reconstruction method improves the quality of the reconstructed image, with better noise smoothing and edge preservation. Meanwhile, it can also obtain a spatial resolution of 5 μm, which is higher than that of the Wiener method.
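The TV-regularized functional and a simple gradient (fixed-point) iteration can be sketched in 1-D with a smoothed TV norm, which makes the functional differentiable. The blur kernel, noise level and parameters below are illustrative stand-ins, not the actual penumbral system model or the paper's specific fixed-point scheme.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64
x_true = np.zeros(n)
x_true[20:44] = 1.0                                # piecewise-constant source
kernel = np.array([0.25, 0.5, 0.25])               # toy blur (penumbra stand-in)
def H(x):
    return np.convolve(x, kernel, mode="same")     # symmetric, so self-adjoint
y = H(x_true) + 0.05 * rng.normal(size=n)          # blurred, noisy measurement

lam, eps, step = 0.05, 0.01, 0.05                  # eps smooths |.| in the TV term

def tv_grad(x):
    d = np.diff(x)
    w = d / np.sqrt(d**2 + eps**2)                 # derivative of smoothed |d|
    g = np.zeros_like(x)
    g[:-1] -= w
    g[1:] += w
    return g

def objective(x):
    return 0.5 * np.sum((H(x) - y)**2) + lam * np.sum(np.sqrt(np.diff(x)**2 + eps**2))

x = y.copy()
J0 = objective(x)
for _ in range(400):                               # gradient fixed-point iteration
    x = x - step * (H(H(x) - y) + lam * tv_grad(x))
J1 = objective(x)
```

Minimizing the data term plus the TV term smooths the noise while preserving the sharp edges that carry the source-size information, which is the behaviour reported in the abstract.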
Sparse Reconstruction Techniques in MRI: Methods, Applications, and Challenges to Clinical Adoption
Yang, Alice Chieh-Yu; Kretzler, Madison; Sudarski, Sonja; Gulani, Vikas; Seiberlich, Nicole
2016-01-01
The family of sparse reconstruction techniques, including the recently introduced compressed sensing framework, has been extensively explored to reduce scan times in Magnetic Resonance Imaging (MRI). While there are many different methods that fall under the general umbrella of sparse reconstructions, they all rely on the idea that a priori information about the sparsity of MR images can be employed to reconstruct full images from undersampled data. This review describes the basic ideas behind sparse reconstruction techniques, how they could be applied to improve MR imaging, and the open challenges to their general adoption in a clinical setting. The fundamental principles underlying different classes of sparse reconstruction techniques are examined, and the requirements that each makes on the undersampled data are outlined. Applications that could potentially benefit from the accelerations that sparse reconstructions could provide are described, and clinical studies using sparse reconstructions are reviewed. Lastly, technical and clinical challenges to widespread implementation of sparse reconstruction techniques, including optimization, reconstruction times, artifact appearance, and comparison with current gold standards, are discussed. PMID:27003227
3D reconstruction method based on time-division multiplexing using multiple depth cameras
NASA Astrophysics Data System (ADS)
Kang, Ji-Hoon; Lee, Dong-Su; Park, Min-Chul; Lee, Kwang-Hoon
2014-06-01
This article proposes a 3D reconstruction method using multiple depth cameras. Since a depth camera acquires depth information from a single viewpoint, it is inadequate for 3D reconstruction on its own. In order to solve this problem, we used multiple depth cameras: for 3D scene reconstruction, depth information is acquired from different viewpoints with multiple cameras. However, when using multiple depth cameras, it is difficult to acquire accurate depth information because of interference among the cameras. To solve this problem, we propose a time-division multiplexing method in which the depth information is acquired from the different cameras sequentially. After acquiring the depth images, we extracted features using the Fast Point Feature Histogram (FPFH) descriptor. Then, we performed 3D registration with Sample Consensus Initial Alignment (SAC-IA). We reconstructed 3D human bodies with our system and measured body sizes to evaluate the accuracy of the 3D reconstruction.
The New Method of Tsunami Source Reconstruction With r-Solution Inversion Method
NASA Astrophysics Data System (ADS)
Voronina, T. A.; Romanenko, A. A.
2016-12-01
Application of the r-solution method to reconstructing the initial tsunami waveform is discussed. This methodology is based on the inversion of remote measurements of water-level data. The wave propagation is considered within the scope of linear shallow-water theory. The ill-posed inverse problem in question is regularized by means of a least-squares inversion using the truncated Singular Value Decomposition method. As a result of the numerical process, an r-solution is obtained. The proposed method allows one to control the instability of the numerical solution and to obtain an acceptable result in spite of the ill-posedness of the problem. Applying this methodology to reconstruct the initial waveform of the 2013 Solomon Islands tsunami validates the theoretical conclusions drawn for synthetic data and a model tsunami source: the inversion result depends strongly on the noisiness of the data and on the azimuthal and temporal coverage of the recording stations with respect to the source area. Furthermore, it is possible to make a preliminary selection of the most informative set of the available recording stations to use in the inversion process.
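The r-solution idea — keep only the leading r singular directions of an ill-conditioned forward operator when inverting noisy data — can be sketched on a small synthetic system. The operator, singular values and noise below are constructed for illustration, not derived from shallow-water wave propagation.

```python
import numpy as np

# Ill-conditioned forward operator with one nearly-dead singular direction.
U, _ = np.linalg.qr(np.arange(1.0, 17.0).reshape(4, 4) + np.eye(4))
V, _ = np.linalg.qr(np.ones((4, 4)) + 2 * np.eye(4))
s = np.array([1.0, 0.5, 0.1, 1e-9])
A = U @ np.diag(s) @ V.T

x_true = V @ np.array([1.0, -0.5, 0.3, 0.0])     # no energy in the bad direction
noise = 1e-4 * U @ np.array([0.5, -0.5, 0.3, 1.0])
y = A @ x_true + noise                           # noisy "water-level" data

def r_solution(A, y, r):
    """Least-squares solution restricted to the first r singular directions."""
    U, s, Vt = np.linalg.svd(A)
    coeff = (U.T @ y)[:r] / s[:r]
    return Vt[:r].T @ coeff

x_full = np.linalg.solve(A, y)                   # naive inversion blows up noise
x_r = r_solution(A, y, r=3)
err_full = np.linalg.norm(x_full - x_true)
err_r = np.linalg.norm(x_r - x_true)
```

Dividing by the tiny fourth singular value amplifies the noise by nine orders of magnitude, while truncating to r = 3 directions keeps the error at the noise level — the stabilizing effect the abstract describes.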
A shape-based quality evaluation and reconstruction method for electrical impedance tomography.
Antink, Christoph Hoog; Pikkemaat, Robert; Malmivuo, Jaakko; Leonhardt, Steffen
2015-06-01
Linear methods of reconstruction play an important role in medical electrical impedance tomography (EIT) and there is a wide variety of algorithms based on several assumptions. With the Graz consensus reconstruction algorithm for EIT (GREIT), a novel linear reconstruction algorithm as well as a standardized framework for evaluating and comparing methods of reconstruction were introduced that found widespread acceptance in the community. In this paper, we propose a two-sided extension of this concept by first introducing a novel method of evaluation. Instead of being based on point-shaped resistivity distributions, we use 2759 pairs of real lung shapes for evaluation that were automatically segmented from human CT data. Necessarily, the figures of merit defined in GREIT were adjusted. Second, a linear method of reconstruction that uses orthonormal eigenimages as training data and a tunable desired point spread function are proposed. Using our novel method of evaluation, this approach is compared to the classical point-shaped approach. Results show that most figures of merit improve with the use of eigenimages as training data. Moreover, the possibility of tuning the reconstruction by modifying the desired point spread function is shown. Finally, the reconstruction of real EIT data shows that higher contrasts and fewer artifacts can be achieved in ventilation- and perfusion-related images.
Nishimaru, Eiji; Ichikawa, Katsuhiro; Hara, Takanori; Terakawa, Shoichi; Yokomachi, Kazushi; Fujioka, Chikako; Kiguchi, Masao; Ishifuro, Minoru
2012-01-01
Adaptive iterative reconstruction techniques (IRs) can decrease image noise in computed tomography (CT) and are expected to contribute to reduction of the radiation dose. To evaluate the performance of IRs, the conventional two-dimensional (2D) noise power spectrum (NPS) is widely used. However, when an IR causes the NPS to drop at all spatial frequencies (similar to the NPS change produced by a dose increase), the conventional method cannot evaluate the noise property correctly because it does not account for the volumetric nature of CT image data. The purpose of our study was to develop a new method for NPS measurement that can be adapted to IRs. Our method utilizes thick multi-planar reconstruction (MPR) images. Thick images are generally made by averaging CT volume data in the direction perpendicular to the MPR plane (e.g. the z-direction for an axial MPR plane). By using this averaging technique as a cutter for the 3D NPS, we can obtain an adequate 2D-extracted NPS (eNPS) from the 3D NPS. We applied this method to IR images generated with adaptive iterative dose reduction 3D (AIDR 3D, Toshiba) to investigate its validity. A water phantom with a 24 cm diameter was scanned at 120 kV and 200 mAs with a 320-row CT scanner (Aquilion ONE, Toshiba). The results showed that the adequate thickness of the MPR images for eNPS was more than 25.0 mm. Our new NPS measurement method utilizing thick MPR images was accurate and effective for evaluating the noise reduction effects of IRs.
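The slab-averaging idea behind the eNPS can be sketched on a noise-only volume: averaging N slices before taking the 2-D NPS reduces uncorrelated noise power by roughly 1/N, which is what makes the thick-MPR extraction sensitive to the 3-D noise behaviour. The pixel size, slice count and white-noise model below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
vol = rng.normal(0.0, 10.0, size=(25, 64, 64))   # noise-only "CT volume" (z, y, x)

def nps2d(img, pixel=0.5):
    """2-D noise power spectrum of a zero-mean ROI: |FFT|^2 * dA / N."""
    roi = img - img.mean()
    ny, nx = roi.shape
    return np.abs(np.fft.fft2(roi))**2 * pixel * pixel / (nx * ny)

nps_thin = nps2d(vol[0])              # conventional single-slice 2D NPS
thick = vol.mean(axis=0)              # thick MPR: average 25 slices along z
nps_thick = nps2d(thick)

ratio = nps_thick.mean() / nps_thin.mean()   # ~1/25 for uncorrelated noise
```

For real IR images the slices are correlated along z, so the measured drop deviates from 1/N; that deviation is exactly the 3-D noise information the eNPS method is designed to capture.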
NASA Astrophysics Data System (ADS)
Liu, Ming; Qin, Zhuanping; Jia, Mengyu; Zhao, Huijuan; Gao, Feng
2015-03-01
A two-layered slab is a rational simplified sample for near-infrared functional brain imaging using diffuse optical tomography (DOT). The quality of the reconstructed images is substantially affected by the accuracy of the background optical properties. In this paper, a region-stepwise reconstruction method is proposed for reconstructing the background optical properties of a two-layered slab sample with known geometric information, based on continuous-wave (CW) DOT. The optical properties of the top and bottom layers are reconstructed separately, using different source-detector-separation groups selected according to the depth of maximum sensitivity of each source-detector separation. We demonstrate the feasibility of the proposed method and investigate the application range of the source-detector-separation groups through numerical simulations. The results indicate that the proposed method can effectively reconstruct the background optical properties of a two-layered slab sample: the relative reconstruction errors are less than 10% when the thickness of the top layer is approximately 10 mm. The reconstruction of a target caused by brain activation is investigated with the reconstructed optical properties as well. The quantitativeness ratio of the ROI is about 80%, which is higher than that of the conventional method. The spatial resolution of the reconstructions (R) with two targets is also investigated, demonstrating that R with the proposed method is likewise better than with the conventional method.
A Parallel Reconstructed Discontinuous Galerkin Method for the Compressible Flows on Arbitrary Grids
Hong Luo; Amjad Ali; Robert Nourgaliev; Vincent A. Mousseau
2010-01-01
A reconstruction-based discontinuous Galerkin (RDG) method is presented for the solution of the compressible Navier-Stokes equations on arbitrary grids. In this method, an in-cell reconstruction is used to obtain a higher-order polynomial representation of the underlying discontinuous Galerkin polynomial solution, and an inter-cell reconstruction is used to obtain a continuous polynomial solution on the union of two neighboring, interface-sharing cells. The in-cell reconstruction is designed to enhance the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. The inter-cell reconstruction is devised to remove an interface discontinuity of the solution and its derivatives, and thus to provide a simple, accurate, consistent, and robust approximation to the viscous and heat fluxes in the Navier-Stokes equations. A parallel strategy is also devised for the resulting RDG method, based on domain partitioning and the Single Program Multiple Data (SPMD) parallel programming model. The RDG method is used to compute a variety of compressible flow problems on arbitrary meshes to demonstrate its accuracy, efficiency, robustness, and versatility. The numerical results demonstrate that this RDG method is third-order accurate at a cost slightly higher than its underlying second-order DG method, while providing better performance than the third-order DG method in terms of both computing costs and storage requirements.
Simultaneous denoising and reconstruction of 5-D seismic data via damped rank-reduction method
NASA Astrophysics Data System (ADS)
Chen, Yangkang; Zhang, Dong; Jin, Zhaoyu; Chen, Xiaohong; Zu, Shaohuan; Huang, Weilin; Gan, Shuwei
2016-09-01
The Cadzow rank-reduction method can be effectively utilized in simultaneously denoising and reconstructing 5-D seismic data that depend on four spatial dimensions. The classic version of the Cadzow rank-reduction method arranges the 4-D spatial data into a level-four block Hankel/Toeplitz matrix and then applies truncated singular value decomposition (TSVD) for rank reduction. When the observed data are extremely noisy, as is often the case with real seismic data, traditional TSVD is not adequate for attenuating the noise and reconstructing the signals. The reconstructed data tend to contain a significant amount of residual noise when the traditional TSVD method is used, which can be explained by the fact that the reconstructed data space is a mixture of both signal subspace and noise subspace. In order to better decompose the block Hankel matrix into signal and noise components, we introduce a damping operator into the traditional TSVD formula, which we call the damped rank-reduction method. The damped rank-reduction method can obtain excellent reconstruction performance even when the observed data have an extremely low signal-to-noise ratio. The feasibility of the improved 5-D seismic data reconstruction method was validated via both 5-D synthetic and field data examples. We present a comprehensive analysis of the data examples and derive valuable experience and guidelines for better utilizing the proposed method in practice. Since the proposed method is convenient to implement and can achieve immediate improvement, we suggest its wide application in the industry.
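The Hankel rank-reduction step, with a simple damping weight applied to the retained singular values, can be sketched on a 1-D noisy sum of sinusoids. The damping weight below is a simplified stand-in for the damped rank-reduction operator of the paper, which operates on level-four block Hankel matrices of 4-D spatial data.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 64
t = np.arange(n)
clean = np.cos(2*np.pi*0.05*t) + 0.5*np.cos(2*np.pi*0.12*t)  # rank-4 signal
noisy = clean + 0.5 * rng.normal(size=n)

L = n // 2 + 1
def hankelize(x):
    return np.array([x[i:i + L] for i in range(len(x) - L + 1)])

def dehankelize(H, n):
    """Average anti-diagonals back into a length-n series."""
    out, cnt = np.zeros(n), np.zeros(n)
    rows, cols = H.shape
    for i in range(rows):
        out[i:i + cols] += H[i]
        cnt[i:i + cols] += 1
    return out / cnt

def rank_reduce(x, rank, K=4):
    """TSVD of the Hankel matrix; K applies a simple damping weight
    (a sketch of the damping operator, not the paper's exact formula)."""
    H = hankelize(x)
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    w = np.zeros_like(s)
    w[:rank] = 1.0
    if rank < len(s):
        w[:rank] *= 1.0 - (s[rank] / s[:rank])**K   # damp weakly separated components
    return dehankelize((U * (s * w)) @ Vt, len(x))

den = rank_reduce(noisy, rank=4)
err_noisy = np.linalg.norm(noisy - clean)
err_den = np.linalg.norm(den - clean)
```

Two real sinusoids give a rank-4 Hankel matrix, so truncating (and damping) beyond rank 4 suppresses the noise subspace while the anti-diagonal averaging restores the Hankel structure of the denoised series.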
NASA Astrophysics Data System (ADS)
Ramlau, R.; Saxenhuber, D.; Yudytskiy, M.
2014-07-01
The problem of atmospheric tomography arises in ground-based telescope imaging with adaptive optics (AO), where one aims to compensate in real time for the rapidly changing optical distortions in the atmosphere. Many of these systems depend on a sufficient reconstruction of the turbulence profiles in order to obtain a good correction. Due to steadily growing telescope sizes, there is a strong increase in the computational load for atmospheric reconstruction with current methods, first and foremost the matrix-vector multiplication (MVM) approach. In this paper we present and compare three novel iterative reconstruction methods. The first iterative approach is the Finite Element-Wavelet Hybrid Algorithm (FEWHA), which combines wavelet-based techniques and conjugate gradient schemes to efficiently and accurately tackle the problem of atmospheric reconstruction. The method is extremely fast, highly flexible and yields superior quality. Another novel iterative reconstruction approach is the three-step approach, which decouples the problem into the reconstruction of the incoming wavefronts, the reconstruction of the turbulent layers (atmospheric tomography) and the computation of the best mirror correction (fitting step). For the atmospheric tomography problem within the three-step approach, the Kaczmarz algorithm and the gradient-based method have been developed. We present a detailed comparison of our reconstructors, in terms of both quality and speed, in the context of a Multi-Object Adaptive Optics (MOAO) system for the E-ELT setting, on OCTOPUS, the ESO end-to-end simulation tool.
NASA Astrophysics Data System (ADS)
Song, Xizi; Xu, Yanbin; Dong, Feng
2017-04-01
Electrical resistance tomography (ERT) is a promising measurement technique with important industrial and clinical applications. However, with limited effective measurements, it suffers from poor spatial resolution due to the ill-posedness of the inverse problem. Recently, there has been increasing research interest in hybrid imaging techniques that couple different physical modalities, because these techniques obtain much more effective measurement information and promise high resolution. Ultrasound modulated electrical impedance tomography (UMEIT) is one of the newly developed hybrid imaging techniques, combining the electric and acoustic modalities. A linearized image reconstruction method based on power density is proposed for UMEIT. The interior data, i.e. the power density distribution, are adopted to reconstruct the conductivity distribution with the proposed image reconstruction method. At the same time, a Jacobian matrix relating the change in power density to the change in conductivity is employed to turn the nonlinear problem into a linear one. The analytic formulation of this Jacobian matrix is derived and its effectiveness verified. In addition, different excitation patterns are tested and analyzed, and opposite excitation provides the best performance with the proposed method. Multiple power density distributions are also combined in the image reconstruction. Finally, image reconstruction is implemented with the linear back-projection (LBP) algorithm. Compared with ERT, UMEIT with the proposed image reconstruction method can produce reconstructed images with higher quality and better quantitative evaluation results.
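Once a Jacobian relates the change in power density to the change in conductivity, each linearized update reduces to a regularized linear solve. A generic Tikhonov-style sketch (the actual Jacobian assembly and the LBP variant of the paper are not reproduced here):

```python
import numpy as np

def linearized_step(J, d_meas, alpha=1e-3):
    """One linearized reconstruction step: solve J dx ~ d_meas with
    Tikhonov regularization to stabilize the ill-posed inverse problem."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + alpha * np.eye(n), J.T @ d_meas)
```

The regularization weight alpha trades resolution against noise amplification; its value here is purely illustrative.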
Dynamic PET Image reconstruction for parametric imaging using the HYPR kernel method
NASA Astrophysics Data System (ADS)
Spencer, Benjamin; Qi, Jinyi; Badawi, Ramsey D.; Wang, Guobao
2017-03-01
Dynamic PET image reconstruction is a challenging problem because of the ill-conditioned nature of PET and the low counting statistics resulting from the short time frames used in dynamic imaging. The kernel method for image reconstruction has been developed to improve image reconstruction of low-count PET data by incorporating prior information derived from high-count composite data. In contrast to most of the existing regularization-based methods, the kernel method embeds image prior information in the forward projection model and does not require an explicit regularization term in the reconstruction formula. Inspired by the existing highly constrained back-projection (HYPR) algorithm for dynamic PET image denoising, we propose in this work a new type of kernel that is simpler to implement and further improves kernel-based dynamic PET image reconstruction. Our evaluation study using a physical phantom scan with synthetic FDG tracer kinetics demonstrates that the new HYPR kernel-based reconstruction can achieve a better region-of-interest (ROI) bias versus standard deviation trade-off for dynamic PET parametric imaging than the post-reconstruction HYPR denoising method and the previously used nonlocal-means kernel.
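The kernel method referred to above represents the image as x = Kα, with K built from prior (e.g. high-count composite) data, and runs ML-EM on the coefficients α so that no explicit regularization term appears. A minimal sketch with an assumed Gaussian-similarity kernel; the HYPR kernel construction itself is not reproduced:

```python
import numpy as np

def kernel_mlem(P, K, y, n_iter=500):
    """ML-EM for y ~ Poisson(P K a); the image is x = K a (kernel method)."""
    a = np.ones(K.shape[1])
    PK = P @ K                                    # prior embedded in the forward model
    sens = PK.T @ np.ones(P.shape[0])             # sensitivity image
    for _ in range(n_iter):
        yhat = np.maximum(PK @ a, 1e-12)          # guard against division by zero
        a *= (PK.T @ (y / yhat)) / np.maximum(sens, 1e-12)
    return K @ a
```

Because the prior sits inside the forward model, the familiar multiplicative EM update is unchanged; only the system matrix is replaced by PK.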
Image Reconstruction in SNR Units: A General Method for SNR Measurement
Kellman, Peter; McVeigh, Elliot R.
2007-01-01
The method of phased-array image reconstruction producing uniform-noise images may be used in conjunction with proper image scaling to reconstruct images directly in SNR units. This facilitates accurate and precise SNR measurement on a per-pixel basis. The method is applicable to root-sum-of-squares magnitude combining, B1-weighted combining, and parallel imaging such as SENSE. A procedure for image reconstruction and scaling is presented, and the method for SNR measurement is validated with phantom data. Alternative methods that rely on noise-only regions are not appropriate for parallel imaging, where the noise level is highly variable across the field of view. The purpose of this article is to provide a nuts-and-bolts procedure for calculating the scale factors used for reconstructing images directly in SNR units. The procedure includes scaling for the noise-equivalent bandwidth of digital receivers, FFTs and associated window functions (raw data filters), and array combining. PMID:16261576
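For a single pixel, the scaling idea can be illustrated with a noise-covariance-weighted (B1-weighted) combine: prewhitening by the coil noise covariance Ψ and normalizing by √(bᴴΨ⁻¹b) makes the output noise standard deviation unity, so the pixel value is directly its SNR. The √2 below assumes the noise variance is split equally between real and imaginary channels; this sketch omits the receiver-bandwidth and FFT/window scale factors the article also accounts for:

```python
import numpy as np

def snr_units_combine(s, b, psi):
    """Combine coil samples s with sensitivities b and noise covariance psi,
    scaled so that the output is directly in SNR units (unit noise std)."""
    w = np.linalg.solve(psi, b)                    # psi^{-1} b
    scale = np.sqrt(np.real(np.conj(b) @ w))       # sqrt(b^H psi^{-1} b)
    return np.sqrt(2.0) * np.abs(np.conj(w) @ s) / scale
```

At high SNR the magnitude noise is dominated by the in-phase component, so the measured standard deviation of the combined output approaches 1.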
Myers, Glenn R.; Thomas, C. David L.; Clement, John G.; Paganin, David M.; Gureyev, Timur E.
2010-01-11
We present a method for tomographic reconstruction of objects containing several distinct materials, which is capable of accurately reconstructing a sample from vastly fewer angular projections than required by conventional algorithms. The algorithm is more general than many previous discrete tomography methods, as: (i) a priori knowledge of the exact number of materials is not required; (ii) the linear attenuation coefficient of each constituent material may assume a small range of a priori unknown values. We present reconstructions from an experimental x-ray computed tomography scan of cortical bone acquired at the SPring-8 synchrotron.
NASA Astrophysics Data System (ADS)
Ma, Xichao; Xiao, Wen; Pan, Feng
2017-07-01
We present a reconstruction method for samples containing localized refractive index (RI) discontinuities in optical diffraction tomography. Abrupt RI changes induce regional phase perturbations and random spikes, which are expanded and strengthened by existing tomographic algorithms, resulting in contaminated reconstructions. The proposed method avoids this disturbance by recognizing and separating the discontinuous regions and recombining the individually reconstructed data. Three-dimensional RI distributions of two fusion-spliced optical fibers with different typical discontinuities are demonstrated, showing distinctly detailed structures of the samples as well as the positions and estimated shapes of the discontinuities.
Reconstruction method for x-ray imaging capsule
NASA Astrophysics Data System (ADS)
Rubin, Daniel; Lifshitz, Ronen; Bar-Ilan, Omer; Weiss, Noam; Shapiro, Yoel; Kimchy, Yoav
2017-03-01
A colon imaging capsule has been developed by Check-Cap Ltd (C-Scan® Cap). For the procedure, the patient swallows a small amount of standard iodinated contrast agent. To create images, three rotating X-ray beams are emitted towards the colon wall. Some of the X-ray photons are backscattered from the contrast medium and the colon. These photons are collected by an omnidirectional array of energy-discriminating photon-counting detectors (CdTe/CZT) within the capsule. X-ray fluorescence (XRF) and Compton backscattering photons have different energies and are counted separately by the detection electronics. The current work examines a new statistical approach for the algorithm that reconstructs the lining of the colon wall from the X-ray detector readings. The algorithm performs numerical optimization to find the solution to the inverse problem applied to a physical forward model reflecting the behavior of the system. The employed forward model accounts for the following major factors: the two mechanisms of dependence between the distance to the colon wall and the number of photons, directional scatter distributions, and the relative orientations between beams and detectors. A calibration procedure has been put in place to adjust the coefficients of the forward model to the specific capsule geometry, radiation source characteristics, and detector response. The performance of the algorithm was examined in phantom experiments and demonstrated high correlation between the actual phantom shape and the X-ray image reconstruction. Evaluation is underway to assess the algorithm's performance in a clinical setting.
Reproducibility of the Structural Connectome Reconstruction across Diffusion Methods.
Prčkovska, Vesna; Rodrigues, Paulo; Puigdellivol Sanchez, Ana; Ramos, Marc; Andorra, Magi; Martinez-Heras, Eloy; Falcon, Carles; Prats-Galino, Albert; Villoslada, Pablo
2016-01-01
Analysis of structural connectomes can lead to powerful insights about the brain's organization and damage. However, the accuracy and reproducibility of constructing the structural connectome with different acquisition and reconstruction techniques are not well defined. In this work, we evaluated the reproducibility of structural connectome techniques by performing test-retest (same day) and longitudinal (after 1 month) studies, as well as analyzing graph-based measures, on data acquired from 22 healthy volunteers (6 subjects were used for the longitudinal study). We compared connectivity matrices and tract reconstructions obtained with the most typical acquisition schemes used in clinical application: diffusion tensor imaging (DTI), high angular resolution diffusion imaging (HARDI), and diffusion spectrum imaging (DSI). We observed that all techniques showed high reproducibility in the test-retest analysis (correlation > 0.9). However, HARDI was the only technique with low variability (2%) in the longitudinal assessment (1-month interval). The intraclass correlation coefficient analysis showed the highest reproducibility for the DTI connectome, albeit with sparser connections than HARDI and DSI. Qualitative (neuroanatomical) assessment of selected tracts confirmed the quantitative results, showing that HARDI managed to detect most of the analyzed fiber groups and fanning fibers. In conclusion, we found that HARDI acquisition showed the most balanced trade-off between high reproducibility of the connectome, a higher rate of detection of paths and fanning fibers, and intermediate acquisition times (10-15 minutes), although at the cost of a higher incidence of aberrant fibers.
Validation of plasma shape reconstruction by Cauchy condition surface method in KSTAR
Miyata, Y.; Suzuki, T.; Ide, S.; Hahn, S. H.; Chung, J.; Bak, J. G.; Ko, W. H.
2014-03-15
The Cauchy Condition Surface (CCS) method is a numerical approach to reconstructing the plasma boundary and calculating the quantities related to plasma shape using the magnetic diagnostics in real time. It has been applied to the KSTAR plasma for the first time, in order to establish plasma shape reconstruction in the presence of the high elongation of the plasma shape and the large effect of eddy currents flowing in the tokamak structures. For applying the CCS calculation to the KSTAR plasma, the effects of the eddy currents and the ferromagnetic materials on the plasma shape reconstruction are studied. The CCS calculation includes the effect of eddy currents and excludes the magnetic diagnostics that are expected to be strongly influenced by ferromagnetic materials. Calculations have been performed to validate the plasma shape reconstruction in the 2012 KSTAR experimental campaign. Comparison between the CCS calculation and non-magnetic measurements revealed that the CCS calculation can reconstruct the plasma shape accurately even with a small plasma current I_P.
Yuan, Zhen; Zhang, Jiang; Wang, Xiaodong; Li, Changqing
2014-01-01
We conducted a systematic investigation of reflectance diffuse optical tomography using continuous wave (CW) measurements and nonlinear reconstruction algorithms. We illustrated and suggested how to fine-tune the nonlinear reconstruction methods in order to optimize target localization with depth-adaptive regularizations, reduce boundary noise in the reconstructed images using a logarithm-based objective function, improve reconstruction quantification using transport models, and resolve crosstalk problems between absorption and scattering contrasts with the CW reflectance measurements. The upgraded nonlinear reconstruction algorithms were evaluated with a series of numerical and experimental tests, which show the potential of the proposed approaches for imaging both absorption and scattering contrasts in deep targets with enhanced image quality. PMID:25401014
A hybrid method for more efficient channel-by-channel reconstruction with many channels.
Huang, Feng; Lin, Wei; Duensing, George R; Reykowski, Arne
2012-03-01
In MRI, imaging using receiving coil arrays with a large number of elements is an area of growing interest. With increasing channel numbers for parallel acquisition, longer reconstruction times have become a significant concern. Channel reduction techniques have been proposed to reduce the processing time of channel-by-channel reconstruction algorithms. In this article, two schemes are combined to enable faster and more accurate reconstruction than existing channel reduction techniques. One scheme uses two stages of channel reduction instead of one. The other incorporates all acquired data into the final reconstruction. The combination of these two schemes is called the flexible virtual coil. Applications of the flexible virtual coil to partially parallel imaging, motion compensation, and compressed sensing are presented as specific examples. Theoretical analysis and experimental results demonstrate that the proposed method has a major impact in reducing the computation cost of reconstruction with high-channel-count coil arrays. Copyright © 2011 Wiley Periodicals, Inc.
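A common building block behind such channel-reduction schemes is SVD-based channel compression, which projects the physical channels onto their dominant modes; the two-stage flexible-virtual-coil pipeline itself is not reproduced here, so this is only a generic sketch:

```python
import numpy as np

def compress_channels(data, n_virtual):
    """Project multi-channel data (channels x samples) onto its n_virtual
    dominant channel modes, yielding virtual-coil data."""
    U, _, _ = np.linalg.svd(data, full_matrices=False)
    return np.conj(U[:, :n_virtual]).T @ data
```

When the inter-channel correlation is high, a few virtual channels capture almost all of the signal energy, which is what makes channel-by-channel reconstruction on the compressed set so much cheaper.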
NASA Astrophysics Data System (ADS)
Zhao, Shuang; Xu, Yanli; Dai, Huayu
2017-05-01
Aiming at the problem of configuration design for heterogeneous constellation reconstruction, a design method based on multiple objectives and multiple constraints is proposed. First, the concept of a heterogeneous constellation is defined. Second, methods for heterogeneous constellation reconstruction are analyzed, and two typical existing reconstruction design methods, the phase-position uniformity method and configuration design based on optimization algorithms, are summarized. The advantages and shortcomings of the different reconstruction configuration design methods are compared. Finally, the problems currently facing heterogeneous constellation reconstruction configuration design are analyzed, and thoughts are put forward on the reconstruction index system for heterogeneous constellations, the selection of optimization variables, and the establishment of constraints in the configuration design.
MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions
NASA Astrophysics Data System (ADS)
Novosad, Philip; Reader, Andrew J.
2016-06-01
Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [18F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral
Stable-phase method for hierarchical annealing in the reconstruction of porous media images.
Chen, Dongdong; Teng, Qizhi; He, Xiaohai; Xu, Zhi; Li, Zhengji
2014-01-01
In this paper, we introduce a stable-phase approach for hierarchical annealing which addresses the very large computational cost associated with simulated annealing for the reconstruction of large-scale binary porous media images. Our method, which uses the two-point correlation function as the morphological descriptor, involves the reconstruction of three-phase and two-phase structures. We consider reconstructing the three-phase structures based on standard annealing and the two-phase structures based on standard and hierarchical annealing. From the result of the two-dimensional (2D) reconstruction, we find that the 2D generation does not fully capture the morphological information of the original image, even though the two-point correlation function of the reconstruction is in excellent agreement with that of the reference image. For the reconstructed three-dimensional (3D) microstructure, we calculate its permeability and compare it to that of the reference 3D microstructure. The result indicates that the reconstructed structure has a lower degree of connectedness than the actual sandstone. We also compare the computation time of our method to that of standard annealing, which shows that our method improves the convergence rate by orders of magnitude. That is because only a small part of the pixels in the overall hierarchy need to be considered for sampling by the annealer.
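The annealing loop underlying such reconstructions can be sketched as follows: propose pixel swaps (preserving the volume fraction), score them by the mismatch of the two-point correlation function, and accept by the Metropolis rule. This toy version uses a 1-D directional S2 with periodic wrap and a linear cooling schedule, both simplifying assumptions:

```python
import numpy as np

def s2(img, max_r):
    """Two-point probability S2(r) along x (periodic wrap, a simplification)."""
    return np.array([(img * np.roll(img, -r, axis=1)).mean() for r in range(max_r)])

def anneal_reconstruct(target, shape, phi, n_steps=1500, T0=1e-4, seed=0):
    """Reconstruct a binary image whose S2 matches `target` via Metropolis
    pixel swaps under a linear cooling schedule (an assumed schedule)."""
    rng = np.random.default_rng(seed)
    img = (rng.random(shape) < phi).astype(float)      # volume fraction phi
    energy = ((s2(img, len(target)) - target) ** 2).sum()
    for k in range(n_steps):
        T = T0 * (1.0 - k / n_steps) + 1e-12
        ones, zeros = np.argwhere(img == 1), np.argwhere(img == 0)
        p1 = tuple(ones[rng.integers(len(ones))])
        p0 = tuple(zeros[rng.integers(len(zeros))])
        img[p1], img[p0] = 0.0, 1.0                    # propose a swap
        new_e = ((s2(img, len(target)) - target) ** 2).sum()
        if new_e > energy and rng.random() >= np.exp((energy - new_e) / T):
            img[p1], img[p0] = 1.0, 0.0                # reject: undo the swap
        else:
            energy = new_e
    return img, energy
```

The hierarchical variant described in the paper applies the same loop on a coarse-to-fine pyramid, so that only a small fraction of pixels is ever sampled at full resolution.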
NASA Astrophysics Data System (ADS)
Zhu, Dianwen; Zhang, Wei; Zhao, Yue; Li, Changqing
2016-03-01
Dynamic fluorescence molecular tomography (FMT) has the potential to quantify physiological or biochemical information, known as pharmacokinetic parameters, which are important for cancer detection, drug development and delivery, etc. To image those parameters, there are indirect methods, which are easier to implement but tend to provide images with low signal-to-noise ratio, and direct methods, which model all the measurement noise together and are statistically more efficient. The direct reconstruction methods in dynamic FMT have attracted a lot of attention recently. However, the coupling of tomographic image reconstruction and the nonlinearity of kinetic parameter estimation due to the compartment modeling has imposed a huge computational burden on the direct reconstruction of kinetic parameters. In this paper, we propose to take advantage of both the direct and indirect reconstruction ideas through a variable splitting strategy under the augmented Lagrangian framework. Each iteration of the direct reconstruction is split into two steps: dynamic FMT image reconstruction and node-wise nonlinear least squares fitting of the pharmacokinetic parameter images. Through numerical simulation studies, we have found that the proposed algorithm can achieve good reconstruction results within a small amount of time. This will be the first step towards combined dynamic PET and FMT imaging in the future.
Reconstruction for 3D PET Based on Total Variation Constrained Direct Fourier Method
Yu, Haiqing; Chen, Zhi; Zhang, Heye; Loong Wong, Kelvin Kian; Chen, Yunmei; Liu, Huafeng
2015-01-01
This paper presents a total variation (TV) regularized reconstruction algorithm for 3D positron emission tomography (PET). The proposed method first employs the Fourier rebinning algorithm (FORE) to rebin the 3D data into a stack of ordinary 2D sinograms. The resulting 2D sinograms can then be reconstructed by conventional 2D reconstruction algorithms. Given the locally piecewise-constant nature of PET images, we introduce TV-based reconstruction schemes. More specifically, we formulate the 2D PET reconstruction problem as an optimization problem whose objective function consists of the TV norm of the reconstructed image and a data fidelity term measuring the consistency between the reconstructed image and the sinogram. To solve the resulting minimization problem, we apply an efficient method called the Bregman operator splitting algorithm with variable step size (BOSVS). Experiments based on Monte Carlo simulated data and real data are conducted as validation. The results show that the proposed method produces higher accuracy than the conventional direct Fourier (DF) method (the bias of BOSVS is 70% of that of DF, and its variance is 80% of that of DF). PMID:26398232
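The TV-regularized objective described above, a data-fit term plus a TV norm, can be illustrated in its simplest setting: denoising (system matrix equal to the identity) with a smoothed isotropic TV and plain gradient descent. The paper's BOSVS solver and the sinogram fidelity term are not reproduced here:

```python
import numpy as np

def tv_denoise(y, lam=0.3, step=0.05, n_iter=300, eps=1e-3):
    """Minimize 0.5*||x - y||^2 + lam*TV_eps(x) by gradient descent,
    where TV_eps is a smoothed isotropic total variation."""
    x = y.copy()
    for _ in range(n_iter):
        gx = np.zeros_like(x)
        gy = np.zeros_like(x)
        gx[:, :-1] = np.diff(x, axis=1)            # forward differences,
        gy[:-1, :] = np.diff(x, axis=0)            # zero at the boundary
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
        px, py = gx / mag, gy / mag
        # Adjoint of the forward difference (valid because the last slice
        # of px/py is zero): this is the gradient of the TV term.
        tvg = (np.roll(px, 1, axis=1) - px) + (np.roll(py, 1, axis=0) - py)
        x -= step * ((x - y) + lam * tvg)
    return x
```

In the full reconstruction problem the term (x - y) is replaced by the gradient of the sinogram fidelity term, which is what BOSVS handles with its variable step size.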
Time-frequency manifold sparse reconstruction: A novel method for bearing fault feature extraction
NASA Astrophysics Data System (ADS)
Ding, Xiaoxi; He, Qingbo
2016-12-01
In this paper, a novel transient signal reconstruction method, called time-frequency manifold (TFM) sparse reconstruction, is proposed for bearing fault feature extraction. The method introduces image sparse reconstruction into the TFM analysis framework. Owing to the excellent denoising performance of the TFM, a more effective time-frequency (TF) dictionary can be learned from the TFM signature by image sparse decomposition based on orthogonal matching pursuit (OMP). The TF distribution (TFD) of the raw signal in a reconstructed phase space is then re-expressed as the sum of the learned TF atoms multiplied by the corresponding coefficients. Finally, a one-dimensional signal is recovered by the inverse process of TF analysis (TFA), with the amplitude information of the raw signal well reconstructed. The proposed technique combines the merits of the TFM in denoising and of atomic decomposition in image sparse reconstruction. Moreover, the combination makes it possible to express the nonlinear signal processing results explicitly in theory. The effectiveness of the proposed TFM sparse reconstruction method is verified by experimental analysis for bearing fault feature extraction.
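The OMP step used for learning the sparse TF representation greedily picks the dictionary atom most correlated with the current residual and refits by least squares. A generic sketch; the TFM dictionary construction is not reproduced:

```python
import numpy as np

def omp(D, y, n_atoms):
    """Orthogonal Matching Pursuit: sparse-code y over dictionary D."""
    resid = y.copy()
    idx = []
    coef = np.zeros(0)
    for _ in range(n_atoms):
        idx.append(int(np.argmax(np.abs(D.T @ resid))))   # best-matching atom
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        resid = y - D[:, idx] @ coef                      # refit residual
    return idx, coef
```

Because each refit re-solves the least-squares problem over all selected atoms, the residual stays orthogonal to the chosen subspace, which is what distinguishes OMP from plain matching pursuit.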
NASA Astrophysics Data System (ADS)
Ko, Han Seo; Gim, Yeonghyeon; Kang, Seung-Hwan
2015-11-01
A three-dimensional optical correction method was developed to reconstruct droplet-based flow fields. For a numerical simulation, synthetic phantoms were reconstructed by a simultaneous multiplicative algebraic reconstruction technique using three projection images positioned at an offset angle of 45°. If the synthetic phantom lies within a conical object whose refractive index differs from that of the atmosphere, the image can be distorted because light is refracted at the surface of the conical object. Thus, the direction of the projection ray was replaced by the refracted ray occurring at the surface of the conical object. To validate the method accounting for this distortion effect, reconstruction results of the developed method were compared with the original phantom. As a result, the reconstruction obtained with the method showed a smaller error than that obtained without it. The method was applied to a Taylor cone, produced by a high voltage between a droplet and a substrate, to reconstruct the three-dimensional flow fields for analysis of the characteristics of the droplet. This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Korean government (MEST) (No. 2013R1A2A2A01068653).
Lu, Huancai; Wu, Sean F
2009-03-01
The vibroacoustic responses of a highly nonspherical vibrating object are reconstructed using the Helmholtz equation least-squares (HELS) method. The objectives of this study are to examine the accuracy of reconstruction and the impacts of the various parameters involved in reconstruction using HELS. The test object is a simply supported and baffled thin plate, chosen because it represents a class of structures that cannot be exactly described by the spherical Hankel functions and spherical harmonics taken as the basis functions in the HELS formulation, yet the analytic solutions to the vibroacoustic responses of a baffled plate are readily available, so the accuracy of reconstruction can be rigorously checked. The input field acoustic pressures for reconstruction are generated by the Rayleigh integral. The reconstructed normal surface velocities are validated against the benchmark values, and the out-of-plane vibration patterns at several natural frequencies are compared with the natural modes of a simply supported plate. The impacts on reconstruction accuracy of various parameters, such as the number of measurement points, the measurement distance, the location of the origin of the coordinate system, the microphone spacing, and the ratio of measurement aperture size to the area of the source surface, are examined.
NASA Astrophysics Data System (ADS)
Porch, Nick
2010-03-01
If Quaternary palaeoclimatic reconstructions are to be adequately contextualised, it is vital that the nature of modern datasets and the limitations this places on interpreting Quaternary climates are made explicit - such issues are too infrequently considered. This paper describes a coexistence method for the reconstruction of past temperature and precipitation parameters in Australia, using fossil beetles. It presents the context for Quaternary palaeoclimatic reconstruction in terms of climate space, bioclimatic envelope data derived from modern beetle distributions, and the palaeoclimatic limitations of bioclimatic envelope-based reconstructions. Tests in modern climate space, using bioclimatic envelope data for 734 beetle taxa and 54 site-based assemblages from across the continent, indicate that modern seasonal, especially summer, temperatures and precipitation are accurately and, in the case of temperature, precisely reconstructed. The limitations of modern climate space, especially in terms of the limited seasonal variation in thermal regimes and subsequent lack of cold winters in the Australian region, renders winter predictions potentially unreliable when applied to the Quaternary record.
Apparatus And Method For Reconstructing Data Using Cross-Parity Stripes On Storage Media
Hughes, James Prescott
2003-06-17
An apparatus and method for reconstructing missing data using cross-parity stripes on a storage medium are provided. The apparatus and method may operate on data symbols having sizes greater than a data bit, and make use of a plurality of parity stripes for reconstructing missing data stripes. The parity symbol values in the parity stripes are used as a basis for determining the value of a missing data symbol in a data stripe. A correction matrix is shifted along the data stripes, correcting missing data symbols as it goes. The correction is performed from the outside data stripes towards the inner data stripes, so that previously reconstructed data symbols are used to reconstruct other missing data symbols.
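The principle can be illustrated with the simplest single-parity case: a parity stripe formed by XOR-ing the data stripes lets any one missing stripe be recovered by XOR-ing the parity with the survivors. The patented scheme generalizes this with multiple cross-parity (diagonal) stripes to recover multiple missing stripes; that generalization is not shown here:

```python
import numpy as np

def parity_stripe(stripes):
    """Parity stripe: the XOR of all data stripes."""
    p = np.zeros_like(stripes[0])
    for s in stripes:
        p ^= s
    return p

def reconstruct_stripe(stripes, parity, missing):
    """Recover one missing data stripe by XOR-ing the parity with all survivors."""
    rec = parity.copy()
    for i, s in enumerate(stripes):
        if i != missing:
            rec ^= s
    return rec
```

XOR works symbol-wise on units of any size (here bytes), which mirrors the patent's point that the scheme operates on data symbols larger than a single bit.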
Yu, Dongdong; Zhang, Shuang; An, Yu; Hu, Yifang
2015-01-01
Optical molecular imaging is a promising technique that has been widely used in physiology and pathology at the cellular and molecular levels, and includes modalities such as bioluminescence tomography, fluorescence molecular tomography and Cerenkov luminescence tomography. For all of these modalities the inverse problem is ill-posed, which causes nonunique solutions. In this paper, we propose an effective reconstruction method based on the linearized Bregman iteration algorithm with sparse regularization (LBSR). Since the reconstructed sources are sparse, the sparsity can be regarded as a priori information, and sparse regularization is incorporated, which can accurately locate the position of the source. The linearized Bregman iteration method is exploited to minimize the sparse regularization problem so as to further achieve fast and accurate reconstruction results. Experimental results on a numerical simulation and an in vivo mouse demonstrate the effectiveness and potential of the proposed method. PMID:26421055
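The linearized Bregman iteration alternates a gradient step on the data fit with a soft-thresholding (shrinkage) step that enforces sparsity. The scaling below, with the step 1/||A||² folded into the shrinkage output, is one common variant of the algorithm and is an assumption, not necessarily the paper's exact formulation:

```python
import numpy as np

def linearized_bregman(A, b, mu=3.0, n_iter=5000):
    """Sparse recovery via linearized Bregman iteration: tends toward
    argmin mu*||x||_1 + (1/2d)*||x||_2^2 subject to A x = b."""
    d = 1.0 / np.linalg.norm(A, 2) ** 2
    v = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v += A.T @ (b - A @ x)                                # gradient step
        x = d * np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)  # soft threshold
    return x
```

Larger mu yields sparser iterates at the cost of a longer "stagnation" phase before the accumulated variable v crosses the threshold, which is the trade-off the hyperparameters control.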
Gradient domain methods with application to 4D scene reconstruction
NASA Astrophysics Data System (ADS)
Di Martino, J. Matías; Fernández, Alicia; Ferrari, José A.
2015-03-01
In many applications, such as Photometric Stereo, Shape from Shading, Differential 3D reconstruction and Image Editing in the gradient domain, it is important to integrate a retrieved gradient field. In most real experiments, the retrieved gradient fields are nonintegrable (i.e., they are not irrotational at every point of the domain). Robust approaches have been proposed to deal with noisy nonintegrable gradient fields. In this work we extend some of these techniques to the case of dynamic scenes, where the gradient field in the x-y domain can be estimated over time. We exploit temporal consistency in the scene to ensure integrability and improve the accuracy of the results. In addition, two known integration algorithms are reviewed and important implementation details are discussed. Experiments with synthetic and real data show some potential applications of the proposed framework.
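A standard least-squares integrator for a (possibly nonintegrable) gradient field is the Fourier-domain projection of Frankot and Chellappa, which integration algorithms of the kind reviewed here build upon. This sketch assumes periodic boundaries and spectrally consistent gradients:

```python
import numpy as np

def integrate_gradient(gx, gy):
    """Least-squares integration of a gradient field (Frankot-Chellappa),
    assuming periodic boundaries; the unrecoverable DC level is set to 0."""
    h, w = gx.shape
    WX, WY = np.meshgrid(np.fft.fftfreq(w) * 2 * np.pi,
                         np.fft.fftfreq(h) * 2 * np.pi)
    denom = WX ** 2 + WY ** 2
    denom[0, 0] = 1.0                            # avoid division by zero at DC
    Z = (-1j * WX * np.fft.fft2(gx) - 1j * WY * np.fft.fft2(gy)) / denom
    Z[0, 0] = 0.0
    return np.real(np.fft.ifft2(Z))
```

The division in the Fourier domain orthogonally projects the measured field onto the space of integrable fields, which is why the curl (rotational) component of a noisy field is discarded rather than propagated into the surface.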
Mass reconstruction methods for PM2.5: a review.
Chow, Judith C; Lowenthal, Douglas H; Chen, L-W Antony; Wang, Xiaoliang; Watson, John G
Major components of suspended particulate matter (PM) are inorganic ions, organic matter (OM), elemental carbon (EC), geological minerals, salt, non-mineral elements, and water. Since oxygen (O) and hydrogen (H) are not directly measured in chemical speciation networks, more than ten weighting equations have been applied to account for their presence, thereby approximating gravimetric mass. Assumptions for these weights are not the same under all circumstances. OM is estimated from an organic carbon (OC) multiplier (f) that ranges from 1.4 to 1.8 in most studies, but f can be larger for highly polar compounds from biomass burning and secondary organic aerosols. The mineral content of fugitive dust is estimated from elemental markers, while the water-soluble content is accounted for as inorganic ions or salt. Part of the discrepancy between measured and reconstructed PM mass is due to the measurement process, including: (1) organic vapors adsorbed on quartz-fiber filters; (2) evaporation of volatile ammonium nitrate and OM between the weighed Teflon-membrane filter and the nylon-membrane and/or quartz-fiber filters on which ions and carbon are measured; and (3) liquid water retained on soluble constituents during filter weighing. The widely used IMPROVE equations were developed to characterize particle light extinction in U.S. national parks, and variants of this approach have been tested in a large variety of environments. Important factors for improving agreement between measured and reconstructed PM mass are the f multiplier for converting OC to OM and accounting for OC sampling artifacts.
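As a concrete illustration, one IMPROVE-style weighting equation can be written as a short function. The component list, the soil coefficients, the sea-salt factor of 1.8 and f = 1.6 are one common variant within the ranges discussed above, not a universally valid choice.

```python
def reconstruct_pm_mass(oc, ec, sulfate, nitrate, ammonium,
                        al, si, ca, fe, ti, chloride, f_om=1.6):
    """Reconstructed PM2.5 mass (ug/m^3) from measured species."""
    om = f_om * oc                                           # organic matter from OC
    soil = 2.20*al + 2.49*si + 1.63*ca + 2.42*fe + 1.94*ti   # mineral dust from markers
    salt = 1.8 * chloride                                    # sea salt from chloride
    return om + ec + sulfate + nitrate + ammonium + soil + salt
```

Comparing the value returned by such an equation against the gravimetric mass is what exposes the sampling artifacts (adsorbed vapors, evaporative losses, retained water) listed above.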
New methods for the computer-assisted 3-D reconstruction of neurons from confocal image stacks.
Schmitt, Stephan; Evers, Jan Felix; Duch, Carsten; Scholz, Michael; Obermayer, Klaus
2004-12-01
Exact geometrical reconstructions of neuronal architecture are indispensable for the investigation of neuronal function. Neuronal shape is important for the wiring of networks, and dendritic architecture strongly affects neuronal integration and firing properties, as demonstrated by modeling approaches. Confocal microscopy makes it possible to scan neurons with submicron resolution. However, it is still a tedious task to reconstruct complex dendritic trees whose fine structures lie just above voxel resolution. We present a framework that assists the reconstruction. User time investment is strongly reduced by automatic methods, which fit a skeleton and a surface to the data, while the user can interact and thus keeps full control to ensure a high-quality reconstruction. The reconstruction proceeds as a successive acquisition of metric parameters. First, a structural description of the neuron is built, including the topology and the exact dendritic lengths and diameters. We use generalized cylinders with circular cross sections. The user provides a rough initialization by marking the branching points. The axes and radii are fitted to the data by minimizing an energy functional, which is regularized by a smoothness constraint. Investigating proximity to other structures throughout dendritic trees requires a precise surface reconstruction. In order to achieve an accuracy of 0.1 microm and below, we additionally implemented a segmentation algorithm based on geodesic active contours that allows for arbitrary cross sections and uses locally adapted thresholds. In summary, this new reconstruction tool saves time and increases quality compared with other methods that have previously been applied to real neurons.
NASA Astrophysics Data System (ADS)
Poudel, Joemini; Matthews, Thomas P.; Anastasio, Mark A.; Wang, Lihong V.
2016-03-01
Photoacoustic computed tomography (PACT) is an emerging computed imaging modality that exploits optical contrast and ultrasonic detection principles to form images of the absorbed optical energy density within tissue. If the object possesses spatially variant acoustic properties that are unaccounted for by the reconstruction algorithm, the estimated image can contain distortions. While reconstruction algorithms have recently been developed to compensate for this effect, they generally require the object's acoustic properties to be known a priori. To circumvent the need for detailed information regarding an object's acoustic properties, we have previously proposed a half-time reconstruction method for PACT, which estimates the PACT image from a data set that has been temporally truncated to exclude the components that have been strongly aberrated. In that approach, the degree of temporal truncation is the same for all measurements. However, this strategy can be improved upon when the approximate sizes and locations of strongly heterogeneous structures, such as gas voids or bones, are known. In this work, we investigate PACT reconstruction algorithms based on a variable temporal data truncation (VTDT) approach that generalizes the half-time reconstruction approach: the degree of temporal truncation for each measurement is determined by the distance between the corresponding transducer location and the nearest known bone or gas void. Reconstructed images of a numerical phantom are employed to demonstrate the feasibility and effectiveness of the approach.
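The VTDT idea, truncating each transducer's trace before the earliest possible arrival from a known heterogeneity, reduces to a per-channel cutoff. The geometry, sound speed and sampling interval below are made-up illustrative values, and this sketch deliberately omits the actual image reconstruction step.

```python
import numpy as np

def vtdt_truncate(signals, transducer_pos, void_pos, c=1.5, dt=0.01):
    """Variable temporal data truncation: for each transducer, keep only
    samples recorded before a wavefront from the nearest known void/bone
    could arrive (a sketch of the idea, not the authors' implementation)."""
    out = signals.copy()
    for i, p in enumerate(transducer_pos):
        d = np.min(np.linalg.norm(void_pos - p, axis=1))  # nearest heterogeneity
        n_keep = min(int(round(d / (c * dt))), out.shape[1])
        out[i, n_keep:] = 0.0              # discard the strongly aberrated tail
    return out
```

Channels far from any void keep most of their trace, while nearby channels are truncated aggressively, which is exactly the generalization over the fixed half-time cutoff.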
Network Reconstruction Using Nonparametric Additive ODE Models
Henderson, James; Michailidis, George
2014-01-01
Network representations of biological systems are widespread and reconstructing unknown networks from data is a focal problem for computational biologists. For example, the series of biochemical reactions in a metabolic pathway can be represented as a network, with nodes corresponding to metabolites and edges linking reactants to products. In a different context, regulatory relationships among genes are commonly represented as directed networks with edges pointing from influential genes to their targets. Reconstructing such networks from data is a challenging problem receiving much attention in the literature. There is a particular need for approaches tailored to time-series data and not reliant on direct intervention experiments, as the former are often more readily available. In this paper, we introduce an approach to reconstructing directed networks based on dynamic systems models. Our approach generalizes commonly used ODE models based on linear or nonlinear dynamics by extending the functional class for the functions involved from parametric to nonparametric models. Concomitantly, we limit the complexity by imposing an additive structure on the estimated slope functions, so that the submodel associated with each node is a sum of univariate functions. These univariate component functions form the basis for a novel coupling metric that we define in order to quantify the strength of proposed relationships and hence rank potential edges. We show the utility of the method by reconstructing networks using simulated data from computational models for the glycolytic pathway of Lactococcus lactis and a gene network regulating the pluripotency of mouse embryonic stem cells. For purposes of comparison, we also assess reconstruction performance using gene networks from the DREAM challenges. We compare our method to those that similarly rely on dynamic systems models and use the results to attempt to disentangle the distinct roles of linearity, sparsity, and derivative
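A stripped-down version of the additive-ODE idea can be sketched as follows: estimate each node's slope by finite differences, fit it as a sum of univariate polynomial component functions (a crude stand-in for the nonparametric smoothers), and score each candidate edge by the spread of its fitted component. All modelling choices here (polynomial basis, degree, the standard-deviation score) are our simplifications, not the paper's estimator.

```python
import numpy as np

def additive_ode_edges(X, t, degree=3):
    """Score candidate edges i -> j by the spread of the univariate
    polynomial component f_ij(x_i) in an additive fit of dx_j/dt."""
    dX = np.gradient(X, t, axis=0)          # finite-difference slope estimates
    p = X.shape[1]
    # Polynomial features: columns [x_0..x_{p-1}, x_0^2 .., ..., x_0^deg ..]
    basis = np.concatenate([X ** d for d in range(1, degree + 1)], axis=1)
    scores = np.zeros((p, p))
    for j in range(p):
        coef, *_ = np.linalg.lstsq(basis, dX[:, j], rcond=None)
        for i in range(p):
            f_ij = basis[:, i::p] @ coef[i::p]   # component function of node i
            scores[i, j] = np.std(f_ij)          # coupling strength i -> j
    return scores
```

On a harmonic oscillator (dx0/dt = -x1, dx1/dt = x0) the score matrix correctly concentrates on the two cross edges.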
Adaptive Forward Modeling Method for Analysis and Reconstructions of Orientation Image Map
Frankie Li, Shiu Fai
2014-06-01
IceNine is an MPI-parallel orientation reconstruction and microstructure analysis code. Its primary purpose is to reconstruct a spatially resolved orientation map given a set of diffraction images from a high-energy X-ray diffraction microscopy (HEDM) experiment (1). In particular, IceNine implements the adaptive version of the forward modeling method (2, 3). Part of IceNine is a library used for combined analysis of the microstructure with the experimentally measured diffraction signal. The library is also designed for rapid prototyping of new reconstruction and analysis algorithms. IceNine is also built with a simulator of diffraction images from an input microstructure.
Chmiel, Z
1996-01-01
An original method for reconstruction of the A1 retinaculum of the flexor pollicis longus sheath with the extensor pollicis brevis tendon is presented. The reconstructed retinaculum is very strong. Loss of the extensor pollicis brevis did not impair thumb function.
Comparison of kinoform synthesis methods for image reconstruction in Fourier plane
NASA Astrophysics Data System (ADS)
Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Porshneva, Liudmila A.; Rodin, Vladislav G.; Starikov, Sergey N.
2014-05-01
A kinoform is a synthesized phase-only diffractive optical element that reconstructs an image when illuminated with a plane wave. Kinoforms are used in image-processing systems. For kinoform synthesis, iterative methods have become widespread because of the relatively small error of the resulting intensity distribution. There are articles in which two or three iterative methods are compared, but they use only one or a few test images. The goal of this work is to compare iterative methods using many test images of different types. Images were reconstructed in the Fourier plane from synthesized kinoforms displayed on a phase-only LCOS SLM. The quality of the reconstructed images and the computational resources of the methods were analyzed. Four kinoform synthesis methods were implemented in a programming environment: the Gerchberg-Saxton algorithm (GS), the Fienup algorithm (F), the adaptive-additive algorithm (AA) and the Gerchberg-Saxton algorithm with weight coefficients (GSW). To compare these methods, 50 test images with different characteristics were used: binary and grayscale, contour and non-contour. The resolution of the images varied from 64×64 to 1024×1024, and their occupancy ranged from 0.008 to 0.89. The synthesized kinoforms had 256 phase levels, equal to the number of phase levels of the LCOS SLM HoloEye PLUTO VIS. Numerical testing showed that the AA method provides the best quality of reconstructed images; the GS, F and GSW methods gave worse results that were roughly similar to each other. The execution time of a single iteration is lowest for the GS method and highest for the F method. The synthesized kinoforms were also reconstructed optically using the phase-only LCOS SLM HoloEye PLUTO VIS, and the results were compared with the numerical ones. The AA method again showed slightly better results than the other methods, especially for grayscale images.
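Of the four compared algorithms, the Gerchberg-Saxton iteration is the simplest to sketch: alternate between the kinoform plane, where the amplitude is forced to one (keeping only phase), and the Fourier plane, where the amplitude is forced to the target. The random-spot target and the iteration count below are arbitrary demo choices.

```python
import numpy as np

def gerchberg_saxton(target_amp, n_iter=50, seed=0):
    """Phase-only kinoform whose Fourier transform approximates the
    target amplitude (basic Gerchberg-Saxton iteration)."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, target_amp.shape)
    for _ in range(n_iter):
        field = np.fft.ifft2(target_amp * np.exp(1j * phase))
        kinoform = np.angle(field)               # keep phase, drop amplitude
        recon = np.fft.fft2(np.exp(1j * kinoform))
        phase = np.angle(recon)                  # update Fourier-plane phase
    return kinoform
```

The Fienup, adaptive-additive and weighted variants differ mainly in how the Fourier-plane amplitude constraint is relaxed or reweighted between iterations.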
Shang, Shang; Bai, Jing; Song, Xiaolei; Wang, Hongkai; Lau, Jaclyn
2007-01-01
The conjugate gradient method is known to be efficient for nonlinear optimization problems over large-dimensional data. In this paper, a penalized linear and nonlinear combined conjugate gradient method for the reconstruction of fluorescence molecular tomography (FMT) is presented. The algorithm combines the linear and nonlinear conjugate gradient methods through a restart strategy, in order to take advantage of both kinds of conjugate gradient method while compensating for their disadvantages. A quadratic penalty method is adopted to enforce a nonnegativity constraint and reduce the ill-posedness of the problem. Simulation studies show that the presented algorithm is accurate, stable, and fast, and that it performs better than conventional conjugate gradient-based reconstruction algorithms. It offers an effective approach to reconstructing fluorochrome information for FMT.
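The linear half of such a combined scheme is ordinary linear conjugate gradient on a symmetric positive-definite system. The Tikhonov-style regularized normal equations in the demo are our illustrative stand-in for the FMT system matrix, not the paper's penalized formulation.

```python
import numpy as np

def conjugate_gradient(H, g, n_iter=50, tol=1e-16):
    """Linear CG for Hx = g, H symmetric positive definite."""
    x = np.zeros_like(g)
    r = g - H @ x
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Hp = H @ p
        alpha = rs / (p @ Hp)        # exact line search along p
        x += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p    # H-conjugate search direction
        rs = rs_new
    return x
```

In exact arithmetic CG terminates in at most n steps for an n-dimensional system, which is why it is attractive as the fast inner solver of a restarted linear/nonlinear scheme.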
Hwang, Euna; Kim, Young Soo; Chung, Seum
2014-06-01
Before visiting a plastic surgeon, some microtia patients may undergo canaloplasty for hearing improvement. In such cases, scarred tissues and the reconstructed external auditory canal in the postauricular area may significantly limit the use of the posterior auricular skin flap for ear reconstruction. In this article, we present a new method for auricular reconstruction in microtia patients with previous canaloplasty. By dividing a postauricular skin flap into an upper scalp extended skin flap and a lower mastoid extended skin flap at the level of the reconstructed external auditory canal, the entire anterior surface of the auricular framework can be covered with the two extended postauricular skin flaps. The reconstructed ear shows good color match and texture, with the entire anterior surface of the reconstructed ear being resurfaced with the skin flaps. Clinical question/level of evidence: therapeutic, level IV.
NASA Astrophysics Data System (ADS)
Xu, Min; He, Kang-Lin; Zhang, Zi-Ping; Wang, Yi-Fang; Bian, Jian-Ming; Cao, Guo-Fu; Cao, Xue-Xiang; Chen, Shen-Jian; Deng, Zi-Yan; Fu, Cheng-Dong; Gao, Yuan-Ning; Han, Lei; Han, Shao-Qing; He, Miao; Hu, Ji-Feng; Hu, Xiao-Wei; Huang, Bin; Huang, Xing-Tao; Jia, Lu-Kui; Ji, Xiao-Bin; Li, Hai-Bo; Li, Wei-Dong; Liang, Yu-Tie; Liu, Chun-Xiu; Liu, Huai-Min; Liu, Ying; Liu, Yong; Luo, Tao; Lü, Qi-Wen; Ma, Qiu-Mei; Ma, Xiang; Mao, Ya-Jun; Mao, Ze-Pu; Mo, Xiao-Hu; Ning, Fei-Peng; Ping, Rong-Gang; Qiu, Jin-Fa; Song, Wen-Bo; Sun, Sheng-Sen; Sun, Xiao-Dong; Sun, Yong-Zhao; Tian, Hao-Lai; Wang, Ji-Ke; Wang, Liang-Liang; Wen, Shuo-Pin; Wu, Ling-Hui; Wu, Zhi; Xie, Yu-Guang; Yan, Jie; Yan, Liang; Yao, Jian; Yuan, Chang-Zheng; Yuan, Ye; Zhang, Chang-Chun; Zhang, Jian-Yong; Zhang, Lei; Zhang, Xue-Yao; Zhang, Yao; Zheng, Yang-Heng; Zhu, Yong-Sheng; Zou, Jia-Heng
2009-06-01
This paper focuses mainly on the vertex reconstruction of resonance particles with a relatively long lifetime such as K0S, Λ, as well as on lifetime measurements using a 3-dimensional fit. The kinematic constraints between the production and decay vertices and the decay vertex fitting algorithm based on the least squares method are both presented. Reconstruction efficiencies including experimental resolutions are discussed. The results and systematic errors are calculated based on a Monte Carlo simulation.
Glass, Nel; Davis, Kierrynn
2004-01-01
Nursing research informed by postmodern feminist perspectives has prompted many debates in recent times. While this is so, nurse researchers who have been tempted to break new ground have had few examples of appropriate analytical methods for a research design informed by the above perspectives. This article presents a deconstructive/reconstructive secondary analysis of a postmodern feminist ethnography in order to provide an analytical exemplar. In doing so, previous notions of vulnerability as a negative state have been challenged and reconstructed.
Novel l2,1-norm optimization method for fluorescence molecular tomography reconstruction
Jiang, Shixin; Liu, Jie; An, Yu; Zhang, Guanglei; Ye, Jinzuo; Mao, Yamin; He, Kunshan; Chi, Chongwei; Tian, Jie
2016-01-01
Fluorescence molecular tomography (FMT) is a promising tomographic method in preclinical research, which enables noninvasive real-time three-dimensional (3-D) visualization for in vivo studies. The ill-posedness of the FMT reconstruction problem is one of the many challenges in the studies of FMT. In this paper, we propose a l2,1-norm optimization method using a priori information, mainly the structured sparsity of the fluorescent regions for FMT reconstruction. Compared to standard sparsity methods, the structured sparsity methods are often superior in reconstruction accuracy since the structured sparsity utilizes correlations or structures of the reconstructed image. To solve the problem effectively, the Nesterov’s method was used to accelerate the computation. To evaluate the performance of the proposed l2,1-norm method, numerical phantom experiments and in vivo mouse experiments are conducted. The results show that the proposed method not only achieves accurate and desirable fluorescent source reconstruction, but also demonstrates enhanced robustness to noise. PMID:27375949
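The mechanism that distinguishes the l2,1 norm from plain l1 is group-wise shrinkage: its proximal operator scales each row (group) by a common factor, so a group is either kept or zeroed as a whole. A minimal sketch, where grouping-by-rows is our illustrative layout:

```python
import numpy as np

def prox_l21(X, tau):
    """Proximal operator of tau * ||X||_{2,1} with rows as groups:
    each row is shrunk jointly toward zero by its l2 norm."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return scale * X
```

Rows with norm below tau vanish entirely, which is how structured sparsity keeps or discards a whole fluorescent region rather than isolated voxels; Nesterov-type acceleration wraps a gradient step around exactly this operator.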
NASA Astrophysics Data System (ADS)
Zhu, Yansong; Jha, Abhinav K.; Dreyer, Jakob K.; Le, Hanh N. D.; Kang, Jin U.; Roland, Per E.; Wong, Dean F.; Rahmim, Arman
2017-02-01
Fluorescence molecular tomography (FMT) is a promising tool for real time in vivo quantification of neurotransmission (NT) as we pursue in our BRAIN initiative effort. However, the acquired image data are noisy and the reconstruction problem is ill-posed. Further, while spatial sparsity of the NT effects could be exploited, traditional compressive-sensing methods cannot be directly applied as the system matrix in FMT is highly coherent. To overcome these issues, we propose and assess a three-step reconstruction method. First, truncated singular value decomposition is applied on the data to reduce matrix coherence. The resultant image data are input to a homotopy-based reconstruction strategy that exploits sparsity via l1 regularization. The reconstructed image is then input to a maximum-likelihood expectation maximization (MLEM) algorithm that retains the sparseness of the input estimate and improves upon the quantitation by accurate Poisson noise modeling. The proposed reconstruction method was evaluated in a three-dimensional simulated setup with fluorescent sources in a cuboidal scattering medium with optical properties simulating human brain cortex (reduced scattering coefficient: 9.2 cm⁻¹, absorption coefficient: 0.1 cm⁻¹) and tomographic measurements made using pixelated detectors. In different experiments, fluorescent sources of varying size and intensity were simulated. The proposed reconstruction method provided accurate estimates of the fluorescent source intensity, with a 20% lower root mean square error on average compared to the pure-homotopy method for all considered source intensities and sizes. Further, compared with the conventional l2 regularized algorithm, the proposed method overall reconstructed a substantially more accurate fluorescence distribution. The proposed method shows considerable promise and will be tested using more realistic simulations and experimental setups.
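The first step, coherence reduction via truncated SVD, can be sketched independently of the homotopy and MLEM stages. The matrix sizes and the choice of truncation level k below are illustrative, not the paper's settings.

```python
import numpy as np

def tsvd_precondition(A, b, k):
    """Project the system onto its k leading singular directions; the
    transformed rows are mutually orthogonal, reducing matrix coherence."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    A_new = np.diag(s[:k]) @ Vt[:k]   # reduced, better-conditioned system
    b_new = U[:, :k].T @ b            # data expressed in the same basis
    return A_new, b_new
```

Any x satisfying Ax = b also satisfies A_new x = b_new, so the sparse solver downstream sees an equivalent but less coherent problem.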
Hong Luo; Luqing Luo; Robert Nourgaliev; Vincent Mousseau
2009-06-01
A reconstruction-based discontinuous Galerkin (DG) method is presented for the solution of the compressible Euler equations on arbitrary grids. By taking advantage of handily available and yet invaluable information, namely the derivatives, in the context of the discontinuous Galerkin methods, a solution polynomial of one degree higher is reconstructed using a least-squares method. The stencils used in the reconstruction involve only the von Neumann neighborhood (face-neighboring cells) and are compact and consistent with the underlying DG method. The resulting DG method can be regarded as an improvement of a recovery-based DG method in the sense that it shares the same nice features as the recovery-based DG method, such as high accuracy and efficiency, and yet overcomes some of its shortcomings such as a lack of flexibility, compactness, and robustness. The developed DG method is used to compute a variety of flow problems on arbitrary meshes to demonstrate the accuracy and efficiency of the method. The numerical results indicate that this reconstructed DG method is able to obtain a third-order accurate solution at a slightly higher cost than its second-order DG method and provides an increase in performance over the third-order DG method in terms of computing time and storage requirement.
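In one spatial dimension the reconstruction step has a closed form: given the cell means a_i and slope coefficients b_i of a P1 DG solution, a quadratic coefficient can be recovered by least-squares matching of the means and slopes of the two face neighbours (the von Neumann-style compact stencil). The uniform mesh and Legendre-like basis scaling below are our simplifications for illustration.

```python
import numpy as np

def reconstruct_quadratic(a, b):
    """Least-squares quadratic coefficient c_i from a 1-D P1 DG solution
    u_i(xi) = a_i + b_i*xi + c_i*(xi^2 - 1/12) on unit reference cells,
    matching neighbour means and slopes in the least-squares sense."""
    c = np.zeros_like(a)
    # Interior cells only; the normal-equation solution of the 4 conditions
    # (mean and slope of each face neighbour) collapses to this stencil.
    c[1:-1] = (a[2:] + a[:-2] - 2.0 * a[1:-1]
               + 2.0 * (b[2:] - b[:-2])) / 10.0
    return c
```

For exact P1 data sampled from u(x) = x², the stencil returns the exact quadratic coefficient h² on every interior cell, illustrating the one-degree-higher accuracy gain.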
An adaptive total variation image reconstruction method for speckles through disordered media
NASA Astrophysics Data System (ADS)
Gong, Changmei; Shao, Xiaopeng; Wu, Tengfei
2013-09-01
Multiple scattering of light in a highly disordered medium can break the diffraction limit of a conventional optical system when combined with image reconstruction methods. Once the transmission matrix of the imaging system is obtained, the target image can be reconstructed from its speckle pattern by an image reconstruction algorithm. Nevertheless, images restored by common reconstruction algorithms such as Tikhonov regularization have a relatively low signal-to-noise ratio (SNR) due to experimental and reconstruction noise, greatly reducing the quality of the resulting image. In this paper, the speckle pattern of the test image is simulated by combining light propagation theory with statistical optics. Subsequently, an adaptive total variation (ATV) algorithm, TV minimization by augmented Lagrangian and alternating direction algorithms (TVAL3), is utilized to reconstruct the target image. Numerical simulation results show that the TVAL3 algorithm can effectively suppress the noise of the restored image and preserve more image details, thus greatly boosting the SNR of the restored image. They also indicate that, compared with the image formed directly by a 'clean' system, the reconstructed results can overcome the diffraction limit of the 'clean' system, which is conducive to the observation of cells, protein molecules in biological tissues and other structures at the micro/nano scale.
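TVAL3 itself is an augmented-Lagrangian solver; as a much simpler stand-in that shows why a TV prior suppresses noise while preserving edges, here is smoothed-TV denoising by plain gradient descent. The periodic boundaries, the smoothing parameter beta, and the hand-picked lam and step size are our assumptions throughout.

```python
import numpy as np

def tv_denoise(img, lam=0.2, beta=0.1, step=0.05, n_iter=500):
    """Minimize 0.5*||u - img||^2 + lam*TV_beta(u) by gradient descent,
    where TV_beta uses sqrt(ux^2 + uy^2 + beta^2) to stay differentiable."""
    u = img.copy()
    for _ in range(n_iter):
        ux = np.roll(u, -1, axis=1) - u                  # forward differences
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux**2 + uy**2 + beta**2)
        px, py = ux / mag, uy / mag                      # normalized flux
        div = (px - np.roll(px, 1, axis=1)
               + py - np.roll(py, 1, axis=0))            # backward-difference div
        u -= step * ((u - img) - lam * div)              # fidelity + TV gradient
    return u
```

Because the flux is normalized by the local gradient magnitude, smoothing is strong in flat noisy regions and weak across large jumps, which is the edge-preserving behaviour the abstract attributes to the TV prior.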
Real-Time Compressive Sensing MRI Reconstruction Using GPU Computing and Split Bregman Methods.
Smith, David S; Gore, John C; Yankeelov, Thomas E; Welch, E Brian
2012-01-01
Compressive sensing (CS) has been shown to enable dramatic acceleration of MRI acquisition in some applications. Being an iterative reconstruction technique, CS MRI reconstructions can be more time-consuming than traditional inverse Fourier reconstruction. We have accelerated our CS MRI reconstruction by factors of up to 27 by using a split Bregman solver combined with a graphics processing unit (GPU) computing platform. The increases in speed we find are similar to those we measure for matrix multiplication on this platform, suggesting that the split Bregman methods parallelize efficiently. We demonstrate that the combination of the rapid convergence of the split Bregman algorithm and the massively parallel strategy of GPU computing can enable real-time CS reconstruction of even acquisition data matrices of dimension 4096² or more, depending on available GPU VRAM. Reconstruction of two-dimensional data matrices of dimension 1024² and smaller took ~0.3 s or less, showing that this platform also provides very fast iterative reconstruction for small-to-moderate size images.
Computational intelligence for the eukaryotic cell-cycle gene network.
Wu, Shinq-Jen; Wu, Cheng-Tao; Lee, Tsu-Tian
2006-01-01
Computational intelligence approaches are adopted to construct the S-system of the eukaryotic cell cycle for further analysis of genetic regulatory networks. A highly nonlinear power-law differential equation is constructed to describe the transcriptional regulation of the gene network from time-course data. A global artificial algorithm based on hybrid differential evolution can achieve global optimization for this highly nonlinear differential gene network model. The constructed gene regulatory networks will serve as a reference for researchers seeking to identify the inhibitory and activatory operators for gene synthesis and decomposition in the eukaryotic cell cycle.
A novel digital tomosynthesis (DTS) reconstruction method using a deformation field map.
Ren, Lei; Zhang, Junan; Thongphiew, Danthai; Godfrey, Devon J; Wu, Q Jackie; Zhou, Su-Min; Yin, Fang-Fang
2008-07-01
We developed a novel digital tomosynthesis (DTS) reconstruction method using a deformation field map to optimally estimate volumetric information in DTS images. The deformation field map is solved by using prior information, a deformation model, and new projection data. Patients' previous cone-beam CT (CBCT) or planning CT data are used as the prior information, and the new patient volume to be reconstructed is considered as a deformation of the prior patient volume. The deformation field is solved by minimizing bending energy and maintaining new projection data fidelity using a nonlinear conjugate gradient method. The new patient DTS volume is then obtained by deforming the prior patient CBCT or CT volume according to the solution to the deformation field. This method is novel because it is the first to combine deformable registration with limited-angle image reconstruction. The method was tested in 2D cases using simulated projections of a Shepp-Logan phantom, liver, and head-and-neck patient data. The accuracy of the reconstruction was evaluated by comparing both organ volume and pixel value differences between DTS and CBCT images. In the Shepp-Logan phantom study, the reconstructed pixel signal-to-noise ratio (PSNR) for the 60 degrees DTS image reached 34.3 dB. In the liver patient study, the relative error of the liver volume reconstructed using 60 degrees projections was 3.4%. The reconstructed PSNR for the 60 degrees DTS image reached 23.5 dB. In the head-and-neck patient study, the new method using 60 degrees projections was able to reconstruct the 8.1 degrees rotation of the bony structure with 0.0 degrees error. The reconstructed PSNR for the 60 degrees DTS image reached 24.2 dB. In summary, the new reconstruction method can optimally estimate the volumetric information in DTS images using 60 degrees projections. Preliminary validation of the algorithm showed that it is both technically and clinically feasible for image guidance in radiation therapy.
Wang, Jinguo; Zhao, Zhiqin Song, Jian; Chen, Guoping; Nie, Zaiping; Liu, Qing-Huo
2015-05-15
Purpose: An iterative reconstruction method was previously reported by the authors of this paper, but it was demonstrated solely with numerical simulations, and it is essential to apply it under practical conditions. The objective of this work is to validate the capability of the iterative reconstruction method to reduce the effects of acoustic heterogeneity using experimental data in microwave-induced thermoacoustic tomography. Methods: Most existing reconstruction methods must incorporate ultrasonic measurement technology to quantitatively measure the velocity distribution of the heterogeneity, which increases system complexity. Unlike existing reconstruction methods, the iterative reconstruction method combines the time reversal mirror technique, the fast marching method, and the simultaneous algebraic reconstruction technique to iteratively estimate the velocity distribution of heterogeneous tissue solely from the measured data. The estimated velocity distribution is then used to reconstruct a highly accurate image of the microwave absorption distribution. Experiments in which a target is placed in an acoustically heterogeneous environment are performed to validate the method. Results: Using the estimated velocity distribution, a target in an acoustically heterogeneous environment can be reconstructed with better shape and higher image contrast than targets reconstructed with a homogeneous velocity distribution. Conclusions: The distortions caused by acoustic heterogeneity can be efficiently corrected by utilizing the velocity distribution estimated by the iterative reconstruction method. Its advantage over existing correction methods is that it improves the quality of the image of the microwave absorption distribution without increasing system complexity.
Hong Luo; Luqing Luo; Robert Nourgaliev; Vincent A. Mousseau
2010-09-01
A reconstruction-based discontinuous Galerkin (RDG) method is presented for the solution of the compressible Navier–Stokes equations on arbitrary grids. The RDG method, originally developed for the compressible Euler equations, is extended to discretize viscous and heat fluxes in the Navier–Stokes equations using a so-called inter-cell reconstruction, where a smooth solution is locally reconstructed using a least-squares method from the underlying discontinuous DG solution. Similar to the recovery-based DG (rDG) methods, this reconstructed DG method eliminates the introduction of ad hoc penalty or coupling terms commonly found in traditional DG methods. Unlike rDG methods, this RDG method does not need to judiciously choose a proper form of a recovered polynomial, thus is simple, flexible, and robust, and can be used on arbitrary grids. The developed RDG method is used to compute a variety of flow problems on arbitrary meshes to demonstrate its accuracy, efficiency, robustness, and versatility. The numerical results indicate that this RDG method is able to deliver the same accuracy as the well-known Bassi–Rebay II scheme, at a half of its computing costs for the discretization of the viscous fluxes in the Navier–Stokes equations, clearly demonstrating its superior performance over the existing DG methods for solving the compressible Navier–Stokes equations.
Image reconstruction in EIT with unreliable electrode data using random sample consensus method
NASA Astrophysics Data System (ADS)
Jeon, Min Ho; Khambampati, Anil Kumar; Kim, Bong Seok; In Kang, Suk; Kim, Kyung Youn
2017-04-01
In electrical impedance tomography (EIT), it is important to acquire reliable measurement data through the EIT system in order to achieve a good reconstructed image. To obtain reliable data, various methods for checking and optimizing the EIT measurement system have been studied. However, most of these methods involve additional testing cost, and the measurement setup is usually evaluated only before the experiment. It is therefore useful to have a method that can detect faulty electrode data during the experiment without any additional cost. This paper presents a method based on random sample consensus (RANSAC) to find incorrect data from faulty electrodes in EIT measurements. RANSAC is a robust fitting method that removes outliers from the measurement data. It is applied together with the Gauss-Newton (GN) method for image reconstruction of a human thorax with faulty data. Numerical and phantom experiments are performed, and the reconstruction performance of the proposed RANSAC-with-GN method is compared with the conventional GN method. The results show that RANSAC with GN gives better reconstruction performance than the conventional GN method in the presence of faulty electrode data.
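The RANSAC principle the method builds on is easy to state for a toy line-fitting problem: repeatedly fit a minimal sample, count inliers under a residual threshold, and refit on the largest consensus set. The threshold and iteration budget below are arbitrary demo values, and a straight line stands in for the EIT measurement model.

```python
import numpy as np

def ransac_line(x, y, n_iter=200, threshold=0.1, seed=0):
    """Robust fit of y = a*x + b: random minimal samples, keep the model
    with the largest inlier consensus, then refit on the inliers only."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(x), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue                               # degenerate minimal sample
        a = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - a * x[i]
        inliers = np.abs(y - (a * x + b)) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on the consensus set only, excluding the faulty outliers.
    a, b = np.polyfit(x[best_inliers], y[best_inliers], 1)
    return a, b, best_inliers
```

In the EIT application, the same consensus idea flags the measurements from a faulty electrode as outliers before the Gauss-Newton reconstruction sees them.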
Assessment of three methods of geometric image reconstruction for digital subtraction radiography.
Queiroz, Polyane M; Oliveira, Matheus L; Tanaka, Jefferson L O; Soares, Milton G; Haiter-Neto, Francisco; Ono, Evelise
To evaluate three methods of geometric image reconstruction for digital subtraction radiography (DSR). Digital periapical radiographs were acquired of 24 teeth with the X-ray tube at 6 different geometric configurations of vertical (V) and horizontal (H) angles: V0°H0°, V0°H10°, V10°H0°, V10°H10°, V20°H0° and V20°H10°. All 144 images were registered in pairs (Group V0°H0° + 1 of the 6 groups) 3 times by using the Emago(®) (Oral Diagnostic Systems, Amsterdam, Netherlands) with manual selection and Regeemy with manual and automatic selections. After geometric reconstruction on the two software applications under different modes of selection, all images were subtracted and the standard deviation of grey values was obtained as a measure of image noise. All measurements were repeated after 15 days to evaluate the method error. Values of image noise were statistically analyzed by one-way ANOVA for differences between methods and between projection angles, followed by Tukey's test at a level of significance of 5%. Significant differences were found between most of the projection angles for the three reconstruction methods. Image subtraction after manual selection-based reconstruction on Regeemy presented the lowest values of image noise, except on group V0°H0°. The groups V10°H0° and V20°H0° were not significantly different between the manual selection-based reconstruction in Regeemy and automatic selection-based reconstruction in Regeemy methods. The Regeemy software on manual mode revealed better quality of geometric image reconstruction for DSR than the Regeemy on automatic mode and the Emago on manual mode, when the radiographic images were obtained at V and H angles used in the present investigation.
NASA Astrophysics Data System (ADS)
Torrelles, X.; Rius, J.; Boscherini, F.; Heun, S.; Mueller, B. H.; Ferrer, S.; Alvarez, J.; Miravitlles, C.
1998-02-01
The projections of surface reconstructions are normally solved from the interatomic vectors found in two-dimensional Patterson maps computed with the intensities of the in-plane superstructure reflections. Since for difficult reconstructions this procedure is not trivial, an alternative automated one based on the "direct methods" sum function [Rius, Miravitlles, and Allmann, Acta Crystallogr. A52, 634 (1996)] is shown. It has been applied successfully to the known c(4×2) reconstruction of Ge(001) and to the so-far unresolved In0.04Ga0.96As (001) p(4×2) surface reconstruction. For this last system we propose a modification of one of the models previously proposed for GaAs(001), whose characteristic feature is the presence of dimers along the fourfold direction.
Free flaps in orbital exenteration: a safe and effective method for reconstruction.
López, Fernando; Suárez, Carlos; Carnero, Susana; Martín, Clara; Camporro, Daniel; Llorente, José L
2013-05-01
The aim of this study was to investigate the course of reconstructive treatment and outcomes with use of free flaps after orbital exenteration for malignancy. Charts of patients who had free flap reconstruction after orbital exenteration were retrospectively reviewed and the surgical technique was evaluated. Demographics, histology, surgical management, complications, locoregional control, and survival were analyzed. We performed 22 flaps in 21 patients. Reconstruction was undertaken mainly with anterolateral thigh (56 %), radial forearm (22 %), or parascapular (22 %) free flaps. Complications occurred in 33 % of patients and the flap's success rate was 91 %. The 5-year locoregional control and survival rates were 42 and 37 %, respectively. Free tissue transfer is a reliable, safe, and effective method for repair of defects of the orbit and periorbital structures resulting from oncologic resection. The anterolateral thigh flap is a versatile option to reconstruct the many orbital defects encountered.
NASA Astrophysics Data System (ADS)
Jiang, Mingfeng; Zhang, Heng; Zhu, Lingyan; Cao, Li; Wang, Yaming; Xia, Ling; Gong, Yinglan
2015-04-01
Non-invasive reconstruction of cardiac transmembrane potentials (TMPs) from body surface potentials can be cast as a regression problem. The support vector regression (SVR) method is often used to solve such regression problems; however, the SVR training algorithm is usually computationally intensive. In this paper, another learning algorithm, termed the extreme learning machine (ELM), is proposed to reconstruct the cardiac TMPs. Moreover, ELM can be extended to single-hidden-layer feedforward neural networks with a kernel matrix (kernelized ELM), which achieves good generalization performance at a fast learning speed. Based on realistic heart-torso models, one normal and two abnormal ventricular activation cases are used for training and testing the regression model. The experimental results show that the ELM method outperforms the single SVR method in terms of TMP reconstruction accuracy and reconstruction speed. Moreover, compared with the basic ELM method, the kernelized ELM method features good approximation and generalization ability when reconstructing the TMPs.
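The kernelized variant mentioned above replaces the random hidden layer of a basic ELM with a kernel matrix, after which training reduces to one linear solve. A minimal sketch of kernel-ELM regression with an RBF kernel follows; the kernel, parameters, and function names are our own illustration, not the authors' TMP pipeline:

```python
import numpy as np

def kelm_fit(X, y, gamma=1.0, C=1e3):
    """Kernelized ELM regression: with kernel matrix K over the training
    inputs, the output weights solve (I/C + K) alpha = y, where C is the
    regularization trade-off."""
    K = np.exp(-gamma * ((X[:, None] - X[None, :]) ** 2).sum(-1))
    alpha = np.linalg.solve(np.eye(len(X)) / C + K, y)
    return X, alpha, gamma

def kelm_predict(model, Xnew):
    """Prediction is a kernel expansion over the training points."""
    Xtr, alpha, gamma = model
    K = np.exp(-gamma * ((Xnew[:, None] - Xtr[None, :]) ** 2).sum(-1))
    return K @ alpha
```

Because training is a single dense solve rather than an iterative optimization, the learning speed advantage over SVR claimed in the abstract is plausible for moderate training-set sizes.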
Nonlinear PET parametric image reconstruction with MRI information using kernel method
NASA Astrophysics Data System (ADS)
Gong, Kuang; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi
2017-03-01
Positron Emission Tomography (PET) is a functional imaging modality widely used in oncology, cardiology, and neurology. It is highly sensitive, but suffers from relatively poor spatial resolution, as compared with anatomical imaging modalities, such as magnetic resonance imaging (MRI). With the recent development of combined PET/MR systems, we can improve the PET image quality by incorporating MR information. Previously we have used kernel learning to embed MR information in static PET reconstruction and direct Patlak reconstruction. Here we extend this method to direct reconstruction of nonlinear parameters in a compartment model by using the alternating direction of multiplier method (ADMM) algorithm. Simulation studies show that the proposed method can produce superior parametric images compared with existing methods.
NASA Astrophysics Data System (ADS)
Jia, Lu-Kui; Mao, Ze-Pu; Li, Wei-Dong; Cao, Guo-Fu; Cao, Xue-Xiang; Deng, Zi-Yan; He, Kang-Lin; Liu, Chun-Yan; Liu, Huai-Min; Liu, Qiu-Guang; Ma, Qiu-Mei; Ma, Xiang; Qiu, Jin-Fa; Tian, Hao-Lai; Wang, Ji-Ke; Wu, Ling-Hui; Yuan, Ye; Zang, Shi-Lei; Zhang, Chang-Chun; Zhang, Lei; Zhang, Yao; Zhu, Kai; Zou, Jia-Heng
2010-12-01
In order to overcome the difficulty posed by circling charged tracks with transverse momentum below 120 MeV in the BESIII Main Drift Chamber (MDC), a specialized method called TCurlFinder was developed. This tracking method focuses on reconstructing charged tracks below 120 MeV and possesses a special mechanism to reject background noise hits. The performance of the package has been carefully checked and tuned with both Monte Carlo data and real data. The study shows that this tracking method significantly enhances the reconstruction efficiency in the low transverse momentum region, providing physics analyses with more abundant and reliable data.
A method for detecting the best reconstructing distance in phase-shifting digital holography
NASA Astrophysics Data System (ADS)
Cao, Wen-Bo; Su, Ping; Ma, Jian-She; Liang, Xian-Ting
2014-09-01
In this paper, we propose a novel method to detect the best reconstructing distance in phase-shifting digital holography, which helps reconstruct high-quality images even when the recording distance is unknown. The scheme is based on the two-dimensional discrete cosine transform (DCT). Numerical experiments show that this method is not only effective but also fast compared to previous schemes for detecting the focal distance in digital holography. Moreover, the algorithm is robust against different types of noise.
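The abstract does not spell out its DCT criterion, but a common autofocus heuristic of this kind scores each candidate reconstruction distance by the high-frequency energy of the reconstructed image's 2-D DCT and picks the distance that maximizes it. A hedged sketch of such a sharpness score (the cutoff choice is ours):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (rows index frequency, columns samples)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C *= np.sqrt(2.0 / n)
    C[0] *= np.sqrt(0.5)
    return C

def dct_sharpness(img, cutoff=4):
    """Focus score: energy of the 2-D DCT coefficients outside the
    low-frequency corner. In-focus reconstructions contain more
    high-frequency content, so the score peaks near the true
    reconstruction distance."""
    D = dct_matrix(img.shape[0]) @ img @ dct_matrix(img.shape[1]).T
    E = D ** 2
    return E.sum() - E[:cutoff, :cutoff].sum()
```

Scanning this score over a range of candidate distances, and reconstructing only at the maximizer, is the generic form of the autofocus search the abstract describes.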
Hong Luo; Yidong Xia; Robert Nourgaliev; Chunpei Cai
2011-06-01
A reconstruction-based discontinuous Galerkin (RDG) method is presented for the solution of the compressible Navier-Stokes equations on unstructured tetrahedral grids. The RDG method, originally developed for the compressible Euler equations, is extended to discretize viscous and heat fluxes in the Navier-Stokes equations using a so-called inter-cell reconstruction, where a smooth solution is locally reconstructed using a least-squares method from the underlying discontinuous DG solution. Similar to the recovery-based DG (rDG) methods, this reconstructed DG method eliminates the introduction of ad hoc penalty or coupling terms commonly found in traditional DG methods. Unlike rDG methods, this RDG method does not need to judiciously choose a proper form of a recovered polynomial, thus is simple, flexible, and robust, and can be used on unstructured grids. The preliminary results indicate that this RDG method is stable on unstructured tetrahedral grids, and provides a viable and attractive alternative for the discretization of the viscous and heat fluxes in the Navier-Stokes equations.
Zhang, Shuang; Wang, Kun; Liu, Hongbo; Leng, Chengcai; Gao, Yuan; Tian, Jie
2017-04-01
Bioluminescence tomography (BLT) can provide in vivo three-dimensional (3D) images for quantitative analysis of biological processes in preclinical small-animal studies, which is superior to conventional planar bioluminescence imaging. However, to reconstruct light sources under the skin in 3D with the desired accuracy and efficiency, BLT must face an ill-posed and ill-conditioned inverse problem. In this paper, we developed a new method for BLT reconstruction that utilizes the mathematical strategies of the split Bregman iteration and surrogate functions (SBISF) method. The proposed method exploits the sparsity of the reconstructed sources: the sparsity itself is regarded as a priori information, and sparse regularization is incorporated, which can accurately locate the positions of the sources. Numerical simulation experiments of multisource cases with comparative analyses were performed to evaluate the performance of the proposed method. Then, a bead-implanted mouse and a breast cancer xenograft mouse model were employed to validate the feasibility of this method in in vivo experiments. The results of both simulation and in vivo experiments indicated that, compared with the L1-norm iteration shrinkage method and the non-monotone spectral projected gradient pursuit method, the proposed SBISF method provided the smallest position error with the least time consumption. The SBISF method achieves high accuracy and high efficiency in BLT reconstruction and holds great potential for making BLT more practical in small-animal studies.
NASA Astrophysics Data System (ADS)
Yamaguchi, Yusaku; Kojima, Takeshi; Yoshinaga, Tetsuya
2016-03-01
In clinical X-ray computed tomography (CT), filtered back-projection as a transform method and iterative reconstruction such as the maximum-likelihood expectation-maximization (ML-EM) method are known methods for reconstructing tomographic images. As an alternative, we have presented a continuous-time image reconstruction (CIR) system described by a nonlinear dynamical system, based on the idea of continuous methods for solving tomographic inverse problems. Recently, we have also proposed a multiplicative CIR system described by differential equations based on the minimization of a weighted Kullback-Leibler divergence. We prove theoretically that the divergence measure decreases along the solution to the CIR system for consistent inverse problems. Given the noisy nature of projections in clinical CT, the inverse problem belongs to the category of ill-posed problems. The noise-reduction performance of the newly developed CIR system was investigated by numerical experiments using a circular phantom image. Compared to the conventional CIR and ML-EM methods, the proposed CIR method has an advantage on noisy projections with lower signal-to-noise ratios, in terms of the divergence measure on the actual image under the same common measure observed via the projection data. The results lead to the conclusion that the multiplicative CIR method is more effective and robust for noise reduction in CT than the ML-EM and conventional CIR methods.
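For reference, the ML-EM baseline the authors compare against is the classic multiplicative update; it is itself the discrete-time counterpart of the multiplicative continuous-time systems discussed above. A minimal sketch on a small dense system (not a clinical CT geometry):

```python
import numpy as np

def ml_em(A, y, n_iter=2000):
    """Classic ML-EM update for y ≈ A x with Poisson noise:
    x <- x * A^T(y / (A x)) / A^T 1.
    The update is multiplicative, so x stays non-negative, and the
    Kullback-Leibler divergence between y and A x decreases."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                      # sensitivity image A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)  # guard against divide-by-zero
        x *= (A.T @ ratio) / sens
    return x
```

For consistent (noiseless) data the iterates drive A x toward y; with noisy projections the late iterates fit the noise, which is the behavior the CIR comparison in the abstract targets.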
A new method to reconstruct intra-fractional prostate motion in volumetric modulated arc therapy
NASA Astrophysics Data System (ADS)
Chi, Y.; Rezaeian, N. H.; Shen, C.; Zhou, Y.; Lu, W.; Yang, M.; Hannan, R.; Jia, X.
2017-07-01
Intra-fractional motion is a concern during prostate radiation therapy, as it may cause deviations between planned and delivered radiation doses. Because accurate motion information during treatment delivery is critical to address dose deviation, we developed the projection marker matching method (PM3), a novel method for prostate motion reconstruction in volumetric modulated arc therapy. The purpose of this method is to reconstruct the in-treatment prostate motion trajectory using projected positions of implanted fiducial markers measured in kV x-ray projection images acquired during treatment delivery. We formulated this task as a quadratic optimization problem. The objective function penalized the distance from the reconstructed 3D position of each fiducial marker to the corresponding straight line defined by the x-ray projection of the marker. Rigid translational motion of the prostate and motion smoothness along the temporal dimension were assumed and incorporated into the optimization model. We tested the motion reconstruction method in both simulation and phantom experimental studies. We quantified the accuracy using the 3D normalized root-mean-square (RMS) error, defined as the norm of a vector containing ratios between the absolute RMS errors and corresponding motion ranges in three dimensions. In the simulation study with realistic prostate motion trajectories, the 3D normalized RMS error was on average ~0.164 (range 0.097 to 0.333). In an experimental study, a prostate phantom was driven along a realistic prostate motion trajectory; the 3D normalized RMS error was ~0.172. We also examined the impact of the model parameters on reconstruction accuracy, and found that a single set of parameters can be used for all the tested cases to accurately reconstruct the motion trajectories. The motion trajectory derived by PM3 may be incorporated into novel strategies, including 4D dose reconstruction and adaptive treatment replanning to address motion
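The geometric core of that optimization, penalizing the distance from a 3D marker position to the back-projection ray of its measured 2D projection, reduces (without the temporal smoothness terms) to a small linear least-squares problem per time point. An illustrative sketch in our own notation, not the authors' full PM3 model:

```python
import numpy as np

def triangulate(points, dirs):
    """Least-squares 3D position closest to a set of back-projection rays
    (each ray: p_i + t * d_i). Minimizing the summed squared point-to-line
    distances yields the linear system
        sum_i (I - d_i d_i^T) x = sum_i (I - d_i d_i^T) p_i,
    where each d_i is a unit direction vector."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(points, dirs):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ p
    return np.linalg.solve(A, b)
```

With rays from at least two distinct gantry angles the system matrix is full rank, which is why projections acquired over an arc suffice to recover 3D marker motion.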
Xing, Pei; Chen, Xin; Luo, Yong; Nie, Suping; Zhao, Zongci; Huang, Jianbin; Wang, Shaowu
2016-01-01
Large-scale climate history of the past millennium reconstructed solely from tree-ring data is prone to underestimating the amplitude of low-frequency variability. In this paper, we aimed to address this problem using a novel method termed "MDVM", a combination of the ensemble empirical mode decomposition (EEMD) and variance matching techniques. We compiled a set of 211 tree-ring records from the extratropical Northern Hemisphere (30-90°N) to develop a new reconstruction of the annual mean temperature by the MDVM method. From this dataset, 126 records were screened out to reconstruct temperature variability beyond the decadal scale for the period 850-2000 AD. The MDVM reconstruction depicted significant low-frequency variability in the past millennium, with an evident Medieval Warm Period (MWP) over the interval 950-1150 AD and a pronounced Little Ice Age (LIA) culminating in 1450-1850 AD. In the context of the 1150-year reconstruction, the accelerating warming in the 20th century was likely unprecedented; the coldest decades appeared in the 1640s, 1600s and 1580s, whereas the warmest decades occurred in the 1990s, 1940s and 1930s. Additionally, the MDVM reconstruction covaried broadly with changes in natural radiative forcing, and in particular showed distinct footprints of multiple volcanic eruptions in the last millennium. Comparisons of our results with previous reconstructions and model simulations showed the efficiency of the MDVM method in capturing low-frequency variability, particularly the much colder signals of the LIA relative to the reference period. Our results demonstrate that the MDVM method has advantages in studying large-scale and low-frequency climate signals using tree-ring data alone.
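The variance-matching half of MDVM can be illustrated in isolation: the proxy series is rescaled so that its mean and variance match the instrumental target over the calibration interval. The EEMD step, which first separates the series into timescale components, is omitted here, and the function name is ours:

```python
import numpy as np

def variance_match(proxy, target):
    """Rescale a proxy series so its mean and variance match the target
    series over their common (calibration) interval: standardize the
    proxy, then map it onto the target's mean and standard deviation."""
    standardized = (proxy - proxy.mean()) / proxy.std()
    return target.mean() + standardized * target.std()
```

In the paper's setting this rescaling is applied per timescale component, which is what lets the low-frequency components retain their full amplitude instead of being damped by an ordinary regression calibration.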
Chen, Xueli; Yang, Defu; Zhang, Qitan; Liang, Jimin
2014-05-14
Even though bioluminescence tomography (BLT) exhibits significant potential and wide applications in macroscopic imaging of small animals in vivo, the inverse reconstruction is still a tough problem that has plagued researchers in the field. The ill-posedness of the inverse reconstruction arises from insufficient measurements and modeling errors, so the inverse problem cannot be solved directly. In this study, an l1/2-regularization-based numerical method was developed for effective BLT reconstruction. In this method, the inverse reconstruction of BLT is constrained as an l1/2-regularization problem, and the weighted interior-point algorithm (WIPA) is applied to solve it by transforming it into a series of l1-regularized subproblems. The feasibility and effectiveness of the proposed method were demonstrated with numerical simulations on a digital mouse. Stability verification experiments further illustrated the robustness of the proposed method under different levels of Gaussian noise.
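One standard way to reduce an l1/2 penalty to a series of l1 problems, in the spirit of (though not identical to) the WIPA scheme described above, is iterative reweighting: each pass solves a weighted-l1 problem whose weights are derived from the previous solution, so that the composite penalty approximates sum |x_i|^(1/2). A sketch under our own solver and parameter choices (ISTA as the inner l1 solver):

```python
import numpy as np

def ista(A, y, w, lam, n_iter=500):
    """Inner solver: min ||Ax - y||^2 / 2 + lam * sum(w_i |x_i|)
    by iterative shrinkage-thresholding (gradient step + soft threshold)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L
        x = np.sign(g) * np.maximum(np.abs(g) - lam * w / L, 0.0)
    return x

def reweighted_l_half(A, y, lam=0.05, outer=5, eps=1e-3):
    """l1/2-style recovery via a series of reweighted l1 problems:
    the weights w_i = 1 / (sqrt(|x_i|) + eps) make the composite penalty
    mimic sum_i |x_i|^(1/2)."""
    w = np.ones(A.shape[1])
    for _ in range(outer):
        x = ista(A, y, w, lam)
        w = 1.0 / (np.sqrt(np.abs(x)) + eps)
    return x
```

Each outer pass sharpens the sparsity of the previous one: coordinates driven to zero acquire huge weights and stay zero, while surviving coordinates are penalized less and converge toward their least-squares values on the support.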
Ahmad, Munir; Shahzad, Tasawar; Masood, Khalid; Rashid, Khalid; Tanveer, Muhammad; Iqbal, Rabail; Hussain, Nasir; Shahid, Abubakar; Fazal-E-Aleem
2016-06-01
Emission tomographic image reconstruction is an ill-posed problem due to limited and noisy data and various image-degrading effects, and it therefore leads to noisy reconstructions. Explicit regularization through iterative reconstruction methods is considered a better way to compensate for reconstruction-based noise. Local smoothing and edge-preserving regularization methods can reduce this noise, but they produce overly smoothed images or blocky artefacts in the final image because they exploit only local image properties. Recently, non-local regularization techniques have been introduced to overcome these problems by incorporating the geometrical global continuity and connectivity present in the objective image. These techniques can overcome the drawbacks of local regularization methods; however, they also have certain limitations, such as the choice of the regularization function, the neighbourhood size, or the calibration of several empirical parameters. This work compares different local and non-local regularization techniques used in emission tomographic imaging in general, and in emission computed tomography in particular, for improved quality of the resultant images.
A physics-based intravascular ultrasound image reconstruction method for lumen segmentation.
Mendizabal-Ruiz, Gerardo; Kakadiaris, Ioannis A
2016-08-01
Intravascular ultrasound (IVUS) refers to the medical imaging technique consisting of a miniaturized ultrasound transducer located at the tip of a catheter that can be introduced into the blood vessels, providing high-resolution, cross-sectional images of their interior. Current methods for generating an IVUS image reconstruction from radio frequency (RF) data do not account for the physics involved in the interaction between the IVUS ultrasound signal and the tissues of the vessel. In this paper, we present a novel method to generate an IVUS image reconstruction based on a scattering model that treats the tissues of the vessel as a distribution of three-dimensional point scatterers. We evaluated the impact of employing the proposed IVUS image reconstruction method on the segmentation of the lumen/wall interface in 40 MHz IVUS data using an existing automatic lumen segmentation method. We compared the results with those obtained using the B-mode reconstruction on 600 randomly selected frames from twelve pullback sequences acquired from rabbit aortas and different arteries of swine. Our results indicate the feasibility of employing the proposed IVUS image reconstruction for segmentation of the lumen.
NASA Astrophysics Data System (ADS)
Lu, Yao; Chan, Heang-Ping; Wei, Jun; Hadjiiski, Lubomir M.
2013-02-01
Digital breast tomosynthesis (DBT) has strong promise to improve sensitivity for detecting breast cancer. DBT reconstruction estimates the breast tissue attenuation using projection views (PVs) acquired in a limited angular range. Because of the limited field of view (FOV) of the detector, the PVs may not completely cover the breast in the x-ray source motion direction at large projection angles. The voxels in the imaged volume cannot be updated when they are outside the FOV, thus causing a discontinuity in intensity across the FOV boundaries in the reconstructed slices, which we refer to as the truncated projection artifact (TPA). Most existing TPA reduction methods were developed for the filtered backprojection method in the context of computed tomography. In this study, we developed a new diffusion-based method to reduce TPAs during DBT reconstruction using the simultaneous algebraic reconstruction technique (SART). Our TPA reduction method compensates for the discontinuity in background intensity outside the FOV of the current PV after each PV updating in SART. The difference in voxel values across the FOV boundary is smoothly diffused to the region beyond the FOV of the current PV. Diffusion-based background intensity estimation is performed iteratively to avoid structured artifacts. The method is applicable to TPA in both the forward and backward directions of the PVs and for any number of iterations during reconstruction. The effectiveness of the new method was evaluated by comparing the visual quality of the reconstructed slices and the measured discontinuities across the TPA with and without artifact correction at various iterations. The results demonstrated that the diffusion-based intensity compensation method reduced the TPA while preserving the detailed tissue structures. The visibility of breast lesions obscured by the TPA was improved after artifact reduction.
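The per-view compensation step can be caricatured in one dimension: values beyond the field-of-view boundary are relaxed toward their inner neighbors until the step decays smoothly. This is a toy 1-D analogue of our own devising, not the authors' 2-D SART-integrated scheme:

```python
import numpy as np

def diffuse_background(row, fov_end, n_iter=200, alpha=0.4):
    """Toy diffusion-based boundary compensation: pixels beyond the
    field-of-view edge (index fov_end) are repeatedly relaxed toward
    their left neighbor, so the intensity step at the boundary decays
    smoothly instead of remaining a sharp discontinuity."""
    out = row.astype(float).copy()
    for _ in range(n_iter):
        left = out[fov_end - 1:-1].copy()      # left neighbors of the region outside the FOV
        out[fov_end:] += alpha * (left - out[fov_end:])
    return out
```

In the full method this smoothing is applied to the background intensity difference across the boundary after each projection-view update, in both the forward and backward scan directions.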
Linking chordate gene networks to cellular behavior in ascidians.
Davidson, Brad; Christiaen, Lionel
2006-01-27
Embryos of simple chordates called ascidians (sea squirts) have few cells, develop rapidly, and are transparent, enabling the in vivo fluorescent imaging of labeled cell lineages. Ascidians are also simple genetically, with limited redundancy and compact regulatory regions. This cellular and genetic simplicity is now being exploited to link comprehensive gene networks to the cellular events underlying morphogenesis.
EcoliNet: a database of cofunctional gene network for Escherichia coli
Kim, Hanhae; Shim, Jung Eun; Shin, Junha; Lee, Insuk
2015-01-01
During the past several decades, Escherichia coli has been a treasure chest for molecular biology. The molecular mechanisms of many fundamental cellular processes have been discovered through research on this bacterium. Although much basic research now focuses on more complex model organisms, E. coli still remains important in metabolic engineering and synthetic biology. Despite its long history as a subject of molecular investigation, more than one-third of the E. coli genome has no pathway annotation supported by either experimental evidence or manual curation. Recently, a network-assisted genetics approach to the efficient identification of novel gene functions has increased in popularity. To accelerate the speed of pathway annotation for the remaining uncharacterized part of the E. coli genome, we have constructed a database of cofunctional gene network with near-complete genome coverage of the organism, dubbed EcoliNet. We find that EcoliNet is highly predictive for diverse bacterial phenotypes, including antibiotic response, indicating that it will be useful in prioritizing novel candidate genes for a wide spectrum of bacterial phenotypes. We have implemented a web server where biologists can easily run network algorithms over EcoliNet to predict novel genes involved in a pathway or novel functions for a gene. All integrated cofunctional associations can be downloaded, enabling orthology-based reconstruction of gene networks for other bacterial species as well. Database URL: http://www.inetbio.org/ecolinet PMID:25650278
ERIC Educational Resources Information Center
Dockray, Samantha; Grant, Nina; Stone, Arthur A.; Kahneman, Daniel; Wardle, Jane; Steptoe, Andrew
2010-01-01
Measurement of affective states in everyday life is of fundamental importance in many types of quality of life, health, and psychological research. Ecological momentary assessment (EMA) is the recognized method of choice, but the respondent burden can be high. The day reconstruction method (DRM) was developed by Kahneman and colleagues ("Science,"…
Ye, Jinzuo; Chi, Chongwei; Xue, Zhenwen; Wu, Ping; An, Yu; Xu, Han; Zhang, Shuang; Tian, Jie
2014-02-01
Fluorescence molecular tomography (FMT), as a promising imaging modality, can locate the specific tumor position in small animals in three dimensions. However, effective and robust reconstruction of the fluorescent probe distribution in animals remains challenging. In this paper, we present a novel method based on sparsity adaptive subspace pursuit (SASP) for FMT reconstruction. Several innovative strategies, including subspace projection, a bottom-up sparsity adaptive approach, and a backtracking technique, are combined in the SASP method, which guarantees accuracy, efficiency, and robustness for FMT reconstruction. Three numerical experiments based on a mouse-mimicking heterogeneous phantom were performed to validate the feasibility of the SASP method. The results show that the proposed SASP method can achieve satisfactory source localization with a bias of less than 1 mm; it is much faster than mainstream reconstruction methods; and it remains robust even under quite ill-posed conditions. Furthermore, we applied this method to an in vivo mouse model, and the results demonstrate the feasibility of practical FMT application with the SASP method.
Phase microscopy using light-field reconstruction method for cell observation.
Xiu, Peng; Zhou, Xin; Kuang, Cuifang; Xu, Yingke; Liu, Xu
2015-08-01
The refractive index (RI) distribution can serve as a natural label for imaging undyed cells. However, most images obtained through quantitative phase microscopy are integrated along the illumination angle and cannot reveal the refractive map on a particular plane. Herein, a light-field reconstruction method to image the RI map within a depth of 0.2 μm is proposed. It records quantitative phase-delay images using a four-step phase-shifting method in different directions and then reconstructs a similar scattered light field for the refractive sample on the focal plane. It can image the RI of samples, transparent cell samples in particular, in a manner similar to the observation of scattering characteristics. The light-field reconstruction method is therefore a powerful tool for cytobiology studies.
Method and apparatus for reconstructing in-cylinder pressure and correcting for signal decay
Huang, Jian
2013-03-12
A method comprises steps for reconstructing in-cylinder pressure data from a vibration signal collected from a vibration sensor mounted on an engine component where it can generate a signal with a high signal-to-noise ratio, and correcting the vibration signal for errors introduced by vibration signal charge decay and sensor sensitivity. The correction factors are determined as a function of estimated motoring pressure and the measured vibration signal itself with each of these being associated with the same engine cycle. Accordingly, the method corrects for charge decay and changes in sensor sensitivity responsive to different engine conditions to allow greater accuracy in the reconstructed in-cylinder pressure data. An apparatus is also disclosed for practicing the disclosed method, comprising a vibration sensor, a data acquisition unit for receiving the vibration signal, a computer processing unit for processing the acquired signal and a controller for controlling the engine operation based on the reconstructed in-cylinder pressure.
NASA Astrophysics Data System (ADS)
Fugal, Jacob P.; Schulz, Timothy J.; Shaw, Raymond A.
2009-07-01
Hologram reconstruction algorithms often undersample the phase in propagation kernels for typical parameters of holographic optical setups. Given in this paper is an algorithm that addresses this phase undersampling in reconstructing digital in-line holograms of particles for these typical parameters. This algorithm has a lateral sample spacing constant in reconstruction distance, has a diffraction limited resolution, and can be implemented with computational speeds comparable to the fastest of other reconstruction algorithms. This algorithm is shown to be accurate by testing with analytical solutions to the Huygens-Fresnel propagation integral. A low-pass filter can be applied to enforce a uniform minimum particle size detection limit throughout a sample volume, allowing this method to be useful in measuring particle size distributions and number densities. Tens of thousands of holograms of cloud ice particles are digitally reconstructed using the algorithm discussed. Positions of ice particles in the size range of 20 µm-1.5 mm are obtained using an algorithm that accurately finds the position of large and small particles along the optical axis. The digital reconstruction and particle characterization algorithms are implemented in an automated fashion with no user intervention on a computer cluster. Strategies for efficient algorithm implementation on a computer cluster are discussed.
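As a hedged illustration of the digital hologram reconstruction discussed above, the standard angular-spectrum propagation kernel can be sketched in Python with NumPy. This is a minimal textbook version, not the authors' sampling-corrected algorithm; the function name and parameters are illustrative.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex optical field a distance z using the
    angular-spectrum kernel. `field` is a 2-D complex array sampled
    at pitch `dx` (same length units as `wavelength` and `z`)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)              # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    # Keep propagating components only; evanescent frequencies are zeroed.
    kernel = np.where(
        arg > 0,
        np.exp(2j * np.pi * z / wavelength * np.sqrt(np.maximum(arg, 0.0))),
        0.0,
    )
    return np.fft.ifft2(np.fft.fft2(field) * kernel)
```

The phase-undersampling problem the paper addresses arises because the kernel's phase oscillates rapidly for typical hologram parameters; the sketch above simply evaluates it at the grid frequencies.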
Comparing five alternative methods of breast reconstruction surgery: a cost-effectiveness analysis.
Grover, Ritwik; Padula, William V; Van Vliet, Michael; Ridgway, Emily B
2013-11-01
The purpose of this study was to assess the cost-effectiveness of five standardized procedures for breast reconstruction to delineate the best reconstructive approach in postmastectomy patients in the settings of nonirradiated and irradiated chest walls. A decision tree was used to model five breast reconstruction procedures from the provider perspective to evaluate cost-effectiveness. Procedures included autologous flaps with pedicled tissue, autologous flaps with free tissue, latissimus dorsi flaps with breast implants, expanders with implant exchange, and immediate implant placement. All methods were compared with a "do-nothing" alternative. Data for model parameters were collected through a systematic review, and patient health utilities were calculated from an ad hoc survey of reconstructive surgeons. Results were measured in cost (2011 U.S. dollars) per quality-adjusted life-year. Univariate sensitivity analyses and Bayesian multivariate probabilistic sensitivity analysis were conducted. Pedicled autologous tissue and free autologous tissue reconstruction were cost-effective compared with the do-nothing alternative. Pedicled autologous tissue was the slightly more cost-effective of the two. The other procedures were not found to be cost-effective. The results were robust to a number of sensitivity analyses, although the margin between pedicled and free autologous tissue reconstruction is small and affected by some parameter values. Autologous pedicled tissue was slightly more cost-effective than free tissue reconstruction in irradiated and nonirradiated patients. Implant-based techniques were not cost-effective. This is in agreement with the growing trend at academic institutions to encourage autologous tissue reconstruction because of its natural recreation of the breast contour, suppleness, and resiliency in the setting of irradiated recipient beds.
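The cost-effectiveness comparison above rests on the incremental cost-effectiveness ratio (cost per quality-adjusted life-year gained) relative to the do-nothing alternative. A minimal sketch follows; the numeric values are hypothetical placeholders, not the paper's figures.

```python
def icer(cost_a, qaly_a, cost_b, qaly_b):
    """Incremental cost-effectiveness ratio of option A versus option B,
    in dollars per quality-adjusted life-year (QALY) gained."""
    return (cost_a - cost_b) / (qaly_a - qaly_b)

# Hypothetical illustration: one reconstruction option vs. "do nothing".
cost_recon, qaly_recon = 25000.0, 20.0
cost_none, qaly_none = 0.0, 18.5
print(icer(cost_recon, qaly_recon, cost_none, qaly_none))
```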
NASA Astrophysics Data System (ADS)
Kang, Jinfen; Moriyama, Koji; Kim, Seung Hyun
2016-09-01
This paper presents an extended, stochastic reconstruction method for catalyst layers (CLs) of Proton Exchange Membrane Fuel Cells (PEMFCs). The focus is placed on the reconstruction of customized, low platinum (Pt) loading CLs where the microstructure of CLs can substantially influence the performance. The sphere-based simulated annealing (SSA) method is extended to generate the CL microstructures with specified and controllable structural properties for agglomerates, ionomer, and Pt catalysts. In the present method, the agglomerate structures are controlled by employing a trial two-point correlation function used in the simulated annealing process. An off-set method is proposed to generate more realistic ionomer structures. The variations of ionomer structures at different humidity conditions are considered to mimic the swelling effects. A method to control Pt loading, distribution, and utilization is presented. The extension of the method to consider heterogeneity in structural properties, which can be found in manufactured CL samples, is presented. Various reconstructed CLs are generated to demonstrate the capability of the proposed method. Proton transport properties of the reconstructed CLs are calculated and validated with experimental data.
Performance of climate field reconstruction methods over multiple seasons and climate variables
NASA Astrophysics Data System (ADS)
Dannenberg, Matthew P.; Wise, Erika K.
2013-09-01
Studies of climate variability require long time series of data but are limited by the absence of preindustrial instrumental records. For such studies, proxy-based climate reconstructions, such as those produced from tree-ring widths, provide the opportunity to extend climatic records into preindustrial periods. Climate field reconstruction (CFR) methods are capable of producing spatially-resolved reconstructions of climate fields. We assessed the performance of three commonly used CFR methods (canonical correlation analysis, point-by-point regression, and regularized expectation maximization) over spatially-resolved fields using multiple seasons and climate variables. Warm- and cool-season geopotential height, precipitable water, and surface temperature were tested for each method using tree-ring chronologies. Spatial patterns of reconstructive skill were found to be generally consistent across each of the methods, but the robustness of the validation metrics varied by CFR method, season, and climate variable. The most robust validation metrics were achieved with geopotential height, the October through March temporal composite, and the Regularized Expectation Maximization method. While our study is limited to assessment of skill over multidecadal (rather than multi-centennial) time scales, our findings suggest that the climate variable of interest, seasonality, and spatial domain of the target field should be considered when assessing potential CFR methods for real-world applications.
Anatomical image-guided fluorescence molecular tomography reconstruction using kernel method
NASA Astrophysics Data System (ADS)
Baikejiang, Reheman; Zhao, Yue; Fite, Brett Z.; Ferrara, Katherine W.; Li, Changqing
2017-05-01
Fluorescence molecular tomography (FMT) is an important in vivo imaging modality to visualize physiological and pathological processes in small animals. However, FMT reconstruction is ill-posed and ill-conditioned due to strong optical scattering in deep tissues, which results in poor spatial resolution. It is well known that FMT image quality can be improved substantially by applying the structural guidance in the FMT reconstruction. An approach to introducing anatomical information into the FMT reconstruction is presented using the kernel method. In contrast to conventional methods that incorporate anatomical information with a Laplacian-type regularization matrix, the proposed method introduces the anatomical guidance into the projection model of FMT. The primary advantage of the proposed method is that it does not require segmentation of targets in the anatomical images. Numerical simulations and phantom experiments have been performed to demonstrate the proposed approach's feasibility. Numerical simulation results indicate that the proposed kernel method can separate two FMT targets with an edge-to-edge distance of 1 mm and is robust to false-positive guidance and inhomogeneity in the anatomical image. For the phantom experiments with two FMT targets, the kernel method has reconstructed both targets successfully, which further validates the proposed kernel method.
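One common way to encode anatomical guidance in a kernel method is to build a kernel matrix from per-voxel anatomical feature vectors and represent the fluorescence image as x = Kα, reconstructing α instead of x. The sketch below uses a k-nearest-neighbour Gaussian kernel as an assumed construction; the paper's exact kernel may differ.

```python
import numpy as np

def anatomical_kernel(features, sigma=1.0, k=5):
    """Row-normalized Gaussian kernel matrix K built from per-voxel
    anatomical feature vectors, keeping only each voxel's k nearest
    neighbours (an assumed, illustrative construction)."""
    d2 = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    # Sparsify: zero everything but the k nearest neighbours per row.
    far = np.argsort(d2, axis=1)[:, k:]
    np.put_along_axis(K, far, 0.0, axis=1)
    return K / K.sum(axis=1, keepdims=True)   # rows sum to one
```

Because K is built directly from the anatomical image intensities, no target segmentation is needed, which is the advantage the abstract highlights.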
NASA Astrophysics Data System (ADS)
Neupauer, Roseanna M.; Borchers, Brian; Wilson, John L.
2000-09-01
Inverse methods can be used to reconstruct the release history of a known source of groundwater contamination from concentration data describing the present-day spatial distribution of the contaminant plume. Using hypothetical release history functions and contaminant plumes, we evaluate the relative effectiveness of two proposed inverse methods, Tikhonov regularization (TR) and minimum relative entropy (MRE) inversion, in reconstructing the release history of a conservative contaminant in a one-dimensional domain [Skaggs and Kabala, 1994; Woodbury and Ulrych, 1996]. We also address issues of reproducibility of the solution and the appropriateness of models for simulating random measurement error. The results show that if error-free plume concentration data are available, both methods perform well in reconstructing a smooth source history function. With error-free data the MRE method is more robust than TR in reconstructing a nonsmooth source history function; however, the TR method is more robust if the data contain measurement error. Two error models were evaluated in this study, and we found that the particular error model does not affect the reliability of the solutions. The results for the TR method have somewhat greater reproducibility because, in some cases, its input parameters are less subjective than those of the MRE method; however, the MRE solution can identify regions where the data give little or no information about the source history function, while the TR solution cannot.
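Of the two inverse methods compared above, Tikhonov regularization is the more compact to state: the release history s is the minimizer of ||Gs - d||² + α²||s||², where G maps a discretized source history to the observed plume concentrations d. A minimal NumPy sketch, with illustrative names:

```python
import numpy as np

def tikhonov_solve(G, d, alpha):
    """Tikhonov-regularized least squares:
    minimize ||G s - d||^2 + alpha^2 ||s||^2 via the normal equations."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + alpha ** 2 * np.eye(n), G.T @ d)
```

The regularization parameter α plays the role of the "input parameter" the abstract mentions: larger α damps noise amplification at the cost of smoothing the reconstructed source history.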
Environment-based pin-power reconstruction method for homogeneous core calculations
Leroyer, H.; Brosselard, C.; Girardi, E.
2012-07-01
Core calculation schemes are usually based on a classical two-step approach comprising assembly and core calculations. In the first step, infinite-lattice assembly calculations relying on a fundamental-mode approach are used to generate cross-section libraries for PWR core calculations. This fundamental-mode hypothesis may be questioned when dealing with loading patterns involving several types of assemblies (UOX, MOX), burnable poisons, control rods, and burn-up gradients. This paper proposes a calculation method able to take into account the heterogeneous environment of the assemblies when using homogeneous core calculations and an appropriate pin-power reconstruction. This methodology is applied to MOX assemblies computed within an environment of UOX assemblies. The new environment-based pin-power reconstruction is then used on various clusters of 3x3 assemblies exhibiting burn-up gradients and UOX/MOX interfaces, and compared to reference calculations performed with APOLLO-2. The results show that UOX/MOX interfaces are calculated much better with the environment-based calculation scheme than with the usual pin-power reconstruction method. The power peak is always better located and calculated with the environment-based pin-power reconstruction method in every cluster configuration studied. This study shows that taking the environment into account in transport calculations can significantly improve the pin-power reconstruction, provided it is consistent with the core loading pattern. (authors)
A novel building boundary reconstruction method based on lidar data and images
NASA Astrophysics Data System (ADS)
Chen, Yiming; Zhang, Wuming; Zhou, Guoqing; Yan, Guangjian
2013-09-01
Building boundaries are important for urban mapping and real estate industry applications. The reconstruction of building boundaries is also a significant but difficult step in generating city building models. As light detection and ranging (lidar) systems can acquire large, dense point clouds quickly and easily, they have great advantages for building reconstruction. In this paper, we combine lidar data and images to develop a novel building boundary reconstruction method, using only one scan of lidar data and one image. The process consists of three steps: project boundary lidar points to the image; extract an accurate boundary from the image; and reconstruct the boundary in the lidar points. We define a relationship between 3D points and pixel coordinates, extract the boundary in the image, and use this relationship to obtain the boundary in the point cloud. The method presented here effectively reduces the difficulty of data acquisition. The theory is not complex, so it has low computational complexity. It can also be widely applied to data acquired by other 3D scanning devices to improve accuracy. Results of the experiment demonstrate that this method has a clear advantage and high efficiency over others, particularly for data with large point spacing.
Gülekon, Nadir; Peker, Tuncay; Turgut, Hasan Basri; Anil, Afitap; Karaköse, Mustafa
2007-07-01
BACKGROUND: This study was designed to examine the entire intramuscular nerve distribution pattern of various human skeletal muscles in fetuses. The rhomboid major, trapezius, long head of the biceps femoris, and masseter muscles were investigated in five 18-week-old fetal cadavers. Anatomical microdissection was applied to one fetal cadaver. In two fetuses, the extramuscular (main), major, and minor nerve branches and anastomoses were examined using Sihler's staining and labeling. In the remaining two fetuses, consecutive slices 5 μm thick at 0.5 mm intervals were obtained from each skeletal muscle. These slices were stained with S100 to demonstrate the nerve fibers, and 3D reconstruction images were then constituted using PC software. Anatomical microdissection, Sihler's staining, and computerized reconstruction were compared for demonstrating the intramuscular nerve distribution pattern. Demonstrating the intramuscular minor nerve branches and anastomoses proved difficult in anatomically dissected specimens compared with three-dimensionally reconstructed images and specimens obtained with Sihler's staining technique. Nevertheless, anatomical dissection is a simple method, whereas Sihler's technique and computer-aided 3D reconstruction are complex and time-consuming. Overall, Sihler's staining and the 3D reconstructions appeared to provide better results than anatomical dissection.
NASA Astrophysics Data System (ADS)
Cao, Zhang; Xu, Lijun; Wang, Huaxiang
2009-10-01
Calderon's method was introduced to electrical capacitance tomography in this paper. It is a direct algorithm of the image reconstruction for low-contrast dielectrics, as no matrix inversion or iterative process is needed. It was implemented through numerical integration. Since the Gauss-Legendre quadrature was applied and can be predetermined, the image reconstruction process was fast and resulted in images of high quality. Simulations were carried out to study the effect of different dielectric contrasts and different electrode numbers. Both simulated and experimental results validated the feasibility and effectiveness of Calderon's method in electrical capacitance tomography for low-contrast dielectrics.
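The abstract notes that the numerical integration uses a predetermined Gauss-Legendre quadrature, which is what makes the direct reconstruction fast. A minimal sketch of that building block (illustrative function name, not the paper's code):

```python
import numpy as np

def gauss_legendre_integral(f, a, b, npts=16):
    """Integrate f over [a, b] with npts-point Gauss-Legendre quadrature.
    Nodes and weights are fixed once npts is chosen, so they can be
    precomputed, as in the reconstruction scheme described above."""
    x, w = np.polynomial.legendre.leggauss(npts)   # nodes/weights on [-1, 1]
    t = 0.5 * (b - a) * x + 0.5 * (b + a)          # map nodes to [a, b]
    return 0.5 * (b - a) * np.sum(w * f(t))
```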
A new 3D reconstruction method of small solar system bodies
NASA Astrophysics Data System (ADS)
Capanna, C.; Jorda, L.; Lamy, P.; Gesquiere, G.
2011-10-01
The 3D reconstruction of small solar system bodies constitutes an essential step toward understanding and interpreting their physical and geological properties. We propose a new reconstruction method by photoclinometry based on minimizing the chi-square difference between observed and synthetic images through deformation of a 3D triangular mesh. This method has been tested on images of the two asteroids (2867) Steins and (21) Lutetia observed during ESA's ROSETTA mission, and it will be applied to elaborate digital terrain models from images of the asteroid (4) Vesta, the target of NASA's DAWN spacecraft.
A comparative study of limited-angle cone-beam reconstruction methods for breast tomosynthesis
Zhang, Yiheng; Chan, Heang-Ping; Sahiner, Berkman; Wei, Jun; Goodsitt, Mitchell M.; Hadjiiski, Lubomir M.; Ge, Jun; Zhou, Chuan
2009-01-01
Digital tomosynthesis mammography (DTM) is a promising new modality for breast cancer detection. In DTM, projection-view images are acquired at a limited number of angles over a limited angular range and the imaged volume is reconstructed from the two-dimensional projections, thus providing three-dimensional structural information of the breast tissue. In this work, we investigated three representative reconstruction methods for this limited-angle cone-beam tomographic problem, including the backprojection (BP) method, the simultaneous algebraic reconstruction technique (SART) and the maximum likelihood method with the convex algorithm (ML-convex). The SART and ML-convex methods were both initialized with BP results to achieve efficient reconstruction. A second generation GE prototype tomosynthesis mammography system with a stationary digital detector was used for image acquisition. Projection-view images were acquired from 21 angles in 3° increments over a ±30° angular range. We used an American College of Radiology phantom and designed three additional phantoms to evaluate the image quality and reconstruction artifacts. In addition to visual comparison of the reconstructed images of different phantom sets, we employed the contrast-to-noise ratio (CNR), a line profile of features, an artifact spread function (ASF), a relative noise power spectrum (NPS), and a line object spread function (LOSF) to quantitatively evaluate the reconstruction results. It was found that for the phantoms with homogeneous background, the BP method resulted in less noisy tomosynthesized images and higher CNR values for masses than the SART and ML-convex methods. However, the two iterative methods provided greater contrast enhancement for both masses and calcification, sharper LOSF, and reduced inter-plane blurring and artifacts with better ASF behaviors for masses. For a contrast-detail phantom with heterogeneous tissue-mimicking background, the BP method had strong blurring artifacts
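Of the three reconstruction methods compared above, SART admits a compact sketch: each voxel update averages the weighted residuals of all rays, normalized by ray and voxel weight sums. The following is a generic dense-matrix version for illustration (real tomosynthesis systems use sparse projectors), with the BP result supplied as the initial estimate as in the study.

```python
import numpy as np

def sart(A, b, n_iter=50, relax=1.0, x0=None):
    """Simultaneous algebraic reconstruction technique for A x = b.
    A: (n_rays, n_voxels) system matrix with nonnegative entries;
    b: measured projection data; x0: initial image (e.g. backprojection)."""
    x = np.zeros(A.shape[1]) if x0 is None else x0.astype(float).copy()
    row = A.sum(axis=1); row[row == 0] = 1.0   # per-ray weight sums
    col = A.sum(axis=0); col[col == 0] = 1.0   # per-voxel weight sums
    for _ in range(n_iter):
        x += relax * (A.T @ ((b - A @ x) / row)) / col
    return x
```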
Chlorophyll a reconstruction from in situ measurements: 1. Method description
NASA Astrophysics Data System (ADS)
Fründt, B.; Dippner, J. W.; Waniek, J. J.
2015-02-01
Understanding the development of primary production is essential for projections of the global carbon cycle in the context of climate change. A chlorophyll a hindcast that serves as a primary production indicator was obtained by fitting in situ measurements of nitrate, chlorophyll a, and temperature. The resulting fitting functions were adapted to a modeled temperature field. The method was applied to observations from the Madeira Basin, in the northeastern part of the oligotrophic North Atlantic Subtropical Gyre and yielded a chlorophyll a field from 1989 to 2008 with a monthly resolution validated with remotely measured surface chlorophyll a data by SeaWiFS. The chlorophyll a hindcast determined with our method resolved the seasonal and interannual variability in the phytoplankton biomass of the euphotic zone as well as the deep chlorophyll maximum. Moreover, it will allow estimation of carbon uptake over long time scales.
Unhappy triad in limb reconstruction: Management by Ilizarov method
El-Alfy, Barakat Sayed
2017-01-01
AIM: To evaluate the results of the Ilizarov method in managing cases with bone loss, soft tissue loss, and infection. METHODS: Twenty-eight patients with severe leg trauma complicated by bone loss, soft tissue loss, and infection were managed by distraction osteogenesis in our institution. After radical debridement of all infected and dead tissues, the Ilizarov frame was applied, corticotomy was performed, and bone transport started. The wounds were left open to drain. Partial limb shortening was done in seven cases to reduce the size of both the skeletal and soft tissue defects. The average follow-up period was 39 mo (range 27-56 mo). RESULTS: The infection was eradicated in all cases. All soft tissue defects healed during bone transport, and plastic surgery was required in only 2 cases. Skeletal defects were treated in all cases. All patients required another surgery at the docking site to fashion the soft tissue and cover the bone ends. The external fixation time ranged from 9 to 17 mo, with an average of 13 mo. Complications included pin tract infection in 16 cases, wire breakage in 2 cases, unstable scar in 4 cases, and chronic edema in 3 cases. According to the Association for the Study and Application of the Method of Ilizarov (ASAMI) score, the bone results were excellent in 10, good in 16, and fair in 2 cases, while the functional results were excellent in 8, good in 17, and fair in 3 cases. CONCLUSION: Distraction osteogenesis is a good method that can treat the three problems of this triad simultaneously. PMID:28144578
Wu, Sean F; Zhao, Xiang
2002-07-01
A combined Helmholtz equation-least squares (CHELS) method is developed for reconstructing acoustic radiation from an arbitrary object. This method combines the advantages of both the HELS method and the Helmholtz integral theory based near-field acoustic holography (NAH). As such it allows for reconstruction of the acoustic field radiated from an arbitrary object with relatively few measurements, thus significantly enhancing the reconstruction efficiency. The first step in the CHELS method is to establish the HELS formulations based on a finite number of acoustic pressure measurements taken on or beyond a hypothetical spherical surface that encloses the object under consideration. Next enough field acoustic pressures are generated using the HELS formulations and taken as the input to the Helmholtz integral formulations implemented through the boundary element method (BEM). The acoustic pressure and normal component of the velocity at the discretized nodes on the surface are then determined by solving two matrix equations using singular value decomposition (SVD) and regularization techniques. Also presented are in-depth analyses of the advantages and limitations of the CHELS method. Examples of reconstructing acoustic radiation from separable and nonseparable surfaces are demonstrated.
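The final step described above, solving ill-conditioned matrix equations by SVD with regularization, can be sketched generically. The truncated-SVD solver below is a standard technique of this kind, shown here as an illustration rather than the authors' exact scheme.

```python
import numpy as np

def tsvd_solve(A, b, tol=1e-8):
    """Solve A x = b by truncated SVD: singular values below tol * s_max
    are discarded, stabilizing ill-conditioned systems such as the
    discretized BEM matrices arising in NAH-type reconstructions."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > tol * s[0]
    return Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])
```

For a rank-deficient system this returns the minimum-norm least-squares solution instead of amplifying noise along the discarded directions.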
One step linear reconstruction method for continuous wave diffuse optical tomography
NASA Astrophysics Data System (ADS)
Ukhrowiyah, N.; Yasin, M.
2017-09-01
A one-step linear reconstruction method for continuous-wave diffuse optical tomography is proposed and demonstrated on polyvinyl chloride-based material and a breast phantom. The approximation used in this method involves selecting a regularization coefficient and evaluating the difference between two states corresponding to data acquired without and with a change in optical properties. The method recovers optical parameters from measured boundary data of light propagation in the object. It is demonstrated with both simulated and experimental data: a numerical object is used to produce the simulation data, while the polyvinyl chloride-based material and the breast phantom sample provide the experimental data. Comparisons between experimental and simulated results validate the proposed method. The reconstructed images produced by the one-step linear reconstruction method closely resemble the original object. This approach provides a means of imaging that is sensitive to changes in optical properties, which may be particularly useful for functional imaging with continuous-wave diffuse optical tomography in the early diagnosis of breast cancer.
A new method for the reconstruction of micro- and nanoscale planar periodic structures.
Hu, Zhenxing; Xie, Huimin; Lu, Jian; Liu, Zhanwei; Wang, Qinghua
2010-08-01
In recent years, micro- and nanoscale structures and materials have been observed and characterized under microscopes at large magnification, at the cost of a small field of view. In this paper, a new phase-shifting inverse geometry moiré method for the full-field reconstruction of micro- and nanoscale planar periodic structures is proposed. The random phase-shift techniques are realized under scanning-type microscopes. A simulation test and a practical verification experiment were performed, demonstrating that the method is feasible. As an application, the method was used to reconstruct the structure of a butterfly wing and a holographic grating, and the results verify that the reconstruction process is convenient. Compared with direct point-by-point measurement, the method is very effective and offers a large field of view. It can be extended to reconstruct other planar periodic microstructures and to locate defects in materials possessing a regular lattice structure. Furthermore, it can be applied to evaluate the quality of micro- and nanoscale planar periodic structures under various high-power scanning microscopes. 2010 Elsevier B.V. All rights reserved.
Reconstruction from Uniformly Attenuated SPECT Projection Data Using the DBH Method
Huang, Qiu; You, Jiangsheng; Zeng, Gengsheng L.; Gullberg, Grant T.
2008-03-20
An algorithm was developed for the two-dimensional (2D) reconstruction of truncated and non-truncated uniformly attenuated data acquired from single photon emission computed tomography (SPECT). The algorithm is able to reconstruct data from half-scan (180°) and short-scan (180° + fan angle) acquisitions for parallel- and fan-beam geometries, respectively, as well as data from full-scan (360°) acquisitions. The algorithm is a derivative, backprojection, and Hilbert transform (DBH) method, which involves the backprojection of differentiated projection data followed by an inversion of the finite weighted Hilbert transform. The kernel of the inverse weighted Hilbert transform is solved numerically using matrix inversion. Numerical simulations confirm that the DBH method provides accurate reconstructions from half-scan and short-scan data, even when there is truncation. However, as the attenuation increases, finer data sampling is required.
Application of information theory methods to food web reconstruction
Moniz, L.J.; Cooch, E.G.; Ellner, S.P.; Nichols, J.D.; Nichols, J.M.
2007-01-01
In this paper we use information theory techniques on time series of abundances to determine the topology of a food web. At the outset, the food web participants (two consumers, two resources) are known; in addition, we know that each consumer prefers one of the resources over the other. However, we do not know which consumer prefers which resource, or whether this preference is absolute (i.e., whether or not the consumer will consume the non-preferred resource). Although the consumers and resources are identified at the beginning of the experiment, we also provide evidence that the consumers are not resources for each other and that the resources do not consume each other. We do show that there is significant mutual information between resources; the model is seasonally forced, and some shared information between resources is expected. Similarly, because the model is seasonally forced, we expect shared information between consumers as they respond to the forcing of the resources. The model that we consider does include noise, and in an effort to demonstrate that these methods may be useful beyond model data, we show the efficacy of our methods with decreasing time series size; in this particular case we obtain reasonably clear results with a time series length of 400 points. This approaches ecological time series lengths from real systems.
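The core quantity in the approach above, mutual information between two abundance time series, can be estimated with a simple histogram (plug-in) estimator. This is a generic sketch, not the authors' estimator; the bin count is an assumed parameter.

```python
import numpy as np

def mutual_information(x, y, bins=10):
    """Histogram (plug-in) estimate of mutual information, in nats,
    between two time series of equal length."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                      # joint probability estimate
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # avoid log(0) terms
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

Note that this estimator is biased upward for short series, which is why the abstract's check of performance as the series shrinks toward ~400 points matters.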
A comparative study of interface reconstruction methods for multi-material ALE simulations
Kucharik, Milan; Garimella, Rao; Schofield, Samuel; Shashkov, Mikhail
2009-01-01
In this paper we compare the performance of different methods for reconstructing interfaces in multi-material compressible flow simulations. The methods compared are a material-order-dependent Volume-of-Fluid (VOF) method, a material-order-independent VOF method based on power diagram partitioning of cells and the Moment-of-Fluid method (MOF). We demonstrate that the MOF method provides the most accurate tracking of interfaces, followed by the VOF method with the right material ordering. The material-order-independent VOF method performs somewhat worse than the above two, while the solutions with VOF using the wrong material order are considerably worse.
Ankowski, Artur M.; Benhar, Omar; Coloma, Pilar; Huber, Patrick; Jen, Chun-Min; Mariani, Camillo; Meloni, Davide; Vagnoni, Erica
2015-10-22
To be able to achieve their physics goals, future neutrino-oscillation experiments will need to reconstruct the neutrino energy with very high accuracy. In this work, we analyze how the energy reconstruction may be affected by realistic detection capabilities, such as energy resolutions, efficiencies, and thresholds. This allows us to estimate how well the detector performance needs to be determined a priori in order to avoid a sizable bias in the measurement of the relevant oscillation parameters. We compare the kinematic and calorimetric methods of energy reconstruction in the context of two νμ → νμ disappearance experiments operating in different energy regimes. For the calorimetric reconstruction method, we find that the detector performance has to be estimated with an O(10%) accuracy to avoid a significant bias in the extracted oscillation parameters. In contrast, in the case of kinematic energy reconstruction, we observe that the results exhibit less sensitivity to an overestimation of the detector capabilities.
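The kinematic method mentioned above reconstructs the neutrino energy from the outgoing muon alone. A sketch using the standard quasi-elastic two-body formula (the binding-energy value and kinematic inputs are illustrative assumptions, not parameters from the paper):

```python
import math

# Quasi-elastic kinematic energy reconstruction (all energies in GeV):
# E_nu = (m_p^2 - (m_n - E_b)^2 - m_mu^2 + 2 (m_n - E_b) E_mu)
#        / (2 (m_n - E_b - E_mu + p_mu cos(theta)))
M_P, M_N, M_MU = 0.9383, 0.9396, 0.1057   # proton, neutron, muon masses
E_B = 0.025                                # assumed nucleon binding energy

def e_nu_kinematic(e_mu, cos_theta, e_b=E_B):
    p_mu = math.sqrt(e_mu**2 - M_MU**2)    # muon momentum
    num = M_P**2 - (M_N - e_b)**2 - M_MU**2 + 2 * (M_N - e_b) * e_mu
    den = 2 * (M_N - e_b - e_mu + p_mu * cos_theta)
    return num / den

# Example: a 0.6 GeV muon at cos(theta) = 0.9
print(round(e_nu_kinematic(0.6, 0.9), 3))
```

Missed energy (e.g. undetected hadrons) biases this estimate, which is why detector effects matter differently for the kinematic and calorimetric approaches.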
Virtual biomechanics: a new method for online reconstruction of force from EMG recordings.
de Rugy, Aymar; Loeb, Gerald E; Carroll, Timothy J
2012-12-01
Current methods to reconstruct muscle contributions to joint torque usually combine electromyograms (EMGs) with cadaver-based estimates of biomechanics, but both are imperfect representations of reality. Here, we describe a new method that enables online force reconstruction in which we optimize a "virtual" representation of muscle biomechanics. We first obtain tuning curves for the five major wrist muscles from the mean rectified EMG during the hold phase of an isometric aiming task when a cursor is driven by actual force recordings. We then apply a custom, gradient-descent algorithm to determine the set of "virtual pulling vectors" that best reach the target forces when combined with the observed muscle activity. When these pulling vectors are multiplied by the rectified and low-pass-filtered (1.3 Hz) EMG of the five muscles online, the reconstructed force provides a close spatiotemporal match to the true force exerted at the wrist. In three separate experiments, we demonstrate that the technique works equally well for surface and fine-wire recordings and is sensitive to biomechanical changes elicited by a modification of the forearm posture. In all conditions tested, muscle tuning curves obtained when the task was performed with feedback of reconstructed force were similar to those obtained when the task was performed with real force feedback. This online force reconstruction technique provides new avenues to study the relationship between neural control and limb biomechanics since the "virtual biomechanics" can be systematically altered at will.
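The core fitting step, finding "virtual pulling vectors" that map muscle activations to wrist force, can be sketched with a least-squares fit (the paper uses a custom gradient-descent algorithm; the vector values, muscle count, and least-squares stand-in here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical true 2D "pulling vectors" for five wrist muscles
W_true = np.array([[1.0, 0.2], [0.5, 0.9], [-0.8, 0.4],
                   [-0.3, -0.9], [0.6, -0.7]])
A = rng.random((200, 5))        # rectified, low-pass-filtered EMG (>= 0)
F = A @ W_true                   # wrist forces produced by those activations

# Fit virtual pulling vectors so that EMG times vectors best reproduces force
W_est, *_ = np.linalg.lstsq(A, F, rcond=None)

F_rec = A @ W_est                # online force reconstruction
print(np.allclose(W_est, W_true))  # True
```

Once fitted, the same multiplication `A @ W_est` can be applied sample-by-sample online, which is the property the paper exploits for real-time force feedback.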
Mitton, D; Landry, C; Véron, S; Skalli, W; Lavaste, F; De Guise, J A
2000-03-01
Standard 3D reconstruction of bones using stereoradiography is limited by the number of anatomical landmarks visible in more than one projection. The proposed technique enables the 3D reconstruction of additional landmarks that can be identified in only one of the radiographs. The principle of this method is the deformation of an elastic object that respects stereocorresponding and non-stereocorresponding observations available in different projections. This technique is based on the principle that any non-stereocorresponding point belongs to a line joining the X-ray source and the projection of the point in one view. The aim is to determine the 3D position of these points on their line of projection when submitted to geometrical and topological constraints. This technique is used to obtain the 3D geometry of 18 cadaveric upper cervical vertebrae. The reconstructed geometry obtained is compared with direct measurements using a magnetic digitiser. The precision, determined as the point-to-surface distance between the reconstruction obtained with this technique and the reference measurements, is about 1 mm, depending on the vertebrae studied. Comparison results indicate that the obtained reconstruction is close to the actual vertebral geometry. This method can therefore be proposed to obtain the 3D geometry of vertebrae.
Wang, Zhengzhou; Hu, Bingliang; Yin, Qinye
2017-01-01
The schlieren method of measuring far-field focal spots offers many advantages at the Shenguang III laser facility such as low cost and automatic laser-path collimation. However, current methods of far-field focal spot measurement often suffer from low precision and efficiency when the final focal spot is merged manually, thereby reducing the accuracy of reconstruction. In this paper, we introduce an improved schlieren method to construct the high dynamic-range image of far-field focal spots and improve the reconstruction accuracy and efficiency. First, a detection method based on weak light beam sampling and magnification imaging was designed; images of the main and side lobes of the focused laser irradiance in the far field were obtained using two scientific CCD cameras. Second, using a self-correlation template matching algorithm, a circle the same size as the schlieren ball was dug from the main lobe cutting image and used to change the relative region of the main lobe cutting image within a 100×100 pixel region. The position that had the largest correlation coefficient between the side lobe cutting image and the main lobe cutting image when a circle was dug was identified as the best matching point. Finally, the least squares method was used to fit the center of the side lobe schlieren small ball, and the error was less than 1 pixel. The experimental results show that this method enables the accurate, high-dynamic-range measurement of a far-field focal spot and automatic image reconstruction. Because the best matching point is obtained through image processing rather than traditional manual splicing, this method improves the efficiency of focal-spot reconstruction and offers better experimental precision. PMID:28207758
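The correlation-based matching step can be sketched with a brute-force normalized cross-correlation search (the image sizes and random test data are illustrative assumptions; real implementations typically use FFT-based correlation for speed):

```python
import numpy as np

def best_match(search, template):
    """Return the (row, col) offset where the normalized correlation peaks."""
    th, tw = template.shape
    sh, sw = search.shape
    best, pos = -2.0, (0, 0)
    for i in range(sh - th + 1):
        for j in range(sw - tw + 1):
            patch = search[i:i + th, j:j + tw]
            c = np.corrcoef(patch.ravel(), template.ravel())[0, 1]
            if c > best:
                best, pos = c, (i, j)
    return pos

rng = np.random.default_rng(2)
search = rng.random((60, 60))           # stand-in for the main-lobe image
template = search[17:37, 25:45].copy()  # 20x20 patch playing the side-lobe role
print(best_match(search, template))     # (17, 25)
```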
Analysis of dental root apical morphology: a new method for dietary reconstructions in primates.
Hamon, NoÉmie; Emonet, Edouard-Georges; Chaimanee, Yaowalak; Guy, Franck; Tafforeau, Paul; Jaeger, Jean-Jacques
2012-06-01
The reconstruction of paleo-diets is an important task in the study of fossil primates. Previously, paleo-diet reconstructions were performed using different methods based on extant primate models. In particular, dental microwear or isotopic analyses provided accurate reconstructions for some fossil primates. However, it is sometimes difficult or impossible to apply these methods to fossil material. Therefore, the development of new, independent methods of diet reconstruction is crucial to improve our knowledge of primate paleobiology and paleoecology. This study aims to investigate the correlation between tooth root apical morphology and diet in primates, and its potential for paleo-diet reconstructions. Dental roots are composed of two portions: the eruptive portion with a smooth and regular surface, and the apical penetrative portion which displays an irregular and corrugated surface. Here, the angle formed by these two portions (aPE), and the ratio of penetrative portion over total root length (PPI), are calculated for each mandibular tooth root. A strong correlation between these two variables and the proportion of some food types (fruits, leaves, seeds, animal matter, and vertebrates) in diet is found, allowing the use of tooth root apical morphology as a tool for dietary reconstructions in primates. The method was then applied to the fossil hominoid Khoratpithecus piriyai, from the Late Miocene of Thailand. The paleo-diet deduced from aPE and PPI is dominated by fruits (>50%), associated with animal matter (1-25%). Leaves, vertebrates and most probably seeds were excluded from the diet of Khoratpithecus, which is consistent with previous studies.
Balima, O.; Favennec, Y.; Rousse, D.
2013-10-15
Highlights:
• New strategies to improve the accuracy of the reconstruction through mesh and finite element parameterization.
• Use of gradient filtering through an alternative inner product within the adjoint method.
• An integral form of the cost function is used to make the reconstruction compatible with all finite element formulations, continuous and discontinuous.
• Gradient-based algorithm with the adjoint method is used for the reconstruction.
Abstract: Optical tomography is mathematically treated as a non-linear inverse problem where the optical properties of the probed medium are recovered through the minimization of the errors between the experimental measurements and their predictions with a numerical model at the locations of the detectors. According to the ill-posed behavior of the inverse problem, some regularization tools must be performed and the Tikhonov penalization type is the most commonly used in optical tomography applications. This paper introduces an optimized approach for optical tomography reconstruction with the finite element method. An integral form of the cost function is used to take into account the surfaces of the detectors and make the reconstruction compatible with all finite element formulations, continuous and discontinuous. Through a gradient-based algorithm where the adjoint method is used to compute the gradient of the cost function, an alternative inner product is employed for preconditioning the reconstruction algorithm. Moreover, appropriate re-parameterization of the optical properties is performed. These regularization strategies are compared with the classical Tikhonov penalization one. It is shown that both the re-parameterization and the use of the Sobolev cost function gradient are efficient for solving such an ill-posed inverse problem.
Deng, Qingqiong; Zhou, Mingquan; Wu, Zhongke; Shui, Wuyang; Ji, Yuan; Wang, Xingce; Liu, Ching Yiu Jessica; Huang, Youliang; Jiang, Haiyan
2016-02-01
Craniofacial reconstruction recreates a facial outlook from the cranium based on the relationship between the face and the skull to assist identification. But craniofacial structures are very complex, and this relationship is not the same in different craniofacial regions. Several regional methods have recently been proposed; these methods segment the face and skull into regions and learn the relationship of each region independently; facial regions for a given skull are then estimated and finally glued together to generate a face. Most of these regional methods use vertex coordinates to represent the regions, and they define a uniform coordinate system for all of the regions. Consequently, the inconsistency in the positions of regions between different individuals is not eliminated before learning the relationships between the face and skull regions, and this reduces the accuracy of the craniofacial reconstruction. In order to solve this problem, an improved regional method is proposed in this paper involving two types of coordinate adjustments. One is the global coordinate adjustment performed on the skulls and faces with the purpose of eliminating the inconsistency of position and pose of the heads; the other is the local coordinate adjustment performed on the skull and face regions with the purpose of eliminating the inconsistency of position of these regions. After these two coordinate adjustments, partial least squares regression (PLSR) is used to estimate the relationship between the face region and the skull region. In order to obtain a more accurate reconstruction, a new fusion strategy is also proposed in the paper to maintain the reconstructed feature regions when gluing the facial regions together. This is based on the observation that the feature regions usually have less reconstruction error compared to the rest of the face. The results demonstrate that the coordinate adjustments and the new fusion strategy can significantly improve the accuracy of the reconstruction.
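The PLSR step, mapping skull-region features to a face-region measurement, can be sketched with a minimal single-response NIPALS implementation (the paper's setting is multivariate and much higher-dimensional; the data shapes and the exact-linear test case here are illustrative assumptions):

```python
import numpy as np

def pls1_fit_predict(X, y, n_components):
    """Minimal NIPALS PLS1: fit on (X, y) and return in-sample predictions."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, q = [], [], []
    Xd, yd = Xc.copy(), yc.copy()
    for _ in range(n_components):
        w = Xd.T @ yd
        w /= np.linalg.norm(w)           # weight vector for this component
        t = Xd @ w                        # score
        tt = t @ t
        p = Xd.T @ t / tt                 # X loading
        qk = (yd @ t) / tt                # y loading
        Xd -= np.outer(t, p)              # deflate
        yd -= t * qk
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)   # regression coefficients
    return (Xc @ B) + y_mean

rng = np.random.default_rng(3)
X = rng.standard_normal((50, 3))          # skull-region features (illustrative)
y = X @ np.array([0.7, -1.2, 0.4])        # face measurement, exactly linear
pred = pls1_fit_predict(X, y, n_components=3)
print(np.allclose(pred, y))               # True
```

With all components retained, PLS1 coincides with ordinary least squares; its value in the paper's setting is that far fewer components than variables suffice.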
Liu, Wenyang; Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J.; Sawant, Amit; Ruan, Dan
2015-01-01
Purpose: To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT). Methods: The authors have developed a level-set based surface reconstruction method on point clouds captured by a surface photogrammetry system (VisionRT). The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. By contrast to explicit/discrete meshing-type schemes, their continuous representation is particularly advantageous for subsequent surface registration and motion tracking by eliminating the need for maintaining explicit point correspondences as in discrete models. The authors solve the proposed method with an efficient narrowband evolving scheme. The authors evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiments, the authors generated a series of surfaces, each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched area had different degrees of noise and missing data, since VisionRT has difficulties in detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations, and quantitatively evaluated reconstructed surfaces by comparing against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiments, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground truth, the authors qualitatively validated reconstructed surfaces by comparing the local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient. Results: On phantom point clouds, their method
Reconstruction of the sound field above a reflecting plane using the equivalent source method
NASA Astrophysics Data System (ADS)
Bi, Chuan-Xing; Jing, Wen-Qian; Zhang, Yong-Bin; Lin, Wang-Lin
2017-01-01
In practical situations, vibrating objects are usually located above a reflecting plane instead of being exposed to a free field. The conventional nearfield acoustic holography (NAH) sometimes fails to identify sound sources under such situations. This paper develops two kinds of equivalent source method (ESM)-based half-space NAH to reconstruct the sound field above a reflecting plane. In the first kind of method, the half-space Green's function is introduced into the ESM-based NAH, and the sound field is reconstructed based on the condition that the surface impedance of the reflecting plane is known a priori. The second kind of method regards the reflections as being radiated by equivalent sources placed under the reflecting plane, and the sound field is reconstructed by matching the pressure on the hologram surface with the equivalent sources distributed within the vibrating object and those substituting for reflections. Thus, this kind of method is independent of the surface impedance of the reflecting plane. Numerical simulations and experiments demonstrate the feasibility of these two kinds of methods for reconstructing the sound field above a reflecting plane.
Spectrum reconstruction method based on the detector response model calibrated by x-ray fluorescence
NASA Astrophysics Data System (ADS)
Li, Ruizhe; Li, Liang; Chen, Zhiqiang
2017-02-01
Accurate estimation of distortion-free spectra is important but difficult in various applications, especially for spectral computed tomography. Two key problems must be solved to reconstruct the incident spectrum. One is the acquisition of the detector energy response. It can be calculated by Monte Carlo simulation, which requires detailed modeling of the detector system and a high computational power. It can also be acquired by establishing a parametric response model and be calibrated using monochromatic x-ray sources, such as synchrotron sources or radioactive isotopes. However, these monochromatic sources are difficult to obtain. Inspired by x-ray fluorescence (XRF) spectrum modeling, we propose a feasible method to obtain the detector energy response based on an optimized parametric model for CdZnTe or CdTe detectors. The other key problem is the reconstruction of the incident spectrum with the detector response. Directly obtaining an accurate solution from noisy data is difficult because the reconstruction problem is severely ill-posed. Different from the existing spectrum stripping method, a maximum likelihood-expectation maximization iterative algorithm is developed based on the Poisson noise model of the system. Simulation and experiment results show that our method is effective for spectrum reconstruction and markedly increases the accuracy of XRF spectra compared with the spectrum stripping method. The applicability of the proposed method is discussed, and promising results are presented.
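The maximum likelihood-expectation maximization (ML-EM) step for unfolding an incident spectrum through a detector response can be sketched as follows (the Gaussian response matrix, bin count, and iteration count are assumptions for illustration, not the paper's calibrated CdZnTe/CdTe model):

```python
import numpy as np

n = 16
# Assumed detector energy-response matrix: each incident energy bin is
# smeared into nearby detected bins (rows: detected, cols: incident).
R = np.zeros((n, n))
for j in range(n):
    for i in range(n):
        R[i, j] = np.exp(-0.5 * ((i - j) / 1.2) ** 2)
R /= R.sum(axis=0, keepdims=True)       # columns sum to 1 (unit efficiency)

s_true = np.exp(-0.5 * ((np.arange(n) - 6) / 3.0) ** 2)  # incident spectrum
m = R @ s_true                           # noise-free measured spectrum

# ML-EM iteration for the Poisson model m ~ Poisson(R s):
s = np.ones(n)
sens = R.sum(axis=0)                     # sensitivity term R^T 1
for _ in range(2000):
    s *= (R.T @ (m / (R @ s))) / sens

print(float(np.max(np.abs(R @ s - m))))  # small data-space residual
```

The multiplicative update keeps the spectrum non-negative automatically, one reason ML-EM is preferred over direct spectrum stripping for this ill-posed problem.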
Single-cell volume estimation by applying three-dimensional reconstruction methods
NASA Astrophysics Data System (ADS)
Khatibi, Siamak; Allansson, Louise; Gustavsson, Tomas; Blomstrand, Fredrik; Hansson, Elisabeth; Olsson, Torsten
1999-05-01
We have studied three-dimensional reconstruction methods to estimate the cell volume of astroglial cells in primary culture. The studies are based on fluorescence imaging and optical sectioning. An automated image-acquisition system was developed to collect two-dimensional microscopic images. Images were reconstructed by the linear Maximum a Posteriori method and the non-linear Maximum Likelihood Expectation Maximization (ML-EM) method. In addition, because of the high computational demand of the ML-EM algorithm, we have developed a fast variant of this method. Advanced image analysis techniques were applied for accurate and automated cell volume determination. The sensitivity and accuracy of the reconstruction methods were evaluated by using fluorescent micro-beads with known diameter. The algorithms were applied to fura-2-labeled astroglial cells in primary culture exposed to hypo- or hyper-osmotic stress. The results showed that the ML-EM reconstructed images are adequate for the determination of volume changes in cells or parts thereof.
Pantograph: A template-based method for genome-scale metabolic model reconstruction.
Loira, Nicolas; Zhukova, Anna; Sherman, David James
2015-04-01
Genome-scale metabolic models are a powerful tool to study the inner workings of biological systems and to guide applications. The advent of cheap sequencing has brought the opportunity to create metabolic maps of biotechnologically interesting organisms. While this drives the development of new methods and automatic tools, network reconstruction remains a time-consuming process where extensive manual curation is required. This curation introduces specific knowledge about the modeled organism, either explicitly in the form of molecular processes, or indirectly in the form of annotations of the model elements. Paradoxically, this knowledge is usually lost when reconstruction of a different organism is started. We introduce the Pantograph method for metabolic model reconstruction. This method combines a template reaction knowledge base, orthology mappings between two organisms, and experimental phenotypic evidence, to build a genome-scale metabolic model for a target organism. Our method infers implicit knowledge from annotations in the template, and rewrites these inferences to include them in the resulting model of the target organism. The generated model is well suited for manual curation. Scripts for evaluating the model with respect to experimental data are automatically generated, to aid curators in iterative improvement. We present an implementation of the Pantograph method, as a toolbox for genome-scale model reconstruction, curation and validation. This open source package can be obtained from: http://pathtastic.gforge.inria.fr.
Zhou, Xuezhong; Liu, Baoyan; Wu, Zhaohui; Feng, Yi
2007-10-01
The amount of biomedical data in different disciplines is growing at an exponential rate. Integrating these significant knowledge sources to generate novel hypotheses for systems biology research is difficult. Traditional Chinese medicine (TCM) is a completely different discipline, and is a complementary knowledge system to modern biomedical science. This paper uses a significant TCM bibliographic literature database in China, together with MEDLINE, to help discover novel gene functional knowledge. We present an integrative mining approach to uncover the functional gene relationships from MEDLINE and TCM bibliographic literature. This paper introduces TCM literature (about 50,000 records) as one knowledge source for constructing literature-based gene networks. We use the TCM diagnosis, TCM syndrome, to automatically congregate the related genes. The syndrome-gene relationships are discovered based on the syndrome-disease relationships extracted from TCM literature and the disease-gene relationships in MEDLINE. Based on the bubble-bootstrapping and relation weight computing methods, we have developed a prototype system called MeDisco/3S, which has named entity and relation extraction, and online analytical processing (OLAP) capabilities, to perform the integrative mining process. We obtained about 200,000 syndrome-gene relations, which could help generate syndrome-based gene networks, and help analyze the functional knowledge of genes from a syndrome perspective. We take the gene network of Kidney-Yang Deficiency syndrome (KYD syndrome) and the functional analysis of some genes, such as CRH (corticotropin releasing hormone), PTH (parathyroid hormone), PRL (prolactin), BRCA1 (breast cancer 1, early onset) and BRCA2 (breast cancer 2, early onset), to demonstrate the preliminary results. The underlying hypothesis is that the related genes of the same syndrome will have some biological functional relationships, and will constitute a functional network. This paper presents
Quantitative schlieren method for studying the wavefront reconstructed from a hologram
Lyalikov, A.M.
1995-03-01
A schlieren method is proposed for visualizing the deflection angles of the light beams reconstructed from a phase object hologram. The method is based on employing a stationary visualizing slit and selecting the image of a slit light source by a movable slit. This light source comprises several equidistant slit sources. Compensation for the aberrations of the hologram-recording system is considered. Experimental results of the evaluation tests showing the performance of the method developed are presented. 15 refs., 4 figs.
McCloskey, Rosemary M.; Liang, Richard H.; Harrigan, P. Richard; Brumme, Zabrina L.
2014-01-01
A population of human immunodeficiency virus (HIV) within a host often descends from a single transmitted/founder virus. The high mutation rate of HIV, coupled with long delays between infection and diagnosis, makes isolating and characterizing this strain a challenge. In theory, ancestral reconstruction could be used to recover this strain from sequences sampled in chronic infection; however, the accuracy of phylogenetic techniques in this context is unknown. To evaluate the accuracy of these methods, we applied ancestral reconstruction to a large panel of published longitudinal clonal and/or single-genome-amplification HIV sequence data sets with at least one intrapatient sequence set sampled within 6 months of infection or seroconversion (n = 19,486 sequences, median [interquartile range] = 49 [20 to 86] sequences/set). The consensus of the earliest sequences was used as the best possible estimate of the transmitted/founder. These sequences were compared to ancestral reconstructions from sequences sampled at later time points using both phylogenetic and phylogeny-naive methods. Overall, phylogenetic methods conferred a 16% improvement in reproducing the consensus of early sequences, compared to phylogeny-naive methods. This relative advantage increased with intrapatient sequence diversity (P < 10−5) and the time elapsed between the earliest and subsequent samples (P < 10−5). However, neither approach performed well for reconstructing ancestral indel variation, especially within indel-rich regions of the HIV genome. Although further improvements are needed, our results indicate that phylogenetic methods for ancestral reconstruction significantly outperform phylogeny-naive alternatives, and we identify experimental conditions and study designs that can enhance accuracy of transmitted/founder virus reconstruction. IMPORTANCE When HIV is transmitted into a new host, most of the viruses fail to infect host cells. Consequently, an HIV infection tends to be
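The baseline estimate used above, the column-wise consensus of the earliest aligned sequences, can be sketched in a few lines (the toy sequences are illustrative; real pipelines also need tie-breaking rules and gap handling):

```python
from collections import Counter

def consensus(aligned_seqs):
    """Column-wise majority consensus of equal-length aligned sequences."""
    cols = zip(*aligned_seqs)
    return "".join(Counter(col).most_common(1)[0][0] for col in cols)

# Hypothetical earliest-sampled aligned reads ("-" marks an alignment gap)
early = ["ATGCA-TC",
         "ATGCAATC",
         "ATGTAATC",
         "ATGCAATG"]
print(consensus(early))  # ATGCAATC
```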
Linear control theory for gene network modeling.
Shin, Yong-Jun; Bleris, Leonidas
2010-09-16
Systems biology is an interdisciplinary field that aims at understanding complex interactions in cells. Here we demonstrate that linear control theory can provide valuable insight and practical tools for the characterization of complex biological networks. We provide the foundation for such analyses through the study of several case studies including cascade and parallel forms, feedback and feedforward loops. We reproduce experimental results and provide rational analysis of the observed behavior. We demonstrate that methods such as the transfer function (frequency domain) and linear state-space (time domain) can be used to predict reliably the properties and transient behavior of complex network topologies and point to specific design strategies for synthetic networks.
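The cascade form analyzed in the paper can be sketched as a linear state-space model whose transfer function predicts the steady state (the rate constants and the forward-Euler simulation are illustrative assumptions, not the paper's parameter values):

```python
# Linear state-space model of a two-gene cascade:
#   x1' = -a1*x1 + u        (gene 1 driven by input u)
#   x2' = -a2*x2 + k*x1     (gene 2 driven by gene 1)
# Transfer function from u to x2: G(s) = k / ((s + a1)(s + a2)).
a1, a2, k, u = 0.5, 0.2, 1.0, 1.0       # illustrative rate constants
dt, steps = 0.01, 20000                  # forward-Euler integration
x1 = x2 = 0.0
for _ in range(steps):
    x1 += dt * (-a1 * x1 + u)
    x2 += dt * (-a2 * x2 + k * x1)

# DC gain G(0) = k / (a1*a2) predicts the steady state for a constant input.
print(round(x2, 2), k * u / (a1 * a2))
```

The agreement between the simulated steady state and the DC gain is exactly the kind of frequency-domain prediction the paper advocates for characterizing network topologies.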
An iterative reconstruction method for high-pitch helical luggage CT
NASA Astrophysics Data System (ADS)
Xue, Hui; Zhang, Li; Chen, Zhiqiang; Jin, Xin
2012-10-01
X-ray luggage CT is widely used in airports and railway stations for the purpose of detecting contraband and dangerous goods that may pose a potential threat to public safety, playing an important role in homeland security. An X-ray luggage CT usually scans in a helical trajectory with a high pitch to achieve a high passing speed of the luggage. The disadvantage of high pitch is that conventional filtered back-projection (FBP) requires a very large slice thickness, leading to poor axial resolution and helical artifacts. Especially when severe data inconsistencies are present in the z-direction, such as at the ends of a scanned object, the partial volume effect leads to inaccurate values and may cause misidentification. In this paper, an iterative reconstruction method is developed to improve the image quality and accuracy for a large-spacing multi-detector high-pitch helical luggage CT system. In this method, the slice thickness is set to be much smaller than the pitch. Each slice involves projection data collected in a rather small angular range, making it an ill-conditioned limited-angle problem. First, a low-resolution reconstruction is employed to obtain images, which are used as prior images in the following process. Then iterative reconstruction is performed to obtain high-resolution images. This method enables a high volume coverage speed and a thin reconstruction slice for the helical luggage CT. We validate this method with data collected on a commercial X-ray luggage CT system.
Zhang, Xiao-Zheng; Thomas, Jean-Hugh; Bi, Chuan-Xing; Pascal, Jean-Claude
2012-10-01
A time-domain plane wave superposition method is proposed to reconstruct nonstationary sound fields. In this method, the sound field is expressed as a superposition of time convolutions between the estimated time-wavenumber spectrum of the sound pressure on a virtual source plane and the time-domain propagation kernel at each wavenumber. By discretizing the time convolutions directly, the reconstruction can be carried out iteratively in the time domain, thus providing the advantage of continuously reconstructing time-dependent pressure signals. In the reconstruction process, Tikhonov regularization is introduced at each time step to obtain a relevant estimate of the time-wavenumber spectrum on the virtual source plane. Because the double infinite integral of the two-dimensional spatial Fourier transform is discretized directly in the wavenumber domain, the proposed method does not need to perform the two-dimensional spatial fast Fourier transform that is generally used in time-domain holography and real-time near-field acoustic holography; it therefore avoids, in theory, some errors associated with that transform and makes it possible to use an irregular microphone array. The feasibility of the proposed method is demonstrated by numerical simulations and an experiment with two speakers.
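The per-time-step Tikhonov regularization mentioned above can be sketched in isolation as a single regularized least-squares solve for the source spectrum at one time step. The kernel G, the data, and the parameter value are illustrative assumptions, not the paper's propagation model.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 40
G = rng.standard_normal((m, m)) / np.sqrt(m)    # stand-in propagation kernel
s_true = np.sin(np.linspace(0, 3 * np.pi, m))   # "true" wavenumber spectrum at this step
p = G @ s_true + 0.01 * rng.standard_normal(m)  # noisy "microphone" pressures

lam = 1e-2                                      # Tikhonov regularization parameter
s_est = np.linalg.solve(G.T @ G + lam * np.eye(m), G.T @ p)
```

In the actual method this solve recurs at every time step, with the previous steps entering through the discretized convolution.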
A Reconstructed Discontinuous Galerkin Method for the Euler Equations on Arbitrary Grids
Hong Luo; Luqing Luo; Robert Nourgaliev
2012-11-01
A reconstruction-based discontinuous Galerkin (RDG(P1P2)) method, a variant of P1P2 method, is presented for the solution of the compressible Euler equations on arbitrary grids. In this method, an in-cell reconstruction, designed to enhance the accuracy of the discontinuous Galerkin method, is used to obtain a quadratic polynomial solution (P2) from the underlying linear polynomial (P1) discontinuous Galerkin solution using a least-squares method. The stencils used in the reconstruction involve only the von Neumann neighborhood (face-neighboring cells) and are compact and consistent with the underlying DG method. The developed RDG method is used to compute a variety of flow problems on arbitrary meshes to demonstrate its accuracy, efficiency, robustness, and versatility. The numerical results indicate that this RDG(P1P2) method is third-order accurate, and outperforms the third-order DG method (DG(P2)) in terms of both computing costs and storage requirements.
NASA Astrophysics Data System (ADS)
Xu, Luopeng; Dan, Youquan; Wang, Qingyuan
2015-10-01
The continuous wavelet transform (CWT) introduces an adjustable spatial and frequency window that can overcome the poor localization characteristic of the Fourier transform and the windowed Fourier transform. The CWT method is widely applied in non-stationary signal analysis, including optical 3D shape reconstruction, with remarkable performance. In optical 3D surface measurement, the performance of the CWT for optical fringe pattern phase reconstruction usually depends on the choice of wavelet function. A large class of CWT wavelet functions, such as the Mexican Hat, Morlet, DOG and Gabor wavelets, can be generated from the Gauss wavelet function. However, applications of the Gauss wavelet transform (GWT) method (i.e., CWT with a Gauss wavelet function) in optical profilometry have so far rarely been reported. In this paper, the method of using the GWT for optical fringe pattern phase reconstruction is presented first, and comparisons between the real and complex GWT methods are discussed in detail. Examples of numerical simulations are also given and analyzed. The results show that both the real GWT method combined with a Hilbert transform and the complex GWT method can realize three-dimensional surface reconstruction, and that the reconstruction performance generally depends on the frequency-domain appearance of the Gauss wavelet functions. For optical fringe patterns whose phase varies strongly with position, the real GWT performs better than the complex one because the complex Gauss-series wavelets exhibit frequency sidelobes. Finally, experiments are carried out, and the experimental results agree well with our theoretical analysis.
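As a minimal 1-D analogue of complex-GWT phase retrieval (not the authors' algorithm), the snippet below extracts a constant fringe phase with a complex Gabor/Gauss wavelet matched to the carrier frequency; all signal parameters are illustrative.

```python
import numpy as np

x = np.arange(512)
f0 = 0.05                                   # carrier frequency (cycles/sample)
phi_true = 0.8                              # constant phase to recover
fringe = np.cos(2 * np.pi * f0 * x + phi_true)

t = np.arange(-64, 65)
sigma = 20.0
wavelet = np.exp(-t**2 / (2 * sigma**2)) * np.exp(2j * np.pi * f0 * t)  # complex Gabor/Gauss
coeff = np.sum(fringe[256 + t] * np.conj(wavelet))     # CWT coefficient at the centre sample
phase_est = np.mod(np.angle(coeff) - 2 * np.pi * f0 * 256, 2 * np.pi)   # remove the carrier
```

The Gaussian envelope suppresses the negative-frequency fringe component, so the argument of the complex coefficient carries the local phase directly; a real wavelet would instead need a Hilbert transform, as the abstract notes.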
Pan, Xiaochang; Liu, Ke; Bai, Jing; Luo, Jianwen
2014-09-07
In ultrasound elastography, reconstruction of tissue elasticity (e.g., Young's modulus) requires regularization and known information about forces and/or displacements on tissue boundaries. In practice, it is challenging to choose an appropriate regularization parameter, and the boundary conditions are difficult to obtain in vivo. The purpose of this study is to develop a more applicable algorithm that does not need any regularization or boundary force/displacement information. The proposed method adopts the bicubic B-spline as the tissue motion model to estimate the displacement fields. The estimated displacements are then input to a finite element inversion scheme to reconstruct the Young's modulus of each element. In the inversion, a modulus boundary condition is used instead of force/displacement boundary conditions. Simulations and experiments on tissue-mimicking phantoms were carried out to test the proposed method. The simulation results demonstrate that the Young's modulus reconstruction of the proposed method has a relative error of -3.43 ± 0.43% and a root-mean-square error of 16.94 ± 0.25%. The phantom experimental results show that the target-hardening artifacts in the strain images are significantly reduced in the Young's modulus images. In both the simulation and phantom studies, the size and position of inclusions can be accurately depicted in the modulus images. The proposed method can reconstruct the tissue Young's modulus distribution with high accuracy. It can reduce the artifacts shown in the strain image and correctly delineate the locations and sizes of inclusions. Unlike most modulus reconstruction methods, it does not need any regularization during the inversion procedure. Furthermore, it does not need measurement of the displacement or force boundary conditions. Thus this method can be used with a freehand scan, which facilitates its usage in the clinic.
A novel method for event reconstruction in Liquid Argon Time Projection Chamber
NASA Astrophysics Data System (ADS)
Diwan, M.; Potekhin, M.; Viren, B.; Qian, X.; Zhang, C.
2016-10-01
Future experiments such as the Deep Underground Neutrino Experiment (DUNE) will use very large Liquid Argon Time Projection Chambers (LArTPCs) containing tens of kilotons of cryogenic medium. To utilize a sensitive volume of that size, the current design employs arrays of wire electrodes grouped into readout planes arranged at stereo angles. This leads to certain challenges for object reconstruction due to the ambiguities inherent in such a scheme. We present a novel reconstruction method (named "Wirecell") inspired by principles used in tomography, which brings the LArTPC technology closer to its full potential.
Inverse Heat Conduction Methods in the CHAR Code for Aerothermal Flight Data Reconstruction
NASA Technical Reports Server (NTRS)
Oliver, A. Brandon; Amar, Adam J.
2016-01-01
Reconstruction of flight aerothermal environments often requires the solution of an inverse heat transfer problem, which is an ill-posed problem of determining boundary conditions from discrete measurements in the interior of the domain. This paper will present the algorithms implemented in the CHAR code for use in reconstruction of EFT-1 flight data and future testing activities. Implementation details will be discussed, and alternative hybrid-methods that are permitted by the implementation will be described. Results will be presented for a number of problems.
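The flavor of such an inverse heat conduction problem can be sketched with a toy 1-D example (not the CHAR algorithms): a linear explicit-difference forward model maps a constant surface heat flux to an interior temperature history, and the flux is recovered from noisy interior data by least squares. Geometry, material properties, and noise level are illustrative.

```python
import numpy as np

def forward(q, steps=200, n=20, alpha=0.2):
    """Explicit 1-D conduction; returns interior temperature history for surface flux q."""
    T = np.zeros(n)
    hist = []
    for _ in range(steps):
        Tn = T.copy()
        Tn[1:-1] += alpha * (T[2:] - 2 * T[1:-1] + T[:-2])  # interior diffusion update
        Tn[0] = Tn[1] + q        # constant-flux condition at the heated surface
        Tn[-1] = 0.0             # far end held cold
        T = Tn
        hist.append(T[10])       # interior "thermocouple" location
    return np.array(hist)

rng = np.random.default_rng(7)
q_true = 3.0
meas = forward(q_true) + 0.01 * rng.standard_normal(200)  # noisy interior measurements
basis = forward(1.0)                       # unit-flux response (model is linear in q)
q_est = basis @ meas / (basis @ basis)     # one-parameter least-squares flux estimate
```

Real reconstructions estimate a time-varying flux, which makes the problem severely ill-posed and motivates the regularized and hybrid methods discussed in the paper.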
A simple method of aortic valve reconstruction with fixed pericardium in children
Hosseinpour, Amir-Reza; González-Calle, Antonio; Adsuar-Gómez, Alejandro; Santos-deSoto, José
2013-01-01
Aortic valve reconstruction with fixed pericardium may occasionally be very useful when treating children with aortic valve disease. This is because diseased aortic valves in children are sometimes too dysmorphic for simple repair without the addition of material, their annulus may be too small for a prosthesis, and the Ross operation may be precluded due to other congenital anomalies such as pulmonary valvar or coronary malformations. Such reconstruction is usually technically demanding and requires much precision. We describe a simple alternative method, which we have carried out in 3 patients, aged 1 week, 3 years and 12 years, respectively, with good early results. PMID:23343835
Image reconstruction of muon tomographic data using a density-based clustering method
NASA Astrophysics Data System (ADS)
Perry, Kimberly B.
Muons are subatomic particles capable of reaching the Earth's surface before decaying. When these particles collide with an object that has a high atomic number (Z), their path of travel changes substantially. Tracking muon movement through shielded containers can indicate what types of materials lie inside. This thesis proposes using a density-based clustering algorithm called OPTICS to perform image reconstructions using muon tomographic data. The results show that this method is capable of detecting high-Z materials quickly, and can also produce detailed reconstructions with large amounts of data.
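A toy stand-in for the density-based idea (simple neighbor counting rather than the full OPTICS algorithm): scattering vertices whose eps-neighborhood is dense flag the high-Z object, whose position is then estimated from the dense points. Data and thresholds are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
background = rng.uniform(0.0, 10.0, size=(200, 2))           # sparse scattering vertices
block = rng.normal(loc=[5.0, 5.0], scale=0.2, size=(50, 2))  # dense cluster: high-Z object
pts = np.vstack([background, block])

eps, min_pts = 0.5, 10
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
core = (d < eps).sum(axis=1) >= min_pts                      # "core" points by local density
centroid = pts[core].mean(axis=0)                            # estimated object position
```

OPTICS improves on this fixed-eps scheme by ordering points by reachability distance, so clusters of varying density can be extracted without choosing eps in advance.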
Iterative reconstruction method for three-dimensional non-cartesian parallel MRI
NASA Astrophysics Data System (ADS)
Jiang, Xuguang
Parallel magnetic resonance imaging (MRI) with a non-Cartesian sampling pattern is a promising technique that increases the scan speed by using multiple receiver coils with reduced samples. However, reconstruction is challenging due to the increased complexity. Three reconstruction methods were evaluated: gridding, blocked uniform resampling (BURS) and non-uniform FFT (NUFFT). Computer simulations of parallel reconstruction were performed. The root mean square error (RMSE) of the reconstructed images relative to the simulated phantom was used as the image quality criterion. The gridding method showed the best RMSE performance. Two types of a priori constraints to reduce noise and artifacts were evaluated: an edge-preserving penalty, which suppresses noise and aliasing artifacts in the image while preventing over-smoothing, and an object support penalty, which reduces background noise amplification. A trust-region-based step-ratio method that iteratively calculates the penalty coefficient was proposed for the penalty functions. Two methods to alleviate the computational burden were evaluated: a smaller oversampling ratio, and interpolation coefficient matrix compression. Their performance was individually tested using computer simulations. The edge-preserving penalty and object support penalty were shown to give consistent improvements in RMSE, and the calculated penalty coefficients for the two penalties performed close to the best RMSE. An oversampling ratio as low as 1.125 was shown to have an impact of less than one percent on RMSE for the radial sampling pattern reconstruction. This value reduced the three-dimensional data requirement to less than 1/5 of what the conventional 2x grid needed. Interpolation matrix compression with a compression ratio up to 50 percent showed small impact on RMSE. The proposed method was validated on 25 MR data sets from a GE MR scanner. Six image quality metrics were used to evaluate the performance. RMSE, normalized mutual information (NMI) and joint entropy (JE) relative to a reference
A marked bounding box method for image data reduction and reconstruction of sole patterns
NASA Astrophysics Data System (ADS)
Wang, Xingyue; Wu, Jianhua; Zhao, Qingmin; Cheng, Jian; Zhu, Yican
2011-12-01
A novel and efficient method, called the marked bounding box method and based on marching cubes, is presented for point cloud data reduction of sole patterns. The method is characterized in that each bounding box is marked with an index during data reduction, for later use in data reconstruction. The data reconstruction is implemented from the simplified data set using triangular meshes, the indices being used to search for the nearest points in adjacent bounding boxes. Afterwards, the normal vectors are estimated to determine the strength and direction of the surface reflected light. The proposed method is used in a sole pattern classification and query system that uses OpenGL under Visual C++ to render images of sole patterns. Numerical results are given to demonstrate the efficiency and novelty of our method. Finally, conclusions and discussion are given.
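The box-marking idea can be sketched as follows (a simplified stand-in, not the paper's marching-cubes variant): points are binned into bounding boxes, each box keeps its centroid as the reduced point, and a flat integer index marks the box so neighboring boxes can be looked up during reconstruction. Grid size and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
cloud = rng.uniform(0.0, 1.0, size=(1000, 3))      # raw point cloud (illustrative)
box = 0.25                                          # bounding-box edge length (4x4x4 grid)
idx = np.floor(cloud / box).astype(int)             # 3-D box index of each point
keys = idx[:, 0] * 16 + idx[:, 1] * 4 + idx[:, 2]   # flat integer mark for each box
reduced = {k: cloud[keys == k].mean(axis=0) for k in np.unique(keys)}  # one point per box
```

During meshing, a reduced point's neighbors are found by decoding the flat index back to (i, j, k) and probing the 26 adjacent box keys, which avoids a global nearest-neighbor search.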
NASA Technical Reports Server (NTRS)
Baxes, Gregory A. (Inventor)
2010-01-01
Systems and methods are provided for progressive mesh storage and reconstruction using wavelet-encoded height fields. A method for progressive mesh storage includes reading raster height field data, and processing the raster height field data with a discrete wavelet transform to generate wavelet-encoded height fields. In another embodiment, a method for progressive mesh storage includes reading texture map data, and processing the texture map data with a discrete wavelet transform to generate wavelet-encoded texture map fields. A method for reconstructing a progressive mesh from wavelet-encoded height field data includes determining terrain blocks, and a level of detail required for each terrain block, based upon a viewpoint. Triangle strip constructs are generated from vertices of the terrain blocks, and an image is rendered utilizing the triangle strip constructs. Software products that implement these methods are provided.
NASA Astrophysics Data System (ADS)
Keselman, J. A.; Nusser, A.
2017-05-01
No Action Method (NoAM) is a framework for reconstructing the past orbits of observed tracers of the large-scale mass density field. It seeks exact solutions of the equations of motion (EoM), satisfying initial homogeneity and the final observed particle (tracer) positions. The solutions are found iteratively reaching a specified tolerance defined as the RMS of the distance between reconstructed and observed positions. Starting from a guess for the initial conditions, NoAM advances particles using standard N-body techniques for solving the EoM. Alternatively, the EoM can be replaced by any approximation such as Zel'dovich and second-order perturbation theory (2LPT). NoAM is suitable for billions of particles and can easily handle non-regular volumes, redshift space and other constraints. We implement NoAM to systematically compare Zel'dovich, 2LPT, and N-body dynamics over diverse configurations ranging from an idealized high-res periodic simulation box to realistic galaxy mocks. Our findings are: (i) non-linear reconstructions with Zel'dovich, 2LPT, and full dynamics perform better than linear theory only for idealized catalogues in real space. For realistic catalogues, linear theory is the optimal choice for reconstructing velocity fields smoothed on scales ≳ 5 h- 1 Mpc; (ii) all non-linear back-in-time reconstructions tested here produce comparable enhancement of the baryonic oscillation signal in the correlation function.
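The iterate-until-tolerance idea can be caricatured in one dimension (this is not the NoAM code): given final positions produced by a known Zel'dovich-like forward map, iterate on the initial positions until the forward map reproduces the observations to machine precision. The displacement field is an illustrative assumption.

```python
import numpy as np

def forward(q):                             # Zel'dovich-like forward map (illustrative)
    return q + 0.1 * np.sin(q)

q_true = np.linspace(0, 2 * np.pi, 50, endpoint=False)   # homogeneous initial positions
x_obs = forward(q_true)                      # "observed" final tracer positions

q = x_obs.copy()                             # initial guess for the initial conditions
for _ in range(100):
    q = q + (x_obs - forward(q))             # correct the guess by the residual displacement
rms = np.sqrt(np.mean((forward(q) - x_obs) ** 2))   # NoAM-style convergence measure
```

The iteration converges because the displacement is a contraction here; in NoAM the forward map is a full N-body or perturbation-theory integration, but the stopping criterion is the same RMS distance.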
NASA Astrophysics Data System (ADS)
Stephanakis, Ioannis M.; Anastassopoulos, George C.
2009-03-01
A novel algorithm for 3-D tomographic reconstruction is proposed. The proposed algorithm is based on multiresolution techniques for local inversion of the 3-D Radon transform in confined subvolumes within the entire object space. Directional wavelet functions of the form ψ_{m,n}^{j}(x) = 2^{j/2} ψ(2^{j} w_{m,n} x) are employed in a sequel of double filtering and 2-D backprojection operations performed on vertical and horizontal reconstruction planes using the method suggested by Marr and others. The densities of the 3-D object are found initially as backprojections of coarse wavelet functions of this form at directions on vertical and horizontal planes that intersect the object. As the algorithm evolves, finer planar wavelets intersecting a subvolume of medical interest within the original object may be used to reconstruct its details by double backprojection steps on vertical and horizontal planes in a similar fashion. Reduction in the complexity of the reconstruction algorithm is achieved due to the good localization properties of planar wavelets that render the details of the projections with small errors. Experimental results that illustrate multiresolution reconstruction at four successive levels of resolution are given for wavelets belonging to the Daubechies family.
Deep learning methods to guide CT image reconstruction and reduce metal artifacts
NASA Astrophysics Data System (ADS)
Gjesteby, Lars; Yang, Qingsong; Xi, Yan; Zhou, Ye; Zhang, Junping; Wang, Ge
2017-03-01
The rapidly-rising field of machine learning, including deep learning, has inspired applications across many disciplines. In medical imaging, deep learning has been primarily used for image processing and analysis. In this paper, we integrate a convolutional neural network (CNN) into the computed tomography (CT) image reconstruction process. Our first task is to monitor the quality of CT images during iterative reconstruction and decide when to stop the process according to an intelligent numerical observer instead of using a traditional stopping rule, such as a fixed error threshold or a maximum number of iterations. After training on ground truth images, the CNN was successful in guiding an iterative reconstruction process to yield high-quality images. Our second task is to improve a sinogram to correct for artifacts caused by metal objects. A large number of interpolation and normalization-based schemes were introduced for metal artifact reduction (MAR) over the past four decades. The NMAR algorithm is considered a state-of-the-art method, although residual errors often remain in the reconstructed images, especially in cases of multiple metal objects. Here we merge NMAR with deep learning in the projection domain to achieve additional correction in critical image regions. Our results indicate that deep learning can be a viable tool to address CT reconstruction challenges.
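The classical interpolation-based MAR step that NMAR builds on can be sketched on a single sinogram row (a simplified precursor, not the paper's CNN or NMAR implementation): samples flagged as metal are replaced by linear interpolation from their unaffected neighbors. The data and metal mask are illustrative.

```python
import numpy as np

xs = np.arange(100)
row = np.sin(np.pi * xs / 99)                  # clean projection row (illustrative)
metal = np.zeros(100, dtype=bool)
metal[40:50] = True                            # bins corrupted by a metal trace
corrupted = row.copy()
corrupted[metal] = 10.0                        # metal spikes in the sinogram

corrected = corrupted.copy()
corrected[metal] = np.interp(xs[metal], xs[~metal], corrupted[~metal])
err = np.max(np.abs(corrected - row))          # residual interpolation error
```

NMAR improves on raw interpolation by first normalizing the sinogram with a forward projection of a prior image, so the interpolated trace follows the anatomy; the paper's contribution is to let a network refine this step further.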
A comparison of force reconstruction methods for a lumped mass beam
Bateman, V.I.; Mayes, R.L.; Carne, T.G.
1992-11-01
Two extensions of the force reconstruction method known as the Sum of Weighted Accelerations Technique (SWAT) are presented in this paper, and the results are compared to those obtained using SWAT. SWAT requires the use of the structure's elastic mode shapes for reconstruction of the applied force. Although based on the same theory, the two new techniques do not rely on mode shapes to reconstruct the applied force and may be applied to structures whose mode shapes are not available. One technique uses the measured force and acceleration responses with the rigid body mode shapes to calculate the scalar weighting vector, so the technique is called SWAT-CAL (SWAT using a CALibrated force input). The second technique uses only the free-decay time response of the structure with the rigid body mode shapes to calculate the scalar weighting vector and is called SWAT-TEEM (SWAT using Time Eliminated Elastic Modes).
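The principle behind SWAT (weights that cancel elastic modes and retain rigid-body content) reduces, for a purely rigid two-mass model, to Newton's second law: the mass-weighted sum of accelerations recovers the applied force. The sketch below illustrates only this limiting case, with made-up values.

```python
import numpy as np

# Rigid two-mass model: both masses share the CG acceleration, so the
# mass-weighted sum of measured accelerations equals the applied force.
m = np.array([2.0, 3.0])                     # lumped masses (illustrative)
t = np.linspace(0, 1, 200)
F_true = np.sin(2 * np.pi * t)               # applied force history
a_cg = F_true / m.sum()                      # rigid-body acceleration
accel = np.vstack([a_cg, a_cg])              # "measured" accelerations at both masses
F_rec = m @ accel                            # weighted sum of accelerations
```

For a flexible structure the accelerometer signals also contain elastic-mode content, and the whole art of SWAT, SWAT-CAL, and SWAT-TEEM lies in choosing weights that cancel that content.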
Low Dose PET Image Reconstruction with Total Variation Using Alternating Direction Method
Yu, Xingjian; Wang, Chenye; Hu, Hongjie; Liu, Huafeng
2016-01-01
In this paper, a total variation (TV) minimization strategy is proposed to overcome the problem of sparse spatial resolution and large amounts of noise in low dose positron emission tomography (PET) imaging reconstruction. Two types of objective function were established based on two statistical models of measured PET data, least-square (LS) TV for the Gaussian distribution and Poisson-TV for the Poisson distribution. To efficiently obtain high quality reconstructed images, the alternating direction method (ADM) is used to solve these objective functions. As compared with the iterative shrinkage/thresholding (IST) based algorithms, the proposed ADM can make full use of the TV constraint and its convergence rate is faster. The performance of the proposed approach is validated through comparisons with the expectation-maximization (EM) method using synthetic and experimental biological data. In the comparisons, the results of both LS-TV and Poisson-TV are taken into consideration to find which models are more suitable for PET imaging, in particular low-dose PET. To evaluate the results quantitatively, we computed bias, variance, and the contrast recovery coefficient (CRC) and drew profiles of the reconstructed images produced by the different methods. The results show that both Poisson-TV and LS-TV can provide a high visual quality at a low dose level. The bias and variance of the proposed LS-TV and Poisson-TV methods are 20% to 74% less at all counting levels than those of the EM method. Poisson-TV gives the best performance in terms of high-accuracy reconstruction with the lowest bias and variance as compared to the ground truth (14.3% less bias and 21.9% less variance). In contrast, LS-TV gives the best performance in terms of the high contrast of the reconstruction with the highest CRC. PMID:28005929
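The LS-TV objective can be illustrated (with plain gradient descent on a smoothed TV term rather than the paper's ADM solver) on a 1-D piecewise-constant signal; lambda, the smoothing epsilon, and the step size are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)
x_true = np.concatenate([np.zeros(50), np.ones(50)])   # piecewise-constant "image" row
y = x_true + 0.1 * rng.standard_normal(100)            # noisy data

lam, eps, step = 0.2, 1e-2, 0.1
x = y.copy()
for _ in range(500):
    d = np.diff(x)
    g = d / np.sqrt(d**2 + eps)            # gradient of the smoothed |d| terms
    tv_grad = np.zeros_like(x)
    tv_grad[:-1] -= g
    tv_grad[1:] += g
    x -= step * ((x - y) + lam * tv_grad)  # gradient step on the LS + smoothed-TV objective
```

ADM avoids the smoothing parameter entirely by splitting the TV term into an auxiliary variable with a closed-form shrinkage update, which is why it converges faster than IST-type schemes.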
Richart, Jose; Otal, Antonio; Rodriguez, Silvia; Nicolás, Ana Isabel; DePiaggio, Marina; Santos, Manuel; Vijande, Javier; Perez-Calatayud, Jose
2015-01-01
Purpose There are perineal templates for interstitial implants, such as the MUPIT and Syed applicators. Their limitations are the lack of an intracavitary component and the necessity of using computed tomography (CT) for treatment planning, since neither applicator is magnetic resonance imaging (MRI) compatible. To overcome these problems, a new template named Template Benidorm (TB) has recently been developed. Titanium needles are usually reconstructed based on their own artifacts, mainly in the T1-weighted sequence, using the void at the tip as the needle tip position. Nevertheless, the patient tissues surrounding the needles present heterogeneities that complicate the accurate identification of these artifact patterns. The purpose of this work is to reduce the titanium needle reconstruction uncertainty for the TB case using a simple method based on the free needle lengths and typical MRI pellet markers. Material and methods The proposed procedure consists of including three small vitamin A pellets (hyperintense on MRI images) compressed by both applicator plates, defining the central plane of the plate arrangement. The needles used are typically 20 cm in length. For each needle, two points are selected to define a straight line. From this line and the plane equation, the intersection can be obtained, and using the free length (knowing the offset distance), the coordinates of the needle tip can be computed. The method is applied in both T1W and T2W acquisition sequences. To evaluate the inter-observer variation of the method, three implants imaged with T1W and another three with T2W were reconstructed by two different medical physicists with experience in these reconstructions. Results and conclusions The differences observed in the positioning were significantly smaller than 1 mm in all cases. The presented algorithm also allows the use of only the T2W sequence for both contouring and reconstruction purposes. The proposed method is robust and independent of the visibility
Listening to the noise: random fluctuations reveal gene network parameters.
Munsky, Brian; Trinh, Brooke; Khammash, Mustafa
2009-01-01
The cellular environment is abuzz with noise originating from the inherent random motion of reacting molecules in the living cell. In this noisy environment, clonal cell populations show cell-to-cell variability that can manifest significant phenotypic differences. Noise-induced stochastic fluctuations in cellular constituents can be measured and their statistics quantified. We show that these random fluctuations carry within them valuable information about the underlying genetic network. Far from being a nuisance, the ever-present cellular noise acts as a rich source of excitation that, when processed through a gene network, carries its distinctive fingerprint that encodes a wealth of information about that network. We show that in some cases the analysis of these random fluctuations enables the full identification of network parameters, including those that may otherwise be difficult to measure. This establishes a potentially powerful approach for the identification of gene networks and offers a new window into the workings of these networks.
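The idea that stationary fluctuation statistics constrain network parameters can be sketched on the simplest case, a birth-death gene expression model with production rate k and degradation rate g: the time-weighted mean copy number estimates k/g, and the Fano factor is near 1 for this Poissonian network. This toy Gillespie simulation is an illustration of the principle, not the authors' identification method, and the rates are made up.

```python
import numpy as np

rng = np.random.default_rng(5)
k, g = 50.0, 1.0                         # production and degradation rates (illustrative)
x, t, T = int(k / g), 0.0, 500.0         # start at the stationary mean copy number
wsum, wsq, total_t = 0.0, 0.0, 0.0
while t < T:
    rate = k + g * x                     # total event rate (birth + death)
    dt = rng.exponential(1.0 / rate)
    wsum += x * dt; wsq += x * x * dt; total_t += dt   # time-weighted moments
    x += 1 if rng.random() * rate < k else -1          # choose birth or death
    t += dt

mean = wsum / total_t                    # ~ k/g
fano = (wsq / total_t - mean**2) / mean  # ~ 1 for this Poissonian stationary state
```

The mean alone only fixes the ratio k/g; to separate the two rates one also needs dynamical information such as the autocorrelation time (~1/g), which is exactly the extra information the fluctuations carry.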
Simons, Craig J; Cobb, Loren; Davidson, Bradley S
2014-04-01
In vivo measurement of lumbar spine configuration is useful for constructing quantitative biomechanical models. Positional magnetic resonance imaging (MRI) accommodates a larger range of movement in most joints than conventional MRI and does not require a supine position. However, this is achieved at the expense of image resolution and contrast. As a result, quantitative research using positional MRI has required long reconstruction times and is sensitive to incorrectly identifying the vertebral boundary due to low contrast between bone and surrounding tissue in the images. We present a semi-automated method used to obtain digitized reconstructions of lumbar vertebrae in any posture of interest. This method combines a high-resolution reference scan with a low-resolution postural scan to provide a detailed and accurate representation of the vertebrae in the posture of interest. Compared to a criterion standard, translational reconstruction error ranged from 0.7 to 1.6 mm and rotational reconstruction error ranged from 0.3 to 2.6°. Intraclass correlation coefficients indicated high interrater reliability for measurements within the imaging plane (ICC 0.97-0.99). Computational efficiency indicates that this method may be used to compile data sets large enough to account for population variance, and potentially expand the use of positional MRI as a quantitative biomechanics research tool.
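The registration step implied above (fitting a high-resolution reference model to points from a low-resolution postural scan) can be sketched with the standard SVD-based Kabsch/Procrustes solution for a rigid rotation plus translation; the landmark points and transform below are synthetic, not from the study.

```python
import numpy as np

rng = np.random.default_rng(6)
ref = rng.standard_normal((30, 3))       # landmark points from the high-res reference scan
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
postural = ref @ R_true.T + t_true       # same landmarks seen in the postural scan

A = ref - ref.mean(axis=0)               # centre both point sets
B = postural - postural.mean(axis=0)
U, _, Vt = np.linalg.svd(A.T @ B)        # Kabsch: SVD of the cross-covariance
D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
R_est = Vt.T @ D @ U.T
t_est = postural.mean(axis=0) - ref.mean(axis=0) @ R_est.T
```

With noisy, low-contrast postural data the same least-squares fit is typically wrapped in an iterative closest-point loop, which is where the reported sub-millimetre and sub-degree errors come from.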
NASA Astrophysics Data System (ADS)
Smerdon, J. E.; Kaplan, A.; Zorita, E.; Gonzalez-Rouco, F. J.; Evans, M. N.
2009-12-01
Paleoclimatic reconstructions of hemispheric and global surface temperatures during the last millennium vary significantly in their estimates of decadal-to-centennial variability. Although several estimates are based on spatially-resolved climate field reconstruction (CFR) methods, comparisons have been limited to mean Northern Hemisphere temperatures. Spatial skill is explicitly investigated for four CFR methods using pseudoproxy experiments derived from two millennial-length coupled Atmosphere-Ocean General Circulation Model (AOGCM) simulations. The adopted pseudoproxy network approximates the spatial distribution of a widely used multi-proxy network and the CFRs target annual temperature variability on a 5-degree latitude-longitude grid. Results indicate that the spatial skill of presently available large-scale CFRs depends on proxy type and location, target data, and the employed reconstruction methodology, although there are widespread consistencies in the general performance of all four methods. While results are somewhat sensitive to the ability of the AOGCMs to resolve ENSO and its teleconnections, important areas such as the ocean basins and much of the Southern Hemisphere are reconstructed with particularly poor skill in both model experiments. New high-resolution proxies from poorly sampled regions may be one of the best means of improving estimates of large-scale CFRs of the last millennium.
NASA Astrophysics Data System (ADS)
Nakamura, Gen; Wang, Haibing
2017-05-01
Consider the problem of reconstructing unknown Robin inclusions inside a heat conductor from boundary measurements. This problem arises from active thermography and is formulated as an inverse boundary value problem for the heat equation. In our previous works, we proposed a sampling-type method for reconstructing the boundary of the Robin inclusion and gave its rigorous mathematical justification. This method is non-iterative and based on the characterization of the solution to the so-called Neumann-to-Dirichlet map gap equation. In this paper, we investigate the reconstruction method further from both the theoretical and numerical points of view. First, we clarify the solvability of the Neumann-to-Dirichlet map gap equation and relate its solution to the Green function associated with an initial-boundary value problem for the heat equation inside the Robin inclusion. This naturally provides a way of computing this Green function from the Neumann-to-Dirichlet map and clarifies what serves as the input for the linear sampling method. Assuming that the Neumann-to-Dirichlet map gap equation has a unique solution, we also show the convergence of our method for noisy measurements. Second, we give the numerical implementation of the reconstruction method for two-dimensional spatial domains. The measurements for our inverse problem are simulated by solving the forward problem via the boundary integral equation method. Numerical results are presented to illustrate the efficiency and stability of the proposed method. Finally, using a finite sequence of transient inputs over a time interval, we propose a new sampling method based on a single measurement over that interval, which is more likely to be practical.
Reconstruction method for running shape of rotor blade considering nonlinear stiffness and loads
NASA Astrophysics Data System (ADS)
Wang, Yongliang; Kang, Da; Zhong, Jingjun
2017-10-01
The aerodynamic and centrifugal loads acting on a rotating blade deform it relative to its shape at rest. Accurate prediction of the running blade configuration plays a significant role in examining and analyzing turbomachinery performance. Considering nonlinear stiffness and loads, a reconstruction method is presented to address the transformation of a rotating blade from the cold to the hot state. When calculating blade deformations, the blade stiffness and load conditions are updated simultaneously as the blade shape varies. The reconstruction procedure is iterated until a converged hot blade shape is obtained. This method has been employed to determine the operating blade shapes of a test rotor blade and the Stage 37 rotor blade, and the calculated results are compared with experiments. The results show that the proposed method is effective for blade operating shape prediction. The studies also show that this method can improve the precision of finite element analysis and aerodynamic performance analysis.
NASA Astrophysics Data System (ADS)
Guo, Siyang; Lin, Jiarui; Yang, Linghui; Ren, Yongjie; Guo, Yin
2017-07-01
The workshop Measurement Position System (wMPS) is a distributed measurement system suitable for large-scale metrology. However, some measurement problems are unavoidable in the shipbuilding industry, such as obstruction by obstacles and a limited measurement range. To deal with these factors, this paper presents a method of reconstructing the spatial measurement network with a mobile transmitter. A high-precision coordinate control network with more than six target points is established. The mobile measuring transmitter can be added to the measurement network using this coordinate control network with the spatial resection method. This method reconstructs the measurement network and broadens the measurement scope efficiently. To verify this method, two comparison experiments were designed with a laser tracker as the reference. The results demonstrate that the accuracy of point-to-point length is better than 0.4 mm and the accuracy of coordinate measurement is better than 0.6 mm.
A gene network engineering platform for lactic acid bacteria
Kong, Wentao; Kapuganti, Venkata S.; Lu, Ting
2016-01-01
Recent developments in synthetic biology have positioned lactic acid bacteria (LAB) as a major class of cellular chassis for applications. To achieve the full potential of LAB, one fundamental prerequisite is the capacity for rapid engineering of complex gene networks, such as natural biosynthetic pathways and multicomponent synthetic circuits, into which cellular functions are encoded. Here, we present a synthetic biology platform for rapid construction and optimization of large-scale gene networks in LAB. The platform involves a copy-controlled shuttle for hosting target networks and two associated strategies that enable efficient genetic editing and phenotypic validation. By using a nisin biosynthesis pathway and its variants as examples, we demonstrated multiplex, continuous editing of small DNA parts, such as ribosome-binding sites, as well as efficient manipulation of large building blocks such as genes and operons. To showcase the platform, we applied it to expand the phenotypic diversity of the nisin pathway by quickly generating a library of 63 pathway variants. We further demonstrated its utility by altering the regulatory topology of the nisin pathway for constitutive bacteriocin biosynthesis. This work demonstrates the feasibility of rapid and advanced engineering of gene networks in LAB, fostering their applications in biomedicine and other areas. PMID:26503255
He, Zhijie; Qiao, Quanbang; Li, Jun; Huang, Meiping; Zhu, Shouping; Huang, Liyu
2016-11-22
The CT image reconstruction algorithm based on compressed sensing (CS) can be formulated as an optimization problem that minimizes the total-variation (TV) term constrained by data fidelity and image nonnegativity. There are many solutions to this problem, but the computational efficiency and reconstructed image quality of these methods still need to be improved. Our aim was to investigate a faster and more accurate mathematical algorithm for the TV minimization problem in CT image reconstruction. Nesterov's algorithm (NESTA) is a fast and accurate algorithm for solving the TV minimization problem; its performance can be ascribed most notably to Nesterov's smoothing technique and to a subtle averaging of sequences of iterates, which has been shown to improve the convergence properties of standard gradient-descent algorithms. In order to demonstrate the superior performance of NESTA in computational efficiency and image quality, a comparison with the Simultaneous Algebraic Reconstruction Technique-TV (SART-TV) and Split-Bregman (SpBr) algorithms is made using a digital phantom study and two physical phantom studies from highly undersampled projection measurements. With only 25% of the conventional full-scan dose, the NESTA method reduces the average CT number error from 51.76 HU to 9.98 HU on the Shepp-Logan phantom and from 50.13 HU to 0.32 HU on the Catphan 600 phantom. On an anthropomorphic head phantom, the average CT number error is reduced from 84.21 HU to 1.01 HU in the central uniform area. To the best of our knowledge, this is the first work that applies the NESTA method to CS-based CT reconstruction. This method shows great potential; further studies and optimization are necessary.
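The smoothing device the abstract refers to can be illustrated with a small sketch. This is not NESTA itself (which couples Nesterov's smoothing with an averaging of iterates and continuation); it is a minimal gradient descent on a Huber-smoothed anisotropic TV denoising objective, and all parameter values are illustrative assumptions.

```python
import numpy as np

def smoothed_tv_grad(x, mu):
    """Gradient of a Huber-smoothed anisotropic total variation of image x.

    Huber smoothing with parameter mu is the same device Nesterov's smoothing
    applies to make the non-differentiable TV term smooth.
    """
    gx = np.diff(x, axis=1, append=x[:, -1:])   # forward differences, zero at edge
    gy = np.diff(x, axis=0, append=x[-1:, :])
    px = np.clip(gx / mu, -1.0, 1.0)            # derivative of the Huber function
    py = np.clip(gy / mu, -1.0, 1.0)
    # adjoint of the forward difference operator (a negative divergence)
    return -(np.diff(px, axis=1, prepend=0.0) + np.diff(py, axis=0, prepend=0.0))

def tv_denoise(y, lam=0.1, mu=0.05, step=0.05, n_iter=300):
    """Plain gradient descent on 0.5*||x - y||^2 + lam * TV_mu(x)."""
    x = y.copy()
    for _ in range(n_iter):
        x -= step * ((x - y) + lam * smoothed_tv_grad(x, mu))
    return x
```

Applied to a noisy piecewise-constant image, the smoothed-TV descent suppresses noise while largely preserving edges, which is the behavior the CS-based CT formulations rely on.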
A Method for 3D Histopathology Reconstruction Supporting Mouse Microvasculature Analysis
Xu, Yiwen; Pickering, J. Geoffrey; Nong, Zengxuan; Gibson, Eli; Arpino, John-Michael; Yin, Hao; Ward, Aaron D.
2015-01-01
Structural abnormalities of the microvasculature can impair perfusion and function. Conventional histology provides good spatial resolution with which to evaluate microvascular structure but affords no 3-dimensional information; this limitation could lead to misinterpretations of the complex microvessel network in health and disease. The objective of this study was to develop and evaluate an accurate, fully automated 3D histology reconstruction method to visualize the arterioles and venules within the mouse hind-limb. Sections of the tibialis anterior muscle from C57BL/J6 mice (both normal and subjected to femoral artery excision) were reconstructed using pairwise rigid and affine registrations of 5 µm-thick, paraffin-embedded serial sections digitized at 0.25 µm/pixel. Low-resolution intensity-based rigid registration was used to initialize both the nucleus landmark-based registration and a conventional high-resolution intensity-based registration. The affine nucleus landmark-based registration was developed in this work and was compared to the conventional affine high-resolution intensity-based registration method. Target registration errors were measured between adjacent tissue sections (pairwise error), as well as with respect to a 3D reference reconstruction (accumulated error, to capture propagation of error through the stack of sections). Accumulated error measures were lower (p<0.01) for the nucleus landmark technique, and superior vasculature continuity was observed. These findings indicate that registration based on automatic extraction and correspondence of small, homologous landmarks may support accurate 3D histology reconstruction. This technique avoids the otherwise problematic “banana-into-cylinder” effect observed using conventional methods that optimize the pairwise alignment of salient structures, forcing them to be section-orthogonal. This approach will provide a valuable tool for high-accuracy 3D histology tissue reconstructions for
Spectral/HP Element Method With Hierarchical Reconstruction for Solving Hyperbolic Conservation Laws
Xu, Zhiliang; Lin, Guang
2009-12-01
Hierarchical reconstruction (HR) has been successfully applied to prevent oscillations in solutions computed by finite volume, discontinuous Galerkin, and spectral volume schemes when solving hyperbolic conservation laws. In this paper, we demonstrate that HR can also be combined with spectral/hp element methods for solving hyperbolic conservation laws. We show that HR preserves the order of accuracy of spectral/hp element methods for smooth solutions and generates essentially non-oscillatory solution profiles for shock wave problems.
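Hierarchical reconstruction itself operates on the moments of the polynomial solution, but the basic non-oscillatory idea it generalizes can be shown with the simplest limiter-based reconstruction. The sketch below (an illustration, not the HR algorithm) computes minmod-limited interface states from 1D cell averages:

```python
import numpy as np

def minmod(a, b):
    """Return the argument of smaller magnitude when signs agree, else zero."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def limited_interface_states(u):
    """Minmod-limited piecewise-linear reconstruction from 1D cell averages u
    (uniform grid). Returns the states at the right and left faces of each
    interior cell; near a discontinuity the limited slope drops to zero, so
    no new extrema (oscillations) are created."""
    slope = minmod(u[1:-1] - u[:-2], u[2:] - u[1:-1])
    return u[1:-1] + 0.5 * slope, u[1:-1] - 0.5 * slope
```

On smooth (here, linear) data the limiter leaves the exact slopes untouched, while at a jump it clips the reconstruction to the cell averages, which is exactly the accuracy-preserving, oscillation-suppressing trade-off described above.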
Reconstruction of generic shape with cubic Bézier using least square method
NASA Astrophysics Data System (ADS)
Rusdi, Nur'Afifah; Yahya, Zainor Ridzuan
2015-05-01
Reverse engineering procedures have been used to represent the generic shapes of objects in order to explore their technical principles and mechanisms so that improved systems can be developed. Curve reconstruction is used extensively in reverse engineering to reproduce curves. In this paper, a cubic Bézier curve function was used for curve fitting by the least squares method, which is applied to find the optimal values of the parameters in the description of the curve function.
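To make the fitting step concrete: once each data point is assigned a parameter value, the Bézier fit is linear in the control points and reduces to an ordinary least-squares solve. The sketch below assumes a chord-length parameterization, a common choice but not necessarily the one used in the paper.

```python
import numpy as np

def fit_cubic_bezier(points):
    """Fit a cubic Bezier curve to ordered 2D points by linear least squares.

    A chord-length parameterization assigns each data point a value t in [0, 1]
    (an assumption of this sketch); the Bernstein basis then makes the problem
    linear in the four control points.
    """
    pts = np.asarray(points, dtype=float)
    d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))]
    t = d / d[-1]
    B = np.stack([(1 - t) ** 3,
                  3 * t * (1 - t) ** 2,
                  3 * t ** 2 * (1 - t),
                  t ** 3], axis=1)                 # Bernstein basis, one row per point
    ctrl, *_ = np.linalg.lstsq(B, pts, rcond=None)  # solve B @ ctrl ≈ pts
    return ctrl

def bezier_eval(ctrl, t):
    """Evaluate the cubic Bezier with control points ctrl at parameter values t."""
    t = np.atleast_1d(t)[:, None]
    B = np.concatenate([(1 - t) ** 3, 3 * t * (1 - t) ** 2,
                        3 * t ** 2 * (1 - t), t ** 3], axis=1)
    return B @ ctrl
```

For noisy digitized outlines the same solve returns the control points minimizing the sum of squared residuals under the fixed parameterization; iterative parameter correction, if used, would refine t between solves.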
Xiaodong Liu; Lijun Xuan; Hong Luo; Yidong Xia
2001-01-01
A reconstructed discontinuous Galerkin (rDG(P1P2)) method, originally introduced for the compressible Euler equations, is developed for the solution of the compressible Navier-Stokes equations on 3D hybrid grids. In this method, a piecewise quadratic polynomial solution is obtained from the underlying piecewise linear DG solution using a hierarchical Weighted Essentially Non-Oscillatory (WENO) reconstruction. The reconstructed quadratic polynomial solution is then used for the computation of the inviscid fluxes and of the viscous fluxes using the second formulation of Bassi and Rebay (Bassi-Rebay II). The developed rDG(P1P2) method is used to compute a variety of flow problems to assess its accuracy, efficiency, and robustness. The numerical results demonstrate that the rDG(P1P2) method is able to achieve the designed third order of accuracy at a cost only slightly higher than that of its underlying second-order DG method, to outperform the third-order DG method in terms of both computing costs and storage requirements, and to obtain reliable and accurate solutions for large eddy simulation (LES) and direct numerical simulation (DNS) of compressible turbulent flows.
An airborne acoustic method to reconstruct a dynamically rough flow surface.
Krynkin, Anton; Horoshenkov, Kirill V; Van Renterghem, Timothy
2016-09-01
Currently, there is no airborne in situ method to reconstruct with high fidelity the instantaneous elevation of a dynamically rough surface of a turbulent flow. This work proposes a holographic method that reconstructs the elevation of a one-dimensional rough water surface from airborne acoustic pressure data. This method can be implemented practically using an array of microphones deployed over a dynamically rough surface or using a single microphone which is traversed above the surface at a speed that is much higher than the phase velocity of the roughness pattern. In this work, the theory is validated using synthetic data calculated with the Kirchhoff approximation and a finite difference time domain method over a number of measured surface roughness patterns. The proposed method is able to reconstruct the surface elevation with a sub-millimeter accuracy and over a representatively large area of the surface. Since it has been previously shown that the surface roughness pattern reflects accurately the underlying hydraulic processes in open channel flow [e.g., Horoshenkov, Nichols, Tait, and Maximov, J. Geophys. Res. 118(3), 1864-1876 (2013)], the proposed method paves the way for the development of non-invasive instrumentation for flow mapping and characterization that are based on the acoustic holography principle.
NASA Astrophysics Data System (ADS)
Ye, Jinzuo; Du, Yang; An, Yu; Chi, Chongwei; Tian, Jie
2014-12-01
Fluorescence molecular tomography (FMT) is a promising imaging technique in preclinical research, enabling three-dimensional location of the specific tumor position for small animal imaging. However, FMT presents a challenging inverse problem that is quite ill-posed and ill-conditioned. Thus, the reconstruction of FMT faces various challenges in its robustness and efficiency. We present an FMT reconstruction method based on nonmonotone spectral projected gradient pursuit (NSPGP) with l1-norm optimization. At each iteration, a spectral gradient-projection method approximately minimizes a least-squares problem with an explicit one-norm constraint. A nonmonotone line search strategy is utilized to get the appropriate updating direction, which guarantees global convergence. Additionally, the Barzilai-Borwein step length is applied to build the optimal step length, further improving the convergence speed of the proposed method. Several numerical simulation studies, including multisource cases as well as comparative analyses, have been performed to evaluate the performance of the proposed method. The results indicate that the proposed NSPGP method is able to ensure the accuracy, robustness, and efficiency of FMT reconstruction. Furthermore, an in vivo experiment based on a heterogeneous mouse model was conducted, and the results demonstrated that the proposed method held the potential for practical applications of FMT.
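Two ingredients named in the abstract, the explicit one-norm constraint and the Barzilai-Borwein step length, can be sketched as follows. This is a bare projected-gradient sketch, not the authors' NSPGP: it omits the nonmonotone line search that guarantees global convergence, and the problem sizes are illustrative.

```python
import numpy as np

def project_l1(v, tau):
    """Euclidean projection of v onto the l1 ball {x : ||x||_1 <= tau}."""
    v = np.asarray(v, dtype=float)
    if np.abs(v).sum() <= tau:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]                  # sorted magnitudes, descending
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > css - tau)[0][-1]
    theta = (css[rho] - tau) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def spg_l1(A, y, tau, n_iter=100):
    """Projected gradient for min 0.5*||A x - y||^2  s.t.  ||x||_1 <= tau,
    with a Barzilai-Borwein step length (no nonmonotone line search)."""
    x = np.zeros(A.shape[1])
    g = A.T @ (A @ x - y)                         # gradient of the data term
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # safe initial step
    for _ in range(n_iter):
        x_new = project_l1(x - step * g, tau)
        g_new = A.T @ (A @ x_new - y)
        s, z = x_new - x, g_new - g
        step = (s @ s) / (s @ z) if s @ z > 1e-12 else 1.0  # BB step length
        x, g = x_new, g_new
    return x
```

With an identity forward operator the constrained least-squares solution is exactly the l1-ball projection of the data, which gives a simple sanity check on the iteration.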
Analysis of Cone-Beam Artifacts in off-Centered Circular CT for Four Reconstruction Methods
Peyrin, F.; Sappey-Marinier, D.
2006-01-01
Cone-beam (CB) acquisition is increasingly used for truly three-dimensional X-ray computerized tomography (CT). However, tomographic reconstruction from data collected along a circular trajectory with the popular Feldkamp algorithm is known to produce so-called CB artifacts. These artifacts result from the incompleteness of the source trajectory and the corresponding missing data in Radon space, which increase with the distance to the plane containing the source orbit. In the context of the development of integrated PET/CT microscanners, we introduced a novel off-centered circular CT cone-beam geometry. We proposed a generalized Feldkamp formula (α-FDK) adapted to this geometry, but reconstructions suffer from increased CB artifacts. In this paper, we evaluate and compare four reconstruction methods for correcting CB artifacts in the off-centered geometry. We consider the α-FDK algorithm, the shift-variant FBP method derived from the T-FDK, an FBP method based on the Grangeat formula, and an iterative algebraic method (SART). The results show that the low-contrast artifacts can be efficiently corrected by the shift-variant method and the SART method to achieve good-quality images at the expense of increased computation time, but the geometrical deformations are still not compensated for by these techniques. PMID:23165048
A hybrid 3D SEM reconstruction method optimized for complex geologic material surfaces.
Yan, Shang; Adegbule, Aderonke; Kibbey, Tohren C G
2017-08-01
Reconstruction methods are widely used to extract three-dimensional information from scanning electron microscope (SEM) images. This paper presents a new hybrid reconstruction method that combines stereoscopic reconstruction with shape-from-shading calculations to generate highly-detailed elevation maps from SEM image pairs. The method makes use of an imaged glass sphere to determine the quantitative relationship between observed intensity and angles between the beam and surface normal, and the detector and surface normal. Two specific equations are derived to make use of image intensity information in creating the final elevation map. The equations are used together, one making use of intensities in the two images, the other making use of intensities within a single image. The method is specifically designed for SEM images captured with a single secondary electron detector, and is optimized to capture maximum detail from complex natural surfaces. The method is illustrated with a complex structured abrasive material, and a rough natural sand grain. Results show that the method is capable of capturing details such as angular surface features, varying surface roughness, and surface striations. Copyright © 2017 Elsevier Ltd. All rights reserved.
Kempny, Tomas; Paroulek, Jiri; Marik, Vladimir; Kurial, Pavel; Lipovy, Bretislav; Huemer, Georg M
2013-06-01
Posttraumatic loss of the thumb is devastating. Toe-to-hand transfer is considered the criterion standard of reconstruction but is associated with donor-site issues. The twisted-toe technique uses parts from the great toe and the second toe, which allows for almost anatomical restoration of the donor foot. The authors present their experience and technical modifications of this method. Between January of 2003 and November of 2011, 18 patients suffering from loss of thumb because of a variety of indications were treated with the authors' modification of the twisted-toe technique. The neothumb was constructed with a partial onychocutaneous flap from the great toe and an osseotendinous flap from the second toe. Of 18 transplanted twisted-toe flaps, 17 survived completely (5.6 percent flap loss rate). Similarity of the reconstructed thumb compared with the healthy side was very acceptable in all cases. All patients in whom the procedure was successful were able to use the neothumb in daily life without constraints. Reconstruction of the donor site yielded very acceptable outcomes with a distinct reduction in morbidity and disfigurement compared with conventional toe harvest. The modified twisted-toe technique is the authors' preferred choice of thumb reconstruction. It allows the reconstructive surgeon to construct a very natural-appearing neothumb with good stability and grip force. In addition, it eliminates many of the donor-site problems associated with pure great toe harvest, by recreating a "neo-great toe" at the donor foot. Although the procedure is more complicated and time-consuming compared with single toe harvest, the authors firmly believe that this extra effort takes thumb reconstruction to a next level. Therapeutic, IV.
NASA Astrophysics Data System (ADS)
Mignone, A.
2014-08-01
High-order reconstruction schemes for the solution of hyperbolic conservation laws in orthogonal curvilinear coordinates are revised in the finite volume approach. The formulation employs a piecewise polynomial approximation to the zone-average values to reconstruct left and right interface states from within a computational zone to arbitrary order of accuracy by inverting a Vandermonde-like linear system of equations with spatially varying coefficients. The approach is general and can be used on uniform and non-uniform meshes, although explicit expressions are derived for polynomials from second to fifth degree in cylindrical and spherical geometries with uniform grid spacing. It is shown that, in regions of large curvature, the resulting expressions differ considerably from their Cartesian counterparts and that the lack of such corrections can severely degrade the accuracy of the solution close to the coordinate origin. Limiting techniques and monotonicity constraints are revised for conventional reconstruction schemes, namely the piecewise linear method (PLM), the third-order weighted essentially non-oscillatory (WENO) scheme and the piecewise parabolic method (PPM). The performance of the improved reconstruction schemes is investigated in a number of selected numerical benchmarks involving the solution of both scalar equations and systems of nonlinear equations (such as the equations of gas dynamics and magnetohydrodynamics) in cylindrical and spherical geometries in one and two dimensions. Results confirm that the proposed approach yields considerably smaller errors and higher convergence rates, and avoids spurious numerical effects at the symmetry axis.
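The core step described above, inverting a Vandermonde-like system that maps polynomial coefficients to zone averages, can be sketched in one dimension. The Cartesian version below is an illustrative assumption; in cylindrical or spherical geometry the matrix entries would carry the volume factors discussed in the paper.

```python
import numpy as np

def reconstruct_right_state(x_edges, ubar, order):
    """Reconstruct the solution value at the right edge of the central zone of a
    1D stencil of `order` zones, from the zone averages ubar.

    The matrix M maps polynomial coefficients a_k to zone averages: the average
    of x**k over [a, b] is (b**(k+1) - a**(k+1)) / ((k+1)*(b - a)), giving the
    Vandermonde-like system M a = ubar. Cartesian geometry is assumed here.
    """
    n = order
    M = np.empty((n, n))
    for i in range(n):
        a, b = x_edges[i], x_edges[i + 1]
        for k in range(n):
            M[i, k] = (b ** (k + 1) - a ** (k + 1)) / ((k + 1) * (b - a))
    coeff = np.linalg.solve(M, ubar[:n])         # polynomial coefficients a_0..a_{n-1}
    x_face = x_edges[n // 2 + 1]                 # right edge of the central zone
    return float(np.polyval(coeff[::-1], x_face))
```

Because the system is solved per zone with the local edge coordinates, the same code handles non-uniform meshes; a third-order stencil reproduces any quadratic profile exactly at the interface.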
Weak-lensing Power Spectrum Reconstruction by Counting Galaxies. I. The ABS Method
NASA Astrophysics Data System (ADS)
Yang, Xinjuan; Zhang, Jun; Yu, Yu; Zhang, Pengjie
2017-08-01
We propose an analytical method of blind separation (ABS) of cosmic magnification from the intrinsic fluctuations of galaxy number density in the observed galaxy number density distribution. The ABS method utilizes the different dependences of the signal (cosmic magnification) and contamination (galaxy intrinsic clustering) on galaxy flux to separate the two. It works directly on the measured cross-galaxy angular power spectra between different flux bins. It determines/reconstructs the lensing power spectrum analytically, without assumptions of galaxy intrinsic clustering and cosmology. It is unbiased in the limit of an infinite number of galaxies. In reality, the lensing reconstruction accuracy depends on survey configurations, galaxy biases, and other complexities due to a finite number of galaxies and the resulting shot-noise fluctuations in the cross-galaxy power spectra. We estimate its performance (systematic and statistical errors) in various cases. We find that Stage IV dark energy surveys such as the Square Kilometre Array and the Large Synoptic Survey Telescope are capable of accurately reconstructing the lensing power spectrum at z ≃ 1 and ℓ ≲ 5000. This lensing reconstruction only requires counting galaxies and is therefore highly complementary to cosmic shear measurement by the same surveys.
Bailey, Geoffrey N; Reynolds, Sally C; King, Geoffrey C P
2011-03-01
This paper examines the relationship between complex and tectonically active landscapes and patterns of human evolution. We show how active tectonics can produce dynamic landscapes with geomorphological and topographic features that may be critical to long-term patterns of hominin land use, but which are not typically addressed in landscape reconstructions based on existing geological and paleoenvironmental principles. We describe methods of representing topography at a range of scales using measures of roughness based on digital elevation data, and combine the resulting maps with satellite imagery and ground observations to reconstruct features of the wider landscape as they existed at the time of hominin occupation and activity. We apply these methods to sites in South Africa, where relatively stable topography facilitates reconstruction. We demonstrate the presence of previously unrecognized tectonic effects and their implications for the interpretation of hominin habitats and land use. In parts of the East African Rift, reconstruction is more difficult because of dramatic changes since the time of hominin occupation, while fossils are often found in places where activity has now almost ceased. However, we show that original, dynamic landscape features can be assessed by analogy with parts of the Rift that are currently active and indicate how this approach can complement other sources of information to add new insights and pose new questions for future investigation of hominin land use and habitats. Copyright © 2010 Elsevier Ltd. All rights reserved.
Reconstructing paleo- and initial landscapes using a multi-method approach in hummocky NE Germany
NASA Astrophysics Data System (ADS)
van der Meij, Marijn; Temme, Arnaud; Sommer, Michael
2016-04-01
The unknown state of the landscape at the onset of soil and landscape formation is one of the main sources of uncertainty in landscape evolution modelling. Reconstruction of these initial conditions is not straightforward due to the problems of polygenesis and equifinality: different initial landscapes can change through different sets of processes to an identical end state. Many attempts have been made to reconstruct this initial landscape. These include remote sensing, reverse modelling and the use of soil properties. However, each of these methods is only applicable on a certain spatial scale and comes with its own uncertainties. Here we present a new framework and preliminary results for reconstructing paleo-landscapes in an eroding setting, in which we combine reverse modelling, remote sensing, geochronology, historical data and present-day soil data. With the combination of these different approaches, different spatial scales can be covered and the uncertainty in the reconstructed landscape can be reduced. The study area is located in north-east Germany, where the landscape consists of a collection of small local depressions acting as closed catchments. This postglacial hummocky landscape is suitable for testing our new multi-method approach for several reasons: i) the closed catchments enable a full mass balance of erosion and deposition, due to the collection of colluvium in these depressions, ii) significant topography changes only started recently, with medieval deforestation and the recent intensification of agriculture, and iii) thanks to extensive previous research, a large dataset is readily available.
Lichenstein, Sarah D.; Bishop, James H.; Verstynen, Timothy D.; Yeh, Fang-Cheng
2016-01-01
Purpose: Diffusion MRI provides a non-invasive way of estimating structural connectivity in the brain. Many studies have used diffusion phantoms as benchmarks to assess the performance of different tractography reconstruction algorithms and assumed that the results can be applied to in vivo studies. Here we examined whether quality metrics derived from a common, publicly available diffusion phantom can reliably predict tractography performance in human white matter tissue. Materials and Methods: We compared estimates of fiber length and fiber crossing among a simple tensor model (diffusion tensor imaging), a more complicated model (ball-and-sticks) and model-free (diffusion spectrum imaging, generalized q-sampling imaging) reconstruction methods using a capillary phantom and in vivo human data (N = 14). Results: Our analysis showed that evaluation outcomes differ depending on whether they were obtained from phantom or human data. Specifically, the diffusion phantom favored a more complicated model over a simple tensor model or model-free methods for resolving crossing fibers. On the other hand, the human studies showed the opposite pattern of results, with the model-free methods being more advantageous than model-based methods or simple tensor models. This performance difference was consistent across several metrics, including estimating fiber length and resolving fiber crossings in established white matter pathways. Conclusions: These findings indicate that the construction of current capillary diffusion phantoms tends to favor complicated reconstruction models over a simple tensor model or model-free methods, whereas the in vivo data tends to produce opposite results. This brings into question the previous phantom-based evaluation approaches and suggests that a more realistic phantom or simulation is necessary to accurately predict the relative performance of different tractography reconstruction methods. PMID:27656122
Accident or homicide--virtual crime scene reconstruction using 3D methods.
Buck, Ursula; Naether, Silvio; Räss, Beat; Jackowski, Christian; Thali, Michael J
2013-02-10
The analysis and reconstruction of forensically relevant events, such as traffic accidents, criminal assaults and homicides, are based on external and internal morphological findings of the injured or deceased person. For this approach, high-tech methods are gaining increasing importance in forensic investigations. The non-contact optical 3D digitising system GOM ATOS is applied as a suitable tool for whole-body surface and wound documentation and analysis, in order to identify injury-causing instruments and to reconstruct the course of events. In addition to the surface documentation, cross-sectional imaging methods deliver internal medical findings of the body. These 3D data are fused into a whole-body model of the deceased. In addition to the findings from the bodies, the injury-inflicting instruments and the incident scene are documented in 3D. The 3D data of the incident scene, generated by 3D laser scanning and photogrammetry, are also included in the reconstruction. Two cases illustrate the methods. In the first case, a man was shot in his bedroom, and the main question was whether the offender shot the man intentionally or accidentally, as he declared. In the second case, a woman was hit by a car driving backwards into a garage. It was unclear whether the driver drove backwards once or twice; the latter would indicate that he willingly injured and killed the woman. With this work, we demonstrate how 3D documentation, data merging and animation make it possible to answer reconstructive questions regarding the dynamic development of patterned injuries, and how this leads to a reconstruction of the course of events based on real data.
NASA Astrophysics Data System (ADS)
Zhang, Yi; Zhang, Xiao-Dong; Wang, Wen-Xin; Yang, He-Run; Yang, Zheng-Cai; Hu, Bi-Tao
2009-01-01
In this paper, a micromegas detector with a two-dimensional readout and a polyethylene foil converter was simulated with the GEANT4 toolkit and GARFIELD for fast neutron detection. A new track reconstruction method based on time-coincidence technology was developed in the simulation to obtain the incident neutron position. The results showed that higher spatial resolution was achieved with this reconstruction method.
GENIUS: web server to predict local gene networks and key genes for biological functions.
Puelma, Tomas; Araus, Viviana; Canales, Javier; Vidal, Elena A; Cabello, Juan M; Soto, Alvaro; Gutiérrez, Rodrigo A
2017-03-01
GENIUS is a user-friendly web server that uses a novel machine learning algorithm to infer functional gene networks focused on specific genes and experimental conditions that are relevant to biological functions of interest. These functions may have different levels of complexity, from specific biological processes to complex traits that involve several interacting processes. GENIUS also enriches the network with new genes related to the biological function of interest, with accuracies comparable to highly discriminative Support Vector Machine methods. GENIUS currently supports eight model organisms and is freely available for public use at http://networks.bio.puc.cl/genius. Contact: genius.psbl@gmail.com. Supplementary data are available at Bioinformatics online.
An infrared image super-resolution reconstruction method based on compressive sensing
NASA Astrophysics Data System (ADS)
Mao, Yuxing; Wang, Yan; Zhou, Jintao; Jia, Haiwei
2016-05-01
Limited by the properties of infrared detectors and camera lenses, infrared images often lack detail and appear indistinct. The spatial resolution needs to be improved to satisfy the requirements of practical applications. Based on compressive sensing (CS) theory, this paper presents a single-image super-resolution reconstruction (SRR) method. By jointly adopting an image degradation model, a difference-operation-based sparse transformation method and the orthogonal matching pursuit (OMP) algorithm, the image SRR problem is transformed into a sparse signal reconstruction problem in CS theory. In our work, the sparse transformation matrix is obtained by applying a difference operation to the image, and the measurement matrix is derived analytically from the imaging principle of the infrared camera. Therefore, the time consumption can be decreased compared with a redundant dictionary obtained by sample training, such as K-SVD. The experimental results show that our method achieves favorable performance and good stability with low algorithmic complexity.
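The sparse-recovery step named in this abstract, orthogonal matching pursuit, can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' infrared pipeline; the sensing matrix, sparsity level, and signal are arbitrary stand-ins.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x with y ~ A @ x.
    Assumes the columns of A are (approximately) unit-norm."""
    residual = y.copy()
    support = []
    x_hat = np.zeros(A.shape[1])
    coef = np.zeros(0)
    for _ in range(k):
        # Select the column most correlated with the current residual.
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares fit on the current support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat[support] = coef
    return x_hat

# Recover a 3-sparse signal from 40 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 80))
A /= np.linalg.norm(A, axis=0)          # normalize columns
x = np.zeros(80)
x[[5, 17, 60]] = [1.5, -2.0, 0.7]
y = A @ x
x_hat = omp(A, y, k=3)
```

With well-conditioned Gaussian measurements and low sparsity, OMP typically recovers the support exactly, which is why greedy pursuit is attractive when reconstruction speed matters.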
Method to reconstruct neuronal action potential train from two-photon calcium imaging
NASA Astrophysics Data System (ADS)
Quan, Tingwei; Liu, Xiuli; Lv, Xiaohua; Chen, Wei R.; Zeng, Shaoqun
2010-11-01
Identification of action potential (AP) firing in a small population of neurons is considered essential to discovering the operating principles of neuronal circuits. A promising method is to indirectly monitor the AP discharges of neurons from recordings of their intracellular calcium fluorescence transients. However, it is hard to reveal the nonlinear relationship between neuronal calcium fluorescence transients and the corresponding AP burst discharges. We propose a method to reconstruct the neuronal AP train from calcium fluorescence variations based on a multiscale filter and a convolution operation. Results of experimental data processing show that the false-positive rate and the event detection rate are about 10 and 90%, respectively. Meanwhile, APs firing at frequencies up to 40 Hz can also be successfully identified. From these results, it can be concluded that the method is effective in reconstructing a neuronal AP train from burst firing.
Zhang, Haiyan; Zhang, Liyi; Sun, Yunshan; Zhang, Jingyu
2015-01-01
Reducing the X-ray tube current is one of the widely used methods for decreasing the radiation dose. Unfortunately, the signal-to-noise ratio (SNR) of the projection data degrades simultaneously. To improve the quality of reconstructed images, a dictionary-learning-based penalized weighted least-squares (PWLS) approach is proposed for sinogram denoising. The weighted least-squares term accounts for the statistical characteristics of the noise, and the penalty models the sparsity of the sinogram based on dictionary learning. The CT image is then reconstructed from the denoised sinogram using the filtered back-projection (FBP) algorithm. The proposed method is particularly suitable for projection data with low SNR. Experimental results show that the proposed method can produce high-quality CT images even when the SNR of the projection data declines sharply.
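The PWLS idea can be illustrated with a much-simplified sketch: a first-difference quadratic penalty stands in for the learned-dictionary sparsity penalty used in the paper, and a constant statistical weight stands in for the count-dependent noise variance, so this is only the shape of the objective, not the authors' method.

```python
import numpy as np

def pwls_denoise(y, weights, beta=5.0, n_iter=300, step=0.02):
    """Penalized weighted least-squares denoising of one sinogram row.

    Minimizes  sum_i w_i (s_i - y_i)^2 + beta * sum_i (s_{i+1} - s_i)^2
    by gradient descent. (The paper uses a dictionary-learning sparsity
    penalty; a quadratic smoothness penalty stands in for it here.)"""
    s = y.copy()
    for _ in range(n_iter):
        grad_fid = 2.0 * weights * (s - y)      # data-fidelity gradient
        d = np.diff(s)
        grad_pen = np.zeros_like(s)
        grad_pen[:-1] -= 2.0 * d                # smoothness-penalty gradient
        grad_pen[1:] += 2.0 * d
        s -= step * (grad_fid + beta * grad_pen)
    return s

rng = np.random.default_rng(1)
clean = np.sin(np.linspace(0, np.pi, 128))      # toy noiseless profile
noisy = clean + 0.1 * rng.standard_normal(128)
# In real PWLS, higher photon counts give higher weights; constant here.
denoised = pwls_denoise(noisy, weights=np.ones(128))
```

The weighting is what distinguishes PWLS from plain least squares: detector bins with more photons (higher SNR) are trusted more in the fidelity term.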
Suzuki, S; Arai, H
1990-04-01
In single-photon emission computed tomography (SPECT) and X-ray CT, the one-dimensional (1-D) convolution method is used for image reconstruction from projections. The method applies a 1-D convolution filter to the projection data in the space domain and back-projects the filtered data for reconstruction. Images can also be reconstructed by first forming the 2-D backprojection images from the projections and then convolving them with a 2-D space-domain filter. This is reconstruction by the 2-D convolution method, and its reconstruction process is the reverse of the 1-D convolution method's. Since the 2-D convolution method is slower in reconstruction than the 1-D convolution method, it has had no practical use. In actual reconstruction by the 2-D convolution method, convolution is performed on a finite plane called the convolution window. A convolution window of size N X N needs a 2-D discrete filter of the same size. If good reconstructions can be achieved with small convolution windows, the reconstruction time of the 2-D convolution method can be reduced. For this purpose, 2-D filters of a simple functional form are proposed which give good reconstructions with small convolution windows. Here they are defined on a finite plane, depending on the window size used, although a filter function is usually defined on the infinite plane. They are, however, set so that they better approximate the properties of a 2-D filter function defined on the infinite plane. Filters of size N X N are thus determined; their values vary with the window size. The filters are applied to image reconstructions in SPECT. (ABSTRACT TRUNCATED AT 250 WORDS)
Wisdom of crowds for robust gene network inference
Marbach, Daniel; Costello, James C.; Küffner, Robert; Vega, Nicci; Prill, Robert J.; Camacho, Diogo M.; Allison, Kyle R.; Kellis, Manolis; Collins, James J.; Stolovitzky, Gustavo
2012-01-01
Reconstructing gene regulatory networks from high-throughput data is a long-standing problem. Through the DREAM project (Dialogue on Reverse Engineering Assessment and Methods), we performed a comprehensive blind assessment of over thirty network inference methods on Escherichia coli, Staphylococcus aureus, Saccharomyces cerevisiae, and in silico microarray data. We characterize performance, data requirements, and inherent biases of different inference approaches offering guidelines for both algorithm application and development. We observe that no single inference method performs optimally across all datasets. In contrast, integration of predictions from multiple inference methods shows robust and high performance across diverse datasets. Thereby, we construct high-confidence networks for E. coli and S. aureus, each comprising ~1700 transcriptional interactions at an estimated precision of 50%. We experimentally test 53 novel interactions in E. coli, of which 23 were supported (43%). Our results establish community-based methods as a powerful and robust tool for the inference of transcriptional gene regulatory networks. PMID:22796662
Karnowski, Thomas Paul; Tobin Jr, Kenneth William; Chaum, Edward; Muthusamy Govindasamy, Vijaya Priya
2009-09-01
In this work we report on a method for lesion segmentation based on the morphological reconstruction methods of Sbeh et al. We adapt the method to include segmentation of dark lesions with a given vasculature segmentation. The segmentation is performed at a variety of scales determined using ground-truth data. Since the method tends to over-segment imagery, ground-truth data were used to create post-processing filters that separate nuisance blobs from true lesions. A sensitivity and specificity of 90% for the classification of blobs into nuisance blobs and actual lesions were achieved on two data sets of 86 and 1296 images.
Karnowski, Thomas P; Govindasamy, V; Tobin, Kenneth W; Chaum, Edward; Abramoff, M D
2008-01-01
In this work we report on a method for lesion segmentation based on the morphological reconstruction methods of Sbeh et al. We adapt the method to include segmentation of dark lesions with a given vasculature segmentation. The segmentation is performed at a variety of scales determined using ground-truth data. Since the method tends to over-segment imagery, ground-truth data were used to create post-processing filters that separate nuisance blobs from true lesions. A sensitivity and specificity of 90% for the classification of blobs into nuisance blobs and actual lesions were achieved on two data sets of 86 and 1296 images.
Hirose, Osamu; Yoshida, Ryo; Imoto, Seiya; Yamaguchi, Rui; Higuchi, Tomoyuki; Charnock-Jones, D Stephen; Print, Cristin; Miyano, Satoru
2008-04-01
Statistical inference of gene networks by using time-course microarray gene expression profiles is an essential step towards understanding the temporal structure of gene regulatory mechanisms. Unfortunately, most of the current studies have been limited to analysing a small number of genes because the length of time-course gene expression profiles is fairly short. One promising approach to overcome such a limitation is to infer gene networks by exploring the potential transcriptional modules which are sets of genes sharing a common function or involved in the same pathway. In this article, we present a novel approach based on the state space model to identify the transcriptional modules and module-based gene networks simultaneously. The state space model has the potential to infer large-scale gene networks, e.g. of order 10^3, from time-course gene expression profiles. Particularly, we succeeded in the identification of a cell cycle system by using the gene expression profiles of Saccharomyces cerevisiae in which the length of the time-course and number of genes were 24 and 4382, respectively. However, when analysing shorter time-course data, e.g. of length 10 or less, the parameter estimations of the state space model often fail due to overfitting. To extend the applicability of the state space model, we provide an approach to use the technical replicates of gene expression profiles, which are often measured in duplicate or triplicate. The use of technical replicates is important for achieving highly-efficient inferences of gene networks with short time-course data. The potential of the proposed method has been demonstrated through the time-course analysis of the gene expression profiles of human umbilical vein endothelial cells (HUVECs) undergoing growth factor deprivation-induced apoptosis. Supplementary Information and the software (TRANS-MNET) are available at http://daweb.ism.ac.jp/~yoshidar/software/ssm/.
NASA Astrophysics Data System (ADS)
Zhang, Yue; Sun, Xian; Thiele, Antje; Hinz, Stefan
2015-10-01
Synthetic aperture radar (SAR) systems, such as TanDEM-X, TerraSAR-X and Cosmo-SkyMed, acquire imagery with high spatial resolution (HR), making it possible to observe objects in urban areas in high detail. In this paper, we propose a new top-down framework for three-dimensional (3D) building reconstruction from HR interferometric SAR (InSAR) data. Unlike most previously proposed methods, we adopt a generative model and carry out the reconstruction by maximum a posteriori (MAP) estimation through Monte Carlo methods. The reason for this strategy is that the noisiness of SAR images calls for a thorough prior model to better cope with the inherent amplitude and phase fluctuations. In the reconstruction process, according to the radar configuration and the building geometry, a 3D building hypothesis is mapped to the SAR image plane and decomposed into feature regions such as layover, corner line, and shadow. Then, the statistical properties of the intensity, interferometric phase and coherence of each region are explored respectively and included as region terms. Roofs are not considered directly, as in most cases they are merged with the walls into the layover area. When estimating the similarity between the building hypothesis and the real data, the prior and the region terms are taken into consideration, together with an edge term related to the contours of the layover and corner line. In the optimization step, special transition kernels are designed in order to achieve convergent reconstruction outputs and avoid local extrema. The proposed framework is evaluated on the TanDEM-X dataset and performs well for building reconstruction.
The trap door flap: a reliable, reproducible method of anterior pinna reconstruction.
McInerney, N M; Piggott, R P; Regan, P J
2013-10-01
Resection of skin cancers of the conchal fossa and anti-helical rim presents a challenging reconstructive problem. A full-thickness skin graft is often used following excision of the cartilage underlying the lesion. Colour mismatch, a contour defect and a donor-site scar are potential drawbacks of this method of reconstruction. The postauricular trap door flap offers a superior option for these defects. This study aims to assess the reliability and outcomes of the trap door flap for defects of the anterior surface of the pinna. A retrospective review was performed of all trap door flaps carried out in Galway University Hospital. Charts were reviewed in order to examine operative notes and to assess complications and length of follow-up. 45 patients were operated on by a single surgeon. The age range was 61-93 years. The majority of lesions excised were from the conchal area, with 6 defects predominantly involving the scapha. No partial or complete flap loss occurred. 2 patients required further excision, due to an incomplete margin and a local recurrence respectively. Follow-up ranged from 3 months to 4 years, and excellent cosmetic results were achieved in all cases, with no scar issues at the flap or donor sites. The trap door flap is an excellent method of conchal reconstruction. It is reliable and reproducible, with no flap loss demonstrated in our series of 45 patients. Large defects can be reconstructed with this flap, and the cosmetic result in terms of colour and contour, as well as a hidden donor-site scar, make this a superior option to a full-thickness skin graft. Copyright © 2013 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, Gang; Zou, Jiangwei; Xu, Shiyou; Tian, Biao; Chen, Zengping
2014-10-01
In this paper, the effect of orbital motion on the trajectories of scattering centers is analyzed and introduced as a constraint for scattering-center association. A screening method for feature points is presented to analyze the false points in the reconstructed result and the incorrect associations that lead to these false points. Loop iteration between the 3D reconstruction and the association result further improves the precision of the final reconstructed result. Simulation data show the validity of the algorithm.
A novel calibration method of focused light field camera for 3-D reconstruction of flame temperature
NASA Astrophysics Data System (ADS)
Sun, Jun; Hossain, Md. Moinul; Xu, Chuan-Long; Zhang, Biao; Wang, Shi-Min
2017-05-01
This paper presents a novel geometric calibration method for a focused light field camera to trace the rays of flame radiance and to reconstruct the three-dimensional (3-D) temperature distribution of a flame. A calibration model is developed to calculate the corner points and their projections for the focused light field camera. The matching of the main-lens and microlens f-numbers is used as an additional constraint for the calibration. The geometric parameters of the focused light field camera are then obtained using the Levenberg-Marquardt algorithm. Totally focused images, in which all points are in focus, are utilized to validate the proposed calibration method. Calibration results are presented and discussed in detail. The maximum mean relative error of the calibration is found to be less than 0.13%, indicating that the proposed method is capable of calibrating the focused light field camera successfully. The parameters obtained by the calibration are then utilized to trace the rays of flame radiance. A least-squares QR-factorization algorithm with Planck's radiation law is used to reconstruct the 3-D temperature distribution of a flame. Experiments were carried out on an ethylene-air fired combustion test rig to reconstruct the temperature distribution of flames. The flame temperature obtained by the proposed method was then compared with that obtained using a high-precision thermocouple. The difference between the two measurements was found to be no greater than 6.7%. Experimental results demonstrated that the proposed calibration method and the applied measurement technique perform well in the reconstruction of the flame temperature.
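The final inversion step, solving a ray-integral system for the emission (and hence temperature) field in a least-squares sense, can be sketched with SciPy's LSQR on a toy system. The matrix here is a random stand-in; a real system would hold the path-length weights from rays traced through the calibrated light field camera.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# Toy ray-integration model: each row of A integrates emission along one
# traced ray; b holds the corresponding radiance measurements.
rng = np.random.default_rng(3)
n_rays, n_voxels = 200, 50
A = rng.standard_normal((n_rays, n_voxels))   # stand-in for path-length weights
emission_true = 1.0 + rng.random(n_voxels)    # stand-in emission field
b = A @ emission_true

# Solve the overdetermined system in the least-squares sense, as an
# LSQR/QR-factorization approach does.
emission_hat = lsqr(A, b, atol=1e-12, btol=1e-12)[0]
```

Because there are more rays than voxels, the system is overdetermined and the least-squares solution averages out per-ray measurement noise.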
Does thorax EIT image analysis depend on the image reconstruction method?
NASA Astrophysics Data System (ADS)
Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut
2013-04-01
Different methods have been proposed to analyze the resulting images of electrical impedance tomography (EIT) measurements during ventilation. The aim of our study was to examine whether analysis methods based on back-projection deliver the same results when applied to images based on other reconstruction algorithms. Seven mechanically ventilated patients with ARDS were examined by EIT. The thorax contours were determined from routine CT images. EIT raw data were reconstructed offline with (1) filtered back-projection with a circular forward model (BPC); (2) the GREIT reconstruction method with a circular forward model (GREITC) and (3) GREIT with individual thorax geometry (GREITT). Three parameters were calculated on the resulting images: linearity, global ventilation distribution and regional ventilation distribution. The results of the linearity test are 5.03±2.45, 4.66±2.25 and 5.32±2.30 for BPC, GREITC and GREITT, respectively (median ± interquartile range). The differences among the three methods are not significant (p = 0.93, Kruskal-Wallis test). The proportions of ventilation in the right lung are 0.58±0.17, 0.59±0.20 and 0.59±0.25 for BPC, GREITC and GREITT, respectively (p = 0.98). The differences in the GI index based on the different reconstruction methods (0.53±0.16, 0.51±0.25 and 0.54±0.16 for BPC, GREITC and GREITT, respectively) are also not significant (p = 0.93). We conclude that the parameters developed for images generated with GREITT are comparable with those for filtered back-projection and GREITC.
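The GI (global inhomogeneity) index compared in this study has a compact definition: the summed absolute deviation of the pixel-wise tidal impedance variation from its median, normalized by the total variation, over lung-region pixels. A minimal sketch (the mask geometry here is illustrative only):

```python
import numpy as np

def gi_index(tidal_image, lung_mask):
    """Global inhomogeneity (GI) index of an EIT tidal image:
    GI = sum(|DV - median(DV)|) / sum(DV) over lung-region pixels,
    where DV is the pixel-wise tidal impedance variation."""
    dv = tidal_image[lung_mask]
    return float(np.sum(np.abs(dv - np.median(dv))) / np.sum(dv))

# A perfectly homogeneous ventilation map has GI = 0.
img = np.ones((32, 32))
mask = np.zeros((32, 32), dtype=bool)
mask[8:24, 4:28] = True          # illustrative lung region
print(gi_index(img, mask))       # 0.0
```

Larger GI values indicate more inhomogeneous ventilation, which is why the index is sensitive to the reconstruction algorithm's spatial fidelity.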
Bernstein, Ally Leigh; Dhanantwari, Amar; Jurcova, Martina; Cheheltani, Rabee; Naha, Pratap Chandra; Ivanc, Thomas; Shefer, Efrat; Cormode, David Peter
2016-01-01
Computed tomography is a widely used medical imaging technique that has high spatial and temporal resolution. Its weakness is its low sensitivity towards contrast media. Iterative reconstruction techniques (ITER) have recently become available, which provide reduced image noise compared with traditional filtered back-projection methods (FBP) and may therefore allow the sensitivity of CT to be improved; however, this effect has not been studied in detail. We scanned phantoms containing either an iodine contrast agent or gold nanoparticles. We used a range of tube voltages and currents. We performed reconstruction with FBP, ITER and a novel iterative model-based reconstruction (IMR) algorithm. We found that noise decreased in an algorithm-dependent manner (FBP > ITER > IMR) for every scan and that no differences were observed in the attenuation rates of the agents. The contrast-to-noise ratio (CNR) of iodine was highest at 80 kV, whilst the CNR for gold was highest at 140 kV. The CNR of IMR images was almost tenfold higher than that of FBP images. Similar trends were found in dual energy images formed using these algorithms. In conclusion, IMR-based reconstruction techniques will allow contrast agents to be detected with greater sensitivity, and may allow lower contrast agent doses to be used. PMID:27185492
NASA Astrophysics Data System (ADS)
Bernstein, Ally Leigh; Dhanantwari, Amar; Jurcova, Martina; Cheheltani, Rabee; Naha, Pratap Chandra; Ivanc, Thomas; Shefer, Efrat; Cormode, David Peter
2016-05-01
Computed tomography is a widely used medical imaging technique that has high spatial and temporal resolution. Its weakness is its low sensitivity towards contrast media. Iterative reconstruction techniques (ITER) have recently become available, which provide reduced image noise compared with traditional filtered back-projection methods (FBP) and may therefore allow the sensitivity of CT to be improved; however, this effect has not been studied in detail. We scanned phantoms containing either an iodine contrast agent or gold nanoparticles. We used a range of tube voltages and currents. We performed reconstruction with FBP, ITER and a novel iterative model-based reconstruction (IMR) algorithm. We found that noise decreased in an algorithm-dependent manner (FBP > ITER > IMR) for every scan and that no differences were observed in the attenuation rates of the agents. The contrast-to-noise ratio (CNR) of iodine was highest at 80 kV, whilst the CNR for gold was highest at 140 kV. The CNR of IMR images was almost tenfold higher than that of FBP images. Similar trends were found in dual energy images formed using these algorithms. In conclusion, IMR-based reconstruction techniques will allow contrast agents to be detected with greater sensitivity, and may allow lower contrast agent doses to be used.
A comparison of reconstruction methods for undersampled atomic force microscopy images.
Luo, Yufan; Andersson, Sean B
2015-12-18
Non-raster scanning and undersampling of atomic force microscopy (AFM) images is a technique for improving imaging rate and reducing the amount of tip-sample interaction needed to produce an image. Generation of the final image can be done using a variety of image processing techniques based on interpolation or optimization. The choice of reconstruction method has a large impact on the quality of the recovered image and the proper choice depends on the sample under study. In this work we compare interpolation through the use of inpainting algorithms with reconstruction based on optimization through the use of the basis pursuit algorithm commonly used for signal recovery in compressive sensing. Using four different sampling patterns found in non-raster AFM, namely row subsampling, spiral scanning, Lissajous scanning, and random scanning, we subsample data from existing images and compare reconstruction performance against the original image. The results illustrate that inpainting generally produces superior results when the image contains primarily low frequency content while basis pursuit is better when the images have mixed, but sparse, frequency content. Using support vector machines, we then classify images based on their frequency content and sparsity and, from this classification, develop a fast decision strategy to select a reconstruction algorithm to be used on subsampled data. The performance of the classification and decision test are demonstrated on test AFM images.
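The interpolation-based branch of this comparison can be sketched with a simple gridded inpainting: reconstruct an image from a random subset of pixels by linear interpolation, with a nearest-neighbour fallback outside the convex hull of the samples. This is an illustrative stand-in, not the specific inpainting algorithms benchmarked in the paper.

```python
import numpy as np
from scipy.interpolate import griddata

def inpaint_from_samples(shape, sample_idx, sample_vals):
    """Fill a full image grid from scattered (row, col) samples by linear
    interpolation, using nearest-neighbour values where linear gives NaN."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    grid = (yy, xx)
    img = griddata(sample_idx, sample_vals, grid, method="linear")
    holes = np.isnan(img)   # pixels outside the convex hull of the samples
    img[holes] = griddata(sample_idx, sample_vals, grid, method="nearest")[holes]
    return img

# Subsample a smooth "AFM-like" surface at ~40% of pixels and reconstruct.
rng = np.random.default_rng(2)
n = 48
yy, xx = np.mgrid[0:n, 0:n]
truth = np.sin(2 * np.pi * xx / n) * np.cos(2 * np.pi * yy / n)
keep = rng.random((n, n)) < 0.4
pts = np.argwhere(keep)                   # (row, col) sample coordinates
recon = inpaint_from_samples((n, n), pts, truth[keep])
```

For this smooth, low-frequency surface the interpolation error is small, consistent with the paper's finding that inpainting-style reconstruction suits low-frequency content.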
A comparison of reconstruction methods for undersampled atomic force microscopy images
NASA Astrophysics Data System (ADS)
Luo, Yufan; Andersson, Sean B.
2015-12-01
Non-raster scanning and undersampling of atomic force microscopy (AFM) images is a technique for improving imaging rate and reducing the amount of tip-sample interaction needed to produce an image. Generation of the final image can be done using a variety of image processing techniques based on interpolation or optimization. The choice of reconstruction method has a large impact on the quality of the recovered image and the proper choice depends on the sample under study. In this work we compare interpolation through the use of inpainting algorithms with reconstruction based on optimization through the use of the basis pursuit algorithm commonly used for signal recovery in compressive sensing. Using four different sampling patterns found in non-raster AFM, namely row subsampling, spiral scanning, Lissajous scanning, and random scanning, we subsample data from existing images and compare reconstruction performance against the original image. The results illustrate that inpainting generally produces superior results when the image contains primarily low frequency content while basis pursuit is better when the images have mixed, but sparse, frequency content. Using support vector machines, we then classify images based on their frequency content and sparsity and, from this classification, develop a fast decision strategy to select a reconstruction algorithm to be used on subsampled data. The performance of the classification and decision test are demonstrated on test AFM images.
NASA Astrophysics Data System (ADS)
Preza, Chrysanthe; Miller, Michael I.; Conchello, Jose-Angel
1993-07-01
We have shown that the linear least-squares (LLS) estimate of the intensities of a 3-D object obtained from a set of optical sections is unstable due to the inversion of small and zero-valued eigenvalues of the point-spread function (PSF) operator. The LLS solution was regularized by constraining it to lie in a subspace spanned by the eigenvectors corresponding to a selected number of the largest eigenvalues. In this paper we extend the regularized LLS solution to a maximum a posteriori (MAP) solution induced by a prior formed from a 'Good's-like' smoothness penalty. This approach also yields a regularized linear estimator which reduces noise as well as edge artifacts in the reconstruction. The advantage of the linear MAP (LMAP) estimate over the current regularized LLS (RLLS) is its ability to regularize the inverse problem by smoothly penalizing components in the image associated with small eigenvalues. Computer simulations were performed using a theoretical PSF and a simple phantom to compare the two regularization techniques. It is shown that the reconstructions using the smoothness prior give superior variance and bias results compared to the RLLS reconstructions. Encouraging reconstructions obtained with the LMAP method from real microscopical images of a 10-micrometer fluorescent bead and a four-cell Volvox embryo are shown.
On reconstruction of acoustic pressure fields using the Helmholtz equation least squares method
Wu
2000-05-01
This paper presents analyses and implementation of the reconstruction of acoustic pressure fields radiated from a general, three-dimensional complex vibrating structure using the Helmholtz equation least-squares (HELS) method. The structure under consideration emulates a full-size four-cylinder engine. To simulate sound radiation from a vibrating structure, harmonic excitations are assumed to act on arbitrarily selected surfaces. The resulting vibration responses are solved by the commercial FEM (finite element method) software I-DEAS. Once the normal component of the surface velocity distribution is determined, the surface acoustic pressures are calculated using standard boundary element method (BEM) codes. The radiated acoustic pressures over several planar surfaces at certain distances from the source are calculated by the Helmholtz integral formulation. These field pressures are taken as the input to the HELS formulation to reconstruct acoustic pressures on the entire source surface, as well as in the field. The reconstructed acoustic pressures thus obtained are then compared with benchmark values. Numerical results demonstrate that good agreements can be obtained with relatively few expansion functions. The HELS method is shown to be very effective in the low-to-mid frequency regime, and can potentially become a powerful noise diagnostic tool.
The method for the reconstruction of complex images of specimens using backscattered electrons.
Kaczmarek, Danuta; Domaradzki, Jaroslaw
2002-01-01
The backscattered electron signal (BSE) is widely used for investigation of specimen surfaces in a scanning electron microscope (SEM). The development of multiple detector systems for BSE signal detection and the methods of digital processing of these signals have allowed for reconstruction of the third dimension on the basis of the two-dimensional (2-D) SEM image. A technique for simultaneous mapping of material composition (COMPO mode) and reconstruction of surface topography (TOPO mode) has also been proposed. This method is based on the measurements of BSE currents sensed by four semiconductor detectors versus the inclination angle of surface. To improve the separation of topographic and material contrasts in SEM, a correction of the TOPO and COMPO modes (resulting from a theoretical description of the system: electron beam, specimen, and detector) was applied. The proposed method can be used for a correct reconstruction of the surface image when the surface slope is <60 degrees. The measuring limit of the slope was closely connected with the detector setup. Next, the digital simulation of the colors was performed (after application of the method of linearization of BSE characteristic versus atomic number). This procedure to increase the SEM resolution for the BSE signal by use of digital image processing allows for a better distinction between the two elements with high atomic numbers.
A Reconstruction Method of Blood Flow Velocity in Left Ventricle Using Color Flow Ultrasound
Jang, Jaeseong; Ahn, Chi Young; Jeon, Kiwan; Heo, Jung; Lee, DongHak; Choi, Jung-il
2015-01-01
Vortex flow imaging is a relatively new medical imaging method for the dynamic visualization of intracardiac blood flow, a potentially useful index of cardiac dysfunction. A reconstruction method is proposed here to quantify the distribution of blood flow velocity fields inside the left ventricle from color flow images compiled from ultrasound measurements. In this paper, a 2D incompressible Navier-Stokes equation with a mass source term is proposed to utilize the measurable color flow ultrasound data in a plane along with the moving boundary condition. The proposed model reflects out-of-plane blood flows on the imaging plane through the mass source term. The boundary conditions to solve the system of equations are derived from the dimensions of the ventricle extracted from 2D echocardiography data. The performance of the proposed method is evaluated numerically using synthetic flow data acquired from simulating left ventricle flows. The numerical simulations show the feasibility and potential usefulness of the proposed method of reconstructing the intracardiac flow fields. Of particular note is the finding that the mass source term in the proposed model improves the reconstruction performance. PMID:26078773
A Reconstruction Method of Blood Flow Velocity in Left Ventricle Using Color Flow Ultrasound.
Jang, Jaeseong; Ahn, Chi Young; Jeon, Kiwan; Heo, Jung; Lee, DongHak; Joo, Chulmin; Choi, Jung-il; Seo, Jin Keun
2015-01-01
Vortex flow imaging is a relatively new medical imaging method for the dynamic visualization of intracardiac blood flow, a potentially useful index of cardiac dysfunction. A reconstruction method is proposed here to quantify the distribution of blood flow velocity fields inside the left ventricle from color flow images compiled from ultrasound measurements. In this paper, a 2D incompressible Navier-Stokes equation with a mass source term is proposed to utilize the measurable color flow ultrasound data in a plane along with the moving boundary condition. The proposed model reflects out-of-plane blood flows on the imaging plane through the mass source term. The boundary conditions to solve the system of equations are derived from the dimensions of the ventricle extracted from 2D echocardiography data. The performance of the proposed method is evaluated numerically using synthetic flow data acquired from simulating left ventricle flows. The numerical simulations show the feasibility and potential usefulness of the proposed method of reconstructing the intracardiac flow fields. Of particular note is the finding that the mass source term in the proposed model improves the reconstruction performance.
SU-D-206-03: Segmentation Assisted Fast Iterative Reconstruction Method for Cone-Beam CT
Wu, P; Mao, T; Gong, S; Wang, J; Niu, T; Sheng, K; Xie, Y
2016-06-15
Purpose: Total variation (TV) based iterative reconstruction (IR) methods enable accurate CT image reconstruction from low-dose measurements with sparse projection acquisition, owing to the sparsifiability of most CT images under the gradient operator. However, conventional solutions require a large number of iterations to generate a decent reconstructed image. One major reason is that the expected piecewise-constant property is not taken into consideration at the optimization starting point. In this work, we propose an iterative reconstruction method for cone-beam CT (CBCT) that uses image segmentation to guide the optimization path more efficiently through the regularization term at the beginning of the optimization trajectory. Methods: Our method applies the general knowledge that one tissue component in a CT image contains a relatively uniform distribution of CT numbers. This knowledge is incorporated into the proposed reconstruction by using an image segmentation technique to generate a piecewise-constant template from the first-pass, low-quality CT image reconstructed with an analytical algorithm. The template image is supplied as the initial value to the optimization process. Results: The proposed method is evaluated on the Shepp-Logan phantom at low and high noise levels, and on a head patient. The number of iterations is reduced by 40% overall. Moreover, our proposed method tends to generate a smoother reconstructed image with the same TV value. Conclusion: We propose a computationally efficient iterative reconstruction method for CBCT imaging. Our method achieves a better optimization trajectory and faster convergence behavior. It does not rely on prior information and can be readily incorporated into existing iterative reconstruction frameworks. Our method is thus practical and attractive as a general solution to CBCT iterative reconstruction. This work is supported by the Zhejiang Provincial Natural Science Foundation of China (Grant No. LR16F010001), National High-tech R
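As a toy illustration of segmentation-guided initialization for TV-regularized reconstruction (a 1D denoising analogue, not the authors' CBCT pipeline; the class count, step size, and weight are arbitrary choices):

```python
import numpy as np

def tv_subgrad(x):
    """Subgradient of the 1D total variation sum_i |x[i+1] - x[i]|."""
    g = np.zeros_like(x)
    d = np.sign(np.diff(x))
    g[:-1] -= d
    g[1:] += d
    return g

def segment_template(first_pass, n_classes=2, iters=20):
    """Crude k-means segmentation of a first-pass image into a
    piecewise-constant template (one value per tissue class)."""
    centers = np.linspace(first_pass.min(), first_pass.max(), n_classes)
    for _ in range(iters):
        labels = np.argmin(np.abs(first_pass[:, None] - centers[None, :]), axis=1)
        for k in range(n_classes):
            if np.any(labels == k):
                centers[k] = first_pass[labels == k].mean()
    return centers[labels]

def tv_reconstruct(y, x0, lam=0.5, step=0.1, n_iter=200):
    """Subgradient descent on 0.5*||x - y||^2 + lam*TV(x), started from x0."""
    x = x0.copy()
    for _ in range(n_iter):
        x = x - step * ((x - y) + lam * tv_subgrad(x))
    return x

rng = np.random.default_rng(0)
truth = np.repeat([0.0, 1.0, 0.0], 40)             # piecewise-constant "tissue"
y = truth + 0.2 * rng.standard_normal(truth.size)  # noisy first-pass image
x0 = segment_template(y)                           # segmentation-guided start
x = tv_reconstruct(y, x0)
print(np.abs(x - truth).mean() < np.abs(y - truth).mean())
```

Starting from the piecewise-constant template rather than from zeros is the essence of the idea: the optimizer then mostly polishes edges instead of flattening noise from scratch.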
The Nagoya cosmic-ray muon spectrometer 3, part 4: Track reconstruction method
NASA Technical Reports Server (NTRS)
Shibata, S.; Kamiya, Y.; Iijima, K.; Iida, S.
1985-01-01
One of the greatest problems in measuring particle trajectories with an optical or visual detector system is the reconstruction of trajectories in real space from their recorded images. In the Nagoya cosmic-ray muon spectrometer, muon tracks are detected by wide-gap spark chambers and their images are recorded on photographic film through an optical system of 10 mirrors and two cameras. For the spatial reconstruction, 42 parameters of the optical system should be known to determine the configuration of this system. It is almost impossible to measure this many parameters directly with the usual techniques. To solve this problem, the inverse transformation method was applied. In this method, all the optical parameters are determined from the locations of fiducial marks in real space and the locations of their images on the photographic film by nonlinear least-squares fitting.
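As a rough sketch of this kind of calibration (a hypothetical 4-parameter mapping, not the spectrometer's actual 42-parameter optical model), parameters can be recovered from fiducial marks and their recorded images by Gauss-Newton nonlinear least squares:

```python
import numpy as np

def model(p, X):
    """Toy optical mapping: real-space fiducials X (N,2) -> film coords,
    with scale s, rotation theta, and translation (tx, ty)."""
    s, th, tx, ty = p
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    return s * X @ R.T + np.array([tx, ty])

def fit_parameters(X, U, p0, n_iter=50, eps=1e-6):
    """Gauss-Newton with a finite-difference Jacobian on r(p) = model(p,X) - U."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = (model(p, X) - U).ravel()
        J = np.empty((r.size, p.size))
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = eps
            J[:, j] = ((model(p + dp, X) - U).ravel() - r) / eps
        p = p - np.linalg.lstsq(J, r, rcond=None)[0]
    return p

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (6, 2))            # fiducial marks in real space
p_true = np.array([2.0, 0.3, 0.5, -0.2])
U = model(p_true, X)                      # their recorded film locations
p_hat = fit_parameters(X, U, p0=[1.0, 0.0, 0.0, 0.0])
print(p_hat)
```

The real system simply has a longer parameter vector and a more involved `model` (10 mirrors, two cameras), but the fitting structure is the same.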
A 3D terrain reconstruction method of stereo vision based quadruped robot navigation system
NASA Astrophysics Data System (ADS)
Ge, Zhuo; Zhu, Ying; Liang, Guanhao
2017-01-01
To provide 3D environment information for the quadruped robot autonomous navigation system while walking through rough terrain, a novel 3D terrain reconstruction method based on stereo vision is presented. To address the problems that images collected by stereo sensors contain large regions of similar grayscale and that image matching has poor real-time performance, the watershed algorithm and the fuzzy c-means clustering algorithm are combined for contour extraction. To address the problem of mismatches, a dual constraint combining region matching and pixel matching is established for matching optimization. Using the stereo-matched edge pixel pairs, the 3D coordinates are estimated according to the binocular stereo vision imaging model. Experimental results show that the proposed method yields a high stereo matching ratio and reconstructs 3D scenes quickly and efficiently.
NASA Astrophysics Data System (ADS)
Bouma, Henri; van der Mark, Wannes; Eendebak, Pieter T.; Landsmeer, Sander H.; van Eekeren, Adam W. M.; ter Haar, Frank B.; Wieringa, F. Pieter; van Basten, Jean-Paul
2012-06-01
Compared to open surgery, minimally invasive surgery offers reduced trauma and faster recovery. However, the lack of a direct view limits space perception. Stereo-endoscopy improves depth perception, but is still restricted to the direct endoscopic field of view. We describe a novel technology that reconstructs 3D panoramas from endoscopic video streams, providing a much wider cumulative overview. The method is compatible with any endoscope. We demonstrate that it is possible to generate photorealistic 3D environments from mono- and stereoscopic endoscopy. The resulting 3D reconstructions can be directly applied in simulators and e-learning. Extended to real-time processing, the method looks promising for telesurgery and other remote vision-guided tasks.
Reconstruction of photonic crystal geometries using a reduced basis method for nonlinear outputs
NASA Astrophysics Data System (ADS)
Hammerschmidt, Martin; Barth, Carlo; Pomplun, Jan; Burger, Sven; Becker, Christiane; Schmidt, Frank
2016-03-01
Maxwell solvers based on the hp-adaptive finite element method allow for accurate geometrical modeling and high numerical accuracy. These features are indispensable for the optimization of optical properties or the reconstruction of parameters through inverse processes. High computational complexity prohibits evaluating the solution for many parameters. We present a reduced basis method (RBM) for the time-harmonic electromagnetic scattering problem that allows solutions for a parameter configuration to be computed orders of magnitude faster. The RBM enables the evaluation of linear and nonlinear outputs of interest, such as Fourier transforms or the enhancement of the electromagnetic field, in milliseconds. We apply the RBM to compute light scattering off two-dimensional photonic crystal structures made of silicon and to reconstruct geometrical parameters.
NASA Astrophysics Data System (ADS)
Feng, Min-nan; Wang, Yu-cong; Wang, Hao; Liu, Guo-quan; Xue, Wei-hua
2017-03-01
Using a total of 297 segmented sections, we reconstructed the three-dimensional (3D) structure of pure iron and obtained the largest dataset of 16,254 complete 3D grains reported to date. The mean values of the equivalent sphere radius and face number of pure iron were observed to be consistent with those of Monte Carlo simulated grains, phase-field simulated grains, Ti-alloy grains, and Ni-based superalloy grains. In this work, by finding a balance between automatic methods and manual refinement, we developed an interactive segmentation method to segment serial sections accurately in the reconstruction of the 3D microstructure; this approach saves time as well as substantially eliminating errors. The segmentation process comprises four operations: image preprocessing, breakpoint detection based on mathematical morphology analysis, optimized automatic connection of the breakpoints, and manual refinement by artificial evaluation.
Ogawa, Takahiro; Haseyama, Miki
2013-03-01
A missing-texture reconstruction method based on an error reduction (ER) algorithm, including a novel estimation scheme for Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring errors converged in the ER algorithm, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. Then, the Fourier transform magnitude of the target patch is estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitudes and phases to reconstruct the missing areas.
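The core ER iteration (alternating between a known Fourier magnitude and known pixel values) can be sketched as follows; this is a generic illustration on a random image with the magnitude taken as known, not the authors' patch-selection scheme:

```python
import numpy as np

def er_inpaint(img, mask, mag, n_iter=500):
    """Error-reduction iteration: alternately impose the (estimated) Fourier
    magnitude and the known pixel intensities (mask==True is observed)."""
    x = img * mask                              # missing pixels start at 0
    for _ in range(n_iter):
        F = np.fft.fft2(x)
        F = mag * np.exp(1j * np.angle(F))      # keep phase, fix magnitude
        x = np.real(np.fft.ifft2(F))
        x[mask] = img[mask]                     # re-impose known intensities
    return x

rng = np.random.default_rng(2)
img = rng.random((16, 16))
mask = np.ones(img.shape, dtype=bool)
mask[6:9, 6:9] = False                          # a small missing patch
mag = np.abs(np.fft.fft2(img))                  # magnitude assumed known here
rec = er_inpaint(img, mask, mag)
print(np.abs(rec - img)[~mask].mean())
```

In the paper the magnitude itself must be estimated from similar known patches; here it is supplied directly to isolate the phase-retrieval step.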
NASA Technical Reports Server (NTRS)
Striepe, Scott A.; Blanchard, Robert C.; Kirsch, Michael F.; Fowler, Wallace T.
2007-01-01
On January 14, 2005, ESA's Huygens probe separated from NASA's Cassini spacecraft, entered the Titan atmosphere, and landed on its surface. As part of the NASA Engineering and Safety Center's independent technical assessment of the Huygens entry, descent, and landing (EDL), and under an agreement with ESA, NASA provided the results of all EDL analyses and associated findings to the Huygens project team prior to probe entry. In return, NASA was provided the flight data from the probe so that trajectory reconstruction could be performed and simulation models assessed. Trajectory reconstruction of the Huygens entry probe at Titan was accomplished using two independent approaches: a traditional method and a POST2-based method. Results from both approaches are discussed in this paper.
Wolen, Aaron R.; Phillips, Charles A.; Langston, Michael A.; Putman, Alex H.; Vorster, Paul J.; Bruce, Nathan A.; York, Timothy P.; Williams, Robert W.; Miles, Michael F.
2012-01-01
Background: Individual differences in initial sensitivity to ethanol are strongly related to the heritable risk of alcoholism in humans. To elucidate key molecular networks that modulate ethanol sensitivity we performed the first systems genetics analysis of ethanol-responsive gene expression in brain regions of the mesocorticolimbic reward circuit (prefrontal cortex, nucleus accumbens, and ventral midbrain) across a highly diverse family of 27 isogenic mouse strains (BXD panel) before and after treatment with ethanol. Results: Acute ethanol altered the expression of ∼2,750 genes in one or more regions and 400 transcripts were jointly modulated in all three. Ethanol-responsive gene networks were extracted with a powerful graph theoretical method that efficiently summarized ethanol's effects. These networks correlated with acute behavioral responses to ethanol and other drugs of abuse. As predicted, networks were heavily populated by genes controlling synaptic transmission and neuroplasticity. Several of the most densely interconnected network hubs, including Kcnma1 and Gsk3β, are known to influence behavioral or physiological responses to ethanol, validating our overall approach. Other major hub genes like Grm3, Pten and Nrg3 represent novel targets of ethanol effects. Networks were under strong genetic control by variants that we mapped to a small number of chromosomal loci. Using a novel combination of genetic, bioinformatic and network-based approaches, we identified high priority cis-regulatory candidate genes, including Scn1b, Gria1, Sncb and Nell2. Conclusions: The ethanol-responsive gene networks identified here represent a previously uncharacterized intermediate phenotype between DNA variation and ethanol sensitivity in mice. Networks involved in synaptic transmission were strongly regulated by ethanol and could contribute to behavioral plasticity seen with chronic ethanol. Our novel finding that hub genes and a small number of loci exert major influence
An Iterative Method for Improving the Quality of Reconstruction of a Three-Dimensional Surface
Vishnyakov, G.N.; Levin, G.G.; Sukhorukov, K.A.
2005-12-15
A complex image with constraints imposed on the amplitude and phase image components is processed using the Gerchberg iterative algorithm for the first time. The use of the Gerchberg iterative algorithm makes it possible to improve the quality of a three-dimensional surface profile reconstructed by the previously proposed method that is based on the multiangle projection of fringes and the joint processing of the obtained images by Fourier synthesis.
NASA Astrophysics Data System (ADS)
Guo, Wei; Jia, Kebin; Tian, Jie; Han, Dong; Liu, Xueyan; Wu, Ping; Feng, Jinchao; Yang, Xin
2012-03-01
Among the many molecular imaging modalities, bioluminescence tomography (BLT) is an important optical technique. Due to its unique advantages in specificity, sensitivity, cost-effectiveness and low background noise, BLT is widely studied for live small-animal imaging. Since only the photon distribution over the surface is measurable and photon propagation within biological tissue is highly diffusive, BLT is often an ill-posed problem and may yield multiple solutions and aberrant reconstructions in the presence of measurement noise and optical-parameter mismatches. In many practical BLT applications, such as early detection of tumors, the volumes of the light sources are very small compared with the whole body. Therefore, L1-norm sparsity regularization has been used to take advantage of this sparsity prior and alleviate the ill-posedness of the problem. The iterative shrinkage (IST) algorithm is an important achievement in the field of compressed sensing and is widely applied in sparse signal reconstruction. However, the convergence rate of the IST algorithm depends heavily on the linear operator; when the problem is ill-posed, it becomes very slow. In this paper, we present a sparsity-regularized reconstruction method for BLT based on a two-step iterated shrinkage approach. By employing the two-step strategy of iterative reweighted shrinkage (IRS) to improve IST, the proposed method shows a faster convergence rate and better adaptability for BLT. Simulation experiments with a mouse atlas were conducted to evaluate the performance of the proposed method. In comparison, the proposed method obtains a stable and comparable reconstruction solution in fewer iterations.
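For illustration, the plain IST update and a two-step variant can be sketched on a generic sparse-recovery problem. Note the acceleration below is FISTA-style momentum used as a stand-in for the paper's IRS scheme, and the operator and parameters are arbitrary:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal map of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ist(A, y, lam, n_iter=500):
    """Plain iterative shrinkage for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x - A.T @ (A @ x - y) / L, lam / L)
    return x

def two_step_ist(A, y, lam, n_iter=500):
    """Two-step variant: each update also reuses the previous iterate,
    which accelerates convergence on ill-conditioned operators."""
    L = np.linalg.norm(A, 2) ** 2
    x_old = x = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(n_iter):
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x + ((t - 1.0) / t_new) * (x - x_old)
        x_old, x = x, soft(z - A.T @ (A @ z - y) / L, lam / L)
        t = t_new
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 100))             # underdetermined operator
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]         # a sparse "source"
y = A @ x_true
x_hat = two_step_ist(A, y, lam=0.05)
print(sorted(np.argsort(-np.abs(x_hat))[:3].tolist()))
```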
NASA Astrophysics Data System (ADS)
Jang, Jaeseong; Ahn, Chi Young; Jeon, Kiwan; Choi, Jung-il; Lee, Changhoon; Seo, Jin Keun
2015-03-01
A reconstruction method is proposed here to quantify the distribution of blood flow velocity fields inside the left ventricle from color Doppler echocardiography measurements. From the 3D incompressible Navier-Stokes equation, a 2D incompressible Navier-Stokes equation with a mass source term is derived to utilize the measurable color flow ultrasound data in a plane along with the moving boundary condition. The proposed model reflects out-of-plane blood flows on the imaging plane through the mass source term. To demonstrate the feasibility of the proposed method, we performed numerical simulations of the forward problem and a numerical analysis of the reconstruction method. First, we constructed a 3D moving LV region having a specific stroke volume. To obtain synthetic intra-ventricular flows, we performed a numerical simulation of the forward problem of the Navier-Stokes equation inside the 3D moving LV, computed 3D intra-ventricular velocity fields as a solution of the forward problem, projected the 3D velocity fields onto the imaging plane, and took the inner product of the 2D velocity fields on the imaging plane with the scanline-directional velocity fields to obtain a synthetic scanline-directional projected velocity at each position. The proposed method utilized the 2D synthetic projected velocity data to reconstruct LV blood flow. By computing the difference between the synthetic and reconstructed flow fields, we obtained averaged point-wise errors of 0.06 m/s and 0.02 m/s for the u- and v-components, respectively.
Four-dimensional cone beam CT reconstruction and enhancement using a temporal nonlocal means method
Jia, Xun; Tian, Zhen; Lou, Yifei; Sonke, Jan-Jakob; Jiang, Steve B.
2012-01-01
Purpose: Four-dimensional cone beam computed tomography (4D-CBCT) has been developed to provide respiratory phase-resolved volumetric imaging in image guided radiation therapy. Conventionally, it is reconstructed by first sorting the x-ray projections into multiple respiratory phase bins according to a breathing signal extracted either from the projection images or from external surrogates, and then reconstructing a 3D CBCT image in each phase bin independently using the FDK algorithm. This method requires an adequate number of projections for each phase, which can be achieved using a low gantry rotation speed or multiple gantry rotations. An inadequate number of projections in each phase bin results in low-quality 4D-CBCT images with obvious streaking artifacts. 4D-CBCT images at different breathing phases share a great deal of redundant information, because they represent the same anatomy captured at slightly different temporal points. Taking this redundancy along the temporal dimension into account can in principle facilitate the reconstruction when the number of projection images is inadequate. In this work, the authors propose two novel 4D-CBCT algorithms utilizing a temporal nonlocal means (TNLM) method: an iterative reconstruction algorithm and an enhancement algorithm. Methods: The authors define a TNLM energy term for a given set of 4D-CBCT images. Minimization of this term favors those 4D-CBCT images in which any anatomical feature at one spatial point at one phase can be found at a nearby spatial point at neighboring phases. 4D-CBCT reconstruction is achieved by minimizing a total energy containing a data fidelity term and the TNLM energy term. As for the image enhancement, 4D-CBCT images generated by the FDK algorithm are enhanced by minimizing the TNLM function while keeping the enhanced images close to the FDK results. A forward-backward splitting algorithm and a Gauss-Jacobi iteration method are employed to solve the problems. The algorithms implementation
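The temporal nonlocal means idea (averaging each sample against similar patches in the neighbouring phases) can be illustrated with a toy 1D sketch; the patch size, search window, and filtering parameter below are arbitrary choices, not the authors':

```python
import numpy as np

def tnlm_filter(frames, half=1, h=0.2):
    """One pass of temporal nonlocal means over a stack of 1D 'phases':
    each sample becomes a patch-similarity-weighted average of samples
    in a small search window of the neighbouring phases."""
    T, N = frames.shape
    out = np.zeros_like(frames)
    for t in range(T):
        for i in range(N):
            wsum = vsum = 0.0
            for s in (t - 1, t + 1):              # neighbouring phases only
                if not 0 <= s < T:
                    continue
                for j in range(max(0, i - half - 2), min(N, i + half + 3)):
                    lo = max(-min(i, j), -half)
                    hi = min(N - 1 - max(i, j), half)
                    d = np.mean((frames[t, i + lo:i + hi + 1]
                                 - frames[s, j + lo:j + hi + 1]) ** 2)
                    w = np.exp(-d / (h * h))      # patch-similarity weight
                    wsum += w
                    vsum += w * frames[s, j]
            out[t, i] = vsum / wsum if wsum > 0 else frames[t, i]
    return out

rng = np.random.default_rng(6)
base = np.sin(np.linspace(0.0, 2.0 * np.pi, 64))
clean = np.stack([np.roll(base, k) for k in range(4)])  # anatomy drifts per phase
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
den = tnlm_filter(noisy)
print(np.abs(den - clean).mean() < np.abs(noisy - clean).mean())
```

The full method embeds a term of this form in an energy alongside data fidelity; this sketch shows only why temporal redundancy suppresses noise without requiring the anatomy to sit at exactly the same position in every phase.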
Multiscale AM-FM demodulation and image reconstruction methods with improved accuracy.
Murray, Victor; Rodriguez, Paul; Pattichis, Marios S
2010-05-01
We develop new multiscale amplitude-modulation frequency-modulation (AM-FM) demodulation methods for image processing. The approach is based on three basic ideas: (i) AM-FM demodulation using a new multiscale filterbank, (ii) new, accurate methods for instantaneous frequency (IF) estimation, and (iii) multiscale least-squares AM-FM reconstructions. In particular, we introduce a variable-spacing local linear phase (VS-LLP) method for improved IF estimation and compare it to an extended quasi-local method and the quasi-eigenfunction approximation (QEA). It turns out that the new VS-LLP method is a generalization of the QEA method in which we choose the best integer spacing between the samples, adapted as a function of frequency. We also introduce a new quasi-local method (QLM) for IF and IA estimation and discuss some of its advantages and limitations. The new IF estimation methods lead to significantly improved estimates. We present different multiscale decompositions to show that the proposed methods can be used to reconstruct and analyze general images.
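A minimal 1D sketch of AM-FM demodulation (via the FFT-based analytic signal, a generic approach rather than the VS-LLP method itself) recovers the instantaneous amplitude and frequency of a chirp:

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal (discrete Hilbert transform), even-length 1D."""
    N = x.size
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = h[N // 2] = 1.0
    h[1:N // 2] = 2.0
    return np.fft.ifft(X * h)

def am_fm_demodulate(x):
    """Instantaneous amplitude and frequency (rad/sample) from the phase."""
    z = analytic_signal(x)
    ia = np.abs(z)
    ifreq = np.gradient(np.unwrap(np.angle(z)))
    return ia, ifreq

n = np.arange(1024)
ia_true = 1.0 + 0.3 * np.cos(2.0 * np.pi * n / 1024)  # slow AM envelope
ifreq_true = 0.2 + 0.1 * n / 1024                     # linear FM (chirp)
x = ia_true * np.cos(np.cumsum(ifreq_true))
ia, ifreq = am_fm_demodulate(x)
mid = slice(100, 900)                                 # ignore boundary effects
print(np.abs(ia[mid] - ia_true[mid]).max(),
      np.abs(ifreq[mid] - ifreq_true[mid]).max())
```

The multiscale methods of the paper replace this single global demodulation with per-channel demodulation after a filterbank, and replace the crude phase-difference IF estimate with the more accurate VS-LLP estimator.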
Ray, J.; Lee, J.; Yadav, V.; ...
2015-04-29
Atmospheric inversions are frequently used to estimate fluxes of atmospheric greenhouse gases (e.g., biospheric CO2 flux fields) at Earth's surface. These inversions typically assume that flux departures from a prior model are spatially smoothly varying, and model them using a multivariate Gaussian. When the field being estimated is spatially rough, multivariate Gaussian models are difficult to construct and a wavelet-based field model may be more suitable. Unfortunately, such models are very high dimensional and are most conveniently used when the estimation method can simultaneously perform data-driven model simplification (removal of model parameters that cannot be reliably estimated) and fitting. Such sparse reconstruction methods are typically not used in atmospheric inversions. In this work, we devise a sparse reconstruction method and illustrate it in an idealized atmospheric inversion problem for the estimation of fossil fuel CO2 (ffCO2) emissions in the lower 48 states of the USA. Our new method is based on stagewise orthogonal matching pursuit (StOMP), a method used to reconstruct compressively sensed images. Our adaptations bestow three properties on the sparse reconstruction procedure which are useful in atmospheric inversions. We have modified StOMP to incorporate prior information on the emission field being estimated and to enforce non-negativity on the estimated field. Finally, though based on wavelets, our method allows for the estimation of fields in non-rectangular geometries, e.g., emission fields inside geographical and political boundaries. Our idealized inversions use a recently developed multi-resolution (i.e., wavelet-based) random field model developed for ffCO2 emissions and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also
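A simplified StOMP-style loop (stagewise thresholded selection, least-squares refit, and a crude non-negativity clip standing in for the paper's adaptation) might look like this on a generic sparse problem:

```python
import numpy as np

def stomp_nonneg(A, y, n_stages=10, t=2.0):
    """Stagewise OMP sketch: each stage activates every column whose
    correlation with the residual exceeds t times a noise-scale estimate,
    refits by least squares, and clips negatives to enforce x >= 0."""
    m, n = A.shape
    active = np.zeros(n, dtype=bool)
    x = np.zeros(n)
    for _ in range(n_stages):
        r = y - A @ x
        if np.linalg.norm(r) < 1e-10 * np.linalg.norm(y):
            break                                 # residual already explained
        c = A.T @ r
        new = np.abs(c) > t * np.linalg.norm(r) / np.sqrt(m)
        if not new.any():
            break
        active |= new
        sol, *_ = np.linalg.lstsq(A[:, active], y, rcond=None)
        x[:] = 0.0
        x[active] = np.maximum(sol, 0.0)          # non-negativity constraint
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((60, 200))
A /= np.linalg.norm(A, axis=0)                    # unit-norm columns
x_true = np.zeros(200)
x_true[[10, 50, 120]] = [3.0, 1.0, 2.0]           # non-negative "emissions"
y = A @ x_true
x_hat = stomp_nonneg(A, y)
print(sorted(np.flatnonzero(x_hat > 0.5).tolist()))
```

In the paper, the columns of `A` would be wavelet atoms restricted to an irregular region and the thresholding/prior handling is more sophisticated; the stagewise select-then-refit structure is the shared core.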
a Data Driven Method for Building Reconstruction from LiDAR Point Clouds
NASA Astrophysics Data System (ADS)
Sajadian, M.; Arefi, H.
2014-10-01
Airborne laser scanning, commonly referred to as LiDAR, is a superior technology for three-dimensional data acquisition from the Earth's surface with high speed and density. Building reconstruction is one of the main applications of LiDAR systems and is the focus of this study. For a 3D reconstruction of buildings, the building points must first be separated from other points, such as ground and vegetation. In this paper, a multi-agent strategy is proposed for simultaneous extraction and segmentation of buildings from LiDAR point clouds. Height values, number of returned pulses, lengths of triangles, directions of normal vectors, and area are the five criteria utilized in this step. Next, the building edge points are detected using a new method named "Grid Erosion". A RANSAC-based technique is employed for edge-line extraction. Regularization constraints are applied to achieve the final lines. Finally, by modelling the roofs and walls, the 3D building model is reconstructed. The results indicate that the proposed method can successfully extract buildings from LiDAR data and generate building models automatically. A qualitative and quantitative assessment of the proposed method is also provided.
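RANSAC-based edge-line extraction of the kind mentioned above can be sketched as follows (a generic two-point line hypothesis with an inlier count; the tolerance and point counts are illustrative, not from the paper):

```python
import numpy as np

def ransac_line(pts, n_iter=200, tol=0.05, rng=None):
    """RANSAC: hypothesize a line from two random edge points, count inliers
    by perpendicular distance, keep the best, then refit by total least squares."""
    if rng is None:
        rng = np.random.default_rng()
    best = None
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), size=2, replace=False)
        d = pts[j] - pts[i]
        norm = np.hypot(d[0], d[1])
        if norm < 1e-12:
            continue
        n = np.array([-d[1], d[0]]) / norm        # unit normal of the line
        dist = np.abs((pts - pts[i]) @ n)
        inliers = dist < tol
        if best is None or inliers.sum() > best.sum():
            best = inliers
    P = pts[best]
    c = P.mean(axis=0)
    _, _, Vt = np.linalg.svd(P - c)               # TLS refit on the inliers
    return c, Vt[0], best                         # point, direction, mask

rng = np.random.default_rng(5)
t = rng.uniform(0.0, 10.0, 80)
edge = np.c_[t, 0.5 * t + 1.0] + 0.01 * rng.standard_normal((80, 2))
clutter = rng.uniform(0.0, 10.0, (20, 2))         # points from other structures
pts = np.vstack([edge, clutter])
c, direction, inliers = ransac_line(pts, rng=rng)
print(direction[1] / direction[0])                # recovered slope, near 0.5
```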
The Pixon Method for Data Compression Image Classification, and Image Reconstruction
NASA Technical Reports Server (NTRS)
Puetter, Richard; Yahil, Amos
2002-01-01
As initially proposed, this program had three goals: (1) continue to develop the highly successful Pixon method for image reconstruction and support other scientists in implementing this technique for their applications; (2) develop image compression techniques based on the Pixon method; and (3) develop artificial intelligence algorithms for image classification based on the Pixon approach to simplifying neural networks. Subsequent to proposal review, the scope of the program was greatly reduced, and it was decided to investigate the ability of the Pixon method to provide superior restorations of images compressed with standard image compression schemes, specifically JPEG-compressed images.
Loomis, E N; Grim, G P; Wilde, C; Wilson, D C; Morgan, G; Wilke, M; Tregillis, I; Merrill, F; Clark, D; Finch, J; Fittinghoff, D; Bower, D
2010-10-01
Development of analysis techniques for neutron imaging at the National Ignition Facility is an important and difficult task for the detailed understanding of high-neutron-yield inertial confinement fusion implosions. Once developed, these methods must provide accurate images of the hot and cold fuels so that information about the implosion, such as symmetry and areal density, can be extracted. One method under development involves the numerical inversion of the pinhole image using knowledge of neutron transport through the pinhole aperture from Monte Carlo simulations. In this article we present results of source reconstructions based on simulated images that test the method's effectiveness with regard to pinhole misalignment.
NASA Astrophysics Data System (ADS)
Nara, T.; Koiwa, K.; Takagi, S.; Oyama, D.; Uehara, G.
2014-05-01
This paper presents an algebraic reconstruction method for dipole-quadrupole sources using magnetoencephalography data. Compared to conventional methods with the equivalent current dipole source model, our method can more accurately reconstruct two close, oppositely directed sources. Numerical simulations show that two sources on either side of the longitudinal fissure of the cerebrum are stably estimated. The method is verified using a quadrupolar source phantom composed of two isosceles-triangle coils with parallel bases.
NASA Astrophysics Data System (ADS)
Tomiwa, K. G.
2017-09-01
The search for new physics in the H → γγ + MET channel relies on how well the missing transverse energy (MET) is reconstructed. The MET algorithm used by the ATLAS experiment in turn uses input variables such as photons and jets, which depend on the reconstruction of the primary vertex. This document presents the performance of di-photon vertex reconstruction algorithms (the hardest-vertex method and the neural network method). Comparing the performance of these algorithms on the nominal Standard Model sample and the Beyond the Standard Model sample, we see that the neural network method of primary vertex selection performs better overall than the hardest-vertex method.
Su, Jianzhong; Shan, Hua; Liu, Hanli; Klibanov, Michael V
2006-10-01
A method is presented for reconstruction of the optical absorption coefficient from transmission near-infrared data with a cw source. Distinct from other available schemes, such as optimization or Newton's iterative method, this method resolves the inverse problem by solving a boundary value problem for a Volterra-type integro-differential equation. Numerical studies demonstrate that this technique has better-than-average stability with respect to the discrepancy between the initial guess and the actual unknown absorption coefficient. The method is particularly useful for reconstruction from a large data set obtained from a CCD camera. Several numerical reconstruction examples are presented.
Critical node treatment in the analytic function expansion method for Pin Power Reconstruction
Gao, Z.; Xu, Y.; Downar, T.
2013-07-01
Pin power reconstruction (PPR) was implemented in PARCS using the eight-term analytic function expansion method (AFEN). This method has been demonstrated to be both accurate and efficient. However, like all methods involving analytic functions, such as the analytic nodal method (ANM) and AFEN for the nodal solution, the use of AFEN for PPR has a potential numerical issue with critical nodes. The conventional analytic functions are trigonometric or hyperbolic sine or cosine functions with an angular frequency proportional to the buckling. For a critical node the buckling is zero, so the sine functions become zero and the cosine functions become unity. In this case, the eight terms of the analytic functions are no longer distinguishable from each other, and their corresponding coefficients can no longer be determined uniquely. The nodal flux distribution of a critical node can be linear, whereas the conventional analytic functions can only express a uniform distribution. If there is a critical or near-critical node in a plane, the reconstructed pin power distribution often shows negative or very large values with the conventional method. In this paper, we propose a new method to avoid the numerical problem with critical nodes, which uses modified trigonometric or hyperbolic sine functions defined as the ratio of the trigonometric or hyperbolic sine to its angular frequency. If no critical or near-critical nodes are present, the new pin power reconstruction method with modified analytic functions is equivalent to the conventional one. The new method is demonstrated using the L336C5 benchmark problem. (authors)
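The critical-node fix described above amounts to replacing sin(Bx) and sinh(Bx) by sin(Bx)/B and sinh(Bx)/B, which tend to the linear shape x as the angular frequency B goes to zero. A numerically safe sketch (with an illustrative series-expansion cutoff, not a value from the paper):

```python
import numpy as np

def msin(B, x, eps=1e-4):
    """Modified sine sin(B*x)/B; below eps, use the series x - B^2*x^3/6,
    which tends to the linear shape x as B -> 0 (the critical-node limit)."""
    B = float(B)
    if abs(B) < eps:
        return x - (B * B) * x ** 3 / 6.0
    return np.sin(B * x) / B

def msinh(B, x, eps=1e-4):
    """Modified hyperbolic sine sinh(B*x)/B with the same small-B fallback."""
    B = float(B)
    if abs(B) < eps:
        return x + (B * B) * x ** 3 / 6.0
    return np.sinh(B * x) / B

x = np.linspace(-1.0, 1.0, 5)
print(msin(0.0, x))       # reduces exactly to x: a linear in-node flux shape
print(msinh(0.0, x))
```

Because the modified functions stay linearly independent from the constant cosine terms at B = 0, the eight expansion coefficients remain uniquely determined at critical nodes.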
Benazzi, S; Stansfield, E; Milani, C; Gruppioni, G
2009-07-01
The process of forensic identification of missing individuals frequently relies on the superimposition of cranial remains onto an individual's picture and/or a facial reconstruction. In the latter, the integrity of the skull or cranium is an important factor in successful identification. Here, we recommend the use of computerized virtual reconstruction and geometric morphometrics for the purposes of individual reconstruction and identification in forensics. We apply these methods to reconstruct a complete cranium from facial remains that allegedly belong to the famous Italian humanist of the fifteenth century, Angelo Poliziano (1454-1494). Raw data were obtained by computed tomography scans of the Poliziano face and of a complete reference skull of a 37-year-old Italian male. Given that the amount of distortion of the facial remains is unknown, two reconstructions are proposed: the first calculates the average shape between the original and its reflection, and the second discards the less well preserved left side of the cranium under the assumption that there is no deformation on the right. Both reconstructions perform well in the superimposition with the original preserved facial surface in a virtual environment. The reconstruction by means of averaging between the original and its reflection yielded better results during the superimposition with portraits of Poliziano. We argue that the combination of computerized virtual reconstruction and geometric morphometric methods offers a number of advantages over traditional plastic reconstruction, among them speed, reproducibility, ease of manipulation when superimposing with pictures in a virtual environment, and control over assumptions.
Setterbo, Jacob J.; Chau, Anh; Fyhrie, Patricia B.; Hubbard, Mont; Upadhyaya, Shrini K.; Symons, Jennifer E.; Stover, Susan M.
2012-01-01
Background: Racetrack surface is a risk factor for racehorse injuries and fatalities. Current research indicates that race surface mechanical properties may be influenced by material composition, moisture content, temperature, and maintenance. Race surface mechanical testing in a controlled laboratory setting would allow for objective evaluation of dynamic properties of surface and factors that affect surface behavior. Objective: To develop a method for reconstruction of race surfaces in the laboratory and validate the method by comparison with racetrack measurements of dynamic surface properties. Methods: Track-testing device (TTD) impact tests were conducted to simulate equine hoof impact on dirt and synthetic race surfaces; tests were performed both in situ (racetrack) and using laboratory reconstructions of harvested surface materials. Clegg Hammer in situ measurements were used to guide surface reconstruction in the laboratory. Dynamic surface properties were compared between in situ and laboratory settings. Relationships between racetrack TTD and Clegg Hammer measurements were analyzed using stepwise multiple linear regression. Results: Most dynamic surface property setting differences (racetrack-laboratory) were small relative to surface material type differences (dirt-synthetic). Clegg Hammer measurements were more strongly correlated with TTD measurements on the synthetic surface than the dirt surface. On the dirt surface, Clegg Hammer decelerations were negatively correlated with TTD forces. Conclusions: Laboratory reconstruction of racetrack surfaces guided by Clegg Hammer measurements yielded TTD impact measurements similar to in situ values. The negative correlation between TTD and Clegg Hammer measurements confirms the importance of instrument mass when drawing conclusions from testing results. Lighter impact devices may be less appropriate for assessing dynamic surface properties compared to testing equipment designed to simulate hoof impact (TTD
A method of dose reconstruction for moving targets compatible with dynamic treatments
Rugaard Poulsen, Per; Lykkegaard Schmidt, Mai; Keall, Paul; Schjodt Worm, Esben; Fledelius, Walther; Hoffmann, Lone
2012-10-15
Purpose: To develop a method that allows a commercial treatment planning system (TPS) to perform accurate dose reconstruction for rigidly moving targets and to validate the method in phantom measurements for a range of treatments including intensity modulated radiation therapy (IMRT), volumetric arc therapy (VMAT), and dynamic multileaf collimator (DMLC) tracking. Methods: An in-house computer program was developed to manipulate Dicom treatment plans exported from a TPS (Eclipse, Varian Medical Systems) such that target motion during treatment delivery was incorporated into the plans. For each treatment, a motion-including plan was generated by dividing the intratreatment target motion into 1 mm position bins and constructing sub-beams that represented the parts of the treatment delivered while the target was located within each position bin. For each sub-beam, the target shift was modeled by a corresponding isocenter shift. The motion-incorporating Dicom plans were reimported into the TPS, where dose calculation resulted in motion-including target dose distributions. For experimental validation of the dose reconstruction, a thorax phantom with a movable lung-equivalent rod with a tumor insert of solid water was first CT scanned. The tumor insert was delineated as a gross tumor volume (GTV), and a planning target volume (PTV) was formed by adding margins. A conformal plan, two IMRT plans (step-and-shoot and sliding window), and a VMAT plan were generated, giving minimum target doses of 95% (GTV) and 67% (PTV) of the prescription dose (3 Gy). Two conformal fields with MLC leaves perpendicular and parallel to the tumor motion, respectively, were generated for DMLC tracking. All treatment plans were delivered to the thorax phantom without tumor motion and with a sinusoidal tumor motion. The two conformal fields were delivered with and without portal-image-guided DMLC tracking based on an embedded gold marker. The target dose distribution was measured with a
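The position-binning step described in the Methods can be sketched as follows: the motion trace is histogrammed into 1 mm bins, and each bin yields a sub-beam weighted by the fraction of delivery time the target spent there, with the bin center as the isocenter shift. All numbers below are illustrative, not taken from the paper:

```python
import numpy as np

# Sinusoidal tumor motion sampled at treatment-delivery time resolution.
t = np.linspace(0.0, 60.0, 6001)                 # seconds
pos = 10.0 * np.sin(2 * np.pi * t / 4.0)         # mm, 20 mm peak-to-peak

# Divide the motion into 1 mm position bins; each bin becomes a sub-beam
# whose weight is the fraction of delivery time spent in that bin.
edges = np.arange(np.floor(pos.min()), np.ceil(pos.max()) + 1.0, 1.0)
counts, _ = np.histogram(pos, bins=edges)
weights = counts / counts.sum()
shifts = 0.5 * (edges[:-1] + edges[1:])          # isocenter shift per sub-beam (mm)

assert abs(weights.sum() - 1.0) < 1e-12
# Sinusoidal motion dwells longest at the extremes of its range.
assert weights[0] > weights[len(weights) // 2]
```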
Finding pathway-modulating genes from a novel Ontology Fingerprint-derived gene network
Qin, Tingting; Matmati, Nabil; Tsoi, Lam C.; Mohanty, Bidyut K.; Gao, Nan; Tang, Jijun; Lawson, Andrew B.; Hannun, Yusuf A.; Zheng, W. Jim
2014-01-01
To enhance our knowledge regarding biological pathway regulation, we took an integrated approach, using the biomedical literature, ontologies, network analyses and experimental investigation to infer novel genes that could modulate biological pathways. We first constructed a novel gene network via a pairwise comparison of all yeast genes’ Ontology Fingerprints—a set of Gene Ontology terms overrepresented in the PubMed abstracts linked to a gene along with those terms’ corresponding enrichment P-values. The network was further refined using a Bayesian hierarchical model to identify novel genes that could potentially influence the pathway activities. We applied this method to the sphingolipid pathway in yeast and found that many top-ranked genes indeed displayed altered sphingolipid pathway functions, initially measured by their sensitivity to myriocin, an inhibitor of de novo sphingolipid biosynthesis. Further experiments confirmed the modulation of the sphingolipid pathway by one of these genes, PFA4, encoding a palmitoyl transferase. Comparative analysis showed that few of these novel genes could be discovered by other existing methods. Our novel gene network provides a unique and comprehensive resource to study pathway modulations and systems biology in general. PMID:25063300
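A toy version of the pairwise fingerprint comparison might look as follows; note the paper uses a similarity measure defined on enrichment P-values rather than the plain cosine similarity shown here, and all gene vectors below are invented for illustration:

```python
import numpy as np

# Hypothetical Ontology Fingerprints: per-gene vectors of -log10 enrichment
# p-values over a shared vocabulary of GO terms (names/values illustrative).
fingerprints = {
    "LCB1": np.array([3.2, 0.1, 0.0, 1.1]),
    "LCB2": np.array([2.9, 0.2, 0.1, 1.3]),
    "PFA4": np.array([1.5, 0.0, 2.0, 0.4]),
}

def cosine(u, v):
    """Cosine similarity between two fingerprint vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Edge weight = pairwise fingerprint similarity; a threshold on these
# weights would then define the network.
genes = list(fingerprints)
edges = {(a, b): cosine(fingerprints[a], fingerprints[b])
         for i, a in enumerate(genes) for b in genes[i + 1:]}

# Genes with similar literature profiles get stronger edges.
assert edges[("LCB1", "LCB2")] > edges[("LCB1", "PFA4")]
```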
A Method for the Reconstruction and Temporal Extension of Climatological Time Series
NASA Astrophysics Data System (ADS)
Valero, F.; Gonzalez, J. F.; Doblas, F. J.; García-Miguel, J. A.
1996-02-01
A method for the reconstruction and temporal extension of climatological time series is provided. The method combines harmonic analysis, seasonal weights, and the Durbin-Watson (DW) regression method. The DW method has been modified in this paper and is described in detail because it represents a novel use of the original DW method. The method is applied to monthly means of daily wind-run data sets recorded at two historical observatories (M series and A series) within the Parque del Retiro in Madrid (Spain), covering different time periods with an overlapping period (1901-1919). The aim of the present study is to fill in the gaps and construct a historical time series ranging from 1867 to 1992. The proposed model is developed for the 1906-1919 calibration period and validated over the 1901-1905 verification period, under the hypothesis of a constant ratio of variances. The verification results are almost as good as those for the calibration period. Hence, the M series was extended back to 1867, which results in the longest climatological wind-run data set in Spain. Also, the reconstruction is shown to be reliable.
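Calibration over an overlapping period can be sketched with ordinary least squares (the paper's modified Durbin-Watson regression is more elaborate); the series values below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical overlapping monthly wind-run series: A (long reference
# series) and M (target series to be extended backwards).
a_overlap = rng.uniform(200.0, 400.0, size=60)
m_overlap = 0.8 * a_overlap + 15.0 + rng.normal(0.0, 2.0, size=60)

# Calibrate M against A on the overlap (slope + intercept).
X = np.column_stack([a_overlap, np.ones_like(a_overlap)])
slope, intercept = np.linalg.lstsq(X, m_overlap, rcond=None)[0]

# Extend M backwards using A values from the pre-overlap era.
a_early = rng.uniform(200.0, 400.0, size=24)
m_reconstructed = slope * a_early + intercept

# The fitted relation recovers the simulated one.
assert abs(slope - 0.8) < 0.05 and abs(intercept - 15.0) < 15.0
```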
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A.; Haering, Edward A., Jr.; Ehernberger, L. J.
1996-01-01
In-flight measurements of the SR-71 near-field sonic boom were obtained by an F-16XL airplane at flightpath separation distances from 40 to 740 ft. Twenty-two signatures were obtained from Mach 1.60 to Mach 1.84 and altitudes from 47,600 to 49,150 ft. The shock wave signatures were measured by the total and static pressure sensors on the F-16XL noseboom. These near-field signature measurements were distorted by pneumatic attenuation in the pitot-static sensors; their effects were accounted for using optimal deconvolution. Measurement system magnitude and phase characteristics were determined from ground-based step-response tests and extrapolated to flight conditions using analytical models. Deconvolution was implemented using Fourier transform methods. Comparisons of the shock wave signatures reconstructed from the total and static pressure data are presented. The good agreement achieved gives confidence in the quality of the reconstruction analysis. Although originally developed to reconstruct the sonic boom signatures from SR-71 sonic boom flight tests, the methods presented here apply generally to other types of highly attenuated or distorted pneumatic measurements.
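Deconvolution by Fourier transform methods can be sketched with a Wiener-regularized inverse filter; the first-order pneumatic lag below is a stand-in for the measured step-response characteristics, not the paper's actual model:

```python
import numpy as np

def wiener_deconvolve(measured, impulse_response, noise_power=1e-3):
    """Frequency-domain deconvolution with Wiener regularization."""
    H = np.fft.fft(impulse_response, n=len(measured))
    Y = np.fft.fft(measured)
    # H* / (|H|^2 + noise_power) suppresses frequencies where H is small.
    G = np.conj(H) / (np.abs(H) ** 2 + noise_power)
    return np.real(np.fft.ifft(Y * G))

# Simulate a pressure step attenuated by a first-order pneumatic lag.
n = 256
true_signal = np.zeros(n)
true_signal[64:] = 1.0
h = np.exp(-np.arange(n) / 12.0)
h /= h.sum()                                     # low-pass impulse response
measured = np.real(np.fft.ifft(np.fft.fft(true_signal) * np.fft.fft(h)))

recovered = wiener_deconvolve(measured, h, noise_power=1e-6)
assert np.max(np.abs(recovered - true_signal)) < 0.05
```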
NASA Astrophysics Data System (ADS)
Montgomery, Kevin N.; Ross, Muriel D.
1993-07-01
A simple method to reconstruct details of neural tissue architecture from transmission electron microscope (TEM) images will help us to increase our knowledge of the functional organization of neural systems in general. To be useful, the reconstruction method should provide high resolution, quantitative measurement, and quick turnaround. In pursuit of these goals, we developed a modern, semiautomated system for reconstruction of neural tissue from TEM serial sections. Images are acquired by a video camera mounted on a TEM (Zeiss 902) equipped with automated stage control. The images are reassembled automatically into a mosaicked section using a cross-correlation algorithm on a Connection Machine-2 (CM-2) parallel supercomputer. An object detection algorithm on a Silicon Graphics workstation is employed to aid contour extraction. An estimated registration between sections is computed and verified by the user. The contours are then tessellated into a triangle-based mesh. At this point the data can be visualized as a wireframe or solid object, volume rendered, or used as a basis for simulations of functional activity.
NASA Astrophysics Data System (ADS)
Bittar, Eric; Lavallee, Stephane; Szeliski, Richard
1993-08-01
This paper presents a method to register overlapping 3-D surfaces which we use to reconstruct entire three-dimensional objects from sets of views. We use a range imaging sensor to digitize the object in several positions. Each pair of overlapping images is then registered using the algorithm developed in this paper. Rather than extracting and matching features, we match the complete surface, which we represent using a collection of points. This enables us to reconstruct smooth free-form objects which may lack sufficient features. Our algorithm is an extension of an algorithm we previously developed to register 3-D surfaces. This algorithm first creates an octree-spline from one set of points to quickly compute point to surface distances. It then uses an iterative nonlinear least squares minimization technique to minimize the sum of squared distances from the data point set to the octree point set. In this paper, we replace the squared distance with a function of the distance, which allows the elimination of points that are not in the shared region between the two sets. Once the object has been reconstructed by merging all the views, a continuous surface model is created from the set of points. This method has been successfully used on the limbs of a dummy and on a human head.
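The core least-squares step of such surface registration, solving for the rigid transform that best aligns corresponding points, can be sketched with the Kabsch algorithm (the paper's octree-spline distance field and robust weighting of non-overlapping points are omitted):

```python
import numpy as np

def kabsch(P, Q):
    """Rigid transform (R, t) minimizing sum ||R @ p + t - q||^2 over pairs."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

rng = np.random.default_rng(1)
P = rng.normal(size=(50, 3))                    # one view's point set
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([0.5, -0.2, 1.0])   # overlapping, transformed view

R, t = kabsch(P, Q)
aligned = P @ R.T + t
assert np.max(np.abs(aligned - Q)) < 1e-9       # exact recovery (no noise)
```

In a full iterative scheme this solve alternates with re-estimating point correspondences from the distance field, exactly the loop the abstract describes.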
Reconstruction of RHESSI Solar Flare Images with a Forward Fitting Method
NASA Astrophysics Data System (ADS)
Aschwanden, Markus J.; Schmahl, Ed; RHESSI Team
2002-11-01
We describe a forward-fitting method that has been developed to reconstruct hard X-ray images of solar flares from the Ramaty High-Energy Solar Spectroscopic Imager (RHESSI), a Fourier imager with rotation-modulated collimators that was launched on 5 February 2002. The forward-fitting method is based on geometric models that represent a spatial map by a superposition of multiple source structures, which are quantified by circular gaussians (4 parameters per source), elliptical gaussians (6 parameters), or curved ellipticals (7 parameters), designed to characterize real solar flare hard X-ray maps with a minimum number of geometric elements. We describe and demonstrate the use of the forward-fitting algorithm. We perform some 500 simulations of rotation-modulated time profiles of the 9 RHESSI detectors, based on single and multiple source structures, and perform their image reconstruction. We quantify the fidelity of the image reconstruction, as function of photon statistics, and the accuracy of retrieved source positions, widths, and fluxes. We outline applications for which the forward-fitting code is most suitable, such as measurements of the energy-dependent altitude of energy loss near the limb, or footpoint separation during flares.
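A circular Gaussian source is described by four parameters (position, width, flux). RHESSI forward fitting estimates them from rotation-modulated time profiles; the sketch below instead recovers them from a synthetic map via image moments, just to make the parameterization concrete:

```python
import numpy as np

def fit_circular_gaussian_moments(img):
    """Estimate (x0, y0, sigma, flux) of one circular Gaussian from moments."""
    ny, nx = img.shape
    y, x = np.mgrid[0:ny, 0:nx]
    flux = img.sum()
    x0 = (x * img).sum() / flux
    y0 = (y * img).sum() / flux
    # For a circular Gaussian, E[(x-x0)^2 + (y-y0)^2] = 2 * sigma^2.
    var = (((x - x0) ** 2 + (y - y0) ** 2) * img).sum() / flux
    sigma = np.sqrt(var / 2.0)
    return x0, y0, sigma, flux

# Synthetic single-source map: center (40, 30), width 3 px, flux 500.
ny, nx = 64, 64
y, x = np.mgrid[0:ny, 0:nx]
img = 500.0 / (2 * np.pi * 3.0 ** 2) * np.exp(
    -((x - 40.0) ** 2 + (y - 30.0) ** 2) / (2 * 3.0 ** 2))

x0_est, y0_est, sigma_est, flux_est = fit_circular_gaussian_moments(img)
assert abs(x0_est - 40.0) < 0.1 and abs(sigma_est - 3.0) < 0.1
```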
Single-slice reconstruction method for helical cone-beam differential phase-contrast CT.
Fu, Jian; Chen, Liyuan
2014-01-01
X-ray phase-contrast computed tomography (PC-CT) can provide internal structure information of biomedical specimens with high-quality cross-section images and has become an invaluable analysis tool. Here a simple and fast reconstruction algorithm, called DPC-CB-SSRB, is reported for helical cone-beam differential PC-CT (DPC-CT). It combines the existing CB-SSRB method of helical cone-beam absorption-contrast CT with the differential nature of DPC imaging. The reconstruction can be performed using the 2D fan-beam filtered back-projection algorithm with the Hilbert imaginary filter. The quality of the results for large helical pitches is surprisingly good. In particular, this algorithm obtains quality from helical cone-beam DPC-CT data with a normalized pitch of 10 comparable to that of the traditional inter-row interpolation reconstruction at a normalized pitch of 2. This method will advance future medical applications of helical cone-beam DPC-CT imaging.
ADMM-EM Method for L1-Norm Regularized Weighted Least Squares PET Reconstruction
2016-01-01
The L1-norm regularization is usually used in positron emission tomography (PET) reconstruction to suppress noise artifacts while preserving edges. The alternating direction method of multipliers (ADMM) has proven effective for solving this problem. It sequentially updates the additional variables, image pixels, and Lagrangian multipliers. Difficulties lie in obtaining a nonnegative update of the image, and classic ADMM requires updating the image by greedy iteration to minimize the cost function, which is computationally expensive. In this paper, we consider a specific application of ADMM to the L1-norm regularized weighted least squares PET reconstruction problem. The main contribution is the derivation of a new approach to iteratively and monotonically update the image while self-constraining to the nonnegativity region, without a predetermined step size. We give a rigorous convergence proof for the quadratic subproblem of the ADMM algorithm considered in the paper. A simplified version is also developed by replacing the minimization of the image-related cost function with one iteration that only decreases it. The experimental results show that the proposed algorithm with greedy iterations provides faster convergence than other commonly used methods. Furthermore, the simplified version gives a comparable reconstructed result with far lower computational costs. PMID:27840655
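A generic ADMM iteration for an L1-regularized, nonnegative least-squares problem can be sketched as follows (this is the textbook scheme, not the paper's monotonic update, and the unweighted objective is our simplification):

```python
import numpy as np

def admm_nonneg_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    """ADMM for min 0.5*||Ax - b||^2 + lam*||x||_1 subject to x >= 0."""
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))  # factor once, reuse
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(iters):
        # x-update: quadratic subproblem via the cached Cholesky factor.
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        # z-update: L1 shrinkage and nonnegativity projection in one step.
        z = np.maximum(x + u - lam / rho, 0.0)
        u = u + x - z                              # dual ascent
    return z

rng = np.random.default_rng(2)
A = rng.normal(size=(80, 30))
x_true = np.zeros(30)
x_true[[3, 7, 20]] = [2.0, 1.5, 3.0]
b = A @ x_true + 0.01 * rng.normal(size=80)

x_hat = admm_nonneg_lasso(A, b, lam=0.1)
assert np.all(x_hat >= 0)
assert np.max(np.abs(x_hat - x_true)) < 0.1
```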
NASA Astrophysics Data System (ADS)
Paget, A. C.; Brodzik, M. J.; Gotberg, J.; Hardman, M.; Long, D. G.
2014-12-01
Spanning over 35 years of Earth observations, satellite passive microwave sensors have generated a near-daily, multi-channel brightness temperature record of observations. Critical to describing and understanding Earth system hydrologic and cryospheric parameters, data products derived from the passive microwave record include precipitation, soil moisture, surface water, vegetation, snow water equivalent, sea ice concentration and sea ice motion. While swath data are valuable to oceanographers due to the temporal scales of ocean phenomena, gridded data are more valuable to researchers interested in derived parameters at fixed locations through time and are widely used in climate studies. We are applying recent developments in image reconstruction methods to produce a systematically reprocessed historical time series NASA MEaSUREs Earth System Data Record, at higher spatial resolutions than have previously been available, for the entire SMMR, SSM/I-SSMIS and AMSR-E record. We take advantage of recently released, recalibrated SSM/I-SSMIS swath format Fundamental Climate Data Records. Our presentation will compare and contrast the two candidate image reconstruction techniques we are evaluating: Backus-Gilbert (BG) interpolation and a radiometer version of Scatterometer Image Reconstruction (SIR). Both BG and SIR use regularization to trade off noise and resolution. We discuss our rationale for the respective algorithm parameters we have selected, compare results and computational costs, and include prototype SSM/I images at enhanced resolutions of up to 3 km. We include a sensitivity analysis for estimating sensor measurement response functions critical to both methods.
High-order noise analysis for low dose iterative image reconstruction methods: ASIR, IRIS, and MBAI
NASA Astrophysics Data System (ADS)
Do, Synho; Singh, Sarabjeet; Kalra, Mannudeep K.; Karl, W. Clem; Brady, Thomas J.; Pien, Homer
2011-03-01
Iterative reconstruction techniques (IRTs) have been shown to suppress noise significantly in low dose CT imaging. However, medical doctors hesitate to accept this new technology because the visual impression of IRT images differs from that of full-dose filtered back-projection (FBP) images. Common noise measurements, such as the mean and standard deviation of a homogeneous region in the image, do not provide sufficient characterization of noise statistics when the probability density function becomes non-Gaussian. In this study, we measure L-moments of intensity values of images acquired at 10% of normal dose and reconstructed by the IRT methods of two state-of-the-art clinical scanners (i.e., GE HDCT and Siemens DSCT Flash), keeping the dosage level identical for both. The high- and low-dose scans (i.e., 10% of high dose) were acquired from each scanner and L-moments of noise patches were calculated for the comparison.
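Sample L-moments can be computed from probability-weighted moments of the order statistics; a minimal sketch of the standard unbiased estimators:

```python
import numpy as np

def l_moments(sample):
    """First four sample L-moments via probability-weighted moments b_r."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b = [x.mean()]
    for r in range(1, 4):
        w = np.ones(n)
        for k in range(r):
            w *= (i - 1 - k) / (n - 1 - k)      # unbiased PWM weights
        b.append(np.mean(w * x))
    l1 = b[0]
    l2 = 2 * b[1] - b[0]
    l3 = 6 * b[2] - 6 * b[1] + b[0]
    l4 = 20 * b[3] - 30 * b[2] + 12 * b[1] - b[0]
    return l1, l2, l3, l4

rng = np.random.default_rng(3)
sample = rng.normal(loc=5.0, scale=2.0, size=200000)
l1, l2, l3, l4 = l_moments(sample)
# For a normal distribution: l1 = mu, l2 = sigma/sqrt(pi), l3 ~ 0 (symmetry).
assert abs(l1 - 5.0) < 0.05
assert abs(l2 - 2.0 / np.sqrt(np.pi)) < 0.03
assert abs(l3 / l2) < 0.02                       # L-skewness near zero
```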
Method of producing nanopatterned articles using surface-reconstructed block copolymer films
Russell, Thomas P; Park, Soojin; Wang, Jia-Yu; Kim, Bokyung
2013-08-27
Nanopatterned surfaces are prepared by a method that includes forming a block copolymer film on a substrate, annealing and surface reconstructing the block copolymer film to create an array of cylindrical voids, depositing a metal on the surface-reconstructed block copolymer film, and heating the metal-coated block copolymer film to redistribute at least some of the metal into the cylindrical voids. When very thin metal layers and low heating temperatures are used, metal nanodots can be formed. When thicker metal layers and higher heating temperatures are used, the resulting metal structure includes nanoring-shaped voids. The nanopatterned surfaces can be transferred to the underlying substrates via etching, or used to prepare nanodot- or nanoring-decorated substrate surfaces.
Prediction of a reconstructed α-boron (111) surface by the minima hopping method
NASA Astrophysics Data System (ADS)
Amsler, Maximilian; Goedecker, Stefan; Botti, Silvana; Marques, Miguel A. L.
2014-03-01
Boron exhibits an impressive structural variety, and immense efforts have recently been made to explore boron structures of low dimensionality, such as boron fullerenes, two-dimensional boron sheets, and boron nanotubes, which are theoretically predicted to exhibit superior electronic properties compared to their carbon analogues. By performing an extensive and systematic ab initio structural search for the (111) surface of α-boron using the minima hopping structure prediction method, we found very strong reconstructions that lead to two-dimensional surface layers. The topmost layer of these low-energy reconstructions is a conductive, nearly perfectly planar boron sheet. If exfoliation were experimentally possible, promising precursors for a large variety of boron nanostructures, such as single-walled boron nanotubes and boron fullerenes, could be obtained.
NASA Astrophysics Data System (ADS)
Okawa, Shinpei; Hirasawa, Takeshi; Kushibiki, Toshihiro; Ishihara, Miya
2013-09-01
An image reconstruction algorithm for biomedical photoacoustic imaging is discussed. The algorithm solves the inverse problem of the photoacoustic phenomenon in biological media and images the distribution of large optical absorption coefficients, which can indicate diseased tissues such as cancers with angiogenesis and the tissues labeled by exogenous photon absorbers. The linearized forward problem, which relates the absorption coefficients to the detected photoacoustic signals, is formulated by using photon diffusion and photoacoustic wave equations. Both partial differential equations are solved by a finite element method. The inverse problem is solved by truncated singular value decomposition, which reduces the effects of the measurement noise and the errors between forward modeling and actual measurement systems. The spatial resolution and the robustness to various factors affecting the image reconstruction are evaluated by numerical experiments with 2D geometry.
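Truncated singular value decomposition solves the linearized inverse problem while discarding the noise-amplifying small singular values; a minimal sketch on a synthetic ill-conditioned system (not the photoacoustic forward model itself):

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Solve A x = b keeping only the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    inv_s = np.zeros_like(s)
    inv_s[:k] = 1.0 / s[:k]                    # truncate the noisy tail
    return Vt.T @ (inv_s * (U.T @ b))

# Ill-conditioned forward model: small singular values amplify noise.
rng = np.random.default_rng(4)
U, _ = np.linalg.qr(rng.normal(size=(40, 40)))
V, _ = np.linalg.qr(rng.normal(size=(40, 40)))
s = np.logspace(0, -8, 40)                     # condition number 1e8
A = U @ np.diag(s) @ V.T
x_true = rng.normal(size=40)
b = A @ x_true + 1e-6 * rng.normal(size=40)    # small measurement noise

x_naive = np.linalg.solve(A, b)                # noise blown up by 1/s_min
x_tsvd = tsvd_solve(A, b, k=25)
assert np.linalg.norm(x_tsvd - x_true) < np.linalg.norm(x_naive - x_true)
```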
NASA Astrophysics Data System (ADS)
Li, Dongming; Zhang, Lijuan; Wang, Ting; Liu, Huan; Yang, Jinhua; Chen, Guifen
2016-11-01
To improve the quality of adaptive optics (AO) images, we study an AO image restoration algorithm based on wavefront reconstruction technology and an adaptive total variation (TV) method in this paper. First, wavefront reconstruction using Zernike polynomials provides an initial estimate of the point spread function (PSF). Then, we develop our proposed iterative solutions for AO image restoration, addressing the joint deconvolution issue. Image restoration experiments are performed to verify the restoration effect of our proposed algorithm. The experimental results show that, compared with the RL-IBD and Wiener-IBD algorithms, the GMG measures (for a real AO image) from our algorithm are increased by 36.92% and 27.44%, respectively, the computation times are decreased by 7.2% and 3.4%, respectively, and the estimation accuracy is significantly improved.
Ray, J.; Lee, J.; Yadav, V.; ...
2014-08-20
We present a sparse reconstruction scheme that can also be used to ensure non-negativity when fitting wavelet-based random field models to limited observations in non-rectangular geometries. The method is relevant when multiresolution fields are estimated using linear inverse problems. Examples include the estimation of emission fields for many anthropogenic pollutants using atmospheric inversion or hydraulic conductivity in aquifers from flow measurements. The scheme is based on three new developments. Firstly, we extend an existing sparse reconstruction method, Stagewise Orthogonal Matching Pursuit (StOMP), to incorporate prior information on the target field. Secondly, we develop an iterative method that uses StOMP to impose non-negativity on the estimated field. Finally, we devise a method, based on compressive sensing, to limit the estimated field within an irregularly shaped domain. We demonstrate the method on the estimation of fossil-fuel CO2 (ffCO2) emissions in the lower 48 states of the US. The application uses a recently developed multiresolution random field model and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also reduces the overall computational cost by a factor of two. Further, the sparse reconstruction scheme imposes non-negativity without introducing strong nonlinearities, such as those introduced by employing log-transformed fields, and thus reaps the benefits of simplicity and computational speed that are characteristic of linear inverse problems.
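The greedy-selection-plus-nonnegativity idea can be sketched with plain orthogonal matching pursuit refitted by nonnegative least squares (StOMP proper selects many coefficients per stage by thresholding; this simpler variant is ours):

```python
import numpy as np
from scipy.optimize import nnls

def omp_nonneg(A, b, n_atoms):
    """Greedy sparse recovery: OMP atom selection + NNLS refit on the support."""
    support, residual = [], b.copy()
    for _ in range(n_atoms):
        corr = A.T @ residual
        j = int(np.argmax(corr))            # most positively correlated column
        if j in support:
            break
        support.append(j)
        coef, _ = nnls(A[:, support], b)    # refit whole support with x >= 0
        residual = b - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(5)
A = rng.normal(size=(200, 40))
A /= np.linalg.norm(A, axis=0)              # unit-norm dictionary columns
x_true = np.zeros(40)
x_true[[2, 11, 33]] = [1.0, 2.5, 0.7]
b = A @ x_true                              # noiseless observations

x_hat = omp_nonneg(A, b, n_atoms=3)
assert np.all(x_hat >= 0)
assert np.max(np.abs(x_hat - x_true)) < 1e-6
```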
Bayesian network reconstruction using systems genetics data: comparison of MCMC methods.
Tasaki, Shinya; Sauerwine, Ben; Hoff, Bruce; Toyoshiba, Hiroyoshi; Gaiteri, Chris; Chaibub Neto, Elias
2015-04-01
Reconstructing biological networks using high-throughput technologies has the potential to produce condition-specific interactomes. But are these reconstructed networks a reliable source of biological interactions? Do some network inference methods offer dramatically improved performance on certain types of networks? To facilitate the use of network inference methods in systems biology, we report a large-scale simulation study comparing the ability of Markov chain Monte Carlo (MCMC) samplers to reverse engineer Bayesian networks. The MCMC samplers we investigated included foundational and state-of-the-art Metropolis-Hastings and Gibbs sampling approaches, as well as novel samplers we have designed. To enable a comprehensive comparison, we simulated gene expression and genetics data from known network structures under a range of biologically plausible scenarios. We examine the overall quality of network inference via different methods, as well as how their performance is affected by network characteristics. Our simulations reveal that network size, edge density, and strength of gene-to-gene signaling are major parameters that differentiate the performance of various samplers. Specifically, more recent samplers including our novel methods outperform traditional samplers for highly interconnected large networks with strong gene-to-gene signaling. Our newly developed samplers show comparable or superior performance to the top existing methods. Moreover, this performance gain is strongest in networks with biologically oriented topology, which indicates that our novel samplers are suitable for inferring biological networks. The performance of MCMC samplers in this simulation framework can guide the choice of methods for network reconstruction using systems genetics data.
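The Metropolis-Hastings acceptance rule at the core of these samplers can be sketched on a simple continuous target (structure-learning samplers propose moves in DAG space instead, but the accept/reject logic is the same):

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_steps, step=1.0, seed=0):
    """Random-walk Metropolis-Hastings on any unnormalized log-density."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_target(x0)
    samples = []
    for _ in range(n_steps):
        x_new = x + step * rng.normal()         # symmetric proposal
        lp_new = log_target(x_new)
        # Accept with probability min(1, target(x_new) / target(x)).
        if np.log(rng.uniform()) < lp_new - lp:
            x, lp = x_new, lp_new
        samples.append(x)
    return np.array(samples)

# Target: standard normal (log-density up to an additive constant).
chain = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_steps=50000)
burned = chain[5000:]                            # discard burn-in
assert abs(burned.mean()) < 0.05
assert abs(burned.std() - 1.0) < 0.05
```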
Application of accelerated acquisition and highly constrained reconstruction methods to MR
NASA Astrophysics Data System (ADS)
Wang, Kang
2011-12-01
There are many Magnetic Resonance Imaging (MRI) applications that require rapid data acquisition. In conventional proton MRI, representative applications include real-time dynamic imaging, whole-chest pulmonary perfusion imaging, high resolution coronary imaging, MR T1 or T2 mapping, etc. The requirement for fast acquisition and novel reconstruction methods is due to clinical demand for high temporal resolution, high spatial resolution, or both. Another important category in which fast MRI methods are highly desirable is imaging with hyperpolarized (HP) contrast media, such as HP 3He imaging for evaluation of pulmonary function, and imaging of HP 13C-labeled substrates for the study of in vivo metabolic processes. To address these needs, numerous MR undersampling methods have been developed and combined with novel image reconstruction techniques. This thesis aims to develop novel data acquisition and image reconstruction techniques for the following applications. (1) Ultrashort echo time spectroscopic imaging (UTESI). The need for acquiring many echo images in spectroscopic imaging with high spatial resolution usually results in extended scan times, and thus requires k-space undersampling and novel image reconstruction methods to overcome the artifacts related to the undersampling. (2) Dynamic hyperpolarized 13C spectroscopic imaging. HP 13C compounds exhibit non-equilibrium T1 decay and rapidly evolving spectral dynamics, and therefore it is vital to utilize the polarized signal wisely and efficiently to observe the entire temporal dynamics of the injected 13C compounds as well as the corresponding downstream metabolites. (3) Time-resolved contrast-enhanced MR angiography. The diagnosis of vascular diseases often requires large coverage of human body anatomies with high spatial resolution and sufficient temporal resolution for the separation of arterial phases from venous phases. The goal of simultaneously achieving high spatial and temporal resolution has
Zhou, Jian; Shang, Yang; Zhang, Xiaohu; Yu, Wenxian
2015-03-09
We propose a monocular trajectory intersection method to solve the problem that a monocular moving camera cannot be used for three-dimensional reconstruction of a moving object point. The necessary and sufficient condition for this method to have a unique solution is provided. An extended application of the method is that it not only achieves reconstruction of the 3D trajectory but also captures the orientation of the moving object, which cannot be obtained by PnP methods due to a lack of features. This marks a substantial improvement, developing intersection measurement in videometrics from the traditional "point intersection" to "trajectory intersection". The trajectory of the object point can be obtained using only linear equations, without any initial value or iteration; the orientation of an object under poor conditions can also be calculated. The condition required for the existence of a definite solution is derived from equivalence relations among the orders of the object's moving-trajectory equations, which specifies the applicable conditions of the method. Simulation and experimental results show that the method not only applies to objects moving along a straight line, a conic, or another simple trajectory, but also provides good results for more complicated trajectories, making it widely applicable.
Reconstruction of multiple gastric electrical wave fronts using potential based inverse methods
Kim, J HK; Pullan, A J; Cheng, L K
2012-01-01
One approach for non-invasively characterizing gastric electrical activity, commonly used in the field of electrocardiography, involves solving an inverse problem whereby electrical potentials on the stomach surface are directly reconstructed from dense potential measurements on the skin surface. To investigate this problem, an anatomically realistic torso model and an electrical stomach model were used to simulate potentials on stomach and skin surfaces arising from normal gastric electrical activity. The effectiveness of the Greensite-Tikhonov and Tikhonov inverse methods was compared in the presence of 10% Gaussian noise with either 84 or 204 body surface electrodes. The stability and accuracy of the Greensite-Tikhonov method were further investigated by introducing varying levels of Gaussian signal noise or by increasing or decreasing the size of the stomach by 10%. Results showed that the reconstructed solutions were able to represent the presence of multiple propagating wave fronts, and the Greensite-Tikhonov method with 204 electrodes performed best (correlation coefficients of activation time: 90%; pacemaker localization error: 3 cm). The Greensite-Tikhonov method was stable with Gaussian noise levels up to 20% and a 10% change in stomach size. The use of 204 rather than 84 body surface electrodes improved the performance; however, for all investigated cases, the Greensite-Tikhonov method outperformed the Tikhonov method. PMID:22842812
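The core of such potential-based inverse solutions is Tikhonov regularization of an ill-conditioned linear forward model. As a minimal illustration (not the paper's Greensite-Tikhonov implementation; a random toy transfer matrix stands in for the torso model, and the regularization weight is an assumed value), a zeroth-order Tikhonov inverse can be sketched as:

```python
import numpy as np

def tikhonov_inverse(A, b, lam):
    """Zeroth-order Tikhonov solution of the linear inverse problem b = A x.

    Minimizes ||A x - b||^2 + lam^2 ||x||^2, which has the closed form
    x = (A^T A + lam^2 I)^(-1) A^T b.
    """
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

# Toy example: an ill-conditioned "transfer matrix" mapping 50 stomach-surface
# potentials to 204 body-surface electrode readings (illustrative sizes only).
rng = np.random.default_rng(0)
A = rng.standard_normal((204, 50)) @ np.diag(1.0 / (1 + np.arange(50)) ** 2)
x_true = np.sin(np.linspace(0, 3 * np.pi, 50))
b = A @ x_true + 0.1 * np.abs(A @ x_true).mean() * rng.standard_normal(204)

x_hat = tikhonov_inverse(A, b, lam=1e-3)
```

Roughly speaking, the Greensite variant additionally decomposes the measurement sequence along its temporal singular vectors before applying this kind of regularized inversion to each component.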
FBP and BPF reconstruction methods for circular X-ray tomography with off-center detector
Schaefer, Dirk; Grass, Michael; Haar, Peter van de
2011-05-15
Purpose: Circular scanning with an off-center planar detector is an acquisition scheme that saves detector area while keeping a large field of view (FOV). Several filtered back-projection (FBP) algorithms have been proposed earlier. The purpose of this work is to present two newly developed back-projection filtration (BPF) variants and to evaluate the image quality of these methods compared with the existing state-of-the-art FBP methods. Methods: The first new BPF algorithm applies redundancy weighting of overlapping opposite projections before differentiation in a single projection. The second uses Katsevich-type differentiation involving two neighboring projections, followed by redundancy weighting and back-projection. An averaging scheme is presented to mitigate streak artifacts, inherent to circular BPF algorithms, along the Hilbert filter lines in the off-center transaxial slices of the reconstructions. Image quality is assessed visually on reconstructed slices of simulated and clinical data. Quantitative evaluation studies are performed with the Forbild head phantom by calculating root-mean-squared deviations (RMSDs) from the voxelized phantom for different detector-overlap settings, and by investigating the noise-resolution trade-off with a wire phantom in the full-detector and off-center scenarios. Results: The noise-resolution behavior of all off-center reconstruction methods corresponds to their full-detector performance, with the best resolution for the FDK-based methods in the given imaging geometry. With respect to RMSD and visual inspection, the proposed BPF with Katsevich-type differentiation outperforms all other methods for the smallest chosen detector overlap of about 15 mm. The best FBP method is the algorithm that is also based on Katsevich-type differentiation and subsequent redundancy weighting. For wider overlaps of about 40-50 mm, these two algorithms produce similar results, outperforming the other three methods. The clinical
Super-resolution image reconstruction methods applied to GFE-referenced navigation system
NASA Astrophysics Data System (ADS)
Yan, Lei; Lin, Yi; Tong, Qingxi
2007-11-01
Overlarge spacing in reference grid data, which biases the estimation of unsurveyed points and degrades the accuracy of correlation positioning, has long hampered research on Geophysical Fields of the Earth (GFE) referenced navigation. Super-resolution image reconstruction methods from the remote sensing field offer some inspiration, and one such method, Maximum A Posteriori (MAP) estimation based on Bayesian theory, is transplanted to grid data. The proposed algorithm, named MAP-G, can interpolate the reference data field while reflecting its overall distribution trend. Comparisons with traditional interpolation algorithms, together with simulation experiments on an underwater terrain/gravity-aided navigation platform, indicate that the MAP-G algorithm can effectively improve navigation performance.
Terahertz digital holography using angular spectrum and dual wavelength reconstruction methods.
Heimbeck, Martin S; Kim, Myung K; Gregory, Don A; Everitt, Henry O
2011-05-09
Terahertz digital off-axis holography is demonstrated using a Mach-Zehnder interferometer with a highly coherent, frequency-tunable, continuous-wave terahertz source emitting around 0.7 THz and a single, spatially scanned Schottky diode detector. The reconstruction of amplitude and phase objects is performed digitally using the angular spectrum method in conjunction with Fourier-space filtering to reduce noise from the twin image and the DC term. Phase unwrapping is achieved using the dual-wavelength method, which offers an automated approach to overcoming the 2π phase ambiguity. Potential applications for nondestructive testing and evaluation of visually opaque dielectric and composite objects are discussed. © 2011 Optical Society of America
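The angular spectrum reconstruction mentioned above amounts to FFT-based free-space propagation of the recorded complex field: transform, multiply by the free-space transfer function, and transform back. A minimal sketch (generic parameters; the aperture, grid size, and propagation distance are illustrative assumptions, not the authors' setup):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a sampled complex field a distance z via the angular
    spectrum method: FFT, multiply by exp(i*kz*z) for each spatial
    frequency, inverse FFT. Evanescent components are zeroed out."""
    n = field.shape[0]
    kx = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    KX, KY = np.meshgrid(kx, kx)
    k = 2 * np.pi / wavelength
    arg = k**2 - KX**2 - KY**2
    kz = np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)  # transfer function, propagating only
    return np.fft.ifft2(np.fft.fft2(field) * H)

# 0.7 THz -> wavelength ~0.43 mm; propagate a 5 mm circular aperture by 50 mm.
wavelength = 3e8 / 0.7e12
n, dx = 256, 0.2e-3
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2] * dx
aperture = (x**2 + y**2 < (5e-3) ** 2).astype(complex)
field_z = angular_spectrum_propagate(aperture, wavelength, dx, z=50e-3)
```

In the holographic setting the input would be the Fourier-filtered off-axis hologram rather than a synthetic aperture, and the dual-wavelength step would then compare the reconstructed phases at two nearby frequencies.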
Shatokhina, Iuliia; Obereder, Andreas; Rosensteiner, Matthias; Ramlau, Ronny
2013-04-20
We present a fast method for wavefront reconstruction from pyramid wavefront sensor (P-WFS) measurements. The method is based on an analytical relation between pyramid and Shack-Hartmann sensor (SH-WFS) data. The algorithm consists of two steps: a transformation of the P-WFS data to SH data, followed by the application of the cumulative reconstructor with domain decomposition, a wavefront reconstructor for SH-WFS measurements. Closed-loop simulations confirm that our method provides the same quality as the standard matrix-vector multiplication method. A complexity analysis as well as speed tests confirm that the method is very fast. Thus, the method can be used on extremely large telescopes, e.g., for eXtreme adaptive optics systems.
Lartizien, Carole; Kinahan, Paul E; Swensson, Richard; Comtat, Claude; Lin, Michael; Villemagne, Victor; Trébossen, Régine
2003-02-01
We compare 3 image reconstruction algorithms for use in 3-dimensional (3D) whole-body PET oncology imaging. We have previously shown that combining Fourier rebinning (FORE) with 2-dimensional (2D) statistical image reconstruction via the ordered-subsets expectation-maximization (OSEM) and attenuation-weighted OSEM (AWOSEM) algorithms demonstrates improvements in image signal-to-noise ratios compared with the commonly used analytic 3D reprojection (3DRP) or FORE+FBP (2D filtered backprojection) reconstruction methods. To assess the impact of these reconstruction methods on detecting and localizing small lesions, we performed a human observer study comparing the different reconstruction methods. The observer study used the same volumetric visualization software tool that is used in clinical practice, instead of the planar viewing mode generally used with the standard receiver operating characteristic (ROC) methodology. This change in the human evaluation strategy disallowed the use of a ROC analysis, so instead we compared the fraction of actual targets found and reported (fraction-found) and also investigated the use of an alternative free-response operating characteristic (AFROC) analysis. We used a non-Monte Carlo technique to generate 50 statistically accurate realizations of 3D whole-body PET data based on an extended mathematical cardiac torso (MCAT) phantom and with noise levels typical of clinical scans performed on a PET scanner. To each realization, we added 7 randomly located 1-cm-diameter lesions (targets) whose contrasts were varied to sample the range of detectability. These targets were inserted in 3 organs of interest: lungs, liver, and soft tissues. The images were reconstructed with 3 reconstruction strategies (FORE+OSEM, FORE+AWOSEM, and FORE+FBP). Five human observers reported (localized and rated) 7 targets within each volume image. An observer's performance accuracy with each algorithm was measured, as a function of the lesion contrast and
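After FORE collapses the 3D data into 2D sinograms, OSEM applies a multiplicative EM update over ordered subsets of rays. A toy sketch of the OSEM iteration itself (a small random system matrix stands in for the scanner geometry, and attenuation weighting is omitted; subset choice and counts are illustrative assumptions):

```python
import numpy as np

def osem(A, y, n_subsets=4, n_iter=5):
    """Ordered-subsets EM for an emission model y ~ Poisson(A x).
    Each sub-iteration applies the multiplicative ML-EM update
        x <- x * A_s^T (y_s / (A_s x)) / (A_s^T 1)
    restricted to one subset s of the measurement rows."""
    m, n = A.shape
    x = np.ones(n)  # positive initial image
    subsets = [np.arange(s, m, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for rows in subsets:
            As, ys = A[rows], y[rows]
            ratio = ys / np.maximum(As @ x, 1e-12)
            x = x * (As.T @ ratio) / np.maximum(As.T @ np.ones(len(rows)), 1e-12)
    return x

# Toy problem: 80 "rays", 20 image pixels, Poisson-noisy data.
rng = np.random.default_rng(1)
A = rng.uniform(0, 1, (80, 20))
x_true = rng.uniform(0.5, 2.0, 20)
y = rng.poisson(A @ x_true).astype(float)
x_osem = osem(A, y)
```

The multiplicative form keeps the image nonnegative automatically, which is one reason OSEM-type updates are preferred over unconstrained least squares in emission tomography.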
Optimal Feedback Strength for Noise Suppression in Autoregulatory Gene Networks
Singh, Abhyudai; Hespanha, Joao P.
2009-01-01
Autoregulatory feedback loops, where the protein expressed from a gene inhibits or activates its own expression, are common gene network motifs within cells. In these networks, stochastic fluctuations in protein levels are attributed to two factors: intrinsic noise (i.e., the randomness associated with mRNA/protein expression and degradation) and extrinsic noise (i.e., the noise caused by fluctuations in cellular components such as enzyme levels and gene-copy numbers). We present results that predict the level of both intrinsic and extrinsic noise in protein numbers as a function of quantities that can be experimentally determined and/or manipulated, such as the response time of the protein and the level of feedback strength. In particular, we show that for a fixed average number of protein molecules, decreasing response times leads to attenuation of both protein intrinsic and extrinsic noise, with the extrinsic noise being more sensitive to changes in the response time. We further show that for autoregulatory networks with negative feedback, the protein noise levels can be minimal at an optimal level of feedback strength. For such cases, we provide an analytical expression for the highest level of noise suppression and the amount of feedback that achieves this minimal noise. These theoretical results are shown to be consistent with, and to explain, recent experimental observations. Finally, we illustrate how measuring changes in the protein noise levels as the feedback strength is manipulated can be used to determine the level of extrinsic noise in these gene networks. PMID:19450473
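The intrinsic-noise side of this argument can be reproduced numerically with a standard Gillespie simulation. The sketch below uses a deliberately reduced one-step birth-death model (mRNA dynamics and extrinsic noise are omitted, and the hyperbolic repression function and all rate constants are assumptions, not the paper's model):

```python
import numpy as np

def gillespie_fano(k0, gamma, K, n_events=100000, seed=0):
    """Gillespie SSA for a negatively autoregulated gene: protein made at
    rate k0/(1 + p/K) and degraded at rate gamma*p. Returns the
    time-averaged mean and Fano factor (variance/mean) of the count p."""
    rng = np.random.default_rng(seed)
    p = 0
    m1 = m2 = t_tot = 0.0
    for _ in range(n_events):
        birth = k0 / (1.0 + p / K)
        death = gamma * p
        total = birth + death
        dt = rng.exponential(1.0 / total)
        m1 += p * dt          # accumulate time-weighted moments
        m2 += p * p * dt
        t_tot += dt
        if rng.random() < birth / total:
            p += 1
        else:
            p -= 1
    mean = m1 / t_tot
    var = m2 / t_tot - mean**2
    return mean, var / mean

# No feedback (K -> infinity) vs. negative feedback, tuned so both
# runs have a mean of roughly 50 proteins.
mean_nf, fano_nf = gillespie_fano(k0=50.0, gamma=1.0, K=float("inf"))
mean_fb, fano_fb = gillespie_fano(k0=300.0, gamma=1.0, K=10.0)
```

Without feedback the birth-death process is Poissonian (Fano factor near 1); with negative feedback the Fano factor drops well below 1 at the same mean, illustrating the noise suppression discussed above.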
Listening to the noise: random fluctuations reveal gene network parameters
Munsky, Brian; Khammash, Mustafa
2009-01-01
The cellular environment is abuzz with noise. The origin of this noise is attributed to the inherent random motion of reacting molecules that take part in gene expression and post-expression interactions. In this noisy environment, clonal populations of cells exhibit cell-to-cell variability that frequently manifests as significant phenotypic differences within the cellular population. The stochastic fluctuations in cellular constituents induced by noise can be measured and their statistics quantified. We show that these random fluctuations carry within them valuable information about the underlying genetic network. Far from being a nuisance, the ever-present cellular noise acts as a rich source of excitation that, when processed through a gene network, carries a distinctive fingerprint encoding a wealth of information about that network. We demonstrate that in some cases the analysis of these random fluctuations enables the full identification of network parameters, including those that may otherwise be difficult to measure. This establishes a potentially powerful approach for the identification of gene networks and offers a new window into the workings of these networks.
Identifying gene networks underlying the neurobiology of ethanol and alcoholism.
Wolen, Aaron R; Miles, Michael F
2012-01-01
For complex disorders such as alcoholism, identifying the genes linked to these diseases and their specific roles is difficult. Traditional genetic approaches, such as genetic association studies (including genome-wide association studies) and analyses of quantitative trait loci (QTLs) in both humans and laboratory animals already have helped identify some candidate genes. However, because of technical obstacles, such as the small impact of any individual gene, these approaches only have limited effectiveness in identifying specific genes that contribute to complex diseases. The emerging field of systems biology, which allows for analyses of entire gene networks, may help researchers better elucidate the genetic basis of alcoholism, both in humans and in animal models. Such networks can be identified using approaches such as high-throughput molecular profiling (e.g., through microarray-based gene expression analyses) or strategies referred to as genetical genomics, such as the mapping of expression QTLs (eQTLs). Characterization of gene networks can shed light on the biological pathways underlying complex traits and provide the functional context for identifying those genes that contribute to disease development.
Wang, Jin; Zhang, Chen; Wang, Yuanyuan
2017-05-30
In photoacoustic tomography (PAT), total variation (TV)-based iterative algorithms are reported to perform well in image reconstruction. However, the classical TV-based algorithm fails to preserve the edges and texture details of the image because it is not sensitive to the direction of image features. It is therefore of great significance to develop a new PAT reconstruction algorithm that effectively addresses this drawback of TV. In this paper, a PAT image reconstruction algorithm based on a directional total variation with adaptive directivity (DDTV) model, which weights and sums the image gradients according to the spatially varying directivity pattern of the image, is proposed to overcome the shortcomings of TV. The orientation field of the image is adaptively estimated through a gradient-based approach. The image gradients are weighted at every pixel based on both its anisotropic direction and another parameter that evaluates the reliability of the estimated orientation field. An efficient algorithm is derived to solve the iteration problem associated with DDTV, with the directivity of the image adaptively updated at each iteration step. Several texture images with various directivity patterns are chosen as phantoms for the numerical simulations. The 180-, 90- and 30-view circular scans are conducted. Results show that the DDTV-based PAT reconstruction algorithm outperforms the filtered back-projection (FBP) and TV algorithms in the quality of reconstructed images, with peak signal-to-noise ratios (PSNRs) exceeding those of TV and FBP by about 10 and 18 dB, respectively, for all cases. The Shepp-Logan phantom is studied with further discussion of multimode scanning, convergence speed, robustness and universality. In-vitro experiments are performed for both sparse-view circular scanning and linear scanning. The results further prove the effectiveness of the DDTV, which shows better results than TV, with sharper image edges and
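The baseline that DDTV extends is ordinary isotropic TV minimization. A compact sketch of the plain TV step, shown here in its denoising form with a smoothed gradient (all parameters are illustrative assumptions; DDTV would additionally weight the gradient components by a per-pixel orientation field, which is omitted):

```python
import numpy as np

def tv_denoise(img, lam=0.15, n_iter=200, eps=0.1, tau=0.1):
    """Gradient descent on the smoothed TV functional
        0.5*||u - img||^2 + lam * sum sqrt(|grad u|^2 + eps^2).
    Forward differences with periodic boundaries; the TV gradient is
    -div(grad u / |grad u|_eps)."""
    u = img.copy()
    for _ in range(n_iter):
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux**2 + uy**2 + eps**2)
        px, py = ux / mag, uy / mag
        div = px - np.roll(px, 1, axis=1) + py - np.roll(py, 1, axis=0)
        u -= tau * ((u - img) - lam * div)
    return u

# Toy phantom: a bright square plus Gaussian noise.
rng = np.random.default_rng(2)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0
noisy = clean + 0.3 * rng.standard_normal((64, 64))
den = tv_denoise(noisy)
```

In a full PAT reconstruction the data-fidelity term would compare a forward acoustic projection of `u` against the measured signals rather than `u` against a noisy image, but the TV (or DDTV) regularization step has the same structure.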
Matsumoto, Tomotaka; Akashi, Hiroshi; Yang, Ziheng
2015-07-01
Inference of gene sequences in ancestral species has been widely used to test hypotheses concerning the process of molecular sequence evolution. However, the approach may produce spurious results, mainly because using the single best reconstruction while ignoring the suboptimal ones creates systematic biases. Here we implement methods to correct for such biases and use computer simulation to evaluate their performance when the substitution process is nonstationary. The methods we evaluated include parsimony and likelihood using the single best reconstruction (SBR), averaging over reconstructions weighted by the posterior probabilities (AWP), and a new method called expected Markov counting (EMC) that produces maximum-likelihood estimates of substitution counts for any branch under a nonstationary Markov model. We simulated base composition evolution on a phylogeny for six species, with different selective pressures on G+C content among lineages, and compared the counts of nucleotide substitutions recorded during simulation with the inference by different methods. We found that large systematic biases resulted from (i) the use of parsimony or likelihood with SBR, (ii) the use of a stationary model when the substitution process is nonstationary, and (iii) the use of the Hasegawa-Kishino-Yano (HKY) model, which is too simple to adequately describe the substitution process. The nonstationary general time reversible (GTR) model, used with AWP or EMC, accurately recovered the substitution counts, even in cases of complex parameter fluctuations. We discuss model complexity and the compromise between bias and variance and suggest that the new methods may be useful for studying complex patterns of nucleotide substitution in large genomic data sets.
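The contrast between SBR and posterior-weighted inference can be stated concretely: SBR commits to the argmax ancestral state, while averaging over reconstructions (AWP) carries the full posterior forward. A toy posterior for a three-leaf star tree under the Jukes-Cantor model (branch lengths and leaf states are illustrative assumptions, not the paper's six-species nonstationary GTR setting):

```python
import numpy as np

def jc_matrix(t):
    """Jukes-Cantor transition matrix P(t) for a branch of length t
    (expected substitutions per site)."""
    same = 0.25 + 0.75 * np.exp(-4.0 * t / 3.0)
    diff = 0.25 - 0.25 * np.exp(-4.0 * t / 3.0)
    P = np.full((4, 4), diff)
    np.fill_diagonal(P, same)
    return P

def root_posterior(leaves, t):
    """Posterior over the root state of a star tree with equal branch
    lengths t, a uniform root prior, and observed leaves (0..3 = A,C,G,T)."""
    P = jc_matrix(t)
    lik = np.ones(4)
    for x in leaves:
        lik *= P[:, x]       # P[root -> observed leaf state]
    return lik / lik.sum()

post = root_posterior([0, 0, 1], t=0.2)  # leaves A, A, C
sbr = int(np.argmax(post))               # single best reconstruction
```

Downstream substitution counts computed from `sbr` alone ignore the mass the posterior places on the other states; weighting counts by `post` is the bias correction the AWP approach formalizes.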
A maximum-likelihood multi-resolution weak lensing mass reconstruction method
NASA Astrophysics Data System (ADS)
Khiabanian, Hossein
Gravitational lensing is formed when the light from a distant source is "bent" around a massive object. Lensing analysis has increasingly become the method of choice for studying dark matter, so much so that it is one of the main tools that will be employed in future surveys to study dark energy and its equation of state as well as the evolution of galaxy clustering. Unlike other popular techniques for selecting galaxy clusters (such as studying the X-ray emission or observing the over-densities of galaxies), weak gravitational lensing does not have the disadvantage of relying on the luminous matter and provides a parameter-free reconstruction of the projected mass distribution in clusters without dependence on baryon content. Gravitational lensing also provides a unique test for the presence of truly dark clusters, though it is otherwise an expensive detection method. Therefore it is essential to make use of all the information provided by the data to improve the quality of the lensing analysis. This thesis project has been motivated by the limitations encountered with the commonly used direct reconstruction methods of producing mass maps. We have developed a multi-resolution maximum-likelihood reconstruction method for producing two-dimensional mass maps using weak gravitational lensing data. To utilize all the shear information, we employ an iterative inverse method with a properly selected regularization coefficient which fits the deflection potential at the position of each galaxy. By producing mass maps with multiple resolutions in different parts of the observed field, we can achieve a uniform signal-to-noise level by increasing the resolution in regions of higher distortions or regions with an over-density of background galaxies. In addition, we are able to better study the substructure of massive clusters at a resolution which is not attainable in the rest of the observed field.
DNA-Binding Kinetics Determines the Mechanism of Noise-Induced Switching in Gene Networks.
Tse, Margaret J; Chu, Brian K; Roy, Mahua; Read, Elizabeth L
2015-10-20
Gene regulatory networks are multistable dynamical systems in which attractor states represent cell phenotypes. Spontaneous, noise-induced transitions between these states are thought to underlie critical cellular processes, including cell developmental fate decisions, phenotypic plasticity in fluctuating environments, and carcinogenesis. As such, there is increasing interest in the development of theoretical and computational approaches that can shed light on the dynamics of these stochastic state transitions in multistable gene networks. We applied a numerical rare-event sampling algorithm to study transition paths of spontaneous noise-induced switching for a ubiquitous gene regulatory network motif, the bistable toggle switch, in which two mutually repressive genes compete for dominant expression. We find that the method can efficiently uncover detailed switching mechanisms that involve fluctuations both in occupancies of DNA regulatory sites and copy numbers of protein products. In addition, we show that the rate parameters governing binding and unbinding of regulatory proteins to DNA strongly influence the switching mechanism. In a regime of slow DNA-binding/unbinding kinetics, spontaneous switching occurs relatively frequently and is driven primarily by fluctuations in DNA-site occupancies. In contrast, in a regime of fast DNA-binding/unbinding kinetics, switching occurs rarely and is driven by fluctuations in levels of expressed protein. Our results demonstrate how spontaneous cell phenotype transitions involve collective behavior of both regulatory proteins and DNA. Computational approaches capable of simulating dynamics over many system variables are thus well suited to exploring dynamic mechanisms in gene networks.
Integrative gene network construction to analyze cancer recurrence using semi-supervised learning.
Park, Chihyun; Ahn, Jaegyoon; Kim, Hyunjin; Park, Sanghyun
2014-01-01
The prognosis of cancer recurrence is an important research area in bioinformatics and is challenging due to the small sample sizes compared to the vast number of genes. There have been several attempts to predict cancer recurrence. Most studies employed a supervised approach, which uses only a few labeled samples. Semi-supervised learning can be a great alternative to solve this problem. Moreover, few attempts based on manifold assumptions have sought to reveal the detailed roles of identified cancer genes in recurrence. In order to predict cancer recurrence, we proposed a novel semi-supervised learning algorithm based on a graph regularization approach. We transformed the gene expression data into a graph structure for semi-supervised learning and integrated protein interaction data with the gene expression data to select functionally related gene pairs. Then, we predicted the recurrence of cancer by applying a regularization approach to the constructed graph containing both labeled and unlabeled nodes. The average improvement rate of accuracy for three different cancer datasets was 24.9% compared to existing supervised and semi-supervised methods. We performed functional enrichment on the gene networks used for learning, and identified that those gene networks are significantly associated with cancer-recurrence-related biological functions. Our algorithm was developed in standard C++ with the STL and is available for Linux and MS Windows. The executable program is freely available at: http://embio.yonsei.ac.kr/~Park/ssl.php.
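Graph-regularized semi-supervised prediction of this kind typically reduces to a linear system involving the graph Laplacian: labeled nodes anchor the solution, and the Laplacian term smooths scores across edges. A minimal sketch (toy graph, weights, and labels are assumptions; the paper's graph built from expression and protein-interaction data is more involved):

```python
import numpy as np

def label_propagation(W, y, labeled, lam=1.0):
    """Graph-regularized semi-supervised scores: minimize
        sum_{i in labeled} (f_i - y_i)^2 + lam * f^T L f,
    with L = D - W the graph Laplacian. Closed form:
        f = (M + lam*L)^(-1) M y,  M a 0/1 diagonal label mask."""
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W
    M = np.zeros((n, n))
    M[labeled, labeled] = 1.0
    # Tiny ridge keeps the system well-posed if the graph is disconnected.
    return np.linalg.solve(M + lam * L + 1e-9 * np.eye(n), M @ y)

# Toy graph: two 3-node cliques joined by one weak edge; one label each.
W = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    W[i, j] = W[j, i] = 1.0
W[2, 3] = W[3, 2] = 0.1
y = np.array([1.0, 0, 0, -1.0, 0, 0])   # node 0 labeled +1, node 3 labeled -1
f = label_propagation(W, y, labeled=[0, 3])
```

The unlabeled nodes inherit the sign of the clique they sit in, which is the graph-smoothness assumption ("functionally related genes share outcomes") at work.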
NASA Astrophysics Data System (ADS)
Xia, Yidong
The objective of this work is to develop a parallel, implicit reconstructed discontinuous Galerkin (RDG) method using Taylor basis for the solution of the compressible Navier-Stokes equations on 3D hybrid grids. This third-order accurate RDG method is based on a hierarchical weighted essentially non-oscillatory reconstruction scheme, termed HWENO(P1P2) to indicate that a quadratic polynomial solution is obtained from the underlying linear polynomial DG solution via a hierarchical WENO reconstruction. The HWENO(P1P2) is designed not only to enhance the accuracy of the underlying DG(P1) method but also to ensure non-linear stability of the RDG method. In this reconstruction scheme, a quadratic polynomial (P2) solution is first reconstructed using a least-squares approach from the underlying linear (P1) discontinuous Galerkin solution. The final quadratic solution is then obtained using a Hermite WENO reconstruction, which is necessary to ensure the linear stability of the RDG method on 3D unstructured grids. The first derivatives of the quadratic polynomial solution are then reconstructed using a WENO reconstruction in order to eliminate spurious oscillations in the vicinity of strong discontinuities, thus ensuring the non-linear stability of the RDG method. The parallelization in the RDG method is based on a message passing interface (MPI) programming paradigm, where the METIS library is used for the partitioning of a mesh into subdomain meshes of approximately the same size. Both multi-stage explicit Runge-Kutta and simple implicit backward Euler methods are implemented for time advancement in the RDG method. In the implicit method, three approaches: analytical differentiation, divided differencing (DD), and automatic differentiation (AD) are developed and implemented to obtain the resulting flux Jacobian matrices. The automatic differentiation is a set of techniques based on the mechanical application of the chain rule to obtain derivatives of a function given as
Has Ombrëdanne's method of hypospadic urethra reconstruction been ignored with reason?
Kvesić, Ante; Vuckov, Sime; Zupancić, Bozidar; Bastić, Mislav; Jonovska, Suzana; Cizmić, Ante; Klarić, Miro; Bahtijarević, Zoran
2005-06-01
From January 1970 to December 1979 inclusive, 193 boys (aged 2 to 16) underwent surgery for distal hypospadia using Ombredanne's method at the Department of Pediatric Surgery University Hospital Center Rijeka and at the Department of Pediatric Surgery Zagreb. Follow-up period was 7 to 20 years (mean 13.4). 20 (10.36%) subjects had post-operative organic complications and 15 (7.77%) of them required surgical correction. According to these findings, the success rate using Ombredanne's method of reconstruction of the hypospadic urethra in no way lags behind the success rate using MAGPI and Mathieu's methods as well as Preputial island flap urethroplasty for analogous cases. Out of 193 subjects who underwent surgery, 80 (41.45%) of those who were sexually mature and had normal psychosexual development were questioned. In this sample, 75 (93.75%) were satisfied with the post-operative appearance of the penis while only 5 (6.25%) were dissatisfied, 3 of which had hypoplastic penis. In 78 (97.50%) subjects questioned, the post-operative urinary squirt was normal and two of them had weak urinary squirt (2.50%), due to meatal stenosis. In conclusion, Ombredanne's method of reconstruction of the urethra in boys with distal hypospadia is equally successful as other methods used for this purpose.