Gene Network Reconstruction using Global-Local Shrinkage Priors
Leday, Gwenaël G.R.; de Gunst, Mathisca C.M.; Kpogbezan, Gino B.; van der Vaart, Aad W.; van Wieringen, Wessel N.; van de Wiel, Mark A.
2016-01-01
Reconstructing a gene network from high-throughput molecular data is an important but challenging task, as the number of parameters to estimate can easily be much larger than the sample size. A conventional remedy is to regularize or penalize the model likelihood. In network models, this is often done locally in the neighbourhood of each node or gene. However, estimation of the many regularization parameters is often difficult and can result in large statistical uncertainties. In this paper we propose to combine local regularization with global shrinkage of the regularization parameters to borrow strength between genes and improve inference. We employ a simple Bayesian model with non-sparse, conjugate priors to facilitate the use of fast variational approximations to posteriors. We discuss empirical Bayes estimation of hyper-parameters of the priors, and propose a novel approach to rank-based posterior thresholding. Using extensive model- and data-based simulations, we demonstrate that the proposed inference strategy outperforms popular (sparse) methods, yields more stable edges, and is more reproducible. The proposed method, termed ShrinkNet, is then applied to glioblastoma data to investigate the interactions between genes associated with patient survival.
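The core idea of local regularization with a globally shared shrinkage level can be illustrated with a minimal sketch: node-wise ridge regression where every gene shares one penalty. This is an idealization for intuition only; the paper's ShrinkNet additionally places conjugate priors on the penalty and fits hyper-parameters by empirical Bayes with variational approximations.

```python
import numpy as np

def nodewise_ridge(X, lam):
    """Regress each gene on all others with a single ridge penalty `lam`
    shared globally across genes (a toy stand-in for global-local
    shrinkage). Returns a (p, p) coefficient matrix with zero diagonal."""
    n, p = X.shape
    B = np.zeros((p, p))
    for j in range(p):
        others = [k for k in range(p) if k != j]
        A = X[:, others]
        # ridge solution (A'A + lam I)^{-1} A' y for gene j
        B[j, others] = np.linalg.solve(A.T @ A + lam * np.eye(p - 1),
                                       A.T @ X[:, j])
    return B

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5))      # 20 samples, 5 genes
B = nodewise_ridge(X, lam=1.0)
```

Edges would then be ranked by the magnitude (or posterior support) of the coefficients, which is where the paper's rank-based thresholding enters.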
Snapshot of iron response in Shewanella oneidensis by gene network reconstruction
Yang, Yunfeng; Harris, Daniel P.; Luo, Feng; Xiong, Wenlu; Joachimiak, Marcin; Wu, Liyou; Dehal, Paramvir; Jacobsen, Janet; Yang, Zamin; Palumbo, Anthony V.; Arkin, Adam P.; Zhou, Jizhong
2008-10-09
Background: Iron homeostasis of Shewanella oneidensis, a gamma-proteobacterium possessing high iron content, is regulated by the global transcription factor Fur. However, knowledge remains incomplete about other biological pathways that respond to changes in iron concentration, as well as details of the responses. In this work, we integrate physiological, transcriptomics and genetic approaches to delineate the iron response of S. oneidensis. Results: We show that the iron response in S. oneidensis is a rapid process. Temporal gene expression profiles were examined for iron depletion and repletion, and a gene co-expression network was reconstructed. Modules of iron acquisition systems, anaerobic energy metabolism and protein degradation were the most noteworthy in the gene network. Bioinformatics analyses suggested that genes in each of the modules might be regulated by the DNA-binding proteins Fur, CRP and RpoH, respectively. Closer inspection of these modules revealed a transcriptional regulator (SO2426) involved in iron acquisition and ten transcriptional factors involved in anaerobic energy metabolism. Selected genes in the network were analyzed by genetic studies. Disruption of genes encoding a putative alcaligin biosynthesis protein (SO3032) and a gene previously implicated in protein degradation (SO2017) led to severe growth deficiency under iron depletion conditions. Disruption of a novel transcriptional factor (SO1415) caused deficiency in both anaerobic iron reduction and growth with thiosulfate or TMAO as an electron acceptor, suggesting that SO1415 is required for specific branches of anaerobic energy metabolism pathways. Conclusions: Using a reconstructed gene network, we identified major biological pathways that were differentially expressed during iron depletion and repletion. Genetic studies not only demonstrated the importance of iron acquisition and protein degradation for iron depletion, but also characterized a novel transcriptional factor (SO1415) with a
A Synthesis Method of Gene Networks Having Cyclic Expression Pattern Sequences by Network Learning
NASA Astrophysics Data System (ADS)
Mori, Yoshihiro; Kuroe, Yasuaki
Recently, the synthesis of gene networks having desired functions has become of interest to many researchers, because it is a complementary approach to understanding gene networks and could be a first step towards controlling living cells. Several periodic phenomena exist in cells, e.g. the circadian rhythm, and these phenomena are considered to be generated by gene networks. We have already proposed a synthesis method for gene networks based on gene expression. The method is applicable to synthesizing gene networks possessing desired cyclic expression pattern sequences. It ensures that the realized expression pattern sequences are periodic; however, it does not ensure that the corresponding solution trajectories are periodic, so their oscillations might not be persistent. In this paper, to resolve this problem, we propose a synthesis method for gene networks possessing desired cyclic expression pattern sequences whose corresponding solution trajectories are also periodic. In the proposed method, the persistent oscillations of the solution trajectories are realized by specifying points through which the trajectories must pass.
Methods of Voice Reconstruction
Chen, Hung-Chi; Kim Evans, Karen F.; Salgado, Christopher J.; Mardini, Samir
2010-01-01
This article reviews methods of voice reconstruction. Nonsurgical methods of voice reconstruction include electrolarynx, pneumatic artificial larynx, and esophageal speech. Surgical methods of voice reconstruction include neoglottis, tracheoesophageal puncture, and prosthesis. Tracheoesophageal puncture can be performed in patients with pedicled flaps such as colon interposition, jejunum, or gastric pull-up or in free flaps such as perforator flaps, jejunum, and colon flaps. Other flaps for voice reconstruction include the ileocolon flap and jejunum. Laryngeal transplantation is also reviewed. PMID:22550443
Mine, Karina L; Shulzhenko, Natalia; Yambartsev, Anatoly; Rochman, Mark; Sanson, Gerdine F O; Lando, Malin; Varma, Sudhir; Skinner, Jeff; Volfovsky, Natalia; Deng, Tao; Brenna, Sylvia M F; Carvalho, Carmen R N; Ribalta, Julisa C L; Bustin, Michael; Matzinger, Polly; Silva, Ismael D C G; Lyng, Heidi; Gerbase-DeLima, Maria; Morgun, Andrey
2013-01-01
Although human papillomavirus was identified as an aetiological factor in cervical cancer, the key human gene drivers of this disease remain unknown. Here we apply an unbiased approach integrating gene expression and chromosomal aberration data. In an independent group of patients, we reconstruct and validate a gene regulatory meta-network, and identify cell cycle and antiviral genes that constitute two major subnetworks upregulated in tumour samples. These genes are located within the same regions as chromosomal amplifications, most frequently on 3q. We propose a model in which selected chromosomal gains drive activation of antiviral genes contributing to episomal virus elimination, which synergizes with cell cycle dysregulation. These findings may help to explain the paradox of episomal human papillomavirus decline in women with invasive cancer who were previously unable to clear the virus.
How to train your microbe: methods for dynamically characterizing gene networks
Castillo-Hair, Sebastian M.; Igoshin, Oleg A.; Tabor, Jeffrey J.
2015-01-01
Gene networks regulate biological processes dynamically. However, researchers have largely relied upon static perturbations, such as growth media variations and gene knockouts, to elucidate gene network structure and function. Thus, much of the regulation on the path from DNA to phenotype remains poorly understood. Recent studies have utilized improved genetic tools, hardware, and computational control strategies to generate precise temporal perturbations outside and inside of live cells. These experiments have, in turn, provided new insights into the organizing principles of biology. Here, we introduce the major classes of dynamical perturbations that can be used to study gene networks, and discuss technologies available for creating them in a wide range of microbial pathways. PMID:25677419
He, Feng; Balling, Rudi; Zeng, An-Ping
2009-11-01
Reverse engineering of gene networks aims at revealing the structure of the gene regulation network in a biological system by reasoning backward directly from experimental data. Many methods have recently been proposed for reverse engineering of gene networks by using gene transcript expression data measured by microarray. Whereas the potentials of the methods have been well demonstrated, the assumptions and limitations behind them are often not clearly stated or not well understood. In this review, we first briefly explain the principles of the major methods, identify the assumptions behind them and pinpoint the limitations and possible pitfalls in applying them to real biological questions. With regard to applications, we then discuss challenges in the experimental verification of gene networks generated from reverse engineering methods. We further propose an optimal experimental design for allocating sampling schedule and possible strategies for reducing the limitations of some of the current reverse engineering methods. Finally, we examine the perspectives for the development of reverse engineering and urge the need to move from revealing network structure to the dynamics of biological systems.
CHAI, Lian En; LAW, Chow Kuan; MOHAMAD, Mohd Saberi; CHONG, Chuii Khim; CHOON, Yee Wen; DERIS, Safaai; ILLIAS, Rosli Md
2014-01-01
Background: Gene expression data often contain missing expression values. Therefore, several imputation methods have been applied to solve the missing values, which include k-nearest neighbour (kNN), local least squares (LLS), and Bayesian principal component analysis (BPCA). However, the effects of these imputation methods on the modelling of gene regulatory networks from gene expression data have rarely been investigated and analysed using a dynamic Bayesian network (DBN). Methods: In the present study, we separately imputed datasets of the Escherichia coli S.O.S. DNA repair pathway and the Saccharomyces cerevisiae cell cycle pathway with kNN, LLS, and BPCA, and subsequently used these to generate gene regulatory networks (GRNs) using a discrete DBN. We made comparisons on the basis of previous studies in order to select the gene network with the least error. Results: We found that BPCA and LLS performed better on larger networks (based on the S. cerevisiae dataset), whereas kNN performed better on smaller networks (based on the E. coli dataset). Conclusion: The results suggest that the performance of each imputation method is dependent on the size of the dataset, and this subsequently affects the modelling of the resultant GRNs using a DBN. In addition, on the basis of these results, a DBN has the capacity to discover potential edges, as well as display interactions, between genes. PMID:24876803
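The kNN imputation step compared in this study can be sketched in a few lines: each missing entry is replaced by the mean of that column over the k nearest complete rows, with distances computed on the row's observed columns. This is a simplified illustration, not the exact variant benchmarked against LLS and BPCA.

```python
import numpy as np

def knn_impute(X, k=3):
    """Fill NaNs row by row using the k nearest complete rows
    (Euclidean distance over the row's observed columns).
    A minimal sketch of kNN imputation for expression matrices."""
    X = X.copy()
    complete = [r for r in range(X.shape[0]) if not np.isnan(X[r]).any()]
    for i in range(X.shape[0]):
        miss = np.isnan(X[i])
        if not miss.any():
            continue
        obs = ~miss
        cand = [r for r in complete if r != i]
        d = [np.linalg.norm(X[r, obs] - X[i, obs]) for r in cand]
        nearest = [cand[j] for j in np.argsort(d)[:k]]
        X[i, miss] = np.mean([X[r, miss] for r in nearest], axis=0)
    return X

# toy expression matrix: genes in columns, one missing value
X = np.array([[1.0, 2.0, 3.0],
              [1.1, np.nan, 3.1],
              [0.9, 2.1, 2.9],
              [5.0, 6.0, 7.0]])
X_filled = knn_impute(X, k=2)
```

The completed matrix would then be discretized and fed to the DBN structure-learning step.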
Computational methods for image reconstruction.
Chung, Julianne; Ruthotto, Lars
2017-04-01
Reconstructing images from indirect measurements is a central problem in many applications, including the subject of this special issue, quantitative susceptibility mapping (QSM). The process of image reconstruction typically requires solving an inverse problem that is ill-posed and large-scale and thus challenging to solve. Although the research field of inverse problems is thriving and very active with diverse applications, in this part of the special issue we will focus on recent advances in inverse problems that are specific to deconvolution problems, the class of problems to which QSM belongs. We will describe analytic tools that can be used to investigate underlying ill-posedness and apply them to the QSM reconstruction problem and the related extensively studied image deblurring problem. We will discuss state-of-the-art computational tools and methods for image reconstruction, including regularization approaches and regularization parameter selection methods. We finish by outlining some of the current trends and future challenges. Copyright © 2016 John Wiley & Sons, Ltd.
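The regularization approaches surveyed here have Tikhonov regularization as their simplest instance, which admits a closed form. The sketch below is generic and illustrative (the circular blur operator and the value of alpha are made up for the example; it is not QSM-specific code):

```python
import numpy as np

def tikhonov_deconvolve(A, y, alpha):
    """Closed-form minimizer of ||A x - y||^2 + alpha ||x||^2,
    the textbook Tikhonov-regularized solution of an ill-posed
    linear inverse problem."""
    p = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(p), A.T @ y)

# hypothetical forward operator: circular blur with kernel [0.25, 0.5, 0.25]
n = 9
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = 0.5
    A[i, (i - 1) % n] = 0.25
    A[i, (i + 1) % n] = 0.25

rng = np.random.default_rng(1)
x_true = rng.standard_normal(n)
y = A @ x_true                       # noiseless data for illustration
x_hat = tikhonov_deconvolve(A, y, alpha=1e-8)
```

With noisy data, alpha trades data fit against stability, which is exactly what the parameter selection methods discussed in the article aim to automate.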
Modern methods of image reconstruction.
NASA Astrophysics Data System (ADS)
Puetter, R. C.
The author reviews the image restoration or reconstruction problem in its general setting. He first discusses linear methods for solving the problem of image deconvolution, i.e. the case in which the data are a convolution of a point-spread function and an underlying unblurred image. Next, non-linear methods are introduced in the context of Bayesian estimation, including maximum likelihood and maximum entropy methods. Then, the author discusses the role of language and information theory concepts for data compression and solving the inverse problem. The concept of algorithmic information content (AIC) is introduced and is shown to be crucial to achieving optimal data compression and optimized Bayesian priors for image reconstruction. The dependence of the AIC on the selection of language then suggests how efficient coordinate systems for the inverse problem may be selected. The author also introduces pixon-based image restoration and reconstruction methods. The relation between image AIC and the Bayesian incarnation of Occam's Razor is discussed, as well as the relation of multiresolution pixon languages and image fractal dimension. Also discussed is the relation of pixons to the role played by the Heisenberg uncertainty principle in statistical physics and how pixon-based image reconstruction provides a natural extension to the Akaike information criterion for maximum likelihood. The author presents practical applications of pixon-based Bayesian estimation to the restoration of astronomical images. He discusses the effects of noise, effects of finite sampling on resolution, and special problems associated with spatially correlated noise introduced by mosaicing. Comparisons to other methods demonstrate the significant improvements afforded by pixon-based methods and illustrate the science that such performance improvements allow.
Method for positron emission mammography image reconstruction
Smith, Mark Frederick
2004-10-12
An image reconstruction method comprising accepting coincidence data from either a data file or in real time from a pair of detector heads, culling event data that is outside a desired energy range, optionally saving the desired data for each detector position or for each pair of detector pixels on the two detector heads, and then reconstructing the image either by backprojection image reconstruction or by iterative image reconstruction. In the backprojection image reconstruction mode, rays are traced between centers of lines of response (LORs), counts are then either allocated by nearest pixel interpolation or allocated by an overlap method, and then corrected for geometric effects and attenuation and the data file updated. If the iterative image reconstruction option is selected, one implementation is to compute a grid via Siddon ray tracing, and to perform maximum likelihood expectation maximization (MLEM) computed by either: a) tracing parallel rays between subpixels on opposite detector heads; or b) tracing rays between randomized endpoint locations on opposite detector heads.
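The iterative option rests on the standard MLEM update, x ← x · Aᵀ(y / Ax) / Aᵀ1, where A maps pixels to lines of response. The sketch below shows the generic algorithm on a made-up toy system matrix; it is not the patented PEM-specific ray-tracing implementation.

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """Maximum-likelihood expectation maximization for y ≈ A x, x >= 0.
    A is the system matrix (rows: lines of response, columns: pixels)."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                      # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x                          # forward projection
        ratio = np.divide(y, proj, out=np.zeros_like(y), where=proj > 0)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# hypothetical toy system: 3 LORs over 2 pixels, noiseless counts
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_hat = mlem(A, A @ np.array([2.0, 3.0]))
```

The multiplicative update preserves non-negativity automatically, which is one reason MLEM is the default iterative choice in emission tomography.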
Reconstructive methods in hearing disorders - surgical methods
Zahnert, Thomas
2005-01-01
Restoration of hearing is associated in many cases with resocialisation of those affected and therefore occupies an important place in a society where communication is becoming ever faster. Not all problems can be solved surgically. Even 50 years after the introduction of tympanoplasty, the hearing results are unsatisfactory and often do not reach the threshold for social hearing. The cause of this can in most cases be regarded as incomplete restoration of the mucosal function of the middle ear and tube, which leads to ventilation disorders of the ear and does not allow real vibration of the reconstructed middle ear. However, some cases are also caused by the biomechanics of the reconstructed ossicular chain. There has been progress in reconstructive middle ear surgery, which applies particularly to the development of implants. Implants made of titanium, which are distinguished by outstanding biocompatibility, delicate design and by biomechanical possibilities in the reconstruction of chain function, can be regarded as a new generation. Metal implants for the first time allow a controlled close fit with the remainder of the chain and integration of micromechanical functions in the implant. Moreover, there has also been progress in microsurgery itself. This applies particularly to the operative procedures for auditory canal atresia, the restoration of the tympanic membrane and the coupling of implants. This paper gives a summary of the current state of reconstructive microsurgery paying attention to the acousto-mechanical rules. PMID:22073050
Shape reconstruction methods with incomplete data
NASA Astrophysics Data System (ADS)
Nakahata, K.; Kitahara, M.
2000-05-01
Linearized inverse scattering methods are applied to the shape reconstruction of defects in elastic solids. The linearized methods are based on the Born approximation in the low frequency range and the Kirchhoff approximation in the high frequency range. The experimental measurement is performed to collect the scattering data from defects. The processed data from the measurement are fed into the linearized methods and the shape of the defect is reconstructed by the two linearized methods. The importance of scattering data in the low frequency range is pointed out not only for Born inversion but also for Kirchhoff inversion. In the ultrasonic measurement of a real structure, the access points of the sensor may be limited to one side of the structure or to only part of its surface. From the viewpoint of application, the incomplete scattering data are used as inputs for the shape reconstruction methods and the effect of the sensing points is discussed.
Replaying the evolutionary tape: biomimetic reverse engineering of gene networks.
Marbach, Daniel; Mattiussi, Claudio; Floreano, Dario
2009-03-01
In this paper, we suggest a new approach for reverse engineering gene regulatory networks, which consists of using a reconstruction process that is similar to the evolutionary process that created these networks. The aim is to integrate prior knowledge into the reverse-engineering procedure, thus biasing the search toward biologically plausible solutions. To this end, we propose an evolutionary method that abstracts and mimics the natural evolution of gene regulatory networks. Our method can be used with a wide range of nonlinear dynamical models. This allows us to explore novel model types such as the log-sigmoid model introduced here. We apply the biomimetic method to a gold-standard dataset from an in vivo gene network. The obtained results won a reverse engineering competition of the second DREAM conference (Dialogue on Reverse Engineering Assessments and Methods 2007, New York, NY).
Parametric reconstruction method in optical tomography.
Gu, Xuejun; Ren, Kui; Masciotti, James; Hielscher, Andreas H
2006-01-01
Optical tomography consists of reconstructing the spatial distribution of a medium's optical properties from measurements of transmitted light on the boundary of the medium. Mathematically, this problem amounts to parameter identification for the equation of radiative transfer (ERT) or its diffusion approximation (DA). However, this type of boundary-value problem is highly ill-posed, and the image reconstruction process is often unstable and non-unique. To overcome this problem, we present a parametric inverse method that considerably reduces the number of variables being reconstructed. In this way the amount of measured data is equal to or larger than the number of unknowns. Using synthetic data, we show examples that demonstrate how this approach leads to improvements in imaging quality.
Bullet trajectory reconstruction - Methods, accuracy and precision.
Mattijssen, Erwin J A T; Kerkhoff, Wim
2016-05-01
Based on the spatial relation between a primary and secondary bullet defect or on the shape and dimensions of the primary bullet defect, a bullet's trajectory prior to impact can be estimated for a shooting scene reconstruction. The accuracy and precision of the estimated trajectories will vary depending on variables such as, the applied method of reconstruction, the (true) angle of incidence, the properties of the target material and the properties of the bullet upon impact. This study focused on the accuracy and precision of estimated bullet trajectories when different variants of the probing method, ellipse method, and lead-in method are applied on bullet defects resulting from shots at various angles of incidence on drywall, MDF and sheet metal. The results show that in most situations the best performance (accuracy and precision) is seen when the probing method is applied. Only for the lowest angles of incidence the performance was better when either the ellipse or lead-in method was applied. The data provided in this paper can be used to select the appropriate method(s) for reconstruction and to correct for systematic errors (accuracy) and to provide a value of the precision, by means of a confidence interval of the specific measurement.
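The ellipse method mentioned above rests on a simple geometric relation: for an idealized elliptical defect, the sine of the impact angle equals the ratio of the minor to the major axis. The sketch below shows that textbook idealization only; real defects require the material- and angle-dependent corrections this study quantifies.

```python
import math

def ellipse_impact_angle(width, length):
    """Estimate the angle of incidence (degrees, measured from the
    target surface) from the axes of an elliptical bullet defect,
    using the idealized relation sin(theta) = width / length."""
    if not 0.0 < width <= length:
        raise ValueError("expected 0 < width <= length")
    return math.degrees(math.asin(width / length))

# a defect twice as long as it is wide implies a 30 degree impact angle
angle = ellipse_impact_angle(5.0, 10.0)
```

Systematic deviations from this relation (e.g. in drywall versus sheet metal) are precisely the accuracy errors the paper proposes to correct for.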
Magnetic flux reconstruction methods for shaped tokamaks
NASA Astrophysics Data System (ADS)
Tsui, Chi-Wa
1993-12-01
The use of a variational method permits the Grad-Shafranov (GS) equation to be solved by reducing the problem of solving the two dimensional nonlinear partial differential equation to the problem of minimizing a function of several variables. This high speed algorithm approximately solves the GS equation given a parameterization of the plasma boundary and the current profile (p' and FF' functions). The current profile parameters are treated as unknowns. The goal is to reconstruct the internal magnetic flux surfaces of a tokamak plasma and the toroidal current density profile from the external magnetic measurements. This is a classic problem of inverse equilibrium determination. The current profile parameters can be evaluated by several different matching procedures. Matching of magnetic flux and field at the probe locations using the Biot-Savart law and magnetic Green's function provides a robust method of magnetic reconstruction. The matching of poloidal magnetic field on the plasma surface provides a unique method of identifying the plasma current profile. However, the power of this method is greatly compromised by the experimental errors of the magnetic signals. The Casing principle provides a very fast way to evaluate the plasma contribution to the magnetic signals. It has the potential of being a fast matching method. The performance of this method is hindered by the accuracy of the poloidal magnetic field computed from the equilibrium solver. A flux reconstruction package has been implemented which integrates a vacuum field solver using a filament model for the plasma, a multilayer perceptron neural network as an interface, and the volume integration of plasma current density using Green's functions as a matching method for the current profile parameters. The flux reconstruction package is applied to compare with the ASEQ and EFIT data.
Introduction: Cancer Gene Networks.
Clarke, Robert
2017-01-01
Constructing, evaluating, and interpreting gene networks generally sits within the broader field of systems biology, which continues to emerge rapidly, particularly with respect to its application to understanding the complexity of signaling in the context of cancer biology. For the purposes of this volume, we take a broad definition of systems biology. Considering an organism or disease within an organism as a system, systems biology is the study of the integrated and coordinated interactions of the network(s) of genes, their variants both natural and mutated (e.g., polymorphisms, rearrangements, alternate splicing, mutations), their proteins and isoforms, and the organic and inorganic molecules with which they interact, to execute the biochemical reactions (e.g., as enzymes, substrates, products) that reflect the function of that system. Central to systems biology, and perhaps the only approach that can effectively manage the complexity of such systems, is the building of quantitative multiscale predictive models. The predictions of the models can vary substantially depending on the nature of the model and its input-output relationships. For example, a model may predict the outcome of a specific molecular reaction(s), a cellular phenotype (e.g., alive, dead, growth arrest, proliferation, and motility), a change in the respective prevalence of cell populations or subpopulations, or a patient or patient subgroup outcome(s). Such models necessarily require computers. Computational modeling can be thought of as using machine learning and related tools to integrate the very high dimensional data generated from modern, high throughput omics technologies including genomics (next generation sequencing), transcriptomics (gene expression microarrays; RNAseq), metabolomics and proteomics (ultra-high-performance liquid chromatography, mass spectrometry), and "subomic" technologies to study the kinome, methylome, and others. Mathematical modeling can be thought of as the use of ordinary
Gene network and pathway generation and analysis: Editorial
Zhao, Zhongming; Sanfilippo, Antonio P.; Huang, Kun
2011-02-18
The past decade has witnessed an exponential growth of biological data including genomic sequences, gene annotations, expression and regulation, and protein-protein interactions. A key aim in the post-genome era is to systematically catalogue gene networks and pathways in a dynamic living cell and apply them to study diseases and phenotypes. To promote the research in systems biology and its application to disease studies, we organized a workshop focusing on the reconstruction and analysis of gene networks and pathways in any organisms from high-throughput data collected through techniques such as microarray analysis and RNA-Seq.
Hybrid stochastic simplifications for multiscale gene networks
Crudu, Alina; Debussche, Arnaud; Radulescu, Ovidiu
2009-01-01
Background Stochastic simulation of gene networks by Markov processes has important applications in molecular biology. The complexity of exact simulation algorithms scales with the number of discrete jumps to be performed. Approximate schemes reduce the computational time by reducing the number of simulated discrete events. Also, answering important questions about the relation between network topology and intrinsic noise generation and propagation should be based on general mathematical results. These general results are difficult to obtain for exact models. Results We propose a unified framework for hybrid simplifications of Markov models of multiscale stochastic gene network dynamics. We discuss several possible hybrid simplifications, and provide algorithms to obtain them from pure jump processes. In hybrid simplifications, some components are discrete and evolve by jumps, while other components are continuous. Hybrid simplifications are obtained by partial Kramers-Moyal expansion [1-3], which is equivalent to the application of the central limit theorem to a sub-model. By averaging and variable aggregation we drastically reduce simulation time and eliminate non-critical reactions. Hybrid and averaged simplifications can be used for more effective simulation algorithms and for obtaining general design principles relating noise to topology and time scales. The simplified models reproduce with good accuracy the stochastic properties of the gene networks, including waiting times in intermittence phenomena, fluctuation amplitudes and stationary distributions. The methods are illustrated on several gene network examples. Conclusion Hybrid simplifications can be used for onion-like (multi-layered) approaches to multi-scale biochemical systems, in which various descriptions are used at various scales. Sets of discrete and continuous variables are treated with different methods and are coupled together in a physically justified approach. PMID:19735554
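The exact jump-process baseline whose cost the hybrid simplifications reduce is the Gillespie stochastic simulation algorithm. A minimal sketch for a one-gene birth-death process (transcription at rate k_tx, degradation at rate k_deg·m; rate values are illustrative):

```python
import math
import random

def ssa_birth_death(k_tx=10.0, k_deg=1.0, t_end=50.0, seed=1):
    """Exact Gillespie (SSA) simulation of mRNA copy number m for a
    one-gene birth-death process; returns m at time t_end. Every
    discrete jump is simulated, which is what makes the exact scheme
    expensive for multiscale networks."""
    rng = random.Random(seed)
    t, m = 0.0, 0
    while True:
        a1, a2 = k_tx, k_deg * m          # reaction propensities
        a0 = a1 + a2
        t += -math.log(1.0 - rng.random()) / a0   # exponential waiting time
        if t >= t_end:
            return m
        if rng.random() * a0 < a1:
            m += 1                         # transcription event
        else:
            m -= 1                         # degradation event
```

A hybrid simplification would replace the fast degradation jumps by a continuous (diffusion) approximation while keeping rare events discrete, exactly the partial Kramers-Moyal idea described above.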
Inverse polynomial reconstruction method in DCT domain
NASA Astrophysics Data System (ADS)
Dadkhahi, Hamid; Gotchev, Atanas; Egiazarian, Karen
2012-12-01
The discrete cosine transform (DCT) offers superior energy compaction properties for a large class of functions and has been employed as a standard tool in many signal and image processing applications. However, it suffers from spurious behavior in the vicinity of edge discontinuities in piecewise smooth signals. To leverage the sparse representation provided by the DCT, in this article we derive a framework for inverse polynomial reconstruction in the DCT expansion. It yields the expansion of a piecewise smooth signal in terms of polynomial coefficients, obtained from the DCT representation of the same signal. Taking advantage of this framework, we show that it is feasible to recover piecewise smooth signals from a relatively small number of DCT coefficients with high accuracy. Furthermore, automatic methods based on the minimum description length principle and cross-validation are devised to select the polynomial orders, as required by the inverse polynomial reconstruction method in practical applications. The developed framework can considerably enhance the performance of the DCT in sparse representation of piecewise smooth signals. Numerical results show that denoising and image approximation algorithms based on the proposed framework achieve significant improvements over wavelet-based counterparts for this class of signals.
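The energy-compaction property (and its loss at edges) that motivates this framework can be illustrated with a small numpy experiment. This is a sketch of generic DCT behavior, not the authors' inverse polynomial reconstruction method; the test signals and the 8-coefficient cutoff are arbitrary choices.

```python
import numpy as np

def dct2_matrix(n):
    """Orthonormal DCT-II matrix (rows are the cosine basis vectors)."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    C = np.cos(np.pi * k * (2 * i + 1) / (2 * n))
    C[0, :] /= np.sqrt(2)
    return C * np.sqrt(2.0 / n)

n = 64
C = dct2_matrix(n)
t = np.linspace(0.0, 1.0, n)
signals = {
    "smooth": np.cos(2 * np.pi * t),        # globally smooth
    "step": np.where(t < 0.5, 0.0, 1.0),    # piecewise smooth, one edge
}

# fraction of signal energy captured by the 8 largest DCT coefficients
fracs = {}
for name, x in signals.items():
    c = C @ x
    top = np.sort(np.abs(c))[::-1]
    fracs[name] = float(np.sum(top[:8] ** 2) / np.sum(c ** 2))
# the edge slows coefficient decay, so fracs["step"] < fracs["smooth"]
```

The discontinuity spreads energy into many high-order coefficients; re-expanding the signal in a polynomial basis, as the article proposes, is one way to restore rapid coefficient decay for piecewise smooth signals.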
Buffering in cyclic gene networks
NASA Astrophysics Data System (ADS)
Glyzin, S. D.; Kolesov, A. Yu.; Rozov, N. Kh.
2016-06-01
We consider cyclic chains of unidirectionally coupled delay differential-difference equations that are mathematical models of artificial oscillating gene networks. We establish that, for an appropriate choice of the parameters, the buffering phenomenon is realized in these systems: any given finite number of stable periodic motions of a special type, the so-called traveling waves, can coexist.
Accelerated augmented Lagrangian method for few-view CT reconstruction
NASA Astrophysics Data System (ADS)
Wu, Junfeng; Mou, Xuanqin
2012-03-01
Recently, iterative reconstruction algorithms with total variation (TV) regularization have shown tremendous power in image reconstruction from few-view projection data, but they are much more computationally demanding. In this paper, we propose an accelerated augmented Lagrangian method (ALM) for few-view CT reconstruction with total variation regularization. Experimental phantom results demonstrate that the proposed method not only reconstructs high-quality images from few-view projection data but also converges quickly to the optimal solution.
2014-01-01
Background To avoid the tedious task of reconstructing gene networks by experimentally testing the possible interactions between genes, it has become a trend to adopt automated reverse-engineering procedures instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks with evolutionary algorithms, two important issues must be addressed: premature convergence and high computational cost. To tackle the former problem and to enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel-model evolutionary algorithms. To overcome the latter and to speed up the computation, the mechanism of cloud computing is a promising solution; most popular is the MapReduce programming model, a fault-tolerant framework for implementing parallel algorithms to infer large gene networks. Results This work presents a practical framework to infer large gene networks, by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results analyzed. They show that our parallel approach can successfully infer networks with desired behaviors and that the computation time can be largely reduced. Conclusions Parallel population-based algorithms can effectively determine network parameters and they perform better than the widely-used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. By coupling the parallel model population-based optimization method and the parallel
Reverse engineering transcriptional gene networks.
Belcastro, Vincenzo; di Bernardo, Diego
2014-01-01
The aim of this chapter is a step-by-step guide on how to infer gene networks from gene expression profiles. The definition of a gene network is given in Subheading 1, where the different types of networks are discussed. The chapter then guides the readers through a data-gathering process in order to build a compendium of gene expression profiles from a public repository. Gene expression profiles are then discretized and a statistical relationship between genes, called mutual information (MI), is computed. Gene pairs with insignificant MI scores are then discarded by applying one of the described pruning steps. The retained relationships are then used to build up a Boolean adjacency matrix used as input for a clustering algorithm to divide the network into modules (or communities). The gene network can then be used as a hypothesis generator for discovering gene function and analyzing gene signatures. Some case studies are presented, and an online web-tool called Netview is described.
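The discretize-then-score pipeline outlined in this chapter can be sketched roughly as follows. This is a minimal illustration with a fixed MI cutoff and invented synthetic profiles; real pipelines such as the one described prune edges by permutation-based significance rather than a hand-picked threshold.

```python
import numpy as np
from itertools import combinations

def mutual_information(x, y, bins=3):
    """MI (in nats) between two expression profiles after discretization
    into equal-width bins."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
n_samples = 200
g1 = rng.normal(size=n_samples)
g2 = g1 + 0.1 * rng.normal(size=n_samples)   # tightly coupled to g1
g3 = rng.normal(size=n_samples)              # independent of both
expr = np.vstack([g1, g2, g3])

# Boolean adjacency matrix: keep gene pairs whose MI clears the cutoff.
n_genes = expr.shape[0]
adj = np.zeros((n_genes, n_genes), dtype=bool)
for i, j in combinations(range(n_genes), 2):
    if mutual_information(expr[i], expr[j]) > 0.3:
        adj[i, j] = adj[j, i] = True
```

The resulting Boolean adjacency matrix is exactly the kind of object the chapter feeds into a clustering algorithm to divide the network into modules.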
Hybrid Method for Tokamak MHD Equilibrium Configuration Reconstruction
NASA Astrophysics Data System (ADS)
He, Hong-Da; Dong, Jia-Qi; Zhang, Jin-Hua; Jiang, Hai-Bin
2007-02-01
A hybrid method for tokamak MHD equilibrium configuration reconstruction is proposed and employed in the modified EFIT code. This method uses the free boundary tokamak equilibrium configuration reconstruction algorithm with one boundary point fixed. The results show that the position of the fixed point has explicit effects on the reconstructed divertor configurations. In particular, the separatrix of the reconstructed divertor configuration precisely passes the required position when the hybrid method is used in the reconstruction. The profiles of plasma parameters such as pressure and safety factor for reconstructed HL-2A tokamak configurations with the hybrid and the free boundary methods are compared. The possibility for applications of the method to swing the separatrix strike point on the divertor target plate is discussed.
Hong Luo; Hanping Xiao; Robert Nourgaliev; Chunpei Cai
2011-06-01
A comparative study of different reconstruction schemes for a reconstruction-based discontinuous Galerkin method, termed RDG(P1P2), is performed for compressible flow problems on arbitrary grids. The RDG method is designed to enhance the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution via a reconstruction scheme commonly used in the finite volume method. Both Green-Gauss and least-squares reconstruction methods and a least-squares recovery method are implemented to obtain a quadratic polynomial representation of the underlying discontinuous Galerkin linear polynomial solution on each cell. These three reconstruction/recovery methods are compared for a variety of compressible flow problems on arbitrary meshes to assess their accuracy and robustness. The numerical results demonstrate that all three reconstruction methods can significantly improve the accuracy of the underlying second-order DG method, although the least-squares reconstruction method provides the best performance in terms of both accuracy and robustness.
A new target reconstruction method considering atmospheric refraction
NASA Astrophysics Data System (ADS)
Zuo, Zhengrong; Yu, Lijuan
2015-12-01
In this paper, a new target reconstruction method that accounts for atmospheric refraction is presented to improve 3D reconstruction accuracy in long-range surveillance systems. The basic idea of the method is that the atmosphere between the camera and the target is partitioned into several thin radial layers, within each of which the density is regarded as uniform; then reverse tracking of the light propagation path from sensor to target is carried out by applying Snell's law at the interfaces between layers; finally, the average of the tracked target positions from the different cameras is taken as the reconstructed position. Reconstruction experiments were carried out, and the results showed that the new method achieves much better reconstruction accuracy than the traditional stereoscopic reconstruction method.
Gene networks controlling petal organogenesis.
Huang, Tengbo; Irish, Vivian F
2016-01-01
One of the biggest unanswered questions in developmental biology is how growth is controlled. Petals are an excellent organ system for investigating growth control in plants: petals are dispensable, have a simple structure, and are largely refractory to environmental perturbations that can alter their size and shape. In recent studies, a number of genes controlling petal growth have been identified. The overall picture of how such genes function in petal organogenesis is beginning to be elucidated. This review will focus on studies using petals as a model system to explore the underlying gene networks that control organ initiation, growth, and final organ morphology.
High resolution x-ray CMT: Reconstruction methods
Brown, J.K.
1997-02-01
This paper qualitatively discusses the primary characteristics of methods for reconstructing tomographic images from a set of projections. These reconstruction methods can be categorized as either "analytic" or "iterative" techniques. Analytic algorithms are derived from the formal inversion of equations describing the imaging process, while iterative algorithms incorporate a model of the imaging process and provide a mechanism to iteratively improve image estimates. Analytic reconstruction algorithms are typically computationally more efficient than iterative methods; however, analytic algorithms are available for a relatively limited set of imaging geometries and situations. Thus, the framework of iterative reconstruction methods is better suited for high accuracy, tomographic reconstruction codes.
Compressive measurement and feature reconstruction method for autonomous star trackers
NASA Astrophysics Data System (ADS)
Yin, Hang; Yan, Ye; Song, Xin; Yang, Yueneng
2016-12-01
Compressive sensing (CS) theory provides a framework for signal reconstruction using a sub-Nyquist sampling rate. CS theory enables the reconstruction of a signal that is sparse or compressible from a small set of measurements. Current CS applications in the optical field mainly focus on reconstructing the original image using optimization algorithms and conduct data processing on the full-dimensional image, which cannot reduce the data processing rate. This study exploits the spatial sparsity of star images and proposes a new compressive measurement and reconstruction method that extracts the star feature from compressive data and reconstructs it directly in the original image coordinates for attitude determination. A pixel-based folding model that preserves the star feature and enables feature reconstruction is presented to encode the original pixel location into the superposed space. A feature reconstruction method is then proposed to extract the star centroid by compensating for distortions and to decode the centroid without reconstructing the whole image, which reduces the sampling rate and the data processing rate at the same time. Statistical results on the proportion of star distortion and on false matching verify the correctness of the proposed method. The results also verify the robustness of the proposed method and demonstrate that its performance can be improved by sufficient measurements in noisy cases. Moreover, results on real star images ensure correct star centroid estimation for attitude determination and confirm the feasibility of applying the proposed method in a star tracker.
Exhaustive Search for Fuzzy Gene Networks from Microarray Data
Sokhansanj, B A; Fitch, J P; Quong, J N; Quong, A A
2003-07-07
Recent technological advances in high-throughput data collection allow for the study of increasingly complex systems on the scale of the whole cellular genome and proteome. Gene network models are required to interpret large and complex data sets. Rationally designed system perturbations (e.g. gene knock-outs, metabolite removal, etc) can be used to iteratively refine hypothetical models, leading to a modeling-experiment cycle for high-throughput biological system analysis. We use fuzzy logic gene network models because they have greater resolution than Boolean logic models and do not require the precise parameter measurement needed for chemical kinetics-based modeling. The fuzzy gene network approach is tested by exhaustive search for network models describing cyclin gene interactions in yeast cell cycle microarray data, with preliminary success in recovering interactions predicted by previous biological knowledge and other analysis techniques. Our goal is to further develop this method in combination with experiments we are performing on bacterial regulatory networks.
An image reconstruction method (IRBis) for optical/infrared interferometry
NASA Astrophysics Data System (ADS)
Hofmann, K.-H.; Weigelt, G.; Schertl, D.
2014-05-01
Aims: We present an image reconstruction method for optical/infrared long-baseline interferometry called IRBis (image reconstruction software using the bispectrum). We describe the theory and present applications to computer-simulated interferograms. Methods: The IRBis method can reconstruct an image from measured visibilities and closure phases. The applied optimization routine ASA_CG is based on conjugate gradients. The method allows the user to implement different regularizers, apply residual ratios as an additional metric for goodness-of-fit, and use previous iteration results as a prior to force convergence. Results: We present the theory of the IRBis method and several applications of the method to computer-simulated interferograms. The image reconstruction results show the dependence of the reconstructed image on the noise in the interferograms (e.g., for ten electron read-out noise and 139 to 1219 detected photons per interferogram), the regularization method, the angular resolution, and the reconstruction parameters applied. Furthermore, we present the IRBis reconstructions submitted to the interferometric imaging beauty contest 2012 initiated by the IAU Working Group on Optical/IR Interferometry and describe the performed data processing steps.
An alternative method of middle vault reconstruction.
Gassner, Holger G; Friedman, Oren; Sherris, David A; Kern, Eugene B
2006-01-01
Surgery of the nasal valves is a challenging aspect of rhinoplasty surgery. The middle nasal vault assumes an important role in certain aspects of nasal valve collapse. Techniques that address pathologies of the middle vault include the placement of spreader grafts and the butterfly graft. We present an alternative technique of middle vault reconstruction that allows simultaneous repair of nasal valve collapse and creation of a smooth dorsal profile. The surgical technique is described in detail and representative cases are discussed.
Optimization and Comparison of Different Digital Mammographic Tomosynthesis Reconstruction Methods
2007-04-01
[Abstract text not recovered; only reference fragments survive, citing the maximum-likelihood iterative algorithm (MLEM) of Wu et al., the tuned-aperture computed tomography (TACT) reconstruction methods developed by Webber, and an evaluation of linear and nonlinear tomosynthetic reconstruction methods in digital mammography (Acad. Radiol. 8, 219-224, 2001).]
Petrovskaya, Olga V; Petrovskiy, Evgeny D; Lavrik, Inna N; Ivanisenko, Vladimir A
2016-12-22
Gene network modeling is one of the most widely used approaches in systems biology. It allows for the study of the function of complex genetic systems, including so-called mosaic gene networks, which consist of functionally interacting subnetworks. We conducted a study of a mosaic gene network modeling method based on integration of models of gene subnetworks by linear control functionals. Automatic modeling of 10,000 synthetic mosaic gene regulatory networks was carried out using computer experiments on gene knockdowns/knockouts. Structural analysis of the graphs of the generated mosaic gene regulatory networks revealed that the most important factor for building accurate integrated mathematical models, among those analyzed in the study, is data on the expression of genes corresponding to vertices with high centrality.
An analytic reconstruction method for PET based on cubic splines
NASA Astrophysics Data System (ADS)
Kastis, George A.; Kyriakopoulou, Dimitra; Fokas, Athanasios S.
2014-03-01
PET imaging is an important nuclear medicine modality that measures the in vivo distribution of imaging agents labeled with positron-emitting radionuclides. Image reconstruction is an essential component in tomographic medical imaging. In this study, we present the mathematical formulation and an improved numerical implementation of an analytic, 2D reconstruction method called SRT, the Spline Reconstruction Technique. This technique is based on the numerical evaluation of the Hilbert transform of the sinogram via an approximation in terms of 'custom made' cubic splines. It also imposes sinogram thresholding, which restricts reconstruction to object pixels. Furthermore, by utilizing certain symmetries it achieves a reconstruction time similar to that of FBP. We have implemented SRT in the software library STIR and have evaluated this method using simulated PET data. We present reconstructed images from several phantoms. Sinograms have been generated at various Poisson noise levels, and 20 realizations of noise have been created at each level. In addition to visual comparisons of the reconstructed images, the contrast has been determined as a function of noise level. Further analysis includes the creation of line profiles, when necessary, to determine resolution. Numerical simulations suggest that the SRT algorithm produces fast and accurate reconstructions at realistic noise levels. The contrast is over 95% in all phantoms examined and is independent of noise level.
Anatomically-aided PET reconstruction using the kernel method
NASA Astrophysics Data System (ADS)
Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi
2016-09-01
This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.
A novel method of anterior lumbosacral cage reconstruction.
Mathios, Dimitrios; Kaloostian, Paul Edward; Bydon, Ali; Sciubba, Daniel M; Wolinsky, Jean Paul; Gokaslan, Ziya L; Witham, Timothy F
2014-02-01
Reconstruction of the lumbosacral junction is a considerable challenge for spinal surgeons due to the unique anatomical constraints of this region as well as the vectors of force that are applied focally in this area. The standard cages, both expandable and non-expandable, often fail to reconstitute the appropriate anatomical alignment of the lumbosacral junction. This inadequate reconstruction may predispose the patient to continued back pain and neurological symptoms as well as possible pseudarthrosis and instrumentation failure. The authors describe their preoperative planning and the technical characteristics of their novel reconstruction technique at the lumbosacral junction using a cage with adjustable caps. Based precisely on preoperative measurements that maintain the appropriate Cobb angle, they performed reconstruction of the lumbosacral junction in a series of 3 patients. All 3 patients had excellent installation of the cages used for reconstruction. Postoperative CT scans were used to radiographically confirm the appropriate reconstruction of the lumbosacral junction. All patients had a significant reduction in pain, had neurological improvement, and experienced no instrumentation failure at the time of latest follow-up. Taking into account the inherent morphology of the lumbosacral junction and carefully planning the technical characteristics of the cage installation preoperatively and intraoperatively, the authors achieved favorable clinical and radiographic outcomes in all 3 cases. Based on this small case series, this technique for reconstruction of the lumbosacral junction appears to be a safe and appropriate method of reconstruction of the anterior spinal column in this technically challenging region of the spine.
Combinatorial explosion in model gene networks
NASA Astrophysics Data System (ADS)
Edwards, R.; Glass, L.
2000-09-01
The explosive growth in knowledge of the genome of humans and other organisms leaves open the question of how the functioning of genes in interacting networks is coordinated for orderly activity. One approach to this problem is to study mathematical properties of abstract network models that capture the logical structures of gene networks. The principal issue is to understand how particular patterns of activity can result from particular network structures, and what types of behavior are possible. We study idealized models in which the logical structure of the network is explicitly represented by Boolean functions that can be represented by directed graphs on n-cubes, but which are continuous in time and described by differential equations, rather than being updated synchronously via a discrete clock. The equations are piecewise linear, which allows significant analysis and facilitates rapid integration along trajectories. We first give a combinatorial solution to the question of how many distinct logical structures exist for n-dimensional networks, showing that the number increases very rapidly with n. We then outline analytic methods that can be used to establish the existence, stability and periods of periodic orbits corresponding to particular cycles on the n-cube. We use these methods to confirm the existence of limit cycles discovered in a sample of a million randomly generated structures of networks of 4 genes. Even with only 4 genes, at least several hundred different patterns of stable periodic behavior are possible, many of them surprisingly complex. We discuss ways of further classifying these periodic behaviors, showing that small mutations (reversal of one or a few edges on the n-cube) need not destroy the stability of a limit cycle. Although these networks are very simple as models of gene networks, their mathematical transparency reveals relationships between structure and behavior; they suggest that the possibilities for orderly dynamics in such
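A minimal numerical illustration of this class of models (piecewise-linear equations whose logic is given by Boolean functions on the n-cube): a 3-gene cyclic repression loop integrated by forward Euler. The 3-gene loop, thresholds, and rates are illustrative choices, not the 4-gene networks sampled in the paper.

```python
import numpy as np

def glass_repressilator(t_end=30.0, dt=1e-3, theta=0.5):
    """Forward-Euler simulation of a 3-gene piecewise-linear (Glass)
    network: gene i is produced at unit rate while its repressor
    (gene i-1) is below threshold, and every gene decays linearly."""
    x = np.array([0.9, 0.1, 0.1])
    traj = []
    for _ in range(int(t_end / dt)):
        b = x > theta                            # Boolean state on the 3-cube
        prod = (~np.roll(b, 1)).astype(float)    # cyclic repression logic
        x = x + dt * (prod - x)                  # piecewise-linear flow
        traj.append(x.copy())
    return np.array(traj)

traj = glass_repressilator()
# a cyclic negative loop of length 3 sustains a stable limit cycle;
# count threshold crossings of gene 0 in the second half of the run
late = traj[len(traj) // 2:, 0] > 0.5
crossings = int(np.sum(late[1:] != late[:-1]))
```

The Boolean state vector `b` traces a cycle on the 3-cube while the continuous variables follow the piecewise-linear flow, which is exactly the structure-to-dynamics correspondence the paper analyzes for n = 4.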
Reconstruction methods for phase-contrast tomography
Raven, C.
1997-02-01
Phase contrast imaging with coherent x-rays can be divided into outline imaging and holography, depending on the wavelength λ, the object size d and the object-to-detector distance r. When r << d²/λ, phase contrast occurs only in regions where the refractive index changes rapidly, i.e. at interfaces and edges in the sample. With increasing object-to-detector distance we enter the regime of holographic imaging. The image contrast outside the shadow region of the object is due to interference of the direct, undiffracted beam and a beam diffracted by the object, or, in terms of holography, the interference of a reference wave with the object wave. Both outline imaging and holography offer the possibility to obtain three-dimensional information about the sample in conjunction with a tomographic technique, but the data treatment and the kind of information one can obtain from the reconstruction are different.
Yeast Ancestral Genome Reconstructions: The Possibilities of Computational Methods
NASA Astrophysics Data System (ADS)
Tannier, Eric
In 2006, a debate arose on the question of the efficiency of bioinformatics methods to reconstruct mammalian ancestral genomes. Three years later, Gordon et al. (PLoS Genetics, 5(5), 2009) chose not to use automatic methods to build up the genome of a 100-million-year-old Saccharomyces cerevisiae ancestor. Their manually constructed ancestor provides a reference genome to test whether automatic methods are indeed unable to approach confident reconstructions. Adapting several methodological frameworks to the same yeast gene order data, I discuss the possibilities, differences and similarities of the available algorithms for ancestral genome reconstruction. The methods can be classified into two types: local and global. Studying the properties of both helps to clarify what we can expect from their usage. Both methods propose contiguous ancestral regions that come very close (> 95% identity) to the manually predicted ancestral yeast chromosomes, with good coverage of the extant genomes.
A novel electron density reconstruction method for asymmetrical toroidal plasmas
Shi, N.; Ohshima, S.; Minami, T.; Nagasaki, K.; Yamamoto, S.; Mizuuchi, T.; Okada, H.; Kado, S.; Kobayashi, S.; Konoshima, S.; Sano, F.; Tanaka, K.; Ohtani, Y.; Zang, L.; Kenmochi, N.
2014-05-15
A novel reconstruction method is developed for acquiring the electron density profile from multi-channel interferometric measurements of strongly asymmetrical toroidal plasmas. It is based on a regularization technique, and a generalized cross-validation function is used to optimize the regularization parameter with the aid of singular value decomposition. The feasibility of the method was verified with simulated measurements based on a magnetic configuration of the flexible helical-axis heliotron device Heliotron J, which has an asymmetrical poloidal cross section. The successful reconstruction makes it possible to construct a multi-channel far-infrared laser interferometer on this device. The advantages of this method are demonstrated by comparison with a conventional method. The factors that may affect the accuracy of the results are investigated, and an error analysis is carried out. Based on the obtained results, the proposed method is highly promising for accurately reconstructing the electron density in asymmetrical toroidal plasmas.
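The ingredients named above, regularized inversion, singular value decomposition, and a generalized cross-validation (GCV) function for choosing the regularization parameter, can be sketched on a generic ill-posed linear problem. The forward matrix, noise level, and Tikhonov filter form below are stand-ins, not the Heliotron J interferometer geometry.

```python
import numpy as np

rng = np.random.default_rng(2)

# Smooth, severely ill-conditioned forward model and noisy data
# (a generic stand-in for a line-integral measurement geometry).
n = 40
idx = np.arange(n)
A = np.exp(-((idx[:, None] - idx[None, :]) / 4.0) ** 2)
x_true = np.exp(-((idx - 20.0) / 6.0) ** 2)
y = A @ x_true + 0.01 * rng.normal(size=n)

U, s, Vt = np.linalg.svd(A)
uy = U.T @ y

def gcv(lam):
    """Generalized cross-validation score for Tikhonov parameter lam,
    with filter factors f_i = s_i^2 / (s_i^2 + lam^2)."""
    f = s ** 2 / (s ** 2 + lam ** 2)
    resid = np.sum(((1.0 - f) * uy) ** 2)
    return resid / (n - np.sum(f)) ** 2

lams = np.logspace(-6, 1, 200)
lam = lams[int(np.argmin([gcv(l) for l in lams]))]   # GCV-optimal parameter
x_rec = Vt.T @ (s * uy / (s ** 2 + lam ** 2))        # filtered SVD solution
rel_err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
```

Writing the filtered coefficients as s·uy/(s² + λ²) avoids dividing by the near-zero singular values that make the unregularized inversion blow up.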
Assessing the Accuracy of Ancestral Protein Reconstruction Methods
Williams, Paul D; Pollock, David D; Blackburne, Benjamin P; Goldstein, Richard A
2006-01-01
The phylogenetic inference of ancestral protein sequences is a powerful technique for the study of molecular evolution, but any conclusions drawn from such studies are only as good as the accuracy of the reconstruction method. Every inference method leads to errors in the ancestral protein sequence, resulting in potentially misleading estimates of the ancestral protein's properties. To assess the accuracy of ancestral protein reconstruction methods, we performed computational population evolution simulations featuring near-neutral evolution under purifying selection, speciation, and divergence using an off-lattice protein model where fitness depends on the ability to be stable in a specified target structure. We were thus able to compare the thermodynamic properties of the true ancestral sequences with the properties of “ancestral sequences” inferred by maximum parsimony, maximum likelihood, and Bayesian methods. Surprisingly, we found that methods such as maximum parsimony and maximum likelihood that reconstruct a “best guess” amino acid at each position overestimate thermostability, while a Bayesian method that sometimes chooses less-probable residues from the posterior probability distribution does not. Maximum likelihood and maximum parsimony apparently tend to eliminate variants at a position that are slightly detrimental to structural stability simply because such detrimental variants are less frequent. Other properties of ancestral proteins might be similarly overestimated. This suggests that ancestral reconstruction studies require greater care to come to credible conclusions regarding functional evolution. Inferred functional patterns that mimic reconstruction bias should be reevaluated. PMID:16789817
A Comparison of Methods for Ocean Reconstruction from Sparse Observations
NASA Astrophysics Data System (ADS)
Streletz, G. J.; Kronenberger, M.; Weber, C.; Gebbie, G.; Hagen, H.; Garth, C.; Hamann, B.; Kreylos, O.; Kellogg, L. H.; Spero, H. J.
2014-12-01
We present a comparison of two methods for developing reconstructions of oceanic scalar property fields from sparse scattered observations. Observed data from deep sea core samples provide valuable information regarding the properties of oceans in the past. However, because the locations of sample sites are distributed on the ocean floor in a sparse and irregular manner, developing a global ocean reconstruction is a difficult task. Our methods include a flow-based and a moving least squares -based approximation method. The flow-based method augments the process of interpolating or approximating scattered scalar data by incorporating known flow information. The scheme exploits this additional knowledge to define a non-Euclidean distance measure between points in the spatial domain. This distance measure is used to create a reconstruction of the desired scalar field on the spatial domain. The resulting reconstruction thus incorporates information from both the scattered samples and the known flow field. The second method does not assume a known flow field, but rather works solely with the observed scattered samples. It is based on a modification of the moving least squares approach, a weighted least squares approximation method that blends local approximations into a global result. The modifications target the selection of data used for these local approximations and the construction of the weighting function. The definition of distance used in the weighting function is crucial for this method, so we use a machine learning approach to determine a set of near-optimal parameters for the weighting. We have implemented both of the reconstruction methods and have tested them using several sparse oceanographic datasets. Based upon these studies, we discuss the advantages and disadvantages of each method and suggest possible ways to combine aspects of both methods in order to achieve an overall high-quality reconstruction.
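The core idea of the second method, blending weighted local least-squares fits into a global approximation, can be sketched in one dimension. The Gaussian weight and bandwidth are illustrative assumptions; the paper's version additionally learns near-optimal weighting parameters and targets oceanographic fields rather than this toy function.

```python
import numpy as np

def mls_fit(x_obs, y_obs, x_eval, h=0.3):
    """Moving least squares in 1-D: at each evaluation point, fit a
    local linear polynomial by Gaussian-weighted least squares and
    evaluate it at that point."""
    result = np.empty_like(x_eval)
    for k, xe in enumerate(x_eval):
        sw = np.exp(-0.5 * ((x_obs - xe) / h) ** 2)   # sqrt of Gaussian weights
        B = np.vstack([np.ones_like(x_obs), x_obs - xe]).T
        coef, *_ = np.linalg.lstsq(B * sw[:, None], y_obs * sw, rcond=None)
        result[k] = coef[0]          # local polynomial value at xe
    return result

rng = np.random.default_rng(3)
x_obs = np.sort(rng.uniform(0.0, 2 * np.pi, 60))   # sparse, irregular sites
y_obs = np.sin(x_obs) + 0.05 * rng.normal(size=60)
x_eval = np.linspace(0.5, 2 * np.pi - 0.5, 50)
y_rec = mls_fit(x_obs, y_obs, x_eval)
max_err = float(np.max(np.abs(y_rec - np.sin(x_eval))))
```

Replacing the Euclidean distance in the weight with a flow-informed distance, as in the first method, changes only the computation of `sw`; the blending machinery stays the same.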
The frequency split method for helical cone-beam reconstruction.
Shechter, G; Köhler, Th; Altman, A; Proksa, R
2004-08-01
A new approximate method for the utilization of redundant data in helical cone-beam CT is presented. It is based on the observation that the original WEDGE method provides excellent image quality if only little more than 180 degrees data are used for back-projection, and that significant low-frequency artifacts appear if a larger amount of redundant data are used. This degradation is compensated by the frequency split method: The low-frequency part of the image is reconstructed using little more than 180 degrees of data, while the high frequency part is reconstructed using all data. The resulting algorithm shows no cone-beam artifacts in a simulation of a 64-row scanner. It is further shown that the frequency split method hardly degrades the signal-to-noise ratio of the reconstructed images and that it behaves robustly in the presence of motion.
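The frequency-split idea, low frequencies from the little-more-than-180-degree reconstruction and high frequencies from the full-data reconstruction, can be sketched as a simple Fourier-domain blend. This is a 2D toy, not the WEDGE-based helical cone-beam implementation, and the cutoff value is an assumption.

```python
import numpy as np

def frequency_split(img_short, img_full, cutoff=0.1):
    """Blend two reconstructions of the same slice in Fourier space:
    spatial frequencies below `cutoff` (cycles/pixel) come from the
    short-scan image, everything above from the full-data image."""
    F_short = np.fft.fft2(img_short)
    F_full = np.fft.fft2(img_full)
    fy = np.fft.fftfreq(img_short.shape[0])[:, None]
    fx = np.fft.fftfreq(img_short.shape[1])[None, :]
    low = np.hypot(fy, fx) < cutoff              # low-frequency mask
    return np.real(np.fft.ifft2(np.where(low, F_short, F_full)))

rng = np.random.default_rng(4)
img = rng.normal(size=(32, 32))
same = frequency_split(img, img)                 # identical inputs: a no-op
# the DC component (lowest frequency) is taken from the first image:
blend = frequency_split(np.ones((32, 32)), 2.0 * np.ones((32, 32)))
```

The split lets the low-frequency band, where the redundant-data artifacts appear, come from the well-behaved short reconstruction, while the full data still contribute their better noise statistics at high frequencies.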
Matrix-based image reconstruction methods for tomography
Llacer, J.; Meng, J.D.
1984-10-01
Matrix methods of image reconstruction have not been used, in general, because of the large size of practical matrices, ill-conditioning upon inversion, and the success of Fourier-based techniques. An exception is the work that has been done at the Lawrence Berkeley Laboratory for imaging with accelerated radioactive ions. An extension of that work into more general imaging problems shows that, with a correct formulation of the problem, positron tomography with ring geometries results in well-behaved matrices which can be used for image reconstruction with no distortion of the point response in the field of view and flexibility in the design of the instrument. Maximum Likelihood Estimator methods of reconstruction, which use system matrices tailored to specific instruments and do not need matrix inversion, are shown to result in good preliminary images. A parallel processing computer structure based on multiple inexpensive microprocessors is proposed as a system to implement the matrix-MLE methods.
Fast alternating projection methods for constrained tomographic reconstruction.
Liu, Li; Han, Yongxin; Jin, Mingwu
2017-01-01
The alternating projection algorithms are easy to implement and effective for large-scale complex optimization problems, such as constrained reconstruction of X-ray computed tomography (CT). A typical method is to use projection onto convex sets (POCS) for data fidelity and nonnegativity constraints combined with total variation (TV) minimization (so-called TV-POCS) for sparse-view CT reconstruction. However, this type of method relies on empirically selected parameters for satisfactory reconstruction, is generally slow, and lacks convergence analysis. In this work, we use a convex feasibility set approach to address the problems associated with TV-POCS and propose a framework using full sequential alternating projections or POCS (FS-POCS) to find the solution in the intersection of convex constraints of bounded TV function, bounded data fidelity error and non-negativity. The rationale behind FS-POCS is that the mathematically optimal solution of the constrained objective function may not be the physically optimal solution. The breakdown of constrained reconstruction into an intersection of several feasible sets can lead to faster convergence and better quantification of reconstruction parameters in a physically meaningful way, rather than by empirical trial and error. In addition, for large-scale optimization problems, first-order methods are usually used. Not only is the condition for convergence of gradient-based methods derived, but a primal-dual hybrid gradient (PDHG) method is also used for fast convergence of bounded TV. The newly proposed FS-POCS is evaluated and compared with TV-POCS and another convex feasibility projection method (CPTV) using both digital phantom and pseudo-real CT data to show its superior performance in reconstruction speed, image quality and quantification.
MR Image Reconstruction Using Block Matching and Adaptive Kernel Methods
Schmidt, Johannes F. M.; Santelli, Claudio; Kozerke, Sebastian
2016-01-01
An approach to Magnetic Resonance (MR) image reconstruction from undersampled data is proposed. Undersampling artifacts are removed using an iterative thresholding algorithm applied to nonlinearly transformed image block arrays. Each block array is transformed using kernel principal component analysis where the contribution of each image block to the transform depends in a nonlinear fashion on the distance to other image blocks. Elimination of undersampling artifacts is achieved by conventional principal component analysis in the nonlinear transform domain, projection onto the main components and back-mapping into the image domain. Iterative image reconstruction is performed by interleaving the proposed undersampling artifact removal step and gradient updates enforcing consistency with acquired k-space data. The algorithm is evaluated using retrospectively undersampled MR cardiac cine data and compared to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT reconstruction. Evaluation of image quality and root-mean-squared-error (RMSE) reveal improved image reconstruction for up to 8-fold undersampled data with the proposed approach relative to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT. In conclusion, block matching and kernel methods can be used for effective removal of undersampling artifacts in MR image reconstruction and outperform methods using standard compressed sensing and ℓ1-regularized parallel imaging methods. PMID:27116675
A reconstructed discontinuous Galerkin method for magnetohydrodynamics on arbitrary grids
NASA Astrophysics Data System (ADS)
Karami Halashi, Behrouz; Luo, Hong
2016-12-01
A reconstructed discontinuous Galerkin (rDG) method, designed not only to enhance the accuracy of DG methods but also to ensure the nonlinear stability of the rDG method, is developed for solving the Magnetohydrodynamics (MHD) equations on arbitrary grids. In this rDG(P1P2) method, a quadratic polynomial solution (P2) is first obtained using a Hermite Weighted Essentially Non-oscillatory (WENO) reconstruction from the underlying linear polynomial (P1) discontinuous Galerkin solution to ensure linear stability of the rDG method and to improve the efficiency of the underlying DG method. By taking advantage of readily available and yet invaluable information, namely the first derivatives in the DG formulation, the stencils used in the reconstruction involve only the von Neumann neighborhood (adjacent face-neighboring cells) and are thus compact. The first derivatives of the quadratic polynomial solution are then reconstructed using a WENO reconstruction in order to eliminate spurious oscillations in the vicinity of strong discontinuities, thus ensuring the nonlinear stability of the rDG method. The HLLD Riemann solver introduced in the literature for one-dimensional MHD problems is adopted in the normal direction to compute numerical fluxes. The divergence-free constraint is satisfied using the Locally Divergence Free (LDF) approach. The developed rDG method is used to compute a variety of 2D and 3D MHD problems on arbitrary grids to demonstrate its accuracy, robustness, and non-oscillatory property. Our numerical experiments indicate that the rDG(P1P2) method is able to capture shock waves sharply, essentially without any spurious oscillations, and achieves the designed third order of accuracy: one order higher than the underlying DG method.
Tomographic fluorescence reconstruction by a spectral projected gradient pursuit method
NASA Astrophysics Data System (ADS)
Ye, Jinzuo; An, Yu; Mao, Yamin; Jiang, Shixin; Yang, Xin; Chi, Chongwei; Tian, Jie
2015-03-01
In vivo fluorescence molecular imaging (FMI) has played an increasingly important role in preclinical biomedical research. Fluorescence molecular tomography (FMT) further upgrades the two-dimensional FMI optical information to a three-dimensional fluorescent source distribution, which can greatly facilitate applications in related studies. However, FMT presents a challenging inverse problem which is quite ill-posed and ill-conditioned. Continuous efforts to develop more practical and efficient methods for FMT reconstruction are still needed. In this paper, a method based on spectral projected gradient pursuit (SPGP) is proposed for FMT reconstruction. The proposed method is based on the directional pursuit framework. A mathematical strategy named the nonmonotone line search was associated with the SPGP method, which guaranteed global convergence. In addition, the Barzilai-Borwein step length was utilized to build the new step length of the SPGP method, which was able to speed up the convergence of this gradient method. To evaluate the performance of the proposed method, several heterogeneous simulation experiments, including multisource cases as well as comparative analyses, were conducted. The results demonstrated that the proposed method was able to achieve satisfactory source localizations with a bias of less than 1 mm; the computational efficiency of the method was one order of magnitude faster than that of the comparison method; and the fluorescence reconstructed by the proposed method had a higher contrast to the background than that of the comparison method. All the results demonstrated the potential of the proposed method for practical FMT applications.
Bubble reconstruction method for wire-mesh sensors measurements
NASA Astrophysics Data System (ADS)
Mukin, Roman V.
2016-08-01
A new algorithm is presented for post-processing of void fraction measurements with wire-mesh sensors, particularly for identifying and reconstructing bubble surfaces in a two-phase flow. This method is a combination of the bubble recognition algorithm presented in Prasser (Nuclear Eng Des 237(15):1608, 2007) and the Poisson surface reconstruction algorithm developed in Kazhdan et al. (Poisson surface reconstruction. In: Proceedings of the fourth eurographics symposium on geometry processing 7, 2006). To verify the proposed technique, the reconstructed individual bubble shapes were compared with those obtained numerically in Sato and Ničeno (Int J Numer Methods Fluids 70(4):441, 2012). Using the difference between reconstructed and reference bubble shapes, the accuracy of the proposed algorithm was estimated. In the next step, the algorithm was applied to void fraction measurements performed in Ylönen (High-resolution flow structure measurements in a rod bundle (Diss., Eidgenössische Technische Hochschule ETH Zürich, Nr. 20961, 2013) by means of wire-mesh sensors in a rod bundle geometry. The reconstructed bubble shape yields the bubble surface area and volume, and hence its Sauter diameter d_{32} as well. The Sauter diameter proved more suitable for bubble size characterization than the volumetric diameter d_{30}, and proved capable of capturing the bi-disperse bubble size distribution in the flow. The effect of a spacer grid was studied as well: For the given spacer grid and considered flow rates, the bubble size frequency distribution peaks at almost the same position for all cases, approximately at d_{32} = 3.5 mm. This finding can be related to the specific geometry of the spacer grid or the air injection device applied in the experiments, or even to more fundamental properties of the bubble breakup and coagulation processes. In addition, an application of the new algorithm for reconstruction of a large air-water interface in a tube bundle is
NASA Technical Reports Server (NTRS)
Newman, Timothy; Santhanam, Naveen; Zhang, Huijuan; Gallagher, Dennis
2003-01-01
A new method for reconstructing the global 3D distribution of plasma densities in the plasmasphere from a limited number of 2D views is presented. The method is aimed at using data from the Extreme Ultraviolet (EUV) sensor on NASA's Imager for Magnetopause-to-Aurora Global Exploration (IMAGE) satellite. Physical properties of the plasmasphere are exploited by the method to reduce the level of inaccuracy imposed by the limited number of views. The utility of the method is demonstrated on synthetic data.
Robust Methods for Sensing and Reconstructing Sparse Signals
ERIC Educational Resources Information Center
Carrillo, Rafael E.
2012-01-01
Compressed sensing (CS) is an emerging signal acquisition framework that goes against the traditional Nyquist sampling paradigm. CS demonstrates that a sparse, or compressible, signal can be acquired using a low rate acquisition process. Since noise is always present in practical data acquisition systems, sensing and reconstruction methods are…
An improved reconstruction method for cosmological density fields
NASA Technical Reports Server (NTRS)
Gramann, Mirt
1993-01-01
This paper proposes some improvements to existing reconstruction methods for recovering the initial linear density and velocity fields of the universe from the present large-scale density distribution. We derive the Eulerian continuity equation in the Zel'dovich approximation and show that, by applying this equation, we can trace the evolution of the gravitational potential of the universe more exactly than is possible with previous approaches based on the Zel'dovich-Bernoulli equation. The improved reconstruction method is tested using N-body simulations. Where the Zel'dovich-Bernoulli equation describes the formation of filaments, the Zel'dovich continuity equation also follows the clustering of clumps inside the filaments. Our reconstruction method recovers the true initial gravitational potential with an rms error about 3 times smaller than previous methods. We examine the recovery of the initial distribution of Fourier components and find the scale at which the recovered phases are scrambled with respect to their true initial values. Integrating the Zel'dovich continuity equation back in time, we can improve the spatial resolution of the reconstruction by a factor of about 2.
Method for image reconstruction of moving radionuclide source distribution
Stolin, Alexander V.; McKisson, John E.; Lee, Seung Joon; Smith, Mark Frederick
2012-12-18
A method for image reconstruction of moving radionuclide distributions. Its particular embodiment is for single photon emission computed tomography (SPECT) imaging of awake animals, though its techniques are general enough to be applied to other moving radionuclide distributions as well. The invention eliminates motion and blurring artifacts for image reconstructions of moving source distributions. This opens new avenues in the area of small animal brain imaging with radiotracers, which can now be performed without the perturbing influences of anesthesia or physical restraint on the biological system.
Endoscopic Skull Base Reconstruction: An Evolution of Materials and Methods.
Sigler, Aaron C; D'Anza, Brian; Lobo, Brian C; Woodard, Troy; Recinos, Pablo F; Sindwani, Raj
2017-03-31
Endoscopic skull base surgery has developed rapidly over the last decade, in large part because of the expanding armamentarium of endoscopic repair techniques. This article reviews the available technologies and techniques, including vascularized and nonvascularized flaps, synthetic grafts, sealants and glues, and multilayer reconstruction. Understanding which of these repair methods is appropriate and under what circumstances is paramount to achieving success in this challenging but rewarding field. A graduated approach to skull base reconstruction is presented to provide a systematic framework to guide selection of repair technique to ensure a successful outcome while minimizing morbidity for the patient.
Testing the global flow reconstruction method on coupled chaotic oscillators
NASA Astrophysics Data System (ADS)
Plachy, Emese; Kolláth, Zoltán
2010-03-01
Irregular behaviour of pulsating variable stars may occur due to low-dimensional chaos. To determine the quantitative properties of the dynamics in such systems, we apply a suitable time series analysis, the global flow reconstruction method. The robustness of the reconstruction can be tested through the resultant quantities, such as the Lyapunov dimension and the Fourier frequencies. The latter is especially important as it is directly derivable from the observed light curves. We have performed tests using coupled Rössler oscillators to investigate the possible connection between those quantities. In this paper we present our test results.
3D reconstruction methods of coronal structures by radio observations
NASA Technical Reports Server (NTRS)
Aschwanden, Markus J.; Bastian, T. S.; White, Stephen M.
1992-01-01
The ability to carry out the three dimensional (3D) reconstruction of structures in the solar corona would represent a major advance in the study of the physical properties in active regions and in flares. Methods which allow a geometric reconstruction of quasistationary coronal structures (for example active region loops) or dynamic structures (for example flaring loops) are described: stereoscopy of multi-day imaging observations by the VLA (Very Large Array); tomography of optically thin emission (in radio or soft x-rays); multifrequency band imaging by the VLA; and tracing of magnetic field lines by propagating electron beams.
Digital Signal Processing and Control for the Study of Gene Networks
Shin, Yong-Jun
2016-01-01
Thanks to the digital revolution, digital signal processing and control has been widely used in many areas of science and engineering today. It provides practical and powerful tools to model, simulate, analyze, design, measure, and control complex and dynamic systems such as robots and aircraft. Gene networks are also complex dynamic systems which can be studied via digital signal processing and control. Unlike conventional computational methods, this approach is capable of not only modeling but also controlling gene networks, since the experimental environment is mostly digital today. The overall aim of this article is to introduce digital signal processing and control as a useful tool for the study of gene networks. PMID:27102828
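As a minimal illustration of the signal-processing view of gene networks, a gene's expression level can be modeled as a first-order discrete-time system driven by a control input. The coefficients below are hypothetical, chosen only to show the approach, not values from the article:

```python
def simulate_gene(a, b, u, steps, x0=0.0):
    """First-order discrete-time model of gene expression:
    x[k+1] = a*x[k] + b*u  (a: retention/degradation factor, b: input gain).
    With |a| < 1 the expression level settles at the steady state b*u / (1 - a)."""
    x = x0
    trajectory = [x]
    for _ in range(steps):
        x = a * x + b * u
        trajectory.append(x)
    return trajectory

traj = simulate_gene(a=0.8, b=1.0, u=2.0, steps=100)
print(traj[-1])  # approaches the steady state 2.0 / (1 - 0.8) = 10.0
```

In this framing, choosing the input u to drive x toward a target level is a standard discrete-time control problem, which is the connection the article draws between gene networks and digital control.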
Parallel MR image reconstruction using augmented Lagrangian methods.
Ramani, Sathish; Fessler, Jeffrey A
2011-03-01
Magnetic resonance image (MRI) reconstruction using SENSitivity Encoding (SENSE) requires regularization to suppress noise and aliasing effects. Edge-preserving and sparsity-based regularization criteria can improve image quality, but they demand computation-intensive nonlinear optimization. In this paper, we present novel methods for regularized MRI reconstruction from undersampled sensitivity encoded data (SENSE reconstruction) using the augmented Lagrangian (AL) framework for solving large-scale constrained optimization problems. We first formulate regularized SENSE reconstruction as an unconstrained optimization task and then convert it to a set of (equivalent) constrained problems using variable splitting. We then attack these constrained versions in an AL framework using an alternating minimization method, leading to algorithms that can be implemented easily. The proposed methods are applicable to a general class of regularizers that includes popular edge-preserving (e.g., total variation) and sparsity-promoting (e.g., the ℓ1-norm of wavelet coefficients) criteria and combinations thereof. Numerical experiments with synthetic and in vivo human data illustrate that the proposed AL algorithms converge faster than both general-purpose optimization algorithms such as nonlinear conjugate gradient (NCG) and state-of-the-art MFISTA.
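The variable-splitting idea can be sketched on an ℓ1-regularized least-squares problem: introduce z = x, then alternate two easy minimizations with a multiplier update. This is a generic ADMM-style sketch of the AL framework, not the authors' SENSE-specific algorithm:

```python
import numpy as np

def al_l1_reconstruct(A, b, lam=0.1, rho=1.0, iters=200):
    """Augmented Lagrangian (ADMM-style) sketch for
    min_x 0.5*||A x - b||^2 + lam*||x||_1  via the split z = x."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)   # u: scaled dual variable
    AtA, Atb = A.T @ A, A.T @ b
    x_solve = np.linalg.inv(AtA + rho * np.eye(n))      # factor once, reuse every iteration
    for _ in range(iters):
        x = x_solve @ (Atb + rho * (z - u))             # quadratic subproblem (closed form)
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # soft-threshold
        u += x - z                                      # multiplier (dual) update
    return z
```

Each subproblem is simple because the splitting separates the quadratic data term from the nonsmooth ℓ1 term; this is the structural reason AL methods can outpace general-purpose solvers such as NCG on these problems.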
NASA Astrophysics Data System (ADS)
Chan, Harley; Gilbert, Ralph W.; Pagedar, Nitin A.; Daly, Michael J.; Irish, Jonathan C.; Siewerdsen, Jeffrey H.
2010-02-01
Esthetic appearance is one of the most important factors for reconstructive surgery. The current practice of maxillary reconstruction uses radial forearm, fibula or iliac crest osteocutaneous flaps to recreate the three-dimensional complex structures of the palate and maxilla. However, these bone flaps lack shape similarity to the palate and result in a less satisfactory esthetic outcome. Considering similarity factors and vasculature advantages, reconstructive surgeons have recently explored the use of scapular tip myo-osseous free flaps to restore the excised site. We have developed a new method that quantitatively evaluates the morphological similarity of the scapular tip bone and palate based on a diagnostic volumetric computed tomography (CT) image. This quantitative result was further interpreted as a color map rendered on the surface of a three-dimensional computer model. For surgical planning, this color interpretation could potentially assist the surgeon in orienting the bone flap for the best fit of the reconstruction site. With approval from the Research Ethics Board (REB) of the University Health Network, we conducted a retrospective analysis of CT images obtained from 10 patients. Each patient had CT scans including the maxilla and chest acquired on the same day. Based on this image set, we simulated total, subtotal and hemi palate reconstruction. The simulation procedure included volume segmentation, conversion of the segmented volume to a stereolithography (STL) model, manual registration, and computation of minimum geometric distances and curvature between STL models. Across the 10 patients' data, we found the overall root-mean-square (RMS) conformance was 3.71 ± 0.16 mm
A Robust Shape Reconstruction Method for Facial Feature Point Detection.
Tan, Shuqiu; Chen, Dongyi; Guo, Chenggang; Huang, Zhiqi
2017-01-01
Facial feature point detection has seen great research advances in recent years. Numerous methods have been developed and applied in practical face analysis systems. However, it is still a quite challenging task because of the large variability in expressions and gestures and the existence of occlusions in real-world photographs. In this paper, we present a robust sparse reconstruction method for face alignment problems. Instead of a direct regression between the feature space and the shape space, the concept of shape increment reconstruction is introduced. Moreover, a set of coupled overcomplete dictionaries, termed the shape increment dictionary and the local appearance dictionary, are learned in a regressive manner to select robust features and fit shape increments. Additionally, to make the learned model more generalized, we select the best matched parameter set through extensive validation tests. Experimental results on three public datasets demonstrate that the proposed method achieves better robustness than state-of-the-art methods.
GENIES: gene network inference engine based on supervised analysis.
Kotera, Masaaki; Yamanishi, Yoshihiro; Moriya, Yuki; Kanehisa, Minoru; Goto, Susumu
2012-07-01
Gene network inference engine based on supervised analysis (GENIES) is a web server to predict the unknown part of a gene network from various types of genome-wide data in the framework of supervised network inference. The originality of GENIES lies in the construction of a predictive model using partially known network information and in the integration of heterogeneous data with kernel methods. The GENIES server accepts any 'profiles' of genes or proteins (e.g. gene expression profiles, protein subcellular localization profiles and phylogenetic profiles) or pre-calculated gene-gene similarity matrices (or 'kernels') in the tab-delimited file format. As a training data set to learn a predictive model, the users can choose either known molecular network information in the KEGG PATHWAY database or their own gene network data. The user can also select an algorithm of supervised network inference, choose various parameters in the method, and control the weights of heterogeneous data integration. The server provides the list of newly predicted gene pairs, maps the predicted gene pairs onto the associated pathway diagrams in KEGG PATHWAY and indicates candidate genes for missing enzymes in organism-specific metabolic pathways. GENIES (http://www.genome.jp/tools/genies/) is publicly available as one of the genome analysis tools in GenomeNet.
Adaptive Kaczmarz Method for Image Reconstruction in Electrical Impedance Tomography
Li, Taoran; Kao, Tzu-Jen; Isaacson, David; Newell, Jonathan C.; Saulnier, Gary J.
2013-01-01
We present an adaptive Kaczmarz method for solving the inverse problem in electrical impedance tomography and determining the conductivity distribution inside an object from electrical measurements made on the surface. To best characterize an unknown conductivity distribution and avoid inverting the Jacobian-related term JᵀJ, which can be expensive in computation and memory for large-scale problems, we propose solving the inverse problem by applying the optimal current patterns for distinguishing the actual conductivity from the conductivity estimate between each iteration of the block Kaczmarz algorithm. With a novel subset scheme, the memory-efficient reconstruction algorithm, which appropriately combines the optimal current pattern generation with the Kaczmarz method, can produce more accurate and stable solutions adaptively as compared to traditional Kaczmarz and Gauss-Newton type methods. Choices of initial current pattern estimates are discussed in the paper. Several reconstruction image metrics are used to quantitatively evaluate the performance of the simulation results. PMID:23718952
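The classic (non-adaptive) Kaczmarz iteration underlying the method sweeps over the measurement rows and projects the current estimate onto each row's hyperplane. A minimal sketch of that row-action update, without the paper's optimal current patterns or block/subset scheme:

```python
import numpy as np

def kaczmarz(A, b, sweeps=100):
    """Classic Kaczmarz iteration: project the estimate x onto the
    hyperplane a_i . x = b_i of each measurement row in turn."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for ai, bi in zip(A, b):
            x += (bi - ai @ x) / (ai @ ai) * ai  # orthogonal projection onto the row
    return x

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])
print(kaczmarz(A, b))  # converges to the solution [1, 3]
```

Each update touches only one row of A, which is why Kaczmarz-type methods avoid forming or inverting JᵀJ and stay memory-efficient on large problems.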
Efficient ghost cell reconstruction for embedded boundary methods
NASA Astrophysics Data System (ADS)
Rapaka, Narsimha; Al-Marouf, Mohamad; Samtaney, Ravi
2016-11-01
A non-iterative linear reconstruction procedure for Cartesian grid embedded boundary methods is introduced. The method exploits the inherent geometrical advantage of the Cartesian grid and employs batch sorting of the ghost cells to eliminate the need for an iterative solution procedure. This reduces the computational cost of the reconstruction procedure significantly, especially for large-scale problems in a parallel environment that have significant communication overhead, e.g., patch-based adaptive mesh refinement (AMR) methods. In this approach, prior computation and storage of the weighting coefficients for the neighbour cells is not required, which is particularly attractive for moving boundary problems and memory-intensive stationary boundary problems. The method utilizes a compact and unique interpolation stencil while also providing second-order spatial accuracy. It provides a single-step, direct reconstruction for the ghost cells that enforces the boundary conditions on the embedded boundary. The method is extendable to higher-order interpolations as well. Examples that demonstrate the advantages of the present approach are presented. Supported by the KAUST Office of Competitive Research Funds under Award No. URF/1/1394-01.
Iterative reconstruction methods in X-ray CT.
Beister, Marcel; Kolditz, Daniel; Kalender, Willi A
2012-04-01
Iterative reconstruction (IR) methods have recently re-emerged in transmission x-ray computed tomography (CT). They were successfully used in the early years of CT, but given up when the amount of measured data increased because of the higher computational demands of IR compared to analytical methods. The availability of large computational capacities in normal workstations and the ongoing efforts towards lower doses in CT have changed the situation; IR has become a hot topic for all major vendors of clinical CT systems in the past 5 years. This review strives to provide information on IR methods and aims at interested physicists and physicians already active in the field of CT. We give an overview on the terminology used and an introduction to the most important algorithmic concepts including references for further reading. As a practical example, details on a model-based iterative reconstruction algorithm implemented on a modern graphics adapter (GPU) are presented, followed by application examples for several dedicated CT scanners in order to demonstrate the performance and potential of iterative reconstruction methods. Finally, some general thoughts regarding the advantages and disadvantages of IR methods as well as open points for research in this field are discussed.
Improving automated 3D reconstruction methods via vision metrology
NASA Astrophysics Data System (ADS)
Toschi, Isabella; Nocerino, Erica; Hess, Mona; Menna, Fabio; Sargeant, Ben; MacDonald, Lindsay; Remondino, Fabio; Robson, Stuart
2015-05-01
This paper aims to provide a procedure for improving automated 3D reconstruction methods via vision metrology. The 3D reconstruction problem is generally addressed using two different approaches. On the one hand, vision metrology (VM) systems try to accurately derive 3D coordinates of few sparse object points for industrial measurement and inspection applications; on the other, recent dense image matching (DIM) algorithms are designed to produce dense point clouds for surface representations and analyses. This paper strives to demonstrate a step towards narrowing the gap between traditional VM and DIM approaches. Efforts are therefore intended to (i) test the metric performance of the automated photogrammetric 3D reconstruction procedure, (ii) enhance the accuracy of the final results and (iii) obtain statistical indicators of the quality achieved in the orientation step. VM tools are exploited to integrate their main functionalities (centroid measurement, photogrammetric network adjustment, precision assessment, etc.) into the pipeline of 3D dense reconstruction. Finally, geometric analyses and accuracy evaluations are performed on the raw output of the matching (i.e. the point clouds) by adopting a metrological approach. The latter is based on the use of known geometric shapes and quality parameters derived from VDI/VDE guidelines. Tests are carried out by imaging the calibrated Portable Metric Test Object, designed and built at University College London (UCL), UK. It allows assessment of the performance of the image orientation and matching procedures within a typical industrial scenario, characterised by poor texture and known 3D/2D shapes.
Image reconstruction methods for the PBX-M pinhole camera.
Holland, A; Powell, E T; Fonck, R J
1991-09-10
We describe two methods that have been used to reconstruct the soft x-ray emission profile of the PBX-M tokamak from the projected images recorded by the PBX-M pinhole camera [Proc. Soc. Photo-Opt. Instrum. Eng. 691, 111 (1986)]. Both methods must accurately represent the shape of the reconstructed profile while also providing a degree of immunity to noise in the data. The first method is a simple least-squares fit to the data. This has the advantage of being fast and small and thus easily implemented on the PDP-11 computer used to control the video digitizer for the pinhole camera. The second method involves the application of a maximum entropy algorithm to an overdetermined system. This has the advantage of allowing the use of a default profile. This profile contains additional knowledge about the plasma shape that can be obtained from equilibrium fits to the external magnetic measurements. Additionally the reconstruction is guaranteed positive, and the fit to the data can be relaxed by specifying both the amount and distribution of noise in the image. The algorithm described has the advantage of being considerably faster for an overdetermined system than the usual Lagrange multiplier approach to finding the maximum entropy solution [J. Opt. Soc. Am. 62, 511 (1972); Rev. Sci. Instrum. 57, 1557 (1986)].
Optical Sensors and Methods for Underwater 3D Reconstruction
Massot-Campos, Miquel; Oliver-Codina, Gabriel
2015-01-01
This paper presents a survey on optical sensors and methods for 3D reconstruction in underwater environments. The techniques to obtain range data have been listed and explained, together with the different sensor hardware that makes them possible. The literature has been reviewed, and a classification has been proposed for the existing solutions. New developments, commercial solutions and previous reviews in this topic have also been gathered and considered. PMID:26694389
Efficient finite element method for grating profile reconstruction
NASA Astrophysics Data System (ADS)
Zhang, Ruming; Sun, Jiguang
2015-12-01
This paper concerns the reconstruction of grating profiles from scattering data. The inverse problem is formulated as an optimization problem with a regularization term. We devise an efficient finite element method (FEM) and employ a quasi-Newton method to solve it. For the direct problems, the FEM stiffness and mass matrices are assembled once at the beginning of the numerical procedure. Then only minor changes are made to the mass matrix at each iteration, which significantly reduces the computational cost. Numerical examples show that the method is effective and robust.
Computational methods estimating uncertainties for profile reconstruction in scatterometry
NASA Astrophysics Data System (ADS)
Gross, H.; Rathsfeld, A.; Scholze, F.; Model, R.; Bär, M.
2008-04-01
The solution of the inverse problem in scatterometry, i.e. the determination of periodic surface structures from light diffraction patterns, is incomplete without knowledge of the uncertainties associated with the reconstructed surface parameters. With decreasing feature sizes of lithography masks, increasing demands on metrology techniques arise. Scatterometry as a non-imaging indirect optical method is applied to periodic line-space structures in order to determine geometric parameters like side-wall angles, heights, top and bottom widths and to evaluate the quality of the manufacturing process. The numerical simulation of the diffraction process is based on the finite element solution of the Helmholtz equation. The inverse problem seeks to reconstruct the grating geometry from measured diffraction patterns. Restricting the class of gratings and the set of measurements, this inverse problem can be reformulated as a non-linear operator equation in Euclidean spaces. The operator maps the grating parameters to the efficiencies of diffracted plane wave modes. We employ a Gauss-Newton type iterative method to solve this operator equation and end up minimizing the deviation of the measured efficiency or phase shift values from the simulated ones. The reconstruction properties and the convergence of the algorithm, however, are controlled by the local conditioning of the non-linear mapping and the uncertainties of the measured efficiencies or phase shifts. In particular, the uncertainties of the reconstructed geometric parameters essentially depend on the uncertainties of the input data and can be estimated by various methods. We compare the results obtained from a Monte Carlo procedure to the estimations gained from the approximate covariance matrix of the profile parameters close to the optimal solution and apply them to EUV masks illuminated by plane waves with wavelengths in the range of 13 nm.
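The Gauss-Newton iteration described above, which minimizes the deviation between measured and simulated values, has the following generic shape (a sketch with a toy forward model standing in for the FEM-based diffraction solver; the function names and parameters are illustrative assumptions):

```python
import numpy as np

def gauss_newton(forward, jac, y, p0, iters=25, tol=1e-12):
    """Generic Gauss-Newton iteration for min_p ||forward(p) - y||^2."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = forward(p) - y                           # residual vs. measurements
        J = jac(p)                                   # Jacobian of the forward map
        dp = np.linalg.lstsq(J, -r, rcond=None)[0]   # Gauss-Newton step
        p = p + dp
        if np.linalg.norm(dp) < tol:
            break
    return p

# toy "forward model": two parameters of an exponential decay sampled at fixed points
t = np.linspace(0.0, 1.0, 20)

def forward(p):
    return p[0] * np.exp(-p[1] * t)

def jac(p):
    return np.column_stack([np.exp(-p[1] * t),
                            -p[0] * t * np.exp(-p[1] * t)])

y = forward(np.array([2.0, 3.0]))                    # synthetic "measurements"
p_hat = gauss_newton(forward, jac, y, p0=[1.5, 2.5])
```

The local conditioning the abstract mentions enters through the Jacobian `J`: a nearly rank-deficient `J` amplifies measurement uncertainty into parameter uncertainty.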
Reconstruction of the Sunspot Group Number: The Backbone Method
NASA Astrophysics Data System (ADS)
Svalgaard, Leif; Schatten, Kenneth H.
2016-11-01
We have reconstructed the sunspot-group count, not by comparisons with other reconstructions and correcting those where they were deemed to be deficient, but by a re-assessment of original sources. The resulting series is a pure solar index and does not rely on input from other proxies, e.g. radionuclides, auroral sightings, or geomagnetic records. "Backboning" the data sets, our chosen method, provides substance and rigidity by using long-time observers as a stiffness character. Solar activity, as defined by the Group Number, appears to reach and sustain for extended intervals of time the same level in each of the last three centuries since 1700 and the past several decades do not seem to have been exceptionally active, contrary to what is often claimed.
Belaineh, Getachew; Sumner, David; Carter, Edward; Clapp, David
2013-01-01
Potential evapotranspiration (PET) and reference evapotranspiration (RET) data are usually critical components of hydrologic analysis. Many different equations are available to estimate PET and RET. Most of these equations, such as the Priestley-Taylor and Penman-Monteith methods, rely on detailed meteorological data collected at ground-based weather stations. Few weather stations collect enough data to estimate PET or RET using one of the more complex evapotranspiration equations. Currently, satellite data integrated with ground meteorological data are used with one of these evapotranspiration equations to accurately estimate PET and RET. However, for periods earlier than the last few decades, historical reconstructions of PET and RET needed for many hydrologic analyses are limited by the paucity of satellite data and of some types of ground data. Air temperature stands out as the most generally available meteorological ground data type over the last century. Temperature-based approaches used with readily available historical temperature data offer the potential for long period-of-record PET and RET historical reconstructions. A challenge is the inconsistency between the more accurate, but more data intensive, methods appropriate for more recent periods and the less accurate, but less data intensive, methods appropriate to the more distant past. In this study, multiple methods are harmonized in a seamless reconstruction of historical PET and RET by quantifying and eliminating the biases of the simple Hargreaves-Samani method relative to the more complex and accurate Priestley-Taylor and Penman-Monteith methods. This harmonization process is used to generate long-term, internally consistent, spatiotemporal databases of PET and RET.
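One concrete instance of this harmonization idea is to compute temperature-based Hargreaves-Samani estimates and bias-correct them against a more complex reference series over an overlap period (a minimal sketch: the linear correction form, variable names, and toy values are illustrative assumptions, not the study's actual calibration):

```python
import numpy as np

def hargreaves_samani(tmin, tmax, ra):
    """Hargreaves-Samani reference ET (mm/day).

    tmin, tmax : daily air temperature extremes (deg C)
    ra         : extraterrestrial radiation as equivalent evaporation (mm/day)
    """
    tmean = (tmin + tmax) / 2.0
    return 0.0023 * ra * (tmean + 17.8) * np.sqrt(tmax - tmin)

def fit_bias_correction(hs_overlap, ref_overlap):
    """Fit a linear map HS -> reference over an overlap period; return corrector."""
    a, b = np.polyfit(hs_overlap, ref_overlap, 1)
    return lambda hs: a * hs + b

# toy overlap period: pretend the reference method reads 10% higher plus an offset
tmin = np.array([12.0, 14.0, 15.0, 13.0])
tmax = np.array([24.0, 27.0, 29.0, 25.0])
ra = np.full(4, 12.0)
hs = hargreaves_samani(tmin, tmax, ra)
ref = 1.1 * hs + 0.2            # stand-in for Penman-Monteith reference values
correct = fit_bias_correction(hs, ref)
```

The fitted corrector would then be applied to the long temperature-only record, yielding a series consistent with the modern, data-intensive reference.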
Two-dimensional signal reconstruction: The correlation sampling method
Roman, H. E.
2007-12-15
An accurate approach for reconstructing a time-dependent two-dimensional signal from non-synchronized time series recorded at points located on a grid is discussed. The method, denoted as correlation sampling, improves the standard conditional sampling approach commonly employed in the study of turbulence in magnetoplasma devices. Its implementation is illustrated in the case of an artificial time-dependent signal constructed using a fractal algorithm that simulates a fluctuating surface. A statistical method is also discussed for distinguishing coherent (i.e., collective) from purely random (noisy) behavior for such two-dimensional fluctuating phenomena.
Comparing 3D virtual methods for hemimandibular body reconstruction.
Benazzi, Stefano; Fiorenza, Luca; Kozakowski, Stephanie; Kullmer, Ottmar
2011-07-01
Reconstruction of fractured, distorted, or missing parts in the human skeleton presents an equal challenge in the fields of paleoanthropology, bioarcheology, forensics, and medicine. This is particularly important within disciplines such as orthodontics and surgery, when dealing with mandibular defects due to tumors, developmental abnormalities, or trauma. In such cases, proper restorations of both form (for esthetic purposes) and function (restoration of articulation, occlusion, and mastication) are required. Several digital approaches based on three-dimensional (3D) digital modeling, computer-aided design (CAD)/computer-aided manufacturing techniques, and more recently geometric morphometric methods have been used to solve this problem. Nevertheless, comparisons among their outcomes are rarely provided. In this contribution, three methods for hemimandibular body reconstruction have been tested. Two bone defects were virtually simulated in a 3D digital model of a human hemimandible. Accordingly, 3D digital scaffolds were obtained using the mirror copy of the unaffected hemimandible (Method 1), the thin plate spline (TPS) interpolation (Method 2), and the combination between TPS and CAD techniques (Method 3). The mirror copy of the unaffected hemimandible does not provide a suitable solution for bone restoration. The combination between TPS interpolation and CAD techniques (Method 3) produces an almost perfect-fitting 3D digital model that can be used for biocompatible custom-made scaffolds generated by rapid prototyping technologies.
Park, Gui-Yong; Cho, Hee-Eun; Lee, Byung-Il; Park, Seung-Ha
2016-01-01
Background The objective of this paper was to describe a novel technique for improving the maintenance of nipple projection in primary nipple reconstruction by using acellular dermal matrix as a strut in one of three different configurations, according to the method of prior breast reconstruction. The struts were designed to best fill the different types of dead spaces in nipple reconstruction depending on the breast reconstruction method. Methods A total of 50 primary nipple reconstructions were performed between May 2012 and May 2015. The prior breast reconstruction methods were latissimus dorsi (LD) flap (28 cases), transverse rectus abdominis myocutaneous (TRAM) flap (10 cases), or tissue expander/implant (12 cases). The nipple reconstruction technique involved the use of local flaps, including the C-V flap or star flap. A 1×2-cm acellular dermal matrix was placed into the core with O-, I-, and L-shaped struts for prior LD, TRAM, and expander/implant methods, respectively. The projection of the reconstructed nipple was measured at the time of surgery and at 3, 6, and 9 months postoperatively. Results The nine-month average maintenance of nipple projection was 73.0%±9.67% for the LD flap group using an O-strut, 72.0%±11.53% for the TRAM flap group using an I-strut, and 69.0%±10.82% for the tissue expander/implant group using an L-strut. There were no cases of infection, wound dehiscence, or flap necrosis. Conclusions The application of an acellular dermal matrix with a different kind of strut for each of 3 breast reconstruction methods is an effective addition to current techniques for improving the maintenance of long-term projection in primary nipple reconstruction. PMID:27689049
An Assessment of Iterative Reconstruction Methods for Sparse Ultrasound Imaging
Valente, Solivan A.; Zibetti, Marcelo V. W.; Pipa, Daniel R.; Maia, Joaquim M.; Schneider, Fabio K.
2017-01-01
Ultrasonic image reconstruction using inverse problems has recently appeared as an alternative to enhance ultrasound imaging over beamforming methods. This approach depends on the accuracy of the acquisition model used to represent transducers, reflectivity, and medium physics. Iterative methods, well known in general sparse signal reconstruction, are also suited for imaging. In this paper, a discrete acquisition model is assessed by solving a linear system of equations by an ℓ1-regularized least-squares minimization, where the solution sparsity may be adjusted as desired. The paper surveys 11 variants of four well-known algorithms for sparse reconstruction, and assesses their optimization parameters with the goal of finding the best approach for iterative ultrasound imaging. The strategy for the model evaluation consists of using two distinct datasets. We first generate data from a synthetic phantom that mimics real targets inside a professional ultrasound phantom device. This dataset is contaminated with Gaussian noise with an estimated SNR, and all methods are assessed by their resulting images and performances. The model and methods are then assessed with real data collected by a research ultrasound platform when scanning the same phantom device, and results are compared with beamforming. A distinct real dataset is finally used to further validate the proposed modeling. Although high computational effort is required by iterative methods, results show that the discrete model may lead to images closer to ground-truth than traditional beamforming. However, computing capabilities of current platforms need to evolve before frame rates currently delivered by ultrasound equipment are achievable. PMID:28282862
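The ℓ1-regularized least-squares formulation at the core of this family of methods can be solved with a basic iterative shrinkage-thresholding (ISTA) loop (a generic sketch, not one of the 11 surveyed variants; the acquisition matrix `A` and toy problem sizes are assumptions for illustration):

```python
import numpy as np

def ista(A, b, lam, iters=500):
    """ISTA for min_x 0.5*||A x - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - b) / L          # gradient step on the smooth part
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# toy sparse-recovery problem: 3 reflectors in a 20-pixel scene, 40 measurements
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 20)) / np.sqrt(40)
x_true = np.zeros(20)
x_true[[2, 7, 15]] = [1.0, -0.8, 0.5]
b = A @ x_true
x_hat = ista(A, b, lam=1e-3)
```

The regularization weight `lam` is the knob the abstract refers to when it says "the solution sparsity may be adjusted as desired": larger values drive more coefficients exactly to zero.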
Tensor-based dynamic reconstruction method for electrical capacitance tomography
NASA Astrophysics Data System (ADS)
Lei, J.; Mu, H. P.; Liu, Q. B.; Li, Z. H.; Liu, S.; Wang, X. Y.
2017-03-01
Electrical capacitance tomography (ECT) is an attractive visualization measurement method, in which the acquisition of high-quality images is beneficial for the understanding of the underlying physical or chemical mechanisms of the dynamic behaviors of the measurement objects. In real-world measurement environments, imaging objects are often in a dynamic process, and the exploitation of the spatial-temporal correlations related to the dynamic nature will contribute to improving the imaging quality. Different from existing imaging methods that are often used in ECT measurements, in this paper a dynamic image sequence is stacked into a third-order tensor that consists of a low rank tensor and a sparse tensor within the framework of the multiple measurement vectors model and the multi-way data analysis method. The low rank tensor models the similar spatial distribution information among frames, which is slowly changing over time, and the sparse tensor captures the perturbations or differences introduced in each frame, which is rapidly changing over time. With the assistance of the Tikhonov regularization theory and the tensor-based multi-way data analysis method, a new cost function, with the considerations of the multi-frames measurement data, the dynamic evolution information of a time-varying imaging object and the characteristics of the low rank tensor and the sparse tensor, is proposed to convert the imaging task in the ECT measurement into a reconstruction problem of a third-order image tensor. An effective algorithm is developed to search for the optimal solution of the proposed cost function, and the images are reconstructed via a batching pattern. The feasibility and effectiveness of the developed reconstruction method are numerically validated.
Reconstruction and analysis of hybrid composite shells using meshless methods
NASA Astrophysics Data System (ADS)
Bernardo, G. M. S.; Loja, M. A. R.
2017-02-01
The importance of focusing on the research of viable models to predict the behaviour of structures which may possess in some cases complex geometries is an issue that is growing in different scientific areas, ranging from the civil and mechanical engineering to the architecture or biomedical devices fields. In these cases, the research effort to find an efficient approach to fit laser scanning point clouds to the desired surface has been increasing, leading to the possibility of modelling as-built/as-is structures and components' features. However, combining the task of surface reconstruction and the implementation of a structural analysis model is not a trivial task. Although there are works addressing these phases separately, there is still a need for approaches able to interconnect them in an efficient way. Therefore, achieving a representative geometric model able to be subsequently submitted to a structural analysis in a similar based platform is a fundamental step to establish an effective expeditious processing workflow. With the present work, one presents an integrated methodology based on the use of meshless approaches, to reconstruct shells described by point clouds, and to subsequently predict their static behaviour. These methods are highly appropriate for dealing with unstructured point clouds, as they do not impose any specific spatial or geometric requirement when implemented, depending only on the distance between the points. Details on the formulation, and a set of illustrative examples focusing on the reconstruction of cylindrical and double-curvature shells, and their subsequent analysis, are presented.
An improved image reconstruction method for optical intensity correlation Imaging
NASA Astrophysics Data System (ADS)
Gao, Xin; Feng, Lingjie; Li, Xiyu
2016-12-01
The intensity correlation imaging method is a novel kind of interference imaging and it has favorable prospects in deep space recognition. However, restricted by the low detecting signal-to-noise ratio (SNR), it's usually very difficult to obtain high-quality image of deep space object like high-Earth-orbit (HEO) satellite with existing phase retrieval methods. In this paper, based on the priori intensity statistical distribution model of the object and characteristics of measurement noise distribution, an improved method of Prior Information Optimization (PIO) is proposed to reduce the ambiguous images and accelerate the phase retrieval procedure thus realizing fine image reconstruction. As the simulations and experiments show, compared to previous methods, our method could acquire higher-resolution images with less error in low SNR condition.
Reverse engineering and analysis of large genome-scale gene networks.
Aluru, Maneesha; Zola, Jaroslaw; Nettleton, Dan; Aluru, Srinivas
2013-01-07
Reverse engineering the whole-genome networks of complex multicellular organisms continues to remain a challenge. While simpler models easily scale to large numbers of genes and gene expression datasets, more accurate models are compute intensive, limiting their scale of applicability. To enable fast and accurate reconstruction of large networks, we developed Tool for Inferring Network of Genes (TINGe), a parallel mutual information (MI)-based program. The novel features of our approach include: (i) B-spline-based formulation for linear-time computation of MI, (ii) a novel algorithm for direct permutation testing and (iii) development of parallel algorithms to reduce run-time and facilitate construction of large networks. We assess the quality of our method by comparison with ARACNe (Algorithm for the Reconstruction of Accurate Cellular Networks) and GeneNet and demonstrate its unique capability by reverse engineering the whole-genome network of Arabidopsis thaliana from 3137 Affymetrix ATH1 GeneChips in just 9 min on a 1024-core cluster. We further report on the development of a new software Gene Network Analyzer (GeNA) for extracting context-specific subnetworks from a given set of seed genes. Using TINGe and GeNA, we performed analysis of 241 Arabidopsis AraCyc 8.0 pathways, and the results are made available through the web.
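The pairwise statistic underlying such MI-based co-expression networks can be illustrated with a plain histogram estimator of mutual information (a simple stand-in for the linear-time B-spline formulation used by TINGe; the bin count and synthetic data are illustrative assumptions):

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram estimate of MI (in nats) between two expression vectors."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()                       # joint distribution estimate
    px = pxy.sum(axis=1, keepdims=True)         # marginal of x
    py = pxy.sum(axis=0, keepdims=True)         # marginal of y
    nz = pxy > 0                                # skip empty cells (log 0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# a strongly dependent pair vs. an independent pair of "expression profiles"
rng = np.random.default_rng(2)
x = rng.standard_normal(2000)
y_dep = x + 0.1 * rng.standard_normal(2000)
y_ind = rng.standard_normal(2000)
mi_dep = mutual_information(x, y_dep)
mi_ind = mutual_information(x, y_ind)
```

In a network-inference setting, an edge would be kept only when the MI value exceeds a significance threshold, which is what the permutation testing in TINGe establishes.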
A Robust Shape Reconstruction Method for Facial Feature Point Detection
Huang, Zhiqi
2017-01-01
Facial feature point detection has received great research attention in recent years. Numerous methods have been developed and applied in practical face analysis systems. However, it is still a quite challenging task because of the large variability in expression and gestures and the existence of occlusions in real-world photographs. In this paper, we present a robust sparse reconstruction method for the face alignment problems. Instead of a direct regression between the feature space and the shape space, the concept of shape increment reconstruction is introduced. Moreover, a set of coupled overcomplete dictionaries termed the shape increment dictionary and the local appearance dictionary are learned in a regressive manner to select robust features and fit shape increments. Additionally, to make the learned model more generalized, we select the best matched parameter set through extensive validation tests. Experimental results on three public datasets demonstrate that the proposed method achieves better robustness than the state-of-the-art methods. PMID:28316615
The impact of HGT on phylogenomic reconstruction methods.
Lapierre, Pascal; Lasek-Nesselquist, Erica; Gogarten, Johann Peter
2014-01-01
Supermatrix and supertree analyses are frequently used to more accurately recover vertical evolutionary history but debate still exists over which method provides greater reliability. Traditional methods that resolve relationships among organisms from single genes are often unreliable because of the frequent lack of strong phylogenetic signal and the presence of systematic artifacts. Methods developed to reconstruct organismal history from multiple genes can be divided into supermatrix and supertree approaches. A supermatrix analysis consists of the concatenation of multiple genes into a single, possibly partitioned alignment, from which phylogenies are reconstructed using a variety of approaches. Supertrees build consensus trees from the topological information contained within individual gene trees. Both methods are now widely used and have been demonstrated to solve previously ambiguous or unresolved phylogenies with high statistical support. However, the amount of misleading signal needed to induce erroneous phylogenies for both strategies is still unknown. Using genome simulations, we test the accuracy of supertree and supermatrix approaches in recovering the true organismal phylogeny under increased amounts of horizontally transferred genes and changes in substitution rates. Our results show that overall, supermatrix approaches are preferable when a low amount of gene transfer is suspected to be present in the dataset, while supertrees have greater reliability in the presence of a moderate amount of misleading gene transfers. In the face of very high or very low substitution rates without horizontal gene transfers, supermatrix approaches outperform supertrees as individual gene trees remain unresolved and additional sequences contribute to a congruent phylogenetic signal.
Image reconstruction by the speckle-masking method.
Weigelt, G; Wirnitzer, B
1983-07-01
Speckle masking is a method for reconstructing high-resolution images of general astronomical objects from stellar speckle interferograms. In speckle masking no unresolvable star is required within the isoplanatic patch of the object. We present digital applications of speckle masking to close spectroscopic double stars. The speckle interferograms were recorded with the European Southern Observatory's 3.6-m telescope. Diffraction-limited resolution (0.03 arcsec) was achieved, which is about 30 times higher than the resolution of conventional astrophotography.
Accelerated signal encoding and reconstruction using pixon method
Puetter, Richard; Yahil, Amos; Pina, Robert
2005-05-17
The method identifies a Pixon element, which is a fundamental and indivisible unit of information, and a Pixon basis, which is the set of possible functions from which the Pixon elements are selected. The actual Pixon elements selected from this basis during the reconstruction process represent the smallest number of such units required to fit the data and the minimum number of parameters necessary to specify the image. The Pixon kernels can have arbitrary properties (e.g., shape, size, and/or position) as needed to best fit the data.
Accelerated signal encoding and reconstruction using pixon method
Puetter, Richard; Yahil, Amos
2002-01-01
The method identifies a Pixon element, which is a fundamental and indivisible unit of information, and a Pixon basis, which is the set of possible functions from which the Pixon elements are selected. The actual Pixon elements selected from this basis during the reconstruction process represent the smallest number of such units required to fit the data and the minimum number of parameters necessary to specify the image. The Pixon kernels can have arbitrary properties (e.g., shape, size, and/or position) as needed to best fit the data.
Algebraic filter approach for fast approximation of nonlinear tomographic reconstruction methods
NASA Astrophysics Data System (ADS)
Plantagie, Linda; Batenburg, Kees Joost
2015-01-01
We present a computational approach for fast approximation of nonlinear tomographic reconstruction methods by filtered backprojection (FBP) methods. Algebraic reconstruction algorithms are the methods of choice in a wide range of tomographic applications, yet they require significant computation time, restricting their usefulness. We build upon recent work on the approximation of linear algebraic reconstruction methods and extend the approach to the approximation of nonlinear reconstruction methods which are common in practice. We demonstrate that if a blueprint image is available that is sufficiently similar to the scanned object, our approach can compute reconstructions that approximate iterative nonlinear methods, yet have the same speed as FBP.
A New Method for Coronal Magnetic Field Reconstruction
NASA Astrophysics Data System (ADS)
Yi, Sibaek; Choe, Gwangson; Lim, Daye
2015-08-01
We present a new, simple, variational method for reconstruction of coronal force-free magnetic fields based on vector magnetogram data. Our method employs vector potentials for magnetic field description in order to ensure the divergence-free condition. As boundary conditions, it only requires the normal components of magnetic field and current density so that the boundary conditions are not over-specified as in many other methods. The boundary normal current distribution is initially fixed once and for all and does not need continual adjustment as in stress-and-relax type methods. We have tested the computational code based on our new method in problems with known solutions and those with actual photospheric data. When solutions are fully given at all boundaries, the accuracy of our method is almost comparable to best performing methods in the market. When magnetic field data are given only at the photospheric boundary, our method outperforms other methods in most "figures of merit" devised by Schrijver et al. (2006). Furthermore, the residual force in the solution is at least an order of magnitude smaller than that of any other method. It can also accommodate the source-surface boundary condition at the top boundary. Our method is expected to contribute to the real time monitoring of the sun required for future space weather forecasts.
Reconstruction of Gene Networks of Iron Response in Shewanella oneidensis
Yang, Yunfeng; Harris, Daniel P; Luo, Feng; Joachimiak, Marcin; Wu, Liyou; Dehal, Paramvir; Jacobsen, Janet; Yang, Zamin Koo; Gao, Haichun; Arkin, Adam; Palumbo, Anthony Vito; Zhou, Jizhong
2009-01-01
It is of great interest to study the iron response of the γ-proteobacterium Shewanella oneidensis since it possesses a high content of iron and is capable of utilizing iron for anaerobic respiration. We report here that the iron response in S. oneidensis is a rapid process. To gain more insights into the bacterial response to iron, temporal gene expression profiles were examined for iron depletion and repletion, resulting in identification of iron-responsive biological pathways in a gene co-expression network. Iron acquisition systems, including genes unique to S. oneidensis, were rapidly and strongly induced by iron depletion, and repressed by iron repletion. Some were required for growth under iron depletion, as exemplified by the mutational analysis of the putative siderophore biosynthesis protein SO3032. Unexpectedly, a number of genes related to anaerobic energy metabolism were repressed by iron depletion and induced by repletion, which might be due to the iron storage potential of their protein products. Other iron-responsive biological pathways include protein degradation, aerobic energy metabolism and protein synthesis. Furthermore, sequence motifs enriched in gene clusters as well as their corresponding DNA-binding proteins (Fur, CRP and RpoH) were identified, resulting in a regulatory network of iron response in S. oneidensis. Together, this work provides an overview of iron response and reveals novel features in S. oneidensis, including Shewanella-specific iron acquisition systems, and suggests the intimate relationship between anaerobic energy metabolism and iron response.
Detection of driver pathways using mutated gene network in cancer.
Li, Feng; Gao, Lin; Ma, Xiaoke; Yang, Xiaofei
2016-06-21
Distinguishing driver pathways has been extensively studied because they are critical for understanding the development and molecular mechanisms of cancers. Most existing methods for driver pathways are based on high coverage as well as high mutual exclusivity, with the underlying assumption that mutations are exclusive. However, in many cases, mutated driver genes in the same pathways are not strictly mutually exclusive. Based on this observation, we propose an index for quantifying mutual exclusivity between gene pairs. Then, we construct a mutated gene network for detecting driver pathways by integrating the proposed index and coverage. The detection of driver pathways on the mutated gene network consists of two steps: raw pathways are obtained using a CPM method, and the final driver pathways are selected using a strict testing strategy. We apply this method to glioblastoma and breast cancers and find that our method is more accurate than state-of-the-art methods in terms of enrichment of KEGG pathways. Furthermore, the detected driver pathways intersect with well-known pathways with moderate exclusivity, which cannot be discovered using the existing algorithms. In conclusion, the proposed method provides an effective way to investigate driver pathways in cancers.
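The paper's actual index is not reproduced here, but a simple hypothetical pairwise index of the same flavor, the fraction of covered samples in which exactly one gene of the pair is mutated, can be sketched as follows (the function and its exact form are illustrative assumptions, not the authors' definition):

```python
import numpy as np

def exclusivity_index(a, b):
    """Hypothetical pairwise exclusivity: among samples mutated in at least
    one of the two genes, the fraction mutated in exactly one of them
    (1.0 = fully exclusive, 0.0 = fully co-occurring)."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    either = int(np.logical_or(a, b).sum())   # samples covered by the pair
    both = int(np.logical_and(a, b).sum())    # samples mutated in both genes
    return float((either - both) / either) if either else 0.0
```

A soft index of this kind, combined with coverage as edge weights, is what allows moderately exclusive gene pairs to survive in the mutated gene network rather than being discarded by a strict exclusivity test.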
Local motion-compensated method for high-quality 3D coronary artery reconstruction
Liu, Bo; Bai, Xiangzhi; Zhou, Fugen
2016-01-01
The 3D reconstruction of coronary artery from X-ray angiograms rotationally acquired on C-arm has great clinical value. While cardiac-gated reconstruction has shown promising results, it suffers from the problem of residual motion. This work proposed a new local motion-compensated reconstruction method to handle this issue. An initial image was firstly reconstructed using a regularized iterative reconstruction method. Then a 3D/2D registration method was proposed to estimate the residual vessel motion. Finally, the residual motion was compensated in the final reconstruction using the extended iterative reconstruction method. Through quantitative evaluation, it was found that high-quality 3D reconstruction could be obtained and the result was comparable to state-of-the-art method. PMID:28018741
Post-refinement multiscale method for pin power reconstruction
Collins, B.; Seker, V.; Downar, T.; Xu, Y.
2012-07-01
The ability to accurately predict local pin powers in nuclear reactors is necessary to understand the mechanisms that cause fuel pin failure during steady state and transient operation. In the research presented here, methods are developed to improve the local solution using high order methods with boundary conditions from a low order global solution. Several different core configurations were tested to determine the improvement in the local pin powers compared to the standard techniques based on diffusion theory and pin power reconstruction (PPR). The post-refinement multiscale methods use the global solution to determine boundary conditions for the local solution. The local solution is solved using either a fixed boundary source or an albedo boundary condition; this solution is 'post-refinement' and thus has no impact on the global solution. (authors)
Elastography Method for Reconstruction of Nonlinear Breast Tissue Properties
Wang, Z. G.; Liu, Y.; Wang, G.; Sun, L. Z.
2009-01-01
Elastography is developed as a quantitative approach to imaging the linear elastic properties of tissues to detect suspicious tumors. In this paper a nonlinear elastography method is introduced for reconstruction of complex breast tissue properties. The elastic parameters are estimated by optimally minimizing the difference between the computed forces and experimental measurements. A nonlinear adjoint method is derived to calculate the gradient of the objective function, which significantly enhances the numerical efficiency and stability. Simulations are conducted on a three-dimensional heterogeneous breast phantom, extracted from real imaging data, that includes fatty tissue, glandular tissue, and tumors. An exponential form of nonlinear material model is applied. The effect of noise is taken into account. Results demonstrate that the proposed method opens the door toward nonlinear elastography and provides guidelines for future development and clinical application in breast cancer study. PMID:19636362
Comparison of pulse phase and thermographic signal reconstruction processing methods
NASA Astrophysics Data System (ADS)
Oswald-Tranta, Beata; Shepard, Steven M.
2013-05-01
Active thermography data for nondestructive testing has traditionally been evaluated by either visual or numerical identification of anomalous surface temperature contrast in the IR image sequence obtained as the target sample cools in response to thermal stimulation. However, in recent years, it has been demonstrated that considerably more information about the subsurface condition of a sample can be obtained by evaluating the time history of each pixel independently. In this paper, we evaluate the capabilities of two such analysis techniques, Pulse Phase Thermography (PPT) and Thermographic Signal Reconstruction (TSR), using induction and optical flash excitation. Data sequences from optical pulse and scanned induction heating are analyzed with both methods. Results are evaluated in terms of the signal-to-background ratio for a given subsurface feature. In addition to the experimental data, we present finite element simulation models with varying flaw diameter and depth, and discuss size measurement accuracy and the effect of noise on detection limits and sensitivity for both methods.
Comparison of image reconstruction methods for structured illumination microscopy
NASA Astrophysics Data System (ADS)
Lukeš, Tomas; Hagen, Guy M.; Křížek, Pavel; Švindrych, Zdeněk.; Fliegel, Karel; Klíma, Miloš
2014-05-01
Structured illumination microscopy (SIM) is a recent microscopy technique that enables one to go beyond the diffraction limit using patterned illumination. The high-frequency information is encoded through aliasing into the observed image. By acquiring multiple images with different illumination patterns, the aliased components can be separated and a high-resolution image reconstructed. Here we investigate image processing methods that perform the task of high-resolution image reconstruction, namely square-law detection, scaled subtraction, super-resolution SIM (SR-SIM), and Bayesian estimation. The optical sectioning and lateral resolution improvement abilities of these algorithms were tested under various noise level conditions on simulated data and on fluorescence microscopy images of a pollen grain test sample and of a cultured cell stained for the actin cytoskeleton. In order to compare the performance of the algorithms, the following objective criteria were evaluated: Signal to Noise Ratio (SNR), Signal to Background Ratio (SBR), circular average of the power spectral density, and the S3 sharpness index. The results show that SR-SIM and Bayesian estimation combine illumination-patterned images more effectively and provide better lateral resolution in exchange for more complex image processing. SR-SIM requires one to precisely shift the separated spectral components to their proper positions in reciprocal space. High noise levels in the raw data can cause inaccuracies in the shifts of the spectral components, which degrade the super-resolved image. Bayesian estimation has proven to be more robust to changes in noise level and illumination pattern frequency.
NASA Astrophysics Data System (ADS)
Hu, Hui
This dissertation is principally concerned with improving the performance of a prototype image-intensifier-based cone-beam volume computed tomography system by removing or partially removing two of its restricting factors, namely, the inaccuracy of the current cone-beam reconstruction algorithm and the image distortion associated with the curved detecting surface of the image intensifier. To improve the accuracy of cone-beam reconstruction, first, the currently most accurate and computationally efficient cone-beam reconstruction method, the Feldkamp algorithm, is investigated by studying the relation of an original unknown function to its Feldkamp estimate. From this study, partial knowledge of the unknown function can be derived in the Fourier domain from its Feldkamp estimate. Then, based on the Gerchberg-Papoulis algorithm, a modified iterative algorithm efficiently incorporating the Fourier knowledge as well as the a priori spatial knowledge of the unknown function is devised and tested to improve the cone-beam reconstruction accuracy by postprocessing the Feldkamp estimate. Two methods are developed to remove the distortion associated with the curved surface of the image intensifier. A calibrating method based on rubber-sheet remapping is designed and implemented. As an alternative, the curvature can be considered in the reconstruction algorithm. As an initial effort along this direction, a generalized convolution-backprojection reconstruction algorithm for fan-beam geometry and arbitrary circular detector arrays is derived and studied.
Structural influence of gene networks on their inference: analysis of C3NET
2011-01-01
Background The availability of large-scale high-throughput data poses considerable challenges for functional analysis. For this reason, gene network inference methods have gained considerable interest. However, our current knowledge, especially about the influence of the structure of a gene network on its inference, is limited. Results In this paper we present a comprehensive investigation of the structural influence of gene networks on the inferential characteristics of C3NET - a recently introduced gene network inference algorithm. We employ local as well as global performance metrics in combination with an ensemble approach. The results from our numerical study for various biological and synthetic network structures and simulation conditions, also comparing C3NET with other inference algorithms, lead to a multitude of theoretical and practical insights into the working behavior of C3NET. In addition, in order to facilitate the practical usage of C3NET, we provide a user-friendly R package, called c3net, and describe its functionality. It is available from https://r-forge.r-project.org/projects/c3net and from the CRAN package repository. Conclusions The availability of gene network inference algorithms with known inferential properties opens a new era of large-scale screening experiments that could be equally beneficial for basic biological and biomedical research with auspicious prospects. The availability of our easy-to-use software package c3net may contribute to the popularization of such methods. Reviewers This article was reviewed by Lev Klebanov, Joel Bader and Yuriy Gusev. PMID:21696592
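The core C3NET step, in which each gene keeps only its single highest-mutual-information partner, can be sketched as follows. A Gaussian MI estimate and a placeholder significance threshold stand in for the nonparametric estimators and significance tests used in practice; the data below are synthetic:

```python
import numpy as np

def c3net(expr, threshold=0.0):
    """Core C3NET step (sketch): each gene links only to the partner
    with maximal mutual information.  MI is estimated here under a
    Gaussian assumption, MI = -0.5 * log(1 - rho^2).
    expr: genes x samples matrix.  Returns a symmetric 0/1 adjacency."""
    rho = np.corrcoef(expr)
    np.fill_diagonal(rho, 0.0)
    mi = -0.5 * np.log1p(-np.clip(rho, -0.999, 0.999) ** 2)
    adj = np.zeros_like(mi)
    for g in range(mi.shape[0]):
        best = int(np.argmax(mi[g]))
        if mi[g, best] > threshold:     # placeholder significance filter
            adj[g, best] = adj[best, g] = 1.0
    return adj

# synthetic expression data: genes 0 and 1 strongly co-expressed
rng = np.random.default_rng(0)
x = rng.normal(size=(1, 50))
expr = np.vstack([x,
                  x + 0.1 * rng.normal(size=(1, 50)),
                  rng.normal(size=(2, 50))])
adj = c3net(expr)
print(adj[0, 1])   # the strongly correlated pair is linked
```

Note that the union over genes of these single best edges is what makes C3NET conservative: the inferred network has at most one edge per gene.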
A CLASS OF RECONSTRUCTED DISCONTINUOUS GALERKIN METHODS IN COMPUTATIONAL FLUID DYNAMICS
Hong Luo; Yidong Xia; Robert Nourgaliev
2011-05-01
A class of reconstructed discontinuous Galerkin (DG) methods is presented to solve compressible flow problems on arbitrary grids. The idea is to combine the efficiency of the reconstruction methods used in finite volume methods with the accuracy of DG methods to obtain a better numerical algorithm in computational fluid dynamics. The beauty of the resulting reconstructed discontinuous Galerkin (RDG) methods is that they provide a unified formulation for both finite volume and DG methods, and contain both classical finite volume and standard DG methods as two special cases, thus allowing for a direct efficiency comparison. Both Green-Gauss and least-squares reconstruction methods and a least-squares recovery method are presented to obtain a quadratic polynomial representation of the underlying linear discontinuous Galerkin solution on each cell via a so-called in-cell reconstruction process. The devised in-cell reconstruction aims to augment the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. These three reconstructed discontinuous Galerkin methods are used to compute a variety of compressible flow problems on arbitrary meshes to assess their accuracy. The numerical experiments demonstrate that all three reconstructed discontinuous Galerkin methods can significantly improve the accuracy of the underlying second-order DG method, with the least-squares reconstructed DG method providing the best performance in terms of accuracy, efficiency, and robustness.
Ethanol modulation of gene networks: implications for alcoholism.
Farris, Sean P; Miles, Michael F
2012-01-01
Alcoholism is a complex disease caused by a confluence of environmental and genetic factors influencing multiple brain pathways to produce a variety of behavioral sequelae, including addiction. Genetic factors contribute to over 50% of the risk for alcoholism and recent evidence points to a large number of genes with small effect sizes as the likely molecular basis for this disease. Recent progress in genomics (microarrays or RNA-Seq) and genetics has led to the identification of a large number of potential candidate genes influencing ethanol behaviors or alcoholism itself. To organize this complex information, investigators have begun to focus on the contribution of gene networks, rather than individual genes, for various ethanol-induced behaviors in animal models or behavioral endophenotypes comprising alcoholism. This chapter reviews some of the methods used for constructing gene networks from genomic data and some of the recent progress made in applying such approaches to the study of the neurobiology of ethanol. We show that rapid technology development in gathering genomic data, together with sophisticated experimental design and a growing collection of analysis tools are producing novel insights for understanding the molecular basis of alcoholism and that such approaches promise new opportunities for therapeutic development.
NASA Astrophysics Data System (ADS)
Choi, Joonsung; Kim, Dongchan; Oh, Changhyun; Han, Yeji; Park, HyunWook
2013-05-01
In MRI (magnetic resonance imaging), signal sampling along a radial k-space trajectory is preferred in certain applications due to its distinct advantages, such as robustness to motion; the incoherency of radial sampling can also benefit reconstruction algorithms such as parallel MRI (pMRI). For radial MRI, the image is usually reconstructed from projection data using analytic methods such as filtered back-projection or Fourier reconstruction after gridding. However, the quality of the image reconstructed by these analytic methods can be degraded when the number of acquired projection views is insufficient. In this paper, we propose a novel reconstruction method based on the expectation maximization (EM) method, where the EM algorithm is remodeled for MRI so that complex images can be reconstructed. Then, to optimize the proposed method for radial pMRI, a reconstruction method that uses the coil sensitivity information of multichannel RF coils is formulated. Experimental results from synthetic and in vivo data show that the proposed method produces better reconstructed images than the analytic methods, even from highly subsampled data, and provides monotonic convergence properties compared with the conjugate gradient based reconstruction method.
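For reference, the classic real-valued ML-EM iteration for a linear Poisson model looks as follows; the paper's remodeling of EM for complex-valued MR data is not reproduced here, and the toy system below is a made-up illustration of the multiplicative update only:

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """Classic ML-EM iteration for y ~ Poisson(A @ x) with A, x >= 0:
    x <- x * A^T(y / Ax) / A^T 1.  Shown only to illustrate the
    multiplicative EM update that the paper builds upon."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])               # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                               # forward projection
        ratio = np.where(proj > 0, y / proj, 0.0)  # measured / estimated
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# toy system: recover a nonnegative "image" from noiseless projections
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([2.0, 1.0, 3.0])
y = A @ x_true
x_hat = mlem(A, y)
print(np.round(x_hat, 2))
```

The update preserves nonnegativity by construction, which is one reason EM-type schemes are attractive for tomographic problems.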
Patel, Niyant V.; Wagner, Douglas S.
2015-01-01
Background: Venous thromboembolism (VTE) risk models including the Davison risk score and the 2005 Caprini risk assessment model have been validated in plastic surgery patients. However, their utility and predictive value in breast reconstruction has not been well described. We sought to determine the utility of current VTE risk models in this population and the VTE rate observed in various methods of breast reconstruction. Methods: A retrospective review of breast reconstructions by a single surgeon was performed. One hundred consecutive transverse rectus abdominis myocutaneous (TRAM) patients, 100 consecutive implant patients, and 100 consecutive latissimus dorsi patients were identified over a 10-year period. Patient demographics and presence of symptomatic VTE were collected. 2005 Caprini risk scores and Davison risk scores were calculated for each patient. Results: The TRAM reconstruction group was found to have a higher VTE rate (6%) than the implant (0%) and latissimus (0%) reconstruction groups (P < 0.01). Mean Davison risk scores and 2005 Caprini scores were similar across all reconstruction groups (P > 0.1). The vast majority of patients were stratified as high risk (87.3%) by the VTE risk models. However, only TRAM reconstruction patients demonstrated significant VTE risk. Conclusions: TRAM reconstruction appears to have a significantly higher risk of VTE than both implant and latissimus reconstruction. Current risk models do not effectively stratify breast reconstruction patients at risk for VTE. The method of breast reconstruction appears to have a significant role in patients’ VTE risk. PMID:26090287
Yoon, Sungwon; Pineda, Angel R.; Fahrig, Rebecca
2010-01-01
Purpose: An iterative tomographic reconstruction algorithm that simultaneously segments and reconstructs the reconstruction domain is proposed and applied to tomographic reconstructions from a sparse number of projection images. Methods: The proposed algorithm uses a two-phase level set method segmentation in conjunction with an iterative tomographic reconstruction to achieve simultaneous segmentation and reconstruction. The simultaneous segmentation and reconstruction is achieved by alternating between level set function evolutions and per-region intensity value updates. To deal with the limited number of projections, a priori information about the reconstruction is enforced via penalized likelihood function. Specifically, smooth function within each region (piecewise smooth function) and bounded function intensity values for each region are assumed. Such a priori information is formulated into a quadratic objective function with linear bound constraints. The level set function evolutions are achieved by artificially time evolving the level set function in the negative gradient direction; the intensity value updates are achieved by using the gradient projection conjugate gradient algorithm. Results: The proposed simultaneous segmentation and reconstruction results were compared to “conventional” iterative reconstruction (with no segmentation), iterative reconstruction followed by segmentation, and filtered backprojection. Improvements of 6%–13% in the normalized root mean square error were observed when the proposed algorithm was applied to simulated projections of a numerical phantom and to real fan-beam projections of the Catphan phantom, both of which did not satisfy the a priori assumptions. Conclusions: The proposed simultaneous segmentation and reconstruction resulted in improved reconstruction image quality. The algorithm correctly segments the reconstruction space into regions, preserves sharp edges between different regions, and smoothes the noise
New method to analyze internal disruptions with tomographic reconstructions
NASA Astrophysics Data System (ADS)
Tanzi, C. P.; de Blank, H. J.
1997-03-01
Sawtooth crashes have been investigated on the Rijnhuizen Tokamak Project (RTP) [N. J. Lopes Cardozo et al., Proceedings of the 14th International Conference on Plasma Physics and Controlled Nuclear Fusion Research, Würzburg, 1992 (International Atomic Energy Agency, Vienna, 1993), Vol. 1, p. 271]. Internal disruptions in tokamak plasmas often exhibit an m=1 poloidal mode structure prior to the collapse which can be clearly identified by means of multicamera soft x-ray diagnostics. In this paper tomographic reconstructions of such m=1 modes are analyzed with a new method, based on magnetohydrodynamic (MHD) invariants computed from the two-dimensional emissivity profiles, which quantifies the amount of profile flattening not only after the crash but also during the precursor oscillations. The results are interpreted by comparing them with two models which simulate the measurements of the m=1 redistribution of soft x-ray emissivity prior to the sawtooth crash. One model is based on the magnetic reconnection model of Kadomtsev. The other involves ideal MHD motion only. In cases where differences in magnetic topology between the two models cannot be seen in the tomograms, the analysis of profile flattening has an advantage. The analysis shows that in RTP the clearly observed m=1 displacement of some sawteeth requires the presence of convective ideal MHD motion, whereas other precursors are consistent with magnetic reconnection of up to 75% of the magnetic flux within the q=1 surface. The possibility of ideal interchange combined with enhanced cross-field transport is not excluded.
Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method
Pereira, N F; Sitek, A
2011-01-01
Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix, and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, three of the five point generation methods evaluated use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies leads to superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can outperform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated. PMID:20736496
Reduction and reconstruction methods for simulation and control of fluids
NASA Astrophysics Data System (ADS)
Ma, Zhanhua
In this thesis we develop model reduction/reconstruction methods that are applied to simulation and control of fluids. In the first part of the thesis, we focus on the development of dimension reduction methods that compute reduced-order models (of order 10^1-10^2) of systems with high-dimensional states (of order 10^5-10^8) that are typical in computational fluid dynamics. The reduced-order models are then used for feedback control design for the full systems, as the control design tools are usually applicable only to systems of order up to 10^4. First, we show that a widely used model reduction method for stable linear time-invariant (LTI) systems, the approximate balanced truncation method (also called balanced POD), yields reduced-order models identical to those of the Eigensystem Realization Algorithm (ERA), a well-known method in system identification. Unlike ERA, balanced POD generates sets of modes that are useful in controller/observer design and systems analysis. On the other hand, ERA is more computationally efficient and does not need data from adjoint systems, which cannot be constructed in experiments and are often costly to construct and simulate numerically. The equivalence of ERA and balanced POD leads us to further design a version of ERA that works for unstable (linear) systems with a one-dimensional unstable eigenspace and is equivalent to a recently developed version of balanced POD for unstable systems. We consider further generalization of balanced POD/ERA methods for linearized time-periodic systems around an unstable orbit. Four algorithms are presented: the lifted balanced POD/lifted ERA and the periodic balanced POD/periodic ERA. The lifting approach generates an LTI reduced-order model that updates the system once every period, and the periodic approach generates a periodic reduced-order model. By construction the lifted ERA is the most computationally efficient algorithm and it does not need adjoint data. By removing periodicity in periodic balanced
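As context for the ERA/balanced-POD discussion above, a minimal SISO sketch of ERA from impulse-response data via the Hankel-matrix SVD; the indexing convention and the toy first-order system are standard textbook choices, not taken from the thesis:

```python
import numpy as np

def era(markov, r):
    """Eigensystem Realization Algorithm (SISO sketch): build a rank-r
    state-space model (A, B, C) from impulse-response Markov parameters
    markov[k] = h_k = C A^(k-1) B (markov[0] is unused)."""
    n = (len(markov) - 1) // 2
    H0 = np.array([[markov[i + j + 1] for j in range(n)] for i in range(n)])
    H1 = np.array([[markov[i + j + 2] for j in range(n)] for i in range(n)])
    U, s, Vt = np.linalg.svd(H0)
    U, s, Vt = U[:, :r], s[:r], Vt[:r]
    sq = np.sqrt(s)
    A = (U / sq).T @ H1 @ (Vt.T / sq)   # S^-1/2 U^T H1 V S^-1/2
    B = sq[:, None] * Vt[:, [0]]        # S^1/2 V^T e1
    C = U[[0], :] * sq                  # e1^T U S^1/2
    return A, B, C

# exact data from a known first-order system: A = 0.9, B = 1, C = 2
markov = [0.0] + [2 * 0.9 ** k for k in range(8)]
A, B, C = era(markov, r=1)
h3 = (C @ np.linalg.matrix_power(A, 2) @ B).item()   # predicted h_3
print(round(h3, 4))  # 1.62, matching 2 * 0.9**2
```

As the thesis notes, this construction needs only impulse-response (Markov-parameter) data, with no adjoint simulations, which is what makes ERA attractive experimentally.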
Gene Networks Underlying Chronic Sleep Deprivation in Drosophila
2014-06-15
Studies of the gene network affected by sleep deprivation and stress in the fruit fly Drosophila have revealed the ... transduction pathways are affected. Subsequent tests of mutants in these pathways demonstrated a strong effect on sleep maintenance. Further ... (Report period: 15-Apr-2009 to 14-Apr-2013; approved for public release, distribution unlimited.)
Comparison of methods for the reduction of reconstructed layers in atmospheric tomography.
Saxenhuber, Daniela; Auzinger, Günter; Louarn, Miska Le; Helin, Tapio
2017-04-01
For the new generation of extremely large telescopes (ELTs), the computational effort for adaptive optics (AO) systems is demanding even for fast reconstruction algorithms. In wide-field AO, atmospheric tomography, i.e., the reconstruction of turbulent atmospheric layers from wavefront sensor data in several directions of view, is the crucial step for an overall reconstruction. Along with the number of deformable mirrors, the wavefront sensors and their resolution, and the guide star separation, the number of reconstruction layers contributes significantly to the numerical effort. To reduce the computational cost, a sparse reconstruction profile that still yields good reconstruction quality is needed. In this paper, we analyze existing methods and present new approaches to determine optimal layer heights and turbulence weights for the tomographic reconstruction. Two classes of methods are discussed. On the one hand, we consider compression methods that downsample a given input profile to fewer layers; among others, we present a new compression method based on discrete optimization, which collects atmospheric layers into subgroups, and a compression based on conserving turbulence moments. On the other hand, we consider a joint optimization of the tomographic reconstruction and the reconstruction profile during atmospheric tomography, which is independent of any a priori information on the underlying input profile. We analyze and study the qualitative performance of these methods for different input profiles and varying fields of view in an ELT-sized multi-object AO setting on the European Southern Observatory end-to-end simulation tool OCTOPUS.
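The moment-conserving compression idea can be illustrated with a toy sketch: each contiguous group of layers is merged into one layer that preserves the group's total turbulence (0th moment) and its turbulence-weighted height (1st moment). The grouping and the Cn2 profile below are hypothetical, not from the paper:

```python
import numpy as np

def compress_profile(heights, cn2, groups):
    """Merge contiguous layer subgroups of a Cn2 profile into single
    layers, conserving per-group total turbulence and the
    turbulence-weighted mean height.  `groups` is a list of index
    lists partitioning the input layers."""
    new_h, new_w = [], []
    for g in groups:
        w, h = cn2[g], heights[g]
        new_w.append(w.sum())                 # 0th moment conserved
        new_h.append((w * h).sum() / w.sum()) # 1st moment conserved
    return np.array(new_h), np.array(new_w)

heights = np.array([0.0, 2.0, 4.0, 8.0, 12.0, 16.0])   # km (toy profile)
cn2 = np.array([0.5, 0.2, 0.1, 0.1, 0.05, 0.05])       # relative weights
h2, w2 = compress_profile(heights, cn2, [[0, 1, 2], [3, 4, 5]])
print(h2, w2)   # two merged layers; total turbulence is unchanged
```

Choosing the grouping itself is where the paper's discrete optimization comes in; the fixed split used here is purely for illustration.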
NASA Astrophysics Data System (ADS)
He, Jinping; Ruan, Ningjuan; Zhao, Haibo; Liu, Yuchen
2016-10-01
Remote sensing scenes contain varied and complicated features for which no dictionary provides comprehensive coverage, so reconstruction precision is not guaranteed. To address these problems, a novel reconstruction method using multiple compressed sensing measurements and based on energy compensation is proposed in this paper. The multiple measured data and multiple coding matrices compose the reconstruction equation, which is solved locally using the Orthogonal Matching Pursuit (OMP) algorithm to obtain an initial reconstruction image. Further assuming that local image patches share the same compensation gray value, a mathematical model of the compensation value is constructed by minimizing the error between the multiple estimated measured values and the actual measured values. After solving this minimization, the compensation values are added to the initial reconstruction image to obtain the final energy-compensated image. Experiments show that the energy compensation method is superior to reconstruction without compensation, making our method well suited to remote sensing imagery.
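The local OMP solve mentioned above follows the standard greedy scheme: repeatedly pick the dictionary column most correlated with the residual, then refit by least squares on the selected support. A generic self-contained sketch, in which the dictionary, sparsity level, and signal are hypothetical:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x from
    y = A @ x.  Each step adds the column most correlated with the
    residual, then re-fits all selected coefficients by least squares."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# random dictionary with unit-norm columns, 2-sparse ground truth
rng = np.random.default_rng(1)
A = rng.normal(size=(50, 80))
A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(80)
x_true[[7, 33]] = [3.0, -2.0]
y = A @ x_true
x_hat = omp(A, y, k=2)
print(np.nonzero(x_hat)[0])   # recovered support
```

In the paper's setting, A would be the coding matrix of each compressed measurement channel rather than a random dictionary.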
A novel method for the 3-D reconstruction of scoliotic ribs from frontal and lateral radiographs.
Seoud, Lama; Cheriet, Farida; Labelle, Hubert; Dansereau, Jean
2011-05-01
Among the external manifestations of scoliosis, the rib hump, which is associated with the ribs' deformities and rotations, constitutes the most disturbing aspect of the scoliotic deformity for patients. A personalized 3-D model of the rib cage is important for a better evaluation of the deformity, and hence, a better treatment planning. A novel method for the 3-D reconstruction of the rib cage, based only on two standard radiographs, is proposed in this paper. For each rib, two points are extrapolated from the reconstructed spine, and three points are reconstructed by stereo radiography. The reconstruction is then refined using a surface approximation. The method was evaluated using clinical data of 13 patients with scoliosis. A comparison was conducted between the reconstructions obtained with the proposed method and those obtained by using a previous reconstruction method based on two frontal radiographs. A first comparison criterion was the distances between the reconstructed ribs and the surface topography of the trunk, considered as the reference modality. The correlation between ribs axial rotation and back surface rotation was also evaluated. The proposed method successfully reconstructed the ribs of the 6th-12th thoracic levels. The evaluation results showed that the 3-D configuration of the new rib reconstructions is more consistent with the surface topography and provides more accurate measurements of ribs axial rotation.
Evolution of a core gene network for skeletogenesis in chordates.
Hecht, Jochen; Stricker, Sigmar; Wiecha, Ulrike; Stiege, Asita; Panopoulou, Georgia; Podsiadlowski, Lars; Poustka, Albert J; Dieterich, Christoph; Ehrich, Siegfried; Suvorova, Julia; Mundlos, Stefan; Seitz, Volkhard
2008-03-21
The skeleton is one of the most important features for the reconstruction of vertebrate phylogeny but few data are available to understand its molecular origin. In mammals the Runt genes are central regulators of skeletogenesis. Runx2 was shown to be essential for osteoblast differentiation, tooth development, and bone formation. Both Runx2 and Runx3 are essential for chondrocyte maturation. Furthermore, Runx2 directly regulates Indian hedgehog expression, a master coordinator of skeletal development. To clarify the correlation of Runt gene evolution and the emergence of cartilage and bone in vertebrates, we cloned the Runt genes from hagfish as representative of jawless fish (MgRunxA, MgRunxB) and from dogfish as representative of jawed cartilaginous fish (ScRunx1-3). According to our phylogenetic reconstruction the stem species of chordates harboured a single Runt gene and thereafter Runt locus duplications occurred during early vertebrate evolution. All newly isolated Runt genes were expressed in cartilage according to quantitative PCR. In situ hybridisation confirmed high MgRunxA expression in hard cartilage of hagfish. In dogfish ScRunx2 and ScRunx3 were expressed in embryonal cartilage whereas all three Runt genes were detected in teeth and placoid scales. In cephalochordates (lancelets) Runt, Hedgehog and SoxE were strongly expressed in the gill bars and expression of Runt and Hedgehog was found in endo- as well as ectodermal cells. Furthermore we demonstrate that the lancelet Runt protein binds to Runt binding sites in the lancelet Hedgehog promoter and regulates its activity. Together, these results suggest that Runt and Hedgehog were part of a core gene network for cartilage formation, which was already active in the gill bars of the common ancestor of cephalochordates and vertebrates and diversified after Runt duplications had occurred during vertebrate evolution. The similarities in expression patterns of Runt genes support the view that teeth and
Exploring Normalization and Network Reconstruction Methods using In Silico and In Vivo Models
Abstract: Lessons learned from the recent DREAM competitions include: The search for the best network reconstruction method continues, and we need more complete datasets with ground truth from more complex organisms. It has become obvious that the network reconstruction methods t...
Chen, Shuo; Wang, Gang; Cui, Xiaoyu; Liu, Quan
2017-01-23
Raman spectroscopy has demonstrated great potential in biomedical applications. However, spectroscopic Raman imaging is limited in the investigation of fast changing phenomena because of slow data acquisition. Our previous studies have indicated that spectroscopic Raman imaging can be significantly sped up using the approach of narrow-band imaging followed by spectral reconstruction. A multi-channel system was built to demonstrate the feasibility of fast wide-field spectroscopic Raman imaging using the approach of simultaneous narrow-band image acquisition followed by spectral reconstruction based on Wiener estimation in phantoms. To further improve the accuracy of reconstructed Raman spectra, we propose a stepwise spectral reconstruction method in this study, which can be combined with the earlier developed sequential weighted Wiener estimation to improve spectral reconstruction accuracy. The stepwise spectral reconstruction method first reconstructs the fluorescence background spectrum from narrow-band measurements and then the pure Raman narrow-band measurements can be estimated by subtracting the estimated fluorescence background from the overall narrow-band measurements. Thereafter, the pure Raman spectrum can be reconstructed from the estimated pure Raman narrow-band measurements. The result indicates that the stepwise spectral reconstruction method can improve spectral reconstruction accuracy significantly when combined with sequential weighted Wiener estimation, compared with the traditional Wiener estimation. In addition, qualitatively accurate cell Raman spectra were successfully reconstructed using the stepwise spectral reconstruction method from the narrow-band measurements acquired by a four-channel wide-field Raman spectroscopic imaging system. This method can potentially facilitate the adoption of spectroscopic Raman imaging to the investigation of fast changing phenomena.
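Wiener estimation, the core of the spectral-reconstruction step described above, is a linear minimum-mean-square estimator learned from training spectra. The NumPy sketch below is a minimal illustration; the Gaussian filter passbands, basis spectra, and dimensions are hypothetical stand-ins, not the authors' four-channel system:

```python
import numpy as np

rng = np.random.default_rng(0)
n_wl, n_ch, n_train = 50, 4, 200

# Hypothetical narrow-band filter responses: four Gaussian passbands.
wl = np.arange(n_wl)
centers = np.linspace(10, 40, n_ch)
F = np.exp(-0.5 * ((wl[None, :] - centers[:, None]) / 3.0) ** 2)

# Training spectra: smooth random curves standing in for Raman spectra.
basis = np.exp(-0.5 * ((wl[None, :] - np.linspace(5, 45, 6)[:, None]) / 5.0) ** 2)
S_train = rng.random((n_train, 6)) @ basis

# Wiener estimation matrix  W = C F^T (F C F^T)^-1,
# with C the spectral autocorrelation learned from the training set.
C = S_train.T @ S_train / n_train
W = C @ F.T @ np.linalg.inv(F @ C @ F.T)

# Reconstruct unseen spectra from their narrow-band measurements.
S_test = rng.random((50, 6)) @ basis
S_hat = (S_test @ F.T) @ W.T        # measure, then apply the Wiener estimator
err = np.linalg.norm(S_hat - S_test) / np.linalg.norm(S_test)
```

The stepwise variant in the abstract would first run such an estimator on the fluorescence background, subtract the estimated background from the narrow-band measurements, and then apply it again to the pure Raman residual.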
New method to analyze internal disruptions with tomographic reconstructions
Tanzi, C.P.; de Blank, H.J.
1997-03-01
Sawtooth crashes have been investigated on the Rijnhuizen Tokamak Project (RTP) [N. J. Lopes Cardozo et al., Proceedings of the 14th International Conference on Plasma Physics and Controlled Nuclear Fusion Research, Würzburg, 1992 (International Atomic Energy Agency, Vienna, 1993), Vol. 1, p. 271]. Internal disruptions in tokamak plasmas often exhibit an m=1 poloidal mode structure prior to the collapse which can be clearly identified by means of multicamera soft x-ray diagnostics. In this paper tomographic reconstructions of such m=1 modes are analyzed with a new method, based on magnetohydrodynamic (MHD) invariants computed from the two-dimensional emissivity profiles, which quantifies the amount of profile flattening not only after the crash but also during the precursor oscillations. The results are interpreted by comparing them with two models which simulate the measurements of the m=1 redistribution of soft x-ray emissivity prior to the sawtooth crash. One model is based on the magnetic reconnection model of Kadomtsev. The other involves ideal MHD motion only. In cases where differences in magnetic topology between the two models cannot be seen in the tomograms, the analysis of profile flattening has an advantage. The analysis shows that in RTP the clearly observed m=1 displacement of some sawteeth requires the presence of convective ideal MHD motion, whereas other precursors are consistent with magnetic reconnection of up to 75% of the magnetic flux within the q=1 surface. The possibility of ideal interchange combined with enhanced cross-field transport is not excluded. © 1997 American Institute of Physics.
On multigrid methods for image reconstruction from projections
Henson, V.E.; Robinson, B.T.; Limber, M.
1994-12-31
The sampled Radon transform of a 2D function can be represented as a continuous linear map R : L¹ → ℝᴺ. The image reconstruction problem is: given a vector b ∈ ℝᴺ, find an image (or density function) u(x, y) such that Ru = b. Since in general there are infinitely many solutions, the authors pick the solution with minimal 2-norm. Numerous proposals have been made regarding how best to discretize this problem. One can, for example, select a set of functions φ_j that span a particular subspace Ω ⊂ L¹, and model R : Ω → ℝᴺ. The subspace Ω may be chosen as a member of a sequence of subspaces whose limit is dense in L¹. One approach to the choice of Ω gives rise to a natural pixel discretization of the image space. Two possible choices of the set φ_j are the set of characteristic functions of finite-width "strips" representing energy transmission paths and the set of intersections of such strips. The authors have studied the eigenstructure of the matrices B resulting from these choices and the effect of applying a Gauss-Seidel iteration to the problem Bw = b. There exists a near null space into which the error vectors migrate with iteration, after which Gauss-Seidel iteration stalls. The authors attempt to accelerate convergence via a multilevel scheme, based on the principles of McCormick's Multilevel Projection Method (PML). Coarsening is achieved by thickening the rays, which results in a much smaller discretization of an optimal grid and a halving of the number of variables. This approach satisfies all the requirements of the PML scheme. They have observed that a multilevel approach based on this idea accelerates convergence at least to the point where noise in the data dominates.
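The minimal 2-norm solution the authors pick is the pseudoinverse solution u = Rᵀ(RRᵀ)⁻¹b. A toy sketch, where a small dense random matrix stands in for the sampled Radon transform and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_pix = 8, 25                     # 8 ray sums, 25 pixels: underdetermined
R = rng.random((N, n_pix))           # dense stand-in for the strip matrix
b = rng.random(N)

# Minimal 2-norm solution: u = R^T (R R^T)^-1 b  (pseudoinverse solution).
u = R.T @ np.linalg.solve(R @ R.T, b)

# Any other solution differs by a null-space component, which only adds norm,
# since u lies in the row space, orthogonal to null(R).
z = rng.standard_normal(n_pix)
z_null = z - np.linalg.pinv(R) @ (R @ z)   # project z onto null(R)
u_other = u + z_null
```

Both `u` and `u_other` reproduce the data exactly, but `u` has the smaller 2-norm.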
Reconstruction method for data protection in telemedicine systems
NASA Astrophysics Data System (ADS)
Buldakova, T. I.; Suyatinov, S. I.
2015-03-01
This report proposes an approach to protecting transmitted data by creating paired symmetric keys for the sensor and the receiver. Since biosignals are unique to each person, suitable processing of them yields the information needed to create cryptographic keys. Processing is based on reconstruction of a mathematical model that generates time series diagnostically equivalent to the initial biosignals. Information about the model is transmitted to the receiver, where the physiological time series are restored using the reconstructed model. Thus, the information about the structure and parameters of the biosystem model obtained during reconstruction can be used not only for diagnostics but also for protecting the data transmitted in telemedicine complexes.
NASA Astrophysics Data System (ADS)
Chen, Xueli; Yang, Defu; Zhang, Qitan; Liang, Jimin
2014-05-01
Even though bioluminescence tomography (BLT) exhibits significant potential and wide applications in macroscopic imaging of small animals in vivo, the inverse reconstruction is still a tough problem that has plagued researchers in the field. The ill-posedness of the inverse reconstruction arises from insufficient measurements and modeling errors, so the inverse reconstruction cannot be solved directly. In this study, an l1/2-regularization-based numerical method was developed for effective reconstruction in BLT. In the method, the inverse reconstruction of BLT was cast as an l1/2 regularization problem, and the weighted interior-point algorithm (WIPA) was applied to solve it by transforming it into the solution of a series of l1 regularization problems. The feasibility and effectiveness of the proposed method were demonstrated with numerical simulations on a digital mouse. Stability verification experiments further illustrated the robustness of the proposed method under different levels of Gaussian noise.
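The idea of reducing an l1/2 penalty to a series of l1 problems can be illustrated with iteratively reweighted soft thresholding (ISTA). This is a rough sketch of the reduction only, not the authors' weighted interior-point algorithm; the problem sizes and the reweighting rule w ∝ |x|^(-1/2) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 40, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
x_true = np.zeros(n)
x_true[[5, 37, 80]] = [1.0, -1.0, 1.5]         # 3-sparse ground truth
y = A @ x_true

lam = 0.02
step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / spectral-norm^2
w = np.ones(n)                                  # start from a plain l1 problem
x = np.zeros(n)
for outer in range(3):                          # series of weighted-l1 problems
    for _ in range(500):                        # ISTA on the current problem
        z = x - step * (A.T @ (A @ x - y))      # gradient step on the data fit
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam * w, 0.0)
    # Reweight to mimic the l1/2 penalty: weight ~ |x|^(-1/2).
    w = 1.0 / (np.sqrt(np.abs(x)) + 1e-2)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Each outer pass is an ordinary weighted-l1 problem; the reweighting progressively penalizes small entries more, approximating the nonconvex l1/2 objective.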
Solution of the quasispecies model for an arbitrary gene network
NASA Astrophysics Data System (ADS)
Tannenbaum, Emmanuel; Shakhnovich, Eugene I.
2004-08-01
In this paper, we study the equilibrium behavior of Eigen’s quasispecies equations for an arbitrary gene network. We consider a genome consisting of N genes, so that the full genome sequence σ may be written as σ=σ1σ2⋯σN , where σi are sequences of individual genes. We assume a single fitness peak model for each gene, so that gene i has some “master” sequence σi,0 for which it is functioning. The fitness landscape is then determined by which genes in the genome are functioning and which are not. The equilibrium behavior of this model may be solved in the limit of infinite sequence length. The central result is that, instead of a single error catastrophe, the model exhibits a series of localization to delocalization transitions, which we term an “error cascade.” As the mutation rate is increased, the selective advantage for maintaining functional copies of certain genes in the network disappears, and the population distribution delocalizes over the corresponding sequence spaces. The network goes through a series of such transitions, as more and more genes become inactivated, until eventually delocalization occurs over the entire genome space, resulting in a final error catastrophe. This model provides a criterion for determining the conditions under which certain genes in a genome will lose functionality due to genetic drift. It also provides insight into the response of gene networks to mutagens. In particular, it suggests an approach for determining the relative importance of various genes to the fitness of an organism, in a more accurate manner than the standard “deletion set” method. The results in this paper also have implications for mutational robustness and what C.O. Wilke termed “survival of the flattest.”
Comparison of reconstruction methods and quantitative accuracy in Siemens Inveon PET scanner
NASA Astrophysics Data System (ADS)
Ram Yu, A.; Kim, Jin Su; Kang, Joo Hyun; Moo Lim, Sang
2015-04-01
PET reconstruction is key to the quantification of PET data. To our knowledge, no comparative study of reconstruction methods has been performed to date. In this study, we compared reconstruction methods with various filters in terms of their spatial resolution, non-uniformities (NU), recovery coefficients (RCs), and spillover ratios (SORs). In addition, the linearity between measured and true radioactivity concentrations was assessed. A Siemens Inveon PET scanner was used in this study. Spatial resolution was measured according to the NEMA standard using a 1 mm³ 18F point source. Image quality was assessed in terms of NU, RC and SOR. To measure the effect of reconstruction algorithms and filters, data were reconstructed using FBP, the 3D reprojection algorithm (3DRP), ordered-subset expectation maximization 2D (OSEM 2D), and maximum a posteriori (MAP) with various filters or smoothing factors (β). To assess the linearity of reconstructed radioactivity, an image-quality phantom filled with 18F was reconstructed using FBP, OSEM 2D and MAP (β = 1.5 and 5 × 10⁻⁵). The highest achievable volumetric resolution was 2.31 mm³ and the highest RCs were obtained when OSEM 2D was used. SORs were 4.87% for air and 3.97% for water when OSEM 2D reconstruction was used. The measured radioactivity of the reconstructed image was proportional to the injected one for radioactivity below 16 MBq/ml when the FBP or OSEM 2D reconstruction methods were used. By contrast, when the MAP reconstruction method was used, the activity of the reconstructed image increased proportionally regardless of the amount of injected radioactivity. When OSEM 2D or FBP were used, the measured radioactivity concentration was reduced by 53% compared with the true injected radioactivity for radioactivity <16 MBq/ml. The OSEM 2D reconstruction method provides the highest achievable volumetric resolution and highest RC among all the tested methods and yields a linear relation between the measured and true
Analysis of cascading failure in gene networks.
Sun, Longxiao; Wang, Shudong; Li, Kaikai; Meng, Dazhi
2012-01-01
Investigating the functional mechanisms by which cancer-related genes act in the formation and development of cancers is an important subject. Modern data-analysis methodology plays a very important role in deducing the relationship between cancers and cancer-related genes and in analyzing the functional mechanisms of the genome. In this research, we construct mutual information networks using gene expression profiles of glioblastoma and renal tissues in normal and cancer conditions. We investigate the relationship between structure and robustness in the gene networks of the two tissues using a cascading failure model based on betweenness centrality. We define several parameters, such as the percentage of failed nodes in the network, the average size-ratio of cascading failure, and the cumulative probability of the size-ratio of cascading failure, to measure the robustness of the networks. By comparing the control group and the experiment groups, we find that the networks of the experiment groups are more robust than those of the control group. A gene that can cause large-scale failure is called a structural key gene. Some of these have been confirmed to be closely related to the formation and development of glioma and renal cancer, respectively. Most of them are predicted to play important roles during the formation of glioma and renal cancer, possibly as oncogenes, suppressor genes, or other cancer candidate genes in glioma and renal cancer cells. However, these studies provide little information about the detailed roles of the identified cancer genes.
Paper-based Synthetic Gene Networks
Pardee, Keith; Green, Alexander A.; Ferrante, Tom; Cameron, D. Ewen; DaleyKeyser, Ajay; Yin, Peng; Collins, James J.
2014-01-01
Synthetic gene networks have wide-ranging uses in reprogramming and rewiring organisms. To date, there has not been a way to harness the vast potential of these networks beyond the constraints of a laboratory or in vivo environment. Here, we present an in vitro paper-based platform that provides a new venue for synthetic biologists to operate, and a much-needed medium for the safe deployment of engineered gene circuits beyond the lab. Commercially available cell-free systems are freeze-dried onto paper, enabling the inexpensive, sterile and abiotic distribution of synthetic biology-based technologies for the clinic, global health, industry, research and education. For field use, we create circuits with colorimetric outputs for detection by eye, and fabricate a low-cost, electronic optical interface. We demonstrate this technology with small molecule and RNA actuation of genetic switches, rapid prototyping of complex gene circuits, and programmable in vitro diagnostics, including glucose sensors and strain-specific Ebola virus sensors. PMID:25417167
Paper-based synthetic gene networks.
Pardee, Keith; Green, Alexander A; Ferrante, Tom; Cameron, D Ewen; DaleyKeyser, Ajay; Yin, Peng; Collins, James J
2014-11-06
Synthetic gene networks have wide-ranging uses in reprogramming and rewiring organisms. To date, there has not been a way to harness the vast potential of these networks beyond the constraints of a laboratory or in vivo environment. Here, we present an in vitro paper-based platform that provides an alternate, versatile venue for synthetic biologists to operate and a much-needed medium for the safe deployment of engineered gene circuits beyond the lab. Commercially available cell-free systems are freeze dried onto paper, enabling the inexpensive, sterile, and abiotic distribution of synthetic-biology-based technologies for the clinic, global health, industry, research, and education. For field use, we create circuits with colorimetric outputs for detection by eye and fabricate a low-cost, electronic optical interface. We demonstrate this technology with small-molecule and RNA actuation of genetic switches, rapid prototyping of complex gene circuits, and programmable in vitro diagnostics, including glucose sensors and strain-specific Ebola virus sensors.
Image reconstruction method IRBis for optical/infrared long-baseline interferometry
NASA Astrophysics Data System (ADS)
Hofmann, Karl-Heinz; Heininger, Matthias; Schertl, Dieter; Weigelt, Gerd; Millour, Florentin; Berio, Philippe
2016-07-01
IRBis is an image reconstruction method for optical/infrared long-baseline interferometry. IRBis can reconstruct images from (a) measured visibilities and closure phases, or from (b) measured complex visibilities (i.e. the Fourier phases and visibilities). The applied optimization routine ASA CG is based on conjugate gradients. The method allows the user to implement different regularizers, for example maximum entropy, smoothness, or total variation, and to apply residual ratios as an additional metric for goodness of fit. In addition, IRBis allows the user to change the following reconstruction parameters: (a) the field of view of the area to be reconstructed, (b) the size of the pixel grid used, (c) the size of a binary mask in image space that restricts where nonzero intensities can be reconstructed, and (d) the strength of the regularization. The two main reconstruction parameters are the size of the binary mask in image space (c) and the strength of the regularization (d). Several values of these two parameters are tested within the algorithm. The quality of the different reconstructions obtained is roughly estimated by evaluating the differences between the measured data and the reconstructed image (using the reduced χ² values and the residual ratios). The best-quality reconstruction, together with a few reconstructions sorted according to their quality, is provided to the user. We describe the theory of IRBis and present several applications to simulated interferometric data and data of real astronomical objects: (a) We have investigated image reconstruction experiments of MATISSE target candidates by computer simulations. We have modeled gaps in a disk of a young stellar object and simulated interferometric data (squared visibilities and closure phases) with a signal-to-noise ratio as expected for MATISSE observations. We have performed image reconstruction experiments with this model for different flux levels of the target and
A Novel Parallel Method for Speckle Masking Reconstruction Using the OpenMP
NASA Astrophysics Data System (ADS)
Li, Xuebao; Zheng, Yanfang
2016-08-01
High resolution reconstruction technology, such as speckle masking, has been developed to enhance the spatial resolution of observational images from ground-based solar telescopes. Near real-time reconstruction performance is achieved on a high performance cluster using the Message Passing Interface (MPI). However, much time is spent reconstructing solar subimages in such a speckle reconstruction. We design and implement a novel parallel method for speckle masking reconstruction of solar subimages on a shared-memory machine using OpenMP. Real tests are performed to verify the correctness of our codes. We present the details of several parallel reconstruction steps. The parallel implementation shows a great speed increase compared to the single-thread serial implementation, with a speedup of about 2.5 achieved in one subimage reconstruction. The timing results for reconstructing one subimage with 256×256 pixels show a clear advantage with a greater number of threads. This novel parallel method can be valuable for real-time reconstruction of solar images, especially after porting to a high performance cluster.
NASA Astrophysics Data System (ADS)
Althobaiti, Murad; Vavadi, Hamed; Zhu, Quing
2017-02-01
Ultrasound-guided diffuse optical tomography (DOT) is a promising imaging technique that maps hemoglobin concentrations of breast lesions to assist ultrasound (US) for cancer diagnosis and treatment monitoring. The accurate recovery of breast lesion optical properties requires an effective image reconstruction method. We introduce a reconstruction approach in which US images are encoded as prior information for regularization of the inversion matrix. The framework of this approach is based on image reconstruction package "NIRFAST." We compare this approach to the US-guided dual-zone mesh reconstruction method, which is based on Born approximation and conjugate gradient optimization developed in our laboratory. Results were evaluated using phantoms and clinical data. This method improves classification of malignant and benign lesions by increasing malignant to benign lesion absorption contrast. The results also show improvements in reconstructed lesion shapes and the spatial distribution of absorption maps.
Tamada, Yoshinori; Imoto, Seiya; Araki, Hiromitsu; Nagasaki, Masao; Print, Cristin; Charnock-Jones, D Stephen; Miyano, Satoru
2011-01-01
We present a novel algorithm to estimate genome-wide gene networks consisting of more than 20,000 genes from gene expression data using nonparametric Bayesian networks. Due to the difficulty of learning Bayesian network structures, existing algorithms cannot be applied to more than a few thousand genes. Our algorithm overcomes this limitation by repeatedly estimating subnetworks in parallel for genes selected by neighbor node sampling. Through numerical simulation, we confirmed that our algorithm outperformed a heuristic algorithm in a shorter time. We applied our algorithm to microarray data from human umbilical vein endothelial cells (HUVECs) treated with siRNAs, to construct a human genome-wide gene network, which we compared to a small gene network estimated for the genes extracted using a traditional bioinformatics method. The results showed that our genome-wide gene network contains many features of the small network, as well as others that could not be captured during the small network estimation. The results also revealed master-regulator genes that are not in the small network but that control many of the genes in the small network. These analyses were impossible to realize without our proposed algorithm.
An automated 3D reconstruction method of UAV images
NASA Astrophysics Data System (ADS)
Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping
2015-10-01
In this paper a novel fully automated 3D reconstruction approach based on images from a low-altitude unmanned aerial vehicle (UAV) system is presented, which requires neither prior camera calibration nor any other external prior knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large-scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combinations of images. A high-resolution digital surface model of the study area is produced based on the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction from low-altitude UAV images, and has great potential for the acquisition of spatial information for large-scale mapping, being especially suitable for rapid response and precise modelling in disaster emergencies.
GFD-Net: a novel semantic similarity methodology for the analysis of gene networks.
Díaz-Montaña, Juan J; Díaz-Díaz, Norberto; Gómez-Vela, Francisco
2017-03-05
Since the popularization of biological network inference methods, it has become crucial to create methods to validate the resulting models. Here we present GFD-Net, the first methodology that applies the concept of semantic similarity to gene network analysis. GFD-Net combines the concept of semantic similarity with the use of gene network topology to analyze the functional dissimilarity of gene networks based on Gene Ontology (GO). The main innovation of GFD-Net lies in the way that semantic similarity is used to analyze gene networks taking into account the network topology. GFD-Net selects a functionality for each gene (specified by a GO term), weights each edge according to the dissimilarity between the nodes at its ends and calculates a quantitative measure of the network functional dissimilarity, i.e. a quantitative value of the degree of dissimilarity between the connected genes. The robustness of GFD-Net as a gene network validation tool was demonstrated by performing a ROC analysis on several network repositories. Furthermore, a well-known network was analyzed showing that GFD-Net can also be used to infer knowledge. The relevance of GFD-Net becomes more evident in subsection 3.3, where an example of how GFD-Net can be applied to the study of human diseases is presented. GFD-Net is available as an open-source Cytoscape app which offers a user-friendly interface to configure and execute the algorithm as well as the ability to visualize and interact with the results (http://apps.cytoscape.org/apps/gfdnet).
Qian Weixin; Qi Shuangxi; Wang Wanli; Cheng Jinming; Liu Dongbing
2011-09-15
Neutron penumbral imaging is a significant diagnostic technique in laser-driven inertial confinement fusion experiments. It is very important to develop new reconstruction methods to improve the resolution of neutron penumbral imaging. A new nonlinear reconstruction method based on total variation (TV) regularization is proposed in this paper. In the new method a TV norm is used as the regularization term to construct a smoothing functional for penumbral image reconstruction; in this way, the problem of penumbral image reconstruction is transformed into the minimization of a functional. In addition, a fixed-point iteration scheme is introduced to solve this minimization problem. The numerical experimental results show that, compared to a linear reconstruction method based on the Wiener filter, the TV-regularized nonlinear reconstruction method improves the quality of the reconstructed image, with better noise smoothing and edge preservation. Meanwhile, it achieves a spatial resolution of 5 µm, which is higher than that of the Wiener method.
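A fixed-point scheme for TV-regularized minimization can be sketched in 1-D with lagged-diffusivity iteration: each step solves a weighted quadratic (tridiagonal) system whose weights come from the gradients of the previous iterate, so flat regions are smoothed strongly while sharp edges are preserved. The signal, noise level, and λ below are illustrative, not tied to penumbral imaging data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
f_clean = np.where(np.arange(n) < 50, 0.0, 1.0)   # step edge
f = f_clean + 0.1 * rng.standard_normal(n)         # noisy observation

lam, eps = 0.5, 1e-3
u = f.copy()
for _ in range(30):                 # lagged-diffusivity fixed-point iterations
    g = np.diff(u)
    w = 1.0 / (np.abs(g) + eps)     # diffusivity from the current gradient
    # Solve the weighted quadratic surrogate (I + lam * L_w) u = f,
    # where L_w is the tridiagonal weighted Laplacian.
    main = np.ones(n)
    main[:-1] += lam * w
    main[1:] += lam * w
    A = np.diag(main) + np.diag(-lam * w, 1) + np.diag(-lam * w, -1)
    u = np.linalg.solve(A, f)

err_noisy = np.linalg.norm(f - f_clean)
err_tv = np.linalg.norm(u - f_clean)
```

Large gradients get small weights (weak diffusion, edge kept); small noisy gradients get large weights (strong diffusion, noise removed).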
Yang, Alice C; Kretzler, Madison; Sudarski, Sonja; Gulani, Vikas; Seiberlich, Nicole
2016-06-01
The family of sparse reconstruction techniques, including the recently introduced compressed sensing framework, has been extensively explored to reduce scan times in magnetic resonance imaging (MRI). While there are many different methods that fall under the general umbrella of sparse reconstructions, they all rely on the idea that a priori information about the sparsity of MR images can be used to reconstruct full images from undersampled data. This review describes the basic ideas behind sparse reconstruction techniques, how they could be applied to improve MRI, and the open challenges to their general adoption in a clinical setting. The fundamental principles underlying different classes of sparse reconstruction techniques are examined, and the requirements that each makes on the undersampled data are outlined. Applications that could potentially benefit from the accelerations that sparse reconstructions provide are described, and clinical studies using sparse reconstructions are reviewed. Lastly, technical and clinical challenges to the widespread implementation of sparse reconstruction techniques, including optimization, reconstruction times, artifact appearance, and comparison with current gold standards, are discussed.
Keller, Susanna R.; Lee, Jae K.
2017-01-01
Different computational approaches have been examined and compared for inferring network relationships from time-series genomic data on human disease mechanisms under the recent Dialogue on Reverse Engineering Assessment and Methods (DREAM) challenge. Many of these approaches infer all possible relationships among all candidate genes, often resulting in extremely crowded candidate network relationships with many more false positives than true positives. To overcome this limitation, we introduce a novel approach, Module Anchored Network Inference (MANI), which constructs networks by sequentially analyzing small adjacent building blocks (modules). Using MANI, we inferred a 7-gene adipogenesis network based on time-series gene expression data during adipocyte differentiation. MANI was also applied to infer two 10-gene networks based on time-course perturbation datasets from the DREAM3 and DREAM4 challenges. MANI inferred and distinguished serial, parallel, and time-dependent gene interactions and network cascades well in these applications, showing superior performance to other in silico network inference techniques for discovering and reconstructing gene network relationships. PMID:28197408
NASA Astrophysics Data System (ADS)
Liu, Ming; Qin, Zhuanping; Jia, Mengyu; Zhao, Huijuan; Gao, Feng
2015-03-01
A two-layered slab is a rational simplified sample for near-infrared functional brain imaging using diffuse optical tomography (DOT). The quality of reconstructed images is substantially affected by the accuracy of the background optical properties. In this paper, a region-stepwise reconstruction method is proposed for reconstructing the background optical properties of a two-layered slab sample with known geometric information based on continuous-wave (CW) DOT. The optical properties of the top and bottom layers are reconstructed separately, using different source-detector-separation groups chosen according to the depth of maximum brain sensitivity of each separation. We demonstrate the feasibility of the proposed method and investigate the applicable range of the source-detector-separation groups through numerical simulations. The numerical simulation results indicate that the proposed method can effectively reconstruct the background optical properties of a two-layered slab sample. The relative reconstruction errors are less than 10% when the thickness of the top layer is approximately 10 mm. The reconstruction of a target caused by brain activation is investigated with the reconstructed optical properties as well. The quantitativeness ratio of the ROI is about 80%, which is higher than that of the conventional method. The spatial resolution of the reconstructions (R) with two targets is investigated, and it demonstrates that R with the proposed method is also better than that with the conventional method.
The New Method of Tsunami Source Reconstruction With r-Solution Inversion Method
NASA Astrophysics Data System (ADS)
Voronina, T. A.; Romanenko, A. A.
2016-12-01
Application of the r-solution method to reconstructing the initial tsunami waveform is discussed. This methodology is based on the inversion of remote measurements of water-level data. The wave propagation is considered within the scope of linear shallow-water theory. The ill-posed inverse problem in question is regularized by means of a least squares inversion using the truncated Singular Value Decomposition method. As a result of the numerical process, an r-solution is obtained. The proposed method allows one to control the instability of the numerical solution and to obtain an acceptable result in spite of the ill-posedness of the problem. Applying this methodology to the reconstruction of the initial waveform of the 2013 Solomon Islands tsunami validates the theoretical conclusions drawn from synthetic data and a model tsunami source: the inversion result depends strongly on the noisiness of the data and on the azimuthal and temporal coverage of the recording stations with respect to the source area. Furthermore, it is possible to make a preliminary selection of the most informative set of the available recording stations to use in the inversion process.
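An r-solution is simply a truncated-SVD inverse: keep the r largest singular triplets and discard the small, noise-amplifying ones. The sketch below uses a Gaussian smoothing operator as a stand-in for the shallow-water propagation operator; all sizes and the noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40
x = np.linspace(0.0, 1.0, n)
# Ill-conditioned smoothing operator, a stand-in for the linear
# shallow-water operator mapping the source to water-level records.
A = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.005)
u_true = np.exp(-((x - 0.5) ** 2) / 0.04)          # model tsunami source
b = A @ u_true + 1e-3 * rng.standard_normal(n)     # noisy water-level data

U, s, Vt = np.linalg.svd(A)

def r_solution(r):
    """Keep the r largest singular triplets; drop the unstable rest."""
    return Vt[:r].T @ ((U[:, :r].T @ b) / s[:r])

u_full = r_solution(n)     # naive inversion: noise blows up on tiny s_i
u_r = r_solution(10)       # truncated r-solution: stable
err_full = np.linalg.norm(u_full - u_true)
err_r = np.linalg.norm(u_r - u_true)
```

Dividing the data coefficients by near-zero singular values amplifies the noise catastrophically, which is exactly what the truncation controls.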
Simultaneous denoising and reconstruction of 5-D seismic data via damped rank-reduction method
NASA Astrophysics Data System (ADS)
Chen, Yangkang; Zhang, Dong; Jin, Zhaoyu; Chen, Xiaohong; Zu, Shaohuan; Huang, Weilin; Gan, Shuwei
2016-09-01
The Cadzow rank-reduction method can be effectively utilized to simultaneously denoise and reconstruct 5-D seismic data that depend on four spatial dimensions. The classic version of the Cadzow rank-reduction method arranges the 4-D spatial data into a level-four block Hankel/Toeplitz matrix and then applies truncated singular value decomposition (TSVD) for rank reduction. When the observed data are extremely noisy, as is often the case for real seismic data, traditional TSVD is not adequate for attenuating the noise and reconstructing the signals. The data reconstructed with the traditional TSVD method tend to contain a significant amount of residual noise, which can be explained by the fact that the reconstructed data space is a mixture of both signal subspace and noise subspace. In order to better decompose the block Hankel matrix into signal and noise components, we introduce a damping operator into the traditional TSVD formula, which we call the damped rank-reduction method. The damped rank-reduction method can obtain nearly perfect reconstruction performance even when the observed data have an extremely low signal-to-noise ratio. The feasibility of the improved 5-D seismic data reconstruction method was validated via both 5-D synthetic and field data examples. We present a comprehensive analysis of the data examples and derive valuable experience and guidelines for better utilizing the proposed method in practice. Since the proposed method is convenient to implement and can achieve immediate improvement, we suggest its wide application in the industry.
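The damping idea can be illustrated on a 1-D trace, where a noisy sinusoid has a low-rank Hankel embedding. This is a toy analogue only; the paper operates on level-four block Hankel matrices of 5-D data, and the helper name is an assumption:

```python
import numpy as np

def damped_rank_reduce(d, rank, K=4):
    """1-D toy analogue of damped rank reduction: embed a noisy trace in
    a Hankel matrix, truncate its SVD to `rank`, damp the kept singular
    values using the largest discarded one as a noise estimate, then
    average anti-diagonals back to a trace. K is the damping factor."""
    n = len(d)
    m = n // 2 + 1
    H = np.array([d[i:i + n - m + 1] for i in range(m)])  # Hankel embedding
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    tau = s[rank] if rank < len(s) else 0.0
    # Damping: shrink each kept singular value toward zero based on tau.
    s_damped = s[:rank] * np.maximum(0.0, 1.0 - (tau / s[:rank]) ** K)
    H_d = (U[:, :rank] * s_damped) @ Vt[:rank]
    # Invert the embedding by averaging along anti-diagonals.
    out, cnt = np.zeros(n), np.zeros(n)
    for i in range(H_d.shape[0]):
        for j in range(H_d.shape[1]):
            out[i + j] += H_d[i, j]
            cnt[i + j] += 1
    return out / cnt

# A noisy sinusoid has a rank-2 Hankel embedding; denoise it.
rng = np.random.default_rng(1)
clean = np.cos(0.4 * np.arange(200))
noisy = clean + 0.5 * rng.standard_normal(200)
denoised = damped_rank_reduce(noisy, rank=2)
```

Setting `K` large recovers plain TSVD, since the damping factor then approaches one for all retained singular values above the noise level.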
A Parallel Reconstructed Discontinuous Galerkin Method for the Compressible Flows on Arbitrary Grids
Hong Luo; Amjad Ali; Robert Nourgaliev; Vincent A. Mousseau
2010-01-01
A reconstruction-based discontinuous Galerkin (RDG) method is presented for the solution of the compressible Navier-Stokes equations on arbitrary grids. In this method, an in-cell reconstruction is used to obtain a higher-order polynomial representation of the underlying discontinuous Galerkin polynomial solution, and an inter-cell reconstruction is used to obtain a continuous polynomial solution on the union of two neighboring, interface-sharing cells. The in-cell reconstruction is designed to enhance the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. The inter-cell reconstruction is devised to remove the interface discontinuity of the solution and its derivatives and thus to provide a simple, accurate, consistent, and robust approximation to the viscous and heat fluxes in the Navier-Stokes equations. A parallel strategy is also devised for the resulting RDG method, based on domain partitioning and the Single Program Multiple Data (SPMD) parallel programming model. The RDG method is used to compute a variety of compressible flow problems on arbitrary meshes to demonstrate its accuracy, efficiency, robustness, and versatility. The numerical results demonstrate that this RDG method is third-order accurate at a cost only slightly higher than that of its underlying second-order DG method, while providing better performance than the third-order DG method in terms of both computing costs and storage requirements.
Nishimaru, Eiji; Ichikawa, Katsuhiro; Hara, Takanori; Terakawa, Shoichi; Yokomachi, Kazushi; Fujioka, Chikako; Kiguchi, Masao; Ishifuro, Minoru
2012-01-01
Adaptive iterative reconstruction techniques (IRs) can decrease image noise in computed tomography (CT) and are expected to contribute to reduction of the radiation dose. To evaluate the performance of IRs, the conventional two-dimensional (2D) noise power spectrum (NPS) is widely used. However, when an IR produces an NPS value drop at all spatial frequencies (similar to the NPS change caused by a dose increase), the conventional method cannot evaluate the noise property correctly, because it does not account for the volume-data nature of CT images. The purpose of our study was to develop a new method for NPS measurement that can be adapted to IRs. Our method utilizes thick multi-planar reconstruction (MPR) images. Thick images are generally made by averaging CT volume data in the direction perpendicular to the MPR plane (e.g. the z-direction for an axial MPR plane). By using this averaging technique as a cutter for the 3D NPS, we can obtain an adequate 2D extracted NPS (eNPS) from the 3D NPS. We applied this method to IR images generated with adaptive iterative dose reduction 3D (AIDR-3D, Toshiba) to investigate its validity. A water phantom with a 24 cm diameter was scanned at 120 kV and 200 mAs with a 320-row CT scanner (Aquilion ONE, Toshiba). From the results of the study, the adequate thickness of MPR images for the eNPS was more than 25.0 mm. Our new NPS measurement method utilizing thick MPR images was accurate and effective for evaluating the noise reduction effects of IRs.
NASA Astrophysics Data System (ADS)
Song, Xizi; Xu, Yanbin; Dong, Feng
2017-04-01
Electrical resistance tomography (ERT) is a promising measurement technique with important industrial and clinical applications. However, with limited effective measurements, it suffers from poor spatial resolution due to the ill-posedness of the inverse problem. Recently, there has been increasing research interest in hybrid imaging techniques that couple physical modalities, because these techniques obtain much more effective measurement information and promise high resolution. Ultrasound modulated electrical impedance tomography (UMEIT) is one of the newly developed hybrid imaging techniques, combining electric and acoustic modalities. A linearized image reconstruction method based on power density is proposed for UMEIT. The interior data, the power density distribution, are adopted to reconstruct the conductivity distribution with the proposed image reconstruction method. At the same time, relating the power density change to the change in conductivity, the Jacobian matrix is employed to turn the nonlinear problem into a linear one. The analytic formulation of this Jacobian matrix is derived and its effectiveness verified. In addition, different excitation patterns are tested and analyzed, and opposite excitation provides the best performance with the proposed method. Multiple power density distributions are also combined to implement image reconstruction. Finally, image reconstruction is implemented with the linear back-projection (LBP) algorithm. Compared with ERT, UMEIT with the proposed image reconstruction method can produce reconstructed images with higher quality and better quantitative evaluation results.
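The final step named in this abstract, linear back-projection, amounts to smearing each measurement change back along its sensitivity map (a row of the Jacobian). A minimal sketch; the per-pixel normalization is a common convention, not necessarily the paper's exact scaling:

```python
import numpy as np

def linear_back_projection(J, delta_v):
    """One-step LBP: back-project the measurement changes delta_v
    through the sensitivity (Jacobian) matrix J and normalize each
    pixel by its total sensitivity. Illustrative sketch only."""
    weights = np.abs(J).sum(axis=0)
    return (J.T @ delta_v) / np.where(weights > 0.0, weights, 1.0)
```

Because it is a single matrix product, LBP is fast but qualitative; iterative linearized solvers trade speed for accuracy.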
NASA Astrophysics Data System (ADS)
Ramlau, R.; Saxenhuber, D.; Yudytskiy, M.
2014-07-01
The problem of atmospheric tomography arises in ground-based telescope imaging with adaptive optics (AO), where one aims to compensate in real time for the rapidly changing optical distortions in the atmosphere. Many of these systems depend on a sufficiently good reconstruction of the turbulence profiles in order to obtain a good correction. Due to steadily growing telescope sizes, there is a strong increase in the computational load for atmospheric reconstruction with current methods, first and foremost the matrix-vector multiplication (MVM). In this paper we present and compare three novel iterative reconstruction methods. The first iterative approach is the Finite Element-Wavelet Hybrid Algorithm (FEWHA), which combines wavelet-based techniques and conjugate gradient schemes to efficiently and accurately tackle the problem of atmospheric reconstruction. The method is extremely fast, highly flexible, and yields superior quality. Another novel iterative reconstruction algorithm is the three-step approach, which decouples the problem into the reconstruction of the incoming wavefronts, the reconstruction of the turbulent layers (atmospheric tomography), and the computation of the best mirror correction (fitting step). For the atmospheric tomography problem within the three-step approach, the Kaczmarz algorithm and the gradient-based method have been developed. We present a detailed comparison of our reconstructors in terms of both quality and speed in the context of a Multi-Object Adaptive Optics (MOAO) system for the E-ELT setting on OCTOPUS, the ESO end-to-end simulation tool.
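Of the solvers named for the tomography step, the Kaczmarz iteration is the simplest to sketch: it cyclically projects the iterate onto the hyperplane defined by each row equation. A generic dense-matrix version, purely illustrative (the AO problem is far larger and structured):

```python
import numpy as np

def kaczmarz(A, b, sweeps=200):
    """Cyclic Kaczmarz iteration for A x = b: for each row a_i, project
    the current iterate onto the hyperplane {x : a_i . x = b_i}."""
    x = np.zeros(A.shape[1])
    row_norms = (A * A).sum(axis=1)
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = rng.standard_normal(10)
x_hat = kaczmarz(A, A @ x_true)
```

Each update touches only one row, which is what makes row-action methods attractive when the full system is too large to form or factor.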
Myers, Glenn R.; Thomas, C. David L.; Clement, John G.; Paganin, David M.; Gureyev, Timur E.
2010-01-11
We present a method for tomographic reconstruction of objects containing several distinct materials, which is capable of accurately reconstructing a sample from vastly fewer angular projections than required by conventional algorithms. The algorithm is more general than many previous discrete tomography methods, as: (i) a priori knowledge of the exact number of materials is not required; (ii) the linear attenuation coefficient of each constituent material may assume a small range of a priori unknown values. We present reconstructions from an experimental x-ray computed tomography scan of cortical bone acquired at the SPring-8 synchrotron.
Image Reconstruction in SNR Units: A General Method for SNR Measurement
Kellman, Peter; McVeigh, Elliot R.
2007-01-01
The method for phased array image reconstruction of uniform noise images may be used in conjunction with proper image scaling as a means of reconstructing images directly in SNR units. This facilitates accurate and precise SNR measurement on a per pixel basis. This method is applicable to root-sum-of-squares magnitude combining, B1-weighted combining, and parallel imaging such as SENSE. A procedure for image reconstruction and scaling is presented, and the method for SNR measurement is validated with phantom data. Alternative methods that rely on noise only regions are not appropriate for parallel imaging where the noise level is highly variable across the field-of-view. The purpose of this article is to provide a nuts and bolts procedure for calculating scale factors used for reconstructing images directly in SNR units. The procedure includes scaling for noise equivalent bandwidth of digital receivers, FFTs and associated window functions (raw data filters), and array combining. PMID:16261576
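The core of reconstructing "in SNR units" is to prewhiten the coil channels with the measured noise covariance so each has unit-variance noise, after which a root-sum-of-squares combination is directly a per-pixel SNR map. A minimal sketch of that idea; the paper's full recipe also tracks scale factors from receiver bandwidth, FFTs, and raw-data filters:

```python
import numpy as np

def rss_in_snr_units(coil_imgs, noise_cov):
    """coil_imgs: (ncoils, ny, nx) complex channel images.
    noise_cov:   (ncoils, ncoils) channel noise covariance.
    Prewhiten with the Cholesky factor, then root-sum-of-squares
    combine; the result is a per-pixel SNR map. Sketch only."""
    ncoils = coil_imgs.shape[0]
    # After solving against L, every channel has unit-variance,
    # uncorrelated noise.
    L = np.linalg.cholesky(noise_cov)
    flat = coil_imgs.reshape(ncoils, -1)
    white = np.linalg.solve(L, flat).reshape(coil_imgs.shape)
    return np.sqrt((np.abs(white) ** 2).sum(axis=0))

# Single coil, noise standard deviation 2: a pixel of signal 8 -> SNR 4.
snr = rss_in_snr_units(np.full((1, 1, 1), 8.0 + 0j), np.array([[4.0 + 0j]]))
```

The same prewhitening step applies before B1-weighted or SENSE combining; only the combination weights change.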
Reproducibility of the Structural Connectome Reconstruction across Diffusion Methods.
Prčkovska, Vesna; Rodrigues, Paulo; Puigdellivol Sanchez, Ana; Ramos, Marc; Andorra, Magi; Martinez-Heras, Eloy; Falcon, Carles; Prats-Galino, Albert; Villoslada, Pablo
2016-01-01
Analysis of structural connectomes can lead to powerful insights about the brain's organization and damage. However, the accuracy and reproducibility of constructing the structural connectome with different acquisition and reconstruction techniques are not well defined. In this work, we evaluated the reproducibility of structural connectome techniques by performing test-retest (same day) and longitudinal (after 1 month) studies, as well as by analyzing graph-based measures, on data acquired from 22 healthy volunteers (6 subjects were used for the longitudinal study). We compared connectivity matrices and tract reconstructions obtained with the acquisition schemes most typically used in clinical application: diffusion tensor imaging (DTI), high angular resolution diffusion imaging (HARDI), and diffusion spectrum imaging (DSI). We observed that all techniques showed high reproducibility in the test-retest analysis (correlation > 0.9). However, HARDI was the only technique with low variability (2%) in the longitudinal assessment (1-month interval). The intraclass coefficient analysis showed the highest reproducibility for the DTI connectome, however with sparser connections than HARDI and DSI. Qualitative (neuroanatomical) assessment of selected tracts confirmed the quantitative results, showing that HARDI managed to detect most of the analyzed fiber groups and fanning fibers. In conclusion, we found that HARDI acquisition showed the most balanced trade-off between high reproducibility of the connectome, a higher rate of detection of paths and fanning fibers, and intermediate acquisition times (10-15 minutes), although at the cost of a higher incidence of aberrant fibers.
Validation of plasma shape reconstruction by Cauchy condition surface method in KSTAR
Miyata, Y.; Suzuki, T.; Ide, S.; Hahn, S. H.; Chung, J.; Bak, J. G.; Ko, W. H.
2014-03-15
The Cauchy Condition Surface (CCS) method is a numerical approach to reconstructing the plasma boundary and calculating quantities related to the plasma shape from magnetic diagnostics in real time. It has been applied to the KSTAR plasma for the first time, in order to establish plasma shape reconstruction under the high elongation of the plasma shape and the large effect of eddy currents flowing in the tokamak structures. For applying the CCS calculation to the KSTAR plasma, the effects of the eddy currents and of the ferromagnetic materials on the plasma shape reconstruction are studied. The CCS calculation includes the effect of eddy currents and excludes the magnetic diagnostics that are expected to be strongly influenced by ferromagnetic materials. Calculations have been performed to validate the plasma shape reconstruction in the 2012 KSTAR experimental campaign. Comparison between the CCS calculation and non-magnetic measurements revealed that the CCS calculation can reconstruct the plasma shape accurately even with a small plasma current I_P.
Yuan, Zhen; Zhang, Jiang; Wang, Xiaodong; Li, Changqing
2014-01-01
We conducted a systematic investigation of reflectance diffuse optical tomography using continuous-wave (CW) measurements and nonlinear reconstruction algorithms. We illustrate and suggest how to fine-tune the nonlinear reconstruction methods in order to optimize target localization with depth-adaptive regularizations, reduce boundary noise in the reconstructed images using a logarithm-based objective function, improve reconstruction quantification using transport models, and resolve crosstalk problems between absorption and scattering contrasts with the CW reflectance measurements. The upgraded nonlinear reconstruction algorithms were evaluated with a series of numerical and experimental tests, which show the potential of the proposed approaches for imaging both absorption and scattering contrasts in deep targets with enhanced image quality. PMID:25401014
MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions
NASA Astrophysics Data System (ADS)
Novosad, Philip; Reader, Andrew J.
2016-06-01
Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [18F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral
NASA Astrophysics Data System (ADS)
Zhu, Dianwen; Zhang, Wei; Zhao, Yue; Li, Changqing
2016-03-01
Dynamic fluorescence molecular tomography (FMT) has the potential to quantify physiological or biochemical information, known as pharmacokinetic parameters, which are important for cancer detection, drug development and delivery, etc. To image those parameters, there are indirect methods, which are easier to implement but tend to provide images with a low signal-to-noise ratio, and direct methods, which model all the measurement noise together and are statistically more efficient. Direct reconstruction methods in dynamic FMT have attracted a lot of attention recently. However, the coupling of tomographic image reconstruction with the nonlinearity of kinetic parameter estimation due to the compartment modeling has imposed a huge computational burden on the direct reconstruction of kinetic parameters. In this paper, we propose to take advantage of both the direct and indirect reconstruction ideas through a variable splitting strategy under the augmented Lagrangian framework. Each iteration of the direct reconstruction is split into two steps: dynamic FMT image reconstruction and node-wise nonlinear least-squares fitting of the pharmacokinetic parameter images. Through numerical simulation studies, we have found that the proposed algorithm can achieve good reconstruction results within a small amount of time. This will be the first step toward combined dynamic PET and FMT imaging in the future.
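The splitting pattern described, alternating a linear reconstruction step with a separate fitting step coupled through an augmented Lagrangian penalty, is the standard ADMM template. A minimal two-block example under stated assumptions: a nonnegativity projection stands in for the node-wise kinetic fitting, and all names and the toy problem are illustrative:

```python
import numpy as np

def admm_split(A, b, rho=1.0, iters=300):
    """min ||A x - b||^2 subject to x >= 0, split as x = z.
    x-step: penalized least squares (the 'image reconstruction' block);
    z-step: projection (the stand-in for the nonlinear fitting block);
    u: scaled dual variable enforcing consensus x = z."""
    n = A.shape[1]
    x = z = u = np.zeros(n)
    M = A.T @ A + rho * np.eye(n)  # x-step system matrix, precomputed
    for _ in range(iters):
        x = np.linalg.solve(M, A.T @ b + rho * (z - u))
        z = np.maximum(0.0, x + u)
        u = u + x - z
    return z

x_nn = admm_split(np.eye(2), np.array([2.0, -3.0]))
```

The appeal of the split is that each block stays simple on its own: the x-step is linear even when the z-step (here a projection, in the paper a compartment-model fit) is not.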
Time-frequency manifold sparse reconstruction: A novel method for bearing fault feature extraction
NASA Astrophysics Data System (ADS)
Ding, Xiaoxi; He, Qingbo
2016-12-01
In this paper, a novel transient signal reconstruction method, called time-frequency manifold (TFM) sparse reconstruction, is proposed for bearing fault feature extraction. This method introduces image sparse reconstruction into the TFM analysis framework. Owing to the excellent denoising performance of the TFM, a more effective time-frequency (TF) dictionary can be learned from the TFM signature by image sparse decomposition based on orthogonal matching pursuit (OMP). Then, the TF distribution (TFD) of the raw signal in a reconstructed phase space is re-expressed as the sum of the learned TF atoms multiplied by the corresponding coefficients. Finally, the one-dimensional signal is recovered by the inverse process of TF analysis (TFA). Meanwhile, the amplitude information of the raw signal is well reconstructed. The proposed technique combines the merits of the TFM in denoising and of atomic decomposition in image sparse reconstruction. Moreover, the combination makes it possible to express the nonlinear signal processing results explicitly in theory. The effectiveness of the proposed TFM sparse reconstruction method is verified by experimental analysis for bearing fault feature extraction.
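The sparse decomposition engine named here, orthogonal matching pursuit, can be sketched generically: greedily select the atom most correlated with the residual, then re-fit all selected coefficients by least squares. It is checked below on a trivially orthonormal dictionary; the paper learns its dictionary from the TFM signature:

```python
import numpy as np

def omp(D, y, n_atoms):
    """Orthogonal matching pursuit on dictionary D (columns = atoms)."""
    support = []
    residual = y.astype(float)
    for _ in range(n_atoms):
        # Atom most correlated with the current residual.
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        # Re-fit coefficients of ALL chosen atoms by least squares.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    full = np.zeros(D.shape[1])
    full[support] = coef
    return full

y = np.zeros(10)
y[[1, 4, 7]] = [3.0, -2.0, 1.0]
code = omp(np.eye(10), y, 3)
```

The least-squares re-fit at every step is what distinguishes OMP from plain matching pursuit and keeps the residual orthogonal to the selected atoms.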
Stable-phase method for hierarchical annealing in the reconstruction of porous media images.
Chen, Dongdong; Teng, Qizhi; He, Xiaohai; Xu, Zhi; Li, Zhengji
2014-01-01
In this paper, we introduce a stable-phase approach for hierarchical annealing that addresses the very large computational costs associated with simulated annealing for the reconstruction of large-scale binary porous media images. Our method, which uses the two-point correlation function as the morphological descriptor, involves the reconstruction of three-phase and two-phase structures. We consider reconstructing the three-phase structures based on standard annealing and the two-phase structures based on standard and hierarchical annealing. From the result of the two-dimensional (2D) reconstruction, we find that the 2D generation does not fully capture the morphological information of the original image, even though the two-point correlation function of the reconstruction is in excellent agreement with that of the reference image. For the reconstructed three-dimensional (3D) microstructure, we calculate its permeability and compare it to that of the reference 3D microstructure. The result indicates that the reconstructed structure has a lower degree of connectedness than the actual sandstone. We also compare the computation time of our method to that of standard annealing, which shows that our method improves the convergence rate by orders of magnitude. This is because only a small part of the pixels in the overall hierarchy needs to be considered for sampling by the annealer.
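The morphological descriptor being matched, the two-point correlation function S2(r), is the probability that two points a distance r apart both fall in phase 1. An axis-averaged sketch for a periodic binary image (the annealer then perturbs pixels to drive the reconstruction's S2 toward the reference's):

```python
import numpy as np

def two_point_correlation(img, max_r):
    """S2(r) of a binary image along the two axes (periodic boundaries),
    averaged: fraction of pixel pairs at separation r that are both in
    phase 1. Axis-averaged sketch, not a full isotropic average."""
    img = img.astype(float)
    s2 = np.empty(max_r + 1)
    for r in range(max_r + 1):
        sx = (img * np.roll(img, r, axis=0)).mean()
        sy = (img * np.roll(img, r, axis=1)).mean()
        s2[r] = 0.5 * (sx + sy)
    return s2
```

A quick sanity check: S2(0) equals the phase-1 volume fraction, and S2(r) tends toward the volume fraction squared at large r for a disordered medium.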
Lu, Huancai; Wu, Sean F
2009-03-01
The vibroacoustic responses of a highly nonspherical vibrating object are reconstructed using the Helmholtz equation least-squares (HELS) method. The objectives of this study are to examine the accuracy of reconstruction and the impacts of the various parameters involved in reconstruction using HELS. The test object is a simply supported and baffled thin plate. The reason for selecting this object is that it represents a class of structures that cannot be exactly described by the spherical Hankel functions and spherical harmonics taken as the basis functions in the HELS formulation, yet the analytic solutions for the vibroacoustic responses of a baffled plate are readily available, so the accuracy of reconstruction can be rigorously checked. The input field acoustic pressures for reconstruction are generated by the Rayleigh integral. The reconstructed normal surface velocities are validated against the benchmark values, and the out-of-plane vibration patterns at several natural frequencies are compared with the natural modes of a simply supported plate. The impacts on the resulting reconstruction accuracy of various parameters, such as the number of measurement points, the measurement distance, the location of the origin of the coordinate system, the microphone spacing, and the ratio of measurement aperture size to the area of the source surface, are examined.
NASA Astrophysics Data System (ADS)
Porch, Nick
2010-03-01
If Quaternary palaeoclimatic reconstructions are to be adequately contextualised, it is vital that the nature of modern datasets, and the limitations this places on interpreting Quaternary climates, are made explicit; such issues are too infrequently considered. This paper describes a coexistence method for the reconstruction of past temperature and precipitation parameters in Australia using fossil beetles. It presents the context for Quaternary palaeoclimatic reconstruction in terms of climate space, bioclimatic envelope data derived from modern beetle distributions, and the palaeoclimatic limitations of bioclimatic envelope-based reconstructions. Tests in modern climate space, using bioclimatic envelope data for 734 beetle taxa and 54 site-based assemblages from across the continent, indicate that modern seasonal, especially summer, temperatures and precipitation are accurately and, in the case of temperature, precisely reconstructed. The limitations of modern climate space, especially the limited seasonal variation in thermal regimes and the consequent lack of cold winters in the Australian region, render winter predictions potentially unreliable when applied to the Quaternary record.
NASA Astrophysics Data System (ADS)
Ko, Han Seo; Gim, Yeonghyeon; Kang, Seung-Hwan
2015-11-01
A three-dimensional optical correction method was developed to reconstruct droplet-based flow fields. For a numerical simulation, synthetic phantoms were reconstructed by a simultaneous multiplicative algebraic reconstruction technique using three projection images positioned at an offset angle of 45°. If the synthetic phantom lies in a conical object whose refractive index differs from that of the atmosphere, the image can be distorted because light is refracted at the surface of the conical object. Thus, the direction of the projection ray was replaced by the refracted ray arising at the surface of the conical object. To validate the method accounting for this distortion effect, reconstruction results of the developed method were compared with the original phantom. As a result, the reconstruction obtained with the method showed a smaller error than that without it. The method was applied to a Taylor cone, produced by a high voltage between a droplet and a substrate, to reconstruct the three-dimensional flow fields for analysis of the characteristics of the droplet. This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Korean government (MEST) (No. 2013R1A2A2A01068653).
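The tomography step named here, a simultaneous multiplicative ART, updates every pixel each iteration with a multiplicative correction built from the ratio of measured to reprojected values. A generic small-scale sketch assuming a nonnegative system matrix and positive data; the paper additionally traces the refracted rays through the cone:

```python
import numpy as np

def smart(A, p, iters=2000, lam=0.5):
    """Simultaneous multiplicative ART for A x = p with A >= 0, p > 0:
    each pixel is corrected by a weighted geometric mean of the
    measured/reprojected ratios over all rays, relaxation lam."""
    x = np.ones(A.shape[1])
    col_sums = A.sum(axis=0)
    for _ in range(iters):
        ratio = p / (A @ x)  # measured over reprojected, per ray
        x *= np.exp(lam * (A.T @ np.log(ratio)) / col_sums)
    return x

# Tiny consistent system: two pixels seen by three rays.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_hat = smart(A, A @ np.array([2.0, 3.0]))
```

The multiplicative form keeps the iterate strictly positive, which suits projection data such as emission or absorption line integrals.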
Apparatus And Method For Reconstructing Data Using Cross-Parity Stripes On Storage Media
Hughes, James Prescott
2003-06-17
An apparatus and method for reconstructing missing data using cross-parity stripes on a storage medium is provided. The apparatus and method may operate on data symbols having sizes greater than a data bit. The apparatus and method makes use of a plurality of parity stripes for reconstructing missing data stripes. The parity symbol values in the parity stripes are used as a basis for determining the value of the missing data symbol in a data stripe. A correction matrix is shifted along the data stripes, correcting missing data symbols as it is shifted. The correction is performed from the outside data stripes towards the inner data stripes to thereby use previously reconstructed data symbols to reconstruct other missing data symbols.
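The rebuild rule at the heart of any parity scheme is that a parity stripe is the XOR of the data stripes, so one missing stripe is the XOR of the parity with the survivors. A single-parity sketch with illustrative names; the patent generalizes this to multiple diagonal cross-parity stripes and symbols wider than one bit:

```python
def rebuild_stripe(stripes, parity, missing):
    """Recover data stripe `missing` from a single parity stripe:
    XOR the parity with every surviving data stripe, byte by byte."""
    rebuilt = bytearray(parity)
    for idx, stripe in enumerate(stripes):
        if idx == missing:
            continue
        for j, byte in enumerate(stripe):
            rebuilt[j] ^= byte
    return bytes(rebuilt)

stripes = [b"\x0f\xf0", b"\x33\x55", b"\xaa\x01"]
parity = bytes(a ^ b ^ c for a, b, c in zip(*stripes))  # write-time parity
```

With several cross-parity stripes laid out diagonally, the same XOR relation yields a system that can be solved stripe by stripe from the outside in, which is the shifting correction-matrix procedure the abstract describes.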
Network Reconstruction Using Nonparametric Additive ODE Models
Henderson, James; Michailidis, George
2014-01-01
Network representations of biological systems are widespread and reconstructing unknown networks from data is a focal problem for computational biologists. For example, the series of biochemical reactions in a metabolic pathway can be represented as a network, with nodes corresponding to metabolites and edges linking reactants to products. In a different context, regulatory relationships among genes are commonly represented as directed networks with edges pointing from influential genes to their targets. Reconstructing such networks from data is a challenging problem receiving much attention in the literature. There is a particular need for approaches tailored to time-series data and not reliant on direct intervention experiments, as the former are often more readily available. In this paper, we introduce an approach to reconstructing directed networks based on dynamic systems models. Our approach generalizes commonly used ODE models based on linear or nonlinear dynamics by extending the functional class for the functions involved from parametric to nonparametric models. Concomitantly, we limit the complexity by imposing an additive structure on the estimated slope functions, so that the submodel associated with each node is a sum of univariate functions. These univariate component functions form the basis for a novel coupling metric that we define in order to quantify the strength of proposed relationships and hence rank potential edges. We show the utility of the method by reconstructing networks using simulated data from computational models for the glycolytic pathway of Lactococcus lactis and a gene network regulating the pluripotency of mouse embryonic stem cells. For purposes of comparison, we also assess reconstruction performance using gene networks from the DREAM challenges. We compare our method to those that similarly rely on dynamic systems models and use the results to attempt to disentangle the distinct roles of linearity, sparsity, and derivative
NASA Astrophysics Data System (ADS)
Poudel, Joemini; Matthews, Thomas P.; Anastasio, Mark A.; Wang, Lihong V.
2016-03-01
Photoacoustic computed tomography (PACT) is an emerging computed imaging modality that exploits optical contrast and ultrasonic detection principles to form images of the absorbed optical energy density within tissue. If the object possesses spatially variant acoustic properties that are unaccounted for by the reconstruction algorithm, the estimated image can contain distortions. While reconstruction algorithms have recently been developed to compensate for this effect, they generally require the object's acoustic properties to be known a priori. To circumvent the need for detailed information regarding an object's acoustic properties, we previously proposed a half-time reconstruction method for PACT, which estimates the PACT image from a data set that has been temporally truncated to exclude the data components that have been strongly aberrated. In that approach, the degree of temporal truncation is the same for all measurements. However, this strategy can be improved upon when the approximate sizes and locations of strongly heterogeneous structures such as gas voids or bones are known. In this work, we investigate PACT reconstruction algorithms based on a variable temporal data truncation (VTDT) approach that generalizes the half-time reconstruction approach. In the VTDT approach, the degree of temporal truncation for each measurement is determined by the distance between the corresponding transducer location and the nearest known bone or gas void location. Reconstructed images from a numerical phantom are employed to demonstrate the feasibility and effectiveness of the approach.
New methods for the computer-assisted 3-D reconstruction of neurons from confocal image stacks.
Schmitt, Stephan; Evers, Jan Felix; Duch, Carsten; Scholz, Michael; Obermayer, Klaus
2004-12-01
Exact geometrical reconstructions of neuronal architecture are indispensable for the investigation of neuronal function. Neuronal shape is important for the wiring of networks, and dendritic architecture strongly affects neuronal integration and firing properties, as demonstrated by modeling approaches. Confocal microscopy allows neurons to be scanned with submicron resolution. However, it is still a tedious task to reconstruct complex dendritic trees with fine structures just above voxel resolution. We present a framework assisting the reconstruction. User time investment is strongly reduced by automatic methods, which fit a skeleton and a surface to the data, while the user can interact and thus keeps full control to ensure a high-quality reconstruction. The reconstruction process comprises a successive gain of metric parameters. First, a structural description of the neuron is built, including the topology and the exact dendritic lengths and diameters. We use generalized cylinders with circular cross sections. The user provides a rough initialization by marking the branching points. The axes and radii are fitted to the data by minimizing an energy functional, which is regularized by a smoothness constraint. The investigation of proximity to other structures throughout dendritic trees requires a precise surface reconstruction. In order to achieve an accuracy of 0.1 microm and below, we additionally implemented a segmentation algorithm based on geodesic active contours that allows for arbitrary cross sections and uses locally adapted thresholds. In summary, this new reconstruction tool saves time and increases quality as compared to other methods that have previously been applied to real neurons.
Adaptive Forward Modeling Method for Analysis and Reconstructions of Orientation Image Map
Frankie Li, Shiu Fai
2014-06-01
IceNine is an MPI-parallel orientation reconstruction and microstructure analysis code. Its primary purpose is to reconstruct a spatially resolved orientation map from a set of diffraction images from a high-energy x-ray diffraction microscopy (HEDM) experiment (1). In particular, IceNine implements the adaptive version of the forward modeling method (2, 3). Part of IceNine is a library used for combined analysis of the microstructure with the experimentally measured diffraction signal. The library is also designed for rapid prototyping of new reconstruction and analysis algorithms. IceNine is also built with a simulator that generates diffraction images from an input microstructure.
Hwang, Euna; Kim, Young Soo; Chung, Seum
2014-06-01
Before visiting a plastic surgeon, some microtia patients may undergo canaloplasty for hearing improvement. In such cases, scarred tissues and the reconstructed external auditory canal in the postauricular area may cause a significant limitation in using the posterior auricular skin flap for ear reconstruction. In this article, we present a new method for auricular reconstruction in microtia patients with previous canaloplasty. By dividing a postauricular skin flap into an upper scalp extended skin flap and a lower mastoid extended skin flap at the level of the reconstructed external auditory canal, the entire anterior surface of the auricular framework can be covered with the two extended postauricular skin flaps. The reconstructed ear shows good color match and texture, with the entire anterior surface of the reconstructed ear being resurfaced with the skin flaps. Clinical Question/Level of Evidence: Therapeutic, IV.
Chmiel, Z
1996-01-01
An original method for A1 retinaculum reconstruction of the flexor pollicis longus sheath with the extensor pollicis brevis tendon is presented. The reconstructed retinaculum is very strong. Loss of the extensor pollicis brevis did not impair thumb function.
Shang, Shang; Bai, Jing; Song, Xiaolei; Wang, Hongkai; Lau, Jaclyn
2007-01-01
The conjugate gradient method is known to be efficient for nonlinear optimization problems with large-dimensional data. In this paper, a penalized linear and nonlinear combined conjugate gradient method for the reconstruction of fluorescence molecular tomography (FMT) is presented. The algorithm combines the linear conjugate gradient method and the nonlinear conjugate gradient method based on a restart strategy, in order to take advantage of both kinds of conjugate gradient methods and compensate for their disadvantages. A quadratic penalty method is adopted to enforce a nonnegativity constraint and reduce the ill-posedness of the problem. Simulation studies show that the presented algorithm is accurate, stable, and fast. It performs better than conventional conjugate gradient-based reconstruction algorithms. It offers an effective approach to reconstructing fluorochrome information for FMT.
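As a minimal illustration of the restart idea (a sketch only; the paper's hybrid additionally switches between linear and nonlinear CG and carries the quadratic nonnegativity penalty), here is linear CG on the normal equations with a periodic steepest-descent restart:

```python
import numpy as np

def restarted_cg(A, b, iters=50, restart=10):
    """Linear CG on the normal equations H x = A^T b, with H = A^T A,
    restarted periodically with a steepest-descent direction.
    (Sketch only: the paper's method alternates linear and nonlinear CG
    via restarts and adds a quadratic penalty to enforce nonnegativity.)"""
    H, c = A.T @ A, A.T @ b
    x = np.zeros(A.shape[1])
    g = H @ x - c                # gradient of 0.5 * x^T H x - c^T x
    d = -g
    for k in range(iters):
        if g @ g < 1e-14:        # converged: avoid a 0/0 step size
            break
        Hd = H @ d
        alpha = (g @ g) / (d @ Hd)      # exact line search for quadratics
        x = x + alpha * d
        g_new = g + alpha * Hd
        if (k + 1) % restart == 0:
            d = -g_new                   # restart with steepest descent
        else:
            beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves update
            d = -g_new + beta * d
        g = g_new
    return x
```

Restarting discards stale conjugacy information, which is what makes the hybrid robust when the objective is not exactly quadratic (as with the penalty term active).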
NASA Astrophysics Data System (ADS)
Xu, Min; He, Kang-Lin; Zhang, Zi-Ping; Wang, Yi-Fang; Bian, Jian-Ming; Cao, Guo-Fu; Cao, Xue-Xiang; Chen, Shen-Jian; Deng, Zi-Yan; Fu, Cheng-Dong; Gao, Yuan-Ning; Han, Lei; Han, Shao-Qing; He, Miao; Hu, Ji-Feng; Hu, Xiao-Wei; Huang, Bin; Huang, Xing-Tao; Jia, Lu-Kui; Ji, Xiao-Bin; Li, Hai-Bo; Li, Wei-Dong; Liang, Yu-Tie; Liu, Chun-Xiu; Liu, Huai-Min; Liu, Ying; Liu, Yong; Luo, Tao; Lü, Qi-Wen; Ma, Qiu-Mei; Ma, Xiang; Mao, Ya-Jun; Mao, Ze-Pu; Mo, Xiao-Hu; Ning, Fei-Peng; Ping, Rong-Gang; Qiu, Jin-Fa; Song, Wen-Bo; Sun, Sheng-Sen; Sun, Xiao-Dong; Sun, Yong-Zhao; Tian, Hao-Lai; Wang, Ji-Ke; Wang, Liang-Liang; Wen, Shuo-Pin; Wu, Ling-Hui; Wu, Zhi; Xie, Yu-Guang; Yan, Jie; Yan, Liang; Yao, Jian; Yuan, Chang-Zheng; Yuan, Ye; Zhang, Chang-Chun; Zhang, Jian-Yong; Zhang, Lei; Zhang, Xue-Yao; Zhang, Yao; Zheng, Yang-Heng; Zhu, Yong-Sheng; Zou, Jia-Heng
2009-06-01
This paper focuses mainly on the vertex reconstruction of resonance particles with a relatively long lifetime such as K0S, Λ, as well as on lifetime measurements using a 3-dimensional fit. The kinematic constraints between the production and decay vertices and the decay vertex fitting algorithm based on the least squares method are both presented. Reconstruction efficiencies including experimental resolutions are discussed. The results and systematic errors are calculated based on a Monte Carlo simulation.
Novel l2,1-norm optimization method for fluorescence molecular tomography reconstruction
Jiang, Shixin; Liu, Jie; An, Yu; Zhang, Guanglei; Ye, Jinzuo; Mao, Yamin; He, Kunshan; Chi, Chongwei; Tian, Jie
2016-01-01
Fluorescence molecular tomography (FMT) is a promising tomographic method in preclinical research, which enables noninvasive real-time three-dimensional (3-D) visualization for in vivo studies. The ill-posedness of the FMT reconstruction problem is one of the many challenges in the studies of FMT. In this paper, we propose a l2,1-norm optimization method using a priori information, mainly the structured sparsity of the fluorescent regions for FMT reconstruction. Compared to standard sparsity methods, the structured sparsity methods are often superior in reconstruction accuracy since the structured sparsity utilizes correlations or structures of the reconstructed image. To solve the problem effectively, the Nesterov’s method was used to accelerate the computation. To evaluate the performance of the proposed l2,1-norm method, numerical phantom experiments and in vivo mouse experiments are conducted. The results show that the proposed method not only achieves accurate and desirable fluorescent source reconstruction, but also demonstrates enhanced robustness to noise. PMID:27375949
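The structured-sparsity penalty at the heart of the method has a simple closed-form proximal operator, which is what makes Nesterov-type accelerated gradient schemes attractive here. A minimal sketch (illustrative only, with the rows of X playing the role of the groups):

```python
import numpy as np

def prox_l21(X, lam):
    """Proximal operator of lam * sum_g ||X[g]||_2 (group soft-thresholding).
    Rows of X are the groups, mirroring the structured sparsity that the
    l2,1 regularizer imposes on groups of reconstructed voxels: small
    groups are zeroed jointly, large groups are shrunk toward zero."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)
    return scale * X
```

Inside a Nesterov-accelerated loop, each iteration takes a gradient step on the data-fidelity term and then applies this prox, so entire groups are switched on or off together rather than voxel by voxel.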
GENIUS: web server to predict local gene networks and key genes for biological functions.
Puelma, Tomas; Araus, Viviana; Canales, Javier; Vidal, Elena A; Cabello, Juan M; Soto, Alvaro; Gutiérrez, Rodrigo A
2016-12-19
GENIUS is a user-friendly web server that uses a novel machine learning algorithm to infer functional gene networks focused on specific genes and experimental conditions that are relevant to biological functions of interest. These functions may have different levels of complexity, from specific biological processes to complex traits that involve several interacting processes. GENIUS also enriches the network with new genes related to the biological function of interest, with accuracies comparable to highly discriminative Support Vector Machine methods.
An adaptive total variation image reconstruction method for speckles through disordered media
NASA Astrophysics Data System (ADS)
Gong, Changmei; Shao, Xiaopeng; Wu, Tengfei
2013-09-01
Multiple scattering of light in a highly disordered medium can break the diffraction limit of a conventional optical system when combined with an image reconstruction method. Once the transmission matrix of the imaging system is obtained, the target image can be reconstructed from its speckle pattern by an image reconstruction algorithm. Nevertheless, the restored image attained by common image reconstruction algorithms such as Tikhonov regularization has a relatively low signal-to-noise ratio (SNR) due to the experimental noise and reconstruction noise, greatly reducing the quality of the resulting image. In this paper, the speckle pattern of the test image is simulated by combining light propagation theories and statistical optics theories. Subsequently, an adaptive total variation (ATV) algorithm, TV minimization by augmented Lagrangian and alternating direction algorithms (TVAL3), is utilized to reconstruct the target image. Numerical simulation results show that the TVAL3 algorithm can effectively suppress the noise of the restored image and preserve more image details, thus greatly boosting the SNR of the restored image. The results also indicate that, compared with the image directly formed by the `clean' system, the reconstructed results can overcome the diffraction limit of the `clean' system, thereby being conducive to the observation of cells, protein molecules in biological tissues, and other structures at the micro/nano scale.
Hong Luo; Luqing Luo; Robert Nourgaliev; Vincent Mousseau
2009-06-01
A reconstruction-based discontinuous Galerkin (DG) method is presented for the solution of the compressible Euler equations on arbitrary grids. By taking advantage of readily available and yet invaluable information, namely the derivatives, in the context of discontinuous Galerkin methods, a solution polynomial of one degree higher is reconstructed using a least-squares method. The stencils used in the reconstruction involve only the von Neumann neighborhood (face-neighboring cells) and are compact and consistent with the underlying DG method. The resulting DG method can be regarded as an improvement of a recovery-based DG method in the sense that it shares the same nice features as the recovery-based DG method, such as high accuracy and efficiency, and yet overcomes some of its shortcomings, such as a lack of flexibility, compactness, and robustness. The developed DG method is used to compute a variety of flow problems on arbitrary meshes to demonstrate the accuracy and efficiency of the method. The numerical results indicate that this reconstructed DG method is able to obtain a third-order accurate solution at a slightly higher cost than the second-order DG method, and provides an increase in performance over the third-order DG method in terms of computing time and storage requirement.
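In one space dimension the reconstruction step reduces to a tiny least-squares problem. The sketch below (an illustration of the idea, not the authors' code) takes a P1 DG solution, cell averages a and slopes b on a uniform mesh of width h, and fits the extra quadratic coefficient for cell i from the two face neighbours only:

```python
import numpy as np

def reconstruct_quadratic(a, b, h, i):
    """Least-squares reconstruction of a quadratic coefficient for cell i
    of a P1 (linear) DG solution, using only the face neighbours i-1, i+1
    (the von Neumann stencil). a: cell averages; b: cell slopes; h: width.
    In cell i the reconstructed polynomial is
        P(x) = a[i] + b[i]*(x - x_i) + c*((x - x_i)**2 - h**2/12),
    which preserves the cell-i average; c is fitted so that P best matches
    the averages and slopes of the two neighbouring cells."""
    rows, rhs = [], []
    for j, s in ((i - 1, -1.0), (i + 1, 1.0)):
        rows.append(h * h)              # match the neighbour's cell average
        rhs.append(a[j] - a[i] - s * b[i] * h)
        rows.append(2.0 * s * h)        # match the neighbour's mean slope
        rhs.append(b[j] - b[i])
    m, r = np.array(rows), np.array(rhs)
    return (m @ r) / (m @ m)            # 1-D least-squares solution for c
```

Because the stencil never reaches beyond face neighbours, the reconstruction stays compact, which is the property the abstract emphasizes.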
A novel digital tomosynthesis (DTS) reconstruction method using a deformation field map.
Ren, Lei; Zhang, Junan; Thongphiew, Danthai; Godfrey, Devon J; Wu, Q Jackie; Zhou, Su-Min; Yin, Fang-Fang
2008-07-01
We developed a novel digital tomosynthesis (DTS) reconstruction method using a deformation field map to optimally estimate volumetric information in DTS images. The deformation field map is solved by using prior information, a deformation model, and new projection data. Patients' previous cone-beam CT (CBCT) or planning CT data are used as the prior information, and the new patient volume to be reconstructed is considered as a deformation of the prior patient volume. The deformation field is solved by minimizing bending energy and maintaining new projection data fidelity using a nonlinear conjugate gradient method. The new patient DTS volume is then obtained by deforming the prior patient CBCT or CT volume according to the solution to the deformation field. This method is novel because it is the first method to combine deformable registration with limited angle image reconstruction. The method was tested in 2D cases using simulated projections of a Shepp-Logan phantom, liver, and head-and-neck patient data. The accuracy of the reconstruction was evaluated by comparing both organ volume and pixel value differences between DTS and CBCT images. In the Shepp-Logan phantom study, the reconstructed pixel signal-to-noise ratio (PSNR) for the 60 degrees DTS image reached 34.3 dB. In the liver patient study, the relative error of the liver volume reconstructed using 60 degrees projections was 3.4%. The reconstructed PSNR for the 60 degrees DTS image reached 23.5 dB. In the head-and-neck patient study, the new method using 60 degrees projections was able to reconstruct the 8.1 degrees rotation of the bony structure with 0.0 degrees error. The reconstructed PSNR for the 60 degrees DTS image reached 24.2 dB. In summary, the new reconstruction method can optimally estimate the volumetric information in DTS images using 60 degrees projections. Preliminary validation of the algorithm showed that it is both technically and clinically feasible for image guidance in radiation therapy.
Wang, Jinguo; Zhao, Zhiqin; Song, Jian; Chen, Guoping; Nie, Zaiping; Liu, Qing-Huo
2015-05-15
Purpose: An iterative reconstruction method has been previously reported by the authors of this paper. However, the iterative reconstruction method was demonstrated solely using numerical simulations. It is essential to apply the iterative reconstruction method under practical conditions. The objective of this work is to validate the capability of the iterative reconstruction method for reducing the effects of acoustic heterogeneity with experimental data in microwave-induced thermoacoustic tomography. Methods: Most existing reconstruction methods need to incorporate ultrasonic measurement technology to quantitatively measure the velocity distribution of the heterogeneity, which increases the system complexity. Different from existing reconstruction methods, the iterative reconstruction method combines the time reversal mirror technique, the fast marching method, and the simultaneous algebraic reconstruction technique to iteratively estimate the velocity distribution of heterogeneous tissue solely from the measured data. The estimated velocity distribution is then used to reconstruct a highly accurate image of the microwave absorption distribution. Experiments in which a target is placed in an acoustically heterogeneous environment were performed to validate the iterative reconstruction method. Results: By using the estimated velocity distribution, the target in an acoustically heterogeneous environment can be reconstructed with better shape and higher image contrast than targets reconstructed with a homogeneous velocity distribution. Conclusions: The distortions caused by the acoustic heterogeneity can be efficiently corrected by utilizing the velocity distribution estimated by the iterative reconstruction method. The advantage of the iterative reconstruction method over existing correction methods is that it improves the quality of the image of the microwave absorption distribution without increasing the system complexity.
Hong Luo; Luqing Luo; Robert Nourgaliev; Vincent A. Mousseau
2010-09-01
A reconstruction-based discontinuous Galerkin (RDG) method is presented for the solution of the compressible Navier–Stokes equations on arbitrary grids. The RDG method, originally developed for the compressible Euler equations, is extended to discretize viscous and heat fluxes in the Navier–Stokes equations using a so-called inter-cell reconstruction, where a smooth solution is locally reconstructed using a least-squares method from the underlying discontinuous DG solution. Similar to the recovery-based DG (rDG) methods, this reconstructed DG method eliminates the introduction of ad hoc penalty or coupling terms commonly found in traditional DG methods. Unlike rDG methods, this RDG method does not need to judiciously choose a proper form of a recovered polynomial, thus is simple, flexible, and robust, and can be used on arbitrary grids. The developed RDG method is used to compute a variety of flow problems on arbitrary meshes to demonstrate its accuracy, efficiency, robustness, and versatility. The numerical results indicate that this RDG method is able to deliver the same accuracy as the well-known Bassi–Rebay II scheme, at a half of its computing costs for the discretization of the viscous fluxes in the Navier–Stokes equations, clearly demonstrating its superior performance over the existing DG methods for solving the compressible Navier–Stokes equations.
NASA Astrophysics Data System (ADS)
Torrelles, X.; Rius, J.; Boscherini, F.; Heun, S.; Mueller, B. H.; Ferrer, S.; Alvarez, J.; Miravitlles, C.
1998-02-01
The projections of surface reconstructions are normally solved from the interatomic vectors found in two-dimensional Patterson maps computed with the intensities of the in-plane superstructure reflections. Since for difficult reconstructions this procedure is not trivial, an alternative automated one based on the ``direct methods'' sum function [Rius, Miravitlles, and Allmann, Acta Crystallogr. A52, 634 (1996)] is shown. It has been applied successfully to the known c(4×2) reconstruction of Ge(001) and to the so-far unresolved In0.04Ga0.96As (001) p(4×2) surface reconstruction. For this last system we propose a modification of one of the models previously proposed for GaAs(001) whose characteristic feature is the presence of dimers along the fourfold direction.
Free flaps in orbital exenteration: a safe and effective method for reconstruction.
López, Fernando; Suárez, Carlos; Carnero, Susana; Martín, Clara; Camporro, Daniel; Llorente, José L
2013-05-01
The aim of this study was to investigate the course of reconstructive treatment and outcomes with use of free flaps after orbital exenteration for malignancy. Charts of patients who had free flap reconstruction after orbital exenteration were retrospectively reviewed and the surgical technique was evaluated. Demographics, histology, surgical management, complications, locoregional control, and survival were analyzed. We performed 22 flaps in 21 patients. Reconstruction was undertaken mainly with anterolateral thigh (56 %), radial forearm (22 %), or parascapular (22 %) free flaps. Complications occurred in 33 % of patients and the flap's success rate was 91 %. The 5-year locoregional control and survival rates were 42 and 37 %, respectively. Free tissue transfer is a reliable, safe, and effective method for repair of defects of the orbit and periorbital structures resulting from oncologic resection. The anterolateral thigh flap is a versatile option to reconstruct the many orbital defects encountered.
NASA Astrophysics Data System (ADS)
Jiang, Mingfeng; Zhang, Heng; Zhu, Lingyan; Cao, Li; Wang, Yaming; Xia, Ling; Gong, Yinglan
2015-04-01
Non-invasively reconstructing the cardiac transmembrane potentials (TMPs) from body surface potentials can be cast as a regression problem. The support vector regression (SVR) method is often used to solve such regression problems; however, the computational complexity of the SVR training algorithm is usually high. In this paper, another learning algorithm, termed the extreme learning machine (ELM), is proposed to reconstruct the cardiac transmembrane potentials. Moreover, ELM can be extended to single-hidden-layer feedforward neural networks with a kernel matrix (kernelized ELM), which can achieve good generalization performance at a fast learning speed. Based on realistic heart-torso models, one normal and two abnormal ventricular activation cases are used for training and testing the regression model. The experimental results show that the ELM method achieves better regression performance than the single SVR method in terms of TMP reconstruction accuracy and reconstruction speed. Moreover, compared with the ELM method, the kernelized ELM method features good approximation and generalization ability when reconstructing the TMPs.
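The basic (non-kernelized) ELM is short enough to sketch in full: the input weights are random and fixed, and training reduces to a single linear least-squares solve for the output weights, which is why it trains so much faster than SVR. A minimal illustration (hypothetical function names, not the authors' code):

```python
import numpy as np

def elm_fit(X, Y, n_hidden=50, seed=0):
    """Extreme learning machine sketch: random, fixed input weights, a
    single hidden tanh layer, and output weights solved in closed form by
    least squares (the 'learning' is one linear solve)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                        # hidden-layer outputs
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)  # output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

The kernelized variant replaces the explicit random hidden layer with a kernel matrix, trading the random-feature approximation for an exact kernel solve.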
Image reconstruction in EIT with unreliable electrode data using random sample consensus method
NASA Astrophysics Data System (ADS)
Jeon, Min Ho; Khambampati, Anil Kumar; Kim, Bong Seok; In Kang, Suk; Kim, Kyung Youn
2017-04-01
In electrical impedance tomography (EIT), it is important to acquire reliable measurement data through the EIT system in order to achieve a good reconstructed image. To obtain reliable data, various methods for checking and optimizing the EIT measurement system have been studied. However, most of these methods involve additional cost for testing, and the measurement setup is often evaluated only before the experiment. It is useful to have a method which can detect faulty electrode data during the experiment without any additional cost. This paper presents a method based on random sample consensus (RANSAC) to find the incorrect data from a faulty electrode in EIT data. RANSAC is a robust curve-fitting method that removes outlier data from measurement data. The RANSAC method is applied with the Gauss-Newton (GN) method for image reconstruction of a human thorax with faulty data. Numerical and phantom experiments are performed, and the reconstruction performance of the proposed RANSAC method with GN is compared with the conventional GN method. From the results, it can be seen that RANSAC with GN has better reconstruction performance than the conventional GN method with faulty electrode data.
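The outlier-rejection step can be illustrated with the textbook form of RANSAC on a line-fitting problem (a generic sketch, not the EIT-specific implementation): repeatedly fit a model to a minimal random sample, keep the model with the largest consensus set, and flag everything outside that set (e.g. a faulty electrode channel) as an outlier.

```python
import numpy as np

def ransac_line(x, y, n_iter=200, tol=0.5, seed=0):
    """RANSAC sketch: fit a line to two random points per trial, keep the
    model with the largest inlier consensus, then refit on the inliers.
    Returns (polyfit coefficients [slope, intercept], inlier mask)."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(x), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue                      # degenerate minimal sample
        slope = (y[j] - y[i]) / (x[j] - x[i])
        inter = y[i] - slope * x[i]
        inliers = np.abs(y - (slope * x + inter)) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    coef = np.polyfit(x[best_inliers], y[best_inliers], 1)  # refit on consensus
    return coef, best_inliers
```

In the EIT setting the same consensus logic is used to discard the faulty-electrode measurements before the Gauss-Newton reconstruction is run on the remaining data.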
NASA Astrophysics Data System (ADS)
Jia, Lu-Kui; Mao, Ze-Pu; Li, Wei-Dong; Cao, Guo-Fu; Cao, Xue-Xiang; Deng, Zi-Yan; He, Kang-Lin; Liu, Chun-Yan; Liu, Huai-Min; Liu, Qiu-Guang; Ma, Qiu-Mei; Ma, Xiang; Qiu, Jin-Fa; Tian, Hao-Lai; Wang, Ji-Ke; Wu, Ling-Hui; Yuan, Ye; Zang, Shi-Lei; Zhang, Chang-Chun; Zhang, Lei; Zhang, Yao; Zhu, Kai; Zou, Jia-Heng
2010-12-01
In order to overcome the difficulty posed by circling charged tracks with transverse momentum less than 120 MeV in the BESIII Main Drift Chamber (MDC), a specialized method called TCurlFinder was developed. This tracking method focuses on charged track reconstruction below 120 MeV and possesses a special mechanism to reject background noise hits. The performance of the package has been carefully checked and tuned with both Monte Carlo data and real data. The study shows that this tracking method can significantly enhance the reconstruction efficiency in the low transverse momentum region, providing physics analysis with more abundant and more reliable data.
NASA Astrophysics Data System (ADS)
Yamaguchi, Yusaku; Kojima, Takeshi; Yoshinaga, Tetsuya
2016-03-01
In clinical X-ray computed tomography (CT), filtered back-projection as a transform method and iterative reconstruction such as the maximum-likelihood expectation-maximization (ML-EM) method are well-known approaches to reconstructing tomographic images. As an alternative reconstruction method, we have presented a continuous-time image reconstruction (CIR) system described by a nonlinear dynamical system, based on the idea of continuous methods for solving tomographic inverse problems. Recently, we have also proposed a multiplicative CIR system described by differential equations based on the minimization of a weighted Kullback-Leibler divergence. We prove theoretically that the divergence measure decreases along the solution to the CIR system for consistent inverse problems. In consideration of the noisy nature of projections in clinical CT, the inverse problem belongs to the category of ill-posed problems. The performance of a noise-reduction scheme for the newly developed CIR system was investigated by means of numerical experiments using a circular phantom image. Compared to the conventional CIR and ML-EM methods, the proposed CIR method has an advantage on noisy projections with lower signal-to-noise ratios, in terms of the divergence measure on the actual image under the same common measure observed via the projection data. The results lead to the conclusion that the multiplicative CIR method is more effective and robust for noise reduction in CT compared to the ML-EM as well as conventional CIR methods.
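For reference, the ML-EM baseline that the CIR systems are compared against is itself a multiplicative update, which is what preserves nonnegativity of the image without any explicit constraint. A minimal sketch for a generic linear model y ≈ Ax (illustrative, not the authors' code):

```python
import numpy as np

def mlem(A, y, n_iter=100):
    """ML-EM iteration for y ~ A x with nonnegative x:
        x <- x * (A^T (y / (A x))) / (A^T 1).
    Each update is multiplicative, so a positive start stays positive;
    the iteration monotonically increases the Poisson log-likelihood."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])            # sensitivity image A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)    # guard against 0/0
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

The multiplicative CIR system can be viewed as a continuous-time analogue of this discrete update, driven by the weighted Kullback-Leibler divergence.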
Hong Luo; Yidong Xia; Robert Nourgaliev; Chunpei Cai
2011-06-01
A reconstruction-based discontinuous Galerkin (RDG) method is presented for the solution of the compressible Navier-Stokes equations on unstructured tetrahedral grids. The RDG method, originally developed for the compressible Euler equations, is extended to discretize viscous and heat fluxes in the Navier-Stokes equations using a so-called inter-cell reconstruction, where a smooth solution is locally reconstructed using a least-squares method from the underlying discontinuous DG solution. Similar to the recovery-based DG (rDG) methods, this reconstructed DG method eliminates the introduction of ad hoc penalty or coupling terms commonly found in traditional DG methods. Unlike rDG methods, this RDG method does not need to judiciously choose a proper form of a recovered polynomial, thus is simple, flexible, and robust, and can be used on unstructured grids. The preliminary results indicate that this RDG method is stable on unstructured tetrahedral grids, and provides a viable and attractive alternative for the discretization of the viscous and heat fluxes in the Navier-Stokes equations.
Xing, Pei; Chen, Xin; Luo, Yong; Nie, Suping; Zhao, Zongci; Huang, Jianbin; Wang, Shaowu
2016-01-01
Large-scale climate history of the past millennium reconstructed solely from tree-ring data is prone to underestimate the amplitude of low-frequency variability. In this paper, we aimed at solving this problem by utilizing a novel method termed "MDVM", which is a combination of the ensemble empirical mode decomposition (EEMD) and variance matching techniques. We compiled a set of 211 tree-ring records from the extratropical Northern Hemisphere (30–90°N) in an effort to develop a new reconstruction of the annual mean temperature by the MDVM method. Among this dataset, 126 records were screened out to reconstruct temperature variability on timescales longer than decadal for the period 850–2000 AD. The MDVM reconstruction depicted significant low-frequency variability in the past millennium, with an evident Medieval Warm Period (MWP) over the interval 950–1150 AD and a pronounced Little Ice Age (LIA) culminating in 1450–1850 AD. In the context of this 1150-year reconstruction, the accelerating warming in the 20th century was likely unprecedented, and the coldest decades appeared in the 1640s, 1600s and 1580s, whereas the warmest decades occurred in the 1990s, 1940s and 1930s. Additionally, the MDVM reconstruction covaried broadly with changes in natural radiative forcing, and in particular showed distinct footprints of multiple volcanic eruptions in the last millennium. Comparisons of our results with previous reconstructions and model simulations showed the efficiency of the MDVM method in capturing low-frequency variability, particularly the much colder signals of the LIA relative to the reference period. Our results demonstrate that the MDVM method has advantages in studying large-scale and low-frequency climate signals using pure tree-ring data. PMID:26751947
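The variance-matching half of the MDVM idea is a one-line rescaling: a reconstructed component is given the mean and variance of the instrumental target over their common calibration interval. A minimal sketch (illustrative; the actual method applies this to EEMD components of the tree-ring series):

```python
import numpy as np

def variance_match(proxy, target):
    """Variance matching sketch: standardize the proxy series, then give
    it the mean and standard deviation of the instrumental target over
    their common (calibration) interval. Inputs are equal-length arrays
    covering that common interval."""
    z = (proxy - proxy.mean()) / proxy.std()
    return z * target.std() + target.mean()
```

Because the rescaling preserves the shape of each low-frequency component while restoring its amplitude, it counteracts the tendency of regression-based calibration to damp centennial-scale variability.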
Ahmad, Munir; Shahzad, Tasawar; Masood, Khalid; Rashid, Khalid; Tanveer, Muhammad; Iqbal, Rabail; Hussain, Nasir; Shahid, Abubakar; Fazal-E-Aleem
2016-06-01
Emission tomographic image reconstruction is an ill-posed problem due to limited and noisy data and various image-degrading effects affecting the data, and it leads to noisy reconstructions. Explicit regularization, through iterative reconstruction methods, is considered a better way to compensate for reconstruction-based noise. Local smoothing and edge-preserving regularization methods can reduce reconstruction-based noise. However, these methods produce overly smoothed images or blocky artefacts in the final image because they can only exploit local image properties. Recently, non-local regularization techniques have been introduced to overcome these problems by incorporating the geometrical global continuity and connectivity present in the objective image. These techniques can overcome the drawbacks of local regularization methods; however, they also have certain limitations, such as the choice of the regularization function, the neighbourhood size, or the calibration of the several empirical parameters involved. This work compares different local and non-local regularization techniques used in emission tomographic imaging in general, and emission computed tomography in particular, for improved quality of the resultant images.
Chen, Xueli; Yang, Defu; Zhang, Qitan; Liang, Jimin (E-mail: jimleung@mail.xidian.edu.cn)
2014-05-14
Even though bioluminescence tomography (BLT) exhibits significant potential and wide applications in macroscopic imaging of small animals in vivo, the inverse reconstruction is still a tough problem that has plagued researchers in the related area. The ill-posedness of inverse reconstruction arises from insufficient measurements and modeling errors, so that the inverse reconstruction cannot be solved directly. In this study, an l1/2 regularization based numerical method was developed for effective reconstruction of BLT. In the method, the inverse reconstruction of BLT was constrained into an l1/2 regularization problem, and then the weighted interior-point algorithm (WIPA) was applied to solve the problem through transforming it into obtaining the solution of a series of l1 regularizers. The feasibility and effectiveness of the proposed method were demonstrated with numerical simulations on a digital mouse. Stability verification experiments further illustrated the robustness of the proposed method for different levels of Gaussian noise.
A physics-based intravascular ultrasound image reconstruction method for lumen segmentation.
Mendizabal-Ruiz, Gerardo; Kakadiaris, Ioannis A
2016-08-01
Intravascular ultrasound (IVUS) refers to the medical imaging technique consisting of a miniaturized ultrasound transducer located at the tip of a catheter that can be introduced into the blood vessels, providing high-resolution, cross-sectional images of their interior. Current methods for the generation of an IVUS image reconstruction from radio frequency (RF) data do not account for the physics involved in the interaction between the IVUS ultrasound signal and the tissues of the vessel. In this paper, we present a novel method to generate an IVUS image reconstruction based on the use of a scattering model that considers the tissues of the vessel as a distribution of three-dimensional point scatterers. We evaluated the impact of employing the proposed IVUS image reconstruction method in the segmentation of the lumen/wall interface on 40 MHz IVUS data using an existing automatic lumen segmentation method. We compared the results with those obtained using the B-mode reconstruction on 600 randomly selected frames from twelve pullback sequences acquired from rabbit aortas and different arteries of swine. Our results indicate the feasibility of employing the proposed IVUS image reconstruction for the segmentation of the lumen.
ERIC Educational Resources Information Center
Dockray, Samantha; Grant, Nina; Stone, Arthur A.; Kahneman, Daniel; Wardle, Jane; Steptoe, Andrew
2010-01-01
Measurement of affective states in everyday life is of fundamental importance in many types of quality of life, health, and psychological research. Ecological momentary assessment (EMA) is the recognized method of choice, but the respondent burden can be high. The day reconstruction method (DRM) was developed by Kahneman and colleagues ("Science,"…
Phase microscopy using light-field reconstruction method for cell observation.
Xiu, Peng; Zhou, Xin; Kuang, Cuifang; Xu, Yingke; Liu, Xu
2015-08-01
The refractive index (RI) distribution can serve as a natural label for imaging undyed cells. However, most images obtained through quantitative phase microscopy are integrated along the illumination direction and cannot convey additional information about the refractive map on a given plane. Herein, a light-field reconstruction method to image the RI map within a depth of 0.2 μm is proposed. It records quantitative phase-delay images using a four-step phase-shifting method in different directions and then reconstructs a similar scattered light field for the refractive sample on the focal plane. It can image the RI of samples, transparent cell samples in particular, in a manner similar to the observation of scattering characteristics. The light-field reconstruction method is therefore a powerful tool for cytobiology studies.
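The four-step phase-shifting step mentioned above has a standard closed form: with frames shifted by 0, π/2, π and 3π/2, the wrapped phase is atan2(I4 − I2, I1 − I3). A minimal numpy sketch on synthetic frames (the amplitude, bias and phase profile are illustrative, not the paper's data):

```python
import numpy as np

def phase_from_four_steps(I1, I2, I3, I4):
    """Wrapped phase from four frames with shifts 0, pi/2, pi, 3pi/2."""
    return np.arctan2(I4 - I2, I1 - I3)

# synthetic check: recover a smooth phase-delay profile from shifted frames
x = np.linspace(-1.0, 1.0, 64)
phi = 1.2 * np.exp(-x**2)                       # stays below pi/2: no wrapping
frames = [2.0 + 0.8 * np.cos(phi + k * np.pi / 2) for k in range(4)]
phi_rec = phase_from_four_steps(*frames)
```

Because I4 − I2 = 2b sin φ and I1 − I3 = 2b cos φ, the unknown bias and modulation amplitude cancel out of the ratio.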
Method and apparatus for reconstructing in-cylinder pressure and correcting for signal decay
Huang, Jian
2013-03-12
A method comprises steps for reconstructing in-cylinder pressure data from a vibration signal collected from a vibration sensor mounted on an engine component where it can generate a signal with a high signal-to-noise ratio, and correcting the vibration signal for errors introduced by vibration signal charge decay and sensor sensitivity. The correction factors are determined as a function of estimated motoring pressure and the measured vibration signal itself with each of these being associated with the same engine cycle. Accordingly, the method corrects for charge decay and changes in sensor sensitivity responsive to different engine conditions to allow greater accuracy in the reconstructed in-cylinder pressure data. An apparatus is also disclosed for practicing the disclosed method, comprising a vibration sensor, a data acquisition unit for receiving the vibration signal, a computer processing unit for processing the acquired signal and a controller for controlling the engine operation based on the reconstructed in-cylinder pressure.
NASA Astrophysics Data System (ADS)
Fugal, Jacob P.; Schulz, Timothy J.; Shaw, Raymond A.
2009-07-01
Hologram reconstruction algorithms often undersample the phase in propagation kernels for typical parameters of holographic optical setups. Given in this paper is an algorithm that addresses this phase undersampling in reconstructing digital in-line holograms of particles for these typical parameters. The algorithm has a lateral sample spacing that is constant with reconstruction distance, achieves diffraction-limited resolution, and can be implemented with computational speeds comparable to the fastest of other reconstruction algorithms. The algorithm is shown to be accurate by testing against analytical solutions of the Huygens-Fresnel propagation integral. A low-pass filter can be applied to enforce a uniform minimum particle-size detection limit throughout a sample volume, allowing this method to be useful in measuring particle size distributions and number densities. Tens of thousands of holograms of cloud ice particles are digitally reconstructed using the algorithm discussed. Positions of ice particles in the size range of 20 μm to 1.5 mm are obtained using an algorithm that accurately finds the position of large and small particles along the optical axis. The digital reconstruction and particle characterization algorithms are implemented in an automated fashion, with no user intervention, on a computer cluster. Strategies for efficient algorithm implementation on a computer cluster are discussed.
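For context, the standard angular-spectrum propagation kernel that such reconstruction algorithms sample can be written in a few lines of numpy. This is the generic textbook kernel, not the paper's anti-undersampling modification; the wavelength and pixel pitch below are illustrative assumptions chosen so that no spatial frequency is evanescent.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a sampled complex field by distance z (angular-spectrum kernel)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                # spatial frequencies of the grid
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = (2.0 * np.pi / wavelength) * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)         # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# round trip: propagating forward then backward recovers the field
lam, dx = 0.5e-6, 10e-6                         # coarse sampling: no frequency cutoff
rng = np.random.default_rng(1)
f = rng.standard_normal((32, 32)) + 1j * rng.standard_normal((32, 32))
f_back = angular_spectrum_propagate(
    angular_spectrum_propagate(f, lam, dx, 1e-3), lam, dx, -1e-3)
```

The undersampling problem the abstract refers to arises when the phase of H varies by more than π between adjacent frequency samples, which this naive version does nothing to prevent.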
Environment-based pin-power reconstruction method for homogeneous core calculations
Leroyer, H.; Brosselard, C.; Girardi, E.
2012-07-01
Core calculation schemes are usually based on a classical two-step approach comprising assembly and core calculations. In the first step, infinite-lattice assembly calculations relying on a fundamental-mode approach are used to generate cross-section libraries for PWR core calculations. This fundamental-mode hypothesis may be questioned when dealing with loading patterns involving several types of assemblies (UOX, MOX), burnable poisons, control rods and burn-up gradients. This paper proposes a calculation method able to take into account the heterogeneous environment of the assemblies when using homogeneous core calculations and an appropriate pin-power reconstruction. The methodology is applied to MOX assemblies computed within an environment of UOX assemblies. The new environment-based pin-power reconstruction is then used on various clusters of 3x3 assemblies exhibiting burn-up gradients and UOX/MOX interfaces, and compared to reference calculations performed with APOLLO-2. The results show that UOX/MOX interfaces are calculated much more accurately with the environment-based scheme than with the usual pin-power reconstruction method, and the power peak is always better located and calculated with the environment-based method in every cluster configuration studied. This study shows that taking the environment into account in transport calculations can significantly improve the pin-power reconstruction insofar as it is consistent with the core loading pattern. (authors)
NASA Astrophysics Data System (ADS)
Neupauer, Roseanna M.; Borchers, Brian; Wilson, John L.
2000-09-01
Inverse methods can be used to reconstruct the release history of a known source of groundwater contamination from concentration data describing the present-day spatial distribution of the contaminant plume. Using hypothetical release history functions and contaminant plumes, we evaluate the relative effectiveness of two proposed inverse methods, Tikhonov regularization (TR) and minimum relative entropy (MRE) inversion, in reconstructing the release history of a conservative contaminant in a one-dimensional domain [Skaggs and Kabala, 1994; Woodbury and Ulrych, 1996]. We also address issues of reproducibility of the solution and the appropriateness of models for simulating random measurement error. The results show that if error-free plume concentration data are available, both methods perform well in reconstructing a smooth source history function. With error-free data the MRE method is more robust than TR in reconstructing a nonsmooth source history function; however, the TR method is more robust if the data contain measurement error. Two error models were evaluated in this study, and we found that the particular error model does not affect the reliability of the solutions. The results for the TR method have somewhat greater reproducibility because, in some cases, its input parameters are less subjective than those of the MRE method; however, the MRE solution can identify regions where the data give little or no information about the source history function, while the TR solution cannot.
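In discrete form, the zeroth-order Tikhonov regularization (TR) compared above reduces to a damped normal-equations solve. The sketch below is a generic illustration in numpy, not the Skaggs and Kabala setup: a hypothetical Gaussian transfer matrix G maps a release history s onto present-day concentrations c, and the regularized estimate follows in closed form.

```python
import numpy as np

def tikhonov_solve(G, c, lam):
    """Zeroth-order Tikhonov estimate of the source history s in G s = c."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + lam**2 * np.eye(n), G.T @ c)

# hypothetical forward model: plume = Gaussian smoothing of the release history
n = 80
t = np.linspace(0.0, 1.0, n)
G = np.exp(-(((t[:, None] - t[None, :]) / 0.05) ** 2))
s_true = np.exp(-(((t - 0.4) / 0.1) ** 2))      # smooth release pulse at t = 0.4
rng = np.random.default_rng(4)
c = G @ s_true + 0.01 * rng.standard_normal(n)  # noisy present-day concentrations
s_hat = tikhonov_solve(G, c, lam=0.1)
```

The regularization parameter lam trades data fit against solution norm; its choice is exactly the kind of subjective input the abstract discusses when comparing TR with MRE.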
NASA Astrophysics Data System (ADS)
Kang, Jinfen; Moriyama, Koji; Kim, Seung Hyun
2016-09-01
This paper presents an extended, stochastic reconstruction method for catalyst layers (CLs) of Proton Exchange Membrane Fuel Cells (PEMFCs). The focus is placed on the reconstruction of customized, low platinum (Pt) loading CLs where the microstructure of CLs can substantially influence the performance. The sphere-based simulated annealing (SSA) method is extended to generate the CL microstructures with specified and controllable structural properties for agglomerates, ionomer, and Pt catalysts. In the present method, the agglomerate structures are controlled by employing a trial two-point correlation function used in the simulated annealing process. An off-set method is proposed to generate more realistic ionomer structures. The variations of ionomer structures at different humidity conditions are considered to mimic the swelling effects. A method to control Pt loading, distribution, and utilization is presented. The extension of the method to consider heterogeneity in structural properties, which can be found in manufactured CL samples, is presented. Various reconstructed CLs are generated to demonstrate the capability of the proposed method. Proton transport properties of the reconstructed CLs are calculated and validated with experimental data.
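A trial two-point correlation function of the kind used to steer the simulated annealing above can be estimated by FFT autocorrelation of the binary phase indicator. A generic numpy sketch, with a random toy microstructure standing in for a reconstructed catalyst layer (the grid size and volume fraction are illustrative assumptions):

```python
import numpy as np

def two_point_correlation(phase):
    """Periodic two-point correlation S2(r) of a binary phase indicator."""
    f = np.fft.fftn(phase.astype(float))
    return np.fft.ifftn(f * np.conj(f)).real / phase.size

# toy binary "microstructure" at roughly 30% volume fraction
rng = np.random.default_rng(2)
img = (rng.random((64, 64)) < 0.3).astype(int)
s2 = two_point_correlation(img)
# s2[0, 0] equals the volume fraction; distant lags approach its square
```

In an annealing loop, pixel swaps would be accepted or rejected according to how much they reduce the mismatch between s2 and the target correlation function.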
NASA Astrophysics Data System (ADS)
Cao, Zhang; Xu, Lijun; Wang, Huaxiang
2009-10-01
Calderón's method was introduced to electrical capacitance tomography in this paper. It is a direct image reconstruction algorithm for low-contrast dielectrics, as no matrix inversion or iterative process is needed. It was implemented through numerical integration. Because the Gauss-Legendre quadrature nodes and weights can be predetermined, the image reconstruction process was fast and yielded images of high quality. Simulations were carried out to study the effect of different dielectric contrasts and different electrode numbers. Both simulated and experimental results validated the feasibility and effectiveness of Calderón's method in electrical capacitance tomography for low-contrast dielectrics.
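The predetermined Gauss-Legendre quadrature mentioned above is a stock numerical tool; a minimal numpy sketch shows why it is fast: the nodes and weights are computed once and every subsequent integral is a single weighted sum. This is a generic illustration, not the paper's implementation.

```python
import numpy as np

def gauss_legendre_integrate(f, a, b, n):
    """Integrate f over [a, b] with an n-point Gauss-Legendre rule."""
    x, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    xm, xr = 0.5 * (a + b), 0.5 * (b - a)       # affine map to [a, b]
    return xr * np.sum(w * f(xm + xr * x))

# an n-point rule integrates polynomials up to degree 2n - 1 exactly
val = gauss_legendre_integrate(np.sin, 0.0, np.pi, 8)
```

In a reconstruction loop the `leggauss` call would be hoisted out entirely, leaving only the inner products per image.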
A new 3D reconstruction method of small solar system bodies
NASA Astrophysics Data System (ADS)
Capanna, C.; Jorda, L.; Lamy, P.; Gesquiere, G.
2011-10-01
The 3D reconstruction of small solar system bodies constitutes an essential step toward understanding and interpreting their physical and geological properties. We propose a new reconstruction method by photoclinometry based on the minimization of the chi-square difference between observed and synthetic images by deformation of a 3D triangular mesh. This method has been tested on images of the two asteroids (2867) Steins and (21) Lutetia observed during ESA's ROSETTA mission, and it will be applied to produce digital terrain models from images of the asteroid (4) Vesta, the target of NASA's DAWN spacecraft.
Unhappy triad in limb reconstruction: Management by Ilizarov method
El-Alfy, Barakat Sayed
2017-01-01
AIM To evaluate the results of the Ilizarov method in the management of cases with bone loss, soft tissue loss and infection. METHODS Twenty-eight patients with severe leg trauma complicated by bone loss, soft tissue loss and infection were managed by distraction osteogenesis in our institution. After radical debridement of all infected and dead tissues, the Ilizarov frame was applied, corticotomy was done and bone transport started. The wounds were left open to drain. Partial limb shortening was done in seven cases to reduce the size of both the skeletal and soft tissue defects. The average follow-up period was 39 mo (range 27-56 mo). RESULTS The infection was eradicated in all cases. All the soft tissue defects healed during bone transport, and plastic surgery was only required in 2 cases. Skeletal defects were treated in all cases. All patients required another surgery at the docking site to fashion the soft tissue and to cover the bone ends. The external fixation time ranged from 9 to 17 mo with an average of 13 mo. The complications included pin tract infection in 16 cases, wire breakage in 2 cases, unstable scar in 4 cases and chronic edema in 3 cases. According to the Association for the Study and Application of the Method of Ilizarov (ASAMI) score, the bone results were excellent in 10, good in 16 and fair in 2 cases, while the functional results were excellent in 8, good in 17 and fair in 3 cases. CONCLUSION Distraction osteogenesis is a good method that can treat the three problems of this triad simultaneously. PMID:28144578
A comparative study of limited-angle cone-beam reconstruction methods for breast tomosynthesis
Zhang, Yiheng; Chan, Heang-Ping; Sahiner, Berkman; Wei, Jun; Goodsitt, Mitchell M.; Hadjiiski, Lubomir M.; Ge, Jun; Zhou, Chuan
2009-01-01
Digital tomosynthesis mammography (DTM) is a promising new modality for breast cancer detection. In DTM, projection-view images are acquired at a limited number of angles over a limited angular range and the imaged volume is reconstructed from the two-dimensional projections, thus providing three-dimensional structural information of the breast tissue. In this work, we investigated three representative reconstruction methods for this limited-angle cone-beam tomographic problem, including the backprojection (BP) method, the simultaneous algebraic reconstruction technique (SART) and the maximum likelihood method with the convex algorithm (ML-convex). The SART and ML-convex methods were both initialized with BP results to achieve efficient reconstruction. A second generation GE prototype tomosynthesis mammography system with a stationary digital detector was used for image acquisition. Projection-view images were acquired from 21 angles in 3° increments over a ±30° angular range. We used an American College of Radiology phantom and designed three additional phantoms to evaluate the image quality and reconstruction artifacts. In addition to visual comparison of the reconstructed images of different phantom sets, we employed the contrast-to-noise ratio (CNR), a line profile of features, an artifact spread function (ASF), a relative noise power spectrum (NPS), and a line object spread function (LOSF) to quantitatively evaluate the reconstruction results. It was found that for the phantoms with homogeneous background, the BP method resulted in less noisy tomosynthesized images and higher CNR values for masses than the SART and ML-convex methods. However, the two iterative methods provided greater contrast enhancement for both masses and calcification, sharper LOSF, and reduced inter-plane blurring and artifacts with better ASF behaviors for masses. For a contrast-detail phantom with heterogeneous tissue-mimicking background, the BP method had strong blurring artifacts
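Of the three methods compared, SART has the most compact algebraic form: each iteration backprojects row-normalized residuals with column normalization. A minimal numpy sketch on a toy nonnegative system (not the DTM cone-beam geometry; the matrix sizes, seed and iteration count are illustrative assumptions):

```python
import numpy as np

def sart(A, b, n_iter=1000, relax=1.0):
    """SART iterations for A x = b with a nonnegative system matrix A."""
    row_sums = A.sum(axis=1)
    col_sums = A.sum(axis=0)
    row_sums[row_sums == 0] = 1.0               # guard empty rays
    col_sums[col_sums == 0] = 1.0               # guard unseen pixels
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        residual = (b - A @ x) / row_sums       # row-normalized data residual
        x = x + relax * (A.T @ residual) / col_sums
    return x

# toy consistent system standing in for the projection geometry
rng = np.random.default_rng(7)
A = rng.random((30, 15))
x_true = rng.random(15)
b = A @ x_true
x = sart(A, b)
```

Initializing x with a backprojection result, as the abstract describes, simply replaces the zero start above and shortens the iteration count needed.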
Wu, Sean F; Zhao, Xiang
2002-07-01
A combined Helmholtz equation-least squares (CHELS) method is developed for reconstructing acoustic radiation from an arbitrary object. This method combines the advantages of both the HELS method and the Helmholtz integral theory based near-field acoustic holography (NAH). As such it allows for reconstruction of the acoustic field radiated from an arbitrary object with relatively few measurements, thus significantly enhancing the reconstruction efficiency. The first step in the CHELS method is to establish the HELS formulations based on a finite number of acoustic pressure measurements taken on or beyond a hypothetical spherical surface that encloses the object under consideration. Next enough field acoustic pressures are generated using the HELS formulations and taken as the input to the Helmholtz integral formulations implemented through the boundary element method (BEM). The acoustic pressure and normal component of the velocity at the discretized nodes on the surface are then determined by solving two matrix equations using singular value decomposition (SVD) and regularization techniques. Also presented are in-depth analyses of the advantages and limitations of the CHELS method. Examples of reconstructing acoustic radiation from separable and nonseparable surfaces are demonstrated.
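The SVD-with-regularization step used to invert the BEM matrix equations can be illustrated by a truncated-SVD solver: singular values below a relative tolerance are discarded before inversion, which is one common regularization alongside Tikhonov damping. A numpy sketch, with a Hilbert matrix as a stand-in for an ill-conditioned transfer matrix (the size and tolerance are illustrative assumptions):

```python
import numpy as np

def tsvd_solve(A, b, rel_tol=1e-8):
    """Solve A x = b, discarding singular values below rel_tol * s_max."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > rel_tol * s[0]                   # drop the unstable directions
    return Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])

# ill-conditioned stand-in for a discretized integral operator: 8x8 Hilbert matrix
n = 8
i = np.arange(n)
H = 1.0 / (i[:, None] + i[None, :] + 1.0)
b = H @ np.ones(n)                              # data generated from a known solution
x_tsvd = tsvd_solve(H, b)
```

The truncation threshold plays the same role as the Tikhonov parameter: it limits how much measurement noise is amplified by the small singular values.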
Reconstruction from Uniformly Attenuated SPECT Projection Data Using the DBH Method
Huang, Qiu; You, Jiangsheng; Zeng, Gengsheng L.; Gullberg, Grant T.
2008-03-20
An algorithm was developed for the two-dimensional (2D) reconstruction of truncated and non-truncated uniformly attenuated data acquired from single photon emission computed tomography (SPECT). The algorithm is able to reconstruct data from half-scan (180°) and short-scan (180° + fan angle) acquisitions for parallel- and fan-beam geometries, respectively, as well as data from full-scan (360°) acquisitions. The algorithm is a derivative, backprojection, and Hilbert transform (DBH) method, which involves the backprojection of differentiated projection data followed by an inversion of the finite weighted Hilbert transform. The kernel of the inverse weighted Hilbert transform is solved numerically using matrix inversion. Numerical simulations confirm that the DBH method provides accurate reconstructions from half-scan and short-scan data, even when there is truncation. However, as the attenuation increases, finer data sampling is required.
Application of information theory methods to food web reconstruction
Moniz, L.J.; Cooch, E.G.; Ellner, S.P.; Nichols, J.D.; Nichols, J.M.
2007-01-01
In this paper we use information theory techniques on time series of abundances to determine the topology of a food web. At the outset, the food web participants (two consumers, two resources) are known; in addition we know that each consumer prefers one of the resources over the other. However, we do not know which consumer prefers which resource, and whether this preference is absolute (i.e., whether or not the consumer will consume the non-preferred resource). Although the consumers and resources are identified at the beginning of the experiment, we also provide evidence that the consumers are not resources for each other, and the resources do not consume each other. We do show that there is significant mutual information between resources; the model is seasonally forced and some shared information between resources is expected. Similarly, because the model is seasonally forced, we expect shared information between consumers as they respond to the forcing of the resources. The model that we consider does include noise, and in an effort to demonstrate that these methods may be of use beyond model data, we show the efficacy of our methods with decreasing time series size; in this particular case we obtain reasonably clear results with a time series length of 400 points. This approaches the lengths of ecological time series available from real systems.
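The basic quantity behind such an analysis, mutual information between two abundance series, can be estimated with a simple histogram plug-in estimator. A numpy sketch on 400-point series (matching the length the abstract found workable); the bin count and random data are illustrative assumptions, and real analyses would correct the small positive bias this estimator has on independent data.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Plug-in mutual information estimate (in nats) from two time series."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()                        # joint distribution
    px = pxy.sum(axis=1, keepdims=True)          # marginals
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# independent vs fully dependent 400-point series
rng = np.random.default_rng(5)
x = rng.random(400)
mi_ind = mutual_information(x, rng.random(400))  # near zero (up to bias)
mi_dep = mutual_information(x, x)                # near the entropy of binned x
```

Inferring who-eats-whom then amounts to comparing such estimates, often with time lags, across all pairs of species.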
Mitton, D; Landry, C; Véron, S; Skalli, W; Lavaste, F; De Guise, J A
2000-03-01
Standard 3D reconstruction of bones using stereoradiography is limited by the number of anatomical landmarks visible in more than one projection. The proposed technique enables the 3D reconstruction of additional landmarks that can be identified in only one of the radiographs. The principle of this method is the deformation of an elastic object that respects the stereocorresponding and non-stereocorresponding observations available in the different projections. The technique is based on the principle that any non-stereocorresponding point belongs to the line joining the X-ray source and the projection of the point in one view. The aim is to determine the 3D position of these points on their line of projection when submitted to geometrical and topological constraints. The technique is used to obtain the 3D geometry of 18 cadaveric upper cervical vertebrae, and the reconstructed geometry is compared with direct measurements using a magnetic digitiser. The precision, assessed as the point-to-surface distance between the reconstruction obtained with this technique and the reference measurements, is about 1 mm, depending on the vertebra studied. The comparison indicates that the obtained reconstruction is close to the actual vertebral geometry. This method can therefore be proposed to obtain the 3D geometry of vertebrae.
Virtual biomechanics: a new method for online reconstruction of force from EMG recordings.
de Rugy, Aymar; Loeb, Gerald E; Carroll, Timothy J
2012-12-01
Current methods to reconstruct muscle contributions to joint torque usually combine electromyograms (EMGs) with cadaver-based estimates of biomechanics, but both are imperfect representations of reality. Here, we describe a new method that enables online force reconstruction in which we optimize a "virtual" representation of muscle biomechanics. We first obtain tuning curves for the five major wrist muscles from the mean rectified EMG during the hold phase of an isometric aiming task when a cursor is driven by actual force recordings. We then apply a custom, gradient-descent algorithm to determine the set of "virtual pulling vectors" that best reach the target forces when combined with the observed muscle activity. When these pulling vectors are multiplied by the rectified and low-pass-filtered (1.3 Hz) EMG of the five muscles online, the reconstructed force provides a close spatiotemporal match to the true force exerted at the wrist. In three separate experiments, we demonstrate that the technique works equally well for surface and fine-wire recordings and is sensitive to biomechanical changes elicited by a modification of the forearm posture. In all conditions tested, muscle tuning curves obtained when the task was performed with feedback of reconstructed force were similar to those obtained when the task was performed with real force feedback. This online force reconstruction technique provides new avenues to study the relationship between neural control and limb biomechanics since the "virtual biomechanics" can be systematically altered at will.
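The core fitting step, finding pulling vectors that map muscle activations to force, can be sketched compactly. The paper uses a custom gradient-descent algorithm; the stand-in below uses ordinary least squares in numpy instead, which solves the same linear fitting problem in one shot. The five-muscle toy data, noise level and seed are illustrative assumptions, not EMG recordings.

```python
import numpy as np

# toy data: five muscle activations driving a 2-D wrist force
rng = np.random.default_rng(3)
W_true = rng.standard_normal((5, 2))            # "true" pulling vectors (5 muscles x 2 axes)
E = rng.random((200, 5))                        # rectified EMG envelopes (>= 0)
F = E @ W_true + 0.01 * rng.standard_normal((200, 2))   # measured forces + noise

# fit virtual pulling vectors, then reconstruct force from EMG alone
W_hat, *_ = np.linalg.lstsq(E, F, rcond=None)
F_rec = E @ W_hat                               # online force reconstruction
```

Once W_hat is fixed, online reconstruction is a single matrix-vector product per EMG sample, which is what makes real-time feedback feasible.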
Ankowski, Artur M.; Benhar, Omar; Coloma, Pilar; Huber, Patrick; Jen, Chun-Min; Mariani, Camillo; Meloni, Davide; Vagnoni, Erica
2015-10-22
To be able to achieve their physics goals, future neutrino-oscillation experiments will need to reconstruct the neutrino energy with very high accuracy. In this work, we analyze how the energy reconstruction may be affected by realistic detection capabilities, such as energy resolutions, efficiencies, and thresholds. This allows us to estimate how well the detector performance needs to be determined a priori in order to avoid a sizable bias in the measurement of the relevant oscillation parameters. We compare the kinematic and calorimetric methods of energy reconstruction in the context of two νμ → νμ disappearance experiments operating in different energy regimes. For the calorimetric reconstruction method, we find that the detector performance has to be estimated with an O(10%) accuracy to avoid a significant bias in the extracted oscillation parameters. By contrast, in the case of kinematic energy reconstruction, we observe that the results exhibit less sensitivity to an overestimation of the detector capabilities.
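The kinematic method referred to above is, in its simplest quasi-elastic form, a closed formula in the charged-lepton energy and scattering angle. The sketch below is the textbook two-body approximation (struck nucleon at rest, binding energy and Fermi motion neglected), not the detector-level reconstruction studied in the paper; the default masses are the muon and neutron masses in GeV.

```python
import numpy as np

def kinematic_enu(E_l, cos_theta, m_l=0.10566, M=0.93957):
    """Kinematic (quasi-elastic) neutrino-energy estimate in GeV.

    Simplified two-body formula: target nucleon at rest, binding energy
    and Fermi motion neglected. Illustrative approximation only.
    """
    p_l = np.sqrt(E_l**2 - m_l**2)              # lepton momentum
    return (M * E_l - 0.5 * m_l**2) / (M - E_l + p_l * cos_theta)

# a 1 GeV muon emitted exactly forward implies E_nu just above 1 GeV
e_nu = kinematic_enu(1.0, 1.0)
```

A useful sanity check: in the massless-lepton, forward limit the formula returns the lepton energy itself, since the nucleon then absorbs no energy.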
Balima, O.; Favennec, Y.; Rousse, D.
2013-10-15
Highlights: • New strategies to improve the accuracy of the reconstruction through mesh and finite element parameterization. • Use of gradient filtering through an alternative inner product within the adjoint method. • An integral form of the cost function makes the reconstruction compatible with all finite element formulations, continuous and discontinuous. • A gradient-based algorithm with the adjoint method is used for the reconstruction. -- Abstract: Optical tomography is mathematically treated as a non-linear inverse problem where the optical properties of the probed medium are recovered through the minimization of the errors between the experimental measurements and their predictions with a numerical model at the locations of the detectors. Given the ill-posed behavior of the inverse problem, some regularization tools must be employed, and Tikhonov-type penalization is the most commonly used in optical tomography applications. This paper introduces an optimized approach for optical tomography reconstruction with the finite element method. An integral form of the cost function is used to take into account the surfaces of the detectors and to make the reconstruction compatible with all finite element formulations, continuous and discontinuous. Within a gradient-based algorithm, where the adjoint method is used to compute the gradient of the cost function, an alternative inner product is employed for preconditioning the reconstruction algorithm. Moreover, an appropriate re-parameterization of the optical properties is performed. These regularization strategies are compared with classical Tikhonov penalization. It is shown that both the re-parameterization and the use of the Sobolev cost-function gradient are efficient for solving such an ill-posed inverse problem.
Deng, Qingqiong; Zhou, Mingquan; Wu, Zhongke; Shui, Wuyang; Ji, Yuan; Wang, Xingce; Liu, Ching Yiu Jessica; Huang, Youliang; Jiang, Haiyan
2016-02-01
Craniofacial reconstruction recreates a facial outlook from the cranium based on the relationship between the face and the skull to assist identification. But craniofacial structures are very complex, and this relationship is not the same in different craniofacial regions. Several regional methods have recently been proposed; these methods segment the face and skull into regions and learn the relationship of each region independently, after which the facial regions estimated for a given skull are glued together to generate a face. Most of these regional methods use vertex coordinates to represent the regions and define a uniform coordinate system for all of the regions. Consequently, the inconsistency in the positions of regions between different individuals is not eliminated before learning the relationships between the face and skull regions, and this reduces the accuracy of the craniofacial reconstruction. To solve this problem, an improved regional method involving two types of coordinate adjustment is proposed in this paper. One is a global coordinate adjustment performed on the skulls and faces to eliminate inconsistencies in the position and pose of the heads; the other is a local coordinate adjustment performed on the skull and face regions to eliminate inconsistencies in the positions of these regions. After these two coordinate adjustments, partial least squares regression (PLSR) is used to estimate the relationship between each face region and the corresponding skull region. To obtain a more accurate reconstruction, a new fusion strategy is also proposed to preserve the reconstructed feature regions when gluing the facial regions together. This is based on the observation that the feature regions usually have smaller reconstruction errors than the rest of the face. The results demonstrate that the coordinate adjustments and the new fusion strategy can significantly improve the
Analysis of dental root apical morphology: a new method for dietary reconstructions in primates.
Hamon, Noémie; Emonet, Edouard-Georges; Chaimanee, Yaowalak; Guy, Franck; Tafforeau, Paul; Jaeger, Jean-Jacques
2012-06-01
The reconstruction of paleo-diets is an important task in the study of fossil primates. Previously, paleo-diet reconstructions were performed using different methods based on extant primate models. In particular, dental microwear or isotopic analyses provided accurate reconstructions for some fossil primates. However, it is sometimes difficult or impossible to apply these methods to fossil material. Therefore, the development of new, independent methods of diet reconstruction is crucial to improve our knowledge of primate paleobiology and paleoecology. This study investigates the correlation between tooth root apical morphology and diet in primates, and its potential for paleo-diet reconstructions. Dental roots are composed of two portions: the eruptive portion, with a smooth and regular surface, and the apical penetrative portion, which displays an irregular and corrugated surface. Here, the angle formed by these two portions (aPE) and the ratio of penetrative portion over total root length (PPI) are calculated for each mandibular tooth root. A strong correlation between these two variables and the proportion of some food types (fruits, leaves, seeds, animal matter, and vertebrates) in the diet is found, allowing the use of tooth root apical morphology as a tool for dietary reconstructions in primates. The method was then applied to the fossil hominoid Khoratpithecus piriyai, from the Late Miocene of Thailand. The paleo-diet deduced from aPE and PPI is dominated by fruits (>50%), associated with animal matter (1-25%). Leaves, vertebrates and most probably seeds were excluded from the diet of Khoratpithecus, which is consistent with previous studies.
Wang, Zhengzhou; Hu, Bingliang; Yin, Qinye
2017-01-01
The schlieren method of measuring far-field focal spots offers many advantages at the Shenguang III laser facility, such as low cost and automatic laser-path collimation. However, current methods of far-field focal-spot measurement often suffer from low precision and efficiency when the final focal spot is merged manually, reducing the accuracy of reconstruction. In this paper, we introduce an improved schlieren method to construct high dynamic-range images of far-field focal spots and improve reconstruction accuracy and efficiency. First, a detection method based on weak-beam sampling and magnified imaging was designed; images of the main and side lobes of the focused laser irradiance in the far field were obtained using two scientific CCD cameras. Second, using a self-correlation template-matching algorithm, a circle the same size as the schlieren ball was cut from the main-lobe image and shifted over a 100×100 pixel search region; the position giving the largest correlation coefficient between the side-lobe image and the cut main-lobe image was identified as the best matching point. Finally, the least squares method was used to fit the center of the side-lobe schlieren ball, with an error of less than 1 pixel. The experimental results show that this method enables accurate, high-dynamic-range measurement of a far-field focal spot and automatic image reconstruction. Because the best matching point is obtained through image processing rather than traditional reconstruction based on manual splicing, the method improves both the efficiency of focal-spot reconstruction and the experimental precision. PMID:28207758
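The matching criterion in such a template search, the normalized cross-correlation coefficient, is straightforward to implement. An exhaustive numpy sketch (a generic illustration, not the facility's pipeline; the image sizes and planted offset are toy assumptions):

```python
import numpy as np

def best_match(image, template):
    """Exhaustive normalized cross-correlation; returns best (row, col) and score."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t * t).sum())
    best_r, best_pos = -2.0, (0, 0)
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            p = image[i:i + th, j:j + tw]
            p = p - p.mean()                    # zero-mean patch
            denom = np.sqrt((p * p).sum()) * t_norm
            if denom > 0:
                r = (p * t).sum() / denom       # correlation coefficient in [-1, 1]
                if r > best_r:
                    best_r, best_pos = r, (i, j)
    return best_pos, best_r

# toy example: plant the template at a known offset inside a noise image
rng = np.random.default_rng(6)
img = rng.random((30, 30))
tpl = rng.random((8, 8))
img[5:13, 7:15] = tpl
pos, best = best_match(img, tpl)
```

Because the coefficient is invariant to local brightness offset and gain, it tolerates the intensity mismatch between main-lobe and side-lobe exposures.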
A comparative study of interface reconstruction methods for multi-material ALE simulations
Kucharik, Milan; Garimella, Rao; Schofield, Samuel; Shashkov, Mikhail
2009-01-01
In this paper we compare the performance of different methods for reconstructing interfaces in multi-material compressible flow simulations. The methods compared are a material-order-dependent Volume-of-Fluid (VOF) method, a material-order-independent VOF method based on power diagram partitioning of cells, and the Moment-of-Fluid (MOF) method. We demonstrate that the MOF method provides the most accurate tracking of interfaces, followed by the VOF method with the right material ordering. The material-order-independent VOF method performs somewhat worse than the above two, while the solutions with VOF using the wrong material order are considerably worse.
Liu, Wenyang; Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J.; Sawant, Amit; Ruan, Dan
2015-11-15
Purpose: To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT). Methods: The authors have developed a level-set based surface reconstruction method for point clouds captured by a surface photogrammetry system (VisionRT). The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. In contrast to explicit/discrete meshing-type schemes, their continuous representation is particularly advantageous for subsequent surface registration and motion tracking, as it eliminates the need for maintaining explicit point correspondences as in discrete models. The authors solve the proposed method with an efficient narrowband evolving scheme. The authors evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiments, the authors generated a series of surfaces, each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched areas had different degrees of noise and missing data, since VisionRT has difficulty detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations, and quantitatively evaluated the reconstructed surfaces by comparing them against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiments, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground truth, the authors qualitatively validated the reconstructed surfaces by comparing the local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient. Results: On phantom point clouds, their method
Pantograph: A template-based method for genome-scale metabolic model reconstruction.
Loira, Nicolas; Zhukova, Anna; Sherman, David James
2015-04-01
Genome-scale metabolic models are a powerful tool to study the inner workings of biological systems and to guide applications. The advent of cheap sequencing has brought the opportunity to create metabolic maps of biotechnologically interesting organisms. While this drives the development of new methods and automatic tools, network reconstruction remains a time-consuming process where extensive manual curation is required. This curation introduces specific knowledge about the modeled organism, either explicitly in the form of molecular processes, or indirectly in the form of annotations of the model elements. Paradoxically, this knowledge is usually lost when reconstruction of a different organism is started. We introduce the Pantograph method for metabolic model reconstruction. This method combines a template reaction knowledge base, orthology mappings between two organisms, and experimental phenotypic evidence, to build a genome-scale metabolic model for a target organism. Our method infers implicit knowledge from annotations in the template, and rewrites these inferences to include them in the resulting model of the target organism. The generated model is well suited for manual curation. Scripts for evaluating the model with respect to experimental data are automatically generated, to aid curators in iterative improvement. We present an implementation of the Pantograph method, as a toolbox for genome-scale model reconstruction, curation and validation. This open source package can be obtained from: http://pathtastic.gforge.inria.fr.
Spectrum reconstruction method based on the detector response model calibrated by x-ray fluorescence
NASA Astrophysics Data System (ADS)
Li, Ruizhe; Li, Liang; Chen, Zhiqiang
2017-02-01
Accurate estimation of distortion-free spectra is important but difficult in various applications, especially for spectral computed tomography. Two key problems must be solved to reconstruct the incident spectrum. One is the acquisition of the detector energy response. It can be calculated by Monte Carlo simulation, which requires detailed modeling of the detector system and high computational power. It can also be acquired by establishing a parametric response model and calibrating it using monochromatic x-ray sources, such as synchrotron sources or radioactive isotopes. However, these monochromatic sources are difficult to obtain. Inspired by x-ray fluorescence (XRF) spectrum modeling, we propose a feasible method to obtain the detector energy response based on an optimized parametric model for CdZnTe or CdTe detectors. The other key problem is the reconstruction of the incident spectrum with the detector response. Directly obtaining an accurate solution from noisy data is difficult because the reconstruction problem is severely ill-posed. In contrast to the existing spectrum stripping method, a maximum likelihood-expectation maximization iterative algorithm is developed based on the Poisson noise model of the system. Simulation and experiment results show that our method is effective for spectrum reconstruction and markedly increases the accuracy of XRF spectra compared with the spectrum stripping method. The applicability of the proposed method is discussed, and promising results are presented.
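The maximum likelihood-expectation maximization step for this kind of spectral deconvolution has a standard multiplicative form. A minimal sketch, assuming a known response matrix R (all names and the toy dimensions are illustrative, not the authors' implementation):

```python
import numpy as np

def mlem_spectrum(R, m, n_iter=200):
    """ML-EM deconvolution of an incident spectrum s from measured
    counts m ~ Poisson(R @ s), where R[i, j] is the probability that a
    photon in incident-energy bin j is recorded in measured channel i."""
    s = np.full(R.shape[1], m.sum() / R.shape[1])   # flat initial guess
    sens = R.sum(axis=0)                            # sensitivity, R^T 1
    for _ in range(n_iter):
        est = R @ s                                 # forward projection
        est[est == 0] = 1e-12                       # guard against 0/0
        s = s * (R.T @ (m / est)) / sens            # multiplicative update
    return s
```

The update preserves non-negativity automatically, which is one reason ML-EM is preferred over direct (stripping-style) inversion for ill-posed spectra.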
Single-cell volume estimation by applying three-dimensional reconstruction methods
NASA Astrophysics Data System (ADS)
Khatibi, Siamak; Allansson, Louise; Gustavsson, Tomas; Blomstrand, Fredrik; Hansson, Elisabeth; Olsson, Torsten
1999-05-01
We have studied three-dimensional reconstruction methods to estimate the cell volume of astroglial cells in primary culture. The studies are based on fluorescence imaging and optical sectioning. An automated image-acquisition system was developed to collect two-dimensional microscopic images. Images were reconstructed by the linear Maximum a Posteriori method and the non-linear Maximum Likelihood Expectation Maximization (ML-EM) method. In addition, because of the high computational demand of the ML-EM algorithm, we have developed a fast variant of this method. Advanced image analysis techniques were applied for accurate and automated cell volume determination. The sensitivity and accuracy of the reconstruction methods were evaluated by using fluorescent micro-beads of known diameter. The algorithms were applied to fura-2-labeled astroglial cells in primary culture exposed to hypo- or hyper-osmotic stress. The results showed that the ML-EM reconstructed images are adequate for the determination of volume changes in cells or parts thereof.
Reconstruction of the sound field above a reflecting plane using the equivalent source method
NASA Astrophysics Data System (ADS)
Bi, Chuan-Xing; Jing, Wen-Qian; Zhang, Yong-Bin; Lin, Wang-Lin
2017-01-01
In practical situations, vibrating objects are usually located above a reflecting plane rather than being exposed to a free field. The conventional nearfield acoustic holography (NAH) sometimes fails to identify sound sources under such conditions. This paper develops two kinds of equivalent source method (ESM)-based half-space NAH to reconstruct the sound field above a reflecting plane. In the first kind of method, the half-space Green's function is introduced into the ESM-based NAH, and the sound field is reconstructed under the condition that the surface impedance of the reflecting plane is known a priori. The second kind of method regards the reflections as being radiated by equivalent sources placed under the reflecting plane, and the sound field is reconstructed by matching the pressure on the hologram surface with the equivalent sources distributed within the vibrating object and those substituting for reflections. Thus, this kind of method is independent of the surface impedance of the reflecting plane. Numerical simulations and experiments demonstrate the feasibility of these two kinds of methods for reconstructing the sound field above a reflecting plane.
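Stripped of the half-space details, the ESM reduces to a regularized inverse problem: fit equivalent source strengths to the hologram pressure, then propagate them to the reconstruction surface. A minimal free-field sketch, assuming point monopole equivalent sources and a Tikhonov-regularized least-squares fit (all names and the regularization choice are illustrative):

```python
import numpy as np

def greens(src, fld, k):
    """Free-field Green's function matrix between source points `src`
    (n_src, 3) and field points `fld` (n_fld, 3) at wavenumber k."""
    r = np.linalg.norm(fld[:, None, :] - src[None, :, :], axis=2)
    return np.exp(-1j * k * r) / (4 * np.pi * r)

def esm_reconstruct(p_holo, src_pts, holo_pts, rec_pts, k, lam=1e-8):
    """Fit equivalent source strengths q to the measured hologram
    pressure with Tikhonov regularization, then propagate q to the
    reconstruction points."""
    G = greens(src_pts, holo_pts, k)
    # Regularized normal equations: (G^H G + lam I) q = G^H p
    q = np.linalg.solve(G.conj().T @ G + lam * np.eye(G.shape[1]),
                        G.conj().T @ p_holo)
    return greens(src_pts, rec_pts, k) @ q
```

The second kind of method in the abstract would simply augment `src_pts` with mirror sources below the reflecting plane; the fitting step is unchanged.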
Phase derivative method for reconstruction of slightly off-axis digital holograms.
Guo, Cheng-Shan; Wang, Ben-Yi; Sha, Bei; Lu, Yu-Jie; Xu, Ming-Yuan
2014-12-15
A phase derivative (PD) method is proposed for reconstruction of off-axis holograms. In this method, a phase distribution of the tested object wave constrained within 0 to pi radians is first worked out by a simple analytical formula; it is then corrected to its proper range from -pi to pi according to the sign characteristics of its first-order derivative. A theoretical analysis indicates that this PD method is particularly suitable for reconstruction of slightly off-axis holograms because, in principle, it only requires the spatial frequency of the reference beam to be larger than that of the tested object wave. In addition, because the PD method is a purely local method with no need for any integral operation or phase-shifting algorithm in the process of phase retrieval, it can reduce the computational load and memory requirements of the image processing system. Some experimental results are given to demonstrate the feasibility of the method.
McCloskey, Rosemary M.; Liang, Richard H.; Harrigan, P. Richard; Brumme, Zabrina L.
2014-01-01
A population of human immunodeficiency virus (HIV) within a host often descends from a single transmitted/founder virus. The high mutation rate of HIV, coupled with long delays between infection and diagnosis, makes isolating and characterizing this strain a challenge. In theory, ancestral reconstruction could be used to recover this strain from sequences sampled in chronic infection; however, the accuracy of phylogenetic techniques in this context is unknown. To evaluate the accuracy of these methods, we applied ancestral reconstruction to a large panel of published longitudinal clonal and/or single-genome-amplification HIV sequence data sets with at least one intrapatient sequence set sampled within 6 months of infection or seroconversion (n = 19,486 sequences, median [interquartile range] = 49 [20 to 86] sequences/set). The consensus of the earliest sequences was used as the best possible estimate of the transmitted/founder. These sequences were compared to ancestral reconstructions from sequences sampled at later time points using both phylogenetic and phylogeny-naive methods. Overall, phylogenetic methods conferred a 16% improvement in reproducing the consensus of early sequences, compared to phylogeny-naive methods. This relative advantage increased with intrapatient sequence diversity (P < 10−5) and the time elapsed between the earliest and subsequent samples (P < 10−5). However, neither approach performed well for reconstructing ancestral indel variation, especially within indel-rich regions of the HIV genome. Although further improvements are needed, our results indicate that phylogenetic methods for ancestral reconstruction significantly outperform phylogeny-naive alternatives, and we identify experimental conditions and study designs that can enhance accuracy of transmitted/founder virus reconstruction. IMPORTANCE When HIV is transmitted into a new host, most of the viruses fail to infect host cells. Consequently, an HIV infection tends to be
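The phylogeny-naive baseline used here, the consensus of the earliest sequences, is simple to state concretely: a column-wise majority vote over an alignment. A minimal sketch (ties are resolved arbitrarily by count order, which a real analysis would handle more carefully):

```python
from collections import Counter

def consensus(seqs):
    """Column-wise majority consensus of equal-length aligned sequences,
    a phylogeny-naive estimate of the transmitted/founder strain."""
    assert len({len(s) for s in seqs}) == 1, "sequences must be aligned"
    cols = zip(*seqs)  # iterate over alignment columns
    return "".join(Counter(col).most_common(1)[0][0] for col in cols)
```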
Zhang, Xiao-Zheng; Thomas, Jean-Hugh; Bi, Chuan-Xing; Pascal, Jean-Claude
2012-10-01
A time-domain plane wave superposition method is proposed to reconstruct nonstationary sound fields. In this method, the sound field is expressed as a superposition of time convolutions between the estimated time-wavenumber spectrum of the sound pressure on a virtual source plane and the time-domain propagation kernel at each wavenumber. By discretizing the time convolutions directly, the reconstruction can be carried out iteratively in the time domain, thus providing the advantage of continuously reconstructing time-dependent pressure signals. In the reconstruction process, Tikhonov regularization is introduced at each time step to obtain a relevant estimate of the time-wavenumber spectrum on the virtual source plane. Because the double infinite integral of the two-dimensional spatial Fourier transform is discretized directly in the wavenumber domain in the proposed method, it does not need to perform the two-dimensional spatial fast Fourier transform that is generally used in time domain holography and real-time near-field acoustic holography; it therefore avoids, in theory, some errors associated with the two-dimensional spatial fast Fourier transform and makes it possible to use an irregular microphone array. The feasibility of the proposed method is demonstrated by numerical simulations and an experiment with two speakers.
Inverse Heat Conduction Methods in the CHAR Code for Aerothermal Flight Data Reconstruction
NASA Technical Reports Server (NTRS)
Oliver, A. Brandon; Amar, Adam J.
2016-01-01
Reconstruction of flight aerothermal environments often requires the solution of an inverse heat transfer problem, which is an ill-posed problem of determining boundary conditions from discrete measurements in the interior of the domain. This paper will present the algorithms implemented in the CHAR code for use in reconstruction of EFT-1 flight data and future testing activities. Implementation details will be discussed, and alternative hybrid-methods that are permitted by the implementation will be described. Results will be presented for a number of problems.
A novel method for event reconstruction in Liquid Argon Time Projection Chamber
NASA Astrophysics Data System (ADS)
Diwan, M.; Potekhin, M.; Viren, B.; Qian, X.; Zhang, C.
2016-10-01
Future experiments such as the Deep Underground Neutrino Experiment (DUNE) will use very large Liquid Argon Time Projection Chambers (LArTPCs) containing tens of kilotons of cryogenic medium. To utilize a sensitive volume of that size, the current design employs arrays of wire electrodes grouped in readout planes, arranged at stereo angles. This leads to certain challenges for object reconstruction due to ambiguities inherent in such a scheme. We present a novel reconstruction method (named "Wirecell") inspired by principles used in tomography, which brings the LArTPC technology closer to its full potential.
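The stereo-wire ambiguity can be illustrated with a toy 2-D model: each "wire" measures the summed charge of the pixels it crosses, and the image is recovered from a few angled views under a non-negativity constraint (deposited charge cannot be negative). This is only a cartoon of the tomography-style idea, not the Wirecell algorithm itself; the names are hypothetical and SciPy's `nnls` stands in for the solver:

```python
import numpy as np
from scipy.optimize import nnls

def wire_matrix(n):
    """Geometry matrix A for an n x n pixel grid read out by three toy
    'wire planes': row sums, column sums, and diagonal sums.
    A[w, p] = 1 if wire w crosses pixel p."""
    pix = np.arange(n * n).reshape(n, n)    # flat pixel index per cell
    rows = []
    for i in range(n):                      # horizontal wires
        r = np.zeros(n * n); r[pix[i, :]] = 1.0; rows.append(r)
    for j in range(n):                      # vertical wires
        r = np.zeros(n * n); r[pix[:, j]] = 1.0; rows.append(r)
    for d in range(-(n - 1), n):            # diagonal wires
        r = np.zeros(n * n); r[pix.diagonal(d)] = 1.0; rows.append(r)
    return np.array(rows)

def reconstruct(A, measured):
    """Non-negative least-squares estimate of the pixel charges."""
    q, _residual = nnls(A, measured)
    return q
```

Because the system is underdetermined, the recovered image is one consistent non-negative solution rather than a unique inverse; the non-negativity constraint is what prunes most of the ambiguous "ghost" configurations.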
A simple method of aortic valve reconstruction with fixed pericardium in children
Hosseinpour, Amir-Reza; González-Calle, Antonio; Adsuar-Gómez, Alejandro; Santos-deSoto, José
2013-01-01
Aortic valve reconstruction with fixed pericardium may occasionally be very useful when treating children with aortic valve disease. This is because diseased aortic valves in children are sometimes too dysmorphic for simple repair without the addition of material, their annulus may be too small for a prosthesis, and the Ross operation may be precluded due to other congenital anomalies such as pulmonary valvar or coronary malformations. Such reconstruction is usually technically demanding and requires much precision. We describe a simple alternative method, which we have carried out in 3 patients, aged 1 week, 3 years and 12 years, respectively, with good early results. PMID:23343835
NASA Astrophysics Data System (ADS)
Xu, Luopeng; Dan, Youquan; Wang, Qingyuan
2015-10-01
The continuous wavelet transform (CWT) introduces an expandable spatial and frequency window which can overcome the poor localization characteristic of the Fourier transform and the windowed Fourier transform. The CWT method is widely applied in the non-stationary signal analysis field, including optical 3D shape reconstruction, with remarkable performance. In optical 3D surface measurement, the performance of CWT for optical fringe pattern phase reconstruction usually depends on the choice of wavelet function. A large family of CWT wavelet functions, such as the Mexican Hat wavelet, Morlet wavelet, DOG wavelet, Gabor wavelet and so on, can be generated from the Gauss wavelet function. However, so far, application of the Gauss wavelet transform (GWT) method (i.e. CWT with a Gauss wavelet function) in optical profilometry has rarely been reported. In this paper, the method using GWT for optical fringe pattern phase reconstruction is presented first, and the comparisons between real and complex GWT methods are discussed in detail. Examples of numerical simulations are also given and analyzed. The results show that both the real GWT method combined with a Hilbert transform and the complex GWT method can realize three-dimensional surface reconstruction, and the performance of reconstruction generally depends on the frequency-domain appearance of the Gauss wavelet functions. For optical fringe patterns whose phase varies strongly with position, the performance of the real GWT is better than that of the complex one, because the complex Gauss-series wavelets exhibit frequency sidelobes. Finally, experiments are carried out and the experimental results agree well with our theoretical analysis.
A Reconstructed Discontinuous Galerkin Method for the Euler Equations on Arbitrary Grids
Hong Luo; Luqing Luo; Robert Nourgaliev
2012-11-01
A reconstruction-based discontinuous Galerkin (RDG(P1P2)) method, a variant of the P1P2 method, is presented for the solution of the compressible Euler equations on arbitrary grids. In this method, an in-cell reconstruction, designed to enhance the accuracy of the discontinuous Galerkin method, is used to obtain a quadratic polynomial solution (P2) from the underlying linear polynomial (P1) discontinuous Galerkin solution using a least-squares method. The stencils used in the reconstruction involve only the von Neumann neighborhood (face-neighboring cells) and are compact and consistent with the underlying DG method. The developed RDG method is used to compute a variety of flow problems on arbitrary meshes to demonstrate its accuracy, efficiency, robustness, and versatility. The numerical results indicate that this RDG(P1P2) method is third-order accurate, and outperforms the third-order DG method (DG(P2)) in terms of both computing costs and storage requirements.
NASA Astrophysics Data System (ADS)
Keselman, J. A.; Nusser, A.
2017-01-01
NoAM, for "No Action Method", is a framework for reconstructing the past orbits of observed tracers of the large scale mass density field. It seeks exact solutions of the equations of motion (EoM), satisfying initial homogeneity and the final observed particle (tracer) positions. The solutions are found iteratively, reaching a specified tolerance defined as the RMS of the distance between reconstructed and observed positions. Starting from a guess for the initial conditions, NoAM advances particles using standard N-body techniques for solving the EoM. Alternatively, the EoM can be replaced by any approximation such as Zel'dovich and second order perturbation theory (2LPT). NoAM is suitable for billions of particles and can easily handle non-regular volumes, redshift space, and other constraints. We implement NoAM to systematically compare Zel'dovich, 2LPT, and N-body dynamics over diverse configurations ranging from an idealized high-res periodic simulation box to realistic galaxy mocks. Our findings are: (i) non-linear reconstructions with Zel'dovich, 2LPT, and full dynamics perform better than linear theory only for idealized catalogs in real space. For realistic catalogs, linear theory is the optimal choice for reconstructing velocity fields smoothed on scales ≳ 5 h^{-1} Mpc. (ii) All non-linear back-in-time reconstructions tested here produce comparable enhancement of the baryonic oscillation signal in the correlation function.
NASA Astrophysics Data System (ADS)
Stephanakis, Ioannis M.; Anastassopoulos, George C.
2009-03-01
A novel algorithm for 3-D tomographic reconstruction is proposed. The proposed algorithm is based on multiresolution techniques for local inversion of the 3-D Radon transform in confined subvolumes within the entire object space. Directional wavelet functions of the form ψ_{m,n}^j(x) = 2^{j/2} ψ(2^j w_{m,n} x) are employed in a sequence of double filtering and 2-D backprojection operations performed on vertical and horizontal reconstruction planes using the method suggested by Marr and others. The densities of the 3-D object are found initially as backprojections of coarse wavelet functions of this form at directions on vertical and horizontal planes that intersect the object. As the algorithm evolves, finer planar wavelets intersecting a subvolume of medical interest within the original object may be used to reconstruct its details by double backprojection steps on vertical and horizontal planes in a similar fashion. Reduction in the complexity of the reconstruction algorithm is achieved due to the good localization properties of planar wavelets, which render the details of the projections with small errors. Experimental results that illustrate multiresolution reconstruction at four successive levels of resolution are given for wavelets belonging to the Daubechies family.
A comparison of force reconstruction methods for a lumped mass beam
Bateman, V.I.; Mayes, R.L.; Carne, T.G.
1992-11-01
Two extensions of the force reconstruction method, the Sum of Weighted Accelerations Technique (SWAT), are presented in this paper, and the results are compared to those obtained using SWAT. SWAT requires the use of the structure's elastic mode shapes for reconstruction of the applied force. Although based on the same theory, the two new techniques do not rely on mode shapes to reconstruct the applied force and may be applied to structures whose mode shapes are not available. One technique uses the measured force and acceleration responses with the rigid body mode shapes to calculate the scalar weighting vector, so the technique is called SWAT-CAL (SWAT using a CALibrated force input). The second technique uses only the free-decay time response of the structure with the rigid body mode shapes to calculate the scalar weighting vector and is called SWAT-TEEM (SWAT using Time Eliminated Elastic Modes).
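The common core of SWAT and its variants is a weighting vector w chosen so that the weighted sum of measured accelerations, w·a(t), annihilates the elastic modes while scaling the rigid-body content to the net external force. When the mode shapes are available (classic SWAT), w can be obtained from a small linear system; a minimal sketch with hypothetical names and a single rigid-body translation mode:

```python
import numpy as np

def swat_weights(phi_rigid, phi_elastic, total_mass):
    """Weighting vector w such that w @ phi_elastic = 0 (elastic modes
    filtered out) and w @ phi_rigid = total_mass (rigid-body mode scaled
    so that w @ a(t) recovers the net applied force)."""
    # Stack the constraints as Phi^T w = b and solve in the LS sense
    # (min-norm exact solution when there are more sensors than modes).
    A = np.column_stack([phi_rigid, phi_elastic]).T   # (n_modes, n_sensors)
    b = np.concatenate([[total_mass],
                        np.zeros(phi_elastic.shape[1])])
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w
```

The two extensions in the abstract replace the mode-shape constraints with information from a calibrated force input (SWAT-CAL) or the free-decay response (SWAT-TEEM), but the role of w is the same.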
A marked bounding box method for image data reduction and reconstruction of sole patterns
NASA Astrophysics Data System (ADS)
Wang, Xingyue; Wu, Jianhua; Zhao, Qingmin; Cheng, Jian; Zhu, Yican
2012-01-01
A novel and efficient method, called the marked bounding box method and based on marching cubes, is presented for point cloud data reduction of sole patterns. This method is characterized in that each bounding box is marked with an index during data reduction, for later use during data reconstruction. The data reconstruction is implemented from the simplified data set by using triangular meshes, the indices being used to search the nearest points from adjacent bounding boxes. Afterwards, the normal vectors are estimated to determine the strength and direction of the surface reflected light. The proposed method is used in a sole pattern classification and query system which uses OpenGL under Visual C++ to render the image of sole patterns. Numerical results are given to demonstrate the efficiency and novelty of our method. Finally, conclusions and discussion are given.
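The marking idea can be sketched as a uniform-grid reduction: each point gets an integer box index, one representative per occupied box is kept, and the stored indices let the reconstruction step look up candidate nearest points in the 26 adjacent boxes instead of searching the whole cloud. A simplified sketch (centroid representatives; the authors' marching-cubes-based scheme is more involved, and all names here are illustrative):

```python
import numpy as np

def reduce_by_boxes(points, box_size):
    """Reduce a point cloud by keeping one representative (the centroid)
    per occupied bounding box; each kept point is keyed by its box index
    so neighbouring boxes can be found during mesh reconstruction."""
    idx = np.floor(points / box_size).astype(int)   # box index per point
    boxes = {}
    for p, key in zip(points, map(tuple, idx)):
        boxes.setdefault(key, []).append(p)
    return {key: np.mean(pts, axis=0) for key, pts in boxes.items()}

def neighbours(key):
    """Indices of the 26 boxes adjacent to `key`, used to restrict the
    nearest-point search during triangulation."""
    x, y, z = key
    return [(x + i, y + j, z + k)
            for i in (-1, 0, 1) for j in (-1, 0, 1) for k in (-1, 0, 1)
            if (i, j, k) != (0, 0, 0)]
```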
NASA Technical Reports Server (NTRS)
Baxes, Gregory A. (Inventor); Linger, Timothy C. (Inventor)
2011-01-01
Systems and methods are provided for progressive mesh storage and reconstruction using wavelet-encoded height fields. A method for progressive mesh storage includes reading raster height field data, and processing the raster height field data with a discrete wavelet transform to generate wavelet-encoded height fields. In another embodiment, a method for progressive mesh storage includes reading texture map data, and processing the texture map data with a discrete wavelet transform to generate wavelet-encoded texture map fields. A method for reconstructing a progressive mesh from wavelet-encoded height field data includes determining terrain blocks, and a level of detail required for each terrain block, based upon a viewpoint. Triangle strip constructs are generated from vertices of the terrain blocks, and an image is rendered utilizing the triangle strip constructs. Software products that implement these methods are provided.
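The storage scheme rests on a discrete wavelet transform of the raster height field: the approximation band alone gives a coarse level of detail, and adding detail bands progressively refines terrain blocks nearer the viewpoint. A one-level sketch (the patent does not specify the wavelet; orthogonal Haar averaging is used here for brevity, with perfect reconstruction):

```python
import numpy as np

def haar2d(h):
    """One level of the 2-D Haar wavelet transform of an even-sized
    height field: approximation LL and detail bands LH, HL, HH."""
    a = (h[0::2, :] + h[1::2, :]) / 2.0   # row averages
    d = (h[0::2, :] - h[1::2, :]) / 2.0   # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Invert one Haar level; zeroing LH/HL/HH yields a smoothed field,
    i.e. a coarser level of detail for distant terrain blocks."""
    a = np.empty((LL.shape[0], 2 * LL.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    h = np.empty((2 * a.shape[0], a.shape[1]))
    h[0::2, :], h[1::2, :] = a + d, a - d
    return h
```

Recursing `haar2d` on LL builds the multi-level pyramid from which per-block levels of detail are selected.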
NASA Technical Reports Server (NTRS)
Baxes, Gregory A. (Inventor)
2010-01-01
Systems and methods are provided for progressive mesh storage and reconstruction using wavelet-encoded height fields. A method for progressive mesh storage includes reading raster height field data, and processing the raster height field data with a discrete wavelet transform to generate wavelet-encoded height fields. In another embodiment, a method for progressive mesh storage includes reading texture map data, and processing the texture map data with a discrete wavelet transform to generate wavelet-encoded texture map fields. A method for reconstructing a progressive mesh from wavelet-encoded height field data includes determining terrain blocks, and a level of detail required for each terrain block, based upon a viewpoint. Triangle strip constructs are generated from vertices of the terrain blocks, and an image is rendered utilizing the triangle strip constructs. Software products that implement these methods are provided.
Low Dose PET Image Reconstruction with Total Variation Using Alternating Direction Method
Yu, Xingjian; Wang, Chenye; Hu, Hongjie; Liu, Huafeng
2016-01-01
In this paper, a total variation (TV) minimization strategy is proposed to overcome the problems of limited spatial resolution and large amounts of noise in low-dose positron emission tomography (PET) image reconstruction. Two types of objective function were established based on two statistical models of measured PET data: least-squares (LS) TV for the Gaussian distribution and Poisson-TV for the Poisson distribution. To efficiently obtain high-quality reconstructed images, the alternating direction method (ADM) is used to solve these objective functions. Compared with iterative shrinkage/thresholding (IST) based algorithms, the proposed ADM can make full use of the TV constraint and converges faster. The performance of the proposed approach is validated through comparisons with the expectation-maximization (EM) method using synthetic and experimental biological data. In the comparisons, the results of both LS-TV and Poisson-TV are considered to determine which model is more suitable for PET imaging, in particular low-dose PET. To evaluate the results quantitatively, we computed the bias, variance, and contrast recovery coefficient (CRC), and drew profiles of the reconstructed images produced by the different methods. The results show that both Poisson-TV and LS-TV can provide high visual quality at a low dose level. The bias and variance of the proposed LS-TV and Poisson-TV methods are 20% to 74% less than those of the EM method at all counting levels. Poisson-TV gives the best performance in terms of high-accuracy reconstruction, with the lowest bias and variance compared to the ground truth (14.3% less bias and 21.9% less variance). In contrast, LS-TV gives the best performance in terms of high contrast of the reconstruction, with the highest CRC. PMID:28005929
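As a hedged sketch of the LS-TV formulation above, the objective being minimized can be written as ||Ax − y||² + λ·TV(x). The anisotropic TV variant and the names below are illustrative assumptions, and the ADM solver itself is omitted:

```python
import numpy as np

def total_variation(x):
    """Anisotropic total variation of a 2-D image:
    sum of absolute differences between neighbouring pixels."""
    return (np.abs(np.diff(x, axis=0)).sum()
            + np.abs(np.diff(x, axis=1)).sum())

def ls_tv_objective(x, A, y, lam):
    """LS-TV objective: ||A x - y||^2 + lam * TV(x).

    A maps the (flattened) image to the measured projection data y;
    lam trades data fidelity against smoothness of the reconstruction.
    """
    r = A @ x.ravel() - y
    return float(r @ r + lam * total_variation(x))
```

ADM would split this into an x-update (a least-squares solve) and a TV proximal step coupled by multipliers; the Poisson-TV variant replaces the quadratic data term with the Poisson negative log-likelihood.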
Richart, Jose; Otal, Antonio; Rodriguez, Silvia; Nicolás, Ana Isabel; DePiaggio, Marina; Santos, Manuel; Vijande, Javier; Perez-Calatayud, Jose
2015-01-01
Purpose There are perineal templates for interstitial implants, such as the MUPIT and Syed applicators. Their limitations are the lack of an intracavitary component and the need to use computed tomography (CT) for treatment planning, since neither applicator is compatible with magnetic resonance imaging (MRI). To overcome these problems, a new template named Template Benidorm (TB) has recently been developed. Titanium needles are usually reconstructed from their own artifacts, mainly in T1-weighted sequences, using the void at the tip as the needle tip position. Nevertheless, the patient tissues surrounding the needles present heterogeneities that complicate the accurate identification of these artifact patterns. The purpose of this work is to reduce the titanium needle reconstruction uncertainty in the TB case using a simple method based on the free needle lengths and typical MRI pellet markers. Material and methods The proposed procedure consists of including three small vitamin A pellets (hyperintense on MRI images) compressed by both applicator plates, defining the central plane of the plate arrangement. The needles used are typically 20 cm in length. For each needle, two points are selected to define a straight line. From this line and the plane equation, the intersection can be obtained, and using the free length (knowing the offset distance), the coordinates of the needle tip can be computed. The method is applied in both T1W and T2W acquisition sequences. To evaluate the inter-observer variation of the method, three implants imaged with T1W and another three with T2W were reconstructed by two different medical physicists experienced in these reconstructions. Results and conclusions The differences observed in the positioning were significantly smaller than 1 mm in all cases. The presented algorithm also allows the use of only the T2W sequence for both contouring and reconstruction purposes. The proposed method is robust and independent of the visibility
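The line-plane intersection plus free-length step described in the abstract is simple vector geometry. A hypothetical Python sketch (names and the sign convention for the offset are assumptions, not the published procedure):

```python
import numpy as np

def needle_tip(p1, p2, plane_point, plane_normal, free_length):
    """Locate a needle tip from two digitised points on the needle axis.

    The line through p1 and p2 is intersected with the template plane
    (defined by the pellet markers); the tip is then assumed to lie
    'free_length' beyond the plane along the needle direction.
    All coordinates and lengths in the same units (e.g. mm).
    """
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    n = np.asarray(plane_normal, float)
    u = p2 - p1
    u = u / np.linalg.norm(u)          # unit direction of the needle
    t = np.dot(np.asarray(plane_point, float) - p1, n) / np.dot(u, n)
    intersection = p1 + t * u          # where the needle crosses the plane
    return intersection + free_length * u
```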
NASA Astrophysics Data System (ADS)
Smerdon, J. E.; Kaplan, A.; Zorita, E.; Gonzalez-Rouco, F. J.; Evans, M. N.
2009-12-01
Paleoclimatic reconstructions of hemispheric and global surface temperatures during the last millennium vary significantly in their estimates of decadal-to-centennial variability. Although several estimates are based on spatially-resolved climate field reconstruction (CFR) methods, comparisons have been limited to mean Northern Hemisphere temperatures. Spatial skill is explicitly investigated for four CFR methods using pseudoproxy experiments derived from two millennial-length coupled Atmosphere-Ocean General Circulation Model (AOGCM) simulations. The adopted pseudoproxy network approximates the spatial distribution of a widely used multi-proxy network and the CFRs target annual temperature variability on a 5-degree latitude-longitude grid. Results indicate that the spatial skill of presently available large-scale CFRs depends on proxy type and location, target data, and the employed reconstruction methodology, although there are widespread consistencies in the general performance of all four methods. While results are somewhat sensitive to the ability of the AOGCMs to resolve ENSO and its teleconnections, important areas such as the ocean basins and much of the Southern Hemisphere are reconstructed with particularly poor skill in both model experiments. New high-resolution proxies from poorly sampled regions may be one of the best means of improving estimates of large-scale CFRs of the last millennium.
Simons, Craig J; Cobb, Loren; Davidson, Bradley S
2014-04-01
In vivo measurement of lumbar spine configuration is useful for constructing quantitative biomechanical models. Positional magnetic resonance imaging (MRI) accommodates a larger range of movement in most joints than conventional MRI and does not require a supine position. However, this is achieved at the expense of image resolution and contrast. As a result, quantitative research using positional MRI has required long reconstruction times and is sensitive to incorrectly identifying the vertebral boundary due to low contrast between bone and surrounding tissue in the images. We present a semi-automated method used to obtain digitized reconstructions of lumbar vertebrae in any posture of interest. This method combines a high-resolution reference scan with a low-resolution postural scan to provide a detailed and accurate representation of the vertebrae in the posture of interest. Compared to a criterion standard, translational reconstruction error ranged from 0.7 to 1.6 mm and rotational reconstruction error ranged from 0.3 to 2.6°. Intraclass correlation coefficients indicated high interrater reliability for measurements within the imaging plane (ICC 0.97-0.99). Computational efficiency indicates that this method may be used to compile data sets large enough to account for population variance, and potentially expand the use of positional MRI as a quantitative biomechanics research tool.
A Method for 3D Histopathology Reconstruction Supporting Mouse Microvasculature Analysis
Xu, Yiwen; Pickering, J. Geoffrey; Nong, Zengxuan; Gibson, Eli; Arpino, John-Michael; Yin, Hao; Ward, Aaron D.
2015-01-01
Structural abnormalities of the microvasculature can impair perfusion and function. Conventional histology provides good spatial resolution with which to evaluate the microvascular structure but affords no 3-dimensional information; this limitation could lead to misinterpretations of the complex microvessel network in health and disease. The objective of this study was to develop and evaluate an accurate, fully automated 3D histology reconstruction method to visualize the arterioles and venules within the mouse hind-limb. Sections of the tibialis anterior muscle from C57BL/J6 mice (both normal and subjected to femoral artery excision) were reconstructed using pairwise rigid and affine registrations of 5 µm-thick, paraffin-embedded serial sections digitized at 0.25 µm/pixel. Low-resolution intensity-based rigid registration was used to initialize both the nucleus landmark-based registration and the conventional high-resolution intensity-based registration method. The affine nucleus landmark-based registration was developed in this work and was compared to the conventional affine high-resolution intensity-based registration method. Target registration errors were measured between adjacent tissue sections (pairwise error), as well as with respect to a 3D reference reconstruction (accumulated error, to capture propagation of error through the stack of sections). Accumulated error measures were lower (p<0.01) for the nucleus landmark technique, and superior vasculature continuity was observed. These findings indicate that registration based on automatic extraction and correspondence of small, homologous landmarks may support accurate 3D histology reconstruction. This technique avoids the otherwise problematic “banana-into-cylinder” effect observed using conventional methods that optimize the pairwise alignment of salient structures, forcing them to be section-orthogonal. This approach will provide a valuable tool for high-accuracy 3D histology tissue reconstructions for
Linear control theory for gene network modeling.
Shin, Yong-Jun; Bleris, Leonidas
2010-09-16
Systems biology is an interdisciplinary field that aims at understanding complex interactions in cells. Here we demonstrate that linear control theory can provide valuable insight and practical tools for the characterization of complex biological networks. We provide the foundation for such analyses through several case studies, including cascade and parallel forms, feedback, and feedforward loops. We reproduce experimental results and provide a rational analysis of the observed behavior. We demonstrate that methods such as the transfer function (frequency domain) and linear state-space (time domain) can be used to reliably predict the properties and transient behavior of complex network topologies, and point to specific design strategies for synthetic networks.
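A linear state-space (time-domain) treatment of a gene cascade, as discussed above, can be illustrated with a minimal forward-Euler simulation; the two-gene model and parameter names are illustrative assumptions, not the paper's case studies:

```python
import numpy as np

def simulate_cascade(k1, k2, d1, d2, u, t_end, dt=0.001):
    """Forward-Euler simulation of a linear two-gene cascade.

    dx1/dt = k1*u  - d1*x1      (gene 1 driven by input u)
    dx2/dt = k2*x1 - d2*x2      (gene 2 driven by gene 1)
    Returns the trajectory of (x1, x2); the steady state is
    x1* = k1*u/d1, x2* = k2*x1*/d2.
    """
    n = int(t_end / dt)
    x = np.zeros((n + 1, 2))
    A = np.array([[-d1, 0.0], [k2, -d2]])   # state matrix
    b = np.array([k1 * u, 0.0])             # constant input term
    for i in range(n):
        x[i + 1] = x[i] + dt * (A @ x[i] + b)
    return x
```

The same A and b define the transfer function of the cascade in the frequency domain, so the two viewpoints in the abstract describe one underlying linear model.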
Spectral/HP Element Method With Hierarchical Reconstruction for Solving Hyperbolic Conservation Laws
Xu, Zhiliang; Lin, Guang
2009-12-01
Hierarchical reconstruction (HR) has been successfully applied to prevent oscillations in solutions computed by finite volume, discontinuous Galerkin, and spectral volume schemes when solving hyperbolic conservation laws. In this paper, we demonstrate that HR can also be combined with spectral/hp element methods for solving hyperbolic conservation laws. We show that HR preserves the order of accuracy of spectral/hp element methods for smooth solutions and generates essentially non-oscillatory solution profiles for shock wave problems.
Reconstructing paleo- and initial landscapes using a multi-method approach in hummocky NE Germany
NASA Astrophysics Data System (ADS)
van der Meij, Marijn; Temme, Arnaud; Sommer, Michael
2016-04-01
The unknown state of the landscape at the onset of soil and landscape formation is one of the main sources of uncertainty in landscape evolution modelling. Reconstruction of these initial conditions is not straightforward due to the problems of polygenesis and equifinality: different initial landscapes can change through different sets of processes to an identical end state. Many attempts have been made to reconstruct this initial landscape. These include remote sensing, reverse modelling and the use of soil properties. However, each of these methods is only applicable on a certain spatial scale and comes with its own uncertainties. Here we present a new framework and preliminary results for reconstructing paleo-landscapes in an eroding setting, in which we combine reverse modelling, remote sensing, geochronology, historical data and present-day soil data. With the combination of these different approaches, different spatial scales can be covered and the uncertainty in the reconstructed landscape can be reduced. The study area is located in north-east Germany, where the landscape consists of a collection of small local depressions acting as closed catchments. This postglacial hummocky landscape is suitable for testing our new multi-method approach for several reasons: i) the closed catchments enable a full mass balance of erosion and deposition, due to the collection of colluvium in these depressions, ii) significant topography changes only started recently, with medieval deforestation and the recent intensification of agriculture, and iii) due to extensive previous research, a large dataset is readily available.
Bailey, Geoffrey N; Reynolds, Sally C; King, Geoffrey C P
2011-03-01
This paper examines the relationship between complex and tectonically active landscapes and patterns of human evolution. We show how active tectonics can produce dynamic landscapes with geomorphological and topographic features that may be critical to long-term patterns of hominin land use, but which are not typically addressed in landscape reconstructions based on existing geological and paleoenvironmental principles. We describe methods of representing topography at a range of scales using measures of roughness based on digital elevation data, and combine the resulting maps with satellite imagery and ground observations to reconstruct features of the wider landscape as they existed at the time of hominin occupation and activity. We apply these methods to sites in South Africa, where relatively stable topography facilitates reconstruction. We demonstrate the presence of previously unrecognized tectonic effects and their implications for the interpretation of hominin habitats and land use. In parts of the East African Rift, reconstruction is more difficult because of dramatic changes since the time of hominin occupation, while fossils are often found in places where activity has now almost ceased. However, we show that original, dynamic landscape features can be assessed by analogy with parts of the Rift that are currently active and indicate how this approach can complement other sources of information to add new insights and pose new questions for future investigation of hominin land use and habitats.
NASA Astrophysics Data System (ADS)
Mignone, A.
2014-08-01
High-order reconstruction schemes for the solution of hyperbolic conservation laws in orthogonal curvilinear coordinates are revised in the finite volume approach. The formulation employs a piecewise polynomial approximation to the zone-average values to reconstruct left and right interface states from within a computational zone to arbitrary order of accuracy by inverting a Vandermonde-like linear system of equations with spatially varying coefficients. The approach is general and can be used on uniform and non-uniform meshes, although explicit expressions are derived for polynomials from second to fifth degree in cylindrical and spherical geometries with uniform grid spacing. It is shown that, in regions of large curvature, the resulting expressions differ considerably from their Cartesian counterparts and that the lack of such corrections can severely degrade the accuracy of the solution close to the coordinate origin. Limiting techniques and monotonicity constraints are revised for conventional reconstruction schemes, namely, the piecewise linear method (PLM), the third-order weighted essentially non-oscillatory (WENO) scheme, and the piecewise parabolic method (PPM). The performance of the improved reconstruction schemes is investigated in a number of selected numerical benchmarks involving the solution of both scalar equations and systems of nonlinear equations (such as the equations of gas dynamics and magnetohydrodynamics) in cylindrical and spherical geometries in one and two dimensions. Results confirm that the proposed approach yields considerably smaller errors and higher convergence rates, and avoids spurious numerical effects at the symmetry axis.
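The Vandermonde-like reconstruction idea can be illustrated in the simplest Cartesian case: fit the polynomial whose cell averages match the given zone averages, then evaluate it at a zone interface. This sketch is a uniform 1-D analogue, not the curvilinear formulation of the paper:

```python
import numpy as np

def interface_state(u_bar, dx=1.0):
    """Right-interface value of the central cell, reconstructed from
    cell averages on a uniform 1-D grid.

    For p+1 averages, the unique degree-p polynomial whose cell averages
    match is found by solving a Vandermonde-like system, then evaluated
    at the right face of the middle cell.
    """
    k = len(u_bar)
    j0 = k // 2                                   # index of central cell
    # Cell i spans [(i - j0 - 1/2) dx, (i - j0 + 1/2) dx]
    M = np.empty((k, k))
    for i in range(k):
        xl = (i - j0 - 0.5) * dx
        xr = (i - j0 + 0.5) * dx
        for m in range(k):
            # average of x^m over cell i
            M[i, m] = (xr**(m + 1) - xl**(m + 1)) / ((m + 1) * dx)
    c = np.linalg.solve(M, np.asarray(u_bar, float))
    x_face = 0.5 * dx
    return float(sum(c[m] * x_face**m for m in range(k)))
```

In curvilinear coordinates the cell-average integrals acquire geometry-dependent weights, which is why the paper's coefficients vary spatially and differ from this Cartesian analogue near the origin.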
NASA Astrophysics Data System (ADS)
Ye, Jinzuo; Du, Yang; An, Yu; Chi, Chongwei; Tian, Jie
2014-12-01
Fluorescence molecular tomography (FMT) is a promising imaging technique in preclinical research, enabling three-dimensional location of the specific tumor position for small animal imaging. However, FMT presents a challenging inverse problem that is quite ill-posed and ill-conditioned. Thus, the reconstruction of FMT faces various challenges in its robustness and efficiency. We present an FMT reconstruction method based on nonmonotone spectral projected gradient pursuit (NSPGP) with l1-norm optimization. At each iteration, a spectral gradient-projection method approximately minimizes a least-squares problem with an explicit one-norm constraint. A nonmonotone line search strategy is utilized to get the appropriate updating direction, which guarantees global convergence. Additionally, the Barzilai-Borwein step length is applied to build the optimal step length, further improving the convergence speed of the proposed method. Several numerical simulation studies, including multisource cases as well as comparative analyses, have been performed to evaluate the performance of the proposed method. The results indicate that the proposed NSPGP method is able to ensure the accuracy, robustness, and efficiency of FMT reconstruction. Furthermore, an in vivo experiment based on a heterogeneous mouse model was conducted, and the results demonstrated that the proposed method held the potential for practical applications of FMT.
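The explicit one-norm constraint in each NSPGP iteration requires a Euclidean projection onto the l1 ball. A standard sorting-based projection (Duchi et al.'s algorithm, shown here as a generic building block rather than the authors' code) can be written as:

```python
import numpy as np

def project_l1_ball(v, tau):
    """Euclidean projection of v onto the l1 ball {x : ||x||_1 <= tau}.

    This is the projection step used inside spectral projected-gradient
    solvers for one-norm constrained least-squares problems.
    """
    v = np.asarray(v, float)
    if np.abs(v).sum() <= tau:
        return v.copy()                          # already feasible
    u = np.sort(np.abs(v))[::-1]                 # sorted magnitudes
    css = np.cumsum(u)
    # largest index where the soft-threshold level is still positive
    rho = np.nonzero(u - (css - tau) / np.arange(1, len(u) + 1) > 0)[0][-1]
    theta = (css[rho] - tau) / (rho + 1.0)       # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)
```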
An airborne acoustic method to reconstruct a dynamically rough flow surface.
Krynkin, Anton; Horoshenkov, Kirill V; Van Renterghem, Timothy
2016-09-01
Currently, there is no airborne in situ method to reconstruct with high fidelity the instantaneous elevation of a dynamically rough surface of a turbulent flow. This work proposes a holographic method that reconstructs the elevation of a one-dimensional rough water surface from airborne acoustic pressure data. This method can be implemented practically using an array of microphones deployed over a dynamically rough surface or using a single microphone which is traversed above the surface at a speed that is much higher than the phase velocity of the roughness pattern. In this work, the theory is validated using synthetic data calculated with the Kirchhoff approximation and a finite difference time domain method over a number of measured surface roughness patterns. The proposed method is able to reconstruct the surface elevation with a sub-millimeter accuracy and over a representatively large area of the surface. Since it has been previously shown that the surface roughness pattern reflects accurately the underlying hydraulic processes in open channel flow [e.g., Horoshenkov, Nichols, Tait, and Maximov, J. Geophys. Res. 118(3), 1864-1876 (2013)], the proposed method paves the way for the development of non-invasive instrumentation for flow mapping and characterization that are based on the acoustic holography principle.
Xiaodong Liu; Lijun Xuan; Hong Luo; Yidong Xia
2001-01-01
A reconstructed discontinuous Galerkin (rDG(P1P2)) method, originally introduced for the compressible Euler equations, is developed for the solution of the compressible Navier-Stokes equations on 3D hybrid grids. In this method, a piecewise quadratic polynomial solution is obtained from the underlying piecewise linear DG solution using a hierarchical Weighted Essentially Non-Oscillatory (WENO) reconstruction. The reconstructed quadratic polynomial solution is then used for the computation of the inviscid fluxes and the viscous fluxes using the second formulation of Bassi and Rebay (Bassi-Rebay II). The developed rDG(P1P2) method is used to compute a variety of flow problems to assess its accuracy, efficiency, and robustness. The numerical results demonstrate that the rDG(P1P2) method is able to achieve the designed third order of accuracy at a cost slightly higher than that of its underlying second-order DG method, outperform the third-order DG method in terms of both computing costs and storage requirements, and obtain reliable and accurate solutions for the large eddy simulation (LES) and direct numerical simulation (DNS) of compressible turbulent flows.
Listening to the noise: random fluctuations reveal gene network parameters.
Munsky, Brian; Trinh, Brooke; Khammash, Mustafa
2009-01-01
The cellular environment is abuzz with noise originating from the inherent random motion of reacting molecules in the living cell. In this noisy environment, clonal cell populations show cell-to-cell variability that can manifest significant phenotypic differences. Noise-induced stochastic fluctuations in cellular constituents can be measured and their statistics quantified. We show that these random fluctuations carry within them valuable information about the underlying genetic network. Far from being a nuisance, the ever-present cellular noise acts as a rich source of excitation that, when processed through a gene network, carries its distinctive fingerprint that encodes a wealth of information about that network. We show that in some cases the analysis of these random fluctuations enables the full identification of network parameters, including those that may otherwise be difficult to measure. This establishes a potentially powerful approach for the identification of gene networks and offers a new window into the workings of these networks.
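As a toy illustration of inferring parameters from fluctuations, consider a birth-death model of gene expression: an exact Gillespie simulation of its noise lets one recover the ratio of production to degradation rates from the stationary mean (full identification, as in the paper, would also exploit temporal correlations and higher moments). The model and rates below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def gillespie_birth_death(k, gamma, t_end, x0=0):
    """Exact stochastic simulation of birth-death gene expression:
    production at rate k, degradation at rate gamma * x.
    Returns event times and copy numbers."""
    t, x = 0.0, x0
    times, states = [0.0], [x0]
    while t < t_end:
        a1, a2 = k, gamma * x        # propensities
        a0 = a1 + a2
        t += rng.exponential(1.0 / a0)
        x += 1 if rng.random() < a1 / a0 else -1
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)

def time_averaged_mean(times, states):
    """Occupancy-weighted mean copy number; at stationarity this
    estimates k/gamma."""
    dwell = np.diff(times)
    return float(np.sum(states[:-1] * dwell) / np.sum(dwell))
```

With a known degradation rate, the stationary mean of the simulated fluctuations pins down the production rate; this is the simplest instance of the "noise as excitation" idea in the abstract.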
Accident or homicide--virtual crime scene reconstruction using 3D methods.
Buck, Ursula; Naether, Silvio; Räss, Beat; Jackowski, Christian; Thali, Michael J
2013-02-10
The analysis and reconstruction of forensically relevant events, such as traffic accidents, criminal assaults and homicides, are based on external and internal morphological findings of the injured or deceased person. For this approach, high-tech methods are gaining increasing importance in forensic investigations. The non-contact optical 3D digitising system GOM ATOS is applied as a suitable tool for whole-body surface and wound documentation and analysis in order to identify injury-causing instruments and to reconstruct the course of events. In addition to the surface documentation, cross-sectional imaging methods deliver internal medical findings of the body. These 3D data are fused into a whole-body model of the deceased. In addition to the findings on the bodies, the injury-inflicting instruments and the incident scene are documented in 3D. The 3D data of the incident scene, generated by 3D laser scanning and photogrammetry, are also included in the reconstruction. Two cases illustrate the methods. In the first case, a man was shot in his bedroom, and the main question was whether the offender shot the man intentionally or accidentally, as he declared. In the second case, a woman was hit by a car driving backwards into a garage. It was unclear whether the driver drove backwards once or twice, which would indicate that he willingly injured and killed the woman. With this work, we demonstrate how 3D documentation, data merging and animation make it possible to answer reconstructive questions regarding the dynamic development of patterned injuries, and how this leads to a reconstruction of the course of events based on real data.
Lichenstein, Sarah D.; Bishop, James H.; Verstynen, Timothy D.; Yeh, Fang-Cheng
2016-01-01
Purpose: Diffusion MRI provides a non-invasive way of estimating structural connectivity in the brain. Many studies have used diffusion phantoms as benchmarks to assess the performance of different tractography reconstruction algorithms and assumed that the results can be applied to in vivo studies. Here we examined whether quality metrics derived from a common, publicly available, diffusion phantom can reliably predict tractography performance in human white matter tissue. Materials and Methods: We compared estimates of fiber length and fiber crossing among a simple tensor model (diffusion tensor imaging), a more complicated model (ball-and-sticks) and model-free (diffusion spectrum imaging, generalized q-sampling imaging) reconstruction methods using a capillary phantom and in vivo human data (N = 14). Results: Our analysis showed that evaluation outcomes differ depending on whether they were obtained from phantom or human data. Specifically, the diffusion phantom favored a more complicated model over a simple tensor model or model-free methods for resolving crossing fibers. On the other hand, the human studies showed the opposite pattern of results, with the model-free methods being more advantageous than model-based methods or simple tensor models. This performance difference was consistent across several metrics, including estimating fiber length and resolving fiber crossings in established white matter pathways. Conclusions: These findings indicate that the construction of current capillary diffusion phantoms tends to favor complicated reconstruction models over a simple tensor model or model-free methods, whereas the in vivo data tend to produce the opposite results. This brings into question previous phantom-based evaluation approaches and suggests that a more realistic phantom or simulation is necessary to accurately predict the relative performance of different tractography reconstruction methods. PMID:27656122
NASA Astrophysics Data System (ADS)
Zhang, Yi; Zhang, Xiao-Dong; Wang, Wen-Xin; Yang, He-Run; Yang, Zheng-Cai; Hu, Bi-Tao
2009-01-01
In this paper, a two-dimensional readout micromegas detector with a polyethylene foil as a converter was simulated with the GEANT4 toolkit and GARFIELD for fast neutron detection. A new track reconstruction method based on time-coincidence technology was developed in the simulation to obtain the incident neutron position. The results showed that this reconstruction method achieved higher spatial resolution.
Zhang, Haiyan; Zhang, Liyi; Sun, Yunshan; Zhang, Jingyu
2015-01-01
Reducing the X-ray tube current is one of the widely used methods for decreasing the radiation dose. Unfortunately, the signal-to-noise ratio (SNR) of the projection data degrades simultaneously. To improve the quality of reconstructed images, a dictionary-learning-based penalized weighted least-squares (PWLS) approach is proposed for sinogram denoising. The weighted least-squares term accounts for the statistical characteristics of the noise, and the penalty models the sparsity of the sinogram based on dictionary learning. The CT image is then reconstructed from the denoised sinogram using the filtered back-projection (FBP) algorithm. The proposed method is particularly suitable for projection data with low SNR. Experimental results show that the proposed method can produce high-quality CT images even when the SNR of the projection data declines sharply.
An infrared image super-resolution reconstruction method based on compressive sensing
NASA Astrophysics Data System (ADS)
Mao, Yuxing; Wang, Yan; Zhou, Jintao; Jia, Haiwei
2016-05-01
Limited by the properties of infrared detectors and camera lenses, infrared images often lack detail and appear indistinct. Their spatial resolution needs to be improved to satisfy the requirements of practical applications. Based on compressive sensing (CS) theory, this thesis presents a single-image super-resolution reconstruction (SRR) method. By jointly adopting an image degradation model, a difference-operation-based sparse transformation, and the orthogonal matching pursuit (OMP) algorithm, the image SRR problem is transformed into a sparse signal reconstruction problem in CS theory. In our work, the sparse transformation matrix is obtained through a difference operation on the image, and the measurement matrix is derived analytically from the imaging principle of the infrared camera. Therefore, the time consumption can be decreased compared with a redundant dictionary obtained by sample training, such as K-SVD. The experimental results show that our method can achieve favorable performance and good stability with low algorithmic complexity.
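The OMP recovery step mentioned above is standard in compressive sensing: greedily select the dictionary column most correlated with the residual, then re-fit by least squares on the selected support. A minimal, generic Python sketch (not the authors' implementation; unit-norm columns are assumed):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x
    from measurements y = A x (columns of A assumed normalised)."""
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares refit on the current support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - A @ x
    return x
```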
A gene network engineering platform for lactic acid bacteria.
Kong, Wentao; Kapuganti, Venkata S; Lu, Ting
2016-02-29
Recent developments in synthetic biology have positioned lactic acid bacteria (LAB) as a major class of cellular chassis for applications. To achieve the full potential of LAB, one fundamental prerequisite is the capacity for rapid engineering of complex gene networks, such as natural biosynthetic pathways and multicomponent synthetic circuits, into which cellular functions are encoded. Here, we present a synthetic biology platform for rapid construction and optimization of large-scale gene networks in LAB. The platform involves a copy-controlled shuttle for hosting target networks and two associated strategies that enable efficient genetic editing and phenotypic validation. By using a nisin biosynthesis pathway and its variants as examples, we demonstrated multiplex, continuous editing of small DNA parts, such as ribosome-binding sites, as well as efficient manipulation of large building blocks such as genes and operons. To showcase the platform, we applied it to expand the phenotypic diversity of the nisin pathway by quickly generating a library of 63 pathway variants. We further demonstrated its utility by altering the regulatory topology of the nisin pathway for constitutive bacteriocin biosynthesis. This work demonstrates the feasibility of rapid and advanced engineering of gene networks in LAB, fostering their applications in biomedicine and other areas.
NASA Astrophysics Data System (ADS)
Li, Gang; Zou, Jiangwei; Xu, Shiyou; Tian, Biao; Chen, Zengping
2014-10-01
In this paper, the effect of orbital motion on the trajectories of scattering centers is analyzed and introduced as a constraint in scattering-center association. A screening method for feature points is presented to analyze the false points in the reconstructed result and the incorrect associations that lead to them. A loop iteration between the 3D reconstruction and the association result further improves the precision of the final reconstruction. Simulation data show the validity of the algorithm.
Karnowski, Thomas Paul; Tobin Jr, Kenneth William; Chaum, Edward; Muthusamy Govindasamy, Vijaya Priya
2009-09-01
In this work we report on a method for lesion segmentation based on the morphological reconstruction methods of Sbeh et al. We adapt the method to include segmentation of dark lesions given a vasculature segmentation. The segmentation is performed at a variety of scales determined using ground-truth data. Since the method tends to over-segment imagery, ground-truth data were used to create post-processing filters to separate nuisance blobs from true lesions. A sensitivity and specificity of 90% for the classification of blobs into nuisance and actual lesions was achieved on two data sets of 86 and 1296 images.
Karnowski, Thomas P; Govindasamy, V; Tobin, Kenneth W; Chaum, Edward; Abramoff, M D
2008-01-01
In this work we report on a method for lesion segmentation based on the morphological reconstruction methods of Sbeh et al. We adapt the method to include segmentation of dark lesions with a given vasculature segmentation. The segmentation is performed at a variety of scales determined using ground-truth data. Since the method tends to over-segment imagery, ground-truth data were used to create post-processing filters that separate nuisance blobs from true lesions. A sensitivity and specificity of 90% for the classification of blobs into nuisance and actual lesions were achieved on two data sets of 86 and 1296 images.
NASA Astrophysics Data System (ADS)
Preza, Chrysanthe; Miller, Michael I.; Conchello, Jose-Angel
1993-07-01
We have shown that the linear least-squares (LLS) estimate of the intensities of a 3-D object obtained from a set of optical sections is unstable due to the inversion of small and zero-valued eigenvalues of the point-spread function (PSF) operator. The LLS solution was regularized by constraining it to lie in a subspace spanned by the eigenvectors corresponding to a selected number of the largest eigenvalues. In this paper we extend the regularized LLS solution to a maximum a posteriori (MAP) solution induced by a prior formed from a 'Good's like' smoothness penalty. This approach also yields a regularized linear estimator which reduces noise as well as edge artifacts in the reconstruction. The advantage of the linear MAP (LMAP) estimate over the current regularized LLS (RLLS) is its ability to regularize the inverse problem by smoothly penalizing components in the image associated with small eigenvalues. Computer simulations were performed using a theoretical PSF and a simple phantom to compare the two regularization techniques. It is shown that the reconstructions using the smoothness prior give superior variance and bias results compared with the RLLS reconstructions. Encouraging reconstructions obtained with the LMAP method from real microscopical images of a 10-micrometer fluorescent bead and a four-cell Volvox embryo are shown.
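The general idea of the LMAP estimator (a closed-form regularized linear solve in which small eigenvalues of the operator are smoothly penalized rather than truncated) can be sketched as follows. This is an illustrative 1-D sketch with an invented Gaussian "blur" operator and a generic second-difference smoothness penalty, not the authors' PSF operator or their Good's-roughness prior:

```python
import numpy as np

def lmap_estimate(H, y, lam):
    """Linear MAP estimate with a discrete-Laplacian smoothness prior.

    Solves min_x ||H x - y||^2 + lam * ||L x||^2, with closed form
    x = (H^T H + lam L^T L)^{-1} H^T y.  Components of H with small
    eigenvalues are smoothly damped instead of being truncated outright,
    as in the regularized-subspace (RLLS-style) approach.
    """
    n = H.shape[1]
    # 1-D second-difference operator as a simple smoothness penalty
    L = np.zeros((n - 2, n))
    for i in range(n - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]
    A = H.T @ H + lam * (L.T @ L)
    return np.linalg.solve(A, H.T @ y)

# Toy ill-conditioned "blurring" operator and a smooth phantom
rng = np.random.default_rng(0)
n = 50
x_true = np.sin(np.linspace(0, np.pi, n))
idx = np.arange(n)
H = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 3.0) ** 2)
y = H @ x_true + 0.01 * rng.standard_normal(n)

x_hat = lmap_estimate(H, y, lam=1e-2)
```

Direct inversion of `H` would amplify the noise through its near-zero eigenvalues; the penalized solve stays stable.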
NASA Astrophysics Data System (ADS)
Bernstein, Ally Leigh; Dhanantwari, Amar; Jurcova, Martina; Cheheltani, Rabee; Naha, Pratap Chandra; Ivanc, Thomas; Shefer, Efrat; Cormode, David Peter
2016-05-01
Computed tomography is a widely used medical imaging technique that has high spatial and temporal resolution. Its weakness is its low sensitivity towards contrast media. Iterative reconstruction techniques (ITER) have recently become available, which provide reduced image noise compared with traditional filtered back-projection methods (FBP); this may allow the sensitivity of CT to be improved, but the effect has not been studied in detail. We scanned phantoms containing either an iodine contrast agent or gold nanoparticles. We used a range of tube voltages and currents. We performed reconstruction with FBP, ITER and a novel, iterative, model-based reconstruction (IMR) algorithm. We found that noise decreased in an algorithm-dependent manner (FBP > ITER > IMR) for every scan and that no differences were observed in attenuation rates of the agents. The contrast-to-noise ratio (CNR) of iodine was highest at 80 kV, whilst the CNR for gold was highest at 140 kV. The CNR of IMR images was almost tenfold higher than that of FBP images. Similar trends were found in dual energy images formed using these algorithms. In conclusion, IMR-based reconstruction techniques will allow contrast agents to be detected with greater sensitivity, and may allow lower contrast agent doses to be used.
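The contrast-to-noise ratio the abstract reports is a simple ratio of the attenuation difference to the image noise. A minimal sketch with invented Hounsfield-unit numbers (not the paper's measurements) shows how a lower-noise reconstruction raises CNR at unchanged attenuation:

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-noise ratio: (mean ROI - mean background) / background noise."""
    return (np.mean(roi) - np.mean(background)) / np.std(background)

rng = np.random.default_rng(1)
# Same attenuation values (140 vs 40 HU), different image noise levels:
bg_fbp  = rng.normal(40.0, 10.0, 10_000)   # noisy FBP-like background
roi_fbp = rng.normal(140.0, 10.0, 2_500)   # contrast agent ROI, FBP-like noise
bg_imr  = rng.normal(40.0, 1.0, 10_000)    # lower-noise IMR-like background
roi_imr = rng.normal(140.0, 1.0, 2_500)    # same attenuation, IMR-like noise
```

With a tenfold noise reduction and unchanged attenuation, the CNR increases roughly tenfold, mirroring the trend reported for IMR versus FBP.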
Bernstein, Ally Leigh; Dhanantwari, Amar; Jurcova, Martina; Cheheltani, Rabee; Naha, Pratap Chandra; Ivanc, Thomas; Shefer, Efrat; Cormode, David Peter
2016-01-01
Computed tomography is a widely used medical imaging technique that has high spatial and temporal resolution. Its weakness is its low sensitivity towards contrast media. Iterative reconstruction techniques (ITER) have recently become available, which provide reduced image noise compared with traditional filtered back-projection methods (FBP); this may allow the sensitivity of CT to be improved, but the effect has not been studied in detail. We scanned phantoms containing either an iodine contrast agent or gold nanoparticles. We used a range of tube voltages and currents. We performed reconstruction with FBP, ITER and a novel, iterative, model-based reconstruction (IMR) algorithm. We found that noise decreased in an algorithm-dependent manner (FBP > ITER > IMR) for every scan and that no differences were observed in attenuation rates of the agents. The contrast-to-noise ratio (CNR) of iodine was highest at 80 kV, whilst the CNR for gold was highest at 140 kV. The CNR of IMR images was almost tenfold higher than that of FBP images. Similar trends were found in dual energy images formed using these algorithms. In conclusion, IMR-based reconstruction techniques will allow contrast agents to be detected with greater sensitivity, and may allow lower contrast agent doses to be used. PMID:27185492
Does thorax EIT image analysis depend on the image reconstruction method?
NASA Astrophysics Data System (ADS)
Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut
2013-04-01
Different methods have been proposed to analyze the resulting images of electrical impedance tomography (EIT) measurements during ventilation. The aim of our study was to examine whether analysis methods based on back-projection deliver the same results when applied to images based on other reconstruction algorithms. Seven mechanically ventilated patients with ARDS were examined by EIT. The thorax contours were determined from routine CT images. EIT raw data were reconstructed offline with (1) filtered back-projection with a circular forward model (BPC); (2) the GREIT reconstruction method with a circular forward model (GREITC); and (3) GREIT with individual thorax geometry (GREITT). Three parameters were calculated on the resulting images: linearity, global ventilation distribution and regional ventilation distribution. The results of the linearity test are 5.03±2.45, 4.66±2.25 and 5.32±2.30 for BPC, GREITC and GREITT, respectively (median ± interquartile range). The differences among the three methods are not significant (p = 0.93, Kruskal-Wallis test). The proportions of ventilation in the right lung are 0.58±0.17, 0.59±0.20 and 0.59±0.25 for BPC, GREITC and GREITT, respectively (p = 0.98). The differences of the GI index based on different reconstruction methods (0.53±0.16, 0.51±0.25 and 0.54±0.16 for BPC, GREITC and GREITT, respectively) are also not significant (p = 0.93). We conclude that the parameters developed for images generated with GREITT are comparable with those for filtered back-projection and GREITC.
The method for the reconstruction of complex images of specimens using backscattered electrons.
Kaczmarek, Danuta; Domaradzki, Jaroslaw
2002-01-01
The backscattered electron signal (BSE) is widely used for investigation of specimen surfaces in a scanning electron microscope (SEM). The development of multiple detector systems for BSE signal detection and the methods of digital processing of these signals have allowed for reconstruction of the third dimension on the basis of the two-dimensional (2-D) SEM image. A technique for simultaneous mapping of material composition (COMPO mode) and reconstruction of surface topography (TOPO mode) has also been proposed. This method is based on the measurements of BSE currents sensed by four semiconductor detectors versus the inclination angle of surface. To improve the separation of topographic and material contrasts in SEM, a correction of the TOPO and COMPO modes (resulting from a theoretical description of the system: electron beam, specimen, and detector) was applied. The proposed method can be used for a correct reconstruction of the surface image when the surface slope is <60 degrees. The measuring limit of the slope was closely connected with the detector setup. Next, the digital simulation of the colors was performed (after application of the method of linearization of BSE characteristic versus atomic number). This procedure to increase the SEM resolution for the BSE signal by use of digital image processing allows for a better distinction between the two elements with high atomic numbers.
On reconstruction of acoustic pressure fields using the Helmholtz equation least squares method
Wu
2000-05-01
This paper presents analyses and implementation of the reconstruction of acoustic pressure fields radiated from a general, three-dimensional complex vibrating structure using the Helmholtz equation least-squares (HELS) method. The structure under consideration emulates a full-size four-cylinder engine. To simulate sound radiation from a vibrating structure, harmonic excitations are assumed to act on arbitrarily selected surfaces. The resulting vibration responses are solved by the commercial FEM (finite element method) software I-DEAS. Once the normal component of the surface velocity distribution is determined, the surface acoustic pressures are calculated using standard boundary element method (BEM) codes. The radiated acoustic pressures over several planar surfaces at certain distances from the source are calculated by the Helmholtz integral formulation. These field pressures are taken as the input to the HELS formulation to reconstruct acoustic pressures on the entire source surface, as well as in the field. The reconstructed acoustic pressures thus obtained are then compared with benchmark values. Numerical results demonstrate that good agreements can be obtained with relatively few expansion functions. The HELS method is shown to be very effective in the low-to-mid frequency regime, and can potentially become a powerful noise diagnostic tool.
The Nagoya cosmic-ray muon spectrometer 3, part 4: Track reconstruction method
NASA Technical Reports Server (NTRS)
Shibata, S.; Kamiya, Y.; Iijima, K.; Iida, S.
1985-01-01
One of the greatest problems in measuring particle trajectories with an optical or visual detector system is the reconstruction of trajectories in real space from their recorded images. In the Nagoya cosmic-ray muon spectrometer, muon tracks are detected by wide-gap spark chambers and their images are recorded on photographic film through an optical system of 10 mirrors and two cameras. For the spatial reconstruction, 42 parameters of the optical system should be known to determine the configuration of this system. It is almost impossible to measure this many parameters directly with the usual techniques. In order to solve this problem, the inverse transformation method was applied. In this method, all the optical parameters are determined from the locations of fiducial marks in real space and the locations of their images on the photographic film by nonlinear least-squares fitting.
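The calibration step described here, determining system parameters from known fiducial marks and their recorded images via nonlinear least squares, can be sketched with a generic Gauss-Newton fitter. The two-parameter toy "optical system" below (a scale plus a cubic distortion term) is an invented stand-in for the spectrometer's 42-parameter model:

```python
import numpy as np

def gauss_newton(residual, p0, n_iter=50):
    """Generic Gauss-Newton least-squares solver with a forward-difference Jacobian."""
    p = np.asarray(p0, dtype=float)
    eps = 1e-6
    for _ in range(n_iter):
        r = residual(p)
        J = np.empty((r.size, p.size))
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = eps
            J[:, j] = (residual(p + dp) - r) / eps
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        p = p + step
    return p

# Fiducial marks in real space and their (synthetic) image positions
x = np.linspace(-1.0, 1.0, 21)
true_p = np.array([2.0, 0.3])                  # scale, cubic distortion
images = true_p[0] * x + true_p[1] * x ** 3

fit = gauss_newton(lambda p: p[0] * x + p[1] * x ** 3 - images, p0=[1.0, 0.0])
```

Once the system parameters are fitted, the same model can be inverted to map recorded track images back into real space.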
NASA Astrophysics Data System (ADS)
Bouma, Henri; van der Mark, Wannes; Eendebak, Pieter T.; Landsmeer, Sander H.; van Eekeren, Adam W. M.; ter Haar, Frank B.; Wieringa, F. Pieter; van Basten, Jean-Paul
2012-06-01
Compared to open surgery, minimal invasive surgery offers reduced trauma and faster recovery. However, lack of direct view limits space perception. Stereo-endoscopy improves depth perception, but is still restricted to the direct endoscopic field-of-view. We describe a novel technology that reconstructs 3D-panoramas from endoscopic video streams providing a much wider cumulative overview. The method is compatible with any endoscope. We demonstrate that it is possible to generate photorealistic 3D-environments from mono- and stereoscopic endoscopy. The resulting 3D-reconstructions can be directly applied in simulators and e-learning. Extended to real-time processing, the method looks promising for telesurgery or other remote vision-guided tasks.
NASA Astrophysics Data System (ADS)
Feng, Min-nan; Wang, Yu-cong; Wang, Hao; Liu, Guo-quan; Xue, Wei-hua
2017-03-01
Using a total of 297 segmented sections, we reconstructed the three-dimensional (3D) structure of pure iron and obtained the largest dataset of 16,254 complete 3D grains reported to date. The mean values of equivalent sphere radius and face number of pure iron were observed to be consistent with those of Monte Carlo simulated grains, phase-field simulated grains, Ti-alloy grains, and Ni-based superalloy grains. In this work, by finding a balance between automatic methods and manual refinement, we developed an interactive segmentation method to segment serial sections accurately in the reconstruction of the 3D microstructure; this approach saves time as well as substantially eliminates errors. The segmentation process comprises four operations: image preprocessing, breakpoint detection based on mathematical morphology analysis, optimized automatic connection of the breakpoints, and manual refinement by artificial evaluation.
Ogawa, Takahiro; Haseyama, Miki
2013-03-01
A missing-texture reconstruction method based on an error reduction (ER) algorithm, including a novel estimation scheme for Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring errors converged in the ER algorithm, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. The Fourier transform magnitude of the target patch is then estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitudes and phases to reconstruct the missing areas.
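The ER core used here, alternating a Fourier-magnitude constraint with a spatial-domain constraint, can be sketched generically. This is a standard phase-retrieval sketch under an assumed known support and non-negativity, not the authors' patch-selection or magnitude-estimation scheme, and the 32x32 ramp patch is an invented example:

```python
import numpy as np

def error_reduction(magnitude, support, n_iter=200, seed=0):
    """ER iteration: alternate the Fourier-magnitude and spatial-support constraints."""
    rng = np.random.default_rng(seed)
    g = rng.random(magnitude.shape) * support
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = magnitude * np.exp(1j * np.angle(G))   # impose known magnitudes, keep phase
        g = np.real(np.fft.ifft2(G))
        g = np.clip(g, 0, None) * support          # impose non-negativity and support
    return g

# Known test patch: a non-negative ramp "texture" on an 8x8 support inside 32x32
true = np.zeros((32, 32))
true[4:12, 4:12] = np.add.outer(np.arange(8.0), np.arange(8.0)) + 1.0
support = (true > 0).astype(float)
mag = np.abs(np.fft.fft2(true))

recovered = error_reduction(mag, support)
```

Each iteration projects onto the set of images with the given Fourier magnitudes and then onto the set satisfying the spatial constraints, so the residual error is non-increasing.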
A 3D terrain reconstruction method of stereo vision based quadruped robot navigation system
NASA Astrophysics Data System (ADS)
Ge, Zhuo; Zhu, Ying; Liang, Guanhao
2017-01-01
To provide 3D environment information for a quadruped robot autonomous navigation system walking through rough terrain, a novel 3D terrain reconstruction method based on stereo vision is presented. To address the problems that images collected by stereo sensors contain large regions with similar grayscale and that image matching has poor real-time performance, the watershed algorithm and the fuzzy c-means clustering algorithm are combined for contour extraction. To reduce mismatches, a dual constraint combining region matching and pixel matching is established for matching optimization. Using the stereo-matched edge pixel pairs, the 3D coordinates are estimated according to the binocular stereo vision imaging model. Experimental results show that the proposed method yields a high stereo matching ratio and reconstructs 3D scenes quickly and efficiently.
NASA Technical Reports Server (NTRS)
Striepe, Scott A.; Blanchard, Robert C.; Kirsch, Michael F.; Fowler, Wallace T.
2007-01-01
On January 14, 2005, ESA's Huygens probe separated from NASA's Cassini spacecraft, entered the Titan atmosphere and landed on its surface. As part of NASA Engineering Safety Center Independent Technical Assessment of the Huygens entry, descent, and landing, and an agreement with ESA, NASA provided results of all EDL analyses and associated findings to the Huygens project team prior to probe entry. In return, NASA was provided the flight data from the probe so that trajectory reconstruction could be done and simulation models assessed. Trajectory reconstruction of the Huygens entry probe at Titan was accomplished using two independent approaches: a traditional method and a POST2-based method. Results from both approaches are discussed in this paper.
NASA Astrophysics Data System (ADS)
Guo, Wei; Jia, Kebin; Tian, Jie; Han, Dong; Liu, Xueyan; Wu, Ping; Feng, Jinchao; Yang, Xin
2012-03-01
Among many molecular imaging modalities, bioluminescence tomography (BLT) is an important optical molecular imaging modality. Due to its unique advantages in specificity, sensitivity, cost-effectiveness and low background noise, BLT is widely studied for live small animal imaging. Since only the photon distribution over the surface is measurable and photon propagation within biological tissue is highly diffusive, BLT is often an ill-posed problem and may admit multiple solutions and aberrant reconstructions in the presence of measurement noise and optical parameter mismatches. For many practical BLT applications, such as early detection of tumors, the volumes of the light sources are very small compared with the whole body. Therefore, L1-norm sparsity regularization has been used to take advantage of this sparsity prior and alleviate the ill-posedness of the problem. The iterative shrinkage (IST) algorithm is an important research achievement in the field of compressed sensing and is widely applied in sparse signal reconstruction. However, the convergence rate of the IST algorithm depends heavily on the linear operator; when the problem is ill-posed, it becomes very slow. In this paper, we present a sparsity regularization reconstruction method for BLT based on a two-step iterated shrinkage approach. By employing a two-step iterative reweighted shrinkage (IRS) strategy to improve IST, the proposed method shows a faster convergence rate and better adaptability for BLT. Simulation experiments with a mouse atlas were conducted to evaluate the performance of the proposed method. Compared with IST, the proposed method can obtain a stable and comparable reconstruction solution with fewer iterations.
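The baseline IST iteration the authors build on (gradient step on the data-fidelity term followed by soft thresholding) can be sketched for a generic sparse recovery problem. The operator and data below are invented illustrations, not a BLT forward model, and this shows plain IST rather than the two-step variant:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    """Iterative shrinkage-thresholding for min (1/2)||Ax - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x - step * (A.T @ (A @ x - y)), step * lam)
    return x

# Invented underdetermined system with a 3-sparse source
rng = np.random.default_rng(2)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [3.0, -2.0, 4.0]
y = A @ x_true

x_hat = ista(A, y, lam=0.02)
```

Two-step schemes such as IRS accelerate this iteration by combining the two most recent iterates, which is what gives the reported faster convergence on ill-posed operators.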
An Iterative Method for Improving the Quality of Reconstruction of a Three-Dimensional Surface
Vishnyakov, G.N.; Levin, G.G.; Sukhorukov, K.A.
2005-12-15
A complex image with constraints imposed on the amplitude and phase image components is processed using the Gerchberg iterative algorithm for the first time. The use of the Gerchberg iterative algorithm makes it possible to improve the quality of a three-dimensional surface profile reconstructed by the previously proposed method that is based on the multiangle projection of fringes and the joint processing of the obtained images by Fourier synthesis.
NASA Astrophysics Data System (ADS)
Jang, Jaeseong; Ahn, Chi Young; Jeon, Kiwan; Choi, Jung-il; Lee, Changhoon; Seo, Jin Keun
2015-03-01
A reconstruction method is proposed here to quantify the distribution of blood flow velocity fields inside the left ventricle from color Doppler echocardiography measurements. From the 3D incompressible Navier-Stokes equations, a 2D incompressible Navier-Stokes equation with a mass source term is derived to utilize the measurable color flow ultrasound data in a plane along with the moving boundary condition. The proposed model reflects out-of-plane blood flows on the imaging plane through the mass source term. To demonstrate the feasibility of the proposed method, we performed numerical simulations of the forward problem and numerical analysis of the reconstruction method. First, we constructed a 3D moving LV region having a specific stroke volume. To obtain synthetic intra-ventricular flows, we performed a numerical simulation of the forward problem of the Navier-Stokes equation inside the 3D moving LV, computed 3D intra-ventricular velocity fields as a solution of the forward problem, projected the 3D velocity fields onto the imaging plane, and took the inner product of the 2D velocity fields on the imaging plane with the scanline directional velocity fields to obtain synthetic scanline directional projected velocities at each position. The proposed method utilized the 2D synthetic projected velocity data for reconstructing LV blood flow. By computing the difference between the synthetic and reconstructed flow fields, we obtained averaged point-wise errors of 0.06 m/s and 0.02 m/s for the u- and v-components, respectively.
Four-dimensional cone beam CT reconstruction and enhancement using a temporal nonlocal means method
Jia Xun; Tian Zhen; Lou Yifei; Sonke, Jan-Jakob; Jiang, Steve B.
2012-09-15
Purpose: Four-dimensional cone beam computed tomography (4D-CBCT) has been developed to provide respiratory phase-resolved volumetric imaging in image guided radiation therapy. Conventionally, it is reconstructed by first sorting the x-ray projections into multiple respiratory phase bins according to a breathing signal extracted either from the projection images or some external surrogates, and then reconstructing a 3D CBCT image in each phase bin independently using FDK algorithm. This method requires adequate number of projections for each phase, which can be achieved using a low gantry rotation or multiple gantry rotations. Inadequate number of projections in each phase bin results in low quality 4D-CBCT images with obvious streaking artifacts. 4D-CBCT images at different breathing phases share a lot of redundant information, because they represent the same anatomy captured at slightly different temporal points. Taking this redundancy along the temporal dimension into account can in principle facilitate the reconstruction in the situation of inadequate number of projection images. In this work, the authors propose two novel 4D-CBCT algorithms: an iterative reconstruction algorithm and an enhancement algorithm, utilizing a temporal nonlocal means (TNLM) method. Methods: The authors define a TNLM energy term for a given set of 4D-CBCT images. Minimization of this term favors those 4D-CBCT images such that any anatomical features at one spatial point at one phase can be found in a nearby spatial point at neighboring phases. 4D-CBCT reconstruction is achieved by minimizing a total energy containing a data fidelity term and the TNLM energy term. As for the image enhancement, 4D-CBCT images generated by the FDK algorithm are enhanced by minimizing the TNLM function while keeping the enhanced images close to the FDK results. A forward-backward splitting algorithm and a Gauss-Jacobi iteration method are employed to solve the problems. The algorithms implementation on
Four-dimensional cone beam CT reconstruction and enhancement using a temporal nonlocal means method
Jia, Xun; Tian, Zhen; Lou, Yifei; Sonke, Jan-Jakob; Jiang, Steve B.
2012-01-01
Purpose: Four-dimensional cone beam computed tomography (4D-CBCT) has been developed to provide respiratory phase-resolved volumetric imaging in image guided radiation therapy. Conventionally, it is reconstructed by first sorting the x-ray projections into multiple respiratory phase bins according to a breathing signal extracted either from the projection images or some external surrogates, and then reconstructing a 3D CBCT image in each phase bin independently using FDK algorithm. This method requires adequate number of projections for each phase, which can be achieved using a low gantry rotation or multiple gantry rotations. Inadequate number of projections in each phase bin results in low quality 4D-CBCT images with obvious streaking artifacts. 4D-CBCT images at different breathing phases share a lot of redundant information, because they represent the same anatomy captured at slightly different temporal points. Taking this redundancy along the temporal dimension into account can in principle facilitate the reconstruction in the situation of inadequate number of projection images. In this work, the authors propose two novel 4D-CBCT algorithms: an iterative reconstruction algorithm and an enhancement algorithm, utilizing a temporal nonlocal means (TNLM) method. Methods: The authors define a TNLM energy term for a given set of 4D-CBCT images. Minimization of this term favors those 4D-CBCT images such that any anatomical features at one spatial point at one phase can be found in a nearby spatial point at neighboring phases. 4D-CBCT reconstruction is achieved by minimizing a total energy containing a data fidelity term and the TNLM energy term. As for the image enhancement, 4D-CBCT images generated by the FDK algorithm are enhanced by minimizing the TNLM function while keeping the enhanced images close to the FDK results. A forward–backward splitting algorithm and a Gauss–Jacobi iteration method are employed to solve the problems. The algorithms implementation
Ray, J.; Lee, J.; Yadav, V.; ...
2015-04-29
Atmospheric inversions are frequently used to estimate fluxes of atmospheric greenhouse gases (e.g., biospheric CO2 flux fields) at Earth's surface. These inversions typically assume that flux departures from a prior model are spatially smoothly varying and model them using a multivariate Gaussian. When the field being estimated is spatially rough, multivariate Gaussian models are difficult to construct and a wavelet-based field model may be more suitable. Unfortunately, such models are very high dimensional and are most conveniently used when the estimation method can simultaneously perform data-driven model simplification (removal of model parameters that cannot be reliably estimated) and fitting. Such sparse reconstruction methods are typically not used in atmospheric inversions. In this work, we devise a sparse reconstruction method, and illustrate it in an idealized atmospheric inversion problem for the estimation of fossil fuel CO2 (ffCO2) emissions in the lower 48 states of the USA. Our new method is based on stagewise orthogonal matching pursuit (StOMP), a method used to reconstruct compressively sensed images. Our adaptations bestow three properties to the sparse reconstruction procedure which are useful in atmospheric inversions. We have modified StOMP to incorporate prior information on the emission field being estimated and to enforce non-negativity on the estimated field. Finally, though based on wavelets, our method allows for the estimation of fields in non-rectangular geometries, e.g., emission fields inside geographical and political boundaries. Our idealized inversions use a recently developed multi-resolution (i.e., wavelet-based) random field model developed for ffCO2 emissions and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also
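Two of the adaptations named here, stagewise selection of model parameters by thresholding residual correlations and non-negativity of the estimated field, can be illustrated with a simplified StOMP-flavoured solver. This is a generic sketch on an invented random operator, not the authors' wavelet model or their exact StOMP implementation:

```python
import numpy as np

def stagewise_nonneg(A, y, n_stages=10, t=2.0):
    """Simplified stagewise matching pursuit with a non-negativity constraint.

    Each stage adds every column whose residual correlation exceeds a
    noise-scaled threshold, then refits by least squares and clips the
    coefficients at zero (a crude way to enforce non-negative emissions).
    """
    active = np.zeros(A.shape[1], dtype=bool)
    x = np.zeros(A.shape[1])
    for _ in range(n_stages):
        r = y - A @ x
        c = A.T @ r
        thresh = t * np.linalg.norm(r) / np.sqrt(len(y))
        new = np.abs(c) > thresh
        if not new.any():
            break
        active |= new
        coef, *_ = np.linalg.lstsq(A[:, active], y, rcond=None)
        x[:] = 0.0
        x[active] = np.clip(coef, 0.0, None)
    return x

# Invented sparse, non-negative "emission" field and random measurements
rng = np.random.default_rng(5)
A = rng.standard_normal((50, 120)) / np.sqrt(50)
x_true = np.zeros(120)
x_true[[10, 40, 90]] = [4.0, 3.0, 5.0]
y = A @ x_true

x_hat = stagewise_nonneg(A, y)
```

The stagewise thresholding admits several parameters per pass, which is what makes StOMP-style methods fast compared with one-at-a-time greedy pursuit.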
Wisdom of crowds for robust gene network inference
Marbach, Daniel; Costello, James C.; Küffner, Robert; Vega, Nicci; Prill, Robert J.; Camacho, Diogo M.; Allison, Kyle R.; Kellis, Manolis; Collins, James J.; Stolovitzky, Gustavo
2012-01-01
Reconstructing gene regulatory networks from high-throughput data is a long-standing problem. Through the DREAM project (Dialogue on Reverse Engineering Assessment and Methods), we performed a comprehensive blind assessment of over thirty network inference methods on Escherichia coli, Staphylococcus aureus, Saccharomyces cerevisiae, and in silico microarray data. We characterize performance, data requirements, and inherent biases of different inference approaches offering guidelines for both algorithm application and development. We observe that no single inference method performs optimally across all datasets. In contrast, integration of predictions from multiple inference methods shows robust and high performance across diverse datasets. Thereby, we construct high-confidence networks for E. coli and S. aureus, each comprising ~1700 transcriptional interactions at an estimated precision of 50%. We experimentally test 53 novel interactions in E. coli, of which 23 were supported (43%). Our results establish community-based methods as a powerful and robust tool for the inference of transcriptional gene regulatory networks. PMID:22796662
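The integration step, combining edge predictions from multiple inference methods into a community prediction, can be illustrated with a simple average-rank scheme. The scores below are invented, and rank averaging is only one of the aggregation rules studied in DREAM:

```python
import numpy as np

def community_ranks(score_matrix):
    """Integrate edge predictions from several methods by averaging ranks.

    score_matrix: (n_methods, n_edges), higher score = more confident edge.
    Returns the community confidence per edge (mean normalized rank in [0, 1]).
    """
    n_methods, n_edges = score_matrix.shape
    ranks = np.empty_like(score_matrix, dtype=float)
    for m in range(n_methods):
        # double argsort turns scores into ranks (0 = lowest score)
        ranks[m] = np.argsort(np.argsort(score_matrix[m])) / (n_edges - 1)
    return ranks.mean(axis=0)

# Three hypothetical inference methods scoring five candidate edges
scores = np.array([
    [0.9, 0.1, 0.8, 0.2, 0.5],
    [0.7, 0.3, 0.9, 0.1, 0.6],
    [0.2, 0.1, 0.9, 0.3, 0.8],
])
consensus = community_ranks(scores)
```

Working on ranks rather than raw scores makes the aggregation insensitive to each method's score scale, which is part of why the community prediction is robust across datasets.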
a Data Driven Method for Building Reconstruction from LiDAR Point Clouds
NASA Astrophysics Data System (ADS)
Sajadian, M.; Arefi, H.
2014-10-01
Airborne laser scanning, commonly referred to as LiDAR, is a superior technology for three-dimensional data acquisition from Earth's surface with high speed and density. Building reconstruction is one of the main applications of LiDAR systems and is the focus of this study. For a 3D reconstruction of the buildings, the building points should first be separated from other points such as ground and vegetation. In this paper, a multi-agent strategy has been proposed for simultaneous extraction and segmentation of buildings from LiDAR point clouds. Height values, number of returned pulses, length of triangles, direction of normal vectors, and area are the five criteria utilized in this step. Next, the building edge points are detected using a new method named "Grid Erosion". A RANSAC-based technique has been employed for edge line extraction. Regularization constraints are performed to achieve the final lines. Finally, by modelling the roofs and walls, the 3D building model is reconstructed. The results indicate that the proposed method can successfully extract buildings from LiDAR data and generate building models automatically. A qualitative and quantitative assessment of the proposed method is then provided.
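The RANSAC edge-line extraction step can be sketched generically: repeatedly sample two edge points, count points near the implied line, keep the best consensus set, and refit. The synthetic 2-D points, tolerance, and total-least-squares refit below are illustrative choices, not the paper's exact settings:

```python
import numpy as np

def ransac_line(points, n_iter=200, tol=0.05, seed=3):
    """Fit a 2-D line to edge points with RANSAC; returns centroid, direction, inlier mask."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iter):
        p, q = points[rng.choice(len(points), 2, replace=False)]
        d = q - p
        n = np.array([-d[1], d[0]])          # normal to the candidate line
        nn = np.linalg.norm(n)
        if nn == 0:
            continue
        dist = np.abs((points - p) @ (n / nn))  # perpendicular distances
        inliers = dist < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on the consensus set by total least squares (principal direction)
    pts = points[best_inliers]
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0], best_inliers

# Noisy edge points along y = 0.5 x + 1, contaminated with scattered outliers
rng = np.random.default_rng(4)
x = np.linspace(0, 10, 60)
line_pts = np.column_stack([x, 0.5 * x + 1 + rng.normal(0, 0.01, 60)])
outliers = rng.uniform(0, 10, (15, 2))
centroid, direction, inliers = ransac_line(np.vstack([line_pts, outliers]))
```

Because consensus counting ignores points far from the candidate line, the fitted edge direction is unaffected by the scattered outliers.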
Loomis, E N; Grim, G P; Wilde, C; Wilson, D C; Morgan, G; Wilke, M; Tregillis, I; Merrill, F; Clark, D; Finch, J; Fittinghoff, D; Bower, D
2010-10-01
Development of analysis techniques for neutron imaging at the National Ignition Facility is an important and difficult task for the detailed understanding of high-neutron-yield inertial confinement fusion implosions. Once developed, these methods must provide accurate images of the hot and cold fuels so that information about the implosion, such as symmetry and areal density, can be extracted. One method under development involves the numerical inversion of the pinhole image using knowledge of neutron transport through the pinhole aperture from Monte Carlo simulations. In this article we present results of source reconstructions based on simulated images that test the method's effectiveness with regard to pinhole misalignment.
The Pixon Method for Data Compression Image Classification, and Image Reconstruction
NASA Technical Reports Server (NTRS)
Puetter, Richard; Yahil, Amos
2002-01-01
As initially proposed, this program had three goals: (1) continue to develop the highly successful Pixon method for image reconstruction and support other scientists in implementing this technique for their applications; (2) develop image compression techniques based on the Pixon method; and (3) develop artificial intelligence algorithms for image classification based on the Pixon approach for simplifying neural networks. Subsequent to proposal review, the scope of the program was greatly reduced and it was decided to investigate the ability of the Pixon method to provide superior restorations of images compressed with standard image compression schemes, specifically JPEG-compressed images.
NASA Astrophysics Data System (ADS)
Nara, T.; Koiwa, K.; Takagi, S.; Oyama, D.; Uehara, G.
2014-05-01
This paper presents an algebraic reconstruction method for dipole-quadrupole sources using magnetoencephalography data. Compared to conventional methods with the equivalent current dipole source model, our method can more accurately reconstruct two close, oppositely directed sources. Numerical simulations show that two sources on both sides of the longitudinal fissure of the cerebrum are stably estimated. The method is verified using a quadrupolar source phantom composed of two isosceles-triangle coils with parallel bases.
Su, Jianzhong; Shan, Hua; Liu, Hanli; Klibanov, Michael V
2006-10-01
A method is presented for reconstruction of the optical absorption coefficient from transmission near-infrared data with a cw source. Distinct from other available schemes such as optimization or Newton's iterative method, this method resolves the inverse problem by solving a boundary value problem for a Volterra-type integro-differential equation. It is demonstrated in numerical studies that this technique has better-than-average stability with respect to the discrepancy between the initial guess and the actual unknown absorption coefficient. The method is particularly useful for reconstruction from a large data set obtained from a CCD camera. Several numerical reconstruction examples are presented.
Benazzi, S; Stansfield, E; Milani, C; Gruppioni, G
2009-07-01
The process of forensic identification of missing individuals is frequently reliant on the superimposition of cranial remains onto an individual's picture and/or facial reconstruction. In the latter, the integrity of the skull or a cranium is an important factor in successful identification. Here, we recommend the use of computerized virtual reconstruction and geometric morphometrics for the purposes of individual reconstruction and identification in forensics. We apply these methods to reconstruct a complete cranium from facial remains that allegedly belong to the famous Italian humanist of the fifteenth century, Angelo Poliziano (1454-1494). Raw data were obtained by computed tomography scans of the Poliziano face and a complete reference skull of a 37-year-old Italian male. Given that the amount of distortion of the facial remains is unknown, two reconstructions are proposed: The first calculates the average shape between the original and its reflection, and the second discards the less preserved left side of the cranium under the assumption that there is no deformation on the right. Both reconstructions perform well in the superimposition with the original preserved facial surface in a virtual environment. The reconstruction by means of averaging between the original and reflection yielded better results during the superimposition with portraits of Poliziano. We argue that the combination of computerized virtual reconstruction and geometric morphometric methods offers a number of advantages over traditional plastic reconstruction, among which are speed, reproducibility, ease of manipulation when superimposing with pictures in a virtual environment, and control of assumptions.
Critical node treatment in the analytic function expansion method for Pin Power Reconstruction
Gao, Z.; Xu, Y.; Downar, T.
2013-07-01
Pin Power Reconstruction (PPR) was implemented in PARCS using the eight-term analytic function expansion method (AFEN). This method has been demonstrated to be both accurate and efficient. However, similar to all methods involving analytic functions, such as the analytic nodal method (ANM) and AFEN for the nodal solution, the use of AFEN for PPR also has a potential numerical issue with critical nodes. The conventional analytic functions are trigonometric or hyperbolic sine and cosine functions with an angular frequency proportional to buckling. For a critical node the buckling is zero, so the sine functions become zero and the cosine functions become unity. In this case, the eight analytic functions are no longer distinguishable from each other, so their corresponding coefficients can no longer be determined uniquely. The mode flux distribution of a critical node can be linear, while the conventional analytic functions can only express a uniform distribution. If there is a critical or near-critical node in a plane, the pin power distribution reconstructed with the conventional method often exhibits negative or very large values. In this paper, we propose a new method to avoid the numerical problem with critical nodes, which uses modified trigonometric or hyperbolic sine functions defined as the ratio of the trigonometric or hyperbolic sine and its angular frequency. If no critical or near-critical nodes are present, the new pin power reconstruction method with modified analytic functions is equivalent to the conventional one. The new method is demonstrated using the L336C5 benchmark problem. (authors)
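The substitution described above, replacing sin(Bx) by sin(Bx)/B so the basis degenerates smoothly to a linear function at zero buckling, can be sketched as follows (the function name and tolerance are illustrative, not from the paper):

```python
import math

def modified_sin(x, B, eps=1e-8):
    """Ratio of the sine and its angular frequency B (proportional to buckling).
    As B -> 0 the ratio tends to x, so a critical node (B = 0) yields a
    well-defined linear basis function instead of a vanishing sine term."""
    if abs(B) < eps:
        return x  # limit sin(B*x)/B -> x avoids the 0/0 indeterminacy
    return math.sin(B * x) / B
```

Away from criticality the modified basis differs from the conventional sine only by the constant factor 1/B, which is absorbed into the expansion coefficients.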
Setterbo, Jacob J.; Chau, Anh; Fyhrie, Patricia B.; Hubbard, Mont; Upadhyaya, Shrini K.; Symons, Jennifer E.; Stover, Susan M.
2012-01-01
Background Racetrack surface is a risk factor for racehorse injuries and fatalities. Current research indicates that race surface mechanical properties may be influenced by material composition, moisture content, temperature, and maintenance. Race surface mechanical testing in a controlled laboratory setting would allow for objective evaluation of dynamic properties of surface and factors that affect surface behavior. Objective To develop a method for reconstruction of race surfaces in the laboratory and validate the method by comparison with racetrack measurements of dynamic surface properties. Methods Track-testing device (TTD) impact tests were conducted to simulate equine hoof impact on dirt and synthetic race surfaces; tests were performed both in situ (racetrack) and using laboratory reconstructions of harvested surface materials. Clegg Hammer in situ measurements were used to guide surface reconstruction in the laboratory. Dynamic surface properties were compared between in situ and laboratory settings. Relationships between racetrack TTD and Clegg Hammer measurements were analyzed using stepwise multiple linear regression. Results Most dynamic surface property setting differences (racetrack-laboratory) were small relative to surface material type differences (dirt-synthetic). Clegg Hammer measurements were more strongly correlated with TTD measurements on the synthetic surface than the dirt surface. On the dirt surface, Clegg Hammer decelerations were negatively correlated with TTD forces. Conclusions Laboratory reconstruction of racetrack surfaces guided by Clegg Hammer measurements yielded TTD impact measurements similar to in situ values. The negative correlation between TTD and Clegg Hammer measurements confirms the importance of instrument mass when drawing conclusions from testing results. Lighter impact devices may be less appropriate for assessing dynamic surface properties compared to testing equipment designed to simulate hoof impact (TTD
A method of dose reconstruction for moving targets compatible with dynamic treatments
Rugaard Poulsen, Per; Lykkegaard Schmidt, Mai; Keall, Paul; Schjodt Worm, Esben; Fledelius, Walther; Hoffmann, Lone
2012-10-15
Purpose: To develop a method that allows a commercial treatment planning system (TPS) to perform accurate dose reconstruction for rigidly moving targets and to validate the method in phantom measurements for a range of treatments including intensity modulated radiation therapy (IMRT), volumetric arc therapy (VMAT), and dynamic multileaf collimator (DMLC) tracking. Methods: An in-house computer program was developed to manipulate Dicom treatment plans exported from a TPS (Eclipse, Varian Medical Systems) such that target motion during treatment delivery was incorporated into the plans. For each treatment, a motion-including plan was generated by dividing the intratreatment target motion into 1 mm position bins and constructing sub-beams that represented the parts of the treatment delivered while the target was located within each position bin. For each sub-beam, the target shift was modeled by a corresponding isocenter shift. The motion-incorporating Dicom plans were reimported into the TPS, where dose calculation resulted in motion-including target dose distributions. For experimental validation of the dose reconstruction, a thorax phantom with a moveable lung-equivalent rod with a tumor insert of solid water was first CT scanned. The tumor insert was delineated as a gross tumor volume (GTV), and a planning target volume (PTV) was formed by adding margins. A conformal plan, two IMRT plans (step-and-shoot and sliding windows), and a VMAT plan were generated giving minimum target doses of 95% (GTV) and 67% (PTV) of the prescription dose (3 Gy). Two conformal fields with MLC leaves perpendicular and parallel to the tumor motion, respectively, were generated for DMLC tracking. All treatment plans were delivered to the thorax phantom without tumor motion and with a sinusoidal tumor motion. The two conformal fields were delivered with and without portal image guided DMLC tracking based on an embedded gold marker. The target dose distribution was measured with a
Single-slice reconstruction method for helical cone-beam differential phase-contrast CT.
Fu, Jian; Chen, Liyuan
2014-01-01
X-ray phase-contrast computed tomography (PC-CT) can provide the internal structure information of biomedical specimens with high-quality cross-section images and has become an invaluable analysis tool. Here a simple and fast reconstruction algorithm is reported for helical cone-beam differential PC-CT (DPC-CT), called the DPC-CB-SSRB algorithm. It combines the existing CB-SSRB method of helical cone-beam absorption-contrast CT with the differential nature of DPC imaging. The reconstruction can be performed using a 2D fan-beam filtered back projection algorithm with the Hilbert imaginary filter. The quality of the results for large helical pitches is surprisingly good. In particular, the quality obtained with this algorithm from helical cone-beam DPC-CT data with a normalized pitch of 10 is comparable to that obtained with the traditional inter-row interpolation reconstruction at a normalized pitch of 2. This method will advance future medical helical cone-beam DPC-CT imaging applications.
ADMM-EM Method for L1-Norm Regularized Weighted Least Squares PET Reconstruction
2016-01-01
The L1-norm regularization is usually used in positron emission tomography (PET) reconstruction to suppress noise artifacts while preserving edges. The alternating direction method of multipliers (ADMM) is proven to be effective for solving this problem. It sequentially updates the additional variables, image pixels, and Lagrangian multipliers. Difficulties lie in obtaining a nonnegative update of the image, and classic ADMM requires updating the image by greedy iteration to minimize the cost function, which is computationally expensive. In this paper, we consider a specific application of ADMM to the L1-norm regularized weighted least squares PET reconstruction problem. The main contribution is the derivation of a new approach that iteratively and monotonically updates the image while remaining self-constrained in the nonnegativity region, without requiring a predetermined step size. We give a rigorous convergence proof for the quadratic subproblem of the ADMM algorithm considered in the paper. A simplified version is also developed by replacing the minimization of the image-related cost function with one iteration that only decreases it. The experimental results show that the proposed algorithm with greedy iterations provides faster convergence than other commonly used methods. Furthermore, the simplified version gives a comparable reconstructed result with far lower computational costs. PMID:27840655
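The splitting the abstract describes, a quadratic image subproblem, a thresholding step that enforces nonnegativity, and a multiplier update, can be illustrated with a generic ADMM sketch for L1-regularized nonnegative least squares. This is not the paper's PET-specific update (nor its weighted data term); names and defaults are assumptions:

```python
import numpy as np

def admm_l1_nnls(A, b, lam=0.1, rho=1.0, n_iter=300):
    """Generic ADMM sketch for min 0.5*||Ax - b||^2 + lam*||x||_1 with x >= 0."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    M = A.T @ A + rho * np.eye(n)   # system matrix of the quadratic subproblem
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(M, Atb + rho * (z - u))  # quadratic subproblem
        z = np.maximum(x + u - lam / rho, 0.0)       # soft-threshold, then project onto x >= 0
        u += x - z                                   # Lagrange multiplier update
    return z
```

The z-update is the proximal step that simultaneously applies the L1 shrinkage and the nonnegativity constraint, which is the coupling the paper's monotone update is designed to handle efficiently.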
NASA Astrophysics Data System (ADS)
Paget, A. C.; Brodzik, M. J.; Gotberg, J.; Hardman, M.; Long, D. G.
2014-12-01
Spanning over 35 years of Earth observations, satellite passive microwave sensors have generated a near-daily, multi-channel brightness temperature record of observations. Critical to describing and understanding Earth system hydrologic and cryospheric parameters, data products derived from the passive microwave record include precipitation, soil moisture, surface water, vegetation, snow water equivalent, sea ice concentration and sea ice motion. While swath data are valuable to oceanographers due to the temporal scales of ocean phenomena, gridded data are more valuable to researchers interested in derived parameters at fixed locations through time and are widely used in climate studies. We are applying recent developments in image reconstruction methods to produce a systematically reprocessed historical time series NASA MEaSUREs Earth System Data Record, at higher spatial resolutions than have previously been available, for the entire SMMR, SSM/I-SSMIS and AMSR-E record. We take advantage of recently released, recalibrated SSM/I-SSMIS swath format Fundamental Climate Data Records. Our presentation will compare and contrast the two candidate image reconstruction techniques we are evaluating: Backus-Gilbert (BG) interpolation and a radiometer version of Scatterometer Image Reconstruction (SIR). Both BG and SIR use regularization to trade off noise and resolution. We discuss our rationale for the respective algorithm parameters we have selected, compare results and computational costs, and include prototype SSM/I images at enhanced resolutions of up to 3 km. We include a sensitivity analysis for estimating sensor measurement response functions critical to both methods.
NASA Astrophysics Data System (ADS)
Montgomery, Kevin N.; Ross, Muriel D.
1993-07-01
A simple method to reconstruct details of neural tissue architectures from transmission electron microscope (TEM) images will help us to increase our knowledge of the functional organization of neural systems in general. To be useful, the reconstruction method should provide high resolution, quantitative measurement, and quick turnaround. In pursuit of these goals, we developed a modern, semiautomated system for reconstruction of neural tissue from TEM serial sections. Images are acquired by a video camera mounted on a TEM (Zeiss 902) equipped with automated stage control. The images are reassembled automatically into a mosaicked section using a cross-correlation algorithm on a Connection Machine-2 (CM-2) parallel supercomputer. An object detection algorithm on a Silicon Graphics workstation is employed to aid contour extraction. An estimated registration between sections is computed and verified by the user. The contours are then tessellated into a triangle-based mesh. At this point the data can be visualized as a wireframe or solid object, volume rendered, or used as a basis for simulations of functional activity.
Reconstruction of RHESSI Solar Flare Images with a Forward Fitting Method
NASA Astrophysics Data System (ADS)
Aschwanden, Markus J.; Schmahl, Ed; RHESSI Team
2002-11-01
We describe a forward-fitting method that has been developed to reconstruct hard X-ray images of solar flares from the Ramaty High-Energy Solar Spectroscopic Imager (RHESSI), a Fourier imager with rotation-modulated collimators that was launched on 5 February 2002. The forward-fitting method is based on geometric models that represent a spatial map by a superposition of multiple source structures, which are quantified by circular gaussians (4 parameters per source), elliptical gaussians (6 parameters), or curved ellipticals (7 parameters), designed to characterize real solar flare hard X-ray maps with a minimum number of geometric elements. We describe and demonstrate the use of the forward-fitting algorithm. We perform some 500 simulations of rotation-modulated time profiles of the 9 RHESSI detectors, based on single and multiple source structures, and perform their image reconstruction. We quantify the fidelity of the image reconstruction as a function of photon statistics, and the accuracy of retrieved source positions, widths, and fluxes. We outline applications for which the forward-fitting code is most suitable, such as measurements of the energy-dependent altitude of energy loss near the limb, or footpoint separation during flares.
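As a toy illustration of the forward-fitting idea, a single circular gaussian source (4 parameters) can be fitted by grid-searching its three nonlinear parameters and solving the amplitude in closed form by linear least squares. RHESSI's code fits rotation-modulated time profiles with far more sophisticated optimization; the function name and search grids below are purely illustrative:

```python
import numpy as np

def fit_circular_gaussian(image, X, Y, centers, widths):
    """Fit one circular gaussian amp*exp(-r^2/(2w^2)) to an image by brute
    force over (x0, y0, w); for each candidate, the amplitude that minimizes
    the squared error is given in closed form by linear least squares."""
    best_err, best_p = np.inf, None
    for x0 in centers:
        for y0 in centers:
            for w in widths:
                g = np.exp(-((X - x0)**2 + (Y - y0)**2) / (2 * w**2))
                amp = (g * image).sum() / (g * g).sum()   # optimal amplitude
                err = ((amp * g - image)**2).sum()
                if err < best_err:
                    best_err, best_p = err, (x0, y0, amp, w)
    return best_p
```

In practice the grid search would only seed a local optimizer, and multiple-source maps require fitting all sources jointly.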
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A.; Haering, Edward A., Jr.; Ehernberger, L. J.
1996-01-01
In-flight measurements of the SR-71 near-field sonic boom were obtained by an F-16XL airplane at flightpath separation distances from 40 to 740 ft. Twenty-two signatures were obtained from Mach 1.60 to Mach 1.84 and altitudes from 47,600 to 49,150 ft. The shock wave signatures were measured by the total and static sensors on the F-16XL noseboom. These near-field signature measurements were distorted by pneumatic attenuation in the pitot-static sensors; their effects were accounted for using optimal deconvolution. Measurement system magnitude and phase characteristics were determined from ground-based step-response tests and extrapolated to flight conditions using analytical models. Deconvolution was implemented using Fourier transform methods. Comparisons of the shock wave signatures reconstructed from the total and static pressure data are presented. The good agreement achieved gives confidence in the quality of the reconstruction analysis. Although originally developed to reconstruct the sonic boom signatures from SR-71 sonic boom flight tests, the methods presented here apply generally to other types of highly attenuated or distorted pneumatic measurements.
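Fourier-transform deconvolution of an attenuated signal can be sketched as a regularized spectral division (a Wiener-like filter). The flight study's optimal deconvolution used measured sensor magnitude and phase characteristics; here the impulse response is simply assumed known, and the function name and regularizer are illustrative:

```python
import numpy as np

def fourier_deconvolve(measured, impulse_response, eps=1e-3):
    """Frequency-domain deconvolution sketch: divide the measured spectrum by
    the sensor's transfer function, with a small regularization term eps to
    limit noise amplification where the transfer function is small."""
    n = len(measured)
    H = np.fft.rfft(impulse_response, n)
    Y = np.fft.rfft(measured, n)
    X = Y * np.conj(H) / (np.abs(H)**2 + eps)   # regularized spectral division
    return np.fft.irfft(X, n)
```

With eps set from the measurement noise level, this recovers the undistorted signature wherever the sensor response retains usable signal.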
a Method for the Reconstruction and Temporal Extension of Climatological Time Series
NASA Astrophysics Data System (ADS)
Valero, F.; Gonzalez, J. F.; Doblas, F. J.; García-Miguel, J. A.
1996-02-01
A method for the reconstruction and temporal extension of climatological time series is provided. The method combines several techniques, including harmonic analysis, seasonal weights, and the Durbin-Watson (DW) regression method. The DW method has been modified in this paper and is described in detail because it represents a novel use of the original DW method. The method is applied to monthly means of daily wind-run data sets recorded in two historical observatories (M series and A series) within the Parque del Retiro in Madrid (Spain), covering different time periods with an overlapping period (1901-1919). The aim of the present study is to fill in the record and construct a historical time series ranging from 1867 to 1992. The proposed model is developed for the 1906-1919 calibration period and validated over the 1901-1905 verification period, under the hypothesis of a constant ratio of variances. The verification results are almost as good as those for the calibration period. Hence, the M series was extended back to 1867, which results in the longest climatological wind-run data set in Spain. The reconstruction is also shown to be reliable.
High-order noise analysis for low dose iterative image reconstruction methods: ASIR, IRIS, and MBAI
NASA Astrophysics Data System (ADS)
Do, Synho; Singh, Sarabjeet; Kalra, Mannudeep K.; Karl, W. Clem; Brady, Thomas J.; Pien, Homer
2011-03-01
Iterative reconstruction techniques (IRTs) have been shown to suppress noise significantly in low dose CT imaging. However, medical doctors hesitate to accept this new technology because the visual impression of IRT images differs from that of full-dose filtered back-projection (FBP) images. Common noise measurements, such as the mean and standard deviation of a homogeneous region in the image, do not provide sufficient characterization of noise statistics when the probability density function becomes non-Gaussian. In this study, we measure L-moments of intensity values of images acquired at 10% of normal dose and reconstructed by the IRT methods of two state-of-the-art clinical scanners (i.e., GE HDCT and Siemens DSCT flash), keeping the dosage level identical for both. The high- and low-dose scans (i.e., 10% of high dose) were acquired from each scanner, and L-moments of noise patches were calculated for comparison.
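Sample L-moments are linear combinations of probability-weighted moments of the order statistics, which makes them robust summaries of non-Gaussian noise. A minimal sketch for the first three (the function name is illustrative; requires at least three samples):

```python
import numpy as np

def l_moments(x):
    """First three sample L-moments (l1, l2, l3) via the unbiased
    probability-weighted moments b0, b1, b2 of the sorted sample.
    l1 is a location measure, l2 a scale measure, l3 captures skewness."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)                      # needs n >= 3 for b2
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    return b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
```

For a symmetric noise distribution l3 vanishes, so a nonzero L-skewness in the reconstructed-image noise patches flags the non-Gaussian behavior the study measures.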
Method of producing nanopatterned articles using surface-reconstructed block copolymer films
Russell, Thomas P; Park, Soojin; Wang, Jia-Yu; Kim, Bokyung
2013-08-27
Nanopatterned surfaces are prepared by a method that includes forming a block copolymer film on a substrate, annealing and surface reconstructing the block copolymer film to create an array of cylindrical voids, depositing a metal on the surface-reconstructed block copolymer film, and heating the metal-coated block copolymer film to redistribute at least some of the metal into the cylindrical voids. When very thin metal layers and low heating temperatures are used, metal nanodots can be formed. When thicker metal layers and higher heating temperatures are used, the resulting metal structure includes nanoring-shaped voids. The nanopatterned surfaces can be transferred to the underlying substrates via etching, or used to prepare nanodot- or nanoring-decorated substrate surfaces.
NASA Astrophysics Data System (ADS)
Okawa, Shinpei; Hirasawa, Takeshi; Kushibiki, Toshihiro; Ishihara, Miya
2013-09-01
An image reconstruction algorithm for biomedical photoacoustic imaging is discussed. The algorithm solves the inverse problem of the photoacoustic phenomenon in biological media and images the distribution of large optical absorption coefficients, which can indicate diseased tissues such as cancers with angiogenesis and the tissues labeled by exogenous photon absorbers. The linearized forward problem, which relates the absorption coefficients to the detected photoacoustic signals, is formulated by using photon diffusion and photoacoustic wave equations. Both partial differential equations are solved by a finite element method. The inverse problem is solved by truncated singular value decomposition, which reduces the effects of the measurement noise and the errors between forward modeling and actual measurement systems. The spatial resolution and the robustness to various factors affecting the image reconstruction are evaluated by numerical experiments with 2D geometry.
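Truncated singular value decomposition solves the linearized inverse problem by discarding the small, noise-dominated singular values. A minimal sketch (in practice the truncation rank k is chosen from the singular value spectrum and the noise level; the function name is illustrative):

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated SVD solve of A x = b: keep only the k largest singular
    values. Small singular values of an ill-posed forward operator would
    amplify measurement noise and modeling errors in the inverse solution."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])
```

With k equal to the full rank this reduces to the ordinary least-squares solution; shrinking k trades resolution for stability.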
NASA Astrophysics Data System (ADS)
Li, Dongming; Zhang, Lijuan; Wang, Ting; Liu, Huan; Yang, Jinhua; Chen, Guifen
2016-11-01
To improve the quality of adaptive optics (AO) images, we study an AO image restoration algorithm based on wavefront reconstruction and an adaptive total variation (TV) method. First, wavefront reconstruction using Zernike polynomials provides an initial estimate of the point spread function (PSF). Then, we develop iterative solutions for AO image restoration that address the joint deconvolution problem. Image restoration experiments were performed to verify the restoration effect of the proposed algorithm. The experimental results show that, compared with the RL-IBD and Wiener-IBD algorithms, the GMG measures (for a real AO image) of our algorithm are increased by 36.92% and 27.44%, respectively, the computation time is decreased by 7.2% and 3.4%, respectively, and the estimation accuracy is significantly improved.
Ray, J.; Lee, J.; Yadav, V.; ...
2014-08-20
We present a sparse reconstruction scheme that can also be used to ensure non-negativity when fitting wavelet-based random field models to limited observations in non-rectangular geometries. The method is relevant when multiresolution fields are estimated using linear inverse problems. Examples include the estimation of emission fields for many anthropogenic pollutants using atmospheric inversion or hydraulic conductivity in aquifers from flow measurements. The scheme is based on three new developments. Firstly, we extend an existing sparse reconstruction method, Stagewise Orthogonal Matching Pursuit (StOMP), to incorporate prior information on the target field. Secondly, we develop an iterative method that uses StOMP to impose non-negativity on the estimated field. Finally, we devise a method, based on compressive sensing, to limit the estimated field within an irregularly shaped domain. We demonstrate the method on the estimation of fossil-fuel CO2 (ffCO2) emissions in the lower 48 states of the US. The application uses a recently developed multiresolution random field model and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also reduces the overall computational cost by a factor of two. Further, the sparse reconstruction scheme imposes non-negativity without introducing strong nonlinearities, such as those introduced by employing log-transformed fields, and thus reaps the benefits of simplicity and computational speed that are characteristic of linear inverse problems.
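The idea of combining greedy sparse recovery with a nonnegativity constraint can be sketched with a plain orthogonal-matching-pursuit loop that selects positively correlated atoms and clips the least-squares coefficients. This illustrates the concept only; it is not the paper's StOMP extension, and the function name and defaults are assumptions:

```python
import numpy as np

def omp_nonneg(A, y, n_iter=10, thresh=1e-6):
    """Greedy sparse recovery sketch with nonnegativity: at each step pick the
    atom most positively correlated with the residual, re-solve least squares
    on the active set, and clip the coefficients at zero."""
    n = A.shape[1]
    support, x = [], np.zeros(n)
    r = y.copy()
    for _ in range(n_iter):
        c = A.T @ r
        j = int(np.argmax(c))          # most positively correlated atom
        if c[j] < thresh:              # nothing left to explain with x >= 0
            break
        if j not in support:
            support.append(j)
        S = sorted(support)
        xs, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        xs = np.maximum(xs, 0.0)       # impose nonnegativity on the estimate
        x = np.zeros(n)
        x[S] = xs
        r = y - A @ x
    return x
```

Because the constraint enters only through selection and clipping, the subproblems stay linear, which mirrors the paper's point about avoiding log-transform nonlinearities.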
Bayesian network reconstruction using systems genetics data: comparison of MCMC methods.
Tasaki, Shinya; Sauerwine, Ben; Hoff, Bruce; Toyoshiba, Hiroyoshi; Gaiteri, Chris; Chaibub Neto, Elias
2015-04-01
Reconstructing biological networks using high-throughput technologies has the potential to produce condition-specific interactomes. But are these reconstructed networks a reliable source of biological interactions? Do some network inference methods offer dramatically improved performance on certain types of networks? To facilitate the use of network inference methods in systems biology, we report a large-scale simulation study comparing the ability of Markov chain Monte Carlo (MCMC) samplers to reverse engineer Bayesian networks. The MCMC samplers we investigated included foundational and state-of-the-art Metropolis-Hastings and Gibbs sampling approaches, as well as novel samplers we have designed. To enable a comprehensive comparison, we simulated gene expression and genetics data from known network structures under a range of biologically plausible scenarios. We examine the overall quality of network inference via different methods, as well as how their performance is affected by network characteristics. Our simulations reveal that network size, edge density, and strength of gene-to-gene signaling are major parameters that differentiate the performance of various samplers. Specifically, more recent samplers including our novel methods outperform traditional samplers for highly interconnected large networks with strong gene-to-gene signaling. Our newly developed samplers show comparable or superior performance to the top existing methods. Moreover, this performance gain is strongest in networks with biologically oriented topology, which indicates that our novel samplers are suitable for inferring biological networks. The performance of MCMC samplers in this simulation framework can guide the choice of methods for network reconstruction using systems genetics data.
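The accept/reject core shared by the Metropolis-Hastings samplers compared above can be shown in a minimal random-walk form. Structure-learning samplers propose graph edits (edge additions, deletions, reversals) rather than real-valued steps, but the acceptance rule is the same; all names and defaults here are illustrative:

```python
import numpy as np

def metropolis_hastings(logp, x0, n_samples, step=1.0, seed=0):
    """Minimal random-walk Metropolis sketch: propose a perturbed state and
    accept it with probability min(1, p(x') / p(x)), working in log space."""
    rng = np.random.default_rng(seed)
    x, lp = x0, logp(x0)
    out = []
    for _ in range(n_samples):
        xp = x + step * rng.standard_normal()   # symmetric proposal
        lpp = logp(xp)
        if np.log(rng.random()) < lpp - lp:     # Metropolis acceptance test
            x, lp = xp, lpp
        out.append(x)
    return np.array(out)
```

The simulation study's findings about network size and edge density translate, in this picture, into how rugged the posterior over structures is and how well the proposal distribution moves through it.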
Application of accelerated acquisition and highly constrained reconstruction methods to MR
NASA Astrophysics Data System (ADS)
Wang, Kang
2011-12-01
There are many Magnetic Resonance Imaging (MRI) applications that require rapid data acquisition. In conventional proton MRI, representative applications include real-time dynamic imaging, whole-chest pulmonary perfusion imaging, high resolution coronary imaging, MR T1 or T2 mapping, etc. The requirement for fast acquisition and novel reconstruction methods is due to clinical demand for high temporal resolution, high spatial resolution, or both. Another important category in which fast MRI methods are highly desirable is imaging with hyperpolarized (HP) contrast media, such as HP 3He imaging for evaluation of pulmonary function, and imaging of HP 13C-labeled substrates for the study of in vivo metabolic processes. To address these needs, numerous MR undersampling methods have been developed and combined with novel image reconstruction techniques. This thesis aims to develop novel data acquisition and image reconstruction techniques for the following applications. (1) Ultrashort echo time spectroscopic imaging (UTESI). The need to acquire many echo images in spectroscopic imaging with high spatial resolution usually results in extended scan times, and thus requires k-space undersampling and novel image reconstruction methods to overcome the artifacts related to the undersampling. (2) Dynamic hyperpolarized 13C spectroscopic imaging. HP 13C compounds exhibit non-equilibrium T1 decay and rapidly evolving spectral dynamics, and therefore it is vital to utilize the polarized signal wisely and efficiently to observe the entire temporal dynamics of the injected 13C compounds as well as the corresponding downstream metabolites. (3) Time-resolved contrast-enhanced MR angiography. The diagnosis of vascular diseases often requires large coverage of human body anatomies with high spatial resolution and sufficient temporal resolution for the separation of arterial phases from venous phases. The goal of simultaneously achieving high spatial and temporal resolution has
FBP and BPF reconstruction methods for circular X-ray tomography with off-center detector
Schaefer, Dirk; Grass, Michael; Haar, Peter van de
2011-05-15
Purpose: Circular scanning with an off-center planar detector is an acquisition scheme that allows to save detector area while keeping a large field of view (FOV). Several filtered back-projection (FBP) algorithms have been proposed earlier. The purpose of this work is to present two newly developed back-projection filtration (BPF) variants and evaluate the image quality of these methods compared to the existing state-of-the-art FBP methods. Methods: The first new BPF algorithm applies redundancy weighting of overlapping opposite projections before differentiation in a single projection. The second one uses the Katsevich-type differentiation involving two neighboring projections followed by redundancy weighting and back-projection. An averaging scheme is presented to mitigate streak artifacts inherent to circular BPF algorithms along the Hilbert filter lines in the off-center transaxial slices of the reconstructions. The image quality is assessed visually on reconstructed slices of simulated and clinical data. Quantitative evaluation studies are performed with the Forbild head phantom by calculating root-mean-squared-deviations (RMSDs) to the voxelized phantom for different detector overlap settings and by investigating the noise resolution trade-off with a wire phantom in the full detector and off-center scenario. Results: The noise-resolution behavior of all off-center reconstruction methods corresponds to their full detector performance with the best resolution for the FDK based methods with the given imaging geometry. With respect to RMSD and visual inspection, the proposed BPF with Katsevich-type differentiation outperforms all other methods for the smallest chosen detector overlap of about 15 mm. The best FBP method is the algorithm that is also based on the Katsevich-type differentiation and subsequent redundancy weighting. For wider overlap of about 40-50 mm, these two algorithms produce similar results outperforming the other three methods. The clinical
Reconstruction of multiple gastric electrical wave fronts using potential-based inverse methods.
Kim, J H K; Pullan, A J; Cheng, L K
2012-08-21
One approach for non-invasively characterizing gastric electrical activity, commonly used in the field of electrocardiography, involves solving an inverse problem whereby electrical potentials on the stomach surface are directly reconstructed from dense potential measurements on the skin surface. To investigate this problem, an anatomically realistic torso model and an electrical stomach model were used to simulate potentials on stomach and skin surfaces arising from normal gastric electrical activity. The effectiveness of the Greensite-Tikhonov or the Tikhonov inverse methods were compared under the presence of 10% Gaussian noise with either 84 or 204 body surface electrodes. The stability and accuracy of the Greensite-Tikhonov method were further investigated by introducing varying levels of Gaussian signal noise or by increasing or decreasing the size of the stomach by 10%. Results showed that the reconstructed solutions were able to represent the presence of propagating multiple wave fronts and the Greensite-Tikhonov method with 204 electrodes performed best (correlation coefficients of activation time: 90%; pacemaker localization error: 3 cm). The Greensite-Tikhonov method was stable with Gaussian noise levels up to 20% and 10% change in stomach size. The use of 204 rather than 84 body surface electrodes improved the performance; however, for all investigated cases, the Greensite-Tikhonov method outperformed the Tikhonov method.
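Zeroth-order Tikhonov regularization, the baseline against which the Greensite-Tikhonov variant is compared, has the closed form x = (A^T A + lam^2 I)^{-1} A^T b for forward operator A and body-surface measurements b. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Zeroth-order Tikhonov solution of the linear inverse problem:
    x = argmin ||A x - b||^2 + lam^2 ||x||^2, which damps the components
    of x associated with small singular values of A."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)
```

The Greensite variant additionally exploits the spatiotemporal structure of the recordings by decorrelating in time before applying this spatial regularization.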
Zhou, Jian; Shang, Yang; Zhang, Xiaohu; Yu, Wenxian
2015-01-01
We propose a monocular trajectory intersection method to solve the problem that a monocular moving camera cannot be used for three-dimensional reconstruction of a moving object point. The necessary and sufficient condition under which this method has a unique solution is provided. An extended application of the method is not only to achieve the reconstruction of the 3D trajectory, but also to capture the orientation of the moving object, which could not be obtained by PnP problem methods due to lack of features. It is a breakthrough improvement that develops intersection measurement from the traditional “point intersection” to “trajectory intersection” in videometrics. The trajectory of the object point can be obtained by using only linear equations without any initial value or iteration; the orientation of the object under poor conditions can also be calculated. The required condition for the existence of a definite solution of this method is derived from equivalence relations of the orders of the moving trajectory equations of the object, which specifies the applicable conditions of the method. Simulation and experimental results show that the method not only applies to objects moving along a straight line, a conic or another simple trajectory, but also provides good results for more complicated trajectories, making it widely applicable. PMID:25760053
Shatokhina, Iuliia; Obereder, Andreas; Rosensteiner, Matthias; Ramlau, Ronny
2013-04-20
We present a fast method for wavefront reconstruction from pyramid wavefront sensor (P-WFS) measurements. The method is based on an analytical relation between pyramid and Shack-Hartmann sensor (SH-WFS) data. The algorithm consists of two steps: a transformation of the P-WFS data to SH data, followed by the application of the cumulative reconstructor with domain decomposition, a wavefront reconstructor from SH-WFS measurements. Closed-loop simulations confirm that our method provides the same quality as the standard matrix-vector multiplication method. A complexity analysis as well as speed tests confirm that the method is very fast. Thus, the method can be used on extremely large telescopes, e.g., for eXtreme adaptive optics systems.
Matsumoto, Tomotaka; Akashi, Hiroshi; Yang, Ziheng
2015-07-01
Inference of gene sequences in ancestral species has been widely used to test hypotheses concerning the process of molecular sequence evolution. However, the approach may produce spurious results, mainly because using the single best reconstruction while ignoring the suboptimal ones creates systematic biases. Here we implement methods to correct for such biases and use computer simulation to evaluate their performance when the substitution process is nonstationary. The methods we evaluated include parsimony and likelihood using the single best reconstruction (SBR), averaging over reconstructions weighted by the posterior probabilities (AWP), and a new method called expected Markov counting (EMC) that produces maximum-likelihood estimates of substitution counts for any branch under a nonstationary Markov model. We simulated base composition evolution on a phylogeny for six species, with different selective pressures on G+C content among lineages, and compared the counts of nucleotide substitutions recorded during simulation with the inference by different methods. We found that large systematic biases resulted from (i) the use of parsimony or likelihood with SBR, (ii) the use of a stationary model when the substitution process is nonstationary, and (iii) the use of the Hasegawa-Kishino-Yano (HKY) model, which is too simple to adequately describe the substitution process. The nonstationary general time reversible (GTR) model, used with AWP or EMC, accurately recovered the substitution counts, even in cases of complex parameter fluctuations. We discuss model complexity and the compromise between bias and variance and suggest that the new methods may be useful for studying complex patterns of nucleotide substitution in large genomic data sets.
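The bias the authors describe can be illustrated with a toy calculation (the posterior probabilities and counts below are hypothetical, not from the paper): SBR keeps only the mode of the reconstruction posterior and discards the rest, while AWP averages the implied substitution counts over reconstructions, weighted by their posteriors.

```python
# Toy illustration with hypothetical numbers: three candidate ancestral
# reconstructions at one site, each implying a substitution count on a
# branch, with posterior probabilities from the model.
posteriors = [0.5, 0.3, 0.2]      # P(reconstruction | data)
subst_counts = [0, 1, 2]          # substitutions implied by each

# SBR: keep only the single best reconstruction, ignore suboptimal ones.
sbr_count = subst_counts[posteriors.index(max(posteriors))]

# AWP: average counts over reconstructions, weighted by posterior.
awp_count = sum(p * c for p, c in zip(posteriors, subst_counts))

print(sbr_count)  # 0 -- the best reconstruction implies no change
print(awp_count)  # 0.7 -- the expectation acknowledges uncertainty
```

The systematic bias of SBR comes exactly from discrepancies like this one: the single best reconstruction can imply zero substitutions even when the posterior expectation is substantially positive.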
A maximum-likelihood multi-resolution weak lensing mass reconstruction method
NASA Astrophysics Data System (ADS)
Khiabanian, Hossein
Gravitational lensing is formed when the light from a distant source is "bent" around a massive object. Lensing analysis has increasingly become the method of choice for studying dark matter, so much so that it is one of the main tools that will be employed in future surveys to study dark energy and its equation of state as well as the evolution of galaxy clustering. Unlike other popular techniques for selecting galaxy clusters (such as studying the X-ray emission or observing the over-densities of galaxies), weak gravitational lensing does not have the disadvantage of relying on the luminous matter and provides a parameter-free reconstruction of the projected mass distribution in clusters without dependence on baryon content. Gravitational lensing also provides a unique test for the presence of truly dark clusters, though it is otherwise an expensive detection method. Therefore it is essential to make use of all the information provided by the data to improve the quality of the lensing analysis. This thesis project has been motivated by the limitations encountered with the commonly used direct reconstruction methods of producing mass maps. We have developed a multi-resolution maximum-likelihood reconstruction method for producing two-dimensional mass maps using weak gravitational lensing data. To utilize all the shear information, we employ an iterative inverse method with a properly selected regularization coefficient which fits the deflection potential at the position of each galaxy. By producing mass maps with multiple resolutions in different parts of the observed field, we can achieve a uniform signal-to-noise level by increasing the resolution in regions of higher distortions or regions with an over-density of background galaxies. In addition, we are able to better study the substructure of the massive clusters at a resolution which is not attainable in the rest of the observed field.
Finding pathway-modulating genes from a novel Ontology Fingerprint-derived gene network.
Qin, Tingting; Matmati, Nabil; Tsoi, Lam C; Mohanty, Bidyut K; Gao, Nan; Tang, Jijun; Lawson, Andrew B; Hannun, Yusuf A; Zheng, W Jim
2014-10-01
To enhance our knowledge regarding biological pathway regulation, we took an integrated approach, using the biomedical literature, ontologies, network analyses and experimental investigation to infer novel genes that could modulate biological pathways. We first constructed a novel gene network via a pairwise comparison of all yeast genes' Ontology Fingerprints--a set of Gene Ontology terms overrepresented in the PubMed abstracts linked to a gene along with those terms' corresponding enrichment P-values. The network was further refined using a Bayesian hierarchical model to identify novel genes that could potentially influence the pathway activities. We applied this method to the sphingolipid pathway in yeast and found that many top-ranked genes indeed displayed altered sphingolipid pathway functions, initially measured by their sensitivity to myriocin, an inhibitor of de novo sphingolipid biosynthesis. Further experiments confirmed the modulation of the sphingolipid pathway by one of these genes, PFA4, encoding a palmitoyl transferase. Comparative analysis showed that few of these novel genes could be discovered by other existing methods. Our novel gene network provides a unique and comprehensive resource to study pathway modulations and systems biology in general.
Berthier, B; Bouzerar, R; Legallais, C
2002-10-01
Many clinical studies suggest that local blood flow patterns are involved in the location and development of atherosclerosis. In coronary diseases, this assumption should be corroborated by quantitative information on local hemodynamic parameters such as pressure, velocity or wall shear stress. Nowadays, computational fluid dynamics (CFD) algorithms coupled to realistic 3-D reconstructions of such vessels make these data accessible. Nevertheless, they should be carefully analysed to avoid misinterpretations when the physiological parameters are not all considered. As an example, we propose here to compare the flow patterns calculated in a coronary vessel reconstructed by three different methods. In the three cases, the vessel trajectory respected the physiology. In the simplest reconstruction, the coronary was modelled by a tube of constant diameter, while in the most complex one, the cross-sections corresponded to reality. We showed that local pressures, wall shear rates and velocity profiles were severely affected by the geometrical modifications. In the constant cross-section vessel, the flow resembled Poiseuille flow in a straight tube. On the contrary, velocity and shear rate exhibited sudden local variations in the more realistic vessels. As an example, velocity could be multiplied by 5 compared to Poiseuille flow, and areas of very low wall shear rate appeared. The results obtained with the most complex model clearly showed that, in addition to a proper description of the vessel trajectory, changes in cross-sectional area should be carefully taken into account, confirming assumptions already highlighted before the rise of commercially available and efficient CFD software.
Testing a New Anticoagulation Method for Free Flap Reconstruction of Head and Neck Cancers
Karimi, Ebrahim; Ardestani, Seyyed Hadi Samimi; Jafari, Mehrdad; Hagh, Ali Bagheri
2016-01-01
Objectives Free flaps are widely used to reconstruct head and neck defects. Despite improvements in surgical techniques and surgeons' experience, flap failures still occur due to thrombotic occlusion after small-vessel anastomosis. To reduce the possibility of flap loss as a result of thrombotic occlusion, various anticoagulants have been used. In this study we evaluated a new protocol for anticoagulation therapy and its effect on flap survival and complications. Methods In this interventional study, the surgical defects of 30 patients with head and neck cancer were reconstructed with microvascular free flaps between 2013 and 2014. In the postoperative period, patients received aspirin (100 mg/day) for 5 days and enoxaparin (40 mg/day subcutaneously) for 3 days. Flap survival was followed for three weeks. Results As no complete necrosis or loss of a flap occurred, the free flap success rate was 100%. Re-exploration was needed in 3 patients (10%). In only one patient was re-exploration due to a problem in venous blood flow. Conclusion The aspirin-enoxaparin short-term protocol may be a good choice after free flap transfer in reconstruction of head and neck surgical defects. PMID:27337950
NASA Astrophysics Data System (ADS)
Xia, Yidong
The objective of this work is to develop a parallel, implicit reconstructed discontinuous Galerkin (RDG) method using Taylor basis for the solution of the compressible Navier-Stokes equations on 3D hybrid grids. This third-order accurate RDG method is based on a hierarchical weighted essentially non-oscillatory reconstruction scheme, termed HWENO(P1P2) to indicate that a quadratic polynomial solution is obtained from the underlying linear polynomial DG solution via a hierarchical WENO reconstruction. The HWENO(P1P2) scheme is designed not only to enhance the accuracy of the underlying DG(P1) method but also to ensure non-linear stability of the RDG method. In this reconstruction scheme, a quadratic polynomial (P2) solution is first reconstructed using a least-squares approach from the underlying linear (P1) discontinuous Galerkin solution. The final quadratic solution is then obtained using a Hermite WENO reconstruction, which is necessary to ensure the linear stability of the RDG method on 3D unstructured grids. The first derivatives of the quadratic polynomial solution are then reconstructed using a WENO reconstruction in order to eliminate spurious oscillations in the vicinity of strong discontinuities, thus ensuring the non-linear stability of the RDG method. The parallelization in the RDG method is based on a message passing interface (MPI) programming paradigm, where the METIS library is used for the partitioning of a mesh into subdomain meshes of approximately the same size. Both multi-stage explicit Runge-Kutta and simple implicit backward Euler methods are implemented for time advancement in the RDG method. In the implicit method, three approaches: analytical differentiation, divided differencing (DD), and automatic differentiation (AD) are developed and implemented to obtain the resulting flux Jacobian matrices. The automatic differentiation is a set of techniques based on the mechanical application of the chain rule to obtain derivatives of a function given as
Kuiper, Justin J; Zimmerman, M Bridget; Pagedar, Nitin A; Carter, Keith D; Allen, Richard C; Shriver, Erin M
2016-08-01
This article compares the perception of health and beauty of patients after exenteration reconstruction with free flap, eyelid-sparing, split-thickness skin graft, or a prosthesis. Cross-sectional evaluation was performed through a survey sent to all students enrolled at the University of Iowa Carver College of Medicine. The survey included inquiries about observer comfort, perceived patient health, difficulty of social interactions, and which patient appearance was least bothersome. Responses were scored from 0 to 4 for each method of reconstruction and an orbital prosthesis. A Friedman test was used to compare responses among each method of repair and the orbital prosthesis for each of the four questions, and if this was significant, post-hoc pairwise comparison was performed with p values adjusted using Bonferroni's method. One hundred and thirty-two students responded to the survey and 125 completed all four questions. Favorable response for all questions was highest for the orbital prosthesis and lowest for the split-thickness skin graft. Patient appearance with an orbital prosthesis had significantly higher scores compared to patient appearance with each of the other methods for all questions (p < 0.0001). The second highest scores were for the free flap, which were higher than eyelid-sparing and significantly higher than split-thickness skin grafting (p values: Question 1: < 0.0001; Question 2: 0.0005; Question 3: 0.006; Question 4: 0.019). The orbital prosthesis was the preferred post-operative appearance for the exenterated socket for each question. Free flap was the preferred appearance for reconstruction without an orbital prosthesis. Split-thickness skin graft was least preferred for all questions.
Testing for causality in reconstructed state spaces by an optimized mixed prediction method
NASA Astrophysics Data System (ADS)
Krakovská, Anna; Hanzely, Filip
2016-11-01
In this study, a method of causality detection was designed to reveal coupling between dynamical systems represented by time series. The method is based on the predictions in reconstructed state spaces. The results of the proposed method were compared with outcomes of two other methods, the Granger VAR test of causality and the convergent cross-mapping. We used two types of test data. The first test example is a unidirectional connection of chaotic systems of Rössler and Lorenz type. The second one, the fishery model, is an example of two correlated observables without a causal relationship. The results showed that the proposed method of optimized mixed prediction was able to reveal the presence and the direction of coupling and distinguish causality from mere correlation as well.
2016-01-01
Fluorescence molecular tomography (FMT) is an imaging technique that can localize and quantify fluorescent markers to resolve biological processes at molecular and cellular levels. Owing to a limited number of measurements and a large number of unknowns as well as the diffusive transport of photons in biological tissues, the inverse problem in FMT is usually highly ill-posed. In this work, a sparsity-constrained preconditioned Kaczmarz (SCP-Kaczmarz) method is proposed to reconstruct the fluorescent target for FMT. The SCP-Kaczmarz method uses the preconditioning strategy to minimize the correlation between the rows of the forward matrix and constrains the Kaczmarz iteration results to be sparse. Numerical simulation and phantom and in vivo experiments were performed to test the efficiency of the proposed method. The results demonstrate that both the convergence and accuracy of the proposed method are improved compared with the classical memory-efficient low-cost Kaczmarz method. PMID:27999796
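For reference, the classical Kaczmarz iteration that SCP-Kaczmarz builds on can be sketched as follows. This is a generic illustration without the paper's preconditioning or sparsity constraint; the system size and random seed are arbitrary assumptions:

```python
import numpy as np

def kaczmarz(A, b, sweeps=200):
    """Classical Kaczmarz iteration: cycle through the rows of A and
    project the current iterate onto each hyperplane a_i . x = b_i."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(sweeps):
        for i in range(m):
            a = A[i]
            x += (b[i] - a @ x) / (a @ a) * a
    return x

# Consistent overdetermined toy system: the iterates converge to the
# solution without forming or inverting A^T A.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
x_true = rng.standard_normal(10)
x_hat = kaczmarz(A, A @ x_true)
```

The row-by-row updates are what make Kaczmarz-type methods memory-efficient for the large, dense forward matrices of FMT; the paper's preconditioning step additionally reduces the correlation between rows to speed up this cycle.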
NASA Astrophysics Data System (ADS)
Zhao, Lingling; Yang, He; Cong, Wenxiang; Wang, Ge; Intes, Xavier
2014-02-01
Time domain fluorescence molecular tomography (TD-FMT) allows 3D visualization of multiple fluorophores based on lifetime contrast and provides a unique data set for enhanced quantification and spatial resolution. The time-gate data set can be divided into two groups around the maximum gate: early gates and late gates. It is well established that early gates allow for improved spatial resolution of the reconstruction. However, photon counts are inherently very low at early gates due to the high absorption and scattering of tissue. This makes image reconstruction highly susceptible to the effects of noise and numerical errors. Moreover, the inverse problem of FMT is ill-posed and underdetermined. These factors make reconstruction difficult for early time gates. In this work, an lp (0 < p < 1) reconstruction algorithm was developed within our wide-field mesh-based Monte Carlo reconstruction strategy. The reconstruction performance was validated on a synthetic murine model simulating fluorophore uptake in the kidneys and with experimental preclinical data. We compared the early time-gate reconstruction results using l1/3, l1/2 and l1 regularization methods in terms of quantification and resolution. The regularization parameters were selected by the L-curve method. The simulation results of a 3D mouse atlas and a mouse experiment show that the lp (0 < p < 1) method obtained sparser and more accurate solutions than the l1 regularization method for early time gates.
Mory, Cyril; Auvray, Vincent; Zhang, Bo; Grass, Michael; Schäfer, Dirk; Chen, S. James; Carroll, John D.; Rit, Simon; Peyrin, Françoise; Douek, Philippe; Boussel, Loïc
2014-02-15
Purpose: Reconstruction of the beating heart in 3D + time in the catheter laboratory using only the available C-arm system would improve diagnosis, guidance, device sizing, and outcome control for intracardiac interventions, e.g., electrophysiology, valvular disease treatment, structural or congenital heart disease. To obtain such a reconstruction, the patient's electrocardiogram (ECG) must be recorded during the acquisition and used in the reconstruction. In this paper, the authors present a 4D reconstruction method aiming to reconstruct the heart from a single sweep 10 s acquisition. Methods: The authors introduce the 4D RecOnstructiOn using Spatial and TEmporal Regularization (short 4D ROOSTER) method, which reconstructs all cardiac phases at once, as a 3D + time volume. The algorithm alternates between a reconstruction step based on conjugate gradient and four regularization steps: enforcing positivity, averaging along time outside a motion mask that contains the heart and vessels, 3D spatial total variation minimization, and 1D temporal total variation minimization. Results: 4D ROOSTER recovers the different temporal representations of a moving Shepp and Logan phantom, and outperforms both ECG-gated simultaneous algebraic reconstruction technique and prior image constrained compressed sensing on a clinical case. It generates 3D + time reconstructions with sharp edges which can be used, for example, to estimate the patient's left ventricular ejection fraction. Conclusions: 4D ROOSTER can be applied for human cardiac C-arm CT, and potentially in other dynamic tomography areas. It can easily be adapted to other problems as regularization is decoupled from projection and back projection.
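Three of 4D ROOSTER's regularization steps can be sketched schematically on a toy (T, n_voxels) array. The conjugate-gradient data step and the 3D spatial TV step are omitted for brevity, and all shapes, weights and the crude subgradient TV solver are arbitrary assumptions, not the authors' implementation:

```python
import numpy as np

def positivity(f):
    """Enforce non-negative voxel values."""
    return np.clip(f, 0.0, None)

def average_outside_mask(f, motion_mask):
    """Outside the motion mask the anatomy is assumed static, so each
    voxel there is replaced by its temporal mean."""
    out = f.copy()
    out[:, ~motion_mask] = f.mean(axis=0)[~motion_mask]
    return out

def temporal_tv(f, weight=0.5, iters=100, step=0.05):
    """Crude (sub)gradient descent on 0.5||u - f||^2 + weight * TV_t(u),
    with TV taken along the time axis (axis 0)."""
    u = f.copy()
    for _ in range(iters):
        s = np.sign(np.diff(u, axis=0))
        g = np.zeros_like(u)
        g[:-1] -= s       # d|u_{t+1}-u_t| / du_t
        g[1:] += s        # d|u_t-u_{t-1}| / du_t
        u -= step * ((u - f) + weight * g)
    return u

rng = np.random.default_rng(2)
T, n = 8, 16
raw = rng.standard_normal((T, n))
mask = np.zeros(n, dtype=bool)
mask[:4] = True                      # only the first 4 voxels "move"

vol_pos = positivity(raw)
vol_avg = average_outside_mask(vol_pos, mask)
vol_tv = temporal_tv(vol_avg)
```

In the actual method these steps alternate with a conjugate-gradient data-fidelity step and a 3D spatial TV minimization, so that all cardiac phases are reconstructed jointly from a single 10 s sweep.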
NASA Astrophysics Data System (ADS)
Lang, Haitao; Liu, Liren; Yang, Qingguo
2007-04-01
In this paper, we propose a novel three-dimensional imaging method in which the object is captured by a coded camera array (CCA) and computationally reconstructed as a series of longitudinal layered surface images of the object. The distribution of cameras in the array, called the code pattern, is crucial for the fidelity of the reconstructed images when correlation decoding is used. We use the DIRECT global optimization algorithm to design code patterns that possess proper imaging properties. We have conducted preliminary experiments to verify and test the performance of the proposed method with a simple discontinuous object and a small-scale CCA comprising nine cameras. After procedures such as capturing, photograph integration, computational reconstruction and filtering, we obtain reconstructed longitudinal layered surface images of the object with a higher signal-to-noise ratio. The experimental results show that the proposed method is feasible. It is a promising method for fields such as remote sensing and machine vision.
A Method of 3D Measurement and Reconstruction for Cultural Relics in Museums
NASA Astrophysics Data System (ADS)
Zheng, S.; Zhou, Y.; Huang, R.; Zhou, L.; Xu, X.; Wang, C.
2012-07-01
Three-dimensional measurement and reconstruction during conservation and restoration of cultural relics have become an essential part of a modern museum's regular work. Although many kinds of methods, including laser scanning, computer vision and close-range photogrammetry, have been put forward, problems still exist, such as the trade-off between cost and quality of results, and between time and fineness of detail. Aimed at these problems, this paper proposes a structured-light-based method for 3D measurement and reconstruction of cultural relics in museums. Firstly, based on the structured-light principle, digitization hardware has been built, with whose help a dense point cloud of a cultural relic's surface can easily be acquired. To produce an accurate 3D geometric model from the point cloud data, multiple processing algorithms have been developed and corresponding software implemented, whose functions include blunder detection and removal, point cloud alignment and merging, and 3D mesh construction and simplification. Finally, high-resolution images are captured, these images are aligned with the 3D geometric model, and a realistic, accurate 3D model is constructed. Based on this method, a complete system including hardware and software has been built. Many kinds of cultural relics have been used to test the method, and the results demonstrate its high efficiency, high accuracy and ease of operation.
Jha, Abhinav K; Song, Na; Caffo, Brian; Frey, Eric C
2015-04-13
Quantitative single-photon emission computed tomography (SPECT) imaging is emerging as an important tool in clinical studies and biomedical research. There is thus a need for optimization and evaluation of systems and algorithms that are being developed for quantitative SPECT imaging. An appropriate objective method to evaluate these systems is by comparing their performance in the end task that is required in quantitative SPECT imaging, such as estimating the mean activity concentration in a volume of interest (VOI) in a patient image. This objective evaluation can be performed if the true value of the estimated parameter is known, i.e. we have a gold standard. However, very rarely is this gold standard known in human studies. Thus, no-gold-standard techniques to optimize and evaluate systems and algorithms in the absence of gold standard are required. In this work, we developed a no-gold-standard technique to objectively evaluate reconstruction methods used in quantitative SPECT when the parameter to be estimated is the mean activity concentration in a VOI. We studied the performance of the technique with realistic simulated image data generated from an object database consisting of five phantom anatomies with all possible combinations of five sets of organ uptakes, where each anatomy consisted of eight different organ VOIs. Results indicate that the method provided accurate ranking of the reconstruction methods. We also demonstrated the application of consistency checks to test the no-gold-standard output.
Han, Zhaolong; Li, Jiasong; Singh, Manmohan; Wu, Chen; Liu, Chih-hao; Wang, Shang; Idugboe, Rita; Raghunathan, Raksha; Sudheendran, Narendran; Aglyamov, Salavat R.; Twa, Michael D.; Larin, Kirill V.
2015-01-01
We present a systematic analysis of the accuracy of five different methods for extracting the biomechanical properties of soft samples using optical coherence elastography (OCE). OCE is an emerging noninvasive technique that allows assessing the biomechanical properties of tissues with micrometer spatial resolution. However, in order to accurately extract biomechanical properties from OCE measurements, application of a proper mechanical model is required. In this study, we utilize tissue-mimicking phantoms with controlled elastic properties and investigate the feasibility of four available methods for reconstructing elasticity (Young's modulus) based on OCE measurements of an air-pulse induced elastic wave. The approaches are based on the shear wave equation (SWE), the surface wave equation (SuWE), the Rayleigh-Lamb frequency equation (RLFE), and the finite element method (FEM). Elasticity values were compared with uniaxial mechanical testing. The results show that the RLFE and the FEM are more robust in quantitatively assessing elasticity than the other simplified models. This study provides a foundation and reference for reconstructing the biomechanical properties of tissues from OCE data, which is important for the further development of noninvasive elastography methods. PMID:25860076
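As a point of reference for the simplest of these models, under the shear wave equation with the usual incompressibility assumption the shear modulus is mu = rho * c^2 and Young's modulus is approximately E = 3 * rho * c^2. This is the textbook relation only, not the paper's RLFE or FEM treatment, and the phantom numbers below are illustrative assumptions:

```python
# Minimal sketch: shear-wave-equation estimate of Young's modulus for an
# incompressible, isotropic sample (E ~= 3 * mu, mu = rho * c^2).
def young_modulus_swe(density_kg_m3, wave_speed_m_s):
    mu = density_kg_m3 * wave_speed_m_s ** 2   # shear modulus, Pa
    return 3.0 * mu                            # Young's modulus, Pa

# Illustrative tissue-mimicking phantom: rho ~ 1000 kg/m^3, c ~ 2 m/s
E = young_modulus_swe(1000.0, 2.0)
print(E)  # 12000.0 Pa, i.e. 12 kPa
```

The paper's finding is precisely that such simplified relations are less robust than the RLFE or FEM for bounded samples, where guided-wave effects make the phase velocity frequency-dependent.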
Zhao, Weizhao; Ginsberg, M. (Cerebral Vascular Disease Research Center); Young, T.Y. (Dept. of Electrical and Computer Engineering)
1993-12-01
Quantitative autoradiography is a powerful radio-isotopic-imaging method for neuroscientists to study local cerebral blood flow and glucose-metabolic rate at rest, in response to physiologic activation of the visual, auditory, somatosensory, and motor systems, and in pathologic conditions. Most autoradiographic studies analyze glucose utilization and blood flow in two-dimensional (2-D) coronal sections. With modern digital computers and image-processing techniques, a large number of closely spaced coronal sections can be stacked appropriately to form a three-dimensional (3-D) image. 3-D autoradiography allows investigators to observe cerebral sections and surfaces from any viewing angle. A fundamental problem in 3-D reconstruction is the alignment (registration) of the coronal sections. A new alignment method based on disparity analysis is presented, which can overcome many of the difficulties encountered by previous methods. The disparity analysis method can deal with asymmetric, damaged, or tilted coronal sections under the same general framework, and it can be used to match coronal sections of different sizes and shapes. Experimental results on alignment and 3-D reconstruction are presented.
Optimized tomography methods for plasma emissivity reconstruction at the ASDEX Upgrade tokamak.
Odstrčil, T; Pütterich, T; Odstrčil, M; Gude, A; Igochine, V; Stroth, U
2016-12-01
The soft X-ray (SXR) emission provides valuable insight into processes happening inside of high-temperature plasmas. A standard method for deriving the local emissivity profiles of the plasma from the line-of-sight integrals measured by pinhole cameras is tomographic inversion. Such an inversion is challenging due to its ill-conditioned nature and because the reconstructed profiles depend not only on the quality of the measurements but also on the inversion algorithm used. This paper provides a detailed description of several tomography algorithms, which solve the inversion problem of Tikhonov regularization with linear computational complexity in the number of basis functions. The feasibility of combining these methods with the minimum Fisher information regularization is demonstrated, and various statistical methods for the optimal choice of the regularization parameter are investigated with emphasis on their reliability and robustness. Finally, the accuracy and the capability of the methods are demonstrated by reconstructions of experimental SXR profiles, featuring poloidally asymmetric impurity distributions as measured at the ASDEX Upgrade tokamak.
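As one generic example of a statistical rule for choosing the regularization parameter, the Morozov discrepancy principle picks the largest parameter whose data residual stays within the noise level. This is a sketch on a synthetic square ill-conditioned problem, not the paper's minimum-Fisher-information algorithms; all sizes and noise levels are arbitrary assumptions:

```python
import numpy as np

def tikhonov(A, b, lam):
    """Zeroth-order Tikhonov solution of A x ~ b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def discrepancy_lambda(A, b, noise_norm, lams):
    """Morozov discrepancy principle: largest lambda whose residual does
    not exceed the (assumed known) noise level."""
    for lam in sorted(lams, reverse=True):
        if np.linalg.norm(A @ tikhonov(A, b, lam) - b) <= noise_norm:
            return lam
    return min(lams)

# Synthetic ill-conditioned system with known noise.
rng = np.random.default_rng(3)
U, _ = np.linalg.qr(rng.standard_normal((20, 20)))
V, _ = np.linalg.qr(rng.standard_normal((20, 20)))
A = U @ np.diag(np.logspace(0, -5, 20)) @ V.T
x_true = rng.standard_normal(20)
noise = 0.01 * rng.standard_normal(20)
b = A @ x_true + noise

lam_star = discrepancy_lambda(A, b, np.linalg.norm(noise),
                              np.logspace(-10, 1, 60))
x_hat = tikhonov(A, b, lam_star)
```

In practice the noise norm is rarely known exactly, which is why the paper also examines data-driven alternatives such as cross-validation-type criteria and compares their reliability.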
Identifying gene networks underlying the neurobiology of ethanol and alcoholism.
Wolen, Aaron R; Miles, Michael F
2012-01-01
For complex disorders such as alcoholism, identifying the genes linked to these diseases and their specific roles is difficult. Traditional genetic approaches, such as genetic association studies (including genome-wide association studies) and analyses of quantitative trait loci (QTLs) in both humans and laboratory animals already have helped identify some candidate genes. However, because of technical obstacles, such as the small impact of any individual gene, these approaches only have limited effectiveness in identifying specific genes that contribute to complex diseases. The emerging field of systems biology, which allows for analyses of entire gene networks, may help researchers better elucidate the genetic basis of alcoholism, both in humans and in animal models. Such networks can be identified using approaches such as high-throughput molecular profiling (e.g., through microarray-based gene expression analyses) or strategies referred to as genetical genomics, such as the mapping of expression QTLs (eQTLs). Characterization of gene networks can shed light on the biological pathways underlying complex traits and provide the functional context for identifying those genes that contribute to disease development.
Ramanjappa, Thogata; Rao, C. Ramakrishna; Raju, A Krishnam; Muralidhar, KR
2011-01-01
Purpose Intracavitary brachytherapy (ICB) is a widely used technique in the treatment of cervical cancer. In our Institute, we use different reconstruction methods in the conventional planning procedure. The main aim of this study was to compare these methods using critical organ doses obtained in various treatment plans. There is a small difference between the recommendations of ICRU (International Commission on Radiation Units & Measurements) report 38 and the ABS (American Brachytherapy Society) in selecting the bladder dose point. The second objective of the study was to find the difference in bladder dose under both recommendations. Material and methods We selected two methods: the variable angle method (M1) and the orthogonal method (M2). Two orthogonal sets of radiographs were acquired using a conventional simulator. All four radiographs were used in M1 and only two radiographs were used in M2. Bladder and rectum doses were calculated using the ICRU-38 recommendations. For the maximum bladder dose reference point as per the ABS recommendation, 4 to 5 reference points were marked on the Foley balloon. Results 64% of plans showed a higher bladder dose and 50% of plans showed a higher rectum dose in M1 compared with M2. In both methods, many of the plans revealed a maximum bladder dose point other than the ICRU-38 bladder point. The variation exceeded 5% in a considerable number of plans. Conclusions We observed a difference in critical organ dose between the two studied methods. The variable angle reconstruction method has an advantage in identifying the catheters. It is useful to follow the ABS recommendation to find the maximum bladder dose. PMID:27853480
Rezac, K.; Klir, D.; Kubes, P.; Kravarik, J.
2009-01-21
We present the reconstruction of neutron energy spectra from time-of-flight signals. This technique is useful in experiments where the duration of neutron production is in the range of tens or hundreds of nanoseconds. The neutron signals were obtained with common fast plastic scintillation detectors sensitive to both hard X-rays and neutrons. The reconstruction is based on a Monte Carlo method, which we improved by the simultaneous use of neutron detectors placed on two opposite sides of the neutron source. Although reconstruction from detectors placed on two opposite sides is more difficult and somewhat less accurate (owing to several assumptions made when combining both sides of detection), it has some advantages. The most important is the smaller influence of scattered neutrons on the reconstruction. Finally, we describe the estimation of the error of this reconstruction.
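The kinematics underlying time-of-flight spectrometry can be sketched as below. This shows only the non-relativistic energy-from-flight-time relation that the reconstruction must invert; the Monte Carlo unfolding and two-sided detector treatment of the paper are not reproduced, and the 10 m flight path is an arbitrary example.

```python
M_N_EV = 939.565e6      # neutron rest-mass energy [eV]
C = 299_792_458.0       # speed of light [m/s]

def neutron_energy_ev(flight_path_m, tof_s, t_emit_s=0.0):
    """Non-relativistic neutron kinetic energy from a time-of-flight signal.
    t_emit_s models the emission time within the (nanosecond-scale) pulse,
    the main source of spectral blurring that the reconstruction must undo."""
    v = flight_path_m / (tof_s - t_emit_s)
    return 0.5 * M_N_EV * (v / C) ** 2

# A DD-fusion neutron (~2.45 MeV) covers 10 m in roughly 463 ns.
e = neutron_energy_ev(10.0, 463e-9)
```

Because the energy depends on `(tof - t_emit)` squared, an uncertainty of tens of nanoseconds in the emission time translates into a substantial energy spread, which is why a statistical reconstruction is needed at all.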
DNA-Binding Kinetics Determines the Mechanism of Noise-Induced Switching in Gene Networks.
Tse, Margaret J; Chu, Brian K; Roy, Mahua; Read, Elizabeth L
2015-10-20
Gene regulatory networks are multistable dynamical systems in which attractor states represent cell phenotypes. Spontaneous, noise-induced transitions between these states are thought to underlie critical cellular processes, including cell developmental fate decisions, phenotypic plasticity in fluctuating environments, and carcinogenesis. As such, there is increasing interest in the development of theoretical and computational approaches that can shed light on the dynamics of these stochastic state transitions in multistable gene networks. We applied a numerical rare-event sampling algorithm to study transition paths of spontaneous noise-induced switching for a ubiquitous gene regulatory network motif, the bistable toggle switch, in which two mutually repressive genes compete for dominant expression. We find that the method can efficiently uncover detailed switching mechanisms that involve fluctuations both in occupancies of DNA regulatory sites and copy numbers of protein products. In addition, we show that the rate parameters governing binding and unbinding of regulatory proteins to DNA strongly influence the switching mechanism. In a regime of slow DNA-binding/unbinding kinetics, spontaneous switching occurs relatively frequently and is driven primarily by fluctuations in DNA-site occupancies. In contrast, in a regime of fast DNA-binding/unbinding kinetics, switching occurs rarely and is driven by fluctuations in levels of expressed protein. Our results demonstrate how spontaneous cell phenotype transitions involve collective behavior of both regulatory proteins and DNA. Computational approaches capable of simulating dynamics over many system variables are thus well suited to exploring dynamic mechanisms in gene networks.
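A protein-only caricature of the bistable toggle switch can be simulated with a standard Gillespie-style scheme. This sketch deliberately omits the explicit DNA-binding states that are central to the paper's analysis, tracks only the jump chain (waiting times are not sampled), and uses arbitrary parameter values.

```python
import random

def gillespie_toggle(steps=20000, k=20.0, n=2, K=10.0, gamma=1.0, seed=1):
    """Jump chain of a protein-only toggle switch: production of each
    protein is Hill-repressed by the other; both degrade linearly."""
    rng = random.Random(seed)
    a = b = 0
    traj = []
    for _ in range(steps):
        rates = [k / (1 + (b / K) ** n),  # produce A (repressed by B)
                 k / (1 + (a / K) ** n),  # produce B (repressed by A)
                 gamma * a,               # degrade A
                 gamma * b]               # degrade B
        r = rng.random() * sum(rates)
        if r < rates[0]:
            a += 1
        elif r < rates[0] + rates[1]:
            b += 1
        elif r < rates[0] + rates[1] + rates[2]:
            a -= 1
        else:
            b -= 1
        traj.append((a, b))
    return traj

traj = gillespie_toggle()
```

In long runs one of the two proteins typically dominates for extended stretches, with rare switches between the two attractor states; resolving the mechanism of those rare switches is what the paper's rare-event sampling algorithm is designed for.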
He, Jinjing; Li, Song; Lv, Xiangwei; Wang, Liming; Liu, Qinlong
2015-01-01
Background The mouse model of arterialized orthotopic liver transplantation (AOLT) has played an important role in biomedical research. The available methods of sutured anastomosis for reconstruction of the hepatic artery are complicated, resulting in a high incidence of complications and failure. Therefore, we developed and evaluated a new model of AOLT in mice. Materials and methods Male inbred C57BL/6 mice were used in this study. A continuous suture approach was applied to connect the suprahepatic inferior vena cava (SHVC). The portal vein and infrahepatic inferior vena cava (IHVC) were connected according to the "two-cuff" method. The common bile duct was connected by a biliary stent. We used the stent (G3 group) or aortic trunk (G2 group) to reconstruct the hepatic artery. The patency of the hepatic artery was verified by transecting the artery near the graft after one week. The survival rate of the recipients and serum alanine aminotransferase (ALT) levels, hepatic pathologic alterations, apoptosis and necrosis were observed at one week postoperatively. Results The patency of the hepatic artery was verified in eight of ten mice in G3 and in six of ten mice in G2. The 7-day survival rate, extents of necrosis and apoptosis, and TGF-β levels were not significantly different among the three groups (P>0.05). However, the serum ALT levels and operation time were markedly lower in G3 compared with G2 or G1 (both P<0.05). Conclusions Reconstruction of the hepatic artery using a stent can be performed quickly with a high rate of patency. This model simplifies hepatic artery anastomosis and should be promoted in the field of biomedical research. PMID:26207367
Liu, Xueqi; Wang, Hong-Wei
2011-03-28
Single particle electron microscopy (EM) reconstruction has recently become a popular tool for obtaining the three-dimensional (3D) structure of large macromolecular complexes. Compared to X-ray crystallography, it has some unique advantages. First, single particle EM reconstruction does not require crystallizing the protein sample, which is the bottleneck in X-ray crystallography, especially for large macromolecular complexes. Second, it does not need large amounts of protein sample: compared with the milligrams of protein necessary for crystallization, single particle EM reconstruction only needs several microliters of protein solution at nanomolar concentrations, using the negative staining EM method. However, except for a few macromolecular assemblies with high symmetry, single particle EM is limited to relatively low resolution (worse than 1 nm) for many specimens, especially those without symmetry. This technique is also limited by the size of the molecules under study, i.e. about 100 kDa for negatively stained specimens and 300 kDa for frozen-hydrated specimens in general. For a new sample of unknown structure, we generally use a heavy metal solution to embed the molecules by negative staining. The specimen is then examined in a transmission electron microscope to take two-dimensional (2D) micrographs of the molecules. Ideally, the protein molecules have a homogeneous 3D structure but exhibit different orientations in the micrographs. These micrographs are digitized and processed in computers as "single particles". Using two-dimensional alignment and classification techniques, homogeneous molecules in the same views are clustered into classes. Their averages enhance the signal of the molecule's 2D shapes. Once the particles are assigned the proper relative orientations (Euler angles), the 2D particle images can be reconstructed into a 3D virtual volume. In single particle 3D reconstruction, an essential step is to correctly assign the proper orientation
A Dictionary Learning Method with Total Generalized Variation for MRI Reconstruction
Lu, Hongyang; Wei, Jingbo; Wang, Yuhao; Deng, Xiaohua
2016-01-01
Reconstructing images from noisy and incomplete measurements is always a challenge, especially for medical MR images with important details and features. This work proposes a novel dictionary learning model that integrates two sparse regularization methods: the total generalized variation (TGV) approach and adaptive dictionary learning (DL). In the proposed method, the TGV selectively regularizes different image regions at different levels, largely avoiding oil-painting artifacts. At the same time, the dictionary learning adaptively represents the image features sparsely and effectively recovers details of images. The proposed model is solved by a variable splitting technique and the alternating direction method of multipliers. Extensive simulation results demonstrate that the proposed method consistently recovers MR images efficiently and outperforms the current state-of-the-art approaches in terms of higher PSNR and lower HFEN values. PMID:27110235
Low dose dynamic CT myocardial perfusion imaging using a statistical iterative reconstruction method
Tao, Yinghua; Chen, Guang-Hong; Hacker, Timothy A.; Raval, Amish N.; Van Lysel, Michael S.; Speidel, Michael A.
2014-07-15
Purpose: Dynamic CT myocardial perfusion imaging has the potential to provide both functional and anatomical information regarding coronary artery stenosis. However, radiation dose can be potentially high due to repeated scanning of the same region. The purpose of this study is to investigate the use of statistical iterative reconstruction to improve parametric maps of myocardial perfusion derived from a low tube current dynamic CT acquisition. Methods: Four pigs underwent high (500 mA) and low (25 mA) dose dynamic CT myocardial perfusion scans with and without coronary occlusion. To delineate the affected myocardial territory, an N-13 ammonia PET perfusion scan was performed for each animal in each occlusion state. Filtered backprojection (FBP) reconstruction was first applied to all CT data sets. Then, a statistical iterative reconstruction (SIR) method was applied to data sets acquired at low dose. Image voxel noise was matched between the low dose SIR and high dose FBP reconstructions. CT perfusion maps were compared among the low dose FBP, low dose SIR and high dose FBP reconstructions. Numerical simulations of a dynamic CT scan at high and low dose (20:1 ratio) were performed to quantitatively evaluate SIR and FBP performance in terms of flow map accuracy, precision, dose efficiency, and spatial resolution. Results: For in vivo studies, the 500 mA FBP maps gave −88.4%, −96.0%, −76.7%, and −65.8% flow change in the occluded anterior region compared to the open-coronary scans (four animals). The percent changes in the 25 mA SIR maps were in good agreement, measuring −94.7%, −81.6%, −84.0%, and −72.2%. The 25 mA FBP maps gave unreliable flow measurements due to streaks caused by photon starvation (percent changes of +137.4%, +71.0%, −11.8%, and −3.5%). Agreement between 25 mA SIR and 500 mA FBP global flow was −9.7%, 8.8%, −3.1%, and 26.4%. The average variability of flow measurements in a nonoccluded region was 16.3%, 24.1%, and 937
A New Method for Reconstruction of Coronal Force-Free Magnetic Fields
NASA Astrophysics Data System (ADS)
Yi, Sibaek; Choe, Gwangson; Lim, Daye; Kim, Kap-Sung
2016-04-01
We present a new method for coronal magnetic field reconstruction based on vector magnetogram data. It is a variational method in the sense that the magnetic energy of the system decreases as the iteration proceeds. We employ a vector potential rather than the magnetic field vector in order to be free from the numerical divergence-B problem. Whereas most methods employing the three components of the magnetic field vector overspecify the boundary conditions, we impose only the normal components of the magnetic field and current density as the bottom boundary conditions. Previous methods using a vector potential need to adjust the bottom boundary conditions continually, but we fix the bottom boundary conditions once and for all. To minimize the effect of the uncertain lateral and top boundary conditions, we have adopted a nested grid system, which can accommodate as large a computational domain as needed without a corresponding increase in computational resources. At the top boundary, we have implemented the source surface condition. We have tested our method against the analytic solution of Low & Lou (1990) as a reference. When the solution is given only at the bottom boundary, our method excels in most figures of merit devised by Schrijver et al. (2006). We have also applied our method to the active region AR 11974, in which two M-class flares and a halo CME took place. Our reconstructed field shows three sigmoid structures in the lower corona and two interwound flux tubes in the upper corona. The former seem to cause the observed flares, and the latter seem to be responsible for the global eruption, i.e., the CME.
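The stated motivation for evolving a vector potential, freedom from the numerical divergence-B problem, can be demonstrated in a few lines: a field obtained as the discrete curl of any vector potential has vanishing discrete divergence, because finite-difference operators along different axes commute. This is a generic sketch, not the authors' code, and the random potential has no physical meaning.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 16, 16, 16))     # arbitrary vector potential A
d = lambda F, ax: np.gradient(F, axis=ax)    # finite-difference derivative

# B = curl A on the grid (axes: 0 = x, 1 = y, 2 = z)
Bx = d(A[2], 1) - d(A[1], 2)
By = d(A[0], 2) - d(A[2], 0)
Bz = d(A[1], 0) - d(A[0], 1)

# The discrete divergence of B cancels term-by-term because difference
# operators along distinct axes commute: div(curl A) = 0 to roundoff.
divB = d(Bx, 0) + d(By, 1) + d(Bz, 2)
max_divB = float(np.max(np.abs(divB)))
```

Methods that iterate on the three components of B directly have no such structural guarantee and must control the divergence error separately.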
NASA Astrophysics Data System (ADS)
Zhou, Huiyuan; Narayanan, Ram M.; Balasingham, Ilangko
2016-05-01
This paper addresses the detection and imaging of a small tumor underneath the inner surface of the human intestine. The proposed system consists of an around-body antenna array cooperating with a capsule carrying a radio frequency (RF) transmitter located within the human body. This paper presents a modified Levenberg-Marquardt algorithm to reconstruct the dielectric profile with this new system architecture. Each antenna around the body acts both as a transmitter and as a receiver for the remaining array elements. In addition, each antenna also acts as a receiver for the capsule transmitter inside the body, collecting additional data that cannot be obtained with a conventional system. In this paper, the synthetic data are collected from biological objects, simulated as circular phantoms using CST Studio software. For the imaging part, the Levenberg-Marquardt algorithm, a Newton-type inversion method, is chosen to reconstruct the dielectric profile of the objects. The imaging process involves a two-part innovation. The first part is the use of a dual-mesh method, which builds a dense mesh in the region around the transmitter and a coarse mesh over the remaining area. The second part is the modification of the Levenberg-Marquardt method to use the additional data collected from the inside transmitter. The results show that the new system with the new imaging algorithm can obtain high-resolution images even for small tumors.
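The basic Levenberg-Marquardt update that the paper modifies for its dual-mesh, capsule-augmented setting is a damped Gauss-Newton step. The sketch below applies the generic update to a toy exponential fit, not to a dielectric-profile inversion; the problem and all parameters are illustrative assumptions.

```python
import numpy as np

def lm_step(residual, jacobian, x, lam):
    """One Levenberg-Marquardt update: a Gauss-Newton step damped by lam.
    Large lam -> short, gradient-descent-like steps (robust far from the
    solution); small lam -> near Gauss-Newton (fast near the solution)."""
    r, J = residual(x), jacobian(x)
    H = J.T @ J + lam * np.eye(x.size)
    return x - np.linalg.solve(H, J.T @ r)

# Toy problem: recover p in y = exp(-p * t) from noiseless samples.
t = np.linspace(0.0, 2.0, 50)
y = np.exp(-1.5 * t)
res = lambda p: np.exp(-p[0] * t) - y
jac = lambda p: (-t * np.exp(-p[0] * t)).reshape(-1, 1)

p = np.array([0.5])
for _ in range(30):
    p = lm_step(res, jac, p, lam=1e-3)
```

In a practical solver `lam` is adapted between iterations (increased when a step fails to reduce the residual); it is held fixed here for brevity.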
Lee, Heung-Rae
1997-01-01
A three-dimensional image reconstruction method comprises treating the object of interest as a group of elements with a size that is determined by the resolution of the projection data, e.g., as determined by the size of each pixel. One of the projections is used as a reference projection. A fictitious object is arbitrarily defined that is constrained by such reference projection. The method modifies the known structure of the fictitious object by comparing and optimizing its four projections to those of the unknown structure of the real object and continues to iterate until the optimization is limited by the residual sum of background noise. The method is composed of several sub-processes that acquire four projections from the real data and the fictitious object: generate an arbitrary distribution to define the fictitious object, optimize the four projections, generate a new distribution for the fictitious object, and enhance the reconstructed image. The sub-process for the acquisition of the four projections from the input real data is simply the function of acquiring the four projections from the data of the transmitted intensity. The transmitted intensity represents the density distribution, that is, the distribution of absorption coefficients through the object.
Robust reconstruction of the rate constant distribution using the phase function method.
Zhou, Yajun; Zhuang, Xiaowei
2006-12-01
Many biological processes exhibit complex kinetic behavior that involves a nontrivial distribution of rate constants. Characterization of the rate constant distribution is often critical for mechanistic understandings of these processes. However, it is difficult to extract a rate constant distribution from data measured in the time domain. This is due to the numerical instability of the inverse Laplace transform, a long-standing mathematical challenge that has hampered data analysis in many disciplines. Here, we present a method that allows us to reconstruct the probability distribution of rate constants from decay data in the time domain, without fitting to specific trial functions or requiring any prior knowledge of the rate distribution. The robustness (numerical stability) of this reconstruction method is numerically illustrated by analyzing data with realistic noise and theoretically proved by the continuity of the transformations connecting the relevant function spaces. This development enhances our ability to characterize kinetics and dynamics of biological processes. We expect this method to be useful in a broad range of disciplines considering the prevalence of complex exponential decays in many experimental systems.
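The ill-posed inversion that motivates the phase function method can be posed as a discrete Laplace kernel acting on a grid of rate constants. The naive Tikhonov-regularized sketch below illustrates the problem setup only; it is not the authors' (more robust) method, it does not enforce nonnegativity of the recovered distribution, and the rate grid and weight are arbitrary assumptions.

```python
import numpy as np

def invert_decay(t, y, rates, lam):
    """Represent y(t) ~= sum_j w_j exp(-k_j t) on a fixed grid of rate
    constants and solve a Tikhonov-regularized least-squares problem for
    the weights w. The Laplace kernel K is severely ill-conditioned,
    which is exactly the numerical instability discussed in the abstract."""
    K = np.exp(-np.outer(t, rates))
    lhs = K.T @ K + lam * np.eye(rates.size)
    return np.linalg.solve(lhs, K.T @ y)

t = np.linspace(0.0, 10.0, 200)
rates = np.linspace(0.1, 5.0, 40)
y = 0.5 * np.exp(-0.5 * t) + 0.5 * np.exp(-2.0 * t)   # two-rate mixture
w = invert_decay(t, y, rates, lam=1e-4)
fit = np.exp(-np.outer(t, rates)) @ w                 # reconstructed decay
fit_err = float(np.max(np.abs(fit - y)))
```

Note that an excellent fit in the time domain (`fit_err` small) does not imply a stable estimate of `w` itself; distinguishing the two is the crux of the inverse Laplace problem.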
A Method for Accurate Reconstructions of the Upper Airway Using Magnetic Resonance Images
Xiong, Huahui; Huang, Xiaoqing; Li, Yong; Li, Jianhong; Xian, Junfang; Huang, Yaqi
2015-01-01
Objective The purpose of this study is to provide an optimized method to reconstruct the structure of the upper airway (UA) based on magnetic resonance imaging (MRI) that can faithfully show the anatomical structure with a smooth surface without artificial modifications. Methods MRI was performed on the head and neck of a healthy young male participant in the axial, coronal and sagittal planes to acquire images of the UA. The level set method was used to segment the boundary of the UA. The boundaries in the three scanning planes were registered according to the positions of crossing points and anatomical characteristics using a Matlab program. Finally, the three-dimensional (3D) NURBS (Non-Uniform Rational B-Splines) surface of the UA was constructed using the registered boundaries in all three different planes. Results A smooth 3D structure of the UA was constructed, which captured the anatomical features from the three anatomical planes, particularly the location of the anterior wall of the nasopharynx. The volume and area of every cross section of the UA can be calculated from the constructed 3D model of UA. Conclusions A complete scheme of reconstruction of the UA was proposed, which can be used to measure and evaluate the 3D upper airway accurately. PMID:26066461
Nien, Hung; Fessler, Jeffrey A.
2014-01-01
Augmented Lagrangian (AL) methods for solving convex optimization problems with linear constraints are attractive for imaging applications with composite cost functions due to the empirical fast convergence rate under weak conditions. However, for problems such as X-ray computed tomography (CT) image reconstruction, where the inner least-squares problem is challenging and requires iterations, AL methods can be slow. This paper focuses on solving regularized (weighted) least-squares problems using a linearized variant of AL methods that replaces the quadratic AL penalty term in the scaled augmented Lagrangian with its separable quadratic surrogate (SQS) function, leading to a simpler ordered-subsets (OS) accelerable splitting-based algorithm, OS-LALM. To further accelerate the proposed algorithm, we use a second-order recursive system analysis to design a deterministic downward continuation approach that avoids tedious parameter tuning and provides fast convergence. Experimental results show that the proposed algorithm significantly accelerates the convergence of X-ray CT image reconstruction with negligible overhead and can reduce OS artifacts when using many subsets. PMID:25248178
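The ordered-subsets idea that OS-LALM builds on can be illustrated on a plain weighted least-squares cost. The sketch below is not OS-LALM itself (no augmented Lagrangian, no separable quadratic surrogate); it only shows the subset-scaled gradient scheme, with a toy dense matrix standing in for a CT system model.

```python
import numpy as np

def os_wls(A, w, y, n_subsets, n_passes, step):
    """Ordered-subsets gradient descent for 0.5 * sum_i w_i (y_i - (A x)_i)^2.
    Each inner update uses one subset of rows, with the gradient scaled by
    n_subsets so it approximates the full gradient; one pass over all
    subsets costs about one full gradient but applies n_subsets updates."""
    m, n = A.shape
    x = np.zeros(n)
    subsets = [np.arange(s, m, n_subsets) for s in range(n_subsets)]
    for _ in range(n_passes):
        for idx in subsets:
            g = n_subsets * A[idx].T @ (w[idx] * (A[idx] @ x - y[idx]))
            x -= step * g
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 8))
x_true = rng.standard_normal(8)
y = A @ x_true                       # consistent (noiseless) data
w = np.ones(64)
L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the full gradient
x_hat = os_wls(A, w, y, n_subsets=4, n_passes=200, step=1.0 / (4 * L))
```

With inconsistent (noisy) data, plain OS updates only approach a limit cycle rather than the minimizer; taming that behavior with many subsets is one of the contributions the abstract claims for OS-LALM.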
Vehmeijer, Maarten; van Eijnatten, Maureen; Liberton, Niels; Wolff, Jan
2016-08-01
Fractures of the orbital floor are often a result of traffic accidents or interpersonal violence. To date, numerous materials and methods have been used to reconstruct the orbital floor. However, simple and cost-effective 3-dimensional (3D) printing technologies for the treatment of orbital floor fractures are still sought. This study describes a simple, precise, cost-effective method of treating orbital fractures using 3D printing technologies in combination with autologous bone. Enophthalmos and diplopia developed in a 64-year-old female patient with an orbital floor fracture. A virtual 3D model of the fracture site was generated from computed tomography images of the patient. The fracture was virtually closed using spline interpolation. Furthermore, a virtual individualized mold of the defect site was created, which was manufactured using an inkjet printer. The tangible mold was subsequently used during surgery to sculpture an individualized autologous orbital floor implant. Virtual reconstruction of the orbital floor and the resulting mold enhanced the overall accuracy and efficiency of the surgical procedure. The sculptured autologous orbital floor implant showed an excellent fit in vivo. The combination of virtual planning and 3D printing offers an accurate and cost-effective treatment method for orbital floor fractures.
Wang, Shaobu; Lu, Shuai; Zhou, Ning; Lin, Guang; Elizondo, Marcelo A.; Pai, M. A.
2014-09-04
In interconnected power systems, dynamic model reduction can be applied to generators outside the area of interest to mitigate the computational cost of transient stability studies. This paper presents an approach for deriving the reduced dynamic model of the external area based on dynamic response measurements, which comprises three steps: dynamic-feature extraction, attribution, and reconstruction (DEAR). In the DEAR approach, a feature extraction technique, such as singular value decomposition (SVD), is applied to the measured generator dynamics after a disturbance. Characteristic generators are then identified in the feature-attribution step as those matching the extracted dynamic features with the highest similarity, forming a suboptimal 'basis' of the system dynamics. In the reconstruction step, generator state variables such as rotor angles and voltage magnitudes are approximated by a linear combination of the characteristic generators, resulting in a quasi-nonlinear reduced model of the original external system. The network model is unchanged in the DEAR method. Tests on several IEEE standard systems show that the proposed method achieves a better reduction ratio and smaller response errors than traditional coherency aggregation methods.
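The extraction and reconstruction steps of a DEAR-like reduction can be sketched with an SVD of synthetic response data. Selecting characteristic generators by a simple weight score on the right singular vectors is an assumption standing in for the paper's attribution step, and the two-mode synthetic dynamics are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 200)
# Synthetic post-disturbance responses: two damped oscillatory modes,
# randomly mixed across 12 "generators" (columns) -> two coherent groups.
modes = np.column_stack([np.exp(-0.3 * t) * np.sin(4.0 * t),
                         np.exp(-0.5 * t) * np.sin(7.0 * t)])
X = modes @ rng.standard_normal((2, 12))       # 200 samples x 12 generators

# Extraction: SVD of the measured dynamics.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = 2                                          # dominant dynamic features

# Attribution (simplified): generators with the largest weight in the
# top-r right singular vectors serve as the "characteristic" generators.
lead = np.argsort(-np.abs(Vt[:r]).sum(axis=0))[:r]

# Reconstruction: express every generator's trajectory as a linear
# combination of the characteristic generators.
B = X[:, lead]
coef, *_ = np.linalg.lstsq(B, X, rcond=None)
rel_err = np.linalg.norm(X - B @ coef) / np.linalg.norm(X)
```

Because the synthetic data have exact rank 2, two well-chosen characteristic generators reconstruct all twelve trajectories essentially exactly; on real measurements the residual `rel_err` quantifies the fidelity of the reduced model.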
Xia, Yidong; Luo, Hong; Frisbey, Megan; Nourgaliev, Robert
2014-07-01
A set of implicit methods is proposed for a third-order hierarchical WENO reconstructed discontinuous Galerkin method for compressible flows on 3D hybrid grids. An attractive feature of these methods is the use of a Jacobian matrix based on the P1 element approximation, resulting in a large reduction in memory requirements compared with DG(P2). Three approaches -- analytical derivation, divided differencing, and automatic differentiation (AD) -- are presented to construct the Jacobian matrix, of which the AD approach shows the best robustness. A variety of compressible flow problems are computed to demonstrate the fast convergence of the implemented flow solver. Furthermore, an SPMD (single program, multiple data) programming paradigm based on MPI is proposed to achieve parallelism. The numerical results on complex geometries indicate that this low-storage implicit method can provide a viable and attractive DG solution for complicated flows of practical importance.
Comparison of Short-term Complications Between 2 Methods of Coracoclavicular Ligament Reconstruction
Rush, Lane N.; Lake, Nicholas; Stiefel, Eric C.; Hobgood, Edward R.; Ramsey, J. Randall; O’Brien, Michael J.; Field, Larry D.; Savoie, Felix H.
2016-01-01
Background: Numerous techniques have been used to treat acromioclavicular (AC) joint dislocation, with anatomic reconstruction of the coracoclavicular (CC) ligaments becoming a popular method of fixation. Anatomic CC ligament reconstruction is commonly performed with cortical fixation buttons (CFBs) or tendon grafts (TGs). Purpose: To report and compare short-term complications associated with AC joint stabilization procedures using CFBs or TGs. Study Design: Cohort study; Level of evidence, 3. Methods: We conducted a retrospective review of the operative treatment of AC joint injuries between April 2007 and January 2013 at 2 institutions. Thirty-eight patients who had undergone a procedure for AC joint instability were evaluated. In these 38 patients with a mean age of 36.2 years, 18 shoulders underwent fixation using the CFB technique and 20 shoulders underwent reconstruction using the TG technique. Results: The overall complication rate was 42.1% (16/38). There were 11 complications in the 18 patients in the CFB group (61.1%), including 7 construct failures resulting in a loss of reduction. The most common mode of failure was suture breakage (n = 3), followed by button migration (n = 2) and coracoid fracture (n = 2). There were 5 complications in the TG group (25%), including 3 cases of asymptomatic subluxation, 1 symptomatic suture granuloma, and 1 superficial infection. There were no instances of construct failure seen in TG fixations. CFB fixation was found to have a statistically significant increase in complications (P = .0243) and construct failure (P = .002) compared with TG fixation. Conclusion: CFB fixation was associated with a higher rate of failure and higher rate of early complications when compared with TG fixation. PMID:27504468
NASA Astrophysics Data System (ADS)
Ma, Xibo; Tian, Jie; Zhang, Bo; Zhang, Xing; Xue, Zhenwen; Dong, Di; Han, Dong
2011-03-01
Among the many optical molecular imaging modalities, bioluminescence imaging (BLI) has found increasingly wide application in tumor detection and in the evaluation of pharmacodynamics, toxicity and pharmacokinetics, owing to its noninvasive detection ability at the molecular and cellular level, high sensitivity and low cost in comparison with other imaging technologies. However, BLI cannot present the accurate location and intensity of inner bioluminescence sources, such as those in the bone, liver or lung. Bioluminescent tomography (BLT) shows its advantage in determining the bioluminescence source distribution inside a small animal or phantom. Considering the deficiency of two-dimensional imaging modalities, we developed three-dimensional tomography to reconstruct the bioluminescence source distribution in transgenic mOC-Luc mouse bone from boundary measured data. In this paper, to study osteocalcin (OC) accumulation in transgenic mOC-Luc mouse bone, a BLT reconstruction method based on a multilevel adaptive finite element (FEM) algorithm was used for localizing and quantifying multiple bioluminescence sources. Optical and anatomical information of the tissues is incorporated as a priori knowledge in this method, which can reduce the ill-posedness of BLT. The data were acquired by a dual-modality BLT and micro-CT prototype system developed by our group. Through temperature control and absolute intensity calibration, a relatively accurate intensity can be calculated. The location of the OC accumulation was reconstructed, which was consistent with the principle of bone differentiation. This result was also verified by an ex vivo experiment in a black 96-well plate using the BLI system and a chemiluminescence apparatus.
Aibar, Sara; Fontanillo, Celia; Droste, Conrad; De Las Rivas, Javier
2015-01-01
Summary: Functional Gene Networks (FGNet) is an R/Bioconductor package that generates gene networks derived from the results of functional enrichment analysis (FEA) and annotation clustering. The sets of genes enriched with specific biological terms (obtained from a FEA platform) are transformed into a network by establishing links between genes based on common functional annotations and common clusters. The network provides a new view of FEA results revealing gene modules with similar functions and genes that are related to multiple functions. In addition to building the functional network, FGNet analyses the similarity between the groups of genes and provides a distance heatmap and a bipartite network of functionally overlapping genes. The application includes an interface to directly perform FEA queries using different external tools: DAVID, GeneTerm Linker, TopGO or GAGE; and a graphical interface to facilitate the use. Availability and implementation: FGNet is available in Bioconductor, including a tutorial. URL: http://bioconductor.org/packages/release/bioc/html/FGNet.html Contact: jrivas@usal.es Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25600944
Miyata, Y.; Suzuki, T.; Takechi, M.; Urano, H.; Ide, S.
2015-07-15
For the purpose of stable plasma equilibrium control and detailed analysis, it is essential to reconstruct an accurate plasma boundary on the poloidal cross section in tokamak devices. The Cauchy condition surface (CCS) method is a numerical approach for calculating the spatial distribution of the magnetic flux outside a hypothetical surface and reconstructing the plasma boundary from the magnetic measurements located outside the plasma. The accuracy of the plasma shape reconstruction has been assessed by comparing the CCS method and an equilibrium calculation in JT-60SA with a high elongation and triangularity of plasma shape. The CCS, on which both Dirichlet and Neumann conditions are unknown, is defined as a hypothetical surface located inside the real plasma region. The accuracy of the plasma shape reconstruction is sensitive to the CCS free parameters such as the number of unknown parameters and the shape in JT-60SA. It is found that the optimum number of unknown parameters and the size of the CCS that minimizes errors in the reconstructed plasma shape are in proportion to the plasma size. Furthermore, it is shown that the accuracy of the plasma shape reconstruction is greatly improved using the optimum number of unknown parameters and shape of the CCS, and the reachable reconstruction errors in plasma shape and locations of strike points are within the target ranges in JT-60SA.
Mueller, K; Yagel, R; Wheller, J J
1999-06-01
This paper examines the use of the algebraic reconstruction technique (ART) and related techniques to reconstruct 3-D objects from a relatively sparse set of cone-beam projections. Although ART has been widely used for cone-beam reconstruction of high-contrast objects, e.g., in computed angiography, the work presented here explores the more challenging low-contrast case, which represents a little-investigated scenario for ART. Preliminary experiments indicate that for cone angles greater than 20 degrees, traditional ART produces reconstructions with strong aliasing artifacts. These artifacts are in addition to the usual off-midplane inaccuracies of cone-beam tomography with planar orbits. We find that the source of these artifacts is the nonuniform reconstruction grid sampling and correction by the cone-beam rays during the ART projection-backprojection procedure. A new method to compute the weights of the reconstruction matrix is devised, which replaces the usual constant-size interpolation filter by one whose size and amplitude depend on the source-voxel distance. This enables the generation of reconstructions free of cone-beam aliasing artifacts at little extra cost. An alternative analysis reveals that simultaneous ART (SART) also produces reconstructions without aliasing artifacts, however, at greater computational cost. Finally, we thoroughly investigate the influence of various ART parameters, such as volume initialization, relaxation coefficient lambda, correction scheme, number of iterations, and noise in the projection data on reconstruction quality. We find that ART typically requires only three iterations to render satisfactory reconstruction results.
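As a minimal sketch of the ART family this abstract builds on, the classic Kaczmarz-style row update with relaxation coefficient lambda can be written as below. This is the textbook update, not the authors' distance-dependent cone-beam weighting; the tiny 2x2 "volume" and its four rays are hypothetical.

```python
import numpy as np

def art_reconstruct(A, b, lam=0.2, iters=3):
    """Kaczmarz-style ART: sweep the rays, projecting x onto each ray equation.

    A: (n_rays, n_voxels) system matrix of interpolation weights
    b: (n_rays,) measured projections
    lam: relaxation coefficient (the abstract's lambda)
    """
    x = np.zeros(A.shape[1])            # volume initialized to zero
    row_norms = (A ** 2).sum(axis=1)
    for _ in range(iters):              # the abstract reports ~3 sweeps suffice in practice
        for i in range(A.shape[0]):
            if row_norms[i] == 0:
                continue
            resid = b[i] - A[i] @ x     # mismatch along ray i
            x += lam * resid / row_norms[i] * A[i]
    return x

# Hypothetical 2x2 "volume" observed by 4 rays (2 row sums + 2 column sums)
A = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1]], dtype=float)
truth = np.array([1.0, 2.0, 3.0, 4.0])
b = A @ truth
x = art_reconstruct(A, b, lam=0.5, iters=200)
print(float(np.abs(A @ x - b).max()))   # residual shrinks toward zero
```

Because the system is consistent, the sweeps converge to a solution whose projections match the data; the relaxation coefficient trades convergence speed against noise sensitivity, as the parameter study in the paper investigates.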
Community Phylogenetics: Assessing Tree Reconstruction Methods and the Utility of DNA Barcodes.
Boyle, Elizabeth E; Adamowicz, Sarah J
2015-01-01
Studies examining phylogenetic community structure have become increasingly prevalent, yet little attention has been given to the influence of the input phylogeny on metrics that describe phylogenetic patterns of co-occurrence. Here, we examine the influence of branch length, tree reconstruction method, and amount of sequence data on measures of phylogenetic community structure, as well as the phylogenetic signal (Pagel's λ) in morphological traits, using Trichoptera larval communities from Churchill, Manitoba, Canada. We find that model-based tree reconstruction methods and the use of a backbone family-level phylogeny improve estimations of phylogenetic community structure. In addition, trees built using the barcode region of cytochrome c oxidase subunit I (COI) alone accurately predict metrics of phylogenetic community structure obtained from a multi-gene phylogeny. Input tree did not alter overall conclusions drawn for phylogenetic signal, as significant phylogenetic structure was detected in two body size traits across input trees. As the discipline of community phylogenetics continues to expand, it is important to investigate the best approaches to accurately estimate patterns. Our results suggest that emerging large datasets of DNA barcode sequences provide a vast resource for studying the structure of biological communities.
NASA Astrophysics Data System (ADS)
Pan, Qi; Liu, De-Jun; Guo, Zhi-Yong; Fang, Hua-Feng; Feng, Mu-Qun
2016-06-01
In the model of a horizontal straight pipeline of finite length, the segmentation of the pipeline elements is a significant factor in the accuracy and rapidity of the forward modeling and inversion processes, but the existing pipeline segmentation method is very time-consuming. This paper proposes a section segmentation method to study the characteristics of pipeline magnetic anomalies—and the effect of model parameters on these magnetic anomalies—as a way to enhance computational performance and accelerate the convergence process of the inversion. Forward models using the piece segmentation method and section segmentation method based on magnetic dipole reconstruction (MDR) are established for comparison. The results show that the magnetic anomalies calculated by these two segmentation methods are almost the same regardless of different measuring heights and variations of the inclination and declination of the pipeline. In the optimized inversion procedure the results of the simulation data calculated by these two methods agree with the synthetic data from the original model, and the inversion accuracies of the burial depths of the two methods are approximately equal. The proposed method is more computationally efficient than the piece segmentation method—in other words, the section segmentation method can meet the requirements for precision in the detection of pipelines by magnetic anomalies and reduce the computation time of the whole process.
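The forward model underlying magnetic dipole reconstruction can be sketched with the textbook point-dipole formula: the pipeline is discretized into segments, each represented by a dipole, and the anomaly at an observation point is the sum of the segment fields. The pipeline geometry, magnetization, and survey line below are hypothetical, not the paper's models.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def dipole_field(r_obs, r_dip, m):
    """Magnetic field of a point dipole with moment m (A*m^2) at r_dip,
    evaluated at r_obs, via the standard dipole formula."""
    r = np.asarray(r_obs, float) - np.asarray(r_dip, float)
    d = np.linalg.norm(r)
    rhat = r / d
    return MU0 / (4 * np.pi * d ** 3) * (3 * np.dot(m, rhat) * rhat - m)

def pipeline_anomaly(x_obs, height, seg_centers, m_per_seg):
    """Total anomaly along a survey line at a given height above a pipeline
    discretized into dipole segments (MDR-style forward model)."""
    B = np.zeros((len(x_obs), 3))
    for k, xo in enumerate(x_obs):
        obs = np.array([xo, 0.0, height])
        for c in seg_centers:
            B[k] += dipole_field(obs, c, m_per_seg)
    return B

# hypothetical pipeline along x from -5 m to 5 m, magnetized along its axis,
# surveyed 2 m above on a 41-point line
centers = [np.array([xc, 0.0, 0.0]) for xc in np.linspace(-5, 5, 21)]
m = np.array([10.0, 0.0, 0.0])
B = pipeline_anomaly(np.linspace(-10, 10, 41), 2.0, centers, m)
print(B.shape)
```

Refining the segmentation (more centers per unit length) converges to the field of a continuously magnetized pipe, which is why the piece vs. section segmentation comparison in the paper can agree so closely while differing in cost.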
High resolution image reconstruction method for a double-plane PET system with changeable spacing
NASA Astrophysics Data System (ADS)
Gu, Xiao-Yue; Zhou, Wei; Li, Lin; Wei, Long; Yin, Peng-Fei; Shang, Lei-Min; Yun, Ming-Kai; Lu, Zhen-Rui; Huang, Xian-Chao
2016-05-01
Breast-dedicated positron emission tomography (PET) imaging techniques have been developed in recent years. Their capacities to detect millimeter-sized breast tumors have been the subject of many studies. Some of them have been confirmed with good results in clinical applications. With regard to biopsy application, a double-plane detector arrangement is practicable, as it offers the convenience of breast immobilization. However, the serious blurring effect of the double-plane PET, with changeable spacing for different breast sizes, should be studied. We investigated a high resolution reconstruction method applicable for a double-plane PET. The distance between the detector planes is changeable. Geometric and blurring components were calculated in real-time for different detector distances, and accurate geometric sensitivity was obtained with a new tube area model. Resolution recovery was achieved by estimating blurring effects derived from simulated single gamma response information. The results showed that the new geometric modeling gave a more finite and smooth sensitivity weight in the double-plane PET. The blurring component yielded contrast recovery levels that could not be reached without blurring modeling, and improved visual recovery of the smallest spheres and better delineation of the structures in the reconstructed images were achieved with the blurring component. Statistical noise had lower variance at the voxel level with blurring modeling at matched resolution, compared to without blurring modeling. In distance-changeable double-plane PET, finite resolution modeling during reconstruction achieved resolution recovery, without noise amplification. Supported by Knowledge Innovation Project of The Chinese Academy of Sciences (KJCX2-EW-N06)
Reconstructive and rehabilitating methods in patients with dysphagia and nutritional disturbances
Motsch, Christiane
2005-01-01
The range of potential therapeutic approaches to oropharyngeal dysphagia is as broad as its causes are diverse. In the past two decades, methods of plastic-reconstructive surgery, in particular microsurgically revascularised tissue transfer, and minimally invasive endoscopic techniques of every kind have substantially added to the portfolio of reconstructive surgery available for rehabilitating deglutition. Numerically, reconstruction of the pharyngolaryngeal tract following resection of squamous-cell carcinomas of the oral cavity, pharynx and larynx has been gaining ground, as has functional deglutitive therapy performed to treat post-therapeutic sequelae. Dysphagia and malnutrition are closely interrelated. Every third patient hospitalised in Germany suffers from malnutrition, and ENT tumour patients are no exception. In patients presenting with advancing malnutrition, mortality, morbidity and the individual complication rate have all been observed to increase; longer hospital stays, reduced treatment tolerance, diminished immunocompetence, and impaired general physical and psychological condition have also been noted, and thus a less favourable prognosis on the whole. Therefore, in oncological patients, dietotherapy will have to assume a key role in supportive treatment. Precisely for patients who are expected to go through a long process of deglutitive rehabilitation, enteral nutrition through percutaneous endoscopically controlled gastrostomy (PEG) performed at an early stage can provide useful and efficient support to the therapeutic effort. Nutrition and oncology are mutually influencing fields in which, sooner or later, a change in paradigm will have to take place, i.e. a gradual switch from therapy to prevention. While cancer causes malnutrition, feasible changes in feeding and nutrition-associated habits, including habitual drinking and smoking, might lower the incidence of cancer worldwide by 30
Effects of bidirectional regulation on noises in gene networks.
Zheng, Xiudeng; Tao, Yi
2010-03-14
To investigate the effects of bidirectional regulation on the noise in protein concentration, a simple theoretical three-gene network model is considered. The basic idea behind this model comes from Paulsson's proposition (J. Paulsson, Phys. Life Rev. 2005, 2, 157-175), in which the synthesis and degradation of the mRNA species corresponding to a target protein are regulated, directly and indirectly, by a certain sigma-factor, so that a random increase in the concentration of the sigma-factor increases both the synthesis and degradation rates of the mRNA species (bidirectional regulation). Using the standard Omega-expansion technique (linear noise approximation) and Monte Carlo simulation, our main results show clearly that, for the steady-state statistics, the effects of the noise of the sigma-factor on the stochastic fluctuation of the target protein can partially cancel out.
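The cancellation mechanism can be seen in a minimal Gillespie-type stochastic simulation of a bidirectionally regulated motif. All rate constants below are hypothetical, not the paper's: the point is only that both the synthesis propensity (k_s*S) and the degradation propensity (k_d*S*M) of the target species M scale with the sigma-factor S, so the balance point M* = k_s/k_d (20 here) is independent of S and fluctuations in S partially cancel in M.

```python
import random

def gillespie_bidirectional(t_end=200.0, seed=1):
    """Gillespie SSA for a toy bidirectional-regulation motif: a sigma-factor S
    drives BOTH synthesis and degradation of a target species M
    (hypothetical rate constants)."""
    random.seed(seed)
    S, M = 10, 20
    t = 0.0
    samples = []
    while t < t_end:
        rates = [
            5.0,            # S production (constitutive)
            0.5 * S,        # S decay  -> steady state S* = 10
            1.0 * S,        # M synthesis, proportional to S
            0.05 * S * M,   # M degradation, ALSO proportional to S
        ]
        total = sum(rates)
        t += random.expovariate(total)          # time to next reaction
        r = random.uniform(0.0, total)          # pick which reaction fires
        if r < rates[0]:
            S += 1
        elif r < rates[0] + rates[1]:
            S -= 1
        elif r < rates[0] + rates[1] + rates[2]:
            M += 1
        else:
            M -= 1
        samples.append(M)
    return samples

traj = gillespie_bidirectional()
print(round(sum(traj) / len(traj)))   # hovers near k_s/k_d = 20
```

Removing the S-dependence from the degradation channel (e.g. a constant rate 0.5*M) makes M inherit the sigma-factor's fluctuations, which is the contrast the linear noise approximation in the paper quantifies.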
NASA Astrophysics Data System (ADS)
Kedelty, Dominic; Ballesteros, Carlos; Chan, Ronald; Herrmann, Marcus
2016-11-01
To accurately predict the interaction of the interface with shocks and rarefaction waves, sharp interface methods maintaining the interface as a discontinuity are preferable to capturing methods that tend to smear the interface. We present a hybrid capturing/tracking method (Smiljanovski et al., 1997) that couples an unsplit geometric volume tracking method (Owkes & Desjardins, 2014) to a finite volume wave propagation scheme (LeVeque, 2010). In cells containing the phase interface, states on either side are reconstructed using the jump conditions across the interface, the geometric information of the volume tracking method, and the cell averages of the finite volume method. Cell face Riemann problems are then solved within each phase separately, resulting in area fraction weighted fluxes that update the cell averages directly. This, together with a linearization of the wave interaction across cell faces avoids the small cut-cell time step limitation of typical tracking methods. However, the interaction of waves with the phase interface cannot be linearized and is solved using either exact or approximate two-phase Riemann solvers with arbitrary jumps in the equation of state. Several test cases highlight the capabilities of the new method. Support by the 2016 CTR Summer program at Stanford University and Taitech, Inc. under subcontract TS15-16-02-005 is gratefully acknowledged.
NASA Technical Reports Server (NTRS)
Spiegel, Seth C.; Huynh, H. T.; DeBonis, James R.
2015-01-01
High-order methods are quickly becoming popular for turbulent flows as the amount of computer processing power increases. The flux reconstruction (FR) method presents a unifying framework for a wide class of high-order methods including discontinuous Galerkin (DG), Spectral Difference (SD), and Spectral Volume (SV). It offers a simple, efficient, and easy way to implement nodal-based methods that are derived via the differential form of the governing equations. Whereas high-order methods have enjoyed recent success, they have been known to introduce numerical instabilities due to polynomial aliasing when applied to under-resolved nonlinear problems. Aliasing errors have been extensively studied in reference to DG methods; however, their study regarding FR methods has mostly been limited to the selection of the nodal points used within each cell. Here, we extend some of the de-aliasing techniques used for DG methods, primarily over-integration, to the FR framework. Our results show that over-integration does remove aliasing errors but may not remove all instabilities caused by insufficient resolution (for FR as well as DG).
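The aliasing mechanism and its over-integration cure can be shown in a one-dimensional modal sketch, independent of the full FR machinery (the modal solution and quadrature sizes below are illustrative): projecting the nonlinear flux u^2 of a degree-2 Legendre solution onto the retained modes with the nominal quadrature rule corrupts the highest mode, while a richer rule integrates the higher-degree products exactly.

```python
import numpy as np
from numpy.polynomial import legendre as L

def project_flux(u_coef, n_quad, n_modes):
    """L2-project the nonlinear 'flux' f(u) = u^2 onto the first n_modes
    Legendre modes using an n_quad-point Gauss rule on [-1, 1]."""
    x, w = L.leggauss(n_quad)
    f = L.legval(x, u_coef) ** 2
    coef = np.empty(n_modes)
    for k in range(n_modes):
        pk = L.legval(x, [0] * k + [1])     # Legendre polynomial P_k
        norm = 2.0 / (2 * k + 1)            # ||P_k||^2 on [-1, 1]
        coef[k] = (w * f * pk).sum() / norm
    return coef

u = np.array([0.0, 1.0, 0.5])      # u = P1 + 0.5*P2, so u^2 has degree 4
exact = project_flux(u, 6, 3)      # 6-point rule: exact for all integrands here
aliased = project_flux(u, 3, 3)    # nominal 3-point rule: under-integrates P2 mode
print(float(np.abs(aliased - exact).max()))   # nonzero aliasing error
```

The low modes agree (their integrands stay within the 3-point rule's exactness), but the highest retained mode absorbs the unresolved polynomial content, which is precisely the aliasing instability that over-integration removes.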
NASA Astrophysics Data System (ADS)
Szalay, Viktor
1999-11-01
The reconstruction of a function from knowing only its values on a finite set of grid points, that is the construction of an analytical approximation reproducing the function with good accuracy everywhere within the sampled volume, is an important problem in all branches of science. One such problem in chemical physics is the determination of an analytical representation of Born-Oppenheimer potential energy surfaces from ab initio calculations, which give the value of the potential at a finite set of grid points in configuration space. This article describes the rudiments of iterative and direct methods of potential surface reconstruction. The major new results are the derivation, numerical demonstration, and interpretation of a reconstruction formula. The reconstruction formula derived approximates the unknown function, say V, by a linear combination of functions obtained by discretizing the continuous distributed approximating functional (DAF) approximation of V over the grid of sampling. The simplest of contracted and ordinary Hermite-DAFs are shown to be sufficient for reconstruction. The linear combination coefficients can be obtained either iteratively or directly by finding the minimal norm least-squares solution of a linear system of equations. Several numerical examples of reconstructing functions of one and two variables and of very different shapes are given. The examples demonstrate the robustness and high accuracy, as well as the caveats, of the proposed method. As to the mathematical foundation of the method, it is shown that the reconstruction formula can be interpreted as, and in fact is, a frame expansion. By recognizing the relevance of frames in determining analytical approximations to potential energy surfaces, an extremely rich and beautiful toolbox of mathematics is placed at our disposal. Thus, the simple reconstruction method derived in this paper can be refined, extended, and improved in numerous ways.
Hidden hysteresis – population dynamics can obscure gene network dynamics
2013-01-01
Background: Positive feedback is a common motif in gene regulatory networks. It can be used in synthetic networks as an amplifier to increase the level of gene expression, as well as a nonlinear module to create bistable gene networks that display hysteresis in response to a given stimulus. Using a synthetic positive feedback-based tetracycline sensor in E. coli, we show that the population dynamics of a cell culture has a profound effect on the observed hysteretic response of a population of cells with this synthetic gene circuit. Results: The amount of observable hysteresis in a cell culture harboring the gene circuit depended on the initial concentration of cells within the culture. The magnitude of the hysteresis observed was inversely related to the dilution used to inoculate the subcultures: the higher the dilution of the cell culture, the lower the observed hysteresis of that culture at steady state. Although the behavior of the gene circuit in individual cells did not change significantly in the different subcultures, the proportion of cells exhibiting high levels of steady-state gene expression did change. Although the interrelated kinetics of gene expression and cell growth are unpredictable at first sight, we were able to resolve the surprising dilution-dependent hysteresis as the result of two interrelated phenomena: the stochastic switching between the ON and OFF phenotypes, which led to the cumulative failure of the gene circuit over time, and the nonlinear, logistic growth of the cells in the batch culture. Conclusions: These findings reinforce the fact that population dynamics cannot be ignored in analyzing the dynamics of gene networks. Indeed, population dynamics may play a significant role in the manifestation of bistability and hysteresis, and is an important consideration when designing synthetic gene circuits intended for long-term application. PMID:23800122
[Methods and importance of volume measurement in reconstructive and aesthetic breast surgery].
Kunos, Csaba; Gulyás, Gusztáv; Pesthy, Pál; Kovács, Eszter; Mátrai, Zoltán
2014-03-16
Volume measurement of the breast allows for better surgical planning and implant selection in breast reconstructive and symmetrization procedures. The safety and accuracy of tumor removal, in accordance with oncoplastic principles, may be improved by knowing the true breast and breast-tumor volumes. The authors discuss methods of breast volume measurement and describe in detail a method based on magnetic resonance imaging with digital volume measurement. The volume of the breast parenchyma and the tumor was determined by processing the diagnostic magnetic resonance scans, and the difference in the volume of the two breasts was measured. Surgery was planned and implant selection was made based on the measured volumes. The authors conclude that digital volume measurement proved to be a valuable tool in the preoperative planning of volume-reducing mammaplasty, the replacement of implants of unknown size, and the treatment of breast asymmetry.
Three Dimensional Defect Reconstruction Using State Space Search and Woodbury's Substructure Method
NASA Astrophysics Data System (ADS)
Liu, X.; Deng, Y.; Li, Y.; Udpa, L.; Udpa, S. S.
2010-02-01
This paper introduces a model-based approach to reconstruct the three-dimensional defect profiles using eddy-current heat exchanger tube inspection signals. The method uses a Woodbury's substructure finite element forward model to simulate the underlying physics, a state space defect representation, and a tree search algorithm to solve the inverse problem. The advantage of the substructure method is that it divides the whole solution domain into two substructures and only the region of interest (ROI) with dramatic material changes will be updated in each iterative step. Since the number of elements inside the ROI is very small compared with the number of elements in the entire mesh, the computational effort needed in both LU factorization and coefficient matrix assembly is reduced. Therefore, the execution time is reduced significantly making the inversion very efficient. The initial inversion results are presented to confirm the validity of the approach.
Quan, Guotao; Gong, Hui; Deng, Yong; Fu, Jianwei; Luo, Qingming
2011-02-01
High-speed fluorescence molecular tomography (FMT) reconstruction for 3-D heterogeneous media is still one of the most challenging problems in diffusive optical fluorescence imaging. In this paper, we propose a fast FMT reconstruction method that is based on Monte Carlo (MC) simulation and accelerated by a cluster of graphics processing units (GPUs). Based on the Message Passing Interface standard, we modified the MC code for fast FMT reconstruction, and different Green's functions representing the flux distribution in the media are calculated simultaneously by different GPUs in the cluster. A load-balancing method was also developed to increase the computational efficiency. By applying the Fréchet derivative, a Jacobian matrix is formed to reconstruct the distribution of the fluorochromes using the calculated Green's functions. Phantom experiments have shown that only 10 min are required to obtain reconstruction results with a cluster of 6 GPUs, rather than 6 h with a cluster of multiple dual-Opteron CPU nodes. Because of its high accuracy and suitability for 3-D heterogeneous media with refractive-index-mismatched boundaries, inherited from the MC simulation, the GPU-cluster-accelerated method provides a reliable approach to high-speed reconstruction for FMT imaging.
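The Jacobian step can be sketched under a Born-type approximation. This is hedged: the Green's functions below are random stand-ins for the MC-computed ones, the product form of the sensitivity is a common simplification rather than the paper's exact Fréchet-derivative expression, and the Tikhonov solve replaces whatever regularization the authors used.

```python
import numpy as np

def fmt_jacobian(G_exc, G_det):
    """Born-approximation sensitivity: J[d, v] couples the excitation
    Green's function at voxel v with the detection Green's function from
    v to measurement d (illustrative product form)."""
    return G_exc * G_det

def reconstruct_fluor(J, y, alpha=1e-3):
    """Tikhonov-regularized least squares for the fluorochrome map."""
    JtJ = J.T @ J + alpha * np.eye(J.shape[1])
    return np.linalg.solve(JtJ, J.T @ y)

rng = np.random.default_rng(0)
n_meas, n_vox = 40, 25
G_exc = rng.random((n_meas, n_vox))   # stand-ins for MC Green's functions
G_det = rng.random((n_meas, n_vox))
J = fmt_jacobian(G_exc, G_det)
x_true = np.zeros(n_vox)
x_true[12] = 1.0                      # one fluorescent voxel
y = J @ x_true                        # synthetic boundary measurements
x_hat = reconstruct_fluor(J, y)
print(int(np.argmax(x_hat)))          # brightest reconstructed voxel
```

In the paper the expensive part is computing the Green's functions themselves, which is exactly what parallelizes across GPUs: each device propagates photons for a different source or detector position independently.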
An Efficient Augmented Lagrangian Method for Statistical X-Ray CT Image Reconstruction
Li, Jiaojiao; Niu, Shanzhou; Huang, Jing; Bian, Zhaoying; Feng, Qianjin; Yu, Gaohang; Liang, Zhengrong; Chen, Wufan; Ma, Jianhua
2015-01-01
Statistical iterative reconstruction (SIR) for X-ray computed tomography (CT) under the penalized weighted least-squares criterion can yield significant gains over conventional analytical reconstruction from noisy measurements. However, due to the nonlinear expression of the objective function, most existing algorithms for SIR unavoidably suffer from a heavy computational load and slow convergence rate, especially when an edge-preserving or sparsity-based penalty or regularization is incorporated. In this work, to address the above issues, we propose an adaptive nonmonotone alternating direction algorithm in the framework of the augmented Lagrangian multiplier method, termed "ALM-ANAD". The algorithm effectively combines an alternating direction technique with an adaptive nonmonotone line search to minimize the augmented Lagrangian function at each iteration. To evaluate the present ALM-ANAD algorithm, both qualitative and quantitative studies were conducted using digital and physical phantoms. Experimental results show that the present ALM-ANAD algorithm achieves noticeable gains over the classical nonlinear conjugate gradient algorithm and the state-of-the-art split Bregman algorithm in terms of noise reduction, contrast-to-noise ratio, convergence rate, and universal quality index metrics. PMID:26495975
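The augmented-Lagrangian/alternating-direction structure can be sketched on a toy penalized weighted least-squares problem. Hedged: this is a plain ADMM with an l1 penalty on the image itself (a simplification of the paper's edge-preserving regularizers), a fixed penalty parameter instead of the authors' adaptive nonmonotone line search, and a small random system matrix in place of a CT projector.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def pwls_admm(A, y, w, beta=0.1, rho=1.0, iters=100):
    """ADMM sketch for  min_x 0.5*(y-Ax)^T W (y-Ax) + beta*||z||_1
    subject to x = z, alternating three simple subproblems."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtWA = A.T @ (w[:, None] * A)
    AtWy = A.T @ (w * y)
    M = AtWA + rho * np.eye(n)
    for _ in range(iters):
        x = np.linalg.solve(M, AtWy + rho * (z - u))   # quadratic subproblem
        z = soft(x + u, beta / rho)                    # shrinkage subproblem
        u += x - z                                     # dual (multiplier) update
    return z

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 30))        # stand-in for the CT system matrix
x_true = np.zeros(30)
x_true[[3, 17]] = [2.0, -1.5]            # sparse "image"
w = np.ones(60)                          # statistical weights (uniform here)
y = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = pwls_admm(A, y, w, beta=0.05)
print(sorted(np.argsort(np.abs(x_hat))[-2:].tolist()))
```

The alternating-direction split is what makes nonsmooth penalties tractable inside SIR: each subproblem is cheap, and the paper's contribution is accelerating the outer loop with an adaptive nonmonotone line search.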
Hip reconstruction osteotomy by Ilizarov method as a salvage option for abnormal hip joints.
Umer, Masood; Rashid, Haroon; Umer, Hafiz Muhammad; Raza, Hasnain
2014-01-01
Hip joint instability can be secondary to congenital hip pathologies such as developmental dysplasia of the hip (DDH) or acquired as a sequela of an infective or neoplastic process. An unstable hip is usually associated with loss of bone from the proximal femur, proximal migration of the femur, lower-extremity length discrepancy, abnormal gait, and pain. In this case series of 37 patients presenting to our institution between May 2005 and December 2011, we report our results in the treatment of the unstable hip joint by hip reconstruction osteotomy using the Ilizarov method and apparatus. This comprises an acute valgus and extension osteotomy of the proximal femur combined with gradual varus and distraction (if required) for realignment and lengthening at a second, more distal, femoral osteotomy. 18 males and 19 females participated in the study. There were 17 patients with DDH, 12 with sequelae of septic arthritis, 2 with tuberculous arthritis, 4 with posttraumatic arthritis, and 2 with focal proximal femoral deficiency. Outcomes were evaluated using the Harris Hip Score. At a mean follow-up of 37 months, the Harris Hip Score had significantly improved in all patients. To conclude, Ilizarov hip reconstruction can successfully improve Trendelenburg gait. It supports the pelvis and simultaneously restores knee alignment and corrects lower-extremity length discrepancy (LLD).
3D Reconstruction from Multi-View Medical X-Ray Images - Review and Evaluation of Existing Methods
NASA Astrophysics Data System (ADS)
Hosseinian, S.; Arefi, H.
2015-12-01
The 3D concept is extremely important in clinical studies of the human body. Accurate 3D models of bony structures are currently required in clinical routine for diagnosis, patient follow-up, surgical planning, computer-assisted surgery and biomechanical applications. However, conventional 3D medical imaging techniques such as computed tomography (CT) and magnetic resonance imaging (MRI) have serious limitations, such as their use in non-weight-bearing positions, cost, and high radiation dose (for CT). Therefore, 3D reconstruction methods from biplanar X-ray images have been taken into consideration as reliable alternatives for achieving accurate 3D models with low radiation dose in weight-bearing positions. Various photogrammetry-based methods have been proposed for 3D reconstruction from X-ray images, and these should be assessed. In this paper, after demonstrating the principles of 3D reconstruction from X-ray images, different existing methods of 3D reconstruction of bony structures from radiographs are classified and evaluated with various metrics, and their advantages and disadvantages are discussed. Finally, the presented methods are compared with respect to several criteria such as accuracy, reconstruction time and their applications. Overall, each method has advantages and disadvantages that should be weighed for a specific application.
Fang, J H; Tang, G L; Chen, H; Song, L J; Li, X
2017-04-04
Objective: To assess the clinical results of anatomic coracoclavicular ligament reconstruction for distal clavicle fractures. Methods: From August 2013 to January 2015, the super image system was used to measure the CT data of 16 patients suffering distal clavicle fractures before operation in the Department of Orthopaedics, the First Affiliated Hospital of Nanjing Medical University. The fractures' morphological features and degree of acromioclavicular dislocation were assessed. By referring to data collected by our research group on Chinese people's coracoclavicular ligament, the injuries of the coracoclavicular ligament were estimated and then verified against the actual injuries detected during operation. Coracoclavicular ligament reconstruction was performed on all patients, and screws or suture anchors fixing small bone blocks were used as an adjuvant therapy. Clinical and radiological follow-up took place at 1, 3, 6 and 12 months after the procedure. The clinical outcomes were assessed pre- and postoperatively with Constant scores. Anteroposterior radiographs of the bilateral acromioclavicular joints were obtained immediately after surgery and at every follow-up. To compare reduction maintenance, coracoclavicular distances of the injured shoulders were measured on preoperative and postoperative standard radiographs. Results: All patients achieved satisfactory fracture and acromioclavicular joint reduction. The average follow-up period was (12.6±3.9) months (range, 6 to 22 months). Fractures healed six months after the operation. The coracoclavicular distance increased from (7.8±1.4) mm at the one-month follow-up to (7.9±1.2) mm at the final follow-up (P>0.05), indicating no statistically significant difference. The Constant score significantly increased from (49.1±4.4) at the one-month follow-up to (93.8±2.1) at the final evaluation (P<0.001). No obvious loss of acromioclavicular joint reduction was observed after the operation. Coracoid process
The validation of made-to-measure method for reconstruction of phase-space distribution functions
NASA Astrophysics Data System (ADS)
Tagawa, H.; Gouda, N.; Yano, T.; Hara, T.
2016-11-01
We investigate how accurately phase-space distribution functions (DFs) in galactic models can be reconstructed by the made-to-measure (M2M) method, which constructs N-particle models of stellar systems from photometric and various kinematic data. The advantage of the M2M method is that it can be applied to various galactic models without assuming spatial symmetries of the gravitational potentials adopted in those models, and, furthermore, the numerical calculation of stellar orbits is not severely constrained by the capacities of computer memories. The M2M method has been applied to various galactic models. However, the accuracy of the recovery of DFs derived by the M2M method in galactic models has never been investigated carefully. Therefore, we show the accuracy of the recovery of the DFs for the anisotropic Plummer model and the axisymmetric Stäckel model, which have analytic solutions for the DFs. Furthermore, this study provides the dependence of the recovery accuracy on various parameters and on the procedure adopted in this paper. As a result, we find that the accuracy of the recovery of the DFs derived by the M2M method is a few per cent for the spherical target model, and more than 10 per cent for the axisymmetric target model.
Evaluating curvature for the volume of fluid method via interface reconstruction
NASA Astrophysics Data System (ADS)
Evrard, Fabien; Denner, Fabian; van Wachem, Berend
2016-11-01
The volume of fluid method (VOF) is widely adopted for the simulation of interfacial flows. A critical step in VOF modelling is to evaluate the local mean curvature of the fluid interface for the computation of surface tension. Most existing curvature evaluation techniques exhibit errors due to the discrete nature of the field they are dealing with, and potentially to the smoothing of this field that the method might require. This leads to the production of inaccurate or unphysical results. We present a curvature evaluation method which aims at greatly reducing these errors. The interface is reconstructed from the volume fraction field and the curvature is evaluated by fitting local quadric patches onto the resulting triangulation. The patch that best fits the triangulated interface can be found by solving a local minimisation problem. Combined with a partition of unity strategy with compactly supported radial basis functions, the method provides a semi-global implicit expression for the interface from which curvature can be exactly derived. The local mean curvature is then integrated back on the Eulerian mesh. We show a detailed analysis of the associated errors and comparisons with existing methods. The method can be extended to unstructured meshes. Financial support from Petrobras is gratefully acknowledged.
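The quadric-patch idea described above can be illustrated in a minimal form: fit a local quadric to interface points by least squares and evaluate the mean curvature of the fitted patch. This is a generic sketch under the Monge-patch assumption z = h(x, y), not the authors' partition-of-unity implementation; all values are illustrative.

```python
import numpy as np

def mean_curvature_quadric(pts):
    """Fit z = a x^2 + b x y + c y^2 + d x + e y + f to local interface points
    by least squares and return the mean curvature of the patch at the origin."""
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    M = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    a, b, c, d, e, _ = np.linalg.lstsq(M, z, rcond=None)[0]
    # Mean curvature of a Monge patch z = h(x, y), evaluated at (0, 0)
    hx, hy, hxx, hxy, hyy = d, e, 2 * a, b, 2 * c
    num = (1 + hy ** 2) * hxx - 2 * hx * hy * hxy + (1 + hx ** 2) * hyy
    return num / (2 * (1 + hx ** 2 + hy ** 2) ** 1.5)

# Toy check: points sampled near the pole of a sphere of radius 2 should give
# a mean curvature close to 1/R = 0.5.
rng = np.random.default_rng(1)
R = 2.0
xy = 0.2 * rng.standard_normal((200, 2))
zz = R - np.sqrt(R ** 2 - xy[:, 0] ** 2 - xy[:, 1] ** 2)
H = mean_curvature_quadric(np.column_stack([xy, zz]))
```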
A Coarse Alignment Method Based on Digital Filters and Reconstructed Observation Vectors.
Xu, Xiang; Xu, Xiaosu; Zhang, Tao; Li, Yao; Wang, Zhicheng
2017-03-29
In this paper, a coarse alignment method based on apparent gravitational motion is proposed. Due to interference in complex situations, the true observation vectors, which are calculated from the apparent gravity, are contaminated. The sources of the interference are analyzed in detail, and a low-pass digital filter is then designed to eliminate the high-frequency noise of the measured observation vectors. To extract the effective observation vectors from the inertial sensors' outputs, a parameter recognition and vector reconstruction method is designed, in which an adaptive Kalman filter is employed to estimate the unknown parameters. Furthermore, a robust filter based on Huber's M-estimation theory is developed to address outliers in the measured observation vectors due to vehicle maneuvers. A comprehensive experiment, comprising a simulation test and a physical test, is designed to verify the performance of the proposed method. The results show that the proposed method is equivalent to the popular apparent velocity method in swaying mode, and superior to current methods in moving mode when the strapdown inertial navigation system (SINS) is under entirely self-contained conditions.
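The low-pass filtering step can be sketched with the simplest possible digital filter: a first-order IIR smoother applied to a noisy slowly varying signal. This is a generic illustration of high-frequency noise attenuation, not the filter designed in the paper; the cutoff parameter and signal are assumptions.

```python
import numpy as np

def lowpass(signal, alpha=0.1):
    """First-order IIR low-pass filter: y[n] = y[n-1] + alpha * (x[n] - y[n-1]).
    Smaller alpha gives a lower cutoff (stronger smoothing, more lag)."""
    y = np.empty_like(signal, dtype=float)
    acc = float(signal[0])
    for i, x in enumerate(signal):
        acc += alpha * (x - acc)   # exponential moving average update
        y[i] = acc
    return y

# Toy usage: attenuate high-frequency noise riding on a slow trend,
# as one would for apparent-gravity observation vectors.
rng = np.random.default_rng(6)
t = np.linspace(0, 10, 1000)
clean = np.sin(0.5 * t)
noisy = clean + 0.3 * rng.standard_normal(1000)
smooth = lowpass(noisy, alpha=0.05)
```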
Fully-Implicit Reconstructed Discontinuous Galerkin Method for Stiff Multiphysics Problems
NASA Astrophysics Data System (ADS)
Nourgaliev, Robert
2015-11-01
A new reconstructed Discontinuous Galerkin (rDG) method, based on orthogonal basis/test functions, is developed for fluid flows on unstructured meshes. Orthogonality of basis functions is essential for enabling robust and efficient fully-implicit Newton-Krylov based time integration. The method is designed for generic partial differential equations, including transient, hyperbolic, parabolic or elliptic operators, which are attributed to many multiphysics problems. We demonstrate the method's capabilities for solving compressible fluid-solid systems (in the low Mach number limit), with phase change (melting/solidification), as motivated by applications in Additive Manufacturing. We focus on the method's accuracy (in both space and time), as well as robustness and solvability of the system of linear equations involved in the linearization steps of Newton-based methods. The performance of the developed method is investigated for highly-stiff problems with melting/solidification, emphasizing the advantages from tight coupling of mass, momentum and energy conservation equations, as well as orthogonality of basis functions, which leads to better conditioning of the underlying (approximate) Jacobian matrices, and rapid convergence of the Krylov-based linear solver. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344, and funded by the LDRD at LLNL under project tracking code 13-SI-002.
NASA Astrophysics Data System (ADS)
Nourgaliev, R.; Luo, H.; Weston, B.; Anderson, A.; Schofield, S.; Dunn, T.; Delplanque, J.-P.
2016-01-01
A new reconstructed Discontinuous Galerkin (rDG) method, based on orthogonal basis/test functions, is developed for fluid flows on unstructured meshes. Orthogonality of basis functions is essential for enabling robust and efficient fully-implicit Newton-Krylov based time integration. The method is designed for generic partial differential equations, including transient, hyperbolic, parabolic or elliptic operators, which are attributed to many multiphysics problems. We demonstrate the method's capabilities for solving compressible fluid-solid systems (in the low Mach number limit), with phase change (melting/solidification), as motivated by applications in Additive Manufacturing (AM). We focus on the method's accuracy (in both space and time), as well as robustness and solvability of the system of linear equations involved in the linearization steps of Newton-based methods. The performance of the developed method is investigated for highly-stiff problems with melting/solidification, emphasizing the advantages from tight coupling of mass, momentum and energy conservation equations, as well as orthogonality of basis functions, which leads to better conditioning of the underlying (approximate) Jacobian matrices, and rapid convergence of the Krylov-based linear solver.
Chen, Baiyu; Christianson, Olav; Wilson, Joshua M.; Samei, Ehsan
2014-07-15
Purpose: For nonlinear iterative image reconstructions (IR), the computed tomography (CT) noise and resolution properties can depend on the specific imaging conditions, such as lesion contrast and image noise level. Therefore, it is imperative to develop a reliable method to measure the noise and resolution properties under clinically relevant conditions. This study aimed to develop a robust methodology to measure the three-dimensional CT noise and resolution properties under such conditions and to provide guidelines to achieve desirable levels of accuracy and precision. Methods: The methodology was developed based on a previously reported CT image quality phantom. In this methodology, CT noise properties are measured in the uniform region of the phantom in terms of a task-based 3D noise-power spectrum (NPS_task). The in-plane resolution properties are measured in terms of the task transfer function (TTF) by applying a radial edge technique to the rod inserts in the phantom. The z-direction resolution properties are measured from a supplemental phantom, also in terms of the TTF. To account for the possible nonlinearity of IR, the NPS_task is measured with respect to the noise magnitude, and the TTF with respect to noise magnitude and edge contrast. To determine the accuracy and precision of the methodology, images of known noise and resolution properties were simulated. The NPS_task and TTF were measured on the simulated images and compared to the truth, with criteria established to achieve NPS_task and TTF measurements with <10% error. To demonstrate the utility of this methodology, measurements were performed on a commercial CT system using five dose levels, two slice thicknesses, and three reconstruction algorithms (filtered backprojection, FBP; iterative reconstruction in imaging space, IRIS; and sinogram affirmed iterative reconstruction with strength 5, SAFIRE5). Results: To achieve NPS_task measurements with <10% error, the
Xu, H
2014-06-01
Purpose: To develop and investigate whether the logarithmic barrier (LB) method can produce high-quality reconstructed CT images from sparsely-sampled noisy projection data. Methods: The objective function is typically formulated as the sum of the total variation (TV) and a data fidelity (DF) term, with a parameter λ that governs the relative weight between them. Finding the optimal value of λ is a critical step for this approach to give satisfactory results. The proposed LB method avoids using λ by constructing the objective function as the sum of the TV and a log function whose argument is the DF term. Newton's method was used to solve the optimization problem. The algorithm was coded in MatLab2013b. Both the Shepp-Logan phantom and a patient lung CT image were used for demonstration of the algorithm. Measured data were simulated by calculating the projection data using the Radon transform. A Poisson noise model was used to account for simulated detector noise. The iteration stopped when the difference between the current TV and the previous one was less than 1%. Results: The Shepp-Logan phantom reconstruction study shows that filtered back-projection (FBP) gives strong streak artifacts for 30 and 40 projections. Although the streak artifacts are visually less pronounced for 64 and 90 projections in FBP, the 1D pixel profiles indicate that FBP gives noisier reconstructed pixel values than LB does. A lung image reconstruction is presented. It shows that use of 64 projections gives satisfactory reconstructed image quality with regard to noise suppression and sharp edge preservation. Conclusion: This study demonstrates that the logarithmic barrier method can be used to reconstruct CT images from sparsely-sampled data. A number of projections around 64 gives a balance between over-smoothing of sharp demarcations and noise suppression. Future studies may extend to CBCT reconstruction and improvements in computation speed.
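The barrier idea, TV plus a log function of the data-fidelity term so that no explicit λ is needed, can be sketched in a simplified 1-D denoising setting. This sketch uses plain gradient descent instead of Newton's method and an identity forward model; all parameter values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def tv_logbarrier_denoise(b, eps, mu=1e-2, step=5e-3, iters=3000):
    """Minimize smoothed TV(x) - mu*log(eps - ||x - b||^2) by gradient
    descent, keeping the data-fidelity term strictly inside the barrier."""
    x = b.astype(float).copy()         # feasible start: fidelity term is zero
    beta = 1e-3                        # smoothing constant for the TV gradient
    for _ in range(iters):
        d = np.diff(x)
        t = d / np.sqrt(d * d + beta)  # gradient of smoothed TV w.r.t. diffs
        g = np.zeros_like(x)
        g[:-1] -= t
        g[1:] += t
        df = np.sum((x - b) ** 2)
        g += mu * 2.0 * (x - b) / (eps - df)   # gradient of the log barrier
        x_new = x - step * g
        if np.sum((x_new - b) ** 2) < eps:     # accept only feasible iterates
            x = x_new
        else:
            step *= 0.5                        # back off near the barrier
    return x

# Toy usage: denoise a piecewise-constant signal without choosing λ;
# eps caps how far the solution may drift from the data.
rng = np.random.default_rng(7)
truth = np.concatenate([np.zeros(15), np.ones(15)])
b = truth + 0.1 * rng.standard_normal(30)
x = tv_logbarrier_denoise(b, eps=0.6)
```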
Tarsitano, Achille; Battaglia, Salvatore; Crimi, Salvatore; Ciocca, Leonardo; Scotti, Roberto; Marchetti, Claudio
2016-07-01
The design and manufacture of patient-specific mandibular reconstruction plates, particularly in combination with cutting guides, has created many new opportunities for the planning and implementation of mandibular reconstruction. Although this surgical method is being used more widely and the outcomes appear to be improved, the question of the additional cost has to be discussed. To evaluate the cost generated by the management of this technology, we studied a cohort of patients treated for mandibular neoplasms. The population was divided into two groups of 20 patients each who were undergoing a 'traditional' freehand mandibular reconstruction or a computer-aided design/computer-aided manufacturing (CAD-CAM) mandibular reconstruction. Data concerning operation time, complications, and days of hospitalisation were used to evaluate costs related to the management of these patients. The mean operating time for the CAD-CAM group was 435 min, whereas that for the freehand group was 550.5 min. The total difference in terms of average time gain was 115.5 min. No microvascular complication occurred in the CAD-CAM group; two complications (10%) were observed in patients undergoing freehand reconstructions. The mean overall lengths of hospital stay were 13.8 days for the CAD-CAM group and 17 days for the freehand group. Finally, considering that the institutional cost per minute of theatre time is €30, the money saved as a result of the time gained was €3,450. This cost corresponds approximately to the total price of the CAD-CAM surgery. In conclusion, we believe that CAD-CAM technology for mandibular reconstruction will become a widely used reconstructive method and that its cost will be covered by gains in terms of surgical time, quality of reconstruction, and reduced complications.
Conjugate-gradient preconditioning methods for shift-variant PET image reconstruction.
Fessler, J A; Booth, S D
1999-01-01
Gradient-based iterative methods often converge slowly for tomographic image reconstruction and image restoration problems, but can be accelerated by suitable preconditioners. Diagonal preconditioners offer some improvement in convergence rate, but do not incorporate the structure of the Hessian matrices in imaging problems. Circulant preconditioners can provide remarkable acceleration for inverse problems that are approximately shift-invariant, i.e., for those with approximately block-Toeplitz or block-circulant Hessians. However, in applications with nonuniform noise variance, such as arises from Poisson statistics in emission tomography and in quantum-limited optical imaging, the Hessian of the weighted least-squares objective function is quite shift-variant, and circulant preconditioners perform poorly. Additional shift-variance is caused by edge-preserving regularization methods based on nonquadratic penalty functions. This paper describes new preconditioners that approximate more accurately the Hessian matrices of shift-variant imaging problems. Compared to diagonal or circulant preconditioning, the new preconditioners lead to significantly faster convergence rates for the unconstrained conjugate-gradient (CG) iteration. We also propose a new efficient method for the line-search step required by CG methods. Applications to positron emission tomography (PET) illustrate the method.
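The basic preconditioned CG iteration the abstract builds on can be sketched as follows, with a simple diagonal (Jacobi) preconditioner standing in for the shift-variant preconditioners the paper proposes. The matrix and preconditioner choice here are illustrative assumptions.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-8, maxit=500):
    """Preconditioned conjugate gradients for an SPD matrix A.
    M_inv is a callable applying an approximation of A^{-1} to a vector."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)                   # apply preconditioner to the residual
        rz_new = r @ z
        p = z + (rz_new / rz) * p      # update the search direction
        rz = rz_new
    return x

# Toy usage: a badly scaled SPD system, where a diagonal preconditioner
# counteracts the strong variation along the diagonal.
rng = np.random.default_rng(2)
n = 50
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)
A += np.diag(np.linspace(1, 1e4, n))   # strong diagonal variation
b = rng.standard_normal(n)
d = np.diag(A)
x = pcg(A, b, lambda r: r / d)         # Jacobi preconditioner
```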
An estimation method of MR signal parameters for improved image reconstruction in unilateral scanner
NASA Astrophysics Data System (ADS)
Bergman, Elad; Yeredor, Arie; Nevo, Uri
2013-12-01
Unilateral NMR devices are used in various applications including non-destructive testing and well logging, but are not used routinely for imaging. This is mainly due to the inhomogeneous magnetic field (B0) in these scanners. This inhomogeneity results in low sensitivity and further forces the use of the slow single-point imaging scan scheme. Improving the measurement sensitivity is therefore an important factor, as it can improve image quality and reduce imaging times. Short imaging times can facilitate the use of this affordable and portable technology for various imaging applications. This work presents a statistical signal-processing method designed to fit the unique characteristics of imaging with a unilateral device. The method improves imaging capabilities by improving the extraction of image information from the noisy data. This is done by exploiting redundancy in the acquired MR signal and the noise characteristics, both of which were incorporated into a weighted least squares estimation approach. The method's performance was evaluated with a series of imaging acquisitions applied to phantoms. Images were extracted from each measurement with the proposed method and compared to the conventional image reconstruction. All measurements showed a significant improvement in image quality based on the MSE criterion with respect to gold-standard reference images. Integration of this method with further improvements may lead to a prominent reduction in imaging times, aiding the use of such scanners in imaging applications.
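The weighted least squares estimator underlying such an approach has a standard closed form, x̂ = (AᵀWA)⁻¹AᵀWy, where W carries per-sample weights such as inverse noise variances. The sketch below is a generic WLS fit, not the paper's signal model; the design matrix and noise levels are illustrative assumptions.

```python
import numpy as np

def wls(A, y, w):
    """Weighted least-squares estimate x = (A^T W A)^{-1} A^T W y,
    with w the diagonal of W (e.g. inverse noise variances per sample)."""
    Aw = A * w[:, None]                    # scale each row by its weight
    return np.linalg.solve(A.T @ Aw, Aw.T @ y)

# Toy usage: repeated noisy measurements of two parameters, where
# low-variance samples receive proportionally higher weight.
rng = np.random.default_rng(3)
truth = np.array([2.0, -1.0])
A = rng.standard_normal((200, 2))
sigma = np.where(np.arange(200) < 100, 0.1, 1.0)   # two noise levels
y = A @ truth + sigma * rng.standard_normal(200)
x_hat = wls(A, y, w=1.0 / sigma ** 2)
```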
Prioritization of Susceptibility Genes for Ectopic Pregnancy by Gene Network Analysis.
Liu, Ji-Long; Zhao, Miao
2016-02-01
Ectopic pregnancy is a very dangerous complication of pregnancy, affecting 1%-2% of all reported pregnancies. Due to ethical constraints on human biopsies and the lack of suitable animal models, there has been little success in identifying functionally important genes in the pathogenesis of ectopic pregnancy. In the present study, we developed a random walk-based computational method named TM-rank to prioritize ectopic pregnancy-related genes based on text mining data and gene network information. Using a defined threshold value, we identified five top-ranked genes: VEGFA (vascular endothelial growth factor A), IL8 (interleukin 8), IL6 (interleukin 6), ESR1 (estrogen receptor 1) and EGFR (epidermal growth factor receptor). These genes are promising candidate genes that can serve as useful diagnostic biomarkers and therapeutic targets. Our approach represents a novel strategy for prioritizing disease susceptibility genes.
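A random walk with restart over a gene network, the general mechanism behind such ranking methods, can be sketched as follows. This is a generic illustration, not the TM-rank algorithm itself; the toy network, seed choice, and restart probability are assumptions.

```python
import numpy as np

def random_walk_with_restart(W, seeds, restart=0.3, tol=1e-10):
    """Rank network nodes by a random walk that restarts at seed nodes.
    W is a symmetric adjacency matrix; returns the stationary score vector."""
    P = W / W.sum(axis=0, keepdims=True)       # column-stochastic transitions
    p0 = np.zeros(W.shape[0]); p0[seeds] = 1.0 / len(seeds)
    p = p0.copy()
    while True:
        p_next = (1 - restart) * P @ p + restart * p0
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next

# Toy usage on a 5-gene network: gene 0 is the seed; genes linked to it
# should outrank the gene two hops away.
W = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
scores = random_walk_with_restart(W, seeds=[0])
```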
NASA Astrophysics Data System (ADS)
Nilsen, Gørill
2016-08-01
Seal hunting and whaling have played an important part in people's livelihoods throughout prehistory, as evidenced by rock carvings, bone remains, artifacts from aquatic animals and hunting tools. This paper focuses on one of the more elusive resources relating to such activities: marine mammal blubber. Although marine blubber easily decomposes, the organic material has been documented from the Mesolithic Period onwards. Of particular interest in this article are the many structures in Northern Norway from the Iron Age, and in Finland on Kökar, Åland, from both the Bronze and Early Iron Ages, that exhibit traits interpreted as being related to oil rendering from marine mammal blubber. The article discusses the methods used in this oil production activity based on historical sources, archaeological investigations and experimental reconstruction of Iron Age slab-lined pits from Northern Norway.
A Maximum Likelihood Method for Reconstruction of the Evolution of Eukaryotic Gene Structure
Carmel, Liran; Rogozin, Igor B.; Wolf, Yuri I.; Koonin, Eugene V.
2012-01-01
Spliceosomal introns are one of the principal distinctive features of eukaryotes. Nevertheless, different large-scale studies disagree about even the most basic features of their evolution. In order to come up with a more reliable reconstruction of intron evolution, we developed a model that is far more comprehensive than previous ones. This model is rich in parameters, and estimating them accurately is infeasible by straightforward likelihood maximization. Thus, we have developed an expectation-maximization algorithm that allows for efficient maximization. Here, we outline the model and describe the expectation-maximization algorithm in detail. Since the method works with intron presence–absence maps, it is expected to be instrumental for the analysis of the evolution of other binary characters as well. PMID:19381540
NASA Astrophysics Data System (ADS)
Fraysse, F.; Redondo, C.; Rubio, G.; Valero, E.
2016-12-01
This article is devoted to the numerical discretisation of the hyperbolic two-phase flow model of Baer and Nunziato. Special attention is paid to the discretisation of intercell flux functions in the framework of Finite Volume and Discontinuous Galerkin approaches, where care must be taken to efficiently approximate the non-conservative products inherent to the model equations. Various upwind approximate Riemann solvers have been tested on a suite of discontinuous test cases. New discretisation schemes are proposed in a Discontinuous Galerkin framework following the criterion of Abgrall and the path-conservative formalism. A stabilisation technique based on artificial viscosity is applied to the high-order Discontinuous Galerkin method and compared against classical TVD-MUSCL Finite Volume flux reconstruction.
Model-based near-wall reconstructions for immersed-boundary methods
NASA Astrophysics Data System (ADS)
Posa, Antonio; Balaras, Elias
2014-08-01
In immersed-boundary methods, the cost of resolving the thin boundary layers on a solid boundary at high Reynolds numbers is prohibitive. In the present work, we propose a new model-based, near-wall reconstruction to account for the lack of resolution and provide the correct wall shear stress and hydrodynamic forces. The models are analytical variants of a generalized form of the two-layer model developed by Balaras et al. (AIAA J 34:1111-1119, 1996) for large-eddy simulations. We present results for the flow around a cylinder and a sphere, using Cartesian and cylindrical coordinate grids. We demonstrate that the proposed treatment reproduces the wall stress very accurately on grids that are one order of magnitude coarser than those of well-resolved simulations.
Broadbent, James; Sampson, Dayle; Sabapathy, Surendran; Haseler, Luke J; Wagner, Karl-Heinz; Bulmer, Andrew C; Peake, Jonathan M; Neubauer, Oliver
2017-04-01
It remains incompletely understood whether there is an association between the transcriptome profiles of skeletal muscle and blood leukocytes in response to exercise or other physiological stressors. We have previously analyzed the changes in the muscle and blood neutrophil transcriptome in eight trained men before and 3, 48, and 96 h after 2 h of cycling and running. Because we collected muscle and blood from the same individuals under the same conditions, we were able to directly compare gene expression between the muscle and blood neutrophils. Applying weighted gene coexpression network analysis (WGCNA), an advanced network-driven method, to these original data sets enabled us to compare the muscle and neutrophil transcriptomes in a rigorous and systematic manner. Two gene networks were identified that were preserved between skeletal muscle and blood neutrophils, functionally related to mitochondria and posttranslational processes. Strong preservation measures (Zsummary > 10) for both muscle-neutrophil gene networks were evident within the postexercise recovery period. Muscle and neutrophil gene coexpression was strongly correlated in the mitochondria-related network (r = 0.97; P = 3.17E-2). We also identified multiple correlations between muscular gene subnetworks and exercise-induced changes in blood leukocyte counts, inflammation, and muscle damage markers. These data reveal previously unidentified gene coexpression between skeletal muscle and blood neutrophils following exercise, demonstrating the value of WGCNA for understanding exercise physiology. Furthermore, these findings provide preliminary evidence in support of the notion that blood neutrophil gene networks may potentially help us to track physiological and pathophysiological changes in the muscle. NEW & NOTEWORTHY By using weighted gene coexpression network analysis, an advanced bioinformatics method, we have identified previously unknown, functional gene networks that are preserved between skeletal muscle
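The starting point of a WGCNA-style analysis is a coexpression adjacency matrix built from pairwise correlations raised to a soft-thresholding power. The sketch below shows only that first step in a signed form; the power beta and the toy expression data are illustrative assumptions, not the study's settings.

```python
import numpy as np

def signed_adjacency(expr, beta=6):
    """WGCNA-style signed coexpression adjacency from a samples-by-genes
    expression matrix: a_ij = ((1 + cor(i, j)) / 2) ** beta."""
    cor = np.corrcoef(expr, rowvar=False)      # gene-gene correlation matrix
    return ((1 + cor) / 2) ** beta

# Toy usage: two genes driven by a shared signal form a strong edge;
# an independent third gene attaches only weakly.
rng = np.random.default_rng(4)
signal = rng.standard_normal(100)
expr = np.column_stack([signal + 0.1 * rng.standard_normal(100),
                        signal + 0.1 * rng.standard_normal(100),
                        rng.standard_normal(100)])
adj = signed_adjacency(expr)
```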
Ruchala, Kenneth J; Olivera, Gustavo H; Kapatoes, Jeffrey M; Reckwerdt, Paul J; Mackie, Thomas R
2002-11-01
There are many benefits to having an online CT imaging system for radiotherapy, as it helps identify changes in the patient's position and anatomy between the time of planning and treatment. However, many current online CT systems suffer from a limited field-of-view (LFOV) in that collected data do not encompass the patient's complete cross section. Reconstruction of these data sets can quantitatively distort the image values and introduce artifacts. This work explores the use of planning CT data as a priori information for improving these reconstructions. Methods are presented to incorporate this data by aligning the LFOV with the planning images and then merging the data sets in sinogram space. One alignment option is explicit fusion, producing fusion-aligned reprojection (FAR) images. For cases where explicit fusion is not viable, FAR can be implemented using the implicit fusion of normal setup error, referred to as normal-error-aligned reprojection (NEAR). These methods are evaluated for multiday patient images showing both internal and skin-surface anatomical variation. The iterative use of NEAR and FAR is also investigated, as are applications of NEAR and FAR to dose calculations and the compensation of LFOV online MVCT images with kVCT planning images. Results indicate that NEAR and FAR can utilize planning CT data as imperfect a priori information to reduce artifacts and quantitatively improve images. These benefits can also increase the accuracy of dose calculations and be used for augmenting CT images (e.g., MVCT) acquired at different energies than the planning CT.
Mueller, Kerstin; Schwemmer, Chris; Hornegger, Joachim; Zheng Yefeng; Wang Yang; Lauritsch, Guenter; Rohkohl, Christopher; Maier, Andreas K.; Schultz, Carl; Fahrig, Rebecca
2013-03-15
Purpose: For interventional cardiac procedures, anatomical and functional information about the cardiac chambers is of major interest. With the technology of angiographic C-arm systems it is possible to reconstruct intraprocedural three-dimensional (3D) images from 2D rotational angiographic projection data (C-arm CT). However, 3D reconstruction of a dynamic object is a fundamental problem in C-arm CT reconstruction. The 2D projections are acquired over a scan time of several seconds, thus the projection data show different states of the heart. A standard FDK reconstruction algorithm would use all acquired data for a filtered backprojection and result in a motion-blurred image. In this approach, a motion compensated reconstruction algorithm requiring knowledge of the 3D heart motion is used. The motion is estimated from a previously presented 3D dynamic surface model. This dynamic surface model results in a sparse motion vector field (MVF) defined at control points. In order to perform a motion compensated reconstruction, a dense motion vector field is required. The dense MVF is generated by interpolation of the sparse MVF. Therefore, the influence of different motion interpolation methods on the reconstructed image quality is evaluated. Methods: Four different interpolation methods, thin-plate splines (TPS), Shepard's method, a smoothed weighting function, and a simple averaging, were evaluated. The reconstruction quality was measured on phantom data, a porcine model as well as on in vivo clinical data sets. As a quality index, the 2D overlap of the forward projected motion compensated reconstructed ventricle and the segmented 2D ventricle blood pool was quantitatively measured with the Dice similarity coefficient and the mean deviation between extracted ventricle contours. For the phantom data set, the normalized root mean square error (nRMSE) and the universal quality index (UQI) were also evaluated in 3D image space. Results: The quantitative evaluation of all
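Of the interpolation schemes compared above, Shepard's inverse-distance weighting is the simplest to state: each query point takes a distance-weighted average of the control-point vectors. The sketch below is a generic 2-D version for illustration; the paper's 3-D motion fields and weighting variants are not reproduced, and the control data are assumptions.

```python
import numpy as np

def shepard_interpolate(ctrl_pts, ctrl_vecs, query_pts, power=2, eps=1e-12):
    """Shepard (inverse-distance-weighted) interpolation of a sparse motion
    vector field defined at control points onto arbitrary query points."""
    out = np.empty((len(query_pts), ctrl_vecs.shape[1]))
    for k, q in enumerate(query_pts):
        d = np.linalg.norm(ctrl_pts - q, axis=1)
        if d.min() < eps:                    # query coincides with a control
            out[k] = ctrl_vecs[d.argmin()]
            continue
        w = 1.0 / d ** power                 # inverse-distance weights
        out[k] = (w[:, None] * ctrl_vecs).sum(axis=0) / w.sum()
    return out

# Toy usage: a linear motion field sampled at 4 corners of a unit square,
# interpolated at the centre (by symmetry, the mean of the 4 vectors).
ctrl = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
vecs = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
mvf = shepard_interpolate(ctrl, vecs, np.array([[0.5, 0.5]]))
```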
Chen, Shuo; Ong, Yi Hong; Lin, Xiaoqian; Liu, Quan
2015-07-01
Raman spectroscopy has shown great potential in biomedical applications. However, intrinsically weak Raman signals cause slow data acquisition, especially in Raman imaging. This problem can be overcome by narrow-band Raman imaging followed by spectral reconstruction. Our previous study has shown that Raman spectra free of fluorescence background can be reconstructed from narrow-band Raman measurements using traditional Wiener estimation. However, fluorescence-free Raman spectra are only available from sophisticated Raman setups capable of fluorescence suppression. The reconstruction of Raman spectra with fluorescence background from narrow-band measurements is much more challenging due to the significant variation in fluorescence background. In this study, two advanced Wiener estimation methods, i.e. modified Wiener estimation and sequential weighted Wiener estimation, were optimized to achieve this goal. Both spontaneous Raman spectra and surface-enhanced Raman spectra were evaluated. Compared with traditional Wiener estimation, the two advanced methods showed significant improvement in the reconstruction of spontaneous Raman spectra. However, traditional Wiener estimation can work as effectively as the advanced methods for SERS spectra while being much faster. Judicious selection among these methods would enable accurate Raman reconstruction in a simple Raman setup without fluorescence-suppression capability for fast Raman imaging.
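Traditional Wiener estimation, the baseline method above, reconstructs a full spectrum s from narrow-band measurements m = A s + n via W = C_s Aᵀ (A C_s Aᵀ + C_n)⁻¹, with C_s estimated from training spectra. The sketch below is a generic version of that linear estimator; the band layout, training spectra, and noise level are illustrative assumptions.

```python
import numpy as np

def wiener_matrix(train_spectra, A, noise_var=1e-6):
    """Traditional Wiener estimation matrix W = Cs A^T (A Cs A^T + Cn)^{-1},
    where Cs is the autocorrelation of training spectra and A maps a full
    spectrum to narrow-band measurements."""
    Cs = train_spectra.T @ train_spectra / len(train_spectra)
    Cn = noise_var * np.eye(A.shape[0])
    return Cs @ A.T @ np.linalg.inv(A @ Cs @ A.T + Cn)

# Toy usage: 64-point "spectra" measured through 4 non-overlapping broad
# bands, then reconstructed from the 4 band readings alone.
rng = np.random.default_rng(5)
grid = np.linspace(0, 1, 64)
train = np.array([np.exp(-(grid - c) ** 2 / 0.02)
                  for c in rng.uniform(0.2, 0.8, 200)])
A = np.zeros((4, 64))
for i in range(4):
    A[i, 16 * i:16 * (i + 1)] = 1.0          # 4 non-overlapping bands
W = wiener_matrix(train, A)
s_true = np.exp(-(grid - 0.5) ** 2 / 0.02)
s_hat = W @ (A @ s_true)
```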
Łeski, Szymon; Wójcik, Daniel K; Tereszczuk, Joanna; Swiejkowski, Daniel A; Kublik, Ewa; Wróbel, Andrzej
2007-01-01
Estimation of the continuous current-source density (CSD) in bulk tissue from a finite set of electrode measurements is a daunting task. Here we present a methodology which allows such a reconstruction by generalizing the one-dimensional inverse CSD method. The idea is to assume a particular plausible form of CSD within a class described by a number of parameters which can be estimated from available data, for example a set of cubic splines in 3D spanned on a fixed grid of the same size as the set of measurements. To avoid dependence on the particular choice of reconstruction grid, we add random jitter to the point positions and show that this leads to a correct reconstruction. We propose different ways of improving the quality of reconstruction which take into account sources located outside the recording region through appropriate boundary treatment. The efficiency of the traditional CSD method and variants of the inverse CSD method is compared using several fidelity measures on different test data to investigate when one of the methods is superior to the others. The methods are illustrated with reconstructions of CSD from potentials evoked by stimulation of a bunch of whiskers, recorded in a slab of the rat forebrain on a grid of 4x5x7 positions.
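The one-dimensional inverse CSD idea being generalized here can be sketched directly: assume a parametric CSD (e.g. constant within a disk around each electrode), build the forward matrix mapping CSD to potentials, and invert it. A toy version under a planar-disk forward model (parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def forward_matrix(z, R=0.5e-3, sigma=0.3):
    """Potential at electrode z_i due to a planar disk source (radius R,
    unit CSD, thickness = electrode spacing) at z_j, in a medium of
    conductivity sigma (S/m)."""
    h = z[1] - z[0]
    zi, zj = np.meshgrid(z, z, indexing="ij")
    return (h / (2.0 * sigma)) * (np.sqrt((zi - zj) ** 2 + R ** 2) - np.abs(zi - zj))

z = np.arange(8) * 1e-4                      # 8 electrodes, 100 um apart
F = forward_matrix(z)

csd_true = np.array([0.0, 1.0, -2.0, 1.0, 0.0, 0.5, -0.5, 0.0])
phi = F @ csd_true                           # forward-model potentials
csd_hat = np.linalg.solve(F, phi)            # inverse CSD step
```

With model-generated potentials the inversion is exact; the paper's contribution lies in what happens with real data, boundary sources, and the choice (and jittering) of the reconstruction grid.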
NASA Astrophysics Data System (ADS)
Le Touz, Nicolas; Dumoulin, Jean; Soldovieri, Francesco
2016-04-01
In this numerical study we present an approach for introducing a priori information into a method that identifies the internal thermal property field of a thick wall from infrared thermography measurements. The method is based on coupling with an electromagnetic reconstruction method whose data are obtained from Ground Penetrating Radar (GPR) measurements ([1], [2]). This new method aims at improving the accuracy of reconstructions performed with the thermal reconstruction method alone under quasi-periodic natural solicitation ([3], [4]). Indeed, these thermal reconstructions, without a priori information, have the disadvantage of being performed over the entire studied wall. Through the contribution of GPR information, it becomes possible to focus on the internal zones that may contain defects. These zones are obtained by defining subdomains around remarkable points identified in the GPR reconstruction and considered as belonging to a discontinuity. For thermal reconstruction without a priori information, we minimize a functional equal to the quadratic residual between the measurements and the output of the direct model. By defining search regions around these potential defects, and thus constraining the thermal parameters outside them, we add information to the data to be reconstructed. The minimization of the functional is then modified by the contribution of these constraints: we no longer minimize the residual alone, but the residual together with the constraints, which changes the direction followed by the optimization algorithm in the space of thermal parameters to be reconstructed. Providing a priori information may then yield reconstructions with higher residuals but better-estimated thermal parameters, whether for locating potential defects or for the reconstructed values of these parameters. In particular, it is the case for air defects or more generally for defects having a
In-motion coarse alignment method based on reconstructed observation vectors
NASA Astrophysics Data System (ADS)
Xu, Xiang; Xu, Xiaosu; Yao, Yiqing; Wang, Zhicheng
2017-03-01
In this paper, an in-motion coarse alignment method is proposed based on reconstructed observation vectors. Since complicated noises are contained in the outputs of the inertial sensors, the components of the measurement observation vectors, which are constructed from the sensors' outputs, are analyzed in detail. To suppress the high-frequency noises, an effective digital filter based on Infinite Impulse Response (IIR) technology is employed. On the basis of the parameter models of the observation vectors, a new form of Kalman filter, which is also an adaptive filter, is designed to identify the parameter matrix. Furthermore, a robust filtering technique based on Huber's M-estimation is employed to suppress the gross outliers caused by the movement of the carrier. A simulation test and a field trial were designed to verify the proposed method. All the alignment results demonstrate that the performance of the proposed method is superior to the conventional optimization-based alignment and the digital filter alignment, which are the currently popular methods.
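Huber's M-estimation, used here to robustify the filter, replaces the quadratic loss with one that grows only linearly for large residuals, which amounts to down-weighting outliers. A generic iteratively-reweighted sketch of that idea on a location estimate (not the authors' filter):

```python
import numpy as np

def huber_weight(residual, scale, k=1.345):
    """Huber weights: 1 for small residuals, k/|r| beyond the threshold."""
    r = np.abs(residual) / scale
    return np.where(r <= k, 1.0, k / np.maximum(r, 1e-12))

def robust_mean(x, k=1.345, iters=20):
    """Iteratively reweighted location estimate: the M-estimation idea that
    makes an update insensitive to gross outliers."""
    mu = np.median(x)
    scale = max(1.4826 * np.median(np.abs(x - mu)), 1e-9)   # MAD scale
    for _ in range(iters):
        w = huber_weight(x - mu, scale, k)
        mu = float(np.sum(w * x) / np.sum(w))
    return mu

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(1.0, 0.1, 50), [50.0]])  # one gross outlier
mu_robust = robust_mean(x)
mu_plain = float(x.mean())
```

The sample mean is dragged far from 1.0 by the single outlier, while the Huber-weighted estimate is barely affected; inside a Kalman filter the same weighting is applied to the innovation.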
An anatomically driven anisotropic diffusion filtering method for 3D SPECT reconstruction
NASA Astrophysics Data System (ADS)
Kazantsev, Daniil; Arridge, Simon R.; Pedemonte, Stefano; Bousse, Alexandre; Erlandsson, Kjell; Hutton, Brian F.; Ourselin, Sébastien
2012-06-01
In this study, we aim to reconstruct single-photon emission computed tomography images using anatomical information from magnetic resonance imaging as a priori knowledge about the activity distribution. The trade-off between anatomical and emission data is one of the main concerns for such studies. In this work, we propose an anatomically driven anisotropic diffusion filter (ADADF) within a penalized maximum-likelihood expectation-maximization optimization framework. The ADADF has improved edge-preserving denoising characteristics compared to other smoothing penalty terms based on quadratic and non-quadratic functions. The proposed method has the important ability to retain information which is absent in the anatomy. To make our approach more stable against the noise-edge classification problem, robust statistics have been employed. The ADADF is compared with a successful anatomically driven technique, the Bowsher prior (BP). Quantitative assessment using simulated and clinical neuroreceptor volumetric data shows the advantage of the ADADF over the BP. For the modelled data, the overall image resolution, the contrast, the signal-to-noise ratio and the ability to preserve important features in the data are all improved by the proposed method. For clinical data, the contrast in the region of interest is significantly improved using the ADADF compared to the BP, while noise is successfully eliminated.
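The non-anatomical ancestor of the ADADF is the classic Perona-Malik scheme: the diffusion conductance drops across strong gradients, so flat regions are smoothed while edges survive. A minimal sketch without the MRI-driven modulation or the robust-statistics extension described above:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=50, kappa=0.1, gamma=0.2):
    """Perona-Malik anisotropic diffusion (image-driven variant only)."""
    c = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping conductance
    u = img.astype(float).copy()
    for _ in range(n_iter):
        dn = np.roll(u, -1, axis=0) - u       # differences to 4 neighbours
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u = u + gamma * (c(dn) * dn + c(ds) * ds + c(de) * de + c(dw) * dw)
    return u

# piecewise-constant phantom with one sharp edge, plus noise
rng = np.random.default_rng(1)
img = np.zeros((32, 32))
img[:, 16:] = 1.0
noisy = img + 0.05 * rng.standard_normal(img.shape)
den = anisotropic_diffusion(noisy)
```

Noise gradients are small relative to `kappa` and get diffused away, while the unit-height edge has near-zero conductance and is preserved; the ADADF replaces the image-driven conductance with one steered by the MRI anatomy.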
Azevedo, Teresa C S; Tavares, João Manuel R S; Vaz, Mário A P
2010-06-01
This work presents a volumetric approach to reconstruct and characterise 3D models of external anatomical structures from 2D images. Volumetric methods represent the final volume using a finite set of 3D geometric primitives, usually designated as voxels. Thus, from an image sequence acquired around the object to be reconstructed, the images are calibrated and the 3D models of the referred object are built using different volumetric approaches. The final goal is to analyse the accuracy of the obtained models when modifying some of the parameters of the considered volumetric methods, such as the type of voxel projection (rectangular or accurate), the way the consistency of the voxels is tested (silhouettes only, or silhouettes and photo-consistency) and the initial size of the reconstructed volume.
NASA Astrophysics Data System (ADS)
Nouizi, F.; Erkol, H.; Luk, A.; Marks, M.; Unlu, M. B.; Gulsen, G.
2016-10-01
We previously introduced photo-magnetic imaging (PMI), an imaging technique that illuminates the medium under investigation with near-infrared light and measures the induced temperature increase using magnetic resonance thermometry (MRT). Using a multiphysics solver combining photon migration and heat diffusion, PMI models the spatiotemporal distribution of temperature variation and recovers high-resolution optical absorption images from these temperature maps. In this paper, we present a new fast non-iterative reconstruction algorithm for PMI. This new algorithm uses analytic methods during the resolution of the forward problem and the assembly of the sensitivity matrix. We validate our new analytic-based algorithm against the first-generation finite element method (FEM) based reconstruction algorithm previously developed by our team. The validation is performed using first synthetic data and afterwards real MRT-measured temperature maps. Our new method accelerates the reconstruction process 30-fold compared to a single iteration of the FEM-based algorithm.
Mansfeldt, Cresten B.; Heavner, Gretchen W.; Rowe, Annette R.; Hayete, Boris; Church, Bruce W.; Richardson, Ruth E.
2016-01-01
The interpretation of high-throughput gene expression data for non-model microorganisms remains obscured because of the high fraction of hypothetical genes and the limited number of methods for the robust inference of gene networks. Therefore, to elucidate gene-gene and gene-condition linkages in the bioremediation-important genus Dehalococcoides, we applied a Bayesian inference strategy called Reverse Engineering/Forward Simulation (REFS™) on transcriptomic data collected from two organohalide-respiring communities containing different Dehalococcoides mccartyi strains: the Cornell University mixed community D2 and the commercially available KB-1® bioaugmentation culture. In total, 49 and 24 microarray datasets were included in the REFS™ analysis to generate an ensemble of 1,000 networks for the Dehalococcoides population in the Cornell D2 and KB-1® culture, respectively. Considering only linkages that appeared in the consensus network for each culture (exceeding the determined frequency cutoff of ≥ 60%), the resulting Cornell D2 and KB-1® consensus networks maintained 1,105 nodes (genes or conditions) with 974 edges and 1,714 nodes with 1,455 edges, respectively. These consensus networks captured multiple strong and biologically informative relationships. One of the main highlighted relationships shared between these two cultures was a direct edge between the transcript encoding for the major reductive dehalogenase (tceA (D2) or vcrA (KB-1®)) and the transcript for the putative S-layer cell wall protein (DET1407 (D2) or KB1_1396 (KB-1®)). Additionally, transcripts for two key oxidoreductases (a [Ni Fe] hydrogenase, Hup, and a protein with similarity to a formate dehydrogenase, “Fdh”) were strongly linked, generalizing a strong relationship noted previously for Dehalococcoides mccartyi strain 195 to multiple strains of Dehalococcoides. Notably, the pangenome array utilized when monitoring the KB-1® culture was capable of resolving signals from
Mu, Zhiping; Hong, Baoming; Li, Shimin; Liu, Yi-Hwa
2009-01-01
Coded aperture imaging for two-dimensional (2D) planar objects has been investigated extensively in the past, whereas little success has been achieved in imaging 3D objects using this technique. In this article, the authors present a novel method of 3D single photon emission computerized tomography (SPECT) reconstruction for near-field coded aperture imaging. Multiangular coded aperture projections are acquired and a stack of 2D images is reconstructed separately from each of the projections. Secondary projections are subsequently generated from the reconstructed image stacks based on the geometry of parallel-hole collimation and the variable magnification of near-field coded aperture imaging. Sinograms of cross-sectional slices of 3D objects are assembled from the secondary projections, and the ordered-subset expectation-maximization algorithm is employed to reconstruct the cross-sectional image slices from the sinograms. Experiments were conducted using a customized capillary tube phantom and a micro hot rod phantom. Imaged at approximately 50 cm from the detector, hot rods in the phantom with diameters as small as 2.4 mm could be discerned in the reconstructed SPECT images. These results have demonstrated the feasibility of the authors' 3D coded aperture image reconstruction algorithm for SPECT, representing an important step in their effort to develop a high sensitivity and high resolution SPECT imaging system. PMID:19544769
Zhou, Yi-Jun; Yunus, Akbar; Tian, Zheng; Chen, Jiang-Tao; Wang, Chong; Xu, Lei-Lei
2016-01-01
Hemipelvic resections for primary bone tumours require reconstruction to restore weight bearing along anatomic axes. However, reconstruction of the pelvic arch remains a major surgical challenge because of the high rate of associated complications. We used the pedicle screw-rod system to reconstruct the pelvis. The purpose of this study was to investigate the operative indications and technique of the pedicle screw-rod system in reconstructing the stability of the sacroiliac joint after resection of sacroiliac joint tumours, and to assess the oncological and functional outcomes and complication rate following this procedure. The average MSTS (Musculoskeletal Tumour Society) score was 26.5 at either three months after surgery or at the latest follow-up. Seven patients had surgery-related complications, including wound dehiscence in one, infection in two, local necrosis in four (including infection in two), sciatic nerve palsy in one and pubic symphysis subluxation in one. There was no screw loosening or deep vein thrombosis in this series. Using a pedicle screw-rod system after resection of a sacroiliac joint tumour is an acceptable method of pelvic reconstruction because of its reduced risk of complications and satisfactory functional outcome, as well as its feasibility for reconstruction after type IV pelvic tumour resection without elaborate preoperative customisation. Level of evidence: Level IV, therapeutic study. PMID:27095944
Lutton, E. Josiah; Lammers, Wim J. E. P.; James, Sean
2017-01-01
Background: The fibrous structure of the myometrium has previously been characterised at high resolutions in small tissue samples (< 100 mm3) and at low resolutions (∼500 μm per voxel edge) in whole-organ reconstructions. However, no high-resolution visualisation of the myometrium at the organ level has previously been attained. Methods and results: We have developed a technique to reconstruct the whole myometrium from serial histological slides, at a resolution of approximately 50 μm per voxel edge. Reconstructions of samples taken from human and rat uteri are presented here, along with histological verification of the reconstructions and detailed investigation of the fibrous structure of these uteri, using a range of tools specifically developed for this analysis. These reconstruction techniques enable the high-resolution rendering of global structure previously observed at lower resolution. Moreover, structures observed previously in small portions of the myometrium can be observed in the context of the whole organ. The reconstructions are in direct correspondence with the original histological slides, which allows the inspection of the anatomical context of any features identified in the three-dimensional reconstructions. Conclusions and significance: The methods presented here have been used to generate a faithful representation of myometrial smooth muscle at a resolution of ∼50 μm per voxel edge. Characterisation of the smooth muscle structure of the myometrium by means of this technique revealed a detailed view of previously identified global structures in addition to a global view of the microarchitecture. A suite of visualisation tools allows researchers to interrogate the histological microarchitecture. These methods will be applicable to other smooth muscle tissues to analyse fibrous microarchitecture. PMID:28301486
A new multiresolution method applied to the 3D reconstruction of small bodies
NASA Astrophysics Data System (ADS)
Capanna, C.; Jorda, L.; Lamy, P. L.; Gesquiere, G.
2012-12-01
The knowledge of the three-dimensional (3D) shape of small solar system bodies, such as asteroids and comets, is essential in determining their global physical properties (volume, density, rotational parameters). It also allows performing geomorphological studies of their surface through the characterization of topographic features, such as craters, faults, landslides, grooves and hills. In the case of small bodies, the shape is often constrained only by images obtained by interplanetary spacecraft. Several techniques are available to retrieve 3D global shapes from these images. Stereography, which relies on control points, has been extensively used in the past, most recently to reconstruct the nucleus of comet 9P/Tempel 1 [Thomas (2007)]. The most accurate methods are however photogrammetry and photoclinometry, often used in conjunction with stereography. Stereophotogrammetry (SPG) has been used to reconstruct the shapes of the nucleus of comet 19P/Borrelly [Oberst (2004)] and of the asteroid (21) Lutetia [Preusker (2012)]. Stereophotoclinometry (SPC) has allowed retrieving an accurate shape of the asteroids (25143) Itokawa [Gaskell (2008)] and (2867) Steins [Jorda (2012)]. We present a new photoclinometry method based on the deformation of a 3D triangular mesh [Capanna (2012)] using a multi-resolution scheme which starts from a sphere of 300 facets and yields a shape model with 100,000 facets. Our strategy is inspired by the "Full Multigrid" method [Botsch (2007)] and consists in alternating between two resolutions in order to obtain an optimized shape model at a given resolution before moving to the higher resolution. In order to improve the robustness of our method, we use a set of control points obtained by stereography. Our method has been tested on images acquired by the OSIRIS visible camera, aboard the Rosetta spacecraft of the European Space Agency, during the fly-by of asteroid (21) Lutetia in July 2010. We present the corresponding 3D shape
A new combined prior based reconstruction method for compressed sensing in 3D ultrasound imaging
NASA Astrophysics Data System (ADS)
Uddin, Muhammad S.; Islam, Rafiqul; Tahtali, Murat; Lambert, Andrew J.; Pickering, Mark R.
2015-03-01
Ultrasound (US) imaging is one of the most popular medical imaging modalities, with 3D US imaging gaining popularity recently due to its considerable advantages over 2D US imaging. However, as it is limited by long acquisition times and the huge amount of data processing it requires, methods for reducing these factors have attracted considerable research interest. Compressed sensing (CS) is one of the best candidates for accelerating the acquisition rate and reducing the data-processing time without degrading image quality. However, CS is prone to introducing noise-like artefacts due to random under-sampling. To address this issue, we propose a combined prior-based reconstruction method for 3D US imaging. A Laplacian mixture model (LMM) constraint in the wavelet domain is combined with a total variation (TV) constraint to create a new regularization prior. An experimental evaluation conducted to validate our method using synthetic 3D US images shows that it performs better than other approaches in terms of both qualitative and quantitative measures.
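The total-variation half of the combined prior can be illustrated on a 1D toy problem: a piecewise-constant signal is recovered from random under-sampled measurements by gradient descent on a least-squares term plus a smoothed TV penalty (the LMM wavelet constraint and the 3D US specifics are omitted in this sketch; all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 64, 32
x_true = np.zeros(n)
x_true[20:40] = 1.0                                # piecewise-constant signal
A = rng.standard_normal((m, n)) / np.sqrt(m)       # random measurement operator
y = A @ x_true                                     # under-sampled data

lam, eps, step = 0.02, 1e-4, 0.05
x = np.zeros(n)
for _ in range(20000):
    grad_fid = A.T @ (A @ x - y)                   # data-fidelity gradient
    d = np.diff(x)
    g = d / np.sqrt(d ** 2 + eps)                  # smoothed-TV subgradient
    grad_tv = np.zeros(n)
    grad_tv[:-1] -= g                              # assemble d(TV)/dx
    grad_tv[1:] += g
    x -= step * (grad_fid + lam * grad_tv)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Without the TV term the iteration converges to the minimum-norm solution, which is far from the true signal; the TV prior selects the piecewise-constant candidate among all signals consistent with the data, which is exactly the role it plays in suppressing the noise-like CS artefacts mentioned above.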
Wagner, Roland; Helin, Tapio; Obereder, Andreas; Ramlau, Ronny
2016-02-20
The imaging quality of modern ground-based telescopes such as the planned European Extremely Large Telescope is affected by atmospheric turbulence. In consequence, they heavily depend on stable and high-performance adaptive optics (AO) systems. Using measurements of incoming light from guide stars, an AO system compensates for the effects of turbulence by adjusting so-called deformable mirror(s) (DMs) in real time. In this paper, we introduce a novel reconstruction method for ground layer adaptive optics. In the literature, a common approach to this problem is to use Bayesian inference in order to model the specific noise structure appearing due to spot elongation. This approach leads to large coupled systems with high computational effort. Recently, fast solvers of linear order, i.e., with computational complexity O(n), where n is the number of DM actuators, have emerged. However, the quality of such methods typically degrades in low flux conditions. Our key contribution is to achieve the high quality of the standard Bayesian approach while at the same time maintaining the linear order speed of the recent solvers. Our method is based on performing a separate preprocessing step before applying the cumulative reconstructor (CuReD). The efficiency and performance of the new reconstructor are demonstrated using the OCTOPUS, the official end-to-end simulation environment of the ESO for extremely large telescopes. For more specific simulations we also use the MOST toolbox.
NASA Astrophysics Data System (ADS)
Mantini, D.; Alleva, G.; Comani, S.
2005-10-01
Fetal magnetocardiography (fMCG) allows monitoring the fetal heart function through algorithms able to retrieve the fetal cardiac signal, but no standardized automatic model has become available so far. In this paper, we describe an automatic method that restores the fetal cardiac trace from fMCG recordings by means of a weighted summation of fetal components separated with independent component analysis (ICA) and identified through dedicated algorithms that analyse the frequency content and temporal structure of each source signal. Multichannel fMCG datasets of 66 healthy and 4 arrhythmic fetuses were used to validate the automatic method with respect to a classical procedure requiring the manual classification of fetal components by an expert investigator. ICA was run with input clusters of different dimensions to simulate various MCG systems. Detection rates, true negative and false positive component categorization, QRS amplitude, standard deviation and signal-to-noise ratio of reconstructed fetal signals, and real and per cent QRS differences between paired fetal traces retrieved automatically and manually were calculated to quantify the performances of the automatic method. Its robustness and reliability, particularly evident with the use of large input clusters, might increase the diagnostic role of fMCG during the prenatal period.
Au, Anthony G; Otto, David D; Raso, V James; Amirfazli, Alidad
2005-04-01
To increase knee stability following anterior cruciate ligament (ACL) reconstruction, development of increasingly stronger and stiffer fixation is required. This study assessed the initial pullout force, stiffness of fixation, and failure modes for a novel hybrid fixation method combining periosteal and direct fixation using porcine femoral bone. A soft tissue graft was secured by combining both an interference screw and an EndoButton (Smith and Nephew Endoscopy, Andover, MA). The results were compared with the traditional direct fixation method using a titanium interference screw. Twenty porcine hindlimbs were divided into two groups. Specimens were loaded in line with the bone tunnel on a materials testing machine. Maximum pullout force of the hybrid fixation (588+/-37 N) was significantly greater than with an interference screw alone (516+/-37 N). The stiffness of the hybrid fixation (52.1+/-12.8 N/mm) was similar to that of screw fixation (56.5+/-10.2 N/mm). Graft pullout was predominant for screw fixation, whereas a combination of graft pullout and graft failure was seen for hybrid fixation. These results indicate that initial pullout force of soft tissue grafts can be increased by using the suggested novel hybrid fixation method.
Material depth reconstruction method of multi-energy X-ray images using neural network.
Lee, Woo-Jin; Kim, Dae-Seung; Kang, Sung-Won; Yi, Won-Jin
2012-01-01
With the advent of technology, multi-energy X-ray imaging is a promising technique that can reduce the patient's dose and provide functional imaging. A two-dimensional photon-counting detector to provide multi-energy imaging is under development. In this work, we present a material decomposition method using multi-energy images. To acquire multi-energy images, a Monte Carlo simulation was performed. The X-ray spectrum was modeled and the ripple effect was considered. Using the dissimilar characteristics in energy-dependent X-ray attenuation of each material, multiple-energy X-ray images were decomposed into material depth images. A feedforward neural network was used to fit the multi-energy images to the material depth images. To train the network, step-wedge phantom images were used. Finally, the neural network decomposed the multi-energy X-ray images into material depth images. To demonstrate the concept of this method, we applied it to simulated images of a 3D head phantom. The results show that the neural network method performed material depth reconstruction effectively.
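At its core, material depth decomposition inverts Beer-Lambert attenuation: the log-attenuation at each energy bin is (approximately) linear in the material depths. The sketch below solves this linearized model by least squares in place of the paper's neural network; the attenuation coefficients are purely illustrative, not tabulated values:

```python
import numpy as np

# assumed linear attenuation coefficients (1/cm) of two materials
# at 3 energy bins (illustrative numbers only)
mu = np.array([[0.35, 0.25, 0.20],    # soft-tissue-like
               [1.50, 0.80, 0.45]])   # bone-like
I0 = 1e5                              # incident photon count

def forward(thickness):
    """Beer-Lambert intensities at each energy bin for given depths (cm)."""
    return I0 * np.exp(-thickness @ mu)

def decompose(I):
    """Recover material depths from multi-energy intensities via least
    squares on the log-attenuation (the role the neural network plays
    in the paper, which can also absorb spectral nonlinearities)."""
    p = -np.log(I / I0)                       # p = mu^T t, linear in depths
    t, *_ = np.linalg.lstsq(mu.T, p, rcond=None)
    return t

t_true = np.array([5.0, 1.2])                 # 5 cm tissue + 1.2 cm bone
I = forward(t_true)
t_hat = decompose(I)
```

With ideal monoenergetic bins the linear inversion is exact; polychromatic spectra, the ripple effect, and detector response make the real mapping nonlinear, which is the motivation for fitting it with a trained network instead.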
NASA Astrophysics Data System (ADS)
Chun, Se Young; Fessler, Jeffrey A.; Dewaraja, Yuni K.
2013-09-01
Quantitative SPECT techniques are important for many applications including internal emitter therapy dosimetry where accurate estimation of total target activity and activity distribution within targets are both potentially important for dose-response evaluations. We investigated non-local means (NLM) post-reconstruction filtering for accurate I-131 SPECT estimation of both total target activity and the 3D activity distribution. We first investigated activity estimation versus number of ordered-subsets expectation-maximization (OSEM) iterations. We performed simulations using the XCAT phantom with tumors containing a uniform and a non-uniform activity distribution, and measured the recovery coefficient (RC) and the root mean squared error (RMSE) to quantify total target activity and activity distribution, respectively. We observed that using more OSEM iterations is essential for accurate estimation of RC, but may or may not improve RMSE. We then investigated various post-reconstruction filtering methods to suppress noise at high iteration while preserving image details so that both RC and RMSE can be improved. Recently, NLM filtering methods have shown promising results for noise reduction. Moreover, NLM methods using high-quality side information can improve image quality further. We investigated several NLM methods with and without CT side information for I-131 SPECT imaging and compared them to conventional Gaussian filtering and to unfiltered methods. We studied four different ways of incorporating CT information in the NLM methods: two known (NLM CT-B and NLM CT-M) and two newly considered (NLM CT-S and NLM CT-H). We also evaluated the robustness of NLM filtering using CT information to erroneous CT. NLM CT-S and NLM CT-H yielded comparable RC values to unfiltered images while substantially reducing RMSE. NLM CT-S achieved -2.7 to 2.6% increase of RC compared to no filtering and NLM CT-H yielded up to 6% decrease in RC while other methods yielded lower RCs
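The non-local means idea investigated above replaces each sample by a weighted average of samples with similar surrounding patches, which is what lets it suppress noise at high iteration counts without blurring detail. A 1D sketch without CT side information (parameters are illustrative):

```python
import numpy as np

def nlm_1d(x, patch=3, search=10, h=0.3):
    """Non-local means on a 1D signal: patch half-width `patch`, search
    half-width `search`, filtering strength `h`."""
    n = len(x)
    pad = np.pad(x, patch, mode='reflect')
    patches = np.stack([pad[i:i + 2 * patch + 1] for i in range(n)])
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - search), min(n, i + search + 1)
        d2 = np.mean((patches[lo:hi] - patches[i]) ** 2, axis=1)
        w = np.exp(-d2 / h ** 2)              # patch-similarity weights
        out[i] = np.sum(w * x[lo:hi]) / np.sum(w)
    return out

rng = np.random.default_rng(3)
clean = np.sign(np.sin(np.linspace(0.0, 4 * np.pi, 200)))  # piecewise-constant
noisy = clean + 0.2 * rng.standard_normal(200)
den = nlm_1d(noisy)
```

Patches on opposite sides of a jump are dissimilar and get near-zero weight, so edges are preserved; the CT-guided variants in the paper compute the similarity weights (partly or wholly) from the anatomical image instead of the noisy emission image.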
Listening to the Noise: Random Fluctuations Reveal Gene Network Parameters
NASA Astrophysics Data System (ADS)
Munsky, Brian; Trinh, Brooke; Khammash, Mustafa
2010-03-01
The cellular environment is abuzz with noise originating from the inherent random motion of reacting molecules in the living cell. In this noisy environment, clonal cell populations exhibit cell-to-cell variability that can manifest as significant phenotypic differences. Noise-induced stochastic fluctuations in cellular constituents can be measured and their statistics quantified using flow cytometry, single-molecule fluorescence in situ hybridization, time-lapse fluorescence microscopy and other single-cell and single-molecule measurement techniques. We show that these random fluctuations carry within them valuable information about the underlying genetic network. Far from being a nuisance, the ever-present cellular noise acts as a rich source of excitation that, when processed through a gene network, carries a distinctive fingerprint encoding a wealth of information about that network. We demonstrate that in some cases the analysis of these random fluctuations enables the full identification of network parameters, including those that may otherwise be difficult to measure. We use theoretical investigations to establish experimental guidelines for the identification of gene regulatory networks, and we apply these guidelines to experimentally identify predictive models for different regulatory mechanisms in bacteria and yeast.
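The principle that fluctuations encode parameters can be seen in the simplest gene expression model: for a birth-death process, the autocorrelation of the fluctuations decays at the degradation rate, and the mean then fixes the synthesis rate. A sketch using exact stochastic simulation (illustrative rates, not the paper's systems):

```python
import numpy as np

rng = np.random.default_rng(4)

def gillespie_birth_death(k, g, t_end):
    """Exact (Gillespie) simulation of constitutive expression:
    0 -> X at rate k, X -> 0 at rate g*x."""
    t, x = 0.0, 0
    ts, xs = [0.0], [0]
    while t < t_end:
        a_birth, a_death = k, g * x
        a0 = a_birth + a_death
        t += rng.exponential(1.0 / a0)
        x += 1 if rng.random() * a0 < a_birth else -1
        ts.append(t)
        xs.append(x)
    return np.array(ts), np.array(xs)

k, g = 20.0, 0.5
ts, xs = gillespie_birth_death(k, g, 1000.0)

# sample the trajectory on a regular grid after a burn-in period
grid = np.arange(50.0, 1000.0, 0.2)
x_grid = xs[np.searchsorted(ts, grid) - 1]
mean = x_grid.mean()

# fluctuation autocorrelation decays as exp(-g*tau): the lag-1
# autocorrelation identifies g, and the mean then gives k = g * mean
dx = x_grid - mean
r1 = np.dot(dx[:-1], dx[1:]) / np.dot(dx, dx)
g_hat = -np.log(r1) / 0.2
k_hat = g_hat * mean
```

The stationary mean alone only constrains the ratio k/g; it is the temporal structure of the noise that separates the two rates, which is the essence of the identification strategy described above.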
Toxic Diatom Aldehydes Affect Defence Gene Networks in Sea Urchins
Varrella, Stefano; Ruocco, Nadia; Ianora, Adrianna; Bentley, Matt G.; Costantini, Maria
2016-01-01
Marine organisms possess a series of cellular strategies to counteract the negative effects of toxic compounds, including the massive reorganization of gene expression networks. Here we report the modulated dose-dependent response of genes activated by diatom polyunsaturated aldehydes (PUAs) in the sea urchin Paracentrotus lividus. PUAs are secondary metabolites derived from the oxidation of fatty acids; they induce deleterious effects on the reproduction and development of planktonic and benthic organisms that feed on these unicellular algae, and also have anti-cancer activity. Our previous results showed that PUAs target several genes implicated in different functional processes in this sea urchin. Using interactomic Ingenuity Pathway Analysis we now show that the genes targeted by PUAs are correlated with four HUB genes, NF-κB, p53, δ-2-catenin and HIF1A, which have not been previously reported for P. lividus. We propose a working model describing hypothetical pathways potentially involved in the toxic aldehyde stress response in sea urchins. This represents the first report on gene networks affected by PUAs, opening new perspectives in understanding the cellular mechanisms underlying the response of benthic organisms to diatom exposure. PMID:26914213
Programmable cells: Interfacing natural and engineered gene networks
NASA Astrophysics Data System (ADS)
Kobayashi, Hideki; Kærn, Mads; Araki, Michihiro; Chung, Kristy; Gardner, Timothy S.; Cantor, Charles R.; Collins, James J.
2004-06-01
Novel cellular behaviors and characteristics can be obtained by coupling engineered gene networks to the cell's natural regulatory circuitry through appropriately designed input and output interfaces. Here, we demonstrate how an engineered genetic circuit can be used to construct cells that respond to biological signals in a predetermined and programmable fashion. We employ a modular design strategy to create Escherichia coli strains where a genetic toggle switch is interfaced with: (i) the SOS signaling pathway responding to DNA damage, and (ii) a transgenic quorum sensing signaling pathway from Vibrio fischeri. The genetic toggle switch endows these strains with binary response dynamics and an epigenetic inheritance that supports a persistent phenotypic alteration in response to transient signals. These features are exploited to engineer cells that form biofilms in response to DNA-damaging agents and cells that activate protein synthesis when the cell population reaches a critical density. Our work represents a step toward the development of "plug-and-play" genetic circuitry that can be used to create cells with programmable behaviors. heterologous gene expression | synthetic biology | Escherichia coli
Mechanistically Consistent Reduced Models of Synthetic Gene Networks
Mier-y-Terán-Romero, Luis; Silber, Mary; Hatzimanikatis, Vassily
2013-01-01
Designing genetic networks with desired functionalities requires an accurate mathematical framework that accounts for the essential mechanistic details of the system. Here, we formulate a time-delay model of protein translation and mRNA degradation by systematically reducing a detailed mechanistic model that explicitly accounts for the ribosomal dynamics and the cleaving of mRNA by endonucleases. We exploit various technical and conceptual advantages that our time-delay model offers over the mechanistic model to probe the behavior of a self-repressing gene over wide regions of parameter space. We show that a heuristic time-delay model of protein synthesis of a commonly used form yields a notably different prediction for the parameter region where sustained oscillations occur. This suggests that such heuristics can lead to erroneous results. The functional forms that arise from our systematic reduction can be used for every system that involves transcription and translation and they could replace the commonly used heuristic time-delay models for these processes. The results from our analysis have important implications for the design of synthetic gene networks and stress that such design must be guided by a combination of heuristic models and mechanistic models that include all relevant details of the process. PMID:23663853
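A heuristic time-delay model of the commonly used form that the authors critique can be written, for a self-repressing gene, as follows (a generic textbook formulation for illustration; the Hill-type repression term and the symbols are not taken from the paper's reduced model):

```latex
\frac{\mathrm{d}m}{\mathrm{d}t} = \frac{\alpha}{1 + \bigl(p(t-\tau)/K\bigr)^{n}} - \delta_m\, m(t),
\qquad
\frac{\mathrm{d}p}{\mathrm{d}t} = k\, m(t) - \delta_p\, p(t),
```

where m and p are mRNA and protein levels, τ lumps the transcriptional and translational delays, and α, K, n, δ_m, δ_p are rate parameters. The paper's point is that the functional form of such delay terms should be obtained by systematic reduction of the mechanistic model rather than posited heuristically.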
A gene network simulator to assess reverse engineering algorithms.
Di Camillo, Barbara; Toffolo, Gianna; Cobelli, Claudio
2009-03-01
In the context of reverse engineering of biological networks, simulators are helpful to test and compare the accuracy of different reverse-engineering approaches in a variety of experimental conditions. A novel gene-network simulator is presented that resembles some of the main features of transcriptional regulatory networks related to topology, interaction among regulators of transcription, and expression dynamics. The simulator generates network topology according to the current knowledge of biological network organization, including scale-free distribution of the connectivity and clustering coefficient independent of the number of nodes in the network. It uses fuzzy logic to represent interactions among the regulators of each gene, integrated with differential equations to generate continuous data, comparable to real data for variety and dynamic complexity. Finally, the simulator accounts for saturation in the response to regulation and transcription activation thresholds and shows robustness to perturbations. It therefore provides a reliable and versatile test bed for reverse engineering algorithms applied to microarray data. Since the simulator describes regulatory interactions and expression dynamics as two distinct, although interconnected aspects of regulation, it can also be used to test reverse engineering approaches that use both microarray and protein-protein interaction data in the process of learning. A first software release is available at http://www.dei.unipd.it/~dicamill/software/netsim as an R programming language package.
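The scale-free topology generation described above can be sketched with a plain preferential-attachment scheme (a generic Barabási-Albert-style sketch in Python, not the NetSim code, which is an R package; `scale_free_topology` and its parameters are illustrative names):

```python
import random

def scale_free_topology(n_genes, m=2, seed=0):
    """Grow a regulatory topology by preferential attachment so that the
    connectivity follows an approximate power-law (scale-free) distribution."""
    rng = random.Random(seed)
    edges = set()
    repeated = list(range(m))          # each node listed once per degree unit
    for new in range(m, n_genes):
        chosen = set()
        while len(chosen) < m:         # pick m distinct, degree-biased targets
            chosen.add(rng.choice(repeated))
        for t in chosen:
            edges.add((new, t))
            repeated += [new, t]       # both endpoints gain one degree unit
    return edges

net = scale_free_topology(200)         # 198 new genes x 2 edges each
```

A full simulator would then attach fuzzy-logic regulation rules and differential equations to this topology; the sketch covers only the degree-biased wiring step.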
Muon Energy Reconstruction Through the Multiple Scattering Method in the NO$\mathrm{\nu}$A Detectors
Psihas Olmedo, Silvia Fernanda
2015-01-01
Neutrino energy measurements are a crucial component in the experimental study of neutrino oscillations. These measurements are done through the reconstruction of neutrino interactions and energy measurements of their products. This thesis presents the development of a technique to reconstruct the energy of muons from neutrino interactions in the NO$\mathrm{\nu}$A detectors.
NASA Astrophysics Data System (ADS)
Islam, Fahima Fahmida
Sparse tomography is an efficient technique that saves time as well as minimizes cost. However, the small number of angular data renders the image reconstruction problem ill-posed: even with exact data constraints, the inversion cannot be performed uniquely. Selection of a suitable method to optimize the reconstruction therefore plays an important role in sparse-data CT. The use of a regularization function is a well-known way to control artifacts in limited-angle data acquisition. In this work, we propose a directional total variation regularized, ordered-subset (OS) type image reconstruction method for neutron limited-data CT. Total variation (TV) regularization acts as an edge-preserving regularizer that not only preserves sharp edges but also reduces many of the artifacts that are very common in limited-data CT. However, TV itself is not direction dependent, so it is not well suited to images with a dominant direction, where the variation along that particular direction matters; hence a directional TV is used here as the prior term. TV regularization assumes piecewise smoothness, and since the original image is not piecewise constant, a sparsifying transform is used to convert the image into a sparse, piecewise-constant representation. The objective function combines this regularization term (DTV) with the likelihood function, and an OS-type algorithm is used to optimize it. Generally, two methods are available to make the OS method convergent. This work proposes an OS-type directional-TV regularized likelihood reconstruction method that yields fast convergence as well as good image quality. The initial iteration starts with the filtered back projection (FBP) reconstructed image, and convergence is indicated by the convergence index between two successive reconstructed images. The quality of the image is assessed by showing
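The directional prior described above can be sketched as follows (an illustrative implementation under the assumption that "directional TV" means projecting finite-difference gradients onto a chosen direction before taking the L1 norm; the function name and test image are hypothetical, not from the paper):

```python
import numpy as np

def directional_tv(img, theta):
    """Directional total variation: project the image gradient onto the
    unit vector (cos theta, sin theta) and take the L1 norm, so variation
    along a dominant direction can be penalized selectively."""
    gy, gx = np.gradient(img.astype(float))   # gradients along rows, cols
    d = np.cos(theta) * gx + np.sin(theta) * gy
    return float(np.abs(d).sum())

# A vertical step edge varies only in x: large TV along x, ~0 along y.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
```

In a reconstruction loop this quantity would be added to the (negative) likelihood as the prior term and minimized by the OS-type updates the abstract describes.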
Lei Liu; Feng Zhou; Xue-Ru Bai; Ming-Liang Tao; Zi-Jing Zhang
2016-04-01
Traditionally, the factorization method is applied to reconstruct the 3D geometry of a target from its sequential inverse synthetic aperture radar images. However, this method requires performing cross-range scaling to all the sub-images and thus has a large computational burden. To tackle this problem, this paper proposes a novel method for joint cross-range scaling and 3D geometry reconstruction of steadily moving targets. In this method, we model the equivalent rotational angular velocity (RAV) by a linear polynomial with time, and set its coefficients randomly to perform sub-image cross-range scaling. Then, we generate the initial trajectory matrix of the scattering centers, and solve the 3D geometry and projection vectors by the factorization method with relaxed constraints. After that, the coefficients of the polynomial are estimated from the projection vectors to obtain the RAV. Finally, the trajectory matrix is re-scaled using the estimated rotational angle, and accurate 3D geometry is reconstructed. The two major steps, i.e., the cross-range scaling and the factorization, are performed repeatedly to achieve precise 3D geometry reconstruction. Simulation results have proved the effectiveness and robustness of the proposed method.
NASA Astrophysics Data System (ADS)
Baraldi, Piero; Di Maio, Francesco; Turati, Pietro; Zio, Enrico
2015-08-01
In this work, we propose a modification of the traditional Auto Associative Kernel Regression (AAKR) method which enhances the signal reconstruction robustness, i.e., the capability of reconstructing abnormal signals to the values expected in normal conditions. The modification is based on the definition of a new procedure for the computation of the similarity between the present measurements and the historical patterns used to perform the signal reconstructions. The underlying conjecture for this is that malfunctions causing variations of a small number of signals are more frequent than those causing variations of a large number of signals. The proposed method has been applied to real normal-condition data collected in an industrial plant for energy production. Its performance has been verified considering synthetic and real malfunctions. The obtained results show an improvement in the early detection of abnormal conditions and the correct identification of the signals responsible for triggering the detection.
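The core AAKR reconstruction step, a similarity-weighted average of historical normal-condition patterns, can be sketched as follows (a minimal generic AAKR, not the authors' modified similarity measure; the names and the Gaussian bandwidth `h` are illustrative):

```python
import numpy as np

def aakr_reconstruct(x_obs, X_hist, h=1.0):
    """Auto Associative Kernel Regression: reconstruct the expected
    normal-condition signal vector as a kernel-weighted average of
    historical normal patterns (rows of X_hist)."""
    d2 = ((X_hist - x_obs) ** 2).sum(axis=1)   # squared distances to history
    w = np.exp(-d2 / (2.0 * h ** 2))           # Gaussian similarity weights
    w /= w.sum()
    return w @ X_hist

hist = np.array([[1.0, 2.0], [1.1, 2.1], [0.9, 1.9]])
rec = aakr_reconstruct(np.array([5.0, 2.0]), hist)   # first signal abnormal
```

Because the output is a convex combination of normal patterns, an abnormal input is pulled back toward normal-condition values, which is what makes the residual `x_obs - rec` useful for detection.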
Reconstruction of dynamical perturbations in optical systems by opto-mechanical simulation methods
NASA Astrophysics Data System (ADS)
Gilbergs, H.; Wengert, N.; Frenner, K.; Eberhard, P.; Osten, W.
2012-03-01
High-performance objectives pose very strict limitations on errors present in the system. External mechanical influences can induce structural vibrations in such a system, leading to small deviations of the position and tilt of the optical components inside the objective from the undisturbed system. This can have an impact on the imaging performance, causing blurred images or broadened structures in lithography processes. A concept to detect the motion of the components of an optical system is presented and demonstrated on a simulated system. The method is based on a combination of optical simulation together with mechanical simulation and inverse problem theory. On the optical side raytracing is used for the generation of wavefront data of the system in its current state. A Shack-Hartmann sensor is implemented as a model to gather this data. The sensor can capture wavefront data with high repetition rates to resolve the periodic motion of the vibrating parts. The mechanical side of the system is simulated using multibody dynamics. The system is modeled as a set of rigid bodies (lenses, mounts, barrel), represented by rigid masses connected by springs that represent the coupling between the individual parts. External excitations cause the objective to vibrate. The vibration can be characterized by the eigenmodes and eigenfrequencies of the system. Every state of the movement during the vibration can be expressed as a linear combination of the eigenmodes. The reconstruction of the system geometry from the wavefront data is an inverse problem. Therefore, Tikhonov regularization is used in the process in order to achieve more accurate reconstruction results. This method relies on a certain amount of a-priori information on the system. The mechanical properties of the system are a great source of such information. It is taken into account by performing the calculation in the coordinate system spanned by the eigenmodes of the objective and using information on the
Stochastic models and numerical algorithms for a class of regulatory gene networks.
Fournier, Thomas; Gabriel, Jean-Pierre; Pasquier, Jerôme; Mazza, Christian; Galbete, José; Mermod, Nicolas
2009-08-01
Regulatory gene networks contain generic modules, like those involving feedback loops, which are essential for the regulation of many biological functions (Guido et al. in Nature 439:856-860, 2006). We consider a class of self-regulated genes which are the building blocks of many regulatory gene networks, and study the steady-state distribution of the associated Gillespie algorithm by providing efficient numerical algorithms. We also study a regulatory gene network of interest in gene therapy, using mean-field models with time delays. Convergence of the related time-nonhomogeneous Markov chain is established for a class of linear catalytic networks with feedback loops.
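The Gillespie algorithm whose steady state the authors study can be illustrated for a single self-regulated gene (a minimal sketch with a generic Hill-type repression propensity; the rate constants are illustrative, not taken from the paper):

```python
import random

def gillespie_self_repressor(k0=10.0, gamma=1.0, K=5.0, t_end=50.0, seed=1):
    """Gillespie simulation of a self-repressing gene: protein is produced
    with propensity k0/(1 + n/K) (negative feedback) and degraded with
    propensity gamma*n."""
    rng = random.Random(seed)
    t, n, samples = 0.0, 0, []
    while t < t_end:
        a_prod = k0 / (1.0 + n / K)
        a_deg = gamma * n
        a_tot = a_prod + a_deg
        t += rng.expovariate(a_tot)            # time to next reaction
        if rng.random() * a_tot < a_prod:      # choose which reaction fires
            n += 1
        else:
            n -= 1
        samples.append(n)
    return samples

traj = gillespie_self_repressor()
```

For these parameters the deterministic steady state solves k0/(1 + n/K) = gamma*n, i.e. n = 5, and the stochastic trajectory fluctuates around that level; the paper's contribution is computing such steady-state distributions efficiently rather than by long simulation.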
NASA Technical Reports Server (NTRS)
Gherlone, Marco; Cerracchio, Priscilla; Mattone, Massimiliano; Di Sciuva, Marco; Tessler, Alexander
2011-01-01
A robust and efficient computational method for reconstructing the three-dimensional displacement field of truss, beam, and frame structures, using measured surface-strain data, is presented. Known as shape sensing, this inverse problem has important implications for real-time actuation and control of smart structures, and for monitoring of structural integrity. The present formulation, based on the inverse Finite Element Method (iFEM), uses a least-squares variational principle involving strain measures of Timoshenko theory for stretching, torsion, bending, and transverse shear. Two inverse-frame finite elements are derived using interdependent interpolations whose interior degrees-of-freedom are condensed out at the element level. In addition, relationships between the order of kinematic-element interpolations and the number of required strain gauges are established. As an example problem, a thin-walled, circular cross-section cantilevered beam subjected to harmonic excitations in the presence of structural damping is modeled using iFEM, where, to simulate strain-gauge values and to provide reference displacements, a high-fidelity MSC/NASTRAN shell finite element model is used. Examples of low and high-frequency dynamic motion are analyzed and the solution accuracy examined with respect to various levels of discretization and the number of strain gauges.
Xue, Songchao; Gong, Hui; Jiang, Tao; Luo, Weihua; Meng, Yuanzheng; Liu, Qian; Chen, Shangbin; Li, Anan
2014-01-01
The topology of the cerebral vasculature, which is the energy transport corridor of the brain, can be used to study cerebral circulatory pathways. Limited by the restrictions of the vascular markers and imaging methods, studies on cerebral vascular structure now mainly focus on either observation of the macro vessels in a whole brain or imaging of the micro vessels in a small region. Simultaneous vascular studies of arteries, veins and capillaries have not been achieved in the whole brain of mammals. Here, we have combined the improved gelatin-Indian ink vessel perfusion process with Micro-Optical Sectioning Tomography for imaging the vessel network of an entire mouse brain. With 17 days of work, an integral dataset for the entire cerebral vessels was acquired. The voxel resolution is 0.35×0.4×2.0 µm³ for the whole brain. Besides the observations of fine and complex vascular networks in the reconstructed slices and entire brain views, a representative continuous vascular tracking has been demonstrated in the deep thalamus. This study provided an effective method for studying the entire macro and micro vascular networks of mouse brain simultaneously.
An interface reconstruction method based on an analytical formula for 3D arbitrary convex cells
Diot, Steven; François, Marianne M.
2015-10-22
In this study, we are interested in an interface reconstruction method for 3D arbitrary convex cells that could be used in multi-material flow simulations for instance. We assume that the interface is represented by a plane whose normal vector is known and we focus on the volume-matching step that consists in finding the plane constant so that it splits the cell according to a given volume fraction. We follow the same approach as in the recent authors' publication for 2D arbitrary convex cells in planar and axisymmetrical geometries, namely we derive an analytical formula for the volume of the specific prismatoids obtained when decomposing the cell using the planes that are parallel to the interface and passing through all the cell nodes. This formula is used to bracket the interface plane constant such that the volume-matching problem is rewritten in a single prismatoid in which the same formula is used to find the final solution. Finally, the proposed method is tested against an important number of reproducible configurations and shown to be at least five times faster.
A reconstruction method for cone-beam differential x-ray phase-contrast computed tomography.
Fu, Jian; Velroyen, Astrid; Tan, Renbo; Zhang, Junwei; Chen, Liyuan; Tapfer, Arne; Bech, Martin; Pfeiffer, Franz
2012-09-10
Most existing differential phase-contrast computed tomography (DPC-CT) approaches are based on three kinds of scanning geometries, described by parallel-beam, fan-beam and cone-beam. Due to the potential of compact imaging systems with magnified spatial resolution, cone-beam DPC-CT has attracted significant interest. In this paper, we report a reconstruction method based on a back-projection filtration (BPF) algorithm for cone-beam DPC-CT. Due to the differential nature of phase contrast projections, the algorithm refrains from differentiating the projection data prior to back-projection, unlike BPF algorithms commonly used for absorption-based CT data. This work comprises a numerical study of the algorithm and its experimental verification using a dataset measured with a three-grating interferometer and a micro-focus x-ray tube source. Moreover, the numerical simulation and experimental results demonstrate that the proposed method can deal with several classes of truncated cone-beam datasets. We believe that this feature is of particular interest for future medical cone-beam phase-contrast CT imaging applications.
Maximum entropy reconstruction method for moment-based solution of the BGK equation
NASA Astrophysics Data System (ADS)
Summy, Dustin; Pullin, D. I.
2016-11-01
We describe a method for a moment-based solution of the BGK equation. The starting point is a set of equations for a moment representation which must have even-ordered highest moments. The partial-differential equations for these moments are unclosed, containing higher-order moments in the flux terms. These are evaluated using a maximum-entropy reconstruction of the one-particle velocity distribution function f(x, t), using the known moments. An analytic, asymptotic solution describing the singular behavior of the maximum-entropy construction near the local equilibrium velocity distribution is presented, and is used to construct a complete hybrid closure scheme for the case of fourth-order and lower moments. For the steady-flow normal shock wave, this produces a set of 9 ordinary differential equations describing the shock structure. For a variable hard-sphere gas these can be solved numerically. Comparisons with results using the direct-simulation Monte-Carlo method will be presented. Supported partially by NSF award DMS 1418903.
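The maximum-entropy closure mentioned above admits a compact statement (a standard form of the ansatz with generic symbols, not the paper's notation): among all distributions matching the known moments, the entropy-maximizing one is exponential in a polynomial,

```latex
f(v) = \exp\!\Bigl(\sum_{k=0}^{N} \lambda_k\, v^{k}\Bigr),
\qquad
\int_{-\infty}^{\infty} v^{k} f(v)\,\mathrm{d}v = m_{k}, \quad k = 0,\dots,N,
```

with the Lagrange multipliers λ_k fixed by the moment constraints. N must be even for f to be integrable, which is why the moment representation is required to have even-ordered highest moments.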
Galaxy cluster mass reconstruction project - I. Methods and first results on galaxy-based techniques
NASA Astrophysics Data System (ADS)
Old, L.; Skibba, R. A.; Pearce, F. R.; Croton, D.; Muldrew, S. I.; Muñoz-Cuartas, J. C.; Gifford, D.; Gray, M. E.; der Linden, A. von; Mamon, G. A.; Merrifield, M. R.; Müller, V.; Pearson, R. J.; Ponman, T. J.; Saro, A.; Sepp, T.; Sifón, C.; Tempel, E.; Tundo, E.; Wang, Y. O.; Wojtak, R.
2014-06-01
This paper is the first in a series in which we perform an extensive comparison of various galaxy-based cluster mass estimation techniques that utilize the positions, velocities and colours of galaxies. Our primary aim is to test the performance of these cluster mass estimation techniques on a diverse set of models that will increase in complexity. We begin by providing participating methods with data from a simple model that delivers idealized clusters, enabling us to quantify the underlying scatter intrinsic to these mass estimation techniques. The mock catalogue is based on a Halo Occupation Distribution (HOD) model that assumes spherical Navarro, Frenk and White (NFW) haloes truncated at R200, with no substructure or colour segregation, and with isotropic, isothermal Maxwellian velocities. We find that, above 10¹⁴ M⊙, recovered cluster masses are correlated with the true underlying cluster mass with an intrinsic scatter of typically a factor of 2. Below 10¹⁴ M⊙, the scatter rises as the number of member galaxies drops and rapidly approaches an order of magnitude. We find that richness-based methods deliver the lowest scatter, but it is not clear whether such accuracy may simply be the result of using an over-simplistic model to populate the galaxies in their haloes. Even when given the true cluster membership, large scatter is observed for the majority of non-richness-based approaches, suggesting that mass reconstruction with a low number of dynamical tracers is inherently problematic.
Application of damage detection methods using passive reconstruction of impulse response functions.
Tippmann, J D; Zhu, X; Lanza di Scalea, F
2015-02-28
In structural health monitoring (SHM), using only the existing noise has long been an attractive goal. The advances in understanding cross-correlations in ambient noise in the past decade, as well as new understanding in damage indication and other advanced signal processing methods, have continued to drive new research into passive SHM systems. Because passive systems take advantage of the existing noise mechanisms in a structure, offshore wind turbines are a particularly attractive application due to the noise created from the various aerodynamic and wave loading conditions. Two damage detection methods using a passively reconstructed impulse response function, or Green's function, are presented. Damage detection is first studied using the reciprocity of the impulse response functions, where damage introduces new nonlinearities that break down the similarity in the causal and anticausal wave components. Damage detection and localization are then studied using a matched-field processing technique that aims to spatially locate sources that identify a change in the structure. Results from experiments conducted on an aluminium plate and wind turbine blade with simulated damage are also presented.
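The passive reconstruction step rests on the result that cross-correlating diffuse noise recorded at two sensors approximates the impulse response (Green's function) between them. A toy sketch (a pure time delay in white noise, for illustration only; real ambient-noise processing involves whitening and long averaging):

```python
import numpy as np

def noise_cross_correlation(sa, sb):
    """Cross-correlate two noise recordings; for diffuse ambient noise the
    causal and anticausal halves of the result approximate the impulse
    response between the two sensor positions."""
    n = len(sa)
    c = np.correlate(sa - sa.mean(), sb - sb.mean(), mode="full")
    lags = np.arange(-n + 1, n)            # lag axis for 'full' correlation
    return lags, c / np.abs(c).max()

rng = np.random.default_rng(0)
noise = rng.standard_normal(4096)
delay = 7                                  # sensor B hears the field 7 samples later
lags, c = noise_cross_correlation(noise[:-delay], noise[delay:])
```

The correlation peak sits at the propagation delay between the sensors; asymmetry between the causal and anticausal peaks is the kind of reciprocity breakdown the damage-detection method exploits.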
Kryuchkov, Victor; Chumak, Vadim; Maceika, Evaldas; Anspaugh, Lynn R.; Cardis, Elisabeth; Bakhanova, Elena; Golovanov, Ivan; Drozdovitch, Vladimir; Luckyanov, Nickolas; Kesminiene, Ausrele; Voillequé, Paul; Bouville, André
2010-01-01
Between 1986 and 1990, several hundred thousand workers, called “liquidators” or “clean-up workers”, took part in decontamination and recovery activities within the 30-km zone around the Chernobyl nuclear power plant in Ukraine, where a major accident occurred in April 1986. The Chernobyl liquidators were mainly exposed to external ionizing radiation levels that depended primarily on their work locations and the time after the accident when the work was performed. Because individual doses were often monitored inadequately or were not monitored at all for the majority of liquidators, a new method of photon (i.e. gamma and x-rays) dose assessment, called “RADRUE” (Realistic Analytical Dose Reconstruction with Uncertainty Estimation) was developed to obtain unbiased and reasonably accurate estimates for use in three epidemiologic studies of hematological malignancies and thyroid cancer among liquidators. The RADRUE program implements a time-and-motion dose reconstruction method that is flexible and conceptually easy to understand. It includes a large exposure rate database and interpolation and extrapolation techniques to calculate exposure rates at places where liquidators lived and worked within ~70 km of the destroyed reactor. The RADRUE technique relies on data collected from subjects’ interviews conducted by trained interviewers, and on expert dosimetrists to interpret the information and provide supplementary information, when necessary, based upon their own Chernobyl experience. The RADRUE technique was used to estimate doses from external irradiation, as well as uncertainties, to the bone-marrow for 929 subjects and to the thyroid gland for 530 subjects enrolled in epidemiologic studies. Individual bone-marrow dose estimates were found to range from less than one μGy to 3,300 mGy, with an arithmetic mean of 71 mGy. Individual thyroid dose estimates were lower and ranged from 20 μGy to 507 mGy, with an arithmetic mean of 29 mGy. The
Universal data-based method for reconstructing complex networks with binary-state dynamics
NASA Astrophysics Data System (ADS)
Li, Jingwen; Shen, Zhesi; Wang, Wen-Xu; Grebogi, Celso; Lai, Ying-Cheng
2017-03-01
To understand, predict, and control complex networked systems, a prerequisite is to reconstruct the network structure from observable data. Despite recent progress in network reconstruction, binary-state dynamics that are ubiquitous in nature, technology, and society still present an outstanding challenge in this field. Here we offer a framework for reconstructing complex networks with binary-state dynamics by developing a universal data-based linearization approach that is applicable to systems with linear, nonlinear, discontinuous, or stochastic dynamics governed by monotonic functions. The linearization procedure enables us to convert the network reconstruction into a sparse signal reconstruction problem that can be resolved through convex optimization. We demonstrate generally high reconstruction accuracy for a number of complex networks associated with distinct binary-state dynamics from using binary data contaminated by noise and missing data. Our framework is completely data driven, efficient, and robust, and does not require any a priori knowledge about the detailed dynamical process on the network. The framework represents a general paradigm for reconstructing, understanding, and exploiting complex networked systems with binary-state dynamics.
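The sparse-signal formulation can be sketched with a standard l1-regularized least-squares recovery (a generic ISTA solver for the lasso, shown as one convex-optimization route; the paper does not specify this particular solver, and the matrix here is a synthetic stand-in for the linearized data):

```python
import numpy as np

def ista_lasso(A, y, lam=0.1, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1,
    the convex program used to recover a sparse row of the adjacency matrix."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)              # gradient of the quadratic term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 20))          # linearized measurements
x_true = np.zeros(20)
x_true[[3, 11]] = [1.5, -2.0]              # two true links of one node
x_hat = ista_lasso(A, A @ x_true, lam=0.05)
```

In the network setting each node's row of the adjacency matrix is recovered this way, and the nonzero support of the solution gives that node's neighbors.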
NASA Astrophysics Data System (ADS)
Ducru, Pablo; Josey, Colin; Dibert, Karia; Sobes, Vladimir; Forget, Benoit; Smith, Kord
2017-04-01
This article establishes a new family of methods to perform temperature interpolation of nuclear interaction cross sections, reaction rates, or cross sections times the energy. One of these quantities at temperature T is approximated as a linear combination of quantities at reference temperatures (Tj). The problem is formalized in a cross section independent fashion by considering the kernels of the different operators that convert cross section related quantities from a temperature T0 to a higher temperature T, namely the Doppler broadening operation. Doppler broadening interpolation of nuclear cross sections is thus performed here by reconstructing the kernel of the operation at a given temperature T by means of a linear combination of kernels at reference temperatures (Tj). The choice of the L2 metric yields optimal linear interpolation coefficients in the form of the solutions of a linear algebraic system inversion. The optimization of the choice of reference temperatures (Tj) is then undertaken so as to best reconstruct, in the L∞ sense, the kernels over a given temperature range [Tmin, Tmax]. The performance of these kernel reconstruction methods is then assessed in light of previous temperature interpolation methods by testing them upon isotope ²³⁸U. Temperature-optimized free Doppler kernel reconstruction significantly outperforms all previous interpolation-based methods, achieving 0.1% relative error on temperature interpolation of the ²³⁸U total cross section over the temperature range [300 K, 3000 K] with only 9 reference temperatures.
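The L2-optimal coefficients described above reduce to a small linear system (normal equations with the Gram matrix of the reference kernels). A toy sketch with Gaussian stand-ins for the kernels (the kernel shape, grid, and temperatures are illustrative; they are not the actual Doppler broadening kernels):

```python
import numpy as np

def interpolation_weights(ref_curves, target_curve):
    """L2-optimal coefficients a_j for approximating a target kernel by a
    linear combination of reference kernels: solve the normal equations
    G a = b, where G is the Gram matrix of the references."""
    G = ref_curves @ ref_curves.T          # inner products of reference kernels
    b = ref_curves @ target_curve
    return np.linalg.solve(G, b)

# Gaussian stand-ins whose width grows with sqrt(T), on a fixed grid.
x = np.linspace(-5.0, 5.0, 401)
def kern(T):
    s = np.sqrt(T / 300.0)
    return np.exp(-x**2 / (2.0 * s**2)) / (s * np.sqrt(2.0 * np.pi))

refs = np.vstack([kern(300.0), kern(600.0), kern(1200.0)])
a = interpolation_weights(refs, kern(450.0))   # weights for T = 450 K
approx = a @ refs
```

Once the weights are computed for a target temperature, the same linear combination applies to any cross-section-related quantity, which is what makes the scheme cross section independent.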
Estep, Robert J.
2012-05-31
We have developed a dynamic image reconstruction method called MVIR (Moving Voxel Image Reconstruction) for lane detection in multilane portal monitor systems. MVIR was evaluated for use in the Fixed Site Detection System, a prototype three-lane portal monitor system for EZ-pass toll plazas. As a baseline, we compared MVIR with a static image reconstruction method in analyzing the same real and simulated data sets. Performance was judged by the distributions of image intensities for source and no-source vehicles over many trials as a function of source strength. We found that MVIR produced significantly better results in all cases. The performance difference was greatest at low count rates, where source/no-source distributions were well separated with the MVIR method, allowing reliable source vehicle identification with a low probability of false positive identifications. Static reconstruction of the same data produced overlapping distributions that made source vehicle identification unreliable. The performance of the static method was acceptable at high count rates. Both algorithms reliably identified two strong sources passing through at nearly the same time.
Ducru, Pablo; Josey, Colin; Dibert, Karia; ...
2017-01-25
This paper establishes a new family of methods to perform temperature interpolation of nuclear interactions cross sections, reaction rates, or cross sections times the energy. One of these quantities at temperature T is approximated as a linear combination of quantities at reference temperatures (Tj). The problem is formalized in a cross section independent fashion by considering the kernels of the different operators that convert cross section related quantities from a temperature T0 to a higher temperature T — namely the Doppler broadening operation. Doppler broadening interpolation of nuclear cross sections is thus here performed by reconstructing the kernel of the operation at a given temperature T by means of linear combination of kernels at reference temperatures (Tj). The choice of the L2 metric yields optimal linear interpolation coefficients in the form of the solutions of a linear algebraic system inversion. The optimization of the choice of reference temperatures (Tj) is then undertaken so as to best reconstruct, in the L∞ sense, the kernels over a given temperature range [Tmin,Tmax]. The performance of these kernel reconstruction methods is then assessed in light of previous temperature interpolation methods by testing them upon isotope 238U. Temperature-optimized free Doppler kernel reconstruction significantly outperforms all previous interpolation-based methods, achieving 0.1% relative error on temperature interpolation of 238U total cross section over the temperature range [300 K, 3000 K] with only 9 reference temperatures.
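The L2-optimal coefficients described above reduce to solving a small normal-equation system built from kernel inner products. A minimal sketch of that step, using simple Gaussian stand-ins for the actual Doppler broadening kernels (the `kernel` function, its temperature scaling, and the reference temperatures are illustrative assumptions, not the paper's data):

```python
import numpy as np

def l2_interp_coeffs(ref_kernels, target_kernel):
    """L2-optimal coefficients a solving min ||k_T - sum_j a_j k_Tj||_2.

    Normal equations: G a = b, with Gram matrix G_ij = <k_i, k_j>
    and b_j = <k_j, k_T>.
    """
    K = np.asarray(ref_kernels)          # shape (n_ref, n_grid)
    G = K @ K.T                          # Gram matrix of reference kernels
    b = K @ np.asarray(target_kernel)
    return np.linalg.solve(G, b)

# Toy demo: Gaussian stand-in kernels whose width grows with temperature.
x = np.linspace(-5, 5, 2001)

def kernel(T):                           # hypothetical broadening kernel
    s = np.sqrt(T / 300.0)
    return np.exp(-x**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

refs = [kernel(T) for T in (300.0, 900.0, 1800.0, 3000.0)]
a = l2_interp_coeffs(refs, kernel(1200.0))
recon = sum(ai * k for ai, k in zip(a, refs))
rel_err = np.max(np.abs(recon - kernel(1200.0))) / np.max(kernel(1200.0))
```

When the target temperature coincides with a reference temperature, the solve returns the corresponding unit vector, so the interpolation is exact at the reference points.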
Kiyokawa, Kensuke; Tai, Yoshiaki; Inoue, Yojiro; Yanaga, Hiroko; Mori, Kazunori; Shigemori, Minoru; Tokutomi, Takashi
1999-01-01
Anterior skull base defects after extended anterior skull base resection, including the unilateral orbit and the dura, were reconstructed using temporal musculopericranial (TMP) flaps or a frontal musculopericranial (FMP) flap in 14 patients. The dural defect was reconstructed with the TMP or FMP flap by overlapping it on the remaining dura around the defect. These flaps were also used, in principle, for the separation of the nasal cavity. For bone defects of the anterior skull base, a bone graft was placed between the flap for dural reconstruction and the flap for separation of the nasal cavity. Bone grafting was not performed in patients who had an extensive defect and for whom a free flap was used for the separation. After surgery, CSF rhinorrhea did not occur in any of the 14 patients. Twelve patients did not develop any postoperative complications. Two patients had epidural abscess, but with debridement and drainage to the nasal cavity, they did not develop severe intracranial complications. We conclude that reconstruction using musculopericranial flaps is a reliable and versatile method with minimal invasiveness and short operation time. In particular, the musculopericranial flap for dural reconstruction was highly efficacious for the prevention of CSF rhinorrhea. PMID:17171092
Menekşe, Ebru; Özyazıcı, Sefa; Karateke, Faruk; Turan, Ümit; Kuvvetli, Adnan; Gökler, Cihan; Özdoğan, Mehmet; Önel, Safa
2015-01-01
Objective We aimed to present our experience with rhomboid flap reconstruction, which is a simple technique, in breast cancer patients who underwent breast-conserving surgery. Methods We reviewed the medical records of 13 patients with breast cancer who underwent rhomboid flap reconstruction. The patients were evaluated for tumor size, safe surgical margin, and other clinical and pathological features. Results The mean age of the patients was 43.1 years (range: 28–69 years). The mean tumor diameter was 30.8 mm (range: 15–60 mm). The mean of the safe margin of resection was evaluated to be 17.8 mm (range: 5–30 mm). Re-excision was required for one patient in the same session. Conclusion Rhomboid flap reconstruction can facilitate the applicability of breast-conserving surgery in early breast cancer patients with large tumor-to-breast-size ratio or tumors close to the skin.
NASA Astrophysics Data System (ADS)
Roomi, A.; Habibi, M.; Saion, E.; Amrollahi, R.
2011-02-01
In this study we present a Monte Carlo method for obtaining the time-resolved energy spectra of neutrons emitted by the D-D reaction in plasma focus devices. Angular positions of the detectors were chosen to maximize the quality of the reconstructed neutron spectrum. The detectors were arranged over a range of 0-22.5 m from the source and at 0°, 30°, 60°, and 90° with respect to the central axis. The results show that an arrangement of five detectors placed at 0, 2, 7.5, 15 and 22.5 m around the central electrode of the plasma focus, which acts as an anisotropic neutron source, is required. With this arrangement, the distance between the neutron source and the detectors is reduced, and the final reconstructed signal is obtained with very fine accuracy.
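For intuition on why detector distance matters in such time-of-flight measurements, a non-relativistic toy calculation can be sketched; the 2.45 MeV D-D line energy is standard, but the Gaussian energy spread and sample count below are illustrative assumptions:

```python
import numpy as np

# Time-of-flight sketch: a neutron of kinetic energy E arrives at a
# detector a distance d away after t = d / v, with v = c * sqrt(2E / mc^2)
# (non-relativistic).  Larger flight paths spread arrival times out,
# trading count rate for energy resolution in the reconstructed spectrum.
M_N = 939.565e6       # neutron rest-mass energy, eV
C = 2.998e8           # speed of light, m/s

def tof(energy_ev, distance_m):
    v = C * np.sqrt(2.0 * energy_ev / M_N)   # neutron speed, m/s
    return distance_m / v                     # flight time, s

rng = np.random.default_rng(3)
energies = rng.normal(2.45e6, 0.1e6, 10000)  # toy Gaussian D-D spectrum, eV
times_ns = tof(energies, 22.5) * 1e9         # arrival times at 22.5 m, ns
```

A 2.45 MeV neutron travels at roughly 2.2e7 m/s, so the 22.5 m station sees arrivals around a microsecond after emission, with the energy spread mapped into a measurable time spread.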
Zhao, Yong-guang; Ma, Ling-ling; Li, Chuan-rong; Zhu, Xiao-hua; Tang, Ling-li
2015-07-01
Due to the lack of sufficient spectral bands in multi-spectral sensors, it is difficult to reconstruct the surface reflectance spectrum from the finite spectral information acquired by a multi-spectral instrument. Here, taking full account of the heterogeneity of pixels in remote sensing images, a method is proposed to simulate hyperspectral data from multispectral data based on a canopy radiation transfer model. The method first assumes that mixed pixels contain two types of land cover, i.e., vegetation and soil. The sensitive parameters of the Soil-Leaf-Canopy (SLC) model and a soil ratio factor were retrieved from multi-spectral data based on Look-Up Table (LUT) technology. Then, combined with the soil ratio factor, all the parameters were input into the SLC model to simulate the surface reflectance spectrum from 400 to 2400 nm. Taking a Landsat Enhanced Thematic Mapper Plus (ETM+) image as the reference image, the surface reflectance spectrum was simulated. The simulated reflectance spectrum revealed different feature information for different surface types. To test the performance of this method, the simulated reflectance spectrum was convolved with the Landsat ETM+ spectral response curves and Moderate Resolution Imaging Spectrometer (MODIS) spectral response curves to obtain simulated Landsat ETM+ and MODIS images. Finally, the simulated Landsat ETM+ and MODIS images were compared with the observed Landsat ETM+ and MODIS images. The results generally showed high correlation coefficients (Landsat: 0.90-0.99, MODIS: 0.74-0.85) between most simulated and observed bands, indicating that the reflectance spectrum was well simulated and reliable.
NASA Astrophysics Data System (ADS)
Tamasan, Alexandru
2004-12-01
In this paper we reconstruct convection coefficients from boundary measurements. We reduce the Beals and Coifman formalism from a linear first order system to a formalism for the $\overline{\partial}$-equation.
Ramani, Sathish; Liu, Zhihao; Rosen, Jeffrey; Nielsen, Jon-Fredrik; Fessler, Jeffrey A
2012-08-01
Regularized iterative reconstruction algorithms for imaging inverse problems require selection of appropriate regularization parameter values. We focus on the challenging problem of tuning regularization parameters for nonlinear algorithms for the case of additive (possibly complex) Gaussian noise. Generalized cross-validation (GCV) and (weighted) mean-squared error (MSE) approaches (based on Stein's Unbiased Risk Estimate, SURE) need the Jacobian matrix of the nonlinear reconstruction operator (representative of the iterative algorithm) with respect to the data. We derive the desired Jacobian matrix for two types of nonlinear iterative algorithms: a fast variant of the standard iterative reweighted least-squares method and the contemporary split-Bregman algorithm, both of which can accommodate a wide variety of analysis- and synthesis-type regularizers. The proposed approach iteratively computes two weighted SURE-type measures, Predicted-SURE and Projected-SURE (which require knowledge of the noise variance σ²), and GCV (which does not need σ²) for these algorithms. We apply the methods to image restoration and to magnetic resonance image (MRI) reconstruction using total variation (TV) and an analysis-type ℓ1-regularization. We demonstrate through simulations and experiments with real data that minimizing Predicted-SURE and Projected-SURE consistently leads to near-MSE-optimal reconstructions. We also observed that minimizing GCV yields reconstruction results that are near-MSE-optimal for image restoration and slightly suboptimal for MRI. Theoretical derivations in this work related to Jacobian matrix evaluations can be extended, in principle, to other types of regularizers and reconstruction algorithms.
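The idea of choosing a regularization parameter by minimizing an unbiased risk estimate can be illustrated in the simplest setting: soft-threshold denoising with known noise variance. This 1D toy (the sparse signal, threshold grid, and seed are arbitrary choices, not the paper's algorithms) shows SURE tracking the true MSE without access to the clean signal:

```python
import numpy as np

def sure_soft_threshold(y, lam, sigma):
    """Stein's Unbiased Risk Estimate for soft-thresholding denoising:

        SURE(lam) = -n*sigma^2 + sum_i min(y_i^2, lam^2)
                    + 2*sigma^2 * #{i : |y_i| > lam}

    Minimizing over lam estimates the MSE-optimal threshold.
    """
    n = y.size
    return (-n * sigma**2
            + np.minimum(y**2, lam**2).sum()
            + 2 * sigma**2 * np.count_nonzero(np.abs(y) > lam))

rng = np.random.default_rng(0)
theta = np.zeros(1000)
theta[:50] = 5.0                               # sparse clean signal
sigma = 1.0
y = theta + rng.normal(0.0, sigma, theta.size)  # noisy observation

lams = np.linspace(0.1, 5.0, 50)
sure = [sure_soft_threshold(y, l, sigma) for l in lams]
true_mse = [np.sum((np.sign(y) * np.maximum(np.abs(y) - l, 0) - theta)**2)
            for l in lams]
lam_sure = lams[int(np.argmin(sure))]           # data-driven choice of lam
```

The SURE-chosen threshold attains an MSE close to the oracle minimum, which is the same principle the Predicted-SURE and Projected-SURE measures apply to iterative reconstruction operators.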
SU-D-207-04: GPU-Based 4D Cone-Beam CT Reconstruction Using Adaptive Meshing Method
Zhong, Z; Gu, X; Iyengar, P; Mao, W; Wang, J; Guo, X
2015-06-15
Purpose: Due to the limited number of projections at each phase, the image quality of a four-dimensional cone-beam CT (4D-CBCT) is often degraded, which decreases the accuracy of subsequent motion modeling. One of the promising methods is the simultaneous motion estimation and image reconstruction (SMEIR) approach. The objective of this work is to enhance the computational speed of the SMEIR algorithm using adaptive feature-based tetrahedral meshing and GPU-based parallelization. Methods: The first step is to generate the tetrahedral mesh based on the features of a reference phase of the 4D-CBCT, so that the deformation can be well captured and accurately diffused from the mesh vertices to the voxels of the image volume. After mesh generation, the updated motion model and the other phases of the 4D-CBCT can be obtained by matching the 4D-CBCT projection images at each phase with the corresponding forward projections of the deformed reference phase. The entire 4D-CBCT reconstruction process is implemented on GPU, significantly increasing computational efficiency due to its tremendous parallel computing ability. Results: A 4D XCAT digital phantom was used to test the proposed mesh-based image reconstruction algorithm. The resulting images show that both bone structures and the inside of the lung are well preserved and the tumor position is well captured. Compared to the previous voxel-based CPU implementation of SMEIR, the proposed method is about 157 times faster for reconstructing a 10-phase 4D-CBCT with dimensions 256×256×150. Conclusion: The GPU-based parallel 4D-CBCT reconstruction method uses a feature-based mesh for estimating the motion model and demonstrates image quality equivalent to the previous voxel-based SMEIR approach, with significantly improved computational speed.
Zou, Tiefang; Peng, Xulong; Wu, Wenguang; Cai, Ming
2017-01-01
To make the reconstructed result more reliable, a method named the improved probabilistic-interval method was proposed to analyze the uncertainty of a reconstructed result in a traffic accident with both probabilistic and interval traces. In the method, probabilistic traces are first replaced by probabilistic sub-intervals; second, these probabilistic sub-intervals and the interval traces are combined to form many new uncertainty analysis problems with only interval traces; third, the upper and lower bounds of the reconstructed result and their probabilities are calculated in each new uncertainty analysis problem, and an algorithm was proposed to shorten the time taken for this step; finally, distribution functions of the upper and lower bounds of the reconstructed result are obtained by statistical analysis. In two numerical cases, results obtained from the proposed method were almost the same as those from the Monte Carlo method, but the time taken by the proposed method was far less and its results were more stable. In applying the proposed method to a real vehicle-pedestrian accident, not only can the upper and lower bounds of the impact velocity (v) be obtained, but also the probability that the upper or lower bound of v falls in an arbitrary interval; furthermore, the probability that the interval of v is less than an arbitrary interval can also be obtained. It is concluded that the proposed improved probabilistic-interval method is practical.
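The per-subproblem bound computation in such a scheme can be sketched as a brute-force search over a box of interval traces, with one probabilistic trace discretized into probabilistic sub-intervals. The throw-distance speed formula and all numbers below are hypothetical illustrations, not data or the algorithm from the paper:

```python
import numpy as np

def interval_bounds(f, intervals, n_grid=50):
    """Brute-force lower/upper bound of f over a box of interval inputs
    (a simple stand-in for the per-subproblem bound computation)."""
    grids = [np.linspace(lo, hi, n_grid) for lo, hi in intervals]
    mesh = np.meshgrid(*grids, indexing="ij")
    vals = f(*mesh)
    return float(vals.min()), float(vals.max())

# Hypothetical reconstruction: impact speed from pedestrian throw
# distance d and friction coefficient mu, v = sqrt(2 * mu * g * d).
g = 9.81
f = lambda mu, d: np.sqrt(2.0 * mu * g * d)

# Probabilistic trace mu discretized into three probabilistic
# sub-intervals; d is a pure interval trace.
subints = [((0.60, 0.67), 0.25), ((0.67, 0.73), 0.50), ((0.73, 0.80), 0.25)]
d_int = (11.0, 13.0)

# One interval-only bound problem per sub-interval, each carrying the
# sub-interval's probability mass.
results = [(*interval_bounds(f, [mu_int, d_int]), p) for mu_int, p in subints]
```

Collecting the (lower, upper, probability) triples across sub-intervals is what allows distribution functions of the bounds to be assembled, as the abstract describes.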
NASA Astrophysics Data System (ADS)
Pathak, Ashish; Raessi, Mehdi
2016-02-01
We introduce a piecewise-linear, volume-of-fluid method for reconstructing and advecting three-dimensional interfaces and contact lines formed by three materials. The new method employs a set of geometric constructs that can be used in conjunction with any volume-tracking scheme. In this work, we used the mass-conserving scheme of Youngs to handle two-material cells, perform interface reconstruction in three-material cells, and resolve the contact line. The only information required by the method is the available volume fraction field. Although the proposed method is order dependent and requires a priori information on material ordering, it is suitable for typical contact line applications, where the material representing the contact surface is always known. Following the reconstruction of the contact surface, to compute the interface orientation in a three-material cell, the proposed method minimizes an error function that is based on volume fraction distribution around that cell. As an option, the minimization procedure also allows the user to impose a contact angle. Performance of the proposed method is assessed via both static and advection test cases. The tests show that the new method preserves the accuracy and mass-conserving property of the Youngs method in volume-tracking three materials.
NASA Astrophysics Data System (ADS)
Yeo, Inhwan Jason; Jung, Jae Won; Chew, Meng; Kim, Jong Oh; Wang, Brian; Di Biase, Steven; Zhu, Yunping; Lee, Dohyung
2009-09-01
A straightforward and accurate method was developed to verify the delivery of intensity-modulated radiation therapy (IMRT) and to reconstruct the dose in a patient. The method is based on a computational algorithm that linearly describes the physical relationship between beamlets and dose-scoring voxels in a patient and the dose image from an electronic portal imaging device (EPID). The relationship is expressed in the form of dose response functions (responses) that are quantified using Monte Carlo (MC) particle transport techniques. From the dose information measured by the EPID the received patient dose is reconstructed by inversely solving the algorithm. The unique and novel non-iterative feature of this algorithm sets it apart from many existing dose reconstruction methods in the literature. This study presents the algorithm in detail and validates it experimentally for open and IMRT fields. Responses were first calculated for each beamlet of the selected fields by MC simulation. In-phantom and exit film dosimetry were performed on a flat phantom. Using the calculated responses and the algorithm, the exit film dose was used to inversely reconstruct the in-phantom dose, which was then compared with the measured in-phantom dose. The dose comparison in the phantom for all irradiated fields showed a pass rate of higher than 90% dose points given the criteria of dose difference of 3% and distance to agreement of 3 mm.
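The core non-iterative idea, a linear beamlet-to-dose model inverted from the EPID image, can be sketched with random stand-in response matrices; the matrix sizes and values below are purely illustrative, not Monte Carlo responses:

```python
import numpy as np

# Non-iterative dose reconstruction sketch: the EPID dose image is
# d_epid = R w, where column j of R is the (precomputed) EPID response
# to unit weight of beamlet j.  Solving for the beamlet weights w then
# lets the in-patient dose be recomputed as d_patient = P w, with P the
# in-patient responses.  Toy random matrices stand in for the MC data.
rng = np.random.default_rng(5)
n_pix, n_beamlet, n_vox = 50, 8, 30
R = rng.uniform(0.5, 1.5, (n_pix, n_beamlet))   # EPID responses
P = rng.uniform(0.5, 1.5, (n_vox, n_beamlet))   # in-patient responses
w_true = rng.uniform(0.0, 1.0, n_beamlet)       # "delivered" weights

d_epid = R @ w_true                              # simulated EPID image
w_hat, *_ = np.linalg.lstsq(R, d_epid, rcond=None)  # invert the model
d_patient = P @ w_hat                            # reconstructed patient dose
```

In this noiseless, overdetermined toy the least-squares solve recovers the weights exactly; with real measurements the same one-shot inversion replaces the iterative loops of other dose reconstruction methods.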
Schullcke, Benjamin; Gong, Bo; Krueger-Ziolek, Sabine; Soleimani, Manuchehr; Mueller-Lisse, Ullrich; Moeller, Knut
2016-01-01
Lung EIT is a functional imaging method that utilizes electrical currents to reconstruct images of conductivity changes inside the thorax. This technique is radiation free and applicable at the bedside, but lacks spatial resolution compared to morphological imaging methods such as X-ray computed tomography (CT). In this article we describe an approach for EIT image reconstruction using morphologic information obtained from other structural imaging modalities. This leads to reconstructed images of lung ventilation that can easily be superimposed with structural CT or MRI images, which facilitates image interpretation. The approach is based on a Discrete Cosine Transformation (DCT) of an image of the considered transversal thorax slice. The use of DCT enables reduction of the dimensionality of the reconstruction and ensures that only conductivity changes of the lungs are reconstructed and displayed. The DCT-based approach is well suited to fuse morphological image information with functional lung imaging at low computational cost. Results on simulated data indicate that this approach preserves the morphological structures of the lungs and avoids blurring of the solution. Images from patient measurements reveal the capabilities of the method and demonstrate benefits in possible applications. PMID:27181695
Malaby, Andrew W; Chakravarthy, Srinivas; Irving, Thomas C; Kathuria, Sagar V; Bilsel, Osman; Lambright, David G
2015-08-01
Size-exclusion chromatography in line with small-angle X-ray scattering (SEC-SAXS) has emerged as an important method for investigation of heterogeneous and self-associating systems, but presents specific challenges for data processing including buffer subtraction and analysis of overlapping peaks. This paper presents novel methods based on singular value decomposition (SVD) and Guinier-optimized linear combination (LC) to facilitate analysis of SEC-SAXS data sets and high-quality reconstruction of protein scattering directly from peak regions. It is shown that Guinier-optimized buffer subtraction can reduce common subtraction artifacts and that Guinier-optimized linear combination of significant SVD basis components improves signal-to-noise and allows reconstruction of protein scattering, even in the absence of matching buffer regions. In test cases with conventional SAXS data sets for cytochrome c and SEC-SAXS data sets for the small GTPase Arf6 and the Arf GTPase exchange factors Grp1 and cytohesin-1, SVD-LC consistently provided higher quality reconstruction of protein scattering than either direct or Guinier-optimized buffer subtraction. These methods have been implemented in the context of a Python-extensible Mac OS X application known as Data Evaluation and Likelihood Analysis (DELA), which provides convenient tools for data-set selection, beam intensity normalization, SVD, and other relevant processing and analytical procedures, as well as automated Python scripts for common SAXS analyses and Guinier-optimized reconstruction of protein scattering.
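The SVD step can be illustrated on synthetic SEC-SAXS-like data: a buffer background plus an eluting-protein component plus noise, where the significant singular components suffice to reconstruct a peak frame. All profiles and the significance cut-off below are toy assumptions, not DELA's actual procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
q = np.linspace(0.01, 0.5, 200)                 # scattering-vector grid
protein = np.exp(-(q * 20.0)**2 / 3.0)          # toy protein profile
buffer_ = 0.05 + 0.01 * q                       # toy buffer background
elution = np.exp(-0.5 * ((np.arange(100) - 50) / 8.0)**2)  # SEC peak

# Frames = buffer + eluting protein + noise (synthetic stand-in data).
D = (buffer_[None, :] + elution[:, None] * protein[None, :]
     + rng.normal(0.0, 1e-3, (100, 200)))

# SVD: the significant components span the buffer and protein signals.
U, s, Vt = np.linalg.svd(D, full_matrices=False)
n_sig = int(np.sum(s > 5 * np.median(s)))       # crude cut-off (assumption)

# Reconstruct the peak frame as a linear combination of the significant
# right singular vectors only, discarding the noise subspace.
c, *_ = np.linalg.lstsq(Vt[:n_sig].T, D[50], rcond=None)
recon = Vt[:n_sig].T @ c
rel_res = np.linalg.norm(recon - D[50]) / np.linalg.norm(D[50])
```

Two singular components (buffer and protein) dominate here, and projecting a frame onto them removes most of the noise, which is the basic mechanism the Guinier-optimized linear combination refines.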
A Feature-adaptive Subdivision Method for Real-time 3D Reconstruction of Repeated Topology Surfaces
NASA Astrophysics Data System (ADS)
Lin, Jinhua; Wang, Yanjie; Sun, Honghai
2017-03-01
Rendering large numbers of triangles with GPU hardware tessellation has made great progress. However, due to the fixed nature of the GPU pipeline, many off-line methods that perform well cannot meet on-line requirements. In this paper, an optimized feature-adaptive subdivision method is proposed that is better suited to reconstructing surfaces with repeated cusps or creases. An Octree primitive is established in irregular regions that contain the same sharp vertices or creases, so that neighboring geometry information can be found quickly. Because the Octree primitive and the feature region have the same topology, the Octree feature points can match arbitrary vertices in the feature region precisely. Meanwhile, the patches are re-encoded in the Octree primitive using a breadth-first strategy, resulting in a meta-table that allows real-time reconstruction by the GPU hardware tessellation unit. Only one feature region needs to be calculated per Octree primitive; other regions with the same repeated feature generate their own meta-tables directly, greatly reducing reconstruction time for this step. For meshes with many repeated topology features, our algorithm improves subdivision time by 17.575% and increases the average frame drawing time by 0.2373 ms compared to traditional FAS (Feature-adaptive Subdivision), while the model can be reconstructed in a watertight manner.
Li, Jun; Wei, Hairong; Zhao, Patrick Xuechun
2013-01-01
Analysis of genome-scale gene networks (GNs) using large-scale gene expression data provides unprecedented opportunities to uncover gene interactions and regulatory networks involved in various biological processes and developmental programs, leading to accelerated discovery of novel knowledge of biological processes, pathways and systems. The widely used context likelihood of relatedness (CLR) method, based on mutual information (MI) for scoring the similarity of gene pairs, is one of the most accurate methods currently available for inferring GNs. However, the MI-based reverse-engineering method can achieve satisfactory performance only when the sample size exceeds one hundred. This in turn limits its application to GN construction from expression data sets with small sample sizes. We developed a high-performance web server, DeGNServer, to reverse engineer and decipher genome-scale networks. It extends the CLR method by integrating different correlation methods that are suitable for analyzing data sets ranging from moderate to large scale, such as expression profiles with tens to hundreds of microarray hybridizations, and implements all analysis algorithms using parallel computing techniques to infer gene-gene associations at extraordinary speed. In addition, we integrated the SNBuilder and GeNa algorithms for subnetwork extraction and functional module discovery. DeGNServer is publicly and freely available online.
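The CLR scoring that DeGNServer builds on can be sketched in a few lines. Here an absolute-correlation matrix stands in for the mutual-information scores (consistent with the correlation-based extension described above); the toy expression data and gene count are illustrative assumptions:

```python
import numpy as np

def clr(scores):
    """Context Likelihood of Relatedness: z-score each pairwise
    similarity against the score distributions of both genes, floor at
    zero, and combine the two contexts.  `scores` is a symmetric
    gene-gene similarity matrix (MI or a correlation surrogate)."""
    mu = scores.mean(axis=1, keepdims=True)
    sd = scores.std(axis=1, keepdims=True) + 1e-12
    z = np.maximum((scores - mu) / sd, 0.0)      # row-wise z, floored at 0
    return np.sqrt(z**2 + z.T**2)                # combine both gene contexts

# Toy expression matrix: genes 0 and 1 are co-regulated, 2 and 3 are noise.
rng = np.random.default_rng(2)
n_samples = 120
x = rng.normal(size=n_samples)
expr = np.column_stack([x,                                   # gene 0
                        x + 0.3 * rng.normal(size=n_samples),  # gene 1
                        rng.normal(size=n_samples),            # gene 2
                        rng.normal(size=n_samples)])           # gene 3
sim = np.abs(np.corrcoef(expr, rowvar=False))
np.fill_diagonal(sim, 0.0)
net = clr(sim)       # edge scores; thresholding yields the inferred GN
```

Because each score is judged against the background distribution of both genes involved, a moderately large similarity between two otherwise quiet genes still receives a high CLR score, which is what makes the method robust to gene-specific biases.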
NASA Astrophysics Data System (ADS)
Schroeder, Walter; Schulze, Wolfram; Wetter, Thomas; Chen, Chi-Hsien
2008-08-01
Three-dimensional (3D) body surface reconstruction is an important field in health care. A popular method for this purpose is laser scanning. However, using Photometric Stereo (PS) to record lumbar lordosis and the surface contour of the back is a viable alternative due to its lower cost and higher flexibility compared to laser techniques and other methods of three-dimensional body surface reconstruction. In this work, we extended the traditional PS method and proposed a new method for obtaining surface and volume data of a moving object. Traditional Photometric Stereo uses at least three images of a static object taken under different light sources to obtain 3D information about the object. Instead of using normal light, the light sources in the proposed method consist of the RGB color model's three colors: red, green and blue. A series of pictures taken with a video camera can then be separated into the different color channels. Each set of three images can then be used to calculate the surface normals as in traditional PS. This method waives the requirement, shared by almost all other body surface reconstruction methods, that the imaged object be kept still. By placing two cameras on opposite sides of a moving object and lighting the object with the colored light, time-varying surface (4D) data can easily be calculated. The obtained information can be used in many medical fields such as rehabilitation, diabetes screening or orthopedics.
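The per-pixel computation behind photometric stereo is a small least-squares problem. A minimal single-pixel sketch under the Lambertian model (the light directions, surface normal, and albedo below are made-up values; in the RGB variant above, the three "images" would be the three color channels of one video frame):

```python
import numpy as np

# Lambertian photometric stereo: intensity i_k = albedo * max(l_k . n, 0).
# With three (or more) known light directions (rows of L) and measured
# intensities i for one pixel, solve L g = i for g = albedo * n, then
# split g into albedo (its norm) and the unit normal.
L = np.array([[0.0, 0.0, 1.0],
              [0.8, 0.0, 0.6],
              [0.0, 0.8, 0.6]])          # known light directions

true_n = np.array([0.3, -0.2, 0.933])    # hypothetical surface normal
true_n /= np.linalg.norm(true_n)
albedo = 0.7
i = albedo * np.clip(L @ true_n, 0.0, None)   # simulated pixel intensities

g, *_ = np.linalg.lstsq(L, i, rcond=None)     # least-squares solve
est_albedo = np.linalg.norm(g)
est_n = g / est_albedo
```

With exactly three non-coplanar lights and no shadowing the solve is exact; running it for every pixel yields the normal field that is then integrated into the surface.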
NASA Astrophysics Data System (ADS)
Elias, Scott A.
The Mutual Climatic Range (MCR) method of palaeoclimate reconstruction has been employed in Europe for the last decade. A quantitative, calibrated method, MCR has many advantages over qualitative methods. More recent applications deal with eastern and central North America, and the method is also being developed for desert and arctic faunas. The climate envelopes for North American beetles have been compiled using a 25-km gridded North American climate database that pairs climate parameters with modern collection sites. Modern tests of the reliability of the MCR method for North American species yielded similar results to prior European tests. Linear regressions of predicted on observed values yielded equations used to calibrate the MCR estimates. Work is under way to develop MCR estimates of mean annual precipitation for fossil assemblages from the desert southwest, where moisture conditions may play a more important role in determining beetle species' ranges. An examination of British and North American mean July temperature reconstructions during the Late Wisconsinan glacial interval compares and contrasts three sets of records. The North American records show no indication of the Younger Dryas cooling that is clearly marked in records from northwest Europe. The MCR method adds rigour to our reconstructions, and allows us to compare between regions and with other palaeoenvironmental methods.
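The MCR estimate itself is an intersection of species climate envelopes, which is a one-line computation; the envelope values below are hypothetical, not from the North American database:

```python
def mutual_climatic_range(envelopes):
    """MCR estimate: the overlap of the modern climate envelopes
    (here, (low, high) mean July temperature in degrees C) of all beetle
    species co-occurring in a fossil assemblage.  Returns None if the
    envelopes do not mutually overlap."""
    lo = max(e[0] for e in envelopes)
    hi = min(e[1] for e in envelopes)
    return (lo, hi) if lo <= hi else None

# Hypothetical three-species assemblage.
assemblage = [(8.0, 17.0), (10.0, 21.0), (6.0, 15.0)]
mcr = mutual_climatic_range(assemblage)   # → (10.0, 15.0)
```

An empty intersection flags an ecologically inconsistent assemblage (or an envelope error), while a narrow overlap indicates a well-constrained reconstruction; calibration regressions, as described above, then correct the raw envelope overlap.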
Bouvier, Adeline; Deleaval, Flavien; Doyley, Marvin M; Yazdani, Saami K; Finet, Gérard; Le Floc'h, Simon; Cloutier, Guy; Pettigrew, Roderic I; Ohayon, Jacques
2016-01-01
The peak cap stress (PCS) amplitude is recognized as a biomechanical predictor of vulnerable plaque (VP) rupture. However, quantifying PCS in vivo remains a challenge since the stress depends on the plaque mechanical properties. In response, an iterative material finite element (FE) elasticity reconstruction method using strain measurements has been implemented for the solution of these inverse problems. Although this approach could resolve the mechanical characterization of VPs, it suffers from major limitations since (i) it is not adapted to characterize VPs exhibiting high material discontinuities between inclusions, and (ii) does not permit real time elasticity reconstruction for clinical use. The present theoretical study was therefore designed to develop a direct material-FE algorithm for elasticity reconstruction problems which accounts for material heterogeneities. We originally modified and adapted the extended FE method (Xfem), used mainly in crack analysis, to model material heterogeneities. This new algorithm was successfully applied to six coronary lesions of patients imaged in vivo with intravascular ultrasound. The results demonstrated that the mean relative absolute errors of the reconstructed Young's moduli obtained for the arterial wall, fibrosis, necrotic core, and calcified regions of the VPs decreased from 95.3±15.56%, 98.85±72.42%, 103.29±111.86% and 95.3±10.49%, respectively, to values smaller than 2.6 × 10−8±5.7 × 10−8% (i.e. close to the exact solutions) when including modified-Xfem method into our direct elasticity reconstruction method. PMID:24240392
NASA Astrophysics Data System (ADS)
Bouvier, Adeline; Deleaval, Flavien; Doyley, Marvin M.; Yazdani, Saami K.; Finet, Gérard; Le Floc'h, Simon; Cloutier, Guy; Pettigrew, Roderic I.; Ohayon, Jacques
2013-12-01
The peak cap stress (PCS) amplitude is recognized as a biomechanical predictor of vulnerable plaque (VP) rupture. However, quantifying PCS in vivo remains a challenge since the stress depends on the plaque mechanical properties. In response, an iterative material finite element (FE) elasticity reconstruction method using strain measurements has been implemented for the solution of these inverse problems. Although this approach could resolve the mechanical characterization of VPs, it suffers from major limitations since (i) it is not adapted to characterize VPs exhibiting high material discontinuities between inclusions, and (ii) does not permit real time elasticity reconstruction for clinical use. The present theoretical study was therefore designed to develop a direct material-FE algorithm for elasticity reconstruction problems which accounts for material heterogeneities. We originally modified and adapted the extended FE method (Xfem), used mainly in crack analysis, to model material heterogeneities. This new algorithm was successfully applied to six coronary lesions of patients imaged in vivo with intravascular ultrasound. The results demonstrated that the mean relative absolute errors of the reconstructed Young's moduli obtained for the arterial wall, fibrosis, necrotic core, and calcified regions of the VPs decreased from 95.3±15.56%, 98.85±72.42%, 103.29±111.86% and 95.3±10.49%, respectively, to values smaller than 2.6 × 10-8±5.7 × 10-8% (i.e. close to the exact solutions) when including modified-Xfem method into our direct elasticity reconstruction method.
Development of a synthetic gene network to modulate gene expression by mechanical forces
Kis, Zoltán; Rodin, Tania; Zafar, Asma; Lai, Zhangxing; Freke, Grace; Fleck, Oliver; Del Rio Hernandez, Armando; Towhidi, Leila; Pedrigi, Ryan M.; Homma, Takayuki; Krams, Rob
2016-01-01
The majority of (mammalian) cells in our body are sensitive to mechanical forces, but little work has been done to develop assays to monitor mechanosensor activity. Furthermore, it is currently impossible to use mechanosensor activity to drive gene expression. To address these needs, we developed the first mammalian mechanosensitive synthetic gene network to monitor endothelial cell shear stress levels and directly modulate expression of an atheroprotective transcription factor by shear stress. The technique is highly modular, easily scalable and allows graded control of gene expression by mechanical stimuli in hard-to-transfect mammalian cells. We call this new approach mechanosyngenetics. To insert the gene network into a high proportion of cells, a hybrid transfection procedure was developed that involves electroporation, plasmid replication in mammalian cells, mammalian antibiotic selection, a second electroporation and gene network activation. This procedure takes 1 week and yielded over 60% of cells with a functional gene network. To test gene network functionality, we developed a flow setup that exposes cells to linearly increasing shear stress along the length of the flow channel floor. Activation of the gene network varied logarithmically as a function of shear stress magnitude. PMID:27404994
Zhao, X.D.; Tsui, B.M.W.; Gregoriou, G.K.; Lalush, D.S.; Li, J.; Eisner, R.L. (Dept. of Radiology)
1994-12-01
The goal of the investigation was to study the effectiveness of corrective reconstruction methods in cardiac SPECT using a realistic phantom, and to qualitatively and quantitatively evaluate the reconstructed images using bull's-eye plots. A 3D mathematical phantom that realistically models the anatomical structures of the cardiac-torso region of patients was used. The phantom allows simulation of both the attenuation distribution and the uptake of radiopharmaceuticals in different organs. Also, the phantom can be easily modified to simulate different genders and variations in patient anatomy. Two-dimensional projection data were generated from the phantom and included the effects of attenuation and detector response blurring. The reconstruction methods used in the study included the conventional filtered backprojection (FBP) algorithm with no attenuation compensation, and the first-order Chang algorithm, an iterative filtered backprojection (IFBP) algorithm, the weighted least-squares conjugate gradient algorithm and the ML-EM algorithm with non-uniform attenuation compensation. The transaxial reconstructed images were rearranged into short-axis slices from which bull's-eye plots of the count density distribution in the myocardium were generated.
NASA Astrophysics Data System (ADS)
Wahl, Eugene R.; Amrhein, Dan E.; Smerdon, Jason E.; Ammann, Caspar M.
2010-05-01
A key question in late-Holocene climate dynamics is the role of dominant modes in influencing climates in teleconnected regions of the world. For example, it has recently been proposed that ENSO had a key role in influencing the extended period of largely positive-phase NAO during ~1100-1400 CE (Trouet et al., 2009, Science, 324, 78). Fundamental to understanding the global and regional climatological roles of dominant modes are primary data on the variations of the modes themselves, in particular paleoclimate data that greatly extend instrumental-period information. Establishing records of ENSO indices that span the past millennium has proven difficult, and well-verified reconstructions produced to date have non-trivial differences (cf., e.g., Braganza et al., 2009, Journal of Geophysical Research, 114, D05106). This presentation examines important general questions regarding reconstructions of modal indices, including ENSO: is it best (1) to focus on proxy evidence from the most strongly influenced (or most strongly teleconnected) areas, (2) to combine proxy data from a large regional network encompassing the primary area of modal activity and teleconnections (e.g., around the Pacific Rim in the case of ENSO), or (3) to use climate field reconstruction (CFR) methods that assimilate up-to-global-scale proxy information? A systematic suite of reconstruction simulation experiments (RSEs), derived from NCAR CSM 1.4 millennium transient model output, is explored to test the various strengths and weaknesses of these three approaches for reconstructing the NINO3 index. By doing this, NINO3 reconstruction fidelity can be gauged over the entire simulated millennium via comparison to the known model target; such comparisons are restricted to brief "validation" periods in real-world reconstructions due to the length of the instrumental record. For strategies (1) and (2), pseudoproxies are formed by adding white noise to the model output (seasonally-appropriate precipitation
NASA Astrophysics Data System (ADS)
Xu, Jingyan; Fuld, Matthew K.; Fung, George S. K.; Tsui, Benjamin M. W.
2015-04-01
Iterative reconstruction (IR) methods for x-ray CT are a promising approach to improving image quality or reducing radiation dose to patients. The goal of this work was to use task-based image quality measures and the channelized Hotelling observer (CHO) to evaluate both analytic and IR methods for clinical x-ray CT applications. We performed realistic computer simulations at five radiation dose levels, from a clinical reference low dose D0 down to 25% D0. A lesion of fixed size and contrast was inserted at different locations in the liver of the XCAT phantom to simulate a weak signal. The simulated data were reconstructed on a commercial CT scanner (SOMATOM Definition Flash; Siemens, Forchheim, Germany) using the vendor-provided analytic (WFBP) and IR (SAFIRE) methods. The reconstructed images were analyzed by CHOs with both rotationally symmetric (RS) and rotationally oriented (RO) channels, and with different numbers of lesion locations (5, 10, and 20), in a signal-known-exactly (SKE), background-known-exactly-but-variable (BKEV) detection task. The area under the receiver operating characteristic curve (AUC) was used as a summary measure to compare the IR and analytic methods; the AUC was also used as the equal-performance criterion to derive the potential dose reduction factor of IR. In general, there was good agreement in the relative AUC values of the different reconstruction methods between CHOs with RS and RO channels, although the CHO with RO channels achieved higher AUCs than the one with RS channels. The improvement of IR over the analytic method depended on the dose level. The reference dose level D0 was based on a clinical low-dose protocol, lower than the standard dose owing to the use of IR methods. At 75% D0, the performance improvement was statistically significant (p < 0.05). The potential dose reduction factor also depended on the detection task. For the SKE/BKEV task involving 10 lesion locations, a dose reduction of at least 25% from D0 was achieved.
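The CHO pipeline summarized above (channelize each image, build a Hotelling template from the channel-output statistics, score images, and summarize detectability as an AUC) can be sketched in a few lines. This is a toy illustration, not the study's actual observer: the 16x16 patches, the Gaussian "lesion", and the radial-band channels standing in for rotationally symmetric (RS) channels are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: 16x16 image patches with a faint centered Gaussian "lesion".
n, npix = 16, 16 * 16
x, y = np.meshgrid(np.arange(n), np.arange(n))
signal = 0.5 * np.exp(-((x - 8) ** 2 + (y - 8) ** 2) / 8.0).ravel()

# Assumed rotationally symmetric channels: disjoint radial-band masks.
r = np.sqrt((x - 8) ** 2 + (y - 8) ** 2).ravel()
edges = [0, 2, 4, 6, 8]
U = np.stack([(r >= lo) & (r < hi)
              for lo, hi in zip(edges[:-1], edges[1:])]).astype(float)

def cho_auc(n_trials=2000):
    bg = rng.standard_normal((n_trials, npix))       # signal-absent images
    sp = bg + signal                                 # signal-present images
    v0, v1 = bg @ U.T, sp @ U.T                      # channel outputs
    S = 0.5 * (np.cov(v0.T) + np.cov(v1.T))          # pooled channel covariance
    w = np.linalg.solve(S, v1.mean(0) - v0.mean(0))  # Hotelling template
    t0, t1 = v0 @ w, v1 @ w                          # observer test statistics
    # AUC via the Mann-Whitney statistic: fraction of correctly ordered pairs.
    return (t1[:, None] > t0[None, :]).mean()

auc = cho_auc()
print(f"AUC = {auc:.2f}")
```

In the study, the same AUC summary computed per reconstruction method and dose level is what allows the equal-performance comparison: the dose at which IR matches the analytic method's AUC yields the dose reduction factor.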
The pedicled latissimus dorsi flap in head and neck reconstruction: an old method revisited.
Wilkman, Tommy; Suominen, Sinikka; Back, Leif; Vuola, Jyrki; Lassus, Patrik
2014-03-01
In head and neck cancer patients with significant comorbidities, the reconstructive options are limited, and there is a need for a safe alternative to microvascular flaps that does not compromise flap size. During the study period, 331 head and neck cancer patients were reconstructed with microvascular tissue flaps. Ten patients requiring large resections were considered to be at high risk from long surgery and to be poor candidates for free tissue transfer, and thus were reconstructed with a subpectorally tunneled pedicled latissimus dorsi (SP-LD) flap. The flap was raised simultaneously with the tumor resection and tunneled to the head and neck region. The flap was used for reconstruction of oral, mandibular, pharyngeal, or neck defects. Median follow-up was 3.6